Name:             capi-controller-manager-bc4cf8c95-9ftbs
Namespace:        capi-system
Priority:         0
Service Account:  capi-manager
Node:             instance/199.204.45.30
Start Time:       Sun, 12 Apr 2026 21:32:29 +0000
Labels:           cluster.x-k8s.io/provider=cluster-api
                  control-plane=controller-manager
                  pod-template-hash=bc4cf8c95
Annotations:
Status:           Running
SeccompProfile:   RuntimeDefault
IP:               10.0.0.3
IPs:
  IP:  10.0.0.3
Controlled By:  ReplicaSet/capi-controller-manager-bc4cf8c95
Containers:
  manager:
    Container ID:  containerd://4f39d94179618033f706b405892c3a6c584fafe89a042173b69ac65ce3533082
    Image:         harbor.atmosphere.dev/registry.k8s.io/cluster-api/cluster-api-controller:v1.10.5
    Image ID:      harbor.atmosphere.dev/registry.k8s.io/cluster-api/cluster-api-controller@sha256:d93407d031296336ccbabc8494005672dc048c4ebc616ccfc18f813d49bd87fc
    Ports:         9443/TCP, 9440/TCP, 8443/TCP
    Host Ports:    0/TCP, 0/TCP, 0/TCP
    Command:
      /manager
    Args:
      --leader-elect
      --diagnostics-address=:8443
      --insecure-diagnostics=false
      --feature-gates=MachinePool=true,ClusterResourceSet=true,ClusterTopology=true,RuntimeSDK=false,MachineSetPreflightChecks=true,MachineWaitForVolumeDetachConsiderVolumeAttachments=true,PriorityQueue=false
    State:          Running
      Started:      Sun, 12 Apr 2026 21:37:35 +0000
    Last State:     Terminated
      Reason:       Error
      Message:      cluster.x-k8s.io" controllerKind="ClusterClass"
I0412 21:32:58.750377       1 controller.go:248] "Starting workers" controller="clusterclass" controllerGroup="cluster.x-k8s.io" controllerKind="ClusterClass" worker count=10
I0412 21:32:58.750413       1 controller.go:239] "Starting Controller" controller="machinedeployment" controllerGroup="cluster.x-k8s.io" controllerKind="MachineDeployment"
I0412 21:32:58.750429       1 controller.go:248] "Starting workers" controller="machinedeployment" controllerGroup="cluster.x-k8s.io" controllerKind="MachineDeployment" worker count=10
I0412 21:32:58.750440       1 controller.go:239] "Starting Controller" controller="topology/cluster" controllerGroup="cluster.x-k8s.io" controllerKind="Cluster"
I0412 21:32:58.750453       1 controller.go:248] "Starting workers" controller="topology/cluster" controllerGroup="cluster.x-k8s.io" controllerKind="Cluster" worker count=10
I0412 21:32:58.750430       1 controller.go:239] "Starting Controller" controller="machine" controllerGroup="cluster.x-k8s.io" controllerKind="Machine"
I0412 21:32:58.750485       1 controller.go:248] "Starting workers" controller="machine" controllerGroup="cluster.x-k8s.io" controllerKind="Machine" worker count=10
E0412 21:37:07.992744       1 leaderelection.go:429] Failed to update lock optimistically: Put "https://10.96.0.1:443/apis/coordination.k8s.io/v1/namespaces/capi-system/leases/controller-leader-election-capi?timeout=5s": context deadline exceeded, falling back to slow path
E0412 21:37:12.992469       1 leaderelection.go:436] error retrieving resource lock capi-system/controller-leader-election-capi: Get "https://10.96.0.1:443/apis/coordination.k8s.io/v1/namespaces/capi-system/leases/controller-leader-election-capi?timeout=5s": context deadline exceeded
I0412 21:37:12.992542       1 leaderelection.go:297] failed to renew lease capi-system/controller-leader-election-capi: context deadline exceeded
E0412 21:37:12.992647       1 main.go:433] "Problem running manager" err="leader election lost" logger="setup"
      Exit Code:    1
      Started:      Sun, 12 Apr 2026 21:32:31 +0000
      Finished:     Sun, 12 Apr 2026 21:37:13 +0000
    Ready:          True
    Restart Count:  1
    Liveness:       http-get http://:healthz/healthz delay=0s timeout=1s period=10s #success=1 #failure=3
    Readiness:      http-get http://:healthz/readyz delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:
      POD_NAMESPACE:  capi-system (v1:metadata.namespace)
      POD_NAME:       capi-controller-manager-bc4cf8c95-9ftbs (v1:metadata.name)
      POD_UID:         (v1:metadata.uid)
    Mounts:
      /tmp/k8s-webhook-server/serving-certs from cert (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rxzg6 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  cert:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  capi-webhook-service-cert
    Optional:    false
  kube-api-access-rxzg6:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              openstack-control-plane=enabled
Tolerations:                 node-role.kubernetes.io/control-plane:NoSchedule
                             node-role.kubernetes.io/master:NoSchedule
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  32m                default-scheduler  Successfully assigned capi-system/capi-controller-manager-bc4cf8c95-9ftbs to instance
  Warning  Unhealthy  27m (x2 over 27m)  kubelet            Liveness probe failed: Get "http://10.0.0.3:9440/healthz": dial tcp 10.0.0.3:9440: connect: connection refused
  Warning  Unhealthy  27m (x2 over 27m)  kubelet            Readiness probe failed: Get "http://10.0.0.3:9440/readyz": dial tcp 10.0.0.3:9440: connect: connection refused
  Normal   Pulled     27m (x2 over 32m)  kubelet            Container image "harbor.atmosphere.dev/registry.k8s.io/cluster-api/cluster-api-controller:v1.10.5" already present on machine
  Normal   Created    27m (x2 over 32m)  kubelet            Created container manager
  Normal   Started    27m (x2 over 32m)  kubelet            Started container manager
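The Last State message above shows the failure sequence: lease-renewal requests to the API server (`Put`/`Get` on `leases/controller-leader-election-capi` with a 5s timeout) hit `context deadline exceeded`, the manager logged `leader election lost` and exited with code 1, and the kubelet restarted it (Restart Count: 1). If the API server is only intermittently slow, one common mitigation is to relax the leader-election timings so brief stalls do not cost the lease. A hedged sketch of the extra manager `Args` follows; the flag names are assumed from Cluster API's leader-election options and the durations are illustrative, so verify both against `/manager --help` for the deployed v1.10.5 image:

```yaml
# Sketch only: extra args for the capi-controller-manager Deployment to
# tolerate a slow API server. Flag names and values are assumptions, not
# taken from the output above -- confirm before applying.
args:
  - --leader-elect
  - --leader-elect-lease-duration=60s   # how long a lease is valid without renewal
  - --leader-elect-renew-deadline=40s   # leader must renew within this window or step down
  - --leader-elect-retry-period=5s      # interval between renewal attempts
```

Keep renew-deadline comfortably below lease-duration and retry-period well below renew-deadline, so the leader gets several renewal attempts before it must give up the lease.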