Name:             capi-controller-manager-bc4cf8c95-nqrmx
Namespace:        capi-system
Priority:         0
Service Account:  capi-manager
Node:             instance/199.204.45.210
Start Time:       Thu, 23 Apr 2026 16:45:05 +0000
Labels:           cluster.x-k8s.io/provider=cluster-api
                  control-plane=controller-manager
                  pod-template-hash=bc4cf8c95
Annotations:      <none>
Status:           Running
SeccompProfile:   RuntimeDefault
IP:               10.0.0.13
IPs:
  IP:  10.0.0.13
Controlled By:  ReplicaSet/capi-controller-manager-bc4cf8c95
Containers:
  manager:
    Container ID:  containerd://9a5f1023bdaf4108e84bb6373f914e91d4114a0b776680f5c5e003a4db61f338
    Image:         harbor.atmosphere.dev/registry.k8s.io/cluster-api/cluster-api-controller:v1.10.5
    Image ID:      harbor.atmosphere.dev/registry.k8s.io/cluster-api/cluster-api-controller@sha256:d93407d031296336ccbabc8494005672dc048c4ebc616ccfc18f813d49bd87fc
    Ports:         9443/TCP, 9440/TCP, 8443/TCP
    Host Ports:    0/TCP, 0/TCP, 0/TCP
    Command:
      /manager
    Args:
      --leader-elect
      --diagnostics-address=:8443
      --insecure-diagnostics=false
      --feature-gates=MachinePool=true,ClusterResourceSet=true,ClusterTopology=true,RuntimeSDK=false,MachineSetPreflightChecks=true,MachineWaitForVolumeDetachConsiderVolumeAttachments=true,PriorityQueue=false
    State:          Running
      Started:      Thu, 23 Apr 2026 16:49:42 +0000
    Last State:     Terminated
      Reason:       Error
      Message:      ler.go:248] "Starting workers" controller="machinedeployment" controllerGroup="cluster.x-k8s.io" controllerKind="MachineDeployment" worker count=10
I0423 16:45:24.229830 1 controller.go:239] "Starting Controller" controller="machinepool" controllerGroup="cluster.x-k8s.io" controllerKind="MachinePool"
I0423 16:45:24.229970 1 controller.go:248] "Starting workers" controller="machinepool" controllerGroup="cluster.x-k8s.io" controllerKind="MachinePool" worker count=10
E0423 16:47:25.597287 1 leaderelection.go:429] Failed to update lock optimistically: Put "https://10.96.0.1:443/apis/coordination.k8s.io/v1/namespaces/capi-system/leases/controller-leader-election-capi?timeout=5s": context deadline exceeded, falling back to slow path
E0423 16:48:39.675843 1 leaderelection.go:429] Failed to update lock optimistically: Put "https://10.96.0.1:443/apis/coordination.k8s.io/v1/namespaces/capi-system/leases/controller-leader-election-capi?timeout=5s": context deadline exceeded, falling back to slow path
E0423 16:48:44.676711 1 leaderelection.go:436] error retrieving resource lock capi-system/controller-leader-election-capi: Get "https://10.96.0.1:443/apis/coordination.k8s.io/v1/namespaces/capi-system/leases/controller-leader-election-capi?timeout=5s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
E0423 16:48:44.676941 1 leaderelection.go:429] Failed to update lock optimistically: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, falling back to slow path
E0423 16:48:44.677054 1 leaderelection.go:436] error retrieving resource lock capi-system/controller-leader-election-capi: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
I0423 16:48:44.677102 1 leaderelection.go:297] failed to renew lease capi-system/controller-leader-election-capi: context deadline exceeded
E0423 16:48:44.677174 1 main.go:433] "Problem running manager" err="leader election lost" logger="setup"
      Exit Code:    1
      Started:      Thu, 23 Apr 2026 16:45:06 +0000
      Finished:     Thu, 23 Apr 2026 16:48:44 +0000
    Ready:          True
    Restart Count:  1
    Liveness:       http-get http://:healthz/healthz delay=0s timeout=1s period=10s #success=1 #failure=3
    Readiness:      http-get http://:healthz/readyz delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:
      POD_NAMESPACE:  capi-system (v1:metadata.namespace)
      POD_NAME:       capi-controller-manager-bc4cf8c95-nqrmx (v1:metadata.name)
      POD_UID:         (v1:metadata.uid)
    Mounts:
      /tmp/k8s-webhook-server/serving-certs from cert (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qx7q9 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  cert:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  capi-webhook-service-cert
    Optional:    false
  kube-api-access-qx7q9:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              openstack-control-plane=enabled
Tolerations:                 node-role.kubernetes.io/control-plane:NoSchedule
                             node-role.kubernetes.io/master:NoSchedule
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  36m                default-scheduler  Successfully assigned capi-system/capi-controller-manager-bc4cf8c95-nqrmx to instance
  Warning  Unhealthy  32m (x3 over 33m)  kubelet            Liveness probe failed: Get "http://10.0.0.13:9440/healthz": dial tcp 10.0.0.13:9440: connect: connection refused
  Normal   Killing    32m                kubelet            Container manager failed liveness probe, will be restarted
  Warning  Unhealthy  32m (x6 over 33m)  kubelet            Readiness probe failed: Get "http://10.0.0.13:9440/readyz": dial tcp 10.0.0.13:9440: connect: connection refused
  Normal   Pulled     32m (x2 over 36m)  kubelet            Container image "harbor.atmosphere.dev/registry.k8s.io/cluster-api/cluster-api-controller:v1.10.5" already present on machine
  Normal   Created    32m (x2 over 36m)  kubelet            Created container manager
  Normal   Started    32m (x2 over 36m)  kubelet            Started container manager
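
For reference, a minimal sketch of the standard kubectl commands that would gather this output and the surrounding evidence (the truncated termination Message above, the previous container's full logs, and the contested leader-election lease). The exact commands used are not shown in the source; pod, container, and lease names are taken from the describe output above.

# Reproduce the describe output above
kubectl -n capi-system describe pod capi-controller-manager-bc4cf8c95-nqrmx

# Full logs of the terminated (previous) manager container, beyond the truncated Message field
kubectl -n capi-system logs capi-controller-manager-bc4cf8c95-nqrmx -c manager --previous

# Inspect the leader-election lease the manager failed to renew before exiting
kubectl -n capi-system get lease controller-leader-election-capi -o yaml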