Name:             capi-controller-manager-bc4cf8c95-6tmzn
Namespace:        capi-system
Priority:         0
Service Account:  capi-manager
Node:             instance/199.204.45.233
Start Time:       Fri, 17 Apr 2026 11:37:24 +0000
Labels:           cluster.x-k8s.io/provider=cluster-api
                  control-plane=controller-manager
                  pod-template-hash=bc4cf8c95
Annotations:
Status:           Running
SeccompProfile:   RuntimeDefault
IP:               10.0.0.198
IPs:
  IP:           10.0.0.198
Controlled By:  ReplicaSet/capi-controller-manager-bc4cf8c95
Containers:
  manager:
    Container ID:  containerd://3c3f9738bb28ecc0dc481d94683851f9e9588b44c37d8ab2af6283ce8e024a3b
    Image:         harbor.atmosphere.dev/registry.k8s.io/cluster-api/cluster-api-controller:v1.10.5
    Image ID:      harbor.atmosphere.dev/registry.k8s.io/cluster-api/cluster-api-controller@sha256:d93407d031296336ccbabc8494005672dc048c4ebc616ccfc18f813d49bd87fc
    Ports:         9443/TCP, 9440/TCP, 8443/TCP
    Host Ports:    0/TCP, 0/TCP, 0/TCP
    Command:
      /manager
    Args:
      --leader-elect
      --diagnostics-address=:8443
      --insecure-diagnostics=false
      --feature-gates=MachinePool=true,ClusterResourceSet=true,ClusterTopology=true,RuntimeSDK=false,MachineSetPreflightChecks=true,MachineWaitForVolumeDetachConsiderVolumeAttachments=true,PriorityQueue=false
    State:          Running
      Started:      Fri, 17 Apr 2026 11:44:32 +0000
    Last State:     Terminated
      Reason:       Error
      Message:      terresourceset" controllerGroup="addons.cluster.x-k8s.io" controllerKind="ClusterResourceSet" worker count=10
                    I0417 11:43:18.024277       1 controller.go:239] "Starting Controller" controller="clusterresourcesetbinding" controllerGroup="addons.cluster.x-k8s.io" controllerKind="ClusterResourceSetBinding"
                    I0417 11:43:18.024318       1 controller.go:248] "Starting workers" controller="clusterresourcesetbinding" controllerGroup="addons.cluster.x-k8s.io" controllerKind="ClusterResourceSetBinding" worker count=10
                    E0417 11:43:29.469340       1 leaderelection.go:429] Failed to update lock optimistically: Put "https://10.96.0.1:443/apis/coordination.k8s.io/v1/namespaces/capi-system/leases/controller-leader-election-capi?timeout=5s": context deadline exceeded (Client.Timeout exceeded while awaiting headers), falling back to slow path
                    E0417 11:43:43.445735       1 leaderelection.go:429] Failed to update lock optimistically: Put "https://10.96.0.1:443/apis/coordination.k8s.io/v1/namespaces/capi-system/leases/controller-leader-election-capi?timeout=5s": context deadline exceeded, falling back to slow path
                    E0417 11:43:48.446047       1 leaderelection.go:436] error retrieving resource lock capi-system/controller-leader-election-capi: Get "https://10.96.0.1:443/apis/coordination.k8s.io/v1/namespaces/capi-system/leases/controller-leader-election-capi?timeout=5s": context deadline exceeded
                    I0417 11:43:48.446106       1 leaderelection.go:297] failed to renew lease capi-system/controller-leader-election-capi: context deadline exceeded
                    E0417 11:43:48.446221       1 main.go:433] "Problem running manager" err="leader election lost" logger="setup"
                    I0417 11:43:48.446249       1 recorder.go:104] "capi-controller-manager-bc4cf8c95-6tmzn_802773c0-07f7-4dec-a810-20ed44364edf stopped leading" logger="events" type="Normal" object={"kind":"Lease","namespace":"capi-system","name":"controller-leader-election-capi","uid":"3ee7f632-73a4-40e8-87f0-3e7d5c91cf2e","apiVersion":"coordination.k8s.io/v1","resourceVersion":"22847"} reason="LeaderElection"
      Exit Code:    1
      Started:      Fri, 17 Apr 2026 11:42:56 +0000
      Finished:     Fri, 17 Apr 2026 11:44:28 +0000
    Ready:          True
    Restart Count:  2
    Liveness:       http-get http://:healthz/healthz delay=0s timeout=1s period=10s #success=1 #failure=3
    Readiness:      http-get http://:healthz/readyz delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:
      POD_NAMESPACE:  capi-system (v1:metadata.namespace)
      POD_NAME:       capi-controller-manager-bc4cf8c95-6tmzn (v1:metadata.name)
      POD_UID:         (v1:metadata.uid)
    Mounts:
      /tmp/k8s-webhook-server/serving-certs from cert (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7vwm4 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  cert:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  capi-webhook-service-cert
    Optional:    false
  kube-api-access-7vwm4:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              openstack-control-plane=enabled
Tolerations:                 node-role.kubernetes.io/control-plane:NoSchedule
                             node-role.kubernetes.io/master:NoSchedule
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  18m                  default-scheduler  Successfully assigned capi-system/capi-controller-manager-bc4cf8c95-6tmzn to instance
  Warning  Unhealthy  14m (x3 over 14m)    kubelet            Liveness probe failed: Get "http://10.0.0.198:9440/healthz": dial tcp 10.0.0.198:9440: connect: connection refused
  Normal   Killing    14m                  kubelet            Container manager failed liveness probe, will be restarted
  Normal   Pulled     13m (x2 over 18m)    kubelet            Container image "harbor.atmosphere.dev/registry.k8s.io/cluster-api/cluster-api-controller:v1.10.5" already present on machine
  Warning  Unhealthy  12m (x12 over 14m)   kubelet            Readiness probe failed: Get "http://10.0.0.198:9440/readyz": dial tcp 10.0.0.198:9440: connect: connection refused
  Normal   Created    12m (x2 over 18m)    kubelet            Created container manager
  Normal   Started    12m (x2 over 18m)    kubelet            Started container manager
  Warning  Unhealthy  11m (x2 over 11m)    kubelet            Liveness probe failed: Get "http://10.0.0.198:9440/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
  Warning  Unhealthy  11m (x2 over 11m)    kubelet            Readiness probe failed: Get "http://10.0.0.198:9440/readyz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)