Name:             capi-kubeadm-bootstrap-controller-manager-6558cd8d7f-7psfx
Namespace:        capi-kubeadm-bootstrap-system
Priority:         0
Service Account:  capi-kubeadm-bootstrap-manager
Node:             instance/199.204.45.246
Start Time:       Fri, 17 Apr 2026 01:10:47 +0000
Labels:           cluster.x-k8s.io/provider=bootstrap-kubeadm
                  control-plane=controller-manager
                  pod-template-hash=6558cd8d7f
Annotations:      <none>
Status:           Running
SeccompProfile:   RuntimeDefault
IP:               10.0.0.97
IPs:
  IP:           10.0.0.97
Controlled By:  ReplicaSet/capi-kubeadm-bootstrap-controller-manager-6558cd8d7f
Containers:
  manager:
    Container ID:  containerd://773e74d0f2ce9ecca4f9583889530337debc1a94b1c6b84800268283713e3ea0
    Image:         harbor.atmosphere.dev/registry.k8s.io/cluster-api/kubeadm-bootstrap-controller:v1.10.5
    Image ID:      harbor.atmosphere.dev/registry.k8s.io/cluster-api/kubeadm-bootstrap-controller@sha256:e0f657538b3dc93a2eb0462561d0c895c78d9a515f048871bc90c927c0d4ce64
    Ports:         9443/TCP, 9440/TCP, 8443/TCP
    Host Ports:    0/TCP, 0/TCP, 0/TCP
    Command:
      /manager
    Args:
      --leader-elect
      --diagnostics-address=:8443
      --insecure-diagnostics=false
      --feature-gates=MachinePool=true,KubeadmBootstrapFormatIgnition=false,PriorityQueue=false
      --bootstrap-token-ttl=15m
    State:          Running
      Started:      Fri, 17 Apr 2026 01:18:06 +0000
    Last State:     Terminated
      Reason:       Error
      Message:      unt=1
I0417 01:11:30.125445       1 controller.go:239] "Starting Controller" controller="kubeadmconfig" controllerGroup="bootstrap.cluster.x-k8s.io" controllerKind="KubeadmConfig"
I0417 01:11:30.125464       1 controller.go:248] "Starting workers" controller="kubeadmconfig" controllerGroup="bootstrap.cluster.x-k8s.io" controllerKind="KubeadmConfig" worker count=10
E0417 01:17:51.903803       1 leaderelection.go:429] Failed to update lock optimistically: Put "https://10.96.0.1:443/apis/coordination.k8s.io/v1/namespaces/capi-kubeadm-bootstrap-system/leases/kubeadm-bootstrap-manager-leader-election-capi?timeout=5s": context deadline exceeded, falling back to slow path
E0417 01:17:56.903169       1 leaderelection.go:436] error retrieving resource lock capi-kubeadm-bootstrap-system/kubeadm-bootstrap-manager-leader-election-capi: Get "https://10.96.0.1:443/apis/coordination.k8s.io/v1/namespaces/capi-kubeadm-bootstrap-system/leases/kubeadm-bootstrap-manager-leader-election-capi?timeout=5s": context deadline exceeded
I0417 01:17:56.903237       1 leaderelection.go:297] failed to renew lease capi-kubeadm-bootstrap-system/kubeadm-bootstrap-manager-leader-election-capi: context deadline exceeded
I0417 01:17:56.903672       1 recorder.go:104] "capi-kubeadm-bootstrap-controller-manager-6558cd8d7f-7psfx_a07af195-8c4a-4633-92b0-d1404531654c stopped leading" logger="events" type="Normal" object={"kind":"Lease","namespace":"capi-kubeadm-bootstrap-system","name":"kubeadm-bootstrap-manager-leader-election-capi","uid":"0f8ca0e0-ef10-4f02-8246-be7f60a2ef77","apiVersion":"coordination.k8s.io/v1","resourceVersion":"18204"} reason="LeaderElection"
I0417 01:17:56.903900       1 internal.go:538] "Stopping and waiting for non leader election runnables"
I0417 01:17:56.903955       1 internal.go:542] "Stopping and waiting for leader election runnables"
E0417 01:17:56.903962       1 main.go:306] "problem running manager" err="leader election lost" logger="setup"
I0417 01:17:56.903982       1 internal.go:550] "Stopping and waiting for caches"
      Exit Code:    1
      Started:      Fri, 17 Apr 2026 01:11:07 +0000
      Finished:     Fri, 17 Apr 2026 01:18:02 +0000
    Ready:          True
    Restart Count:  1
    Liveness:       http-get http://:healthz/healthz delay=0s timeout=1s period=10s #success=1 #failure=3
    Readiness:      http-get http://:healthz/readyz delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:
      POD_NAMESPACE:  capi-kubeadm-bootstrap-system (v1:metadata.namespace)
      POD_NAME:       capi-kubeadm-bootstrap-controller-manager-6558cd8d7f-7psfx (v1:metadata.name)
      POD_UID:         (v1:metadata.uid)
    Mounts:
      /tmp/k8s-webhook-server/serving-certs from cert (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-b6hck (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  cert:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  capi-kubeadm-bootstrap-webhook-service-cert
    Optional:    false
  kube-api-access-b6hck:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              openstack-control-plane=enabled
Tolerations:                 node-role.kubernetes.io/control-plane:NoSchedule
                             node-role.kubernetes.io/master:NoSchedule
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  26m                default-scheduler  Successfully assigned capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager-6558cd8d7f-7psfx to instance
  Normal   Pulling    26m                kubelet            Pulling image "harbor.atmosphere.dev/registry.k8s.io/cluster-api/kubeadm-bootstrap-controller:v1.10.5"
  Normal   Pulled     25m                kubelet            Successfully pulled image "harbor.atmosphere.dev/registry.k8s.io/cluster-api/kubeadm-bootstrap-controller:v1.10.5" in 173ms (18.421s including waiting)
  Warning  Unhealthy  18m                kubelet            Readiness probe failed: Get "http://10.0.0.97:9440/readyz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
  Warning  Unhealthy  18m                kubelet            Liveness probe failed: Get "http://10.0.0.97:9440/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
  Normal   Pulled     18m                kubelet            Container image "harbor.atmosphere.dev/registry.k8s.io/cluster-api/kubeadm-bootstrap-controller:v1.10.5" already present on machine
  Normal   Created    18m (x2 over 25m)  kubelet            Created container manager
  Normal   Started    18m (x2 over 25m)  kubelet            Started container manager