Name:             capi-kubeadm-bootstrap-controller-manager-6558cd8d7f-7xmf2
Namespace:        capi-kubeadm-bootstrap-system
Priority:         0
Service Account:  capi-kubeadm-bootstrap-manager
Node:             instance/162.253.55.36
Start Time:       Mon, 02 Mar 2026 19:45:58 +0000
Labels:           cluster.x-k8s.io/provider=bootstrap-kubeadm
                  control-plane=controller-manager
                  pod-template-hash=6558cd8d7f
Annotations:      <none>
Status:           Running
SeccompProfile:   RuntimeDefault
IP:               10.0.0.150
IPs:
  IP:           10.0.0.150
Controlled By:  ReplicaSet/capi-kubeadm-bootstrap-controller-manager-6558cd8d7f
Containers:
  manager:
    Container ID:  containerd://59a28cb79030a202c7eada6b231cffef9d055f20a6b44adcd96e88e69c75512f
    Image:         harbor.atmosphere.dev/registry.k8s.io/cluster-api/kubeadm-bootstrap-controller:v1.10.5
    Image ID:      harbor.atmosphere.dev/registry.k8s.io/cluster-api/kubeadm-bootstrap-controller@sha256:e0f657538b3dc93a2eb0462561d0c895c78d9a515f048871bc90c927c0d4ce64
    Ports:         9443/TCP, 9440/TCP, 8443/TCP
    Host Ports:    0/TCP, 0/TCP, 0/TCP
    Command:
      /manager
    Args:
      --leader-elect
      --diagnostics-address=:8443
      --insecure-diagnostics=false
      --feature-gates=MachinePool=true,KubeadmBootstrapFormatIgnition=false,PriorityQueue=false
      --bootstrap-token-ttl=15m
    State:          Running
      Started:      Mon, 02 Mar 2026 19:50:56 +0000
    Last State:     Terminated
      Reason:       Error
      Message:      1 reflector.go:376] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251
I0302 19:46:21.917443       1 controller.go:239] "Starting Controller" controller="crdmigrator" controllerGroup="apiextensions.k8s.io" controllerKind="CustomResourceDefinition"
I0302 19:46:21.917484       1 controller.go:248] "Starting workers" controller="crdmigrator" controllerGroup="apiextensions.k8s.io" controllerKind="CustomResourceDefinition" worker count=1
I0302 19:46:21.917496       1 controller.go:239] "Starting Controller" controller="clustercache" controllerGroup="cluster.x-k8s.io" controllerKind="Cluster"
I0302 19:46:21.917518       1 controller.go:248] "Starting workers" controller="clustercache" controllerGroup="cluster.x-k8s.io" controllerKind="Cluster" worker count=100
I0302 19:46:21.918567       1 controller.go:239] "Starting Controller" controller="kubeadmconfig" controllerGroup="bootstrap.cluster.x-k8s.io" controllerKind="KubeadmConfig"
I0302 19:46:21.918593       1 controller.go:248] "Starting workers" controller="kubeadmconfig" controllerGroup="bootstrap.cluster.x-k8s.io" controllerKind="KubeadmConfig" worker count=10
E0302 19:50:14.827790       1 leaderelection.go:429] Failed to update lock optimistically: Put "https://10.96.0.1:443/apis/coordination.k8s.io/v1/namespaces/capi-kubeadm-bootstrap-system/leases/kubeadm-bootstrap-manager-leader-election-capi?timeout=5s": context deadline exceeded, falling back to slow path
E0302 19:50:19.828157       1 leaderelection.go:472] Failed to update lock: Put "https://10.96.0.1:443/apis/coordination.k8s.io/v1/namespaces/capi-kubeadm-bootstrap-system/leases/kubeadm-bootstrap-manager-leader-election-capi?timeout=5s": context deadline exceeded
I0302 19:50:19.828212       1 leaderelection.go:297] failed to renew lease capi-kubeadm-bootstrap-system/kubeadm-bootstrap-manager-leader-election-capi: context deadline exceeded
E0302 19:50:19.828344       1 main.go:306] "problem running manager" err="leader election lost" logger="setup"
      Exit Code:    1
      Started:      Mon, 02 Mar 2026 19:46:01 +0000
      Finished:     Mon, 02 Mar 2026 19:50:19 +0000
    Ready:          True
    Restart Count:  1
    Liveness:       http-get http://:healthz/healthz delay=0s timeout=1s period=10s #success=1 #failure=3
    Readiness:      http-get http://:healthz/readyz delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:
      POD_NAMESPACE:  capi-kubeadm-bootstrap-system (v1:metadata.namespace)
      POD_NAME:       capi-kubeadm-bootstrap-controller-manager-6558cd8d7f-7xmf2 (v1:metadata.name)
      POD_UID:         (v1:metadata.uid)
    Mounts:
      /tmp/k8s-webhook-server/serving-certs from cert (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xdf8t (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  cert:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  capi-kubeadm-bootstrap-webhook-service-cert
    Optional:    false
  kube-api-access-xdf8t:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              openstack-control-plane=enabled
Tolerations:                 node-role.kubernetes.io/control-plane:NoSchedule
                             node-role.kubernetes.io/master:NoSchedule
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  17m                default-scheduler  Successfully assigned capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager-6558cd8d7f-7xmf2 to instance
  Normal   Pulling    17m                kubelet            Pulling image "harbor.atmosphere.dev/registry.k8s.io/cluster-api/kubeadm-bootstrap-controller:v1.10.5"
  Normal   Pulled     17m                kubelet            Successfully pulled image "harbor.atmosphere.dev/registry.k8s.io/cluster-api/kubeadm-bootstrap-controller:v1.10.5" in 182ms (1.891s including waiting)
  Warning  Unhealthy  12m (x3 over 13m)  kubelet            Readiness probe failed: Get "http://10.0.0.150:9440/readyz": dial tcp 10.0.0.150:9440: connect: connection refused
  Warning  Unhealthy  12m (x3 over 13m)  kubelet            Liveness probe failed: Get "http://10.0.0.150:9440/healthz": dial tcp 10.0.0.150:9440: connect: connection refused
  Normal   Killing    12m                kubelet            Container manager failed liveness probe, will be restarted
  Normal   Created    12m (x2 over 17m)  kubelet            Created container manager
  Normal   Started    12m (x2 over 17m)  kubelet            Started container manager
  Normal   Pulled     12m                kubelet            Container image "harbor.atmosphere.dev/registry.k8s.io/cluster-api/kubeadm-bootstrap-controller:v1.10.5" already present on machine
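
The Last State message above records the failure chain: lease renewal PUTs to the API server at https://10.96.0.1:443 hit their 5s deadline, the manager exited with "leader election lost" (exit code 1), and the kubelet's failing probes on port 9440 produced the restart shown in Events. A quick follow-up check, assuming kubectl access to the same management cluster, is to inspect the lease named in the log and the API server's own health endpoint; both commands below use only real kubectl verbs plus the namespace and lease name taken verbatim from the log:

    kubectl -n capi-kubeadm-bootstrap-system get lease kubeadm-bootstrap-manager-leader-election-capi -o yaml
    kubectl get --raw='/readyz?verbose'

If the lease's renewTime is advancing normally now and /readyz reports ok, the timeout was likely a transient API server or etcd slowdown rather than a fault in the controller itself.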