Name:                 capi-kubeadm-bootstrap-controller-manager-6558cd8d7f-l2kgx
Namespace:            capi-kubeadm-bootstrap-system
Priority:             0
Service Account:      capi-kubeadm-bootstrap-manager
Node:                 instance/199.204.45.41
Start Time:           Thu, 26 Feb 2026 22:44:47 +0000
Labels:               cluster.x-k8s.io/provider=bootstrap-kubeadm
                      control-plane=controller-manager
                      pod-template-hash=6558cd8d7f
Annotations:          <none>
Status:               Running
SeccompProfile:       RuntimeDefault
IP:                   10.0.0.96
IPs:
  IP:  10.0.0.96
Controlled By:  ReplicaSet/capi-kubeadm-bootstrap-controller-manager-6558cd8d7f
Containers:
  manager:
    Container ID:  containerd://d87beba9bc8b13f1f9629e84c7e5058d28056221a8e2b5d2112ff10cbd97acb6
    Image:         harbor.atmosphere.dev/registry.k8s.io/cluster-api/kubeadm-bootstrap-controller:v1.10.5
    Image ID:      harbor.atmosphere.dev/registry.k8s.io/cluster-api/kubeadm-bootstrap-controller@sha256:e0f657538b3dc93a2eb0462561d0c895c78d9a515f048871bc90c927c0d4ce64
    Ports:         9443/TCP, 9440/TCP, 8443/TCP
    Host Ports:    0/TCP, 0/TCP, 0/TCP
    Command:
      /manager
    Args:
      --leader-elect
      --diagnostics-address=:8443
      --insecure-diagnostics=false
      --feature-gates=MachinePool=true,KubeadmBootstrapFormatIgnition=false,PriorityQueue=false
      --bootstrap-token-ttl=15m
    State:          Running
      Started:      Thu, 26 Feb 2026 22:50:36 +0000
    Last State:     Terminated
      Reason:       Error
      Message:      I0226 22:50:05.393158       1 internal.go:538] "Stopping and waiting for non leader election runnables"
                    I0226 22:50:05.393188       1 internal.go:542] "Stopping and waiting for leader election runnables"
                    I0226 22:50:05.393204       1 internal.go:550] "Stopping and waiting for caches"
                    I0226 22:50:05.393218       1 internal.go:554] "Stopping and waiting for webhooks"
                    I0226 22:50:05.393227       1 internal.go:557] "Stopping and waiting for HTTP servers"
                    I0226 22:50:05.393239       1 internal.go:561] "Wait completed, proceeding to shutdown the manager"
                    I0226 22:50:05.393251       1 controller.go:268] "Shutdown signal received, waiting for all workers to finish" controller="kubeadmconfig" controllerGroup="bootstrap.cluster.x-k8s.io" controllerKind="KubeadmConfig"
                    I0226 22:50:05.393239       1 controller.go:268] "Shutdown signal received, waiting for all workers to finish" controller="crdmigrator" controllerGroup="apiextensions.k8s.io" controllerKind="CustomResourceDefinition"
                    I0226 22:50:05.393327       1 server.go:249] "Shutting down webhook server with timeout of 1 minute" logger="controller-runtime.webhook"
                    I0226 22:50:05.393330       1 controller.go:268] "Shutdown signal received, waiting for all workers to finish" controller="clustercache" controllerGroup="cluster.x-k8s.io" controllerKind="Cluster"
                    I0226 22:50:05.393479       1 server.go:68] "shutting down server" name="health probe" addr="[::]:9440"
                    I0226 22:50:05.393506       1 controller.go:270] "All workers finished" controller="kubeadmconfig" controllerGroup="bootstrap.cluster.x-k8s.io" controllerKind="KubeadmConfig"
                    I0226 22:50:05.393560       1 controller.go:270] "All workers finished" controller="clustercache" controllerGroup="cluster.x-k8s.io" controllerKind="Cluster"
                    I0226 22:50:05.393472       1 controller.go:270] "All workers finished" controller="crdmigrator" controllerGroup="apiextensions.k8s.io" controllerKind="CustomResourceDefinition"
                    E0226 22:50:05.393249       1 main.go:306] "problem running manager" err="leader election lost" logger="setup"
      Exit Code:    1
      Started:      Thu, 26 Feb 2026 22:44:52 +0000
      Finished:     Thu, 26 Feb 2026 22:50:05 +0000
    Ready:          True
    Restart Count:  1
    Liveness:       http-get http://:healthz/healthz delay=0s timeout=1s period=10s #success=1 #failure=3
    Readiness:      http-get http://:healthz/readyz delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:
      POD_NAMESPACE:  capi-kubeadm-bootstrap-system (v1:metadata.namespace)
      POD_NAME:       capi-kubeadm-bootstrap-controller-manager-6558cd8d7f-l2kgx (v1:metadata.name)
      POD_UID:         (v1:metadata.uid)
    Mounts:
      /tmp/k8s-webhook-server/serving-certs from cert (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-j9kqp (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  cert:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  capi-kubeadm-bootstrap-webhook-service-cert
    Optional:    false
  kube-api-access-j9kqp:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              openstack-control-plane=enabled
Tolerations:                 node-role.kubernetes.io/control-plane:NoSchedule
                             node-role.kubernetes.io/master:NoSchedule
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  33m                default-scheduler  Successfully assigned capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager-6558cd8d7f-l2kgx to instance
  Normal   Pulling    33m                kubelet            Pulling image "harbor.atmosphere.dev/registry.k8s.io/cluster-api/kubeadm-bootstrap-controller:v1.10.5"
  Normal   Pulled     33m                kubelet            Successfully pulled image "harbor.atmosphere.dev/registry.k8s.io/cluster-api/kubeadm-bootstrap-controller:v1.10.5" in 171ms (3.655s including waiting)
  Warning  Unhealthy  28m (x4 over 28m)  kubelet            Readiness probe failed: Get "http://10.0.0.96:9440/readyz": dial tcp 10.0.0.96:9440: connect: connection refused
  Warning  Unhealthy  28m (x3 over 28m)  kubelet            Liveness probe failed: Get "http://10.0.0.96:9440/healthz": dial tcp 10.0.0.96:9440: connect: connection refused
  Normal   Killing    28m                kubelet            Container manager failed liveness probe, will be restarted
  Normal   Pulled     28m                kubelet            Container image "harbor.atmosphere.dev/registry.k8s.io/cluster-api/kubeadm-bootstrap-controller:v1.10.5" already present on machine
  Normal   Created    27m (x2 over 33m)  kubelet            Created container manager
  Normal   Started    27m (x2 over 33m)  kubelet            Started container manager
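The previous container instance exited with "leader election lost" and was failing its liveness/readiness probes on port 9440 shortly before the kubelet restarted it. A minimal sketch of follow-up checks, assuming kubectl access to this management cluster (the exact Lease name is not shown above, so the commands only list the namespace's leases rather than naming one):

  # Full log of the previous (terminated) manager container, to see what happened before the shutdown sequence
  kubectl logs -n capi-kubeadm-bootstrap-system \
      capi-kubeadm-bootstrap-controller-manager-6558cd8d7f-l2kgx \
      -c manager --previous

  # Leader-election Leases in the namespace, including current holder
  kubectl get lease -n capi-kubeadm-bootstrap-system -o wide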