Name:             capi-kubeadm-bootstrap-controller-manager-6558cd8d7f-b6ckh
Namespace:        capi-kubeadm-bootstrap-system
Priority:         0
Service Account:  capi-kubeadm-bootstrap-manager
Node:             instance/199.204.45.30
Start Time:       Sun, 12 Apr 2026 21:31:58 +0000
Labels:           cluster.x-k8s.io/provider=bootstrap-kubeadm
                  control-plane=controller-manager
                  pod-template-hash=6558cd8d7f
Annotations:      <none>
Status:           Running
SeccompProfile:   RuntimeDefault
IP:               10.0.0.250
IPs:
  IP:  10.0.0.250
Controlled By:  ReplicaSet/capi-kubeadm-bootstrap-controller-manager-6558cd8d7f
Containers:
  manager:
    Container ID:  containerd://82ac374603c0d45db9b9617fec0f4711b4857bfae5fd0ee6c298e3b53218e7aa
    Image:         harbor.atmosphere.dev/registry.k8s.io/cluster-api/kubeadm-bootstrap-controller:v1.10.5
    Image ID:      harbor.atmosphere.dev/registry.k8s.io/cluster-api/kubeadm-bootstrap-controller@sha256:e0f657538b3dc93a2eb0462561d0c895c78d9a515f048871bc90c927c0d4ce64
    Ports:         9443/TCP, 9440/TCP, 8443/TCP
    Host Ports:    0/TCP, 0/TCP, 0/TCP
    Command:
      /manager
    Args:
      --leader-elect
      --diagnostics-address=:8443
      --insecure-diagnostics=false
      --feature-gates=MachinePool=true,KubeadmBootstrapFormatIgnition=false,PriorityQueue=false
      --bootstrap-token-ttl=15m
    State:          Running
      Started:      Sun, 12 Apr 2026 21:37:37 +0000
    Last State:     Terminated
      Reason:       Error
      Message:      o/client-go@v0.32.3/tools/cache/reflector.go:251
I0412 21:32:26.690431       1 controller.go:239] "Starting Controller" controller="crdmigrator" controllerGroup="apiextensions.k8s.io" controllerKind="CustomResourceDefinition"
I0412 21:32:26.690480       1 controller.go:248] "Starting workers" controller="crdmigrator" controllerGroup="apiextensions.k8s.io" controllerKind="CustomResourceDefinition" worker count=1
I0412 21:32:26.692564       1 controller.go:239] "Starting Controller" controller="clustercache" controllerGroup="cluster.x-k8s.io" controllerKind="Cluster"
I0412 21:32:26.692591       1 controller.go:248] "Starting workers" controller="clustercache" controllerGroup="cluster.x-k8s.io" controllerKind="Cluster" worker count=100
I0412 21:32:26.692609       1 controller.go:239] "Starting Controller" controller="kubeadmconfig" controllerGroup="bootstrap.cluster.x-k8s.io" controllerKind="KubeadmConfig"
I0412 21:32:26.692625       1 controller.go:248] "Starting workers" controller="kubeadmconfig" controllerGroup="bootstrap.cluster.x-k8s.io" controllerKind="KubeadmConfig" worker count=10
E0412 21:37:07.994887       1 leaderelection.go:429] Failed to update lock optimistically: the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io kubeadm-bootstrap-manager-leader-election-capi), falling back to slow path
E0412 21:37:12.992100       1 leaderelection.go:436] error retrieving resource lock capi-kubeadm-bootstrap-system/kubeadm-bootstrap-manager-leader-election-capi: Get "https://10.96.0.1:443/apis/coordination.k8s.io/v1/namespaces/capi-kubeadm-bootstrap-system/leases/kubeadm-bootstrap-manager-leader-election-capi?timeout=5s": context deadline exceeded
I0412 21:37:12.992145       1 leaderelection.go:297] failed to renew lease capi-kubeadm-bootstrap-system/kubeadm-bootstrap-manager-leader-election-capi: context deadline exceeded
E0412 21:37:12.992195       1 main.go:306] "problem running manager" err="leader election lost" logger="setup"
      Exit Code:    1
      Started:      Sun, 12 Apr 2026 21:31:59 +0000
      Finished:     Sun, 12 Apr 2026 21:37:13 +0000
    Ready:          True
    Restart Count:  1
    Liveness:       http-get http://:healthz/healthz delay=0s timeout=1s period=10s #success=1 #failure=3
    Readiness:      http-get http://:healthz/readyz delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:
      POD_NAMESPACE:  capi-kubeadm-bootstrap-system (v1:metadata.namespace)
      POD_NAME:       capi-kubeadm-bootstrap-controller-manager-6558cd8d7f-b6ckh (v1:metadata.name)
      POD_UID:        (v1:metadata.uid)
    Mounts:
      /tmp/k8s-webhook-server/serving-certs from cert (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-x5qn4 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  cert:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  capi-kubeadm-bootstrap-webhook-service-cert
    Optional:    false
  kube-api-access-x5qn4:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              openstack-control-plane=enabled
Tolerations:                 node-role.kubernetes.io/control-plane:NoSchedule
                             node-role.kubernetes.io/master:NoSchedule
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  32m                default-scheduler  Successfully assigned capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager-6558cd8d7f-b6ckh to instance
  Warning  Unhealthy  27m (x2 over 27m)  kubelet            Readiness probe failed: Get "http://10.0.0.250:9440/readyz": dial tcp 10.0.0.250:9440: connect: connection refused
  Warning  Unhealthy  27m (x2 over 27m)  kubelet            Liveness probe failed: Get "http://10.0.0.250:9440/healthz": dial tcp 10.0.0.250:9440: connect: connection refused
  Normal   Pulled     27m (x2 over 32m)  kubelet            Container image "harbor.atmosphere.dev/registry.k8s.io/cluster-api/kubeadm-bootstrap-controller:v1.10.5" already present on machine
  Normal   Created    27m (x2 over 32m)  kubelet            Created container manager
  Normal   Started    27m (x2 over 32m)  kubelet            Started container manager
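The describe output above records one restart of the manager container caused by lost leader election: the API server stopped answering lease requests within the 5s client timeout, the controller failed to renew `kubeadm-bootstrap-manager-leader-election-capi`, and it exited with code 1. A few follow-up checks for this situation (a sketch, not a fixed runbook; pod, namespace, and lease names are taken from the output above):

```shell
# Logs from the previous (terminated) container instance,
# which contain the full "leader election lost" sequence.
kubectl -n capi-kubeadm-bootstrap-system logs \
  capi-kubeadm-bootstrap-controller-manager-6558cd8d7f-b6ckh --previous

# Current holder and last renew time of the contested lease.
kubectl -n capi-kubeadm-bootstrap-system get lease \
  kubeadm-bootstrap-manager-leader-election-capi -o yaml

# Was the API server itself slow or unhealthy at the time?
# The verbose readiness endpoint lists each check (including etcd).
kubectl get --raw '/readyz?verbose'
```

Since the lease errors report `context deadline exceeded` against the API server (`10.96.0.1:443`) rather than a fault inside the controller, a restart like this usually points at control-plane or etcd pressure rather than a bug in the bootstrap provider.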