Name:                 capo-controller-manager-6975759b4b-gxg6d
Namespace:            capo-system
Priority:             0
Service Account:      capo-manager
Node:                 instance/162.253.55.195
Start Time:           Thu, 26 Feb 2026 22:42:35 +0000
Labels:               cluster.x-k8s.io/provider=infrastructure-openstack
                      control-plane=capo-controller-manager
                      pod-template-hash=6975759b4b
Annotations:
Status:               Running
SeccompProfile:       RuntimeDefault
IP:                   10.0.0.78
IPs:
  IP:  10.0.0.78
Controlled By:  ReplicaSet/capo-controller-manager-6975759b4b
Containers:
  manager:
    Container ID:  containerd://fd400f5a2e6d4edc74389108cfa7db29657fd56d94b8a3ef8faf683e8e801bf2
    Image:         harbor.atmosphere.dev/registry.k8s.io/capi-openstack/capi-openstack-controller:v0.12.4
    Image ID:      harbor.atmosphere.dev/registry.k8s.io/capi-openstack/capi-openstack-controller@sha256:237da708e483aa8c39766a217d25e45de52816d446764569c77550e6f56f0970
    Ports:         9443/TCP, 9440/TCP
    Host Ports:    0/TCP, 0/TCP
    Command:
      /manager
    Args:
      --leader-elect
      --v=2
      --diagnostics-address=127.0.0.1:8080
      --insecure-diagnostics=true
    State:          Running
      Started:      Thu, 26 Feb 2026 22:46:08 +0000
    Last State:     Terminated
      Reason:       Error
      Message:      or.go:243
I0226 22:42:39.186801       1 reflector.go:368] Caches populated for *v1beta1.OpenStackMachine from pkg/mod/k8s.io/client-go@v0.31.10/tools/cache/reflector.go:243
I0226 22:42:39.280870       1 controller.go:217] "Starting workers" controller="openstackserver" controllerGroup="infrastructure.cluster.x-k8s.io" controllerKind="OpenStackServer" worker count=10
I0226 22:42:39.285891       1 controller.go:217] "Starting workers" controller="openstackmachine" controllerGroup="infrastructure.cluster.x-k8s.io" controllerKind="OpenStackMachine" worker count=10
I0226 22:42:39.286000       1 controller.go:217] "Starting workers" controller="openstackcluster" controllerGroup="infrastructure.cluster.x-k8s.io" controllerKind="OpenStackCluster" worker count=10
I0226 22:42:39.286206       1 controller.go:217] "Starting workers" controller="openstackfloatingippool" controllerGroup="infrastructure.cluster.x-k8s.io" controllerKind="OpenStackFloatingIPPool" worker count=1
E0226 22:45:16.514541       1 leaderelection.go:429] Failed to update lock optimitically: Put "https://10.96.0.1:443/apis/coordination.k8s.io/v1/namespaces/capo-system/leases/controller-leader-election-capo?timeout=5s": context deadline exceeded, falling back to slow path
E0226 22:45:21.514260       1 leaderelection.go:436] error retrieving resource lock capo-system/controller-leader-election-capo: Get "https://10.96.0.1:443/apis/coordination.k8s.io/v1/namespaces/capo-system/leases/controller-leader-election-capo?timeout=5s": context deadline exceeded
I0226 22:45:21.514304       1 leaderelection.go:297] failed to renew lease capo-system/controller-leader-election-capo: timed out waiting for the condition
E0226 22:45:26.515424       1 leaderelection.go:322] Failed to release lock: the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io controller-leader-election-capo)
E0226 22:45:26.515534       1 main.go:281] "problem running manager" err="leader election lost" logger="setup"
      Exit Code:    1
      Started:      Thu, 26 Feb 2026 22:42:36 +0000
      Finished:     Thu, 26 Feb 2026 22:46:07 +0000
    Ready:          True
    Restart Count:  1
    Liveness:       http-get http://:healthz/healthz delay=0s timeout=1s period=10s #success=1 #failure=3
    Readiness:      http-get http://:healthz/readyz delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:
      CLUSTER_API_OPENSTACK_INSTANCE_CREATE_TIMEOUT:  10
    Mounts:
      /tmp/k8s-webhook-server/serving-certs from cert (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8wcsp (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  cert:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  capo-webhook-service-cert
    Optional:    false
  kube-api-access-8wcsp:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              openstack-control-plane=enabled
Tolerations:                 node-role.kubernetes.io/control-plane:NoSchedule
                             node-role.kubernetes.io/master:NoSchedule
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  36m                default-scheduler  Successfully assigned capo-system/capo-controller-manager-6975759b4b-gxg6d to instance
  Warning  Unhealthy  33m (x3 over 33m)  kubelet            Liveness probe failed: Get "http://10.0.0.78:9440/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
  Normal   Killing    33m                kubelet            Container manager failed liveness probe, will be restarted
  Warning  Unhealthy  33m (x5 over 33m)  kubelet            Readiness probe failed: Get "http://10.0.0.78:9440/readyz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
  Normal   Pulled     33m (x2 over 36m)  kubelet            Container image "harbor.atmosphere.dev/registry.k8s.io/capi-openstack/capi-openstack-controller:v0.12.4" already present on machine
  Normal   Created    33m (x2 over 36m)  kubelet            Created container manager
  Normal   Started    33m (x2 over 36m)  kubelet            Started container manager