Name:             capo-controller-manager-65565fdb7f-dw9qm
Namespace:        capo-system
Priority:         0
Service Account:  capo-manager
Node:             instance/199.204.45.216
Start Time:       Sun, 26 Apr 2026 06:03:36 +0000
Labels:           cluster.x-k8s.io/provider=infrastructure-openstack
                  control-plane=capo-controller-manager
                  pod-template-hash=65565fdb7f
Annotations:      <none>
Status:           Running
SeccompProfile:   RuntimeDefault
IP:               10.0.0.88
IPs:
  IP:  10.0.0.88
Controlled By:  ReplicaSet/capo-controller-manager-65565fdb7f
Containers:
  manager:
    Container ID:  containerd://c69be3beebdc2f05089939dcc25133731771485daf01b2222d24c99b99bd9300
    Image:         harbor.atmosphere.dev/registry.k8s.io/capi-openstack/capi-openstack-controller:v0.12.7
    Image ID:      harbor.atmosphere.dev/registry.k8s.io/capi-openstack/capi-openstack-controller@sha256:b964c2f374ad4f26a593749d3c737492d5c1821db143b26596c9285b9055a8b4
    Ports:         9443/TCP, 9440/TCP
    Host Ports:    0/TCP, 0/TCP
    Command:
      /manager
    Args:
      --leader-elect
      --v=2
      --diagnostics-address=127.0.0.1:8080
      --insecure-diagnostics=true
    State:          Running
      Started:      Sun, 26 Apr 2026 06:08:47 +0000
    Last State:     Terminated
      Reason:       Error
      Message:      Caches populated for *v1beta1.Machine from pkg/mod/k8s.io/client-go@v0.31.14/tools/cache/reflector.go:243
I0426 06:07:36.278121       1 controller.go:217] "Starting workers" controller="openstackserver" controllerGroup="infrastructure.cluster.x-k8s.io" controllerKind="OpenStackServer" worker count=10
I0426 06:07:36.280420       1 controller.go:217] "Starting workers" controller="openstackfloatingippool" controllerGroup="infrastructure.cluster.x-k8s.io" controllerKind="OpenStackFloatingIPPool" worker count=1
I0426 06:07:36.282139       1 controller.go:217] "Starting workers" controller="openstackcluster" controllerGroup="infrastructure.cluster.x-k8s.io" controllerKind="OpenStackCluster" worker count=10
I0426 06:07:36.282713       1 controller.go:217] "Starting workers" controller="openstackmachine" controllerGroup="infrastructure.cluster.x-k8s.io" controllerKind="OpenStackMachine" worker count=10
E0426 06:08:19.327878       1 leaderelection.go:429] Failed to update lock optimitically: Put "https://10.96.0.1:443/apis/coordination.k8s.io/v1/namespaces/capo-system/leases/controller-leader-election-capo?timeout=5s": net/http: request canceled (Client.Timeout exceeded while awaiting headers), falling back to slow path
E0426 06:08:24.327800       1 leaderelection.go:436] error retrieving resource lock capo-system/controller-leader-election-capo: Get "https://10.96.0.1:443/apis/coordination.k8s.io/v1/namespaces/capo-system/leases/controller-leader-election-capo?timeout=5s": context deadline exceeded
I0426 06:08:24.327874       1 leaderelection.go:297] failed to renew lease capo-system/controller-leader-election-capo: timed out waiting for the condition
E0426 06:08:26.576390       1 leaderelection.go:322] Failed to release lock: Operation cannot be fulfilled on leases.coordination.k8s.io "controller-leader-election-capo": the object has been modified; please apply your changes to the latest version and try again
E0426 06:08:26.576473       1 main.go:281] "problem running manager" err="leader election lost" logger="setup"
      Exit Code:    1
      Started:      Sun, 26 Apr 2026 06:07:35 +0000
      Finished:     Sun, 26 Apr 2026 06:08:46 +0000
    Ready:          True
    Restart Count:  2
    Liveness:       http-get http://:healthz/healthz delay=0s timeout=1s period=10s #success=1 #failure=3
    Readiness:      http-get http://:healthz/readyz delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:
      CLUSTER_API_OPENSTACK_INSTANCE_CREATE_TIMEOUT:  10
    Mounts:
      /tmp/k8s-webhook-server/serving-certs from cert (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-r758d (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  cert:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  capo-webhook-service-cert
    Optional:    false
  kube-api-access-r758d:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              openstack-control-plane=enabled
Tolerations:                 node-role.kubernetes.io/control-plane:NoSchedule
                             node-role.kubernetes.io/master:NoSchedule
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  23m                default-scheduler  Successfully assigned capo-system/capo-controller-manager-65565fdb7f-dw9qm to instance
  Warning  Unhealthy  18m (x2 over 18m)  kubelet            Liveness probe failed: Get "http://10.0.0.88:9440/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
  Warning  Unhealthy  18m (x2 over 18m)  kubelet            Readiness probe failed: Get "http://10.0.0.88:9440/readyz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
  Normal   Pulled     18m (x3 over 23m)  kubelet            Container image "harbor.atmosphere.dev/registry.k8s.io/capi-openstack/capi-openstack-controller:v0.12.7" already present on machine
  Warning  Unhealthy  18m (x3 over 19m)  kubelet            Liveness probe failed: Get "http://10.0.0.88:9440/healthz": dial tcp 10.0.0.88:9440: connect: connection refused
  Warning  Unhealthy  18m (x4 over 19m)  kubelet            Readiness probe failed: Get "http://10.0.0.88:9440/readyz": dial tcp 10.0.0.88:9440: connect: connection refused
  Normal   Killing    18m                kubelet            Container manager failed liveness probe, will be restarted
  Normal   Created    18m (x3 over 23m)  kubelet            Created container manager
  Normal   Started    18m (x3 over 23m)  kubelet            Started container manager
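The Last State log explains the restarts: the manager could not renew its leader-election Lease against the API server at 10.96.0.1:443 within the renew deadline, controller-runtime gave up with "leader election lost", and the process exited with code 1; the probe failures on port 9440 and the kubelet restart follow from that exit. For orientation, here is a minimal Go sketch of how a controller-runtime manager wires these leader-election timings. The Options fields are real controller-runtime API, but the durations shown are the library's defaults, not confirmed values for CAPO v0.12.7, and the Lease name/namespace are simply copied from the errors above.

package main

import (
	"os"
	"time"

	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/log/zap"
)

func main() {
	ctrl.SetLogger(zap.New())

	// controller-runtime's default leader-election timings; illustrative,
	// not necessarily what CAPO ships with.
	leaseDuration := 15 * time.Second // how long a lease is valid before another candidate may claim it
	renewDeadline := 10 * time.Second // the holder must renew within this window or give up leadership
	retryPeriod := 2 * time.Second    // how often renewal is attempted

	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
		LeaderElection:          true,
		LeaderElectionID:        "controller-leader-election-capo", // the Lease named in the errors above
		LeaderElectionNamespace: "capo-system",
		LeaseDuration:           &leaseDuration,
		RenewDeadline:           &renewDeadline,
		RetryPeriod:             &retryPeriod,
	})
	if err != nil {
		ctrl.Log.Error(err, "unable to create manager")
		os.Exit(1)
	}

	// If the Lease cannot be renewed within RenewDeadline (e.g. the API server
	// is slow or unreachable), Start returns an error and the process exits
	// non-zero -- the Exit Code 1 / restart cycle recorded in Last State above.
	if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
		ctrl.Log.Error(err, "problem running manager")
		os.Exit(1)
	}
}

Given that lease renewal (5s request timeout) and the liveness probe (1s timeout) both timed out in the same window, the more likely culprit is API-server/etcd latency or network pressure on the node rather than the CAPO pod itself; the later "connection refused" probe failures just reflect the process having already exited.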