Name:             capo-controller-manager-6975759b4b-rp8g8
Namespace:        capo-system
Priority:         0
Service Account:  capo-manager
Node:             instance/199.204.45.30
Start Time:       Sun, 12 Apr 2026 21:33:01 +0000
Labels:           cluster.x-k8s.io/provider=infrastructure-openstack
                  control-plane=capo-controller-manager
                  pod-template-hash=6975759b4b
Annotations:      <none>
Status:           Running
SeccompProfile:   RuntimeDefault
IP:               10.0.0.71
IPs:
  IP:           10.0.0.71
Controlled By:  ReplicaSet/capo-controller-manager-6975759b4b
Containers:
  manager:
    Container ID:  containerd://34663d89f3f2e4e36bf37becb8801cf5e2fbe9cec45d6352ec3a864715f3eee4
    Image:         harbor.atmosphere.dev/registry.k8s.io/capi-openstack/capi-openstack-controller:v0.12.4
    Image ID:      harbor.atmosphere.dev/registry.k8s.io/capi-openstack/capi-openstack-controller@sha256:237da708e483aa8c39766a217d25e45de52816d446764569c77550e6f56f0970
    Ports:         9443/TCP, 9440/TCP
    Host Ports:    0/TCP, 0/TCP
    Command:
      /manager
    Args:
      --leader-elect
      --v=2
      --diagnostics-address=127.0.0.1:8080
      --insecure-diagnostics=true
    State:          Running
      Started:      Sun, 12 Apr 2026 21:37:35 +0000
    Last State:     Terminated
      Reason:       Error
      Message:      Caches populated for *v1alpha1.Image from pkg/mod/k8s.io/client-go@v0.31.10/tools/cache/reflector.go:243
I0412 21:33:12.598751       1 controller.go:217] "Starting workers" controller="openstackcluster" controllerGroup="infrastructure.cluster.x-k8s.io" controllerKind="OpenStackCluster" worker count=10
I0412 21:33:12.614225       1 controller.go:217] "Starting workers" controller="openstackfloatingippool" controllerGroup="infrastructure.cluster.x-k8s.io" controllerKind="OpenStackFloatingIPPool" worker count=1
I0412 21:33:12.614232       1 controller.go:217] "Starting workers" controller="openstackmachine" controllerGroup="infrastructure.cluster.x-k8s.io" controllerKind="OpenStackMachine" worker count=10
I0412 21:33:12.614276       1 controller.go:217] "Starting workers" controller="openstackserver" controllerGroup="infrastructure.cluster.x-k8s.io" controllerKind="OpenStackServer" worker count=10
E0412 21:37:07.991967       1 leaderelection.go:429] Failed to update lock optimitically: Put "https://10.96.0.1:443/apis/coordination.k8s.io/v1/namespaces/capo-system/leases/controller-leader-election-capo?timeout=5s": net/http: request canceled (Client.Timeout exceeded while awaiting headers), falling back to slow path
E0412 21:37:12.991462       1 leaderelection.go:436] error retrieving resource lock capo-system/controller-leader-election-capo: Get "https://10.96.0.1:443/apis/coordination.k8s.io/v1/namespaces/capo-system/leases/controller-leader-election-capo?timeout=5s": context deadline exceeded
I0412 21:37:12.991506       1 leaderelection.go:297] failed to renew lease capo-system/controller-leader-election-capo: timed out waiting for the condition
E0412 21:37:14.685285       1 leaderelection.go:322] Failed to release lock: Operation cannot be fulfilled on leases.coordination.k8s.io "controller-leader-election-capo": the object has been modified; please apply your changes to the latest version and try again
E0412 21:37:14.685383       1 main.go:281] "problem running manager" err="leader election lost" logger="setup"
      Exit Code:    1
      Started:      Sun, 12 Apr 2026 21:33:02 +0000
      Finished:     Sun, 12 Apr 2026 21:37:14 +0000
    Ready:          True
    Restart Count:  1
    Liveness:       http-get http://:healthz/healthz delay=0s timeout=1s period=10s #success=1 #failure=3
    Readiness:      http-get http://:healthz/readyz delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:
      CLUSTER_API_OPENSTACK_INSTANCE_CREATE_TIMEOUT:  10
    Mounts:
      /tmp/k8s-webhook-server/serving-certs from cert (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6m88l (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  cert:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  capo-webhook-service-cert
    Optional:    false
  kube-api-access-6m88l:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              openstack-control-plane=enabled
Tolerations:                 node-role.kubernetes.io/control-plane:NoSchedule
                             node-role.kubernetes.io/master:NoSchedule
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  31m                default-scheduler  Successfully assigned capo-system/capo-controller-manager-6975759b4b-rp8g8 to instance
  Warning  Unhealthy  27m (x2 over 27m)  kubelet            Liveness probe failed: Get "http://10.0.0.71:9440/healthz": dial tcp 10.0.0.71:9440: connect: connection refused
  Warning  Unhealthy  27m (x2 over 27m)  kubelet            Readiness probe failed: Get "http://10.0.0.71:9440/readyz": dial tcp 10.0.0.71:9440: connect: connection refused
  Normal   Pulled     27m (x2 over 31m)  kubelet            Container image "harbor.atmosphere.dev/registry.k8s.io/capi-openstack/capi-openstack-controller:v0.12.4" already present on machine
  Normal   Created    27m (x2 over 31m)  kubelet            Created container manager
  Normal   Started    27m (x2 over 31m)  kubelet            Started container manager