Name:                 capo-controller-manager-65565fdb7f-r68s9
Namespace:            capo-system
Priority:             0
Service Account:      capo-manager
Node:                 instance/199.19.213.75
Start Time:           Tue, 28 Apr 2026 07:39:02 +0000
Labels:               cluster.x-k8s.io/provider=infrastructure-openstack
                      control-plane=capo-controller-manager
                      pod-template-hash=65565fdb7f
Annotations:          <none>
Status:               Running
SeccompProfile:       RuntimeDefault
IP:                   10.0.0.2
IPs:
  IP:  10.0.0.2
Controlled By:  ReplicaSet/capo-controller-manager-65565fdb7f
Containers:
  manager:
    Container ID:  containerd://dc91bee27f00001742c4176c990e2eb79542b5bdaf9e8030687ec1be043b1b51
    Image:         harbor.atmosphere.dev/registry.k8s.io/capi-openstack/capi-openstack-controller:v0.12.7
    Image ID:      harbor.atmosphere.dev/registry.k8s.io/capi-openstack/capi-openstack-controller@sha256:b964c2f374ad4f26a593749d3c737492d5c1821db143b26596c9285b9055a8b4
    Ports:         9443/TCP, 9440/TCP
    Host Ports:    0/TCP, 0/TCP
    Command:
      /manager
    Args:
      --leader-elect
      --v=2
      --diagnostics-address=127.0.0.1:8080
      --insecure-diagnostics=true
    State:          Running
      Started:      Tue, 28 Apr 2026 07:51:58 +0000
    Last State:     Terminated
      Reason:       Error
      Message:      :06.030640       1 reflector.go:368] Caches populated for *v1alpha1.Image from pkg/mod/k8s.io/client-go@v0.31.14/tools/cache/reflector.go:243
I0428 07:39:06.129326       1 controller.go:217] "Starting workers" controller="openstackfloatingippool" controllerGroup="infrastructure.cluster.x-k8s.io" controllerKind="OpenStackFloatingIPPool" worker count=1
I0428 07:39:06.129503       1 controller.go:217] "Starting workers" controller="openstackmachine" controllerGroup="infrastructure.cluster.x-k8s.io" controllerKind="OpenStackMachine" worker count=10
I0428 07:39:06.129508       1 controller.go:217] "Starting workers" controller="openstackserver" controllerGroup="infrastructure.cluster.x-k8s.io" controllerKind="OpenStackServer" worker count=10
I0428 07:39:06.129526       1 controller.go:217] "Starting workers" controller="openstackcluster" controllerGroup="infrastructure.cluster.x-k8s.io" controllerKind="OpenStackCluster" worker count=10
E0428 07:51:21.136614       1 leaderelection.go:429] Failed to update lock optimistically: the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io controller-leader-election-capo), falling back to slow path
E0428 07:51:26.140470       1 leaderelection.go:436] error retrieving resource lock capo-system/controller-leader-election-capo: Get "https://10.96.0.1:443/apis/coordination.k8s.io/v1/namespaces/capo-system/leases/controller-leader-election-capo?timeout=5s": context deadline exceeded
I0428 07:51:26.140523       1 leaderelection.go:297] failed to renew lease capo-system/controller-leader-election-capo: timed out waiting for the condition
E0428 07:51:29.783786       1 leaderelection.go:322] Failed to release lock: Operation cannot be fulfilled on leases.coordination.k8s.io "controller-leader-election-capo": the object has been modified; please apply your changes to the latest version and try again
E0428 07:51:29.783884       1 main.go:281] "problem running manager" err="leader election lost" logger="setup"
      Exit Code:    1
      Started:      Tue, 28 Apr 2026 07:39:03 +0000
      Finished:     Tue, 28 Apr 2026 07:51:29 +0000
    Ready:          True
    Restart Count:  1
    Liveness:       http-get http://:healthz/healthz delay=0s timeout=1s period=10s #success=1 #failure=3
    Readiness:      http-get http://:healthz/readyz delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:
      CLUSTER_API_OPENSTACK_INSTANCE_CREATE_TIMEOUT:  10
    Mounts:
      /tmp/k8s-webhook-server/serving-certs from cert (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-thds2 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  cert:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  capo-webhook-service-cert
    Optional:    false
  kube-api-access-thds2:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              openstack-control-plane=enabled
Tolerations:                 node-role.kubernetes.io/control-plane:NoSchedule
                             node-role.kubernetes.io/master:NoSchedule
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  24m                default-scheduler  Successfully assigned capo-system/capo-controller-manager-65565fdb7f-r68s9 to instance
  Warning  Unhealthy  12m                kubelet            Readiness probe failed: Get "http://10.0.0.2:9440/readyz": dial tcp 10.0.0.2:9440: connect: connection refused
  Warning  Unhealthy  12m                kubelet            Liveness probe failed: Get "http://10.0.0.2:9440/healthz": dial tcp 10.0.0.2:9440: connect: connection refused
  Normal   Pulled     12m (x2 over 24m)  kubelet            Container image "harbor.atmosphere.dev/registry.k8s.io/capi-openstack/capi-openstack-controller:v0.12.7" already present on machine
  Normal   Created    12m (x2 over 24m)  kubelet            Created container manager
  Normal   Started    11m (x2 over 24m)  kubelet            Started container manager