Name:             capo-controller-manager-6975759b4b-5ln4m
Namespace:        capo-system
Priority:         0
Service Account:  capo-manager
Node:             instance/199.204.45.246
Start Time:       Fri, 17 Apr 2026 01:11:54 +0000
Labels:           cluster.x-k8s.io/provider=infrastructure-openstack
                  control-plane=capo-controller-manager
                  pod-template-hash=6975759b4b
Annotations:
Status:           Running
SeccompProfile:   RuntimeDefault
IP:               10.0.0.5
IPs:
  IP:           10.0.0.5
Controlled By:  ReplicaSet/capo-controller-manager-6975759b4b
Containers:
  manager:
    Container ID:  containerd://245f045364bb18d378c9a97a879a77b300f44f17cea508cc362fe0a887822095
    Image:         harbor.atmosphere.dev/registry.k8s.io/capi-openstack/capi-openstack-controller:v0.12.4
    Image ID:      harbor.atmosphere.dev/registry.k8s.io/capi-openstack/capi-openstack-controller@sha256:237da708e483aa8c39766a217d25e45de52816d446764569c77550e6f56f0970
    Ports:         9443/TCP, 9440/TCP
    Host Ports:    0/TCP, 0/TCP
    Command:
      /manager
    Args:
      --leader-elect
      --v=2
      --diagnostics-address=127.0.0.1:8080
      --insecure-diagnostics=true
    State:          Running
      Started:      Fri, 17 Apr 2026 01:18:06 +0000
    Last State:     Terminated
      Reason:       Error
      Message:      1.OpenStackCluster from pkg/mod/k8s.io/client-go@v0.31.10/tools/cache/reflector.go:243
I0417 01:12:07.237828       1 controller.go:217] "Starting workers" controller="openstackfloatingippool" controllerGroup="infrastructure.cluster.x-k8s.io" controllerKind="OpenStackFloatingIPPool" worker count=1
I0417 01:12:07.237973       1 controller.go:217] "Starting workers" controller="openstackmachine" controllerGroup="infrastructure.cluster.x-k8s.io" controllerKind="OpenStackMachine" worker count=10
I0417 01:12:07.237887       1 controller.go:217] "Starting workers" controller="openstackserver" controllerGroup="infrastructure.cluster.x-k8s.io" controllerKind="OpenStackServer" worker count=10
I0417 01:12:07.238146       1 controller.go:217] "Starting workers" controller="openstackcluster" controllerGroup="infrastructure.cluster.x-k8s.io" controllerKind="OpenStackCluster" worker count=10
E0417 01:17:51.323216       1 leaderelection.go:429] Failed to update lock optimistically: Put "https://10.96.0.1:443/apis/coordination.k8s.io/v1/namespaces/capo-system/leases/controller-leader-election-capo?timeout=5s": net/http: request canceled (Client.Timeout exceeded while awaiting headers), falling back to slow path
E0417 01:17:56.322925       1 leaderelection.go:436] error retrieving resource lock capo-system/controller-leader-election-capo: Get "https://10.96.0.1:443/apis/coordination.k8s.io/v1/namespaces/capo-system/leases/controller-leader-election-capo?timeout=5s": context deadline exceeded
I0417 01:17:56.323000       1 leaderelection.go:297] failed to renew lease capo-system/controller-leader-election-capo: timed out waiting for the condition
E0417 01:18:01.324537       1 leaderelection.go:322] Failed to release lock: Put "https://10.96.0.1:443/apis/coordination.k8s.io/v1/namespaces/capo-system/leases/controller-leader-election-capo?timeout=5s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
E0417 01:18:01.324644       1 main.go:281] "problem running manager" err="leader election lost" logger="setup"
      Exit Code:    1
      Started:      Fri, 17 Apr 2026 01:11:54 +0000
      Finished:     Fri, 17 Apr 2026 01:18:02 +0000
    Ready:          True
    Restart Count:  1
    Liveness:       http-get http://:healthz/healthz delay=0s timeout=1s period=10s #success=1 #failure=3
    Readiness:      http-get http://:healthz/readyz delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:
      CLUSTER_API_OPENSTACK_INSTANCE_CREATE_TIMEOUT:  10
    Mounts:
      /tmp/k8s-webhook-server/serving-certs from cert (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pc47x (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  cert:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  capo-webhook-service-cert
    Optional:    false
  kube-api-access-pc47x:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              openstack-control-plane=enabled
Tolerations:                 node-role.kubernetes.io/control-plane:NoSchedule
                             node-role.kubernetes.io/master:NoSchedule
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  25m                default-scheduler  Successfully assigned capo-system/capo-controller-manager-6975759b4b-5ln4m to instance
  Normal   Pulled     18m (x2 over 25m)  kubelet            Container image "harbor.atmosphere.dev/registry.k8s.io/capi-openstack/capi-openstack-controller:v0.12.4" already present on machine
  Warning  Unhealthy  18m                kubelet            Readiness probe failed: Get "http://10.0.0.5:9440/readyz": dial tcp 10.0.0.5:9440: connect: connection refused
  Warning  Unhealthy  18m                kubelet            Liveness probe failed: Get "http://10.0.0.5:9440/healthz": dial tcp 10.0.0.5:9440: connect: connection refused
  Normal   Created    18m (x2 over 25m)  kubelet            Created container manager
  Normal   Started    18m (x2 over 25m)  kubelet            Started container manager