Name:             capo-controller-manager-6975759b4b-7fp86
Namespace:        capo-system
Priority:         0
Service Account:  capo-manager
Node:             instance/199.204.45.233
Start Time:       Fri, 17 Apr 2026 11:37:56 +0000
Labels:           cluster.x-k8s.io/provider=infrastructure-openstack
                  control-plane=capo-controller-manager
                  pod-template-hash=6975759b4b
Annotations:      <none>
Status:           Running
SeccompProfile:   RuntimeDefault
IP:               10.0.0.62
IPs:
  IP:  10.0.0.62
Controlled By:  ReplicaSet/capo-controller-manager-6975759b4b
Containers:
  manager:
    Container ID:  containerd://50bb33a116219f52fa8182526894292f2ca523b4e10a726ce2894df80cc54bb9
    Image:         harbor.atmosphere.dev/registry.k8s.io/capi-openstack/capi-openstack-controller:v0.12.4
    Image ID:      harbor.atmosphere.dev/registry.k8s.io/capi-openstack/capi-openstack-controller@sha256:237da708e483aa8c39766a217d25e45de52816d446764569c77550e6f56f0970
    Ports:         9443/TCP, 9440/TCP
    Host Ports:    0/TCP, 0/TCP
    Command:
      /manager
    Args:
      --leader-elect
      --v=2
      --diagnostics-address=127.0.0.1:8080
      --insecure-diagnostics=true
    State:          Running
      Started:      Fri, 17 Apr 2026 11:44:32 +0000
    Last State:     Terminated
      Reason:       Error
      Message:      Caches populated for *v1alpha1.Image from pkg/mod/k8s.io/client-go@v0.31.10/tools/cache/reflector.go:243
I0417 11:42:48.234330       1 controller.go:217] "Starting workers" controller="openstackcluster" controllerGroup="infrastructure.cluster.x-k8s.io" controllerKind="OpenStackCluster" worker count=10
I0417 11:42:48.234424       1 controller.go:217] "Starting workers" controller="openstackfloatingippool" controllerGroup="infrastructure.cluster.x-k8s.io" controllerKind="OpenStackFloatingIPPool" worker count=1
I0417 11:42:48.234513       1 controller.go:217] "Starting workers" controller="openstackmachine" controllerGroup="infrastructure.cluster.x-k8s.io" controllerKind="OpenStackMachine" worker count=10
I0417 11:42:48.237973       1 controller.go:217] "Starting workers" controller="openstackserver" controllerGroup="infrastructure.cluster.x-k8s.io" controllerKind="OpenStackServer" worker count=10
E0417 11:43:44.365940       1 leaderelection.go:429] Failed to update lock optimistically: Put "https://10.96.0.1:443/apis/coordination.k8s.io/v1/namespaces/capo-system/leases/controller-leader-election-capo?timeout=5s": net/http: request canceled (Client.Timeout exceeded while awaiting headers), falling back to slow path
E0417 11:43:49.366253       1 leaderelection.go:436] error retrieving resource lock capo-system/controller-leader-election-capo: Get "https://10.96.0.1:443/apis/coordination.k8s.io/v1/namespaces/capo-system/leases/controller-leader-election-capo?timeout=5s": context deadline exceeded
I0417 11:43:49.366318       1 leaderelection.go:297] failed to renew lease capo-system/controller-leader-election-capo: timed out waiting for the condition
E0417 11:43:53.579376       1 leaderelection.go:322] Failed to release lock: Operation cannot be fulfilled on leases.coordination.k8s.io "controller-leader-election-capo": the object has been modified; please apply your changes to the latest version and try again
E0417 11:43:53.579494       1 main.go:281] "problem running manager" err="leader election lost" logger="setup"
      Exit Code:    1
      Started:      Fri, 17 Apr 2026 11:42:19 +0000
      Finished:     Fri, 17 Apr 2026 11:44:28 +0000
    Ready:          True
    Restart Count:  2
    Liveness:       http-get http://:healthz/healthz delay=0s timeout=1s period=10s #success=1 #failure=3
    Readiness:      http-get http://:healthz/readyz delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:
      CLUSTER_API_OPENSTACK_INSTANCE_CREATE_TIMEOUT:  10
    Mounts:
      /tmp/k8s-webhook-server/serving-certs from cert (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-h9kzf (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  cert:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  capo-webhook-service-cert
    Optional:    false
  kube-api-access-h9kzf:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              openstack-control-plane=enabled
Tolerations:                 node-role.kubernetes.io/control-plane:NoSchedule
                             node-role.kubernetes.io/master:NoSchedule
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  17m                default-scheduler  Successfully assigned capo-system/capo-controller-manager-6975759b4b-7fp86 to instance
  Warning  Unhealthy  14m (x3 over 14m)  kubelet            Liveness probe failed: Get "http://10.0.0.62:9440/healthz": dial tcp 10.0.0.62:9440: connect: connection refused
  Normal   Pulled     13m (x2 over 17m)  kubelet            Container image "harbor.atmosphere.dev/registry.k8s.io/capi-openstack/capi-openstack-controller:v0.12.4" already present on machine
  Warning  Unhealthy  13m (x7 over 14m)  kubelet            Readiness probe failed: Get "http://10.0.0.62:9440/readyz": dial tcp 10.0.0.62:9440: connect: connection refused
  Normal   Created    13m (x2 over 17m)  kubelet            Created container manager
  Normal   Started    13m (x2 over 17m)  kubelet            Started container manager
  Normal   Killing    11m (x2 over 14m)  kubelet            Container manager failed liveness probe, will be restarted
  Warning  Unhealthy  11m (x3 over 11m)  kubelet            Liveness probe failed: Get "http://10.0.0.62:9440/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
  Warning  Unhealthy  11m (x5 over 11m)  kubelet            Readiness probe failed: Get "http://10.0.0.62:9440/readyz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
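
Note that the Message block under Last State carries only the tail of the terminated container's log, so it starts mid-stream. The full log of the crashed run and the leader-election lease named in those errors can be inspected directly; a minimal sketch, assuming kubectl access to the same cluster (pod and lease names taken from the output above):

  # Full log of the previous (terminated) container, not just the truncated tail
  kubectl -n capo-system logs capo-controller-manager-6975759b4b-7fp86 --previous

  # Current holder and renew times of the contested lease
  kubectl -n capo-system get lease controller-leader-election-capo -o yaml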