Name:             orc-controller-manager-6cb597b5d4-9djfp
Namespace:        orc-system
Priority:         0
Service Account:  orc-controller-manager
Node:             instance/199.19.213.236
Start Time:       Wed, 04 Mar 2026 16:14:48 +0000
Labels:           control-plane=controller-manager
                  pod-template-hash=6cb597b5d4
Annotations:      kubectl.kubernetes.io/default-container: manager
Status:           Running
SeccompProfile:   RuntimeDefault
IP:               10.0.0.197
IPs:
  IP:  10.0.0.197
Controlled By:  ReplicaSet/orc-controller-manager-6cb597b5d4
Containers:
  manager:
    Container ID:  containerd://af0f258c8677fa690b2dcbea00393fc94e41e64ac4b4a8e5e075c35c5709267e
    Image:         harbor.atmosphere.dev/quay.io/orc/openstack-resource-controller:v2.2.0
    Image ID:      harbor.atmosphere.dev/quay.io/orc/openstack-resource-controller@sha256:e3c51b1c3048c3f8c2856a6327810981fd4624602091021c4d310092c85e247c
    Port:          <none>
    Host Port:     <none>
    Command:
      /manager
    Args:
      --metrics-bind-address=:8443
      --leader-elect
      --health-probe-bind-address=:8081
    State:          Running
      Started:      Wed, 04 Mar 2026 16:23:31 +0000
    Last State:     Terminated
      Reason:       Error
      Message:      :21:17Z  INFO  Starting Controller  {"controller": "port_deletion_guard_for_server", "controllerGroup": "openstack.k-orc.cloud", "controllerKind": "Port"}
2026-03-04T16:21:17Z  INFO  Starting workers  {"controller": "port_deletion_guard_for_server", "controllerGroup": "openstack.k-orc.cloud", "controllerKind": "Port", "worker count": 1}
2026-03-04T16:21:17Z  INFO  Starting Controller  {"controller": "servergroup", "controllerGroup": "openstack.k-orc.cloud", "controllerKind": "ServerGroup"}
2026-03-04T16:21:17Z  INFO  Starting workers  {"controller": "servergroup", "controllerGroup": "openstack.k-orc.cloud", "controllerKind": "ServerGroup", "worker count": 1}
2026-03-04T16:21:17Z  INFO  Starting Controller  {"controller": "server", "controllerGroup": "openstack.k-orc.cloud", "controllerKind": "Server"}
2026-03-04T16:21:17Z  INFO  Starting workers  {"controller": "server", "controllerGroup": "openstack.k-orc.cloud", "controllerKind": "Server", "worker count": 1}
E0304 16:21:29.604666       1 leaderelection.go:429] Failed to update lock optimistically: Put "https://10.96.0.1:443/apis/coordination.k8s.io/v1/namespaces/orc-system/leases/f35396c5.k-orc.cloud?timeout=5s": context deadline exceeded, falling back to slow path
E0304 16:21:34.604579       1 leaderelection.go:436] error retrieving resource lock orc-system/f35396c5.k-orc.cloud: Get "https://10.96.0.1:443/apis/coordination.k8s.io/v1/namespaces/orc-system/leases/f35396c5.k-orc.cloud?timeout=5s": context deadline exceeded
I0304 16:21:34.604630       1 leaderelection.go:297] failed to renew lease orc-system/f35396c5.k-orc.cloud: context deadline exceeded
E0304 16:21:39.605085       1 leaderelection.go:322] Failed to release lock: Put "https://10.96.0.1:443/apis/coordination.k8s.io/v1/namespaces/orc-system/leases/f35396c5.k-orc.cloud?timeout=5s": context deadline exceeded
2026-03-04T16:21:39Z  ERROR  setup  Error starting manager  {"error": "problem running manager: leader election lost"}
main.main
	/workspace/cmd/manager/main.go:121
runtime.main
	/usr/local/go/src/runtime/proc.go:272
      Exit Code:    1
      Started:      Wed, 04 Mar 2026 16:21:00 +0000
      Finished:     Wed, 04 Mar 2026 16:22:15 +0000
    Ready:          True
    Restart Count:  2
    Limits:
      cpu:     500m
      memory:  256Mi
    Requests:
      cpu:     10m
      memory:  64Mi
    Liveness:   http-get http://:8081/healthz delay=15s timeout=1s period=20s #success=1 #failure=3
    Readiness:  http-get http://:8081/readyz delay=5s timeout=1s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-47ljz (ro)
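The Last State message shows why the container exited: the manager could not renew its leader-election lease (every request to the API server at 10.96.0.1:443 hit a 5s context deadline), and controller-runtime deliberately terminates the process when leadership is lost. For orientation, below is a minimal sketch of how a controller-runtime manager like this one wires up leader election and the :8081 probe endpoints. It is not ORC's actual cmd/manager/main.go; only the lease name (f35396c5.k-orc.cloud) and probe address come from the output above, and the widened lease durations are illustrative assumptions for tolerating a slow API server (controller-runtime's defaults are 15s/10s/2s).

package main

import (
	"time"

	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/healthz"
)

func main() {
	// Assumed, illustrative values: wider than the controller-runtime
	// defaults (LeaseDuration=15s, RenewDeadline=10s, RetryPeriod=2s)
	// so that transient API-server stalls do not cost leadership.
	leaseDuration := 60 * time.Second
	renewDeadline := 40 * time.Second
	retryPeriod := 10 * time.Second

	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
		HealthProbeBindAddress: ":8081", // matches --health-probe-bind-address above
		LeaderElection:         true,
		LeaderElectionID:       "f35396c5.k-orc.cloud", // lease name seen in the logs above
		LeaseDuration:          &leaseDuration,
		RenewDeadline:          &renewDeadline,
		RetryPeriod:            &retryPeriod,
	})
	if err != nil {
		panic(err)
	}

	// The /healthz and /readyz endpoints probed by the kubelet.
	_ = mgr.AddHealthzCheck("healthz", healthz.Ping)
	_ = mgr.AddReadyzCheck("readyz", healthz.Ping)

	// Blocks until the signal handler fires or, as here, leadership is lost.
	if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
		panic(err)
	}
}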
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  kube-api-access-47ljz:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              openstack-control-plane=enabled
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                 From               Message
  ----     ------     ----                ----               -------
  Normal   Scheduled  44m                 default-scheduler  Successfully assigned orc-system/orc-controller-manager-6cb597b5d4-9djfp to instance
  Normal   Pulling    44m                 kubelet            Pulling image "harbor.atmosphere.dev/quay.io/orc/openstack-resource-controller:v2.2.0"
  Normal   Pulled     43m                 kubelet            Successfully pulled image "harbor.atmosphere.dev/quay.io/orc/openstack-resource-controller:v2.2.0" in 5.593s (5.593s including waiting)
  Warning  Unhealthy  38m (x2 over 39m)   kubelet            Liveness probe failed: Get "http://10.0.0.197:8081/healthz": dial tcp 10.0.0.197:8081: connect: connection refused
  Normal   Killing    38m                 kubelet            Container manager failed liveness probe, will be restarted
  Normal   Pulled     38m                 kubelet            Container image "harbor.atmosphere.dev/quay.io/orc/openstack-resource-controller:v2.2.0" already present on machine
  Warning  Unhealthy  37m (x11 over 39m)  kubelet            Readiness probe failed: Get "http://10.0.0.197:8081/readyz": dial tcp 10.0.0.197:8081: connect: connection refused
  Normal   Created    37m (x2 over 43m)   kubelet            Created container manager
  Normal   Started    37m (x2 over 43m)   kubelet            Started container manager
  Warning  Unhealthy  37m (x2 over 39m)   kubelet            Liveness probe failed: Get "http://10.0.0.197:8081/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
  Warning  Unhealthy  36m (x3 over 39m)   kubelet            Readiness probe failed: Get "http://10.0.0.197:8081/readyz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
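The events tell the same story as the termination message: the probe failures ("connection refused" while the process was restarting, then "context deadline exceeded" against the 1s probe timeout) and the lost lease both coincide with API-server slowness on this node rather than a fault inside the manager itself. One quick check is to see which replica currently holds the election lease, equivalent to `kubectl -n orc-system get lease f35396c5.k-orc.cloud -o yaml`. Below is a small client-go sketch of that query; it assumes a standard local kubeconfig and uses only the namespace and lease name taken from the logs above.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: credentials come from the default kubeconfig (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Fetch the leader-election Lease named in the controller logs.
	lease, err := cs.CoordinationV1().Leases("orc-system").
		Get(context.TODO(), "f35396c5.k-orc.cloud", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// HolderIdentity names the pod that currently leads; RenewTime shows
	// how recently it managed to renew against the API server.
	if lease.Spec.HolderIdentity != nil {
		fmt.Printf("holder:  %s\n", *lease.Spec.HolderIdentity)
	}
	if lease.Spec.RenewTime != nil {
		fmt.Printf("renewed: %s\n", lease.Spec.RenewTime.Time)
	}
}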