Name:             orc-controller-manager-6cb597b5d4-dqbqm
Namespace:        orc-system
Priority:         0
Service Account:  orc-controller-manager
Node:             instance/162.253.55.110
Start Time:       Wed, 04 Mar 2026 13:40:56 +0000
Labels:           control-plane=controller-manager
                  pod-template-hash=6cb597b5d4
Annotations:      kubectl.kubernetes.io/default-container: manager
Status:           Running
SeccompProfile:   RuntimeDefault
IP:               10.0.0.170
IPs:
  IP:           10.0.0.170
Controlled By:  ReplicaSet/orc-controller-manager-6cb597b5d4
Containers:
  manager:
    Container ID:  containerd://c62d167b211cf0f966b3af09b33655e2dbf85279feed5183711b8e9413bc4395
    Image:         harbor.atmosphere.dev/quay.io/orc/openstack-resource-controller:v2.2.0
    Image ID:      harbor.atmosphere.dev/quay.io/orc/openstack-resource-controller@sha256:e3c51b1c3048c3f8c2856a6327810981fd4624602091021c4d310092c85e247c
    Port:          <none>
    Host Port:     <none>
    Command:
      /manager
    Args:
      --metrics-bind-address=:8443
      --leader-elect
      --health-probe-bind-address=:8081
    State:          Running
      Started:      Wed, 04 Mar 2026 13:45:46 +0000
    Last State:     Terminated
      Reason:       Error
      Message:      IP"}
2026-03-04T13:41:00Z  INFO  Starting workers  {"controller": "floatingip", "controllerGroup": "openstack.k-orc.cloud", "controllerKind": "FloatingIP", "worker count": 1}
2026-03-04T13:41:00Z  INFO  Starting Controller  {"controller": "network", "controllerGroup": "openstack.k-orc.cloud", "controllerKind": "Network"}
2026-03-04T13:41:00Z  INFO  Starting workers  {"controller": "network", "controllerGroup": "openstack.k-orc.cloud", "controllerKind": "Network", "worker count": 1}
2026-03-04T13:41:00Z  INFO  Starting Controller  {"controller": "credentials_deletion_guard_for_floatingip", "controllerGroup": "", "controllerKind": "Secret"}
2026-03-04T13:41:00Z  INFO  Starting workers  {"controller": "credentials_deletion_guard_for_floatingip", "controllerGroup": "", "controllerKind": "Secret", "worker count": 1}
2026-03-04T13:41:01Z  INFO  controller-runtime.metrics  Serving metrics server  {"bindAddress": ":8443", "secure": true}
E0304 13:45:21.993196       1 leaderelection.go:429] Failed to update lock optimistically: Put "https://10.96.0.1:443/apis/coordination.k8s.io/v1/namespaces/orc-system/leases/f35396c5.k-orc.cloud?timeout=5s": context deadline exceeded, falling back to slow path
E0304 13:45:26.993204       1 leaderelection.go:436] error retrieving resource lock orc-system/f35396c5.k-orc.cloud: Get "https://10.96.0.1:443/apis/coordination.k8s.io/v1/namespaces/orc-system/leases/f35396c5.k-orc.cloud?timeout=5s": context deadline exceeded
I0304 13:45:26.993263       1 leaderelection.go:297] failed to renew lease orc-system/f35396c5.k-orc.cloud: context deadline exceeded
E0304 13:45:30.856562       1 leaderelection.go:322] Failed to release lock: Operation cannot be fulfilled on leases.coordination.k8s.io "f35396c5.k-orc.cloud": the object has been modified; please apply your changes to the latest version and try again
2026-03-04T13:45:30Z  ERROR  setup  Error starting manager  {"error": "problem running manager: leader election lost"}
main.main
	/workspace/cmd/manager/main.go:121
runtime.main
	/usr/local/go/src/runtime/proc.go:272
      Exit Code:    1
      Started:      Wed, 04 Mar 2026 13:40:59 +0000
      Finished:     Wed, 04 Mar 2026 13:45:30 +0000
    Ready:          True
    Restart Count:  1
    Limits:
      cpu:     500m
      memory:  256Mi
    Requests:
      cpu:     10m
      memory:  64Mi
    Liveness:   http-get http://:8081/healthz delay=15s timeout=1s period=20s #success=1 #failure=3
    Readiness:  http-get http://:8081/readyz delay=5s timeout=1s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-l9sfm (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  kube-api-access-l9sfm:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              openstack-control-plane=enabled
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  34m                default-scheduler  Successfully assigned orc-system/orc-controller-manager-6cb597b5d4-dqbqm to instance
  Normal   Pulling    34m                kubelet            Pulling image "harbor.atmosphere.dev/quay.io/orc/openstack-resource-controller:v2.2.0"
  Normal   Pulled     34m                kubelet            Successfully pulled image "harbor.atmosphere.dev/quay.io/orc/openstack-resource-controller:v2.2.0" in 1.529s (1.529s including waiting)
  Warning  Unhealthy  30m                kubelet            Readiness probe failed: Get "http://10.0.0.170:8081/readyz": dial tcp 10.0.0.170:8081: connect: connection refused
  Warning  Unhealthy  30m                kubelet            Liveness probe failed: Get "http://10.0.0.170:8081/healthz": dial tcp 10.0.0.170:8081: connect: connection refused
  Normal   Pulled     30m                kubelet            Container image "harbor.atmosphere.dev/quay.io/orc/openstack-resource-controller:v2.2.0" already present on machine
  Normal   Created    30m (x2 over 34m)  kubelet            Created container manager
  Normal   Started    30m (x2 over 34m)  kubelet            Started container manager