Name:             orc-controller-manager-6cb597b5d4-gpfpl
Namespace:        orc-system
Priority:         0
Service Account:  orc-controller-manager
Node:             instance/199.204.45.246
Start Time:       Fri, 17 Apr 2026 01:10:29 +0000
Labels:           control-plane=controller-manager
                  pod-template-hash=6cb597b5d4
Annotations:      kubectl.kubernetes.io/default-container: manager
Status:           Running
SeccompProfile:   RuntimeDefault
IP:               10.0.0.211
IPs:
  IP:  10.0.0.211
Controlled By:  ReplicaSet/orc-controller-manager-6cb597b5d4
Containers:
  manager:
    Container ID:  containerd://709151198546f6248bbafd69ac1cba72fdcc96f2252ec011294c1fcc02ec236d
    Image:         harbor.atmosphere.dev/quay.io/orc/openstack-resource-controller:v2.2.0
    Image ID:      harbor.atmosphere.dev/quay.io/orc/openstack-resource-controller@sha256:e3c51b1c3048c3f8c2856a6327810981fd4624602091021c4d310092c85e247c
    Port:          <none>
    Host Port:     <none>
    Command:
      /manager
    Args:
      --metrics-bind-address=:8443
      --leader-elect
      --health-probe-bind-address=:8081
    State:          Running
      Started:      Fri, 17 Apr 2026 01:18:06 +0000
    Last State:     Terminated
      Reason:       Error
      Message:      T01:10:33Z INFO Starting workers {"controller": "network", "controllerGroup": "openstack.k-orc.cloud", "controllerKind": "Network", "worker count": 1}
2026-04-17T01:10:33Z INFO Starting Controller {"controller": "subnet", "controllerGroup": "openstack.k-orc.cloud", "controllerKind": "Subnet"}
2026-04-17T01:10:33Z INFO Starting workers {"controller": "subnet", "controllerGroup": "openstack.k-orc.cloud", "controllerKind": "Subnet", "worker count": 1}
2026-04-17T01:10:33Z INFO Starting Controller {"controller": "credentials_deletion_guard_for_router", "controllerGroup": "", "controllerKind": "Secret"}
2026-04-17T01:10:33Z INFO Starting workers {"controller": "credentials_deletion_guard_for_router", "controllerGroup": "", "controllerKind": "Secret", "worker count": 1}
2026-04-17T01:10:34Z INFO controller-runtime.metrics Serving metrics server {"bindAddress": ":8443", "secure": true}
E0417 01:17:51.324207       1 leaderelection.go:429] Failed to update lock optimistically: the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io f35396c5.k-orc.cloud), falling back to slow path
E0417 01:17:56.322758       1 leaderelection.go:436] error retrieving resource lock orc-system/f35396c5.k-orc.cloud: Get "https://10.96.0.1:443/apis/coordination.k8s.io/v1/namespaces/orc-system/leases/f35396c5.k-orc.cloud?timeout=5s": context deadline exceeded
I0417 01:17:56.322808       1 leaderelection.go:297] failed to renew lease orc-system/f35396c5.k-orc.cloud: context deadline exceeded
E0417 01:18:01.323488       1 leaderelection.go:322] Failed to release lock: Put "https://10.96.0.1:443/apis/coordination.k8s.io/v1/namespaces/orc-system/leases/f35396c5.k-orc.cloud?timeout=5s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-17T01:18:01Z ERROR setup Error starting manager {"error": "problem running manager: leader election lost"}
main.main
	/workspace/cmd/manager/main.go:121
runtime.main
	/usr/local/go/src/runtime/proc.go:272

      Exit Code:    1
      Started:      Fri, 17 Apr 2026 01:10:32 +0000
      Finished:     Fri, 17 Apr 2026 01:18:02 +0000
    Ready:          True
    Restart Count:  1
    Limits:
      cpu:     500m
      memory:  256Mi
    Requests:
      cpu:     10m
      memory:  64Mi
    Liveness:   http-get http://:8081/healthz delay=15s timeout=1s period=20s #success=1 #failure=3
    Readiness:  http-get http://:8081/readyz delay=5s timeout=1s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-r2dds (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  kube-api-access-r2dds:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              openstack-control-plane=enabled
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age                From               Message
  ----    ------     ----               ----               -------
  Normal  Scheduled  28m                default-scheduler  Successfully assigned orc-system/orc-controller-manager-6cb597b5d4-gpfpl to instance
  Normal  Pulling    27m                kubelet            Pulling image "harbor.atmosphere.dev/quay.io/orc/openstack-resource-controller:v2.2.0"
  Normal  Pulled     27m                kubelet            Successfully pulled image "harbor.atmosphere.dev/quay.io/orc/openstack-resource-controller:v2.2.0" in 1.588s (1.588s including waiting)
  Normal  Pulled     20m                kubelet            Container image "harbor.atmosphere.dev/quay.io/orc/openstack-resource-controller:v2.2.0" already present on machine
  Normal  Created    20m (x2 over 27m)  kubelet            Created container manager
  Normal  Started    20m (x2 over 27m)  kubelet            Started container manager