Name:             orc-controller-manager-6cb597b5d4-dhtlc
Namespace:        orc-system
Priority:         0
Service Account:  orc-controller-manager
Node:             instance/162.253.55.195
Start Time:       Thu, 26 Feb 2026 22:41:39 +0000
Labels:           control-plane=controller-manager
                  pod-template-hash=6cb597b5d4
Annotations:      kubectl.kubernetes.io/default-container: manager
Status:           Running
SeccompProfile:   RuntimeDefault
IP:               10.0.0.8
IPs:
  IP:  10.0.0.8
Controlled By:  ReplicaSet/orc-controller-manager-6cb597b5d4
Containers:
  manager:
    Container ID:  containerd://f5e398830319590f063152dae7a957455bf6dd510346769c7c4ce8f96c5037b9
    Image:         harbor.atmosphere.dev/quay.io/orc/openstack-resource-controller:v2.2.0
    Image ID:      harbor.atmosphere.dev/quay.io/orc/openstack-resource-controller@sha256:e3c51b1c3048c3f8c2856a6327810981fd4624602091021c4d310092c85e247c
    Port:          <none>
    Host Port:     <none>
    Command:
      /manager
    Args:
      --metrics-bind-address=:8443
      --leader-elect
      --health-probe-bind-address=:8081
    State:          Running
      Started:      Thu, 26 Feb 2026 22:46:09 +0000
    Last State:     Terminated
      Reason:       Error
      Message:      rkers  {"controller": "image", "controllerGroup": "openstack.k-orc.cloud", "controllerKind": "Image", "worker count": 1}
2026-02-26T22:41:43Z  INFO  Starting Controller  {"controller": "routerinterface", "controllerGroup": "openstack.k-orc.cloud", "controllerKind": "Router"}
2026-02-26T22:41:43Z  INFO  Starting workers  {"controller": "routerinterface", "controllerGroup": "openstack.k-orc.cloud", "controllerKind": "Router", "worker count": 1}
2026-02-26T22:41:43Z  INFO  Starting Controller  {"controller": "subnet_deletion_guard_for_port", "controllerGroup": "openstack.k-orc.cloud", "controllerKind": "Subnet"}
2026-02-26T22:41:43Z  INFO  Starting workers  {"controller": "subnet_deletion_guard_for_port", "controllerGroup": "openstack.k-orc.cloud", "controllerKind": "Subnet", "worker count": 1}
2026-02-26T22:41:43Z  INFO  controller-runtime.metrics  Serving metrics server  {"bindAddress": ":8443", "secure": true}
E0226 22:45:16.513170       1 leaderelection.go:429] Failed to update lock optimistically: Put "https://10.96.0.1:443/apis/coordination.k8s.io/v1/namespaces/orc-system/leases/f35396c5.k-orc.cloud?timeout=5s": context deadline exceeded, falling back to slow path
E0226 22:45:21.513722       1 leaderelection.go:436] error retrieving resource lock orc-system/f35396c5.k-orc.cloud: Get "https://10.96.0.1:443/apis/coordination.k8s.io/v1/namespaces/orc-system/leases/f35396c5.k-orc.cloud?timeout=5s": context deadline exceeded
I0226 22:45:21.513800       1 leaderelection.go:297] failed to renew lease orc-system/f35396c5.k-orc.cloud: context deadline exceeded
E0226 22:45:26.515111       1 leaderelection.go:322] Failed to release lock: Put "https://10.96.0.1:443/apis/coordination.k8s.io/v1/namespaces/orc-system/leases/f35396c5.k-orc.cloud?timeout=5s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-02-26T22:45:26Z  ERROR  setup  Error starting manager  {"error": "problem running manager: leader election lost"}
main.main
	/workspace/cmd/manager/main.go:121
runtime.main
	/usr/local/go/src/runtime/proc.go:272

      Exit Code:    1
      Started:      Thu, 26 Feb 2026 22:41:41 +0000
      Finished:     Thu, 26 Feb 2026 22:46:07 +0000
    Ready:          True
    Restart Count:  1
    Limits:
      cpu:     500m
      memory:  256Mi
    Requests:
      cpu:     10m
      memory:  64Mi
    Liveness:   http-get http://:8081/healthz delay=15s timeout=1s period=20s #success=1 #failure=3
    Readiness:  http-get http://:8081/readyz delay=5s timeout=1s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-b5l7m (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  kube-api-access-b5l7m:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              openstack-control-plane=enabled
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  40m                default-scheduler  Successfully assigned orc-system/orc-controller-manager-6cb597b5d4-dhtlc to instance
  Normal   Pulling    40m                kubelet            Pulling image "harbor.atmosphere.dev/quay.io/orc/openstack-resource-controller:v2.2.0"
  Normal   Pulled     40m                kubelet            Successfully pulled image "harbor.atmosphere.dev/quay.io/orc/openstack-resource-controller:v2.2.0" in 1.726s (1.726s including waiting)
  Warning  Unhealthy  36m (x5 over 36m)  kubelet            Readiness probe failed: Get "http://10.0.0.8:8081/readyz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
  Warning  Unhealthy  36m (x2 over 36m)  kubelet            Liveness probe failed: Get "http://10.0.0.8:8081/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
  Normal   Pulled     36m                kubelet            Container image "harbor.atmosphere.dev/quay.io/orc/openstack-resource-controller:v2.2.0" already present on machine
  Normal   Created    36m (x2 over 40m)  kubelet            Created container manager
  Normal   Started    36m (x2 over 40m)  kubelet            Started container manager
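The restart recorded above came from losing the leader-election lease: renewals of the Lease orc-system/f35396c5.k-orc.cloud against the API server at 10.96.0.1:443 timed out, the kubelet's probes to :8081 failed in the same window, and the manager exited with "leader election lost". The simultaneous probe failures suggest node or API-server pressure rather than a fault in the manager itself; but if investigation instead shows this pod being CPU-throttled under its tight 10m request / 500m limit, loosening the Deployment's resources is one possible mitigation. A minimal sketch, assuming throttling is the cause — all values below are illustrative assumptions, not project recommendations:

```yaml
# Illustrative strategic-merge patch for the orc-controller-manager
# Deployment. Every resource value here is an assumption chosen for
# the example, not an ORC default or recommendation.
spec:
  template:
    spec:
      containers:
        - name: manager
          resources:
            requests:
              cpu: 100m      # up from 10m, so the scheduler reserves real CPU
              memory: 128Mi  # up from 64Mi
            limits:
              cpu: "1"       # up from 500m, less throttling under load
              memory: 512Mi  # up from 256Mi
```

Such a patch can be applied with `kubectl -n orc-system patch deployment orc-controller-manager --patch-file resources.yaml`. If the timeouts reflect cluster-wide API-server or node pressure, changing this one pod's resources will not help, and the node itself should be examined first.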