Name:             orc-controller-manager-6cb597b5d4-xct92
Namespace:        orc-system
Priority:         0
Service Account:  orc-controller-manager
Node:             instance/199.204.45.30
Start Time:       Sun, 12 Apr 2026 21:31:49 +0000
Labels:           control-plane=controller-manager
                  pod-template-hash=6cb597b5d4
Annotations:      kubectl.kubernetes.io/default-container: manager
Status:           Running
SeccompProfile:   RuntimeDefault
IP:               10.0.0.93
IPs:
  IP:           10.0.0.93
Controlled By:  ReplicaSet/orc-controller-manager-6cb597b5d4
Containers:
  manager:
    Container ID:  containerd://998b22c08003625ccfcd94d4954a026a29490d91a78e1d405c39458f0567f389
    Image:         harbor.atmosphere.dev/quay.io/orc/openstack-resource-controller:v2.2.0
    Image ID:      harbor.atmosphere.dev/quay.io/orc/openstack-resource-controller@sha256:e3c51b1c3048c3f8c2856a6327810981fd4624602091021c4d310092c85e247c
    Port:          <none>
    Host Port:     <none>
    Command:
      /manager
    Args:
      --metrics-bind-address=:8443
      --leader-elect
      --health-probe-bind-address=:8081
    State:          Running
      Started:      Sun, 12 Apr 2026 21:37:35 +0000
    Last State:     Terminated
      Reason:       Error
      Message:      tarting workers	{"controller": "credentials_deletion_guard_for_router", "controllerGroup": "", "controllerKind": "Secret", "worker count": 1}
        2026-04-12T21:31:53Z	INFO	Starting Controller	{"controller": "credentials_deletion_guard_for_floatingip", "controllerGroup": "", "controllerKind": "Secret"}
        2026-04-12T21:31:53Z	INFO	Starting workers	{"controller": "credentials_deletion_guard_for_floatingip", "controllerGroup": "", "controllerKind": "Secret", "worker count": 1}
        2026-04-12T21:31:53Z	INFO	Starting Controller	{"controller": "credentials_deletion_guard_for_securitygroup", "controllerGroup": "", "controllerKind": "Secret"}
        2026-04-12T21:31:53Z	INFO	Starting workers	{"controller": "credentials_deletion_guard_for_securitygroup", "controllerGroup": "", "controllerKind": "Secret", "worker count": 1}
        2026-04-12T21:31:53Z	INFO	controller-runtime.metrics	Serving metrics server	{"bindAddress": ":8443", "secure": true}
        E0412 21:37:07.992067       1 leaderelection.go:429] Failed to update lock optimistically: Put "https://10.96.0.1:443/apis/coordination.k8s.io/v1/namespaces/orc-system/leases/f35396c5.k-orc.cloud?timeout=5s": context deadline exceeded, falling back to slow path
        E0412 21:37:12.992646       1 leaderelection.go:436] error retrieving resource lock orc-system/f35396c5.k-orc.cloud: Get "https://10.96.0.1:443/apis/coordination.k8s.io/v1/namespaces/orc-system/leases/f35396c5.k-orc.cloud?timeout=5s": context deadline exceeded
        I0412 21:37:12.992685       1 leaderelection.go:297] failed to renew lease orc-system/f35396c5.k-orc.cloud: context deadline exceeded
        E0412 21:37:14.687264       1 leaderelection.go:322] Failed to release lock: Operation cannot be fulfilled on leases.coordination.k8s.io "f35396c5.k-orc.cloud": the object has been modified; please apply your changes to the latest version and try again
        2026-04-12T21:37:14Z	ERROR	setup	Error starting manager	{"error": "problem running manager: leader election lost"}
        main.main
        	/workspace/cmd/manager/main.go:121
        runtime.main
        	/usr/local/go/src/runtime/proc.go:272
      Exit Code:    1
      Started:      Sun, 12 Apr 2026 21:31:52 +0000
      Finished:     Sun, 12 Apr 2026 21:37:14 +0000
    Ready:          True
    Restart Count:  1
    Limits:
      cpu:     500m
      memory:  256Mi
    Requests:
      cpu:        10m
      memory:     64Mi
    Liveness:     http-get http://:8081/healthz delay=15s timeout=1s period=20s #success=1 #failure=3
    Readiness:    http-get http://:8081/readyz delay=5s timeout=1s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lcncl (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  kube-api-access-lcncl:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              openstack-control-plane=enabled
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  34m                default-scheduler  Successfully assigned orc-system/orc-controller-manager-6cb597b5d4-xct92 to instance
  Normal   Pulling    34m                kubelet            Pulling image "harbor.atmosphere.dev/quay.io/orc/openstack-resource-controller:v2.2.0"
  Normal   Pulled     34m                kubelet            Successfully pulled image "harbor.atmosphere.dev/quay.io/orc/openstack-resource-controller:v2.2.0" in 1.436s (1.436s including waiting)
  Warning  Unhealthy  29m (x2 over 29m)  kubelet            Readiness probe failed: Get "http://10.0.0.93:8081/readyz": dial tcp 10.0.0.93:8081: connect: connection refused
  Warning  Unhealthy  29m                kubelet            Liveness probe failed: Get "http://10.0.0.93:8081/healthz": dial tcp 10.0.0.93:8081: connect: connection refused
  Normal   Created    29m (x2 over 34m)  kubelet            Created container manager
  Normal   Started    29m (x2 over 34m)  kubelet            Started container manager
  Normal   Pulled     29m                kubelet            Container image "harbor.atmosphere.dev/quay.io/orc/openstack-resource-controller:v2.2.0" already present on machine