Name:                 cilium-operator-69d99df44c-7x8d9
Namespace:            kube-system
Priority:             2000000000
Priority Class Name:  system-cluster-critical
Service Account:      cilium-operator
Node:                 controller-2/199.19.213.145
Start Time:           Tue, 17 Mar 2026 20:06:33 +0000
Labels:               app.kubernetes.io/name=cilium-operator
                      app.kubernetes.io/part-of=cilium
                      io.cilium/app=operator
                      name=cilium-operator
                      pod-template-hash=69d99df44c
Annotations:          <none>
Status:               Running
IP:                   199.19.213.145
IPs:
  IP:           199.19.213.145
Controlled By:  ReplicaSet/cilium-operator-69d99df44c
Containers:
  cilium-operator:
    Container ID:  containerd://d76ed58648b6227aeae801ac10ef3c93f533780bb8040bcbfbfb44e75686381e
    Image:         quay.io/cilium/operator-generic:v1.14.8
    Image ID:      quay.io/cilium/operator-generic@sha256:56d373c12483c09964a00a29246595917603a077a298aa90a98e4de32c86b7dc
    Port:          <none>
    Host Port:     <none>
    Command:
      cilium-operator-generic
    Args:
      --config-dir=/tmp/cilium/config-map
      --debug=$(CILIUM_DEBUG)
    State:          Running
      Started:      Tue, 17 Mar 2026 20:18:51 +0000
    Last State:     Terminated
      Reason:       Error
      Message:      =info msg=Stopping subsys=hive
level=info msg="Stop hook executed" duration="100.183µs" function="*api.server.Stop" subsys=hive
level=info msg="Stop hook executed" duration="98.582µs" function="identitygc.registerGC.func2 (gc.go:122)" subsys=hive
level=info msg="Stop hook executed" duration="759.14µs" function="cmd.(*legacyOnLeader).onStop" subsys=hive
level=info msg="Stop hook executed" duration="5.62µs" function="*resource.resource[*github.com/cilium/cilium/pkg/k8s/slim/k8s/api/networking/v1.IngressClass].Stop" subsys=hive
level=info msg="Stop hook executed" duration="3.02µs" function="*resource.resource[*github.com/cilium/cilium/pkg/k8s/apis/cilium.io/v2alpha1.CiliumPodIPPool].Stop" subsys=hive
level=info msg="Stop hook executed" duration="3.06µs" function="*resource.resource[*github.com/cilium/cilium/pkg/k8s.Endpoints].Stop" subsys=hive
level=info msg="Stop hook executed" duration="39.311µs" function="*resource.resource[*github.com/cilium/cilium/pkg/k8s/apis/cilium.io/v2.CiliumIdentity].Stop" subsys=hive
level=info msg="LB-IPAM done initializing" subsys=lbipam
level=info msg="Stop hook executed" duration="170.134µs" function="*job.group.Stop" subsys=hive
level=info msg="Stop hook executed" duration="23.012µs" function="*resource.resource[*github.com/cilium/cilium/pkg/k8s/slim/k8s/api/core/v1.Service].Stop" subsys=hive
level=info msg="Stop hook executed" duration="125.413µs" function="*resource.resource[*github.com/cilium/cilium/pkg/k8s/apis/cilium.io/v2alpha1.CiliumLoadBalancerIPPool].Stop" subsys=hive
level=info msg="Stop hook executed" duration=1.538679ms function="cmd.registerOperatorHooks.func2 (root.go:166)" subsys=hive
level=info msg="Stop hook executed" duration="26.7µs" function="client.(*compositeClientset).onStop" subsys=hive
level=info msg="Stopped gops server" address="127.0.0.1:9891" subsys=gops
level=info msg="Stop hook executed" duration="218.416µs" function="gops.registerGopsHooks.func2 (cell.go:50)" subsys=hive
level=fatal msg="Leader election lost" subsys=cilium-operator-generic

      Exit Code:    1
      Started:      Tue, 17 Mar 2026 20:15:21 +0000
      Finished:     Tue, 17 Mar 2026 20:18:50 +0000
    Ready:          True
    Restart Count:  2
    Liveness:       http-get http://127.0.0.1:9234/healthz delay=60s timeout=3s period=10s #success=1 #failure=3
    Readiness:      http-get http://127.0.0.1:9234/healthz delay=0s timeout=3s period=5s #success=1 #failure=5
    Environment:
      K8S_NODE_NAME:          (v1:spec.nodeName)
      CILIUM_K8S_NAMESPACE:   kube-system (v1:metadata.namespace)
      CILIUM_DEBUG:           Optional: true
    Mounts:
      /tmp/cilium/config-map from cilium-config-path (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-j675k (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       True
  ContainersReady             True
  PodScheduled                True
Volumes:
  cilium-config-path:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      cilium-config
    Optional:  false
  kube-api-access-j675k:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
                             node-role.kubernetes.io/control-plane=
Tolerations:                 op=Exists
Events:
  Type     Reason        Age                  From               Message
  ----     ------        ----                 ----               -------
  Normal   Scheduled     14m                  default-scheduler  Successfully assigned kube-system/cilium-operator-69d99df44c-7x8d9 to controller-2
  Normal   Pulling       14m                  kubelet            Pulling image "quay.io/cilium/operator-generic:v1.14.8"
  Normal   Pulled        13m                  kubelet            Successfully pulled image "quay.io/cilium/operator-generic:v1.14.8" in 16.741s (27.691s including waiting). Image size: 24504733 bytes.
  Warning  NodeNotReady  7m28s                node-controller    Node is not ready
  Normal   Created       5m28s (x2 over 13m)  kubelet            Created container: cilium-operator
  Normal   Started       5m28s (x2 over 13m)  kubelet            Started container cilium-operator
  Normal   Pulled        5m28s                kubelet            Container image "quay.io/cilium/operator-generic:v1.14.8" already present on machine
  Warning  Unhealthy     5m26s (x4 over 13m)  kubelet            Readiness probe failed: Get "http://127.0.0.1:9234/healthz": dial tcp 127.0.0.1:9234: connect: connection refused
  Normal   Pulled        119s                 kubelet            Container image "quay.io/cilium/operator-generic:v1.14.8" already present on machine
  Normal   Created       119s                 kubelet            Created container: cilium-operator
  Normal   Started       118s                 kubelet            Started container cilium-operator
  Warning  Unhealthy     117s (x2 over 118s)  kubelet            Readiness probe failed: Get "http://127.0.0.1:9234/healthz": dial tcp 127.0.0.1:9234: connect: connection refused
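The Last State message (`level=fatal msg="Leader election lost"`) together with the `NodeNotReady` event suggests the operator crashed because it could not renew its leader-election lease while the node's API connectivity was degraded, then recovered on restart. A few follow-up commands to confirm that reading (a sketch; the pod and node names come from the output above, and the lease name assumes a default Cilium install, so it may differ in your cluster):

```shell
# Full logs of the crashed container instance, including the shutdown sequence
kubectl -n kube-system logs cilium-operator-69d99df44c-7x8d9 --previous

# Which operator replica currently holds the leader lease
# (lease name is an assumption based on a default Cilium deployment)
kubectl -n kube-system get lease cilium-operator-resource-lock -o yaml

# Confirm the node that raised NodeNotReady has recovered
kubectl get node controller-2
```

If the lease's `holderIdentity` now points at this pod and the node is `Ready`, the two restarts were a transient consequence of the node outage rather than a fault in the operator itself.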