Name:                 cilium-jxzvv
Namespace:            kube-system
Priority:             2000001000
Priority Class Name:  system-node-critical
Service Account:      cilium
Node:                 controller-1/199.204.45.33
Start Time:           Fri, 17 Apr 2026 00:56:50 +0000
Labels:               app.kubernetes.io/name=cilium-agent
                      app.kubernetes.io/part-of=cilium
                      controller-revision-hash=f8b6c767c
                      k8s-app=cilium
                      pod-template-generation=1
Annotations:          container.apparmor.security.beta.kubernetes.io/apply-sysctl-overwrites: unconfined
                      container.apparmor.security.beta.kubernetes.io/cilium-agent: unconfined
                      container.apparmor.security.beta.kubernetes.io/clean-cilium-state: unconfined
                      container.apparmor.security.beta.kubernetes.io/mount-cgroup: unconfined
Status:               Running
IP:                   199.204.45.33
IPs:
  IP:  199.204.45.33
Controlled By:  DaemonSet/cilium
Init Containers:
  config:
    Container ID:  containerd://3a8205e1a884138fbbc11d257a00ec99eea112f8257797269c8d5b98ff0ff0a2
    Image:         quay.io/cilium/cilium:v1.14.8
    Image ID:      quay.io/cilium/cilium@sha256:7fca3ba4b04af066e8b086b5c1a52e30f52db01ffc642e7db0a439514aed3ada
    Port:          <none>
    Host Port:     <none>
    Command:
      cilium
      build-config
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Fri, 17 Apr 2026 00:57:04 +0000
      Finished:     Fri, 17 Apr 2026 00:57:10 +0000
    Ready:          True
    Restart Count:  0
    Environment:
      K8S_NODE_NAME:         (v1:spec.nodeName)
      CILIUM_K8S_NAMESPACE:  kube-system (v1:metadata.namespace)
    Mounts:
      /tmp from tmp (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-thxkr (ro)
  mount-cgroup:
    Container ID:  containerd://7ee58b402cec5664bd0a5b40a92998623cdad4a7ec801897fc15daff51553d49
    Image:         quay.io/cilium/cilium:v1.14.8
    Image ID:      quay.io/cilium/cilium@sha256:7fca3ba4b04af066e8b086b5c1a52e30f52db01ffc642e7db0a439514aed3ada
    Port:          <none>
    Host Port:     <none>
    Command:
      sh
      -ec
      cp /usr/bin/cilium-mount /hostbin/cilium-mount;
      nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
      rm /hostbin/cilium-mount
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Fri, 17 Apr 2026 00:57:14 +0000
      Finished:     Fri, 17 Apr 2026 00:57:14 +0000
    Ready:          True
    Restart Count:  0
    Environment:
      CGROUP_ROOT:  /run/cilium/cgroupv2
      BIN_PATH:     /opt/cni/bin
    Mounts:
      /hostbin from cni-path (rw)
      /hostproc from hostproc (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-thxkr (ro)
  apply-sysctl-overwrites:
    Container ID:  containerd://9012bcca416fd0fd5463328087d868536aca4fec98261005b11eb5a16301e6de
    Image:         quay.io/cilium/cilium:v1.14.8
    Image ID:      quay.io/cilium/cilium@sha256:7fca3ba4b04af066e8b086b5c1a52e30f52db01ffc642e7db0a439514aed3ada
    Port:          <none>
    Host Port:     <none>
    Command:
      sh
      -ec
      cp /usr/bin/cilium-sysctlfix /hostbin/cilium-sysctlfix;
      nsenter --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-sysctlfix";
      rm /hostbin/cilium-sysctlfix
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Fri, 17 Apr 2026 00:57:15 +0000
      Finished:     Fri, 17 Apr 2026 00:57:15 +0000
    Ready:          True
    Restart Count:  0
    Environment:
      BIN_PATH:  /opt/cni/bin
    Mounts:
      /hostbin from cni-path (rw)
      /hostproc from hostproc (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-thxkr (ro)
  mount-bpf-fs:
    Container ID:  containerd://2cc58ea6d6df31f1e8a7bc12a460d6f82ca7c52bc8bd3e3e69e6b19690a033bf
    Image:         quay.io/cilium/cilium:v1.14.8
    Image ID:      quay.io/cilium/cilium@sha256:7fca3ba4b04af066e8b086b5c1a52e30f52db01ffc642e7db0a439514aed3ada
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/bash
      -c
      --
    Args:
      mount | grep "/sys/fs/bpf type bpf" || mount -t bpf bpf /sys/fs/bpf
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Fri, 17 Apr 2026 00:57:16 +0000
      Finished:     Fri, 17 Apr 2026 00:57:16 +0000
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /sys/fs/bpf from bpf-maps (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-thxkr (ro)
  clean-cilium-state:
    Container ID:  containerd://01cec828efbd85c2c7af82c583fc140ea2a0e0b06a47940c0313bd813c418480
    Image:         quay.io/cilium/cilium:v1.14.8
    Image ID:      quay.io/cilium/cilium@sha256:7fca3ba4b04af066e8b086b5c1a52e30f52db01ffc642e7db0a439514aed3ada
    Port:          <none>
    Host Port:     <none>
    Command:
      /init-container.sh
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Fri, 17 Apr 2026 00:57:17 +0000
      Finished:     Fri, 17 Apr 2026 00:57:17 +0000
    Ready:          True
    Restart Count:  0
    Environment:
      CILIUM_ALL_STATE:  Optional: true
      CILIUM_BPF_STATE:  Optional: true
    Mounts:
      /run/cilium/cgroupv2 from cilium-cgroup (rw)
      /sys/fs/bpf from bpf-maps (rw)
      /var/run/cilium from cilium-run (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-thxkr (ro)
  install-cni-binaries:
    Container ID:  containerd://98263b5803a0bcbd3fe4d91b2396a61c9717a491af59b3667e47eb3e25c1678e
    Image:         quay.io/cilium/cilium:v1.14.8
    Image ID:      quay.io/cilium/cilium@sha256:7fca3ba4b04af066e8b086b5c1a52e30f52db01ffc642e7db0a439514aed3ada
    Port:          <none>
    Host Port:     <none>
    Command:
      /install-plugin.sh
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Fri, 17 Apr 2026 00:57:18 +0000
      Finished:     Fri, 17 Apr 2026 00:57:18 +0000
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:        100m
      memory:     10Mi
    Environment:  <none>
    Mounts:
      /host/opt/cni/bin from cni-path (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-thxkr (ro)
Containers:
  cilium-agent:
    Container ID:  containerd://f17d07da58d9bef852f39639263e469674a9976accc5d5eafc6901a7fea534b6
    Image:         quay.io/cilium/cilium:v1.14.8
    Image ID:      quay.io/cilium/cilium@sha256:7fca3ba4b04af066e8b086b5c1a52e30f52db01ffc642e7db0a439514aed3ada
    Port:          <none>
    Host Port:     <none>
    Command:
      cilium-agent
    Args:
      --config-dir=/tmp/cilium/config-map
    State:          Running
      Started:      Fri, 17 Apr 2026 00:57:19 +0000
    Ready:          True
    Restart Count:  0
    Liveness:       http-get http://127.0.0.1:9879/healthz delay=0s timeout=5s period=30s #success=1 #failure=10
    Readiness:      http-get http://127.0.0.1:9879/healthz delay=0s timeout=5s period=30s #success=1 #failure=3
    Startup:        http-get http://127.0.0.1:9879/healthz delay=0s timeout=1s period=2s #success=1 #failure=105
    Environment:
      K8S_NODE_NAME:              (v1:spec.nodeName)
      CILIUM_K8S_NAMESPACE:       kube-system (v1:metadata.namespace)
      CILIUM_CLUSTERMESH_CONFIG:  /var/lib/cilium/clustermesh/
    Mounts:
      /host/etc/cni/net.d from etc-cni-netd (rw)
      /host/proc/sys/kernel from host-proc-sys-kernel (rw)
      /host/proc/sys/net from host-proc-sys-net (rw)
      /lib/modules from lib-modules (ro)
      /run/xtables.lock from xtables-lock (rw)
      /sys/fs/bpf from bpf-maps (rw)
      /tmp from tmp (rw)
      /var/lib/cilium/clustermesh from clustermesh-secrets (ro)
      /var/run/cilium from cilium-run (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-thxkr (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       True
  ContainersReady             True
  PodScheduled                True
Volumes:
  tmp:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  cilium-run:
    Type:          HostPath (bare host directory volume)
    Path:          /var/run/cilium
    HostPathType:  DirectoryOrCreate
  bpf-maps:
    Type:          HostPath (bare host directory volume)
    Path:          /sys/fs/bpf
    HostPathType:  DirectoryOrCreate
  hostproc:
    Type:          HostPath (bare host directory volume)
    Path:          /proc
    HostPathType:  Directory
  cilium-cgroup:
    Type:          HostPath (bare host directory volume)
    Path:          /run/cilium/cgroupv2
    HostPathType:  DirectoryOrCreate
  cni-path:
    Type:          HostPath (bare host directory volume)
    Path:          /opt/cni/bin
    HostPathType:  DirectoryOrCreate
  etc-cni-netd:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/cni/net.d
    HostPathType:  DirectoryOrCreate
  lib-modules:
    Type:          HostPath (bare host directory volume)
    Path:          /lib/modules
    HostPathType:
  xtables-lock:
    Type:          HostPath (bare host directory volume)
    Path:          /run/xtables.lock
    HostPathType:  FileOrCreate
  clustermesh-secrets:
    Type:        Projected (a volume that contains injected data from multiple sources)
    SecretName:  cilium-clustermesh
    Optional:    true
    SecretName:  clustermesh-apiserver-remote-cert
    Optional:    true
  host-proc-sys-net:
    Type:          HostPath (bare host directory volume)
    Path:          /proc/sys/net
    HostPathType:  Directory
  host-proc-sys-kernel:
    Type:          HostPath (bare host directory volume)
    Path:          /proc/sys/kernel
    HostPathType:  Directory
  kube-api-access-thxkr:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 op=Exists
                             node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                             node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/network-unavailable:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists
                             node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                             node.kubernetes.io/unreachable:NoExecute op=Exists
                             node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
  Type     Reason        Age                From               Message
  ----     ------        ----               ----               -------
  Normal   Scheduled     14m                default-scheduler  Successfully assigned kube-system/cilium-jxzvv to controller-1
  Normal   Pulling       14m                kubelet            Pulling image "quay.io/cilium/cilium:v1.14.8"
  Normal   Pulled        14m                kubelet            Successfully pulled image "quay.io/cilium/cilium:v1.14.8" in 12.78s (12.78s including waiting). Image size: 201882379 bytes.
  Normal   Created       14m                kubelet            Created container: config
  Normal   Started      14m                kubelet            Started container config
  Normal   Pulled        13m                kubelet            Container image "quay.io/cilium/cilium:v1.14.8" already present on machine
  Normal   Created       13m                kubelet            Created container: mount-cgroup
  Normal   Started       13m                kubelet            Started container mount-cgroup
  Normal   Pulled        13m                kubelet            Container image "quay.io/cilium/cilium:v1.14.8" already present on machine
  Normal   Created       13m                kubelet            Created container: apply-sysctl-overwrites
  Normal   Started       13m                kubelet            Started container apply-sysctl-overwrites
  Normal   Pulled        13m                kubelet            Container image "quay.io/cilium/cilium:v1.14.8" already present on machine
  Normal   Created       13m                kubelet            Created container: mount-bpf-fs
  Normal   Started       13m                kubelet            Started container mount-bpf-fs
  Normal   Pulled        13m                kubelet            Container image "quay.io/cilium/cilium:v1.14.8" already present on machine
  Normal   Created       13m                kubelet            Created container: clean-cilium-state
  Normal   Started       13m                kubelet            Started container clean-cilium-state
  Normal   Pulled        13m                kubelet            Container image "quay.io/cilium/cilium:v1.14.8" already present on machine
  Normal   Created       13m                kubelet            Created container: install-cni-binaries
  Normal   Started       13m                kubelet            Started container install-cni-binaries
  Normal   Pulled        13m                kubelet            Container image "quay.io/cilium/cilium:v1.14.8" already present on machine
  Normal   Created       13m                kubelet            Created container: cilium-agent
  Normal   Started       13m                kubelet            Started container cilium-agent
  Warning  Unhealthy     13m (x2 over 13m)  kubelet            Startup probe failed: Get "http://127.0.0.1:9879/healthz": dial tcp 127.0.0.1:9879: connect: connection refused
  Warning  NodeNotReady  10m                node-controller    Node is not ready
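All three probes above (Liveness, Readiness, Startup) are http-get checks against the agent's health endpoint at http://127.0.0.1:9879/healthz, and the two "Startup probe failed ... connection refused" warnings are the kubelet hitting that endpoint before the agent begins listening (the startup probe tolerates up to 105 failures at a 2s period before the container would be restarted, so two early failures are benign). A minimal sketch of what such a probe does, in Python; the function name `agent_healthy` is hypothetical, and only the URL and timeout values come from the output above:

```python
import urllib.request


def agent_healthy(url="http://127.0.0.1:9879/healthz", timeout=5):
    """Return True if `url` answers HTTP 200 within `timeout` seconds.

    Mirrors an http-get probe like the ones in the describe output: a plain
    GET where any connection error (e.g. "connection refused" while the
    agent is still starting) counts as a failed check.
    """
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:  # URLError/timeout/connection refused all subclass OSError
        return False
```

On a machine without a listener on port 9879 this simply returns False, which is exactly the state the kubelet observed during the pod's first seconds.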