Name:                 cilium-jh78n
Namespace:            kube-system
Priority:             2000001000
Priority Class Name:  system-node-critical
Service Account:      cilium
Node:                 controller-2/199.204.45.70
Start Time:           Fri, 17 Apr 2026 00:56:50 +0000
Labels:               app.kubernetes.io/name=cilium-agent
                      app.kubernetes.io/part-of=cilium
                      controller-revision-hash=f8b6c767c
                      k8s-app=cilium
                      pod-template-generation=1
Annotations:          container.apparmor.security.beta.kubernetes.io/apply-sysctl-overwrites: unconfined
                      container.apparmor.security.beta.kubernetes.io/cilium-agent: unconfined
                      container.apparmor.security.beta.kubernetes.io/clean-cilium-state: unconfined
                      container.apparmor.security.beta.kubernetes.io/mount-cgroup: unconfined
Status:               Running
IP:                   199.204.45.70
IPs:
  IP:           199.204.45.70
Controlled By:  DaemonSet/cilium
Init Containers:
  config:
    Container ID:  containerd://249e7f3ac381c6d953c19dafd9bef4963f08434b8c156c0aa4361d9b809e52d1
    Image:         quay.io/cilium/cilium:v1.14.8
    Image ID:      quay.io/cilium/cilium@sha256:7fca3ba4b04af066e8b086b5c1a52e30f52db01ffc642e7db0a439514aed3ada
    Port:          <none>
    Host Port:     <none>
    Command:
      cilium
      build-config
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Fri, 17 Apr 2026 00:57:05 +0000
      Finished:     Fri, 17 Apr 2026 00:57:09 +0000
    Ready:          True
    Restart Count:  0
    Environment:
      K8S_NODE_NAME:         (v1:spec.nodeName)
      CILIUM_K8S_NAMESPACE:  kube-system (v1:metadata.namespace)
    Mounts:
      /tmp from tmp (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vvtkf (ro)
  mount-cgroup:
    Container ID:  containerd://ead14a2c026021a36bdff4068305410b30dc3c7ace3c452b3bf72910af6f55e2
    Image:         quay.io/cilium/cilium:v1.14.8
    Image ID:      quay.io/cilium/cilium@sha256:7fca3ba4b04af066e8b086b5c1a52e30f52db01ffc642e7db0a439514aed3ada
    Port:          <none>
    Host Port:     <none>
    Command:
      sh
      -ec
      cp /usr/bin/cilium-mount /hostbin/cilium-mount;
      nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
      rm /hostbin/cilium-mount
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Fri, 17 Apr 2026 00:57:15 +0000
      Finished:     Fri, 17 Apr 2026 00:57:15 +0000
    Ready:          True
    Restart Count:  0
    Environment:
      CGROUP_ROOT:  /run/cilium/cgroupv2
      BIN_PATH:     /opt/cni/bin
    Mounts:
      /hostbin from cni-path (rw)
      /hostproc from hostproc (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vvtkf (ro)
  apply-sysctl-overwrites:
    Container ID:  containerd://d7b25d5c046138aafd683aa69e063b0c2d340c3c2fc53cba740e3cc5653af592
    Image:         quay.io/cilium/cilium:v1.14.8
    Image ID:      quay.io/cilium/cilium@sha256:7fca3ba4b04af066e8b086b5c1a52e30f52db01ffc642e7db0a439514aed3ada
    Port:          <none>
    Host Port:     <none>
    Command:
      sh
      -ec
      cp /usr/bin/cilium-sysctlfix /hostbin/cilium-sysctlfix;
      nsenter --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-sysctlfix";
      rm /hostbin/cilium-sysctlfix
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Fri, 17 Apr 2026 00:57:16 +0000
      Finished:     Fri, 17 Apr 2026 00:57:16 +0000
    Ready:          True
    Restart Count:  0
    Environment:
      BIN_PATH:  /opt/cni/bin
    Mounts:
      /hostbin from cni-path (rw)
      /hostproc from hostproc (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vvtkf (ro)
  mount-bpf-fs:
    Container ID:  containerd://af0dba5ea682751649068549985242fad1e9cb5b043aa17b3910b345e2cd9aa4
    Image:         quay.io/cilium/cilium:v1.14.8
    Image ID:      quay.io/cilium/cilium@sha256:7fca3ba4b04af066e8b086b5c1a52e30f52db01ffc642e7db0a439514aed3ada
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/bash
      -c
      --
    Args:
      mount | grep "/sys/fs/bpf type bpf" || mount -t bpf bpf /sys/fs/bpf
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Fri, 17 Apr 2026 00:57:17 +0000
      Finished:     Fri, 17 Apr 2026 00:57:17 +0000
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /sys/fs/bpf from bpf-maps (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vvtkf (ro)
  clean-cilium-state:
    Container ID:  containerd://fc9a7da7372194604e7b7087dd9ab91b359029632438a631cc226e94d4f4f63f
    Image:         quay.io/cilium/cilium:v1.14.8
    Image ID:      quay.io/cilium/cilium@sha256:7fca3ba4b04af066e8b086b5c1a52e30f52db01ffc642e7db0a439514aed3ada
    Port:          <none>
    Host Port:     <none>
    Command:
      /init-container.sh
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Fri, 17 Apr 2026 00:57:18 +0000
      Finished:     Fri, 17 Apr 2026 00:57:18 +0000
    Ready:          True
    Restart Count:  0
    Environment:
      CILIUM_ALL_STATE:  Optional: true
      CILIUM_BPF_STATE:  Optional: true
    Mounts:
      /run/cilium/cgroupv2 from cilium-cgroup (rw)
      /sys/fs/bpf from bpf-maps (rw)
      /var/run/cilium from cilium-run (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vvtkf (ro)
  install-cni-binaries:
    Container ID:  containerd://da4cdab88a66c96cf512c13ef03190a775da04694cf1e19cf4777145418050d8
    Image:         quay.io/cilium/cilium:v1.14.8
    Image ID:      quay.io/cilium/cilium@sha256:7fca3ba4b04af066e8b086b5c1a52e30f52db01ffc642e7db0a439514aed3ada
    Port:          <none>
    Host Port:     <none>
    Command:
      /install-plugin.sh
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Fri, 17 Apr 2026 00:57:19 +0000
      Finished:     Fri, 17 Apr 2026 00:57:19 +0000
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:        100m
      memory:     10Mi
    Environment:  <none>
    Mounts:
      /host/opt/cni/bin from cni-path (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vvtkf (ro)
Containers:
  cilium-agent:
    Container ID:  containerd://8aa0bd490c804b9c586224a78fee08065ce7deff2447e53ac2b1160cd0ca2774
    Image:         quay.io/cilium/cilium:v1.14.8
    Image ID:      quay.io/cilium/cilium@sha256:7fca3ba4b04af066e8b086b5c1a52e30f52db01ffc642e7db0a439514aed3ada
    Port:          <none>
    Host Port:     <none>
    Command:
      cilium-agent
    Args:
      --config-dir=/tmp/cilium/config-map
    State:          Running
      Started:      Fri, 17 Apr 2026 00:57:20 +0000
    Ready:          True
    Restart Count:  0
    Liveness:       http-get http://127.0.0.1:9879/healthz delay=0s timeout=5s period=30s #success=1 #failure=10
    Readiness:      http-get http://127.0.0.1:9879/healthz delay=0s timeout=5s period=30s #success=1 #failure=3
    Startup:        http-get http://127.0.0.1:9879/healthz delay=0s timeout=1s period=2s #success=1 #failure=105
    Environment:
      K8S_NODE_NAME:              (v1:spec.nodeName)
      CILIUM_K8S_NAMESPACE:       kube-system (v1:metadata.namespace)
      CILIUM_CLUSTERMESH_CONFIG:  /var/lib/cilium/clustermesh/
    Mounts:
      /host/etc/cni/net.d from etc-cni-netd (rw)
      /host/proc/sys/kernel from host-proc-sys-kernel (rw)
      /host/proc/sys/net from host-proc-sys-net (rw)
      /lib/modules from lib-modules (ro)
      /run/xtables.lock from xtables-lock (rw)
      /sys/fs/bpf from bpf-maps (rw)
      /tmp from tmp (rw)
      /var/lib/cilium/clustermesh from clustermesh-secrets (ro)
      /var/run/cilium from cilium-run (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vvtkf (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       True
  ContainersReady             True
  PodScheduled                True
Volumes:
  tmp:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:
  cilium-run:
    Type:          HostPath (bare host directory volume)
    Path:          /var/run/cilium
    HostPathType:  DirectoryOrCreate
  bpf-maps:
    Type:          HostPath (bare host directory volume)
    Path:          /sys/fs/bpf
    HostPathType:  DirectoryOrCreate
  hostproc:
    Type:          HostPath (bare host directory volume)
    Path:          /proc
    HostPathType:  Directory
  cilium-cgroup:
    Type:          HostPath (bare host directory volume)
    Path:          /run/cilium/cgroupv2
    HostPathType:  DirectoryOrCreate
  cni-path:
    Type:          HostPath (bare host directory volume)
    Path:          /opt/cni/bin
    HostPathType:  DirectoryOrCreate
  etc-cni-netd:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/cni/net.d
    HostPathType:  DirectoryOrCreate
  lib-modules:
    Type:          HostPath (bare host directory volume)
    Path:          /lib/modules
    HostPathType:
  xtables-lock:
    Type:          HostPath (bare host directory volume)
    Path:          /run/xtables.lock
    HostPathType:  FileOrCreate
  clustermesh-secrets:
    Type:        Projected (a volume that contains injected data from multiple sources)
    SecretName:  cilium-clustermesh
    Optional:    true
    SecretName:  clustermesh-apiserver-remote-cert
    Optional:    true
  host-proc-sys-net:
    Type:          HostPath (bare host directory volume)
    Path:          /proc/sys/net
    HostPathType:  Directory
  host-proc-sys-kernel:
    Type:          HostPath (bare host directory volume)
    Path:          /proc/sys/kernel
    HostPathType:  Directory
  kube-api-access-vvtkf:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 op=Exists
                             node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                             node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/network-unavailable:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists
                             node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                             node.kubernetes.io/unreachable:NoExecute op=Exists
                             node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
  Type     Reason     Age                From               Message
  ----     ------     ---                ----               -------
  Normal   Scheduled  14m                default-scheduler  Successfully assigned kube-system/cilium-jh78n to controller-2
  Normal   Pulling    14m                kubelet            Pulling image "quay.io/cilium/cilium:v1.14.8"
  Normal   Pulled     14m                kubelet            Successfully pulled image "quay.io/cilium/cilium:v1.14.8" in 11.052s (13.081s including waiting). Image size: 201882379 bytes.
  Normal   Created    14m                kubelet            Created container: config
  Normal   Started    14m                kubelet            Started container config
  Normal   Pulled     13m                kubelet            Container image "quay.io/cilium/cilium:v1.14.8" already present on machine
  Normal   Created    13m                kubelet            Created container: mount-cgroup
  Normal   Started    13m                kubelet            Started container mount-cgroup
  Normal   Pulled     13m                kubelet            Container image "quay.io/cilium/cilium:v1.14.8" already present on machine
  Normal   Created    13m                kubelet            Created container: apply-sysctl-overwrites
  Normal   Started    13m                kubelet            Started container apply-sysctl-overwrites
  Normal   Pulled     13m                kubelet            Container image "quay.io/cilium/cilium:v1.14.8" already present on machine
  Normal   Created    13m                kubelet            Created container: mount-bpf-fs
  Normal   Started    13m                kubelet            Started container mount-bpf-fs
  Normal   Pulled     13m                kubelet            Container image "quay.io/cilium/cilium:v1.14.8" already present on machine
  Normal   Created    13m                kubelet            Created container: clean-cilium-state
  Normal   Started    13m                kubelet            Started container clean-cilium-state
  Normal   Pulled     13m                kubelet            Container image "quay.io/cilium/cilium:v1.14.8" already present on machine
  Normal   Created    13m                kubelet            Created container: install-cni-binaries
  Normal   Started    13m                kubelet            Started container install-cni-binaries
  Normal   Pulled     13m                kubelet            Container image "quay.io/cilium/cilium:v1.14.8" already present on machine
  Normal   Created    13m                kubelet            Created container: cilium-agent
  Normal   Started    13m                kubelet            Started container cilium-agent
  Warning  Unhealthy  13m (x2 over 13m)  kubelet            Startup probe failed: Get "http://127.0.0.1:9879/healthz": dial tcp 127.0.0.1:9879: connect: connection refused