Name:             kube-prometheus-stack-grafana-8579d75655-tdpgk
Namespace:        monitoring
Priority:         0
Service Account:  kube-prometheus-stack-grafana
Node:             instance/199.204.45.113
Start Time:       Fri, 17 Apr 2026 04:44:12 +0000
Labels:           app.kubernetes.io/instance=kube-prometheus-stack
                  app.kubernetes.io/name=grafana
                  pod-template-hash=8579d75655
Annotations:      checksum/config: a6d3966cf653edd626aebc891cc8311336aac20c2b7f0e5190acc84771b85f07
                  checksum/sc-dashboard-provider-config: e70bf6a851099d385178a76de9757bb0bef8299da6d8443602590e44f05fdf24
                  checksum/secret: d8aa0a055fe28ea01a56ddff03aa105b51a852bd4d0a47247e9045420209b88d
                  kubectl.kubernetes.io/default-container: grafana
Status:           Running
IP:               10.0.0.251
IPs:
  IP:  10.0.0.251
Controlled By:  ReplicaSet/kube-prometheus-stack-grafana-8579d75655
Containers:
  grafana-sc-dashboard:
    Container ID:    containerd://c98a34b3595752d139085a3df714df5617153f2d3b5f73155462312f7b7599c2
    Image:           harbor.atmosphere.dev/quay.io/kiwigrid/k8s-sidecar:1.26.1
    Image ID:        harbor.atmosphere.dev/quay.io/kiwigrid/k8s-sidecar@sha256:b8d5067137fec093cf48670dc3a1dbb38f9e734f3a6683015c2e89a45db5fd16
    Port:            <none>
    Host Port:       <none>
    SeccompProfile:  RuntimeDefault
    State:           Running
      Started:       Fri, 17 Apr 2026 04:44:13 +0000
    Ready:           True
    Restart Count:   0
    Environment:
      METHOD:        WATCH
      LABEL:         grafana_dashboard
      LABEL_VALUE:   1
      FOLDER:        /tmp/dashboards
      RESOURCE:      both
      NAMESPACE:     ALL
      REQ_USERNAME:  Optional: false
      REQ_PASSWORD:  Optional: false
      REQ_URL:       http://localhost:3000/api/admin/provisioning/dashboards/reload
      REQ_METHOD:    POST
    Mounts:
      /tmp/dashboards from sc-dashboard-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-x8lv8 (ro)
  grafana-sc-datasources:
    Container ID:    containerd://236326e08a6975457d8c8fdbbf3543ffe15155d21746e1e701999beef096695e
    Image:           harbor.atmosphere.dev/quay.io/kiwigrid/k8s-sidecar:1.26.1
    Image ID:        harbor.atmosphere.dev/quay.io/kiwigrid/k8s-sidecar@sha256:b8d5067137fec093cf48670dc3a1dbb38f9e734f3a6683015c2e89a45db5fd16
    Port:            <none>
    Host Port:       <none>
    SeccompProfile:  RuntimeDefault
    State:           Running
      Started:       Fri, 17 Apr 2026 04:44:13 +0000
    Ready:           True
    Restart Count:   0
    Environment:
      METHOD:        WATCH
      LABEL:         grafana_datasource
      LABEL_VALUE:   1
      FOLDER:        /etc/grafana/provisioning/datasources
      RESOURCE:      both
      REQ_USERNAME:  Optional: false
      REQ_PASSWORD:  Optional: false
      REQ_URL:       http://localhost:3000/api/admin/provisioning/datasources/reload
      REQ_METHOD:    POST
    Mounts:
      /etc/grafana/provisioning/datasources from sc-datasources-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-x8lv8 (ro)
  grafana:
    Container ID:    containerd://07ab486c350778893b5b106c07a1a2050903852ab86e654b3e56df7fb18535c4
    Image:           harbor.atmosphere.dev/docker.io/grafana/grafana:11.0.0
    Image ID:        harbor.atmosphere.dev/docker.io/grafana/grafana@sha256:0dc5a246ab16bb2c38a349fb588174e832b4c6c2db0981d0c3e6cd774ba66a54
    Ports:           3000/TCP, 9094/TCP, 9094/UDP
    Host Ports:      0/TCP, 0/TCP, 0/UDP
    SeccompProfile:  RuntimeDefault
    State:           Running
      Started:       Fri, 17 Apr 2026 04:44:13 +0000
    Ready:           True
    Restart Count:   0
    Liveness:        http-get http://:3000/api/health delay=60s timeout=30s period=10s #success=1 #failure=10
    Readiness:       http-get http://:3000/api/health delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:
      POD_IP:                       (v1:status.podIP)
      GF_SECURITY_ADMIN_USER:       Optional: false
      GF_SECURITY_ADMIN_PASSWORD:   Optional: false
      GF_PATHS_DATA:                /var/lib/grafana/
      GF_PATHS_LOGS:                /var/log/grafana
      GF_PATHS_PLUGINS:             /var/lib/grafana/plugins
      GF_PATHS_PROVISIONING:        /etc/grafana/provisioning
    Mounts:
      /etc/grafana/grafana.ini from config (rw,path="grafana.ini")
      /etc/grafana/provisioning/dashboards/sc-dashboardproviders.yaml from sc-dashboard-provider (rw,path="provider.yaml")
      /etc/grafana/provisioning/datasources from sc-datasources-volume (rw)
      /etc/secrets/auth_generic_oauth from auth-generic-oauth-secret-mount (ro)
      /tmp/dashboards from sc-dashboard-volume (rw)
      /var/lib/grafana from storage (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-x8lv8 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      kube-prometheus-stack-grafana
    Optional:  false
  storage:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  sc-dashboard-volume:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  sc-dashboard-provider:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      kube-prometheus-stack-grafana-config-dashboards
    Optional:  false
  sc-datasources-volume:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  auth-generic-oauth-secret-mount:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kube-prometheus-stack-grafana-client-secret
    Optional:    false
  kube-api-access-x8lv8:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:       BestEffort
Node-Selectors:  openstack-control-plane=enabled
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  33m                default-scheduler  Successfully assigned monitoring/kube-prometheus-stack-grafana-8579d75655-tdpgk to instance
  Normal   Pulled     33m                kubelet            Container image "harbor.atmosphere.dev/quay.io/kiwigrid/k8s-sidecar:1.26.1" already present on machine
  Normal   Created    33m                kubelet            Created container grafana-sc-dashboard
  Normal   Started    33m                kubelet            Started container grafana-sc-dashboard
  Normal   Pulled     33m                kubelet            Container image "harbor.atmosphere.dev/quay.io/kiwigrid/k8s-sidecar:1.26.1" already present on machine
  Normal   Created    33m                kubelet            Created container grafana-sc-datasources
  Normal   Started    33m                kubelet            Started container grafana-sc-datasources
  Normal   Pulled     33m                kubelet            Container image "harbor.atmosphere.dev/docker.io/grafana/grafana:11.0.0" already present on machine
  Normal   Created    33m                kubelet            Created container grafana
  Normal   Started    33m                kubelet            Started container grafana
  Warning  Unhealthy  32m (x2 over 32m)  kubelet            Readiness probe failed: Get "http://10.0.0.251:3000/api/health": dial tcp 10.0.0.251:3000: connect: connection refused
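The `Unhealthy` events above are consistent with the probe settings shown for the `grafana` container: the readiness probe starts immediately (`delay=0s`), so the first few checks land before Grafana is listening on port 3000, and two failures are well under the `#failure=3` threshold, which is why the pod still became Ready. A minimal sketch of the worst-case timing arithmetic (this models only `initial_delay + period × failure_threshold`; real kubelet timing also depends on probe timeouts and jitter):

```python
# Rough worst-case seconds from container start until a probe's action
# fires: unready for the readiness probe, restart for the liveness probe.
def max_seconds_before_action(initial_delay: int, period: int,
                              failure_threshold: int) -> int:
    return initial_delay + period * failure_threshold

# Readiness: delay=0s period=10s #failure=3 -> unready after ~30s of failures.
readiness = max_seconds_before_action(0, 10, 3)

# Liveness: delay=60s period=10s #failure=10 -> restart after ~160s worst case.
liveness = max_seconds_before_action(60, 10, 10)

print(readiness, liveness)  # 30 160
```

Two transient readiness failures in the first seconds after start are therefore expected noise, not a fault, as long as `Restart Count` stays at 0 and the pod reaches `Ready: True`, which it does here.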