Name:             rook-ceph-rgw-ceph-a-67fd8975c6-9qh88
Namespace:        openstack
Priority:         0
Service Account:  rook-ceph-rgw
Node:             instance/199.204.45.115
Start Time:       Thu, 23 Apr 2026 13:54:09 +0000
Labels:           app=rook-ceph-rgw
                  app.kubernetes.io/component=cephobjectstores.ceph.rook.io
                  app.kubernetes.io/created-by=rook-ceph-operator
                  app.kubernetes.io/instance=ceph
                  app.kubernetes.io/managed-by=rook-ceph-operator
                  app.kubernetes.io/name=ceph-rgw
                  app.kubernetes.io/part-of=ceph
                  ceph_daemon_id=ceph
                  ceph_daemon_type=rgw
                  pod-template-hash=67fd8975c6
                  rgw=ceph
                  rook.io/operator-namespace=rook-ceph
                  rook_cluster=openstack
                  rook_object_store=ceph
Annotations:      <none>
Status:           Running
IP:               10.0.0.178
IPs:
  IP:  10.0.0.178
Controlled By:  ReplicaSet/rook-ceph-rgw-ceph-a-67fd8975c6
Init Containers:
  chown-container-data-dir:
    Container ID:  containerd://4b09c5808455014fb405b015d10f39fdd68f4993819f8d34efbeb4975ff8ac9b
    Image:         harbor.atmosphere.dev/quay.io/ceph/ceph:v18.2.1
    Image ID:      harbor.atmosphere.dev/quay.io/ceph/ceph@sha256:9f35728f6070a596500c0804814a12ab6b98e05067316dc64876fb4b28d04af3
    Port:          <none>
    Host Port:     <none>
    Command:
      chown
    Args:
      --verbose
      --recursive
      ceph:ceph
      /var/log/ceph
      /var/lib/ceph/crash
      /var/lib/ceph/rgw/ceph-ceph
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Thu, 23 Apr 2026 13:54:10 +0000
      Finished:     Thu, 23 Apr 2026 13:54:10 +0000
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /etc/ceph from rook-config-override (ro)
      /etc/ceph/keyring-store/ from rook-ceph-rgw-ceph-a-keyring (ro)
      /var/lib/ceph/crash from rook-ceph-crash (rw)
      /var/lib/ceph/rgw/ceph-ceph from ceph-daemon-data (rw)
      /var/log/ceph from rook-ceph-log (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5rv9z (ro)
Containers:
  rgw:
    Container ID:  containerd://0bf352d2149eed7d3e37110d021a4f698f81b19138dfd8659a0203a83e7f62dd
    Image:         harbor.atmosphere.dev/quay.io/ceph/ceph:v18.2.1
    Image ID:      harbor.atmosphere.dev/quay.io/ceph/ceph@sha256:9f35728f6070a596500c0804814a12ab6b98e05067316dc64876fb4b28d04af3
    Port:          <none>
    Host Port:     <none>
    Command:
      radosgw
    Args:
      --fsid=4837cbf8-4f90-4300-b3f6-726c9b9f89b4
      --keyring=/etc/ceph/keyring-store/keyring
      --log-to-stderr=true
      --err-to-stderr=true
      --mon-cluster-log-to-stderr=true
      --log-stderr-prefix=debug
      --default-log-to-file=false
      --default-mon-cluster-log-to-file=false
      --mon-host=$(ROOK_CEPH_MON_HOST)
      --mon-initial-members=$(ROOK_CEPH_MON_INITIAL_MEMBERS)
      --id=rgw.ceph.a
      --setuser=ceph
      --setgroup=ceph
      --foreground
      --rgw-frontends=beast port=8080
      --host=$(POD_NAME)
      --rgw-mime-types-file=/etc/ceph/rgw/mime.types
      --rgw-realm=ceph
      --rgw-zonegroup=ceph
      --rgw-zone=ceph
    State:          Running
      Started:      Thu, 23 Apr 2026 13:54:11 +0000
    Ready:          True
    Restart Count:  0
    Liveness:       tcp-socket :8080 delay=10s timeout=1s period=10s #success=1 #failure=3
    Readiness:      http-get http://:8080/swift/healthcheck delay=10s timeout=1s period=10s #success=1 #failure=3
    Startup:        tcp-socket :8080 delay=10s timeout=1s period=10s #success=1 #failure=18
    Environment:
      CONTAINER_IMAGE:                harbor.atmosphere.dev/quay.io/ceph/ceph:v18.2.1
      POD_NAME:                       rook-ceph-rgw-ceph-a-67fd8975c6-9qh88 (v1:metadata.name)
      POD_NAMESPACE:                  openstack (v1:metadata.namespace)
      NODE_NAME:                       (v1:spec.nodeName)
      POD_MEMORY_LIMIT:               node allocatable (limits.memory)
      POD_MEMORY_REQUEST:             0 (requests.memory)
      POD_CPU_LIMIT:                  node allocatable (limits.cpu)
      POD_CPU_REQUEST:                0 (requests.cpu)
      CEPH_USE_RANDOM_NONCE:          true
      ROOK_CEPH_MON_HOST:             Optional: false
      ROOK_CEPH_MON_INITIAL_MEMBERS:  Optional: false
    Mounts:
      /etc/ceph from rook-config-override (ro)
      /etc/ceph/keyring-store/ from rook-ceph-rgw-ceph-a-keyring (ro)
      /etc/ceph/rgw from rook-ceph-rgw-ceph-mime-types (ro)
      /var/lib/ceph/crash from rook-ceph-crash (rw)
      /var/lib/ceph/rgw/ceph-ceph from ceph-daemon-data (rw)
      /var/log/ceph from rook-ceph-log (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5rv9z (ro)
  log-collector:
    Container ID:  containerd://d8366f29f442147c9983a936db00cf60b13968f5c971af4f6a5c4ecc530f48be
    Image:         harbor.atmosphere.dev/quay.io/ceph/ceph:v18.2.1
    Image ID:      harbor.atmosphere.dev/quay.io/ceph/ceph@sha256:9f35728f6070a596500c0804814a12ab6b98e05067316dc64876fb4b28d04af3
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/bash
      -x
      -e
      -m
      -c
      CEPH_CLIENT_ID=ceph-client.rgw.ceph.a
      PERIODICITY=daily
      LOG_ROTATE_CEPH_FILE=/etc/logrotate.d/ceph
      LOG_MAX_SIZE=500M

      # edit the logrotate file to only rotate a specific daemon log
      # otherwise we will logrotate log files without reloading certain daemons
      # this might happen when multiple daemons run on the same machine
      sed -i "s|*.log|$CEPH_CLIENT_ID.log|" "$LOG_ROTATE_CEPH_FILE"

      # replace default daily with given user input
      sed --in-place "s/daily/$PERIODICITY/g" "$LOG_ROTATE_CEPH_FILE"

      if [ "$LOG_MAX_SIZE" != "0" ]; then
          # adding maxsize $LOG_MAX_SIZE at the 4th line of the logrotate config file with 4 spaces to maintain indentation
          sed --in-place "4i \ \ \ \ maxsize $LOG_MAX_SIZE" "$LOG_ROTATE_CEPH_FILE"
      fi

      while true; do
          # we don't force the logrotate but we let the logrotate binary handle the rotation based on user's input for periodicity and size
          logrotate --verbose "$LOG_ROTATE_CEPH_FILE"
          sleep 15m
      done
    State:          Running
      Started:      Thu, 23 Apr 2026 13:54:12 +0000
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     500m
      memory:  1Gi
    Requests:
      cpu:     100m
      memory:  100Mi
    Environment:  <none>
    Mounts:
      /etc/ceph from rook-config-override (ro)
      /var/lib/ceph/crash from rook-ceph-crash (rw)
      /var/log/ceph from rook-ceph-log (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5rv9z (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  rook-config-override:
    Type:               Projected (a volume that contains injected data from multiple sources)
    ConfigMapName:      rook-config-override
    ConfigMapOptional:
  rook-ceph-rgw-ceph-a-keyring:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  rook-ceph-rgw-ceph-a-keyring
    Optional:    false
  rook-ceph-log:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/rook/openstack/log
    HostPathType:
  rook-ceph-crash:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/rook/openstack/crash
    HostPathType:
  ceph-daemon-data:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:
  rook-ceph-rgw-ceph-mime-types:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      rook-ceph-rgw-ceph-mime-types
    Optional:  false
  kube-api-access-5rv9z:
    Type:                     Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:   3607
    ConfigMapName:            kube-root-ca.crt
    ConfigMapOptional:
    DownwardAPI:              true
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 5s
Events:
  Type    Reason     Age  From               Message
  ----    ------     ---  ----               -------
  Normal  Scheduled  60m  default-scheduler  Successfully assigned openstack/rook-ceph-rgw-ceph-a-67fd8975c6-9qh88 to instance
  Normal  Pulled     60m  kubelet            Container image "harbor.atmosphere.dev/quay.io/ceph/ceph:v18.2.1" already present on machine
  Normal  Created    60m  kubelet            Created container chown-container-data-dir
  Normal  Started    60m  kubelet            Started container chown-container-data-dir
  Normal  Pulled     60m  kubelet            Container image "harbor.atmosphere.dev/quay.io/ceph/ceph:v18.2.1" already present on machine
  Normal  Created    60m  kubelet            Created container rgw
  Normal  Started    60m  kubelet            Started container rgw
  Normal  Pulled     60m  kubelet            Container image "harbor.atmosphere.dev/quay.io/ceph/ceph:v18.2.1" already present on machine
  Normal  Created    60m  kubelet            Created container log-collector
  Normal  Started    60m  kubelet            Started container log-collector
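The log-collector sidecar's inline script narrows the stock Ceph logrotate rule to this one daemon's log, rewrites the rotation periodicity, and caps the file size before looping on logrotate every 15 minutes. A minimal sketch reproducing those sed edits on a throwaway copy of the config — the sample rule below is an assumption standing in for the real /etc/logrotate.d/ceph shipped in the Ceph image, with the variable values taken from the pod spec above:

```shell
# Reproduce the log-collector's sed edits on a scratch file (values from the pod spec).
CEPH_CLIENT_ID=ceph-client.rgw.ceph.a
PERIODICITY=daily
LOG_MAX_SIZE=500M
LOG_ROTATE_CEPH_FILE=/tmp/ceph-logrotate-demo

# Hypothetical stand-in for the logrotate rule in the Ceph image.
cat > "$LOG_ROTATE_CEPH_FILE" <<'EOF'
/var/log/ceph/*.log {
    rotate 7
    compress
    daily
}
EOF

# rotate only this daemon's log, not every *.log on the node
sed -i "s|*.log|$CEPH_CLIENT_ID.log|" "$LOG_ROTATE_CEPH_FILE"
# swap the default "daily" for the requested periodicity
sed -i "s/daily/$PERIODICITY/g" "$LOG_ROTATE_CEPH_FILE"
# cap rotation size: insert a maxsize line at line 4, the escaped
# backslash-spaces preserving four spaces of indentation
if [ "$LOG_MAX_SIZE" != "0" ]; then
    sed -i "4i \ \ \ \ maxsize $LOG_MAX_SIZE" "$LOG_ROTATE_CEPH_FILE"
fi

cat "$LOG_ROTATE_CEPH_FILE"
```

After the edits, the rule matches only ceph-client.rgw.ceph.a.log and carries a `maxsize 500M` directive, so an unrelated daemon's log on the same node is never rotated out from under it.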
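All three probes target the beast frontend on port 8080: the startup and liveness probes are bare TCP connects, and the readiness probe is an HTTP GET on /swift/healthcheck. A sketch of the equivalent manual checks, run here against a throwaway local stub (python3's http.server standing in for radosgw, on port 18080 to avoid a clash — against the live pod you would instead hit http://10.0.0.178:8080/swift/healthcheck from inside the cluster network):

```shell
# Local stub standing in for the beast frontend; the kubelet runs the real
# probes inside the pod's network namespace.
python3 -m http.server 18080 --bind 127.0.0.1 >/dev/null 2>&1 &
STUB_PID=$!
sleep 1

# liveness/startup equivalent: a plain TCP connect to the port
TCP_OK=no
timeout 2 bash -c 'exec 3<>/dev/tcp/127.0.0.1/18080' && TCP_OK=yes

# readiness equivalent: an HTTP GET; the stub answers 200 with a directory
# listing, while radosgw would answer 200 on /swift/healthcheck
HTTP_CODE=$(curl -s -o /dev/null -w '%{http_code}' http://127.0.0.1:18080/)

kill "$STUB_PID"
echo "tcp=$TCP_OK http=$HTTP_CODE"
```

Note the asymmetric failure thresholds above: the startup probe tolerates 18 failures (up to ~3 minutes for the daemon to bind), after which liveness takes over with the stricter 3-failure budget.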