2026-04-03 02:47:36.461003 I | rookcmd: starting Rook v1.10.10 with arguments '/usr/local/bin/rook ceph operator'
2026-04-03 02:47:36.461231 I | rookcmd: flag values: --enable-machine-disruption-budget=false, --help=false, --kubeconfig=, --log-level=INFO, --operator-image=, --service-account=
2026-04-03 02:47:36.461237 I | cephcmd: starting Rook-Ceph operator
2026-04-03 02:47:36.888962 I | cephcmd: base ceph version inside the rook operator image is "ceph version 17.2.5 (98318ae89f1a893a6ded3a640405cdbb33e08757) quincy (stable)"
2026-04-03 02:47:36.920168 I | op-k8sutil: ROOK_CURRENT_NAMESPACE_ONLY="false" (env var)
2026-04-03 02:47:36.920197 I | operator: watching all namespaces for Ceph CRs
2026-04-03 02:47:36.920304 I | operator: setting up schemes
2026-04-03 02:47:36.926421 I | operator: setting up the controller-runtime manager
2026-04-03 02:47:37.740422 I | op-k8sutil: ROOK_DISABLE_ADMISSION_CONTROLLER="true" (configmap)
2026-04-03 02:47:37.740444 I | operator: delete webhook resources since webhook is disabled
2026-04-03 02:47:37.740450 I | operator: deleting validating webhook rook-ceph-webhook
2026-04-03 02:47:37.743718 I | operator: deleting webhook cert manager Certificate rook-admission-controller-cert
2026-04-03 02:47:37.746241 I | operator: deleting webhook cert manager Issuer %sselfsigned-issuer
2026-04-03 02:47:37.748958 I | operator: deleting validating webhook service %srook-ceph-admission-controller
2026-04-03 02:47:37.751073 I | ceph-cluster-controller: successfully started
2026-04-03 02:47:37.751200 I | ceph-cluster-controller: enabling hotplug orchestration
2026-04-03 02:47:37.751247 I | ceph-crashcollector-controller: successfully started
2026-04-03 02:47:37.751313 I | ceph-block-pool-controller: successfully started
2026-04-03 02:47:37.751350 I | ceph-object-store-user-controller: successfully started
2026-04-03 02:47:37.751407 I | ceph-object-realm-controller: successfully started
2026-04-03 02:47:37.751436 I | ceph-object-zonegroup-controller: successfully started
2026-04-03 02:47:37.751492 I | ceph-object-zone-controller: successfully started
2026-04-03 02:47:37.751720 I | ceph-object-controller: successfully started
2026-04-03 02:47:37.751793 I | ceph-file-controller: successfully started
2026-04-03 02:47:37.751866 I | ceph-nfs-controller: successfully started
2026-04-03 02:47:37.751926 I | ceph-rbd-mirror-controller: successfully started
2026-04-03 02:47:37.751960 I | ceph-client-controller: successfully started
2026-04-03 02:47:37.752011 I | ceph-filesystem-mirror-controller: successfully started
2026-04-03 02:47:37.752050 I | operator: rook-ceph-operator-config-controller successfully started
2026-04-03 02:47:37.755526 I | op-k8sutil: ROOK_DISABLE_ADMISSION_CONTROLLER="true" (configmap)
2026-04-03 02:47:37.755586 I | ceph-csi: rook-ceph-operator-csi-controller successfully started
2026-04-03 02:47:37.755640 I | op-bucket-prov: rook-ceph-operator-bucket-controller successfully started
2026-04-03 02:47:37.755668 I | ceph-bucket-topic: successfully started
2026-04-03 02:47:37.755687 I | ceph-bucket-notification: successfully started
2026-04-03 02:47:37.755735 I | ceph-bucket-notification: successfully started
2026-04-03 02:47:37.755752 I | ceph-fs-subvolumegroup-controller: successfully started
2026-04-03 02:47:37.755792 I | blockpool-rados-namespace-controller: successfully started
2026-04-03 02:47:37.757347 I | operator: starting the controller-runtime manager
2026-04-03 02:47:38.463731 I | op-k8sutil: ROOK_CEPH_COMMANDS_TIMEOUT_SECONDS="15" (configmap)
2026-04-03 02:47:38.463764 I | op-k8sutil: ROOK_LOG_LEVEL="INFO" (configmap)
2026-04-03 02:47:38.463782 I | op-k8sutil: ROOK_ENABLE_DISCOVERY_DAEMON="false" (env var)
2026-04-03 02:47:38.466439 I | ceph-csi: CSI Ceph RBD driver disabled
2026-04-03 02:47:38.466456 I | op-k8sutil: removing daemonset csi-rbdplugin if it exists
2026-04-03 02:47:38.467862 I | op-k8sutil: ROOK_CEPH_ALLOW_LOOP_DEVICES="false" (configmap)
2026-04-03 02:47:38.467893 I | operator: rook-ceph-operator-config-controller done reconciling
2026-04-03 02:47:38.469495 I | op-k8sutil: removing deployment csi-rbdplugin-provisioner if it exists
2026-04-03 02:47:38.477497 I | ceph-csi: successfully removed CSI Ceph RBD driver
2026-04-03 02:47:38.477588 I | ceph-csi: CSI CephFS driver disabled
2026-04-03 02:47:38.477606 I | op-k8sutil: removing daemonset csi-cephfsplugin if it exists
2026-04-03 02:47:38.479638 I | op-k8sutil: removing deployment csi-cephfsplugin-provisioner if it exists
2026-04-03 02:47:38.485196 I | ceph-csi: successfully removed CSI CephFS driver
2026-04-03 02:47:38.485218 I | ceph-csi: CSI NFS driver disabled
2026-04-03 02:47:38.485224 I | op-k8sutil: removing daemonset csi-nfsplugin if it exists
2026-04-03 02:47:38.486808 I | op-k8sutil: removing deployment csi-nfsplugin-provisioner if it exists
2026-04-03 02:47:38.494092 I | ceph-csi: successfully removed CSI NFS driver
2026-04-03 02:48:00.887082 I | ceph-spec: adding finalizer "cephcluster.ceph.rook.io" on "ceph"
2026-04-03 02:48:00.889791 I | op-k8sutil: CSI_ENABLE_HOST_NETWORK="true" (configmap)
2026-04-03 02:48:00.891087 I | clusterdisruption-controller: deleted all legacy node drain canary pods
2026-04-03 02:48:00.954666 I | ceph-spec: adding finalizer "cephobjectstore.ceph.rook.io" on "ceph"
2026-04-03 02:48:01.105160 I | ceph-cluster-controller: reconciling ceph cluster in namespace "openstack"
2026-04-03 02:48:01.173624 I | ceph-csi: successfully created csi config map "rook-ceph-csi-config"
2026-04-03 02:48:01.173874 I | ceph-spec: parsing mon endpoints: instance=10.96.240.200:6789
2026-04-03 02:48:01.173924 I | ceph-spec: updating obsolete maxMonID 0 to actual value 76846025234
2026-04-03 02:48:01.174029 I | op-k8sutil: ROOK_OBC_WATCH_OPERATOR_NAMESPACE="true" (configmap)
2026-04-03 02:48:01.174044 I | op-bucket-prov: ceph bucket provisioner launched watching for provisioner "openstack.ceph.rook.io/bucket"
2026-04-03 02:48:01.174532 I | op-bucket-prov: successfully reconciled bucket provisioner
I0403 02:48:01.174631 1 manager.go:135] objectbucket.io/provisioner-manager "msg"="starting provisioner" "name"="openstack.ceph.rook.io/bucket"
2026-04-03 02:48:01.184323 I | ceph-spec: parsing mon endpoints: instance=10.96.240.200:6789
2026-04-03 02:48:01.184351 I | ceph-spec: updating obsolete maxMonID 0 to actual value 76846025234
2026-04-03 02:48:01.184381 I | ceph-spec: found the cluster info to connect to the external cluster. will use "client.admin" to check health and monitor status. mons=map[instance:0xc001f7f620]
2026-04-03 02:48:01.184441 I | ceph-spec: detecting the ceph image version for image harbor.atmosphere.dev/quay.io/ceph/ceph:v18.2.1...
2026-04-03 02:48:01.263092 I | op-k8sutil: ROOK_CSI_ENABLE_RBD="false" (configmap)
2026-04-03 02:48:01.263201 I | op-k8sutil: ROOK_CSI_ENABLE_CEPHFS="false" (configmap)
2026-04-03 02:48:01.263231 I | op-k8sutil: ROOK_CSI_ENABLE_NFS="false" (configmap)
2026-04-03 02:48:01.263261 I | op-k8sutil: ROOK_CSI_ALLOW_UNSUPPORTED_VERSION="false" (default)
2026-04-03 02:48:01.263285 I | op-k8sutil: ROOK_CSI_ENABLE_GRPC_METRICS="false" (configmap)
2026-04-03 02:48:01.263316 I | op-k8sutil: CSI_ENABLE_HOST_NETWORK="true" (configmap)
2026-04-03 02:48:01.263345 I | op-k8sutil: CSI_FORCE_CEPHFS_KERNEL_CLIENT="true" (configmap)
2026-04-03 02:48:01.263374 I | op-k8sutil: CSI_GRPC_TIMEOUT_SECONDS="150" (configmap)
2026-04-03 02:48:01.263399 I | op-k8sutil: CSI_CEPHFS_GRPC_METRICS_PORT="9091" (default)
2026-04-03 02:48:01.263430 I | op-k8sutil: CSI_CEPHFS_GRPC_METRICS_PORT="9091" (default)
2026-04-03 02:48:01.263458 I | op-k8sutil: CSI_CEPHFS_LIVENESS_METRICS_PORT="9081" (default)
2026-04-03 02:48:01.263488 I | op-k8sutil: CSI_CEPHFS_LIVENESS_METRICS_PORT="9081" (default)
2026-04-03 02:48:01.263517 I | op-k8sutil: CSI_RBD_GRPC_METRICS_PORT="9090" (default)
2026-04-03 02:48:01.263543 I | op-k8sutil: CSI_RBD_GRPC_METRICS_PORT="9090" (default)
2026-04-03 02:48:01.263570 I | op-k8sutil: CSIADDONS_PORT="9070" (default)
2026-04-03 02:48:01.263599 I | op-k8sutil: CSIADDONS_PORT="9070" (default)
2026-04-03 02:48:01.263623 I | op-k8sutil: CSI_RBD_LIVENESS_METRICS_PORT="9080" (default)
2026-04-03 02:48:01.263645 I | op-k8sutil: CSI_RBD_LIVENESS_METRICS_PORT="9080" (default)
2026-04-03 02:48:01.263672 I | op-k8sutil: CSI_ENABLE_LIVENESS="false" (default)
2026-04-03 02:48:01.263695 I | op-k8sutil: CSI_PLUGIN_PRIORITY_CLASSNAME="system-node-critical" (configmap)
2026-04-03 02:48:01.263723 I | op-k8sutil: CSI_PROVISIONER_PRIORITY_CLASSNAME="system-cluster-critical" (configmap)
2026-04-03 02:48:01.263756 I | op-k8sutil: CSI_ENABLE_OMAP_GENERATOR="false" (configmap)
2026-04-03 02:48:01.263784 I | op-k8sutil: CSI_ENABLE_RBD_SNAPSHOTTER="true" (configmap)
2026-04-03 02:48:01.263813 I | op-k8sutil: CSI_ENABLE_CEPHFS_SNAPSHOTTER="true" (configmap)
2026-04-03 02:48:01.263841 I | op-k8sutil: CSI_ENABLE_NFS_SNAPSHOTTER="true" (configmap)
2026-04-03 02:48:01.263865 I | op-k8sutil: CSI_ENABLE_CSIADDONS="false" (configmap)
2026-04-03 02:48:01.263888 I | op-k8sutil: CSI_ENABLE_TOPOLOGY="false" (configmap)
2026-04-03 02:48:01.263910 I | op-k8sutil: CSI_ENABLE_ENCRYPTION="false" (configmap)
2026-04-03 02:48:01.263940 I | op-k8sutil: CSI_ENABLE_METADATA="false" (configmap)
2026-04-03 02:48:01.263964 I | op-k8sutil: CSI_CEPHFS_PLUGIN_UPDATE_STRATEGY="RollingUpdate" (default)
2026-04-03 02:48:01.263999 I | op-k8sutil: CSI_NFS_PLUGIN_UPDATE_STRATEGY="RollingUpdate" (default)
2026-04-03 02:48:01.264027 I | op-k8sutil: CSI_RBD_PLUGIN_UPDATE_STRATEGY="RollingUpdate" (default)
2026-04-03 02:48:01.264076 I | op-k8sutil: CSI_RBD_PLUGIN_UPDATE_STRATEGY_MAX_UNAVAILABLE="1" (default)
2026-04-03 02:48:01.264108 I | op-k8sutil: CSI_PLUGIN_ENABLE_SELINUX_HOST_MOUNT="false" (configmap)
2026-04-03 02:48:01.264137 I | ceph-csi: Kubernetes version is 1.28
2026-04-03 02:48:01.264175 I | op-k8sutil: ROOK_CSI_RESIZER_IMAGE="registry.k8s.io/sig-storage/csi-resizer:v1.7.0" (default)
2026-04-03 02:48:01.264230 I | op-k8sutil: CSI_LOG_LEVEL="" (default)
2026-04-03 02:48:01.264262 I | op-k8sutil: CSI_SIDECAR_LOG_LEVEL="" (default)
2026-04-03 02:48:01.363565 I | op-k8sutil: ROOK_CSI_CEPH_IMAGE="quay.io/cephcsi/cephcsi:v3.7.2" (default)
2026-04-03 02:48:01.363675 I | op-k8sutil: ROOK_CSI_REGISTRAR_IMAGE="registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.7.0" (default)
2026-04-03 02:48:01.363701 I | op-k8sutil: ROOK_CSI_PROVISIONER_IMAGE="registry.k8s.io/sig-storage/csi-provisioner:v3.4.0" (default)
2026-04-03 02:48:01.363721 I | op-k8sutil: ROOK_CSI_ATTACHER_IMAGE="registry.k8s.io/sig-storage/csi-attacher:v4.1.0" (default)
2026-04-03 02:48:01.363764 I | op-k8sutil: ROOK_CSI_SNAPSHOTTER_IMAGE="registry.k8s.io/sig-storage/csi-snapshotter:v6.2.1" (default)
2026-04-03 02:48:01.363788 I | op-k8sutil: ROOK_CSI_KUBELET_DIR_PATH="/var/lib/kubelet" (default)
2026-04-03 02:48:01.363807 I | op-k8sutil: ROOK_CSIADDONS_IMAGE="quay.io/csiaddons/k8s-sidecar:v0.5.0" (configmap)
2026-04-03 02:48:01.363870 I | op-k8sutil: CSI_TOPOLOGY_DOMAIN_LABELS="" (default)
2026-04-03 02:48:01.363901 I | op-k8sutil: ROOK_CSI_CEPHFS_POD_LABELS="" (default)
2026-04-03 02:48:01.363921 I | op-k8sutil: ROOK_CSI_NFS_POD_LABELS="" (default)
2026-04-03 02:48:01.363960 I | op-k8sutil: ROOK_CSI_RBD_POD_LABELS="" (default)
2026-04-03 02:48:01.363986 I | op-k8sutil: CSI_CLUSTER_NAME="" (default)
2026-04-03 02:48:01.364006 I | op-k8sutil: ROOK_CSI_IMAGE_PULL_POLICY="IfNotPresent" (configmap)
2026-04-03 02:48:01.364040 I | ceph-csi: skipping csi version check, since unsupported versions are allowed or csi is disabled
2026-04-03 02:48:01.364063 I | ceph-csi: CSI Ceph RBD driver disabled
2026-04-03 02:48:01.364090 I | op-k8sutil: removing daemonset csi-rbdplugin if it exists
2026-04-03 02:48:01.461080 I | op-k8sutil: removing deployment csi-rbdplugin-provisioner if it exists
2026-04-03 02:48:01.576164 I | ceph-csi: successfully removed CSI Ceph RBD driver
2026-04-03 02:48:01.576185 I | ceph-csi: CSI CephFS driver disabled
2026-04-03 02:48:01.576191 I | op-k8sutil: removing daemonset csi-cephfsplugin if it exists
2026-04-03 02:48:01.584908 I | op-k8sutil: removing deployment csi-cephfsplugin-provisioner if it exists
2026-04-03 02:48:01.806614 I | ceph-csi: successfully removed CSI CephFS driver
2026-04-03 02:48:01.806642 I | ceph-csi: CSI NFS driver disabled
2026-04-03 02:48:01.806651 I | op-k8sutil: removing daemonset csi-nfsplugin if it exists
2026-04-03 02:48:02.139828 I | op-k8sutil: removing deployment csi-nfsplugin-provisioner if it exists
2026-04-03 02:48:02.164897 I | ceph-csi: successfully removed CSI NFS driver
2026-04-03 02:48:23.460903 I | ceph-spec: detected ceph image version: "18.2.1-0 reef"
2026-04-03 02:48:23.460930 I | ceph-cluster-controller: validating ceph version from provided image
2026-04-03 02:48:23.468953 I | ceph-spec: parsing mon endpoints: instance=10.96.240.200:6789
2026-04-03 02:48:23.468976 I | ceph-spec: updating obsolete maxMonID 0 to actual value 76846025234
2026-04-03 02:48:23.472156 I | cephclient: writing config file /var/lib/rook/openstack/openstack.config
2026-04-03 02:48:23.472447 I | cephclient: generated admin config in /var/lib/rook/openstack
2026-04-03 02:48:24.980146 E | cephver: external cluster ceph version is a major version higher "18.2.7-0 reef" than the local cluster "0.0.0-0 ", consider upgrading
2026-04-03 02:48:27.264067 W | ceph-cluster-controller: image spec version 18.2.1-0 reef is lower than the running cluster version 18.2.7-0 reef, downgrading is not supported
2026-04-03 02:48:29.379726 I | ceph-cluster-controller: upgrading ceph cluster to "18.2.1-0 reef"
2026-04-03 02:48:29.379772 I | ceph-cluster-controller: cluster "openstack": version "18.2.1-0 reef" detected for image "harbor.atmosphere.dev/quay.io/ceph/ceph:v18.2.1"
2026-04-03 02:48:32.415203 I | ceph-cluster-controller: creating "rook-config-override" configmap
2026-04-03 02:48:33.482282 I | ceph-cluster-controller: creating "rook-ceph-config" secret
2026-04-03 02:48:34.072271 I | clusterdisruption-controller: all PGs are active+clean. Restoring default OSD pdb settings
2026-04-03 02:48:34.072304 I | clusterdisruption-controller: creating the default pdb "rook-ceph-osd" with maxUnavailable=1 for all osd
2026-04-03 02:48:34.288471 I | ceph-cluster-controller: external cluster identity established
2026-04-03 02:48:34.288502 I | cephclient: getting or creating ceph auth key "client.csi-rbd-provisioner"
2026-04-03 02:48:34.292149 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-03 02:48:38.780006 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-03 02:48:40.753742 I | cephclient: getting or creating ceph auth key "client.csi-rbd-node"
2026-04-03 02:48:41.949411 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-03 02:48:43.214977 I | cephclient: getting or creating ceph auth key "client.csi-cephfs-provisioner"
2026-04-03 02:48:44.520450 I | cephclient: getting or creating ceph auth key "client.csi-cephfs-node"
2026-04-03 02:48:46.243431 I | ceph-csi: created kubernetes csi secrets for cluster "openstack"
2026-04-03 02:48:46.253484 I | ceph-csi: successfully created csi config map "rook-ceph-csi-config"
2026-04-03 02:48:46.421497 I | ceph-cluster-controller: successfully updated csi config map
2026-04-03 02:48:46.421535 I | cephclient: getting or creating ceph auth key "client.crash"
2026-04-03 02:48:47.881113 I | ceph-crashcollector-controller: created kubernetes crash collector secret for cluster "openstack"
2026-04-03 02:48:47.881163 I | ceph-cluster-controller: enabling ceph mon monitoring goroutine for cluster "openstack"
2026-04-03 02:48:47.881198 I | ceph-cluster-controller: ceph status check interval is 1m0s
2026-04-03 02:48:47.881208 I | ceph-cluster-controller: enabling ceph status monitoring goroutine for cluster "openstack"
2026-04-03 02:48:53.123575 I | ceph-spec: parsing mon endpoints: instance=10.96.240.200:6789
2026-04-03 02:48:53.123608 I | ceph-spec: updating obsolete maxMonID 0 to actual value 76846025234
2026-04-03 02:48:53.123622 I | ceph-spec: detecting the ceph image version for image harbor.atmosphere.dev/quay.io/ceph/ceph:v18.2.1...
2026-04-03 02:48:57.321970 I | ceph-spec: detected ceph image version: "18.2.1-0 reef"
2026-04-03 02:49:00.502528 I | ceph-object-controller: reconciling object store deployments
2026-04-03 02:49:00.519542 I | ceph-object-controller: ceph object store gateway service running at 10.100.161.237
2026-04-03 02:49:00.519588 I | ceph-object-controller: reconciling object store pools
2026-04-03 02:49:06.689793 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-03 02:49:09.673763 I | cephclient: reconciling replicated pool ceph.rgw.control succeeded
2026-04-03 02:49:10.476834 I | clusterdisruption-controller: all "host" failure domains: []. osd is down in failure domain: "". active node drains: false. pg health: "cluster is not fully clean. PGs: [{StateName:active+clean Count:7} {StateName:creating+peering Count:2}]"
2026-04-03 02:49:10.483773 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-03 02:49:12.061099 I | cephclient: setting pool property "pg_num_min" to "8" on pool "ceph.rgw.control"
2026-04-03 02:49:18.118441 I | cephclient: reconciling replicated pool ceph.rgw.meta succeeded
2026-04-03 02:49:20.068441 I | cephclient: setting pool property "pg_num_min" to "8" on pool "ceph.rgw.meta"
2026-04-03 02:49:26.524232 I | cephclient: reconciling replicated pool ceph.rgw.log succeeded
2026-04-03 02:49:28.507285 I | cephclient: setting pool property "pg_num_min" to "8" on pool "ceph.rgw.log"
2026-04-03 02:49:39.761604 W | op-mon: external cluster mon count is 1, consider adding new monitors.
2026-04-03 02:49:40.789560 I | clusterdisruption-controller: all "host" failure domains: []. osd is down in failure domain: "". active node drains: false. pg health: "cluster is not fully clean. PGs: [{StateName:active+clean Count:55} {StateName:creating+peering Count:2}]"
2026-04-03 02:49:40.796607 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-03 02:49:40.981920 I | cephclient: reconciling replicated pool ceph.rgw.buckets.index succeeded
2026-04-03 02:49:43.387908 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-03 02:49:44.380321 I | cephclient: setting pool property "pg_num_min" to "8" on pool "ceph.rgw.buckets.index"
2026-04-03 02:49:53.782352 I | cephclient: reconciling replicated pool ceph.rgw.buckets.non-ec succeeded
2026-04-03 02:49:56.796626 I | cephclient: setting pool property "pg_num_min" to "8" on pool "ceph.rgw.buckets.non-ec"
2026-04-03 02:50:04.492353 I | cephclient: reconciling replicated pool ceph.rgw.otp succeeded
2026-04-03 02:50:06.473650 I | cephclient: setting pool property "pg_num_min" to "8" on pool "ceph.rgw.otp"
2026-04-03 02:50:12.817779 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-03 02:50:16.074681 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-03 02:50:18.108342 I | cephclient: reconciling replicated pool .rgw.root succeeded
2026-04-03 02:50:20.182459 I | cephclient: setting pool property "pg_num_min" to "8" on pool ".rgw.root"
2026-04-03 02:50:29.666189 W | op-mon: external cluster mon count is 1, consider adding new monitors.
2026-04-03 02:50:30.546852 I | cephclient: reconciling replicated pool ceph.rgw.buckets.data succeeded
2026-04-03 02:50:32.574919 I | ceph-object-controller: setting multisite settings for object store "ceph"
2026-04-03 02:50:34.406345 I | ceph-object-controller: committing changes to RGW configuration period for CephObjectStore "openstack/ceph"
2026-04-03 02:50:35.445516 I | ceph-object-controller: Multisite for object-store: realm=ceph, zonegroup=ceph, zone=ceph
2026-04-03 02:50:35.445553 I | ceph-object-controller: multisite configuration for object-store ceph is complete
2026-04-03 02:50:35.445562 I | ceph-object-controller: creating object store "ceph" in namespace "openstack"
2026-04-03 02:50:35.445575 I | cephclient: getting or creating ceph auth key "client.rgw.ceph.a"
2026-04-03 02:50:36.694123 I | ceph-object-controller: setting rgw config flags
2026-04-03 02:50:36.694156 I | op-config: setting "client.rgw.ceph.a"="rgw_log_object_name_utc"="true" option to the mon configuration database
2026-04-03 02:50:37.675144 I | op-config: successfully set "client.rgw.ceph.a"="rgw_log_object_name_utc"="true" option to the mon configuration database
2026-04-03 02:50:37.675180 I | op-config: setting "client.rgw.ceph.a"="rgw_enable_usage_log"="true" option to the mon configuration database
2026-04-03 02:50:38.674613 I | op-config: successfully set "client.rgw.ceph.a"="rgw_enable_usage_log"="true" option to the mon configuration database
2026-04-03 02:50:38.674648 I | op-config: setting "client.rgw.ceph.a"="rgw_zone"="ceph" option to the mon configuration database
2026-04-03 02:50:39.620374 I | op-config: successfully set "client.rgw.ceph.a"="rgw_zone"="ceph" option to the mon configuration database
2026-04-03 02:50:39.620416 I | op-config: setting "client.rgw.ceph.a"="rgw_zonegroup"="ceph" option to the mon configuration database
2026-04-03 02:50:40.670067 I | op-config: successfully set "client.rgw.ceph.a"="rgw_zonegroup"="ceph" option to the mon configuration database
2026-04-03 02:50:40.670097 I | op-config: setting "client.rgw.ceph.a"="rgw_log_nonexistent_bucket"="true" option to the mon configuration database
2026-04-03 02:50:41.613391 I | op-config: successfully set "client.rgw.ceph.a"="rgw_log_nonexistent_bucket"="true" option to the mon configuration database
2026-04-03 02:50:41.613547 I | ceph-object-controller: object store "ceph" deployment "rook-ceph-rgw-ceph-a" created
2026-04-03 02:50:41.782871 I | ceph-object-controller: enabling rgw dashboard
2026-04-03 02:50:41.840295 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": Operation cannot be fulfilled on deployments.apps "rook-ceph-crashcollector-instance": the object has been modified; please apply your changes to the latest version and try again
2026-04-03 02:50:41.909093 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": Operation cannot be fulfilled on deployments.apps "rook-ceph-crashcollector-instance": the object has been modified; please apply your changes to the latest version and try again
2026-04-03 02:50:45.699793 I | clusterdisruption-controller: all "host" failure domains: []. osd is down in failure domain: "". active node drains: false. pg health: "cluster is not fully clean. PGs: [{StateName:active+clean Count:79} {StateName:peering Count:1}]"
2026-04-03 02:50:45.712828 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-03 02:50:47.692132 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-03 02:50:48.780249 I | ceph-object-controller: setting the dashboard api secret key
2026-04-03 02:50:48.780426 I | ceph-object-controller: created object store "ceph" in namespace "openstack"
2026-04-03 02:50:50.674570 I | ceph-object-controller: done setting the dashboard api secret key
2026-04-03 02:50:51.631168 I | ceph-object-controller: starting rgw health checker for CephObjectStore "openstack/ceph"
2026-04-03 02:50:52.902834 I | clusterdisruption-controller: all "host" failure domains: []. osd is down in failure domain: "". active node drains: false. pg health: "cluster is not fully clean. PGs: [{StateName:active+clean Count:77} {StateName:peering Count:1}]"
2026-04-03 02:50:52.909348 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-03 02:51:10.074422 I | clusterdisruption-controller: all "host" failure domains: []. osd is down in failure domain: "". active node drains: false. pg health: "cluster is not fully clean. PGs: [{StateName:active+clean Count:74} {StateName:peering Count:2}]"
2026-04-03 02:51:10.079629 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-03 02:51:18.574454 I | clusterdisruption-controller: all "host" failure domains: []. osd is down in failure domain: "". active node drains: false. pg health: "cluster is not fully clean. PGs: [{StateName:active+clean Count:75} {StateName:unknown Count:31}]"
2026-04-03 02:51:18.580575 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-03 02:51:18.673654 W | op-mon: external cluster mon count is 1, consider adding new monitors.
2026-04-03 02:51:19.899112 I | clusterdisruption-controller: all "host" failure domains: []. osd is down in failure domain: "". active node drains: false. pg health: "cluster is not fully clean. PGs: [{StateName:active+clean Count:104} {StateName:peering Count:1}]"
2026-04-03 02:51:19.906405 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-03 02:51:49.779787 I | clusterdisruption-controller: all "host" failure domains: []. osd is down in failure domain: "". active node drains: false. pg health: "cluster is not fully clean. PGs: [{StateName:active+clean Count:101} {StateName:creating+peering Count:5} {StateName:unknown Count:3} {StateName:peering Count:1}]"
2026-04-03 02:51:49.785331 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-03 02:51:50.993070 I | clusterdisruption-controller: all "host" failure domains: []. osd is down in failure domain: "". active node drains: false. pg health: "cluster is not fully clean. PGs: [{StateName:active+clean Count:101} {StateName:creating+peering Count:5} {StateName:unknown Count:3} {StateName:peering Count:1}]"
2026-04-03 02:51:50.999652 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-03 02:52:06.178865 W | op-mon: external cluster mon count is 1, consider adding new monitors.
2026-04-03 02:52:11.304432 I | clusterdisruption-controller: all "host" failure domains: []. osd is down in failure domain: "". active node drains: false. pg health: "cluster is not fully clean. PGs: [{StateName:active+clean Count:104} {StateName:peering Count:1}]"
2026-04-03 02:52:11.309974 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-03 02:52:20.896330 I | clusterdisruption-controller: all "host" failure domains: []. osd is down in failure domain: "". active node drains: false. pg health: "cluster is not fully clean. PGs: [{StateName:active+clean Count:117} {StateName:unknown Count:9} {StateName:peering Count:3}]"
2026-04-03 02:52:20.903604 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-03 02:52:22.185900 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-03 02:52:53.273930 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-03 02:52:56.162238 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-03 02:52:56.401876 W | op-mon: external cluster mon count is 1, consider adding new monitors.
2026-04-03 02:53:13.270442 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-03 02:53:24.504931 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-03 02:53:27.386645 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-03 02:53:44.294710 W | op-mon: external cluster mon count is 1, consider adding new monitors.
2026-04-03 02:53:55.699195 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-03 02:53:58.619120 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-03 02:54:15.211237 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-03 02:54:27.531284 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-03 02:54:30.277983 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-03 02:54:32.478916 W | op-mon: external cluster mon count is 1, consider adding new monitors.
2026-04-03 02:54:58.682413 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-03 02:55:01.495033 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-03 02:55:15.767070 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-03 02:55:19.971627 W | op-mon: external cluster mon count is 1, consider adding new monitors.
2026-04-03 02:55:29.879829 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-03 02:55:32.678826 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-03 02:56:01.075832 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-03 02:56:03.897462 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-03 02:56:07.573513 W | op-mon: external cluster mon count is 1, consider adding new monitors.
2026-04-03 02:56:17.105160 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-03 02:56:32.316720 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-03 02:56:35.109235 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-03 02:56:54.999655 W | op-mon: external cluster mon count is 1, consider adding new monitors.
2026-04-03 02:57:03.491743 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-03 02:57:06.293401 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-03 02:57:18.389216 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-03 02:57:34.708375 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-03 02:57:37.612115 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-03 02:57:42.477205 W | op-mon: external cluster mon count is 1, consider adding new monitors.
2026-04-03 02:58:05.876647 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-03 02:58:08.792107 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-03 02:58:19.709879 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-03 02:58:29.897780 W | op-mon: external cluster mon count is 1, consider adding new monitors.
2026-04-03 02:58:37.087268 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-03 02:58:39.986963 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-03 02:59:08.299277 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-03 02:59:11.168847 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-03 02:59:20.170524 W | op-mon: external cluster mon count is 1, consider adding new monitors.
2026-04-03 02:59:22.002682 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-03 02:59:39.478340 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-03 02:59:42.381102 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-03 03:00:07.578457 W | op-mon: external cluster mon count is 1, consider adding new monitors.
2026-04-03 03:00:10.673030 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-03 03:00:13.596944 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-03 03:00:23.670374 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-03 03:00:41.814517 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-03 03:00:44.788427 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-03 03:00:55.012334 W | op-mon: external cluster mon count is 1, consider adding new monitors.
2026-04-03 03:01:12.986143 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-03 03:01:16.008068 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-03 03:01:23.799966 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-03 03:01:42.409844 W | op-mon: external cluster mon count is 1, consider adding new monitors.
2026-04-03 03:01:44.208418 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0