2026-04-08 01:36:38.496088 I | rookcmd: starting Rook v1.10.10 with arguments '/usr/local/bin/rook ceph operator'
2026-04-08 01:36:38.496285 I | rookcmd: flag values: --enable-machine-disruption-budget=false, --help=false, --kubeconfig=, --log-level=INFO, --operator-image=, --service-account=
2026-04-08 01:36:38.496291 I | cephcmd: starting Rook-Ceph operator
2026-04-08 01:36:38.806023 I | cephcmd: base ceph version inside the rook operator image is "ceph version 17.2.5 (98318ae89f1a893a6ded3a640405cdbb33e08757) quincy (stable)"
2026-04-08 01:36:38.816093 I | op-k8sutil: ROOK_CURRENT_NAMESPACE_ONLY="false" (env var)
2026-04-08 01:36:38.816117 I | operator: watching all namespaces for Ceph CRs
2026-04-08 01:36:38.816190 I | operator: setting up schemes
2026-04-08 01:36:38.819150 I | operator: setting up the controller-runtime manager
2026-04-08 01:36:39.630243 I | op-k8sutil: ROOK_DISABLE_ADMISSION_CONTROLLER="true" (configmap)
2026-04-08 01:36:39.630280 I | operator: delete webhook resources since webhook is disabled
2026-04-08 01:36:39.630288 I | operator: deleting validating webhook rook-ceph-webhook
2026-04-08 01:36:39.633769 I | operator: deleting webhook cert manager Certificate rook-admission-controller-cert
2026-04-08 01:36:39.636883 I | operator: deleting webhook cert manager Issuer %sselfsigned-issuer
2026-04-08 01:36:39.639652 I | operator: deleting validating webhook service %srook-ceph-admission-controller
2026-04-08 01:36:39.643059 I | ceph-cluster-controller: successfully started
2026-04-08 01:36:39.643152 I | ceph-cluster-controller: enabling hotplug orchestration
2026-04-08 01:36:39.643184 I | ceph-crashcollector-controller: successfully started
2026-04-08 01:36:39.643214 I | ceph-block-pool-controller: successfully started
2026-04-08 01:36:39.643250 I | ceph-object-store-user-controller: successfully started
2026-04-08 01:36:39.643279 I | ceph-object-realm-controller: successfully started
2026-04-08 01:36:39.643302 I | ceph-object-zonegroup-controller: successfully started
2026-04-08 01:36:39.643324 I | ceph-object-zone-controller: successfully started
2026-04-08 01:36:39.643457 I | ceph-object-controller: successfully started
2026-04-08 01:36:39.643513 I | ceph-file-controller: successfully started
2026-04-08 01:36:39.643560 I | ceph-nfs-controller: successfully started
2026-04-08 01:36:39.643598 I | ceph-rbd-mirror-controller: successfully started
2026-04-08 01:36:39.643633 I | ceph-client-controller: successfully started
2026-04-08 01:36:39.643663 I | ceph-filesystem-mirror-controller: successfully started
2026-04-08 01:36:39.643699 I | operator: rook-ceph-operator-config-controller successfully started
2026-04-08 01:36:39.647399 I | op-k8sutil: ROOK_DISABLE_ADMISSION_CONTROLLER="true" (configmap)
2026-04-08 01:36:39.647436 I | ceph-csi: rook-ceph-operator-csi-controller successfully started
2026-04-08 01:36:39.647463 I | op-bucket-prov: rook-ceph-operator-bucket-controller successfully started
2026-04-08 01:36:39.647485 I | ceph-bucket-topic: successfully started
2026-04-08 01:36:39.647506 I | ceph-bucket-notification: successfully started
2026-04-08 01:36:39.647520 I | ceph-bucket-notification: successfully started
2026-04-08 01:36:39.647537 I | ceph-fs-subvolumegroup-controller: successfully started
2026-04-08 01:36:39.647560 I | blockpool-rados-namespace-controller: successfully started
2026-04-08 01:36:39.648749 I | operator: starting the controller-runtime manager
2026-04-08 01:36:40.493585 I | op-k8sutil: ROOK_CEPH_COMMANDS_TIMEOUT_SECONDS="15" (configmap)
2026-04-08 01:36:40.493627 I | op-k8sutil: ROOK_LOG_LEVEL="INFO" (configmap)
2026-04-08 01:36:40.493649 I | op-k8sutil: ROOK_ENABLE_DISCOVERY_DAEMON="false" (env var)
2026-04-08 01:36:40.496511 I | ceph-csi: CSI Ceph RBD driver disabled
2026-04-08 01:36:40.496557 I | op-k8sutil: removing daemonset csi-rbdplugin if it exists
2026-04-08 01:36:40.497606 I | op-k8sutil: ROOK_CEPH_ALLOW_LOOP_DEVICES="false" (configmap)
2026-04-08 01:36:40.497631 I | operator: rook-ceph-operator-config-controller done reconciling
2026-04-08 01:36:40.500095 I | op-k8sutil: removing deployment csi-rbdplugin-provisioner if it exists
2026-04-08 01:36:40.508385 I | ceph-csi: successfully removed CSI Ceph RBD driver
2026-04-08 01:36:40.508409 I | ceph-csi: CSI CephFS driver disabled
2026-04-08 01:36:40.508418 I | op-k8sutil: removing daemonset csi-cephfsplugin if it exists
2026-04-08 01:36:40.510862 I | op-k8sutil: removing deployment csi-cephfsplugin-provisioner if it exists
2026-04-08 01:36:40.518717 I | ceph-csi: successfully removed CSI CephFS driver
2026-04-08 01:36:40.518739 I | ceph-csi: CSI NFS driver disabled
2026-04-08 01:36:40.518747 I | op-k8sutil: removing daemonset csi-nfsplugin if it exists
2026-04-08 01:36:40.520571 I | op-k8sutil: removing deployment csi-nfsplugin-provisioner if it exists
2026-04-08 01:36:40.526543 I | ceph-csi: successfully removed CSI NFS driver
2026-04-08 01:37:00.280562 I | ceph-spec: adding finalizer "cephcluster.ceph.rook.io" on "ceph"
2026-04-08 01:37:00.284114 I | op-k8sutil: CSI_ENABLE_HOST_NETWORK="true" (configmap)
2026-04-08 01:37:00.290123 I | clusterdisruption-controller: deleted all legacy node drain canary pods
2026-04-08 01:37:00.290678 I | ceph-spec: adding finalizer "cephobjectstore.ceph.rook.io" on "ceph"
2026-04-08 01:37:00.295464 I | ceph-cluster-controller: reconciling ceph cluster in namespace "openstack"
2026-04-08 01:37:00.306145 I | ceph-spec: parsing mon endpoints: instance=10.96.240.200:6789
2026-04-08 01:37:00.306175 I | ceph-spec: updating obsolete maxMonID 0 to actual value 76846025234
2026-04-08 01:37:00.306266 I | op-k8sutil: ROOK_OBC_WATCH_OPERATOR_NAMESPACE="true" (configmap)
2026-04-08 01:37:00.306292 I | op-bucket-prov: ceph bucket provisioner launched watching for provisioner "openstack.ceph.rook.io/bucket"
2026-04-08 01:37:00.306618 I | ceph-csi: successfully created csi config map "rook-ceph-csi-config"
2026-04-08 01:37:00.307911 E | ceph-spec: failed to update cluster condition to {Type:Connecting Status:True Reason:ClusterConnecting Message:Attempting to connect to an external Ceph cluster LastHeartbeatTime:2026-04-08 01:37:00.295527059 +0000 UTC m=+21.956548609 LastTransitionTime:2026-04-08 01:37:00.295526969 +0000 UTC m=+21.956548529}. failed to update object "openstack/ceph" status: Operation cannot be fulfilled on cephclusters.ceph.rook.io "ceph": the object has been modified; please apply your changes to the latest version and try again
2026-04-08 01:37:00.308298 I | op-bucket-prov: successfully reconciled bucket provisioner
I0408 01:37:00.308503 1 manager.go:135] objectbucket.io/provisioner-manager "msg"="starting provisioner" "name"="openstack.ceph.rook.io/bucket"
2026-04-08 01:37:00.318952 I | ceph-spec: parsing mon endpoints: instance=10.96.240.200:6789
2026-04-08 01:37:00.318992 I | ceph-spec: updating obsolete maxMonID 0 to actual value 76846025234
2026-04-08 01:37:00.319022 I | ceph-spec: found the cluster info to connect to the external cluster. will use "client.admin" to check health and monitor status. mons=map[instance:0xc000695d70]
2026-04-08 01:37:00.319052 I | ceph-spec: detecting the ceph image version for image harbor.atmosphere.dev/quay.io/ceph/ceph:v18.2.1...
2026-04-08 01:37:00.327420 I | op-k8sutil: ROOK_CSI_ENABLE_RBD="false" (configmap)
2026-04-08 01:37:00.327440 I | op-k8sutil: ROOK_CSI_ENABLE_CEPHFS="false" (configmap)
2026-04-08 01:37:00.327444 I | op-k8sutil: ROOK_CSI_ENABLE_NFS="false" (configmap)
2026-04-08 01:37:00.327448 I | op-k8sutil: ROOK_CSI_ALLOW_UNSUPPORTED_VERSION="false" (default)
2026-04-08 01:37:00.327452 I | op-k8sutil: ROOK_CSI_ENABLE_GRPC_METRICS="false" (configmap)
2026-04-08 01:37:00.327455 I | op-k8sutil: CSI_ENABLE_HOST_NETWORK="true" (configmap)
2026-04-08 01:37:00.327458 I | op-k8sutil: CSI_FORCE_CEPHFS_KERNEL_CLIENT="true" (configmap)
2026-04-08 01:37:00.327461 I | op-k8sutil: CSI_GRPC_TIMEOUT_SECONDS="150" (configmap)
2026-04-08 01:37:00.327464 I | op-k8sutil: CSI_CEPHFS_GRPC_METRICS_PORT="9091" (default)
2026-04-08 01:37:00.327469 I | op-k8sutil: CSI_CEPHFS_GRPC_METRICS_PORT="9091" (default)
2026-04-08 01:37:00.327472 I | op-k8sutil: CSI_CEPHFS_LIVENESS_METRICS_PORT="9081" (default)
2026-04-08 01:37:00.327475 I | op-k8sutil: CSI_CEPHFS_LIVENESS_METRICS_PORT="9081" (default)
2026-04-08 01:37:00.327478 I | op-k8sutil: CSI_RBD_GRPC_METRICS_PORT="9090" (default)
2026-04-08 01:37:00.327480 I | op-k8sutil: CSI_RBD_GRPC_METRICS_PORT="9090" (default)
2026-04-08 01:37:00.327485 I | op-k8sutil: CSIADDONS_PORT="9070" (default)
2026-04-08 01:37:00.327488 I | op-k8sutil: CSIADDONS_PORT="9070" (default)
2026-04-08 01:37:00.327491 I | op-k8sutil: CSI_RBD_LIVENESS_METRICS_PORT="9080" (default)
2026-04-08 01:37:00.327495 I | op-k8sutil: CSI_RBD_LIVENESS_METRICS_PORT="9080" (default)
2026-04-08 01:37:00.327501 I | op-k8sutil: CSI_ENABLE_LIVENESS="false" (default)
2026-04-08 01:37:00.327505 I | op-k8sutil: CSI_PLUGIN_PRIORITY_CLASSNAME="system-node-critical" (configmap)
2026-04-08 01:37:00.327512 I | op-k8sutil: CSI_PROVISIONER_PRIORITY_CLASSNAME="system-cluster-critical" (configmap)
2026-04-08 01:37:00.327516 I | op-k8sutil: CSI_ENABLE_OMAP_GENERATOR="false" (configmap)
2026-04-08 01:37:00.327522 I | op-k8sutil: CSI_ENABLE_RBD_SNAPSHOTTER="true" (configmap)
2026-04-08 01:37:00.327527 I | op-k8sutil: CSI_ENABLE_CEPHFS_SNAPSHOTTER="true" (configmap)
2026-04-08 01:37:00.327542 I | op-k8sutil: CSI_ENABLE_NFS_SNAPSHOTTER="true" (configmap)
2026-04-08 01:37:00.327546 I | op-k8sutil: CSI_ENABLE_CSIADDONS="false" (configmap)
2026-04-08 01:37:00.327550 I | op-k8sutil: CSI_ENABLE_TOPOLOGY="false" (configmap)
2026-04-08 01:37:00.327554 I | op-k8sutil: CSI_ENABLE_ENCRYPTION="false" (configmap)
2026-04-08 01:37:00.327559 I | op-k8sutil: CSI_ENABLE_METADATA="false" (configmap)
2026-04-08 01:37:00.327564 I | op-k8sutil: CSI_CEPHFS_PLUGIN_UPDATE_STRATEGY="RollingUpdate" (default)
2026-04-08 01:37:00.327573 I | op-k8sutil: CSI_NFS_PLUGIN_UPDATE_STRATEGY="RollingUpdate" (default)
2026-04-08 01:37:00.327577 I | op-k8sutil: CSI_RBD_PLUGIN_UPDATE_STRATEGY="RollingUpdate" (default)
2026-04-08 01:37:00.327582 I | op-k8sutil: CSI_RBD_PLUGIN_UPDATE_STRATEGY_MAX_UNAVAILABLE="1" (default)
2026-04-08 01:37:00.327587 I | op-k8sutil: CSI_PLUGIN_ENABLE_SELINUX_HOST_MOUNT="false" (configmap)
2026-04-08 01:37:00.327604 I | ceph-csi: Kubernetes version is 1.28
2026-04-08 01:37:00.327609 I | op-k8sutil: ROOK_CSI_RESIZER_IMAGE="registry.k8s.io/sig-storage/csi-resizer:v1.7.0" (default)
2026-04-08 01:37:00.327613 I | op-k8sutil: CSI_LOG_LEVEL="" (default)
2026-04-08 01:37:00.327618 I | op-k8sutil: CSI_SIDECAR_LOG_LEVEL="" (default)
2026-04-08 01:37:00.489552 I | op-k8sutil: ROOK_CSI_CEPH_IMAGE="quay.io/cephcsi/cephcsi:v3.7.2" (default)
2026-04-08 01:37:00.489586 I | op-k8sutil: ROOK_CSI_REGISTRAR_IMAGE="registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.7.0" (default)
2026-04-08 01:37:00.489592 I | op-k8sutil: ROOK_CSI_PROVISIONER_IMAGE="registry.k8s.io/sig-storage/csi-provisioner:v3.4.0" (default)
2026-04-08 01:37:00.489597 I | op-k8sutil: ROOK_CSI_ATTACHER_IMAGE="registry.k8s.io/sig-storage/csi-attacher:v4.1.0" (default)
2026-04-08 01:37:00.489602 I | op-k8sutil: ROOK_CSI_SNAPSHOTTER_IMAGE="registry.k8s.io/sig-storage/csi-snapshotter:v6.2.1" (default)
2026-04-08 01:37:00.489606 I | op-k8sutil: ROOK_CSI_KUBELET_DIR_PATH="/var/lib/kubelet" (default)
2026-04-08 01:37:00.489611 I | op-k8sutil: ROOK_CSIADDONS_IMAGE="quay.io/csiaddons/k8s-sidecar:v0.5.0" (configmap)
2026-04-08 01:37:00.489615 I | op-k8sutil: CSI_TOPOLOGY_DOMAIN_LABELS="" (default)
2026-04-08 01:37:00.489619 I | op-k8sutil: ROOK_CSI_CEPHFS_POD_LABELS="" (default)
2026-04-08 01:37:00.489623 I | op-k8sutil: ROOK_CSI_NFS_POD_LABELS="" (default)
2026-04-08 01:37:00.489626 I | op-k8sutil: ROOK_CSI_RBD_POD_LABELS="" (default)
2026-04-08 01:37:00.489630 I | op-k8sutil: CSI_CLUSTER_NAME="" (default)
2026-04-08 01:37:00.489634 I | op-k8sutil: ROOK_CSI_IMAGE_PULL_POLICY="IfNotPresent" (configmap)
2026-04-08 01:37:00.489637 I | ceph-csi: skipping csi version check, since unsupported versions are allowed or csi is disabled
2026-04-08 01:37:00.489642 I | ceph-csi: CSI Ceph RBD driver disabled
2026-04-08 01:37:00.489651 I | op-k8sutil: removing daemonset csi-rbdplugin if it exists
2026-04-08 01:37:00.492376 I | op-k8sutil: removing deployment csi-rbdplugin-provisioner if it exists
2026-04-08 01:37:00.890093 I | ceph-csi: successfully removed CSI Ceph RBD driver
2026-04-08 01:37:00.890133 I | ceph-csi: CSI CephFS driver disabled
2026-04-08 01:37:00.890141 I | op-k8sutil: removing daemonset csi-cephfsplugin if it exists
2026-04-08 01:37:00.893330 I | op-k8sutil: removing deployment csi-cephfsplugin-provisioner if it exists
2026-04-08 01:37:01.091144 I | ceph-csi: successfully removed CSI CephFS driver
2026-04-08 01:37:01.091173 I | ceph-csi: CSI NFS driver disabled
2026-04-08 01:37:01.091183 I | op-k8sutil: removing daemonset csi-nfsplugin if it exists
2026-04-08 01:37:01.092745 I | op-k8sutil: removing deployment csi-nfsplugin-provisioner if it exists
2026-04-08 01:37:01.289562 I | ceph-csi: successfully removed CSI NFS driver
2026-04-08 01:37:17.594708 I | ceph-spec: detected ceph image version: "18.2.1-0 reef"
2026-04-08 01:37:17.594743 I | ceph-cluster-controller: validating ceph version from provided image
2026-04-08 01:37:17.601095 I | ceph-spec: parsing mon endpoints: instance=10.96.240.200:6789
2026-04-08 01:37:17.601124 I | ceph-spec: updating obsolete maxMonID 0 to actual value 76846025234
2026-04-08 01:37:17.606110 I | cephclient: writing config file /var/lib/rook/openstack/openstack.config
2026-04-08 01:37:17.606222 I | cephclient: generated admin config in /var/lib/rook/openstack
2026-04-08 01:37:18.847231 E | cephver: external cluster ceph version is a major version higher "18.2.7-0 reef" than the local cluster "0.0.0-0 ", consider upgrading
2026-04-08 01:37:20.137822 W | ceph-cluster-controller: image spec version 18.2.1-0 reef is lower than the running cluster version 18.2.7-0 reef, downgrading is not supported
2026-04-08 01:37:22.210937 I | ceph-cluster-controller: upgrading ceph cluster to "18.2.1-0 reef"
2026-04-08 01:37:22.210967 I | ceph-cluster-controller: cluster "openstack": version "18.2.1-0 reef" detected for image "harbor.atmosphere.dev/quay.io/ceph/ceph:v18.2.1"
2026-04-08 01:37:22.289980 I | ceph-cluster-controller: creating "rook-config-override" configmap
2026-04-08 01:37:22.297133 I | ceph-cluster-controller: creating "rook-ceph-config" secret
2026-04-08 01:37:22.309431 I | ceph-cluster-controller: external cluster identity established
2026-04-08 01:37:22.309469 I | cephclient: getting or creating ceph auth key "client.csi-rbd-provisioner"
2026-04-08 01:37:23.797535 I | clusterdisruption-controller: all PGs are active+clean. Restoring default OSD pdb settings
2026-04-08 01:37:23.797568 I | clusterdisruption-controller: creating the default pdb "rook-ceph-osd" with maxUnavailable=1 for all osd
2026-04-08 01:37:23.809602 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-08 01:37:25.199045 I | cephclient: getting or creating ceph auth key "client.csi-rbd-node"
2026-04-08 01:37:26.790199 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-08 01:37:28.114229 I | cephclient: getting or creating ceph auth key "client.csi-cephfs-provisioner"
2026-04-08 01:37:29.690146 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-08 01:37:30.345027 I | cephclient: getting or creating ceph auth key "client.csi-cephfs-node"
2026-04-08 01:37:31.763102 I | ceph-csi: created kubernetes csi secrets for cluster "openstack"
2026-04-08 01:37:31.768631 I | ceph-csi: successfully created csi config map "rook-ceph-csi-config"
2026-04-08 01:37:31.927517 I | ceph-cluster-controller: successfully updated csi config map
2026-04-08 01:37:31.927557 I | cephclient: getting or creating ceph auth key "client.crash"
2026-04-08 01:37:33.156263 I | ceph-crashcollector-controller: created kubernetes crash collector secret for cluster "openstack"
2026-04-08 01:37:33.156306 I | ceph-cluster-controller: enabling ceph mon monitoring goroutine for cluster "openstack"
2026-04-08 01:37:33.156363 I | ceph-cluster-controller: ceph status check interval is 1m0s
2026-04-08 01:37:33.156375 I | ceph-cluster-controller: enabling ceph status monitoring goroutine for cluster "openstack"
2026-04-08 01:37:40.387767 I | ceph-spec: parsing mon endpoints: instance=10.96.240.200:6789
2026-04-08 01:37:40.387811 I | ceph-spec: updating obsolete maxMonID 0 to actual value 76846025234
2026-04-08 01:37:40.387827 I | ceph-spec: detecting the ceph image version for image harbor.atmosphere.dev/quay.io/ceph/ceph:v18.2.1...
2026-04-08 01:37:42.592952 I | ceph-spec: detected ceph image version: "18.2.1-0 reef"
2026-04-08 01:37:46.022139 I | ceph-object-controller: reconciling object store deployments
2026-04-08 01:37:46.039364 I | ceph-object-controller: ceph object store gateway service running at 10.104.149.23
2026-04-08 01:37:46.039393 I | ceph-object-controller: reconciling object store pools
2026-04-08 01:37:53.660139 I | cephclient: reconciling replicated pool ceph.rgw.control succeeded
2026-04-08 01:37:56.807515 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-08 01:37:58.091062 I | cephclient: setting pool property "pg_num_min" to "8" on pool "ceph.rgw.control"
2026-04-08 01:37:59.807512 I | clusterdisruption-controller: all "host" failure domains: []. osd is down in failure domain: "". active node drains: false. pg health: "cluster is not fully clean. PGs: [{StateName:active+clean Count:27} {StateName:unknown Count:6}]"
2026-04-08 01:37:59.814510 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-08 01:38:06.840915 I | cephclient: reconciling replicated pool ceph.rgw.meta succeeded
2026-04-08 01:38:08.811460 I | cephclient: setting pool property "pg_num_min" to "8" on pool "ceph.rgw.meta"
2026-04-08 01:38:14.946711 I | cephclient: reconciling replicated pool ceph.rgw.log succeeded
2026-04-08 01:38:16.941710 I | cephclient: setting pool property "pg_num_min" to "8" on pool "ceph.rgw.log"
2026-04-08 01:38:23.801042 W | op-mon: external cluster mon count is 1, consider adding new monitors.
2026-04-08 01:38:28.190004 I | cephclient: reconciling replicated pool ceph.rgw.buckets.index succeeded
2026-04-08 01:38:28.709198 I | clusterdisruption-controller: all "host" failure domains: []. osd is down in failure domain: "". active node drains: false. pg health: "cluster is not fully clean. PGs: [{StateName:active+clean Count:49} {StateName:unknown Count:6} {StateName:creating+peering Count:2}]"
2026-04-08 01:38:28.718010 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-08 01:38:31.507373 I | cephclient: setting pool property "pg_num_min" to "8" on pool "ceph.rgw.buckets.index"
2026-04-08 01:38:32.698222 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-08 01:38:41.786424 I | cephclient: reconciling replicated pool ceph.rgw.buckets.non-ec succeeded
2026-04-08 01:38:43.716500 I | cephclient: setting pool property "pg_num_min" to "8" on pool "ceph.rgw.buckets.non-ec"
2026-04-08 01:38:50.915277 I | cephclient: reconciling replicated pool ceph.rgw.otp succeeded
2026-04-08 01:38:52.890826 I | cephclient: setting pool property "pg_num_min" to "8" on pool "ceph.rgw.otp"
2026-04-08 01:39:01.609891 I | clusterdisruption-controller: all "host" failure domains: []. osd is down in failure domain: "". active node drains: false. pg health: "cluster is not fully clean. PGs: [{StateName:active+clean Count:87} {StateName:unknown Count:11} {StateName:creating+peering Count:5} {StateName:clean+premerge+peered Count:2}]"
2026-04-08 01:39:01.690206 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-08 01:39:02.129036 I | cephclient: reconciling replicated pool .rgw.root succeeded
2026-04-08 01:39:05.612915 I | clusterdisruption-controller: all "host" failure domains: []. osd is down in failure domain: "". active node drains: false. pg health: "cluster is not fully clean. PGs: [{StateName:active+clean Count:98} {StateName:creating+peering Count:5} {StateName:clean+premerge+peered Count:1}]"
2026-04-08 01:39:05.620140 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-08 01:39:05.891328 I | cephclient: setting pool property "pg_num_min" to "8" on pool ".rgw.root"
2026-04-08 01:39:13.829261 W | op-mon: external cluster mon count is 1, consider adding new monitors.
2026-04-08 01:39:17.489485 I | cephclient: reconciling replicated pool ceph.rgw.buckets.data succeeded
2026-04-08 01:39:19.590256 I | ceph-object-controller: setting multisite settings for object store "ceph"
2026-04-08 01:39:21.135385 I | ceph-object-controller: committing changes to RGW configuration period for CephObjectStore "openstack/ceph"
2026-04-08 01:39:21.649458 I | ceph-object-controller: Multisite for object-store: realm=ceph, zonegroup=ceph, zone=ceph
2026-04-08 01:39:21.649493 I | ceph-object-controller: multisite configuration for object-store ceph is complete
2026-04-08 01:39:21.649504 I | ceph-object-controller: creating object store "ceph" in namespace "openstack"
2026-04-08 01:39:21.649520 I | cephclient: getting or creating ceph auth key "client.rgw.ceph.a"
2026-04-08 01:39:22.848889 I | ceph-object-controller: setting rgw config flags
2026-04-08 01:39:22.848931 I | op-config: setting "client.rgw.ceph.a"="rgw_log_nonexistent_bucket"="true" option to the mon configuration database
2026-04-08 01:39:23.894604 I | op-config: successfully set "client.rgw.ceph.a"="rgw_log_nonexistent_bucket"="true" option to the mon configuration database
2026-04-08 01:39:23.894643 I | op-config: setting "client.rgw.ceph.a"="rgw_log_object_name_utc"="true" option to the mon configuration database
2026-04-08 01:39:24.908713 I | op-config: successfully set "client.rgw.ceph.a"="rgw_log_object_name_utc"="true" option to the mon configuration database
2026-04-08 01:39:24.908760 I | op-config: setting "client.rgw.ceph.a"="rgw_enable_usage_log"="true" option to the mon configuration database
2026-04-08 01:39:25.941211 I | op-config: successfully set "client.rgw.ceph.a"="rgw_enable_usage_log"="true" option to the mon configuration database
2026-04-08 01:39:25.941257 I | op-config: setting "client.rgw.ceph.a"="rgw_zone"="ceph" option to the mon configuration database
2026-04-08 01:39:26.948036 I | op-config: successfully set "client.rgw.ceph.a"="rgw_zone"="ceph" option to the mon configuration database
2026-04-08 01:39:26.948070 I | op-config: setting "client.rgw.ceph.a"="rgw_zonegroup"="ceph" option to the mon configuration database
2026-04-08 01:39:28.002647 I | op-config: successfully set "client.rgw.ceph.a"="rgw_zonegroup"="ceph" option to the mon configuration database
2026-04-08 01:39:28.002906 I | ceph-object-controller: object store "ceph" deployment "rook-ceph-rgw-ceph-a" created
2026-04-08 01:39:28.051337 I | ceph-object-controller: enabling rgw dashboard
2026-04-08 01:39:28.131818 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": Operation cannot be fulfilled on deployments.apps "rook-ceph-crashcollector-instance": the object has been modified; please apply your changes to the latest version and try again
2026-04-08 01:39:28.190426 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": Operation cannot be fulfilled on deployments.apps "rook-ceph-crashcollector-instance": the object has been modified; please apply your changes to the latest version and try again
2026-04-08 01:39:32.496267 I | ceph-object-controller: created object store "ceph" in namespace "openstack"
2026-04-08 01:39:32.496453 I | ceph-object-controller: setting the dashboard api secret key
2026-04-08 01:39:33.695786 I | ceph-object-controller: starting rgw health checker for CephObjectStore "openstack/ceph"
2026-04-08 01:39:35.010056 I | clusterdisruption-controller: all "host" failure domains: []. osd is down in failure domain: "". active node drains: false. pg health: "cluster is not fully clean. PGs: [{StateName:active+clean Count:102} {StateName:activating Count:1}]"
2026-04-08 01:39:35.017450 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-08 01:39:35.299965 I | ceph-object-controller: done setting the dashboard api secret key
2026-04-08 01:39:36.498911 I | clusterdisruption-controller: all "host" failure domains: []. osd is down in failure domain: "". active node drains: false. pg health: "cluster is not fully clean. PGs: [{StateName:active+clean Count:101} {StateName:activating Count:1}]"
2026-04-08 01:39:36.506475 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-08 01:39:37.810443 I | clusterdisruption-controller: all "host" failure domains: []. osd is down in failure domain: "". active node drains: false. pg health: "cluster is not fully clean. PGs: [{StateName:active+clean Count:101} {StateName:activating Count:1}]"
2026-04-08 01:39:37.818256 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-08 01:39:52.215676 I | clusterdisruption-controller: all "host" failure domains: []. osd is down in failure domain: "". active node drains: false. pg health: "cluster is not fully clean. PGs: [{StateName:active+clean Count:100} {StateName:peering Count:1}]"
2026-04-08 01:39:52.225161 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-08 01:40:01.205968 W | op-mon: external cluster mon count is 1, consider adding new monitors.
2026-04-08 01:40:06.316027 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-08 01:40:09.021963 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-08 01:40:37.598146 I | clusterdisruption-controller: all "host" failure domains: []. osd is down in failure domain: "". active node drains: false. pg health: "cluster is not fully clean. PGs: [{StateName:active+clean Count:122} {StateName:clean+premerge+peered Count:2}]"
2026-04-08 01:40:37.606604 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-08 01:40:40.238138 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-08 01:40:48.715273 W | op-mon: external cluster mon count is 1, consider adding new monitors.
2026-04-08 01:40:53.805490 I | clusterdisruption-controller: all "host" failure domains: []. osd is down in failure domain: "". active node drains: false. pg health: "cluster is not fully clean. PGs: [{StateName:active+clean Count:120} {StateName:peering Count:2}]"
2026-04-08 01:40:53.812857 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-08 01:41:08.903772 I | clusterdisruption-controller: all "host" failure domains: []. osd is down in failure domain: "". active node drains: false. pg health: "cluster is not fully clean. PGs: [{StateName:active+clean Count:140} {StateName:activating Count:1}]"
2026-04-08 01:41:08.912165 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-08 01:41:11.495263 I | clusterdisruption-controller: all "host" failure domains: []. osd is down in failure domain: "". active node drains: false. pg health: "cluster is not fully clean. PGs: [{StateName:active+clean Count:138} {StateName:activating Count:1} {StateName:peering Count:1}]"
2026-04-08 01:41:11.504444 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-08 01:41:36.331790 W | op-mon: external cluster mon count is 1, consider adding new monitors.
2026-04-08 01:41:40.140685 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-08 01:41:42.802395 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-08 01:41:55.007314 I | clusterdisruption-controller: all "host" failure domains: []. osd is down in failure domain: "". active node drains: false. pg health: "cluster is not fully clean. PGs: [{StateName:active+clean Count:129} {StateName:clean+premerge+peered Count:1}]"
2026-04-08 01:41:55.013549 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-08 01:42:11.317983 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-08 01:42:14.015551 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-08 01:42:23.921515 W | op-mon: external cluster mon count is 1, consider adding new monitors.
2026-04-08 01:42:42.509231 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-08 01:42:45.211635 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-08 01:42:56.421204 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-08 01:43:11.329948 W | op-mon: external cluster mon count is 1, consider adding new monitors.
2026-04-08 01:43:13.721768 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-08 01:43:16.435070 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-08 01:43:44.929122 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-08 01:43:47.614350 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-08 01:43:59.298494 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-08 01:44:00.431774 W | op-mon: external cluster mon count is 1, consider adding new monitors.
2026-04-08 01:44:16.204069 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-08 01:44:18.907794 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-08 01:44:49.102826 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-08 01:44:49.996036 W | op-mon: external cluster mon count is 1, consider adding new monitors.
2026-04-08 01:44:51.001319 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-08 01:44:58.937573 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-08 01:45:20.341310 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-08 01:45:22.315279 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-08 01:45:37.627024 W | op-mon: external cluster mon count is 1, consider adding new monitors.
2026-04-08 01:45:51.512388 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-08 01:45:53.502084 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-08 01:46:00.302268 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-08 01:46:22.690140 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-08 01:46:26.221044 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-08 01:46:26.624480 W | op-mon: external cluster mon count is 1, consider adding new monitors.
2026-04-08 01:46:53.919666 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-08 01:46:57.517505 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-08 01:47:02.516305 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-08 01:47:14.102500 W | op-mon: external cluster mon count is 1, consider adding new monitors.
2026-04-08 01:47:25.141395 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-08 01:47:28.724323 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-08 01:47:56.331699 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-08 01:48:01.001744 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-08 01:48:05.800169 W | op-mon: external cluster mon count is 1, consider adding new monitors.
2026-04-08 01:48:06.398370 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-08 01:48:27.515736 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
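The entries above follow a consistent `<timestamp> <level> | <subsystem>: <message>` layout, with levels I/W/E and a handful of klog-style lines (e.g. `I0408 … manager.go:135] …`) mixed in from the bucket-provisioner library. A minimal Python sketch for splitting a Rook-format line into its fields (the regex and field names here are my own illustration, not part of Rook):

```python
import re

# Rook operator lines look like:
#   2026-04-08 01:37:22.210937 I | ceph-cluster-controller: upgrading ceph cluster ...
# Level is a single letter (I=info, W=warning, E=error); the subsystem
# is everything before the first colon after the pipe.
LINE_RE = re.compile(
    r"^(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) "
    r"(?P<level>[IWE]) \| "
    r"(?P<subsystem>[^:]+): (?P<msg>.*)$"
)

def parse_line(line: str):
    """Return (timestamp, level, subsystem, message), or None for
    non-Rook lines such as the embedded klog output."""
    m = LINE_RE.match(line)
    if not m:
        return None
    return m.group("ts"), m.group("level"), m.group("subsystem"), m.group("msg")

example = ('2026-04-08 01:38:23.801042 W | op-mon: '
           'external cluster mon count is 1, consider adding new monitors.')
print(parse_line(example))
```

Filtering the parsed tuples by level quickly surfaces the recurring warnings in this log (single external mon, image/cluster version mismatch) among the info-level noise.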