2026/03/10 13:57:37 maxprocs: Updating GOMAXPROCS=1: determined from CPU quota
2026-03-10 13:57:37.537753 I | rookcmd: starting Rook v1.15.9 with arguments '/usr/local/bin/rook ceph operator'
2026-03-10 13:57:37.537782 I | rookcmd: flag values: --enable-machine-disruption-budget=false, --help=false, --kubeconfig=, --log-level=INFO
2026-03-10 13:57:37.537785 I | cephcmd: starting Rook-Ceph operator
2026-03-10 13:57:37.653390 I | cephcmd: base ceph version inside the rook operator image is "ceph version 18.2.4 (e7ad5345525c7aa95470c26863873b581076945d) reef (stable)"
2026-03-10 13:57:37.660473 I | op-k8sutil: operator setting "CSI_PLUGIN_ENABLE_SELINUX_HOST_MOUNT" = "false"
2026-03-10 13:57:37.660503 I | op-k8sutil: operator setting "ROOK_CSI_ENABLE_NFS" = "false"
2026-03-10 13:57:37.660511 I | op-k8sutil: operator setting "CSI_ENABLE_METADATA" = "false"
2026-03-10 13:57:37.660517 I | op-k8sutil: operator setting "CSI_NFS_FSGROUPPOLICY" = "File"
2026-03-10 13:57:37.660522 I | op-k8sutil: operator setting "CSI_ENABLE_ENCRYPTION" = "false"
2026-03-10 13:57:37.660526 I | op-k8sutil: operator setting "CSI_ENABLE_VOLUME_GROUP_SNAPSHOT" = "true"
2026-03-10 13:57:37.660559 I | op-k8sutil: operator setting "CSI_PROVISIONER_REPLICAS" = "2"
2026-03-10 13:57:37.660569 I | op-k8sutil: operator setting "ROOK_CEPH_ALLOW_LOOP_DEVICES" = "false"
2026-03-10 13:57:37.660573 I | op-k8sutil: operator setting "ROOK_CEPH_COMMANDS_TIMEOUT_SECONDS" = "15"
2026-03-10 13:57:37.660577 I | op-k8sutil: operator setting "ROOK_CSIADDONS_IMAGE" = "quay.io/csiaddons/k8s-sidecar:v0.9.1"
2026-03-10 13:57:37.660589 I | op-k8sutil: operator setting "CSI_CEPHFS_PLUGIN_RESOURCE" = "- name : driver-registrar\n resource:\n requests:\n memory: 128Mi\n cpu: 50m\n limits:\n memory: 256Mi\n- name : csi-cephfsplugin\n resource:\n requests:\n memory: 512Mi\n cpu: 250m\n limits:\n memory: 1Gi\n- name : liveness-prometheus\n resource:\n requests:\n memory: 128Mi\n cpu: 50m\n limits:\n memory: 256Mi\n"
2026-03-10 13:57:37.660596 I | op-k8sutil: operator setting "CSI_ENABLE_CSIADDONS" = "false"
2026-03-10 13:57:37.660601 I | op-k8sutil: operator setting "ROOK_CSI_ATTACHER_IMAGE" = "registry.k8s.io/sig-storage/csi-attacher:v4.6.1"
2026-03-10 13:57:37.660606 I | op-k8sutil: operator setting "ROOK_CSI_SNAPSHOTTER_IMAGE" = "registry.k8s.io/sig-storage/csi-snapshotter:v8.0.1"
2026-03-10 13:57:37.660610 I | op-k8sutil: operator setting "ROOK_CSI_IMAGE_PULL_POLICY" = "IfNotPresent"
2026-03-10 13:57:37.660637 I | op-k8sutil: operator setting "ROOK_CSI_PROVISIONER_IMAGE" = "registry.k8s.io/sig-storage/csi-provisioner:v5.0.1"
2026-03-10 13:57:37.660649 I | op-k8sutil: operator setting "ROOK_CSI_RESIZER_IMAGE" = "registry.k8s.io/sig-storage/csi-resizer:v1.11.1"
2026-03-10 13:57:37.660653 I | op-k8sutil: operator setting "CSI_FORCE_CEPHFS_KERNEL_CLIENT" = "true"
2026-03-10 13:57:37.660656 I | op-k8sutil: operator setting "ROOK_CSI_DISABLE_DRIVER" = "false"
2026-03-10 13:57:37.660661 I | op-k8sutil: operator setting "CSI_PROVISIONER_PRIORITY_CLASSNAME" = "system-cluster-critical"
2026-03-10 13:57:37.660680 I | op-k8sutil: operator setting "ROOK_LOG_LEVEL" = "INFO"
2026-03-10 13:57:37.660690 I | op-k8sutil: operator setting "CSI_ENABLE_CEPHFS_SNAPSHOTTER" = "true"
2026-03-10 13:57:37.660694 I | op-k8sutil: operator setting "CSI_PLUGIN_PRIORITY_CLASSNAME" = "system-node-critical"
2026-03-10 13:57:37.660698 I | op-k8sutil: operator setting "CSI_ENABLE_OMAP_GENERATOR" = "false"
2026-03-10 13:57:37.660703 I | op-k8sutil: operator setting "ROOK_CSI_REGISTRAR_IMAGE" = "registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.11.1"
2026-03-10 13:57:37.660708 I | op-k8sutil: operator setting "CSI_NFS_ATTACH_REQUIRED" = "true"
2026-03-10 13:57:37.660716 I | op-k8sutil: operator setting "CSI_NFS_PROVISIONER_RESOURCE" = "- name : csi-provisioner\n resource:\n requests:\n memory: 128Mi\n cpu: 100m\n limits:\n memory: 256Mi\n- name : csi-nfsplugin\n resource:\n requests:\n memory: 512Mi\n cpu: 250m\n limits:\n memory: 1Gi\n- name : csi-attacher\n resource:\n requests:\n memory: 512Mi\n cpu: 250m\n limits:\n memory: 1Gi\n"
2026-03-10 13:57:37.660740 I | op-k8sutil: operator setting "CSI_RBD_PROVISIONER_RESOURCE" = "- name : csi-provisioner\n resource:\n requests:\n memory: 128Mi\n cpu: 100m\n limits:\n memory: 256Mi\n- name : csi-resizer\n resource:\n requests:\n memory: 128Mi\n cpu: 100m\n limits:\n memory: 256Mi\n- name : csi-attacher\n resource:\n requests:\n memory: 128Mi\n cpu: 100m\n limits:\n memory: 256Mi\n- name : csi-snapshotter\n resource:\n requests:\n memory: 128Mi\n cpu: 100m\n limits:\n memory: 256Mi\n- name : csi-rbdplugin\n resource:\n requests:\n memory: 512Mi\n limits:\n memory: 1Gi\n- name : csi-omap-generator\n resource:\n requests:\n memory: 512Mi\n cpu: 250m\n limits:\n memory: 1Gi\n- name : liveness-prometheus\n resource:\n requests:\n memory: 128Mi\n cpu: 50m\n limits:\n memory: 256Mi\n"
2026-03-10 13:57:37.660770 I | op-k8sutil: operator setting "ROOK_CSI_ENABLE_RBD" = "false"
2026-03-10 13:57:37.660781 I | op-k8sutil: operator setting "ROOK_ENABLE_DISCOVERY_DAEMON" = "false"
2026-03-10 13:57:37.660795 I | op-k8sutil: operator setting "CSI_CEPHFS_PROVISIONER_RESOURCE" = "- name : csi-provisioner\n resource:\n requests:\n memory: 128Mi\n cpu: 100m\n limits:\n memory: 256Mi\n- name : csi-resizer\n resource:\n requests:\n memory: 128Mi\n cpu: 100m\n limits:\n memory: 256Mi\n- name : csi-attacher\n resource:\n requests:\n memory: 128Mi\n cpu: 100m\n limits:\n memory: 256Mi\n- name : csi-snapshotter\n resource:\n requests:\n memory: 128Mi\n cpu: 100m\n limits:\n memory: 256Mi\n- name : csi-cephfsplugin\n resource:\n requests:\n memory: 512Mi\n cpu: 250m\n limits:\n memory: 1Gi\n- name : liveness-prometheus\n resource:\n requests:\n memory: 128Mi\n cpu: 50m\n limits:\n memory: 256Mi\n"
2026-03-10 13:57:37.660801 I | op-k8sutil: operator setting "CSI_ENABLE_TOPOLOGY" = "false"
2026-03-10 13:57:37.660808 I | op-k8sutil: operator setting "CSI_ENABLE_RBD_SNAPSHOTTER" = "true"
2026-03-10 13:57:37.660813 I | op-k8sutil: operator setting "CSI_GRPC_TIMEOUT_SECONDS" = "150"
2026-03-10 13:57:37.660823 I | op-k8sutil: operator setting "CSI_NFS_PLUGIN_RESOURCE" = "- name : driver-registrar\n resource:\n requests:\n memory: 128Mi\n cpu: 50m\n limits:\n memory: 256Mi\n- name : csi-nfsplugin\n resource:\n requests:\n memory: 512Mi\n cpu: 250m\n limits:\n memory: 1Gi\n"
2026-03-10 13:57:37.660849 I | op-k8sutil: operator setting "CSI_RBD_ATTACH_REQUIRED" = "true"
2026-03-10 13:57:37.660853 I | op-k8sutil: operator setting "CSI_RBD_FSGROUPPOLICY" = "File"
2026-03-10 13:57:37.660881 I | op-k8sutil: operator setting "CSI_RBD_PLUGIN_RESOURCE" = "- name : driver-registrar\n resource:\n requests:\n memory: 128Mi\n cpu: 50m\n limits:\n memory: 256Mi\n- name : csi-rbdplugin\n resource:\n requests:\n memory: 512Mi\n cpu: 250m\n limits:\n memory: 1Gi\n- name : liveness-prometheus\n resource:\n requests:\n memory: 128Mi\n cpu: 50m\n limits:\n memory: 256Mi\n"
2026-03-10 13:57:37.660892 I | op-k8sutil: operator setting "CSI_CEPHFS_ATTACH_REQUIRED" = "true"
2026-03-10 13:57:37.660896 I | op-k8sutil: operator setting "CSI_DISABLE_HOLDER_PODS" = "true"
2026-03-10 13:57:37.660901 I | op-k8sutil: operator setting "ROOK_CSI_CEPH_IMAGE" = "quay.io/cephcsi/cephcsi:v3.12.3"
2026-03-10 13:57:37.660905 I | op-k8sutil: operator setting "ROOK_OBC_WATCH_OPERATOR_NAMESPACE" = "true"
2026-03-10 13:57:37.660910 I | op-k8sutil: operator setting "CSI_ENABLE_NFS_SNAPSHOTTER" = "true"
2026-03-10 13:57:37.660921 I | op-k8sutil: operator setting "ROOK_CSI_ENABLE_CEPHFS" = "false"
2026-03-10 13:57:37.660926 I | op-k8sutil: operator setting "CSI_CEPHFS_FSGROUPPOLICY" = "File"
2026-03-10 13:57:37.660931 I | op-k8sutil: operator setting "CSI_ENABLE_HOST_NETWORK" = "true"
2026-03-10 13:57:37.660938 I | operator: watching all namespaces for Ceph CRs
2026-03-10 13:57:37.661048 I | operator: setting up schemes
2026-03-10 13:57:37.663919 I | operator: setting up the controller-runtime manager
2026-03-10 13:57:37.664555 I | ceph-cluster-controller: successfully started
2026-03-10 13:57:37.665097 I | ceph-cluster-controller: enabling hotplug orchestration
2026-03-10 13:57:37.665122 I | ceph-nodedaemon-controller: successfully started
2026-03-10 13:57:37.665185 I | ceph-block-pool-controller: successfully started
2026-03-10 13:57:37.665254 I | ceph-object-store-user-controller: successfully started
2026-03-10 13:57:37.665367 I | ceph-object-realm-controller: successfully started
2026-03-10 13:57:37.665386 I | ceph-object-zonegroup-controller: successfully started
2026-03-10 13:57:37.665395 I | ceph-object-zone-controller: successfully started
2026-03-10 13:57:37.665533 I | ceph-object-controller: successfully started
2026-03-10 13:57:37.665596 I | ceph-file-controller: successfully started
2026-03-10 13:57:37.665621 I | ceph-nfs-controller: successfully started
2026-03-10 13:57:37.665668 I | ceph-rbd-mirror-controller: successfully started
2026-03-10 13:57:37.665711 I | ceph-client-controller: successfully started
2026-03-10 13:57:37.665731 I | ceph-filesystem-mirror-controller: successfully started
2026-03-10 13:57:37.665755 I | operator: rook-ceph-operator-config-controller successfully started
2026-03-10 13:57:37.665786 I | ceph-csi: rook-ceph-operator-csi-controller successfully started
2026-03-10 13:57:37.665955 I | op-bucket-prov: rook-ceph-operator-bucket-controller successfully started
2026-03-10 13:57:37.665981 I | ceph-bucket-topic: successfully started
2026-03-10 13:57:37.666018 I | ceph-bucket-notification: successfully started
2026-03-10 13:57:37.666026 I | ceph-bucket-notification: successfully started
2026-03-10 13:57:37.666038 I | ceph-fs-subvolumegroup-controller: successfully started
2026-03-10 13:57:37.666107 I | blockpool-rados-namespace-controller: successfully started
2026-03-10 13:57:37.666178 I | ceph-cosi-controller: successfully started
2026-03-10 13:57:37.666197 I | operator: starting the controller-runtime manager
2026-03-10 13:57:37.900714 I | operator: rook-ceph-operator-config-controller done reconciling
2026-03-10 13:57:37.907063 I | ceph-csi: successfully created csi config map "rook-ceph-csi-config"
2026-03-10 13:57:37.910371 I | op-k8sutil: removing daemonset csi-rbdplugin if it exists
2026-03-10 13:57:37.912636 I | op-k8sutil: removing deployment csi-rbdplugin-provisioner if it exists
2026-03-10 13:57:37.919366 I | ceph-csi: successfully removed CSI Ceph RBD driver
2026-03-10 13:57:37.919386 I | op-k8sutil: removing daemonset csi-cephfsplugin if it exists
2026-03-10 13:57:37.921200 I | op-k8sutil: removing deployment csi-cephfsplugin-provisioner if it exists
2026-03-10 13:57:37.925979 I | ceph-csi: successfully removed CSI CephFS driver
2026-03-10 13:57:37.926000 I | op-k8sutil: removing daemonset csi-nfsplugin if it exists
2026-03-10 13:57:37.927533 I | op-k8sutil: removing deployment csi-nfsplugin-provisioner if it exists
2026-03-10 13:57:37.932089 I | ceph-csi: successfully removed CSI NFS driver
2026-03-10 13:57:59.535550 I | ceph-spec: adding finalizer "cephcluster.ceph.rook.io" on "ceph"
2026-03-10 13:57:59.542388 I | ceph-spec: adding finalizer "cephobjectstore.ceph.rook.io" on "ceph"
2026-03-10 13:57:59.544057 I | clusterdisruption-controller: deleted all legacy node drain canary pods
2026-03-10 13:57:59.551791 I | ceph-spec: parsing mon endpoints: instance=10.96.240.200:6789
2026-03-10 13:57:59.551828 I | ceph-spec: updating obsolete maxMonID 0 to actual value 76846025234
2026-03-10 13:57:59.551991 I | op-bucket-prov: ceph bucket provisioner launched watching for provisioner "openstack.ceph.rook.io/bucket"
2026-03-10 13:57:59.552545 I | op-bucket-prov: successfully reconciled bucket provisioner
I0310 13:57:59.552658 1 manager.go:135] "msg"="starting provisioner" "logger"="objectbucket.io/provisioner-manager" "name"="openstack.ceph.rook.io/bucket"
2026-03-10 13:57:59.555167 I | ceph-cluster-controller: reconciling ceph cluster in namespace "openstack"
2026-03-10 13:57:59.583302 I | ceph-spec: parsing mon endpoints: instance=10.96.240.200:6789
2026-03-10 13:57:59.583324 I | ceph-spec: updating obsolete maxMonID 0 to actual value 76846025234
2026-03-10 13:57:59.583343 I | ceph-spec: found the cluster info to connect to the external cluster. will use "client.admin" to check health and monitor status. mons=map[instance:0xc00176a840]
2026-03-10 13:57:59.583359 I | ceph-spec: detecting the ceph image version for image harbor.atmosphere.dev/quay.io/ceph/ceph:v18.2.7...
2026-03-10 13:58:00.539849 I | ceph-spec: parsing mon endpoints: instance=10.96.240.200:6789
2026-03-10 13:58:00.539888 I | ceph-spec: updating obsolete maxMonID 0 to actual value 76846025234
2026-03-10 13:58:00.945361 I | op-k8sutil: removing daemonset csi-rbdplugin if it exists
2026-03-10 13:58:00.948047 I | op-k8sutil: removing deployment csi-rbdplugin-provisioner if it exists
2026-03-10 13:58:01.146881 I | ceph-csi: successfully removed CSI Ceph RBD driver
2026-03-10 13:58:01.146914 I | op-k8sutil: removing daemonset csi-cephfsplugin if it exists
2026-03-10 13:58:01.149212 I | op-k8sutil: removing deployment csi-cephfsplugin-provisioner if it exists
2026-03-10 13:58:01.342078 I | ceph-csi: successfully removed CSI CephFS driver
2026-03-10 13:58:01.342108 I | op-k8sutil: removing daemonset csi-nfsplugin if it exists
2026-03-10 13:58:01.346174 I | op-k8sutil: removing deployment csi-nfsplugin-provisioner if it exists
2026-03-10 13:58:01.544394 I | ceph-csi: successfully removed CSI NFS driver
2026-03-10 13:58:17.364226 I | ceph-spec: detected ceph image version: "18.2.7-0 reef"
2026-03-10 13:58:17.364261 I | ceph-cluster-controller: validating ceph version from provided image
2026-03-10 13:58:17.371201 I | ceph-spec: parsing mon endpoints: instance=10.96.240.200:6789
2026-03-10 13:58:17.371229 I | ceph-spec: updating obsolete maxMonID 0 to actual value 76846025234
2026-03-10 13:58:17.373338 I | cephclient: writing config file /var/lib/rook/openstack/openstack.config
2026-03-10 13:58:17.373413 I | cephclient: generated admin config in /var/lib/rook/openstack
2026-03-10 13:58:17.863770 E | cephver: external cluster ceph version is a major version higher "18.2.7-0 reef" than the local cluster "0.0.0-0 ", consider upgrading
2026-03-10 13:58:18.345087 I | ceph-cluster-controller: cluster "openstack": version "18.2.7-0 reef" detected for image "harbor.atmosphere.dev/quay.io/ceph/ceph:v18.2.7"
2026-03-10 13:58:18.392467 I | ceph-cluster-controller: creating "rook-config-override" configmap
2026-03-10 13:58:18.407145 I | ceph-cluster-controller: creating "rook-ceph-config" secret
2026-03-10 13:58:18.421308 I | ceph-cluster-controller: external cluster identity established
2026-03-10 13:58:18.421328 I | cephclient: getting or creating ceph auth key "client.csi-rbd-provisioner"
2026-03-10 13:58:19.195989 I | cephclient: getting or creating ceph auth key "client.csi-rbd-node"
2026-03-10 13:58:19.712996 I | cephclient: getting or creating ceph auth key "client.csi-cephfs-provisioner"
2026-03-10 13:58:20.684430 I | cephclient: getting or creating ceph auth key "client.csi-cephfs-node"
2026-03-10 13:58:20.745206 I | clusterdisruption-controller: all PGs are active+clean. Restoring default OSD pdb settings
2026-03-10 13:58:20.745244 I | clusterdisruption-controller: creating the default pdb "rook-ceph-osd" with maxUnavailable=1 for all osd
2026-03-10 13:58:20.758960 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-03-10 13:58:21.801392 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-03-10 13:58:21.873505 I | ceph-csi: created kubernetes csi secrets for cluster "openstack"
2026-03-10 13:58:21.882625 I | ceph-cluster-controller: successfully updated csi config map
2026-03-10 13:58:21.882651 I | cephclient: getting or creating ceph auth key "client.crash"
2026-03-10 13:58:22.389938 I | ceph-nodedaemon-controller: created kubernetes crash collector secret for cluster "openstack"
2026-03-10 13:58:22.445926 I | ceph-cluster-controller: enabling ceph mon monitoring goroutine for cluster "openstack"
2026-03-10 13:58:22.445994 I | ceph-cluster-controller: ceph status check interval is 1m0s
2026-03-10 13:58:22.446003 I | ceph-cluster-controller: enabling ceph status monitoring goroutine for cluster "openstack"
2026-03-10 13:58:29.628284 I | ceph-spec: parsing mon endpoints: instance=10.96.240.200:6789
2026-03-10 13:58:29.628313 I | ceph-spec: updating obsolete maxMonID 0 to actual value 76846025234
2026-03-10 13:58:29.628328 I | ceph-spec: detecting the ceph image version for image harbor.atmosphere.dev/quay.io/ceph/ceph:v18.2.7...
2026-03-10 13:58:31.561623 I | ceph-spec: detected ceph image version: "18.2.7-0 reef"
2026-03-10 13:58:32.859555 I | ceph-object-controller: reconciling object store deployments
2026-03-10 13:58:32.878572 I | ceph-object-controller: ceph object store gateway service running at 10.104.165.177
2026-03-10 13:58:32.878640 I | ceph-object-controller: reconciling object store pools
2026-03-10 13:58:36.211128 I | cephclient: reconciling replicated pool ceph.rgw.control succeeded
2026-03-10 13:58:36.211210 I | cephclient: setting pool property "pg_num_min" to "8" on pool "ceph.rgw.control"
2026-03-10 13:58:40.574146 I | cephclient: reconciling replicated pool ceph.rgw.meta succeeded
2026-03-10 13:58:40.574191 I | cephclient: setting pool property "pg_num_min" to "8" on pool "ceph.rgw.meta"
2026-03-10 13:58:45.357517 I | cephclient: reconciling replicated pool ceph.rgw.log succeeded
2026-03-10 13:58:45.357552 I | cephclient: setting pool property "pg_num_min" to "8" on pool "ceph.rgw.log"
2026-03-10 13:58:49.403240 I | cephclient: reconciling replicated pool ceph.rgw.buckets.index succeeded
2026-03-10 13:58:49.403275 I | cephclient: setting pool property "pg_num_min" to "8" on pool "ceph.rgw.buckets.index"
2026-03-10 13:58:51.264193 I | clusterdisruption-controller: all "host" failure domains: []. osd is down in failure domain: "". active node drains: false. pg health: "cluster is not fully clean. PGs: [{StateName:active+clean Count:24} {StateName:unknown Count:8} {StateName:creating+peering Count:1}]"
2026-03-10 13:58:51.270218 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-03-10 13:58:52.709432 I | clusterdisruption-controller: all "host" failure domains: []. osd is down in failure domain: "". active node drains: false. pg health: "cluster is not fully clean. PGs: [{StateName:active+clean Count:27} {StateName:unknown Count:6}]"
2026-03-10 13:58:52.714922 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-03-10 13:58:54.472364 I | cephclient: reconciling replicated pool ceph.rgw.buckets.non-ec succeeded
2026-03-10 13:58:54.472406 I | cephclient: setting pool property "pg_num_min" to "8" on pool "ceph.rgw.buckets.non-ec"
2026-03-10 13:58:58.556938 I | cephclient: reconciling replicated pool ceph.rgw.otp succeeded
2026-03-10 13:58:58.556992 I | cephclient: setting pool property "pg_num_min" to "8" on pool "ceph.rgw.otp"
2026-03-10 13:59:02.622779 I | cephclient: reconciling replicated pool .rgw.root succeeded
2026-03-10 13:59:02.622973 I | cephclient: setting pool property "pg_num_min" to "8" on pool ".rgw.root"
2026-03-10 13:59:06.708090 I | cephclient: reconciling replicated pool ceph.rgw.buckets.data succeeded
2026-03-10 13:59:06.708135 I | ceph-object-controller: configuring object store "ceph"
2026-03-10 13:59:07.389980 I | ceph-object-controller: Object store "ceph": realm=ceph, zonegroup=ceph, zone=ceph
2026-03-10 13:59:07.701010 I | ceph-object-controller: committing changes to RGW configuration period for CephObjectStore "openstack/ceph"
2026-03-10 13:59:08.300571 I | ceph-object-controller: configuration for object-store ceph is complete
2026-03-10 13:59:08.303491 I | ceph-object-controller: creating object store "ceph" in namespace "openstack"
2026-03-10 13:59:08.306908 I | cephclient: getting or creating ceph auth key "client.rgw.ceph.a"
2026-03-10 13:59:09.152566 W | op-mon: external cluster mon count is 1, consider adding new monitors.
2026-03-10 13:59:09.306427 I | ceph-object-controller: setting rgw config flags
2026-03-10 13:59:09.306476 I | ceph-object-controller: Configuring authentication with keystone
2026-03-10 13:59:09.306517 I | op-config: setting option "rgw_log_nonexistent_bucket" (user "client.rgw.ceph.a") to the mon configuration database
2026-03-10 13:59:09.706821 I | op-config: successfully set option "rgw_log_nonexistent_bucket" (user "client.rgw.ceph.a") to the mon configuration database
2026-03-10 13:59:09.706864 I | op-config: setting option "rgw_log_object_name_utc" (user "client.rgw.ceph.a") to the mon configuration database
2026-03-10 13:59:10.103505 I | op-config: successfully set option "rgw_log_object_name_utc" (user "client.rgw.ceph.a") to the mon configuration database
2026-03-10 13:59:10.103548 I | op-config: setting option "rgw_keystone_admin_project" (user "client.rgw.ceph.a") to the mon configuration database
2026-03-10 13:59:10.522330 I | op-config: successfully set option "rgw_keystone_admin_project" (user "client.rgw.ceph.a") to the mon configuration database
2026-03-10 13:59:10.522380 I | op-config: setting option "rgw_swift_versioning_enabled" (user "client.rgw.ceph.a") to the mon configuration database
2026-03-10 13:59:10.928803 I | op-config: successfully set option "rgw_swift_versioning_enabled" (user "client.rgw.ceph.a") to the mon configuration database
2026-03-10 13:59:10.928839 I | op-config: setting option "rgw_zone" (user "client.rgw.ceph.a") to the mon configuration database
2026-03-10 13:59:11.359763 I | op-config: successfully set option "rgw_zone" (user "client.rgw.ceph.a") to the mon configuration database
2026-03-10 13:59:11.359820 I | op-config: setting option "rgw_zonegroup" (user "client.rgw.ceph.a") to the mon configuration database
2026-03-10 13:59:11.768801 I | op-config: successfully set option "rgw_zonegroup" (user "client.rgw.ceph.a") to the mon configuration database
2026-03-10 13:59:11.768851 I | op-config: setting option "rgw_keystone_implicit_tenants" (user "client.rgw.ceph.a") to the mon configuration database
2026-03-10 13:59:12.169962 I | op-config: successfully set option "rgw_keystone_implicit_tenants" (user "client.rgw.ceph.a") to the mon configuration database
2026-03-10 13:59:12.170013 I | op-config: setting option "rgw_keystone_admin_password" (user "client.rgw.ceph.a") to the mon configuration database
2026-03-10 13:59:12.570771 I | op-config: successfully set option "rgw_keystone_admin_password" (user "client.rgw.ceph.a") to the mon configuration database
2026-03-10 13:59:12.570825 I | op-config: setting option "rgw_swift_account_in_url" (user "client.rgw.ceph.a") to the mon configuration database
2026-03-10 13:59:12.967428 I | op-config: successfully set option "rgw_swift_account_in_url" (user "client.rgw.ceph.a") to the mon configuration database
2026-03-10 13:59:12.967480 I | op-config: setting option "rgw_s3_auth_use_keystone" (user "client.rgw.ceph.a") to the mon configuration database
2026-03-10 13:59:13.369186 I | op-config: successfully set option "rgw_s3_auth_use_keystone" (user "client.rgw.ceph.a") to the mon configuration database
2026-03-10 13:59:13.369236 I | op-config: setting option "rgw_enable_usage_log" (user "client.rgw.ceph.a") to the mon configuration database
2026-03-10 13:59:13.758870 I | op-config: successfully set option "rgw_enable_usage_log" (user "client.rgw.ceph.a") to the mon configuration database
2026-03-10 13:59:13.758918 I | op-config: setting option "rgw_keystone_url" (user "client.rgw.ceph.a") to the mon configuration database
2026-03-10 13:59:14.170918 I | op-config: successfully set option "rgw_keystone_url" (user "client.rgw.ceph.a") to the mon configuration database
2026-03-10 13:59:14.170958 I | op-config: setting option "rgw_keystone_accepted_roles" (user "client.rgw.ceph.a") to the mon configuration database
2026-03-10 13:59:14.605929 I | op-config: successfully set option "rgw_keystone_accepted_roles" (user "client.rgw.ceph.a") to the mon configuration database
2026-03-10 13:59:14.605974 I | op-config: setting option "rgw_keystone_api_version" (user "client.rgw.ceph.a") to the mon configuration database
2026-03-10 13:59:15.013042 I | op-config: successfully set option "rgw_keystone_api_version" (user "client.rgw.ceph.a") to the mon configuration database
2026-03-10 13:59:15.013082 I | op-config: setting option "rgw_keystone_admin_domain" (user "client.rgw.ceph.a") to the mon configuration database
2026-03-10 13:59:15.392017 I | op-config: successfully set option "rgw_keystone_admin_domain" (user "client.rgw.ceph.a") to the mon configuration database
2026-03-10 13:59:15.392054 I | op-config: setting option "rgw_keystone_admin_user" (user "client.rgw.ceph.a") to the mon configuration database
2026-03-10 13:59:15.797811 I | op-config: successfully set option "rgw_keystone_admin_user" (user "client.rgw.ceph.a") to the mon configuration database
2026-03-10 13:59:15.797853 I | op-config: setting option "rgw_run_sync_thread" (user "client.rgw.ceph.a") to the mon configuration database
2026-03-10 13:59:16.194549 I | op-config: successfully set option "rgw_run_sync_thread" (user "client.rgw.ceph.a") to the mon configuration database
2026-03-10 13:59:16.194969 I | ceph-object-controller: object store "ceph" deployment "rook-ceph-rgw-ceph-a" created
2026-03-10 13:59:16.249724 I | ceph-object-controller: enabling rgw dashboard
2026-03-10 13:59:18.177140 I | ceph-object-controller: created object store "ceph" in namespace "openstack"
2026-03-10 13:59:18.177885 I | ceph-object-controller: setting the dashboard api secret key
2026-03-10 13:59:18.963651 I | ceph-object-controller: done setting the dashboard api secret key
2026-03-10 13:59:19.103636 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-03-10 13:59:21.748354 I | clusterdisruption-controller: all "host" failure domains: []. osd is down in failure domain: "". active node drains: false. pg health: "cluster is not fully clean. PGs: [{StateName:active+clean Count:58} {StateName:unknown Count:31}]"
2026-03-10 13:59:21.754210 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-03-10 13:59:23.208295 I | clusterdisruption-controller: all "host" failure domains: []. osd is down in failure domain: "". active node drains: false. pg health: "cluster is not fully clean. PGs: [{StateName:active+clean Count:58} {StateName:unknown Count:31}]"
2026-03-10 13:59:23.212644 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-03-10 13:59:52.230057 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-03-10 13:59:53.715612 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-03-10 13:59:55.087825 W | op-mon: external cluster mon count is 1, consider adding new monitors.
2026-03-10 14:00:22.694964 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-03-10 14:00:24.186473 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-03-10 14:00:41.073663 W | op-mon: external cluster mon count is 1, consider adding new monitors.
2026-03-10 14:00:53.172373 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-03-10 14:00:54.658057 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-03-10 14:01:23.679634 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-03-10 14:01:25.124436 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-03-10 14:01:27.317135 W | op-mon: external cluster mon count is 1, consider adding new monitors.
2026-03-10 14:01:54.163634 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-03-10 14:01:55.587033 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-03-10 14:02:13.231551 W | op-mon: external cluster mon count is 1, consider adding new monitors.
2026-03-10 14:02:24.634780 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-03-10 14:02:26.070251 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-03-10 14:02:55.126416 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-03-10 14:02:56.534224 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-03-10 14:02:59.188801 W | op-mon: external cluster mon count is 1, consider adding new monitors.
2026-03-10 14:03:25.625118 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-03-10 14:03:27.008544 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-03-10 14:03:45.122979 W | op-mon: external cluster mon count is 1, consider adding new monitors.
2026-03-10 14:03:56.110758 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-03-10 14:03:57.495853 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-03-10 14:04:26.585650 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-03-10 14:04:27.975805 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-03-10 14:04:31.098412 W | op-mon: external cluster mon count is 1, consider adding new monitors.
2026-03-10 14:04:57.071310 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-03-10 14:04:58.463932 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-03-10 14:05:17.033551 W | op-mon: external cluster mon count is 1, consider adding new monitors.
2026-03-10 14:05:27.552107 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-03-10 14:05:28.938122 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-03-10 14:05:58.029707 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-03-10 14:05:59.407066 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-03-10 14:06:02.980768 W | op-mon: external cluster mon count is 1, consider adding new monitors.
2026-03-10 14:06:28.511520 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-03-10 14:06:29.878119 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-03-10 14:06:48.948821 W | op-mon: external cluster mon count is 1, consider adding new monitors.
2026-03-10 14:06:59.002541 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-03-10 14:07:00.366215 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-03-10 14:07:29.491332 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-03-10 14:07:30.857985 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-03-10 14:07:34.886820 W | op-mon: external cluster mon count is 1, consider adding new monitors.
2026-03-10 14:07:59.972990 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-03-10 14:08:01.339508 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-03-10 14:08:20.826120 W | op-mon: external cluster mon count is 1, consider adding new monitors.
2026-03-10 14:08:30.449175 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-03-10 14:08:31.808605 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-03-10 14:09:00.939470 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-03-10 14:09:02.270598 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-03-10 14:09:06.749259 W | op-mon: external cluster mon count is 1, consider adding new monitors.