2026/03/19 22:11:09 maxprocs: Updating GOMAXPROCS=1: determined from CPU quota
2026-03-19 22:11:09.707967 I | rookcmd: starting Rook v1.15.9 with arguments '/usr/local/bin/rook ceph operator'
2026-03-19 22:11:09.707989 I | rookcmd: flag values: --enable-machine-disruption-budget=false, --help=false, --kubeconfig=, --log-level=INFO
2026-03-19 22:11:09.707992 I | cephcmd: starting Rook-Ceph operator
2026-03-19 22:11:09.972898 I | cephcmd: base ceph version inside the rook operator image is "ceph version 18.2.4 (e7ad5345525c7aa95470c26863873b581076945d) reef (stable)"
2026-03-19 22:11:09.979587 I | op-k8sutil: operator setting "CSI_CEPHFS_ATTACH_REQUIRED" = "true"
2026-03-19 22:11:09.979720 I | op-k8sutil: operator setting "CSI_CEPHFS_PROVISIONER_RESOURCE" = "- name : csi-provisioner\n resource:\n requests:\n memory: 128Mi\n cpu: 100m\n limits:\n memory: 256Mi\n- name : csi-resizer\n resource:\n requests:\n memory: 128Mi\n cpu: 100m\n limits:\n memory: 256Mi\n- name : csi-attacher\n resource:\n requests:\n memory: 128Mi\n cpu: 100m\n limits:\n memory: 256Mi\n- name : csi-snapshotter\n resource:\n requests:\n memory: 128Mi\n cpu: 100m\n limits:\n memory: 256Mi\n- name : csi-cephfsplugin\n resource:\n requests:\n memory: 512Mi\n cpu: 250m\n limits:\n memory: 1Gi\n- name : liveness-prometheus\n resource:\n requests:\n memory: 128Mi\n cpu: 50m\n limits:\n memory: 256Mi\n"
2026-03-19 22:11:09.979796 I | op-k8sutil: operator setting "CSI_RBD_ATTACH_REQUIRED" = "true"
2026-03-19 22:11:09.979837 I | op-k8sutil: operator setting "CSI_CEPHFS_FSGROUPPOLICY" = "File"
2026-03-19 22:11:09.979964 I | op-k8sutil: operator setting "CSI_CEPHFS_PLUGIN_RESOURCE" = "- name : driver-registrar\n resource:\n requests:\n memory: 128Mi\n cpu: 50m\n limits:\n memory: 256Mi\n- name : csi-cephfsplugin\n resource:\n requests:\n memory: 512Mi\n cpu: 250m\n limits:\n memory: 1Gi\n- name : liveness-prometheus\n resource:\n requests:\n memory: 128Mi\n cpu: 50m\n limits:\n memory: 256Mi\n"
2026-03-19 22:11:09.980058 I | op-k8sutil: operator setting "CSI_ENABLE_OMAP_GENERATOR" = "false"
2026-03-19 22:11:09.980111 I | op-k8sutil: operator setting "CSI_ENABLE_VOLUME_GROUP_SNAPSHOT" = "true"
2026-03-19 22:11:09.980157 I | op-k8sutil: operator setting "ROOK_CSI_IMAGE_PULL_POLICY" = "IfNotPresent"
2026-03-19 22:11:09.980208 I | op-k8sutil: operator setting "ROOK_CSI_RESIZER_IMAGE" = "registry.k8s.io/sig-storage/csi-resizer:v1.11.1"
2026-03-19 22:11:09.980250 I | op-k8sutil: operator setting "ROOK_LOG_LEVEL" = "INFO"
2026-03-19 22:11:09.980298 I | op-k8sutil: operator setting "CSI_ENABLE_ENCRYPTION" = "false"
2026-03-19 22:11:09.980358 I | op-k8sutil: operator setting "CSI_RBD_PROVISIONER_RESOURCE" = "- name : csi-provisioner\n resource:\n requests:\n memory: 128Mi\n cpu: 100m\n limits:\n memory: 256Mi\n- name : csi-resizer\n resource:\n requests:\n memory: 128Mi\n cpu: 100m\n limits:\n memory: 256Mi\n- name : csi-attacher\n resource:\n requests:\n memory: 128Mi\n cpu: 100m\n limits:\n memory: 256Mi\n- name : csi-snapshotter\n resource:\n requests:\n memory: 128Mi\n cpu: 100m\n limits:\n memory: 256Mi\n- name : csi-rbdplugin\n resource:\n requests:\n memory: 512Mi\n limits:\n memory: 1Gi\n- name : csi-omap-generator\n resource:\n requests:\n memory: 512Mi\n cpu: 250m\n limits:\n memory: 1Gi\n- name : liveness-prometheus\n resource:\n requests:\n memory: 128Mi\n cpu: 50m\n limits:\n memory: 256Mi\n"
2026-03-19 22:11:09.980430 I | op-k8sutil: operator setting "ROOK_CSI_ENABLE_RBD" = "false"
2026-03-19 22:11:09.980456 I | op-k8sutil: operator setting "ROOK_CSI_SNAPSHOTTER_IMAGE" = "registry.k8s.io/sig-storage/csi-snapshotter:v8.0.1"
2026-03-19 22:11:09.980505 I | op-k8sutil: operator setting "ROOK_CSI_PROVISIONER_IMAGE" = "registry.k8s.io/sig-storage/csi-provisioner:v5.0.1"
2026-03-19 22:11:09.980553 I | op-k8sutil: operator setting "CSI_ENABLE_HOST_NETWORK" = "true"
2026-03-19 22:11:09.980602 I | op-k8sutil: operator setting "CSI_FORCE_CEPHFS_KERNEL_CLIENT" = "true"
2026-03-19 22:11:09.980657 I | op-k8sutil: operator setting "CSI_NFS_FSGROUPPOLICY" = "File"
2026-03-19 22:11:09.980751 I | op-k8sutil: operator setting "CSI_RBD_FSGROUPPOLICY" = "File"
2026-03-19 22:11:09.980775 I | op-k8sutil: operator setting "ROOK_CSIADDONS_IMAGE" = "quay.io/csiaddons/k8s-sidecar:v0.9.1"
2026-03-19 22:11:09.980830 I | op-k8sutil: operator setting "ROOK_CSI_DISABLE_DRIVER" = "false"
2026-03-19 22:11:09.980884 I | op-k8sutil: operator setting "ROOK_CSI_ENABLE_NFS" = "false"
2026-03-19 22:11:09.980903 I | op-k8sutil: operator setting "CSI_ENABLE_METADATA" = "false"
2026-03-19 22:11:09.980978 I | op-k8sutil: operator setting "ROOK_CEPH_ALLOW_LOOP_DEVICES" = "false"
2026-03-19 22:11:09.981034 I | op-k8sutil: operator setting "ROOK_CSI_ATTACHER_IMAGE" = "registry.k8s.io/sig-storage/csi-attacher:v4.6.1"
2026-03-19 22:11:09.981104 I | op-k8sutil: operator setting "ROOK_CSI_ENABLE_CEPHFS" = "false"
2026-03-19 22:11:09.981173 I | op-k8sutil: operator setting "ROOK_CSI_REGISTRAR_IMAGE" = "registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.11.1"
2026-03-19 22:11:09.981235 I | op-k8sutil: operator setting "ROOK_ENABLE_DISCOVERY_DAEMON" = "false"
2026-03-19 22:11:09.981296 I | op-k8sutil: operator setting "CSI_DISABLE_HOLDER_PODS" = "true"
2026-03-19 22:11:09.981340 I | op-k8sutil: operator setting "CSI_ENABLE_CSIADDONS" = "false"
2026-03-19 22:11:09.981390 I | op-k8sutil: operator setting "CSI_ENABLE_TOPOLOGY" = "false"
2026-03-19 22:11:09.981441 I | op-k8sutil: operator setting "CSI_GRPC_TIMEOUT_SECONDS" = "150"
2026-03-19 22:11:09.981500 I | op-k8sutil: operator setting "CSI_PLUGIN_ENABLE_SELINUX_HOST_MOUNT" = "false"
2026-03-19 22:11:09.981556 I | op-k8sutil: operator setting "ROOK_OBC_WATCH_OPERATOR_NAMESPACE" = "true"
2026-03-19 22:11:09.981601 I | op-k8sutil: operator setting "CSI_ENABLE_CEPHFS_SNAPSHOTTER" = "true"
2026-03-19 22:11:09.981670 I | op-k8sutil: operator setting "CSI_ENABLE_NFS_SNAPSHOTTER" = "true"
2026-03-19 22:11:09.981736 I | op-k8sutil: operator setting "CSI_ENABLE_RBD_SNAPSHOTTER" = "true"
2026-03-19 22:11:09.981779 I | op-k8sutil: operator setting "CSI_PROVISIONER_PRIORITY_CLASSNAME" = "system-cluster-critical"
2026-03-19 22:11:09.981837 I | op-k8sutil: operator setting "CSI_PROVISIONER_REPLICAS" = "2"
2026-03-19 22:11:09.981885 I | op-k8sutil: operator setting "CSI_RBD_PLUGIN_RESOURCE" = "- name : driver-registrar\n resource:\n requests:\n memory: 128Mi\n cpu: 50m\n limits:\n memory: 256Mi\n- name : csi-rbdplugin\n resource:\n requests:\n memory: 512Mi\n cpu: 250m\n limits:\n memory: 1Gi\n- name : liveness-prometheus\n resource:\n requests:\n memory: 128Mi\n cpu: 50m\n limits:\n memory: 256Mi\n"
2026-03-19 22:11:09.981955 I | op-k8sutil: operator setting "CSI_NFS_ATTACH_REQUIRED" = "true"
2026-03-19 22:11:09.982032 I | op-k8sutil: operator setting "CSI_NFS_PLUGIN_RESOURCE" = "- name : driver-registrar\n resource:\n requests:\n memory: 128Mi\n cpu: 50m\n limits:\n memory: 256Mi\n- name : csi-nfsplugin\n resource:\n requests:\n memory: 512Mi\n cpu: 250m\n limits:\n memory: 1Gi\n"
2026-03-19 22:11:09.982104 I | op-k8sutil: operator setting "CSI_NFS_PROVISIONER_RESOURCE" = "- name : csi-provisioner\n resource:\n requests:\n memory: 128Mi\n cpu: 100m\n limits:\n memory: 256Mi\n- name : csi-nfsplugin\n resource:\n requests:\n memory: 512Mi\n cpu: 250m\n limits:\n memory: 1Gi\n- name : csi-attacher\n resource:\n requests:\n memory: 512Mi\n cpu: 250m\n limits:\n memory: 1Gi\n"
2026-03-19 22:11:09.982139 I | op-k8sutil: operator setting "CSI_PLUGIN_PRIORITY_CLASSNAME" = "system-node-critical"
2026-03-19 22:11:09.982193 I | op-k8sutil: operator setting "ROOK_CEPH_COMMANDS_TIMEOUT_SECONDS" = "15"
2026-03-19 22:11:09.982227 I | op-k8sutil: operator setting "ROOK_CSI_CEPH_IMAGE" = "quay.io/cephcsi/cephcsi:v3.12.3"
2026-03-19 22:11:09.982248 I | operator: watching all namespaces for Ceph CRs
2026-03-19 22:11:09.982311 I | operator: setting up schemes
2026-03-19 22:11:09.984923 I | operator: setting up the controller-runtime manager
2026-03-19 22:11:09.985674 I | ceph-cluster-controller: successfully started
2026-03-19 22:11:09.986226 I | ceph-cluster-controller: enabling hotplug orchestration
2026-03-19 22:11:09.986461 I | ceph-nodedaemon-controller: successfully started
2026-03-19 22:11:09.986583 I | ceph-block-pool-controller: successfully started
2026-03-19 22:11:09.986686 I | ceph-object-store-user-controller: successfully started
2026-03-19 22:11:09.986774 I | ceph-object-realm-controller: successfully started
2026-03-19 22:11:09.986827 I | ceph-object-zonegroup-controller: successfully started
2026-03-19 22:11:09.986909 I | ceph-object-zone-controller: successfully started
2026-03-19 22:11:09.987064 I | ceph-object-controller: successfully started
2026-03-19 22:11:09.987173 I | ceph-file-controller: successfully started
2026-03-19 22:11:09.987264 I | ceph-nfs-controller: successfully started
2026-03-19 22:11:09.987356 I | ceph-rbd-mirror-controller: successfully started
2026-03-19 22:11:09.987445 I | ceph-client-controller: successfully started
2026-03-19 22:11:09.987526 I | ceph-filesystem-mirror-controller: successfully started
2026-03-19 22:11:09.987607 I | operator: rook-ceph-operator-config-controller successfully started
2026-03-19 22:11:09.987685 I | ceph-csi: rook-ceph-operator-csi-controller successfully started
2026-03-19 22:11:09.987910 I | op-bucket-prov: rook-ceph-operator-bucket-controller successfully started
2026-03-19 22:11:09.988011 I | ceph-bucket-topic: successfully started
2026-03-19 22:11:09.988085 I | ceph-bucket-notification: successfully started
2026-03-19 22:11:09.988136 I | ceph-bucket-notification: successfully started
2026-03-19 22:11:09.988207 I | ceph-fs-subvolumegroup-controller: successfully started
2026-03-19 22:11:09.988335 I | blockpool-rados-namespace-controller: successfully started
2026-03-19 22:11:09.988494 I | ceph-cosi-controller: successfully started
2026-03-19 22:11:09.988609 I | operator: starting the controller-runtime manager
2026-03-19 22:11:10.288028 I | operator: rook-ceph-operator-config-controller done reconciling
2026-03-19 22:11:10.297548 I | ceph-csi: successfully created csi config map "rook-ceph-csi-config"
2026-03-19 22:11:10.299872 I | op-k8sutil: removing daemonset csi-rbdplugin if it exists
2026-03-19 22:11:10.302260 I | op-k8sutil: removing deployment csi-rbdplugin-provisioner if it exists
2026-03-19 22:11:10.308114 I | ceph-csi: successfully removed CSI Ceph RBD driver
2026-03-19 22:11:10.308138 I | op-k8sutil: removing daemonset csi-cephfsplugin if it exists
2026-03-19 22:11:10.309876 I | op-k8sutil: removing deployment csi-cephfsplugin-provisioner if it exists
2026-03-19 22:11:10.315517 I | ceph-csi: successfully removed CSI CephFS driver
2026-03-19 22:11:10.315549 I | op-k8sutil: removing daemonset csi-nfsplugin if it exists
2026-03-19 22:11:10.317385 I | op-k8sutil: removing deployment csi-nfsplugin-provisioner if it exists
2026-03-19 22:11:10.322193 I | ceph-csi: successfully removed CSI NFS driver
2026-03-19 22:11:34.619970 I | ceph-spec: adding finalizer "cephcluster.ceph.rook.io" on "ceph"
2026-03-19 22:11:34.624740 I | clusterdisruption-controller: deleted all legacy node drain canary pods
2026-03-19 22:11:34.626433 I | ceph-spec: adding finalizer "cephobjectstore.ceph.rook.io" on "ceph"
2026-03-19 22:11:34.656830 I | ceph-cluster-controller: reconciling ceph cluster in namespace "openstack"
2026-03-19 22:11:34.659566 I | ceph-spec: parsing mon endpoints: instance=10.96.240.200:6789
2026-03-19 22:11:34.659585 I | ceph-spec: updating obsolete maxMonID 0 to actual value 76846025234
2026-03-19 22:11:34.659634 I | op-bucket-prov: ceph bucket provisioner launched watching for provisioner "openstack.ceph.rook.io/bucket"
2026-03-19 22:11:34.660024 I | op-bucket-prov: successfully reconciled bucket provisioner
I0319 22:11:34.660090 1 manager.go:135] "msg"="starting provisioner" "logger"="objectbucket.io/provisioner-manager" "name"="openstack.ceph.rook.io/bucket"
2026-03-19 22:11:34.821936 I | ceph-spec: parsing mon endpoints: instance=10.96.240.200:6789
2026-03-19 22:11:34.821968 I | ceph-spec: updating obsolete maxMonID 0 to actual value 76846025234
2026-03-19 22:11:34.821994 I | ceph-spec: found the cluster info to connect to the external cluster. will use "client.admin" to check health and monitor status. mons=map[instance:0xc001bf11a0]
2026-03-19 22:11:34.822019 I | ceph-spec: detecting the ceph image version for image harbor.atmosphere.dev/quay.io/ceph/ceph:v18.2.7...
2026-03-19 22:11:35.422365 I | ceph-spec: parsing mon endpoints: instance=10.96.240.200:6789
2026-03-19 22:11:35.422399 I | ceph-spec: updating obsolete maxMonID 0 to actual value 76846025234
2026-03-19 22:11:36.029244 I | op-k8sutil: removing daemonset csi-rbdplugin if it exists
2026-03-19 22:11:36.032379 I | op-k8sutil: removing deployment csi-rbdplugin-provisioner if it exists
2026-03-19 22:11:36.728064 I | ceph-csi: successfully removed CSI Ceph RBD driver
2026-03-19 22:11:36.728209 I | op-k8sutil: removing daemonset csi-cephfsplugin if it exists
2026-03-19 22:11:36.739606 I | op-k8sutil: removing deployment csi-cephfsplugin-provisioner if it exists
2026-03-19 22:11:36.754557 I | ceph-csi: successfully removed CSI CephFS driver
2026-03-19 22:11:36.754579 I | op-k8sutil: removing daemonset csi-nfsplugin if it exists
2026-03-19 22:11:36.757201 I | op-k8sutil: removing deployment csi-nfsplugin-provisioner if it exists
2026-03-19 22:11:36.763613 I | ceph-csi: successfully removed CSI NFS driver
2026-03-19 22:11:54.697332 I | ceph-spec: detected ceph image version: "18.2.7-0 reef"
2026-03-19 22:11:54.697364 I | ceph-cluster-controller: validating ceph version from provided image
2026-03-19 22:11:54.702574 I | ceph-spec: parsing mon endpoints: instance=10.96.240.200:6789
2026-03-19 22:11:54.702596 I | ceph-spec: updating obsolete maxMonID 0 to actual value 76846025234
2026-03-19 22:11:54.705894 I | cephclient: writing config file /var/lib/rook/openstack/openstack.config
2026-03-19 22:11:54.706137 I | cephclient: generated admin config in /var/lib/rook/openstack
2026-03-19 22:11:56.659998 E | cephver: external cluster ceph version is a major version higher "18.2.7-0 reef" than the local cluster "0.0.0-0 ", consider upgrading
2026-03-19 22:11:56.660817 I | clusterdisruption-controller: all PGs are active+clean. Restoring default OSD pdb settings
2026-03-19 22:11:56.660849 I | clusterdisruption-controller: creating the default pdb "rook-ceph-osd" with maxUnavailable=1 for all osd
2026-03-19 22:11:56.766352 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-03-19 22:11:57.627684 I | ceph-cluster-controller: cluster "openstack": version "18.2.7-0 reef" detected for image "harbor.atmosphere.dev/quay.io/ceph/ceph:v18.2.7"
2026-03-19 22:11:58.795030 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-03-19 22:11:58.880971 I | ceph-cluster-controller: creating "rook-config-override" configmap
2026-03-19 22:11:58.884035 I | ceph-cluster-controller: creating "rook-ceph-config" secret
2026-03-19 22:11:58.935371 I | ceph-cluster-controller: external cluster identity established
2026-03-19 22:11:58.935407 I | cephclient: getting or creating ceph auth key "client.csi-rbd-provisioner"
2026-03-19 22:11:59.566871 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-03-19 22:11:59.992515 I | cephclient: getting or creating ceph auth key "client.csi-rbd-node"
2026-03-19 22:12:01.357887 I | cephclient: getting or creating ceph auth key "client.csi-cephfs-provisioner"
2026-03-19 22:12:02.522147 I | cephclient: getting or creating ceph auth key "client.csi-cephfs-node"
2026-03-19 22:12:03.061432 I | ceph-csi: created kubernetes csi secrets for cluster "openstack"
2026-03-19 22:12:03.070775 I | ceph-cluster-controller: successfully updated csi config map
2026-03-19 22:12:03.070797 I | cephclient: getting or creating ceph auth key "client.crash"
2026-03-19 22:12:03.548785 I | ceph-nodedaemon-controller: created kubernetes crash collector secret for cluster "openstack"
2026-03-19 22:12:03.627287 I | ceph-cluster-controller: enabling ceph mon monitoring goroutine for cluster "openstack"
2026-03-19 22:12:03.627343 I | ceph-cluster-controller: ceph status check interval is 1m0s
2026-03-19 22:12:03.627349 I | ceph-cluster-controller: enabling ceph status monitoring goroutine for cluster "openstack"
2026-03-19 22:12:05.862849 I | ceph-spec: parsing mon endpoints: instance=10.96.240.200:6789
2026-03-19 22:12:05.862881 I | ceph-spec: updating obsolete maxMonID 0 to actual value 76846025234
2026-03-19 22:12:05.862897 I | ceph-spec: detecting the ceph image version for image harbor.atmosphere.dev/quay.io/ceph/ceph:v18.2.7...
2026-03-19 22:12:07.987095 I | ceph-spec: detected ceph image version: "18.2.7-0 reef"
2026-03-19 22:12:09.205708 I | ceph-object-controller: reconciling object store deployments
2026-03-19 22:12:09.255459 I | ceph-object-controller: ceph object store gateway service running at 10.102.162.211
2026-03-19 22:12:09.255500 I | ceph-object-controller: reconciling object store pools
2026-03-19 22:12:12.479469 I | cephclient: reconciling replicated pool ceph.rgw.control succeeded
2026-03-19 22:12:12.479510 I | cephclient: setting pool property "pg_num_min" to "8" on pool "ceph.rgw.control"
2026-03-19 22:12:16.515105 I | cephclient: reconciling replicated pool ceph.rgw.meta succeeded
2026-03-19 22:12:16.515137 I | cephclient: setting pool property "pg_num_min" to "8" on pool "ceph.rgw.meta"
2026-03-19 22:12:20.585621 I | cephclient: reconciling replicated pool ceph.rgw.log succeeded
2026-03-19 22:12:20.585692 I | cephclient: setting pool property "pg_num_min" to "8" on pool "ceph.rgw.log"
2026-03-19 22:12:24.643190 I | cephclient: reconciling replicated pool ceph.rgw.buckets.index succeeded
2026-03-19 22:12:24.643225 I | cephclient: setting pool property "pg_num_min" to "8" on pool "ceph.rgw.buckets.index"
2026-03-19 22:12:27.749961 I | clusterdisruption-controller: all "host" failure domains: []. osd is down in failure domain: "". active node drains: false. pg health: "cluster is not fully clean. PGs: [{StateName:active+clean Count:25} {StateName:creating+peering Count:7} {StateName:unknown Count:1}]"
2026-03-19 22:12:27.754993 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-03-19 22:12:29.858978 I | clusterdisruption-controller: all "host" failure domains: []. osd is down in failure domain: "". active node drains: false. pg health: "cluster is not fully clean. PGs: [{StateName:active+clean Count:26} {StateName:creating+peering Count:7}]"
2026-03-19 22:12:29.863977 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-03-19 22:12:30.782454 I | cephclient: reconciling replicated pool ceph.rgw.buckets.non-ec succeeded
2026-03-19 22:12:30.782487 I | cephclient: setting pool property "pg_num_min" to "8" on pool "ceph.rgw.buckets.non-ec"
2026-03-19 22:12:34.862832 I | cephclient: reconciling replicated pool ceph.rgw.otp succeeded
2026-03-19 22:12:34.862866 I | cephclient: setting pool property "pg_num_min" to "8" on pool "ceph.rgw.otp"
2026-03-19 22:12:38.909900 I | cephclient: reconciling replicated pool .rgw.root succeeded
2026-03-19 22:12:38.909943 I | cephclient: setting pool property "pg_num_min" to "8" on pool ".rgw.root"
2026-03-19 22:12:43.003113 I | cephclient: reconciling replicated pool ceph.rgw.buckets.data succeeded
2026-03-19 22:12:43.003148 I | ceph-object-controller: configuring object store "ceph"
2026-03-19 22:12:43.645679 I | ceph-object-controller: Object store "ceph": realm=ceph, zonegroup=ceph, zone=ceph
2026-03-19 22:12:43.845314 I | ceph-object-controller: committing changes to RGW configuration period for CephObjectStore "openstack/ceph"
2026-03-19 22:12:44.365561 I | ceph-object-controller: configuration for object-store ceph is complete
2026-03-19 22:12:44.369348 I | ceph-object-controller: creating object store "ceph" in namespace "openstack"
2026-03-19 22:12:44.374266 I | cephclient: getting or creating ceph auth key "client.rgw.ceph.a"
2026-03-19 22:12:44.881606 I | ceph-object-controller: setting rgw config flags
2026-03-19 22:12:44.881694 I | ceph-object-controller: Configuring authentication with keystone
2026-03-19 22:12:44.881719 I | op-config: setting option "rgw_zone" (user "client.rgw.ceph.a") to the mon configuration database
2026-03-19 22:12:45.330540 I | op-config: successfully set option "rgw_zone" (user "client.rgw.ceph.a") to the mon configuration database
2026-03-19 22:12:45.330575 I | op-config: setting option "rgw_s3_auth_use_keystone" (user "client.rgw.ceph.a") to the mon configuration database
2026-03-19 22:12:45.739404 I | op-config: successfully set option "rgw_s3_auth_use_keystone" (user "client.rgw.ceph.a") to the mon configuration database
2026-03-19 22:12:45.739442 I | op-config: setting option "rgw_log_nonexistent_bucket" (user "client.rgw.ceph.a") to the mon configuration database
2026-03-19 22:12:46.158044 I | op-config: successfully set option "rgw_log_nonexistent_bucket" (user "client.rgw.ceph.a") to the mon configuration database
2026-03-19 22:12:46.158127 I | op-config: setting option "rgw_enable_usage_log" (user "client.rgw.ceph.a") to the mon configuration database
2026-03-19 22:12:46.563910 I | op-config: successfully set option "rgw_enable_usage_log" (user "client.rgw.ceph.a") to the mon configuration database
2026-03-19 22:12:46.564057 I | op-config: setting option "rgw_keystone_admin_domain" (user "client.rgw.ceph.a") to the mon configuration database
2026-03-19 22:12:46.963176 I | op-config: successfully set option "rgw_keystone_admin_domain" (user "client.rgw.ceph.a") to the mon configuration database
2026-03-19 22:12:46.963215 I | op-config: setting option "rgw_keystone_admin_project" (user "client.rgw.ceph.a") to the mon configuration database
2026-03-19 22:12:47.362418 I | op-config: successfully set option "rgw_keystone_admin_project" (user "client.rgw.ceph.a") to the mon configuration database
2026-03-19 22:12:47.362451 I | op-config: setting option "rgw_keystone_admin_user" (user "client.rgw.ceph.a") to the mon configuration database
2026-03-19 22:12:47.767366 I | op-config: successfully set option "rgw_keystone_admin_user" (user "client.rgw.ceph.a") to the mon configuration database
2026-03-19 22:12:47.767397 I | op-config: setting option "rgw_run_sync_thread" (user "client.rgw.ceph.a") to the mon configuration database
2026-03-19 22:12:48.159181 I | op-config: successfully set option "rgw_run_sync_thread" (user "client.rgw.ceph.a") to the mon configuration database
2026-03-19 22:12:48.159219 I | op-config: setting option "rgw_keystone_implicit_tenants" (user "client.rgw.ceph.a") to the mon configuration database
2026-03-19 22:12:48.553461 I | op-config: successfully set option "rgw_keystone_implicit_tenants" (user "client.rgw.ceph.a") to the mon configuration database
2026-03-19 22:12:48.553495 I | op-config: setting option "rgw_keystone_api_version" (user "client.rgw.ceph.a") to the mon configuration database
2026-03-19 22:12:49.326026 I | op-config: successfully set option "rgw_keystone_api_version" (user "client.rgw.ceph.a") to the mon configuration database
2026-03-19 22:12:49.326062 I | op-config: setting option "rgw_swift_versioning_enabled" (user "client.rgw.ceph.a") to the mon configuration database
2026-03-19 22:12:50.237129 I | op-config: successfully set option "rgw_swift_versioning_enabled" (user "client.rgw.ceph.a") to the mon configuration database
2026-03-19 22:12:50.237166 I | op-config: setting option "rgw_log_object_name_utc" (user "client.rgw.ceph.a") to the mon configuration database
2026-03-19 22:12:50.736978 W | op-mon: external cluster mon count is 1, consider adding new monitors.
2026-03-19 22:12:50.929488 I | op-config: successfully set option "rgw_log_object_name_utc" (user "client.rgw.ceph.a") to the mon configuration database
2026-03-19 22:12:50.929527 I | op-config: setting option "rgw_zonegroup" (user "client.rgw.ceph.a") to the mon configuration database
2026-03-19 22:12:51.314291 I | op-config: successfully set option "rgw_zonegroup" (user "client.rgw.ceph.a") to the mon configuration database
2026-03-19 22:12:51.314327 I | op-config: setting option "rgw_keystone_url" (user "client.rgw.ceph.a") to the mon configuration database
2026-03-19 22:12:51.732550 I | op-config: successfully set option "rgw_keystone_url" (user "client.rgw.ceph.a") to the mon configuration database
2026-03-19 22:12:51.732587 I | op-config: setting option "rgw_keystone_accepted_roles" (user "client.rgw.ceph.a") to the mon configuration database
2026-03-19 22:12:52.114113 I | op-config: successfully set option "rgw_keystone_accepted_roles" (user "client.rgw.ceph.a") to the mon configuration database
2026-03-19 22:12:52.114151 I | op-config: setting option "rgw_keystone_admin_password" (user "client.rgw.ceph.a") to the mon configuration database
2026-03-19 22:12:52.507911 I | op-config: successfully set option "rgw_keystone_admin_password" (user "client.rgw.ceph.a") to the mon configuration database
2026-03-19 22:12:52.507952 I | op-config: setting option "rgw_swift_account_in_url" (user "client.rgw.ceph.a") to the mon configuration database
2026-03-19 22:12:52.935665 I | op-config: successfully set option "rgw_swift_account_in_url" (user "client.rgw.ceph.a") to the mon configuration database
2026-03-19 22:12:52.936013 I | ceph-object-controller: object store "ceph" deployment "rook-ceph-rgw-ceph-a" created
2026-03-19 22:12:52.996045 I | ceph-object-controller: enabling rgw dashboard
2026-03-19 22:12:54.847694 I | ceph-object-controller: created object store "ceph" in namespace "openstack"
2026-03-19 22:12:54.848531 I | ceph-object-controller: setting the dashboard api secret key
2026-03-19 22:12:55.736890 I | ceph-object-controller: done setting the dashboard api secret key
2026-03-19 22:12:55.789733 I | clusterdisruption-controller: all "host" failure domains: []. osd is down in failure domain: "". active node drains: false. pg health: "cluster is not fully clean. PGs: [{StateName:active+clean Count:75} {StateName:peering Count:7}]"
2026-03-19 22:12:55.794951 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-03-19 22:12:58.245474 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-03-19 22:13:00.335574 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-03-19 22:13:28.713774 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-03-19 22:13:30.795886 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-03-19 22:13:36.700766 W | op-mon: external cluster mon count is 1, consider adding new monitors.
2026-03-19 22:13:59.214496 I | clusterdisruption-controller: all "host" failure domains: []. osd is down in failure domain: "". active node drains: false. pg health: "cluster is not fully clean. PGs: [{StateName:active+clean Count:109} {StateName:peering Count:1}]"
2026-03-19 22:13:59.729972 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-03-19 22:14:01.951686 I | clusterdisruption-controller: all "host" failure domains: []. osd is down in failure domain: "". active node drains: false. pg health: "cluster is not fully clean. PGs: [{StateName:active+clean Count:109} {StateName:peering Count:1}]"
2026-03-19 22:14:02.031551 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-03-19 22:14:22.655416 W | op-mon: external cluster mon count is 1, consider adding new monitors.
2026-03-19 22:14:30.187811 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-03-19 22:14:32.512492 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-03-19 22:15:00.659286 I | clusterdisruption-controller: all "host" failure domains: []. osd is down in failure domain: "". active node drains: false. pg health: "cluster is not fully clean. PGs: [{StateName:active+clean Count:99} {StateName:peering Count:1}]"
2026-03-19 22:15:00.664283 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-03-19 22:15:02.999222 I | clusterdisruption-controller: all "host" failure domains: []. osd is down in failure domain: "". active node drains: false. pg health: "cluster is not fully clean. PGs: [{StateName:active+clean Count:97} {StateName:peering Count:2}]"
2026-03-19 22:15:03.004101 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-03-19 22:15:08.627110 W | op-mon: external cluster mon count is 1, consider adding new monitors.
2026-03-19 22:15:31.151582 I | clusterdisruption-controller: all "host" failure domains: []. osd is down in failure domain: "". active node drains: false. pg health: "cluster is not fully clean. PGs: [{StateName:active+clean Count:103} {StateName:peering Count:1}]"
2026-03-19 22:15:31.156274 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-03-19 22:15:33.469874 I | clusterdisruption-controller: all "host" failure domains: []. osd is down in failure domain: "". active node drains: false. pg health: "cluster is not fully clean. PGs: [{StateName:active+clean Count:101} {StateName:peering Count:3}]"
2026-03-19 22:15:33.474298 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-03-19 22:15:54.564677 W | op-mon: external cluster mon count is 1, consider adding new monitors.
2026-03-19 22:16:01.617027 I | clusterdisruption-controller: all "host" failure domains: []. osd is down in failure domain: "". active node drains: false. pg health: "cluster is not fully clean. PGs: [{StateName:active+clean Count:121} {StateName:peering Count:1}]"
2026-03-19 22:16:01.621909 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-03-19 22:16:03.962010 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-03-19 22:16:32.090930 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-03-19 22:16:34.432825 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-03-19 22:16:40.486045 W | op-mon: external cluster mon count is 1, consider adding new monitors.
2026-03-19 22:17:02.573376 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-03-19 22:17:04.922348 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-03-19 22:17:27.161797 W | op-mon: external cluster mon count is 1, consider adding new monitors.
2026-03-19 22:17:33.065253 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-03-19 22:17:35.373170 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-03-19 22:18:03.570375 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-03-19 22:18:05.826800 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-03-19 22:18:13.194123 W | op-mon: external cluster mon count is 1, consider adding new monitors.
2026-03-19 22:18:34.052349 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-03-19 22:18:36.316880 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-03-19 22:18:59.119638 W | op-mon: external cluster mon count is 1, consider adding new monitors.
2026-03-19 22:19:04.506004 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-03-19 22:19:06.793424 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-03-19 22:19:34.989616 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-03-19 22:19:37.269177 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-03-19 22:19:45.071471 W | op-mon: external cluster mon count is 1, consider adding new monitors.
2026-03-19 22:20:05.450040 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0 2026-03-19 22:20:07.743358 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0 2026-03-19 22:20:30.995148 W | op-mon: external cluster mon count is 1, consider adding new monitors. 2026-03-19 22:20:35.959743 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0 2026-03-19 22:20:38.203875 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0 2026-03-19 22:21:06.435116 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0 2026-03-19 22:21:08.747079 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0 2026-03-19 22:21:16.969902 W | op-mon: external cluster mon count is 1, consider adding new monitors. 2026-03-19 22:21:36.895650 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0 2026-03-19 22:21:39.301292 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0 2026-03-19 22:22:02.958390 W | op-mon: external cluster mon count is 1, consider adding new monitors. 
2026-03-19 22:22:07.375164 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0 2026-03-19 22:22:09.844715 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0 2026-03-19 22:22:38.005036 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0 2026-03-19 22:22:40.478428 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0 2026-03-19 22:22:48.929701 W | op-mon: external cluster mon count is 1, consider adding new monitors. 2026-03-19 22:23:08.460971 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0 2026-03-19 22:23:10.958079 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0 2026-03-19 22:23:34.884234 W | op-mon: external cluster mon count is 1, consider adding new monitors. 2026-03-19 22:23:38.938031 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0 2026-03-19 22:23:41.456581 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0 2026-03-19 22:24:09.419518 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0 2026-03-19 22:24:11.947921 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0 2026-03-19 22:24:20.904332 W | op-mon: external cluster mon count is 1, consider adding new monitors. 
2026-03-19 22:24:39.927930 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0 2026-03-19 22:24:42.536029 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0 2026-03-19 22:25:06.878646 W | op-mon: external cluster mon count is 1, consider adding new monitors.