2026/04/12 21:03:58 maxprocs: Updating GOMAXPROCS=1: determined from CPU quota
2026-04-12 21:03:58.174982 I | rookcmd: starting Rook v1.14.5 with arguments '/usr/local/bin/rook ceph operator'
2026-04-12 21:03:58.175002 I | rookcmd: flag values: --enable-machine-disruption-budget=false, --help=false, --kubeconfig=, --log-level=INFO
2026-04-12 21:03:58.175005 I | cephcmd: starting Rook-Ceph operator
2026-04-12 21:03:58.299601 I | cephcmd: base ceph version inside the rook operator image is "ceph version 18.2.2 (531c0d11a1c5d39fbfe6aa8a521f023abf3bf3e2) reef (stable)"
2026-04-12 21:03:58.308064 I | op-k8sutil: ROOK_CURRENT_NAMESPACE_ONLY="false" (env var)
2026-04-12 21:03:58.308094 I | operator: watching all namespaces for Ceph CRs
2026-04-12 21:03:58.308178 I | operator: setting up schemes
2026-04-12 21:03:58.311393 I | operator: setting up the controller-runtime manager
2026-04-12 21:03:58.311909 I | ceph-cluster-controller: successfully started
2026-04-12 21:03:58.315234 I | op-k8sutil: ROOK_DISABLE_DEVICE_HOTPLUG="false" (env var)
2026-04-12 21:03:58.315253 I | ceph-cluster-controller: enabling hotplug orchestration
2026-04-12 21:03:58.315274 I | ceph-nodedaemon-controller: successfully started
2026-04-12 21:03:58.315292 I | ceph-block-pool-controller: successfully started
2026-04-12 21:03:58.315317 I | ceph-object-store-user-controller: successfully started
2026-04-12 21:03:58.315337 I | ceph-object-realm-controller: successfully started
2026-04-12 21:03:58.315357 I | ceph-object-zonegroup-controller: successfully started
2026-04-12 21:03:58.315372 I | ceph-object-zone-controller: successfully started
2026-04-12 21:03:58.315487 I | ceph-object-controller: successfully started
2026-04-12 21:03:58.315520 I | ceph-file-controller: successfully started
2026-04-12 21:03:58.315549 I | ceph-nfs-controller: successfully started
2026-04-12 21:03:58.315584 I | ceph-rbd-mirror-controller: successfully started
2026-04-12 21:03:58.315621 I | ceph-client-controller: successfully started
2026-04-12 21:03:58.315640 I | ceph-filesystem-mirror-controller: successfully started
2026-04-12 21:03:58.315663 I | operator: rook-ceph-operator-config-controller successfully started
2026-04-12 21:03:58.315677 I | ceph-csi: rook-ceph-operator-csi-controller successfully started
2026-04-12 21:03:58.315793 I | op-bucket-prov: rook-ceph-operator-bucket-controller successfully started
2026-04-12 21:03:58.315821 I | ceph-bucket-topic: successfully started
2026-04-12 21:03:58.315834 I | ceph-bucket-notification: successfully started
2026-04-12 21:03:58.315848 I | ceph-bucket-notification: successfully started
2026-04-12 21:03:58.315864 I | ceph-fs-subvolumegroup-controller: successfully started
2026-04-12 21:03:58.315874 I | blockpool-rados-namespace-controller: successfully started
2026-04-12 21:03:58.315921 I | ceph-cosi-controller: successfully started
2026-04-12 21:03:58.315950 I | operator: starting the controller-runtime manager
2026-04-12 21:03:58.437118 I | op-k8sutil: ROOK_WATCH_FOR_NODE_FAILURE="true" (default)
2026-04-12 21:03:58.731464 I | op-k8sutil: ROOK_CEPH_COMMANDS_TIMEOUT_SECONDS="15" (configmap)
2026-04-12 21:03:58.731586 I | op-k8sutil: ROOK_LOG_LEVEL="INFO" (configmap)
2026-04-12 21:03:58.731677 I | op-k8sutil: ROOK_ENABLE_DISCOVERY_DAEMON="false" (configmap)
2026-04-12 21:03:58.735722 I | op-k8sutil: ROOK_CEPH_ALLOW_LOOP_DEVICES="false" (configmap)
2026-04-12 21:03:58.735795 I | operator: rook-ceph-operator-config-controller done reconciling
2026-04-12 21:03:58.746456 I | ceph-csi: successfully created csi config map "rook-ceph-csi-config"
2026-04-12 21:03:58.746553 I | op-k8sutil: ROOK_CSI_DISABLE_DRIVER="false" (configmap)
2026-04-12 21:03:58.747540 I | ceph-csi: CSI Ceph RBD driver disabled
2026-04-12 21:03:58.747588 I | op-k8sutil: removing daemonset csi-rbdplugin if it exists
2026-04-12 21:03:58.749476 I | op-k8sutil: removing deployment csi-rbdplugin-provisioner if it exists
2026-04-12 21:03:58.754480 I | ceph-csi: successfully removed CSI Ceph RBD driver
2026-04-12 21:03:58.754636 I | ceph-csi: CSI CephFS driver disabled
2026-04-12 21:03:58.754703 I | op-k8sutil: removing daemonset csi-cephfsplugin if it exists
2026-04-12 21:03:58.756511 I | op-k8sutil: removing deployment csi-cephfsplugin-provisioner if it exists
2026-04-12 21:03:58.761347 I | ceph-csi: successfully removed CSI CephFS driver
2026-04-12 21:03:58.761421 I | ceph-csi: CSI NFS driver disabled
2026-04-12 21:03:58.761473 I | op-k8sutil: removing daemonset csi-nfsplugin if it exists
2026-04-12 21:03:58.763033 I | op-k8sutil: removing deployment csi-nfsplugin-provisioner if it exists
2026-04-12 21:03:58.767395 I | ceph-csi: successfully removed CSI NFS driver
2026-04-12 21:04:19.494456 I | ceph-spec: adding finalizer "cephcluster.ceph.rook.io" on "ceph"
2026-04-12 21:04:19.500928 I | clusterdisruption-controller: deleted all legacy node drain canary pods
2026-04-12 21:04:19.501877 I | ceph-spec: adding finalizer "cephobjectstore.ceph.rook.io" on "ceph"
2026-04-12 21:04:19.515511 I | ceph-spec: parsing mon endpoints: instance=10.96.240.200:6789
2026-04-12 21:04:19.515535 I | ceph-spec: updating obsolete maxMonID 0 to actual value 76846025234
2026-04-12 21:04:19.515606 I | op-k8sutil: ROOK_OBC_WATCH_OPERATOR_NAMESPACE="true" (configmap)
2026-04-12 21:04:19.515610 I | op-k8sutil: ROOK_OBC_PROVISIONER_NAME_PREFIX="" (default)
2026-04-12 21:04:19.515614 I | op-bucket-prov: ceph bucket provisioner launched watching for provisioner "openstack.ceph.rook.io/bucket"
2026-04-12 21:04:19.515914 I | op-bucket-prov: successfully reconciled bucket provisioner
I0412 21:04:19.515976 1 manager.go:135] "msg"="starting provisioner" "logger"="objectbucket.io/provisioner-manager" "name"="openstack.ceph.rook.io/bucket"
2026-04-12 21:04:19.516828 I | ceph-cluster-controller: reconciling ceph cluster in namespace "openstack"
2026-04-12 21:04:19.539507 I | ceph-spec: parsing mon endpoints: instance=10.96.240.200:6789
2026-04-12 21:04:19.539532 I | ceph-spec: updating obsolete maxMonID 0 to actual value 76846025234
2026-04-12 21:04:19.539557 I | ceph-spec: found the cluster info to connect to the external cluster. will use "client.admin" to check health and monitor status. mons=map[instance:0xc0018ca630]
2026-04-12 21:04:19.539574 I | ceph-spec: detecting the ceph image version for image harbor.atmosphere.dev/quay.io/ceph/ceph:v18.2.7...
2026-04-12 21:04:19.697461 I | op-k8sutil: CSI_ENABLE_HOST_NETWORK="true" (configmap)
2026-04-12 21:04:19.697492 I | op-k8sutil: CSI_DISABLE_HOLDER_PODS="true" (configmap)
2026-04-12 21:04:20.296897 I | ceph-spec: parsing mon endpoints: instance=10.96.240.200:6789
2026-04-12 21:04:20.296931 I | ceph-spec: updating obsolete maxMonID 0 to actual value 76846025234
2026-04-12 21:04:20.497529 I | op-k8sutil: ROOK_CSI_ENABLE_RBD="false" (configmap)
2026-04-12 21:04:20.497561 I | op-k8sutil: ROOK_CSI_ENABLE_CEPHFS="false" (configmap)
2026-04-12 21:04:20.497569 I | op-k8sutil: ROOK_CSI_ENABLE_NFS="false" (configmap)
2026-04-12 21:04:20.497578 I | op-k8sutil: ROOK_CSI_ALLOW_UNSUPPORTED_VERSION="false" (default)
2026-04-12 21:04:20.497586 I | op-k8sutil: CSI_FORCE_CEPHFS_KERNEL_CLIENT="true" (configmap)
2026-04-12 21:04:20.497593 I | op-k8sutil: CSI_GRPC_TIMEOUT_SECONDS="150" (configmap)
2026-04-12 21:04:20.497603 I | op-k8sutil: CSI_CEPHFS_LIVENESS_METRICS_PORT="9081" (default)
2026-04-12 21:04:20.497611 I | op-k8sutil: CSIADDONS_PORT="9070" (default)
2026-04-12 21:04:20.497619 I | op-k8sutil: CSI_RBD_LIVENESS_METRICS_PORT="9080" (default)
2026-04-12 21:04:20.497626 I | op-k8sutil: CSI_ENABLE_LIVENESS="false" (default)
2026-04-12 21:04:20.497633 I | op-k8sutil: CSI_PLUGIN_PRIORITY_CLASSNAME="system-node-critical" (configmap)
2026-04-12 21:04:20.497640 I | op-k8sutil: CSI_PROVISIONER_PRIORITY_CLASSNAME="system-cluster-critical" (configmap)
2026-04-12 21:04:20.497650 I | op-k8sutil: CSI_ENABLE_OMAP_GENERATOR="false" (configmap)
2026-04-12 21:04:20.497659 I | op-k8sutil: CSI_ENABLE_RBD_SNAPSHOTTER="true" (configmap)
2026-04-12 21:04:20.497667 I | op-k8sutil: CSI_ENABLE_CEPHFS_SNAPSHOTTER="true" (configmap)
2026-04-12 21:04:20.497675 I | op-k8sutil: CSI_ENABLE_NFS_SNAPSHOTTER="true" (configmap)
2026-04-12 21:04:20.500163 I | op-k8sutil: CSI_ENABLE_CSIADDONS="false" (configmap)
2026-04-12 21:04:20.500190 I | op-k8sutil: CSI_ENABLE_TOPOLOGY="false" (configmap)
2026-04-12 21:04:20.500195 I | op-k8sutil: CSI_ENABLE_ENCRYPTION="false" (configmap)
2026-04-12 21:04:20.500199 I | op-k8sutil: CSI_ENABLE_METADATA="false" (configmap)
2026-04-12 21:04:20.500208 I | op-k8sutil: CSI_CEPHFS_PLUGIN_UPDATE_STRATEGY="RollingUpdate" (default)
2026-04-12 21:04:20.500214 I | op-k8sutil: CSI_CEPHFS_PLUGIN_UPDATE_STRATEGY_MAX_UNAVAILABLE="1" (default)
2026-04-12 21:04:20.500223 I | op-k8sutil: CSI_NFS_PLUGIN_UPDATE_STRATEGY="RollingUpdate" (default)
2026-04-12 21:04:20.500231 I | op-k8sutil: CSI_RBD_PLUGIN_UPDATE_STRATEGY="RollingUpdate" (default)
2026-04-12 21:04:20.500239 I | op-k8sutil: CSI_RBD_PLUGIN_UPDATE_STRATEGY_MAX_UNAVAILABLE="1" (default)
2026-04-12 21:04:20.500251 I | op-k8sutil: CSI_PLUGIN_ENABLE_SELINUX_HOST_MOUNT="false" (configmap)
2026-04-12 21:04:20.500259 I | ceph-csi: Kubernetes version is 1.28
2026-04-12 21:04:20.500266 I | op-k8sutil: CSI_LOG_LEVEL="" (default)
2026-04-12 21:04:20.500278 I | op-k8sutil: CSI_SIDECAR_LOG_LEVEL="" (default)
2026-04-12 21:04:20.500283 I | op-k8sutil: CSI_LEADER_ELECTION_LEASE_DURATION="" (default)
2026-04-12 21:04:20.500287 I | op-k8sutil: CSI_LEADER_ELECTION_RENEW_DEADLINE="" (default)
2026-04-12 21:04:20.500293 I | op-k8sutil: CSI_LEADER_ELECTION_RETRY_PERIOD="" (default)
2026-04-12 21:04:20.699354 I | op-k8sutil: ROOK_CSI_CEPH_IMAGE="quay.io/cephcsi/cephcsi:v3.11.0" (configmap)
2026-04-12 21:04:20.699380 I | op-k8sutil: ROOK_CSI_REGISTRAR_IMAGE="registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.10.1" (configmap)
2026-04-12 21:04:20.699384 I | op-k8sutil: ROOK_CSI_PROVISIONER_IMAGE="registry.k8s.io/sig-storage/csi-provisioner:v4.0.1" (configmap)
2026-04-12 21:04:20.699388 I | op-k8sutil: ROOK_CSI_ATTACHER_IMAGE="registry.k8s.io/sig-storage/csi-attacher:v4.5.1" (configmap)
2026-04-12 21:04:20.699392 I | op-k8sutil: ROOK_CSI_SNAPSHOTTER_IMAGE="registry.k8s.io/sig-storage/csi-snapshotter:v7.0.2" (configmap)
2026-04-12 21:04:20.699402 I | op-k8sutil: ROOK_CSI_RESIZER_IMAGE="registry.k8s.io/sig-storage/csi-resizer:v1.10.1" (configmap)
2026-04-12 21:04:20.699408 I | op-k8sutil: ROOK_CSI_KUBELET_DIR_PATH="/var/lib/kubelet" (default)
2026-04-12 21:04:20.699415 I | op-k8sutil: ROOK_CSIADDONS_IMAGE="quay.io/csiaddons/k8s-sidecar:v0.8.0" (configmap)
2026-04-12 21:04:20.699421 I | op-k8sutil: CSI_TOPOLOGY_DOMAIN_LABELS="" (default)
2026-04-12 21:04:20.699425 I | op-k8sutil: ROOK_CSI_CEPHFS_POD_LABELS="" (default)
2026-04-12 21:04:20.699430 I | op-k8sutil: ROOK_CSI_NFS_POD_LABELS="" (default)
2026-04-12 21:04:20.699433 I | op-k8sutil: ROOK_CSI_RBD_POD_LABELS="" (default)
2026-04-12 21:04:20.699436 I | op-k8sutil: CSI_CLUSTER_NAME="" (default)
2026-04-12 21:04:20.699440 I | op-k8sutil: ROOK_CSI_IMAGE_PULL_POLICY="IfNotPresent" (configmap)
2026-04-12 21:04:20.699443 I | op-k8sutil: CSI_CEPHFS_KERNEL_MOUNT_OPTIONS="" (default)
2026-04-12 21:04:20.699446 I | op-k8sutil: CSI_CEPHFS_ATTACH_REQUIRED="true" (configmap)
2026-04-12 21:04:20.699449 I | op-k8sutil: CSI_RBD_ATTACH_REQUIRED="true" (configmap)
2026-04-12 21:04:20.699452 I | op-k8sutil: CSI_NFS_ATTACH_REQUIRED="true" (configmap)
2026-04-12 21:04:20.699455 I | op-k8sutil: CSI_DRIVER_NAME_PREFIX="rook-ceph" (default)
2026-04-12 21:04:20.701195 I | op-k8sutil: CSI_ENABLE_VOLUME_GROUP_SNAPSHOT="true" (configmap)
2026-04-12 21:04:20.701209 I | ceph-csi: skipping csi version check, since unsupported versions are allowed or csi is disabled
2026-04-12 21:04:20.701213 I | ceph-csi: CSI Ceph RBD driver disabled
2026-04-12 21:04:20.701216 I | op-k8sutil: removing daemonset csi-rbdplugin if it exists
2026-04-12 21:04:20.703419 I | op-k8sutil: removing deployment csi-rbdplugin-provisioner if it exists
2026-04-12 21:04:20.900261 I | ceph-csi: successfully removed CSI Ceph RBD driver
2026-04-12 21:04:20.900289 I | ceph-csi: CSI CephFS driver disabled
2026-04-12 21:04:20.900297 I | op-k8sutil: removing daemonset csi-cephfsplugin if it exists
2026-04-12 21:04:20.903023 I | op-k8sutil: removing deployment csi-cephfsplugin-provisioner if it exists
2026-04-12 21:04:21.100428 I | ceph-csi: successfully removed CSI CephFS driver
2026-04-12 21:04:21.100456 I | ceph-csi: CSI NFS driver disabled
2026-04-12 21:04:21.100468 I | op-k8sutil: removing daemonset csi-nfsplugin if it exists
2026-04-12 21:04:21.105662 I | op-k8sutil: removing deployment csi-nfsplugin-provisioner if it exists
2026-04-12 21:04:21.301088 I | ceph-csi: successfully removed CSI NFS driver
2026-04-12 21:04:35.592896 I | ceph-spec: detected ceph image version: "18.2.7-0 reef"
2026-04-12 21:04:35.592919 I | ceph-cluster-controller: validating ceph version from provided image
2026-04-12 21:04:35.596559 I | ceph-spec: parsing mon endpoints: instance=10.96.240.200:6789
2026-04-12 21:04:35.596576 I | ceph-spec: updating obsolete maxMonID 0 to actual value 76846025234
2026-04-12 21:04:35.601073 I | cephclient: writing config file /var/lib/rook/openstack/openstack.config
2026-04-12 21:04:35.601238 I | cephclient: generated admin config in /var/lib/rook/openstack
2026-04-12 21:04:36.146259 E | cephver: external cluster ceph version is a major version higher "18.2.7-0 reef" than the local cluster "0.0.0-0 ", consider upgrading
2026-04-12 21:04:36.723967 I | ceph-cluster-controller: cluster "openstack": version "18.2.7-0 reef" detected for image "harbor.atmosphere.dev/quay.io/ceph/ceph:v18.2.7"
2026-04-12 21:04:36.805044 I | ceph-cluster-controller: creating "rook-config-override" configmap
2026-04-12 21:04:36.807728 I | ceph-cluster-controller: creating "rook-ceph-config" secret
2026-04-12 21:04:36.822344 I | ceph-cluster-controller: external cluster identity established
2026-04-12 21:04:36.822371 I | cephclient: getting or creating ceph auth key "client.csi-rbd-provisioner"
2026-04-12 21:04:37.730628 I | cephclient: getting or creating ceph auth key "client.csi-rbd-node"
2026-04-12 21:04:38.293277 I | cephclient: getting or creating ceph auth key "client.csi-cephfs-provisioner"
2026-04-12 21:04:38.867966 I | cephclient: getting or creating ceph auth key "client.csi-cephfs-node"
2026-04-12 21:04:39.464746 I | ceph-csi: created kubernetes csi secrets for cluster "openstack"
2026-04-12 21:04:39.470810 I | ceph-cluster-controller: successfully updated csi config map
2026-04-12 21:04:39.470829 I | cephclient: getting or creating ceph auth key "client.crash"
2026-04-12 21:04:40.461132 I | ceph-nodedaemon-controller: created kubernetes crash collector secret for cluster "openstack"
2026-04-12 21:04:40.464264 I | ceph-cluster-controller: enabling ceph mon monitoring goroutine for cluster "openstack"
2026-04-12 21:04:40.464287 I | ceph-cluster-controller: ceph status check interval is 1m0s
2026-04-12 21:04:40.464292 I | ceph-cluster-controller: enabling ceph status monitoring goroutine for cluster "openstack"
2026-04-12 21:04:40.747616 I | clusterdisruption-controller: all PGs are active+clean. Restoring default OSD pdb settings
2026-04-12 21:04:40.747645 I | clusterdisruption-controller: creating the default pdb "rook-ceph-osd" with maxUnavailable=1 for all osd
2026-04-12 21:04:40.756999 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-12 21:04:42.045915 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-12 21:04:43.168029 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-12 21:04:49.567744 I | ceph-spec: parsing mon endpoints: instance=10.96.240.200:6789
2026-04-12 21:04:49.567781 I | ceph-spec: updating obsolete maxMonID 0 to actual value 76846025234
2026-04-12 21:04:49.567800 I | ceph-spec: detecting the ceph image version for image harbor.atmosphere.dev/quay.io/ceph/ceph:v18.2.7...
2026-04-12 21:04:51.947295 I | ceph-spec: detected ceph image version: "18.2.7-0 reef"
2026-04-12 21:04:53.422306 I | ceph-object-controller: reconciling object store deployments
2026-04-12 21:04:53.439544 I | ceph-object-controller: ceph object store gateway service running at 10.109.47.185
2026-04-12 21:04:53.439662 I | ceph-object-controller: reconciling object store pools
2026-04-12 21:04:56.647840 I | cephclient: reconciling replicated pool ceph.rgw.control succeeded
2026-04-12 21:04:57.527267 I | cephclient: creating a new crush rule for changed deviceClass ("default"-->"") on crush rule "ceph.rgw.control"
2026-04-12 21:04:57.527306 I | cephclient: updating pool "ceph.rgw.control" failure domain from "osd" to "osd" with new crush rule "ceph.rgw.control_osd"
2026-04-12 21:04:57.527315 I | cephclient: crush rule "ceph.rgw.control" will no longer be used by pool "ceph.rgw.control"
2026-04-12 21:04:59.708664 I | cephclient: Successfully updated pool "ceph.rgw.control" failure domain to "osd"
2026-04-12 21:04:59.708700 I | cephclient: setting pool property "pg_num_min" to "8" on pool "ceph.rgw.control"
2026-04-12 21:05:03.910501 I | cephclient: reconciling replicated pool ceph.rgw.meta succeeded
2026-04-12 21:05:04.832880 I | cephclient: creating a new crush rule for changed deviceClass ("default"-->"") on crush rule "ceph.rgw.meta"
2026-04-12 21:05:04.832913 I | cephclient: updating pool "ceph.rgw.meta" failure domain from "osd" to "osd" with new crush rule "ceph.rgw.meta_osd"
2026-04-12 21:05:04.832921 I | cephclient: crush rule "ceph.rgw.meta" will no longer be used by pool "ceph.rgw.meta"
2026-04-12 21:05:06.969082 I | cephclient: Successfully updated pool "ceph.rgw.meta" failure domain to "osd"
2026-04-12 21:05:06.969120 I | cephclient: setting pool property "pg_num_min" to "8" on pool "ceph.rgw.meta"
2026-04-12 21:05:11.572553 I | clusterdisruption-controller: all "host" failure domains: []. osd is down in failure domain: "". active node drains: false. pg health: "cluster is not fully clean. PGs: [{StateName:active+clean Count:17} {StateName:unknown Count:8}]"
2026-04-12 21:05:11.576943 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-12 21:05:12.050983 I | cephclient: reconciling replicated pool ceph.rgw.log succeeded
2026-04-12 21:05:13.235071 I | clusterdisruption-controller: all "host" failure domains: []. osd is down in failure domain: "". active node drains: false. pg health: "cluster is not fully clean. PGs: [{StateName:active+clean Count:18} {StateName:unknown Count:7}]"
2026-04-12 21:05:13.239715 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-12 21:05:13.622570 I | cephclient: creating a new crush rule for changed deviceClass ("default"-->"") on crush rule "ceph.rgw.log"
2026-04-12 21:05:13.622609 I | cephclient: updating pool "ceph.rgw.log" failure domain from "osd" to "osd" with new crush rule "ceph.rgw.log_osd"
2026-04-12 21:05:13.622619 I | cephclient: crush rule "ceph.rgw.log" will no longer be used by pool "ceph.rgw.log"
2026-04-12 21:05:15.137297 I | cephclient: Successfully updated pool "ceph.rgw.log" failure domain to "osd"
2026-04-12 21:05:15.137335 I | cephclient: setting pool property "pg_num_min" to "8" on pool "ceph.rgw.log"
2026-04-12 21:05:19.212600 I | cephclient: reconciling replicated pool ceph.rgw.buckets.index succeeded
2026-04-12 21:05:20.159555 I | cephclient: creating a new crush rule for changed deviceClass ("default"-->"") on crush rule "ceph.rgw.buckets.index"
2026-04-12 21:05:20.159589 I | cephclient: updating pool "ceph.rgw.buckets.index" failure domain from "osd" to "osd" with new crush rule "ceph.rgw.buckets.index_osd"
2026-04-12 21:05:20.159597 I | cephclient: crush rule "ceph.rgw.buckets.index" will no longer be used by pool "ceph.rgw.buckets.index"
2026-04-12 21:05:22.250265 I | cephclient: Successfully updated pool "ceph.rgw.buckets.index" failure domain to "osd"
2026-04-12 21:05:22.250311 I | cephclient: setting pool property "pg_num_min" to "8" on pool "ceph.rgw.buckets.index"
2026-04-12 21:05:27.342524 I | cephclient: reconciling replicated pool ceph.rgw.buckets.non-ec succeeded
2026-04-12 21:05:27.575348 W | op-mon: external cluster mon count is 1, consider adding new monitors.
2026-04-12 21:05:28.402924 I | cephclient: creating a new crush rule for changed deviceClass ("default"-->"") on crush rule "ceph.rgw.buckets.non-ec"
2026-04-12 21:05:28.402951 I | cephclient: updating pool "ceph.rgw.buckets.non-ec" failure domain from "osd" to "osd" with new crush rule "ceph.rgw.buckets.non-ec_osd"
2026-04-12 21:05:28.402956 I | cephclient: crush rule "ceph.rgw.buckets.non-ec" will no longer be used by pool "ceph.rgw.buckets.non-ec"
2026-04-12 21:05:30.384256 I | cephclient: Successfully updated pool "ceph.rgw.buckets.non-ec" failure domain to "osd"
2026-04-12 21:05:30.384297 I | cephclient: setting pool property "pg_num_min" to "8" on pool "ceph.rgw.buckets.non-ec"
2026-04-12 21:05:34.457294 I | cephclient: reconciling replicated pool ceph.rgw.otp succeeded
2026-04-12 21:05:35.369014 I | cephclient: creating a new crush rule for changed deviceClass ("default"-->"") on crush rule "ceph.rgw.otp"
2026-04-12 21:05:35.369048 I | cephclient: updating pool "ceph.rgw.otp" failure domain from "osd" to "osd" with new crush rule "ceph.rgw.otp_osd"
2026-04-12 21:05:35.369054 I | cephclient: crush rule "ceph.rgw.otp" will no longer be used by pool "ceph.rgw.otp"
2026-04-12 21:05:37.548099 I | cephclient: Successfully updated pool "ceph.rgw.otp" failure domain to "osd"
2026-04-12 21:05:37.548152 I | cephclient: setting pool property "pg_num_min" to "8" on pool "ceph.rgw.otp"
2026-04-12 21:05:41.642862 I | cephclient: reconciling replicated pool .rgw.root succeeded
2026-04-12 21:05:42.664387 I | clusterdisruption-controller: all "host" failure domains: []. osd is down in failure domain: "". active node drains: false. pg health: "cluster is not fully clean. PGs: [{StateName:active+clean Count:73} {StateName:unknown Count:8}]"
2026-04-12 21:05:42.669603 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-12 21:05:43.167265 I | cephclient: creating a new crush rule for changed deviceClass ("default"-->"") on crush rule ".rgw.root"
2026-04-12 21:05:43.167305 I | cephclient: updating pool ".rgw.root" failure domain from "osd" to "osd" with new crush rule ".rgw.root_osd"
2026-04-12 21:05:43.167313 I | cephclient: crush rule ".rgw.root" will no longer be used by pool ".rgw.root"
2026-04-12 21:05:45.157688 I | clusterdisruption-controller: all "host" failure domains: []. osd is down in failure domain: "". active node drains: false. pg health: "cluster is not fully clean. PGs: [{StateName:active+clean Count:75} {StateName:unknown Count:6}]"
2026-04-12 21:05:45.162710 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-12 21:05:46.891390 I | cephclient: Successfully updated pool ".rgw.root" failure domain to "osd"
2026-04-12 21:05:46.891425 I | cephclient: setting pool property "pg_num_min" to "8" on pool ".rgw.root"
2026-04-12 21:05:50.956029 I | cephclient: reconciling replicated pool ceph.rgw.buckets.data succeeded
2026-04-12 21:05:51.853366 I | cephclient: creating a new crush rule for changed deviceClass ("default"-->"") on crush rule "ceph.rgw.buckets.data"
2026-04-12 21:05:51.853404 I | cephclient: updating pool "ceph.rgw.buckets.data" failure domain from "osd" to "osd" with new crush rule "ceph.rgw.buckets.data_osd"
2026-04-12 21:05:51.853412 I | cephclient: crush rule "ceph.rgw.buckets.data" will no longer be used by pool "ceph.rgw.buckets.data"
2026-04-12 21:05:53.996802 I | cephclient: Successfully updated pool "ceph.rgw.buckets.data" failure domain to "osd"
2026-04-12 21:05:53.996831 I | ceph-object-controller: configuring object store "ceph"
2026-04-12 21:05:54.607975 I | ceph-object-controller: Object store "ceph": realm=ceph, zonegroup=ceph, zone=ceph
2026-04-12 21:05:54.783942 I | ceph-object-controller: committing changes to RGW configuration period for CephObjectStore "openstack/ceph"
2026-04-12 21:05:55.152720 I | ceph-object-controller: configuration for object-store ceph is complete
2026-04-12 21:05:55.152764 I | ceph-object-controller: creating object store "ceph" in namespace "openstack"
2026-04-12 21:05:55.156946 I | cephclient: getting or creating ceph auth key "client.rgw.ceph.a"
2026-04-12 21:05:55.741851 I | ceph-object-controller: setting rgw config flags
2026-04-12 21:05:55.741889 I | op-config: setting "client.rgw.ceph.a"="rgw_run_sync_thread"="true" option to the mon configuration database
2026-04-12 21:05:56.220125 I | op-config: successfully set "client.rgw.ceph.a"="rgw_run_sync_thread"="true" option to the mon configuration database
2026-04-12 21:05:56.220160 I | op-config: setting "client.rgw.ceph.a"="rgw_log_nonexistent_bucket"="true" option to the mon configuration database
2026-04-12 21:05:56.684127 I | op-config: successfully set "client.rgw.ceph.a"="rgw_log_nonexistent_bucket"="true" option to the mon configuration database
2026-04-12 21:05:56.684152 I | op-config: setting "client.rgw.ceph.a"="rgw_log_object_name_utc"="true" option to the mon configuration database
2026-04-12 21:05:57.137476 I | op-config: successfully set "client.rgw.ceph.a"="rgw_log_object_name_utc"="true" option to the mon configuration database
2026-04-12 21:05:57.137519 I | op-config: setting "client.rgw.ceph.a"="rgw_enable_usage_log"="true" option to the mon configuration database
2026-04-12 21:05:57.588738 I | op-config: successfully set "client.rgw.ceph.a"="rgw_enable_usage_log"="true" option to the mon configuration database
2026-04-12 21:05:57.588768 I | op-config: setting "client.rgw.ceph.a"="rgw_zone"="ceph" option to the mon configuration database
2026-04-12 21:05:58.040167 I | op-config: successfully set "client.rgw.ceph.a"="rgw_zone"="ceph" option to the mon configuration database
2026-04-12 21:05:58.040210 I | op-config: setting "client.rgw.ceph.a"="rgw_zonegroup"="ceph" option to the mon configuration database
2026-04-12 21:05:58.499240 I | op-config: successfully set "client.rgw.ceph.a"="rgw_zonegroup"="ceph" option to the mon configuration database
2026-04-12 21:05:58.499481 I | ceph-object-controller: object store "ceph" deployment "rook-ceph-rgw-ceph-a" created
2026-04-12 21:05:58.551716 I | ceph-object-controller: enabling rgw dashboard
2026-04-12 21:06:00.510044 I | ceph-object-controller: created object store "ceph" in namespace "openstack"
2026-04-12 21:06:00.510650 I | ceph-object-controller: setting the dashboard api secret key
2026-04-12 21:06:01.127499 I | ceph-object-controller: done setting the dashboard api secret key
2026-04-12 21:06:13.872836 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-12 21:06:14.354555 W | op-mon: external cluster mon count is 1, consider adding new monitors.
2026-04-12 21:06:15.736891 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-12 21:06:30.946583 E | ceph-object-controller: failed to reconcile CephObjectStore "openstack/ceph". failed to create object store deployments: failed to get COSI user "cosi": Get "http://rook-ceph-rgw-ceph.openstack.svc:80/admin/user?format=json&uid=cosi": dial tcp 10.109.47.185:80: i/o timeout
2026-04-12 21:06:30.970198 I | ceph-spec: parsing mon endpoints: instance=10.96.240.200:6789
2026-04-12 21:06:30.970226 I | ceph-spec: updating obsolete maxMonID 0 to actual value 76846025234
2026-04-12 21:06:30.970243 I | ceph-spec: detecting the ceph image version for image harbor.atmosphere.dev/quay.io/ceph/ceph:v18.2.7...
2026-04-12 21:06:31.541822 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-12 21:06:32.134568 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-12 21:06:32.973921 I | ceph-spec: detected ceph image version: "18.2.7-0 reef"
2026-04-12 21:06:34.450960 I | ceph-object-controller: reconciling object store deployments
2026-04-12 21:06:34.482194 I | ceph-object-controller: ceph object store gateway service running at 10.109.47.185
2026-04-12 21:06:34.482308 I | ceph-object-controller: reconciling object store pools
2026-04-12 21:06:35.825407 I | cephclient: application "rgw" is already set on pool "ceph.rgw.control"
2026-04-12 21:06:35.825441 I | cephclient: reconciling replicated pool ceph.rgw.control succeeded
2026-04-12 21:06:36.701561 I | cephclient: setting pool property "pg_num_min" to "8" on pool "ceph.rgw.control"
2026-04-12 21:06:39.490028 I | cephclient: application "rgw" is already set on pool "ceph.rgw.meta"
2026-04-12 21:06:39.490061 I | cephclient: reconciling replicated pool ceph.rgw.meta succeeded
2026-04-12 21:06:40.348271 I | cephclient: setting pool property "pg_num_min" to "8" on pool "ceph.rgw.meta"
2026-04-12 21:06:42.574002 I | cephclient: application "rgw" is already set on pool "ceph.rgw.log"
2026-04-12 21:06:42.574040 I | cephclient: reconciling replicated pool ceph.rgw.log succeeded
2026-04-12 21:06:43.458239 I | cephclient: setting pool property "pg_num_min" to "8" on pool "ceph.rgw.log"
2026-04-12 21:06:44.630795 I | clusterdisruption-controller: all "host" failure domains: []. osd is down in failure domain: "". active node drains: false. pg health: "cluster is not fully clean. PGs: [{StateName:active+clean Count:108} {StateName:peering Count:1}]"
2026-04-12 21:06:44.635886 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-12 21:06:45.875128 I | cephclient: application "rgw" is already set on pool "ceph.rgw.buckets.index"
2026-04-12 21:06:45.875154 I | cephclient: reconciling replicated pool ceph.rgw.buckets.index succeeded
2026-04-12 21:06:47.338598 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-12 21:06:48.238027 I | cephclient: setting pool property "pg_num_min" to "8" on pool "ceph.rgw.buckets.index"
2026-04-12 21:06:50.724952 I | cephclient: application "rgw" is already set on pool "ceph.rgw.buckets.non-ec"
2026-04-12 21:06:50.724982 I | cephclient: reconciling replicated pool ceph.rgw.buckets.non-ec succeeded
2026-04-12 21:06:51.604792 I | cephclient: setting pool property "pg_num_min" to "8" on pool "ceph.rgw.buckets.non-ec"
2026-04-12 21:06:53.704869 I | cephclient: application "rgw" is already set on pool "ceph.rgw.otp"
2026-04-12 21:06:53.704895 I | cephclient: reconciling replicated pool ceph.rgw.otp succeeded
2026-04-12 21:06:54.561207 I | cephclient: setting pool property "pg_num_min" to "8" on pool "ceph.rgw.otp"
2026-04-12 21:06:56.764491 I | cephclient: application "rgw" is already set on pool ".rgw.root"
2026-04-12 21:06:56.764524 I | cephclient: reconciling replicated pool .rgw.root succeeded
2026-04-12 21:06:57.644396 I | cephclient: setting pool property "pg_num_min" to "8" on pool ".rgw.root"
2026-04-12 21:07:00.469324 I | cephclient: application "rgw" is already set on pool "ceph.rgw.buckets.data"
2026-04-12 21:07:00.469363 I | cephclient: reconciling replicated pool ceph.rgw.buckets.data succeeded
2026-04-12 21:07:01.754730 W | op-mon: external cluster mon count is 1, consider adding new monitors.
2026-04-12 21:07:02.102861 I | ceph-object-controller: configuring object store "ceph"
2026-04-12 21:07:02.342306 I | ceph-object-controller: Object store "ceph": realm=ceph, zonegroup=ceph, zone=ceph
2026-04-12 21:07:02.512072 I | ceph-object-controller: there are no changes to commit for RGW configuration period for CephObjectStore "openstack/ceph"
2026-04-12 21:07:02.512099 I | ceph-object-controller: configuration for object-store ceph is complete
2026-04-12 21:07:02.512106 I | ceph-object-controller: creating object store "ceph" in namespace "openstack"
2026-04-12 21:07:02.516902 I | cephclient: getting or creating ceph auth key "client.rgw.ceph.a"
2026-04-12 21:07:03.098716 I | ceph-object-controller: setting rgw config flags
2026-04-12 21:07:03.098754 I | op-config: setting "client.rgw.ceph.a"="rgw_run_sync_thread"="true" option to the mon configuration database
2026-04-12 21:07:03.563159 I | op-config: successfully set "client.rgw.ceph.a"="rgw_run_sync_thread"="true" option to the mon configuration database
2026-04-12 21:07:03.563192 I | op-config: setting "client.rgw.ceph.a"="rgw_log_nonexistent_bucket"="true" option to the mon configuration database
2026-04-12 21:07:04.008119 I | op-config: successfully set "client.rgw.ceph.a"="rgw_log_nonexistent_bucket"="true" option to the mon configuration database
2026-04-12 21:07:04.008164 I | op-config: setting "client.rgw.ceph.a"="rgw_log_object_name_utc"="true" option to the mon configuration database
2026-04-12 21:07:04.449263 I | op-config: successfully set "client.rgw.ceph.a"="rgw_log_object_name_utc"="true" option to the mon configuration database
2026-04-12 21:07:04.449292 I | op-config: setting "client.rgw.ceph.a"="rgw_enable_usage_log"="true" option to the mon configuration database
2026-04-12 21:07:04.887421 I | op-config: successfully set "client.rgw.ceph.a"="rgw_enable_usage_log"="true" option to the mon configuration database
2026-04-12 21:07:04.887457 I | op-config: setting "client.rgw.ceph.a"="rgw_zone"="ceph" option to the mon configuration database
2026-04-12 21:07:05.350723 I | op-config: successfully set "client.rgw.ceph.a"="rgw_zone"="ceph" option to the mon configuration database
2026-04-12 21:07:05.350762 I | op-config: setting "client.rgw.ceph.a"="rgw_zonegroup"="ceph" option to the mon configuration database
2026-04-12 21:07:05.801117 I | op-config: successfully set "client.rgw.ceph.a"="rgw_zonegroup"="ceph" option to the mon configuration database
2026-04-12 21:07:05.801364 I | ceph-object-controller: object store "ceph" deployment "rook-ceph-rgw-ceph-a" created
2026-04-12 21:07:05.825360 I | ceph-object-controller: object store "ceph" deployment "rook-ceph-rgw-ceph-a" already exists. updating if needed
2026-04-12 21:07:05.836530 I | op-k8sutil: deployment "rook-ceph-rgw-ceph-a" did not change, nothing to update
2026-04-12 21:07:05.843875 I | ceph-object-controller: config map "rook-ceph-rgw-ceph-mime-types" for object store "ceph" already exists, not overwriting
2026-04-12 21:07:05.849066 I | ceph-object-controller: enabling rgw dashboard
2026-04-12 21:07:06.972838 I | ceph-object-controller: created object store "ceph" in namespace "openstack"
2026-04-12 21:07:07.949767 I | ceph-object-controller: creating COSI user "cosi"
2026-04-12 21:07:08.035773 I | ceph-spec: created ceph *v1.Secret object "rook-ceph-object-user-ceph-cosi"
2026-04-12 21:07:08.603360 I | clusterdisruption-controller: all "host" failure domains: []. osd is down in failure domain: "". active node drains: false. pg health: "cluster is not fully clean. PGs: [{StateName:active+clean Count:104} {StateName:clean+premerge+peered Count:1}]"
2026-04-12 21:07:08.607440 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-12 21:07:15.203473 I | clusterdisruption-controller: all "host" failure domains: []. osd is down in failure domain: "". active node drains: false. pg health: "cluster is not fully clean. PGs: [{StateName:active+clean Count:105} {StateName:unknown Count:8}]"
2026-04-12 21:07:15.208373 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-12 21:07:17.951746 I | clusterdisruption-controller: all "host" failure domains: []. osd is down in failure domain: "". active node drains: false. pg health: "cluster is not fully clean. PGs: [{StateName:active+clean Count:105} {StateName:creating+peering Count:5} {StateName:unknown Count:3}]"
2026-04-12 21:07:17.955313 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-12 21:07:45.767759 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-12 21:07:47.928958 W | op-mon: external cluster mon count is 1, consider adding new monitors.
2026-04-12 21:07:48.550551 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-12 21:08:16.358408 I | clusterdisruption-controller: all "host" failure domains: []. osd is down in failure domain: "". active node drains: false. pg health: "cluster is not fully clean. PGs: [{StateName:active+clean Count:127} {StateName:peering Count:1}]"
2026-04-12 21:08:16.362411 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-12 21:08:19.124834 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-12 21:08:34.034825 W | op-mon: external cluster mon count is 1, consider adding new monitors.
2026-04-12 21:08:46.911481 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-12 21:08:49.707901 I | clusterdisruption-controller: all "host" failure domains: []. osd is down in failure domain: "". active node drains: false. pg health: "cluster is not fully clean. PGs: [{StateName:active+clean Count:120} {StateName:peering Count:2}]"
2026-04-12 21:08:49.712678 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-12 21:09:17.486946 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-12 21:09:20.630301 W | op-mon: external cluster mon count is 1, consider adding new monitors.
2026-04-12 21:09:20.735419 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-12 21:09:48.043033 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-12 21:09:51.314315 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-12 21:10:06.758129 W | op-mon: external cluster mon count is 1, consider adding new monitors.
2026-04-12 21:10:18.642502 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-12 21:10:21.880459 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-12 21:10:49.205728 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-12 21:10:53.256967 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-12 21:10:54.171987 W | op-mon: external cluster mon count is 1, consider adding new monitors.
2026-04-12 21:11:19.816504 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-12 21:11:23.827378 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-12 21:11:40.321007 W | op-mon: external cluster mon count is 1, consider adding new monitors.
2026-04-12 21:11:50.373935 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-12 21:11:54.429996 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-12 21:12:20.961626 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-12 21:12:25.017498 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-12 21:12:26.474935 W | op-mon: external cluster mon count is 1, consider adding new monitors.
2026-04-12 21:12:51.550243 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-12 21:12:55.589534 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-12 21:13:12.638532 W | op-mon: external cluster mon count is 1, consider adding new monitors.
2026-04-12 21:13:22.108038 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-12 21:13:26.155927 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-12 21:13:52.670239 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-12 21:13:56.704593 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-12 21:13:59.336283 W | op-mon: external cluster mon count is 1, consider adding new monitors.
2026-04-12 21:14:23.228683 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-12 21:14:27.261978 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-12 21:14:45.444300 W | op-mon: external cluster mon count is 1, consider adding new monitors.
2026-04-12 21:14:53.799200 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-12 21:14:57.815877 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-12 21:15:24.361108 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-12 21:15:28.379516 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2026-04-12 21:15:31.569189 W | op-mon: external cluster mon count is 1, consider adding new monitors.