2026-04-08 03:05:54.030281 | Job console starting
2026-04-08 03:05:54.040924 | Updating git repos
2026-04-08 03:05:54.110429 | Cloning repos into workspace
2026-04-08 03:05:54.177953 | Restoring repo states
2026-04-08 03:05:54.194646 | Merging changes
2026-04-08 03:05:55.392339 | Checking out repos
2026-04-08 03:05:55.654282 | Preparing playbooks
2026-04-08 03:05:59.871220 | Running Ansible setup
2026-04-08 03:06:03.346039 | PRE-RUN START: [trusted : github.com/vexxhost/zuul-config/playbooks/base/pre.yaml@main]
2026-04-08 03:06:03.956454 |
2026-04-08 03:06:03.956565 | PLAY [localhost]
2026-04-08 03:06:03.963658 |
2026-04-08 03:06:03.963726 | TASK [Gathering Facts]
2026-04-08 03:06:04.823324 | localhost | ok
2026-04-08 03:06:04.834779 |
2026-04-08 03:06:04.834899 | TASK [Setup log path fact]
2026-04-08 03:06:04.878385 | localhost | ok
2026-04-08 03:06:04.890907 |
2026-04-08 03:06:04.891044 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-04-08 03:06:04.930819 | localhost | ok
2026-04-08 03:06:04.939373 |
2026-04-08 03:06:04.939524 | TASK [emit-job-header : Print job information]
2026-04-08 03:06:04.981895 | # Job Information
2026-04-08 03:06:04.982152 | Ansible Version: 2.16.16
2026-04-08 03:06:04.982222 | Job: atmosphere-molecule-aio-ovn
2026-04-08 03:06:04.982271 | Pipeline: check
2026-04-08 03:06:04.982318 | Executor: 0a8996d2b663
2026-04-08 03:06:04.982367 | Triggered by: https://github.com/vexxhost/atmosphere/pull/3809
2026-04-08 03:06:04.982425 | Event ID: c20822a0-32f7-11f1-82a7-da0ae58ed532
2026-04-08 03:06:04.986175 |
2026-04-08 03:06:04.986266 | LOOP [emit-job-header : Print node information]
2026-04-08 03:06:05.082683 | localhost | ok:
2026-04-08 03:06:05.082896 | localhost | # Node Information
2026-04-08 03:06:05.082936 | localhost | Inventory Hostname: instance
2026-04-08 03:06:05.082966 | localhost | Hostname: np0000164179
2026-04-08 03:06:05.082995 | localhost | Username: zuul
2026-04-08 03:06:05.083027 | localhost | Distro: Ubuntu 22.04
2026-04-08 03:06:05.083055 | localhost | Provider: yul1
2026-04-08 03:06:05.083082 | localhost | Region: ca-ymq-1
2026-04-08 03:06:05.083108 | localhost | Label: ubuntu-jammy-16
2026-04-08 03:06:05.083134 | localhost | Product Name: OpenStack Nova
2026-04-08 03:06:05.083160 | localhost | Interface IP: 199.204.45.77
2026-04-08 03:06:05.100057 |
2026-04-08 03:06:05.100163 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-04-08 03:06:05.509781 | localhost -> localhost | changed
2026-04-08 03:06:05.515910 |
2026-04-08 03:06:05.516003 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-04-08 03:06:06.308980 | localhost -> localhost | changed
2026-04-08 03:06:06.314880 |
2026-04-08 03:06:06.314967 | PLAY [all]
2026-04-08 03:06:06.322358 |
2026-04-08 03:06:06.322421 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-04-08 03:06:06.529245 | instance -> localhost | ok
2026-04-08 03:06:06.539176 |
2026-04-08 03:06:06.539278 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-04-08 03:06:06.574232 | instance | ok
2026-04-08 03:06:06.588665 | instance | included: /var/lib/zuul/builds/557441ff9d894b5e938f4d4ea9157550/trusted/project_1/opendev.org/zuul/zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-04-08 03:06:06.594058 |
2026-04-08 03:06:06.594116 | TASK [add-build-sshkey : Create Temp SSH key]
2026-04-08 03:06:07.488149 | instance -> localhost | Generating public/private rsa key pair.
2026-04-08 03:06:07.488399 | instance -> localhost | Your identification has been saved in /var/lib/zuul/builds/557441ff9d894b5e938f4d4ea9157550/work/557441ff9d894b5e938f4d4ea9157550_id_rsa
2026-04-08 03:06:07.488615 | instance -> localhost | Your public key has been saved in /var/lib/zuul/builds/557441ff9d894b5e938f4d4ea9157550/work/557441ff9d894b5e938f4d4ea9157550_id_rsa.pub
2026-04-08 03:06:07.488672 | instance -> localhost | The key fingerprint is:
2026-04-08 03:06:07.488721 | instance -> localhost | SHA256:iIbNkUGJUkTVWxG1FuEY+6ztza/Szazf0TVYPXYRQHc zuul-build-sshkey
2026-04-08 03:06:07.488785 | instance -> localhost | The key's randomart image is:
2026-04-08 03:06:07.488835 | instance -> localhost | +---[RSA 3072]----+
2026-04-08 03:06:07.488889 | instance -> localhost | | ++++o +++..o.+E|
2026-04-08 03:06:07.488937 | instance -> localhost | |. . .o. .= o . +|
2026-04-08 03:06:07.489013 | instance -> localhost | | . o oo + +o|
2026-04-08 03:06:07.489060 | instance -> localhost | | + o.. + + o|
2026-04-08 03:06:07.489105 | instance -> localhost | | . = . S o . ..|
2026-04-08 03:06:07.489149 | instance -> localhost | | . o +|
2026-04-08 03:06:07.489193 | instance -> localhost | | . .. + ..|
2026-04-08 03:06:07.489237 | instance -> localhost | | ..o. +..|
2026-04-08 03:06:07.489281 | instance -> localhost | | ..==o .|
2026-04-08 03:06:07.489304 | instance -> localhost | +----[SHA256]-----+
2026-04-08 03:06:07.489355 | instance -> localhost | ok: Runtime: 0:00:00.505061
2026-04-08 03:06:07.494426 |
2026-04-08 03:06:07.494489 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-04-08 03:06:07.530904 | instance | ok
2026-04-08 03:06:07.542324 | instance | included: /var/lib/zuul/builds/557441ff9d894b5e938f4d4ea9157550/trusted/project_1/opendev.org/zuul/zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-04-08 03:06:07.553630 |
2026-04-08 03:06:07.553703 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-04-08 03:06:07.578661 | instance | skipping: Conditional result was False
2026-04-08 03:06:07.590219 |
2026-04-08 03:06:07.590309 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-04-08 03:06:08.029622 | instance | changed
2026-04-08 03:06:08.034571 |
2026-04-08 03:06:08.034630 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-04-08 03:06:08.221555 | instance | ok
2026-04-08 03:06:08.228488 |
2026-04-08 03:06:08.228576 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-04-08 03:06:08.719043 | instance | changed
2026-04-08 03:06:08.726009 |
2026-04-08 03:06:08.726135 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-04-08 03:06:09.204820 | instance | changed
2026-04-08 03:06:09.209839 |
2026-04-08 03:06:09.209903 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-04-08 03:06:09.234658 | instance | skipping: Conditional result was False
2026-04-08 03:06:09.247872 |
2026-04-08 03:06:09.247995 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-04-08 03:06:09.598251 | instance -> localhost | changed
2026-04-08 03:06:09.662930 |
2026-04-08 03:06:09.663097 | TASK [add-build-sshkey : Add back temp key]
2026-04-08 03:06:09.931908 | instance -> localhost | Identity added: /var/lib/zuul/builds/557441ff9d894b5e938f4d4ea9157550/work/557441ff9d894b5e938f4d4ea9157550_id_rsa (zuul-build-sshkey)
2026-04-08 03:06:09.932258 | instance -> localhost | ok: Runtime: 0:00:00.014528
2026-04-08 03:06:09.939711 |
2026-04-08 03:06:09.939807 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-04-08 03:06:10.221141 | instance | ok
2026-04-08 03:06:10.226345 |
2026-04-08 03:06:10.226426 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-04-08 03:06:10.252633 | instance | skipping: Conditional result was False
2026-04-08 03:06:10.271503 |
2026-04-08 03:06:10.271611 | TASK [prepare-workspace : Start zuul_console daemon.]
2026-04-08 03:06:10.550633 | instance | ok
2026-04-08 03:06:10.557774 |
2026-04-08 03:06:10.557849 | TASK [prepare-workspace : Synchronize src repos to workspace directory.]
2026-04-08 03:06:12.162315 | instance | Output suppressed because no_log was given
2026-04-08 03:06:12.209222 |
2026-04-08 03:06:12.209781 | LOOP [ensure-output-dirs : Empty Zuul Output directories by removing them]
2026-04-08 03:06:12.397981 | instance | ok: "logs"
2026-04-08 03:06:12.398206 | instance | ok: All items complete
2026-04-08 03:06:12.398235 |
2026-04-08 03:06:12.563516 | instance | ok: "artifacts"
2026-04-08 03:06:12.719920 | instance | ok: "docs"
2026-04-08 03:06:12.731072 |
2026-04-08 03:06:12.731204 | LOOP [ensure-output-dirs : Ensure Zuul Output directories exist]
2026-04-08 03:06:12.924732 | instance | changed: "logs"
2026-04-08 03:06:13.075465 | instance | changed: "artifacts"
2026-04-08 03:06:13.221622 | instance | changed: "docs"
2026-04-08 03:06:13.250735 |
2026-04-08 03:06:13.250920 | PLAY RECAP
2026-04-08 03:06:13.251118 | instance | ok: 15 changed: 8 unreachable: 0 failed: 0 skipped: 3 rescued: 0 ignored: 0
2026-04-08 03:06:13.251180 | localhost | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2026-04-08 03:06:13.251204 |
2026-04-08 03:06:13.449043 | PRE-RUN END RESULT_NORMAL: [trusted : github.com/vexxhost/zuul-config/playbooks/base/pre.yaml@main]
2026-04-08 03:06:13.455204 | PRE-RUN START: [untrusted : github.com/vexxhost/zuul-jobs/playbooks/molecule/pre.yaml@main]
2026-04-08 03:06:14.059674 |
2026-04-08 03:06:14.059789 | PLAY [all]
2026-04-08 03:06:14.072163 |
2026-04-08 03:06:14.072261 | TASK [setup-uv : Extract archive]
2026-04-08 03:06:17.221529 | instance | changed
2026-04-08 03:06:17.228563 |
2026-04-08 03:06:17.228719 | TASK [setup-uv : Print version]
2026-04-08 03:06:17.880710 | instance | uv 0.8.13
2026-04-08 03:06:17.769083 | instance | ok: Runtime: 0:00:00.011666
2026-04-08 03:06:17.775755 |
2026-04-08 03:06:17.775808 | PLAY RECAP
2026-04-08 03:06:17.775862 | instance | ok: 2 changed: 2 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2026-04-08 03:06:17.775887 |
2026-04-08 03:06:17.942643 | PRE-RUN END RESULT_NORMAL: [untrusted : github.com/vexxhost/zuul-jobs/playbooks/molecule/pre.yaml@main]
2026-04-08 03:06:17.953022 | PRE-RUN START: [untrusted : github.com/vexxhost/atmosphere/test-playbooks/molecule/pre.yml@main]
2026-04-08 03:06:18.510572 |
2026-04-08 03:06:18.510674 | PLAY [all]
2026-04-08 03:06:18.521531 |
2026-04-08 03:06:18.521604 | TASK [Install "jq" for log collection]
2026-04-08 03:06:29.274559 | instance | changed
2026-04-08 03:06:29.276754 |
2026-04-08 03:06:29.276831 | PLAY RECAP
2026-04-08 03:06:29.276898 | instance | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2026-04-08 03:06:29.276985 |
2026-04-08 03:06:29.373707 | PRE-RUN END RESULT_NORMAL: [untrusted : github.com/vexxhost/atmosphere/test-playbooks/molecule/pre.yml@main]
2026-04-08 03:06:29.387434 | RUN START: [untrusted : github.com/vexxhost/zuul-jobs/playbooks/molecule/run.yaml@main]
2026-04-08 03:06:29.978115 |
2026-04-08 03:06:29.978357 | PLAY [all]
2026-04-08 03:06:29.992238 |
2026-04-08 03:06:29.992325 | TASK [Copy inventory file for Zuul]
2026-04-08 03:06:30.798475 | instance | changed
2026-04-08 03:06:30.805394 |
2026-04-08 03:06:30.805494 | TASK [Switch "ansible_host" to private IP]
2026-04-08 03:06:31.162782 | instance | changed: 1 replacements made
2026-04-08 03:06:31.170165 |
2026-04-08 03:06:31.170235 | TASK [Run Molecule scenario]
2026-04-08 03:06:31.609044 | instance | Using CPython 3.10.12 interpreter at: /usr/bin/python3
2026-04-08 03:06:31.609335 | instance | Creating virtual environment at: .venv
2026-04-08 03:06:31.635677 | instance | Building atmosphere @ file:///home/zuul/src/github.com/vexxhost/atmosphere
2026-04-08 03:06:31.661344 | instance | Downloading pydantic-core (2.0MiB)
2026-04-08 03:06:31.661515 | instance | Downloading kubernetes (1.9MiB)
2026-04-08 03:06:31.661725 | instance | Downloading setuptools (1.1MiB)
2026-04-08 03:06:31.662074 | instance | Downloading rjsonnet (1.2MiB)
2026-04-08 03:06:31.662918 | instance | Downloading ansible-core (2.1MiB)
2026-04-08 03:06:31.663120 | instance | Downloading netaddr (2.2MiB)
2026-04-08 03:06:31.663545 | instance | Downloading pygments (1.2MiB)
2026-04-08 03:06:31.665706 | instance | Downloading cryptography (4.2MiB)
2026-04-08 03:06:31.666599 | instance | Downloading openstacksdk (1.7MiB)
2026-04-08 03:06:31.985826 | instance | Building pyperclip==1.9.0
2026-04-08 03:06:31.996381 | instance | Downloading rjsonnet
2026-04-08 03:06:32.105958 | instance | Downloading pydantic-core
2026-04-08 03:06:32.159159 | instance | Downloading netaddr
2026-04-08 03:06:32.173893 | instance | Downloading pygments
2026-04-08 03:06:32.187272 | instance | Downloading cryptography
2026-04-08 03:06:32.228608 | instance | Downloading setuptools
2026-04-08 03:06:32.289774 | instance | Downloading kubernetes
2026-04-08 03:06:32.325578 | instance | Downloading ansible-core
2026-04-08 03:06:32.359585 | instance | Downloading openstacksdk
2026-04-08 03:06:32.699732 | instance | Built pyperclip==1.9.0
2026-04-08 03:06:32.864918 | instance | Built atmosphere @ file:///home/zuul/src/github.com/vexxhost/atmosphere
2026-04-08 03:06:32.909304 | instance | Installed 83 packages in 42ms
2026-04-08 03:06:33.572195 | instance | WARNING Molecule scenarios should migrate to 'extensions/molecule'
2026-04-08 03:06:34.215945 | instance | INFO [aio > discovery] scenario test matrix: dependency, cleanup, destroy, syntax, create, prepare, converge, idempotence, side_effect, verify, cleanup, destroy
2026-04-08 03:06:34.216040 | instance | INFO [aio > prerun] Performing prerun with role_name_check=0...
2026-04-08 03:07:16.204351 | instance | INFO [aio > dependency] Executing
2026-04-08 03:07:16.204585 | instance | WARNING [aio > dependency] Missing roles requirements file: requirements.yml
2026-04-08 03:07:16.204828 | instance | WARNING [aio > dependency] Missing collections requirements file: collections.yml
2026-04-08 03:07:16.204971 | instance | WARNING [aio > dependency] Executed: 2 missing (Remove from test_sequence to suppress)
2026-04-08 03:07:16.214719 | instance | INFO [aio > cleanup] Executing
2026-04-08 03:07:16.215045 | instance | WARNING [aio > cleanup] Executed: Missing playbook (Remove from test_sequence to suppress)
2026-04-08 03:07:16.225295 | instance | INFO [aio > destroy] Executing
2026-04-08 03:07:16.225386 | instance | WARNING [aio > destroy] Skipping, '--destroy=never' requested.
2026-04-08 03:07:16.225482 | instance | INFO [aio > destroy] Executed: Successful
2026-04-08 03:07:16.234992 | instance | INFO [aio > syntax] Executing
2026-04-08 03:07:19.128334 | instance |
2026-04-08 03:07:19.128593 | instance | playbook: /home/zuul/src/github.com/vexxhost/atmosphere/molecule/aio/converge.yml
2026-04-08 03:07:19.241971 | instance | INFO [aio > syntax] Executed: Successful
2026-04-08 03:07:19.282733 | instance | INFO [aio > create] Executing
2026-04-08 03:07:19.285247 | instance | WARNING [aio > create] Executed: Missing playbook (Remove from test_sequence to suppress)
2026-04-08 03:07:19.295530 | instance | INFO [aio > prepare] Executing
2026-04-08 03:07:20.247773 | instance |
2026-04-08 03:07:20.247994 | instance | PLAY [Prepare] *****************************************************************
2026-04-08 03:07:20.248233 | instance |
2026-04-08 03:07:20.248497 | instance | TASK [Gathering Facts] *********************************************************
2026-04-08 03:07:20.248764 | instance | Wednesday 08 April 2026 03:07:20 +0000 (0:00:00.033) 0:00:00.033 *******
2026-04-08 03:07:21.517570 | instance | [WARNING]: Platform linux on host instance is using the discovered Python
2026-04-08 03:07:21.517821 | instance | interpreter at /usr/bin/python3.10, but future installation of another Python
2026-04-08 03:07:21.518105 | instance | interpreter could change the meaning of that path. See
2026-04-08 03:07:21.518389 | instance | https://docs.ansible.com/ansible-
2026-04-08 03:07:21.518744 | instance | core/2.17/reference_appendices/interpreter_discovery.html for more information.
2026-04-08 03:07:21.527570 | instance | ok: [instance]
2026-04-08 03:07:21.527923 | instance |
2026-04-08 03:07:21.528274 | instance | TASK [Configure short hostname] ************************************************
2026-04-08 03:07:21.528585 | instance | Wednesday 08 April 2026 03:07:21 +0000 (0:00:01.280) 0:00:01.313 *******
2026-04-08 03:07:22.309977 | instance | changed: [instance]
2026-04-08 03:07:22.310180 | instance |
2026-04-08 03:07:22.310452 | instance | TASK [Ensure hostname inside hosts file] ***************************************
2026-04-08 03:07:22.310800 | instance | Wednesday 08 April 2026 03:07:22 +0000 (0:00:00.782) 0:00:02.095 *******
2026-04-08 03:07:22.702818 | instance | changed: [instance]
2026-04-08 03:07:22.703013 | instance |
2026-04-08 03:07:22.703292 | instance | TASK [Install "dirmngr" for GPG keyserver operations] **************************
2026-04-08 03:07:22.703567 | instance | Wednesday 08 April 2026 03:07:22 +0000 (0:00:00.392) 0:00:02.488 *******
2026-04-08 03:07:23.996278 | instance | ok: [instance]
2026-04-08 03:07:23.996467 | instance |
2026-04-08 03:07:23.996781 | instance | TASK [Purge "snapd" package] ***************************************************
2026-04-08 03:07:23.997067 | instance | Wednesday 08 April 2026 03:07:23 +0000 (0:00:01.290) 0:00:03.780 *******
2026-04-08 03:07:24.761930 | instance | ok: [instance]
2026-04-08 03:07:24.762136 | instance |
2026-04-08 03:07:24.762424 | instance | PLAY [Generate workspace for Atmosphere] ***************************************
2026-04-08 03:07:24.762733 | instance |
2026-04-08 03:07:24.763024 | instance | TASK [Create folders for workspace] ********************************************
2026-04-08 03:07:24.763317 | instance | Wednesday 08 April 2026 03:07:24 +0000 (0:00:00.767) 0:00:04.547 *******
2026-04-08 03:07:25.957780 | instance | changed: [localhost] => (item=group_vars)
2026-04-08 03:07:25.957958 | instance | changed: [localhost] => (item=group_vars/all)
2026-04-08 03:07:25.958297 | instance | changed: [localhost] => (item=group_vars/controllers)
2026-04-08 03:07:25.958479 | instance | changed: [localhost] => (item=group_vars/cephs)
2026-04-08 03:07:25.958769 | instance | changed: [localhost] => (item=group_vars/computes)
2026-04-08 03:07:25.959031 | instance | changed: [localhost] => (item=host_vars)
2026-04-08 03:07:25.959278 | instance |
2026-04-08 03:07:25.959542 | instance | PLAY [Generate Ceph control plane configuration for workspace] *****************
2026-04-08 03:07:25.959783 | instance |
2026-04-08 03:07:25.960047 | instance | TASK [Ensure the Ceph control plane configuration file exists] *****************
2026-04-08 03:07:25.960314 | instance | Wednesday 08 April 2026 03:07:25 +0000 (0:00:01.195) 0:00:05.743 *******
2026-04-08 03:07:26.179740 | instance | changed: [localhost]
2026-04-08 03:07:26.179928 | instance |
2026-04-08 03:07:26.180195 | instance | TASK [Load the current Ceph control plane configuration into a variable] *******
2026-04-08 03:07:26.180463 | instance | Wednesday 08 April 2026 03:07:26 +0000 (0:00:00.221) 0:00:05.965 *******
2026-04-08 03:07:26.218609 | instance | ok: [localhost]
2026-04-08 03:07:26.218888 | instance |
2026-04-08 03:07:26.219151 | instance | TASK [Generate Ceph control plane values for missing variables] ****************
2026-04-08 03:07:26.219417 | instance | Wednesday 08 April 2026 03:07:26 +0000 (0:00:00.039) 0:00:06.004 *******
2026-04-08 03:07:26.269421 | instance | ok: [localhost] => (item={'key': 'ceph_fsid', 'value': '28f62130-c9d2-5667-a3a8-8900da467d1d'})
2026-04-08 03:07:26.269728 | instance | ok: [localhost] => (item={'key': 'ceph_mon_public_network', 'value': '10.96.240.0/24'})
2026-04-08 03:07:26.269958 | instance |
2026-04-08 03:07:26.270270 | instance | TASK [Write new Ceph control plane configuration file to disk] *****************
2026-04-08 03:07:26.270527 | instance | Wednesday 08 April 2026 03:07:26 +0000 (0:00:00.051) 0:00:06.055 *******
2026-04-08 03:07:26.858095 | instance | changed: [localhost]
2026-04-08 03:07:26.858277 | instance |
2026-04-08 03:07:26.858610 | instance | PLAY [Generate Ceph OSD configuration for workspace] ***************************
2026-04-08 03:07:26.858908 | instance |
2026-04-08 03:07:26.859175 | instance | TASK [Ensure the Ceph OSDs configuration file exists] **************************
2026-04-08 03:07:26.859444 | instance | Wednesday 08 April 2026 03:07:26 +0000 (0:00:00.588) 0:00:06.643 *******
2026-04-08 03:07:27.068666 | instance | changed: [localhost]
2026-04-08 03:07:27.068869 | instance |
2026-04-08 03:07:27.069148 | instance | TASK [Load the current Ceph OSDs configuration into a variable] ****************
2026-04-08 03:07:27.069413 | instance | Wednesday 08 April 2026 03:07:27 +0000 (0:00:00.210) 0:00:06.854 *******
2026-04-08 03:07:27.109105 | instance | ok: [localhost]
2026-04-08 03:07:27.109345 | instance |
2026-04-08 03:07:27.109631 | instance | TASK [Generate Ceph OSDs values for missing variables] *************************
2026-04-08 03:07:27.109920 | instance | Wednesday 08 April 2026 03:07:27 +0000 (0:00:00.040) 0:00:06.894 *******
2026-04-08 03:07:27.147267 | instance | ok: [localhost] => (item={'key': 'ceph_osd_devices', 'value': ['/dev/vdb', '/dev/vdc', '/dev/vdd']})
2026-04-08 03:07:27.147480 | instance |
2026-04-08 03:07:27.147751 | instance | TASK [Write new Ceph OSDs configuration file to disk] **************************
2026-04-08 03:07:27.148020 | instance | Wednesday 08 April 2026 03:07:27 +0000 (0:00:00.038) 0:00:06.933 *******
2026-04-08 03:07:27.533476 | instance | changed: [localhost]
2026-04-08 03:07:27.533576 | instance |
2026-04-08 03:07:27.533769 | instance | PLAY [Generate Kubernetes configuration for workspace] *************************
2026-04-08 03:07:27.533926 | instance |
2026-04-08 03:07:27.534097 | instance | TASK [Ensure the Kubernetes configuration file exists] *************************
2026-04-08 03:07:27.534287 | instance | Wednesday 08 April 2026 03:07:27 +0000 (0:00:00.386) 0:00:07.319 *******
2026-04-08 03:07:27.742549 | instance | changed: [localhost]
2026-04-08 03:07:27.742772 | instance |
2026-04-08 03:07:27.743041 | instance | TASK [Load the current Kubernetes configuration into a variable] ***************
2026-04-08 03:07:27.743307 | instance | Wednesday 08 April 2026 03:07:27 +0000 (0:00:00.208) 0:00:07.528 *******
2026-04-08 03:07:27.776903 | instance | ok: [localhost]
2026-04-08 03:07:27.777101 | instance |
2026-04-08 03:07:27.777393 | instance | TASK [Generate Kubernetes values for missing variables] ************************
2026-04-08 03:07:27.777634 | instance | Wednesday 08 April 2026 03:07:27 +0000 (0:00:00.034) 0:00:07.563 *******
2026-04-08 03:07:27.823892 | instance | ok: [localhost] => (item={'key': 'kubernetes_hostname', 'value': '10.96.240.10'})
2026-04-08 03:07:27.824066 | instance | ok: [localhost] => (item={'key': 'kubernetes_keepalived_vrid', 'value': 42})
2026-04-08 03:07:27.824374 | instance | ok: [localhost] => (item={'key': 'kubernetes_keepalived_vip', 'value': '10.96.240.10'})
2026-04-08 03:07:27.824614 | instance |
2026-04-08 03:07:27.824882 | instance | TASK [Write new Kubernetes configuration file to disk] *************************
2026-04-08 03:07:27.825141 | instance | Wednesday 08 April 2026 03:07:27 +0000 (0:00:00.046) 0:00:07.610 *******
2026-04-08 03:07:28.206557 | instance | changed: [localhost]
2026-04-08 03:07:28.206658 | instance |
2026-04-08 03:07:28.206667 | instance | PLAY [Generate Keepalived configuration for workspace] *************************
2026-04-08 03:07:28.206674 | instance |
2026-04-08 03:07:28.206680 | instance | TASK [Ensure the Keeaplived configuration file exists] *************************
2026-04-08 03:07:28.206686 | instance | Wednesday 08 April 2026 03:07:28 +0000 (0:00:00.382) 0:00:07.992 *******
2026-04-08 03:07:28.419293 | instance | changed: [localhost]
2026-04-08 03:07:28.419399 | instance |
2026-04-08 03:07:28.419823 | instance | TASK [Load the current Keepalived configuration into a variable] ***************
2026-04-08 03:07:28.419888 | instance | Wednesday 08 April 2026 03:07:28 +0000 (0:00:00.213) 0:00:08.205 *******
2026-04-08 03:07:28.460420 | instance | ok: [localhost]
2026-04-08 03:07:28.460507 | instance |
2026-04-08 03:07:28.460729 | instance | TASK [Generate Keepalived values for missing variables] ************************
2026-04-08 03:07:28.460784 | instance | Wednesday 08 April 2026 03:07:28 +0000 (0:00:00.041) 0:00:08.246 *******
2026-04-08 03:07:28.508584 | instance | ok: [localhost] => (item={'key': 'keepalived_interface', 'value': 'br-ex'})
2026-04-08 03:07:28.509242 | instance | ok: [localhost] => (item={'key': 'keepalived_vip', 'value': '10.96.250.10'})
2026-04-08 03:07:28.509291 | instance |
2026-04-08 03:07:28.509301 | instance | TASK [Write new Keepalived configuration file to disk] *************************
2026-04-08 03:07:28.509308 | instance | Wednesday 08 April 2026 03:07:28 +0000 (0:00:00.047) 0:00:08.294 *******
2026-04-08 03:07:28.912672 | instance | changed: [localhost]
2026-04-08 03:07:28.912757 | instance |
2026-04-08 03:07:28.913488 | instance | PLAY [Generate endpoints for workspace] ****************************************
2026-04-08 03:07:28.913555 | instance |
2026-04-08 03:07:28.913564 | instance | TASK [Gathering Facts] *********************************************************
2026-04-08 03:07:28.913570 | instance | Wednesday 08 April 2026 03:07:28 +0000 (0:00:00.404) 0:00:08.698 *******
2026-04-08 03:07:29.654404 | instance | ok: [localhost]
2026-04-08 03:07:29.654509 | instance |
2026-04-08 03:07:29.654940 | instance | TASK [Ensure the endpoints file exists] ****************************************
2026-04-08 03:07:29.655000 | instance | Wednesday 08 April 2026 03:07:29 +0000 (0:00:00.741) 0:00:09.440 *******
2026-04-08 03:07:29.867750 | instance | changed: [localhost]
2026-04-08 03:07:29.867883 | instance |
2026-04-08 03:07:29.868281 | instance | TASK [Load the current endpoints into a variable] ******************************
2026-04-08 03:07:29.868335 | instance | Wednesday 08 April 2026 03:07:29 +0000 (0:00:00.213) 0:00:09.653 *******
2026-04-08 03:07:29.908762 | instance | ok: [localhost]
2026-04-08 03:07:29.909318 | instance |
2026-04-08 03:07:29.909375 | instance | TASK [Generate endpoint skeleton for missing variables] ************************
2026-04-08 03:07:29.909384 | instance | Wednesday 08 April 2026 03:07:29 +0000 (0:00:00.040) 0:00:09.694 *******
2026-04-08 03:07:30.720310 | instance | ok: [localhost] => (item=keycloak_host)
2026-04-08 03:07:30.720424 | instance | ok: [localhost] => (item=kube_prometheus_stack_grafana_host)
2026-04-08 03:07:30.720439 | instance | ok: [localhost] => (item=kube_prometheus_stack_alertmanager_host)
2026-04-08 03:07:30.722787 | instance | ok: [localhost] => (item=kube_prometheus_stack_prometheus_host)
2026-04-08 03:07:30.722943 | instance | ok: [localhost] => (item=openstack_helm_endpoints_region_name)
2026-04-08 03:07:30.722955 | instance | ok: [localhost] => (item=openstack_helm_endpoints_keystone_api_host)
2026-04-08 03:07:30.722963 | instance | ok: [localhost] => (item=openstack_helm_endpoints_glance_api_host)
2026-04-08 03:07:30.722970 | instance | ok: [localhost] => (item=openstack_helm_endpoints_cinder_api_host)
2026-04-08 03:07:30.722976 | instance | ok: [localhost] => (item=openstack_helm_endpoints_placement_api_host)
2026-04-08 03:07:30.722983 | instance | ok: [localhost] => (item=openstack_helm_endpoints_barbican_api_host)
2026-04-08 03:07:30.722989 | instance | ok: [localhost] => (item=openstack_helm_endpoints_neutron_api_host)
2026-04-08 03:07:30.722995 | instance | ok: [localhost] => (item=openstack_helm_endpoints_nova_api_host)
2026-04-08 03:07:30.723002 | instance | ok: [localhost] => (item=openstack_helm_endpoints_nova_novnc_host)
2026-04-08 03:07:30.723008 | instance | ok: [localhost] => (item=openstack_helm_endpoints_ironic_api_host)
2026-04-08 03:07:30.723014 | instance | ok: [localhost] => (item=openstack_helm_endpoints_designate_api_host)
2026-04-08 03:07:30.723021 | instance | ok: [localhost] => (item=openstack_helm_endpoints_octavia_api_host)
2026-04-08 03:07:30.723027 | instance | ok: [localhost] => (item=openstack_helm_endpoints_magnum_api_host)
2026-04-08 03:07:30.723034 | instance | ok: [localhost] => (item=openstack_helm_endpoints_magnum_registry_host)
2026-04-08 03:07:30.723041 | instance | ok: [localhost] => (item=openstack_helm_endpoints_heat_api_host)
2026-04-08 03:07:30.723047 | instance | ok: [localhost] => (item=openstack_helm_endpoints_heat_cfn_api_host)
2026-04-08 03:07:30.723054 | instance | ok: [localhost] => (item=openstack_helm_endpoints_horizon_api_host)
2026-04-08 03:07:30.723061 | instance | ok: [localhost] => (item=openstack_helm_endpoints_rgw_host)
2026-04-08 03:07:30.723073 | instance | ok: [localhost] => (item=openstack_helm_endpoints_manila_api_host)
2026-04-08 03:07:30.723081 | instance |
2026-04-08 03:07:30.723176 | instance | TASK [Write new endpoints file to disk] ****************************************
2026-04-08 03:07:30.723335 | instance | Wednesday 08 April 2026 03:07:30 +0000 (0:00:00.811) 0:00:10.505 *******
2026-04-08 03:07:31.119814 | instance | changed: [localhost]
2026-04-08 03:07:31.119875 | instance |
2026-04-08 03:07:31.120069 | instance | TASK [Ensure the endpoints file exists] ****************************************
2026-04-08 03:07:31.120322 | instance | Wednesday 08 April 2026 03:07:31 +0000 (0:00:00.400) 0:00:10.906 *******
2026-04-08 03:07:31.338309 | instance | changed: [localhost]
2026-04-08 03:07:31.338467 | instance |
2026-04-08 03:07:31.338710 | instance | PLAY [Generate Neutron configuration for workspace] ****************************
2026-04-08 03:07:31.338977 | instance |
2026-04-08 03:07:31.338990 | instance | TASK [Ensure the Neutron configuration file exists] ****************************
2026-04-08 03:07:31.339143 | instance | Wednesday 08 April 2026 03:07:31 +0000 (0:00:00.218) 0:00:11.124 *******
2026-04-08 03:07:31.556692 | instance | changed: [localhost]
2026-04-08 03:07:31.557177 | instance |
2026-04-08 03:07:31.557228 | instance | TASK [Load the current Neutron configuration into a variable] ******************
2026-04-08 03:07:31.557236 | instance | Wednesday 08 April 2026 03:07:31 +0000 (0:00:00.217) 0:00:11.342 *******
2026-04-08 03:07:31.598485 | instance | ok: [localhost]
2026-04-08 03:07:31.599022 | instance |
2026-04-08 03:07:31.599047 | instance | TASK [Generate Neutron values for missing variables] ***************************
2026-04-08 03:07:31.599055 | instance | Wednesday 08 April 2026 03:07:31 +0000 (0:00:00.042) 0:00:11.385 *******
2026-04-08 03:07:31.643485 | instance | ok: [localhost] => (item={'key': 'neutron_networks', 'value': [{'name': 'public', 'external': True, 'shared': True, 'mtu_size': 1500, 'port_security_enabled': True, 'provider_network_type': 'flat', 'provider_physical_network': 'external', 'subnets': [{'name': 'public-subnet', 'cidr': '10.96.250.0/24', 'gateway_ip': '10.96.250.10', 'allocation_pool_start': '10.96.250.200', 'allocation_pool_end': '10.96.250.220', 'enable_dhcp': True}]}]})
2026-04-08 03:07:31.644005 | instance |
2026-04-08 03:07:31.644037 | instance | TASK [Write new Neutron configuration file to disk] ****************************
2026-04-08 03:07:31.644045 | instance | Wednesday 08 April 2026 03:07:31 +0000 (0:00:00.044) 0:00:11.429 *******
2026-04-08 03:07:32.026472 | instance | changed: [localhost]
2026-04-08 03:07:32.026585 | instance |
2026-04-08 03:07:32.027243 | instance | PLAY [Generate Nova configuration for workspace] *******************************
2026-04-08 03:07:32.027273 | instance |
2026-04-08 03:07:32.027280 | instance | TASK [Ensure the Nova configuration file exists] *******************************
2026-04-08 03:07:32.027286 | instance | Wednesday 08 April 2026 03:07:32 +0000 (0:00:00.382) 0:00:11.812 *******
2026-04-08 03:07:32.240336 | instance | changed: [localhost]
2026-04-08 03:07:32.240429 | instance |
2026-04-08 03:07:32.240835 | instance | TASK [Load the current Nova configuration into a variable] *********************
2026-04-08 03:07:32.240893 | instance | Wednesday 08 April 2026 03:07:32 +0000 (0:00:00.213) 0:00:12.026 *******
2026-04-08 03:07:32.281053 | instance | ok: [localhost]
2026-04-08 03:07:32.281579 | instance |
2026-04-08 03:07:32.281624 | instance | TASK [Generate Nova values for missing variables] ******************************
2026-04-08 03:07:32.281633 | instance | Wednesday 08 April 2026 03:07:32 +0000 (0:00:00.040) 0:00:12.067 *******
2026-04-08 03:07:32.331776 | instance | ok: [localhost] => (item={'key': 'nova_flavors', 'value': [{'name': 'm1.tiny', 'ram': 512, 'disk': 1, 'vcpus': 1}, {'name': 'm1.small', 'ram': 2048, 'disk': 20, 'vcpus': 1}, {'name': 'm1.medium', 'ram': 4096, 'disk': 40, 'vcpus': 2}, {'name': 'm1.large', 'ram': 8192, 'disk': 80, 'vcpus': 4}, {'name': 'm1.xlarge', 'ram': 16384, 'disk': 160, 'vcpus': 8}]})
2026-04-08 03:07:32.331866 | instance |
2026-04-08 03:07:32.331878 | instance | TASK [Write new Nova configuration file to disk] *******************************
2026-04-08 03:07:32.331954 | instance | Wednesday 08 April 2026 03:07:32 +0000 (0:00:00.049) 0:00:12.116 *******
2026-04-08 03:07:32.722232 | instance | changed: [localhost]
2026-04-08 03:07:32.722306 | instance |
2026-04-08 03:07:32.722319 | instance | PLAY [Generate secrets for workspace] ******************************************
2026-04-08 03:07:32.722330 | instance |
2026-04-08 03:07:32.722340 | instance | TASK [Ensure the secrets file exists] ******************************************
2026-04-08 03:07:32.722802 | instance | Wednesday 08 April 2026 03:07:32 +0000 (0:00:00.390) 0:00:12.507 *******
2026-04-08 03:07:32.928559 | instance | changed: [localhost]
2026-04-08 03:07:32.928762 | instance |
2026-04-08 03:07:32.929042 | instance | TASK [Load the current secrets into a variable] ********************************
2026-04-08 03:07:32.929314 | instance | Wednesday 08 April 2026 03:07:32 +0000 (0:00:00.207) 0:00:12.714 *******
2026-04-08 03:07:32.963677 | instance | ok: [localhost]
2026-04-08 03:07:32.963893 | instance |
2026-04-08 03:07:32.964178 | instance | TASK [Generate secrets for missing variables] **********************************
2026-04-08 03:07:32.964437 | instance | Wednesday 08 April 2026 03:07:32 +0000 (0:00:00.035) 0:00:12.750 *******
2026-04-08 03:07:33.370312 | instance | ok: [localhost] => (item=heat_auth_encryption_key)
2026-04-08 03:07:33.370547 | instance | ok: [localhost] => (item=keepalived_password)
2026-04-08 03:07:33.370936 | instance | ok: [localhost] => (item=keycloak_admin_password)
2026-04-08 03:07:33.371193 | instance | ok: [localhost] => (item=keycloak_database_password)
2026-04-08 03:07:33.371485 | instance | ok: [localhost] => (item=keystone_keycloak_client_secret)
2026-04-08 03:07:33.371754 | instance | ok: [localhost] => (item=keystone_oidc_crypto_passphrase)
2026-04-08 03:07:33.372035 | instance | ok: [localhost] => (item=kube_prometheus_stack_grafana_admin_password)
2026-04-08 03:07:33.372312 | instance | ok: [localhost] => (item=octavia_heartbeat_key)
2026-04-08 03:07:33.372595 | instance | ok: [localhost] => (item=openstack_helm_endpoints_rabbitmq_admin_password)
2026-04-08 03:07:33.372874 | instance | ok: [localhost] => (item=openstack_helm_endpoints_memcached_secret_key)
2026-04-08 03:07:33.373153 | instance | ok: [localhost] => (item=openstack_helm_endpoints_keystone_admin_password)
2026-04-08 03:07:33.373456 | instance | ok: [localhost] => (item=openstack_helm_endpoints_keystone_mariadb_password)
2026-04-08 03:07:33.373731 | instance | ok: [localhost] => (item=openstack_helm_endpoints_keystone_rabbitmq_password)
2026-04-08 03:07:33.374009 | instance | ok: [localhost] => (item=openstack_helm_endpoints_glance_keystone_password)
2026-04-08 03:07:33.374287 | instance | ok: [localhost] => (item=openstack_helm_endpoints_glance_mariadb_password)
2026-04-08 03:07:33.374592 | instance | ok: [localhost] => (item=openstack_helm_endpoints_glance_rabbitmq_password)
2026-04-08 03:07:33.374878 | instance | ok: [localhost] => (item=openstack_helm_endpoints_cinder_keystone_password)
2026-04-08 03:07:33.375153 | instance | ok: [localhost] => (item=openstack_helm_endpoints_cinder_mariadb_password)
2026-04-08 03:07:33.375429 | instance | ok: [localhost] => (item=openstack_helm_endpoints_cinder_rabbitmq_password)
2026-04-08 03:07:33.375708 | instance | ok: [localhost] => (item=openstack_helm_endpoints_placement_keystone_password)
2026-04-08 03:07:33.375982 | instance | ok: [localhost] => (item=openstack_helm_endpoints_placement_mariadb_password)
2026-04-08 03:07:33.376273 | instance | ok: [localhost] => (item=openstack_helm_endpoints_barbican_keystone_password)
2026-04-08 03:07:33.376550 | instance | ok: [localhost] => (item=openstack_helm_endpoints_barbican_mariadb_password)
2026-04-08 03:07:33.376831 | instance | ok: [localhost] => (item=openstack_helm_endpoints_neutron_keystone_password)
2026-04-08 03:07:33.377105 | instance | ok: [localhost] => (item=openstack_helm_endpoints_neutron_mariadb_password)
2026-04-08 03:07:33.377383 | instance | ok: [localhost] => (item=openstack_helm_endpoints_neutron_rabbitmq_password)
2026-04-08 03:07:33.377718 | instance | ok: [localhost] => (item=openstack_helm_endpoints_neutron_metadata_secret)
2026-04-08 03:07:33.377993 | instance | ok: [localhost] => (item=openstack_helm_endpoints_nova_keystone_password)
2026-04-08 03:07:33.378270 | instance | ok: [localhost] => (item=openstack_helm_endpoints_nova_mariadb_password)
2026-04-08 03:07:33.378544 | instance | ok: [localhost] => (item=openstack_helm_endpoints_nova_rabbitmq_password)
2026-04-08 03:07:33.378749 | instance | ok: [localhost] => (item=openstack_helm_endpoints_ironic_keystone_password)
2026-04-08 03:07:33.378780 | instance | ok: [localhost] =>
(item=openstack_helm_endpoints_ironic_mariadb_password) 2026-04-08 03:07:33.378912 | instance | ok: [localhost] => (item=openstack_helm_endpoints_ironic_rabbitmq_password) 2026-04-08 03:07:33.379024 | instance | ok: [localhost] => (item=openstack_helm_endpoints_designate_keystone_password) 2026-04-08 03:07:33.379139 | instance | ok: [localhost] => (item=openstack_helm_endpoints_designate_mariadb_password) 2026-04-08 03:07:33.379250 | instance | ok: [localhost] => (item=openstack_helm_endpoints_designate_rabbitmq_password) 2026-04-08 03:07:33.379381 | instance | ok: [localhost] => (item=openstack_helm_endpoints_octavia_keystone_password) 2026-04-08 03:07:33.379487 | instance | ok: [localhost] => (item=openstack_helm_endpoints_octavia_mariadb_password) 2026-04-08 03:07:33.379604 | instance | ok: [localhost] => (item=openstack_helm_endpoints_octavia_rabbitmq_password) 2026-04-08 03:07:33.379721 | instance | ok: [localhost] => (item=openstack_helm_endpoints_magnum_keystone_password) 2026-04-08 03:07:33.379836 | instance | ok: [localhost] => (item=openstack_helm_endpoints_magnum_mariadb_password) 2026-04-08 03:07:33.379956 | instance | ok: [localhost] => (item=openstack_helm_endpoints_magnum_rabbitmq_password) 2026-04-08 03:07:33.380075 | instance | ok: [localhost] => (item=openstack_helm_endpoints_heat_keystone_password) 2026-04-08 03:07:33.380209 | instance | ok: [localhost] => (item=openstack_helm_endpoints_heat_trustee_keystone_password) 2026-04-08 03:07:33.380332 | instance | ok: [localhost] => (item=openstack_helm_endpoints_heat_stack_user_keystone_password) 2026-04-08 03:07:33.380451 | instance | ok: [localhost] => (item=openstack_helm_endpoints_heat_mariadb_password) 2026-04-08 03:07:33.380571 | instance | ok: [localhost] => (item=openstack_helm_endpoints_heat_rabbitmq_password) 2026-04-08 03:07:33.380690 | instance | ok: [localhost] => (item=openstack_helm_endpoints_horizon_mariadb_password) 2026-04-08 03:07:33.380809 | instance | ok: [localhost] => 
(item=openstack_helm_endpoints_tempest_keystone_password) 2026-04-08 03:07:33.380933 | instance | ok: [localhost] => (item=openstack_helm_endpoints_openstack_exporter_keystone_password) 2026-04-08 03:07:33.381052 | instance | ok: [localhost] => (item=openstack_helm_endpoints_rgw_keystone_password) 2026-04-08 03:07:33.381171 | instance | ok: [localhost] => (item=openstack_helm_endpoints_manila_keystone_password) 2026-04-08 03:07:33.381290 | instance | ok: [localhost] => (item=openstack_helm_endpoints_manila_mariadb_password) 2026-04-08 03:07:33.381411 | instance | ok: [localhost] => (item=openstack_helm_endpoints_staffeln_mariadb_password) 2026-04-08 03:07:33.381547 | instance | 2026-04-08 03:07:33.381668 | instance | TASK [Generate base64 encoded secrets] ***************************************** 2026-04-08 03:07:33.381789 | instance | Wednesday 08 April 2026 03:07:33 +0000 (0:00:00.406) 0:00:13.156 ******* 2026-04-08 03:07:33.415242 | instance | ok: [localhost] => (item=barbican_kek) 2026-04-08 03:07:33.415571 | instance | 2026-04-08 03:07:33.415970 | instance | TASK [Generate temporary files for generating keys for missing variables] ****** 2026-04-08 03:07:33.416274 | instance | Wednesday 08 April 2026 03:07:33 +0000 (0:00:00.045) 0:00:13.201 ******* 2026-04-08 03:07:33.885682 | instance | changed: [localhost] => (item=manila_ssh_key) 2026-04-08 03:07:33.885775 | instance | changed: [localhost] => (item=nova_ssh_key) 2026-04-08 03:07:33.885888 | instance | 2026-04-08 03:07:33.886008 | instance | TASK [Generate SSH keys for missing variables] ********************************* 2026-04-08 03:07:33.886129 | instance | Wednesday 08 April 2026 03:07:33 +0000 (0:00:00.470) 0:00:13.672 ******* 2026-04-08 03:07:37.118132 | instance | changed: [localhost] => (item=manila_ssh_key) 2026-04-08 03:07:37.118190 | instance | changed: [localhost] => (item=nova_ssh_key) 2026-04-08 03:07:37.118197 | instance | 2026-04-08 03:07:37.118204 | instance | TASK [Set values for SSH keys] 
************************************************* 2026-04-08 03:07:37.118210 | instance | Wednesday 08 April 2026 03:07:37 +0000 (0:00:03.231) 0:00:16.903 ******* 2026-04-08 03:07:37.171482 | instance | ok: [localhost] => (item=manila_ssh_key) 2026-04-08 03:07:37.171550 | instance | ok: [localhost] => (item=nova_ssh_key) 2026-04-08 03:07:37.171953 | instance | 2026-04-08 03:07:37.171994 | instance | TASK [Delete the temporary files generated for SSH keys] *********************** 2026-04-08 03:07:37.172000 | instance | Wednesday 08 April 2026 03:07:37 +0000 (0:00:00.054) 0:00:16.958 ******* 2026-04-08 03:07:37.571867 | instance | changed: [localhost] => (item=manila_ssh_key) 2026-04-08 03:07:37.571906 | instance | changed: [localhost] => (item=nova_ssh_key) 2026-04-08 03:07:37.571911 | instance | 2026-04-08 03:07:37.571916 | instance | TASK [Write new secrets file to disk] ****************************************** 2026-04-08 03:07:37.571920 | instance | Wednesday 08 April 2026 03:07:37 +0000 (0:00:00.398) 0:00:17.356 ******* 2026-04-08 03:07:37.939275 | instance | changed: [localhost] 2026-04-08 03:07:37.939358 | instance | 2026-04-08 03:07:37.939925 | instance | TASK [Encrypt secrets file with Vault password] ******************************** 2026-04-08 03:07:37.939969 | instance | Wednesday 08 April 2026 03:07:37 +0000 (0:00:00.368) 0:00:17.725 ******* 2026-04-08 03:07:37.981423 | instance | skipping: [localhost] 2026-04-08 03:07:37.981545 | instance | 2026-04-08 03:07:37.982220 | instance | PLAY [Overwrite OSD devices with LVM-backed paths] ***************************** 2026-04-08 03:07:37.982263 | instance | 2026-04-08 03:07:37.982270 | instance | TASK [Gathering Facts] ********************************************************* 2026-04-08 03:07:37.982277 | instance | Wednesday 08 April 2026 03:07:37 +0000 (0:00:00.042) 0:00:17.767 ******* 2026-04-08 03:07:38.782022 | instance | ok: [instance] 2026-04-08 03:07:38.782057 | instance | 2026-04-08 03:07:38.782062 | 
instance | TASK [Overwrite existing osds.yml file] **************************************** 2026-04-08 03:07:38.782067 | instance | Wednesday 08 April 2026 03:07:38 +0000 (0:00:00.800) 0:00:18.568 ******* 2026-04-08 03:07:39.360187 | instance | changed: [instance] 2026-04-08 03:07:39.360228 | instance | 2026-04-08 03:07:39.360235 | instance | PLAY [Setup networking] ******************************************************** 2026-04-08 03:07:39.360242 | instance | 2026-04-08 03:07:39.360247 | instance | TASK [Gathering Facts] ********************************************************* 2026-04-08 03:07:39.360253 | instance | Wednesday 08 April 2026 03:07:39 +0000 (0:00:00.577) 0:00:19.145 ******* 2026-04-08 03:07:40.194716 | instance | ok: [instance] 2026-04-08 03:07:40.194828 | instance | 2026-04-08 03:07:40.194898 | instance | TASK [Create bridge for management network] ************************************ 2026-04-08 03:07:40.195065 | instance | Wednesday 08 April 2026 03:07:40 +0000 (0:00:00.835) 0:00:19.981 ******* 2026-04-08 03:07:40.663979 | instance | ok: [instance] 2026-04-08 03:07:40.664111 | instance | 2026-04-08 03:07:40.664468 | instance | TASK [Create fake interface for management bridge] ***************************** 2026-04-08 03:07:40.664514 | instance | Wednesday 08 April 2026 03:07:40 +0000 (0:00:00.469) 0:00:20.450 ******* 2026-04-08 03:07:41.019889 | instance | ok: [instance] 2026-04-08 03:07:41.019967 | instance | 2026-04-08 03:07:41.020247 | instance | TASK [Assign dummy interface to management bridge] ***************************** 2026-04-08 03:07:41.020309 | instance | Wednesday 08 April 2026 03:07:41 +0000 (0:00:00.355) 0:00:20.806 ******* 2026-04-08 03:07:41.343704 | instance | ok: [instance] 2026-04-08 03:07:41.343822 | instance | 2026-04-08 03:07:41.343835 | instance | TASK [Assign IP address for management bridge] ********************************* 2026-04-08 03:07:41.344003 | instance | Wednesday 08 April 2026 03:07:41 +0000 (0:00:00.324) 
0:00:21.130 ******* 2026-04-08 03:07:41.672228 | instance | ok: [instance] 2026-04-08 03:07:41.673008 | instance | 2026-04-08 03:07:41.673060 | instance | TASK [Bring up interfaces] ***************************************************** 2026-04-08 03:07:41.673071 | instance | Wednesday 08 April 2026 03:07:41 +0000 (0:00:00.327) 0:00:21.458 ******* 2026-04-08 03:07:42.315027 | instance | ok: [instance] => (item=br-mgmt) 2026-04-08 03:07:42.315112 | instance | ok: [instance] => (item=dummy0) 2026-04-08 03:07:42.315232 | instance | 2026-04-08 03:07:42.315476 | instance | PLAY [Create devices for Ceph] ************************************************* 2026-04-08 03:07:42.315633 | instance | 2026-04-08 03:07:42.315795 | instance | TASK [Gathering Facts] ********************************************************* 2026-04-08 03:07:42.315959 | instance | Wednesday 08 April 2026 03:07:42 +0000 (0:00:00.642) 0:00:22.101 ******* 2026-04-08 03:07:43.193339 | instance | ok: [instance] 2026-04-08 03:07:43.193462 | instance | 2026-04-08 03:07:43.193537 | instance | TASK [Install depedencies] ***************************************************** 2026-04-08 03:07:43.193727 | instance | Wednesday 08 April 2026 03:07:43 +0000 (0:00:00.878) 0:00:22.979 ******* 2026-04-08 03:08:08.671616 | instance | changed: [instance] 2026-04-08 03:08:08.672139 | instance | 2026-04-08 03:08:08.672543 | instance | TASK [Start up service] ******************************************************** 2026-04-08 03:08:08.672960 | instance | Wednesday 08 April 2026 03:08:08 +0000 (0:00:25.477) 0:00:48.456 ******* 2026-04-08 03:08:09.326429 | instance | ok: [instance] 2026-04-08 03:08:09.326647 | instance | 2026-04-08 03:08:09.326885 | instance | TASK [Generate lvm.conf] ******************************************************* 2026-04-08 03:08:09.327118 | instance | Wednesday 08 April 2026 03:08:09 +0000 (0:00:00.655) 0:00:49.112 ******* 2026-04-08 03:08:09.661420 | instance | ok: [instance] 2026-04-08 
03:08:09.661910 | instance | 2026-04-08 03:08:09.661964 | instance | TASK [Write /etc/lvm/lvm.conf] ************************************************* 2026-04-08 03:08:09.661972 | instance | Wednesday 08 April 2026 03:08:09 +0000 (0:00:00.335) 0:00:49.447 ******* 2026-04-08 03:08:10.239841 | instance | changed: [instance] 2026-04-08 03:08:10.239975 | instance | 2026-04-08 03:08:10.240356 | instance | TASK [Get list of all loopback devices] **************************************** 2026-04-08 03:08:10.240407 | instance | Wednesday 08 April 2026 03:08:10 +0000 (0:00:00.578) 0:00:50.026 ******* 2026-04-08 03:08:10.553813 | instance | ok: [instance] 2026-04-08 03:08:10.554368 | instance | 2026-04-08 03:08:10.554404 | instance | TASK [Fail if there is any existing loopback devices] ************************** 2026-04-08 03:08:10.554411 | instance | Wednesday 08 April 2026 03:08:10 +0000 (0:00:00.314) 0:00:50.340 ******* 2026-04-08 03:08:10.576709 | instance | skipping: [instance] 2026-04-08 03:08:10.577223 | instance | 2026-04-08 03:08:10.577258 | instance | TASK [Create devices for Ceph] ************************************************* 2026-04-08 03:08:10.577266 | instance | Wednesday 08 April 2026 03:08:10 +0000 (0:00:00.022) 0:00:50.363 ******* 2026-04-08 03:08:11.444284 | instance | changed: [instance] => (item=osd0) 2026-04-08 03:08:11.444443 | instance | changed: [instance] => (item=osd1) 2026-04-08 03:08:11.444690 | instance | changed: [instance] => (item=osd2) 2026-04-08 03:08:11.444886 | instance | 2026-04-08 03:08:11.445091 | instance | TASK [Set permissions on loopback devices] ************************************* 2026-04-08 03:08:11.445294 | instance | Wednesday 08 April 2026 03:08:11 +0000 (0:00:00.867) 0:00:51.230 ******* 2026-04-08 03:08:12.349903 | instance | changed: [instance] => (item=osd0) 2026-04-08 03:08:12.350657 | instance | changed: [instance] => (item=osd1) 2026-04-08 03:08:12.350715 | instance | changed: [instance] => (item=osd2) 2026-04-08 
03:08:12.350725 | instance | 2026-04-08 03:08:12.350734 | instance | TASK [Start loop devices] ****************************************************** 2026-04-08 03:08:12.350751 | instance | Wednesday 08 April 2026 03:08:12 +0000 (0:00:00.905) 0:00:52.135 ******* 2026-04-08 03:08:13.421093 | instance | changed: [instance] => (item=osd0) 2026-04-08 03:08:13.421185 | instance | changed: [instance] => (item=osd1) 2026-04-08 03:08:13.421291 | instance | changed: [instance] => (item=osd2) 2026-04-08 03:08:13.421645 | instance | 2026-04-08 03:08:13.421688 | instance | TASK [Create a volume group for each loop device] ****************************** 2026-04-08 03:08:13.421694 | instance | Wednesday 08 April 2026 03:08:13 +0000 (0:00:01.071) 0:00:53.207 ******* 2026-04-08 03:08:16.918685 | instance | changed: [instance] => (item=osd0) 2026-04-08 03:08:16.920293 | instance | changed: [instance] => (item=osd1) 2026-04-08 03:08:16.920419 | instance | changed: [instance] => (item=osd2) 2026-04-08 03:08:16.920451 | instance | 2026-04-08 03:08:16.920467 | instance | TASK [Create a logical volume for each loop device] **************************** 2026-04-08 03:08:16.920480 | instance | Wednesday 08 April 2026 03:08:16 +0000 (0:00:03.496) 0:00:56.704 ******* 2026-04-08 03:08:19.290723 | instance | changed: [instance] => (item=ceph-instance-osd0) 2026-04-08 03:08:19.291019 | instance | changed: [instance] => (item=ceph-instance-osd1) 2026-04-08 03:08:19.291032 | instance | changed: [instance] => (item=ceph-instance-osd2) 2026-04-08 03:08:19.291048 | instance | 2026-04-08 03:08:19.291059 | instance | PLAY [controllers] ************************************************************* 2026-04-08 03:08:19.291069 | instance | 2026-04-08 03:08:19.291082 | instance | TASK [Gathering Facts] ********************************************************* 2026-04-08 03:08:19.291210 | instance | Wednesday 08 April 2026 03:08:19 +0000 (0:00:02.372) 0:00:59.076 ******* 2026-04-08 03:08:20.338719 | 
instance | ok: [instance] 2026-04-08 03:08:20.339465 | instance | 2026-04-08 03:08:20.339530 | instance | TASK [Set masquerade rule] ***************************************************** 2026-04-08 03:08:20.339541 | instance | Wednesday 08 April 2026 03:08:20 +0000 (0:00:01.047) 0:01:00.124 ******* 2026-04-08 03:08:20.907494 | instance | changed: [instance] 2026-04-08 03:08:20.907547 | instance | 2026-04-08 03:08:20.907558 | instance | PLAY RECAP ********************************************************************* 2026-04-08 03:08:20.907569 | instance | instance : ok=26 changed=11 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 2026-04-08 03:08:20.907596 | instance | localhost : ok=40 changed=21 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 2026-04-08 03:08:20.907705 | instance | 2026-04-08 03:08:20.907949 | instance | Wednesday 08 April 2026 03:08:20 +0000 (0:00:00.568) 0:01:00.693 ******* 2026-04-08 03:08:20.908148 | instance | =============================================================================== 2026-04-08 03:08:20.908350 | instance | Install depedencies ---------------------------------------------------- 25.48s 2026-04-08 03:08:20.908558 | instance | Create a volume group for each loop device ------------------------------ 3.50s 2026-04-08 03:08:20.908760 | instance | Generate SSH keys for missing variables --------------------------------- 3.23s 2026-04-08 03:08:20.908960 | instance | Create a logical volume for each loop device ---------------------------- 2.37s 2026-04-08 03:08:20.909160 | instance | Install "dirmngr" for GPG keyserver operations -------------------------- 1.29s 2026-04-08 03:08:20.909358 | instance | Gathering Facts --------------------------------------------------------- 1.28s 2026-04-08 03:08:20.909605 | instance | Create folders for workspace -------------------------------------------- 1.20s 2026-04-08 03:08:20.909806 | instance | Start loop devices ------------------------------------------------------ 1.07s 
2026-04-08 03:08:20.910005 | instance | Gathering Facts --------------------------------------------------------- 1.05s 2026-04-08 03:08:20.910204 | instance | Set permissions on loopback devices ------------------------------------- 0.91s 2026-04-08 03:08:20.910401 | instance | Gathering Facts --------------------------------------------------------- 0.88s 2026-04-08 03:08:20.910624 | instance | Create devices for Ceph ------------------------------------------------- 0.87s 2026-04-08 03:08:20.910825 | instance | Gathering Facts --------------------------------------------------------- 0.84s 2026-04-08 03:08:20.911025 | instance | Generate endpoint skeleton for missing variables ------------------------ 0.81s 2026-04-08 03:08:20.911222 | instance | Gathering Facts --------------------------------------------------------- 0.80s 2026-04-08 03:08:20.911421 | instance | Configure short hostname ------------------------------------------------ 0.78s 2026-04-08 03:08:20.911618 | instance | Purge "snapd" package --------------------------------------------------- 0.77s 2026-04-08 03:08:20.911820 | instance | Gathering Facts --------------------------------------------------------- 0.74s 2026-04-08 03:08:20.912018 | instance | Start up service -------------------------------------------------------- 0.66s 2026-04-08 03:08:20.912215 | instance | Bring up interfaces ----------------------------------------------------- 0.64s 2026-04-08 03:08:20.983521 | instance | INFO [aio > prepare] Executed: Successful 2026-04-08 03:08:20.997038 | instance | INFO [aio > converge] Executing 2026-04-08 03:08:23.810517 | instance | 2026-04-08 03:08:23.810791 | instance | PLAY [all] ********************************************************************* 2026-04-08 03:08:23.810984 | instance | 2026-04-08 03:08:23.811178 | instance | TASK [Gathering Facts] ********************************************************* 2026-04-08 03:08:23.811372 | instance | Wednesday 08 April 2026 03:08:23 +0000 
(0:00:00.020) 0:00:00.020 ******* 2026-04-08 03:08:25.125223 | instance | [WARNING]: Platform linux on host instance is using the discovered Python 2026-04-08 03:08:25.125617 | instance | interpreter at /usr/bin/python3.10, but future installation of another Python 2026-04-08 03:08:25.125952 | instance | interpreter could change the meaning of that path. See 2026-04-08 03:08:25.126280 | instance | https://docs.ansible.com/ansible- 2026-04-08 03:08:25.126647 | instance | core/2.17/reference_appendices/interpreter_discovery.html for more information. 2026-04-08 03:08:25.137500 | instance | ok: [instance] 2026-04-08 03:08:25.137846 | instance | 2026-04-08 03:08:25.138196 | instance | TASK [Fail if atmosphere_ceph_enabled is set] ********************************** 2026-04-08 03:08:25.138545 | instance | Wednesday 08 April 2026 03:08:25 +0000 (0:00:01.327) 0:00:01.347 ******* 2026-04-08 03:08:25.180676 | instance | skipping: [instance] 2026-04-08 03:08:25.181018 | instance | 2026-04-08 03:08:25.181364 | instance | TASK [Set a fact with the "atmosphere_images" for other plays] ***************** 2026-04-08 03:08:25.181736 | instance | Wednesday 08 April 2026 03:08:25 +0000 (0:00:00.043) 0:00:01.390 ******* 2026-04-08 03:08:25.365170 | instance | ok: [instance] 2026-04-08 03:08:25.365421 | instance | 2026-04-08 03:08:25.365637 | instance | PLAY [Deploy Ceph monitors & managers] ***************************************** 2026-04-08 03:08:25.365838 | instance | 2026-04-08 03:08:25.366046 | instance | TASK [Gathering Facts] ********************************************************* 2026-04-08 03:08:25.366254 | instance | Wednesday 08 April 2026 03:08:25 +0000 (0:00:00.184) 0:00:01.575 ******* 2026-04-08 03:08:26.414532 | instance | ok: [instance] 2026-04-08 03:08:26.414976 | instance | 2026-04-08 03:08:26.415375 | instance | TASK [vexxhost.containers.forget_package : Forget package] ********************* 2026-04-08 03:08:26.415720 | instance | Wednesday 08 April 2026 03:08:26 
+0000 (0:00:01.048) 0:00:02.624 ******* 2026-04-08 03:08:26.831173 | instance | ok: [instance] 2026-04-08 03:08:26.831233 | instance | 2026-04-08 03:08:26.831332 | instance | TASK [vexxhost.containers.package : Update state for tar] ********************** 2026-04-08 03:08:26.831496 | instance | Wednesday 08 April 2026 03:08:26 +0000 (0:00:00.413) 0:00:03.037 ******* 2026-04-08 03:08:26.883152 | instance | skipping: [instance] 2026-04-08 03:08:26.883219 | instance | 2026-04-08 03:08:26.883322 | instance | TASK [vexxhost.containers.directory : Create directory (/var/lib/downloads)] *** 2026-04-08 03:08:26.883481 | instance | Wednesday 08 April 2026 03:08:26 +0000 (0:00:00.051) 0:00:03.089 ******* 2026-04-08 03:08:27.323190 | instance | changed: [instance] 2026-04-08 03:08:27.323281 | instance | 2026-04-08 03:08:27.323292 | instance | TASK [vexxhost.containers.download_artifact : Starting download of file] ******* 2026-04-08 03:08:27.323425 | instance | Wednesday 08 April 2026 03:08:27 +0000 (0:00:00.440) 0:00:03.529 ******* 2026-04-08 03:08:27.402647 | instance | ok: [instance] => { 2026-04-08 03:08:27.402830 | instance | "msg": "https://github.com/opencontainers/runc/releases/download/v1.4.0/runc.amd64" 2026-04-08 03:08:27.402978 | instance | } 2026-04-08 03:08:27.403115 | instance | 2026-04-08 03:08:27.403407 | instance | TASK [vexxhost.containers.download_artifact : Download item] ******************* 2026-04-08 03:08:27.403590 | instance | Wednesday 08 April 2026 03:08:27 +0000 (0:00:00.080) 0:00:03.609 ******* 2026-04-08 03:08:28.155883 | instance | changed: [instance] 2026-04-08 03:08:28.155951 | instance | 2026-04-08 03:08:28.155963 | instance | TASK [vexxhost.containers.download_artifact : Extract archive] ***************** 2026-04-08 03:08:28.155974 | instance | Wednesday 08 April 2026 03:08:28 +0000 (0:00:00.754) 0:00:04.364 ******* 2026-04-08 03:08:28.210045 | instance | skipping: [instance] 2026-04-08 03:08:28.210138 | instance | 2026-04-08 03:08:28.210150 
| instance | TASK [vexxhost.containers.package : Update state for tar] ********************** 2026-04-08 03:08:28.210160 | instance | Wednesday 08 April 2026 03:08:28 +0000 (0:00:00.054) 0:00:04.418 ******* 2026-04-08 03:08:28.263634 | instance | skipping: [instance] 2026-04-08 03:08:28.263691 | instance | 2026-04-08 03:08:28.263703 | instance | TASK [vexxhost.containers.forget_package : Forget package] ********************* 2026-04-08 03:08:28.263713 | instance | Wednesday 08 April 2026 03:08:28 +0000 (0:00:00.053) 0:00:04.472 ******* 2026-04-08 03:08:28.618280 | instance | ok: [instance] 2026-04-08 03:08:28.618669 | instance | 2026-04-08 03:08:28.619005 | instance | TASK [vexxhost.containers.package : Update state for tar] ********************** 2026-04-08 03:08:28.619327 | instance | Wednesday 08 April 2026 03:08:28 +0000 (0:00:00.355) 0:00:04.828 ******* 2026-04-08 03:08:29.945061 | instance | ok: [instance] 2026-04-08 03:08:29.945468 | instance | 2026-04-08 03:08:29.945815 | instance | TASK [vexxhost.containers.download_artifact : Starting download of file] ******* 2026-04-08 03:08:29.946151 | instance | Wednesday 08 April 2026 03:08:29 +0000 (0:00:01.325) 0:00:06.154 ******* 2026-04-08 03:08:30.009985 | instance | ok: [instance] => { 2026-04-08 03:08:30.010523 | instance | "msg": "https://github.com/containerd/containerd/releases/download/v2.2.0/containerd-2.2.0-linux-amd64.tar.gz" 2026-04-08 03:08:30.010909 | instance | } 2026-04-08 03:08:30.011236 | instance | 2026-04-08 03:08:30.011586 | instance | TASK [vexxhost.containers.download_artifact : Download item] ******************* 2026-04-08 03:08:30.011938 | instance | Wednesday 08 April 2026 03:08:30 +0000 (0:00:00.065) 0:00:06.219 ******* 2026-04-08 03:08:30.759266 | instance | changed: [instance] 2026-04-08 03:08:30.759348 | instance | 2026-04-08 03:08:30.759593 | instance | TASK [vexxhost.containers.download_artifact : Extract archive] ***************** 2026-04-08 03:08:30.759609 | instance | Wednesday 
08 April 2026 03:08:30 +0000 (0:00:00.750) 0:00:06.969 ******* 2026-04-08 03:08:33.870336 | instance | changed: [instance] 2026-04-08 03:08:33.870908 | instance | 2026-04-08 03:08:33.871046 | instance | TASK [vexxhost.containers.containerd : Install SELinux packages] *************** 2026-04-08 03:08:33.871750 | instance | Wednesday 08 April 2026 03:08:33 +0000 (0:00:03.109) 0:00:10.079 ******* 2026-04-08 03:08:33.903911 | instance | skipping: [instance] 2026-04-08 03:08:33.903972 | instance | 2026-04-08 03:08:33.904291 | instance | TASK [vexxhost.containers.containerd : Set SELinux to permissive at runtime] *** 2026-04-08 03:08:33.904309 | instance | Wednesday 08 April 2026 03:08:33 +0000 (0:00:00.034) 0:00:10.113 ******* 2026-04-08 03:08:33.936099 | instance | skipping: [instance] 2026-04-08 03:08:33.936487 | instance | 2026-04-08 03:08:33.936506 | instance | TASK [vexxhost.containers.containerd : Persist SELinux permissive mode] ******** 2026-04-08 03:08:33.936513 | instance | Wednesday 08 April 2026 03:08:33 +0000 (0:00:00.032) 0:00:10.145 ******* 2026-04-08 03:08:33.967839 | instance | skipping: [instance] 2026-04-08 03:08:33.967956 | instance | 2026-04-08 03:08:33.968091 | instance | TASK [vexxhost.containers.containerd : Install AppArmor packages] ************** 2026-04-08 03:08:33.968285 | instance | Wednesday 08 April 2026 03:08:33 +0000 (0:00:00.031) 0:00:10.177 ******* 2026-04-08 03:08:39.242877 | instance | changed: [instance] 2026-04-08 03:08:39.242953 | instance | 2026-04-08 03:08:39.243188 | instance | TASK [vexxhost.containers.containerd : Create systemd service file for containerd] *** 2026-04-08 03:08:39.243222 | instance | Wednesday 08 April 2026 03:08:39 +0000 (0:00:05.274) 0:00:15.452 ******* 2026-04-08 03:08:39.966914 | instance | changed: [instance] 2026-04-08 03:08:39.966982 | instance | 2026-04-08 03:08:39.967225 | instance | TASK [vexxhost.containers.containerd : Create folders for configuration] ******* 2026-04-08 03:08:39.967293 | 
instance | Wednesday 08 April 2026 03:08:39 +0000 (0:00:00.724) 0:00:16.176 *******
2026-04-08 03:08:41.429580 | instance | changed: [instance] => (item={'path': '/etc/containerd'})
2026-04-08 03:08:41.429663 | instance | changed: [instance] => (item={'path': '/var/lib/containerd', 'mode': '0o700'})
2026-04-08 03:08:41.430373 | instance | changed: [instance] => (item={'path': '/run/containerd', 'mode': '0o711'})
2026-04-08 03:08:41.430415 | instance | changed: [instance] => (item={'path': '/run/containerd/io.containerd.grpc.v1.cri', 'mode': '0o700'})
2026-04-08 03:08:41.430421 | instance | changed: [instance] => (item={'path': '/run/containerd/io.containerd.sandbox.controller.v1.shim', 'mode': '0o700'})
2026-04-08 03:08:41.430426 | instance |
2026-04-08 03:08:41.430431 | instance | TASK [vexxhost.containers.containerd : Create containerd config file] **********
2026-04-08 03:08:41.430435 | instance | Wednesday 08 April 2026 03:08:41 +0000 (0:00:01.462) 0:00:17.639 *******
2026-04-08 03:08:42.113357 | instance | changed: [instance]
2026-04-08 03:08:42.114273 | instance |
2026-04-08 03:08:42.114335 | instance | TASK [vexxhost.containers.containerd : Force any restarts if necessary] ********
2026-04-08 03:08:42.114341 | instance | Wednesday 08 April 2026 03:08:42 +0000 (0:00:00.674) 0:00:18.313 *******
2026-04-08 03:08:42.114346 | instance |
2026-04-08 03:08:42.114350 | instance | RUNNING HANDLER [vexxhost.containers.containerd : Reload systemd] **************
2026-04-08 03:08:42.114354 | instance | Wednesday 08 April 2026 03:08:42 +0000 (0:00:00.008) 0:00:18.322 *******
2026-04-08 03:08:43.179908 | instance | ok: [instance]
2026-04-08 03:08:43.180028 | instance |
2026-04-08 03:08:43.180156 | instance | RUNNING HANDLER [vexxhost.containers.containerd : Restart containerd] **********
2026-04-08 03:08:43.180352 | instance | Wednesday 08 April 2026 03:08:43 +0000 (0:00:01.067) 0:00:19.389 *******
2026-04-08 03:08:43.726371 | instance | changed: [instance]
2026-04-08 03:08:43.726461 | instance |
2026-04-08 03:08:43.726735 | instance | TASK [vexxhost.containers.containerd : Enable and start service] ***************
2026-04-08 03:08:43.727126 | instance | Wednesday 08 April 2026 03:08:43 +0000 (0:00:00.546) 0:00:19.936 *******
2026-04-08 03:08:44.410517 | instance | changed: [instance]
2026-04-08 03:08:44.410644 | instance |
2026-04-08 03:08:44.410905 | instance | TASK [vexxhost.containers.forget_package : Forget package] *********************
2026-04-08 03:08:44.410967 | instance | Wednesday 08 April 2026 03:08:44 +0000 (0:00:00.683) 0:00:20.620 *******
2026-04-08 03:08:44.737709 | instance | ok: [instance]
2026-04-08 03:08:44.737815 | instance |
2026-04-08 03:08:44.737824 | instance | TASK [vexxhost.containers.download_artifact : Starting download of file] *******
2026-04-08 03:08:44.737985 | instance | Wednesday 08 April 2026 03:08:44 +0000 (0:00:00.327) 0:00:20.947 *******
2026-04-08 03:08:44.798409 | instance | ok: [instance] => {
2026-04-08 03:08:44.798501 | instance |     "msg": "https://download.docker.com/linux/static/stable/x86_64/docker-24.0.9.tgz"
2026-04-08 03:08:44.798614 | instance | }
2026-04-08 03:08:44.798848 | instance |
2026-04-08 03:08:44.798940 | instance | TASK [vexxhost.containers.download_artifact : Download item] *******************
2026-04-08 03:08:44.798952 | instance | Wednesday 08 April 2026 03:08:44 +0000 (0:00:00.060) 0:00:21.008 *******
2026-04-08 03:08:45.740636 | instance | changed: [instance]
2026-04-08 03:08:45.740756 | instance |
2026-04-08 03:08:45.741038 | instance | TASK [vexxhost.containers.download_artifact : Extract archive] *****************
2026-04-08 03:08:45.741083 | instance | Wednesday 08 April 2026 03:08:45 +0000 (0:00:00.942) 0:00:21.950 *******
2026-04-08 03:08:50.199446 | instance | changed: [instance]
2026-04-08 03:08:50.199548 | instance |
2026-04-08 03:08:50.199627 | instance | TASK [vexxhost.containers.docker : Install AppArmor packages] ******************
2026-04-08 03:08:50.199792 | instance | Wednesday 08 April 2026 03:08:50 +0000 (0:00:04.458) 0:00:26.409 *******
2026-04-08 03:08:51.403388 | instance | ok: [instance]
2026-04-08 03:08:51.403497 | instance |
2026-04-08 03:08:51.403772 | instance | TASK [vexxhost.containers.docker : Ensure group "docker" exists] ***************
2026-04-08 03:08:51.403845 | instance | Wednesday 08 April 2026 03:08:51 +0000 (0:00:01.203) 0:00:27.612 *******
2026-04-08 03:08:51.911279 | instance | changed: [instance]
2026-04-08 03:08:51.911391 | instance |
2026-04-08 03:08:51.911401 | instance | TASK [vexxhost.containers.docker : Create systemd service file for docker] *****
2026-04-08 03:08:51.911572 | instance | Wednesday 08 April 2026 03:08:51 +0000 (0:00:00.508) 0:00:28.121 *******
2026-04-08 03:08:52.509556 | instance | changed: [instance]
2026-04-08 03:08:52.509673 | instance |
2026-04-08 03:08:52.509773 | instance | TASK [vexxhost.containers.docker : Create folders for configuration] ***********
2026-04-08 03:08:52.509898 | instance | Wednesday 08 April 2026 03:08:52 +0000 (0:00:00.910) 0:00:28.719 *******
2026-04-08 03:08:53.420074 | instance | changed: [instance] => (item={'path': '/etc/docker'})
2026-04-08 03:08:53.420152 | instance | changed: [instance] => (item={'path': '/var/lib/docker', 'mode': '0o710'})
2026-04-08 03:08:53.420675 | instance | changed: [instance] => (item={'path': '/run/docker', 'mode': '0o711'})
2026-04-08 03:08:53.420739 | instance |
2026-04-08 03:08:53.420745 | instance | TASK [vexxhost.containers.docker : Create systemd socket file for docker] ******
2026-04-08 03:08:53.420751 | instance | Wednesday 08 April 2026 03:08:53 +0000 (0:00:00.910) 0:00:29.629 *******
2026-04-08 03:08:54.011927 | instance | changed: [instance]
2026-04-08 03:08:54.011994 | instance |
2026-04-08 03:08:54.012296 | instance | TASK [vexxhost.containers.docker : Create docker daemon config file] ***********
2026-04-08 03:08:54.012516 | instance | Wednesday 08 April 2026 03:08:54 +0000 (0:00:00.592) 0:00:30.222 *******
2026-04-08 03:08:54.571869 | instance | changed: [instance]
2026-04-08 03:08:54.571986 | instance |
2026-04-08 03:08:54.572688 | instance | TASK [vexxhost.containers.docker : Force any restarts if necessary] ************
2026-04-08 03:08:54.572756 | instance | Wednesday 08 April 2026 03:08:54 +0000 (0:00:00.551) 0:00:30.773 *******
2026-04-08 03:08:54.572763 | instance |
2026-04-08 03:08:54.572768 | instance | RUNNING HANDLER [vexxhost.containers.containerd : Reload systemd] **************
2026-04-08 03:08:54.572772 | instance | Wednesday 08 April 2026 03:08:54 +0000 (0:00:00.008) 0:00:30.781 *******
2026-04-08 03:08:55.351602 | instance | ok: [instance]
2026-04-08 03:08:55.351740 | instance |
2026-04-08 03:08:55.351752 | instance | RUNNING HANDLER [vexxhost.containers.docker : Restart docker] ******************
2026-04-08 03:08:55.351889 | instance | Wednesday 08 April 2026 03:08:55 +0000 (0:00:00.779) 0:00:31.561 *******
2026-04-08 03:08:56.453158 | instance | changed: [instance]
2026-04-08 03:08:56.453254 | instance |
2026-04-08 03:08:56.453272 | instance | TASK [vexxhost.containers.docker : Enable and start service] *******************
2026-04-08 03:08:56.453434 | instance | Wednesday 08 April 2026 03:08:56 +0000 (0:00:01.101) 0:00:32.663 *******
2026-04-08 03:08:57.127176 | instance | changed: [instance]
2026-04-08 03:08:57.127311 | instance |
2026-04-08 03:08:57.127325 | instance | TASK [vexxhost.ceph.cephadm : Gather variables for each operating system] ******
2026-04-08 03:08:57.127460 | instance | Wednesday 08 April 2026 03:08:57 +0000 (0:00:00.673) 0:00:33.337 *******
2026-04-08 03:08:57.196617 | instance | ok: [instance] => (item=/home/zuul/.ansible/collections/ansible_collections/vexxhost/ceph/roles/cephadm/vars/ubuntu-22.04.yml)
2026-04-08 03:08:57.196700 | instance |
2026-04-08 03:08:57.197002 | instance | TASK [vexxhost.ceph.cephadm : Install packages] ********************************
2026-04-08 03:08:57.197070 | instance | Wednesday 08 April 2026 03:08:57 +0000 (0:00:00.068) 0:00:33.405 *******
2026-04-08 03:09:04.564745 | instance | changed: [instance]
2026-04-08 03:09:04.565075 | instance |
2026-04-08 03:09:04.565088 | instance | TASK [vexxhost.ceph.cephadm : Ensure services are started] *********************
2026-04-08 03:09:04.565098 | instance | Wednesday 08 April 2026 03:09:04 +0000 (0:00:07.368) 0:00:40.774 *******
2026-04-08 03:09:05.431887 | instance | ok: [instance] => (item=chronyd)
2026-04-08 03:09:05.432018 | instance | ok: [instance] => (item=sshd)
2026-04-08 03:09:05.432440 | instance |
2026-04-08 03:09:05.432482 | instance | TASK [vexxhost.ceph.cephadm : Download "cephadm"] ******************************
2026-04-08 03:09:05.432487 | instance | Wednesday 08 April 2026 03:09:05 +0000 (0:00:00.867) 0:00:41.642 *******
2026-04-08 03:09:05.870298 | instance | changed: [instance]
2026-04-08 03:09:05.870376 | instance |
2026-04-08 03:09:05.870704 | instance | TASK [vexxhost.ceph.cephadm : Remove cephadm from old path] ********************
2026-04-08 03:09:05.870767 | instance | Wednesday 08 April 2026 03:09:05 +0000 (0:00:00.438) 0:00:42.080 *******
2026-04-08 03:09:06.193513 | instance | ok: [instance]
2026-04-08 03:09:06.193590 | instance |
2026-04-08 03:09:06.193854 | instance | TASK [vexxhost.ceph.cephadm : Ensure "cephadm" user is present] ****************
2026-04-08 03:09:06.193897 | instance | Wednesday 08 April 2026 03:09:06 +0000 (0:00:00.323) 0:00:42.403 *******
2026-04-08 03:09:06.837070 | instance | changed: [instance]
2026-04-08 03:09:06.837156 | instance |
2026-04-08 03:09:06.837394 | instance | TASK [vexxhost.ceph.cephadm : Allow "cephadm" user to have passwordless sudo] ***
2026-04-08 03:09:06.837435 | instance | Wednesday 08 April 2026 03:09:06 +0000 (0:00:00.643) 0:00:43.047 *******
2026-04-08 03:09:07.345136 | instance | changed: [instance]
2026-04-08 03:09:07.345512 | instance |
2026-04-08 03:09:07.345552 | instance | TASK [vexxhost.ceph.mon : Get `cephadm ls` status] *****************************
2026-04-08 03:09:07.345557 | instance | Wednesday 08 April 2026 03:09:07 +0000 (0:00:00.508) 0:00:43.555 *******
2026-04-08 03:09:09.131838 | instance | ok: [instance]
2026-04-08 03:09:09.131908 | instance |
2026-04-08 03:09:09.132202 | instance | TASK [vexxhost.ceph.mon : Parse the `cephadm ls` output] ***********************
2026-04-08 03:09:09.132556 | instance | Wednesday 08 April 2026 03:09:09 +0000 (0:00:01.786) 0:00:45.341 *******
2026-04-08 03:09:09.183787 | instance | ok: [instance]
2026-04-08 03:09:09.183853 | instance |
2026-04-08 03:09:09.184167 | instance | TASK [vexxhost.ceph.mon : Assimilate existing configs in `ceph.conf`] **********
2026-04-08 03:09:09.184218 | instance | Wednesday 08 April 2026 03:09:09 +0000 (0:00:00.051) 0:00:45.393 *******
2026-04-08 03:09:09.228591 | instance | skipping: [instance]
2026-04-08 03:09:09.228998 | instance |
2026-04-08 03:09:09.229065 | instance | TASK [vexxhost.ceph.mon : Adopt monitor to cluster] ****************************
2026-04-08 03:09:09.229072 | instance | Wednesday 08 April 2026 03:09:09 +0000 (0:00:00.044) 0:00:45.438 *******
2026-04-08 03:09:09.266818 | instance | skipping: [instance]
2026-04-08 03:09:09.266903 | instance |
2026-04-08 03:09:09.266934 | instance | TASK [vexxhost.ceph.mon : Adopt manager to cluster] ****************************
2026-04-08 03:09:09.267115 | instance | Wednesday 08 April 2026 03:09:09 +0000 (0:00:00.038) 0:00:45.476 *******
2026-04-08 03:09:09.303434 | instance | skipping: [instance]
2026-04-08 03:09:09.303848 | instance |
2026-04-08 03:09:09.303903 | instance | TASK [vexxhost.ceph.mon : Enable "cephadm" mgr module] *************************
2026-04-08 03:09:09.303910 | instance | Wednesday 08 April 2026 03:09:09 +0000 (0:00:00.036) 0:00:45.513 *******
2026-04-08 03:09:09.344113 | instance | skipping: [instance]
2026-04-08 03:09:09.344263 | instance |
2026-04-08 03:09:09.344491 | instance | TASK [vexxhost.ceph.mon : Set orchestrator backend to "cephadm"] ***************
2026-04-08 03:09:09.344649 | instance | Wednesday 08 April 2026 03:09:09 +0000 (0:00:00.040) 0:00:45.553 *******
2026-04-08 03:09:09.392912 | instance | skipping: [instance]
2026-04-08 03:09:09.393080 | instance |
2026-04-08 03:09:09.393345 | instance | TASK [vexxhost.ceph.mon : Use `cephadm` user for cephadm] **********************
2026-04-08 03:09:09.393566 | instance | Wednesday 08 April 2026 03:09:09 +0000 (0:00:00.048) 0:00:45.602 *******
2026-04-08 03:09:09.435133 | instance | skipping: [instance]
2026-04-08 03:09:09.435269 | instance |
2026-04-08 03:09:09.435468 | instance | TASK [vexxhost.ceph.mon : Generate "cephadm" key] ******************************
2026-04-08 03:09:09.435621 | instance | Wednesday 08 April 2026 03:09:09 +0000 (0:00:00.039) 0:00:45.642 *******
2026-04-08 03:09:09.475229 | instance | skipping: [instance]
2026-04-08 03:09:09.475353 | instance |
2026-04-08 03:09:09.475553 | instance | TASK [vexxhost.ceph.mon : Set Ceph Monitor IP address] *************************
2026-04-08 03:09:09.475707 | instance | Wednesday 08 April 2026 03:09:09 +0000 (0:00:00.041) 0:00:45.683 *******
2026-04-08 03:09:09.615111 | instance | ok: [instance]
2026-04-08 03:09:09.615307 | instance |
2026-04-08 03:09:09.615465 | instance | TASK [vexxhost.ceph.mon : Check if any node is bootstrapped] *******************
2026-04-08 03:09:09.615617 | instance | Wednesday 08 April 2026 03:09:09 +0000 (0:00:00.140) 0:00:45.824 *******
2026-04-08 03:09:09.937660 | instance | ok: [instance] => (item=instance)
2026-04-08 03:09:09.937731 | instance |
2026-04-08 03:09:09.937962 | instance | TASK [vexxhost.ceph.mon : Select pre-existing bootstrap node if exists] ********
2026-04-08 03:09:09.938003 | instance | Wednesday 08 April 2026 03:09:09 +0000 (0:00:00.323) 0:00:46.147 *******
2026-04-08 03:09:09.991475 | instance | ok: [instance]
2026-04-08 03:09:09.991551 | instance |
2026-04-08 03:09:09.991848 | instance | TASK [vexxhost.ceph.mon : Bootstrap cluster] ***********************************
2026-04-08 03:09:09.991911 | instance | Wednesday 08 April 2026 03:09:09 +0000 (0:00:00.053) 0:00:46.201 *******
2026-04-08 03:09:10.062469 | instance | included: /home/zuul/.ansible/collections/ansible_collections/vexxhost/ceph/roles/mon/tasks/bootstrap-ceph.yml for instance
2026-04-08 03:09:10.062547 | instance |
2026-04-08 03:09:10.062808 | instance | TASK [vexxhost.ceph.mon : Generate temporary file for "ceph.conf"] *************
2026-04-08 03:09:10.062898 | instance | Wednesday 08 April 2026 03:09:10 +0000 (0:00:00.071) 0:00:46.272 *******
2026-04-08 03:09:10.625994 | instance | changed: [instance]
2026-04-08 03:09:10.626106 | instance |
2026-04-08 03:09:10.626460 | instance | TASK [vexxhost.ceph.mon : Include extra configuration values] ******************
2026-04-08 03:09:10.626524 | instance | Wednesday 08 April 2026 03:09:10 +0000 (0:00:00.563) 0:00:46.835 *******
2026-04-08 03:09:11.681327 | instance | changed: [instance] => (item={'option': 'mon allow pool size one', 'section': 'global', 'value': True})
2026-04-08 03:09:11.681427 | instance | changed: [instance] => (item={'option': 'osd crush chooseleaf type', 'section': 'global', 'value': 0})
2026-04-08 03:09:11.681904 | instance | changed: [instance] => (item={'option': 'auth allow insecure global id reclaim', 'section': 'mon', 'value': False})
2026-04-08 03:09:11.681920 | instance |
2026-04-08 03:09:11.681926 | instance | TASK [vexxhost.ceph.mon : Run Bootstrap coomand] *******************************
2026-04-08 03:09:11.681931 | instance | Wednesday 08 April 2026 03:09:11 +0000 (0:00:01.056) 0:00:47.891 *******
2026-04-08 03:11:15.099257 | instance | ok: [instance]
2026-04-08 03:11:15.099306 | instance |
2026-04-08 03:11:15.099312 | instance | TASK [vexxhost.ceph.mon : Remove temporary file for "ceph.conf"] ***************
2026-04-08 03:11:15.099319 | instance | Wednesday 08 April 2026 03:11:15 +0000 (0:02:03.416) 0:02:51.308 *******
2026-04-08 03:11:15.427572 | instance | changed: [instance]
2026-04-08 03:11:15.427661 | instance |
2026-04-08 03:11:15.427955 | instance | TASK [vexxhost.ceph.mon : Set bootstrap node] **********************************
2026-04-08 03:11:15.427993 | instance | Wednesday 08 April 2026 03:11:15 +0000 (0:00:00.329) 0:02:51.637 *******
2026-04-08 03:11:15.465387 | instance | ok: [instance]
2026-04-08 03:11:15.465493 | instance |
2026-04-08 03:11:15.465720 | instance | TASK [Install Ceph host] *******************************************************
2026-04-08 03:11:15.465736 | instance | Wednesday 08 April 2026 03:11:15 +0000 (0:00:00.037) 0:02:51.675 *******
2026-04-08 03:11:15.543766 | instance | included: vexxhost.ceph.cephadm_host for instance
2026-04-08 03:11:15.543837 | instance |
2026-04-08 03:11:15.544095 | instance | TASK [vexxhost.ceph.cephadm_host : Get public SSH key for "cephadm" user] ******
2026-04-08 03:11:15.544111 | instance | Wednesday 08 April 2026 03:11:15 +0000 (0:00:00.078) 0:02:51.753 *******
2026-04-08 03:11:17.334655 | instance | ok: [instance]
2026-04-08 03:11:17.334735 | instance |
2026-04-08 03:11:17.335056 | instance | TASK [vexxhost.ceph.cephadm_host : Set fact with public SSH key for "cephadm" user] ***
2026-04-08 03:11:17.335128 | instance | Wednesday 08 April 2026 03:11:17 +0000 (0:00:01.790) 0:02:53.544 *******
2026-04-08 03:11:17.394040 | instance | ok: [instance] => (item=instance)
2026-04-08 03:11:17.394169 | instance |
2026-04-08 03:11:17.394178 | instance | TASK [vexxhost.ceph.cephadm_host : Set authorized key for "cephadm"] ***********
2026-04-08 03:11:17.394250 | instance | Wednesday 08 April 2026 03:11:17 +0000 (0:00:00.059) 0:02:53.603 *******
2026-04-08 03:11:17.949053 | instance | ok: [instance]
2026-04-08 03:11:17.949154 | instance |
2026-04-08 03:11:17.949374 | instance | TASK [vexxhost.ceph.cephadm_host : Add new host to Ceph] ***********************
2026-04-08 03:11:17.949423 | instance | Wednesday 08 April 2026 03:11:17 +0000 (0:00:00.555) 0:02:54.159 *******
2026-04-08 03:11:20.108640 | instance | ok: [instance]
2026-04-08 03:11:20.109097 | instance |
2026-04-08 03:11:20.109117 | instance | TASK [vexxhost.ceph.mon : Configure "mon" label for monitors] ******************
2026-04-08 03:11:20.109125 | instance | Wednesday 08 April 2026 03:11:20 +0000 (0:00:02.159) 0:02:56.318 *******
2026-04-08 03:11:21.940184 | instance | ok: [instance]
2026-04-08 03:11:21.940283 | instance |
2026-04-08 03:11:21.940515 | instance | TASK [vexxhost.ceph.mon : Validate monitor exist] ******************************
2026-04-08 03:11:21.940530 | instance | Wednesday 08 April 2026 03:11:21 +0000 (0:00:01.831) 0:02:58.150 *******
2026-04-08 03:11:32.270590 | instance | ok: [instance]
2026-04-08 03:11:32.270688 | instance |
2026-04-08 03:11:32.270876 | instance | TASK [Install Ceph host] *******************************************************
2026-04-08 03:11:32.270997 | instance | Wednesday 08 April 2026 03:11:32 +0000 (0:00:10.330) 0:03:08.480 *******
2026-04-08 03:11:32.359782 | instance | included: vexxhost.ceph.cephadm_host for instance
2026-04-08 03:11:32.359868 | instance |
2026-04-08 03:11:32.360093 | instance | TASK [vexxhost.ceph.cephadm_host : Get public SSH key for "cephadm" user] ******
2026-04-08 03:11:32.360115 | instance | Wednesday 08 April 2026 03:11:32 +0000 (0:00:00.088) 0:03:08.569 *******
2026-04-08 03:11:32.415963 | instance | skipping: [instance]
2026-04-08 03:11:32.416020 | instance |
2026-04-08 03:11:32.416293 | instance | TASK [vexxhost.ceph.cephadm_host : Set fact with public SSH key for "cephadm" user] ***
2026-04-08 03:11:32.416312 | instance | Wednesday 08 April 2026 03:11:32 +0000 (0:00:00.056) 0:03:08.625 *******
2026-04-08 03:11:32.472917 | instance | skipping: [instance] => (item=instance)
2026-04-08 03:11:32.472997 | instance | skipping: [instance]
2026-04-08 03:11:32.473083 | instance |
2026-04-08 03:11:32.473313 | instance | TASK [vexxhost.ceph.cephadm_host : Set authorized key for "cephadm"] ***********
2026-04-08 03:11:32.473329 | instance | Wednesday 08 April 2026 03:11:32 +0000 (0:00:00.057) 0:03:08.682 *******
2026-04-08 03:11:32.877722 | instance | ok: [instance]
2026-04-08 03:11:32.877809 | instance |
2026-04-08 03:11:32.878081 | instance | TASK [vexxhost.ceph.cephadm_host : Add new host to Ceph] ***********************
2026-04-08 03:11:32.878151 | instance | Wednesday 08 April 2026 03:11:32 +0000 (0:00:00.404) 0:03:09.087 *******
2026-04-08 03:11:35.014784 | instance | ok: [instance]
2026-04-08 03:11:35.014875 | instance |
2026-04-08 03:11:35.015128 | instance | TASK [vexxhost.ceph.mgr : Configure "mgr" label for managers] ******************
2026-04-08 03:11:35.015181 | instance | Wednesday 08 April 2026 03:11:35 +0000 (0:00:02.137) 0:03:11.224 *******
2026-04-08 03:11:36.727323 | instance | ok: [instance]
2026-04-08 03:11:36.727910 | instance |
2026-04-08 03:11:36.727921 | instance | TASK [vexxhost.ceph.mgr : Validate manager exist] ******************************
2026-04-08 03:11:36.728058 | instance | Wednesday 08 April 2026 03:11:36 +0000 (0:00:01.704) 0:03:12.929 *******
2026-04-08 03:11:38.560405 | instance | ok: [instance]
2026-04-08 03:11:38.560495 | instance |
2026-04-08 03:11:38.560778 | instance | TASK [vexxhost.ceph.mgr : Enable the Ceph Manager prometheus module] ***********
2026-04-08 03:11:38.560819 | instance | Wednesday 08 April 2026 03:11:38 +0000 (0:00:01.841) 0:03:14.770 *******
2026-04-08 03:11:40.996311 | instance | ok: [instance]
2026-04-08 03:11:40.996410 | instance |
2026-04-08 03:11:40.996425 | instance | PLAY [Deploy Ceph OSDs] ********************************************************
2026-04-08 03:11:40.996556 | instance |
2026-04-08 03:11:40.996674 | instance | TASK [Gathering Facts] *********************************************************
2026-04-08 03:11:40.996805 | instance | Wednesday 08 April 2026 03:11:40 +0000 (0:00:02.436) 0:03:17.206 *******
2026-04-08 03:11:42.120527 | instance | ok: [instance]
2026-04-08 03:11:42.120601 | instance |
2026-04-08 03:11:42.120712 | instance | TASK [vexxhost.containers.forget_package : Forget package] *********************
2026-04-08 03:11:42.120829 | instance | Wednesday 08 April 2026 03:11:42 +0000 (0:00:01.124) 0:03:18.330 *******
2026-04-08 03:11:42.434174 | instance | ok: [instance]
2026-04-08 03:11:42.434252 | instance |
2026-04-08 03:11:42.434339 | instance | TASK [vexxhost.containers.package : Update state for tar] **********************
2026-04-08 03:11:42.434463 | instance | Wednesday 08 April 2026 03:11:42 +0000 (0:00:00.313) 0:03:18.644 *******
2026-04-08 03:11:42.468943 | instance | skipping: [instance]
2026-04-08 03:11:42.469028 | instance |
2026-04-08 03:11:42.469111 | instance | TASK [vexxhost.containers.directory : Create directory (/var/lib/downloads)] ***
2026-04-08 03:11:42.469228 | instance | Wednesday 08 April 2026 03:11:42 +0000 (0:00:00.034) 0:03:18.679 *******
2026-04-08 03:11:42.799763 | instance | ok: [instance]
2026-04-08 03:11:42.799828 | instance |
2026-04-08 03:11:42.799948 | instance | TASK [vexxhost.containers.download_artifact : Starting download of file] *******
2026-04-08 03:11:42.800068 | instance | Wednesday 08 April 2026 03:11:42 +0000 (0:00:00.330) 0:03:19.009 *******
2026-04-08 03:11:42.861520 | instance | ok: [instance] => {
2026-04-08 03:11:42.861638 | instance |     "msg": "https://github.com/opencontainers/runc/releases/download/v1.4.0/runc.amd64"
2026-04-08 03:11:42.861822 | instance | }
2026-04-08 03:11:42.861990 | instance |
2026-04-08 03:11:42.862171 | instance | TASK [vexxhost.containers.download_artifact : Download item] *******************
2026-04-08 03:11:42.862357 | instance | Wednesday 08 April 2026 03:11:42 +0000 (0:00:00.061) 0:03:19.071 *******
2026-04-08 03:11:43.301673 | instance | ok: [instance]
2026-04-08 03:11:43.301874 | instance |
2026-04-08 03:11:43.302139 | instance | TASK [vexxhost.containers.download_artifact : Extract archive] *****************
2026-04-08 03:11:43.302376 | instance | Wednesday 08 April 2026 03:11:43 +0000 (0:00:00.439) 0:03:19.511 *******
2026-04-08 03:11:43.350069 | instance | skipping: [instance]
2026-04-08 03:11:43.350151 | instance |
2026-04-08 03:11:43.350202 | instance | TASK [vexxhost.containers.package : Update state for tar] **********************
2026-04-08 03:11:43.350343 | instance | Wednesday 08 April 2026 03:11:43 +0000 (0:00:00.048) 0:03:19.560 *******
2026-04-08 03:11:43.389052 | instance | skipping: [instance]
2026-04-08 03:11:43.389150 | instance |
2026-04-08 03:11:43.389272 | instance | TASK [vexxhost.containers.forget_package : Forget package] *********************
2026-04-08 03:11:43.389392 | instance | Wednesday 08 April 2026 03:11:43 +0000 (0:00:00.039) 0:03:19.599 *******
2026-04-08 03:11:43.696122 | instance | ok: [instance]
2026-04-08 03:11:43.696199 | instance |
2026-04-08 03:11:43.696305 | instance | TASK [vexxhost.containers.package : Update state for tar] **********************
2026-04-08 03:11:43.696424 | instance | Wednesday 08 April 2026 03:11:43 +0000 (0:00:00.306) 0:03:19.906 *******
2026-04-08 03:11:44.834606 | instance | ok: [instance]
2026-04-08 03:11:44.834751 | instance |
2026-04-08 03:11:44.834925 | instance | TASK [vexxhost.containers.download_artifact : Starting download of file] *******
2026-04-08 03:11:44.835120 | instance | Wednesday 08 April 2026 03:11:44 +0000 (0:00:01.138) 0:03:21.044 *******
2026-04-08 03:11:44.907455 | instance | ok: [instance] => {
2026-04-08 03:11:44.907544 | instance |     "msg": "https://github.com/containerd/containerd/releases/download/v2.2.0/containerd-2.2.0-linux-amd64.tar.gz"
2026-04-08 03:11:44.907558 | instance | }
2026-04-08 03:11:44.907686 | instance |
2026-04-08 03:11:44.907805 | instance | TASK [vexxhost.containers.download_artifact : Download item] *******************
2026-04-08 03:11:44.907940 | instance | Wednesday 08 April 2026 03:11:44 +0000 (0:00:00.072) 0:03:21.117 *******
2026-04-08 03:11:45.379402 | instance | ok: [instance]
2026-04-08 03:11:45.379513 | instance |
2026-04-08 03:11:45.379584 | instance | TASK [vexxhost.containers.download_artifact : Extract archive] *****************
2026-04-08 03:11:45.379766 | instance | Wednesday 08 April 2026 03:11:45 +0000 (0:00:00.472) 0:03:21.589 *******
2026-04-08 03:11:47.454553 | instance | ok: [instance]
2026-04-08 03:11:47.454655 | instance |
2026-04-08 03:11:47.454771 | instance | TASK [vexxhost.containers.containerd : Install SELinux packages] ***************
2026-04-08 03:11:47.454892 | instance | Wednesday 08 April 2026 03:11:47 +0000 (0:00:02.075) 0:03:23.664 *******
2026-04-08 03:11:47.487340 | instance | skipping: [instance]
2026-04-08 03:11:47.487397 | instance |
2026-04-08 03:11:47.487525 | instance | TASK [vexxhost.containers.containerd : Set SELinux to permissive at runtime] ***
2026-04-08 03:11:47.487639 | instance | Wednesday 08 April 2026 03:11:47 +0000 (0:00:00.032) 0:03:23.697 *******
2026-04-08 03:11:47.521724 | instance | skipping: [instance]
2026-04-08 03:11:47.521874 | instance |
2026-04-08 03:11:47.522042 | instance | TASK [vexxhost.containers.containerd : Persist SELinux permissive mode] ********
2026-04-08 03:11:47.522163 | instance | Wednesday 08 April 2026 03:11:47 +0000 (0:00:00.033) 0:03:23.731 *******
2026-04-08 03:11:47.551584 | instance | skipping: [instance]
2026-04-08 03:11:47.551704 | instance |
2026-04-08 03:11:47.551864 | instance | TASK [vexxhost.containers.containerd : Install AppArmor packages] **************
2026-04-08 03:11:47.552026 | instance | Wednesday 08 April 2026 03:11:47 +0000 (0:00:00.030) 0:03:23.761 *******
2026-04-08 03:11:48.633137 | instance | ok: [instance]
2026-04-08 03:11:48.633261 | instance |
2026-04-08 03:11:48.633390 | instance | TASK [vexxhost.containers.containerd : Create systemd service file for containerd] ***
2026-04-08 03:11:48.633507 | instance | Wednesday 08 April 2026 03:11:48 +0000 (0:00:01.081) 0:03:24.842 *******
2026-04-08 03:11:49.198428 | instance | ok: [instance]
2026-04-08 03:11:49.198540 | instance |
2026-04-08 03:11:49.198761 | instance | TASK [vexxhost.containers.containerd : Create folders for configuration] *******
2026-04-08 03:11:49.198936 | instance | Wednesday 08 April 2026 03:11:49 +0000 (0:00:00.565) 0:03:25.408 *******
2026-04-08 03:11:50.699278 | instance | ok: [instance] => (item={'path': '/etc/containerd'})
2026-04-08 03:11:50.699395 | instance | ok: [instance] => (item={'path': '/var/lib/containerd', 'mode': '0o700'})
2026-04-08 03:11:50.699471 | instance | ok: [instance] => (item={'path': '/run/containerd', 'mode': '0o711'})
2026-04-08 03:11:50.699590 | instance | ok: [instance] => (item={'path': '/run/containerd/io.containerd.grpc.v1.cri', 'mode': '0o700'})
2026-04-08 03:11:50.699717 | instance | ok: [instance] => (item={'path': '/run/containerd/io.containerd.sandbox.controller.v1.shim', 'mode': '0o700'})
2026-04-08 03:11:50.699834 | instance |
2026-04-08 03:11:50.699969 | instance | TASK [vexxhost.containers.containerd : Create containerd config file] **********
2026-04-08 03:11:50.700066 | instance | Wednesday 08 April 2026 03:11:50 +0000 (0:00:01.500) 0:03:26.909 *******
2026-04-08 03:11:51.329776 | instance | ok: [instance]
2026-04-08 03:11:51.329829 | instance |
2026-04-08 03:11:51.329936 | instance | TASK [vexxhost.containers.containerd : Force any restarts if necessary] ********
2026-04-08 03:11:51.330052 | instance | Wednesday 08 April 2026 03:11:51 +0000 (0:00:00.624) 0:03:27.533 *******
2026-04-08 03:11:51.330167 | instance |
2026-04-08 03:11:51.330282 | instance | TASK [vexxhost.containers.containerd : Enable and start service] ***************
2026-04-08 03:11:51.330388 | instance | Wednesday 08 April 2026 03:11:51 +0000 (0:00:00.006) 0:03:27.539 *******
2026-04-08 03:11:51.804399 | instance | ok: [instance]
2026-04-08 03:11:51.804515 | instance |
2026-04-08 03:11:51.804663 | instance | TASK [vexxhost.containers.forget_package : Forget package] *********************
2026-04-08 03:11:51.804808 | instance | Wednesday 08 April 2026 03:11:51 +0000 (0:00:00.474) 0:03:28.014 *******
2026-04-08 03:11:52.136399 | instance | ok: [instance]
2026-04-08 03:11:52.136492 | instance |
2026-04-08 03:11:52.136614 | instance | TASK [vexxhost.containers.download_artifact : Starting download of file] *******
2026-04-08 03:11:52.136755 | instance | Wednesday 08 April 2026 03:11:52 +0000 (0:00:00.331) 0:03:28.346 *******
2026-04-08 03:11:52.190331 | instance | ok: [instance] => {
2026-04-08 03:11:52.190412 | instance |     "msg": "https://download.docker.com/linux/static/stable/x86_64/docker-24.0.9.tgz"
2026-04-08 03:11:52.190562 | instance | }
2026-04-08 03:11:52.190745 | instance |
2026-04-08 03:11:52.190914 | instance | TASK [vexxhost.containers.download_artifact : Download item] *******************
2026-04-08 03:11:52.191086 | instance | Wednesday 08 April 2026 03:11:52 +0000 (0:00:00.053) 0:03:28.400 *******
2026-04-08 03:11:52.630059 | instance | ok: [instance]
2026-04-08 03:11:52.630173 | instance |
2026-04-08 03:11:52.630395 | instance | TASK [vexxhost.containers.download_artifact : Extract archive] *****************
2026-04-08 03:11:52.630593 | instance | Wednesday 08 April 2026 03:11:52 +0000 (0:00:00.439) 0:03:28.839 *******
2026-04-08 03:11:55.814161 | instance | ok: [instance]
2026-04-08 03:11:55.814237 | instance |
2026-04-08 03:11:55.814351 | instance | TASK [vexxhost.containers.docker : Install AppArmor packages] ******************
2026-04-08 03:11:55.814479 | instance | Wednesday 08 April 2026 03:11:55 +0000 (0:00:03.184) 0:03:32.024 *******
2026-04-08 03:11:56.880317 | instance | ok: [instance]
2026-04-08 03:11:56.880418 | instance |
2026-04-08 03:11:56.880434 | instance | TASK [vexxhost.containers.docker : Ensure group "docker" exists] ***************
2026-04-08 03:11:56.880607 | instance | Wednesday 08 April 2026 03:11:56 +0000 (0:00:01.066) 0:03:33.090 *******
2026-04-08 03:11:57.201084 | instance | ok: [instance]
2026-04-08 03:11:57.201162 | instance |
2026-04-08 03:11:57.201407 | instance | TASK [vexxhost.containers.docker : Create systemd service file for docker] *****
2026-04-08 03:11:57.201447 | instance | Wednesday 08 April 2026 03:11:57 +0000 (0:00:00.320) 0:03:33.411 *******
2026-04-08 03:11:57.767821 | instance | ok: [instance]
2026-04-08 03:11:57.767909 | instance |
2026-04-08 03:11:57.768313 | instance | TASK [vexxhost.containers.docker : Create folders for configuration] ***********
2026-04-08 03:11:57.768643 | instance | Wednesday 08 April 2026 03:11:57 +0000 (0:00:00.566) 0:03:33.977 *******
2026-04-08 03:11:58.642491 | instance | ok: [instance] => (item={'path': '/etc/docker'})
2026-04-08 03:11:58.642638 | instance | ok: [instance] => (item={'path': '/var/lib/docker', 'mode': '0o710'})
2026-04-08 03:11:58.643310 | instance | ok: [instance] => (item={'path': '/run/docker', 'mode': '0o711'})
2026-04-08 03:11:58.643395 | instance |
2026-04-08 03:11:58.643404 | instance | TASK [vexxhost.containers.docker : Create systemd socket file for docker] ******
2026-04-08 03:11:58.643411 | instance | Wednesday 08 April 2026 03:11:58 +0000 (0:00:00.874) 0:03:34.852 *******
2026-04-08 03:11:59.189782 | instance | ok: [instance]
2026-04-08 03:11:59.189896 | instance |
2026-04-08 03:11:59.190223 | instance | TASK [vexxhost.containers.docker : Create docker daemon config file] ***********
2026-04-08 03:11:59.190271 | instance | Wednesday 08 April 2026 03:11:59 +0000 (0:00:00.547) 0:03:35.399 *******
2026-04-08 03:11:59.732814 | instance | ok: [instance]
2026-04-08 03:11:59.732890 | instance |
2026-04-08 03:11:59.733574 | instance | TASK [vexxhost.containers.docker : Force any restarts if necessary] ************
2026-04-08 03:11:59.733644 | instance | Wednesday 08 April 2026 03:11:59 +0000 (0:00:00.534) 0:03:35.934 *******
2026-04-08 03:11:59.733651 | instance |
2026-04-08 03:11:59.733656 | instance | TASK [vexxhost.containers.docker : Enable and start service] *******************
2026-04-08 03:11:59.733661 | instance | Wednesday 08 April 2026 03:11:59 +0000 (0:00:00.007) 0:03:35.942 *******
2026-04-08 03:12:00.221225 | instance | ok: [instance]
2026-04-08 03:12:00.221271 | instance |
2026-04-08 03:12:00.221277 | instance | TASK [vexxhost.ceph.cephadm : Gather variables for each operating system] ******
2026-04-08 03:12:00.221282 | instance | Wednesday 08 April 2026 03:12:00 +0000 (0:00:00.488) 0:03:36.430 *******
2026-04-08 03:12:00.284160 | instance | ok: [instance] => (item=/home/zuul/.ansible/collections/ansible_collections/vexxhost/ceph/roles/cephadm/vars/ubuntu-22.04.yml)
2026-04-08 03:12:00.284199 | instance |
2026-04-08 03:12:00.284204 | instance | TASK [vexxhost.ceph.cephadm : Install packages] ********************************
2026-04-08 03:12:00.284234 | instance | Wednesday 08 April 2026 03:12:00 +0000 (0:00:00.062) 0:03:36.493 *******
2026-04-08 03:12:01.364050 | instance | ok: [instance]
2026-04-08 03:12:01.364157 | instance |
2026-04-08 03:12:01.364426 | instance | TASK [vexxhost.ceph.cephadm : Ensure services are started] *********************
2026-04-08 03:12:01.364477 | instance | Wednesday 08 April 2026 03:12:01 +0000 (0:00:01.080) 0:03:37.573 *******
2026-04-08 03:12:02.281381 | instance | ok: [instance] => (item=chronyd)
2026-04-08 03:12:02.284220 | instance | ok: [instance] => (item=sshd)
2026-04-08 03:12:02.284246 | instance |
2026-04-08 03:12:02.284257 | instance | TASK [vexxhost.ceph.cephadm : Download "cephadm"] ******************************
2026-04-08 03:12:02.284268 | instance | Wednesday 08 April 2026 03:12:02 +0000 (0:00:00.917) 0:03:38.491 *******
2026-04-08 03:12:02.702390 | instance | ok: [instance]
2026-04-08 03:12:02.702677 | instance |
2026-04-08 03:12:02.703079 | instance | TASK [vexxhost.ceph.cephadm : Remove cephadm from old path] ********************
2026-04-08 03:12:02.703093 | instance | Wednesday 08 April 2026 03:12:02 +0000 (0:00:00.420) 0:03:38.911 *******
2026-04-08 03:12:03.036361 | instance | ok: [instance]
2026-04-08 03:12:03.036448 | instance |
2026-04-08 03:12:03.036669 | instance | TASK [vexxhost.ceph.cephadm : Ensure "cephadm" user is present] ****************
2026-04-08 03:12:03.036712 | instance | Wednesday 08 April 2026 03:12:03 +0000 (0:00:00.334) 0:03:39.246 *******
2026-04-08 03:12:03.405375 | instance | ok: [instance]
2026-04-08 03:12:03.406369 | instance |
2026-04-08 03:12:03.406425 | instance | TASK [vexxhost.ceph.cephadm : Allow "cephadm" user to have passwordless sudo] ***
2026-04-08 03:12:03.406437 | instance | Wednesday 08 April 2026 03:12:03 +0000 (0:00:00.368) 0:03:39.614 *******
2026-04-08 03:12:03.729462 | instance | ok: [instance]
2026-04-08 03:12:03.729532 | instance |
2026-04-08 03:12:03.729796 | instance | TASK [vexxhost.ceph.osd : Get monitor status] **********************************
2026-04-08 03:12:03.729834 | instance | Wednesday 08 April 2026 03:12:03 +0000 (0:00:00.324) 0:03:39.939 *******
2026-04-08 03:12:04.094899 | instance | ok: [instance] => (item=instance)
2026-04-08 03:12:04.094970 | instance |
2026-04-08 03:12:04.095260 | instance | TASK [vexxhost.ceph.osd : Select admin host] ***********************************
2026-04-08 03:12:04.095301 | instance | Wednesday 08 April 2026 03:12:04 +0000 (0:00:00.365) 0:03:40.304 *******
2026-04-08 03:12:04.145683 | instance | ok: [instance]
2026-04-08 03:12:04.145769 | instance |
2026-04-08 03:12:04.146038 | instance | TASK [vexxhost.ceph.osd : Get `cephadm ls` status] *****************************
2026-04-08 03:12:04.146082 | instance | Wednesday 08 April 2026 03:12:04 +0000 (0:00:00.050) 0:03:40.355 *******
2026-04-08 03:12:09.570049 | instance | ok: [instance]
2026-04-08 03:12:09.570132 | instance |
2026-04-08 03:12:09.570375 | instance | TASK [vexxhost.ceph.osd : Parse the `cephadm ls` output] ***********************
2026-04-08 03:12:09.570416 | instance | Wednesday 08 April 2026 03:12:09 +0000 (0:00:05.424) 0:03:45.779 *******
2026-04-08 03:12:09.629410 | instance | ok: [instance]
2026-04-08 03:12:09.629746 | instance |
2026-04-08 03:12:09.629797 | instance | TASK [Install Ceph host] *******************************************************
2026-04-08 03:12:09.629816 | instance | Wednesday 08 April 2026 03:12:09 +0000 (0:00:00.058) 0:03:45.838 *******
2026-04-08 03:12:09.706105 | instance | included: vexxhost.ceph.cephadm_host for instance
2026-04-08 03:12:09.706160 | instance |
2026-04-08 03:12:09.706440 | instance | TASK [vexxhost.ceph.cephadm_host : Get public SSH key for "cephadm" user] ******
2026-04-08 03:12:09.706488 | instance | Wednesday 08 April 2026 03:12:09 +0000 (0:00:00.076) 0:03:45.915 *******
2026-04-08 03:12:09.758022 | instance | skipping: [instance]
2026-04-08 03:12:09.758123 | instance |
2026-04-08 03:12:09.758384 | instance | TASK [vexxhost.ceph.cephadm_host : Set fact with public SSH key for "cephadm" user] ***
2026-04-08 03:12:09.758411 | instance | Wednesday 08 April 2026 03:12:09 +0000 (0:00:00.052) 0:03:45.967 *******
2026-04-08 03:12:09.812991 | instance | skipping: [instance] => (item=instance)
2026-04-08 03:12:09.813168 | instance | skipping: [instance]
2026-04-08 03:12:09.813343 | instance |
2026-04-08 03:12:09.813527 | instance | TASK [vexxhost.ceph.cephadm_host : Set authorized key for "cephadm"] ***********
2026-04-08 03:12:09.813737 | instance | Wednesday 08 April 2026 03:12:09 +0000 (0:00:00.055) 0:03:46.022 *******
2026-04-08 03:12:10.222907 | instance | ok: [instance]
2026-04-08 03:12:10.223168 | instance |
2026-04-08 03:12:10.223446 | instance | TASK [vexxhost.ceph.cephadm_host : Add new host to Ceph] ***********************
2026-04-08 03:12:10.223729 | instance | Wednesday 08 April 2026 03:12:10 +0000 (0:00:00.409) 0:03:46.432 *******
2026-04-08 03:12:12.308941 | instance | ok: [instance]
2026-04-08 03:12:12.309181 | instance |
2026-04-08 03:12:12.309460 | instance | TASK [vexxhost.ceph.osd : Adopt OSDs to cluster] *******************************
2026-04-08 03:12:12.309732 | instance | Wednesday 08 April 2026 03:12:12 +0000 (0:00:02.086) 0:03:48.518 *******
2026-04-08 03:12:12.341725 | instance | skipping: [instance]
2026-04-08 03:12:12.341970 | instance |
2026-04-08 03:12:12.342241 | instance | TASK [vexxhost.ceph.osd : Wait until OSD added to cephadm] *********************
2026-04-08 03:12:12.342501 | instance | Wednesday 08 April 2026 03:12:12 +0000 (0:00:00.033) 0:03:48.551 *******
2026-04-08 03:12:12.376945 | instance | skipping: [instance]
2026-04-08 03:12:12.377190 | instance |
2026-04-08 03:12:12.377453 | instance | TASK [vexxhost.ceph.osd : Ensure all OSDs are non-legacy] **********************
2026-04-08 03:12:12.377719 | instance | Wednesday 08 April 2026 03:12:12 +0000 (0:00:00.035) 0:03:48.587 *******
2026-04-08 03:12:17.820617 | instance | ok: [instance]
2026-04-08 03:12:17.820808 | instance |
2026-04-08 03:12:17.821082 | instance | TASK [vexxhost.ceph.osd : Get `ceph-volume lvm list` status] *******************
2026-04-08 03:12:17.821347 | instance | Wednesday 08 April 2026 03:12:17 +0000 (0:00:05.443) 0:03:54.030 *******
2026-04-08 03:12:28.216581 | instance | ok: [instance]
2026-04-08 03:12:28.216811 | instance |
2026-04-08 03:12:28.217086 | instance | TASK [vexxhost.ceph.osd : Install OSDs] ****************************************
2026-04-08 03:12:28.217358 | instance | Wednesday 08 April 2026 03:12:28 +0000 (0:00:10.396) 0:04:04.426 *******
2026-04-08 03:12:36.221123 | instance | failed: [instance] (item=/dev/vdb) => {"ansible_loop_var": "item", "changed": false, "cmd": ["cephadm", "shell", "--fsid", "4837cbf8-4f90-4300-b3f6-726c9b9f89b4", "--config", "/var/lib/ceph/4837cbf8-4f90-4300-b3f6-726c9b9f89b4/mon.instance/config", "--", "ceph", "orch", "daemon", "add", "osd", "instance:/dev/vdb"], "delta": "0:00:07.624145", "end": "2026-04-08 03:12:36.171780", "item": "/dev/vdb", "msg": "non-zero return code", "rc": 22, "start": "2026-04-08 03:12:28.547635", "stderr": "Error EINVAL: Traceback (most recent
call last):\n File \"/usr/share/ceph/mgr/mgr_module.py\", line 1834, in _handle_command\n return self.handle_command(inbuf, cmd)\n File \"/usr/share/ceph/mgr/orchestrator/_interface.py\", line 183, in handle_command\n return dispatch[cmd['prefix']].call(self, cmd, inbuf)\n File \"/usr/share/ceph/mgr/mgr_module.py\", line 475, in call\n return self.func(mgr, **kwargs)\n File \"/usr/share/ceph/mgr/orchestrator/_interface.py\", line 119, in \n wrapper_copy = lambda *l_args, **l_kwargs: wrapper(*l_args, **l_kwargs) # noqa: E731\n File \"/usr/share/ceph/mgr/orchestrator/_interface.py\", line 108, in wrapper\n return func(*args, **kwargs)\n File \"/usr/share/ceph/mgr/orchestrator/module.py\", line 1306, in _daemon_add_osd\n raise_if_exception(completion)\n File \"/usr/share/ceph/mgr/orchestrator/_interface.py\", line 240, in raise_if_exception\n raise e\nRuntimeError: cephadm exited with an error code: 1, stderr:Inferring config /var/lib/ceph/4837cbf8-4f90-4300-b3f6-726c9b9f89b4/mon.instance/config\nNon-zero exit code 1 from /usr/bin/docker run --rm --ipc=host --stop-signal=SIGTERM --ulimit nofile=1048576 --net=host --entrypoint /usr/sbin/ceph-volume --privileged --group-add=disk --init -e CONTAINER_IMAGE=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 -e NODE_NAME=instance -e CEPH_VOLUME_OSDSPEC_AFFINITY=None -e CEPH_VOLUME_SKIP_RESTORECON=yes -e CEPH_VOLUME_DEBUG=1 -v /var/run/ceph/4837cbf8-4f90-4300-b3f6-726c9b9f89b4:/var/run/ceph:z -v /var/log/ceph/4837cbf8-4f90-4300-b3f6-726c9b9f89b4:/var/log/ceph:z -v /var/lib/ceph/4837cbf8-4f90-4300-b3f6-726c9b9f89b4/crash:/var/lib/ceph/crash:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /:/rootfs -v /tmp/ceph-tmparpe3ypr:/etc/ceph/ceph.conf:z -v /tmp/ceph-tmpwrt80472:/var/lib/ceph/bootstrap-osd/ceph.keyring:z quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 lvm batch --no-auto /dev/vdb 
--yes --no-systemd\n/usr/bin/docker: stderr stderr: lsblk: /dev/vdb: not a block device\n/usr/bin/docker: stderr Traceback (most recent call last):\n/usr/bin/docker: stderr File \"/usr/sbin/ceph-volume\", line 33, in \n/usr/bin/docker: stderr sys.exit(load_entry_point('ceph-volume==1.0.0', 'console_scripts', 'ceph-volume')())\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/main.py\", line 54, in __init__\n/usr/bin/docker: stderr self.main(self.argv)\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/decorators.py\", line 59, in newfunc\n/usr/bin/docker: stderr return f(*a, **kw)\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/main.py\", line 166, in main\n/usr/bin/docker: stderr terminal.dispatch(self.mapper, subcommand_args)\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/terminal.py\", line 194, in dispatch\n/usr/bin/docker: stderr instance.main()\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/devices/lvm/main.py\", line 46, in main\n/usr/bin/docker: stderr terminal.dispatch(self.mapper, self.argv)\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/terminal.py\", line 192, in dispatch\n/usr/bin/docker: stderr instance = mapper.get(arg)(argv[count:])\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/devices/lvm/batch.py\", line 325, in __init__\n/usr/bin/docker: stderr self.args = parser.parse_args(argv)\n/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 1825, in parse_args\n/usr/bin/docker: stderr args, argv = self.parse_known_args(args, namespace)\n/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 1858, in parse_known_args\n/usr/bin/docker: stderr namespace, args = self._parse_known_args(args, namespace)\n/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 2049, in _parse_known_args\n/usr/bin/docker: stderr 
positionals_end_index = consume_positionals(start_index)\n/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 2026, in consume_positionals\n/usr/bin/docker: stderr take_action(action, args)\n/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 1919, in take_action\n/usr/bin/docker: stderr argument_values = self._get_values(action, argument_strings)\n/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 2468, in _get_values\n/usr/bin/docker: stderr value = [self._get_value(action, v) for v in arg_strings]\n/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 2468, in \n/usr/bin/docker: stderr value = [self._get_value(action, v) for v in arg_strings]\n/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 2483, in _get_value\n/usr/bin/docker: stderr result = type_func(arg_string)\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/util/arg_validators.py\", line 124, in __call__\n/usr/bin/docker: stderr super().get_device(dev_path)\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/util/arg_validators.py\", line 32, in get_device\n/usr/bin/docker: stderr self._device = Device(dev_path)\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/util/device.py\", line 140, in __init__\n/usr/bin/docker: stderr self._parse()\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/util/device.py\", line 236, in _parse\n/usr/bin/docker: stderr dev = disk.lsblk(self.path)\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/util/disk.py\", line 244, in lsblk\n/usr/bin/docker: stderr result = lsblk_all(device=device,\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/util/disk.py\", line 338, in lsblk_all\n/usr/bin/docker: stderr raise RuntimeError(f\"Error: {err}\")\n/usr/bin/docker: stderr RuntimeError: Error: ['lsblk: /dev/vdb: not a block 
device']\nTraceback (most recent call last):\n File \"/usr/lib/python3.10/runpy.py\", line 196, in _run_module_as_main\n return _run_code(code, main_globals, None,\n File \"/usr/lib/python3.10/runpy.py\", line 86, in _run_code\n exec(code, run_globals)\n File \"/var/lib/ceph/4837cbf8-4f90-4300-b3f6-726c9b9f89b4/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 11009, in \n File \"/var/lib/ceph/4837cbf8-4f90-4300-b3f6-726c9b9f89b4/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 10997, in main\n File \"/var/lib/ceph/4837cbf8-4f90-4300-b3f6-726c9b9f89b4/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 2593, in _infer_config\n File \"/var/lib/ceph/4837cbf8-4f90-4300-b3f6-726c9b9f89b4/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 2509, in _infer_fsid\n File \"/var/lib/ceph/4837cbf8-4f90-4300-b3f6-726c9b9f89b4/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 2621, in _infer_image\n File \"/var/lib/ceph/4837cbf8-4f90-4300-b3f6-726c9b9f89b4/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 2496, in _validate_fsid\n File \"/var/lib/ceph/4837cbf8-4f90-4300-b3f6-726c9b9f89b4/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 7226, in command_ceph_volume\n File \"/var/lib/ceph/4837cbf8-4f90-4300-b3f6-726c9b9f89b4/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 2284, in call_throws\nRuntimeError: Failed command: /usr/bin/docker run --rm --ipc=host --stop-signal=SIGTERM --ulimit nofile=1048576 --net=host --entrypoint /usr/sbin/ceph-volume --privileged --group-add=disk --init -e CONTAINER_IMAGE=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 -e NODE_NAME=instance -e 
CEPH_VOLUME_OSDSPEC_AFFINITY=None -e CEPH_VOLUME_SKIP_RESTORECON=yes -e CEPH_VOLUME_DEBUG=1 -v /var/run/ceph/4837cbf8-4f90-4300-b3f6-726c9b9f89b4:/var/run/ceph:z -v /var/log/ceph/4837cbf8-4f90-4300-b3f6-726c9b9f89b4:/var/log/ceph:z -v /var/lib/ceph/4837cbf8-4f90-4300-b3f6-726c9b9f89b4/crash:/var/lib/ceph/crash:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /:/rootfs -v /tmp/ceph-tmparpe3ypr:/etc/ceph/ceph.conf:z -v /tmp/ceph-tmpwrt80472:/var/lib/ceph/bootstrap-osd/ceph.keyring:z quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 lvm batch --no-auto /dev/vdb --yes --no-systemd", "stderr_lines": ["Error EINVAL: Traceback (most recent call last):", " File \"/usr/share/ceph/mgr/mgr_module.py\", line 1834, in _handle_command", " return self.handle_command(inbuf, cmd)", " File \"/usr/share/ceph/mgr/orchestrator/_interface.py\", line 183, in handle_command", " return dispatch[cmd['prefix']].call(self, cmd, inbuf)", " File \"/usr/share/ceph/mgr/mgr_module.py\", line 475, in call", " return self.func(mgr, **kwargs)", " File \"/usr/share/ceph/mgr/orchestrator/_interface.py\", line 119, in ", " wrapper_copy = lambda *l_args, **l_kwargs: wrapper(*l_args, **l_kwargs) # noqa: E731", " File \"/usr/share/ceph/mgr/orchestrator/_interface.py\", line 108, in wrapper", " return func(*args, **kwargs)", " File \"/usr/share/ceph/mgr/orchestrator/module.py\", line 1306, in _daemon_add_osd", " raise_if_exception(completion)", " File \"/usr/share/ceph/mgr/orchestrator/_interface.py\", line 240, in raise_if_exception", " raise e", "RuntimeError: cephadm exited with an error code: 1, stderr:Inferring config /var/lib/ceph/4837cbf8-4f90-4300-b3f6-726c9b9f89b4/mon.instance/config", "Non-zero exit code 1 from /usr/bin/docker run --rm --ipc=host --stop-signal=SIGTERM --ulimit nofile=1048576 --net=host --entrypoint /usr/sbin/ceph-volume --privileged --group-add=disk --init -e 
CONTAINER_IMAGE=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 -e NODE_NAME=instance -e CEPH_VOLUME_OSDSPEC_AFFINITY=None -e CEPH_VOLUME_SKIP_RESTORECON=yes -e CEPH_VOLUME_DEBUG=1 -v /var/run/ceph/4837cbf8-4f90-4300-b3f6-726c9b9f89b4:/var/run/ceph:z -v /var/log/ceph/4837cbf8-4f90-4300-b3f6-726c9b9f89b4:/var/log/ceph:z -v /var/lib/ceph/4837cbf8-4f90-4300-b3f6-726c9b9f89b4/crash:/var/lib/ceph/crash:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /:/rootfs -v /tmp/ceph-tmparpe3ypr:/etc/ceph/ceph.conf:z -v /tmp/ceph-tmpwrt80472:/var/lib/ceph/bootstrap-osd/ceph.keyring:z quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 lvm batch --no-auto /dev/vdb --yes --no-systemd", "/usr/bin/docker: stderr stderr: lsblk: /dev/vdb: not a block device", "/usr/bin/docker: stderr Traceback (most recent call last):", "/usr/bin/docker: stderr File \"/usr/sbin/ceph-volume\", line 33, in ", "/usr/bin/docker: stderr sys.exit(load_entry_point('ceph-volume==1.0.0', 'console_scripts', 'ceph-volume')())", "/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/main.py\", line 54, in __init__", "/usr/bin/docker: stderr self.main(self.argv)", "/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/decorators.py\", line 59, in newfunc", "/usr/bin/docker: stderr return f(*a, **kw)", "/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/main.py\", line 166, in main", "/usr/bin/docker: stderr terminal.dispatch(self.mapper, subcommand_args)", "/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/terminal.py\", line 194, in dispatch", "/usr/bin/docker: stderr instance.main()", "/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/devices/lvm/main.py\", line 46, in main", "/usr/bin/docker: stderr terminal.dispatch(self.mapper, self.argv)", "/usr/bin/docker: 
stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/terminal.py\", line 192, in dispatch", "/usr/bin/docker: stderr instance = mapper.get(arg)(argv[count:])", "/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/devices/lvm/batch.py\", line 325, in __init__", "/usr/bin/docker: stderr self.args = parser.parse_args(argv)", "/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 1825, in parse_args", "/usr/bin/docker: stderr args, argv = self.parse_known_args(args, namespace)", "/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 1858, in parse_known_args", "/usr/bin/docker: stderr namespace, args = self._parse_known_args(args, namespace)", "/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 2049, in _parse_known_args", "/usr/bin/docker: stderr positionals_end_index = consume_positionals(start_index)", "/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 2026, in consume_positionals", "/usr/bin/docker: stderr take_action(action, args)", "/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 1919, in take_action", "/usr/bin/docker: stderr argument_values = self._get_values(action, argument_strings)", "/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 2468, in _get_values", "/usr/bin/docker: stderr value = [self._get_value(action, v) for v in arg_strings]", "/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 2468, in ", "/usr/bin/docker: stderr value = [self._get_value(action, v) for v in arg_strings]", "/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 2483, in _get_value", "/usr/bin/docker: stderr result = type_func(arg_string)", "/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/util/arg_validators.py\", line 124, in __call__", "/usr/bin/docker: stderr super().get_device(dev_path)", "/usr/bin/docker: stderr File 
\"/usr/lib/python3.9/site-packages/ceph_volume/util/arg_validators.py\", line 32, in get_device", "/usr/bin/docker: stderr self._device = Device(dev_path)", "/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/util/device.py\", line 140, in __init__", "/usr/bin/docker: stderr self._parse()", "/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/util/device.py\", line 236, in _parse", "/usr/bin/docker: stderr dev = disk.lsblk(self.path)", "/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/util/disk.py\", line 244, in lsblk", "/usr/bin/docker: stderr result = lsblk_all(device=device,", "/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/util/disk.py\", line 338, in lsblk_all", "/usr/bin/docker: stderr raise RuntimeError(f\"Error: {err}\")", "/usr/bin/docker: stderr RuntimeError: Error: ['lsblk: /dev/vdb: not a block device']", "Traceback (most recent call last):", " File \"/usr/lib/python3.10/runpy.py\", line 196, in _run_module_as_main", " return _run_code(code, main_globals, None,", " File \"/usr/lib/python3.10/runpy.py\", line 86, in _run_code", " exec(code, run_globals)", " File \"/var/lib/ceph/4837cbf8-4f90-4300-b3f6-726c9b9f89b4/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 11009, in ", " File \"/var/lib/ceph/4837cbf8-4f90-4300-b3f6-726c9b9f89b4/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 10997, in main", " File \"/var/lib/ceph/4837cbf8-4f90-4300-b3f6-726c9b9f89b4/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 2593, in _infer_config", " File \"/var/lib/ceph/4837cbf8-4f90-4300-b3f6-726c9b9f89b4/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 2509, in _infer_fsid", " File 
\"/var/lib/ceph/4837cbf8-4f90-4300-b3f6-726c9b9f89b4/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 2621, in _infer_image", " File \"/var/lib/ceph/4837cbf8-4f90-4300-b3f6-726c9b9f89b4/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 2496, in _validate_fsid", " File \"/var/lib/ceph/4837cbf8-4f90-4300-b3f6-726c9b9f89b4/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 7226, in command_ceph_volume", " File \"/var/lib/ceph/4837cbf8-4f90-4300-b3f6-726c9b9f89b4/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 2284, in call_throws", "RuntimeError: Failed command: /usr/bin/docker run --rm --ipc=host --stop-signal=SIGTERM --ulimit nofile=1048576 --net=host --entrypoint /usr/sbin/ceph-volume --privileged --group-add=disk --init -e CONTAINER_IMAGE=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 -e NODE_NAME=instance -e CEPH_VOLUME_OSDSPEC_AFFINITY=None -e CEPH_VOLUME_SKIP_RESTORECON=yes -e CEPH_VOLUME_DEBUG=1 -v /var/run/ceph/4837cbf8-4f90-4300-b3f6-726c9b9f89b4:/var/run/ceph:z -v /var/log/ceph/4837cbf8-4f90-4300-b3f6-726c9b9f89b4:/var/log/ceph:z -v /var/lib/ceph/4837cbf8-4f90-4300-b3f6-726c9b9f89b4/crash:/var/lib/ceph/crash:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /:/rootfs -v /tmp/ceph-tmparpe3ypr:/etc/ceph/ceph.conf:z -v /tmp/ceph-tmpwrt80472:/var/lib/ceph/bootstrap-osd/ceph.keyring:z quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 lvm batch --no-auto /dev/vdb --yes --no-systemd"], "stdout": "", "stdout_lines": []} 2026-04-08 03:12:44.057502 | instance | failed: [instance] (item=/dev/vdc) => {"ansible_loop_var": "item", "changed": false, "cmd": ["cephadm", "shell", "--fsid", "4837cbf8-4f90-4300-b3f6-726c9b9f89b4", "--config", 
"/var/lib/ceph/4837cbf8-4f90-4300-b3f6-726c9b9f89b4/mon.instance/config", "--", "ceph", "orch", "daemon", "add", "osd", "instance:/dev/vdc"], "delta": "0:00:07.524765", "end": "2026-04-08 03:12:44.014452", "item": "/dev/vdc", "msg": "non-zero return code", "rc": 22, "start": "2026-04-08 03:12:36.489687", "stderr": "Error EINVAL: Traceback (most recent call last):\n File \"/usr/share/ceph/mgr/mgr_module.py\", line 1834, in _handle_command\n return self.handle_command(inbuf, cmd)\n File \"/usr/share/ceph/mgr/orchestrator/_interface.py\", line 183, in handle_command\n return dispatch[cmd['prefix']].call(self, cmd, inbuf)\n File \"/usr/share/ceph/mgr/mgr_module.py\", line 475, in call\n return self.func(mgr, **kwargs)\n File \"/usr/share/ceph/mgr/orchestrator/_interface.py\", line 119, in \n wrapper_copy = lambda *l_args, **l_kwargs: wrapper(*l_args, **l_kwargs) # noqa: E731\n File \"/usr/share/ceph/mgr/orchestrator/_interface.py\", line 108, in wrapper\n return func(*args, **kwargs)\n File \"/usr/share/ceph/mgr/orchestrator/module.py\", line 1306, in _daemon_add_osd\n raise_if_exception(completion)\n File \"/usr/share/ceph/mgr/orchestrator/_interface.py\", line 240, in raise_if_exception\n raise e\nRuntimeError: cephadm exited with an error code: 1, stderr:Inferring config /var/lib/ceph/4837cbf8-4f90-4300-b3f6-726c9b9f89b4/mon.instance/config\nNon-zero exit code 1 from /usr/bin/docker run --rm --ipc=host --stop-signal=SIGTERM --ulimit nofile=1048576 --net=host --entrypoint /usr/sbin/ceph-volume --privileged --group-add=disk --init -e CONTAINER_IMAGE=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 -e NODE_NAME=instance -e CEPH_VOLUME_OSDSPEC_AFFINITY=None -e CEPH_VOLUME_SKIP_RESTORECON=yes -e CEPH_VOLUME_DEBUG=1 -v /var/run/ceph/4837cbf8-4f90-4300-b3f6-726c9b9f89b4:/var/run/ceph:z -v /var/log/ceph/4837cbf8-4f90-4300-b3f6-726c9b9f89b4:/var/log/ceph:z -v 
/var/lib/ceph/4837cbf8-4f90-4300-b3f6-726c9b9f89b4/crash:/var/lib/ceph/crash:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /:/rootfs -v /tmp/ceph-tmpuebk7d5r:/etc/ceph/ceph.conf:z -v /tmp/ceph-tmpcmhuqdf9:/var/lib/ceph/bootstrap-osd/ceph.keyring:z quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 lvm batch --no-auto /dev/vdc --yes --no-systemd\n/usr/bin/docker: stderr stderr: lsblk: /dev/vdc: not a block device\n/usr/bin/docker: stderr Traceback (most recent call last):\n/usr/bin/docker: stderr File \"/usr/sbin/ceph-volume\", line 33, in \n/usr/bin/docker: stderr sys.exit(load_entry_point('ceph-volume==1.0.0', 'console_scripts', 'ceph-volume')())\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/main.py\", line 54, in __init__\n/usr/bin/docker: stderr self.main(self.argv)\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/decorators.py\", line 59, in newfunc\n/usr/bin/docker: stderr return f(*a, **kw)\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/main.py\", line 166, in main\n/usr/bin/docker: stderr terminal.dispatch(self.mapper, subcommand_args)\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/terminal.py\", line 194, in dispatch\n/usr/bin/docker: stderr instance.main()\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/devices/lvm/main.py\", line 46, in main\n/usr/bin/docker: stderr terminal.dispatch(self.mapper, self.argv)\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/terminal.py\", line 192, in dispatch\n/usr/bin/docker: stderr instance = mapper.get(arg)(argv[count:])\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/devices/lvm/batch.py\", line 325, in __init__\n/usr/bin/docker: stderr self.args = parser.parse_args(argv)\n/usr/bin/docker: stderr File 
\"/usr/lib64/python3.9/argparse.py\", line 1825, in parse_args\n/usr/bin/docker: stderr args, argv = self.parse_known_args(args, namespace)\n/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 1858, in parse_known_args\n/usr/bin/docker: stderr namespace, args = self._parse_known_args(args, namespace)\n/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 2049, in _parse_known_args\n/usr/bin/docker: stderr positionals_end_index = consume_positionals(start_index)\n/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 2026, in consume_positionals\n/usr/bin/docker: stderr take_action(action, args)\n/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 1919, in take_action\n/usr/bin/docker: stderr argument_values = self._get_values(action, argument_strings)\n/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 2468, in _get_values\n/usr/bin/docker: stderr value = [self._get_value(action, v) for v in arg_strings]\n/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 2468, in \n/usr/bin/docker: stderr value = [self._get_value(action, v) for v in arg_strings]\n/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 2483, in _get_value\n/usr/bin/docker: stderr result = type_func(arg_string)\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/util/arg_validators.py\", line 124, in __call__\n/usr/bin/docker: stderr super().get_device(dev_path)\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/util/arg_validators.py\", line 32, in get_device\n/usr/bin/docker: stderr self._device = Device(dev_path)\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/util/device.py\", line 140, in __init__\n/usr/bin/docker: stderr self._parse()\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/util/device.py\", line 236, in _parse\n/usr/bin/docker: stderr dev = 
disk.lsblk(self.path)\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/util/disk.py\", line 244, in lsblk\n/usr/bin/docker: stderr result = lsblk_all(device=device,\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/util/disk.py\", line 338, in lsblk_all\n/usr/bin/docker: stderr raise RuntimeError(f\"Error: {err}\")\n/usr/bin/docker: stderr RuntimeError: Error: ['lsblk: /dev/vdc: not a block device']\nTraceback (most recent call last):\n File \"/usr/lib/python3.10/runpy.py\", line 196, in _run_module_as_main\n return _run_code(code, main_globals, None,\n File \"/usr/lib/python3.10/runpy.py\", line 86, in _run_code\n exec(code, run_globals)\n File \"/var/lib/ceph/4837cbf8-4f90-4300-b3f6-726c9b9f89b4/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 11009, in \n File \"/var/lib/ceph/4837cbf8-4f90-4300-b3f6-726c9b9f89b4/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 10997, in main\n File \"/var/lib/ceph/4837cbf8-4f90-4300-b3f6-726c9b9f89b4/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 2593, in _infer_config\n File \"/var/lib/ceph/4837cbf8-4f90-4300-b3f6-726c9b9f89b4/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 2509, in _infer_fsid\n File \"/var/lib/ceph/4837cbf8-4f90-4300-b3f6-726c9b9f89b4/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 2621, in _infer_image\n File \"/var/lib/ceph/4837cbf8-4f90-4300-b3f6-726c9b9f89b4/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 2496, in _validate_fsid\n File \"/var/lib/ceph/4837cbf8-4f90-4300-b3f6-726c9b9f89b4/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 7226, in command_ceph_volume\n File 
\"/var/lib/ceph/4837cbf8-4f90-4300-b3f6-726c9b9f89b4/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 2284, in call_throws\nRuntimeError: Failed command: /usr/bin/docker run --rm --ipc=host --stop-signal=SIGTERM --ulimit nofile=1048576 --net=host --entrypoint /usr/sbin/ceph-volume --privileged --group-add=disk --init -e CONTAINER_IMAGE=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 -e NODE_NAME=instance -e CEPH_VOLUME_OSDSPEC_AFFINITY=None -e CEPH_VOLUME_SKIP_RESTORECON=yes -e CEPH_VOLUME_DEBUG=1 -v /var/run/ceph/4837cbf8-4f90-4300-b3f6-726c9b9f89b4:/var/run/ceph:z -v /var/log/ceph/4837cbf8-4f90-4300-b3f6-726c9b9f89b4:/var/log/ceph:z -v /var/lib/ceph/4837cbf8-4f90-4300-b3f6-726c9b9f89b4/crash:/var/lib/ceph/crash:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /:/rootfs -v /tmp/ceph-tmpuebk7d5r:/etc/ceph/ceph.conf:z -v /tmp/ceph-tmpcmhuqdf9:/var/lib/ceph/bootstrap-osd/ceph.keyring:z quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 lvm batch --no-auto /dev/vdc --yes --no-systemd", "stderr_lines": ["Error EINVAL: Traceback (most recent call last):", " File \"/usr/share/ceph/mgr/mgr_module.py\", line 1834, in _handle_command", " return self.handle_command(inbuf, cmd)", " File \"/usr/share/ceph/mgr/orchestrator/_interface.py\", line 183, in handle_command", " return dispatch[cmd['prefix']].call(self, cmd, inbuf)", " File \"/usr/share/ceph/mgr/mgr_module.py\", line 475, in call", " return self.func(mgr, **kwargs)", " File \"/usr/share/ceph/mgr/orchestrator/_interface.py\", line 119, in ", " wrapper_copy = lambda *l_args, **l_kwargs: wrapper(*l_args, **l_kwargs) # noqa: E731", " File \"/usr/share/ceph/mgr/orchestrator/_interface.py\", line 108, in wrapper", " return func(*args, **kwargs)", " File \"/usr/share/ceph/mgr/orchestrator/module.py\", line 1306, in _daemon_add_osd", " 
raise_if_exception(completion)", " raise e", [...the remaining "stderr_lines" entries duplicate the "stderr" traceback above line for line, down to the failed docker run command whose tail follows...] -v 
/tmp/ceph-tmpuebk7d5r:/etc/ceph/ceph.conf:z -v /tmp/ceph-tmpcmhuqdf9:/var/lib/ceph/bootstrap-osd/ceph.keyring:z quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 lvm batch --no-auto /dev/vdc --yes --no-systemd"], "stdout": "", "stdout_lines": []} 2026-04-08 03:12:51.874564 | instance | failed: [instance] (item=/dev/vdd) => {"ansible_loop_var": "item", "changed": false, "cmd": ["cephadm", "shell", "--fsid", "4837cbf8-4f90-4300-b3f6-726c9b9f89b4", "--config", "/var/lib/ceph/4837cbf8-4f90-4300-b3f6-726c9b9f89b4/mon.instance/config", "--", "ceph", "orch", "daemon", "add", "osd", "instance:/dev/vdd"], "delta": "0:00:07.534435", "end": "2026-04-08 03:12:51.827363", "item": "/dev/vdd", "msg": "non-zero return code", "rc": 22, "start": "2026-04-08 03:12:44.292928", "stderr": "Error EINVAL: Traceback (most recent call last):\n File \"/usr/share/ceph/mgr/mgr_module.py\", line 1834, in _handle_command\n return self.handle_command(inbuf, cmd)\n File \"/usr/share/ceph/mgr/orchestrator/_interface.py\", line 183, in handle_command\n return dispatch[cmd['prefix']].call(self, cmd, inbuf)\n File \"/usr/share/ceph/mgr/mgr_module.py\", line 475, in call\n return self.func(mgr, **kwargs)\n File \"/usr/share/ceph/mgr/orchestrator/_interface.py\", line 119, in \n wrapper_copy = lambda *l_args, **l_kwargs: wrapper(*l_args, **l_kwargs) # noqa: E731\n File \"/usr/share/ceph/mgr/orchestrator/_interface.py\", line 108, in wrapper\n return func(*args, **kwargs)\n File \"/usr/share/ceph/mgr/orchestrator/module.py\", line 1306, in _daemon_add_osd\n raise_if_exception(completion)\n File \"/usr/share/ceph/mgr/orchestrator/_interface.py\", line 240, in raise_if_exception\n raise e\nRuntimeError: cephadm exited with an error code: 1, stderr:Inferring config /var/lib/ceph/4837cbf8-4f90-4300-b3f6-726c9b9f89b4/mon.instance/config\nNon-zero exit code 1 from /usr/bin/docker run --rm --ipc=host --stop-signal=SIGTERM --ulimit nofile=1048576 --net=host --entrypoint 
/usr/sbin/ceph-volume [...the rest of the /dev/vdd "stderr" and "stderr_lines" output is identical to the /dev/vdc failure above, apart from the device name and the /tmp/ceph-tmp* mount paths, ending in "lsblk: /dev/vdd: not a block device" and the failed docker run command whose tail follows...] -v 
/var/lib/ceph/4837cbf8-4f90-4300-b3f6-726c9b9f89b4/crash:/var/lib/ceph/crash:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /:/rootfs -v /tmp/ceph-tmp23vfbv43:/etc/ceph/ceph.conf:z -v /tmp/ceph-tmpv069fzwh:/var/lib/ceph/bootstrap-osd/ceph.keyring:z quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 lvm batch --no-auto /dev/vdd --yes --no-systemd"], "stdout": "", "stdout_lines": []} 2026-04-08 03:12:51.880861 | instance | 2026-04-08 03:12:51.881843 | instance | PLAY RECAP ********************************************************************* 2026-04-08 03:12:51.881883 | instance | instance : ok=105 changed=26 unreachable=0 failed=1 skipped=26 rescued=0 ignored=0 2026-04-08 03:12:51.881895 | instance | 2026-04-08 03:12:51.882598 | instance | Wednesday 08 April 2026 03:12:51 +0000 (0:00:23.664) 0:04:28.090 ******* 2026-04-08 03:12:51.886169 | instance | =============================================================================== 2026-04-08 03:12:51.886232 | instance | vexxhost.ceph.mon : Run Bootstrap coomand ----------------------------- 123.42s 2026-04-08 03:12:51.886243 | instance | vexxhost.ceph.osd : Install OSDs --------------------------------------- 23.66s 2026-04-08 03:12:51.886252 | instance | vexxhost.ceph.osd : Get `ceph-volume lvm list` status ------------------ 10.40s 2026-04-08 03:12:51.886261 | instance | vexxhost.ceph.mon : Validate monitor exist ----------------------------- 10.33s 2026-04-08 03:12:51.886269 | instance | vexxhost.ceph.cephadm : Install packages -------------------------------- 7.37s 2026-04-08 03:12:51.886278 | instance | vexxhost.ceph.osd : Ensure all OSDs are non-legacy ---------------------- 5.44s 2026-04-08 03:12:51.886286 | instance | vexxhost.ceph.osd : Get `cephadm ls` status ----------------------------- 5.42s 2026-04-08 03:12:51.886295 | instance | vexxhost.containers.containerd : Install AppArmor packages -------------- 5.27s 
2026-04-08 03:12:51.886303 | instance | vexxhost.containers.download_artifact : Extract archive ----------------- 4.46s 2026-04-08 03:12:51.886312 | instance | vexxhost.containers.download_artifact : Extract archive ----------------- 3.18s 2026-04-08 03:12:51.886328 | instance | vexxhost.containers.download_artifact : Extract archive ----------------- 3.11s 2026-04-08 03:12:51.886337 | instance | vexxhost.ceph.mgr : Enable the Ceph Manager prometheus module ----------- 2.44s 2026-04-08 03:12:51.886463 | instance | vexxhost.ceph.cephadm_host : Add new host to Ceph ----------------------- 2.16s 2026-04-08 03:12:51.886699 | instance | vexxhost.ceph.cephadm_host : Add new host to Ceph ----------------------- 2.14s 2026-04-08 03:12:51.886892 | instance | vexxhost.ceph.cephadm_host : Add new host to Ceph ----------------------- 2.09s 2026-04-08 03:12:51.887082 | instance | vexxhost.containers.download_artifact : Extract archive ----------------- 2.08s 2026-04-08 03:12:51.887271 | instance | vexxhost.containers.containerd : Reload systemd ------------------------- 1.85s 2026-04-08 03:12:51.887460 | instance | vexxhost.ceph.mgr : Validate manager exist ------------------------------ 1.84s 2026-04-08 03:12:51.887653 | instance | vexxhost.ceph.mon : Configure "mon" label for monitors ------------------ 1.83s 2026-04-08 03:12:51.887843 | instance | vexxhost.ceph.cephadm_host : Get public SSH key for "cephadm" user ------ 1.79s 2026-04-08 03:12:52.040677 | instance | CRITICAL Ansible return code was 2, command was: ansible-playbook --inventory /home/zuul/.ansible/tmp/molecule.v9Wo.aio/inventory --skip-tags molecule-notest,notest --inventory=/home/zuul/src/github.com/vexxhost/atmosphere/inventory.yaml /home/zuul/src/github.com/vexxhost/atmosphere/molecule/aio/converge.yml 2026-04-08 03:12:52.040885 | instance | ERROR [aio > converge] Executed: Failed 2026-04-08 03:12:52.040978 | instance | ERROR Ansible return code was 2, command was: ansible-playbook --inventory 
/home/zuul/.ansible/tmp/molecule.v9Wo.aio/inventory --skip-tags molecule-notest,notest --inventory=/home/zuul/src/github.com/vexxhost/atmosphere/inventory.yaml /home/zuul/src/github.com/vexxhost/atmosphere/molecule/aio/converge.yml 2026-04-08 03:12:52.366302 | instance | ERROR 2026-04-08 03:12:52.366633 | instance | { 2026-04-08 03:12:52.366700 | instance | "delta": "0:06:20.583429", 2026-04-08 03:12:52.366747 | instance | "end": "2026-04-08 03:12:52.122102", 2026-04-08 03:12:52.366788 | instance | "msg": "non-zero return code", 2026-04-08 03:12:52.366829 | instance | "rc": 2, 2026-04-08 03:12:52.366873 | instance | "start": "2026-04-08 03:06:31.538673" 2026-04-08 03:12:52.366914 | instance | } failure 2026-04-08 03:12:52.377848 | 2026-04-08 03:12:52.377898 | PLAY RECAP 2026-04-08 03:12:52.377939 | instance | ok: 2 changed: 2 unreachable: 0 failed: 1 skipped: 0 rescued: 0 ignored: 0 2026-04-08 03:12:52.377961 | 2026-04-08 03:12:52.471265 | RUN END RESULT_NORMAL: [untrusted : github.com/vexxhost/zuul-jobs/playbooks/molecule/run.yaml@main] 2026-04-08 03:12:52.480797 | POST-RUN START: [untrusted : github.com/vexxhost/atmosphere/test-playbooks/molecule/post.yml@main] 2026-04-08 03:12:53.060132 | 2026-04-08 03:12:53.060425 | PLAY [all] 2026-04-08 03:12:53.074123 | 2026-04-08 03:12:53.074214 | TASK [gather-host-logs : creating directory for system status] 2026-04-08 03:12:53.412089 | instance | changed 2026-04-08 03:12:53.417141 | 2026-04-08 03:12:53.417218 | TASK [gather-host-logs : Get logs for each host] 2026-04-08 03:12:53.747620 | instance | + systemd-cgls --full --all --no-pager 2026-04-08 03:12:53.759963 | instance | + ip addr 2026-04-08 03:12:53.762095 | instance | + ip route 2026-04-08 03:12:53.764726 | instance | + lsblk 2026-04-08 03:12:53.770770 | instance | + mount 2026-04-08 03:12:53.774350 | instance | + docker images 2026-04-08 03:12:53.793816 | instance | + brctl show 2026-04-08 03:12:53.794528 | instance | /bin/bash: line 8: brctl: command not found 
2026-04-08 03:12:53.794915 | instance | + ps aux --sort=-%mem 2026-04-08 03:12:53.819425 | instance | + dpkg -l 2026-04-08 03:12:53.832111 | instance | + CONTAINERS=($(docker ps -a --format '{{ .Names }}' --filter label=zuul)) 2026-04-08 03:12:53.832896 | instance | ++ docker ps -a --format '{{ .Names }}' --filter label=zuul 2026-04-08 03:12:53.849167 | instance | + '[' '!' -z '' ']' 2026-04-08 03:12:53.951182 | instance | ok: Runtime: 0:00:00.107132 2026-04-08 03:12:53.959113 | 2026-04-08 03:12:53.959199 | TASK [gather-host-logs : Downloads logs to executor] 2026-04-08 03:12:54.560461 | instance | changed: 2026-04-08 03:12:54.560664 | instance | created directory /var/lib/zuul/builds/557441ff9d894b5e938f4d4ea9157550/work/logs/instance 2026-04-08 03:12:54.560709 | instance | cd+++++++++ system/ 2026-04-08 03:12:54.560747 | instance | >f+++++++++ system/brctl-show.txt 2026-04-08 03:12:54.560782 | instance | >f+++++++++ system/docker-images.txt 2026-04-08 03:12:54.560816 | instance | >f+++++++++ system/ip-addr.txt 2026-04-08 03:12:54.560849 | instance | >f+++++++++ system/ip-route.txt 2026-04-08 03:12:54.560876 | instance | >f+++++++++ system/lsblk.txt 2026-04-08 03:12:54.560905 | instance | >f+++++++++ system/mount.txt 2026-04-08 03:12:54.560934 | instance | >f+++++++++ system/packages.txt 2026-04-08 03:12:54.560984 | instance | >f+++++++++ system/ps.txt 2026-04-08 03:12:54.561020 | instance | >f+++++++++ system/systemd-cgls.txt 2026-04-08 03:12:54.574237 | 2026-04-08 03:12:54.574326 | LOOP [helm-release-status : creating directory for helm release status] 2026-04-08 03:12:54.762186 | instance | changed: "values" 2026-04-08 03:12:54.926883 | instance | changed: "releases" 2026-04-08 03:12:54.945995 | 2026-04-08 03:12:54.946146 | TASK [helm-release-status : Gather get release status for helm charts] 2026-04-08 03:12:55.162669 | instance | /bin/bash: line 3: kubectl: command not found 2026-04-08 03:12:55.480210 | instance | ok: Runtime: 0:00:00.005267 2026-04-08 
03:12:55.486912 |
2026-04-08 03:12:55.487001 | TASK [helm-release-status : Downloads logs to executor]
2026-04-08 03:12:55.967795 | instance | changed:
2026-04-08 03:12:56.008860 | instance | cd+++++++++ helm/
2026-04-08 03:12:56.009013 | instance | cd+++++++++ helm/releases/
2026-04-08 03:12:56.009076 | instance | cd+++++++++ helm/values/
2026-04-08 03:12:56.023410 |
2026-04-08 03:12:56.023517 | TASK [describe-kubernetes-objects : creating directory for cluster scoped objects]
2026-04-08 03:12:56.222805 | instance | changed
2026-04-08 03:12:56.258978 |
2026-04-08 03:12:56.259118 | TASK [describe-kubernetes-objects : Gathering descriptions for cluster scoped objects]
2026-04-08 03:12:56.482905 | instance | xargs: warning: options --max-args and --replace/-I/-i are mutually exclusive, ignoring previous --max-args value
2026-04-08 03:12:56.483015 | instance | xargs: warning: options --max-args and --replace/-I/-i are mutually exclusive, ignoring previous --max-args value
2026-04-08 03:12:56.487792 | instance | environment: line 1: kubectl: command not found
2026-04-08 03:12:56.488985 | instance | xargs: warning: options --max-lines and --replace/-I/-i are mutually exclusive, ignoring previous --max-lines value
2026-04-08 03:12:56.490020 | instance | environment: line 1: kubectl: command not found
2026-04-08 03:12:56.491512 | instance | xargs: warning: options --max-lines and --replace/-I/-i are mutually exclusive, ignoring previous --max-lines value
2026-04-08 03:12:56.492612 | instance | environment: line 1: kubectl: command not found
2026-04-08 03:12:56.494776 | instance | xargs: warning: options --max-lines and --replace/-I/-i are mutually exclusive, ignoring previous --max-lines value
2026-04-08 03:12:56.496159 | instance | environment: line 1: kubectl: command not found
2026-04-08 03:12:56.497482 | instance | xargs: warning: options --max-lines and --replace/-I/-i are mutually exclusive, ignoring previous --max-lines value
2026-04-08 03:12:56.497790 | instance | environment: line 1: kubectl: command not found
2026-04-08 03:12:56.498721 | instance | xargs: warning: options --max-lines and --replace/-I/-i are mutually exclusive, ignoring previous --max-lines value
2026-04-08 03:12:56.793953 | instance | ok: Runtime: 0:00:00.023975
2026-04-08 03:12:56.801209 |
2026-04-08 03:12:56.801303 | TASK [describe-kubernetes-objects : creating directory for namespace scoped objects]
2026-04-08 03:12:57.016309 | instance | changed
2026-04-08 03:12:57.023638 |
2026-04-08 03:12:57.023734 | TASK [describe-kubernetes-objects : Gathering descriptions for namespace scoped objects]
2026-04-08 03:12:57.237201 | instance | environment: line 5: kubectl: command not found
2026-04-08 03:12:57.237513 | instance | xargs: warning: options --max-args and --replace/-I/-i are mutually exclusive, ignoring previous --max-args value
2026-04-08 03:12:57.237887 | instance | xargs: warning: options --max-args and --replace/-I/-i are mutually exclusive, ignoring previous --max-args value
2026-04-08 03:12:57.238110 | instance | xargs: warning: options --max-args and --replace/-I/-i are mutually exclusive, ignoring previous --max-args value
2026-04-08 03:12:57.564696 | instance | ok: Runtime: 0:00:00.010230
2026-04-08 03:12:57.571092 |
2026-04-08 03:12:57.571175 | TASK [describe-kubernetes-objects : Downloads logs to executor]
2026-04-08 03:12:58.070032 | instance | changed:
2026-04-08 03:12:58.070230 | instance | cd+++++++++ objects/
2026-04-08 03:12:58.070270 | instance | cd+++++++++ objects/cluster/
2026-04-08 03:12:58.070301 | instance | cd+++++++++ objects/namespaced/
2026-04-08 03:12:58.080558 |
2026-04-08 03:12:58.080627 | TASK [gather-pod-logs : creating directory for pod logs]
2026-04-08 03:12:58.274971 | instance | changed
2026-04-08 03:12:58.280230 |
2026-04-08 03:12:58.280303 | TASK [gather-pod-logs : creating directory for failed pod logs]
2026-04-08 03:12:58.494913 | instance | changed
2026-04-08 03:12:58.501795 |
2026-04-08 03:12:58.501893 | TASK [gather-pod-logs : retrieve all kubernetes logs, current and previous (if they exist)]
2026-04-08 03:12:58.716156 | instance | environment: line 3: kubectl: command not found
2026-04-08 03:12:59.038707 | instance | ok: Runtime: 0:00:00.009286
2026-04-08 03:12:59.043640 |
2026-04-08 03:12:59.043704 | TASK [gather-pod-logs : Downloads pod logs to executor]
2026-04-08 03:12:59.504489 | instance | changed:
2026-04-08 03:12:59.504788 | instance | cd+++++++++ pod-logs/
2026-04-08 03:12:59.504882 | instance | cd+++++++++ pod-logs/failed-pods/
2026-04-08 03:12:59.515098 |
2026-04-08 03:12:59.515158 | TASK [gather-prom-metrics : creating directory for helm release descriptions]
2026-04-08 03:12:59.700948 | instance | changed
2026-04-08 03:12:59.707169 |
2026-04-08 03:12:59.707236 | TASK [gather-prom-metrics : Get metrics from exporter services in all namespaces]
2026-04-08 03:12:59.904301 | instance | /bin/bash: line 2: kubectl: command not found
2026-04-08 03:13:00.243256 | instance | ok: Runtime: 0:00:00.031616
2026-04-08 03:13:00.248025 |
2026-04-08 03:13:00.248088 | TASK [gather-prom-metrics : Get ceph metrics from ceph-mgr]
2026-04-08 03:13:00.452218 | instance | /bin/bash: line 2: kubectl: command not found
2026-04-08 03:13:00.479937 | instance | ceph-mgr endpoints:
2026-04-08 03:13:00.783691 | instance | ok: Runtime: 0:00:00.033775
2026-04-08 03:13:00.790667 |
2026-04-08 03:13:00.790753 | TASK [gather-prom-metrics : Get metrics from fluentd pods]
2026-04-08 03:13:01.003615 | instance | /bin/bash: line 4: kubectl: command not found
2026-04-08 03:13:01.327315 | instance | ok: Runtime: 0:00:00.037560
2026-04-08 03:13:01.332509 |
2026-04-08 03:13:01.332572 | TASK [gather-prom-metrics : Downloads logs to executor]
2026-04-08 03:13:01.802728 | instance | changed: cd+++++++++ prometheus/
2026-04-08 03:13:01.812331 |
2026-04-08 03:13:01.812393 | TASK [gather-selenium-data : creating directory for helm release descriptions]
2026-04-08 03:13:02.034102 | instance | changed
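Editor's note: the log-collection tasks above emit two recurring diagnostics: GNU xargs warning that `-n`/`--max-args` (and `-L`/`--max-lines`) are mutually exclusive with `-I`/`--replace`, and `kubectl: command not found`, which silently empties every gathered description. A minimal sketch reproducing both, assuming only a POSIX shell with GNU xargs; the guard around `kubectl` is hypothetical, not part of the roles shown here:

```shell
# 1) With both -n1 and -I given, GNU xargs warns on stderr and keeps -I,
#    since -I already implies one input item per command invocation.
#    Dropping the redundant -n1 would silence the warning.
printf 'pods\nnodes\n' | xargs -n1 -I{} echo "describe {}" 2>/dev/null

# 2) "kubectl: command not found" means the collection pipeline ran without
#    the binary. A hypothetical explicit guard makes the skip visible
#    instead of producing empty output files:
if ! command -v kubectl >/dev/null 2>&1; then
  echo "kubectl not installed; skipping object descriptions"
fi
```

The pipe in (1) still prints `describe pods` and `describe nodes`; only the limit flag is discarded, so the tasks succeed despite the noise.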
2026-04-08 03:13:02.038918 |
2026-04-08 03:13:02.038991 | TASK [gather-selenium-data : Get selenium data]
2026-04-08 03:13:02.243353 | instance | + cp '/tmp/artifacts/*' /tmp/logs/selenium/.
2026-04-08 03:13:02.244843 | instance | cp: cannot stat '/tmp/artifacts/*': No such file or directory
2026-04-08 03:13:02.579200 | instance | ERROR
2026-04-08 03:13:02.579530 | instance | {
2026-04-08 03:13:02.579627 | instance | "delta": "0:00:00.007454",
2026-04-08 03:13:02.579679 | instance | "end": "2026-04-08 03:13:02.245301",
2026-04-08 03:13:02.579961 | instance | "msg": "non-zero return code",
2026-04-08 03:13:02.580017 | instance | "rc": 1,
2026-04-08 03:13:02.580062 | instance | "start": "2026-04-08 03:13:02.237847"
2026-04-08 03:13:02.580107 | instance | }
2026-04-08 03:13:02.580166 | instance | ERROR: Ignoring Errors
2026-04-08 03:13:02.586185 |
2026-04-08 03:13:02.586257 | TASK [gather-selenium-data : Downloads logs to executor]
2026-04-08 03:13:03.076597 | instance | changed: cd+++++++++ selenium/
2026-04-08 03:13:03.084170 |
2026-04-08 03:13:03.084238 | PLAY RECAP
2026-04-08 03:13:03.084299 | instance | ok: 23 changed: 23 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 1
2026-04-08 03:13:03.084328 |
2026-04-08 03:13:03.181356 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/vexxhost/atmosphere/test-playbooks/molecule/post.yml@main]
2026-04-08 03:13:03.194317 | POST-RUN START: [trusted : github.com/vexxhost/zuul-config/playbooks/base/post.yaml@main]
2026-04-08 03:13:03.766858 |
2026-04-08 03:13:03.766960 | PLAY [all]
2026-04-08 03:13:03.777836 |
2026-04-08 03:13:03.777912 | TASK [fetch-output : Set log path for multiple nodes]
2026-04-08 03:13:03.822303 | instance | skipping: Conditional result was False
2026-04-08 03:13:03.834505 |
2026-04-08 03:13:03.834672 | TASK [fetch-output : Set log path for single node]
2026-04-08 03:13:03.878382 | instance | ok
2026-04-08 03:13:03.884731 |
2026-04-08 03:13:03.884825 | LOOP [fetch-output : Ensure local output dirs]
2026-04-08 03:13:04.236437 | instance -> localhost | ok: "/var/lib/zuul/builds/557441ff9d894b5e938f4d4ea9157550/work/logs"
2026-04-08 03:13:04.456179 | instance -> localhost | changed: "/var/lib/zuul/builds/557441ff9d894b5e938f4d4ea9157550/work/artifacts"
2026-04-08 03:13:04.658715 | instance -> localhost | changed: "/var/lib/zuul/builds/557441ff9d894b5e938f4d4ea9157550/work/docs"
2026-04-08 03:13:04.672614 |
2026-04-08 03:13:04.672735 | LOOP [fetch-output : Collect logs, artifacts and docs]
2026-04-08 03:13:05.302785 | instance | changed: .d..t...... ./
2026-04-08 03:13:05.303043 | instance | changed: All items complete
2026-04-08 03:13:05.303084 |
2026-04-08 03:13:05.730771 | instance | changed: .d..t...... ./
2026-04-08 03:13:06.165121 | instance | changed: .d..t...... ./
2026-04-08 03:13:06.184454 |
2026-04-08 03:13:06.184601 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2026-04-08 03:13:06.573485 | instance -> localhost | ok: Item: artifacts Runtime: 0:00:00.007609
2026-04-08 03:13:06.789222 | instance -> localhost | ok: Item: docs Runtime: 0:00:00.007225
2026-04-08 03:13:06.809544 |
2026-04-08 03:13:06.809704 | PLAY [all]
2026-04-08 03:13:06.817019 |
2026-04-08 03:13:06.817083 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2026-04-08 03:13:07.204915 | instance | changed
2026-04-08 03:13:07.212402 |
2026-04-08 03:13:07.212507 | PLAY RECAP
2026-04-08 03:13:07.212553 | instance | ok: 5 changed: 4 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2026-04-08 03:13:07.212574 |
2026-04-08 03:13:07.336414 | POST-RUN END RESULT_NORMAL: [trusted : github.com/vexxhost/zuul-config/playbooks/base/post.yaml@main]
2026-04-08 03:13:07.349288 | POST-RUN START: [trusted : github.com/vexxhost/zuul-config/playbooks/base/post-logs.yaml@main]
2026-04-08 03:13:07.962244 |
2026-04-08 03:13:07.962424 | PLAY [localhost]
2026-04-08 03:13:07.974978 |
2026-04-08 03:13:07.975162 | TASK [Generate Zuul manifest]
2026-04-08 03:13:07.998483 | localhost | ok
2026-04-08 03:13:08.015917 |
2026-04-08 03:13:08.016004 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2026-04-08 03:13:08.391639 | localhost | changed
2026-04-08 03:13:08.401338 |
2026-04-08 03:13:08.401412 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2026-04-08 03:13:08.432515 | localhost | ok
2026-04-08 03:13:08.441236 |
2026-04-08 03:13:08.441326 | TASK [Upload logs]
2026-04-08 03:13:08.471500 | localhost | ok
2026-04-08 03:13:08.570316 |
2026-04-08 03:13:08.570449 | TASK [Set zuul-log-path fact]
2026-04-08 03:13:08.588359 | localhost | ok
2026-04-08 03:13:08.598628 |
2026-04-08 03:13:08.598694 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-04-08 03:13:08.625883 | localhost | ok
2026-04-08 03:13:08.632873 |
2026-04-08 03:13:08.632935 | TASK [upload-logs : Create log directories]
2026-04-08 03:13:08.988206 | localhost | changed
2026-04-08 03:13:08.994947 |
2026-04-08 03:13:08.995015 | TASK [upload-logs : Ensure logs are readable before uploading]
2026-04-08 03:13:09.341614 | localhost -> localhost | ok: Runtime: 0:00:00.005547
2026-04-08 03:13:09.347705 |
2026-04-08 03:13:09.347768 | TASK [upload-logs : Upload logs to log server]
2026-04-08 03:13:09.785019 | localhost | Output suppressed because no_log was given
2026-04-08 03:13:09.790196 |
2026-04-08 03:13:09.790260 | LOOP [upload-logs : Compress console log and json output]
2026-04-08 03:13:09.839474 | localhost | skipping: Conditional result was False
2026-04-08 03:13:09.846743 | localhost | skipping: Conditional result was False
2026-04-08 03:13:09.863972 |
2026-04-08 03:13:09.864097 | LOOP [upload-logs : Upload compressed console log and json output]
2026-04-08 03:13:09.909973 | localhost | skipping: Conditional result was False
2026-04-08 03:13:09.910396 |
2026-04-08 03:13:09.914163 | localhost | skipping: Conditional result was False
2026-04-08 03:13:09.929886 |
2026-04-08 03:13:09.930037 | LOOP [upload-logs : Upload console log and json output]
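Editor's note: the one ignored error in this run, `cp: cannot stat '/tmp/artifacts/*': No such file or directory`, occurs because `/tmp/artifacts` is absent or empty, so the glob does not expand and cp receives the literal string `/tmp/artifacts/*` (visible quoted in the `+ cp` trace). A minimal sketch of a copy that exits 0 when nothing matches, using temporary stand-in directories rather than the job's real paths:

```shell
# Stand-ins for /tmp/artifacts and /tmp/logs/selenium; the source is
# deliberately left empty, mirroring the failing build.
src=$(mktemp -d)
dst=$(mktemp -d)

# find enumerates entries itself, so an empty (or missing-glob) source is
# simply a no-op instead of a literal-asterisk path handed to cp.
find "$src" -mindepth 1 -maxdepth 1 -exec cp -r {} "$dst"/ \;
echo "copy rc=$?"   # prints: copy rc=0
```

The same effect could come from `shopt -s nullglob` in bash, but the find form stays POSIX-portable and needs no shell options.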