2026-04-07 08:14:59.178335 | Job console starting
2026-04-07 08:14:59.188826 | Updating git repos
2026-04-07 08:14:59.510428 | Cloning repos into workspace
2026-04-07 08:15:02.018543 | Restoring repo states
2026-04-07 08:15:02.034361 | Merging changes
2026-04-07 08:15:04.467444 | Checking out repos
2026-04-07 08:15:05.823945 | Preparing playbooks
2026-04-07 08:15:17.737354 | Running Ansible setup
2026-04-07 08:15:22.632753 | PRE-RUN START: [trusted : github.com/vexxhost/zuul-config/playbooks/base/pre.yaml@main]
2026-04-07 08:15:23.283554 |
2026-04-07 08:15:23.283694 | PLAY [localhost]
2026-04-07 08:15:23.291793 |
2026-04-07 08:15:23.291881 | TASK [Gathering Facts]
2026-04-07 08:15:24.243017 | localhost | ok
2026-04-07 08:15:24.254567 |
2026-04-07 08:15:24.254658 | TASK [Setup log path fact]
2026-04-07 08:15:24.274226 | localhost | ok
2026-04-07 08:15:24.288653 |
2026-04-07 08:15:24.288766 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-04-07 08:15:24.316336 | localhost | ok
2026-04-07 08:15:24.323886 |
2026-04-07 08:15:24.323997 | TASK [emit-job-header : Print job information]
2026-04-07 08:15:24.363275 | # Job Information
2026-04-07 08:15:24.363457 | Ansible Version: 2.16.16
2026-04-07 08:15:24.363502 | Job: atmosphere-molecule-aio-openvswitch
2026-04-07 08:15:24.363533 | Pipeline: check
2026-04-07 08:15:24.363561 | Executor: 0a8996d2b663
2026-04-07 08:15:24.363587 | Triggered by: https://github.com/vexxhost/atmosphere/pull/3809
2026-04-07 08:15:24.363618 | Event ID: c69c0d70-3259-11f1-8a6c-bfbbc40fe9ae
2026-04-07 08:15:24.367280 |
2026-04-07 08:15:24.367363 | LOOP [emit-job-header : Print node information]
2026-04-07 08:15:24.467679 | localhost | ok:
2026-04-07 08:15:24.467960 | localhost | # Node Information
2026-04-07 08:15:24.468036 | localhost | Inventory Hostname: instance
2026-04-07 08:15:24.468088 | localhost | Hostname: np0000163892
2026-04-07 08:15:24.468132 | localhost | Username: zuul
2026-04-07 08:15:24.468181 | localhost | Distro: Ubuntu 22.04
2026-04-07 08:15:24.468225 | localhost | Provider: yul1
2026-04-07 08:15:24.468267 | localhost | Region: ca-ymq-1
2026-04-07 08:15:24.468308 | localhost | Label: ubuntu-jammy-16
2026-04-07 08:15:24.468349 | localhost | Product Name: OpenStack Nova
2026-04-07 08:15:24.468448 | localhost | Interface IP: 199.204.45.240
2026-04-07 08:15:24.485233 |
2026-04-07 08:15:24.485634 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-04-07 08:15:24.904496 | localhost -> localhost | changed
2026-04-07 08:15:24.912529 |
2026-04-07 08:15:24.912673 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-04-07 08:15:25.939957 | localhost -> localhost | changed
2026-04-07 08:15:25.948332 |
2026-04-07 08:15:25.948406 | PLAY [all]
2026-04-07 08:15:25.956510 |
2026-04-07 08:15:25.956574 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-04-07 08:15:26.437203 | instance -> localhost | ok
2026-04-07 08:15:26.445229 |
2026-04-07 08:15:26.445357 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-04-07 08:15:26.478239 | instance | ok
2026-04-07 08:15:26.495107 | instance | included: /var/lib/zuul/builds/c8026ada4e2d464a8de3d66983af1cc6/trusted/project_1/opendev.org/zuul/zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-04-07 08:15:26.502570 |
2026-04-07 08:15:26.502703 | TASK [add-build-sshkey : Create Temp SSH key]
2026-04-07 08:15:27.547128 | instance -> localhost | Generating public/private rsa key pair.
2026-04-07 08:15:27.547323 | instance -> localhost | Your identification has been saved in /var/lib/zuul/builds/c8026ada4e2d464a8de3d66983af1cc6/work/c8026ada4e2d464a8de3d66983af1cc6_id_rsa
2026-04-07 08:15:27.547374 | instance -> localhost | Your public key has been saved in /var/lib/zuul/builds/c8026ada4e2d464a8de3d66983af1cc6/work/c8026ada4e2d464a8de3d66983af1cc6_id_rsa.pub
2026-04-07 08:15:27.547416 | instance -> localhost | The key fingerprint is:
2026-04-07 08:15:27.547448 | instance -> localhost | SHA256:G2HUBFq6VcG6sncLTN+CxTYGcFyPExIqX7onyHmtpVk zuul-build-sshkey
2026-04-07 08:15:27.547491 | instance -> localhost | The key's randomart image is:
2026-04-07 08:15:27.547522 | instance -> localhost | +---[RSA 3072]----+
2026-04-07 08:15:27.547557 | instance -> localhost | | ..BB*. |
2026-04-07 08:15:27.547600 | instance -> localhost | | O.oo+ |
2026-04-07 08:15:27.547632 | instance -> localhost | | . + *.o . |
2026-04-07 08:15:27.547660 | instance -> localhost | | o *.+ . |
2026-04-07 08:15:27.547689 | instance -> localhost | | + S.* |
2026-04-07 08:15:27.547717 | instance -> localhost | | . o.=.O o |
2026-04-07 08:15:27.547743 | instance -> localhost | | + +oE o . |
2026-04-07 08:15:27.547770 | instance -> localhost | | ..O.... |
2026-04-07 08:15:27.547806 | instance -> localhost | | +. ... |
2026-04-07 08:15:27.547846 | instance -> localhost | +----[SHA256]-----+
2026-04-07 08:15:27.547913 | instance -> localhost | ok: Runtime: 0:00:00.631187
2026-04-07 08:15:27.554486 |
2026-04-07 08:15:27.554547 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-04-07 08:15:27.590664 | instance | ok
2026-04-07 08:15:27.600520 | instance | included: /var/lib/zuul/builds/c8026ada4e2d464a8de3d66983af1cc6/trusted/project_1/opendev.org/zuul/zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-04-07 08:15:27.608644 |
2026-04-07 08:15:27.608727 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-04-07 08:15:27.632854 | instance | skipping: Conditional result was False
2026-04-07 08:15:27.663369 |
2026-04-07 08:15:27.663512 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-04-07 08:15:28.119897 | instance | changed
2026-04-07 08:15:28.353969 |
2026-04-07 08:15:28.354118 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-04-07 08:15:28.531820 | instance | ok
2026-04-07 08:15:28.537179 |
2026-04-07 08:15:28.537288 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-04-07 08:15:29.047738 | instance | changed
2026-04-07 08:15:29.053121 |
2026-04-07 08:15:29.053207 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-04-07 08:15:29.528072 | instance | changed
2026-04-07 08:15:29.537008 |
2026-04-07 08:15:29.537136 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-04-07 08:15:29.560846 | instance | skipping: Conditional result was False
2026-04-07 08:15:29.573633 |
2026-04-07 08:15:29.573714 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-04-07 08:15:29.963737 | instance -> localhost | changed
2026-04-07 08:15:30.008050 |
2026-04-07 08:15:30.008174 | TASK [add-build-sshkey : Add back temp key]
2026-04-07 08:15:30.307378 | instance -> localhost | Identity added: /var/lib/zuul/builds/c8026ada4e2d464a8de3d66983af1cc6/work/c8026ada4e2d464a8de3d66983af1cc6_id_rsa (zuul-build-sshkey)
2026-04-07 08:15:30.307656 | instance -> localhost | ok: Runtime: 0:00:00.014597
2026-04-07 08:15:30.313166 |
2026-04-07 08:15:30.313234 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-04-07 08:15:30.595281 | instance | ok
2026-04-07 08:15:30.603122 |
2026-04-07 08:15:30.603371 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-04-07 08:15:30.632268 | instance | skipping: Conditional result was False
2026-04-07 08:15:30.655924 |
2026-04-07 08:15:30.656034 | TASK [prepare-workspace : Start zuul_console daemon.]
2026-04-07 08:15:30.974376 | instance | ok
2026-04-07 08:15:31.017763 |
2026-04-07 08:15:31.017880 | TASK [prepare-workspace : Synchronize src repos to workspace directory.]
2026-04-07 08:15:32.621419 | instance | Output suppressed because no_log was given
2026-04-07 08:15:32.631071 |
2026-04-07 08:15:32.631131 | LOOP [ensure-output-dirs : Empty Zuul Output directories by removing them]
2026-04-07 08:15:32.831697 | instance | ok: "logs"
2026-04-07 08:15:32.832002 | instance | ok: All items complete
2026-04-07 08:15:32.832059 |
2026-04-07 08:15:32.982736 | instance | ok: "artifacts"
2026-04-07 08:15:33.142011 | instance | ok: "docs"
2026-04-07 08:15:33.154521 |
2026-04-07 08:15:33.154716 | LOOP [ensure-output-dirs : Ensure Zuul Output directories exist]
2026-04-07 08:15:33.331219 | instance | changed: "logs"
2026-04-07 08:15:33.484447 | instance | changed: "artifacts"
2026-04-07 08:15:33.649420 | instance | changed: "docs"
2026-04-07 08:15:33.660546 |
2026-04-07 08:15:33.660652 | PLAY RECAP
2026-04-07 08:15:33.660715 | instance | ok: 15 changed: 8 unreachable: 0 failed: 0 skipped: 3 rescued: 0 ignored: 0
2026-04-07 08:15:33.660759 | localhost | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2026-04-07 08:15:33.660793 |
2026-04-07 08:15:33.851834 | PRE-RUN END RESULT_NORMAL: [trusted : github.com/vexxhost/zuul-config/playbooks/base/pre.yaml@main]
2026-04-07 08:15:33.861464 | PRE-RUN START: [untrusted : github.com/vexxhost/zuul-jobs/playbooks/molecule/pre.yaml@main]
2026-04-07 08:15:34.443284 |
2026-04-07 08:15:34.443404 | PLAY [all]
2026-04-07 08:15:34.455433 |
2026-04-07 08:15:34.455518 | TASK [setup-uv : Extract archive]
2026-04-07 08:15:36.979002 | instance | changed
2026-04-07 08:15:36.986716 |
2026-04-07 08:15:36.986864 | TASK [setup-uv : Print version]
2026-04-07 08:15:37.337953 | instance | uv 0.8.13
2026-04-07 08:15:37.524375 | instance | ok: Runtime: 0:00:00.011106
2026-04-07 08:15:37.531579 |
2026-04-07 08:15:37.531625 | PLAY RECAP
2026-04-07 08:15:37.531668 | instance | ok: 2 changed: 2 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2026-04-07 08:15:37.531692 |
2026-04-07 08:15:37.639419 | PRE-RUN END RESULT_NORMAL: [untrusted : github.com/vexxhost/zuul-jobs/playbooks/molecule/pre.yaml@main]
2026-04-07 08:15:37.652591 | PRE-RUN START: [untrusted : github.com/vexxhost/atmosphere/test-playbooks/molecule/pre.yml@main]
2026-04-07 08:15:38.307536 |
2026-04-07 08:15:38.307972 | PLAY [all]
2026-04-07 08:15:38.320061 |
2026-04-07 08:15:38.320189 | TASK [Install "jq" for log collection]
2026-04-07 08:15:53.242468 | instance | changed
2026-04-07 08:15:53.245800 |
2026-04-07 08:15:53.245857 | PLAY RECAP
2026-04-07 08:15:53.245906 | instance | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2026-04-07 08:15:53.245955 |
2026-04-07 08:15:53.351866 | PRE-RUN END RESULT_NORMAL: [untrusted : github.com/vexxhost/atmosphere/test-playbooks/molecule/pre.yml@main]
2026-04-07 08:15:53.363452 | RUN START: [untrusted : github.com/vexxhost/zuul-jobs/playbooks/molecule/run.yaml@main]
2026-04-07 08:15:53.986104 |
2026-04-07 08:15:53.986235 | PLAY [all]
2026-04-07 08:15:53.999352 |
2026-04-07 08:15:53.999493 | TASK [Copy inventory file for Zuul]
2026-04-07 08:15:54.892830 | instance | changed
2026-04-07 08:15:54.900168 |
2026-04-07 08:15:54.900294 | TASK [Switch "ansible_host" to private IP]
2026-04-07 08:15:55.197070 | instance | changed: 1 replacements made
2026-04-07 08:15:55.205996 |
2026-04-07 08:15:55.207185 | TASK [Run Molecule scenario]
2026-04-07 08:15:55.607714 | instance | Using CPython 3.10.12 interpreter at: /usr/bin/python3
2026-04-07 08:15:55.608366 | instance | Creating virtual environment at: .venv
2026-04-07 08:15:55.633311 | instance | Building atmosphere @ file:///home/zuul/src/github.com/vexxhost/atmosphere
2026-04-07 08:15:55.648719 | instance | Downloading openstacksdk (1.7MiB)
2026-04-07 08:15:55.667539 | instance | Downloading pygments (1.2MiB)
2026-04-07 08:15:55.667750 | instance | Downloading setuptools (1.1MiB)
2026-04-07 08:15:55.668073 | instance | Downloading ansible-core (2.1MiB)
2026-04-07 08:15:55.668303 | instance | Downloading kubernetes (1.9MiB)
2026-04-07 08:15:55.669528 | instance | Downloading rjsonnet (1.2MiB)
2026-04-07 08:15:55.676966 | instance | Downloading netaddr (2.2MiB)
2026-04-07 08:15:55.677535 | instance | Downloading pydantic-core (2.0MiB)
2026-04-07 08:15:55.678284 | instance | Downloading cryptography (4.2MiB)
2026-04-07 08:15:56.016177 | instance | Building pyperclip==1.9.0
2026-04-07 08:15:56.039514 | instance | Downloading rjsonnet
2026-04-07 08:15:56.135663 | instance | Downloading pydantic-core
2026-04-07 08:15:56.177630 | instance | Downloading netaddr
2026-04-07 08:15:56.189917 | instance | Downloading pygments
2026-04-07 08:15:56.201762 | instance | Downloading cryptography
2026-04-07 08:15:56.248245 | instance | Downloading setuptools
2026-04-07 08:15:56.314142 | instance | Downloading kubernetes
2026-04-07 08:15:56.354078 | instance | Downloading ansible-core
2026-04-07 08:15:56.394022 | instance | Downloading openstacksdk
2026-04-07 08:15:56.751337 | instance | Built pyperclip==1.9.0
2026-04-07 08:15:56.927883 | instance | Built atmosphere @ file:///home/zuul/src/github.com/vexxhost/atmosphere
2026-04-07 08:15:56.967747 | instance | Installed 83 packages in 37ms
2026-04-07 08:15:57.586369 | instance | WARNING Molecule scenarios should migrate to 'extensions/molecule'
2026-04-07 08:15:58.276811 | instance | INFO [aio > discovery] scenario test matrix: dependency, cleanup, destroy, syntax, create, prepare, converge, idempotence, side_effect, verify, cleanup, destroy
2026-04-07 08:15:58.276893 | instance | INFO [aio > prerun] Performing prerun with role_name_check=0...
2026-04-07 08:17:01.525717 | instance | INFO [aio > dependency] Executing
2026-04-07 08:17:01.526014 | instance | WARNING [aio > dependency] Missing roles requirements file: requirements.yml
2026-04-07 08:17:01.526259 | instance | WARNING [aio > dependency] Missing collections requirements file: collections.yml
2026-04-07 08:17:01.526450 | instance | WARNING [aio > dependency] Executed: 2 missing (Remove from test_sequence to suppress)
2026-04-07 08:17:01.536380 | instance | INFO [aio > cleanup] Executing
2026-04-07 08:17:01.536778 | instance | WARNING [aio > cleanup] Executed: Missing playbook (Remove from test_sequence to suppress)
2026-04-07 08:17:01.546490 | instance | INFO [aio > destroy] Executing
2026-04-07 08:17:01.546637 | instance | WARNING [aio > destroy] Skipping, '--destroy=never' requested.
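The scenario test matrix above is walked step by step, and steps without a backing playbook emit a warning rather than fail the run. A minimal Python sketch of that dispatch behavior, with illustrative names only (this is not molecule's actual code or API):

```python
# Hypothetical sketch of a molecule-style test sequence runner: each step in
# the matrix runs if its playbook is available; otherwise it logs the same
# kind of "Missing playbook (Remove from test_sequence to suppress)" warning
# seen in the log above instead of aborting the sequence.
TEST_SEQUENCE = ["dependency", "cleanup", "destroy", "syntax", "create",
                 "prepare", "converge", "idempotence", "side_effect",
                 "verify", "cleanup", "destroy"]

def run_sequence(available: set[str]) -> list[str]:
    log = []
    for step in TEST_SEQUENCE:
        if step in available:
            log.append(f"INFO [{step}] Executed: Successful")
        else:
            log.append(f"WARNING [{step}] Missing playbook "
                       "(Remove from test_sequence to suppress)")
    return log
```

In the run above, `cleanup` and `create` are the steps that report a missing playbook, which is why they warn but the job continues to `prepare` and `converge`.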
2026-04-07 08:17:01.546764 | instance | INFO [aio > destroy] Executed: Successful
2026-04-07 08:17:01.556306 | instance | INFO [aio > syntax] Executing
2026-04-07 08:17:04.378437 | instance |
2026-04-07 08:17:04.378861 | instance | playbook: /home/zuul/src/github.com/vexxhost/atmosphere/molecule/aio/converge.yml
2026-04-07 08:17:04.494845 | instance | INFO [aio > syntax] Executed: Successful
2026-04-07 08:17:04.508421 | instance | INFO [aio > create] Executing
2026-04-07 08:17:04.510283 | instance | WARNING [aio > create] Executed: Missing playbook (Remove from test_sequence to suppress)
2026-04-07 08:17:04.519207 | instance | INFO [aio > prepare] Executing
2026-04-07 08:17:05.348378 | instance |
2026-04-07 08:17:05.348666 | instance | PLAY [Prepare] *****************************************************************
2026-04-07 08:17:05.348877 | instance |
2026-04-07 08:17:05.349092 | instance | TASK [Gathering Facts] *********************************************************
2026-04-07 08:17:05.349303 | instance | Tuesday 07 April 2026 08:17:05 +0000 (0:00:00.033) 0:00:00.033 *********
2026-04-07 08:17:06.321542 | instance | ok: [instance]
2026-04-07 08:17:06.321592 | instance |
2026-04-07 08:17:06.321597 | instance | TASK [Configure short hostname] ************************************************
2026-04-07 08:17:06.321606 | instance | Tuesday 07 April 2026 08:17:06 +0000 (0:00:00.973) 0:00:01.006 *********
2026-04-07 08:17:07.135505 | instance | changed: [instance]
2026-04-07 08:17:07.135603 | instance |
2026-04-07 08:17:07.135736 | instance | TASK [Ensure hostname inside hosts file] ***************************************
2026-04-07 08:17:07.135866 | instance | Tuesday 07 April 2026 08:17:07 +0000 (0:00:00.316) 0:00:01.820 *********
2026-04-07 08:17:07.452845 | instance | changed: [instance]
2026-04-07 08:17:07.452916 | instance |
2026-04-07 08:17:07.453114 | instance | TASK [Install "dirmngr" for GPG keyserver operations] **************************
2026-04-07 08:17:07.453287 | instance | Tuesday 07 April 2026 08:17:07 +0000 (0:00:00.316) 0:00:02.137 *********
2026-04-07 08:17:09.069481 | instance | ok: [instance]
2026-04-07 08:17:09.069698 | instance |
2026-04-07 08:17:09.069975 | instance | TASK [Purge "snapd" package] ***************************************************
2026-04-07 08:17:09.070254 | instance | Tuesday 07 April 2026 08:17:09 +0000 (0:00:01.616) 0:00:03.754 *********
2026-04-07 08:17:10.123192 | instance | ok: [instance]
2026-04-07 08:17:10.123439 | instance |
2026-04-07 08:17:10.123750 | instance | PLAY [Generate workspace for Atmosphere] ***************************************
2026-04-07 08:17:10.124002 | instance |
2026-04-07 08:17:10.124340 | instance | TASK [Create folders for workspace] ********************************************
2026-04-07 08:17:10.124636 | instance | Tuesday 07 April 2026 08:17:10 +0000 (0:00:01.053) 0:00:04.808 *********
2026-04-07 08:17:11.352177 | instance | ok: [localhost] => (item=group_vars)
2026-04-07 08:17:11.352427 | instance | ok: [localhost] => (item=group_vars/all)
2026-04-07 08:17:11.352709 | instance | changed: [localhost] => (item=group_vars/controllers)
2026-04-07 08:17:11.353058 | instance | changed: [localhost] => (item=group_vars/cephs)
2026-04-07 08:17:11.353378 | instance | changed: [localhost] => (item=group_vars/computes)
2026-04-07 08:17:11.353733 | instance | ok: [localhost] => (item=host_vars)
2026-04-07 08:17:11.354003 | instance |
2026-04-07 08:17:11.354198 | instance | PLAY [Generate Ceph control plane configuration for workspace] *****************
2026-04-07 08:17:11.354382 | instance |
2026-04-07 08:17:11.354553 | instance | TASK [Ensure the Ceph control plane configuration file exists] *****************
2026-04-07 08:17:11.354723 | instance | Tuesday 07 April 2026 08:17:11 +0000 (0:00:01.228) 0:00:06.036 *********
2026-04-07 08:17:11.604130 | instance | changed: [localhost]
2026-04-07 08:17:11.604366 | instance |
2026-04-07 08:17:11.604382 | instance | TASK [Load the current Ceph control plane configuration into a variable] *******
2026-04-07 08:17:11.604551 | instance | Tuesday 07 April 2026 08:17:11 +0000 (0:00:00.252) 0:00:06.289 *********
2026-04-07 08:17:11.655581 | instance | ok: [localhost]
2026-04-07 08:17:11.655842 | instance |
2026-04-07 08:17:11.656144 | instance | TASK [Generate Ceph control plane values for missing variables] ****************
2026-04-07 08:17:11.656438 | instance | Tuesday 07 April 2026 08:17:11 +0000 (0:00:00.051) 0:00:06.340 *********
2026-04-07 08:17:11.724326 | instance | ok: [localhost] => (item={'key': 'ceph_fsid', 'value': '1a18391a-62f4-5563-9e14-5d12dccfa845'})
2026-04-07 08:17:11.724517 | instance | ok: [localhost] => (item={'key': 'ceph_mon_public_network', 'value': '10.96.240.0/24'})
2026-04-07 08:17:11.724784 | instance |
2026-04-07 08:17:11.725054 | instance | TASK [Write new Ceph control plane configuration file to disk] *****************
2026-04-07 08:17:11.725323 | instance | Tuesday 07 April 2026 08:17:11 +0000 (0:00:00.069) 0:00:06.409 *********
2026-04-07 08:17:12.295179 | instance | changed: [localhost]
2026-04-07 08:17:12.295400 | instance |
2026-04-07 08:17:12.295687 | instance | PLAY [Generate Ceph OSD configuration for workspace] ***************************
2026-04-07 08:17:12.295932 | instance |
2026-04-07 08:17:12.296214 | instance | TASK [Ensure the Ceph OSDs configuration file exists] **************************
2026-04-07 08:17:12.296478 | instance | Tuesday 07 April 2026 08:17:12 +0000 (0:00:00.570) 0:00:06.980 *********
2026-04-07 08:17:12.535879 | instance | changed: [localhost]
2026-04-07 08:17:12.536138 | instance |
2026-04-07 08:17:12.536445 | instance | TASK [Load the current Ceph OSDs configuration into a variable] ****************
2026-04-07 08:17:12.536744 | instance | Tuesday 07 April 2026 08:17:12 +0000 (0:00:00.240) 0:00:07.220 *********
2026-04-07 08:17:12.569379 | instance | ok: [localhost]
2026-04-07 08:17:12.569657 | instance |
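Each of the workspace plays in this run follows the same cycle: ensure the configuration file exists, load it into a variable, generate values only for keys that are missing, then write the merged result back to disk. A minimal Python sketch of that fill-in-missing pattern, with illustrative names (not the playbook's actual implementation):

```python
import secrets

def fill_missing(config: dict, required: list[str]) -> dict:
    # For every required key not already present, generate a random value;
    # keys that already exist keep their stored value, mirroring how the
    # "Generate ... values for missing variables" tasks report 'ok' with
    # either the existing or the freshly generated entry.
    for key in required:
        config.setdefault(key, secrets.token_hex(16))
    return config
```

This is why rerunning the playbook against an existing workspace is stable: values such as `ceph_fsid` are generated once and then reloaded unchanged on every subsequent run.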
2026-04-07 08:17:12.569952 | instance | TASK [Generate Ceph OSDs values for missing variables] *************************
2026-04-07 08:17:12.570246 | instance | Tuesday 07 April 2026 08:17:12 +0000 (0:00:00.033) 0:00:07.254 *********
2026-04-07 08:17:12.606365 | instance | ok: [localhost] => (item={'key': 'ceph_osd_devices', 'value': ['/dev/vdb', '/dev/vdc', '/dev/vdd']})
2026-04-07 08:17:12.606713 | instance |
2026-04-07 08:17:12.607009 | instance | TASK [Write new Ceph OSDs configuration file to disk] **************************
2026-04-07 08:17:12.607291 | instance | Tuesday 07 April 2026 08:17:12 +0000 (0:00:00.036) 0:00:07.291 *********
2026-04-07 08:17:12.995272 | instance | changed: [localhost]
2026-04-07 08:17:12.995525 | instance |
2026-04-07 08:17:12.995848 | instance | PLAY [Generate Kubernetes configuration for workspace] *************************
2026-04-07 08:17:12.996139 | instance |
2026-04-07 08:17:12.996496 | instance | TASK [Ensure the Kubernetes configuration file exists] *************************
2026-04-07 08:17:12.996818 | instance | Tuesday 07 April 2026 08:17:12 +0000 (0:00:00.388) 0:00:07.680 *********
2026-04-07 08:17:13.185078 | instance | changed: [localhost]
2026-04-07 08:17:13.185207 | instance |
2026-04-07 08:17:13.185412 | instance | TASK [Load the current Kubernetes configuration into a variable] ***************
2026-04-07 08:17:13.185576 | instance | Tuesday 07 April 2026 08:17:13 +0000 (0:00:00.189) 0:00:07.869 *********
2026-04-07 08:17:13.215748 | instance | ok: [localhost]
2026-04-07 08:17:13.216003 | instance |
2026-04-07 08:17:13.216145 | instance | TASK [Generate Kubernetes values for missing variables] ************************
2026-04-07 08:17:13.216288 | instance | Tuesday 07 April 2026 08:17:13 +0000 (0:00:00.030) 0:00:07.900 *********
2026-04-07 08:17:13.250353 | instance | ok: [localhost] => (item={'key': 'kubernetes_hostname', 'value': '10.96.240.10'})
2026-04-07 08:17:13.250456 | instance | ok: [localhost] => (item={'key': 'kubernetes_keepalived_vrid', 'value': 42})
2026-04-07 08:17:13.250607 | instance | ok: [localhost] => (item={'key': 'kubernetes_keepalived_vip', 'value': '10.96.240.10'})
2026-04-07 08:17:13.250733 | instance |
2026-04-07 08:17:13.250898 | instance | TASK [Write new Kubernetes configuration file to disk] *************************
2026-04-07 08:17:13.251108 | instance | Tuesday 07 April 2026 08:17:13 +0000 (0:00:00.034) 0:00:07.935 *********
2026-04-07 08:17:13.606913 | instance | changed: [localhost]
2026-04-07 08:17:13.607161 | instance |
2026-04-07 08:17:13.607533 | instance | PLAY [Generate Keepalived configuration for workspace] *************************
2026-04-07 08:17:13.607802 | instance |
2026-04-07 08:17:13.608096 | instance | TASK [Ensure the Keepalived configuration file exists] *************************
2026-04-07 08:17:13.608394 | instance | Tuesday 07 April 2026 08:17:13 +0000 (0:00:00.356) 0:00:08.291 *********
2026-04-07 08:17:13.800937 | instance | changed: [localhost]
2026-04-07 08:17:13.801048 | instance |
2026-04-07 08:17:13.801231 | instance | TASK [Load the current Keepalived configuration into a variable] ***************
2026-04-07 08:17:13.801415 | instance | Tuesday 07 April 2026 08:17:13 +0000 (0:00:00.193) 0:00:08.485 *********
2026-04-07 08:17:13.828971 | instance | ok: [localhost]
2026-04-07 08:17:13.829244 | instance |
2026-04-07 08:17:13.829575 | instance | TASK [Generate Keepalived values for missing variables] ************************
2026-04-07 08:17:13.829856 | instance | Tuesday 07 April 2026 08:17:13 +0000 (0:00:00.027) 0:00:08.513 *********
2026-04-07 08:17:13.861973 | instance | ok: [localhost] => (item={'key': 'keepalived_interface', 'value': 'br-ex'})
2026-04-07 08:17:13.862236 | instance | ok: [localhost] => (item={'key': 'keepalived_vip', 'value': '10.96.250.10'})
2026-04-07 08:17:13.862529 | instance |
2026-04-07 08:17:13.862815 | instance | TASK [Write new Keepalived configuration file to disk] *************************
2026-04-07 08:17:13.863162 | instance | Tuesday 07 April 2026 08:17:13 +0000 (0:00:00.033) 0:00:08.547 *********
2026-04-07 08:17:14.255615 | instance | changed: [localhost]
2026-04-07 08:17:14.255986 | instance |
2026-04-07 08:17:14.256362 | instance | PLAY [Generate endpoints for workspace] ****************************************
2026-04-07 08:17:14.256682 | instance |
2026-04-07 08:17:14.257028 | instance | TASK [Gathering Facts] *********************************************************
2026-04-07 08:17:14.257369 | instance | Tuesday 07 April 2026 08:17:14 +0000 (0:00:00.393) 0:00:08.940 *********
2026-04-07 08:17:14.959905 | instance | ok: [localhost]
2026-04-07 08:17:14.959974 | instance |
2026-04-07 08:17:14.959986 | instance | TASK [Ensure the endpoints file exists] ****************************************
2026-04-07 08:17:14.960119 | instance | Tuesday 07 April 2026 08:17:14 +0000 (0:00:00.702) 0:00:09.643 *********
2026-04-07 08:17:15.162515 | instance | changed: [localhost]
2026-04-07 08:17:15.162606 | instance |
2026-04-07 08:17:15.162723 | instance | TASK [Load the current endpoints into a variable] ******************************
2026-04-07 08:17:15.162844 | instance | Tuesday 07 April 2026 08:17:15 +0000 (0:00:00.204) 0:00:09.847 *********
2026-04-07 08:17:15.198662 | instance | ok: [localhost]
2026-04-07 08:17:15.198878 | instance |
2026-04-07 08:17:15.199146 | instance | TASK [Generate endpoint skeleton for missing variables] ************************
2026-04-07 08:17:15.199424 | instance | Tuesday 07 April 2026 08:17:15 +0000 (0:00:00.035) 0:00:09.883 *********
2026-04-07 08:17:15.988464 | instance | ok: [localhost] => (item=keycloak_host)
2026-04-07 08:17:15.988719 | instance | ok: [localhost] => (item=kube_prometheus_stack_grafana_host)
2026-04-07 08:17:15.989007 | instance | ok: [localhost] => (item=kube_prometheus_stack_alertmanager_host)
2026-04-07 08:17:15.989344 | instance | ok: [localhost] => (item=kube_prometheus_stack_prometheus_host)
2026-04-07 08:17:15.989620 | instance | ok: [localhost] => (item=openstack_helm_endpoints_region_name)
2026-04-07 08:17:15.989897 | instance | ok: [localhost] => (item=openstack_helm_endpoints_keystone_api_host)
2026-04-07 08:17:15.990239 | instance | ok: [localhost] => (item=openstack_helm_endpoints_glance_api_host)
2026-04-07 08:17:15.990619 | instance | ok: [localhost] => (item=openstack_helm_endpoints_cinder_api_host)
2026-04-07 08:17:15.990917 | instance | ok: [localhost] => (item=openstack_helm_endpoints_placement_api_host)
2026-04-07 08:17:15.991215 | instance | ok: [localhost] => (item=openstack_helm_endpoints_barbican_api_host)
2026-04-07 08:17:15.991530 | instance | ok: [localhost] => (item=openstack_helm_endpoints_neutron_api_host)
2026-04-07 08:17:15.991815 | instance | ok: [localhost] => (item=openstack_helm_endpoints_nova_api_host)
2026-04-07 08:17:15.992105 | instance | ok: [localhost] => (item=openstack_helm_endpoints_nova_novnc_host)
2026-04-07 08:17:15.992403 | instance | ok: [localhost] => (item=openstack_helm_endpoints_ironic_api_host)
2026-04-07 08:17:15.992696 | instance | ok: [localhost] => (item=openstack_helm_endpoints_designate_api_host)
2026-04-07 08:17:15.992976 | instance | ok: [localhost] => (item=openstack_helm_endpoints_octavia_api_host)
2026-04-07 08:17:15.993309 | instance | ok: [localhost] => (item=openstack_helm_endpoints_magnum_api_host)
2026-04-07 08:17:15.993613 | instance | ok: [localhost] => (item=openstack_helm_endpoints_magnum_registry_host)
2026-04-07 08:17:15.993905 | instance | ok: [localhost] => (item=openstack_helm_endpoints_heat_api_host)
2026-04-07 08:17:15.994208 | instance | ok: [localhost] => (item=openstack_helm_endpoints_heat_cfn_api_host)
2026-04-07 08:17:15.994544 | instance | ok: [localhost] => (item=openstack_helm_endpoints_horizon_api_host)
2026-04-07 08:17:15.994844 | instance | ok: [localhost] => (item=openstack_helm_endpoints_rgw_host)
2026-04-07 08:17:15.995126 | instance | ok: [localhost] => (item=openstack_helm_endpoints_manila_api_host)
2026-04-07 08:17:15.995406 | instance |
2026-04-07 08:17:15.995705 | instance | TASK [Write new endpoints file to disk] ****************************************
2026-04-07 08:17:15.995998 | instance | Tuesday 07 April 2026 08:17:15 +0000 (0:00:00.789) 0:00:10.673 *********
2026-04-07 08:17:16.339829 | instance | changed: [localhost]
2026-04-07 08:17:16.340081 | instance |
2026-04-07 08:17:16.340369 | instance | TASK [Ensure the endpoints file exists] ****************************************
2026-04-07 08:17:16.340644 | instance | Tuesday 07 April 2026 08:17:16 +0000 (0:00:00.351) 0:00:11.024 *********
2026-04-07 08:17:16.543911 | instance | changed: [localhost]
2026-04-07 08:17:16.544154 | instance |
2026-04-07 08:17:16.544427 | instance | PLAY [Generate Neutron configuration for workspace] ****************************
2026-04-07 08:17:16.544690 | instance |
2026-04-07 08:17:16.544952 | instance | TASK [Ensure the Neutron configuration file exists] ****************************
2026-04-07 08:17:16.545226 | instance | Tuesday 07 April 2026 08:17:16 +0000 (0:00:00.204) 0:00:11.229 *********
2026-04-07 08:17:16.737990 | instance | changed: [localhost]
2026-04-07 08:17:16.738411 | instance |
2026-04-07 08:17:16.738802 | instance | TASK [Load the current Neutron configuration into a variable] ******************
2026-04-07 08:17:16.739191 | instance | Tuesday 07 April 2026 08:17:16 +0000 (0:00:00.193) 0:00:11.422 *********
2026-04-07 08:17:16.771558 | instance | ok: [localhost]
2026-04-07 08:17:16.771845 | instance |
2026-04-07 08:17:16.772154 | instance | TASK [Generate Neutron values for missing variables] ***************************
2026-04-07 08:17:16.772453 | instance | Tuesday 07 April 2026 08:17:16 +0000 (0:00:00.033) 0:00:11.456 *********
2026-04-07 08:17:16.811145 | instance | ok: [localhost] => (item={'key': 'neutron_networks', 'value': [{'name': 'public', 'external': True, 'shared': True, 'mtu_size': 1500, 'port_security_enabled': True, 'provider_network_type': 'flat', 'provider_physical_network': 'external', 'subnets': [{'name': 'public-subnet', 'cidr': '10.96.250.0/24', 'gateway_ip': '10.96.250.10', 'allocation_pool_start': '10.96.250.200', 'allocation_pool_end': '10.96.250.220', 'enable_dhcp': True}]}]})
2026-04-07 08:17:16.811301 | instance |
2026-04-07 08:17:16.811478 | instance | TASK [Write new Neutron configuration file to disk] ****************************
2026-04-07 08:17:16.811651 | instance | Tuesday 07 April 2026 08:17:16 +0000 (0:00:00.039) 0:00:11.496 *********
2026-04-07 08:17:17.182598 | instance | changed: [localhost]
2026-04-07 08:17:17.182857 | instance |
2026-04-07 08:17:17.183140 | instance | PLAY [Generate Nova configuration for workspace] *******************************
2026-04-07 08:17:17.183395 | instance |
2026-04-07 08:17:17.183674 | instance | TASK [Ensure the Nova configuration file exists] *******************************
2026-04-07 08:17:17.183941 | instance | Tuesday 07 April 2026 08:17:17 +0000 (0:00:00.371) 0:00:11.867 *********
2026-04-07 08:17:17.375511 | instance | changed: [localhost]
2026-04-07 08:17:17.375766 | instance |
2026-04-07 08:17:17.376046 | instance | TASK [Load the current Nova configuration into a variable] *********************
2026-04-07 08:17:17.376341 | instance | Tuesday 07 April 2026 08:17:17 +0000 (0:00:00.192) 0:00:12.060 *********
2026-04-07 08:17:17.409955 | instance | ok: [localhost]
2026-04-07 08:17:17.410250 | instance |
2026-04-07 08:17:17.410571 | instance | TASK [Generate Nova values for missing variables] ******************************
2026-04-07 08:17:17.410848 | instance | Tuesday 07 April 2026 08:17:17 +0000 (0:00:00.034) 0:00:12.095 *********
2026-04-07 08:17:17.447125 | instance | ok: [localhost] => (item={'key': 'nova_flavors', 'value': [{'name': 'm1.tiny', 'ram': 512, 'disk': 1, 'vcpus': 1}, {'name': 'm1.small', 'ram': 2048, 'disk': 20, 'vcpus': 1}, {'name': 'm1.medium', 'ram': 4096, 'disk': 40, 'vcpus': 2}, {'name': 'm1.large', 'ram': 8192, 'disk': 80, 'vcpus': 4}, {'name': 'm1.xlarge', 'ram': 16384, 'disk': 160, 'vcpus': 8}]})
2026-04-07 08:17:17.447351 | instance |
2026-04-07 08:17:17.447593 | instance | TASK [Write new Nova configuration file to disk] *******************************
2026-04-07 08:17:17.447833 | instance | Tuesday 07 April 2026 08:17:17 +0000 (0:00:00.036) 0:00:12.132 *********
2026-04-07 08:17:17.805469 | instance | changed: [localhost]
2026-04-07 08:17:17.805710 | instance |
2026-04-07 08:17:17.805989 | instance | PLAY [Generate secrets for workspace] ******************************************
2026-04-07 08:17:17.806239 | instance |
2026-04-07 08:17:17.806611 | instance | TASK [Ensure the secrets file exists] ******************************************
2026-04-07 08:17:17.806895 | instance | Tuesday 07 April 2026 08:17:17 +0000 (0:00:00.358) 0:00:12.490 *********
2026-04-07 08:17:18.009752 | instance | changed: [localhost]
2026-04-07 08:17:18.009973 | instance |
2026-04-07 08:17:18.010270 | instance | TASK [Load the current secrets into a variable] ********************************
2026-04-07 08:17:18.010914 | instance | Tuesday 07 April 2026 08:17:18 +0000 (0:00:00.204) 0:00:12.694 *********
2026-04-07 08:17:18.042090 | instance | ok: [localhost]
2026-04-07 08:17:18.042369 | instance |
2026-04-07 08:17:18.042695 | instance | TASK [Generate secrets for missing variables] **********************************
2026-04-07 08:17:18.042991 | instance | Tuesday 07 April 2026 08:17:18 +0000 (0:00:00.032) 0:00:12.726 *********
2026-04-07 08:17:18.465956 | instance | ok: [localhost] => (item=heat_auth_encryption_key)
2026-04-07 08:17:18.466446 | instance | ok: [localhost] => (item=keepalived_password)
2026-04-07 08:17:18.466855 | instance | ok: [localhost] => (item=keycloak_admin_password)
2026-04-07 08:17:18.467280 | instance | ok: [localhost] => (item=keycloak_database_password)
2026-04-07 08:17:18.467672 | instance | ok: [localhost] => (item=keystone_keycloak_client_secret)
2026-04-07 08:17:18.468032 | instance | ok: [localhost] => (item=keystone_oidc_crypto_passphrase)
2026-04-07 08:17:18.468391 | instance | ok: [localhost] => (item=kube_prometheus_stack_grafana_admin_password)
2026-04-07 08:17:18.468744 | instance | ok: [localhost] => (item=octavia_heartbeat_key)
2026-04-07 08:17:18.469098 | instance | ok: [localhost] => (item=openstack_helm_endpoints_rabbitmq_admin_password)
2026-04-07 08:17:18.469535 | instance | ok: [localhost] => (item=openstack_helm_endpoints_memcached_secret_key)
2026-04-07 08:17:18.470025 | instance | ok: [localhost] => (item=openstack_helm_endpoints_keystone_admin_password)
2026-04-07 08:17:18.470558 | instance | ok: [localhost] => (item=openstack_helm_endpoints_keystone_mariadb_password)
2026-04-07 08:17:18.471089 | instance | ok: [localhost] => (item=openstack_helm_endpoints_keystone_rabbitmq_password)
2026-04-07 08:17:18.471572 | instance | ok: [localhost] => (item=openstack_helm_endpoints_glance_keystone_password)
2026-04-07 08:17:18.472024 | instance | ok: [localhost] => (item=openstack_helm_endpoints_glance_mariadb_password)
2026-04-07 08:17:18.472501 | instance | ok: [localhost] => (item=openstack_helm_endpoints_glance_rabbitmq_password)
2026-04-07 08:17:18.472963 | instance | ok: [localhost] => (item=openstack_helm_endpoints_cinder_keystone_password)
2026-04-07 08:17:18.473435 | instance | ok: [localhost] => (item=openstack_helm_endpoints_cinder_mariadb_password)
2026-04-07 08:17:18.473806 | instance | ok: [localhost] => (item=openstack_helm_endpoints_cinder_rabbitmq_password)
2026-04-07 08:17:18.474135 | instance | ok: [localhost] => (item=openstack_helm_endpoints_placement_keystone_password)
2026-04-07 08:17:18.474583 | instance | ok: [localhost] => (item=openstack_helm_endpoints_placement_mariadb_password)
2026-04-07 08:17:18.475099 | instance | ok: [localhost] =>
(item=openstack_helm_endpoints_barbican_keystone_password) 2026-04-07 08:17:18.475416 | instance | ok: [localhost] => (item=openstack_helm_endpoints_barbican_mariadb_password) 2026-04-07 08:17:18.475609 | instance | ok: [localhost] => (item=openstack_helm_endpoints_neutron_keystone_password) 2026-04-07 08:17:18.475765 | instance | ok: [localhost] => (item=openstack_helm_endpoints_neutron_mariadb_password) 2026-04-07 08:17:18.475914 | instance | ok: [localhost] => (item=openstack_helm_endpoints_neutron_rabbitmq_password) 2026-04-07 08:17:18.476063 | instance | ok: [localhost] => (item=openstack_helm_endpoints_neutron_metadata_secret) 2026-04-07 08:17:18.476211 | instance | ok: [localhost] => (item=openstack_helm_endpoints_nova_keystone_password) 2026-04-07 08:17:18.476359 | instance | ok: [localhost] => (item=openstack_helm_endpoints_nova_mariadb_password) 2026-04-07 08:17:18.476508 | instance | ok: [localhost] => (item=openstack_helm_endpoints_nova_rabbitmq_password) 2026-04-07 08:17:18.476657 | instance | ok: [localhost] => (item=openstack_helm_endpoints_ironic_keystone_password) 2026-04-07 08:17:18.476806 | instance | ok: [localhost] => (item=openstack_helm_endpoints_ironic_mariadb_password) 2026-04-07 08:17:18.476954 | instance | ok: [localhost] => (item=openstack_helm_endpoints_ironic_rabbitmq_password) 2026-04-07 08:17:18.477144 | instance | ok: [localhost] => (item=openstack_helm_endpoints_designate_keystone_password) 2026-04-07 08:17:18.477339 | instance | ok: [localhost] => (item=openstack_helm_endpoints_designate_mariadb_password) 2026-04-07 08:17:18.477499 | instance | ok: [localhost] => (item=openstack_helm_endpoints_designate_rabbitmq_password) 2026-04-07 08:17:18.477648 | instance | ok: [localhost] => (item=openstack_helm_endpoints_octavia_keystone_password) 2026-04-07 08:17:18.477797 | instance | ok: [localhost] => (item=openstack_helm_endpoints_octavia_mariadb_password) 2026-04-07 08:17:18.477971 | instance | ok: [localhost] => 
(item=openstack_helm_endpoints_octavia_rabbitmq_password) 2026-04-07 08:17:18.478128 | instance | ok: [localhost] => (item=openstack_helm_endpoints_magnum_keystone_password) 2026-04-07 08:17:18.478312 | instance | ok: [localhost] => (item=openstack_helm_endpoints_magnum_mariadb_password) 2026-04-07 08:17:18.478478 | instance | ok: [localhost] => (item=openstack_helm_endpoints_magnum_rabbitmq_password) 2026-04-07 08:17:18.478634 | instance | ok: [localhost] => (item=openstack_helm_endpoints_heat_keystone_password) 2026-04-07 08:17:18.478833 | instance | ok: [localhost] => (item=openstack_helm_endpoints_heat_trustee_keystone_password) 2026-04-07 08:17:18.479021 | instance | ok: [localhost] => (item=openstack_helm_endpoints_heat_stack_user_keystone_password) 2026-04-07 08:17:18.479185 | instance | ok: [localhost] => (item=openstack_helm_endpoints_heat_mariadb_password) 2026-04-07 08:17:18.479392 | instance | ok: [localhost] => (item=openstack_helm_endpoints_heat_rabbitmq_password) 2026-04-07 08:17:18.479556 | instance | ok: [localhost] => (item=openstack_helm_endpoints_horizon_mariadb_password) 2026-04-07 08:17:18.479707 | instance | ok: [localhost] => (item=openstack_helm_endpoints_tempest_keystone_password) 2026-04-07 08:17:18.479860 | instance | ok: [localhost] => (item=openstack_helm_endpoints_openstack_exporter_keystone_password) 2026-04-07 08:17:18.480009 | instance | ok: [localhost] => (item=openstack_helm_endpoints_rgw_keystone_password) 2026-04-07 08:17:18.480158 | instance | ok: [localhost] => (item=openstack_helm_endpoints_manila_keystone_password) 2026-04-07 08:17:18.480306 | instance | ok: [localhost] => (item=openstack_helm_endpoints_manila_mariadb_password) 2026-04-07 08:17:18.480460 | instance | ok: [localhost] => (item=openstack_helm_endpoints_staffeln_mariadb_password) 2026-04-07 08:17:18.480601 | instance | 2026-04-07 08:17:18.480749 | instance | TASK [Generate base64 encoded secrets] ***************************************** 2026-04-07 
08:17:18.480921 | instance | Tuesday 07 April 2026 08:17:18 +0000 (0:00:00.423) 0:00:13.150 ********* 2026-04-07 08:17:18.514760 | instance | ok: [localhost] => (item=barbican_kek) 2026-04-07 08:17:18.515070 | instance | 2026-04-07 08:17:18.515316 | instance | TASK [Generate temporary files for generating keys for missing variables] ****** 2026-04-07 08:17:18.515492 | instance | Tuesday 07 April 2026 08:17:18 +0000 (0:00:00.046) 0:00:13.197 ********* 2026-04-07 08:17:18.966471 | instance | changed: [localhost] => (item=manila_ssh_key) 2026-04-07 08:17:18.966525 | instance | changed: [localhost] => (item=nova_ssh_key) 2026-04-07 08:17:18.966620 | instance | 2026-04-07 08:17:18.966799 | instance | TASK [Generate SSH keys for missing variables] ********************************* 2026-04-07 08:17:18.966979 | instance | Tuesday 07 April 2026 08:17:18 +0000 (0:00:00.453) 0:00:13.651 ********* 2026-04-07 08:17:23.111138 | instance | changed: [localhost] => (item=manila_ssh_key) 2026-04-07 08:17:23.111372 | instance | changed: [localhost] => (item=nova_ssh_key) 2026-04-07 08:17:23.111630 | instance | 2026-04-07 08:17:23.111905 | instance | TASK [Set values for SSH keys] ************************************************* 2026-04-07 08:17:23.112179 | instance | Tuesday 07 April 2026 08:17:23 +0000 (0:00:04.144) 0:00:17.795 ********* 2026-04-07 08:17:23.168821 | instance | ok: [localhost] => (item=manila_ssh_key) 2026-04-07 08:17:23.169100 | instance | ok: [localhost] => (item=nova_ssh_key) 2026-04-07 08:17:23.169366 | instance | 2026-04-07 08:17:23.169651 | instance | TASK [Delete the temporary files generated for SSH keys] *********************** 2026-04-07 08:17:23.169939 | instance | Tuesday 07 April 2026 08:17:23 +0000 (0:00:00.058) 0:00:17.853 ********* 2026-04-07 08:17:23.572652 | instance | changed: [localhost] => (item=manila_ssh_key) 2026-04-07 08:17:23.572886 | instance | changed: [localhost] => (item=nova_ssh_key) 2026-04-07 08:17:23.573139 | instance | 2026-04-07 
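The "Generate secrets for missing variables", "Generate base64 encoded secrets", and SSH-key tasks above only fill in values that are absent from the secrets file. A minimal sketch of that fill-if-missing pattern, assuming the secrets live in a plain dict (the helper names and lengths here are illustrative, not the actual Atmosphere lookup plugins):

```python
import base64
import os
import secrets
import string

# Alphanumeric alphabet, similar to what Ansible's password lookup produces.
ALPHABET = string.ascii_letters + string.digits

def random_password(length: int = 32) -> str:
    """Generate a random alphanumeric password."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

def ensure_secret(store: dict, key: str, length: int = 32) -> None:
    """Only generate a value when the key is missing, as the tasks above do."""
    if key not in store:
        store[key] = random_password(length)

def ensure_b64_secret(store: dict, key: str, nbytes: int = 32) -> None:
    """Keys such as barbican_kek are stored base64-encoded."""
    if key not in store:
        store[key] = base64.b64encode(os.urandom(nbytes)).decode()

store = {"keepalived_password": "already-set"}
ensure_secret(store, "keycloak_admin_password")
ensure_secret(store, "keepalived_password")   # no-op: value exists
ensure_b64_secret(store, "barbican_kek")
```

The same idempotency explains why every secret item reports `ok` rather than `changed` on reruns: an existing value is never regenerated.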
08:17:23.573430 | instance | TASK [Write new secrets file to disk] ****************************************** 2026-04-07 08:17:23.573749 | instance | Tuesday 07 April 2026 08:17:23 +0000 (0:00:00.403) 0:00:18.257 ********* 2026-04-07 08:17:23.929227 | instance | changed: [localhost] 2026-04-07 08:17:23.929476 | instance | 2026-04-07 08:17:23.929811 | instance | TASK [Encrypt secrets file with Vault password] ******************************** 2026-04-07 08:17:23.930138 | instance | Tuesday 07 April 2026 08:17:23 +0000 (0:00:00.355) 0:00:18.613 ********* 2026-04-07 08:17:23.964319 | instance | skipping: [localhost] 2026-04-07 08:17:23.964519 | instance | 2026-04-07 08:17:23.964856 | instance | PLAY [Setup networking] ******************************************************** 2026-04-07 08:17:23.965048 | instance | 2026-04-07 08:17:23.965305 | instance | TASK [Gathering Facts] ********************************************************* 2026-04-07 08:17:23.965577 | instance | Tuesday 07 April 2026 08:17:23 +0000 (0:00:00.035) 0:00:18.649 ********* 2026-04-07 08:17:24.660251 | instance | ok: [instance] 2026-04-07 08:17:24.660308 | instance | 2026-04-07 08:17:24.660315 | instance | TASK [Create bridge for management network] ************************************ 2026-04-07 08:17:24.660322 | instance | Tuesday 07 April 2026 08:17:24 +0000 (0:00:00.695) 0:00:19.344 ********* 2026-04-07 08:17:25.002002 | instance | ok: [instance] 2026-04-07 08:17:25.002354 | instance | 2026-04-07 08:17:25.002609 | instance | TASK [Create fake interface for management bridge] ***************************** 2026-04-07 08:17:25.002905 | instance | Tuesday 07 April 2026 08:17:24 +0000 (0:00:00.341) 0:00:19.686 ********* 2026-04-07 08:17:25.239855 | instance | ok: [instance] 2026-04-07 08:17:25.240086 | instance | 2026-04-07 08:17:25.240376 | instance | TASK [Assign dummy interface to management bridge] ***************************** 2026-04-07 08:17:25.240647 | instance | Tuesday 07 April 2026 08:17:25 
+0000 (0:00:00.237) 0:00:19.924 ********* 2026-04-07 08:17:25.474022 | instance | ok: [instance] 2026-04-07 08:17:25.474373 | instance | 2026-04-07 08:17:25.474765 | instance | TASK [Assign IP address for management bridge] ********************************* 2026-04-07 08:17:25.475098 | instance | Tuesday 07 April 2026 08:17:25 +0000 (0:00:00.234) 0:00:20.158 ********* 2026-04-07 08:17:25.696808 | instance | ok: [instance] 2026-04-07 08:17:25.697066 | instance | 2026-04-07 08:17:25.697400 | instance | TASK [Bring up interfaces] ***************************************************** 2026-04-07 08:17:25.697712 | instance | Tuesday 07 April 2026 08:17:25 +0000 (0:00:00.222) 0:00:20.381 ********* 2026-04-07 08:17:26.103536 | instance | ok: [instance] => (item=br-mgmt) 2026-04-07 08:17:26.103796 | instance | ok: [instance] => (item=dummy0) 2026-04-07 08:17:26.104110 | instance | 2026-04-07 08:17:26.104396 | instance | PLAY [Create devices for Ceph] ************************************************* 2026-04-07 08:17:26.104647 | instance | 2026-04-07 08:17:26.104918 | instance | TASK [Gathering Facts] ********************************************************* 2026-04-07 08:17:26.105224 | instance | Tuesday 07 April 2026 08:17:26 +0000 (0:00:00.406) 0:00:20.788 ********* 2026-04-07 08:17:26.838002 | instance | ok: [instance] 2026-04-07 08:17:26.838065 | instance | 2026-04-07 08:17:26.838077 | instance | TASK [Install dependencies] **************************************************** 2026-04-07 08:17:26.838087 | instance | Tuesday 07 April 2026 08:17:26 +0000 (0:00:00.733) 0:00:21.522 ********* 2026-04-07 08:17:49.206984 | instance | changed: [instance] 2026-04-07 08:17:49.207108 | instance | 2026-04-07 08:17:49.207495 | instance | TASK [Start up service] ******************************************************** 2026-04-07 08:17:49.207540 | instance | Tuesday 07 April 2026 08:17:49 +0000 (0:00:22.369) 0:00:43.891 ********* 2026-04-07 08:17:49.774125 | instance | ok: [instance] 
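The "Create devices for Ceph" play that starts above builds three loopback-backed OSDs (osd0..osd2): a sparse backing file per OSD, a loop device on each, then one LVM volume group and logical volume per device. A sketch of the backing-file step, using an assumed size and a scratch directory (the loop-attach and LVM steps need root, so they appear only as comments):

```python
import os
import tempfile

# Sparse backing files for three OSDs, named as in the log (osd0..osd2).
workdir = tempfile.mkdtemp()
size = 64 * 1024 * 1024  # illustrative size; the real OSD size is not shown in the log

paths = []
for name in ("osd0", "osd1", "osd2"):
    path = os.path.join(workdir, name)
    with open(path, "wb") as f:
        f.truncate(size)  # allocates a sparse file; no blocks are actually written
    paths.append(path)

# The play would then, roughly and as root, for each backing file:
#   losetup --find --show <backing-file>        -> /dev/loopN  ("Start loop devices")
#   vgcreate ceph-<host>-osdX /dev/loopN        ("Create a volume group ...")
#   lvcreate -l 100%FREE -n osd ceph-<host>-osdX ("Create a logical volume ...")
```

Sparse files make the "Fail if there is any existing loopback devices" guard earlier in the play important: a leftover `/dev/loopN` from a previous run would collide with the fresh `losetup` calls.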
2026-04-07 08:17:49.774492 | instance | 2026-04-07 08:17:49.774560 | instance | TASK [Generate lvm.conf] ******************************************************* 2026-04-07 08:17:49.774769 | instance | Tuesday 07 April 2026 08:17:49 +0000 (0:00:00.567) 0:00:44.459 ********* 2026-04-07 08:17:50.003077 | instance | ok: [instance] 2026-04-07 08:17:50.003173 | instance | 2026-04-07 08:17:50.003655 | instance | TASK [Write /etc/lvm/lvm.conf] ************************************************* 2026-04-07 08:17:50.003845 | instance | Tuesday 07 April 2026 08:17:49 +0000 (0:00:00.228) 0:00:44.688 ********* 2026-04-07 08:17:50.378956 | instance | changed: [instance] 2026-04-07 08:17:50.379495 | instance | 2026-04-07 08:17:50.379526 | instance | TASK [Get list of all loopback devices] **************************************** 2026-04-07 08:17:50.379534 | instance | Tuesday 07 April 2026 08:17:50 +0000 (0:00:00.375) 0:00:45.064 ********* 2026-04-07 08:17:50.628446 | instance | ok: [instance] 2026-04-07 08:17:50.628980 | instance | 2026-04-07 08:17:50.629015 | instance | TASK [Fail if there is any existing loopback devices] ************************** 2026-04-07 08:17:50.629023 | instance | Tuesday 07 April 2026 08:17:50 +0000 (0:00:00.249) 0:00:45.313 ********* 2026-04-07 08:17:50.653671 | instance | skipping: [instance] 2026-04-07 08:17:50.654224 | instance | 2026-04-07 08:17:50.654269 | instance | TASK [Create devices for Ceph] ************************************************* 2026-04-07 08:17:50.654292 | instance | Tuesday 07 April 2026 08:17:50 +0000 (0:00:00.025) 0:00:45.338 ********* 2026-04-07 08:17:51.236026 | instance | changed: [instance] => (item=osd0) 2026-04-07 08:17:51.236978 | instance | changed: [instance] => (item=osd1) 2026-04-07 08:17:51.237029 | instance | changed: [instance] => (item=osd2) 2026-04-07 08:17:51.237035 | instance | 2026-04-07 08:17:51.237039 | instance | TASK [Set permissions on loopback devices] ************************************* 2026-04-07 
08:17:51.237045 | instance | Tuesday 07 April 2026 08:17:51 +0000 (0:00:00.581) 0:00:45.920 ********* 2026-04-07 08:17:51.842868 | instance | changed: [instance] => (item=osd0) 2026-04-07 08:17:51.842947 | instance | changed: [instance] => (item=osd1) 2026-04-07 08:17:51.843390 | instance | changed: [instance] => (item=osd2) 2026-04-07 08:17:51.843428 | instance | 2026-04-07 08:17:51.843434 | instance | TASK [Start loop devices] ****************************************************** 2026-04-07 08:17:51.843439 | instance | Tuesday 07 April 2026 08:17:51 +0000 (0:00:00.607) 0:00:46.528 ********* 2026-04-07 08:17:52.547167 | instance | changed: [instance] => (item=osd0) 2026-04-07 08:17:52.547864 | instance | changed: [instance] => (item=osd1) 2026-04-07 08:17:52.548040 | instance | changed: [instance] => (item=osd2) 2026-04-07 08:17:52.548049 | instance | 2026-04-07 08:17:52.548055 | instance | TASK [Create a volume group for each loop device] ****************************** 2026-04-07 08:17:52.548062 | instance | Tuesday 07 April 2026 08:17:52 +0000 (0:00:00.704) 0:00:47.232 ********* 2026-04-07 08:17:55.554962 | instance | changed: [instance] => (item=osd0) 2026-04-07 08:17:55.555062 | instance | changed: [instance] => (item=osd1) 2026-04-07 08:17:55.555591 | instance | changed: [instance] => (item=osd2) 2026-04-07 08:17:55.555626 | instance | 2026-04-07 08:17:55.555632 | instance | TASK [Create a logical volume for each loop device] **************************** 2026-04-07 08:17:55.555637 | instance | Tuesday 07 April 2026 08:17:55 +0000 (0:00:03.007) 0:00:50.240 ********* 2026-04-07 08:17:57.578024 | instance | changed: [instance] => (item=ceph-instance-osd0) 2026-04-07 08:17:57.578141 | instance | changed: [instance] => (item=ceph-instance-osd1) 2026-04-07 08:17:57.578153 | instance | changed: [instance] => (item=ceph-instance-osd2) 2026-04-07 08:17:57.578424 | instance | 2026-04-07 08:17:57.578503 | instance | PLAY [controllers] 
************************************************************* 2026-04-07 08:17:57.578807 | instance | 2026-04-07 08:17:57.578843 | instance | TASK [Gathering Facts] ********************************************************* 2026-04-07 08:17:57.578850 | instance | Tuesday 07 April 2026 08:17:57 +0000 (0:00:02.022) 0:00:52.262 ********* 2026-04-07 08:17:58.491372 | instance | ok: [instance] 2026-04-07 08:17:58.491467 | instance | 2026-04-07 08:17:58.491994 | instance | TASK [Set masquerade rule] ***************************************************** 2026-04-07 08:17:58.492070 | instance | Tuesday 07 April 2026 08:17:58 +0000 (0:00:00.912) 0:00:53.175 ********* 2026-04-07 08:17:58.978022 | instance | changed: [instance] 2026-04-07 08:17:58.978357 | instance | 2026-04-07 08:17:58.978408 | instance | PLAY RECAP ********************************************************************* 2026-04-07 08:17:58.982398 | instance | instance : ok=24 changed=10 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 2026-04-07 08:17:58.982467 | instance | localhost : ok=40 changed=21 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 2026-04-07 08:17:58.982479 | instance | 2026-04-07 08:17:58.982490 | instance | Tuesday 07 April 2026 08:17:58 +0000 (0:00:00.487) 0:00:53.663 ********* 2026-04-07 08:17:58.982500 | instance | =============================================================================== 2026-04-07 08:17:58.982509 | instance | Install dependencies --------------------------------------------------- 22.37s 2026-04-07 08:17:58.982518 | instance | Generate SSH keys for missing variables --------------------------------- 4.14s 2026-04-07 08:17:58.982527 | instance | Create a volume group for each loop device ------------------------------ 3.01s 2026-04-07 08:17:58.982536 | instance | Create a logical volume for each loop device ---------------------------- 2.02s 2026-04-07 08:17:58.982545 | instance | Install "dirmngr" for GPG keyserver operations -------------------------- 
1.62s 2026-04-07 08:17:58.982553 | instance | Create folders for workspace -------------------------------------------- 1.23s 2026-04-07 08:17:58.982562 | instance | Purge "snapd" package --------------------------------------------------- 1.05s 2026-04-07 08:17:58.982571 | instance | Gathering Facts --------------------------------------------------------- 0.97s 2026-04-07 08:17:58.982614 | instance | Gathering Facts --------------------------------------------------------- 0.91s 2026-04-07 08:17:58.982624 | instance | Configure short hostname ------------------------------------------------ 0.81s 2026-04-07 08:17:58.982633 | instance | Generate endpoint skeleton for missing variables ------------------------ 0.79s 2026-04-07 08:17:58.982641 | instance | Gathering Facts --------------------------------------------------------- 0.73s 2026-04-07 08:17:58.982650 | instance | Start loop devices ------------------------------------------------------ 0.70s 2026-04-07 08:17:58.982659 | instance | Gathering Facts --------------------------------------------------------- 0.70s 2026-04-07 08:17:58.982668 | instance | Gathering Facts --------------------------------------------------------- 0.70s 2026-04-07 08:17:58.982676 | instance | Set permissions on loopback devices ------------------------------------- 0.61s 2026-04-07 08:17:58.982685 | instance | Create devices for Ceph ------------------------------------------------- 0.58s 2026-04-07 08:17:58.982694 | instance | Write new Ceph control plane configuration file to disk ----------------- 0.57s 2026-04-07 08:17:58.982704 | instance | Start up service -------------------------------------------------------- 0.57s 2026-04-07 08:17:58.982720 | instance | Set masquerade rule ----------------------------------------------------- 0.49s 2026-04-07 08:17:59.094474 | instance | INFO [aio > prepare] Executed: Successful 2026-04-07 08:17:59.106891 | instance | INFO [aio > converge] Executing 2026-04-07 08:18:01.766217 | instance | 
2026-04-07 08:18:01.766479 | instance | PLAY [all] ********************************************************************* 2026-04-07 08:18:01.766741 | instance | 2026-04-07 08:18:01.767133 | instance | TASK [Gathering Facts] ********************************************************* 2026-04-07 08:18:01.767408 | instance | Tuesday 07 April 2026 08:18:01 +0000 (0:00:00.020) 0:00:00.020 ********* 2026-04-07 08:18:03.037003 | instance | ok: [instance] 2026-04-07 08:18:03.037201 | instance | 2026-04-07 08:18:03.037495 | instance | TASK [Fail if atmosphere_ceph_enabled is set] ********************************** 2026-04-07 08:18:03.037787 | instance | Tuesday 07 April 2026 08:18:03 +0000 (0:00:01.270) 0:00:01.291 ********* 2026-04-07 08:18:03.079754 | instance | skipping: [instance] 2026-04-07 08:18:03.079934 | instance | 2026-04-07 08:18:03.080220 | instance | TASK [Set a fact with the "atmosphere_images" for other plays] ***************** 2026-04-07 08:18:03.080557 | instance | Tuesday 07 April 2026 08:18:03 +0000 (0:00:00.042) 0:00:01.334 ********* 2026-04-07 08:18:03.317075 | instance | ok: [instance] 2026-04-07 08:18:03.317231 | instance | 2026-04-07 08:18:03.317509 | instance | PLAY [Deploy Ceph monitors & managers] ***************************************** 2026-04-07 08:18:03.317761 | instance | 2026-04-07 08:18:03.318030 | instance | TASK [Gathering Facts] ********************************************************* 2026-04-07 08:18:03.318361 | instance | Tuesday 07 April 2026 08:18:03 +0000 (0:00:00.237) 0:00:01.571 ********* 2026-04-07 08:18:04.203879 | instance | ok: [instance] 2026-04-07 08:18:04.203942 | instance | 2026-04-07 08:18:04.203955 | instance | TASK [vexxhost.containers.forget_package : Forget package] ********************* 2026-04-07 08:18:04.203965 | instance | Tuesday 07 April 2026 08:18:04 +0000 (0:00:00.885) 0:00:02.457 ********* 2026-04-07 08:18:04.528521 | instance | ok: [instance] 2026-04-07 08:18:04.528607 | instance | 2026-04-07 08:18:04.528831 
| instance | TASK [vexxhost.containers.package : Update state for tar] ********************** 2026-04-07 08:18:04.528869 | instance | Tuesday 07 April 2026 08:18:04 +0000 (0:00:00.325) 0:00:02.783 ********* 2026-04-07 08:18:04.572192 | instance | skipping: [instance] 2026-04-07 08:18:04.572266 | instance | 2026-04-07 08:18:04.572520 | instance | TASK [vexxhost.containers.directory : Create directory (/var/lib/downloads)] *** 2026-04-07 08:18:04.572559 | instance | Tuesday 07 April 2026 08:18:04 +0000 (0:00:00.043) 0:00:02.827 ********* 2026-04-07 08:18:04.887532 | instance | changed: [instance] 2026-04-07 08:18:04.887584 | instance | 2026-04-07 08:18:04.887856 | instance | TASK [vexxhost.containers.download_artifact : Starting download of file] ******* 2026-04-07 08:18:04.887871 | instance | Tuesday 07 April 2026 08:18:04 +0000 (0:00:00.315) 0:00:03.142 ********* 2026-04-07 08:18:04.959537 | instance | ok: [instance] => { 2026-04-07 08:18:04.959671 | instance | "msg": "https://github.com/opencontainers/runc/releases/download/v1.4.0/runc.amd64" 2026-04-07 08:18:04.960169 | instance | } 2026-04-07 08:18:04.960184 | instance | 2026-04-07 08:18:04.960189 | instance | TASK [vexxhost.containers.download_artifact : Download item] ******************* 2026-04-07 08:18:04.960193 | instance | Tuesday 07 April 2026 08:18:04 +0000 (0:00:00.072) 0:00:03.214 ********* 2026-04-07 08:18:05.633317 | instance | changed: [instance] 2026-04-07 08:18:05.633396 | instance | 2026-04-07 08:18:05.633473 | instance | TASK [vexxhost.containers.download_artifact : Extract archive] ***************** 2026-04-07 08:18:05.633652 | instance | Tuesday 07 April 2026 08:18:05 +0000 (0:00:00.673) 0:00:03.887 ********* 2026-04-07 08:18:05.689076 | instance | skipping: [instance] 2026-04-07 08:18:05.689153 | instance | 2026-04-07 08:18:05.689292 | instance | TASK [vexxhost.containers.package : Update state for tar] ********************** 2026-04-07 08:18:05.689478 | instance | Tuesday 07 April 2026 
08:18:05 +0000 (0:00:00.056) 0:00:03.944 ********* 2026-04-07 08:18:05.742490 | instance | skipping: [instance] 2026-04-07 08:18:05.742555 | instance | 2026-04-07 08:18:05.742571 | instance | TASK [vexxhost.containers.forget_package : Forget package] ********************* 2026-04-07 08:18:05.742682 | instance | Tuesday 07 April 2026 08:18:05 +0000 (0:00:00.053) 0:00:03.997 ********* 2026-04-07 08:18:05.969888 | instance | ok: [instance] 2026-04-07 08:18:05.969984 | instance | 2026-04-07 08:18:05.970238 | instance | TASK [vexxhost.containers.package : Update state for tar] ********************** 2026-04-07 08:18:05.970298 | instance | Tuesday 07 April 2026 08:18:05 +0000 (0:00:00.227) 0:00:04.224 ********* 2026-04-07 08:18:07.432837 | instance | ok: [instance] 2026-04-07 08:18:07.433277 | instance | 2026-04-07 08:18:07.433319 | instance | TASK [vexxhost.containers.download_artifact : Starting download of file] ******* 2026-04-07 08:18:07.433327 | instance | Tuesday 07 April 2026 08:18:07 +0000 (0:00:01.462) 0:00:05.687 ********* 2026-04-07 08:18:07.505393 | instance | ok: [instance] => { 2026-04-07 08:18:07.505464 | instance | "msg": "https://github.com/containerd/containerd/releases/download/v2.2.0/containerd-2.2.0-linux-amd64.tar.gz" 2026-04-07 08:18:07.505951 | instance | } 2026-04-07 08:18:07.505991 | instance | 2026-04-07 08:18:07.505997 | instance | TASK [vexxhost.containers.download_artifact : Download item] ******************* 2026-04-07 08:18:07.506002 | instance | Tuesday 07 April 2026 08:18:07 +0000 (0:00:00.072) 0:00:05.760 ********* 2026-04-07 08:18:08.247436 | instance | changed: [instance] 2026-04-07 08:18:08.247516 | instance | 2026-04-07 08:18:08.247812 | instance | TASK [vexxhost.containers.download_artifact : Extract archive] ***************** 2026-04-07 08:18:08.247828 | instance | Tuesday 07 April 2026 08:18:08 +0000 (0:00:00.742) 0:00:06.502 ********* 2026-04-07 08:18:11.125298 | instance | changed: [instance] 2026-04-07 08:18:11.125510 | 
instance | 2026-04-07 08:18:11.125571 | instance | TASK [vexxhost.containers.containerd : Install SELinux packages] *************** 2026-04-07 08:18:11.125788 | instance | Tuesday 07 April 2026 08:18:11 +0000 (0:00:02.877) 0:00:09.379 ********* 2026-04-07 08:18:11.157933 | instance | skipping: [instance] 2026-04-07 08:18:11.158015 | instance | 2026-04-07 08:18:11.158251 | instance | TASK [vexxhost.containers.containerd : Set SELinux to permissive at runtime] *** 2026-04-07 08:18:11.158308 | instance | Tuesday 07 April 2026 08:18:11 +0000 (0:00:00.032) 0:00:09.412 ********* 2026-04-07 08:18:11.188453 | instance | skipping: [instance] 2026-04-07 08:18:11.188847 | instance | 2026-04-07 08:18:11.188890 | instance | TASK [vexxhost.containers.containerd : Persist SELinux permissive mode] ******** 2026-04-07 08:18:11.188896 | instance | Tuesday 07 April 2026 08:18:11 +0000 (0:00:00.030) 0:00:09.443 ********* 2026-04-07 08:18:11.231890 | instance | skipping: [instance] 2026-04-07 08:18:11.231975 | instance | 2026-04-07 08:18:11.232284 | instance | TASK [vexxhost.containers.containerd : Install AppArmor packages] ************** 2026-04-07 08:18:11.232350 | instance | Tuesday 07 April 2026 08:18:11 +0000 (0:00:00.042) 0:00:09.485 ********* 2026-04-07 08:18:17.670713 | instance | changed: [instance] 2026-04-07 08:18:17.671453 | instance | 2026-04-07 08:18:17.671628 | instance | TASK [vexxhost.containers.containerd : Create systemd service file for containerd] *** 2026-04-07 08:18:17.671830 | instance | Tuesday 07 April 2026 08:18:17 +0000 (0:00:06.434) 0:00:15.919 ********* 2026-04-07 08:18:18.191482 | instance | changed: [instance] 2026-04-07 08:18:18.191588 | instance | 2026-04-07 08:18:18.191679 | instance | TASK [vexxhost.containers.containerd : Create folders for configuration] ******* 2026-04-07 08:18:18.192050 | instance | Tuesday 07 April 2026 08:18:18 +0000 (0:00:00.526) 0:00:16.446 ********* 2026-04-07 08:18:19.260549 | instance | changed: [instance] => 
(item={'path': '/etc/containerd'}) 2026-04-07 08:18:19.260656 | instance | changed: [instance] => (item={'path': '/var/lib/containerd', 'mode': '0o700'}) 2026-04-07 08:18:19.261830 | instance | changed: [instance] => (item={'path': '/run/containerd', 'mode': '0o711'}) 2026-04-07 08:18:19.261897 | instance | changed: [instance] => (item={'path': '/run/containerd/io.containerd.grpc.v1.cri', 'mode': '0o700'}) 2026-04-07 08:18:19.261907 | instance | changed: [instance] => (item={'path': '/run/containerd/io.containerd.sandbox.controller.v1.shim', 'mode': '0o700'}) 2026-04-07 08:18:19.261914 | instance | 2026-04-07 08:18:19.261921 | instance | TASK [vexxhost.containers.containerd : Create containerd config file] ********** 2026-04-07 08:18:19.261928 | instance | Tuesday 07 April 2026 08:18:19 +0000 (0:00:01.068) 0:00:17.514 ********* 2026-04-07 08:18:19.803395 | instance | changed: [instance] 2026-04-07 08:18:19.803459 | instance | 2026-04-07 08:18:19.803466 | instance | TASK [vexxhost.containers.containerd : Force any restarts if necessary] ******** 2026-04-07 08:18:19.803471 | instance | Tuesday 07 April 2026 08:18:19 +0000 (0:00:00.533) 0:00:18.047 ********* 2026-04-07 08:18:19.803476 | instance | 2026-04-07 08:18:19.803480 | instance | RUNNING HANDLER [vexxhost.containers.containerd : Reload systemd] ************** 2026-04-07 08:18:19.803484 | instance | Tuesday 07 April 2026 08:18:19 +0000 (0:00:00.009) 0:00:18.057 ********* 2026-04-07 08:18:20.787599 | instance | ok: [instance] 2026-04-07 08:18:20.787709 | instance | 2026-04-07 08:18:20.787722 | instance | RUNNING HANDLER [vexxhost.containers.containerd : Restart containerd] ********** 2026-04-07 08:18:20.787937 | instance | Tuesday 07 April 2026 08:18:20 +0000 (0:00:00.985) 0:00:19.042 ********* 2026-04-07 08:18:21.312000 | instance | changed: [instance] 2026-04-07 08:18:21.312133 | instance | 2026-04-07 08:18:21.312146 | instance | TASK [vexxhost.containers.containerd : Enable and start service] *************** 
2026-04-07 08:18:21.312301 | instance | Tuesday 07 April 2026 08:18:21 +0000 (0:00:00.524) 0:00:19.566 ********* 2026-04-07 08:18:21.876362 | instance | changed: [instance] 2026-04-07 08:18:21.876469 | instance | 2026-04-07 08:18:21.876477 | instance | TASK [vexxhost.containers.forget_package : Forget package] ********************* 2026-04-07 08:18:21.876660 | instance | Tuesday 07 April 2026 08:18:21 +0000 (0:00:00.564) 0:00:20.131 ********* 2026-04-07 08:18:22.118184 | instance | ok: [instance] 2026-04-07 08:18:22.118267 | instance | 2026-04-07 08:18:22.118649 | instance | TASK [vexxhost.containers.download_artifact : Starting download of file] ******* 2026-04-07 08:18:22.118728 | instance | Tuesday 07 April 2026 08:18:22 +0000 (0:00:00.241) 0:00:20.372 ********* 2026-04-07 08:18:22.171630 | instance | ok: [instance] => { 2026-04-07 08:18:22.171752 | instance | "msg": "https://download.docker.com/linux/static/stable/x86_64/docker-24.0.9.tgz" 2026-04-07 08:18:22.171806 | instance | } 2026-04-07 08:18:22.172203 | instance | 2026-04-07 08:18:22.172261 | instance | TASK [vexxhost.containers.download_artifact : Download item] ******************* 2026-04-07 08:18:22.172268 | instance | Tuesday 07 April 2026 08:18:22 +0000 (0:00:00.053) 0:00:20.426 ********* 2026-04-07 08:18:23.115088 | instance | changed: [instance] 2026-04-07 08:18:23.115171 | instance | 2026-04-07 08:18:23.115611 | instance | TASK [vexxhost.containers.download_artifact : Extract archive] ***************** 2026-04-07 08:18:23.115675 | instance | Tuesday 07 April 2026 08:18:23 +0000 (0:00:00.943) 0:00:21.369 ********* 2026-04-07 08:18:27.530784 | instance | changed: [instance] 2026-04-07 08:18:27.530895 | instance | 2026-04-07 08:18:27.530962 | instance | TASK [vexxhost.containers.docker : Install AppArmor packages] ****************** 2026-04-07 08:18:27.531129 | instance | Tuesday 07 April 2026 08:18:27 +0000 (0:00:04.416) 0:00:25.785 ********* 2026-04-07 08:18:29.097150 | instance | ok: [instance] 
2026-04-07 08:18:29.097222 | instance | 2026-04-07 08:18:29.097525 | instance | TASK [vexxhost.containers.docker : Ensure group "docker" exists] *************** 2026-04-07 08:18:29.097574 | instance | Tuesday 07 April 2026 08:18:29 +0000 (0:00:01.566) 0:00:27.351 ********* 2026-04-07 08:18:29.694332 | instance | changed: [instance] 2026-04-07 08:18:29.694448 | instance | 2026-04-07 08:18:29.694857 | instance | TASK [vexxhost.containers.docker : Create systemd service file for docker] ***** 2026-04-07 08:18:29.695058 | instance | Tuesday 07 April 2026 08:18:29 +0000 (0:00:00.596) 0:00:27.948 ********* 2026-04-07 08:18:30.118504 | instance | changed: [instance] 2026-04-07 08:18:30.118706 | instance | 2026-04-07 08:18:30.118801 | instance | TASK [vexxhost.containers.docker : Create folders for configuration] *********** 2026-04-07 08:18:30.118983 | instance | Tuesday 07 April 2026 08:18:30 +0000 (0:00:00.424) 0:00:28.373 ********* 2026-04-07 08:18:30.748910 | instance | changed: [instance] => (item={'path': '/etc/docker'}) 2026-04-07 08:18:30.749005 | instance | changed: [instance] => (item={'path': '/var/lib/docker', 'mode': '0o710'}) 2026-04-07 08:18:30.749505 | instance | changed: [instance] => (item={'path': '/run/docker', 'mode': '0o711'}) 2026-04-07 08:18:30.749556 | instance | 2026-04-07 08:18:30.749562 | instance | TASK [vexxhost.containers.docker : Create systemd socket file for docker] ****** 2026-04-07 08:18:30.749567 | instance | Tuesday 07 April 2026 08:18:30 +0000 (0:00:00.630) 0:00:29.003 ********* 2026-04-07 08:18:31.123575 | instance | changed: [instance] 2026-04-07 08:18:31.123647 | instance | 2026-04-07 08:18:31.123937 | instance | TASK [vexxhost.containers.docker : Create docker daemon config file] *********** 2026-04-07 08:18:31.123988 | instance | Tuesday 07 April 2026 08:18:31 +0000 (0:00:00.374) 0:00:29.378 ********* 2026-04-07 08:18:31.544889 | instance | changed: [instance] 2026-04-07 08:18:31.544978 | instance | 2026-04-07 08:18:31.545590 | 
instance | TASK [vexxhost.containers.docker : Force any restarts if necessary] ************ 2026-04-07 08:18:31.545637 | instance | Tuesday 07 April 2026 08:18:31 +0000 (0:00:00.405) 0:00:29.783 ********* 2026-04-07 08:18:31.545642 | instance | 2026-04-07 08:18:31.545654 | instance | RUNNING HANDLER [vexxhost.containers.containerd : Reload systemd] ************** 2026-04-07 08:18:31.545659 | instance | Tuesday 07 April 2026 08:18:31 +0000 (0:00:00.015) 0:00:29.799 ********* 2026-04-07 08:18:32.284124 | instance | ok: [instance] 2026-04-07 08:18:32.284257 | instance | 2026-04-07 08:18:32.284269 | instance | RUNNING HANDLER [vexxhost.containers.docker : Restart docker] ****************** 2026-04-07 08:18:32.284280 | instance | Tuesday 07 April 2026 08:18:32 +0000 (0:00:00.738) 0:00:30.538 ********* 2026-04-07 08:18:34.172890 | instance | changed: [instance] 2026-04-07 08:18:34.172970 | instance | 2026-04-07 08:18:34.173259 | instance | TASK [vexxhost.containers.docker : Enable and start service] ******************* 2026-04-07 08:18:34.173306 | instance | Tuesday 07 April 2026 08:18:34 +0000 (0:00:01.889) 0:00:32.427 ********* 2026-04-07 08:18:34.773048 | instance | changed: [instance] 2026-04-07 08:18:34.773158 | instance | 2026-04-07 08:18:34.773386 | instance | TASK [vexxhost.ceph.cephadm : Gather variables for each operating system] ****** 2026-04-07 08:18:34.773594 | instance | Tuesday 07 April 2026 08:18:34 +0000 (0:00:00.600) 0:00:33.027 ********* 2026-04-07 08:18:34.831883 | instance | ok: [instance] => (item=/home/zuul/.ansible/collections/ansible_collections/vexxhost/ceph/roles/cephadm/vars/ubuntu-22.04.yml) 2026-04-07 08:18:34.831983 | instance | 2026-04-07 08:18:34.832047 | instance | TASK [vexxhost.ceph.cephadm : Install packages] ******************************** 2026-04-07 08:18:34.832195 | instance | Tuesday 07 April 2026 08:18:34 +0000 (0:00:00.058) 0:00:33.086 ********* 2026-04-07 08:18:40.718930 | instance | changed: [instance] 2026-04-07 
08:18:40.719019 | instance | 2026-04-07 08:18:40.719236 | instance | TASK [vexxhost.ceph.cephadm : Ensure services are started] ********************* 2026-04-07 08:18:40.719286 | instance | Tuesday 07 April 2026 08:18:40 +0000 (0:00:05.887) 0:00:38.973 ********* 2026-04-07 08:18:41.401363 | instance | ok: [instance] => (item=chronyd) 2026-04-07 08:18:41.401448 | instance | ok: [instance] => (item=sshd) 2026-04-07 08:18:41.401798 | instance | 2026-04-07 08:18:41.401992 | instance | TASK [vexxhost.ceph.cephadm : Download "cephadm"] ****************************** 2026-04-07 08:18:41.401999 | instance | Tuesday 07 April 2026 08:18:41 +0000 (0:00:00.682) 0:00:39.655 ********* 2026-04-07 08:18:42.125059 | instance | changed: [instance] 2026-04-07 08:18:42.125143 | instance | 2026-04-07 08:18:42.125379 | instance | TASK [vexxhost.ceph.cephadm : Remove cephadm from old path] ******************** 2026-04-07 08:18:42.125425 | instance | Tuesday 07 April 2026 08:18:42 +0000 (0:00:00.723) 0:00:40.379 ********* 2026-04-07 08:18:42.361772 | instance | ok: [instance] 2026-04-07 08:18:42.361908 | instance | 2026-04-07 08:18:42.362255 | instance | TASK [vexxhost.ceph.cephadm : Ensure "cephadm" user is present] **************** 2026-04-07 08:18:42.362422 | instance | Tuesday 07 April 2026 08:18:42 +0000 (0:00:00.236) 0:00:40.615 ********* 2026-04-07 08:18:42.922478 | instance | changed: [instance] 2026-04-07 08:18:42.922587 | instance | 2026-04-07 08:18:42.922792 | instance | TASK [vexxhost.ceph.cephadm : Allow "cephadm" user to have passwordless sudo] *** 2026-04-07 08:18:42.922840 | instance | Tuesday 07 April 2026 08:18:42 +0000 (0:00:00.560) 0:00:41.176 ********* 2026-04-07 08:18:43.359144 | instance | changed: [instance] 2026-04-07 08:18:43.359204 | instance | 2026-04-07 08:18:43.359438 | instance | TASK [vexxhost.ceph.mon : Get `cephadm ls` status] ***************************** 2026-04-07 08:18:43.359454 | instance | Tuesday 07 April 2026 08:18:43 +0000 (0:00:00.437) 
0:00:41.613 ********* 2026-04-07 08:18:44.996769 | instance | ok: [instance] 2026-04-07 08:18:44.996868 | instance | 2026-04-07 08:18:44.997315 | instance | TASK [vexxhost.ceph.mon : Parse the `cephadm ls` output] *********************** 2026-04-07 08:18:44.997394 | instance | Tuesday 07 April 2026 08:18:44 +0000 (0:00:01.636) 0:00:43.250 ********* 2026-04-07 08:18:45.052896 | instance | ok: [instance] 2026-04-07 08:18:45.053002 | instance | 2026-04-07 08:18:45.053017 | instance | TASK [vexxhost.ceph.mon : Assimilate existing configs in `ceph.conf`] ********** 2026-04-07 08:18:45.053202 | instance | Tuesday 07 April 2026 08:18:45 +0000 (0:00:00.056) 0:00:43.307 ********* 2026-04-07 08:18:45.093587 | instance | skipping: [instance] 2026-04-07 08:18:45.093676 | instance | 2026-04-07 08:18:45.093993 | instance | TASK [vexxhost.ceph.mon : Adopt monitor to cluster] **************************** 2026-04-07 08:18:45.094047 | instance | Tuesday 07 April 2026 08:18:45 +0000 (0:00:00.040) 0:00:43.347 ********* 2026-04-07 08:18:45.135919 | instance | skipping: [instance] 2026-04-07 08:18:45.136024 | instance | 2026-04-07 08:18:45.136082 | instance | TASK [vexxhost.ceph.mon : Adopt manager to cluster] **************************** 2026-04-07 08:18:45.136229 | instance | Tuesday 07 April 2026 08:18:45 +0000 (0:00:00.042) 0:00:43.390 ********* 2026-04-07 08:18:45.172873 | instance | skipping: [instance] 2026-04-07 08:18:45.172976 | instance | 2026-04-07 08:18:45.173203 | instance | TASK [vexxhost.ceph.mon : Enable "cephadm" mgr module] ************************* 2026-04-07 08:18:45.173252 | instance | Tuesday 07 April 2026 08:18:45 +0000 (0:00:00.037) 0:00:43.427 ********* 2026-04-07 08:18:45.213842 | instance | skipping: [instance] 2026-04-07 08:18:45.213945 | instance | 2026-04-07 08:18:45.214048 | instance | TASK [vexxhost.ceph.mon : Set orchestrator backend to "cephadm"] *************** 2026-04-07 08:18:45.214245 | instance | Tuesday 07 April 2026 08:18:45 +0000 (0:00:00.040) 
0:00:43.468 ********* 2026-04-07 08:18:45.253495 | instance | skipping: [instance] 2026-04-07 08:18:45.253579 | instance | 2026-04-07 08:18:45.253709 | instance | TASK [vexxhost.ceph.mon : Use `cephadm` user for cephadm] ********************** 2026-04-07 08:18:45.253905 | instance | Tuesday 07 April 2026 08:18:45 +0000 (0:00:00.039) 0:00:43.507 ********* 2026-04-07 08:18:45.292291 | instance | skipping: [instance] 2026-04-07 08:18:45.292366 | instance | 2026-04-07 08:18:45.292493 | instance | TASK [vexxhost.ceph.mon : Generate "cephadm" key] ****************************** 2026-04-07 08:18:45.292685 | instance | Tuesday 07 April 2026 08:18:45 +0000 (0:00:00.038) 0:00:43.546 ********* 2026-04-07 08:18:45.334996 | instance | skipping: [instance] 2026-04-07 08:18:45.335095 | instance | 2026-04-07 08:18:45.335216 | instance | TASK [vexxhost.ceph.mon : Set Ceph Monitor IP address] ************************* 2026-04-07 08:18:45.335397 | instance | Tuesday 07 April 2026 08:18:45 +0000 (0:00:00.040) 0:00:43.587 ********* 2026-04-07 08:18:45.455018 | instance | ok: [instance] 2026-04-07 08:18:45.455109 | instance | 2026-04-07 08:18:45.455175 | instance | TASK [vexxhost.ceph.mon : Check if any node is bootstrapped] ******************* 2026-04-07 08:18:45.455305 | instance | Tuesday 07 April 2026 08:18:45 +0000 (0:00:00.118) 0:00:43.705 ********* 2026-04-07 08:18:45.714886 | instance | ok: [instance] => (item=instance) 2026-04-07 08:18:45.715001 | instance | 2026-04-07 08:18:45.715292 | instance | TASK [vexxhost.ceph.mon : Select pre-existing bootstrap node if exists] ******** 2026-04-07 08:18:45.715341 | instance | Tuesday 07 April 2026 08:18:45 +0000 (0:00:00.263) 0:00:43.969 ********* 2026-04-07 08:18:45.770044 | instance | ok: [instance] 2026-04-07 08:18:45.770153 | instance | 2026-04-07 08:18:45.770505 | instance | TASK [vexxhost.ceph.mon : Bootstrap cluster] *********************************** 2026-04-07 08:18:45.770582 | instance | Tuesday 07 April 2026 08:18:45 +0000 
(0:00:00.054) 0:00:44.024 ********* 2026-04-07 08:18:45.845593 | instance | included: /home/zuul/.ansible/collections/ansible_collections/vexxhost/ceph/roles/mon/tasks/bootstrap-ceph.yml for instance 2026-04-07 08:18:45.845692 | instance | 2026-04-07 08:18:45.845705 | instance | TASK [vexxhost.ceph.mon : Generate temporary file for "ceph.conf"] ************* 2026-04-07 08:18:45.845855 | instance | Tuesday 07 April 2026 08:18:45 +0000 (0:00:00.075) 0:00:44.100 ********* 2026-04-07 08:18:46.155388 | instance | changed: [instance] 2026-04-07 08:18:46.156155 | instance | 2026-04-07 08:18:46.156205 | instance | TASK [vexxhost.ceph.mon : Include extra configuration values] ****************** 2026-04-07 08:18:46.156213 | instance | Tuesday 07 April 2026 08:18:46 +0000 (0:00:00.309) 0:00:44.409 ********* 2026-04-07 08:18:46.870773 | instance | changed: [instance] => (item={'section': 'global', 'option': 'mon allow pool size one', 'value': True}) 2026-04-07 08:18:46.870918 | instance | changed: [instance] => (item={'section': 'global', 'option': 'osd crush chooseleaf type', 'value': 0}) 2026-04-07 08:18:46.871629 | instance | changed: [instance] => (item={'section': 'mon', 'option': 'auth allow insecure global id reclaim', 'value': False}) 2026-04-07 08:18:46.871695 | instance | 2026-04-07 08:18:46.871704 | instance | TASK [vexxhost.ceph.mon : Run Bootstrap coomand] ******************************* 2026-04-07 08:18:46.871711 | instance | Tuesday 07 April 2026 08:18:46 +0000 (0:00:00.715) 0:00:45.124 ********* 2026-04-07 08:21:00.008692 | instance | ok: [instance] 2026-04-07 08:21:00.009715 | instance | 2026-04-07 08:21:00.009727 | instance | TASK [vexxhost.ceph.mon : Remove temporary file for "ceph.conf"] *************** 2026-04-07 08:21:00.009735 | instance | Tuesday 07 April 2026 08:21:00 +0000 (0:02:13.138) 0:02:58.263 ********* 2026-04-07 08:21:00.271067 | instance | changed: [instance] 2026-04-07 08:21:00.271149 | instance | 2026-04-07 08:21:00.271266 | instance | TASK 
[vexxhost.ceph.mon : Set bootstrap node] ********************************** 2026-04-07 08:21:00.271430 | instance | Tuesday 07 April 2026 08:21:00 +0000 (0:00:00.262) 0:02:58.525 ********* 2026-04-07 08:21:00.321161 | instance | ok: [instance] 2026-04-07 08:21:00.321558 | instance | 2026-04-07 08:21:00.321603 | instance | TASK [Install Ceph host] ******************************************************* 2026-04-07 08:21:00.321610 | instance | Tuesday 07 April 2026 08:21:00 +0000 (0:00:00.050) 0:02:58.575 ********* 2026-04-07 08:21:00.411729 | instance | included: vexxhost.ceph.cephadm_host for instance 2026-04-07 08:21:00.411818 | instance | 2026-04-07 08:21:00.412054 | instance | TASK [vexxhost.ceph.cephadm_host : Get public SSH key for "cephadm" user] ****** 2026-04-07 08:21:00.412097 | instance | Tuesday 07 April 2026 08:21:00 +0000 (0:00:00.090) 0:02:58.666 ********* 2026-04-07 08:21:02.139090 | instance | ok: [instance] 2026-04-07 08:21:02.139170 | instance | 2026-04-07 08:21:02.139486 | instance | TASK [vexxhost.ceph.cephadm_host : Set fact with public SSH key for "cephadm" user] *** 2026-04-07 08:21:02.139762 | instance | Tuesday 07 April 2026 08:21:02 +0000 (0:00:01.727) 0:03:00.393 ********* 2026-04-07 08:21:02.195071 | instance | ok: [instance] => (item=instance) 2026-04-07 08:21:02.195124 | instance | 2026-04-07 08:21:02.195130 | instance | TASK [vexxhost.ceph.cephadm_host : Set authorized key for "cephadm"] *********** 2026-04-07 08:21:02.195136 | instance | Tuesday 07 April 2026 08:21:02 +0000 (0:00:00.055) 0:03:00.449 ********* 2026-04-07 08:21:02.797131 | instance | ok: [instance] 2026-04-07 08:21:02.797239 | instance | 2026-04-07 08:21:02.797478 | instance | TASK [vexxhost.ceph.cephadm_host : Add new host to Ceph] *********************** 2026-04-07 08:21:02.797542 | instance | Tuesday 07 April 2026 08:21:02 +0000 (0:00:00.602) 0:03:01.051 ********* 2026-04-07 08:21:04.982904 | instance | ok: [instance] 2026-04-07 08:21:04.982975 | instance | 
2026-04-07 08:21:04.983330 | instance | TASK [vexxhost.ceph.mon : Configure "mon" label for monitors] ****************** 2026-04-07 08:21:04.983401 | instance | Tuesday 07 April 2026 08:21:04 +0000 (0:00:02.185) 0:03:03.237 ********* 2026-04-07 08:21:06.819446 | instance | ok: [instance] 2026-04-07 08:21:06.819592 | instance | 2026-04-07 08:21:06.819610 | instance | TASK [vexxhost.ceph.mon : Validate monitor exist] ****************************** 2026-04-07 08:21:06.819705 | instance | Tuesday 07 April 2026 08:21:06 +0000 (0:00:01.836) 0:03:05.073 ********* 2026-04-07 08:21:18.187459 | instance | ok: [instance] 2026-04-07 08:21:18.187556 | instance | 2026-04-07 08:21:18.187571 | instance | TASK [Install Ceph host] ******************************************************* 2026-04-07 08:21:18.187745 | instance | Tuesday 07 April 2026 08:21:18 +0000 (0:00:11.368) 0:03:16.442 ********* 2026-04-07 08:21:18.275318 | instance | included: vexxhost.ceph.cephadm_host for instance 2026-04-07 08:21:18.275412 | instance | 2026-04-07 08:21:18.275684 | instance | TASK [vexxhost.ceph.cephadm_host : Get public SSH key for "cephadm" user] ****** 2026-04-07 08:21:18.275735 | instance | Tuesday 07 April 2026 08:21:18 +0000 (0:00:00.087) 0:03:16.529 ********* 2026-04-07 08:21:18.326974 | instance | skipping: [instance] 2026-04-07 08:21:18.327064 | instance | 2026-04-07 08:21:18.327371 | instance | TASK [vexxhost.ceph.cephadm_host : Set fact with public SSH key for "cephadm" user] *** 2026-04-07 08:21:18.327420 | instance | Tuesday 07 April 2026 08:21:18 +0000 (0:00:00.051) 0:03:16.581 ********* 2026-04-07 08:21:18.379927 | instance | skipping: [instance] => (item=instance) 2026-04-07 08:21:18.380008 | instance | skipping: [instance] 2026-04-07 08:21:18.380354 | instance | 2026-04-07 08:21:18.380398 | instance | TASK [vexxhost.ceph.cephadm_host : Set authorized key for "cephadm"] *********** 2026-04-07 08:21:18.380405 | instance | Tuesday 07 April 2026 08:21:18 +0000 (0:00:00.053) 
0:03:16.634 ********* 2026-04-07 08:21:18.673473 | instance | ok: [instance] 2026-04-07 08:21:18.673553 | instance | 2026-04-07 08:21:18.673826 | instance | TASK [vexxhost.ceph.cephadm_host : Add new host to Ceph] *********************** 2026-04-07 08:21:18.673867 | instance | Tuesday 07 April 2026 08:21:18 +0000 (0:00:00.293) 0:03:16.928 ********* 2026-04-07 08:21:20.753755 | instance | ok: [instance] 2026-04-07 08:21:20.753842 | instance | 2026-04-07 08:21:20.754085 | instance | TASK [vexxhost.ceph.mgr : Configure "mgr" label for managers] ****************** 2026-04-07 08:21:20.754126 | instance | Tuesday 07 April 2026 08:21:20 +0000 (0:00:02.080) 0:03:19.008 ********* 2026-04-07 08:21:22.418656 | instance | ok: [instance] 2026-04-07 08:21:22.418790 | instance | 2026-04-07 08:21:22.418806 | instance | TASK [vexxhost.ceph.mgr : Validate manager exist] ****************************** 2026-04-07 08:21:22.418993 | instance | Tuesday 07 April 2026 08:21:22 +0000 (0:00:01.664) 0:03:20.673 ********* 2026-04-07 08:21:24.026940 | instance | ok: [instance] 2026-04-07 08:21:24.027083 | instance | 2026-04-07 08:21:24.027100 | instance | TASK [vexxhost.ceph.mgr : Enable the Ceph Manager prometheus module] *********** 2026-04-07 08:21:24.027186 | instance | Tuesday 07 April 2026 08:21:24 +0000 (0:00:01.608) 0:03:22.281 ********* 2026-04-07 08:21:27.435830 | instance | ok: [instance] 2026-04-07 08:21:27.435959 | instance | 2026-04-07 08:21:27.436023 | instance | PLAY [Deploy Ceph OSDs] ******************************************************** 2026-04-07 08:21:27.436143 | instance | 2026-04-07 08:21:27.436267 | instance | TASK [Gathering Facts] ********************************************************* 2026-04-07 08:21:27.436395 | instance | Tuesday 07 April 2026 08:21:27 +0000 (0:00:03.409) 0:03:25.690 ********* 2026-04-07 08:21:28.425224 | instance | ok: [instance] 2026-04-07 08:21:28.425305 | instance | 2026-04-07 08:21:28.425434 | instance | TASK 
[vexxhost.containers.forget_package : Forget package] ********************* 2026-04-07 08:21:28.425561 | instance | Tuesday 07 April 2026 08:21:28 +0000 (0:00:00.989) 0:03:26.679 ********* 2026-04-07 08:21:28.667879 | instance | ok: [instance] 2026-04-07 08:21:28.667994 | instance | 2026-04-07 08:21:28.668048 | instance | TASK [vexxhost.containers.package : Update state for tar] ********************** 2026-04-07 08:21:28.668176 | instance | Tuesday 07 April 2026 08:21:28 +0000 (0:00:00.242) 0:03:26.922 ********* 2026-04-07 08:21:28.716393 | instance | skipping: [instance] 2026-04-07 08:21:28.716503 | instance | 2026-04-07 08:21:28.716656 | instance | TASK [vexxhost.containers.directory : Create directory (/var/lib/downloads)] *** 2026-04-07 08:21:28.716768 | instance | Tuesday 07 April 2026 08:21:28 +0000 (0:00:00.048) 0:03:26.970 ********* 2026-04-07 08:21:28.967862 | instance | ok: [instance] 2026-04-07 08:21:28.968015 | instance | 2026-04-07 08:21:28.968033 | instance | TASK [vexxhost.containers.download_artifact : Starting download of file] ******* 2026-04-07 08:21:28.968159 | instance | Tuesday 07 April 2026 08:21:28 +0000 (0:00:00.251) 0:03:27.222 ********* 2026-04-07 08:21:29.028539 | instance | ok: [instance] => { 2026-04-07 08:21:29.028662 | instance | "msg": "https://github.com/opencontainers/runc/releases/download/v1.4.0/runc.amd64" 2026-04-07 08:21:29.028727 | instance | } 2026-04-07 08:21:29.028846 | instance | 2026-04-07 08:21:29.029012 | instance | TASK [vexxhost.containers.download_artifact : Download item] ******************* 2026-04-07 08:21:29.029145 | instance | Tuesday 07 April 2026 08:21:29 +0000 (0:00:00.060) 0:03:27.283 ********* 2026-04-07 08:21:29.380504 | instance | ok: [instance] 2026-04-07 08:21:29.380584 | instance | 2026-04-07 08:21:29.380708 | instance | TASK [vexxhost.containers.download_artifact : Extract archive] ***************** 2026-04-07 08:21:29.380838 | instance | Tuesday 07 April 2026 08:21:29 +0000 (0:00:00.351) 
0:03:27.635 ********* 2026-04-07 08:21:29.429908 | instance | skipping: [instance] 2026-04-07 08:21:29.429989 | instance | 2026-04-07 08:21:29.430099 | instance | TASK [vexxhost.containers.package : Update state for tar] ********************** 2026-04-07 08:21:29.430221 | instance | Tuesday 07 April 2026 08:21:29 +0000 (0:00:00.049) 0:03:27.684 ********* 2026-04-07 08:21:29.478368 | instance | skipping: [instance] 2026-04-07 08:21:29.478467 | instance | 2026-04-07 08:21:29.478576 | instance | TASK [vexxhost.containers.forget_package : Forget package] ********************* 2026-04-07 08:21:29.478690 | instance | Tuesday 07 April 2026 08:21:29 +0000 (0:00:00.048) 0:03:27.732 ********* 2026-04-07 08:21:29.718911 | instance | ok: [instance] 2026-04-07 08:21:29.718991 | instance | 2026-04-07 08:21:29.719108 | instance | TASK [vexxhost.containers.package : Update state for tar] ********************** 2026-04-07 08:21:29.719230 | instance | Tuesday 07 April 2026 08:21:29 +0000 (0:00:00.240) 0:03:27.973 ********* 2026-04-07 08:21:31.199372 | instance | ok: [instance] 2026-04-07 08:21:31.199480 | instance | 2026-04-07 08:21:31.199581 | instance | TASK [vexxhost.containers.download_artifact : Starting download of file] ******* 2026-04-07 08:21:31.199711 | instance | Tuesday 07 April 2026 08:21:31 +0000 (0:00:01.480) 0:03:29.453 ********* 2026-04-07 08:21:31.268288 | instance | ok: [instance] => { 2026-04-07 08:21:31.268404 | instance | "msg": "https://github.com/containerd/containerd/releases/download/v2.2.0/containerd-2.2.0-linux-amd64.tar.gz" 2026-04-07 08:21:31.268475 | instance | } 2026-04-07 08:21:31.268622 | instance | 2026-04-07 08:21:31.268761 | instance | TASK [vexxhost.containers.download_artifact : Download item] ******************* 2026-04-07 08:21:31.268893 | instance | Tuesday 07 April 2026 08:21:31 +0000 (0:00:00.068) 0:03:29.522 ********* 2026-04-07 08:21:31.672136 | instance | ok: [instance] 2026-04-07 08:21:31.672253 | instance | 2026-04-07 08:21:31.672318 
| instance | TASK [vexxhost.containers.download_artifact : Extract archive] ***************** 2026-04-07 08:21:31.672453 | instance | Tuesday 07 April 2026 08:21:31 +0000 (0:00:00.404) 0:03:29.926 ********* 2026-04-07 08:21:33.655870 | instance | ok: [instance] 2026-04-07 08:21:33.655954 | instance | 2026-04-07 08:21:33.656072 | instance | TASK [vexxhost.containers.containerd : Install SELinux packages] *************** 2026-04-07 08:21:33.656250 | instance | Tuesday 07 April 2026 08:21:33 +0000 (0:00:01.983) 0:03:31.910 ********* 2026-04-07 08:21:33.693195 | instance | skipping: [instance] 2026-04-07 08:21:33.693276 | instance | 2026-04-07 08:21:33.693400 | instance | TASK [vexxhost.containers.containerd : Set SELinux to permissive at runtime] *** 2026-04-07 08:21:33.693519 | instance | Tuesday 07 April 2026 08:21:33 +0000 (0:00:00.037) 0:03:31.948 ********* 2026-04-07 08:21:33.729589 | instance | skipping: [instance] 2026-04-07 08:21:33.729662 | instance | 2026-04-07 08:21:33.729783 | instance | TASK [vexxhost.containers.containerd : Persist SELinux permissive mode] ******** 2026-04-07 08:21:33.729901 | instance | Tuesday 07 April 2026 08:21:33 +0000 (0:00:00.036) 0:03:31.984 ********* 2026-04-07 08:21:33.762728 | instance | skipping: [instance] 2026-04-07 08:21:33.762851 | instance | 2026-04-07 08:21:33.762866 | instance | TASK [vexxhost.containers.containerd : Install AppArmor packages] ************** 2026-04-07 08:21:33.762996 | instance | Tuesday 07 April 2026 08:21:33 +0000 (0:00:00.033) 0:03:32.017 ********* 2026-04-07 08:21:35.334839 | instance | ok: [instance] 2026-04-07 08:21:35.334960 | instance | 2026-04-07 08:21:35.335014 | instance | TASK [vexxhost.containers.containerd : Create systemd service file for containerd] *** 2026-04-07 08:21:35.335147 | instance | Tuesday 07 April 2026 08:21:35 +0000 (0:00:01.571) 0:03:33.589 ********* 2026-04-07 08:21:35.748353 | instance | ok: [instance] 2026-04-07 08:21:35.748461 | instance | 2026-04-07 08:21:35.748534 | 
instance | TASK [vexxhost.containers.containerd : Create folders for configuration] ******* 2026-04-07 08:21:35.748651 | instance | Tuesday 07 April 2026 08:21:35 +0000 (0:00:00.413) 0:03:34.003 ********* 2026-04-07 08:21:36.818371 | instance | ok: [instance] => (item={'path': '/etc/containerd'}) 2026-04-07 08:21:36.818494 | instance | ok: [instance] => (item={'path': '/var/lib/containerd', 'mode': '0o700'}) 2026-04-07 08:21:36.818541 | instance | ok: [instance] => (item={'path': '/run/containerd', 'mode': '0o711'}) 2026-04-07 08:21:36.818684 | instance | ok: [instance] => (item={'path': '/run/containerd/io.containerd.grpc.v1.cri', 'mode': '0o700'}) 2026-04-07 08:21:36.819062 | instance | ok: [instance] => (item={'path': '/run/containerd/io.containerd.sandbox.controller.v1.shim', 'mode': '0o700'}) 2026-04-07 08:21:36.819201 | instance | 2026-04-07 08:21:36.819338 | instance | TASK [vexxhost.containers.containerd : Create containerd config file] ********** 2026-04-07 08:21:36.819462 | instance | Tuesday 07 April 2026 08:21:36 +0000 (0:00:01.069) 0:03:35.072 ********* 2026-04-07 08:21:37.286521 | instance | ok: [instance] 2026-04-07 08:21:37.286654 | instance | 2026-04-07 08:21:37.286669 | instance | TASK [vexxhost.containers.containerd : Force any restarts if necessary] ******** 2026-04-07 08:21:37.286811 | instance | Tuesday 07 April 2026 08:21:37 +0000 (0:00:00.461) 0:03:35.533 ********* 2026-04-07 08:21:37.286932 | instance | 2026-04-07 08:21:37.287053 | instance | TASK [vexxhost.containers.containerd : Enable and start service] *************** 2026-04-07 08:21:37.287169 | instance | Tuesday 07 April 2026 08:21:37 +0000 (0:00:00.006) 0:03:35.540 ********* 2026-04-07 08:21:37.701238 | instance | ok: [instance] 2026-04-07 08:21:37.701279 | instance | 2026-04-07 08:21:37.701285 | instance | TASK [vexxhost.containers.forget_package : Forget package] ********************* 2026-04-07 08:21:37.701290 | instance | Tuesday 07 April 2026 08:21:37 +0000 (0:00:00.414) 
0:03:35.955 ********* 2026-04-07 08:21:37.947211 | instance | ok: [instance] 2026-04-07 08:21:37.947294 | instance | 2026-04-07 08:21:37.947549 | instance | TASK [vexxhost.containers.download_artifact : Starting download of file] ******* 2026-04-07 08:21:37.947593 | instance | Tuesday 07 April 2026 08:21:37 +0000 (0:00:00.246) 0:03:36.202 ********* 2026-04-07 08:21:37.999407 | instance | ok: [instance] => { 2026-04-07 08:21:37.999471 | instance | "msg": "https://download.docker.com/linux/static/stable/x86_64/docker-24.0.9.tgz" 2026-04-07 08:21:37.999480 | instance | } 2026-04-07 08:21:37.999487 | instance | 2026-04-07 08:21:37.999494 | instance | TASK [vexxhost.containers.download_artifact : Download item] ******************* 2026-04-07 08:21:37.999500 | instance | Tuesday 07 April 2026 08:21:37 +0000 (0:00:00.051) 0:03:36.253 ********* 2026-04-07 08:21:38.434166 | instance | ok: [instance] 2026-04-07 08:21:38.434308 | instance | 2026-04-07 08:21:38.434465 | instance | TASK [vexxhost.containers.download_artifact : Extract archive] ***************** 2026-04-07 08:21:38.434615 | instance | Tuesday 07 April 2026 08:21:38 +0000 (0:00:00.435) 0:03:36.688 ********* 2026-04-07 08:21:41.564773 | instance | ok: [instance] 2026-04-07 08:21:41.564898 | instance | 2026-04-07 08:21:41.564915 | instance | TASK [vexxhost.containers.docker : Install AppArmor packages] ****************** 2026-04-07 08:21:41.565095 | instance | Tuesday 07 April 2026 08:21:41 +0000 (0:00:03.130) 0:03:39.819 ********* 2026-04-07 08:21:43.076331 | instance | ok: [instance] 2026-04-07 08:21:43.076470 | instance | 2026-04-07 08:21:43.076523 | instance | TASK [vexxhost.containers.docker : Ensure group "docker" exists] *************** 2026-04-07 08:21:43.076594 | instance | Tuesday 07 April 2026 08:21:43 +0000 (0:00:01.511) 0:03:41.330 ********* 2026-04-07 08:21:43.315376 | instance | ok: [instance] 2026-04-07 08:21:43.315494 | instance | 2026-04-07 08:21:43.315791 | instance | TASK 
[vexxhost.containers.docker : Create systemd service file for docker] ***** 2026-04-07 08:21:43.315861 | instance | Tuesday 07 April 2026 08:21:43 +0000 (0:00:00.239) 0:03:41.569 ********* 2026-04-07 08:21:43.723726 | instance | ok: [instance] 2026-04-07 08:21:43.723785 | instance | 2026-04-07 08:21:43.723792 | instance | TASK [vexxhost.containers.docker : Create folders for configuration] *********** 2026-04-07 08:21:43.723798 | instance | Tuesday 07 April 2026 08:21:43 +0000 (0:00:00.407) 0:03:41.977 ********* 2026-04-07 08:21:44.335320 | instance | ok: [instance] => (item={'path': '/etc/docker'}) 2026-04-07 08:21:44.335412 | instance | ok: [instance] => (item={'path': '/var/lib/docker', 'mode': '0o710'}) 2026-04-07 08:21:44.335562 | instance | ok: [instance] => (item={'path': '/run/docker', 'mode': '0o711'}) 2026-04-07 08:21:44.335733 | instance | 2026-04-07 08:21:44.335924 | instance | TASK [vexxhost.containers.docker : Create systemd socket file for docker] ****** 2026-04-07 08:21:44.336151 | instance | Tuesday 07 April 2026 08:21:44 +0000 (0:00:00.611) 0:03:42.589 ********* 2026-04-07 08:21:44.742348 | instance | ok: [instance] 2026-04-07 08:21:44.742440 | instance | 2026-04-07 08:21:44.742483 | instance | TASK [vexxhost.containers.docker : Create docker daemon config file] *********** 2026-04-07 08:21:44.742626 | instance | Tuesday 07 April 2026 08:21:44 +0000 (0:00:00.407) 0:03:42.996 ********* 2026-04-07 08:21:45.137522 | instance | ok: [instance] 2026-04-07 08:21:45.137603 | instance | 2026-04-07 08:21:45.137802 | instance | TASK [vexxhost.containers.docker : Force any restarts if necessary] ************ 2026-04-07 08:21:45.137972 | instance | Tuesday 07 April 2026 08:21:45 +0000 (0:00:00.388) 0:03:43.384 ********* 2026-04-07 08:21:45.138124 | instance | 2026-04-07 08:21:45.138326 | instance | TASK [vexxhost.containers.docker : Enable and start service] ******************* 2026-04-07 08:21:45.138485 | instance | Tuesday 07 April 2026 08:21:45 +0000 
(0:00:00.007) 0:03:43.392 *********
2026-04-07 08:21:45.527991 | instance | ok: [instance]
2026-04-07 08:21:45.528080 | instance |
2026-04-07 08:21:45.528163 | instance | TASK [vexxhost.ceph.cephadm : Gather variables for each operating system] ******
2026-04-07 08:21:45.528286 | instance | Tuesday 07 April 2026 08:21:45 +0000 (0:00:00.390) 0:03:43.782 *********
2026-04-07 08:21:45.579582 | instance | ok: [instance] => (item=/home/zuul/.ansible/collections/ansible_collections/vexxhost/ceph/roles/cephadm/vars/ubuntu-22.04.yml)
2026-04-07 08:21:45.579827 | instance |
2026-04-07 08:21:45.580088 | instance | TASK [vexxhost.ceph.cephadm : Install packages] ********************************
2026-04-07 08:21:45.580444 | instance | Tuesday 07 April 2026 08:21:45 +0000 (0:00:00.051) 0:03:43.834 *********
2026-04-07 08:21:46.906390 | instance | ok: [instance]
2026-04-07 08:21:46.906468 | instance |
2026-04-07 08:21:46.906480 | instance | TASK [vexxhost.ceph.cephadm : Ensure services are started] *********************
2026-04-07 08:21:46.906498 | instance | Tuesday 07 April 2026 08:21:46 +0000 (0:00:01.325) 0:03:45.159 *********
2026-04-07 08:21:47.585347 | instance | ok: [instance] => (item=chronyd)
2026-04-07 08:21:47.585423 | instance | ok: [instance] => (item=sshd)
2026-04-07 08:21:47.585845 | instance |
2026-04-07 08:21:47.585891 | instance | TASK [vexxhost.ceph.cephadm : Download "cephadm"] ******************************
2026-04-07 08:21:47.585896 | instance | Tuesday 07 April 2026 08:21:47 +0000 (0:00:00.680) 0:03:45.840 *********
2026-04-07 08:21:48.380765 | instance | ok: [instance]
2026-04-07 08:21:48.380807 | instance |
2026-04-07 08:21:48.380813 | instance | TASK [vexxhost.ceph.cephadm : Remove cephadm from old path] ********************
2026-04-07 08:21:48.380818 | instance | Tuesday 07 April 2026 08:21:48 +0000 (0:00:00.795) 0:03:46.635 *********
2026-04-07 08:21:48.603150 | instance | ok: [instance]
2026-04-07 08:21:48.603248 | instance |
2026-04-07 08:21:48.603558 | instance | TASK [vexxhost.ceph.cephadm : Ensure "cephadm" user is present] ****************
2026-04-07 08:21:48.603600 | instance | Tuesday 07 April 2026 08:21:48 +0000 (0:00:00.222) 0:03:46.857 *********
2026-04-07 08:21:48.861486 | instance | ok: [instance]
2026-04-07 08:21:48.861576 | instance |
2026-04-07 08:21:48.861588 | instance | TASK [vexxhost.ceph.cephadm : Allow "cephadm" user to have passwordless sudo] ***
2026-04-07 08:21:48.861753 | instance | Tuesday 07 April 2026 08:21:48 +0000 (0:00:00.258) 0:03:47.116 *********
2026-04-07 08:21:49.088076 | instance | ok: [instance]
2026-04-07 08:21:49.088184 | instance |
2026-04-07 08:21:49.088192 | instance | TASK [vexxhost.ceph.osd : Get monitor status] **********************************
2026-04-07 08:21:49.088343 | instance | Tuesday 07 April 2026 08:21:49 +0000 (0:00:00.226) 0:03:47.342 *********
2026-04-07 08:21:49.336346 | instance | ok: [instance] => (item=instance)
2026-04-07 08:21:49.336681 | instance |
2026-04-07 08:21:49.336697 | instance | TASK [vexxhost.ceph.osd : Select admin host] ***********************************
2026-04-07 08:21:49.336708 | instance | Tuesday 07 April 2026 08:21:49 +0000 (0:00:00.247) 0:03:47.590 *********
2026-04-07 08:21:49.384429 | instance | ok: [instance]
2026-04-07 08:21:49.384475 | instance |
2026-04-07 08:21:49.384481 | instance | TASK [vexxhost.ceph.osd : Get `cephadm ls` status] *****************************
2026-04-07 08:21:49.384486 | instance | Tuesday 07 April 2026 08:21:49 +0000 (0:00:00.048) 0:03:47.638 *********
2026-04-07 08:21:54.721712 | instance | ok: [instance]
2026-04-07 08:21:54.721774 | instance |
2026-04-07 08:21:54.722039 | instance | TASK [vexxhost.ceph.osd : Parse the `cephadm ls` output] ***********************
2026-04-07 08:21:54.722056 | instance | Tuesday 07 April 2026 08:21:54 +0000 (0:00:05.337) 0:03:52.976 *********
2026-04-07 08:21:54.767741 | instance | ok: [instance]
2026-04-07 08:21:54.768215 | instance |
2026-04-07 08:21:54.768236 | instance | TASK [Install Ceph host] *******************************************************
2026-04-07 08:21:54.768244 | instance | Tuesday 07 April 2026 08:21:54 +0000 (0:00:00.046) 0:03:53.022 *********
2026-04-07 08:21:54.830389 | instance | included: vexxhost.ceph.cephadm_host for instance
2026-04-07 08:21:54.830450 | instance |
2026-04-07 08:21:54.830720 | instance | TASK [vexxhost.ceph.cephadm_host : Get public SSH key for "cephadm" user] ******
2026-04-07 08:21:54.830737 | instance | Tuesday 07 April 2026 08:21:54 +0000 (0:00:00.062) 0:03:53.085 *********
2026-04-07 08:21:54.879579 | instance | skipping: [instance]
2026-04-07 08:21:54.880144 | instance |
2026-04-07 08:21:54.880200 | instance | TASK [vexxhost.ceph.cephadm_host : Set fact with public SSH key for "cephadm" user] ***
2026-04-07 08:21:54.880209 | instance | Tuesday 07 April 2026 08:21:54 +0000 (0:00:00.048) 0:03:53.134 *********
2026-04-07 08:21:54.922633 | instance | skipping: [instance] => (item=instance)
2026-04-07 08:21:54.922751 | instance | skipping: [instance]
2026-04-07 08:21:54.922933 | instance |
2026-04-07 08:21:54.923088 | instance | TASK [vexxhost.ceph.cephadm_host : Set authorized key for "cephadm"] ***********
2026-04-07 08:21:54.923240 | instance | Tuesday 07 April 2026 08:21:54 +0000 (0:00:00.043) 0:03:53.177 *********
2026-04-07 08:21:55.217295 | instance | ok: [instance]
2026-04-07 08:21:55.217350 | instance |
2026-04-07 08:21:55.217513 | instance | TASK [vexxhost.ceph.cephadm_host : Add new host to Ceph] ***********************
2026-04-07 08:21:55.217759 | instance | Tuesday 07 April 2026 08:21:55 +0000 (0:00:00.294) 0:03:53.472 *********
2026-04-07 08:21:57.259759 | instance | ok: [instance]
2026-04-07 08:21:57.259834 | instance |
2026-04-07 08:21:57.260062 | instance | TASK [vexxhost.ceph.osd : Adopt OSDs to cluster] *******************************
2026-04-07 08:21:57.260104 | instance | Tuesday 07 April 2026 08:21:57 +0000 (0:00:02.042) 0:03:55.514
*********
2026-04-07 08:21:57.291883 | instance | skipping: [instance]
2026-04-07 08:21:57.292328 | instance |
2026-04-07 08:21:57.292377 | instance | TASK [vexxhost.ceph.osd : Wait until OSD added to cephadm] *********************
2026-04-07 08:21:57.292386 | instance | Tuesday 07 April 2026 08:21:57 +0000 (0:00:00.032) 0:03:55.546 *********
2026-04-07 08:21:57.324708 | instance | skipping: [instance]
2026-04-07 08:21:57.325129 | instance |
2026-04-07 08:21:57.325158 | instance | TASK [vexxhost.ceph.osd : Ensure all OSDs are non-legacy] **********************
2026-04-07 08:21:57.325164 | instance | Tuesday 07 April 2026 08:21:57 +0000 (0:00:00.032) 0:03:55.579 *********
2026-04-07 08:22:02.682510 | instance | ok: [instance]
2026-04-07 08:22:02.682579 | instance |
2026-04-07 08:22:02.682854 | instance | TASK [vexxhost.ceph.osd : Get `ceph-volume lvm list` status] *******************
2026-04-07 08:22:02.682914 | instance | Tuesday 07 April 2026 08:22:02 +0000 (0:00:05.357) 0:04:00.937 *********
2026-04-07 08:22:13.155889 | instance | ok: [instance]
2026-04-07 08:22:13.155978 | instance |
2026-04-07 08:22:13.156249 | instance | TASK [vexxhost.ceph.osd : Install OSDs] ****************************************
2026-04-07 08:22:13.156265 | instance | Tuesday 07 April 2026 08:22:13 +0000 (0:00:10.473) 0:04:11.410 *********
2026-04-07 08:22:21.130575 | instance | failed: [instance] (item=/dev/vdb) => {"ansible_loop_var": "item", "changed": false, "cmd": ["cephadm", "shell", "--fsid", "1a18391a-62f4-5563-9e14-5d12dccfa845", "--config", "/var/lib/ceph/1a18391a-62f4-5563-9e14-5d12dccfa845/mon.instance/config", "--", "ceph", "orch", "daemon", "add", "osd", "instance:/dev/vdb"], "delta": "0:00:07.732633", "end": "2026-04-07 08:22:21.096228", "item": "/dev/vdb", "msg": "non-zero return code", "rc": 22, "start": "2026-04-07 08:22:13.363595", "stderr": "Error EINVAL: Traceback (most recent call last):\n File \"/usr/share/ceph/mgr/mgr_module.py\", line 1834, in _handle_command\n 
return self.handle_command(inbuf, cmd)\n File \"/usr/share/ceph/mgr/orchestrator/_interface.py\", line 183, in handle_command\n return dispatch[cmd['prefix']].call(self, cmd, inbuf)\n File \"/usr/share/ceph/mgr/mgr_module.py\", line 475, in call\n return self.func(mgr, **kwargs)\n File \"/usr/share/ceph/mgr/orchestrator/_interface.py\", line 119, in \n wrapper_copy = lambda *l_args, **l_kwargs: wrapper(*l_args, **l_kwargs) # noqa: E731\n File \"/usr/share/ceph/mgr/orchestrator/_interface.py\", line 108, in wrapper\n return func(*args, **kwargs)\n File \"/usr/share/ceph/mgr/orchestrator/module.py\", line 1306, in _daemon_add_osd\n raise_if_exception(completion)\n File \"/usr/share/ceph/mgr/orchestrator/_interface.py\", line 240, in raise_if_exception\n raise e\nRuntimeError: cephadm exited with an error code: 1, stderr:Inferring config /var/lib/ceph/1a18391a-62f4-5563-9e14-5d12dccfa845/mon.instance/config\nNon-zero exit code 1 from /usr/bin/docker run --rm --ipc=host --stop-signal=SIGTERM --ulimit nofile=1048576 --net=host --entrypoint /usr/sbin/ceph-volume --privileged --group-add=disk --init -e CONTAINER_IMAGE=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 -e NODE_NAME=instance -e CEPH_VOLUME_OSDSPEC_AFFINITY=None -e CEPH_VOLUME_SKIP_RESTORECON=yes -e CEPH_VOLUME_DEBUG=1 -v /var/run/ceph/1a18391a-62f4-5563-9e14-5d12dccfa845:/var/run/ceph:z -v /var/log/ceph/1a18391a-62f4-5563-9e14-5d12dccfa845:/var/log/ceph:z -v /var/lib/ceph/1a18391a-62f4-5563-9e14-5d12dccfa845/crash:/var/lib/ceph/crash:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /:/rootfs -v /tmp/ceph-tmp5dhic35i:/etc/ceph/ceph.conf:z -v /tmp/ceph-tmphq22nuvo:/var/lib/ceph/bootstrap-osd/ceph.keyring:z quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 lvm batch --no-auto /dev/vdb --yes --no-systemd\n/usr/bin/docker: stderr stderr: lsblk: /dev/vdb: not a block 
device\n/usr/bin/docker: stderr Traceback (most recent call last):\n/usr/bin/docker: stderr File \"/usr/sbin/ceph-volume\", line 33, in \n/usr/bin/docker: stderr sys.exit(load_entry_point('ceph-volume==1.0.0', 'console_scripts', 'ceph-volume')())\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/main.py\", line 54, in __init__\n/usr/bin/docker: stderr self.main(self.argv)\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/decorators.py\", line 59, in newfunc\n/usr/bin/docker: stderr return f(*a, **kw)\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/main.py\", line 166, in main\n/usr/bin/docker: stderr terminal.dispatch(self.mapper, subcommand_args)\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/terminal.py\", line 194, in dispatch\n/usr/bin/docker: stderr instance.main()\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/devices/lvm/main.py\", line 46, in main\n/usr/bin/docker: stderr terminal.dispatch(self.mapper, self.argv)\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/terminal.py\", line 192, in dispatch\n/usr/bin/docker: stderr instance = mapper.get(arg)(argv[count:])\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/devices/lvm/batch.py\", line 325, in __init__\n/usr/bin/docker: stderr self.args = parser.parse_args(argv)\n/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 1825, in parse_args\n/usr/bin/docker: stderr args, argv = self.parse_known_args(args, namespace)\n/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 1858, in parse_known_args\n/usr/bin/docker: stderr namespace, args = self._parse_known_args(args, namespace)\n/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 2049, in _parse_known_args\n/usr/bin/docker: stderr positionals_end_index = consume_positionals(start_index)\n/usr/bin/docker: 
stderr File \"/usr/lib64/python3.9/argparse.py\", line 2026, in consume_positionals\n/usr/bin/docker: stderr take_action(action, args)\n/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 1919, in take_action\n/usr/bin/docker: stderr argument_values = self._get_values(action, argument_strings)\n/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 2468, in _get_values\n/usr/bin/docker: stderr value = [self._get_value(action, v) for v in arg_strings]\n/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 2468, in \n/usr/bin/docker: stderr value = [self._get_value(action, v) for v in arg_strings]\n/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 2483, in _get_value\n/usr/bin/docker: stderr result = type_func(arg_string)\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/util/arg_validators.py\", line 124, in __call__\n/usr/bin/docker: stderr super().get_device(dev_path)\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/util/arg_validators.py\", line 32, in get_device\n/usr/bin/docker: stderr self._device = Device(dev_path)\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/util/device.py\", line 140, in __init__\n/usr/bin/docker: stderr self._parse()\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/util/device.py\", line 236, in _parse\n/usr/bin/docker: stderr dev = disk.lsblk(self.path)\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/util/disk.py\", line 244, in lsblk\n/usr/bin/docker: stderr result = lsblk_all(device=device,\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/util/disk.py\", line 338, in lsblk_all\n/usr/bin/docker: stderr raise RuntimeError(f\"Error: {err}\")\n/usr/bin/docker: stderr RuntimeError: Error: ['lsblk: /dev/vdb: not a block device']\nTraceback (most recent call last):\n File 
\"/usr/lib/python3.10/runpy.py\", line 196, in _run_module_as_main\n return _run_code(code, main_globals, None,\n File \"/usr/lib/python3.10/runpy.py\", line 86, in _run_code\n exec(code, run_globals)\n File \"/var/lib/ceph/1a18391a-62f4-5563-9e14-5d12dccfa845/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 11009, in \n File \"/var/lib/ceph/1a18391a-62f4-5563-9e14-5d12dccfa845/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 10997, in main\n File \"/var/lib/ceph/1a18391a-62f4-5563-9e14-5d12dccfa845/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 2593, in _infer_config\n File \"/var/lib/ceph/1a18391a-62f4-5563-9e14-5d12dccfa845/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 2509, in _infer_fsid\n File \"/var/lib/ceph/1a18391a-62f4-5563-9e14-5d12dccfa845/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 2621, in _infer_image\n File \"/var/lib/ceph/1a18391a-62f4-5563-9e14-5d12dccfa845/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 2496, in _validate_fsid\n File \"/var/lib/ceph/1a18391a-62f4-5563-9e14-5d12dccfa845/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 7226, in command_ceph_volume\n File \"/var/lib/ceph/1a18391a-62f4-5563-9e14-5d12dccfa845/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 2284, in call_throws\nRuntimeError: Failed command: /usr/bin/docker run --rm --ipc=host --stop-signal=SIGTERM --ulimit nofile=1048576 --net=host --entrypoint /usr/sbin/ceph-volume --privileged --group-add=disk --init -e CONTAINER_IMAGE=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 -e NODE_NAME=instance -e CEPH_VOLUME_OSDSPEC_AFFINITY=None -e CEPH_VOLUME_SKIP_RESTORECON=yes -e 
CEPH_VOLUME_DEBUG=1 -v /var/run/ceph/1a18391a-62f4-5563-9e14-5d12dccfa845:/var/run/ceph:z -v /var/log/ceph/1a18391a-62f4-5563-9e14-5d12dccfa845:/var/log/ceph:z -v /var/lib/ceph/1a18391a-62f4-5563-9e14-5d12dccfa845/crash:/var/lib/ceph/crash:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /:/rootfs -v /tmp/ceph-tmp5dhic35i:/etc/ceph/ceph.conf:z -v /tmp/ceph-tmphq22nuvo:/var/lib/ceph/bootstrap-osd/ceph.keyring:z quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 lvm batch --no-auto /dev/vdb --yes --no-systemd", "stdout": "", "stdout_lines": []}
2026-04-07 08:22:29.166521 | instance | failed: [instance] (item=/dev/vdc) => {"ansible_loop_var": "item", "changed": false, "cmd": ["cephadm", "shell", "--fsid", "1a18391a-62f4-5563-9e14-5d12dccfa845", "--config",
"/var/lib/ceph/1a18391a-62f4-5563-9e14-5d12dccfa845/mon.instance/config", "--", "ceph", "orch", "daemon", "add", "osd", "instance:/dev/vdc"], "delta": "0:00:07.811463", "end": "2026-04-07 08:22:29.131348", "item": "/dev/vdc", "msg": "non-zero return code", "rc": 22, "start": "2026-04-07 08:22:21.319885", "stderr": "Error EINVAL: Traceback (most recent call last):\n File \"/usr/share/ceph/mgr/mgr_module.py\", line 1834, in _handle_command\n return self.handle_command(inbuf, cmd)\n File \"/usr/share/ceph/mgr/orchestrator/_interface.py\", line 183, in handle_command\n return dispatch[cmd['prefix']].call(self, cmd, inbuf)\n File \"/usr/share/ceph/mgr/mgr_module.py\", line 475, in call\n return self.func(mgr, **kwargs)\n File \"/usr/share/ceph/mgr/orchestrator/_interface.py\", line 119, in \n wrapper_copy = lambda *l_args, **l_kwargs: wrapper(*l_args, **l_kwargs) # noqa: E731\n File \"/usr/share/ceph/mgr/orchestrator/_interface.py\", line 108, in wrapper\n return func(*args, **kwargs)\n File \"/usr/share/ceph/mgr/orchestrator/module.py\", line 1306, in _daemon_add_osd\n raise_if_exception(completion)\n File \"/usr/share/ceph/mgr/orchestrator/_interface.py\", line 240, in raise_if_exception\n raise e\nRuntimeError: cephadm exited with an error code: 1, stderr:Inferring config /var/lib/ceph/1a18391a-62f4-5563-9e14-5d12dccfa845/mon.instance/config\nNon-zero exit code 1 from /usr/bin/docker run --rm --ipc=host --stop-signal=SIGTERM --ulimit nofile=1048576 --net=host --entrypoint /usr/sbin/ceph-volume --privileged --group-add=disk --init -e CONTAINER_IMAGE=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 -e NODE_NAME=instance -e CEPH_VOLUME_OSDSPEC_AFFINITY=None -e CEPH_VOLUME_SKIP_RESTORECON=yes -e CEPH_VOLUME_DEBUG=1 -v /var/run/ceph/1a18391a-62f4-5563-9e14-5d12dccfa845:/var/run/ceph:z -v /var/log/ceph/1a18391a-62f4-5563-9e14-5d12dccfa845:/var/log/ceph:z -v 
/var/lib/ceph/1a18391a-62f4-5563-9e14-5d12dccfa845/crash:/var/lib/ceph/crash:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /:/rootfs -v /tmp/ceph-tmpb4yn9q2i:/etc/ceph/ceph.conf:z -v /tmp/ceph-tmpq3tl5gg7:/var/lib/ceph/bootstrap-osd/ceph.keyring:z quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 lvm batch --no-auto /dev/vdc --yes --no-systemd\n/usr/bin/docker: stderr stderr: lsblk: /dev/vdc: not a block device\n/usr/bin/docker: stderr Traceback (most recent call last):\n/usr/bin/docker: stderr File \"/usr/sbin/ceph-volume\", line 33, in \n/usr/bin/docker: stderr sys.exit(load_entry_point('ceph-volume==1.0.0', 'console_scripts', 'ceph-volume')())\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/main.py\", line 54, in __init__\n/usr/bin/docker: stderr self.main(self.argv)\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/decorators.py\", line 59, in newfunc\n/usr/bin/docker: stderr return f(*a, **kw)\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/main.py\", line 166, in main\n/usr/bin/docker: stderr terminal.dispatch(self.mapper, subcommand_args)\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/terminal.py\", line 194, in dispatch\n/usr/bin/docker: stderr instance.main()\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/devices/lvm/main.py\", line 46, in main\n/usr/bin/docker: stderr terminal.dispatch(self.mapper, self.argv)\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/terminal.py\", line 192, in dispatch\n/usr/bin/docker: stderr instance = mapper.get(arg)(argv[count:])\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/devices/lvm/batch.py\", line 325, in __init__\n/usr/bin/docker: stderr self.args = parser.parse_args(argv)\n/usr/bin/docker: stderr File 
\"/usr/lib64/python3.9/argparse.py\", line 1825, in parse_args\n/usr/bin/docker: stderr args, argv = self.parse_known_args(args, namespace)\n/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 1858, in parse_known_args\n/usr/bin/docker: stderr namespace, args = self._parse_known_args(args, namespace)\n/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 2049, in _parse_known_args\n/usr/bin/docker: stderr positionals_end_index = consume_positionals(start_index)\n/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 2026, in consume_positionals\n/usr/bin/docker: stderr take_action(action, args)\n/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 1919, in take_action\n/usr/bin/docker: stderr argument_values = self._get_values(action, argument_strings)\n/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 2468, in _get_values\n/usr/bin/docker: stderr value = [self._get_value(action, v) for v in arg_strings]\n/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 2468, in \n/usr/bin/docker: stderr value = [self._get_value(action, v) for v in arg_strings]\n/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 2483, in _get_value\n/usr/bin/docker: stderr result = type_func(arg_string)\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/util/arg_validators.py\", line 124, in __call__\n/usr/bin/docker: stderr super().get_device(dev_path)\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/util/arg_validators.py\", line 32, in get_device\n/usr/bin/docker: stderr self._device = Device(dev_path)\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/util/device.py\", line 140, in __init__\n/usr/bin/docker: stderr self._parse()\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/util/device.py\", line 236, in _parse\n/usr/bin/docker: stderr dev = 
disk.lsblk(self.path)\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/util/disk.py\", line 244, in lsblk\n/usr/bin/docker: stderr result = lsblk_all(device=device,\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/util/disk.py\", line 338, in lsblk_all\n/usr/bin/docker: stderr raise RuntimeError(f\"Error: {err}\")\n/usr/bin/docker: stderr RuntimeError: Error: ['lsblk: /dev/vdc: not a block device']\nTraceback (most recent call last):\n File \"/usr/lib/python3.10/runpy.py\", line 196, in _run_module_as_main\n return _run_code(code, main_globals, None,\n File \"/usr/lib/python3.10/runpy.py\", line 86, in _run_code\n exec(code, run_globals)\n File \"/var/lib/ceph/1a18391a-62f4-5563-9e14-5d12dccfa845/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 11009, in \n File \"/var/lib/ceph/1a18391a-62f4-5563-9e14-5d12dccfa845/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 10997, in main\n File \"/var/lib/ceph/1a18391a-62f4-5563-9e14-5d12dccfa845/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 2593, in _infer_config\n File \"/var/lib/ceph/1a18391a-62f4-5563-9e14-5d12dccfa845/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 2509, in _infer_fsid\n File \"/var/lib/ceph/1a18391a-62f4-5563-9e14-5d12dccfa845/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 2621, in _infer_image\n File \"/var/lib/ceph/1a18391a-62f4-5563-9e14-5d12dccfa845/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 2496, in _validate_fsid\n File \"/var/lib/ceph/1a18391a-62f4-5563-9e14-5d12dccfa845/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 7226, in command_ceph_volume\n File 
\"/var/lib/ceph/1a18391a-62f4-5563-9e14-5d12dccfa845/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 2284, in call_throws\nRuntimeError: Failed command: /usr/bin/docker run --rm --ipc=host --stop-signal=SIGTERM --ulimit nofile=1048576 --net=host --entrypoint /usr/sbin/ceph-volume --privileged --group-add=disk --init -e CONTAINER_IMAGE=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 -e NODE_NAME=instance -e CEPH_VOLUME_OSDSPEC_AFFINITY=None -e CEPH_VOLUME_SKIP_RESTORECON=yes -e CEPH_VOLUME_DEBUG=1 -v /var/run/ceph/1a18391a-62f4-5563-9e14-5d12dccfa845:/var/run/ceph:z -v /var/log/ceph/1a18391a-62f4-5563-9e14-5d12dccfa845:/var/log/ceph:z -v /var/lib/ceph/1a18391a-62f4-5563-9e14-5d12dccfa845/crash:/var/lib/ceph/crash:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /:/rootfs -v /tmp/ceph-tmpb4yn9q2i:/etc/ceph/ceph.conf:z -v /tmp/ceph-tmpq3tl5gg7:/var/lib/ceph/bootstrap-osd/ceph.keyring:z quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 lvm batch --no-auto /dev/vdc --yes --no-systemd", "stderr_lines": ["Error EINVAL: Traceback (most recent call last):", " File \"/usr/share/ceph/mgr/mgr_module.py\", line 1834, in _handle_command", " return self.handle_command(inbuf, cmd)", " File \"/usr/share/ceph/mgr/orchestrator/_interface.py\", line 183, in handle_command", " return dispatch[cmd['prefix']].call(self, cmd, inbuf)", " File \"/usr/share/ceph/mgr/mgr_module.py\", line 475, in call", " return self.func(mgr, **kwargs)", " File \"/usr/share/ceph/mgr/orchestrator/_interface.py\", line 119, in ", " wrapper_copy = lambda *l_args, **l_kwargs: wrapper(*l_args, **l_kwargs) # noqa: E731", " File \"/usr/share/ceph/mgr/orchestrator/_interface.py\", line 108, in wrapper", " return func(*args, **kwargs)", " File \"/usr/share/ceph/mgr/orchestrator/module.py\", line 1306, in _daemon_add_osd", " 
raise_if_exception(completion)", " File \"/usr/share/ceph/mgr/orchestrator/_interface.py\", line 240, in raise_if_exception", " raise e", "RuntimeError: cephadm exited with an error code: 1, stderr:Inferring config /var/lib/ceph/1a18391a-62f4-5563-9e14-5d12dccfa845/mon.instance/config", "Non-zero exit code 1 from /usr/bin/docker run --rm --ipc=host --stop-signal=SIGTERM --ulimit nofile=1048576 --net=host --entrypoint /usr/sbin/ceph-volume --privileged --group-add=disk --init -e CONTAINER_IMAGE=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 -e NODE_NAME=instance -e CEPH_VOLUME_OSDSPEC_AFFINITY=None -e CEPH_VOLUME_SKIP_RESTORECON=yes -e CEPH_VOLUME_DEBUG=1 -v /var/run/ceph/1a18391a-62f4-5563-9e14-5d12dccfa845:/var/run/ceph:z -v /var/log/ceph/1a18391a-62f4-5563-9e14-5d12dccfa845:/var/log/ceph:z -v /var/lib/ceph/1a18391a-62f4-5563-9e14-5d12dccfa845/crash:/var/lib/ceph/crash:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /:/rootfs -v /tmp/ceph-tmpb4yn9q2i:/etc/ceph/ceph.conf:z -v /tmp/ceph-tmpq3tl5gg7:/var/lib/ceph/bootstrap-osd/ceph.keyring:z quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 lvm batch --no-auto /dev/vdc --yes --no-systemd", "/usr/bin/docker: stderr stderr: lsblk: /dev/vdc: not a block device", "/usr/bin/docker: stderr Traceback (most recent call last):", "/usr/bin/docker: stderr File \"/usr/sbin/ceph-volume\", line 33, in ", "/usr/bin/docker: stderr sys.exit(load_entry_point('ceph-volume==1.0.0', 'console_scripts', 'ceph-volume')())", "/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/main.py\", line 54, in __init__", "/usr/bin/docker: stderr self.main(self.argv)", "/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/decorators.py\", line 59, in newfunc", "/usr/bin/docker: stderr return f(*a, **kw)", "/usr/bin/docker: stderr File 
\"/usr/lib/python3.9/site-packages/ceph_volume/main.py\", line 166, in main", "/usr/bin/docker: stderr terminal.dispatch(self.mapper, subcommand_args)", "/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/terminal.py\", line 194, in dispatch", "/usr/bin/docker: stderr instance.main()", "/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/devices/lvm/main.py\", line 46, in main", "/usr/bin/docker: stderr terminal.dispatch(self.mapper, self.argv)", "/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/terminal.py\", line 192, in dispatch", "/usr/bin/docker: stderr instance = mapper.get(arg)(argv[count:])", "/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/devices/lvm/batch.py\", line 325, in __init__", "/usr/bin/docker: stderr self.args = parser.parse_args(argv)", "/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 1825, in parse_args", "/usr/bin/docker: stderr args, argv = self.parse_known_args(args, namespace)", "/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 1858, in parse_known_args", "/usr/bin/docker: stderr namespace, args = self._parse_known_args(args, namespace)", "/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 2049, in _parse_known_args", "/usr/bin/docker: stderr positionals_end_index = consume_positionals(start_index)", "/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 2026, in consume_positionals", "/usr/bin/docker: stderr take_action(action, args)", "/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 1919, in take_action", "/usr/bin/docker: stderr argument_values = self._get_values(action, argument_strings)", "/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 2468, in _get_values", "/usr/bin/docker: stderr value = [self._get_value(action, v) for v in arg_strings]", "/usr/bin/docker: stderr File 
\"/usr/lib64/python3.9/argparse.py\", line 2468, in ", "/usr/bin/docker: stderr value = [self._get_value(action, v) for v in arg_strings]", "/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 2483, in _get_value", "/usr/bin/docker: stderr result = type_func(arg_string)", "/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/util/arg_validators.py\", line 124, in __call__", "/usr/bin/docker: stderr super().get_device(dev_path)", "/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/util/arg_validators.py\", line 32, in get_device", "/usr/bin/docker: stderr self._device = Device(dev_path)", "/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/util/device.py\", line 140, in __init__", "/usr/bin/docker: stderr self._parse()", "/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/util/device.py\", line 236, in _parse", "/usr/bin/docker: stderr dev = disk.lsblk(self.path)", "/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/util/disk.py\", line 244, in lsblk", "/usr/bin/docker: stderr result = lsblk_all(device=device,", "/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/util/disk.py\", line 338, in lsblk_all", "/usr/bin/docker: stderr raise RuntimeError(f\"Error: {err}\")", "/usr/bin/docker: stderr RuntimeError: Error: ['lsblk: /dev/vdc: not a block device']", "Traceback (most recent call last):", " File \"/usr/lib/python3.10/runpy.py\", line 196, in _run_module_as_main", " return _run_code(code, main_globals, None,", " File \"/usr/lib/python3.10/runpy.py\", line 86, in _run_code", " exec(code, run_globals)", " File \"/var/lib/ceph/1a18391a-62f4-5563-9e14-5d12dccfa845/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 11009, in ", " File 
\"/var/lib/ceph/1a18391a-62f4-5563-9e14-5d12dccfa845/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 10997, in main", " File \"/var/lib/ceph/1a18391a-62f4-5563-9e14-5d12dccfa845/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 2593, in _infer_config", " File \"/var/lib/ceph/1a18391a-62f4-5563-9e14-5d12dccfa845/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 2509, in _infer_fsid", " File \"/var/lib/ceph/1a18391a-62f4-5563-9e14-5d12dccfa845/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 2621, in _infer_image", " File \"/var/lib/ceph/1a18391a-62f4-5563-9e14-5d12dccfa845/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 2496, in _validate_fsid", " File \"/var/lib/ceph/1a18391a-62f4-5563-9e14-5d12dccfa845/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 7226, in command_ceph_volume", " File \"/var/lib/ceph/1a18391a-62f4-5563-9e14-5d12dccfa845/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 2284, in call_throws", "RuntimeError: Failed command: /usr/bin/docker run --rm --ipc=host --stop-signal=SIGTERM --ulimit nofile=1048576 --net=host --entrypoint /usr/sbin/ceph-volume --privileged --group-add=disk --init -e CONTAINER_IMAGE=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 -e NODE_NAME=instance -e CEPH_VOLUME_OSDSPEC_AFFINITY=None -e CEPH_VOLUME_SKIP_RESTORECON=yes -e CEPH_VOLUME_DEBUG=1 -v /var/run/ceph/1a18391a-62f4-5563-9e14-5d12dccfa845:/var/run/ceph:z -v /var/log/ceph/1a18391a-62f4-5563-9e14-5d12dccfa845:/var/log/ceph:z -v /var/lib/ceph/1a18391a-62f4-5563-9e14-5d12dccfa845/crash:/var/lib/ceph/crash:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /:/rootfs -v 
/tmp/ceph-tmpb4yn9q2i:/etc/ceph/ceph.conf:z -v /tmp/ceph-tmpq3tl5gg7:/var/lib/ceph/bootstrap-osd/ceph.keyring:z quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 lvm batch --no-auto /dev/vdc --yes --no-systemd"], "stdout": "", "stdout_lines": []} 2026-04-07 08:22:37.116232 | instance | failed: [instance] (item=/dev/vdd) => {"ansible_loop_var": "item", "changed": false, "cmd": ["cephadm", "shell", "--fsid", "1a18391a-62f4-5563-9e14-5d12dccfa845", "--config", "/var/lib/ceph/1a18391a-62f4-5563-9e14-5d12dccfa845/mon.instance/config", "--", "ceph", "orch", "daemon", "add", "osd", "instance:/dev/vdd"], "delta": "0:00:07.743475", "end": "2026-04-07 08:22:37.081387", "item": "/dev/vdd", "msg": "non-zero return code", "rc": 22, "start": "2026-04-07 08:22:29.337912", "stderr": "Error EINVAL: Traceback (most recent call last):\n File \"/usr/share/ceph/mgr/mgr_module.py\", line 1834, in _handle_command\n return self.handle_command(inbuf, cmd)\n File \"/usr/share/ceph/mgr/orchestrator/_interface.py\", line 183, in handle_command\n return dispatch[cmd['prefix']].call(self, cmd, inbuf)\n File \"/usr/share/ceph/mgr/mgr_module.py\", line 475, in call\n return self.func(mgr, **kwargs)\n File \"/usr/share/ceph/mgr/orchestrator/_interface.py\", line 119, in <lambda>\n wrapper_copy = lambda *l_args, **l_kwargs: wrapper(*l_args, **l_kwargs) # noqa: E731\n File \"/usr/share/ceph/mgr/orchestrator/_interface.py\", line 108, in wrapper\n return func(*args, **kwargs)\n File \"/usr/share/ceph/mgr/orchestrator/module.py\", line 1306, in _daemon_add_osd\n raise_if_exception(completion)\n File \"/usr/share/ceph/mgr/orchestrator/_interface.py\", line 240, in raise_if_exception\n raise e\nRuntimeError: cephadm exited with an error code: 1, stderr:Inferring config /var/lib/ceph/1a18391a-62f4-5563-9e14-5d12dccfa845/mon.instance/config\nNon-zero exit code 1 from /usr/bin/docker run --rm --ipc=host --stop-signal=SIGTERM --ulimit nofile=1048576 --net=host --entrypoint 
/usr/sbin/ceph-volume --privileged --group-add=disk --init -e CONTAINER_IMAGE=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 -e NODE_NAME=instance -e CEPH_VOLUME_OSDSPEC_AFFINITY=None -e CEPH_VOLUME_SKIP_RESTORECON=yes -e CEPH_VOLUME_DEBUG=1 -v /var/run/ceph/1a18391a-62f4-5563-9e14-5d12dccfa845:/var/run/ceph:z -v /var/log/ceph/1a18391a-62f4-5563-9e14-5d12dccfa845:/var/log/ceph:z -v /var/lib/ceph/1a18391a-62f4-5563-9e14-5d12dccfa845/crash:/var/lib/ceph/crash:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /:/rootfs -v /tmp/ceph-tmptccgvwkd:/etc/ceph/ceph.conf:z -v /tmp/ceph-tmpfba70arv:/var/lib/ceph/bootstrap-osd/ceph.keyring:z quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 lvm batch --no-auto /dev/vdd --yes --no-systemd\n/usr/bin/docker: stderr stderr: lsblk: /dev/vdd: not a block device\n/usr/bin/docker: stderr Traceback (most recent call last):\n/usr/bin/docker: stderr File \"/usr/sbin/ceph-volume\", line 33, in <module>\n/usr/bin/docker: stderr sys.exit(load_entry_point('ceph-volume==1.0.0', 'console_scripts', 'ceph-volume')())\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/main.py\", line 54, in __init__\n/usr/bin/docker: stderr self.main(self.argv)\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/decorators.py\", line 59, in newfunc\n/usr/bin/docker: stderr return f(*a, **kw)\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/main.py\", line 166, in main\n/usr/bin/docker: stderr terminal.dispatch(self.mapper, subcommand_args)\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/terminal.py\", line 194, in dispatch\n/usr/bin/docker: stderr instance.main()\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/devices/lvm/main.py\", line 46, in main\n/usr/bin/docker: stderr 
terminal.dispatch(self.mapper, self.argv)\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/terminal.py\", line 192, in dispatch\n/usr/bin/docker: stderr instance = mapper.get(arg)(argv[count:])\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/devices/lvm/batch.py\", line 325, in __init__\n/usr/bin/docker: stderr self.args = parser.parse_args(argv)\n/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 1825, in parse_args\n/usr/bin/docker: stderr args, argv = self.parse_known_args(args, namespace)\n/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 1858, in parse_known_args\n/usr/bin/docker: stderr namespace, args = self._parse_known_args(args, namespace)\n/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 2049, in _parse_known_args\n/usr/bin/docker: stderr positionals_end_index = consume_positionals(start_index)\n/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 2026, in consume_positionals\n/usr/bin/docker: stderr take_action(action, args)\n/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 1919, in take_action\n/usr/bin/docker: stderr argument_values = self._get_values(action, argument_strings)\n/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 2468, in _get_values\n/usr/bin/docker: stderr value = [self._get_value(action, v) for v in arg_strings]\n/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 2468, in <listcomp>\n/usr/bin/docker: stderr value = [self._get_value(action, v) for v in arg_strings]\n/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 2483, in _get_value\n/usr/bin/docker: stderr result = type_func(arg_string)\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/util/arg_validators.py\", line 124, in __call__\n/usr/bin/docker: stderr super().get_device(dev_path)\n/usr/bin/docker: stderr File 
\"/usr/lib/python3.9/site-packages/ceph_volume/util/arg_validators.py\", line 32, in get_device\n/usr/bin/docker: stderr self._device = Device(dev_path)\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/util/device.py\", line 140, in __init__\n/usr/bin/docker: stderr self._parse()\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/util/device.py\", line 236, in _parse\n/usr/bin/docker: stderr dev = disk.lsblk(self.path)\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/util/disk.py\", line 244, in lsblk\n/usr/bin/docker: stderr result = lsblk_all(device=device,\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/util/disk.py\", line 338, in lsblk_all\n/usr/bin/docker: stderr raise RuntimeError(f\"Error: {err}\")\n/usr/bin/docker: stderr RuntimeError: Error: ['lsblk: /dev/vdd: not a block device']\nTraceback (most recent call last):\n File \"/usr/lib/python3.10/runpy.py\", line 196, in _run_module_as_main\n return _run_code(code, main_globals, None,\n File \"/usr/lib/python3.10/runpy.py\", line 86, in _run_code\n exec(code, run_globals)\n File \"/var/lib/ceph/1a18391a-62f4-5563-9e14-5d12dccfa845/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 11009, in <module>\n File \"/var/lib/ceph/1a18391a-62f4-5563-9e14-5d12dccfa845/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 10997, in main\n File \"/var/lib/ceph/1a18391a-62f4-5563-9e14-5d12dccfa845/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 2593, in _infer_config\n File \"/var/lib/ceph/1a18391a-62f4-5563-9e14-5d12dccfa845/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 2509, in _infer_fsid\n File \"/var/lib/ceph/1a18391a-62f4-5563-9e14-5d12dccfa845/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 2621, 
in _infer_image\n File \"/var/lib/ceph/1a18391a-62f4-5563-9e14-5d12dccfa845/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 2496, in _validate_fsid\n File \"/var/lib/ceph/1a18391a-62f4-5563-9e14-5d12dccfa845/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 7226, in command_ceph_volume\n File \"/var/lib/ceph/1a18391a-62f4-5563-9e14-5d12dccfa845/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 2284, in call_throws\nRuntimeError: Failed command: /usr/bin/docker run --rm --ipc=host --stop-signal=SIGTERM --ulimit nofile=1048576 --net=host --entrypoint /usr/sbin/ceph-volume --privileged --group-add=disk --init -e CONTAINER_IMAGE=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 -e NODE_NAME=instance -e CEPH_VOLUME_OSDSPEC_AFFINITY=None -e CEPH_VOLUME_SKIP_RESTORECON=yes -e CEPH_VOLUME_DEBUG=1 -v /var/run/ceph/1a18391a-62f4-5563-9e14-5d12dccfa845:/var/run/ceph:z -v /var/log/ceph/1a18391a-62f4-5563-9e14-5d12dccfa845:/var/log/ceph:z -v /var/lib/ceph/1a18391a-62f4-5563-9e14-5d12dccfa845/crash:/var/lib/ceph/crash:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /:/rootfs -v /tmp/ceph-tmptccgvwkd:/etc/ceph/ceph.conf:z -v /tmp/ceph-tmpfba70arv:/var/lib/ceph/bootstrap-osd/ceph.keyring:z quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 lvm batch --no-auto /dev/vdd --yes --no-systemd", "stderr_lines": ["Error EINVAL: Traceback (most recent call last):", " File \"/usr/share/ceph/mgr/mgr_module.py\", line 1834, in _handle_command", " return self.handle_command(inbuf, cmd)", " File \"/usr/share/ceph/mgr/orchestrator/_interface.py\", line 183, in handle_command", " return dispatch[cmd['prefix']].call(self, cmd, inbuf)", " File \"/usr/share/ceph/mgr/mgr_module.py\", line 475, in call", " return self.func(mgr, 
**kwargs)", " File \"/usr/share/ceph/mgr/orchestrator/_interface.py\", line 119, in <lambda>", " wrapper_copy = lambda *l_args, **l_kwargs: wrapper(*l_args, **l_kwargs) # noqa: E731", " File \"/usr/share/ceph/mgr/orchestrator/_interface.py\", line 108, in wrapper", " return func(*args, **kwargs)", " File \"/usr/share/ceph/mgr/orchestrator/module.py\", line 1306, in _daemon_add_osd", " raise_if_exception(completion)", " File \"/usr/share/ceph/mgr/orchestrator/_interface.py\", line 240, in raise_if_exception", " raise e", "RuntimeError: cephadm exited with an error code: 1, stderr:Inferring config /var/lib/ceph/1a18391a-62f4-5563-9e14-5d12dccfa845/mon.instance/config", "Non-zero exit code 1 from /usr/bin/docker run --rm --ipc=host --stop-signal=SIGTERM --ulimit nofile=1048576 --net=host --entrypoint /usr/sbin/ceph-volume --privileged --group-add=disk --init -e CONTAINER_IMAGE=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 -e NODE_NAME=instance -e CEPH_VOLUME_OSDSPEC_AFFINITY=None -e CEPH_VOLUME_SKIP_RESTORECON=yes -e CEPH_VOLUME_DEBUG=1 -v /var/run/ceph/1a18391a-62f4-5563-9e14-5d12dccfa845:/var/run/ceph:z -v /var/log/ceph/1a18391a-62f4-5563-9e14-5d12dccfa845:/var/log/ceph:z -v /var/lib/ceph/1a18391a-62f4-5563-9e14-5d12dccfa845/crash:/var/lib/ceph/crash:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /:/rootfs -v /tmp/ceph-tmptccgvwkd:/etc/ceph/ceph.conf:z -v /tmp/ceph-tmpfba70arv:/var/lib/ceph/bootstrap-osd/ceph.keyring:z quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 lvm batch --no-auto /dev/vdd --yes --no-systemd", "/usr/bin/docker: stderr stderr: lsblk: /dev/vdd: not a block device", "/usr/bin/docker: stderr Traceback (most recent call last):", "/usr/bin/docker: stderr File \"/usr/sbin/ceph-volume\", line 33, in <module>", "/usr/bin/docker: stderr sys.exit(load_entry_point('ceph-volume==1.0.0', 'console_scripts', 'ceph-volume')())", 
"/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/main.py\", line 54, in __init__", "/usr/bin/docker: stderr self.main(self.argv)", "/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/decorators.py\", line 59, in newfunc", "/usr/bin/docker: stderr return f(*a, **kw)", "/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/main.py\", line 166, in main", "/usr/bin/docker: stderr terminal.dispatch(self.mapper, subcommand_args)", "/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/terminal.py\", line 194, in dispatch", "/usr/bin/docker: stderr instance.main()", "/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/devices/lvm/main.py\", line 46, in main", "/usr/bin/docker: stderr terminal.dispatch(self.mapper, self.argv)", "/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/terminal.py\", line 192, in dispatch", "/usr/bin/docker: stderr instance = mapper.get(arg)(argv[count:])", "/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/devices/lvm/batch.py\", line 325, in __init__", "/usr/bin/docker: stderr self.args = parser.parse_args(argv)", "/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 1825, in parse_args", "/usr/bin/docker: stderr args, argv = self.parse_known_args(args, namespace)", "/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 1858, in parse_known_args", "/usr/bin/docker: stderr namespace, args = self._parse_known_args(args, namespace)", "/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 2049, in _parse_known_args", "/usr/bin/docker: stderr positionals_end_index = consume_positionals(start_index)", "/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 2026, in consume_positionals", "/usr/bin/docker: stderr take_action(action, args)", "/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 
1919, in take_action", "/usr/bin/docker: stderr argument_values = self._get_values(action, argument_strings)", "/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 2468, in _get_values", "/usr/bin/docker: stderr value = [self._get_value(action, v) for v in arg_strings]", "/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 2468, in <listcomp>", "/usr/bin/docker: stderr value = [self._get_value(action, v) for v in arg_strings]", "/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 2483, in _get_value", "/usr/bin/docker: stderr result = type_func(arg_string)", "/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/util/arg_validators.py\", line 124, in __call__", "/usr/bin/docker: stderr super().get_device(dev_path)", "/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/util/arg_validators.py\", line 32, in get_device", "/usr/bin/docker: stderr self._device = Device(dev_path)", "/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/util/device.py\", line 140, in __init__", "/usr/bin/docker: stderr self._parse()", "/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/util/device.py\", line 236, in _parse", "/usr/bin/docker: stderr dev = disk.lsblk(self.path)", "/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/util/disk.py\", line 244, in lsblk", "/usr/bin/docker: stderr result = lsblk_all(device=device,", "/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/util/disk.py\", line 338, in lsblk_all", "/usr/bin/docker: stderr raise RuntimeError(f\"Error: {err}\")", "/usr/bin/docker: stderr RuntimeError: Error: ['lsblk: /dev/vdd: not a block device']", "Traceback (most recent call last):", " File \"/usr/lib/python3.10/runpy.py\", line 196, in _run_module_as_main", " return _run_code(code, main_globals, None,", " File \"/usr/lib/python3.10/runpy.py\", line 86, in _run_code", " 
exec(code, run_globals)", " File \"/var/lib/ceph/1a18391a-62f4-5563-9e14-5d12dccfa845/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 11009, in <module>", " File \"/var/lib/ceph/1a18391a-62f4-5563-9e14-5d12dccfa845/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 10997, in main", " File \"/var/lib/ceph/1a18391a-62f4-5563-9e14-5d12dccfa845/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 2593, in _infer_config", " File \"/var/lib/ceph/1a18391a-62f4-5563-9e14-5d12dccfa845/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 2509, in _infer_fsid", " File \"/var/lib/ceph/1a18391a-62f4-5563-9e14-5d12dccfa845/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 2621, in _infer_image", " File \"/var/lib/ceph/1a18391a-62f4-5563-9e14-5d12dccfa845/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 2496, in _validate_fsid", " File \"/var/lib/ceph/1a18391a-62f4-5563-9e14-5d12dccfa845/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 7226, in command_ceph_volume", " File \"/var/lib/ceph/1a18391a-62f4-5563-9e14-5d12dccfa845/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 2284, in call_throws", "RuntimeError: Failed command: /usr/bin/docker run --rm --ipc=host --stop-signal=SIGTERM --ulimit nofile=1048576 --net=host --entrypoint /usr/sbin/ceph-volume --privileged --group-add=disk --init -e CONTAINER_IMAGE=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 -e NODE_NAME=instance -e CEPH_VOLUME_OSDSPEC_AFFINITY=None -e CEPH_VOLUME_SKIP_RESTORECON=yes -e CEPH_VOLUME_DEBUG=1 -v /var/run/ceph/1a18391a-62f4-5563-9e14-5d12dccfa845:/var/run/ceph:z -v /var/log/ceph/1a18391a-62f4-5563-9e14-5d12dccfa845:/var/log/ceph:z -v 
/var/lib/ceph/1a18391a-62f4-5563-9e14-5d12dccfa845/crash:/var/lib/ceph/crash:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /:/rootfs -v /tmp/ceph-tmptccgvwkd:/etc/ceph/ceph.conf:z -v /tmp/ceph-tmpfba70arv:/var/lib/ceph/bootstrap-osd/ceph.keyring:z quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 lvm batch --no-auto /dev/vdd --yes --no-systemd"], "stdout": "", "stdout_lines": []} 2026-04-07 08:22:37.121024 | instance | 2026-04-07 08:22:37.122384 | instance | PLAY RECAP ********************************************************************* 2026-04-07 08:22:37.122402 | instance | instance : ok=105 changed=26 unreachable=0 failed=1 skipped=26 rescued=0 ignored=0 2026-04-07 08:22:37.122412 | instance | 2026-04-07 08:22:37.122422 | instance | Tuesday 07 April 2026 08:22:37 +0000 (0:00:23.964) 0:04:35.375 ********* 2026-04-07 08:22:37.122431 | instance | =============================================================================== 2026-04-07 08:22:37.122440 | instance | vexxhost.ceph.mon : Run Bootstrap coomand ----------------------------- 133.14s 2026-04-07 08:22:37.122448 | instance | vexxhost.ceph.osd : Install OSDs --------------------------------------- 23.97s 2026-04-07 08:22:37.122457 | instance | vexxhost.ceph.mon : Validate monitor exist ----------------------------- 11.37s 2026-04-07 08:22:37.124890 | instance | vexxhost.ceph.osd : Get `ceph-volume lvm list` status ------------------ 10.47s 2026-04-07 08:22:37.124938 | instance | vexxhost.containers.containerd : Install AppArmor packages -------------- 6.43s 2026-04-07 08:22:37.124945 | instance | vexxhost.ceph.cephadm : Install packages -------------------------------- 5.89s 2026-04-07 08:22:37.124951 | instance | vexxhost.ceph.osd : Ensure all OSDs are non-legacy ---------------------- 5.36s 2026-04-07 08:22:37.124957 | instance | vexxhost.ceph.osd : Get `cephadm ls` status ----------------------------- 5.34s 
2026-04-07 08:22:37.124979 | instance | vexxhost.containers.download_artifact : Extract archive ----------------- 4.42s 2026-04-07 08:22:37.124985 | instance | vexxhost.ceph.mgr : Enable the Ceph Manager prometheus module ----------- 3.41s 2026-04-07 08:22:37.124991 | instance | vexxhost.containers.download_artifact : Extract archive ----------------- 3.13s 2026-04-07 08:22:37.124996 | instance | vexxhost.containers.download_artifact : Extract archive ----------------- 2.88s 2026-04-07 08:22:37.125002 | instance | vexxhost.ceph.cephadm_host : Add new host to Ceph ----------------------- 2.19s 2026-04-07 08:22:37.125008 | instance | vexxhost.ceph.cephadm_host : Add new host to Ceph ----------------------- 2.08s 2026-04-07 08:22:37.125013 | instance | vexxhost.ceph.cephadm_host : Add new host to Ceph ----------------------- 2.04s 2026-04-07 08:22:37.125019 | instance | vexxhost.containers.download_artifact : Extract archive ----------------- 1.98s 2026-04-07 08:22:37.125024 | instance | vexxhost.containers.docker : Restart docker ----------------------------- 1.89s 2026-04-07 08:22:37.125030 | instance | vexxhost.ceph.mon : Configure "mon" label for monitors ------------------ 1.84s 2026-04-07 08:22:37.125035 | instance | vexxhost.ceph.cephadm_host : Get public SSH key for "cephadm" user ------ 1.73s 2026-04-07 08:22:37.125041 | instance | vexxhost.containers.containerd : Reload systemd ------------------------- 1.72s 2026-04-07 08:22:37.403174 | instance | CRITICAL Ansible return code was 2, command was: ansible-playbook --inventory /home/zuul/.ansible/tmp/molecule.v9Wo.aio/inventory --skip-tags molecule-notest,notest /home/zuul/src/github.com/vexxhost/atmosphere/molecule/aio/converge.yml 2026-04-07 08:22:37.403480 | instance | ERROR [aio > converge] Executed: Failed 2026-04-07 08:22:37.403674 | instance | ERROR Ansible return code was 2, command was: ansible-playbook --inventory /home/zuul/.ansible/tmp/molecule.v9Wo.aio/inventory --skip-tags molecule-notest,notest 
/home/zuul/src/github.com/vexxhost/atmosphere/molecule/aio/converge.yml 2026-04-07 08:22:37.679915 | instance | ERROR 2026-04-07 08:22:37.680527 | instance | { 2026-04-07 08:22:37.680579 | instance | "delta": "0:06:41.949408", 2026-04-07 08:22:37.680611 | instance | "end": "2026-04-07 08:22:37.494733", 2026-04-07 08:22:37.680638 | instance | "msg": "non-zero return code", 2026-04-07 08:22:37.680666 | instance | "rc": 2, 2026-04-07 08:22:37.680696 | instance | "start": "2026-04-07 08:15:55.545325" 2026-04-07 08:22:37.680723 | instance | } failure 2026-04-07 08:22:37.693399 | 2026-04-07 08:22:37.693453 | PLAY RECAP 2026-04-07 08:22:37.693500 | instance | ok: 2 changed: 2 unreachable: 0 failed: 1 skipped: 0 rescued: 0 ignored: 0 2026-04-07 08:22:37.693522 | 2026-04-07 08:22:37.855968 | RUN END RESULT_NORMAL: [untrusted : github.com/vexxhost/zuul-jobs/playbooks/molecule/run.yaml@main] 2026-04-07 08:22:37.865072 | POST-RUN START: [untrusted : github.com/vexxhost/atmosphere/test-playbooks/molecule/post.yml@main] 2026-04-07 08:22:38.451997 | 2026-04-07 08:22:38.452114 | PLAY [all] 2026-04-07 08:22:38.465836 | 2026-04-07 08:22:38.466007 | TASK [gather-host-logs : creating directory for system status] 2026-04-07 08:22:38.833498 | instance | changed 2026-04-07 08:22:38.838580 | 2026-04-07 08:22:38.838653 | TASK [gather-host-logs : Get logs for each host] 2026-04-07 08:22:39.179411 | instance | + systemd-cgls --full --all --no-pager 2026-04-07 08:22:39.193888 | instance | + ip addr 2026-04-07 08:22:39.197335 | instance | + ip route 2026-04-07 08:22:39.200307 | instance | + lsblk 2026-04-07 08:22:39.204603 | instance | + mount 2026-04-07 08:22:39.207747 | instance | + docker images 2026-04-07 08:22:39.230629 | instance | + brctl show 2026-04-07 08:22:39.231349 | instance | /bin/bash: line 8: brctl: command not found 2026-04-07 08:22:39.231691 | instance | + ps aux --sort=-%mem 2026-04-07 08:22:39.250347 | instance | + dpkg -l 2026-04-07 08:22:39.259619 | instance | + 
CONTAINERS=($(docker ps -a --format '{{ .Names }}' --filter label=zuul)) 2026-04-07 08:22:39.260166 | instance | ++ docker ps -a --format '{{ .Names }}' --filter label=zuul 2026-04-07 08:22:39.281554 | instance | + '[' '!' -z '' ']' 2026-04-07 08:22:39.379039 | instance | ok: Runtime: 0:00:00.108302 2026-04-07 08:22:39.386613 | 2026-04-07 08:22:39.386677 | TASK [gather-host-logs : Downloads logs to executor] 2026-04-07 08:22:40.009851 | instance | changed: 2026-04-07 08:22:40.010059 | instance | created directory /var/lib/zuul/builds/c8026ada4e2d464a8de3d66983af1cc6/work/logs/instance 2026-04-07 08:22:40.010099 | instance | cd+++++++++ system/ 2026-04-07 08:22:40.010135 | instance | >f+++++++++ system/brctl-show.txt 2026-04-07 08:22:40.010166 | instance | >f+++++++++ system/docker-images.txt 2026-04-07 08:22:40.010194 | instance | >f+++++++++ system/ip-addr.txt 2026-04-07 08:22:40.010224 | instance | >f+++++++++ system/ip-route.txt 2026-04-07 08:22:40.010254 | instance | >f+++++++++ system/lsblk.txt 2026-04-07 08:22:40.010282 | instance | >f+++++++++ system/mount.txt 2026-04-07 08:22:40.010309 | instance | >f+++++++++ system/packages.txt 2026-04-07 08:22:40.010335 | instance | >f+++++++++ system/ps.txt 2026-04-07 08:22:40.010365 | instance | >f+++++++++ system/systemd-cgls.txt 2026-04-07 08:22:40.020640 | 2026-04-07 08:22:40.020706 | LOOP [helm-release-status : creating directory for helm release status] 2026-04-07 08:22:40.228378 | instance | changed: "values" 2026-04-07 08:22:40.399205 | instance | changed: "releases" 2026-04-07 08:22:40.419149 | 2026-04-07 08:22:40.419328 | TASK [helm-release-status : Gather get release status for helm charts] 2026-04-07 08:22:40.631703 | instance | /bin/bash: line 3: kubectl: command not found 2026-04-07 08:22:40.954559 | instance | ok: Runtime: 0:00:00.006752 2026-04-07 08:22:40.962717 | 2026-04-07 08:22:40.962802 | TASK [helm-release-status : Downloads logs to executor] 2026-04-07 08:22:41.447518 | instance | changed: 
2026-04-07 08:22:41.447667 | instance | cd+++++++++ helm/
2026-04-07 08:22:41.447707 | instance | cd+++++++++ helm/releases/
2026-04-07 08:22:41.447729 | instance | cd+++++++++ helm/values/
2026-04-07 08:22:41.456489 |
2026-04-07 08:22:41.456552 | TASK [describe-kubernetes-objects : creating directory for cluster scoped objects]
2026-04-07 08:22:41.655827 | instance | changed
2026-04-07 08:22:41.690054 |
2026-04-07 08:22:41.690210 | TASK [describe-kubernetes-objects : Gathering descriptions for cluster scoped objects]
2026-04-07 08:22:41.910181 | instance | xargs: warning: options --max-args and --replace/-I/-i are mutually exclusive, ignoring previous --max-args value
2026-04-07 08:22:41.911003 | instance | xargs: warning: options --max-args and --replace/-I/-i are mutually exclusive, ignoring previous --max-args value
2026-04-07 08:22:41.916248 | instance | environment: line 1: kubectl: command not found
2026-04-07 08:22:41.917250 | instance | xargs: warning: options --max-lines and --replace/-I/-i are mutually exclusive, ignoring previous --max-lines value
2026-04-07 08:22:41.918494 | instance | environment: line 1: kubectl: command not found
2026-04-07 08:22:41.920035 | instance | xargs: warning: options --max-lines and --replace/-I/-i are mutually exclusive, ignoring previous --max-lines value
2026-04-07 08:22:41.921115 | instance | environment: line 1: kubectl: command not found
2026-04-07 08:22:41.922263 | instance | xargs: warning: options --max-lines and --replace/-I/-i are mutually exclusive, ignoring previous --max-lines value
2026-04-07 08:22:41.924136 | instance | environment: line 1: kubectl: command not found
2026-04-07 08:22:41.925452 | instance | xargs: warning: options --max-lines and --replace/-I/-i are mutually exclusive, ignoring previous --max-lines value
2026-04-07 08:22:41.926762 | instance | environment: line 1: kubectl: command not found
2026-04-07 08:22:41.928137 | instance | xargs: warning: options --max-lines and --replace/-I/-i are mutually exclusive, ignoring previous --max-lines value
2026-04-07 08:22:42.238724 | instance | ok: Runtime: 0:00:00.028590
2026-04-07 08:22:42.248994 |
2026-04-07 08:22:42.249281 | TASK [describe-kubernetes-objects : creating directory for namespace scoped objects]
2026-04-07 08:22:42.458604 | instance | changed
2026-04-07 08:22:42.465001 |
2026-04-07 08:22:42.465074 | TASK [describe-kubernetes-objects : Gathering descriptions for namespace scoped objects]
2026-04-07 08:22:42.673056 | instance | environment: line 5: kubectl: command not found
2026-04-07 08:22:42.673668 | instance | xargs: warning: options --max-args and --replace/-I/-i are mutually exclusive, ignoring previous --max-args value
2026-04-07 08:22:42.673692 | instance | xargs: warning: options --max-args and --replace/-I/-i are mutually exclusive, ignoring previous --max-args value
2026-04-07 08:22:42.674737 | instance | xargs: warning: options --max-args and --replace/-I/-i are mutually exclusive, ignoring previous --max-args value
2026-04-07 08:22:43.001694 | instance | ok: Runtime: 0:00:00.010408
2026-04-07 08:22:43.008979 |
2026-04-07 08:22:43.009052 | TASK [describe-kubernetes-objects : Downloads logs to executor]
2026-04-07 08:22:43.485928 | instance | changed:
2026-04-07 08:22:43.486453 | instance | cd+++++++++ objects/
2026-04-07 08:22:43.486497 | instance | cd+++++++++ objects/cluster/
2026-04-07 08:22:43.486529 | instance | cd+++++++++ objects/namespaced/
2026-04-07 08:22:43.497586 |
2026-04-07 08:22:43.497653 | TASK [gather-pod-logs : creating directory for pod logs]
2026-04-07 08:22:43.700952 | instance | changed
2026-04-07 08:22:43.708109 |
2026-04-07 08:22:43.708227 | TASK [gather-pod-logs : creating directory for failed pod logs]
2026-04-07 08:22:43.906596 | instance | changed
2026-04-07 08:22:43.912706 |
2026-04-07 08:22:43.912776 | TASK [gather-pod-logs : retrieve all kubernetes logs, current and previous (if they exist)]
2026-04-07 08:22:44.133148 | instance | environment: line 3: kubectl: command not found
2026-04-07 08:22:44.447948 | instance | ok: Runtime: 0:00:00.010497
2026-04-07 08:22:44.452974 |
2026-04-07 08:22:44.492933 | TASK [gather-pod-logs : Downloads pod logs to executor]
2026-04-07 08:22:44.973064 | instance | changed:
2026-04-07 08:22:44.973232 | instance | cd+++++++++ pod-logs/
2026-04-07 08:22:44.973260 | instance | cd+++++++++ pod-logs/failed-pods/
2026-04-07 08:22:44.982817 |
2026-04-07 08:22:44.982878 | TASK [gather-prom-metrics : creating directory for helm release descriptions]
2026-04-07 08:22:45.180290 | instance | changed
2026-04-07 08:22:45.185443 |
2026-04-07 08:22:45.185511 | TASK [gather-prom-metrics : Get metrics from exporter services in all namespaces]
2026-04-07 08:22:45.419539 | instance | /bin/bash: line 2: kubectl: command not found
2026-04-07 08:22:45.722878 | instance | ok: Runtime: 0:00:00.036442
2026-04-07 08:22:45.729673 |
2026-04-07 08:22:45.729764 | TASK [gather-prom-metrics : Get ceph metrics from ceph-mgr]
2026-04-07 08:22:45.948725 | instance | /bin/bash: line 2: kubectl: command not found
2026-04-07 08:22:45.979382 | instance | ceph-mgr endpoints:
2026-04-07 08:22:46.269129 | instance | ok: Runtime: 0:00:00.037039
2026-04-07 08:22:46.274911 |
2026-04-07 08:22:46.274976 | TASK [gather-prom-metrics : Get metrics from fluentd pods]
2026-04-07 08:22:46.477313 | instance | /bin/bash: line 4: kubectl: command not found
2026-04-07 08:22:46.808797 | instance | ok: Runtime: 0:00:00.036781
2026-04-07 08:22:46.814938 |
2026-04-07 08:22:46.814998 | TASK [gather-prom-metrics : Downloads logs to executor]
2026-04-07 08:22:47.286656 | instance | changed: cd+++++++++ prometheus/
2026-04-07 08:22:47.296031 |
2026-04-07 08:22:47.307650 | TASK [gather-selenium-data : creating directory for helm release descriptions]
2026-04-07 08:22:47.560536 | instance | changed
2026-04-07 08:22:47.565567 |
2026-04-07 08:22:47.565632 | TASK [gather-selenium-data : Get selenium data]
2026-04-07 08:22:47.760862 | instance | + cp '/tmp/artifacts/*' /tmp/logs/selenium/.
2026-04-07 08:22:47.762456 | instance | cp: cannot stat '/tmp/artifacts/*': No such file or directory
2026-04-07 08:22:48.100502 | instance | ERROR
2026-04-07 08:22:48.100683 | instance | {
2026-04-07 08:22:48.100716 | instance | "delta": "0:00:00.006720",
2026-04-07 08:22:48.100738 | instance | "end": "2026-04-07 08:22:47.762810",
2026-04-07 08:22:48.100759 | instance | "msg": "non-zero return code",
2026-04-07 08:22:48.100778 | instance | "rc": 1,
2026-04-07 08:22:48.100797 | instance | "start": "2026-04-07 08:22:47.756090"
2026-04-07 08:22:48.100815 | instance | }
2026-04-07 08:22:48.100839 | instance | ERROR: Ignoring Errors
2026-04-07 08:22:48.105654 |
2026-04-07 08:22:48.105717 | TASK [gather-selenium-data : Downloads logs to executor]
2026-04-07 08:22:48.561567 | instance | changed: cd+++++++++ selenium/
2026-04-07 08:22:48.569065 |
2026-04-07 08:22:48.569113 | PLAY RECAP
2026-04-07 08:22:48.569156 | instance | ok: 23 changed: 23 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 1
2026-04-07 08:22:48.569177 |
2026-04-07 08:22:48.671059 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/vexxhost/atmosphere/test-playbooks/molecule/post.yml@main]
2026-04-07 08:22:48.684097 | POST-RUN START: [trusted : github.com/vexxhost/zuul-config/playbooks/base/post.yaml@main]
2026-04-07 08:22:49.272245 |
2026-04-07 08:22:49.272380 | PLAY [all]
2026-04-07 08:22:49.283714 |
2026-04-07 08:22:49.283824 | TASK [fetch-output : Set log path for multiple nodes]
2026-04-07 08:22:49.318837 | instance | skipping: Conditional result was False
2026-04-07 08:22:49.328206 |
2026-04-07 08:22:49.328370 | TASK [fetch-output : Set log path for single node]
2026-04-07 08:22:49.372620 | instance | ok
2026-04-07 08:22:49.377608 |
2026-04-07 08:22:49.377677 | LOOP [fetch-output : Ensure local output dirs]
2026-04-07 08:22:49.733801 | instance -> localhost | ok: "/var/lib/zuul/builds/c8026ada4e2d464a8de3d66983af1cc6/work/logs"
2026-04-07 08:22:49.929264 | instance -> localhost | changed: "/var/lib/zuul/builds/c8026ada4e2d464a8de3d66983af1cc6/work/artifacts"
2026-04-07 08:22:50.132626 | instance -> localhost | changed: "/var/lib/zuul/builds/c8026ada4e2d464a8de3d66983af1cc6/work/docs"
2026-04-07 08:22:50.154180 |
2026-04-07 08:22:50.154278 | LOOP [fetch-output : Collect logs, artifacts and docs]
2026-04-07 08:22:50.760017 | instance | changed: .d..t...... ./
2026-04-07 08:22:50.760195 | instance | changed: All items complete
2026-04-07 08:22:50.760223 |
2026-04-07 08:22:51.202856 | instance | changed: .d..t...... ./
2026-04-07 08:22:51.661156 | instance | changed: .d..t...... ./
2026-04-07 08:22:51.685749 |
2026-04-07 08:22:51.685916 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2026-04-07 08:22:52.122421 | instance -> localhost | ok: Item: artifacts Runtime: 0:00:00.008195
2026-04-07 08:22:52.341917 | instance -> localhost | ok: Item: docs Runtime: 0:00:00.008310
2026-04-07 08:22:52.356355 |
2026-04-07 08:22:52.356526 | PLAY [all]
2026-04-07 08:22:52.363911 |
2026-04-07 08:22:52.363976 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2026-04-07 08:22:52.754858 | instance | changed
2026-04-07 08:22:52.762378 |
2026-04-07 08:22:52.762460 | PLAY RECAP
2026-04-07 08:22:52.762512 | instance | ok: 5 changed: 4 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2026-04-07 08:22:52.762534 |
2026-04-07 08:22:52.917660 | POST-RUN END RESULT_NORMAL: [trusted : github.com/vexxhost/zuul-config/playbooks/base/post.yaml@main]
2026-04-07 08:22:52.931104 | POST-RUN START: [trusted : github.com/vexxhost/zuul-config/playbooks/base/post-logs.yaml@main]
2026-04-07 08:22:53.498962 |
2026-04-07 08:22:53.499340 | PLAY [localhost]
2026-04-07 08:22:53.508719 |
2026-04-07 08:22:53.508785 | TASK [Generate Zuul manifest]
2026-04-07 08:22:53.530123 | localhost | ok
2026-04-07 08:22:53.545710 |
2026-04-07 08:22:53.545795 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2026-04-07 08:22:53.902467 | localhost | changed
2026-04-07 08:22:53.914458 |
2026-04-07 08:22:53.914576 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2026-04-07 08:22:53.947757 | localhost | ok
2026-04-07 08:22:53.957227 |
2026-04-07 08:22:53.957317 | TASK [Upload logs]
2026-04-07 08:22:53.979533 | localhost | ok
2026-04-07 08:22:54.069565 |
2026-04-07 08:22:54.069689 | TASK [Set zuul-log-path fact]
2026-04-07 08:22:54.090781 | localhost | ok
2026-04-07 08:22:54.102809 |
2026-04-07 08:22:54.102890 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-04-07 08:22:54.133975 | localhost | ok
2026-04-07 08:22:54.145070 |
2026-04-07 08:22:54.145155 | TASK [upload-logs : Create log directories]
2026-04-07 08:22:54.490218 | localhost | changed
2026-04-07 08:22:54.496487 |
2026-04-07 08:22:54.496582 | TASK [upload-logs : Ensure logs are readable before uploading]
2026-04-07 08:22:54.853775 | localhost -> localhost | ok: Runtime: 0:00:00.006109
2026-04-07 08:22:54.860546 |
2026-04-07 08:22:54.860636 | TASK [upload-logs : Upload logs to log server]
2026-04-07 08:22:55.294954 | localhost | Output suppressed because no_log was given
2026-04-07 08:22:55.299857 |
2026-04-07 08:22:55.299944 | LOOP [upload-logs : Compress console log and json output]
2026-04-07 08:22:55.346855 | localhost | skipping: Conditional result was False
2026-04-07 08:22:55.354526 | localhost | skipping: Conditional result was False
2026-04-07 08:22:55.366070 |
2026-04-07 08:22:55.366294 | LOOP [upload-logs : Upload compressed console log and json output]
2026-04-07 08:22:55.414704 | localhost | skipping: Conditional result was False
2026-04-07 08:22:55.415065 |
2026-04-07 08:22:55.423081 | localhost | skipping: Conditional result was False
2026-04-07 08:22:55.440271 |
2026-04-07 08:22:55.440427 | LOOP [upload-logs : Upload console log and json output]