2026-04-07 08:15:00.861778 | Job console starting
2026-04-07 08:15:00.874479 | Updating git repos
2026-04-07 08:15:01.012377 | Cloning repos into workspace
2026-04-07 08:15:06.488779 | Restoring repo states
2026-04-07 08:15:06.512628 | Merging changes
2026-04-07 08:15:08.097547 | Checking out repos
2026-04-07 08:15:08.741649 | Preparing playbooks
2026-04-07 08:15:17.702090 | Running Ansible setup
2026-04-07 08:15:21.966667 | PRE-RUN START: [trusted : github.com/vexxhost/zuul-config/playbooks/base/pre.yaml@main]
2026-04-07 08:15:22.616882 |
2026-04-07 08:15:22.617043 | PLAY [localhost]
2026-04-07 08:15:22.626278 |
2026-04-07 08:15:22.626408 | TASK [Gathering Facts]
2026-04-07 08:15:23.524786 | localhost | ok
2026-04-07 08:15:23.532765 |
2026-04-07 08:15:23.532879 | TASK [Setup log path fact]
2026-04-07 08:15:23.551756 | localhost | ok
2026-04-07 08:15:23.564945 |
2026-04-07 08:15:23.565061 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-04-07 08:15:23.593490 | localhost | ok
2026-04-07 08:15:23.602110 |
2026-04-07 08:15:23.602222 | TASK [emit-job-header : Print job information]
2026-04-07 08:15:23.641313 | # Job Information
2026-04-07 08:15:23.641505 | Ansible Version: 2.16.16
2026-04-07 08:15:23.641556 | Job: atmosphere-molecule-aio-ovn
2026-04-07 08:15:23.641590 | Pipeline: check
2026-04-07 08:15:23.641620 | Executor: 0a8996d2b663
2026-04-07 08:15:23.641651 | Triggered by: https://github.com/vexxhost/atmosphere/pull/3809
2026-04-07 08:15:23.641687 | Event ID: c69c0d70-3259-11f1-8a6c-bfbbc40fe9ae
2026-04-07 08:15:23.645714 |
2026-04-07 08:15:23.645808 | LOOP [emit-job-header : Print node information]
2026-04-07 08:15:23.760510 | localhost | ok:
2026-04-07 08:15:23.760657 | localhost | # Node Information
2026-04-07 08:15:23.760686 | localhost | Inventory Hostname: instance
2026-04-07 08:15:23.760708 | localhost | Hostname: np0000163893
2026-04-07 08:15:23.760729 | localhost | Username: zuul
2026-04-07 08:15:23.760751 | localhost | Distro: Ubuntu 22.04
2026-04-07 08:15:23.760771 | localhost | Provider: yul1
2026-04-07 08:15:23.760791 | localhost | Region: ca-ymq-1
2026-04-07 08:15:23.760810 | localhost | Label: ubuntu-jammy-16
2026-04-07 08:15:23.760828 | localhost | Product Name: OpenStack Nova
2026-04-07 08:15:23.760847 | localhost | Interface IP: 199.204.45.19
2026-04-07 08:15:23.769503 |
2026-04-07 08:15:23.769613 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-04-07 08:15:24.433720 | localhost -> localhost | changed
2026-04-07 08:15:24.439619 |
2026-04-07 08:15:24.439720 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-04-07 08:15:25.366440 | localhost -> localhost | changed
2026-04-07 08:15:25.379047 |
2026-04-07 08:15:25.379210 | PLAY [all]
2026-04-07 08:15:25.388076 |
2026-04-07 08:15:25.388159 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-04-07 08:15:25.604159 | instance -> localhost | ok
2026-04-07 08:15:25.614595 |
2026-04-07 08:15:25.614698 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-04-07 08:15:25.649372 | instance | ok
2026-04-07 08:15:25.662263 | instance | included: /var/lib/zuul/builds/77519decd7264f12b5a98a3e1f2f54a0/trusted/project_1/opendev.org/zuul/zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-04-07 08:15:25.667590 |
2026-04-07 08:15:25.667647 | TASK [add-build-sshkey : Create Temp SSH key]
2026-04-07 08:15:26.413106 | instance -> localhost | Generating public/private rsa key pair.
2026-04-07 08:15:26.413271 | instance -> localhost | Your identification has been saved in /var/lib/zuul/builds/77519decd7264f12b5a98a3e1f2f54a0/work/77519decd7264f12b5a98a3e1f2f54a0_id_rsa
2026-04-07 08:15:26.413301 | instance -> localhost | Your public key has been saved in /var/lib/zuul/builds/77519decd7264f12b5a98a3e1f2f54a0/work/77519decd7264f12b5a98a3e1f2f54a0_id_rsa.pub
2026-04-07 08:15:26.413323 | instance -> localhost | The key fingerprint is:
2026-04-07 08:15:26.413345 | instance -> localhost | SHA256:UoQii6e9MvxdkBVaBsE/n/Is8p2yuDH4x2suy/bq9h0 zuul-build-sshkey
2026-04-07 08:15:26.413375 | instance -> localhost | The key's randomart image is:
2026-04-07 08:15:26.413397 | instance -> localhost | +---[RSA 3072]----+
2026-04-07 08:15:26.413420 | instance -> localhost | | .oo=. |
2026-04-07 08:15:26.413441 | instance -> localhost | | . ..=.. |
2026-04-07 08:15:26.413461 | instance -> localhost | | . o o... |
2026-04-07 08:15:26.413481 | instance -> localhost | |. o o+ |
2026-04-07 08:15:26.413517 | instance -> localhost | | + o. S . |
2026-04-07 08:15:26.413537 | instance -> localhost | |. . . .o o |
2026-04-07 08:15:26.413557 | instance -> localhost | |. o o..+E |
2026-04-07 08:15:26.413577 | instance -> localhost | |o.. +==*ooo |
2026-04-07 08:15:26.413599 | instance -> localhost | | o..+X&B=+ |
2026-04-07 08:15:26.413619 | instance -> localhost | +----[SHA256]-----+
2026-04-07 08:15:26.413667 | instance -> localhost | ok: Runtime: 0:00:00.328661
2026-04-07 08:15:26.418713 |
2026-04-07 08:15:26.418782 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-04-07 08:15:26.449125 | instance | ok
2026-04-07 08:15:26.458219 | instance | included: /var/lib/zuul/builds/77519decd7264f12b5a98a3e1f2f54a0/trusted/project_1/opendev.org/zuul/zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-04-07 08:15:26.465798 |
2026-04-07 08:15:26.465859 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-04-07 08:15:26.490925 | instance | skipping: Conditional result was False
2026-04-07 08:15:26.504306 |
2026-04-07 08:15:26.504447 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-04-07 08:15:26.981600 | instance | changed
2026-04-07 08:15:26.989431 |
2026-04-07 08:15:26.989542 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-04-07 08:15:27.184470 | instance | ok
2026-04-07 08:15:27.190380 |
2026-04-07 08:15:27.190474 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-04-07 08:15:27.691123 | instance | changed
2026-04-07 08:15:27.698341 |
2026-04-07 08:15:27.698436 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-04-07 08:15:28.211400 | instance | changed
2026-04-07 08:15:28.355375 |
2026-04-07 08:15:28.355488 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-04-07 08:15:28.379101 | instance | skipping: Conditional result was False
2026-04-07 08:15:28.388329 |
2026-04-07 08:15:28.388504 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-04-07 08:15:28.789168 | instance -> localhost | changed
2026-04-07 08:15:28.838378 |
2026-04-07 08:15:28.838497 | TASK [add-build-sshkey : Add back temp key]
2026-04-07 08:15:29.153954 | instance -> localhost | Identity added: /var/lib/zuul/builds/77519decd7264f12b5a98a3e1f2f54a0/work/77519decd7264f12b5a98a3e1f2f54a0_id_rsa (zuul-build-sshkey)
2026-04-07 08:15:29.154176 | instance -> localhost | ok: Runtime: 0:00:00.014175
2026-04-07 08:15:29.161458 |
2026-04-07 08:15:29.161535 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-04-07 08:15:29.439638 | instance | ok
2026-04-07 08:15:29.444816 |
2026-04-07 08:15:29.444882 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-04-07 08:15:29.469824 | instance | skipping: Conditional result was False
2026-04-07 08:15:29.485285 |
2026-04-07 08:15:29.485387 | TASK [prepare-workspace : Start zuul_console daemon.]
2026-04-07 08:15:29.800560 | instance | ok
2026-04-07 08:15:29.808721 |
2026-04-07 08:15:29.808879 | TASK [prepare-workspace : Synchronize src repos to workspace directory.]
2026-04-07 08:15:31.531186 | instance | Output suppressed because no_log was given
2026-04-07 08:15:31.543492 |
2026-04-07 08:15:31.543571 | LOOP [ensure-output-dirs : Empty Zuul Output directories by removing them]
2026-04-07 08:15:31.726468 | instance | ok: "logs"
2026-04-07 08:15:31.727127 | instance | ok: All items complete
2026-04-07 08:15:31.727160 |
2026-04-07 08:15:31.875306 | instance | ok: "artifacts"
2026-04-07 08:15:32.034594 | instance | ok: "docs"
2026-04-07 08:15:32.051488 |
2026-04-07 08:15:32.051674 | LOOP [ensure-output-dirs : Ensure Zuul Output directories exist]
2026-04-07 08:15:32.237784 | instance | changed: "logs"
2026-04-07 08:15:32.391961 | instance | changed: "artifacts"
2026-04-07 08:15:32.550921 | instance | changed: "docs"
2026-04-07 08:15:32.567374 |
2026-04-07 08:15:32.568563 | PLAY RECAP
2026-04-07 08:15:32.568650 | instance | ok: 15 changed: 8 unreachable: 0 failed: 0 skipped: 3 rescued: 0 ignored: 0
2026-04-07 08:15:32.568693 | localhost | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2026-04-07 08:15:32.568734 |
2026-04-07 08:15:32.737783 | PRE-RUN END RESULT_NORMAL: [trusted : github.com/vexxhost/zuul-config/playbooks/base/pre.yaml@main]
2026-04-07 08:15:32.751296 | PRE-RUN START: [untrusted : github.com/vexxhost/zuul-jobs/playbooks/molecule/pre.yaml@main]
2026-04-07 08:15:33.349899 |
2026-04-07 08:15:33.350010 | PLAY [all]
2026-04-07 08:15:33.361919 |
2026-04-07 08:15:33.362017 | TASK [setup-uv : Extract archive]
2026-04-07 08:15:35.546479 | instance | changed
2026-04-07 08:15:35.553570 |
2026-04-07 08:15:35.553651 | TASK [setup-uv : Print version]
2026-04-07 08:15:35.913420 | instance | uv 0.8.13
2026-04-07 08:15:36.090691 | instance | ok: Runtime: 0:00:00.011255
2026-04-07 08:15:36.112010 |
2026-04-07 08:15:36.112096 | PLAY RECAP
2026-04-07 08:15:36.112147 | instance | ok: 2 changed: 2 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2026-04-07 08:15:36.112173 |
2026-04-07 08:15:36.249618 | PRE-RUN END RESULT_NORMAL: [untrusted : github.com/vexxhost/zuul-jobs/playbooks/molecule/pre.yaml@main]
2026-04-07 08:15:36.260999 | PRE-RUN START: [untrusted : github.com/vexxhost/atmosphere/test-playbooks/molecule/pre.yml@main]
2026-04-07 08:15:36.868860 |
2026-04-07 08:15:36.869100 | PLAY [all]
2026-04-07 08:15:36.884379 |
2026-04-07 08:15:36.884537 | TASK [Install "jq" for log collection]
2026-04-07 08:15:47.329620 | instance | changed
2026-04-07 08:15:47.332127 |
2026-04-07 08:15:47.332198 | PLAY RECAP
2026-04-07 08:15:47.332266 | instance | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2026-04-07 08:15:47.332327 |
2026-04-07 08:15:47.445204 | PRE-RUN END RESULT_NORMAL: [untrusted : github.com/vexxhost/atmosphere/test-playbooks/molecule/pre.yml@main]
2026-04-07 08:15:47.458012 | RUN START: [untrusted : github.com/vexxhost/zuul-jobs/playbooks/molecule/run.yaml@main]
2026-04-07 08:15:48.054647 |
2026-04-07 08:15:48.054795 | PLAY [all]
2026-04-07 08:15:48.069138 |
2026-04-07 08:15:48.069241 | TASK [Copy inventory file for Zuul]
2026-04-07 08:15:48.882748 | instance | changed
2026-04-07 08:15:48.890187 |
2026-04-07 08:15:48.890354 | TASK [Switch "ansible_host" to private IP]
2026-04-07 08:15:49.187745 | instance | changed: 1 replacements made
2026-04-07 08:15:49.195029 |
2026-04-07 08:15:49.195112 | TASK [Run Molecule scenario]
2026-04-07 08:15:49.555829 | instance | Using CPython 3.10.12 interpreter at: /usr/bin/python3
2026-04-07 08:15:49.555938 | instance | Creating virtual environment at: .venv
2026-04-07 08:15:49.582087 | instance | Building atmosphere @ file:///home/zuul/src/github.com/vexxhost/atmosphere
2026-04-07 08:15:49.608948 | instance | Downloading pydantic-core (2.0MiB)
2026-04-07 08:15:49.609415 | instance | Downloading pygments (1.2MiB)
2026-04-07 08:15:49.609700 | instance | Downloading kubernetes (1.9MiB)
2026-04-07 08:15:49.609972 | instance | Downloading rjsonnet (1.2MiB)
2026-04-07 08:15:49.610763 | instance | Downloading setuptools (1.1MiB)
2026-04-07 08:15:49.611152 | instance | Downloading ansible-core (2.1MiB)
2026-04-07 08:15:49.611731 | instance | Downloading openstacksdk (1.7MiB)
2026-04-07 08:15:49.612894 | instance | Downloading cryptography (4.2MiB)
2026-04-07 08:15:49.614335 | instance | Downloading netaddr (2.2MiB)
2026-04-07 08:15:49.992897 | instance | Building pyperclip==1.9.0
2026-04-07 08:15:50.006536 | instance | Downloading rjsonnet
2026-04-07 08:15:50.115965 | instance | Downloading pydantic-core
2026-04-07 08:15:50.161833 | instance | Downloading netaddr
2026-04-07 08:15:50.180030 | instance | Downloading pygments
2026-04-07 08:15:50.192389 | instance | Downloading cryptography
2026-04-07 08:15:50.233971 | instance | Downloading setuptools
2026-04-07 08:15:50.301306 | instance | Downloading kubernetes
2026-04-07 08:15:50.336265 | instance | Downloading ansible-core
2026-04-07 08:15:50.371684 | instance | Downloading openstacksdk
2026-04-07 08:15:50.767574 | instance | Built pyperclip==1.9.0
2026-04-07 08:15:51.026477 | instance | Built atmosphere @ file:///home/zuul/src/github.com/vexxhost/atmosphere
2026-04-07 08:15:51.068686 | instance | Installed 83 packages in 39ms
2026-04-07 08:15:51.734265 | instance | WARNING Molecule scenarios should migrate to 'extensions/molecule'
2026-04-07 08:15:52.461369 | instance | INFO [aio > discovery] scenario test matrix: dependency, cleanup, destroy, syntax, create, prepare, converge, idempotence, side_effect, verify, cleanup, destroy
2026-04-07 08:15:52.461491 | instance | INFO [aio > prerun] Performing prerun with role_name_check=0...
2026-04-07 08:17:00.223415 | instance | INFO [aio > dependency] Executing
2026-04-07 08:17:00.223553 | instance | WARNING [aio > dependency] Missing roles requirements file: requirements.yml
2026-04-07 08:17:00.223741 | instance | WARNING [aio > dependency] Missing collections requirements file: collections.yml
2026-04-07 08:17:00.223868 | instance | WARNING [aio > dependency] Executed: 2 missing (Remove from test_sequence to suppress)
2026-04-07 08:17:00.233983 | instance | INFO [aio > cleanup] Executing
2026-04-07 08:17:00.234287 | instance | WARNING [aio > cleanup] Executed: Missing playbook (Remove from test_sequence to suppress)
2026-04-07 08:17:00.243850 | instance | INFO [aio > destroy] Executing
2026-04-07 08:17:00.243885 | instance | WARNING [aio > destroy] Skipping, '--destroy=never' requested.
2026-04-07 08:17:00.243992 | instance | INFO [aio > destroy] Executed: Successful 2026-04-07 08:17:00.253629 | instance | INFO [aio > syntax] Executing 2026-04-07 08:17:03.246085 | instance | 2026-04-07 08:17:03.246199 | instance | playbook: /home/zuul/src/github.com/vexxhost/atmosphere/molecule/aio/converge.yml 2026-04-07 08:17:03.384715 | instance | INFO [aio > syntax] Executed: Successful 2026-04-07 08:17:03.396651 | instance | INFO [aio > create] Executing 2026-04-07 08:17:03.399194 | instance | WARNING [aio > create] Executed: Missing playbook (Remove from test_sequence to suppress) 2026-04-07 08:17:03.408864 | instance | INFO [aio > prepare] Executing 2026-04-07 08:17:04.252462 | instance | 2026-04-07 08:17:04.252679 | instance | PLAY [Prepare] ***************************************************************** 2026-04-07 08:17:04.252927 | instance | 2026-04-07 08:17:04.253229 | instance | TASK [Gathering Facts] ********************************************************* 2026-04-07 08:17:04.253476 | instance | Tuesday 07 April 2026 08:17:04 +0000 (0:00:00.032) 0:00:00.032 ********* 2026-04-07 08:17:05.277097 | instance | ok: [instance] 2026-04-07 08:17:05.277267 | instance | 2026-04-07 08:17:05.277563 | instance | TASK [Configure short hostname] ************************************************ 2026-04-07 08:17:05.277875 | instance | Tuesday 07 April 2026 08:17:05 +0000 (0:00:01.024) 0:00:01.056 ********* 2026-04-07 08:17:05.998374 | instance | changed: [instance] 2026-04-07 08:17:05.998497 | instance | 2026-04-07 08:17:05.998627 | instance | TASK [Ensure hostname inside hosts file] *************************************** 2026-04-07 08:17:05.998959 | instance | Tuesday 07 April 2026 08:17:05 +0000 (0:00:00.721) 0:00:01.778 ********* 2026-04-07 08:17:06.303016 | instance | changed: [instance] 2026-04-07 08:17:06.303248 | instance | 2026-04-07 08:17:06.303530 | instance | TASK [Install "dirmngr" for GPG keyserver operations] ************************** 2026-04-07 
08:17:06.303861 | instance | Tuesday 07 April 2026 08:17:06 +0000 (0:00:00.304) 0:00:02.083 ********* 2026-04-07 08:17:07.800581 | instance | ok: [instance] 2026-04-07 08:17:07.800831 | instance | 2026-04-07 08:17:07.801153 | instance | TASK [Purge "snapd" package] *************************************************** 2026-04-07 08:17:07.801533 | instance | Tuesday 07 April 2026 08:17:07 +0000 (0:00:01.497) 0:00:03.580 ********* 2026-04-07 08:17:08.737215 | instance | ok: [instance] 2026-04-07 08:17:08.737427 | instance | 2026-04-07 08:17:08.737725 | instance | PLAY [Generate workspace for Atmosphere] *************************************** 2026-04-07 08:17:08.737958 | instance | 2026-04-07 08:17:08.738224 | instance | TASK [Create folders for workspace] ******************************************** 2026-04-07 08:17:08.738499 | instance | Tuesday 07 April 2026 08:17:08 +0000 (0:00:00.936) 0:00:04.517 ********* 2026-04-07 08:17:09.973353 | instance | ok: [localhost] => (item=group_vars) 2026-04-07 08:17:09.973433 | instance | ok: [localhost] => (item=group_vars/all) 2026-04-07 08:17:09.973562 | instance | changed: [localhost] => (item=group_vars/controllers) 2026-04-07 08:17:09.973687 | instance | changed: [localhost] => (item=group_vars/cephs) 2026-04-07 08:17:09.973808 | instance | changed: [localhost] => (item=group_vars/computes) 2026-04-07 08:17:09.973946 | instance | ok: [localhost] => (item=host_vars) 2026-04-07 08:17:09.974048 | instance | 2026-04-07 08:17:09.974173 | instance | PLAY [Generate Ceph control plane configuration for workspace] ***************** 2026-04-07 08:17:09.974287 | instance | 2026-04-07 08:17:09.974410 | instance | TASK [Ensure the Ceph control plane configuration file exists] ***************** 2026-04-07 08:17:09.974532 | instance | Tuesday 07 April 2026 08:17:09 +0000 (0:00:01.236) 0:00:05.753 ********* 2026-04-07 08:17:10.203833 | instance | changed: [localhost] 2026-04-07 08:17:10.204061 | instance | 2026-04-07 08:17:10.204334 | 
instance | TASK [Load the current Ceph control plane configuration into a variable] ******* 2026-04-07 08:17:10.204654 | instance | Tuesday 07 April 2026 08:17:10 +0000 (0:00:00.230) 0:00:05.984 ********* 2026-04-07 08:17:10.252079 | instance | ok: [localhost] 2026-04-07 08:17:10.252322 | instance | 2026-04-07 08:17:10.252630 | instance | TASK [Generate Ceph control plane values for missing variables] **************** 2026-04-07 08:17:10.252906 | instance | Tuesday 07 April 2026 08:17:10 +0000 (0:00:00.048) 0:00:06.032 ********* 2026-04-07 08:17:10.315062 | instance | ok: [localhost] => (item={'key': 'ceph_fsid', 'value': '3b2c2574-f01d-5e08-aaf2-394c5021871d'}) 2026-04-07 08:17:10.315237 | instance | ok: [localhost] => (item={'key': 'ceph_mon_public_network', 'value': '10.96.240.0/24'}) 2026-04-07 08:17:10.315409 | instance | 2026-04-07 08:17:10.315586 | instance | TASK [Write new Ceph control plane configuration file to disk] ***************** 2026-04-07 08:17:10.315750 | instance | Tuesday 07 April 2026 08:17:10 +0000 (0:00:00.063) 0:00:06.095 ********* 2026-04-07 08:17:10.899216 | instance | changed: [localhost] 2026-04-07 08:17:10.899484 | instance | 2026-04-07 08:17:10.899836 | instance | PLAY [Generate Ceph OSD configuration for workspace] *************************** 2026-04-07 08:17:10.900096 | instance | 2026-04-07 08:17:10.900373 | instance | TASK [Ensure the Ceph OSDs configuration file exists] ************************** 2026-04-07 08:17:10.900650 | instance | Tuesday 07 April 2026 08:17:10 +0000 (0:00:00.583) 0:00:06.679 ********* 2026-04-07 08:17:11.160073 | instance | changed: [localhost] 2026-04-07 08:17:11.160340 | instance | 2026-04-07 08:17:11.160667 | instance | TASK [Load the current Ceph OSDs configuration into a variable] **************** 2026-04-07 08:17:11.160953 | instance | Tuesday 07 April 2026 08:17:11 +0000 (0:00:00.260) 0:00:06.939 ********* 2026-04-07 08:17:11.192216 | instance | ok: [localhost] 2026-04-07 08:17:11.192468 | instance | 
2026-04-07 08:17:11.192745 | instance | TASK [Generate Ceph OSDs values for missing variables] ************************* 2026-04-07 08:17:11.193018 | instance | Tuesday 07 April 2026 08:17:11 +0000 (0:00:00.032) 0:00:06.972 ********* 2026-04-07 08:17:11.224809 | instance | ok: [localhost] => (item={'key': 'ceph_osd_devices', 'value': ['/dev/vdb', '/dev/vdc', '/dev/vdd']}) 2026-04-07 08:17:11.225059 | instance | 2026-04-07 08:17:11.225345 | instance | TASK [Write new Ceph OSDs configuration file to disk] ************************** 2026-04-07 08:17:11.225689 | instance | Tuesday 07 April 2026 08:17:11 +0000 (0:00:00.032) 0:00:07.004 ********* 2026-04-07 08:17:11.592511 | instance | changed: [localhost] 2026-04-07 08:17:11.592761 | instance | 2026-04-07 08:17:11.593042 | instance | PLAY [Generate Kubernetes configuration for workspace] ************************* 2026-04-07 08:17:11.593295 | instance | 2026-04-07 08:17:11.593567 | instance | TASK [Ensure the Kubernetes configuration file exists] ************************* 2026-04-07 08:17:11.593877 | instance | Tuesday 07 April 2026 08:17:11 +0000 (0:00:00.368) 0:00:07.373 ********* 2026-04-07 08:17:11.803967 | instance | changed: [localhost] 2026-04-07 08:17:11.804232 | instance | 2026-04-07 08:17:11.804529 | instance | TASK [Load the current Kubernetes configuration into a variable] *************** 2026-04-07 08:17:11.804822 | instance | Tuesday 07 April 2026 08:17:11 +0000 (0:00:00.210) 0:00:07.583 ********* 2026-04-07 08:17:11.837210 | instance | ok: [localhost] 2026-04-07 08:17:11.837499 | instance | 2026-04-07 08:17:11.837788 | instance | TASK [Generate Kubernetes values for missing variables] ************************ 2026-04-07 08:17:11.838127 | instance | Tuesday 07 April 2026 08:17:11 +0000 (0:00:00.033) 0:00:07.617 ********* 2026-04-07 08:17:11.880878 | instance | ok: [localhost] => (item={'key': 'kubernetes_hostname', 'value': '10.96.240.10'}) 2026-04-07 08:17:11.881176 | instance | ok: [localhost] => 
(item={'key': 'kubernetes_keepalived_vrid', 'value': 42}) 2026-04-07 08:17:11.881578 | instance | ok: [localhost] => (item={'key': 'kubernetes_keepalived_vip', 'value': '10.96.240.10'}) 2026-04-07 08:17:11.881961 | instance | 2026-04-07 08:17:11.882366 | instance | TASK [Write new Kubernetes configuration file to disk] ************************* 2026-04-07 08:17:11.882700 | instance | Tuesday 07 April 2026 08:17:11 +0000 (0:00:00.043) 0:00:07.661 ********* 2026-04-07 08:17:12.268085 | instance | changed: [localhost] 2026-04-07 08:17:12.268336 | instance | 2026-04-07 08:17:12.268664 | instance | PLAY [Generate Keepalived configuration for workspace] ************************* 2026-04-07 08:17:12.268933 | instance | 2026-04-07 08:17:12.269208 | instance | TASK [Ensure the Keeaplived configuration file exists] ************************* 2026-04-07 08:17:12.269483 | instance | Tuesday 07 April 2026 08:17:12 +0000 (0:00:00.387) 0:00:08.048 ********* 2026-04-07 08:17:12.479584 | instance | changed: [localhost] 2026-04-07 08:17:12.479839 | instance | 2026-04-07 08:17:12.480140 | instance | TASK [Load the current Keepalived configuration into a variable] *************** 2026-04-07 08:17:12.480454 | instance | Tuesday 07 April 2026 08:17:12 +0000 (0:00:00.211) 0:00:08.259 ********* 2026-04-07 08:17:12.514034 | instance | ok: [localhost] 2026-04-07 08:17:12.514285 | instance | 2026-04-07 08:17:12.514579 | instance | TASK [Generate Keepalived values for missing variables] ************************ 2026-04-07 08:17:12.514922 | instance | Tuesday 07 April 2026 08:17:12 +0000 (0:00:00.035) 0:00:08.294 ********* 2026-04-07 08:17:12.554981 | instance | ok: [localhost] => (item={'key': 'keepalived_interface', 'value': 'br-ex'}) 2026-04-07 08:17:12.555249 | instance | ok: [localhost] => (item={'key': 'keepalived_vip', 'value': '10.96.250.10'}) 2026-04-07 08:17:12.555495 | instance | 2026-04-07 08:17:12.555763 | instance | TASK [Write new Keepalived configuration file to disk] 
************************* 2026-04-07 08:17:12.556030 | instance | Tuesday 07 April 2026 08:17:12 +0000 (0:00:00.040) 0:00:08.335 ********* 2026-04-07 08:17:12.943936 | instance | changed: [localhost] 2026-04-07 08:17:12.944190 | instance | 2026-04-07 08:17:12.944478 | instance | PLAY [Generate endpoints for workspace] **************************************** 2026-04-07 08:17:12.944753 | instance | 2026-04-07 08:17:12.945022 | instance | TASK [Gathering Facts] ********************************************************* 2026-04-07 08:17:12.945332 | instance | Tuesday 07 April 2026 08:17:12 +0000 (0:00:00.388) 0:00:08.723 ********* 2026-04-07 08:17:13.707492 | instance | ok: [localhost] 2026-04-07 08:17:13.707836 | instance | 2026-04-07 08:17:13.708154 | instance | TASK [Ensure the endpoints file exists] **************************************** 2026-04-07 08:17:13.708466 | instance | Tuesday 07 April 2026 08:17:13 +0000 (0:00:00.763) 0:00:09.487 ********* 2026-04-07 08:17:13.919345 | instance | changed: [localhost] 2026-04-07 08:17:13.919619 | instance | 2026-04-07 08:17:13.920023 | instance | TASK [Load the current endpoints into a variable] ****************************** 2026-04-07 08:17:13.920334 | instance | Tuesday 07 April 2026 08:17:13 +0000 (0:00:00.212) 0:00:09.699 ********* 2026-04-07 08:17:13.954662 | instance | ok: [localhost] 2026-04-07 08:17:13.954962 | instance | 2026-04-07 08:17:13.955270 | instance | TASK [Generate endpoint skeleton for missing variables] ************************ 2026-04-07 08:17:13.955576 | instance | Tuesday 07 April 2026 08:17:13 +0000 (0:00:00.035) 0:00:09.734 ********* 2026-04-07 08:17:14.771678 | instance | ok: [localhost] => (item=keycloak_host) 2026-04-07 08:17:14.772272 | instance | ok: [localhost] => (item=kube_prometheus_stack_grafana_host) 2026-04-07 08:17:14.772809 | instance | ok: [localhost] => (item=kube_prometheus_stack_alertmanager_host) 2026-04-07 08:17:14.773314 | instance | ok: [localhost] => 
(item=kube_prometheus_stack_prometheus_host) 2026-04-07 08:17:14.773769 | instance | ok: [localhost] => (item=openstack_helm_endpoints_region_name) 2026-04-07 08:17:14.774105 | instance | ok: [localhost] => (item=openstack_helm_endpoints_keystone_api_host) 2026-04-07 08:17:14.774427 | instance | ok: [localhost] => (item=openstack_helm_endpoints_glance_api_host) 2026-04-07 08:17:14.774744 | instance | ok: [localhost] => (item=openstack_helm_endpoints_cinder_api_host) 2026-04-07 08:17:14.775100 | instance | ok: [localhost] => (item=openstack_helm_endpoints_placement_api_host) 2026-04-07 08:17:14.775444 | instance | ok: [localhost] => (item=openstack_helm_endpoints_barbican_api_host) 2026-04-07 08:17:14.775770 | instance | ok: [localhost] => (item=openstack_helm_endpoints_neutron_api_host) 2026-04-07 08:17:14.776091 | instance | ok: [localhost] => (item=openstack_helm_endpoints_nova_api_host) 2026-04-07 08:17:14.776405 | instance | ok: [localhost] => (item=openstack_helm_endpoints_nova_novnc_host) 2026-04-07 08:17:14.776752 | instance | ok: [localhost] => (item=openstack_helm_endpoints_ironic_api_host) 2026-04-07 08:17:14.777069 | instance | ok: [localhost] => (item=openstack_helm_endpoints_designate_api_host) 2026-04-07 08:17:14.777412 | instance | ok: [localhost] => (item=openstack_helm_endpoints_octavia_api_host) 2026-04-07 08:17:14.777713 | instance | ok: [localhost] => (item=openstack_helm_endpoints_magnum_api_host) 2026-04-07 08:17:14.778025 | instance | ok: [localhost] => (item=openstack_helm_endpoints_magnum_registry_host) 2026-04-07 08:17:14.778340 | instance | ok: [localhost] => (item=openstack_helm_endpoints_heat_api_host) 2026-04-07 08:17:14.778655 | instance | ok: [localhost] => (item=openstack_helm_endpoints_heat_cfn_api_host) 2026-04-07 08:17:14.778995 | instance | ok: [localhost] => (item=openstack_helm_endpoints_horizon_api_host) 2026-04-07 08:17:14.779315 | instance | ok: [localhost] => (item=openstack_helm_endpoints_rgw_host) 2026-04-07 
08:17:14.779619 | instance | ok: [localhost] => (item=openstack_helm_endpoints_manila_api_host) 2026-04-07 08:17:14.779896 | instance | 2026-04-07 08:17:14.780193 | instance | TASK [Write new endpoints file to disk] **************************************** 2026-04-07 08:17:14.780548 | instance | Tuesday 07 April 2026 08:17:14 +0000 (0:00:00.816) 0:00:10.551 ********* 2026-04-07 08:17:15.155833 | instance | changed: [localhost] 2026-04-07 08:17:15.156127 | instance | 2026-04-07 08:17:15.156352 | instance | TASK [Ensure the endpoints file exists] **************************************** 2026-04-07 08:17:15.156567 | instance | Tuesday 07 April 2026 08:17:15 +0000 (0:00:00.383) 0:00:10.935 ********* 2026-04-07 08:17:15.379338 | instance | changed: [localhost] 2026-04-07 08:17:15.379606 | instance | 2026-04-07 08:17:15.379844 | instance | PLAY [Generate Neutron configuration for workspace] **************************** 2026-04-07 08:17:15.380049 | instance | 2026-04-07 08:17:15.380262 | instance | TASK [Ensure the Neutron configuration file exists] **************************** 2026-04-07 08:17:15.380475 | instance | Tuesday 07 April 2026 08:17:15 +0000 (0:00:00.221) 0:00:11.157 ********* 2026-04-07 08:17:15.594362 | instance | changed: [localhost] 2026-04-07 08:17:15.594417 | instance | 2026-04-07 08:17:15.594423 | instance | TASK [Load the current Neutron configuration into a variable] ****************** 2026-04-07 08:17:15.594447 | instance | Tuesday 07 April 2026 08:17:15 +0000 (0:00:00.217) 0:00:11.374 ********* 2026-04-07 08:17:15.629991 | instance | ok: [localhost] 2026-04-07 08:17:15.630023 | instance | 2026-04-07 08:17:15.630030 | instance | TASK [Generate Neutron values for missing variables] *************************** 2026-04-07 08:17:15.630035 | instance | Tuesday 07 April 2026 08:17:15 +0000 (0:00:00.036) 0:00:11.410 ********* 2026-04-07 08:17:15.672882 | instance | ok: [localhost] => (item={'key': 'neutron_networks', 'value': [{'name': 'public', 'external': 
True, 'shared': True, 'mtu_size': 1500, 'port_security_enabled': True, 'provider_network_type': 'flat', 'provider_physical_network': 'external', 'subnets': [{'name': 'public-subnet', 'cidr': '10.96.250.0/24', 'gateway_ip': '10.96.250.10', 'allocation_pool_start': '10.96.250.200', 'allocation_pool_end': '10.96.250.220', 'enable_dhcp': True}]}]}) 2026-04-07 08:17:15.672937 | instance | 2026-04-07 08:17:15.672944 | instance | TASK [Write new Neutron configuration file to disk] **************************** 2026-04-07 08:17:15.672952 | instance | Tuesday 07 April 2026 08:17:15 +0000 (0:00:00.042) 0:00:11.452 ********* 2026-04-07 08:17:16.062613 | instance | changed: [localhost] 2026-04-07 08:17:16.062667 | instance | 2026-04-07 08:17:16.062672 | instance | PLAY [Generate Nova configuration for workspace] ******************************* 2026-04-07 08:17:16.062677 | instance | 2026-04-07 08:17:16.062681 | instance | TASK [Ensure the Nova configuration file exists] ******************************* 2026-04-07 08:17:16.062685 | instance | Tuesday 07 April 2026 08:17:16 +0000 (0:00:00.389) 0:00:11.842 ********* 2026-04-07 08:17:16.276626 | instance | changed: [localhost] 2026-04-07 08:17:16.276678 | instance | 2026-04-07 08:17:16.276684 | instance | TASK [Load the current Nova configuration into a variable] ********************* 2026-04-07 08:17:16.276688 | instance | Tuesday 07 April 2026 08:17:16 +0000 (0:00:00.214) 0:00:12.057 ********* 2026-04-07 08:17:16.315036 | instance | ok: [localhost] 2026-04-07 08:17:16.315122 | instance | 2026-04-07 08:17:16.315173 | instance | TASK [Generate Nova values for missing variables] ****************************** 2026-04-07 08:17:16.315326 | instance | Tuesday 07 April 2026 08:17:16 +0000 (0:00:00.038) 0:00:12.095 ********* 2026-04-07 08:17:16.357674 | instance | ok: [localhost] => (item={'key': 'nova_flavors', 'value': [{'name': 'm1.tiny', 'ram': 512, 'disk': 1, 'vcpus': 1}, {'name': 'm1.small', 'ram': 2048, 'disk': 20, 'vcpus': 1}, 
{'name': 'm1.medium', 'ram': 4096, 'disk': 40, 'vcpus': 2}, {'name': 'm1.large', 'ram': 8192, 'disk': 80, 'vcpus': 4}, {'name': 'm1.xlarge', 'ram': 16384, 'disk': 160, 'vcpus': 8}]}) 2026-04-07 08:17:16.357936 | instance | 2026-04-07 08:17:16.358208 | instance | TASK [Write new Nova configuration file to disk] ******************************* 2026-04-07 08:17:16.358508 | instance | Tuesday 07 April 2026 08:17:16 +0000 (0:00:00.042) 0:00:12.138 ********* 2026-04-07 08:17:16.731218 | instance | changed: [localhost] 2026-04-07 08:17:16.731521 | instance | 2026-04-07 08:17:16.731867 | instance | PLAY [Generate secrets for workspace] ****************************************** 2026-04-07 08:17:16.732192 | instance | 2026-04-07 08:17:16.732526 | instance | TASK [Ensure the secrets file exists] ****************************************** 2026-04-07 08:17:16.732817 | instance | Tuesday 07 April 2026 08:17:16 +0000 (0:00:00.373) 0:00:12.511 ********* 2026-04-07 08:17:16.939596 | instance | changed: [localhost] 2026-04-07 08:17:16.939876 | instance | 2026-04-07 08:17:16.940263 | instance | TASK [Load the current secrets into a variable] ******************************** 2026-04-07 08:17:16.940971 | instance | Tuesday 07 April 2026 08:17:16 +0000 (0:00:00.207) 0:00:12.719 ********* 2026-04-07 08:17:16.976193 | instance | ok: [localhost] 2026-04-07 08:17:16.976477 | instance | 2026-04-07 08:17:16.976779 | instance | TASK [Generate secrets for missing variables] ********************************** 2026-04-07 08:17:16.977075 | instance | Tuesday 07 April 2026 08:17:16 +0000 (0:00:00.037) 0:00:12.756 ********* 2026-04-07 08:17:17.377368 | instance | ok: [localhost] => (item=heat_auth_encryption_key) 2026-04-07 08:17:17.377787 | instance | ok: [localhost] => (item=keepalived_password) 2026-04-07 08:17:17.378136 | instance | ok: [localhost] => (item=keycloak_admin_password) 2026-04-07 08:17:17.378553 | instance | ok: [localhost] => (item=keycloak_database_password) 2026-04-07 
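The nova_flavors items logged above fully spell out the flavor set; assuming the generated workspace file mirrors the variable name shown in the task output (the actual file name and layout are not visible in this log), they correspond to roughly this YAML:

```yaml
# Hypothetical rendering of the logged nova_flavors items; data is taken
# verbatim from the "Generate Nova values for missing variables" output.
nova_flavors:
  - { name: m1.tiny,   ram: 512,   disk: 1,   vcpus: 1 }
  - { name: m1.small,  ram: 2048,  disk: 20,  vcpus: 1 }
  - { name: m1.medium, ram: 4096,  disk: 40,  vcpus: 2 }
  - { name: m1.large,  ram: 8192,  disk: 80,  vcpus: 4 }
  - { name: m1.xlarge, ram: 16384, disk: 160, vcpus: 8 }
```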
08:17:17.378956 | instance | ok: [localhost] => (item=keystone_keycloak_client_secret) 2026-04-07 08:17:17.379299 | instance | ok: [localhost] => (item=keystone_oidc_crypto_passphrase) 2026-04-07 08:17:17.379639 | instance | ok: [localhost] => (item=kube_prometheus_stack_grafana_admin_password) 2026-04-07 08:17:17.379976 | instance | ok: [localhost] => (item=octavia_heartbeat_key) 2026-04-07 08:17:17.380313 | instance | ok: [localhost] => (item=openstack_helm_endpoints_rabbitmq_admin_password) 2026-04-07 08:17:17.380648 | instance | ok: [localhost] => (item=openstack_helm_endpoints_memcached_secret_key) 2026-04-07 08:17:17.380980 | instance | ok: [localhost] => (item=openstack_helm_endpoints_keystone_admin_password) 2026-04-07 08:17:17.381313 | instance | ok: [localhost] => (item=openstack_helm_endpoints_keystone_mariadb_password) 2026-04-07 08:17:17.381646 | instance | ok: [localhost] => (item=openstack_helm_endpoints_keystone_rabbitmq_password) 2026-04-07 08:17:17.381976 | instance | ok: [localhost] => (item=openstack_helm_endpoints_glance_keystone_password) 2026-04-07 08:17:17.382307 | instance | ok: [localhost] => (item=openstack_helm_endpoints_glance_mariadb_password) 2026-04-07 08:17:17.382669 | instance | ok: [localhost] => (item=openstack_helm_endpoints_glance_rabbitmq_password) 2026-04-07 08:17:17.383035 | instance | ok: [localhost] => (item=openstack_helm_endpoints_cinder_keystone_password) 2026-04-07 08:17:17.383371 | instance | ok: [localhost] => (item=openstack_helm_endpoints_cinder_mariadb_password) 2026-04-07 08:17:17.383706 | instance | ok: [localhost] => (item=openstack_helm_endpoints_cinder_rabbitmq_password) 2026-04-07 08:17:17.384038 | instance | ok: [localhost] => (item=openstack_helm_endpoints_placement_keystone_password) 2026-04-07 08:17:17.384370 | instance | ok: [localhost] => (item=openstack_helm_endpoints_placement_mariadb_password) 2026-04-07 08:17:17.384700 | instance | ok: [localhost] => 
(item=openstack_helm_endpoints_barbican_keystone_password) 2026-04-07 08:17:17.385030 | instance | ok: [localhost] => (item=openstack_helm_endpoints_barbican_mariadb_password) 2026-04-07 08:17:17.385359 | instance | ok: [localhost] => (item=openstack_helm_endpoints_neutron_keystone_password) 2026-04-07 08:17:17.385693 | instance | ok: [localhost] => (item=openstack_helm_endpoints_neutron_mariadb_password) 2026-04-07 08:17:17.386024 | instance | ok: [localhost] => (item=openstack_helm_endpoints_neutron_rabbitmq_password) 2026-04-07 08:17:17.386377 | instance | ok: [localhost] => (item=openstack_helm_endpoints_neutron_metadata_secret) 2026-04-07 08:17:17.386716 | instance | ok: [localhost] => (item=openstack_helm_endpoints_nova_keystone_password) 2026-04-07 08:17:17.387074 | instance | ok: [localhost] => (item=openstack_helm_endpoints_nova_mariadb_password) 2026-04-07 08:17:17.387411 | instance | ok: [localhost] => (item=openstack_helm_endpoints_nova_rabbitmq_password) 2026-04-07 08:17:17.387740 | instance | ok: [localhost] => (item=openstack_helm_endpoints_ironic_keystone_password) 2026-04-07 08:17:17.388069 | instance | ok: [localhost] => (item=openstack_helm_endpoints_ironic_mariadb_password) 2026-04-07 08:17:17.388400 | instance | ok: [localhost] => (item=openstack_helm_endpoints_ironic_rabbitmq_password) 2026-04-07 08:17:17.388731 | instance | ok: [localhost] => (item=openstack_helm_endpoints_designate_keystone_password) 2026-04-07 08:17:17.389061 | instance | ok: [localhost] => (item=openstack_helm_endpoints_designate_mariadb_password) 2026-04-07 08:17:17.389391 | instance | ok: [localhost] => (item=openstack_helm_endpoints_designate_rabbitmq_password) 2026-04-07 08:17:17.389723 | instance | ok: [localhost] => (item=openstack_helm_endpoints_octavia_keystone_password) 2026-04-07 08:17:17.390052 | instance | ok: [localhost] => (item=openstack_helm_endpoints_octavia_mariadb_password) 2026-04-07 08:17:17.390436 | instance | ok: [localhost] => 
(item=openstack_helm_endpoints_octavia_rabbitmq_password) 2026-04-07 08:17:17.390774 | instance | ok: [localhost] => (item=openstack_helm_endpoints_magnum_keystone_password) 2026-04-07 08:17:17.391134 | instance | ok: [localhost] => (item=openstack_helm_endpoints_magnum_mariadb_password) 2026-04-07 08:17:17.391437 | instance | ok: [localhost] => (item=openstack_helm_endpoints_magnum_rabbitmq_password) 2026-04-07 08:17:17.391592 | instance | ok: [localhost] => (item=openstack_helm_endpoints_heat_keystone_password) 2026-04-07 08:17:17.391760 | instance | ok: [localhost] => (item=openstack_helm_endpoints_heat_trustee_keystone_password) 2026-04-07 08:17:17.391914 | instance | ok: [localhost] => (item=openstack_helm_endpoints_heat_stack_user_keystone_password) 2026-04-07 08:17:17.392063 | instance | ok: [localhost] => (item=openstack_helm_endpoints_heat_mariadb_password) 2026-04-07 08:17:17.392212 | instance | ok: [localhost] => (item=openstack_helm_endpoints_heat_rabbitmq_password) 2026-04-07 08:17:17.392361 | instance | ok: [localhost] => (item=openstack_helm_endpoints_horizon_mariadb_password) 2026-04-07 08:17:17.392513 | instance | ok: [localhost] => (item=openstack_helm_endpoints_tempest_keystone_password) 2026-04-07 08:17:17.392668 | instance | ok: [localhost] => (item=openstack_helm_endpoints_openstack_exporter_keystone_password) 2026-04-07 08:17:17.392819 | instance | ok: [localhost] => (item=openstack_helm_endpoints_rgw_keystone_password) 2026-04-07 08:17:17.392970 | instance | ok: [localhost] => (item=openstack_helm_endpoints_manila_keystone_password) 2026-04-07 08:17:17.393119 | instance | ok: [localhost] => (item=openstack_helm_endpoints_manila_mariadb_password) 2026-04-07 08:17:17.393267 | instance | ok: [localhost] => (item=openstack_helm_endpoints_staffeln_mariadb_password) 2026-04-07 08:17:17.393409 | instance | 2026-04-07 08:17:17.393559 | instance | TASK [Generate base64 encoded secrets] ***************************************** 2026-04-07 
08:17:17.393711 | instance | Tuesday 07 April 2026 08:17:17 +0000 (0:00:00.401) 0:00:13.157 ********* 2026-04-07 08:17:17.431854 | instance | ok: [localhost] => (item=barbican_kek) 2026-04-07 08:17:17.432275 | instance | 2026-04-07 08:17:17.432645 | instance | TASK [Generate temporary files for generating keys for missing variables] ****** 2026-04-07 08:17:17.433004 | instance | Tuesday 07 April 2026 08:17:17 +0000 (0:00:00.052) 0:00:13.210 ********* 2026-04-07 08:17:17.879068 | instance | changed: [localhost] => (item=manila_ssh_key) 2026-04-07 08:17:17.879491 | instance | changed: [localhost] => (item=nova_ssh_key) 2026-04-07 08:17:17.879650 | instance | 2026-04-07 08:17:17.879814 | instance | TASK [Generate SSH keys for missing variables] ********************************* 2026-04-07 08:17:17.879975 | instance | Tuesday 07 April 2026 08:17:17 +0000 (0:00:00.445) 0:00:13.655 ********* 2026-04-07 08:17:20.365571 | instance | changed: [localhost] => (item=manila_ssh_key) 2026-04-07 08:17:20.365627 | instance | changed: [localhost] => (item=nova_ssh_key) 2026-04-07 08:17:20.365634 | instance | 2026-04-07 08:17:20.365640 | instance | TASK [Set values for SSH keys] ************************************************* 2026-04-07 08:17:20.365647 | instance | Tuesday 07 April 2026 08:17:20 +0000 (0:00:02.489) 0:00:16.145 ********* 2026-04-07 08:17:20.421876 | instance | ok: [localhost] => (item=manila_ssh_key) 2026-04-07 08:17:20.421907 | instance | ok: [localhost] => (item=nova_ssh_key) 2026-04-07 08:17:20.421912 | instance | 2026-04-07 08:17:20.421917 | instance | TASK [Delete the temporary files generated for SSH keys] *********************** 2026-04-07 08:17:20.421921 | instance | Tuesday 07 April 2026 08:17:20 +0000 (0:00:00.057) 0:00:16.202 ********* 2026-04-07 08:17:20.800697 | instance | changed: [localhost] => (item=manila_ssh_key) 2026-04-07 08:17:20.800745 | instance | changed: [localhost] => (item=nova_ssh_key) 2026-04-07 08:17:20.800750 | instance | 2026-04-07 
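The secrets-generation pattern above — a random value for every listed variable not already present in the secrets file, plus a base64-encoded key for barbican_kek — can be sketched in plain Python. The variable names come from the log; the value lengths and formats are assumptions, not what Atmosphere actually uses:

```python
import base64
import secrets

def generate_missing_secrets(current: dict, wanted: list[str]) -> dict:
    """Fill in any secret not already present, leaving existing values alone
    (mirrors the idempotent 'ok' results for already-populated items)."""
    out = dict(current)
    for name in wanted:
        if name not in out:
            out[name] = secrets.token_hex(16)  # assumed length/format
    # The "Generate base64 encoded secrets" task handles barbican_kek
    # separately; a 32-byte key size is an assumption here.
    out.setdefault("barbican_kek",
                   base64.b64encode(secrets.token_bytes(32)).decode())
    return out

existing = {"keepalived_password": "already-set"}
result = generate_missing_secrets(
    existing, ["keepalived_password", "heat_auth_encryption_key"])
```

The SSH keys (manila_ssh_key, nova_ssh_key) follow the same shape but route through temporary files and ssh-keygen, as the "Generate temporary files" / "Set values for SSH keys" / "Delete the temporary files" tasks show.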
08:17:20.800755 | instance | TASK [Write new secrets file to disk] ****************************************** 2026-04-07 08:17:20.800760 | instance | Tuesday 07 April 2026 08:17:20 +0000 (0:00:00.378) 0:00:16.581 ********* 2026-04-07 08:17:21.179045 | instance | changed: [localhost] 2026-04-07 08:17:21.179120 | instance | 2026-04-07 08:17:21.179207 | instance | TASK [Encrypt secrets file with Vault password] ******************************** 2026-04-07 08:17:21.179458 | instance | Tuesday 07 April 2026 08:17:21 +0000 (0:00:00.378) 0:00:16.959 ********* 2026-04-07 08:17:21.216935 | instance | skipping: [localhost] 2026-04-07 08:17:21.217258 | instance | 2026-04-07 08:17:21.217590 | instance | PLAY [Setup networking] ******************************************************** 2026-04-07 08:17:21.217869 | instance | 2026-04-07 08:17:21.218173 | instance | TASK [Gathering Facts] ********************************************************* 2026-04-07 08:17:21.218477 | instance | Tuesday 07 April 2026 08:17:21 +0000 (0:00:00.038) 0:00:16.997 ********* 2026-04-07 08:17:21.962929 | instance | ok: [instance] 2026-04-07 08:17:21.963737 | instance | 2026-04-07 08:17:21.963798 | instance | TASK [Create bridge for management network] ************************************ 2026-04-07 08:17:21.963809 | instance | Tuesday 07 April 2026 08:17:21 +0000 (0:00:00.745) 0:00:17.743 ********* 2026-04-07 08:17:22.286177 | instance | ok: [instance] 2026-04-07 08:17:22.286317 | instance | 2026-04-07 08:17:22.286736 | instance | TASK [Create fake interface for management bridge] ***************************** 2026-04-07 08:17:22.286812 | instance | Tuesday 07 April 2026 08:17:22 +0000 (0:00:00.323) 0:00:18.066 ********* 2026-04-07 08:17:22.505758 | instance | ok: [instance] 2026-04-07 08:17:22.505917 | instance | 2026-04-07 08:17:22.506392 | instance | TASK [Assign dummy interface to management bridge] ***************************** 2026-04-07 08:17:22.506451 | instance | Tuesday 07 April 2026 08:17:22 
+0000 (0:00:00.219) 0:00:18.286 ********* 2026-04-07 08:17:22.724697 | instance | ok: [instance] 2026-04-07 08:17:22.725206 | instance | 2026-04-07 08:17:22.725227 | instance | TASK [Assign IP address for management bridge] ********************************* 2026-04-07 08:17:22.725234 | instance | Tuesday 07 April 2026 08:17:22 +0000 (0:00:00.219) 0:00:18.505 ********* 2026-04-07 08:17:22.939840 | instance | ok: [instance] 2026-04-07 08:17:22.939960 | instance | 2026-04-07 08:17:22.940196 | instance | TASK [Bring up interfaces] ***************************************************** 2026-04-07 08:17:22.940432 | instance | Tuesday 07 April 2026 08:17:22 +0000 (0:00:00.215) 0:00:18.720 ********* 2026-04-07 08:17:23.349338 | instance | ok: [instance] => (item=br-mgmt) 2026-04-07 08:17:23.349443 | instance | ok: [instance] => (item=dummy0) 2026-04-07 08:17:23.349514 | instance | 2026-04-07 08:17:23.350034 | instance | PLAY [Create devices for Ceph] ************************************************* 2026-04-07 08:17:23.350076 | instance | 2026-04-07 08:17:23.350083 | instance | TASK [Gathering Facts] ********************************************************* 2026-04-07 08:17:23.350088 | instance | Tuesday 07 April 2026 08:17:23 +0000 (0:00:00.409) 0:00:19.130 ********* 2026-04-07 08:17:24.130536 | instance | ok: [instance] 2026-04-07 08:17:24.130991 | instance | 2026-04-07 08:17:24.131102 | instance | TASK [Install depedencies] ***************************************************** 2026-04-07 08:17:24.131240 | instance | Tuesday 07 April 2026 08:17:24 +0000 (0:00:00.780) 0:00:19.910 ********* 2026-04-07 08:17:45.640305 | instance | changed: [instance] 2026-04-07 08:17:45.640532 | instance | 2026-04-07 08:17:45.640818 | instance | TASK [Start up service] ******************************************************** 2026-04-07 08:17:45.641103 | instance | Tuesday 07 April 2026 08:17:45 +0000 (0:00:21.509) 0:00:41.420 ********* 2026-04-07 08:17:46.288989 | instance | ok: [instance] 
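The "Setup networking" play above builds a management bridge (br-mgmt) backed by a dummy interface (dummy0), assigns it an address, and brings both up. As a sketch only, a systemd-networkd layout achieving the same topology would look like this; the play's actual mechanism and the IP address are not shown in this log:

```ini
# Hypothetical systemd-networkd equivalent of the br-mgmt setup.

# /etc/systemd/network/br-mgmt.netdev
[NetDev]
Name=br-mgmt
Kind=bridge

# /etc/systemd/network/dummy0.netdev
[NetDev]
Name=dummy0
Kind=dummy

# /etc/systemd/network/dummy0.network -- enslave dummy0 to the bridge
[Match]
Name=dummy0
[Network]
Bridge=br-mgmt

# /etc/systemd/network/br-mgmt.network -- address is an assumption
[Match]
Name=br-mgmt
[Network]
Address=10.96.240.10/24
```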
2026-04-07 08:17:46.289200 | instance | 2026-04-07 08:17:46.289485 | instance | TASK [Generate lvm.conf] ******************************************************* 2026-04-07 08:17:46.289776 | instance | Tuesday 07 April 2026 08:17:46 +0000 (0:00:00.648) 0:00:42.069 ********* 2026-04-07 08:17:46.541972 | instance | ok: [instance] 2026-04-07 08:17:46.542180 | instance | 2026-04-07 08:17:46.542475 | instance | TASK [Write /etc/lvm/lvm.conf] ************************************************* 2026-04-07 08:17:46.542786 | instance | Tuesday 07 April 2026 08:17:46 +0000 (0:00:00.252) 0:00:42.322 ********* 2026-04-07 08:17:46.912998 | instance | changed: [instance] 2026-04-07 08:17:46.913116 | instance | 2026-04-07 08:17:46.913130 | instance | TASK [Get list of all loopback devices] **************************************** 2026-04-07 08:17:46.913141 | instance | Tuesday 07 April 2026 08:17:46 +0000 (0:00:00.370) 0:00:42.692 ********* 2026-04-07 08:17:47.128812 | instance | ok: [instance] 2026-04-07 08:17:47.129568 | instance | 2026-04-07 08:17:47.129607 | instance | TASK [Fail if there is any existing loopback devices] ************************** 2026-04-07 08:17:47.129612 | instance | Tuesday 07 April 2026 08:17:47 +0000 (0:00:00.216) 0:00:42.908 ********* 2026-04-07 08:17:47.152532 | instance | skipping: [instance] 2026-04-07 08:17:47.153124 | instance | 2026-04-07 08:17:47.153160 | instance | TASK [Create devices for Ceph] ************************************************* 2026-04-07 08:17:47.153168 | instance | Tuesday 07 April 2026 08:17:47 +0000 (0:00:00.024) 0:00:42.933 ********* 2026-04-07 08:17:47.730097 | instance | changed: [instance] => (item=osd0) 2026-04-07 08:17:47.730201 | instance | changed: [instance] => (item=osd1) 2026-04-07 08:17:47.730686 | instance | changed: [instance] => (item=osd2) 2026-04-07 08:17:47.730728 | instance | 2026-04-07 08:17:47.730734 | instance | TASK [Set permissions on loopback devices] ************************************* 2026-04-07 
08:17:47.730738 | instance | Tuesday 07 April 2026 08:17:47 +0000 (0:00:00.577) 0:00:43.511 ********* 2026-04-07 08:17:48.323987 | instance | changed: [instance] => (item=osd0) 2026-04-07 08:17:48.324089 | instance | changed: [instance] => (item=osd1) 2026-04-07 08:17:48.324929 | instance | changed: [instance] => (item=osd2) 2026-04-07 08:17:48.324955 | instance | 2026-04-07 08:17:48.324965 | instance | TASK [Start loop devices] ****************************************************** 2026-04-07 08:17:48.324974 | instance | Tuesday 07 April 2026 08:17:48 +0000 (0:00:00.592) 0:00:44.104 ********* 2026-04-07 08:17:49.056115 | instance | changed: [instance] => (item=osd0) 2026-04-07 08:17:49.056248 | instance | changed: [instance] => (item=osd1) 2026-04-07 08:17:49.056902 | instance | changed: [instance] => (item=osd2) 2026-04-07 08:17:49.056945 | instance | 2026-04-07 08:17:49.056953 | instance | TASK [Create a volume group for each loop device] ****************************** 2026-04-07 08:17:49.056959 | instance | Tuesday 07 April 2026 08:17:49 +0000 (0:00:00.732) 0:00:44.836 ********* 2026-04-07 08:17:52.309622 | instance | changed: [instance] => (item=osd0) 2026-04-07 08:17:52.309739 | instance | changed: [instance] => (item=osd1) 2026-04-07 08:17:52.310463 | instance | changed: [instance] => (item=osd2) 2026-04-07 08:17:52.310699 | instance | 2026-04-07 08:17:52.310709 | instance | TASK [Create a logical volume for each loop device] **************************** 2026-04-07 08:17:52.310716 | instance | Tuesday 07 April 2026 08:17:52 +0000 (0:00:03.253) 0:00:48.090 ********* 2026-04-07 08:17:54.281360 | instance | changed: [instance] => (item=ceph-instance-osd0) 2026-04-07 08:17:54.281476 | instance | changed: [instance] => (item=ceph-instance-osd1) 2026-04-07 08:17:54.282443 | instance | changed: [instance] => (item=ceph-instance-osd2) 2026-04-07 08:17:54.282488 | instance | 2026-04-07 08:17:54.282495 | instance | PLAY [controllers] 
*************************************************************
2026-04-07 08:17:54.282502 | instance |
2026-04-07 08:17:54.282507 | instance | TASK [Gathering Facts] *********************************************************
2026-04-07 08:17:54.282513 | instance | Tuesday 07 April 2026 08:17:54 +0000 (0:00:01.971) 0:00:50.062 *********
2026-04-07 08:17:55.248549 | instance | ok: [instance]
2026-04-07 08:17:55.249060 | instance |
2026-04-07 08:17:55.249108 | instance | TASK [Set masquerade rule] *****************************************************
2026-04-07 08:17:55.249119 | instance | Tuesday 07 April 2026 08:17:55 +0000 (0:00:00.967) 0:00:51.029 *********
2026-04-07 08:17:55.755325 | instance | changed: [instance]
2026-04-07 08:17:55.758875 | instance |
2026-04-07 08:17:55.758922 | instance | PLAY RECAP *********************************************************************
2026-04-07 08:17:55.758937 | instance | instance : ok=24 changed=10 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
2026-04-07 08:17:55.758943 | instance | localhost : ok=40 changed=21 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
2026-04-07 08:17:55.758949 | instance |
2026-04-07 08:17:55.758955 | instance | Tuesday 07 April 2026 08:17:55 +0000 (0:00:00.506) 0:00:51.536 *********
2026-04-07 08:17:55.758961 | instance | ===============================================================================
2026-04-07 08:17:55.758966 | instance | Install depedencies ---------------------------------------------------- 21.51s
2026-04-07 08:17:55.758971 | instance | Create a volume group for each loop device ------------------------------ 3.25s
2026-04-07 08:17:55.758977 | instance | Generate SSH keys for missing variables --------------------------------- 2.49s
2026-04-07 08:17:55.758982 | instance | Create a logical volume for each loop device ---------------------------- 1.97s
2026-04-07 08:17:55.758987 | instance | Install "dirmngr" for GPG keyserver operations -------------------------- 1.50s
2026-04-07 08:17:55.758993 | instance | Create folders for workspace -------------------------------------------- 1.24s
2026-04-07 08:17:55.758998 | instance | Gathering Facts --------------------------------------------------------- 1.02s
2026-04-07 08:17:55.759003 | instance | Gathering Facts --------------------------------------------------------- 0.97s
2026-04-07 08:17:55.759021 | instance | Purge "snapd" package --------------------------------------------------- 0.94s
2026-04-07 08:17:55.759027 | instance | Generate endpoint skeleton for missing variables ------------------------ 0.82s
2026-04-07 08:17:55.759036 | instance | Gathering Facts --------------------------------------------------------- 0.78s
2026-04-07 08:17:55.759105 | instance | Gathering Facts --------------------------------------------------------- 0.76s
2026-04-07 08:17:55.759323 | instance | Gathering Facts --------------------------------------------------------- 0.75s
2026-04-07 08:17:55.759531 | instance | Start loop devices ------------------------------------------------------ 0.73s
2026-04-07 08:17:55.759738 | instance | Configure short hostname ------------------------------------------------ 0.72s
2026-04-07 08:17:55.759939 | instance | Start up service -------------------------------------------------------- 0.65s
2026-04-07 08:17:55.760164 | instance | Set permissions on loopback devices ------------------------------------- 0.59s
2026-04-07 08:17:55.760370 | instance | Write new Ceph control plane configuration file to disk ----------------- 0.58s
2026-04-07 08:17:55.760574 | instance | Create devices for Ceph ------------------------------------------------- 0.58s
2026-04-07 08:17:55.760776 | instance | Set masquerade rule ----------------------------------------------------- 0.51s
2026-04-07 08:17:55.825762 | instance | INFO [aio > prepare] Executed: Successful
2026-04-07 08:17:55.840073 | instance | INFO [aio > converge] Executing
2026-04-07 08:17:58.544770 | instance |
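The Ceph device preparation in the prepare phase above creates one sparse backing file per OSD (osd0-osd2), attaches each as a loop device, then makes a volume group and a logical volume (ceph-instance-osdN) per device. The sketch below mirrors that sequence; the file sizes, paths, and VG naming are assumptions since the log does not show them, and the privileged losetup/LVM commands are only assembled here, not executed:

```python
import os
import tempfile

OSDS = ["osd0", "osd1", "osd2"]  # device names from the log
SIZE = 10 * 1024**3              # assumed 10 GiB per backing file

def prepare_osd_backing(workdir: str) -> list[str]:
    """Create a sparse backing file per OSD and return the privileged
    commands that would turn each into a loop device, VG, and LV."""
    cmds = []
    for osd in OSDS:
        path = os.path.join(workdir, f"{osd}.img")
        with open(path, "wb") as f:
            f.truncate(SIZE)  # sparse allocation, like truncate(1)
        cmds += [
            f"losetup --find --show {path}",
            # /dev/loopN below stands in for the device losetup reports;
            # the VG name is illustrative, the LV name is from the log.
            f"vgcreate ceph-{osd} /dev/loopN",
            f"lvcreate -l 100%FREE -n ceph-instance-{osd} ceph-{osd}",
        ]
    return cmds

workdir = tempfile.mkdtemp()
commands = prepare_osd_backing(workdir)
```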
2026-04-07 08:17:58.545014 | instance | PLAY [all] ********************************************************************* 2026-04-07 08:17:58.545213 | instance | 2026-04-07 08:17:58.545426 | instance | TASK [Gathering Facts] ********************************************************* 2026-04-07 08:17:58.545627 | instance | Tuesday 07 April 2026 08:17:58 +0000 (0:00:00.020) 0:00:00.020 ********* 2026-04-07 08:17:59.778694 | instance | ok: [instance] 2026-04-07 08:17:59.778756 | instance | 2026-04-07 08:17:59.778768 | instance | TASK [Fail if atmosphere_ceph_enabled is set] ********************************** 2026-04-07 08:17:59.778777 | instance | Tuesday 07 April 2026 08:17:59 +0000 (0:00:01.232) 0:00:01.253 ********* 2026-04-07 08:17:59.817040 | instance | skipping: [instance] 2026-04-07 08:17:59.817092 | instance | 2026-04-07 08:17:59.817102 | instance | TASK [Set a fact with the "atmosphere_images" for other plays] ***************** 2026-04-07 08:17:59.817112 | instance | Tuesday 07 April 2026 08:17:59 +0000 (0:00:00.038) 0:00:01.291 ********* 2026-04-07 08:18:00.042740 | instance | ok: [instance] 2026-04-07 08:18:00.042788 | instance | 2026-04-07 08:18:00.042794 | instance | PLAY [Deploy Ceph monitors & managers] ***************************************** 2026-04-07 08:18:00.042798 | instance | 2026-04-07 08:18:00.042802 | instance | TASK [Gathering Facts] ********************************************************* 2026-04-07 08:18:00.042806 | instance | Tuesday 07 April 2026 08:18:00 +0000 (0:00:00.226) 0:00:01.517 ********* 2026-04-07 08:18:00.942763 | instance | ok: [instance] 2026-04-07 08:18:00.942848 | instance | 2026-04-07 08:18:00.942864 | instance | TASK [vexxhost.containers.forget_package : Forget package] ********************* 2026-04-07 08:18:00.942874 | instance | Tuesday 07 April 2026 08:18:00 +0000 (0:00:00.899) 0:00:02.417 ********* 2026-04-07 08:18:01.252078 | instance | ok: [instance] 2026-04-07 08:18:01.252149 | instance | 2026-04-07 08:18:01.252398 
| instance | TASK [vexxhost.containers.package : Update state for tar] ********************** 2026-04-07 08:18:01.252431 | instance | Tuesday 07 April 2026 08:18:01 +0000 (0:00:00.310) 0:00:02.727 ********* 2026-04-07 08:18:01.298701 | instance | skipping: [instance] 2026-04-07 08:18:01.298924 | instance | 2026-04-07 08:18:01.299225 | instance | TASK [vexxhost.containers.directory : Create directory (/var/lib/downloads)] *** 2026-04-07 08:18:01.299264 | instance | Tuesday 07 April 2026 08:18:01 +0000 (0:00:00.046) 0:00:02.773 ********* 2026-04-07 08:18:01.621904 | instance | changed: [instance] 2026-04-07 08:18:01.621982 | instance | 2026-04-07 08:18:01.622047 | instance | TASK [vexxhost.containers.download_artifact : Starting download of file] ******* 2026-04-07 08:18:01.622185 | instance | Tuesday 07 April 2026 08:18:01 +0000 (0:00:00.323) 0:00:03.097 ********* 2026-04-07 08:18:01.691114 | instance | ok: [instance] => { 2026-04-07 08:18:01.691208 | instance | "msg": "https://github.com/opencontainers/runc/releases/download/v1.4.0/runc.amd64" 2026-04-07 08:18:01.691675 | instance | } 2026-04-07 08:18:01.691716 | instance | 2026-04-07 08:18:01.691722 | instance | TASK [vexxhost.containers.download_artifact : Download item] ******************* 2026-04-07 08:18:01.691727 | instance | Tuesday 07 April 2026 08:18:01 +0000 (0:00:00.069) 0:00:03.166 ********* 2026-04-07 08:18:02.378005 | instance | changed: [instance] 2026-04-07 08:18:02.378113 | instance | 2026-04-07 08:18:02.378320 | instance | TASK [vexxhost.containers.download_artifact : Extract archive] ***************** 2026-04-07 08:18:02.378568 | instance | Tuesday 07 April 2026 08:18:02 +0000 (0:00:00.686) 0:00:03.853 ********* 2026-04-07 08:18:02.430703 | instance | skipping: [instance] 2026-04-07 08:18:02.430799 | instance | 2026-04-07 08:18:02.431004 | instance | TASK [vexxhost.containers.package : Update state for tar] ********************** 2026-04-07 08:18:02.431204 | instance | Tuesday 07 April 2026 
08:18:02 +0000 (0:00:00.052) 0:00:03.906 ********* 2026-04-07 08:18:02.484932 | instance | skipping: [instance] 2026-04-07 08:18:02.484994 | instance | 2026-04-07 08:18:02.485136 | instance | TASK [vexxhost.containers.forget_package : Forget package] ********************* 2026-04-07 08:18:02.485317 | instance | Tuesday 07 April 2026 08:18:02 +0000 (0:00:00.054) 0:00:03.960 ********* 2026-04-07 08:18:02.694632 | instance | ok: [instance] 2026-04-07 08:18:02.694715 | instance | 2026-04-07 08:18:02.694893 | instance | TASK [vexxhost.containers.package : Update state for tar] ********************** 2026-04-07 08:18:02.695048 | instance | Tuesday 07 April 2026 08:18:02 +0000 (0:00:00.209) 0:00:04.170 ********* 2026-04-07 08:18:04.240919 | instance | ok: [instance] 2026-04-07 08:18:04.240990 | instance | 2026-04-07 08:18:04.241258 | instance | TASK [vexxhost.containers.download_artifact : Starting download of file] ******* 2026-04-07 08:18:04.241298 | instance | Tuesday 07 April 2026 08:18:04 +0000 (0:00:01.546) 0:00:05.716 ********* 2026-04-07 08:18:04.301808 | instance | ok: [instance] => { 2026-04-07 08:18:04.301873 | instance | "msg": "https://github.com/containerd/containerd/releases/download/v2.2.0/containerd-2.2.0-linux-amd64.tar.gz" 2026-04-07 08:18:04.302400 | instance | } 2026-04-07 08:18:04.302438 | instance | 2026-04-07 08:18:04.302444 | instance | TASK [vexxhost.containers.download_artifact : Download item] ******************* 2026-04-07 08:18:04.302449 | instance | Tuesday 07 April 2026 08:18:04 +0000 (0:00:00.060) 0:00:05.777 ********* 2026-04-07 08:18:05.103499 | instance | changed: [instance] 2026-04-07 08:18:05.104024 | instance | 2026-04-07 08:18:05.104071 | instance | TASK [vexxhost.containers.download_artifact : Extract archive] ***************** 2026-04-07 08:18:05.104079 | instance | Tuesday 07 April 2026 08:18:05 +0000 (0:00:00.801) 0:00:06.578 ********* 2026-04-07 08:18:07.961743 | instance | changed: [instance] 2026-04-07 08:18:07.961836 | 
instance | 2026-04-07 08:18:07.962081 | instance | TASK [vexxhost.containers.containerd : Install SELinux packages] *************** 2026-04-07 08:18:07.962126 | instance | Tuesday 07 April 2026 08:18:07 +0000 (0:00:02.858) 0:00:09.437 ********* 2026-04-07 08:18:08.001243 | instance | skipping: [instance] 2026-04-07 08:18:08.001326 | instance | 2026-04-07 08:18:08.001550 | instance | TASK [vexxhost.containers.containerd : Set SELinux to permissive at runtime] *** 2026-04-07 08:18:08.001581 | instance | Tuesday 07 April 2026 08:18:07 +0000 (0:00:00.039) 0:00:09.476 ********* 2026-04-07 08:18:08.038201 | instance | skipping: [instance] 2026-04-07 08:18:08.038587 | instance | 2026-04-07 08:18:08.038629 | instance | TASK [vexxhost.containers.containerd : Persist SELinux permissive mode] ******** 2026-04-07 08:18:08.038635 | instance | Tuesday 07 April 2026 08:18:08 +0000 (0:00:00.037) 0:00:09.513 ********* 2026-04-07 08:18:08.071810 | instance | skipping: [instance] 2026-04-07 08:18:08.071888 | instance | 2026-04-07 08:18:08.072137 | instance | TASK [vexxhost.containers.containerd : Install AppArmor packages] ************** 2026-04-07 08:18:08.072175 | instance | Tuesday 07 April 2026 08:18:08 +0000 (0:00:00.033) 0:00:09.547 ********* 2026-04-07 08:18:13.930102 | instance | changed: [instance] 2026-04-07 08:18:13.930189 | instance | 2026-04-07 08:18:13.930452 | instance | TASK [vexxhost.containers.containerd : Create systemd service file for containerd] *** 2026-04-07 08:18:13.930495 | instance | Tuesday 07 April 2026 08:18:13 +0000 (0:00:05.858) 0:00:15.405 ********* 2026-04-07 08:18:14.389153 | instance | changed: [instance] 2026-04-07 08:18:14.389968 | instance | 2026-04-07 08:18:14.389996 | instance | TASK [vexxhost.containers.containerd : Create folders for configuration] ******* 2026-04-07 08:18:14.390005 | instance | Tuesday 07 April 2026 08:18:14 +0000 (0:00:00.458) 0:00:15.864 ********* 2026-04-07 08:18:15.410472 | instance | changed: [instance] => 
(item={'path': '/etc/containerd'}) 2026-04-07 08:18:15.410548 | instance | changed: [instance] => (item={'path': '/var/lib/containerd', 'mode': '0o700'}) 2026-04-07 08:18:15.410612 | instance | changed: [instance] => (item={'path': '/run/containerd', 'mode': '0o711'}) 2026-04-07 08:18:15.410959 | instance | changed: [instance] => (item={'path': '/run/containerd/io.containerd.grpc.v1.cri', 'mode': '0o700'}) 2026-04-07 08:18:15.411225 | instance | changed: [instance] => (item={'path': '/run/containerd/io.containerd.sandbox.controller.v1.shim', 'mode': '0o700'}) 2026-04-07 08:18:15.411247 | instance | 2026-04-07 08:18:15.411258 | instance | TASK [vexxhost.containers.containerd : Create containerd config file] ********** 2026-04-07 08:18:15.411268 | instance | Tuesday 07 April 2026 08:18:15 +0000 (0:00:01.021) 0:00:16.885 ********* 2026-04-07 08:18:15.944083 | instance | changed: [instance] 2026-04-07 08:18:15.944148 | instance | 2026-04-07 08:18:15.944816 | instance | TASK [vexxhost.containers.containerd : Force any restarts if necessary] ******** 2026-04-07 08:18:15.944857 | instance | Tuesday 07 April 2026 08:18:15 +0000 (0:00:00.524) 0:00:17.410 ********* 2026-04-07 08:18:15.944863 | instance | 2026-04-07 08:18:15.944867 | instance | RUNNING HANDLER [vexxhost.containers.containerd : Reload systemd] ************** 2026-04-07 08:18:15.944871 | instance | Tuesday 07 April 2026 08:18:15 +0000 (0:00:00.009) 0:00:17.419 ********* 2026-04-07 08:18:16.898608 | instance | ok: [instance] 2026-04-07 08:18:16.898685 | instance | 2026-04-07 08:18:16.898923 | instance | RUNNING HANDLER [vexxhost.containers.containerd : Restart containerd] ********** 2026-04-07 08:18:16.898999 | instance | Tuesday 07 April 2026 08:18:16 +0000 (0:00:00.953) 0:00:18.373 ********* 2026-04-07 08:18:17.387896 | instance | changed: [instance] 2026-04-07 08:18:17.388011 | instance | 2026-04-07 08:18:17.388254 | instance | TASK [vexxhost.containers.containerd : Enable and start service] *************** 
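A minimal unit of the kind the "Create systemd service file for containerd" task writes might look like the following. This is a sketch only, not the vexxhost.containers template: the install path and service options are assumptions (the extract destination is not shown in this log), chosen to match common containerd packaging conventions:

```ini
# /etc/systemd/system/containerd.service -- hypothetical sketch
[Unit]
Description=containerd container runtime
After=network.target

[Service]
ExecStart=/usr/local/bin/containerd
Restart=always
Delegate=yes
KillMode=process
LimitNOFILE=1048576

[Install]
WantedBy=multi-user.target
```

This pairs with the "Reload systemd" and "Restart containerd" handlers seen above: a changed unit file triggers a daemon reload before the service restart.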
2026-04-07 08:18:17.388270 | instance | Tuesday 07 April 2026 08:18:17 +0000 (0:00:00.490) 0:00:18.863 ********* 2026-04-07 08:18:17.948135 | instance | changed: [instance] 2026-04-07 08:18:17.948204 | instance | 2026-04-07 08:18:17.948487 | instance | TASK [vexxhost.containers.forget_package : Forget package] ********************* 2026-04-07 08:18:17.948523 | instance | Tuesday 07 April 2026 08:18:17 +0000 (0:00:00.560) 0:00:19.423 ********* 2026-04-07 08:18:18.176716 | instance | ok: [instance] 2026-04-07 08:18:18.177122 | instance | 2026-04-07 08:18:18.177192 | instance | TASK [vexxhost.containers.download_artifact : Starting download of file] ******* 2026-04-07 08:18:18.177199 | instance | Tuesday 07 April 2026 08:18:18 +0000 (0:00:00.227) 0:00:19.651 ********* 2026-04-07 08:18:18.228600 | instance | ok: [instance] => { 2026-04-07 08:18:18.228690 | instance | "msg": "https://download.docker.com/linux/static/stable/x86_64/docker-24.0.9.tgz" 2026-04-07 08:18:18.228762 | instance | } 2026-04-07 08:18:18.229134 | instance | 2026-04-07 08:18:18.229174 | instance | TASK [vexxhost.containers.download_artifact : Download item] ******************* 2026-04-07 08:18:18.229180 | instance | Tuesday 07 April 2026 08:18:18 +0000 (0:00:00.052) 0:00:19.703 ********* 2026-04-07 08:18:19.135743 | instance | changed: [instance] 2026-04-07 08:18:19.135820 | instance | 2026-04-07 08:18:19.136072 | instance | TASK [vexxhost.containers.download_artifact : Extract archive] ***************** 2026-04-07 08:18:19.136109 | instance | Tuesday 07 April 2026 08:18:19 +0000 (0:00:00.907) 0:00:20.611 ********* 2026-04-07 08:18:23.483999 | instance | changed: [instance] 2026-04-07 08:18:23.484124 | instance | 2026-04-07 08:18:23.484139 | instance | TASK [vexxhost.containers.docker : Install AppArmor packages] ****************** 2026-04-07 08:18:23.484336 | instance | Tuesday 07 April 2026 08:18:23 +0000 (0:00:04.347) 0:00:24.958 ********* 2026-04-07 08:18:24.722073 | instance | ok: [instance] 
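The download_artifact sequence repeats the same shape for each artifact: print the URL, download it, then an "Extract archive" step that is skipped for bare binaries (runc.amd64) but runs for tarballs (containerd, docker). That dispatch hinges on the artifact type; a sketch, with the function and names illustrative rather than the role's actual code, and the URLs taken from the log:

```python
# Sketch of the download-then-maybe-extract dispatch seen in the
# vexxhost.containers.download_artifact tasks; names are illustrative.
ARCHIVE_SUFFIXES = (".tar.gz", ".tgz", ".tar.xz")

def needs_extract(url: str) -> bool:
    """Tarballs get an extract step; bare binaries are installed as-is."""
    return url.endswith(ARCHIVE_SUFFIXES)

urls = [
    "https://github.com/opencontainers/runc/releases/download/v1.4.0/runc.amd64",
    "https://github.com/containerd/containerd/releases/download/v2.2.0/containerd-2.2.0-linux-amd64.tar.gz",
    "https://download.docker.com/linux/static/stable/x86_64/docker-24.0.9.tgz",
]
plan = {u.rsplit("/", 1)[-1]: ("extract" if needs_extract(u) else "install")
        for u in urls}
```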
2026-04-07 08:18:24.722191 | instance |
2026-04-07 08:18:24.722235 | instance | TASK [vexxhost.containers.docker : Ensure group "docker" exists] ***************
2026-04-07 08:18:24.722312 | instance | Tuesday 07 April 2026 08:18:24 +0000 (0:00:01.238) 0:00:26.197 *********
2026-04-07 08:18:25.064518 | instance | changed: [instance]
2026-04-07 08:18:25.064553 | instance |
2026-04-07 08:18:25.064558 | instance | TASK [vexxhost.containers.docker : Create systemd service file for docker] *****
2026-04-07 08:18:25.064563 | instance | Tuesday 07 April 2026 08:18:25 +0000 (0:00:00.341) 0:00:26.539 *********
2026-04-07 08:18:25.441342 | instance | changed: [instance]
2026-04-07 08:18:25.441379 | instance |
2026-04-07 08:18:25.441384 | instance | TASK [vexxhost.containers.docker : Create folders for configuration] ***********
2026-04-07 08:18:25.441389 | instance | Tuesday 07 April 2026 08:18:25 +0000 (0:00:00.377) 0:00:26.916 *********
2026-04-07 08:18:26.055580 | instance | changed: [instance] => (item={'path': '/etc/docker'})
2026-04-07 08:18:26.055678 | instance | changed: [instance] => (item={'path': '/var/lib/docker', 'mode': '0o710'})
2026-04-07 08:18:26.056189 | instance | changed: [instance] => (item={'path': '/run/docker', 'mode': '0o711'})
2026-04-07 08:18:26.056228 | instance |
2026-04-07 08:18:26.056234 | instance | TASK [vexxhost.containers.docker : Create systemd socket file for docker] ******
2026-04-07 08:18:26.056239 | instance | Tuesday 07 April 2026 08:18:26 +0000 (0:00:00.614) 0:00:27.531 *********
2026-04-07 08:18:26.459190 | instance | changed: [instance]
2026-04-07 08:18:26.459264 | instance |
2026-04-07 08:18:26.459522 | instance | TASK [vexxhost.containers.docker : Create docker daemon config file] ***********
2026-04-07 08:18:26.459557 | instance | Tuesday 07 April 2026 08:18:26 +0000 (0:00:00.403) 0:00:27.934 *********
2026-04-07 08:18:26.865703 | instance | changed: [instance]
2026-04-07 08:18:26.865775 | instance |
2026-04-07 08:18:26.866418 | instance | TASK [vexxhost.containers.docker : Force any restarts if necessary] ************
2026-04-07 08:18:26.866486 | instance | Tuesday 07 April 2026 08:18:26 +0000 (0:00:00.393) 0:00:28.328 *********
2026-04-07 08:18:26.866492 | instance |
2026-04-07 08:18:26.866504 | instance | RUNNING HANDLER [vexxhost.containers.containerd : Reload systemd] **************
2026-04-07 08:18:26.866509 | instance | Tuesday 07 April 2026 08:18:26 +0000 (0:00:00.012) 0:00:28.340 *********
2026-04-07 08:18:27.599034 | instance | ok: [instance]
2026-04-07 08:18:27.599139 | instance |
2026-04-07 08:18:27.599465 | instance | RUNNING HANDLER [vexxhost.containers.docker : Restart docker] ******************
2026-04-07 08:18:27.599762 | instance | Tuesday 07 April 2026 08:18:27 +0000 (0:00:00.733) 0:00:29.074 *********
2026-04-07 08:18:29.097850 | instance | changed: [instance]
2026-04-07 08:18:29.097922 | instance |
2026-04-07 08:18:29.098271 | instance | TASK [vexxhost.containers.docker : Enable and start service] *******************
2026-04-07 08:18:29.098343 | instance | Tuesday 07 April 2026 08:18:29 +0000 (0:00:01.498) 0:00:30.572 *********
2026-04-07 08:18:29.675708 | instance | changed: [instance]
2026-04-07 08:18:29.676056 | instance |
2026-04-07 08:18:29.676074 | instance | TASK [vexxhost.ceph.cephadm : Gather variables for each operating system] ******
2026-04-07 08:18:29.676158 | instance | Tuesday 07 April 2026 08:18:29 +0000 (0:00:00.578) 0:00:31.151 *********
2026-04-07 08:18:29.740047 | instance | ok: [instance] => (item=/home/zuul/.ansible/collections/ansible_collections/vexxhost/ceph/roles/cephadm/vars/ubuntu-22.04.yml)
2026-04-07 08:18:29.740097 | instance |
2026-04-07 08:18:29.740103 | instance | TASK [vexxhost.ceph.cephadm : Install packages] ********************************
2026-04-07 08:18:29.740107 | instance | Tuesday 07 April 2026 08:18:29 +0000 (0:00:00.063) 0:00:31.214 *********
2026-04-07 08:18:36.397845 | instance | changed: [instance]
2026-04-07 08:18:36.397934 | instance |
2026-04-07 08:18:36.397994 | instance | TASK [vexxhost.ceph.cephadm : Ensure services are started] *********************
2026-04-07 08:18:36.398130 | instance | Tuesday 07 April 2026 08:18:36 +0000 (0:00:06.658) 0:00:37.872 *********
2026-04-07 08:18:37.074822 | instance | ok: [instance] => (item=chronyd)
2026-04-07 08:18:37.074928 | instance | ok: [instance] => (item=sshd)
2026-04-07 08:18:37.075301 | instance |
2026-04-07 08:18:37.075341 | instance | TASK [vexxhost.ceph.cephadm : Download "cephadm"] ******************************
2026-04-07 08:18:37.075347 | instance | Tuesday 07 April 2026 08:18:37 +0000 (0:00:00.677) 0:00:38.550 *********
2026-04-07 08:18:37.916928 | instance | changed: [instance]
2026-04-07 08:18:37.917035 | instance |
2026-04-07 08:18:37.917403 | instance | TASK [vexxhost.ceph.cephadm : Remove cephadm from old path] ********************
2026-04-07 08:18:37.917458 | instance | Tuesday 07 April 2026 08:18:37 +0000 (0:00:00.841) 0:00:39.392 *********
2026-04-07 08:18:38.130760 | instance | ok: [instance]
2026-04-07 08:18:38.130841 | instance |
2026-04-07 08:18:38.131114 | instance | TASK [vexxhost.ceph.cephadm : Ensure "cephadm" user is present] ****************
2026-04-07 08:18:38.131130 | instance | Tuesday 07 April 2026 08:18:38 +0000 (0:00:00.213) 0:00:39.606 *********
2026-04-07 08:18:38.619803 | instance | changed: [instance]
2026-04-07 08:18:38.619876 | instance |
2026-04-07 08:18:38.620181 | instance | TASK [vexxhost.ceph.cephadm : Allow "cephadm" user to have passwordless sudo] ***
2026-04-07 08:18:38.620221 | instance | Tuesday 07 April 2026 08:18:38 +0000 (0:00:00.489) 0:00:40.095 *********
2026-04-07 08:18:39.048949 | instance | changed: [instance]
2026-04-07 08:18:39.049030 | instance |
2026-04-07 08:18:39.049093 | instance | TASK [vexxhost.ceph.mon : Get `cephadm ls` status] *****************************
2026-04-07 08:18:39.049242 | instance | Tuesday 07 April 2026 08:18:39 +0000 (0:00:00.429) 0:00:40.524 *********
2026-04-07 08:18:40.659135 | instance | ok: [instance]
2026-04-07 08:18:40.659600 | instance |
2026-04-07 08:18:40.659643 | instance | TASK [vexxhost.ceph.mon : Parse the `cephadm ls` output] ***********************
2026-04-07 08:18:40.659649 | instance | Tuesday 07 April 2026 08:18:40 +0000 (0:00:01.610) 0:00:42.134 *********
2026-04-07 08:18:40.710101 | instance | ok: [instance]
2026-04-07 08:18:40.710186 | instance |
2026-04-07 08:18:40.710455 | instance | TASK [vexxhost.ceph.mon : Assimilate existing configs in `ceph.conf`] **********
2026-04-07 08:18:40.710489 | instance | Tuesday 07 April 2026 08:18:40 +0000 (0:00:00.051) 0:00:42.185 *********
2026-04-07 08:18:40.747963 | instance | skipping: [instance]
2026-04-07 08:18:40.748426 | instance |
2026-04-07 08:18:40.748461 | instance | TASK [vexxhost.ceph.mon : Adopt monitor to cluster] ****************************
2026-04-07 08:18:40.748467 | instance | Tuesday 07 April 2026 08:18:40 +0000 (0:00:00.037) 0:00:42.223 *********
2026-04-07 08:18:40.777879 | instance | skipping: [instance]
2026-04-07 08:18:40.777965 | instance |
2026-04-07 08:18:40.778197 | instance | TASK [vexxhost.ceph.mon : Adopt manager to cluster] ****************************
2026-04-07 08:18:40.778239 | instance | Tuesday 07 April 2026 08:18:40 +0000 (0:00:00.030) 0:00:42.253 *********
2026-04-07 08:18:40.805193 | instance | skipping: [instance]
2026-04-07 08:18:40.805554 | instance |
2026-04-07 08:18:40.805592 | instance | TASK [vexxhost.ceph.mon : Enable "cephadm" mgr module] *************************
2026-04-07 08:18:40.805598 | instance | Tuesday 07 April 2026 08:18:40 +0000 (0:00:00.027) 0:00:42.280 *********
2026-04-07 08:18:40.837129 | instance | skipping: [instance]
2026-04-07 08:18:40.837244 | instance |
2026-04-07 08:18:40.837389 | instance | TASK [vexxhost.ceph.mon : Set orchestrator backend to "cephadm"] ***************
2026-04-07 08:18:40.837609 | instance | Tuesday 07 April 2026 08:18:40 +0000 (0:00:00.031) 0:00:42.312 *********
2026-04-07 08:18:40.871686 | instance | skipping: [instance]
2026-04-07 08:18:40.871786 | instance |
2026-04-07 08:18:40.871969 | instance | TASK [vexxhost.ceph.mon : Use `cephadm` user for cephadm] **********************
2026-04-07 08:18:40.872118 | instance | Tuesday 07 April 2026 08:18:40 +0000 (0:00:00.034) 0:00:42.347 *********
2026-04-07 08:18:40.907807 | instance | skipping: [instance]
2026-04-07 08:18:40.907917 | instance |
2026-04-07 08:18:40.908101 | instance | TASK [vexxhost.ceph.mon : Generate "cephadm" key] ******************************
2026-04-07 08:18:40.908251 | instance | Tuesday 07 April 2026 08:18:40 +0000 (0:00:00.037) 0:00:42.383 *********
2026-04-07 08:18:40.945346 | instance | skipping: [instance]
2026-04-07 08:18:40.945409 | instance |
2026-04-07 08:18:40.945537 | instance | TASK [vexxhost.ceph.mon : Set Ceph Monitor IP address] *************************
2026-04-07 08:18:40.945718 | instance | Tuesday 07 April 2026 08:18:40 +0000 (0:00:00.037) 0:00:42.420 *********
2026-04-07 08:18:41.061315 | instance | ok: [instance]
2026-04-07 08:18:41.061391 | instance |
2026-04-07 08:18:41.061528 | instance | TASK [vexxhost.ceph.mon : Check if any node is bootstrapped] *******************
2026-04-07 08:18:41.061749 | instance | Tuesday 07 April 2026 08:18:41 +0000 (0:00:00.115) 0:00:42.536 *********
2026-04-07 08:18:41.291305 | instance | ok: [instance] => (item=instance)
2026-04-07 08:18:41.291687 | instance |
2026-04-07 08:18:41.291728 | instance | TASK [vexxhost.ceph.mon : Select pre-existing bootstrap node if exists] ********
2026-04-07 08:18:41.291734 | instance | Tuesday 07 April 2026 08:18:41 +0000 (0:00:00.230) 0:00:42.766 *********
2026-04-07 08:18:41.342043 | instance | ok: [instance]
2026-04-07 08:18:41.342148 | instance |
2026-04-07 08:18:41.342415 | instance | TASK [vexxhost.ceph.mon : Bootstrap cluster] ***********************************
2026-04-07 08:18:41.342448 | instance | Tuesday 07 April 2026 08:18:41 +0000 (0:00:00.050) 0:00:42.817 *********
2026-04-07 08:18:41.413658 | instance | included: /home/zuul/.ansible/collections/ansible_collections/vexxhost/ceph/roles/mon/tasks/bootstrap-ceph.yml for instance
2026-04-07 08:18:41.413756 | instance |
2026-04-07 08:18:41.414050 | instance | TASK [vexxhost.ceph.mon : Generate temporary file for "ceph.conf"] *************
2026-04-07 08:18:41.414085 | instance | Tuesday 07 April 2026 08:18:41 +0000 (0:00:00.071) 0:00:42.889 *********
2026-04-07 08:18:41.698410 | instance | changed: [instance]
2026-04-07 08:18:41.698507 | instance |
2026-04-07 08:18:41.698784 | instance | TASK [vexxhost.ceph.mon : Include extra configuration values] ******************
2026-04-07 08:18:41.698821 | instance | Tuesday 07 April 2026 08:18:41 +0000 (0:00:00.284) 0:00:43.173 *********
2026-04-07 08:18:42.382884 | instance | changed: [instance] => (item={'section': 'global', 'option': 'mon allow pool size one', 'value': True})
2026-04-07 08:18:42.383684 | instance | changed: [instance] => (item={'section': 'global', 'option': 'osd crush chooseleaf type', 'value': 0})
2026-04-07 08:18:42.383732 | instance | changed: [instance] => (item={'section': 'mon', 'option': 'auth allow insecure global id reclaim', 'value': False})
2026-04-07 08:18:42.383739 | instance |
2026-04-07 08:18:42.383745 | instance | TASK [vexxhost.ceph.mon : Run Bootstrap coomand] *******************************
2026-04-07 08:18:42.383751 | instance | Tuesday 07 April 2026 08:18:42 +0000 (0:00:00.684) 0:00:43.858 *********
2026-04-07 08:20:50.097758 | instance | ok: [instance]
2026-04-07 08:20:50.097848 | instance |
2026-04-07 08:20:50.097861 | instance | TASK [vexxhost.ceph.mon : Remove temporary file for "ceph.conf"] ***************
2026-04-07 08:20:50.098000 | instance | Tuesday 07 April 2026 08:20:50 +0000 (0:02:07.714) 0:02:51.572 *********
2026-04-07 08:20:50.331745 | instance | changed: [instance]
2026-04-07 08:20:50.331825 | instance |
2026-04-07 08:20:50.332093 | instance | TASK [vexxhost.ceph.mon : Set bootstrap node] **********************************
2026-04-07 08:20:50.332134 | instance | Tuesday 07 April 2026 08:20:50 +0000 (0:00:00.233) 0:02:51.806 *********
2026-04-07 08:20:50.375939 | instance | ok: [instance]
2026-04-07 08:20:50.376331 | instance |
2026-04-07 08:20:50.376372 | instance | TASK [Install Ceph host] *******************************************************
2026-04-07 08:20:50.376378 | instance | Tuesday 07 April 2026 08:20:50 +0000 (0:00:00.044) 0:02:51.851 *********
2026-04-07 08:20:50.453612 | instance | included: vexxhost.ceph.cephadm_host for instance
2026-04-07 08:20:50.453692 | instance |
2026-04-07 08:20:50.453753 | instance | TASK [vexxhost.ceph.cephadm_host : Get public SSH key for "cephadm" user] ******
2026-04-07 08:20:50.453909 | instance | Tuesday 07 April 2026 08:20:50 +0000 (0:00:00.077) 0:02:51.929 *********
2026-04-07 08:20:52.096487 | instance | ok: [instance]
2026-04-07 08:20:52.096622 | instance |
2026-04-07 08:20:52.097054 | instance | TASK [vexxhost.ceph.cephadm_host : Set fact with public SSH key for "cephadm" user] ***
2026-04-07 08:20:52.097094 | instance | Tuesday 07 April 2026 08:20:52 +0000 (0:00:01.642) 0:02:53.571 *********
2026-04-07 08:20:52.158948 | instance | ok: [instance] => (item=instance)
2026-04-07 08:20:52.159016 | instance |
2026-04-07 08:20:52.159294 | instance | TASK [vexxhost.ceph.cephadm_host : Set authorized key for "cephadm"] ***********
2026-04-07 08:20:52.159325 | instance | Tuesday 07 April 2026 08:20:52 +0000 (0:00:00.062) 0:02:53.634 *********
2026-04-07 08:20:52.605598 | instance | ok: [instance]
2026-04-07 08:20:52.605682 | instance |
2026-04-07 08:20:52.605943 | instance | TASK [vexxhost.ceph.cephadm_host : Add new host to Ceph] ***********************
2026-04-07 08:20:52.605995 | instance | Tuesday 07 April 2026 08:20:52 +0000 (0:00:00.446) 0:02:54.081 *********
2026-04-07 08:20:54.746889 | instance | ok: [instance]
2026-04-07 08:20:54.746976 | instance |
2026-04-07 08:20:54.747215 | instance | TASK [vexxhost.ceph.mon : Configure "mon" label for monitors] ******************
2026-04-07 08:20:54.747253 | instance | Tuesday 07 April 2026 08:20:54 +0000 (0:00:02.141) 0:02:56.222 *********
2026-04-07 08:20:56.404479 | instance | ok: [instance]
2026-04-07 08:20:56.404552 | instance |
2026-04-07 08:20:56.404851 | instance | TASK [vexxhost.ceph.mon : Validate monitor exist] ******************************
2026-04-07 08:20:56.404882 | instance | Tuesday 07 April 2026 08:20:56 +0000 (0:00:01.657) 0:02:57.879 *********
2026-04-07 08:21:07.683285 | instance | ok: [instance]
2026-04-07 08:21:07.683366 | instance |
2026-04-07 08:21:07.683617 | instance | TASK [Install Ceph host] *******************************************************
2026-04-07 08:21:07.683652 | instance | Tuesday 07 April 2026 08:21:07 +0000 (0:00:11.278) 0:03:09.158 *********
2026-04-07 08:21:07.755347 | instance | included: vexxhost.ceph.cephadm_host for instance
2026-04-07 08:21:07.755437 | instance |
2026-04-07 08:21:07.755738 | instance | TASK [vexxhost.ceph.cephadm_host : Get public SSH key for "cephadm" user] ******
2026-04-07 08:21:07.755785 | instance | Tuesday 07 April 2026 08:21:07 +0000 (0:00:00.071) 0:03:09.230 *********
2026-04-07 08:21:07.813050 | instance | skipping: [instance]
2026-04-07 08:21:07.813116 | instance |
2026-04-07 08:21:07.813396 | instance | TASK [vexxhost.ceph.cephadm_host : Set fact with public SSH key for "cephadm" user] ***
2026-04-07 08:21:07.813416 | instance | Tuesday 07 April 2026 08:21:07 +0000 (0:00:00.057) 0:03:09.288 *********
2026-04-07 08:21:07.868328 | instance | skipping: [instance] => (item=instance)
2026-04-07 08:21:07.868812 | instance | skipping: [instance]
2026-04-07 08:21:07.868855 | instance |
2026-04-07 08:21:07.868861 | instance | TASK [vexxhost.ceph.cephadm_host : Set authorized key for "cephadm"] ***********
2026-04-07 08:21:07.868866 | instance | Tuesday 07 April 2026 08:21:07 +0000 (0:00:00.055) 0:03:09.343 *********
2026-04-07 08:21:08.160005 | instance | ok: [instance]
2026-04-07 08:21:08.160071 | instance |
2026-04-07 08:21:08.160347 | instance | TASK [vexxhost.ceph.cephadm_host : Add new host to Ceph] ***********************
2026-04-07 08:21:08.160387 | instance | Tuesday 07 April 2026 08:21:08 +0000 (0:00:00.292) 0:03:09.635 *********
2026-04-07 08:21:10.120846 | instance | ok: [instance]
2026-04-07 08:21:10.120888 | instance |
2026-04-07 08:21:10.120894 | instance | TASK [vexxhost.ceph.mgr : Configure "mgr" label for managers] ******************
2026-04-07 08:21:10.120899 | instance | Tuesday 07 April 2026 08:21:10 +0000 (0:00:01.960) 0:03:11.596 *********
2026-04-07 08:21:11.831912 | instance | ok: [instance]
2026-04-07 08:21:11.832039 | instance |
2026-04-07 08:21:11.832051 | instance | TASK [vexxhost.ceph.mgr : Validate manager exist] ******************************
2026-04-07 08:21:11.832062 | instance | Tuesday 07 April 2026 08:21:11 +0000 (0:00:01.709) 0:03:13.305 *********
2026-04-07 08:21:13.413299 | instance | ok: [instance]
2026-04-07 08:21:13.413375 | instance |
2026-04-07 08:21:13.413705 | instance | TASK [vexxhost.ceph.mgr : Enable the Ceph Manager prometheus module] ***********
2026-04-07 08:21:13.413773 | instance | Tuesday 07 April 2026 08:21:13 +0000 (0:00:01.582) 0:03:14.888 *********
2026-04-07 08:21:15.284359 | instance | ok: [instance]
2026-04-07 08:21:15.284478 | instance |
2026-04-07 08:21:15.284486 | instance | PLAY [Deploy Ceph OSDs] ********************************************************
2026-04-07 08:21:15.284603 | instance |
2026-04-07 08:21:15.284723 | instance | TASK [Gathering Facts] *********************************************************
2026-04-07 08:21:15.284844 | instance | Tuesday 07 April 2026 08:21:15 +0000 (0:00:01.871) 0:03:16.760 *********
2026-04-07 08:21:16.229833 | instance | ok: [instance]
2026-04-07 08:21:16.229945 | instance |
2026-04-07 08:21:16.229959 | instance | TASK [vexxhost.containers.forget_package : Forget package] *********************
2026-04-07 08:21:16.230093 | instance | Tuesday 07 April 2026 08:21:16 +0000 (0:00:00.945) 0:03:17.705 *********
2026-04-07 08:21:16.455725 | instance | ok: [instance]
2026-04-07 08:21:16.455835 | instance |
2026-04-07 08:21:16.455944 | instance | TASK [vexxhost.containers.package : Update state for tar] **********************
2026-04-07 08:21:16.456111 | instance | Tuesday 07 April 2026 08:21:16 +0000 (0:00:00.225) 0:03:17.930 *********
2026-04-07 08:21:16.498498 | instance | skipping: [instance]
2026-04-07 08:21:16.498557 | instance |
2026-04-07 08:21:16.498694 | instance | TASK [vexxhost.containers.directory : Create directory (/var/lib/downloads)] ***
2026-04-07 08:21:16.498808 | instance | Tuesday 07 April 2026 08:21:16 +0000 (0:00:00.043) 0:03:17.974 *********
2026-04-07 08:21:16.727261 | instance | ok: [instance]
2026-04-07 08:21:16.727362 | instance |
2026-04-07 08:21:16.727421 | instance | TASK [vexxhost.containers.download_artifact : Starting download of file] *******
2026-04-07 08:21:16.727545 | instance | Tuesday 07 April 2026 08:21:16 +0000 (0:00:00.228) 0:03:18.202 *********
2026-04-07 08:21:16.780896 | instance | ok: [instance] => {
2026-04-07 08:21:16.781095 | instance | "msg": "https://github.com/opencontainers/runc/releases/download/v1.4.0/runc.amd64"
2026-04-07 08:21:16.781204 | instance | }
2026-04-07 08:21:16.781359 | instance |
2026-04-07 08:21:16.781509 | instance | TASK [vexxhost.containers.download_artifact : Download item] *******************
2026-04-07 08:21:16.781688 | instance | Tuesday 07 April 2026 08:21:16 +0000 (0:00:00.053) 0:03:18.256 *********
2026-04-07 08:21:17.111598 | instance | ok: [instance]
2026-04-07 08:21:17.111724 | instance |
2026-04-07 08:21:17.111829 | instance | TASK [vexxhost.containers.download_artifact : Extract archive] *****************
2026-04-07 08:21:17.111997 | instance | Tuesday 07 April 2026 08:21:17 +0000 (0:00:00.330) 0:03:18.586 *********
2026-04-07 08:21:17.158638 | instance | skipping: [instance]
2026-04-07 08:21:17.158762 | instance |
2026-04-07 08:21:17.158775 | instance | TASK [vexxhost.containers.package : Update state for tar] **********************
2026-04-07 08:21:17.158925 | instance | Tuesday 07 April 2026 08:21:17 +0000 (0:00:00.047) 0:03:18.634 *********
2026-04-07 08:21:17.200768 | instance | skipping: [instance]
2026-04-07 08:21:17.200874 | instance |
2026-04-07 08:21:17.200930 | instance | TASK [vexxhost.containers.forget_package : Forget package] *********************
2026-04-07 08:21:17.201047 | instance | Tuesday 07 April 2026 08:21:17 +0000 (0:00:00.042) 0:03:18.676 *********
2026-04-07 08:21:17.430994 | instance | ok: [instance]
2026-04-07 08:21:17.431061 | instance |
2026-04-07 08:21:17.431206 | instance | TASK [vexxhost.containers.package : Update state for tar] **********************
2026-04-07 08:21:17.431337 | instance | Tuesday 07 April 2026 08:21:17 +0000 (0:00:00.229) 0:03:18.906 *********
2026-04-07 08:21:18.724767 | instance | ok: [instance]
2026-04-07 08:21:18.724874 | instance |
2026-04-07 08:21:18.724942 | instance | TASK [vexxhost.containers.download_artifact : Starting download of file] *******
2026-04-07 08:21:18.725069 | instance | Tuesday 07 April 2026 08:21:18 +0000 (0:00:01.293) 0:03:20.200 *********
2026-04-07 08:21:18.800100 | instance | ok: [instance] => {
2026-04-07 08:21:18.800240 | instance | "msg": "https://github.com/containerd/containerd/releases/download/v2.2.0/containerd-2.2.0-linux-amd64.tar.gz"
2026-04-07 08:21:18.800427 | instance | }
2026-04-07 08:21:18.800605 | instance |
2026-04-07 08:21:18.800799 | instance | TASK [vexxhost.containers.download_artifact : Download item] *******************
2026-04-07 08:21:18.801019 | instance | Tuesday 07 April 2026 08:21:18 +0000 (0:00:00.074) 0:03:20.275 *********
2026-04-07 08:21:19.208882 | instance | ok: [instance]
2026-04-07 08:21:19.209000 | instance |
2026-04-07 08:21:19.209048 | instance | TASK [vexxhost.containers.download_artifact : Extract archive] *****************
2026-04-07 08:21:19.209175 | instance | Tuesday 07 April 2026 08:21:19 +0000 (0:00:00.409) 0:03:20.684 *********
2026-04-07 08:21:21.139530 | instance | ok: [instance]
2026-04-07 08:21:21.139665 | instance |
2026-04-07 08:21:21.139678 | instance | TASK [vexxhost.containers.containerd : Install SELinux packages] ***************
2026-04-07 08:21:21.139853 | instance | Tuesday 07 April 2026 08:21:21 +0000 (0:00:01.930) 0:03:22.615 *********
2026-04-07 08:21:21.171338 | instance | skipping: [instance]
2026-04-07 08:21:21.171431 | instance |
2026-04-07 08:21:21.171659 | instance | TASK [vexxhost.containers.containerd : Set SELinux to permissive at runtime] ***
2026-04-07 08:21:21.171690 | instance | Tuesday 07 April 2026 08:21:21 +0000 (0:00:00.032) 0:03:22.647 *********
2026-04-07 08:21:21.203278 | instance | skipping: [instance]
2026-04-07 08:21:21.203415 | instance |
2026-04-07 08:21:21.203492 | instance | TASK [vexxhost.containers.containerd : Persist SELinux permissive mode] ********
2026-04-07 08:21:21.203660 | instance | Tuesday 07 April 2026 08:21:21 +0000 (0:00:00.031) 0:03:22.678 *********
2026-04-07 08:21:21.230593 | instance | skipping: [instance]
2026-04-07 08:21:21.230701 | instance |
2026-04-07 08:21:21.230715 | instance | TASK [vexxhost.containers.containerd : Install AppArmor packages] **************
2026-04-07 08:21:21.230904 | instance | Tuesday 07 April 2026 08:21:21 +0000 (0:00:00.027) 0:03:22.706 *********
2026-04-07 08:21:22.591398 | instance | ok: [instance]
2026-04-07 08:21:22.591516 | instance |
2026-04-07 08:21:22.591638 | instance | TASK [vexxhost.containers.containerd : Create systemd service file for containerd] ***
2026-04-07 08:21:22.591772 | instance | Tuesday 07 April 2026 08:21:22 +0000 (0:00:01.359) 0:03:24.066 *********
2026-04-07 08:21:23.007006 | instance | ok: [instance]
2026-04-07 08:21:23.007135 | instance |
2026-04-07 08:21:23.007148 | instance | TASK [vexxhost.containers.containerd : Create folders for configuration] *******
2026-04-07 08:21:23.007249 | instance | Tuesday 07 April 2026 08:21:23 +0000 (0:00:00.415) 0:03:24.482 *********
2026-04-07 08:21:24.004076 | instance | ok: [instance] => (item={'path': '/etc/containerd'})
2026-04-07 08:21:24.004212 | instance | ok: [instance] => (item={'path': '/var/lib/containerd', 'mode': '0o700'})
2026-04-07 08:21:24.004224 | instance | ok: [instance] => (item={'path': '/run/containerd', 'mode': '0o711'})
2026-04-07 08:21:24.004360 | instance | ok: [instance] => (item={'path': '/run/containerd/io.containerd.grpc.v1.cri', 'mode': '0o700'})
2026-04-07 08:21:24.004545 | instance | ok: [instance] => (item={'path': '/run/containerd/io.containerd.sandbox.controller.v1.shim', 'mode': '0o700'})
2026-04-07 08:21:24.004677 | instance |
2026-04-07 08:21:24.004795 | instance | TASK [vexxhost.containers.containerd : Create containerd config file] **********
2026-04-07 08:21:24.004919 | instance | Tuesday 07 April 2026 08:21:23 +0000 (0:00:00.997) 0:03:25.479 *********
2026-04-07 08:21:24.470798 | instance | ok: [instance]
2026-04-07 08:21:24.470967 | instance |
2026-04-07 08:21:24.471141 | instance | TASK [vexxhost.containers.containerd : Force any restarts if necessary] ********
2026-04-07 08:21:24.471328 | instance | Tuesday 07 April 2026 08:21:24 +0000 (0:00:00.456) 0:03:25.935 *********
2026-04-07 08:21:24.471500 | instance |
2026-04-07 08:21:24.471692 | instance | TASK [vexxhost.containers.containerd : Enable and start service] ***************
2026-04-07 08:21:24.471881 | instance | Tuesday 07 April 2026 08:21:24 +0000 (0:00:00.010) 0:03:25.945 *********
2026-04-07 08:21:24.837960 | instance | ok: [instance]
2026-04-07 08:21:24.838080 | instance |
2026-04-07 08:21:24.838170 | instance | TASK [vexxhost.containers.forget_package : Forget package] *********************
2026-04-07 08:21:24.838332 | instance | Tuesday 07 April 2026 08:21:24 +0000 (0:00:00.367) 0:03:26.313 *********
2026-04-07 08:21:25.057779 | instance | ok: [instance]
2026-04-07 08:21:25.057872 | instance |
2026-04-07 08:21:25.057986 | instance | TASK [vexxhost.containers.download_artifact : Starting download of file] *******
2026-04-07 08:21:25.058108 | instance | Tuesday 07 April 2026 08:21:25 +0000 (0:00:00.219) 0:03:26.533 *********
2026-04-07 08:21:25.117344 | instance | ok: [instance] => {
2026-04-07 08:21:25.117509 | instance | "msg": "https://download.docker.com/linux/static/stable/x86_64/docker-24.0.9.tgz"
2026-04-07 08:21:25.117682 | instance | }
2026-04-07 08:21:25.117850 | instance |
2026-04-07 08:21:25.118034 | instance | TASK [vexxhost.containers.download_artifact : Download item] *******************
2026-04-07 08:21:25.118224 | instance | Tuesday 07 April 2026 08:21:25 +0000 (0:00:00.059) 0:03:26.592 *********
2026-04-07 08:21:25.491073 | instance | ok: [instance]
2026-04-07 08:21:25.491184 | instance |
2026-04-07 08:21:25.491229 | instance | TASK [vexxhost.containers.download_artifact : Extract archive] *****************
2026-04-07 08:21:25.491362 | instance | Tuesday 07 April 2026 08:21:25 +0000 (0:00:00.374) 0:03:26.966 *********
2026-04-07 08:21:28.522873 | instance | ok: [instance]
2026-04-07 08:21:28.523008 | instance |
2026-04-07 08:21:28.523069 | instance | TASK [vexxhost.containers.docker : Install AppArmor packages] ******************
2026-04-07 08:21:28.523217 | instance | Tuesday 07 April 2026 08:21:28 +0000 (0:00:03.031) 0:03:29.998 *********
2026-04-07 08:21:29.980920 | instance | ok: [instance]
2026-04-07 08:21:29.981036 | instance |
2026-04-07 08:21:29.981217 | instance | TASK [vexxhost.containers.docker : Ensure group "docker" exists] ***************
2026-04-07 08:21:29.981388 | instance | Tuesday 07 April 2026 08:21:29 +0000 (0:00:01.457) 0:03:31.456 *********
2026-04-07 08:21:30.199038 | instance | ok: [instance]
2026-04-07 08:21:30.199118 | instance |
2026-04-07 08:21:30.199222 | instance | TASK [vexxhost.containers.docker : Create systemd service file for docker] *****
2026-04-07 08:21:30.199344 | instance | Tuesday 07 April 2026 08:21:30 +0000 (0:00:00.218) 0:03:31.674 *********
2026-04-07 08:21:30.622863 | instance | ok: [instance]
2026-04-07 08:21:30.623080 | instance |
2026-04-07 08:21:30.623351 | instance | TASK [vexxhost.containers.docker : Create folders for configuration] ***********
2026-04-07 08:21:30.623673 | instance | Tuesday 07 April 2026 08:21:30 +0000 (0:00:00.423) 0:03:32.098 *********
2026-04-07 08:21:31.265034 | instance | ok: [instance] => (item={'path': '/etc/docker'})
2026-04-07 08:21:31.265157 | instance | ok: [instance] => (item={'path': '/var/lib/docker', 'mode': '0o710'})
2026-04-07 08:21:31.265279 | instance | ok: [instance] => (item={'path': '/run/docker', 'mode': '0o711'})
2026-04-07 08:21:31.265437 | instance |
2026-04-07 08:21:31.265629 | instance | TASK [vexxhost.containers.docker : Create systemd socket file for docker] ******
2026-04-07 08:21:31.265894 | instance | Tuesday 07 April 2026 08:21:31 +0000 (0:00:00.641) 0:03:32.739 *********
2026-04-07 08:21:31.648384 | instance | ok: [instance]
2026-04-07 08:21:31.648544 | instance |
2026-04-07 08:21:31.648741 | instance | TASK [vexxhost.containers.docker : Create docker daemon config file] ***********
2026-04-07 08:21:31.648922 | instance | Tuesday 07 April 2026 08:21:31 +0000 (0:00:00.383) 0:03:33.123 *********
2026-04-07 08:21:32.043942 | instance | ok: [instance]
2026-04-07 08:21:32.044159 | instance |
2026-04-07 08:21:32.044492 | instance | TASK [vexxhost.containers.docker : Force any restarts if necessary] ************
2026-04-07 08:21:32.044780 | instance | Tuesday 07 April 2026 08:21:32 +0000 (0:00:00.388) 0:03:33.512 *********
2026-04-07 08:21:32.045039 | instance |
2026-04-07 08:21:32.045321 | instance | TASK [vexxhost.containers.docker : Enable and start service] *******************
2026-04-07 08:21:32.045671 | instance | Tuesday 07 April 2026 08:21:32 +0000 (0:00:00.006) 0:03:33.518 *********
2026-04-07 08:21:32.429663 | instance | ok: [instance]
2026-04-07 08:21:32.430272 | instance |
2026-04-07 08:21:32.430289 | instance | TASK [vexxhost.ceph.cephadm : Gather variables for each operating system] ******
2026-04-07 08:21:32.430300 | instance | Tuesday 07 April 2026 08:21:32 +0000 (0:00:00.384) 0:03:33.903 *********
2026-04-07 08:21:32.483809 | instance | ok: [instance] => (item=/home/zuul/.ansible/collections/ansible_collections/vexxhost/ceph/roles/cephadm/vars/ubuntu-22.04.yml)
2026-04-07 08:21:32.483936 | instance |
2026-04-07 08:21:32.483949 | instance | TASK [vexxhost.ceph.cephadm : Install packages] ********************************
2026-04-07 08:21:32.484039 | instance | Tuesday 07 April 2026 08:21:32 +0000 (0:00:00.055) 0:03:33.959 *********
2026-04-07 08:21:33.975520 | instance | ok: [instance]
2026-04-07 08:21:33.975562 | instance |
2026-04-07 08:21:33.975568 | instance | TASK [vexxhost.ceph.cephadm : Ensure services are started] *********************
2026-04-07 08:21:33.975573 | instance | Tuesday 07 April 2026 08:21:33 +0000 (0:00:01.491) 0:03:35.450 *********
2026-04-07 08:21:34.693266 | instance | ok: [instance] => (item=chronyd)
2026-04-07 08:21:34.693310 | instance | ok: [instance] => (item=sshd)
2026-04-07 08:21:34.693316 | instance |
2026-04-07 08:21:34.693321 | instance | TASK [vexxhost.ceph.cephadm : Download "cephadm"] ******************************
2026-04-07 08:21:34.693326 | instance | Tuesday 07 April 2026 08:21:34 +0000 (0:00:00.717) 0:03:36.168 *********
2026-04-07 08:21:35.480450 | instance | ok: [instance]
2026-04-07 08:21:35.480787 | instance |
2026-04-07 08:21:35.480805 | instance | TASK [vexxhost.ceph.cephadm : Remove cephadm from old path] ********************
2026-04-07 08:21:35.480951 | instance | Tuesday 07 April 2026 08:21:35 +0000 (0:00:00.786) 0:03:36.954 *********
2026-04-07 08:21:35.710955 | instance | ok: [instance]
2026-04-07 08:21:35.711035 | instance |
2026-04-07 08:21:35.711267 | instance | TASK [vexxhost.ceph.cephadm : Ensure "cephadm" user is present] ****************
2026-04-07 08:21:35.711311 | instance | Tuesday 07 April 2026 08:21:35 +0000 (0:00:00.231) 0:03:37.186 *********
2026-04-07 08:21:35.984174 | instance | ok: [instance]
2026-04-07 08:21:35.984248 | instance |
2026-04-07 08:21:35.984528 | instance | TASK [vexxhost.ceph.cephadm : Allow "cephadm" user to have passwordless sudo] ***
2026-04-07 08:21:35.984568 | instance | Tuesday 07 April 2026 08:21:35 +0000 (0:00:00.273) 0:03:37.459 *********
2026-04-07 08:21:36.215392 | instance | ok: [instance]
2026-04-07 08:21:36.215450 | instance |
2026-04-07 08:21:36.215789 | instance | TASK [vexxhost.ceph.osd : Get monitor status] **********************************
2026-04-07 08:21:36.215835 | instance | Tuesday 07 April 2026 08:21:36 +0000 (0:00:00.231) 0:03:37.690 *********
2026-04-07 08:21:36.461213 | instance | ok: [instance] => (item=instance)
2026-04-07 08:21:36.461334 | instance |
2026-04-07 08:21:36.461686 | instance | TASK [vexxhost.ceph.osd : Select admin host] ***********************************
2026-04-07 08:21:36.461738 | instance | Tuesday 07 April 2026 08:21:36 +0000 (0:00:00.245) 0:03:37.936 *********
2026-04-07 08:21:36.513612 | instance | ok: [instance]
2026-04-07 08:21:36.514123 | instance |
2026-04-07 08:21:36.514167 | instance | TASK [vexxhost.ceph.osd : Get `cephadm ls` status] *****************************
2026-04-07 08:21:36.514176 | instance | Tuesday 07 April 2026 08:21:36 +0000 (0:00:00.052) 0:03:37.988 *********
2026-04-07 08:21:41.885933 | instance | ok: [instance]
2026-04-07 08:21:41.885999 | instance |
2026-04-07 08:21:41.886279 | instance | TASK [vexxhost.ceph.osd : Parse the `cephadm ls` output] ***********************
2026-04-07 08:21:41.886318 | instance | Tuesday 07 April 2026 08:21:41 +0000 (0:00:05.372) 0:03:43.361 *********
2026-04-07 08:21:41.939652 | instance | ok: [instance]
2026-04-07 08:21:41.939707 | instance |
2026-04-07 08:21:41.940003 | instance | TASK [Install Ceph host] *******************************************************
2026-04-07 08:21:41.940045 | instance | Tuesday 07 April 2026 08:21:41 +0000 (0:00:00.053) 0:03:43.415 *********
2026-04-07 08:21:42.001348 | instance | included: vexxhost.ceph.cephadm_host for instance
2026-04-07 08:21:42.001784 | instance |
2026-04-07 08:21:42.001828 | instance | TASK [vexxhost.ceph.cephadm_host : Get public SSH key for "cephadm" user] ******
2026-04-07 08:21:42.001834 | instance | Tuesday 07 April 2026 08:21:41 +0000 (0:00:00.061) 0:03:43.476 *********
2026-04-07 08:21:42.048042 | instance | skipping: [instance]
2026-04-07 08:21:42.048156 | instance |
2026-04-07 08:21:42.048522 | instance | TASK [vexxhost.ceph.cephadm_host : Set fact with public SSH key for "cephadm" user] ***
2026-04-07 08:21:42.048546 | instance | Tuesday 07 April 2026 08:21:42 +0000 (0:00:00.047) 0:03:43.523 *********
2026-04-07 08:21:42.095897 | instance | skipping: [instance] => (item=instance)
2026-04-07 08:21:42.096018 | instance | skipping: [instance]
2026-04-07 08:21:42.096137 | instance |
2026-04-07 08:21:42.096277 | instance | TASK [vexxhost.ceph.cephadm_host : Set authorized key for "cephadm"] ***********
2026-04-07 08:21:42.096407 | instance | Tuesday 07 April 2026 08:21:42 +0000 (0:00:00.047) 0:03:43.571 *********
2026-04-07 08:21:42.391171 | instance | ok: [instance]
2026-04-07 08:21:42.391430 | instance |
2026-04-07 08:21:42.391709 | instance | TASK [vexxhost.ceph.cephadm_host : Add new host to Ceph] ***********************
2026-04-07 08:21:42.391984 | instance | Tuesday 07 April 2026 08:21:42 +0000 (0:00:00.294) 0:03:43.866 *********
2026-04-07 08:21:44.424416 | instance | ok: [instance]
2026-04-07 08:21:44.424604 | instance |
2026-04-07 08:21:44.424873 | instance | TASK [vexxhost.ceph.osd : Adopt OSDs to cluster] *******************************
2026-04-07 08:21:44.425143 | instance | Tuesday 07 April 2026 08:21:44 +0000 (0:00:02.033) 0:03:45.899 *********
2026-04-07 08:21:44.455201 | instance | skipping: [instance]
2026-04-07 08:21:44.455441 | instance |
2026-04-07 08:21:44.455710 | instance | TASK [vexxhost.ceph.osd : Wait until OSD added to cephadm] *********************
2026-04-07 08:21:44.456031 | instance | Tuesday 07 April 2026 08:21:44 +0000 (0:00:00.031) 0:03:45.930 *********
2026-04-07 08:21:44.489949 | instance | skipping: [instance]
2026-04-07 08:21:44.490190 | instance |
2026-04-07 08:21:44.490480 | instance | TASK [vexxhost.ceph.osd : Ensure all OSDs are non-legacy] **********************
2026-04-07 08:21:44.490767 | instance | Tuesday 07 April 2026 08:21:44 +0000 (0:00:00.034) 0:03:45.965 *********
2026-04-07 08:21:49.837064 | instance | ok: [instance]
2026-04-07 08:21:49.837131 | instance |
2026-04-07 08:21:49.837142 | instance | TASK [vexxhost.ceph.osd : Get `ceph-volume lvm list` status] *******************
2026-04-07 08:21:49.837168 | instance | Tuesday 07 April 2026 08:21:49 +0000 (0:00:05.346) 0:03:51.311 *********
2026-04-07 08:22:00.580953 | instance | ok: [instance]
2026-04-07 08:22:00.581051 | instance |
2026-04-07 08:22:00.581127 | instance | TASK [vexxhost.ceph.osd : Install OSDs] ****************************************
2026-04-07 08:22:00.581281 | instance | Tuesday 07 April 2026 08:22:00 +0000 (0:00:10.744) 0:04:02.056 *********
2026-04-07 08:22:08.427655 | instance | failed: [instance] (item=/dev/vdb) => {"ansible_loop_var": "item", "changed": false, "cmd": ["cephadm", "shell", "--fsid", "3b2c2574-f01d-5e08-aaf2-394c5021871d", "--config", "/var/lib/ceph/3b2c2574-f01d-5e08-aaf2-394c5021871d/mon.instance/config", "--", "ceph", "orch", "daemon", "add", "osd", "instance:/dev/vdb"], "delta": "0:00:07.578860", "end": "2026-04-07 08:22:08.389168", "item": "/dev/vdb", "msg": "non-zero return code", "rc": 22, "start": "2026-04-07 08:22:00.810308", "stderr": "Error EINVAL: Traceback (most recent call last):\n File \"/usr/share/ceph/mgr/mgr_module.py\", line 1834, in _handle_command\n
return self.handle_command(inbuf, cmd)\n File \"/usr/share/ceph/mgr/orchestrator/_interface.py\", line 183, in handle_command\n return dispatch[cmd['prefix']].call(self, cmd, inbuf)\n File \"/usr/share/ceph/mgr/mgr_module.py\", line 475, in call\n return self.func(mgr, **kwargs)\n File \"/usr/share/ceph/mgr/orchestrator/_interface.py\", line 119, in \n wrapper_copy = lambda *l_args, **l_kwargs: wrapper(*l_args, **l_kwargs) # noqa: E731\n File \"/usr/share/ceph/mgr/orchestrator/_interface.py\", line 108, in wrapper\n return func(*args, **kwargs)\n File \"/usr/share/ceph/mgr/orchestrator/module.py\", line 1306, in _daemon_add_osd\n raise_if_exception(completion)\n File \"/usr/share/ceph/mgr/orchestrator/_interface.py\", line 240, in raise_if_exception\n raise e\nRuntimeError: cephadm exited with an error code: 1, stderr:Inferring config /var/lib/ceph/3b2c2574-f01d-5e08-aaf2-394c5021871d/mon.instance/config\nNon-zero exit code 1 from /usr/bin/docker run --rm --ipc=host --stop-signal=SIGTERM --ulimit nofile=1048576 --net=host --entrypoint /usr/sbin/ceph-volume --privileged --group-add=disk --init -e CONTAINER_IMAGE=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 -e NODE_NAME=instance -e CEPH_VOLUME_OSDSPEC_AFFINITY=None -e CEPH_VOLUME_SKIP_RESTORECON=yes -e CEPH_VOLUME_DEBUG=1 -v /var/run/ceph/3b2c2574-f01d-5e08-aaf2-394c5021871d:/var/run/ceph:z -v /var/log/ceph/3b2c2574-f01d-5e08-aaf2-394c5021871d:/var/log/ceph:z -v /var/lib/ceph/3b2c2574-f01d-5e08-aaf2-394c5021871d/crash:/var/lib/ceph/crash:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /:/rootfs -v /tmp/ceph-tmp9z7jqurb:/etc/ceph/ceph.conf:z -v /tmp/ceph-tmpq9maqrre:/var/lib/ceph/bootstrap-osd/ceph.keyring:z quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 lvm batch --no-auto /dev/vdb --yes --no-systemd\n/usr/bin/docker: stderr stderr: lsblk: /dev/vdb: not a block 
device\n/usr/bin/docker: stderr Traceback (most recent call last):\n/usr/bin/docker: stderr File \"/usr/sbin/ceph-volume\", line 33, in \n/usr/bin/docker: stderr sys.exit(load_entry_point('ceph-volume==1.0.0', 'console_scripts', 'ceph-volume')())\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/main.py\", line 54, in __init__\n/usr/bin/docker: stderr self.main(self.argv)\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/decorators.py\", line 59, in newfunc\n/usr/bin/docker: stderr return f(*a, **kw)\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/main.py\", line 166, in main\n/usr/bin/docker: stderr terminal.dispatch(self.mapper, subcommand_args)\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/terminal.py\", line 194, in dispatch\n/usr/bin/docker: stderr instance.main()\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/devices/lvm/main.py\", line 46, in main\n/usr/bin/docker: stderr terminal.dispatch(self.mapper, self.argv)\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/terminal.py\", line 192, in dispatch\n/usr/bin/docker: stderr instance = mapper.get(arg)(argv[count:])\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/devices/lvm/batch.py\", line 325, in __init__\n/usr/bin/docker: stderr self.args = parser.parse_args(argv)\n/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 1825, in parse_args\n/usr/bin/docker: stderr args, argv = self.parse_known_args(args, namespace)\n/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 1858, in parse_known_args\n/usr/bin/docker: stderr namespace, args = self._parse_known_args(args, namespace)\n/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 2049, in _parse_known_args\n/usr/bin/docker: stderr positionals_end_index = consume_positionals(start_index)\n/usr/bin/docker: 
stderr File \"/usr/lib64/python3.9/argparse.py\", line 2026, in consume_positionals\n/usr/bin/docker: stderr take_action(action, args)\n/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 1919, in take_action\n/usr/bin/docker: stderr argument_values = self._get_values(action, argument_strings)\n/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 2468, in _get_values\n/usr/bin/docker: stderr value = [self._get_value(action, v) for v in arg_strings]\n/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 2468, in \n/usr/bin/docker: stderr value = [self._get_value(action, v) for v in arg_strings]\n/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 2483, in _get_value\n/usr/bin/docker: stderr result = type_func(arg_string)\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/util/arg_validators.py\", line 124, in __call__\n/usr/bin/docker: stderr super().get_device(dev_path)\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/util/arg_validators.py\", line 32, in get_device\n/usr/bin/docker: stderr self._device = Device(dev_path)\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/util/device.py\", line 140, in __init__\n/usr/bin/docker: stderr self._parse()\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/util/device.py\", line 236, in _parse\n/usr/bin/docker: stderr dev = disk.lsblk(self.path)\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/util/disk.py\", line 244, in lsblk\n/usr/bin/docker: stderr result = lsblk_all(device=device,\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/util/disk.py\", line 338, in lsblk_all\n/usr/bin/docker: stderr raise RuntimeError(f\"Error: {err}\")\n/usr/bin/docker: stderr RuntimeError: Error: ['lsblk: /dev/vdb: not a block device']\nTraceback (most recent call last):\n File 
\"/usr/lib/python3.10/runpy.py\", line 196, in _run_module_as_main\n return _run_code(code, main_globals, None,\n File \"/usr/lib/python3.10/runpy.py\", line 86, in _run_code\n exec(code, run_globals)\n File \"/var/lib/ceph/3b2c2574-f01d-5e08-aaf2-394c5021871d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 11009, in \n File \"/var/lib/ceph/3b2c2574-f01d-5e08-aaf2-394c5021871d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 10997, in main\n File \"/var/lib/ceph/3b2c2574-f01d-5e08-aaf2-394c5021871d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 2593, in _infer_config\n File \"/var/lib/ceph/3b2c2574-f01d-5e08-aaf2-394c5021871d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 2509, in _infer_fsid\n File \"/var/lib/ceph/3b2c2574-f01d-5e08-aaf2-394c5021871d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 2621, in _infer_image\n File \"/var/lib/ceph/3b2c2574-f01d-5e08-aaf2-394c5021871d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 2496, in _validate_fsid\n File \"/var/lib/ceph/3b2c2574-f01d-5e08-aaf2-394c5021871d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 7226, in command_ceph_volume\n File \"/var/lib/ceph/3b2c2574-f01d-5e08-aaf2-394c5021871d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 2284, in call_throws\nRuntimeError: Failed command: /usr/bin/docker run --rm --ipc=host --stop-signal=SIGTERM --ulimit nofile=1048576 --net=host --entrypoint /usr/sbin/ceph-volume --privileged --group-add=disk --init -e CONTAINER_IMAGE=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 -e NODE_NAME=instance -e CEPH_VOLUME_OSDSPEC_AFFINITY=None -e CEPH_VOLUME_SKIP_RESTORECON=yes -e 
CEPH_VOLUME_DEBUG=1 -v /var/run/ceph/3b2c2574-f01d-5e08-aaf2-394c5021871d:/var/run/ceph:z -v /var/log/ceph/3b2c2574-f01d-5e08-aaf2-394c5021871d:/var/log/ceph:z -v /var/lib/ceph/3b2c2574-f01d-5e08-aaf2-394c5021871d/crash:/var/lib/ceph/crash:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /:/rootfs -v /tmp/ceph-tmp9z7jqurb:/etc/ceph/ceph.conf:z -v /tmp/ceph-tmpq9maqrre:/var/lib/ceph/bootstrap-osd/ceph.keyring:z quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 lvm batch --no-auto /dev/vdb --yes --no-systemd", "stderr_lines": ["Error EINVAL: Traceback (most recent call last):", " File \"/usr/share/ceph/mgr/mgr_module.py\", line 1834, in _handle_command", " return self.handle_command(inbuf, cmd)", " File \"/usr/share/ceph/mgr/orchestrator/_interface.py\", line 183, in handle_command", " return dispatch[cmd['prefix']].call(self, cmd, inbuf)", " File \"/usr/share/ceph/mgr/mgr_module.py\", line 475, in call", " return self.func(mgr, **kwargs)", " File \"/usr/share/ceph/mgr/orchestrator/_interface.py\", line 119, in ", " wrapper_copy = lambda *l_args, **l_kwargs: wrapper(*l_args, **l_kwargs) # noqa: E731", " File \"/usr/share/ceph/mgr/orchestrator/_interface.py\", line 108, in wrapper", " return func(*args, **kwargs)", " File \"/usr/share/ceph/mgr/orchestrator/module.py\", line 1306, in _daemon_add_osd", " raise_if_exception(completion)", " File \"/usr/share/ceph/mgr/orchestrator/_interface.py\", line 240, in raise_if_exception", " raise e", "RuntimeError: cephadm exited with an error code: 1, stderr:Inferring config /var/lib/ceph/3b2c2574-f01d-5e08-aaf2-394c5021871d/mon.instance/config", "Non-zero exit code 1 from /usr/bin/docker run --rm --ipc=host --stop-signal=SIGTERM --ulimit nofile=1048576 --net=host --entrypoint /usr/sbin/ceph-volume --privileged --group-add=disk --init -e 
CONTAINER_IMAGE=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 -e NODE_NAME=instance -e CEPH_VOLUME_OSDSPEC_AFFINITY=None -e CEPH_VOLUME_SKIP_RESTORECON=yes -e CEPH_VOLUME_DEBUG=1 -v /var/run/ceph/3b2c2574-f01d-5e08-aaf2-394c5021871d:/var/run/ceph:z -v /var/log/ceph/3b2c2574-f01d-5e08-aaf2-394c5021871d:/var/log/ceph:z -v /var/lib/ceph/3b2c2574-f01d-5e08-aaf2-394c5021871d/crash:/var/lib/ceph/crash:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /:/rootfs -v /tmp/ceph-tmp9z7jqurb:/etc/ceph/ceph.conf:z -v /tmp/ceph-tmpq9maqrre:/var/lib/ceph/bootstrap-osd/ceph.keyring:z quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 lvm batch --no-auto /dev/vdb --yes --no-systemd", "/usr/bin/docker: stderr stderr: lsblk: /dev/vdb: not a block device", "/usr/bin/docker: stderr Traceback (most recent call last):", "/usr/bin/docker: stderr File \"/usr/sbin/ceph-volume\", line 33, in ", "/usr/bin/docker: stderr sys.exit(load_entry_point('ceph-volume==1.0.0', 'console_scripts', 'ceph-volume')())", "/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/main.py\", line 54, in __init__", "/usr/bin/docker: stderr self.main(self.argv)", "/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/decorators.py\", line 59, in newfunc", "/usr/bin/docker: stderr return f(*a, **kw)", "/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/main.py\", line 166, in main", "/usr/bin/docker: stderr terminal.dispatch(self.mapper, subcommand_args)", "/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/terminal.py\", line 194, in dispatch", "/usr/bin/docker: stderr instance.main()", "/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/devices/lvm/main.py\", line 46, in main", "/usr/bin/docker: stderr terminal.dispatch(self.mapper, self.argv)", "/usr/bin/docker: 
stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/terminal.py\", line 192, in dispatch", "/usr/bin/docker: stderr instance = mapper.get(arg)(argv[count:])", "/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/devices/lvm/batch.py\", line 325, in __init__", "/usr/bin/docker: stderr self.args = parser.parse_args(argv)", "/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 1825, in parse_args", "/usr/bin/docker: stderr args, argv = self.parse_known_args(args, namespace)", "/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 1858, in parse_known_args", "/usr/bin/docker: stderr namespace, args = self._parse_known_args(args, namespace)", "/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 2049, in _parse_known_args", "/usr/bin/docker: stderr positionals_end_index = consume_positionals(start_index)", "/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 2026, in consume_positionals", "/usr/bin/docker: stderr take_action(action, args)", "/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 1919, in take_action", "/usr/bin/docker: stderr argument_values = self._get_values(action, argument_strings)", "/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 2468, in _get_values", "/usr/bin/docker: stderr value = [self._get_value(action, v) for v in arg_strings]", "/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 2468, in ", "/usr/bin/docker: stderr value = [self._get_value(action, v) for v in arg_strings]", "/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 2483, in _get_value", "/usr/bin/docker: stderr result = type_func(arg_string)", "/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/util/arg_validators.py\", line 124, in __call__", "/usr/bin/docker: stderr super().get_device(dev_path)", "/usr/bin/docker: stderr File 
\"/usr/lib/python3.9/site-packages/ceph_volume/util/arg_validators.py\", line 32, in get_device", "/usr/bin/docker: stderr self._device = Device(dev_path)", "/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/util/device.py\", line 140, in __init__", "/usr/bin/docker: stderr self._parse()", "/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/util/device.py\", line 236, in _parse", "/usr/bin/docker: stderr dev = disk.lsblk(self.path)", "/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/util/disk.py\", line 244, in lsblk", "/usr/bin/docker: stderr result = lsblk_all(device=device,", "/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/util/disk.py\", line 338, in lsblk_all", "/usr/bin/docker: stderr raise RuntimeError(f\"Error: {err}\")", "/usr/bin/docker: stderr RuntimeError: Error: ['lsblk: /dev/vdb: not a block device']", "Traceback (most recent call last):", " File \"/usr/lib/python3.10/runpy.py\", line 196, in _run_module_as_main", " return _run_code(code, main_globals, None,", " File \"/usr/lib/python3.10/runpy.py\", line 86, in _run_code", " exec(code, run_globals)", " File \"/var/lib/ceph/3b2c2574-f01d-5e08-aaf2-394c5021871d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 11009, in ", " File \"/var/lib/ceph/3b2c2574-f01d-5e08-aaf2-394c5021871d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 10997, in main", " File \"/var/lib/ceph/3b2c2574-f01d-5e08-aaf2-394c5021871d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 2593, in _infer_config", " File \"/var/lib/ceph/3b2c2574-f01d-5e08-aaf2-394c5021871d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 2509, in _infer_fsid", " File 
\"/var/lib/ceph/3b2c2574-f01d-5e08-aaf2-394c5021871d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 2621, in _infer_image", " File \"/var/lib/ceph/3b2c2574-f01d-5e08-aaf2-394c5021871d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 2496, in _validate_fsid", " File \"/var/lib/ceph/3b2c2574-f01d-5e08-aaf2-394c5021871d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 7226, in command_ceph_volume", " File \"/var/lib/ceph/3b2c2574-f01d-5e08-aaf2-394c5021871d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 2284, in call_throws", "RuntimeError: Failed command: /usr/bin/docker run --rm --ipc=host --stop-signal=SIGTERM --ulimit nofile=1048576 --net=host --entrypoint /usr/sbin/ceph-volume --privileged --group-add=disk --init -e CONTAINER_IMAGE=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 -e NODE_NAME=instance -e CEPH_VOLUME_OSDSPEC_AFFINITY=None -e CEPH_VOLUME_SKIP_RESTORECON=yes -e CEPH_VOLUME_DEBUG=1 -v /var/run/ceph/3b2c2574-f01d-5e08-aaf2-394c5021871d:/var/run/ceph:z -v /var/log/ceph/3b2c2574-f01d-5e08-aaf2-394c5021871d:/var/log/ceph:z -v /var/lib/ceph/3b2c2574-f01d-5e08-aaf2-394c5021871d/crash:/var/lib/ceph/crash:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /:/rootfs -v /tmp/ceph-tmp9z7jqurb:/etc/ceph/ceph.conf:z -v /tmp/ceph-tmpq9maqrre:/var/lib/ceph/bootstrap-osd/ceph.keyring:z quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 lvm batch --no-auto /dev/vdb --yes --no-systemd"], "stdout": "", "stdout_lines": []} 2026-04-07 08:22:16.293679 | instance | failed: [instance] (item=/dev/vdc) => {"ansible_loop_var": "item", "changed": false, "cmd": ["cephadm", "shell", "--fsid", "3b2c2574-f01d-5e08-aaf2-394c5021871d", "--config", 
"/var/lib/ceph/3b2c2574-f01d-5e08-aaf2-394c5021871d/mon.instance/config", "--", "ceph", "orch", "daemon", "add", "osd", "instance:/dev/vdc"], "delta": "0:00:07.644137", "end": "2026-04-07 08:22:16.252739", "item": "/dev/vdc", "msg": "non-zero return code", "rc": 22, "start": "2026-04-07 08:22:08.608602", "stderr": "Error EINVAL: Traceback (most recent call last):\n File \"/usr/share/ceph/mgr/mgr_module.py\", line 1834, in _handle_command\n return self.handle_command(inbuf, cmd)\n File \"/usr/share/ceph/mgr/orchestrator/_interface.py\", line 183, in handle_command\n return dispatch[cmd['prefix']].call(self, cmd, inbuf)\n File \"/usr/share/ceph/mgr/mgr_module.py\", line 475, in call\n return self.func(mgr, **kwargs)\n File \"/usr/share/ceph/mgr/orchestrator/_interface.py\", line 119, in \n wrapper_copy = lambda *l_args, **l_kwargs: wrapper(*l_args, **l_kwargs) # noqa: E731\n File \"/usr/share/ceph/mgr/orchestrator/_interface.py\", line 108, in wrapper\n return func(*args, **kwargs)\n File \"/usr/share/ceph/mgr/orchestrator/module.py\", line 1306, in _daemon_add_osd\n raise_if_exception(completion)\n File \"/usr/share/ceph/mgr/orchestrator/_interface.py\", line 240, in raise_if_exception\n raise e\nRuntimeError: cephadm exited with an error code: 1, stderr:Inferring config /var/lib/ceph/3b2c2574-f01d-5e08-aaf2-394c5021871d/mon.instance/config\nNon-zero exit code 1 from /usr/bin/docker run --rm --ipc=host --stop-signal=SIGTERM --ulimit nofile=1048576 --net=host --entrypoint /usr/sbin/ceph-volume --privileged --group-add=disk --init -e CONTAINER_IMAGE=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 -e NODE_NAME=instance -e CEPH_VOLUME_OSDSPEC_AFFINITY=None -e CEPH_VOLUME_SKIP_RESTORECON=yes -e CEPH_VOLUME_DEBUG=1 -v /var/run/ceph/3b2c2574-f01d-5e08-aaf2-394c5021871d:/var/run/ceph:z -v /var/log/ceph/3b2c2574-f01d-5e08-aaf2-394c5021871d:/var/log/ceph:z -v 
/var/lib/ceph/3b2c2574-f01d-5e08-aaf2-394c5021871d/crash:/var/lib/ceph/crash:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /:/rootfs -v /tmp/ceph-tmplntv5iqv:/etc/ceph/ceph.conf:z -v /tmp/ceph-tmp0s8li0i5:/var/lib/ceph/bootstrap-osd/ceph.keyring:z quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 lvm batch --no-auto /dev/vdc --yes --no-systemd\n/usr/bin/docker: stderr stderr: lsblk: /dev/vdc: not a block device\n/usr/bin/docker: stderr Traceback (most recent call last):\n/usr/bin/docker: stderr File \"/usr/sbin/ceph-volume\", line 33, in \n/usr/bin/docker: stderr sys.exit(load_entry_point('ceph-volume==1.0.0', 'console_scripts', 'ceph-volume')())\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/main.py\", line 54, in __init__\n/usr/bin/docker: stderr self.main(self.argv)\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/decorators.py\", line 59, in newfunc\n/usr/bin/docker: stderr return f(*a, **kw)\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/main.py\", line 166, in main\n/usr/bin/docker: stderr terminal.dispatch(self.mapper, subcommand_args)\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/terminal.py\", line 194, in dispatch\n/usr/bin/docker: stderr instance.main()\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/devices/lvm/main.py\", line 46, in main\n/usr/bin/docker: stderr terminal.dispatch(self.mapper, self.argv)\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/terminal.py\", line 192, in dispatch\n/usr/bin/docker: stderr instance = mapper.get(arg)(argv[count:])\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/devices/lvm/batch.py\", line 325, in __init__\n/usr/bin/docker: stderr self.args = parser.parse_args(argv)\n/usr/bin/docker: stderr File 
\"/usr/lib64/python3.9/argparse.py\", line 1825, in parse_args\n/usr/bin/docker: stderr args, argv = self.parse_known_args(args, namespace)\n/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 1858, in parse_known_args\n/usr/bin/docker: stderr namespace, args = self._parse_known_args(args, namespace)\n/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 2049, in _parse_known_args\n/usr/bin/docker: stderr positionals_end_index = consume_positionals(start_index)\n/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 2026, in consume_positionals\n/usr/bin/docker: stderr take_action(action, args)\n/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 1919, in take_action\n/usr/bin/docker: stderr argument_values = self._get_values(action, argument_strings)\n/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 2468, in _get_values\n/usr/bin/docker: stderr value = [self._get_value(action, v) for v in arg_strings]\n/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 2468, in \n/usr/bin/docker: stderr value = [self._get_value(action, v) for v in arg_strings]\n/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 2483, in _get_value\n/usr/bin/docker: stderr result = type_func(arg_string)\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/util/arg_validators.py\", line 124, in __call__\n/usr/bin/docker: stderr super().get_device(dev_path)\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/util/arg_validators.py\", line 32, in get_device\n/usr/bin/docker: stderr self._device = Device(dev_path)\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/util/device.py\", line 140, in __init__\n/usr/bin/docker: stderr self._parse()\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/util/device.py\", line 236, in _parse\n/usr/bin/docker: stderr dev = 
disk.lsblk(self.path)\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/util/disk.py\", line 244, in lsblk\n/usr/bin/docker: stderr result = lsblk_all(device=device,\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/util/disk.py\", line 338, in lsblk_all\n/usr/bin/docker: stderr raise RuntimeError(f\"Error: {err}\")\n/usr/bin/docker: stderr RuntimeError: Error: ['lsblk: /dev/vdc: not a block device']\nTraceback (most recent call last):\n File \"/usr/lib/python3.10/runpy.py\", line 196, in _run_module_as_main\n return _run_code(code, main_globals, None,\n File \"/usr/lib/python3.10/runpy.py\", line 86, in _run_code\n exec(code, run_globals)\n File \"/var/lib/ceph/3b2c2574-f01d-5e08-aaf2-394c5021871d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 11009, in \n File \"/var/lib/ceph/3b2c2574-f01d-5e08-aaf2-394c5021871d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 10997, in main\n File \"/var/lib/ceph/3b2c2574-f01d-5e08-aaf2-394c5021871d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 2593, in _infer_config\n File \"/var/lib/ceph/3b2c2574-f01d-5e08-aaf2-394c5021871d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 2509, in _infer_fsid\n File \"/var/lib/ceph/3b2c2574-f01d-5e08-aaf2-394c5021871d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 2621, in _infer_image\n File \"/var/lib/ceph/3b2c2574-f01d-5e08-aaf2-394c5021871d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 2496, in _validate_fsid\n File \"/var/lib/ceph/3b2c2574-f01d-5e08-aaf2-394c5021871d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 7226, in command_ceph_volume\n File 
\"/var/lib/ceph/3b2c2574-f01d-5e08-aaf2-394c5021871d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 2284, in call_throws\nRuntimeError: Failed command: /usr/bin/docker run --rm --ipc=host --stop-signal=SIGTERM --ulimit nofile=1048576 --net=host --entrypoint /usr/sbin/ceph-volume --privileged --group-add=disk --init -e CONTAINER_IMAGE=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 -e NODE_NAME=instance -e CEPH_VOLUME_OSDSPEC_AFFINITY=None -e CEPH_VOLUME_SKIP_RESTORECON=yes -e CEPH_VOLUME_DEBUG=1 -v /var/run/ceph/3b2c2574-f01d-5e08-aaf2-394c5021871d:/var/run/ceph:z -v /var/log/ceph/3b2c2574-f01d-5e08-aaf2-394c5021871d:/var/log/ceph:z -v /var/lib/ceph/3b2c2574-f01d-5e08-aaf2-394c5021871d/crash:/var/lib/ceph/crash:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /:/rootfs -v /tmp/ceph-tmplntv5iqv:/etc/ceph/ceph.conf:z -v /tmp/ceph-tmp0s8li0i5:/var/lib/ceph/bootstrap-osd/ceph.keyring:z quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 lvm batch --no-auto /dev/vdc --yes --no-systemd", "stderr_lines": ["Error EINVAL: Traceback (most recent call last):", " File \"/usr/share/ceph/mgr/mgr_module.py\", line 1834, in _handle_command", " return self.handle_command(inbuf, cmd)", " File \"/usr/share/ceph/mgr/orchestrator/_interface.py\", line 183, in handle_command", " return dispatch[cmd['prefix']].call(self, cmd, inbuf)", " File \"/usr/share/ceph/mgr/mgr_module.py\", line 475, in call", " return self.func(mgr, **kwargs)", " File \"/usr/share/ceph/mgr/orchestrator/_interface.py\", line 119, in ", " wrapper_copy = lambda *l_args, **l_kwargs: wrapper(*l_args, **l_kwargs) # noqa: E731", " File \"/usr/share/ceph/mgr/orchestrator/_interface.py\", line 108, in wrapper", " return func(*args, **kwargs)", " File \"/usr/share/ceph/mgr/orchestrator/module.py\", line 1306, in _daemon_add_osd", " 
raise_if_exception(completion)", " File \"/usr/share/ceph/mgr/orchestrator/_interface.py\", line 240, in raise_if_exception", " raise e", "RuntimeError: cephadm exited with an error code: 1, stderr:Inferring config /var/lib/ceph/3b2c2574-f01d-5e08-aaf2-394c5021871d/mon.instance/config", "Non-zero exit code 1 from /usr/bin/docker run --rm --ipc=host --stop-signal=SIGTERM --ulimit nofile=1048576 --net=host --entrypoint /usr/sbin/ceph-volume --privileged --group-add=disk --init -e CONTAINER_IMAGE=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 -e NODE_NAME=instance -e CEPH_VOLUME_OSDSPEC_AFFINITY=None -e CEPH_VOLUME_SKIP_RESTORECON=yes -e CEPH_VOLUME_DEBUG=1 -v /var/run/ceph/3b2c2574-f01d-5e08-aaf2-394c5021871d:/var/run/ceph:z -v /var/log/ceph/3b2c2574-f01d-5e08-aaf2-394c5021871d:/var/log/ceph:z -v /var/lib/ceph/3b2c2574-f01d-5e08-aaf2-394c5021871d/crash:/var/lib/ceph/crash:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /:/rootfs -v /tmp/ceph-tmplntv5iqv:/etc/ceph/ceph.conf:z -v /tmp/ceph-tmp0s8li0i5:/var/lib/ceph/bootstrap-osd/ceph.keyring:z quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 lvm batch --no-auto /dev/vdc --yes --no-systemd", "/usr/bin/docker: stderr stderr: lsblk: /dev/vdc: not a block device", "/usr/bin/docker: stderr Traceback (most recent call last):", "/usr/bin/docker: stderr File \"/usr/sbin/ceph-volume\", line 33, in ", "/usr/bin/docker: stderr sys.exit(load_entry_point('ceph-volume==1.0.0', 'console_scripts', 'ceph-volume')())", "/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/main.py\", line 54, in __init__", "/usr/bin/docker: stderr self.main(self.argv)", "/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/decorators.py\", line 59, in newfunc", "/usr/bin/docker: stderr return f(*a, **kw)", "/usr/bin/docker: stderr File 
\"/usr/lib/python3.9/site-packages/ceph_volume/main.py\", line 166, in main", "/usr/bin/docker: stderr terminal.dispatch(self.mapper, subcommand_args)", "/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/terminal.py\", line 194, in dispatch", "/usr/bin/docker: stderr instance.main()", "/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/devices/lvm/main.py\", line 46, in main", "/usr/bin/docker: stderr terminal.dispatch(self.mapper, self.argv)", "/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/terminal.py\", line 192, in dispatch", "/usr/bin/docker: stderr instance = mapper.get(arg)(argv[count:])", "/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/devices/lvm/batch.py\", line 325, in __init__", "/usr/bin/docker: stderr self.args = parser.parse_args(argv)", "/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 1825, in parse_args", "/usr/bin/docker: stderr args, argv = self.parse_known_args(args, namespace)", "/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 1858, in parse_known_args", "/usr/bin/docker: stderr namespace, args = self._parse_known_args(args, namespace)", "/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 2049, in _parse_known_args", "/usr/bin/docker: stderr positionals_end_index = consume_positionals(start_index)", "/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 2026, in consume_positionals", "/usr/bin/docker: stderr take_action(action, args)", "/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 1919, in take_action", "/usr/bin/docker: stderr argument_values = self._get_values(action, argument_strings)", "/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 2468, in _get_values", "/usr/bin/docker: stderr value = [self._get_value(action, v) for v in arg_strings]", "/usr/bin/docker: stderr File 
\"/usr/lib64/python3.9/argparse.py\", line 2468, in <listcomp>", "/usr/bin/docker: stderr value = [self._get_value(action, v) for v in arg_strings]", "/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 2483, in _get_value", "/usr/bin/docker: stderr result = type_func(arg_string)", "/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/util/arg_validators.py\", line 124, in __call__", "/usr/bin/docker: stderr super().get_device(dev_path)", "/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/util/arg_validators.py\", line 32, in get_device", "/usr/bin/docker: stderr self._device = Device(dev_path)", "/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/util/device.py\", line 140, in __init__", "/usr/bin/docker: stderr self._parse()", "/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/util/device.py\", line 236, in _parse", "/usr/bin/docker: stderr dev = disk.lsblk(self.path)", "/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/util/disk.py\", line 244, in lsblk", "/usr/bin/docker: stderr result = lsblk_all(device=device,", "/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/util/disk.py\", line 338, in lsblk_all", "/usr/bin/docker: stderr raise RuntimeError(f\"Error: {err}\")", "/usr/bin/docker: stderr RuntimeError: Error: ['lsblk: /dev/vdc: not a block device']", "Traceback (most recent call last):", " File \"/usr/lib/python3.10/runpy.py\", line 196, in _run_module_as_main", " return _run_code(code, main_globals, None,", " File \"/usr/lib/python3.10/runpy.py\", line 86, in _run_code", " exec(code, run_globals)", " File \"/var/lib/ceph/3b2c2574-f01d-5e08-aaf2-394c5021871d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 11009, in <module>", " File 
\"/var/lib/ceph/3b2c2574-f01d-5e08-aaf2-394c5021871d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 10997, in main", " File \"/var/lib/ceph/3b2c2574-f01d-5e08-aaf2-394c5021871d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 2593, in _infer_config", " File \"/var/lib/ceph/3b2c2574-f01d-5e08-aaf2-394c5021871d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 2509, in _infer_fsid", " File \"/var/lib/ceph/3b2c2574-f01d-5e08-aaf2-394c5021871d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 2621, in _infer_image", " File \"/var/lib/ceph/3b2c2574-f01d-5e08-aaf2-394c5021871d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 2496, in _validate_fsid", " File \"/var/lib/ceph/3b2c2574-f01d-5e08-aaf2-394c5021871d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 7226, in command_ceph_volume", " File \"/var/lib/ceph/3b2c2574-f01d-5e08-aaf2-394c5021871d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 2284, in call_throws", "RuntimeError: Failed command: /usr/bin/docker run --rm --ipc=host --stop-signal=SIGTERM --ulimit nofile=1048576 --net=host --entrypoint /usr/sbin/ceph-volume --privileged --group-add=disk --init -e CONTAINER_IMAGE=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 -e NODE_NAME=instance -e CEPH_VOLUME_OSDSPEC_AFFINITY=None -e CEPH_VOLUME_SKIP_RESTORECON=yes -e CEPH_VOLUME_DEBUG=1 -v /var/run/ceph/3b2c2574-f01d-5e08-aaf2-394c5021871d:/var/run/ceph:z -v /var/log/ceph/3b2c2574-f01d-5e08-aaf2-394c5021871d:/var/log/ceph:z -v /var/lib/ceph/3b2c2574-f01d-5e08-aaf2-394c5021871d/crash:/var/lib/ceph/crash:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /:/rootfs -v 
/tmp/ceph-tmplntv5iqv:/etc/ceph/ceph.conf:z -v /tmp/ceph-tmp0s8li0i5:/var/lib/ceph/bootstrap-osd/ceph.keyring:z quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 lvm batch --no-auto /dev/vdc --yes --no-systemd"], "stdout": "", "stdout_lines": []} 2026-04-07 08:22:24.190142 | instance | failed: [instance] (item=/dev/vdd) => {"ansible_loop_var": "item", "changed": false, "cmd": ["cephadm", "shell", "--fsid", "3b2c2574-f01d-5e08-aaf2-394c5021871d", "--config", "/var/lib/ceph/3b2c2574-f01d-5e08-aaf2-394c5021871d/mon.instance/config", "--", "ceph", "orch", "daemon", "add", "osd", "instance:/dev/vdd"], "delta": "0:00:07.669463", "end": "2026-04-07 08:22:24.160002", "item": "/dev/vdd", "msg": "non-zero return code", "rc": 22, "start": "2026-04-07 08:22:16.490539", "stderr": "Error EINVAL: Traceback (most recent call last):\n File \"/usr/share/ceph/mgr/mgr_module.py\", line 1834, in _handle_command\n return self.handle_command(inbuf, cmd)\n File \"/usr/share/ceph/mgr/orchestrator/_interface.py\", line 183, in handle_command\n return dispatch[cmd['prefix']].call(self, cmd, inbuf)\n File \"/usr/share/ceph/mgr/mgr_module.py\", line 475, in call\n return self.func(mgr, **kwargs)\n File \"/usr/share/ceph/mgr/orchestrator/_interface.py\", line 119, in \n wrapper_copy = lambda *l_args, **l_kwargs: wrapper(*l_args, **l_kwargs) # noqa: E731\n File \"/usr/share/ceph/mgr/orchestrator/_interface.py\", line 108, in wrapper\n return func(*args, **kwargs)\n File \"/usr/share/ceph/mgr/orchestrator/module.py\", line 1306, in _daemon_add_osd\n raise_if_exception(completion)\n File \"/usr/share/ceph/mgr/orchestrator/_interface.py\", line 240, in raise_if_exception\n raise e\nRuntimeError: cephadm exited with an error code: 1, stderr:Inferring config /var/lib/ceph/3b2c2574-f01d-5e08-aaf2-394c5021871d/mon.instance/config\nNon-zero exit code 1 from /usr/bin/docker run --rm --ipc=host --stop-signal=SIGTERM --ulimit nofile=1048576 --net=host --entrypoint 
/usr/sbin/ceph-volume --privileged --group-add=disk --init -e CONTAINER_IMAGE=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 -e NODE_NAME=instance -e CEPH_VOLUME_OSDSPEC_AFFINITY=None -e CEPH_VOLUME_SKIP_RESTORECON=yes -e CEPH_VOLUME_DEBUG=1 -v /var/run/ceph/3b2c2574-f01d-5e08-aaf2-394c5021871d:/var/run/ceph:z -v /var/log/ceph/3b2c2574-f01d-5e08-aaf2-394c5021871d:/var/log/ceph:z -v /var/lib/ceph/3b2c2574-f01d-5e08-aaf2-394c5021871d/crash:/var/lib/ceph/crash:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /:/rootfs -v /tmp/ceph-tmpg_4fou1p:/etc/ceph/ceph.conf:z -v /tmp/ceph-tmpl98tgjey:/var/lib/ceph/bootstrap-osd/ceph.keyring:z quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 lvm batch --no-auto /dev/vdd --yes --no-systemd\n/usr/bin/docker: stderr stderr: lsblk: /dev/vdd: not a block device\n/usr/bin/docker: stderr Traceback (most recent call last):\n/usr/bin/docker: stderr File \"/usr/sbin/ceph-volume\", line 33, in <module>\n/usr/bin/docker: stderr sys.exit(load_entry_point('ceph-volume==1.0.0', 'console_scripts', 'ceph-volume')())\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/main.py\", line 54, in __init__\n/usr/bin/docker: stderr self.main(self.argv)\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/decorators.py\", line 59, in newfunc\n/usr/bin/docker: stderr return f(*a, **kw)\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/main.py\", line 166, in main\n/usr/bin/docker: stderr terminal.dispatch(self.mapper, subcommand_args)\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/terminal.py\", line 194, in dispatch\n/usr/bin/docker: stderr instance.main()\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/devices/lvm/main.py\", line 46, in main\n/usr/bin/docker: stderr 
terminal.dispatch(self.mapper, self.argv)\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/terminal.py\", line 192, in dispatch\n/usr/bin/docker: stderr instance = mapper.get(arg)(argv[count:])\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/devices/lvm/batch.py\", line 325, in __init__\n/usr/bin/docker: stderr self.args = parser.parse_args(argv)\n/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 1825, in parse_args\n/usr/bin/docker: stderr args, argv = self.parse_known_args(args, namespace)\n/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 1858, in parse_known_args\n/usr/bin/docker: stderr namespace, args = self._parse_known_args(args, namespace)\n/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 2049, in _parse_known_args\n/usr/bin/docker: stderr positionals_end_index = consume_positionals(start_index)\n/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 2026, in consume_positionals\n/usr/bin/docker: stderr take_action(action, args)\n/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 1919, in take_action\n/usr/bin/docker: stderr argument_values = self._get_values(action, argument_strings)\n/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 2468, in _get_values\n/usr/bin/docker: stderr value = [self._get_value(action, v) for v in arg_strings]\n/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 2468, in <listcomp>\n/usr/bin/docker: stderr value = [self._get_value(action, v) for v in arg_strings]\n/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 2483, in _get_value\n/usr/bin/docker: stderr result = type_func(arg_string)\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/util/arg_validators.py\", line 124, in __call__\n/usr/bin/docker: stderr super().get_device(dev_path)\n/usr/bin/docker: stderr File 
\"/usr/lib/python3.9/site-packages/ceph_volume/util/arg_validators.py\", line 32, in get_device\n/usr/bin/docker: stderr self._device = Device(dev_path)\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/util/device.py\", line 140, in __init__\n/usr/bin/docker: stderr self._parse()\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/util/device.py\", line 236, in _parse\n/usr/bin/docker: stderr dev = disk.lsblk(self.path)\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/util/disk.py\", line 244, in lsblk\n/usr/bin/docker: stderr result = lsblk_all(device=device,\n/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/util/disk.py\", line 338, in lsblk_all\n/usr/bin/docker: stderr raise RuntimeError(f\"Error: {err}\")\n/usr/bin/docker: stderr RuntimeError: Error: ['lsblk: /dev/vdd: not a block device']\nTraceback (most recent call last):\n File \"/usr/lib/python3.10/runpy.py\", line 196, in _run_module_as_main\n return _run_code(code, main_globals, None,\n File \"/usr/lib/python3.10/runpy.py\", line 86, in _run_code\n exec(code, run_globals)\n File \"/var/lib/ceph/3b2c2574-f01d-5e08-aaf2-394c5021871d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 11009, in <module>\n File \"/var/lib/ceph/3b2c2574-f01d-5e08-aaf2-394c5021871d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 10997, in main\n File \"/var/lib/ceph/3b2c2574-f01d-5e08-aaf2-394c5021871d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 2593, in _infer_config\n File \"/var/lib/ceph/3b2c2574-f01d-5e08-aaf2-394c5021871d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 2509, in _infer_fsid\n File \"/var/lib/ceph/3b2c2574-f01d-5e08-aaf2-394c5021871d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 2621, 
in _infer_image\n File \"/var/lib/ceph/3b2c2574-f01d-5e08-aaf2-394c5021871d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 2496, in _validate_fsid\n File \"/var/lib/ceph/3b2c2574-f01d-5e08-aaf2-394c5021871d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 7226, in command_ceph_volume\n File \"/var/lib/ceph/3b2c2574-f01d-5e08-aaf2-394c5021871d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 2284, in call_throws\nRuntimeError: Failed command: /usr/bin/docker run --rm --ipc=host --stop-signal=SIGTERM --ulimit nofile=1048576 --net=host --entrypoint /usr/sbin/ceph-volume --privileged --group-add=disk --init -e CONTAINER_IMAGE=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 -e NODE_NAME=instance -e CEPH_VOLUME_OSDSPEC_AFFINITY=None -e CEPH_VOLUME_SKIP_RESTORECON=yes -e CEPH_VOLUME_DEBUG=1 -v /var/run/ceph/3b2c2574-f01d-5e08-aaf2-394c5021871d:/var/run/ceph:z -v /var/log/ceph/3b2c2574-f01d-5e08-aaf2-394c5021871d:/var/log/ceph:z -v /var/lib/ceph/3b2c2574-f01d-5e08-aaf2-394c5021871d/crash:/var/lib/ceph/crash:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /:/rootfs -v /tmp/ceph-tmpg_4fou1p:/etc/ceph/ceph.conf:z -v /tmp/ceph-tmpl98tgjey:/var/lib/ceph/bootstrap-osd/ceph.keyring:z quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 lvm batch --no-auto /dev/vdd --yes --no-systemd", "stderr_lines": ["Error EINVAL: Traceback (most recent call last):", " File \"/usr/share/ceph/mgr/mgr_module.py\", line 1834, in _handle_command", " return self.handle_command(inbuf, cmd)", " File \"/usr/share/ceph/mgr/orchestrator/_interface.py\", line 183, in handle_command", " return dispatch[cmd['prefix']].call(self, cmd, inbuf)", " File \"/usr/share/ceph/mgr/mgr_module.py\", line 475, in call", " return self.func(mgr, 
**kwargs)", " File \"/usr/share/ceph/mgr/orchestrator/_interface.py\", line 119, in ", " wrapper_copy = lambda *l_args, **l_kwargs: wrapper(*l_args, **l_kwargs) # noqa: E731", " File \"/usr/share/ceph/mgr/orchestrator/_interface.py\", line 108, in wrapper", " return func(*args, **kwargs)", " File \"/usr/share/ceph/mgr/orchestrator/module.py\", line 1306, in _daemon_add_osd", " raise_if_exception(completion)", " File \"/usr/share/ceph/mgr/orchestrator/_interface.py\", line 240, in raise_if_exception", " raise e", "RuntimeError: cephadm exited with an error code: 1, stderr:Inferring config /var/lib/ceph/3b2c2574-f01d-5e08-aaf2-394c5021871d/mon.instance/config", "Non-zero exit code 1 from /usr/bin/docker run --rm --ipc=host --stop-signal=SIGTERM --ulimit nofile=1048576 --net=host --entrypoint /usr/sbin/ceph-volume --privileged --group-add=disk --init -e CONTAINER_IMAGE=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 -e NODE_NAME=instance -e CEPH_VOLUME_OSDSPEC_AFFINITY=None -e CEPH_VOLUME_SKIP_RESTORECON=yes -e CEPH_VOLUME_DEBUG=1 -v /var/run/ceph/3b2c2574-f01d-5e08-aaf2-394c5021871d:/var/run/ceph:z -v /var/log/ceph/3b2c2574-f01d-5e08-aaf2-394c5021871d:/var/log/ceph:z -v /var/lib/ceph/3b2c2574-f01d-5e08-aaf2-394c5021871d/crash:/var/lib/ceph/crash:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /:/rootfs -v /tmp/ceph-tmpg_4fou1p:/etc/ceph/ceph.conf:z -v /tmp/ceph-tmpl98tgjey:/var/lib/ceph/bootstrap-osd/ceph.keyring:z quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 lvm batch --no-auto /dev/vdd --yes --no-systemd", "/usr/bin/docker: stderr stderr: lsblk: /dev/vdd: not a block device", "/usr/bin/docker: stderr Traceback (most recent call last):", "/usr/bin/docker: stderr File \"/usr/sbin/ceph-volume\", line 33, in <module>", "/usr/bin/docker: stderr sys.exit(load_entry_point('ceph-volume==1.0.0', 'console_scripts', 'ceph-volume')())", 
"/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/main.py\", line 54, in __init__", "/usr/bin/docker: stderr self.main(self.argv)", "/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/decorators.py\", line 59, in newfunc", "/usr/bin/docker: stderr return f(*a, **kw)", "/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/main.py\", line 166, in main", "/usr/bin/docker: stderr terminal.dispatch(self.mapper, subcommand_args)", "/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/terminal.py\", line 194, in dispatch", "/usr/bin/docker: stderr instance.main()", "/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/devices/lvm/main.py\", line 46, in main", "/usr/bin/docker: stderr terminal.dispatch(self.mapper, self.argv)", "/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/terminal.py\", line 192, in dispatch", "/usr/bin/docker: stderr instance = mapper.get(arg)(argv[count:])", "/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/devices/lvm/batch.py\", line 325, in __init__", "/usr/bin/docker: stderr self.args = parser.parse_args(argv)", "/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 1825, in parse_args", "/usr/bin/docker: stderr args, argv = self.parse_known_args(args, namespace)", "/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 1858, in parse_known_args", "/usr/bin/docker: stderr namespace, args = self._parse_known_args(args, namespace)", "/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 2049, in _parse_known_args", "/usr/bin/docker: stderr positionals_end_index = consume_positionals(start_index)", "/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 2026, in consume_positionals", "/usr/bin/docker: stderr take_action(action, args)", "/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 
1919, in take_action", "/usr/bin/docker: stderr argument_values = self._get_values(action, argument_strings)", "/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 2468, in _get_values", "/usr/bin/docker: stderr value = [self._get_value(action, v) for v in arg_strings]", "/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 2468, in <listcomp>", "/usr/bin/docker: stderr value = [self._get_value(action, v) for v in arg_strings]", "/usr/bin/docker: stderr File \"/usr/lib64/python3.9/argparse.py\", line 2483, in _get_value", "/usr/bin/docker: stderr result = type_func(arg_string)", "/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/util/arg_validators.py\", line 124, in __call__", "/usr/bin/docker: stderr super().get_device(dev_path)", "/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/util/arg_validators.py\", line 32, in get_device", "/usr/bin/docker: stderr self._device = Device(dev_path)", "/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/util/device.py\", line 140, in __init__", "/usr/bin/docker: stderr self._parse()", "/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/util/device.py\", line 236, in _parse", "/usr/bin/docker: stderr dev = disk.lsblk(self.path)", "/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/util/disk.py\", line 244, in lsblk", "/usr/bin/docker: stderr result = lsblk_all(device=device,", "/usr/bin/docker: stderr File \"/usr/lib/python3.9/site-packages/ceph_volume/util/disk.py\", line 338, in lsblk_all", "/usr/bin/docker: stderr raise RuntimeError(f\"Error: {err}\")", "/usr/bin/docker: stderr RuntimeError: Error: ['lsblk: /dev/vdd: not a block device']", "Traceback (most recent call last):", " File \"/usr/lib/python3.10/runpy.py\", line 196, in _run_module_as_main", " return _run_code(code, main_globals, None,", " File \"/usr/lib/python3.10/runpy.py\", line 86, in _run_code", " 
exec(code, run_globals)", " File \"/var/lib/ceph/3b2c2574-f01d-5e08-aaf2-394c5021871d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 11009, in <module>", " File \"/var/lib/ceph/3b2c2574-f01d-5e08-aaf2-394c5021871d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 10997, in main", " File \"/var/lib/ceph/3b2c2574-f01d-5e08-aaf2-394c5021871d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 2593, in _infer_config", " File \"/var/lib/ceph/3b2c2574-f01d-5e08-aaf2-394c5021871d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 2509, in _infer_fsid", " File \"/var/lib/ceph/3b2c2574-f01d-5e08-aaf2-394c5021871d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 2621, in _infer_image", " File \"/var/lib/ceph/3b2c2574-f01d-5e08-aaf2-394c5021871d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 2496, in _validate_fsid", " File \"/var/lib/ceph/3b2c2574-f01d-5e08-aaf2-394c5021871d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 7226, in command_ceph_volume", " File \"/var/lib/ceph/3b2c2574-f01d-5e08-aaf2-394c5021871d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d/__main__.py\", line 2284, in call_throws", "RuntimeError: Failed command: /usr/bin/docker run --rm --ipc=host --stop-signal=SIGTERM --ulimit nofile=1048576 --net=host --entrypoint /usr/sbin/ceph-volume --privileged --group-add=disk --init -e CONTAINER_IMAGE=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 -e NODE_NAME=instance -e CEPH_VOLUME_OSDSPEC_AFFINITY=None -e CEPH_VOLUME_SKIP_RESTORECON=yes -e CEPH_VOLUME_DEBUG=1 -v /var/run/ceph/3b2c2574-f01d-5e08-aaf2-394c5021871d:/var/run/ceph:z -v /var/log/ceph/3b2c2574-f01d-5e08-aaf2-394c5021871d:/var/log/ceph:z -v 
/var/lib/ceph/3b2c2574-f01d-5e08-aaf2-394c5021871d/crash:/var/lib/ceph/crash:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /:/rootfs -v /tmp/ceph-tmpg_4fou1p:/etc/ceph/ceph.conf:z -v /tmp/ceph-tmpl98tgjey:/var/lib/ceph/bootstrap-osd/ceph.keyring:z quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 lvm batch --no-auto /dev/vdd --yes --no-systemd"], "stdout": "", "stdout_lines": []} 2026-04-07 08:22:24.198138 | instance | 2026-04-07 08:22:24.199083 | instance | PLAY RECAP ********************************************************************* 2026-04-07 08:22:24.199233 | instance | instance : ok=105 changed=26 unreachable=0 failed=1 skipped=26 rescued=0 ignored=0 2026-04-07 08:22:24.202905 | instance | 2026-04-07 08:22:24.202928 | instance | Tuesday 07 April 2026 08:22:24 +0000 (0:00:23.617) 0:04:25.673 ********* 2026-04-07 08:22:24.202934 | instance | =============================================================================== 2026-04-07 08:22:24.202938 | instance | vexxhost.ceph.mon : Run Bootstrap coomand ----------------------------- 127.71s 2026-04-07 08:22:24.202942 | instance | vexxhost.ceph.osd : Install OSDs --------------------------------------- 23.62s 2026-04-07 08:22:24.202946 | instance | vexxhost.ceph.mon : Validate monitor exist ----------------------------- 11.28s 2026-04-07 08:22:24.202950 | instance | vexxhost.ceph.osd : Get `ceph-volume lvm list` status ------------------ 10.74s 2026-04-07 08:22:24.202953 | instance | vexxhost.ceph.cephadm : Install packages -------------------------------- 6.66s 2026-04-07 08:22:24.202957 | instance | vexxhost.containers.containerd : Install AppArmor packages -------------- 5.86s 2026-04-07 08:22:24.202961 | instance | vexxhost.ceph.osd : Get `cephadm ls` status ----------------------------- 5.37s 2026-04-07 08:22:24.202965 | instance | vexxhost.ceph.osd : Ensure all OSDs are non-legacy ---------------------- 5.35s 
2026-04-07 08:22:24.202976 | instance | vexxhost.containers.download_artifact : Extract archive ----------------- 4.35s 2026-04-07 08:22:24.202980 | instance | vexxhost.containers.download_artifact : Extract archive ----------------- 3.03s 2026-04-07 08:22:24.202984 | instance | vexxhost.containers.download_artifact : Extract archive ----------------- 2.86s 2026-04-07 08:22:24.202987 | instance | vexxhost.ceph.cephadm_host : Add new host to Ceph ----------------------- 2.14s 2026-04-07 08:22:24.202991 | instance | vexxhost.ceph.cephadm_host : Add new host to Ceph ----------------------- 2.03s 2026-04-07 08:22:24.202995 | instance | vexxhost.ceph.cephadm_host : Add new host to Ceph ----------------------- 1.96s 2026-04-07 08:22:24.202998 | instance | vexxhost.containers.download_artifact : Extract archive ----------------- 1.93s 2026-04-07 08:22:24.203002 | instance | vexxhost.ceph.mgr : Enable the Ceph Manager prometheus module ----------- 1.87s 2026-04-07 08:22:24.203006 | instance | vexxhost.ceph.mgr : Configure "mgr" label for managers ------------------ 1.71s 2026-04-07 08:22:24.203112 | instance | vexxhost.containers.containerd : Reload systemd ------------------------- 1.69s 2026-04-07 08:22:24.203260 | instance | vexxhost.ceph.mon : Configure "mon" label for monitors ------------------ 1.66s 2026-04-07 08:22:24.203406 | instance | vexxhost.ceph.cephadm_host : Get public SSH key for "cephadm" user ------ 1.64s 2026-04-07 08:22:24.397365 | instance | CRITICAL Ansible return code was 2, command was: ansible-playbook --inventory /home/zuul/.ansible/tmp/molecule.v9Wo.aio/inventory --skip-tags molecule-notest,notest /home/zuul/src/github.com/vexxhost/atmosphere/molecule/aio/converge.yml 2026-04-07 08:22:24.397666 | instance | ERROR [aio > converge] Executed: Failed 2026-04-07 08:22:24.397890 | instance | ERROR Ansible return code was 2, command was: ansible-playbook --inventory /home/zuul/.ansible/tmp/molecule.v9Wo.aio/inventory --skip-tags molecule-notest,notest 
/home/zuul/src/github.com/vexxhost/atmosphere/molecule/aio/converge.yml 2026-04-07 08:22:24.695523 | instance | ERROR 2026-04-07 08:22:24.695789 | instance | { 2026-04-07 08:22:24.695879 | instance | "delta": "0:06:35.006202", 2026-04-07 08:22:24.695917 | instance | "end": "2026-04-07 08:22:24.498264", 2026-04-07 08:22:24.695948 | instance | "msg": "non-zero return code", 2026-04-07 08:22:24.695978 | instance | "rc": 2, 2026-04-07 08:22:24.696012 | instance | "start": "2026-04-07 08:15:49.492062" 2026-04-07 08:22:24.696042 | instance | } failure 2026-04-07 08:22:24.707419 | 2026-04-07 08:22:24.707468 | PLAY RECAP 2026-04-07 08:22:24.707511 | instance | ok: 2 changed: 2 unreachable: 0 failed: 1 skipped: 0 rescued: 0 ignored: 0 2026-04-07 08:22:24.707532 | 2026-04-07 08:22:24.815042 | RUN END RESULT_NORMAL: [untrusted : github.com/vexxhost/zuul-jobs/playbooks/molecule/run.yaml@main] 2026-04-07 08:22:24.824886 | POST-RUN START: [untrusted : github.com/vexxhost/atmosphere/test-playbooks/molecule/post.yml@main] 2026-04-07 08:22:25.469363 | 2026-04-07 08:22:25.469524 | PLAY [all] 2026-04-07 08:22:25.484150 | 2026-04-07 08:22:25.484255 | TASK [gather-host-logs : creating directory for system status] 2026-04-07 08:22:25.835774 | instance | changed 2026-04-07 08:22:25.841062 | 2026-04-07 08:22:25.841137 | TASK [gather-host-logs : Get logs for each host] 2026-04-07 08:22:26.160911 | instance | + systemd-cgls --full --all --no-pager 2026-04-07 08:22:26.174281 | instance | + ip addr 2026-04-07 08:22:26.178123 | instance | + ip route 2026-04-07 08:22:26.180676 | instance | + lsblk 2026-04-07 08:22:26.184225 | instance | + mount 2026-04-07 08:22:26.187904 | instance | + docker images 2026-04-07 08:22:26.205221 | instance | + brctl show 2026-04-07 08:22:26.205787 | instance | /bin/bash: line 8: brctl: command not found 2026-04-07 08:22:26.206213 | instance | + ps aux --sort=-%mem 2026-04-07 08:22:26.222902 | instance | + dpkg -l 2026-04-07 08:22:26.230226 | instance | + 
CONTAINERS=($(docker ps -a --format '{{ .Names }}' --filter label=zuul)) 2026-04-07 08:22:26.230906 | instance | ++ docker ps -a --format '{{ .Names }}' --filter label=zuul 2026-04-07 08:22:26.249015 | instance | + '[' '!' -z '' ']' 2026-04-07 08:22:26.379636 | instance | ok: Runtime: 0:00:00.094327 2026-04-07 08:22:26.386992 | 2026-04-07 08:22:26.387078 | TASK [gather-host-logs : Downloads logs to executor] 2026-04-07 08:22:27.005186 | instance | changed: 2026-04-07 08:22:27.005349 | instance | created directory /var/lib/zuul/builds/77519decd7264f12b5a98a3e1f2f54a0/work/logs/instance 2026-04-07 08:22:27.005378 | instance | cd+++++++++ system/ 2026-04-07 08:22:27.005401 | instance | >f+++++++++ system/brctl-show.txt 2026-04-07 08:22:27.005422 | instance | >f+++++++++ system/docker-images.txt 2026-04-07 08:22:27.005442 | instance | >f+++++++++ system/ip-addr.txt 2026-04-07 08:22:27.005464 | instance | >f+++++++++ system/ip-route.txt 2026-04-07 08:22:27.005485 | instance | >f+++++++++ system/lsblk.txt 2026-04-07 08:22:27.005505 | instance | >f+++++++++ system/mount.txt 2026-04-07 08:22:27.005525 | instance | >f+++++++++ system/packages.txt 2026-04-07 08:22:27.005544 | instance | >f+++++++++ system/ps.txt 2026-04-07 08:22:27.005565 | instance | >f+++++++++ system/systemd-cgls.txt 2026-04-07 08:22:27.014389 | 2026-04-07 08:22:27.014454 | LOOP [helm-release-status : creating directory for helm release status] 2026-04-07 08:22:27.212511 | instance | changed: "values" 2026-04-07 08:22:27.390805 | instance | changed: "releases" 2026-04-07 08:22:27.409485 | 2026-04-07 08:22:27.409655 | TASK [helm-release-status : Gather get release status for helm charts] 2026-04-07 08:22:27.622215 | instance | /bin/bash: line 3: kubectl: command not found 2026-04-07 08:22:27.949272 | instance | ok: Runtime: 0:00:00.007499 2026-04-07 08:22:27.954794 | 2026-04-07 08:22:27.954865 | TASK [helm-release-status : Downloads logs to executor] 2026-04-07 08:22:28.468783 | instance | changed: 
2026-04-07 08:22:28.469047 | instance | cd+++++++++ helm/
2026-04-07 08:22:28.469128 | instance | cd+++++++++ helm/releases/
2026-04-07 08:22:28.469179 | instance | cd+++++++++ helm/values/
2026-04-07 08:22:28.478448 |
2026-04-07 08:22:28.478519 | TASK [describe-kubernetes-objects : creating directory for cluster scoped objects]
2026-04-07 08:22:28.696407 | instance | changed
2026-04-07 08:22:28.728689 |
2026-04-07 08:22:28.728816 | TASK [describe-kubernetes-objects : Gathering descriptions for cluster scoped objects]
2026-04-07 08:22:28.942094 | instance | xargs: warning: options --max-args and --replace/-I/-i are mutually exclusive, ignoring previous --max-args value
2026-04-07 08:22:28.942530 | instance | xargs: warning: options --max-args and --replace/-I/-i are mutually exclusive, ignoring previous --max-args value
2026-04-07 08:22:28.948344 | instance | environment: line 1: kubectl: command not found
2026-04-07 08:22:28.949722 | instance | xargs: warning: options --max-lines and --replace/-I/-i are mutually exclusive, ignoring previous --max-lines value
2026-04-07 08:22:28.950586 | instance | environment: line 1: kubectl: command not found
2026-04-07 08:22:28.952265 | instance | xargs: warning: options --max-lines and --replace/-I/-i are mutually exclusive, ignoring previous --max-lines value
2026-04-07 08:22:28.953587 | instance | environment: line 1: kubectl: command not found
2026-04-07 08:22:28.955124 | instance | xargs: warning: options --max-lines and --replace/-I/-i are mutually exclusive, ignoring previous --max-lines value
2026-04-07 08:22:28.956332 | instance | environment: line 1: kubectl: command not found
2026-04-07 08:22:28.957618 | instance | xargs: warning: options --max-lines and --replace/-I/-i are mutually exclusive, ignoring previous --max-lines value
2026-04-07 08:22:28.957951 | instance | environment: line 1: kubectl: command not found
2026-04-07 08:22:28.959469 | instance | xargs: warning: options --max-lines and --replace/-I/-i are mutually exclusive, ignoring previous --max-lines value
2026-04-07 08:22:29.265324 | instance | ok: Runtime: 0:00:00.028003
2026-04-07 08:22:29.270359 |
2026-04-07 08:22:29.270448 | TASK [describe-kubernetes-objects : creating directory for namespace scoped objects]
2026-04-07 08:22:29.466212 | instance | changed
2026-04-07 08:22:29.472077 |
2026-04-07 08:22:29.472144 | TASK [describe-kubernetes-objects : Gathering descriptions for namespace scoped objects]
2026-04-07 08:22:29.674365 | instance | environment: line 5: kubectl: command not found
2026-04-07 08:22:29.675126 | instance | xargs: warning: options --max-args and --replace/-I/-i are mutually exclusive, ignoring previous --max-args value
2026-04-07 08:22:29.675264 | instance | xargs: warning: options --max-args and --replace/-I/-i are mutually exclusive, ignoring previous --max-args value
2026-04-07 08:22:29.675419 | instance | xargs: warning: options --max-args and --replace/-I/-i are mutually exclusive, ignoring previous --max-args value
2026-04-07 08:22:30.008940 | instance | ok: Runtime: 0:00:00.008886
2026-04-07 08:22:30.014672 |
2026-04-07 08:22:30.014732 | TASK [describe-kubernetes-objects : Downloads logs to executor]
2026-04-07 08:22:30.516106 | instance | changed:
2026-04-07 08:22:30.516297 | instance | cd+++++++++ objects/
2026-04-07 08:22:30.516336 | instance | cd+++++++++ objects/cluster/
2026-04-07 08:22:30.516368 | instance | cd+++++++++ objects/namespaced/
2026-04-07 08:22:30.528907 |
2026-04-07 08:22:30.528972 | TASK [gather-pod-logs : creating directory for pod logs]
2026-04-07 08:22:30.724648 | instance | changed
2026-04-07 08:22:30.730547 |
2026-04-07 08:22:30.730623 | TASK [gather-pod-logs : creating directory for failed pod logs]
2026-04-07 08:22:30.927728 | instance | changed
2026-04-07 08:22:30.933763 |
2026-04-07 08:22:30.933877 | TASK [gather-pod-logs : retrieve all kubernetes logs, current and previous (if they exist)]
2026-04-07 08:22:31.166619 | instance | environment: line 3: kubectl: command not found
2026-04-07 08:22:31.472359 | instance | ok: Runtime: 0:00:00.008426
2026-04-07 08:22:31.479305 |
2026-04-07 08:22:31.479390 | TASK [gather-pod-logs : Downloads pod logs to executor]
2026-04-07 08:22:31.980605 | instance | changed:
2026-04-07 08:22:31.980804 | instance | cd+++++++++ pod-logs/
2026-04-07 08:22:31.980844 | instance | cd+++++++++ pod-logs/failed-pods/
2026-04-07 08:22:31.991985 |
2026-04-07 08:22:31.992045 | TASK [gather-prom-metrics : creating directory for helm release descriptions]
2026-04-07 08:22:32.209907 | instance | changed
2026-04-07 08:22:32.216733 |
2026-04-07 08:22:32.216857 | TASK [gather-prom-metrics : Get metrics from exporter services in all namespaces]
2026-04-07 08:22:32.423275 | instance | /bin/bash: line 2: kubectl: command not found
2026-04-07 08:22:32.753398 | instance | ok: Runtime: 0:00:00.035058
2026-04-07 08:22:32.759668 |
2026-04-07 08:22:32.759731 | TASK [gather-prom-metrics : Get ceph metrics from ceph-mgr]
2026-04-07 08:22:32.970375 | instance | /bin/bash: line 2: kubectl: command not found
2026-04-07 08:22:32.998457 | instance | ceph-mgr endpoints:
2026-04-07 08:22:33.294150 | instance | ok: Runtime: 0:00:00.035583
2026-04-07 08:22:33.301351 |
2026-04-07 08:22:33.301486 | TASK [gather-prom-metrics : Get metrics from fluentd pods]
2026-04-07 08:22:33.511683 | instance | /bin/bash: line 4: kubectl: command not found
2026-04-07 08:22:33.843478 | instance | ok: Runtime: 0:00:00.034666
2026-04-07 08:22:33.849937 |
2026-04-07 08:22:33.850043 | TASK [gather-prom-metrics : Downloads logs to executor]
2026-04-07 08:22:34.322271 | instance | changed: cd+++++++++ prometheus/
2026-04-07 08:22:34.331175 |
2026-04-07 08:22:34.331236 | TASK [gather-selenium-data : creating directory for helm release descriptions]
2026-04-07 08:22:34.559251 | instance | changed
2026-04-07 08:22:34.564383 |
2026-04-07 08:22:34.564451 | TASK [gather-selenium-data : Get selenium data]
2026-04-07 08:22:34.766050 | instance | + cp '/tmp/artifacts/*' /tmp/logs/selenium/.
2026-04-07 08:22:34.767782 | instance | cp: cannot stat '/tmp/artifacts/*': No such file or directory
2026-04-07 08:22:35.100517 | instance | ERROR
2026-04-07 08:22:35.100822 | instance | {
2026-04-07 08:22:35.100924 | instance | "delta": "0:00:00.007390",
2026-04-07 08:22:35.101080 | instance | "end": "2026-04-07 08:22:34.768231",
2026-04-07 08:22:35.101145 | instance | "msg": "non-zero return code",
2026-04-07 08:22:35.101190 | instance | "rc": 1,
2026-04-07 08:22:35.101230 | instance | "start": "2026-04-07 08:22:34.760841"
2026-04-07 08:22:35.101270 | instance | }
2026-04-07 08:22:35.101327 | instance | ERROR: Ignoring Errors
2026-04-07 08:22:35.112172 |
2026-04-07 08:22:35.112379 | TASK [gather-selenium-data : Downloads logs to executor]
2026-04-07 08:22:35.577323 | instance | changed: cd+++++++++ selenium/
2026-04-07 08:22:35.584580 |
2026-04-07 08:22:35.584628 | PLAY RECAP
2026-04-07 08:22:35.584670 | instance | ok: 23 changed: 23 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 1
2026-04-07 08:22:35.584693 |
2026-04-07 08:22:35.686014 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/vexxhost/atmosphere/test-playbooks/molecule/post.yml@main]
2026-04-07 08:22:35.698521 | POST-RUN START: [trusted : github.com/vexxhost/zuul-config/playbooks/base/post.yaml@main]
2026-04-07 08:22:36.265284 |
2026-04-07 08:22:36.265392 | PLAY [all]
2026-04-07 08:22:36.275888 |
2026-04-07 08:22:36.275960 | TASK [fetch-output : Set log path for multiple nodes]
2026-04-07 08:22:36.320234 | instance | skipping: Conditional result was False
2026-04-07 08:22:36.330543 |
2026-04-07 08:22:36.330618 | TASK [fetch-output : Set log path for single node]
2026-04-07 08:22:36.383533 | instance | ok
2026-04-07 08:22:36.389877 |
2026-04-07 08:22:36.389968 | LOOP [fetch-output : Ensure local output dirs]
2026-04-07 08:22:36.753880 | instance -> localhost | ok: "/var/lib/zuul/builds/77519decd7264f12b5a98a3e1f2f54a0/work/logs"
2026-04-07 08:22:36.982542 | instance -> localhost | changed: "/var/lib/zuul/builds/77519decd7264f12b5a98a3e1f2f54a0/work/artifacts"
2026-04-07 08:22:37.184307 | instance -> localhost | changed: "/var/lib/zuul/builds/77519decd7264f12b5a98a3e1f2f54a0/work/docs"
2026-04-07 08:22:37.207377 |
2026-04-07 08:22:37.207528 | LOOP [fetch-output : Collect logs, artifacts and docs]
2026-04-07 08:22:37.797557 | instance | changed: .d..t...... ./
2026-04-07 08:22:37.797879 | instance | changed: All items complete
2026-04-07 08:22:37.797946 |
2026-04-07 08:22:38.232988 | instance | changed: .d..t...... ./
2026-04-07 08:22:38.661100 | instance | changed: .d..t...... ./
2026-04-07 08:22:38.683749 |
2026-04-07 08:22:38.683883 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2026-04-07 08:22:39.092388 | instance -> localhost | ok: Item: artifacts Runtime: 0:00:00.007176
2026-04-07 08:22:39.307397 | instance -> localhost | ok: Item: docs Runtime: 0:00:00.007393
2026-04-07 08:22:39.324401 |
2026-04-07 08:22:39.324489 | PLAY [all]
2026-04-07 08:22:39.331078 |
2026-04-07 08:22:39.331138 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2026-04-07 08:22:39.719292 | instance | changed
2026-04-07 08:22:39.724483 |
2026-04-07 08:22:39.724528 | PLAY RECAP
2026-04-07 08:22:39.724568 | instance | ok: 5 changed: 4 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2026-04-07 08:22:39.724589 |
2026-04-07 08:22:39.823332 | POST-RUN END RESULT_NORMAL: [trusted : github.com/vexxhost/zuul-config/playbooks/base/post.yaml@main]
2026-04-07 08:22:39.836249 | POST-RUN START: [trusted : github.com/vexxhost/zuul-config/playbooks/base/post-logs.yaml@main]
2026-04-07 08:22:40.491933 |
2026-04-07 08:22:40.492080 | PLAY [localhost]
2026-04-07 08:22:40.502883 |
2026-04-07 08:22:40.502954 | TASK [Generate Zuul manifest]
2026-04-07 08:22:40.525417 | localhost | ok
2026-04-07 08:22:40.541541 |
2026-04-07 08:22:40.541647 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2026-04-07 08:22:40.898828 | localhost | changed
2026-04-07 08:22:40.910892 |
2026-04-07 08:22:40.910966 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2026-04-07 08:22:40.941479 | localhost | ok
2026-04-07 08:22:40.949783 |
2026-04-07 08:22:40.949848 | TASK [Upload logs]
2026-04-07 08:22:40.970301 | localhost | ok
2026-04-07 08:22:41.066835 |
2026-04-07 08:22:41.066948 | TASK [Set zuul-log-path fact]
2026-04-07 08:22:41.086289 | localhost | ok
2026-04-07 08:22:41.099268 |
2026-04-07 08:22:41.099347 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-04-07 08:22:41.130520 | localhost | ok
2026-04-07 08:22:41.139910 |
2026-04-07 08:22:41.139974 | TASK [upload-logs : Create log directories]
2026-04-07 08:22:41.484724 | localhost | changed
2026-04-07 08:22:41.490786 |
2026-04-07 08:22:41.490881 | TASK [upload-logs : Ensure logs are readable before uploading]
2026-04-07 08:22:41.858213 | localhost -> localhost | ok: Runtime: 0:00:00.006311
2026-04-07 08:22:41.863214 |
2026-04-07 08:22:41.863279 | TASK [upload-logs : Upload logs to log server]
2026-04-07 08:22:42.282982 | localhost | Output suppressed because no_log was given
2026-04-07 08:22:42.287206 |
2026-04-07 08:22:42.287271 | LOOP [upload-logs : Compress console log and json output]
2026-04-07 08:22:42.330352 | localhost | skipping: Conditional result was False
2026-04-07 08:22:42.337928 | localhost | skipping: Conditional result was False
2026-04-07 08:22:42.349712 |
2026-04-07 08:22:42.349835 | LOOP [upload-logs : Upload compressed console log and json output]
2026-04-07 08:22:42.391677 | localhost | skipping: Conditional result was False
2026-04-07 08:22:42.392059 |
2026-04-07 08:22:42.396000 | localhost | skipping: Conditional result was False
2026-04-07 08:22:42.415070 |
2026-04-07 08:22:42.415340 | LOOP [upload-logs : Upload console log and json output]