2026-02-03 17:17:11.800144 | Job console starting
2026-02-03 17:17:11.813196 | Updating git repos
2026-02-03 17:17:11.887250 | Cloning repos into workspace
2026-02-03 17:17:11.973379 | Restoring repo states
2026-02-03 17:17:12.006419 | Merging changes
2026-02-03 17:17:13.165903 | Checking out repos
2026-02-03 17:17:13.285073 | Preparing playbooks
2026-02-03 17:17:16.831088 | Running Ansible setup
2026-02-03 17:17:20.189571 | PRE-RUN START: [trusted : vexxhost.dev/zuul-config/playbooks/base/pre.yaml@main]
2026-02-03 17:17:20.773702 |
2026-02-03 17:17:20.773891 | PLAY [localhost]
2026-02-03 17:17:20.782061 |
2026-02-03 17:17:20.782250 | TASK [Gathering Facts]
2026-02-03 17:17:21.680878 | localhost | ok
2026-02-03 17:17:21.692226 |
2026-02-03 17:17:21.692331 | TASK [Setup log path fact]
2026-02-03 17:17:21.715342 | localhost | ok
2026-02-03 17:17:21.732768 |
2026-02-03 17:17:21.732973 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-02-03 17:17:21.763260 | localhost | ok
2026-02-03 17:17:21.770227 |
2026-02-03 17:17:21.770300 | TASK [emit-job-header : Print job information]
2026-02-03 17:17:21.811610 | # Job Information
2026-02-03 17:17:21.811881 | Ansible Version: 2.16.15
2026-02-03 17:17:21.811951 | Job: atmosphere-molecule-csi-rbd
2026-02-03 17:17:21.812056 | Pipeline: check
2026-02-03 17:17:21.812106 | Executor: 3a2793d2bd32
2026-02-03 17:17:21.812150 | Triggered by: https://github.com/vexxhost/atmosphere/pull/3535
2026-02-03 17:17:21.812200 | Event ID: 13765a90-0124-11f1-9be0-108e0f1364b3
2026-02-03 17:17:21.816172 |
2026-02-03 17:17:21.816268 | LOOP [emit-job-header : Print node information]
2026-02-03 17:17:21.909000 | localhost | ok:
2026-02-03 17:17:21.909342 | localhost | # Node Information
2026-02-03 17:17:21.909405 | localhost | Inventory Hostname: instance
2026-02-03 17:17:21.909453 | localhost | Hostname: np0000154776
2026-02-03 17:17:21.909497 | localhost | Username: zuul
2026-02-03 17:17:21.909548 | localhost | Distro: Ubuntu 22.04
2026-02-03 17:17:21.909591 | localhost | Provider: yul1
2026-02-03 17:17:21.909633 | localhost | Region: ca-ymq-1
2026-02-03 17:17:21.909675 | localhost | Label: ubuntu-jammy
2026-02-03 17:17:21.909716 | localhost | Product Name: OpenStack Nova
2026-02-03 17:17:21.909757 | localhost | Interface IP: 162.253.55.195
2026-02-03 17:17:21.926499 |
2026-02-03 17:17:21.926646 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-02-03 17:17:22.306190 | localhost -> localhost | changed
2026-02-03 17:17:22.316568 |
2026-02-03 17:17:22.316731 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-02-03 17:17:23.141298 | localhost -> localhost | changed
2026-02-03 17:17:23.150036 |
2026-02-03 17:17:23.150097 | PLAY [all]
2026-02-03 17:17:23.160879 |
2026-02-03 17:17:23.160979 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-02-03 17:17:23.390053 | instance -> localhost | ok
2026-02-03 17:17:23.400976 |
2026-02-03 17:17:23.401060 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-02-03 17:17:23.431659 | instance | ok
2026-02-03 17:17:23.447209 | instance | included: /var/lib/zuul/builds/f47942460ac2482e84103b318d29b633/trusted/project_1/opendev.org/zuul/zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-02-03 17:17:23.452693 |
2026-02-03 17:17:23.452755 | TASK [add-build-sshkey : Create Temp SSH key]
2026-02-03 17:17:24.166219 | instance -> localhost | Generating public/private rsa key pair.
2026-02-03 17:17:24.166435 | instance -> localhost | Your identification has been saved in /var/lib/zuul/builds/f47942460ac2482e84103b318d29b633/work/f47942460ac2482e84103b318d29b633_id_rsa
2026-02-03 17:17:24.166477 | instance -> localhost | Your public key has been saved in /var/lib/zuul/builds/f47942460ac2482e84103b318d29b633/work/f47942460ac2482e84103b318d29b633_id_rsa.pub
2026-02-03 17:17:24.166511 | instance -> localhost | The key fingerprint is:
2026-02-03 17:17:24.166543 | instance -> localhost | SHA256:Mnluu5/a+Qt6Vr+WqybU4+kypwOLo7Fot9MdFr2GBT8 zuul-build-sshkey
2026-02-03 17:17:24.166642 | instance -> localhost | The key's randomart image is:
2026-02-03 17:17:24.166706 | instance -> localhost | +---[RSA 3072]----+
2026-02-03 17:17:24.166748 | instance -> localhost | | |
2026-02-03 17:17:24.166811 | instance -> localhost | | . |
2026-02-03 17:17:24.166867 | instance -> localhost | | + |
2026-02-03 17:17:24.166903 | instance -> localhost | | .. E |
2026-02-03 17:17:24.166978 | instance -> localhost | | + S+ + |
2026-02-03 17:17:24.167013 | instance -> localhost | | == + + |
2026-02-03 17:17:24.167074 | instance -> localhost | | .. +o*.o + . |
2026-02-03 17:17:24.167126 | instance -> localhost | | ..oo+.o+B+= + |
2026-02-03 17:17:24.167167 | instance -> localhost | | ...++ .=**@=+oo |
2026-02-03 17:17:24.167197 | instance -> localhost | +----[SHA256]-----+
2026-02-03 17:17:24.167265 | instance -> localhost | ok: Runtime: 0:00:00.299910
2026-02-03 17:17:24.174658 |
2026-02-03 17:17:24.174749 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-02-03 17:17:24.207160 | instance | ok
2026-02-03 17:17:24.218117 | instance | included: /var/lib/zuul/builds/f47942460ac2482e84103b318d29b633/trusted/project_1/opendev.org/zuul/zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-02-03 17:17:24.225403 |
2026-02-03 17:17:24.225471 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-02-03 17:17:24.249830 | instance | skipping: Conditional result was False
2026-02-03 17:17:24.261176 |
2026-02-03 17:17:24.261315 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-02-03 17:17:24.764247 | instance | changed
2026-02-03 17:17:24.771110 |
2026-02-03 17:17:24.771177 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-02-03 17:17:24.967675 | instance | ok
2026-02-03 17:17:24.973942 |
2026-02-03 17:17:24.974011 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-02-03 17:17:25.453597 | instance | changed
2026-02-03 17:17:25.460825 |
2026-02-03 17:17:25.460919 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-02-03 17:17:25.931002 | instance | changed
2026-02-03 17:17:25.938516 |
2026-02-03 17:17:25.938580 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-02-03 17:17:25.963451 | instance | skipping: Conditional result was False
2026-02-03 17:17:25.975601 |
2026-02-03 17:17:25.975743 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-02-03 17:17:26.344659 | instance -> localhost | changed
2026-02-03 17:17:26.362539 |
2026-02-03 17:17:26.362652 | TASK [add-build-sshkey : Add back temp key]
2026-02-03 17:17:26.631998 | instance -> localhost | Identity added: /var/lib/zuul/builds/f47942460ac2482e84103b318d29b633/work/f47942460ac2482e84103b318d29b633_id_rsa (zuul-build-sshkey)
2026-02-03 17:17:26.632267 | instance -> localhost | ok: Runtime: 0:00:00.015062
2026-02-03 17:17:26.638431 |
2026-02-03 17:17:26.638496 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-02-03 17:17:26.989881 | instance | ok
2026-02-03 17:17:27.001139 |
2026-02-03 17:17:27.001237 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-02-03 17:17:27.027140 | instance | skipping: Conditional result was False
2026-02-03 17:17:27.040784 |
2026-02-03 17:17:27.040880 | TASK [prepare-workspace : Start zuul_console daemon.]
2026-02-03 17:17:27.372226 | instance | ok
2026-02-03 17:17:27.379986 |
2026-02-03 17:17:27.380075 | TASK [prepare-workspace : Synchronize src repos to workspace directory.]
2026-02-03 17:17:29.039724 | instance | Output suppressed because no_log was given
2026-02-03 17:17:29.048705 |
2026-02-03 17:17:29.048768 | LOOP [ensure-output-dirs : Empty Zuul Output directories by removing them]
2026-02-03 17:17:29.250543 | instance | ok: "logs"
2026-02-03 17:17:29.250861 | instance | ok: All items complete
2026-02-03 17:17:29.250902 |
2026-02-03 17:17:29.414607 | instance | ok: "artifacts"
2026-02-03 17:17:29.572675 | instance | ok: "docs"
2026-02-03 17:17:29.592686 |
2026-02-03 17:17:29.592781 | LOOP [ensure-output-dirs : Ensure Zuul Output directories exist]
2026-02-03 17:17:29.781723 | instance | changed: "logs"
2026-02-03 17:17:29.945920 | instance | changed: "artifacts"
2026-02-03 17:17:30.111936 | instance | changed: "docs"
2026-02-03 17:17:30.131182 |
2026-02-03 17:17:30.131300 | PLAY RECAP
2026-02-03 17:17:30.131355 | instance | ok: 15 changed: 8 unreachable: 0 failed: 0 skipped: 3 rescued: 0 ignored: 0
2026-02-03 17:17:30.131384 | localhost | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2026-02-03 17:17:30.131406 |
2026-02-03 17:17:30.266395 | PRE-RUN END RESULT_NORMAL: [trusted : vexxhost.dev/zuul-config/playbooks/base/pre.yaml@main]
2026-02-03 17:17:30.270233 | PRE-RUN START: [untrusted : github.com/vexxhost/zuul-jobs/playbooks/molecule/pre.yaml@main]
2026-02-03 17:17:30.825712 |
2026-02-03 17:17:30.825852 | PLAY [all]
2026-02-03 17:17:30.836808 |
2026-02-03 17:17:30.836884 | TASK [setup-uv : Extract archive]
2026-02-03 17:17:33.106843 | instance | changed
2026-02-03 17:17:33.116004 |
2026-02-03 17:17:33.116089 | TASK [setup-uv : Print version]
2026-02-03 17:17:34.214223 | instance | uv 0.8.13
2026-02-03 17:17:33.651420 | instance | ok: Runtime: 0:00:00.010495
2026-02-03 17:17:33.659648 |
2026-02-03 17:17:33.659702 | PLAY RECAP
2026-02-03 17:17:33.659749 | instance | ok: 2 changed: 2 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2026-02-03 17:17:33.659772 |
2026-02-03 17:17:33.788317 | PRE-RUN END RESULT_NORMAL: [untrusted : github.com/vexxhost/zuul-jobs/playbooks/molecule/pre.yaml@main]
2026-02-03 17:17:33.792179 | PRE-RUN START: [untrusted : github.com/vexxhost/atmosphere/test-playbooks/molecule/pre.yml@stable/2024.1]
2026-02-03 17:17:34.338259 |
2026-02-03 17:17:34.338392 | PLAY [all]
2026-02-03 17:17:34.348930 |
2026-02-03 17:17:34.349186 | TASK [Install "jq" for log collection]
2026-02-03 17:17:44.624430 | instance | changed
2026-02-03 17:17:44.626419 |
2026-02-03 17:17:44.626479 | PLAY RECAP
2026-02-03 17:17:44.626534 | instance | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2026-02-03 17:17:44.626588 |
2026-02-03 17:17:44.741843 | PRE-RUN END RESULT_NORMAL: [untrusted : github.com/vexxhost/atmosphere/test-playbooks/molecule/pre.yml@stable/2024.1]
2026-02-03 17:17:44.745821 | RUN START: [untrusted : github.com/vexxhost/zuul-jobs/playbooks/molecule/run.yaml@main]
2026-02-03 17:17:45.295539 |
2026-02-03 17:17:45.295733 | PLAY [all]
2026-02-03 17:17:45.306802 |
2026-02-03 17:17:45.306871 | TASK [Copy inventory file for Zuul]
2026-02-03 17:17:46.154242 | instance | changed
2026-02-03 17:17:46.161464 |
2026-02-03 17:17:46.161544 | TASK [Switch "ansible_host" to private IP]
2026-02-03 17:17:46.450200 | instance | changed: 1 replacements made
2026-02-03 17:17:46.456632 |
2026-02-03 17:17:46.456728 | TASK [Run Molecule scenario]
2026-02-03 17:17:46.889388 | instance | Using CPython 3.10.12 interpreter at: /usr/bin/python3
2026-02-03 17:17:46.889604 | instance | Creating virtual environment at: .venv
2026-02-03 17:17:46.928753 | instance | Building atmosphere @ file:///home/zuul/src/github.com/vexxhost/atmosphere
2026-02-03 17:17:46.964735 | instance | Downloading rjsonnet (1.2MiB)
2026-02-03 17:17:46.995025 | instance | Downloading pygments (1.2MiB)
2026-02-03 17:17:46.996075 | instance | Downloading setuptools (1.1MiB)
2026-02-03 17:17:47.006124 | instance | Downloading openstacksdk (1.7MiB)
2026-02-03 17:17:47.021577 | instance | Downloading cryptography (4.2MiB)
2026-02-03 17:17:47.024853 | instance | Downloading netaddr (2.2MiB)
2026-02-03 17:17:47.025936 | instance | Downloading ansible-core (2.1MiB)
2026-02-03 17:17:47.026891 | instance | Downloading kubernetes (1.9MiB)
2026-02-03 17:17:47.311352 | instance | Building pyperclip==1.9.0
2026-02-03 17:17:47.379681 | instance | Downloading rjsonnet
2026-02-03 17:17:47.517085 | instance | Downloading netaddr
2026-02-03 17:17:47.552943 | instance | Downloading pygments
2026-02-03 17:17:47.573888 | instance | Downloading cryptography
2026-02-03 17:17:47.628302 | instance | Downloading setuptools
2026-02-03 17:17:47.711770 | instance | Downloading kubernetes
2026-02-03 17:17:47.746390 | instance | Downloading ansible-core
2026-02-03 17:17:47.764025 | instance | Downloading openstacksdk
2026-02-03 17:17:48.206317 | instance | Built pyperclip==1.9.0
2026-02-03 17:17:48.496596 | instance | Built atmosphere @ file:///home/zuul/src/github.com/vexxhost/atmosphere
2026-02-03 17:17:48.569171 | instance | Installed 79 packages in 71ms
2026-02-03 17:17:49.338328 | instance | WARNING Molecule scenarios should migrate to 'extensions/molecule'
2026-02-03 17:17:49.950752 | instance | INFO [csi > discovery] scenario test matrix: dependency, cleanup, destroy, syntax, create, prepare, converge, idempotence, side_effect, verify, cleanup, destroy
2026-02-03 17:17:49.951066 | instance | INFO [csi > prerun] Performing prerun with role_name_check=0...
2026-02-03 17:18:35.133567 | instance | INFO [csi > dependency] Executing
2026-02-03 17:18:35.133727 | instance | WARNING [csi > dependency] Missing roles requirements file: requirements.yml
2026-02-03 17:18:35.133933 | instance | WARNING [csi > dependency] Missing collections requirements file: collections.yml
2026-02-03 17:18:35.134057 | instance | WARNING [csi > dependency] Executed: 2 missing (Remove from test_sequence to suppress)
2026-02-03 17:18:35.141824 | instance | INFO [csi > cleanup] Executing
2026-02-03 17:18:35.142055 | instance | WARNING [csi > cleanup] Executed: Missing playbook (Remove from test_sequence to suppress)
2026-02-03 17:18:35.167014 | instance | INFO [csi > destroy] Executing
2026-02-03 17:18:35.167157 | instance | WARNING [csi > destroy] Skipping, '--destroy=never' requested.
2026-02-03 17:18:35.167317 | instance | INFO [csi > destroy] Executed: Successful
2026-02-03 17:18:35.182729 | instance | INFO [csi > syntax] Executing
2026-02-03 17:18:36.879177 | instance |
2026-02-03 17:18:36.879631 | instance | playbook: /home/zuul/src/github.com/vexxhost/atmosphere/molecule/csi/converge.yml
2026-02-03 17:18:36.980020 | instance | INFO [csi > syntax] Executed: Successful
2026-02-03 17:18:36.988755 | instance | INFO [csi > create] Executing
2026-02-03 17:18:36.990744 | instance | WARNING [csi > create] Executed: Missing playbook (Remove from test_sequence to suppress)
2026-02-03 17:18:36.998479 | instance | INFO [csi > prepare] Executing
2026-02-03 17:18:37.910558 | instance |
2026-02-03 17:18:37.910638 | instance | PLAY [Prepare] *****************************************************************
2026-02-03 17:18:37.910647 | instance |
2026-02-03 17:18:37.910654 | instance | TASK [Gathering Facts] *********************************************************
2026-02-03 17:18:37.910661 | instance | Tuesday 03 February 2026 17:18:37 +0000 (0:00:00.036) 0:00:00.036 ******
2026-02-03 17:18:39.089500 | instance | [WARNING]: Platform linux on host instance is using the discovered Python
2026-02-03 17:18:39.089560 | instance | interpreter at /usr/bin/python3.10, but future installation of another Python
2026-02-03 17:18:39.089568 | instance | interpreter could change the meaning of that path. See
2026-02-03 17:18:39.089574 | instance | https://docs.ansible.com/ansible-
2026-02-03 17:18:39.089580 | instance | core/2.17/reference_appendices/interpreter_discovery.html for more information.
2026-02-03 17:18:39.099766 | instance | ok: [instance]
2026-02-03 17:18:39.099816 | instance |
2026-02-03 17:18:39.099840 | instance | TASK [Configure short hostname] ************************************************
2026-02-03 17:18:39.099847 | instance | Tuesday 03 February 2026 17:18:39 +0000 (0:00:01.189) 0:00:01.226 ******
2026-02-03 17:18:39.828528 | instance | changed: [instance]
2026-02-03 17:18:39.828581 | instance |
2026-02-03 17:18:39.828589 | instance | TASK [Ensure hostname inside hosts file] ***************************************
2026-02-03 17:18:39.828596 | instance | Tuesday 03 February 2026 17:18:39 +0000 (0:00:00.728) 0:00:01.954 ******
2026-02-03 17:18:40.108183 | instance | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2026-02-03 17:18:40.108221 | instance | with a mode of 0700, this may cause issues when running as another user. To
2026-02-03 17:18:40.108227 | instance | avoid this, create the remote_tmp dir with the correct permissions manually
2026-02-03 17:18:40.115355 | instance | changed: [instance]
2026-02-03 17:18:40.115377 | instance |
2026-02-03 17:18:40.115385 | instance | TASK [Purge "snapd" package] ***************************************************
2026-02-03 17:18:40.115391 | instance | Tuesday 03 February 2026 17:18:40 +0000 (0:00:00.287) 0:00:02.242 ******
2026-02-03 17:18:40.996899 | instance | ok: [instance]
2026-02-03 17:18:40.996961 | instance |
2026-02-03 17:18:40.996973 | instance | PLAY [Create devices for Ceph] *************************************************
2026-02-03 17:18:40.996984 | instance |
2026-02-03 17:18:40.996993 | instance | TASK [Gathering Facts] *********************************************************
2026-02-03 17:18:40.997004 | instance | Tuesday 03 February 2026 17:18:40 +0000 (0:00:00.880) 0:00:03.123 ******
2026-02-03 17:18:41.682181 | instance | ok: [instance]
2026-02-03 17:18:41.682230 | instance |
2026-02-03 17:18:41.682242 | instance | TASK [Install depedencies] *****************************************************
2026-02-03 17:18:41.682252 | instance | Tuesday 03 February 2026 17:18:41 +0000 (0:00:00.685) 0:00:03.809 ******
2026-02-03 17:19:14.056402 | instance | changed: [instance]
2026-02-03 17:19:14.056433 | instance |
2026-02-03 17:19:14.056438 | instance | TASK [Start up service] ********************************************************
2026-02-03 17:19:14.056443 | instance | Tuesday 03 February 2026 17:19:14 +0000 (0:00:32.374) 0:00:36.183 ******
2026-02-03 17:19:14.601309 | instance | ok: [instance]
2026-02-03 17:19:14.601342 | instance |
2026-02-03 17:19:14.601356 | instance | TASK [Generate lvm.conf] *******************************************************
2026-02-03 17:19:14.601363 | instance | Tuesday 03 February 2026 17:19:14 +0000 (0:00:00.545) 0:00:36.728 ******
2026-02-03 17:19:14.890745 | instance | ok: [instance]
2026-02-03 17:19:14.890786 | instance |
2026-02-03 17:19:14.890793 | instance | TASK [Write /etc/lvm/lvm.conf] *************************************************
2026-02-03 17:19:14.890798 | instance | Tuesday 03 February 2026 17:19:14 +0000 (0:00:00.289) 0:00:37.018 ******
2026-02-03 17:19:15.514374 | instance | changed: [instance]
2026-02-03 17:19:15.514409 | instance |
2026-02-03 17:19:15.514414 | instance | TASK [Get list of all loopback devices] ****************************************
2026-02-03 17:19:15.514419 | instance | Tuesday 03 February 2026 17:19:15 +0000 (0:00:00.623) 0:00:37.642 ******
2026-02-03 17:19:15.694470 | instance | ok: [instance]
2026-02-03 17:19:15.694499 | instance |
2026-02-03 17:19:15.694505 | instance | TASK [Fail if there is any existing loopback devices] **************************
2026-02-03 17:19:15.694510 | instance | Tuesday 03 February 2026 17:19:15 +0000 (0:00:00.180) 0:00:37.822 ******
2026-02-03 17:19:15.719758 | instance | skipping: [instance]
2026-02-03 17:19:15.719794 | instance |
2026-02-03 17:19:15.719800 | instance | TASK [Create devices for Ceph] *************************************************
2026-02-03 17:19:15.719805 | instance | Tuesday 03 February 2026 17:19:15 +0000 (0:00:00.025) 0:00:37.847 ******
2026-02-03 17:19:16.217959 | instance | changed: [instance] => (item=osd0)
2026-02-03 17:19:16.218055 | instance | changed: [instance] => (item=osd1)
2026-02-03 17:19:16.218063 | instance | changed: [instance] => (item=osd2)
2026-02-03 17:19:16.218070 | instance |
2026-02-03 17:19:16.218076 | instance | TASK [Set permissions on loopback devices] *************************************
2026-02-03 17:19:16.218083 | instance | Tuesday 03 February 2026 17:19:16 +0000 (0:00:00.497) 0:00:38.344 ******
2026-02-03 17:19:16.786827 | instance | changed: [instance] => (item=osd0)
2026-02-03 17:19:16.787682 | instance | changed: [instance] => (item=osd1)
2026-02-03 17:19:16.787702 | instance | changed: [instance] => (item=osd2)
2026-02-03 17:19:16.787710 | instance |
2026-02-03 17:19:16.787716 | instance | TASK [Start loop devices] ******************************************************
2026-02-03 17:19:16.787722 | instance | Tuesday 03 February 2026 17:19:16 +0000 (0:00:00.569) 0:00:38.914 ******
2026-02-03 17:19:17.410350 | instance | changed: [instance] => (item=osd0)
2026-02-03 17:19:17.410381 | instance | changed: [instance] => (item=osd1)
2026-02-03 17:19:17.410386 | instance | changed: [instance] => (item=osd2)
2026-02-03 17:19:17.410390 | instance |
2026-02-03 17:19:17.410395 | instance | TASK [Create a volume group for each loop device] ******************************
2026-02-03 17:19:17.410400 | instance | Tuesday 03 February 2026 17:19:17 +0000 (0:00:00.622) 0:00:39.537 ******
2026-02-03 17:19:20.270416 | instance | changed: [instance] => (item=osd0)
2026-02-03 17:19:20.270462 | instance | changed: [instance] => (item=osd1)
2026-02-03 17:19:20.270468 | instance | changed: [instance] => (item=osd2)
2026-02-03 17:19:20.270472 | instance |
2026-02-03 17:19:20.270477 | instance | TASK [Create a logical volume for each loop device] ****************************
2026-02-03 17:19:20.270482 | instance | Tuesday 03 February 2026 17:19:20 +0000 (0:00:02.860) 0:00:42.397 ******
2026-02-03 17:19:21.994635 | instance | changed: [instance] => (item=ceph-instance-osd0)
2026-02-03 17:19:21.994676 | instance | changed: [instance] => (item=ceph-instance-osd1)
2026-02-03 17:19:21.994684 | instance | changed: [instance] => (item=ceph-instance-osd2)
2026-02-03 17:19:21.994690 | instance |
2026-02-03 17:19:21.994696 | instance | PLAY RECAP *********************************************************************
2026-02-03 17:19:21.995278 | instance | instance : ok=15 changed=9 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
2026-02-03 17:19:21.995291 | instance |
2026-02-03 17:19:21.995297 | instance | Tuesday 03 February 2026 17:19:21 +0000 (0:00:01.723) 0:00:44.120 ******
2026-02-03 17:19:21.995303 | instance | ===============================================================================
2026-02-03 17:19:21.995309 | instance | Install depedencies ---------------------------------------------------- 32.37s
2026-02-03 17:19:21.995315 | instance | Create a volume group for each loop device ------------------------------ 2.86s
2026-02-03 17:19:21.995321 | instance | Create a logical volume for each loop device ---------------------------- 1.72s
2026-02-03 17:19:21.995326 | instance | Gathering Facts --------------------------------------------------------- 1.19s
2026-02-03 17:19:21.995467 | instance | Purge "snapd" package --------------------------------------------------- 0.88s
2026-02-03 17:19:21.995678 | instance | Configure short hostname ------------------------------------------------ 0.73s
2026-02-03 17:19:21.995879 | instance | Gathering Facts --------------------------------------------------------- 0.69s
2026-02-03 17:19:21.996080 | instance | Write /etc/lvm/lvm.conf ------------------------------------------------- 0.62s
2026-02-03 17:19:21.996281 | instance | Start loop devices ------------------------------------------------------ 0.62s
2026-02-03 17:19:21.996482 | instance | Set permissions on loopback devices ------------------------------------- 0.57s
2026-02-03 17:19:21.996683 | instance | Start up service -------------------------------------------------------- 0.55s
2026-02-03 17:19:21.996883 | instance | Create devices for Ceph ------------------------------------------------- 0.50s
2026-02-03 17:19:21.997083 | instance | Generate lvm.conf ------------------------------------------------------- 0.29s
2026-02-03 17:19:21.997286 | instance | Ensure hostname inside hosts file --------------------------------------- 0.29s
2026-02-03 17:19:21.997486 | instance | Get list of all loopback devices ---------------------------------------- 0.18s
2026-02-03 17:19:21.997688 | instance | Fail if there is any existing loopback devices -------------------------- 0.03s
2026-02-03 17:19:22.066625 | instance | INFO [csi > prepare] Executed: Successful
2026-02-03 17:19:22.075837 | instance | INFO [csi > converge] Executing
2026-02-03 17:19:23.478345 | instance |
2026-02-03 17:19:23.478422 | instance | PLAY [Debug CSI driver value] **************************************************
2026-02-03 17:19:23.478450 | instance |
2026-02-03 17:19:23.478460 | instance | TASK [Gathering Facts] *********************************************************
2026-02-03 17:19:23.478470 | instance | Tuesday 03 February 2026 17:19:23 +0000 (0:00:00.011) 0:00:00.011 ******
2026-02-03 17:19:24.427653 | instance | [WARNING]: Platform linux on host instance is using the discovered Python
2026-02-03 17:19:24.427697 | instance | interpreter at /usr/bin/python3.10, but future installation of another Python
2026-02-03 17:19:24.427702 | instance | interpreter could change the meaning of that path. See
2026-02-03 17:19:24.427707 | instance | https://docs.ansible.com/ansible-
2026-02-03 17:19:24.427711 | instance | core/2.17/reference_appendices/interpreter_discovery.html for more information.
2026-02-03 17:19:24.438770 | instance | ok: [instance]
2026-02-03 17:19:24.438800 | instance |
2026-02-03 17:19:24.438825 | instance | TASK [Display CSI driver value and environment variable] ***********************
2026-02-03 17:19:24.438834 | instance | Tuesday 03 February 2026 17:19:24 +0000 (0:00:00.961) 0:00:00.972 ******
2026-02-03 17:19:24.478657 | instance | ok: [instance] => {
2026-02-03 17:19:24.480517 | instance | "msg": "csi_driver=rbd, MOLECULE_CSI_DRIVER="
2026-02-03 17:19:24.480544 | instance | }
2026-02-03 17:19:24.480555 | instance |
2026-02-03 17:19:24.480565 | instance | PLAY [all] *********************************************************************
2026-02-03 17:19:24.480574 | instance |
2026-02-03 17:19:24.480583 | instance | TASK [Gathering Facts] *********************************************************
2026-02-03 17:19:24.480592 | instance | Tuesday 03 February 2026 17:19:24 +0000 (0:00:00.039) 0:00:01.012 ******
2026-02-03 17:19:25.289427 | instance | ok: [instance]
2026-02-03 17:19:25.289457 | instance |
2026-02-03 17:19:25.289463 | instance | TASK [Set a fact with the "atmosphere_images" for other plays] *****************
2026-02-03 17:19:25.289467 | instance | Tuesday 03 February 2026 17:19:25 +0000 (0:00:00.810) 0:00:01.823 ******
2026-02-03 17:19:25.465177 | instance | ok: [instance]
2026-02-03 17:19:25.465211 | instance |
2026-02-03 17:19:25.465216 | instance | PLAY [Deploy Ceph monitors & managers] *****************************************
2026-02-03 17:19:25.465221 | instance |
2026-02-03 17:19:25.465225 | instance | TASK [Gathering Facts] *********************************************************
2026-02-03 17:19:25.465229 | instance | Tuesday 03 February 2026 17:19:25 +0000 (0:00:00.175) 0:00:01.999 ******
2026-02-03 17:19:26.329845 | instance | ok: [instance]
2026-02-03 17:19:26.329892 | instance |
2026-02-03 17:19:26.329904 | instance | TASK [vexxhost.containers.forget_package : Forget package] *********************
2026-02-03 17:19:26.329915 | instance | Tuesday 03 February 2026 17:19:26 +0000 (0:00:00.863) 0:00:02.862 ******
2026-02-03 17:19:26.602269 | instance | ok: [instance]
2026-02-03 17:19:26.602301 | instance |
2026-02-03 17:19:26.602307 | instance | TASK [vexxhost.containers.package : Update state for tar] **********************
2026-02-03 17:19:26.602312 | instance | Tuesday 03 February 2026 17:19:26 +0000 (0:00:00.273) 0:00:03.136 ******
2026-02-03 17:19:26.644727 | instance | skipping: [instance]
2026-02-03 17:19:26.644803 | instance |
2026-02-03 17:19:26.644816 | instance | TASK [vexxhost.containers.directory : Create directory (/var/lib/downloads)] ***
2026-02-03 17:19:26.644828 | instance | Tuesday 03 February 2026 17:19:26 +0000 (0:00:00.041) 0:00:03.177 ******
2026-02-03 17:19:26.931960 | instance | changed: [instance]
2026-02-03 17:19:26.931997 | instance |
2026-02-03 17:19:26.932003 | instance | TASK [vexxhost.containers.download_artifact : Starting download of file] *******
2026-02-03 17:19:26.932008 | instance | Tuesday 03 February 2026 17:19:26 +0000 (0:00:00.288) 0:00:03.465 ******
2026-02-03 17:19:27.004494 | instance | ok: [instance] => {
2026-02-03 17:19:27.004529 | instance | "msg": "https://github.com/opencontainers/runc/releases/download/v1.4.0/runc.amd64"
2026-02-03 17:19:27.004561 | instance | }
2026-02-03 17:19:27.004566 | instance |
2026-02-03 17:19:27.004570 | instance | TASK [vexxhost.containers.download_artifact : Download item] *******************
2026-02-03 17:19:27.004575 | instance | Tuesday 03 February 2026 17:19:27 +0000 (0:00:00.072) 0:00:03.537 ******
2026-02-03 17:19:28.013389 | instance | changed: [instance]
2026-02-03 17:19:28.013417 | instance |
2026-02-03 17:19:28.013423 | instance | TASK [vexxhost.containers.download_artifact : Extract archive] *****************
2026-02-03 17:19:28.013433 | instance | Tuesday 03 February 2026 17:19:28 +0000 (0:00:01.009) 0:00:04.547 ******
2026-02-03 17:19:28.061190 | instance | skipping: [instance]
2026-02-03 17:19:28.061243 | instance |
2026-02-03 17:19:28.061255 | instance | TASK [vexxhost.containers.package : Update state for tar] **********************
2026-02-03 17:19:28.061266 | instance | Tuesday 03 February 2026 17:19:28 +0000 (0:00:00.047) 0:00:04.594 ******
2026-02-03 17:19:28.106743 | instance | skipping: [instance]
2026-02-03 17:19:28.106812 | instance |
2026-02-03 17:19:28.106824 | instance | TASK [vexxhost.containers.forget_package : Forget package] *********************
2026-02-03 17:19:28.107083 | instance | Tuesday 03 February 2026 17:19:28 +0000 (0:00:00.045) 0:00:04.639 ******
2026-02-03 17:19:28.305629 | instance | ok: [instance]
2026-02-03 17:19:28.305664 | instance |
2026-02-03 17:19:28.305670 | instance | TASK [vexxhost.containers.package : Update state for tar] **********************
2026-02-03 17:19:28.305675 | instance | Tuesday 03 February 2026 17:19:28 +0000 (0:00:00.199) 0:00:04.839 ******
2026-02-03 17:19:29.418409 | instance | ok: [instance]
2026-02-03 17:19:29.418469 | instance |
2026-02-03 17:19:29.418482 | instance | TASK [vexxhost.containers.download_artifact : Starting download of file] *******
2026-02-03 17:19:29.418493 | instance | Tuesday 03 February 2026 17:19:29 +0000 (0:00:01.111) 0:00:05.951 ******
2026-02-03 17:19:29.482820 | instance | ok: [instance] => {
2026-02-03 17:19:29.482910 | instance | "msg": "https://github.com/containerd/containerd/releases/download/v2.2.0/containerd-2.2.0-linux-amd64.tar.gz"
2026-02-03 17:19:29.482923 | instance | }
2026-02-03 17:19:29.482933 | instance |
2026-02-03 17:19:29.482943 | instance | TASK [vexxhost.containers.download_artifact : Download item] *******************
2026-02-03 17:19:29.482953 | instance | Tuesday 03 February 2026 17:19:29 +0000 (0:00:00.063) 0:00:06.015 ******
2026-02-03 17:19:30.267876 | instance | changed: [instance]
2026-02-03 17:19:30.267932 | instance |
2026-02-03 17:19:30.267944 | instance | TASK [vexxhost.containers.download_artifact : Extract archive] *****************
2026-02-03 17:19:30.267953 | instance | Tuesday 03 February 2026 17:19:30 +0000 (0:00:00.785) 0:00:06.800 ******
2026-02-03 17:19:33.101995 | instance | changed: [instance]
2026-02-03 17:19:33.102029 | instance |
2026-02-03 17:19:33.102035 | instance | TASK [vexxhost.containers.containerd : Install SELinux packages] ***************
2026-02-03 17:19:33.102039 | instance | Tuesday 03 February 2026 17:19:33 +0000 (0:00:02.835) 0:00:09.635 ******
2026-02-03 17:19:33.130515 | instance | skipping: [instance]
2026-02-03 17:19:33.130545 | instance |
2026-02-03 17:19:33.130552 | instance | TASK [vexxhost.containers.containerd : Set SELinux to permissive at runtime] ***
2026-02-03 17:19:33.130558 | instance | Tuesday 03 February 2026 17:19:33 +0000 (0:00:00.028) 0:00:09.664 ******
2026-02-03 17:19:33.162694 | instance | skipping: [instance]
2026-02-03 17:19:33.162778 | instance |
2026-02-03 17:19:33.163840 | instance | TASK [vexxhost.containers.containerd : Persist SELinux permissive mode] ********
2026-02-03 17:19:33.163903 | instance | Tuesday 03 February 2026 17:19:33 +0000 (0:00:00.031) 0:00:09.695 ******
2026-02-03 17:19:33.184778 | instance | skipping: [instance]
2026-02-03 17:19:33.184808 | instance |
2026-02-03 17:19:33.184817 | instance | TASK [vexxhost.containers.containerd : Install AppArmor packages] **************
2026-02-03 17:19:33.184826 | instance | Tuesday 03 February 2026 17:19:33 +0000 (0:00:00.022) 0:00:09.718 ******
2026-02-03 17:19:38.557943 | instance | changed: [instance]
2026-02-03 17:19:38.557972 | instance |
2026-02-03 17:19:38.557978 | instance | TASK [vexxhost.containers.containerd : Create systemd service file for containerd] ***
2026-02-03 17:19:38.557988 | instance | Tuesday 03 February 2026 17:19:38 +0000 (0:00:05.373) 0:00:15.091 ******
2026-02-03 17:19:39.069614 | instance | changed: [instance]
2026-02-03 17:19:39.069662 | instance |
2026-02-03 17:19:39.069670 | instance | TASK [vexxhost.containers.containerd : Create folders for configuration] *******
2026-02-03 17:19:39.069677 | instance | Tuesday 03 February 2026 17:19:39 +0000 (0:00:00.511) 0:00:15.603 ******
2026-02-03 17:19:39.910629 | instance | changed: [instance] => (item={'path': '/etc/containerd'})
2026-02-03 17:19:39.910673 | instance | changed: [instance] => (item={'path': '/var/lib/containerd', 'mode': '0o700'})
2026-02-03 17:19:39.910681 | instance | changed: [instance] => (item={'path': '/run/containerd', 'mode': '0o711'})
2026-02-03 17:19:39.910699 | instance | changed: [instance] => (item={'path': '/run/containerd/io.containerd.grpc.v1.cri', 'mode': '0o700'})
2026-02-03 17:19:39.911454 | instance | changed: [instance] => (item={'path': '/run/containerd/io.containerd.sandbox.controller.v1.shim', 'mode': '0o700'})
2026-02-03 17:19:39.911470 | instance |
2026-02-03 17:19:39.911477 | instance | TASK [vexxhost.containers.containerd : Create containerd config file] **********
2026-02-03 17:19:39.911483 | instance | Tuesday 03 February 2026 17:19:39 +0000 (0:00:00.840) 0:00:16.444 ******
2026-02-03 17:19:40.445568 | instance | changed: [instance]
2026-02-03 17:19:40.445602 | instance |
2026-02-03 17:19:40.445608 | instance | TASK [vexxhost.containers.containerd : Force any restarts if necessary] ********
2026-02-03 17:19:40.445613 | instance | Tuesday 03 February 2026 17:19:40 +0000 (0:00:00.518) 0:00:16.962 ******
2026-02-03 17:19:40.445617 | instance |
2026-02-03 17:19:40.445621 | instance | RUNNING HANDLER [vexxhost.containers.containerd : Reload systemd] **************
2026-02-03 17:19:40.445625 | instance | Tuesday 03 February 2026 17:19:40 +0000 (0:00:00.016) 0:00:16.979 ******
2026-02-03 17:19:41.429642 | instance | ok: [instance]
2026-02-03 17:19:41.429786 | instance |
2026-02-03 17:19:41.429793 | instance |
RUNNING HANDLER [vexxhost.containers.containerd : Restart containerd] ********** 2026-02-03 17:19:41.429799 | instance | Tuesday 03 February 2026 17:19:41 +0000 (0:00:00.983) 0:00:17.962 ****** 2026-02-03 17:19:41.845565 | instance | changed: [instance] 2026-02-03 17:19:41.845607 | instance | 2026-02-03 17:19:41.845613 | instance | TASK [vexxhost.containers.containerd : Enable and start service] *************** 2026-02-03 17:19:41.845618 | instance | Tuesday 03 February 2026 17:19:41 +0000 (0:00:00.416) 0:00:18.379 ****** 2026-02-03 17:19:42.380664 | instance | changed: [instance] 2026-02-03 17:19:42.380697 | instance | 2026-02-03 17:19:42.380703 | instance | TASK [vexxhost.containers.forget_package : Forget package] ********************* 2026-02-03 17:19:42.380708 | instance | Tuesday 03 February 2026 17:19:42 +0000 (0:00:00.534) 0:00:18.914 ****** 2026-02-03 17:19:42.579904 | instance | ok: [instance] 2026-02-03 17:19:42.579940 | instance | 2026-02-03 17:19:42.579945 | instance | TASK [vexxhost.containers.download_artifact : Starting download of file] ******* 2026-02-03 17:19:42.579950 | instance | Tuesday 03 February 2026 17:19:42 +0000 (0:00:00.199) 0:00:19.113 ****** 2026-02-03 17:19:42.642007 | instance | ok: [instance] => { 2026-02-03 17:19:42.642046 | instance | "msg": "https://download.docker.com/linux/static/stable/x86_64/docker-24.0.9.tgz" 2026-02-03 17:19:42.642052 | instance | } 2026-02-03 17:19:42.642056 | instance | 2026-02-03 17:19:42.642060 | instance | TASK [vexxhost.containers.download_artifact : Download item] ******************* 2026-02-03 17:19:42.642065 | instance | Tuesday 03 February 2026 17:19:42 +0000 (0:00:00.061) 0:00:19.175 ****** 2026-02-03 17:19:43.546619 | instance | changed: [instance] 2026-02-03 17:19:43.546657 | instance | 2026-02-03 17:19:43.546662 | instance | TASK [vexxhost.containers.download_artifact : Extract archive] ***************** 2026-02-03 17:19:43.546701 | instance | Tuesday 03 February 2026 17:19:43 +0000 
(0:00:00.904) 0:00:20.080 ****** 2026-02-03 17:19:48.097523 | instance | changed: [instance] 2026-02-03 17:19:48.097567 | instance | 2026-02-03 17:19:48.097572 | instance | TASK [vexxhost.containers.docker : Install AppArmor packages] ****************** 2026-02-03 17:19:48.097577 | instance | Tuesday 03 February 2026 17:19:48 +0000 (0:00:04.550) 0:00:24.631 ****** 2026-02-03 17:19:49.045829 | instance | ok: [instance] 2026-02-03 17:19:49.045866 | instance | 2026-02-03 17:19:49.045872 | instance | TASK [vexxhost.containers.docker : Ensure group "docker" exists] *************** 2026-02-03 17:19:49.045876 | instance | Tuesday 03 February 2026 17:19:49 +0000 (0:00:00.948) 0:00:25.579 ****** 2026-02-03 17:19:51.941556 | instance | changed: [instance] 2026-02-03 17:19:51.941597 | instance | 2026-02-03 17:19:51.941603 | instance | TASK [vexxhost.containers.docker : Create systemd service file for docker] ***** 2026-02-03 17:19:51.941608 | instance | Tuesday 03 February 2026 17:19:51 +0000 (0:00:02.895) 0:00:28.475 ****** 2026-02-03 17:19:52.325686 | instance | changed: [instance] 2026-02-03 17:19:52.325718 | instance | 2026-02-03 17:19:52.325723 | instance | TASK [vexxhost.containers.docker : Create folders for configuration] *********** 2026-02-03 17:19:52.325740 | instance | Tuesday 03 February 2026 17:19:52 +0000 (0:00:00.384) 0:00:28.859 ****** 2026-02-03 17:19:52.843557 | instance | changed: [instance] => (item={'path': '/etc/docker'}) 2026-02-03 17:19:52.843590 | instance | changed: [instance] => (item={'path': '/var/lib/docker', 'mode': '0o710'}) 2026-02-03 17:19:52.843596 | instance | changed: [instance] => (item={'path': '/run/docker', 'mode': '0o711'}) 2026-02-03 17:19:52.843601 | instance | 2026-02-03 17:19:52.843606 | instance | TASK [vexxhost.containers.docker : Create systemd socket file for docker] ****** 2026-02-03 17:19:52.843610 | instance | Tuesday 03 February 2026 17:19:52 +0000 (0:00:00.517) 0:00:29.377 ****** 2026-02-03 17:19:53.234662 | instance | 
changed: [instance] 2026-02-03 17:19:53.234718 | instance | 2026-02-03 17:19:53.234975 | instance | TASK [vexxhost.containers.docker : Create docker daemon config file] *********** 2026-02-03 17:19:53.235013 | instance | Tuesday 03 February 2026 17:19:53 +0000 (0:00:00.391) 0:00:29.768 ****** 2026-02-03 17:19:53.674647 | instance | changed: [instance] 2026-02-03 17:19:53.674720 | instance | 2026-02-03 17:19:53.675405 | instance | TASK [vexxhost.containers.docker : Force any restarts if necessary] ************ 2026-02-03 17:19:53.675452 | instance | Tuesday 03 February 2026 17:19:53 +0000 (0:00:00.422) 0:00:30.190 ****** 2026-02-03 17:19:53.675458 | instance | 2026-02-03 17:19:53.675463 | instance | RUNNING HANDLER [vexxhost.containers.containerd : Reload systemd] ************** 2026-02-03 17:19:53.675467 | instance | Tuesday 03 February 2026 17:19:53 +0000 (0:00:00.017) 0:00:30.208 ****** 2026-02-03 17:19:54.422172 | instance | ok: [instance] 2026-02-03 17:19:54.422255 | instance | 2026-02-03 17:19:54.422536 | instance | RUNNING HANDLER [vexxhost.containers.docker : Restart docker] ****************** 2026-02-03 17:19:54.422590 | instance | Tuesday 03 February 2026 17:19:54 +0000 (0:00:00.747) 0:00:30.956 ****** 2026-02-03 17:19:58.492791 | instance | changed: [instance] 2026-02-03 17:19:58.493278 | instance | 2026-02-03 17:19:58.493299 | instance | TASK [vexxhost.containers.docker : Enable and start service] ******************* 2026-02-03 17:19:58.493307 | instance | Tuesday 03 February 2026 17:19:58 +0000 (0:00:04.070) 0:00:35.026 ****** 2026-02-03 17:19:59.002571 | instance | changed: [instance] 2026-02-03 17:19:59.002677 | instance | 2026-02-03 17:19:59.002959 | instance | TASK [vexxhost.ceph.cephadm : Gather variables for each operating system] ****** 2026-02-03 17:19:59.002988 | instance | Tuesday 03 February 2026 17:19:58 +0000 (0:00:00.509) 0:00:35.536 ****** 2026-02-03 17:19:59.049498 | instance | ok: [instance] => 
(item=/home/zuul/.ansible/collections/ansible_collections/vexxhost/ceph/roles/cephadm/vars/ubuntu-22.04.yml) 2026-02-03 17:19:59.049532 | instance | 2026-02-03 17:19:59.049538 | instance | TASK [vexxhost.ceph.cephadm : Install packages] ******************************** 2026-02-03 17:19:59.049543 | instance | Tuesday 03 February 2026 17:19:59 +0000 (0:00:00.046) 0:00:35.583 ****** 2026-02-03 17:20:06.035761 | instance | changed: [instance] 2026-02-03 17:20:06.035791 | instance | 2026-02-03 17:20:06.035797 | instance | TASK [vexxhost.ceph.cephadm : Ensure services are started] ********************* 2026-02-03 17:20:06.035802 | instance | Tuesday 03 February 2026 17:20:06 +0000 (0:00:06.985) 0:00:42.569 ****** 2026-02-03 17:20:06.654395 | instance | ok: [instance] => (item=chronyd) 2026-02-03 17:20:06.654432 | instance | ok: [instance] => (item=sshd) 2026-02-03 17:20:06.654438 | instance | 2026-02-03 17:20:06.654442 | instance | TASK [vexxhost.ceph.cephadm : Download "cephadm"] ****************************** 2026-02-03 17:20:06.654447 | instance | Tuesday 03 February 2026 17:20:06 +0000 (0:00:00.619) 0:00:43.188 ****** 2026-02-03 17:20:06.929258 | instance | changed: [instance] 2026-02-03 17:20:06.929321 | instance | 2026-02-03 17:20:06.929605 | instance | TASK [vexxhost.ceph.cephadm : Remove cephadm from old path] ******************** 2026-02-03 17:20:06.929691 | instance | Tuesday 03 February 2026 17:20:06 +0000 (0:00:00.275) 0:00:43.463 ****** 2026-02-03 17:20:07.120492 | instance | ok: [instance] 2026-02-03 17:20:07.120559 | instance | 2026-02-03 17:20:07.120842 | instance | TASK [vexxhost.ceph.cephadm : Ensure "cephadm" user is present] **************** 2026-02-03 17:20:07.120875 | instance | Tuesday 03 February 2026 17:20:07 +0000 (0:00:00.191) 0:00:43.654 ****** 2026-02-03 17:20:08.100244 | instance | changed: [instance] 2026-02-03 17:20:08.100362 | instance | 2026-02-03 17:20:08.100414 | instance | TASK [vexxhost.ceph.cephadm : Allow "cephadm" user to have 
passwordless sudo] *** 2026-02-03 17:20:08.100579 | instance | Tuesday 03 February 2026 17:20:08 +0000 (0:00:00.979) 0:00:44.634 ****** 2026-02-03 17:20:08.433798 | instance | changed: [instance] 2026-02-03 17:20:08.433859 | instance | 2026-02-03 17:20:08.434147 | instance | TASK [vexxhost.ceph.mon : Get `cephadm ls` status] ***************************** 2026-02-03 17:20:08.434185 | instance | Tuesday 03 February 2026 17:20:08 +0000 (0:00:00.333) 0:00:44.967 ****** 2026-02-03 17:20:10.069395 | instance | ok: [instance] 2026-02-03 17:20:10.069436 | instance | 2026-02-03 17:20:10.069442 | instance | TASK [vexxhost.ceph.mon : Parse the `cephadm ls` output] *********************** 2026-02-03 17:20:10.069447 | instance | Tuesday 03 February 2026 17:20:10 +0000 (0:00:01.634) 0:00:46.602 ****** 2026-02-03 17:20:10.127137 | instance | ok: [instance] 2026-02-03 17:20:10.127191 | instance | 2026-02-03 17:20:10.127197 | instance | TASK [vexxhost.ceph.mon : Assimilate existing configs in `ceph.conf`] ********** 2026-02-03 17:20:10.127202 | instance | Tuesday 03 February 2026 17:20:10 +0000 (0:00:00.057) 0:00:46.660 ****** 2026-02-03 17:20:10.162691 | instance | skipping: [instance] 2026-02-03 17:20:10.162776 | instance | 2026-02-03 17:20:10.162842 | instance | TASK [vexxhost.ceph.mon : Adopt monitor to cluster] **************************** 2026-02-03 17:20:10.162962 | instance | Tuesday 03 February 2026 17:20:10 +0000 (0:00:00.036) 0:00:46.696 ****** 2026-02-03 17:20:10.191709 | instance | skipping: [instance] 2026-02-03 17:20:10.191966 | instance | 2026-02-03 17:20:10.192235 | instance | TASK [vexxhost.ceph.mon : Adopt manager to cluster] **************************** 2026-02-03 17:20:10.192499 | instance | Tuesday 03 February 2026 17:20:10 +0000 (0:00:00.028) 0:00:46.725 ****** 2026-02-03 17:20:10.225418 | instance | skipping: [instance] 2026-02-03 17:20:10.225524 | instance | 2026-02-03 17:20:10.225648 | instance | TASK [vexxhost.ceph.mon : Enable "cephadm" mgr module] 
************************* 2026-02-03 17:20:10.225765 | instance | Tuesday 03 February 2026 17:20:10 +0000 (0:00:00.033) 0:00:46.759 ****** 2026-02-03 17:20:10.255718 | instance | skipping: [instance] 2026-02-03 17:20:10.255995 | instance | 2026-02-03 17:20:10.256283 | instance | TASK [vexxhost.ceph.mon : Set orchestrator backend to "cephadm"] *************** 2026-02-03 17:20:10.256567 | instance | Tuesday 03 February 2026 17:20:10 +0000 (0:00:00.030) 0:00:46.789 ****** 2026-02-03 17:20:10.294234 | instance | skipping: [instance] 2026-02-03 17:20:10.294508 | instance | 2026-02-03 17:20:10.294846 | instance | TASK [vexxhost.ceph.mon : Use `cephadm` user for cephadm] ********************** 2026-02-03 17:20:10.295132 | instance | Tuesday 03 February 2026 17:20:10 +0000 (0:00:00.038) 0:00:46.828 ****** 2026-02-03 17:20:10.329170 | instance | skipping: [instance] 2026-02-03 17:20:10.329447 | instance | 2026-02-03 17:20:10.329738 | instance | TASK [vexxhost.ceph.mon : Generate "cephadm" key] ****************************** 2026-02-03 17:20:10.330018 | instance | Tuesday 03 February 2026 17:20:10 +0000 (0:00:00.035) 0:00:46.863 ****** 2026-02-03 17:20:10.364089 | instance | skipping: [instance] 2026-02-03 17:20:10.364460 | instance | 2026-02-03 17:20:10.365045 | instance | TASK [vexxhost.ceph.mon : Set Ceph Monitor IP address] ************************* 2026-02-03 17:20:10.365439 | instance | Tuesday 03 February 2026 17:20:10 +0000 (0:00:00.034) 0:00:46.898 ****** 2026-02-03 17:20:10.480685 | instance | ok: [instance] 2026-02-03 17:20:10.480924 | instance | 2026-02-03 17:20:10.481186 | instance | TASK [vexxhost.ceph.mon : Check if any node is bootstrapped] ******************* 2026-02-03 17:20:10.481449 | instance | Tuesday 03 February 2026 17:20:10 +0000 (0:00:00.116) 0:00:47.014 ****** 2026-02-03 17:20:10.695212 | instance | ok: [instance] => (item=instance) 2026-02-03 17:20:10.695507 | instance | 2026-02-03 17:20:10.695817 | instance | TASK [vexxhost.ceph.mon : Select 
pre-existing bootstrap node if exists] ******** 2026-02-03 17:20:10.696089 | instance | Tuesday 03 February 2026 17:20:10 +0000 (0:00:00.214) 0:00:47.228 ****** 2026-02-03 17:20:10.733863 | instance | ok: [instance] 2026-02-03 17:20:10.734192 | instance | 2026-02-03 17:20:10.734487 | instance | TASK [vexxhost.ceph.mon : Bootstrap cluster] *********************************** 2026-02-03 17:20:10.734803 | instance | Tuesday 03 February 2026 17:20:10 +0000 (0:00:00.038) 0:00:47.267 ****** 2026-02-03 17:20:10.791130 | instance | included: /home/zuul/.ansible/collections/ansible_collections/vexxhost/ceph/roles/mon/tasks/bootstrap-ceph.yml for instance 2026-02-03 17:20:10.791497 | instance | 2026-02-03 17:20:10.791878 | instance | TASK [vexxhost.ceph.mon : Generate temporary file for "ceph.conf"] ************* 2026-02-03 17:20:10.792170 | instance | Tuesday 03 February 2026 17:20:10 +0000 (0:00:00.057) 0:00:47.324 ****** 2026-02-03 17:20:11.074695 | instance | changed: [instance] 2026-02-03 17:20:11.074810 | instance | 2026-02-03 17:20:11.074938 | instance | TASK [vexxhost.ceph.mon : Include extra configuration values] ****************** 2026-02-03 17:20:11.075062 | instance | Tuesday 03 February 2026 17:20:11 +0000 (0:00:00.283) 0:00:47.608 ****** 2026-02-03 17:20:11.739354 | instance | changed: [instance] => (item={'option': 'mon allow pool size one', 'section': 'global', 'value': True}) 2026-02-03 17:20:11.739686 | instance | changed: [instance] => (item={'option': 'osd crush chooseleaf type', 'section': 'global', 'value': 0}) 2026-02-03 17:20:11.740004 | instance | changed: [instance] => (item={'option': 'auth allow insecure global id reclaim', 'section': 'mon', 'value': False}) 2026-02-03 17:20:11.740280 | instance | 2026-02-03 17:20:11.740573 | instance | TASK [vexxhost.ceph.mon : Run Bootstrap coomand] ******************************* 2026-02-03 17:20:11.740869 | instance | Tuesday 03 February 2026 17:20:11 +0000 (0:00:00.664) 0:00:48.272 ****** 2026-02-03 
17:21:30.773915 | instance | fatal: [instance]: FAILED! => {"changed": false, "cmd": ["cephadm", "bootstrap", "--fsid", "4837cbf8-4f90-4300-b3f6-726c9b9f89b4", "--mon-ip", "162.253.55.195", "--cluster-network", "0.0.0.0/0", "--ssh-user", "cephadm", "--config", "/tmp/ceph_p5fste18.conf", "--skip-monitoring-stack"], "delta": "0:01:18.850601", "end": "2026-02-03 17:21:30.753351", "msg": "non-zero return code", "rc": 1, "start": "2026-02-03 17:20:11.902750", "stderr": "Specifying an fsid for your cluster offers no advantages and may increase the likelihood of fsid conflicts.\nThe cluster CIDR network 0.0.0.0/0 is not configured locally.\nError: Failed to add host : Failed command: /usr/bin/docker run --rm --ipc=host --stop-signal=SIGTERM --ulimit nofile=1048576 --net=host --entrypoint /usr/bin/ceph --init -e CONTAINER_IMAGE=quay.io/ceph/ceph:v18.2.1 -e NODE_NAME=instance -e CEPH_USE_RANDOM_NONCE=1 -v /var/log/ceph/4837cbf8-4f90-4300-b3f6-726c9b9f89b4:/var/log/ceph:z -v /tmp/ceph-tmprcnvhq1b:/etc/ceph/ceph.client.admin.keyring:z -v /tmp/ceph-tmp8p0dpo11:/etc/ceph/ceph.conf:z quay.io/ceph/ceph:v18.2.1 orch host add instance 162.253.55.195\nERROR: Failed to add host : Failed command: /usr/bin/docker run --rm --ipc=host --stop-signal=SIGTERM --ulimit nofile=1048576 --net=host --entrypoint /usr/bin/ceph --init -e CONTAINER_IMAGE=quay.io/ceph/ceph:v18.2.1 -e NODE_NAME=instance -e CEPH_USE_RANDOM_NONCE=1 -v /var/log/ceph/4837cbf8-4f90-4300-b3f6-726c9b9f89b4:/var/log/ceph:z -v /tmp/ceph-tmprcnvhq1b:/etc/ceph/ceph.client.admin.keyring:z -v /tmp/ceph-tmp8p0dpo11:/etc/ceph/ceph.conf:z quay.io/ceph/ceph:v18.2.1 orch host add instance 162.253.55.195", "stderr_lines": ["Specifying an fsid for your cluster offers no advantages and may increase the likelihood of fsid conflicts.", "The cluster CIDR network 0.0.0.0/0 is not configured locally.", "Error: Failed to add host : Failed command: /usr/bin/docker run --rm --ipc=host --stop-signal=SIGTERM --ulimit nofile=1048576 --net=host 
--entrypoint /usr/bin/ceph --init -e CONTAINER_IMAGE=quay.io/ceph/ceph:v18.2.1 -e NODE_NAME=instance -e CEPH_USE_RANDOM_NONCE=1 -v /var/log/ceph/4837cbf8-4f90-4300-b3f6-726c9b9f89b4:/var/log/ceph:z -v /tmp/ceph-tmprcnvhq1b:/etc/ceph/ceph.client.admin.keyring:z -v /tmp/ceph-tmp8p0dpo11:/etc/ceph/ceph.conf:z quay.io/ceph/ceph:v18.2.1 orch host add instance 162.253.55.195", "ERROR: Failed to add host : Failed command: /usr/bin/docker run --rm --ipc=host --stop-signal=SIGTERM --ulimit nofile=1048576 --net=host --entrypoint /usr/bin/ceph --init -e CONTAINER_IMAGE=quay.io/ceph/ceph:v18.2.1 -e NODE_NAME=instance -e CEPH_USE_RANDOM_NONCE=1 -v /var/log/ceph/4837cbf8-4f90-4300-b3f6-726c9b9f89b4:/var/log/ceph:z -v /tmp/ceph-tmprcnvhq1b:/etc/ceph/ceph.client.admin.keyring:z -v /tmp/ceph-tmp8p0dpo11:/etc/ceph/ceph.conf:z quay.io/ceph/ceph:v18.2.1 orch host add instance 162.253.55.195"], "stdout": "Creating directory /etc/ceph for ceph.conf\nVerifying ssh connectivity using standard pubkey authentication ...\nAdding key to cephadm@localhost authorized_keys...\nVerifying podman|docker is present...\nVerifying lvm2 is present...\nVerifying time synchronization is in place...\nUnit chrony.service is enabled and running\nRepeating the final host check...\ndocker (/usr/bin/docker) is present\nsystemctl is present\nlvcreate is present\nUnit chrony.service is enabled and running\nHost looks OK\nCluster fsid: 4837cbf8-4f90-4300-b3f6-726c9b9f89b4\nVerifying IP 162.253.55.195 port 3300 ...\nVerifying IP 162.253.55.195 port 6789 ...\nMon IP `162.253.55.195` is in CIDR network `162.253.52.0/24`\nMon IP `162.253.55.195` is in CIDR network `162.253.52.0/24`\nMon IP `162.253.55.195` is in CIDR network `162.253.53.0/24`\nMon IP `162.253.55.195` is in CIDR network `162.253.53.0/24`\nMon IP `162.253.55.195` is in CIDR network `162.253.54.0/24`\nMon IP `162.253.55.195` is in CIDR network `162.253.54.0/24`\nMon IP `162.253.55.195` is in CIDR network `162.253.55.0/24`\nMon IP `162.253.55.195` is in 
CIDR network `162.253.55.0/24`\nMon IP `162.253.55.195` is in CIDR network `162.253.55.1/32`\nMon IP `162.253.55.195` is in CIDR network `162.253.55.1/32`\nMon IP `162.253.55.195` is in CIDR network `162.253.55.103/32`\nMon IP `162.253.55.195` is in CIDR network `162.253.55.103/32`\nMon IP `162.253.55.195` is in CIDR network `199.19.212.0/24`\nMon IP `162.253.55.195` is in CIDR network `199.19.212.0/24`\nMon IP `162.253.55.195` is in CIDR network `199.19.213.0/24`\nMon IP `162.253.55.195` is in CIDR network `199.19.213.0/24`\nMon IP `162.253.55.195` is in CIDR network `199.19.214.0/24`\nMon IP `162.253.55.195` is in CIDR network `199.19.214.0/24`\nMon IP `162.253.55.195` is in CIDR network `199.19.215.0/24`\nMon IP `162.253.55.195` is in CIDR network `199.19.215.0/24`\nMon IP `162.253.55.195` is in CIDR network `199.204.45.0/24`\nMon IP `162.253.55.195` is in CIDR network `199.204.45.0/24`\nMon IP `162.253.55.195` is in CIDR network `199.204.46.0/24`\nMon IP `162.253.55.195` is in CIDR network `199.204.46.0/24`\nPulling container image quay.io/ceph/ceph:v18.2.1...\nCeph version: ceph version 18.2.1 (7fe91d5d5842e04be3b4f514d6dd990c54b29c76) reef (stable)\nExtracting ceph user uid/gid from container image...\nCreating initial keys...\nCreating initial monmap...\nCreating mon...\nWaiting for mon to start...\nWaiting for mon...\nmon is available\nAssimilating anything we can from ceph.conf...\nGenerating new minimal ceph.conf...\nRestarting the monitor...\nSetting public_network to 162.253.55.1/32,162.253.55.0/24,162.253.53.0/24,199.19.212.0/24,162.253.54.0/24,199.19.213.0/24,199.19.214.0/24,199.204.46.0/24,162.253.52.0/24,162.253.55.103/32,199.19.215.0/24,199.204.45.0/24 in mon config section\nSetting cluster_network to 0.0.0.0/0\nWrote config to /etc/ceph/ceph.conf\nWrote keyring to /etc/ceph/ceph.client.admin.keyring\nCreating mgr...\nVerifying port 0.0.0.0:9283 ...\nVerifying port 0.0.0.0:8765 ...\nWaiting for mgr to start...\nWaiting for mgr...\nmgr not 
available, waiting (1/15)...\nmgr not available, waiting (2/15)...\nmgr not available, waiting (3/15)...\nmgr is available\nEnabling cephadm module...\nWaiting for the mgr to restart...\nWaiting for mgr epoch 5...\nmgr epoch 5 is available\nSetting orchestrator backend to cephadm...\nGenerating ssh key...\nWrote public SSH key to /etc/ceph/ceph.pub\nAdding key to cephadm@localhost authorized_keys...\nAdding host instance...\nNon-zero exit code 22 from /usr/bin/docker run --rm --ipc=host --stop-signal=SIGTERM --ulimit nofile=1048576 --net=host --entrypoint /usr/bin/ceph --init -e CONTAINER_IMAGE=quay.io/ceph/ceph:v18.2.1 -e NODE_NAME=instance -e CEPH_USE_RANDOM_NONCE=1 -v /var/log/ceph/4837cbf8-4f90-4300-b3f6-726c9b9f89b4:/var/log/ceph:z -v /tmp/ceph-tmprcnvhq1b:/etc/ceph/ceph.client.admin.keyring:z -v /tmp/ceph-tmp8p0dpo11:/etc/ceph/ceph.conf:z quay.io/ceph/ceph:v18.2.1 orch host add instance 162.253.55.195\n/usr/bin/ceph: stderr Error EINVAL: check-host failed:\n/usr/bin/ceph: stderr Unable to write instance:/var/lib/ceph/4837cbf8-4f90-4300-b3f6-726c9b9f89b4/cephadm.8c89112927b45a1984d03fb02785df709234bdb856619c217e1ad5d54aebef2b: Failed to connect to instance (162.253.55.195). Connection lost\n/usr/bin/ceph: stderr Log: Opening SSH connection to 162.253.55.195, port 22\n/usr/bin/ceph: stderr [conn=5] Connected to SSH server at 162.253.55.195, port 22\n/usr/bin/ceph: stderr [conn=5] Local address: 162.253.55.195, port 34296\n/usr/bin/ceph: stderr [conn=5] Peer address: 162.253.55.195, port 22\n/usr/bin/ceph: stderr [conn=5] Connection lost\n/usr/bin/ceph: stderr [conn=5] Aborting connection\n/usr/bin/ceph: stderr \n\n\n\t***************\n\tCephadm hit an issue during cluster installation. Current cluster files will NOT BE DELETED automatically to change\n\tthis behaviour you can pass the --cleanup-on-failure. 
To remove this broken cluster manually please run:\n\n\t > cephadm rm-cluster --force --fsid 4837cbf8-4f90-4300-b3f6-726c9b9f89b4\n\n\tin case of any previous broken installation user must use the rm-cluster command to delete the broken cluster:\n\n\t > cephadm rm-cluster --force --zap-osds --fsid \n\n\tfor more information please refer to https://docs.ceph.com/en/latest/cephadm/operations/#purging-a-cluster\n\t***************", "stdout_lines": ["Creating directory /etc/ceph for ceph.conf", "Verifying ssh connectivity using standard pubkey authentication ...", "Adding key to cephadm@localhost authorized_keys...", "Verifying podman|docker is present...", "Verifying lvm2 is present...", "Verifying time synchronization is in place...", "Unit chrony.service is enabled and running", "Repeating the final host check...", "docker (/usr/bin/docker) is present", "systemctl is present", "lvcreate is present", "Unit chrony.service is enabled and running", "Host looks OK", "Cluster fsid: 4837cbf8-4f90-4300-b3f6-726c9b9f89b4", "Verifying IP 162.253.55.195 port 3300 ...", "Verifying IP 162.253.55.195 port 6789 ...", "Mon IP `162.253.55.195` is in CIDR network `162.253.52.0/24`", "Mon IP `162.253.55.195` is in CIDR network `162.253.52.0/24`", "Mon IP `162.253.55.195` is in CIDR network `162.253.53.0/24`", "Mon IP `162.253.55.195` is in CIDR network `162.253.53.0/24`", "Mon IP `162.253.55.195` is in CIDR network `162.253.54.0/24`", "Mon IP `162.253.55.195` is in CIDR network `162.253.54.0/24`", "Mon IP `162.253.55.195` is in CIDR network `162.253.55.0/24`", "Mon IP `162.253.55.195` is in CIDR network `162.253.55.0/24`", "Mon IP `162.253.55.195` is in CIDR network `162.253.55.1/32`", "Mon IP `162.253.55.195` is in CIDR network `162.253.55.1/32`", "Mon IP `162.253.55.195` is in CIDR network `162.253.55.103/32`", "Mon IP `162.253.55.195` is in CIDR network `162.253.55.103/32`", "Mon IP `162.253.55.195` is in CIDR network `199.19.212.0/24`", "Mon IP `162.253.55.195` is in CIDR network 
`199.19.212.0/24`", "Mon IP `162.253.55.195` is in CIDR network `199.19.213.0/24`", "Mon IP `162.253.55.195` is in CIDR network `199.19.213.0/24`", "Mon IP `162.253.55.195` is in CIDR network `199.19.214.0/24`", "Mon IP `162.253.55.195` is in CIDR network `199.19.214.0/24`", "Mon IP `162.253.55.195` is in CIDR network `199.19.215.0/24`", "Mon IP `162.253.55.195` is in CIDR network `199.19.215.0/24`", "Mon IP `162.253.55.195` is in CIDR network `199.204.45.0/24`", "Mon IP `162.253.55.195` is in CIDR network `199.204.45.0/24`", "Mon IP `162.253.55.195` is in CIDR network `199.204.46.0/24`", "Mon IP `162.253.55.195` is in CIDR network `199.204.46.0/24`", "Pulling container image quay.io/ceph/ceph:v18.2.1...", "Ceph version: ceph version 18.2.1 (7fe91d5d5842e04be3b4f514d6dd990c54b29c76) reef (stable)", "Extracting ceph user uid/gid from container image...", "Creating initial keys...", "Creating initial monmap...", "Creating mon...", "Waiting for mon to start...", "Waiting for mon...", "mon is available", "Assimilating anything we can from ceph.conf...", "Generating new minimal ceph.conf...", "Restarting the monitor...", "Setting public_network to 162.253.55.1/32,162.253.55.0/24,162.253.53.0/24,199.19.212.0/24,162.253.54.0/24,199.19.213.0/24,199.19.214.0/24,199.204.46.0/24,162.253.52.0/24,162.253.55.103/32,199.19.215.0/24,199.204.45.0/24 in mon config section", "Setting cluster_network to 0.0.0.0/0", "Wrote config to /etc/ceph/ceph.conf", "Wrote keyring to /etc/ceph/ceph.client.admin.keyring", "Creating mgr...", "Verifying port 0.0.0.0:9283 ...", "Verifying port 0.0.0.0:8765 ...", "Waiting for mgr to start...", "Waiting for mgr...", "mgr not available, waiting (1/15)...", "mgr not available, waiting (2/15)...", "mgr not available, waiting (3/15)...", "mgr is available", "Enabling cephadm module...", "Waiting for the mgr to restart...", "Waiting for mgr epoch 5...", "mgr epoch 5 is available", "Setting orchestrator backend to cephadm...", "Generating ssh key...", "Wrote 
public SSH key to /etc/ceph/ceph.pub", "Adding key to cephadm@localhost authorized_keys...", "Adding host instance...", "Non-zero exit code 22 from /usr/bin/docker run --rm --ipc=host --stop-signal=SIGTERM --ulimit nofile=1048576 --net=host --entrypoint /usr/bin/ceph --init -e CONTAINER_IMAGE=quay.io/ceph/ceph:v18.2.1 -e NODE_NAME=instance -e CEPH_USE_RANDOM_NONCE=1 -v /var/log/ceph/4837cbf8-4f90-4300-b3f6-726c9b9f89b4:/var/log/ceph:z -v /tmp/ceph-tmprcnvhq1b:/etc/ceph/ceph.client.admin.keyring:z -v /tmp/ceph-tmp8p0dpo11:/etc/ceph/ceph.conf:z quay.io/ceph/ceph:v18.2.1 orch host add instance 162.253.55.195", "/usr/bin/ceph: stderr Error EINVAL: check-host failed:", "/usr/bin/ceph: stderr Unable to write instance:/var/lib/ceph/4837cbf8-4f90-4300-b3f6-726c9b9f89b4/cephadm.8c89112927b45a1984d03fb02785df709234bdb856619c217e1ad5d54aebef2b: Failed to connect to instance (162.253.55.195). Connection lost", "/usr/bin/ceph: stderr Log: Opening SSH connection to 162.253.55.195, port 22", "/usr/bin/ceph: stderr [conn=5] Connected to SSH server at 162.253.55.195, port 22", "/usr/bin/ceph: stderr [conn=5] Local address: 162.253.55.195, port 34296", "/usr/bin/ceph: stderr [conn=5] Peer address: 162.253.55.195, port 22", "/usr/bin/ceph: stderr [conn=5] Connection lost", "/usr/bin/ceph: stderr [conn=5] Aborting connection", "/usr/bin/ceph: stderr ", "", "", "\t***************", "\tCephadm hit an issue during cluster installation. Current cluster files will NOT BE DELETED automatically to change", "\tthis behaviour you can pass the --cleanup-on-failure. 
To remove this broken cluster manually please run:", "", "\t > cephadm rm-cluster --force --fsid 4837cbf8-4f90-4300-b3f6-726c9b9f89b4", "", "\tin case of any previous broken installation user must use the rm-cluster command to delete the broken cluster:", "", "\t > cephadm rm-cluster --force --zap-osds --fsid ", "", "\tfor more information please refer to https://docs.ceph.com/en/latest/cephadm/operations/#purging-a-cluster", "\t***************"]}
2026-02-03 17:21:30.787675 | instance |
2026-02-03 17:21:30.788097 | instance | TASK [vexxhost.ceph.mon : Remove temporary file for "ceph.conf"] ***************
2026-02-03 17:21:30.788423 | instance | Tuesday 03 February 2026 17:21:30 +0000 (0:01:19.048) 0:02:07.321 ******
2026-02-03 17:21:30.965890 | instance | changed: [instance]
2026-02-03 17:21:30.965945 | instance |
2026-02-03 17:21:30.966066 | instance | PLAY RECAP *********************************************************************
2026-02-03 17:21:30.966233 | instance | instance : ok=50 changed=26 unreachable=0 failed=1 skipped=13 rescued=0 ignored=0
2026-02-03 17:21:30.966319 | instance |
2026-02-03 17:21:30.966444 | instance | Tuesday 03 February 2026 17:21:30 +0000 (0:00:00.178) 0:02:07.500 ******
2026-02-03 17:21:30.966643 | instance | ===============================================================================
2026-02-03 17:21:30.966759 | instance | vexxhost.ceph.mon : Run Bootstrap coomand ------------------------------ 79.05s
2026-02-03 17:21:30.966880 | instance | vexxhost.ceph.cephadm : Install packages -------------------------------- 6.99s
2026-02-03 17:21:30.967049 | instance | vexxhost.containers.containerd : Install AppArmor packages -------------- 5.37s
2026-02-03 17:21:30.967172 | instance | vexxhost.containers.download_artifact : Extract archive ----------------- 4.55s
2026-02-03 17:21:30.967306 | instance | vexxhost.containers.docker : Restart docker ----------------------------- 4.07s
2026-02-03 17:21:30.967471 | instance | vexxhost.containers.docker : Ensure group "docker" exists --------------- 2.90s
2026-02-03 17:21:30.967635 | instance | vexxhost.containers.download_artifact : Extract archive ----------------- 2.84s
2026-02-03 17:21:30.967903 | instance | vexxhost.containers.containerd : Reload systemd ------------------------- 1.73s
2026-02-03 17:21:30.968084 | instance | vexxhost.ceph.mon : Get `cephadm ls` status ----------------------------- 1.63s
2026-02-03 17:21:30.968251 | instance | vexxhost.containers.package : Update state for tar ---------------------- 1.11s
2026-02-03 17:21:30.968413 | instance | vexxhost.containers.download_artifact : Download item ------------------- 1.01s
2026-02-03 17:21:30.968578 | instance | vexxhost.ceph.cephadm : Ensure "cephadm" user is present ---------------- 0.98s
2026-02-03 17:21:30.968741 | instance | Gathering Facts --------------------------------------------------------- 0.96s
2026-02-03 17:21:30.968898 | instance | vexxhost.containers.docker : Install AppArmor packages ------------------ 0.95s
2026-02-03 17:21:30.969061 | instance | vexxhost.containers.download_artifact : Download item ------------------- 0.90s
2026-02-03 17:21:30.969239 | instance | Gathering Facts --------------------------------------------------------- 0.86s
2026-02-03 17:21:30.969413 | instance | vexxhost.containers.containerd : Create folders for configuration ------- 0.84s
2026-02-03 17:21:30.969577 | instance | Gathering Facts --------------------------------------------------------- 0.81s
2026-02-03 17:21:30.969741 | instance | vexxhost.containers.download_artifact : Download item ------------------- 0.79s
2026-02-03 17:21:30.969904 | instance | vexxhost.ceph.mon : Include extra configuration values ------------------ 0.66s
2026-02-03 17:21:31.055090 | instance | CRITICAL Ansible return code was 2, command was: ansible-playbook --inventory /home/zuul/.ansible/tmp/molecule.v9Wo.csi/inventory --skip-tags molecule-notest,notest --inventory=/home/zuul/src/github.com/vexxhost/atmosphere/inventory.yaml /home/zuul/src/github.com/vexxhost/atmosphere/molecule/csi/converge.yml
2026-02-03 17:21:31.055136 | instance | ERROR [csi > converge] Executed: Failed
2026-02-03 17:21:31.055224 | instance | ERROR Ansible return code was 2, command was: ansible-playbook --inventory /home/zuul/.ansible/tmp/molecule.v9Wo.csi/inventory --skip-tags molecule-notest,notest --inventory=/home/zuul/src/github.com/vexxhost/atmosphere/inventory.yaml /home/zuul/src/github.com/vexxhost/atmosphere/molecule/csi/converge.yml
2026-02-03 17:21:31.327726 | instance | ERROR
2026-02-03 17:21:31.328061 | instance | {
2026-02-03 17:21:31.328126 | instance |   "delta": "0:03:44.307624",
2026-02-03 17:21:31.328173 | instance |   "end": "2026-02-03 17:21:31.119989",
2026-02-03 17:21:31.328213 | instance |   "msg": "non-zero return code",
2026-02-03 17:21:31.328251 | instance |   "rc": 2,
2026-02-03 17:21:31.328294 | instance |   "start": "2026-02-03 17:17:46.812365"
2026-02-03 17:21:31.328333 | instance | } failure
2026-02-03 17:21:31.336086 |
2026-02-03 17:21:31.336138 | PLAY RECAP
2026-02-03 17:21:31.336187 | instance | ok: 2 changed: 2 unreachable: 0 failed: 1 skipped: 0 rescued: 0 ignored: 0
2026-02-03 17:21:31.336209 |
2026-02-03 17:21:31.467352 | RUN END RESULT_NORMAL: [untrusted : github.com/vexxhost/zuul-jobs/playbooks/molecule/run.yaml@main]
2026-02-03 17:21:31.471825 | POST-RUN START: [untrusted : github.com/vexxhost/atmosphere/test-playbooks/molecule/post.yml@stable/2024.1]
2026-02-03 17:21:32.068354 |
2026-02-03 17:21:32.068509 | PLAY [all]
2026-02-03 17:21:32.083087 |
2026-02-03 17:21:32.083175 | TASK [gather-host-logs : creating directory for system status]
2026-02-03 17:21:32.408750 | instance | changed
2026-02-03 17:21:32.415813 |
2026-02-03 17:21:32.415917 | TASK [gather-host-logs : Get logs for each host]
2026-02-03 17:21:32.839249 | instance | + systemd-cgls --full --all --no-pager
2026-02-03 17:21:32.848497 | instance | + ip addr
2026-02-03 17:21:32.850243 | instance | + ip route
2026-02-03 17:21:32.851885 | instance | + lsblk
2026-02-03 17:21:32.854418 | instance | + mount
2026-02-03 17:21:32.855839 | instance | + docker images
2026-02-03 17:21:32.869621 | instance | + brctl show
2026-02-03 17:21:32.870047 | instance | /bin/bash: line 8: brctl: command not found
2026-02-03 17:21:32.870297 | instance | + ps aux --sort=-%mem
2026-02-03 17:21:32.882495 | instance | + dpkg -l
2026-02-03 17:21:32.888491 | instance | + CONTAINERS=($(docker ps -a --format '{{ .Names }}' --filter label=zuul))
2026-02-03 17:21:32.888848 | instance | ++ docker ps -a --format '{{ .Names }}' --filter label=zuul
2026-02-03 17:21:32.902632 | instance | + '[' '!' -z '' ']'
2026-02-03 17:21:32.952116 | instance | ok: Runtime: 0:00:00.068072
2026-02-03 17:21:32.960197 |
2026-02-03 17:21:32.960286 | TASK [gather-host-logs : Downloads logs to executor]
2026-02-03 17:21:33.584307 | instance | changed:
2026-02-03 17:21:33.584487 | instance | created directory /var/lib/zuul/builds/f47942460ac2482e84103b318d29b633/work/logs/instance
2026-02-03 17:21:33.584515 | instance | cd+++++++++ system/
2026-02-03 17:21:33.584536 | instance | >f+++++++++ system/brctl-show.txt
2026-02-03 17:21:33.584557 | instance | >f+++++++++ system/docker-images.txt
2026-02-03 17:21:33.584576 | instance | >f+++++++++ system/ip-addr.txt
2026-02-03 17:21:33.584597 | instance | >f+++++++++ system/ip-route.txt
2026-02-03 17:21:33.584617 | instance | >f+++++++++ system/lsblk.txt
2026-02-03 17:21:33.584636 | instance | >f+++++++++ system/mount.txt
2026-02-03 17:21:33.584654 | instance | >f+++++++++ system/packages.txt
2026-02-03 17:21:33.584672 | instance | >f+++++++++ system/ps.txt
2026-02-03 17:21:33.584693 | instance | >f+++++++++ system/systemd-cgls.txt
2026-02-03 17:21:33.594419 |
2026-02-03 17:21:33.594485 | LOOP [helm-release-status : creating directory for helm release status]
2026-02-03 17:21:33.795945 | instance | changed: "values"
2026-02-03 17:21:33.952069 | instance | changed: "releases"
2026-02-03 17:21:33.969200 |
2026-02-03 17:21:33.969553 | TASK [helm-release-status : Gather get release status for helm charts]
2026-02-03 17:21:34.174205 | instance | /bin/bash: line 3: kubectl: command not found
2026-02-03 17:21:34.510699 | instance | ok: Runtime: 0:00:00.004546
2026-02-03 17:21:34.518486 |
2026-02-03 17:21:34.518584 | TASK [helm-release-status : Downloads logs to executor]
2026-02-03 17:21:34.988146 | instance | changed:
2026-02-03 17:21:34.988352 | instance | cd+++++++++ helm/
2026-02-03 17:21:34.988407 | instance | cd+++++++++ helm/releases/
2026-02-03 17:21:34.988471 | instance | cd+++++++++ helm/values/
2026-02-03 17:21:35.000081 |
2026-02-03 17:21:35.000145 | TASK [describe-kubernetes-objects : creating directory for cluster scoped objects]
2026-02-03 17:21:35.205207 | instance | changed
2026-02-03 17:21:35.212008 |
2026-02-03 17:21:35.212157 | TASK [describe-kubernetes-objects : Gathering descriptions for cluster scoped objects]
2026-02-03 17:21:35.421352 | instance | xargs: warning: options --max-args and --replace/-I/-i are mutually exclusive, ignoring previous --max-args value
2026-02-03 17:21:35.421576 | instance | xargs: warning: options --max-args and --replace/-I/-i are mutually exclusive, ignoring previous --max-args value
2026-02-03 17:21:35.424648 | instance | environment: line 1: kubectl: command not found
2026-02-03 17:21:35.426190 | instance | environment: line 1: kubectl: command not found
2026-02-03 17:21:35.426487 | instance | xargs: warning: options --max-lines and --replace/-I/-i are mutually exclusive, ignoring previous --max-lines value
2026-02-03 17:21:35.427499 | instance | xargs: warning: options --max-lines and --replace/-I/-i are mutually exclusive, ignoring previous --max-lines value
2026-02-03 17:21:35.429730 | instance | environment: line 1: kubectl: command not found
2026-02-03 17:21:35.430699 | instance | xargs: warning: options --max-lines and --replace/-I/-i are mutually exclusive, ignoring previous --max-lines value
2026-02-03 17:21:35.430742 | instance | environment: line 1: kubectl: command not found
2026-02-03 17:21:35.431394 | instance | xargs: warning: options --max-lines and --replace/-I/-i are mutually exclusive, ignoring previous --max-lines value
2026-02-03 17:21:35.432605 | instance | environment: line 1: kubectl: command not found
2026-02-03 17:21:35.433653 | instance | xargs: warning: options --max-lines and --replace/-I/-i are mutually exclusive, ignoring previous --max-lines value
2026-02-03 17:21:35.751216 | instance | ok: Runtime: 0:00:00.018459
2026-02-03 17:21:35.758781 |
2026-02-03 17:21:35.758872 | TASK [describe-kubernetes-objects : creating directory for namespace scoped objects]
2026-02-03 17:21:35.965096 | instance | changed
2026-02-03 17:21:35.973276 |
2026-02-03 17:21:35.973381 | TASK [describe-kubernetes-objects : Gathering descriptions for namespace scoped objects]
2026-02-03 17:21:36.219742 | instance | xargs: warning: options --max-args and --replace/-I/-i are mutually exclusive, ignoring previous --max-args value
2026-02-03 17:21:36.221139 | instance | xargs: warning: options --max-args and --replace/-I/-i are mutually exclusive, ignoring previous --max-args value
2026-02-03 17:21:36.221444 | instance | environment: line 5: kubectl: command not found
2026-02-03 17:21:36.222335 | instance | xargs: warning: options --max-args and --replace/-I/-i are mutually exclusive, ignoring previous --max-args value
2026-02-03 17:21:36.547631 | instance | ok: Runtime: 0:00:00.012096
2026-02-03 17:21:36.554704 |
2026-02-03 17:21:36.554790 | TASK [describe-kubernetes-objects : Downloads logs to executor]
2026-02-03 17:21:37.036084 | instance | changed:
2026-02-03 17:21:37.036363 | instance | cd+++++++++ objects/
2026-02-03 17:21:37.036395 | instance | cd+++++++++ objects/cluster/
2026-02-03 17:21:37.036415 | instance | cd+++++++++ objects/namespaced/
2026-02-03 17:21:37.045068 |
2026-02-03 17:21:37.045133 | TASK [gather-pod-logs : creating directory for pod logs]
2026-02-03 17:21:37.250941 | instance | changed
2026-02-03 17:21:37.257454 |
2026-02-03 17:21:37.257522 | TASK [gather-pod-logs : creating directory for failed pod logs]
2026-02-03 17:21:37.456293 | instance | changed
2026-02-03 17:21:37.462575 |
2026-02-03 17:21:37.462705 | TASK [gather-pod-logs : retrieve all kubernetes logs, current and previous (if they exist)]
2026-02-03 17:21:37.690825 | instance | environment: line 3: kubectl: command not found
2026-02-03 17:21:38.001315 | instance | ok: Runtime: 0:00:00.006959
2026-02-03 17:21:38.008221 |
2026-02-03 17:21:38.008357 | TASK [gather-pod-logs : Downloads pod logs to executor]
2026-02-03 17:21:38.478699 | instance | changed:
2026-02-03 17:21:38.479001 | instance | cd+++++++++ pod-logs/
2026-02-03 17:21:38.479113 | instance | cd+++++++++ pod-logs/failed-pods/
2026-02-03 17:21:38.488080 |
2026-02-03 17:21:38.488147 | TASK [gather-prom-metrics : creating directory for helm release descriptions]
2026-02-03 17:21:38.693520 | instance | changed
2026-02-03 17:21:38.700447 |
2026-02-03 17:21:38.700541 | TASK [gather-prom-metrics : Get metrics from exporter services in all namespaces]
2026-02-03 17:21:38.910068 | instance | /bin/bash: line 2: kubectl: command not found
2026-02-03 17:21:39.240566 | instance | ok: Runtime: 0:00:00.031751
2026-02-03 17:21:39.247713 |
2026-02-03 17:21:39.247827 | TASK [gather-prom-metrics : Get ceph metrics from ceph-mgr]
2026-02-03 17:21:39.455793 | instance | /bin/bash: line 2: kubectl: command not found
2026-02-03 17:21:39.483163 | instance | ceph-mgr endpoints:
2026-02-03 17:21:39.783808 | instance | ok: Runtime: 0:00:00.033259
2026-02-03 17:21:39.790693 |
2026-02-03 17:21:39.790762 | TASK [gather-prom-metrics : Get metrics from fluentd pods]
2026-02-03 17:21:39.992940 | instance | /bin/bash: line 4: kubectl: command not found
2026-02-03 17:21:40.326815 | instance | ok: Runtime: 0:00:00.030295
2026-02-03 17:21:40.333665 |
2026-02-03 17:21:40.333729 | TASK [gather-prom-metrics : Downloads logs to executor]
2026-02-03 17:21:40.811446 | instance | changed: cd+++++++++ prometheus/
2026-02-03 17:21:40.821038 |
2026-02-03 17:21:40.821101 | TASK [gather-selenium-data : creating directory for helm release descriptions]
2026-02-03 17:21:41.025858 | instance | changed
2026-02-03 17:21:41.034310 |
2026-02-03 17:21:41.034413 | TASK [gather-selenium-data : Get selenium data]
2026-02-03 17:21:41.242276 | instance | + cp '/tmp/artifacts/*' /tmp/logs/selenium/.
2026-02-03 17:21:41.243883 | instance | cp: cannot stat '/tmp/artifacts/*': No such file or directory
2026-02-03 17:21:41.580057 | instance | ERROR
2026-02-03 17:21:41.580312 | instance | {
2026-02-03 17:21:41.580351 | instance |   "delta": "0:00:00.006786",
2026-02-03 17:21:41.580378 | instance |   "end": "2026-02-03 17:21:41.244195",
2026-02-03 17:21:41.580403 | instance |   "msg": "non-zero return code",
2026-02-03 17:21:41.580427 | instance |   "rc": 1,
2026-02-03 17:21:41.580449 | instance |   "start": "2026-02-03 17:21:41.237409"
2026-02-03 17:21:41.580471 | instance | }
2026-02-03 17:21:41.580500 | instance | ERROR: Ignoring Errors
2026-02-03 17:21:41.587021 |
2026-02-03 17:21:41.587104 | TASK [gather-selenium-data : Downloads logs to executor]
2026-02-03 17:21:42.068940 | instance | changed: cd+++++++++ selenium/
2026-02-03 17:21:42.075062 |
2026-02-03 17:21:42.075112 | PLAY RECAP
2026-02-03 17:21:42.075159 | instance | ok: 23 changed: 23 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 1
2026-02-03 17:21:42.075181 |
2026-02-03 17:21:42.207891 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/vexxhost/atmosphere/test-playbooks/molecule/post.yml@stable/2024.1]
2026-02-03 17:21:42.212055 | POST-RUN START: [trusted : vexxhost.dev/zuul-config/playbooks/base/post.yaml@main]
2026-02-03 17:21:42.812464 |
2026-02-03 17:21:42.812680 | PLAY [all]
2026-02-03 17:21:42.826233 |
2026-02-03 17:21:42.826307 | TASK [fetch-output : Set log path for multiple nodes]
2026-02-03 17:21:42.872850 | instance | skipping: Conditional result was False
2026-02-03 17:21:42.883995 |
2026-02-03 17:21:42.884111 | TASK [fetch-output : Set log path for single node]
2026-02-03 17:21:42.920764 | instance | ok
2026-02-03 17:21:42.928236 |
2026-02-03 17:21:42.928349 | LOOP [fetch-output : Ensure local output dirs]
2026-02-03 17:21:43.302139 | instance -> localhost | ok: "/var/lib/zuul/builds/f47942460ac2482e84103b318d29b633/work/logs"
2026-02-03 17:21:43.517373 | instance -> localhost | changed: "/var/lib/zuul/builds/f47942460ac2482e84103b318d29b633/work/artifacts"
2026-02-03 17:21:43.727814 | instance -> localhost | changed: "/var/lib/zuul/builds/f47942460ac2482e84103b318d29b633/work/docs"
2026-02-03 17:21:43.746962 |
2026-02-03 17:21:43.747102 | LOOP [fetch-output : Collect logs, artifacts and docs]
2026-02-03 17:21:44.374956 | instance | changed: .d..t...... ./
2026-02-03 17:21:44.375192 | instance | changed: All items complete
2026-02-03 17:21:44.375221 |
2026-02-03 17:21:44.832157 | instance | changed: .d..t...... ./
2026-02-03 17:21:45.268036 | instance | changed: .d..t...... ./
2026-02-03 17:21:45.286367 |
2026-02-03 17:21:45.286507 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2026-02-03 17:21:45.724103 | instance -> localhost | ok: Item: artifacts Runtime: 0:00:00.008525
2026-02-03 17:21:45.962831 | instance -> localhost | ok: Item: docs Runtime: 0:00:00.008068
2026-02-03 17:21:45.985615 |
2026-02-03 17:21:45.985724 | PLAY [all]
2026-02-03 17:21:45.991507 |
2026-02-03 17:21:45.991572 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2026-02-03 17:21:46.403100 | instance | changed
2026-02-03 17:21:46.411911 |
2026-02-03 17:21:46.411970 | PLAY RECAP
2026-02-03 17:21:46.412019 | instance | ok: 5 changed: 4 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2026-02-03 17:21:46.412043 |
2026-02-03 17:21:46.537654 | POST-RUN END RESULT_NORMAL: [trusted : vexxhost.dev/zuul-config/playbooks/base/post.yaml@main]
2026-02-03 17:21:46.542256 | POST-RUN START: [trusted : vexxhost.dev/zuul-config/playbooks/base/post-logs.yaml@main]
2026-02-03 17:21:47.156977 |
2026-02-03 17:21:47.157113 | PLAY [localhost]
2026-02-03 17:21:47.175916 |
2026-02-03 17:21:47.176126 | TASK [Generate Zuul manifest]
2026-02-03 17:21:47.198174 | localhost | ok
2026-02-03 17:21:47.219728 |
2026-02-03 17:21:47.219860 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2026-02-03 17:21:47.552899 | localhost | changed
2026-02-03 17:21:47.563265 |
2026-02-03 17:21:47.563353 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2026-02-03 17:21:47.596267 | localhost | ok
2026-02-03 17:21:47.605108 |
2026-02-03 17:21:47.605194 | TASK [Upload logs]
2026-02-03 17:21:47.624428 | localhost | ok
2026-02-03 17:21:47.686652 |
2026-02-03 17:21:47.686786 | TASK [Set zuul-log-path fact]
2026-02-03 17:21:47.706276 | localhost | ok
2026-02-03 17:21:47.720547 |
2026-02-03 17:21:47.720616 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-02-03 17:21:47.752767 | localhost | ok
2026-02-03 17:21:47.762111 |
2026-02-03 17:21:47.762269 | TASK [upload-logs : Create log directories]
2026-02-03 17:21:48.144635 | localhost | changed
2026-02-03 17:21:48.153058 |
2026-02-03 17:21:48.153126 | TASK [upload-logs : Ensure logs are readable before uploading]
2026-02-03 17:21:48.589079 | localhost -> localhost | ok: Runtime: 0:00:00.005881
2026-02-03 17:21:48.596370 |
2026-02-03 17:21:48.596479 | TASK [upload-logs : Upload logs to log server]
2026-02-03 17:21:49.103830 | localhost | Output suppressed because no_log was given
2026-02-03 17:21:49.109538 |
2026-02-03 17:21:49.109628 | LOOP [upload-logs : Compress console log and json output]
2026-02-03 17:21:49.154164 | localhost | skipping: Conditional result was False
2026-02-03 17:21:49.160626 | localhost | skipping: Conditional result was False
2026-02-03 17:21:49.173786 |
2026-02-03 17:21:49.173995 | LOOP [upload-logs : Upload compressed console log and json output]
2026-02-03 17:21:49.222975 | localhost | skipping: Conditional result was False
2026-02-03 17:21:49.223459 |
2026-02-03 17:21:49.226543 | localhost | skipping: Conditional result was False
2026-02-03 17:21:49.242193 |
2026-02-03 17:21:49.242333 | LOOP [upload-logs : Upload console log and json output]