2026-04-23 16:25:34.291272 | Job console starting
2026-04-23 16:25:34.312024 | Updating git repos
2026-04-23 16:25:34.714602 | Cloning repos into workspace
2026-04-23 16:25:34.893106 | Restoring repo states
2026-04-23 16:25:34.924640 | Merging changes
2026-04-23 16:25:38.158030 | Checking out repos
2026-04-23 16:25:38.689360 | Preparing playbooks
2026-04-23 16:25:57.241012 | Running Ansible setup
2026-04-23 16:26:02.029355 | PRE-RUN START: [trusted : github.com/vexxhost/zuul-config/playbooks/base/pre.yaml@main]
2026-04-23 16:26:02.726984 |
2026-04-23 16:26:02.727164 | PLAY [localhost]
2026-04-23 16:26:02.736062 |
2026-04-23 16:26:02.736174 | TASK [Gathering Facts]
2026-04-23 16:26:03.973242 | localhost | ok
2026-04-23 16:26:03.986895 |
2026-04-23 16:26:03.987027 | TASK [Setup log path fact]
2026-04-23 16:26:04.005247 | localhost | ok
2026-04-23 16:26:04.017648 |
2026-04-23 16:26:04.017791 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-04-23 16:26:04.056912 | localhost | ok
2026-04-23 16:26:04.066128 |
2026-04-23 16:26:04.066300 | TASK [emit-job-header : Print job information]
2026-04-23 16:26:04.108141 | # Job Information
2026-04-23 16:26:04.158885 | Ansible Version: 2.16.16
2026-04-23 16:26:04.159103 | Job: atmosphere-molecule-aio-ovn
2026-04-23 16:26:04.159171 | Pipeline: check
2026-04-23 16:26:04.159238 | Executor: 0a8996d2b663
2026-04-23 16:26:04.159300 | Triggered by: https://github.com/vexxhost/atmosphere/pull/3873
2026-04-23 16:26:04.159374 | Event ID: c26512e0-3f30-11f1-9931-b173b6dbd0a1
2026-04-23 16:26:04.167130 |
2026-04-23 16:26:04.167308 | LOOP [emit-job-header : Print node information]
2026-04-23 16:26:04.321944 | localhost | ok:
2026-04-23 16:26:04.358120 | localhost | # Node Information
2026-04-23 16:26:04.358265 | localhost | Inventory Hostname: instance
2026-04-23 16:26:04.358303 | localhost | Hostname: np0000169836
2026-04-23 16:26:04.358341 | localhost | Username: zuul
2026-04-23 16:26:04.358366 | localhost | Distro: Ubuntu 22.04
2026-04-23 16:26:04.358388 | localhost | Provider: yul1
2026-04-23 16:26:04.358408 | localhost | Region: ca-ymq-1
2026-04-23 16:26:04.358428 | localhost | Label: ubuntu-jammy-16
2026-04-23 16:26:04.358447 | localhost | Product Name: OpenStack Nova
2026-04-23 16:26:04.358466 | localhost | Interface IP: 199.19.213.161
2026-04-23 16:26:04.373246 |
2026-04-23 16:26:04.373419 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-04-23 16:26:04.989805 | localhost -> localhost | changed
2026-04-23 16:26:04.996354 |
2026-04-23 16:26:04.996445 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-04-23 16:26:06.222350 | localhost -> localhost | changed
2026-04-23 16:26:06.230222 |
2026-04-23 16:26:06.230348 | PLAY [all]
2026-04-23 16:26:06.239296 |
2026-04-23 16:26:06.239493 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-04-23 16:26:06.517981 | instance -> localhost | ok
2026-04-23 16:26:06.526009 |
2026-04-23 16:26:06.526115 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-04-23 16:26:06.557677 | instance | ok
2026-04-23 16:26:06.571215 | instance | included: /var/lib/zuul/builds/c4e83f0223f040e785fae21c19b0f782/trusted/project_1/opendev.org/zuul/zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-04-23 16:26:06.576875 |
2026-04-23 16:26:06.576942 | TASK [add-build-sshkey : Create Temp SSH key]
2026-04-23 16:26:07.843790 | instance -> localhost | Generating public/private rsa key pair.
2026-04-23 16:26:07.844043 | instance -> localhost | Your identification has been saved in /var/lib/zuul/builds/c4e83f0223f040e785fae21c19b0f782/work/c4e83f0223f040e785fae21c19b0f782_id_rsa
2026-04-23 16:26:07.844094 | instance -> localhost | Your public key has been saved in /var/lib/zuul/builds/c4e83f0223f040e785fae21c19b0f782/work/c4e83f0223f040e785fae21c19b0f782_id_rsa.pub
2026-04-23 16:26:07.844131 | instance -> localhost | The key fingerprint is:
2026-04-23 16:26:07.844163 | instance -> localhost | SHA256:vJWMyFte/AykLIyTXt5ZcfFQ709GbUclhT/wq4GTfcY zuul-build-sshkey
2026-04-23 16:26:07.844215 | instance -> localhost | The key's randomart image is:
2026-04-23 16:26:07.844247 | instance -> localhost | +---[RSA 3072]----+
2026-04-23 16:26:07.844286 | instance -> localhost | | o.oo=|
2026-04-23 16:26:07.844317 | instance -> localhost | | =.+.|
2026-04-23 16:26:07.844349 | instance -> localhost | | o . +.*|
2026-04-23 16:26:07.844380 | instance -> localhost | | = + * + *o|
2026-04-23 16:26:07.844409 | instance -> localhost | | + * S O + . *|
2026-04-23 16:26:07.844440 | instance -> localhost | | . + * * B o E.|
2026-04-23 16:26:07.844469 | instance -> localhost | | . o = + = .|
2026-04-23 16:26:07.844498 | instance -> localhost | | . |
2026-04-23 16:26:07.844531 | instance -> localhost | | |
2026-04-23 16:26:07.844561 | instance -> localhost | +----[SHA256]-----+
2026-04-23 16:26:07.844640 | instance -> localhost | ok: Runtime: 0:00:00.742350
2026-04-23 16:26:07.853339 |
2026-04-23 16:26:07.853491 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-04-23 16:26:07.888569 | instance | ok
2026-04-23 16:26:07.902524 | instance | included: /var/lib/zuul/builds/c4e83f0223f040e785fae21c19b0f782/trusted/project_1/opendev.org/zuul/zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-04-23 16:26:07.910433 |
2026-04-23 16:26:07.910527 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-04-23 16:26:07.935011 | instance | skipping: Conditional result was False
2026-04-23 16:26:07.945506 |
2026-04-23 16:26:07.945625 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-04-23 16:26:08.450795 | instance | changed
2026-04-23 16:26:08.458374 |
2026-04-23 16:26:08.458470 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-04-23 16:26:08.667580 | instance | ok
2026-04-23 16:26:08.674053 |
2026-04-23 16:26:08.674148 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-04-23 16:26:09.158647 | instance | changed
2026-04-23 16:26:09.163992 |
2026-04-23 16:26:09.164065 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-04-23 16:26:09.647666 | instance | changed
2026-04-23 16:26:09.654547 |
2026-04-23 16:26:09.654633 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-04-23 16:26:09.679408 | instance | skipping: Conditional result was False
2026-04-23 16:26:09.745781 |
2026-04-23 16:26:09.745956 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-04-23 16:26:10.144887 | instance -> localhost | changed
2026-04-23 16:26:10.155934 |
2026-04-23 16:26:10.156039 | TASK [add-build-sshkey : Add back temp key]
2026-04-23 16:26:10.470415 | instance -> localhost | Identity added: /var/lib/zuul/builds/c4e83f0223f040e785fae21c19b0f782/work/c4e83f0223f040e785fae21c19b0f782_id_rsa (zuul-build-sshkey)
2026-04-23 16:26:10.470606 | instance -> localhost | ok: Runtime: 0:00:00.019974
2026-04-23 16:26:10.475711 |
2026-04-23 16:26:10.475785 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-04-23 16:26:10.753770 | instance | ok
2026-04-23 16:26:10.768464 |
2026-04-23 16:26:10.770659 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-04-23 16:26:10.805501 | instance | skipping: Conditional result was False
2026-04-23 16:26:10.816217 |
2026-04-23 16:26:10.816292 | TASK [prepare-workspace : Start zuul_console daemon.]
2026-04-23 16:26:11.186056 | instance | ok
2026-04-23 16:26:12.044917 |
2026-04-23 16:26:12.052406 | TASK [prepare-workspace : Synchronize src repos to workspace directory.]
2026-04-23 16:26:14.050467 | instance | Output suppressed because no_log was given
2026-04-23 16:26:14.071486 |
2026-04-23 16:26:14.071765 | LOOP [ensure-output-dirs : Empty Zuul Output directories by removing them]
2026-04-23 16:26:14.269582 | instance | ok: "logs"
2026-04-23 16:26:15.400413 | instance | ok: All items complete
2026-04-23 16:26:15.400529 |
2026-04-23 16:26:15.406816 | instance | ok: "artifacts"
2026-04-23 16:26:15.413899 | instance | ok: "docs"
2026-04-23 16:26:15.422940 |
2026-04-23 16:26:15.423155 | LOOP [ensure-output-dirs : Ensure Zuul Output directories exist]
2026-04-23 16:26:15.629746 | instance | changed: "logs"
2026-04-23 16:26:15.788259 | instance | changed: "artifacts"
2026-04-23 16:26:15.943410 | instance | changed: "docs"
2026-04-23 16:26:15.958530 |
2026-04-23 16:26:15.958633 | PLAY RECAP
2026-04-23 16:26:15.958683 | instance | ok: 15 changed: 8 unreachable: 0 failed: 0 skipped: 3 rescued: 0 ignored: 0
2026-04-23 16:26:15.958713 | localhost | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2026-04-23 16:26:15.958767 |
2026-04-23 16:26:16.197580 | PRE-RUN END RESULT_NORMAL: [trusted : github.com/vexxhost/zuul-config/playbooks/base/pre.yaml@main]
2026-04-23 16:26:16.224154 | PRE-RUN START: [untrusted : github.com/vexxhost/zuul-jobs/playbooks/molecule/pre.yaml@main]
2026-04-23 16:26:16.918612 |
2026-04-23 16:26:17.624914 | PLAY [all]
2026-04-23 16:26:17.640472 |
2026-04-23 16:26:17.640662 | TASK [setup-uv : Extract archive]
2026-04-23 16:26:20.014382 | instance | changed
2026-04-23 16:26:20.024864 |
2026-04-23 16:26:20.025227 | TASK [setup-uv : Print version]
2026-04-23 16:26:20.381026 | instance | uv 0.8.13
2026-04-23 16:26:21.963628 | instance | ok: Runtime: 0:00:00.014687
2026-04-23 16:26:21.973158 |
2026-04-23 16:26:21.973271 | PLAY RECAP
2026-04-23 16:26:21.973325 | instance | ok: 2 changed: 2 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2026-04-23 16:26:21.973350 |
2026-04-23 16:26:22.208933 | PRE-RUN END RESULT_NORMAL: [untrusted : github.com/vexxhost/zuul-jobs/playbooks/molecule/pre.yaml@main]
2026-04-23 16:26:22.216078 | PRE-RUN START: [untrusted : github.com/vexxhost/atmosphere/test-playbooks/molecule/pre.yml@main]
2026-04-23 16:26:23.015929 |
2026-04-23 16:26:23.016195 | PLAY [all]
2026-04-23 16:26:23.028619 |
2026-04-23 16:26:23.028719 | TASK [Install "jq" for log collection]
2026-04-23 16:26:33.856400 | instance | changed
2026-04-23 16:26:33.862102 |
2026-04-23 16:26:33.862202 | TASK [Install pip3 for Python package management]
2026-04-23 16:26:38.309490 | instance | changed
2026-04-23 16:26:38.315667 |
2026-04-23 16:26:38.315741 | TASK [Install Python "kubernetes" library for kubernetes.core modules]
2026-04-23 16:26:41.399424 | instance | changed
2026-04-23 16:26:41.402494 |
2026-04-23 16:26:41.402607 | PLAY [all]
2026-04-23 16:26:41.412406 |
2026-04-23 16:26:41.412588 | TASK [ensure-go : Check installed go version]
2026-04-23 16:26:41.953134 | instance | ok: ERROR (ignored)
2026-04-23 16:26:41.963015 | instance | {
2026-04-23 16:26:41.963146 | instance | "failed_when_result": false,
2026-04-23 16:26:41.963199 | instance | "msg": "[Errno 2] No such file or directory: b'go'",
2026-04-23 16:26:41.963247 | instance | "rc": 2
2026-04-23 16:26:41.963294 | instance | }
2026-04-23 16:26:41.975556 |
2026-04-23 16:26:41.976499 | TASK [ensure-go : Skip if correct version of go is installed]
2026-04-23 16:26:42.031857 | instance | ok
2026-04-23 16:26:42.042977 | instance | included: /var/lib/zuul/builds/c4e83f0223f040e785fae21c19b0f782/untrusted/project_2/opendev.org/zuul/zuul-jobs/roles/ensure-go/tasks/install-go.yaml
2026-04-23 16:26:42.050355 |
2026-04-23 16:26:42.050520 | TASK [ensure-go : Create temp directory]
2026-04-23 16:26:42.399297 | instance | changed
2026-04-23 16:26:42.406578 |
2026-04-23 16:26:42.406671 | TASK [ensure-go : Get archive checksum]
2026-04-23 16:26:43.027740 | instance | ok: OK (64 bytes)
2026-04-23 16:26:43.036079 |
2026-04-23 16:26:43.036175 | TASK [ensure-go : Download go archive]
2026-04-23 16:26:46.059921 | instance | changed: OK (78559214 bytes)
2026-04-23 16:26:46.070223 |
2026-04-23 16:26:46.070401 | TASK [ensure-go : Install go]
2026-04-23 16:26:51.972475 | instance | changed
2026-04-23 16:26:51.980571 |
2026-04-23 16:26:51.980661 | PLAY [all]
2026-04-23 16:26:51.986855 |
2026-04-23 16:26:51.986917 | TASK [Copy inventory file for Zuul]
2026-04-23 16:26:52.745847 | instance | changed
2026-04-23 16:26:52.750609 |
2026-04-23 16:26:52.750678 | TASK [Switch "ansible_host" to private IP]
2026-04-23 16:26:53.132945 | instance | changed: 1 replacements made
2026-04-23 16:26:53.195789 |
2026-04-23 16:26:53.195979 | TASK [Run molecule prepare]
2026-04-23 16:26:53.466950 | instance | Using CPython 3.10.12 interpreter at: /usr/bin/python3
2026-04-23 16:26:53.467314 | instance | Creating virtual environment at: .venv
2026-04-23 16:26:53.492348 | instance | Building atmosphere @ file:///home/zuul/src/github.com/vexxhost/atmosphere
2026-04-23 16:26:53.516008 | instance | Downloading setuptools (1.1MiB)
2026-04-23 16:26:53.516247 | instance | Downloading netaddr (2.2MiB)
2026-04-23 16:26:53.516485 | instance | Downloading rjsonnet (1.2MiB)
2026-04-23 16:26:53.516831 | instance | Downloading pydantic-core (2.0MiB)
2026-04-23 16:26:53.520759 | instance | Downloading cryptography (4.2MiB)
2026-04-23 16:26:53.521688 | instance | Downloading ansible-core (2.1MiB)
2026-04-23 16:26:53.522289 | instance | Downloading pygments (1.2MiB)
2026-04-23 16:26:53.528196 | instance | Downloading kubernetes (1.9MiB)
2026-04-23 16:26:53.528540 | instance | Downloading openstacksdk (1.7MiB)
2026-04-23 16:26:53.753178 | instance | Building pyperclip==1.9.0
2026-04-23 16:26:54.040200 | instance | Downloading rjsonnet
2026-04-23 16:26:54.042880 | instance | Downloading setuptools
2026-04-23 16:26:54.069177 | instance | Downloading pygments
2026-04-23 16:26:54.114481 | instance | Downloading pydantic-core
2026-04-23 16:26:54.153339 | instance | Downloading netaddr
2026-04-23 16:26:54.170599 | instance | Downloading cryptography
2026-04-23 16:26:54.187799 | instance | Downloading kubernetes
2026-04-23 16:26:54.206383 | instance | Downloading openstacksdk
2026-04-23 16:26:54.226730 | instance | Downloading ansible-core
2026-04-23 16:26:54.550102 | instance | Built pyperclip==1.9.0
2026-04-23 16:26:54.743366 | instance | Built atmosphere @ file:///home/zuul/src/github.com/vexxhost/atmosphere
2026-04-23 16:26:54.792781 | instance | Installed 83 packages in 47ms
2026-04-23 16:26:55.414563 | instance | WARNING Molecule scenarios should migrate to 'extensions/molecule'
2026-04-23 16:26:56.056784 | instance | INFO [aio > discovery] scenario test matrix: prepare
2026-04-23 16:26:56.056886 | instance | INFO [aio > prerun] Performing prerun with role_name_check=0...
2026-04-23 16:27:40.385186 | instance | INFO [aio > prepare] Executing
2026-04-23 16:27:41.332604 | instance |
2026-04-23 16:27:41.333055 | instance | PLAY [Prepare] *****************************************************************
2026-04-23 16:27:41.333340 | instance |
2026-04-23 16:27:41.333627 | instance | TASK [Gathering Facts] *********************************************************
2026-04-23 16:27:41.334084 | instance | Thursday 23 April 2026 16:27:41 +0000 (0:00:00.026) 0:00:00.026 ********
2026-04-23 16:27:42.424409 | instance | [WARNING]: Platform linux on host instance is using the discovered Python
2026-04-23 16:27:42.424674 | instance | interpreter at /usr/bin/python3.10, but future installation of another Python
2026-04-23 16:27:42.424990 | instance | interpreter could change the meaning of that path. See
2026-04-23 16:27:42.425267 | instance | https://docs.ansible.com/ansible-
2026-04-23 16:27:42.425538 | instance | core/2.17/reference_appendices/interpreter_discovery.html for more information.
2026-04-23 16:27:42.433149 | instance | ok: [instance]
2026-04-23 16:27:42.433404 | instance |
2026-04-23 16:27:42.433746 | instance | TASK [Configure short hostname] ************************************************
2026-04-23 16:27:42.434090 | instance | Thursday 23 April 2026 16:27:42 +0000 (0:00:01.101) 0:00:01.128 ********
2026-04-23 16:27:43.075614 | instance | changed: [instance]
2026-04-23 16:27:43.075927 | instance |
2026-04-23 16:27:43.076318 | instance | TASK [Ensure hostname inside hosts file] ***************************************
2026-04-23 16:27:43.076596 | instance | Thursday 23 April 2026 16:27:43 +0000 (0:00:00.642) 0:00:01.770 ********
2026-04-23 16:27:43.322168 | instance | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2026-04-23 16:27:43.322414 | instance | with a mode of 0700, this may cause issues when running as another user. To
2026-04-23 16:27:43.322703 | instance | avoid this, create the remote_tmp dir with the correct permissions manually
2026-04-23 16:27:43.330708 | instance | changed: [instance]
2026-04-23 16:27:43.330959 | instance |
2026-04-23 16:27:43.331234 | instance | TASK [Install "dirmngr" for GPG keyserver operations] **************************
2026-04-23 16:27:43.331513 | instance | Thursday 23 April 2026 16:27:43 +0000 (0:00:00.255) 0:00:02.025 ********
2026-04-23 16:27:44.468128 | instance | ok: [instance]
2026-04-23 16:27:44.468219 | instance |
2026-04-23 16:27:44.468287 | instance | TASK [Purge "snapd" package] ***************************************************
2026-04-23 16:27:44.468414 | instance | Thursday 23 April 2026 16:27:44 +0000 (0:00:01.136) 0:00:03.162 ********
2026-04-23 16:27:45.188381 | instance | ok: [instance]
2026-04-23 16:27:45.188602 | instance |
2026-04-23 16:27:45.188885 | instance | PLAY [Generate workspace for Atmosphere] ***************************************
2026-04-23 16:27:45.189178 | instance |
2026-04-23 16:27:45.189446 | instance | TASK [Create folders for workspace] ********************************************
2026-04-23 16:27:45.189729 | instance | Thursday 23 April 2026 16:27:45 +0000 (0:00:00.720) 0:00:03.882 ********
2026-04-23 16:27:46.206599 | instance | changed: [localhost] => (item=group_vars)
2026-04-23 16:27:46.206698 | instance | changed: [localhost] => (item=group_vars/all)
2026-04-23 16:27:46.206859 | instance | changed: [localhost] => (item=group_vars/controllers)
2026-04-23 16:27:46.207030 | instance | changed: [localhost] => (item=group_vars/cephs)
2026-04-23 16:27:46.207202 | instance | changed: [localhost] => (item=group_vars/computes)
2026-04-23 16:27:46.207377 | instance | changed: [localhost] => (item=host_vars)
2026-04-23 16:27:46.207543 | instance |
2026-04-23 16:27:46.207751 | instance | PLAY [Generate Ceph control plane configuration for workspace] *****************
2026-04-23 16:27:46.207904 | instance |
2026-04-23 16:27:46.208084 | instance | TASK [Ensure the Ceph control plane configuration file exists] *****************
2026-04-23 16:27:46.208260 | instance | Thursday 23 April 2026 16:27:46 +0000 (0:00:01.018) 0:00:04.901 ********
2026-04-23 16:27:46.404851 | instance | changed: [localhost]
2026-04-23 16:27:46.405058 | instance |
2026-04-23 16:27:46.405345 | instance | TASK [Load the current Ceph control plane configuration into a variable] *******
2026-04-23 16:27:46.405639 | instance | Thursday 23 April 2026 16:27:46 +0000 (0:00:00.198) 0:00:05.099 ********
2026-04-23 16:27:46.431702 | instance | ok: [localhost]
2026-04-23 16:27:46.431966 | instance |
2026-04-23 16:27:46.432254 | instance | TASK [Generate Ceph control plane values for missing variables] ****************
2026-04-23 16:27:46.432542 | instance | Thursday 23 April 2026 16:27:46 +0000 (0:00:00.027) 0:00:05.126 ********
2026-04-23 16:27:46.486031 | instance | ok: [localhost] => (item={'key': 'ceph_fsid', 'value': '0b6cb0c5-0ae3-5c28-8507-ddd0a40e93b5'})
2026-04-23 16:27:46.486263 | instance | ok: [localhost] => (item={'key': 'ceph_mon_public_network', 'value': '10.96.240.0/24'})
2026-04-23 16:27:46.486506 | instance |
2026-04-23 16:27:46.486771 | instance | TASK [Write new Ceph control plane configuration file to disk] *****************
2026-04-23 16:27:46.487049 | instance | Thursday 23 April 2026 16:27:46 +0000 (0:00:00.054) 0:00:05.180 ********
2026-04-23 16:27:47.061994 | instance | changed: [localhost]
2026-04-23 16:27:47.062251 | instance |
2026-04-23 16:27:47.062551 | instance | PLAY [Generate Ceph OSD configuration for workspace] ***************************
2026-04-23 16:27:47.062931 | instance |
2026-04-23 16:27:47.063238 | instance | TASK [Ensure the Ceph OSDs configuration file exists] **************************
2026-04-23 16:27:47.063526 | instance | Thursday 23 April 2026 16:27:47 +0000 (0:00:00.575) 0:00:05.756 ********
2026-04-23 16:27:47.234134 | instance | changed: [localhost]
2026-04-23 16:27:47.234332 | instance |
2026-04-23 16:27:47.234599 | instance | TASK [Load the current Ceph OSDs configuration into a variable] ****************
2026-04-23 16:27:47.234868 | instance | Thursday 23 April 2026 16:27:47 +0000 (0:00:00.172) 0:00:05.928 ********
2026-04-23 16:27:47.258286 | instance | ok: [localhost]
2026-04-23 16:27:47.258522 | instance |
2026-04-23 16:27:47.258789 | instance | TASK [Generate Ceph OSDs values for missing variables] *************************
2026-04-23 16:27:47.259176 | instance | Thursday 23 April 2026 16:27:47 +0000 (0:00:00.024) 0:00:05.952 ********
2026-04-23 16:27:47.290691 | instance | ok: [localhost] => (item={'key': 'ceph_osd_devices', 'value': ['/dev/vdb', '/dev/vdc', '/dev/vdd']})
2026-04-23 16:27:47.290955 | instance |
2026-04-23 16:27:47.291220 | instance | TASK [Write new Ceph OSDs configuration file to disk] **************************
2026-04-23 16:27:47.291486 | instance | Thursday 23 April 2026 16:27:47 +0000 (0:00:00.032) 0:00:05.985 ********
2026-04-23 16:27:47.630166 | instance | changed: [localhost]
2026-04-23 16:27:47.630378 | instance |
2026-04-23 16:27:47.630728 | instance | PLAY [Generate Kubernetes configuration for workspace] *************************
2026-04-23 16:27:47.630936 | instance |
2026-04-23 16:27:47.631186 | instance | TASK [Ensure the Kubernetes configuration file exists] *************************
2026-04-23 16:27:47.631458 | instance | Thursday 23 April 2026 16:27:47 +0000 (0:00:00.339) 0:00:06.324 ********
2026-04-23 16:27:47.804113 | instance | changed: [localhost]
2026-04-23 16:27:47.804271 | instance |
2026-04-23 16:27:47.804453 | instance | TASK [Load the current Kubernetes configuration into a variable] ***************
2026-04-23 16:27:47.804631 | instance | Thursday 23 April 2026 16:27:47 +0000 (0:00:00.173) 0:00:06.498 ********
2026-04-23 16:27:47.833518 | instance | ok: [localhost]
2026-04-23 16:27:47.833752 | instance |
2026-04-23 16:27:47.834023 | instance | TASK [Generate Kubernetes values for missing variables] ************************
2026-04-23 16:27:47.834336 | instance | Thursday 23 April 2026 16:27:47 +0000 (0:00:00.029) 0:00:06.527 ********
2026-04-23 16:27:47.874398 | instance | ok: [localhost] => (item={'key': 'kubernetes_hostname', 'value': '10.96.240.10'})
2026-04-23 16:27:47.874663 | instance | ok: [localhost] => (item={'key': 'kubernetes_keepalived_vrid', 'value': 42})
2026-04-23 16:27:47.874935 | instance | ok: [localhost] => (item={'key': 'kubernetes_keepalived_vip', 'value': '10.96.240.10'})
2026-04-23 16:27:47.875185 | instance |
2026-04-23 16:27:47.875454 | instance | TASK [Write new Kubernetes configuration file to disk] *************************
2026-04-23 16:27:47.875811 | instance | Thursday 23 April 2026 16:27:47 +0000 (0:00:00.041) 0:00:06.568 ********
2026-04-23 16:27:48.224028 | instance | changed: [localhost]
2026-04-23 16:27:48.224235 | instance |
2026-04-23 16:27:48.224506 | instance | PLAY [Generate Keepalived configuration for workspace] *************************
2026-04-23 16:27:48.224753 | instance |
2026-04-23 16:27:48.225022 | instance | TASK [Ensure the Keepalived configuration file exists] *************************
2026-04-23 16:27:48.225295 | instance | Thursday 23 April 2026 16:27:48 +0000 (0:00:00.349) 0:00:06.918 ********
2026-04-23 16:27:48.404729 | instance | changed: [localhost]
2026-04-23 16:27:48.404907 | instance |
2026-04-23 16:27:48.405190 | instance | TASK [Load the current Keepalived configuration into a variable] ***************
2026-04-23 16:27:48.405548 | instance | Thursday 23 April 2026 16:27:48 +0000 (0:00:00.180) 0:00:07.098 ********
2026-04-23 16:27:48.436086 | instance | ok: [localhost]
2026-04-23 16:27:48.436318 | instance |
2026-04-23 16:27:48.436592 | instance | TASK [Generate Keepalived values for missing variables] ************************
2026-04-23 16:27:48.436890 | instance | Thursday 23 April 2026 16:27:48 +0000 (0:00:00.031) 0:00:07.130 ********
2026-04-23 16:27:48.474356 | instance | ok: [localhost] => (item={'key': 'keepalived_interface', 'value': 'br-ex'})
2026-04-23 16:27:48.474601 | instance | ok: [localhost] => (item={'key': 'keepalived_vip', 'value': '10.96.250.10'})
2026-04-23 16:27:48.474842 | instance |
2026-04-23 16:27:48.475108 | instance | TASK [Write new Keepalived configuration file to disk] *************************
2026-04-23 16:27:48.475370 | instance | Thursday 23 April 2026 16:27:48 +0000 (0:00:00.038) 0:00:07.168 ********
2026-04-23 16:27:48.829413 | instance | changed: [localhost]
2026-04-23 16:27:48.829660 | instance |
2026-04-23 16:27:48.829959 | instance | PLAY [Generate endpoints for workspace] ****************************************
2026-04-23 16:27:48.830240 | instance |
2026-04-23 16:27:48.830532 | instance | TASK [Gathering Facts] *********************************************************
2026-04-23 16:27:48.830829 | instance | Thursday 23 April 2026 16:27:48 +0000 (0:00:00.354) 0:00:07.523 ********
2026-04-23 16:27:49.498629 | instance | ok: [localhost]
2026-04-23 16:27:49.498811 | instance |
2026-04-23 16:27:49.499120 | instance | TASK [Ensure the endpoints file exists] ****************************************
2026-04-23 16:27:49.499401 | instance | Thursday 23 April 2026 16:27:49 +0000 (0:00:00.669) 0:00:08.192 ********
2026-04-23 16:27:49.679181 | instance | changed: [localhost]
2026-04-23 16:27:49.679346 | instance |
2026-04-23 16:27:49.679643 | instance | TASK [Load the current endpoints into a variable] ******************************
2026-04-23 16:27:49.679974 | instance | Thursday 23 April 2026 16:27:49 +0000 (0:00:00.180) 0:00:08.373 ********
2026-04-23 16:27:49.711478 | instance | ok: [localhost]
2026-04-23 16:27:49.711784 | instance |
2026-04-23 16:27:49.712053 | instance | TASK [Generate endpoint skeleton for missing variables] ************************
2026-04-23 16:27:49.712323 | instance | Thursday 23 April 2026 16:27:49 +0000 (0:00:00.032) 0:00:08.406 ********
2026-04-23 16:27:50.458806 | instance | ok: [localhost] => (item=keycloak_host)
2026-04-23 16:27:50.459042 | instance | ok: [localhost] => (item=kube_prometheus_stack_grafana_host)
2026-04-23 16:27:50.459314 | instance | ok: [localhost] => (item=kube_prometheus_stack_alertmanager_host)
2026-04-23 16:27:50.459624 | instance | ok: [localhost] => (item=kube_prometheus_stack_prometheus_host)
2026-04-23 16:27:50.460011 | instance | ok: [localhost] => (item=openstack_helm_endpoints_region_name)
2026-04-23 16:27:50.460302 | instance | ok: [localhost] => (item=openstack_helm_endpoints_keystone_api_host)
2026-04-23 16:27:50.460559 | instance | ok: [localhost] => (item=openstack_helm_endpoints_glance_api_host)
2026-04-23 16:27:50.460863 | instance | ok: [localhost] => (item=openstack_helm_endpoints_cinder_api_host)
2026-04-23 16:27:50.461138 | instance | ok: [localhost] => (item=openstack_helm_endpoints_placement_api_host)
2026-04-23 16:27:50.461410 | instance | ok: [localhost] => (item=openstack_helm_endpoints_barbican_api_host)
2026-04-23 16:27:50.461668 | instance | ok: [localhost] => (item=openstack_helm_endpoints_neutron_api_host)
2026-04-23 16:27:50.462017 | instance | ok: [localhost] => (item=openstack_helm_endpoints_nova_api_host)
2026-04-23 16:27:50.462309 | instance | ok: [localhost] => (item=openstack_helm_endpoints_nova_novnc_host)
2026-04-23 16:27:50.462573 | instance | ok: [localhost] => (item=openstack_helm_endpoints_ironic_api_host)
2026-04-23 16:27:50.462837 | instance | ok: [localhost] => (item=openstack_helm_endpoints_designate_api_host)
2026-04-23 16:27:50.463101 | instance | ok: [localhost] => (item=openstack_helm_endpoints_octavia_api_host)
2026-04-23 16:27:50.463370 | instance | ok: [localhost] => (item=openstack_helm_endpoints_magnum_api_host)
2026-04-23 16:27:50.463709 | instance | ok: [localhost] => (item=openstack_helm_endpoints_magnum_registry_host)
2026-04-23 16:27:50.464017 | instance | ok: [localhost] => (item=openstack_helm_endpoints_heat_api_host)
2026-04-23 16:27:50.464332 | instance | ok: [localhost] => (item=openstack_helm_endpoints_heat_cfn_api_host)
2026-04-23 16:27:50.464712 | instance | ok: [localhost] => (item=openstack_helm_endpoints_horizon_api_host)
2026-04-23 16:27:50.465031 | instance | ok: [localhost] => (item=openstack_helm_endpoints_rgw_host)
2026-04-23 16:27:50.465298 | instance | ok: [localhost] => (item=openstack_helm_endpoints_manila_api_host)
2026-04-23 16:27:50.465656 | instance |
2026-04-23 16:27:50.466020 | instance | TASK [Write new endpoints file to disk] ****************************************
2026-04-23 16:27:50.466300 | instance | Thursday 23 April 2026 16:27:50 +0000 (0:00:00.746) 0:00:09.153 ********
2026-04-23 16:27:50.805889 | instance | changed: [localhost]
2026-04-23 16:27:50.806121 | instance |
2026-04-23 16:27:50.806398 | instance | TASK [Ensure the endpoints file exists] ****************************************
2026-04-23 16:27:50.806672 | instance | Thursday 23 April 2026 16:27:50 +0000 (0:00:00.347) 0:00:09.500 ********
2026-04-23 16:27:50.991266 | instance | changed: [localhost]
2026-04-23 16:27:50.991448 | instance |
2026-04-23 16:27:50.991771 | instance | PLAY [Generate Neutron configuration for workspace] ****************************
2026-04-23 16:27:50.992012 | instance |
2026-04-23 16:27:50.992370 | instance | TASK [Ensure the Neutron configuration file exists] ****************************
2026-04-23 16:27:50.992555 | instance | Thursday 23 April 2026 16:27:50 +0000 (0:00:00.185) 0:00:09.685 ********
2026-04-23 16:27:51.169112 | instance | changed: [localhost]
2026-04-23 16:27:51.169314 | instance |
2026-04-23 16:27:51.169594 | instance | TASK [Load the current Neutron configuration into a variable] ******************
2026-04-23 16:27:51.169858 | instance | Thursday 23 April 2026 16:27:51 +0000 (0:00:00.178) 0:00:09.863 ********
2026-04-23 16:27:51.203218 | instance | ok: [localhost]
2026-04-23 16:27:51.203459 | instance |
2026-04-23 16:27:51.203790 | instance | TASK [Generate Neutron values for missing variables] ***************************
2026-04-23 16:27:51.204082 | instance | Thursday 23 April 2026 16:27:51 +0000 (0:00:00.034) 0:00:09.897 ********
2026-04-23 16:27:51.243001 | instance | ok: [localhost] => (item={'key': 'neutron_networks', 'value': [{'name': 'public', 'external': True, 'shared': True, 'mtu_size': 1500, 'port_security_enabled': True, 'provider_network_type': 'flat', 'provider_physical_network': 'external', 'subnets': [{'name': 'public-subnet', 'cidr': '10.96.250.0/24', 'gateway_ip': '10.96.250.10', 'allocation_pool_start': '10.96.250.200', 'allocation_pool_end': '10.96.250.220', 'enable_dhcp': True}]}]})
2026-04-23 16:27:51.243246 | instance |
2026-04-23 16:27:51.243503 | instance | TASK [Write new Neutron configuration file to disk] ****************************
2026-04-23 16:27:51.243806 | instance | Thursday 23 April 2026 16:27:51 +0000 (0:00:00.039) 0:00:09.937 ********
2026-04-23 16:27:51.594660 | instance | changed: [localhost]
2026-04-23 16:27:51.594929 | instance |
2026-04-23 16:27:51.595198 | instance | PLAY [Generate Nova configuration for workspace] *******************************
2026-04-23 16:27:51.595455 | instance |
2026-04-23 16:27:51.595833 | instance | TASK [Ensure the Nova configuration file exists] *******************************
2026-04-23 16:27:51.596208 | instance | Thursday 23 April 2026 16:27:51 +0000 (0:00:00.351) 0:00:10.289 ********
2026-04-23 16:27:51.774785 | instance | changed: [localhost]
2026-04-23 16:27:51.774983 | instance |
2026-04-23 16:27:51.775254 | instance | TASK [Load the current Nova configuration into a variable] *********************
2026-04-23 16:27:51.775516 | instance | Thursday 23 April 2026 16:27:51 +0000 (0:00:00.179) 0:00:10.469 ********
2026-04-23 16:27:51.805451 | instance | ok: [localhost]
2026-04-23 16:27:51.805683 | instance |
2026-04-23 16:27:51.805941 | instance | TASK [Generate Nova values for missing variables] ******************************
2026-04-23 16:27:51.806296 | instance | Thursday 23 April 2026 16:27:51 +0000 (0:00:00.030) 0:00:10.500 ********
2026-04-23 16:27:51.846324 | instance | ok: [localhost] => (item={'key': 'nova_flavors', 'value': [{'name': 'm1.tiny', 'ram': 512, 'disk': 1, 'vcpus': 1}, {'name': 'm1.small', 'ram': 2048, 'disk': 20, 'vcpus': 1}, {'name': 'm1.medium', 'ram': 4096, 'disk': 40, 'vcpus': 2}, {'name': 'm1.large', 'ram': 8192, 'disk': 80, 'vcpus': 4}, {'name': 'm1.xlarge', 'ram': 16384, 'disk': 160, 'vcpus': 8}]})
2026-04-23 16:27:51.846572 | instance |
2026-04-23 16:27:51.846862 | instance | TASK [Write new Nova configuration file to disk] *******************************
2026-04-23 16:27:51.847185 | instance | Thursday 23 April 2026 16:27:51 +0000 (0:00:00.041) 0:00:10.541 ********
2026-04-23 16:27:52.207691 | instance | changed: [localhost]
2026-04-23 16:27:52.207919 | instance |
2026-04-23 16:27:52.208200 | instance | PLAY [Generate secrets for workspace] ******************************************
2026-04-23 16:27:52.208441 | instance |
2026-04-23 16:27:52.208706 | instance | TASK [Ensure the secrets file exists] ******************************************
2026-04-23 16:27:52.208994 | instance | Thursday 23 April 2026 16:27:52 +0000 (0:00:00.361) 0:00:10.902 ********
2026-04-23 16:27:52.388229 | instance | changed: [localhost]
2026-04-23 16:27:52.388449 | instance |
2026-04-23 16:27:52.388802 | instance | TASK [Load the current secrets into a variable] ********************************
2026-04-23 16:27:52.389097 | instance | Thursday 23 April 2026 16:27:52 +0000 (0:00:00.179) 0:00:11.082 ********
2026-04-23 16:27:52.419280 | instance | ok: [localhost]
2026-04-23 16:27:52.419537 | instance |
2026-04-23 16:27:52.419858 | instance | TASK [Generate secrets for missing variables] **********************************
2026-04-23 16:27:52.420127 | instance | Thursday 23 April 2026 16:27:52 +0000 (0:00:00.031) 0:00:11.113 ********
2026-04-23 16:27:52.816310 | instance | ok: [localhost] => (item=heat_auth_encryption_key)
2026-04-23 16:27:52.816506 | instance | ok: [localhost] => (item=keepalived_password)
2026-04-23 16:27:52.816770 | instance | ok: [localhost] => (item=keycloak_admin_password)
2026-04-23 16:27:52.817042 | instance | ok: [localhost] => (item=keycloak_database_password)
2026-04-23 16:27:52.817302 | instance | ok: [localhost] => (item=keystone_keycloak_client_secret)
2026-04-23 16:27:52.817624 | instance | ok: [localhost] => (item=keystone_oidc_crypto_passphrase)
2026-04-23 16:27:52.817857 | instance | ok: [localhost] => (item=kube_prometheus_stack_grafana_admin_password)
2026-04-23 16:27:52.818114 | instance | ok: [localhost] => (item=octavia_heartbeat_key)
2026-04-23 16:27:52.818385 | instance | ok: [localhost] => (item=openstack_helm_endpoints_rabbitmq_admin_password)
2026-04-23 16:27:52.818655 | instance | ok: [localhost] => (item=openstack_helm_endpoints_memcached_secret_key)
2026-04-23 16:27:52.818909 | instance | ok: [localhost] => (item=openstack_helm_endpoints_keystone_admin_password)
2026-04-23 16:27:52.819159 | instance | ok: [localhost] => (item=openstack_helm_endpoints_keystone_mariadb_password)
2026-04-23 16:27:52.819424 | instance | ok: [localhost] => (item=openstack_helm_endpoints_keystone_rabbitmq_password)
2026-04-23 16:27:52.819746 | instance | ok: [localhost] => (item=openstack_helm_endpoints_glance_keystone_password)
2026-04-23 16:27:52.820004 | instance | ok: [localhost] => (item=openstack_helm_endpoints_glance_mariadb_password)
2026-04-23 16:27:52.820264 | instance | ok: [localhost] => (item=openstack_helm_endpoints_glance_rabbitmq_password)
2026-04-23 16:27:52.820520 | instance | ok: [localhost] => (item=openstack_helm_endpoints_cinder_keystone_password)
2026-04-23 16:27:52.820780 | instance | ok: [localhost] => (item=openstack_helm_endpoints_cinder_mariadb_password)
2026-04-23 16:27:52.821039 | instance | ok: [localhost] =>
(item=openstack_helm_endpoints_cinder_rabbitmq_password) 2026-04-23 16:27:52.821299 | instance | ok: [localhost] => (item=openstack_helm_endpoints_placement_keystone_password) 2026-04-23 16:27:52.821597 | instance | ok: [localhost] => (item=openstack_helm_endpoints_placement_mariadb_password) 2026-04-23 16:27:52.821861 | instance | ok: [localhost] => (item=openstack_helm_endpoints_barbican_keystone_password) 2026-04-23 16:27:52.822122 | instance | ok: [localhost] => (item=openstack_helm_endpoints_barbican_mariadb_password) 2026-04-23 16:27:52.822393 | instance | ok: [localhost] => (item=openstack_helm_endpoints_neutron_keystone_password) 2026-04-23 16:27:52.822658 | instance | ok: [localhost] => (item=openstack_helm_endpoints_neutron_mariadb_password) 2026-04-23 16:27:52.822916 | instance | ok: [localhost] => (item=openstack_helm_endpoints_neutron_rabbitmq_password) 2026-04-23 16:27:52.823176 | instance | ok: [localhost] => (item=openstack_helm_endpoints_neutron_metadata_secret) 2026-04-23 16:27:52.823436 | instance | ok: [localhost] => (item=openstack_helm_endpoints_nova_keystone_password) 2026-04-23 16:27:52.823730 | instance | ok: [localhost] => (item=openstack_helm_endpoints_nova_mariadb_password) 2026-04-23 16:27:52.823995 | instance | ok: [localhost] => (item=openstack_helm_endpoints_nova_rabbitmq_password) 2026-04-23 16:27:52.824255 | instance | ok: [localhost] => (item=openstack_helm_endpoints_ironic_keystone_password) 2026-04-23 16:27:52.824512 | instance | ok: [localhost] => (item=openstack_helm_endpoints_ironic_mariadb_password) 2026-04-23 16:27:52.824769 | instance | ok: [localhost] => (item=openstack_helm_endpoints_ironic_rabbitmq_password) 2026-04-23 16:27:52.825027 | instance | ok: [localhost] => (item=openstack_helm_endpoints_designate_keystone_password) 2026-04-23 16:27:52.825285 | instance | ok: [localhost] => (item=openstack_helm_endpoints_designate_mariadb_password) 2026-04-23 16:27:52.825603 | instance | ok: [localhost] => 
(item=openstack_helm_endpoints_designate_rabbitmq_password) 2026-04-23 16:27:52.825859 | instance | ok: [localhost] => (item=openstack_helm_endpoints_octavia_keystone_password) 2026-04-23 16:27:52.826118 | instance | ok: [localhost] => (item=openstack_helm_endpoints_octavia_mariadb_password) 2026-04-23 16:27:52.826377 | instance | ok: [localhost] => (item=openstack_helm_endpoints_octavia_rabbitmq_password) 2026-04-23 16:27:52.826567 | instance | ok: [localhost] => (item=openstack_helm_endpoints_magnum_keystone_password) 2026-04-23 16:27:52.826688 | instance | ok: [localhost] => (item=openstack_helm_endpoints_magnum_mariadb_password) 2026-04-23 16:27:52.826809 | instance | ok: [localhost] => (item=openstack_helm_endpoints_magnum_rabbitmq_password) 2026-04-23 16:27:52.826927 | instance | ok: [localhost] => (item=openstack_helm_endpoints_heat_keystone_password) 2026-04-23 16:27:52.827058 | instance | ok: [localhost] => (item=openstack_helm_endpoints_heat_trustee_keystone_password) 2026-04-23 16:27:52.827194 | instance | ok: [localhost] => (item=openstack_helm_endpoints_heat_stack_user_keystone_password) 2026-04-23 16:27:52.827316 | instance | ok: [localhost] => (item=openstack_helm_endpoints_heat_mariadb_password) 2026-04-23 16:27:52.827435 | instance | ok: [localhost] => (item=openstack_helm_endpoints_heat_rabbitmq_password) 2026-04-23 16:27:52.827553 | instance | ok: [localhost] => (item=openstack_helm_endpoints_horizon_mariadb_password) 2026-04-23 16:27:52.827701 | instance | ok: [localhost] => (item=openstack_helm_endpoints_tempest_keystone_password) 2026-04-23 16:27:52.827816 | instance | ok: [localhost] => (item=openstack_helm_endpoints_openstack_exporter_keystone_password) 2026-04-23 16:27:52.827934 | instance | ok: [localhost] => (item=openstack_helm_endpoints_rgw_keystone_password) 2026-04-23 16:27:52.828055 | instance | ok: [localhost] => (item=openstack_helm_endpoints_manila_keystone_password) 2026-04-23 16:27:52.828172 | instance | ok: [localhost] => 
(item=openstack_helm_endpoints_manila_mariadb_password) 2026-04-23 16:27:52.828292 | instance | ok: [localhost] => (item=openstack_helm_endpoints_staffeln_mariadb_password) 2026-04-23 16:27:52.828405 | instance | 2026-04-23 16:27:52.828530 | instance | TASK [Generate base64 encoded secrets] ***************************************** 2026-04-23 16:27:52.828650 | instance | Thursday 23 April 2026 16:27:52 +0000 (0:00:00.396) 0:00:11.510 ******** 2026-04-23 16:27:52.874766 | instance | ok: [localhost] => (item=barbican_kek) 2026-04-23 16:27:52.874973 | instance | 2026-04-23 16:27:52.875230 | instance | TASK [Generate temporary files for generating keys for missing variables] ****** 2026-04-23 16:27:52.875492 | instance | Thursday 23 April 2026 16:27:52 +0000 (0:00:00.058) 0:00:11.569 ******** 2026-04-23 16:27:53.266324 | instance | changed: [localhost] => (item=manila_ssh_key) 2026-04-23 16:27:53.266404 | instance | changed: [localhost] => (item=nova_ssh_key) 2026-04-23 16:27:53.266564 | instance | 2026-04-23 16:27:53.266729 | instance | TASK [Generate SSH keys for missing variables] ********************************* 2026-04-23 16:27:53.266899 | instance | Thursday 23 April 2026 16:27:53 +0000 (0:00:00.391) 0:00:11.961 ******** 2026-04-23 16:28:01.215280 | instance | changed: [localhost] => (item=manila_ssh_key) 2026-04-23 16:28:01.215395 | instance | changed: [localhost] => (item=nova_ssh_key) 2026-04-23 16:28:01.215407 | instance | 2026-04-23 16:28:01.215436 | instance | TASK [Set values for SSH keys] ************************************************* 2026-04-23 16:28:01.215446 | instance | Thursday 23 April 2026 16:28:01 +0000 (0:00:07.946) 0:00:19.907 ******** 2026-04-23 16:28:01.277247 | instance | ok: [localhost] => (item=manila_ssh_key) 2026-04-23 16:28:01.277459 | instance | ok: [localhost] => (item=nova_ssh_key) 2026-04-23 16:28:01.277747 | instance | 2026-04-23 16:28:01.278056 | instance | TASK [Delete the temporary files generated for SSH keys] 
*********************** 2026-04-23 16:28:01.278349 | instance | Thursday 23 April 2026 16:28:01 +0000 (0:00:00.064) 0:00:19.971 ******** 2026-04-23 16:28:01.612549 | instance | changed: [localhost] => (item=manila_ssh_key) 2026-04-23 16:28:01.612779 | instance | changed: [localhost] => (item=nova_ssh_key) 2026-04-23 16:28:01.613033 | instance | 2026-04-23 16:28:01.613307 | instance | TASK [Write new secrets file to disk] ****************************************** 2026-04-23 16:28:01.613659 | instance | Thursday 23 April 2026 16:28:01 +0000 (0:00:00.335) 0:00:20.306 ******** 2026-04-23 16:28:01.969707 | instance | changed: [localhost] 2026-04-23 16:28:01.969950 | instance | 2026-04-23 16:28:01.970261 | instance | TASK [Encrypt secrets file with Vault password] ******************************** 2026-04-23 16:28:01.970546 | instance | Thursday 23 April 2026 16:28:01 +0000 (0:00:00.356) 0:00:20.663 ******** 2026-04-23 16:28:02.009299 | instance | skipping: [localhost] 2026-04-23 16:28:02.009576 | instance | 2026-04-23 16:28:02.009860 | instance | PLAY [Setup networking] ******************************************************** 2026-04-23 16:28:02.010256 | instance | 2026-04-23 16:28:02.010543 | instance | TASK [Gathering Facts] ********************************************************* 2026-04-23 16:28:02.010799 | instance | Thursday 23 April 2026 16:28:02 +0000 (0:00:00.040) 0:00:20.704 ******** 2026-04-23 16:28:02.709498 | instance | ok: [instance] 2026-04-23 16:28:02.709726 | instance | 2026-04-23 16:28:02.710015 | instance | TASK [Create bridge for management network] ************************************ 2026-04-23 16:28:02.710464 | instance | Thursday 23 April 2026 16:28:02 +0000 (0:00:00.699) 0:00:21.403 ******** 2026-04-23 16:28:03.052597 | instance | ok: [instance] 2026-04-23 16:28:03.052929 | instance | 2026-04-23 16:28:03.053169 | instance | TASK [Create fake interface for management bridge] ***************************** 2026-04-23 16:28:03.053326 | instance | 
Thursday 23 April 2026 16:28:03 +0000 (0:00:00.342) 0:00:21.746 ******** 2026-04-23 16:28:03.263567 | instance | ok: [instance] 2026-04-23 16:28:03.264196 | instance | 2026-04-23 16:28:03.264670 | instance | TASK [Assign dummy interface to management bridge] ***************************** 2026-04-23 16:28:03.265103 | instance | Thursday 23 April 2026 16:28:03 +0000 (0:00:00.211) 0:00:21.957 ******** 2026-04-23 16:28:03.462699 | instance | ok: [instance] 2026-04-23 16:28:03.462843 | instance | 2026-04-23 16:28:03.463014 | instance | TASK [Assign IP address for management bridge] ********************************* 2026-04-23 16:28:03.463287 | instance | Thursday 23 April 2026 16:28:03 +0000 (0:00:00.199) 0:00:22.157 ******** 2026-04-23 16:28:03.656641 | instance | ok: [instance] 2026-04-23 16:28:03.656872 | instance | 2026-04-23 16:28:03.657157 | instance | TASK [Bring up interfaces] ***************************************************** 2026-04-23 16:28:03.657544 | instance | Thursday 23 April 2026 16:28:03 +0000 (0:00:00.193) 0:00:22.350 ******** 2026-04-23 16:28:04.026971 | instance | ok: [instance] => (item=br-mgmt) 2026-04-23 16:28:04.027699 | instance | ok: [instance] => (item=dummy0) 2026-04-23 16:28:04.028062 | instance | 2026-04-23 16:28:04.028534 | instance | PLAY [Create devices for Ceph] ************************************************* 2026-04-23 16:28:04.028861 | instance | 2026-04-23 16:28:04.029233 | instance | TASK [Gathering Facts] ********************************************************* 2026-04-23 16:28:04.029592 | instance | Thursday 23 April 2026 16:28:04 +0000 (0:00:00.370) 0:00:22.721 ******** 2026-04-23 16:28:04.760164 | instance | ok: [instance] 2026-04-23 16:28:04.760255 | instance | 2026-04-23 16:28:04.760356 | instance | TASK [Install depedencies] ***************************************************** 2026-04-23 16:28:04.760547 | instance | Thursday 23 April 2026 16:28:04 +0000 (0:00:00.732) 0:00:23.453 ******** 2026-04-23 16:28:24.345930 | 
instance | changed: [instance] 2026-04-23 16:28:24.346035 | instance | 2026-04-23 16:28:24.346338 | instance | TASK [Start up service] ******************************************************** 2026-04-23 16:28:24.346381 | instance | Thursday 23 April 2026 16:28:24 +0000 (0:00:19.586) 0:00:43.040 ******** 2026-04-23 16:28:24.895112 | instance | ok: [instance] 2026-04-23 16:28:24.895182 | instance | 2026-04-23 16:28:24.895464 | instance | TASK [Generate lvm.conf] ******************************************************* 2026-04-23 16:28:24.895509 | instance | Thursday 23 April 2026 16:28:24 +0000 (0:00:00.548) 0:00:43.589 ******** 2026-04-23 16:28:25.109049 | instance | ok: [instance] 2026-04-23 16:28:25.109118 | instance | 2026-04-23 16:28:25.109526 | instance | TASK [Write /etc/lvm/lvm.conf] ************************************************* 2026-04-23 16:28:25.109846 | instance | Thursday 23 April 2026 16:28:25 +0000 (0:00:00.213) 0:00:43.803 ******** 2026-04-23 16:28:25.576514 | instance | changed: [instance] 2026-04-23 16:28:25.576628 | instance | 2026-04-23 16:28:25.576975 | instance | TASK [Get list of all loopback devices] **************************************** 2026-04-23 16:28:25.577031 | instance | Thursday 23 April 2026 16:28:25 +0000 (0:00:00.467) 0:00:44.271 ******** 2026-04-23 16:28:25.772510 | instance | ok: [instance] 2026-04-23 16:28:25.772578 | instance | 2026-04-23 16:28:25.772834 | instance | TASK [Fail if there is any existing loopback devices] ************************** 2026-04-23 16:28:25.772887 | instance | Thursday 23 April 2026 16:28:25 +0000 (0:00:00.195) 0:00:44.466 ******** 2026-04-23 16:28:25.800930 | instance | skipping: [instance] 2026-04-23 16:28:25.801302 | instance | 2026-04-23 16:28:25.801329 | instance | TASK [Create devices for Ceph] ************************************************* 2026-04-23 16:28:25.801336 | instance | Thursday 23 April 2026 16:28:25 +0000 (0:00:00.028) 0:00:44.495 ******** 2026-04-23 16:28:26.337547 | instance 
| changed: [instance] => (item=osd0) 2026-04-23 16:28:26.337625 | instance | changed: [instance] => (item=osd1) 2026-04-23 16:28:26.337758 | instance | changed: [instance] => (item=osd2) 2026-04-23 16:28:26.337962 | instance | 2026-04-23 16:28:26.338112 | instance | TASK [Set permissions on loopback devices] ************************************* 2026-04-23 16:28:26.338261 | instance | Thursday 23 April 2026 16:28:26 +0000 (0:00:00.536) 0:00:45.032 ******** 2026-04-23 16:28:26.878007 | instance | changed: [instance] => (item=osd0) 2026-04-23 16:28:26.878100 | instance | changed: [instance] => (item=osd1) 2026-04-23 16:28:26.878205 | instance | changed: [instance] => (item=osd2) 2026-04-23 16:28:26.878402 | instance | 2026-04-23 16:28:26.878553 | instance | TASK [Start loop devices] ****************************************************** 2026-04-23 16:28:26.878716 | instance | Thursday 23 April 2026 16:28:26 +0000 (0:00:00.540) 0:00:45.572 ******** 2026-04-23 16:28:27.645567 | instance | changed: [instance] => (item=osd0) 2026-04-23 16:28:27.646396 | instance | changed: [instance] => (item=osd1) 2026-04-23 16:28:27.646472 | instance | changed: [instance] => (item=osd2) 2026-04-23 16:28:27.646480 | instance | 2026-04-23 16:28:27.646487 | instance | TASK [Create a volume group for each loop device] ****************************** 2026-04-23 16:28:27.646495 | instance | Thursday 23 April 2026 16:28:27 +0000 (0:00:00.767) 0:00:46.339 ******** 2026-04-23 16:28:30.642067 | instance | changed: [instance] => (item=osd0) 2026-04-23 16:28:30.642220 | instance | changed: [instance] => (item=osd1) 2026-04-23 16:28:30.642232 | instance | changed: [instance] => (item=osd2) 2026-04-23 16:28:30.642767 | instance | 2026-04-23 16:28:30.642831 | instance | TASK [Create a logical volume for each loop device] **************************** 2026-04-23 16:28:30.642840 | instance | Thursday 23 April 2026 16:28:30 +0000 (0:00:02.996) 0:00:49.336 ******** 2026-04-23 16:28:32.538586 | instance | 
changed: [instance] => (item=ceph-instance-osd0) 2026-04-23 16:28:32.538660 | instance | changed: [instance] => (item=ceph-instance-osd1) 2026-04-23 16:28:32.539393 | instance | changed: [instance] => (item=ceph-instance-osd2) 2026-04-23 16:28:32.539444 | instance | 2026-04-23 16:28:32.539451 | instance | PLAY [controllers] ************************************************************* 2026-04-23 16:28:32.539455 | instance | 2026-04-23 16:28:32.539459 | instance | TASK [Gathering Facts] ********************************************************* 2026-04-23 16:28:32.539464 | instance | Thursday 23 April 2026 16:28:32 +0000 (0:00:01.896) 0:00:51.233 ******** 2026-04-23 16:28:33.421199 | instance | ok: [instance] 2026-04-23 16:28:33.421264 | instance | 2026-04-23 16:28:33.421615 | instance | TASK [Set masquerade rule] ***************************************************** 2026-04-23 16:28:33.421662 | instance | Thursday 23 April 2026 16:28:33 +0000 (0:00:00.882) 0:00:52.115 ******** 2026-04-23 16:28:33.874326 | instance | changed: [instance] 2026-04-23 16:28:33.874571 | instance | 2026-04-23 16:28:33.874946 | instance | PLAY RECAP ********************************************************************* 2026-04-23 16:28:33.875210 | instance | instance : ok=24 changed=10 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 2026-04-23 16:28:33.875425 | instance | localhost : ok=40 changed=21 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 2026-04-23 16:28:33.875635 | instance | 2026-04-23 16:28:33.875842 | instance | Thursday 23 April 2026 16:28:33 +0000 (0:00:00.453) 0:00:52.569 ******** 2026-04-23 16:28:33.876039 | instance | =============================================================================== 2026-04-23 16:28:33.876240 | instance | Install depedencies ---------------------------------------------------- 19.59s 2026-04-23 16:28:33.876442 | instance | Generate SSH keys for missing variables --------------------------------- 7.95s 2026-04-23 16:28:33.876643 | 
instance | Create a volume group for each loop device ------------------------------ 3.00s 2026-04-23 16:28:33.876845 | instance | Create a logical volume for each loop device ---------------------------- 1.90s 2026-04-23 16:28:33.877048 | instance | Install "dirmngr" for GPG keyserver operations -------------------------- 1.14s 2026-04-23 16:28:33.877247 | instance | Gathering Facts --------------------------------------------------------- 1.10s 2026-04-23 16:28:33.877456 | instance | Create folders for workspace -------------------------------------------- 1.02s 2026-04-23 16:28:33.877657 | instance | Gathering Facts --------------------------------------------------------- 0.88s 2026-04-23 16:28:33.877856 | instance | Start loop devices ------------------------------------------------------ 0.77s 2026-04-23 16:28:33.878070 | instance | Generate endpoint skeleton for missing variables ------------------------ 0.75s 2026-04-23 16:28:33.878271 | instance | Gathering Facts --------------------------------------------------------- 0.73s 2026-04-23 16:28:33.878472 | instance | Purge "snapd" package --------------------------------------------------- 0.72s 2026-04-23 16:28:33.878669 | instance | Gathering Facts --------------------------------------------------------- 0.70s 2026-04-23 16:28:33.878884 | instance | Gathering Facts --------------------------------------------------------- 0.67s 2026-04-23 16:28:33.879084 | instance | Configure short hostname ------------------------------------------------ 0.64s 2026-04-23 16:28:33.879285 | instance | Write new Ceph control plane configuration file to disk ----------------- 0.58s 2026-04-23 16:28:33.879482 | instance | Start up service -------------------------------------------------------- 0.55s 2026-04-23 16:28:33.879704 | instance | Set permissions on loopback devices ------------------------------------- 0.54s 2026-04-23 16:28:33.879905 | instance | Create devices for Ceph 
------------------------------------------------- 0.54s 2026-04-23 16:28:33.880104 | instance | Write /etc/lvm/lvm.conf ------------------------------------------------- 0.47s 2026-04-23 16:28:33.961810 | instance | INFO [aio > prepare] Executed: Successful 2026-04-23 16:28:33.962518 | instance | INFO Molecule executed 1 scenario (1 successful) 2026-04-23 16:28:34.371953 | instance | ok: Runtime: 0:01:40.640563 2026-04-23 16:28:34.377572 | 2026-04-23 16:28:34.377628 | PLAY RECAP 2026-04-23 16:28:34.377677 | instance | ok: 12 changed: 9 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0 2026-04-23 16:28:34.377700 | 2026-04-23 16:28:34.573070 | PRE-RUN END RESULT_NORMAL: [untrusted : github.com/vexxhost/atmosphere/test-playbooks/molecule/pre.yml@main] 2026-04-23 16:28:34.580287 | RUN START: [untrusted : github.com/vexxhost/atmosphere/molecule/aio/converge.yml@main] 2026-04-23 16:28:35.234185 | 2026-04-23 16:28:35.234323 | PLAY [all] 2026-04-23 16:28:35.245849 | 2026-04-23 16:28:35.245931 | TASK [Build atmosphere binary] 2026-04-23 16:28:35.615373 | instance | go: downloading github.com/spf13/cobra v1.9.1 2026-04-23 16:28:35.622665 | instance | go: downloading golang.org/x/sync v0.18.0 2026-04-23 16:28:35.840946 | instance | go: downloading github.com/spf13/pflag v1.0.7 2026-04-23 16:28:42.298663 | instance | ok: Runtime: 0:00:06.488985 2026-04-23 16:28:42.305335 | 2026-04-23 16:28:42.305447 | TASK [Deploy with parallel orchestrator] 2026-04-23 16:28:42.507640 | instance | ==> Running preflight checks 2026-04-23 16:28:42.967189 | instance | [preflight] 2026-04-23 16:28:42.967323 | instance | [preflight] PLAY [Preflight checks] ******************************************************** 2026-04-23 16:28:42.967337 | instance | [preflight] 2026-04-23 16:28:42.967351 | instance | [preflight] TASK [Fail if atmosphere_ceph_enabled is set] ********************************** 2026-04-23 16:28:42.988717 | instance | [preflight] skipping: [instance] 2026-04-23 
16:28:42.988756 | instance | [preflight] 2026-04-23 16:28:42.988768 | instance | [preflight] PLAY RECAP ********************************************************************* 2026-04-23 16:28:42.988780 | instance | [preflight] instance : ok=0 changed=0 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 2026-04-23 16:28:42.988790 | instance | [preflight] 2026-04-23 16:28:43.049043 | instance | ==> Preflight checks passed 2026-04-23 16:28:43.049130 | instance | ==> Starting parallel deployment 2026-04-23 16:28:43.049232 | instance | ==> [multipathd] Starting deployment 2026-04-23 16:28:43.049493 | instance | ==> [ceph] Starting deployment 2026-04-23 16:28:43.049530 | instance | ==> [kubernetes] Starting deployment 2026-04-23 16:28:43.049588 | instance | ==> [udev] Starting deployment 2026-04-23 16:28:43.049628 | instance | ==> [iscsi] Starting deployment 2026-04-23 16:28:43.049996 | instance | ==> [lpfc] Starting deployment 2026-04-23 16:28:43.514661 | instance | [udev] 2026-04-23 16:28:43.514704 | instance | [udev] PLAY [controllers:computes] **************************************************** 2026-04-23 16:28:43.514713 | instance | [udev] 2026-04-23 16:28:43.514720 | instance | [udev] TASK [Gathering Facts] ********************************************************* 2026-04-23 16:28:43.532811 | instance | [multipathd] 2026-04-23 16:28:43.532854 | instance | [multipathd] PLAY [controllers:computes] **************************************************** 2026-04-23 16:28:43.532861 | instance | [multipathd] 2026-04-23 16:28:43.532867 | instance | [multipathd] TASK [Gathering Facts] ********************************************************* 2026-04-23 16:28:43.543439 | instance | [lpfc] 2026-04-23 16:28:43.543463 | instance | [lpfc] PLAY [controllers:computes] **************************************************** 2026-04-23 16:28:43.543470 | instance | [lpfc] 2026-04-23 16:28:43.543476 | instance | [lpfc] TASK [Gathering Facts] 
********************************************************* 2026-04-23 16:28:43.900236 | instance | [ceph] 2026-04-23 16:28:43.900293 | instance | [ceph] PLAY [all] ********************************************************************* 2026-04-23 16:28:43.900304 | instance | [ceph] 2026-04-23 16:28:43.900314 | instance | [ceph] TASK [Gathering Facts] ********************************************************* 2026-04-23 16:28:44.913231 | instance | [udev] [WARNING]: Platform linux on host instance is using the discovered Python 2026-04-23 16:28:44.913278 | instance | [udev] interpreter at /usr/bin/python3.10, but future installation of another Python 2026-04-23 16:28:44.913287 | instance | [udev] interpreter could change the meaning of that path. See 2026-04-23 16:28:44.913293 | instance | [udev] https://docs.ansible.com/ansible- 2026-04-23 16:28:44.913299 | instance | [udev] core/2.17/reference_appendices/interpreter_discovery.html for more information. 2026-04-23 16:28:44.923956 | instance | [udev] ok: [instance] 2026-04-23 16:28:44.923980 | instance | [udev] 2026-04-23 16:28:44.923988 | instance | [udev] TASK [vexxhost.atmosphere.udev : Add udev rules for Pure Storage FlashArray] *** 2026-04-23 16:28:45.035218 | instance | [multipathd] [WARNING]: Platform linux on host instance is using the discovered Python 2026-04-23 16:28:45.035262 | instance | [multipathd] interpreter at /usr/bin/python3.10, but future installation of another Python 2026-04-23 16:28:45.035268 | instance | [multipathd] interpreter could change the meaning of that path. See 2026-04-23 16:28:45.035273 | instance | [multipathd] https://docs.ansible.com/ansible- 2026-04-23 16:28:45.035278 | instance | [multipathd] core/2.17/reference_appendices/interpreter_discovery.html for more information. 
2026-04-23 16:28:45.045260 | instance | [multipathd] ok: [instance] 2026-04-23 16:28:45.045274 | instance | [multipathd] 2026-04-23 16:28:45.045281 | instance | [multipathd] TASK [vexxhost.atmosphere.multipathd : Add backports PPA] ********************** 2026-04-23 16:28:45.049417 | instance | [lpfc] [WARNING]: Platform linux on host instance is using the discovered Python 2026-04-23 16:28:45.049451 | instance | [lpfc] interpreter at /usr/bin/python3.10, but future installation of another Python 2026-04-23 16:28:45.049461 | instance | [lpfc] interpreter could change the meaning of that path. See 2026-04-23 16:28:45.049470 | instance | [lpfc] https://docs.ansible.com/ansible- 2026-04-23 16:28:45.049479 | instance | [lpfc] core/2.17/reference_appendices/interpreter_discovery.html for more information. 2026-04-23 16:28:45.060043 | instance | [lpfc] ok: [instance] 2026-04-23 16:28:45.060061 | instance | [lpfc] 2026-04-23 16:28:45.060071 | instance | [lpfc] TASK [vexxhost.atmosphere.lpfc : Detect if the "lpfc" module is loaded] ******** 2026-04-23 16:28:45.197836 | instance | [ceph] [WARNING]: Platform linux on host instance is using the discovered Python 2026-04-23 16:28:45.197881 | instance | [ceph] interpreter at /usr/bin/python3.10, but future installation of another Python 2026-04-23 16:28:45.197893 | instance | [ceph] interpreter could change the meaning of that path. See 2026-04-23 16:28:45.197902 | instance | [ceph] https://docs.ansible.com/ansible- 2026-04-23 16:28:45.197912 | instance | [ceph] core/2.17/reference_appendices/interpreter_discovery.html for more information. 
2026-04-23 16:28:45.206683 | instance | [ceph] ok: [instance]
2026-04-23 16:28:45.206698 | instance | [ceph]
2026-04-23 16:28:45.206708 | instance | [ceph] TASK [Fail if atmosphere_ceph_enabled is set] **********************************
2026-04-23 16:28:45.244809 | instance | [ceph] skipping: [instance]
2026-04-23 16:28:45.244828 | instance | [ceph]
2026-04-23 16:28:45.244837 | instance | [ceph] TASK [Set a fact with the "atmosphere_images" for other plays] *****************
2026-04-23 16:28:45.423058 | instance | [ceph] ok: [instance]
2026-04-23 16:28:45.423111 | instance | [ceph]
2026-04-23 16:28:45.423123 | instance | [ceph] PLAY [Deploy Ceph monitors & managers] *****************************************
2026-04-23 16:28:45.423133 | instance | [ceph]
2026-04-23 16:28:45.423142 | instance | [ceph] TASK [Gathering Facts] *********************************************************
2026-04-23 16:28:45.480781 | instance | [lpfc] ok: [instance]
2026-04-23 16:28:45.480824 | instance | [lpfc]
2026-04-23 16:28:45.480836 | instance | [lpfc] TASK [vexxhost.atmosphere.lpfc : Install the configuration file] ***************
2026-04-23 16:28:45.498753 | instance | [lpfc] skipping: [instance]
2026-04-23 16:28:45.498788 | instance | [lpfc]
2026-04-23 16:28:45.498798 | instance | [lpfc] TASK [vexxhost.atmosphere.lpfc : Get the values for the module parameters] *****
2026-04-23 16:28:45.537735 | instance | [lpfc] skipping: [instance] => (item=lpfc_lun_queue_depth)
2026-04-23 16:28:45.537762 | instance | [lpfc] skipping: [instance] => (item=lpfc_sg_seg_cnt)
2026-04-23 16:28:45.537772 | instance | [lpfc] skipping: [instance] => (item=lpfc_max_luns)
2026-04-23 16:28:45.537781 | instance | [lpfc] skipping: [instance] => (item=lpfc_enable_fc4_type)
2026-04-23 16:28:45.537790 | instance | [lpfc] skipping: [instance]
2026-04-23 16:28:45.537798 | instance | [lpfc]
2026-04-23 16:28:45.537808 | instance | [lpfc] TASK [vexxhost.atmosphere.lpfc : Detect if the run-time module parameters are set correctly] ***
2026-04-23 16:28:45.577086 | instance | [lpfc] skipping: [instance] => (item=lpfc_lun_queue_depth)
2026-04-23 16:28:45.577109 | instance | [lpfc] skipping: [instance] => (item=lpfc_sg_seg_cnt)
2026-04-23 16:28:45.577118 | instance | [lpfc] skipping: [instance] => (item=lpfc_max_luns)
2026-04-23 16:28:45.577127 | instance | [lpfc] skipping: [instance] => (item=lpfc_enable_fc4_type)
2026-04-23 16:28:45.577136 | instance | [lpfc] skipping: [instance]
2026-04-23 16:28:45.577145 | instance | [lpfc]
2026-04-23 16:28:45.577153 | instance | [lpfc] TASK [vexxhost.atmosphere.lpfc : Update "initramfs" if the configuration file has changed] ***
2026-04-23 16:28:45.600280 | instance | [lpfc] skipping: [instance]
2026-04-23 16:28:45.600313 | instance | [lpfc]
2026-04-23 16:28:45.600322 | instance | [lpfc] TASK [Reboot the system if the configuration file has changed] *****************
2026-04-23 16:28:45.623658 | instance | [lpfc] skipping: [instance]
2026-04-23 16:28:45.623720 | instance | [lpfc]
2026-04-23 16:28:45.623733 | instance | [lpfc] PLAY RECAP *********************************************************************
2026-04-23 16:28:45.623752 | instance | [lpfc] instance : ok=2 changed=0 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0
2026-04-23 16:28:45.623761 | instance | [lpfc]
2026-04-23 16:28:45.644928 | instance | [udev] changed: [instance]
2026-04-23 16:28:45.644987 | instance | [udev]
2026-04-23 16:28:45.644999 | instance | [udev] TASK [vexxhost.atmosphere.udev : Add udev rules for SCSI Unit Attention] *******
2026-04-23 16:28:45.732796 | instance | ==> [lpfc] Deployment complete
2026-04-23 16:28:46.204862 | instance | [udev] changed: [instance]
2026-04-23 16:28:46.204917 | instance | [udev]
2026-04-23 16:28:46.204929 | instance | [udev] RUNNING HANDLER [vexxhost.atmosphere.udev : Reload udev] ***********************
2026-04-23 16:28:46.499440 | instance | [ceph] ok: [instance]
2026-04-23 16:28:46.499488 | instance | [ceph]
2026-04-23 16:28:46.499499 | instance | [ceph] TASK [vexxhost.containers.forget_package : Forget package] *********************
2026-04-23 16:28:46.597120 | instance | [udev] ok: [instance]
2026-04-23 16:28:46.597167 | instance | [udev]
2026-04-23 16:28:46.597178 | instance | [udev] PLAY RECAP *********************************************************************
2026-04-23 16:28:46.597188 | instance | [udev] instance : ok=4 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-23 16:28:46.597198 | instance | [udev]
2026-04-23 16:28:46.660686 | instance | ==> [udev] Deployment complete
2026-04-23 16:28:46.943314 | instance | [ceph] ok: [instance]
2026-04-23 16:28:46.943377 | instance | [ceph]
2026-04-23 16:28:46.943389 | instance | [ceph] TASK [vexxhost.containers.package : Update state for tar] **********************
2026-04-23 16:28:46.982759 | instance | [ceph] skipping: [instance]
2026-04-23 16:28:46.982803 | instance | [ceph]
2026-04-23 16:28:46.982817 | instance | [ceph] TASK [vexxhost.containers.directory : Create directory (/var/lib/downloads)] ***
2026-04-23 16:28:47.411936 | instance | [ceph] changed: [instance]
2026-04-23 16:28:47.411988 | instance | [ceph]
2026-04-23 16:28:47.412000 | instance | [ceph] TASK [vexxhost.containers.download_artifact : Starting download of file] *******
2026-04-23 16:28:47.475539 | instance | [ceph] ok: [instance] => {
2026-04-23 16:28:47.475592 | instance | [ceph]     "msg": "https://github.com/opencontainers/runc/releases/download/v1.4.2/runc.amd64"
2026-04-23 16:28:47.475604 | instance | [ceph] }
2026-04-23 16:28:47.475614 | instance | [ceph]
2026-04-23 16:28:47.475623 | instance | [ceph] TASK [vexxhost.containers.download_artifact : Download item] *******************
2026-04-23 16:28:48.261680 | instance | [ceph] changed: [instance]
2026-04-23 16:28:48.261726 | instance | [ceph]
2026-04-23 16:28:48.261738 | instance | [ceph] TASK [vexxhost.containers.download_artifact : Extract archive] *****************
2026-04-23 16:28:48.306653 | instance | [ceph] skipping: [instance]
2026-04-23 16:28:48.306688 | instance | [ceph]
2026-04-23 16:28:48.306699 | instance | [ceph] TASK [vexxhost.containers.package : Update state for tar] **********************
2026-04-23 16:28:48.355304 | instance | [ceph] skipping: [instance]
2026-04-23 16:28:48.355338 | instance | [ceph]
2026-04-23 16:28:48.355348 | instance | [ceph] TASK [vexxhost.containers.forget_package : Forget package] *********************
2026-04-23 16:28:48.669419 | instance | [ceph] ok: [instance]
2026-04-23 16:28:48.669472 | instance | [ceph]
2026-04-23 16:28:48.669484 | instance | [ceph] TASK [vexxhost.containers.package : Update state for tar] **********************
2026-04-23 16:28:49.375286 | instance | [multipathd] changed: [instance]
2026-04-23 16:28:49.375345 | instance | [multipathd]
2026-04-23 16:28:49.375357 | instance | [multipathd] TASK [vexxhost.atmosphere.multipathd : Install the multipathd package] *********
2026-04-23 16:28:50.875402 | instance | [ceph] ok: [instance]
2026-04-23 16:28:50.875456 | instance | [ceph]
2026-04-23 16:28:50.875469 | instance | [ceph] TASK [vexxhost.containers.download_artifact : Starting download of file] *******
2026-04-23 16:28:50.943766 | instance | [ceph] ok: [instance] => {
2026-04-23 16:28:50.943842 | instance | [ceph]     "msg": "https://github.com/containerd/containerd/releases/download/v2.2.3/containerd-2.2.3-linux-amd64.tar.gz"
2026-04-23 16:28:50.943855 | instance | [ceph] }
2026-04-23 16:28:50.943866 | instance | [ceph]
2026-04-23 16:28:50.943876 | instance | [ceph] TASK [vexxhost.containers.download_artifact : Download item] *******************
2026-04-23 16:28:51.821335 | instance | [ceph] changed: [instance]
2026-04-23 16:28:51.821408 | instance | [ceph]
2026-04-23 16:28:51.821419 | instance | [ceph] TASK [vexxhost.containers.download_artifact : Extract archive] *****************
2026-04-23 16:28:54.849856 | instance | [ceph] changed: [instance]
2026-04-23 16:28:54.849920 | instance | [ceph]
2026-04-23 16:28:54.849932 | instance | [ceph] TASK [vexxhost.containers.containerd : Install SELinux packages] ***************
2026-04-23 16:28:54.886198 | instance | [ceph] skipping: [instance]
2026-04-23 16:28:54.886232 | instance | [ceph]
2026-04-23 16:28:54.886243 | instance | [ceph] TASK [vexxhost.containers.containerd : Set SELinux to permissive at runtime] ***
2026-04-23 16:28:54.920427 | instance | [ceph] skipping: [instance]
2026-04-23 16:28:54.920462 | instance | [ceph]
2026-04-23 16:28:54.920473 | instance | [ceph] TASK [vexxhost.containers.containerd : Persist SELinux permissive mode] ********
2026-04-23 16:28:54.964427 | instance | [ceph] skipping: [instance]
2026-04-23 16:28:54.964499 | instance | [ceph]
2026-04-23 16:28:54.964512 | instance | [ceph] TASK [vexxhost.containers.containerd : Install AppArmor packages] **************
2026-04-23 16:29:05.615993 | instance | [multipathd] changed: [instance]
2026-04-23 16:29:05.616314 | instance | [multipathd]
2026-04-23 16:29:05.616329 | instance | [multipathd] TASK [vexxhost.atmosphere.multipathd : Install the configuration file] *********
2026-04-23 16:29:06.373897 | instance | [multipathd] changed: [instance]
2026-04-23 16:29:06.373941 | instance | [multipathd]
2026-04-23 16:29:06.373947 | instance | [multipathd] RUNNING HANDLER [vexxhost.atmosphere.multipathd : Restart "multipathd"] ********
2026-04-23 16:29:07.129670 | instance | [multipathd] changed: [instance]
2026-04-23 16:29:07.129768 | instance | [multipathd]
2026-04-23 16:29:07.129780 | instance | [multipathd] PLAY RECAP *********************************************************************
2026-04-23 16:29:07.129791 | instance | [multipathd] instance : ok=5 changed=4 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-23 16:29:07.129801 | instance | [multipathd]
2026-04-23 16:29:07.195713 | instance | ==> [multipathd] Deployment complete
2026-04-23 16:29:08.005190 | instance | [kubernetes]
2026-04-23 16:29:08.005277 | instance | [kubernetes] PLAY [all] *********************************************************************
2026-04-23 16:29:08.005290 | instance | [kubernetes]
2026-04-23 16:29:08.005300 | instance | [kubernetes] TASK [Gathering Facts] *********************************************************
2026-04-23 16:29:09.317444 | instance | [kubernetes] [WARNING]: Platform linux on host instance is using the discovered Python
2026-04-23 16:29:09.317516 | instance | [kubernetes] interpreter at /usr/bin/python3.10, but future installation of another Python
2026-04-23 16:29:09.317528 | instance | [kubernetes] interpreter could change the meaning of that path. See
2026-04-23 16:29:09.317538 | instance | [kubernetes] https://docs.ansible.com/ansible-
2026-04-23 16:29:09.317548 | instance | [kubernetes] core/2.17/reference_appendices/interpreter_discovery.html for more information.
2026-04-23 16:29:09.338485 | instance | [kubernetes] ok: [instance]
2026-04-23 16:29:09.338554 | instance | [kubernetes]
2026-04-23 16:29:09.338566 | instance | [kubernetes] TASK [vexxhost.atmosphere.sysctl : Configure sysctl values] ********************
2026-04-23 16:29:16.150990 | instance | [kubernetes] changed: [instance] => (item={'name': 'fs.aio-max-nr', 'value': 1048576})
2026-04-23 16:29:16.151046 | instance | [kubernetes] changed: [instance] => (item={'name': 'net.ipv4.tcp_timestamps', 'value': 0})
2026-04-23 16:29:16.151057 | instance | [kubernetes] changed: [instance] => (item={'name': 'net.ipv4.tcp_sack', 'value': 1})
2026-04-23 16:29:16.151067 | instance | [kubernetes] changed: [instance] => (item={'name': 'net.core.netdev_budget', 'value': 1000})
2026-04-23 16:29:16.151077 | instance | [kubernetes] changed: [instance] => (item={'name': 'net.core.netdev_max_backlog', 'value': 250000})
2026-04-23 16:29:16.151086 | instance | [kubernetes] changed: [instance] => (item={'name': 'net.core.rmem_max', 'value': 4194304})
2026-04-23 16:29:16.151095 | instance | [kubernetes] changed: [instance] => (item={'name': 'net.core.wmem_max', 'value': 4194304})
2026-04-23 16:29:16.151104 | instance | [kubernetes] changed: [instance] => (item={'name': 'net.core.rmem_default', 'value': 4194304})
2026-04-23 16:29:16.151112 | instance | [kubernetes] changed: [instance] => (item={'name': 'net.core.wmem_default', 'value': 4194304})
2026-04-23 16:29:16.151121 | instance | [kubernetes] changed: [instance] => (item={'name': 'net.core.optmem_max', 'value': 4194304})
2026-04-23 16:29:16.151130 | instance | [kubernetes] changed: [instance] => (item={'name': 'net.ipv4.tcp_rmem', 'value': '4096 87380 4194304'})
2026-04-23 16:29:16.151170 | instance | [kubernetes] changed: [instance] => (item={'name': 'net.ipv4.tcp_wmem', 'value': '4096 65536 4194304'})
2026-04-23 16:29:16.151179 | instance | [kubernetes] changed: [instance] => (item={'name': 'net.ipv4.tcp_low_latency', 'value': 1})
2026-04-23 16:29:16.151188 | instance | [kubernetes] changed: [instance] => (item={'name': 'net.ipv4.tcp_adv_win_scale', 'value': 1})
2026-04-23 16:29:16.151197 | instance | [kubernetes] changed: [instance] => (item={'name': 'net.ipv4.neigh.default.gc_thresh1', 'value': 128})
2026-04-23 16:29:16.151206 | instance | [kubernetes] changed: [instance] => (item={'name': 'net.ipv4.neigh.default.gc_thresh2', 'value': 28872})
2026-04-23 16:29:16.151215 | instance | [kubernetes] changed: [instance] => (item={'name': 'net.ipv4.neigh.default.gc_thresh3', 'value': 32768})
2026-04-23 16:29:16.151223 | instance | [kubernetes] changed: [instance] => (item={'name': 'net.ipv6.neigh.default.gc_thresh1', 'value': 128})
2026-04-23 16:29:16.151232 | instance | [kubernetes] changed: [instance] => (item={'name': 'net.ipv6.neigh.default.gc_thresh2', 'value': 28872})
2026-04-23 16:29:16.151240 | instance | [kubernetes] changed: [instance] => (item={'name': 'net.ipv6.neigh.default.gc_thresh3', 'value': 32768})
2026-04-23 16:29:16.151249 | instance | [kubernetes]
2026-04-23 16:29:16.151258 | instance | [kubernetes] TASK [vexxhost.atmosphere.ethtool : Create folder for persistent configuration] ***
2026-04-23 16:29:16.572485 | instance | [kubernetes] changed: [instance]
2026-04-23 16:29:16.572550 | instance | [kubernetes]
2026-04-23 16:29:16.572562 | instance | [kubernetes] TASK [vexxhost.atmosphere.ethtool : Install persistent "ethtool" tuning] *******
2026-04-23 16:29:17.312411 | instance | [kubernetes] changed: [instance]
2026-04-23 16:29:17.312444 | instance | [kubernetes]
2026-04-23 16:29:17.312449 | instance | [kubernetes] TASK [vexxhost.atmosphere.ethtool : Run "ethtool" tuning] **********************
2026-04-23 16:29:17.771544 | instance | [kubernetes] ok: [instance]
2026-04-23 16:29:17.771625 | instance | [kubernetes]
2026-04-23 16:29:17.771636 | instance | [kubernetes] TASK [Set a fact with the "atmosphere_images" for other plays] *****************
2026-04-23 16:29:17.903763 | instance | [kubernetes] ok: [instance]
2026-04-23 16:29:17.903800 | instance | [kubernetes]
2026-04-23 16:29:17.903811 | instance | [kubernetes] PLAY [Configure Kubernetes VIP] ************************************************
2026-04-23 16:29:17.903821 | instance | [kubernetes]
2026-04-23 16:29:17.903840 | instance | [kubernetes] TASK [Gathering Facts] *********************************************************
2026-04-23 16:29:19.045466 | instance | [kubernetes] ok: [instance]
2026-04-23 16:29:19.045515 | instance | [kubernetes]
2026-04-23 16:29:19.045526 | instance | [kubernetes] TASK [vexxhost.containers.directory : Create directory (/etc/kubernetes/manifests)] ***
2026-04-23 16:29:19.351532 | instance | [kubernetes] changed: [instance]
2026-04-23 16:29:19.351899 | instance | [kubernetes]
2026-04-23 16:29:19.351910 | instance | [kubernetes] TASK [vexxhost.kubernetes.kube_vip : Uninstall legacy HA stack] ****************
2026-04-23 16:29:20.079762 | instance | [ceph] FAILED - RETRYING: [instance]: Install AppArmor packages (5 retries left).
2026-04-23 16:29:20.079808 | instance | [ceph] changed: [instance]
2026-04-23 16:29:20.079815 | instance | [ceph]
2026-04-23 16:29:20.079821 | instance | [ceph] TASK [vexxhost.containers.containerd : Create systemd service file for containerd] ***
2026-04-23 16:29:20.785812 | instance | [ceph] changed: [instance]
2026-04-23 16:29:20.785879 | instance | [ceph]
2026-04-23 16:29:20.785890 | instance | [ceph] TASK [vexxhost.containers.containerd : Create folders for configuration] *******
2026-04-23 16:29:20.794435 | instance | [kubernetes] ok: [instance] => (item=/etc/keepalived/keepalived.conf)
2026-04-23 16:29:20.794476 | instance | [kubernetes] ok: [instance] => (item=/etc/keepalived/check_apiserver.sh)
2026-04-23 16:29:20.794487 | instance | [kubernetes] ok: [instance] => (item=/etc/kubernetes/manifests/keepalived.yaml)
2026-04-23 16:29:20.794497 | instance | [kubernetes] ok: [instance] => (item=/etc/haproxy/haproxy.cfg)
2026-04-23 16:29:20.794506 | instance | [kubernetes] ok: [instance] => (item=/etc/kubernetes/manifests/haproxy.yaml)
2026-04-23 16:29:20.794516 | instance | [kubernetes]
2026-04-23 16:29:20.794525 | instance | [kubernetes] TASK [vexxhost.kubernetes.kube_vip : Switch API server to run on port 6443] ****
2026-04-23 16:29:21.761315 | instance | [kubernetes] failed: [instance] (item=/etc/kubernetes/manifests/kube-apiserver.yaml) => {"ansible_loop_var": "item", "changed": false, "item": "/etc/kubernetes/manifests/kube-apiserver.yaml", "msg": "Path /etc/kubernetes/manifests/kube-apiserver.yaml does not exist !", "rc": 257}
2026-04-23 16:29:21.761387 | instance | [kubernetes] failed: [instance] (item=/etc/kubernetes/controller-manager.conf) => {"ansible_loop_var": "item", "changed": false, "item": "/etc/kubernetes/controller-manager.conf", "msg": "Path /etc/kubernetes/controller-manager.conf does not exist !", "rc": 257}
2026-04-23 16:29:21.761407 | instance | [kubernetes] failed: [instance] (item=/etc/kubernetes/scheduler.conf) => {"ansible_loop_var": "item", "changed": false, "item": "/etc/kubernetes/scheduler.conf", "msg": "Path /etc/kubernetes/scheduler.conf does not exist !", "rc": 257}
2026-04-23 16:29:21.761417 | instance | [kubernetes] ...ignoring
2026-04-23 16:29:21.761427 | instance | [kubernetes]
2026-04-23 16:29:21.761437 | instance | [kubernetes] TASK [vexxhost.kubernetes.kube_vip : Check if super-admin.conf exists] *********
2026-04-23 16:29:22.057351 | instance | [kubernetes] ok: [instance]
2026-04-23 16:29:22.057414 | instance | [kubernetes]
2026-04-23 16:29:22.057425 | instance | [kubernetes] TASK [vexxhost.kubernetes.kube_vip : Check if kubeadm has already run] *********
2026-04-23 16:29:22.213895 | instance | [ceph] changed: [instance] => (item={'path': '/etc/containerd'})
2026-04-23 16:29:22.213933 | instance | [ceph] changed: [instance] => (item={'path': '/var/lib/containerd', 'mode': '0o700'})
2026-04-23 16:29:22.213944 | instance | [ceph] changed: [instance] => (item={'path': '/run/containerd', 'mode': '0o711'})
2026-04-23 16:29:22.213954 | instance | [ceph] changed: [instance] => (item={'path': '/run/containerd/io.containerd.grpc.v1.cri', 'mode': '0o700'})
2026-04-23 16:29:22.213963 | instance | [ceph] changed: [instance] => (item={'path': '/run/containerd/io.containerd.sandbox.controller.v1.shim', 'mode': '0o700'})
2026-04-23 16:29:22.213973 | instance | [ceph]
2026-04-23 16:29:22.213982 | instance | [ceph] TASK [vexxhost.containers.containerd : Create containerd config file] **********
2026-04-23 16:29:22.360767 | instance | [kubernetes] ok: [instance]
2026-04-23 16:29:22.360830 | instance | [kubernetes]
2026-04-23 16:29:22.360842 | instance | [kubernetes] TASK [vexxhost.kubernetes.kube_vip : Set fact with KUBECONFIG path] ************
2026-04-23 16:29:22.392312 | instance | [kubernetes] ok: [instance]
2026-04-23 16:29:22.392371 | instance | [kubernetes]
2026-04-23 16:29:22.392386 | instance | [kubernetes] TASK [vexxhost.kubernetes.kube_vip : Set fact with KUBECONFIG path (with super-admin.conf)] ***
2026-04-23 16:29:22.422872 | instance | [kubernetes] ok: [instance]
2026-04-23 16:29:22.422907 | instance | [kubernetes]
2026-04-23 16:29:22.422918 | instance | [kubernetes] TASK [vexxhost.kubernetes.kube_vip : Upload Kubernetes manifest] ***************
2026-04-23 16:29:22.901917 | instance | [ceph] changed: [instance]
2026-04-23 16:29:22.901969 | instance | [ceph]
2026-04-23 16:29:22.901980 | instance | [ceph] TASK [vexxhost.containers.containerd : Force any restarts if necessary] ********
2026-04-23 16:29:22.901990 | instance | [ceph]
2026-04-23 16:29:22.901999 | instance | [ceph] RUNNING HANDLER [vexxhost.containers.containerd : Reload systemd] **************
2026-04-23 16:29:23.041238 | instance | [kubernetes] ok: [instance]
2026-04-23 16:29:23.041303 | instance | [kubernetes]
2026-04-23 16:29:23.041315 | instance | [kubernetes] TASK [vexxhost.kubernetes.kube_vip : Ensure kube-vip configuration file] *******
2026-04-23 16:29:23.401577 | instance | [kubernetes] ok: [instance]
2026-04-23 16:29:23.401641 | instance | [kubernetes]
2026-04-23 16:29:23.401653 | instance | [kubernetes] TASK [vexxhost.kubernetes.kube_vip : Flush handlers] ***************************
2026-04-23 16:29:23.401663 | instance | [kubernetes]
2026-04-23 16:29:23.401672 | instance | [kubernetes] PLAY [Install Kubernetes] ******************************************************
2026-04-23 16:29:23.401680 | instance | [kubernetes]
2026-04-23 16:29:23.401689 | instance | [kubernetes] TASK [Gathering Facts] *********************************************************
2026-04-23 16:29:24.028363 | instance | [ceph] ok: [instance]
2026-04-23 16:29:24.028425 | instance | [ceph]
2026-04-23 16:29:24.028437 | instance | [ceph] RUNNING HANDLER [vexxhost.containers.containerd : Restart containerd] **********
2026-04-23 16:29:24.598008 | instance | [ceph] changed: [instance]
2026-04-23 16:29:24.598084 | instance | [ceph]
2026-04-23 16:29:24.598102 | instance | [ceph] TASK [vexxhost.containers.containerd : Enable and start service] ***************
2026-04-23 16:29:24.657461 | instance | [kubernetes] ok: [instance]
2026-04-23 16:29:24.657519 | instance | [kubernetes]
2026-04-23 16:29:24.657531 | instance | [kubernetes] TASK [vexxhost.containers.forget_package : Forget package] *********************
2026-04-23 16:29:24.983194 | instance | [kubernetes] ok: [instance]
2026-04-23 16:29:24.983251 | instance | [kubernetes]
2026-04-23 16:29:24.983263 | instance | [kubernetes] TASK [vexxhost.containers.package : Update state for tar] **********************
2026-04-23 16:29:25.021041 | instance | [kubernetes] skipping: [instance]
2026-04-23 16:29:25.021076 | instance | [kubernetes]
2026-04-23 16:29:25.021086 | instance | [kubernetes] TASK [vexxhost.containers.directory : Create directory (/var/lib/downloads)] ***
2026-04-23 16:29:25.297072 | instance | [ceph] changed: [instance]
2026-04-23 16:29:25.297181 | instance | [ceph]
2026-04-23 16:29:25.297193 | instance | [ceph] TASK [vexxhost.containers.forget_package : Forget package] *********************
2026-04-23 16:29:25.334011 | instance | [kubernetes] ok: [instance]
2026-04-23 16:29:25.334084 | instance | [kubernetes]
2026-04-23 16:29:25.334097 | instance | [kubernetes] TASK [vexxhost.containers.download_artifact : Starting download of file] *******
2026-04-23 16:29:25.384609 | instance | [kubernetes] ok: [instance] => {
2026-04-23 16:29:25.384698 | instance | [kubernetes]     "msg": "https://github.com/opencontainers/runc/releases/download/v1.4.2/runc.amd64"
2026-04-23 16:29:25.384711 | instance | [kubernetes] }
2026-04-23 16:29:25.384721 | instance | [kubernetes]
2026-04-23 16:29:25.384730 | instance | [kubernetes] TASK [vexxhost.containers.download_artifact : Download item] *******************
2026-04-23 16:29:25.625786 | instance | [ceph] ok: [instance]
2026-04-23 16:29:25.625861 | instance | [ceph]
2026-04-23 16:29:25.625873 | instance | [ceph] TASK [vexxhost.containers.download_artifact : Starting download of file] *******
2026-04-23 16:29:25.688635 | instance | [ceph] ok: [instance] => {
2026-04-23 16:29:25.688683 | instance | [ceph]     "msg": "https://download.docker.com/linux/static/stable/x86_64/docker-24.0.9.tgz"
2026-04-23 16:29:25.688690 | instance | [ceph] }
2026-04-23 16:29:25.688695 | instance | [ceph]
2026-04-23 16:29:25.688701 | instance | [ceph] TASK [vexxhost.containers.download_artifact : Download item] *******************
2026-04-23 16:29:25.892352 | instance | [kubernetes] ok: [instance]
2026-04-23 16:29:25.892398 | instance | [kubernetes]
2026-04-23 16:29:25.892404 | instance | [kubernetes] TASK [vexxhost.containers.download_artifact : Extract archive] *****************
2026-04-23 16:29:25.938240 | instance | [kubernetes] skipping: [instance]
2026-04-23 16:29:25.938275 | instance | [kubernetes]
2026-04-23 16:29:25.938281 | instance | [kubernetes] TASK [vexxhost.containers.forget_package : Forget package] *********************
2026-04-23 16:29:26.263367 | instance | [kubernetes] ok: [instance]
2026-04-23 16:29:26.263429 | instance | [kubernetes]
2026-04-23 16:29:26.263435 | instance | [kubernetes] TASK [vexxhost.containers.package : Update state for tar] **********************
2026-04-23 16:29:26.665796 | instance | [ceph] changed: [instance]
2026-04-23 16:29:26.665853 | instance | [ceph]
2026-04-23 16:29:26.665865 | instance | [ceph] TASK [vexxhost.containers.download_artifact : Extract archive] *****************
2026-04-23 16:29:27.505915 | instance | [kubernetes] ok: [instance]
2026-04-23 16:29:27.505971 | instance | [kubernetes]
2026-04-23 16:29:27.505983 | instance | [kubernetes] TASK [vexxhost.containers.download_artifact : Starting download of file] *******
2026-04-23 16:29:27.568860 | instance | [kubernetes] ok: [instance] => {
2026-04-23 16:29:27.568892 | instance | [kubernetes]     "msg": "https://github.com/containerd/containerd/releases/download/v2.2.3/containerd-2.2.3-linux-amd64.tar.gz"
2026-04-23 16:29:27.568903 | instance | [kubernetes] }
2026-04-23 16:29:27.568912 | instance | [kubernetes]
2026-04-23 16:29:27.568921 | instance | [kubernetes] TASK [vexxhost.containers.download_artifact : Download item] *******************
2026-04-23 16:29:28.007204 | instance | [kubernetes] ok: [instance]
2026-04-23 16:29:28.007252 | instance | [kubernetes]
2026-04-23 16:29:28.007261 | instance | [kubernetes] TASK [vexxhost.containers.download_artifact : Extract archive] *****************
2026-04-23 16:29:30.219765 | instance | [kubernetes] ok: [instance]
2026-04-23 16:29:30.219852 | instance | [kubernetes]
2026-04-23 16:29:30.219865 | instance | [kubernetes] TASK [vexxhost.containers.containerd : Install SELinux packages] ***************
2026-04-23 16:29:30.249604 | instance | [kubernetes] skipping: [instance]
2026-04-23 16:29:30.249637 | instance | [kubernetes]
2026-04-23 16:29:30.249647 | instance | [kubernetes] TASK [vexxhost.containers.containerd : Set SELinux to permissive at runtime] ***
2026-04-23 16:29:30.280089 | instance | [kubernetes] skipping: [instance]
2026-04-23 16:29:30.280128 | instance | [kubernetes]
2026-04-23 16:29:30.280142 | instance | [kubernetes] TASK [vexxhost.containers.containerd : Persist SELinux permissive mode] ********
2026-04-23 16:29:30.310138 | instance | [kubernetes] skipping: [instance]
2026-04-23 16:29:30.310179 | instance | [kubernetes]
2026-04-23 16:29:30.310193 | instance | [kubernetes] TASK [vexxhost.containers.containerd : Install AppArmor packages] **************
2026-04-23 16:29:31.231939 | instance | [ceph] changed: [instance]
2026-04-23 16:29:31.232009 | instance | [ceph]
2026-04-23 16:29:31.232021 | instance | [ceph] TASK [vexxhost.containers.docker : Install AppArmor packages] ******************
2026-04-23 16:29:31.387941 | instance | [kubernetes] ok: [instance]
2026-04-23 16:29:31.388002 | instance | [kubernetes]
2026-04-23 16:29:31.388013 | instance | [kubernetes] TASK [vexxhost.containers.containerd : Create systemd service file for containerd] ***
2026-04-23 16:29:31.920972 | instance | [kubernetes] ok: [instance]
2026-04-23 16:29:31.921033 | instance | [kubernetes]
2026-04-23 16:29:31.921044 | instance | [kubernetes] TASK [vexxhost.containers.containerd : Create folders for configuration] *******
2026-04-23 16:29:32.322383 | instance | [ceph] ok: [instance]
2026-04-23 16:29:32.322443 | instance | [ceph]
2026-04-23 16:29:32.322457 | instance | [ceph] TASK [vexxhost.containers.docker : Ensure group "docker" exists] ***************
2026-04-23 16:29:32.769236 | instance | [ceph] changed: [instance]
2026-04-23 16:29:32.769299 | instance | [ceph]
2026-04-23 16:29:32.769310 | instance | [ceph] TASK [vexxhost.containers.docker : Create systemd service file for docker] *****
2026-04-23 16:29:33.343341 | instance | [ceph] changed: [instance]
2026-04-23 16:29:33.343397 | instance | [ceph]
2026-04-23 16:29:33.343409 | instance | [ceph] TASK [vexxhost.containers.docker : Create folders for configuration] ***********
2026-04-23 16:29:33.365950 | instance | [kubernetes] ok: [instance] => (item={'path': '/etc/containerd'})
2026-04-23 16:29:33.366211 | instance | [kubernetes] ok: [instance] => (item={'path': '/var/lib/containerd', 'mode': '0o700'})
2026-04-23 16:29:33.366218 | instance | [kubernetes] ok: [instance] => (item={'path': '/run/containerd', 'mode': '0o711'})
2026-04-23 16:29:33.366227 | instance | [kubernetes] ok: [instance] => (item={'path': '/run/containerd/io.containerd.grpc.v1.cri', 'mode': '0o700'})
2026-04-23 16:29:33.366231 | instance | [kubernetes] ok: [instance] => (item={'path': '/run/containerd/io.containerd.sandbox.controller.v1.shim', 'mode': '0o700'})
2026-04-23 16:29:33.366236 | instance | [kubernetes]
2026-04-23 16:29:33.366240 | instance | [kubernetes] TASK [vexxhost.containers.containerd : Create containerd config file] **********
2026-04-23 16:29:33.992776 | instance | [kubernetes] ok: [instance]
2026-04-23 16:29:33.992839 | instance | [kubernetes]
2026-04-23 16:29:33.992851 | instance | [kubernetes] TASK [vexxhost.containers.containerd : Force any restarts if necessary] ********
2026-04-23 16:29:33.992861 | instance | [kubernetes]
2026-04-23 16:29:33.992870 | instance | [kubernetes] TASK [vexxhost.containers.containerd : Enable and start service] ***************
2026-04-23 16:29:34.230588 | instance | [ceph] changed: [instance] => (item={'path': '/etc/docker'})
2026-04-23 16:29:34.230644 | instance | [ceph] changed: [instance] => (item={'path': '/var/lib/docker', 'mode': '0o710'})
2026-04-23 16:29:34.230657 | instance | [ceph] changed: [instance] => (item={'path': '/run/docker', 'mode': '0o711'})
2026-04-23 16:29:34.230666 | instance | [ceph]
2026-04-23 16:29:34.230676 | instance | [ceph] TASK [vexxhost.containers.docker : Create systemd socket file for docker] ******
2026-04-23 16:29:34.708266 | instance | [kubernetes] ok: [instance]
2026-04-23 16:29:34.708596 | instance | [kubernetes]
2026-04-23 16:29:34.708611 | instance | [kubernetes] TASK [vexxhost.kubernetes.kubernetes_upgrade_check : Retrieve the "kubeadm-config" ConfigMap] ***
2026-04-23 16:29:34.788999 | instance | [ceph] changed: [instance]
2026-04-23 16:29:34.789062 | instance | [ceph]
2026-04-23 16:29:34.789070 | instance | [ceph] TASK [vexxhost.containers.docker : Create docker daemon config file] ***********
2026-04-23 16:29:35.365571 | instance | [ceph] changed: [instance]
2026-04-23 16:29:35.365626 | instance | [ceph]
2026-04-23 16:29:35.365640 | instance | [ceph] TASK [vexxhost.containers.docker : Force any restarts if necessary] ************
2026-04-23 16:29:35.365653 | instance | [ceph]
2026-04-23 16:29:35.365666 | instance | [ceph] RUNNING HANDLER [vexxhost.containers.containerd : Reload systemd] **************
2026-04-23 16:29:35.532131 | instance | [kubernetes] An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ansible_collections.kubernetes.core.plugins.module_utils.k8s.exceptions.CoreException: Could not create API client: Invalid kube-config file. No configuration found.
2026-04-23 16:29:35.532195 | instance | [kubernetes] fatal: [instance]: FAILED! => {"changed": false, "msg": "Could not create API client: Invalid kube-config file. No configuration found."}
2026-04-23 16:29:35.532208 | instance | [kubernetes] ...ignoring
2026-04-23 16:29:35.532219 | instance | [kubernetes]
2026-04-23 16:29:35.532230 | instance | [kubernetes] TASK [vexxhost.kubernetes.kubernetes_upgrade_check : Parse the ClusterConfiguration] ***
2026-04-23 16:29:35.561569 | instance | [kubernetes] skipping: [instance]
2026-04-23 16:29:35.561608 | instance | [kubernetes]
2026-04-23 16:29:35.561620 | instance | [kubernetes] TASK [vexxhost.kubernetes.kubernetes_upgrade_check : Retrieve the current Kubernetes version] ***
2026-04-23 16:29:35.596074 | instance | [kubernetes] skipping: [instance]
2026-04-23 16:29:35.596106 | instance | [kubernetes]
2026-04-23 16:29:35.596118 | instance | [kubernetes] TASK [vexxhost.kubernetes.kubernetes_upgrade_check : Extract major, minor, and patch versions] ***
2026-04-23 16:29:35.630066 | instance | [kubernetes] skipping: [instance]
2026-04-23 16:29:35.630094 | instance | [kubernetes]
2026-04-23 16:29:35.630100 | instance | [kubernetes] TASK [vexxhost.kubernetes.kubernetes_upgrade_check : Fail if we're jumping more than one minor version] ***
2026-04-23 16:29:35.662771 | instance | [kubernetes] skipping: [instance]
2026-04-23 16:29:35.662805 | instance | [kubernetes]
2026-04-23 16:29:35.662816 | instance | [kubernetes] TASK [vexxhost.kubernetes.kubernetes_upgrade_check : Set fact if we need to upgrade] ***
2026-04-23 16:29:35.702139 | instance | [kubernetes] skipping: [instance]
2026-04-23 16:29:35.702170 | instance | [kubernetes]
2026-04-23 16:29:35.702180 | instance | [kubernetes] TASK [vexxhost.containers.forget_package : Forget package] *********************
2026-04-23 16:29:36.009461 | instance | [kubernetes] ok: [instance]
2026-04-23 16:29:36.009538 | instance | [kubernetes]
2026-04-23 16:29:36.009553 | instance | [kubernetes] TASK [vexxhost.containers.download_artifact : Starting download of file] *******
2026-04-23 16:29:36.053646 | instance | [kubernetes] ok: [instance] => {
2026-04-23 16:29:36.053684 | instance | [kubernetes]     "msg": "https://dl.k8s.io/release/v1.28.13/bin/linux/amd64/kubeadm"
2026-04-23 16:29:36.053695 | instance | [kubernetes] }
2026-04-23 16:29:36.053704 | instance | [kubernetes]
2026-04-23 16:29:36.053713 | instance | [kubernetes] TASK [vexxhost.containers.download_artifact : Download item] *******************
2026-04-23 16:29:36.150116 | instance | [ceph] ok: [instance]
2026-04-23 16:29:36.150171 | instance | [ceph]
2026-04-23 16:29:36.150180 | instance | [ceph] RUNNING HANDLER [vexxhost.containers.docker : Restart docker] ******************
2026-04-23 16:29:37.116281 | instance | [ceph] changed: [instance]
2026-04-23 16:29:37.116356 | instance | [ceph]
2026-04-23 16:29:37.116367 | instance | [ceph] TASK [vexxhost.containers.docker : Enable and start service] *******************
2026-04-23 16:29:37.790615 | instance | [ceph] changed: [instance]
2026-04-23 16:29:37.790673 | instance | [ceph]
2026-04-23 16:29:37.790684 | instance | [ceph] TASK [vexxhost.ceph.cephadm : Gather variables for each operating system] ******
2026-04-23 16:29:37.845797 | instance | [ceph] ok: [instance] => (item=/home/zuul/.ansible/collections/ansible_collections/vexxhost/ceph/roles/cephadm/vars/ubuntu-22.04.yml)
2026-04-23 16:29:37.845832 | instance | [ceph]
2026-04-23 16:29:37.845843 | instance | [ceph] TASK [vexxhost.ceph.cephadm : Install packages] ********************************
2026-04-23 16:29:40.158404 | instance | [kubernetes] changed: [instance]
2026-04-23 16:29:40.158474 | instance | [kubernetes]
2026-04-23 16:29:40.158545 | instance | [kubernetes] TASK [vexxhost.containers.download_artifact : Extract archive] *****************
2026-04-23 16:29:40.202911 | instance | [kubernetes] skipping: [instance]
2026-04-23 16:29:40.202958 | instance | [kubernetes]
2026-04-23 16:29:40.202969 | instance | [kubernetes] TASK [vexxhost.containers.forget_package : Forget package] *********************
2026-04-23 16:29:40.524785 | instance | [kubernetes] ok: [instance]
2026-04-23 16:29:40.524868 | instance | [kubernetes]
2026-04-23 16:29:40.524881 | instance | [kubernetes] TASK [vexxhost.containers.download_artifact : Starting download of file] *******
2026-04-23 16:29:40.567924 | instance | [kubernetes] ok: [instance] => {
2026-04-23 16:29:40.567963 | instance | [kubernetes]     "msg": "https://dl.k8s.io/release/v1.28.13/bin/linux/amd64/kubectl"
2026-04-23 16:29:40.567975 | instance | [kubernetes] }
2026-04-23 16:29:40.567984 | instance | [kubernetes]
2026-04-23 16:29:40.567993 | instance | [kubernetes] TASK [vexxhost.containers.download_artifact : Download item] *******************
2026-04-23 16:29:41.364387 | instance | [kubernetes] changed: [instance]
2026-04-23 16:29:41.364457 | instance | [kubernetes]
2026-04-23 16:29:41.364470 | instance | [kubernetes] TASK [vexxhost.containers.download_artifact : Extract archive] *****************
2026-04-23 16:29:41.418289 | instance | [kubernetes] skipping: [instance]
2026-04-23 16:29:41.418362 | instance | [kubernetes]
2026-04-23 16:29:41.418374 | instance | [kubernetes] TASK [vexxhost.containers.containerd : Install SELinux packages] ***************
2026-04-23 16:29:41.452831 | instance | [kubernetes] skipping: [instance]
2026-04-23 16:29:41.452881 | instance | [kubernetes]
2026-04-23 16:29:41.452893 | instance | [kubernetes] TASK [vexxhost.containers.containerd : Set SELinux to permissive at runtime] ***
2026-04-23 16:29:41.484858 | instance | [kubernetes] skipping: [instance]
2026-04-23 16:29:41.484902 | instance | [kubernetes]
2026-04-23 16:29:41.484922 | instance | [kubernetes] TASK [vexxhost.containers.containerd : Persist SELinux permissive mode] ********
2026-04-23 16:29:41.517545 | instance | [kubernetes] skipping: [instance]
2026-04-23 16:29:41.517585 | instance | [kubernetes]
2026-04-23 16:29:41.517597 | instance | [kubernetes] TASK [vexxhost.containers.containerd : Install AppArmor packages] **************
2026-04-23 16:29:42.760593 | instance | [kubernetes] ok: [instance]
2026-04-23 16:29:42.760660 | instance | [kubernetes]
2026-04-23 16:29:42.760673 | instance | [kubernetes] TASK [vexxhost.containers.containerd : Create systemd service file for containerd] ***
2026-04-23 16:29:43.150666 | instance | [ceph] changed: [instance]
2026-04-23 16:29:43.150731 | instance | [ceph]
2026-04-23 16:29:43.150742 | instance | [ceph] TASK [vexxhost.ceph.cephadm : Ensure services are started] *********************
2026-04-23 16:29:43.298719 | instance | [kubernetes] ok: [instance]
2026-04-23 16:29:43.298779 | instance | [kubernetes]
2026-04-23 16:29:43.298791 | instance | [kubernetes] TASK [vexxhost.containers.containerd : Create folders for configuration] *******
2026-04-23 16:29:44.008067 | instance | [ceph] ok: [instance] => (item=chronyd)
2026-04-23 16:29:44.008126 | instance | [ceph] ok: [instance] => (item=sshd)
2026-04-23 16:29:44.008137 | instance | [ceph]
2026-04-23 16:29:44.008147 | instance | [ceph] TASK [vexxhost.ceph.cephadm : Download "cephadm"] ******************************
2026-04-23 16:29:44.528738 | instance | [ceph] changed: [instance]
2026-04-23 16:29:44.528782 | instance | [ceph]
2026-04-23 16:29:44.528790 | instance | [ceph] TASK [vexxhost.ceph.cephadm : Remove cephadm from old path] ********************
2026-04-23 16:29:44.761758 | instance | [kubernetes] ok: [instance] => (item={'path': '/etc/containerd'})
2026-04-23 16:29:44.761814 | instance | [kubernetes] ok: [instance] => (item={'path': '/var/lib/containerd', 'mode': '0o700'})
2026-04-23 16:29:44.761826 | instance | [kubernetes] ok: [instance] => (item={'path': '/run/containerd', 'mode': '0o711'})
2026-04-23 16:29:44.761835 | instance | [kubernetes] ok: [instance] => (item={'path': '/run/containerd/io.containerd.grpc.v1.cri', 'mode': '0o700'})
2026-04-23 16:29:44.761845 | instance | [kubernetes] ok: [instance] => (item={'path': '/run/containerd/io.containerd.sandbox.controller.v1.shim', 'mode': '0o700'})
2026-04-23 16:29:44.761855 | instance | [kubernetes]
2026-04-23 16:29:44.761864 | instance | [kubernetes] TASK [vexxhost.containers.containerd : Create containerd config file] **********
2026-04-23 16:29:44.866969 | instance | [ceph] ok: [instance]
2026-04-23 16:29:44.867067 | instance | [ceph]
2026-04-23 16:29:44.867082 | instance | [ceph] TASK [vexxhost.ceph.cephadm : Ensure "cephadm" user is present] ****************
2026-04-23 16:29:45.372449 | instance | [ceph] changed: [instance]
2026-04-23 16:29:45.372505 | instance | [ceph]
2026-04-23 16:29:45.372512 | instance | [ceph] TASK [vexxhost.ceph.cephadm : Allow "cephadm" user to have passwordless sudo] ***
2026-04-23 16:29:45.401208 | instance | [kubernetes] ok: [instance]
2026-04-23 16:29:45.401250 | instance | [kubernetes]
2026-04-23 16:29:45.401257 | instance | [kubernetes] TASK [vexxhost.containers.containerd : Force any restarts if necessary] ********
2026-04-23 16:29:45.401264 | instance | [kubernetes]
2026-04-23 16:29:45.401274 | instance | [kubernetes] TASK [vexxhost.containers.containerd : Enable and start service] ***************
2026-04-23 16:29:45.785006 | instance | [ceph] changed: [instance]
2026-04-23 16:29:45.785069 | instance | [ceph]
2026-04-23 16:29:45.785081 | instance | [ceph] TASK [vexxhost.ceph.mon : Get `cephadm ls` status] *****************************
2026-04-23 16:29:45.881994 | instance | [kubernetes] ok: [instance]
2026-04-23 16:29:45.882053 | instance | [kubernetes]
2026-04-23 16:29:45.882065 | instance | [kubernetes] TASK [vexxhost.containers.forget_package : Forget package] *********************
2026-04-23 16:29:46.192668 | instance |
[kubernetes] ok: [instance] 2026-04-23 16:29:46.192731 | instance | [kubernetes] 2026-04-23 16:29:46.192739 | instance | [kubernetes] TASK [vexxhost.containers.download_artifact : Starting download of file] ******* 2026-04-23 16:29:46.244436 | instance | [kubernetes] ok: [instance] => { 2026-04-23 16:29:46.244496 | instance | [kubernetes] "msg": "https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.35.0/crictl-v1.35.0-linux-amd64.tar.gz" 2026-04-23 16:29:46.244509 | instance | [kubernetes] } 2026-04-23 16:29:46.244519 | instance | [kubernetes] 2026-04-23 16:29:46.244530 | instance | [kubernetes] TASK [vexxhost.containers.download_artifact : Download item] ******************* 2026-04-23 16:29:47.157746 | instance | [kubernetes] changed: [instance] 2026-04-23 16:29:47.157822 | instance | [kubernetes] 2026-04-23 16:29:47.157833 | instance | [kubernetes] TASK [vexxhost.containers.download_artifact : Extract archive] ***************** 2026-04-23 16:29:47.497407 | instance | [ceph] ok: [instance] 2026-04-23 16:29:47.497469 | instance | [ceph] 2026-04-23 16:29:47.497482 | instance | [ceph] TASK [vexxhost.ceph.mon : Parse the `cephadm ls` output] *********************** 2026-04-23 16:29:47.552380 | instance | [ceph] ok: [instance] 2026-04-23 16:29:47.552441 | instance | [ceph] 2026-04-23 16:29:47.552453 | instance | [ceph] TASK [vexxhost.ceph.mon : Assimilate existing configs in `ceph.conf`] ********** 2026-04-23 16:29:47.585938 | instance | [ceph] skipping: [instance] 2026-04-23 16:29:47.586007 | instance | [ceph] 2026-04-23 16:29:47.586022 | instance | [ceph] TASK [vexxhost.ceph.mon : Adopt monitor to cluster] **************************** 2026-04-23 16:29:47.620981 | instance | [ceph] skipping: [instance] 2026-04-23 16:29:47.621042 | instance | [ceph] 2026-04-23 16:29:47.621062 | instance | [ceph] TASK [vexxhost.ceph.mon : Adopt manager to cluster] **************************** 2026-04-23 16:29:47.656454 | instance | [ceph] skipping: [instance] 2026-04-23 
16:29:47.656488 | instance | [ceph] 2026-04-23 16:29:47.656499 | instance | [ceph] TASK [vexxhost.ceph.mon : Enable "cephadm" mgr module] ************************* 2026-04-23 16:29:47.687790 | instance | [ceph] skipping: [instance] 2026-04-23 16:29:47.687839 | instance | [ceph] 2026-04-23 16:29:47.687848 | instance | [ceph] TASK [vexxhost.ceph.mon : Set orchestrator backend to "cephadm"] *************** 2026-04-23 16:29:47.717083 | instance | [ceph] skipping: [instance] 2026-04-23 16:29:47.717118 | instance | [ceph] 2026-04-23 16:29:47.717126 | instance | [ceph] TASK [vexxhost.ceph.mon : Use `cephadm` user for cephadm] ********************** 2026-04-23 16:29:47.754752 | instance | [ceph] skipping: [instance] 2026-04-23 16:29:47.754787 | instance | [ceph] 2026-04-23 16:29:47.754797 | instance | [ceph] TASK [vexxhost.ceph.mon : Generate "cephadm" key] ****************************** 2026-04-23 16:29:47.788728 | instance | [ceph] skipping: [instance] 2026-04-23 16:29:47.788759 | instance | [ceph] 2026-04-23 16:29:47.788772 | instance | [ceph] TASK [vexxhost.ceph.mon : Set Ceph Monitor IP address] ************************* 2026-04-23 16:29:47.894283 | instance | [ceph] ok: [instance] 2026-04-23 16:29:47.894320 | instance | [ceph] 2026-04-23 16:29:47.894330 | instance | [ceph] TASK [vexxhost.ceph.mon : Check if any node is bootstrapped] ******************* 2026-04-23 16:29:48.237652 | instance | [ceph] ok: [instance] => (item=instance) 2026-04-23 16:29:48.237728 | instance | [ceph] 2026-04-23 16:29:48.237736 | instance | [ceph] TASK [vexxhost.ceph.mon : Select pre-existing bootstrap node if exists] ******** 2026-04-23 16:29:48.294695 | instance | [ceph] ok: [instance] 2026-04-23 16:29:48.294730 | instance | [ceph] 2026-04-23 16:29:48.294740 | instance | [ceph] TASK [vexxhost.ceph.mon : Bootstrap cluster] *********************************** 2026-04-23 16:29:48.371869 | instance | [ceph] included: 
/home/zuul/.ansible/collections/ansible_collections/vexxhost/ceph/roles/mon/tasks/bootstrap-ceph.yml for instance 2026-04-23 16:29:48.371940 | instance | [ceph] 2026-04-23 16:29:48.371954 | instance | [ceph] TASK [vexxhost.ceph.mon : Generate temporary file for "ceph.conf"] ************* 2026-04-23 16:29:48.628703 | instance | [kubernetes] changed: [instance] 2026-04-23 16:29:48.628756 | instance | [kubernetes] 2026-04-23 16:29:48.628768 | instance | [kubernetes] TASK [vexxhost.containers.download_artifact : Starting download of file] ******* 2026-04-23 16:29:48.692961 | instance | [kubernetes] ok: [instance] => { 2026-04-23 16:29:48.693008 | instance | [kubernetes] "msg": "https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.35.0/critest-v1.35.0-linux-amd64.tar.gz" 2026-04-23 16:29:48.693072 | instance | [kubernetes] } 2026-04-23 16:29:48.693092 | instance | [kubernetes] 2026-04-23 16:29:48.693102 | instance | [kubernetes] TASK [vexxhost.containers.download_artifact : Download item] ******************* 2026-04-23 16:29:48.756533 | instance | [ceph] changed: [instance] 2026-04-23 16:29:48.756578 | instance | [ceph] 2026-04-23 16:29:48.756584 | instance | [ceph] TASK [vexxhost.ceph.mon : Include extra configuration values] ****************** 2026-04-23 16:29:49.426983 | instance | [kubernetes] changed: [instance] 2026-04-23 16:29:49.427030 | instance | [kubernetes] 2026-04-23 16:29:49.427038 | instance | [kubernetes] TASK [vexxhost.containers.download_artifact : Extract archive] ***************** 2026-04-23 16:29:49.750660 | instance | [ceph] changed: [instance] => (item={'option': 'mon allow pool size one', 'section': 'global', 'value': True}) 2026-04-23 16:29:49.750716 | instance | [ceph] changed: [instance] => (item={'option': 'osd crush chooseleaf type', 'section': 'global', 'value': 0}) 2026-04-23 16:29:49.750727 | instance | [ceph] changed: [instance] => (item={'option': 'auth allow insecure global id reclaim', 'section': 'mon', 'value': False}) 
2026-04-23 16:29:49.750737 | instance | [ceph] 2026-04-23 16:29:49.750754 | instance | [ceph] TASK [vexxhost.ceph.mon : Run Bootstrap command] ******************************* 2026-04-23 16:29:50.947025 | instance | [kubernetes] changed: [instance] 2026-04-23 16:29:50.947103 | instance | [kubernetes] 2026-04-23 16:29:50.947115 | instance | [kubernetes] TASK [vexxhost.containers.cri_tools : Create crictl config] ******************** 2026-04-23 16:29:51.544499 | instance | [kubernetes] changed: [instance] 2026-04-23 16:29:51.544554 | instance | [kubernetes] 2026-04-23 16:29:51.544561 | instance | [kubernetes] TASK [vexxhost.containers.directory : Create directory (/opt/cni/bin)] ********* 2026-04-23 16:29:51.874645 | instance | [kubernetes] changed: [instance] 2026-04-23 16:29:51.874705 | instance | [kubernetes] 2026-04-23 16:29:51.874717 | instance | [kubernetes] TASK [vexxhost.containers.forget_package : Forget package] ********************* 2026-04-23 16:29:52.220485 | instance | [kubernetes] ok: [instance] 2026-04-23 16:29:52.220541 | instance | [kubernetes] 2026-04-23 16:29:52.220553 | instance | [kubernetes] TASK [vexxhost.containers.download_artifact : Starting download of file] ******* 2026-04-23 16:29:52.278879 | instance | [kubernetes] ok: [instance] => { 2026-04-23 16:29:52.278913 | instance | [kubernetes] "msg": "https://github.com/containernetworking/plugins/releases/download/v1.9.1/cni-plugins-linux-amd64-v1.9.1.tgz" 2026-04-23 16:29:52.278926 | instance | [kubernetes] } 2026-04-23 16:29:52.278935 | instance | [kubernetes] 2026-04-23 16:29:52.278945 | instance | [kubernetes] TASK [vexxhost.containers.download_artifact : Download item] ******************* 2026-04-23 16:29:53.747621 | instance | [kubernetes] changed: [instance] 2026-04-23 16:29:53.747673 | instance | [kubernetes] 2026-04-23 16:29:53.747685 | instance | [kubernetes] TASK [vexxhost.containers.download_artifact : Extract archive] ***************** 2026-04-23 16:29:56.536965 | instance | 
[kubernetes] changed: [instance] 2026-04-23 16:29:56.537027 | instance | [kubernetes] 2026-04-23 16:29:56.537036 | instance | [kubernetes] TASK [vexxhost.containers.cni_plugins : Gather variables for each operating system] *** 2026-04-23 16:29:56.598340 | instance | [kubernetes] ok: [instance] => (item=/home/zuul/.ansible/collections/ansible_collections/vexxhost/containers/roles/cni_plugins/vars/debian.yml) 2026-04-23 16:29:56.598382 | instance | [kubernetes] 2026-04-23 16:29:56.598389 | instance | [kubernetes] TASK [vexxhost.containers.cni_plugins : Install additional packages] *********** 2026-04-23 16:29:57.719097 | instance | [kubernetes] ok: [instance] 2026-04-23 16:29:57.719154 | instance | [kubernetes] 2026-04-23 16:29:57.719167 | instance | [kubernetes] TASK [vexxhost.containers.cni_plugins : Ensure IPv6 is enabled] **************** 2026-04-23 16:29:58.020885 | instance | [kubernetes] changed: [instance] 2026-04-23 16:29:58.020938 | instance | [kubernetes] 2026-04-23 16:29:58.020951 | instance | [kubernetes] TASK [vexxhost.containers.cni_plugins : Enable kernel modules on-boot] ********* 2026-04-23 16:29:58.568634 | instance | [kubernetes] changed: [instance] 2026-04-23 16:29:58.568692 | instance | [kubernetes] 2026-04-23 16:29:58.568700 | instance | [kubernetes] TASK [vexxhost.containers.cni_plugins : Enable kernel modules in runtime] ****** 2026-04-23 16:29:59.830344 | instance | [kubernetes] changed: [instance] => (item=br_netfilter) 2026-04-23 16:29:59.830398 | instance | [kubernetes] ok: [instance] => (item=ip_tables) 2026-04-23 16:29:59.830411 | instance | [kubernetes] changed: [instance] => (item=ip6_tables) 2026-04-23 16:29:59.830421 | instance | [kubernetes] ok: [instance] => (item=nf_conntrack) 2026-04-23 16:29:59.830431 | instance | [kubernetes] 2026-04-23 16:29:59.830442 | instance | [kubernetes] TASK [vexxhost.containers.forget_package : Forget package] ********************* 2026-04-23 16:30:00.135776 | instance | [kubernetes] ok: [instance] 
2026-04-23 16:30:00.135844 | instance | [kubernetes] 2026-04-23 16:30:00.135856 | instance | [kubernetes] TASK [vexxhost.containers.download_artifact : Starting download of file] ******* 2026-04-23 16:30:00.175981 | instance | [kubernetes] ok: [instance] => { 2026-04-23 16:30:00.176030 | instance | [kubernetes] "msg": "https://dl.k8s.io/release/v1.28.13/bin/linux/amd64/kubelet" 2026-04-23 16:30:00.176038 | instance | [kubernetes] } 2026-04-23 16:30:00.176044 | instance | [kubernetes] 2026-04-23 16:30:00.176050 | instance | [kubernetes] TASK [vexxhost.containers.download_artifact : Download item] ******************* 2026-04-23 16:30:16.132314 | instance | [kubernetes] changed: [instance] 2026-04-23 16:30:16.132389 | instance | [kubernetes] 2026-04-23 16:30:16.132402 | instance | [kubernetes] TASK [vexxhost.containers.download_artifact : Extract archive] ***************** 2026-04-23 16:30:16.165838 | instance | [kubernetes] skipping: [instance] 2026-04-23 16:30:16.165877 | instance | [kubernetes] 2026-04-23 16:30:16.165891 | instance | [kubernetes] TASK [vexxhost.kubernetes.kubelet : Gather variables for each operating system] *** 2026-04-23 16:30:16.215738 | instance | [kubernetes] ok: [instance] => (item=/home/zuul/.ansible/collections/ansible_collections/vexxhost/kubernetes/roles/kubelet/vars/debian.yml) 2026-04-23 16:30:16.215765 | instance | [kubernetes] 2026-04-23 16:30:16.215772 | instance | [kubernetes] TASK [vexxhost.kubernetes.kubelet : Install coreutils] ************************* 2026-04-23 16:30:16.244162 | instance | [kubernetes] skipping: [instance] 2026-04-23 16:30:16.244183 | instance | [kubernetes] 2026-04-23 16:30:16.244190 | instance | [kubernetes] TASK [vexxhost.kubernetes.kubelet : Install additional packages] *************** 2026-04-23 16:30:21.335013 | instance | [kubernetes] changed: [instance] 2026-04-23 16:30:21.335076 | instance | [kubernetes] 2026-04-23 16:30:21.335088 | instance | [kubernetes] TASK [vexxhost.kubernetes.kubelet : Configure 
sysctl values] ******************* 2026-04-23 16:30:23.464848 | instance | [kubernetes] changed: [instance] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2026-04-23 16:30:23.464909 | instance | [kubernetes] changed: [instance] => (item={'name': 'net.bridge.bridge-nf-call-iptables', 'value': 1}) 2026-04-23 16:30:23.464921 | instance | [kubernetes] changed: [instance] => (item={'name': 'net.bridge.bridge-nf-call-ip6tables', 'value': 1}) 2026-04-23 16:30:23.464930 | instance | [kubernetes] changed: [instance] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 0}) 2026-04-23 16:30:23.464939 | instance | [kubernetes] changed: [instance] => (item={'name': 'fs.inotify.max_queued_events', 'value': 1048576}) 2026-04-23 16:30:23.464964 | instance | [kubernetes] changed: [instance] => (item={'name': 'fs.inotify.max_user_instances', 'value': 8192}) 2026-04-23 16:30:23.464973 | instance | [kubernetes] changed: [instance] => (item={'name': 'fs.inotify.max_user_watches', 'value': 1048576}) 2026-04-23 16:30:23.464982 | instance | [kubernetes] 2026-04-23 16:30:23.464991 | instance | [kubernetes] TASK [vexxhost.kubernetes.kubelet : Create folders for kubernetes configuration] *** 2026-04-23 16:30:24.347393 | instance | [kubernetes] changed: [instance] => (item=/etc/systemd/system/kubelet.service.d) 2026-04-23 16:30:24.347473 | instance | [kubernetes] ok: [instance] => (item=/etc/kubernetes) 2026-04-23 16:30:24.347485 | instance | [kubernetes] ok: [instance] => (item=/etc/kubernetes/manifests) 2026-04-23 16:30:24.347495 | instance | [kubernetes] 2026-04-23 16:30:24.347505 | instance | [kubernetes] TASK [vexxhost.kubernetes.kubelet : Add kubelet systemd service config] ******** 2026-04-23 16:30:24.990023 | instance | [kubernetes] changed: [instance] 2026-04-23 16:30:24.990086 | instance | [kubernetes] 2026-04-23 16:30:24.990098 | instance | [kubernetes] TASK [vexxhost.kubernetes.kubelet : Add kubeadm dropin for kubelet systemd service config] *** 2026-04-23 
16:30:25.556626 | instance | [kubernetes] changed: [instance] 2026-04-23 16:30:25.556690 | instance | [kubernetes] 2026-04-23 16:30:25.556702 | instance | [kubernetes] TASK [vexxhost.kubernetes.kubelet : Check swap status] ************************* 2026-04-23 16:30:25.871912 | instance | [kubernetes] ok: [instance] 2026-04-23 16:30:25.871960 | instance | [kubernetes] 2026-04-23 16:30:25.871966 | instance | [kubernetes] TASK [vexxhost.kubernetes.kubelet : Disable swap] ****************************** 2026-04-23 16:30:25.902798 | instance | [kubernetes] skipping: [instance] 2026-04-23 16:30:25.902846 | instance | [kubernetes] 2026-04-23 16:30:25.902854 | instance | [kubernetes] TASK [vexxhost.kubernetes.kubelet : Remove swapfile from /etc/fstab] *********** 2026-04-23 16:30:26.601050 | instance | [kubernetes] ok: [instance] => (item=swap) 2026-04-23 16:30:26.601137 | instance | [kubernetes] ok: [instance] => (item=none) 2026-04-23 16:30:26.601153 | instance | [kubernetes] 2026-04-23 16:30:26.601164 | instance | [kubernetes] TASK [vexxhost.kubernetes.kubelet : Create noswap systemd service config file] *** 2026-04-23 16:30:27.173999 | instance | [kubernetes] changed: [instance] 2026-04-23 16:30:27.174042 | instance | [kubernetes] 2026-04-23 16:30:27.174053 | instance | [kubernetes] TASK [vexxhost.kubernetes.kubelet : Enable noswap service] ********************* 2026-04-23 16:30:27.844489 | instance | [kubernetes] changed: [instance] 2026-04-23 16:30:27.844543 | instance | [kubernetes] 2026-04-23 16:30:27.844551 | instance | [kubernetes] TASK [vexxhost.kubernetes.kubelet : Force any restarts if necessary] *********** 2026-04-23 16:30:27.844557 | instance | [kubernetes] 2026-04-23 16:30:27.844563 | instance | [kubernetes] RUNNING HANDLER [vexxhost.kubernetes.kubelet : Reload systemd] ***************** 2026-04-23 16:30:28.713470 | instance | [ceph] fatal: [instance]: FAILED! 
=> {"changed": false, "cmd": ["cephadm", "bootstrap", "--fsid", "4837cbf8-4f90-4300-b3f6-726c9b9f89b4", "--mon-ip", "10.96.240.200", "--cluster-network", "10.96.240.0/24", "--ssh-user", "cephadm", "--config", "/tmp/ceph_kjirsokv.conf", "--skip-monitoring-stack"], "delta": "0:00:38.653288", "end": "2026-04-23 16:30:28.653067", "msg": "non-zero return code", "rc": 1, "start": "2026-04-23 16:29:49.999779", "stderr": "Specifying an fsid for your cluster offers no advantages and may increase the likelihood of fsid conflicts.\nRuntimeError: Failed command: systemctl daemon-reload: Failed to reload daemon: Transport endpoint is not connected\n\nTraceback (most recent call last):\n File \"/usr/lib/python3.10/runpy.py\", line 196, in _run_module_as_main\n return _run_code(code, main_globals, None,\n File \"/usr/lib/python3.10/runpy.py\", line 86, in _run_code\n exec(code, run_globals)\n File \"/usr/bin/cephadm/__main__.py\", line 11009, in \n File \"/usr/bin/cephadm/__main__.py\", line 10997, in main\n File \"/usr/bin/cephadm/__main__.py\", line 6395, in _rollback\n File \"/usr/bin/cephadm/__main__.py\", line 2643, in _default_image\n File \"/usr/bin/cephadm/__main__.py\", line 6540, in command_bootstrap\n File \"/usr/bin/cephadm/__main__.py\", line 5919, in create_mon\n File \"/usr/bin/cephadm/__main__.py\", line 4032, in deploy_daemon\n File \"/usr/bin/cephadm/__main__.py\", line 4265, in deploy_daemon_units\n File \"/usr/bin/cephadm/__main__.py\", line 2283, in call_throws\nRuntimeError: Failed command: systemctl daemon-reload: Failed to reload daemon: Transport endpoint is not connected", "stderr_lines": ["Specifying an fsid for your cluster offers no advantages and may increase the likelihood of fsid conflicts.", "RuntimeError: Failed command: systemctl daemon-reload: Failed to reload daemon: Transport endpoint is not connected", "", "Traceback (most recent call last):", " File \"/usr/lib/python3.10/runpy.py\", line 196, in _run_module_as_main", " return 
_run_code(code, main_globals, None,", " File \"/usr/lib/python3.10/runpy.py\", line 86, in _run_code", " exec(code, run_globals)", " File \"/usr/bin/cephadm/__main__.py\", line 11009, in ", " File \"/usr/bin/cephadm/__main__.py\", line 10997, in main", " File \"/usr/bin/cephadm/__main__.py\", line 6395, in _rollback", " File \"/usr/bin/cephadm/__main__.py\", line 2643, in _default_image", " File \"/usr/bin/cephadm/__main__.py\", line 6540, in command_bootstrap", " File \"/usr/bin/cephadm/__main__.py\", line 5919, in create_mon", " File \"/usr/bin/cephadm/__main__.py\", line 4032, in deploy_daemon", " File \"/usr/bin/cephadm/__main__.py\", line 4265, in deploy_daemon_units", " File \"/usr/bin/cephadm/__main__.py\", line 2283, in call_throws", "RuntimeError: Failed command: systemctl daemon-reload: Failed to reload daemon: Transport endpoint is not connected"], "stdout": "Creating directory /etc/ceph for ceph.conf\nVerifying ssh connectivity using standard pubkey authentication ...\nAdding key to cephadm@localhost authorized_keys...\nVerifying podman|docker is present...\nVerifying lvm2 is present...\nVerifying time synchronization is in place...\nUnit chrony.service is enabled and running\nRepeating the final host check...\ndocker (/usr/bin/docker) is present\nsystemctl is present\nlvcreate is present\nUnit chrony.service is enabled and running\nHost looks OK\nCluster fsid: 4837cbf8-4f90-4300-b3f6-726c9b9f89b4\nVerifying IP 10.96.240.200 port 3300 ...\nVerifying IP 10.96.240.200 port 6789 ...\nMon IP `10.96.240.200` is in CIDR network `10.96.240.0/24`\nMon IP `10.96.240.200` is in CIDR network `10.96.240.0/24`\nPulling container image quay.io/ceph/ceph:v18.2.7...\nCeph version: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)\nExtracting ceph user uid/gid from container image...\nCreating initial keys...\nCreating initial monmap...\nCreating mon...\nNon-zero exit code 1 from systemctl daemon-reload\nsystemctl: stderr Failed to reload 
daemon: Transport endpoint is not connected\n\n\n\t***************\n\tCephadm hit an issue during cluster installation. Current cluster files will NOT BE DELETED automatically to change\n\tthis behaviour you can pass the --cleanup-on-failure. To remove this broken cluster manually please run:\n\n\t > cephadm rm-cluster --force --fsid 4837cbf8-4f90-4300-b3f6-726c9b9f89b4\n\n\tin case of any previous broken installation user must use the rm-cluster command to delete the broken cluster:\n\n\t > cephadm rm-cluster --force --zap-osds --fsid \n\n\tfor more information please refer to https://docs.ceph.com/en/latest/cephadm/operations/#purging-a-cluster\n\t***************", "stdout_lines": ["Creating directory /etc/ceph for ceph.conf", "Verifying ssh connectivity using standard pubkey authentication ...", "Adding key to cephadm@localhost authorized_keys...", "Verifying podman|docker is present...", "Verifying lvm2 is present...", "Verifying time synchronization is in place...", "Unit chrony.service is enabled and running", "Repeating the final host check...", "docker (/usr/bin/docker) is present", "systemctl is present", "lvcreate is present", "Unit chrony.service is enabled and running", "Host looks OK", "Cluster fsid: 4837cbf8-4f90-4300-b3f6-726c9b9f89b4", "Verifying IP 10.96.240.200 port 3300 ...", "Verifying IP 10.96.240.200 port 6789 ...", "Mon IP `10.96.240.200` is in CIDR network `10.96.240.0/24`", "Mon IP `10.96.240.200` is in CIDR network `10.96.240.0/24`", "Pulling container image quay.io/ceph/ceph:v18.2.7...", "Ceph version: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)", "Extracting ceph user uid/gid from container image...", "Creating initial keys...", "Creating initial monmap...", "Creating mon...", "Non-zero exit code 1 from systemctl daemon-reload", "systemctl: stderr Failed to reload daemon: Transport endpoint is not connected", "", "", "\t***************", "\tCephadm hit an issue during cluster installation. 
Current cluster files will NOT BE DELETED automatically to change", "\tthis behaviour you can pass the --cleanup-on-failure. To remove this broken cluster manually please run:", "", "\t > cephadm rm-cluster --force --fsid 4837cbf8-4f90-4300-b3f6-726c9b9f89b4", "", "\tin case of any previous broken installation user must use the rm-cluster command to delete the broken cluster:", "", "\t > cephadm rm-cluster --force --zap-osds --fsid ", "", "\tfor more information please refer to https://docs.ceph.com/en/latest/cephadm/operations/#purging-a-cluster", "\t***************"]} 2026-04-23 16:30:28.713520 | instance | [ceph] 2026-04-23 16:30:28.713526 | instance | [ceph] TASK [vexxhost.ceph.mon : Remove temporary file for "ceph.conf"] *************** 2026-04-23 16:30:28.802830 | instance | [kubernetes] ok: [instance] 2026-04-23 16:30:28.802866 | instance | [kubernetes] 2026-04-23 16:30:28.802872 | instance | [kubernetes] TASK [vexxhost.kubernetes.kubelet : Enable and start kubelet service] ********** 2026-04-23 16:30:29.019876 | instance | [ceph] changed: [instance] 2026-04-23 16:30:29.019931 | instance | [ceph] 2026-04-23 16:30:29.019944 | instance | [ceph] PLAY RECAP ********************************************************************* 2026-04-23 16:30:29.019956 | instance | [ceph] instance : ok=48 changed=26 unreachable=0 failed=1 skipped=14 rescued=0 ignored=0 2026-04-23 16:30:29.019966 | instance | [ceph] 2026-04-23 16:30:29.439085 | instance | Error: component ceph failed: ansible-playbook failed for ceph: exit status 2 2026-04-23 16:30:29.439168 | instance | Usage: 2026-04-23 16:30:29.439181 | instance | atmosphere deploy [flags] 2026-04-23 16:30:29.439191 | instance | 2026-04-23 16:30:29.439200 | instance | Flags: 2026-04-23 16:30:29.439210 | instance | --concurrency int Max concurrent deployments per wave (0 = unlimited) 2026-04-23 16:30:29.439219 | instance | -h, --help help for deploy 2026-04-23 16:30:29.439228 | instance | -i, --inventory string Path to Ansible 
inventory file (required) 2026-04-23 16:30:29.439238 | instance | -t, --tags string Comma-separated list of component tags to deploy 2026-04-23 16:30:29.439247 | instance | 2026-04-23 16:30:29.439256 | instance | component ceph failed: ansible-playbook failed for ceph: exit status 2 2026-04-23 16:30:29.494411 | instance | ERROR 2026-04-23 16:30:29.494658 | instance | { 2026-04-23 16:30:29.494703 | instance | "delta": "0:01:46.945166", 2026-04-23 16:30:29.494735 | instance | "end": "2026-04-23 16:30:29.440712", 2026-04-23 16:30:29.494764 | instance | "msg": "non-zero return code", 2026-04-23 16:30:29.494791 | instance | "rc": 1, 2026-04-23 16:30:29.494818 | instance | "start": "2026-04-23 16:28:42.495546" 2026-04-23 16:30:29.494847 | instance | } failure 2026-04-23 16:30:29.507419 | 2026-04-23 16:30:29.507484 | PLAY RECAP 2026-04-23 16:30:29.507528 | instance | ok: 1 changed: 0 unreachable: 0 failed: 1 skipped: 0 rescued: 0 ignored: 0 2026-04-23 16:30:29.507551 | 2026-04-23 16:30:29.673684 | RUN END RESULT_NORMAL: [untrusted : github.com/vexxhost/atmosphere/molecule/aio/converge.yml@main] 2026-04-23 16:30:29.683953 | POST-RUN START: [untrusted : github.com/vexxhost/atmosphere/test-playbooks/molecule/post.yml@main] 2026-04-23 16:30:30.344763 | 2026-04-23 16:30:30.344913 | PLAY [all] 2026-04-23 16:30:30.362493 | 2026-04-23 16:30:30.362664 | TASK [gather-host-logs : creating directory for system status] 2026-04-23 16:30:30.704726 | instance | changed 2026-04-23 16:30:30.712389 | 2026-04-23 16:30:30.712486 | TASK [gather-host-logs : Get logs for each host] 2026-04-23 16:30:31.059254 | instance | + systemd-cgls --full --all --no-pager 2026-04-23 16:30:31.071524 | instance | + ip addr 2026-04-23 16:30:31.073149 | instance | + ip route 2026-04-23 16:30:31.075806 | instance | + lsblk 2026-04-23 16:30:31.081549 | instance | + mount 2026-04-23 16:30:31.084528 | instance | + docker images 2026-04-23 16:30:31.102061 | instance | + brctl show 2026-04-23 16:30:31.102593 | 
instance | /bin/bash: line 8: brctl: command not found 2026-04-23 16:30:31.102867 | instance | + ps aux --sort=-%mem 2026-04-23 16:30:31.120640 | instance | + dpkg -l 2026-04-23 16:30:31.132605 | instance | + CONTAINERS=($(docker ps -a --format '{{ .Names }}' --filter label=zuul)) 2026-04-23 16:30:31.133105 | instance | ++ docker ps -a --format '{{ .Names }}' --filter label=zuul 2026-04-23 16:30:31.152045 | instance | + '[' '!' -z '' ']' 2026-04-23 16:30:31.251647 | instance | ok: Runtime: 0:00:00.098669 2026-04-23 16:30:31.261650 | 2026-04-23 16:30:31.261811 | TASK [gather-host-logs : Downloads logs to executor] 2026-04-23 16:30:31.913588 | instance | changed: 2026-04-23 16:30:31.913812 | instance | created directory /var/lib/zuul/builds/c4e83f0223f040e785fae21c19b0f782/work/logs/instance 2026-04-23 16:30:31.913855 | instance | cd+++++++++ system/ 2026-04-23 16:30:31.913888 | instance | >f+++++++++ system/brctl-show.txt 2026-04-23 16:30:31.913919 | instance | >f+++++++++ system/docker-images.txt 2026-04-23 16:30:31.913949 | instance | >f+++++++++ system/ip-addr.txt 2026-04-23 16:30:31.914002 | instance | >f+++++++++ system/ip-route.txt 2026-04-23 16:30:31.914035 | instance | >f+++++++++ system/lsblk.txt 2026-04-23 16:30:31.914066 | instance | >f+++++++++ system/mount.txt 2026-04-23 16:30:31.914097 | instance | >f+++++++++ system/packages.txt 2026-04-23 16:30:31.914126 | instance | >f+++++++++ system/ps.txt 2026-04-23 16:30:31.914158 | instance | >f+++++++++ system/systemd-cgls.txt 2026-04-23 16:30:31.926674 | 2026-04-23 16:30:31.926793 | LOOP [helm-release-status : creating directory for helm release status] 2026-04-23 16:30:32.120771 | instance | changed: "values" 2026-04-23 16:30:32.286799 | instance | changed: "releases" 2026-04-23 16:30:32.301366 | 2026-04-23 16:30:32.301501 | TASK [helm-release-status : Gather get release status for helm charts] 2026-04-23 16:30:32.554611 | instance | E0423 16:30:32.554466 26964 memcache.go:265] couldn't get current server 
API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused 2026-04-23 16:30:32.555421 | instance | E0423 16:30:32.555389 26964 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused 2026-04-23 16:30:32.557327 | instance | E0423 16:30:32.557270 26964 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused 2026-04-23 16:30:32.558083 | instance | E0423 16:30:32.558044 26964 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused 2026-04-23 16:30:32.559970 | instance | E0423 16:30:32.559930 26964 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused 2026-04-23 16:30:32.560021 | instance | The connection to the server localhost:8080 was refused - did you specify the right host or port? 
2026-04-23 16:30:32.840214 | instance | ok: Runtime: 0:00:00.063402
2026-04-23 16:30:32.845742 |
2026-04-23 16:30:32.845823 | TASK [helm-release-status : Downloads logs to executor]
2026-04-23 16:30:33.336495 | instance | changed:
2026-04-23 16:30:33.336723 | instance | cd+++++++++ helm/
2026-04-23 16:30:33.336766 | instance | cd+++++++++ helm/releases/
2026-04-23 16:30:33.336797 | instance | cd+++++++++ helm/values/
2026-04-23 16:30:33.347915 |
2026-04-23 16:30:33.347987 | TASK [describe-kubernetes-objects : creating directory for cluster scoped objects]
2026-04-23 16:30:33.584015 | instance | changed
2026-04-23 16:30:33.591157 |
2026-04-23 16:30:33.591226 | TASK [describe-kubernetes-objects : Gathering descriptions for cluster scoped objects]
2026-04-23 16:30:33.793284 | instance | xargs: warning: options --max-args and --replace/-I/-i are mutually exclusive, ignoring previous --max-args value
2026-04-23 16:30:33.794253 | instance | xargs: warning: options --max-args and --replace/-I/-i are mutually exclusive, ignoring previous --max-args value
2026-04-23 16:30:33.800741 | instance | xargs: warning: options --max-lines and --replace/-I/-i are mutually exclusive, ignoring previous --max-lines value
2026-04-23 16:30:33.804193 | instance | xargs: warning: options --max-lines and --replace/-I/-i are mutually exclusive, ignoring previous --max-lines value
2026-04-23 16:30:33.849547 | instance | E0423 16:30:33.849458 27017 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:30:33.851379 | instance | E0423 16:30:33.851346 27017 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:30:33.852258 | instance | E0423 16:30:33.852217 27017 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:30:33.854209 | instance | E0423 16:30:33.854173 27017 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:30:33.854858 | instance | E0423 16:30:33.854834 27017 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:30:33.855154 | instance | E0423 16:30:33.855097 27024 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:30:33.855789 | instance | E0423 16:30:33.855760 27024 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:30:33.856067 | instance | The connection to the server localhost:8080 was refused - did you specify the right host or port?
2026-04-23 16:30:33.857592 | instance | E0423 16:30:33.857559 27024 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:30:33.858313 | instance | E0423 16:30:33.858279 27024 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:30:33.860101 | instance | E0423 16:30:33.860059 27024 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:30:33.860120 | instance | The connection to the server localhost:8080 was refused - did you specify the right host or port?
2026-04-23 16:30:33.863867 | instance | xargs: warning: options --max-lines and --replace/-I/-i are mutually exclusive, ignoring previous --max-lines value
2026-04-23 16:30:33.867979 | instance | xargs: warning: options --max-lines and --replace/-I/-i are mutually exclusive, ignoring previous --max-lines value
2026-04-23 16:30:33.909638 | instance | E0423 16:30:33.909537 27055 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:30:33.910354 | instance | E0423 16:30:33.910319 27055 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:30:33.912678 | instance | E0423 16:30:33.912648 27055 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:30:33.913337 | instance | E0423 16:30:33.913305 27055 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:30:33.915125 | instance | E0423 16:30:33.915094 27055 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:30:33.915147 | instance | The connection to the server localhost:8080 was refused - did you specify the right host or port?
2026-04-23 16:30:33.919489 | instance | E0423 16:30:33.919411 27062 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:30:33.920293 | instance | E0423 16:30:33.920240 27062 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:30:33.921816 | instance | E0423 16:30:33.921777 27062 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:30:33.922489 | instance | E0423 16:30:33.922454 27062 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:30:33.923359 | instance | xargs: warning: options --max-lines and --replace/-I/-i are mutually exclusive, ignoring previous --max-lines value
2026-04-23 16:30:33.924430 | instance | E0423 16:30:33.924380 27062 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:30:33.924457 | instance | The connection to the server localhost:8080 was refused - did you specify the right host or port?
2026-04-23 16:30:33.971617 | instance | E0423 16:30:33.971527 27089 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:30:33.972270 | instance | E0423 16:30:33.972240 27089 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:30:33.974231 | instance | E0423 16:30:33.974197 27089 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:30:33.974951 | instance | E0423 16:30:33.974913 27089 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:30:33.976378 | instance | E0423 16:30:33.976342 27089 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:30:33.976413 | instance | The connection to the server localhost:8080 was refused - did you specify the right host or port?
2026-04-23 16:30:34.130398 | instance | ok: Runtime: 0:00:00.194678
2026-04-23 16:30:34.136370 |
2026-04-23 16:30:34.136487 | TASK [describe-kubernetes-objects : creating directory for namespace scoped objects]
2026-04-23 16:30:34.322871 | instance | changed
2026-04-23 16:30:34.328663 |
2026-04-23 16:30:34.328781 | TASK [describe-kubernetes-objects : Gathering descriptions for namespace scoped objects]
2026-04-23 16:30:34.533651 | instance | xargs: warning: options --max-args and --replace/-I/-i are mutually exclusive, ignoring previous --max-args value
2026-04-23 16:30:34.533820 | instance | xargs: xargs: warning: options --max-args and --replace/-I/-i are mutually exclusive, ignoring previous --max-args valuewarning: options --max-args and --replace/-I/-i are mutually exclusive, ignoring previous --max-args value
2026-04-23 16:30:34.533827 | instance |
2026-04-23 16:30:34.584050 | instance | E0423 16:30:34.583667 27126 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:30:34.585430 | instance | E0423 16:30:34.585348 27126 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:30:34.586214 | instance | E0423 16:30:34.586152 27126 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:30:34.588129 | instance | E0423 16:30:34.588029 27126 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:30:34.588731 | instance | E0423 16:30:34.588667 27126 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:30:34.589933 | instance | The connection to the server localhost:8080 was refused - did you specify the right host or port?
2026-04-23 16:30:34.870870 | instance | ok: Runtime: 0:00:00.068131
2026-04-23 16:30:34.876570 |
2026-04-23 16:30:34.876646 | TASK [describe-kubernetes-objects : Downloads logs to executor]
2026-04-23 16:30:35.392940 | instance | changed:
2026-04-23 16:30:35.393425 | instance | cd+++++++++ objects/
2026-04-23 16:30:35.393486 | instance | cd+++++++++ objects/cluster/
2026-04-23 16:30:35.393519 | instance | cd+++++++++ objects/namespaced/
2026-04-23 16:30:35.406031 |
2026-04-23 16:30:35.406115 | TASK [gather-pod-logs : creating directory for pod logs]
2026-04-23 16:30:35.602936 | instance | changed
2026-04-23 16:30:35.611193 |
2026-04-23 16:30:35.611339 | TASK [gather-pod-logs : creating directory for failed pod logs]
2026-04-23 16:30:35.808272 | instance | changed
2026-04-23 16:30:35.815793 |
2026-04-23 16:30:35.815890 | TASK [gather-pod-logs : retrieve all kubernetes logs, current and previous (if they exist)]
2026-04-23 16:30:36.101930 | instance | E0423 16:30:36.101760 27185 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:30:36.102617 | instance | E0423 16:30:36.102571 27185 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:30:36.104207 | instance | E0423 16:30:36.104164 27185 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:30:36.104689 | instance | E0423 16:30:36.104649 27185 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:30:36.106263 | instance | E0423 16:30:36.106227 27185 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:30:36.106317 | instance | The connection to the server localhost:8080 was refused - did you specify the right host or port?
2026-04-23 16:30:36.357507 | instance | ok: Runtime: 0:00:00.064111
2026-04-23 16:30:36.365342 |
2026-04-23 16:30:36.365447 | TASK [gather-pod-logs : Downloads pod logs to executor]
2026-04-23 16:30:36.858121 | instance | changed:
2026-04-23 16:30:36.858369 | instance | cd+++++++++ pod-logs/
2026-04-23 16:30:36.858408 | instance | cd+++++++++ pod-logs/failed-pods/
2026-04-23 16:30:36.869308 |
2026-04-23 16:30:36.869400 | TASK [gather-prom-metrics : creating directory for helm release descriptions]
2026-04-23 16:30:37.057072 | instance | changed
2026-04-23 16:30:37.064373 |
2026-04-23 16:30:37.064481 | TASK [gather-prom-metrics : Get metrics from exporter services in all namespaces]
2026-04-23 16:30:37.314606 | instance | E0423 16:30:37.314432 27231 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:30:37.315367 | instance | E0423 16:30:37.315315 27231 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:30:37.317307 | instance | E0423 16:30:37.317245 27231 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:30:37.318039 | instance | E0423 16:30:37.317979 27231 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:30:37.319892 | instance | E0423 16:30:37.319842 27231 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:30:37.319972 | instance | The connection to the server localhost:8080 was refused - did you specify the right host or port?
2026-04-23 16:30:37.605296 | instance | ok: Runtime: 0:00:00.061962
2026-04-23 16:30:37.610978 |
2026-04-23 16:30:37.611062 | TASK [gather-prom-metrics : Get ceph metrics from ceph-mgr]
2026-04-23 16:30:37.870183 | instance | E0423 16:30:37.870066 27256 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:30:37.870797 | instance | E0423 16:30:37.870761 27256 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:30:37.872722 | instance | E0423 16:30:37.872680 27256 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:30:37.873479 | instance | E0423 16:30:37.873447 27256 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:30:37.875268 | instance | E0423 16:30:37.875224 27256 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:30:37.875304 | instance | The connection to the server localhost:8080 was refused - did you specify the right host or port?
2026-04-23 16:30:37.880551 | instance | ceph-mgr endpoints:
2026-04-23 16:30:38.149965 | instance | ok: Runtime: 0:00:00.065926
2026-04-23 16:30:38.155331 |
2026-04-23 16:30:38.155405 | TASK [gather-prom-metrics : Get metrics from fluentd pods]
2026-04-23 16:30:38.409859 | instance | E0423 16:30:38.409736 27285 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:30:38.410601 | instance | E0423 16:30:38.410518 27285 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:30:38.412390 | instance | E0423 16:30:38.412312 27285 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:30:38.413188 | instance | E0423 16:30:38.413134 27285 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:30:38.414786 | instance | E0423 16:30:38.414741 27285 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:30:38.414869 | instance | The connection to the server localhost:8080 was refused - did you specify the right host or port?
2026-04-23 16:30:38.695260 | instance | ok: Runtime: 0:00:00.063315
2026-04-23 16:30:38.703865 |
2026-04-23 16:30:38.704002 | TASK [gather-prom-metrics : Downloads logs to executor]
2026-04-23 16:30:39.205145 | instance | changed: cd+++++++++ prometheus/
2026-04-23 16:30:39.218020 |
2026-04-23 16:30:39.218122 | TASK [gather-selenium-data : creating directory for helm release descriptions]
2026-04-23 16:30:39.475438 | instance | changed
2026-04-23 16:30:39.481301 |
2026-04-23 16:30:39.481391 | TASK [gather-selenium-data : Get selenium data]
2026-04-23 16:30:39.686017 | instance | + cp '/tmp/artifacts/*' /tmp/logs/selenium/.
2026-04-23 16:30:39.687509 | instance | cp: cannot stat '/tmp/artifacts/*': No such file or directory
2026-04-23 16:30:40.028663 | instance | ERROR
2026-04-23 16:30:40.029013 | instance | {
2026-04-23 16:30:40.029064 | instance | "delta": "0:00:00.005769",
2026-04-23 16:30:40.029094 | instance | "end": "2026-04-23 16:30:39.687857",
2026-04-23 16:30:40.029121 | instance | "msg": "non-zero return code",
2026-04-23 16:30:40.029174 | instance | "rc": 1,
2026-04-23 16:30:40.029201 | instance | "start": "2026-04-23 16:30:39.682088"
2026-04-23 16:30:40.029225 | instance | }
2026-04-23 16:30:40.029257 | instance | ERROR: Ignoring Errors
2026-04-23 16:30:40.036601 |
2026-04-23 16:30:40.036706 | TASK [gather-selenium-data : Downloads logs to executor]
2026-04-23 16:30:40.550445 | instance | changed: cd+++++++++ selenium/
2026-04-23 16:30:40.558711 |
2026-04-23 16:30:40.558767 | PLAY RECAP
2026-04-23 16:30:40.558815 | instance | ok: 23 changed: 23 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 1
2026-04-23 16:30:40.558841 |
2026-04-23 16:30:40.761725 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/vexxhost/atmosphere/test-playbooks/molecule/post.yml@main]
2026-04-23 16:30:40.778159 | POST-RUN START: [trusted : github.com/vexxhost/zuul-config/playbooks/base/post.yaml@main]
2026-04-23 16:30:41.443666 |
2026-04-23 16:30:41.443831 | PLAY [all]
2026-04-23 16:30:41.455946 |
2026-04-23 16:30:41.456088 | TASK [fetch-output : Set log path for multiple nodes]
2026-04-23 16:30:41.502463 | instance | skipping: Conditional result was False
2026-04-23 16:30:41.513033 |
2026-04-23 16:30:41.513145 | TASK [fetch-output : Set log path for single node]
2026-04-23 16:30:41.556675 | instance | ok
2026-04-23 16:30:41.563452 |
2026-04-23 16:30:41.563529 | LOOP [fetch-output : Ensure local output dirs]
2026-04-23 16:30:42.001073 | instance -> localhost | ok: "/var/lib/zuul/builds/c4e83f0223f040e785fae21c19b0f782/work/logs"
2026-04-23 16:30:42.903734 | instance -> localhost | changed: "/var/lib/zuul/builds/c4e83f0223f040e785fae21c19b0f782/work/artifacts"
2026-04-23 16:30:43.119482 | instance -> localhost | changed: "/var/lib/zuul/builds/c4e83f0223f040e785fae21c19b0f782/work/docs"
2026-04-23 16:30:43.141472 |
2026-04-23 16:30:43.141655 | LOOP [fetch-output : Collect logs, artifacts and docs]
2026-04-23 16:30:43.752548 | instance | changed: .d..t...... ./
2026-04-23 16:30:43.752795 | instance | changed: All items complete
2026-04-23 16:30:43.752832 |
2026-04-23 16:30:44.216537 | instance | changed: .d..t...... ./
2026-04-23 16:30:44.690780 | instance | changed: .d..t...... ./
2026-04-23 16:30:44.714837 |
2026-04-23 16:30:44.715019 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2026-04-23 16:30:45.217985 | instance -> localhost | ok: Item: artifacts Runtime: 0:00:00.008545
2026-04-23 16:30:45.620797 | instance -> localhost | ok: Item: docs Runtime: 0:00:00.008099
2026-04-23 16:30:45.638613 |
2026-04-23 16:30:45.638793 | PLAY [all]
2026-04-23 16:30:45.645120 |
2026-04-23 16:30:45.645190 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2026-04-23 16:30:46.037485 | instance | changed
2026-04-23 16:30:46.043797 |
2026-04-23 16:30:46.043847 | PLAY RECAP
2026-04-23 16:30:46.043897 | instance | ok: 5 changed: 4 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2026-04-23 16:30:46.043919 |
2026-04-23 16:30:46.169325 | POST-RUN END RESULT_NORMAL: [trusted : github.com/vexxhost/zuul-config/playbooks/base/post.yaml@main]
2026-04-23 16:30:46.182479 | POST-RUN START: [trusted : github.com/vexxhost/zuul-config/playbooks/base/post-logs.yaml@main]
2026-04-23 16:30:46.828518 |
2026-04-23 16:30:46.828714 | PLAY [localhost]
2026-04-23 16:30:46.840146 |
2026-04-23 16:30:46.840308 | TASK [Generate Zuul manifest]
2026-04-23 16:30:46.861528 | localhost | ok
2026-04-23 16:30:46.882056 |
2026-04-23 16:30:46.882223 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2026-04-23 16:30:47.278451 | localhost | changed
2026-04-23 16:30:47.292405 |
2026-04-23 16:30:47.292578 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2026-04-23 16:30:47.323639 | localhost | ok
2026-04-23 16:30:47.332725 |
2026-04-23 16:30:47.332829 | TASK [Upload logs]
2026-04-23 16:30:47.355158 | localhost | ok
2026-04-23 16:30:47.804365 |
2026-04-23 16:30:47.804531 | TASK [Set zuul-log-path fact]
2026-04-23 16:30:47.827311 | localhost | ok
2026-04-23 16:30:47.844240 |
2026-04-23 16:30:47.844522 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-04-23 16:30:47.881823 | localhost | ok
2026-04-23 16:30:47.891476 |
2026-04-23 16:30:47.891625 | TASK [upload-logs : Create log directories]
2026-04-23 16:30:48.272202 | localhost | changed
2026-04-23 16:30:48.278528 |
2026-04-23 16:30:48.278626 | TASK [upload-logs : Ensure logs are readable before uploading]
2026-04-23 16:30:48.753880 | localhost -> localhost | ok: Runtime: 0:00:00.004507
2026-04-23 16:30:48.765035 |
2026-04-23 16:30:48.765245 | TASK [upload-logs : Upload logs to log server]
2026-04-23 16:30:50.793249 | localhost | Output suppressed because no_log was given
2026-04-23 16:30:50.797506 |
2026-04-23 16:30:50.797585 | LOOP [upload-logs : Compress console log and json output]
2026-04-23 16:30:50.845787 | localhost | skipping: Conditional result was False
2026-04-23 16:30:50.853676 | localhost | skipping: Conditional result was False
2026-04-23 16:30:50.867559 |
2026-04-23 16:30:50.867643 | LOOP [upload-logs : Upload compressed console log and json output]
2026-04-23 16:30:50.918046 | localhost | skipping: Conditional result was False
2026-04-23 16:30:50.918503 |
2026-04-23 16:30:50.922107 | localhost | skipping: Conditional result was False
2026-04-23 16:30:50.931748 |
2026-04-23 16:30:50.931881 | LOOP [upload-logs : Upload console log and json output]