2026-04-23 16:25:31.880539 | Job console starting
2026-04-23 16:25:31.894513 | Updating git repos
2026-04-23 16:25:31.961960 | Cloning repos into workspace
2026-04-23 16:25:32.655514 | Restoring repo states
2026-04-23 16:25:32.944539 | Merging changes
2026-04-23 16:25:34.809146 | Checking out repos
2026-04-23 16:25:35.655972 | Preparing playbooks
2026-04-23 16:25:54.363153 | Running Ansible setup
2026-04-23 16:25:59.068267 | PRE-RUN START: [trusted : github.com/vexxhost/zuul-config/playbooks/base/pre.yaml@main]
2026-04-23 16:25:59.738236 |
2026-04-23 16:25:59.840269 | PLAY [localhost]
2026-04-23 16:25:59.855528 |
2026-04-23 16:25:59.855645 | TASK [Gathering Facts]
2026-04-23 16:26:01.424082 | localhost | ok
2026-04-23 16:26:01.435215 |
2026-04-23 16:26:01.435404 | TASK [Setup log path fact]
2026-04-23 16:26:01.456503 | localhost | ok
2026-04-23 16:26:01.492812 |
2026-04-23 16:26:01.493077 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-04-23 16:26:01.534672 | localhost | ok
2026-04-23 16:26:01.544800 |
2026-04-23 16:26:01.544930 | TASK [emit-job-header : Print job information]
2026-04-23 16:26:01.588392 | # Job Information
2026-04-23 16:26:01.590819 | Ansible Version: 2.16.16
2026-04-23 16:26:01.881140 | Job: atmosphere-molecule-aio-openvswitch
2026-04-23 16:26:01.881396 | Pipeline: check
2026-04-23 16:26:01.881526 | Executor: 0a8996d2b663
2026-04-23 16:26:01.881591 | Triggered by: https://github.com/vexxhost/atmosphere/pull/3873
2026-04-23 16:26:01.881650 | Event ID: c26512e0-3f30-11f1-9931-b173b6dbd0a1
2026-04-23 16:26:01.892777 |
2026-04-23 16:26:01.893259 | LOOP [emit-job-header : Print node information]
2026-04-23 16:26:01.999352 | localhost | ok:
2026-04-23 16:26:01.999537 | localhost | # Node Information
2026-04-23 16:26:01.999568 | localhost | Inventory Hostname: instance
2026-04-23 16:26:01.999591 | localhost | Hostname: np0000169835
2026-04-23 16:26:01.999612 | localhost | Username: zuul
2026-04-23 16:26:01.999635 | localhost | Distro: Ubuntu 22.04
2026-04-23 16:26:01.999656 | localhost | Provider: yul1
2026-04-23 16:26:01.999675 | localhost | Region: ca-ymq-1
2026-04-23 16:26:01.999694 | localhost | Label: ubuntu-jammy-16
2026-04-23 16:26:01.999714 | localhost | Product Name: OpenStack Nova
2026-04-23 16:26:01.999734 | localhost | Interface IP: 199.19.213.20
2026-04-23 16:26:02.015824 |
2026-04-23 16:26:02.016061 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-04-23 16:26:02.659118 | localhost -> localhost | changed
2026-04-23 16:26:02.665674 |
2026-04-23 16:26:02.665791 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-04-23 16:26:03.905461 | localhost -> localhost | changed
2026-04-23 16:26:03.913067 |
2026-04-23 16:26:03.913160 | PLAY [all]
2026-04-23 16:26:03.924043 |
2026-04-23 16:26:03.924121 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-04-23 16:26:04.262468 | instance -> localhost | ok
2026-04-23 16:26:04.370581 |
2026-04-23 16:26:04.370783 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-04-23 16:26:04.402541 | instance | ok
2026-04-23 16:26:04.428678 | instance | included: /var/lib/zuul/builds/4443a534d0224a25859b36e43392f9eb/trusted/project_1/opendev.org/zuul/zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-04-23 16:26:04.436402 |
2026-04-23 16:26:04.436488 | TASK [add-build-sshkey : Create Temp SSH key]
2026-04-23 16:26:05.633533 | instance -> localhost | Generating public/private rsa key pair.
2026-04-23 16:26:05.633707 | instance -> localhost | Your identification has been saved in /var/lib/zuul/builds/4443a534d0224a25859b36e43392f9eb/work/4443a534d0224a25859b36e43392f9eb_id_rsa
2026-04-23 16:26:05.633737 | instance -> localhost | Your public key has been saved in /var/lib/zuul/builds/4443a534d0224a25859b36e43392f9eb/work/4443a534d0224a25859b36e43392f9eb_id_rsa.pub
2026-04-23 16:26:05.633760 | instance -> localhost | The key fingerprint is:
2026-04-23 16:26:05.633781 | instance -> localhost | SHA256:cdJtloLpVkmhFt8JcDbf8lvxF6Udm2XpTlhqzzgdd7Y zuul-build-sshkey
2026-04-23 16:26:05.633810 | instance -> localhost | The key's randomart image is:
2026-04-23 16:26:05.633831 | instance -> localhost | +---[RSA 3072]----+
2026-04-23 16:26:05.633855 | instance -> localhost | | o.B. .=|
2026-04-23 16:26:05.633877 | instance -> localhost | | @ B + **|
2026-04-23 16:26:05.633898 | instance -> localhost | | B B X B=.|
2026-04-23 16:26:05.633918 | instance -> localhost | | o = + * +B|
2026-04-23 16:26:05.633938 | instance -> localhost | | S . O.O|
2026-04-23 16:26:05.633958 | instance -> localhost | | . o E.|
2026-04-23 16:26:05.633978 | instance -> localhost | | o |
2026-04-23 16:26:05.633998 | instance -> localhost | | |
2026-04-23 16:26:05.634021 | instance -> localhost | | |
2026-04-23 16:26:05.634042 | instance -> localhost | +----[SHA256]-----+
2026-04-23 16:26:05.634113 | instance -> localhost | ok: Runtime: 0:00:00.424263
2026-04-23 16:26:05.639771 |
2026-04-23 16:26:05.639843 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-04-23 16:26:05.674435 | instance | ok
2026-04-23 16:26:05.685493 | instance | included: /var/lib/zuul/builds/4443a534d0224a25859b36e43392f9eb/trusted/project_1/opendev.org/zuul/zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-04-23 16:26:05.693083 |
2026-04-23 16:26:05.693164 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-04-23 16:26:05.717933 | instance | skipping: Conditional result was False
2026-04-23 16:26:05.728248 |
2026-04-23 16:26:05.728361 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-04-23 16:26:06.217011 | instance | changed
2026-04-23 16:26:06.222389 |
2026-04-23 16:26:06.222484 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-04-23 16:26:06.399619 | instance | ok
2026-04-23 16:26:06.410601 |
2026-04-23 16:26:06.410971 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-04-23 16:26:06.892154 | instance | changed
2026-04-23 16:26:06.897737 |
2026-04-23 16:26:06.897850 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-04-23 16:26:07.370220 | instance | changed
2026-04-23 16:26:07.375987 |
2026-04-23 16:26:07.376071 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-04-23 16:26:07.401429 | instance | skipping: Conditional result was False
2026-04-23 16:26:07.477299 |
2026-04-23 16:26:07.797825 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-04-23 16:26:08.217304 | instance -> localhost | changed
2026-04-23 16:26:08.229988 |
2026-04-23 16:26:08.230168 | TASK [add-build-sshkey : Add back temp key]
2026-04-23 16:26:08.691346 | instance -> localhost | Identity added: /var/lib/zuul/builds/4443a534d0224a25859b36e43392f9eb/work/4443a534d0224a25859b36e43392f9eb_id_rsa (zuul-build-sshkey)
2026-04-23 16:26:08.691547 | instance -> localhost | ok: Runtime: 0:00:00.013001
2026-04-23 16:26:08.696747 |
2026-04-23 16:26:08.696813 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-04-23 16:26:08.981538 | instance | ok
2026-04-23 16:26:08.988468 |
2026-04-23 16:26:08.988559 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-04-23 16:26:09.013386 | instance | skipping: Conditional result was False
2026-04-23 16:26:09.029005 |
2026-04-23 16:26:09.029093 | TASK [prepare-workspace : Start zuul_console daemon.]
2026-04-23 16:26:09.333712 | instance | ok
2026-04-23 16:26:09.341959 |
2026-04-23 16:26:09.342063 | TASK [prepare-workspace : Synchronize src repos to workspace directory.]
2026-04-23 16:26:11.368066 | instance | Output suppressed because no_log was given
2026-04-23 16:26:12.056635 |
2026-04-23 16:26:12.056763 | LOOP [ensure-output-dirs : Empty Zuul Output directories by removing them]
2026-04-23 16:26:12.244374 | instance | ok: "logs"
2026-04-23 16:26:12.244781 | instance | ok: All items complete
2026-04-23 16:26:12.244845 |
2026-04-23 16:26:12.396943 | instance | ok: "artifacts"
2026-04-23 16:26:12.554864 | instance | ok: "docs"
2026-04-23 16:26:12.569715 |
2026-04-23 16:26:12.569855 | LOOP [ensure-output-dirs : Ensure Zuul Output directories exist]
2026-04-23 16:26:12.774745 | instance | changed: "logs"
2026-04-23 16:26:12.923774 | instance | changed: "artifacts"
2026-04-23 16:26:13.082628 | instance | changed: "docs"
2026-04-23 16:26:13.100327 |
2026-04-23 16:26:13.100461 | PLAY RECAP
2026-04-23 16:26:13.100511 | instance | ok: 15 changed: 8 unreachable: 0 failed: 0 skipped: 3 rescued: 0 ignored: 0
2026-04-23 16:26:13.100541 | localhost | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2026-04-23 16:26:13.100561 |
2026-04-23 16:26:13.252318 | PRE-RUN END RESULT_NORMAL: [trusted : github.com/vexxhost/zuul-config/playbooks/base/pre.yaml@main]
2026-04-23 16:26:13.258788 | PRE-RUN START: [untrusted : github.com/vexxhost/zuul-jobs/playbooks/molecule/pre.yaml@main]
2026-04-23 16:26:14.025691 |
2026-04-23 16:26:14.025911 | PLAY [all]
2026-04-23 16:26:14.040541 |
2026-04-23 16:26:14.040628 | TASK [setup-uv : Extract archive]
2026-04-23 16:26:16.790678 | instance | changed
2026-04-23 16:26:17.634582 |
2026-04-23 16:26:17.634862 | TASK [setup-uv : Print version]
2026-04-23 16:26:18.047219 | instance | uv 0.8.13
2026-04-23 16:26:18.194894 | instance | ok: Runtime: 0:00:00.012910
2026-04-23 16:26:18.204228 |
2026-04-23 16:26:18.204308 | PLAY RECAP
2026-04-23 16:26:18.204373 | instance | ok: 2 changed: 2 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2026-04-23 16:26:18.204412 |
2026-04-23 16:26:18.357230 | PRE-RUN END RESULT_NORMAL: [untrusted : github.com/vexxhost/zuul-jobs/playbooks/molecule/pre.yaml@main]
2026-04-23 16:26:18.370783 | PRE-RUN START: [untrusted : github.com/vexxhost/atmosphere/test-playbooks/molecule/pre.yml@main]
2026-04-23 16:26:19.137563 |
2026-04-23 16:26:19.345426 | PLAY [all]
2026-04-23 16:26:19.382099 |
2026-04-23 16:26:19.382282 | TASK [Install "jq" for log collection]
2026-04-23 16:26:34.169392 | instance | changed
2026-04-23 16:26:34.177547 |
2026-04-23 16:26:34.177676 | TASK [Install pip3 for Python package management]
2026-04-23 16:26:38.542584 | instance | changed
2026-04-23 16:26:38.548850 |
2026-04-23 16:26:38.548922 | TASK [Install Python "kubernetes" library for kubernetes.core modules]
2026-04-23 16:26:41.993079 | instance | changed
2026-04-23 16:26:41.996002 |
2026-04-23 16:26:41.996066 | PLAY [all]
2026-04-23 16:26:42.003761 |
2026-04-23 16:26:42.003849 | TASK [ensure-go : Check installed go version]
2026-04-23 16:26:42.541908 | instance | ok: ERROR (ignored)
2026-04-23 16:26:42.542257 | instance | {
2026-04-23 16:26:42.542303 | instance |   "failed_when_result": false,
2026-04-23 16:26:42.542335 | instance |   "msg": "[Errno 2] No such file or directory: b'go'",
2026-04-23 16:26:42.542365 | instance |   "rc": 2
2026-04-23 16:26:42.542399 | instance | }
2026-04-23 16:26:42.549530 |
2026-04-23 16:26:42.549672 | TASK [ensure-go : Skip if correct version of go is installed]
2026-04-23 16:26:42.604876 | instance | ok
2026-04-23 16:26:42.616757 | instance | included: /var/lib/zuul/builds/4443a534d0224a25859b36e43392f9eb/untrusted/project_2/opendev.org/zuul/zuul-jobs/roles/ensure-go/tasks/install-go.yaml
2026-04-23 16:26:42.624447 |
2026-04-23 16:26:42.624530 | TASK [ensure-go : Create temp directory]
2026-04-23 16:26:42.942603 | instance | changed
2026-04-23 16:26:42.950446 |
2026-04-23 16:26:42.950551 | TASK [ensure-go : Get archive checksum]
2026-04-23 16:26:43.629754 | instance | ok: OK (64 bytes)
2026-04-23 16:26:43.637725 |
2026-04-23 16:26:43.637828 | TASK [ensure-go : Download go archive]
2026-04-23 16:26:45.262950 | instance | changed: OK (78559214 bytes)
2026-04-23 16:26:45.274295 |
2026-04-23 16:26:45.274488 | TASK [ensure-go : Install go]
2026-04-23 16:26:51.159882 | instance | changed
2026-04-23 16:26:51.173317 |
2026-04-23 16:26:51.173433 | PLAY [all]
2026-04-23 16:26:51.182688 |
2026-04-23 16:26:51.182825 | TASK [Copy inventory file for Zuul]
2026-04-23 16:26:51.946855 | instance | changed
2026-04-23 16:26:51.953430 |
2026-04-23 16:26:51.953567 | TASK [Switch "ansible_host" to private IP]
2026-04-23 16:26:52.281210 | instance | changed: 1 replacements made
2026-04-23 16:26:52.332544 |
2026-04-23 16:26:52.332704 | TASK [Run molecule prepare]
2026-04-23 16:26:52.617602 | instance | Using CPython 3.10.12 interpreter at: /usr/bin/python3
2026-04-23 16:26:52.617733 | instance | Creating virtual environment at: .venv
2026-04-23 16:26:52.642170 | instance | Building atmosphere @ file:///home/zuul/src/github.com/vexxhost/atmosphere
2026-04-23 16:26:52.684342 | instance | Downloading kubernetes (1.9MiB)
2026-04-23 16:26:52.684689 | instance | Downloading openstacksdk (1.7MiB)
2026-04-23 16:26:52.689916 | instance | Downloading netaddr (2.2MiB)
2026-04-23 16:26:52.690591 | instance | Downloading rjsonnet (1.2MiB)
2026-04-23 16:26:52.691219 | instance | Downloading setuptools (1.1MiB)
2026-04-23 16:26:52.691584 | instance | Downloading pydantic-core (2.0MiB)
2026-04-23 16:26:52.691853 | instance | Downloading ansible-core (2.1MiB)
2026-04-23 16:26:52.692555 | instance | Downloading cryptography (4.2MiB)
2026-04-23 16:26:52.692895 | instance | Downloading pygments (1.2MiB)
2026-04-23 16:26:53.003614 | instance | Building pyperclip==1.9.0
2026-04-23 16:26:53.031503 | instance | Downloading rjsonnet
2026-04-23 16:26:53.131505 | instance | Downloading pydantic-core
2026-04-23 16:26:53.157122 | instance | Downloading netaddr
2026-04-23 16:26:53.166961 | instance | Downloading pygments
2026-04-23 16:26:53.214910 | instance | Downloading setuptools
2026-04-23 16:26:53.290863 | instance | Downloading cryptography
2026-04-23 16:26:53.300998 | instance | Downloading kubernetes
2026-04-23 16:26:53.336776 | instance | Downloading ansible-core
2026-04-23 16:26:53.371705 | instance | Downloading openstacksdk
2026-04-23 16:26:53.750431 | instance | Built pyperclip==1.9.0
2026-04-23 16:26:53.924924 | instance | Built atmosphere @ file:///home/zuul/src/github.com/vexxhost/atmosphere
2026-04-23 16:26:53.967711 | instance | Installed 83 packages in 40ms
2026-04-23 16:26:54.615463 | instance | WARNING Molecule scenarios should migrate to 'extensions/molecule'
2026-04-23 16:26:55.202337 | instance | INFO [aio > discovery] scenario test matrix: prepare
2026-04-23 16:26:55.202380 | instance | INFO [aio > prerun] Performing prerun with role_name_check=0...
2026-04-23 16:27:40.205089 | instance | INFO [aio > prepare] Executing
2026-04-23 16:27:41.116644 | instance |
2026-04-23 16:27:41.117117 | instance | PLAY [Prepare] *****************************************************************
2026-04-23 16:27:41.117432 | instance |
2026-04-23 16:27:41.117684 | instance | TASK [Gathering Facts] *********************************************************
2026-04-23 16:27:41.117955 | instance | Thursday 23 April 2026  16:27:41 +0000 (0:00:00.024)       0:00:00.024 ********
2026-04-23 16:27:42.239810 | instance | [WARNING]: Platform linux on host instance is using the discovered Python
2026-04-23 16:27:42.240054 | instance | interpreter at /usr/bin/python3.10, but future installation of another Python
2026-04-23 16:27:42.240326 | instance | interpreter could change the meaning of that path. See
2026-04-23 16:27:42.240590 | instance | https://docs.ansible.com/ansible-
2026-04-23 16:27:42.240886 | instance | core/2.17/reference_appendices/interpreter_discovery.html for more information.
2026-04-23 16:27:42.250178 | instance | ok: [instance]
2026-04-23 16:27:42.250405 | instance |
2026-04-23 16:27:42.250663 | instance | TASK [Configure short hostname] ************************************************
2026-04-23 16:27:42.250931 | instance | Thursday 23 April 2026  16:27:42 +0000 (0:00:01.134)       0:00:01.158 ********
2026-04-23 16:27:42.925378 | instance | changed: [instance]
2026-04-23 16:27:42.925615 | instance |
2026-04-23 16:27:42.925905 | instance | TASK [Ensure hostname inside hosts file] ***************************************
2026-04-23 16:27:42.926306 | instance | Thursday 23 April 2026  16:27:42 +0000 (0:00:00.674)       0:00:01.833 ********
2026-04-23 16:27:43.186707 | instance | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2026-04-23 16:27:43.186955 | instance | with a mode of 0700, this may cause issues when running as another user. To
2026-04-23 16:27:43.187218 | instance | avoid this, create the remote_tmp dir with the correct permissions manually
2026-04-23 16:27:43.196608 | instance | changed: [instance]
2026-04-23 16:27:43.196856 | instance |
2026-04-23 16:27:43.197136 | instance | TASK [Install "dirmngr" for GPG keyserver operations] **************************
2026-04-23 16:27:43.197403 | instance | Thursday 23 April 2026  16:27:43 +0000 (0:00:00.271)       0:00:02.104 ********
2026-04-23 16:27:44.253211 | instance | ok: [instance]
2026-04-23 16:27:44.253537 | instance |
2026-04-23 16:27:44.253703 | instance | TASK [Purge "snapd" package] ***************************************************
2026-04-23 16:27:44.254031 | instance | Thursday 23 April 2026  16:27:44 +0000 (0:00:01.055)       0:00:03.160 ********
2026-04-23 16:27:44.942479 | instance | ok: [instance]
2026-04-23 16:27:44.942704 | instance |
2026-04-23 16:27:44.942983 | instance | PLAY [Generate workspace for Atmosphere] ***************************************
2026-04-23 16:27:44.943237 | instance |
2026-04-23 16:27:44.943509 | instance | TASK [Create folders for workspace] ********************************************
2026-04-23 16:27:44.943786 | instance | Thursday 23 April 2026  16:27:44 +0000 (0:00:00.689)       0:00:03.850 ********
2026-04-23 16:27:45.957918 | instance | changed: [localhost] => (item=group_vars)
2026-04-23 16:27:45.958286 | instance | changed: [localhost] => (item=group_vars/all)
2026-04-23 16:27:45.958622 | instance | changed: [localhost] => (item=group_vars/controllers)
2026-04-23 16:27:45.959095 | instance | changed: [localhost] => (item=group_vars/cephs)
2026-04-23 16:27:45.959430 | instance | changed: [localhost] => (item=group_vars/computes)
2026-04-23 16:27:45.959706 | instance | changed: [localhost] => (item=host_vars)
2026-04-23 16:27:45.959963 | instance |
2026-04-23 16:27:45.960240 | instance | PLAY [Generate Ceph control plane configuration for workspace] *****************
2026-04-23 16:27:45.960493 | instance |
2026-04-23 16:27:45.960804 | instance | TASK [Ensure the Ceph control plane configuration file exists] *****************
2026-04-23 16:27:45.961081 | instance | Thursday 23 April 2026  16:27:45 +0000 (0:00:01.015)       0:00:04.865 ********
2026-04-23 16:27:46.136576 | instance | changed: [localhost]
2026-04-23 16:27:46.136779 | instance |
2026-04-23 16:27:46.137083 | instance | TASK [Load the current Ceph control plane configuration into a variable] *******
2026-04-23 16:27:46.137369 | instance | Thursday 23 April 2026  16:27:46 +0000 (0:00:00.178)       0:00:05.044 ********
2026-04-23 16:27:46.164734 | instance | ok: [localhost]
2026-04-23 16:27:46.164968 | instance |
2026-04-23 16:27:46.165269 | instance | TASK [Generate Ceph control plane values for missing variables] ****************
2026-04-23 16:27:46.165550 | instance | Thursday 23 April 2026  16:27:46 +0000 (0:00:00.028)       0:00:05.072 ********
2026-04-23 16:27:46.217244 | instance | ok: [localhost] => (item={'key': 'ceph_fsid', 'value': 'aec69f02-1a84-533d-ae19-116a0c440f29'})
2026-04-23 16:27:46.217533 | instance | ok: [localhost] => (item={'key': 'ceph_mon_public_network', 'value': '10.96.240.0/24'})
2026-04-23 16:27:46.217812 | instance |
2026-04-23 16:27:46.218074 | instance | TASK [Write new Ceph control plane configuration file to disk] *****************
2026-04-23 16:27:46.218349 | instance | Thursday 23 April 2026  16:27:46 +0000 (0:00:00.052)       0:00:05.124 ********
2026-04-23 16:27:46.776236 | instance | changed: [localhost]
2026-04-23 16:27:46.776477 | instance |
2026-04-23 16:27:46.776787 | instance | PLAY [Generate Ceph OSD configuration for workspace] ***************************
2026-04-23 16:27:46.777072 | instance |
2026-04-23 16:27:46.777358 | instance | TASK [Ensure the Ceph OSDs configuration file exists] **************************
2026-04-23 16:27:46.777649 | instance | Thursday 23 April 2026  16:27:46 +0000 (0:00:00.559)       0:00:05.683 ********
2026-04-23 16:27:46.972872 | instance | changed: [localhost]
2026-04-23 16:27:46.973216 | instance |
2026-04-23 16:27:46.973586 | instance | TASK [Load the current Ceph OSDs configuration into a variable] ****************
2026-04-23 16:27:46.973907 | instance | Thursday 23 April 2026  16:27:46 +0000 (0:00:00.196)       0:00:05.880 ********
2026-04-23 16:27:47.000440 | instance | ok: [localhost]
2026-04-23 16:27:47.000751 | instance |
2026-04-23 16:27:47.001100 | instance | TASK [Generate Ceph OSDs values for missing variables] *************************
2026-04-23 16:27:47.001484 | instance | Thursday 23 April 2026  16:27:46 +0000 (0:00:00.027)       0:00:05.908 ********
2026-04-23 16:27:47.036988 | instance | ok: [localhost] => (item={'key': 'ceph_osd_devices', 'value': ['/dev/vdb', '/dev/vdc', '/dev/vdd']})
2026-04-23 16:27:47.037063 | instance |
2026-04-23 16:27:47.037174 | instance | TASK [Write new Ceph OSDs configuration file to disk] **************************
2026-04-23 16:27:47.037294 | instance | Thursday 23 April 2026  16:27:47 +0000 (0:00:00.036)       0:00:05.944 ********
2026-04-23 16:27:47.402444 | instance | changed: [localhost]
2026-04-23 16:27:47.402688 | instance |
2026-04-23 16:27:47.403072 | instance | PLAY [Generate Kubernetes configuration for workspace] *************************
2026-04-23 16:27:47.403360 | instance |
2026-04-23 16:27:47.403649 | instance | TASK [Ensure the Kubernetes configuration file exists] *************************
2026-04-23 16:27:47.403940 | instance | Thursday 23 April 2026  16:27:47 +0000 (0:00:00.365)       0:00:06.310 ********
2026-04-23 16:27:47.570497 | instance | changed: [localhost]
2026-04-23 16:27:47.570759 | instance |
2026-04-23 16:27:47.570983 | instance | TASK [Load the current Kubernetes configuration into a variable] ***************
2026-04-23 16:27:47.571261 | instance | Thursday 23 April 2026  16:27:47 +0000 (0:00:00.167)       0:00:06.477 ********
2026-04-23 16:27:47.600117 | instance | ok: [localhost]
2026-04-23 16:27:47.600249 | instance |
2026-04-23 16:27:47.600449 | instance | TASK [Generate Kubernetes values for missing variables] ************************
2026-04-23 16:27:47.600632 | instance | Thursday 23 April 2026  16:27:47 +0000 (0:00:00.030)       0:00:06.508 ********
2026-04-23 16:27:47.642464 | instance | ok: [localhost] => (item={'key': 'kubernetes_hostname', 'value': '10.96.240.10'})
2026-04-23 16:27:47.642727 | instance | ok: [localhost] => (item={'key': 'kubernetes_keepalived_vrid', 'value': 42})
2026-04-23 16:27:47.643023 | instance | ok: [localhost] => (item={'key': 'kubernetes_keepalived_vip', 'value': '10.96.240.10'})
2026-04-23 16:27:47.643299 | instance |
2026-04-23 16:27:47.643569 | instance | TASK [Write new Kubernetes configuration file to disk] *************************
2026-04-23 16:27:47.643838 | instance | Thursday 23 April 2026  16:27:47 +0000 (0:00:00.042)       0:00:06.550 ********
2026-04-23 16:27:47.982507 | instance | changed: [localhost]
2026-04-23 16:27:47.982759 | instance |
2026-04-23 16:27:47.983038 | instance | PLAY [Generate Keepalived configuration for workspace] *************************
2026-04-23 16:27:47.983250 | instance |
2026-04-23 16:27:47.983550 | instance | TASK [Ensure the Keeaplived configuration file exists] *************************
2026-04-23 16:27:47.983875 | instance | Thursday 23 April 2026  16:27:47 +0000 (0:00:00.339)       0:00:06.889 ********
2026-04-23 16:27:48.164498 | instance | changed: [localhost]
2026-04-23 16:27:48.164640 | instance |
2026-04-23 16:27:48.164925 | instance | TASK [Load the current Keepalived configuration into a variable] ***************
2026-04-23 16:27:48.165186 | instance | Thursday 23 April 2026  16:27:48 +0000 (0:00:00.181)       0:00:07.071 ********
2026-04-23 16:27:48.188206 | instance | ok: [localhost]
2026-04-23 16:27:48.188439 | instance |
2026-04-23 16:27:48.188702 | instance | TASK [Generate Keepalived values for missing variables] ************************
2026-04-23 16:27:48.188967 | instance | Thursday 23 April 2026  16:27:48 +0000 (0:00:00.024)       0:00:07.096 ********
2026-04-23 16:27:48.227631 | instance | ok: [localhost] => (item={'key': 'keepalived_interface', 'value': 'br-ex'})
2026-04-23 16:27:48.227866 | instance | ok: [localhost] => (item={'key': 'keepalived_vip', 'value': '10.96.250.10'})
2026-04-23 16:27:48.228136 | instance |
2026-04-23 16:27:48.228392 | instance | TASK [Write new Keepalived configuration file to disk] *************************
2026-04-23 16:27:48.228653 | instance | Thursday 23 April 2026  16:27:48 +0000 (0:00:00.038)       0:00:07.135 ********
2026-04-23 16:27:48.601826 | instance | changed: [localhost]
2026-04-23 16:27:48.602184 | instance |
2026-04-23 16:27:48.602570 | instance | PLAY [Generate endpoints for workspace] ****************************************
2026-04-23 16:27:48.602894 | instance |
2026-04-23 16:27:48.603255 | instance | TASK [Gathering Facts] *********************************************************
2026-04-23 16:27:48.603633 | instance | Thursday 23 April 2026  16:27:48 +0000 (0:00:00.372)       0:00:07.508 ********
2026-04-23 16:27:49.275667 | instance | ok: [localhost]
2026-04-23 16:27:49.275877 | instance |
2026-04-23 16:27:49.276149 | instance | TASK [Ensure the endpoints file exists] ****************************************
2026-04-23 16:27:49.276421 | instance | Thursday 23 April 2026  16:27:49 +0000 (0:00:00.674)       0:00:08.183 ********
2026-04-23 16:27:49.457499 | instance | changed: [localhost]
2026-04-23 16:27:49.457719 | instance |
2026-04-23 16:27:49.458096 | instance | TASK [Load the current endpoints into a variable] ******************************
2026-04-23 16:27:49.458371 | instance | Thursday 23 April 2026  16:27:49 +0000 (0:00:00.181)       0:00:08.365 ********
2026-04-23 16:27:49.489296 | instance | ok: [localhost]
2026-04-23 16:27:49.489555 | instance |
2026-04-23 16:27:49.489823 | instance | TASK [Generate endpoint skeleton for missing variables] ************************
2026-04-23 16:27:49.490120 | instance | Thursday 23 April 2026  16:27:49 +0000 (0:00:00.032)       0:00:08.397 ********
2026-04-23 16:27:50.276395 | instance | ok: [localhost] => (item=keycloak_host)
2026-04-23 16:27:50.276614 | instance | ok: [localhost] => (item=kube_prometheus_stack_grafana_host)
2026-04-23 16:27:50.276895 | instance | ok: [localhost] => (item=kube_prometheus_stack_alertmanager_host)
2026-04-23 16:27:50.277278 | instance | ok: [localhost] => (item=kube_prometheus_stack_prometheus_host)
2026-04-23 16:27:50.277494 | instance | ok: [localhost] => (item=openstack_helm_endpoints_region_name)
2026-04-23 16:27:50.277756 | instance | ok: [localhost] => (item=openstack_helm_endpoints_keystone_api_host)
2026-04-23 16:27:50.278039 | instance | ok: [localhost] => (item=openstack_helm_endpoints_glance_api_host)
2026-04-23 16:27:50.278395 | instance | ok: [localhost] => (item=openstack_helm_endpoints_cinder_api_host)
2026-04-23 16:27:50.278628 | instance | ok: [localhost] => (item=openstack_helm_endpoints_placement_api_host)
2026-04-23 16:27:50.278946 | instance | ok: [localhost] => (item=openstack_helm_endpoints_barbican_api_host)
2026-04-23 16:27:50.279159 | instance | ok: [localhost] => (item=openstack_helm_endpoints_neutron_api_host)
2026-04-23 16:27:50.279421 | instance | ok: [localhost] => (item=openstack_helm_endpoints_nova_api_host)
2026-04-23 16:27:50.279684 | instance | ok: [localhost] => (item=openstack_helm_endpoints_nova_novnc_host)
2026-04-23 16:27:50.279951 | instance | ok: [localhost] => (item=openstack_helm_endpoints_ironic_api_host)
2026-04-23 16:27:50.280211 | instance | ok: [localhost] => (item=openstack_helm_endpoints_designate_api_host)
2026-04-23 16:27:50.280474 | instance | ok: [localhost] => (item=openstack_helm_endpoints_octavia_api_host)
2026-04-23 16:27:50.280735 | instance | ok: [localhost] => (item=openstack_helm_endpoints_magnum_api_host)
2026-04-23 16:27:50.280999 | instance | ok: [localhost] => (item=openstack_helm_endpoints_magnum_registry_host)
2026-04-23 16:27:50.281259 | instance | ok: [localhost] => (item=openstack_helm_endpoints_heat_api_host)
2026-04-23 16:27:50.281521 | instance | ok: [localhost] => (item=openstack_helm_endpoints_heat_cfn_api_host)
2026-04-23 16:27:50.281786 | instance | ok: [localhost] => (item=openstack_helm_endpoints_horizon_api_host)
2026-04-23 16:27:50.282260 | instance | ok: [localhost] => (item=openstack_helm_endpoints_rgw_host)
2026-04-23 16:27:50.282535 | instance | ok: [localhost] => (item=openstack_helm_endpoints_manila_api_host)
2026-04-23 16:27:50.282738 | instance |
2026-04-23 16:27:50.282964 | instance | TASK [Write new endpoints file to disk] ****************************************
2026-04-23 16:27:50.283193 | instance | Thursday 23 April 2026  16:27:50 +0000 (0:00:00.784)       0:00:09.182 ********
2026-04-23 16:27:50.627522 | instance | changed: [localhost]
2026-04-23 16:27:50.627803 | instance |
2026-04-23 16:27:50.628093 | instance | TASK [Ensure the endpoints file exists] ****************************************
2026-04-23 16:27:50.628368 | instance | Thursday 23 April 2026  16:27:50 +0000 (0:00:00.352)       0:00:09.535 ********
2026-04-23 16:27:50.810569 | instance | changed: [localhost]
2026-04-23 16:27:50.810781 | instance |
2026-04-23 16:27:50.811053 | instance | PLAY [Generate Neutron configuration for workspace] ****************************
2026-04-23 16:27:50.811291 | instance |
2026-04-23 16:27:50.811545 | instance | TASK [Ensure the Neutron configuration file exists] ****************************
2026-04-23 16:27:50.811815 | instance | Thursday 23 April 2026  16:27:50 +0000 (0:00:00.183)       0:00:09.718 ********
2026-04-23 16:27:50.991211 | instance | changed: [localhost]
2026-04-23 16:27:50.991475 | instance |
2026-04-23 16:27:50.991747 | instance | TASK [Load the current Neutron configuration into a variable] ******************
2026-04-23 16:27:50.992063 | instance | Thursday 23 April 2026  16:27:50 +0000 (0:00:00.180)       0:00:09.898 ********
2026-04-23 16:27:51.023153 | instance | ok: [localhost]
2026-04-23 16:27:51.023472 | instance |
2026-04-23 16:27:51.023811 | instance | TASK [Generate Neutron values for missing variables] ***************************
2026-04-23 16:27:51.024142 | instance | Thursday 23 April 2026  16:27:51 +0000 (0:00:00.032)       0:00:09.931 ********
2026-04-23 16:27:51.064917 | instance | ok: [localhost] => (item={'key': 'neutron_networks', 'value': [{'name': 'public', 'external': True, 'shared': True, 'mtu_size': 1500, 'port_security_enabled': True, 'provider_network_type': 'flat', 'provider_physical_network': 'external', 'subnets': [{'name': 'public-subnet', 'cidr': '10.96.250.0/24', 'gateway_ip': '10.96.250.10', 'allocation_pool_start': '10.96.250.200', 'allocation_pool_end': '10.96.250.220', 'enable_dhcp': True}]}]})
2026-04-23 16:27:51.065126 | instance |
2026-04-23 16:27:51.065399 | instance | TASK [Write new Neutron configuration file to disk] ****************************
2026-04-23 16:27:51.065684 | instance | Thursday 23 April 2026  16:27:51 +0000 (0:00:00.042)       0:00:09.973 ********
2026-04-23 16:27:51.451438 | instance | changed: [localhost]
2026-04-23 16:27:51.452066 | instance |
2026-04-23 16:27:51.452078 | instance | PLAY [Generate Nova configuration for workspace] *******************************
2026-04-23 16:27:51.452195 | instance |
2026-04-23 16:27:51.452470 | instance | TASK [Ensure the Nova configuration file exists] *******************************
2026-04-23 16:27:51.452746 | instance | Thursday 23 April 2026  16:27:51 +0000 (0:00:00.384)       0:00:10.357 ********
2026-04-23 16:27:51.631484 | instance | changed: [localhost]
2026-04-23 16:27:51.631761 | instance |
2026-04-23 16:27:51.632113 | instance | TASK [Load the current Nova configuration into a variable] *********************
2026-04-23 16:27:51.632397 | instance | Thursday 23 April 2026  16:27:51 +0000 (0:00:00.179)       0:00:10.536 ********
2026-04-23 16:27:51.659561 | instance | ok: [localhost]
2026-04-23 16:27:51.659764 | instance |
2026-04-23 16:27:51.659996 | instance | TASK [Generate Nova values for missing variables] ******************************
2026-04-23 16:27:51.660218 | instance | Thursday 23 April 2026  16:27:51 +0000 (0:00:00.030)       0:00:10.567 ********
2026-04-23 16:27:51.704412 | instance | ok: [localhost] => (item={'key': 'nova_flavors', 'value': [{'name': 'm1.tiny', 'ram': 512, 'disk': 1, 'vcpus': 1}, {'name': 'm1.small', 'ram': 2048, 'disk': 20, 'vcpus': 1}, {'name': 'm1.medium', 'ram': 4096, 'disk': 40, 'vcpus': 2}, {'name': 'm1.large', 'ram': 8192, 'disk': 80, 'vcpus': 4}, {'name': 'm1.xlarge', 'ram': 16384, 'disk': 160, 'vcpus': 8}]})
2026-04-23 16:27:51.704670 | instance |
2026-04-23 16:27:51.704989 | instance | TASK [Write new Nova configuration file to disk] *******************************
2026-04-23 16:27:51.705295 | instance | Thursday 23 April 2026  16:27:51 +0000 (0:00:00.044)       0:00:10.612 ********
2026-04-23 16:27:52.056769 | instance | changed: [localhost]
2026-04-23 16:27:52.056995 | instance |
2026-04-23 16:27:52.057303 | instance | PLAY [Generate secrets for workspace] ******************************************
2026-04-23 16:27:52.057533 | instance |
2026-04-23 16:27:52.057804 | instance | TASK [Ensure the secrets file exists] ******************************************
2026-04-23 16:27:52.058121 | instance | Thursday 23 April 2026  16:27:52 +0000 (0:00:00.351)       0:00:10.964 ********
2026-04-23 16:27:52.233423 | instance | changed: [localhost]
2026-04-23 16:27:52.233635 | instance |
2026-04-23 16:27:52.233899 | instance | TASK [Load the current secrets into a variable] ********************************
2026-04-23 16:27:52.234269 | instance | Thursday 23 April 2026  16:27:52 +0000 (0:00:00.176)       0:00:11.141 ********
2026-04-23 16:27:52.262896 | instance | ok: [localhost]
2026-04-23 16:27:52.263244 | instance |
2026-04-23 16:27:52.263574 | instance | TASK [Generate secrets for missing variables] **********************************
2026-04-23 16:27:52.263884 | instance | Thursday 23 April 2026  16:27:52 +0000 (0:00:00.029)       0:00:11.171 ********
2026-04-23 16:27:52.688187 | instance | ok: [localhost] => (item=heat_auth_encryption_key)
2026-04-23 16:27:52.688438 | instance | ok: [localhost] => (item=keepalived_password)
2026-04-23 16:27:52.688717 | instance | ok: [localhost] => (item=keycloak_admin_password)
2026-04-23 16:27:52.688992 | instance | ok: [localhost] => (item=keycloak_database_password)
2026-04-23 16:27:52.689281 | instance | ok: [localhost] => (item=keystone_keycloak_client_secret)
2026-04-23 16:27:52.689548 | instance | ok: [localhost] => (item=keystone_oidc_crypto_passphrase)
2026-04-23 16:27:52.689818 | instance | ok: [localhost] => (item=kube_prometheus_stack_grafana_admin_password)
2026-04-23 16:27:52.690189 | instance | ok: [localhost] => (item=octavia_heartbeat_key)
2026-04-23 16:27:52.690468 | instance | ok: [localhost] => (item=openstack_helm_endpoints_rabbitmq_admin_password)
2026-04-23 16:27:52.690737 | instance | ok: [localhost] => (item=openstack_helm_endpoints_memcached_secret_key)
2026-04-23 16:27:52.691006 | instance | ok: [localhost] => (item=openstack_helm_endpoints_keystone_admin_password)
2026-04-23 16:27:52.691270 | instance | ok: [localhost] => (item=openstack_helm_endpoints_keystone_mariadb_password)
2026-04-23 16:27:52.691534 | instance | ok: [localhost] => (item=openstack_helm_endpoints_keystone_rabbitmq_password)
2026-04-23 16:27:52.691800 | instance | ok: [localhost] => (item=openstack_helm_endpoints_glance_keystone_password)
2026-04-23 16:27:52.692069 | instance | ok: [localhost] => (item=openstack_helm_endpoints_glance_mariadb_password)
2026-04-23 16:27:52.692345 | instance | ok: [localhost] => (item=openstack_helm_endpoints_glance_rabbitmq_password)
2026-04-23 16:27:52.692616 | instance | ok: [localhost] => (item=openstack_helm_endpoints_cinder_keystone_password)
2026-04-23 16:27:52.692886 | instance | ok: [localhost] => (item=openstack_helm_endpoints_cinder_mariadb_password)
2026-04-23 16:27:53.693161 | instance | ok: [localhost] =>
(item=openstack_helm_endpoints_cinder_rabbitmq_password) 2026-04-23 16:27:52.693434 | instance | ok: [localhost] => (item=openstack_helm_endpoints_placement_keystone_password) 2026-04-23 16:27:52.693706 | instance | ok: [localhost] => (item=openstack_helm_endpoints_placement_mariadb_password) 2026-04-23 16:27:52.693995 | instance | ok: [localhost] => (item=openstack_helm_endpoints_barbican_keystone_password) 2026-04-23 16:27:52.694349 | instance | ok: [localhost] => (item=openstack_helm_endpoints_barbican_mariadb_password) 2026-04-23 16:27:52.694616 | instance | ok: [localhost] => (item=openstack_helm_endpoints_neutron_keystone_password) 2026-04-23 16:27:52.694846 | instance | ok: [localhost] => (item=openstack_helm_endpoints_neutron_mariadb_password) 2026-04-23 16:27:52.694998 | instance | ok: [localhost] => (item=openstack_helm_endpoints_neutron_rabbitmq_password) 2026-04-23 16:27:52.695163 | instance | ok: [localhost] => (item=openstack_helm_endpoints_neutron_metadata_secret) 2026-04-23 16:27:52.695327 | instance | ok: [localhost] => (item=openstack_helm_endpoints_nova_keystone_password) 2026-04-23 16:27:52.695490 | instance | ok: [localhost] => (item=openstack_helm_endpoints_nova_mariadb_password) 2026-04-23 16:27:52.695650 | instance | ok: [localhost] => (item=openstack_helm_endpoints_nova_rabbitmq_password) 2026-04-23 16:27:52.695813 | instance | ok: [localhost] => (item=openstack_helm_endpoints_ironic_keystone_password) 2026-04-23 16:27:52.695978 | instance | ok: [localhost] => (item=openstack_helm_endpoints_ironic_mariadb_password) 2026-04-23 16:27:52.696141 | instance | ok: [localhost] => (item=openstack_helm_endpoints_ironic_rabbitmq_password) 2026-04-23 16:27:52.696307 | instance | ok: [localhost] => (item=openstack_helm_endpoints_designate_keystone_password) 2026-04-23 16:27:52.696469 | instance | ok: [localhost] => (item=openstack_helm_endpoints_designate_mariadb_password) 2026-04-23 16:27:52.696632 | instance | ok: [localhost] => 
(item=openstack_helm_endpoints_designate_rabbitmq_password) 2026-04-23 16:27:52.696797 | instance | ok: [localhost] => (item=openstack_helm_endpoints_octavia_keystone_password) 2026-04-23 16:27:52.696962 | instance | ok: [localhost] => (item=openstack_helm_endpoints_octavia_mariadb_password) 2026-04-23 16:27:52.697126 | instance | ok: [localhost] => (item=openstack_helm_endpoints_octavia_rabbitmq_password) 2026-04-23 16:27:52.697289 | instance | ok: [localhost] => (item=openstack_helm_endpoints_magnum_keystone_password) 2026-04-23 16:27:52.697457 | instance | ok: [localhost] => (item=openstack_helm_endpoints_magnum_mariadb_password) 2026-04-23 16:27:52.697676 | instance | ok: [localhost] => (item=openstack_helm_endpoints_magnum_rabbitmq_password) 2026-04-23 16:27:52.697859 | instance | ok: [localhost] => (item=openstack_helm_endpoints_heat_keystone_password) 2026-04-23 16:27:52.698145 | instance | ok: [localhost] => (item=openstack_helm_endpoints_heat_trustee_keystone_password) 2026-04-23 16:27:52.698422 | instance | ok: [localhost] => (item=openstack_helm_endpoints_heat_stack_user_keystone_password) 2026-04-23 16:27:52.698598 | instance | ok: [localhost] => (item=openstack_helm_endpoints_heat_mariadb_password) 2026-04-23 16:27:52.698765 | instance | ok: [localhost] => (item=openstack_helm_endpoints_heat_rabbitmq_password) 2026-04-23 16:27:52.698934 | instance | ok: [localhost] => (item=openstack_helm_endpoints_horizon_mariadb_password) 2026-04-23 16:27:52.699099 | instance | ok: [localhost] => (item=openstack_helm_endpoints_tempest_keystone_password) 2026-04-23 16:27:52.699274 | instance | ok: [localhost] => (item=openstack_helm_endpoints_openstack_exporter_keystone_password) 2026-04-23 16:27:52.699432 | instance | ok: [localhost] => (item=openstack_helm_endpoints_rgw_keystone_password) 2026-04-23 16:27:52.699597 | instance | ok: [localhost] => (item=openstack_helm_endpoints_manila_keystone_password) 2026-04-23 16:27:52.699760 | instance | ok: [localhost] => 
(item=openstack_helm_endpoints_manila_mariadb_password) 2026-04-23 16:27:52.699924 | instance | ok: [localhost] => (item=openstack_helm_endpoints_staffeln_mariadb_password) 2026-04-23 16:27:52.700080 | instance | 2026-04-23 16:27:52.700287 | instance | TASK [Generate base64 encoded secrets] ***************************************** 2026-04-23 16:27:52.700472 | instance | Thursday 23 April 2026 16:27:52 +0000 (0:00:00.424) 0:00:11.596 ******** 2026-04-23 16:27:52.740643 | instance | ok: [localhost] => (item=barbican_kek) 2026-04-23 16:27:52.740865 | instance | 2026-04-23 16:27:52.741133 | instance | TASK [Generate temporary files for generating keys for missing variables] ****** 2026-04-23 16:27:52.741402 | instance | Thursday 23 April 2026 16:27:52 +0000 (0:00:00.052) 0:00:11.648 ******** 2026-04-23 16:27:53.123870 | instance | changed: [localhost] => (item=manila_ssh_key) 2026-04-23 16:27:53.124295 | instance | changed: [localhost] => (item=nova_ssh_key) 2026-04-23 16:27:53.124688 | instance | 2026-04-23 16:27:53.124960 | instance | TASK [Generate SSH keys for missing variables] ********************************* 2026-04-23 16:27:53.125237 | instance | Thursday 23 April 2026 16:27:53 +0000 (0:00:00.381) 0:00:12.029 ******** 2026-04-23 16:27:56.902392 | instance | changed: [localhost] => (item=manila_ssh_key) 2026-04-23 16:27:56.902617 | instance | changed: [localhost] => (item=nova_ssh_key) 2026-04-23 16:27:56.903690 | instance | 2026-04-23 16:27:56.903840 | instance | TASK [Set values for SSH keys] ************************************************* 2026-04-23 16:27:56.903854 | instance | Thursday 23 April 2026 16:27:56 +0000 (0:00:03.779) 0:00:15.809 ******** 2026-04-23 16:27:56.954261 | instance | ok: [localhost] => (item=manila_ssh_key) 2026-04-23 16:27:56.954554 | instance | ok: [localhost] => (item=nova_ssh_key) 2026-04-23 16:27:56.954987 | instance | 2026-04-23 16:27:56.955197 | instance | TASK [Delete the temporary files generated for SSH keys] 
*********************** 2026-04-23 16:27:56.955497 | instance | Thursday 23 April 2026 16:27:56 +0000 (0:00:00.052) 0:00:15.862 ******** 2026-04-23 16:27:57.281525 | instance | changed: [localhost] => (item=manila_ssh_key) 2026-04-23 16:27:57.281747 | instance | changed: [localhost] => (item=nova_ssh_key) 2026-04-23 16:27:57.281996 | instance | 2026-04-23 16:27:57.282321 | instance | TASK [Write new secrets file to disk] ****************************************** 2026-04-23 16:27:57.282568 | instance | Thursday 23 April 2026 16:27:57 +0000 (0:00:00.327) 0:00:16.189 ******** 2026-04-23 16:27:57.644133 | instance | changed: [localhost] 2026-04-23 16:27:57.644368 | instance | 2026-04-23 16:27:57.644646 | instance | TASK [Encrypt secrets file with Vault password] ******************************** 2026-04-23 16:27:57.644915 | instance | Thursday 23 April 2026 16:27:57 +0000 (0:00:00.362) 0:00:16.552 ******** 2026-04-23 16:27:57.681983 | instance | skipping: [localhost] 2026-04-23 16:27:57.682303 | instance | 2026-04-23 16:27:57.682585 | instance | PLAY [Setup networking] ******************************************************** 2026-04-23 16:27:57.682842 | instance | 2026-04-23 16:27:57.683122 | instance | TASK [Gathering Facts] ********************************************************* 2026-04-23 16:27:57.683375 | instance | Thursday 23 April 2026 16:27:57 +0000 (0:00:00.038) 0:00:16.590 ******** 2026-04-23 16:27:58.371940 | instance | ok: [instance] 2026-04-23 16:27:58.372008 | instance | 2026-04-23 16:27:58.372224 | instance | TASK [Create bridge for management network] ************************************ 2026-04-23 16:27:58.372359 | instance | Thursday 23 April 2026 16:27:58 +0000 (0:00:00.689) 0:00:17.279 ******** 2026-04-23 16:27:58.695181 | instance | ok: [instance] 2026-04-23 16:27:58.695429 | instance | 2026-04-23 16:27:58.695703 | instance | TASK [Create fake interface for management bridge] ***************************** 2026-04-23 16:27:58.695925 | instance | 
Thursday 23 April 2026 16:27:58 +0000 (0:00:00.323) 0:00:17.603 ******** 2026-04-23 16:27:58.896186 | instance | ok: [instance] 2026-04-23 16:27:58.896597 | instance | 2026-04-23 16:27:58.896836 | instance | TASK [Assign dummy interface to management bridge] ***************************** 2026-04-23 16:27:58.897101 | instance | Thursday 23 April 2026 16:27:58 +0000 (0:00:00.200) 0:00:17.804 ******** 2026-04-23 16:27:59.087754 | instance | ok: [instance] 2026-04-23 16:27:59.087973 | instance | 2026-04-23 16:27:59.088238 | instance | TASK [Assign IP address for management bridge] ********************************* 2026-04-23 16:27:59.088506 | instance | Thursday 23 April 2026 16:27:59 +0000 (0:00:00.191) 0:00:17.995 ******** 2026-04-23 16:27:59.278460 | instance | ok: [instance] 2026-04-23 16:27:59.278690 | instance | 2026-04-23 16:27:59.278977 | instance | TASK [Bring up interfaces] ***************************************************** 2026-04-23 16:27:59.279409 | instance | Thursday 23 April 2026 16:27:59 +0000 (0:00:00.190) 0:00:18.186 ******** 2026-04-23 16:27:59.669099 | instance | ok: [instance] => (item=br-mgmt) 2026-04-23 16:27:59.669207 | instance | ok: [instance] => (item=dummy0) 2026-04-23 16:27:59.669318 | instance | 2026-04-23 16:27:59.669468 | instance | PLAY [Create devices for Ceph] ************************************************* 2026-04-23 16:27:59.669604 | instance | 2026-04-23 16:27:59.669752 | instance | TASK [Gathering Facts] ********************************************************* 2026-04-23 16:27:59.669913 | instance | Thursday 23 April 2026 16:27:59 +0000 (0:00:00.390) 0:00:18.576 ******** 2026-04-23 16:28:00.405201 | instance | ok: [instance] 2026-04-23 16:28:00.405373 | instance | 2026-04-23 16:28:00.405635 | instance | TASK [Install depedencies] ***************************************************** 2026-04-23 16:28:00.405896 | instance | Thursday 23 April 2026 16:28:00 +0000 (0:00:00.736) 0:00:19.313 ******** 2026-04-23 16:28:18.889363 | 
instance | changed: [instance] 2026-04-23 16:28:18.889578 | instance | 2026-04-23 16:28:18.889852 | instance | TASK [Start up service] ******************************************************** 2026-04-23 16:28:18.890203 | instance | Thursday 23 April 2026 16:28:18 +0000 (0:00:18.483) 0:00:37.796 ******** 2026-04-23 16:28:19.423267 | instance | ok: [instance] 2026-04-23 16:28:19.423539 | instance | 2026-04-23 16:28:19.423831 | instance | TASK [Generate lvm.conf] ******************************************************* 2026-04-23 16:28:19.424113 | instance | Thursday 23 April 2026 16:28:19 +0000 (0:00:00.534) 0:00:38.331 ******** 2026-04-23 16:28:19.640330 | instance | ok: [instance] 2026-04-23 16:28:19.640515 | instance | 2026-04-23 16:28:19.640778 | instance | TASK [Write /etc/lvm/lvm.conf] ************************************************* 2026-04-23 16:28:19.641082 | instance | Thursday 23 April 2026 16:28:19 +0000 (0:00:00.217) 0:00:38.548 ******** 2026-04-23 16:28:20.119323 | instance | changed: [instance] 2026-04-23 16:28:20.119690 | instance | 2026-04-23 16:28:20.120013 | instance | TASK [Get list of all loopback devices] **************************************** 2026-04-23 16:28:20.120332 | instance | Thursday 23 April 2026 16:28:20 +0000 (0:00:00.478) 0:00:39.027 ******** 2026-04-23 16:28:20.318600 | instance | ok: [instance] 2026-04-23 16:28:20.318834 | instance | 2026-04-23 16:28:20.319044 | instance | TASK [Fail if there is any existing loopback devices] ************************** 2026-04-23 16:28:20.319207 | instance | Thursday 23 April 2026 16:28:20 +0000 (0:00:00.196) 0:00:39.224 ******** 2026-04-23 16:28:20.343351 | instance | skipping: [instance] 2026-04-23 16:28:20.343371 | instance | 2026-04-23 16:28:20.343378 | instance | TASK [Create devices for Ceph] ************************************************* 2026-04-23 16:28:20.343384 | instance | Thursday 23 April 2026 16:28:20 +0000 (0:00:00.027) 0:00:39.251 ******** 2026-04-23 16:28:20.865625 | instance 
| changed: [instance] => (item=osd0) 2026-04-23 16:28:20.865673 | instance | changed: [instance] => (item=osd1) 2026-04-23 16:28:20.865680 | instance | changed: [instance] => (item=osd2) 2026-04-23 16:28:20.865686 | instance | 2026-04-23 16:28:20.865693 | instance | TASK [Set permissions on loopback devices] ************************************* 2026-04-23 16:28:20.865700 | instance | Thursday 23 April 2026 16:28:20 +0000 (0:00:00.521) 0:00:39.772 ******** 2026-04-23 16:28:21.379310 | instance | changed: [instance] => (item=osd0) 2026-04-23 16:28:21.379355 | instance | changed: [instance] => (item=osd1) 2026-04-23 16:28:21.379360 | instance | changed: [instance] => (item=osd2) 2026-04-23 16:28:21.379365 | instance | 2026-04-23 16:28:21.379370 | instance | TASK [Start loop devices] ****************************************************** 2026-04-23 16:28:21.379385 | instance | Thursday 23 April 2026 16:28:21 +0000 (0:00:00.514) 0:00:40.286 ******** 2026-04-23 16:28:22.117751 | instance | changed: [instance] => (item=osd0) 2026-04-23 16:28:22.117796 | instance | changed: [instance] => (item=osd1) 2026-04-23 16:28:22.117804 | instance | changed: [instance] => (item=osd2) 2026-04-23 16:28:22.117810 | instance | 2026-04-23 16:28:22.117817 | instance | TASK [Create a volume group for each loop device] ****************************** 2026-04-23 16:28:22.117823 | instance | Thursday 23 April 2026 16:28:22 +0000 (0:00:00.738) 0:00:41.025 ******** 2026-04-23 16:28:25.304029 | instance | changed: [instance] => (item=osd0) 2026-04-23 16:28:25.304087 | instance | changed: [instance] => (item=osd1) 2026-04-23 16:28:25.304098 | instance | changed: [instance] => (item=osd2) 2026-04-23 16:28:25.304107 | instance | 2026-04-23 16:28:25.304117 | instance | TASK [Create a logical volume for each loop device] **************************** 2026-04-23 16:28:25.304127 | instance | Thursday 23 April 2026 16:28:25 +0000 (0:00:03.185) 0:00:44.210 ******** 2026-04-23 16:28:27.177353 | instance | 
changed: [instance] => (item=ceph-instance-osd0) 2026-04-23 16:28:27.177407 | instance | changed: [instance] => (item=ceph-instance-osd1) 2026-04-23 16:28:27.177414 | instance | changed: [instance] => (item=ceph-instance-osd2) 2026-04-23 16:28:27.177419 | instance | 2026-04-23 16:28:27.177423 | instance | PLAY [controllers] ************************************************************* 2026-04-23 16:28:27.177428 | instance | 2026-04-23 16:28:27.177432 | instance | TASK [Gathering Facts] ********************************************************* 2026-04-23 16:28:27.177436 | instance | Thursday 23 April 2026 16:28:27 +0000 (0:00:01.874) 0:00:46.084 ******** 2026-04-23 16:28:28.069877 | instance | ok: [instance] 2026-04-23 16:28:28.069925 | instance | 2026-04-23 16:28:28.069932 | instance | TASK [Set masquerade rule] ***************************************************** 2026-04-23 16:28:28.069937 | instance | Thursday 23 April 2026 16:28:28 +0000 (0:00:00.892) 0:00:46.977 ******** 2026-04-23 16:28:28.570214 | instance | changed: [instance] 2026-04-23 16:28:28.570288 | instance | 2026-04-23 16:28:28.570300 | instance | PLAY RECAP ********************************************************************* 2026-04-23 16:28:28.570318 | instance | instance : ok=24 changed=10 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 2026-04-23 16:28:28.570328 | instance | localhost : ok=40 changed=21 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 2026-04-23 16:28:28.570866 | instance | 2026-04-23 16:28:28.570932 | instance | Thursday 23 April 2026 16:28:28 +0000 (0:00:00.501) 0:00:47.478 ******** 2026-04-23 16:28:28.570944 | instance | =============================================================================== 2026-04-23 16:28:28.570954 | instance | Install depedencies ---------------------------------------------------- 18.48s 2026-04-23 16:28:28.570969 | instance | Generate SSH keys for missing variables --------------------------------- 3.78s 2026-04-23 16:28:28.571575 | 
instance | Create a volume group for each loop device ------------------------------ 3.19s 2026-04-23 16:28:28.571627 | instance | Create a logical volume for each loop device ---------------------------- 1.87s 2026-04-23 16:28:28.571638 | instance | Gathering Facts --------------------------------------------------------- 1.13s 2026-04-23 16:28:28.571647 | instance | Install "dirmngr" for GPG keyserver operations -------------------------- 1.06s 2026-04-23 16:28:28.571656 | instance | Create folders for workspace -------------------------------------------- 1.02s 2026-04-23 16:28:28.571671 | instance | Gathering Facts --------------------------------------------------------- 0.89s 2026-04-23 16:28:28.573046 | instance | Generate endpoint skeleton for missing variables ------------------------ 0.78s 2026-04-23 16:28:28.573089 | instance | Start loop devices ------------------------------------------------------ 0.74s 2026-04-23 16:28:28.573094 | instance | Gathering Facts --------------------------------------------------------- 0.74s 2026-04-23 16:28:28.573099 | instance | Purge "snapd" package --------------------------------------------------- 0.69s 2026-04-23 16:28:28.573103 | instance | Gathering Facts --------------------------------------------------------- 0.69s 2026-04-23 16:28:28.573119 | instance | Gathering Facts --------------------------------------------------------- 0.67s 2026-04-23 16:28:28.573124 | instance | Configure short hostname ------------------------------------------------ 0.67s 2026-04-23 16:28:28.573129 | instance | Write new Ceph control plane configuration file to disk ----------------- 0.56s 2026-04-23 16:28:28.573133 | instance | Start up service -------------------------------------------------------- 0.53s 2026-04-23 16:28:28.573137 | instance | Create devices for Ceph ------------------------------------------------- 0.52s 2026-04-23 16:28:28.573141 | instance | Set permissions on loopback devices 
------------------------------------- 0.51s 2026-04-23 16:28:28.573145 | instance | Set masquerade rule ----------------------------------------------------- 0.50s 2026-04-23 16:28:28.647654 | instance | INFO [aio > prepare] Executed: Successful 2026-04-23 16:28:28.648290 | instance | INFO Molecule executed 1 scenario (1 successful) 2026-04-23 16:28:29.013782 | instance | ok: Runtime: 0:01:36.185438 2026-04-23 16:28:29.020175 | 2026-04-23 16:28:29.020334 | PLAY RECAP 2026-04-23 16:28:29.020534 | instance | ok: 12 changed: 9 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0 2026-04-23 16:28:29.020581 | 2026-04-23 16:28:29.158287 | PRE-RUN END RESULT_NORMAL: [untrusted : github.com/vexxhost/atmosphere/test-playbooks/molecule/pre.yml@main] 2026-04-23 16:28:29.169654 | RUN START: [untrusted : github.com/vexxhost/atmosphere/molecule/aio/converge.yml@main] 2026-04-23 16:28:29.812537 | 2026-04-23 16:28:29.812710 | PLAY [all] 2026-04-23 16:28:29.825726 | 2026-04-23 16:28:29.825869 | TASK [Build atmosphere binary] 2026-04-23 16:28:30.230694 | instance | go: downloading github.com/spf13/cobra v1.9.1 2026-04-23 16:28:30.244284 | instance | go: downloading golang.org/x/sync v0.18.0 2026-04-23 16:28:30.465671 | instance | go: downloading github.com/spf13/pflag v1.0.7 2026-04-23 16:28:37.374907 | instance | ok: Runtime: 0:00:06.705727 2026-04-23 16:28:37.381889 | 2026-04-23 16:28:37.382058 | TASK [Deploy with parallel orchestrator] 2026-04-23 16:28:37.580496 | instance | ==> Running preflight checks 2026-04-23 16:28:38.047769 | instance | [preflight] 2026-04-23 16:28:38.047829 | instance | [preflight] PLAY [Preflight checks] ******************************************************** 2026-04-23 16:28:38.047841 | instance | [preflight] 2026-04-23 16:28:38.047855 | instance | [preflight] TASK [Fail if atmosphere_ceph_enabled is set] ********************************** 2026-04-23 16:28:38.069827 | instance | [preflight] skipping: [instance] 2026-04-23 16:28:38.069981 | instance 
| [preflight] 2026-04-23 16:28:38.069999 | instance | [preflight] PLAY RECAP ********************************************************************* 2026-04-23 16:28:38.070016 | instance | [preflight] instance : ok=0 changed=0 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 2026-04-23 16:28:38.070027 | instance | [preflight] 2026-04-23 16:28:38.129359 | instance | ==> Preflight checks passed 2026-04-23 16:28:38.129457 | instance | ==> Starting parallel deployment 2026-04-23 16:28:38.129553 | instance | ==> [udev] Starting deployment 2026-04-23 16:28:38.129678 | instance | ==> [multipathd] Starting deployment 2026-04-23 16:28:38.129759 | instance | ==> [iscsi] Starting deployment 2026-04-23 16:28:38.129772 | instance | ==> [ceph] Starting deployment 2026-04-23 16:28:38.129898 | instance | ==> [kubernetes] Starting deployment 2026-04-23 16:28:38.130195 | instance | ==> [lpfc] Starting deployment 2026-04-23 16:28:38.603278 | instance | [udev] 2026-04-23 16:28:38.603334 | instance | [udev] PLAY [controllers:computes] **************************************************** 2026-04-23 16:28:38.603346 | instance | [udev] 2026-04-23 16:28:38.603355 | instance | [udev] TASK [Gathering Facts] ********************************************************* 2026-04-23 16:28:38.616218 | instance | [multipathd] 2026-04-23 16:28:38.616265 | instance | [multipathd] PLAY [controllers:computes] **************************************************** 2026-04-23 16:28:38.616275 | instance | [multipathd] 2026-04-23 16:28:38.616284 | instance | [multipathd] TASK [Gathering Facts] ********************************************************* 2026-04-23 16:28:38.621065 | instance | [lpfc] 2026-04-23 16:28:38.621083 | instance | [lpfc] PLAY [controllers:computes] **************************************************** 2026-04-23 16:28:38.621092 | instance | [lpfc] 2026-04-23 16:28:38.621101 | instance | [lpfc] TASK [Gathering Facts] ********************************************************* 2026-04-23 
16:28:38.918401 | instance | [ceph] 2026-04-23 16:28:38.918471 | instance | [ceph] PLAY [all] ********************************************************************* 2026-04-23 16:28:38.918483 | instance | [ceph] 2026-04-23 16:28:38.918508 | instance | [ceph] TASK [Gathering Facts] ********************************************************* 2026-04-23 16:28:40.025163 | instance | [multipathd] [WARNING]: Platform linux on host instance is using the discovered Python 2026-04-23 16:28:40.025229 | instance | [multipathd] interpreter at /usr/bin/python3.10, but future installation of another Python 2026-04-23 16:28:40.025246 | instance | [multipathd] interpreter could change the meaning of that path. See 2026-04-23 16:28:40.025257 | instance | [multipathd] https://docs.ansible.com/ansible- 2026-04-23 16:28:40.025266 | instance | [multipathd] core/2.17/reference_appendices/interpreter_discovery.html for more information. 2026-04-23 16:28:40.034454 | instance | [multipathd] ok: [instance] 2026-04-23 16:28:40.034490 | instance | [multipathd] 2026-04-23 16:28:40.034501 | instance | [multipathd] TASK [vexxhost.atmosphere.multipathd : Add backports PPA] ********************** 2026-04-23 16:28:40.088545 | instance | [udev] [WARNING]: Platform linux on host instance is using the discovered Python 2026-04-23 16:28:40.088628 | instance | [udev] interpreter at /usr/bin/python3.10, but future installation of another Python 2026-04-23 16:28:40.088641 | instance | [udev] interpreter could change the meaning of that path. See 2026-04-23 16:28:40.088651 | instance | [udev] https://docs.ansible.com/ansible- 2026-04-23 16:28:40.088662 | instance | [udev] core/2.17/reference_appendices/interpreter_discovery.html for more information. 
2026-04-23 16:28:40.099142 | instance | [udev] ok: [instance] 2026-04-23 16:28:40.099174 | instance | [udev] 2026-04-23 16:28:40.099185 | instance | [udev] TASK [vexxhost.atmosphere.udev : Add udev rules for Pure Storage FlashArray] *** 2026-04-23 16:28:40.119537 | instance | [lpfc] [WARNING]: Platform linux on host instance is using the discovered Python 2026-04-23 16:28:40.119588 | instance | [lpfc] interpreter at /usr/bin/python3.10, but future installation of another Python 2026-04-23 16:28:40.119599 | instance | [lpfc] interpreter could change the meaning of that path. See 2026-04-23 16:28:40.119608 | instance | [lpfc] https://docs.ansible.com/ansible- 2026-04-23 16:28:40.119618 | instance | [lpfc] core/2.17/reference_appendices/interpreter_discovery.html for more information. 2026-04-23 16:28:40.128023 | instance | [lpfc] ok: [instance] 2026-04-23 16:28:40.128055 | instance | [lpfc] 2026-04-23 16:28:40.128065 | instance | [lpfc] TASK [vexxhost.atmosphere.lpfc : Detect if the "lpfc" module is loaded] ******** 2026-04-23 16:28:40.205212 | instance | [ceph] [WARNING]: Platform linux on host instance is using the discovered Python 2026-04-23 16:28:40.205258 | instance | [ceph] interpreter at /usr/bin/python3.10, but future installation of another Python 2026-04-23 16:28:40.205266 | instance | [ceph] interpreter could change the meaning of that path. See 2026-04-23 16:28:40.205273 | instance | [ceph] https://docs.ansible.com/ansible- 2026-04-23 16:28:40.205280 | instance | [ceph] core/2.17/reference_appendices/interpreter_discovery.html for more information. 
2026-04-23 16:28:40.213315 | instance | [ceph] ok: [instance]
2026-04-23 16:28:40.213384 | instance | [ceph]
2026-04-23 16:28:40.213390 | instance | [ceph] TASK [Fail if atmosphere_ceph_enabled is set] **********************************
2026-04-23 16:28:40.248724 | instance | [ceph] skipping: [instance]
2026-04-23 16:28:40.248760 | instance | [ceph]
2026-04-23 16:28:40.248771 | instance | [ceph] TASK [Set a fact with the "atmosphere_images" for other plays] *****************
2026-04-23 16:28:40.422299 | instance | [ceph] ok: [instance]
2026-04-23 16:28:40.422357 | instance | [ceph]
2026-04-23 16:28:40.422370 | instance | [ceph] PLAY [Deploy Ceph monitors & managers] *****************************************
2026-04-23 16:28:40.422380 | instance | [ceph]
2026-04-23 16:28:40.422388 | instance | [ceph] TASK [Gathering Facts] *********************************************************
2026-04-23 16:28:40.507096 | instance | [lpfc] ok: [instance]
2026-04-23 16:28:40.507157 | instance | [lpfc]
2026-04-23 16:28:40.507169 | instance | [lpfc] TASK [vexxhost.atmosphere.lpfc : Install the configuration file] ***************
2026-04-23 16:28:40.534508 | instance | [lpfc] skipping: [instance]
2026-04-23 16:28:40.534546 | instance | [lpfc]
2026-04-23 16:28:40.534559 | instance | [lpfc] TASK [vexxhost.atmosphere.lpfc : Get the values for the module parameters] *****
2026-04-23 16:28:40.569898 | instance | [lpfc] skipping: [instance] => (item=lpfc_lun_queue_depth)
2026-04-23 16:28:40.569940 | instance | [lpfc] skipping: [instance] => (item=lpfc_sg_seg_cnt)
2026-04-23 16:28:40.569952 | instance | [lpfc] skipping: [instance] => (item=lpfc_max_luns)
2026-04-23 16:28:40.569962 | instance | [lpfc] skipping: [instance] => (item=lpfc_enable_fc4_type)
2026-04-23 16:28:40.569971 | instance | [lpfc] skipping: [instance]
2026-04-23 16:28:40.569980 | instance | [lpfc]
2026-04-23 16:28:40.569995 | instance | [lpfc] TASK [vexxhost.atmosphere.lpfc : Detect if the run-time module parameters are set correctly] ***
2026-04-23 16:28:40.604809 | instance | [lpfc] skipping: [instance] => (item=lpfc_lun_queue_depth)
2026-04-23 16:28:40.604855 | instance | [lpfc] skipping: [instance] => (item=lpfc_sg_seg_cnt)
2026-04-23 16:28:40.604867 | instance | [lpfc] skipping: [instance] => (item=lpfc_max_luns)
2026-04-23 16:28:40.604876 | instance | [lpfc] skipping: [instance] => (item=lpfc_enable_fc4_type)
2026-04-23 16:28:40.604886 | instance | [lpfc] skipping: [instance]
2026-04-23 16:28:40.604895 | instance | [lpfc]
2026-04-23 16:28:40.604916 | instance | [lpfc] TASK [vexxhost.atmosphere.lpfc : Update "initramfs" if the configuration file has changed] ***
2026-04-23 16:28:40.629358 | instance | [lpfc] skipping: [instance]
2026-04-23 16:28:40.629391 | instance | [lpfc]
2026-04-23 16:28:40.629401 | instance | [lpfc] TASK [Reboot the system if the configuration file has changed] *****************
2026-04-23 16:28:40.654232 | instance | [lpfc] skipping: [instance]
2026-04-23 16:28:40.654264 | instance | [lpfc]
2026-04-23 16:28:40.654275 | instance | [lpfc] PLAY RECAP *********************************************************************
2026-04-23 16:28:40.654289 | instance | [lpfc] instance : ok=2 changed=0 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0
2026-04-23 16:28:40.654318 | instance | [lpfc]
2026-04-23 16:28:40.713558 | instance | ==> [lpfc] Deployment complete
2026-04-23 16:28:40.805533 | instance | [udev] changed: [instance]
2026-04-23 16:28:40.805588 | instance | [udev]
2026-04-23 16:28:40.805596 | instance | [udev] TASK [vexxhost.atmosphere.udev : Add udev rules for SCSI Unit Attention] *******
2026-04-23 16:28:41.348847 | instance | [udev] changed: [instance]
2026-04-23 16:28:41.348902 | instance | [udev]
2026-04-23 16:28:41.348909 | instance | [udev] RUNNING HANDLER [vexxhost.atmosphere.udev : Reload udev] ***********************
2026-04-23 16:28:41.490916 | instance | [ceph] ok: [instance]
2026-04-23 16:28:41.490978 | instance | [ceph]
2026-04-23 16:28:41.490990 | instance | [ceph] TASK [vexxhost.containers.forget_package : Forget package] *********************
2026-04-23 16:28:41.728032 | instance | [udev] ok: [instance]
2026-04-23 16:28:41.728078 | instance | [udev]
2026-04-23 16:28:41.728084 | instance | [udev] PLAY RECAP *********************************************************************
2026-04-23 16:28:41.728089 | instance | [udev] instance : ok=4 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-23 16:28:41.728099 | instance | [udev]
2026-04-23 16:28:41.821877 | instance | ==> [udev] Deployment complete
2026-04-23 16:28:41.921355 | instance | [ceph] ok: [instance]
2026-04-23 16:28:41.921400 | instance | [ceph]
2026-04-23 16:28:41.921408 | instance | [ceph] TASK [vexxhost.containers.package : Update state for tar] **********************
2026-04-23 16:28:41.962946 | instance | [ceph] skipping: [instance]
2026-04-23 16:28:41.963007 | instance | [ceph]
2026-04-23 16:28:41.963022 | instance | [ceph] TASK [vexxhost.containers.directory : Create directory (/var/lib/downloads)] ***
2026-04-23 16:28:42.410838 | instance | [ceph] changed: [instance]
2026-04-23 16:28:42.410894 | instance | [ceph]
2026-04-23 16:28:42.410906 | instance | [ceph] TASK [vexxhost.containers.download_artifact : Starting download of file] *******
2026-04-23 16:28:42.478352 | instance | [ceph] ok: [instance] => {
2026-04-23 16:28:42.478387 | instance | [ceph] "msg": "https://github.com/opencontainers/runc/releases/download/v1.4.2/runc.amd64"
2026-04-23 16:28:42.478398 | instance | [ceph] }
2026-04-23 16:28:42.478407 | instance | [ceph]
2026-04-23 16:28:42.478416 | instance | [ceph] TASK [vexxhost.containers.download_artifact : Download item] *******************
2026-04-23 16:28:43.186632 | instance | [ceph] changed: [instance]
2026-04-23 16:28:43.186684 | instance | [ceph]
2026-04-23 16:28:43.186696 | instance | [ceph] TASK [vexxhost.containers.download_artifact : Extract archive] *****************
2026-04-23 16:28:43.234517 | instance | [ceph] skipping: [instance]
2026-04-23 16:28:43.234570 | instance | [ceph]
2026-04-23 16:28:43.234582 | instance | [ceph] TASK [vexxhost.containers.package : Update state for tar] **********************
2026-04-23 16:28:43.283789 | instance | [ceph] skipping: [instance]
2026-04-23 16:28:43.283824 | instance | [ceph]
2026-04-23 16:28:43.283834 | instance | [ceph] TASK [vexxhost.containers.forget_package : Forget package] *********************
2026-04-23 16:28:43.590168 | instance | [ceph] ok: [instance]
2026-04-23 16:28:43.590226 | instance | [ceph]
2026-04-23 16:28:43.590242 | instance | [ceph] TASK [vexxhost.containers.package : Update state for tar] **********************
2026-04-23 16:28:44.912138 | instance | [ceph] ok: [instance]
2026-04-23 16:28:44.912197 | instance | [ceph]
2026-04-23 16:28:44.912211 | instance | [ceph] TASK [vexxhost.containers.download_artifact : Starting download of file] *******
2026-04-23 16:28:44.974485 | instance | [ceph] ok: [instance] => {
2026-04-23 16:28:44.974524 | instance | [ceph] "msg": "https://github.com/containerd/containerd/releases/download/v2.2.3/containerd-2.2.3-linux-amd64.tar.gz"
2026-04-23 16:28:44.974536 | instance | [ceph] }
2026-04-23 16:28:44.974546 | instance | [ceph]
2026-04-23 16:28:44.974555 | instance | [ceph] TASK [vexxhost.containers.download_artifact : Download item] *******************
2026-04-23 16:28:45.310751 | instance | [multipathd] changed: [instance]
2026-04-23 16:28:45.310819 | instance | [multipathd]
2026-04-23 16:28:45.310830 | instance | [multipathd] TASK [vexxhost.atmosphere.multipathd : Install the multipathd package] *********
2026-04-23 16:28:45.713488 | instance | [ceph] changed: [instance]
2026-04-23 16:28:45.713541 | instance | [ceph]
2026-04-23 16:28:45.713553 | instance | [ceph] TASK [vexxhost.containers.download_artifact : Extract archive] *****************
2026-04-23 16:28:48.695749 | instance | [ceph] changed: [instance]
2026-04-23 16:28:48.695868 | instance | [ceph]
2026-04-23 16:28:48.695880 | instance | [ceph] TASK [vexxhost.containers.containerd : Install SELinux packages] ***************
2026-04-23 16:28:48.731418 | instance | [ceph] skipping: [instance]
2026-04-23 16:28:48.731502 | instance | [ceph]
2026-04-23 16:28:48.731514 | instance | [ceph] TASK [vexxhost.containers.containerd : Set SELinux to permissive at runtime] ***
2026-04-23 16:28:48.761370 | instance | [ceph] skipping: [instance]
2026-04-23 16:28:48.761407 | instance | [ceph]
2026-04-23 16:28:48.761418 | instance | [ceph] TASK [vexxhost.containers.containerd : Persist SELinux permissive mode] ********
2026-04-23 16:28:48.790777 | instance | [ceph] skipping: [instance]
2026-04-23 16:28:48.790810 | instance | [ceph]
2026-04-23 16:28:48.790820 | instance | [ceph] TASK [vexxhost.containers.containerd : Install AppArmor packages] **************
2026-04-23 16:29:02.832089 | instance | [multipathd] changed: [instance]
2026-04-23 16:29:02.832213 | instance | [multipathd]
2026-04-23 16:29:02.832225 | instance | [multipathd] TASK [vexxhost.atmosphere.multipathd : Install the configuration file] *********
2026-04-23 16:29:03.583492 | instance | [multipathd] changed: [instance]
2026-04-23 16:29:03.583580 | instance | [multipathd]
2026-04-23 16:29:03.583586 | instance | [multipathd] RUNNING HANDLER [vexxhost.atmosphere.multipathd : Restart "multipathd"] ********
2026-04-23 16:29:04.344065 | instance | [multipathd] changed: [instance]
2026-04-23 16:29:04.344134 | instance | [multipathd]
2026-04-23 16:29:04.344146 | instance | [multipathd] PLAY RECAP *********************************************************************
2026-04-23 16:29:04.344156 | instance | [multipathd] instance : ok=5 changed=4 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-23 16:29:04.344166 | instance | [multipathd]
2026-04-23 16:29:04.412580 | instance | ==> [multipathd] Deployment complete
2026-04-23 16:29:04.885495 | instance | [iscsi]
2026-04-23 16:29:04.885562 | instance | [iscsi] PLAY [controllers:computes] ****************************************************
2026-04-23 16:29:04.885574 | instance | [iscsi]
2026-04-23 16:29:04.885583 | instance | [iscsi] TASK [Gathering Facts] *********************************************************
2026-04-23 16:29:06.136311 | instance | [iscsi] [WARNING]: Platform linux on host instance is using the discovered Python
2026-04-23 16:29:06.136399 | instance | [iscsi] interpreter at /usr/bin/python3.10, but future installation of another Python
2026-04-23 16:29:06.136413 | instance | [iscsi] interpreter could change the meaning of that path. See
2026-04-23 16:29:06.136423 | instance | [iscsi] https://docs.ansible.com/ansible-
2026-04-23 16:29:06.136432 | instance | [iscsi] core/2.17/reference_appendices/interpreter_discovery.html for more information.
2026-04-23 16:29:06.146142 | instance | [iscsi] ok: [instance]
2026-04-23 16:29:06.146174 | instance | [iscsi]
2026-04-23 16:29:06.146184 | instance | [iscsi] TASK [vexxhost.atmosphere.iscsi : Install iscsi package] ***********************
2026-04-23 16:29:07.462163 | instance | [iscsi] ok: [instance]
2026-04-23 16:29:07.462304 | instance | [iscsi]
2026-04-23 16:29:07.462324 | instance | [iscsi] TASK [vexxhost.atmosphere.iscsi : Ensure iscsid is started] ********************
2026-04-23 16:29:08.161858 | instance | [iscsi] changed: [instance]
2026-04-23 16:29:08.161945 | instance | [iscsi]
2026-04-23 16:29:08.161958 | instance | [iscsi] PLAY RECAP *********************************************************************
2026-04-23 16:29:08.161969 | instance | [iscsi] instance : ok=3 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-23 16:29:08.161978 | instance | [iscsi]
2026-04-23 16:29:08.230104 | instance | ==> [iscsi] Deployment complete
2026-04-23 16:29:09.059175 | instance | [kubernetes]
2026-04-23 16:29:09.059259 | instance | [kubernetes] PLAY [all] *********************************************************************
2026-04-23 16:29:09.059272 | instance | [kubernetes]
2026-04-23 16:29:09.059282 | instance | [kubernetes] TASK [Gathering Facts] *********************************************************
2026-04-23 16:29:10.350115 | instance | [kubernetes] [WARNING]: Platform linux on host instance is using the discovered Python
2026-04-23 16:29:10.350173 | instance | [kubernetes] interpreter at /usr/bin/python3.10, but future installation of another Python
2026-04-23 16:29:10.350182 | instance | [kubernetes] interpreter could change the meaning of that path. See
2026-04-23 16:29:10.350189 | instance | [kubernetes] https://docs.ansible.com/ansible-
2026-04-23 16:29:10.350210 | instance | [kubernetes] core/2.17/reference_appendices/interpreter_discovery.html for more information.
2026-04-23 16:29:10.370862 | instance | [kubernetes] ok: [instance]
2026-04-23 16:29:10.370895 | instance | [kubernetes]
2026-04-23 16:29:10.370906 | instance | [kubernetes] TASK [vexxhost.atmosphere.sysctl : Configure sysctl values] ********************
2026-04-23 16:29:16.064845 | instance | [kubernetes] changed: [instance] => (item={'name': 'fs.aio-max-nr', 'value': 1048576})
2026-04-23 16:29:16.064965 | instance | [kubernetes] changed: [instance] => (item={'name': 'net.ipv4.tcp_timestamps', 'value': 0})
2026-04-23 16:29:16.064983 | instance | [kubernetes] changed: [instance] => (item={'name': 'net.ipv4.tcp_sack', 'value': 1})
2026-04-23 16:29:16.064996 | instance | [kubernetes] changed: [instance] => (item={'name': 'net.core.netdev_budget', 'value': 1000})
2026-04-23 16:29:16.065009 | instance | [kubernetes] changed: [instance] => (item={'name': 'net.core.netdev_max_backlog', 'value': 250000})
2026-04-23 16:29:16.065018 | instance | [kubernetes] changed: [instance] => (item={'name': 'net.core.rmem_max', 'value': 4194304})
2026-04-23 16:29:16.065030 | instance | [kubernetes] changed: [instance] => (item={'name': 'net.core.wmem_max', 'value': 4194304})
2026-04-23 16:29:16.065043 | instance | [kubernetes] changed: [instance] => (item={'name': 'net.core.rmem_default', 'value': 4194304})
2026-04-23 16:29:16.065054 | instance | [kubernetes] changed: [instance] => (item={'name': 'net.core.wmem_default', 'value': 4194304})
2026-04-23 16:29:16.065066 | instance | [kubernetes] changed: [instance] => (item={'name': 'net.core.optmem_max', 'value': 4194304})
2026-04-23 16:29:16.065080 | instance | [kubernetes] changed: [instance] => (item={'name': 'net.ipv4.tcp_rmem', 'value': '4096 87380 4194304'})
2026-04-23 16:29:16.065092 | instance | [kubernetes] changed: [instance] => (item={'name': 'net.ipv4.tcp_wmem', 'value': '4096 65536 4194304'})
2026-04-23 16:29:16.065102 | instance | [kubernetes] changed: [instance] => (item={'name': 'net.ipv4.tcp_low_latency', 'value': 1})
2026-04-23 16:29:16.065111 | instance | [kubernetes] changed: [instance] => (item={'name': 'net.ipv4.tcp_adv_win_scale', 'value': 1})
2026-04-23 16:29:16.065120 | instance | [kubernetes] changed: [instance] => (item={'name': 'net.ipv4.neigh.default.gc_thresh1', 'value': 128})
2026-04-23 16:29:16.065129 | instance | [kubernetes] changed: [instance] => (item={'name': 'net.ipv4.neigh.default.gc_thresh2', 'value': 28872})
2026-04-23 16:29:16.065137 | instance | [kubernetes] changed: [instance] => (item={'name': 'net.ipv4.neigh.default.gc_thresh3', 'value': 32768})
2026-04-23 16:29:16.065146 | instance | [kubernetes] changed: [instance] => (item={'name': 'net.ipv6.neigh.default.gc_thresh1', 'value': 128})
2026-04-23 16:29:16.065154 | instance | [kubernetes] changed: [instance] => (item={'name': 'net.ipv6.neigh.default.gc_thresh2', 'value': 28872})
2026-04-23 16:29:16.065163 | instance | [kubernetes] changed: [instance] => (item={'name': 'net.ipv6.neigh.default.gc_thresh3', 'value': 32768})
2026-04-23 16:29:16.065172 | instance | [kubernetes]
2026-04-23 16:29:16.065181 | instance | [kubernetes] TASK [vexxhost.atmosphere.ethtool : Create folder for persistent configuration] ***
2026-04-23 16:29:16.506305 | instance | [kubernetes] changed: [instance]
2026-04-23 16:29:16.506373 | instance | [kubernetes]
2026-04-23 16:29:16.506385 | instance | [kubernetes] TASK [vexxhost.atmosphere.ethtool : Install persistent "ethtool" tuning] *******
2026-04-23 16:29:17.259027 | instance | [kubernetes] changed: [instance]
2026-04-23 16:29:17.259271 | instance | [kubernetes]
2026-04-23 16:29:17.259284 | instance | [kubernetes] TASK [vexxhost.atmosphere.ethtool : Run "ethtool" tuning] **********************
2026-04-23 16:29:17.391481 | instance | [ceph] FAILED - RETRYING: [instance]: Install AppArmor packages (5 retries left).
2026-04-23 16:29:17.391789 | instance | [ceph] FAILED - RETRYING: [instance]: Install AppArmor packages (4 retries left).
2026-04-23 16:29:17.391802 | instance | [ceph] changed: [instance]
2026-04-23 16:29:17.391812 | instance | [ceph]
2026-04-23 16:29:17.391822 | instance | [ceph] TASK [vexxhost.containers.containerd : Create systemd service file for containerd] ***
2026-04-23 16:29:17.708486 | instance | [kubernetes] ok: [instance]
2026-04-23 16:29:17.708553 | instance | [kubernetes]
2026-04-23 16:29:17.708566 | instance | [kubernetes] TASK [Set a fact with the "atmosphere_images" for other plays] *****************
2026-04-23 16:29:17.841163 | instance | [kubernetes] ok: [instance]
2026-04-23 16:29:17.841228 | instance | [kubernetes]
2026-04-23 16:29:17.841260 | instance | [kubernetes] PLAY [Configure Kubernetes VIP] ************************************************
2026-04-23 16:29:17.841270 | instance | [kubernetes]
2026-04-23 16:29:17.841279 | instance | [kubernetes] TASK [Gathering Facts] *********************************************************
2026-04-23 16:29:18.100815 | instance | [ceph] changed: [instance]
2026-04-23 16:29:18.100885 | instance | [ceph]
2026-04-23 16:29:18.100898 | instance | [ceph] TASK [vexxhost.containers.containerd : Create folders for configuration] *******
2026-04-23 16:29:18.914175 | instance | [kubernetes] ok: [instance]
2026-04-23 16:29:18.914238 | instance | [kubernetes]
2026-04-23 16:29:18.914250 | instance | [kubernetes] TASK [vexxhost.containers.directory : Create directory (/etc/kubernetes/manifests)] ***
2026-04-23 16:29:19.211930 | instance | [kubernetes] changed: [instance]
2026-04-23 16:29:19.212001 | instance | [kubernetes]
2026-04-23 16:29:19.212014 | instance | [kubernetes] TASK [vexxhost.kubernetes.kube_vip : Uninstall legacy HA stack] ****************
2026-04-23 16:29:19.494554 | instance | [ceph] changed: [instance] => (item={'path': '/etc/containerd'})
2026-04-23 16:29:19.494623 | instance | [ceph] changed: [instance] => (item={'path': '/var/lib/containerd', 'mode': '0o700'})
2026-04-23 16:29:19.494634 | instance | [ceph] changed: [instance] => (item={'path': '/run/containerd', 'mode': '0o711'})
2026-04-23 16:29:19.494644 | instance | [ceph] changed: [instance] => (item={'path': '/run/containerd/io.containerd.grpc.v1.cri', 'mode': '0o700'})
2026-04-23 16:29:19.494654 | instance | [ceph] changed: [instance] => (item={'path': '/run/containerd/io.containerd.sandbox.controller.v1.shim', 'mode': '0o700'})
2026-04-23 16:29:19.494663 | instance | [ceph]
2026-04-23 16:29:19.494673 | instance | [ceph] TASK [vexxhost.containers.containerd : Create containerd config file] **********
2026-04-23 16:29:20.155799 | instance | [ceph] changed: [instance]
2026-04-23 16:29:20.155864 | instance | [ceph]
2026-04-23 16:29:20.155877 | instance | [ceph] TASK [vexxhost.containers.containerd : Force any restarts if necessary] ********
2026-04-23 16:29:20.155888 | instance | [ceph]
2026-04-23 16:29:20.155898 | instance | [ceph] RUNNING HANDLER [vexxhost.containers.containerd : Reload systemd] **************
2026-04-23 16:29:20.575117 | instance | [kubernetes] ok: [instance] => (item=/etc/keepalived/keepalived.conf)
2026-04-23 16:29:20.575190 | instance | [kubernetes] ok: [instance] => (item=/etc/keepalived/check_apiserver.sh)
2026-04-23 16:29:20.575202 | instance | [kubernetes] ok: [instance] => (item=/etc/kubernetes/manifests/keepalived.yaml)
2026-04-23 16:29:20.575214 | instance | [kubernetes] ok: [instance] => (item=/etc/haproxy/haproxy.cfg)
2026-04-23 16:29:20.575252 | instance | [kubernetes] ok: [instance] => (item=/etc/kubernetes/manifests/haproxy.yaml)
2026-04-23 16:29:20.575263 | instance | [kubernetes]
2026-04-23 16:29:20.575273 | instance | [kubernetes] TASK [vexxhost.kubernetes.kube_vip : Switch API server to run on port 6443] ****
2026-04-23 16:29:21.188148 | instance | [ceph] ok: [instance]
2026-04-23 16:29:21.188205 | instance | [ceph]
2026-04-23 16:29:21.188213 | instance | [ceph] RUNNING HANDLER [vexxhost.containers.containerd : Restart containerd] **********
2026-04-23 16:29:21.455726 | instance | [kubernetes] failed: [instance] (item=/etc/kubernetes/manifests/kube-apiserver.yaml) => {"ansible_loop_var": "item", "changed": false, "item": "/etc/kubernetes/manifests/kube-apiserver.yaml", "msg": "Path /etc/kubernetes/manifests/kube-apiserver.yaml does not exist !", "rc": 257}
2026-04-23 16:29:21.455787 | instance | [kubernetes] failed: [instance] (item=/etc/kubernetes/controller-manager.conf) => {"ansible_loop_var": "item", "changed": false, "item": "/etc/kubernetes/controller-manager.conf", "msg": "Path /etc/kubernetes/controller-manager.conf does not exist !", "rc": 257}
2026-04-23 16:29:21.455803 | instance | [kubernetes] failed: [instance] (item=/etc/kubernetes/scheduler.conf) => {"ansible_loop_var": "item", "changed": false, "item": "/etc/kubernetes/scheduler.conf", "msg": "Path /etc/kubernetes/scheduler.conf does not exist !", "rc": 257}
2026-04-23 16:29:21.455809 | instance | [kubernetes] ...ignoring
2026-04-23 16:29:21.455817 | instance | [kubernetes]
2026-04-23 16:29:21.455838 | instance | [kubernetes] TASK [vexxhost.kubernetes.kube_vip : Check if super-admin.conf exists] *********
2026-04-23 16:29:21.725052 | instance | [ceph] changed: [instance]
2026-04-23 16:29:21.725131 | instance | [ceph]
2026-04-23 16:29:21.725143 | instance | [ceph] TASK [vexxhost.containers.containerd : Enable and start service] ***************
2026-04-23 16:29:21.730940 | instance | [kubernetes] ok: [instance]
2026-04-23 16:29:21.730955 | instance | [kubernetes]
2026-04-23 16:29:21.730960 | instance | [kubernetes] TASK [vexxhost.kubernetes.kube_vip : Check if kubeadm has already run] *********
2026-04-23 16:29:22.018044 | instance | [kubernetes] ok: [instance]
2026-04-23 16:29:22.018137 | instance | [kubernetes]
2026-04-23 16:29:22.018166 | instance | [kubernetes] TASK [vexxhost.kubernetes.kube_vip : Set fact with KUBECONFIG path] ************
2026-04-23 16:29:22.049016 | instance | [kubernetes] ok: [instance]
2026-04-23 16:29:22.049046 | instance | [kubernetes]
2026-04-23 16:29:22.049056 | instance | [kubernetes] TASK [vexxhost.kubernetes.kube_vip : Set fact with KUBECONFIG path (with super-admin.conf)] ***
2026-04-23 16:29:22.085692 | instance | [kubernetes] ok: [instance]
2026-04-23 16:29:22.085717 | instance | [kubernetes]
2026-04-23 16:29:22.085737 | instance | [kubernetes] TASK [vexxhost.kubernetes.kube_vip : Upload Kubernetes manifest] ***************
2026-04-23 16:29:22.355968 | instance | [ceph] changed: [instance]
2026-04-23 16:29:22.356103 | instance | [ceph]
2026-04-23 16:29:22.356116 | instance | [ceph] TASK [vexxhost.containers.forget_package : Forget package] *********************
2026-04-23 16:29:22.692574 | instance | [ceph] ok: [instance]
2026-04-23 16:29:22.692677 | instance | [ceph]
2026-04-23 16:29:22.692689 | instance | [ceph] TASK [vexxhost.containers.download_artifact : Starting download of file] *******
2026-04-23 16:29:22.727398 | instance | [kubernetes] ok: [instance]
2026-04-23 16:29:22.727517 | instance | [kubernetes]
2026-04-23 16:29:22.727530 | instance | [kubernetes] TASK [vexxhost.kubernetes.kube_vip : Ensure kube-vip configuration file] *******
2026-04-23 16:29:22.743423 | instance | [ceph] ok: [instance] => {
2026-04-23 16:29:22.743463 | instance | [ceph] "msg": "https://download.docker.com/linux/static/stable/x86_64/docker-24.0.9.tgz"
2026-04-23 16:29:22.743474 | instance | [ceph] }
2026-04-23 16:29:22.743483 | instance | [ceph]
2026-04-23 16:29:22.743492 | instance | [ceph] TASK [vexxhost.containers.download_artifact : Download item] *******************
2026-04-23 16:29:23.080784 | instance | [kubernetes] ok: [instance]
2026-04-23 16:29:23.080883 | instance | [kubernetes]
2026-04-23 16:29:23.080894 | instance | [kubernetes] TASK [vexxhost.kubernetes.kube_vip : Flush handlers] ***************************
2026-04-23 16:29:23.080905 | instance | [kubernetes]
2026-04-23 16:29:23.080913 | instance | [kubernetes] PLAY [Install Kubernetes] ******************************************************
2026-04-23 16:29:23.080922 | instance | [kubernetes]
2026-04-23 16:29:23.080931 | instance | [kubernetes] TASK [Gathering Facts] *********************************************************
2026-04-23 16:29:23.801126 | instance | [ceph] changed: [instance]
2026-04-23 16:29:23.801241 | instance | [ceph]
2026-04-23 16:29:23.801253 | instance | [ceph] TASK [vexxhost.containers.download_artifact : Extract archive] *****************
2026-04-23 16:29:24.234871 | instance | [kubernetes] ok: [instance]
2026-04-23 16:29:24.234958 | instance | [kubernetes]
2026-04-23 16:29:24.234969 | instance | [kubernetes] TASK [vexxhost.containers.forget_package : Forget package] *********************
2026-04-23 16:29:24.536831 | instance | [kubernetes] ok: [instance]
2026-04-23 16:29:24.536910 | instance | [kubernetes]
2026-04-23 16:29:24.536922 | instance | [kubernetes] TASK [vexxhost.containers.package : Update state for tar] **********************
2026-04-23 16:29:24.577956 | instance | [kubernetes] skipping: [instance]
2026-04-23 16:29:24.577992 | instance | [kubernetes]
2026-04-23 16:29:24.578003 | instance | [kubernetes] TASK [vexxhost.containers.directory : Create directory (/var/lib/downloads)] ***
2026-04-23 16:29:24.889282 | instance | [kubernetes] ok: [instance]
2026-04-23 16:29:24.889337 | instance | [kubernetes]
2026-04-23 16:29:24.889349 | instance | [kubernetes] TASK [vexxhost.containers.download_artifact : Starting download of file] *******
2026-04-23 16:29:24.937987 | instance | [kubernetes] ok: [instance] => {
2026-04-23 16:29:24.938021 | instance | [kubernetes] "msg": "https://github.com/opencontainers/runc/releases/download/v1.4.2/runc.amd64"
2026-04-23 16:29:24.938088 | instance | [kubernetes] }
2026-04-23 16:29:24.938100 | instance | [kubernetes]
2026-04-23 16:29:24.938110 | instance | [kubernetes] TASK [vexxhost.containers.download_artifact : Download item] *******************
2026-04-23 16:29:25.436309 | instance | [kubernetes] ok: [instance]
2026-04-23 16:29:25.436366 | instance | [kubernetes]
2026-04-23 16:29:25.436377 | instance | [kubernetes] TASK [vexxhost.containers.download_artifact : Extract archive] *****************
2026-04-23 16:29:25.484411 | instance | [kubernetes] skipping: [instance]
2026-04-23 16:29:25.484445 | instance | [kubernetes]
2026-04-23 16:29:25.484456 | instance | [kubernetes] TASK [vexxhost.containers.forget_package : Forget package] *********************
2026-04-23 16:29:25.790212 | instance | [kubernetes] ok: [instance]
2026-04-23 16:29:25.790291 | instance | [kubernetes]
2026-04-23 16:29:25.790303 | instance | [kubernetes] TASK [vexxhost.containers.package : Update state for tar] **********************
2026-04-23 16:29:27.066965 | instance | [kubernetes] ok: [instance]
2026-04-23 16:29:27.067046 | instance | [kubernetes]
2026-04-23 16:29:27.067059 | instance | [kubernetes] TASK [vexxhost.containers.download_artifact : Starting download of file] *******
2026-04-23 16:29:27.130785 | instance | [kubernetes] ok: [instance] => {
2026-04-23 16:29:27.130822 | instance | [kubernetes] "msg": "https://github.com/containerd/containerd/releases/download/v2.2.3/containerd-2.2.3-linux-amd64.tar.gz"
2026-04-23 16:29:27.130834 | instance | [kubernetes] }
2026-04-23 16:29:27.130843 | instance | [kubernetes]
2026-04-23 16:29:27.130851 | instance | [kubernetes] TASK [vexxhost.containers.download_artifact : Download item] *******************
2026-04-23 16:29:27.594842 | instance | [kubernetes] ok: [instance]
2026-04-23 16:29:27.594941 | instance | [kubernetes]
2026-04-23 16:29:27.594953 | instance | [kubernetes] TASK [vexxhost.containers.download_artifact : Extract archive] *****************
2026-04-23 16:29:28.212730 | instance | [ceph] changed: [instance]
2026-04-23 16:29:28.212856 | instance | [ceph]
2026-04-23 16:29:28.212869 | instance | [ceph] TASK [vexxhost.containers.docker : Install AppArmor packages] ******************
2026-04-23 16:29:29.442689 | instance | [ceph] ok: [instance]
2026-04-23 16:29:29.442812 | instance | [ceph]
2026-04-23 16:29:29.442824 | instance | [ceph] TASK [vexxhost.containers.docker : Ensure group "docker" exists] ***************
2026-04-23 16:29:29.799828 | instance | [kubernetes] ok: [instance]
2026-04-23 16:29:29.799896 | instance | [kubernetes]
2026-04-23 16:29:29.799907 | instance | [kubernetes] TASK [vexxhost.containers.containerd : Install SELinux packages] ***************
2026-04-23 16:29:29.826984 | instance | [kubernetes] skipping: [instance]
2026-04-23 16:29:29.827017 | instance | [kubernetes]
2026-04-23 16:29:29.827027 | instance | [kubernetes] TASK [vexxhost.containers.containerd : Set SELinux to permissive at runtime] ***
2026-04-23 16:29:29.856410 | instance | [kubernetes] skipping: [instance]
2026-04-23 16:29:29.856511 | instance | [kubernetes]
2026-04-23 16:29:29.856523 | instance | [kubernetes] TASK [vexxhost.containers.containerd : Persist SELinux permissive mode] ********
2026-04-23 16:29:29.885318 | instance | [kubernetes] skipping: [instance]
2026-04-23 16:29:29.885363 | instance | [kubernetes]
2026-04-23 16:29:29.885374 | instance | [kubernetes] TASK [vexxhost.containers.containerd : Install AppArmor packages] **************
2026-04-23 16:29:29.961195 | instance | [ceph] changed: [instance]
2026-04-23 16:29:29.961274 | instance | [ceph]
2026-04-23 16:29:29.961286 | instance | [ceph] TASK [vexxhost.containers.docker : Create systemd service file for docker] *****
2026-04-23 16:29:30.558497 | instance | [ceph] changed: [instance]
2026-04-23 16:29:30.558560 | instance | [ceph]
2026-04-23 16:29:30.558568 | instance | [ceph] TASK [vexxhost.containers.docker : Create folders for configuration] ***********
2026-04-23 16:29:31.116971 | instance | [kubernetes] ok: [instance]
2026-04-23 16:29:31.117026 | instance | [kubernetes]
2026-04-23 16:29:31.117038 | instance | [kubernetes] TASK [vexxhost.containers.containerd : Create systemd service file for containerd] ***
2026-04-23 16:29:31.442346 | instance | [ceph] changed: [instance] => (item={'path': '/etc/docker'})
2026-04-23 16:29:31.442389 | instance | [ceph] changed: [instance] => (item={'path': '/var/lib/docker', 'mode': '0o710'})
2026-04-23 16:29:31.442403 | instance | [ceph] changed: [instance] => (item={'path': '/run/docker', 'mode': '0o711'})
2026-04-23 16:29:31.442409 | instance | [ceph]
2026-04-23 16:29:31.442424 | instance | [ceph] TASK [vexxhost.containers.docker : Create systemd socket file for docker] ******
2026-04-23 16:29:31.663959 | instance | [kubernetes] ok: [instance]
2026-04-23 16:29:31.664008 | instance | [kubernetes]
2026-04-23 16:29:31.664025 | instance | [kubernetes] TASK [vexxhost.containers.containerd : Create folders for configuration] *******
2026-04-23 16:29:32.009437 | instance | [ceph] changed: [instance]
2026-04-23 16:29:32.009497 | instance | [ceph]
2026-04-23 16:29:32.009509 | instance | [ceph] TASK [vexxhost.containers.docker : Create docker daemon config file] ***********
2026-04-23 16:29:32.590317 | instance | [ceph] changed: [instance]
2026-04-23 16:29:32.590374 | instance | [ceph]
2026-04-23 16:29:32.590385 | instance | [ceph] TASK [vexxhost.containers.docker : Force any restarts if necessary] ************
2026-04-23 16:29:32.590395 | instance | [ceph]
2026-04-23 16:29:32.590404 | instance | [ceph] RUNNING HANDLER [vexxhost.containers.containerd : Reload systemd] **************
2026-04-23 16:29:33.135138 | instance | [kubernetes] ok: [instance] => (item={'path': '/etc/containerd'})
2026-04-23 16:29:33.135202 | instance | [kubernetes] ok: [instance] => (item={'path': '/var/lib/containerd', 'mode': '0o700'})
2026-04-23 16:29:33.135215 | instance | [kubernetes] ok: [instance] => (item={'path': '/run/containerd', 'mode': '0o711'})
2026-04-23 16:29:33.135225 | instance | [kubernetes] ok: [instance] => (item={'path': '/run/containerd/io.containerd.grpc.v1.cri', 'mode': '0o700'})
2026-04-23 16:29:33.135236 | instance | [kubernetes] ok: [instance] => (item={'path': '/run/containerd/io.containerd.sandbox.controller.v1.shim', 'mode': '0o700'})
2026-04-23 16:29:33.135246 | instance | [kubernetes]
2026-04-23 16:29:33.135256 | instance | [kubernetes] TASK [vexxhost.containers.containerd : Create containerd config file] **********
2026-04-23 16:29:33.411792 | instance | [ceph] ok: [instance]
2026-04-23 16:29:33.411850 | instance | [ceph]
2026-04-23 16:29:33.411863 | instance | [ceph] RUNNING HANDLER [vexxhost.containers.docker : Restart docker] ******************
2026-04-23 16:29:33.754886 | instance | [kubernetes] ok: [instance]
2026-04-23 16:29:33.754928 | instance | [kubernetes]
2026-04-23 16:29:33.754934 | instance | [kubernetes] TASK [vexxhost.containers.containerd : Force any restarts if necessary] ********
2026-04-23 16:29:33.754939 | instance | [kubernetes]
2026-04-23 16:29:33.754943 | instance | [kubernetes] TASK [vexxhost.containers.containerd : Enable and start service] ***************
2026-04-23 16:29:34.291531 | instance | [ceph] changed: [instance]
2026-04-23 16:29:34.291599 | instance | [ceph]
2026-04-23 16:29:34.291611 | instance | [ceph] TASK [vexxhost.containers.docker : Enable and start service] *******************
2026-04-23 16:29:34.446070 | instance | [kubernetes] ok: [instance]
2026-04-23 16:29:34.446124 | instance | [kubernetes]
2026-04-23 16:29:34.446136 | instance | [kubernetes] TASK [vexxhost.kubernetes.kubernetes_upgrade_check : Retrieve the "kubeadm-config" ConfigMap] ***
2026-04-23 16:29:34.944595 | instance | [ceph] changed: [instance]
2026-04-23 16:29:34.944666 | instance | [ceph]
2026-04-23 16:29:34.944677 | instance | [ceph] TASK [vexxhost.ceph.cephadm : Gather variables for each operating system] ******
2026-04-23 16:29:35.001885 | instance | [ceph] ok: [instance] => (item=/home/zuul/.ansible/collections/ansible_collections/vexxhost/ceph/roles/cephadm/vars/ubuntu-22.04.yml)
2026-04-23 16:29:35.001917 | instance | [ceph]
2026-04-23 16:29:35.001927 | instance | [ceph] TASK [vexxhost.ceph.cephadm : Install packages] ********************************
2026-04-23 16:29:35.321451 | instance | [kubernetes] An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ansible_collections.kubernetes.core.plugins.module_utils.k8s.exceptions.CoreException: Could not create API client: Invalid kube-config file. No configuration found.
2026-04-23 16:29:35.321499 | instance | [kubernetes] fatal: [instance]: FAILED! => {"changed": false, "msg": "Could not create API client: Invalid kube-config file. No configuration found."}
2026-04-23 16:29:35.321507 | instance | [kubernetes] ...ignoring
2026-04-23 16:29:35.321515 | instance | [kubernetes]
2026-04-23 16:29:35.321528 | instance | [kubernetes] TASK [vexxhost.kubernetes.kubernetes_upgrade_check : Parse the ClusterConfiguration] ***
2026-04-23 16:29:35.352793 | instance | [kubernetes] skipping: [instance]
2026-04-23 16:29:35.352851 | instance | [kubernetes]
2026-04-23 16:29:35.352882 | instance | [kubernetes] TASK [vexxhost.kubernetes.kubernetes_upgrade_check : Retrieve the current Kubernetes version] ***
2026-04-23 16:29:35.387851 | instance | [kubernetes] skipping: [instance]
2026-04-23 16:29:35.387886 | instance | [kubernetes]
2026-04-23 16:29:35.387897 | instance | [kubernetes] TASK [vexxhost.kubernetes.kubernetes_upgrade_check : Extract major, minor, and patch versions] ***
2026-04-23 16:29:35.423363 | instance | [kubernetes] skipping: [instance]
2026-04-23 16:29:35.423395 | instance | [kubernetes]
2026-04-23 16:29:35.423406 | instance | [kubernetes] TASK [vexxhost.kubernetes.kubernetes_upgrade_check : Fail if we're jumping more than one minor version] ***
2026-04-23 16:29:35.458259 | instance | [kubernetes] skipping: [instance]
2026-04-23 16:29:35.458282 | instance | [kubernetes]
2026-04-23 16:29:35.458292 | instance | [kubernetes] TASK [vexxhost.kubernetes.kubernetes_upgrade_check : Set fact if we need to upgrade] ***
2026-04-23 16:29:35.501197 | instance | [kubernetes] skipping: [instance]
2026-04-23 16:29:35.501239 | instance | [kubernetes]
2026-04-23 16:29:35.501252 | instance | [kubernetes] TASK [vexxhost.containers.forget_package : Forget package] *********************
2026-04-23 16:29:35.807865 | instance | [kubernetes] ok: [instance]
2026-04-23 16:29:35.807929 | instance | [kubernetes]
2026-04-23 16:29:35.807941 | instance | [kubernetes] TASK [vexxhost.containers.download_artifact : Starting download of file] *******
2026-04-23 16:29:35.848311 | instance | [kubernetes] ok: [instance] => {
2026-04-23 16:29:35.848343 | instance | [kubernetes] "msg": "https://dl.k8s.io/release/v1.28.13/bin/linux/amd64/kubeadm"
2026-04-23 16:29:35.848353 | instance | [kubernetes] }
2026-04-23 16:29:35.848362 | instance | [kubernetes]
2026-04-23 16:29:35.848371 | instance | [kubernetes] TASK [vexxhost.containers.download_artifact : Download item] *******************
2026-04-23 16:29:40.604061 | instance | [ceph] changed: [instance]
2026-04-23 16:29:40.604409 | instance | [ceph]
2026-04-23 16:29:40.604422 | instance | [ceph] TASK [vexxhost.ceph.cephadm : Ensure services are started] *********************
2026-04-23 16:29:40.907625 | instance | [kubernetes] changed: [instance]
2026-04-23 16:29:40.907669 | instance | [kubernetes]
2026-04-23 16:29:40.907674 | instance | [kubernetes] TASK [vexxhost.containers.download_artifact : Extract archive] *****************
2026-04-23 16:29:40.956256 | instance | [kubernetes] skipping: [instance]
2026-04-23 16:29:40.956274 | instance | [kubernetes]
2026-04-23 16:29:40.956279 | instance | [kubernetes] TASK [vexxhost.containers.forget_package : Forget package] *********************
2026-04-23 16:29:41.269002 | instance | [kubernetes] ok: [instance]
2026-04-23 16:29:41.269068 | instance | [kubernetes]
2026-04-23 16:29:41.269076 | instance | [kubernetes] TASK [vexxhost.containers.download_artifact : Starting download of file] *******
2026-04-23 16:29:41.310697 | instance | [kubernetes] ok: [instance] => {
2026-04-23 16:29:41.310779 | instance | [kubernetes] "msg": "https://dl.k8s.io/release/v1.28.13/bin/linux/amd64/kubectl"
2026-04-23 16:29:41.310792 | instance | [kubernetes] }
2026-04-23 16:29:41.310802 | instance | [kubernetes]
2026-04-23 16:29:41.310811 | instance | [kubernetes] TASK [vexxhost.containers.download_artifact : Download item] *******************
2026-04-23 16:29:41.470546 | instance | [ceph] ok: [instance] => (item=chronyd)
2026-04-23 16:29:41.470653 | instance | [ceph] ok: [instance] => (item=sshd)
2026-04-23 16:29:41.470666 | instance | [ceph]
2026-04-23 16:29:41.470676 | instance | [ceph] TASK [vexxhost.ceph.cephadm : Download "cephadm"] ******************************
2026-04-23 16:29:42.531641 | instance | [kubernetes] changed: [instance]
2026-04-23 16:29:42.531706 | instance | [kubernetes]
2026-04-23 16:29:42.531717 | instance | [kubernetes] TASK [vexxhost.containers.download_artifact : Extract archive] *****************
2026-04-23 16:29:42.575507 | instance | [kubernetes] skipping: [instance]
2026-04-23 16:29:42.575538 | instance | [kubernetes]
2026-04-23 16:29:42.575548 | instance | [kubernetes] TASK [vexxhost.containers.containerd : Install SELinux packages] ***************
2026-04-23 16:29:42.607330 | instance | [kubernetes] skipping: [instance]
2026-04-23 16:29:42.607358 | instance | [kubernetes]
2026-04-23 16:29:42.607368 | instance | [kubernetes] TASK [vexxhost.containers.containerd : Set SELinux to permissive at runtime] ***
2026-04-23 16:29:42.636872 | instance | [kubernetes] skipping: [instance]
2026-04-23 16:29:42.636947 | instance | [kubernetes]
2026-04-23 16:29:42.636960 | instance | [kubernetes] TASK [vexxhost.containers.containerd : Persist SELinux permissive mode] ********
2026-04-23 16:29:42.668643 | instance | [kubernetes] skipping: [instance]
2026-04-23 16:29:42.668702 | instance | [kubernetes]
2026-04-23 16:29:42.668714 | instance | [kubernetes] TASK [vexxhost.containers.containerd : Install AppArmor packages] **************
2026-04-23 16:29:42.692017 | instance | [ceph] changed: [instance]
2026-04-23 16:29:42.692044 | instance | [ceph]
2026-04-23 16:29:42.692054 | instance | [ceph] TASK [vexxhost.ceph.cephadm : Remove cephadm from old path] ********************
2026-04-23 16:29:43.008630 | instance | [ceph] ok: [instance]
2026-04-23 16:29:43.008721 | instance | [ceph]
2026-04-23 16:29:43.008748 | instance | [ceph] TASK [vexxhost.ceph.cephadm : Ensure "cephadm" user is present] ****************
2026-04-23 16:29:43.511525 | instance | [ceph] changed: [instance]
2026-04-23 16:29:43.511588 | instance | [ceph] 2026-04-23 16:29:43.511600 | instance | [ceph] TASK [vexxhost.ceph.cephadm : Allow "cephadm" user to have passwordless sudo] *** 2026-04-23 16:29:43.885274 | instance | [kubernetes] ok: [instance] 2026-04-23 16:29:43.885326 | instance | [kubernetes] 2026-04-23 16:29:43.885335 | instance | [kubernetes] TASK [vexxhost.containers.containerd : Create systemd service file for containerd] *** 2026-04-23 16:29:43.912361 | instance | [ceph] changed: [instance] 2026-04-23 16:29:43.912438 | instance | [ceph] 2026-04-23 16:29:43.912451 | instance | [ceph] TASK [vexxhost.ceph.mon : Get `cephadm ls` status] ***************************** 2026-04-23 16:29:44.437617 | instance | [kubernetes] ok: [instance] 2026-04-23 16:29:44.437687 | instance | [kubernetes] 2026-04-23 16:29:44.437695 | instance | [kubernetes] TASK [vexxhost.containers.containerd : Create folders for configuration] ******* 2026-04-23 16:29:45.596121 | instance | [ceph] ok: [instance] 2026-04-23 16:29:45.596177 | instance | [ceph] 2026-04-23 16:29:45.596189 | instance | [ceph] TASK [vexxhost.ceph.mon : Parse the `cephadm ls` output] *********************** 2026-04-23 16:29:45.637480 | instance | [ceph] ok: [instance] 2026-04-23 16:29:45.637525 | instance | [ceph] 2026-04-23 16:29:45.637536 | instance | [ceph] TASK [vexxhost.ceph.mon : Assimilate existing configs in `ceph.conf`] ********** 2026-04-23 16:29:45.671191 | instance | [ceph] skipping: [instance] 2026-04-23 16:29:45.671228 | instance | [ceph] 2026-04-23 16:29:45.671238 | instance | [ceph] TASK [vexxhost.ceph.mon : Adopt monitor to cluster] **************************** 2026-04-23 16:29:45.708119 | instance | [ceph] skipping: [instance] 2026-04-23 16:29:45.708170 | instance | [ceph] 2026-04-23 16:29:45.708181 | instance | [ceph] TASK [vexxhost.ceph.mon : Adopt manager to cluster] **************************** 2026-04-23 16:29:45.738581 | instance | [ceph] skipping: [instance] 2026-04-23 16:29:45.738633 | instance 
| [ceph] 2026-04-23 16:29:45.738645 | instance | [ceph] TASK [vexxhost.ceph.mon : Enable "cephadm" mgr module] ************************* 2026-04-23 16:29:45.768302 | instance | [ceph] skipping: [instance] 2026-04-23 16:29:45.768337 | instance | [ceph] 2026-04-23 16:29:45.768348 | instance | [ceph] TASK [vexxhost.ceph.mon : Set orchestrator backend to "cephadm"] *************** 2026-04-23 16:29:45.795417 | instance | [ceph] skipping: [instance] 2026-04-23 16:29:45.795453 | instance | [ceph] 2026-04-23 16:29:45.795463 | instance | [ceph] TASK [vexxhost.ceph.mon : Use `cephadm` user for cephadm] ********************** 2026-04-23 16:29:45.831508 | instance | [ceph] skipping: [instance] 2026-04-23 16:29:45.831540 | instance | [ceph] 2026-04-23 16:29:45.831550 | instance | [ceph] TASK [vexxhost.ceph.mon : Generate "cephadm" key] ****************************** 2026-04-23 16:29:45.866109 | instance | [ceph] skipping: [instance] 2026-04-23 16:29:45.866143 | instance | [ceph] 2026-04-23 16:29:45.866154 | instance | [ceph] TASK [vexxhost.ceph.mon : Set Ceph Monitor IP address] ************************* 2026-04-23 16:29:45.873808 | instance | [kubernetes] ok: [instance] => (item={'path': '/etc/containerd'}) 2026-04-23 16:29:45.873827 | instance | [kubernetes] ok: [instance] => (item={'path': '/var/lib/containerd', 'mode': '0o700'}) 2026-04-23 16:29:45.873837 | instance | [kubernetes] ok: [instance] => (item={'path': '/run/containerd', 'mode': '0o711'}) 2026-04-23 16:29:45.873847 | instance | [kubernetes] ok: [instance] => (item={'path': '/run/containerd/io.containerd.grpc.v1.cri', 'mode': '0o700'}) 2026-04-23 16:29:45.873856 | instance | [kubernetes] ok: [instance] => (item={'path': '/run/containerd/io.containerd.sandbox.controller.v1.shim', 'mode': '0o700'}) 2026-04-23 16:29:45.873883 | instance | [kubernetes] 2026-04-23 16:29:45.873892 | instance | [kubernetes] TASK [vexxhost.containers.containerd : Create containerd config file] ********** 2026-04-23 16:29:45.972609 | 
instance | [ceph] ok: [instance] 2026-04-23 16:29:45.972650 | instance | [ceph] 2026-04-23 16:29:45.972661 | instance | [ceph] TASK [vexxhost.ceph.mon : Check if any node is bootstrapped] ******************* 2026-04-23 16:29:46.311663 | instance | [ceph] ok: [instance] => (item=instance) 2026-04-23 16:29:46.311721 | instance | [ceph] 2026-04-23 16:29:46.311733 | instance | [ceph] TASK [vexxhost.ceph.mon : Select pre-existing bootstrap node if exists] ******** 2026-04-23 16:29:46.361300 | instance | [ceph] ok: [instance] 2026-04-23 16:29:46.361337 | instance | [ceph] 2026-04-23 16:29:46.361348 | instance | [ceph] TASK [vexxhost.ceph.mon : Bootstrap cluster] *********************************** 2026-04-23 16:29:46.432596 | instance | [ceph] included: /home/zuul/.ansible/collections/ansible_collections/vexxhost/ceph/roles/mon/tasks/bootstrap-ceph.yml for instance 2026-04-23 16:29:46.432634 | instance | [ceph] 2026-04-23 16:29:46.432641 | instance | [ceph] TASK [vexxhost.ceph.mon : Generate temporary file for "ceph.conf"] ************* 2026-04-23 16:29:46.487204 | instance | [kubernetes] ok: [instance] 2026-04-23 16:29:46.487308 | instance | [kubernetes] 2026-04-23 16:29:46.487321 | instance | [kubernetes] TASK [vexxhost.containers.containerd : Force any restarts if necessary] ******** 2026-04-23 16:29:46.487332 | instance | [kubernetes] 2026-04-23 16:29:46.487341 | instance | [kubernetes] TASK [vexxhost.containers.containerd : Enable and start service] *************** 2026-04-23 16:29:46.827013 | instance | [ceph] changed: [instance] 2026-04-23 16:29:46.827118 | instance | [ceph] 2026-04-23 16:29:46.827130 | instance | [ceph] TASK [vexxhost.ceph.mon : Include extra configuration values] ****************** 2026-04-23 16:29:46.943874 | instance | [kubernetes] ok: [instance] 2026-04-23 16:29:46.943945 | instance | [kubernetes] 2026-04-23 16:29:46.943950 | instance | [kubernetes] TASK [vexxhost.containers.forget_package : Forget package] ********************* 2026-04-23 
16:29:47.259609 | instance | [kubernetes] ok: [instance] 2026-04-23 16:29:47.260050 | instance | [kubernetes] 2026-04-23 16:29:47.260092 | instance | [kubernetes] TASK [vexxhost.containers.download_artifact : Starting download of file] ******* 2026-04-23 16:29:47.313833 | instance | [kubernetes] ok: [instance] => { 2026-04-23 16:29:47.313938 | instance | [kubernetes] "msg": "https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.35.0/crictl-v1.35.0-linux-amd64.tar.gz" 2026-04-23 16:29:47.313950 | instance | [kubernetes] } 2026-04-23 16:29:47.313960 | instance | [kubernetes] 2026-04-23 16:29:47.313969 | instance | [kubernetes] TASK [vexxhost.containers.download_artifact : Download item] ******************* 2026-04-23 16:29:47.796563 | instance | [ceph] changed: [instance] => (item={'option': 'mon allow pool size one', 'section': 'global', 'value': True}) 2026-04-23 16:29:47.796614 | instance | [ceph] changed: [instance] => (item={'option': 'osd crush chooseleaf type', 'section': 'global', 'value': 0}) 2026-04-23 16:29:47.796626 | instance | [ceph] changed: [instance] => (item={'option': 'auth allow insecure global id reclaim', 'section': 'mon', 'value': False}) 2026-04-23 16:29:47.796636 | instance | [ceph] 2026-04-23 16:29:47.796645 | instance | [ceph] TASK [vexxhost.ceph.mon : Run Bootstrap command] ******************************* 2026-04-23 16:29:48.065426 | instance | [kubernetes] changed: [instance] 2026-04-23 16:29:48.065696 | instance | [kubernetes] 2026-04-23 16:29:48.065705 | instance | [kubernetes] TASK [vexxhost.containers.download_artifact : Extract archive] ***************** 2026-04-23 16:29:49.471003 | instance | [kubernetes] changed: [instance] 2026-04-23 16:29:49.471061 | instance | [kubernetes] 2026-04-23 16:29:49.471073 | instance | [kubernetes] TASK [vexxhost.containers.download_artifact : Starting download of file] ******* 2026-04-23 16:29:49.524121 | instance | [kubernetes] ok: [instance] => { 2026-04-23 16:29:49.524165 | instance |
[kubernetes] "msg": "https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.35.0/critest-v1.35.0-linux-amd64.tar.gz" 2026-04-23 16:29:49.524177 | instance | [kubernetes] } 2026-04-23 16:29:49.524187 | instance | [kubernetes] 2026-04-23 16:29:49.524196 | instance | [kubernetes] TASK [vexxhost.containers.download_artifact : Download item] ******************* 2026-04-23 16:29:50.290369 | instance | [kubernetes] changed: [instance] 2026-04-23 16:29:50.290432 | instance | [kubernetes] 2026-04-23 16:29:50.290444 | instance | [kubernetes] TASK [vexxhost.containers.download_artifact : Extract archive] ***************** 2026-04-23 16:29:51.832230 | instance | [kubernetes] changed: [instance] 2026-04-23 16:29:51.832310 | instance | [kubernetes] 2026-04-23 16:29:51.832323 | instance | [kubernetes] TASK [vexxhost.containers.cri_tools : Create crictl config] ******************** 2026-04-23 16:29:52.378214 | instance | [kubernetes] changed: [instance] 2026-04-23 16:29:52.378271 | instance | [kubernetes] 2026-04-23 16:29:52.378283 | instance | [kubernetes] TASK [vexxhost.containers.directory : Create directory (/opt/cni/bin)] ********* 2026-04-23 16:29:52.713677 | instance | [kubernetes] changed: [instance] 2026-04-23 16:29:52.713763 | instance | [kubernetes] 2026-04-23 16:29:52.713775 | instance | [kubernetes] TASK [vexxhost.containers.forget_package : Forget package] ********************* 2026-04-23 16:29:53.057892 | instance | [kubernetes] ok: [instance] 2026-04-23 16:29:53.057963 | instance | [kubernetes] 2026-04-23 16:29:53.057973 | instance | [kubernetes] TASK [vexxhost.containers.download_artifact : Starting download of file] ******* 2026-04-23 16:29:53.111523 | instance | [kubernetes] ok: [instance] => { 2026-04-23 16:29:53.111582 | instance | [kubernetes] "msg": "https://github.com/containernetworking/plugins/releases/download/v1.9.1/cni-plugins-linux-amd64-v1.9.1.tgz" 2026-04-23 16:29:53.111595 | instance | [kubernetes] } 2026-04-23 16:29:53.111605 | instance 
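The `vexxhost.containers.cri_tools : Create crictl config` task above lays down a crictl configuration file, though the log does not show its contents. A minimal sketch of what such a config typically looks like, assuming the containerd socket path used elsewhere in this run (`/run/containerd/containerd.sock` appears in the directory-creation tasks); the file is written to a temp path here for safe inspection rather than `/etc/crictl.yaml`:

```shell
# Hypothetical crictl config; endpoint paths are assumptions inferred from the
# /run/containerd directories created earlier in this log, not copied from the role.
f="$(mktemp)"
cat > "$f" <<'EOF'
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
EOF
# On a real node this file would live at /etc/crictl.yaml so that
# `crictl ps` and `critest` can find the CRI socket without flags.
cat "$f"
```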
| [kubernetes] 2026-04-23 16:29:53.111615 | instance | [kubernetes] TASK [vexxhost.containers.download_artifact : Download item] ******************* 2026-04-23 16:29:54.806102 | instance | [kubernetes] changed: [instance] 2026-04-23 16:29:54.806166 | instance | [kubernetes] 2026-04-23 16:29:54.806178 | instance | [kubernetes] TASK [vexxhost.containers.download_artifact : Extract archive] ***************** 2026-04-23 16:29:58.573384 | instance | [kubernetes] changed: [instance] 2026-04-23 16:29:58.573461 | instance | [kubernetes] 2026-04-23 16:29:58.573475 | instance | [kubernetes] TASK [vexxhost.containers.cni_plugins : Gather variables for each operating system] *** 2026-04-23 16:29:58.626645 | instance | [kubernetes] ok: [instance] => (item=/home/zuul/.ansible/collections/ansible_collections/vexxhost/containers/roles/cni_plugins/vars/debian.yml) 2026-04-23 16:29:58.626679 | instance | [kubernetes] 2026-04-23 16:29:58.626691 | instance | [kubernetes] TASK [vexxhost.containers.cni_plugins : Install additional packages] *********** 2026-04-23 16:29:59.750095 | instance | [kubernetes] ok: [instance] 2026-04-23 16:29:59.750165 | instance | [kubernetes] 2026-04-23 16:29:59.750178 | instance | [kubernetes] TASK [vexxhost.containers.cni_plugins : Ensure IPv6 is enabled] **************** 2026-04-23 16:30:00.133927 | instance | [kubernetes] changed: [instance] 2026-04-23 16:30:00.133983 | instance | [kubernetes] 2026-04-23 16:30:00.133991 | instance | [kubernetes] TASK [vexxhost.containers.cni_plugins : Enable kernel modules on-boot] ********* 2026-04-23 16:30:00.691851 | instance | [kubernetes] changed: [instance] 2026-04-23 16:30:00.691971 | instance | [kubernetes] 2026-04-23 16:30:00.691979 | instance | [kubernetes] TASK [vexxhost.containers.cni_plugins : Enable kernel modules in runtime] ****** 2026-04-23 16:30:01.964263 | instance | [kubernetes] changed: [instance] => (item=br_netfilter) 2026-04-23 16:30:01.964517 | instance | [kubernetes] ok: [instance] => 
(item=ip_tables) 2026-04-23 16:30:01.964530 | instance | [kubernetes] changed: [instance] => (item=ip6_tables) 2026-04-23 16:30:01.964540 | instance | [kubernetes] ok: [instance] => (item=nf_conntrack) 2026-04-23 16:30:01.964549 | instance | [kubernetes] 2026-04-23 16:30:01.964559 | instance | [kubernetes] TASK [vexxhost.containers.forget_package : Forget package] ********************* 2026-04-23 16:30:02.275957 | instance | [kubernetes] ok: [instance] 2026-04-23 16:30:02.276016 | instance | [kubernetes] 2026-04-23 16:30:02.276028 | instance | [kubernetes] TASK [vexxhost.containers.download_artifact : Starting download of file] ******* 2026-04-23 16:30:02.321973 | instance | [kubernetes] ok: [instance] => { 2026-04-23 16:30:02.322033 | instance | [kubernetes] "msg": "https://dl.k8s.io/release/v1.28.13/bin/linux/amd64/kubelet" 2026-04-23 16:30:02.322046 | instance | [kubernetes] } 2026-04-23 16:30:02.322090 | instance | [kubernetes] 2026-04-23 16:30:02.322100 | instance | [kubernetes] TASK [vexxhost.containers.download_artifact : Download item] ******************* 2026-04-23 16:30:18.300650 | instance | [kubernetes] changed: [instance] 2026-04-23 16:30:18.300729 | instance | [kubernetes] 2026-04-23 16:30:18.300749 | instance | [kubernetes] TASK [vexxhost.containers.download_artifact : Extract archive] ***************** 2026-04-23 16:30:18.348092 | instance | [kubernetes] skipping: [instance] 2026-04-23 16:30:18.348152 | instance | [kubernetes] 2026-04-23 16:30:18.348169 | instance | [kubernetes] TASK [vexxhost.kubernetes.kubelet : Gather variables for each operating system] *** 2026-04-23 16:30:18.400712 | instance | [kubernetes] ok: [instance] => (item=/home/zuul/.ansible/collections/ansible_collections/vexxhost/kubernetes/roles/kubelet/vars/debian.yml) 2026-04-23 16:30:18.400758 | instance | [kubernetes] 2026-04-23 16:30:18.400770 | instance | [kubernetes] TASK [vexxhost.kubernetes.kubelet : Install coreutils] ************************* 2026-04-23 16:30:18.440728 | 
instance | [kubernetes] skipping: [instance] 2026-04-23 16:30:18.440762 | instance | [kubernetes] 2026-04-23 16:30:18.440773 | instance | [kubernetes] TASK [vexxhost.kubernetes.kubelet : Install additional packages] *************** 2026-04-23 16:30:21.598989 | instance | [kubernetes] changed: [instance] 2026-04-23 16:30:21.599052 | instance | [kubernetes] 2026-04-23 16:30:21.599064 | instance | [kubernetes] TASK [vexxhost.kubernetes.kubelet : Configure sysctl values] ******************* 2026-04-23 16:30:26.095449 | instance | [kubernetes] changed: [instance] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2026-04-23 16:30:26.095512 | instance | [kubernetes] changed: [instance] => (item={'name': 'net.bridge.bridge-nf-call-iptables', 'value': 1}) 2026-04-23 16:30:26.095524 | instance | [kubernetes] changed: [instance] => (item={'name': 'net.bridge.bridge-nf-call-ip6tables', 'value': 1}) 2026-04-23 16:30:26.095532 | instance | [kubernetes] changed: [instance] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 0}) 2026-04-23 16:30:26.095540 | instance | [kubernetes] changed: [instance] => (item={'name': 'fs.inotify.max_queued_events', 'value': 1048576}) 2026-04-23 16:30:26.095548 | instance | [kubernetes] changed: [instance] => (item={'name': 'fs.inotify.max_user_instances', 'value': 8192}) 2026-04-23 16:30:26.095556 | instance | [kubernetes] changed: [instance] => (item={'name': 'fs.inotify.max_user_watches', 'value': 1048576}) 2026-04-23 16:30:26.095566 | instance | [kubernetes] 2026-04-23 16:30:26.095574 | instance | [kubernetes] TASK [vexxhost.kubernetes.kubelet : Create folders for kubernetes configuration] *** 2026-04-23 16:30:26.981147 | instance | [kubernetes] changed: [instance] => (item=/etc/systemd/system/kubelet.service.d) 2026-04-23 16:30:26.981212 | instance | [kubernetes] ok: [instance] => (item=/etc/kubernetes) 2026-04-23 16:30:26.981223 | instance | [kubernetes] ok: [instance] => (item=/etc/kubernetes/manifests) 2026-04-23 16:30:26.981233 
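The kubelet role's `Configure sysctl values` task above applies seven kernel parameters (the values are taken directly from this log). A minimal sketch of the equivalent manual steps, writing a sysctl drop-in to a scratch file so it can be inspected without touching the node; the `99-kubelet.conf` filename is an assumption, not what the role uses:

```shell
# Values copied verbatim from the log's "Configure sysctl values" task.
conf="$(mktemp)"
cat > "$conf" <<'EOF'
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.conf.all.rp_filter = 0
fs.inotify.max_queued_events = 1048576
fs.inotify.max_user_instances = 8192
fs.inotify.max_user_watches = 1048576
EOF
# On a real node: sudo cp "$conf" /etc/sysctl.d/99-kubelet.conf && sudo sysctl --system
# Note the bridge-nf-call-* keys only exist after br_netfilter is loaded,
# which is why the cni_plugins role loads the kernel modules first.
grep -c '=' "$conf"
```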
| instance | [kubernetes] 2026-04-23 16:30:26.981243 | instance | [kubernetes] TASK [vexxhost.kubernetes.kubelet : Add kubelet systemd service config] ******** 2026-04-23 16:30:27.533385 | instance | [kubernetes] changed: [instance] 2026-04-23 16:30:27.533485 | instance | [kubernetes] 2026-04-23 16:30:27.533498 | instance | [kubernetes] TASK [vexxhost.kubernetes.kubelet : Add kubeadm dropin for kubelet systemd service config] *** 2026-04-23 16:30:28.128801 | instance | [kubernetes] changed: [instance] 2026-04-23 16:30:28.128870 | instance | [kubernetes] 2026-04-23 16:30:28.128882 | instance | [kubernetes] TASK [vexxhost.kubernetes.kubelet : Check swap status] ************************* 2026-04-23 16:30:28.442658 | instance | [kubernetes] ok: [instance] 2026-04-23 16:30:28.442730 | instance | [kubernetes] 2026-04-23 16:30:28.443244 | instance | [kubernetes] TASK [vexxhost.kubernetes.kubelet : Disable swap] ****************************** 2026-04-23 16:30:28.476959 | instance | [kubernetes] skipping: [instance] 2026-04-23 16:30:28.477065 | instance | [kubernetes] 2026-04-23 16:30:28.477077 | instance | [kubernetes] TASK [vexxhost.kubernetes.kubelet : Remove swapfile from /etc/fstab] *********** 2026-04-23 16:30:29.171708 | instance | [kubernetes] ok: [instance] => (item=swap) 2026-04-23 16:30:29.171811 | instance | [kubernetes] ok: [instance] => (item=none) 2026-04-23 16:30:29.171824 | instance | [kubernetes] 2026-04-23 16:30:29.171836 | instance | [kubernetes] TASK [vexxhost.kubernetes.kubelet : Create noswap systemd service config file] *** 2026-04-23 16:30:29.752922 | instance | [kubernetes] changed: [instance] 2026-04-23 16:30:29.753082 | instance | [kubernetes] 2026-04-23 16:30:29.753092 | instance | [kubernetes] TASK [vexxhost.kubernetes.kubelet : Enable noswap service] ********************* 2026-04-23 16:30:30.446127 | instance | [kubernetes] changed: [instance] 2026-04-23 16:30:30.446240 | instance | [kubernetes] 2026-04-23 16:30:30.446253 | instance | 
[kubernetes] TASK [vexxhost.kubernetes.kubelet : Force any restarts if necessary] *********** 2026-04-23 16:30:30.446264 | instance | [kubernetes] 2026-04-23 16:30:30.446272 | instance | [kubernetes] RUNNING HANDLER [vexxhost.kubernetes.kubelet : Reload systemd] ***************** 2026-04-23 16:30:31.300377 | instance | [kubernetes] ok: [instance] 2026-04-23 16:30:31.300497 | instance | [kubernetes] 2026-04-23 16:30:31.300545 | instance | [kubernetes] TASK [vexxhost.kubernetes.kubelet : Enable and start kubelet service] ********** 2026-04-23 16:30:31.383133 | instance | [ceph] fatal: [instance]: FAILED! => {"changed": false, "cmd": ["cephadm", "bootstrap", "--fsid", "4837cbf8-4f90-4300-b3f6-726c9b9f89b4", "--mon-ip", "10.96.240.200", "--cluster-network", "10.96.240.0/24", "--ssh-user", "cephadm", "--config", "/tmp/ceph_acrjkef0.conf", "--skip-monitoring-stack"], "delta": "0:00:43.285827", "end": "2026-04-23 16:30:31.335314", "msg": "non-zero return code", "rc": 1, "start": "2026-04-23 16:29:48.049487", "stderr": "Specifying an fsid for your cluster offers no advantages and may increase the likelihood of fsid conflicts.\nRuntimeError: Failed command: /usr/bin/docker run --rm --ipc=host --stop-signal=SIGTERM --ulimit nofile=1048576 --net=host --entrypoint /usr/bin/ceph --init -e CONTAINER_IMAGE=quay.io/ceph/ceph:v18.2.7 -e NODE_NAME=instance -v /var/log/ceph/4837cbf8-4f90-4300-b3f6-726c9b9f89b4:/var/log/ceph:z -v /tmp/ceph-tmpzgo800tb:/etc/ceph/ceph.client.admin.keyring:z -v /tmp/ceph-tmp8rbfn165:/etc/ceph/ceph.conf:z quay.io/ceph/ceph:v18.2.7 config set global cluster_network 10.96.240.0/24: docker: Error response from daemon: failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: unable to apply cgroup configuration: unable to start unit \"docker-db6638e37c0319b83b35e3bb16ec074837a1323073de4b2de24305179f02640e.scope\" (properties [{Name:Description Value:\"libcontainer 
container db6638e37c0319b83b35e3bb16ec074837a1323073de4b2de24305179f02640e\"} {Name:Slice Value:\"system.slice\"} {Name:Delegate Value:true} {Name:PIDs Value:@au [27802]} {Name:MemoryAccounting Value:true} {Name:CPUAccounting Value:true} {Name:IOAccounting Value:true} {Name:TasksAccounting Value:true} {Name:DefaultDependencies Value:false}]): Message recipient disconnected from message bus without replying: unknown.\n\nTraceback (most recent call last):\n File \"/usr/lib/python3.10/runpy.py\", line 196, in _run_module_as_main\n return _run_code(code, main_globals, None,\n File \"/usr/lib/python3.10/runpy.py\", line 86, in _run_code\n exec(code, run_globals)\n File \"/usr/bin/cephadm/__main__.py\", line 11009, in \n File \"/usr/bin/cephadm/__main__.py\", line 10997, in main\n File \"/usr/bin/cephadm/__main__.py\", line 6395, in _rollback\n File \"/usr/bin/cephadm/__main__.py\", line 2643, in _default_image\n File \"/usr/bin/cephadm/__main__.py\", line 6566, in command_bootstrap\n File \"/usr/bin/cephadm/__main__.py\", line 6250, in finish_bootstrap_config\n File \"/usr/bin/cephadm/__main__.py\", line 6556, in cli\n File \"/usr/bin/cephadm/__main__.py\", line 4895, in run\n File \"/usr/bin/cephadm/__main__.py\", line 2283, in call_throws\nRuntimeError: Failed command: /usr/bin/docker run --rm --ipc=host --stop-signal=SIGTERM --ulimit nofile=1048576 --net=host --entrypoint /usr/bin/ceph --init -e CONTAINER_IMAGE=quay.io/ceph/ceph:v18.2.7 -e NODE_NAME=instance -v /var/log/ceph/4837cbf8-4f90-4300-b3f6-726c9b9f89b4:/var/log/ceph:z -v /tmp/ceph-tmpzgo800tb:/etc/ceph/ceph.client.admin.keyring:z -v /tmp/ceph-tmp8rbfn165:/etc/ceph/ceph.conf:z quay.io/ceph/ceph:v18.2.7 config set global cluster_network 10.96.240.0/24: docker: Error response from daemon: failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: unable to apply cgroup configuration: unable to start unit 
\"docker-db6638e37c0319b83b35e3bb16ec074837a1323073de4b2de24305179f02640e.scope\" (properties [{Name:Description Value:\"libcontainer container db6638e37c0319b83b35e3bb16ec074837a1323073de4b2de24305179f02640e\"} {Name:Slice Value:\"system.slice\"} {Name:Delegate Value:true} {Name:PIDs Value:@au [27802]} {Name:MemoryAccounting Value:true} {Name:CPUAccounting Value:true} {Name:IOAccounting Value:true} {Name:TasksAccounting Value:true} {Name:DefaultDependencies Value:false}]): Message recipient disconnected from message bus without replying: unknown.", "stderr_lines": ["Specifying an fsid for your cluster offers no advantages and may increase the likelihood of fsid conflicts.", "RuntimeError: Failed command: /usr/bin/docker run --rm --ipc=host --stop-signal=SIGTERM --ulimit nofile=1048576 --net=host --entrypoint /usr/bin/ceph --init -e CONTAINER_IMAGE=quay.io/ceph/ceph:v18.2.7 -e NODE_NAME=instance -v /var/log/ceph/4837cbf8-4f90-4300-b3f6-726c9b9f89b4:/var/log/ceph:z -v /tmp/ceph-tmpzgo800tb:/etc/ceph/ceph.client.admin.keyring:z -v /tmp/ceph-tmp8rbfn165:/etc/ceph/ceph.conf:z quay.io/ceph/ceph:v18.2.7 config set global cluster_network 10.96.240.0/24: docker: Error response from daemon: failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: unable to apply cgroup configuration: unable to start unit \"docker-db6638e37c0319b83b35e3bb16ec074837a1323073de4b2de24305179f02640e.scope\" (properties [{Name:Description Value:\"libcontainer container db6638e37c0319b83b35e3bb16ec074837a1323073de4b2de24305179f02640e\"} {Name:Slice Value:\"system.slice\"} {Name:Delegate Value:true} {Name:PIDs Value:@au [27802]} {Name:MemoryAccounting Value:true} {Name:CPUAccounting Value:true} {Name:IOAccounting Value:true} {Name:TasksAccounting Value:true} {Name:DefaultDependencies Value:false}]): Message recipient disconnected from message bus without replying: unknown.", "", "Traceback (most recent call 
last):", " File \"/usr/lib/python3.10/runpy.py\", line 196, in _run_module_as_main", " return _run_code(code, main_globals, None,", " File \"/usr/lib/python3.10/runpy.py\", line 86, in _run_code", " exec(code, run_globals)", " File \"/usr/bin/cephadm/__main__.py\", line 11009, in ", " File \"/usr/bin/cephadm/__main__.py\", line 10997, in main", " File \"/usr/bin/cephadm/__main__.py\", line 6395, in _rollback", " File \"/usr/bin/cephadm/__main__.py\", line 2643, in _default_image", " File \"/usr/bin/cephadm/__main__.py\", line 6566, in command_bootstrap", " File \"/usr/bin/cephadm/__main__.py\", line 6250, in finish_bootstrap_config", " File \"/usr/bin/cephadm/__main__.py\", line 6556, in cli", " File \"/usr/bin/cephadm/__main__.py\", line 4895, in run", " File \"/usr/bin/cephadm/__main__.py\", line 2283, in call_throws", "RuntimeError: Failed command: /usr/bin/docker run --rm --ipc=host --stop-signal=SIGTERM --ulimit nofile=1048576 --net=host --entrypoint /usr/bin/ceph --init -e CONTAINER_IMAGE=quay.io/ceph/ceph:v18.2.7 -e NODE_NAME=instance -v /var/log/ceph/4837cbf8-4f90-4300-b3f6-726c9b9f89b4:/var/log/ceph:z -v /tmp/ceph-tmpzgo800tb:/etc/ceph/ceph.client.admin.keyring:z -v /tmp/ceph-tmp8rbfn165:/etc/ceph/ceph.conf:z quay.io/ceph/ceph:v18.2.7 config set global cluster_network 10.96.240.0/24: docker: Error response from daemon: failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: unable to apply cgroup configuration: unable to start unit \"docker-db6638e37c0319b83b35e3bb16ec074837a1323073de4b2de24305179f02640e.scope\" (properties [{Name:Description Value:\"libcontainer container db6638e37c0319b83b35e3bb16ec074837a1323073de4b2de24305179f02640e\"} {Name:Slice Value:\"system.slice\"} {Name:Delegate Value:true} {Name:PIDs Value:@au [27802]} {Name:MemoryAccounting Value:true} {Name:CPUAccounting Value:true} {Name:IOAccounting Value:true} {Name:TasksAccounting Value:true} 
{Name:DefaultDependencies Value:false}]): Message recipient disconnected from message bus without replying: unknown."], "stdout": "Creating directory /etc/ceph for ceph.conf\nVerifying ssh connectivity using standard pubkey authentication ...\nAdding key to cephadm@localhost authorized_keys...\nVerifying podman|docker is present...\nVerifying lvm2 is present...\nVerifying time synchronization is in place...\nUnit chrony.service is enabled and running\nRepeating the final host check...\ndocker (/usr/bin/docker) is present\nsystemctl is present\nlvcreate is present\nUnit chrony.service is enabled and running\nHost looks OK\nCluster fsid: 4837cbf8-4f90-4300-b3f6-726c9b9f89b4\nVerifying IP 10.96.240.200 port 3300 ...\nVerifying IP 10.96.240.200 port 6789 ...\nMon IP `10.96.240.200` is in CIDR network `10.96.240.0/24`\nMon IP `10.96.240.200` is in CIDR network `10.96.240.0/24`\nPulling container image quay.io/ceph/ceph:v18.2.7...\nCeph version: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)\nExtracting ceph user uid/gid from container image...\nCreating initial keys...\nCreating initial monmap...\nCreating mon...\nWaiting for mon to start...\nWaiting for mon...\nmon is available\nAssimilating anything we can from ceph.conf...\nGenerating new minimal ceph.conf...\nRestarting the monitor...\nSetting public_network to 10.96.240.0/24 in global config section\nSetting cluster_network to 10.96.240.0/24\nNon-zero exit code 125 from /usr/bin/docker run --rm --ipc=host --stop-signal=SIGTERM --ulimit nofile=1048576 --net=host --entrypoint /usr/bin/ceph --init -e CONTAINER_IMAGE=quay.io/ceph/ceph:v18.2.7 -e NODE_NAME=instance -v /var/log/ceph/4837cbf8-4f90-4300-b3f6-726c9b9f89b4:/var/log/ceph:z -v /tmp/ceph-tmpzgo800tb:/etc/ceph/ceph.client.admin.keyring:z -v /tmp/ceph-tmp8rbfn165:/etc/ceph/ceph.conf:z quay.io/ceph/ceph:v18.2.7 config set global cluster_network 10.96.240.0/24\n/usr/bin/ceph: stderr docker: Error response from daemon: failed to create 
task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: unable to apply cgroup configuration: unable to start unit \"docker-db6638e37c0319b83b35e3bb16ec074837a1323073de4b2de24305179f02640e.scope\" (properties [{Name:Description Value:\"libcontainer container db6638e37c0319b83b35e3bb16ec074837a1323073de4b2de24305179f02640e\"} {Name:Slice Value:\"system.slice\"} {Name:Delegate Value:true} {Name:PIDs Value:@au [27802]} {Name:MemoryAccounting Value:true} {Name:CPUAccounting Value:true} {Name:IOAccounting Value:true} {Name:TasksAccounting Value:true} {Name:DefaultDependencies Value:false}]): Message recipient disconnected from message bus without replying: unknown.\n\n\n\t***************\n\tCephadm hit an issue during cluster installation. Current cluster files will NOT BE DELETED automatically to change\n\tthis behaviour you can pass the --cleanup-on-failure. To remove this broken cluster manually please run:\n\n\t > cephadm rm-cluster --force --fsid 4837cbf8-4f90-4300-b3f6-726c9b9f89b4\n\n\tin case of any previous broken installation user must use the rm-cluster command to delete the broken cluster:\n\n\t > cephadm rm-cluster --force --zap-osds --fsid \n\n\tfor more information please refer to https://docs.ceph.com/en/latest/cephadm/operations/#purging-a-cluster\n\t***************", "stdout_lines": ["Creating directory /etc/ceph for ceph.conf", "Verifying ssh connectivity using standard pubkey authentication ...", "Adding key to cephadm@localhost authorized_keys...", "Verifying podman|docker is present...", "Verifying lvm2 is present...", "Verifying time synchronization is in place...", "Unit chrony.service is enabled and running", "Repeating the final host check...", "docker (/usr/bin/docker) is present", "systemctl is present", "lvcreate is present", "Unit chrony.service is enabled and running", "Host looks OK", "Cluster fsid: 4837cbf8-4f90-4300-b3f6-726c9b9f89b4", "Verifying IP 10.96.240.200 
port 3300 ...", "Verifying IP 10.96.240.200 port 6789 ...", "Mon IP `10.96.240.200` is in CIDR network `10.96.240.0/24`", "Mon IP `10.96.240.200` is in CIDR network `10.96.240.0/24`", "Pulling container image quay.io/ceph/ceph:v18.2.7...", "Ceph version: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)", "Extracting ceph user uid/gid from container image...", "Creating initial keys...", "Creating initial monmap...", "Creating mon...", "Waiting for mon to start...", "Waiting for mon...", "mon is available", "Assimilating anything we can from ceph.conf...", "Generating new minimal ceph.conf...", "Restarting the monitor...", "Setting public_network to 10.96.240.0/24 in global config section", "Setting cluster_network to 10.96.240.0/24", "Non-zero exit code 125 from /usr/bin/docker run --rm --ipc=host --stop-signal=SIGTERM --ulimit nofile=1048576 --net=host --entrypoint /usr/bin/ceph --init -e CONTAINER_IMAGE=quay.io/ceph/ceph:v18.2.7 -e NODE_NAME=instance -v /var/log/ceph/4837cbf8-4f90-4300-b3f6-726c9b9f89b4:/var/log/ceph:z -v /tmp/ceph-tmpzgo800tb:/etc/ceph/ceph.client.admin.keyring:z -v /tmp/ceph-tmp8rbfn165:/etc/ceph/ceph.conf:z quay.io/ceph/ceph:v18.2.7 config set global cluster_network 10.96.240.0/24", "/usr/bin/ceph: stderr docker: Error response from daemon: failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: unable to apply cgroup configuration: unable to start unit \"docker-db6638e37c0319b83b35e3bb16ec074837a1323073de4b2de24305179f02640e.scope\" (properties [{Name:Description Value:\"libcontainer container db6638e37c0319b83b35e3bb16ec074837a1323073de4b2de24305179f02640e\"} {Name:Slice Value:\"system.slice\"} {Name:Delegate Value:true} {Name:PIDs Value:@au [27802]} {Name:MemoryAccounting Value:true} {Name:CPUAccounting Value:true} {Name:IOAccounting Value:true} {Name:TasksAccounting Value:true} {Name:DefaultDependencies Value:false}]): 
Message recipient disconnected from message bus without replying: unknown.", "", "", "\t***************", "\tCephadm hit an issue during cluster installation. Current cluster files will NOT BE DELETED automatically to change", "\tthis behaviour you can pass the --cleanup-on-failure. To remove this broken cluster manually please run:", "", "\t > cephadm rm-cluster --force --fsid 4837cbf8-4f90-4300-b3f6-726c9b9f89b4", "", "\tin case of any previous broken installation user must use the rm-cluster command to delete the broken cluster:", "", "\t > cephadm rm-cluster --force --zap-osds --fsid ", "", "\tfor more information please refer to https://docs.ceph.com/en/latest/cephadm/operations/#purging-a-cluster", "\t***************"]} 2026-04-23 16:30:31.401844 | instance | [ceph] 2026-04-23 16:30:31.401880 | instance | [ceph] TASK [vexxhost.ceph.mon : Remove temporary file for "ceph.conf"] *************** 2026-04-23 16:30:31.705840 | instance | [ceph] changed: [instance] 2026-04-23 16:30:31.705905 | instance | [ceph] 2026-04-23 16:30:31.705917 | instance | [ceph] PLAY RECAP ********************************************************************* 2026-04-23 16:30:31.705928 | instance | [ceph] instance : ok=48 changed=26 unreachable=0 failed=1 skipped=14 rescued=0 ignored=0 2026-04-23 16:30:31.705938 | instance | [ceph] 2026-04-23 16:30:31.955686 | instance | Error: component ceph failed: ansible-playbook failed for ceph: exit status 2 2026-04-23 16:30:31.955816 | instance | Usage: 2026-04-23 16:30:31.955828 | instance | atmosphere deploy [flags] 2026-04-23 16:30:31.955839 | instance | 2026-04-23 16:30:31.955848 | instance | Flags: 2026-04-23 16:30:31.955857 | instance | --concurrency int Max concurrent deployments per wave (0 = unlimited) 2026-04-23 16:30:31.955867 | instance | -h, --help help for deploy 2026-04-23 16:30:31.955876 | instance | -i, --inventory string Path to Ansible inventory file (required) 2026-04-23 16:30:31.955888 | instance | -t, --tags string 
Comma-separated list of component tags to deploy 2026-04-23 16:30:31.955897 | instance | 2026-04-23 16:30:31.955905 | instance | component ceph failed: ansible-playbook failed for ceph: exit status 2 2026-04-23 16:30:32.105274 | instance | ERROR 2026-04-23 16:30:32.105545 | instance | { 2026-04-23 16:30:32.105588 | instance | "delta": "0:01:54.387407", 2026-04-23 16:30:32.105619 | instance | "end": "2026-04-23 16:30:31.957572", 2026-04-23 16:30:32.105646 | instance | "msg": "non-zero return code", 2026-04-23 16:30:32.105672 | instance | "rc": 1, 2026-04-23 16:30:32.105697 | instance | "start": "2026-04-23 16:28:37.570165" 2026-04-23 16:30:32.105725 | instance | } failure 2026-04-23 16:30:32.115118 | 2026-04-23 16:30:32.115169 | PLAY RECAP 2026-04-23 16:30:32.115236 | instance | ok: 1 changed: 0 unreachable: 0 failed: 1 skipped: 0 rescued: 0 ignored: 0 2026-04-23 16:30:32.115262 | 2026-04-23 16:30:32.261608 | RUN END RESULT_NORMAL: [untrusted : github.com/vexxhost/atmosphere/molecule/aio/converge.yml@main] 2026-04-23 16:30:32.270395 | POST-RUN START: [untrusted : github.com/vexxhost/atmosphere/test-playbooks/molecule/post.yml@main] 2026-04-23 16:30:32.950427 | 2026-04-23 16:30:32.950623 | PLAY [all] 2026-04-23 16:30:32.966045 | 2026-04-23 16:30:32.966214 | TASK [gather-host-logs : creating directory for system status] 2026-04-23 16:30:33.317014 | instance | changed 2026-04-23 16:30:33.322632 | 2026-04-23 16:30:33.322719 | TASK [gather-host-logs : Get logs for each host] 2026-04-23 16:30:33.665337 | instance | + systemd-cgls --full --all --no-pager 2026-04-23 16:30:33.677074 | instance | + ip addr 2026-04-23 16:30:33.680091 | instance | + ip route 2026-04-23 16:30:33.681743 | instance | + lsblk 2026-04-23 16:30:33.686636 | instance | + mount 2026-04-23 16:30:33.689358 | instance | + docker images 2026-04-23 16:30:33.710353 | instance | + brctl show 2026-04-23 16:30:33.710945 | instance | /bin/bash: line 8: brctl: command not found 2026-04-23 16:30:33.711289 | 
instance | + ps aux --sort=-%mem 2026-04-23 16:30:33.725203 | instance | + dpkg -l 2026-04-23 16:30:33.735654 | instance | + CONTAINERS=($(docker ps -a --format '{{ .Names }}' --filter label=zuul)) 2026-04-23 16:30:33.736191 | instance | ++ docker ps -a --format '{{ .Names }}' --filter label=zuul 2026-04-23 16:30:33.750927 | instance | + '[' '!' -z '' ']' 2026-04-23 16:30:33.862871 | instance | ok: Runtime: 0:00:00.090712 2026-04-23 16:30:33.869386 | 2026-04-23 16:30:33.869458 | TASK [gather-host-logs : Downloads logs to executor] 2026-04-23 16:30:34.495558 | instance | changed: 2026-04-23 16:30:34.495738 | instance | created directory /var/lib/zuul/builds/4443a534d0224a25859b36e43392f9eb/work/logs/instance 2026-04-23 16:30:34.495766 | instance | cd+++++++++ system/ 2026-04-23 16:30:34.495788 | instance | >f+++++++++ system/brctl-show.txt 2026-04-23 16:30:34.495810 | instance | >f+++++++++ system/docker-images.txt 2026-04-23 16:30:34.495829 | instance | >f+++++++++ system/ip-addr.txt 2026-04-23 16:30:34.495851 | instance | >f+++++++++ system/ip-route.txt 2026-04-23 16:30:34.495874 | instance | >f+++++++++ system/lsblk.txt 2026-04-23 16:30:34.495894 | instance | >f+++++++++ system/mount.txt 2026-04-23 16:30:34.495914 | instance | >f+++++++++ system/packages.txt 2026-04-23 16:30:34.495932 | instance | >f+++++++++ system/ps.txt 2026-04-23 16:30:34.495954 | instance | >f+++++++++ system/systemd-cgls.txt 2026-04-23 16:30:34.505407 | 2026-04-23 16:30:34.505481 | LOOP [helm-release-status : creating directory for helm release status] 2026-04-23 16:30:34.708322 | instance | changed: "values" 2026-04-23 16:30:34.867788 | instance | changed: "releases" 2026-04-23 16:30:34.879502 | 2026-04-23 16:30:34.879694 | TASK [helm-release-status : Gather get release status for helm charts] 2026-04-23 16:30:35.165420 | instance | E0423 16:30:35.165228 28028 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: 
connect: connection refused 2026-04-23 16:30:35.166149 | instance | E0423 16:30:35.166083 28028 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused 2026-04-23 16:30:35.167943 | instance | E0423 16:30:35.167872 28028 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused 2026-04-23 16:30:35.168587 | instance | E0423 16:30:35.168523 28028 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused 2026-04-23 16:30:35.170477 | instance | E0423 16:30:35.170406 28028 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused 2026-04-23 16:30:35.170525 | instance | The connection to the server localhost:8080 was refused - did you specify the right host or port? 
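The cephadm failure banner earlier in the log tells the operator to run `cephadm rm-cluster --force --fsid 4837cbf8-4f90-4300-b3f6-726c9b9f89b4` by hand. A minimal sketch (illustrative only, not part of this job) of recovering the fsid from captured cephadm output so the cleanup command need not be copied manually; the sample line mirrors the `Cluster fsid:` line in the log above, and nothing here actually invokes cephadm:

```shell
# Hypothetical helper: pull the fsid out of saved cephadm output and print
# the cleanup command the failure banner recommends.
line='Cluster fsid: 4837cbf8-4f90-4300-b3f6-726c9b9f89b4'
fsid=${line#Cluster fsid: }                     # strip the fixed prefix
cleanup="cephadm rm-cluster --force --fsid ${fsid}"
echo "$cleanup"   # prints: cephadm rm-cluster --force --fsid 4837cbf8-4f90-4300-b3f6-726c9b9f89b4
```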
2026-04-23 16:30:35.421414 | instance | ok: Runtime: 0:00:00.065148 2026-04-23 16:30:35.428006 | 2026-04-23 16:30:35.428179 | TASK [helm-release-status : Downloads logs to executor] 2026-04-23 16:30:35.912223 | instance | changed: 2026-04-23 16:30:35.912515 | instance | cd+++++++++ helm/ 2026-04-23 16:30:35.912574 | instance | cd+++++++++ helm/releases/ 2026-04-23 16:30:35.912610 | instance | cd+++++++++ helm/values/ 2026-04-23 16:30:35.926996 | 2026-04-23 16:30:35.927202 | TASK [describe-kubernetes-objects : creating directory for cluster scoped objects] 2026-04-23 16:30:36.162121 | instance | changed 2026-04-23 16:30:36.167413 | 2026-04-23 16:30:36.167488 | TASK [describe-kubernetes-objects : Gathering descriptions for cluster scoped objects] 2026-04-23 16:30:36.390326 | instance | xargs: warning: options --max-args and --replace/-I/-i are mutually exclusive, ignoring previous --max-args value 2026-04-23 16:30:36.391127 | instance | xargs: warning: options --max-args and --replace/-I/-i are mutually exclusive, ignoring previous --max-args value 2026-04-23 16:30:36.397493 | instance | xargs: warning: options --max-lines and --replace/-I/-i are mutually exclusive, ignoring previous --max-lines value 2026-04-23 16:30:36.401586 | instance | xargs: warning: options --max-lines and --replace/-I/-i are mutually exclusive, ignoring previous --max-lines value 2026-04-23 16:30:36.442494 | instance | E0423 16:30:36.442296 28080 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused 2026-04-23 16:30:36.443479 | instance | E0423 16:30:36.443417 28080 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused 2026-04-23 16:30:36.443939 | instance | E0423 16:30:36.443910 28080 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial 
tcp [::1]:8080: connect: connection refused 2026-04-23 16:30:36.446217 | instance | E0423 16:30:36.445812 28080 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused 2026-04-23 16:30:36.446421 | instance | E0423 16:30:36.446382 28080 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused 2026-04-23 16:30:36.447613 | instance | The connection to the server localhost:8080 was refused - did you specify the right host or port? 2026-04-23 16:30:36.457802 | instance | xargs: warning: options --max-lines and --replace/-I/-i are mutually exclusive, ignoring previous --max-lines value 2026-04-23 16:30:36.457862 | instance | E0423 16:30:36.457785 28088 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused 2026-04-23 16:30:36.458899 | instance | E0423 16:30:36.458803 28088 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused 2026-04-23 16:30:36.459660 | instance | E0423 16:30:36.459613 28088 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused 2026-04-23 16:30:36.461327 | instance | E0423 16:30:36.461294 28088 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused 2026-04-23 16:30:36.461854 | instance | E0423 16:30:36.461824 28088 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused 2026-04-23 16:30:36.463001 | instance | The connection to the server localhost:8080 was refused - 
did you specify the right host or port? 2026-04-23 16:30:36.471788 | instance | xargs: warning: options --max-lines and --replace/-I/-i are mutually exclusive, ignoring previous --max-lines value 2026-04-23 16:30:36.506664 | instance | E0423 16:30:36.506564 28121 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused 2026-04-23 16:30:36.506973 | instance | E0423 16:30:36.506935 28121 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused 2026-04-23 16:30:36.508459 | instance | E0423 16:30:36.508399 28121 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused 2026-04-23 16:30:36.509466 | instance | E0423 16:30:36.509378 28121 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused 2026-04-23 16:30:36.510944 | instance | E0423 16:30:36.510882 28121 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused 2026-04-23 16:30:36.510990 | instance | The connection to the server localhost:8080 was refused - did you specify the right host or port? 
2026-04-23 16:30:36.519783 | instance | xargs: warning: options --max-lines and --replace/-I/-i are mutually exclusive, ignoring previous --max-lines value 2026-04-23 16:30:36.523479 | instance | E0423 16:30:36.523271 28129 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused 2026-04-23 16:30:36.524904 | instance | E0423 16:30:36.524828 28129 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused 2026-04-23 16:30:36.525495 | instance | E0423 16:30:36.525443 28129 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused 2026-04-23 16:30:36.527164 | instance | E0423 16:30:36.527109 28129 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused 2026-04-23 16:30:36.527541 | instance | E0423 16:30:36.527504 28129 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused 2026-04-23 16:30:36.528759 | instance | The connection to the server localhost:8080 was refused - did you specify the right host or port? 
2026-04-23 16:30:36.571609 | instance | E0423 16:30:36.571454 28157 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused 2026-04-23 16:30:36.572318 | instance | E0423 16:30:36.572277 28157 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused 2026-04-23 16:30:36.574309 | instance | E0423 16:30:36.574268 28157 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused 2026-04-23 16:30:36.575031 | instance | E0423 16:30:36.574995 28157 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused 2026-04-23 16:30:36.576650 | instance | E0423 16:30:36.576618 28157 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused 2026-04-23 16:30:36.576686 | instance | The connection to the server localhost:8080 was refused - did you specify the right host or port? 
2026-04-23 16:30:36.706879 | instance | ok: Runtime: 0:00:00.198861 2026-04-23 16:30:36.714396 | 2026-04-23 16:30:36.714504 | TASK [describe-kubernetes-objects : creating directory for namespace scoped objects] 2026-04-23 16:30:36.919902 | instance | changed 2026-04-23 16:30:36.927159 | 2026-04-23 16:30:36.927271 | TASK [describe-kubernetes-objects : Gathering descriptions for namespace scoped objects] 2026-04-23 16:30:37.134966 | instance | xargs: warning: options --max-args and --replace/-I/-i are mutually exclusive, ignoring previous --max-args value 2026-04-23 16:30:37.135196 | instance | xargs: warning: options --max-args and --replace/-I/-i are mutually exclusive, ignoring previous --max-args value 2026-04-23 16:30:37.135212 | instance | xargs: warning: options --max-args and --replace/-I/-i are mutually exclusive, ignoring previous --max-args value 2026-04-23 16:30:37.185045 | instance | E0423 16:30:37.184870 28195 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused 2026-04-23 16:30:37.185782 | instance | E0423 16:30:37.185689 28195 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused 2026-04-23 16:30:37.189069 | instance | E0423 16:30:37.189023 28195 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused 2026-04-23 16:30:37.190278 | instance | E0423 16:30:37.190231 28195 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused 2026-04-23 16:30:37.191184 | instance | E0423 16:30:37.191055 28195 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused 2026-04-23 
16:30:37.192295 | instance | The connection to the server localhost:8080 was refused - did you specify the right host or port? 2026-04-23 16:30:37.467984 | instance | ok: Runtime: 0:00:00.068570 2026-04-23 16:30:37.475104 | 2026-04-23 16:30:37.475200 | TASK [describe-kubernetes-objects : Downloads logs to executor] 2026-04-23 16:30:37.974118 | instance | changed: 2026-04-23 16:30:37.974340 | instance | cd+++++++++ objects/ 2026-04-23 16:30:37.974379 | instance | cd+++++++++ objects/cluster/ 2026-04-23 16:30:37.974409 | instance | cd+++++++++ objects/namespaced/ 2026-04-23 16:30:37.984554 | 2026-04-23 16:30:37.984622 | TASK [gather-pod-logs : creating directory for pod logs] 2026-04-23 16:30:38.184138 | instance | changed 2026-04-23 16:30:38.194355 | 2026-04-23 16:30:38.194461 | TASK [gather-pod-logs : creating directory for failed pod logs] 2026-04-23 16:30:38.413144 | instance | changed 2026-04-23 16:30:38.420255 | 2026-04-23 16:30:38.420353 | TASK [gather-pod-logs : retrieve all kubernetes logs, current and previous (if they exist)] 2026-04-23 16:30:38.682519 | instance | E0423 16:30:38.682331 28249 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused 2026-04-23 16:30:38.683261 | instance | E0423 16:30:38.683151 28249 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused 2026-04-23 16:30:38.685108 | instance | E0423 16:30:38.685010 28249 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused 2026-04-23 16:30:38.685558 | instance | E0423 16:30:38.685464 28249 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused 2026-04-23 16:30:38.686914 | instance | E0423 
16:30:38.686818 28249 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused 2026-04-23 16:30:38.686957 | instance | The connection to the server localhost:8080 was refused - did you specify the right host or port? 2026-04-23 16:30:38.961459 | instance | ok: Runtime: 0:00:00.066862 2026-04-23 16:30:38.966435 | 2026-04-23 16:30:38.966500 | TASK [gather-pod-logs : Downloads pod logs to executor] 2026-04-23 16:30:39.475991 | instance | changed: 2026-04-23 16:30:39.476206 | instance | cd+++++++++ pod-logs/ 2026-04-23 16:30:39.476248 | instance | cd+++++++++ pod-logs/failed-pods/ 2026-04-23 16:30:39.488621 | 2026-04-23 16:30:39.488691 | TASK [gather-prom-metrics : creating directory for helm release descriptions] 2026-04-23 16:30:39.686835 | instance | changed 2026-04-23 16:30:39.693285 | 2026-04-23 16:30:39.693367 | TASK [gather-prom-metrics : Get metrics from exporter services in all namespaces] 2026-04-23 16:30:39.961261 | instance | E0423 16:30:39.961093 28295 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused 2026-04-23 16:30:39.962091 | instance | E0423 16:30:39.962010 28295 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused 2026-04-23 16:30:39.963215 | instance | E0423 16:30:39.963157 28295 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused 2026-04-23 16:30:39.963857 | instance | E0423 16:30:39.963784 28295 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused 2026-04-23 16:30:39.965379 | instance | E0423 16:30:39.965309 28295 memcache.go:265] couldn't get 
current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused 2026-04-23 16:30:39.965432 | instance | The connection to the server localhost:8080 was refused - did you specify the right host or port? 2026-04-23 16:30:40.228136 | instance | ok: Runtime: 0:00:00.064071 2026-04-23 16:30:40.234126 | 2026-04-23 16:30:40.234239 | TASK [gather-prom-metrics : Get ceph metrics from ceph-mgr] 2026-04-23 16:30:40.496286 | instance | E0423 16:30:40.496102 28321 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused 2026-04-23 16:30:40.497100 | instance | E0423 16:30:40.497005 28321 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused 2026-04-23 16:30:40.498982 | instance | E0423 16:30:40.498930 28321 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused 2026-04-23 16:30:40.499970 | instance | E0423 16:30:40.499518 28321 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused 2026-04-23 16:30:40.501807 | instance | E0423 16:30:40.501702 28321 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused 2026-04-23 16:30:40.501847 | instance | The connection to the server localhost:8080 was refused - did you specify the right host or port? 
2026-04-23 16:30:40.506660 | instance | ceph-mgr endpoints: 2026-04-23 16:30:40.769600 | instance | ok: Runtime: 0:00:00.069064 2026-04-23 16:30:40.778085 | 2026-04-23 16:30:40.778186 | TASK [gather-prom-metrics : Get metrics from fluentd pods] 2026-04-23 16:30:41.033698 | instance | E0423 16:30:41.033532 28349 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused 2026-04-23 16:30:41.034384 | instance | E0423 16:30:41.034311 28349 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused 2026-04-23 16:30:41.035894 | instance | E0423 16:30:41.035841 28349 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused 2026-04-23 16:30:41.036515 | instance | E0423 16:30:41.036476 28349 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused 2026-04-23 16:30:41.038427 | instance | E0423 16:30:41.038349 28349 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused 2026-04-23 16:30:41.038482 | instance | The connection to the server localhost:8080 was refused - did you specify the right host or port? 
2026-04-23 16:30:41.320086 | instance | ok: Runtime: 0:00:00.059443 2026-04-23 16:30:41.326343 | 2026-04-23 16:30:41.326443 | TASK [gather-prom-metrics : Downloads logs to executor] 2026-04-23 16:30:41.815057 | instance | changed: cd+++++++++ prometheus/ 2026-04-23 16:30:41.826951 | 2026-04-23 16:30:41.827075 | TASK [gather-selenium-data : creating directory for helm release descriptions] 2026-04-23 16:30:42.085633 | instance | changed 2026-04-23 16:30:42.091716 | 2026-04-23 16:30:42.091835 | TASK [gather-selenium-data : Get selenium data] 2026-04-23 16:30:42.293362 | instance | + cp '/tmp/artifacts/*' /tmp/logs/selenium/. 2026-04-23 16:30:42.294813 | instance | cp: cannot stat '/tmp/artifacts/*': No such file or directory 2026-04-23 16:30:42.630765 | instance | ERROR 2026-04-23 16:30:42.631043 | instance | { 2026-04-23 16:30:42.631137 | instance | "delta": "0:00:00.005592", 2026-04-23 16:30:42.631200 | instance | "end": "2026-04-23 16:30:42.295156", 2026-04-23 16:30:42.631257 | instance | "msg": "non-zero return code", 2026-04-23 16:30:42.631307 | instance | "rc": 1, 2026-04-23 16:30:42.631362 | instance | "start": "2026-04-23 16:30:42.289564" 2026-04-23 16:30:42.631414 | instance | } 2026-04-23 16:30:42.631477 | instance | ERROR: Ignoring Errors 2026-04-23 16:30:42.637391 | 2026-04-23 16:30:42.637460 | TASK [gather-selenium-data : Downloads logs to executor] 2026-04-23 16:30:43.658749 | instance | changed: cd+++++++++ selenium/ 2026-04-23 16:30:43.666152 | 2026-04-23 16:30:43.666214 | PLAY RECAP 2026-04-23 16:30:43.666263 | instance | ok: 23 changed: 23 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 1 2026-04-23 16:30:43.666290 | 2026-04-23 16:30:43.858536 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/vexxhost/atmosphere/test-playbooks/molecule/post.yml@main] 2026-04-23 16:30:43.866349 | POST-RUN START: [trusted : github.com/vexxhost/zuul-config/playbooks/base/post.yaml@main] 2026-04-23 16:30:44.525627 | 2026-04-23 16:30:44.525795 | PLAY [all] 
2026-04-23 16:30:44.537910 | 2026-04-23 16:30:44.537998 | TASK [fetch-output : Set log path for multiple nodes] 2026-04-23 16:30:44.593264 | instance | skipping: Conditional result was False 2026-04-23 16:30:44.605230 | 2026-04-23 16:30:44.605343 | TASK [fetch-output : Set log path for single node] 2026-04-23 16:30:44.650855 | instance | ok 2026-04-23 16:30:44.657907 | 2026-04-23 16:30:44.658014 | LOOP [fetch-output : Ensure local output dirs] 2026-04-23 16:30:45.079494 | instance -> localhost | ok: "/var/lib/zuul/builds/4443a534d0224a25859b36e43392f9eb/work/logs" 2026-04-23 16:30:45.302571 | instance -> localhost | changed: "/var/lib/zuul/builds/4443a534d0224a25859b36e43392f9eb/work/artifacts" 2026-04-23 16:30:45.601450 | instance -> localhost | changed: "/var/lib/zuul/builds/4443a534d0224a25859b36e43392f9eb/work/docs" 2026-04-23 16:30:45.619866 | 2026-04-23 16:30:45.620027 | LOOP [fetch-output : Collect logs, artifacts and docs] 2026-04-23 16:30:46.262791 | instance | changed: .d..t...... ./ 2026-04-23 16:30:46.263413 | instance | changed: All items complete 2026-04-23 16:30:46.263444 | 2026-04-23 16:30:46.724184 | instance | changed: .d..t...... ./ 2026-04-23 16:30:47.208549 | instance | changed: .d..t...... 
./ 2026-04-23 16:30:47.236216 | 2026-04-23 16:30:47.236393 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir] 2026-04-23 16:30:47.879768 | instance -> localhost | ok: Item: artifacts Runtime: 0:00:00.007590 2026-04-23 16:30:48.129607 | instance -> localhost | ok: Item: docs Runtime: 0:00:00.006695 2026-04-23 16:30:48.148268 | 2026-04-23 16:30:48.148700 | PLAY [all] 2026-04-23 16:30:48.157230 | 2026-04-23 16:30:48.157337 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes] 2026-04-23 16:30:48.578736 | instance | changed 2026-04-23 16:30:48.587051 | 2026-04-23 16:30:48.587128 | PLAY RECAP 2026-04-23 16:30:48.587194 | instance | ok: 5 changed: 4 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0 2026-04-23 16:30:48.587227 | 2026-04-23 16:30:48.735882 | POST-RUN END RESULT_NORMAL: [trusted : github.com/vexxhost/zuul-config/playbooks/base/post.yaml@main] 2026-04-23 16:30:48.745686 | POST-RUN START: [trusted : github.com/vexxhost/zuul-config/playbooks/base/post-logs.yaml@main] 2026-04-23 16:30:49.383803 | 2026-04-23 16:30:50.381141 | PLAY [localhost] 2026-04-23 16:30:50.398294 | 2026-04-23 16:30:50.398486 | TASK [Generate Zuul manifest] 2026-04-23 16:30:50.418251 | localhost | ok 2026-04-23 16:30:50.437018 | 2026-04-23 16:30:50.437116 | TASK [generate-zuul-manifest : Generate Zuul manifest] 2026-04-23 16:30:50.849395 | localhost | changed 2026-04-23 16:30:50.859771 | 2026-04-23 16:30:50.859844 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul] 2026-04-23 16:30:50.891468 | localhost | ok 2026-04-23 16:30:50.902076 | 2026-04-23 16:30:50.902254 | TASK [Upload logs] 2026-04-23 16:30:50.922277 | localhost | ok 2026-04-23 16:30:51.027844 | 2026-04-23 16:30:51.027974 | TASK [Set zuul-log-path fact] 2026-04-23 16:30:51.050002 | localhost | ok 2026-04-23 16:30:51.061759 | 2026-04-23 16:30:51.061846 | TASK [set-zuul-log-path-fact : Set log path for a build] 2026-04-23 16:30:51.093219 | localhost | ok 2026-04-23 
16:30:51.102284 | 2026-04-23 16:30:51.102379 | TASK [upload-logs : Create log directories] 2026-04-23 16:30:51.481826 | localhost | changed 2026-04-23 16:30:51.487482 | 2026-04-23 16:30:51.487576 | TASK [upload-logs : Ensure logs are readable before uploading] 2026-04-23 16:30:51.906728 | localhost -> localhost | ok: Runtime: 0:00:00.007351 2026-04-23 16:30:51.913383 | 2026-04-23 16:30:51.913472 | TASK [upload-logs : Upload logs to log server] 2026-04-23 16:31:00.610853 | localhost | Output suppressed because no_log was given 2026-04-23 16:31:00.616800 | 2026-04-23 16:31:00.616913 | LOOP [upload-logs : Compress console log and json output] 2026-04-23 16:31:00.667275 | localhost | skipping: Conditional result was False 2026-04-23 16:31:01.026911 | localhost | skipping: Conditional result was False 2026-04-23 16:31:01.036187 | 2026-04-23 16:31:01.036333 | LOOP [upload-logs : Upload compressed console log and json output] 2026-04-23 16:31:01.079196 | localhost | skipping: Conditional result was False 2026-04-23 16:31:01.079638 | 2026-04-23 16:31:01.083458 | localhost | skipping: Conditional result was False 2026-04-23 16:31:01.098448 | 2026-04-23 16:31:01.098548 | LOOP [upload-logs : Upload console log and json output]