2026-04-23 16:25:25.240545 | Job console starting
2026-04-23 16:25:25.251113 | Updating git repos
2026-04-23 16:25:25.332948 | Cloning repos into workspace
2026-04-23 16:25:25.566571 | Restoring repo states
2026-04-23 16:25:25.590298 | Merging changes
2026-04-23 16:25:26.506489 | Checking out repos
2026-04-23 16:25:26.693813 | Preparing playbooks
2026-04-23 16:25:30.905021 | Running Ansible setup
2026-04-23 16:25:35.956972 | PRE-RUN START: [trusted : github.com/vexxhost/zuul-config/playbooks/base/pre.yaml@main]
2026-04-23 16:25:36.544516 |
2026-04-23 16:25:36.609950 | PLAY [localhost]
2026-04-23 16:25:36.626046 |
2026-04-23 16:25:36.626158 | TASK [Gathering Facts]
2026-04-23 16:25:37.798405 | localhost | ok
2026-04-23 16:25:37.805104 |
2026-04-23 16:25:37.805209 | TASK [Setup log path fact]
2026-04-23 16:25:37.822205 | localhost | ok
2026-04-23 16:25:37.832679 |
2026-04-23 16:25:37.832805 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-04-23 16:25:37.869439 | localhost | ok
2026-04-23 16:25:37.877937 |
2026-04-23 16:25:37.878058 | TASK [emit-job-header : Print job information]
2026-04-23 16:25:37.917521 | # Job Information
2026-04-23 16:25:37.950660 | Ansible Version: 2.16.16
2026-04-23 16:25:37.950930 | Job: atmosphere-molecule-csi-rbd
2026-04-23 16:25:37.951015 | Pipeline: check
2026-04-23 16:25:37.951085 | Executor: 0a8996d2b663
2026-04-23 16:25:37.951154 | Triggered by: https://github.com/vexxhost/atmosphere/pull/3834
2026-04-23 16:25:37.951232 | Event ID: ab227be0-3f30-11f1-863a-4a4ba5131e0a
2026-04-23 16:25:37.956446 |
2026-04-23 16:25:37.956556 | LOOP [emit-job-header : Print node information]
2026-04-23 16:25:38.075777 | localhost | ok:
2026-04-23 16:25:38.146891 | localhost | # Node Information
2026-04-23 16:25:38.147091 | localhost | Inventory Hostname: instance
2026-04-23 16:25:38.147129 | localhost | Hostname: np0000169823
2026-04-23 16:25:38.147164 | localhost | Username: zuul
2026-04-23 16:25:38.147197 | localhost | Distro: Ubuntu 22.04
2026-04-23 16:25:38.147224 | localhost | Provider: yul1
2026-04-23 16:25:38.147248 | localhost | Region: ca-ymq-1
2026-04-23 16:25:38.147272 | localhost | Label: ubuntu-jammy
2026-04-23 16:25:38.147295 | localhost | Product Name: OpenStack Nova
2026-04-23 16:25:38.147320 | localhost | Interface IP: 199.19.213.206
2026-04-23 16:25:38.164160 |
2026-04-23 16:25:38.164362 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-04-23 16:25:38.666134 | localhost -> localhost | changed
2026-04-23 16:25:38.674883 |
2026-04-23 16:25:38.675010 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-04-23 16:25:40.001192 | localhost -> localhost | changed
2026-04-23 16:25:40.130817 |
2026-04-23 16:25:40.130937 | PLAY [all]
2026-04-23 16:25:40.148867 |
2026-04-23 16:25:40.149832 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-04-23 16:25:40.558243 | instance -> localhost | ok
2026-04-23 16:25:40.571637 |
2026-04-23 16:25:40.571997 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-04-23 16:25:40.625641 | instance | ok
2026-04-23 16:25:40.647720 | instance | included: /var/lib/zuul/builds/3c6807d353184035be6864b5041de007/trusted/project_1/opendev.org/zuul/zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-04-23 16:25:40.863381 |
2026-04-23 16:25:40.863522 | TASK [add-build-sshkey : Create Temp SSH key]
2026-04-23 16:25:41.757083 | instance -> localhost | Generating public/private rsa key pair.
2026-04-23 16:25:41.757277 | instance -> localhost | Your identification has been saved in /var/lib/zuul/builds/3c6807d353184035be6864b5041de007/work/3c6807d353184035be6864b5041de007_id_rsa
2026-04-23 16:25:41.757315 | instance -> localhost | Your public key has been saved in /var/lib/zuul/builds/3c6807d353184035be6864b5041de007/work/3c6807d353184035be6864b5041de007_id_rsa.pub
2026-04-23 16:25:41.757345 | instance -> localhost | The key fingerprint is:
2026-04-23 16:25:41.757373 | instance -> localhost | SHA256:o9GN1tz4bzGl6G//4CeBNoX3uzecBwhe7CzvqZSpCao zuul-build-sshkey
2026-04-23 16:25:41.757426 | instance -> localhost | The key's randomart image is:
2026-04-23 16:25:41.757454 | instance -> localhost | +---[RSA 3072]----+
2026-04-23 16:25:41.757483 | instance -> localhost | | |
2026-04-23 16:25:41.757511 | instance -> localhost | | |
2026-04-23 16:25:41.757536 | instance -> localhost | | . . |
2026-04-23 16:25:41.757562 | instance -> localhost | | . =.o+ o .|
2026-04-23 16:25:41.757586 | instance -> localhost | | . S.==.* + |
2026-04-23 16:25:41.757611 | instance -> localhost | | + .o+O * .|
2026-04-23 16:25:41.757636 | instance -> localhost | | o +=..o=o|
2026-04-23 16:25:41.757661 | instance -> localhost | | . . + o++*+|
2026-04-23 16:25:41.757688 | instance -> localhost | | E.. o .oo++=O|
2026-04-23 16:25:41.757715 | instance -> localhost | +----[SHA256]-----+
2026-04-23 16:25:41.757780 | instance -> localhost | ok: Runtime: 0:00:00.296158
2026-04-23 16:25:41.765029 |
2026-04-23 16:25:41.765108 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-04-23 16:25:41.801597 | instance | ok
2026-04-23 16:25:41.869729 | instance | included: /var/lib/zuul/builds/3c6807d353184035be6864b5041de007/trusted/project_1/opendev.org/zuul/zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-04-23 16:25:41.878441 |
2026-04-23 16:25:41.878533 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-04-23 16:25:41.904036 | instance | skipping: Conditional result was False
2026-04-23 16:25:41.917863 |
2026-04-23 16:25:42.005274 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-04-23 16:25:42.525440 | instance | changed
2026-04-23 16:25:42.632807 |
2026-04-23 16:25:42.633033 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-04-23 16:25:42.830261 | instance | ok
2026-04-23 16:25:43.015574 |
2026-04-23 16:25:43.015742 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-04-23 16:25:43.586795 | instance | changed
2026-04-23 16:25:43.731327 |
2026-04-23 16:25:43.731481 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-04-23 16:25:44.218018 | instance | changed
2026-04-23 16:25:44.301356 |
2026-04-23 16:25:44.301499 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-04-23 16:25:44.326138 | instance | skipping: Conditional result was False
2026-04-23 16:25:44.335543 |
2026-04-23 16:25:44.335651 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-04-23 16:25:45.189053 | instance -> localhost | changed
2026-04-23 16:25:45.207165 |
2026-04-23 16:25:45.207270 | TASK [add-build-sshkey : Add back temp key]
2026-04-23 16:25:45.912294 | instance -> localhost | Identity added: /var/lib/zuul/builds/3c6807d353184035be6864b5041de007/work/3c6807d353184035be6864b5041de007_id_rsa (zuul-build-sshkey)
2026-04-23 16:25:45.912549 | instance -> localhost | ok: Runtime: 0:00:00.015792
2026-04-23 16:25:45.922806 |
2026-04-23 16:25:45.922942 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-04-23 16:25:46.275405 | instance | ok
2026-04-23 16:25:46.283664 |
2026-04-23 16:25:46.283792 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-04-23 16:25:46.308593 | instance | skipping: Conditional result was False
2026-04-23 16:25:46.328014 |
2026-04-23 16:25:46.509553 | TASK [prepare-workspace : Start zuul_console daemon.]
2026-04-23 16:25:46.846635 | instance | ok
2026-04-23 16:25:46.889152 |
2026-04-23 16:25:46.889296 | TASK [prepare-workspace : Synchronize src repos to workspace directory.]
2026-04-23 16:25:49.328644 | instance | Output suppressed because no_log was given
2026-04-23 16:25:49.390176 |
2026-04-23 16:25:49.390308 | LOOP [ensure-output-dirs : Empty Zuul Output directories by removing them]
2026-04-23 16:25:49.612448 | instance | ok: "logs"
2026-04-23 16:25:49.732114 | instance | ok: All items complete
2026-04-23 16:25:49.732499 |
2026-04-23 16:25:49.775427 | instance | ok: "artifacts"
2026-04-23 16:25:49.934604 | instance | ok: "docs"
2026-04-23 16:25:50.063822 |
2026-04-23 16:25:50.064016 | LOOP [ensure-output-dirs : Ensure Zuul Output directories exist]
2026-04-23 16:25:50.270298 | instance | changed: "logs"
2026-04-23 16:25:50.430912 | instance | changed: "artifacts"
2026-04-23 16:25:50.607420 | instance | changed: "docs"
2026-04-23 16:25:50.627064 |
2026-04-23 16:25:50.699252 | PLAY RECAP
2026-04-23 16:25:50.699875 | instance | ok: 15 changed: 8 unreachable: 0 failed: 0 skipped: 3 rescued: 0 ignored: 0
2026-04-23 16:25:50.700344 | localhost | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2026-04-23 16:25:50.700429 |
2026-04-23 16:25:50.877029 | PRE-RUN END RESULT_NORMAL: [trusted : github.com/vexxhost/zuul-config/playbooks/base/pre.yaml@main]
2026-04-23 16:25:50.915951 | PRE-RUN START: [untrusted : github.com/vexxhost/zuul-jobs/playbooks/molecule/pre.yaml@main]
2026-04-23 16:25:51.580707 |
2026-04-23 16:25:51.603438 | PLAY [all]
2026-04-23 16:25:51.622142 |
2026-04-23 16:25:51.622262 | TASK [setup-uv : Extract archive]
2026-04-23 16:25:54.033080 | instance | changed
2026-04-23 16:25:54.068252 |
2026-04-23 16:25:54.068425 | TASK [setup-uv : Print version]
2026-04-23 16:25:54.775227 | instance | uv 0.8.13
2026-04-23 16:25:54.687393 | instance | ok: Runtime: 0:00:00.010160
2026-04-23 16:25:54.696064 |
2026-04-23 16:25:54.696124 | PLAY RECAP
2026-04-23 16:25:54.696172 | instance | ok: 2 changed: 2 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2026-04-23 16:25:54.696197 |
2026-04-23 16:25:54.892353 | PRE-RUN END RESULT_NORMAL: [untrusted : github.com/vexxhost/zuul-jobs/playbooks/molecule/pre.yaml@main]
2026-04-23 16:25:54.897121 | PRE-RUN START: [untrusted : github.com/vexxhost/atmosphere/test-playbooks/molecule/pre.yml@main]
2026-04-23 16:25:55.511576 |
2026-04-23 16:25:55.549101 | PLAY [all]
2026-04-23 16:25:55.563521 |
2026-04-23 16:25:55.563653 | TASK [Install "jq" for log collection]
2026-04-23 16:26:05.880080 | instance | changed
2026-04-23 16:26:05.888080 |
2026-04-23 16:26:05.888188 | TASK [Install pip3 for Python package management]
2026-04-23 16:26:10.156877 | instance | changed
2026-04-23 16:26:10.162279 |
2026-04-23 16:26:10.162396 | TASK [Install Python "kubernetes" library for kubernetes.core modules]
2026-04-23 16:26:13.591052 | instance | changed
2026-04-23 16:26:13.594858 |
2026-04-23 16:26:13.594916 | PLAY [all]
2026-04-23 16:26:13.603062 |
2026-04-23 16:26:13.603134 | TASK [ensure-go : Check installed go version]
2026-04-23 16:26:14.140581 | instance | ok: ERROR (ignored)
2026-04-23 16:26:14.140881 | instance | {
2026-04-23 16:26:14.140914 | instance | "failed_when_result": false,
2026-04-23 16:26:14.140936 | instance | "msg": "[Errno 2] No such file or directory: b'go'",
2026-04-23 16:26:14.140990 | instance | "rc": 2
2026-04-23 16:26:14.141017 | instance | }
2026-04-23 16:26:14.146282 |
2026-04-23 16:26:14.146367 | TASK [ensure-go : Skip if correct version of go is installed]
2026-04-23 16:26:14.206864 | instance | ok
2026-04-23 16:26:15.414318 | instance | included: /var/lib/zuul/builds/3c6807d353184035be6864b5041de007/untrusted/project_2/opendev.org/zuul/zuul-jobs/roles/ensure-go/tasks/install-go.yaml
2026-04-23 16:26:15.421931 |
2026-04-23 16:26:15.422029 | TASK [ensure-go : Create temp directory]
2026-04-23 16:26:15.802397 | instance | changed
2026-04-23 16:26:15.821961 |
2026-04-23 16:26:15.822100 | TASK [ensure-go : Get archive checksum]
2026-04-23 16:26:17.311556 | instance | ok: OK (64 bytes)
2026-04-23 16:26:17.634535 |
2026-04-23 16:26:17.634674 | TASK [ensure-go : Download go archive]
2026-04-23 16:26:19.245893 | instance | changed: OK (78559214 bytes)
2026-04-23 16:26:19.358685 |
2026-04-23 16:26:19.358819 | TASK [ensure-go : Install go]
2026-04-23 16:26:25.325435 | instance | changed
2026-04-23 16:26:25.338244 |
2026-04-23 16:26:25.338382 | PLAY [all]
2026-04-23 16:26:25.345267 |
2026-04-23 16:26:25.345452 | TASK [Copy inventory file for Zuul]
2026-04-23 16:26:26.118456 | instance | changed
2026-04-23 16:26:26.124293 |
2026-04-23 16:26:26.124359 | TASK [Switch "ansible_host" to private IP]
2026-04-23 16:26:26.455007 | instance | changed: 1 replacements made
2026-04-23 16:26:26.460234 |
2026-04-23 16:26:26.460310 | TASK [Run molecule prepare]
2026-04-23 16:26:26.714934 | instance | Using CPython 3.10.12 interpreter at: /usr/bin/python3
2026-04-23 16:26:26.715128 | instance | Creating virtual environment at: .venv
2026-04-23 16:26:26.736680 | instance | Building atmosphere @ file:///home/zuul/src/github.com/vexxhost/atmosphere
2026-04-23 16:26:26.763776 | instance | Downloading netaddr (2.2MiB)
2026-04-23 16:26:26.764031 | instance | Downloading ansible-core (2.1MiB)
2026-04-23 16:26:26.764385 | instance | Downloading pygments (1.2MiB)
2026-04-23 16:26:26.764663 | instance | Downloading kubernetes (1.9MiB)
2026-04-23 16:26:26.764862 | instance | Downloading setuptools (1.1MiB)
2026-04-23 16:26:26.765075 | instance | Downloading openstacksdk (1.7MiB)
2026-04-23 16:26:26.765272 | instance | Downloading cryptography (4.2MiB)
2026-04-23 16:26:26.765470 | instance | Downloading pydantic-core (2.0MiB)
2026-04-23 16:26:26.870182 | instance | Downloading rjsonnet (1.2MiB)
2026-04-23 16:26:27.070406 | instance | Building pyperclip==1.9.0
2026-04-23 16:26:27.180568 | instance | Downloading pygments
2026-04-23 16:26:27.215509 | instance | Downloading setuptools
2026-04-23 16:26:27.385864 | instance | Downloading rjsonnet
2026-04-23 16:26:27.965019 | instance | Downloading openstacksdk
2026-04-23 16:26:27.988057 | instance | Downloading kubernetes
2026-04-23 16:26:27.990885 | instance | Downloading pydantic-core
2026-04-23 16:26:28.041327 | instance | Downloading netaddr
2026-04-23 16:26:28.044731 | instance | Downloading ansible-core
2026-04-23 16:26:28.051976 | instance | Downloading cryptography
2026-04-23 16:26:28.560461 | instance | Built pyperclip==1.9.0
2026-04-23 16:26:28.679904 | instance | Built atmosphere @ file:///home/zuul/src/github.com/vexxhost/atmosphere
2026-04-23 16:26:28.722776 | instance | Installed 83 packages in 41ms
2026-04-23 16:26:29.333892 | instance | WARNING Molecule scenarios should migrate to 'extensions/molecule'
2026-04-23 16:26:29.982221 | instance | INFO [csi > discovery] scenario test matrix: prepare
2026-04-23 16:26:29.982276 | instance | INFO [csi > prerun] Performing prerun with role_name_check=0...
2026-04-23 16:27:17.517517 | instance | INFO [csi > prepare] Executing
2026-04-23 16:27:18.438082 | instance |
2026-04-23 16:27:18.438553 | instance | PLAY [Prepare] *****************************************************************
2026-04-23 16:27:18.438823 | instance |
2026-04-23 16:27:18.439106 | instance | TASK [Gathering Facts] *********************************************************
2026-04-23 16:27:18.439387 | instance | Thursday 23 April 2026 16:27:18 +0000 (0:00:00.026) 0:00:00.026 ********
2026-04-23 16:27:20.545547 | instance | [WARNING]: Platform linux on host instance is using the discovered Python
2026-04-23 16:27:20.545815 | instance | interpreter at /usr/bin/python3.10, but future installation of another Python
2026-04-23 16:27:20.546128 | instance | interpreter could change the meaning of that path. See
2026-04-23 16:27:20.546397 | instance | https://docs.ansible.com/ansible-
2026-04-23 16:27:20.546666 | instance | core/2.17/reference_appendices/interpreter_discovery.html for more information.
2026-04-23 16:27:20.554416 | instance | ok: [instance]
2026-04-23 16:27:20.554663 | instance |
2026-04-23 16:27:20.554933 | instance | TASK [Configure short hostname] ************************************************
2026-04-23 16:27:20.555200 | instance | Thursday 23 April 2026 16:27:20 +0000 (0:00:02.117) 0:00:02.143 ********
2026-04-23 16:27:21.218289 | instance | changed: [instance]
2026-04-23 16:27:21.218504 | instance |
2026-04-23 16:27:21.218784 | instance | TASK [Ensure hostname inside hosts file] ***************************************
2026-04-23 16:27:21.219090 | instance | Thursday 23 April 2026 16:27:21 +0000 (0:00:00.663) 0:00:02.806 ********
2026-04-23 16:27:21.488247 | instance | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2026-04-23 16:27:21.488528 | instance | with a mode of 0700, this may cause issues when running as another user. To
2026-04-23 16:27:21.488815 | instance | avoid this, create the remote_tmp dir with the correct permissions manually
2026-04-23 16:27:21.498351 | instance | changed: [instance]
2026-04-23 16:27:21.498596 | instance |
2026-04-23 16:27:21.498881 | instance | TASK [Purge "snapd" package] ***************************************************
2026-04-23 16:27:21.499153 | instance | Thursday 23 April 2026 16:27:21 +0000 (0:00:00.280) 0:00:03.087 ********
2026-04-23 16:27:22.393471 | instance | ok: [instance]
2026-04-23 16:27:22.393755 | instance |
2026-04-23 16:27:22.394037 | instance | PLAY [Create devices for Ceph] *************************************************
2026-04-23 16:27:22.394309 | instance |
2026-04-23 16:27:22.394582 | instance | TASK [Gathering Facts] *********************************************************
2026-04-23 16:27:22.394866 | instance | Thursday 23 April 2026 16:27:22 +0000 (0:00:00.894) 0:00:03.981 ********
2026-04-23 16:27:23.121346 | instance | ok: [instance]
2026-04-23 16:27:23.121622 | instance |
2026-04-23 16:27:23.121923 | instance | TASK [Install depedencies] *****************************************************
2026-04-23 16:27:23.122234 | instance | Thursday 23 April 2026 16:27:23 +0000 (0:00:00.728) 0:00:04.710 ********
2026-04-23 16:27:46.827112 | instance | changed: [instance]
2026-04-23 16:27:46.827412 | instance |
2026-04-23 16:27:46.827775 | instance | TASK [Start up service] ********************************************************
2026-04-23 16:27:46.828164 | instance | Thursday 23 April 2026 16:27:46 +0000 (0:00:23.705) 0:00:28.415 ********
2026-04-23 16:27:47.345919 | instance | ok: [instance]
2026-04-23 16:27:47.346173 | instance |
2026-04-23 16:27:47.346463 | instance | TASK [Generate lvm.conf] *******************************************************
2026-04-23 16:27:47.346751 | instance | Thursday 23 April 2026 16:27:47 +0000 (0:00:00.518) 0:00:28.934 ********
2026-04-23 16:27:47.632617 | instance | ok: [instance]
2026-04-23 16:27:47.632874 | instance |
2026-04-23 16:27:47.633167 | instance | TASK [Write /etc/lvm/lvm.conf] *************************************************
2026-04-23 16:27:47.633497 | instance | Thursday 23 April 2026 16:27:47 +0000 (0:00:00.286) 0:00:29.221 ********
2026-04-23 16:27:48.233775 | instance | changed: [instance]
2026-04-23 16:27:48.233839 | instance |
2026-04-23 16:27:48.233986 | instance | TASK [Get list of all loopback devices] ****************************************
2026-04-23 16:27:48.234116 | instance | Thursday 23 April 2026 16:27:48 +0000 (0:00:00.601) 0:00:29.823 ********
2026-04-23 16:27:48.411022 | instance | ok: [instance]
2026-04-23 16:27:48.411172 | instance |
2026-04-23 16:27:48.411356 | instance | TASK [Fail if there is any existing loopback devices] **************************
2026-04-23 16:27:48.411534 | instance | Thursday 23 April 2026 16:27:48 +0000 (0:00:00.176) 0:00:30.000 ********
2026-04-23 16:27:48.434936 | instance | skipping: [instance]
2026-04-23 16:27:48.434956 | instance |
2026-04-23 16:27:48.434963 | instance | TASK [Create devices for Ceph] *************************************************
2026-04-23 16:27:48.434969 | instance | Thursday 23 April 2026 16:27:48 +0000 (0:00:00.023) 0:00:30.023 ********
2026-04-23 16:27:48.968344 | instance | changed: [instance] => (item=osd0)
2026-04-23 16:27:48.968376 | instance | changed: [instance] => (item=osd1)
2026-04-23 16:27:48.968381 | instance | changed: [instance] => (item=osd2)
2026-04-23 16:27:48.968385 | instance |
2026-04-23 16:27:48.968542 | instance | TASK [Set permissions on loopback devices] *************************************
2026-04-23 16:27:48.968812 | instance | Thursday 23 April 2026 16:27:48 +0000 (0:00:00.532) 0:00:30.556 ********
2026-04-23 16:27:49.547145 | instance | changed: [instance] => (item=osd0)
2026-04-23 16:27:49.547410 | instance | changed: [instance] => (item=osd1)
2026-04-23 16:27:49.547678 | instance | changed: [instance] => (item=osd2)
2026-04-23 16:27:49.547939 | instance |
2026-04-23 16:27:49.548217 | instance | TASK [Start loop devices] ******************************************************
2026-04-23 16:27:49.548497 | instance | Thursday 23 April 2026 16:27:49 +0000 (0:00:00.579) 0:00:31.135 ********
2026-04-23 16:27:50.198790 | instance | changed: [instance] => (item=osd0)
2026-04-23 16:27:50.199090 | instance | changed: [instance] => (item=osd1)
2026-04-23 16:27:50.199397 | instance | changed: [instance] => (item=osd2)
2026-04-23 16:27:50.199744 | instance |
2026-04-23 16:27:50.200039 | instance | TASK [Create a volume group for each loop device] ******************************
2026-04-23 16:27:50.200324 | instance | Thursday 23 April 2026 16:27:50 +0000 (0:00:00.651) 0:00:31.787 ********
2026-04-23 16:27:53.273107 | instance | changed: [instance] => (item=osd0)
2026-04-23 16:27:53.273268 | instance | changed: [instance] => (item=osd1)
2026-04-23 16:27:53.273541 | instance | changed: [instance] => (item=osd2)
2026-04-23 16:27:53.273719 | instance |
2026-04-23 16:27:53.273900 | instance | TASK [Create a logical volume for each loop device] ****************************
2026-04-23 16:27:53.274070 | instance | Thursday 23 April 2026 16:27:53 +0000 (0:00:03.074) 0:00:34.862 ********
2026-04-23 16:27:55.028607 | instance | changed: [instance] => (item=ceph-instance-osd0)
2026-04-23 16:27:55.028860 | instance | changed: [instance] => (item=ceph-instance-osd1)
2026-04-23 16:27:55.029123 | instance | changed: [instance] => (item=ceph-instance-osd2)
2026-04-23 16:27:55.029413 | instance |
2026-04-23 16:27:55.029670 | instance | PLAY RECAP *********************************************************************
2026-04-23 16:27:55.029987 | instance | instance : ok=15 changed=9 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
2026-04-23 16:27:55.030237 | instance |
2026-04-23 16:27:55.030524 | instance | Thursday 23 April 2026 16:27:55 +0000 (0:00:01.755) 0:00:36.618 ********
2026-04-23 16:27:55.030773 | instance | ===============================================================================
2026-04-23 16:27:55.031036 | instance | Install depedencies ---------------------------------------------------- 23.71s
2026-04-23 16:27:55.031302 | instance | Create a volume group for each loop device ------------------------------ 3.07s
2026-04-23 16:27:55.031569 | instance | Gathering Facts --------------------------------------------------------- 2.12s
2026-04-23 16:27:55.031834 | instance | Create a logical volume for each loop device ---------------------------- 1.76s
2026-04-23 16:27:55.032093 | instance | Purge "snapd" package --------------------------------------------------- 0.89s
2026-04-23 16:27:55.032352 | instance | Gathering Facts --------------------------------------------------------- 0.73s
2026-04-23 16:27:55.032609 | instance | Configure short hostname ------------------------------------------------ 0.66s
2026-04-23 16:27:55.032871 | instance | Start loop devices ------------------------------------------------------ 0.65s
2026-04-23 16:27:55.033128 | instance | Write /etc/lvm/lvm.conf ------------------------------------------------- 0.60s
2026-04-23 16:27:55.033421 | instance | Set permissions on loopback devices ------------------------------------- 0.58s
2026-04-23 16:27:55.033685 | instance | Create devices for Ceph ------------------------------------------------- 0.53s
2026-04-23 16:27:55.033963 | instance | Start up service -------------------------------------------------------- 0.52s
2026-04-23 16:27:55.034219 | instance | Generate lvm.conf ------------------------------------------------------- 0.29s
2026-04-23 16:27:55.034481 | instance | Ensure hostname inside hosts file --------------------------------------- 0.28s
2026-04-23 16:27:55.034735 | instance | Get list of all loopback devices ---------------------------------------- 0.18s
2026-04-23 16:27:55.035002 | instance | Fail if there is any existing loopback devices -------------------------- 0.02s
2026-04-23 16:27:55.099295 | instance | INFO [csi > prepare] Executed: Successful
2026-04-23 16:27:55.099858 | instance | INFO Molecule executed 1 scenario (1 successful)
2026-04-23 16:27:55.612545 | instance | ok: Runtime: 0:01:28.523682
2026-04-23 16:27:55.615235 |
2026-04-23 16:27:55.615319 | PLAY RECAP
2026-04-23 16:27:55.615391 | instance | ok: 12 changed: 9 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2026-04-23 16:27:55.615422 |
2026-04-23 16:27:55.750562 | PRE-RUN END RESULT_NORMAL: [untrusted : github.com/vexxhost/atmosphere/test-playbooks/molecule/pre.yml@main]
2026-04-23 16:27:55.755153 | RUN START: [untrusted : github.com/vexxhost/atmosphere/molecule/csi/converge.yml@main]
2026-04-23 16:27:56.354957 |
2026-04-23 16:27:56.355127 | PLAY [all]
2026-04-23 16:27:56.367404 |
2026-04-23 16:27:56.367554 | TASK [Build atmosphere binary]
2026-04-23 16:27:56.747217 | instance | go: downloading github.com/spf13/cobra v1.9.1
2026-04-23 16:27:56.750679 | instance | go: downloading golang.org/x/sync v0.18.0
2026-04-23 16:27:57.139678 | instance | go: downloading github.com/spf13/pflag v1.0.7
2026-04-23 16:28:03.413671 | instance | ok: Runtime: 0:00:06.578298
2026-04-23 16:28:03.421113 |
2026-04-23 16:28:03.421217 | TASK [Deploy with parallel orchestrator]
2026-04-23 16:28:03.628542 | instance | ==> Multi-tag mode: ceph, kubernetes, csi
2026-04-23 16:28:03.628733 | instance | ==> Running preflight checks
2026-04-23 16:28:04.098241 | instance | [preflight]
2026-04-23 16:28:04.098290 | instance | [preflight] PLAY [Preflight checks] ********************************************************
2026-04-23 16:28:04.098306 | instance | [preflight]
2026-04-23 16:28:04.098318 | instance | [preflight] TASK [Fail if atmosphere_ceph_enabled is set] **********************************
2026-04-23 16:28:04.119013 | instance | [preflight] skipping: [instance]
2026-04-23 16:28:04.119043 | instance | [preflight]
2026-04-23 16:28:04.119054 | instance | [preflight] PLAY RECAP *********************************************************************
2026-04-23 16:28:04.119066 | instance | [preflight] instance : ok=0 changed=0 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
2026-04-23 16:28:04.119077 | instance | [preflight]
2026-04-23 16:28:04.175298 | instance | ==> Preflight checks passed
2026-04-23 16:28:04.175485 | instance | ==> Starting parallel deployment (subgraph)
2026-04-23 16:28:04.175526 | instance | ==> [kubernetes] Starting deployment
2026-04-23 16:28:04.175697 | instance | ==> [ceph] Starting deployment
2026-04-23 16:28:04.925075 | instance | [ceph/ceph]
2026-04-23 16:28:04.925156 | instance | [ceph/ceph] PLAY [all] *********************************************************************
2026-04-23 16:28:04.925171 | instance | [ceph/ceph]
2026-04-23 16:28:04.925183 | instance | [ceph/ceph] TASK [Gathering Facts] *********************************************************
2026-04-23 16:28:04.961473 | instance | [kubernetes/kubernetes]
2026-04-23 16:28:04.961529 | instance | [kubernetes/kubernetes] PLAY [all] *********************************************************************
2026-04-23 16:28:04.961547 | instance | [kubernetes/kubernetes]
2026-04-23 16:28:04.961557 | instance | [kubernetes/kubernetes] TASK [Gathering Facts] *********************************************************
2026-04-23 16:28:06.380466 | instance | [ceph/ceph] [WARNING]: Platform linux on host instance is using the discovered Python
2026-04-23 16:28:06.380591 | instance | [ceph/ceph] interpreter at /usr/bin/python3.10, but future installation of another Python
2026-04-23 16:28:06.380598 | instance | [ceph/ceph] interpreter could change the meaning of that path. See
2026-04-23 16:28:06.380606 | instance | [ceph/ceph] https://docs.ansible.com/ansible-
2026-04-23 16:28:06.380610 | instance | [ceph/ceph] core/2.17/reference_appendices/interpreter_discovery.html for more information.
2026-04-23 16:28:06.388081 | instance | [ceph/ceph] ok: [instance] 2026-04-23 16:28:06.388112 | instance | [ceph/ceph] 2026-04-23 16:28:06.388123 | instance | [ceph/ceph] TASK [Fail if atmosphere_ceph_enabled is set] ********************************** 2026-04-23 16:28:06.423606 | instance | [kubernetes/kubernetes] [WARNING]: Platform linux on host instance is using the discovered Python 2026-04-23 16:28:06.423644 | instance | [kubernetes/kubernetes] interpreter at /usr/bin/python3.10, but future installation of another Python 2026-04-23 16:28:06.423656 | instance | [kubernetes/kubernetes] interpreter could change the meaning of that path. See 2026-04-23 16:28:06.423666 | instance | [kubernetes/kubernetes] https://docs.ansible.com/ansible- 2026-04-23 16:28:06.423675 | instance | [kubernetes/kubernetes] core/2.17/reference_appendices/interpreter_discovery.html for more information. 2026-04-23 16:28:06.425928 | instance | [ceph/ceph] skipping: [instance] 2026-04-23 16:28:06.425950 | instance | [ceph/ceph] 2026-04-23 16:28:06.425959 | instance | [ceph/ceph] TASK [Set a fact with the "atmosphere_images" for other plays] ***************** 2026-04-23 16:28:06.441738 | instance | [kubernetes/kubernetes] ok: [instance] 2026-04-23 16:28:06.441768 | instance | [kubernetes/kubernetes] 2026-04-23 16:28:06.441778 | instance | [kubernetes/kubernetes] TASK [vexxhost.atmosphere.sysctl : Configure sysctl values] ******************** 2026-04-23 16:28:06.604025 | instance | [ceph/ceph] ok: [instance] 2026-04-23 16:28:06.604082 | instance | [ceph/ceph] 2026-04-23 16:28:06.604094 | instance | [ceph/ceph] PLAY [Deploy Ceph monitors & managers] ***************************************** 2026-04-23 16:28:06.604104 | instance | [ceph/ceph] 2026-04-23 16:28:06.604113 | instance | [ceph/ceph] TASK [Gathering Facts] ********************************************************* 2026-04-23 16:28:07.578908 | instance | [ceph/ceph] ok: [instance] 2026-04-23 16:28:07.578993 | instance | [ceph/ceph] 
2026-04-23 16:28:07.579006 | instance | [ceph/ceph] TASK [vexxhost.containers.forget_package : Forget package] *********************
2026-04-23 16:28:08.016736 | instance | [ceph/ceph] ok: [instance]
2026-04-23 16:28:08.016808 | instance | [ceph/ceph]
2026-04-23 16:28:08.016826 | instance | [ceph/ceph] TASK [vexxhost.containers.package : Update state for tar] **********************
2026-04-23 16:28:08.057338 | instance | [ceph/ceph] skipping: [instance]
2026-04-23 16:28:08.057407 | instance | [ceph/ceph]
2026-04-23 16:28:08.057419 | instance | [ceph/ceph] TASK [vexxhost.containers.directory : Create directory (/var/lib/downloads)] ***
2026-04-23 16:28:08.476992 | instance | [ceph/ceph] changed: [instance]
2026-04-23 16:28:08.477058 | instance | [ceph/ceph]
2026-04-23 16:28:08.477071 | instance | [ceph/ceph] TASK [vexxhost.containers.download_artifact : Starting download of file] *******
2026-04-23 16:28:08.543422 | instance | [ceph/ceph] ok: [instance] => {
2026-04-23 16:28:08.543473 | instance | [ceph/ceph]     "msg": "https://github.com/opencontainers/runc/releases/download/v1.4.2/runc.amd64"
2026-04-23 16:28:08.543485 | instance | [ceph/ceph] }
2026-04-23 16:28:08.543494 | instance | [ceph/ceph]
2026-04-23 16:28:08.543504 | instance | [ceph/ceph] TASK [vexxhost.containers.download_artifact : Download item] *******************
2026-04-23 16:28:09.239426 | instance | [ceph/ceph] changed: [instance]
2026-04-23 16:28:09.239480 | instance | [ceph/ceph]
2026-04-23 16:28:09.239492 | instance | [ceph/ceph] TASK [vexxhost.containers.download_artifact : Extract archive] *****************
2026-04-23 16:28:09.285943 | instance | [ceph/ceph] skipping: [instance]
2026-04-23 16:28:09.285992 | instance | [ceph/ceph]
2026-04-23 16:28:09.286003 | instance | [ceph/ceph] TASK [vexxhost.containers.package : Update state for tar] **********************
2026-04-23 16:28:09.328787 | instance | [ceph/ceph] skipping: [instance]
2026-04-23 16:28:09.328853 | instance | [ceph/ceph]
2026-04-23 16:28:09.328865 | instance | [ceph/ceph] TASK [vexxhost.containers.forget_package : Forget package] *********************
2026-04-23 16:28:09.635582 | instance | [ceph/ceph] ok: [instance]
2026-04-23 16:28:09.635630 | instance | [ceph/ceph]
2026-04-23 16:28:09.635641 | instance | [ceph/ceph] TASK [vexxhost.containers.package : Update state for tar] **********************
2026-04-23 16:28:10.837416 | instance | [ceph/ceph] ok: [instance]
2026-04-23 16:28:10.837462 | instance | [ceph/ceph]
2026-04-23 16:28:10.837470 | instance | [ceph/ceph] TASK [vexxhost.containers.download_artifact : Starting download of file] *******
2026-04-23 16:28:10.899817 | instance | [ceph/ceph] ok: [instance] => {
2026-04-23 16:28:10.899870 | instance | [ceph/ceph]     "msg": "https://github.com/containerd/containerd/releases/download/v2.2.3/containerd-2.2.3-linux-amd64.tar.gz"
2026-04-23 16:28:10.899888 | instance | [ceph/ceph] }
2026-04-23 16:28:10.899896 | instance | [ceph/ceph]
2026-04-23 16:28:10.899904 | instance | [ceph/ceph] TASK [vexxhost.containers.download_artifact : Download item] *******************
2026-04-23 16:28:11.661181 | instance | [ceph/ceph] changed: [instance]
2026-04-23 16:28:11.661376 | instance | [ceph/ceph]
2026-04-23 16:28:11.661391 | instance | [ceph/ceph] TASK [vexxhost.containers.download_artifact : Extract archive] *****************
2026-04-23 16:28:14.036798 | instance | [kubernetes/kubernetes] changed: [instance] => (item={'name': 'fs.aio-max-nr', 'value': 1048576})
2026-04-23 16:28:14.036865 | instance | [kubernetes/kubernetes] changed: [instance] => (item={'name': 'net.ipv4.tcp_timestamps', 'value': 0})
2026-04-23 16:28:14.036893 | instance | [kubernetes/kubernetes] changed: [instance] => (item={'name': 'net.ipv4.tcp_sack', 'value': 1})
2026-04-23 16:28:14.036903 | instance | [kubernetes/kubernetes] changed: [instance] => (item={'name': 'net.core.netdev_budget', 'value': 1000})
2026-04-23 16:28:14.036912 | instance | [kubernetes/kubernetes] changed: [instance] => (item={'name': 'net.core.netdev_max_backlog', 'value': 250000})
2026-04-23 16:28:14.036921 | instance | [kubernetes/kubernetes] changed: [instance] => (item={'name': 'net.core.rmem_max', 'value': 4194304})
2026-04-23 16:28:14.036930 | instance | [kubernetes/kubernetes] changed: [instance] => (item={'name': 'net.core.wmem_max', 'value': 4194304})
2026-04-23 16:28:14.036938 | instance | [kubernetes/kubernetes] changed: [instance] => (item={'name': 'net.core.rmem_default', 'value': 4194304})
2026-04-23 16:28:14.036947 | instance | [kubernetes/kubernetes] changed: [instance] => (item={'name': 'net.core.wmem_default', 'value': 4194304})
2026-04-23 16:28:14.036970 | instance | [kubernetes/kubernetes] changed: [instance] => (item={'name': 'net.core.optmem_max', 'value': 4194304})
2026-04-23 16:28:14.036979 | instance | [kubernetes/kubernetes] changed: [instance] => (item={'name': 'net.ipv4.tcp_rmem', 'value': '4096 87380 4194304'})
2026-04-23 16:28:14.036988 | instance | [kubernetes/kubernetes] changed: [instance] => (item={'name': 'net.ipv4.tcp_wmem', 'value': '4096 65536 4194304'})
2026-04-23 16:28:14.036997 | instance | [kubernetes/kubernetes] changed: [instance] => (item={'name': 'net.ipv4.tcp_low_latency', 'value': 1})
2026-04-23 16:28:14.037006 | instance | [kubernetes/kubernetes] changed: [instance] => (item={'name': 'net.ipv4.tcp_adv_win_scale', 'value': 1})
2026-04-23 16:28:14.037015 | instance | [kubernetes/kubernetes] changed: [instance] => (item={'name': 'net.ipv4.neigh.default.gc_thresh1', 'value': 128})
2026-04-23 16:28:14.037024 | instance | [kubernetes/kubernetes] changed: [instance] => (item={'name': 'net.ipv4.neigh.default.gc_thresh2', 'value': 28872})
2026-04-23 16:28:14.037033 | instance | [kubernetes/kubernetes] changed: [instance] => (item={'name': 'net.ipv4.neigh.default.gc_thresh3', 'value': 32768})
2026-04-23 16:28:14.037042 | instance | [kubernetes/kubernetes] changed: [instance] => (item={'name': 'net.ipv6.neigh.default.gc_thresh1', 'value': 128})
2026-04-23 16:28:14.037050 | instance | [kubernetes/kubernetes] changed: [instance] => (item={'name': 'net.ipv6.neigh.default.gc_thresh2', 'value': 28872})
2026-04-23 16:28:14.037059 | instance | [kubernetes/kubernetes] changed: [instance] => (item={'name': 'net.ipv6.neigh.default.gc_thresh3', 'value': 32768})
2026-04-23 16:28:14.037068 | instance | [kubernetes/kubernetes]
2026-04-23 16:28:14.037077 | instance | [kubernetes/kubernetes] TASK [vexxhost.atmosphere.ethtool : Create folder for persistent configuration] ***
2026-04-23 16:28:14.425276 | instance | [kubernetes/kubernetes] changed: [instance]
2026-04-23 16:28:14.425337 | instance | [kubernetes/kubernetes]
2026-04-23 16:28:14.425349 | instance | [kubernetes/kubernetes] TASK [vexxhost.atmosphere.ethtool : Install persistent "ethtool" tuning] *******
2026-04-23 16:28:14.605500 | instance | [ceph/ceph] changed: [instance]
2026-04-23 16:28:14.605549 | instance | [ceph/ceph]
2026-04-23 16:28:14.605557 | instance | [ceph/ceph] TASK [vexxhost.containers.containerd : Install SELinux packages] ***************
2026-04-23 16:28:14.632486 | instance | [ceph/ceph] skipping: [instance]
2026-04-23 16:28:14.632524 | instance | [ceph/ceph]
2026-04-23 16:28:14.632531 | instance | [ceph/ceph] TASK [vexxhost.containers.containerd : Set SELinux to permissive at runtime] ***
2026-04-23 16:28:14.665265 | instance | [ceph/ceph] skipping: [instance]
2026-04-23 16:28:14.665317 | instance | [ceph/ceph]
2026-04-23 16:28:14.665328 | instance | [ceph/ceph] TASK [vexxhost.containers.containerd : Persist SELinux permissive mode] ********
2026-04-23 16:28:14.696364 | instance | [ceph/ceph] skipping: [instance]
2026-04-23 16:28:14.696401 | instance | [ceph/ceph]
2026-04-23 16:28:14.696411 | instance | [ceph/ceph] TASK [vexxhost.containers.containerd : Install AppArmor packages] **************
2026-04-23 16:28:15.133408 | instance | [kubernetes/kubernetes] changed: [instance]
2026-04-23 16:28:15.133456 | instance | [kubernetes/kubernetes]
2026-04-23 16:28:15.133464 | instance | [kubernetes/kubernetes] TASK [vexxhost.atmosphere.ethtool : Run "ethtool" tuning] **********************
2026-04-23 16:28:15.567445 | instance | [kubernetes/kubernetes] ok: [instance]
2026-04-23 16:28:15.567508 | instance | [kubernetes/kubernetes]
2026-04-23 16:28:15.567519 | instance | [kubernetes/kubernetes] TASK [Set a fact with the "atmosphere_images" for other plays] *****************
2026-04-23 16:28:15.724922 | instance | [kubernetes/kubernetes] ok: [instance]
2026-04-23 16:28:15.724991 | instance | [kubernetes/kubernetes]
2026-04-23 16:28:15.725004 | instance | [kubernetes/kubernetes] PLAY [Configure Kubernetes VIP] ************************************************
2026-04-23 16:28:15.725013 | instance | [kubernetes/kubernetes]
2026-04-23 16:28:15.725020 | instance | [kubernetes/kubernetes] TASK [Gathering Facts] *********************************************************
2026-04-23 16:28:16.691833 | instance | [kubernetes/kubernetes] ok: [instance]
2026-04-23 16:28:16.691900 | instance | [kubernetes/kubernetes]
2026-04-23 16:28:16.691912 | instance | [kubernetes/kubernetes] TASK [vexxhost.containers.directory : Create directory (/etc/kubernetes/manifests)] ***
2026-04-23 16:28:17.014154 | instance | [kubernetes/kubernetes] changed: [instance]
2026-04-23 16:28:17.014209 | instance | [kubernetes/kubernetes]
2026-04-23 16:28:17.014221 | instance | [kubernetes/kubernetes] TASK [vexxhost.kubernetes.kube_vip : Uninstall legacy HA stack] ****************
2026-04-23 16:28:18.414819 | instance | [kubernetes/kubernetes] ok: [instance] => (item=/etc/keepalived/keepalived.conf)
2026-04-23 16:28:18.414884 | instance | [kubernetes/kubernetes] ok: [instance] => (item=/etc/keepalived/check_apiserver.sh)
2026-04-23 16:28:18.414895 | instance | [kubernetes/kubernetes] ok: [instance] => (item=/etc/kubernetes/manifests/keepalived.yaml)
2026-04-23 16:28:18.414905 | instance | [kubernetes/kubernetes] ok: [instance] => (item=/etc/haproxy/haproxy.cfg)
2026-04-23 16:28:18.414928 | instance | [kubernetes/kubernetes] ok: [instance] => (item=/etc/kubernetes/manifests/haproxy.yaml)
2026-04-23 16:28:18.414941 | instance | [kubernetes/kubernetes]
2026-04-23 16:28:18.414951 | instance | [kubernetes/kubernetes] TASK [vexxhost.kubernetes.kube_vip : Switch API server to run on port 6443] ****
2026-04-23 16:28:19.292934 | instance | [kubernetes/kubernetes] failed: [instance] (item=/etc/kubernetes/manifests/kube-apiserver.yaml) => {"ansible_loop_var": "item", "changed": false, "item": "/etc/kubernetes/manifests/kube-apiserver.yaml", "msg": "Path /etc/kubernetes/manifests/kube-apiserver.yaml does not exist !", "rc": 257}
2026-04-23 16:28:19.292996 | instance | [kubernetes/kubernetes] failed: [instance] (item=/etc/kubernetes/controller-manager.conf) => {"ansible_loop_var": "item", "changed": false, "item": "/etc/kubernetes/controller-manager.conf", "msg": "Path /etc/kubernetes/controller-manager.conf does not exist !", "rc": 257}
2026-04-23 16:28:19.293012 | instance | [kubernetes/kubernetes] failed: [instance] (item=/etc/kubernetes/scheduler.conf) => {"ansible_loop_var": "item", "changed": false, "item": "/etc/kubernetes/scheduler.conf", "msg": "Path /etc/kubernetes/scheduler.conf does not exist !", "rc": 257}
2026-04-23 16:28:19.293021 | instance | [kubernetes/kubernetes] ...ignoring
2026-04-23 16:28:19.293031 | instance | [kubernetes/kubernetes]
2026-04-23 16:28:19.293041 | instance | [kubernetes/kubernetes] TASK [vexxhost.kubernetes.kube_vip : Check if super-admin.conf exists] *********
2026-04-23 16:28:19.559820 | instance | [kubernetes/kubernetes] ok: [instance]
2026-04-23 16:28:19.559873 | instance | [kubernetes/kubernetes]
2026-04-23 16:28:19.559885 | instance | [kubernetes/kubernetes] TASK [vexxhost.kubernetes.kube_vip : Check if kubeadm has already run] *********
2026-04-23 16:28:19.847815 | instance | [kubernetes/kubernetes] ok: [instance]
2026-04-23 16:28:19.847869 | instance | [kubernetes/kubernetes]
2026-04-23 16:28:19.847878 | instance | [kubernetes/kubernetes] TASK [vexxhost.kubernetes.kube_vip : Set fact with KUBECONFIG path] ************
2026-04-23 16:28:19.878108 | instance | [kubernetes/kubernetes] ok: [instance]
2026-04-23 16:28:19.878155 | instance | [kubernetes/kubernetes]
2026-04-23 16:28:19.878161 | instance | [kubernetes/kubernetes] TASK [vexxhost.kubernetes.kube_vip : Set fact with KUBECONFIG path (with super-admin.conf)] ***
2026-04-23 16:28:19.909485 | instance | [kubernetes/kubernetes] ok: [instance]
2026-04-23 16:28:19.909536 | instance | [kubernetes/kubernetes]
2026-04-23 16:28:19.909543 | instance | [kubernetes/kubernetes] TASK [vexxhost.kubernetes.kube_vip : Upload Kubernetes manifest] ***************
2026-04-23 16:28:20.186794 | instance | [ceph/ceph] changed: [instance]
2026-04-23 16:28:20.186826 | instance | [ceph/ceph]
2026-04-23 16:28:20.186831 | instance | [ceph/ceph] TASK [vexxhost.containers.containerd : Create systemd service file for containerd] ***
2026-04-23 16:28:20.541849 | instance | [kubernetes/kubernetes] ok: [instance]
2026-04-23 16:28:20.541897 | instance | [kubernetes/kubernetes]
2026-04-23 16:28:20.541910 | instance | [kubernetes/kubernetes] TASK [vexxhost.kubernetes.kube_vip : Ensure kube-vip configuration file] *******
2026-04-23 16:28:20.864322 | instance | [ceph/ceph] changed: [instance]
2026-04-23 16:28:20.864370 | instance | [ceph/ceph]
2026-04-23 16:28:20.864381 | instance | [ceph/ceph] TASK [vexxhost.containers.containerd : Create folders for configuration] *******
2026-04-23 16:28:20.892328 | instance | [kubernetes/kubernetes] ok: [instance]
2026-04-23 16:28:20.892368 | instance | [kubernetes/kubernetes]
2026-04-23 16:28:20.892379 | instance | [kubernetes/kubernetes] TASK [vexxhost.kubernetes.kube_vip : Flush handlers] ***************************
2026-04-23 16:28:20.892389 | instance | [kubernetes/kubernetes]
2026-04-23 16:28:20.892398 | instance | [kubernetes/kubernetes] PLAY [Install Kubernetes] ******************************************************
2026-04-23 16:28:20.892407 | instance | [kubernetes/kubernetes]
2026-04-23 16:28:20.892415 | instance | [kubernetes/kubernetes] TASK [Gathering Facts] *********************************************************
2026-04-23 16:28:21.869047 | instance | [kubernetes/kubernetes] ok: [instance]
2026-04-23 16:28:21.869258 | instance | [kubernetes/kubernetes]
2026-04-23 16:28:21.869266 | instance | [kubernetes/kubernetes] TASK [vexxhost.containers.forget_package : Forget package] *********************
2026-04-23 16:28:22.165079 | instance | [kubernetes/kubernetes] ok: [instance]
2026-04-23 16:28:22.165157 | instance | [kubernetes/kubernetes]
2026-04-23 16:28:22.165169 | instance | [kubernetes/kubernetes] TASK [vexxhost.containers.package : Update state for tar] **********************
2026-04-23 16:28:22.203271 | instance | [kubernetes/kubernetes] skipping: [instance]
2026-04-23 16:28:22.203341 | instance | [kubernetes/kubernetes]
2026-04-23 16:28:22.203353 | instance | [kubernetes/kubernetes] TASK [vexxhost.containers.directory : Create directory (/var/lib/downloads)] ***
2026-04-23 16:28:22.311359 | instance | [ceph/ceph] changed: [instance] => (item={'path': '/etc/containerd'})
2026-04-23 16:28:22.311436 | instance | [ceph/ceph] changed: [instance] => (item={'path': '/var/lib/containerd', 'mode': '0o700'})
2026-04-23 16:28:22.311450 | instance | [ceph/ceph] changed: [instance] => (item={'path': '/run/containerd', 'mode': '0o711'})
2026-04-23 16:28:22.311463 | instance | [ceph/ceph] changed: [instance] => (item={'path': '/run/containerd/io.containerd.grpc.v1.cri', 'mode': '0o700'})
2026-04-23 16:28:22.311477 | instance | [ceph/ceph] changed: [instance] => (item={'path': '/run/containerd/io.containerd.sandbox.controller.v1.shim', 'mode': '0o700'})
2026-04-23 16:28:22.311490 | instance | [ceph/ceph]
2026-04-23 16:28:22.311502 | instance | [ceph/ceph] TASK [vexxhost.containers.containerd : Create containerd config file] **********
2026-04-23 16:28:22.546038 | instance | [kubernetes/kubernetes] ok: [instance]
2026-04-23 16:28:22.546121 | instance | [kubernetes/kubernetes]
2026-04-23 16:28:22.546148 | instance | [kubernetes/kubernetes] TASK [vexxhost.containers.download_artifact : Starting download of file] *******
2026-04-23 16:28:22.598433 | instance | [kubernetes/kubernetes] ok: [instance] => {
2026-04-23 16:28:22.598504 | instance | [kubernetes/kubernetes]     "msg": "https://github.com/opencontainers/runc/releases/download/v1.4.2/runc.amd64"
2026-04-23 16:28:22.598516 | instance | [kubernetes/kubernetes] }
2026-04-23 16:28:22.598526 | instance | [kubernetes/kubernetes]
2026-04-23 16:28:22.598535 | instance | [kubernetes/kubernetes] TASK [vexxhost.containers.download_artifact : Download item] *******************
2026-04-23 16:28:22.948230 | instance | [ceph/ceph] changed: [instance]
2026-04-23 16:28:22.948276 | instance | [ceph/ceph]
2026-04-23 16:28:22.948287 | instance | [ceph/ceph] TASK [vexxhost.containers.containerd : Force any restarts if necessary] ********
2026-04-23 16:28:22.948298 | instance | [ceph/ceph]
2026-04-23 16:28:22.948307 | instance | [ceph/ceph] RUNNING HANDLER [vexxhost.containers.containerd : Reload systemd] **************
2026-04-23 16:28:23.093573 | instance | [kubernetes/kubernetes] ok: [instance]
2026-04-23 16:28:23.093634 | instance | [kubernetes/kubernetes]
2026-04-23 16:28:23.093647 | instance | [kubernetes/kubernetes] TASK [vexxhost.containers.download_artifact : Extract archive] *****************
2026-04-23 16:28:23.132648 | instance | [kubernetes/kubernetes] skipping: [instance]
2026-04-23 16:28:23.132706 | instance | [kubernetes/kubernetes]
2026-04-23 16:28:23.132717 | instance | [kubernetes/kubernetes] TASK [vexxhost.containers.forget_package : Forget package] *********************
2026-04-23 16:28:23.446256 | instance | [kubernetes/kubernetes] ok: [instance]
2026-04-23 16:28:23.446302 | instance | [kubernetes/kubernetes]
2026-04-23 16:28:23.446312 | instance | [kubernetes/kubernetes] TASK [vexxhost.containers.package : Update state for tar] **********************
2026-04-23 16:28:23.972224 | instance | [ceph/ceph] ok: [instance]
2026-04-23 16:28:23.972303 | instance | [ceph/ceph]
2026-04-23 16:28:23.972336 | instance | [ceph/ceph] RUNNING HANDLER [vexxhost.containers.containerd : Restart containerd] **********
2026-04-23 16:28:24.493521 | instance | [ceph/ceph] changed: [instance]
2026-04-23 16:28:24.493615 | instance | [ceph/ceph]
2026-04-23 16:28:24.493627 | instance | [ceph/ceph] TASK [vexxhost.containers.containerd : Enable and start service] ***************
2026-04-23 16:28:24.681980 | instance | [kubernetes/kubernetes] ok: [instance]
2026-04-23 16:28:24.682032 | instance | [kubernetes/kubernetes]
2026-04-23 16:28:24.682044 | instance | [kubernetes/kubernetes] TASK [vexxhost.containers.download_artifact : Starting download of file] *******
2026-04-23 16:28:24.742708 | instance | [kubernetes/kubernetes] ok: [instance] => {
2026-04-23 16:28:24.742758 | instance | [kubernetes/kubernetes]     "msg": "https://github.com/containerd/containerd/releases/download/v2.2.3/containerd-2.2.3-linux-amd64.tar.gz"
2026-04-23 16:28:24.742764 | instance | [kubernetes/kubernetes] }
2026-04-23 16:28:24.742769 | instance | [kubernetes/kubernetes]
2026-04-23 16:28:24.742773 | instance | [kubernetes/kubernetes] TASK [vexxhost.containers.download_artifact : Download item] *******************
2026-04-23 16:28:25.152455 | instance | [ceph/ceph] changed: [instance]
2026-04-23 16:28:25.152514 | instance | [ceph/ceph]
2026-04-23 16:28:25.152526 | instance | [ceph/ceph] TASK [vexxhost.containers.forget_package : Forget package] *********************
2026-04-23 16:28:25.216404 | instance | [kubernetes/kubernetes] ok: [instance]
2026-04-23 16:28:25.216466 | instance | [kubernetes/kubernetes]
2026-04-23 16:28:25.216475 | instance | [kubernetes/kubernetes] TASK [vexxhost.containers.download_artifact : Extract archive] *****************
2026-04-23 16:28:25.453158 | instance | [ceph/ceph] ok: [instance]
2026-04-23 16:28:25.453228 | instance | [ceph/ceph]
2026-04-23 16:28:25.453240 | instance | [ceph/ceph] TASK [vexxhost.containers.download_artifact : Starting download of file] *******
2026-04-23 16:28:25.503999 | instance | [ceph/ceph] ok: [instance] => {
2026-04-23 16:28:25.504045 | instance | [ceph/ceph]     "msg": "https://download.docker.com/linux/static/stable/x86_64/docker-24.0.9.tgz"
2026-04-23 16:28:25.504061 | instance | [ceph/ceph] }
2026-04-23 16:28:25.504074 | instance | [ceph/ceph]
2026-04-23 16:28:25.504088 | instance | [ceph/ceph] TASK [vexxhost.containers.download_artifact : Download item] *******************
2026-04-23 16:28:26.553043 | instance | [ceph/ceph] changed: [instance]
2026-04-23 16:28:26.553089 | instance | [ceph/ceph]
2026-04-23 16:28:26.553100 | instance | [ceph/ceph] TASK [vexxhost.containers.download_artifact : Extract archive] *****************
2026-04-23 16:28:27.413258 | instance | [kubernetes/kubernetes] ok: [instance]
2026-04-23 16:28:27.413314 | instance | [kubernetes/kubernetes]
2026-04-23 16:28:27.413327 | instance | [kubernetes/kubernetes] TASK [vexxhost.containers.containerd : Install SELinux packages] ***************
2026-04-23 16:28:27.443514 | instance | [kubernetes/kubernetes] skipping: [instance]
2026-04-23 16:28:27.443544 | instance | [kubernetes/kubernetes]
2026-04-23 16:28:27.443554 | instance | [kubernetes/kubernetes] TASK [vexxhost.containers.containerd : Set SELinux to permissive at runtime] ***
2026-04-23 16:28:27.472428 | instance | [kubernetes/kubernetes] skipping: [instance]
2026-04-23 16:28:27.472457 | instance | [kubernetes/kubernetes]
2026-04-23 16:28:27.472466 | instance | [kubernetes/kubernetes] TASK [vexxhost.containers.containerd : Persist SELinux permissive mode] ********
2026-04-23 16:28:27.502983 | instance | [kubernetes/kubernetes] skipping: [instance]
2026-04-23 16:28:27.503018 | instance | [kubernetes/kubernetes]
2026-04-23 16:28:27.503029 | instance | [kubernetes/kubernetes] TASK [vexxhost.containers.containerd : Install AppArmor packages] **************
2026-04-23 16:28:28.594045 | instance | [kubernetes/kubernetes] ok: [instance]
2026-04-23 16:28:28.594091 | instance | [kubernetes/kubernetes]
2026-04-23 16:28:28.594111 | instance | [kubernetes/kubernetes] TASK [vexxhost.containers.containerd : Create systemd service file for containerd] ***
2026-04-23 16:28:29.145103 | instance | [kubernetes/kubernetes] ok: [instance]
2026-04-23 16:28:29.145167 | instance | [kubernetes/kubernetes]
2026-04-23 16:28:29.145177 | instance | [kubernetes/kubernetes] TASK [vexxhost.containers.containerd : Create folders for configuration] *******
2026-04-23 16:28:30.635644 | instance | [kubernetes/kubernetes] ok: [instance] => (item={'path': '/etc/containerd'})
2026-04-23 16:28:30.635713 | instance | [kubernetes/kubernetes] ok: [instance] => (item={'path': '/var/lib/containerd', 'mode': '0o700'})
2026-04-23 16:28:30.635727 | instance | [kubernetes/kubernetes] ok: [instance] => (item={'path': '/run/containerd', 'mode': '0o711'})
2026-04-23 16:28:30.635737 | instance | [kubernetes/kubernetes] ok: [instance] => (item={'path': '/run/containerd/io.containerd.grpc.v1.cri', 'mode': '0o700'})
2026-04-23 16:28:30.635747 | instance | [kubernetes/kubernetes] ok: [instance] => (item={'path': '/run/containerd/io.containerd.sandbox.controller.v1.shim', 'mode': '0o700'})
2026-04-23 16:28:30.635758 | instance | [kubernetes/kubernetes]
2026-04-23 16:28:30.635768 | instance | [kubernetes/kubernetes] TASK [vexxhost.containers.containerd : Create containerd config file] **********
2026-04-23 16:28:31.012217 | instance | [ceph/ceph] changed: [instance]
2026-04-23 16:28:31.012265 | instance | [ceph/ceph]
2026-04-23 16:28:31.012277 | instance | [ceph/ceph] TASK [vexxhost.containers.docker : Install AppArmor packages] ******************
2026-04-23 16:28:31.230548 | instance | [kubernetes/kubernetes] ok: [instance]
2026-04-23 16:28:31.230596 | instance | [kubernetes/kubernetes]
2026-04-23 16:28:31.230607 | instance | [kubernetes/kubernetes] TASK [vexxhost.containers.containerd : Force any restarts if necessary] ********
2026-04-23 16:28:31.230618 | instance | [kubernetes/kubernetes]
2026-04-23 16:28:31.230626 | instance | [kubernetes/kubernetes] TASK [vexxhost.containers.containerd : Enable and start service] ***************
2026-04-23 16:28:31.883744 | instance | [kubernetes/kubernetes] ok: [instance]
2026-04-23 16:28:31.883791 | instance | [kubernetes/kubernetes]
2026-04-23 16:28:31.883799 | instance | [kubernetes/kubernetes] TASK [vexxhost.kubernetes.kubernetes_upgrade_check : Retrieve the "kubeadm-config" ConfigMap] ***
2026-04-23 16:28:32.086126 | instance | [ceph/ceph] ok: [instance]
2026-04-23 16:28:32.086177 | instance | [ceph/ceph]
2026-04-23 16:28:32.086189 | instance | [ceph/ceph] TASK [vexxhost.containers.docker : Ensure group "docker" exists] ***************
2026-04-23 16:28:32.538742 | instance | [ceph/ceph] changed: [instance]
2026-04-23 16:28:32.538800 | instance | [ceph/ceph]
2026-04-23 16:28:32.538816 | instance | [ceph/ceph] TASK [vexxhost.containers.docker : Create systemd service file for docker] *****
2026-04-23 16:28:32.737083 | instance | [kubernetes/kubernetes] An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ansible_collections.kubernetes.core.plugins.module_utils.k8s.exceptions.CoreException: Could not create API client: Invalid kube-config file. No configuration found.
2026-04-23 16:28:32.737133 | instance | [kubernetes/kubernetes] fatal: [instance]: FAILED! => {"changed": false, "msg": "Could not create API client: Invalid kube-config file. No configuration found."}
2026-04-23 16:28:32.737142 | instance | [kubernetes/kubernetes] ...ignoring
2026-04-23 16:28:32.737151 | instance | [kubernetes/kubernetes]
2026-04-23 16:28:32.737159 | instance | [kubernetes/kubernetes] TASK [vexxhost.kubernetes.kubernetes_upgrade_check : Parse the ClusterConfiguration] ***
2026-04-23 16:28:32.766005 | instance | [kubernetes/kubernetes] skipping: [instance]
2026-04-23 16:28:32.766045 | instance | [kubernetes/kubernetes]
2026-04-23 16:28:32.766052 | instance | [kubernetes/kubernetes] TASK [vexxhost.kubernetes.kubernetes_upgrade_check : Retrieve the current Kubernetes version] ***
2026-04-23 16:28:32.797437 | instance | [kubernetes/kubernetes] skipping: [instance]
2026-04-23 16:28:32.797477 | instance | [kubernetes/kubernetes]
2026-04-23 16:28:32.797484 | instance | [kubernetes/kubernetes] TASK [vexxhost.kubernetes.kubernetes_upgrade_check : Extract major, minor, and patch versions] ***
2026-04-23 16:28:32.830950 | instance | [kubernetes/kubernetes] skipping: [instance]
2026-04-23 16:28:32.830984 | instance | [kubernetes/kubernetes]
2026-04-23 16:28:32.830991 | instance | [kubernetes/kubernetes] TASK [vexxhost.kubernetes.kubernetes_upgrade_check : Fail if we're jumping more than one minor version] ***
2026-04-23 16:28:32.862740 | instance | [kubernetes/kubernetes] skipping: [instance]
2026-04-23 16:28:32.862766 | instance | [kubernetes/kubernetes]
2026-04-23 16:28:32.862773 | instance | [kubernetes/kubernetes] TASK [vexxhost.kubernetes.kubernetes_upgrade_check : Set fact if we need to upgrade] ***
2026-04-23 16:28:32.900350 | instance | [kubernetes/kubernetes] skipping: [instance]
2026-04-23 16:28:32.900423 | instance | [kubernetes/kubernetes]
2026-04-23 16:28:32.900435 | instance | [kubernetes/kubernetes] TASK [vexxhost.containers.forget_package : Forget package] *********************
2026-04-23 16:28:33.101154 | instance | [ceph/ceph] changed: [instance]
2026-04-23 16:28:33.101196 | instance | [ceph/ceph]
2026-04-23 16:28:33.101201 | instance | [ceph/ceph] TASK [vexxhost.containers.docker : Create folders for configuration] ***********
2026-04-23 16:28:33.198903 | instance | [kubernetes/kubernetes] ok: [instance]
2026-04-23 16:28:33.198956 | instance | [kubernetes/kubernetes]
2026-04-23 16:28:33.198968 | instance | [kubernetes/kubernetes] TASK [vexxhost.containers.download_artifact : Starting download of file] *******
2026-04-23 16:28:33.249954 | instance | [kubernetes/kubernetes] ok: [instance] => {
2026-04-23 16:28:33.249994 | instance | [kubernetes/kubernetes]     "msg": "https://dl.k8s.io/release/v1.28.13/bin/linux/amd64/kubeadm"
2026-04-23 16:28:33.250006 | instance | [kubernetes/kubernetes] }
2026-04-23 16:28:33.250016 | instance | [kubernetes/kubernetes]
2026-04-23 16:28:33.250025 | instance | [kubernetes/kubernetes] TASK [vexxhost.containers.download_artifact : Download item] *******************
2026-04-23 16:28:33.963090 | instance | [ceph/ceph] changed: [instance] => (item={'path': '/etc/docker'})
2026-04-23 16:28:33.963149 | instance | [ceph/ceph] changed: [instance] => (item={'path': '/var/lib/docker', 'mode': '0o710'})
2026-04-23 16:28:33.963155 | instance | [ceph/ceph] changed: [instance] => (item={'path': '/run/docker', 'mode': '0o711'})
2026-04-23 16:28:33.963160 | instance | [ceph/ceph]
2026-04-23 16:28:33.963164 | instance | [ceph/ceph] TASK [vexxhost.containers.docker : Create systemd socket file for docker] ******
2026-04-23 16:28:34.501054 | instance | [ceph/ceph] changed: [instance]
2026-04-23 16:28:34.501114 | instance | [ceph/ceph]
2026-04-23 16:28:34.501126 | instance | [ceph/ceph] TASK [vexxhost.containers.docker : Create docker daemon config file] ***********
2026-04-23 16:28:35.062771 | instance | [ceph/ceph] changed: [instance]
2026-04-23 16:28:35.063006 | instance | [ceph/ceph]
2026-04-23 16:28:35.063025 | instance | [ceph/ceph] TASK [vexxhost.containers.docker : Force any restarts if necessary] ************
2026-04-23 16:28:35.063044 | instance | [ceph/ceph]
2026-04-23 16:28:35.063053 | instance | [ceph/ceph] RUNNING HANDLER [vexxhost.containers.containerd : Reload systemd] **************
2026-04-23 16:28:35.796448 | instance | [ceph/ceph] ok: [instance]
2026-04-23 16:28:35.796515 | instance | [ceph/ceph]
2026-04-23 16:28:35.796527 | instance | [ceph/ceph] RUNNING HANDLER [vexxhost.containers.docker : Restart docker] ******************
2026-04-23 16:28:36.710432 | instance | [ceph/ceph] changed: [instance]
2026-04-23 16:28:36.710494 | instance | [ceph/ceph]
2026-04-23 16:28:36.710505 | instance | [ceph/ceph] TASK [vexxhost.containers.docker : Enable and start service] *******************
2026-04-23 16:28:37.352763 | instance | [ceph/ceph] changed: [instance]
2026-04-23 16:28:37.352845 | instance | [ceph/ceph]
2026-04-23 16:28:37.352861 | instance | [ceph/ceph] TASK [vexxhost.ceph.cephadm : Gather variables for each operating system] ******
2026-04-23 16:28:37.403950 | instance | [ceph/ceph] ok: [instance] => (item=/home/zuul/.ansible/collections/ansible_collections/vexxhost/ceph/roles/cephadm/vars/ubuntu-22.04.yml)
2026-04-23 16:28:37.404060 | instance | [ceph/ceph]
2026-04-23 16:28:37.404093 | instance | [ceph/ceph] TASK [vexxhost.ceph.cephadm : Install packages] ********************************
2026-04-23 16:28:38.868138 | instance | [kubernetes/kubernetes] changed: [instance]
2026-04-23 16:28:38.868206 | instance | [kubernetes/kubernetes]
2026-04-23 16:28:38.868218 | instance | [kubernetes/kubernetes] TASK [vexxhost.containers.download_artifact : Extract archive] *****************
2026-04-23 16:28:38.903731 | instance | [kubernetes/kubernetes] skipping: [instance]
2026-04-23 16:28:38.903758 | instance | [kubernetes/kubernetes]
2026-04-23 16:28:38.903765 | instance | [kubernetes/kubernetes] TASK [vexxhost.containers.forget_package : Forget package] *********************
2026-04-23 16:28:39.204692 | instance | [kubernetes/kubernetes] ok: [instance]
2026-04-23 16:28:39.204753 | instance | [kubernetes/kubernetes]
2026-04-23 16:28:39.204766 | instance | [kubernetes/kubernetes] TASK [vexxhost.containers.download_artifact : Starting download of file] *******
2026-04-23 16:28:39.244264 | instance | [kubernetes/kubernetes] ok: [instance] => {
2026-04-23 16:28:39.244298 | instance | [kubernetes/kubernetes]     "msg": "https://dl.k8s.io/release/v1.28.13/bin/linux/amd64/kubectl"
2026-04-23 16:28:39.244325 | instance | [kubernetes/kubernetes] }
2026-04-23 16:28:39.244339 | instance | [kubernetes/kubernetes]
2026-04-23 16:28:39.244351 | instance | [kubernetes/kubernetes] TASK [vexxhost.containers.download_artifact : Download item] *******************
2026-04-23 16:28:42.370000 | instance | [ceph/ceph] changed: [instance]
2026-04-23 16:28:42.370081 | instance | [ceph/ceph]
2026-04-23 16:28:42.370094 | instance | [ceph/ceph] TASK [vexxhost.ceph.cephadm : Ensure services are started] *********************
2026-04-23 16:28:43.225941 | instance | [ceph/ceph] ok: [instance] => (item=chronyd)
2026-04-23 16:28:43.226174 | instance | [ceph/ceph] ok: [instance] => (item=sshd)
2026-04-23 16:28:43.226193 | instance | [ceph/ceph]
2026-04-23 16:28:43.226210 | instance | [ceph/ceph] TASK [vexxhost.ceph.cephadm : Download "cephadm"] ******************************
2026-04-23 16:28:43.893612 | instance | [ceph/ceph] changed: [instance]
2026-04-23 16:28:43.893655 | instance | [ceph/ceph]
2026-04-23 16:28:43.893663 | instance | [ceph/ceph] TASK [vexxhost.ceph.cephadm : Remove cephadm from old path] ********************
2026-04-23 16:28:44.221732 | instance | [ceph/ceph] ok: [instance]
2026-04-23 16:28:44.221798 | instance | [ceph/ceph]
2026-04-23 16:28:44.221809 | instance | [ceph/ceph] TASK [vexxhost.ceph.cephadm : Ensure "cephadm" user is present] ****************
2026-04-23 16:28:44.784452 | instance | [ceph/ceph] changed: [instance]
2026-04-23 16:28:44.784545 | instance | [ceph/ceph]
2026-04-23 16:28:44.784558 | instance | [ceph/ceph] TASK [vexxhost.ceph.cephadm : Allow "cephadm" user to have passwordless sudo] ***
2026-04-23 16:28:45.229195 | instance | [ceph/ceph] changed: [instance]
2026-04-23 16:28:45.229249 | instance | [ceph/ceph]
2026-04-23 16:28:45.229261 | instance | [ceph/ceph] TASK [vexxhost.ceph.mon : Get `cephadm ls` status] *****************************
2026-04-23 16:28:46.168744 | instance | [kubernetes/kubernetes] changed: [instance]
2026-04-23 16:28:46.168805 | instance | [kubernetes/kubernetes]
2026-04-23 16:28:46.168819 | instance | [kubernetes/kubernetes] TASK [vexxhost.containers.download_artifact : Extract archive] *****************
2026-04-23 16:28:46.218153 | instance | [kubernetes/kubernetes] skipping: [instance]
2026-04-23 16:28:46.218209 | instance | [kubernetes/kubernetes]
2026-04-23 16:28:46.218221 | instance | [kubernetes/kubernetes] TASK [vexxhost.containers.containerd : Install SELinux packages] ***************
2026-04-23 16:28:46.239423 | instance | [kubernetes/kubernetes] skipping: [instance]
2026-04-23 16:28:46.239466 | instance | [kubernetes/kubernetes]
2026-04-23 16:28:46.239477 | instance | [kubernetes/kubernetes] TASK [vexxhost.containers.containerd : Set SELinux to permissive at runtime] ***
2026-04-23 16:28:46.262714 | instance | [kubernetes/kubernetes] skipping: [instance]
2026-04-23 16:28:46.262750 | instance | [kubernetes/kubernetes]
2026-04-23 16:28:46.262761 | instance | [kubernetes/kubernetes] TASK [vexxhost.containers.containerd : Persist SELinux permissive mode] ********
2026-04-23 16:28:46.285663 | instance | [kubernetes/kubernetes] skipping: [instance]
2026-04-23 16:28:46.285725 | instance | [kubernetes/kubernetes]
2026-04-23 16:28:46.285738 | instance | [kubernetes/kubernetes] TASK [vexxhost.containers.containerd : Install AppArmor packages] **************
2026-04-23 16:28:46.938716 | instance | [ceph/ceph] ok: [instance]
2026-04-23 16:28:46.938783 | instance | [ceph/ceph]
2026-04-23 16:28:46.938791 | instance | [ceph/ceph] TASK [vexxhost.ceph.mon : Parse the `cephadm ls` output] ***********************
2026-04-23 16:28:46.991875 | instance | [ceph/ceph] ok: [instance]
2026-04-23 16:28:46.991963 | instance | [ceph/ceph]
2026-04-23 16:28:46.991974 | instance | [ceph/ceph] TASK [vexxhost.ceph.mon : Assimilate existing configs in `ceph.conf`] **********
2026-04-23 16:28:47.021926 | instance | [ceph/ceph] skipping: [instance]
2026-04-23 16:28:47.022053 | instance | [ceph/ceph]
2026-04-23 16:28:47.022066 | instance | [ceph/ceph] TASK [vexxhost.ceph.mon : Adopt monitor to cluster] ****************************
2026-04-23 16:28:47.053693 | instance | [ceph/ceph] skipping: [instance]
2026-04-23 16:28:47.053777 | instance | [ceph/ceph]
2026-04-23 16:28:47.053793 | instance | [ceph/ceph] TASK [vexxhost.ceph.mon : Adopt manager to cluster] ****************************
2026-04-23 16:28:47.084885 | instance | [ceph/ceph] skipping: [instance]
2026-04-23 16:28:47.084953 | instance | [ceph/ceph]
2026-04-23 16:28:47.084969 | instance | [ceph/ceph] TASK [vexxhost.ceph.mon : Enable "cephadm" mgr module] *************************
2026-04-23 16:28:47.119891 | instance | [ceph/ceph] skipping: [instance]
2026-04-23 16:28:47.119975 | instance | [ceph/ceph]
2026-04-23 16:28:47.119991 | instance | [ceph/ceph] TASK [vexxhost.ceph.mon : Set orchestrator backend to "cephadm"] ***************
2026-04-23 16:28:47.175078 | instance | [ceph/ceph] skipping: [instance]
2026-04-23 16:28:47.175158 | instance | [ceph/ceph]
2026-04-23 16:28:47.175169 | instance | [ceph/ceph] TASK [vexxhost.ceph.mon : Use `cephadm` user for cephadm] **********************
2026-04-23 16:28:47.206329 | instance | [ceph/ceph] skipping: [instance]
2026-04-23 16:28:47.206405 | instance | [ceph/ceph]
2026-04-23 16:28:47.206416 | instance | [ceph/ceph] TASK [vexxhost.ceph.mon : Generate "cephadm" key] ******************************
2026-04-23 16:28:47.237105 | instance | [ceph/ceph] skipping: [instance]
2026-04-23 16:28:47.237176 | instance | [ceph/ceph]
2026-04-23 16:28:47.237187 | instance | [ceph/ceph] TASK [vexxhost.ceph.mon : Set Ceph Monitor IP address] *************************
2026-04-23 16:28:47.350420 | instance | [ceph/ceph] ok: [instance]
2026-04-23 16:28:47.350483 | instance | [ceph/ceph]
2026-04-23 16:28:47.350494 | instance | [ceph/ceph] TASK [vexxhost.ceph.mon : Check if any node is bootstrapped] *******************
2026-04-23 16:28:47.416486 | instance | [kubernetes/kubernetes] ok: [instance]
2026-04-23 16:28:47.416538 | instance | [kubernetes/kubernetes]
2026-04-23 16:28:47.416550 | instance | [kubernetes/kubernetes] TASK [vexxhost.containers.containerd : Create systemd service file for containerd] ***
2026-04-23 16:28:47.668125 | instance | [ceph/ceph] ok: [instance] => (item=instance)
2026-04-23 16:28:47.668158 | instance | [ceph/ceph]
2026-04-23 16:28:47.668163 | instance | [ceph/ceph] TASK [vexxhost.ceph.mon : Select pre-existing bootstrap node if exists] ********
2026-04-23 16:28:47.716722 | instance | [ceph/ceph] ok: [instance]
2026-04-23 16:28:47.716763 | instance | [ceph/ceph]
2026-04-23 16:28:47.716774 | instance | [ceph/ceph] TASK [vexxhost.ceph.mon : Bootstrap cluster] ***********************************
2026-04-23 16:28:47.788875 | instance | [ceph/ceph] included: /home/zuul/.ansible/collections/ansible_collections/vexxhost/ceph/roles/mon/tasks/bootstrap-ceph.yml for instance
2026-04-23 16:28:47.788932 | instance | [ceph/ceph]
2026-04-23 16:28:47.788944 | instance | [ceph/ceph] TASK [vexxhost.ceph.mon : Generate temporary file for "ceph.conf"] *************
2026-04-23 16:28:47.930355 | instance | [kubernetes/kubernetes] ok: [instance]
2026-04-23 16:28:47.930400 | instance | [kubernetes/kubernetes]
2026-04-23 16:28:47.930408 | instance | [kubernetes/kubernetes] TASK [vexxhost.containers.containerd : Create folders for configuration] *******
2026-04-23 16:28:48.174959 | instance | [ceph/ceph] changed: [instance]
2026-04-23 16:28:48.175004 | instance | [ceph/ceph]
2026-04-23 16:28:48.175016 | instance | [ceph/ceph] TASK
[vexxhost.ceph.mon : Include extra configuration values] ****************** 2026-04-23 16:28:49.142293 | instance | [ceph/ceph] changed: [instance] => (item={'option': 'mon allow pool size one', 'section': 'global', 'value': True}) 2026-04-23 16:28:49.142329 | instance | [ceph/ceph] changed: [instance] => (item={'option': 'osd crush chooseleaf type', 'section': 'global', 'value': 0}) 2026-04-23 16:28:49.142336 | instance | [ceph/ceph] changed: [instance] => (item={'option': 'auth allow insecure global id reclaim', 'section': 'mon', 'value': False}) 2026-04-23 16:28:49.142342 | instance | [ceph/ceph] 2026-04-23 16:28:49.142348 | instance | [ceph/ceph] TASK [vexxhost.ceph.mon : Run Bootstrap coomand] ******************************* 2026-04-23 16:28:49.298458 | instance | [kubernetes/kubernetes] ok: [instance] => (item={'path': '/etc/containerd'}) 2026-04-23 16:28:49.298694 | instance | [kubernetes/kubernetes] ok: [instance] => (item={'path': '/var/lib/containerd', 'mode': '0o700'}) 2026-04-23 16:28:49.298706 | instance | [kubernetes/kubernetes] ok: [instance] => (item={'path': '/run/containerd', 'mode': '0o711'}) 2026-04-23 16:28:49.298715 | instance | [kubernetes/kubernetes] ok: [instance] => (item={'path': '/run/containerd/io.containerd.grpc.v1.cri', 'mode': '0o700'}) 2026-04-23 16:28:49.298723 | instance | [kubernetes/kubernetes] ok: [instance] => (item={'path': '/run/containerd/io.containerd.sandbox.controller.v1.shim', 'mode': '0o700'}) 2026-04-23 16:28:49.298732 | instance | [kubernetes/kubernetes] 2026-04-23 16:28:49.298741 | instance | [kubernetes/kubernetes] TASK [vexxhost.containers.containerd : Create containerd config file] ********** 2026-04-23 16:28:49.873994 | instance | [kubernetes/kubernetes] ok: [instance] 2026-04-23 16:28:49.874045 | instance | [kubernetes/kubernetes] 2026-04-23 16:28:49.874053 | instance | [kubernetes/kubernetes] TASK [vexxhost.containers.containerd : Force any restarts if necessary] ******** 2026-04-23 16:28:49.874059 | instance 
| [kubernetes/kubernetes] 2026-04-23 16:28:49.874065 | instance | [kubernetes/kubernetes] TASK [vexxhost.containers.containerd : Enable and start service] *************** 2026-04-23 16:28:50.310306 | instance | [kubernetes/kubernetes] ok: [instance] 2026-04-23 16:28:50.310363 | instance | [kubernetes/kubernetes] 2026-04-23 16:28:50.310375 | instance | [kubernetes/kubernetes] TASK [vexxhost.containers.forget_package : Forget package] ********************* 2026-04-23 16:28:50.618377 | instance | [kubernetes/kubernetes] ok: [instance] 2026-04-23 16:28:50.618421 | instance | [kubernetes/kubernetes] 2026-04-23 16:28:50.618427 | instance | [kubernetes/kubernetes] TASK [vexxhost.containers.download_artifact : Starting download of file] ******* 2026-04-23 16:28:50.670400 | instance | [kubernetes/kubernetes] ok: [instance] => { 2026-04-23 16:28:50.670443 | instance | [kubernetes/kubernetes] "msg": "https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.35.0/crictl-v1.35.0-linux-amd64.tar.gz" 2026-04-23 16:28:50.670449 | instance | [kubernetes/kubernetes] } 2026-04-23 16:28:50.670454 | instance | [kubernetes/kubernetes] 2026-04-23 16:28:50.670462 | instance | [kubernetes/kubernetes] TASK [vexxhost.containers.download_artifact : Download item] ******************* 2026-04-23 16:28:51.376333 | instance | [kubernetes/kubernetes] changed: [instance] 2026-04-23 16:28:51.376371 | instance | [kubernetes/kubernetes] 2026-04-23 16:28:51.376379 | instance | [kubernetes/kubernetes] TASK [vexxhost.containers.download_artifact : Extract archive] ***************** 2026-04-23 16:28:52.899036 | instance | [kubernetes/kubernetes] changed: [instance] 2026-04-23 16:28:52.899100 | instance | [kubernetes/kubernetes] 2026-04-23 16:28:52.899109 | instance | [kubernetes/kubernetes] TASK [vexxhost.containers.download_artifact : Starting download of file] ******* 2026-04-23 16:28:52.947475 | instance | [kubernetes/kubernetes] ok: [instance] => { 2026-04-23 16:28:52.947520 | instance | 
[kubernetes/kubernetes] "msg": "https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.35.0/critest-v1.35.0-linux-amd64.tar.gz" 2026-04-23 16:28:52.947531 | instance | [kubernetes/kubernetes] } 2026-04-23 16:28:52.947540 | instance | [kubernetes/kubernetes] 2026-04-23 16:28:52.947548 | instance | [kubernetes/kubernetes] TASK [vexxhost.containers.download_artifact : Download item] ******************* 2026-04-23 16:28:53.776210 | instance | [kubernetes/kubernetes] changed: [instance] 2026-04-23 16:28:53.776245 | instance | [kubernetes/kubernetes] 2026-04-23 16:28:53.776251 | instance | [kubernetes/kubernetes] TASK [vexxhost.containers.download_artifact : Extract archive] ***************** 2026-04-23 16:28:55.261156 | instance | [kubernetes/kubernetes] changed: [instance] 2026-04-23 16:28:55.261212 | instance | [kubernetes/kubernetes] 2026-04-23 16:28:55.261224 | instance | [kubernetes/kubernetes] TASK [vexxhost.containers.cri_tools : Create crictl config] ******************** 2026-04-23 16:28:55.790611 | instance | [kubernetes/kubernetes] changed: [instance] 2026-04-23 16:28:55.790674 | instance | [kubernetes/kubernetes] 2026-04-23 16:28:55.790687 | instance | [kubernetes/kubernetes] TASK [vexxhost.containers.directory : Create directory (/opt/cni/bin)] ********* 2026-04-23 16:28:56.076127 | instance | [kubernetes/kubernetes] changed: [instance] 2026-04-23 16:28:56.076180 | instance | [kubernetes/kubernetes] 2026-04-23 16:28:56.076193 | instance | [kubernetes/kubernetes] TASK [vexxhost.containers.forget_package : Forget package] ********************* 2026-04-23 16:28:56.376933 | instance | [kubernetes/kubernetes] ok: [instance] 2026-04-23 16:28:56.377004 | instance | [kubernetes/kubernetes] 2026-04-23 16:28:56.377016 | instance | [kubernetes/kubernetes] TASK [vexxhost.containers.download_artifact : Starting download of file] ******* 2026-04-23 16:28:56.425019 | instance | [kubernetes/kubernetes] ok: [instance] => { 2026-04-23 16:28:56.425079 | instance | 
[kubernetes/kubernetes] "msg": "https://github.com/containernetworking/plugins/releases/download/v1.9.1/cni-plugins-linux-amd64-v1.9.1.tgz" 2026-04-23 16:28:56.425110 | instance | [kubernetes/kubernetes] } 2026-04-23 16:28:56.425120 | instance | [kubernetes/kubernetes] 2026-04-23 16:28:56.425129 | instance | [kubernetes/kubernetes] TASK [vexxhost.containers.download_artifact : Download item] ******************* 2026-04-23 16:28:57.734867 | instance | [kubernetes/kubernetes] changed: [instance] 2026-04-23 16:28:57.734964 | instance | [kubernetes/kubernetes] 2026-04-23 16:28:57.734976 | instance | [kubernetes/kubernetes] TASK [vexxhost.containers.download_artifact : Extract archive] ***************** 2026-04-23 16:29:00.449341 | instance | [kubernetes/kubernetes] changed: [instance] 2026-04-23 16:29:00.449407 | instance | [kubernetes/kubernetes] 2026-04-23 16:29:00.449418 | instance | [kubernetes/kubernetes] TASK [vexxhost.containers.cni_plugins : Gather variables for each operating system] *** 2026-04-23 16:29:00.500994 | instance | [kubernetes/kubernetes] ok: [instance] => (item=/home/zuul/.ansible/collections/ansible_collections/vexxhost/containers/roles/cni_plugins/vars/debian.yml) 2026-04-23 16:29:00.501039 | instance | [kubernetes/kubernetes] 2026-04-23 16:29:00.501050 | instance | [kubernetes/kubernetes] TASK [vexxhost.containers.cni_plugins : Install additional packages] *********** 2026-04-23 16:29:01.595196 | instance | [kubernetes/kubernetes] ok: [instance] 2026-04-23 16:29:01.595242 | instance | [kubernetes/kubernetes] 2026-04-23 16:29:01.595250 | instance | [kubernetes/kubernetes] TASK [vexxhost.containers.cni_plugins : Ensure IPv6 is enabled] **************** 2026-04-23 16:29:01.903373 | instance | [kubernetes/kubernetes] changed: [instance] 2026-04-23 16:29:01.903411 | instance | [kubernetes/kubernetes] 2026-04-23 16:29:01.903419 | instance | [kubernetes/kubernetes] TASK [vexxhost.containers.cni_plugins : Enable kernel modules on-boot] ********* 
2026-04-23 16:29:02.409973 | instance | [kubernetes/kubernetes] changed: [instance]
2026-04-23 16:29:02.410017 | instance | [kubernetes/kubernetes]
2026-04-23 16:29:02.410025 | instance | [kubernetes/kubernetes] TASK [vexxhost.containers.cni_plugins : Enable kernel modules in runtime] ******
2026-04-23 16:29:03.678423 | instance | [kubernetes/kubernetes] changed: [instance] => (item=br_netfilter)
2026-04-23 16:29:03.678485 | instance | [kubernetes/kubernetes] ok: [instance] => (item=ip_tables)
2026-04-23 16:29:03.678502 | instance | [kubernetes/kubernetes] changed: [instance] => (item=ip6_tables)
2026-04-23 16:29:03.678515 | instance | [kubernetes/kubernetes] changed: [instance] => (item=nf_conntrack)
2026-04-23 16:29:03.678526 | instance | [kubernetes/kubernetes]
2026-04-23 16:29:03.678535 | instance | [kubernetes/kubernetes] TASK [vexxhost.containers.forget_package : Forget package] *********************
2026-04-23 16:29:03.961451 | instance | [kubernetes/kubernetes] ok: [instance]
2026-04-23 16:29:03.961499 | instance | [kubernetes/kubernetes]
2026-04-23 16:29:03.961509 | instance | [kubernetes/kubernetes] TASK [vexxhost.containers.download_artifact : Starting download of file] *******
2026-04-23 16:29:04.001475 | instance | [kubernetes/kubernetes] ok: [instance] => {
2026-04-23 16:29:04.001550 | instance | [kubernetes/kubernetes] "msg": "https://dl.k8s.io/release/v1.28.13/bin/linux/amd64/kubelet"
2026-04-23 16:29:04.001562 | instance | [kubernetes/kubernetes] }
2026-04-23 16:29:04.001573 | instance | [kubernetes/kubernetes]
2026-04-23 16:29:04.001582 | instance | [kubernetes/kubernetes] TASK [vexxhost.containers.download_artifact : Download item] *******************
2026-04-23 16:29:16.463523 | instance | [kubernetes/kubernetes] changed: [instance]
2026-04-23 16:29:16.463599 | instance | [kubernetes/kubernetes]
2026-04-23 16:29:16.463614 | instance | [kubernetes/kubernetes] TASK [vexxhost.containers.download_artifact : Extract archive] *****************
2026-04-23 16:29:16.499027 | instance | [kubernetes/kubernetes] skipping: [instance]
2026-04-23 16:29:16.499049 | instance | [kubernetes/kubernetes]
2026-04-23 16:29:16.499056 | instance | [kubernetes/kubernetes] TASK [vexxhost.kubernetes.kubelet : Gather variables for each operating system] ***
2026-04-23 16:29:16.549542 | instance | [kubernetes/kubernetes] ok: [instance] => (item=/home/zuul/.ansible/collections/ansible_collections/vexxhost/kubernetes/roles/kubelet/vars/debian.yml)
2026-04-23 16:29:16.549592 | instance | [kubernetes/kubernetes]
2026-04-23 16:29:16.549601 | instance | [kubernetes/kubernetes] TASK [vexxhost.kubernetes.kubelet : Install coreutils] *************************
2026-04-23 16:29:16.581476 | instance | [kubernetes/kubernetes] skipping: [instance]
2026-04-23 16:29:16.581528 | instance | [kubernetes/kubernetes]
2026-04-23 16:29:16.581539 | instance | [kubernetes/kubernetes] TASK [vexxhost.kubernetes.kubelet : Install additional packages] ***************
2026-04-23 16:29:20.504289 | instance | [kubernetes/kubernetes] changed: [instance]
2026-04-23 16:29:20.504342 | instance | [kubernetes/kubernetes]
2026-04-23 16:29:20.504354 | instance | [kubernetes/kubernetes] TASK [vexxhost.kubernetes.kubelet : Configure sysctl values] *******************
2026-04-23 16:29:24.609217 | instance | [kubernetes/kubernetes] changed: [instance] => (item={'name': 'net.ipv4.ip_forward', 'value': 1})
2026-04-23 16:29:24.609558 | instance | [kubernetes/kubernetes] changed: [instance] => (item={'name': 'net.bridge.bridge-nf-call-iptables', 'value': 1})
2026-04-23 16:29:24.609578 | instance | [kubernetes/kubernetes] changed: [instance] => (item={'name': 'net.bridge.bridge-nf-call-ip6tables', 'value': 1})
2026-04-23 16:29:24.609588 | instance | [kubernetes/kubernetes] changed: [instance] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 0})
2026-04-23 16:29:24.609596 | instance | [kubernetes/kubernetes] changed: [instance] => (item={'name': 'fs.inotify.max_queued_events', 'value': 1048576})
2026-04-23 16:29:24.609605 | instance | [kubernetes/kubernetes] changed: [instance] => (item={'name': 'fs.inotify.max_user_instances', 'value': 8192})
2026-04-23 16:29:24.609614 | instance | [kubernetes/kubernetes] changed: [instance] => (item={'name': 'fs.inotify.max_user_watches', 'value': 1048576})
2026-04-23 16:29:24.609630 | instance | [kubernetes/kubernetes]
2026-04-23 16:29:24.609640 | instance | [kubernetes/kubernetes] TASK [vexxhost.kubernetes.kubelet : Create folders for kubernetes configuration] ***
2026-04-23 16:29:25.498858 | instance | [kubernetes/kubernetes] changed: [instance] => (item=/etc/systemd/system/kubelet.service.d)
2026-04-23 16:29:25.498918 | instance | [kubernetes/kubernetes] ok: [instance] => (item=/etc/kubernetes)
2026-04-23 16:29:25.498931 | instance | [kubernetes/kubernetes] ok: [instance] => (item=/etc/kubernetes/manifests)
2026-04-23 16:29:25.498941 | instance | [kubernetes/kubernetes]
2026-04-23 16:29:25.498952 | instance | [kubernetes/kubernetes] TASK [vexxhost.kubernetes.kubelet : Add kubelet systemd service config] ********
2026-04-23 16:29:26.077606 | instance | [kubernetes/kubernetes] changed: [instance]
2026-04-23 16:29:26.077650 | instance | [kubernetes/kubernetes]
2026-04-23 16:29:26.077658 | instance | [kubernetes/kubernetes] TASK [vexxhost.kubernetes.kubelet : Add kubeadm dropin for kubelet systemd service config] ***
2026-04-23 16:29:26.597185 | instance | [kubernetes/kubernetes] changed: [instance]
2026-04-23 16:29:26.597231 | instance | [kubernetes/kubernetes]
2026-04-23 16:29:26.597239 | instance | [kubernetes/kubernetes] TASK [vexxhost.kubernetes.kubelet : Check swap status] *************************
2026-04-23 16:29:26.906008 | instance | [kubernetes/kubernetes] ok: [instance]
2026-04-23 16:29:26.906058 | instance | [kubernetes/kubernetes]
2026-04-23 16:29:26.906066 | instance | [kubernetes/kubernetes] TASK [vexxhost.kubernetes.kubelet : Disable swap] ******************************
2026-04-23 16:29:26.932926 | instance | [kubernetes/kubernetes] skipping: [instance]
2026-04-23 16:29:26.932979 | instance | [kubernetes/kubernetes]
2026-04-23 16:29:26.932991 | instance | [kubernetes/kubernetes] TASK [vexxhost.kubernetes.kubelet : Remove swapfile from /etc/fstab] ***********
2026-04-23 16:29:27.672668 | instance | [kubernetes/kubernetes] ok: [instance] => (item=swap)
2026-04-23 16:29:27.672732 | instance | [kubernetes/kubernetes] ok: [instance] => (item=none)
2026-04-23 16:29:27.672744 | instance | [kubernetes/kubernetes]
2026-04-23 16:29:27.672753 | instance | [kubernetes/kubernetes] TASK [vexxhost.kubernetes.kubelet : Create noswap systemd service config file] ***
2026-04-23 16:29:28.231975 | instance | [kubernetes/kubernetes] changed: [instance]
2026-04-23 16:29:28.232037 | instance | [kubernetes/kubernetes]
2026-04-23 16:29:28.232045 | instance | [kubernetes/kubernetes] TASK [vexxhost.kubernetes.kubelet : Enable noswap service] *********************
2026-04-23 16:29:28.895603 | instance | [kubernetes/kubernetes] changed: [instance]
2026-04-23 16:29:28.895646 | instance | [kubernetes/kubernetes]
2026-04-23 16:29:28.895654 | instance | [kubernetes/kubernetes] TASK [vexxhost.kubernetes.kubelet : Force any restarts if necessary] ***********
2026-04-23 16:29:28.895671 | instance | [kubernetes/kubernetes]
2026-04-23 16:29:28.895678 | instance | [kubernetes/kubernetes] RUNNING HANDLER [vexxhost.kubernetes.kubelet : Reload systemd] *****************
2026-04-23 16:29:29.715187 | instance | [kubernetes/kubernetes] ok: [instance]
2026-04-23 16:29:29.715238 | instance | [kubernetes/kubernetes]
2026-04-23 16:29:29.715245 | instance | [kubernetes/kubernetes] TASK [vexxhost.kubernetes.kubelet : Enable and start kubelet service] **********
2026-04-23 16:29:29.725144 | instance | [ceph/ceph] fatal: [instance]: FAILED!
=> {"changed": false, "cmd": ["cephadm", "bootstrap", "--fsid", "4837cbf8-4f90-4300-b3f6-726c9b9f89b4", "--mon-ip", "199.19.213.206", "--cluster-network", "199.19.213.0/24", "--ssh-user", "cephadm", "--config", "/tmp/ceph_vmf904iu.conf", "--skip-monitoring-stack"], "delta": "0:00:40.285306", "end": "2026-04-23 16:29:29.686703", "msg": "non-zero return code", "rc": 1, "start": "2026-04-23 16:28:49.401397", "stderr": "Specifying an fsid for your cluster offers no advantages and may increase the likelihood of fsid conflicts.\nRuntimeError: Failed command: systemctl restart ceph-4837cbf8-4f90-4300-b3f6-726c9b9f89b4@mon.instance: Failed to restart ceph-4837cbf8-4f90-4300-b3f6-726c9b9f89b4@mon.instance.service: Message recipient disconnected from message bus without replying\nSee system logs and 'systemctl status ceph-4837cbf8-4f90-4300-b3f6-726c9b9f89b4@mon.instance.service' for details.\n\nTraceback (most recent call last):\n File \"/usr/lib/python3.10/runpy.py\", line 196, in _run_module_as_main\n return _run_code(code, main_globals, None,\n File \"/usr/lib/python3.10/runpy.py\", line 86, in _run_code\n exec(code, run_globals)\n File \"/usr/bin/cephadm/__main__.py\", line 10700, in \n File \"/usr/bin/cephadm/__main__.py\", line 10688, in main\n File \"/usr/bin/cephadm/__main__.py\", line 6156, in _rollback\n File \"/usr/bin/cephadm/__main__.py\", line 2495, in _default_image\n File \"/usr/bin/cephadm/__main__.py\", line 6327, in command_bootstrap\n File \"/usr/bin/cephadm/__main__.py\", line 6000, in finish_bootstrap_config\n File \"/usr/bin/cephadm/__main__.py\", line 2135, in call_throws\nRuntimeError: Failed command: systemctl restart ceph-4837cbf8-4f90-4300-b3f6-726c9b9f89b4@mon.instance: Failed to restart ceph-4837cbf8-4f90-4300-b3f6-726c9b9f89b4@mon.instance.service: Message recipient disconnected from message bus without replying\nSee system logs and 'systemctl status ceph-4837cbf8-4f90-4300-b3f6-726c9b9f89b4@mon.instance.service' for details.", "stderr_lines": 
["Specifying an fsid for your cluster offers no advantages and may increase the likelihood of fsid conflicts.", "RuntimeError: Failed command: systemctl restart ceph-4837cbf8-4f90-4300-b3f6-726c9b9f89b4@mon.instance: Failed to restart ceph-4837cbf8-4f90-4300-b3f6-726c9b9f89b4@mon.instance.service: Message recipient disconnected from message bus without replying", "See system logs and 'systemctl status ceph-4837cbf8-4f90-4300-b3f6-726c9b9f89b4@mon.instance.service' for details.", "", "Traceback (most recent call last):", " File \"/usr/lib/python3.10/runpy.py\", line 196, in _run_module_as_main", " return _run_code(code, main_globals, None,", " File \"/usr/lib/python3.10/runpy.py\", line 86, in _run_code", " exec(code, run_globals)", " File \"/usr/bin/cephadm/__main__.py\", line 10700, in ", " File \"/usr/bin/cephadm/__main__.py\", line 10688, in main", " File \"/usr/bin/cephadm/__main__.py\", line 6156, in _rollback", " File \"/usr/bin/cephadm/__main__.py\", line 2495, in _default_image", " File \"/usr/bin/cephadm/__main__.py\", line 6327, in command_bootstrap", " File \"/usr/bin/cephadm/__main__.py\", line 6000, in finish_bootstrap_config", " File \"/usr/bin/cephadm/__main__.py\", line 2135, in call_throws", "RuntimeError: Failed command: systemctl restart ceph-4837cbf8-4f90-4300-b3f6-726c9b9f89b4@mon.instance: Failed to restart ceph-4837cbf8-4f90-4300-b3f6-726c9b9f89b4@mon.instance.service: Message recipient disconnected from message bus without replying", "See system logs and 'systemctl status ceph-4837cbf8-4f90-4300-b3f6-726c9b9f89b4@mon.instance.service' for details."], "stdout": "Creating directory /etc/ceph for ceph.conf\nVerifying ssh connectivity using standard pubkey authentication ...\nAdding key to cephadm@localhost authorized_keys...\nVerifying podman|docker is present...\nVerifying lvm2 is present...\nVerifying time synchronization is in place...\nUnit chrony.service is enabled and running\nRepeating the final host check...\ndocker (/usr/bin/docker) is 
present\nsystemctl is present\nlvcreate is present\nUnit chrony.service is enabled and running\nHost looks OK\nCluster fsid: 4837cbf8-4f90-4300-b3f6-726c9b9f89b4\nVerifying IP 199.19.213.206 port 3300 ...\nVerifying IP 199.19.213.206 port 6789 ...\nMon IP `199.19.213.206` is in CIDR network `162.253.52.0/24`\nMon IP `199.19.213.206` is in CIDR network `162.253.52.0/24`\nMon IP `199.19.213.206` is in CIDR network `162.253.53.0/24`\nMon IP `199.19.213.206` is in CIDR network `162.253.53.0/24`\nMon IP `199.19.213.206` is in CIDR network `162.253.54.0/24`\nMon IP `199.19.213.206` is in CIDR network `162.253.54.0/24`\nMon IP `199.19.213.206` is in CIDR network `162.253.55.0/24`\nMon IP `199.19.213.206` is in CIDR network `162.253.55.0/24`\nMon IP `199.19.213.206` is in CIDR network `199.19.212.0/24`\nMon IP `199.19.213.206` is in CIDR network `199.19.212.0/24`\nMon IP `199.19.213.206` is in CIDR network `199.19.213.0/24`\nMon IP `199.19.213.206` is in CIDR network `199.19.213.0/24`\nMon IP `199.19.213.206` is in CIDR network `199.19.213.1/32`\nMon IP `199.19.213.206` is in CIDR network `199.19.213.1/32`\nMon IP `199.19.213.206` is in CIDR network `199.19.213.4/32`\nMon IP `199.19.213.206` is in CIDR network `199.19.213.4/32`\nMon IP `199.19.213.206` is in CIDR network `199.19.214.0/24`\nMon IP `199.19.213.206` is in CIDR network `199.19.214.0/24`\nMon IP `199.19.213.206` is in CIDR network `199.19.215.0/24`\nMon IP `199.19.213.206` is in CIDR network `199.19.215.0/24`\nMon IP `199.19.213.206` is in CIDR network `199.204.45.0/24`\nMon IP `199.19.213.206` is in CIDR network `199.204.45.0/24`\nMon IP `199.19.213.206` is in CIDR network `199.204.46.0/24`\nMon IP `199.19.213.206` is in CIDR network `199.204.46.0/24`\nPulling container image quay.io/ceph/ceph:v18.2.1...\nCeph version: ceph version 18.2.1 (7fe91d5d5842e04be3b4f514d6dd990c54b29c76) reef (stable)\nExtracting ceph user uid/gid from container image...\nCreating initial keys...\nCreating initial monmap...\nCreating 
mon...\nWaiting for mon to start...\nWaiting for mon...\nmon is available\nAssimilating anything we can from ceph.conf...\nGenerating new minimal ceph.conf...\nRestarting the monitor...\nNon-zero exit code 1 from systemctl restart ceph-4837cbf8-4f90-4300-b3f6-726c9b9f89b4@mon.instance\nsystemctl: stderr Failed to restart ceph-4837cbf8-4f90-4300-b3f6-726c9b9f89b4@mon.instance.service: Message recipient disconnected from message bus without replying\nsystemctl: stderr See system logs and 'systemctl status ceph-4837cbf8-4f90-4300-b3f6-726c9b9f89b4@mon.instance.service' for details.\n\n\n\t***************\n\tCephadm hit an issue during cluster installation. Current cluster files will NOT BE DELETED automatically to change\n\tthis behaviour you can pass the --cleanup-on-failure. To remove this broken cluster manually please run:\n\n\t > cephadm rm-cluster --force --fsid 4837cbf8-4f90-4300-b3f6-726c9b9f89b4\n\n\tin case of any previous broken installation user must use the rm-cluster command to delete the broken cluster:\n\n\t > cephadm rm-cluster --force --zap-osds --fsid \n\n\tfor more information please refer to https://docs.ceph.com/en/latest/cephadm/operations/#purging-a-cluster\n\t***************", "stdout_lines": ["Creating directory /etc/ceph for ceph.conf", "Verifying ssh connectivity using standard pubkey authentication ...", "Adding key to cephadm@localhost authorized_keys...", "Verifying podman|docker is present...", "Verifying lvm2 is present...", "Verifying time synchronization is in place...", "Unit chrony.service is enabled and running", "Repeating the final host check...", "docker (/usr/bin/docker) is present", "systemctl is present", "lvcreate is present", "Unit chrony.service is enabled and running", "Host looks OK", "Cluster fsid: 4837cbf8-4f90-4300-b3f6-726c9b9f89b4", "Verifying IP 199.19.213.206 port 3300 ...", "Verifying IP 199.19.213.206 port 6789 ...", "Mon IP `199.19.213.206` is in CIDR network `162.253.52.0/24`", "Mon IP `199.19.213.206` is in 
CIDR network `162.253.52.0/24`", "Mon IP `199.19.213.206` is in CIDR network `162.253.53.0/24`", "Mon IP `199.19.213.206` is in CIDR network `162.253.53.0/24`", "Mon IP `199.19.213.206` is in CIDR network `162.253.54.0/24`", "Mon IP `199.19.213.206` is in CIDR network `162.253.54.0/24`", "Mon IP `199.19.213.206` is in CIDR network `162.253.55.0/24`", "Mon IP `199.19.213.206` is in CIDR network `162.253.55.0/24`", "Mon IP `199.19.213.206` is in CIDR network `199.19.212.0/24`", "Mon IP `199.19.213.206` is in CIDR network `199.19.212.0/24`", "Mon IP `199.19.213.206` is in CIDR network `199.19.213.0/24`", "Mon IP `199.19.213.206` is in CIDR network `199.19.213.0/24`", "Mon IP `199.19.213.206` is in CIDR network `199.19.213.1/32`", "Mon IP `199.19.213.206` is in CIDR network `199.19.213.1/32`", "Mon IP `199.19.213.206` is in CIDR network `199.19.213.4/32`", "Mon IP `199.19.213.206` is in CIDR network `199.19.213.4/32`", "Mon IP `199.19.213.206` is in CIDR network `199.19.214.0/24`", "Mon IP `199.19.213.206` is in CIDR network `199.19.214.0/24`", "Mon IP `199.19.213.206` is in CIDR network `199.19.215.0/24`", "Mon IP `199.19.213.206` is in CIDR network `199.19.215.0/24`", "Mon IP `199.19.213.206` is in CIDR network `199.204.45.0/24`", "Mon IP `199.19.213.206` is in CIDR network `199.204.45.0/24`", "Mon IP `199.19.213.206` is in CIDR network `199.204.46.0/24`", "Mon IP `199.19.213.206` is in CIDR network `199.204.46.0/24`", "Pulling container image quay.io/ceph/ceph:v18.2.1...", "Ceph version: ceph version 18.2.1 (7fe91d5d5842e04be3b4f514d6dd990c54b29c76) reef (stable)", "Extracting ceph user uid/gid from container image...", "Creating initial keys...", "Creating initial monmap...", "Creating mon...", "Waiting for mon to start...", "Waiting for mon...", "mon is available", "Assimilating anything we can from ceph.conf...", "Generating new minimal ceph.conf...", "Restarting the monitor...", "Non-zero exit code 1 from systemctl restart 
ceph-4837cbf8-4f90-4300-b3f6-726c9b9f89b4@mon.instance", "systemctl: stderr Failed to restart ceph-4837cbf8-4f90-4300-b3f6-726c9b9f89b4@mon.instance.service: Message recipient disconnected from message bus without replying", "systemctl: stderr See system logs and 'systemctl status ceph-4837cbf8-4f90-4300-b3f6-726c9b9f89b4@mon.instance.service' for details.", "", "", "\t***************", "\tCephadm hit an issue during cluster installation. Current cluster files will NOT BE DELETED automatically to change", "\tthis behaviour you can pass the --cleanup-on-failure. To remove this broken cluster manually please run:", "", "\t > cephadm rm-cluster --force --fsid 4837cbf8-4f90-4300-b3f6-726c9b9f89b4", "", "\tin case of any previous broken installation user must use the rm-cluster command to delete the broken cluster:", "", "\t > cephadm rm-cluster --force --zap-osds --fsid ", "", "\tfor more information please refer to https://docs.ceph.com/en/latest/cephadm/operations/#purging-a-cluster", "\t***************"]} 2026-04-23 16:29:29.740790 | instance | [ceph/ceph] 2026-04-23 16:29:29.740825 | instance | [ceph/ceph] TASK [vexxhost.ceph.mon : Remove temporary file for "ceph.conf"] *************** 2026-04-23 16:29:30.049516 | instance | [ceph/ceph] changed: [instance] 2026-04-23 16:29:30.049575 | instance | [ceph/ceph] 2026-04-23 16:29:30.049583 | instance | [ceph/ceph] PLAY RECAP ********************************************************************* 2026-04-23 16:29:30.049590 | instance | [ceph/ceph] instance : ok=48 changed=26 unreachable=0 failed=1 skipped=14 rescued=0 ignored=0 2026-04-23 16:29:30.049595 | instance | [ceph/ceph] 2026-04-23 16:29:30.356193 | instance | Error: component ceph failed: ansible-playbook failed for ceph/ceph: exit status 2 2026-04-23 16:29:30.356268 | instance | Usage: 2026-04-23 16:29:30.356280 | instance | atmosphere deploy [flags] 2026-04-23 16:29:30.356290 | instance | 2026-04-23 16:29:30.356298 | instance | Flags: 2026-04-23 16:29:30.356307 | 
instance | --concurrency int Max concurrent deployments per wave (0 = unlimited)
2026-04-23 16:29:30.356315 | instance | -h, --help help for deploy
2026-04-23 16:29:30.356324 | instance | -i, --inventory string Path to Ansible inventory file (required)
2026-04-23 16:29:30.356332 | instance | -t, --tags string Comma-separated list of component tags to deploy
2026-04-23 16:29:30.356341 | instance |
2026-04-23 16:29:30.356349 | instance | component ceph failed: ansible-playbook failed for ceph/ceph: exit status 2
2026-04-23 16:29:30.580523 | instance | ERROR
2026-04-23 16:29:30.580797 | instance | {
2026-04-23 16:29:30.580842 | instance | "delta": "0:01:26.735193",
2026-04-23 16:29:30.580871 | instance | "end": "2026-04-23 16:29:30.357182",
2026-04-23 16:29:30.580898 | instance | "msg": "non-zero return code",
2026-04-23 16:29:30.580923 | instance | "rc": 1,
2026-04-23 16:29:30.580971 | instance | "start": "2026-04-23 16:28:03.621989"
2026-04-23 16:29:30.581016 | instance | } failure
2026-04-23 16:29:30.590160 |
2026-04-23 16:29:30.590211 | PLAY RECAP
2026-04-23 16:29:30.590258 | instance | ok: 1 changed: 0 unreachable: 0 failed: 1 skipped: 0 rescued: 0 ignored: 0
2026-04-23 16:29:30.590290 |
2026-04-23 16:29:30.709643 | RUN END RESULT_NORMAL: [untrusted : github.com/vexxhost/atmosphere/molecule/csi/converge.yml@main]
2026-04-23 16:29:30.713715 | POST-RUN START: [untrusted : github.com/vexxhost/atmosphere/test-playbooks/molecule/post.yml@main]
2026-04-23 16:29:31.389683 |
2026-04-23 16:29:31.389852 | PLAY [all]
2026-04-23 16:29:31.405454 |
2026-04-23 16:29:31.405581 | TASK [gather-host-logs : creating directory for system status]
2026-04-23 16:29:31.743684 | instance | changed
2026-04-23 16:29:31.751567 |
2026-04-23 16:29:31.751684 | TASK [gather-host-logs : Get logs for each host]
2026-04-23 16:29:32.169196 | instance | + systemd-cgls --full --all --no-pager
2026-04-23 16:29:32.181234 | instance | + ip addr
2026-04-23 16:29:32.182635 | instance | + ip route
2026-04-23 16:29:32.185469 | instance | + lsblk
2026-04-23 16:29:32.187940 | instance | + mount
2026-04-23 16:29:32.189728 | instance | + docker images
2026-04-23 16:29:32.207775 | instance | + brctl show
2026-04-23 16:29:32.208158 | instance | /bin/bash: line 8: brctl: command not found
2026-04-23 16:29:32.208685 | instance | + ps aux --sort=-%mem
2026-04-23 16:29:32.222149 | instance | + dpkg -l
2026-04-23 16:29:32.232931 | instance | + CONTAINERS=($(docker ps -a --format '{{ .Names }}' --filter label=zuul))
2026-04-23 16:29:32.233543 | instance | ++ docker ps -a --format '{{ .Names }}' --filter label=zuul
2026-04-23 16:29:32.252208 | instance | + '[' '!' -z '' ']'
2026-04-23 16:29:32.298668 | instance | ok: Runtime: 0:00:00.087551
2026-04-23 16:29:32.306525 |
2026-04-23 16:29:32.306598 | TASK [gather-host-logs : Downloads logs to executor]
2026-04-23 16:29:32.984252 | instance | changed:
2026-04-23 16:29:32.984577 | instance | created directory /var/lib/zuul/builds/3c6807d353184035be6864b5041de007/work/logs/instance
2026-04-23 16:29:32.984646 | instance | cd+++++++++ system/
2026-04-23 16:29:32.984695 | instance | >f+++++++++ system/brctl-show.txt
2026-04-23 16:29:32.984740 | instance | >f+++++++++ system/docker-images.txt
2026-04-23 16:29:32.984783 | instance | >f+++++++++ system/ip-addr.txt
2026-04-23 16:29:32.984833 | instance | >f+++++++++ system/ip-route.txt
2026-04-23 16:29:32.984878 | instance | >f+++++++++ system/lsblk.txt
2026-04-23 16:29:32.984921 | instance | >f+++++++++ system/mount.txt
2026-04-23 16:29:32.984997 | instance | >f+++++++++ system/packages.txt
2026-04-23 16:29:32.985042 | instance | >f+++++++++ system/ps.txt
2026-04-23 16:29:32.985087 | instance | >f+++++++++ system/systemd-cgls.txt
2026-04-23 16:29:32.994910 |
2026-04-23 16:29:32.994986 | LOOP [helm-release-status : creating directory for helm release status]
2026-04-23 16:29:33.186506 | instance | changed: "values"
2026-04-23 16:29:33.352444 | instance | changed: "releases"
2026-04-23 16:29:33.367657 |
2026-04-23 16:29:33.367847 | TASK [helm-release-status : Gather get release status for helm charts]
2026-04-23 16:29:33.631391 | instance | E0423 16:29:33.631226 18593 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:29:33.632171 | instance | E0423 16:29:33.632114 18593 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:29:33.633853 | instance | E0423 16:29:33.633776 18593 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:29:33.634828 | instance | E0423 16:29:33.634777 18593 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:29:33.635963 | instance | E0423 16:29:33.635922 18593 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:29:33.636107 | instance | The connection to the server localhost:8080 was refused - did you specify the right host or port?
2026-04-23 16:29:33.903445 | instance | ok: Runtime: 0:00:00.066949
2026-04-23 16:29:33.908766 |
2026-04-23 16:29:33.908837 | TASK [helm-release-status : Downloads logs to executor]
2026-04-23 16:29:34.468167 | instance | changed:
2026-04-23 16:29:34.468400 | instance | cd+++++++++ helm/
2026-04-23 16:29:34.468442 | instance | cd+++++++++ helm/releases/
2026-04-23 16:29:34.468473 | instance | cd+++++++++ helm/values/
2026-04-23 16:29:34.483202 |
2026-04-23 16:29:34.483304 | TASK [describe-kubernetes-objects : creating directory for cluster scoped objects]
2026-04-23 16:29:34.694601 | instance | changed
2026-04-23 16:29:34.701275 |
2026-04-23 16:29:34.701380 | TASK [describe-kubernetes-objects : Gathering descriptions for cluster scoped objects]
2026-04-23 16:29:34.907725 | instance | xargs: warning: options --max-args and --replace/-I/-i are mutually exclusive, ignoring previous --max-args value
2026-04-23 16:29:34.907870 | instance | xargs: warning: options --max-args and --replace/-I/-i are mutually exclusive, ignoring previous --max-args value
2026-04-23 16:29:34.915176 | instance | xargs: warning: options --max-lines and --replace/-I/-i are mutually exclusive, ignoring previous --max-lines value
2026-04-23 16:29:34.917171 | instance | xargs: warning: options --max-lines and --replace/-I/-i are mutually exclusive, ignoring previous --max-lines value
2026-04-23 16:29:34.967170 | instance | E0423 16:29:34.966998 18645 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:29:34.967796 | instance | E0423 16:29:34.967748 18645 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:29:34.969330 | instance | E0423 16:29:34.969280 18645 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:29:34.969606 | instance | E0423 16:29:34.969550 18645 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:29:34.970980 | instance | E0423 16:29:34.970900 18645 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:29:34.971015 | instance | The connection to the server localhost:8080 was refused - did you specify the right host or port?
2026-04-23 16:29:34.976538 | instance | E0423 16:29:34.976453 18640 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:29:34.977147 | instance | E0423 16:29:34.977105 18640 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:29:34.978686 | instance | E0423 16:29:34.978635 18640 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:29:34.979175 | instance | E0423 16:29:34.979135 18640 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:29:34.979249 | instance | xargs: warning: options --max-lines and --replace/-I/-i are mutually exclusive, ignoring previous --max-lines value
2026-04-23 16:29:34.980602 | instance | E0423 16:29:34.980553 18640 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:29:34.980638 | instance | The connection to the server localhost:8080 was refused - did you specify the right host or port?
2026-04-23 16:29:34.987221 | instance | xargs: warning: options --max-lines and --replace/-I/-i are mutually exclusive, ignoring previous --max-lines value
2026-04-23 16:29:35.028793 | instance | E0423 16:29:35.028554 18664 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:29:35.029089 | instance | E0423 16:29:35.029016 18664 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:29:35.030488 | instance | E0423 16:29:35.030416 18664 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:29:35.030850 | instance | E0423 16:29:35.030793 18664 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:29:35.032383 | instance | E0423 16:29:35.032299 18664 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:29:35.032419 | instance | The connection to the server localhost:8080 was refused - did you specify the right host or port?
2026-04-23 16:29:35.037043 | instance | E0423 16:29:35.036933 18671 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:29:35.037606 | instance | E0423 16:29:35.037573 18671 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:29:35.039264 | instance | E0423 16:29:35.039225 18671 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:29:35.039843 | instance | E0423 16:29:35.039801 18671 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:29:35.041856 | instance | E0423 16:29:35.041623 18671 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:29:35.042044 | instance | The connection to the server localhost:8080 was refused - did you specify the right host or port?
2026-04-23 16:29:35.042058 | instance | xargs: warning: options --max-lines and --replace/-I/-i are mutually exclusive, ignoring previous --max-lines value
2026-04-23 16:29:35.097607 | instance | E0423 16:29:35.097493 18688 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:29:35.098118 | instance | E0423 16:29:35.098083 18688 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:29:35.099896 | instance | E0423 16:29:35.099829 18688 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:29:35.100574 | instance | E0423 16:29:35.100543 18688 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:29:35.102371 | instance | E0423 16:29:35.102340 18688 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:29:35.102461 | instance | The connection to the server localhost:8080 was refused - did you specify the right host or port?
2026-04-23 16:29:35.241391 | instance | ok: Runtime: 0:00:00.204613
2026-04-23 16:29:35.246792 |
2026-04-23 16:29:35.246859 | TASK [describe-kubernetes-objects : creating directory for namespace scoped objects]
2026-04-23 16:29:35.451153 | instance | changed
2026-04-23 16:29:35.458080 |
2026-04-23 16:29:35.458177 | TASK [describe-kubernetes-objects : Gathering descriptions for namespace scoped objects]
2026-04-23 16:29:35.731931 | instance | xargs: warning: options --max-args and --replace/-I/-i are mutually exclusive, ignoring previous --max-args value
2026-04-23 16:29:35.732695 | instance | xargs: warning: options --max-args and --replace/-I/-i are mutually exclusive, ignoring previous --max-args value
2026-04-23 16:29:35.732849 | instance | xargs: warning: options --max-args and --replace/-I/-i are mutually exclusive, ignoring previous --max-args value
2026-04-23 16:29:35.776132 | instance | E0423 16:29:35.775989 18718 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:29:35.777172 | instance | E0423 16:29:35.777138 18718 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:29:35.778215 | instance | E0423 16:29:35.778163 18718 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:29:35.779623 | instance | E0423 16:29:35.779582 18718 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:29:35.780366 | instance | E0423 16:29:35.780329 18718 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:29:35.781599 | instance | The connection to the server localhost:8080 was refused - did you specify the right host or port?
2026-04-23 16:29:36.028417 | instance | ok: Runtime: 0:00:00.063530
2026-04-23 16:29:36.038938 |
2026-04-23 16:29:36.039146 | TASK [describe-kubernetes-objects : Downloads logs to executor]
2026-04-23 16:29:36.733549 | instance | changed:
2026-04-23 16:29:36.733786 | instance | cd+++++++++ objects/
2026-04-23 16:29:36.733826 | instance | cd+++++++++ objects/cluster/
2026-04-23 16:29:36.733857 | instance | cd+++++++++ objects/namespaced/
2026-04-23 16:29:36.744383 |
2026-04-23 16:29:36.744455 | TASK [gather-pod-logs : creating directory for pod logs]
2026-04-23 16:29:36.951006 | instance | changed
2026-04-23 16:29:36.956329 |
2026-04-23 16:29:36.956416 | TASK [gather-pod-logs : creating directory for failed pod logs]
2026-04-23 16:29:37.155386 | instance | changed
2026-04-23 16:29:37.162525 |
2026-04-23 16:29:37.162618 | TASK [gather-pod-logs : retrieve all kubernetes logs, current and previous (if they exist)]
2026-04-23 16:29:37.428807 | instance | E0423 16:29:37.428529 18768 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:29:37.429860 | instance | E0423 16:29:37.429821 18768 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:29:37.430417 | instance | E0423 16:29:37.430373 18768 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:29:37.432392 | instance | E0423 16:29:37.432347 18768 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:29:37.432930 | instance | E0423 16:29:37.432895 18768 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:29:37.434263 | instance | The connection to the server localhost:8080 was refused - did you specify the right host or port?
2026-04-23 16:29:37.700306 | instance | ok: Runtime: 0:00:00.066714
2026-04-23 16:29:37.709146 |
2026-04-23 16:29:37.709254 | TASK [gather-pod-logs : Downloads pod logs to executor]
2026-04-23 16:29:38.223980 | instance | changed:
2026-04-23 16:29:38.224264 | instance | cd+++++++++ pod-logs/
2026-04-23 16:29:38.224307 | instance | cd+++++++++ pod-logs/failed-pods/
2026-04-23 16:29:38.236450 |
2026-04-23 16:29:38.236695 | TASK [gather-prom-metrics : creating directory for helm release descriptions]
2026-04-23 16:29:38.443465 | instance | changed
2026-04-23 16:29:38.450608 |
2026-04-23 16:29:38.450702 | TASK [gather-prom-metrics : Get metrics from exporter services in all namespaces]
2026-04-23 16:29:38.738904 | instance | E0423 16:29:38.738817 18811 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:29:38.739932 | instance | E0423 16:29:38.739874 18811 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:29:38.741730 | instance | E0423 16:29:38.741674 18811 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:29:38.742632 | instance | E0423 16:29:38.742600 18811 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:29:38.743398 | instance | E0423 16:29:38.743350 18811 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:29:38.744663 | instance | The connection to the server localhost:8080 was refused - did you specify the right host or port?
2026-04-23 16:29:38.986069 | instance | ok: Runtime: 0:00:00.071549
2026-04-23 16:29:38.993452 |
2026-04-23 16:29:38.993762 | TASK [gather-prom-metrics : Get ceph metrics from ceph-mgr]
2026-04-23 16:29:39.275518 | instance | E0423 16:29:39.275339 18834 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:29:39.277297 | instance | E0423 16:29:39.277260 18834 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:29:39.278536 | instance | E0423 16:29:39.278492 18834 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:29:39.279893 | instance | E0423 16:29:39.279847 18834 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:29:39.280480 | instance | E0423 16:29:39.280451 18834 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:29:39.281653 | instance | The connection to the server localhost:8080 was refused - did you specify the right host or port?
2026-04-23 16:29:39.286923 | instance | ceph-mgr endpoints:
2026-04-23 16:29:39.533778 | instance | ok: Runtime: 0:00:00.069816
2026-04-23 16:29:39.539277 |
2026-04-23 16:29:39.539346 | TASK [gather-prom-metrics : Get metrics from fluentd pods]
2026-04-23 16:29:39.793855 | instance | E0423 16:29:39.793702 18856 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:29:39.794564 | instance | E0423 16:29:39.794524 18856 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:29:39.796217 | instance | E0423 16:29:39.796193 18856 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:29:39.796718 | instance | E0423 16:29:39.796683 18856 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:29:39.798451 | instance | E0423 16:29:39.798411 18856 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
2026-04-23 16:29:39.798486 | instance | The connection to the server localhost:8080 was refused - did you specify the right host or port?
2026-04-23 16:29:40.072667 | instance | ok: Runtime: 0:00:00.064803
2026-04-23 16:29:40.080911 |
2026-04-23 16:29:40.081065 | TASK [gather-prom-metrics : Downloads logs to executor]
2026-04-23 16:29:40.586367 | instance | changed: cd+++++++++ prometheus/
2026-04-23 16:29:40.906635 |
2026-04-23 16:29:40.906765 | TASK [gather-selenium-data : creating directory for helm release descriptions]
2026-04-23 16:29:41.104798 | instance | changed
2026-04-23 16:29:41.109944 |
2026-04-23 16:29:41.110014 | TASK [gather-selenium-data : Get selenium data]
2026-04-23 16:29:41.329342 | instance | + cp '/tmp/artifacts/*' /tmp/logs/selenium/.
2026-04-23 16:29:41.330970 | instance | cp: cannot stat '/tmp/artifacts/*': No such file or directory
2026-04-23 16:29:41.648269 | instance | ERROR
2026-04-23 16:29:41.648558 | instance | {
2026-04-23 16:29:41.648601 | instance | "delta": "0:00:00.006471",
2026-04-23 16:29:41.648630 | instance | "end": "2026-04-23 16:29:41.331356",
2026-04-23 16:29:41.648657 | instance | "msg": "non-zero return code",
2026-04-23 16:29:41.648684 | instance | "rc": 1,
2026-04-23 16:29:41.648709 | instance | "start": "2026-04-23 16:29:41.324885"
2026-04-23 16:29:41.648735 | instance | }
2026-04-23 16:29:41.648767 | instance | ERROR: Ignoring Errors
2026-04-23 16:29:41.655770 |
2026-04-23 16:29:41.655840 | TASK [gather-selenium-data : Downloads logs to executor]
2026-04-23 16:29:42.182712 | instance | changed: cd+++++++++ selenium/
2026-04-23 16:29:42.190943 |
2026-04-23 16:29:42.191014 | PLAY RECAP
2026-04-23 16:29:42.191074 | instance | ok: 23 changed: 23 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 1
2026-04-23 16:29:42.191107 |
2026-04-23 16:29:42.328939 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/vexxhost/atmosphere/test-playbooks/molecule/post.yml@main]
2026-04-23 16:29:42.332142 | POST-RUN START: [trusted : github.com/vexxhost/zuul-config/playbooks/base/post.yaml@main]
2026-04-23 16:29:42.976326 |
2026-04-23 16:29:42.976472 | PLAY [all]
2026-04-23 16:29:42.989627 |
2026-04-23 16:29:42.989711 | TASK [fetch-output : Set log path for multiple nodes]
2026-04-23 16:29:43.038661 | instance | skipping: Conditional result was False
2026-04-23 16:29:43.046818 |
2026-04-23 16:29:43.046961 | TASK [fetch-output : Set log path for single node]
2026-04-23 16:29:43.092906 | instance | ok
2026-04-23 16:29:43.099187 |
2026-04-23 16:29:43.099260 | LOOP [fetch-output : Ensure local output dirs]
2026-04-23 16:29:43.928001 | instance -> localhost | ok: "/var/lib/zuul/builds/3c6807d353184035be6864b5041de007/work/logs"
2026-04-23 16:29:44.144551 | instance -> localhost | changed: "/var/lib/zuul/builds/3c6807d353184035be6864b5041de007/work/artifacts"
2026-04-23 16:29:44.404034 | instance -> localhost | changed: "/var/lib/zuul/builds/3c6807d353184035be6864b5041de007/work/docs"
2026-04-23 16:29:44.419231 |
2026-04-23 16:29:44.419424 | LOOP [fetch-output : Collect logs, artifacts and docs]
2026-04-23 16:29:45.096891 | instance | changed: .d..t...... ./
2026-04-23 16:29:45.097137 | instance | changed: All items complete
2026-04-23 16:29:45.097171 |
2026-04-23 16:29:45.553537 | instance | changed: .d..t...... ./
2026-04-23 16:29:46.007752 | instance | changed: .d..t...... ./
2026-04-23 16:29:46.030486 |
2026-04-23 16:29:46.030645 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2026-04-23 16:29:46.737611 | instance -> localhost | ok: Item: artifacts Runtime: 0:00:00.223626
2026-04-23 16:29:46.966986 | instance -> localhost | ok: Item: docs Runtime: 0:00:00.006399
2026-04-23 16:29:46.985306 |
2026-04-23 16:29:46.985450 | PLAY [all]
2026-04-23 16:29:46.991933 |
2026-04-23 16:29:46.992006 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2026-04-23 16:29:47.409655 | instance | changed
2026-04-23 16:29:47.417171 |
2026-04-23 16:29:47.417235 | PLAY RECAP
2026-04-23 16:29:47.417294 | instance | ok: 5 changed: 4 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2026-04-23 16:29:47.417323 |
2026-04-23 16:29:47.562557 | POST-RUN END RESULT_NORMAL: [trusted : github.com/vexxhost/zuul-config/playbooks/base/post.yaml@main]
2026-04-23 16:29:47.565945 | POST-RUN START: [trusted : github.com/vexxhost/zuul-config/playbooks/base/post-logs.yaml@main]
2026-04-23 16:29:48.184180 |
2026-04-23 16:29:48.589893 | PLAY [localhost]
2026-04-23 16:29:48.603125 |
2026-04-23 16:29:48.603234 | TASK [Generate Zuul manifest]
2026-04-23 16:29:48.626337 | localhost | ok
2026-04-23 16:29:48.643833 |
2026-04-23 16:29:48.643981 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2026-04-23 16:29:49.391317 | localhost | changed
2026-04-23 16:29:49.404092 |
2026-04-23 16:29:49.404173 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2026-04-23 16:29:49.456496 | localhost | ok
2026-04-23 16:29:49.465734 |
2026-04-23 16:29:49.465839 | TASK [Upload logs]
2026-04-23 16:29:49.500494 | localhost | ok
2026-04-23 16:29:49.631860 |
2026-04-23 16:29:49.632004 | TASK [Set zuul-log-path fact]
2026-04-23 16:29:49.650919 | localhost | ok
2026-04-23 16:29:49.663571 |
2026-04-23 16:29:49.663641 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-04-23 16:29:49.694903 | localhost | ok
2026-04-23 16:29:49.703198 |
2026-04-23 16:29:49.703266 | TASK [upload-logs : Create log directories]
2026-04-23 16:29:50.141533 | localhost | changed
2026-04-23 16:29:50.147945 |
2026-04-23 16:29:50.148043 | TASK [upload-logs : Ensure logs are readable before uploading]
2026-04-23 16:29:50.602332 | localhost -> localhost | ok: Runtime: 0:00:00.016180
2026-04-23 16:29:50.607709 |
2026-04-23 16:29:50.607778 | TASK [upload-logs : Upload logs to log server]
2026-04-23 16:29:51.089260 | localhost | Output suppressed because no_log was given
2026-04-23 16:29:51.094415 |
2026-04-23 16:29:51.094503 | LOOP [upload-logs : Compress console log and json output]
2026-04-23 16:29:51.141992 | localhost | skipping: Conditional result was False
2026-04-23 16:29:51.149024 | localhost | skipping: Conditional result was False
2026-04-23 16:29:51.155123 |
2026-04-23 16:29:51.155272 | LOOP [upload-logs : Upload compressed console log and json output]
2026-04-23 16:29:51.198317 | localhost | skipping: Conditional result was False
2026-04-23 16:29:51.198693 |
2026-04-23 16:29:51.202695 | localhost | skipping: Conditional result was False
2026-04-23 16:29:51.210470 |
2026-04-23 16:29:51.210569 | LOOP [upload-logs : Upload console log and json output]