● np0000155647
    State: running
    Units: 506 loaded (incl. loaded aliases)
     Jobs: 0 queued
   Failed: 0 units
    Since: Mon 2026-02-16 17:51:02 UTC; 25min ago
  systemd: 255.4-1ubuntu8.12
   CGroup: /
           ├─init.scope
           │ └─1 /sbin/init nofb
           ├─system.slice
           │ ├─apache-htcacheclean.service
           │ │ └─14203 /usr/bin/htcacheclean -d 120 -p /var/cache/apache2/mod_cache_disk -l 300M -n
           │ ├─apache2.service
           │ │ ├─122537 /usr/sbin/apache2 -k start
           │ │ ├─122541 /usr/sbin/apache2 -k start
           │ │ └─122542 /usr/sbin/apache2 -k start
           │ ├─containerd.service
           │ │ ├─20631 /usr/bin/containerd
           │ │ └─21507 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 089fb10c98a483eac1b5cbd49445b3386ad4740a6877edc01e9209d9dbf220c7 -address /run/containerd/containerd.sock
           │ ├─dbus.service
           │ │ └─708 @dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation --syslog-only
           │ ├─dm-event.service
           │ │ └─115151 /usr/sbin/dmeventd -f
           │ ├─docker-089fb10c98a483eac1b5cbd49445b3386ad4740a6877edc01e9209d9dbf220c7.scope
           │ │ ├─init.scope
           │ │ │ └─21530 /sbin/init
           │ │ ├─kubelet.slice
           │ │ │ ├─kubelet-kubepods.slice
           │ │ │ │ ├─kubelet-kubepods-besteffort.slice
           │ │ │ │ │ ├─kubelet-kubepods-besteffort-pod060980bd_94df_4b77_8c4f_85019165ff36.slice
           │ │ │ │ │ │ ├─cri-containerd-1433ec82c7d58a1bd88b32542ecb0883d1dd9b071469fb95c79ac08ced6611d2.scope
           │ │ │ │ │ │ │ └─25177 /manager --leader-elect --diagnostics-address=:8443 --insecure-diagnostics=false --feature-gates=MachinePool=true,KubeadmBootstrapFormatIgnition=true,PriorityQueue=false --bootstrap-token-ttl=15m
           │ │ │ │ │ │ └─cri-containerd-5793c32bcc0436f8fe34a586a9c3d691c3891ff8e512284b4424309d965cf4de.scope
           │ │ │ │ │ │   └─24819 /pause
           │ │ │ │ │ ├─kubelet-kubepods-besteffort-pod060f8598_7528_45b1_b3c5_0ca523a34f10.slice
           │ │ │ │ │ │ ├─cri-containerd-a9b853c1562e2d9b5d25818d03170c0ae004e0c3bdcba5684225a09905a1b752.scope
           │ │ │ │ │ │ │ └─24755 /pause
           │ │ │ │ │ │ └─cri-containerd-f03af497176e3521a17e482305315b46ef9a3f06f8def1aa9d3e6b9f8a165825.scope
           │ │ │ │ │ │   └─25002 /manager --leader-elect --diagnostics-address=:8443 --insecure-diagnostics=false --feature-gates=MachinePool=true,ClusterResourceSet=true,ClusterTopology=true,RuntimeSDK=false,MachineSetPreflightChecks=true,MachineWaitForVolumeDetachConsiderVolumeAttachments=true,PriorityQueue=false
           │ │ │ │ │ ├─kubelet-kubepods-besteffort-pod15afed8a_99d8_4a13_9c07_038039770363.slice
           │ │ │ │ │ │ ├─cri-containerd-716b23ff544bc41fa5015a42d703c5554339e9b24ab33edcd77f5b2026964d08.scope
           │ │ │ │ │ │ │ └─25086 /pause
           │ │ │ │ │ │ └─cri-containerd-d9dba19631b25e52efc68930f07a9a022f59c9244d7050bc186b9e4d87d4e755.scope
           │ │ │ │ │ │   └─25471 /manager --leader-elect --v=2 --diagnostics-address=127.0.0.1:8080 --insecure-diagnostics=true
           │ │ │ │ │ ├─kubelet-kubepods-besteffort-pod18fcda39_476d_4f2a_b389_6fff818f42ae.slice
           │ │ │ │ │ │ ├─cri-containerd-10c28a6f48eec284148f84ce3e47fe3004a9cd7bbf1f9abc58e6814017b471be.scope
           │ │ │ │ │ │ │ └─23824 /pause
           │ │ │ │ │ │ └─cri-containerd-720d0f6651c8177a77c9230bf86a48de597bc5c1ec5db6e60bd66fe90064648e.scope
           │ │ │ │ │ │   └─24002 local-path-provisioner --debug start --helper-image docker.io/kindest/local-path-helper:v20220607-9a4d8d2a --config /etc/config/config.json
           │ │ │ │ │ ├─kubelet-kubepods-besteffort-pod2bdc6e4c_0088_47cb_be88_ab92547b89ae.slice
           │ │ │ │ │ │ ├─cri-containerd-5e10cc4be68610c8fdfce2d97284fa2a72defecd8f9dbf1195ed33469751fc11.scope
           │ │ │ │ │ │ │ └─23841 /pause
           │ │ │ │ │ │ └─cri-containerd-6604ce9efc54877b4d953350f7e65a5f444d760019403df0dbdd867c03f80c27.scope
           │ │ │ │ │ │   └─24200 /app/cmd/controller/controller --v=2 --cluster-resource-namespace=cert-manager --leader-election-namespace=kube-system --acme-http01-solver-image=quay.io/jetstack/cert-manager-acmesolver:v1.18.1 --max-concurrent-challenges=60
           │ │ │ │ │ ├─kubelet-kubepods-besteffort-pod60d2d5fb_575b_4758_90d1_81d8244a7f54.slice
           │ │ │ │ │ │ ├─cri-containerd-5c88b5d8462c6a19822bef36e5c767ce579a2df6df3f201d51bb2c4a584c158d.scope
           │ │ │ │ │ │ │ └─24971 /pause
           │ │ │ │ │ │ └─cri-containerd-75f147194ebc30a5d1f5ba46bc89ad0ee081af58bddf008e5031627017ed8994.scope
           │ │ │ │ │ │   └─25312 /manager --leader-elect --diagnostics-address=:8443 --insecure-diagnostics=false --feature-gates=MachinePool=true,ClusterTopology=true,KubeadmBootstrapFormatIgnition=true,PriorityQueue=false
           │ │ │ │ │ ├─kubelet-kubepods-besteffort-pod96cc9fa3_1069_4840_9c97_4d69571ebb29.slice
           │ │ │ │ │ │ ├─cri-containerd-49b695fce71dcafd713509840486eda46a4808bc556797e8cbb013c178adf453.scope
           │ │ │ │ │ │ │ └─24091 /pause
           │ │ │ │ │ │ └─cri-containerd-6d62f3e77e85aff076f4bf174cf00cbbe5d08b7b01c315a48b33f121763c8447.scope
           │ │ │ │ │ │   └─24533 /app/cmd/webhook/webhook --v=2 --secure-port=10250 --dynamic-serving-ca-secret-namespace=cert-manager --dynamic-serving-ca-secret-name=cert-manager-webhook-ca --dynamic-serving-dns-names=cert-manager-webhook --dynamic-serving-dns-names=cert-manager-webhook.cert-manager --dynamic-serving-dns-names=cert-manager-webhook.cert-manager.svc
           │ │ │ │ │ ├─kubelet-kubepods-besteffort-pod9f8355b0_94bd_475d_bf74_9d386d0f5259.slice
           │ │ │ │ │ │ ├─cri-containerd-7a0648fca673c15649668c644af739b895a375f8e0a5ec9c7cf267b897aa4815.scope
           │ │ │ │ │ │ │ └─24078 /pause
           │ │ │ │ │ │ └─cri-containerd-f55b671d3ac5e81b9dc93f8fe60e82166337b18ff359ea71e86690ffa57838b4.scope
           │ │ │ │ │ │   └─24411 /app/cmd/cainjector/cainjector --v=2 --leader-election-namespace=kube-system
           │ │ │ │ │ └─kubelet-kubepods-besteffort-podeba8cea0_a113_40e4_8af9_f9092b483360.slice
           │ │ │ │ │   ├─cri-containerd-05883a8b5dcc6186bc92366d098adf91aacc2716c5f4654d5faee7250595663b.scope
           │ │ │ │ │   │ └─23221 /pause
           │ │ │ │ │   └─cri-containerd-68a9f7e8b2f1594edc9ae113bf7569a1d5ed85bb82562eb4364f5331c9f598ca.scope
           │ │ │ │ │     └─23271 /usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=kind-control-plane
           │ │ │ │ ├─kubelet-kubepods-burstable.slice
           │ │ │ │ │ ├─kubelet-kubepods-burstable-pod0656ab70da313d6449b17f099a2a3110.slice
           │ │ │ │ │ │ ├─cri-containerd-6d4579b16512918eddfa28c91b9b82464468be359a2a61c9fea7dc7b7ab46364.scope
           │ │ │ │ │ │ │ └─22509 etcd --advertise-client-urls=https://172.18.0.2:2379 --cert-file=/etc/kubernetes/pki/etcd/server.crt --client-cert-auth=true --data-dir=/var/lib/etcd --experimental-initial-corrupt-check=true --experimental-watch-progress-notify-interval=5s --initial-advertise-peer-urls=https://172.18.0.2:2380 --initial-cluster=kind-control-plane=https://172.18.0.2:2380 --key-file=/etc/kubernetes/pki/etcd/server.key --listen-client-urls=https://127.0.0.1:2379,https://172.18.0.2:2379 --listen-metrics-urls=http://127.0.0.1:2381 --listen-peer-urls=https://172.18.0.2:2380 --name=kind-control-plane --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt --peer-client-cert-auth=true --peer-key-file=/etc/kubernetes/pki/etcd/peer.key --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt --snapshot-count=10000 --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
           │ │ │ │ │ │ └─cri-containerd-a181437f07c759cb357dc1c5a4915703fdbc8421ac7dbe7ae420af7bbf5ce962.scope
           │ │ │ │ │ │   └─22255 /pause
           │ │ │ │ │ ├─kubelet-kubepods-burstable-pod53ff6c8abd472f64bc9a9afbd3a471a9.slice
           │ │ │ │ │ │ ├─cri-containerd-9213f7ed733001c44536ef277d2f04abbec812060f0464459c296548b41c605b.scope
           │ │ │ │ │ │ │ └─22269 /pause
           │ │ │ │ │ │ └─cri-containerd-e5efc56a027eace488dd3cff0e461733af3798de3cb89fefc0a233cd6d868383.scope
           │ │ │ │ │ │   └─22372 kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf --bind-address=127.0.0.1 --client-ca-file=/etc/kubernetes/pki/ca.crt --cluster-cidr=10.244.0.0/16 --cluster-name=kind --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt --cluster-signing-key-file=/etc/kubernetes/pki/ca.key "--controllers=*,bootstrapsigner,tokencleaner" --enable-hostpath-provisioner=true --kubeconfig=/etc/kubernetes/controller-manager.conf --leader-elect=true --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --root-ca-file=/etc/kubernetes/pki/ca.crt --service-account-private-key-file=/etc/kubernetes/pki/sa.key --service-cluster-ip-range=10.96.0.0/16 --use-service-account-credentials=true
           │ │ │ │ │ ├─kubelet-kubepods-burstable-pod65d25134_75a8_44c0_b994_37071db70c0b.slice
           │ │ │ │ │ │ ├─cri-containerd-0845db3249e178e867783ec1010a54509a9a6f20e675db3c40bafc17b94aff4a.scope
           │ │ │ │ │ │ │ └─23596 /pause
           │ │ │ │ │ │ └─cri-containerd-a1bcb37a57c99f9a954339f4c95765996f1fc2161db6fc87722931f900073eac.scope
           │ │ │ │ │ │   └─23685 /coredns -conf /etc/coredns/Corefile
           │ │ │ │ │ ├─kubelet-kubepods-burstable-pod922d5a86_cf0c_4898_9361_4f7a1724917a.slice
           │ │ │ │ │ │ ├─cri-containerd-5f0448337760a1a92cb526a9ebe910cc2ad19bc77e1dfc4e2691cc1e11943675.scope
           │ │ │ │ │ │ │ └─23927 /pause
           │ │ │ │ │ │ └─cri-containerd-d0f8a8d96527dbcca96f1dd0492e8b1ba70ee11008c068b797069a257b450b1d.scope
           │ │ │ │ │ │   └─24314 /manager --metrics-bind-address=:8443 --leader-elect --health-probe-bind-address=:8081
           │ │ │ │ │ ├─kubelet-kubepods-burstable-podbee69ab63b6471d4da666ee970746eae.slice
           │ │ │ │ │ │ ├─cri-containerd-5191216187fdcf29cecbd6140d2896e536a2153a60d5d507dc419302faacfe0d.scope
           │ │ │ │ │ │ │ └─22253 /pause
           │ │ │ │ │ │ └─cri-containerd-92b6f098aaae83573340f2ea18f968ceaff832acd7b11fb4c99b6ac6d401b2fe.scope
           │ │ │ │ │ │   └─22350 kube-scheduler --authentication-kubeconfig=/etc/kubernetes/scheduler.conf --authorization-kubeconfig=/etc/kubernetes/scheduler.conf --bind-address=127.0.0.1 --kubeconfig=/etc/kubernetes/scheduler.conf --leader-elect=true
           │ │ │ │ │ ├─kubelet-kubepods-burstable-podcbd4ee29_9a60_4f24_babe_75a79e0262a8.slice
           │ │ │ │ │ │ ├─cri-containerd-45366b6128ee283ed8e551bd4cfbb23d9e41caf8e9df563caa159264f5fb5a8f.scope
           │ │ │ │ │ │ │ └─23604 /pause
           │ │ │ │ │ │ └─cri-containerd-c82673298311c208438753cc6f9980d181abc401eaf370dd7390fdbc968f243a.scope
           │ │ │ │ │ │   └─23676 /coredns -conf /etc/coredns/Corefile
           │ │ │ │ │ └─kubelet-kubepods-burstable-podef6ebc9842be361e05ebdb6790c540b6.slice
           │ │ │ │ │   ├─cri-containerd-048d36383cc06e32dd1ab4003dee6a9343d2ccda4cb4eb939db648f16ac6f620.scope
           │ │ │ │ │   │ └─22272 /pause
           │ │ │ │ │   └─cri-containerd-d4a9d2a347b177fb443b9691e9438d1c0ee06ea2f1d19bf68afb66b1353f589c.scope
           │ │ │ │ │     └─22412 kube-apiserver --advertise-address=172.18.0.2 --allow-privileged=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/pki/ca.crt --enable-admission-plugins=NodeRestriction --enable-bootstrap-token-auth=true --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key --etcd-servers=https://127.0.0.1:2379 --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --runtime-config= --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/etc/kubernetes/pki/sa.pub --service-account-signing-key-file=/etc/kubernetes/pki/sa.key --service-cluster-ip-range=10.96.0.0/16 --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
           │ │ │ │ └─kubelet-kubepods-podad85b7c2_f9f9_4ec9_b260_341f20aa22ff.slice
           │ │ │ │   ├─cri-containerd-840e0998fae8407b6859c75f3049d4116708e1a099a2af82c2a231ec21648183.scope
           │ │ │ │   │ └─23228 /pause
           │ │ │ │   └─cri-containerd-8a5e3ce32b811e69fef7d0bd0b708db17b4ffe5f3648638f8d7369dee746a825.scope
           │ │ │ │     └─23315 /bin/kindnetd
           │ │ │ └─kubelet.service
           │ │ │   └─22594 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --node-ip=172.18.0.2 --node-labels= --pod-infra-container-image=registry.k8s.io/pause:3.8 --provider-id=kind://docker/kind/kind-control-plane --fail-swap-on=false --cgroup-root=/kubelet
           │ │ └─system.slice
           │ │   ├─containerd.service
           │ │   │ ├─21726 /usr/local/bin/containerd
           │ │   │ ├─22165 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id a181437f07c759cb357dc1c5a4915703fdbc8421ac7dbe7ae420af7bbf5ce962 -address /run/containerd/containerd.sock
           │ │   │ ├─22172 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id 5191216187fdcf29cecbd6140d2896e536a2153a60d5d507dc419302faacfe0d -address /run/containerd/containerd.sock
           │ │   │ ├─22182 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id 048d36383cc06e32dd1ab4003dee6a9343d2ccda4cb4eb939db648f16ac6f620 -address /run/containerd/containerd.sock
           │ │   │ ├─22207 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id 9213f7ed733001c44536ef277d2f04abbec812060f0464459c296548b41c605b -address /run/containerd/containerd.sock
           │ │   │ ├─23175 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id 05883a8b5dcc6186bc92366d098adf91aacc2716c5f4654d5faee7250595663b -address /run/containerd/containerd.sock
           │ │   │ ├─23197 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id 840e0998fae8407b6859c75f3049d4116708e1a099a2af82c2a231ec21648183 -address /run/containerd/containerd.sock
           │ │   │ ├─23556 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id 45366b6128ee283ed8e551bd4cfbb23d9e41caf8e9df563caa159264f5fb5a8f -address /run/containerd/containerd.sock
           │ │   │ ├─23564 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id 0845db3249e178e867783ec1010a54509a9a6f20e675db3c40bafc17b94aff4a -address /run/containerd/containerd.sock
           │ │   │ ├─23750 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id 10c28a6f48eec284148f84ce3e47fe3004a9cd7bbf1f9abc58e6814017b471be -address /run/containerd/containerd.sock
           │ │   │ ├─23778 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id 5e10cc4be68610c8fdfce2d97284fa2a72defecd8f9dbf1195ed33469751fc11 -address /run/containerd/containerd.sock
           │ │   │ ├─23907 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id 5f0448337760a1a92cb526a9ebe910cc2ad19bc77e1dfc4e2691cc1e11943675 -address /run/containerd/containerd.sock
           │ │   │ ├─24027 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id 7a0648fca673c15649668c644af739b895a375f8e0a5ec9c7cf267b897aa4815 -address /run/containerd/containerd.sock
           │ │   │ ├─24053 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id 49b695fce71dcafd713509840486eda46a4808bc556797e8cbb013c178adf453 -address /run/containerd/containerd.sock
           │ │   │ ├─24730 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id a9b853c1562e2d9b5d25818d03170c0ae004e0c3bdcba5684225a09905a1b752 -address /run/containerd/containerd.sock
           │ │   │ ├─24799 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id 5793c32bcc0436f8fe34a586a9c3d691c3891ff8e512284b4424309d965cf4de -address /run/containerd/containerd.sock
           │ │   │ ├─24951 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id 5c88b5d8462c6a19822bef36e5c767ce579a2df6df3f201d51bb2c4a584c158d -address /run/containerd/containerd.sock
           │ │   │ └─25067 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id 716b23ff544bc41fa5015a42d703c5554339e9b24ab33edcd77f5b2026964d08 -address /run/containerd/containerd.sock
           │ │   └─systemd-journald.service
           │ │     └─21712 /lib/systemd/systemd-journald
           │ ├─docker.service
           │ │ ├─20760 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
           │ │ └─21600 /usr/bin/docker-proxy -proto tcp -host-ip 127.0.0.1 -host-port 36617 -container-ip 172.18.0.2 -container-port 6443 -use-listen-fd
           │ ├─epmd.service
           │ │ └─26075 /usr/bin/epmd -systemd
           │ ├─fsidd.service
           │ │ └─54639 /usr/sbin/fsidd
           │ ├─haproxy.service
           │ │ ├─13241 /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -S /run/haproxy-master.sock
           │ │ └─13243 /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -S /run/haproxy-master.sock
           │ ├─iscsid.service
           │ │ ├─44108 /usr/sbin/iscsid
           │ │ └─44109 /usr/sbin/iscsid
           │ ├─ksmtuned.service
           │ │ ├─ 5045 /bin/bash /usr/sbin/ksmtuned
           │ │ └─130723 sleep 60
           │ ├─libvirtd.service
           │ │ ├─ 42979 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/lib/libvirt/libvirt_leaseshelper
           │ │ ├─ 42980 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/lib/libvirt/libvirt_leaseshelper
           │ │ └─112159 /usr/sbin/libvirtd --timeout 120
           │ ├─memcached.service
           │ │ └─66471 /usr/bin/memcached -m 64 -p 11211 -u memcache -l 127.0.0.1 -l ::1 -P /var/run/memcached/memcached.pid
           │ ├─mysql.service
           │ │ └─62997 /usr/sbin/mysqld
           │ ├─nfs-blkmap.service
           │ │ └─54644 /usr/sbin/blkmapd
           │ ├─nfs-idmapd.service
           │ │ └─54647 /usr/sbin/rpc.idmapd
           │ ├─nfs-mountd.service
           │ │ └─54657 /usr/sbin/rpc.mountd
           │ ├─nfsdcld.service
           │ │ └─54659 /usr/sbin/nfsdcld
           │ ├─nmbd.service
           │ │ └─55225 /usr/sbin/nmbd --foreground --no-process-group
           │ ├─ovn-controller-vtep.service
           │ │ └─100859 ovn-controller-vtep -vconsole:emer -vsyslog:err -vfile:info --vtep-db=/var/run/openvswitch/db.sock --ovnsb-db=/var/run/ovn/ovnsb_db.sock --no-chdir --log-file=/var/log/ovn/ovn-controller-vtep.log --pidfile=/var/run/ovn/ovn-controller-vtep.pid --detach
           │ ├─ovn-controller.service
           │ │ └─101600 ovn-controller unix:/var/run/openvswitch/db.sock -vconsole:emer -vsyslog:err -vfile:info --no-chdir --log-file=/var/log/ovn/ovn-controller.log --pidfile=/var/run/ovn/ovn-controller.pid --detach
           │ ├─ovn-northd.service
           │ │ └─101269 ovn-northd -vconsole:emer -vsyslog:err -vfile:info --ovnnb-db=unix:/var/run/ovn/ovnnb_db.sock --ovnsb-db=unix:/var/run/ovn/ovnsb_db.sock --no-chdir --log-file=/var/log/ovn/ovn-northd.log --pidfile=/var/run/ovn/ovn-northd.pid --detach
           │ ├─ovn-ovsdb-server-nb.service
           │ │ └─101194 ovsdb-server -vconsole:off -vfile:info --log-file=/var/log/ovn/ovsdb-server-nb.log --remote=punix:/var/run/ovn/ovnnb_db.sock --pidfile=/var/run/ovn/ovnnb_db.pid --unixctl=/var/run/ovn/ovnnb_db.ctl --remote=db:OVN_Northbound,NB_Global,connections --private-key=db:OVN_Northbound,SSL,private_key --certificate=db:OVN_Northbound,SSL,certificate --ca-cert=db:OVN_Northbound,SSL,ca_cert --ssl-protocols=db:OVN_Northbound,SSL,ssl_protocols --ssl-ciphers=db:OVN_Northbound,SSL,ssl_ciphers /var/lib/ovn/ovnnb_db.db
           │ ├─ovn-ovsdb-server-sb.service
           │ │ └─101199 ovsdb-server -vconsole:off -vfile:info --log-file=/var/log/ovn/ovsdb-server-sb.log --remote=punix:/var/run/ovn/ovnsb_db.sock --pidfile=/var/run/ovn/ovnsb_db.pid --unixctl=/var/run/ovn/ovnsb_db.ctl --remote=db:OVN_Southbound,SB_Global,connections --private-key=db:OVN_Southbound,SSL,private_key --certificate=db:OVN_Southbound,SSL,certificate --ca-cert=db:OVN_Southbound,SSL,ca_cert --ssl-protocols=db:OVN_Southbound,SSL,ssl_protocols --ssl-ciphers=db:OVN_Southbound,SSL,ssl_ciphers /var/lib/ovn/ovnsb_db.db
           │ ├─ovs-vswitchd.service
           │ │ └─100770 ovs-vswitchd unix:/var/run/openvswitch/db.sock -vconsole:emer -vsyslog:err -vfile:info --mlockall --no-chdir --log-file=/var/log/openvswitch/ovs-vswitchd.log --pidfile=/var/run/openvswitch/ovs-vswitchd.pid --detach
           │ ├─ovsdb-server.service
           │ │ └─100719 ovsdb-server /etc/openvswitch/conf.db -vconsole:emer -vsyslog:err -vfile:info --remote=punix:/var/run/openvswitch/db.sock --private-key=db:Open_vSwitch,SSL,private_key --certificate=db:Open_vSwitch,SSL,certificate --bootstrap-ca-cert=db:Open_vSwitch,SSL,ca_cert --no-chdir --log-file=/var/log/openvswitch/ovsdb-server.log --pidfile=/var/run/openvswitch/ovsdb-server.pid --detach
           │ ├─polkit.service
           │ │ └─745 /usr/lib/polkit-1/polkitd --no-debug
           │ ├─rabbitmq-server.service
           │ │ ├─26195 /usr/lib/erlang/erts-13.2.2.5/bin/beam.smp -W w -MBas ageffcbf -MHas ageffcbf -MBlmbcs 512 -MHlmbcs 512 -MMmcs 30 -pc unicode -P 1048576 -t 5000000 -stbt db -zdbbl 128000 -sbwt none -sbwtdcpu none -sbwtdio none -- -root /usr/lib/erlang -bindir /usr/lib/erlang/erts-13.2.2.5/bin -progname erl -- -home /var/lib/rabbitmq -- -pa "" -noshell -noinput -s rabbit boot -boot start_sasl -syslog logger "[]" -syslog syslog_error_logger false -kernel prevent_overlapping_partitions false -enable-feature maybe_expr
           │ │ ├─26205 erl_child_setup 65536
           │ │ ├─26313 /usr/lib/erlang/erts-13.2.2.5/bin/inet_gethost 4
           │ │ ├─26314 /usr/lib/erlang/erts-13.2.2.5/bin/inet_gethost 4
           │ │ └─26319 /bin/sh -s rabbit_disk_monitor
           │ ├─rpc-statd.service
           │ │ └─54648 /usr/sbin/rpc.statd
           │ ├─rpcbind.service
           │ │ └─54015 /sbin/rpcbind -f -w
           │ ├─rsyslog.service
           │ │ └─125254 /usr/sbin/rsyslogd -n -iNONE
           │ ├─smbd.service
           │ │ ├─55156 /usr/sbin/smbd --foreground --no-process-group
           │ │ ├─55160 "smbd: notifyd"
           │ │ └─55161 "smbd: cleanupd "
           │ ├─ssh.service
           │ │ └─746 "sshd: /usr/sbin/sshd -D [listener] 0 of 10-100 startups"
           │ ├─system-devstack.slice
           │ │ ├─devstack@barbican-keystone-listener.service
           │ │ │ ├─118155 "barbican-keystone-listener: master process [/opt/stack/data/venv/bin/barbican-keystone-listener --config-file=/etc/barbican/barbican.conf]"
           │ │ │ └─118402 "barbican-keystone-listener: ServiceWrapper worker(0)"
           │ │ ├─devstack@barbican-retry.service
           │ │ │ ├─117625 "barbican-retry: master process [/opt/stack/data/venv/bin/barbican-retry --config-file=/etc/barbican/barbican.conf]"
           │ │ │ └─117918 "barbican-retry: ServiceWrapper worker(0)"
           │ │ ├─devstack@barbican-svc.service
           │ │ │ ├─117084 /opt/stack/data/venv/bin/uwsgi --ini /etc/barbican/barbican-uwsgi.ini --venv /opt/stack/data/venv
           │ │ │ ├─117085 /opt/stack/data/venv/bin/uwsgi --ini /etc/barbican/barbican-uwsgi.ini --venv /opt/stack/data/venv
           │ │ │ ├─117086 /opt/stack/data/venv/bin/uwsgi --ini /etc/barbican/barbican-uwsgi.ini --venv /opt/stack/data/venv
           │ │ │ ├─117087 /opt/stack/data/venv/bin/uwsgi --ini /etc/barbican/barbican-uwsgi.ini --venv /opt/stack/data/venv
           │ │ │ └─117088 /opt/stack/data/venv/bin/uwsgi --ini /etc/barbican/barbican-uwsgi.ini --venv /opt/stack/data/venv
           │ │ ├─devstack@c-api.service
           │ │ │ ├─112517 "cinder-apiuWSGI master"
           │ │ │ ├─112522 "cinder-apiuWSGI worker 1"
           │ │ │ ├─112523 "cinder-apiuWSGI worker 2"
           │ │ │ ├─112524 "cinder-apiuWSGI worker 3"
           │ │ │ └─112525 "cinder-apiuWSGI worker 4"
           │ │ ├─devstack@c-bak.service
           │ │ │ └─113815 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/cinder-backup --config-file /etc/cinder/cinder.conf
           │ │ ├─devstack@c-sch.service
           │ │ │ └─113235 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/cinder-scheduler --config-file /etc/cinder/cinder.conf
           │ │ ├─devstack@c-vol.service
           │ │ │ ├─114396 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/cinder-volume --config-file /etc/cinder/cinder.conf
           │ │ │ └─114685 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/cinder-volume --config-file /etc/cinder/cinder.conf
           │ │ ├─devstack@etcd.service
           │ │ │ └─64797 /opt/stack/bin/etcd --name np0000155647 --data-dir /opt/stack/data/etcd --initial-cluster-state new --initial-cluster-token etcd-cluster-01 --initial-cluster np0000155647=http://199.204.45.4:2380 --initial-advertise-peer-urls http://199.204.45.4:2380 --advertise-client-urls http://199.204.45.4:2379 --listen-peer-urls http://0.0.0.0:2380 --listen-client-urls http://199.204.45.4:2379 --log-level=debug
           │ │ ├─devstack@file_tracker.service
           │ │ │ ├─ 64151 /bin/bash /opt/stack/devstack/tools/file_tracker.sh
           │ │ │ └─129429 sleep 20
           │ │ ├─devstack@g-api.service
           │ │ │ ├─115234 "glance-apiuWSGI master"
           │ │ │ ├─115235 "glance-apiuWSGI worker 1"
           │ │ │ ├─115236 "glance-apiuWSGI worker 2"
           │ │ │ ├─115237 "glance-apiuWSGI worker 3"
           │ │ │ └─115238 "glance-apiuWSGI worker 4"
           │ │ ├─devstack@keystone.service
           │ │ │ ├─66049 "keystoneuWSGI master"
           │ │ │ ├─66057 "keystoneuWSGI worker 1"
           │ │ │ ├─66058 "keystoneuWSGI worker 2"
           │ │ │ ├─66059 "keystoneuWSGI worker 3"
           │ │ │ └─66060 "keystoneuWSGI worker 4"
           │ │ ├─devstack@m-api.service
           │ │ │ ├─122152 "manila-apiuWSGI master"
           │ │ │ ├─122153 "manila-apiuWSGI worker 1"
           │ │ │ ├─122154 "manila-apiuWSGI worker 2"
           │ │ │ ├─122155 "manila-apiuWSGI worker 3"
           │ │ │ └─122156 "manila-apiuWSGI worker 4"
           │ │ ├─devstack@m-dat.service
           │ │ │ └─128394 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/manila-data --config-file /etc/manila/manila.conf
           │ │ ├─devstack@m-sch.service
           │ │ │ └─127822 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/manila-scheduler --config-file /etc/manila/manila.conf
           │ │ ├─devstack@m-shr.service
           │ │ │ ├─127286 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/manila-share --config-file /etc/manila/manila.conf
           │ │ │ └─127637 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/manila-share --config-file /etc/manila/manila.conf
           │ │ ├─devstack@magnum-api.service
           │ │ │ ├─119710 "magnum-apiuWSGI master"
           │ │ │ ├─119712 "magnum-apiuWSGI worker 1"
           │ │ │ ├─119713 "magnum-apiuWSGI worker 2"
           │ │ │ ├─119714 "magnum-apiuWSGI worker 3"
           │ │ │ └─119715 "magnum-apiuWSGI worker 4"
           │ │ ├─devstack@magnum-cond.service
           │ │ │ ├─120306 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/magnum-conductor
           │ │ │ ├─120636 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/magnum-conductor
           │ │ │ ├─120638 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/magnum-conductor
           │ │ │ ├─120640 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/magnum-conductor
           │ │ │ ├─120641 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/magnum-conductor
           │ │ │ ├─120642 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/magnum-conductor
           │ │ │ ├─120644 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/magnum-conductor
           │ │ │ ├─120647 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/magnum-conductor
           │ │ │ ├─120648 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/magnum-conductor
           │ │ │ ├─120651 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/magnum-conductor
           │ │ │ ├─120653 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/magnum-conductor
           │ │ │ ├─120655 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/magnum-conductor
           │ │ │ ├─120657 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/magnum-conductor
           │ │ │ ├─120659 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/magnum-conductor
           │ │ │ ├─120663 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/magnum-conductor
           │ │ │ ├─120668 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/magnum-conductor
           │ │ │ └─120669 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/magnum-conductor
           │ │ ├─devstack@memory_tracker.service
           │ │ │ ├─ 63656 /bin/bash /opt/stack/devstack/tools/memory_tracker.sh
           │ │ │ └─129419 sleep 20
           │ │ ├─devstack@n-api-meta.service
           │ │ │ ├─108345 "nova-api-metauWSGI master"
           │ │ │ ├─108346 "nova-api-metauWSGI worker 1"
           │ │ │ ├─108347 "nova-api-metauWSGI worker 2"
           │ │ │ ├─108348 "nova-api-metauWSGI worker 3"
           │ │ │ ├─108349 "nova-api-metauWSGI worker 4"
           │ │ │ └─108350 "nova-api-metauWSGI http 1"
           │ │ ├─devstack@n-api.service
           │ │ │ ├─99874 "nova-apiuWSGI master"
           │ │ │ ├─99875 "nova-apiuWSGI worker 1"
           │ │ │ ├─99876 "nova-apiuWSGI worker 2"
           │ │ │ ├─99877 "nova-apiuWSGI worker 3"
           │ │ │ └─99878 "nova-apiuWSGI worker 4"
           │ │ ├─devstack@n-cond-cell1.service
           │ │ │ ├─110436 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/nova-conductor --config-file /etc/nova/nova_cell1.conf
           │ │ │ ├─111018 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/nova-conductor --config-file /etc/nova/nova_cell1.conf
           │ │ │ ├─111019 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/nova-conductor --config-file /etc/nova/nova_cell1.conf
           │ │ │ ├─111021 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/nova-conductor --config-file /etc/nova/nova_cell1.conf
           │ │ │ └─111022 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/nova-conductor --config-file /etc/nova/nova_cell1.conf
           │ │ ├─devstack@n-cpu.service
           │ │ │ └─111521 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/nova-compute --config-file /etc/nova/nova-cpu.conf
           │ │ ├─devstack@n-novnc-cell1.service
           │ │ │ └─109046 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/nova-novncproxy --config-file /etc/nova/nova_cell1.conf --web /opt/stack/novnc
           │ │ ├─devstack@n-sch.service
           │ │ │ ├─107735 "nova-scheduler: master process [/opt/stack/data/venv/bin/nova-scheduler --config-file /etc/nova/nova.conf]"
           │ │ │ ├─108462 "nova-scheduler: ServiceWrapper worker(0)"
           │ │ │ ├─108471 "nova-scheduler: ServiceWrapper worker(1)"
           │ │ │ ├─108480 "nova-scheduler: ServiceWrapper worker(2)"
           │ │ │ └─108488 "nova-scheduler: ServiceWrapper worker(3)"
           │ │ ├─devstack@n-super-cond.service
           │ │ │ ├─109828 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/nova-conductor --config-file /etc/nova/nova.conf
           │ │ │ ├─110421 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/nova-conductor --config-file /etc/nova/nova.conf
           │ │ │ ├─110422 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/nova-conductor --config-file /etc/nova/nova.conf
           │ │ │ ├─110423 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/nova-conductor --config-file /etc/nova/nova.conf
           │ │ │ └─110424 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/nova-conductor --config-file /etc/nova/nova.conf
           │ │ ├─devstack@neutron-api.service
           │ │ │ ├─103265 "neutron-apiuWSGI master"
           │ │ │ ├─103266 "neutron-apiuWSGI worker 1"
           │ │ │ ├─103267 "neutron-apiuWSGI worker 2"
           │ │ │ ├─103268 "neutron-apiuWSGI worker 3"
           │ │ │ └─103269 "neutron-apiuWSGI worker 4"
           │ │ ├─devstack@neutron-ovn-maintenance-worker.service
           │ │ │ ├─104760 "neutron-ovn-maintenance-worker: master process [/opt/stack/data/venv/bin/neutron-ovn-maintenance-worker --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini]"
           │ │ │ └─105533 "neutron-server: maintenance worker (/opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/neutron-ovn-maintenance-worker --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini)"
           │ │ ├─devstack@neutron-periodic-workers.service
           │ │ │ ├─104263 "neutron-periodic-workers: master process [/opt/stack/data/venv/bin/neutron-periodic-workers --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini]"
           │ │ │ ├─104984 "neutron-server: periodic worker (/opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/neutron-periodic-workers --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini)"
           │ │ │ ├─104993 "neutron-server: periodic worker (/opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/neutron-periodic-workers --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini)"
           │ │ │ ├─105004 "neutron-server: periodic worker (/opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/neutron-periodic-workers --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini)"
           │ │ │ └─105017 "neutron-server: periodic worker (/opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/neutron-periodic-workers --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini)"
           │ │ ├─devstack@neutron-rpc-server.service
           │ │ │ ├─103751 "neutron-rpc-server: master process [/opt/stack/data/venv/bin/neutron-rpc-server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini]"
           │ │ │ ├─104906 "neutron-server: rpc worker (/opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/neutron-rpc-server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini)"
           │ │ │ └─104914 "neutron-server: rpc worker (/opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/neutron-rpc-server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini)"
           │ │ ├─devstack@o-api.service
           │ │ │ ├─123997 /bin/uwsgi --ini /etc/octavia/octavia-uwsgi.ini --venv /opt/stack/data/venv
           │ │ │ ├─123998 /bin/uwsgi --ini /etc/octavia/octavia-uwsgi.ini --venv /opt/stack/data/venv
           │ │ │ ├─123999 /bin/uwsgi --ini /etc/octavia/octavia-uwsgi.ini --venv /opt/stack/data/venv
           │ │ │ ├─124000 /bin/uwsgi --ini /etc/octavia/octavia-uwsgi.ini --venv /opt/stack/data/venv
           │ │ │ └─124001 /bin/uwsgi --ini /etc/octavia/octavia-uwsgi.ini --venv /opt/stack/data/venv
           │ │ ├─devstack@o-da.service
           │ │ │ ├─124527 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/octavia-driver-agent --config-file /etc/octavia/octavia.conf
           │ │ │ ├─125280 "octavia-driver-agent - status_listener"
           │ │ │ ├─125283 "octavia-driver-agent - stats_listener"
           │ │ │ ├─125285 "octavia-driver-agent - get_listener"
           │ │ │ └─125384 "octavia-driver-agent - provider_agent -- ovn"
           │ │ ├─devstack@o-hk.service
           │ │ │ └─125136 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/octavia-housekeeping --config-file /etc/octavia/octavia.conf
           │ │ ├─devstack@openstack-cli-server.service
           │ │ │ └─62235 /opt/stack/data/venv/bin/python3 /opt/stack/devstack/files/openstack-cli-server/openstack-cli-server
           │ │ ├─devstack@placement-api.service
           │ │ │ ├─105545 "placementuWSGI master"
           │ │ │ ├─105547 "placementuWSGI worker 1"
           │ │ │ ├─105548 "placementuWSGI worker 2"
           │ │ │ ├─105549 "placementuWSGI worker 3"
           │ │ │ └─105550 "placementuWSGI worker 4"
           │ │ └─devstack@q-ovn-agent.service
           │ │   ├─102144 "neutron-ovn-agent: master process [/opt/stack/data/venv/bin/neutron-ovn-agent --config-file /etc/neutron/plugins/ml2/ovn_agent.ini]"
           │ │   ├─102627 "neutron-ovn-agent: ServiceWrapper worker(0)"
           │ │   ├─102934 /opt/stack/data/venv/bin/python3.12 /usr/local/bin/privsep-helper --config-file /etc/neutron/plugins/ml2/ovn_agent.ini --privsep_context neutron.privileged.namespace_cmd --privsep_sock_path /tmp/tmp0w79_nvv/privsep.sock
           │ │   ├─106395 /opt/stack/data/venv/bin/python3.12 /usr/local/bin/privsep-helper --config-file /etc/neutron/plugins/ml2/ovn_agent.ini --privsep_context neutron.privileged.default --privsep_sock_path /tmp/tmpcnh8_ng3/privsep.sock
           │ │   ├─128352 /opt/stack/data/venv/bin/python3.12 /usr/local/bin/privsep-helper --config-file /etc/neutron/plugins/ml2/ovn_agent.ini --privsep_context neutron.privileged.link_cmd --privsep_sock_path /tmp/tmp7dfz2ixi/privsep.sock
           │ │   ├─128782 sudo /opt/stack/data/venv/bin/neutron-rootwrap-daemon /etc/neutron/rootwrap.conf
           │ │   ├─128784 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/neutron-rootwrap-daemon /etc/neutron/rootwrap.conf
           │ │   └─128812 haproxy -f /opt/stack/data/neutron/ovn-metadata-proxy/1ddaf2af-8333-48ec-a71c-3dafdea80472.conf
           │ ├─system-getty.slice
           │ │ └─getty@tty1.service
           │ │   └─726 /sbin/agetty -o "-p 
-- \\u" --noclear - linux │ ├─system-serial\x2dgetty.slice │ │ └─serial-getty@ttyS0.service │ │ └─727 /sbin/agetty -o "-p -- \\u" --keep-baud 115200,57600,38400,9600 - vt220 │ ├─systemd-journald.service │ │ └─19011 /usr/lib/systemd/systemd-journald │ ├─systemd-logind.service │ │ └─712 /usr/lib/systemd/systemd-logind │ ├─systemd-machined.service │ │ └─42877 /usr/lib/systemd/systemd-machined │ ├─systemd-networkd.service │ │ └─602 /usr/lib/systemd/systemd-networkd │ ├─systemd-resolved.service │ │ └─460 /usr/lib/systemd/systemd-resolved │ ├─systemd-timesyncd.service │ │ └─463 /usr/lib/systemd/systemd-timesyncd │ ├─systemd-udevd.service │ │ └─udev │ │ └─453 /usr/lib/systemd/systemd-udevd │ ├─virtlockd.service │ │ └─43092 /usr/sbin/virtlockd │ └─virtlogd.service │ └─48673 /usr/sbin/virtlogd └─user.slice └─user-1000.slice ├─session-1.scope │ ├─ 828 "sshd: zuul [priv]" │ ├─ 849 "sshd: zuul@notty" │ ├─ 1054 /usr/bin/python3 │ ├─130815 sh -c "/bin/sh -c 'sudo -H -S -n -u root /bin/sh -c '\"'\"'echo BECOME-SUCCESS-teaxywwfhlmfpoullrqpxwihwgjieefl ; /usr/bin/python3'\"'\"' && sleep 0'" │ ├─130816 /bin/sh -c "sudo -H -S -n -u root /bin/sh -c 'echo BECOME-SUCCESS-teaxywwfhlmfpoullrqpxwihwgjieefl ; /usr/bin/python3' && sleep 0" │ ├─130817 sudo -H -S -n -u root /bin/sh -c "echo BECOME-SUCCESS-teaxywwfhlmfpoullrqpxwihwgjieefl ; /usr/bin/python3" │ ├─130818 /bin/sh -c "echo BECOME-SUCCESS-teaxywwfhlmfpoullrqpxwihwgjieefl ; /usr/bin/python3" │ ├─130819 /usr/bin/python3 │ ├─130820 /bin/bash -c "sudo iptables-save > /home/zuul/iptables.txt\n\n# NOTE(sfernand): Run 'df' with a 60s timeout to prevent hangs from\n# stale NFS mounts.\ntimeout -s 9 60s df -h > /home/zuul/df.txt || true\n# If 'df' times out, the mount output helps debug which NFS share\n# is unresponsive.\nmount > /home/zuul/mount.txt\n\nfor py_ver in 2 3; do\n if [[ \`which python\${py_ver}\` ]]; then\n python\${py_ver} -m pip freeze > /home/zuul/pip\${py_ver}-freeze.txt\n fi\ndone\n\nif [ \`command -v dpkg\` ]; then\n dpkg 
-l> /home/zuul/dpkg-l.txt\nfi\nif [ \`command -v rpm\` ]; then\n rpm -qa | sort > /home/zuul/rpm-qa.txt\nfi\n\n# Services status\nsudo systemctl status --all > services.txt 2>/dev/null\n\n# NOTE(kchamart) The 'audit.log' can be useful in cases when QEMU\n# failed to start due to denials from SELinux — useful for CentOS\n# and Fedora machines. For Ubuntu (which runs AppArmor), DevStack\n# already captures the contents of /var/log/kern.log (via\n# \`journalctl -t kernel\` redirected into syslog.txt.gz), which\n# contains AppArmor-related messages.\nif [ -f /var/log/audit/audit.log ] ; then\n sudo cp /var/log/audit/audit.log /home/zuul/audit.log &&\n chmod +r /home/zuul/audit.log;\nfi\n\n# gzip and save any coredumps in /var/core\nif [ -d /var/core ]; then\n sudo gzip -r /var/core\n sudo cp -r /var/core /home/zuul/\nfi\n\nsudo ss -lntup | grep ':53' > /home/zuul/listen53.txt\n\n# NOTE(andreaf) Service logs are already in logs/ thanks for the\n# export-devstack-journal log. Apache logs are under apache/ thans to the\n# apache-logs-conf role.\ngrep -i deprecat /home/zuul/logs/*.txt /home/zuul/apache/*.log | \\\n sed -r 's/[0-9]{1,2}\\:[0-9]{1,2}\\:[0-9]{1,2}\\.[0-9]{1,3}/ /g' | \\\n sed -r 's/[0-9]{1,2}\\:[0-9]{1,2}\\:[0-9]{1,2}/ /g' | \\\n sed -r 's/[0-9]{1,4}-[0-9]{1,2}-[0-9]{1,4}/ /g' |\n sed -r 's/\\[.*\\]/ /g' | \\\n sed -r 's/\\s[0-9]+\\s/ /g' | \\\n awk '{if (\$0 in seen) {seen[\$0]++} else {out[++n]=\$0;seen[\$0]=1}} END { for (i=1; i<=n; i++) print seen[out[i]]\" :: \" out[i] }' > /home/zuul/deprecations.log\n" │ ├─130834 sudo systemctl status --all │ └─130835 systemctl status --all └─user@1000.service └─init.scope ├─833 /usr/lib/systemd/systemd --user └─834 "(sd-pam)" ● proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point Loaded: loaded (/usr/lib/systemd/system/proc-sys-fs-binfmt_misc.automount; static) Active: active (running) since Mon 2026-02-16 17:51:02 UTC; 25min ago Triggers: ● proc-sys-fs-binfmt_misc.mount 
Where: /proc/sys/fs/binfmt_misc Docs: https://docs.kernel.org/admin-guide/binfmt-misc.html https://www.freedesktop.org/wiki/Software/systemd/APIFileSystems Feb 16 17:51:03 ubuntu systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 451 (systemd-binfmt) Notice: journal has been rotated since unit was started, output may be incomplete. ● dev-cdrom.device - QEMU_DVD-ROM config-2 Follows: unit currently follows state of sys-devices-pci0000:00-0000:00:01.1-ata1-host0-target0:0:0-0:0:0:0-block-sr0.device Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:51:03 UTC; 25min ago Device: /sys/devices/pci0000:00/0000:00:01.1/ata1/host0/target0:0:0/0:0:0:0/block/sr0 ● dev-disk-by\x2ddiskseq-11.device - QEMU_DVD-ROM config-2 Follows: unit currently follows state of sys-devices-pci0000:00-0000:00:01.1-ata1-host0-target0:0:0-0:0:0:0-block-sr0.device Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:51:03 UTC; 25min ago Device: /sys/devices/pci0000:00/0000:00:01.1/ata1/host0/target0:0:0/0:0:0:0/block/sr0 ● dev-disk-by\x2ddiskseq-20.device - /dev/disk/by-diskseq/20 Follows: unit currently follows state of sys-devices-virtual-block-loop0.device Loaded: loaded Active: active (plugged) since Mon 2026-02-16 18:08:54 UTC; 8min ago Device: /sys/devices/virtual/block/loop0 ● dev-disk-by\x2ddiskseq-9.device - /dev/disk/by-diskseq/9 Follows: unit currently follows state of sys-devices-pci0000:00-0000:00:04.0-virtio2-block-vda.device Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:51:03 UTC; 25min ago Device: /sys/devices/pci0000:00/0000:00:04.0/virtio2/block/vda ● dev-disk-by\x2ddiskseq-9\x2dpart1.device - /dev/disk/by-diskseq/9-part1 Follows: unit currently follows state of sys-devices-pci0000:00-0000:00:04.0-virtio2-block-vda-vda1.device Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:51:03 UTC; 25min ago Device: /sys/devices/pci0000:00/0000:00:04.0/virtio2/block/vda/vda1 ● 
dev-disk-by\x2did-ata\x2dQEMU_DVD\x2dROM_QM00001.device - QEMU_DVD-ROM config-2 Follows: unit currently follows state of sys-devices-pci0000:00-0000:00:01.1-ata1-host0-target0:0:0-0:0:0:0-block-sr0.device Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:51:03 UTC; 25min ago Device: /sys/devices/pci0000:00/0000:00:01.1/ata1/host0/target0:0:0/0:0:0:0/block/sr0 ● dev-disk-by\x2did-lvm\x2dpv\x2duuid\x2dKy3w3c\x2dqWTa\x2d0Zy6\x2dHENc\x2d6U3O\x2dUObQ\x2dVoAhlK.device - /dev/disk/by-id/lvm-pv-uuid-Ky3w3c-qWTa-0Zy6-HENc-6U3O-UObQ-VoAhlK Follows: unit currently follows state of sys-devices-virtual-block-loop0.device Loaded: loaded Active: active (plugged) since Mon 2026-02-16 18:08:54 UTC; 8min ago Device: /sys/devices/virtual/block/loop0 ● dev-disk-by\x2dlabel-cloudimg\x2drootfs.device - /dev/disk/by-label/cloudimg-rootfs Follows: unit currently follows state of sys-devices-pci0000:00-0000:00:04.0-virtio2-block-vda-vda1.device Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:51:03 UTC; 25min ago Device: /sys/devices/pci0000:00/0000:00:04.0/virtio2/block/vda/vda1 ● dev-disk-by\x2dlabel-config\x2d2.device - QEMU_DVD-ROM config-2 Follows: unit currently follows state of sys-devices-pci0000:00-0000:00:01.1-ata1-host0-target0:0:0-0:0:0:0-block-sr0.device Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:51:03 UTC; 25min ago Device: /sys/devices/pci0000:00/0000:00:01.1/ata1/host0/target0:0:0/0:0:0:0/block/sr0 ● dev-disk-by\x2dloop\x2dinode-253:1\x2d4723209.device - /dev/disk/by-loop-inode/253:1-4723209 Follows: unit currently follows state of sys-devices-virtual-block-loop0.device Loaded: loaded Active: active (plugged) since Mon 2026-02-16 18:08:54 UTC; 8min ago Device: /sys/devices/virtual/block/loop0 ● dev-disk-by\x2dloop\x2dref-\x5cx2fopt\x5cx2fstack\x5cx2fdata\x5cx2fstack\x2dvolumes\x2dlvmdriver\x2d1\x2dbacking\x2dfile.device - /dev/disk/by-loop-ref/\x2fopt\x2fstack\x2fdata\x2fstack-volumes-lvmdriver-1-backing-file Follows: unit 
currently follows state of sys-devices-virtual-block-loop0.device Loaded: loaded Active: active (plugged) since Mon 2026-02-16 18:08:54 UTC; 8min ago Device: /sys/devices/virtual/block/loop0 ● dev-disk-by\x2dpartuuid-12528807\x2d01.device - /dev/disk/by-partuuid/12528807-01 Follows: unit currently follows state of sys-devices-pci0000:00-0000:00:04.0-virtio2-block-vda-vda1.device Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:51:03 UTC; 25min ago Device: /sys/devices/pci0000:00/0000:00:04.0/virtio2/block/vda/vda1 ● dev-disk-by\x2dpath-pci\x2d0000:00:01.1\x2data\x2d1.0.device - QEMU_DVD-ROM config-2 Follows: unit currently follows state of sys-devices-pci0000:00-0000:00:01.1-ata1-host0-target0:0:0-0:0:0:0-block-sr0.device Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:51:03 UTC; 25min ago Device: /sys/devices/pci0000:00/0000:00:01.1/ata1/host0/target0:0:0/0:0:0:0/block/sr0 ● dev-disk-by\x2dpath-pci\x2d0000:00:01.1\x2data\x2d1.device - QEMU_DVD-ROM config-2 Follows: unit currently follows state of sys-devices-pci0000:00-0000:00:01.1-ata1-host0-target0:0:0-0:0:0:0-block-sr0.device Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:51:03 UTC; 25min ago Device: /sys/devices/pci0000:00/0000:00:01.1/ata1/host0/target0:0:0/0:0:0:0/block/sr0 ● dev-disk-by\x2dpath-pci\x2d0000:00:04.0.device - /dev/disk/by-path/pci-0000:00:04.0 Follows: unit currently follows state of sys-devices-pci0000:00-0000:00:04.0-virtio2-block-vda.device Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:51:03 UTC; 25min ago Device: /sys/devices/pci0000:00/0000:00:04.0/virtio2/block/vda ● dev-disk-by\x2dpath-pci\x2d0000:00:04.0\x2dpart1.device - /dev/disk/by-path/pci-0000:00:04.0-part1 Follows: unit currently follows state of sys-devices-pci0000:00-0000:00:04.0-virtio2-block-vda-vda1.device Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:51:03 UTC; 25min ago Device: /sys/devices/pci0000:00/0000:00:04.0/virtio2/block/vda/vda1 ● 
dev-disk-by\x2dpath-virtio\x2dpci\x2d0000:00:04.0.device - /dev/disk/by-path/virtio-pci-0000:00:04.0 Follows: unit currently follows state of sys-devices-pci0000:00-0000:00:04.0-virtio2-block-vda.device Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:51:03 UTC; 25min ago Device: /sys/devices/pci0000:00/0000:00:04.0/virtio2/block/vda ● dev-disk-by\x2dpath-virtio\x2dpci\x2d0000:00:04.0\x2dpart1.device - /dev/disk/by-path/virtio-pci-0000:00:04.0-part1 Follows: unit currently follows state of sys-devices-pci0000:00-0000:00:04.0-virtio2-block-vda-vda1.device Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:51:03 UTC; 25min ago Device: /sys/devices/pci0000:00/0000:00:04.0/virtio2/block/vda/vda1 ● dev-disk-by\x2duuid-2026\x2d02\x2d16\x2d17\x2d50\x2d55\x2d00.device - QEMU_DVD-ROM config-2 Follows: unit currently follows state of sys-devices-pci0000:00-0000:00:01.1-ata1-host0-target0:0:0-0:0:0:0-block-sr0.device Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:51:03 UTC; 25min ago Device: /sys/devices/pci0000:00/0000:00:01.1/ata1/host0/target0:0:0/0:0:0:0/block/sr0 ● dev-disk-by\x2duuid-43b989ca\x2d4072\x2d4978\x2d8639\x2d289329ba8670.device - /dev/disk/by-uuid/43b989ca-4072-4978-8639-289329ba8670 Follows: unit currently follows state of sys-devices-pci0000:00-0000:00:04.0-virtio2-block-vda-vda1.device Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:51:03 UTC; 25min ago Device: /sys/devices/pci0000:00/0000:00:04.0/virtio2/block/vda/vda1 ● dev-dm\x2d0.device - /dev/dm-0 Follows: unit currently follows state of sys-devices-virtual-block-dm\x2d0.device Loaded: loaded Active: active (plugged) since Mon 2026-02-16 18:13:45 UTC; 3min 13s ago Device: /sys/devices/virtual/block/dm-0 ● dev-dm\x2d1.device - /dev/dm-1 Follows: unit currently follows state of sys-devices-virtual-block-dm\x2d1.device Loaded: loaded Active: active (plugged) since Mon 2026-02-16 18:13:45 UTC; 3min 13s ago Device: /sys/devices/virtual/block/dm-1 
● dev-dm\x2d2.device - /dev/dm-2 Follows: unit currently follows state of sys-devices-virtual-block-dm\x2d2.device Loaded: loaded Active: active (plugged) since Mon 2026-02-16 18:13:45 UTC; 3min 13s ago Device: /sys/devices/virtual/block/dm-2 ● dev-loop0.device - /dev/loop0 Follows: unit currently follows state of sys-devices-virtual-block-loop0.device Loaded: loaded Active: active (plugged) since Mon 2026-02-16 18:08:54 UTC; 8min ago Device: /sys/devices/virtual/block/loop0 ● dev-mapper-stack\x2d\x2dvolumes\x2d\x2dlvmdriver\x2d\x2d1\x2dstack\x2d\x2dvolumes\x2d\x2dlvmdriver\x2d\x2d1\x2d\x2dpool.device - /dev/mapper/stack--volumes--lvmdriver--1-stack--volumes--lvmdriver--1--pool Follows: unit currently follows state of sys-devices-virtual-block-dm\x2d2.device Loaded: loaded Active: active (plugged) since Mon 2026-02-16 18:13:45 UTC; 3min 13s ago Device: /sys/devices/virtual/block/dm-2 ● dev-mapper-stack\x2d\x2dvolumes\x2d\x2dlvmdriver\x2d\x2d1\x2dstack\x2d\x2dvolumes\x2d\x2dlvmdriver\x2d\x2d1\x2d\x2dpool_tdata.device - /dev/mapper/stack--volumes--lvmdriver--1-stack--volumes--lvmdriver--1--pool_tdata Follows: unit currently follows state of sys-devices-virtual-block-dm\x2d1.device Loaded: loaded Active: active (plugged) since Mon 2026-02-16 18:13:45 UTC; 3min 13s ago Device: /sys/devices/virtual/block/dm-1 ● dev-mapper-stack\x2d\x2dvolumes\x2d\x2dlvmdriver\x2d\x2d1\x2dstack\x2d\x2dvolumes\x2d\x2dlvmdriver\x2d\x2d1\x2d\x2dpool_tmeta.device - /dev/mapper/stack--volumes--lvmdriver--1-stack--volumes--lvmdriver--1--pool_tmeta Follows: unit currently follows state of sys-devices-virtual-block-dm\x2d0.device Loaded: loaded Active: active (plugged) since Mon 2026-02-16 18:13:45 UTC; 3min 13s ago Device: /sys/devices/virtual/block/dm-0 ● dev-rfkill.device - /dev/rfkill Follows: unit currently follows state of sys-devices-virtual-misc-rfkill.device Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:51:03 UTC; 25min ago Device: /sys/devices/virtual/misc/rfkill ● 
dev-sr0.device - QEMU_DVD-ROM config-2 Follows: unit currently follows state of sys-devices-pci0000:00-0000:00:01.1-ata1-host0-target0:0:0-0:0:0:0-block-sr0.device Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:51:03 UTC; 25min ago Device: /sys/devices/pci0000:00/0000:00:01.1/ata1/host0/target0:0:0/0:0:0:0/block/sr0 ● dev-stack\x2dvolumes\x2dlvmdriver\x2d1-stack\x2dvolumes\x2dlvmdriver\x2d1\x2dpool.device - /dev/stack-volumes-lvmdriver-1/stack-volumes-lvmdriver-1-pool Follows: unit currently follows state of sys-devices-virtual-block-dm\x2d2.device Loaded: loaded Active: active (plugged) since Mon 2026-02-16 18:13:45 UTC; 3min 13s ago Device: /sys/devices/virtual/block/dm-2 ● dev-ttyprintk.device - /dev/ttyprintk Follows: unit currently follows state of sys-devices-virtual-tty-ttyprintk.device Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:51:03 UTC; 25min ago Device: /sys/devices/virtual/tty/ttyprintk ● dev-ttyS0.device - /dev/ttyS0 Follows: unit currently follows state of sys-devices-pnp0-00:00-00:00:0-00:00:0.0-tty-ttyS0.device Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:51:03 UTC; 25min ago Device: /sys/devices/pnp0/00:00/00:00:0/00:00:0.0/tty/ttyS0 Feb 16 17:51:03 ubuntu systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. 
● dev-ttyS1.device - /dev/ttyS1 Follows: unit currently follows state of sys-devices-platform-serial8250-serial8250:0-serial8250:0.1-tty-ttyS1.device Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:51:03 UTC; 25min ago Device: /sys/devices/platform/serial8250/serial8250:0/serial8250:0.1/tty/ttyS1 ● dev-ttyS10.device - /dev/ttyS10 Follows: unit currently follows state of sys-devices-platform-serial8250-serial8250:0-serial8250:0.10-tty-ttyS10.device Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:51:03 UTC; 25min ago Device: /sys/devices/platform/serial8250/serial8250:0/serial8250:0.10/tty/ttyS10 ● dev-ttyS11.device - /dev/ttyS11 Follows: unit currently follows state of sys-devices-platform-serial8250-serial8250:0-serial8250:0.11-tty-ttyS11.device Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:51:03 UTC; 25min ago Device: /sys/devices/platform/serial8250/serial8250:0/serial8250:0.11/tty/ttyS11 ● dev-ttyS12.device - /dev/ttyS12 Follows: unit currently follows state of sys-devices-platform-serial8250-serial8250:0-serial8250:0.12-tty-ttyS12.device Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:51:03 UTC; 25min ago Device: /sys/devices/platform/serial8250/serial8250:0/serial8250:0.12/tty/ttyS12 ● dev-ttyS13.device - /dev/ttyS13 Follows: unit currently follows state of sys-devices-platform-serial8250-serial8250:0-serial8250:0.13-tty-ttyS13.device Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:51:03 UTC; 25min ago Device: /sys/devices/platform/serial8250/serial8250:0/serial8250:0.13/tty/ttyS13 ● dev-ttyS14.device - /dev/ttyS14 Follows: unit currently follows state of sys-devices-platform-serial8250-serial8250:0-serial8250:0.14-tty-ttyS14.device Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:51:03 UTC; 25min ago Device: /sys/devices/platform/serial8250/serial8250:0/serial8250:0.14/tty/ttyS14 ● dev-ttyS15.device - /dev/ttyS15 Follows: unit currently follows state of 
sys-devices-platform-serial8250-serial8250:0-serial8250:0.15-tty-ttyS15.device Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:51:03 UTC; 25min ago Device: /sys/devices/platform/serial8250/serial8250:0/serial8250:0.15/tty/ttyS15 ● dev-ttyS16.device - /dev/ttyS16 Follows: unit currently follows state of sys-devices-platform-serial8250-serial8250:0-serial8250:0.16-tty-ttyS16.device Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:51:03 UTC; 25min ago Device: /sys/devices/platform/serial8250/serial8250:0/serial8250:0.16/tty/ttyS16 ● dev-ttyS17.device - /dev/ttyS17 Follows: unit currently follows state of sys-devices-platform-serial8250-serial8250:0-serial8250:0.17-tty-ttyS17.device Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:51:03 UTC; 25min ago Device: /sys/devices/platform/serial8250/serial8250:0/serial8250:0.17/tty/ttyS17 ● dev-ttyS18.device - /dev/ttyS18 Follows: unit currently follows state of sys-devices-platform-serial8250-serial8250:0-serial8250:0.18-tty-ttyS18.device Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:51:03 UTC; 25min ago Device: /sys/devices/platform/serial8250/serial8250:0/serial8250:0.18/tty/ttyS18 ● dev-ttyS19.device - /dev/ttyS19 Follows: unit currently follows state of sys-devices-platform-serial8250-serial8250:0-serial8250:0.19-tty-ttyS19.device Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:51:03 UTC; 25min ago Device: /sys/devices/platform/serial8250/serial8250:0/serial8250:0.19/tty/ttyS19 ● dev-ttyS2.device - /dev/ttyS2 Follows: unit currently follows state of sys-devices-platform-serial8250-serial8250:0-serial8250:0.2-tty-ttyS2.device Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:51:03 UTC; 25min ago Device: /sys/devices/platform/serial8250/serial8250:0/serial8250:0.2/tty/ttyS2 ● dev-ttyS20.device - /dev/ttyS20 Follows: unit currently follows state of sys-devices-platform-serial8250-serial8250:0-serial8250:0.20-tty-ttyS20.device 
Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:51:03 UTC; 25min ago Device: /sys/devices/platform/serial8250/serial8250:0/serial8250:0.20/tty/ttyS20 ● dev-ttyS21.device - /dev/ttyS21 Follows: unit currently follows state of sys-devices-platform-serial8250-serial8250:0-serial8250:0.21-tty-ttyS21.device Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:51:03 UTC; 25min ago Device: /sys/devices/platform/serial8250/serial8250:0/serial8250:0.21/tty/ttyS21 ● dev-ttyS22.device - /dev/ttyS22 Follows: unit currently follows state of sys-devices-platform-serial8250-serial8250:0-serial8250:0.22-tty-ttyS22.device Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:51:03 UTC; 25min ago Device: /sys/devices/platform/serial8250/serial8250:0/serial8250:0.22/tty/ttyS22 ● dev-ttyS23.device - /dev/ttyS23 Follows: unit currently follows state of sys-devices-platform-serial8250-serial8250:0-serial8250:0.23-tty-ttyS23.device Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:51:03 UTC; 25min ago Device: /sys/devices/platform/serial8250/serial8250:0/serial8250:0.23/tty/ttyS23 ● dev-ttyS24.device - /dev/ttyS24 Follows: unit currently follows state of sys-devices-platform-serial8250-serial8250:0-serial8250:0.24-tty-ttyS24.device Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:51:03 UTC; 25min ago Device: /sys/devices/platform/serial8250/serial8250:0/serial8250:0.24/tty/ttyS24 ● dev-ttyS25.device - /dev/ttyS25 Follows: unit currently follows state of sys-devices-platform-serial8250-serial8250:0-serial8250:0.25-tty-ttyS25.device Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:51:03 UTC; 25min ago Device: /sys/devices/platform/serial8250/serial8250:0/serial8250:0.25/tty/ttyS25 ● dev-ttyS26.device - /dev/ttyS26 Follows: unit currently follows state of sys-devices-platform-serial8250-serial8250:0-serial8250:0.26-tty-ttyS26.device Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:51:03 UTC; 
25min ago Device: /sys/devices/platform/serial8250/serial8250:0/serial8250:0.26/tty/ttyS26 ● dev-ttyS27.device - /dev/ttyS27 Follows: unit currently follows state of sys-devices-platform-serial8250-serial8250:0-serial8250:0.27-tty-ttyS27.device Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:51:03 UTC; 25min ago Device: /sys/devices/platform/serial8250/serial8250:0/serial8250:0.27/tty/ttyS27 ● dev-ttyS28.device - /dev/ttyS28 Follows: unit currently follows state of sys-devices-platform-serial8250-serial8250:0-serial8250:0.28-tty-ttyS28.device Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:51:03 UTC; 25min ago Device: /sys/devices/platform/serial8250/serial8250:0/serial8250:0.28/tty/ttyS28 ● dev-ttyS29.device - /dev/ttyS29 Follows: unit currently follows state of sys-devices-platform-serial8250-serial8250:0-serial8250:0.29-tty-ttyS29.device Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:51:03 UTC; 25min ago Device: /sys/devices/platform/serial8250/serial8250:0/serial8250:0.29/tty/ttyS29 ● dev-ttyS3.device - /dev/ttyS3 Follows: unit currently follows state of sys-devices-platform-serial8250-serial8250:0-serial8250:0.3-tty-ttyS3.device Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:51:03 UTC; 25min ago Device: /sys/devices/platform/serial8250/serial8250:0/serial8250:0.3/tty/ttyS3 ● dev-ttyS30.device - /dev/ttyS30 Follows: unit currently follows state of sys-devices-platform-serial8250-serial8250:0-serial8250:0.30-tty-ttyS30.device Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:51:03 UTC; 25min ago Device: /sys/devices/platform/serial8250/serial8250:0/serial8250:0.30/tty/ttyS30 ● dev-ttyS31.device - /dev/ttyS31 Follows: unit currently follows state of sys-devices-platform-serial8250-serial8250:0-serial8250:0.31-tty-ttyS31.device Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:51:03 UTC; 25min ago Device: 
/sys/devices/platform/serial8250/serial8250:0/serial8250:0.31/tty/ttyS31 ● dev-ttyS4.device - /dev/ttyS4 Follows: unit currently follows state of sys-devices-platform-serial8250-serial8250:0-serial8250:0.4-tty-ttyS4.device Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:51:03 UTC; 25min ago Device: /sys/devices/platform/serial8250/serial8250:0/serial8250:0.4/tty/ttyS4 ● dev-ttyS5.device - /dev/ttyS5 Follows: unit currently follows state of sys-devices-platform-serial8250-serial8250:0-serial8250:0.5-tty-ttyS5.device Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:51:03 UTC; 25min ago Device: /sys/devices/platform/serial8250/serial8250:0/serial8250:0.5/tty/ttyS5 ● dev-ttyS6.device - /dev/ttyS6 Follows: unit currently follows state of sys-devices-platform-serial8250-serial8250:0-serial8250:0.6-tty-ttyS6.device Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:51:03 UTC; 25min ago Device: /sys/devices/platform/serial8250/serial8250:0/serial8250:0.6/tty/ttyS6 ● dev-ttyS7.device - /dev/ttyS7 Follows: unit currently follows state of sys-devices-platform-serial8250-serial8250:0-serial8250:0.7-tty-ttyS7.device Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:51:03 UTC; 25min ago Device: /sys/devices/platform/serial8250/serial8250:0/serial8250:0.7/tty/ttyS7 ● dev-ttyS8.device - /dev/ttyS8 Follows: unit currently follows state of sys-devices-platform-serial8250-serial8250:0-serial8250:0.8-tty-ttyS8.device Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:51:03 UTC; 25min ago Device: /sys/devices/platform/serial8250/serial8250:0/serial8250:0.8/tty/ttyS8 ● dev-ttyS9.device - /dev/ttyS9 Follows: unit currently follows state of sys-devices-platform-serial8250-serial8250:0-serial8250:0.9-tty-ttyS9.device Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:51:03 UTC; 25min ago Device: /sys/devices/platform/serial8250/serial8250:0/serial8250:0.9/tty/ttyS9 ● dev-vda.device - /dev/vda Follows: unit 
currently follows state of sys-devices-pci0000:00-0000:00:04.0-virtio2-block-vda.device Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:51:03 UTC; 25min ago Device: /sys/devices/pci0000:00/0000:00:04.0/virtio2/block/vda ● dev-vda1.device - /dev/vda1 Follows: unit currently follows state of sys-devices-pci0000:00-0000:00:04.0-virtio2-block-vda-vda1.device Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:51:03 UTC; 25min ago Device: /sys/devices/pci0000:00/0000:00:04.0/virtio2/block/vda/vda1 Notice: journal has been rotated since unit was started, output may be incomplete. ● sys-devices-pci0000:00-0000:00:01.1-ata1-host0-target0:0:0-0:0:0:0-block-sr0.device - QEMU_DVD-ROM config-2 Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:51:03 UTC; 25min ago Device: /sys/devices/pci0000:00/0000:00:01.1/ata1/host0/target0:0:0/0:0:0:0/block/sr0 ● sys-devices-pci0000:00-0000:00:03.0-virtio1-net-ens3.device - Virtio network device Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:51:03 UTC; 25min ago Device: /sys/devices/pci0000:00/0000:00:03.0/virtio1/net/ens3 ● sys-devices-pci0000:00-0000:00:04.0-virtio2-block-vda-vda1.device - /sys/devices/pci0000:00/0000:00:04.0/virtio2/block/vda/vda1 Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:51:03 UTC; 25min ago Device: /sys/devices/pci0000:00/0000:00:04.0/virtio2/block/vda/vda1 ● sys-devices-pci0000:00-0000:00:04.0-virtio2-block-vda.device - /sys/devices/pci0000:00/0000:00:04.0/virtio2/block/vda Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:51:03 UTC; 25min ago Device: /sys/devices/pci0000:00/0000:00:04.0/virtio2/block/vda ● sys-devices-platform-serial8250-serial8250:0-serial8250:0.1-tty-ttyS1.device - /sys/devices/platform/serial8250/serial8250:0/serial8250:0.1/tty/ttyS1 Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:51:03 UTC; 25min ago Device: /sys/devices/platform/serial8250/serial8250:0/serial8250:0.1/tty/ttyS1 ● 
sys-devices-platform-serial8250-serial8250:0-serial8250:0.10-tty-ttyS10.device - /sys/devices/platform/serial8250/serial8250:0/serial8250:0.10/tty/ttyS10 Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:51:03 UTC; 25min ago Device: /sys/devices/platform/serial8250/serial8250:0/serial8250:0.10/tty/ttyS10
● sys-devices-platform-serial8250-serial8250:0-serial8250:0.11-tty-ttyS11.device - /sys/devices/platform/serial8250/serial8250:0/serial8250:0.11/tty/ttyS11 Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:51:03 UTC; 25min ago Device: /sys/devices/platform/serial8250/serial8250:0/serial8250:0.11/tty/ttyS11
● sys-devices-platform-serial8250-serial8250:0-serial8250:0.12-tty-ttyS12.device - /sys/devices/platform/serial8250/serial8250:0/serial8250:0.12/tty/ttyS12 Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:51:03 UTC; 25min ago Device: /sys/devices/platform/serial8250/serial8250:0/serial8250:0.12/tty/ttyS12
● sys-devices-platform-serial8250-serial8250:0-serial8250:0.13-tty-ttyS13.device - /sys/devices/platform/serial8250/serial8250:0/serial8250:0.13/tty/ttyS13 Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:51:03 UTC; 25min ago Device: /sys/devices/platform/serial8250/serial8250:0/serial8250:0.13/tty/ttyS13
● sys-devices-platform-serial8250-serial8250:0-serial8250:0.14-tty-ttyS14.device - /sys/devices/platform/serial8250/serial8250:0/serial8250:0.14/tty/ttyS14 Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:51:03 UTC; 25min ago Device: /sys/devices/platform/serial8250/serial8250:0/serial8250:0.14/tty/ttyS14
● sys-devices-platform-serial8250-serial8250:0-serial8250:0.15-tty-ttyS15.device - /sys/devices/platform/serial8250/serial8250:0/serial8250:0.15/tty/ttyS15 Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:51:03 UTC; 25min ago Device: /sys/devices/platform/serial8250/serial8250:0/serial8250:0.15/tty/ttyS15
● sys-devices-platform-serial8250-serial8250:0-serial8250:0.16-tty-ttyS16.device - /sys/devices/platform/serial8250/serial8250:0/serial8250:0.16/tty/ttyS16 Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:51:03 UTC; 25min ago Device: /sys/devices/platform/serial8250/serial8250:0/serial8250:0.16/tty/ttyS16
● sys-devices-platform-serial8250-serial8250:0-serial8250:0.17-tty-ttyS17.device - /sys/devices/platform/serial8250/serial8250:0/serial8250:0.17/tty/ttyS17 Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:51:03 UTC; 25min ago Device: /sys/devices/platform/serial8250/serial8250:0/serial8250:0.17/tty/ttyS17
● sys-devices-platform-serial8250-serial8250:0-serial8250:0.18-tty-ttyS18.device - /sys/devices/platform/serial8250/serial8250:0/serial8250:0.18/tty/ttyS18 Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:51:03 UTC; 25min ago Device: /sys/devices/platform/serial8250/serial8250:0/serial8250:0.18/tty/ttyS18
● sys-devices-platform-serial8250-serial8250:0-serial8250:0.19-tty-ttyS19.device - /sys/devices/platform/serial8250/serial8250:0/serial8250:0.19/tty/ttyS19 Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:51:03 UTC; 25min ago Device: /sys/devices/platform/serial8250/serial8250:0/serial8250:0.19/tty/ttyS19
● sys-devices-platform-serial8250-serial8250:0-serial8250:0.2-tty-ttyS2.device - /sys/devices/platform/serial8250/serial8250:0/serial8250:0.2/tty/ttyS2 Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:51:03 UTC; 25min ago Device: /sys/devices/platform/serial8250/serial8250:0/serial8250:0.2/tty/ttyS2
● sys-devices-platform-serial8250-serial8250:0-serial8250:0.20-tty-ttyS20.device - /sys/devices/platform/serial8250/serial8250:0/serial8250:0.20/tty/ttyS20 Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:51:03 UTC; 25min ago Device: /sys/devices/platform/serial8250/serial8250:0/serial8250:0.20/tty/ttyS20
● sys-devices-platform-serial8250-serial8250:0-serial8250:0.21-tty-ttyS21.device - /sys/devices/platform/serial8250/serial8250:0/serial8250:0.21/tty/ttyS21 Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:51:03 UTC; 25min ago Device: /sys/devices/platform/serial8250/serial8250:0/serial8250:0.21/tty/ttyS21
● sys-devices-platform-serial8250-serial8250:0-serial8250:0.22-tty-ttyS22.device - /sys/devices/platform/serial8250/serial8250:0/serial8250:0.22/tty/ttyS22 Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:51:03 UTC; 25min ago Device: /sys/devices/platform/serial8250/serial8250:0/serial8250:0.22/tty/ttyS22
● sys-devices-platform-serial8250-serial8250:0-serial8250:0.23-tty-ttyS23.device - /sys/devices/platform/serial8250/serial8250:0/serial8250:0.23/tty/ttyS23 Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:51:03 UTC; 25min ago Device: /sys/devices/platform/serial8250/serial8250:0/serial8250:0.23/tty/ttyS23
● sys-devices-platform-serial8250-serial8250:0-serial8250:0.24-tty-ttyS24.device - /sys/devices/platform/serial8250/serial8250:0/serial8250:0.24/tty/ttyS24 Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:51:03 UTC; 25min ago Device: /sys/devices/platform/serial8250/serial8250:0/serial8250:0.24/tty/ttyS24
● sys-devices-platform-serial8250-serial8250:0-serial8250:0.25-tty-ttyS25.device - /sys/devices/platform/serial8250/serial8250:0/serial8250:0.25/tty/ttyS25 Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:51:03 UTC; 25min ago Device: /sys/devices/platform/serial8250/serial8250:0/serial8250:0.25/tty/ttyS25
● sys-devices-platform-serial8250-serial8250:0-serial8250:0.26-tty-ttyS26.device - /sys/devices/platform/serial8250/serial8250:0/serial8250:0.26/tty/ttyS26 Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:51:03 UTC; 25min ago Device: /sys/devices/platform/serial8250/serial8250:0/serial8250:0.26/tty/ttyS26
● sys-devices-platform-serial8250-serial8250:0-serial8250:0.27-tty-ttyS27.device - /sys/devices/platform/serial8250/serial8250:0/serial8250:0.27/tty/ttyS27 Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:51:03 UTC; 25min ago Device: /sys/devices/platform/serial8250/serial8250:0/serial8250:0.27/tty/ttyS27
● sys-devices-platform-serial8250-serial8250:0-serial8250:0.28-tty-ttyS28.device - /sys/devices/platform/serial8250/serial8250:0/serial8250:0.28/tty/ttyS28 Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:51:03 UTC; 25min ago Device: /sys/devices/platform/serial8250/serial8250:0/serial8250:0.28/tty/ttyS28
● sys-devices-platform-serial8250-serial8250:0-serial8250:0.29-tty-ttyS29.device - /sys/devices/platform/serial8250/serial8250:0/serial8250:0.29/tty/ttyS29 Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:51:03 UTC; 25min ago Device: /sys/devices/platform/serial8250/serial8250:0/serial8250:0.29/tty/ttyS29
● sys-devices-platform-serial8250-serial8250:0-serial8250:0.3-tty-ttyS3.device - /sys/devices/platform/serial8250/serial8250:0/serial8250:0.3/tty/ttyS3 Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:51:03 UTC; 25min ago Device: /sys/devices/platform/serial8250/serial8250:0/serial8250:0.3/tty/ttyS3
● sys-devices-platform-serial8250-serial8250:0-serial8250:0.30-tty-ttyS30.device - /sys/devices/platform/serial8250/serial8250:0/serial8250:0.30/tty/ttyS30 Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:51:03 UTC; 25min ago Device: /sys/devices/platform/serial8250/serial8250:0/serial8250:0.30/tty/ttyS30
● sys-devices-platform-serial8250-serial8250:0-serial8250:0.31-tty-ttyS31.device - /sys/devices/platform/serial8250/serial8250:0/serial8250:0.31/tty/ttyS31 Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:51:03 UTC; 25min ago Device: /sys/devices/platform/serial8250/serial8250:0/serial8250:0.31/tty/ttyS31
● sys-devices-platform-serial8250-serial8250:0-serial8250:0.4-tty-ttyS4.device - /sys/devices/platform/serial8250/serial8250:0/serial8250:0.4/tty/ttyS4 Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:51:03 UTC; 25min ago Device: /sys/devices/platform/serial8250/serial8250:0/serial8250:0.4/tty/ttyS4
● sys-devices-platform-serial8250-serial8250:0-serial8250:0.5-tty-ttyS5.device - /sys/devices/platform/serial8250/serial8250:0/serial8250:0.5/tty/ttyS5 Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:51:03 UTC; 25min ago Device: /sys/devices/platform/serial8250/serial8250:0/serial8250:0.5/tty/ttyS5
● sys-devices-platform-serial8250-serial8250:0-serial8250:0.6-tty-ttyS6.device - /sys/devices/platform/serial8250/serial8250:0/serial8250:0.6/tty/ttyS6 Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:51:03 UTC; 25min ago Device: /sys/devices/platform/serial8250/serial8250:0/serial8250:0.6/tty/ttyS6
● sys-devices-platform-serial8250-serial8250:0-serial8250:0.7-tty-ttyS7.device - /sys/devices/platform/serial8250/serial8250:0/serial8250:0.7/tty/ttyS7 Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:51:03 UTC; 25min ago Device: /sys/devices/platform/serial8250/serial8250:0/serial8250:0.7/tty/ttyS7
● sys-devices-platform-serial8250-serial8250:0-serial8250:0.8-tty-ttyS8.device - /sys/devices/platform/serial8250/serial8250:0/serial8250:0.8/tty/ttyS8 Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:51:03 UTC; 25min ago Device: /sys/devices/platform/serial8250/serial8250:0/serial8250:0.8/tty/ttyS8
● sys-devices-platform-serial8250-serial8250:0-serial8250:0.9-tty-ttyS9.device - /sys/devices/platform/serial8250/serial8250:0/serial8250:0.9/tty/ttyS9 Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:51:03 UTC; 25min ago Device: /sys/devices/platform/serial8250/serial8250:0/serial8250:0.9/tty/ttyS9
● sys-devices-pnp0-00:00-00:00:0-00:00:0.0-tty-ttyS0.device - /sys/devices/pnp0/00:00/00:00:0/00:00:0.0/tty/ttyS0 Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:51:03 UTC; 25min ago Device: /sys/devices/pnp0/00:00/00:00:0/00:00:0.0/tty/ttyS0
● sys-devices-virtual-block-dm\x2d0.device - /sys/devices/virtual/block/dm-0 Loaded: loaded Active: active (plugged) since Mon 2026-02-16 18:13:45 UTC; 3min 13s ago Device: /sys/devices/virtual/block/dm-0
● sys-devices-virtual-block-dm\x2d1.device - /sys/devices/virtual/block/dm-1 Loaded: loaded Active: active (plugged) since Mon 2026-02-16 18:13:45 UTC; 3min 13s ago Device: /sys/devices/virtual/block/dm-1
● sys-devices-virtual-block-dm\x2d2.device - /sys/devices/virtual/block/dm-2 Loaded: loaded Active: active (plugged) since Mon 2026-02-16 18:13:45 UTC; 3min 13s ago Device: /sys/devices/virtual/block/dm-2
● sys-devices-virtual-block-loop0.device - /sys/devices/virtual/block/loop0 Loaded: loaded Active: active (plugged) since Mon 2026-02-16 18:08:54 UTC; 8min ago Device: /sys/devices/virtual/block/loop0
● sys-devices-virtual-misc-rfkill.device - /sys/devices/virtual/misc/rfkill Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:51:03 UTC; 25min ago Device: /sys/devices/virtual/misc/rfkill
● sys-devices-virtual-net-br\x2d285a601cfe66.device - /sys/devices/virtual/net/br-285a601cfe66 Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:58:02 UTC; 18min ago Device: /sys/devices/virtual/net/br-285a601cfe66
● sys-devices-virtual-net-br\x2dex.device - /sys/devices/virtual/net/br-ex Loaded: loaded Active: active (plugged) since Mon 2026-02-16 18:12:27 UTC; 4min 31s ago Device: /sys/devices/virtual/net/br-ex
● sys-devices-virtual-net-br\x2dint.device - /sys/devices/virtual/net/br-int Loaded: loaded Active: active (plugged) since Mon 2026-02-16 18:01:56 UTC; 15min ago Device: /sys/devices/virtual/net/br-int
● sys-devices-virtual-net-docker0.device - /sys/devices/virtual/net/docker0 Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:57:46 UTC; 19min ago Device: /sys/devices/virtual/net/docker0
● sys-devices-virtual-net-ovs\x2dsystem.device - /sys/devices/virtual/net/ovs-system Loaded: loaded Active: active (plugged) since Mon 2026-02-16 18:01:56 UTC; 15min ago Device: /sys/devices/virtual/net/ovs-system
● sys-devices-virtual-net-tap1ddaf2af\x2d80.device - /sys/devices/virtual/net/tap1ddaf2af-80 Loaded: loaded Active: active (plugged) since Mon 2026-02-16 18:16:19 UTC; 39s ago Device: /sys/devices/virtual/net/tap1ddaf2af-80
● sys-devices-virtual-net-tapb44d09fe\x2d67.device - /sys/devices/virtual/net/tapb44d09fe-67 Loaded: loaded Active: active (plugged) since Mon 2026-02-16 18:16:17 UTC; 41s ago Device: /sys/devices/virtual/net/tapb44d09fe-67
● sys-devices-virtual-net-veth71cb24e.device - /sys/devices/virtual/net/veth71cb24e Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:58:09 UTC; 18min ago Device: /sys/devices/virtual/net/veth71cb24e
● sys-devices-virtual-net-virbr0.device - /sys/devices/virtual/net/virbr0 Loaded: loaded Active: active (plugged) since Mon 2026-02-16 18:02:19 UTC; 14min ago Device: /sys/devices/virtual/net/virbr0
● sys-devices-virtual-tty-ttyprintk.device - /sys/devices/virtual/tty/ttyprintk Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:51:03 UTC; 25min ago Device: /sys/devices/virtual/tty/ttyprintk
● sys-module-configfs.device - /sys/module/configfs Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:51:03 UTC; 25min ago Device: /sys/module/configfs
● sys-module-fuse.device - /sys/module/fuse Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:51:03 UTC; 25min ago Device: /sys/module/fuse
● sys-subsystem-net-devices-br\x2d285a601cfe66.device - /sys/subsystem/net/devices/br-285a601cfe66 Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:58:02 UTC; 18min ago Device: /sys/devices/virtual/net/br-285a601cfe66
● sys-subsystem-net-devices-br\x2dex.device - /sys/subsystem/net/devices/br-ex Loaded: loaded Active: active (plugged) since Mon 2026-02-16 18:12:27 UTC; 4min 31s ago Device: /sys/devices/virtual/net/br-ex
● sys-subsystem-net-devices-br\x2dint.device - /sys/subsystem/net/devices/br-int Loaded: loaded Active: active (plugged) since Mon 2026-02-16 18:01:56 UTC; 15min ago Device: /sys/devices/virtual/net/br-int
● sys-subsystem-net-devices-docker0.device - /sys/subsystem/net/devices/docker0 Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:57:46 UTC; 19min ago Device: /sys/devices/virtual/net/docker0
● sys-subsystem-net-devices-ens3.device - Virtio network device Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:51:03 UTC; 25min ago Device: /sys/devices/pci0000:00/0000:00:03.0/virtio1/net/ens3
● sys-subsystem-net-devices-ovs\x2dsystem.device - /sys/subsystem/net/devices/ovs-system Loaded: loaded Active: active (plugged) since Mon 2026-02-16 18:01:56 UTC; 15min ago Device: /sys/devices/virtual/net/ovs-system
● sys-subsystem-net-devices-tap1ddaf2af\x2d80.device - /sys/subsystem/net/devices/tap1ddaf2af-80 Loaded: loaded Active: active (plugged) since Mon 2026-02-16 18:16:19 UTC; 39s ago Device: /sys/devices/virtual/net/tap1ddaf2af-80
● sys-subsystem-net-devices-tapb44d09fe\x2d67.device - /sys/subsystem/net/devices/tapb44d09fe-67 Loaded: loaded Active: active (plugged) since Mon 2026-02-16 18:16:17 UTC; 41s ago Device: /sys/devices/virtual/net/tapb44d09fe-67
● sys-subsystem-net-devices-veth71cb24e.device - /sys/subsystem/net/devices/veth71cb24e Loaded: loaded Active: active (plugged) since Mon 2026-02-16 17:58:09 UTC; 18min ago Device: /sys/devices/virtual/net/veth71cb24e
● sys-subsystem-net-devices-virbr0.device - /sys/subsystem/net/devices/virbr0 Loaded: loaded Active: active (plugged) since Mon 2026-02-16 18:02:19 UTC; 14min ago Device: /sys/devices/virtual/net/virbr0
● -.mount - Root Mount Loaded: loaded (/etc/fstab; generated) Active: active (mounted) since Mon 2026-02-16 17:51:02 UTC; 25min ago Where: / What: /dev/vda1 Docs: man:fstab(5) man:systemd-fstab-generator(8) Notice: journal has been rotated since unit was started, output may be incomplete.
● dev-hugepages.mount - Huge Pages File System Loaded: loaded (/proc/self/mountinfo; static) Active: active (mounted) since Mon 2026-02-16 17:51:03 UTC; 25min ago Where: /dev/hugepages What: hugetlbfs Docs: https://docs.kernel.org/admin-guide/mm/hugetlbpage.html https://www.freedesktop.org/wiki/Software/systemd/APIFileSystems Tasks: 0 (limit: 77077) Memory: 112.0K (peak: 768.0K) CPU: 8ms CGroup: /dev-hugepages.mount Notice: journal has been rotated since unit was started, output may be incomplete.
● dev-mqueue.mount - POSIX Message Queue File System Loaded: loaded (/proc/self/mountinfo; static) Active: active (mounted) since Mon 2026-02-16 17:51:03 UTC; 25min ago Where: /dev/mqueue What: mqueue Docs: man:mq_overview(7) https://www.freedesktop.org/wiki/Software/systemd/APIFileSystems Tasks: 0 (limit: 77077) Memory: 4.0K (peak: 1.5M) CPU: 12ms CGroup: /dev-mqueue.mount Feb 16 17:51:03 ubuntu systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Notice: journal has been rotated since unit was started, output may be incomplete.
● opt-stack-data-etcd.mount - /opt/stack/data/etcd Loaded: loaded (/proc/self/mountinfo) Active: active (mounted) since Mon 2026-02-16 17:59:57 UTC; 17min ago Where: /opt/stack/data/etcd What: tmpfs
● proc-fs-nfsd.mount - NFSD configuration filesystem Loaded: loaded (/usr/lib/systemd/system/proc-fs-nfsd.mount; static) Active: active (mounted) since Mon 2026-02-16 18:03:47 UTC; 13min ago Where: /proc/fs/nfsd What: nfsd Tasks: 0 (limit: 77077) Memory: 20.0K (peak: 1.5M) CPU: 10ms CGroup: /proc-fs-nfsd.mount Feb 16 18:03:47 np0000155647 systemd[1]: Mounting proc-fs-nfsd.mount - NFSD configuration filesystem... Feb 16 18:03:47 np0000155647 systemd[1]: Mounted proc-fs-nfsd.mount - NFSD configuration filesystem.
● proc-sys-fs-binfmt_misc.mount - Arbitrary Executable File Formats File System Loaded: loaded (/proc/self/mountinfo; disabled; preset: disabled) Active: active (mounted) since Mon 2026-02-16 17:51:03 UTC; 25min ago TriggeredBy: ● proc-sys-fs-binfmt_misc.automount Where: /proc/sys/fs/binfmt_misc What: binfmt_misc Docs: https://docs.kernel.org/admin-guide/binfmt-misc.html https://www.freedesktop.org/wiki/Software/systemd/APIFileSystems Tasks: 0 (limit: 77077) Memory: 8.0K (peak: 1.5M) CPU: 13ms CGroup: /proc-sys-fs-binfmt_misc.mount Feb 16 17:51:03 ubuntu systemd[1]: Mounting proc-sys-fs-binfmt_misc.mount - Arbitrary Executable File Formats File System... Feb 16 17:51:03 ubuntu systemd[1]: Mounted proc-sys-fs-binfmt_misc.mount - Arbitrary Executable File Formats File System.
● run-docker-netns-b52d933c71ac.mount - /run/docker/netns/b52d933c71ac Loaded: loaded (/proc/self/mountinfo) Active: active (mounted) since Mon 2026-02-16 17:58:09 UTC; 18min ago Where: /run/docker/netns/b52d933c71ac What: nsfs
● run-netns-ovnmeta\x2d1ddaf2af\x2d8333\x2d48ec\x2da71c\x2d3dafdea80472.mount - /run/netns/ovnmeta-1ddaf2af-8333-48ec-a71c-3dafdea80472 Loaded: loaded (/proc/self/mountinfo) Active: active (mounted) since Mon 2026-02-16 18:16:17 UTC; 41s ago Where: /run/netns/ovnmeta-1ddaf2af-8333-48ec-a71c-3dafdea80472 What: nsfs
● run-netns.mount - /run/netns Loaded: loaded (/proc/self/mountinfo) Active: active (mounted) since Mon 2026-02-16 18:16:17 UTC; 41s ago Where: /run/netns What: tmpfs
● run-rpc_pipefs.mount - RPC Pipe File System Loaded: loaded (/run/systemd/generator/run-rpc_pipefs.mount; generated) Active: active (mounted) since Mon 2026-02-16 18:03:46 UTC; 13min ago Where: /run/rpc_pipefs What: sunrpc Tasks: 0 (limit: 77077) Memory: 20.0K (peak: 544.0K) CPU: 2ms CGroup: /system.slice/run-rpc_pipefs.mount Feb 16 18:03:46 np0000155647 systemd[1]: Mounting run-rpc_pipefs.mount - RPC Pipe File System... Feb 16 18:03:46 np0000155647 systemd[1]: Mounted run-rpc_pipefs.mount - RPC Pipe File System.
● run-user-1000.mount - /run/user/1000 Loaded: loaded (/proc/self/mountinfo) Active: active (mounted) since Mon 2026-02-16 17:51:35 UTC; 25min ago Where: /run/user/1000 What: tmpfs
● sys-fs-fuse-connections.mount - FUSE Control File System Loaded: loaded (/proc/self/mountinfo; static) Active: active (mounted) since Mon 2026-02-16 17:51:03 UTC; 25min ago Where: /sys/fs/fuse/connections What: fusectl Docs: https://docs.kernel.org/filesystems/fuse.html https://www.freedesktop.org/wiki/Software/systemd/APIFileSystems Tasks: 0 (limit: 77077) Memory: 8.0K (peak: 324.0K) CPU: 3ms CGroup: /sys-fs-fuse-connections.mount Feb 16 17:51:03 ubuntu systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 16 17:51:03 ubuntu systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
● sys-kernel-config.mount - Kernel Configuration File System Loaded: loaded (/proc/self/mountinfo; static) Active: active (mounted) since Mon 2026-02-16 17:51:03 UTC; 25min ago Where: /sys/kernel/config What: configfs Docs: https://docs.kernel.org/filesystems/configfs.html https://www.freedesktop.org/wiki/Software/systemd/APIFileSystems Tasks: 0 (limit: 77077) Memory: 8.0K (peak: 1.7M) CPU: 13ms CGroup: /sys-kernel-config.mount Feb 16 17:51:03 ubuntu systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 16 17:51:03 ubuntu systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
● sys-kernel-debug.mount - Kernel Debug File System Loaded: loaded (/proc/self/mountinfo; static) Active: active (mounted) since Mon 2026-02-16 17:51:03 UTC; 25min ago Where: /sys/kernel/debug What: debugfs Docs: https://docs.kernel.org/filesystems/debugfs.html https://www.freedesktop.org/wiki/Software/systemd/APIFileSystems Tasks: 0 (limit: 77077) Memory: 4.0K (peak: 1.7M) CPU: 11ms CGroup: /sys-kernel-debug.mount Feb 16 17:51:03 ubuntu systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Notice: journal has been rotated since unit was started, output may be incomplete.
● sys-kernel-tracing.mount - Kernel Trace File System Loaded: loaded (/proc/self/mountinfo; static) Active: active (mounted) since Mon 2026-02-16 17:51:03 UTC; 25min ago Where: /sys/kernel/tracing What: tracefs Docs: https://docs.kernel.org/trace/ftrace.html https://www.freedesktop.org/wiki/Software/systemd/APIFileSystems Tasks: 0 (limit: 77077) Memory: 4.0K (peak: 1.5M) CPU: 8ms CGroup: /sys-kernel-tracing.mount Feb 16 17:51:03 ubuntu systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Notice: journal has been rotated since unit was started, output may be incomplete.
● var-lib-docker-rootfs-overlayfs-089fb10c98a483eac1b5cbd49445b3386ad4740a6877edc01e9209d9dbf220c7.mount - /var/lib/docker/rootfs/overlayfs/089fb10c98a483eac1b5cbd49445b3386ad4740a6877edc01e9209d9dbf220c7 Loaded: loaded (/proc/self/mountinfo) Active: active (mounted) since Mon 2026-02-16 17:58:09 UTC; 18min ago Where: /var/lib/docker/rootfs/overlayfs/089fb10c98a483eac1b5cbd49445b3386ad4740a6877edc01e9209d9dbf220c7 What: overlay
○ var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) Loaded: loaded (/usr/lib/systemd/system/var-lib-machines.mount; static) Active: inactive (dead) Condition: start condition unmet at Mon 2026-02-16 18:02:19 UTC; 14min ago Where: /var/lib/machines What: /var/lib/machines.raw Feb 16 18:02:15 np0000155647 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 16 18:02:19 np0000155647 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
● systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch Loaded: loaded (/usr/lib/systemd/system/systemd-ask-password-console.path; static) Active: active (waiting) since Mon 2026-02-16 17:51:02 UTC; 25min ago Triggers: ● systemd-ask-password-console.service Docs: man:systemd-ask-password-console.path(8) Notice: journal has been rotated since unit was started, output may be incomplete.
● systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch Loaded: loaded (/usr/lib/systemd/system/systemd-ask-password-wall.path; static) Active: active (waiting) since Mon 2026-02-16 17:51:02 UTC; 25min ago Triggers: ● systemd-ask-password-wall.service Docs: man:systemd-ask-password-wall.path(8) Notice: journal has been rotated since unit was started, output may be incomplete.
● docker-089fb10c98a483eac1b5cbd49445b3386ad4740a6877edc01e9209d9dbf220c7.scope - libcontainer container 089fb10c98a483eac1b5cbd49445b3386ad4740a6877edc01e9209d9dbf220c7 Loaded: loaded (/run/systemd/transient/docker-089fb10c98a483eac1b5cbd49445b3386ad4740a6877edc01e9209d9dbf220c7.scope; transient) Transient: yes Drop-In: /run/systemd/transient/docker-089fb10c98a483eac1b5cbd49445b3386ad4740a6877edc01e9209d9dbf220c7.scope.d └─50-DeviceAllow.conf, 50-DevicePolicy.conf Active: active (running) since Mon 2026-02-16 17:58:09 UTC; 18min ago IO: 448.0K read, 1.5G written Tasks: 616 (limit: 77077) Memory: 2.3G (peak: 2.3G) CPU: 6min 26.076s CGroup: /system.slice/docker-089fb10c98a483eac1b5cbd49445b3386ad4740a6877edc01e9209d9dbf220c7.scope
├─init.scope
│ └─21530 /sbin/init
├─kubelet.slice
│ ├─kubelet-kubepods.slice
│ │ ├─kubelet-kubepods-besteffort.slice
│ │ │ ├─kubelet-kubepods-besteffort-pod060980bd_94df_4b77_8c4f_85019165ff36.slice
│ │ │ │ ├─cri-containerd-1433ec82c7d58a1bd88b32542ecb0883d1dd9b071469fb95c79ac08ced6611d2.scope
│ │ │ │ │ └─25177 /manager --leader-elect --diagnostics-address=:8443 --insecure-diagnostics=false --feature-gates=MachinePool=true,KubeadmBootstrapFormatIgnition=true,PriorityQueue=false --bootstrap-token-ttl=15m
│ │ │ │ └─cri-containerd-5793c32bcc0436f8fe34a586a9c3d691c3891ff8e512284b4424309d965cf4de.scope
│ │ │ │   └─24819 /pause
│ │ │ ├─kubelet-kubepods-besteffort-pod060f8598_7528_45b1_b3c5_0ca523a34f10.slice
│ │ │ │ ├─cri-containerd-a9b853c1562e2d9b5d25818d03170c0ae004e0c3bdcba5684225a09905a1b752.scope
│ │ │ │ │ └─24755 /pause
│ │ │ │ └─cri-containerd-f03af497176e3521a17e482305315b46ef9a3f06f8def1aa9d3e6b9f8a165825.scope
│ │ │ │   └─25002 /manager --leader-elect --diagnostics-address=:8443 --insecure-diagnostics=false --feature-gates=MachinePool=true,ClusterResourceSet=true,ClusterTopology=true,RuntimeSDK=false,MachineSetPreflightChecks=true,MachineWaitForVolumeDetachConsiderVolumeAttachments=true,PriorityQueue=false
│ │ │ ├─kubelet-kubepods-besteffort-pod15afed8a_99d8_4a13_9c07_038039770363.slice
│ │ │ │ ├─cri-containerd-716b23ff544bc41fa5015a42d703c5554339e9b24ab33edcd77f5b2026964d08.scope
│ │ │ │ │ └─25086 /pause
│ │ │ │ └─cri-containerd-d9dba19631b25e52efc68930f07a9a022f59c9244d7050bc186b9e4d87d4e755.scope
│ │ │ │   └─25471 /manager --leader-elect --v=2 --diagnostics-address=127.0.0.1:8080 --insecure-diagnostics=true
│ │ │ ├─kubelet-kubepods-besteffort-pod18fcda39_476d_4f2a_b389_6fff818f42ae.slice
│ │ │ │ ├─cri-containerd-10c28a6f48eec284148f84ce3e47fe3004a9cd7bbf1f9abc58e6814017b471be.scope
│ │ │ │ │ └─23824 /pause
│ │ │ │ └─cri-containerd-720d0f6651c8177a77c9230bf86a48de597bc5c1ec5db6e60bd66fe90064648e.scope
│ │ │ │   └─24002 local-path-provisioner --debug start --helper-image docker.io/kindest/local-path-helper:v20220607-9a4d8d2a --config /etc/config/config.json
│ │ │ ├─kubelet-kubepods-besteffort-pod2bdc6e4c_0088_47cb_be88_ab92547b89ae.slice
│ │ │ │ ├─cri-containerd-5e10cc4be68610c8fdfce2d97284fa2a72defecd8f9dbf1195ed33469751fc11.scope
│ │ │ │ │ └─23841 /pause
│ │ │ │ └─cri-containerd-6604ce9efc54877b4d953350f7e65a5f444d760019403df0dbdd867c03f80c27.scope
│ │ │ │   └─24200 /app/cmd/controller/controller --v=2 --cluster-resource-namespace=cert-manager --leader-election-namespace=kube-system --acme-http01-solver-image=quay.io/jetstack/cert-manager-acmesolver:v1.18.1 --max-concurrent-challenges=60
│ │ │ ├─kubelet-kubepods-besteffort-pod60d2d5fb_575b_4758_90d1_81d8244a7f54.slice
│ │ │ │ ├─cri-containerd-5c88b5d8462c6a19822bef36e5c767ce579a2df6df3f201d51bb2c4a584c158d.scope
│ │ │ │ │ └─24971 /pause
│ │ │ │ └─cri-containerd-75f147194ebc30a5d1f5ba46bc89ad0ee081af58bddf008e5031627017ed8994.scope
│ │ │ │   └─25312 /manager --leader-elect --diagnostics-address=:8443 --insecure-diagnostics=false --feature-gates=MachinePool=true,ClusterTopology=true,KubeadmBootstrapFormatIgnition=true,PriorityQueue=false
│ │ │ ├─kubelet-kubepods-besteffort-pod96cc9fa3_1069_4840_9c97_4d69571ebb29.slice
│ │ │ │ ├─cri-containerd-49b695fce71dcafd713509840486eda46a4808bc556797e8cbb013c178adf453.scope
│ │ │ │ │ └─24091 /pause
│ │ │ │ └─cri-containerd-6d62f3e77e85aff076f4bf174cf00cbbe5d08b7b01c315a48b33f121763c8447.scope
│ │ │ │   └─24533 /app/cmd/webhook/webhook --v=2 --secure-port=10250 --dynamic-serving-ca-secret-namespace=cert-manager --dynamic-serving-ca-secret-name=cert-manager-webhook-ca --dynamic-serving-dns-names=cert-manager-webhook --dynamic-serving-dns-names=cert-manager-webhook.cert-manager --dynamic-serving-dns-names=cert-manager-webhook.cert-manager.svc
│ │ │ ├─kubelet-kubepods-besteffort-pod9f8355b0_94bd_475d_bf74_9d386d0f5259.slice
│ │ │ │ ├─cri-containerd-7a0648fca673c15649668c644af739b895a375f8e0a5ec9c7cf267b897aa4815.scope
│ │ │ │ │ └─24078 /pause
│ │ │ │ └─cri-containerd-f55b671d3ac5e81b9dc93f8fe60e82166337b18ff359ea71e86690ffa57838b4.scope
│ │ │ │   └─24411 /app/cmd/cainjector/cainjector --v=2 --leader-election-namespace=kube-system
│ │ │ └─kubelet-kubepods-besteffort-podeba8cea0_a113_40e4_8af9_f9092b483360.slice
│ │ │   ├─cri-containerd-05883a8b5dcc6186bc92366d098adf91aacc2716c5f4654d5faee7250595663b.scope
│ │ │   │ └─23221 /pause
│ │ │   └─cri-containerd-68a9f7e8b2f1594edc9ae113bf7569a1d5ed85bb82562eb4364f5331c9f598ca.scope
│ │ │     └─23271 /usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=kind-control-plane
│ │ ├─kubelet-kubepods-burstable.slice
│ │ │ ├─kubelet-kubepods-burstable-pod0656ab70da313d6449b17f099a2a3110.slice
│ │ │ │ ├─cri-containerd-6d4579b16512918eddfa28c91b9b82464468be359a2a61c9fea7dc7b7ab46364.scope
│ │ │ │ │ └─22509 etcd --advertise-client-urls=https://172.18.0.2:2379 --cert-file=/etc/kubernetes/pki/etcd/server.crt --client-cert-auth=true --data-dir=/var/lib/etcd --experimental-initial-corrupt-check=true --experimental-watch-progress-notify-interval=5s --initial-advertise-peer-urls=https://172.18.0.2:2380 --initial-cluster=kind-control-plane=https://172.18.0.2:2380 --key-file=/etc/kubernetes/pki/etcd/server.key --listen-client-urls=https://127.0.0.1:2379,https://172.18.0.2:2379 --listen-metrics-urls=http://127.0.0.1:2381 --listen-peer-urls=https://172.18.0.2:2380 --name=kind-control-plane --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt --peer-client-cert-auth=true --peer-key-file=/etc/kubernetes/pki/etcd/peer.key --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt --snapshot-count=10000 --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
│ │ │ │ └─cri-containerd-a181437f07c759cb357dc1c5a4915703fdbc8421ac7dbe7ae420af7bbf5ce962.scope
│ │ │ │   └─22255 /pause
│ │ │ ├─kubelet-kubepods-burstable-pod53ff6c8abd472f64bc9a9afbd3a471a9.slice
│ │ │ │ ├─cri-containerd-9213f7ed733001c44536ef277d2f04abbec812060f0464459c296548b41c605b.scope
│ │ │ │ │ └─22269 /pause
│ │ │ │ └─cri-containerd-e5efc56a027eace488dd3cff0e461733af3798de3cb89fefc0a233cd6d868383.scope
│ │ │ │   └─22372 kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf --bind-address=127.0.0.1 --client-ca-file=/etc/kubernetes/pki/ca.crt --cluster-cidr=10.244.0.0/16 --cluster-name=kind --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt --cluster-signing-key-file=/etc/kubernetes/pki/ca.key "--controllers=*,bootstrapsigner,tokencleaner" --enable-hostpath-provisioner=true --kubeconfig=/etc/kubernetes/controller-manager.conf --leader-elect=true --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --root-ca-file=/etc/kubernetes/pki/ca.crt --service-account-private-key-file=/etc/kubernetes/pki/sa.key --service-cluster-ip-range=10.96.0.0/16 --use-service-account-credentials=true
│ │ │ ├─kubelet-kubepods-burstable-pod65d25134_75a8_44c0_b994_37071db70c0b.slice
│ │ │ │ ├─cri-containerd-0845db3249e178e867783ec1010a54509a9a6f20e675db3c40bafc17b94aff4a.scope
│ │ │ │ │ └─23596 /pause
│ │ │ │ └─cri-containerd-a1bcb37a57c99f9a954339f4c95765996f1fc2161db6fc87722931f900073eac.scope
│ │ │ │   └─23685 /coredns -conf /etc/coredns/Corefile
│ │ │ ├─kubelet-kubepods-burstable-pod922d5a86_cf0c_4898_9361_4f7a1724917a.slice
│ │ │ │ ├─cri-containerd-5f0448337760a1a92cb526a9ebe910cc2ad19bc77e1dfc4e2691cc1e11943675.scope
│ │ │ │ │ └─23927 /pause
│ │ │ │ └─cri-containerd-d0f8a8d96527dbcca96f1dd0492e8b1ba70ee11008c068b797069a257b450b1d.scope
│ │ │ │   └─24314 /manager --metrics-bind-address=:8443 --leader-elect --health-probe-bind-address=:8081
│ │ │ ├─kubelet-kubepods-burstable-podbee69ab63b6471d4da666ee970746eae.slice
│ │ │ │ ├─cri-containerd-5191216187fdcf29cecbd6140d2896e536a2153a60d5d507dc419302faacfe0d.scope
│ │ │ │ │ └─22253 /pause
│ │ │ │ └─cri-containerd-92b6f098aaae83573340f2ea18f968ceaff832acd7b11fb4c99b6ac6d401b2fe.scope
│ │ │ │   └─22350 kube-scheduler --authentication-kubeconfig=/etc/kubernetes/scheduler.conf --authorization-kubeconfig=/etc/kubernetes/scheduler.conf --bind-address=127.0.0.1 --kubeconfig=/etc/kubernetes/scheduler.conf --leader-elect=true
│ │ │ ├─kubelet-kubepods-burstable-podcbd4ee29_9a60_4f24_babe_75a79e0262a8.slice
│ │ │ │ ├─cri-containerd-45366b6128ee283ed8e551bd4cfbb23d9e41caf8e9df563caa159264f5fb5a8f.scope
│ │ │ │ │ └─23604 /pause
│ │ │ │ └─cri-containerd-c82673298311c208438753cc6f9980d181abc401eaf370dd7390fdbc968f243a.scope
│ │ │ │   └─23676 /coredns -conf /etc/coredns/Corefile
│ │ │ └─kubelet-kubepods-burstable-podef6ebc9842be361e05ebdb6790c540b6.slice
│ │ │   ├─cri-containerd-048d36383cc06e32dd1ab4003dee6a9343d2ccda4cb4eb939db648f16ac6f620.scope
│ │ │   │ └─22272 /pause
│ │ │   └─cri-containerd-d4a9d2a347b177fb443b9691e9438d1c0ee06ea2f1d19bf68afb66b1353f589c.scope
│ │ │     └─22412 kube-apiserver --advertise-address=172.18.0.2 --allow-privileged=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/pki/ca.crt --enable-admission-plugins=NodeRestriction --enable-bootstrap-token-auth=true --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key --etcd-servers=https://127.0.0.1:2379 --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --runtime-config= --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/etc/kubernetes/pki/sa.pub --service-account-signing-key-file=/etc/kubernetes/pki/sa.key --service-cluster-ip-range=10.96.0.0/16 --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
│ │ └─kubelet-kubepods-podad85b7c2_f9f9_4ec9_b260_341f20aa22ff.slice
│ │   ├─cri-containerd-840e0998fae8407b6859c75f3049d4116708e1a099a2af82c2a231ec21648183.scope
│ │   │ └─23228 /pause
│ │   └─cri-containerd-8a5e3ce32b811e69fef7d0bd0b708db17b4ffe5f3648638f8d7369dee746a825.scope
│ │     └─23315 /bin/kindnetd
│ └─kubelet.service
│   └─22594 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --node-ip=172.18.0.2 --node-labels= --pod-infra-container-image=registry.k8s.io/pause:3.8 --provider-id=kind://docker/kind/kind-control-plane --fail-swap-on=false --cgroup-root=/kubelet
└─system.slice
  ├─containerd.service
  │ ├─21726 /usr/local/bin/containerd
  │ ├─22165 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id a181437f07c759cb357dc1c5a4915703fdbc8421ac7dbe7ae420af7bbf5ce962 -address /run/containerd/containerd.sock
  │ ├─22172 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id 5191216187fdcf29cecbd6140d2896e536a2153a60d5d507dc419302faacfe0d -address /run/containerd/containerd.sock
  │ ├─22182 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id 048d36383cc06e32dd1ab4003dee6a9343d2ccda4cb4eb939db648f16ac6f620 -address /run/containerd/containerd.sock
  │ ├─22207 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id 9213f7ed733001c44536ef277d2f04abbec812060f0464459c296548b41c605b -address /run/containerd/containerd.sock
  │ ├─23175 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id 05883a8b5dcc6186bc92366d098adf91aacc2716c5f4654d5faee7250595663b -address /run/containerd/containerd.sock
  │ ├─23197 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id 840e0998fae8407b6859c75f3049d4116708e1a099a2af82c2a231ec21648183 -address /run/containerd/containerd.sock
  │ ├─23556 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id 45366b6128ee283ed8e551bd4cfbb23d9e41caf8e9df563caa159264f5fb5a8f -address /run/containerd/containerd.sock
  │ ├─23564 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id 0845db3249e178e867783ec1010a54509a9a6f20e675db3c40bafc17b94aff4a -address /run/containerd/containerd.sock
  │ ├─23750 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id 10c28a6f48eec284148f84ce3e47fe3004a9cd7bbf1f9abc58e6814017b471be -address /run/containerd/containerd.sock
  │ ├─23778 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id 5e10cc4be68610c8fdfce2d97284fa2a72defecd8f9dbf1195ed33469751fc11 -address /run/containerd/containerd.sock
  │ ├─23907 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id 5f0448337760a1a92cb526a9ebe910cc2ad19bc77e1dfc4e2691cc1e11943675 -address /run/containerd/containerd.sock
  │ ├─24027 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id 7a0648fca673c15649668c644af739b895a375f8e0a5ec9c7cf267b897aa4815 -address /run/containerd/containerd.sock
  │ ├─24053 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id 49b695fce71dcafd713509840486eda46a4808bc556797e8cbb013c178adf453 -address /run/containerd/containerd.sock
  │ ├─24730 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id a9b853c1562e2d9b5d25818d03170c0ae004e0c3bdcba5684225a09905a1b752 -address /run/containerd/containerd.sock
  │ ├─24799 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id 5793c32bcc0436f8fe34a586a9c3d691c3891ff8e512284b4424309d965cf4de -address /run/containerd/containerd.sock
  │ ├─24951 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id 5c88b5d8462c6a19822bef36e5c767ce579a2df6df3f201d51bb2c4a584c158d -address /run/containerd/containerd.sock
  │ └─25067 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id 716b23ff544bc41fa5015a42d703c5554339e9b24ab33edcd77f5b2026964d08 -address /run/containerd/containerd.sock
  └─systemd-journald.service
    └─21712 /lib/systemd/systemd-journald
Feb 16 17:58:09 np0000155647 systemd[1]: Started docker-089fb10c98a483eac1b5cbd49445b3386ad4740a6877edc01e9209d9dbf220c7.scope - libcontainer container 089fb10c98a483eac1b5cbd49445b3386ad4740a6877edc01e9209d9dbf220c7.
● init.scope - System and Service Manager Loaded: loaded Transient: yes Active: active (running) since Mon 2026-02-16 17:51:02 UTC; 25min ago Docs: man:systemd(1) Tasks: 1 (limit: 77077) Memory: 10.5M (peak: 24.2M) CPU: 1min 14.220s CGroup: /init.scope
└─1 /sbin/init nofb
Feb 16 18:16:15 np0000155647 systemd[1]: Reloading...
Feb 16 18:16:16 np0000155647 systemd[1]: Reloading finished in 326 ms.
Feb 16 18:16:16 np0000155647 systemd[1]: Started devstack@m-sch.service - Devstack devstack@m-sch.service.
Feb 16 18:16:17 np0000155647 systemd[1]: Reloading requested from client PID 128205 ('systemctl') (unit session-1.scope)...
Feb 16 18:16:17 np0000155647 systemd[1]: Reloading...
Feb 16 18:16:18 np0000155647 systemd[1]: Reloading finished in 324 ms. Feb 16 18:16:18 np0000155647 systemd[1]: Reloading requested from client PID 128296 ('systemctl') (unit session-1.scope)... Feb 16 18:16:18 np0000155647 systemd[1]: Reloading... Feb 16 18:16:18 np0000155647 systemd[1]: Reloading finished in 304 ms. Feb 16 18:16:18 np0000155647 systemd[1]: Started devstack@m-dat.service - Devstack devstack@m-dat.service. ● session-1.scope - Session 1 of User zuul Loaded: loaded (/run/systemd/transient/session-1.scope; transient) Transient: yes Active: active (running) since Mon 2026-02-16 17:51:35 UTC; 25min ago Tasks: 13 Memory: 20.6G (peak: 20.7G) CPU: 23min 4.536s CGroup: /user.slice/user-1000.slice/session-1.scope ├─ 828 "sshd: zuul [priv]" ├─ 849 "sshd: zuul@notty" ├─ 1054 /usr/bin/python3 ├─130815 sh -c "/bin/sh -c 'sudo -H -S -n -u root /bin/sh -c '\"'\"'echo BECOME-SUCCESS-teaxywwfhlmfpoullrqpxwihwgjieefl ; /usr/bin/python3'\"'\"' && sleep 0'" ├─130816 /bin/sh -c "sudo -H -S -n -u root /bin/sh -c 'echo BECOME-SUCCESS-teaxywwfhlmfpoullrqpxwihwgjieefl ; /usr/bin/python3' && sleep 0" ├─130817 sudo -H -S -n -u root /bin/sh -c "echo BECOME-SUCCESS-teaxywwfhlmfpoullrqpxwihwgjieefl ; /usr/bin/python3" ├─130818 /bin/sh -c "echo BECOME-SUCCESS-teaxywwfhlmfpoullrqpxwihwgjieefl ; /usr/bin/python3" ├─130819 /usr/bin/python3 ├─130820 /bin/bash -c "sudo iptables-save > /home/zuul/iptables.txt\n\n# NOTE(sfernand): Run 'df' with a 60s timeout to prevent hangs from\n# stale NFS mounts.\ntimeout -s 9 60s df -h > /home/zuul/df.txt || true\n# If 'df' times out, the mount output helps debug which NFS share\n# is unresponsive.\nmount > /home/zuul/mount.txt\n\nfor py_ver in 2 3; do\n if [[ \`which python\${py_ver}\` ]]; then\n python\${py_ver} -m pip freeze > /home/zuul/pip\${py_ver}-freeze.txt\n fi\ndone\n\nif [ \`command -v dpkg\` ]; then\n dpkg -l> /home/zuul/dpkg-l.txt\nfi\nif [ \`command -v rpm\` ]; then\n rpm -qa | sort > /home/zuul/rpm-qa.txt\nfi\n\n# Services 
status\nsudo systemctl status --all > services.txt 2>/dev/null\n\n# NOTE(kchamart) The 'audit.log' can be useful in cases when QEMU\n# failed to start due to denials from SELinux — useful for CentOS\n# and Fedora machines. For Ubuntu (which runs AppArmor), DevStack\n# already captures the contents of /var/log/kern.log (via\n# \`journalctl -t kernel\` redirected into syslog.txt.gz), which\n# contains AppArmor-related messages.\nif [ -f /var/log/audit/audit.log ] ; then\n sudo cp /var/log/audit/audit.log /home/zuul/audit.log &&\n chmod +r /home/zuul/audit.log;\nfi\n\n# gzip and save any coredumps in /var/core\nif [ -d /var/core ]; then\n sudo gzip -r /var/core\n sudo cp -r /var/core /home/zuul/\nfi\n\nsudo ss -lntup | grep ':53' > /home/zuul/listen53.txt\n\n# NOTE(andreaf) Service logs are already in logs/ thanks to the\n# export-devstack-journal role. Apache logs are under apache/ thanks to the\n# apache-logs-conf role.\ngrep -i deprecat /home/zuul/logs/*.txt /home/zuul/apache/*.log | \\\n sed -r 's/[0-9]{1,2}\\:[0-9]{1,2}\\:[0-9]{1,2}\\.[0-9]{1,3}/ /g' | \\\n sed -r 's/[0-9]{1,2}\\:[0-9]{1,2}\\:[0-9]{1,2}/ /g' | \\\n sed -r 's/[0-9]{1,4}-[0-9]{1,2}-[0-9]{1,4}/ /g' |\n sed -r 's/\\[.*\\]/ /g' | \\\n sed -r 's/\\s[0-9]+\\s/ /g' | \\\n awk '{if (\$0 in seen) {seen[\$0]++} else {out[++n]=\$0;seen[\$0]=1}} END { for (i=1; i<=n; i++) print seen[out[i]]\" :: \" out[i] }' > /home/zuul/deprecations.log\n" ├─130834 sudo systemctl status --all └─130835 systemctl status --all Feb 16 18:16:57 np0000155647 python3[130809]: ansible-ansible.legacy.command Invoked with _raw_params=cp -pRL /etc/openstack /home/zuul/etc/ zuul_no_log=False zuul_log_id=0242ac17-0010-4345-bbbd-00000000002f-1-controller zuul_output_max_bytes=1073741824 zuul_ansible_split_streams=False _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 16 18:16:57 np0000155647 sudo[130807]: 
pam_unix(sudo:session): session closed for user root Feb 16 18:16:57 np0000155647 sudo[130817]: zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-teaxywwfhlmfpoullrqpxwihwgjieefl ; /usr/bin/python3' Feb 16 18:16:57 np0000155647 sudo[130817]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000) Feb 16 18:16:58 np0000155647 python3[130819]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=sudo iptables-save > /home/zuul/iptables.txt # NOTE(sfernand): Run 'df' with a 60s timeout to prevent hangs from # stale NFS mounts. timeout -s 9 60s df -h > /home/zuul/df.txt || true # If 'df' times out, the mount output helps debug which NFS share # is unresponsive. mount > /home/zuul/mount.txt for py_ver in 2 3; do if [[ `which python${py_ver}` ]]; then python${py_ver} -m pip freeze > /home/zuul/pip${py_ver}-freeze.txt fi done if [ `command -v dpkg` ]; then dpkg -l> /home/zuul/dpkg-l.txt fi if [ `command -v rpm` ]; then rpm -qa | sort > /home/zuul/rpm-qa.txt fi # Services status sudo systemctl status --all > services.txt 2>/dev/null # NOTE(kchamart) The 'audit.log' can be useful in cases when QEMU # failed to start due to denials from SELinux — useful for CentOS # and Fedora machines. For Ubuntu (which runs AppArmor), DevStack # already captures the contents of /var/log/kern.log (via # `journalctl -t kernel` redirected into syslog.txt.gz), which # contains AppArmor-related messages. if [ -f /var/log/audit/audit.log ] ; then sudo cp /var/log/audit/audit.log /home/zuul/audit.log && chmod +r /home/zuul/audit.log; fi # gzip and save any coredumps in /var/core if [ -d /var/core ]; then sudo gzip -r /var/core sudo cp -r /var/core /home/zuul/ fi sudo ss -lntup | grep ':53' > /home/zuul/listen53.txt # NOTE(andreaf) Service logs are already in logs/ thanks to the # export-devstack-journal role. Apache logs are under apache/ thanks to the # apache-logs-conf role. 
grep -i deprecat /home/zuul/logs/*.txt /home/zuul/apache/*.log | \ sed -r 's/[0-9]{1,2}\:[0-9]{1,2}\:[0-9]{1,2}\.[0-9]{1,3}/ /g' | \ sed -r 's/[0-9]{1,2}\:[0-9]{1,2}\:[0-9]{1,2}/ /g' | \ sed -r 's/[0-9]{1,4}-[0-9]{1,2}-[0-9]{1,4}/ /g' | sed -r 's/\[.*\]/ /g' | \ sed -r 's/\s[0-9]+\s/ /g' | \ awk '{if ($0 in seen) {seen[$0]++} else {out[++n]=$0;seen[$0]=1}} END { for (i=1; i<=n; i++) print seen[out[i]]" :: " out[i] }' > /home/zuul/deprecations.log _uses_shell=True zuul_no_log=False zuul_log_id=0242ac17-0010-4345-bbbd-000000000033-1-controller zuul_output_max_bytes=1073741824 zuul_ansible_split_streams=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None Feb 16 18:16:58 np0000155647 sudo[130822]: root : PWD=/home/zuul ; USER=root ; COMMAND=/usr/sbin/iptables-save Feb 16 18:16:58 np0000155647 sudo[130822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=0) Feb 16 18:16:58 np0000155647 sudo[130822]: pam_unix(sudo:session): session closed for user root Feb 16 18:16:58 np0000155647 sudo[130834]: root : PWD=/home/zuul ; USER=root ; COMMAND=/usr/bin/systemctl status --all Feb 16 18:16:58 np0000155647 sudo[130834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=0) ● apache-htcacheclean.service - Disk Cache Cleaning Daemon for Apache HTTP Server Loaded: loaded (/usr/lib/systemd/system/apache-htcacheclean.service; disabled; preset: enabled) Active: active (running) since Mon 2026-02-16 17:56:33 UTC; 20min ago Docs: https://httpd.apache.org/docs/2.4/programs/htcacheclean.html Main PID: 14203 (htcacheclean) Tasks: 1 (limit: 77077) Memory: 296.0K (peak: 712.0K) CPU: 62ms CGroup: /system.slice/apache-htcacheclean.service └─14203 /usr/bin/htcacheclean -d 120 -p /var/cache/apache2/mod_cache_disk -l 300M -n Feb 16 17:56:33 np0000155647 systemd[1]: Starting apache-htcacheclean.service - Disk Cache Cleaning Daemon for Apache HTTP Server... 
Feb 16 17:56:33 np0000155647 systemd[1]: Started apache-htcacheclean.service - Disk Cache Cleaning Daemon for Apache HTTP Server. ● apache2.service - The Apache HTTP Server Loaded: loaded (/usr/lib/systemd/system/apache2.service; enabled; preset: enabled) Active: active (running) since Mon 2026-02-16 18:15:33 UTC; 1min 25s ago Docs: https://httpd.apache.org/docs/2.4/ Main PID: 122537 (apache2) Tasks: 69 (limit: 77077) Memory: 17.5M (peak: 18.9M) CPU: 936ms CGroup: /system.slice/apache2.service ├─122537 /usr/sbin/apache2 -k start ├─122541 /usr/sbin/apache2 -k start └─122542 /usr/sbin/apache2 -k start Feb 16 18:15:33 np0000155647 systemd[1]: Starting apache2.service - The Apache HTTP Server... Feb 16 18:15:33 np0000155647 systemd[1]: Started apache2.service - The Apache HTTP Server. ○ apt-daily-upgrade.service - Daily apt upgrade and clean activities Loaded: loaded (/usr/lib/systemd/system/apt-daily-upgrade.service; static) Active: inactive (dead) TriggeredBy: ● apt-daily-upgrade.timer Docs: man:apt(8) ○ apt-daily.service - Daily apt download activities Loaded: loaded (/usr/lib/systemd/system/apt-daily.service; static) Active: inactive (dead) since Mon 2026-02-16 18:13:21 UTC; 3min 37s ago TriggeredBy: ● apt-daily.timer Docs: man:apt(8) Main PID: 109843 (code=exited, status=0/SUCCESS) CPU: 640ms Feb 16 18:13:21 np0000155647 systemd[1]: Starting apt-daily.service - Daily apt download activities... Feb 16 18:13:21 np0000155647 systemd[1]: apt-daily.service: Deactivated successfully. Feb 16 18:13:21 np0000155647 systemd[1]: Finished apt-daily.service - Daily apt download activities. 
○ auth-rpcgss-module.service - Kernel Module supporting RPCSEC_GSS Loaded: loaded (/usr/lib/systemd/system/auth-rpcgss-module.service; static) Active: inactive (dead) Condition: start condition unmet at Mon 2026-02-16 18:03:48 UTC; 13min ago Feb 16 18:03:46 np0000155647 systemd[1]: auth-rpcgss-module.service - Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab). Feb 16 18:03:47 np0000155647 systemd[1]: auth-rpcgss-module.service - Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab). Feb 16 18:03:48 np0000155647 systemd[1]: auth-rpcgss-module.service - Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab). ● blk-availability.service - Availability of block devices Loaded: loaded (/usr/lib/systemd/system/blk-availability.service; enabled; preset: enabled) Active: active (exited) since Mon 2026-02-16 17:56:36 UTC; 20min ago Feb 16 17:56:36 np0000155647 systemd[1]: Finished blk-availability.service - Availability of block devices. ● cloud-config.service - Cloud-init: Config Stage Loaded: loaded (/usr/lib/systemd/system/cloud-config.service; enabled; preset: enabled) Active: active (exited) since Mon 2026-02-16 17:51:10 UTC; 25min ago Main PID: 707 (code=exited, status=0/SUCCESS) Tasks: 0 (limit: 77077) Memory: 9.7M (peak: 47.3M) CPU: 596ms CGroup: /system.slice/cloud-config.service Feb 16 17:51:10 np0000155647 systemd[1]: Starting cloud-config.service - Cloud-init: Config Stage... Feb 16 17:51:10 np0000155647 cloud-init[763]: Cloud-init v. 25.2-0ubuntu1~24.04.1 running 'modules:config' at Mon, 16 Feb 2026 17:51:10 +0000. Up 10.01 seconds. Feb 16 17:51:10 np0000155647 systemd[1]: Finished cloud-config.service - Cloud-init: Config Stage. 
● cloud-final.service - Cloud-init: Final Stage Loaded: loaded (/usr/lib/systemd/system/cloud-final.service; enabled; preset: enabled) Active: active (exited) since Mon 2026-02-16 17:51:11 UTC; 25min ago Main PID: 786 (code=exited, status=0/SUCCESS) Tasks: 0 Memory: 420.0K (peak: 31.4M) CPU: 486ms CGroup: /system.slice/cloud-final.service Feb 16 17:51:11 np0000155647 cloud-init[806]: ############################################################# Feb 16 17:51:11 np0000155647 cloud-init[807]: -----BEGIN SSH HOST KEY FINGERPRINTS----- Feb 16 17:51:11 np0000155647 cloud-init[809]: 1024 SHA256:8XspVvG3gWE71MUTvp3Bdbxp4TYisUGI9yZVw+pbX+I root@np0000155647 (DSA) Feb 16 17:51:11 np0000155647 cloud-init[811]: 256 SHA256:Yva4p/HA50XTTTo8XvTgTeEoFHvgZtjNhLm+4CBTcQc root@np0000155647 (ECDSA) Feb 16 17:51:11 np0000155647 cloud-init[813]: 256 SHA256:F7RtwYy5YuX5RmNHyI6FE0c0cUK38AMOs9doJAYKaVA root@np0000155647 (ED25519) Feb 16 17:51:11 np0000155647 cloud-init[815]: 3072 SHA256:R6vctQ7z+TYJsztPXHNFcmCY86ZrN9m3rIhwUxppaxU root@np0000155647 (RSA) Feb 16 17:51:11 np0000155647 cloud-init[816]: -----END SSH HOST KEY FINGERPRINTS----- Feb 16 17:51:11 np0000155647 cloud-init[817]: ############################################################# Feb 16 17:51:11 np0000155647 cloud-init[804]: Cloud-init v. 25.2-0ubuntu1~24.04.1 finished at Mon, 16 Feb 2026 17:51:11 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0]. Up 10.94 seconds Feb 16 17:51:11 np0000155647 systemd[1]: Finished cloud-final.service - Cloud-init: Final Stage. 
○ cloud-init-hotplugd.service - Cloud-init: Hotplug Hook Loaded: loaded (/usr/lib/systemd/system/cloud-init-hotplugd.service; static) Active: inactive (dead) TriggeredBy: ● cloud-init-hotplugd.socket ● cloud-init-local.service - Cloud-init: Local Stage (pre-network) Loaded: loaded (/usr/lib/systemd/system/cloud-init-local.service; enabled; preset: enabled) Active: active (exited) since Mon 2026-02-16 17:51:04 UTC; 25min ago Main PID: 437 (code=exited, status=0/SUCCESS) Tasks: 0 (limit: 77077) Memory: 22.5M (peak: 67.3M) CPU: 904ms CGroup: /system.slice/cloud-init-local.service Feb 16 17:51:03 ubuntu systemd[1]: Starting cloud-init-local.service - Cloud-init: Local Stage (pre-network)... Feb 16 17:51:03 ubuntu cloud-init[546]: Cloud-init v. 25.2-0ubuntu1~24.04.1 running 'init-local' at Mon, 16 Feb 2026 17:51:03 +0000. Up 3.61 seconds. Feb 16 17:51:04 np0000155647 systemd[1]: Finished cloud-init-local.service - Cloud-init: Local Stage (pre-network). ● cloud-init.service - Cloud-init: Network Stage Loaded: loaded (/usr/lib/systemd/system/cloud-init.service; enabled; preset: enabled) Active: active (exited) since Mon 2026-02-16 17:51:10 UTC; 25min ago Main PID: 615 (code=exited, status=0/SUCCESS) Tasks: 0 (limit: 77077) Memory: 27.5M (peak: 59.3M) CPU: 1.628s CGroup: /system.slice/cloud-init.service Feb 16 17:51:09 np0000155647 cloud-init[620]: | o. . o*@*%.. | Feb 16 17:51:09 np0000155647 cloud-init[620]: | . . o+BB.= | Feb 16 17:51:09 np0000155647 cloud-init[620]: | o === .. | Feb 16 17:51:09 np0000155647 cloud-init[620]: | .S..+ o | Feb 16 17:51:09 np0000155647 cloud-init[620]: | . o | Feb 16 17:51:09 np0000155647 cloud-init[620]: | . | Feb 16 17:51:09 np0000155647 cloud-init[620]: | | Feb 16 17:51:09 np0000155647 cloud-init[620]: | | Feb 16 17:51:09 np0000155647 cloud-init[620]: +----[SHA256]-----+ Feb 16 17:51:10 np0000155647 systemd[1]: Finished cloud-init.service - Cloud-init: Network Stage. 
● containerd.service - containerd container runtime Loaded: loaded (/usr/lib/systemd/system/containerd.service; enabled; preset: enabled) Active: active (running) since Mon 2026-02-16 17:57:45 UTC; 19min ago Docs: https://containerd.io Main PID: 20631 (containerd) Tasks: 32 Memory: 1.3G (peak: 1.3G) CPU: 14.620s CGroup: /system.slice/containerd.service ├─20631 /usr/bin/containerd └─21507 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 089fb10c98a483eac1b5cbd49445b3386ad4740a6877edc01e9209d9dbf220c7 -address /run/containerd/containerd.sock Feb 16 17:57:45 np0000155647 containerd[20631]: time="2026-02-16T17:57:45.021523519Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Feb 16 17:57:45 np0000155647 containerd[20631]: time="2026-02-16T17:57:45.021536740Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Feb 16 17:57:45 np0000155647 containerd[20631]: time="2026-02-16T17:57:45.021548840Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Feb 16 17:57:45 np0000155647 containerd[20631]: time="2026-02-16T17:57:45.021559970Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Feb 16 17:57:45 np0000155647 containerd[20631]: time="2026-02-16T17:57:45.021572440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Feb 16 17:57:45 np0000155647 containerd[20631]: time="2026-02-16T17:57:45.021820355Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 16 17:57:45 np0000155647 containerd[20631]: time="2026-02-16T17:57:45.021888966Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Feb 16 17:57:45 np0000155647 containerd[20631]: time="2026-02-16T17:57:45.021954547Z" level=info msg="containerd successfully booted in 0.030842s" Feb 16 17:57:45 np0000155647 systemd[1]: Started containerd.service - containerd container runtime. Feb 16 17:58:09 np0000155647 containerd[20631]: time="2026-02-16T17:58:09.069558279Z" level=info msg="connecting to shim 089fb10c98a483eac1b5cbd49445b3386ad4740a6877edc01e9209d9dbf220c7" address="unix:///run/containerd/s/7dc57554ad4dfa5245adbc44640f8fde9233decd0ad7cd6a71931b5fd2720c04" namespace=moby protocol=ttrpc version=3 ● dbus.service - D-Bus System Message Bus Loaded: loaded (/usr/lib/systemd/system/dbus.service; static) Active: active (running) since Mon 2026-02-16 17:51:10 UTC; 25min ago TriggeredBy: ● dbus.socket Docs: man:dbus-daemon(1) Main PID: 708 (dbus-daemon) Tasks: 1 (limit: 77077) Memory: 2.2M (peak: 2.9M) CPU: 6.538s CGroup: /system.slice/dbus.service └─708 @dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation --syslog-only Feb 16 17:55:43 np0000155647 dbus-daemon[708]: Unknown username "dnsmasq" in message bus configuration file Feb 16 17:55:43 np0000155647 dbus-daemon[708]: [system] Reloaded configuration Feb 16 17:56:37 np0000155647 dbus-daemon[708]: [system] Reloaded configuration Feb 16 18:02:08 np0000155647 dbus-daemon[708]: [system] Reloaded configuration Feb 16 18:02:08 np0000155647 dbus-daemon[708]: [system] Reloaded configuration Feb 16 18:02:08 np0000155647 dbus-daemon[708]: [system] Reloaded configuration Feb 16 18:02:08 np0000155647 dbus-daemon[708]: [system] Reloaded configuration Feb 16 18:02:08 np0000155647 dbus-daemon[708]: [system] Reloaded configuration Feb 16 18:02:08 np0000155647 dbus-daemon[708]: [system] Reloaded configuration Feb 16 18:02:23 np0000155647 dbus-daemon[708]: [system] Reloaded configuration ● devstack@barbican-keystone-listener.service - Devstack devstack@barbican-keystone-listener.service Loaded: 
loaded (/etc/systemd/system/devstack@barbican-keystone-listener.service; enabled; preset: enabled) Active: active (running) since Mon 2026-02-16 18:14:30 UTC; 2min 28s ago Main PID: 118155 (barbican-keysto) Tasks: 26 (limit: 77077) Memory: 86.4M (peak: 87.8M) CPU: 1.444s CGroup: /system.slice/system-devstack.slice/devstack@barbican-keystone-listener.service ├─118155 "barbican-keystone-listener: master process [/opt/stack/data/venv/bin/barbican-keystone-listener --config-file=/etc/barbican/barbican.conf]" └─118402 "barbican-keystone-listener: ServiceWrapper worker(0)" Feb 16 18:15:27 np0000155647 barbican-keystone-listener[118402]: 2026-02-16 18:15:27.043 118402 DEBUG barbican.queue.keystone_listener [-] Input keystone event publisher_id = api.np0000155647 process_event /opt/stack/barbican/barbican/queue/keystone_listener.py:74 Feb 16 18:15:27 np0000155647 barbican-keystone-listener[118402]: 2026-02-16 18:15:27.043 118402 DEBUG barbican.queue.keystone_listener [-] Input keystone event payload = {'tenant_id': 'c5a6b0746f4d47e385a76a4c9d68a217', 'user_id': '20f36523e4dd4978a6b417e9ef4d36fc', 'key_name': 'manila-service'} process_event /opt/stack/barbican/barbican/queue/keystone_listener.py:75 Feb 16 18:15:27 np0000155647 barbican-keystone-listener[118402]: 2026-02-16 18:15:27.044 118402 DEBUG barbican.queue.keystone_listener [-] Input keystone event type = keypair.import.start process_event /opt/stack/barbican/barbican/queue/keystone_listener.py:76 Feb 16 18:15:27 np0000155647 barbican-keystone-listener[118402]: 2026-02-16 18:15:27.044 118402 DEBUG barbican.queue.keystone_listener [-] Input keystone event metadata = {'message_id': '9ef577f2-60a5-4cd2-b61f-5e46290b35f7', 'timestamp': '2026-02-16 18:15:27.021701'} process_event /opt/stack/barbican/barbican/queue/keystone_listener.py:77 Feb 16 18:15:27 np0000155647 barbican-keystone-listener[118402]: 2026-02-16 18:15:27.045 118402 DEBUG barbican.queue.keystone_listener [-] Keystone Event: resource type=import, operation 
type=start, keystone id=None process_event /opt/stack/barbican/barbican/queue/keystone_listener.py:80 Feb 16 18:15:27 np0000155647 barbican-keystone-listener[118402]: 2026-02-16 18:15:27.597 118402 DEBUG barbican.queue.keystone_listener [-] Input keystone event publisher_id = api.np0000155647 process_event /opt/stack/barbican/barbican/queue/keystone_listener.py:74 Feb 16 18:15:27 np0000155647 barbican-keystone-listener[118402]: 2026-02-16 18:15:27.597 118402 DEBUG barbican.queue.keystone_listener [-] Input keystone event payload = {'tenant_id': 'c5a6b0746f4d47e385a76a4c9d68a217', 'user_id': '20f36523e4dd4978a6b417e9ef4d36fc', 'key_name': 'manila-service'} process_event /opt/stack/barbican/barbican/queue/keystone_listener.py:75 Feb 16 18:15:27 np0000155647 barbican-keystone-listener[118402]: 2026-02-16 18:15:27.597 118402 DEBUG barbican.queue.keystone_listener [-] Input keystone event type = keypair.import.end process_event /opt/stack/barbican/barbican/queue/keystone_listener.py:76 Feb 16 18:15:27 np0000155647 barbican-keystone-listener[118402]: 2026-02-16 18:15:27.597 118402 DEBUG barbican.queue.keystone_listener [-] Input keystone event metadata = {'message_id': 'a30c9592-cb97-49b6-a1bb-2c3092dd2d38', 'timestamp': '2026-02-16 18:15:27.593084'} process_event /opt/stack/barbican/barbican/queue/keystone_listener.py:77 Feb 16 18:15:27 np0000155647 barbican-keystone-listener[118402]: 2026-02-16 18:15:27.597 118402 DEBUG barbican.queue.keystone_listener [-] Keystone Event: resource type=import, operation type=end, keystone id=None process_event /opt/stack/barbican/barbican/queue/keystone_listener.py:80 ● devstack@barbican-retry.service - Devstack devstack@barbican-retry.service Loaded: loaded (/etc/systemd/system/devstack@barbican-retry.service; enabled; preset: enabled) Active: active (running) since Mon 2026-02-16 18:14:28 UTC; 2min 30s ago Main PID: 117625 (barbican-retry:) Tasks: 7 (limit: 77077) Memory: 94.0M (peak: 94.3M) CPU: 1.225s CGroup: 
/system.slice/system-devstack.slice/devstack@barbican-retry.service ├─117625 "barbican-retry: master process [/opt/stack/data/venv/bin/barbican-retry --config-file=/etc/barbican/barbican.conf]" └─117918 "barbican-retry: ServiceWrapper worker(0)" Feb 16 18:16:41 np0000155647 barbican-retry[117625]: 2026-02-16 18:16:41.640 117625 INFO barbican.queue.retry_scheduler [-] Done processing '0' tasks, will check again in '10.121255215923748' seconds. Feb 16 18:16:41 np0000155647 barbican-retry[117625]: 2026-02-16 18:16:41.640 117625 DEBUG oslo.service.backend._threading.loopingcall [-] Dynamic interval looping call 'barbican.queue.retry_scheduler.PeriodicServer._check_retry_tasks' sleeping for 10.00 seconds _run_loop /opt/stack/data/venv/lib/python3.12/site-packages/oslo_service/backend/_threading/loopingcall.py:125 Feb 16 18:16:51 np0000155647 barbican-retry[117625]: 2026-02-16 18:16:51.638 117625 DEBUG dbcounter [-] [117625] Writing DB stats barbican:SELECT=10 stat_writer /opt/stack/data/venv/lib/python3.12/site-packages/dbcounter.py:115 Feb 16 18:16:51 np0000155647 barbican-retry[117625]: 2026-02-16 18:16:51.641 117625 INFO barbican.queue.retry_scheduler [-] Processing scheduled retry tasks: Feb 16 18:16:51 np0000155647 barbican-retry[117625]: 2026-02-16 18:16:51.641 117625 DEBUG barbican.model.repositories [-] Clean paging values limit=10, offset=0 clean_paging_values /opt/stack/barbican/barbican/model/repositories.py:267 Feb 16 18:16:51 np0000155647 barbican-retry[117625]: 2026-02-16 18:16:51.642 117625 DEBUG barbican.model.repositories [-] Getting session... 
get_session /opt/stack/barbican/barbican/model/repositories.py:309
Feb 16 18:16:51 np0000155647 barbican-retry[117625]: 2026-02-16 18:16:51.642 117625 DEBUG barbican.model.repositories [-] Retrieving from 0 to 10 get_by_create_date /opt/stack/barbican/barbican/model/repositories.py:1263
Feb 16 18:16:51 np0000155647 barbican-retry[117625]: 2026-02-16 18:16:51.647 117625 DEBUG barbican.model.repositories [-] Number entities retrieved: 0 out of 0 get_by_create_date /opt/stack/barbican/barbican/model/repositories.py:1266
Feb 16 18:16:51 np0000155647 barbican-retry[117625]: 2026-02-16 18:16:51.648 117625 INFO barbican.queue.retry_scheduler [-] Done processing '0' tasks, will check again in '9.011006409258036' seconds.
Feb 16 18:16:51 np0000155647 barbican-retry[117625]: 2026-02-16 18:16:51.648 117625 DEBUG oslo.service.backend._threading.loopingcall [-] Dynamic interval looping call 'barbican.queue.retry_scheduler.PeriodicServer._check_retry_tasks' sleeping for 9.01 seconds _run_loop /opt/stack/data/venv/lib/python3.12/site-packages/oslo_service/backend/_threading/loopingcall.py:125

● devstack@barbican-svc.service - Devstack devstack@barbican-svc.service
     Loaded: loaded (/etc/systemd/system/devstack@barbican-svc.service; enabled; preset: enabled)
     Active: active (running) since Mon 2026-02-16 18:14:16 UTC; 2min 42s ago
   Main PID: 117084 (uwsgi)
     Status: "uWSGI is ready"
      Tasks: 5 (limit: 77077)
     Memory: 355.0M (peak: 355.6M)
        CPU: 5.952s
     CGroup: /system.slice/system-devstack.slice/devstack@barbican-svc.service
             ├─117084 /opt/stack/data/venv/bin/uwsgi --ini /etc/barbican/barbican-uwsgi.ini --venv /opt/stack/data/venv
             ├─117085 /opt/stack/data/venv/bin/uwsgi --ini /etc/barbican/barbican-uwsgi.ini --venv /opt/stack/data/venv
             ├─117086 /opt/stack/data/venv/bin/uwsgi --ini /etc/barbican/barbican-uwsgi.ini --venv /opt/stack/data/venv
             ├─117087 /opt/stack/data/venv/bin/uwsgi --ini /etc/barbican/barbican-uwsgi.ini --venv /opt/stack/data/venv
             └─117088 /opt/stack/data/venv/bin/uwsgi --ini /etc/barbican/barbican-uwsgi.ini --venv /opt/stack/data/venv

Feb 16 18:14:18 np0000155647 devstack@barbican-svc.service[117088]: 2026-02-16 18:14:18.060 117088 DEBUG barbican.api.controllers.secrets [-] Creating SecretsController __init__ /opt/stack/barbican/barbican/api/controllers/secrets.py:283
Feb 16 18:14:18 np0000155647 devstack@barbican-svc.service[117088]: 2026-02-16 18:14:18.060 117088 DEBUG barbican.api.controllers.orders [-] Creating OrdersController __init__ /opt/stack/barbican/barbican/api/controllers/orders.py:97
Feb 16 18:14:18 np0000155647 devstack@barbican-svc.service[117088]: 2026-02-16 18:14:18.060 117088 DEBUG barbican.api.controllers.transportkeys [-] Creating TransportKeyController __init__ /opt/stack/barbican/barbican/api/controllers/transportkeys.py:86
Feb 16 18:14:18 np0000155647 devstack@barbican-svc.service[117088]: 2026-02-16 18:14:18.060 117088 DEBUG barbican.api.controllers.quotas [-] === Creating QuotasController === __init__ /opt/stack/barbican/barbican/api/controllers/quotas.py:39
Feb 16 18:14:18 np0000155647 devstack@barbican-svc.service[117088]: 2026-02-16 18:14:18.060 117088 DEBUG barbican.api.controllers.quotas [-] === Creating ProjectsQuotaController === __init__ /opt/stack/barbican/barbican/api/controllers/quotas.py:118
Feb 16 18:14:18 np0000155647 devstack@barbican-svc.service[117088]: 2026-02-16 18:14:18.061 117088 DEBUG barbican.api.controllers.secretstores [-] Creating SecretStoresController __init__ /opt/stack/barbican/barbican/api/controllers/secretstores.py:133
Feb 16 18:14:18 np0000155647 devstack@barbican-svc.service[117088]: 2026-02-16 18:14:18.061 117088 INFO barbican.api.app [-] Barbican app created and initialized
Feb 16 18:14:18 np0000155647 devstack@barbican-svc.service[117088]: 2026-02-16 18:14:18.063 117088 WARNING keystonemiddleware.auth_token [-] AuthToken middleware is set with keystone_authtoken.service_token_roles_required set to False. This is backwards compatible but deprecated behaviour. Please set this to True.
Feb 16 18:14:18 np0000155647 devstack@barbican-svc.service[117088]: 2026-02-16 18:14:18.067 117088 WARNING keystonemiddleware.auth_token [-] Configuring www_authenticate_uri to point to the public identity endpoint is required; clients may not be able to authenticate against an admin endpoint
Feb 16 18:14:18 np0000155647 devstack@barbican-svc.service[117088]: WSGI app 0 (mountpoint='') ready in 2 seconds on interpreter 0x720afc2a3668 pid: 117088 (default app)

● devstack@c-api.service - Devstack devstack@c-api.service
     Loaded: loaded (/etc/systemd/system/devstack@c-api.service; enabled; preset: enabled)
     Active: active (running) since Mon 2026-02-16 18:13:32 UTC; 3min 26s ago
   Main PID: 112517 (uwsgi)
     Status: "uWSGI is ready"
      Tasks: 13 (limit: 77077)
     Memory: 450.6M (peak: 451.1M)
        CPU: 9.870s
     CGroup: /system.slice/system-devstack.slice/devstack@c-api.service
             ├─112517 "cinder-apiuWSGI master"
             ├─112522 "cinder-apiuWSGI worker 1"
             ├─112523 "cinder-apiuWSGI worker 2"
             ├─112524 "cinder-apiuWSGI worker 3"
             └─112525 "cinder-apiuWSGI worker 4"

Feb 16 18:16:06 np0000155647 devstack@c-api.service[112525]: /opt/stack/data/venv/lib/python3.12/site-packages/oslo_policy/policy.py:813: UserWarning: Policy "volume_extension:default_get_all":"role:admin and system_scope:all" was deprecated in X in favor of "volume_extension:default_get_all":"rule:admin_api". Reason: Default policies now support the three Keystone default roles, namely 'admin', 'member', and 'reader' to implement three Cinder "personas". See "Policy Personas and Permissions" in the "Cinder Service Configuration" documentation (Xena release) for details.. Either ensure your deployment is ready for the new default or copy/paste the deprecated policy into your policy file and maintain it manually.
Feb 16 18:16:06 np0000155647 devstack@c-api.service[112525]: warnings.warn(deprecated_msg)
Feb 16 18:16:06 np0000155647 devstack@c-api.service[112525]: /opt/stack/data/venv/lib/python3.12/site-packages/oslo_policy/policy.py:813: UserWarning: Policy "volume_extension:default_unset":"rule:system_or_domain_or_project_admin" was deprecated in X in favor of "volume_extension:default_unset":"rule:admin_api". Reason: Default policies now support the three Keystone default roles, namely 'admin', 'member', and 'reader' to implement three Cinder "personas". See "Policy Personas and Permissions" in the "Cinder Service Configuration" documentation (Xena release) for details.. Either ensure your deployment is ready for the new default or copy/paste the deprecated policy into your policy file and maintain it manually.
Feb 16 18:16:06 np0000155647 devstack@c-api.service[112525]: warnings.warn(deprecated_msg)
Feb 16 18:16:06 np0000155647 devstack@c-api.service[112525]: DEBUG cinder.api.middleware.request_id [None req-c39b5c4c-cb5a-42b8-ad40-54e131cf6b7f None None] RequestId filter calling following filter/app {{(pid=112525) _context_setter /opt/stack/cinder/cinder/api/middleware/request_id.py:62}}
Feb 16 18:16:06 np0000155647 devstack@c-api.service[112525]: INFO cinder.api.openstack.wsgi [None req-c39b5c4c-cb5a-42b8-ad40-54e131cf6b7f None None] GET https://199.204.45.4/volume//
Feb 16 18:16:06 np0000155647 devstack@c-api.service[112525]: DEBUG cinder.api.openstack.wsgi [None req-c39b5c4c-cb5a-42b8-ad40-54e131cf6b7f None None] Empty body provided in request {{(pid=112525) get_body /opt/stack/cinder/cinder/api/openstack/wsgi.py:725}}
Feb 16 18:16:06 np0000155647 devstack@c-api.service[112525]: DEBUG cinder.api.openstack.wsgi [None req-c39b5c4c-cb5a-42b8-ad40-54e131cf6b7f None None] Calling method 'all' {{(pid=112525) _process_stack /opt/stack/cinder/cinder/api/openstack/wsgi.py:878}}
Feb 16 18:16:06 np0000155647 devstack@c-api.service[112525]: INFO cinder.api.openstack.wsgi [None req-c39b5c4c-cb5a-42b8-ad40-54e131cf6b7f None None] https://199.204.45.4/volume// returned with HTTP 300
Feb 16 18:16:06 np0000155647 devstack@c-api.service[112525]: [pid: 112525|app: 0|req: 1/4] 199.204.45.4 () {66 vars in 1442 bytes} [Mon Feb 16 18:16:06 2026] GET /volume/ => generated 388 bytes in 24 msecs (HTTP/1.1 300) 7 headers in 299 bytes (1 switches on core 0)

● devstack@c-bak.service - Devstack devstack@c-bak.service
     Loaded: loaded (/etc/systemd/system/devstack@c-bak.service; enabled; preset: enabled)
     Active: active (running) since Mon 2026-02-16 18:13:39 UTC; 3min 19s ago
   Main PID: 113815 (cinder-backup)
      Tasks: 1 (limit: 77077)
     Memory: 92.3M (peak: 92.8M)
        CPU: 1.679s
     CGroup: /system.slice/system-devstack.slice/devstack@c-bak.service
             └─113815 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/cinder-backup --config-file /etc/cinder/cinder.conf

Feb 16 18:13:41 np0000155647 cinder-backup[113815]: INFO cinder.keymgr.migration [None req-d25f5ed9-f560-4d83-ab06-97fd6eb21745 None None] No volumes are using the ConfKeyManager's encryption_key_id.
Feb 16 18:13:41 np0000155647 cinder-backup[113815]: DEBUG cinder.service [None req-10f13350-7249-43a4-9c63-7397f52440f1 None None] Creating RPC server for service cinder-backup {{(pid=113815) start /opt/stack/cinder/cinder/service.py:239}}
Feb 16 18:13:41 np0000155647 cinder-backup[113815]: INFO cinder.keymgr.migration [None req-d25f5ed9-f560-4d83-ab06-97fd6eb21745 None None] No backups are known to be using the ConfKeyManager's encryption_key_id.
Feb 16 18:13:41 np0000155647 cinder-backup[113815]: DEBUG cinder.service [None req-10f13350-7249-43a4-9c63-7397f52440f1 None None] Pinning object versions for RPC server serializer to 1.39 {{(pid=113815) start /opt/stack/cinder/cinder/service.py:245}}
Feb 16 18:13:51 np0000155647 cinder-backup[113815]: DEBUG dbcounter [-] [113815] Writing DB stats cinder:SELECT=13,cinder:INSERT=1 {{(pid=113815) stat_writer /opt/stack/data/venv/lib/python3.12/site-packages/dbcounter.py:115}}
Feb 16 18:15:37 np0000155647 cinder-backup[113815]: DEBUG oslo_service.periodic_task [None req-8dadd820-ab76-4e7c-ba48-2715ddbd3c42 None None] Running periodic task BackupManager.publish_service_capabilities {{(pid=113815) run_periodic_tasks /opt/stack/data/venv/lib/python3.12/site-packages/oslo_service/periodic_task.py:210}}
Feb 16 18:15:37 np0000155647 cinder-backup[113815]: DEBUG cinder.manager [None req-8dadd820-ab76-4e7c-ba48-2715ddbd3c42 None None] Notifying Schedulers of capabilities ... {{(pid=113815) _publish_service_capabilities /opt/stack/cinder/cinder/manager.py:202}}
Feb 16 18:15:41 np0000155647 cinder-backup[113815]: ERROR cinder.service [-] Manager for service cinder-backup np0000155647 is reporting problems, not sending heartbeat. Service will appear "down".
Feb 16 18:16:37 np0000155647 cinder-backup[113815]: DEBUG oslo_service.periodic_task [None req-8dadd820-ab76-4e7c-ba48-2715ddbd3c42 None None] Running periodic task BackupManager.publish_service_capabilities {{(pid=113815) run_periodic_tasks /opt/stack/data/venv/lib/python3.12/site-packages/oslo_service/periodic_task.py:210}}
Feb 16 18:16:37 np0000155647 cinder-backup[113815]: DEBUG cinder.manager [None req-8dadd820-ab76-4e7c-ba48-2715ddbd3c42 None None] Notifying Schedulers of capabilities ... {{(pid=113815) _publish_service_capabilities /opt/stack/cinder/cinder/manager.py:202}}

● devstack@c-sch.service - Devstack devstack@c-sch.service
     Loaded: loaded (/etc/systemd/system/devstack@c-sch.service; enabled; preset: enabled)
     Active: active (running) since Mon 2026-02-16 18:13:37 UTC; 3min 21s ago
   Main PID: 113235 (cinder-schedule)
      Tasks: 1 (limit: 77077)
     Memory: 105.9M (peak: 106.4M)
        CPU: 2.008s
     CGroup: /system.slice/system-devstack.slice/devstack@c-sch.service
             └─113235 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/cinder-scheduler --config-file /etc/cinder/cinder.conf

Feb 16 18:14:01 np0000155647 cinder-scheduler[113235]: DEBUG dbcounter [-] [113235] Writing DB stats cinder:SELECT=1 {{(pid=113235) stat_writer /opt/stack/data/venv/lib/python3.12/site-packages/dbcounter.py:115}}
Feb 16 18:14:02 np0000155647 cinder-scheduler[113235]: DEBUG oslo_service.periodic_task [None req-e9f1495a-770a-4c82-872d-6cc3b02057d2 None None] Running periodic task SchedulerManager._clean_expired_messages {{(pid=113235) run_periodic_tasks /opt/stack/data/venv/lib/python3.12/site-packages/oslo_service/periodic_task.py:210}}
Feb 16 18:14:02 np0000155647 cinder-scheduler[113235]: INFO cinder.message.api [None req-e9f1495a-770a-4c82-872d-6cc3b02057d2 None None] Deleted 0 expired messages.
Feb 16 18:14:02 np0000155647 cinder-scheduler[113235]: DEBUG oslo_service.periodic_task [None req-e9f1495a-770a-4c82-872d-6cc3b02057d2 None None] Running periodic task SchedulerManager._clean_expired_reservation {{(pid=113235) run_periodic_tasks /opt/stack/data/venv/lib/python3.12/site-packages/oslo_service/periodic_task.py:210}}
Feb 16 18:14:14 np0000155647 cinder-scheduler[113235]: DEBUG dbcounter [-] [113235] Writing DB stats cinder:DELETE=1,cinder:SELECT=1 {{(pid=113235) stat_writer /opt/stack/data/venv/lib/python3.12/site-packages/dbcounter.py:115}}
Feb 16 18:15:07 np0000155647 cinder-scheduler[113235]: DEBUG cinder.scheduler.host_manager [None req-fc1e21b9-8b54-4977-ae02-79d758179744 None None] Received volume service update from np0000155647@lvmdriver-1: {'volume_backend_name': 'lvmdriver-1', 'vendor_name': 'Open Source', 'driver_version': '3.0.0', 'storage_protocol': 'iSCSI', 'pools': [{'pool_name': 'lvmdriver-1', 'total_capacity_gb': 47.5, 'free_capacity_gb': 47.5, 'reserved_percentage': 0, 'location_info': 'LVMVolumeDriver:np0000155647:stack-volumes-lvmdriver-1:thin:0', 'QoS_support': False, 'provisioned_capacity_gb': 0.0, 'max_over_subscription_ratio': '20.0', 'thin_provisioning_support': True, 'thick_provisioning_support': False, 'total_volumes': 1, 'filter_function': None, 'goodness_function': None, 'multiattach': True, 'backend_state': 'up', 'allocated_capacity_gb': 0, 'cacheable': True}], 'shared_targets': False, 'sparse_copy_volume': True, 'filter_function': None, 'goodness_function': None} {{(pid=113235) update_service_capabilities /opt/stack/cinder/cinder/scheduler/host_manager.py:629}}
Feb 16 18:15:37 np0000155647 cinder-scheduler[113235]: DEBUG cinder.scheduler.host_manager [None req-12d4f83c-bd96-48fb-8b2b-1f635b6209ad None None] Received backup service update from np0000155647: {'backend_state': False, 'driver_name': 'cinder.backup.drivers.swift.SwiftBackupDriver', 'availability_zone': 'nova'} {{(pid=113235) update_service_capabilities /opt/stack/cinder/cinder/scheduler/host_manager.py:598}}
Feb 16 18:16:02 np0000155647 cinder-scheduler[113235]: DEBUG dbcounter [-] [113235] Writing DB stats cinder:SELECT=1,cinder:UPDATE=1 {{(pid=113235) stat_writer /opt/stack/data/venv/lib/python3.12/site-packages/dbcounter.py:115}}
Feb 16 18:16:07 np0000155647 cinder-scheduler[113235]: DEBUG cinder.scheduler.host_manager [None req-df5ae9cf-5a07-4c4f-9ffc-bf3adef3b8a0 None None] Received volume service update from np0000155647@lvmdriver-1: {'volume_backend_name': 'lvmdriver-1', 'vendor_name': 'Open Source', 'driver_version': '3.0.0', 'storage_protocol': 'iSCSI', 'pools': [{'pool_name': 'lvmdriver-1', 'total_capacity_gb': 47.5, 'free_capacity_gb': 47.5, 'reserved_percentage': 0, 'location_info': 'LVMVolumeDriver:np0000155647:stack-volumes-lvmdriver-1:thin:0', 'QoS_support': False, 'provisioned_capacity_gb': 0.0, 'max_over_subscription_ratio': '20.0', 'thin_provisioning_support': True, 'thick_provisioning_support': False, 'total_volumes': 1, 'filter_function': None, 'goodness_function': None, 'multiattach': True, 'backend_state': 'up', 'allocated_capacity_gb': 0, 'cacheable': True}], 'shared_targets': False, 'sparse_copy_volume': True, 'filter_function': None, 'goodness_function': None} {{(pid=113235) update_service_capabilities /opt/stack/cinder/cinder/scheduler/host_manager.py:629}}
Feb 16 18:16:37 np0000155647 cinder-scheduler[113235]: DEBUG cinder.scheduler.host_manager [None req-a9d880cc-43a6-444b-a76c-a1dac6c3380e None None] Received backup service update from np0000155647: {'backend_state': False, 'driver_name': 'cinder.backup.drivers.swift.SwiftBackupDriver', 'availability_zone': 'nova'} {{(pid=113235) update_service_capabilities /opt/stack/cinder/cinder/scheduler/host_manager.py:598}}

● devstack@c-vol.service - Devstack devstack@c-vol.service
     Loaded: loaded (/etc/systemd/system/devstack@c-vol.service; enabled; preset: enabled)
     Active: active (running) since Mon 2026-02-16 18:13:41 UTC; 3min 17s ago
   Main PID: 114396 (cinder-volume)
      Tasks: 2 (limit: 77077)
     Memory: 136.0M (peak: 165.9M)
        CPU: 14.503s
     CGroup: /system.slice/system-devstack.slice/devstack@c-vol.service
             ├─114396 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/cinder-volume --config-file /etc/cinder/cinder.conf
             └─114685 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/cinder-volume --config-file /etc/cinder/cinder.conf

Feb 16 18:16:06 np0000155647 sudo[126022]: stack : PWD=/ ; USER=root ; COMMAND=/opt/stack/data/venv/bin/cinder-rootwrap /etc/cinder/rootwrap.conf env LC_ALL=C lvs --noheadings --unit=g -o size,data_percent --separator : --nosuffix /dev/stack-volumes-lvmdriver-1/stack-volumes-lvmdriver-1-pool
Feb 16 18:16:06 np0000155647 sudo[126022]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 16 18:16:07 np0000155647 sudo[126022]: pam_unix(sudo:session): session closed for user root
Feb 16 18:16:07 np0000155647 cinder-volume[114685]: DEBUG oslo_concurrency.processutils [None req-aee4ab94-f412-47f2-b8e2-20532ba8c258 None None] CMD "sudo cinder-rootwrap /etc/cinder/rootwrap.conf env LC_ALL=C lvs --noheadings --unit=g -o size,data_percent --separator : --nosuffix /dev/stack-volumes-lvmdriver-1/stack-volumes-lvmdriver-1-pool" returned: 0 in 0.340s {{(pid=114685) execute /opt/stack/data/venv/lib/python3.12/site-packages/oslo_concurrency/processutils.py:396}}
Feb 16 18:16:07 np0000155647 cinder-volume[114685]: DEBUG oslo_concurrency.processutils [None req-aee4ab94-f412-47f2-b8e2-20532ba8c258 None None] Running cmd (subprocess): sudo cinder-rootwrap /etc/cinder/rootwrap.conf env LC_ALL=C lvs --noheadings --unit=g -o vg_name,name,size --nosuffix --readonly stack-volumes-lvmdriver-1 {{(pid=114685) execute /opt/stack/data/venv/lib/python3.12/site-packages/oslo_concurrency/processutils.py:368}}
Feb 16 18:16:07 np0000155647 sudo[126055]: stack : PWD=/ ; USER=root ; COMMAND=/opt/stack/data/venv/bin/cinder-rootwrap /etc/cinder/rootwrap.conf env LC_ALL=C lvs --noheadings --unit=g -o vg_name,name,size --nosuffix --readonly stack-volumes-lvmdriver-1
Feb 16 18:16:07 np0000155647 sudo[126055]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 16 18:16:07 np0000155647 sudo[126055]: pam_unix(sudo:session): session closed for user root
Feb 16 18:16:07 np0000155647 cinder-volume[114685]: DEBUG oslo_concurrency.processutils [None req-aee4ab94-f412-47f2-b8e2-20532ba8c258 None None] CMD "sudo cinder-rootwrap /etc/cinder/rootwrap.conf env LC_ALL=C lvs --noheadings --unit=g -o vg_name,name,size --nosuffix --readonly stack-volumes-lvmdriver-1" returned: 0 in 0.357s {{(pid=114685) execute /opt/stack/data/venv/lib/python3.12/site-packages/oslo_concurrency/processutils.py:396}}
Feb 16 18:16:07 np0000155647 cinder-volume[114685]: DEBUG cinder.manager [None req-aee4ab94-f412-47f2-b8e2-20532ba8c258 None None] Notifying Schedulers of capabilities ... {{(pid=114685) _publish_service_capabilities /opt/stack/cinder/cinder/manager.py:202}}

● devstack@etcd.service - Devstack devstack@etcd.service
     Loaded: loaded (/etc/systemd/system/devstack@etcd.service; enabled; preset: enabled)
     Active: active (running) since Mon 2026-02-16 18:06:27 UTC; 10min ago
   Main PID: 64797 (etcd)
      Tasks: 14 (limit: 77077)
     Memory: 133.0M (peak: 134.5M)
        CPU: 3.941s
     CGroup: /system.slice/system-devstack.slice/devstack@etcd.service
             └─64797 /opt/stack/bin/etcd --name np0000155647 --data-dir /opt/stack/data/etcd --initial-cluster-state new --initial-cluster-token etcd-cluster-01 --initial-cluster np0000155647=http://199.204.45.4:2380 --initial-advertise-peer-urls http://199.204.45.4:2380 --advertise-client-urls http://199.204.45.4:2379 --listen-peer-urls http://0.0.0.0:2380 --listen-client-urls http://199.204.45.4:2379 --log-level=debug

Feb 16 18:16:33 np0000155647 etcd[64797]: {"level":"debug","ts":"2026-02-16T18:16:33.911445Z","caller":"etcdserver/server.go:1230","msg":"The member is active, skip checking leadership","latestTickTs":"2026-02-16T18:16:33.830461Z","now":"2026-02-16T18:16:33.911443Z"}
Feb 16 18:16:34 np0000155647 etcd[64797]: {"level":"debug","ts":"2026-02-16T18:16:34.087093Z","caller":"etcdserver/server.go:1230","msg":"The member is active, skip checking leadership","latestTickTs":"2026-02-16T18:16:34.030653Z","now":"2026-02-16T18:16:34.087090Z"}
Feb 16 18:16:41 np0000155647 etcd[64797]: {"level":"debug","ts":"2026-02-16T18:16:41.136211Z","caller":"etcdserver/server.go:1230","msg":"The member is active, skip checking leadership","latestTickTs":"2026-02-16T18:16:41.130794Z","now":"2026-02-16T18:16:41.136208Z"}
Feb 16 18:16:43 np0000155647 etcd[64797]: {"level":"debug","ts":"2026-02-16T18:16:43.825342Z","caller":"etcdserver/server.go:1230","msg":"The member is active, skip checking leadership","latestTickTs":"2026-02-16T18:16:43.730472Z","now":"2026-02-16T18:16:43.825334Z"}
Feb 16 18:16:48 np0000155647 etcd[64797]: {"level":"debug","ts":"2026-02-16T18:16:48.879958Z","caller":"etcdserver/server.go:1230","msg":"The member is active, skip checking leadership","latestTickTs":"2026-02-16T18:16:48.831211Z","now":"2026-02-16T18:16:48.879953Z"}
Feb 16 18:16:48 np0000155647 etcd[64797]: {"level":"debug","ts":"2026-02-16T18:16:48.906684Z","caller":"etcdserver/server.go:1230","msg":"The member is active, skip checking leadership","latestTickTs":"2026-02-16T18:16:48.831211Z","now":"2026-02-16T18:16:48.906681Z"}
Feb 16 18:16:48 np0000155647 etcd[64797]: {"level":"debug","ts":"2026-02-16T18:16:48.913412Z","caller":"etcdserver/server.go:1230","msg":"The member is active, skip checking leadership","latestTickTs":"2026-02-16T18:16:48.831211Z","now":"2026-02-16T18:16:48.913409Z"}
Feb 16 18:16:49 np0000155647 etcd[64797]: {"level":"debug","ts":"2026-02-16T18:16:49.089287Z","caller":"etcdserver/server.go:1230","msg":"The member is active, skip checking leadership","latestTickTs":"2026-02-16T18:16:49.031017Z","now":"2026-02-16T18:16:49.089282Z"}
Feb 16 18:16:56 np0000155647 etcd[64797]: {"level":"debug","ts":"2026-02-16T18:16:56.138943Z","caller":"etcdserver/server.go:1230","msg":"The member is active, skip checking leadership","latestTickTs":"2026-02-16T18:16:56.131206Z","now":"2026-02-16T18:16:56.138940Z"}
Feb 16 18:16:58 np0000155647 etcd[64797]: {"level":"debug","ts":"2026-02-16T18:16:58.828346Z","caller":"etcdserver/server.go:1230","msg":"The member is active, skip checking leadership","latestTickTs":"2026-02-16T18:16:58.731115Z","now":"2026-02-16T18:16:58.828342Z"}

● devstack@file_tracker.service - Devstack devstack@file_tracker.service
     Loaded: loaded (/etc/systemd/system/devstack@file_tracker.service; enabled; preset: enabled)
     Active: active (running) since Mon 2026-02-16 18:06:23 UTC; 10min ago
   Main PID: 64151 (file_tracker.sh)
      Tasks: 2 (limit: 77077)
     Memory: 592.0K (peak: 1.1M)
        CPU: 208ms
     CGroup: /system.slice/system-devstack.slice/devstack@file_tracker.service
             ├─ 64151 /bin/bash /opt/stack/devstack/tools/file_tracker.sh
             └─129429 sleep 20

Feb 16 18:13:43 np0000155647 file_tracker.sh[114527]: 8128 0 9223372036854775807
Feb 16 18:14:03 np0000155647 file_tracker.sh[115845]: 8512 0 9223372036854775807
Feb 16 18:14:23 np0000155647 file_tracker.sh[117138]: 8832 0 9223372036854775807
Feb 16 18:14:43 np0000155647 file_tracker.sh[119049]: 9088 0 9223372036854775807
Feb 16 18:15:03 np0000155647 file_tracker.sh[121242]: 9408 0 9223372036854775807
Feb 16 18:15:23 np0000155647 file_tracker.sh[121401]: 9472 0 9223372036854775807
Feb 16 18:15:43 np0000155647 file_tracker.sh[123043]: 9760 0 9223372036854775807
Feb 16 18:16:03 np0000155647 file_tracker.sh[125997]: 10304 0 9223372036854775807
Feb 16 18:16:23 np0000155647 file_tracker.sh[128835]: 10656 0 9223372036854775807
Feb 16 18:16:43 np0000155647 file_tracker.sh[129428]: 10528 0 9223372036854775807

● devstack@g-api.service - Devstack devstack@g-api.service
     Loaded: loaded (/etc/systemd/system/devstack@g-api.service; enabled; preset: enabled)
     Active: active (running) since Mon 2026-02-16 18:13:45 UTC; 3min 13s ago
   Main PID: 115234 (uwsgi)
     Status: "uWSGI is ready"
      Tasks: 19 (limit: 77077)
     Memory: 2.0G (peak: 2.0G)
        CPU: 20.657s
     CGroup: /system.slice/system-devstack.slice/devstack@g-api.service
             ├─115234 "glance-apiuWSGI master"
             ├─115235 "glance-apiuWSGI worker 1"
             ├─115236 "glance-apiuWSGI worker 2"
             ├─115237 "glance-apiuWSGI worker 3"
             └─115238 "glance-apiuWSGI worker 4"

Feb 16 18:16:23 np0000155647 devstack@g-api.service[115237]: DEBUG glance.api.middleware.version_negotiation [None req-11fd487f-6359-48c2-ae09-4b7af70f043f admin admin] new path /v2/images {{(pid=115237) process_request /opt/stack/glance/glance/api/middleware/version_negotiation.py:70}}
Feb 16 18:16:23 np0000155647 devstack@g-api.service[115237]: [pid: 115237|app: 0|req: 6/18] 127.0.0.1 () {38 vars in 857 bytes} [Mon Feb 16 18:16:23 2026] GET /v2/images?name=ubuntu-22 => generated 84 bytes in 36 msecs (HTTP/1.1 200) 4 headers in 156 bytes (1 switches on core 0)
Feb 16 18:16:23 np0000155647 devstack@g-api.service[115238]: DEBUG glance.api.middleware.version_negotiation [None req-2cae60aa-e8ae-4468-8674-f4b50f23be4e admin admin] Determining version of request: GET /v2/images Accept: application/json {{(pid=115238) process_request /opt/stack/glance/glance/api/middleware/version_negotiation.py:44}}
Feb 16 18:16:23 np0000155647 devstack@g-api.service[115238]: DEBUG glance.api.middleware.version_negotiation [None req-2cae60aa-e8ae-4468-8674-f4b50f23be4e admin admin] Using url versioning {{(pid=115238) process_request /opt/stack/glance/glance/api/middleware/version_negotiation.py:57}}
Feb 16 18:16:23 np0000155647 devstack@g-api.service[115238]: DEBUG glance.api.middleware.version_negotiation [None req-2cae60aa-e8ae-4468-8674-f4b50f23be4e admin admin] Matched version: v2 {{(pid=115238) process_request /opt/stack/glance/glance/api/middleware/version_negotiation.py:69}}
Feb 16 18:16:23 np0000155647 devstack@g-api.service[115238]: DEBUG glance.api.middleware.version_negotiation [None req-2cae60aa-e8ae-4468-8674-f4b50f23be4e admin admin] new path /v2/images {{(pid=115238) process_request /opt/stack/glance/glance/api/middleware/version_negotiation.py:70}}
Feb 16 18:16:23 np0000155647 devstack@g-api.service[115238]: [pid: 115238|app: 0|req: 5/19] 127.0.0.1 () {38 vars in 857 bytes} [Mon Feb 16 18:16:23 2026] GET /v2/images?os_hidden=True => generated 84 bytes in 34 msecs (HTTP/1.1 200) 4 headers in 156 bytes (1 switches on core 0)
Feb 16 18:16:33 np0000155647 devstack@g-api.service[115236]: DEBUG dbcounter [-] [115236] Writing DB stats glance:SELECT=1 {{(pid=115236) stat_writer /opt/stack/data/venv/lib/python3.12/site-packages/dbcounter.py:115}}
Feb 16 18:16:33 np0000155647 devstack@g-api.service[115237]: DEBUG dbcounter [-] [115237] Writing DB stats glance:SELECT=1 {{(pid=115237) stat_writer /opt/stack/data/venv/lib/python3.12/site-packages/dbcounter.py:115}}
Feb 16 18:16:33 np0000155647 devstack@g-api.service[115238]: DEBUG dbcounter [-] [115238] Writing DB stats glance:SELECT=1 {{(pid=115238) stat_writer /opt/stack/data/venv/lib/python3.12/site-packages/dbcounter.py:115}}

● devstack@keystone.service - Devstack devstack@keystone.service
     Loaded: loaded (/etc/systemd/system/devstack@keystone.service; enabled; preset: enabled)
     Active: active (running) since Mon 2026-02-16 18:06:41 UTC; 10min ago
   Main PID: 66049 (uwsgi)
     Status: "uWSGI is ready"
      Tasks: 9 (limit: 77077)
     Memory: 453.0M (peak: 453.5M)
        CPU: 1min 21.010s
     CGroup: /system.slice/system-devstack.slice/devstack@keystone.service
             ├─66049 "keystoneuWSGI master"
             ├─66057 "keystoneuWSGI worker 1"
             ├─66058 "keystoneuWSGI worker 2"
             ├─66059 "keystoneuWSGI worker 3"
             └─66060 "keystoneuWSGI worker 4"

Feb 16 18:16:25 np0000155647 devstack@keystone.service[66060]: DEBUG keystone.server.flask.request_processing.req_logging [None req-371c9d0a-1fd4-458f-a35b-5c11ff1edc30 None None] PATH_INFO: `/v3/auth/tokens` {{(pid=66060) log_request_info /opt/stack/keystone/keystone/server/flask/request_processing/req_logging.py:28}}
Feb 16 18:16:25 np0000155647 devstack@keystone.service[66060]: DEBUG dbcounter [-] [66060] Writing DB stats keystone:SELECT=739,keystone:UPDATE=1,keystone:INSERT=6 {{(pid=66060) stat_writer /opt/stack/data/venv/lib/python3.12/site-packages/dbcounter.py:115}}
Feb 16 18:16:25 np0000155647 devstack@keystone.service[66060]: WARNING keystone.common.password_hashing [None req-371c9d0a-1fd4-458f-a35b-5c11ff1edc30 None None] Truncating password to algorithm specific maximum length 72 characters.
Feb 16 18:16:25 np0000155647 devstack@keystone.service[66060]: DEBUG keystone.auth.core [None req-371c9d0a-1fd4-458f-a35b-5c11ff1edc30 None None] MFA Rules not processed for user `c4165115e4494e7e860b3803ed685267`. Rule list: `[]` (Enabled: `True`). {{(pid=66060) check_auth_methods_against_rules /opt/stack/keystone/keystone/auth/core.py:476}}
Feb 16 18:16:25 np0000155647 devstack@keystone.service[66060]: DEBUG keystone.common.fernet_utils [None req-371c9d0a-1fd4-458f-a35b-5c11ff1edc30 None None] Loaded 2 Fernet keys from /etc/keystone/fernet-keys/, but `[fernet_tokens] max_active_keys = 3`; perhaps there have not been enough key rotations to reach `max_active_keys` yet? {{(pid=66060) load_keys /opt/stack/keystone/keystone/common/fernet_utils.py:297}}
Feb 16 18:16:25 np0000155647 devstack@keystone.service[66060]: [pid: 66060|app: 0|req: 409/1635] 199.204.45.4 () {66 vars in 1152 bytes} [Mon Feb 16 18:16:25 2026] POST /identity/v3/auth/tokens => generated 4946 bytes in 26 msecs (HTTP/1.1 201) 6 headers in 385 bytes (1 switches on core 0)
Feb 16 18:16:33 np0000155647 devstack@keystone.service[66058]: DEBUG dbcounter [-] [66058] Writing DB stats keystone:SELECT=24 {{(pid=66058) stat_writer /opt/stack/data/venv/lib/python3.12/site-packages/dbcounter.py:115}}
Feb 16 18:16:33 np0000155647 devstack@keystone.service[66059]: DEBUG dbcounter [-] [66059] Writing DB stats keystone:SELECT=34 {{(pid=66059) stat_writer /opt/stack/data/venv/lib/python3.12/site-packages/dbcounter.py:115}}
Feb 16 18:16:33 np0000155647 devstack@keystone.service[66057]: DEBUG dbcounter [-] [66057] Writing DB stats keystone:SELECT=2 {{(pid=66057) stat_writer /opt/stack/data/venv/lib/python3.12/site-packages/dbcounter.py:115}}
Feb 16 18:16:35 np0000155647 devstack@keystone.service[66060]: DEBUG dbcounter [-] [66060] Writing DB stats keystone:SELECT=3 {{(pid=66060) stat_writer /opt/stack/data/venv/lib/python3.12/site-packages/dbcounter.py:115}}

● devstack@m-api.service - Devstack devstack@m-api.service
     Loaded: loaded (/etc/systemd/system/devstack@m-api.service; enabled; preset: enabled)
     Active: active (running) since Mon 2026-02-16 18:15:30 UTC; 1min 28s ago
   Main PID: 122152 (uwsgi)
     Status: "uWSGI is ready"
      Tasks: 13 (limit: 77077)
     Memory: 429.7M (peak: 430.2M)
        CPU: 10.647s
     CGroup: /system.slice/system-devstack.slice/devstack@m-api.service
             ├─122152 "manila-apiuWSGI master"
             ├─122153 "manila-apiuWSGI worker 1"
             ├─122154 "manila-apiuWSGI worker 2"
             ├─122155 "manila-apiuWSGI worker 3"
             └─122156 "manila-apiuWSGI worker 4"

Feb 16 18:15:39 np0000155647 devstack@m-api.service[122153]: INFO manila.api.openstack.wsgi [None req-454cd819-6fc8-4108-a7a5-cb1340d93b65 admin admin] https://199.204.45.4/share/v2/types returned with HTTP 200
Feb 16 18:15:39 np0000155647 devstack@m-api.service[122153]: [pid: 122153|app: 0|req: 2/9] 199.204.45.4 () {70 vars in 1315 bytes} [Mon Feb 16 18:15:39 2026] POST /share/v2/types => generated 715 bytes in 38 msecs (HTTP/1.1 200) 7 headers in 297 bytes (1 switches on core 0)
Feb 16 18:15:39 np0000155647 devstack@m-api.service[122155]: INFO manila.api.openstack.wsgi [None req-17a288f2-23be-483b-9ebf-4d268f871d97 admin admin] POST https://199.204.45.4/share/v2/types
Feb 16 18:15:39 np0000155647 devstack@m-api.service[122155]: DEBUG manila.api.openstack.wsgi [None req-17a288f2-23be-483b-9ebf-4d268f871d97 admin admin] Action: 'create', calling method: Controller.__getattribute__..version_select, body: {"share_type": {"name": "dhss_false", "share_type_access:is_public": true, "extra_specs": {"snapshot_support": "True", "create_share_from_snapshot_support": "True", "driver_handles_share_servers": false}}} {{(pid=122155) _process_stack /opt/stack/manila/manila/api/openstack/wsgi.py:796}}
Feb 16 18:15:39 np0000155647 devstack@m-api.service[122155]: INFO manila.api.openstack.wsgi [None req-17a288f2-23be-483b-9ebf-4d268f871d97 admin admin] https://199.204.45.4/share/v2/types returned with HTTP 200
Feb 16 18:15:39 np0000155647 devstack@m-api.service[122155]: [pid: 122155|app: 0|req: 3/10] 199.204.45.4 () {70 vars in 1315 bytes} [Mon Feb 16 18:15:39 2026] POST /share/v2/types => generated 721 bytes in 28 msecs (HTTP/1.1 200) 7 headers in 297 bytes (1 switches on core 0)
Feb 16 18:15:49 np0000155647 devstack@m-api.service[122154]: DEBUG dbcounter [-] [122154] Writing DB stats manila:SELECT=2 {{(pid=122154) stat_writer /opt/stack/data/venv/lib/python3.12/site-packages/dbcounter.py:115}}
Feb 16 18:15:49 np0000155647 devstack@m-api.service[122156]: DEBUG dbcounter [-] [122156] Writing DB stats manila:INSERT=6,manila:SELECT=4 {{(pid=122156) stat_writer /opt/stack/data/venv/lib/python3.12/site-packages/dbcounter.py:115}}
Feb 16 18:15:49 np0000155647 devstack@m-api.service[122153]: DEBUG dbcounter [-] [122153] Writing DB stats manila:SELECT=2,manila:INSERT=4 {{(pid=122153) stat_writer /opt/stack/data/venv/lib/python3.12/site-packages/dbcounter.py:115}}
Feb 16 18:15:49 np0000155647 devstack@m-api.service[122155]: DEBUG dbcounter [-] [122155] Writing DB stats manila:SELECT=2,manila:INSERT=4 {{(pid=122155) stat_writer /opt/stack/data/venv/lib/python3.12/site-packages/dbcounter.py:115}}

● devstack@m-dat.service - Devstack devstack@m-dat.service
     Loaded: loaded (/etc/systemd/system/devstack@m-dat.service; enabled; preset: enabled)
     Active: active (running) since Mon 2026-02-16 18:16:18 UTC; 40s ago
   Main PID: 128394 (manila-data)
      Tasks: 1 (limit: 77077)
     Memory: 89.6M (peak: 89.8M)
        CPU: 1.480s
     CGroup: /system.slice/system-devstack.slice/devstack@m-dat.service
             └─128394 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/manila-data --config-file /etc/manila/manila.conf

Feb 16 18:16:19 np0000155647 manila-data[128394]: DEBUG oslo_service.backend._eventlet.service [-] quota.snapshots = 50 {{(pid=128394) log_opt_values /opt/stack/data/venv/lib/python3.12/site-packages/oslo_config/cfg.py:2824}}
Feb 16 18:16:19 np0000155647 manila-data[128394]: DEBUG oslo_service.backend._eventlet.service [-] quota.until_refresh = 0 {{(pid=128394) log_opt_values /opt/stack/data/venv/lib/python3.12/site-packages/oslo_config/cfg.py:2824}}
Feb 16 18:16:19 np0000155647 manila-data[128394]: DEBUG oslo_service.backend._eventlet.service [-] ******************************************************************************** {{(pid=128394) log_opt_values /opt/stack/data/venv/lib/python3.12/site-packages/oslo_config/cfg.py:2828}}
Feb 16 18:16:19 np0000155647 manila-data[128394]: INFO manila.service [-] Starting manila-data node (version 21.1.0)
Feb 16 18:16:19 np0000155647 manila-data[128394]: DEBUG oslo_db.api [None req-ea8b165b-c2f4-4fd0-b91a-07020341b4bb None None] Loading backend 'sqlalchemy' from 'manila.db.sqlalchemy.api' {{(pid=128394) _load_backend /opt/stack/data/venv/lib/python3.12/site-packages/oslo_db/api.py:259}}
Feb 16 18:16:19 np0000155647 manila-data[128394]: INFO dbcounter [None req-ea8b165b-c2f4-4fd0-b91a-07020341b4bb None None] Registered counter for database manila
Feb 16 18:16:19 np0000155647 manila-data[128394]: DEBUG oslo_db.sqlalchemy.engines [None req-ea8b165b-c2f4-4fd0-b91a-07020341b4bb None None] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_ENGINE_SUBSTITUTION {{(pid=128394) _check_effective_sql_mode /opt/stack/data/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/engines.py:325}}
Feb 16 18:16:20 np0000155647 manila-data[128394]: DEBUG dbcounter [-] [128394] Writer thread running {{(pid=128394) stat_writer /opt/stack/data/venv/lib/python3.12/site-packages/dbcounter.py:102}}
Feb 16 18:16:20 np0000155647 manila-data[128394]: DEBUG manila.service [None req-ea8b165b-c2f4-4fd0-b91a-07020341b4bb None None] Creating RPC server for service manila-data. {{(pid=128394) start /opt/stack/manila/manila/service.py:159}}
Feb 16 18:16:30 np0000155647 manila-data[128394]: DEBUG dbcounter [-] [128394] Writing DB stats manila:SELECT=3,manila:INSERT=1 {{(pid=128394) stat_writer /opt/stack/data/venv/lib/python3.12/site-packages/dbcounter.py:115}}

● devstack@m-sch.service - Devstack devstack@m-sch.service
     Loaded: loaded (/etc/systemd/system/devstack@m-sch.service; enabled; preset: enabled)
     Active: active (running) since Mon 2026-02-16 18:16:16 UTC; 42s ago
   Main PID: 127822 (manila-schedule)
      Tasks: 1 (limit: 77077)
     Memory: 94.0M (peak: 94.2M)
        CPU: 1.532s
     CGroup: /system.slice/system-devstack.slice/devstack@m-sch.service
             └─127822 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/manila-scheduler --config-file /etc/manila/manila.conf

Feb 16 18:16:17 np0000155647 manila-scheduler[127822]: DEBUG oslo_service.backend._eventlet.service [-] netapp_active_iq.aiq_username = None {{(pid=127822) log_opt_values /opt/stack/data/venv/lib/python3.12/site-packages/oslo_config/cfg.py:2824}}
Feb 16 18:16:17 np0000155647 manila-scheduler[127822]: DEBUG oslo_service.backend._eventlet.service [-] ******************************************************************************** {{(pid=127822) log_opt_values /opt/stack/data/venv/lib/python3.12/site-packages/oslo_config/cfg.py:2828}}
Feb 16 18:16:17 np0000155647 manila-scheduler[127822]: INFO manila.service [-] Starting manila-scheduler node (version 21.1.0)
Feb 16 18:16:17 np0000155647 manila-scheduler[127822]: DEBUG oslo_db.api [None req-6904302b-72b8-4d64-8a98-8a6f5dbe27e5 None None] Loading backend 'sqlalchemy' from 'manila.db.sqlalchemy.api' {{(pid=127822) _load_backend /opt/stack/data/venv/lib/python3.12/site-packages/oslo_db/api.py:259}}
Feb 16 18:16:17 np0000155647 manila-scheduler[127822]: INFO dbcounter [None req-6904302b-72b8-4d64-8a98-8a6f5dbe27e5 None None] Registered counter for database manila
Feb 16 18:16:17 np0000155647 manila-scheduler[127822]: DEBUG oslo_db.sqlalchemy.engines [None req-6904302b-72b8-4d64-8a98-8a6f5dbe27e5 None None] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_ENGINE_SUBSTITUTION {{(pid=127822) _check_effective_sql_mode /opt/stack/data/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/engines.py:325}}
Feb 16 18:16:17 np0000155647 manila-scheduler[127822]: DEBUG dbcounter [-] [127822] Writer thread running {{(pid=127822) stat_writer /opt/stack/data/venv/lib/python3.12/site-packages/dbcounter.py:102}}
Feb 16 18:16:18 np0000155647 manila-scheduler[127822]: DEBUG manila.service [None req-6904302b-72b8-4d64-8a98-8a6f5dbe27e5 None None] Creating RPC server for service manila-scheduler. {{(pid=127822) start /opt/stack/manila/manila/service.py:159}}
Feb 16 18:16:20 np0000155647 manila-scheduler[127822]: DEBUG manila.scheduler.host_manager [None req-d0096f5c-9192-482c-9549-14f4dc36f86b None None] Received share service update from np0000155647@generic: {'share_backend_name': 'GENERIC', 'driver_handles_share_servers': True, 'vendor_name': 'Open Source', 'driver_version': '1.0', 'storage_protocol': 'NFS_CIFS', 'total_capacity_gb': 'unknown', 'free_capacity_gb': 'unknown', 'reserved_percentage': 0, 'reserved_snapshot_percentage': 0, 'reserved_share_extend_percentage': 0, 'qos': False, 'pools': None, 'snapshot_support': True, 'create_share_from_snapshot_support': True, 'revert_to_snapshot_support': False, 'mount_snapshot_support': False, 'replication_domain': None, 'filter_function': None, 'goodness_function': None, 'security_service_update_support': False, 'network_allocation_update_support': False, 'share_server_multiple_subnet_support': False, 'mount_point_name_support': False, 'share_replicas_migration_support': False, 'encryption_support': None, 'max_shares_per_share_server': -1, 'max_share_server_size': -1, 'share_group_stats': {'consistent_snapshot_support': None}, 'ipv4_support': True, 'ipv6_support': False,
'server_pools_mapping': {}} {{(pid=127822) update_service_capabilities /opt/stack/manila/manila/scheduler/host_manager.py:638}} Feb 16 18:16:27 np0000155647 manila-scheduler[127822]: DEBUG dbcounter [-] [127822] Writing DB stats manila:SELECT=2,manila:INSERT=1 {{(pid=127822) stat_writer /opt/stack/data/venv/lib/python3.12/site-packages/dbcounter.py:115}} ● devstack@m-shr.service - Devstack devstack@m-shr.service Loaded: loaded (/etc/systemd/system/devstack@m-shr.service; enabled; preset: enabled) Active: active (running) since Mon 2026-02-16 18:16:14 UTC; 45s ago Main PID: 127286 (manila-share) Tasks: 2 (limit: 77077) Memory: 154.4M (peak: 177.2M) CPU: 4.936s CGroup: /system.slice/system-devstack.slice/devstack@m-shr.service ├─127286 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/manila-share --config-file /etc/manila/manila.conf └─127637 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/manila-share --config-file /etc/manila/manila.conf Feb 16 18:16:19 np0000155647 manila-share[127637]: DEBUG oslo_concurrency.lockutils [None req-c5925587-f6fa-461e-80a2-f6b49e5cefbc None None] Lock "linux_interface_device_name_tapb44d09fe-67" "released" by "manila.network.linux.interface.device_name_synchronized..wrapped_func..source_func" :: held 0.420s {{(pid=127637) inner /opt/stack/data/venv/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:538}} Feb 16 18:16:19 np0000155647 manila-share[127637]: DEBUG oslo_concurrency.lockutils [None req-c5925587-f6fa-461e-80a2-f6b49e5cefbc None None] Lock "service_instance_plug_interface_in_host" "released" by "manila.share.drivers.service_instance.NeutronNetworkHelper._plug_interface_in_host" :: held 0.834s {{(pid=127637) inner /opt/stack/data/venv/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:538}} Feb 16 18:16:19 np0000155647 manila-share[127637]: DEBUG manila.share.manager [None req-c5925587-f6fa-461e-80a2-f6b49e5cefbc None None] The backend np0000155647@generic does not support get backend 
info method. {{(pid=127637) ensure_driver_resources /opt/stack/manila/manila/share/manager.py:445}} Feb 16 18:16:20 np0000155647 manila-share[127637]: DEBUG manila.share.manager [None req-c5925587-f6fa-461e-80a2-f6b49e5cefbc None None] Re-exporting 0 shares {{(pid=127637) ensure_driver_resources /opt/stack/manila/manila/share/manager.py:464}} Feb 16 18:16:20 np0000155647 manila-share[127637]: INFO manila.share.manager [None req-c5925587-f6fa-461e-80a2-f6b49e5cefbc None None] Updating share status Feb 16 18:16:20 np0000155647 manila-share[127637]: DEBUG manila.share.driver [None req-c5925587-f6fa-461e-80a2-f6b49e5cefbc None None] Updating share stats. {{(pid=127637) _update_share_stats /opt/stack/manila/manila/share/driver.py:1335}} Feb 16 18:16:20 np0000155647 manila-share[127637]: DEBUG manila.manager [None req-c5925587-f6fa-461e-80a2-f6b49e5cefbc None None] Notifying Schedulers of capabilities ... {{(pid=127637) _publish_service_capabilities /opt/stack/manila/manila/manager.py:172}} Feb 16 18:16:20 np0000155647 manila-share[127637]: INFO manila.share.manager [None req-c5925587-f6fa-461e-80a2-f6b49e5cefbc None None] Finished initialization of driver: 'GenericShareDriver@np0000155647@generic' Feb 16 18:16:20 np0000155647 manila-share[127637]: DEBUG manila.service [None req-c5925587-f6fa-461e-80a2-f6b49e5cefbc None None] Creating RPC server for service manila-share. 
{{(pid=127637) start /opt/stack/manila/manila/service.py:159}} Feb 16 18:16:30 np0000155647 manila-share[127637]: DEBUG dbcounter [-] [127637] Writing DB stats manila:SELECT=5,manila:INSERT=2 {{(pid=127637) stat_writer /opt/stack/data/venv/lib/python3.12/site-packages/dbcounter.py:115}} ● devstack@magnum-api.service - Devstack devstack@magnum-api.service Loaded: loaded (/etc/systemd/system/devstack@magnum-api.service; enabled; preset: enabled) Active: active (running) since Mon 2026-02-16 18:14:46 UTC; 2min 12s ago Main PID: 119710 (uwsgi) Status: "uWSGI is ready" Tasks: 5 (limit: 77077) Memory: 351.8M (peak: 352.2M) CPU: 7.812s CGroup: /system.slice/system-devstack.slice/devstack@magnum-api.service ├─119710 "magnum-apiuWSGI master" ├─119712 "magnum-apiuWSGI worker 1" ├─119713 "magnum-apiuWSGI worker 2" ├─119714 "magnum-apiuWSGI worker 3" └─119715 "magnum-apiuWSGI worker 4" Feb 16 18:14:47 np0000155647 devstack@magnum-api.service[119715]: Using RPC transport for notifications. Please use get_notification_transport to obtain a notification transport instance. Feb 16 18:14:47 np0000155647 devstack@magnum-api.service[119715]: INFO magnum.api.app [-] Full WSGI config used: /etc/magnum/api-paste.ini Feb 16 18:14:48 np0000155647 devstack@magnum-api.service[119713]: WARNING keystonemiddleware.auth_token [-] AuthToken middleware is set with keystone_authtoken.service_token_roles_required set to False. This is backwards compatible but deprecated behaviour. Please set this to True. Feb 16 18:14:48 np0000155647 devstack@magnum-api.service[119714]: WARNING keystonemiddleware.auth_token [-] AuthToken middleware is set with keystone_authtoken.service_token_roles_required set to False. This is backwards compatible but deprecated behaviour. Please set this to True. Feb 16 18:14:48 np0000155647 devstack@magnum-api.service[119712]: WARNING keystonemiddleware.auth_token [-] AuthToken middleware is set with keystone_authtoken.service_token_roles_required set to False. 
This is backwards compatible but deprecated behaviour. Please set this to True. Feb 16 18:14:48 np0000155647 devstack@magnum-api.service[119713]: WSGI app 0 (mountpoint='') ready in 2 seconds on interpreter 0x7467e74a3668 pid: 119713 (default app) Feb 16 18:14:48 np0000155647 devstack@magnum-api.service[119714]: WSGI app 0 (mountpoint='') ready in 2 seconds on interpreter 0x7467e74a3668 pid: 119714 (default app) Feb 16 18:14:48 np0000155647 devstack@magnum-api.service[119712]: WSGI app 0 (mountpoint='') ready in 2 seconds on interpreter 0x7467e74a3668 pid: 119712 (default app) Feb 16 18:14:48 np0000155647 devstack@magnum-api.service[119715]: WARNING keystonemiddleware.auth_token [-] AuthToken middleware is set with keystone_authtoken.service_token_roles_required set to False. This is backwards compatible but deprecated behaviour. Please set this to True. Feb 16 18:14:48 np0000155647 devstack@magnum-api.service[119715]: WSGI app 0 (mountpoint='') ready in 2 seconds on interpreter 0x7467e74a3668 pid: 119715 (default app) ● devstack@magnum-cond.service - Devstack devstack@magnum-cond.service Loaded: loaded (/etc/systemd/system/devstack@magnum-cond.service; enabled; preset: enabled) Active: active (running) since Mon 2026-02-16 18:14:48 UTC; 2min 10s ago Main PID: 120306 (magnum-conducto) Tasks: 17 (limit: 77077) Memory: 242.8M (peak: 245.2M) CPU: 3.421s CGroup: /system.slice/system-devstack.slice/devstack@magnum-cond.service ├─120306 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/magnum-conductor ├─120636 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/magnum-conductor ├─120638 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/magnum-conductor ├─120640 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/magnum-conductor ├─120641 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/magnum-conductor ├─120642 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/magnum-conductor ├─120644 
/opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/magnum-conductor ├─120647 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/magnum-conductor ├─120648 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/magnum-conductor ├─120651 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/magnum-conductor ├─120653 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/magnum-conductor ├─120655 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/magnum-conductor ├─120657 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/magnum-conductor ├─120659 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/magnum-conductor ├─120663 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/magnum-conductor ├─120668 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/magnum-conductor └─120669 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/magnum-conductor Feb 16 18:16:40 np0000155647 magnum-conductor[120306]: DEBUG dbcounter [-] [120306] Writing DB stats magnum:SELECT=1 {{(pid=120306) stat_writer /opt/stack/data/venv/lib/python3.12/site-packages/dbcounter.py:115}} Feb 16 18:16:40 np0000155647 magnum-conductor[120306]: DEBUG oslo_service.periodic_task [-] Running periodic task MagnumPeriodicTasks.sync_cluster_status {{(pid=120306) run_periodic_tasks /opt/stack/data/venv/lib/python3.12/site-packages/oslo_service/periodic_task.py:210}} Feb 16 18:16:40 np0000155647 magnum-conductor[120306]: DEBUG magnum.service.periodic [None req-68b6d133-e2fd-4153-9425-a7b87c05b767 None None] Starting to sync up cluster status {{(pid=120306) sync_cluster_status /opt/stack/magnum/magnum/service/periodic.py:182}} Feb 16 18:16:50 np0000155647 magnum-conductor[120306]: DEBUG dbcounter [-] [120306] Writing DB stats magnum:SELECT=1 {{(pid=120306) stat_writer /opt/stack/data/venv/lib/python3.12/site-packages/dbcounter.py:115}} Feb 16 18:16:50 np0000155647 magnum-conductor[120306]: DEBUG oslo_service.periodic_task 
[-] Running periodic task MagnumPeriodicTasks.sync_cluster_status {{(pid=120306) run_periodic_tasks /opt/stack/data/venv/lib/python3.12/site-packages/oslo_service/periodic_task.py:210}} Feb 16 18:16:50 np0000155647 magnum-conductor[120306]: DEBUG magnum.service.periodic [None req-3d138acb-f88f-4af5-8b3f-b446c893a55f None None] Starting to sync up cluster status {{(pid=120306) sync_cluster_status /opt/stack/magnum/magnum/service/periodic.py:182}} Feb 16 18:16:51 np0000155647 magnum-conductor[120306]: DEBUG oslo_service.periodic_task [-] Running periodic task MagnumServicePeriodicTasks.update_magnum_service {{(pid=120306) run_periodic_tasks /opt/stack/data/venv/lib/python3.12/site-packages/oslo_service/periodic_task.py:210}} Feb 16 18:16:51 np0000155647 magnum-conductor[120306]: DEBUG magnum.servicegroup.magnum_service_periodic [None req-33e5aab6-8728-4c7b-b0f7-ba9987af5833 None None] Update magnum_service {{(pid=120306) update_magnum_service /opt/stack/magnum/magnum/servicegroup/magnum_service_periodic.py:42}} Feb 16 18:16:52 np0000155647 magnum-conductor[120306]: DEBUG oslo_service.periodic_task [-] Running periodic task MagnumPeriodicTasks.sync_cluster_health_status {{(pid=120306) run_periodic_tasks /opt/stack/data/venv/lib/python3.12/site-packages/oslo_service/periodic_task.py:210}} Feb 16 18:16:52 np0000155647 magnum-conductor[120306]: DEBUG magnum.service.periodic [None req-48ccc3ee-2bee-44a5-b856-7bea8492f2e8 None None] Starting to sync up cluster health status {{(pid=120306) sync_cluster_health_status /opt/stack/magnum/magnum/service/periodic.py:214}} ● devstack@memory_tracker.service - Devstack devstack@memory_tracker.service Loaded: loaded (/etc/systemd/system/devstack@memory_tracker.service; enabled; preset: enabled) Active: active (running) since Mon 2026-02-16 18:06:20 UTC; 10min ago Main PID: 63656 (memory_tracker.) 
Tasks: 2 (limit: 77077) Memory: 1.4M (peak: 9.4M) CPU: 1.952s CGroup: /system.slice/system-devstack.slice/devstack@memory_tracker.service ├─ 63656 /bin/bash /opt/stack/devstack/tools/memory_tracker.sh └─129419 sleep 20 Feb 16 18:16:22 np0000155647 memory_tracker.sh[128831]: 24971 0.0 692 24951 00:00:00 1 do_sys_pause /pause Feb 16 18:16:22 np0000155647 memory_tracker.sh[128831]: 22253 0.0 688 22172 00:00:00 1 do_sys_pause /pause Feb 16 18:16:22 np0000155647 memory_tracker.sh[128831]: 23221 0.0 688 23175 00:00:00 1 do_sys_pause /pause Feb 16 18:16:22 np0000155647 memory_tracker.sh[128831]: 23841 0.0 688 23778 00:00:00 1 do_sys_pause /pause Feb 16 18:16:22 np0000155647 memory_tracker.sh[128831]: 24078 0.0 688 24027 00:00:00 1 do_sys_pause /pause Feb 16 18:16:22 np0000155647 memory_tracker.sh[128831]: 24091 0.0 688 24053 00:00:00 1 do_sys_pause /pause Feb 16 18:16:22 np0000155647 memory_tracker.sh[128831]: 24819 0.0 688 24799 00:00:00 1 do_sys_pause /pause Feb 16 18:16:22 np0000155647 memory_tracker.sh[63656]: --- Feb 16 18:16:22 np0000155647 memory_tracker.sh[128833]: [iscsid (pid:44109)]=14172KB; [dmeventd (pid:115151)]=80760KB; [ovs-vswitchd (pid:100770)]=1639740KB Feb 16 18:16:22 np0000155647 memory_tracker.sh[63656]: ]]] ● devstack@n-api-meta.service - Devstack devstack@n-api-meta.service Loaded: loaded (/etc/systemd/system/devstack@n-api-meta.service; enabled; preset: enabled) Active: active (running) since Mon 2026-02-16 18:13:15 UTC; 3min 43s ago Main PID: 108345 (uwsgi) Status: "uWSGI is ready" Tasks: 30 (limit: 77077) Memory: 422.8M (peak: 424.0M) CPU: 8.736s CGroup: /system.slice/system-devstack.slice/devstack@n-api-meta.service ├─108345 "nova-api-metauWSGI master" ├─108346 "nova-api-metauWSGI worker 1" ├─108347 "nova-api-metauWSGI worker 2" ├─108348 "nova-api-metauWSGI worker 3" ├─108349 "nova-api-metauWSGI worker 4" └─108350 "nova-api-metauWSGI http 1" Feb 16 18:13:27 np0000155647 devstack@n-api-meta.service[108346]: DEBUG dbcounter [-] [108346] Writing 
DB stats nova_cell1:SELECT=1 {{(pid=108346) stat_writer /opt/stack/data/venv/lib/python3.12/site-packages/dbcounter.py:115}} Feb 16 18:13:27 np0000155647 devstack@n-api-meta.service[108349]: DEBUG dbcounter [-] [108349] Writing DB stats nova_cell0:SELECT=1 {{(pid=108349) stat_writer /opt/stack/data/venv/lib/python3.12/site-packages/dbcounter.py:115}} Feb 16 18:13:27 np0000155647 devstack@n-api-meta.service[108348]: DEBUG dbcounter [-] [108348] Writing DB stats nova_cell0:SELECT=1 {{(pid=108348) stat_writer /opt/stack/data/venv/lib/python3.12/site-packages/dbcounter.py:115}} Feb 16 18:13:27 np0000155647 devstack@n-api-meta.service[108348]: DEBUG dbcounter [-] [108348] Writing DB stats nova_cell1:SELECT=1 {{(pid=108348) stat_writer /opt/stack/data/venv/lib/python3.12/site-packages/dbcounter.py:115}} Feb 16 18:13:27 np0000155647 devstack@n-api-meta.service[108349]: DEBUG dbcounter [-] [108349] Writing DB stats nova_cell0:SELECT=2,nova_cell0:INSERT=1 {{(pid=108349) stat_writer /opt/stack/data/venv/lib/python3.12/site-packages/dbcounter.py:115}} Feb 16 18:13:27 np0000155647 devstack@n-api-meta.service[108346]: DEBUG dbcounter [-] [108346] Writing DB stats nova_cell0:SELECT=2,nova_cell0:INSERT=1 {{(pid=108346) stat_writer /opt/stack/data/venv/lib/python3.12/site-packages/dbcounter.py:115}} Feb 16 18:13:27 np0000155647 devstack@n-api-meta.service[108348]: DEBUG dbcounter [-] [108348] Writing DB stats nova_cell0:SELECT=1 {{(pid=108348) stat_writer /opt/stack/data/venv/lib/python3.12/site-packages/dbcounter.py:115}} Feb 16 18:13:27 np0000155647 devstack@n-api-meta.service[108347]: DEBUG dbcounter [-] [108347] Writing DB stats nova_cell0:SELECT=1 {{(pid=108347) stat_writer /opt/stack/data/venv/lib/python3.12/site-packages/dbcounter.py:115}} Feb 16 18:13:27 np0000155647 devstack@n-api-meta.service[108347]: DEBUG dbcounter [-] [108347] Writing DB stats nova_cell1:SELECT=1 {{(pid=108347) stat_writer /opt/stack/data/venv/lib/python3.12/site-packages/dbcounter.py:115}} Feb 16 
18:13:27 np0000155647 devstack@n-api-meta.service[108347]: DEBUG dbcounter [-] [108347] Writing DB stats nova_cell0:SELECT=1 {{(pid=108347) stat_writer /opt/stack/data/venv/lib/python3.12/site-packages/dbcounter.py:115}} ● devstack@n-api.service - Devstack devstack@n-api.service Loaded: loaded (/etc/systemd/system/devstack@n-api.service; enabled; preset: enabled) Active: active (running) since Mon 2026-02-16 18:12:15 UTC; 4min 43s ago Main PID: 99874 (uwsgi) Status: "uWSGI is ready" Tasks: 32 (limit: 77077) Memory: 524.9M (peak: 525.7M) CPU: 16.885s CGroup: /system.slice/system-devstack.slice/devstack@n-api.service ├─99874 "nova-apiuWSGI master" ├─99875 "nova-apiuWSGI worker 1" ├─99876 "nova-apiuWSGI worker 2" ├─99877 "nova-apiuWSGI worker 3" └─99878 "nova-apiuWSGI worker 4" Feb 16 18:15:58 np0000155647 devstack@n-api.service[99875]: DEBUG oslo_concurrency.lockutils [None req-bfb5b76e-32bc-4851-b5d9-4cd466734a73 admin admin] Lock "00000000-0000-0000-0000-000000000000" acquired by "nova.context.set_target_cell..get_or_set_cached_cell_and_set_connections" :: waited 0.001s {{(pid=99875) inner /opt/stack/data/venv/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:519}} Feb 16 18:15:58 np0000155647 devstack@n-api.service[99875]: DEBUG oslo_concurrency.lockutils [None req-bfb5b76e-32bc-4851-b5d9-4cd466734a73 admin admin] Lock "00000000-0000-0000-0000-000000000000" "released" by "nova.context.set_target_cell..get_or_set_cached_cell_and_set_connections" :: held 0.000s {{(pid=99875) inner /opt/stack/data/venv/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:538}} Feb 16 18:15:58 np0000155647 devstack@n-api.service[99875]: DEBUG oslo_concurrency.lockutils [None req-bfb5b76e-32bc-4851-b5d9-4cd466734a73 admin admin] Acquiring lock "b19b425c-3655-4efe-a465-1596a7ff4965" by "nova.context.set_target_cell..get_or_set_cached_cell_and_set_connections" {{(pid=99875) inner /opt/stack/data/venv/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:506}} Feb 16 
18:15:58 np0000155647 devstack@n-api.service[99875]: DEBUG oslo_concurrency.lockutils [None req-bfb5b76e-32bc-4851-b5d9-4cd466734a73 admin admin] Lock "b19b425c-3655-4efe-a465-1596a7ff4965" acquired by "nova.context.set_target_cell..get_or_set_cached_cell_and_set_connections" :: waited 0.000s {{(pid=99875) inner /opt/stack/data/venv/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:519}} Feb 16 18:15:58 np0000155647 devstack@n-api.service[99875]: DEBUG oslo_concurrency.lockutils [None req-bfb5b76e-32bc-4851-b5d9-4cd466734a73 admin admin] Lock "b19b425c-3655-4efe-a465-1596a7ff4965" "released" by "nova.context.set_target_cell..get_or_set_cached_cell_and_set_connections" :: held 0.000s {{(pid=99875) inner /opt/stack/data/venv/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:538}} Feb 16 18:15:58 np0000155647 devstack@n-api.service[99875]: INFO nova.api.openstack.requestlog [None req-bfb5b76e-32bc-4851-b5d9-4cd466734a73 admin admin] 199.204.45.4 "GET /compute/v2.1/os-services?binary=nova-compute&host=np0000155647" status: 200 len: 255 microversion: 2.69 time: 0.038814 Feb 16 18:15:58 np0000155647 devstack@n-api.service[99875]: [pid: 99875|app: 0|req: 15/60] 199.204.45.4 () {68 vars in 1460 bytes} [Mon Feb 16 18:15:58 2026] GET /compute/v2.1/os-services?binary=nova-compute&host=np0000155647 => generated 255 bytes in 40 msecs (HTTP/1.1 200) 9 headers in 359 bytes (1 switches on core 0) Feb 16 18:16:08 np0000155647 devstack@n-api.service[99875]: DEBUG dbcounter [-] [99875] Writing DB stats nova_cell0:SELECT=1 {{(pid=99875) stat_writer /opt/stack/data/venv/lib/python3.12/site-packages/dbcounter.py:115}} Feb 16 18:16:08 np0000155647 devstack@n-api.service[99875]: DEBUG dbcounter [-] [99875] Writing DB stats nova_cell1:SELECT=1 {{(pid=99875) stat_writer /opt/stack/data/venv/lib/python3.12/site-packages/dbcounter.py:115}} Feb 16 18:16:08 np0000155647 devstack@n-api.service[99875]: DEBUG dbcounter [-] [99875] Writing DB stats nova_api:SELECT=2 {{(pid=99875) 
stat_writer /opt/stack/data/venv/lib/python3.12/site-packages/dbcounter.py:115}} ● devstack@n-cond-cell1.service - Devstack devstack@n-cond-cell1.service Loaded: loaded (/etc/systemd/system/devstack@n-cond-cell1.service; enabled; preset: enabled) Active: active (running) since Mon 2026-02-16 18:13:23 UTC; 3min 35s ago Main PID: 110436 (nova-conductor) Tasks: 5 (limit: 77077) Memory: 214.7M (peak: 215.5M) CPU: 5.273s CGroup: /system.slice/system-devstack.slice/devstack@n-cond-cell1.service ├─110436 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/nova-conductor --config-file /etc/nova/nova_cell1.conf ├─111018 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/nova-conductor --config-file /etc/nova/nova_cell1.conf ├─111019 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/nova-conductor --config-file /etc/nova/nova_cell1.conf ├─111021 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/nova-conductor --config-file /etc/nova/nova_cell1.conf └─111022 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/nova-conductor --config-file /etc/nova/nova_cell1.conf Feb 16 18:15:50 np0000155647 nova-conductor[111022]: DEBUG dbcounter [-] [111022] Writing DB stats nova_cell1:SELECT=4,nova_cell1:UPDATE=3 {{(pid=111022) stat_writer /opt/stack/data/venv/lib/python3.12/site-packages/dbcounter.py:115}} Feb 16 18:15:50 np0000155647 nova-conductor[111019]: DEBUG dbcounter [-] [111019] Writing DB stats nova_cell1:SELECT=3 {{(pid=111019) stat_writer /opt/stack/data/venv/lib/python3.12/site-packages/dbcounter.py:115}} Feb 16 18:16:00 np0000155647 nova-conductor[111018]: DEBUG dbcounter [-] [111018] Writing DB stats nova_cell1:SELECT=7,nova_cell1:UPDATE=5 {{(pid=111018) stat_writer /opt/stack/data/venv/lib/python3.12/site-packages/dbcounter.py:115}} Feb 16 18:16:00 np0000155647 nova-conductor[111021]: DEBUG dbcounter [-] [111021] Writing DB stats nova_cell1:SELECT=4,nova_cell1:UPDATE=3 {{(pid=111021) stat_writer 
/opt/stack/data/venv/lib/python3.12/site-packages/dbcounter.py:115}} Feb 16 18:16:10 np0000155647 nova-conductor[111022]: DEBUG dbcounter [-] [111022] Writing DB stats nova_cell1:SELECT=2,nova_cell1:UPDATE=2 {{(pid=111022) stat_writer /opt/stack/data/venv/lib/python3.12/site-packages/dbcounter.py:115}} Feb 16 18:16:20 np0000155647 nova-conductor[111018]: DEBUG dbcounter [-] [111018] Writing DB stats nova_cell1:SELECT=2,nova_cell1:UPDATE=2 {{(pid=111018) stat_writer /opt/stack/data/venv/lib/python3.12/site-packages/dbcounter.py:115}} Feb 16 18:16:50 np0000155647 nova-conductor[111018]: DEBUG dbcounter [-] [111018] Writing DB stats nova_cell1:SELECT=4,nova_cell1:UPDATE=3 {{(pid=111018) stat_writer /opt/stack/data/venv/lib/python3.12/site-packages/dbcounter.py:115}} Feb 16 18:16:50 np0000155647 nova-conductor[111019]: DEBUG dbcounter [-] [111019] Writing DB stats nova_cell1:SELECT=8,nova_cell1:UPDATE=6 {{(pid=111019) stat_writer /opt/stack/data/venv/lib/python3.12/site-packages/dbcounter.py:115}} Feb 16 18:16:50 np0000155647 nova-conductor[111021]: DEBUG dbcounter [-] [111021] Writing DB stats nova_cell1:SELECT=6,nova_cell1:UPDATE=5 {{(pid=111021) stat_writer /opt/stack/data/venv/lib/python3.12/site-packages/dbcounter.py:115}} Feb 16 18:16:50 np0000155647 nova-conductor[111022]: DEBUG dbcounter [-] [111022] Writing DB stats nova_cell1:SELECT=6,nova_cell1:UPDATE=5 {{(pid=111022) stat_writer /opt/stack/data/venv/lib/python3.12/site-packages/dbcounter.py:115}} ● devstack@n-cpu.service - Devstack devstack@n-cpu.service Loaded: loaded (/etc/systemd/system/devstack@n-cpu.service; enabled; preset: enabled) Active: active (running) since Mon 2026-02-16 18:13:27 UTC; 3min 31s ago Main PID: 111521 (nova-compute) Tasks: 22 (limit: 77077) Memory: 143.1M (peak: 147.3M) CPU: 3.843s CGroup: /system.slice/system-devstack.slice/devstack@n-cpu.service └─111521 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/nova-compute --config-file /etc/nova/nova-cpu.conf Feb 16 18:16:37 
np0000155647 nova-compute[111521]: DEBUG oslo_concurrency.processutils [None req-0032b4d1-d535-4de8-8672-cc5b55661554 None None] CMD "env LANG=C uptime" returned: 0 in 0.024s {{(pid=111521) execute /opt/stack/data/venv/lib/python3.12/site-packages/oslo_concurrency/processutils.py:396}} Feb 16 18:16:37 np0000155647 nova-compute[111521]: DEBUG nova.compute.resource_tracker [None req-0032b4d1-d535-4de8-8672-cc5b55661554 None None] Hypervisor/Node resource view: name=np0000155647.novalocal free_ram=51540MB free_disk=270.57918548583984GB free_vcpus=16 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": 
"0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] {{(pid=111521) _report_hypervisor_resource_view /opt/stack/nova/nova/compute/resource_tracker.py:1136}} Feb 16 18:16:37 np0000155647 nova-compute[111521]: DEBUG oslo_concurrency.lockutils [None req-0032b4d1-d535-4de8-8672-cc5b55661554 None None] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" {{(pid=111521) inner /opt/stack/data/venv/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:506}} Feb 16 18:16:37 np0000155647 nova-compute[111521]: DEBUG oslo_concurrency.lockutils [None req-0032b4d1-d535-4de8-8672-cc5b55661554 None None] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s {{(pid=111521) inner /opt/stack/data/venv/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:519}} Feb 16 18:16:38 np0000155647 nova-compute[111521]: DEBUG nova.compute.resource_tracker [None req-0032b4d1-d535-4de8-8672-cc5b55661554 None None] Total usable vcpus: 16, total allocated vcpus: 0 {{(pid=111521) _report_final_resource_view /opt/stack/nova/nova/compute/resource_tracker.py:1159}} Feb 16 18:16:38 np0000155647 nova-compute[111521]: DEBUG nova.compute.resource_tracker [None req-0032b4d1-d535-4de8-8672-cc5b55661554 None None] Final resource view: name=np0000155647.novalocal phys_ram=64301MB used_ram=512MB phys_disk=299GB used_disk=0GB total_vcpus=16 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 18:16:37 up 25 min, 1 user, load average: 2.31, 2.43, 1.88\n'} {{(pid=111521) _report_final_resource_view /opt/stack/nova/nova/compute/resource_tracker.py:1168}} Feb 16 18:16:38 np0000155647 nova-compute[111521]: DEBUG 
nova.compute.provider_tree [None req-0032b4d1-d535-4de8-8672-cc5b55661554 None None] Inventory has not changed in ProviderTree for provider: c127ed9f-ac85-45da-aee7-540248c4ef19 {{(pid=111521) update_inventory /opt/stack/nova/nova/compute/provider_tree.py:180}} Feb 16 18:16:38 np0000155647 nova-compute[111521]: DEBUG nova.scheduler.client.report [None req-0032b4d1-d535-4de8-8672-cc5b55661554 None None] Inventory has not changed for provider c127ed9f-ac85-45da-aee7-540248c4ef19 based on inventory data: {'MEMORY_MB': {'total': 64301, 'reserved': 512, 'min_unit': 1, 'max_unit': 64301, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 16, 'reserved': 0, 'min_unit': 1, 'max_unit': 16, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 299, 'reserved': 0, 'min_unit': 1, 'max_unit': 299, 'step_size': 1, 'allocation_ratio': 1.0}} {{(pid=111521) set_inventory_for_provider /opt/stack/nova/nova/scheduler/client/report.py:958}} Feb 16 18:16:39 np0000155647 nova-compute[111521]: DEBUG nova.compute.resource_tracker [None req-0032b4d1-d535-4de8-8672-cc5b55661554 None None] Compute_service record updated for np0000155647:np0000155647.novalocal {{(pid=111521) _update_available_resource /opt/stack/nova/nova/compute/resource_tracker.py:1097}} Feb 16 18:16:39 np0000155647 nova-compute[111521]: DEBUG oslo_concurrency.lockutils [None req-0032b4d1-d535-4de8-8672-cc5b55661554 None None] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.151s {{(pid=111521) inner /opt/stack/data/venv/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:538}} ● devstack@n-novnc-cell1.service - Devstack devstack@n-novnc-cell1.service Loaded: loaded (/etc/systemd/system/devstack@n-novnc-cell1.service; enabled; preset: enabled) Active: active (running) since Mon 2026-02-16 18:13:18 UTC; 3min 41s ago Main PID: 109046 (nova-novncproxy) Tasks: 16 (limit: 77077) Memory: 105.9M (peak: 106.0M) CPU: 2.932s CGroup: 
/system.slice/system-devstack.slice/devstack@n-novnc-cell1.service └─109046 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/nova-novncproxy --config-file /etc/nova/nova_cell1.conf --web /opt/stack/novnc Feb 16 18:13:18 np0000155647 systemd[1]: Started devstack@n-novnc-cell1.service - Devstack devstack@n-novnc-cell1.service. Feb 16 18:13:19 np0000155647 nova-novncproxy[109046]: INFO nova.console.websocketproxy [-] WebSocket server settings: Feb 16 18:13:19 np0000155647 nova-novncproxy[109046]: INFO nova.console.websocketproxy [-]  - Listen on 0.0.0.0:6080 Feb 16 18:13:19 np0000155647 nova-novncproxy[109046]: INFO nova.console.websocketproxy [-]  - Web server (no directory listings). Web root: /opt/stack/novnc Feb 16 18:13:19 np0000155647 nova-novncproxy[109046]: INFO nova.console.websocketproxy [-]  - No SSL/TLS support (no cert file) Feb 16 18:13:19 np0000155647 nova-novncproxy[109046]: INFO nova.console.websocketproxy [-]  - proxying from 0.0.0.0:6080 to None:None ● devstack@n-sch.service - Devstack devstack@n-sch.service Loaded: loaded (/etc/systemd/system/devstack@n-sch.service; enabled; preset: enabled) Active: active (running) since Mon 2026-02-16 18:13:12 UTC; 3min 46s ago Main PID: 107735 (nova-scheduler:) Tasks: 33 (limit: 77077) Memory: 195.8M (peak: 196.4M) CPU: 2.569s CGroup: /system.slice/system-devstack.slice/devstack@n-sch.service ├─107735 "nova-scheduler: master process [/opt/stack/data/venv/bin/nova-scheduler --config-file /etc/nova/nova.conf]" ├─108462 "nova-scheduler: ServiceWrapper worker(0)" ├─108471 "nova-scheduler: ServiceWrapper worker(1)" ├─108480 "nova-scheduler: ServiceWrapper worker(2)" └─108488 "nova-scheduler: ServiceWrapper worker(3)" Feb 16 18:15:21 np0000155647 nova-scheduler[108488]: DEBUG oslo.service.backend._threading.loopingcall [-] Fixed interval looping call 'nova.servicegroup.drivers.db.DbDriver._report_state' sleeping for 119.99 seconds {{(pid=108488) _run_loop 
/opt/stack/data/venv/lib/python3.12/site-packages/oslo_service/backend/_threading/loopingcall.py:125}} Feb 16 18:15:21 np0000155647 nova-scheduler[108471]: DEBUG oslo.service.backend._threading.loopingcall [-] Fixed interval looping call 'nova.servicegroup.drivers.db.DbDriver._report_state' sleeping for 119.99 seconds {{(pid=108471) _run_loop /opt/stack/data/venv/lib/python3.12/site-packages/oslo_service/backend/_threading/loopingcall.py:125}} Feb 16 18:15:31 np0000155647 nova-scheduler[108462]: DEBUG dbcounter [-] [108462] Writing DB stats nova_cell0:SELECT=1,nova_cell0:UPDATE=1 {{(pid=108462) stat_writer /opt/stack/data/venv/lib/python3.12/site-packages/dbcounter.py:115}} Feb 16 18:15:31 np0000155647 nova-scheduler[108480]: DEBUG dbcounter [-] [108480] Writing DB stats nova_cell0:SELECT=1 {{(pid=108480) stat_writer /opt/stack/data/venv/lib/python3.12/site-packages/dbcounter.py:115}} Feb 16 18:15:31 np0000155647 nova-scheduler[108488]: DEBUG dbcounter [-] [108488] Writing DB stats nova_cell0:SELECT=1 {{(pid=108488) stat_writer /opt/stack/data/venv/lib/python3.12/site-packages/dbcounter.py:115}} Feb 16 18:15:31 np0000155647 nova-scheduler[108471]: DEBUG dbcounter [-] [108471] Writing DB stats nova_cell0:SELECT=1 {{(pid=108471) stat_writer /opt/stack/data/venv/lib/python3.12/site-packages/dbcounter.py:115}} Feb 16 18:16:07 np0000155647 nova-scheduler[108471]: DEBUG oslo.service.backend._threading.loopingcall [None req-b872a17d-0127-40cc-a1c5-afe9dc833413 None None] Dynamic interval looping call 'nova.service.Service.periodic_tasks' sleeping for 60.00 seconds {{(pid=108471) _run_loop /opt/stack/data/venv/lib/python3.12/site-packages/oslo_service/backend/_threading/loopingcall.py:125}} Feb 16 18:16:10 np0000155647 nova-scheduler[108480]: DEBUG oslo.service.backend._threading.loopingcall [None req-da7e8156-7f32-47a4-8b70-5f2fac278078 None None] Dynamic interval looping call 'nova.service.Service.periodic_tasks' sleeping for 60.00 seconds {{(pid=108480) _run_loop 
/opt/stack/data/venv/lib/python3.12/site-packages/oslo_service/backend/_threading/loopingcall.py:125}} Feb 16 18:16:11 np0000155647 nova-scheduler[108462]: DEBUG oslo.service.backend._threading.loopingcall [None req-49f60818-5277-4655-bdfc-54c340f826cf None None] Dynamic interval looping call 'nova.service.Service.periodic_tasks' sleeping for 60.00 seconds {{(pid=108462) _run_loop /opt/stack/data/venv/lib/python3.12/site-packages/oslo_service/backend/_threading/loopingcall.py:125}} Feb 16 18:16:21 np0000155647 nova-scheduler[108488]: DEBUG oslo.service.backend._threading.loopingcall [None req-38ff91e3-d427-4838-ac66-aba0dd7c70aa None None] Dynamic interval looping call 'nova.service.Service.periodic_tasks' sleeping for 60.00 seconds {{(pid=108488) _run_loop /opt/stack/data/venv/lib/python3.12/site-packages/oslo_service/backend/_threading/loopingcall.py:125}} ● devstack@n-super-cond.service - Devstack devstack@n-super-cond.service Loaded: loaded (/etc/systemd/system/devstack@n-super-cond.service; enabled; preset: enabled) Active: active (running) since Mon 2026-02-16 18:13:21 UTC; 3min 38s ago Main PID: 109828 (nova-conductor) Tasks: 5 (limit: 77077) Memory: 192.7M (peak: 193.4M) CPU: 4.448s CGroup: /system.slice/system-devstack.slice/devstack@n-super-cond.service ├─109828 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/nova-conductor --config-file /etc/nova/nova.conf ├─110421 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/nova-conductor --config-file /etc/nova/nova.conf ├─110422 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/nova-conductor --config-file /etc/nova/nova.conf ├─110423 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/nova-conductor --config-file /etc/nova/nova.conf └─110424 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/nova-conductor --config-file /etc/nova/nova.conf Feb 16 18:13:33 np0000155647 nova-conductor[109828]: DEBUG dbcounter [-] [109828] Writing DB stats nova_cell0:SELECT=1 
{{(pid=109828) stat_writer /opt/stack/data/venv/lib/python3.12/site-packages/dbcounter.py:115}} Feb 16 18:13:33 np0000155647 nova-conductor[109828]: DEBUG dbcounter [-] [109828] Writing DB stats nova_cell1:SELECT=1 {{(pid=109828) stat_writer /opt/stack/data/venv/lib/python3.12/site-packages/dbcounter.py:115}} Feb 16 18:13:38 np0000155647 nova-conductor[110421]: DEBUG dbcounter [-] [110421] Writing DB stats nova_cell0:SELECT=5,nova_cell0:INSERT=1,nova_cell0:UPDATE=1 {{(pid=110421) stat_writer /opt/stack/data/venv/lib/python3.12/site-packages/dbcounter.py:115}} Feb 16 18:13:38 np0000155647 nova-conductor[110422]: DEBUG dbcounter [-] [110422] Writing DB stats nova_cell0:SELECT=4,nova_cell0:INSERT=1 {{(pid=110422) stat_writer /opt/stack/data/venv/lib/python3.12/site-packages/dbcounter.py:115}} Feb 16 18:13:38 np0000155647 nova-conductor[110423]: DEBUG dbcounter [-] [110423] Writing DB stats nova_cell0:SELECT=4,nova_cell0:INSERT=1,nova_cell0:UPDATE=1 {{(pid=110423) stat_writer /opt/stack/data/venv/lib/python3.12/site-packages/dbcounter.py:115}} Feb 16 18:13:38 np0000155647 nova-conductor[110424]: DEBUG dbcounter [-] [110424] Writing DB stats nova_cell0:SELECT=4,nova_cell0:INSERT=1,nova_cell0:UPDATE=1 {{(pid=110424) stat_writer /opt/stack/data/venv/lib/python3.12/site-packages/dbcounter.py:115}} Feb 16 18:15:38 np0000155647 nova-conductor[110421]: DEBUG dbcounter [-] [110421] Writing DB stats nova_cell0:SELECT=1,nova_cell0:UPDATE=1 {{(pid=110421) stat_writer /opt/stack/data/venv/lib/python3.12/site-packages/dbcounter.py:115}} Feb 16 18:15:38 np0000155647 nova-conductor[110422]: DEBUG dbcounter [-] [110422] Writing DB stats nova_cell0:SELECT=1 {{(pid=110422) stat_writer /opt/stack/data/venv/lib/python3.12/site-packages/dbcounter.py:115}} Feb 16 18:15:38 np0000155647 nova-conductor[110423]: DEBUG dbcounter [-] [110423] Writing DB stats nova_cell0:SELECT=1 {{(pid=110423) stat_writer /opt/stack/data/venv/lib/python3.12/site-packages/dbcounter.py:115}} Feb 16 18:15:38 
np0000155647 nova-conductor[110424]: DEBUG dbcounter [-] [110424] Writing DB stats nova_cell0:SELECT=1 {{(pid=110424) stat_writer /opt/stack/data/venv/lib/python3.12/site-packages/dbcounter.py:115}} ● devstack@neutron-api.service - Devstack devstack@neutron-api.service Loaded: loaded (/etc/systemd/system/devstack@neutron-api.service; enabled; preset: enabled) Active: active (running) since Mon 2026-02-16 18:12:37 UTC; 4min 21s ago Main PID: 103265 (uwsgi) Status: "uWSGI is ready" Tasks: 106 (limit: 77077) Memory: 723.8M (peak: 724.4M) CPU: 44.741s CGroup: /system.slice/system-devstack.slice/devstack@neutron-api.service ├─103265 "neutron-apiuWSGI master" ├─103266 "neutron-apiuWSGI worker 1" ├─103267 "neutron-apiuWSGI worker 2" ├─103268 "neutron-apiuWSGI worker 3" └─103269 "neutron-apiuWSGI worker 4" Feb 16 18:16:44 np0000155647 devstack@neutron-api.service[103267]: DEBUG futurist.periodics [-] Submitting periodic callback 'neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.maintenance.HashRingHealthCheckPeriodics.touch_hash_ring_node' {{(pid=103267) _process_scheduled /opt/stack/data/venv/lib/python3.12/site-packages/futurist/periodics.py:638}} Feb 16 18:16:44 np0000155647 devstack@neutron-api.service[103267]: DEBUG neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.maintenance [-] Touching Hash Ring node "d4eed7dc4bb457c59ece68e8f64e8256" from periodic health check thread {{(pid=103267) touch_hash_ring_node /opt/stack/neutron/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/maintenance.py:1135}} Feb 16 18:16:44 np0000155647 devstack@neutron-api.service[103268]: DEBUG futurist.periodics [-] Submitting periodic callback 'neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.maintenance.HashRingHealthCheckPeriodics.touch_hash_ring_node' {{(pid=103268) _process_scheduled /opt/stack/data/venv/lib/python3.12/site-packages/futurist/periodics.py:638}} Feb 16 18:16:44 np0000155647 devstack@neutron-api.service[103268]: DEBUG 
neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.maintenance [-] Touching Hash Ring node "ea52a344d585548db743727dad47a3e3" from periodic health check thread {{(pid=103268) touch_hash_ring_node /opt/stack/neutron/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/maintenance.py:1135}} Feb 16 18:16:53 np0000155647 devstack@neutron-api.service[103266]: DEBUG dbcounter [-] [103266] Writing DB stats neutron:UPDATE=1,neutron:SELECT=1 {{(pid=103266) stat_writer /opt/stack/data/venv/lib/python3.12/site-packages/dbcounter.py:115}} Feb 16 18:16:54 np0000155647 devstack@neutron-api.service[103269]: DEBUG dbcounter [-] [103269] Writing DB stats neutron:UPDATE=1,neutron:SELECT=1 {{(pid=103269) stat_writer /opt/stack/data/venv/lib/python3.12/site-packages/dbcounter.py:115}} Feb 16 18:16:54 np0000155647 devstack@neutron-api.service[103268]: DEBUG dbcounter [-] [103268] Writing DB stats neutron:UPDATE=1,neutron:SELECT=1 {{(pid=103268) stat_writer /opt/stack/data/venv/lib/python3.12/site-packages/dbcounter.py:115}} Feb 16 18:16:54 np0000155647 devstack@neutron-api.service[103267]: DEBUG dbcounter [-] [103267] Writing DB stats neutron:UPDATE=1,neutron:SELECT=1 {{(pid=103267) stat_writer /opt/stack/data/venv/lib/python3.12/site-packages/dbcounter.py:115}} Feb 16 18:16:58 np0000155647 devstack@neutron-api.service[103266]: DEBUG futurist.periodics [-] Submitting periodic callback 'neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.maintenance.HashRingHealthCheckPeriodics.touch_hash_ring_node' {{(pid=103266) _process_scheduled /opt/stack/data/venv/lib/python3.12/site-packages/futurist/periodics.py:638}} Feb 16 18:16:58 np0000155647 devstack@neutron-api.service[103266]: DEBUG neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.maintenance [-] Touching Hash Ring node "5b08aed52ec65f4aabb509f450d902fd" from periodic health check thread {{(pid=103266) touch_hash_ring_node /opt/stack/neutron/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/maintenance.py:1135}} ● 
devstack@neutron-ovn-maintenance-worker.service - Devstack devstack@neutron-ovn-maintenance-worker.service Loaded: loaded (/etc/systemd/system/devstack@neutron-ovn-maintenance-worker.service; enabled; preset: enabled) Active: active (running) since Mon 2026-02-16 18:12:43 UTC; 4min 15s ago Main PID: 104760 (neutron-ovn-mai) Tasks: 32 (limit: 77077) Memory: 256.6M (peak: 257.0M) CPU: 5.265s CGroup: /system.slice/system-devstack.slice/devstack@neutron-ovn-maintenance-worker.service ├─104760 "neutron-ovn-maintenance-worker: master process [/opt/stack/data/venv/bin/neutron-ovn-maintenance-worker --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini]" └─105533 "neutron-server: maintenance worker (/opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/neutron-ovn-maintenance-worker --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini)" Feb 16 18:13:09 np0000155647 neutron-ovn-maintenance-worker[105533]: DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: LogicalRouterPortEvent(events=('create', 'delete'), table='Logical_Router_Port', conditions=None, old_conditions=None), priority=20 to row=Logical_Router_Port(name=lrp-fa0e5821-34ff-4688-b4c8-900dc59b5bea, ipv6_prefix=[], mac=fa:16:3e:2d:4c:84, gateway_chassis=[], external_ids={'neutron:is_ext_gw': 'True', 'neutron:network_name': 'neutron-93d1473d-f0d5-4bcd-8f9d-5efc8e58a1dd', 'neutron:revision_number': '8', 'neutron:router_name': 'neutron-a1cb68a9-e8cc-4f51-92f4-087fb158272a', 'neutron:subnet_ids': '59c18181-60e0-499e-8288-ae2b19c4ed7b d8c73330-6608-4d2b-8508-a4371686e45a'}, options={'gateway_mtu': '1372', 'reside-on-redirect-chassis': 'true'}, ha_chassis_group=[], status={}, enabled=[], networks=['172.24.5.168/24', '2001:db8::3a7/64'], peer=[], ipv6_ra_configs={}) old= {{(pid=105533) matches /opt/stack/data/venv/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55}} Feb 16 18:13:09 np0000155647 
neutron-ovn-maintenance-worker[105533]: INFO neutron.common.ovn.utils [-] HA Chassis Group neutron-518b734c-c486-46b8-b5b6-d41c2da9915a synchronized; highest priority chassis 34eeba02-25e9-447a-a0d4-f9c5eb6e69ca Feb 16 18:13:09 np0000155647 neutron-ovn-maintenance-worker[105533]: DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=HA_Chassis_Group, record=neutron-518b734c-c486-46b8-b5b6-d41c2da9915a, col_values=(('external_ids', {'neutron:availability_zone_hints': '', 'neutron:network_id': '518b734c-c486-46b8-b5b6-d41c2da9915a', 'neutron:router_id': 'a1cb68a9-e8cc-4f51-92f4-087fb158272a'}),), if_exists=True) {{(pid=105533) do_commit /opt/stack/data/venv/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89}} Feb 16 18:13:09 np0000155647 neutron-ovn-maintenance-worker[105533]: DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): HAChassisGroupAddChassisCommand(_result=None, hcg=neutron-518b734c-c486-46b8-b5b6-d41c2da9915a, chassis=34eeba02-25e9-447a-a0d4-f9c5eb6e69ca, priority=1, columns={}) {{(pid=105533) do_commit /opt/stack/data/venv/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89}} Feb 16 18:13:09 np0000155647 neutron-ovn-maintenance-worker[105533]: DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change {{(pid=105533) do_commit /opt/stack/data/venv/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129}} Feb 16 18:13:09 np0000155647 neutron-ovn-maintenance-worker[105533]: INFO neutron.common.ovn.utils [-] HA Chassis Group neutron-518b734c-c486-46b8-b5b6-d41c2da9915a synchronized; highest priority chassis 34eeba02-25e9-447a-a0d4-f9c5eb6e69ca Feb 16 18:13:09 np0000155647 neutron-ovn-maintenance-worker[105533]: DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=HA_Chassis_Group, record=neutron-518b734c-c486-46b8-b5b6-d41c2da9915a, 
col_values=(('external_ids', {'neutron:availability_zone_hints': '', 'neutron:network_id': '518b734c-c486-46b8-b5b6-d41c2da9915a', 'neutron:router_id': 'a1cb68a9-e8cc-4f51-92f4-087fb158272a'}),), if_exists=True) {{(pid=105533) do_commit /opt/stack/data/venv/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89}} Feb 16 18:13:09 np0000155647 neutron-ovn-maintenance-worker[105533]: DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): HAChassisGroupAddChassisCommand(_result=None, hcg=neutron-518b734c-c486-46b8-b5b6-d41c2da9915a, chassis=34eeba02-25e9-447a-a0d4-f9c5eb6e69ca, priority=1, columns={}) {{(pid=105533) do_commit /opt/stack/data/venv/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89}} Feb 16 18:13:09 np0000155647 neutron-ovn-maintenance-worker[105533]: DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change {{(pid=105533) do_commit /opt/stack/data/venv/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129}} Feb 16 18:13:19 np0000155647 neutron-ovn-maintenance-worker[105533]: DEBUG dbcounter [-] [105533] Writing DB stats neutron:SELECT=383,neutron:DELETE=1,neutron:INSERT=1,neutron:UPDATE=2 {{(pid=105533) stat_writer /opt/stack/data/venv/lib/python3.12/site-packages/dbcounter.py:115}} ● devstack@neutron-periodic-workers.service - Devstack devstack@neutron-periodic-workers.service Loaded: loaded (/etc/systemd/system/devstack@neutron-periodic-workers.service; enabled; preset: enabled) Active: active (running) since Mon 2026-02-16 18:12:41 UTC; 4min 17s ago Main PID: 104263 (neutron-periodi) Tasks: 22 (limit: 77077) Memory: 197.1M (peak: 197.4M) CPU: 3.028s CGroup: /system.slice/system-devstack.slice/devstack@neutron-periodic-workers.service ├─104263 "neutron-periodic-workers: master process [/opt/stack/data/venv/bin/neutron-periodic-workers --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini]" ├─104984 "neutron-server: 
periodic worker (/opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/neutron-periodic-workers --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini)" ├─104993 "neutron-server: periodic worker (/opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/neutron-periodic-workers --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini)" ├─105004 "neutron-server: periodic worker (/opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/neutron-periodic-workers --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini)" └─105017 "neutron-server: periodic worker (/opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/neutron-periodic-workers --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini)" Feb 16 18:15:52 np0000155647 neutron-periodic-workers[104984]: DEBUG oslo.service.backend._threading.loopingcall [None req-250cb602-12a5-4075-aa22-94f1186947c2 None None] Fixed interval looping call 'neutron.plugins.ml2.plugin.DhcpAgentSchedulerDbMixin.remove_networks_from_down_agents' sleeping for 36.99 seconds {{(pid=104984) _run_loop /opt/stack/data/venv/lib/python3.12/site-packages/oslo_service/backend/_threading/loopingcall.py:125}} Feb 16 18:15:55 np0000155647 neutron-periodic-workers[104993]: DEBUG dbcounter [-] [104993] Writing DB stats neutron:SELECT=1 {{(pid=104993) stat_writer /opt/stack/data/venv/lib/python3.12/site-packages/dbcounter.py:115}} Feb 16 18:16:02 np0000155647 neutron-periodic-workers[104984]: DEBUG dbcounter [-] [104984] Writing DB stats neutron:SELECT=3 {{(pid=104984) stat_writer /opt/stack/data/venv/lib/python3.12/site-packages/dbcounter.py:115}} Feb 16 18:16:22 np0000155647 neutron-periodic-workers[104993]: DEBUG neutron.db.agents_db [None req-19412641-6528-4ef2-a8b8-9b37884866cb None None] Agent healthcheck: found 0 active agents {{(pid=104993) agent_health_check 
/opt/stack/neutron/neutron/db/agents_db.py:317}} Feb 16 18:16:22 np0000155647 neutron-periodic-workers[104993]: DEBUG oslo.service.backend._threading.loopingcall [None req-19412641-6528-4ef2-a8b8-9b37884866cb None None] Fixed interval looping call 'neutron.plugins.ml2.plugin.AgentDbMixin.agent_health_check' sleeping for 36.99 seconds {{(pid=104993) _run_loop /opt/stack/data/venv/lib/python3.12/site-packages/oslo_service/backend/_threading/loopingcall.py:125}} Feb 16 18:16:29 np0000155647 neutron-periodic-workers[104984]: DEBUG oslo.service.backend._threading.loopingcall [None req-250cb602-12a5-4075-aa22-94f1186947c2 None None] Fixed interval looping call 'neutron.plugins.ml2.plugin.DhcpAgentSchedulerDbMixin.remove_networks_from_down_agents' sleeping for 36.99 seconds {{(pid=104984) _run_loop /opt/stack/data/venv/lib/python3.12/site-packages/oslo_service/backend/_threading/loopingcall.py:125}} Feb 16 18:16:32 np0000155647 neutron-periodic-workers[104993]: DEBUG dbcounter [-] [104993] Writing DB stats neutron:SELECT=1 {{(pid=104993) stat_writer /opt/stack/data/venv/lib/python3.12/site-packages/dbcounter.py:115}} Feb 16 18:16:39 np0000155647 neutron-periodic-workers[104984]: DEBUG dbcounter [-] [104984] Writing DB stats neutron:SELECT=3 {{(pid=104984) stat_writer /opt/stack/data/venv/lib/python3.12/site-packages/dbcounter.py:115}} Feb 16 18:16:44 np0000155647 neutron-periodic-workers[105004]: DEBUG oslo.service.backend._threading.loopingcall [None req-0b083d47-085b-429a-ac23-62ecee73c0d8 None None] Fixed interval looping call 'neutron.db.quota.driver_nolock.DbQuotaNoLockDriver._remove_expired_reservations' sleeping for 119.99 seconds {{(pid=105004) _run_loop /opt/stack/data/venv/lib/python3.12/site-packages/oslo_service/backend/_threading/loopingcall.py:125}} Feb 16 18:16:54 np0000155647 neutron-periodic-workers[105004]: DEBUG dbcounter [-] [105004] Writing DB stats neutron:DELETE=1 {{(pid=105004) stat_writer 
/opt/stack/data/venv/lib/python3.12/site-packages/dbcounter.py:115}} ● devstack@neutron-rpc-server.service - Devstack devstack@neutron-rpc-server.service Loaded: loaded (/etc/systemd/system/devstack@neutron-rpc-server.service; enabled; preset: enabled) Active: active (running) since Mon 2026-02-16 18:12:39 UTC; 4min 19s ago Main PID: 103751 (neutron-rpc-ser) Tasks: 29 (limit: 77077) Memory: 220.3M (peak: 220.7M) CPU: 4.036s CGroup: /system.slice/system-devstack.slice/devstack@neutron-rpc-server.service ├─103751 "neutron-rpc-server: master process [/opt/stack/data/venv/bin/neutron-rpc-server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini]" ├─104906 "neutron-server: rpc worker (/opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/neutron-rpc-server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini)" └─104914 "neutron-server: rpc worker (/opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/neutron-rpc-server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini)" Feb 16 18:16:08 np0000155647 neutron-rpc-server[104906]: DEBUG dbcounter [-] [104906] Writing DB stats neutron:SELECT=2 {{(pid=104906) stat_writer /opt/stack/data/venv/lib/python3.12/site-packages/dbcounter.py:115}} Feb 16 18:16:08 np0000155647 neutron-rpc-server[104914]: DEBUG dbcounter [-] [104914] Writing DB stats neutron:SELECT=2 {{(pid=104914) stat_writer /opt/stack/data/venv/lib/python3.12/site-packages/dbcounter.py:115}} Feb 16 18:16:09 np0000155647 neutron-rpc-server[104906]: DEBUG neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.ovsdb_monitor [-] ChassisAgentWriteEvent : Matched Chassis_Private, update, None None {{(pid=104906) matches /opt/stack/neutron/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovsdb_monitor.py:65}} Feb 16 18:16:09 np0000155647 neutron-rpc-server[104914]: DEBUG neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.ovsdb_monitor [-] 
ChassisAgentWriteEvent : Matched Chassis_Private, update, None None {{(pid=104914) matches /opt/stack/neutron/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovsdb_monitor.py:65}} Feb 16 18:16:09 np0000155647 neutron-rpc-server[104914]: DEBUG neutron.common.ovn.hash_ring_manager [-] Hash Ring loaded. 4 active nodes. 0 offline nodes {{(pid=104914) _load_hash_ring /opt/stack/neutron/neutron/common/ovn/hash_ring_manager.py:102}} Feb 16 18:16:09 np0000155647 neutron-rpc-server[104906]: DEBUG neutron.common.ovn.hash_ring_manager [-] Hash Ring loaded. 4 active nodes. 0 offline nodes {{(pid=104906) _load_hash_ring /opt/stack/neutron/neutron/common/ovn/hash_ring_manager.py:102}} Feb 16 18:16:13 np0000155647 neutron-rpc-server[104906]: DEBUG neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.ovsdb_monitor [-] ChassisOVNAgentWriteEvent : Matched Chassis_Private, update, None None {{(pid=104906) matches /opt/stack/neutron/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovsdb_monitor.py:65}} Feb 16 18:16:13 np0000155647 neutron-rpc-server[104914]: DEBUG neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.ovsdb_monitor [-] ChassisOVNAgentWriteEvent : Matched Chassis_Private, update, None None {{(pid=104914) matches /opt/stack/neutron/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovsdb_monitor.py:65}} Feb 16 18:16:19 np0000155647 neutron-rpc-server[104914]: DEBUG dbcounter [-] [104914] Writing DB stats neutron:SELECT=2 {{(pid=104914) stat_writer /opt/stack/data/venv/lib/python3.12/site-packages/dbcounter.py:115}} Feb 16 18:16:19 np0000155647 neutron-rpc-server[104906]: DEBUG dbcounter [-] [104906] Writing DB stats neutron:SELECT=2 {{(pid=104906) stat_writer /opt/stack/data/venv/lib/python3.12/site-packages/dbcounter.py:115}} ● devstack@o-api.service - Devstack devstack@o-api.service Loaded: loaded (/etc/systemd/system/devstack@o-api.service; enabled; preset: enabled) Active: active (running) since Mon 2026-02-16 18:15:51 UTC; 1min 7s ago Main PID: 123997 (uwsgi) Status: 
"uWSGI is ready" Tasks: 9 (limit: 77077) Memory: 460.6M (peak: 461.6M) CPU: 17.711s CGroup: /system.slice/system-devstack.slice/devstack@o-api.service ├─123997 /bin/uwsgi --ini /etc/octavia/octavia-uwsgi.ini --venv /opt/stack/data/venv ├─123998 /bin/uwsgi --ini /etc/octavia/octavia-uwsgi.ini --venv /opt/stack/data/venv ├─123999 /bin/uwsgi --ini /etc/octavia/octavia-uwsgi.ini --venv /opt/stack/data/venv ├─124000 /bin/uwsgi --ini /etc/octavia/octavia-uwsgi.ini --venv /opt/stack/data/venv └─124001 /bin/uwsgi --ini /etc/octavia/octavia-uwsgi.ini --venv /opt/stack/data/venv Feb 16 18:15:58 np0000155647 devstack@o-api.service[124000]: WSGI app 0 (mountpoint='') ready in 7 seconds on interpreter 0x7bb6fcaa3668 pid: 124000 (default app) Feb 16 18:15:58 np0000155647 devstack@o-api.service[123998]: WARNING keystonemiddleware.auth_token [-] AuthToken middleware is set with keystone_authtoken.service_token_roles_required set to False. This is backwards compatible but deprecated behaviour. Please set this to True. Feb 16 18:15:58 np0000155647 devstack@o-api.service[123998]: WARNING keystonemiddleware.auth_token [-] Configuring www_authenticate_uri to point to the public identity endpoint is required; clients may not be able to authenticate against an admin endpoint Feb 16 18:15:58 np0000155647 devstack@o-api.service[123998]: WSGI app 0 (mountpoint='') ready in 7 seconds on interpreter 0x7bb6fcaa3668 pid: 123998 (default app) Feb 16 18:15:58 np0000155647 devstack@o-api.service[124001]: WARNING keystonemiddleware.auth_token [-] AuthToken middleware is set with keystone_authtoken.service_token_roles_required set to False. This is backwards compatible but deprecated behaviour. Please set this to True. 
Feb 16 18:15:58 np0000155647 devstack@o-api.service[124001]: WARNING keystonemiddleware.auth_token [-] Configuring www_authenticate_uri to point to the public identity endpoint is required; clients may not be able to authenticate against an admin endpoint Feb 16 18:15:58 np0000155647 devstack@o-api.service[124001]: WSGI app 0 (mountpoint='') ready in 7 seconds on interpreter 0x7bb6fcaa3668 pid: 124001 (default app) Feb 16 18:15:58 np0000155647 devstack@o-api.service[123999]: WARNING keystonemiddleware.auth_token [-] AuthToken middleware is set with keystone_authtoken.service_token_roles_required set to False. This is backwards compatible but deprecated behaviour. Please set this to True. Feb 16 18:15:58 np0000155647 devstack@o-api.service[123999]: WARNING keystonemiddleware.auth_token [-] Configuring www_authenticate_uri to point to the public identity endpoint is required; clients may not be able to authenticate against an admin endpoint Feb 16 18:15:58 np0000155647 devstack@o-api.service[123999]: WSGI app 0 (mountpoint='') ready in 7 seconds on interpreter 0x7bb6fcaa3668 pid: 123999 (default app) ● devstack@o-da.service - Devstack devstack@o-da.service Loaded: loaded (/etc/systemd/system/devstack@o-da.service; enabled; preset: enabled) Active: active (running) since Mon 2026-02-16 18:15:53 UTC; 1min 5s ago Main PID: 124527 (octavia-driver-) Tasks: 17 (limit: 77077) Memory: 188.7M (peak: 189.2M) CPU: 3.807s CGroup: /system.slice/system-devstack.slice/devstack@o-da.service ├─124527 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/octavia-driver-agent --config-file /etc/octavia/octavia.conf ├─125280 "octavia-driver-agent - status_listener" ├─125283 "octavia-driver-agent - stats_listener" ├─125285 "octavia-driver-agent - get_listener" └─125384 "octavia-driver-agent - provider_agent -- ovn" Feb 16 18:15:57 np0000155647 octavia-driver-agent[125384]: INFO ovn_octavia_provider.maintenance [-] Periodic task found: 
DBInconsistenciesPeriodics.format_ip_port_mappings_ipv6
Feb 16 18:15:57 np0000155647 octavia-driver-agent[125384]: DEBUG futurist.periodics [-] Submitting immediate callback 'ovn_octavia_provider.maintenance.DBInconsistenciesPeriodics.change_device_owner_lb_hm_ports' {{(pid=125384) _process_immediates /opt/stack/data/venv/lib/python3.12/site-packages/futurist/periodics.py:673}}
Feb 16 18:15:57 np0000155647 octavia-driver-agent[125384]: INFO ovn_octavia_provider.agent [-] OVN provider agent has started.
Feb 16 18:15:57 np0000155647 octavia-driver-agent[125384]: DEBUG ovn_octavia_provider.maintenance [-] Maintenance task: checking device_owner for OVN LB HM ports. {{(pid=125384) change_device_owner_lb_hm_ports /opt/stack/ovn-octavia-provider/ovn_octavia_provider/maintenance.py:81}}
Feb 16 18:15:57 np0000155647 octavia-driver-agent[125384]: DEBUG ovn_octavia_provider.maintenance [-] Maintenance task: no more ports left, stopping the periodic task. {{(pid=125384) change_device_owner_lb_hm_ports /opt/stack/ovn-octavia-provider/ovn_octavia_provider/maintenance.py:121}}
Feb 16 18:15:57 np0000155647 octavia-driver-agent[125384]: DEBUG futurist.periodics [-] Periodic callback 'ovn_octavia_provider.maintenance.DBInconsistenciesPeriodics.change_device_owner_lb_hm_ports' raised 'NeverAgain' exception, stopping any further execution of it. {{(pid=125384) _on_done /opt/stack/data/venv/lib/python3.12/site-packages/futurist/periodics.py:710}}
Feb 16 18:15:57 np0000155647 octavia-driver-agent[125384]: DEBUG futurist.periodics [-] Submitting immediate callback 'ovn_octavia_provider.maintenance.DBInconsistenciesPeriodics.format_ip_port_mappings_ipv6' {{(pid=125384) _process_immediates /opt/stack/data/venv/lib/python3.12/site-packages/futurist/periodics.py:673}}
Feb 16 18:15:57 np0000155647 octavia-driver-agent[125384]: DEBUG ovn_octavia_provider.maintenance [-] Maintenance task: Ensure correct formatting of ip_port_mappings for IPv6 backend members. {{(pid=125384) format_ip_port_mappings_ipv6 /opt/stack/ovn-octavia-provider/ovn_octavia_provider/maintenance.py:138}}
Feb 16 18:15:57 np0000155647 octavia-driver-agent[125384]: DEBUG ovn_octavia_provider.maintenance [-] Maintenance task: no more ip_port_mappings to format, stopping the periodic task. {{(pid=125384) format_ip_port_mappings_ipv6 /opt/stack/ovn-octavia-provider/ovn_octavia_provider/maintenance.py:163}}
Feb 16 18:15:57 np0000155647 octavia-driver-agent[125384]: DEBUG futurist.periodics [-] Periodic callback 'ovn_octavia_provider.maintenance.DBInconsistenciesPeriodics.format_ip_port_mappings_ipv6' raised 'NeverAgain' exception, stopping any further execution of it. {{(pid=125384) _on_done /opt/stack/data/venv/lib/python3.12/site-packages/futurist/periodics.py:710}}

● devstack@o-hk.service - Devstack devstack@o-hk.service
     Loaded: loaded (/etc/systemd/system/devstack@o-hk.service; enabled; preset: enabled)
     Active: active (running) since Mon 2026-02-16 18:15:56 UTC; 1min 3s ago
   Main PID: 125136 (octavia-houseke)
      Tasks: 3 (limit: 77077)
     Memory: 104.9M (peak: 105.2M)
        CPU: 3.310s
     CGroup: /system.slice/system-devstack.slice/devstack@o-hk.service
             └─125136 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/octavia-housekeeping --config-file /etc/octavia/octavia.conf

Feb 16 18:15:59 np0000155647 octavia-housekeeping[125136]: DEBUG octavia.cmd.house_keeping [-] ******************************************************************************** {{(pid=125136) log_opt_values /opt/stack/data/venv/lib/python3.12/site-packages/oslo_config/cfg.py:2828}}
Feb 16 18:15:59 np0000155647 octavia-housekeeping[125136]: INFO octavia.cmd.house_keeping [-] Starting house keeping at 2026-02-16 18:15:59.294273
Feb 16 18:15:59 np0000155647 octavia-housekeeping[125136]: INFO octavia.cmd.house_keeping [-] DB cleanup interval is set to 30 sec
Feb 16 18:15:59 np0000155647 octavia-housekeeping[125136]: INFO octavia.cmd.house_keeping [-] Amphora expiry age is 3600 seconds
Feb 16 18:15:59 np0000155647 octavia-housekeeping[125136]: INFO octavia.cmd.house_keeping [-] Load balancer expiry age is 3600 seconds
Feb 16 18:15:59 np0000155647 octavia-housekeeping[125136]: DEBUG octavia.cmd.house_keeping [-] Initiating the cleanup of old resources... {{(pid=125136) db_cleanup /opt/stack/octavia/octavia/cmd/house_keeping.py:49}}
Feb 16 18:15:59 np0000155647 octavia-housekeeping[125136]: INFO octavia.cmd.house_keeping [-] Expiring certificate check interval is set to 3600 sec
Feb 16 18:15:59 np0000155647 octavia-housekeeping[125136]: DEBUG octavia.cmd.house_keeping [-] Initiating certification rotation ... {{(pid=125136) cert_rotation /opt/stack/octavia/octavia/cmd/house_keeping.py:66}}
Feb 16 18:15:59 np0000155647 octavia-housekeeping[125136]: DEBUG oslo_db.sqlalchemy.engines [-] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_ENGINE_SUBSTITUTION {{(pid=125136) _check_effective_sql_mode /opt/stack/data/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/engines.py:325}}
Feb 16 18:16:29 np0000155647 octavia-housekeeping[125136]: DEBUG octavia.cmd.house_keeping [-] Initiating the cleanup of old resources... {{(pid=125136) db_cleanup /opt/stack/octavia/octavia/cmd/house_keeping.py:49}}

● devstack@openstack-cli-server.service - Devstack devstack@openstack-cli-server.service
     Loaded: loaded (/etc/systemd/system/devstack@openstack-cli-server.service; enabled; preset: enabled)
     Active: active (running) since Mon 2026-02-16 18:06:08 UTC; 10min ago
   Main PID: 62235 (python3)
      Tasks: 1 (limit: 77077)
     Memory: 91.8M (peak: 95.1M)
        CPU: 29.265s
     CGroup: /system.slice/system-devstack.slice/devstack@openstack-cli-server.service
             └─62235 /opt/stack/data/venv/bin/python3 /opt/stack/devstack/files/openstack-cli-server/openstack-cli-server

Feb 16 18:15:48 np0000155647 python3[62235]: openstack ['--os-cloud', 'devstack-system-admin', 'role', 'assignment', 'list', '--role', 'load-balancer_member', '--user', 'demo', '--project', 'demo', '-c', 'Role', '-f', 'value']
Feb 16 18:15:58 np0000155647 python3[62235]: openstack ['--os-cloud', 'devstack-admin', '--os-region', 'RegionOne', 'compute', 'service', 'list', '--host', 'np0000155647', '--service', 'nova-compute', '-c', 'ID', '-f', 'value']
Feb 16 18:16:06 np0000155647 python3[62235]: openstack ['complete']
Feb 16 18:16:08 np0000155647 python3[62235]: openstack ['--os-cloud', 'devstack-admin', 'project', 'show', 'service', '-c', 'id', '-f', 'value']
Feb 16 18:16:08 np0000155647 python3[62235]: openstack ['--os-cloud', 'devstack-admin', 'network', 'show', 'admin_net', '-f', 'value', '-c', 'id']
Feb 16 18:16:09 np0000155647 python3[62235]: openstack ['--os-cloud', 'devstack-admin', 'network', 'create', 'admin_net', '--project', '6ecb481b7dec4e029415e2004c8c0d59']
Feb 16 18:16:10 np0000155647 python3[62235]: openstack ['--os-cloud', 'devstack-admin', 'network', 'show', 'admin_net', '-f', 'value', '-c', 'id']
Feb 16 18:16:10 np0000155647 python3[62235]: openstack ['--os-cloud', 'devstack-admin', 'subnet', 'show', 'admin_subnet', '-f', 'value', '-c', 'id']
Feb 16 18:16:10 np0000155647 python3[62235]: openstack ['--os-cloud', 'devstack-admin', 'subnet', 'create', 'admin_subnet', '--project', '6ecb481b7dec4e029415e2004c8c0d59', '--ip-version', '4', '--network', '1ddaf2af-8333-48ec-a71c-3dafdea80472', '--gateway', 'None', '--subnet-range', '10.2.5.0/24']
Feb 16 18:16:11 np0000155647 python3[62235]: openstack ['--os-cloud', 'devstack-admin', 'subnet', 'show', 'admin_subnet', '-f', 'value', '-c', 'id']

● devstack@placement-api.service - Devstack devstack@placement-api.service
     Loaded: loaded (/etc/systemd/system/devstack@placement-api.service; enabled; preset: enabled)
     Active: active (running) since Mon 2026-02-16 18:12:46 UTC; 4min 12s ago
   Main PID: 105545 (uwsgi)
     Status: "uWSGI is ready"
      Tasks: 9 (limit: 77077)
     Memory: 329.9M (peak: 330.2M)
        CPU: 5.672s
     CGroup: /system.slice/system-devstack.slice/devstack@placement-api.service
             ├─105545 "placementuWSGI master"
             ├─105547 "placementuWSGI worker 1"
             ├─105548 "placementuWSGI worker 2"
             ├─105549 "placementuWSGI worker 3"
             └─105550 "placementuWSGI worker 4"

Feb 16 18:16:05 np0000155647 devstack@placement-api.service[105550]: INFO placement.requestlog [None req-fb0182d6-dc0b-4f2b-a761-3bd5cf78216a None None] 199.204.45.4 "GET /placement//" status: 200 len: 136 microversion: 1.0
Feb 16 18:16:05 np0000155647 devstack@placement-api.service[105550]: [pid: 105550|app: 0|req: 4/16] 199.204.45.4 () {64 vars in 1244 bytes} [Mon Feb 16 18:16:05 2026] GET /placement/ => generated 136 bytes in 2 msecs (HTTP/1.1 200) 6 headers in 224 bytes (1 switches on core 0)
Feb 16 18:16:38 np0000155647 devstack@placement-api.service[105547]: DEBUG placement.requestlog [req-249e4ac5-9947-4fc6-8826-2a3f79ae2493 req-dce6e9e6-f0bb-4837-b9c3-63fc4ff18883 None None] Starting request: 199.204.45.4 "GET /placement/resource_providers/c127ed9f-ac85-45da-aee7-540248c4ef19/allocations" {{(pid=105547) __call__ /opt/stack/placement/placement/requestlog.py:55}}
Feb 16 18:16:38 np0000155647 devstack@placement-api.service[105547]: INFO placement.requestlog [req-249e4ac5-9947-4fc6-8826-2a3f79ae2493 req-dce6e9e6-f0bb-4837-b9c3-63fc4ff18883 service nova] 199.204.45.4 "GET /placement/resource_providers/c127ed9f-ac85-45da-aee7-540248c4ef19/allocations" status: 200 len: 54 microversion: 1.0
Feb 16 18:16:38 np0000155647 devstack@placement-api.service[105547]: [pid: 105547|app: 0|req: 4/17] 199.204.45.4 () {66 vars in 1529 bytes} [Mon Feb 16 18:16:38 2026] GET /placement/resource_providers/c127ed9f-ac85-45da-aee7-540248c4ef19/allocations => generated 54 bytes in 17 msecs (HTTP/1.1 200) 6 headers in 223 bytes (1 switches on core 0)
Feb 16 18:16:38 np0000155647 devstack@placement-api.service[105548]: DEBUG placement.requestlog [req-249e4ac5-9947-4fc6-8826-2a3f79ae2493 req-0e4b7dce-a608-4159-8526-191c275589ae None None] Starting request: 199.204.45.4 "GET /placement/resource_providers/c127ed9f-ac85-45da-aee7-540248c4ef19/allocations" {{(pid=105548) __call__ /opt/stack/placement/placement/requestlog.py:55}}
Feb 16 18:16:38 np0000155647 devstack@placement-api.service[105548]: INFO placement.requestlog [req-249e4ac5-9947-4fc6-8826-2a3f79ae2493 req-0e4b7dce-a608-4159-8526-191c275589ae service nova] 199.204.45.4 "GET /placement/resource_providers/c127ed9f-ac85-45da-aee7-540248c4ef19/allocations" status: 200 len: 54 microversion: 1.0
Feb 16 18:16:38 np0000155647 devstack@placement-api.service[105548]: [pid: 105548|app: 0|req: 5/18] 199.204.45.4 () {66 vars in 1529 bytes} [Mon Feb 16 18:16:38 2026] GET /placement/resource_providers/c127ed9f-ac85-45da-aee7-540248c4ef19/allocations => generated 54 bytes in 12 msecs (HTTP/1.1 200) 6 headers in 223 bytes (1 switches on core 0)
Feb 16 18:16:48 np0000155647 devstack@placement-api.service[105547]: DEBUG dbcounter [-] [105547] Writing DB stats placement:SELECT=2 {{(pid=105547) stat_writer /opt/stack/data/venv/lib/python3.12/site-packages/dbcounter.py:115}}
Feb 16 18:16:48 np0000155647 devstack@placement-api.service[105548]: DEBUG dbcounter [-] [105548] Writing DB stats placement:SELECT=2 {{(pid=105548) stat_writer /opt/stack/data/venv/lib/python3.12/site-packages/dbcounter.py:115}}

● devstack@q-ovn-agent.service - Devstack devstack@q-ovn-agent.service
     Loaded: loaded (/etc/systemd/system/devstack@q-ovn-agent.service; enabled; preset: enabled)
     Active: active (running) since Mon 2026-02-16 18:12:33 UTC; 4min 25s ago
   Main PID: 102144 (neutron-ovn-age)
      Tasks: 32 (limit: 77077)
     Memory: 529.9M (peak: 531.4M)
        CPU: 8.358s
     CGroup: /system.slice/system-devstack.slice/devstack@q-ovn-agent.service
             ├─102144 "neutron-ovn-agent: master process [/opt/stack/data/venv/bin/neutron-ovn-agent --config-file /etc/neutron/plugins/ml2/ovn_agent.ini]"
             ├─102627 "neutron-ovn-agent: ServiceWrapper worker(0)"
             ├─102934 /opt/stack/data/venv/bin/python3.12 /usr/local/bin/privsep-helper --config-file /etc/neutron/plugins/ml2/ovn_agent.ini --privsep_context neutron.privileged.namespace_cmd --privsep_sock_path /tmp/tmp0w79_nvv/privsep.sock
             ├─106395 /opt/stack/data/venv/bin/python3.12 /usr/local/bin/privsep-helper --config-file /etc/neutron/plugins/ml2/ovn_agent.ini --privsep_context neutron.privileged.default --privsep_sock_path /tmp/tmpcnh8_ng3/privsep.sock
             ├─128352 /opt/stack/data/venv/bin/python3.12 /usr/local/bin/privsep-helper --config-file /etc/neutron/plugins/ml2/ovn_agent.ini --privsep_context neutron.privileged.link_cmd --privsep_sock_path /tmp/tmp7dfz2ixi/privsep.sock
             ├─128782 sudo /opt/stack/data/venv/bin/neutron-rootwrap-daemon /etc/neutron/rootwrap.conf
             ├─128784 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/neutron-rootwrap-daemon /etc/neutron/rootwrap.conf
             └─128812 haproxy -f /opt/stack/data/neutron/ovn-metadata-proxy/1ddaf2af-8333-48ec-a71c-3dafdea80472.conf

Feb 16 18:16:19 np0000155647 neutron-ovn-agent[102627]: DEBUG oslo_rootwrap.client [-] Popen for ['sudo', '/opt/stack/data/venv/bin/neutron-rootwrap-daemon', '/etc/neutron/rootwrap.conf'] command has been instantiated {{(pid=102627) _initialize /opt/stack/data/venv/lib/python3.12/site-packages/oslo_rootwrap/client.py:74}}
Feb 16 18:16:19 np0000155647 sudo[128782]: stack : PWD=/ ; USER=root ; COMMAND=/opt/stack/data/venv/bin/neutron-rootwrap-daemon /etc/neutron/rootwrap.conf
Feb 16 18:16:19 np0000155647 sudo[128782]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 16 18:16:19 np0000155647 neutron-ovn-agent[102627]: INFO oslo_rootwrap.client [-] Spawned new rootwrap daemon process with pid=128782
Feb 16 18:16:35 np0000155647 neutron-ovn-agent[102627]: DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" {{(pid=102627) inner /opt/stack/data/venv/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:506}}
Feb 16 18:16:35 np0000155647 neutron-ovn-agent[102627]: DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s {{(pid=102627) inner /opt/stack/data/venv/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:519}}
Feb 16 18:16:35 np0000155647 neutron-ovn-agent[102627]: DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s {{(pid=102627) inner /opt/stack/data/venv/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:538}}
Feb 16 18:16:35 np0000155647 neutron-ovn-agent[102627]: DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" {{(pid=102627) inner /opt/stack/data/venv/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:506}}
Feb 16 18:16:35 np0000155647 neutron-ovn-agent[102627]: DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s {{(pid=102627) inner /opt/stack/data/venv/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:519}}
Feb 16 18:16:35 np0000155647 neutron-ovn-agent[102627]: DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s {{(pid=102627) inner /opt/stack/data/venv/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:538}}

● dm-event.service - Device-mapper event daemon
     Loaded: loaded (/usr/lib/systemd/system/dm-event.service; static)
     Active: active (running) since Mon 2026-02-16 18:13:45 UTC; 3min 13s ago
TriggeredBy: ● dm-event.socket
       Docs: man:dmeventd(8)
   Main PID: 115151 (dmeventd)
      Tasks: 3 (limit: 77077)
     Memory: 14.5M (peak: 14.8M)
        CPU: 46ms
     CGroup: /system.slice/dm-event.service
             └─115151 /usr/sbin/dmeventd -f

Feb 16 18:13:45 np0000155647 systemd[1]: Started dm-event.service - Device-mapper event daemon.
Feb 16 18:13:45 np0000155647 dmeventd[115151]: dmeventd ready for processing.
Feb 16 18:13:45 np0000155647 dmeventd[115151]: Monitoring thin pool stack--volumes--lvmdriver--1-stack--volumes--lvmdriver--1--pool.

○ dmesg.service - Save initial kernel messages after boot
     Loaded: loaded (/usr/lib/systemd/system/dmesg.service; enabled; preset: enabled)
     Active: inactive (dead) since Mon 2026-02-16 17:56:12 UTC; 20min ago
   Duration: 105ms
   Main PID: 11881 (code=exited, status=0/SUCCESS)
        CPU: 54ms

Feb 16 17:56:12 np0000155647 systemd[1]: Started dmesg.service - Save initial kernel messages after boot.
Feb 16 17:56:12 np0000155647 systemd[1]: dmesg.service: Deactivated successfully.
● docker.service - Docker Application Container Engine
     Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; preset: enabled)
     Active: active (running) since Mon 2026-02-16 17:57:46 UTC; 19min ago
TriggeredBy: ● docker.socket
       Docs: https://docs.docker.com
   Main PID: 20760 (dockerd)
      Tasks: 28
     Memory: 440.9M (peak: 451.3M)
        CPU: 4.947s
     CGroup: /system.slice/docker.service
             ├─20760 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
             └─21600 /usr/bin/docker-proxy -proto tcp -host-ip 127.0.0.1 -host-port 36617 -container-ip 172.18.0.2 -container-port 6443 -use-listen-fd

Feb 16 17:57:46 np0000155647 dockerd[20760]: time="2026-02-16T17:57:46.660598985Z" level=info msg="Loading containers: done."
Feb 16 17:57:46 np0000155647 dockerd[20760]: time="2026-02-16T17:57:46.675031216Z" level=info msg="Docker daemon" commit=6bc6209 containerd-snapshotter=true storage-driver=overlayfs version=29.2.1
Feb 16 17:57:46 np0000155647 dockerd[20760]: time="2026-02-16T17:57:46.675274810Z" level=info msg="Initializing buildkit"
Feb 16 17:57:46 np0000155647 dockerd[20760]: time="2026-02-16T17:57:46.697368190Z" level=info msg="Completed buildkit initialization"
Feb 16 17:57:46 np0000155647 dockerd[20760]: time="2026-02-16T17:57:46.712183588Z" level=info msg="Daemon has completed initialization"
Feb 16 17:57:46 np0000155647 systemd[1]: Started docker.service - Docker Application Container Engine.
Feb 16 17:57:46 np0000155647 dockerd[20760]: time="2026-02-16T17:57:46.712415252Z" level=info msg="API listen on /run/docker.sock"
Feb 16 17:58:02 np0000155647 dockerd[20760]: time="2026-02-16T17:58:02.424038175Z" level=info msg="image pulled" digest="sha256:9be91e9e9cdf116809841fc77ebdb8845443c4c72fe5218f3ae9eb57fdb4bace" remote="docker.io/kindest/node@sha256:9be91e9e9cdf116809841fc77ebdb8845443c4c72fe5218f3ae9eb57fdb4bace"
Feb 16 17:58:09 np0000155647 dockerd[20760]: time="2026-02-16T17:58:09.252194335Z" level=info msg="Skipping check for route to send NA, EMSGSIZE" eid=af0a096c89a5 ep=kind-control-plane net=kind nid=285a601cfe66
Feb 16 17:58:09 np0000155647 dockerd[20760]: time="2026-02-16T17:58:09.256527614Z" level=info msg="sbJoin: gwep4 ''->'af0a096c89a5', gwep6 ''->'af0a096c89a5'" eid=af0a096c89a5 ep=kind-control-plane net=kind nid=285a601cfe66

○ dpkg-db-backup.service - Daily dpkg database backup service
     Loaded: loaded (/usr/lib/systemd/system/dpkg-db-backup.service; static)
     Active: inactive (dead)
TriggeredBy: ● dpkg-db-backup.timer
       Docs: man:dpkg(1)

○ e2scrub_all.service - Online ext4 Metadata Check for All Filesystems
     Loaded: loaded (/usr/lib/systemd/system/e2scrub_all.service; static)
     Active: inactive (dead)
TriggeredBy: ● e2scrub_all.timer
       Docs: man:e2scrub_all(8)

○ e2scrub_reap.service - Remove Stale Online ext4 Metadata Check Snapshots
     Loaded: loaded (/usr/lib/systemd/system/e2scrub_reap.service; enabled; preset: enabled)
     Active: inactive (dead) since Mon 2026-02-16 17:51:10 UTC; 25min ago
       Docs: man:e2scrub_all(8)
   Main PID: 709 (code=exited, status=0/SUCCESS)
        CPU: 32ms

Feb 16 17:51:10 np0000155647 systemd[1]: Starting e2scrub_reap.service - Remove Stale Online ext4 Metadata Check Snapshots...
Feb 16 17:51:10 np0000155647 systemd[1]: e2scrub_reap.service: Deactivated successfully.
Feb 16 17:51:10 np0000155647 systemd[1]: Finished e2scrub_reap.service - Remove Stale Online ext4 Metadata Check Snapshots.
○ emergency.service - Emergency Shell
     Loaded: loaded (/usr/lib/systemd/system/emergency.service; static)
     Active: inactive (dead)
       Docs: man:sulogin(8)

● epmd.service - Erlang Port Mapper Daemon
     Loaded: loaded (/usr/lib/systemd/system/epmd.service; enabled; preset: enabled)
     Active: active (running) since Mon 2026-02-16 17:59:19 UTC; 17min ago
TriggeredBy: ● epmd.socket
   Main PID: 26075 (epmd)
      Tasks: 1 (limit: 77077)
     Memory: 448.0K (peak: 1.5M)
        CPU: 34ms
     CGroup: /system.slice/epmd.service
             └─26075 /usr/bin/epmd -systemd

Feb 16 17:59:19 np0000155647 systemd[1]: Started epmd.service - Erlang Port Mapper Daemon.

● fsidd.service - NFS FSID Daemon
     Loaded: loaded (/usr/lib/systemd/system/fsidd.service; enabled; preset: enabled)
     Active: active (running) since Mon 2026-02-16 18:03:47 UTC; 13min ago
   Main PID: 54639 (fsidd)
      Tasks: 1 (limit: 77077)
     Memory: 436.0K (peak: 676.0K)
        CPU: 4ms
     CGroup: /system.slice/fsidd.service
             └─54639 /usr/sbin/fsidd

Feb 16 18:03:47 np0000155647 systemd[1]: Started fsidd.service - NFS FSID Daemon.

○ fstrim.service - Discard unused blocks on filesystems from /etc/fstab
     Loaded: loaded (/usr/lib/systemd/system/fstrim.service; static)
     Active: inactive (dead)
TriggeredBy: ● fstrim.timer
       Docs: man:fstrim(8)

○ getty-static.service - getty on tty2-tty6 if dbus and logind are not available
     Loaded: loaded (/usr/lib/systemd/system/getty-static.service; static)
     Active: inactive (dead)
  Condition: start condition unmet at Mon 2026-02-16 17:51:10 UTC; 25min ago

Feb 16 17:51:10 np0000155647 systemd[1]: getty-static.service - getty on tty2-tty6 if dbus and logind are not available was skipped because of an unmet condition check (ConditionPathExists=!/usr/bin/dbus-daemon).
● getty@tty1.service - Getty on tty1
     Loaded: loaded (/usr/lib/systemd/system/getty@.service; enabled; preset: enabled)
     Active: active (running) since Mon 2026-02-16 17:51:10 UTC; 25min ago
       Docs: man:agetty(8)
             man:systemd-getty-generator(8)
             https://0pointer.de/blog/projects/serial-console.html
   Main PID: 726 (agetty)
      Tasks: 1 (limit: 77077)
     Memory: 300.0K (peak: 1.8M)
        CPU: 20ms
     CGroup: /system.slice/system-getty.slice/getty@tty1.service
             └─726 /sbin/agetty -o "-p -- \\u" --noclear - linux

Feb 16 17:51:10 np0000155647 systemd[1]: Started getty@tty1.service - Getty on tty1.

○ grub-common.service - Record successful boot for GRUB
     Loaded: loaded (/usr/lib/systemd/system/grub-common.service; enabled; preset: enabled)
     Active: inactive (dead) since Mon 2026-02-16 17:51:10 UTC; 25min ago
   Main PID: 715 (code=exited, status=0/SUCCESS)
        CPU: 50ms

Feb 16 17:51:10 np0000155647 systemd[1]: Starting grub-common.service - Record successful boot for GRUB...
Feb 16 17:51:10 np0000155647 systemd[1]: grub-common.service: Deactivated successfully.
Feb 16 17:51:10 np0000155647 systemd[1]: Finished grub-common.service - Record successful boot for GRUB.

○ grub-initrd-fallback.service - GRUB failed boot detection
     Loaded: loaded (/usr/lib/systemd/system/grub-initrd-fallback.service; enabled; preset: enabled)
     Active: inactive (dead) since Mon 2026-02-16 17:51:10 UTC; 25min ago
   Main PID: 741 (code=exited, status=0/SUCCESS)
        CPU: 22ms

Feb 16 17:51:10 np0000155647 systemd[1]: Starting grub-initrd-fallback.service - GRUB failed boot detection...
Feb 16 17:51:10 np0000155647 systemd[1]: grub-initrd-fallback.service: Deactivated successfully.
Feb 16 17:51:10 np0000155647 systemd[1]: Finished grub-initrd-fallback.service - GRUB failed boot detection.
● haproxy.service - HAProxy Load Balancer
     Loaded: loaded (/usr/lib/systemd/system/haproxy.service; enabled; preset: enabled)
     Active: active (running) since Mon 2026-02-16 17:56:23 UTC; 20min ago
       Docs: man:haproxy(1)
             file:/usr/share/doc/haproxy/configuration.txt.gz
   Main PID: 13241 (haproxy)
     Status: "Ready."
      Tasks: 17 (limit: 77077)
     Memory: 42.9M (peak: 45.8M)
        CPU: 213ms
     CGroup: /system.slice/haproxy.service
             ├─13241 /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -S /run/haproxy-master.sock
             └─13243 /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -S /run/haproxy-master.sock

Feb 16 17:56:23 np0000155647 systemd[1]: Starting haproxy.service - HAProxy Load Balancer...
Feb 16 17:56:23 np0000155647 haproxy[13241]: [NOTICE] (13241) : New worker (13243) forked
Feb 16 17:56:23 np0000155647 haproxy[13241]: [NOTICE] (13241) : Loading success.
Feb 16 17:56:23 np0000155647 systemd[1]: Started haproxy.service - HAProxy Load Balancer.

○ initrd-cleanup.service - Cleaning Up and Shutting Down Daemons
     Loaded: loaded (/usr/lib/systemd/system/initrd-cleanup.service; static)
     Active: inactive (dead)

○ initrd-parse-etc.service - Mountpoints Configured in the Real Root
     Loaded: loaded (/usr/lib/systemd/system/initrd-parse-etc.service; static)
     Active: inactive (dead)

○ initrd-switch-root.service - Switch Root
     Loaded: loaded (/usr/lib/systemd/system/initrd-switch-root.service; static)
     Active: inactive (dead)

○ initrd-udevadm-cleanup-db.service - Cleanup udev Database
     Loaded: loaded (/usr/lib/systemd/system/initrd-udevadm-cleanup-db.service; static)
     Active: inactive (dead)

● iscsid.service - iSCSI initiator daemon (iscsid)
     Loaded: loaded (/usr/lib/systemd/system/iscsid.service; disabled; preset: enabled)
     Active: active (running) since Mon 2026-02-16 18:02:39 UTC; 14min ago
TriggeredBy: ● iscsid.socket
       Docs: man:iscsid(8)
   Main PID: 44109 (iscsid)
      Tasks: 2 (limit: 77077)
     Memory: 2.7M (peak: 3.0M)
        CPU: 58ms
     CGroup: /system.slice/iscsid.service
             ├─44108 /usr/sbin/iscsid
             └─44109 /usr/sbin/iscsid

Feb 16 18:02:39 np0000155647 systemd[1]: Starting iscsid.service - iSCSI initiator daemon (iscsid)...
Feb 16 18:02:39 np0000155647 iscsid[44106]: iSCSI logger with pid=44108 started!
Feb 16 18:02:39 np0000155647 systemd[1]: Started iscsid.service - iSCSI initiator daemon (iscsid).
Feb 16 18:02:40 np0000155647 iscsid[44108]: iSCSI daemon with pid=44109 started!

● kmod-static-nodes.service - Create List of Static Device Nodes
     Loaded: loaded (/usr/lib/systemd/system/kmod-static-nodes.service; static)
     Active: active (exited) since Mon 2026-02-16 17:51:03 UTC; 25min ago
   Main PID: 388 (code=exited, status=0/SUCCESS)
        CPU: 12ms

Feb 16 17:51:03 ubuntu systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Notice: journal has been rotated since unit was started, output may be incomplete.

● ksm.service - Kernel Samepage Merging
     Loaded: loaded (/usr/lib/systemd/system/ksm.service; enabled; preset: enabled)
     Active: active (exited) since Mon 2026-02-16 17:54:53 UTC; 22min ago
   Main PID: 5040 (code=exited, status=0/SUCCESS)
        CPU: 1ms

Feb 16 17:54:53 np0000155647 systemd[1]: Starting ksm.service - Kernel Samepage Merging...
Feb 16 17:54:53 np0000155647 systemd[1]: Finished ksm.service - Kernel Samepage Merging.

● ksmtuned.service - Kernel Samepage Merging (KSM) Tuning Daemon
     Loaded: loaded (/usr/lib/systemd/system/ksmtuned.service; enabled; preset: enabled)
     Active: active (running) since Mon 2026-02-16 17:54:53 UTC; 22min ago
   Main PID: 5045 (ksmtuned)
      Tasks: 2 (limit: 77077)
     Memory: 2.5M (peak: 4.3M)
        CPU: 1.310s
     CGroup: /system.slice/ksmtuned.service
             ├─  5045 /bin/bash /usr/sbin/ksmtuned
             └─130723 sleep 60

Feb 16 17:54:53 np0000155647 systemd[1]: Starting ksmtuned.service - Kernel Samepage Merging (KSM) Tuning Daemon...
Feb 16 17:54:53 np0000155647 systemd[1]: Started ksmtuned.service - Kernel Samepage Merging (KSM) Tuning Daemon.
● ldconfig.service - Rebuild Dynamic Linker Cache
     Loaded: loaded (/usr/lib/systemd/system/ldconfig.service; static)
     Active: active (exited) since Mon 2026-02-16 17:51:03 UTC; 25min ago
       Docs: man:ldconfig(8)
   Main PID: 450 (code=exited, status=0/SUCCESS)
        CPU: 42ms

Feb 16 17:51:03 ubuntu systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 16 17:51:03 ubuntu systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.

● libvirt-guests.service - libvirt guests suspend/resume service
     Loaded: loaded (/usr/lib/systemd/system/libvirt-guests.service; enabled; preset: enabled)
     Active: active (exited) since Mon 2026-02-16 18:02:21 UTC; 14min ago
       Docs: man:libvirt-guests(8)
             https://libvirt.org/
   Main PID: 43087 (code=exited, status=0/SUCCESS)
        CPU: 18ms

Feb 16 18:02:21 np0000155647 systemd[1]: Starting libvirt-guests.service - libvirt guests suspend/resume service...
Feb 16 18:02:21 np0000155647 systemd[1]: Finished libvirt-guests.service - libvirt guests suspend/resume service.

● libvirtd.service - libvirt legacy monolithic daemon
     Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled; preset: enabled)
    Drop-In: /etc/systemd/system/libvirtd.service.d
             └─coredump.conf
     Active: active (running) since Mon 2026-02-16 18:13:31 UTC; 3min 27s ago
TriggeredBy: ● libvirtd-ro.socket
             ● libvirtd.socket
             ● libvirtd-admin.socket
       Docs: man:libvirtd(8)
             https://libvirt.org/
   Main PID: 112159 (libvirtd)
      Tasks: 22 (limit: 32768)
     Memory: 32.5M (peak: 62.3M)
        CPU: 6.436s
     CGroup: /system.slice/libvirtd.service
             ├─ 42979 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/lib/libvirt/libvirt_leaseshelper
             ├─ 42980 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/lib/libvirt/libvirt_leaseshelper
             └─112159 /usr/sbin/libvirtd --timeout 120

Feb 16 18:13:31 np0000155647 systemd[1]: Starting libvirtd.service - libvirt legacy monolithic daemon...
Feb 16 18:13:31 np0000155647 libvirtd[112159]: 2026-02-16 18:13:31.341+0000: 112159: info : libvirt version: 10.0.0, package: 10.0.0-2ubuntu8.11 (Ubuntu)
Feb 16 18:13:31 np0000155647 libvirtd[112159]: 2026-02-16 18:13:31.341+0000: 112159: info : hostname: np0000155647
Feb 16 18:13:31 np0000155647 libvirtd[112159]: 2026-02-16 18:13:31.341+0000: 112159: debug : virLogParseOutputs:1638 : outputs=1:file:/var/log/libvirt/libvirtd.log
Feb 16 18:13:31 np0000155647 libvirtd[112159]: 2026-02-16 18:13:31.341+0000: 112159: debug : virLogParseOutput:1485 : output=1:file:/var/log/libvirt/libvirtd.log
Feb 16 18:13:31 np0000155647 systemd[1]: Started libvirtd.service - libvirt legacy monolithic daemon.
Feb 16 18:13:32 np0000155647 dnsmasq[42979]: read /etc/hosts - 8 names
Feb 16 18:13:32 np0000155647 dnsmasq[42979]: read /var/lib/libvirt/dnsmasq/default.addnhosts - 0 names
Feb 16 18:13:32 np0000155647 dnsmasq-dhcp[42979]: read /var/lib/libvirt/dnsmasq/default.hostsfile

○ logrotate.service - Rotate log files
     Loaded: loaded (/usr/lib/systemd/system/logrotate.service; static)
     Active: inactive (dead)
TriggeredBy: ● logrotate.timer
       Docs: man:logrotate(8)
             man:logrotate.conf(5)

○ lvm2-lvmpolld.service - LVM2 poll daemon
     Loaded: loaded (/usr/lib/systemd/system/lvm2-lvmpolld.service; static)
     Active: inactive (dead)
TriggeredBy: ● lvm2-lvmpolld.socket
       Docs: man:lvmpolld(8)

● lvm2-monitor.service - Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling
     Loaded: loaded (/usr/lib/systemd/system/lvm2-monitor.service; enabled; preset: enabled)
     Active: active (exited) since Mon 2026-02-16 17:56:36 UTC; 20min ago
       Docs: man:dmeventd(8)
             man:lvcreate(8)
             man:lvchange(8)
             man:vgchange(8)
   Main PID: 14528 (code=exited, status=0/SUCCESS)
        CPU: 12ms

Feb 16 17:56:36 np0000155647 systemd[1]: Starting lvm2-monitor.service - Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Feb 16 17:56:36 np0000155647 systemd[1]: Finished lvm2-monitor.service - Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.

○ man-db.service - Daily man-db regeneration
     Loaded: loaded (/usr/lib/systemd/system/man-db.service; static)
     Active: inactive (dead)
TriggeredBy: ● man-db.timer
       Docs: man:mandb(8)

● memcached.service - memcached daemon
     Loaded: loaded (/usr/lib/systemd/system/memcached.service; enabled; preset: enabled)
     Active: active (running) since Mon 2026-02-16 18:06:46 UTC; 10min ago
       Docs: man:memcached(1)
   Main PID: 66471 (memcached)
      Tasks: 10 (limit: 77077)
     Memory: 16.6M (peak: 17.2M)
        CPU: 2.377s
     CGroup: /system.slice/memcached.service
             └─66471 /usr/bin/memcached -m 64 -p 11211 -u memcache -l 127.0.0.1 -l ::1 -P /var/run/memcached/memcached.pid

Feb 16 18:06:46 np0000155647 systemd[1]: Started memcached.service - memcached daemon.

○ modprobe@configfs.service - Load Kernel Module configfs
     Loaded: loaded (/usr/lib/systemd/system/modprobe@.service; static)
     Active: inactive (dead) since Mon 2026-02-16 17:51:03 UTC; 25min ago
       Docs: man:modprobe(8)
   Main PID: 389 (code=exited, status=0/SUCCESS)
        CPU: 13ms

Feb 16 17:51:03 ubuntu systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 16 17:51:03 ubuntu systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Notice: journal has been rotated since unit was started, output may be incomplete.

○ modprobe@dm_mod.service - Load Kernel Module dm_mod
     Loaded: loaded (/usr/lib/systemd/system/modprobe@.service; static)
     Active: inactive (dead) since Mon 2026-02-16 17:51:03 UTC; 25min ago
       Docs: man:modprobe(8)
   Main PID: 390 (code=exited, status=0/SUCCESS)
        CPU: 16ms

Feb 16 17:51:03 ubuntu systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 16 17:51:03 ubuntu systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Notice: journal has been rotated since unit was started, output may be incomplete.
○ modprobe@drm.service - Load Kernel Module drm
     Loaded: loaded (/usr/lib/systemd/system/modprobe@.service; static)
     Active: inactive (dead) since Mon 2026-02-16 17:51:03 UTC; 25min ago
       Docs: man:modprobe(8)
   Main PID: 391 (code=exited, status=0/SUCCESS)
        CPU: 12ms

Feb 16 17:51:03 ubuntu systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 16 17:51:03 ubuntu systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Notice: journal has been rotated since unit was started, output may be incomplete.

○ modprobe@efi_pstore.service - Load Kernel Module efi_pstore
     Loaded: loaded (/usr/lib/systemd/system/modprobe@.service; static)
     Active: inactive (dead) since Mon 2026-02-16 17:51:03 UTC; 25min ago
       Docs: man:modprobe(8)
   Main PID: 392 (code=exited, status=0/SUCCESS)
        CPU: 12ms

Feb 16 17:51:03 ubuntu systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 16 17:51:03 ubuntu systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Notice: journal has been rotated since unit was started, output may be incomplete.

○ modprobe@fuse.service - Load Kernel Module fuse
     Loaded: loaded (/usr/lib/systemd/system/modprobe@.service; static)
     Active: inactive (dead) since Mon 2026-02-16 17:51:03 UTC; 25min ago
       Docs: man:modprobe(8)
   Main PID: 393 (code=exited, status=0/SUCCESS)
        CPU: 14ms

Feb 16 17:51:03 ubuntu systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 16 17:51:03 ubuntu systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Notice: journal has been rotated since unit was started, output may be incomplete.

○ modprobe@loop.service - Load Kernel Module loop
     Loaded: loaded (/usr/lib/systemd/system/modprobe@.service; static)
     Active: inactive (dead) since Mon 2026-02-16 17:51:03 UTC; 25min ago
       Docs: man:modprobe(8)
   Main PID: 394 (code=exited, status=0/SUCCESS)
        CPU: 11ms

Feb 16 17:51:03 ubuntu systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 16 17:51:03 ubuntu systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Notice: journal has been rotated since unit was started, output may be incomplete.

○ motd-news.service - Message of the Day
     Loaded: loaded (/usr/lib/systemd/system/motd-news.service; static)
     Active: inactive (dead) since Mon 2026-02-16 18:08:54 UTC; 8min ago
TriggeredBy: ● motd-news.timer
       Docs: man:update-motd(8)
   Main PID: 80686 (code=exited, status=0/SUCCESS)
        CPU: 1ms

Feb 16 18:08:54 np0000155647 systemd[1]: Starting motd-news.service - Message of the Day...
Feb 16 18:08:54 np0000155647 systemd[1]: motd-news.service: Deactivated successfully.
Feb 16 18:08:54 np0000155647 systemd[1]: Finished motd-news.service - Message of the Day.

● mysql.service - MySQL Community Server
     Loaded: loaded (/usr/lib/systemd/system/mysql.service; enabled; preset: enabled)
     Active: active (running) since Mon 2026-02-16 18:06:18 UTC; 10min ago
   Main PID: 62997 (mysqld)
     Status: "Server is operational"
      Tasks: 244 (limit: 77077)
     Memory: 1.1G (peak: 1.1G)
        CPU: 1min 56.770s
     CGroup: /system.slice/mysql.service
             └─62997 /usr/sbin/mysqld

Feb 16 18:06:17 np0000155647 systemd[1]: Starting mysql.service - MySQL Community Server...
Feb 16 18:06:18 np0000155647 systemd[1]: Started mysql.service - MySQL Community Server.

○ netplan-ovs-cleanup.service - OpenVSwitch configuration for cleanup
     Loaded: loaded (/run/systemd/system/netplan-ovs-cleanup.service; enabled-runtime; preset: enabled)
     Active: inactive (dead)
  Condition: start condition unmet at Mon 2026-02-16 17:51:04 UTC; 25min ago

Feb 16 17:51:04 np0000155647 systemd[1]: netplan-ovs-cleanup.service - OpenVSwitch configuration for cleanup was skipped because of an unmet condition check (ConditionFileIsExecutable=/usr/bin/ovs-vsctl).
Feb 16 17:51:04 np0000155647 systemd[1]: netplan-ovs-cleanup.service - OpenVSwitch configuration for cleanup was skipped because of an unmet condition check (ConditionFileIsExecutable=/usr/bin/ovs-vsctl).
Feb 16 17:51:04 np0000155647 systemd[1]: netplan-ovs-cleanup.service - OpenVSwitch configuration for cleanup was skipped because of an unmet condition check (ConditionFileIsExecutable=/usr/bin/ovs-vsctl). ○ networkd-dispatcher.service - Dispatcher daemon for systemd-networkd Loaded: loaded (/usr/lib/systemd/system/networkd-dispatcher.service; enabled; preset: enabled) Active: inactive (dead) Condition: start condition unmet at Mon 2026-02-16 17:51:10 UTC; 25min ago Feb 16 17:51:10 np0000155647 systemd[1]: networkd-dispatcher.service - Dispatcher daemon for systemd-networkd was skipped because no trigger condition checks were met. ● nfs-blkmap.service - pNFS block layout mapping daemon Loaded: loaded (/usr/lib/systemd/system/nfs-blkmap.service; enabled; preset: enabled) Active: active (running) since Mon 2026-02-16 18:03:47 UTC; 13min ago Main PID: 54644 (blkmapd) Tasks: 1 (limit: 77077) Memory: 332.0K (peak: 1.5M) CPU: 16ms CGroup: /system.slice/nfs-blkmap.service └─54644 /usr/sbin/blkmapd Feb 16 18:03:47 np0000155647 systemd[1]: Starting nfs-blkmap.service - pNFS block layout mapping daemon... Feb 16 18:03:47 np0000155647 blkmapd[54644]: open pipe file /run/rpc_pipefs/nfs/blocklayout failed: No such file or directory Feb 16 18:03:47 np0000155647 systemd[1]: Started nfs-blkmap.service - pNFS block layout mapping daemon. ● nfs-idmapd.service - NFSv4 ID-name mapping service Loaded: loaded (/usr/lib/systemd/system/nfs-idmapd.service; static) Active: active (running) since Mon 2026-02-16 18:03:47 UTC; 13min ago Main PID: 54647 (rpc.idmapd) Tasks: 1 (limit: 77077) Memory: 412.0K (peak: 1.5M) CPU: 14ms CGroup: /system.slice/nfs-idmapd.service └─54647 /usr/sbin/rpc.idmapd Feb 16 18:03:47 np0000155647 systemd[1]: Starting nfs-idmapd.service - NFSv4 ID-name mapping service... Feb 16 18:03:47 np0000155647 rpc.idmapd[54647]: Setting log level to 0 Feb 16 18:03:47 np0000155647 systemd[1]: Started nfs-idmapd.service - NFSv4 ID-name mapping service. 
● nfs-mountd.service - NFS Mount Daemon
     Loaded: loaded (/usr/lib/systemd/system/nfs-mountd.service; static)
     Active: active (running) since Mon 2026-02-16 18:03:47 UTC; 13min ago
   Main PID: 54657 (rpc.mountd)
      Tasks: 1 (limit: 77077)
     Memory: 904.0K (peak: 1.5M)
        CPU: 11ms
     CGroup: /system.slice/nfs-mountd.service
             └─54657 /usr/sbin/rpc.mountd

Feb 16 18:03:47 np0000155647 systemd[1]: Starting nfs-mountd.service - NFS Mount Daemon...
Feb 16 18:03:47 np0000155647 rpc.mountd[54657]: Version 2.6.4 starting
Feb 16 18:03:47 np0000155647 systemd[1]: Started nfs-mountd.service - NFS Mount Daemon.

● nfs-server.service - NFS server and services
     Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; enabled; preset: enabled)
     Active: active (exited) since Mon 2026-02-16 18:03:47 UTC; 13min ago
   Main PID: 54663 (code=exited, status=0/SUCCESS)
        CPU: 21ms

Feb 16 18:03:47 np0000155647 systemd[1]: Starting nfs-server.service - NFS server and services...
Feb 16 18:03:47 np0000155647 exportfs[54661]: exportfs: can't open /etc/exports for reading
Feb 16 18:03:47 np0000155647 systemd[1]: Finished nfs-server.service - NFS server and services.

○ nfs-utils.service - NFS server and client services
     Loaded: loaded (/usr/lib/systemd/system/nfs-utils.service; static)
     Active: inactive (dead)

● nfsdcld.service - NFSv4 Client Tracking Daemon
     Loaded: loaded (/usr/lib/systemd/system/nfsdcld.service; static)
     Active: active (running) since Mon 2026-02-16 18:03:47 UTC; 13min ago
   Main PID: 54659 (nfsdcld)
      Tasks: 1 (limit: 77077)
     Memory: 684.0K (peak: 1.5M)
        CPU: 18ms
     CGroup: /system.slice/nfsdcld.service
             └─54659 /usr/sbin/nfsdcld

Feb 16 18:03:47 np0000155647 systemd[1]: Starting nfsdcld.service - NFSv4 Client Tracking Daemon...
Feb 16 18:03:47 np0000155647 systemd[1]: Started nfsdcld.service - NFSv4 Client Tracking Daemon.
● nmbd.service - Samba NMB Daemon
     Loaded: loaded (/usr/lib/systemd/system/nmbd.service; enabled; preset: enabled)
     Active: active (running) since Mon 2026-02-16 18:03:51 UTC; 13min ago
       Docs: man:nmbd(8)
             man:samba(7)
             man:smb.conf(5)
   Main PID: 55225 (nmbd)
     Status: "nmbd: ready to serve connections..."
      Tasks: 1 (limit: 77077)
     Memory: 3.2M (peak: 4.4M)
        CPU: 188ms
     CGroup: /system.slice/nmbd.service
             └─55225 /usr/sbin/nmbd --foreground --no-process-group

Feb 16 18:03:51 np0000155647 systemd[1]: Starting nmbd.service - Samba NMB Daemon...
Feb 16 18:03:51 np0000155647 (nmbd)[55225]: nmbd.service: Referenced but unset environment variable evaluates to an empty string: NMBDOPTIONS
Feb 16 18:03:51 np0000155647 systemd[1]: Started nmbd.service - Samba NMB Daemon.

○ open-iscsi.service - Login to default iSCSI targets
     Loaded: loaded (/usr/lib/systemd/system/open-iscsi.service; enabled; preset: enabled)
     Active: inactive (dead)
  Condition: start condition unmet at Mon 2026-02-16 17:51:10 UTC; 25min ago
       Docs: man:iscsiadm(8)
             man:iscsid(8)

Feb 16 17:51:10 np0000155647 systemd[1]: open-iscsi.service - Login to default iSCSI targets was skipped because no trigger condition checks were met.

● openvswitch-switch.service - Open vSwitch
     Loaded: loaded (/usr/lib/systemd/system/openvswitch-switch.service; enabled; preset: enabled)
     Active: active (exited) since Mon 2026-02-16 18:12:27 UTC; 4min 32s ago
   Main PID: 100800 (code=exited, status=0/SUCCESS)
        CPU: 1ms

Feb 16 18:12:27 np0000155647 systemd[1]: Starting openvswitch-switch.service - Open vSwitch...
Feb 16 18:12:27 np0000155647 systemd[1]: Finished openvswitch-switch.service - Open vSwitch.
● ovn-central.service - Open Virtual Network central components
     Loaded: loaded (/usr/lib/systemd/system/ovn-central.service; enabled; preset: enabled)
     Active: active (exited) since Mon 2026-02-16 18:12:28 UTC; 4min 30s ago
   Main PID: 101121 (code=exited, status=0/SUCCESS)
        CPU: 11ms

Feb 16 18:12:28 np0000155647 systemd[1]: Starting ovn-central.service - Open Virtual Network central components...
Feb 16 18:12:28 np0000155647 systemd[1]: Finished ovn-central.service - Open Virtual Network central components.

● ovn-controller-vtep.service - Open Virtual Network VTEP gateway controller daemon
     Loaded: loaded (/usr/lib/systemd/system/ovn-controller-vtep.service; enabled; preset: enabled)
     Active: active (running) since Mon 2026-02-16 18:12:27 UTC; 4min 32s ago
   Main PID: 100859 (ovn-controller-)
      Tasks: 1 (limit: 77077)
     Memory: 880.0K (peak: 2.3M)
        CPU: 67ms
     CGroup: /system.slice/ovn-controller-vtep.service
             └─100859 ovn-controller-vtep -vconsole:emer -vsyslog:err -vfile:info --vtep-db=/var/run/openvswitch/db.sock --ovnsb-db=/var/run/ovn/ovnsb_db.sock --no-chdir --log-file=/var/log/ovn/ovn-controller-vtep.log --pidfile=/var/run/ovn/ovn-controller-vtep.pid --detach

Feb 16 18:12:27 np0000155647 systemd[1]: Starting ovn-controller-vtep.service - Open Virtual Network VTEP gateway controller daemon...
Feb 16 18:12:27 np0000155647 (ovn-ctl)[100809]: ovn-controller-vtep.service: Referenced but unset environment variable evaluates to an empty string: OVN_CTL_OPTS
Feb 16 18:12:27 np0000155647 ovn-ctl[100809]: * Starting ovn-controller-vtep
Feb 16 18:12:27 np0000155647 systemd[1]: Started ovn-controller-vtep.service - Open Virtual Network VTEP gateway controller daemon.
● ovn-controller.service - Open Virtual Network host control daemon
     Loaded: loaded (/usr/lib/systemd/system/ovn-controller.service; static)
     Active: active (running) since Mon 2026-02-16 18:12:30 UTC; 4min 28s ago
   Main PID: 101600 (ovn-controller)
      Tasks: 5 (limit: 77077)
     Memory: 5.5M (peak: 6.0M)
        CPU: 253ms
     CGroup: /system.slice/ovn-controller.service
             └─101600 ovn-controller unix:/var/run/openvswitch/db.sock -vconsole:emer -vsyslog:err -vfile:info --no-chdir --log-file=/var/log/ovn/ovn-controller.log --pidfile=/var/run/ovn/ovn-controller.pid --detach

Feb 16 18:12:30 np0000155647 systemd[1]: Starting ovn-controller.service - Open Virtual Network host control daemon...
Feb 16 18:12:30 np0000155647 (ovn-ctl)[101569]: ovn-controller.service: Referenced but unset environment variable evaluates to an empty string: OVN_CTL_OPTS
Feb 16 18:12:30 np0000155647 ovn-ctl[101569]: * Starting ovn-controller
Feb 16 18:12:30 np0000155647 systemd[1]: Started ovn-controller.service - Open Virtual Network host control daemon.

● ovn-host.service - Open Virtual Network host components
     Loaded: loaded (/usr/lib/systemd/system/ovn-host.service; enabled; preset: enabled)
     Active: active (exited) since Mon 2026-02-16 18:01:56 UTC; 15min ago
   Main PID: 41050 (code=exited, status=0/SUCCESS)
        CPU: 11ms

Feb 16 18:01:56 np0000155647 systemd[1]: Starting ovn-host.service - Open Virtual Network host components...
Feb 16 18:01:56 np0000155647 systemd[1]: Finished ovn-host.service - Open Virtual Network host components.
● ovn-northd.service - Open Virtual Network central control daemon
     Loaded: loaded (/usr/lib/systemd/system/ovn-northd.service; static)
     Active: active (running) since Mon 2026-02-16 18:12:29 UTC; 4min 30s ago
   Main PID: 101269 (ovn-northd)
      Tasks: 3 (limit: 77077)
     Memory: 3.0M (peak: 3.3M)
        CPU: 243ms
     CGroup: /system.slice/ovn-northd.service
             └─101269 ovn-northd -vconsole:emer -vsyslog:err -vfile:info --ovnnb-db=unix:/var/run/ovn/ovnnb_db.sock --ovnsb-db=unix:/var/run/ovn/ovnsb_db.sock --no-chdir --log-file=/var/log/ovn/ovn-northd.log --pidfile=/var/run/ovn/ovn-northd.pid --detach

Feb 16 18:12:28 np0000155647 systemd[1]: Starting ovn-northd.service - Open Virtual Network central control daemon...
Feb 16 18:12:28 np0000155647 (ovn-ctl)[101200]: ovn-northd.service: Referenced but unset environment variable evaluates to an empty string: OVN_CTL_OPTS
Feb 16 18:12:29 np0000155647 ovn-ctl[101200]: * Starting ovn-northd
Feb 16 18:12:29 np0000155647 systemd[1]: Started ovn-northd.service - Open Virtual Network central control daemon.
● ovn-ovsdb-server-nb.service - Open vSwitch database server for OVN Northbound database
     Loaded: loaded (/usr/lib/systemd/system/ovn-ovsdb-server-nb.service; enabled; preset: enabled)
     Active: active (running) since Mon 2026-02-16 18:12:28 UTC; 4min 30s ago
   Main PID: 101194 (ovsdb-server)
      Tasks: 1 (limit: 77077)
     Memory: 4.7M (peak: 5.5M)
        CPU: 3.147s
     CGroup: /system.slice/ovn-ovsdb-server-nb.service
             └─101194 ovsdb-server -vconsole:off -vfile:info --log-file=/var/log/ovn/ovsdb-server-nb.log --remote=punix:/var/run/ovn/ovnnb_db.sock --pidfile=/var/run/ovn/ovnnb_db.pid --unixctl=/var/run/ovn/ovnnb_db.ctl --remote=db:OVN_Northbound,NB_Global,connections --private-key=db:OVN_Northbound,SSL,private_key --certificate=db:OVN_Northbound,SSL,certificate --ca-cert=db:OVN_Northbound,SSL,ca_cert --ssl-protocols=db:OVN_Northbound,SSL,ssl_protocols --ssl-ciphers=db:OVN_Northbound,SSL,ssl_ciphers /var/lib/ovn/ovnnb_db.db

Feb 16 18:16:58 np0000155647 ovsdb-server[101194]: ovs|05316|poll_loop|DBG|wakeup due to 0-ms timeout at ../lib/stream-ssl.c:844 (0% CPU usage)
Feb 16 18:16:58 np0000155647 ovsdb-server[101194]: ovs|05317|poll_loop|DBG|wakeup due to [POLLIN] on fd 30 (199.204.45.4:6641<->199.204.45.4:56228) at ../lib/stream-ssl.c:842 (0% CPU usage)
Feb 16 18:16:58 np0000155647 ovsdb-server[101194]: ovs|05318|stream_ssl|DBG|server25<--ssl:199.204.45.4:56228 type 256 (5 bytes)
Feb 16 18:16:58 np0000155647 ovsdb-server[101194]: ovs|05319|stream_ssl|DBG|server25<--ssl:199.204.45.4:56228 type 257 (1 bytes)
Feb 16 18:16:58 np0000155647 ovsdb-server[101194]: ovs|05320|jsonrpc|DBG|ssl:199.204.45.4:56228: received request, method="echo", params=[], id="echo"
Feb 16 18:16:58 np0000155647 ovsdb-server[101194]: ovs|05321|jsonrpc|DBG|ssl:199.204.45.4:56228: send reply, result=[], id="echo"
Feb 16 18:16:58 np0000155647 ovsdb-server[101194]: ovs|05322|stream_ssl|DBG|server25-->ssl:199.204.45.4:56228 type 256 (5 bytes)
Feb 16 18:16:58 np0000155647 ovsdb-server[101194]: ovs|05323|stream_ssl|DBG|server25-->ssl:199.204.45.4:56228 type 257 (1 bytes)
Feb 16 18:16:58 np0000155647 ovsdb-server[101194]: ovs|05324|poll_loop|DBG|wakeup due to 0-ms timeout at ../lib/stream-ssl.c:844 (0% CPU usage)
Feb 16 18:16:58 np0000155647 ovsdb-server[101194]: ovs|05325|poll_loop|DBG|wakeup due to 149-ms timeout at ../ovsdb/ovsdb-server.c:400 (0% CPU usage)

● ovn-ovsdb-server-sb.service - Open vSwitch database server for OVN Southbound database
     Loaded: loaded (/usr/lib/systemd/system/ovn-ovsdb-server-sb.service; enabled; preset: enabled)
     Active: active (running) since Mon 2026-02-16 18:12:28 UTC; 4min 30s ago
   Main PID: 101199 (ovsdb-server)
      Tasks: 1 (limit: 77077)
     Memory: 5.3M (peak: 5.8M)
        CPU: 3.529s
     CGroup: /system.slice/ovn-ovsdb-server-sb.service
             └─101199 ovsdb-server -vconsole:off -vfile:info --log-file=/var/log/ovn/ovsdb-server-sb.log --remote=punix:/var/run/ovn/ovnsb_db.sock --pidfile=/var/run/ovn/ovnsb_db.pid --unixctl=/var/run/ovn/ovnsb_db.ctl --remote=db:OVN_Southbound,SB_Global,connections --private-key=db:OVN_Southbound,SSL,private_key --certificate=db:OVN_Southbound,SSL,certificate --ca-cert=db:OVN_Southbound,SSL,ca_cert --ssl-protocols=db:OVN_Southbound,SSL,ssl_protocols --ssl-ciphers=db:OVN_Southbound,SSL,ssl_ciphers /var/lib/ovn/ovnsb_db.db

Feb 16 18:16:57 np0000155647 ovsdb-server[101199]: ovs|03727|jsonrpc|DBG|ssl:199.204.45.4:48628: send request, method="echo", params=[], id="echo"
Feb 16 18:16:57 np0000155647 ovsdb-server[101199]: ovs|03728|stream_ssl|DBG|server18-->ssl:199.204.45.4:48628 type 256 (5 bytes)
Feb 16 18:16:57 np0000155647 ovsdb-server[101199]: ovs|03729|stream_ssl|DBG|server18-->ssl:199.204.45.4:48628 type 257 (1 bytes)
Feb 16 18:16:57 np0000155647 ovsdb-server[101199]: ovs|03730|poll_loop|DBG|wakeup due to [POLLIN] on fd 31 (199.204.45.4:6642<->199.204.45.4:48628) at ../lib/stream-ssl.c:842 (0% CPU usage)
Feb 16 18:16:57 np0000155647 ovsdb-server[101199]: ovs|03731|stream_ssl|DBG|server18<--ssl:199.204.45.4:48628 type 256 (5 bytes)
Feb 16 18:16:57 np0000155647 ovsdb-server[101199]: ovs|03732|stream_ssl|DBG|server18<--ssl:199.204.45.4:48628 type 257 (1 bytes)
Feb 16 18:16:57 np0000155647 ovsdb-server[101199]: ovs|03733|jsonrpc|DBG|ssl:199.204.45.4:48628: received reply, result=[], id="echo"
Feb 16 18:16:57 np0000155647 ovsdb-server[101199]: ovs|03734|reconnect|DBG|ssl:199.204.45.4:48628: entering ACTIVE
Feb 16 18:16:57 np0000155647 ovsdb-server[101199]: ovs|03735|poll_loop|DBG|wakeup due to 0-ms timeout at ../lib/stream-ssl.c:844 (0% CPU usage)
Feb 16 18:16:58 np0000155647 ovsdb-server[101199]: ovs|03736|poll_loop|DBG|wakeup due to 1597-ms timeout at ../ovsdb/ovsdb-server.c:400 (0% CPU usage)

● ovs-record-hostname.service - Open vSwitch Record Hostname
     Loaded: loaded (/usr/lib/systemd/system/ovs-record-hostname.service; enabled; preset: enabled)
     Active: active (exited) since Mon 2026-02-16 18:12:27 UTC; 4min 32s ago
   Main PID: 100806 (code=exited, status=0/SUCCESS)
        CPU: 30ms

Feb 16 18:12:27 np0000155647 systemd[1]: Starting ovs-record-hostname.service - Open vSwitch Record Hostname...
Feb 16 18:12:27 np0000155647 ovs-vsctl[100836]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=np0000155647.novalocal
Feb 16 18:12:27 np0000155647 systemd[1]: Finished ovs-record-hostname.service - Open vSwitch Record Hostname.
● ovs-vswitchd.service - Open vSwitch Forwarding Unit
     Loaded: loaded (/usr/lib/systemd/system/ovs-vswitchd.service; static)
     Active: active (running) since Mon 2026-02-16 18:12:27 UTC; 4min 32s ago
   Main PID: 100770 (ovs-vswitchd)
      Tasks: 23 (limit: 77077)
     Memory: 183.8M (peak: 185.4M)
        CPU: 1.481s
     CGroup: /system.slice/ovs-vswitchd.service
             └─100770 ovs-vswitchd unix:/var/run/openvswitch/db.sock -vconsole:emer -vsyslog:err -vfile:info --mlockall --no-chdir --log-file=/var/log/openvswitch/ovs-vswitchd.log --pidfile=/var/run/openvswitch/ovs-vswitchd.pid --detach

Feb 16 18:12:26 np0000155647 systemd[1]: Starting ovs-vswitchd.service - Open vSwitch Forwarding Unit...
Feb 16 18:12:26 np0000155647 (ovs-ctl)[100731]: ovs-vswitchd.service: Referenced but unset environment variable evaluates to an empty string: OVS_CTL_OPTS
Feb 16 18:12:27 np0000155647 ovs-ctl[100731]: * Starting ovs-vswitchd
Feb 16 18:12:27 np0000155647 ovs-ctl[100731]: * Enabling remote OVSDB managers
Feb 16 18:12:27 np0000155647 systemd[1]: Started ovs-vswitchd.service - Open vSwitch Forwarding Unit.

● ovsdb-server.service - Open vSwitch Database Unit
     Loaded: loaded (/usr/lib/systemd/system/ovsdb-server.service; static)
     Active: active (running) since Mon 2026-02-16 18:12:26 UTC; 4min 32s ago
   Main PID: 100719 (ovsdb-server)
      Tasks: 1 (limit: 77077)
     Memory: 2.1M (peak: 5.2M)
        CPU: 2.895s
     CGroup: /system.slice/ovsdb-server.service
             └─100719 ovsdb-server /etc/openvswitch/conf.db -vconsole:emer -vsyslog:err -vfile:info --remote=punix:/var/run/openvswitch/db.sock --private-key=db:Open_vSwitch,SSL,private_key --certificate=db:Open_vSwitch,SSL,certificate --bootstrap-ca-cert=db:Open_vSwitch,SSL,ca_cert --no-chdir --log-file=/var/log/openvswitch/ovsdb-server.log --pidfile=/var/run/openvswitch/ovsdb-server.pid --detach

Feb 16 18:12:26 np0000155647 systemd[1]: Starting ovsdb-server.service - Open vSwitch Database Unit...
Feb 16 18:12:26 np0000155647 (ovs-ctl)[100685]: ovsdb-server.service: Referenced but unset environment variable evaluates to an empty string: OVS_CTL_OPTS
Feb 16 18:12:26 np0000155647 ovs-ctl[100685]: * Starting ovsdb-server
Feb 16 18:12:26 np0000155647 ovs-vsctl[100720]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Feb 16 18:12:26 np0000155647 ovs-vsctl[100725]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.4 "external-ids:system-id=\"34eeba02-25e9-447a-a0d4-f9c5eb6e69ca\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"ubuntu\"" "system-version=\"24.04\""
Feb 16 18:12:26 np0000155647 ovs-ctl[100685]: * Configuring Open vSwitch system IDs
Feb 16 18:12:26 np0000155647 ovs-ctl[100685]: * Enabling remote OVSDB managers
Feb 16 18:12:26 np0000155647 systemd[1]: Started ovsdb-server.service - Open vSwitch Database Unit.

● polkit.service - Authorization Manager
     Loaded: loaded (/usr/lib/systemd/system/polkit.service; static)
     Active: active (running) since Mon 2026-02-16 17:51:10 UTC; 25min ago
       Docs: man:polkit(8)
   Main PID: 745 (polkitd)
      Tasks: 4 (limit: 77077)
     Memory: 3.5M (peak: 4.6M)
        CPU: 160ms
     CGroup: /system.slice/polkit.service
             └─745 /usr/lib/polkit-1/polkitd --no-debug

Feb 16 18:02:09 np0000155647 polkitd[745]: Reloading rules
Feb 16 18:02:09 np0000155647 polkitd[745]: Collecting garbage unconditionally...
Feb 16 18:02:09 np0000155647 polkitd[745]: Loading rules from directory /etc/polkit-1/rules.d
Feb 16 18:02:09 np0000155647 polkitd[745]: Loading rules from directory /usr/share/polkit-1/rules.d
Feb 16 18:02:09 np0000155647 polkitd[745]: Finished loading, compiling and executing 5 rules
Feb 16 18:02:09 np0000155647 polkitd[745]: Reloading rules
Feb 16 18:02:09 np0000155647 polkitd[745]: Collecting garbage unconditionally...
Feb 16 18:02:09 np0000155647 polkitd[745]: Loading rules from directory /etc/polkit-1/rules.d
Feb 16 18:02:09 np0000155647 polkitd[745]: Loading rules from directory /usr/share/polkit-1/rules.d
Feb 16 18:02:09 np0000155647 polkitd[745]: Finished loading, compiling and executing 5 rules

● postgresql.service - PostgreSQL RDBMS
     Loaded: loaded (/usr/lib/systemd/system/postgresql.service; enabled; preset: enabled)
     Active: active (exited) since Mon 2026-02-16 17:56:26 UTC; 20min ago
   Main PID: 13587 (code=exited, status=0/SUCCESS)
        CPU: 3ms

Feb 16 17:56:26 np0000155647 systemd[1]: Starting postgresql.service - PostgreSQL RDBMS...
Feb 16 17:56:26 np0000155647 systemd[1]: Finished postgresql.service - PostgreSQL RDBMS.

● qemu-kvm.service - QEMU KVM preparation - module, ksm, hugepages
     Loaded: loaded (/usr/lib/systemd/system/qemu-kvm.service; enabled; preset: enabled)
     Active: active (exited) since Mon 2026-02-16 18:02:17 UTC; 14min ago
   Main PID: 42560 (code=exited, status=0/SUCCESS)
        CPU: 30ms

Feb 16 18:02:17 np0000155647 systemd[1]: Starting qemu-kvm.service - QEMU KVM preparation - module, ksm, hugepages...
Feb 16 18:02:17 np0000155647 systemd[1]: Finished qemu-kvm.service - QEMU KVM preparation - module, ksm, hugepages.
● rabbitmq-server.service - RabbitMQ Messaging Server
     Loaded: loaded (/usr/lib/systemd/system/rabbitmq-server.service; enabled; preset: enabled)
     Active: active (running) since Mon 2026-02-16 17:59:28 UTC; 17min ago
   Main PID: 26195 (beam.smp)
      Tasks: 52 (limit: 77077)
     Memory: 126.8M (peak: 164.3M)
        CPU: 25.944s
     CGroup: /system.slice/rabbitmq-server.service
             ├─26195 /usr/lib/erlang/erts-13.2.2.5/bin/beam.smp -W w -MBas ageffcbf -MHas ageffcbf -MBlmbcs 512 -MHlmbcs 512 -MMmcs 30 -pc unicode -P 1048576 -t 5000000 -stbt db -zdbbl 128000 -sbwt none -sbwtdcpu none -sbwtdio none -- -root /usr/lib/erlang -bindir /usr/lib/erlang/erts-13.2.2.5/bin -progname erl -- -home /var/lib/rabbitmq -- -pa "" -noshell -noinput -s rabbit boot -boot start_sasl -syslog logger "[]" -syslog syslog_error_logger false -kernel prevent_overlapping_partitions false -enable-feature maybe_expr
             ├─26205 erl_child_setup 65536
             ├─26313 /usr/lib/erlang/erts-13.2.2.5/bin/inet_gethost 4
             ├─26314 /usr/lib/erlang/erts-13.2.2.5/bin/inet_gethost 4
             └─26319 /bin/sh -s rabbit_disk_monitor

Feb 16 17:59:22 np0000155647 systemd[1]: Starting rabbitmq-server.service - RabbitMQ Messaging Server...
Feb 16 17:59:28 np0000155647 systemd[1]: Started rabbitmq-server.service - RabbitMQ Messaging Server.
○ rc-local.service - /etc/rc.local Compatibility
     Loaded: loaded (/usr/lib/systemd/system/rc-local.service; static)
    Drop-In: /usr/lib/systemd/system/rc-local.service.d
             └─debian.conf
     Active: inactive (dead)
       Docs: man:systemd-rc-local-generator(8)

○ rescue.service - Rescue Shell
     Loaded: loaded (/usr/lib/systemd/system/rescue.service; static)
     Active: inactive (dead)
       Docs: man:sulogin(8)

○ rpc-gssd.service - RPC security service for NFS client and server
     Loaded: loaded (/usr/lib/systemd/system/rpc-gssd.service; static)
     Active: inactive (dead)
  Condition: start condition unmet at Mon 2026-02-16 18:03:48 UTC; 13min ago

Feb 16 18:03:46 np0000155647 systemd[1]: rpc-gssd.service - RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Feb 16 18:03:47 np0000155647 systemd[1]: rpc-gssd.service - RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Feb 16 18:03:48 np0000155647 systemd[1]: rpc-gssd.service - RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).

● rpc-statd-notify.service - Notify NFS peers of a restart
     Loaded: loaded (/usr/lib/systemd/system/rpc-statd-notify.service; static)
     Active: active (exited) since Mon 2026-02-16 18:03:46 UTC; 13min ago
        CPU: 13ms

Feb 16 18:03:46 np0000155647 systemd[1]: Starting rpc-statd-notify.service - Notify NFS peers of a restart...
Feb 16 18:03:46 np0000155647 sm-notify[54553]: Version 2.6.4 starting
Feb 16 18:03:46 np0000155647 systemd[1]: Started rpc-statd-notify.service - Notify NFS peers of a restart.

● rpc-statd.service - NFS status monitor for NFSv2/3 locking.
     Loaded: loaded (/usr/lib/systemd/system/rpc-statd.service; static)
     Active: active (running) since Mon 2026-02-16 18:03:47 UTC; 13min ago
   Main PID: 54648 (rpc.statd)
      Tasks: 1 (limit: 77077)
     Memory: 508.0K (peak: 1.5M)
        CPU: 16ms
     CGroup: /system.slice/rpc-statd.service
             └─54648 /usr/sbin/rpc.statd

Feb 16 18:03:47 np0000155647 systemd[1]: Starting rpc-statd.service - NFS status monitor for NFSv2/3 locking....
Feb 16 18:03:47 np0000155647 rpc.statd[54648]: Version 2.6.4 starting
Feb 16 18:03:47 np0000155647 rpc.statd[54648]: Flags: TI-RPC
Feb 16 18:03:47 np0000155647 rpc.statd[54648]: Failed to read /var/lib/nfs/state: Success
Feb 16 18:03:47 np0000155647 rpc.statd[54648]: Initializing NSM state
Feb 16 18:03:47 np0000155647 systemd[1]: Started rpc-statd.service - NFS status monitor for NFSv2/3 locking..

○ rpc-svcgssd.service - RPC security service for NFS server
     Loaded: loaded (/usr/lib/systemd/system/rpc-svcgssd.service; static)
     Active: inactive (dead)
  Condition: start condition unmet at Mon 2026-02-16 18:03:48 UTC; 13min ago

Feb 16 18:03:47 np0000155647 systemd[1]: rpc-svcgssd.service - RPC security service for NFS server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Feb 16 18:03:48 np0000155647 systemd[1]: rpc-svcgssd.service - RPC security service for NFS server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).

● rpcbind.service - RPC bind portmap service
     Loaded: loaded (/usr/lib/systemd/system/rpcbind.service; enabled; preset: enabled)
     Active: active (running) since Mon 2026-02-16 18:03:42 UTC; 13min ago
TriggeredBy: ● rpcbind.socket
       Docs: man:rpcbind(8)
   Main PID: 54015 (rpcbind)
      Tasks: 1 (limit: 77077)
     Memory: 604.0K (peak: 1.5M)
        CPU: 19ms
     CGroup: /system.slice/rpcbind.service
             └─54015 /sbin/rpcbind -f -w

Feb 16 18:03:42 np0000155647 systemd[1]: Starting rpcbind.service - RPC bind portmap service...
Feb 16 18:03:42 np0000155647 systemd[1]: Started rpcbind.service - RPC bind portmap service.

● rsyslog.service - System Logging Service
     Loaded: loaded (/usr/lib/systemd/system/rsyslog.service; enabled; preset: enabled)
     Active: active (running) since Mon 2026-02-16 18:15:56 UTC; 1min 3s ago
TriggeredBy: ● syslog.socket
       Docs: man:rsyslogd(8)
             man:rsyslog.conf(5)
             https://www.rsyslog.com/doc/
   Main PID: 125254 (rsyslogd)
      Tasks: 4 (limit: 77077)
     Memory: 3.6M (peak: 3.8M)
        CPU: 460ms
     CGroup: /system.slice/rsyslog.service
             └─125254 /usr/sbin/rsyslogd -n -iNONE

Feb 16 18:15:56 np0000155647 systemd[1]: Starting rsyslog.service - System Logging Service...
Feb 16 18:15:56 np0000155647 rsyslogd[125254]: imuxsock: Acquired UNIX socket '/run/systemd/journal/syslog' (fd 3) from systemd. [v8.2312.0]
Feb 16 18:15:56 np0000155647 rsyslogd[125254]: rsyslogd's groupid changed to 103
Feb 16 18:15:56 np0000155647 systemd[1]: Started rsyslog.service - System Logging Service.
Feb 16 18:15:56 np0000155647 rsyslogd[125254]: rsyslogd's userid changed to 103
Feb 16 18:15:56 np0000155647 rsyslogd[125254]: [origin software="rsyslogd" swVersion="8.2312.0" x-pid="125254" x-info="https://www.rsyslog.com"] start

● rtslib-fb-targetctl.service - Restore LIO kernel target configuration
     Loaded: loaded (/usr/lib/systemd/system/rtslib-fb-targetctl.service; enabled; preset: enabled)
     Active: active (exited) since Mon 2026-02-16 18:01:16 UTC; 15min ago
   Main PID: 36156 (code=exited, status=0/SUCCESS)
        CPU: 100ms

Feb 16 18:01:16 np0000155647 systemd[1]: Starting rtslib-fb-targetctl.service - Restore LIO kernel target configuration...
Feb 16 18:01:16 np0000155647 target[36156]: No saved config file at /etc/rtslib-fb-target/saveconfig.json, ok, exiting
Feb 16 18:01:16 np0000155647 systemd[1]: Finished rtslib-fb-targetctl.service - Restore LIO kernel target configuration.
○ samba-ad-dc.service - Samba AD Daemon
     Loaded: loaded (/usr/lib/systemd/system/samba-ad-dc.service; enabled; preset: enabled)
     Active: inactive (dead) (Result: exec-condition) since Mon 2026-02-16 18:03:52 UTC; 13min ago
  Condition: start condition unmet at Mon 2026-02-16 18:03:52 UTC; 13min ago
       Docs: man:samba(8)
             man:samba(7)
             man:smb.conf(5)
        CPU: 28ms

Feb 16 18:03:52 np0000155647 systemd[1]: Starting samba-ad-dc.service - Samba AD Daemon...
Feb 16 18:03:52 np0000155647 systemd[1]: samba-ad-dc.service: Skipped due to 'exec-condition'.
Feb 16 18:03:52 np0000155647 systemd[1]: Condition check resulted in samba-ad-dc.service - Samba AD Daemon being skipped.

● serial-getty@ttyS0.service - Serial Getty on ttyS0
     Loaded: loaded (/usr/lib/systemd/system/serial-getty@.service; enabled-runtime; preset: enabled)
     Active: active (running) since Mon 2026-02-16 17:51:10 UTC; 25min ago
       Docs: man:agetty(8)
             man:systemd-getty-generator(8)
             https://0pointer.de/blog/projects/serial-console.html
   Main PID: 727 (agetty)
      Tasks: 1 (limit: 77077)
     Memory: 248.0K (peak: 1.8M)
        CPU: 19ms
     CGroup: /system.slice/system-serial\x2dgetty.slice/serial-getty@ttyS0.service
             └─727 /sbin/agetty -o "-p -- \\u" --keep-baud 115200,57600,38400,9600 - vt220

Feb 16 17:51:10 np0000155647 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.

● smbd.service - Samba SMB Daemon
     Loaded: loaded (/usr/lib/systemd/system/smbd.service; enabled; preset: enabled)
     Active: active (running) since Mon 2026-02-16 18:03:51 UTC; 13min ago
       Docs: man:smbd(8)
             man:samba(7)
             man:smb.conf(5)
   Main PID: 55156 (smbd)
     Status: "smbd: ready to serve connections..."
      Tasks: 3 (limit: 77077)
     Memory: 8.7M (peak: 9.1M)
        CPU: 101ms
     CGroup: /system.slice/smbd.service
             ├─55156 /usr/sbin/smbd --foreground --no-process-group
             ├─55160 "smbd: notifyd" .
             └─55161 "smbd: cleanupd "

Feb 16 18:03:50 np0000155647 systemd[1]: Starting smbd.service - Samba SMB Daemon...
Feb 16 18:03:50 np0000155647 (smbd)[55156]: smbd.service: Referenced but unset environment variable evaluates to an empty string: SMBDOPTIONS
Feb 16 18:03:51 np0000155647 systemd[1]: Started smbd.service - Samba SMB Daemon.

● ssh-keygen.service - OpenSSH Server Key Generation
     Loaded: loaded (/usr/lib/systemd/system/ssh-keygen.service; enabled; preset: enabled)
     Active: active (exited) since Mon 2026-02-16 17:51:10 UTC; 25min ago
   Main PID: 711 (code=exited, status=0/SUCCESS)
        CPU: 70ms

Feb 16 17:51:10 np0000155647 runtime-ssh-host-keys.sh[711]: + for key in dsa ecdsa ed25519 rsa
Feb 16 17:51:10 np0000155647 runtime-ssh-host-keys.sh[711]: + FILE=/etc/ssh/ssh_host_ecdsa_key
Feb 16 17:51:10 np0000155647 runtime-ssh-host-keys.sh[711]: + '[' -f /etc/ssh/ssh_host_ecdsa_key ']'
Feb 16 17:51:10 np0000155647 runtime-ssh-host-keys.sh[711]: + for key in dsa ecdsa ed25519 rsa
Feb 16 17:51:10 np0000155647 runtime-ssh-host-keys.sh[711]: + FILE=/etc/ssh/ssh_host_ed25519_key
Feb 16 17:51:10 np0000155647 runtime-ssh-host-keys.sh[711]: + '[' -f /etc/ssh/ssh_host_ed25519_key ']'
Feb 16 17:51:10 np0000155647 runtime-ssh-host-keys.sh[711]: + for key in dsa ecdsa ed25519 rsa
Feb 16 17:51:10 np0000155647 runtime-ssh-host-keys.sh[711]: + FILE=/etc/ssh/ssh_host_rsa_key
Feb 16 17:51:10 np0000155647 runtime-ssh-host-keys.sh[711]: + '[' -f /etc/ssh/ssh_host_rsa_key ']'
Feb 16 17:51:10 np0000155647 systemd[1]: Finished ssh-keygen.service - OpenSSH Server Key Generation.
● ssh.service - OpenBSD Secure Shell server
     Loaded: loaded (/usr/lib/systemd/system/ssh.service; enabled; preset: enabled)
     Active: active (running) since Mon 2026-02-16 17:51:10 UTC; 25min ago
TriggeredBy: ● ssh.socket
       Docs: man:sshd(8)
             man:sshd_config(5)
   Main PID: 746 (sshd)
      Tasks: 1 (limit: 77077)
     Memory: 4.5M (peak: 8.7M)
        CPU: 367ms
     CGroup: /system.slice/ssh.service
             └─746 "sshd: /usr/sbin/sshd -D [listener] 0 of 10-100 startups"

Feb 16 17:51:10 np0000155647 sshd[793]: Unable to negotiate with 162.253.55.78 port 36612: no matching host key type found. Their offer: ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@openssh.com [preauth]
Feb 16 17:51:10 np0000155647 sshd[789]: Connection closed by 162.253.55.78 port 36600 [preauth]
Feb 16 17:51:10 np0000155647 sshd[797]: Connection closed by 162.253.55.78 port 36640 [preauth]
Feb 16 17:51:11 np0000155647 sshd[799]: Unable to negotiate with 162.253.55.78 port 36648: no matching host key type found. Their offer: ssh-rsa,ssh-rsa-cert-v01@openssh.com [preauth]
Feb 16 17:51:11 np0000155647 sshd[795]: Connection closed by 162.253.55.78 port 36624 [preauth]
Feb 16 17:51:35 np0000155647 sshd[828]: Accepted publickey for zuul from 162.253.55.78 port 60332 ssh2: RSA SHA256:5JQ51Kd3ktdyza7fRzhNtLcnbLBOADmSEZNw9DAuVyA
Feb 16 17:51:35 np0000155647 sshd[828]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 16 17:51:47 np0000155647 sshd[1056]: Accepted publickey for zuul from 162.253.55.78 port 48808 ssh2: RSA SHA256:JPhKqrGg2vWyLfIOjpGC7DqIER8fJ7Oksn1/nDXaJBU
Feb 16 17:51:47 np0000155647 sshd[1056]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 16 18:10:06 np0000155647 sshd[86390]: fatal: userauth_passwd: parse packet: invalid format [preauth]

○ ssl-cert.service - Generate snakeoil SSL keypair
     Loaded: loaded (/usr/lib/systemd/system/ssl-cert.service; enabled; preset: enabled)
     Active: inactive (dead)
  Condition: start condition unmet at Mon 2026-02-16 17:56:16 UTC; 20min ago

Feb 16 17:56:16 np0000155647 systemd[1]: ssl-cert.service - Generate snakeoil SSL keypair was skipped because of an unmet condition check (ConditionPathExists=!/etc/ssl/private/ssl-cert-snakeoil.key).

● stack-volumes-lvmdriver-1-backing-file.service - Activate LVM backing file /opt/stack/data/stack-volumes-lvmdriver-1-backing-file
     Loaded: loaded (/etc/systemd/system/stack-volumes-lvmdriver-1-backing-file.service; enabled; preset: enabled)
     Active: active (exited) since Mon 2026-02-16 18:08:54 UTC; 8min ago
   Main PID: 80691 (code=exited, status=0/SUCCESS)
        CPU: 10ms

Feb 16 18:08:54 np0000155647 systemd[1]: Starting stack-volumes-lvmdriver-1-backing-file.service - Activate LVM backing file /opt/stack/data/stack-volumes-lvmdriver-1-backing-file...
Feb 16 18:08:54 np0000155647 losetup[80691]: /dev/loop0
Feb 16 18:08:54 np0000155647 systemd[1]: Finished stack-volumes-lvmdriver-1-backing-file.service - Activate LVM backing file /opt/stack/data/stack-volumes-lvmdriver-1-backing-file.

● sysfsutils.service - Apply sysfs variables
     Loaded: loaded (/usr/lib/systemd/system/sysfsutils.service; enabled; preset: enabled)
     Active: active (exited) since Mon 2026-02-16 17:56:10 UTC; 20min ago
       Docs: man:sysfs.conf(5)
             man:systool(1)
   Main PID: 11663 (code=exited, status=0/SUCCESS)
        CPU: 20ms

Feb 16 17:56:10 np0000155647 systemd[1]: Starting sysfsutils.service - Apply sysfs variables...
Feb 16 17:56:10 np0000155647 sysfsutils[11663]: * Setting sysfs variables......
Feb 16 17:56:10 np0000155647 sysfsutils[11663]: ...done.
Feb 16 17:56:10 np0000155647 systemd[1]: Finished sysfsutils.service - Apply sysfs variables.
○ systemd-ask-password-console.service - Dispatch Password Requests to Console
     Loaded: loaded (/usr/lib/systemd/system/systemd-ask-password-console.service; static)
     Active: inactive (dead)
TriggeredBy: ● systemd-ask-password-console.path
       Docs: man:systemd-ask-password-console.service(8)

○ systemd-ask-password-wall.service - Forward Password Requests to Wall
     Loaded: loaded (/usr/lib/systemd/system/systemd-ask-password-wall.service; static)
     Active: inactive (dead)
TriggeredBy: ● systemd-ask-password-wall.path
       Docs: man:systemd-ask-password-wall.service(8)

○ systemd-battery-check.service - Check battery level during early boot
     Loaded: loaded (/usr/lib/systemd/system/systemd-battery-check.service; static)
     Active: inactive (dead)
       Docs: man:systemd-battery-check.service(8)

● systemd-binfmt.service - Set Up Additional Binary Formats
     Loaded: loaded (/usr/lib/systemd/system/systemd-binfmt.service; static)
     Active: active (exited) since Mon 2026-02-16 17:56:51 UTC; 20min ago
       Docs: man:systemd-binfmt.service(8)
             man:binfmt.d(5)
             https://docs.kernel.org/admin-guide/binfmt-misc.html
             https://www.freedesktop.org/wiki/Software/systemd/APIFileSystems
   Main PID: 17320 (code=exited, status=0/SUCCESS)
        CPU: 14ms

Feb 16 17:56:51 np0000155647 systemd[1]: Starting systemd-binfmt.service - Set Up Additional Binary Formats...
Feb 16 17:56:51 np0000155647 systemd[1]: Finished systemd-binfmt.service - Set Up Additional Binary Formats.

○ systemd-bsod.service - Displays emergency message in full screen.
     Loaded: loaded (/usr/lib/systemd/system/systemd-bsod.service; static)
     Active: inactive (dead)
       Docs: man:systemd-bsod.service(8)

● systemd-firstboot.service - First Boot Wizard
     Loaded: loaded (/usr/lib/systemd/system/systemd-firstboot.service; static)
     Active: active (exited) since Mon 2026-02-16 17:51:03 UTC; 25min ago
       Docs: man:systemd-firstboot(1)
   Main PID: 458 (code=exited, status=0/SUCCESS)
        CPU: 19ms

Feb 16 17:51:03 ubuntu systemd[1]: Starting systemd-firstboot.service - First Boot Wizard...
Feb 16 17:51:03 ubuntu systemd[1]: Finished systemd-firstboot.service - First Boot Wizard.

● systemd-fsck-root.service - File System Check on Root Device
     Loaded: loaded (/usr/lib/systemd/system/systemd-fsck-root.service; enabled-runtime; preset: enabled)
     Active: active (exited) since Mon 2026-02-16 17:51:03 UTC; 25min ago
       Docs: man:systemd-fsck-root.service(8)
   Main PID: 395 (code=exited, status=0/SUCCESS)
        CPU: 28ms

Feb 16 17:51:03 ubuntu systemd-fsck[400]: cloudimg-rootfs: clean, 35149/1070256 files, 745865/1069920 blocks
Feb 16 17:51:03 ubuntu systemd[1]: Finished systemd-fsck-root.service - File System Check on Root Device.
Notice: journal has been rotated since unit was started, output may be incomplete.

○ systemd-fsckd.service - File System Check Daemon to report status
     Loaded: loaded (/usr/lib/systemd/system/systemd-fsckd.service; static)
     Active: inactive (dead) since Mon 2026-02-16 17:51:33 UTC; 25min ago
   Duration: 30.029s
TriggeredBy: ● systemd-fsckd.socket
       Docs: man:systemd-fsckd.service(8)
   Main PID: 419 (code=exited, status=0/SUCCESS)
        CPU: 23ms

Feb 16 17:51:03 ubuntu systemd[1]: Started systemd-fsckd.service - File System Check Daemon to report status.
Feb 16 17:51:33 np0000155647 systemd[1]: systemd-fsckd.service: Deactivated successfully.
○ systemd-hibernate-resume.service - Resume from hibernation
     Loaded: loaded (/usr/lib/systemd/system/systemd-hibernate-resume.service; static)
     Active: inactive (dead)
       Docs: man:systemd-hibernate-resume.service(8)

○ systemd-hibernate.service - System Hibernate
     Loaded: loaded (/usr/lib/systemd/system/systemd-hibernate.service; static)
     Active: inactive (dead)
       Docs: man:systemd-hibernate.service(8)

○ systemd-hwdb-update.service - Rebuild Hardware Database
     Loaded: loaded (/usr/lib/systemd/system/systemd-hwdb-update.service; static)
     Active: inactive (dead)
  Condition: start condition unmet at Mon 2026-02-16 17:51:03 UTC; 25min ago
       Docs: man:hwdb(7)
             man:systemd-hwdb(8)

Feb 16 17:51:03 ubuntu systemd[1]: systemd-hwdb-update.service - Rebuild Hardware Database was skipped because no trigger condition checks were met.

○ systemd-hybrid-sleep.service - System Hybrid Suspend+Hibernate
     Loaded: loaded (/usr/lib/systemd/system/systemd-hybrid-sleep.service; static)
     Active: inactive (dead)
       Docs: man:systemd-hybrid-sleep.service(8)

○ systemd-initctl.service - initctl Compatibility Daemon
     Loaded: loaded (/usr/lib/systemd/system/systemd-initctl.service; static)
     Active: inactive (dead)
TriggeredBy: ● systemd-initctl.socket
       Docs: man:systemd-initctl.service(8)

● systemd-journal-catalog-update.service - Rebuild Journal Catalog
     Loaded: loaded (/usr/lib/systemd/system/systemd-journal-catalog-update.service; static)
     Active: active (exited) since Mon 2026-02-16 17:51:03 UTC; 25min ago
       Docs: man:systemd-journald.service(8)
             man:journald.conf(5)
   Main PID: 459 (code=exited, status=0/SUCCESS)
        CPU: 26ms

Feb 16 17:51:03 ubuntu systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Feb 16 17:51:03 ubuntu systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
● systemd-journal-flush.service - Flush Journal to Persistent Storage
     Loaded: loaded (/usr/lib/systemd/system/systemd-journal-flush.service; static)
     Active: active (exited) since Mon 2026-02-16 17:51:03 UTC; 25min ago
       Docs: man:systemd-journald.service(8)
             man:journald.conf(5)
   Main PID: 438 (code=exited, status=0/SUCCESS)
        CPU: 16ms

Feb 16 17:51:03 ubuntu systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 16 17:51:03 ubuntu systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.

● systemd-journald.service - Journal Service
     Loaded: loaded (/usr/lib/systemd/system/systemd-journald.service; static)
    Drop-In: /usr/lib/systemd/system/systemd-journald.service.d
             └─nice.conf
     Active: active (running) since Mon 2026-02-16 17:57:06 UTC; 19min ago
TriggeredBy: ○ systemd-journald-audit.socket
             ● systemd-journald.socket
             ● systemd-journald-dev-log.socket
       Docs: man:systemd-journald.service(8)
             man:journald.conf(5)
   Main PID: 19011 (systemd-journal)
     Status: "Processing requests..."
      Tasks: 1 (limit: 77077)
   FD Store: 59 (limit: 4224)
     Memory: 66.4M (peak: 66.6M)
        CPU: 8.855s
     CGroup: /system.slice/systemd-journald.service
             └─19011 /usr/lib/systemd/systemd-journald

Feb 16 17:57:06 np0000155647 systemd-journald[19011]: Collecting audit messages is disabled.
Feb 16 17:57:06 np0000155647 systemd-journald[19011]: Journal started
Feb 16 17:57:06 np0000155647 systemd-journald[19011]: System Journal (/var/log/journal/3e25737a9583453da6a1018921f2d60f) is 24.0M, max 4.0G, 3.9G free.

● systemd-logind.service - User Login Management
     Loaded: loaded (/usr/lib/systemd/system/systemd-logind.service; static)
    Drop-In: /usr/lib/systemd/system/systemd-logind.service.d
             └─dbus.conf
     Active: active (running) since Mon 2026-02-16 17:51:10 UTC; 25min ago
       Docs: man:sd-login(3)
             man:systemd-logind.service(8)
             man:logind.conf(5)
             man:org.freedesktop.login1(5)
   Main PID: 712 (systemd-logind)
     Status: "Processing requests..."
      Tasks: 1 (limit: 77077)
   FD Store: 0 (limit: 512)
     Memory: 1.8M (peak: 2.1M)
        CPU: 2.024s
     CGroup: /system.slice/systemd-logind.service
             └─712 /usr/lib/systemd/systemd-logind

Feb 16 17:51:10 np0000155647 systemd[1]: Starting systemd-logind.service - User Login Management...
Feb 16 17:51:10 np0000155647 systemd-logind[712]: New seat seat0.
Feb 16 17:51:10 np0000155647 systemd-logind[712]: Watching system buttons on /dev/input/event0 (Power Button)
Feb 16 17:51:10 np0000155647 systemd-logind[712]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Feb 16 17:51:10 np0000155647 systemd[1]: Started systemd-logind.service - User Login Management.
Feb 16 17:51:35 np0000155647 systemd-logind[712]: New session 1 of user zuul.
Feb 16 17:51:47 np0000155647 systemd-logind[712]: New session 3 of user zuul.
Feb 16 17:52:09 np0000155647 systemd-logind[712]: Session 3 logged out. Waiting for processes to exit.
Feb 16 17:52:09 np0000155647 systemd-logind[712]: Removed session 3.

● systemd-machine-id-commit.service - Commit a transient machine-id on disk
     Loaded: loaded (/usr/lib/systemd/system/systemd-machine-id-commit.service; static)
     Active: active (exited) since Mon 2026-02-16 17:51:03 UTC; 25min ago
       Docs: man:systemd-machine-id-commit.service(8)
   Main PID: 480 (code=exited, status=0/SUCCESS)
        CPU: 21ms

Feb 16 17:51:03 ubuntu systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Feb 16 17:51:03 ubuntu systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.

● systemd-machined.service - Virtual Machine and Container Registration Service
     Loaded: loaded (/usr/lib/systemd/system/systemd-machined.service; static)
     Active: active (running) since Mon 2026-02-16 18:02:19 UTC; 14min ago
       Docs: man:systemd-machined.service(8)
             man:org.freedesktop.machine1(5)
   Main PID: 42877 (systemd-machine)
     Status: "Processing requests..."
      Tasks: 1 (limit: 77077)
     Memory: 1.2M (peak: 1.8M)
        CPU: 1.400s
     CGroup: /system.slice/systemd-machined.service
             └─42877 /usr/lib/systemd/systemd-machined

Feb 16 18:02:19 np0000155647 systemd[1]: Starting systemd-machined.service - Virtual Machine and Container Registration Service...
Feb 16 18:02:19 np0000155647 systemd[1]: Started systemd-machined.service - Virtual Machine and Container Registration Service.

● systemd-modules-load.service - Load Kernel Modules
     Loaded: loaded (/usr/lib/systemd/system/systemd-modules-load.service; static)
     Active: active (exited) since Mon 2026-02-16 17:51:03 UTC; 25min ago
       Docs: man:systemd-modules-load.service(8)
             man:modules-load.d(5)
   Main PID: 397 (code=exited, status=0/SUCCESS)
        CPU: 19ms

Feb 16 17:51:03 ubuntu systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.

● systemd-networkd-wait-online.service - Wait for Network to be Configured
     Loaded: loaded (/usr/lib/systemd/system/systemd-networkd-wait-online.service; enabled-runtime; preset: enabled)
    Drop-In: /run/systemd/system/systemd-networkd-wait-online.service.d
             └─10-netplan.conf
     Active: active (exited) since Mon 2026-02-16 17:51:06 UTC; 25min ago
       Docs: man:systemd-networkd-wait-online.service(8)
   Main PID: 613 (code=exited, status=0/SUCCESS)
        CPU: 18ms

Feb 16 17:51:04 np0000155647 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Feb 16 17:51:06 np0000155647 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.

● systemd-networkd.service - Network Configuration
     Loaded: loaded (/usr/lib/systemd/system/systemd-networkd.service; enabled-runtime; preset: enabled)
     Active: active (running) since Mon 2026-02-16 17:51:04 UTC; 25min ago
TriggeredBy: ● systemd-networkd.socket
       Docs: man:systemd-networkd.service(8)
             man:org.freedesktop.network1(5)
   Main PID: 602 (systemd-network)
     Status: "Processing requests..."
      Tasks: 1 (limit: 77077)
   FD Store: 0 (limit: 512)
     Memory: 3.3M (peak: 3.6M)
        CPU: 134ms
     CGroup: /system.slice/systemd-networkd.service
             └─602 /usr/lib/systemd/systemd-networkd

Feb 16 18:02:20 np0000155647 systemd-networkd[602]: virbr0: Link UP
Feb 16 18:13:04 np0000155647 systemd-networkd[602]: br-ex: Link UP
Feb 16 18:13:04 np0000155647 systemd-networkd[602]: br-ex: Gained carrier
Feb 16 18:13:06 np0000155647 systemd-networkd[602]: br-ex: Gained IPv6LL
Feb 16 18:16:18 np0000155647 systemd-networkd[602]: tapb44d09fe-67: Link UP
Feb 16 18:16:18 np0000155647 systemd-networkd[602]: tapb44d09fe-67: Gained carrier
Feb 16 18:16:19 np0000155647 systemd-networkd[602]: tapb44d09fe-67: Gained IPv6LL
Feb 16 18:16:19 np0000155647 systemd-networkd[602]: tap1ddaf2af-80: Link UP
Feb 16 18:16:19 np0000155647 systemd-networkd[602]: tap1ddaf2af-80: Gained carrier
Feb 16 18:16:21 np0000155647 systemd-networkd[602]: tap1ddaf2af-80: Gained IPv6LL

○ systemd-pcrmachine.service - TPM2 PCR Machine ID Measurement
     Loaded: loaded (/usr/lib/systemd/system/systemd-pcrmachine.service; static)
     Active: inactive (dead)
  Condition: start condition unmet at Mon 2026-02-16 17:51:03 UTC; 25min ago
       Docs: man:systemd-pcrmachine.service(8)

○ systemd-pcrphase-initrd.service - TPM2 PCR Barrier (initrd)
     Loaded: loaded (/usr/lib/systemd/system/systemd-pcrphase-initrd.service; static)
     Active: inactive (dead)
       Docs: man:systemd-pcrphase-initrd.service(8)

○ systemd-pcrphase-sysinit.service - TPM2 PCR Barrier (Initialization)
     Loaded: loaded (/usr/lib/systemd/system/systemd-pcrphase-sysinit.service; static)
     Active: inactive (dead)
  Condition: start condition unmet at Mon 2026-02-16 17:51:10 UTC; 25min ago
       Docs: man:systemd-pcrphase-sysinit.service(8)

Feb 16 17:51:10 np0000155647 systemd[1]: systemd-pcrphase-sysinit.service - TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionSecurity=measured-uki).
○ systemd-pcrphase.service - TPM2 PCR Barrier (User)
     Loaded: loaded (/usr/lib/systemd/system/systemd-pcrphase.service; static)
     Active: inactive (dead)
  Condition: start condition unmet at Mon 2026-02-16 17:51:10 UTC; 25min ago
       Docs: man:systemd-pcrphase.service(8)

Feb 16 17:51:10 np0000155647 systemd[1]: systemd-pcrphase.service - TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionSecurity=measured-uki).

○ systemd-pstore.service - Platform Persistent Storage Archival
     Loaded: loaded (/usr/lib/systemd/system/systemd-pstore.service; enabled; preset: enabled)
     Active: inactive (dead)
  Condition: start condition unmet at Mon 2026-02-16 17:51:03 UTC; 25min ago
       Docs: man:systemd-pstore(8)

Feb 16 17:51:03 ubuntu systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).

● systemd-random-seed.service - Load/Save OS Random Seed
     Loaded: loaded (/usr/lib/systemd/system/systemd-random-seed.service; static)
     Active: active (exited) since Mon 2026-02-16 17:51:03 UTC; 25min ago
       Docs: man:systemd-random-seed.service(8)
             man:random(4)
   Main PID: 439 (code=exited, status=0/SUCCESS)
        CPU: 16ms

Feb 16 17:51:03 ubuntu systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 16 17:51:03 ubuntu systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.

● systemd-remount-fs.service - Remount Root and Kernel File Systems
     Loaded: loaded (/usr/lib/systemd/system/systemd-remount-fs.service; enabled-runtime; preset: enabled)
     Active: active (exited) since Mon 2026-02-16 17:51:03 UTC; 25min ago
       Docs: man:systemd-remount-fs.service(8)
             https://www.freedesktop.org/wiki/Software/systemd/APIFileSystems
   Main PID: 420 (code=exited, status=0/SUCCESS)
        CPU: 24ms

Feb 16 17:51:03 ubuntu systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 16 17:51:03 ubuntu systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.

○ systemd-repart.service - Repartition Root Disk
     Loaded: loaded (/usr/lib/systemd/system/systemd-repart.service; static)
     Active: inactive (dead)
  Condition: start condition unmet at Mon 2026-02-16 17:51:03 UTC; 25min ago
       Docs: man:systemd-repart.service(8)

Feb 16 17:51:03 ubuntu systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.

● systemd-resolved.service - Network Name Resolution
     Loaded: loaded (/usr/lib/systemd/system/systemd-resolved.service; enabled; preset: enabled)
     Active: active (running) since Mon 2026-02-16 17:51:03 UTC; 25min ago
       Docs: man:systemd-resolved.service(8)
             man:org.freedesktop.resolve1(5)
             https://www.freedesktop.org/wiki/Software/systemd/writing-network-configuration-managers
             https://www.freedesktop.org/wiki/Software/systemd/writing-resolver-clients
   Main PID: 460 (systemd-resolve)
     Status: "Processing requests..."
      Tasks: 1 (limit: 77077)
     Memory: 7.0M (peak: 7.7M)
        CPU: 494ms
     CGroup: /system.slice/systemd-resolved.service
             └─460 /usr/lib/systemd/systemd-resolved

Feb 16 17:51:03 ubuntu systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 16 17:51:03 ubuntu systemd-resolved[460]: Positive Trust Anchors:
Feb 16 17:51:03 ubuntu systemd-resolved[460]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 16 17:51:03 ubuntu systemd-resolved[460]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Feb 16 17:51:03 ubuntu systemd-resolved[460]: Using system hostname 'ubuntu'.
Feb 16 17:51:03 ubuntu systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 16 17:51:04 np0000155647 systemd-resolved[460]: System hostname changed to 'np0000155647'.
Feb 16 17:51:40 np0000155647 systemd-resolved[460]: Clock change detected. Flushing caches.
○ systemd-rfkill.service - Load/Save RF Kill Switch Status
     Loaded: loaded (/usr/lib/systemd/system/systemd-rfkill.service; static)
     Active: inactive (dead)
TriggeredBy: ● systemd-rfkill.socket
       Docs: man:systemd-rfkill.service(8)

○ systemd-soft-reboot.service - Reboot System Userspace
     Loaded: loaded (/usr/lib/systemd/system/systemd-soft-reboot.service; static)
     Active: inactive (dead)
       Docs: man:systemd-soft-reboot.service(8)

○ systemd-suspend-then-hibernate.service - System Suspend then Hibernate
     Loaded: loaded (/usr/lib/systemd/system/systemd-suspend-then-hibernate.service; static)
     Active: inactive (dead)
       Docs: man:systemd-suspend-then-hibernate.service(8)

○ systemd-suspend.service - System Suspend
     Loaded: loaded (/usr/lib/systemd/system/systemd-suspend.service; static)
     Active: inactive (dead)
       Docs: man:systemd-suspend.service(8)

● systemd-sysctl.service - Apply Kernel Variables
     Loaded: loaded (/usr/lib/systemd/system/systemd-sysctl.service; static)
     Active: active (exited) since Mon 2026-02-16 17:51:03 UTC; 25min ago
       Docs: man:systemd-sysctl.service(8)
             man:sysctl.d(5)
   Main PID: 421 (code=exited, status=0/SUCCESS)
        CPU: 21ms

Feb 16 17:51:03 ubuntu systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 16 17:51:03 ubuntu systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.

○ systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/
     Loaded: loaded (/usr/lib/systemd/system/systemd-sysext.service; disabled; preset: enabled)
     Active: inactive (dead)
       Docs: man:systemd-sysext.service(8)

● systemd-sysusers.service - Create System Users
     Loaded: loaded (/usr/lib/systemd/system/systemd-sysusers.service; static)
     Active: active (exited) since Mon 2026-02-16 17:51:03 UTC; 25min ago
       Docs: man:sysusers.d(5)
             man:systemd-sysusers.service(8)
   Main PID: 440 (code=exited, status=0/SUCCESS)
        CPU: 23ms

Feb 16 17:51:03 ubuntu systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 16 17:51:03 ubuntu systemd[1]: Finished systemd-sysusers.service - Create System Users.

● systemd-timesyncd.service - Network Time Synchronization
     Loaded: loaded (/usr/lib/systemd/system/systemd-timesyncd.service; enabled; preset: enabled)
     Active: active (running) since Mon 2026-02-16 17:51:03 UTC; 25min ago
       Docs: man:systemd-timesyncd.service(8)
   Main PID: 463 (systemd-timesyn)
     Status: "Contacted time server [2620:2d:4000:1::41]:123 (ntp.ubuntu.com)."
      Tasks: 2 (limit: 77077)
     Memory: 1.4M (peak: 2.2M)
        CPU: 86ms
     CGroup: /system.slice/systemd-timesyncd.service
             └─463 /usr/lib/systemd/systemd-timesyncd

Feb 16 17:51:04 np0000155647 systemd-timesyncd[463]: Network configuration changed, trying to establish connection.
Feb 16 17:51:04 np0000155647 systemd-timesyncd[463]: Network configuration changed, trying to establish connection.
Feb 16 17:51:06 np0000155647 systemd-timesyncd[463]: Network configuration changed, trying to establish connection.
Feb 16 17:51:07 np0000155647 systemd-timesyncd[463]: Network configuration changed, trying to establish connection.
Feb 16 17:51:07 np0000155647 systemd-timesyncd[463]: Network configuration changed, trying to establish connection.
Feb 16 17:51:08 np0000155647 systemd-timesyncd[463]: Network configuration changed, trying to establish connection.
Feb 16 17:51:08 np0000155647 systemd-timesyncd[463]: Network configuration changed, trying to establish connection.
Feb 16 17:51:09 np0000155647 systemd-timesyncd[463]: Network configuration changed, trying to establish connection.
Feb 16 17:51:40 np0000155647 systemd-timesyncd[463]: Contacted time server [2620:2d:4000:1::41]:123 (ntp.ubuntu.com).
Feb 16 17:51:40 np0000155647 systemd-timesyncd[463]: Initial clock synchronization to Mon 2026-02-16 17:51:40.445755 UTC.
○ systemd-tmpfiles-clean.service - Cleanup of Temporary Directories
     Loaded: loaded (/usr/lib/systemd/system/systemd-tmpfiles-clean.service; static)
     Active: inactive (dead) since Mon 2026-02-16 18:06:06 UTC; 10min ago
TriggeredBy: ● systemd-tmpfiles-clean.timer
       Docs: man:tmpfiles.d(5)
             man:systemd-tmpfiles(8)
   Main PID: 61914 (code=exited, status=0/SUCCESS)
        CPU: 20ms

Feb 16 18:06:06 np0000155647 systemd[1]: Starting systemd-tmpfiles-clean.service - Cleanup of Temporary Directories...
Feb 16 18:06:06 np0000155647 systemd-tmpfiles[61914]: /etc/tmpfiles.d/uwsgi.conf:1: Line references path below legacy directory /var/run/, updating /var/run/uwsgi → /run/uwsgi; please update the tmpfiles.d/ drop-in file accordingly.
Feb 16 18:06:06 np0000155647 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Feb 16 18:06:06 np0000155647 systemd[1]: Finished systemd-tmpfiles-clean.service - Cleanup of Temporary Directories.

● systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully
     Loaded: loaded (/usr/lib/systemd/system/systemd-tmpfiles-setup-dev-early.service; static)
     Active: active (exited) since Mon 2026-02-16 17:51:03 UTC; 25min ago
       Docs: man:tmpfiles.d(5)
             man:systemd-tmpfiles(8)
   Main PID: 423 (code=exited, status=0/SUCCESS)
        CPU: 28ms

Feb 16 17:51:03 ubuntu systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 16 17:51:03 ubuntu systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
● systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev
     Loaded: loaded (/usr/lib/systemd/system/systemd-tmpfiles-setup-dev.service; static)
     Active: active (exited) since Mon 2026-02-16 17:51:03 UTC; 25min ago
       Docs: man:tmpfiles.d(5)
             man:systemd-tmpfiles(8)
   Main PID: 444 (code=exited, status=0/SUCCESS)
        CPU: 15ms

Feb 16 17:51:03 ubuntu systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 16 17:51:03 ubuntu systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.

● systemd-tmpfiles-setup.service - Create Volatile Files and Directories
     Loaded: loaded (/usr/lib/systemd/system/systemd-tmpfiles-setup.service; static)
     Active: active (exited) since Mon 2026-02-16 17:51:03 UTC; 25min ago
       Docs: man:tmpfiles.d(5)
             man:systemd-tmpfiles(8)
   Main PID: 452 (code=exited, status=0/SUCCESS)
        CPU: 26ms

Feb 16 17:51:03 ubuntu systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Feb 16 17:51:03 ubuntu systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.

○ systemd-tpm2-setup-early.service - TPM2 SRK Setup (Early)
     Loaded: loaded (/usr/lib/systemd/system/systemd-tpm2-setup-early.service; static)
     Active: inactive (dead)
  Condition: start condition unmet at Mon 2026-02-16 17:51:03 UTC; 25min ago
       Docs: man:systemd-tpm2-setup.service(8)

○ systemd-tpm2-setup.service - TPM2 SRK Setup
     Loaded: loaded (/usr/lib/systemd/system/systemd-tpm2-setup.service; static)
     Active: inactive (dead)
  Condition: start condition unmet at Mon 2026-02-16 17:51:03 UTC; 25min ago
       Docs: man:systemd-tpm2-setup.service(8)

Feb 16 17:51:03 ubuntu systemd[1]: systemd-tpm2-setup.service - TPM2 SRK Setup was skipped because of an unmet condition check (ConditionSecurity=measured-uki).
● systemd-udev-settle.service - Wait for udev To Complete Device Initialization
     Loaded: loaded (/usr/lib/systemd/system/systemd-udev-settle.service; static)
     Active: active (exited) since Mon 2026-02-16 18:08:54 UTC; 8min ago
       Docs: man:systemd-udev-settle.service(8)
   Main PID: 80687 (code=exited, status=0/SUCCESS)
        CPU: 9ms

Feb 16 18:08:54 np0000155647 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Feb 16 18:08:54 np0000155647 udevadm[80687]: systemd-udev-settle.service is deprecated. Please fix stack-volumes-lvmdriver-1-backing-file.service not to pull it in.
Feb 16 18:08:54 np0000155647 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.

● systemd-udev-trigger.service - Coldplug All udev Devices
     Loaded: loaded (/usr/lib/systemd/system/systemd-udev-trigger.service; static)
     Active: active (exited) since Mon 2026-02-16 17:51:03 UTC; 25min ago
       Docs: man:udev(7)
             man:systemd-udevd.service(8)
   Main PID: 398 (code=exited, status=0/SUCCESS)
        CPU: 133ms

Feb 16 17:51:03 ubuntu systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.

● systemd-udevd.service - Rule-based Manager for Device Events and Files
     Loaded: loaded (/usr/lib/systemd/system/systemd-udevd.service; static)
    Drop-In: /usr/lib/systemd/system/systemd-udevd.service.d
             └─syscall-architecture.conf
     Active: active (running) since Mon 2026-02-16 17:51:03 UTC; 25min ago
TriggeredBy: ● systemd-udevd-kernel.socket
             ● systemd-udevd-control.socket
       Docs: man:systemd-udevd.service(8)
             man:udev(7)
   Main PID: 453 (systemd-udevd)
     Status: "Processing with 48 children at max"
      Tasks: 1
     Memory: 12.7M (peak: 39.9M)
        CPU: 2.765s
     CGroup: /system.slice/systemd-udevd.service
             └─udev
               └─453 /usr/lib/systemd/systemd-udevd

Feb 16 17:51:03 ubuntu systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 16 17:51:03 ubuntu systemd-udevd[453]: Using default interface naming scheme 'v255'.
Feb 16 17:51:03 ubuntu systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 16 18:08:54 np0000155647 lvm[80714]: PV /dev/loop0 online, VG stack-volumes-lvmdriver-1 is complete.
Feb 16 18:08:54 np0000155647 lvm[80789]: PV /dev/loop0 online, VG stack-volumes-lvmdriver-1 is complete.
Feb 16 18:08:54 np0000155647 lvm[80789]: VG stack-volumes-lvmdriver-1 finished
Feb 16 18:13:45 np0000155647 lvm[115162]: PV /dev/loop0 online, VG stack-volumes-lvmdriver-1 is complete.
Feb 16 18:13:45 np0000155647 lvm[115162]: VG stack-volumes-lvmdriver-1 finished

● systemd-update-done.service - Update is Completed
     Loaded: loaded (/usr/lib/systemd/system/systemd-update-done.service; static)
     Active: active (exited) since Mon 2026-02-16 17:51:03 UTC; 25min ago
       Docs: man:systemd-update-done.service(8)
   Main PID: 540 (code=exited, status=0/SUCCESS)
        CPU: 9ms

Feb 16 17:51:03 ubuntu systemd[1]: Starting systemd-update-done.service - Update is Completed...
Feb 16 17:51:03 ubuntu systemd[1]: Finished systemd-update-done.service - Update is Completed.

○ systemd-update-utmp-runlevel.service - Record Runlevel Change in UTMP
     Loaded: loaded (/usr/lib/systemd/system/systemd-update-utmp-runlevel.service; static)
     Active: inactive (dead) since Mon 2026-02-16 17:51:10 UTC; 25min ago
       Docs: man:systemd-update-utmp-runlevel.service(8)
             man:utmp(5)
   Main PID: 748 (code=exited, status=0/SUCCESS)
        CPU: 5ms

Feb 16 17:51:10 np0000155647 systemd[1]: Starting systemd-update-utmp-runlevel.service - Record Runlevel Change in UTMP...
Feb 16 17:51:10 np0000155647 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Feb 16 17:51:10 np0000155647 systemd[1]: Finished systemd-update-utmp-runlevel.service - Record Runlevel Change in UTMP.
● systemd-update-utmp.service - Record System Boot/Shutdown in UTMP
     Loaded: loaded (/usr/lib/systemd/system/systemd-update-utmp.service; static)
     Active: active (exited) since Mon 2026-02-16 17:51:03 UTC; 25min ago
       Docs: man:systemd-update-utmp.service(8)
             man:utmp(5)
   Main PID: 464 (code=exited, status=0/SUCCESS)
        CPU: 17ms

Feb 16 17:51:03 ubuntu systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Feb 16 17:51:03 ubuntu systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.

● systemd-user-sessions.service - Permit User Sessions
     Loaded: loaded (/usr/lib/systemd/system/systemd-user-sessions.service; static)
     Active: active (exited) since Mon 2026-02-16 17:51:10 UTC; 25min ago
       Docs: man:systemd-user-sessions.service(8)
   Main PID: 714 (code=exited, status=0/SUCCESS)
        CPU: 19ms

Feb 16 17:51:10 np0000155647 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Feb 16 17:51:10 np0000155647 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.

● user-runtime-dir@1000.service - User Runtime Directory /run/user/1000
     Loaded: loaded (/usr/lib/systemd/system/user-runtime-dir@.service; static)
     Active: active (exited) since Mon 2026-02-16 17:51:35 UTC; 25min ago
       Docs: man:user@.service(5)
   Main PID: 830 (code=exited, status=0/SUCCESS)
        CPU: 15ms

Feb 16 17:51:35 np0000155647 systemd[1]: Starting user-runtime-dir@1000.service - User Runtime Directory /run/user/1000...
Feb 16 17:51:35 np0000155647 systemd[1]: Finished user-runtime-dir@1000.service - User Runtime Directory /run/user/1000.

● user@1000.service - User Manager for UID 1000
     Loaded: loaded (/usr/lib/systemd/system/user@.service; static)
    Drop-In: /usr/lib/systemd/system/user@.service.d
             └─10-login-barrier.conf, timeout.conf
     Active: active (running) since Mon 2026-02-16 17:51:35 UTC; 25min ago
       Docs: man:user@.service(5)
   Main PID: 833 (systemd)
     Status: "Ready."
      Tasks: 2
     Memory: 4.1M (peak: 6.0M)
        CPU: 240ms
     CGroup: /user.slice/user-1000.slice/user@1000.service
             └─init.scope
               ├─833 /usr/lib/systemd/systemd --user
               └─834 "(sd-pam)"

Feb 16 17:51:35 np0000155647 systemd[833]: Reached target timers.target - Timers.
Feb 16 17:51:35 np0000155647 systemd[833]: Starting dbus.socket - D-Bus User Message Bus Socket...
Feb 16 17:51:35 np0000155647 systemd[833]: Listening on pk-debconf-helper.socket - debconf communication socket.
Feb 16 17:51:35 np0000155647 systemd[833]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Feb 16 17:51:35 np0000155647 systemd[833]: Reached target sockets.target - Sockets.
Feb 16 17:51:35 np0000155647 systemd[833]: Reached target basic.target - Basic System.
Feb 16 17:51:35 np0000155647 systemd[833]: Reached target default.target - Main User Target.
Feb 16 17:51:35 np0000155647 systemd[833]: Startup finished in 165ms.
Feb 16 17:51:35 np0000155647 systemd[1]: Started user@1000.service - User Manager for UID 1000.
Feb 16 17:56:42 np0000155647 systemd[833]: launchpadlib-cache-clean.service - Clean up old files in the Launchpadlib cache was skipped because of an unmet condition check (ConditionPathExists=/home/zuul/.launchpadlib/api.launchpad.net/cache).

○ uuidd.service - Daemon for generating UUIDs
     Loaded: loaded (/usr/lib/systemd/system/uuidd.service; indirect; preset: enabled)
     Active: inactive (dead)
TriggeredBy: ● uuidd.socket
       Docs: man:uuidd(8)

● uwsgi.service - LSB: Start/stop uWSGI server instance(s)
     Loaded: loaded (/etc/init.d/uwsgi; generated)
     Active: active (exited) since Mon 2026-02-16 18:00:10 UTC; 16min ago
       Docs: man:systemd-sysv-generator(8)
        CPU: 68ms

Feb 16 18:00:10 np0000155647 systemd[1]: Starting uwsgi.service - LSB: Start/stop uWSGI server instance(s)...
Feb 16 18:00:10 np0000155647 uwsgi[30188]: * Starting app server(s) uwsgi
Feb 16 18:00:10 np0000155647 uwsgi[30188]: ...done.
Feb 16 18:00:10 np0000155647 systemd[1]: Started uwsgi.service - LSB: Start/stop uWSGI server instance(s).
● virtlockd.service - libvirt locking daemon
     Loaded: loaded (/usr/lib/systemd/system/virtlockd.service; enabled; preset: enabled)
     Active: active (running) since Mon 2026-02-16 18:02:21 UTC; 14min ago
TriggeredBy: ● virtlockd-admin.socket
             ● virtlockd.socket
       Docs: man:virtlockd(8)
             https://libvirt.org/
   Main PID: 43092 (virtlockd)
      Tasks: 1 (limit: 77077)
     Memory: 2.0M (peak: 2.3M)
        CPU: 27ms
     CGroup: /system.slice/virtlockd.service
             └─43092 /usr/sbin/virtlockd

Feb 16 18:02:21 np0000155647 systemd[1]: Starting virtlockd.service - libvirt locking daemon...
Feb 16 18:02:21 np0000155647 systemd[1]: Started virtlockd.service - libvirt locking daemon.

● virtlogd.service - libvirt logging daemon
     Loaded: loaded (/usr/lib/systemd/system/virtlogd.service; enabled; preset: enabled)
     Active: active (running) since Mon 2026-02-16 18:02:53 UTC; 14min ago
TriggeredBy: ● virtlogd-admin.socket
             ● virtlogd.socket
       Docs: man:virtlogd(8)
             https://libvirt.org/
   Main PID: 48673 (virtlogd)
      Tasks: 1 (limit: 77077)
     Memory: 2.4M (peak: 2.6M)
        CPU: 48ms
     CGroup: /system.slice/virtlogd.service
             └─48673 /usr/sbin/virtlogd

Feb 16 18:02:53 np0000155647 systemd[1]: Starting virtlogd.service - libvirt logging daemon...
Feb 16 18:02:53 np0000155647 systemd[1]: Started virtlogd.service - libvirt logging daemon.
● -.slice - Root Slice
     Loaded: loaded
     Active: active since Mon 2026-02-16 17:51:02 UTC; 25min ago
       Docs: man:systemd.special(7)
      Tasks: 2025
     Memory: 12.1G ()
        CPU: 39min 52.540s
     CGroup: /
├─init.scope
│ └─1 /sbin/init nofb
├─system.slice
│ ├─apache-htcacheclean.service
│ │ └─14203 /usr/bin/htcacheclean -d 120 -p /var/cache/apache2/mod_cache_disk -l 300M -n
│ ├─apache2.service
│ │ ├─122537 /usr/sbin/apache2 -k start
│ │ ├─122541 /usr/sbin/apache2 -k start
│ │ └─122542 /usr/sbin/apache2 -k start
│ ├─containerd.service
│ │ ├─20631 /usr/bin/containerd
│ │ └─21507 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 089fb10c98a483eac1b5cbd49445b3386ad4740a6877edc01e9209d9dbf220c7 -address /run/containerd/containerd.sock
│ ├─dbus.service
│ │ └─708 @dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation --syslog-only
│ ├─dm-event.service
│ │ └─115151 /usr/sbin/dmeventd -f
│ ├─docker-089fb10c98a483eac1b5cbd49445b3386ad4740a6877edc01e9209d9dbf220c7.scope
│ │ ├─init.scope
│ │ │ └─21530 /sbin/init
│ │ ├─kubelet.slice
│ │ │ ├─kubelet-kubepods.slice
│ │ │ │ ├─kubelet-kubepods-besteffort.slice
│ │ │ │ │ ├─kubelet-kubepods-besteffort-pod060980bd_94df_4b77_8c4f_85019165ff36.slice
│ │ │ │ │ │ ├─cri-containerd-1433ec82c7d58a1bd88b32542ecb0883d1dd9b071469fb95c79ac08ced6611d2.scope
│ │ │ │ │ │ │ └─25177 /manager --leader-elect --diagnostics-address=:8443 --insecure-diagnostics=false --feature-gates=MachinePool=true,KubeadmBootstrapFormatIgnition=true,PriorityQueue=false --bootstrap-token-ttl=15m
│ │ │ │ │ │ └─cri-containerd-5793c32bcc0436f8fe34a586a9c3d691c3891ff8e512284b4424309d965cf4de.scope
│ │ │ │ │ │   └─24819 /pause
│ │ │ │ │ ├─kubelet-kubepods-besteffort-pod060f8598_7528_45b1_b3c5_0ca523a34f10.slice
│ │ │ │ │ │ ├─cri-containerd-a9b853c1562e2d9b5d25818d03170c0ae004e0c3bdcba5684225a09905a1b752.scope
│ │ │ │ │ │ │ └─24755 /pause
│ │ │ │ │ │ └─cri-containerd-f03af497176e3521a17e482305315b46ef9a3f06f8def1aa9d3e6b9f8a165825.scope
│ │ │ │ │ │   └─25002 /manager --leader-elect --diagnostics-address=:8443 --insecure-diagnostics=false --feature-gates=MachinePool=true,ClusterResourceSet=true,ClusterTopology=true,RuntimeSDK=false,MachineSetPreflightChecks=true,MachineWaitForVolumeDetachConsiderVolumeAttachments=true,PriorityQueue=false
│ │ │ │ │ ├─kubelet-kubepods-besteffort-pod15afed8a_99d8_4a13_9c07_038039770363.slice
│ │ │ │ │ │ ├─cri-containerd-716b23ff544bc41fa5015a42d703c5554339e9b24ab33edcd77f5b2026964d08.scope
│ │ │ │ │ │ │ └─25086 /pause
│ │ │ │ │ │ └─cri-containerd-d9dba19631b25e52efc68930f07a9a022f59c9244d7050bc186b9e4d87d4e755.scope
│ │ │ │ │ │   └─25471 /manager --leader-elect --v=2 --diagnostics-address=127.0.0.1:8080 --insecure-diagnostics=true
│ │ │ │ │ ├─kubelet-kubepods-besteffort-pod18fcda39_476d_4f2a_b389_6fff818f42ae.slice
│ │ │ │ │ │ ├─cri-containerd-10c28a6f48eec284148f84ce3e47fe3004a9cd7bbf1f9abc58e6814017b471be.scope
│ │ │ │ │ │ │ └─23824 /pause
│ │ │ │ │ │ └─cri-containerd-720d0f6651c8177a77c9230bf86a48de597bc5c1ec5db6e60bd66fe90064648e.scope
│ │ │ │ │ │   └─24002 local-path-provisioner --debug start --helper-image docker.io/kindest/local-path-helper:v20220607-9a4d8d2a --config /etc/config/config.json
│ │ │ │ │ ├─kubelet-kubepods-besteffort-pod2bdc6e4c_0088_47cb_be88_ab92547b89ae.slice
│ │ │ │ │ │ ├─cri-containerd-5e10cc4be68610c8fdfce2d97284fa2a72defecd8f9dbf1195ed33469751fc11.scope
│ │ │ │ │ │ │ └─23841 /pause
│ │ │ │ │ │ └─cri-containerd-6604ce9efc54877b4d953350f7e65a5f444d760019403df0dbdd867c03f80c27.scope
│ │ │ │ │ │   └─24200 /app/cmd/controller/controller --v=2 --cluster-resource-namespace=cert-manager --leader-election-namespace=kube-system --acme-http01-solver-image=quay.io/jetstack/cert-manager-acmesolver:v1.18.1 --max-concurrent-challenges=60
│ │ │ │ │ ├─kubelet-kubepods-besteffort-pod60d2d5fb_575b_4758_90d1_81d8244a7f54.slice
│ │ │ │ │ │ ├─cri-containerd-5c88b5d8462c6a19822bef36e5c767ce579a2df6df3f201d51bb2c4a584c158d.scope
│ │ │ │ │ │ │ └─24971 /pause
│ │ │ │ │ │ └─cri-containerd-75f147194ebc30a5d1f5ba46bc89ad0ee081af58bddf008e5031627017ed8994.scope
│ │ │ │ │ │   └─25312 /manager --leader-elect --diagnostics-address=:8443 --insecure-diagnostics=false --feature-gates=MachinePool=true,ClusterTopology=true,KubeadmBootstrapFormatIgnition=true,PriorityQueue=false
│ │ │ │ │ ├─kubelet-kubepods-besteffort-pod96cc9fa3_1069_4840_9c97_4d69571ebb29.slice
│ │ │ │ │ │ ├─cri-containerd-49b695fce71dcafd713509840486eda46a4808bc556797e8cbb013c178adf453.scope
│ │ │ │ │ │ │ └─24091 /pause
│ │ │ │ │ │ └─cri-containerd-6d62f3e77e85aff076f4bf174cf00cbbe5d08b7b01c315a48b33f121763c8447.scope
│ │ │ │ │ │   └─24533 /app/cmd/webhook/webhook --v=2 --secure-port=10250 --dynamic-serving-ca-secret-namespace=cert-manager --dynamic-serving-ca-secret-name=cert-manager-webhook-ca --dynamic-serving-dns-names=cert-manager-webhook --dynamic-serving-dns-names=cert-manager-webhook.cert-manager --dynamic-serving-dns-names=cert-manager-webhook.cert-manager.svc
│ │ │ │ │ ├─kubelet-kubepods-besteffort-pod9f8355b0_94bd_475d_bf74_9d386d0f5259.slice
│ │ │ │ │ │ ├─cri-containerd-7a0648fca673c15649668c644af739b895a375f8e0a5ec9c7cf267b897aa4815.scope
│ │ │ │ │ │ │ └─24078 /pause
│ │ │ │ │ │ └─cri-containerd-f55b671d3ac5e81b9dc93f8fe60e82166337b18ff359ea71e86690ffa57838b4.scope
│ │ │ │ │ │   └─24411 /app/cmd/cainjector/cainjector --v=2 --leader-election-namespace=kube-system
│ │ │ │ │ └─kubelet-kubepods-besteffort-podeba8cea0_a113_40e4_8af9_f9092b483360.slice
│ │ │ │ │   ├─cri-containerd-05883a8b5dcc6186bc92366d098adf91aacc2716c5f4654d5faee7250595663b.scope
│ │ │ │ │   │ └─23221 /pause
│ │ │ │ │   └─cri-containerd-68a9f7e8b2f1594edc9ae113bf7569a1d5ed85bb82562eb4364f5331c9f598ca.scope
│ │ │ │ │     └─23271 /usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=kind-control-plane
│ │ │ │ ├─kubelet-kubepods-burstable.slice
│ │ │ │ │ ├─kubelet-kubepods-burstable-pod0656ab70da313d6449b17f099a2a3110.slice
│ │ │ │ │ │ ├─cri-containerd-6d4579b16512918eddfa28c91b9b82464468be359a2a61c9fea7dc7b7ab46364.scope
│ │ │ │ │ │ │ └─22509 etcd --advertise-client-urls=https://172.18.0.2:2379 --cert-file=/etc/kubernetes/pki/etcd/server.crt --client-cert-auth=true --data-dir=/var/lib/etcd --experimental-initial-corrupt-check=true --experimental-watch-progress-notify-interval=5s --initial-advertise-peer-urls=https://172.18.0.2:2380 --initial-cluster=kind-control-plane=https://172.18.0.2:2380 --key-file=/etc/kubernetes/pki/etcd/server.key --listen-client-urls=https://127.0.0.1:2379,https://172.18.0.2:2379 --listen-metrics-urls=http://127.0.0.1:2381 --listen-peer-urls=https://172.18.0.2:2380 --name=kind-control-plane --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt --peer-client-cert-auth=true --peer-key-file=/etc/kubernetes/pki/etcd/peer.key --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt --snapshot-count=10000 --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
│ │ │ │ │ │ └─cri-containerd-a181437f07c759cb357dc1c5a4915703fdbc8421ac7dbe7ae420af7bbf5ce962.scope
│ │ │ │ │ │   └─22255 /pause
│ │ │ │ │ ├─kubelet-kubepods-burstable-pod53ff6c8abd472f64bc9a9afbd3a471a9.slice
│ │ │ │ │ │ ├─cri-containerd-9213f7ed733001c44536ef277d2f04abbec812060f0464459c296548b41c605b.scope
│ │ │ │ │ │ │ └─22269 /pause
│ │ │ │ │ │ └─cri-containerd-e5efc56a027eace488dd3cff0e461733af3798de3cb89fefc0a233cd6d868383.scope
│ │ │ │ │ │   └─22372 kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf --bind-address=127.0.0.1 --client-ca-file=/etc/kubernetes/pki/ca.crt --cluster-cidr=10.244.0.0/16 --cluster-name=kind --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt --cluster-signing-key-file=/etc/kubernetes/pki/ca.key "--controllers=*,bootstrapsigner,tokencleaner" --enable-hostpath-provisioner=true --kubeconfig=/etc/kubernetes/controller-manager.conf --leader-elect=true --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --root-ca-file=/etc/kubernetes/pki/ca.crt --service-account-private-key-file=/etc/kubernetes/pki/sa.key --service-cluster-ip-range=10.96.0.0/16 --use-service-account-credentials=true
│ │ │ │ │ ├─kubelet-kubepods-burstable-pod65d25134_75a8_44c0_b994_37071db70c0b.slice
│ │ │ │ │ │ ├─cri-containerd-0845db3249e178e867783ec1010a54509a9a6f20e675db3c40bafc17b94aff4a.scope
│ │ │ │ │ │ │ └─23596 /pause
│ │ │ │ │ │ └─cri-containerd-a1bcb37a57c99f9a954339f4c95765996f1fc2161db6fc87722931f900073eac.scope
│ │ │ │ │ │   └─23685 /coredns -conf /etc/coredns/Corefile
│ │ │ │ │ ├─kubelet-kubepods-burstable-pod922d5a86_cf0c_4898_9361_4f7a1724917a.slice
│ │ │ │ │ │ ├─cri-containerd-5f0448337760a1a92cb526a9ebe910cc2ad19bc77e1dfc4e2691cc1e11943675.scope
│ │ │ │ │ │ │ └─23927 /pause
│ │ │ │ │ │ └─cri-containerd-d0f8a8d96527dbcca96f1dd0492e8b1ba70ee11008c068b797069a257b450b1d.scope
│ │ │ │ │ │   └─24314 /manager --metrics-bind-address=:8443 --leader-elect --health-probe-bind-address=:8081
│ │ │ │ │ ├─kubelet-kubepods-burstable-podbee69ab63b6471d4da666ee970746eae.slice
│ │ │ │ │ │ ├─cri-containerd-5191216187fdcf29cecbd6140d2896e536a2153a60d5d507dc419302faacfe0d.scope
│ │ │ │ │ │ │ └─22253 /pause
│ │ │ │ │ │ └─cri-containerd-92b6f098aaae83573340f2ea18f968ceaff832acd7b11fb4c99b6ac6d401b2fe.scope
│ │ │ │ │ │   └─22350 kube-scheduler --authentication-kubeconfig=/etc/kubernetes/scheduler.conf --authorization-kubeconfig=/etc/kubernetes/scheduler.conf --bind-address=127.0.0.1 --kubeconfig=/etc/kubernetes/scheduler.conf --leader-elect=true
│ │ │ │ │ ├─kubelet-kubepods-burstable-podcbd4ee29_9a60_4f24_babe_75a79e0262a8.slice
│ │ │ │ │ │ ├─cri-containerd-45366b6128ee283ed8e551bd4cfbb23d9e41caf8e9df563caa159264f5fb5a8f.scope
│ │ │ │ │ │ │ └─23604 /pause
│ │ │ │ │ │ └─cri-containerd-c82673298311c208438753cc6f9980d181abc401eaf370dd7390fdbc968f243a.scope
│ │ │ │ │ │   └─23676 /coredns -conf /etc/coredns/Corefile
│ │ │ │ │ └─kubelet-kubepods-burstable-podef6ebc9842be361e05ebdb6790c540b6.slice
│ │ │ │ │   ├─cri-containerd-048d36383cc06e32dd1ab4003dee6a9343d2ccda4cb4eb939db648f16ac6f620.scope
│ │ │ │ │   │ └─22272 /pause
│ │ │ │ │   └─cri-containerd-d4a9d2a347b177fb443b9691e9438d1c0ee06ea2f1d19bf68afb66b1353f589c.scope
│ │ │ │ │     └─22412 kube-apiserver --advertise-address=172.18.0.2 --allow-privileged=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/pki/ca.crt --enable-admission-plugins=NodeRestriction --enable-bootstrap-token-auth=true --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key --etcd-servers=https://127.0.0.1:2379 --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --runtime-config= --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/etc/kubernetes/pki/sa.pub --service-account-signing-key-file=/etc/kubernetes/pki/sa.key --service-cluster-ip-range=10.96.0.0/16 --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
│ │ │ │ └─kubelet-kubepods-podad85b7c2_f9f9_4ec9_b260_341f20aa22ff.slice
│ │ │ │   ├─cri-containerd-840e0998fae8407b6859c75f3049d4116708e1a099a2af82c2a231ec21648183.scope
│ │ │ │   │ └─23228 /pause
│ │ │ │   └─cri-containerd-8a5e3ce32b811e69fef7d0bd0b708db17b4ffe5f3648638f8d7369dee746a825.scope
│ │ │ │     └─23315 /bin/kindnetd
│ │ │ └─kubelet.service
│ │ │   └─22594 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --node-ip=172.18.0.2 --node-labels= --pod-infra-container-image=registry.k8s.io/pause:3.8 --provider-id=kind://docker/kind/kind-control-plane --fail-swap-on=false --cgroup-root=/kubelet
│ │ └─system.slice
│ │   ├─containerd.service
│ │   │ ├─21726 /usr/local/bin/containerd
│ │   │ ├─22165 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id a181437f07c759cb357dc1c5a4915703fdbc8421ac7dbe7ae420af7bbf5ce962 -address /run/containerd/containerd.sock
│ │   │ ├─22172 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id 5191216187fdcf29cecbd6140d2896e536a2153a60d5d507dc419302faacfe0d -address /run/containerd/containerd.sock
│ │   │ ├─22182 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id 048d36383cc06e32dd1ab4003dee6a9343d2ccda4cb4eb939db648f16ac6f620 -address /run/containerd/containerd.sock
│ │   │ ├─22207 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id 9213f7ed733001c44536ef277d2f04abbec812060f0464459c296548b41c605b -address /run/containerd/containerd.sock
│ │   │ ├─23175 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id 05883a8b5dcc6186bc92366d098adf91aacc2716c5f4654d5faee7250595663b -address /run/containerd/containerd.sock
│ │   │ ├─23197 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id 840e0998fae8407b6859c75f3049d4116708e1a099a2af82c2a231ec21648183 -address /run/containerd/containerd.sock
│ │   │ ├─23556 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id 45366b6128ee283ed8e551bd4cfbb23d9e41caf8e9df563caa159264f5fb5a8f -address /run/containerd/containerd.sock
│ │   │ ├─23564 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id 0845db3249e178e867783ec1010a54509a9a6f20e675db3c40bafc17b94aff4a -address /run/containerd/containerd.sock
│ │   │ ├─23750 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id 10c28a6f48eec284148f84ce3e47fe3004a9cd7bbf1f9abc58e6814017b471be -address /run/containerd/containerd.sock
│ │   │ ├─23778 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id 5e10cc4be68610c8fdfce2d97284fa2a72defecd8f9dbf1195ed33469751fc11 -address /run/containerd/containerd.sock
│ │   │ ├─23907 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id 5f0448337760a1a92cb526a9ebe910cc2ad19bc77e1dfc4e2691cc1e11943675 -address /run/containerd/containerd.sock
│ │   │ ├─24027 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id 7a0648fca673c15649668c644af739b895a375f8e0a5ec9c7cf267b897aa4815 -address /run/containerd/containerd.sock
│ │   │ ├─24053 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id 49b695fce71dcafd713509840486eda46a4808bc556797e8cbb013c178adf453 -address /run/containerd/containerd.sock
│ │   │ ├─24730 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id a9b853c1562e2d9b5d25818d03170c0ae004e0c3bdcba5684225a09905a1b752 -address /run/containerd/containerd.sock
│ │   │ ├─24799 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id 5793c32bcc0436f8fe34a586a9c3d691c3891ff8e512284b4424309d965cf4de -address /run/containerd/containerd.sock
│ │   │ ├─24951 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id 5c88b5d8462c6a19822bef36e5c767ce579a2df6df3f201d51bb2c4a584c158d -address /run/containerd/containerd.sock
│ │   │ └─25067 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id 716b23ff544bc41fa5015a42d703c5554339e9b24ab33edcd77f5b2026964d08 -address /run/containerd/containerd.sock
│ │   └─systemd-journald.service
│ │     └─21712 /lib/systemd/systemd-journald
│ ├─docker.service
│ │ ├─20760 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
│ │ └─21600 /usr/bin/docker-proxy -proto tcp -host-ip 127.0.0.1 -host-port 36617 -container-ip 172.18.0.2 -container-port 6443 -use-listen-fd
│ ├─epmd.service
│ │ └─26075 /usr/bin/epmd -systemd
│ ├─fsidd.service
│ │ └─54639 /usr/sbin/fsidd
│ ├─haproxy.service
│ │ ├─13241 /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -S /run/haproxy-master.sock
│ │ └─13243 /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -S /run/haproxy-master.sock
│ ├─iscsid.service
│ │ ├─44108 /usr/sbin/iscsid
│ │ └─44109 /usr/sbin/iscsid
│ ├─ksmtuned.service
│ │ ├─  5045 /bin/bash /usr/sbin/ksmtuned
│ │ └─130723 sleep 60
│ ├─libvirtd.service
│ │ ├─ 42979 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/lib/libvirt/libvirt_leaseshelper
│ │ ├─ 42980 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/lib/libvirt/libvirt_leaseshelper
│ │ └─112159 /usr/sbin/libvirtd --timeout 120
│ ├─memcached.service
│ │ └─66471 /usr/bin/memcached -m 64 -p 11211 -u memcache -l 127.0.0.1 -l ::1 -P /var/run/memcached/memcached.pid
│ ├─mysql.service
│ │ └─62997 /usr/sbin/mysqld
│ ├─nfs-blkmap.service
│ │ └─54644 /usr/sbin/blkmapd
│ ├─nfs-idmapd.service
│ │ └─54647 /usr/sbin/rpc.idmapd
│ ├─nfs-mountd.service
│ │ └─54657 /usr/sbin/rpc.mountd
│ ├─nfsdcld.service
│ │ └─54659 /usr/sbin/nfsdcld
│ ├─nmbd.service
│ │ └─55225 /usr/sbin/nmbd --foreground --no-process-group
│ ├─ovn-controller-vtep.service
│ │ └─100859 ovn-controller-vtep -vconsole:emer -vsyslog:err -vfile:info --vtep-db=/var/run/openvswitch/db.sock --ovnsb-db=/var/run/ovn/ovnsb_db.sock --no-chdir --log-file=/var/log/ovn/ovn-controller-vtep.log --pidfile=/var/run/ovn/ovn-controller-vtep.pid --detach
│ ├─ovn-controller.service
│ │ └─101600 ovn-controller unix:/var/run/openvswitch/db.sock -vconsole:emer -vsyslog:err -vfile:info --no-chdir --log-file=/var/log/ovn/ovn-controller.log --pidfile=/var/run/ovn/ovn-controller.pid --detach
│ ├─ovn-northd.service
│ │ └─101269 ovn-northd -vconsole:emer -vsyslog:err -vfile:info --ovnnb-db=unix:/var/run/ovn/ovnnb_db.sock --ovnsb-db=unix:/var/run/ovn/ovnsb_db.sock --no-chdir --log-file=/var/log/ovn/ovn-northd.log --pidfile=/var/run/ovn/ovn-northd.pid --detach
│ ├─ovn-ovsdb-server-nb.service
│ │ └─101194 ovsdb-server -vconsole:off -vfile:info --log-file=/var/log/ovn/ovsdb-server-nb.log --remote=punix:/var/run/ovn/ovnnb_db.sock --pidfile=/var/run/ovn/ovnnb_db.pid --unixctl=/var/run/ovn/ovnnb_db.ctl --remote=db:OVN_Northbound,NB_Global,connections --private-key=db:OVN_Northbound,SSL,private_key --certificate=db:OVN_Northbound,SSL,certificate --ca-cert=db:OVN_Northbound,SSL,ca_cert --ssl-protocols=db:OVN_Northbound,SSL,ssl_protocols --ssl-ciphers=db:OVN_Northbound,SSL,ssl_ciphers /var/lib/ovn/ovnnb_db.db
│ ├─ovn-ovsdb-server-sb.service
│ │ └─101199 ovsdb-server -vconsole:off -vfile:info --log-file=/var/log/ovn/ovsdb-server-sb.log --remote=punix:/var/run/ovn/ovnsb_db.sock --pidfile=/var/run/ovn/ovnsb_db.pid --unixctl=/var/run/ovn/ovnsb_db.ctl --remote=db:OVN_Southbound,SB_Global,connections --private-key=db:OVN_Southbound,SSL,private_key --certificate=db:OVN_Southbound,SSL,certificate --ca-cert=db:OVN_Southbound,SSL,ca_cert --ssl-protocols=db:OVN_Southbound,SSL,ssl_protocols --ssl-ciphers=db:OVN_Southbound,SSL,ssl_ciphers /var/lib/ovn/ovnsb_db.db
│ ├─ovs-vswitchd.service
│ │ └─100770 ovs-vswitchd unix:/var/run/openvswitch/db.sock -vconsole:emer -vsyslog:err -vfile:info --mlockall --no-chdir --log-file=/var/log/openvswitch/ovs-vswitchd.log --pidfile=/var/run/openvswitch/ovs-vswitchd.pid --detach
│ ├─ovsdb-server.service
│ │ └─100719 ovsdb-server /etc/openvswitch/conf.db -vconsole:emer -vsyslog:err -vfile:info --remote=punix:/var/run/openvswitch/db.sock --private-key=db:Open_vSwitch,SSL,private_key --certificate=db:Open_vSwitch,SSL,certificate --bootstrap-ca-cert=db:Open_vSwitch,SSL,ca_cert --no-chdir --log-file=/var/log/openvswitch/ovsdb-server.log --pidfile=/var/run/openvswitch/ovsdb-server.pid --detach
│ ├─polkit.service
│ │ └─745 /usr/lib/polkit-1/polkitd --no-debug
│ ├─rabbitmq-server.service
│ │ ├─26195 /usr/lib/erlang/erts-13.2.2.5/bin/beam.smp -W w -MBas ageffcbf -MHas ageffcbf -MBlmbcs 512 -MHlmbcs 512 -MMmcs 30 -pc unicode -P 1048576 -t 5000000 -stbt db -zdbbl 128000 -sbwt none -sbwtdcpu none -sbwtdio none -- -root /usr/lib/erlang -bindir /usr/lib/erlang/erts-13.2.2.5/bin -progname erl -- -home /var/lib/rabbitmq -- -pa "" -noshell -noinput -s rabbit boot -boot start_sasl -syslog logger "[]" -syslog syslog_error_logger false -kernel prevent_overlapping_partitions false -enable-feature maybe_expr
│ │ ├─26205 erl_child_setup 65536
│ │ ├─26313 /usr/lib/erlang/erts-13.2.2.5/bin/inet_gethost 4
│ │ ├─26314 /usr/lib/erlang/erts-13.2.2.5/bin/inet_gethost 4
│ │ └─26319 /bin/sh -s rabbit_disk_monitor
│ ├─rpc-statd.service
│ │ └─54648 /usr/sbin/rpc.statd
│ ├─rpcbind.service
│ │ └─54015 /sbin/rpcbind -f -w
│ ├─rsyslog.service
│ │ └─125254 /usr/sbin/rsyslogd -n -iNONE
│ ├─smbd.service
│ │ ├─55156 /usr/sbin/smbd --foreground --no-process-group
│ │ ├─55160 "smbd: notifyd"
│ │ └─55161 "smbd: cleanupd "
│ ├─ssh.service
│ │ └─746 "sshd: /usr/sbin/sshd -D [listener] 0 of 10-100 startups"
│ ├─system-devstack.slice
│ │ ├─devstack@barbican-keystone-listener.service
│ │ │ ├─118155 "barbican-keystone-listener: master process [/opt/stack/data/venv/bin/barbican-keystone-listener --config-file=/etc/barbican/barbican.conf]"
│ │ │ └─118402 "barbican-keystone-listener: ServiceWrapper worker(0)"
│ │ ├─devstack@barbican-retry.service
│ │ │ ├─117625 "barbican-retry: master process [/opt/stack/data/venv/bin/barbican-retry --config-file=/etc/barbican/barbican.conf]"
│ │ │ └─117918 "barbican-retry: ServiceWrapper worker(0)"
│ │ ├─devstack@barbican-svc.service
│ │ │ ├─117084 /opt/stack/data/venv/bin/uwsgi --ini /etc/barbican/barbican-uwsgi.ini --venv /opt/stack/data/venv
│ │ │ ├─117085 /opt/stack/data/venv/bin/uwsgi --ini /etc/barbican/barbican-uwsgi.ini --venv /opt/stack/data/venv
│ │ │ ├─117086 /opt/stack/data/venv/bin/uwsgi --ini /etc/barbican/barbican-uwsgi.ini --venv /opt/stack/data/venv
│ │ │ ├─117087 /opt/stack/data/venv/bin/uwsgi --ini /etc/barbican/barbican-uwsgi.ini --venv /opt/stack/data/venv
│ │ │ └─117088 /opt/stack/data/venv/bin/uwsgi --ini /etc/barbican/barbican-uwsgi.ini --venv /opt/stack/data/venv
│ │ ├─devstack@c-api.service
│ │ │ ├─112517 "cinder-apiuWSGI master"
│ │ │ ├─112522 "cinder-apiuWSGI worker 1"
│ │ │ ├─112523 "cinder-apiuWSGI worker 2"
│ │ │ ├─112524 "cinder-apiuWSGI worker 3"
│ │ │ └─112525 "cinder-apiuWSGI worker 4"
│ │ ├─devstack@c-bak.service
│ │ │ └─113815 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/cinder-backup --config-file /etc/cinder/cinder.conf
│ │ ├─devstack@c-sch.service
│ │ │ └─113235 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/cinder-scheduler --config-file /etc/cinder/cinder.conf
│ │ ├─devstack@c-vol.service
│ │ │ ├─114396 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/cinder-volume --config-file /etc/cinder/cinder.conf
│ │ │ └─114685 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/cinder-volume --config-file /etc/cinder/cinder.conf
│ │ ├─devstack@etcd.service
│ │ │ └─64797 /opt/stack/bin/etcd --name np0000155647 --data-dir /opt/stack/data/etcd --initial-cluster-state new --initial-cluster-token etcd-cluster-01 --initial-cluster np0000155647=http://199.204.45.4:2380 --initial-advertise-peer-urls http://199.204.45.4:2380 --advertise-client-urls http://199.204.45.4:2379 --listen-peer-urls http://0.0.0.0:2380 --listen-client-urls http://199.204.45.4:2379 --log-level=debug
│ │ ├─devstack@file_tracker.service
│ │ │ ├─ 64151 /bin/bash /opt/stack/devstack/tools/file_tracker.sh
│ │ │ └─129429 sleep 20
│ │ ├─devstack@g-api.service
│ │ │ ├─115234 "glance-apiuWSGI master"
│ │ │ ├─115235 "glance-apiuWSGI worker 1"
│ │ │ ├─115236 "glance-apiuWSGI worker 2"
│ │ │ ├─115237 "glance-apiuWSGI worker 3"
│ │ │ └─115238 "glance-apiuWSGI worker 4"
│ │ ├─devstack@keystone.service
│ │ │ ├─66049 "keystoneuWSGI master"
│ │ │ ├─66057 "keystoneuWSGI worker 1"
│ │ │ ├─66058 "keystoneuWSGI worker 2"
│ │ │ ├─66059 "keystoneuWSGI worker 3"
│ │ │ └─66060 "keystoneuWSGI worker 4"
│ │ ├─devstack@m-api.service
│ │ │ ├─122152 "manila-apiuWSGI master"
│ │ │ ├─122153 "manila-apiuWSGI worker 1"
│ │ │ ├─122154 "manila-apiuWSGI worker 2"
│ │ │ ├─122155 "manila-apiuWSGI worker 3"
│ │ │ └─122156 "manila-apiuWSGI worker 4"
│ │ ├─devstack@m-dat.service
│ │ │ └─128394 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/manila-data --config-file /etc/manila/manila.conf
│ │ ├─devstack@m-sch.service
│ │ │ └─127822 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/manila-scheduler --config-file /etc/manila/manila.conf
│ │ ├─devstack@m-shr.service
│ │ │ ├─127286 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/manila-share --config-file /etc/manila/manila.conf
│ │ │ └─127637 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/manila-share --config-file /etc/manila/manila.conf
│ │ ├─devstack@magnum-api.service
│ │ │ ├─119710 "magnum-apiuWSGI master"
│ │ │ ├─119712 "magnum-apiuWSGI worker 1"
│ │ │ ├─119713 "magnum-apiuWSGI worker 2"
│ │ │ ├─119714 "magnum-apiuWSGI worker 3"
│ │ │ └─119715 "magnum-apiuWSGI worker 4"
│ │ ├─devstack@magnum-cond.service
│ │ │ ├─120306 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/magnum-conductor
│ │ │ ├─120636 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/magnum-conductor
│ │ │ ├─120638 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/magnum-conductor
│ │ │ ├─120640 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/magnum-conductor
│ │ │ ├─120641 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/magnum-conductor
│ │ │ ├─120642 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/magnum-conductor
│ │ │ ├─120644 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/magnum-conductor
│ │ │ ├─120647 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/magnum-conductor
│ │ │ ├─120648 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/magnum-conductor
│ │ │ ├─120651 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/magnum-conductor
│ │ │ ├─120653 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/magnum-conductor
│ │ │ ├─120655 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/magnum-conductor
│ │ │ ├─120657 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/magnum-conductor
│ │ │ ├─120659 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/magnum-conductor
│ │ │ ├─120663 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/magnum-conductor
│ │ │ ├─120668 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/magnum-conductor
│ │ │ └─120669 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/magnum-conductor
│ │ ├─devstack@memory_tracker.service
│ │ │ ├─ 63656 /bin/bash /opt/stack/devstack/tools/memory_tracker.sh
│ │ │ └─129419 sleep 20
│ │ ├─devstack@n-api-meta.service
│ │ │ ├─108345 "nova-api-metauWSGI master"
│ │ │ ├─108346 "nova-api-metauWSGI worker 1"
│ │ │ ├─108347 "nova-api-metauWSGI worker 2"
│ │ │ ├─108348 "nova-api-metauWSGI worker 3"
│ │ │ ├─108349 "nova-api-metauWSGI worker 4"
│ │ │ └─108350 "nova-api-metauWSGI http 1"
│ │ ├─devstack@n-api.service
│ │ │ ├─99874 "nova-apiuWSGI master"
│ │ │ ├─99875 "nova-apiuWSGI worker 1"
│ │ │ ├─99876 "nova-apiuWSGI worker 2"
│ │ │ ├─99877 "nova-apiuWSGI worker 3"
│ │ │ └─99878 "nova-apiuWSGI worker 4"
│ │ ├─devstack@n-cond-cell1.service
│ │ │ ├─110436 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/nova-conductor --config-file /etc/nova/nova_cell1.conf
│ │ │ ├─111018 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/nova-conductor --config-file /etc/nova/nova_cell1.conf
│ │ │ ├─111019 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/nova-conductor --config-file /etc/nova/nova_cell1.conf
│ │ │ ├─111021 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/nova-conductor --config-file /etc/nova/nova_cell1.conf
│ │ │ └─111022 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/nova-conductor --config-file /etc/nova/nova_cell1.conf
│ │ ├─devstack@n-cpu.service
│ │ │ └─111521 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/nova-compute --config-file /etc/nova/nova-cpu.conf
│ │ ├─devstack@n-novnc-cell1.service
│ │ │ └─109046 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/nova-novncproxy --config-file /etc/nova/nova_cell1.conf --web /opt/stack/novnc
│ │ ├─devstack@n-sch.service
│ │ │ ├─107735 "nova-scheduler: master process [/opt/stack/data/venv/bin/nova-scheduler --config-file /etc/nova/nova.conf]"
│ │ │ ├─108462 "nova-scheduler: ServiceWrapper worker(0)"
│ │ │ ├─108471 "nova-scheduler: ServiceWrapper worker(1)"
│ │ │ ├─108480 "nova-scheduler: ServiceWrapper worker(2)"
│ │ │ └─108488 "nova-scheduler: ServiceWrapper worker(3)"
│ │ ├─devstack@n-super-cond.service
│ │ │ ├─109828 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/nova-conductor --config-file /etc/nova/nova.conf
│ │ │ ├─110421 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/nova-conductor --config-file /etc/nova/nova.conf
│ │ │ ├─110422 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/nova-conductor --config-file /etc/nova/nova.conf
│ │ │ ├─110423 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/nova-conductor --config-file /etc/nova/nova.conf
│ │ │ └─110424 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/nova-conductor --config-file /etc/nova/nova.conf
│ │ ├─devstack@neutron-api.service
│ │ │ ├─103265 "neutron-apiuWSGI master"
│ │ │ ├─103266 "neutron-apiuWSGI worker 1"
│ │ │ ├─103267 "neutron-apiuWSGI worker 2"
│ │ │ ├─103268 "neutron-apiuWSGI worker 3"
│ │ │ └─103269 "neutron-apiuWSGI worker 4"
│ │ ├─devstack@neutron-ovn-maintenance-worker.service
│ │ │ ├─104760 "neutron-ovn-maintenance-worker: master process [/opt/stack/data/venv/bin/neutron-ovn-maintenance-worker --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini]"
│ │ │ └─105533 "neutron-server: maintenance worker (/opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/neutron-ovn-maintenance-worker --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini)"
│ │ ├─devstack@neutron-periodic-workers.service
│ │ │ ├─104263 "neutron-periodic-workers: master process [/opt/stack/data/venv/bin/neutron-periodic-workers --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini]"
│ │ │ ├─104984 "neutron-server: periodic worker (/opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/neutron-periodic-workers --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini)"
│ │ │ ├─104993 "neutron-server: periodic worker (/opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/neutron-periodic-workers --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini)"
│ │ │ ├─105004 "neutron-server: periodic worker (/opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/neutron-periodic-workers --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini)"
│ │ │ └─105017 "neutron-server: periodic worker (/opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/neutron-periodic-workers --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini)"
│ │ ├─devstack@neutron-rpc-server.service
│ │ │ ├─103751 "neutron-rpc-server: master process [/opt/stack/data/venv/bin/neutron-rpc-server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini]"
│ │ │ ├─104906 "neutron-server: rpc worker (/opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/neutron-rpc-server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini)"
│ │ │ └─104914 "neutron-server: rpc worker (/opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/neutron-rpc-server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini)"
│ │ ├─devstack@o-api.service
│ │ │ ├─123997 /bin/uwsgi --ini /etc/octavia/octavia-uwsgi.ini --venv /opt/stack/data/venv
│ │ │ ├─123998 /bin/uwsgi --ini /etc/octavia/octavia-uwsgi.ini --venv /opt/stack/data/venv
│ │ │ ├─123999 /bin/uwsgi --ini /etc/octavia/octavia-uwsgi.ini --venv /opt/stack/data/venv
│ │ │ ├─124000 /bin/uwsgi --ini /etc/octavia/octavia-uwsgi.ini --venv /opt/stack/data/venv
│ │ │ └─124001 /bin/uwsgi --ini /etc/octavia/octavia-uwsgi.ini --venv /opt/stack/data/venv
│ │ ├─devstack@o-da.service
│ │ │ ├─124527 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/octavia-driver-agent --config-file /etc/octavia/octavia.conf
│ │ │ ├─125280 "octavia-driver-agent - status_listener"
│ │ │ ├─125283 "octavia-driver-agent - stats_listener"
│ │ │ ├─125285 "octavia-driver-agent - get_listener"
│ │ │ └─125384 "octavia-driver-agent - provider_agent -- ovn"
│ │ ├─devstack@o-hk.service
│ │ │ └─125136 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/octavia-housekeeping --config-file /etc/octavia/octavia.conf
│ │ ├─devstack@openstack-cli-server.service
│ │ │ └─62235 /opt/stack/data/venv/bin/python3 /opt/stack/devstack/files/openstack-cli-server/openstack-cli-server
│ │ ├─devstack@placement-api.service
│ │ │ ├─105545 "placementuWSGI master"
│ │ │ ├─105547 "placementuWSGI worker 1"
│ │ │ ├─105548 "placementuWSGI worker 2"
│ │ │ ├─105549 "placementuWSGI worker 3"
│ │ │ └─105550 "placementuWSGI worker 4"
│ │ └─devstack@q-ovn-agent.service
│ │   ├─102144 "neutron-ovn-agent: master process [/opt/stack/data/venv/bin/neutron-ovn-agent --config-file /etc/neutron/plugins/ml2/ovn_agent.ini]"
│ │   ├─102627 "neutron-ovn-agent: ServiceWrapper worker(0)"
│ │   ├─102934 /opt/stack/data/venv/bin/python3.12 /usr/local/bin/privsep-helper --config-file /etc/neutron/plugins/ml2/ovn_agent.ini --privsep_context neutron.privileged.namespace_cmd --privsep_sock_path /tmp/tmp0w79_nvv/privsep.sock
│ │   ├─106395 /opt/stack/data/venv/bin/python3.12 /usr/local/bin/privsep-helper --config-file /etc/neutron/plugins/ml2/ovn_agent.ini --privsep_context neutron.privileged.default --privsep_sock_path /tmp/tmpcnh8_ng3/privsep.sock
│ │   ├─128352 /opt/stack/data/venv/bin/python3.12 /usr/local/bin/privsep-helper --config-file /etc/neutron/plugins/ml2/ovn_agent.ini --privsep_context neutron.privileged.link_cmd --privsep_sock_path /tmp/tmp7dfz2ixi/privsep.sock
│ │   ├─128782 sudo /opt/stack/data/venv/bin/neutron-rootwrap-daemon /etc/neutron/rootwrap.conf
│ │   ├─128784 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/neutron-rootwrap-daemon /etc/neutron/rootwrap.conf
│ │   └─128812 haproxy -f /opt/stack/data/neutron/ovn-metadata-proxy/1ddaf2af-8333-48ec-a71c-3dafdea80472.conf
│ ├─system-getty.slice
│ │ └─getty@tty1.service
│ │   └─726 /sbin/agetty -o "-p
-- \\u" --noclear - linux │ ├─system-serial\x2dgetty.slice │ │ └─serial-getty@ttyS0.service │ │ └─727 /sbin/agetty -o "-p -- \\u" --keep-baud 115200,57600,38400,9600 - vt220 │ ├─systemd-journald.service │ │ └─19011 /usr/lib/systemd/systemd-journald │ ├─systemd-logind.service │ │ └─712 /usr/lib/systemd/systemd-logind │ ├─systemd-machined.service │ │ └─42877 /usr/lib/systemd/systemd-machined │ ├─systemd-networkd.service │ │ └─602 /usr/lib/systemd/systemd-networkd │ ├─systemd-resolved.service │ │ └─460 /usr/lib/systemd/systemd-resolved │ ├─systemd-timesyncd.service │ │ └─463 /usr/lib/systemd/systemd-timesyncd │ ├─systemd-udevd.service │ │ └─udev │ │ └─453 /usr/lib/systemd/systemd-udevd │ ├─virtlockd.service │ │ └─43092 /usr/sbin/virtlockd │ └─virtlogd.service │ └─48673 /usr/sbin/virtlogd └─user.slice └─user-1000.slice ├─session-1.scope │ ├─ 828 "sshd: zuul [priv]" │ ├─ 849 "sshd: zuul@notty" │ ├─ 1054 /usr/bin/python3 │ ├─130815 sh -c "/bin/sh -c 'sudo -H -S -n -u root /bin/sh -c '\"'\"'echo BECOME-SUCCESS-teaxywwfhlmfpoullrqpxwihwgjieefl ; /usr/bin/python3'\"'\"' && sleep 0'" │ ├─130816 /bin/sh -c "sudo -H -S -n -u root /bin/sh -c 'echo BECOME-SUCCESS-teaxywwfhlmfpoullrqpxwihwgjieefl ; /usr/bin/python3' && sleep 0" │ ├─130817 sudo -H -S -n -u root /bin/sh -c "echo BECOME-SUCCESS-teaxywwfhlmfpoullrqpxwihwgjieefl ; /usr/bin/python3" │ ├─130818 /bin/sh -c "echo BECOME-SUCCESS-teaxywwfhlmfpoullrqpxwihwgjieefl ; /usr/bin/python3" │ ├─130819 /usr/bin/python3 │ ├─130820 /bin/bash -c "sudo iptables-save > /home/zuul/iptables.txt\n\n# NOTE(sfernand): Run 'df' with a 60s timeout to prevent hangs from\n# stale NFS mounts.\ntimeout -s 9 60s df -h > /home/zuul/df.txt || true\n# If 'df' times out, the mount output helps debug which NFS share\n# is unresponsive.\nmount > /home/zuul/mount.txt\n\nfor py_ver in 2 3; do\n if [[ \`which python\${py_ver}\` ]]; then\n python\${py_ver} -m pip freeze > /home/zuul/pip\${py_ver}-freeze.txt\n fi\ndone\n\nif [ \`command -v dpkg\` ]; then\n dpkg 
-l> /home/zuul/dpkg-l.txt\nfi\nif [ \`command -v rpm\` ]; then\n rpm -qa | sort > /home/zuul/rpm-qa.txt\nfi\n\n# Services status\nsudo systemctl status --all > services.txt 2>/dev/null\n\n# NOTE(kchamart) The 'audit.log' can be useful in cases when QEMU\n# failed to start due to denials from SELinux — useful for CentOS\n# and Fedora machines. For Ubuntu (which runs AppArmor), DevStack\n# already captures the contents of /var/log/kern.log (via\n# \`journalctl -t kernel\` redirected into syslog.txt.gz), which\n# contains AppArmor-related messages.\nif [ -f /var/log/audit/audit.log ] ; then\n sudo cp /var/log/audit/audit.log /home/zuul/audit.log &&\n chmod +r /home/zuul/audit.log;\nfi\n\n# gzip and save any coredumps in /var/core\nif [ -d /var/core ]; then\n sudo gzip -r /var/core\n sudo cp -r /var/core /home/zuul/\nfi\n\nsudo ss -lntup | grep ':53' > /home/zuul/listen53.txt\n\n# NOTE(andreaf) Service logs are already in logs/ thanks to the\n# export-devstack-journal log. Apache logs are under apache/ thanks to the\n# apache-logs-conf role.\ngrep -i deprecat /home/zuul/logs/*.txt /home/zuul/apache/*.log | \\\n sed -r 's/[0-9]{1,2}\\:[0-9]{1,2}\\:[0-9]{1,2}\\.[0-9]{1,3}/ /g' | \\\n sed -r 's/[0-9]{1,2}\\:[0-9]{1,2}\\:[0-9]{1,2}/ /g' | \\\n sed -r 's/[0-9]{1,4}-[0-9]{1,2}-[0-9]{1,4}/ /g' |\n sed -r 's/\\[.*\\]/ /g' | \\\n sed -r 's/\\s[0-9]+\\s/ /g' | \\\n awk '{if (\$0 in seen) {seen[\$0]++} else {out[++n]=\$0;seen[\$0]=1}} END { for (i=1; i<=n; i++) print seen[out[i]]\" :: \" out[i] }' > /home/zuul/deprecations.log\n" │ ├─130834 sudo systemctl status --all │ └─130835 systemctl status --all └─user@1000.service └─init.scope ├─833 /usr/lib/systemd/systemd --user └─834 "(sd-pam)" Feb 16 18:16:15 np0000155647 systemd[1]: Reloading... Feb 16 18:16:16 np0000155647 systemd[1]: Reloading finished in 326 ms. Feb 16 18:16:16 np0000155647 systemd[1]: Started devstack@m-sch.service - Devstack devstack@m-sch.service. 
Feb 16 18:16:17 np0000155647 systemd[1]: Reloading requested from client PID 128205 ('systemctl') (unit session-1.scope)... Feb 16 18:16:17 np0000155647 systemd[1]: Reloading... Feb 16 18:16:18 np0000155647 systemd[1]: Reloading finished in 324 ms. Feb 16 18:16:18 np0000155647 systemd[1]: Reloading requested from client PID 128296 ('systemctl') (unit session-1.scope)... Feb 16 18:16:18 np0000155647 systemd[1]: Reloading... Feb 16 18:16:18 np0000155647 systemd[1]: Reloading finished in 304 ms. Feb 16 18:16:18 np0000155647 systemd[1]: Started devstack@m-dat.service - Devstack devstack@m-dat.service. ● machine.slice - Virtual Machine and Container Slice Loaded: loaded (/usr/lib/systemd/system/machine.slice; static) Active: active since Mon 2026-02-16 18:02:19 UTC; 14min ago Docs: man:systemd.special(7) Tasks: 0 Memory: 0B (peak: 0B) CPU: 0 CGroup: /machine.slice Feb 16 18:02:19 np0000155647 systemd[1]: Created slice machine.slice - Virtual Machine and Container Slice. ● system-devstack.slice - Slice /system/devstack Loaded: loaded Active: active since Mon 2026-02-16 18:06:08 UTC; 10min ago Tasks: 546 Memory: 10.0G (peak: 10.0G) CPU: 5min 44.302s CGroup: /system.slice/system-devstack.slice ├─devstack@barbican-keystone-listener.service │ ├─118155 "barbican-keystone-listener: master process [/opt/stack/data/venv/bin/barbican-keystone-listener --config-file=/etc/barbican/barbican.conf]" │ └─118402 "barbican-keystone-listener: ServiceWrapper worker(0)" ├─devstack@barbican-retry.service │ ├─117625 "barbican-retry: master process [/opt/stack/data/venv/bin/barbican-retry --config-file=/etc/barbican/barbican.conf]" │ └─117918 "barbican-retry: ServiceWrapper worker(0)" ├─devstack@barbican-svc.service │ ├─117084 /opt/stack/data/venv/bin/uwsgi --ini /etc/barbican/barbican-uwsgi.ini --venv /opt/stack/data/venv │ ├─117085 /opt/stack/data/venv/bin/uwsgi --ini /etc/barbican/barbican-uwsgi.ini --venv /opt/stack/data/venv │ ├─117086 /opt/stack/data/venv/bin/uwsgi --ini 
/etc/barbican/barbican-uwsgi.ini --venv /opt/stack/data/venv │ ├─117087 /opt/stack/data/venv/bin/uwsgi --ini /etc/barbican/barbican-uwsgi.ini --venv /opt/stack/data/venv │ └─117088 /opt/stack/data/venv/bin/uwsgi --ini /etc/barbican/barbican-uwsgi.ini --venv /opt/stack/data/venv ├─devstack@c-api.service │ ├─112517 "cinder-apiuWSGI master" │ ├─112522 "cinder-apiuWSGI worker 1" │ ├─112523 "cinder-apiuWSGI worker 2" │ ├─112524 "cinder-apiuWSGI worker 3" │ └─112525 "cinder-apiuWSGI worker 4" ├─devstack@c-bak.service │ └─113815 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/cinder-backup --config-file /etc/cinder/cinder.conf ├─devstack@c-sch.service │ └─113235 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/cinder-scheduler --config-file /etc/cinder/cinder.conf ├─devstack@c-vol.service │ ├─114396 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/cinder-volume --config-file /etc/cinder/cinder.conf │ └─114685 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/cinder-volume --config-file /etc/cinder/cinder.conf ├─devstack@etcd.service │ └─64797 /opt/stack/bin/etcd --name np0000155647 --data-dir /opt/stack/data/etcd --initial-cluster-state new --initial-cluster-token etcd-cluster-01 --initial-cluster np0000155647=http://199.204.45.4:2380 --initial-advertise-peer-urls http://199.204.45.4:2380 --advertise-client-urls http://199.204.45.4:2379 --listen-peer-urls http://0.0.0.0:2380 --listen-client-urls http://199.204.45.4:2379 --log-level=debug ├─devstack@file_tracker.service │ ├─ 64151 /bin/bash /opt/stack/devstack/tools/file_tracker.sh │ └─129429 sleep 20 ├─devstack@g-api.service │ ├─115234 "glance-apiuWSGI master" │ ├─115235 "glance-apiuWSGI worker 1" │ ├─115236 "glance-apiuWSGI worker 2" │ ├─115237 "glance-apiuWSGI worker 3" │ └─115238 "glance-apiuWSGI worker 4" ├─devstack@keystone.service │ ├─66049 "keystoneuWSGI master" │ ├─66057 "keystoneuWSGI worker 1" │ ├─66058 "keystoneuWSGI worker 2" │ ├─66059 "keystoneuWSGI worker 3" │ 
└─66060 "keystoneuWSGI worker 4" ├─devstack@m-api.service │ ├─122152 "manila-apiuWSGI master" │ ├─122153 "manila-apiuWSGI worker 1" │ ├─122154 "manila-apiuWSGI worker 2" │ ├─122155 "manila-apiuWSGI worker 3" │ └─122156 "manila-apiuWSGI worker 4" ├─devstack@m-dat.service │ └─128394 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/manila-data --config-file /etc/manila/manila.conf ├─devstack@m-sch.service │ └─127822 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/manila-scheduler --config-file /etc/manila/manila.conf ├─devstack@m-shr.service │ ├─127286 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/manila-share --config-file /etc/manila/manila.conf │ └─127637 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/manila-share --config-file /etc/manila/manila.conf ├─devstack@magnum-api.service │ ├─119710 "magnum-apiuWSGI master" │ ├─119712 "magnum-apiuWSGI worker 1" │ ├─119713 "magnum-apiuWSGI worker 2" │ ├─119714 "magnum-apiuWSGI worker 3" │ └─119715 "magnum-apiuWSGI worker 4" ├─devstack@magnum-cond.service │ ├─120306 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/magnum-conductor │ ├─120636 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/magnum-conductor │ ├─120638 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/magnum-conductor │ ├─120640 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/magnum-conductor │ ├─120641 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/magnum-conductor │ ├─120642 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/magnum-conductor │ ├─120644 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/magnum-conductor │ ├─120647 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/magnum-conductor │ ├─120648 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/magnum-conductor │ ├─120651 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/magnum-conductor │ ├─120653 
/opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/magnum-conductor │ ├─120655 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/magnum-conductor │ ├─120657 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/magnum-conductor │ ├─120659 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/magnum-conductor │ ├─120663 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/magnum-conductor │ ├─120668 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/magnum-conductor │ └─120669 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/magnum-conductor ├─devstack@memory_tracker.service │ ├─ 63656 /bin/bash /opt/stack/devstack/tools/memory_tracker.sh │ └─129419 sleep 20 ├─devstack@n-api-meta.service │ ├─108345 "nova-api-metauWSGI master" │ ├─108346 "nova-api-metauWSGI worker 1" │ ├─108347 "nova-api-metauWSGI worker 2" │ ├─108348 "nova-api-metauWSGI worker 3" │ ├─108349 "nova-api-metauWSGI worker 4" │ └─108350 "nova-api-metauWSGI http 1" ├─devstack@n-api.service │ ├─99874 "nova-apiuWSGI master" │ ├─99875 "nova-apiuWSGI worker 1" │ ├─99876 "nova-apiuWSGI worker 2" │ ├─99877 "nova-apiuWSGI worker 3" │ └─99878 "nova-apiuWSGI worker 4" ├─devstack@n-cond-cell1.service │ ├─110436 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/nova-conductor --config-file /etc/nova/nova_cell1.conf │ ├─111018 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/nova-conductor --config-file /etc/nova/nova_cell1.conf │ ├─111019 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/nova-conductor --config-file /etc/nova/nova_cell1.conf │ ├─111021 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/nova-conductor --config-file /etc/nova/nova_cell1.conf │ └─111022 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/nova-conductor --config-file /etc/nova/nova_cell1.conf ├─devstack@n-cpu.service │ └─111521 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/nova-compute --config-file 
/etc/nova/nova-cpu.conf ├─devstack@n-novnc-cell1.service │ └─109046 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/nova-novncproxy --config-file /etc/nova/nova_cell1.conf --web /opt/stack/novnc ├─devstack@n-sch.service │ ├─107735 "nova-scheduler: master process [/opt/stack/data/venv/bin/nova-scheduler --config-file /etc/nova/nova.conf]" │ ├─108462 "nova-scheduler: ServiceWrapper worker(0)" │ ├─108471 "nova-scheduler: ServiceWrapper worker(1)" │ ├─108480 "nova-scheduler: ServiceWrapper worker(2)" │ └─108488 "nova-scheduler: ServiceWrapper worker(3)" ├─devstack@n-super-cond.service │ ├─109828 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/nova-conductor --config-file /etc/nova/nova.conf │ ├─110421 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/nova-conductor --config-file /etc/nova/nova.conf │ ├─110422 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/nova-conductor --config-file /etc/nova/nova.conf │ ├─110423 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/nova-conductor --config-file /etc/nova/nova.conf │ └─110424 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/nova-conductor --config-file /etc/nova/nova.conf ├─devstack@neutron-api.service │ ├─103265 "neutron-apiuWSGI master" │ ├─103266 "neutron-apiuWSGI worker 1" │ ├─103267 "neutron-apiuWSGI worker 2" │ ├─103268 "neutron-apiuWSGI worker 3" │ └─103269 "neutron-apiuWSGI worker 4" ├─devstack@neutron-ovn-maintenance-worker.service │ ├─104760 "neutron-ovn-maintenance-worker: master process [/opt/stack/data/venv/bin/neutron-ovn-maintenance-worker --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini]" │ └─105533 "neutron-server: maintenance worker (/opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/neutron-ovn-maintenance-worker --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini)" ├─devstack@neutron-periodic-workers.service │ ├─104263 
"neutron-periodic-workers: master process [/opt/stack/data/venv/bin/neutron-periodic-workers --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini]" │ ├─104984 "neutron-server: periodic worker (/opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/neutron-periodic-workers --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini)" │ ├─104993 "neutron-server: periodic worker (/opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/neutron-periodic-workers --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini)" │ ├─105004 "neutron-server: periodic worker (/opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/neutron-periodic-workers --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini)" │ └─105017 "neutron-server: periodic worker (/opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/neutron-periodic-workers --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini)" ├─devstack@neutron-rpc-server.service │ ├─103751 "neutron-rpc-server: master process [/opt/stack/data/venv/bin/neutron-rpc-server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini]" │ ├─104906 "neutron-server: rpc worker (/opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/neutron-rpc-server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini)" │ └─104914 "neutron-server: rpc worker (/opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/neutron-rpc-server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini)" ├─devstack@o-api.service │ ├─123997 /bin/uwsgi --ini /etc/octavia/octavia-uwsgi.ini --venv /opt/stack/data/venv │ ├─123998 /bin/uwsgi --ini /etc/octavia/octavia-uwsgi.ini --venv /opt/stack/data/venv │ ├─123999 /bin/uwsgi --ini /etc/octavia/octavia-uwsgi.ini 
--venv /opt/stack/data/venv │ ├─124000 /bin/uwsgi --ini /etc/octavia/octavia-uwsgi.ini --venv /opt/stack/data/venv │ └─124001 /bin/uwsgi --ini /etc/octavia/octavia-uwsgi.ini --venv /opt/stack/data/venv ├─devstack@o-da.service │ ├─124527 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/octavia-driver-agent --config-file /etc/octavia/octavia.conf │ ├─125280 "octavia-driver-agent - status_listener" │ ├─125283 "octavia-driver-agent - stats_listener" │ ├─125285 "octavia-driver-agent - get_listener" │ └─125384 "octavia-driver-agent - provider_agent -- ovn" ├─devstack@o-hk.service │ └─125136 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/octavia-housekeeping --config-file /etc/octavia/octavia.conf ├─devstack@openstack-cli-server.service │ └─62235 /opt/stack/data/venv/bin/python3 /opt/stack/devstack/files/openstack-cli-server/openstack-cli-server ├─devstack@placement-api.service │ ├─105545 "placementuWSGI master" │ ├─105547 "placementuWSGI worker 1" │ ├─105548 "placementuWSGI worker 2" │ ├─105549 "placementuWSGI worker 3" │ └─105550 "placementuWSGI worker 4" └─devstack@q-ovn-agent.service ├─102144 "neutron-ovn-agent: master process [/opt/stack/data/venv/bin/neutron-ovn-agent --config-file /etc/neutron/plugins/ml2/ovn_agent.ini]" ├─102627 "neutron-ovn-agent: ServiceWrapper worker(0)" ├─102934 /opt/stack/data/venv/bin/python3.12 /usr/local/bin/privsep-helper --config-file /etc/neutron/plugins/ml2/ovn_agent.ini --privsep_context neutron.privileged.namespace_cmd --privsep_sock_path /tmp/tmp0w79_nvv/privsep.sock ├─106395 /opt/stack/data/venv/bin/python3.12 /usr/local/bin/privsep-helper --config-file /etc/neutron/plugins/ml2/ovn_agent.ini --privsep_context neutron.privileged.default --privsep_sock_path /tmp/tmpcnh8_ng3/privsep.sock ├─128352 /opt/stack/data/venv/bin/python3.12 /usr/local/bin/privsep-helper --config-file /etc/neutron/plugins/ml2/ovn_agent.ini --privsep_context neutron.privileged.link_cmd --privsep_sock_path /tmp/tmp7dfz2ixi/privsep.sock 
├─128782 sudo /opt/stack/data/venv/bin/neutron-rootwrap-daemon /etc/neutron/rootwrap.conf ├─128784 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/neutron-rootwrap-daemon /etc/neutron/rootwrap.conf └─128812 haproxy -f /opt/stack/data/neutron/ovn-metadata-proxy/1ddaf2af-8333-48ec-a71c-3dafdea80472.conf Feb 16 18:16:59 np0000155647 devstack@neutron-api.service[103269]: DEBUG futurist.periodics [-] Submitting periodic callback 'neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.maintenance.HashRingHealthCheckPeriodics.touch_hash_ring_node' {{(pid=103269) _process_scheduled /opt/stack/data/venv/lib/python3.12/site-packages/futurist/periodics.py:638}} Feb 16 18:16:59 np0000155647 devstack@neutron-api.service[103269]: DEBUG neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.maintenance [-] Touching Hash Ring node "fb185414f56850ae97a8f47af55e8742" from periodic health check thread {{(pid=103269) touch_hash_ring_node /opt/stack/neutron/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/maintenance.py:1135}} Feb 16 18:16:59 np0000155647 neutron-periodic-workers[104993]: DEBUG neutron.db.agents_db [None req-19412641-6528-4ef2-a8b8-9b37884866cb None None] Agent healthcheck: found 0 active agents {{(pid=104993) agent_health_check /opt/stack/neutron/neutron/db/agents_db.py:317}} Feb 16 18:16:59 np0000155647 neutron-periodic-workers[104993]: DEBUG oslo.service.backend._threading.loopingcall [None req-19412641-6528-4ef2-a8b8-9b37884866cb None None] Fixed interval looping call 'neutron.plugins.ml2.plugin.AgentDbMixin.agent_health_check' sleeping for 36.99 seconds {{(pid=104993) _run_loop /opt/stack/data/venv/lib/python3.12/site-packages/oslo_service/backend/_threading/loopingcall.py:125}} Feb 16 18:16:59 np0000155647 devstack@neutron-api.service[103267]: DEBUG futurist.periodics [-] Submitting periodic callback 'neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.maintenance.HashRingHealthCheckPeriodics.touch_hash_ring_node' {{(pid=103267) _process_scheduled 
/opt/stack/data/venv/lib/python3.12/site-packages/futurist/periodics.py:638}} Feb 16 18:16:59 np0000155647 devstack@neutron-api.service[103267]: DEBUG neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.maintenance [-] Touching Hash Ring node "d4eed7dc4bb457c59ece68e8f64e8256" from periodic health check thread {{(pid=103267) touch_hash_ring_node /opt/stack/neutron/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/maintenance.py:1135}} Feb 16 18:16:59 np0000155647 devstack@neutron-api.service[103268]: DEBUG futurist.periodics [-] Submitting periodic callback 'neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.maintenance.HashRingHealthCheckPeriodics.touch_hash_ring_node' {{(pid=103268) _process_scheduled /opt/stack/data/venv/lib/python3.12/site-packages/futurist/periodics.py:638}} Feb 16 18:16:59 np0000155647 devstack@neutron-api.service[103268]: DEBUG neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.maintenance [-] Touching Hash Ring node "ea52a344d585548db743727dad47a3e3" from periodic health check thread {{(pid=103268) touch_hash_ring_node /opt/stack/neutron/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/maintenance.py:1135}} Feb 16 18:17:00 np0000155647 magnum-conductor[120306]: DEBUG oslo_service.periodic_task [-] Running periodic task MagnumPeriodicTasks.sync_cluster_status {{(pid=120306) run_periodic_tasks /opt/stack/data/venv/lib/python3.12/site-packages/oslo_service/periodic_task.py:210}} Feb 16 18:17:00 np0000155647 magnum-conductor[120306]: DEBUG magnum.service.periodic [None req-ba458089-2e20-4fbf-a4c1-555b65158bb6 None None] Starting to sync up cluster status {{(pid=120306) sync_cluster_status /opt/stack/magnum/magnum/service/periodic.py:182}} ● system-getty.slice - Slice /system/getty Loaded: loaded Active: active since Mon 2026-02-16 17:51:02 UTC; 25min ago Tasks: 1 Memory: 336.0K (peak: 1.8M) CPU: 20ms CGroup: /system.slice/system-getty.slice └─getty@tty1.service └─726 /sbin/agetty -o "-p -- \\u" --noclear - linux Notice: journal has been rotated 
since unit was started, output may be incomplete. ● system-modprobe.slice - Slice /system/modprobe Loaded: loaded Active: active since Mon 2026-02-16 17:51:02 UTC; 25min ago Tasks: 0 Memory: 80.0K (peak: 4.6M) CPU: 81ms CGroup: /system.slice/system-modprobe.slice Notice: journal has been rotated since unit was started, output may be incomplete. ● system-serial\x2dgetty.slice - Slice /system/serial-getty Loaded: loaded Active: active since Mon 2026-02-16 17:51:02 UTC; 25min ago Tasks: 1 Memory: 284.0K (peak: 1.8M) CPU: 19ms CGroup: /system.slice/system-serial\x2dgetty.slice └─serial-getty@ttyS0.service └─727 /sbin/agetty -o "-p -- \\u" --keep-baud 115200,57600,38400,9600 - vt220 Notice: journal has been rotated since unit was started, output may be incomplete. ● system.slice - System Slice Loaded: loaded Active: active since Mon 2026-02-16 17:51:02 UTC; 25min ago Docs: man:systemd.special(7) Tasks: 1713 Memory: 15.9G (peak: 16.0G) CPU: 17min 14.684s CGroup: /system.slice ├─apache-htcacheclean.service │ └─14203 /usr/bin/htcacheclean -d 120 -p /var/cache/apache2/mod_cache_disk -l 300M -n ├─apache2.service │ ├─122537 /usr/sbin/apache2 -k start │ ├─122541 /usr/sbin/apache2 -k start │ └─122542 /usr/sbin/apache2 -k start ├─containerd.service │ ├─20631 /usr/bin/containerd │ └─21507 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 089fb10c98a483eac1b5cbd49445b3386ad4740a6877edc01e9209d9dbf220c7 -address /run/containerd/containerd.sock ├─dbus.service │ └─708 @dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation --syslog-only ├─dm-event.service │ └─115151 /usr/sbin/dmeventd -f ├─docker-089fb10c98a483eac1b5cbd49445b3386ad4740a6877edc01e9209d9dbf220c7.scope │ ├─init.scope │ │ └─21530 /sbin/init │ ├─kubelet.slice │ │ ├─kubelet-kubepods.slice │ │ │ ├─kubelet-kubepods-besteffort.slice │ │ │ │ ├─kubelet-kubepods-besteffort-pod060980bd_94df_4b77_8c4f_85019165ff36.slice │ │ │ │ │ 
├─cri-containerd-1433ec82c7d58a1bd88b32542ecb0883d1dd9b071469fb95c79ac08ced6611d2.scope │ │ │ │ │ │ └─25177 /manager --leader-elect --diagnostics-address=:8443 --insecure-diagnostics=false --feature-gates=MachinePool=true,KubeadmBootstrapFormatIgnition=true,PriorityQueue=false --bootstrap-token-ttl=15m │ │ │ │ │ └─cri-containerd-5793c32bcc0436f8fe34a586a9c3d691c3891ff8e512284b4424309d965cf4de.scope │ │ │ │ │ └─24819 /pause │ │ │ │ ├─kubelet-kubepods-besteffort-pod060f8598_7528_45b1_b3c5_0ca523a34f10.slice │ │ │ │ │ ├─cri-containerd-a9b853c1562e2d9b5d25818d03170c0ae004e0c3bdcba5684225a09905a1b752.scope │ │ │ │ │ │ └─24755 /pause │ │ │ │ │ └─cri-containerd-f03af497176e3521a17e482305315b46ef9a3f06f8def1aa9d3e6b9f8a165825.scope │ │ │ │ │ └─25002 /manager --leader-elect --diagnostics-address=:8443 --insecure-diagnostics=false --feature-gates=MachinePool=true,ClusterResourceSet=true,ClusterTopology=true,RuntimeSDK=false,MachineSetPreflightChecks=true,MachineWaitForVolumeDetachConsiderVolumeAttachments=true,PriorityQueue=false │ │ │ │ ├─kubelet-kubepods-besteffort-pod15afed8a_99d8_4a13_9c07_038039770363.slice │ │ │ │ │ ├─cri-containerd-716b23ff544bc41fa5015a42d703c5554339e9b24ab33edcd77f5b2026964d08.scope │ │ │ │ │ │ └─25086 /pause │ │ │ │ │ └─cri-containerd-d9dba19631b25e52efc68930f07a9a022f59c9244d7050bc186b9e4d87d4e755.scope │ │ │ │ │ └─25471 /manager --leader-elect --v=2 --diagnostics-address=127.0.0.1:8080 --insecure-diagnostics=true │ │ │ │ ├─kubelet-kubepods-besteffort-pod18fcda39_476d_4f2a_b389_6fff818f42ae.slice │ │ │ │ │ ├─cri-containerd-10c28a6f48eec284148f84ce3e47fe3004a9cd7bbf1f9abc58e6814017b471be.scope │ │ │ │ │ │ └─23824 /pause │ │ │ │ │ └─cri-containerd-720d0f6651c8177a77c9230bf86a48de597bc5c1ec5db6e60bd66fe90064648e.scope │ │ │ │ │ └─24002 local-path-provisioner --debug start --helper-image docker.io/kindest/local-path-helper:v20220607-9a4d8d2a --config /etc/config/config.json │ │ │ │ 
├─kubelet-kubepods-besteffort-pod2bdc6e4c_0088_47cb_be88_ab92547b89ae.slice │ │ │ │ │ ├─cri-containerd-5e10cc4be68610c8fdfce2d97284fa2a72defecd8f9dbf1195ed33469751fc11.scope │ │ │ │ │ │ └─23841 /pause │ │ │ │ │ └─cri-containerd-6604ce9efc54877b4d953350f7e65a5f444d760019403df0dbdd867c03f80c27.scope │ │ │ │ │ └─24200 /app/cmd/controller/controller --v=2 --cluster-resource-namespace=cert-manager --leader-election-namespace=kube-system --acme-http01-solver-image=quay.io/jetstack/cert-manager-acmesolver:v1.18.1 --max-concurrent-challenges=60 │ │ │ │ ├─kubelet-kubepods-besteffort-pod60d2d5fb_575b_4758_90d1_81d8244a7f54.slice │ │ │ │ │ ├─cri-containerd-5c88b5d8462c6a19822bef36e5c767ce579a2df6df3f201d51bb2c4a584c158d.scope │ │ │ │ │ │ └─24971 /pause │ │ │ │ │ └─cri-containerd-75f147194ebc30a5d1f5ba46bc89ad0ee081af58bddf008e5031627017ed8994.scope │ │ │ │ │ └─25312 /manager --leader-elect --diagnostics-address=:8443 --insecure-diagnostics=false --feature-gates=MachinePool=true,ClusterTopology=true,KubeadmBootstrapFormatIgnition=true,PriorityQueue=false │ │ │ │ ├─kubelet-kubepods-besteffort-pod96cc9fa3_1069_4840_9c97_4d69571ebb29.slice │ │ │ │ │ ├─cri-containerd-49b695fce71dcafd713509840486eda46a4808bc556797e8cbb013c178adf453.scope │ │ │ │ │ │ └─24091 /pause │ │ │ │ │ └─cri-containerd-6d62f3e77e85aff076f4bf174cf00cbbe5d08b7b01c315a48b33f121763c8447.scope │ │ │ │ │ └─24533 /app/cmd/webhook/webhook --v=2 --secure-port=10250 --dynamic-serving-ca-secret-namespace=cert-manager --dynamic-serving-ca-secret-name=cert-manager-webhook-ca --dynamic-serving-dns-names=cert-manager-webhook --dynamic-serving-dns-names=cert-manager-webhook.cert-manager --dynamic-serving-dns-names=cert-manager-webhook.cert-manager.svc │ │ │ │ ├─kubelet-kubepods-besteffort-pod9f8355b0_94bd_475d_bf74_9d386d0f5259.slice │ │ │ │ │ ├─cri-containerd-7a0648fca673c15649668c644af739b895a375f8e0a5ec9c7cf267b897aa4815.scope │ │ │ │ │ │ └─24078 /pause │ │ │ │ │ 
└─cri-containerd-f55b671d3ac5e81b9dc93f8fe60e82166337b18ff359ea71e86690ffa57838b4.scope │ │ │ │ │ └─24411 /app/cmd/cainjector/cainjector --v=2 --leader-election-namespace=kube-system │ │ │ │ └─kubelet-kubepods-besteffort-podeba8cea0_a113_40e4_8af9_f9092b483360.slice │ │ │ │ ├─cri-containerd-05883a8b5dcc6186bc92366d098adf91aacc2716c5f4654d5faee7250595663b.scope │ │ │ │ │ └─23221 /pause │ │ │ │ └─cri-containerd-68a9f7e8b2f1594edc9ae113bf7569a1d5ed85bb82562eb4364f5331c9f598ca.scope │ │ │ │ └─23271 /usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=kind-control-plane │ │ │ ├─kubelet-kubepods-burstable.slice │ │ │ │ ├─kubelet-kubepods-burstable-pod0656ab70da313d6449b17f099a2a3110.slice │ │ │ │ │ ├─cri-containerd-6d4579b16512918eddfa28c91b9b82464468be359a2a61c9fea7dc7b7ab46364.scope │ │ │ │ │ │ └─22509 etcd --advertise-client-urls=https://172.18.0.2:2379 --cert-file=/etc/kubernetes/pki/etcd/server.crt --client-cert-auth=true --data-dir=/var/lib/etcd --experimental-initial-corrupt-check=true --experimental-watch-progress-notify-interval=5s --initial-advertise-peer-urls=https://172.18.0.2:2380 --initial-cluster=kind-control-plane=https://172.18.0.2:2380 --key-file=/etc/kubernetes/pki/etcd/server.key --listen-client-urls=https://127.0.0.1:2379,https://172.18.0.2:2379 --listen-metrics-urls=http://127.0.0.1:2381 --listen-peer-urls=https://172.18.0.2:2380 --name=kind-control-plane --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt --peer-client-cert-auth=true --peer-key-file=/etc/kubernetes/pki/etcd/peer.key --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt --snapshot-count=10000 --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt │ │ │ │ │ └─cri-containerd-a181437f07c759cb357dc1c5a4915703fdbc8421ac7dbe7ae420af7bbf5ce962.scope │ │ │ │ │ └─22255 /pause │ │ │ │ ├─kubelet-kubepods-burstable-pod53ff6c8abd472f64bc9a9afbd3a471a9.slice │ │ │ │ │ ├─cri-containerd-9213f7ed733001c44536ef277d2f04abbec812060f0464459c296548b41c605b.scope │ │ │ │ │ │ 
└─22269 /pause │ │ │ │ │ └─cri-containerd-e5efc56a027eace488dd3cff0e461733af3798de3cb89fefc0a233cd6d868383.scope │ │ │ │ │ └─22372 kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf --bind-address=127.0.0.1 --client-ca-file=/etc/kubernetes/pki/ca.crt --cluster-cidr=10.244.0.0/16 --cluster-name=kind --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt --cluster-signing-key-file=/etc/kubernetes/pki/ca.key "--controllers=*,bootstrapsigner,tokencleaner" --enable-hostpath-provisioner=true --kubeconfig=/etc/kubernetes/controller-manager.conf --leader-elect=true --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --root-ca-file=/etc/kubernetes/pki/ca.crt --service-account-private-key-file=/etc/kubernetes/pki/sa.key --service-cluster-ip-range=10.96.0.0/16 --use-service-account-credentials=true │ │ │ │ ├─kubelet-kubepods-burstable-pod65d25134_75a8_44c0_b994_37071db70c0b.slice │ │ │ │ │ ├─cri-containerd-0845db3249e178e867783ec1010a54509a9a6f20e675db3c40bafc17b94aff4a.scope │ │ │ │ │ │ └─23596 /pause │ │ │ │ │ └─cri-containerd-a1bcb37a57c99f9a954339f4c95765996f1fc2161db6fc87722931f900073eac.scope │ │ │ │ │ └─23685 /coredns -conf /etc/coredns/Corefile │ │ │ │ ├─kubelet-kubepods-burstable-pod922d5a86_cf0c_4898_9361_4f7a1724917a.slice │ │ │ │ │ ├─cri-containerd-5f0448337760a1a92cb526a9ebe910cc2ad19bc77e1dfc4e2691cc1e11943675.scope │ │ │ │ │ │ └─23927 /pause │ │ │ │ │ └─cri-containerd-d0f8a8d96527dbcca96f1dd0492e8b1ba70ee11008c068b797069a257b450b1d.scope │ │ │ │ │ └─24314 /manager --metrics-bind-address=:8443 --leader-elect --health-probe-bind-address=:8081 │ │ │ │ ├─kubelet-kubepods-burstable-podbee69ab63b6471d4da666ee970746eae.slice │ │ │ │ │ ├─cri-containerd-5191216187fdcf29cecbd6140d2896e536a2153a60d5d507dc419302faacfe0d.scope │ │ │ │ │ │ └─22253 /pause │ │ │ │ │ 
└─cri-containerd-92b6f098aaae83573340f2ea18f968ceaff832acd7b11fb4c99b6ac6d401b2fe.scope │ │ │ │ │ └─22350 kube-scheduler --authentication-kubeconfig=/etc/kubernetes/scheduler.conf --authorization-kubeconfig=/etc/kubernetes/scheduler.conf --bind-address=127.0.0.1 --kubeconfig=/etc/kubernetes/scheduler.conf --leader-elect=true │ │ │ │ ├─kubelet-kubepods-burstable-podcbd4ee29_9a60_4f24_babe_75a79e0262a8.slice │ │ │ │ │ ├─cri-containerd-45366b6128ee283ed8e551bd4cfbb23d9e41caf8e9df563caa159264f5fb5a8f.scope │ │ │ │ │ │ └─23604 /pause │ │ │ │ │ └─cri-containerd-c82673298311c208438753cc6f9980d181abc401eaf370dd7390fdbc968f243a.scope │ │ │ │ │ └─23676 /coredns -conf /etc/coredns/Corefile │ │ │ │ └─kubelet-kubepods-burstable-podef6ebc9842be361e05ebdb6790c540b6.slice │ │ │ │ ├─cri-containerd-048d36383cc06e32dd1ab4003dee6a9343d2ccda4cb4eb939db648f16ac6f620.scope │ │ │ │ │ └─22272 /pause │ │ │ │ └─cri-containerd-d4a9d2a347b177fb443b9691e9438d1c0ee06ea2f1d19bf68afb66b1353f589c.scope │ │ │ │ └─22412 kube-apiserver --advertise-address=172.18.0.2 --allow-privileged=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/pki/ca.crt --enable-admission-plugins=NodeRestriction --enable-bootstrap-token-auth=true --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key --etcd-servers=https://127.0.0.1:2379 --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --runtime-config= --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/etc/kubernetes/pki/sa.pub --service-account-signing-key-file=/etc/kubernetes/pki/sa.key --service-cluster-ip-range=10.96.0.0/16 --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --tls-private-key-file=/etc/kubernetes/pki/apiserver.key │ │ │ └─kubelet-kubepods-podad85b7c2_f9f9_4ec9_b260_341f20aa22ff.slice │ │ │ ├─cri-containerd-840e0998fae8407b6859c75f3049d4116708e1a099a2af82c2a231ec21648183.scope │ │ │ │ └─23228 /pause │ │ │ └─cri-containerd-8a5e3ce32b811e69fef7d0bd0b708db17b4ffe5f3648638f8d7369dee746a825.scope │ │ │ └─23315 /bin/kindnetd │ │ └─kubelet.service │ │ └─22594 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --node-ip=172.18.0.2 --node-labels= --pod-infra-container-image=registry.k8s.io/pause:3.8 --provider-id=kind://docker/kind/kind-control-plane --fail-swap-on=false --cgroup-root=/kubelet │ └─system.slice │ ├─containerd.service │ │ ├─21726 /usr/local/bin/containerd │ │ ├─22165 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id a181437f07c759cb357dc1c5a4915703fdbc8421ac7dbe7ae420af7bbf5ce962 -address /run/containerd/containerd.sock │ │ ├─22172 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id 5191216187fdcf29cecbd6140d2896e536a2153a60d5d507dc419302faacfe0d -address /run/containerd/containerd.sock │ │ ├─22182 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id 048d36383cc06e32dd1ab4003dee6a9343d2ccda4cb4eb939db648f16ac6f620 -address /run/containerd/containerd.sock │ │ ├─22207 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id 9213f7ed733001c44536ef277d2f04abbec812060f0464459c296548b41c605b -address /run/containerd/containerd.sock │ │ 
├─23175 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id 05883a8b5dcc6186bc92366d098adf91aacc2716c5f4654d5faee7250595663b -address /run/containerd/containerd.sock │ │ ├─23197 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id 840e0998fae8407b6859c75f3049d4116708e1a099a2af82c2a231ec21648183 -address /run/containerd/containerd.sock │ │ ├─23556 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id 45366b6128ee283ed8e551bd4cfbb23d9e41caf8e9df563caa159264f5fb5a8f -address /run/containerd/containerd.sock │ │ ├─23564 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id 0845db3249e178e867783ec1010a54509a9a6f20e675db3c40bafc17b94aff4a -address /run/containerd/containerd.sock │ │ ├─23750 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id 10c28a6f48eec284148f84ce3e47fe3004a9cd7bbf1f9abc58e6814017b471be -address /run/containerd/containerd.sock │ │ ├─23778 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id 5e10cc4be68610c8fdfce2d97284fa2a72defecd8f9dbf1195ed33469751fc11 -address /run/containerd/containerd.sock │ │ ├─23907 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id 5f0448337760a1a92cb526a9ebe910cc2ad19bc77e1dfc4e2691cc1e11943675 -address /run/containerd/containerd.sock │ │ ├─24027 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id 7a0648fca673c15649668c644af739b895a375f8e0a5ec9c7cf267b897aa4815 -address /run/containerd/containerd.sock │ │ ├─24053 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id 49b695fce71dcafd713509840486eda46a4808bc556797e8cbb013c178adf453 -address /run/containerd/containerd.sock │ │ ├─24730 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id a9b853c1562e2d9b5d25818d03170c0ae004e0c3bdcba5684225a09905a1b752 -address /run/containerd/containerd.sock │ │ ├─24799 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id 5793c32bcc0436f8fe34a586a9c3d691c3891ff8e512284b4424309d965cf4de -address /run/containerd/containerd.sock │ │ ├─24951 
/usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id 5c88b5d8462c6a19822bef36e5c767ce579a2df6df3f201d51bb2c4a584c158d -address /run/containerd/containerd.sock │ │ └─25067 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id 716b23ff544bc41fa5015a42d703c5554339e9b24ab33edcd77f5b2026964d08 -address /run/containerd/containerd.sock │ └─systemd-journald.service │ └─21712 /lib/systemd/systemd-journald ├─docker.service │ ├─20760 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock │ └─21600 /usr/bin/docker-proxy -proto tcp -host-ip 127.0.0.1 -host-port 36617 -container-ip 172.18.0.2 -container-port 6443 -use-listen-fd ├─epmd.service │ └─26075 /usr/bin/epmd -systemd ├─fsidd.service │ └─54639 /usr/sbin/fsidd ├─haproxy.service │ ├─13241 /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -S /run/haproxy-master.sock │ └─13243 /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -S /run/haproxy-master.sock ├─iscsid.service │ ├─44108 /usr/sbin/iscsid │ └─44109 /usr/sbin/iscsid ├─ksmtuned.service │ ├─ 5045 /bin/bash /usr/sbin/ksmtuned │ └─130723 sleep 60 ├─libvirtd.service │ ├─ 42979 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/lib/libvirt/libvirt_leaseshelper │ ├─ 42980 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/lib/libvirt/libvirt_leaseshelper │ └─112159 /usr/sbin/libvirtd --timeout 120 ├─memcached.service │ └─66471 /usr/bin/memcached -m 64 -p 11211 -u memcache -l 127.0.0.1 -l ::1 -P /var/run/memcached/memcached.pid ├─mysql.service │ └─62997 /usr/sbin/mysqld ├─nfs-blkmap.service │ └─54644 /usr/sbin/blkmapd ├─nfs-idmapd.service │ └─54647 /usr/sbin/rpc.idmapd ├─nfs-mountd.service │ └─54657 /usr/sbin/rpc.mountd ├─nfsdcld.service │ └─54659 /usr/sbin/nfsdcld ├─nmbd.service │ └─55225 /usr/sbin/nmbd --foreground --no-process-group ├─ovn-controller-vtep.service │ └─100859 ovn-controller-vtep 
-vconsole:emer -vsyslog:err -vfile:info --vtep-db=/var/run/openvswitch/db.sock --ovnsb-db=/var/run/ovn/ovnsb_db.sock --no-chdir --log-file=/var/log/ovn/ovn-controller-vtep.log --pidfile=/var/run/ovn/ovn-controller-vtep.pid --detach ├─ovn-controller.service │ └─101600 ovn-controller unix:/var/run/openvswitch/db.sock -vconsole:emer -vsyslog:err -vfile:info --no-chdir --log-file=/var/log/ovn/ovn-controller.log --pidfile=/var/run/ovn/ovn-controller.pid --detach ├─ovn-northd.service │ └─101269 ovn-northd -vconsole:emer -vsyslog:err -vfile:info --ovnnb-db=unix:/var/run/ovn/ovnnb_db.sock --ovnsb-db=unix:/var/run/ovn/ovnsb_db.sock --no-chdir --log-file=/var/log/ovn/ovn-northd.log --pidfile=/var/run/ovn/ovn-northd.pid --detach ├─ovn-ovsdb-server-nb.service │ └─101194 ovsdb-server -vconsole:off -vfile:info --log-file=/var/log/ovn/ovsdb-server-nb.log --remote=punix:/var/run/ovn/ovnnb_db.sock --pidfile=/var/run/ovn/ovnnb_db.pid --unixctl=/var/run/ovn/ovnnb_db.ctl --remote=db:OVN_Northbound,NB_Global,connections --private-key=db:OVN_Northbound,SSL,private_key --certificate=db:OVN_Northbound,SSL,certificate --ca-cert=db:OVN_Northbound,SSL,ca_cert --ssl-protocols=db:OVN_Northbound,SSL,ssl_protocols --ssl-ciphers=db:OVN_Northbound,SSL,ssl_ciphers /var/lib/ovn/ovnnb_db.db ├─ovn-ovsdb-server-sb.service │ └─101199 ovsdb-server -vconsole:off -vfile:info --log-file=/var/log/ovn/ovsdb-server-sb.log --remote=punix:/var/run/ovn/ovnsb_db.sock --pidfile=/var/run/ovn/ovnsb_db.pid --unixctl=/var/run/ovn/ovnsb_db.ctl --remote=db:OVN_Southbound,SB_Global,connections --private-key=db:OVN_Southbound,SSL,private_key --certificate=db:OVN_Southbound,SSL,certificate --ca-cert=db:OVN_Southbound,SSL,ca_cert --ssl-protocols=db:OVN_Southbound,SSL,ssl_protocols --ssl-ciphers=db:OVN_Southbound,SSL,ssl_ciphers /var/lib/ovn/ovnsb_db.db ├─ovs-vswitchd.service │ └─100770 ovs-vswitchd unix:/var/run/openvswitch/db.sock -vconsole:emer -vsyslog:err -vfile:info --mlockall --no-chdir 
--log-file=/var/log/openvswitch/ovs-vswitchd.log --pidfile=/var/run/openvswitch/ovs-vswitchd.pid --detach ├─ovsdb-server.service │ └─100719 ovsdb-server /etc/openvswitch/conf.db -vconsole:emer -vsyslog:err -vfile:info --remote=punix:/var/run/openvswitch/db.sock --private-key=db:Open_vSwitch,SSL,private_key --certificate=db:Open_vSwitch,SSL,certificate --bootstrap-ca-cert=db:Open_vSwitch,SSL,ca_cert --no-chdir --log-file=/var/log/openvswitch/ovsdb-server.log --pidfile=/var/run/openvswitch/ovsdb-server.pid --detach ├─polkit.service │ └─745 /usr/lib/polkit-1/polkitd --no-debug ├─rabbitmq-server.service │ ├─26195 /usr/lib/erlang/erts-13.2.2.5/bin/beam.smp -W w -MBas ageffcbf -MHas ageffcbf -MBlmbcs 512 -MHlmbcs 512 -MMmcs 30 -pc unicode -P 1048576 -t 5000000 -stbt db -zdbbl 128000 -sbwt none -sbwtdcpu none -sbwtdio none -- -root /usr/lib/erlang -bindir /usr/lib/erlang/erts-13.2.2.5/bin -progname erl -- -home /var/lib/rabbitmq -- -pa "" -noshell -noinput -s rabbit boot -boot start_sasl -syslog logger "[]" -syslog syslog_error_logger false -kernel prevent_overlapping_partitions false -enable-feature maybe_expr │ ├─26205 erl_child_setup 65536 │ ├─26313 /usr/lib/erlang/erts-13.2.2.5/bin/inet_gethost 4 │ ├─26314 /usr/lib/erlang/erts-13.2.2.5/bin/inet_gethost 4 │ └─26319 /bin/sh -s rabbit_disk_monitor ├─rpc-statd.service │ └─54648 /usr/sbin/rpc.statd ├─rpcbind.service │ └─54015 /sbin/rpcbind -f -w ├─rsyslog.service │ └─125254 /usr/sbin/rsyslogd -n -iNONE ├─smbd.service │ ├─55156 /usr/sbin/smbd --foreground --no-process-group │ ├─55160 "smbd: notifyd" . 
│ └─55161 "smbd: cleanupd " ├─ssh.service │ └─746 "sshd: /usr/sbin/sshd -D [listener] 0 of 10-100 startups" ├─system-devstack.slice │ ├─devstack@barbican-keystone-listener.service │ │ ├─118155 "barbican-keystone-listener: master process [/opt/stack/data/venv/bin/barbican-keystone-listener --config-file=/etc/barbican/barbican.conf]" │ │ └─118402 "barbican-keystone-listener: ServiceWrapper worker(0)" │ ├─devstack@barbican-retry.service │ │ ├─117625 "barbican-retry: master process [/opt/stack/data/venv/bin/barbican-retry --config-file=/etc/barbican/barbican.conf]" │ │ └─117918 "barbican-retry: ServiceWrapper worker(0)" │ ├─devstack@barbican-svc.service │ │ ├─117084 /opt/stack/data/venv/bin/uwsgi --ini /etc/barbican/barbican-uwsgi.ini --venv /opt/stack/data/venv │ │ ├─117085 /opt/stack/data/venv/bin/uwsgi --ini /etc/barbican/barbican-uwsgi.ini --venv /opt/stack/data/venv │ │ ├─117086 /opt/stack/data/venv/bin/uwsgi --ini /etc/barbican/barbican-uwsgi.ini --venv /opt/stack/data/venv │ │ ├─117087 /opt/stack/data/venv/bin/uwsgi --ini /etc/barbican/barbican-uwsgi.ini --venv /opt/stack/data/venv │ │ └─117088 /opt/stack/data/venv/bin/uwsgi --ini /etc/barbican/barbican-uwsgi.ini --venv /opt/stack/data/venv │ ├─devstack@c-api.service │ │ ├─112517 "cinder-apiuWSGI master" │ │ ├─112522 "cinder-apiuWSGI worker 1" │ │ ├─112523 "cinder-apiuWSGI worker 2" │ │ ├─112524 "cinder-apiuWSGI worker 3" │ │ └─112525 "cinder-apiuWSGI worker 4" │ ├─devstack@c-bak.service │ │ └─113815 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/cinder-backup --config-file /etc/cinder/cinder.conf │ ├─devstack@c-sch.service │ │ └─113235 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/cinder-scheduler --config-file /etc/cinder/cinder.conf │ ├─devstack@c-vol.service │ │ ├─114396 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/cinder-volume --config-file /etc/cinder/cinder.conf │ │ └─114685 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/cinder-volume 
--config-file /etc/cinder/cinder.conf │ ├─devstack@etcd.service │ │ └─64797 /opt/stack/bin/etcd --name np0000155647 --data-dir /opt/stack/data/etcd --initial-cluster-state new --initial-cluster-token etcd-cluster-01 --initial-cluster np0000155647=http://199.204.45.4:2380 --initial-advertise-peer-urls http://199.204.45.4:2380 --advertise-client-urls http://199.204.45.4:2379 --listen-peer-urls http://0.0.0.0:2380 --listen-client-urls http://199.204.45.4:2379 --log-level=debug │ ├─devstack@file_tracker.service │ │ ├─ 64151 /bin/bash /opt/stack/devstack/tools/file_tracker.sh │ │ └─129429 sleep 20 │ ├─devstack@g-api.service │ │ ├─115234 "glance-apiuWSGI master" │ │ ├─115235 "glance-apiuWSGI worker 1" │ │ ├─115236 "glance-apiuWSGI worker 2" │ │ ├─115237 "glance-apiuWSGI worker 3" │ │ └─115238 "glance-apiuWSGI worker 4" │ ├─devstack@keystone.service │ │ ├─66049 "keystoneuWSGI master" │ │ ├─66057 "keystoneuWSGI worker 1" │ │ ├─66058 "keystoneuWSGI worker 2" │ │ ├─66059 "keystoneuWSGI worker 3" │ │ └─66060 "keystoneuWSGI worker 4" │ ├─devstack@m-api.service │ │ ├─122152 "manila-apiuWSGI master" │ │ ├─122153 "manila-apiuWSGI worker 1" │ │ ├─122154 "manila-apiuWSGI worker 2" │ │ ├─122155 "manila-apiuWSGI worker 3" │ │ └─122156 "manila-apiuWSGI worker 4" │ ├─devstack@m-dat.service │ │ └─128394 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/manila-data --config-file /etc/manila/manila.conf │ ├─devstack@m-sch.service │ │ └─127822 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/manila-scheduler --config-file /etc/manila/manila.conf │ ├─devstack@m-shr.service │ │ ├─127286 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/manila-share --config-file /etc/manila/manila.conf │ │ └─127637 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/manila-share --config-file /etc/manila/manila.conf │ ├─devstack@magnum-api.service │ │ ├─119710 "magnum-apiuWSGI master" │ │ ├─119712 "magnum-apiuWSGI worker 1" │ │ ├─119713 "magnum-apiuWSGI worker 2" │ 
│ ├─119714 "magnum-apiuWSGI worker 3" │ │ └─119715 "magnum-apiuWSGI worker 4" │ ├─devstack@magnum-cond.service │ │ ├─120306 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/magnum-conductor │ │ ├─120636 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/magnum-conductor │ │ ├─120638 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/magnum-conductor │ │ ├─120640 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/magnum-conductor │ │ ├─120641 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/magnum-conductor │ │ ├─120642 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/magnum-conductor │ │ ├─120644 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/magnum-conductor │ │ ├─120647 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/magnum-conductor │ │ ├─120648 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/magnum-conductor │ │ ├─120651 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/magnum-conductor │ │ ├─120653 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/magnum-conductor │ │ ├─120655 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/magnum-conductor │ │ ├─120657 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/magnum-conductor │ │ ├─120659 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/magnum-conductor │ │ ├─120663 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/magnum-conductor │ │ ├─120668 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/magnum-conductor │ │ └─120669 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/magnum-conductor │ ├─devstack@memory_tracker.service │ │ ├─ 63656 /bin/bash /opt/stack/devstack/tools/memory_tracker.sh │ │ └─129419 sleep 20 │ ├─devstack@n-api-meta.service │ │ ├─108345 "nova-api-metauWSGI master" │ │ ├─108346 "nova-api-metauWSGI worker 1" │ │ ├─108347 "nova-api-metauWSGI worker 2" │ │ ├─108348 "nova-api-metauWSGI worker 3" │ │ ├─108349 
"nova-api-metauWSGI worker 4" │ │ └─108350 "nova-api-metauWSGI http 1" │ ├─devstack@n-api.service │ │ ├─99874 "nova-apiuWSGI master" │ │ ├─99875 "nova-apiuWSGI worker 1" │ │ ├─99876 "nova-apiuWSGI worker 2" │ │ ├─99877 "nova-apiuWSGI worker 3" │ │ └─99878 "nova-apiuWSGI worker 4" │ ├─devstack@n-cond-cell1.service │ │ ├─110436 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/nova-conductor --config-file /etc/nova/nova_cell1.conf │ │ ├─111018 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/nova-conductor --config-file /etc/nova/nova_cell1.conf │ │ ├─111019 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/nova-conductor --config-file /etc/nova/nova_cell1.conf │ │ ├─111021 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/nova-conductor --config-file /etc/nova/nova_cell1.conf │ │ └─111022 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/nova-conductor --config-file /etc/nova/nova_cell1.conf │ ├─devstack@n-cpu.service │ │ └─111521 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/nova-compute --config-file /etc/nova/nova-cpu.conf │ ├─devstack@n-novnc-cell1.service │ │ └─109046 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/nova-novncproxy --config-file /etc/nova/nova_cell1.conf --web /opt/stack/novnc │ ├─devstack@n-sch.service │ │ ├─107735 "nova-scheduler: master process [/opt/stack/data/venv/bin/nova-scheduler --config-file /etc/nova/nova.conf]" │ │ ├─108462 "nova-scheduler: ServiceWrapper worker(0)" │ │ ├─108471 "nova-scheduler: ServiceWrapper worker(1)" │ │ ├─108480 "nova-scheduler: ServiceWrapper worker(2)" │ │ └─108488 "nova-scheduler: ServiceWrapper worker(3)" │ ├─devstack@n-super-cond.service │ │ ├─109828 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/nova-conductor --config-file /etc/nova/nova.conf │ │ ├─110421 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/nova-conductor --config-file /etc/nova/nova.conf │ │ ├─110422 
/opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/nova-conductor --config-file /etc/nova/nova.conf │ │ ├─110423 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/nova-conductor --config-file /etc/nova/nova.conf │ │ └─110424 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/nova-conductor --config-file /etc/nova/nova.conf │ ├─devstack@neutron-api.service │ │ ├─103265 "neutron-apiuWSGI master" │ │ ├─103266 "neutron-apiuWSGI worker 1" │ │ ├─103267 "neutron-apiuWSGI worker 2" │ │ ├─103268 "neutron-apiuWSGI worker 3" │ │ └─103269 "neutron-apiuWSGI worker 4" │ ├─devstack@neutron-ovn-maintenance-worker.service │ │ ├─104760 "neutron-ovn-maintenance-worker: master process [/opt/stack/data/venv/bin/neutron-ovn-maintenance-worker --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini]" │ │ └─105533 "neutron-server: maintenance worker (/opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/neutron-ovn-maintenance-worker --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini)" │ ├─devstack@neutron-periodic-workers.service │ │ ├─104263 "neutron-periodic-workers: master process [/opt/stack/data/venv/bin/neutron-periodic-workers --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini]" │ │ ├─104984 "neutron-server: periodic worker (/opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/neutron-periodic-workers --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini)" │ │ ├─104993 "neutron-server: periodic worker (/opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/neutron-periodic-workers --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini)" │ │ ├─105004 "neutron-server: periodic worker (/opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/neutron-periodic-workers --config-file /etc/neutron/neutron.conf --config-file 
/etc/neutron/plugins/ml2/ml2_conf.ini)" │ │ └─105017 "neutron-server: periodic worker (/opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/neutron-periodic-workers --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini)" │ ├─devstack@neutron-rpc-server.service │ │ ├─103751 "neutron-rpc-server: master process [/opt/stack/data/venv/bin/neutron-rpc-server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini]" │ │ ├─104906 "neutron-server: rpc worker (/opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/neutron-rpc-server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini)" │ │ └─104914 "neutron-server: rpc worker (/opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/neutron-rpc-server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini)" │ ├─devstack@o-api.service │ │ ├─123997 /bin/uwsgi --ini /etc/octavia/octavia-uwsgi.ini --venv /opt/stack/data/venv │ │ ├─123998 /bin/uwsgi --ini /etc/octavia/octavia-uwsgi.ini --venv /opt/stack/data/venv │ │ ├─123999 /bin/uwsgi --ini /etc/octavia/octavia-uwsgi.ini --venv /opt/stack/data/venv │ │ ├─124000 /bin/uwsgi --ini /etc/octavia/octavia-uwsgi.ini --venv /opt/stack/data/venv │ │ └─124001 /bin/uwsgi --ini /etc/octavia/octavia-uwsgi.ini --venv /opt/stack/data/venv │ ├─devstack@o-da.service │ │ ├─124527 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/octavia-driver-agent --config-file /etc/octavia/octavia.conf │ │ ├─125280 "octavia-driver-agent - status_listener" │ │ ├─125283 "octavia-driver-agent - stats_listener" │ │ ├─125285 "octavia-driver-agent - get_listener" │ │ └─125384 "octavia-driver-agent - provider_agent -- ovn" │ ├─devstack@o-hk.service │ │ └─125136 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/octavia-housekeeping --config-file /etc/octavia/octavia.conf │ ├─devstack@openstack-cli-server.service │ │ └─62235 
/opt/stack/data/venv/bin/python3 /opt/stack/devstack/files/openstack-cli-server/openstack-cli-server │ ├─devstack@placement-api.service │ │ ├─105545 "placementuWSGI master" │ │ ├─105547 "placementuWSGI worker 1" │ │ ├─105548 "placementuWSGI worker 2" │ │ ├─105549 "placementuWSGI worker 3" │ │ └─105550 "placementuWSGI worker 4" │ └─devstack@q-ovn-agent.service │ ├─102144 "neutron-ovn-agent: master process [/opt/stack/data/venv/bin/neutron-ovn-agent --config-file /etc/neutron/plugins/ml2/ovn_agent.ini]" │ ├─102627 "neutron-ovn-agent: ServiceWrapper worker(0)" │ ├─102934 /opt/stack/data/venv/bin/python3.12 /usr/local/bin/privsep-helper --config-file /etc/neutron/plugins/ml2/ovn_agent.ini --privsep_context neutron.privileged.namespace_cmd --privsep_sock_path /tmp/tmp0w79_nvv/privsep.sock │ ├─106395 /opt/stack/data/venv/bin/python3.12 /usr/local/bin/privsep-helper --config-file /etc/neutron/plugins/ml2/ovn_agent.ini --privsep_context neutron.privileged.default --privsep_sock_path /tmp/tmpcnh8_ng3/privsep.sock │ ├─128352 /opt/stack/data/venv/bin/python3.12 /usr/local/bin/privsep-helper --config-file /etc/neutron/plugins/ml2/ovn_agent.ini --privsep_context neutron.privileged.link_cmd --privsep_sock_path /tmp/tmp7dfz2ixi/privsep.sock │ ├─128782 sudo /opt/stack/data/venv/bin/neutron-rootwrap-daemon /etc/neutron/rootwrap.conf │ ├─128784 /opt/stack/data/venv/bin/python3.12 /opt/stack/data/venv/bin/neutron-rootwrap-daemon /etc/neutron/rootwrap.conf │ └─128812 haproxy -f /opt/stack/data/neutron/ovn-metadata-proxy/1ddaf2af-8333-48ec-a71c-3dafdea80472.conf ├─system-getty.slice │ └─getty@tty1.service │ └─726 /sbin/agetty -o "-p -- \\u" --noclear - linux ├─system-serial\x2dgetty.slice │ └─serial-getty@ttyS0.service │ └─727 /sbin/agetty -o "-p -- \\u" --keep-baud 115200,57600,38400,9600 - vt220 ├─systemd-journald.service │ └─19011 /usr/lib/systemd/systemd-journald ├─systemd-logind.service │ └─712 /usr/lib/systemd/systemd-logind ├─systemd-machined.service │ └─42877 
/usr/lib/systemd/systemd-machined ├─systemd-networkd.service │ └─602 /usr/lib/systemd/systemd-networkd ├─systemd-resolved.service │ └─460 /usr/lib/systemd/systemd-resolved ├─systemd-timesyncd.service │ └─463 /usr/lib/systemd/systemd-timesyncd ├─systemd-udevd.service │ └─udev │ └─453 /usr/lib/systemd/systemd-udevd ├─virtlockd.service │ └─43092 /usr/sbin/virtlockd └─virtlogd.service └─48673 /usr/sbin/virtlogd Feb 16 18:16:58 np0000155647 ovsdb-server[101194]: ovs|05325|poll_loop|DBG|wakeup due to 149-ms timeout at ../ovsdb/ovsdb-server.c:400 (0% CPU usage) Feb 16 18:16:58 np0000155647 ovsdb-server[101199]: ovs|03736|poll_loop|DBG|wakeup due to 1597-ms timeout at ../ovsdb/ovsdb-server.c:400 (0% CPU usage) Feb 16 18:16:59 np0000155647 ovsdb-server[101199]: ovs|03737|poll_loop|DBG|wakeup due to [POLLIN] on fd 22 (199.204.45.4:6642<->199.204.45.4:39564) at ../lib/stream-ssl.c:842 (0% CPU usage) Feb 16 18:16:59 np0000155647 ovsdb-server[101199]: ovs|03738|stream_ssl|DBG|server0<--ssl:199.204.45.4:39564 type 256 (5 bytes) Feb 16 18:16:59 np0000155647 ovsdb-server[101199]: ovs|03739|stream_ssl|DBG|server0<--ssl:199.204.45.4:39564 type 257 (1 bytes) Feb 16 18:16:59 np0000155647 ovsdb-server[101199]: ovs|03740|jsonrpc|DBG|ssl:199.204.45.4:39564: received request, method="echo", params=[], id="echo" Feb 16 18:16:59 np0000155647 ovsdb-server[101199]: ovs|03741|jsonrpc|DBG|ssl:199.204.45.4:39564: send reply, result=[], id="echo" Feb 16 18:16:59 np0000155647 ovsdb-server[101199]: ovs|03742|stream_ssl|DBG|server0-->ssl:199.204.45.4:39564 type 256 (5 bytes) Feb 16 18:16:59 np0000155647 ovsdb-server[101199]: ovs|03743|stream_ssl|DBG|server0-->ssl:199.204.45.4:39564 type 257 (1 bytes) Feb 16 18:16:59 np0000155647 ovsdb-server[101199]: ovs|03744|poll_loop|DBG|wakeup due to 0-ms timeout at ../lib/stream-ssl.c:844 (0% CPU usage) ● user-1000.slice - User Slice of UID 1000 Loaded: loaded Drop-In: /usr/lib/systemd/system/user-.slice.d └─10-defaults.conf Active: active since Mon 2026-02-16 
17:51:35 UTC; 25min ago Docs: man:user@.service(5) Tasks: 15 (limit: 169569) Memory: 23.3G (peak: 23.5G) CPU: 23min 24.208s CGroup: /user.slice/user-1000.slice ├─session-1.scope │ ├─ 828 "sshd: zuul [priv]" │ ├─ 849 "sshd: zuul@notty" │ ├─ 1054 /usr/bin/python3 │ ├─130815 sh -c "/bin/sh -c 'sudo -H -S -n -u root /bin/sh -c '\"'\"'echo BECOME-SUCCESS-teaxywwfhlmfpoullrqpxwihwgjieefl ; /usr/bin/python3'\"'\"' && sleep 0'" │ ├─130816 /bin/sh -c "sudo -H -S -n -u root /bin/sh -c 'echo BECOME-SUCCESS-teaxywwfhlmfpoullrqpxwihwgjieefl ; /usr/bin/python3' && sleep 0" │ ├─130817 sudo -H -S -n -u root /bin/sh -c "echo BECOME-SUCCESS-teaxywwfhlmfpoullrqpxwihwgjieefl ; /usr/bin/python3" │ ├─130818 /bin/sh -c "echo BECOME-SUCCESS-teaxywwfhlmfpoullrqpxwihwgjieefl ; /usr/bin/python3" │ ├─130819 /usr/bin/python3 │ ├─130820 /bin/bash -c "sudo iptables-save > /home/zuul/iptables.txt\n\n# NOTE(sfernand): Run 'df' with a 60s timeout to prevent hangs from\n# stale NFS mounts.\ntimeout -s 9 60s df -h > /home/zuul/df.txt || true\n# If 'df' times out, the mount output helps debug which NFS share\n# is unresponsive.\nmount > /home/zuul/mount.txt\n\nfor py_ver in 2 3; do\n if [[ \`which python\${py_ver}\` ]]; then\n python\${py_ver} -m pip freeze > /home/zuul/pip\${py_ver}-freeze.txt\n fi\ndone\n\nif [ \`command -v dpkg\` ]; then\n dpkg -l> /home/zuul/dpkg-l.txt\nfi\nif [ \`command -v rpm\` ]; then\n rpm -qa | sort > /home/zuul/rpm-qa.txt\nfi\n\n# Services status\nsudo systemctl status --all > services.txt 2>/dev/null\n\n# NOTE(kchamart) The 'audit.log' can be useful in cases when QEMU\n# failed to start due to denials from SELinux — useful for CentOS\n# and Fedora machines. 
For Ubuntu (which runs AppArmor), DevStack\n# already captures the contents of /var/log/kern.log (via\n# \`journalctl -t kernel\` redirected into syslog.txt.gz), which\n# contains AppArmor-related messages.\nif [ -f /var/log/audit/audit.log ] ; then\n sudo cp /var/log/audit/audit.log /home/zuul/audit.log &&\n chmod +r /home/zuul/audit.log;\nfi\n\n# gzip and save any coredumps in /var/core\nif [ -d /var/core ]; then\n sudo gzip -r /var/core\n sudo cp -r /var/core /home/zuul/\nfi\n\nsudo ss -lntup | grep ':53' > /home/zuul/listen53.txt\n\n# NOTE(andreaf) Service logs are already in logs/ thanks for the\n# export-devstack-journal log. Apache logs are under apache/ thans to the\n# apache-logs-conf role.\ngrep -i deprecat /home/zuul/logs/*.txt /home/zuul/apache/*.log | \\\n sed -r 's/[0-9]{1,2}\\:[0-9]{1,2}\\:[0-9]{1,2}\\.[0-9]{1,3}/ /g' | \\\n sed -r 's/[0-9]{1,2}\\:[0-9]{1,2}\\:[0-9]{1,2}/ /g' | \\\n sed -r 's/[0-9]{1,4}-[0-9]{1,2}-[0-9]{1,4}/ /g' |\n sed -r 's/\\[.*\\]/ /g' | \\\n sed -r 's/\\s[0-9]+\\s/ /g' | \\\n awk '{if (\$0 in seen) {seen[\$0]++} else {out[++n]=\$0;seen[\$0]=1}} END { for (i=1; i<=n; i++) print seen[out[i]]\" :: \" out[i] }' > /home/zuul/deprecations.log\n" │ ├─130834 sudo systemctl status --all │ └─130835 systemctl status --all └─user@1000.service └─init.scope ├─833 /usr/lib/systemd/systemd --user └─834 "(sd-pam)" Feb 16 18:16:57 np0000155647 python3[130809]: ansible-ansible.legacy.command Invoked with _raw_params=cp -pRL /etc/openstack /home/zuul/etc/ zuul_no_log=False zuul_log_id=0242ac17-0010-4345-bbbd-00000000002f-1-controller zuul_output_max_bytes=1073741824 zuul_ansible_split_streams=False _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 16 18:16:57 np0000155647 sudo[130807]: pam_unix(sudo:session): session closed for user root Feb 16 18:16:57 np0000155647 sudo[130817]: zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh 
-c 'echo BECOME-SUCCESS-teaxywwfhlmfpoullrqpxwihwgjieefl ; /usr/bin/python3' Feb 16 18:16:57 np0000155647 sudo[130817]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000) Feb 16 18:16:58 np0000155647 python3[130819]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=sudo iptables-save > /home/zuul/iptables.txt # NOTE(sfernand): Run 'df' with a 60s timeout to prevent hangs from # stale NFS mounts. timeout -s 9 60s df -h > /home/zuul/df.txt || true # If 'df' times out, the mount output helps debug which NFS share # is unresponsive. mount > /home/zuul/mount.txt for py_ver in 2 3; do if [[ `which python${py_ver}` ]]; then python${py_ver} -m pip freeze > /home/zuul/pip${py_ver}-freeze.txt fi done if [ `command -v dpkg` ]; then dpkg -l> /home/zuul/dpkg-l.txt fi if [ `command -v rpm` ]; then rpm -qa | sort > /home/zuul/rpm-qa.txt fi # Services status sudo systemctl status --all > services.txt 2>/dev/null # NOTE(kchamart) The 'audit.log' can be useful in cases when QEMU # failed to start due to denials from SELinux — useful for CentOS # and Fedora machines. For Ubuntu (which runs AppArmor), DevStack # already captures the contents of /var/log/kern.log (via # `journalctl -t kernel` redirected into syslog.txt.gz), which # contains AppArmor-related messages. if [ -f /var/log/audit/audit.log ] ; then sudo cp /var/log/audit/audit.log /home/zuul/audit.log && chmod +r /home/zuul/audit.log; fi # gzip and save any coredumps in /var/core if [ -d /var/core ]; then sudo gzip -r /var/core sudo cp -r /var/core /home/zuul/ fi sudo ss -lntup | grep ':53' > /home/zuul/listen53.txt # NOTE(andreaf) Service logs are already in logs/ thanks to the # export-devstack-journal role. Apache logs are under apache/ thanks to the # apache-logs-conf role. 
grep -i deprecat /home/zuul/logs/*.txt /home/zuul/apache/*.log | \ sed -r 's/[0-9]{1,2}\:[0-9]{1,2}\:[0-9]{1,2}\.[0-9]{1,3}/ /g' | \ sed -r 's/[0-9]{1,2}\:[0-9]{1,2}\:[0-9]{1,2}/ /g' | \ sed -r 's/[0-9]{1,4}-[0-9]{1,2}-[0-9]{1,4}/ /g' | sed -r 's/\[.*\]/ /g' | \ sed -r 's/\s[0-9]+\s/ /g' | \ awk '{if ($0 in seen) {seen[$0]++} else {out[++n]=$0;seen[$0]=1}} END { for (i=1; i<=n; i++) print seen[out[i]]" :: " out[i] }' > /home/zuul/deprecations.log _uses_shell=True zuul_no_log=False zuul_log_id=0242ac17-0010-4345-bbbd-000000000033-1-controller zuul_output_max_bytes=1073741824 zuul_ansible_split_streams=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None Feb 16 18:16:58 np0000155647 sudo[130822]: root : PWD=/home/zuul ; USER=root ; COMMAND=/usr/sbin/iptables-save Feb 16 18:16:58 np0000155647 sudo[130822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=0) Feb 16 18:16:58 np0000155647 sudo[130822]: pam_unix(sudo:session): session closed for user root Feb 16 18:16:58 np0000155647 sudo[130834]: root : PWD=/home/zuul ; USER=root ; COMMAND=/usr/bin/systemctl status --all Feb 16 18:16:58 np0000155647 sudo[130834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=0) ● user.slice - User and Session Slice Loaded: loaded (/usr/lib/systemd/system/user.slice; static) Active: active since Mon 2026-02-16 17:51:02 UTC; 25min ago Docs: man:systemd.special(7) Tasks: 15 Memory: 23.3G (peak: 23.5G) CPU: 23min 24.212s CGroup: /user.slice └─user-1000.slice ├─session-1.scope │ ├─ 828 "sshd: zuul [priv]" │ ├─ 849 "sshd: zuul@notty" │ ├─ 1054 /usr/bin/python3 │ ├─130815 sh -c "/bin/sh -c 'sudo -H -S -n -u root /bin/sh -c '\"'\"'echo BECOME-SUCCESS-teaxywwfhlmfpoullrqpxwihwgjieefl ; /usr/bin/python3'\"'\"' && sleep 0'" │ ├─130816 /bin/sh -c "sudo -H -S -n -u root /bin/sh -c 'echo BECOME-SUCCESS-teaxywwfhlmfpoullrqpxwihwgjieefl ; /usr/bin/python3' && sleep 0" │ 
├─130817 sudo -H -S -n -u root /bin/sh -c "echo BECOME-SUCCESS-teaxywwfhlmfpoullrqpxwihwgjieefl ; /usr/bin/python3" │ ├─130818 /bin/sh -c "echo BECOME-SUCCESS-teaxywwfhlmfpoullrqpxwihwgjieefl ; /usr/bin/python3" │ ├─130819 /usr/bin/python3 │ ├─130820 /bin/bash -c "sudo iptables-save > /home/zuul/iptables.txt\n\n# NOTE(sfernand): Run 'df' with a 60s timeout to prevent hangs from\n# stale NFS mounts.\ntimeout -s 9 60s df -h > /home/zuul/df.txt || true\n# If 'df' times out, the mount output helps debug which NFS share\n# is unresponsive.\nmount > /home/zuul/mount.txt\n\nfor py_ver in 2 3; do\n if [[ \`which python\${py_ver}\` ]]; then\n python\${py_ver} -m pip freeze > /home/zuul/pip\${py_ver}-freeze.txt\n fi\ndone\n\nif [ \`command -v dpkg\` ]; then\n dpkg -l> /home/zuul/dpkg-l.txt\nfi\nif [ \`command -v rpm\` ]; then\n rpm -qa | sort > /home/zuul/rpm-qa.txt\nfi\n\n# Services status\nsudo systemctl status --all > services.txt 2>/dev/null\n\n# NOTE(kchamart) The 'audit.log' can be useful in cases when QEMU\n# failed to start due to denials from SELinux — useful for CentOS\n# and Fedora machines. For Ubuntu (which runs AppArmor), DevStack\n# already captures the contents of /var/log/kern.log (via\n# \`journalctl -t kernel\` redirected into syslog.txt.gz), which\n# contains AppArmor-related messages.\nif [ -f /var/log/audit/audit.log ] ; then\n sudo cp /var/log/audit/audit.log /home/zuul/audit.log &&\n chmod +r /home/zuul/audit.log;\nfi\n\n# gzip and save any coredumps in /var/core\nif [ -d /var/core ]; then\n sudo gzip -r /var/core\n sudo cp -r /var/core /home/zuul/\nfi\n\nsudo ss -lntup | grep ':53' > /home/zuul/listen53.txt\n\n# NOTE(andreaf) Service logs are already in logs/ thanks to the\n# export-devstack-journal role. 
Apache logs are under apache/ thanks to the\n# apache-logs-conf role.\ngrep -i deprecat /home/zuul/logs/*.txt /home/zuul/apache/*.log | \\\n sed -r 's/[0-9]{1,2}\\:[0-9]{1,2}\\:[0-9]{1,2}\\.[0-9]{1,3}/ /g' | \\\n sed -r 's/[0-9]{1,2}\\:[0-9]{1,2}\\:[0-9]{1,2}/ /g' | \\\n sed -r 's/[0-9]{1,4}-[0-9]{1,2}-[0-9]{1,4}/ /g' |\n sed -r 's/\\[.*\\]/ /g' | \\\n sed -r 's/\\s[0-9]+\\s/ /g' | \\\n awk '{if (\$0 in seen) {seen[\$0]++} else {out[++n]=\$0;seen[\$0]=1}} END { for (i=1; i<=n; i++) print seen[out[i]]\" :: \" out[i] }' > /home/zuul/deprecations.log\n" │ ├─130834 sudo systemctl status --all │ └─130835 systemctl status --all └─user@1000.service └─init.scope ├─833 /usr/lib/systemd/systemd --user └─834 "(sd-pam)" Notice: journal has been rotated since unit was started, output may be incomplete. ● cloud-init-hotplugd.socket - cloud-init hotplug hook socket Loaded: loaded (/usr/lib/systemd/system/cloud-init-hotplugd.socket; enabled; preset: enabled) Active: active (listening) since Mon 2026-02-16 17:51:10 UTC; 25min ago Triggers: ● cloud-init-hotplugd.service Listen: /run/cloud-init/share/hook-hotplug-cmd (FIFO) CGroup: /system.slice/cloud-init-hotplugd.socket Feb 16 17:51:10 np0000155647 systemd[1]: Listening on cloud-init-hotplugd.socket - cloud-init hotplug hook socket. ● dbus.socket - D-Bus System Message Bus Socket Loaded: loaded (/usr/lib/systemd/system/dbus.socket; static) Active: active (running) since Mon 2026-02-16 17:51:10 UTC; 25min ago Triggers: ● dbus.service Listen: /run/dbus/system_bus_socket (Stream) CGroup: /system.slice/dbus.socket Feb 16 17:51:10 np0000155647 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. 
● dm-event.socket - Device-mapper event daemon FIFOs Loaded: loaded (/usr/lib/systemd/system/dm-event.socket; enabled; preset: enabled) Active: active (running) since Mon 2026-02-16 17:56:34 UTC; 20min ago Triggers: ● dm-event.service Docs: man:dmeventd(8) Listen: /run/dmeventd-server (FIFO) /run/dmeventd-client (FIFO) CGroup: /system.slice/dm-event.socket Feb 16 17:56:34 np0000155647 systemd[1]: Listening on dm-event.socket - Device-mapper event daemon FIFOs. ● docker.socket - Docker Socket for the API Loaded: loaded (/usr/lib/systemd/system/docker.socket; enabled; preset: enabled) Active: active (running) since Mon 2026-02-16 17:57:46 UTC; 19min ago Triggers: ● docker.service Listen: /run/docker.sock (Stream) Tasks: 0 (limit: 77077) Memory: 0B (peak: 256.0K) CPU: 1ms CGroup: /system.slice/docker.socket Feb 16 17:57:45 np0000155647 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 16 17:57:46 np0000155647 systemd[1]: Listening on docker.socket - Docker Socket for the API. ● epmd.socket - Erlang Port Mapper Daemon Activation Socket Loaded: loaded (/usr/lib/systemd/system/epmd.socket; enabled; preset: enabled) Active: active (running) since Mon 2026-02-16 17:59:19 UTC; 17min ago Triggers: ● epmd.service Listen: [::]:4369 (Stream) Tasks: 0 (limit: 77077) Memory: 8.0K (peak: 256.0K) CPU: 1ms CGroup: /system.slice/epmd.socket Feb 16 17:59:19 np0000155647 systemd[1]: Listening on epmd.socket - Erlang Port Mapper Daemon Activation Socket. ● iscsid.socket - Open-iSCSI iscsid Socket Loaded: loaded (/usr/lib/systemd/system/iscsid.socket; enabled; preset: enabled) Active: active (running) since Mon 2026-02-16 17:51:10 UTC; 25min ago Triggers: ● iscsid.service Docs: man:iscsid(8) man:iscsiadm(8) Listen: @ISCSIADM_ABSTRACT_NAMESPACE (Stream) CGroup: /system.slice/iscsid.socket Feb 16 17:51:10 np0000155647 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. 
● libvirtd-admin.socket - libvirt legacy monolithic daemon admin socket Loaded: loaded (/usr/lib/systemd/system/libvirtd-admin.socket; enabled; preset: enabled) Active: active (running) since Mon 2026-02-16 18:02:19 UTC; 14min ago Triggers: ● libvirtd.service Listen: /run/libvirt/libvirt-admin-sock (Stream) CGroup: /system.slice/libvirtd-admin.socket Feb 16 18:02:19 np0000155647 systemd[1]: Listening on libvirtd-admin.socket - libvirt legacy monolithic daemon admin socket. ● libvirtd-ro.socket - libvirt legacy monolithic daemon read-only socket Loaded: loaded (/usr/lib/systemd/system/libvirtd-ro.socket; enabled; preset: enabled) Active: active (running) since Mon 2026-02-16 18:02:19 UTC; 14min ago Triggers: ● libvirtd.service Listen: /run/libvirt/libvirt-sock-ro (Stream) CGroup: /system.slice/libvirtd-ro.socket Feb 16 18:02:19 np0000155647 systemd[1]: Listening on libvirtd-ro.socket - libvirt legacy monolithic daemon read-only socket. ● libvirtd.socket - libvirt legacy monolithic daemon socket Loaded: loaded (/usr/lib/systemd/system/libvirtd.socket; enabled; preset: enabled) Active: active (running) since Mon 2026-02-16 18:02:19 UTC; 14min ago Triggers: ● libvirtd.service Listen: /run/libvirt/libvirt-sock (Stream) Tasks: 0 (limit: 77077) Memory: 0B (peak: 256.0K) CPU: 1ms CGroup: /system.slice/libvirtd.socket Feb 16 18:02:19 np0000155647 systemd[1]: Starting libvirtd.socket - libvirt legacy monolithic daemon socket... Feb 16 18:02:19 np0000155647 systemd[1]: Listening on libvirtd.socket - libvirt legacy monolithic daemon socket. 
● lvm2-lvmpolld.socket - LVM2 poll daemon socket Loaded: loaded (/usr/lib/systemd/system/lvm2-lvmpolld.socket; enabled; preset: enabled) Active: active (listening) since Mon 2026-02-16 17:56:37 UTC; 20min ago Triggers: ● lvm2-lvmpolld.service Docs: man:lvmpolld(8) Listen: /run/lvm/lvmpolld.socket (Stream) CGroup: /system.slice/lvm2-lvmpolld.socket Feb 16 17:56:37 np0000155647 systemd[1]: Listening on lvm2-lvmpolld.socket - LVM2 poll daemon socket. ● rpcbind.socket - RPCbind Server Activation Socket Loaded: loaded (/usr/lib/systemd/system/rpcbind.socket; enabled; preset: enabled) Active: active (running) since Mon 2026-02-16 18:03:42 UTC; 13min ago Triggers: ● rpcbind.service Listen: /run/rpcbind.sock (Stream) 0.0.0.0:111 (Stream) 0.0.0.0:111 (Datagram) [::]:111 (Stream) [::]:111 (Datagram) Tasks: 0 (limit: 77077) Memory: 20.0K (peak: 272.0K) CPU: 5ms CGroup: /system.slice/rpcbind.socket Feb 16 18:03:42 np0000155647 systemd[1]: Listening on rpcbind.socket - RPCbind Server Activation Socket. ● ssh.socket - OpenBSD Secure Shell server socket Loaded: loaded (/usr/lib/systemd/system/ssh.socket; enabled; preset: enabled) Active: active (running) since Mon 2026-02-16 17:51:10 UTC; 25min ago Triggers: ● ssh.service Listen: 0.0.0.0:22 (Stream) [::]:22 (Stream) Tasks: 0 (limit: 77077) Memory: 12.0K (peak: 264.0K) CPU: 3ms CGroup: /system.slice/ssh.socket Feb 16 17:51:10 np0000155647 systemd[1]: Listening on ssh.socket - OpenBSD Secure Shell server socket. ● syslog.socket - Syslog Socket Loaded: loaded (/usr/lib/systemd/system/syslog.socket; static) Active: active (running) since Mon 2026-02-16 17:56:12 UTC; 20min ago Triggers: ● rsyslog.service Docs: man:systemd.special(7) https://www.freedesktop.org/wiki/Software/systemd/syslog Listen: /run/systemd/journal/syslog (Datagram) CGroup: /system.slice/syslog.socket Feb 16 17:56:12 np0000155647 systemd[1]: Listening on syslog.socket - Syslog Socket. 
● systemd-coredump.socket - Process Core Dump Socket Loaded: loaded (/usr/lib/systemd/system/systemd-coredump.socket; static) Active: active (listening) since Mon 2026-02-16 18:02:15 UTC; 14min ago Docs: man:systemd-coredump(8) Listen: /run/systemd/coredump (SequentialPacket) Accepted: 0; Connected: 0; CGroup: /system.slice/systemd-coredump.socket Feb 16 18:02:15 np0000155647 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. ● systemd-fsckd.socket - fsck to fsckd communication Socket Loaded: loaded (/usr/lib/systemd/system/systemd-fsckd.socket; static) Active: active (listening) since Mon 2026-02-16 17:51:02 UTC; 25min ago Triggers: ● systemd-fsckd.service Docs: man:systemd-fsckd.service(8) man:systemd-fsck@.service(8) man:systemd-fsck-root.service(8) Listen: /run/systemd/fsck.progress (Stream) CGroup: /system.slice/systemd-fsckd.socket Notice: journal has been rotated since unit was started, output may be incomplete. ● systemd-initctl.socket - initctl Compatibility Named Pipe Loaded: loaded (/usr/lib/systemd/system/systemd-initctl.socket; static) Active: active (listening) since Mon 2026-02-16 17:51:02 UTC; 25min ago Triggers: ● systemd-initctl.service Docs: man:systemd-initctl.socket(8) Listen: /run/initctl (FIFO) CGroup: /system.slice/systemd-initctl.socket Notice: journal has been rotated since unit was started, output may be incomplete. 
○ systemd-journald-audit.socket - Journal Audit Socket Loaded: loaded (/usr/lib/systemd/system/systemd-journald-audit.socket; disabled; preset: enabled) Active: inactive (dead) Triggers: ● systemd-journald.service Docs: man:systemd-journald.service(8) man:journald.conf(5) Listen: audit 1 (Netlink) ● systemd-journald-dev-log.socket - Journal Socket (/dev/log) Loaded: loaded (/usr/lib/systemd/system/systemd-journald-dev-log.socket; static) Active: active (running) since Mon 2026-02-16 17:51:02 UTC; 25min ago Triggers: ● systemd-journald.service Docs: man:systemd-journald.service(8) man:journald.conf(5) Listen: /run/systemd/journal/dev-log (Datagram) CGroup: /system.slice/systemd-journald-dev-log.socket Notice: journal has been rotated since unit was started, output may be incomplete. ● systemd-journald.socket - Journal Socket Loaded: loaded (/usr/lib/systemd/system/systemd-journald.socket; static) Active: active (running) since Mon 2026-02-16 17:51:02 UTC; 25min ago Triggers: ● systemd-journald.service Docs: man:systemd-journald.service(8) man:journald.conf(5) Listen: /run/systemd/journal/socket (Datagram) /run/systemd/journal/stdout (Stream) CGroup: /system.slice/systemd-journald.socket Notice: journal has been rotated since unit was started, output may be incomplete. ● systemd-networkd.socket - Network Service Netlink Socket Loaded: loaded (/usr/lib/systemd/system/systemd-networkd.socket; disabled; preset: enabled) Active: active (running) since Mon 2026-02-16 17:51:04 UTC; 25min ago Triggers: ● systemd-networkd.service Docs: man:systemd-networkd.service(8) man:rtnetlink(7) Listen: route 1361 (Netlink) CGroup: /system.slice/systemd-networkd.socket Feb 16 17:51:04 np0000155647 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
○ systemd-pcrextend.socket - TPM2 PCR Extension (Varlink) Loaded: loaded (/usr/lib/systemd/system/systemd-pcrextend.socket; disabled; preset: enabled) Active: inactive (dead) Condition: start condition unmet at Mon 2026-02-16 17:51:02 UTC; 25min ago Docs: man:systemd-pcrextend(8) Listen: /run/systemd/io.systemd.PCRExtend (Stream) Accepted: 0; Connected: 0; ● systemd-rfkill.socket - Load/Save RF Kill Switch Status /dev/rfkill Watch Loaded: loaded (/usr/lib/systemd/system/systemd-rfkill.socket; static) Active: active (listening) since Mon 2026-02-16 17:51:03 UTC; 25min ago Triggers: ● systemd-rfkill.service Docs: man:systemd-rfkill.socket(8) Listen: /dev/rfkill (Special) CGroup: /system.slice/systemd-rfkill.socket Feb 16 17:51:03 ubuntu systemd[1]: Listening on systemd-rfkill.socket - Load/Save RF Kill Switch Status /dev/rfkill Watch. ● systemd-sysext.socket - System Extension Image Management (Varlink) Loaded: loaded (/usr/lib/systemd/system/systemd-sysext.socket; disabled; preset: enabled) Active: active (listening) since Mon 2026-02-16 17:51:03 UTC; 25min ago Docs: man:systemd-sysext(8) Listen: /run/systemd/io.systemd.sysext (Stream) Accepted: 0; Connected: 0; CGroup: /system.slice/systemd-sysext.socket Feb 16 17:51:03 ubuntu systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). ● systemd-udevd-control.socket - udev Control Socket Loaded: loaded (/usr/lib/systemd/system/systemd-udevd-control.socket; static) Active: active (running) since Mon 2026-02-16 17:51:02 UTC; 25min ago Triggers: ● systemd-udevd.service Docs: man:systemd-udevd-control.socket(8) man:udev(7) Listen: /run/udev/control (SequentialPacket) CGroup: /system.slice/systemd-udevd-control.socket Notice: journal has been rotated since unit was started, output may be incomplete. 
● systemd-udevd-kernel.socket - udev Kernel Socket Loaded: loaded (/usr/lib/systemd/system/systemd-udevd-kernel.socket; static) Active: active (running) since Mon 2026-02-16 17:51:02 UTC; 25min ago Triggers: ● systemd-udevd.service Docs: man:systemd-udevd-kernel.socket(8) man:udev(7) Listen: kobject-uevent 1 (Netlink) CGroup: /system.slice/systemd-udevd-kernel.socket Notice: journal has been rotated since unit was started, output may be incomplete. ● uuidd.socket - UUID daemon activation socket Loaded: loaded (/usr/lib/systemd/system/uuidd.socket; enabled; preset: enabled) Active: active (listening) since Mon 2026-02-16 17:56:20 UTC; 20min ago Triggers: ● uuidd.service Listen: /run/uuidd/request (Stream) CGroup: /system.slice/uuidd.socket Feb 16 17:56:20 np0000155647 systemd[1]: Listening on uuidd.socket - UUID daemon activation socket. ● virtlockd-admin.socket - libvirt locking daemon admin socket Loaded: loaded (/usr/lib/systemd/system/virtlockd-admin.socket; enabled; preset: enabled) Active: active (running) since Mon 2026-02-16 18:02:21 UTC; 14min ago Triggers: ● virtlockd.service Listen: /run/libvirt/virtlockd-admin-sock (Stream) CGroup: /system.slice/virtlockd-admin.socket Feb 16 18:02:21 np0000155647 systemd[1]: Listening on virtlockd-admin.socket - libvirt locking daemon admin socket. ● virtlockd.socket - libvirt locking daemon socket Loaded: loaded (/usr/lib/systemd/system/virtlockd.socket; enabled; preset: enabled) Active: active (running) since Mon 2026-02-16 18:02:19 UTC; 14min ago Triggers: ● virtlockd.service Listen: /run/libvirt/virtlockd-sock (Stream) CGroup: /system.slice/virtlockd.socket Feb 16 18:02:19 np0000155647 systemd[1]: Listening on virtlockd.socket - libvirt locking daemon socket. 
● virtlogd-admin.socket - libvirt logging daemon admin socket Loaded: loaded (/usr/lib/systemd/system/virtlogd-admin.socket; enabled; preset: enabled) Active: active (running) since Mon 2026-02-16 18:02:21 UTC; 14min ago Triggers: ● virtlogd.service Listen: /run/libvirt/virtlogd-admin-sock (Stream) CGroup: /system.slice/virtlogd-admin.socket Feb 16 18:02:21 np0000155647 systemd[1]: Listening on virtlogd-admin.socket - libvirt logging daemon admin socket. ● virtlogd.socket - libvirt logging daemon socket Loaded: loaded (/usr/lib/systemd/system/virtlogd.socket; enabled; preset: enabled) Active: active (running) since Mon 2026-02-16 18:02:19 UTC; 14min ago Triggers: ● virtlogd.service Listen: /run/libvirt/virtlogd-sock (Stream) CGroup: /system.slice/virtlogd.socket Feb 16 18:02:19 np0000155647 systemd[1]: Listening on virtlogd.socket - libvirt logging daemon socket. ● root-swapfile.swap - /root/swapfile Loaded: loaded (/etc/fstab; generated) Active: active since Mon 2026-02-16 17:53:34 UTC; 23min ago What: /root/swapfile Docs: man:fstab(5) man:systemd-fstab-generator(8) ● basic.target - Basic System Loaded: loaded (/usr/lib/systemd/system/basic.target; static) Active: active since Mon 2026-02-16 17:51:10 UTC; 25min ago Docs: man:systemd.special(7) Feb 16 17:51:10 np0000155647 systemd[1]: Reached target basic.target - Basic System. ○ blockdev@dev-disk-by\x2dlabel-cloudimg\x2drootfs.target - Block Device Preparation for /dev/disk/by-label/cloudimg-rootfs Loaded: loaded (/usr/lib/systemd/system/blockdev@.target; static) Active: inactive (dead) Docs: man:systemd.special(7) ● cloud-config.target - Cloud-config availability Loaded: loaded (/usr/lib/systemd/system/cloud-config.target; static) Active: active since Mon 2026-02-16 17:51:10 UTC; 25min ago Feb 16 17:51:10 np0000155647 systemd[1]: Reached target cloud-config.target - Cloud-config availability. 
● cloud-init.target - Cloud-init target Loaded: loaded (/usr/lib/systemd/system/cloud-init.target; enabled-runtime; preset: enabled) Active: active since Mon 2026-02-16 17:51:11 UTC; 25min ago Feb 16 17:51:11 np0000155647 systemd[1]: Reached target cloud-init.target - Cloud-init target. ○ cryptsetup-pre.target - Local Encrypted Volumes (Pre) Loaded: loaded (/usr/lib/systemd/system/cryptsetup-pre.target; static) Active: inactive (dead) Docs: man:systemd.special(7) ● cryptsetup.target - Local Encrypted Volumes Loaded: loaded (/usr/lib/systemd/system/cryptsetup.target; static) Active: active since Mon 2026-02-16 17:51:02 UTC; 25min ago Docs: man:systemd.special(7) Notice: journal has been rotated since unit was started, output may be incomplete. ○ emergency.target - Emergency Mode Loaded: loaded (/usr/lib/systemd/system/emergency.target; static) Active: inactive (dead) Docs: man:systemd.special(7) ○ final.target - Late Shutdown Services Loaded: loaded (/usr/lib/systemd/system/final.target; static) Active: inactive (dead) Docs: man:systemd.special(7) ● first-boot-complete.target - First Boot Complete Loaded: loaded (/usr/lib/systemd/system/first-boot-complete.target; static) Active: active since Mon 2026-02-16 17:51:03 UTC; 25min ago Docs: man:systemd.special(7) Feb 16 17:51:03 ubuntu systemd[1]: Reached target first-boot-complete.target - First Boot Complete. ○ getty-pre.target - Preparation for Logins Loaded: loaded (/usr/lib/systemd/system/getty-pre.target; static) Active: inactive (dead) Docs: man:systemd.special(7) man:systemd-getty-generator(8) https://0pointer.de/blog/projects/serial-console.html ● getty.target - Login Prompts Loaded: loaded (/usr/lib/systemd/system/getty.target; static) Active: active since Mon 2026-02-16 17:51:10 UTC; 25min ago Docs: man:systemd.special(7) man:systemd-getty-generator(8) https://0pointer.de/blog/projects/serial-console.html Feb 16 17:51:10 np0000155647 systemd[1]: Reached target getty.target - Login Prompts. 
● graphical.target - Graphical Interface Loaded: loaded (/usr/lib/systemd/system/graphical.target; static) Active: active since Mon 2026-02-16 17:51:10 UTC; 25min ago Docs: man:systemd.special(7) Feb 16 17:51:10 np0000155647 systemd[1]: Reached target graphical.target - Graphical Interface. ○ hibernate.target - System Hibernation Loaded: loaded (/usr/lib/systemd/system/hibernate.target; static) Active: inactive (dead) Docs: man:systemd.special(7) ○ hybrid-sleep.target - Hybrid Suspend+Hibernate Loaded: loaded (/usr/lib/systemd/system/hybrid-sleep.target; static) Active: inactive (dead) Docs: man:systemd.special(7) ○ initrd-fs.target - Initrd File Systems Loaded: loaded (/usr/lib/systemd/system/initrd-fs.target; static) Active: inactive (dead) Docs: man:systemd.special(7) ○ initrd-root-device.target - Initrd Root Device Loaded: loaded (/usr/lib/systemd/system/initrd-root-device.target; static) Active: inactive (dead) Docs: man:systemd.special(7) ○ initrd-root-fs.target - Initrd Root File System Loaded: loaded (/usr/lib/systemd/system/initrd-root-fs.target; static) Active: inactive (dead) Docs: man:systemd.special(7) ○ initrd-switch-root.target - Switch Root Loaded: loaded (/usr/lib/systemd/system/initrd-switch-root.target; static) Active: inactive (dead) ○ initrd-usr-fs.target - Initrd /usr File System Loaded: loaded (/usr/lib/systemd/system/initrd-usr-fs.target; static) Active: inactive (dead) Docs: man:systemd.special(7) ○ initrd.target - Initrd Default Target Loaded: loaded (/usr/lib/systemd/system/initrd.target; static) Active: inactive (dead) Docs: man:systemd.special(7) ● integritysetup.target - Local Integrity Protected Volumes Loaded: loaded (/usr/lib/systemd/system/integritysetup.target; static) Active: active since Mon 2026-02-16 17:51:02 UTC; 25min ago Docs: man:systemd.special(7) Notice: journal has been rotated since unit was started, output may be incomplete. 
● local-fs-pre.target - Preparation for Local File Systems Loaded: loaded (/usr/lib/systemd/system/local-fs-pre.target; static) Active: active since Mon 2026-02-16 17:51:03 UTC; 25min ago Docs: man:systemd.special(7) Feb 16 17:51:03 ubuntu systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. ● local-fs.target - Local File Systems Loaded: loaded (/usr/lib/systemd/system/local-fs.target; static) Active: active since Mon 2026-02-16 17:51:03 UTC; 25min ago Docs: man:systemd.special(7) Feb 16 17:51:03 ubuntu systemd[1]: Reached target local-fs.target - Local File Systems. ● machines.target - Containers Loaded: loaded (/usr/lib/systemd/system/machines.target; enabled; preset: enabled) Active: active since Mon 2026-02-16 18:02:15 UTC; 14min ago Docs: man:systemd.special(7) Feb 16 18:02:15 np0000155647 systemd[1]: Reached target machines.target - Containers. ● multi-user.target - Multi-User System Loaded: loaded (/usr/lib/systemd/system/multi-user.target; static) Active: active since Mon 2026-02-16 17:51:10 UTC; 25min ago Docs: man:systemd.special(7) Feb 16 17:51:10 np0000155647 systemd[1]: Reached target multi-user.target - Multi-User System. ● network-online.target - Network is Online Loaded: loaded (/usr/lib/systemd/system/network-online.target; static) Active: active since Mon 2026-02-16 17:51:10 UTC; 25min ago Docs: man:systemd.special(7) https://systemd.io/NETWORK_ONLINE Feb 16 17:51:10 np0000155647 systemd[1]: Reached target network-online.target - Network is Online. ● network-pre.target - Preparation for Network Loaded: loaded (/usr/lib/systemd/system/network-pre.target; static) Active: active since Mon 2026-02-16 17:51:04 UTC; 25min ago Docs: man:systemd.special(7) https://systemd.io/NETWORK_ONLINE Feb 16 17:51:04 np0000155647 systemd[1]: Reached target network-pre.target - Preparation for Network. 
● network.target - Network Loaded: loaded (/usr/lib/systemd/system/network.target; static) Active: active since Mon 2026-02-16 17:51:04 UTC; 25min ago Docs: man:systemd.special(7) https://systemd.io/NETWORK_ONLINE Feb 16 17:51:04 np0000155647 systemd[1]: Reached target network.target - Network. ● nfs-client.target - NFS client services Loaded: loaded (/usr/lib/systemd/system/nfs-client.target; enabled; preset: enabled) Active: active since Mon 2026-02-16 18:03:46 UTC; 13min ago Feb 16 18:03:46 np0000155647 systemd[1]: Reached target nfs-client.target - NFS client services. ● nss-lookup.target - Host and Network Name Lookups Loaded: loaded (/usr/lib/systemd/system/nss-lookup.target; static) Active: active since Mon 2026-02-16 17:51:03 UTC; 25min ago Docs: man:systemd.special(7) Feb 16 17:51:03 ubuntu systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. ○ nss-user-lookup.target - User and Group Name Lookups Loaded: loaded (/usr/lib/systemd/system/nss-user-lookup.target; static) Active: inactive (dead) Docs: man:systemd.special(7) ● paths.target - Path Units Loaded: loaded (/usr/lib/systemd/system/paths.target; static) Active: active since Mon 2026-02-16 17:51:02 UTC; 25min ago Docs: man:systemd.special(7) Notice: journal has been rotated since unit was started, output may be incomplete. ○ remote-cryptsetup.target - Remote Encrypted Volumes Loaded: loaded (/usr/lib/systemd/system/remote-cryptsetup.target; disabled; preset: enabled) Active: inactive (dead) Docs: man:systemd.special(7) ● remote-fs-pre.target - Preparation for Remote File Systems Loaded: loaded (/usr/lib/systemd/system/remote-fs-pre.target; static) Active: active since Mon 2026-02-16 17:51:10 UTC; 25min ago Docs: man:systemd.special(7) Feb 16 17:51:10 np0000155647 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
● remote-fs.target - Remote File Systems Loaded: loaded (/usr/lib/systemd/system/remote-fs.target; enabled; preset: enabled) Active: active since Mon 2026-02-16 17:51:10 UTC; 25min ago Docs: man:systemd.special(7) Feb 16 17:51:10 np0000155647 systemd[1]: Reached target remote-fs.target - Remote File Systems. ○ remote-veritysetup.target - Remote Verity Protected Volumes Loaded: loaded (/usr/lib/systemd/system/remote-veritysetup.target; disabled; preset: enabled) Active: inactive (dead) Docs: man:systemd.special(7) ○ rescue.target - Rescue Mode Loaded: loaded (/usr/lib/systemd/system/rescue.target; static) Active: inactive (dead) Docs: man:systemd.special(7) ● rpc_pipefs.target Loaded: loaded (/run/systemd/generator/rpc_pipefs.target; generated) Active: active since Mon 2026-02-16 18:03:46 UTC; 13min ago Feb 16 18:03:46 np0000155647 systemd[1]: Reached target rpc_pipefs.target. ● rpcbind.target - RPC Port Mapper Loaded: loaded (/usr/lib/systemd/system/rpcbind.target; static) Active: active since Mon 2026-02-16 18:03:42 UTC; 13min ago Docs: man:systemd.special(7) Feb 16 18:03:42 np0000155647 systemd[1]: Reached target rpcbind.target - RPC Port Mapper. ○ shutdown.target - System Shutdown Loaded: loaded (/usr/lib/systemd/system/shutdown.target; static) Active: inactive (dead) Docs: man:systemd.special(7) ○ sleep.target - Sleep Loaded: loaded (/usr/lib/systemd/system/sleep.target; static) Active: inactive (dead) Docs: man:systemd.special(7) ● slices.target - Slice Units Loaded: loaded (/usr/lib/systemd/system/slices.target; static) Active: active since Mon 2026-02-16 17:51:02 UTC; 25min ago Docs: man:systemd.special(7) Notice: journal has been rotated since unit was started, output may be incomplete. ● sockets.target - Socket Units Loaded: loaded (/usr/lib/systemd/system/sockets.target; static) Active: active since Mon 2026-02-16 17:51:10 UTC; 25min ago Docs: man:systemd.special(7) Feb 16 17:51:10 np0000155647 systemd[1]: Reached target sockets.target - Socket Units. 
○ soft-reboot.target - Reboot System Userspace Loaded: loaded (/usr/lib/systemd/system/soft-reboot.target; static) Active: inactive (dead) Docs: man:systemd.special(7) ○ suspend-then-hibernate.target - Suspend; Hibernate if not used for a period of time Loaded: loaded (/usr/lib/systemd/system/suspend-then-hibernate.target; static) Active: inactive (dead) Docs: man:systemd.special(7) ○ suspend.target - Suspend Loaded: loaded (/usr/lib/systemd/system/suspend.target; static) Active: inactive (dead) Docs: man:systemd.special(7) ● swap.target - Swaps Loaded: loaded (/usr/lib/systemd/system/swap.target; static) Active: active since Mon 2026-02-16 17:51:02 UTC; 25min ago Docs: man:systemd.special(7) Notice: journal has been rotated since unit was started, output may be incomplete. ● sysinit.target - System Initialization Loaded: loaded (/usr/lib/systemd/system/sysinit.target; static) Active: active since Mon 2026-02-16 17:51:10 UTC; 25min ago Docs: man:systemd.special(7) Feb 16 17:51:10 np0000155647 systemd[1]: Reached target sysinit.target - System Initialization. ● time-set.target - System Time Set Loaded: loaded (/usr/lib/systemd/system/time-set.target; static) Active: active since Mon 2026-02-16 17:51:03 UTC; 25min ago Docs: man:systemd.special(7) Feb 16 17:51:03 ubuntu systemd[1]: Reached target time-set.target - System Time Set. ○ time-sync.target - System Time Synchronized Loaded: loaded (/usr/lib/systemd/system/time-sync.target; static) Active: inactive (dead) Docs: man:systemd.special(7) ● timers.target - Timer Units Loaded: loaded (/usr/lib/systemd/system/timers.target; static) Active: active since Mon 2026-02-16 17:51:10 UTC; 25min ago Docs: man:systemd.special(7) Feb 16 17:51:10 np0000155647 systemd[1]: Reached target timers.target - Timer Units. 
○ umount.target - Unmount All Filesystems
     Loaded: loaded (/usr/lib/systemd/system/umount.target; static)
     Active: inactive (dead)
       Docs: man:systemd.special(7)

○ veritysetup-pre.target - Local Verity Protected Volumes (Pre)
     Loaded: loaded (/usr/lib/systemd/system/veritysetup-pre.target; static)
     Active: inactive (dead)
       Docs: man:systemd.special(7)

● veritysetup.target - Local Verity Protected Volumes
     Loaded: loaded (/usr/lib/systemd/system/veritysetup.target; static)
     Active: active since Mon 2026-02-16 17:51:02 UTC; 25min ago
       Docs: man:systemd.special(7)

Notice: journal has been rotated since unit was started, output may be incomplete.

● virt-guest-shutdown.target - libvirt guests shutdown target
     Loaded: loaded (/usr/lib/systemd/system/virt-guest-shutdown.target; static)
     Active: active since Mon 2026-02-16 18:02:21 UTC; 14min ago
       Docs: https://libvirt.org/

Feb 16 18:02:21 np0000155647 systemd[1]: Reached target virt-guest-shutdown.target - libvirt guests shutdown target.

● apt-daily-upgrade.timer - Daily apt upgrade and clean activities
     Loaded: loaded (/usr/lib/systemd/system/apt-daily-upgrade.timer; enabled; preset: enabled)
     Active: active (waiting) since Mon 2026-02-16 17:51:10 UTC; 25min ago
    Trigger: Tue 2026-02-17 06:01:06 UTC; 11h left
   Triggers: ● apt-daily-upgrade.service

Feb 16 17:51:10 np0000155647 systemd[1]: Started apt-daily-upgrade.timer - Daily apt upgrade and clean activities.

● apt-daily.timer - Daily apt download activities
     Loaded: loaded (/usr/lib/systemd/system/apt-daily.timer; enabled; preset: enabled)
     Active: active (waiting) since Mon 2026-02-16 17:51:10 UTC; 25min ago
    Trigger: Tue 2026-02-17 15:13:37 UTC; 20h left
   Triggers: ● apt-daily.service

Feb 16 17:51:10 np0000155647 systemd[1]: Started apt-daily.timer - Daily apt download activities.
● dpkg-db-backup.timer - Daily dpkg database backup timer
     Loaded: loaded (/usr/lib/systemd/system/dpkg-db-backup.timer; enabled; preset: enabled)
     Active: active (waiting) since Mon 2026-02-16 17:51:10 UTC; 25min ago
    Trigger: Tue 2026-02-17 00:00:00 UTC; 5h 42min left
   Triggers: ● dpkg-db-backup.service
       Docs: man:dpkg(1)

Feb 16 17:51:10 np0000155647 systemd[1]: Started dpkg-db-backup.timer - Daily dpkg database backup timer.

● e2scrub_all.timer - Periodic ext4 Online Metadata Check for All Filesystems
     Loaded: loaded (/usr/lib/systemd/system/e2scrub_all.timer; enabled; preset: enabled)
     Active: active (waiting) since Mon 2026-02-16 17:51:10 UTC; 25min ago
    Trigger: Sun 2026-02-22 03:10:19 UTC; 5 days left
   Triggers: ● e2scrub_all.service

Feb 16 17:51:10 np0000155647 systemd[1]: Started e2scrub_all.timer - Periodic ext4 Online Metadata Check for All Filesystems.

● fstrim.timer - Discard unused filesystem blocks once a week
     Loaded: loaded (/usr/lib/systemd/system/fstrim.timer; enabled; preset: enabled)
     Active: active (waiting) since Mon 2026-02-16 17:51:10 UTC; 25min ago
    Trigger: Mon 2026-02-23 01:34:06 UTC; 6 days left
   Triggers: ● fstrim.service
       Docs: man:fstrim

Feb 16 17:51:10 np0000155647 systemd[1]: Started fstrim.timer - Discard unused filesystem blocks once a week.

● logrotate.timer - Daily rotation of log files
     Loaded: loaded (/usr/lib/systemd/system/logrotate.timer; enabled; preset: enabled)
     Active: active (waiting) since Mon 2026-02-16 17:59:18 UTC; 17min ago
    Trigger: Tue 2026-02-17 00:00:00 UTC; 5h 42min left
   Triggers: ● logrotate.service
       Docs: man:logrotate(8)
             man:logrotate.conf(5)

Feb 16 17:59:18 np0000155647 systemd[1]: Started logrotate.timer - Daily rotation of log files.
● man-db.timer - Daily man-db regeneration
     Loaded: loaded (/usr/lib/systemd/system/man-db.timer; enabled; preset: enabled)
     Active: active (waiting) since Mon 2026-02-16 17:56:29 UTC; 20min ago
    Trigger: Tue 2026-02-17 09:20:57 UTC; 15h left
   Triggers: ● man-db.service
       Docs: man:mandb(8)

Feb 16 17:56:29 np0000155647 systemd[1]: Started man-db.timer - Daily man-db regeneration.

● motd-news.timer - Message of the Day
     Loaded: loaded (/usr/lib/systemd/system/motd-news.timer; enabled; preset: enabled)
     Active: active (waiting) since Mon 2026-02-16 17:51:10 UTC; 25min ago
    Trigger: Tue 2026-02-17 09:40:13 UTC; 15h left
   Triggers: ● motd-news.service

Feb 16 17:51:10 np0000155647 systemd[1]: Started motd-news.timer - Message of the Day.

● systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories
     Loaded: loaded (/usr/lib/systemd/system/systemd-tmpfiles-clean.timer; static)
     Active: active (waiting) since Mon 2026-02-16 17:51:10 UTC; 25min ago
    Trigger: Tue 2026-02-17 18:06:06 UTC; 23h left
   Triggers: ● systemd-tmpfiles-clean.service
       Docs: man:tmpfiles.d(5)
             man:systemd-tmpfiles(8)

Feb 16 17:51:10 np0000155647 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.