I0407 09:39:23.234188       1 options.go:220] external host was not specified, using 199.204.45.19
I0407 09:39:23.235447       1 server.go:148] Version: v1.28.13
I0407 09:39:23.235498       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0407 09:39:24.100656       1 shared_informer.go:311] Waiting for caches to sync for node_authorizer
I0407 09:39:24.111049       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0407 09:39:24.111079       1 plugins.go:161] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
I0407 09:39:24.111315       1 instance.go:298] Using reconciler: lease
I0407 09:39:24.239972       1 handler.go:275] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
W0407 09:39:24.240021       1 genericapiserver.go:744] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
I0407 09:39:24.514800       1 handler.go:275] Adding GroupVersion v1 to ResourceManager
I0407 09:39:24.515170       1 instance.go:709] API group "internal.apiserver.k8s.io" is not enabled, skipping.
I0407 09:39:25.031319       1 instance.go:709] API group "resource.k8s.io" is not enabled, skipping.
I0407 09:39:25.048494       1 handler.go:275] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
W0407 09:39:25.048536       1 genericapiserver.go:744] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
W0407 09:39:25.048546       1 genericapiserver.go:744] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
I0407 09:39:25.049347       1 handler.go:275] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
W0407 09:39:25.049372       1 genericapiserver.go:744] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
I0407 09:39:25.050728       1 handler.go:275] Adding GroupVersion autoscaling v2 to ResourceManager
I0407 09:39:25.051708       1 handler.go:275] Adding GroupVersion autoscaling v1 to ResourceManager
W0407 09:39:25.051725       1 genericapiserver.go:744] Skipping API autoscaling/v2beta1 because it has no resources.
W0407 09:39:25.051734       1 genericapiserver.go:744] Skipping API autoscaling/v2beta2 because it has no resources.
I0407 09:39:25.053426       1 handler.go:275] Adding GroupVersion batch v1 to ResourceManager
W0407 09:39:25.053446       1 genericapiserver.go:744] Skipping API batch/v1beta1 because it has no resources.
I0407 09:39:25.054286       1 handler.go:275] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
W0407 09:39:25.054301       1 genericapiserver.go:744] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
W0407 09:39:25.054305       1 genericapiserver.go:744] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
I0407 09:39:25.055061       1 handler.go:275] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
W0407 09:39:25.055079       1 genericapiserver.go:744] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
W0407 09:39:25.055124       1 genericapiserver.go:744] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
I0407 09:39:25.055878       1 handler.go:275] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
I0407 09:39:25.057554       1 handler.go:275] Adding GroupVersion networking.k8s.io v1 to ResourceManager
W0407 09:39:25.057571       1 genericapiserver.go:744] Skipping API networking.k8s.io/v1beta1 because it has no resources.
W0407 09:39:25.057574       1 genericapiserver.go:744] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
I0407 09:39:25.058010       1 handler.go:275] Adding GroupVersion node.k8s.io v1 to ResourceManager
W0407 09:39:25.058027       1 genericapiserver.go:744] Skipping API node.k8s.io/v1beta1 because it has no resources.
W0407 09:39:25.058031       1 genericapiserver.go:744] Skipping API node.k8s.io/v1alpha1 because it has no resources.
I0407 09:39:25.058833       1 handler.go:275] Adding GroupVersion policy v1 to ResourceManager
W0407 09:39:25.058852       1 genericapiserver.go:744] Skipping API policy/v1beta1 because it has no resources.
I0407 09:39:25.060496       1 handler.go:275] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
W0407 09:39:25.060516       1 genericapiserver.go:744] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
W0407 09:39:25.060526       1 genericapiserver.go:744] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I0407 09:39:25.060944       1 handler.go:275] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
W0407 09:39:25.060962       1 genericapiserver.go:744] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
W0407 09:39:25.060966       1 genericapiserver.go:744] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I0407 09:39:25.063137       1 handler.go:275] Adding GroupVersion storage.k8s.io v1 to ResourceManager
W0407 09:39:25.063154       1 genericapiserver.go:744] Skipping API storage.k8s.io/v1beta1 because it has no resources.
W0407 09:39:25.063157       1 genericapiserver.go:744] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I0407 09:39:25.064124       1 handler.go:275] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta3 to ResourceManager
I0407 09:39:25.065185       1 handler.go:275] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta2 to ResourceManager
W0407 09:39:25.065204       1 genericapiserver.go:744] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
W0407 09:39:25.065209       1 genericapiserver.go:744] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources.
I0407 09:39:25.068390       1 handler.go:275] Adding GroupVersion apps v1 to ResourceManager
W0407 09:39:25.068413       1 genericapiserver.go:744] Skipping API apps/v1beta2 because it has no resources.
W0407 09:39:25.068418       1 genericapiserver.go:744] Skipping API apps/v1beta1 because it has no resources.
I0407 09:39:25.069347       1 handler.go:275] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
W0407 09:39:25.069364       1 genericapiserver.go:744] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
W0407 09:39:25.069368       1 genericapiserver.go:744] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
I0407 09:39:25.074483       1 handler.go:275] Adding GroupVersion events.k8s.io v1 to ResourceManager
W0407 09:39:25.074586       1 genericapiserver.go:744] Skipping API events.k8s.io/v1beta1 because it has no resources.
I0407 09:39:25.109367       1 handler.go:275] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
W0407 09:39:25.109395       1 genericapiserver.go:744] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
I0407 09:39:25.531109       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/etc/kubernetes/pki/front-proxy-ca.crt"
I0407 09:39:25.531129       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
I0407 09:39:25.531446       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/etc/kubernetes/pki/apiserver.crt::/etc/kubernetes/pki/apiserver.key"
I0407 09:39:25.531724       1 secure_serving.go:213] Serving securely on [::]:6443
I0407 09:39:25.531801       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0407 09:39:25.531939       1 controller.go:80] Starting OpenAPI V3 AggregationController
I0407 09:39:25.531980       1 apf_controller.go:374] Starting API Priority and Fairness config controller
I0407 09:39:25.532042       1 available_controller.go:423] Starting AvailableConditionController
I0407 09:39:25.532067       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0407 09:39:25.532080       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/etc/kubernetes/pki/front-proxy-client.crt::/etc/kubernetes/pki/front-proxy-client.key"
I0407 09:39:25.532265       1 customresource_discovery_controller.go:289] Starting DiscoveryController
I0407 09:39:25.532359       1 controller.go:116] Starting legacy_token_tracking_controller
I0407 09:39:25.532379       1 shared_informer.go:311] Waiting for caches to sync for configmaps
I0407 09:39:25.532379       1 controller.go:78] Starting OpenAPI AggregationController
I0407 09:39:25.532419       1 controller.go:85] Starting OpenAPI V3 controller
I0407 09:39:25.532427       1 aggregator.go:164] waiting for initial CRD sync...
I0407 09:39:25.532454       1 naming_controller.go:291] Starting NamingConditionController
I0407 09:39:25.532396       1 controller.go:134] Starting OpenAPI controller
I0407 09:39:25.532497       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
I0407 09:39:25.532395       1 system_namespaces_controller.go:67] Starting system namespaces controller
I0407 09:39:25.532483       1 establishing_controller.go:76] Starting EstablishingController
I0407 09:39:25.532521       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0407 09:39:25.532634       1 gc_controller.go:78] Starting apiserver lease garbage collector
I0407 09:39:25.532534       1 crd_finalizer.go:266] Starting CRDFinalizer
I0407 09:39:25.532669       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I0407 09:39:25.532709       1 shared_informer.go:311] Waiting for caches to sync for cluster_authentication_trust_controller
I0407 09:39:25.532763       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
I0407 09:39:25.532895       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/etc/kubernetes/pki/front-proxy-ca.crt"
I0407 09:39:25.532998       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
I0407 09:39:25.533014       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0407 09:39:25.533040       1 handler_discovery.go:412] Starting ResourceDiscoveryManager
I0407 09:39:25.533236       1 gc_controller.go:78] Starting apiserver lease garbage collector
I0407 09:39:25.539253       1 crdregistration_controller.go:111] Starting crd-autoregister controller
I0407 09:39:25.539275       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
I0407 09:39:25.601394       1 shared_informer.go:318] Caches are synced for node_authorizer
E0407 09:39:25.605877       1 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
I0407 09:39:25.632810       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
I0407 09:39:25.632992       1 apf_controller.go:379] Running API Priority and Fairness config worker
I0407 09:39:25.633022       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
I0407 09:39:25.633015       1 cache.go:39] Caches are synced for AvailableConditionController controller
I0407 09:39:25.633122       1 shared_informer.go:318] Caches are synced for configmaps
I0407 09:39:25.633263       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0407 09:39:25.635585       1 controller.go:624] quota admission added evaluator for: namespaces
I0407 09:39:25.640240       1 shared_informer.go:318] Caches are synced for crd-autoregister
I0407 09:39:25.640288       1 aggregator.go:166] initial CRD sync complete...
I0407 09:39:25.640300       1 autoregister_controller.go:141] Starting autoregister controller
I0407 09:39:25.640309       1 cache.go:32] Waiting for caches to sync for autoregister controller
I0407 09:39:25.640319       1 cache.go:39] Caches are synced for autoregister controller
I0407 09:39:25.810425       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
I0407 09:39:26.540043       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
I0407 09:39:26.545085       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
I0407 09:39:26.545107       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I0407 09:39:27.168799       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0407 09:39:27.217521       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0407 09:39:27.349667       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
W0407 09:39:27.357235       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [199.204.45.19]
I0407 09:39:27.358902       1 controller.go:624] quota admission added evaluator for: endpoints
I0407 09:39:27.364576       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0407 09:39:29.292707       1 controller.go:624] quota admission added evaluator for: serviceaccounts
I0407 09:39:29.303371       1 controller.go:624] quota admission added evaluator for: deployments.apps
I0407 09:39:29.318834       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
I0407 09:39:29.409973       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
I0407 09:39:34.320856       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
I0407 09:39:34.573132       1 controller.go:624] quota admission added evaluator for: replicasets.apps
I0407 09:39:54.962080       1 handler.go:275] Adding GroupVersion gateway.networking.x-k8s.io v1alpha1 to ResourceManager
I0407 09:39:54.990457       1 handler.go:275] Adding GroupVersion gateway.networking.k8s.io v1 to ResourceManager
I0407 09:39:54.990578       1 handler.go:275] Adding GroupVersion gateway.networking.k8s.io v1beta1 to ResourceManager
I0407 09:39:55.113245       1 handler.go:275] Adding GroupVersion gateway.networking.k8s.io v1beta1 to ResourceManager
I0407 09:39:55.211547       1 handler.go:275] Adding GroupVersion gateway.networking.x-k8s.io v1alpha1 to ResourceManager
I0407 09:39:55.227137       1 handler.go:275] Adding GroupVersion gateway.networking.k8s.io v1alpha2 to ResourceManager
I0407 09:39:55.259594       1 handler.go:275] Adding GroupVersion gateway.networking.k8s.io v1alpha2 to ResourceManager
I0407 09:39:55.279410       1 handler.go:275] Adding GroupVersion gateway.networking.k8s.io v1 to ResourceManager
I0407 09:39:55.279458       1 handler.go:275] Adding GroupVersion gateway.networking.k8s.io v1alpha3 to ResourceManager
I0407 09:39:55.293670       1 handler.go:275] Adding GroupVersion gateway.networking.x-k8s.io v1alpha1 to ResourceManager
I0407 09:39:55.329135       1 handler.go:275] Adding GroupVersion gateway.networking.k8s.io v1alpha2 to ResourceManager
I0407 09:39:55.329198       1 handler.go:275] Adding GroupVersion gateway.networking.k8s.io v1alpha3 to ResourceManager
I0407 09:39:55.382186       1 handler.go:275] Adding GroupVersion gateway.networking.k8s.io v1 to ResourceManager
I0407 09:39:55.382259       1 handler.go:275] Adding GroupVersion gateway.networking.k8s.io v1beta1 to ResourceManager
I0407 09:39:55.417743       1 handler.go:275] Adding GroupVersion gateway.networking.k8s.io v1 to ResourceManager
I0407 09:39:55.537700       1 handler.go:275] Adding GroupVersion gateway.networking.k8s.io v1 to ResourceManager
I0407 09:39:55.537862       1 handler.go:275] Adding GroupVersion gateway.networking.k8s.io v1beta1 to ResourceManager
I0407 09:39:55.547400       1 handler.go:275] Adding GroupVersion gateway.envoyproxy.io v1alpha1 to ResourceManager
I0407 09:39:55.610454       1 handler.go:275] Adding GroupVersion gateway.envoyproxy.io v1alpha1 to ResourceManager
I0407 09:39:55.637378       1 handler.go:275] Adding GroupVersion gateway.envoyproxy.io v1alpha1 to ResourceManager
I0407 09:39:55.673560       1 handler.go:275] Adding GroupVersion gateway.envoyproxy.io v1alpha1 to ResourceManager
I0407 09:39:55.683986       1 handler.go:275] Adding GroupVersion gateway.envoyproxy.io v1alpha1 to ResourceManager
I0407 09:39:56.410449       1 handler.go:275] Adding GroupVersion gateway.envoyproxy.io v1alpha1 to ResourceManager
I0407 09:39:56.427564       1 handler.go:275] Adding GroupVersion gateway.envoyproxy.io v1alpha1 to ResourceManager
I0407 09:39:56.522676       1 handler.go:275] Adding GroupVersion gateway.envoyproxy.io v1alpha1 to ResourceManager
I0407 09:40:01.792820       1 controller.go:624] quota admission added evaluator for: jobs.batch
I0407 09:40:03.406289       1 handler.go:275] Adding GroupVersion cilium.io v2alpha1 to ResourceManager
I0407 09:40:03.412124       1 handler.go:275] Adding GroupVersion cilium.io v2 to ResourceManager
I0407 09:40:03.603744       1 handler.go:275] Adding GroupVersion cilium.io v2 to ResourceManager
I0407 09:40:03.808454       1 handler.go:275] Adding GroupVersion cilium.io v2alpha1 to ResourceManager
I0407 09:40:05.022013       1 handler.go:275] Adding GroupVersion cilium.io v2alpha1 to ResourceManager
I0407 09:40:05.133926       1 handler.go:275] Adding GroupVersion cilium.io v2alpha1 to ResourceManager
I0407 09:40:05.142398       1 handler.go:275] Adding GroupVersion cilium.io v2alpha1 to ResourceManager
I0407 09:40:05.159337       1 handler.go:275] Adding GroupVersion cilium.io v2 to ResourceManager
I0407 09:40:06.632220       1 handler.go:275] Adding GroupVersion cilium.io v2 to ResourceManager
I0407 09:40:07.249853       1 handler.go:275] Adding GroupVersion cilium.io v2 to ResourceManager
I0407 09:40:07.644361       1 handler.go:275] Adding GroupVersion cilium.io v2 to ResourceManager
W0407 09:40:08.350890       1 dispatcher.go:210] Failed calling webhook, failing open topology.webhook.gateway.envoyproxy.io: failed calling webhook "topology.webhook.gateway.envoyproxy.io": failed to call webhook: Post "https://envoy-gateway.envoy-gateway-system.svc:9443/inject-pod-topology?timeout=10s": service "envoy-gateway" not found
E0407 09:40:08.350991       1 dispatcher.go:214] failed calling webhook "topology.webhook.gateway.envoyproxy.io": failed to call webhook: Post "https://envoy-gateway.envoy-gateway-system.svc:9443/inject-pod-topology?timeout=10s": service "envoy-gateway" not found
I0407 09:40:11.871920       1 controller.go:624] quota admission added evaluator for: ciliumendpoints.cilium.io
I0407 09:40:19.704887       1 alloc.go:330] "allocated clusterIPs" service="envoy-gateway-system/envoy-gateway" clusterIPs={"IPv4":"10.105.43.21"}
W0407 09:40:21.740996       1 dispatcher.go:210] Failed calling webhook, failing open topology.webhook.gateway.envoyproxy.io: failed calling webhook "topology.webhook.gateway.envoyproxy.io": failed to call webhook: Post "https://envoy-gateway.envoy-gateway-system.svc:9443/inject-pod-topology?timeout=10s": dial tcp 10.105.43.21:9443: connect: connection refused
E0407 09:40:21.741072       1 dispatcher.go:214] failed calling webhook "topology.webhook.gateway.envoyproxy.io": failed to call webhook: Post "https://envoy-gateway.envoy-gateway-system.svc:9443/inject-pod-topology?timeout=10s": dial tcp 10.105.43.21:9443: connect: connection refused
I0407 09:40:32.332474       1 handler.go:275] Adding GroupVersion cert-manager.io v1 to ResourceManager
I0407 09:40:32.345430       1 handler.go:275] Adding GroupVersion acme.cert-manager.io v1 to ResourceManager
I0407 09:40:32.370274       1 handler.go:275] Adding GroupVersion cert-manager.io v1 to ResourceManager
I0407 09:40:32.411226       1 alloc.go:330] "allocated clusterIPs" service="cert-manager/cert-manager-webhook" clusterIPs={"IPv4":"10.98.235.188"}
I0407 09:40:32.418957       1 alloc.go:330] "allocated clusterIPs" service="cert-manager/cert-manager" clusterIPs={"IPv4":"10.105.241.151"}
I0407 09:40:32.434666       1 handler.go:275] Adding GroupVersion acme.cert-manager.io v1 to ResourceManager
I0407 09:40:32.479076       1 handler.go:275] Adding GroupVersion cert-manager.io v1 to ResourceManager
I0407 09:40:32.509215       1 handler.go:275] Adding GroupVersion cert-manager.io v1 to ResourceManager
I0407 09:40:43.045806       1 controller.go:624] quota admission added evaluator for: certificates.cert-manager.io
I0407 09:40:48.682025       1 controller.go:624] quota admission added evaluator for: certificaterequests.cert-manager.io
I0407 09:41:02.988479       1 alloc.go:330] "allocated clusterIPs" service="ingress-nginx/ingress-nginx-controller-admission" clusterIPs={"IPv4":"10.106.6.157"}
I0407 09:41:02.995969       1 alloc.go:330] "allocated clusterIPs" service="ingress-nginx/ingress-nginx-controller-metrics" clusterIPs={"IPv4":"10.101.22.60"}
I0407 09:41:03.003955       1 alloc.go:330] "allocated clusterIPs" service="ingress-nginx/ingress-nginx-defaultbackend" clusterIPs={"IPv4":"10.97.213.241"}
I0407 09:41:03.011042       1 alloc.go:330] "allocated clusterIPs" service="ingress-nginx/ingress-nginx-controller" clusterIPs={"IPv4":"10.101.221.253"}
I0407 09:41:09.563382       1 handler.go:275] Adding GroupVersion rabbitmq.com v1beta1 to ResourceManager
I0407 09:41:09.584068       1 handler.go:275] Adding GroupVersion rabbitmq.com v1beta1 to ResourceManager
I0407 09:41:09.606436       1 handler.go:275] Adding GroupVersion rabbitmq.com v1beta1 to ResourceManager
I0407 09:41:09.625230       1 handler.go:275] Adding GroupVersion rabbitmq.com v1beta1 to ResourceManager
I0407 09:41:09.642976       1 handler.go:275] Adding GroupVersion rabbitmq.com v1beta1 to ResourceManager
I0407 09:41:09.667095       1 handler.go:275] Adding GroupVersion rabbitmq.com v1beta1 to ResourceManager
I0407 09:41:09.699701       1 handler.go:275] Adding GroupVersion rabbitmq.com v1beta1 to ResourceManager
I0407 09:41:09.716165       1 handler.go:275] Adding GroupVersion rabbitmq.com v1beta1 to ResourceManager
I0407 09:41:09.740423       1 handler.go:275] Adding GroupVersion rabbitmq.com v1beta1 to ResourceManager
I0407 09:41:09.756515       1 handler.go:275] Adding GroupVersion rabbitmq.com v1alpha1 to ResourceManager
I0407 09:41:09.779425       1 handler.go:275] Adding GroupVersion rabbitmq.com v1beta1 to ResourceManager
I0407 09:41:09.803390       1 handler.go:275] Adding GroupVersion rabbitmq.com v1beta1 to ResourceManager
I0407 09:41:09.823585       1 handler.go:275] Adding GroupVersion rabbitmq.com v1beta1 to ResourceManager
I0407 09:41:11.280968       1 handler.go:275] Adding GroupVersion rabbitmq.com v1beta1 to ResourceManager
I0407 09:41:15.192555       1 controller.go:624] quota admission added evaluator for: networkpolicies.networking.k8s.io
I0407 09:41:15.200694       1 controller.go:624] quota admission added evaluator for: poddisruptionbudgets.policy
I0407 09:41:15.200696       1 controller.go:624] quota admission added evaluator for: poddisruptionbudgets.policy
I0407 09:41:15.244352       1 alloc.go:330] "allocated clusterIPs" service="openstack/rabbitmq-messaging-topology-operator-webhook" clusterIPs={"IPv4":"10.100.253.88"}
I0407 09:41:15.277889       1 controller.go:624] quota admission added evaluator for: issuers.cert-manager.io
I0407 09:41:17.727656       1 handler.go:275] Adding GroupVersion pxc.percona.com v1 to ResourceManager
I0407 09:41:17.769809       1 handler.go:275] Adding GroupVersion pxc.percona.com v1 to ResourceManager
I0407 09:41:18.470030       1 handler.go:275] Adding GroupVersion pxc.percona.com v1-10-0 to ResourceManager
I0407 09:41:18.470148       1 handler.go:275] Adding GroupVersion pxc.percona.com v1-11-0 to ResourceManager
I0407 09:41:18.470215       1 handler.go:275] Adding GroupVersion pxc.percona.com v1 to ResourceManager
I0407 09:41:25.565656       1 controller.go:624] quota admission added evaluator for: perconaxtradbclusters.pxc.percona.com
I0407 09:41:28.182271       1 alloc.go:330] "allocated clusterIPs" service="openstack/percona-xtradb-cluster-operator" clusterIPs={"IPv4":"10.109.70.253"}
I0407 09:41:28.561854       1 controller.go:624] quota admission added evaluator for: statefulsets.apps
I0407 09:41:28.815628       1 alloc.go:330] "allocated clusterIPs" service="openstack/percona-xtradb-haproxy" clusterIPs={"IPv4":"10.100.220.47"}
I0407 09:41:28.830667       1 alloc.go:330] "allocated clusterIPs" service="openstack/percona-xtradb-haproxy-replicas" clusterIPs={"IPv4":"10.96.248.89"}
I0407 09:42:48.101197       1 alloc.go:330] "allocated clusterIPs" service="openstack/percona-xtradb-haproxy-metrics" clusterIPs={"IPv4":"10.105.35.95"}
I0407 09:42:50.788384       1 alloc.go:330] "allocated clusterIPs" service="openstack/valkey-metrics" clusterIPs={"IPv4":"10.104.55.108"}
I0407 09:42:50.794687       1 alloc.go:330] "allocated clusterIPs" service="openstack/valkey" clusterIPs={"IPv4":"10.104.248.17"}
I0407 09:42:57.638954       1 alloc.go:330] "allocated clusterIPs" service="auth-system/keycloak" clusterIPs={"IPv4":"10.96.56.145"}
I0407 09:42:57.646377       1 alloc.go:330] "allocated clusterIPs" service="auth-system/keycloak-metrics" clusterIPs={"IPv4":"10.110.122.104"}
I0407 09:45:31.621400       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
I0407 09:45:36.145405       1 handler.go:275] Adding GroupVersion nfd.k8s-sigs.io v1alpha1 to ResourceManager
I0407 09:45:36.178842       1 handler.go:275] Adding GroupVersion nfd.k8s-sigs.io v1alpha1 to ResourceManager
I0407 09:45:40.354409       1 handler.go:275] Adding GroupVersion secretgen.carvel.dev v1alpha1 to ResourceManager
I0407 09:45:40.365338       1 handler.go:275] Adding GroupVersion secretgen.carvel.dev v1alpha1 to ResourceManager
I0407 09:45:40.384026       1 handler.go:275] Adding GroupVersion secretgen.carvel.dev v1alpha1 to ResourceManager
I0407 09:45:40.399876       1 handler.go:275] Adding GroupVersion secretgen.k14s.io v1alpha1 to ResourceManager
I0407 09:45:40.414042       1 handler.go:275] Adding GroupVersion secretgen.k14s.io v1alpha1 to ResourceManager
I0407 09:45:40.424162       1 handler.go:275] Adding GroupVersion secretgen.k14s.io v1alpha1 to ResourceManager
I0407 09:45:40.435287       1 handler.go:275] Adding GroupVersion secretgen.k14s.io v1alpha1 to ResourceManager
I0407 09:45:42.662762       1 controller.go:624] quota admission added evaluator for: nodefeatures.nfd.k8s-sigs.io
I0407 09:45:53.094613       1 controller.go:624] quota admission added evaluator for: passwords.secretgen.k14s.io
I0407 09:46:32.134069       1 controller.go:624] quota admission added evaluator for: secrettemplates.secretgen.carvel.dev
I0407 09:46:46.535454       1 handler.go:275] Adding GroupVersion monitoring.coreos.com v1alpha1 to ResourceManager
I0407 09:46:46.752762       1 handler.go:275] Adding GroupVersion monitoring.coreos.com v1 to ResourceManager
I0407 09:46:46.779398       1 handler.go:275] Adding GroupVersion monitoring.coreos.com v1 to ResourceManager
I0407 09:46:46.788808       1 handler.go:275] Adding GroupVersion monitoring.coreos.com v1 to ResourceManager
I0407 09:46:47.036202       1 handler.go:275] Adding GroupVersion monitoring.coreos.com v1alpha1 to ResourceManager
I0407 09:46:47.371451       1 handler.go:275] Adding GroupVersion monitoring.coreos.com v1 to ResourceManager
I0407 09:46:47.449814       1 handler.go:275] Adding GroupVersion monitoring.coreos.com v1alpha1 to ResourceManager
I0407 09:46:47.462485       1 handler.go:275] Adding GroupVersion monitoring.coreos.com v1 to ResourceManager
I0407 09:46:47.614510       1 handler.go:275] Adding GroupVersion monitoring.coreos.com v1 to ResourceManager
I0407 09:47:27.048938       1 alloc.go:330] "allocated clusterIPs" service="monitoring/kube-prometheus-stack-grafana" clusterIPs={"IPv4":"10.99.163.40"}
I0407 09:47:27.057182       1 alloc.go:330] "allocated clusterIPs" service="monitoring/kube-prometheus-stack-alertmanager" clusterIPs={"IPv4":"10.109.120.12"}
I0407 09:47:27.069866       1 alloc.go:330] "allocated clusterIPs" service="monitoring/kube-prometheus-stack-kube-state-metrics" clusterIPs={"IPv4":"10.106.228.32"}
I0407 09:47:27.077813       1 alloc.go:330] "allocated clusterIPs" service="monitoring/kube-prometheus-stack-prometheus-node-exporter" clusterIPs={"IPv4":"10.107.121.205"}
I0407 09:47:27.085261       1 alloc.go:330] "allocated clusterIPs" service="monitoring/kube-prometheus-stack-prometheus" clusterIPs={"IPv4":"10.111.247.208"}
I0407 09:47:27.091910       1 alloc.go:330] "allocated clusterIPs" service="monitoring/kube-prometheus-stack-operator" clusterIPs={"IPv4":"10.96.225.128"}
I0407 09:47:27.140981       1 controller.go:624] quota admission added evaluator for: alertmanagers.monitoring.coreos.com
I0407 09:47:27.149324       1 controller.go:624] quota admission added evaluator for: prometheusrules.monitoring.coreos.com
I0407 09:47:27.186590       1 controller.go:624] quota admission added evaluator for: podmonitors.monitoring.coreos.com
I0407 09:47:27.197250       1 controller.go:624] quota admission added evaluator for: servicemonitors.monitoring.coreos.com
I0407 09:47:27.215762       1 controller.go:624] quota admission added evaluator for: prometheuses.monitoring.coreos.com
W0407 09:47:27.237371       1 dispatcher.go:210] Failed calling webhook, failing open prometheusrulemutate.monitoring.coreos.com: failed calling webhook "prometheusrulemutate.monitoring.coreos.com": failed to call webhook: Post "https://kube-prometheus-stack-operator.monitoring.svc:443/admission-prometheusrules/mutate?timeout=10s": dial tcp 10.96.225.128:443: connect: connection refused
W0407 09:47:27.237489       1 dispatcher.go:210] Failed calling webhook, failing open prometheusrulemutate.monitoring.coreos.com: failed calling webhook "prometheusrulemutate.monitoring.coreos.com": failed to call webhook: Post "https://kube-prometheus-stack-operator.monitoring.svc:443/admission-prometheusrules/mutate?timeout=10s": dial tcp 10.96.225.128:443: connect: connection refused
E0407 09:47:27.237504       1 dispatcher.go:214] failed calling webhook "prometheusrulemutate.monitoring.coreos.com": failed to call webhook: Post "https://kube-prometheus-stack-operator.monitoring.svc:443/admission-prometheusrules/mutate?timeout=10s": dial tcp 10.96.225.128:443: connect: connection refused
E0407 09:47:27.237568       1 dispatcher.go:214] failed calling webhook "prometheusrulemutate.monitoring.coreos.com": failed to call webhook: Post "https://kube-prometheus-stack-operator.monitoring.svc:443/admission-prometheusrules/mutate?timeout=10s": dial tcp 10.96.225.128:443: connect: connection refused
W0407 09:47:27.237888       1 dispatcher.go:210] Failed calling webhook, failing open prometheusrulemutate.monitoring.coreos.com: failed calling webhook "prometheusrulemutate.monitoring.coreos.com": failed to call webhook: Post "https://kube-prometheus-stack-operator.monitoring.svc:443/admission-prometheusrules/mutate?timeout=10s": dial tcp 10.96.225.128:443: connect: connection refused
E0407 09:47:27.237932       1 dispatcher.go:214] failed calling webhook "prometheusrulemutate.monitoring.coreos.com": failed to call webhook: Post "https://kube-prometheus-stack-operator.monitoring.svc:443/admission-prometheusrules/mutate?timeout=10s": dial tcp 10.96.225.128:443: connect: connection refused
W0407 09:47:27.239463       1 dispatcher.go:210] Failed calling webhook, failing open prometheusrulemutate.monitoring.coreos.com: failed calling webhook "prometheusrulemutate.monitoring.coreos.com": failed to call webhook: Post "https://kube-prometheus-stack-operator.monitoring.svc:443/admission-prometheusrules/mutate?timeout=10s": dial tcp 10.96.225.128:443: connect: connection refused
E0407 09:47:27.239506       1 dispatcher.go:214] failed calling webhook "prometheusrulemutate.monitoring.coreos.com": failed to call webhook: Post "https://kube-prometheus-stack-operator.monitoring.svc:443/admission-prometheusrules/mutate?timeout=10s": dial tcp 10.96.225.128:443: connect: connection refused
W0407 09:47:27.240349       1 dispatcher.go:210] Failed calling webhook, failing open prometheusrulemutate.monitoring.coreos.com: failed calling webhook "prometheusrulemutate.monitoring.coreos.com": failed to call webhook: Post "https://kube-prometheus-stack-operator.monitoring.svc:443/admission-prometheusrules/mutate?timeout=10s": dial tcp 10.96.225.128:443: connect: connection refused
E0407 09:47:27.240390       1 dispatcher.go:214] failed calling webhook "prometheusrulemutate.monitoring.coreos.com": failed to call webhook: Post "https://kube-prometheus-stack-operator.monitoring.svc:443/admission-prometheusrules/mutate?timeout=10s": dial tcp 10.96.225.128:443: connect: connection refused
W0407 09:47:27.240933       1 dispatcher.go:210] Failed calling webhook, failing open prometheusrulemutate.monitoring.coreos.com: failed calling webhook "prometheusrulemutate.monitoring.coreos.com": failed to call webhook: Post "https://kube-prometheus-stack-operator.monitoring.svc:443/admission-prometheusrules/mutate?timeout=10s": dial tcp 10.96.225.128:443: connect: connection refused
E0407 09:47:27.240963 1 dispatcher.go:214] failed calling webhook "prometheusrulemutate.monitoring.coreos.com": failed to call webhook: Post "https://kube-prometheus-stack-operator.monitoring.svc:443/admission-prometheusrules/mutate?timeout=10s": dial tcp 10.96.225.128:443: connect: connection refused
W0407 09:47:27.241030 1 dispatcher.go:210] Failed calling webhook, failing open prometheusrulemutate.monitoring.coreos.com: failed calling webhook "prometheusrulemutate.monitoring.coreos.com": failed to call webhook: Post "https://kube-prometheus-stack-operator.monitoring.svc:443/admission-prometheusrules/mutate?timeout=10s": dial tcp 10.96.225.128:443: connect: connection refused
E0407 09:47:27.241057 1 dispatcher.go:214] failed calling webhook "prometheusrulemutate.monitoring.coreos.com": failed to call webhook: Post "https://kube-prometheus-stack-operator.monitoring.svc:443/admission-prometheusrules/mutate?timeout=10s": dial tcp 10.96.225.128:443: connect: connection refused
W0407 09:47:27.241228 1 dispatcher.go:210] Failed calling webhook, failing open prometheusrulemutate.monitoring.coreos.com: failed calling webhook "prometheusrulemutate.monitoring.coreos.com": failed to call webhook: Post "https://kube-prometheus-stack-operator.monitoring.svc:443/admission-prometheusrules/mutate?timeout=10s": dial tcp 10.96.225.128:443: connect: connection refused
E0407 09:47:27.241299 1 dispatcher.go:214] failed calling webhook "prometheusrulemutate.monitoring.coreos.com": failed to call webhook: Post "https://kube-prometheus-stack-operator.monitoring.svc:443/admission-prometheusrules/mutate?timeout=10s": dial tcp 10.96.225.128:443: connect: connection refused
W0407 09:47:27.241431 1 dispatcher.go:210] Failed calling webhook, failing open prometheusrulemutate.monitoring.coreos.com: failed calling webhook "prometheusrulemutate.monitoring.coreos.com": failed to call webhook: Post "https://kube-prometheus-stack-operator.monitoring.svc:443/admission-prometheusrules/mutate?timeout=10s": dial tcp 10.96.225.128:443: connect: connection refused
E0407 09:47:27.241466 1 dispatcher.go:214] failed calling webhook "prometheusrulemutate.monitoring.coreos.com": failed to call webhook: Post "https://kube-prometheus-stack-operator.monitoring.svc:443/admission-prometheusrules/mutate?timeout=10s": dial tcp 10.96.225.128:443: connect: connection refused
W0407 09:47:27.241950 1 dispatcher.go:210] Failed calling webhook, failing open prometheusrulemutate.monitoring.coreos.com: failed calling webhook "prometheusrulemutate.monitoring.coreos.com": failed to call webhook: Post "https://kube-prometheus-stack-operator.monitoring.svc:443/admission-prometheusrules/mutate?timeout=10s": dial tcp 10.96.225.128:443: connect: connection refused
E0407 09:47:27.241977 1 dispatcher.go:214] failed calling webhook "prometheusrulemutate.monitoring.coreos.com": failed to call webhook: Post "https://kube-prometheus-stack-operator.monitoring.svc:443/admission-prometheusrules/mutate?timeout=10s": dial tcp 10.96.225.128:443: connect: connection refused
W0407 09:47:27.242515 1 dispatcher.go:210] Failed calling webhook, failing open prometheusrulemutate.monitoring.coreos.com: failed calling webhook "prometheusrulemutate.monitoring.coreos.com": failed to call webhook: Post "https://kube-prometheus-stack-operator.monitoring.svc:443/admission-prometheusrules/mutate?timeout=10s": dial tcp 10.96.225.128:443: connect: connection refused
E0407 09:47:27.242553 1 dispatcher.go:214] failed calling webhook "prometheusrulemutate.monitoring.coreos.com": failed to call webhook: Post "https://kube-prometheus-stack-operator.monitoring.svc:443/admission-prometheusrules/mutate?timeout=10s": dial tcp 10.96.225.128:443: connect: connection refused
W0407 09:47:27.242791 1 dispatcher.go:210] Failed calling webhook, failing open prometheusrulemutate.monitoring.coreos.com: failed calling webhook "prometheusrulemutate.monitoring.coreos.com": failed to call webhook: Post "https://kube-prometheus-stack-operator.monitoring.svc:443/admission-prometheusrules/mutate?timeout=10s": dial tcp 10.96.225.128:443: connect: connection refused
E0407 09:47:27.242850 1 dispatcher.go:214] failed calling webhook "prometheusrulemutate.monitoring.coreos.com": failed to call webhook: Post "https://kube-prometheus-stack-operator.monitoring.svc:443/admission-prometheusrules/mutate?timeout=10s": dial tcp 10.96.225.128:443: connect: connection refused
W0407 09:47:27.243214 1 dispatcher.go:210] Failed calling webhook, failing open prometheusrulemutate.monitoring.coreos.com: failed calling webhook "prometheusrulemutate.monitoring.coreos.com": failed to call webhook: Post "https://kube-prometheus-stack-operator.monitoring.svc:443/admission-prometheusrules/mutate?timeout=10s": dial tcp 10.96.225.128:443: connect: connection refused
E0407 09:47:27.243246 1 dispatcher.go:214] failed calling webhook "prometheusrulemutate.monitoring.coreos.com": failed to call webhook: Post "https://kube-prometheus-stack-operator.monitoring.svc:443/admission-prometheusrules/mutate?timeout=10s": dial tcp 10.96.225.128:443: connect: connection refused
W0407 09:47:27.243295 1 dispatcher.go:210] Failed calling webhook, failing open prometheusrulemutate.monitoring.coreos.com: failed calling webhook "prometheusrulemutate.monitoring.coreos.com": failed to call webhook: Post "https://kube-prometheus-stack-operator.monitoring.svc:443/admission-prometheusrules/mutate?timeout=10s": dial tcp 10.96.225.128:443: connect: connection refused
E0407 09:47:27.243325 1 dispatcher.go:214] failed calling webhook "prometheusrulemutate.monitoring.coreos.com": failed to call webhook: Post "https://kube-prometheus-stack-operator.monitoring.svc:443/admission-prometheusrules/mutate?timeout=10s": dial tcp 10.96.225.128:443: connect: connection refused
W0407 09:47:27.243565 1 dispatcher.go:210] Failed calling webhook, failing open prometheusrulemutate.monitoring.coreos.com: failed calling webhook "prometheusrulemutate.monitoring.coreos.com": failed to call webhook: Post "https://kube-prometheus-stack-operator.monitoring.svc:443/admission-prometheusrules/mutate?timeout=10s": dial tcp 10.96.225.128:443: connect: connection refused
E0407 09:47:27.243595 1 dispatcher.go:214] failed calling webhook "prometheusrulemutate.monitoring.coreos.com": failed to call webhook: Post "https://kube-prometheus-stack-operator.monitoring.svc:443/admission-prometheusrules/mutate?timeout=10s": dial tcp 10.96.225.128:443: connect: connection refused
W0407 09:47:27.244593 1 dispatcher.go:210] Failed calling webhook, failing open prometheusrulemutate.monitoring.coreos.com: failed calling webhook "prometheusrulemutate.monitoring.coreos.com": failed to call webhook: Post "https://kube-prometheus-stack-operator.monitoring.svc:443/admission-prometheusrules/mutate?timeout=10s": dial tcp 10.96.225.128:443: connect: connection refused
E0407 09:47:27.244621 1 dispatcher.go:214] failed calling webhook "prometheusrulemutate.monitoring.coreos.com": failed to call webhook: Post "https://kube-prometheus-stack-operator.monitoring.svc:443/admission-prometheusrules/mutate?timeout=10s": dial tcp 10.96.225.128:443: connect: connection refused
I0407 09:47:38.042439 1 alloc.go:330] "allocated clusterIPs" service="monitoring/loki" clusterIPs={"IPv4":"10.111.248.21"}
I0407 09:47:38.052840 1 alloc.go:330] "allocated clusterIPs" service="monitoring/loki-gateway" clusterIPs={"IPv4":"10.110.149.134"}
I0407 09:47:43.205285 1 alloc.go:330] "allocated clusterIPs" service="monitoring/goldpinger" clusterIPs={"IPv4":"10.108.163.193"}
I0407 09:47:46.101849 1 alloc.go:330] "allocated clusterIPs" service="monitoring/prometheus-pushgateway" clusterIPs={"IPv4":"10.106.209.203"}
I0407 09:47:51.178638 1 alloc.go:330] "allocated clusterIPs" service="openstack/memcached" clusterIPs={"IPv4":"10.103.250.91"}
I0407 09:47:52.074154 1 trace.go:236]
Trace[1421206805]: "Create" accept:application/vnd.kubernetes.protobuf, */*,audit-id:18041eed-63de-4994-85a4-114ccd82293a,client:199.204.45.19,protocol:HTTP/2.0,resource:endpoints,scope:resource,url:/api/v1/namespaces/openstack/endpoints,user-agent:kube-controller-manager/v1.28.13 (linux/amd64) kubernetes/024ab2a/system:serviceaccount:kube-system:endpoint-controller,verb:POST (07-Apr-2026 09:47:51.180) (total time: 893ms):
Trace[1421206805]: ["Create etcd3" audit-id:18041eed-63de-4994-85a4-114ccd82293a,key:/services/endpoints/openstack/memcached,type:*core.Endpoints,resource:endpoints 891ms (09:47:51.182)
Trace[1421206805]: ---"Txn call succeeded" 891ms (09:47:52.073)]
Trace[1421206805]: [893.130304ms] [893.130304ms] END
I0407 09:47:52.074427 1 trace.go:236] Trace[714113436]: "Get" accept:application/vnd.kubernetes.protobuf, */*,audit-id:a7aee839-545e-4e71-a2f8-c2922c3f2615,client:199.204.45.19,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-scheduler,user-agent:kube-scheduler/v1.28.13 (linux/amd64) kubernetes/024ab2a/leader-election,verb:GET (07-Apr-2026 09:47:51.530) (total time: 543ms):
Trace[714113436]: ---"About to write a response" 543ms (09:47:52.074)
Trace[714113436]: [543.849411ms] [543.849411ms] END
I0407 09:47:52.074596 1 trace.go:236] Trace[1672058696]: "Create" accept:application/json,audit-id:557bf58c-af2e-45f3-b937-1af3df48c3ec,client:199.204.45.19,protocol:HTTP/2.0,resource:deployments,scope:resource,url:/apis/apps/v1/namespaces/openstack/deployments,user-agent:Helm/3.11.2,verb:POST (07-Apr-2026 09:47:51.180) (total time: 893ms):
Trace[1672058696]: ["Create etcd3" audit-id:557bf58c-af2e-45f3-b937-1af3df48c3ec,key:/deployments/openstack/memcached-memcached,type:*apps.Deployment,resource:deployments.apps 890ms (09:47:51.184)
Trace[1672058696]: ---"Txn call succeeded" 889ms (09:47:52.073)]
Trace[1672058696]: [893.766349ms] [893.766349ms] END
I0407 09:47:52.074698 1 trace.go:236] Trace[1868796480]: "Patch" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:858a91be-0312-4661-8eb8-747dc49c422a,client:199.204.45.19,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/monitoring/pods/vector-m2bmg/status,user-agent:kubelet/v1.28.13 (linux/amd64) kubernetes/024ab2a,verb:PATCH (07-Apr-2026 09:47:51.183) (total time: 891ms):
Trace[1868796480]: ["GuaranteedUpdate etcd3" audit-id:858a91be-0312-4661-8eb8-747dc49c422a,key:/pods/monitoring/vector-m2bmg,type:*core.Pod,resource:pods 891ms (09:47:51.183)
Trace[1868796480]: ---"Txn call completed" 887ms (09:47:52.073)]
Trace[1868796480]: ---"Object stored in database" 888ms (09:47:52.074)
Trace[1868796480]: [891.368232ms] [891.368232ms] END
I0407 09:47:52.075633 1 trace.go:236] Trace[1701116245]: "Create" accept:application/vnd.kubernetes.protobuf, */*,audit-id:ec108a2f-c4d6-4a22-8ecc-425cb50786db,client:199.204.45.19,protocol:HTTP/2.0,resource:endpointslices,scope:resource,url:/apis/discovery.k8s.io/v1/namespaces/openstack/endpointslices,user-agent:kube-controller-manager/v1.28.13 (linux/amd64) kubernetes/024ab2a/system:serviceaccount:kube-system:endpointslice-controller,verb:POST (07-Apr-2026 09:47:51.181) (total time: 894ms):
Trace[1701116245]: ["Create etcd3" audit-id:ec108a2f-c4d6-4a22-8ecc-425cb50786db,key:/endpointslices/openstack/memcached-fvvqb,type:*discovery.EndpointSlice,resource:endpointslices.discovery.k8s.io 893ms (09:47:51.182)
Trace[1701116245]: ---"Txn call succeeded" 891ms (09:47:52.073)]
Trace[1701116245]: [894.231792ms] [894.231792ms] END
I0407 09:47:54.983713 1 trace.go:236] Trace[1514516377]: "Create" accept:application/vnd.kubernetes.protobuf, */*,audit-id:7480043c-5b82-4714-b0b9-2fdf8bb569b3,client:199.204.45.19,protocol:HTTP/2.0,resource:events,scope:resource,url:/api/v1/namespaces/openstack/events,user-agent:kube-controller-manager/v1.28.13 (linux/amd64) kubernetes/024ab2a/system:serviceaccount:kube-system:replicaset-controller,verb:POST (07-Apr-2026 09:47:52.732) (total time: 2251ms):
Trace[1514516377]: ["Create etcd3" audit-id:7480043c-5b82-4714-b0b9-2fdf8bb569b3,key:/events/openstack/memcached-memcached-6479589586.18a40a1cfe2a5bc1,type:*core.Event,resource:events 2250ms (09:47:52.733)
Trace[1514516377]: ---"Txn call succeeded" 2250ms (09:47:54.983)]
Trace[1514516377]: [2.251402881s] [2.251402881s] END
I0407 09:47:55.847906 1 trace.go:236] Trace[2091577953]: "Update" accept:application/json, */*,audit-id:3e2c551f-44e6-45c8-9a0b-8da2cb25b40c,client:10.0.0.17,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/cert-manager/leases/cert-manager-cainjector-leader-election,user-agent:cainjector/v0.0.0 (linux/amd64) kubernetes/$Format/leader-election,verb:PUT (07-Apr-2026 09:47:52.733) (total time: 3114ms):
Trace[2091577953]: ["GuaranteedUpdate etcd3" audit-id:3e2c551f-44e6-45c8-9a0b-8da2cb25b40c,key:/leases/cert-manager/cert-manager-cainjector-leader-election,type:*coordination.Lease,resource:leases.coordination.k8s.io 3113ms (09:47:52.734)
Trace[2091577953]: ---"Txn call completed" 3111ms (09:47:55.847)]
Trace[2091577953]: [3.114151114s] [3.114151114s] END
I0407 09:47:55.849223 1 trace.go:236] Trace[1390142145]: "Create" accept:application/vnd.kubernetes.protobuf, */*,audit-id:6a54e952-c779-404a-bca1-b027e1dfb876,client:199.204.45.19,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/openstack/pods/memcached-memcached-6479589586-jnkkv/binding,user-agent:kube-scheduler/v1.28.13 (linux/amd64) kubernetes/024ab2a/scheduler,verb:POST (07-Apr-2026 09:47:52.732) (total time: 3116ms):
Trace[1390142145]: ["GuaranteedUpdate etcd3" audit-id:6a54e952-c779-404a-bca1-b027e1dfb876,key:/pods/openstack/memcached-memcached-6479589586-jnkkv,type:*core.Pod,resource:pods 3115ms (09:47:52.733)
Trace[1390142145]: ---"Txn call completed" 3113ms (09:47:55.847)]
Trace[1390142145]: [3.116590573s] [3.116590573s] END
I0407 09:47:55.849420 1 trace.go:236]
Trace[1400539916]: "Update" accept:application/vnd.kubernetes.protobuf, */*,audit-id:f003874e-b66a-4f13-9798-9ea3263e55a8,client:199.204.45.19,protocol:HTTP/2.0,resource:deployments,scope:resource,url:/apis/apps/v1/namespaces/openstack/deployments/memcached-memcached/status,user-agent:kube-controller-manager/v1.28.13 (linux/amd64) kubernetes/024ab2a/system:serviceaccount:kube-system:deployment-controller,verb:PUT (07-Apr-2026 09:47:52.733) (total time: 3115ms):
Trace[1400539916]: ["GuaranteedUpdate etcd3" audit-id:f003874e-b66a-4f13-9798-9ea3263e55a8,key:/deployments/openstack/memcached-memcached,type:*apps.Deployment,resource:deployments.apps 3115ms (09:47:52.733)
Trace[1400539916]: ---"Txn call completed" 3108ms (09:47:55.847)]
Trace[1400539916]: [3.11560037s] [3.11560037s] END
I0407 09:47:55.850697 1 trace.go:236] Trace[1577502032]: "Update" accept:application/vnd.kubernetes.protobuf, */*,audit-id:93c825fe-f969-4962-b007-64d46bb35af9,client:199.204.45.19,protocol:HTTP/2.0,resource:replicasets,scope:resource,url:/apis/apps/v1/namespaces/openstack/replicasets/memcached-memcached-6479589586/status,user-agent:kube-controller-manager/v1.28.13 (linux/amd64) kubernetes/024ab2a/system:serviceaccount:kube-system:replicaset-controller,verb:PUT (07-Apr-2026 09:47:52.732) (total time: 3118ms):
Trace[1577502032]: ["GuaranteedUpdate etcd3" audit-id:93c825fe-f969-4962-b007-64d46bb35af9,key:/replicasets/openstack/memcached-memcached-6479589586,type:*apps.ReplicaSet,resource:replicasets.apps 3118ms (09:47:52.732)
Trace[1577502032]: ---"Txn call completed" 3109ms (09:47:55.847)]
Trace[1577502032]: [3.1185396s] [3.1185396s] END
E0407 09:47:57.772249 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}: context canceled
E0407 09:47:57.772355 1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
E0407 09:47:57.773499 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
E0407 09:47:57.773544 1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
I0407 09:47:57.774625 1 trace.go:236] Trace[1084858626]: "Get" accept:application/vnd.kubernetes.protobuf, */*,audit-id:ce903572-1a68-4f27-8b59-529f7b507ece,client:199.204.45.19,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager,user-agent:kube-controller-manager/v1.28.13 (linux/amd64) kubernetes/024ab2a/leader-election,verb:GET (07-Apr-2026 09:47:52.771) (total time: 5002ms):
Trace[1084858626]: [5.002643238s] [5.002643238s] END
E0407 09:47:57.774861 1 timeout.go:142] post-timeout activity - time-elapsed: 2.593793ms, GET "/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager" result:
I0407 09:47:57.959372 1 trace.go:236] Trace[1374371706]: "Update" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:824c4b93-9a85-4bc4-8e37-620372d494ba,client:10.0.0.126,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/openstack/leases/rabbitmq-cluster-operator-leader-election,user-agent:manager/v0.0.0 (linux/amd64) kubernetes/$Format/leader-election,verb:PUT (07-Apr-2026 09:47:52.959) (total time: 4999ms):
Trace[1374371706]: ---"Write to database call failed" len:469,err:Timeout: request did not complete within requested timeout - context canceled 4999ms (09:47:57.959)
Trace[1374371706]: [4.999942312s] [4.999942312s] END
E0407 09:47:57.959481 1 wrap.go:54] timeout or abort while handling: method=PUT URI="/apis/coordination.k8s.io/v1/namespaces/openstack/leases/rabbitmq-cluster-operator-leader-election?timeout=5s" audit-ID="824c4b93-9a85-4bc4-8e37-620372d494ba"
E0407 09:47:57.959543 1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 10.51µs, panicked: false, err: context canceled, panic-reason:
E0407 09:47:57.959549 1 timeout.go:142] post-timeout activity - time-elapsed: 5.621µs, PUT "/apis/coordination.k8s.io/v1/namespaces/openstack/leases/rabbitmq-cluster-operator-leader-election" result:
E0407 09:47:58.125513 1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
E0407 09:47:58.125659 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
E0407 09:47:58.125586 1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 11.63µs, panicked: false, err: context canceled, panic-reason:
E0407 09:47:58.126771 1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
I0407 09:47:58.126896 1 trace.go:236] Trace[1520362451]: "Update" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:663c8cb7-831c-4d71-b4d4-3f07690fcd53,client:10.0.0.49,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/envoy-gateway-system/leases/5b9825d2.gateway.envoyproxy.io,user-agent:envoy-gateway/v0.0.0 (linux/amd64) kubernetes/$Format/leader-election,verb:PUT (07-Apr-2026 09:47:53.126) (total time: 5000ms):
Trace[1520362451]: ["GuaranteedUpdate etcd3" audit-id:663c8cb7-831c-4d71-b4d4-3f07690fcd53,key:/leases/envoy-gateway-system/5b9825d2.gateway.envoyproxy.io,type:*coordination.Lease,resource:leases.coordination.k8s.io 5000ms (09:47:53.126)
Trace[1520362451]: ---"Txn call failed" err:context canceled 4998ms (09:47:58.125)]
Trace[1520362451]: [5.00070595s] [5.00070595s] END
E0407 09:47:58.127069 1 timeout.go:142] post-timeout activity - time-elapsed: 1.516287ms, PUT "/apis/coordination.k8s.io/v1/namespaces/envoy-gateway-system/leases/5b9825d2.gateway.envoyproxy.io" result:
E0407 09:47:59.078800 1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
E0407 09:47:59.078847 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
E0407 09:47:59.078886 1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 13.39µs, panicked: false, err: context deadline exceeded, panic-reason:
E0407 09:47:59.080672 1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
I0407 09:47:59.080785 1 trace.go:236] Trace[19074508]: "Update" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:1c1e1e30-0a6a-4368-9f97-23cb52b14277,client:10.0.0.70,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/openstack/leases/08db1feb.percona.com,user-agent:percona-xtradb-cluster-operator/v0.0.0 (linux/amd64) kubernetes/$Format/leader-election,verb:PUT (07-Apr-2026 09:47:54.078) (total time: 5001ms):
Trace[19074508]: ["GuaranteedUpdate etcd3" audit-id:1c1e1e30-0a6a-4368-9f97-23cb52b14277,key:/leases/openstack/08db1feb.percona.com,type:*coordination.Lease,resource:leases.coordination.k8s.io 5001ms (09:47:54.079)
Trace[19074508]: ---"Txn call failed" err:context deadline exceeded 4998ms (09:47:59.078)]
Trace[19074508]: [5.001711465s] [5.001711465s] END
E0407 09:47:59.081009 1 timeout.go:142] post-timeout activity - time-elapsed: 2.167762ms, PUT "/apis/coordination.k8s.io/v1/namespaces/openstack/leases/08db1feb.percona.com" result:
E0407 09:47:59.403326 1 status.go:71] apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}: context deadline exceeded
E0407 09:47:59.403469 1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
E0407 09:47:59.405063 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
E0407 09:47:59.405119 1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
I0407 09:47:59.406316 1 trace.go:236] Trace[1168612830]: "Get" accept:application/vnd.kubernetes.protobuf, */*,audit-id:fffe1283-0c83-4623-b60f-6b4678c059c3,client:199.204.45.19,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-scheduler,user-agent:kube-scheduler/v1.28.13 (linux/amd64) kubernetes/024ab2a/leader-election,verb:GET (07-Apr-2026 09:47:54.403) (total time: 5002ms):
Trace[1168612830]: [5.002574905s] [5.002574905s] END
E0407 09:47:59.406457 1 timeout.go:142] post-timeout activity - time-elapsed: 2.993723ms, GET "/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-scheduler" result:
E0407 09:47:59.523670 1 status.go:71] apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}: context deadline exceeded
E0407 09:47:59.523783 1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
E0407 09:47:59.524910 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
E0407 09:47:59.524954 1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
I0407 09:47:59.526103 1 trace.go:236] Trace[836005871]: "Get" accept:application/json, */*,audit-id:52c5f8d6-c266-4f89-9d3e-5ac4404f0cbb,client:199.204.45.19,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/cilium-operator-resource-lock,user-agent:cilium-operator-generic/v0.0.0 (linux/amd64) kubernetes/$Format/leader-election,verb:GET (07-Apr-2026 09:47:54.523) (total time: 5002ms):
Trace[836005871]: [5.002033873s] [5.002033873s] END
E0407 09:47:59.526356 1 timeout.go:142] post-timeout activity - time-elapsed: 2.561842ms, GET "/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/cilium-operator-resource-lock" result:
E0407 09:47:59.734018 1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
E0407 09:47:59.734098 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
E0407 09:47:59.734130 1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 20.6µs, panicked: false, err: context canceled, panic-reason:
E0407 09:47:59.735167 1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
I0407 09:47:59.735260 1 trace.go:236] Trace[2095386400]: "Update" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:8aa20375-fa96-4f36-945b-9a29f850e2be,client:10.0.0.3,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/openstack/leases/messaging-topology-operator-leader-election,user-agent:manager/v0.0.0 (linux/amd64) kubernetes/$Format/leader-election,verb:PUT (07-Apr-2026 09:47:54.733) (total time: 5001ms):
Trace[2095386400]: ["GuaranteedUpdate etcd3" audit-id:8aa20375-fa96-4f36-945b-9a29f850e2be,key:/leases/openstack/messaging-topology-operator-leader-election,type:*coordination.Lease,resource:leases.coordination.k8s.io 5000ms (09:47:54.734)
Trace[2095386400]: ---"Txn call failed" err:context canceled 4998ms (09:47:59.733)]
Trace[2095386400]: [5.001208272s] [5.001208272s] END
E0407 09:47:59.735420 1 timeout.go:142] post-timeout activity - time-elapsed: 1.479607ms, PUT "/apis/coordination.k8s.io/v1/namespaces/openstack/leases/messaging-topology-operator-leader-election" result:
E0407 09:48:01.639146 1 status.go:71] apiserver received an error that is not an metav1.Status: rpctypes.EtcdError{code:0xe, desc:"etcdserver: request timed out"}: etcdserver: request timed out
I0407 09:48:01.639324 1 trace.go:236] Trace[1938051024]: "Update" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:ded4debe-541f-4b7c-bd59-6f156167710d,client:199.204.45.19,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/ingress-nginx/leases/ingress-nginx-leader,user-agent:nginx-ingress-controller/v1.12.1 (linux/amd64) ingress-nginx/51c2b819690bbf1709b844dbf321a9acf6eda5a7,verb:PUT (07-Apr-2026 09:47:54.635) (total time: 7003ms):
Trace[1938051024]: ["GuaranteedUpdate etcd3" audit-id:ded4debe-541f-4b7c-bd59-6f156167710d,key:/leases/ingress-nginx/ingress-nginx-leader,type:*coordination.Lease,resource:leases.coordination.k8s.io 7003ms (09:47:54.635)
Trace[1938051024]: ---"Txn call failed" err:etcdserver: request timed out 7002ms (09:48:01.639)]
Trace[1938051024]: [7.003684783s] [7.003684783s] END
E0407 09:48:02.771411 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}: context canceled
E0407 09:48:02.771521 1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
E0407 09:48:02.772673 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
E0407 09:48:02.772753 1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
I0407 09:48:02.773855 1 trace.go:236] Trace[193636894]: "Get" accept:application/vnd.kubernetes.protobuf, */*,audit-id:2e6ab2fd-5545-4bd2-ab6f-cf975c94dad4,client:199.204.45.19,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager,user-agent:kube-controller-manager/v1.28.13 (linux/amd64) kubernetes/024ab2a/leader-election,verb:GET (07-Apr-2026 09:47:59.773) (total time: 3000ms):
Trace[193636894]: [3.000071899s] [3.000071899s] END
E0407 09:48:02.774179 1 timeout.go:142] post-timeout activity - time-elapsed: 2.795478ms, GET "/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager" result:
E0407 09:48:02.798886 1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
E0407 09:48:02.798971 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
E0407 09:48:02.799007 1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 13.48µs, panicked: false, err: context canceled, panic-reason:
E0407 09:48:02.800092 1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
I0407 09:48:02.800303 1 trace.go:236] Trace[1322288294]: "Update" accept:application/vnd.kubernetes.protobuf, */*,audit-id:24ef2a00-5880-43b6-a38c-5a3e614c871d,client:199.204.45.19,protocol:HTTP/2.0,resource:replicasets,scope:resource,url:/apis/apps/v1/namespaces/openstack/replicasets/memcached-memcached-6479589586/status,user-agent:kube-controller-manager/v1.28.13 (linux/amd64) kubernetes/024ab2a/system:serviceaccount:kube-system:replicaset-controller,verb:PUT (07-Apr-2026 09:47:55.853) (total time: 6946ms):
Trace[1322288294]: ["GuaranteedUpdate etcd3" audit-id:24ef2a00-5880-43b6-a38c-5a3e614c871d,key:/replicasets/openstack/memcached-memcached-6479589586,type:*apps.ReplicaSet,resource:replicasets.apps 6946ms (09:47:55.853)
Trace[1322288294]: ---"Txn call failed" err:context canceled 6940ms (09:48:02.798)]
Trace[1322288294]: [6.946915063s] [6.946915063s] END
E0407 09:48:02.800491 1 timeout.go:142] post-timeout activity - time-elapsed: 1.690591ms, PUT "/apis/apps/v1/namespaces/openstack/replicasets/memcached-memcached-6479589586/status" result:
E0407 09:48:02.854864 1 status.go:71] apiserver received an error that is not an metav1.Status: rpctypes.EtcdError{code:0xe, desc:"etcdserver: request timed out"}: etcdserver: request timed out
I0407 09:48:02.855141 1 trace.go:236] Trace[1111495589]: "Create" accept:application/vnd.kubernetes.protobuf, */*,audit-id:cff71f43-8d36-43fa-8286-070b615f4886,client:199.204.45.19,protocol:HTTP/2.0,resource:events,scope:resource,url:/api/v1/namespaces/openstack/events,user-agent:kube-scheduler/v1.28.13 (linux/amd64) kubernetes/024ab2a/scheduler,verb:POST (07-Apr-2026 09:47:55.852) (total time: 7002ms):
Trace[1111495589]: ["Create etcd3" audit-id:cff71f43-8d36-43fa-8286-070b615f4886,key:/events/openstack/memcached-memcached-6479589586-jnkkv.18a40a1db8307dc3,type:*core.Event,resource:events 7001ms (09:47:55.853)
Trace[1111495589]: ---"Txn call failed" err:etcdserver: request timed out 7001ms (09:48:02.854)]
Trace[1111495589]: [7.002314648s] [7.002314648s] END
E0407 09:48:02.959390 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}: context canceled
E0407 09:48:02.959549 1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
E0407 09:48:02.960662 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
E0407 09:48:02.960742 1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
I0407 09:48:02.961886 1 trace.go:236] Trace[940470122]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:8098d389-a9d6-4935-a706-42163d60b0ab,client:10.0.0.126,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/openstack/leases/rabbitmq-cluster-operator-leader-election,user-agent:manager/v0.0.0 (linux/amd64) kubernetes/$Format/leader-election,verb:GET (07-Apr-2026 09:47:57.960) (total time: 5001ms):
Trace[940470122]: [5.001798827s] [5.001798827s] END
E0407 09:48:02.962134 1 timeout.go:142] post-timeout activity - time-elapsed: 2.835138ms, GET "/apis/coordination.k8s.io/v1/namespaces/openstack/leases/rabbitmq-cluster-operator-leader-election" result:
E0407 09:48:03.126174 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}: context canceled
E0407 09:48:03.126306 1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
E0407 09:48:03.127251 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
E0407 09:48:03.127460 1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
I0407 09:48:03.128448 1 trace.go:236] Trace[1846923705]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:7807f91d-bac3-4590-9ee4-2d37dd5a39cb,client:10.0.0.49,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/envoy-gateway-system/leases/5b9825d2.gateway.envoyproxy.io,user-agent:envoy-gateway/v0.0.0 (linux/amd64) kubernetes/$Format/leader-election,verb:GET (07-Apr-2026 09:47:58.127) (total time: 5000ms):
Trace[1846923705]: [5.000597548s] [5.000597548s] END
E0407 09:48:03.128585 1 timeout.go:142] post-timeout activity - time-elapsed: 2.464269ms, GET "/apis/coordination.k8s.io/v1/namespaces/envoy-gateway-system/leases/5b9825d2.gateway.envoyproxy.io" result:
E0407 09:48:03.410813 1 status.go:71] apiserver received an error that is not an metav1.Status: rpctypes.EtcdError{code:0xe, desc:"etcdserver: request timed out"}: etcdserver: request timed out
I0407 09:48:03.411002 1 trace.go:236] Trace[2077631070]: "Update" accept:application/vnd.kubernetes.protobuf, */*,audit-id:a2225596-007b-4b0d-8169-2cbb6167a7fe,client:::1,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/apiserver-f3tcohoifagyom4bbi4wgeu7te,user-agent:kube-apiserver/v1.28.13 (linux/amd64) kubernetes/024ab2a,verb:PUT (07-Apr-2026 09:47:56.406) (total time: 7004ms):
Trace[2077631070]: ["GuaranteedUpdate etcd3" audit-id:a2225596-007b-4b0d-8169-2cbb6167a7fe,key:/leases/kube-system/apiserver-f3tcohoifagyom4bbi4wgeu7te,type:*coordination.Lease,resource:leases.coordination.k8s.io 7003ms (09:47:56.407)
Trace[2077631070]: ---"Txn call failed" err:etcdserver: request timed out 7002ms (09:48:03.410)]
Trace[2077631070]: [7.004206952s] [7.004206952s] END
E0407 09:48:03.411631 1 controller.go:193] "Failed to update lease" err="etcdserver: request timed out"
E0407 09:48:04.078944 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}: context canceled
E0407 09:48:04.079482 1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
E0407 09:48:04.080560 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
E0407 09:48:04.080610 1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
I0407 09:48:04.081804 1 trace.go:236] Trace[2088305109]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:a3cc74b4-a593-42df-b4d7-af32057b9778,client:10.0.0.70,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/openstack/leases/08db1feb.percona.com,user-agent:percona-xtradb-cluster-operator/v0.0.0 (linux/amd64) kubernetes/$Format/leader-election,verb:GET (07-Apr-2026 09:47:59.080) (total time: 5001ms):
Trace[2088305109]: [5.001435108s] [5.001435108s] END
E0407 09:48:04.082032 1 timeout.go:142] post-timeout activity - time-elapsed: 2.351947ms, GET "/apis/coordination.k8s.io/v1/namespaces/openstack/leases/08db1feb.percona.com" result:
E0407 09:48:04.102942 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}: context canceled
E0407 09:48:04.103022 1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
E0407 09:48:04.104746 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
E0407 09:48:04.104795 1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
I0407 09:48:04.105940 1 trace.go:236] Trace[9272130]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:65e902d9-03ea-4499-85a7-9eb7da521a50,client:10.0.0.70,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/openstack/pods/percona-xtradb-pxc-0,user-agent:percona-xtradb-cluster-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (07-Apr-2026 09:47:53.578) (total time: 10527ms):
Trace[9272130]: [10.527206137s] [10.527206137s] END
E0407 09:48:04.106113 1 timeout.go:142] post-timeout activity - time-elapsed: 3.168306ms, GET "/api/v1/namespaces/openstack/pods/percona-xtradb-pxc-0" result:
E0407 09:48:04.317587 1 status.go:71] apiserver received an error that is not an metav1.Status: rpctypes.EtcdError{code:0xe, desc:"etcdserver: request timed out"}: etcdserver: request timed out
I0407 09:48:04.317829 1 trace.go:236] Trace[883964670]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:ee11f723-32ee-48c2-8c1f-363bfe92fc66,client:199.204.45.19,protocol:HTTP/2.0,resource:events,scope:resource,url:/api/v1/namespaces/kube-system/events,user-agent:kubelet/v1.28.13 (linux/amd64) kubernetes/024ab2a,verb:POST (07-Apr-2026 09:47:57.314) (total time: 7002ms):
Trace[883964670]: ["Create etcd3" audit-id:ee11f723-32ee-48c2-8c1f-363bfe92fc66,key:/events/kube-system/kube-apiserver-instance.18a40a1e0f52a8c5,type:*core.Event,resource:events 7001ms (09:47:57.315)
Trace[883964670]: ---"Txn call failed" err:etcdserver: request timed out 7001ms (09:48:04.317)]
Trace[883964670]: [7.002928682s] [7.002928682s] END
E0407 09:48:04.402672 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}: context
canceled E0407 09:48:04.402757 1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout E0407 09:48:04.404456 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout E0407 09:48:04.404509 1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout I0407 09:48:04.405680 1 trace.go:236] Trace[1946120133]: "Get" accept:application/vnd.kubernetes.protobuf, */*,audit-id:16d69aa4-6e4c-4ba1-8ee4-485cdc6301c6,client:199.204.45.19,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-scheduler,user-agent:kube-scheduler/v1.28.13 (linux/amd64) kubernetes/024ab2a/leader-election,verb:GET (07-Apr-2026 09:48:01.406) (total time: 2999ms): Trace[1946120133]: [2.999489254s] [2.999489254s] END E0407 09:48:04.405903 1 timeout.go:142] post-timeout activity - time-elapsed: 3.250498ms, GET "/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-scheduler" result: E0407 09:48:04.523538 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}: context canceled E0407 09:48:04.523633 1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout E0407 09:48:04.524706 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}: context canceled E0407 09:48:04.524723 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}: context canceled E0407 09:48:04.524814 1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout E0407 09:48:04.525200 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout E0407 09:48:04.525952 1 status.go:71] apiserver received an error 
that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout E0407 09:48:04.526024 1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout E0407 09:48:04.526107 1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout E0407 09:48:04.526279 1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout I0407 09:48:04.527590 1 trace.go:236] Trace[1508883863]: "Get" accept:application/json, */*,audit-id:5f2cb8b0-7cd4-490e-8782-c8792d964460,client:127.0.0.1,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/plndr-cp-lock,user-agent:kube-vip/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (07-Apr-2026 09:47:54.524) (total time: 10003ms): Trace[1508883863]: [10.003448392s] [10.003448392s] END E0407 09:48:04.527613 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout E0407 09:48:04.527663 1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout E0407 09:48:04.527713 1 timeout.go:142] post-timeout activity - time-elapsed: 3.176746ms, GET "/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/plndr-cp-lock" result: I0407 09:48:04.527787 1 trace.go:236] Trace[1558755435]: "Get" accept:application/json, */*,audit-id:b2ba05f2-6ee7-4635-aa23-1ba7f094f9ff,client:199.204.45.19,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/cilium-operator-resource-lock,user-agent:cilium-operator-generic/v0.0.0 (linux/amd64) kubernetes/$Format/leader-election,verb:GET (07-Apr-2026 09:48:01.525) (total time: 3001ms): Trace[1558755435]: [3.001880122s] [3.001880122s] END E0407 09:48:04.528028 1 timeout.go:142] post-timeout activity - time-elapsed: 4.513499ms, GET 
"/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/cilium-operator-resource-lock" result: I0407 09:48:04.528933 1 trace.go:236] Trace[421016285]: "Get" accept:application/json, */*,audit-id:617fab17-a040-4e71-a173-a26724748054,client:127.0.0.1,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/plndr-svcs-lock,user-agent:kube-vip/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (07-Apr-2026 09:47:54.523) (total time: 10004ms): Trace[421016285]: [10.004956707s] [10.004956707s] END E0407 09:48:04.529039 1 timeout.go:142] post-timeout activity - time-elapsed: 4.20894ms, GET "/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/plndr-svcs-lock" result: I0407 09:48:04.647898 1 trace.go:236] Trace[1882018038]: "Create" accept:application/json, */*,audit-id:b3d66864-e2d6-4c60-936f-55d40b20b9a8,client:10.0.0.49,protocol:HTTP/2.0,resource:events,scope:resource,url:/api/v1/namespaces/envoy-gateway-system/events,user-agent:envoy-gateway/v0.0.0 (linux/amd64) kubernetes/$Format,verb:POST (07-Apr-2026 09:48:03.128) (total time: 1519ms): Trace[1882018038]: ["Create etcd3" audit-id:b3d66864-e2d6-4c60-936f-55d40b20b9a8,key:/events/envoy-gateway-system/5b9825d2.gateway.envoyproxy.io.18a40a1f69c945d3,type:*core.Event,resource:events 1518ms (09:48:03.129) Trace[1882018038]: ---"Txn call succeeded" 1518ms (09:48:04.647)] Trace[1882018038]: [1.519633857s] [1.519633857s] END I0407 09:48:04.648553 1 trace.go:236] Trace[1370392647]: "Patch" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:1f3848f1-a2ed-45da-bc42-6057c0ff3d57,client:199.204.45.19,protocol:HTTP/2.0,resource:nodes,scope:resource,url:/api/v1/nodes/instance/status,user-agent:kubelet/v1.28.13 (linux/amd64) kubernetes/024ab2a,verb:PATCH (07-Apr-2026 09:47:59.782) (total time: 4865ms): Trace[1370392647]: ["GuaranteedUpdate etcd3" audit-id:1f3848f1-a2ed-45da-bc42-6057c0ff3d57,key:/minions/instance,type:*core.Node,resource:nodes 4865ms 
(09:47:59.782) Trace[1370392647]: ---"Txn call completed" 4857ms (09:48:04.647)] Trace[1370392647]: ---"Object stored in database" 4859ms (09:48:04.647) Trace[1370392647]: [4.865804662s] [4.865804662s] END I0407 09:48:04.649267 1 trace.go:236] Trace[1263462987]: "Update" accept:application/vnd.kubernetes.protobuf, */*,audit-id:330739cb-3e95-4e5b-99bc-4ef6095dd9b7,client:::1,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/apiserver-f3tcohoifagyom4bbi4wgeu7te,user-agent:kube-apiserver/v1.28.13 (linux/amd64) kubernetes/024ab2a,verb:PUT (07-Apr-2026 09:48:03.412) (total time: 1236ms): Trace[1263462987]: ["GuaranteedUpdate etcd3" audit-id:330739cb-3e95-4e5b-99bc-4ef6095dd9b7,key:/leases/kube-system/apiserver-f3tcohoifagyom4bbi4wgeu7te,type:*coordination.Lease,resource:leases.coordination.k8s.io 1236ms (09:48:03.412) Trace[1263462987]: ---"Txn call completed" 1234ms (09:48:04.647)] Trace[1263462987]: [1.236553137s] [1.236553137s] END I0407 09:48:04.649287 1 trace.go:236] Trace[1220791005]: "Update" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:3db85628-389f-4a4f-974b-cbe2c7930baa,client:199.204.45.19,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/instance,user-agent:kubelet/v1.28.13 (linux/amd64) kubernetes/024ab2a,verb:PUT (07-Apr-2026 09:48:00.197) (total time: 4451ms): Trace[1220791005]: ["GuaranteedUpdate etcd3" audit-id:3db85628-389f-4a4f-974b-cbe2c7930baa,key:/leases/kube-node-lease/instance,type:*coordination.Lease,resource:leases.coordination.k8s.io 4451ms (09:48:00.197) Trace[1220791005]: ---"Txn call completed" 4450ms (09:48:04.648)] Trace[1220791005]: [4.451541951s] [4.451541951s] END E0407 09:48:04.650519 1 controller.go:193] "Failed to update lease" err="Operation cannot be fulfilled on leases.coordination.k8s.io \"apiserver-f3tcohoifagyom4bbi4wgeu7te\": the object has been modified; please apply your 
changes to the latest version and try again" E0407 09:48:04.733830 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}: context canceled E0407 09:48:04.733952 1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout E0407 09:48:04.735010 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout E0407 09:48:04.735099 1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout I0407 09:48:04.736184 1 trace.go:236] Trace[1851433095]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:31f59d19-ecfa-438d-b734-e137ca593a33,client:10.0.0.3,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/openstack/leases/messaging-topology-operator-leader-election,user-agent:manager/v0.0.0 (linux/amd64) kubernetes/$Format/leader-election,verb:GET (07-Apr-2026 09:47:59.734) (total time: 5001ms): Trace[1851433095]: [5.001276175s] [5.001276175s] END E0407 09:48:04.736448 1 timeout.go:142] post-timeout activity - time-elapsed: 2.581102ms, GET "/apis/coordination.k8s.io/v1/namespaces/openstack/leases/messaging-topology-operator-leader-election" result: I0407 09:48:04.790710 1 trace.go:236] Trace[1003534156]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:0a398f2a-64b4-4087-b34c-ddc43103a4ef,client:199.204.45.19,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/ingress-nginx/leases/ingress-nginx-leader,user-agent:nginx-ingress-controller/v1.12.1 (linux/amd64) ingress-nginx/51c2b819690bbf1709b844dbf321a9acf6eda5a7,verb:GET (07-Apr-2026 09:48:01.641) (total time: 3149ms): Trace[1003534156]: ---"About to write a response" 3149ms (09:48:04.790) Trace[1003534156]: [3.149421956s] [3.149421956s] END I0407 09:48:04.790897 1 trace.go:236] 
Trace[1226210336]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:c2e6f7e2-af74-4b27-814e-975af79c8cb8,client:199.204.45.19,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/openstack/pods/memcached-memcached-6479589586-jnkkv,user-agent:kubelet/v1.28.13 (linux/amd64) kubernetes/024ab2a,verb:GET (07-Apr-2026 09:47:55.856) (total time: 8934ms): Trace[1226210336]: ---"About to write a response" 8934ms (09:48:04.790) Trace[1226210336]: [8.934685037s] [8.934685037s] END I0407 09:48:04.791152 1 trace.go:236] Trace[216147489]: "Get" accept:application/json, */*,audit-id:c12c1372-8678-445f-be99-54e1505ffdab,client:199.204.45.19,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/openstack/pods/keepalived-8pk7j,user-agent:kubernetes-entrypoint/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (07-Apr-2026 09:47:52.772) (total time: 12018ms): Trace[216147489]: ---"About to write a response" 12018ms (09:48:04.790) Trace[216147489]: [12.018649275s] [12.018649275s] END I0407 09:48:04.795158 1 trace.go:236] Trace[1346217077]: "List" accept:application/json, */*,audit-id:d896722e-46b7-4797-8f44-410fb6359bce,client:199.204.45.19,protocol:HTTP/2.0,resource:secrets,scope:namespace,url:/api/v1/namespaces/openstack/secrets,user-agent:Helm/3.11.2,verb:LIST (07-Apr-2026 09:47:52.863) (total time: 11931ms): Trace[1346217077]: ["List(recursive=true) etcd3" audit-id:d896722e-46b7-4797-8f44-410fb6359bce,key:/secrets/openstack,resourceVersion:,resourceVersionMatch:,limit:0,continue: 11931ms (09:47:52.863)] Trace[1346217077]: [11.931903764s] [11.931903764s] END I0407 09:48:04.797403 1 trace.go:236] Trace[1427644300]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:abe05eae-6f33-430d-aab6-92a69e20f364,client:199.204.45.19,protocol:HTTP/2.0,resource:serviceaccounts,scope:resource,url:/api/v1/namespaces/openstack/serviceaccounts/memcached-memcached/token,user-agent:kubelet/v1.28.13 (linux/amd64) 
kubernetes/024ab2a,verb:POST (07-Apr-2026 09:47:56.047) (total time: 8749ms): Trace[1427644300]: ---"Write to database call succeeded" len:168 8749ms (09:48:04.797) Trace[1427644300]: [8.749469244s] [8.749469244s] END I0407 09:48:04.999501 1 trace.go:236] Trace[731043712]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/199.204.45.19,type:*v1.Endpoints,resource:apiServerIPInfo (07-Apr-2026 09:47:57.370) (total time: 7628ms): Trace[731043712]: ---"initial value restored" 7419ms (09:48:04.790) Trace[731043712]: ---"Txn call completed" 188ms (09:48:04.999) Trace[731043712]: [7.628269705s] [7.628269705s] END I0407 09:48:05.543298 1 alloc.go:330] "allocated clusterIPs" service="openstack/memcached-metrics" clusterIPs={"IPv4":"10.104.169.255"} I0407 09:48:12.573115 1 controller.go:624] quota admission added evaluator for: rabbitmqclusters.rabbitmq.com I0407 09:48:23.189997 1 alloc.go:330] "allocated clusterIPs" service="openstack/rabbitmq-keystone" clusterIPs={"IPv4":"10.98.180.12"} I0407 09:49:04.110280 1 alloc.go:330] "allocated clusterIPs" service="openstack/keystone-api" clusterIPs={"IPv4":"10.111.38.76"} I0407 09:49:04.127404 1 controller.go:624] quota admission added evaluator for: cronjobs.batch I0407 09:52:10.284744 1 alloc.go:330] "allocated clusterIPs" service="openstack/rabbitmq-barbican" clusterIPs={"IPv4":"10.100.8.78"} I0407 09:52:38.788440 1 alloc.go:330] "allocated clusterIPs" service="openstack/barbican-api" clusterIPs={"IPv4":"10.111.235.121"} I0407 09:54:35.114160 1 handler.go:275] Adding GroupVersion ceph.rook.io v1 to ResourceManager I0407 09:54:35.120048 1 handler.go:275] Adding GroupVersion ceph.rook.io v1 to ResourceManager I0407 09:54:35.125361 1 handler.go:275] Adding GroupVersion objectbucket.io v1alpha1 to ResourceManager I0407 09:54:35.151875 1 handler.go:275] Adding GroupVersion ceph.rook.io v1 to ResourceManager I0407 09:54:35.152340 1 handler.go:275] Adding GroupVersion objectbucket.io v1alpha1 to ResourceManager I0407 09:54:35.163244 
1 handler.go:275] Adding GroupVersion ceph.rook.io v1 to ResourceManager I0407 09:54:35.176830 1 handler.go:275] Adding GroupVersion ceph.rook.io v1 to ResourceManager I0407 09:54:35.186290 1 handler.go:275] Adding GroupVersion ceph.rook.io v1 to ResourceManager I0407 09:54:35.193466 1 handler.go:275] Adding GroupVersion ceph.rook.io v1 to ResourceManager I0407 09:54:35.210854 1 handler.go:275] Adding GroupVersion ceph.rook.io v1 to ResourceManager I0407 09:54:35.217474 1 handler.go:275] Adding GroupVersion ceph.rook.io v1 to ResourceManager I0407 09:54:35.227861 1 handler.go:275] Adding GroupVersion ceph.rook.io v1 to ResourceManager I0407 09:54:35.244105 1 handler.go:275] Adding GroupVersion ceph.rook.io v1 to ResourceManager I0407 09:54:35.260607 1 handler.go:275] Adding GroupVersion ceph.rook.io v1 to ResourceManager I0407 09:54:35.275637 1 handler.go:275] Adding GroupVersion ceph.rook.io v1 to ResourceManager I0407 09:54:35.302968 1 handler.go:275] Adding GroupVersion ceph.rook.io v1 to ResourceManager I0407 09:54:35.356412 1 handler.go:275] Adding GroupVersion ceph.rook.io v1 to ResourceManager I0407 09:54:35.418179 1 handler.go:275] Adding GroupVersion ceph.rook.io v1 to ResourceManager I0407 09:54:35.530793 1 handler.go:275] Adding GroupVersion ceph.rook.io v1 to ResourceManager I0407 09:55:15.635409 1 controller.go:624] quota admission added evaluator for: cephclusters.ceph.rook.io I0407 09:55:15.646016 1 controller.go:624] quota admission added evaluator for: cephobjectstores.ceph.rook.io I0407 09:55:49.225402 1 alloc.go:330] "allocated clusterIPs" service="openstack/rook-ceph-rgw-ceph" clusterIPs={"IPv4":"10.99.160.27"} I0407 09:55:50.172688 1 alloc.go:330] "allocated clusterIPs" service="openstack/rabbitmq-glance" clusterIPs={"IPv4":"10.97.33.95"} E0407 09:56:16.760496 1 upgradeaware.go:439] Error proxying data from backend to client: write tcp 199.204.45.19:6443->10.0.0.126:50416: write: broken pipe I0407 09:56:24.153807 1 alloc.go:330] "allocated 
clusterIPs" service="openstack/glance-api" clusterIPs={"IPv4":"10.111.239.212"} I0407 10:00:57.030350 1 alloc.go:330] "allocated clusterIPs" service="openstack/staffeln-api" clusterIPs={"IPv4":"10.101.107.180"}