<p><strong>ungleich redmine: Activity</strong><br />http://localhost:3000/, 2024-02-25T08:05:43Z</p>
<p><strong>Open Infrastructure - Task #12598 (In Progress): Evaluate authentik for use at ungleich</strong><br />http://localhost:3000/issues/12598, 2024-02-25T08:05:43Z, Nico Schottelius (nico.schottelius@ungleich.ch)</p>
<a name="Installation"></a>
<h2 >Installation<a href="#Installation" class="wiki-anchor">¶</a></h2>
<pre>
helm repo add authentik https://charts.goauthentik.io
helm repo update
helm upgrade --install authentik authentik/authentik -f values.yaml
</pre>
<p><strong>Open Infrastructure - Task #12597 (Seen): Add support for pixelfed hosting</strong><br />http://localhost:3000/issues/12597, 2024-02-24T12:43:48Z, Nico Schottelius (nico.schottelius@ungleich.ch)</p>
<ul>
<li>Images: <a class="external" href="https://quay.io/repository/zknt/pixelfed">https://quay.io/repository/zknt/pixelfed</a></li>
</ul>
<p><strong>Open Infrastructure - Task #12596 (Waiting): Add support for lemmy hosting</strong><br />http://localhost:3000/issues/12596, 2024-02-24T12:41:01Z, Nico Schottelius (nico.schottelius@ungleich.ch)</p>
<ul>
<li>Currently stalled until lemmy-ui issue is resolved</li>
<li><a class="external" href="https://github.com/LemmyNet/lemmy-ui/issues/2374">https://github.com/LemmyNet/lemmy-ui/issues/2374</a></li>
<li>Also see: <a class="external" href="https://ipv6.social/@ungleich/111986164143493986">https://ipv6.social/@ungleich/111986164143493986</a></li>
<li>Chart base exists in dev/lemmy</li>
</ul>
<p><strong>Open Infrastructure - Task #12344 (Seen): Evaluate kubevirt (Q12024)</strong><br />http://localhost:3000/issues/12344, 2024-01-07T17:41:50Z, Nico Schottelius (nico.schottelius@ungleich.ch)</p>
<ul>
<li>Could it potentially replace OpenNebula?</li>
<li>If yes, how?</li>
</ul>
<p><strong>Open Infrastructure - Task #12343: Evaluate cloud-hypervisor (Q12024)</strong><br />http://localhost:3000/issues/12343#change-52491, 2024-01-07T16:27:19Z, Nico Schottelius (nico.schottelius@ungleich.ch)</p>
<a name="running-simple-VM"></a>
<h2 >running simple VM<a href="#running-simple-VM" class="wiki-anchor">¶</a></h2>
<pre>
cloud-hypervisor \
--kernel ./hypervisor-fw \
--disk path=focal-server-cloudimg-amd64.raw \
--cpus boot=2 \
--memory size=1024M \
--net "tap=,mac=,ip=,mask="
</pre>
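Before the invocation above works, the firmware and a raw disk image have to be in place. A sketch of the preparation steps (the download URLs and the qcow2-to-raw conversion are assumptions, not from the ticket):

```shell
# fetch the rust-hypervisor firmware and the Ubuntu focal cloud image
# (release URLs are assumptions; pick current versions as needed)
wget https://github.com/cloud-hypervisor/rust-hypervisor-firmware/releases/latest/download/hypervisor-fw
wget https://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-amd64.img

# cloud-hypervisor's --disk path= expects a raw image, so convert the qcow2 download
qemu-img convert -p -f qcow2 -O raw \
    focal-server-cloudimg-amd64.img focal-server-cloudimg-amd64.raw
```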
<ul>
<li>Comes up with a login after a few seconds</li>
</ul>
<p><strong>Open Infrastructure - Task #12343 (In Progress): Evaluate cloud-hypervisor (Q12024)</strong><br />http://localhost:3000/issues/12343, 2024-01-07T16:23:14Z, Nico Schottelius (nico.schottelius@ungleich.ch)</p>
<ul>
<li>Might be a lightweight option for running in k8s</li>
<li>Requires quite some work around it</li>
<li>No management tooling</li>
<li>Storage
<ul>
<li>For ceph we would probably need to use kernel-mapped RBD devices; rook supports that</li>
<li>For creating thin provisioning, probably need to create a wrapper/controller</li>
</ul>
</li>
<li>live migration
<ul>
<li>seems to be supported</li>
</ul>
</li>
<li>Networking unclear
<ul>
<li>macvtap support, see below</li>
</ul></li>
</ul>
<pre>
"tap=<if_name>,ip=<ip_addr>,mask=<net_mask>,mac=<mac_addr>,fd=<fd1:fd2...>,iommu=on|off,num_queues=<number_of_queues>,queue_size=<size_of_each_queue>,id=<device_id>,vhost_user=<vhost_user_enable>,socket=<vhost_user_socket_path>,vhost_mode=client|server,bw_size=<bytes>,bw_one_time_burst=<bytes>,bw_refill_time=<ms>,ops_size=<io_ops>,ops_one_time_burst=<io_ops>,ops_refill_time=<ms>"
</pre>
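The parameter string above is dense; a small helper makes assembling a concrete <code>--net</code> value explicit (<code>build_net_arg</code> is a hypothetical helper, and the tap name, MAC and addresses are illustrative values, not from the ticket):

```shell
# assemble the value for cloud-hypervisor's --net option from its parts
# (only tap/mac/ip/mask are filled in here; the remaining parameters keep their defaults)
build_net_arg() {
    local tap="$1" mac="$2" ip="$3" mask="$4"
    printf 'tap=%s,mac=%s,ip=%s,mask=%s' "$tap" "$mac" "$ip" "$mask"
}

# usage: cloud-hypervisor ... --net "$(build_net_arg vmtap0 12:34:56:78:90:ab 192.168.249.1 255.255.255.252)"
build_net_arg vmtap0 12:34:56:78:90:ab 192.168.249.1 255.255.255.252
```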
<a name="live-migration"></a>
<h2 >live migration<a href="#live-migration" class="wiki-anchor">¶</a></h2>
<ul>
<li>the examples below use same-host migration</li>
</ul>
<pre>
% cloud-hypervisor \
--kernel ./hypervisor-fw \
--disk path=focal-server-cloudimg-amd64.raw \
--cpus boot=2 \
--memory size=1024M \
--net "tap=,mac=,ip=,mask=" --api-socket=/tmp/api1
[17:42] nb3:~% cloud-hypervisor --api-socket=/tmp/api2
# receive VM
ch-remote --api-socket=/tmp/api2 receive-migration unix:/tmp/sock
# send VM - fails
% ch-remote --api-socket=/tmp/api1 send-migration --local unix:/tmp/sock
Error running command: Server responded with an error: InternalServerError: ApiError(VmSendMigration(MigrateSend(Local migration requires shared memory or hugepages enabled)))
</pre>
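The error above points at the missing piece: local migration needs shared memory or hugepages. A sketch of a retry with shared memory enabled (untested; <code>shared=on</code> is cloud-hypervisor's memory option for backing guest RAM with shared mappings):

```shell
# start the source VM with shared guest memory so send-migration --local can work
cloud-hypervisor \
    --kernel ./hypervisor-fw \
    --disk path=focal-server-cloudimg-amd64.raw \
    --cpus boot=2 \
    --memory size=1024M,shared=on \
    --net "tap=,mac=,ip=,mask=" --api-socket=/tmp/api1
```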
<a name="Sketch-for-running-VMs-with-cloud-hypervisor"></a>
<h2 >Sketch for running VMs with cloud-hypervisor<a href="#Sketch-for-running-VMs-with-cloud-hypervisor" class="wiki-anchor">¶</a></h2>
<ul>
<li>Manage networking outside
<ul>
<li>pod running potentially in hostnetwork</li>
<li>creating bridge depending on which customer it is</li>
<li>Potentially running IPAM on a per customer basis</li>
<li>Could potentially utilise netbox as a backend, but needs to be written</li>
</ul>
</li>
<li>Console access
<ul>
<li>read only via pod</li>
<li>serial forwarding unclear</li>
</ul>
</li>
<li>Disk management
<ul>
<li>Thin provisioning / templates needs to be built</li>
<li>Growing disks might be supported natively by k8s/rook</li>
</ul></li>
</ul>
<p><strong>Open Infrastructure - Task #12342 (In Progress): Evaluate Cloudstack (Q1 2024)</strong><br />http://localhost:3000/issues/12342, 2024-01-07T16:07:26Z, Nico Schottelius (nico.schottelius@ungleich.ch)</p>
<a name="Notes"></a>
<h2 >Notes<a href="#Notes" class="wiki-anchor">¶</a></h2>
<ul>
<li>Not sure if it supports ceph
<ul>
<li>Not listed on <a class="external" href="https://docs.cloudstack.apache.org/en/4.18.1.0/conceptsandterminology/choosing_deployment_architecture.html">https://docs.cloudstack.apache.org/en/4.18.1.0/conceptsandterminology/choosing_deployment_architecture.html</a></li>
<li>Storage ref: <a class="external" href="http://docs.cloudstack.apache.org/projects/archived-cloudstack-administration/en/latest/storage.html">http://docs.cloudstack.apache.org/projects/archived-cloudstack-administration/en/latest/storage.html</a>
<ul>
<li>lists ceph for kvm</li>
</ul>
</li>
<li>Seems to focus on qcow2 (-> no thin provisioning?)</li>
</ul>
</li>
<li>Can we setup/run cloudstack in k8s?
<ul>
<li>Seems to be an idea, but far from implementation: <a class="external" href="https://github.com/apache/cloudstack/issues/7298">https://github.com/apache/cloudstack/issues/7298</a></li>
</ul></li>
</ul>
<p><strong>Open Infrastructure - Task #12340: Evaluate openstack helm charts</strong><br />http://localhost:3000/issues/12340#change-52472, 2024-01-06T18:26:16Z, Nico Schottelius (nico.schottelius@ungleich.ch)</p>
<p>Marked ticket public for public review</p>
<p><strong>Open Infrastructure - Task #12340: Evaluate openstack helm charts</strong><br />http://localhost:3000/issues/12340#change-52471, 2024-01-06T17:49:45Z, Nico Schottelius (nico.schottelius@ungleich.ch)</p>
<a name="Setup-openstack-client-TBD"></a>
<h3 >Setup openstack client (TBD)<a href="#Setup-openstack-client-TBD" class="wiki-anchor">¶</a></h3>
<ul>
<li><a class="external" href="https://docs.openstack.org/openstack-helm/latest/install/setup_openstack_client.html">https://docs.openstack.org/openstack-helm/latest/install/setup_openstack_client.html</a></li>
<li>Creating /etc/openstack and installing python</li>
</ul>
<p><strong>Open Infrastructure - Task #12340: Evaluate openstack helm charts</strong><br />http://localhost:3000/issues/12340#change-52470, 2024-01-06T17:49:22Z, Nico Schottelius (nico.schottelius@ungleich.ch)</p>
<a name="Setup-ceph-in-progress"></a>
<h2 >Setup ceph (in progress)<a href="#Setup-ceph-in-progress" class="wiki-anchor">¶</a></h2>
<ul>
<li><a class="external" href="https://docs.openstack.org/openstack-helm/latest/install/deploy_ceph.html">https://docs.openstack.org/openstack-helm/latest/install/deploy_ceph.html</a></li>
<li>already done before</li>
<li>Need to check the difference</li>
<li>scripts
<ul>
<li>./tools/deployment/ceph/ceph-rook.sh</li>
<li>./tools/deployment/ceph/ceph-adapter-rook.sh</li>
</ul>
</li>
<li>Findings
<ul>
<li>installs rook in ceph namespace</li>
<li>creates a cluster</li>
<li>Deploys a new svc that matches on all ceph monitors</li>
</ul></li>
</ul>
<p><strong>Open Infrastructure - Task #12340: Evaluate openstack helm charts</strong><br />http://localhost:3000/issues/12340#change-52463, 2024-01-06T15:33:34Z, Nico Schottelius (nico.schottelius@ungleich.ch)</p>
<a name="setup-steps"></a>
<h2 >setup / steps<a href="#setup-steps" class="wiki-anchor">¶</a></h2>
<pre>
mkdir ~/osh
cd ~/osh
git clone https://opendev.org/openstack/openstack-helm.git
git clone https://opendev.org/openstack/openstack-helm-infra.git
</pre>
<pre>
export OPENSTACK_RELEASE=2023.2
export CONTAINER_DISTRO_NAME=ubuntu
export CONTAINER_DISTRO_VERSION=jammy
</pre>
<a name="Prepare-the-cluster"></a>
<h3 >Prepare the cluster<a href="#Prepare-the-cluster" class="wiki-anchor">¶</a></h3>
<ul>
<li><a class="external" href="https://docs.openstack.org/openstack-helm/latest/install/prepare_kubernetes.html">https://docs.openstack.org/openstack-helm/latest/install/prepare_kubernetes.html</a></li>
</ul>
<pre>
[16:20] nb3:openstack-helm% cat ./tools/deployment/common/prepare-k8s.sh
#!/bin/bash
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -ex
# Add labels to the core namespaces & nodes
kubectl label --overwrite namespace default name=default
kubectl label --overwrite namespace kube-system name=kube-system
kubectl label --overwrite namespace kube-public name=kube-public
kubectl label --overwrite nodes --all openstack-control-plane=enabled
kubectl label --overwrite nodes --all openstack-compute-node=enabled
kubectl label --overwrite nodes --all openvswitch=enabled
kubectl label --overwrite nodes --all linuxbridge=enabled
kubectl label --overwrite nodes --all ceph-mon=enabled
kubectl label --overwrite nodes --all ceph-osd=enabled
kubectl label --overwrite nodes --all ceph-mds=enabled
kubectl label --overwrite nodes --all ceph-rgw=enabled
kubectl label --overwrite nodes --all ceph-mgr=enabled
# We deploy l3 agent only on the node where we run test scripts.
# In this case virtual router will be created only on this node
# and we don't need L2 overlay (will be implemented later).
kubectl label --overwrite nodes -l "node-role.kubernetes.io/control-plane" l3-agent=enabled
kubectl label --overwrite nodes -l "node-role.kubernetes.io/control-plane" openstack-network-node=enabled
for NAMESPACE in ceph openstack osh-infra; do
tee /tmp/${NAMESPACE}-ns.yaml << EOF
apiVersion: v1
kind: Namespace
metadata:
  labels:
    kubernetes.io/metadata.name: ${NAMESPACE}
    name: ${NAMESPACE}
  name: ${NAMESPACE}
EOF
kubectl apply -f /tmp/${NAMESPACE}-ns.yaml
done
make all
</pre>
<p><strong>Open Infrastructure - Task #12340 (In Progress): Evaluate openstack helm charts</strong><br />http://localhost:3000/issues/12340#change-52460, 2024-01-06T14:43:02Z, Nico Schottelius (nico.schottelius@ungleich.ch)</p>
<p><strong>Open Infrastructure - Task #12340 (In Progress): Evaluate openstack helm charts</strong><br />http://localhost:3000/issues/12340, 2024-01-06T14:18:48Z, Nico Schottelius (nico.schottelius@ungleich.ch)</p>
<a name="Objective"></a>
<h2 >Objective<a href="#Objective" class="wiki-anchor">¶</a></h2>
<ul>
<li>Find out whether we can run openstack with it in our IPv6-only clusters</li>
</ul>
<a name="Summary"></a>
<h2 >Summary<a href="#Summary" class="wiki-anchor">¶</a></h2>
<ul>
<li>Seems to be in a very fragile / unfinished state
<ul>
<li>Charts are distributed in 2 repositories</li>
<li>No released charts so far, cannot just run helm upgrade --install against a chart repo</li>
<li>A lot of distributed files in the repos</li>
<li>ceph-adapter seems to be IPv4-based (it splits addresses on dots)</li>
</ul>
</li>
<li>Might be possible to build on top of it, but might need quite some involvement</li>
</ul>
<a name="Progress"></a>
<h2 >Progress<a href="#Progress" class="wiki-anchor">¶</a></h2>
<ul>
<li>Try to stick to the documented "in order" setup</li>
<li>But when one item is blocked, set up other components, even if they might crash due to missing dependencies</li>
</ul>
<a name="Base-documentation"></a>
<h2 >Base documentation<a href="#Base-documentation" class="wiki-anchor">¶</a></h2>
<ul>
<li><a class="external" href="https://docs.openstack.org/openstack-helm/latest/">https://docs.openstack.org/openstack-helm/latest/</a></li>
<li>Related tools from our side: <a class="external" href="https://code.ungleich.ch/ungleich-public/ungleich-tools/src/branch/master/openstack">https://code.ungleich.ch/ungleich-public/ungleich-tools/src/branch/master/openstack</a></li>
</ul>
<a name="Communication"></a>
<h3 >Communication<a href="#Communication" class="wiki-anchor">¶</a></h3>
<ul>
<li>IRC via matrix: <a class="external" href="https://matrix.ungleich.ch/#/room/#_oftc_openstack-helm:matrix.org">https://matrix.ungleich.ch/#/room/#_oftc_openstack-helm:matrix.org</a></li>
<li>Slack: <a class="external" href="https://app.slack.com/client/T09NY5SBT/C3WERB7DE">https://app.slack.com/client/T09NY5SBT/C3WERB7DE</a></li>
</ul>
<a name="Components"></a>
<h2 >Components<a href="#Components" class="wiki-anchor">¶</a></h2>
<ul>
<li><a class="external" href="https://docs.openstack.org/openstack-helm/latest/install/deploy_openstack_backend.html">https://docs.openstack.org/openstack-helm/latest/install/deploy_openstack_backend.html</a></li>
<li><a class="external" href="https://docs.openstack.org/openstack-helm/latest/install/deploy_openstack.html">https://docs.openstack.org/openstack-helm/latest/install/deploy_openstack.html</a></li>
</ul>
<a name="OpenStack-client"></a>
<h3 >OpenStack client<a href="#OpenStack-client" class="wiki-anchor">¶</a></h3>
<ul>
<li>Is installed on the local machine</li>
<li>Installs some python and creates a config file</li>
<li>Installs python packages as root / using pip
<ul>
<li>cmd2 python-openstackclient python-heatclient</li>
</ul></li>
</ul>
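Based on the package list above, the client setup boils down to roughly these steps (a sketch of what the upstream setup script does, not verified against it):

```shell
# install the openstack CLI clients as root via pip, as the setup script does
pip install cmd2 python-openstackclient python-heatclient

# config file location used by the setup docs
mkdir -p /etc/openstack
```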
<a name="Ceph"></a>
<h3 >Ceph<a href="#Ceph" class="wiki-anchor">¶</a></h3>
<ul>
<li><code>./tools/deployment/ceph/ceph-rook.sh</code>
<ul>
<li>sets up rook in the rook-ceph namespace</li>
<li>also saw a ceph namespace somewhere
<ul>
<li>ceph cluster is put into ceph namespace</li>
<li>operator is in rook-ceph</li>
</ul>
</li>
<li>sets min_size=1 for testing</li>
<li>uses loop devices</li>
</ul>
</li>
<li><code>./tools/deployment/ceph/ceph-adapter-rook.sh</code>
<ul>
<li>builds a helm chart first: /home/nico/osh/openstack-helm-infra/ceph-adapter-rook-0.1.0.tgz</li>
<li>maybe can reference the chart directly from the git repo</li>
</ul>
</li>
<li>There is also ./tools/deployment/ceph/ceph.sh; unclear what it is for, it is not mentioned in the docs</li>
</ul>
<a name="Ingress"></a>
<h3 >Ingress<a href="#Ingress" class="wiki-anchor">¶</a></h3>
<ul>
<li>for outside reachability, as usual</li>
</ul>
<a name="rabbitmq"></a>
<h3 >rabbitmq<a href="#rabbitmq" class="wiki-anchor">¶</a></h3>
<a name="MariaDB"></a>
<h3 >MariaDB<a href="#MariaDB" class="wiki-anchor">¶</a></h3>
<a name="Memcached"></a>
<h3 >Memcached<a href="#Memcached" class="wiki-anchor">¶</a></h3>
<a name="Keystone"></a>
<h3 >Keystone<a href="#Keystone" class="wiki-anchor">¶</a></h3>
<ul>
<li>Identity management</li>
<li>./tools/deployment/component/keystone/keystone.sh</li>
</ul>
<a name="Heat"></a>
<h3 >Heat<a href="#Heat" class="wiki-anchor">¶</a></h3>
<ul>
<li>Templating / infra</li>
<li>Unclear</li>
<li>./tools/deployment/component/heat/heat.sh</li>
</ul>
<a name="Glance"></a>
<h3 >Glance<a href="#Glance" class="wiki-anchor">¶</a></h3>
<ul>
<li>Image service</li>
<li>./tools/deployment/component/glance/glance.sh</li>
</ul>
<a name="Placement-Nova-Neutron"></a>
<h3 >Placement, Nova, Neutron<a href="#Placement-Nova-Neutron" class="wiki-anchor">¶</a></h3>
<ul>
<li>OpenStack Nova is the compute service</li>
<li>Neutron is the networking service</li>
<li>Using openvswitch, probably in hostnetwork mode (guess)</li>
</ul>
<pre>
cd ~/osh/openstack-helm
./tools/deployment/component/compute-kit/openvswitch.sh
./tools/deployment/component/compute-kit/libvirt.sh
./tools/deployment/component/compute-kit/compute-kit.sh
</pre>
<a name="Cinder"></a>
<h3 >Cinder<a href="#Cinder" class="wiki-anchor">¶</a></h3>
<ul>
<li>block storage service</li>
<li>probably interacts with ceph </li>
<li>not sure yet how/where the monitor is set, might be in the rook step</li>
</ul>
<pre>
cd ~/osh/openstack-helm
./tools/deployment/component/cinder/cinder.sh
</pre>
<a name="Image-management-ceph"></a>
<h2 >Image management (ceph?)<a href="#Image-management-ceph" class="wiki-anchor">¶</a></h2>
<ul>
<li>Should be able to use thin provisioning</li>
</ul>
<p><strong>Open Infrastructure - Task #12339 (Closed): Evaluate yaook for openstack in k8s</strong><br />http://localhost:3000/issues/12339, 2024-01-06T14:18:17Z, Nico Schottelius (nico.schottelius@ungleich.ch)</p>
<ul>
<li>Does not have IPv6 support</li>
<li>Created bug report at <a class="external" href="https://gitlab.com/yaook/operator/-/issues/479">https://gitlab.com/yaook/operator/-/issues/479</a> on 2023-11-29</li>
</ul>
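The traceback below names an escape hatch itself: <code>YAOOK_OP_CLUSTER_DOMAIN</code>. A possible workaround sketch (the deployment name <code>yaook-operator</code> is a guess; the <code>yaook</code> namespace appears in the log):

```shell
# override the operator's cluster-domain auto-detection, which fails
# in this IPv6-only setup because kubernetes.default.svc does not resolve
kubectl -n yaook set env deployment/yaook-operator \
    YAOOK_OP_CLUSTER_DOMAIN=cluster.local
```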
<pre>
2024-01-06 14:16:37,715 ERROR yaook.op.daemon.yaook.cloud/v1.keystonedeployments.yaook.keystone failed to reconcile state <yaook.statemachine.resources.k8s_authz.TemplatedRole component='policy_validation_management_role'>
Traceback (most recent call last):
File "/usr/local/lib/python3.11/site-packages/yaook/statemachine/api_utils.py", line 1101, in get_cluster_domain
response = socket.gethostbyname_ex("kubernetes.default.svc")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
socket.gaierror: [Errno -5] No address associated with hostname
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.11/site-packages/yaook/statemachine/statemachine.py", line 78, in _ensure_state
await state.reconcile(
File "/usr/local/lib/python3.11/site-packages/yaook/statemachine/tracing.py", line 63, in wrapper
return await function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/yaook/statemachine/resources/k8s.py", line 762, in reconcile
new_body = await self._make_body(ctx, dependencies)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/yaook/statemachine/resources/k8s.py", line 966, in _make_body
await self._get_template_parameters(ctx, dependencies),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/yaook/statemachine/tracing.py", line 63, in wrapper
return await function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/yaook/statemachine/resources/k8s.py", line 984, in _get_template_parameters
result = await super()._get_template_parameters(ctx, dependencies)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/yaook/statemachine/resources/k8s.py", line 907, in _get_template_parameters
"cluster_domain": api_utils.get_cluster_domain(),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/yaook/statemachine/api_utils.py", line 1104, in get_cluster_domain
raise KeyError("No DNS Response for kubernetes.default.svc. Maybe we "
KeyError: 'No DNS Response for kubernetes.default.svc. Maybe we are not running inside a cluster. You can set YAOOK_OP_CLUSTER_DOMAIN to override it'
2024-01-06 14:16:37,735 ERROR yaook.op.tasks task TaskItem(func=<bound method OperatorDaemon._reconcile_cr of <yaook.op.daemon.OperatorDaemon object at 0x7f9151a86b50>>, data=(<CustomResource keystonedeployments.yaook.cloud/v1>, 'yaook', 'keystone')) failed. retrying in 114.55600857834835s
Traceback (most recent call last):
File "/usr/local/lib/python3.11/site-packages/yaook/statemachine/api_utils.py", line 1101, in get_cluster_domain
response = socket.gethostbyname_ex("kubernetes.default.svc")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
socket.gaierror: [Errno -5] No address associated with hostname
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.11/site-packages/yaook/op/tasks.py", line 313, in run_next_task
requeue = await func(*data)
^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/yaook/statemachine/tracing.py", line 63, in wrapper
return await function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/yaook/op/daemon.py", line 754, in _reconcile_cr
await cr_obj.reconcile(ctx, generation)
File "/usr/local/lib/python3.11/site-packages/yaook/statemachine/customresource.py", line 742, in reconcile
await super().reconcile(ctx, generation)
File "/usr/local/lib/python3.11/site-packages/yaook/statemachine/tracing.py", line 63, in wrapper
return await function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/yaook/statemachine/customresource.py", line 257, in reconcile
blocking = await self.sm.ensure(ctx)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/yaook/statemachine/tracing.py", line 63, in wrapper
return await function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/yaook/statemachine/statemachine.py", line 152, in ensure
ready = await self._ensure_state(state, ctx)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/yaook/statemachine/tracing.py", line 63, in wrapper
return await function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/yaook/statemachine/statemachine.py", line 78, in _ensure_state
await state.reconcile(
File "/usr/local/lib/python3.11/site-packages/yaook/statemachine/tracing.py", line 63, in wrapper
return await function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/yaook/statemachine/resources/k8s.py", line 762, in reconcile
new_body = await self._make_body(ctx, dependencies)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/yaook/statemachine/resources/k8s.py", line 966, in _make_body
await self._get_template_parameters(ctx, dependencies),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/yaook/statemachine/tracing.py", line 63, in wrapper
return await function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/yaook/statemachine/resources/k8s.py", line 984, in _get_template_parameters
result = await super()._get_template_parameters(ctx, dependencies)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/yaook/statemachine/resources/k8s.py", line 907, in _get_template_parameters
"cluster_domain": api_utils.get_cluster_domain(),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/yaook/statemachine/api_utils.py", line 1104, in get_cluster_domain
raise KeyError("No DNS Response for kubernetes.default.svc. Maybe we "
KeyError: 'No DNS Response for kubernetes.default.svc. Maybe we are not running inside a cluster. You can set YAOOK_OP_CLUSTER_DOMAIN to override it'
2024-01-06 14:16:37,738 DEBUG yaook.op.tasks next item is scheduled for 10408.364446871348 (in 114.56s)
2024-01-06 14:16:37,738 DEBUG yaook.op.tasks next item is scheduled for 10408.364446871348 (in 114.56s)
2024-01-06 14:16:37,738 DEBUG yaook.op.tasks next item is scheduled for 10408.364446871348 (in 114.56s)
</pre>
<p><strong>Open Infrastructure - Task #8069 (Closed): Investigate potential bottleneck on storage/CEPH at DCL</strong><br />http://localhost:3000/issues/8069#change-52373, 2024-01-03T18:31:09Z, Nico Schottelius (nico.schottelius@ungleich.ch)</p>