h1. The ungleich kubernetes infrastructure and ungleich kubernetes manual
{{toc}}
h2. Status
This document is **pre-production**.
This document is to become the ungleich kubernetes infrastructure overview as well as the ungleich kubernetes manual.
h2. k8s clusters
| Cluster | Purpose/Setup | Maintainer | Master(s) | argo | rook | v4 http proxy | last verified |
| c0.k8s.ooo | Dev | - | UNUSED | | | | 2021-10-05 |
| c1.k8s.ooo | Dev p6 VM | Nico | 2a0a-e5c0-2-11-0-62ff-fe0b-1a3d.k8s-1.place6.ungleich.ch | | | | 2021-10-05 |
| c2.k8s.ooo | Dev p7 HW | Nico | server47 server53 server54 | "argo":https://argocd-server.argocd.svc.c2.k8s.ooo | x | | 2021-10-05 |
| c3.k8s.ooo | Test p7 PI | - | UNUSED | | | | 2021-10-05 |
| c4.k8s.ooo | Dev2 p7 HW | Fran/Jin-Guk | server52 server53 server54 | | | | - |
| c5.k8s.ooo | Dev p6 VM Amal | Nico/Amal | 2a0a-e5c0-2-11-0-62ff-fe0b-1a46.k8s-1.place6.ungleich.ch | | | | |
| c6.k8s.ooo | Dev p6 VM Jin-Guk | Jin-Guk | | | | | |
| [[p5.k8s.ooo]] | production | | server34 server36 server38 | "argo":https://argocd-server.argocd.svc.p5.k8s.ooo | | - | |
| [[p6.k8s.ooo]] | production | | server67 server69 server71 | "argo":https://argocd-server.argocd.svc.p6.k8s.ooo | x | 147.78.194.13 | 2021-10-05 |
| [[p10.k8s.ooo]] | production | | server63 server65 server83 | "argo":https://argocd-server.argocd.svc.p10.k8s.ooo | x | 147.78.194.12 | 2021-10-05 |
h2. General architecture and components overview
* All k8s clusters are IPv6 only
* We use BGP peering to propagate podcidr and serviceCidr networks to our infrastructure
* The main public testing repository is "ungleich-k8s":https://code.ungleich.ch/ungleich-public/ungleich-k8s
** Private configurations are found in the **k8s-config** repository
h3. Cluster types
| **Type/Feature** | **Development** | **Production** |
| Min No. nodes | 3 (1 master, 3 worker) | 5 (3 master, 3 worker) |
| Recommended minimum | 4 (dedicated master, 3 worker) | 8 (3 master, 5 worker) |
| Separation of control plane | optional | recommended |
| Persistent storage | required | required |
| Number of storage monitors | 3 | 5 |
h2. General k8s operations
h3. Cheat sheet / external great references
* "kubectl cheatsheet":https://kubernetes.io/docs/reference/kubectl/cheatsheet/
h3. Allowing work to be scheduled on the control plane
* Mostly for single node / test / development clusters
* Just remove the master taint as follows
<pre>
kubectl taint nodes --all node-role.kubernetes.io/master-
</pre>
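To verify the taint is gone, one can list the taints per node (a convenience check, not part of the original procedure):
<pre>
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'
</pre>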
h3. Get the cluster admin.conf
* On the masters of each cluster you can find the file @/etc/kubernetes/admin.conf@
* To be able to administer the cluster you can copy the admin.conf to your local machine
* Multi-cluster debugging becomes very easy if you name the config @~/cX-admin.conf@ (see example below)
<pre>
% scp root@server47.place7.ungleich.ch:/etc/kubernetes/admin.conf ~/c2-admin.conf
% export KUBECONFIG=~/c2-admin.conf
% kubectl get nodes
NAME STATUS ROLES AGE VERSION
server47 Ready control-plane,master 82d v1.22.0
server48 Ready control-plane,master 82d v1.22.0
server49 Ready <none> 82d v1.22.0
server50 Ready <none> 82d v1.22.0
server59 Ready control-plane,master 82d v1.22.0
server60 Ready,SchedulingDisabled <none> 82d v1.22.0
server61 Ready <none> 82d v1.22.0
server62 Ready <none> 82d v1.22.0
</pre>
h3. Installing a new k8s cluster
* Decide on the cluster name (usually *cX.k8s.ooo*), X counting upwards
** Using pXX.k8s.ooo for production clusters of placeXX
* Use cdist to configure the nodes with requirements like crio
* Decide between single or multi node control plane setups (see below)
** A single control plane is suitable for development clusters
Typical init procedure:
* Single control plane: @kubeadm init --config bootstrap/XXX/kubeadm.yaml@
* Multi control plane (HA): @kubeadm init --config bootstrap/XXX/kubeadm.yaml --upload-certs@
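After @kubeadm init@ completes, the cluster can be reached using the generated admin.conf (standard kubeadm behaviour, shown here for completeness):
<pre>
export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl get nodes
</pre>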
h3. Deleting a pod that is hanging in terminating state
<pre>
kubectl delete pod <PODNAME> --grace-period=0 --force --namespace <NAMESPACE>
</pre>
(from https://stackoverflow.com/questions/35453792/pods-stuck-in-terminating-status)
h3. Listing nodes of a cluster
<pre>
[15:05] bridge:~% kubectl get nodes
NAME STATUS ROLES AGE VERSION
server22 Ready <none> 52d v1.22.0
server23 Ready <none> 52d v1.22.2
server24 Ready <none> 52d v1.22.0
server25 Ready <none> 52d v1.22.0
server26 Ready <none> 52d v1.22.0
server27 Ready <none> 52d v1.22.0
server63 Ready control-plane,master 52d v1.22.0
server64 Ready <none> 52d v1.22.0
server65 Ready control-plane,master 52d v1.22.0
server66 Ready <none> 52d v1.22.0
server83 Ready control-plane,master 52d v1.22.0
server84 Ready <none> 52d v1.22.0
server85 Ready <none> 52d v1.22.0
server86 Ready <none> 52d v1.22.0
</pre>
h3. Removing / draining a node
Usually @kubectl drain server@ should do the job, but sometimes we need to be more aggressive:
<pre>
kubectl drain --delete-emptydir-data --ignore-daemonsets server23
</pre>
h3. Re-adding a node after draining
<pre>
kubectl uncordon serverXX
</pre>
h3. (Re-)joining worker nodes after creating the cluster
* We need to have an up-to-date token
* We use different join commands for the workers and control plane nodes
Generating the join command on an existing control plane node:
<pre>
kubeadm token create --print-join-command
</pre>
h3. (Re-)joining control plane nodes after creating the cluster
* We generate the token again
* We upload the certificates
* We need to combine/create the join command for the control plane node
Example session:
<pre>
% kubeadm token create --print-join-command
kubeadm join p10-api.k8s.ooo:6443 --token xmff4i.ABC --discovery-token-ca-cert-hash sha256:longhash
% kubeadm init phase upload-certs --upload-certs
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
CERTKEY
# Then we use these two outputs on the joining node:
kubeadm join p10-api.k8s.ooo:6443 --token xmff4i.ABC --discovery-token-ca-cert-hash sha256:longhash --control-plane --certificate-key CERTKEY
</pre>
Commands to be used on a control plane node:
<pre>
kubeadm token create --print-join-command
kubeadm init phase upload-certs --upload-certs
</pre>
Commands to be used on the joining node:
<pre>
JOINCOMMAND --control-plane --certificate-key CERTKEY
</pre>
SEE ALSO
* https://stackoverflow.com/questions/63936268/how-to-generate-kubeadm-token-for-secondary-control-plane-nodes
* https://blog.scottlowe.org/2019/08/15/reconstructing-the-join-command-for-kubeadm/
h3. How to fix etcd not starting when rejoining a kubernetes cluster as a control plane
If during the above step etcd does not come up, @kubeadm join@ can hang as follows:
<pre>
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[check-etcd] Checking that the etcd cluster is healthy
error execution phase check-etcd: etcd cluster is not healthy: failed to dial endpoint https://[2a0a:e5c0:10:1:225:b3ff:fe20:378a]:2379 with maintenance client: context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
</pre>
Then the problem is likely that the etcd server on the rejoining node is still registered as a member of the etcd cluster. We first need to remove it from the etcd cluster; afterwards the join works.
To fix this we:
* Find a working etcd pod
* Find the etcd members / member list
* Remove the etcd member that we want to re-join the cluster
<pre>
# Find the etcd pods
kubectl -n kube-system get pods -l component=etcd,tier=control-plane
# Get the list of etcd servers with the member id
kubectl exec -n kube-system -ti ETCDPODNAME -- etcdctl --endpoints '[::1]:2379' --cacert /etc/kubernetes/pki/etcd/ca.crt --cert /etc/kubernetes/pki/etcd/server.crt --key /etc/kubernetes/pki/etcd/server.key member list
# Remove the member
kubectl exec -n kube-system -ti ETCDPODNAME -- etcdctl --endpoints '[::1]:2379' --cacert /etc/kubernetes/pki/etcd/ca.crt --cert /etc/kubernetes/pki/etcd/server.crt --key /etc/kubernetes/pki/etcd/server.key member remove MEMBERID
</pre>
Sample session:
<pre>
[10:48] line:~% kubectl -n kube-system get pods -l component=etcd,tier=control-plane
NAME READY STATUS RESTARTS AGE
etcd-server63 1/1 Running 0 3m11s
etcd-server65 1/1 Running 3 7d2h
etcd-server83 1/1 Running 8 (6d ago) 7d2h
[10:48] line:~% kubectl exec -n kube-system -ti etcd-server65 -- etcdctl --endpoints '[::1]:2379' --cacert /etc/kubernetes/pki/etcd/ca.crt --cert /etc/kubernetes/pki/etcd/server.crt --key /etc/kubernetes/pki/etcd/server.key member list
356891cd676df6e4, started, server65, https://[2a0a:e5c0:10:1:225:b3ff:fe20:375c]:2380, https://[2a0a:e5c0:10:1:225:b3ff:fe20:375c]:2379, false
371b8a07185dee7e, started, server63, https://[2a0a:e5c0:10:1:225:b3ff:fe20:378a]:2380, https://[2a0a:e5c0:10:1:225:b3ff:fe20:378a]:2379, false
5942bc58307f8af9, started, server83, https://[2a0a:e5c0:10:1:3e4a:92ff:fe79:bb98]:2380, https://[2a0a:e5c0:10:1:3e4a:92ff:fe79:bb98]:2379, false
[10:48] line:~% kubectl exec -n kube-system -ti etcd-server65 -- etcdctl --endpoints '[::1]:2379' --cacert /etc/kubernetes/pki/etcd/ca.crt --cert /etc/kubernetes/pki/etcd/server.crt --key /etc/kubernetes/pki/etcd/server.key member remove 371b8a07185dee7e
Member 371b8a07185dee7e removed from cluster e3c0805f592a8f77
</pre>
SEE ALSO
* We found the solution using https://stackoverflow.com/questions/67921552/re-installed-node-cannot-join-kubernetes-cluster
h2. Calico CNI
h3. Calico Installation
* We install "calico using helm":https://docs.projectcalico.org/getting-started/kubernetes/helm
* This has the following advantages:
** Easy to upgrade
** Does not require us to configure IPv6/dual-stack settings, as the tigera operator figures things out on its own
Usually plain calico can be installed directly using:
<pre>
helm repo add projectcalico https://docs.projectcalico.org/charts
helm install calico projectcalico/tigera-operator --version v3.20.2
</pre>
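To verify the installation, one can check that the operator and the calico pods come up; a sketch, assuming the default namespaces used by the tigera operator:
<pre>
kubectl get pods -n tigera-operator
kubectl get pods -n calico-system
</pre>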
h3. Installing calicoctl
To be able to manage and configure calico, we need to
"install calicoctl (we choose the variant running as a pod)":https://docs.projectcalico.org/getting-started/clis/calicoctl/install#install-calicoctl-as-a-kubernetes-pod
<pre>
kubectl apply -f https://docs.projectcalico.org/manifests/calicoctl.yaml
</pre>
And making it more easily accessible via an alias:
<pre>
alias calicoctl="kubectl exec -i -n kube-system calicoctl -- /calicoctl"
</pre>
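Example usage of the alias (assuming the calicoctl pod is running):
<pre>
calicoctl get nodes
calicoctl get ippool -o wide
</pre>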
h3. Calico configuration
By default our k8s clusters "BGP peer":https://docs.projectcalico.org/networking/bgp
with an upstream router to propagate podcidr and servicecidr.
Default settings in our infrastructure:
* We use a full-mesh using the @nodeToNodeMeshEnabled: true@ option
* We keep the original next hop so that *only* the server hosting the pod announces it (instead of ECMP)
* We use private ASNs for k8s clusters
* We do *not* use any overlay
After installing calico and calicoctl the last step of the installation is usually:
<pre>
calicoctl create -f - < calico-bgp.yaml
</pre>
A sample BGP configuration:
<pre>
---
apiVersion: projectcalico.org/v3
kind: BGPConfiguration
metadata:
  name: default
spec:
  logSeverityScreen: Info
  nodeToNodeMeshEnabled: true
  asNumber: 65534
  serviceClusterIPs:
    - cidr: 2a0a:e5c0:10:3::/108
  serviceExternalIPs:
    - cidr: 2a0a:e5c0:10:3::/108
---
apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  name: router1-place10
spec:
  peerIP: 2a0a:e5c0:10:1::50
  asNumber: 213081
  keepOriginalNextHop: true
</pre>
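After applying, the configuration can be checked via calicoctl (a sketch, assuming the alias from above):
<pre>
calicoctl get bgpconfig default -o yaml
calicoctl get bgppeer
</pre>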
h2. ArgoCD / ArgoWorkFlow
h3. Argocd Installation
As there is no configuration management present yet, argocd is installed using
<pre>
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
</pre>
* See https://argo-cd.readthedocs.io/en/stable/
h3. Get the argocd credentials
<pre>
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d; echo ""
</pre>
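With the password at hand one can also log in via the argocd CLI; a sketch, assuming the CLI is installed and the server is exposed at argocd.example.com:
<pre>
argocd login argocd.example.com --username admin
argocd app list
</pre>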
h3. Using the argocd webhook to trigger changes
* To trigger changes, POST a JSON payload to https://argocd.example.com/api/webhook (see the sketch below)
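A minimal sketch of such a trigger, assuming a gitea-style push event (repository URL and branch are placeholders; argocd understands the standard git provider webhook payloads):
<pre>
curl -X POST https://argocd.example.com/api/webhook \
  -H 'Content-Type: application/json' \
  -H 'X-Gitea-Event: push' \
  -d '{"ref": "refs/heads/master", "repository": {"html_url": "https://code.ungleich.ch/ungleich-intern/k8s-config"}}'
</pre>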
h3. Deploying an application
* Applications are deployed via git towards gitea (code.ungleich.ch) and then pulled by argo
* Always include the *redmine-url* pointing to the (customer) ticket
** Also add the support-url if it exists
Application sample
<pre>
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: gitea-CUSTOMER
  namespace: argocd
spec:
  destination:
    namespace: default
    server: 'https://kubernetes.default.svc'
  source:
    path: apps/prod/gitea
    repoURL: 'https://code.ungleich.ch/ungleich-intern/k8s-config.git'
    targetRevision: HEAD
    helm:
      parameters:
        - name: storage.data.storageClass
          value: rook-ceph-block-hdd
        - name: storage.data.size
          value: 200Gi
        - name: storage.db.storageClass
          value: rook-ceph-block-ssd
        - name: storage.db.size
          value: 10Gi
        - name: storage.letsencrypt.storageClass
          value: rook-ceph-block-hdd
        - name: storage.letsencrypt.size
          value: 50Mi
        - name: letsencryptStaging
          value: 'no'
        - name: fqdn
          value: 'code.verua.online'
  project: default
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
  info:
    - name: 'redmine-url'
      value: 'https://redmine.ungleich.ch/issues/ISSUEID'
    - name: 'support-url'
      value: 'https://support.ungleich.ch/Ticket/Display.html?id=TICKETID'
</pre>
h2. Helm related operations and conventions
We use helm charts extensively.
* In production, they are managed via argocd
* In development, helm charts can be developed and deployed manually using the helm utility.
h3. Installing a helm chart
One can use the usual pattern of
<pre>
helm install <releasename> <chartdirectory>
</pre>
However, when testing helm charts you often want to reinstall/update. The following pattern is "better", because it also works if the release is already installed:
<pre>
helm upgrade --install <releasename> <chartdirectory>
</pre>
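For example (release and chart names are illustrative):
<pre>
helm upgrade --install mytest ./mychart --namespace mytest --create-namespace
</pre>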
h3. Naming services and deployments in helm charts [Application labels]
* We always have @{{ .Release.Name }}@ to identify the current "instance"
* Deployments:
** use @app: <what it is>@, e.g. @app: nginx@, @app: postgres@, ...
* See more about standard labels on
** https://kubernetes.io/docs/concepts/overview/working-with-objects/common-labels/
** https://helm.sh/docs/chart_best_practices/labels/
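With labels set this way, resources can be selected conveniently, for instance (label value illustrative):
<pre>
kubectl get deployments,services -l app=nginx
</pre>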
h2. Rook / Ceph Related Operations
h3. Executing ceph commands
Using the ceph-tools pod as follows:
<pre>
kubectl exec -n rook-ceph -ti $(kubectl -n rook-ceph get pods -l app=rook-ceph-tools -o jsonpath='{.items[*].metadata.name}') -- ceph -s
</pre>
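To avoid retyping the long exec command, a shell alias can help (a convenience assumption, not part of our standard setup):
<pre>
alias ceph-tools='kubectl exec -n rook-ceph -ti deploy/rook-ceph-tools --'
ceph-tools ceph osd tree
</pre>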
h3. Inspecting the logs of a specific server
<pre>
# Get the related pods
kubectl -n rook-ceph get pods -l app=rook-ceph-osd-prepare
...
# Inspect the logs of a specific pod
kubectl -n rook-ceph logs -f rook-ceph-osd-prepare-server23--1-444qx
</pre>
h3. Inspecting the logs of the rook-ceph-operator
<pre>
kubectl -n rook-ceph logs -f -l app=rook-ceph-operator
</pre>
h3. Triggering server prepare / adding new osds
The rook-ceph-operator triggers/watches/creates pods to maintain hosts. To trigger a full "rescan", simply delete that pod:
<pre>
kubectl -n rook-ceph delete pods -l app=rook-ceph-operator
</pre>
This will cause all the @rook-ceph-osd-prepare-..@ jobs to be recreated and thus OSDs to be created, if new disks have been added.
h3. Removing an OSD
* See "Ceph OSD Management":https://rook.io/docs/rook/v1.7/ceph-osd-mgmt.html
* More specifically: https://github.com/rook/rook/blob/release-1.7/cluster/examples/kubernetes/ceph/osd-purge.yaml
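A rough sketch of that procedure (check the osd-purge manifest for the rook version in use; the OSD ids are placeholders):
<pre>
# Stop the operator so it does not recreate the OSD
kubectl -n rook-ceph scale deployment rook-ceph-operator --replicas=0
# Edit osd-purge.yaml: set --osd-ids to the OSD(s) to remove, then apply it
kubectl -n rook-ceph apply -f osd-purge.yaml
# Re-enable the operator afterwards
kubectl -n rook-ceph scale deployment rook-ceph-operator --replicas=1
</pre>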
h2. Harbor
* We use "Harbor":https://goharbor.io/ for caching and as an image registry. Internal app reference: apps/prod/harbor.
* The admin password is in the password store, auto generated per cluster
* At the moment harbor only authenticates against the internal ldap tree
h3. LDAP configuration
* The url needs to be @ldaps://...@
* The uid attribute is @uid@
* The remaining settings can stay at their defaults
h2. Nextcloud
h3. How to get the username and password
* The initial username is set to "nextcloud"
* The password is autogenerated and saved in a kubernetes secret
<pre>
# Placeholder sketch: namespace and secret name depend on the deployment, adjust both
kubectl get secret -n NAMESPACE SECRETNAME -o jsonpath="{.data.password}" | base64 -d; echo ""
</pre>
h3. How to fix "Access through untrusted domain"
* Nextcloud stores the initial domain configuration
* If the FQDN is changed, it will show the error message "Access through untrusted domain"
* To fix, edit /var/www/html/config/config.php and correct the domain
* Then delete the pods
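A sketch of the fix via kubectl (pod name, namespace and label are placeholders):
<pre>
# Correct the trusted domain in the nextcloud config
kubectl exec -ti NEXTCLOUDPOD -- vi /var/www/html/config/config.php
# Then recreate the pods
kubectl delete pods -l app=nextcloud
</pre>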
h2. Infrastructure versions
h3. ungleich kubernetes infrastructure v5 (2021-10)
Clusters are configured / setup in this order:
* Bootstrap via kubeadm
* "Networking via calico + BGP (non ECMP) using helm":https://docs.projectcalico.org/getting-started/kubernetes/helm
* "ArgoCD for CD":https://argo-cd.readthedocs.io/en/stable/
** "rook for storage via argocd":https://rook.io/
** haproxy as an IPv4-to-IPv6 proxy into the IPv6-only cluster, via argocd
** "kubernetes-secret-generator for in cluster secrets":https://github.com/mittwald/kubernetes-secret-generator
** "ungleich-certbot managing certs and nginx":https://hub.docker.com/repository/docker/ungleich/ungleich-certbot
h3. ungleich kubernetes infrastructure v4 (2021-09)
* rook is configured via manifests instead of using the rook-ceph-cluster helm chart
* The rook operator is still being installed via helm
h3. ungleich kubernetes infrastructure v3 (2021-07)
* rook is now installed via helm via argocd instead of directly via manifests
h3. ungleich kubernetes infrastructure v2 (2021-05)
* Replaced fluxv2 from ungleich k8s v1 with argocd
** argocd can apply helm templates directly without needing to go through Chart releases
* We are also using argoflow for build flows
* Planned to add "kaniko":https://github.com/GoogleContainerTools/kaniko for image building
h3. ungleich kubernetes infrastructure v1 (2021-01)
We are using the following components:
* "Calico as a CNI":https://www.projectcalico.org/ with BGP, IPv6 only, no encapsulation
** Needed for basic networking
* "kubernetes-secret-generator":https://github.com/mittwald/kubernetes-secret-generator for creating secrets
** Needed so that secrets are not stored in the git repository, but only in the cluster
* "ungleich-certbot":https://hub.docker.com/repository/docker/ungleich/ungleich-certbot
** Needed to get letsencrypt certificates for services
* "rook with ceph rbd + cephfs":https://rook.io/ for storage
** rbd for almost everything, *ReadWriteOnce*
** cephfs for smaller things, multi access *ReadWriteMany*
** Needed for providing persistent storage
* "flux v2":https://fluxcd.io/
** Needed to manage resources automatically