The ungleich kubernetes infrastructure » History » Version 56
Nico Schottelius, 10/29/2021 09:17 AM
h1. The ungleich kubernetes infrastructure and ungleich kubernetes manual

{{toc}}

h2. Status

This document is **pre-production**.
This document is to become the ungleich kubernetes infrastructure overview as well as the ungleich kubernetes manual.

h2. k8s clusters

| Cluster | Purpose/Setup | Maintainer | Master(s) | argo | rook | last verified |
| c0.k8s.ooo | Dev | - | UNUSED | | | 2021-10-05 |
| c1.k8s.ooo | Dev p6 VM | Nico | 2a0a-e5c0-2-11-0-62ff-fe0b-1a3d.k8s-1.place6.ungleich.ch | | | 2021-10-05 |
| c2.k8s.ooo | Dev p7 HW | Nico | server47 server53 server54 | x | x | 2021-10-05 |
| c3.k8s.ooo | Test p7 PI | - | UNUSED | | | 2021-10-05 |
| c4.k8s.ooo | Dev2 p7 HW | Fran/Jin-Guk | server52 server53 server54 | | | - |
| c5.k8s.ooo | Dev p6 VM Amal | Nico/Amal | 2a0a-e5c0-2-11-0-62ff-fe0b-1a46.k8s-1.place6.ungleich.ch | | | |
| c6.k8s.ooo | Dev p6 VM Jin-Guk | Jin-Guk | | | | |
| [[p6.k8s.ooo]] | production | | server67 server69 server71 | x | x | 2021-10-05 |
| [[p10.k8s.ooo]] | production | | server63 server65 server83 | x | x | 2021-10-05 |
| | | | | | | |

h2. General architecture and components overview

* All k8s clusters are IPv6 only
* We use BGP peering to propagate podcidr and serviceCidr networks to our infrastructure
* The main public testing repository is "ungleich-k8s":https://code.ungleich.ch/ungleich-public/ungleich-k8s
** Private configurations are found in the **k8s-config** repository

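A quick way to sanity check the IPv6/BGP setup is to look at the podCIDR assigned to each node and, on a node where calicoctl is available, at the BGP session status. This is only a sketch, not part of a standard procedure, and assumes calicoctl is installed:

<pre>
# Show the (IPv6) podCIDRs the nodes announce
kubectl get nodes -o custom-columns=NAME:.metadata.name,PODCIDR:.spec.podCIDRs

# On a node with calicoctl installed: verify the BGP sessions are Established
calicoctl node status
</pre>
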
h3. Cluster types

| **Type/Feature** | **Development** | **Production** |
| Min No. nodes | 3 (1 master, 3 worker) | 5 (3 master, 3 worker) |
| Recommended minimum | 4 (dedicated master, 3 worker) | 8 (3 master, 5 worker) |
| Separation of control plane | optional | recommended |
| Persistent storage | required | required |
| Number of storage monitors | 3 | 5 |

h2. General k8s operations

h3. Cheat sheet / great external references

* "kubectl cheatsheet":https://kubernetes.io/docs/reference/kubectl/cheatsheet/
46 | |||
47 | 44 | Nico Schottelius | h3. Get the cluster admin.conf |
48 | |||
49 | * On the masters of each cluster you can find the file @/etc/kubernetes/admin.conf@ |
||
50 | * To be able to administrate the cluster you can copy the admin.conf to your local machine |
||
51 | * Multi cluster debugging can very easy if you name the config ~/cX-admin.conf (see example below) |
||
52 | |||
53 | <pre> |
||
54 | % scp root@server47.place7.ungleich.ch:/etc/kubernetes/admin.conf ~/c2-admin.conf |
||
55 | % export KUBECONFIG=~/c2-admin.conf |
||
56 | % kubectl get nodes |
||
57 | NAME STATUS ROLES AGE VERSION |
||
58 | server47 Ready control-plane,master 82d v1.22.0 |
||
59 | server48 Ready control-plane,master 82d v1.22.0 |
||
60 | server49 Ready <none> 82d v1.22.0 |
||
61 | server50 Ready <none> 82d v1.22.0 |
||
62 | server59 Ready control-plane,master 82d v1.22.0 |
||
63 | server60 Ready,SchedulingDisabled <none> 82d v1.22.0 |
||
64 | server61 Ready <none> 82d v1.22.0 |
||
65 | server62 Ready <none> 82d v1.22.0 |
||
66 | </pre> |
||
67 | |||
h3. Installing a new k8s cluster

* Decide on the cluster name (usually *cX.k8s.ooo*), X counting upwards
** Use pXX.k8s.ooo for production clusters of placeXX
* Use cdist to configure the nodes with requirements like crio
* Decide between single or multi node control plane setups (see below)
** A single control plane is suitable for development clusters

Typical init procedure:

* Single control plane: @kubeadm init --config bootstrap/XXX/kubeadm.yaml@
* Multi control plane (HA): @kubeadm init --config bootstrap/XXX/kubeadm.yaml --upload-certs@

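Before running the actual init it can be worth checking that the container runtime is up and that the kubeadm config parses. This is an optional pre-flight sketch; it assumes crictl is installed and crio listens on its upstream default socket:

<pre>
# Check that crio answers on the default socket (adjust the path if cdist configures it differently)
crictl --runtime-endpoint unix:///var/run/crio/crio.sock info

# Dry-run the init with the cluster's bootstrap config
kubeadm init --config bootstrap/XXX/kubeadm.yaml --dry-run
</pre>
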
h3. Deleting a pod that is hanging in terminating state

<pre>
kubectl delete pod <PODNAME> --grace-period=0 --force --namespace <NAMESPACE>
</pre>

(from https://stackoverflow.com/questions/35453792/pods-stuck-in-terminating-status)

h3. Listing nodes of a cluster

<pre>
[15:05] bridge:~% kubectl get nodes
NAME       STATUS   ROLES                  AGE   VERSION
server22   Ready    <none>                 52d   v1.22.0
server23   Ready    <none>                 52d   v1.22.2
server24   Ready    <none>                 52d   v1.22.0
server25   Ready    <none>                 52d   v1.22.0
server26   Ready    <none>                 52d   v1.22.0
server27   Ready    <none>                 52d   v1.22.0
server63   Ready    control-plane,master   52d   v1.22.0
server64   Ready    <none>                 52d   v1.22.0
server65   Ready    control-plane,master   52d   v1.22.0
server66   Ready    <none>                 52d   v1.22.0
server83   Ready    control-plane,master   52d   v1.22.0
server84   Ready    <none>                 52d   v1.22.0
server85   Ready    <none>                 52d   v1.22.0
server86   Ready    <none>                 52d   v1.22.0
</pre>

h3. Removing / draining a node

Usually @kubectl drain server@ should do the job, but sometimes we need to be more aggressive:

<pre>
kubectl drain --delete-emptydir-data --ignore-daemonsets server23
</pre>

h3. Readding a node after draining

<pre>
kubectl uncordon serverXX
</pre>

h3. (Re-)joining worker nodes after creating the cluster

* We need to have an up-to-date token
* We use different join commands for the workers and control plane nodes

Generating the join command on an existing control plane node:

<pre>
kubeadm token create --print-join-command
</pre>

h3. (Re-)joining control plane nodes after creating the cluster

* We generate the token again
* We upload the certificates
* We need to combine/create the join command for the control plane node

Example session:

<pre>
% kubeadm token create --print-join-command
kubeadm join p10-api.k8s.ooo:6443 --token xmff4i.ABC --discovery-token-ca-cert-hash sha256:longhash

% kubeadm init phase upload-certs --upload-certs
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
CERTKEY

# Then we use these two outputs on the joining node:

kubeadm join p10-api.k8s.ooo:6443 --token xmff4i.ABC --discovery-token-ca-cert-hash sha256:longhash --control-plane --certificate-key CERTKEY
</pre>

Commands to be used on a control plane node:

<pre>
kubeadm token create --print-join-command
kubeadm init phase upload-certs --upload-certs
</pre>

Commands to be used on the joining node:

<pre>
JOINCOMMAND --control-plane --certificate-key CERTKEY
</pre>

SEE ALSO

* https://stackoverflow.com/questions/63936268/how-to-generate-kubeadm-token-for-secondary-control-plane-nodes
* https://blog.scottlowe.org/2019/08/15/reconstructing-the-join-command-for-kubeadm/

h3. How to fix etcd not starting when rejoining a kubernetes cluster as a control plane

If during the above step etcd does not come up, @kubeadm join@ can hang as follows:

<pre>
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[check-etcd] Checking that the etcd cluster is healthy
error execution phase check-etcd: etcd cluster is not healthy: failed to dial endpoint https://[2a0a:e5c0:10:1:225:b3ff:fe20:378a]:2379 with maintenance client: context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
</pre>

In this case the problem is likely that the etcd instance on the rejoining node is still a member of the etcd cluster. We first need to remove it from the etcd cluster; after that the join works.

To fix this we do:

* Find a working etcd pod
* Find the etcd members / member list
* Remove the etcd member that we want to re-join the cluster


<pre>
# Find the etcd pods
kubectl -n kube-system get pods -l component=etcd,tier=control-plane

# Get the list of etcd servers with the member id
kubectl exec -n kube-system -ti ETCDPODNAME -- etcdctl --endpoints '[::1]:2379' --cacert /etc/kubernetes/pki/etcd/ca.crt --cert /etc/kubernetes/pki/etcd/server.crt --key /etc/kubernetes/pki/etcd/server.key member list

# Remove the member
kubectl exec -n kube-system -ti ETCDPODNAME -- etcdctl --endpoints '[::1]:2379' --cacert /etc/kubernetes/pki/etcd/ca.crt --cert /etc/kubernetes/pki/etcd/server.crt --key /etc/kubernetes/pki/etcd/server.key member remove MEMBERID
</pre>

Sample session:

<pre>
[10:48] line:~% kubectl -n kube-system get pods -l component=etcd,tier=control-plane
NAME            READY   STATUS    RESTARTS     AGE
etcd-server63   1/1     Running   0            3m11s
etcd-server65   1/1     Running   3            7d2h
etcd-server83   1/1     Running   8 (6d ago)   7d2h
[10:48] line:~% kubectl exec -n kube-system -ti etcd-server65 -- etcdctl --endpoints '[::1]:2379' --cacert /etc/kubernetes/pki/etcd/ca.crt --cert /etc/kubernetes/pki/etcd/server.crt --key /etc/kubernetes/pki/etcd/server.key member list
356891cd676df6e4, started, server65, https://[2a0a:e5c0:10:1:225:b3ff:fe20:375c]:2380, https://[2a0a:e5c0:10:1:225:b3ff:fe20:375c]:2379, false
371b8a07185dee7e, started, server63, https://[2a0a:e5c0:10:1:225:b3ff:fe20:378a]:2380, https://[2a0a:e5c0:10:1:225:b3ff:fe20:378a]:2379, false
5942bc58307f8af9, started, server83, https://[2a0a:e5c0:10:1:3e4a:92ff:fe79:bb98]:2380, https://[2a0a:e5c0:10:1:3e4a:92ff:fe79:bb98]:2379, false

[10:48] line:~% kubectl exec -n kube-system -ti etcd-server65 -- etcdctl --endpoints '[::1]:2379' --cacert /etc/kubernetes/pki/etcd/ca.crt --cert /etc/kubernetes/pki/etcd/server.crt --key /etc/kubernetes/pki/etcd/server.key member remove 371b8a07185dee7e
Member 371b8a07185dee7e removed from cluster e3c0805f592a8f77

</pre>

SEE ALSO

* We found the solution using https://stackoverflow.com/questions/67921552/re-installed-node-cannot-join-kubernetes-cluster

h2. ArgoCD / ArgoFlow

h3. Get the argocd credentials

<pre>
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d; echo ""
</pre>

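The password from above belongs to the @admin@ user. One possible way to log in with the argocd CLI is via a port-forward to the API server; hostname and port below are purely local assumptions, not how the clusters necessarily expose argocd:

<pre>
# Reach the argocd API server via port-forward and log in as admin
kubectl -n argocd port-forward svc/argocd-server 8080:443 &
argocd login localhost:8080 --username admin \
  --password "$(kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath='{.data.password}' | base64 -d)" \
  --insecure
</pre>
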
h2. Helm related operations

h3. Installing a helm chart

One can use the usual pattern of

<pre>
helm install <releasename> <chartdirectory>
</pre>

However, when testing helm charts you often want to reinstall or update them. The following pattern is better, because it upgrades the release if it is already installed and installs it otherwise:

<pre>
helm upgrade --install <releasename> <chartdirectory>
</pre>

h2. Rook / Ceph Related Operations

h3. Inspecting the logs of a specific server

<pre>
# Get the related pods
kubectl -n rook-ceph get pods -l app=rook-ceph-osd-prepare
...

# Inspect the logs of a specific pod
kubectl -n rook-ceph logs -f rook-ceph-osd-prepare-server23--1-444qx
</pre>

h3. Triggering server prepare / adding new osds

The rook-ceph-operator triggers/watches/creates pods to maintain hosts. To trigger a full "re-scan", simply delete that pod:

<pre>
kubectl -n rook-ceph delete pods -l app=rook-ceph-operator
</pre>

This will cause all the @rook-ceph-osd-prepare-..@ jobs to be recreated and thus OSDs to be created, if new disks have been added.

h3. Removing an OSD

* See "Ceph OSD Management":https://rook.io/docs/rook/v1.7/ceph-osd-mgmt.html

h2. Infrastructure versions

h3. ungleich kubernetes infrastructure v4

* rook is configured via manifests instead of using the rook-ceph-cluster helm chart
* The rook operator is still being installed via helm

h3. ungleich kubernetes infrastructure v3

* rook is now installed via helm through argocd instead of directly via manifests

h3. ungleich kubernetes infrastructure v2

* Replaced fluxv2 from ungleich k8s v1 with argocd
** argocd can apply helm templates directly without needing to go through Chart releases
* We are also using argoflow for build flows
* Planned to add "kaniko":https://github.com/GoogleContainerTools/kaniko for image building

h3. ungleich kubernetes infrastructure v1

We are using the following components:

* "Calico as a CNI":https://www.projectcalico.org/ with BGP, IPv6 only, no encapsulation
** Needed for basic networking
* "kubernetes-secret-generator":https://github.com/mittwald/kubernetes-secret-generator for creating secrets
** Needed so that secrets are not stored in the git repository, but only in the cluster
* "ungleich-certbot":https://hub.docker.com/repository/docker/ungleich/ungleich-certbot
** Needed to get letsencrypt certificates for services
* "rook with ceph rbd + cephfs":https://rook.io/ for storage
** rbd for almost everything, *ReadWriteOnce*
** cephfs for smaller things, multi access *ReadWriteMany*
** Needed for providing persistent storage
* "flux v2":https://fluxcd.io/
** Needed to manage resources automatically