
h1. The ungleich kubernetes infrastructure and ungleich kubernetes manual

{{toc}}

h2. Status

This document is **production**.
This document is the ungleich kubernetes infrastructure overview as well as the ungleich kubernetes manual.

h2. k8s clusters

| Cluster            | Purpose/Setup     | Maintainer | Master(s)                     | argo                                                   | v4 http proxy | last verified |
| c0.k8s.ooo         | Dev               | -          | UNUSED                        |                                                        |               |    2021-10-05 |
| c1.k8s.ooo         | retired           |            | -                             |                                                        |               |    2022-03-15 |
| c2.k8s.ooo         | Dev p7 HW         | Nico       | server47 server53 server54    | "argo":https://argocd-server.argocd.svc.c2.k8s.ooo     |               |    2021-10-05 |
| c3.k8s.ooo         | retired           | -          | -                             |                                                        |               |    2021-10-05 |
| c4.k8s.ooo         | Dev2 p7 HW        | Jin-Guk    | server52 server53 server54    |                                                        |               |             - |
| c5.k8s.ooo         | retired           |            | -                             |                                                        |               |    2022-03-15 |
| c6.k8s.ooo         | Dev p6 VM Jin-Guk | Jin-Guk    |                               |                                                        |               |               |
| [[p5.k8s.ooo]]     | production        |            | server34 server36 server38    | "argo":https://argocd-server.argocd.svc.p5.k8s.ooo     | -             |               |
| [[p5-cow.k8s.ooo]] | production        | Nico       | server47 server51 server55    | "argo":https://argocd-server.argocd.svc.p5-cow.k8s.ooo |               |    2022-08-27 |
| [[p6.k8s.ooo]]     | production        |            | server67 server69 server71    | "argo":https://argocd-server.argocd.svc.p6.k8s.ooo     | 147.78.194.13 |    2021-10-05 |
| [[p6-cow.k8s.ooo]] | production        |            | server134 server135 server136 | "argo":https://argocd-server.argocd.svc.p6in10.k8s.ooo | ?             |    2023-05-17 |
| [[p10.k8s.ooo]]    | production        |            | server131 server132 server133 | "argo":https://argocd-server.argocd.svc.p10.k8s.ooo    | 147.78.194.12 |    2021-10-05 |
| [[k8s.ge.nau.so]]  | development       |            | server107 server108 server109 | "argo":https://argocd-server.argocd.svc.k8s.ge.nau.so  |               |               |
| [[dev.k8s.ooo]]    | development       |            | server110 server111 server112 | "argo":https://argocd-server.argocd.svc.dev.k8s.ooo    | -             |    2022-07-08 |
| [[r1r2p15k8sooo|r1.p15.k8s.ooo]] | production | Nico | server120 | | | 2022-10-30 |
| [[r1r2p15k8sooo|r2.p15.k8s.ooo]] | production | Nico | server121 | | | 2022-09-06 |
| [[r1r2p10k8sooo|r1.p10.k8s.ooo]] | production | Nico | server122 | | | 2022-10-30 |
| [[r1r2p10k8sooo|r2.p10.k8s.ooo]] | production | Nico | server123 | | | 2022-10-15 |
| [[r1r2p5k8sooo|r1.p5.k8s.ooo]] | production | Nico | server137 | | | 2022-10-30 |
| [[r1r2p5k8sooo|r2.p5.k8s.ooo]] | production | Nico | server138 | | | 2022-10-30 |
| [[r1r2p6k8sooo|r1.p6.k8s.ooo]] | production | Nico | server139 | | | 2022-10-30 |
| [[r1r2p6k8sooo|r2.p6.k8s.ooo]] | production | Nico | server140 | | | 2022-10-30 |

h2. General architecture and components overview

* All k8s clusters are IPv6 only
* We use BGP peering to propagate podcidr and serviceCidr networks to our infrastructure
* The main public testing repository is "ungleich-k8s":https://code.ungleich.ch/ungleich-public/ungleich-k8s
** Private configurations are found in the **k8s-config** repository

h3. Cluster types

| **Type/Feature**            | **Development**                | **Production**         |
| Min No. nodes               | 3 (1 master, 3 worker)         | 5 (3 master, 3 worker) |
| Recommended minimum         | 4 (dedicated master, 3 worker) | 8 (3 master, 5 worker) |
| Separation of control plane | optional                       | recommended            |
| Persistent storage          | required                       | required               |
| Number of storage monitors  | 3                              | 5                      |

h2. General k8s operations

h3. Cheat sheet / great external references

* "kubectl cheatsheet":https://kubernetes.io/docs/reference/kubectl/cheatsheet/

Some examples:

h4. Use kubectl to print only the node names

<pre>
kubectl get nodes -o jsonpath='{.items[*].metadata.name}'
</pre>

Can easily be used in a shell loop like this:

<pre>
for host in $(kubectl get nodes -o jsonpath='{.items[*].metadata.name}'); do echo $host; ssh root@${host} uptime; done
</pre>

h3. Allowing to schedule work on the control plane / removing node taints

* Mostly for single node / test / development clusters
* Just remove the master taint as follows

<pre>
kubectl taint nodes --all node-role.kubernetes.io/master-
kubectl taint nodes --all node-role.kubernetes.io/control-plane-
</pre>

You can check the node taints using @kubectl describe node ...@

h3. Adding taints

* For instance to limit nodes to specific customers

<pre>
kubectl taint nodes serverXX customer=CUSTOMERNAME:NoSchedule
</pre>
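
Pods that should still be scheduled on such a node then need a matching toleration; a minimal sketch (key, value and effect as in the taint above, the pod itself is hypothetical):

<pre>
apiVersion: v1
kind: Pod
...
spec:
  tolerations:
  - key: "customer"
    operator: "Equal"
    value: "CUSTOMERNAME"
    effect: "NoSchedule"
</pre>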

h3. Get the cluster admin.conf

* On the masters of each cluster you can find the file @/etc/kubernetes/admin.conf@
* To be able to administrate the cluster you can copy the admin.conf to your local machine
* Multi cluster debugging can be made very easy if you name the config ~/cX-admin.conf (see example below)

<pre>
% scp root@server47.place7.ungleich.ch:/etc/kubernetes/admin.conf ~/c2-admin.conf
% export KUBECONFIG=~/c2-admin.conf
% kubectl get nodes
NAME       STATUS                     ROLES                  AGE   VERSION
server47   Ready                      control-plane,master   82d   v1.22.0
server48   Ready                      control-plane,master   82d   v1.22.0
server49   Ready                      <none>                 82d   v1.22.0
server50   Ready                      <none>                 82d   v1.22.0
server59   Ready                      control-plane,master   82d   v1.22.0
server60   Ready,SchedulingDisabled   <none>                 82d   v1.22.0
server61   Ready                      <none>                 82d   v1.22.0
server62   Ready                      <none>                 82d   v1.22.0
</pre>

h3. Installing a new k8s cluster

* Decide on the cluster name (usually *cX.k8s.ooo*), X counting upwards
** Using pXX.k8s.ooo for production clusters of placeXX
* Use cdist to configure the nodes with requirements like crio
* Decide between single or multi node control plane setups (see below)
** Single control plane suitable for development clusters

Typical init procedure:

h4. Single control plane:

<pre>
kubeadm init --config bootstrap/XXX/kubeadm.yaml
</pre>

h4. Multi control plane (HA):

<pre>
kubeadm init --config bootstrap/XXX/kubeadm.yaml --upload-certs
</pre>

h3. Deleting a pod that is hanging in terminating state

<pre>
kubectl delete pod <PODNAME> --grace-period=0 --force --namespace <NAMESPACE>
</pre>

(from https://stackoverflow.com/questions/35453792/pods-stuck-in-terminating-status)

h3. Listing nodes of a cluster

<pre>
[15:05] bridge:~% kubectl get nodes
NAME       STATUS   ROLES                  AGE   VERSION
server22   Ready    <none>                 52d   v1.22.0
server23   Ready    <none>                 52d   v1.22.2
server24   Ready    <none>                 52d   v1.22.0
server25   Ready    <none>                 52d   v1.22.0
server26   Ready    <none>                 52d   v1.22.0
server27   Ready    <none>                 52d   v1.22.0
server63   Ready    control-plane,master   52d   v1.22.0
server64   Ready    <none>                 52d   v1.22.0
server65   Ready    control-plane,master   52d   v1.22.0
server66   Ready    <none>                 52d   v1.22.0
server83   Ready    control-plane,master   52d   v1.22.0
server84   Ready    <none>                 52d   v1.22.0
server85   Ready    <none>                 52d   v1.22.0
server86   Ready    <none>                 52d   v1.22.0
</pre>

h3. Removing / draining a node

Usually @kubectl drain server@ should do the job, but sometimes we need to be more aggressive:

<pre>
kubectl drain --delete-emptydir-data --ignore-daemonsets serverXX
</pre>

h3. Re-adding a node after draining

<pre>
kubectl uncordon serverXX
</pre>

h3. (Re-)joining worker nodes after creating the cluster

* We need to have an up-to-date token
* We use different join commands for the workers and control plane nodes (the worker invocation is sketched right below)

Generating the join command on an existing control plane node:

<pre>
kubeadm token create --print-join-command
</pre>
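
On the worker node, the printed command is then run as-is; it looks similar to this (token and hash are placeholders):

<pre>
kubeadm join p10-api.k8s.ooo:6443 --token TOKEN --discovery-token-ca-cert-hash sha256:HASH
</pre>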

h3. (Re-)joining control plane nodes after creating the cluster

* We generate the token again
* We upload the certificates
* We need to combine/create the join command for the control plane node

Example session:

<pre>
% kubeadm token create --print-join-command
kubeadm join p10-api.k8s.ooo:6443 --token xmff4i.ABC --discovery-token-ca-cert-hash sha256:longhash 

% kubeadm init phase upload-certs --upload-certs
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
CERTKEY

# Then we use these two outputs on the joining node:

kubeadm join p10-api.k8s.ooo:6443 --token xmff4i.ABC --discovery-token-ca-cert-hash sha256:longhash --control-plane --certificate-key CERTKEY
</pre>

Commands to be used on a control plane node:

<pre>
kubeadm token create --print-join-command
kubeadm init phase upload-certs --upload-certs
</pre>

Commands to be used on the joining node:

<pre>
JOINCOMMAND --control-plane --certificate-key CERTKEY
</pre>

SEE ALSO

* https://stackoverflow.com/questions/63936268/how-to-generate-kubeadm-token-for-secondary-control-plane-nodes
* https://blog.scottlowe.org/2019/08/15/reconstructing-the-join-command-for-kubeadm/

h3. How to fix etcd does not start when rejoining a kubernetes cluster as a control plane

If during the above step etcd does not come up, @kubeadm join@ can hang as follows:

<pre>
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[check-etcd] Checking that the etcd cluster is healthy
error execution phase check-etcd: etcd cluster is not healthy: failed to dial endpoint https://[2a0a:e5c0:10:1:225:b3ff:fe20:378a]:2379 with maintenance client: context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
</pre>

Then the problem is likely that the etcd server is still a member of the cluster. We first need to remove it from the etcd cluster and then the join works.

To fix this we do:

* Find a working etcd pod
* Find the etcd members / member list
* Remove the etcd member that we want to re-join the cluster

<pre>
# Find the etcd pods
kubectl -n kube-system get pods -l component=etcd,tier=control-plane

# Get the list of etcd servers with the member id 
kubectl exec -n kube-system -ti ETCDPODNAME -- etcdctl --endpoints '[::1]:2379' --cacert /etc/kubernetes/pki/etcd/ca.crt --cert  /etc/kubernetes/pki/etcd/server.crt --key /etc/kubernetes/pki/etcd/server.key member list

# Remove the member
kubectl exec -n kube-system -ti ETCDPODNAME -- etcdctl --endpoints '[::1]:2379' --cacert /etc/kubernetes/pki/etcd/ca.crt --cert  /etc/kubernetes/pki/etcd/server.crt --key /etc/kubernetes/pki/etcd/server.key member remove MEMBERID
</pre>

Sample session:

<pre>
[10:48] line:~% kubectl -n kube-system get pods -l component=etcd,tier=control-plane
NAME            READY   STATUS    RESTARTS     AGE
etcd-server63   1/1     Running   0            3m11s
etcd-server65   1/1     Running   3            7d2h
etcd-server83   1/1     Running   8 (6d ago)   7d2h
[10:48] line:~% kubectl exec -n kube-system -ti etcd-server65 -- etcdctl --endpoints '[::1]:2379' --cacert /etc/kubernetes/pki/etcd/ca.crt --cert  /etc/kubernetes/pki/etcd/server.crt --key /etc/kubernetes/pki/etcd/server.key member list
356891cd676df6e4, started, server65, https://[2a0a:e5c0:10:1:225:b3ff:fe20:375c]:2380, https://[2a0a:e5c0:10:1:225:b3ff:fe20:375c]:2379, false
371b8a07185dee7e, started, server63, https://[2a0a:e5c0:10:1:225:b3ff:fe20:378a]:2380, https://[2a0a:e5c0:10:1:225:b3ff:fe20:378a]:2379, false
5942bc58307f8af9, started, server83, https://[2a0a:e5c0:10:1:3e4a:92ff:fe79:bb98]:2380, https://[2a0a:e5c0:10:1:3e4a:92ff:fe79:bb98]:2379, false

[10:48] line:~% kubectl exec -n kube-system -ti etcd-server65 -- etcdctl --endpoints '[::1]:2379' --cacert /etc/kubernetes/pki/etcd/ca.crt --cert  /etc/kubernetes/pki/etcd/server.crt --key /etc/kubernetes/pki/etcd/server.key member remove 371b8a07185dee7e
Member 371b8a07185dee7e removed from cluster e3c0805f592a8f77
</pre>

SEE ALSO

* We found the solution using https://stackoverflow.com/questions/67921552/re-installed-node-cannot-join-kubernetes-cluster

h4. Updating the members

1) Get an alive etcd pod:

<pre>
% kubectl -n kube-system get pods -l component=etcd,tier=control-plane
NAME            READY   STATUS    RESTARTS   AGE
etcd-server67   1/1     Running   1          185d
etcd-server69   1/1     Running   1          185d
etcd-server71   1/1     Running   2          185d
[20:57] sun:~% 
</pre>

2) Get the member list

* in this case via crictl, as the api does not work correctly anymore

<pre>
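# Sketch (assumption): run etcdctl inside the etcd container via crictl,
# analogous to the kubectl variant above
crictl ps --name etcd
crictl exec -it CONTAINERID etcdctl \
  --endpoints '[::1]:2379' \
  --cacert /etc/kubernetes/pki/etcd/ca.crt \
  --cert /etc/kubernetes/pki/etcd/server.crt \
  --key /etc/kubernetes/pki/etcd/server.key member list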
</pre>

3) Update the member:

<pre>
etcdctl member update MEMBERID --peer-urls=https://[...]:2380
</pre>

h3. Node labels (adding, showing, removing)

Listing the labels:

<pre>
kubectl get nodes --show-labels
</pre>

Adding labels:

<pre>
kubectl label nodes LIST-OF-NODES label1=value1
</pre>

For instance:

<pre>
kubectl label nodes router2 router3 hosttype=router
</pre>

Selecting nodes in pods:

<pre>
apiVersion: v1
kind: Pod
...
spec:
  nodeSelector:
    hosttype: router
</pre>

Removing labels by adding a minus at the end of the label name:

<pre>
kubectl label node <nodename> <labelname>-
</pre>

For instance:

<pre>
kubectl label nodes router2 router3 hosttype-
</pre>

SEE ALSO

* https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/
* https://stackoverflow.com/questions/34067979/how-to-delete-a-node-label-by-command-and-api

h3. Listing all pods on a node

<pre>
kubectl get pods --all-namespaces -o wide --field-selector spec.nodeName=serverXX
</pre>

Found on https://stackoverflow.com/questions/62000559/how-to-list-all-the-pods-running-in-a-particular-worker-node-by-executing-a-comm

h3. Hardware Maintenance using ungleich-hardware

Use the following manifest and replace the HOST with the actual host:

<pre>
apiVersion: v1
kind: Pod
metadata:
  name: ungleich-hardware-HOST
spec:
  containers:
  - name: ungleich-hardware
    image: ungleich/ungleich-hardware:0.0.5
    args:
    - sleep
    - "1000000"
    volumeMounts:
      - mountPath: /dev
        name: dev
    securityContext:
      privileged: true
  nodeSelector:
    kubernetes.io/hostname: "HOST"

  volumes:
    - name: dev
      hostPath:
        path: /dev
</pre>

Also see: [[The_ungleich_hardware_maintenance_guide]]

h3. Triggering a cronjob / creating a job from a cronjob

To test a cronjob, we can create a job from a cronjob:

<pre>
kubectl create job --from=cronjob/volume2-daily-backup volume2-manual
</pre>

This creates a job volume2-manual based on the cronjob volume2-daily-backup.

h3. su-ing into a user that has nologin shell set

Often users have nologin set as their shell inside the container. To be able to execute maintenance commands within the container, we can use @su -s /bin/sh@ like this:

<pre>
su -s /bin/sh -c '/path/to/your/script' testuser
</pre>

Found on https://serverfault.com/questions/351046/how-to-run-command-as-user-who-has-usr-sbin-nologin-as-shell

h3. How to print a secret value

Assuming you want the "password" item from a secret, use:

<pre>
kubectl get secret SECRETNAME -o jsonpath="{.data.password}" | base64 -d; echo ""
</pre>

h3. Fixing the "ImageInspectError"

If you see this problem:

<pre>
# kubectl get pods
NAME                                                       READY   STATUS                   RESTARTS   AGE
bird-router-server137-bird-767f65bb47-g4xsh                0/1     Init:ImageInspectError   0          77d
bird-router-server137-openvpn-server120-5c987b7ffb-cn9xf   0/1     ImageInspectError        1          159d
bird-router-server137-unbound-5c6f5d4bb6-cxbpr             0/1     ImageInspectError        1          159d
</pre>

Fixes so far:

* correct registries.conf (see the sketch below)
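
What exactly needs to be corrected depends on the host; as an assumption, on CRI-O hosts the file is @/etc/containers/registries.conf@ and unqualified image names must be resolvable, e.g.:

<pre>
# /etc/containers/registries.conf (TOML) -- a sketch, not the verified fix
unqualified-search-registries = ["docker.io"]
</pre>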

h3. Automatic cleanup of images

* options to kubelet

<pre>
  --image-gc-high-threshold=90: The percent of disk usage after which image garbage collection is always run. Default: 90%
  --image-gc-low-threshold=80: The percent of disk usage before which image garbage collection is never run. Lowest disk usage to garbage collect to. Default: 80%
</pre>

h3. How to upgrade a kubernetes cluster

h4. General

* Should be done every X months to stay up-to-date
** X probably something like 3-6
* kubeadm based clusters
* Needs specific kubeadm versions for upgrade
* Follow instructions on https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/
* Finding releases: https://github.com/kubernetes/kubernetes/tree/master/CHANGELOG

h4. Getting a specific kubeadm or kubelet version

<pre>
RELEASE=v1.22.17
RELEASE=v1.23.17
RELEASE=v1.24.9
RELEASE=v1.25.9
RELEASE=v1.26.6
RELEASE=v1.27.2

ARCH=amd64

curl -L --remote-name-all https://dl.k8s.io/release/${RELEASE}/bin/linux/${ARCH}/{kubeadm,kubelet}
chmod u+x kubeadm kubelet
</pre>

h4. Steps

* kubeadm upgrade plan
** On one control plane node
* kubeadm upgrade apply vXX.YY.ZZ
** On one control plane node
* kubeadm upgrade node
** On all other control plane nodes
** On all worker nodes afterwards

Repeat for all control plane nodes. Then upgrade the kubelet on all other nodes via the package manager (see the sketch below).
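
A per-node sketch of that loop (assuming the kubeadm/kubelet binaries were fetched as shown above; the service manager differs per OS):

<pre>
kubectl drain serverXX --ignore-daemonsets --delete-emptydir-data

# on serverXX itself:
./kubeadm upgrade node
# replace the kubelet binary or upgrade it via the package manager,
# then restart it, e.g. "rc-service kubelet restart" (OpenRC)
# or "systemctl restart kubelet" (systemd)

kubectl uncordon serverXX
</pre>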

h4. Upgrading to 1.22.17

* https://v1-22.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/
* Need to create a kubeadm config map
** f.i. using the following
** @/usr/local/bin/kubeadm-v1.22.17   upgrade --config kubeadm.yaml --ignore-preflight-errors=CoreDNSUnsupportedPlugins,CoreDNSMigration apply -y v1.22.17@
* Done for p6 on 2023-10-04

h4. Upgrading to 1.23.17

* https://v1-23.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/
* No special notes
* Done for p6 on 2023-10-04

h4. Upgrading to 1.24.17

* https://v1-24.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/
* No special notes
* Done for p6 on 2023-10-04

h4. Upgrading to 1.25.14

* https://v1-25.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/
* No special notes
* Done for p6 on 2023-10-04

h4. Upgrading to 1.26.9

* https://v1-26.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/
* No special notes
* Done for p6 on 2023-10-04

h4. Upgrading to 1.27

* https://v1-27.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/
* kubelet will not start anymore
* reason: @"command failed" err="failed to parse kubelet flag: unknown flag: --container-runtime"@
* /var/lib/kubelet/kubeadm-flags.env contains that parameter
* remove it, start kubelet

h4. Upgrading to 1.28

* https://v1-28.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/

h4. Upgrading to 1.29

* Done for many clusters around 2024-01-10
* Unsure if it was properly released
* https://v1-29.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/

h4. Upgrading to 1.31

* Cluster needs to be updated FIRST, before kubelet/the OS

Otherwise you run into errors in the pod like this:

<pre>
  Warning  Failed     11s (x3 over 12s)  kubelet            Error: services have not yet been read at least once, cannot construct envvars
</pre>

And the resulting pod state is:

<pre>
Init:CreateContainerConfigError
</pre>

h4. Upgrade to crio 1.27: missing crun

Error message:

<pre>
level=fatal msg="validating runtime config: runtime validation: \"crun\" not found in $PATH: exec: \"crun\": executable file not found in $PATH"
</pre>

Fix:

<pre>
apk add crun
</pre>

h2. Reference CNI

* Mainly "stupid", but effective plugins
* Main documentation on https://www.cni.dev/plugins/current/
* Plugins
** bridge
*** Can create the bridge on the host
*** But seems not to be able to add host interfaces to it as well
*** Has support for vlan tags
** vlan
*** creates vlan tagged sub interface on the host
*** "It's a 1:1 mapping (i.e. no bridge in between)":https://github.com/k8snetworkplumbingwg/multus-cni/issues/569
** host-device
*** moves the interface from the host into the container
*** very easy for physical connections to containers
** ipvlan
*** "virtualisation" of a host device
*** routing based on IP
*** Same MAC for everyone
*** Cannot reach the master interface
** macvlan
*** With mac addresses
*** Supports various modes (to be checked)
** ptp ("point to point")
*** Creates a host device and connects it to the container
** win*
*** Windows implementations

h2. Calico CNI

h3. Calico Installation

* We install "calico using helm":https://docs.projectcalico.org/getting-started/kubernetes/helm
* This has the following advantages:
** Easy to upgrade
** Does not require the OS to configure IPv6/dual stack settings, as the tigera operator figures out things on its own

Usually plain calico can be installed directly using:

<pre>
VERSION=v3.25.0

helm repo add projectcalico https://docs.projectcalico.org/charts
helm repo update
helm upgrade --install --namespace tigera calico projectcalico/tigera-operator --version $VERSION --create-namespace
</pre>

* Check the tags on https://github.com/projectcalico/calico/tags for the latest release

h3. Installing calicoctl

* General installation instructions, including binary download: https://projectcalico.docs.tigera.io/maintenance/clis/calicoctl/install

To be able to manage and configure calico, we need to "install calicoctl (we choose the version as a pod)":https://docs.projectcalico.org/getting-started/clis/calicoctl/install#install-calicoctl-as-a-kubernetes-pod

<pre>
kubectl apply -f https://docs.projectcalico.org/manifests/calicoctl.yaml
</pre>

Or version specific:

<pre>
kubectl apply -f https://github.com/projectcalico/calico/blob/v3.20.4/manifests/calicoctl.yaml

# For 3.22
kubectl apply -f https://projectcalico.docs.tigera.io/archive/v3.22/manifests/calicoctl.yaml
</pre>

And making it more easily accessible via an alias:

<pre>
alias calicoctl="kubectl exec -i -n kube-system calicoctl -- /calicoctl"
</pre>

h3. Calico configuration

By default our k8s clusters "BGP peer":https://docs.projectcalico.org/networking/bgp with an upstream router to propagate podcidr and servicecidr.

Default settings in our infrastructure:

* We use a full-mesh using the @nodeToNodeMeshEnabled: true@ option
* We keep the original next hop so that *only* the server with the pod is announcing it (instead of ecmp)
* We use private ASNs for k8s clusters
* We do *not* use any overlay

After installing calico and calicoctl the last step of the installation is usually:

<pre>
calicoctl create -f - < calico-bgp.yaml
</pre>

A sample BGP configuration:

<pre>
---
apiVersion: projectcalico.org/v3
kind: BGPConfiguration
metadata:
  name: default
spec:
  logSeverityScreen: Info
  nodeToNodeMeshEnabled: true
  asNumber: 65534
  serviceClusterIPs:
  - cidr: 2a0a:e5c0:10:3::/108
  serviceExternalIPs:
  - cidr: 2a0a:e5c0:10:3::/108
---
apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  name: router1-place10
spec:
  peerIP: 2a0a:e5c0:10:1::50
  asNumber: 213081
  keepOriginalNextHop: true
</pre>

h2. Cilium CNI (experimental)

h3. Status

*NO WORKING CILIUM CONFIGURATION FOR IPV6 only modes*

h3. Latest error

It seems cilium does not run on IPv6 only hosts:

<pre>
level=info msg="Validating configured node address ranges" subsys=daemon
level=fatal msg="postinit failed" error="external IPv4 node address could not be derived, please configure via --ipv4-node" subsys=daemon
level=info msg="Starting IP identity watcher" subsys=ipcache
</pre>

It crashes after that log entry.

h3. BGP configuration

* The cilium-operator will not start without a correct configmap being present beforehand (see error message below)
* Creating the bgp config beforehand as a configmap is thus required.

The error one gets without the configmap present:

Pods are hanging with:

<pre>
cilium-bpqm6                       0/1     Init:0/4            0             9s
cilium-operator-5947d94f7f-5bmh2   0/1     ContainerCreating   0             9s
</pre>

The error message in the cilium-operator is:

<pre>
Events:
  Type     Reason       Age                From               Message
  ----     ------       ----               ----               -------
  Normal   Scheduled    80s                default-scheduler  Successfully assigned kube-system/cilium-operator-5947d94f7f-lqcsp to server56
  Warning  FailedMount  16s (x8 over 80s)  kubelet            MountVolume.SetUp failed for volume "bgp-config-path" : configmap "bgp-config" not found
</pre>

A correct bgp config looks like this:

<pre>
apiVersion: v1
kind: ConfigMap
metadata:
  name: bgp-config
  namespace: kube-system
data:
  config.yaml: |
    peers:
      - peer-address: 2a0a:e5c0::46
        peer-asn: 209898
        my-asn: 65533
      - peer-address: 2a0a:e5c0::47
        peer-asn: 209898
        my-asn: 65533
    address-pools:
      - name: default
        protocol: bgp
        addresses:
          - 2a0a:e5c0:0:14::/64
</pre>

h3. Installation

Adding the repo:

<pre>
helm repo add cilium https://helm.cilium.io/
helm repo update
</pre>

Installing + configuring cilium:

<pre>
ipv6pool=2a0a:e5c0:0:14::/112

version=1.12.2

helm upgrade --install cilium cilium/cilium --version $version \
  --namespace kube-system \
  --set ipv4.enabled=false \
  --set ipv6.enabled=true \
  --set enableIPv6Masquerade=false \
  --set bgpControlPlane.enabled=true 

#  --set ipam.operator.clusterPoolIPv6PodCIDRList=$ipv6pool

# Old style bgp?
#   --set bgp.enabled=true --set bgp.announce.podCIDR=true \

# Show possible configuration options
helm show values cilium/cilium
</pre>

Using a /64 for ipam.operator.clusterPoolIPv6PodCIDRList fails with:

<pre>
level=fatal msg="Unable to init cluster-pool allocator" error="unable to initialize IPv6 allocator New CIDR set failed; the node CIDR size is too big" subsys=cilium-operator-generic
</pre>

See also https://github.com/cilium/cilium/issues/20756

A /112 seems to actually work.

h3. Kernel modules

Cilium requires the following modules to be loaded on the host (not loaded by default):

<pre>
modprobe ip6table_raw
modprobe ip6table_filter
</pre>

h3. Interesting helm flags

* autoDirectNodeRoutes
* bgpControlPlane.enabled = true

h3. SEE ALSO

* https://docs.cilium.io/en/v1.12/helm-reference/

h2. Multus

* https://github.com/k8snetworkplumbingwg/multus-cni
* Installing a deployment w/ CRDs (a network definition sketch follows below)

<pre>
VERSION=v4.0.1

kubectl apply -f https://raw.githubusercontent.com/k8snetworkplumbingwg/multus-cni/${VERSION}/deployments/multus-daemonset-crio.yml
</pre>
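
Networks are then defined as @NetworkAttachmentDefinition@ objects and referenced from pods via the @k8s.v1.cni.cncf.io/networks@ annotation. A minimal sketch using the macvlan plugin from the reference CNI list above (interface name and IPAM type are example values):

<pre>
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-eth0
spec:
  config: '{
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "eth0",
      "ipam": { "type": "static" }
    }'
</pre>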

h2. ArgoCD

h3. Argocd Installation

* See https://argo-cd.readthedocs.io/en/stable/

As there is no configuration management present yet, argocd is installed using

<pre>
kubectl create namespace argocd

# OR: latest stable
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

# OR Specific Version
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/v2.3.2/manifests/install.yaml
</pre>

h3. Get the argocd credentials

<pre>
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d; echo ""
</pre>

h3. Accessing argocd

In regular IPv6 clusters:

* Navigate to https://argocd-server.argocd.CLUSTERDOMAIN

In legacy IPv4 clusters:

<pre>
kubectl --namespace argocd port-forward svc/argocd-server 8080:80
</pre>

* Navigate to https://localhost:8080

h3. Using the argocd webhook to trigger changes

* To trigger changes, POST JSON to https://argocd.example.com/api/webhook (see the sketch below)
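
A hedged curl sketch, assuming a github/gitea style push payload (argocd matches the repository URL against the Application's repoURL):

<pre>
curl -X POST \
  -H "Content-Type: application/json" \
  -H "X-GitHub-Event: push" \
  -d '{"ref": "refs/heads/master", "repository": {"html_url": "https://code.ungleich.ch/ungleich-intern/k8s-config"}}' \
  https://argocd.example.com/api/webhook
</pre>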

h3. Deploying an application

* Applications are deployed via git towards gitea (code.ungleich.ch) and then pulled by argo
* Always include the *redmine-url* pointing to the (customer) ticket
** Also add the support-url if it exists

Application sample

<pre>
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: gitea-CUSTOMER
  namespace: argocd
spec:
  destination:
    namespace: default
    server: 'https://kubernetes.default.svc'
  source:
    path: apps/prod/gitea
    repoURL: 'https://code.ungleich.ch/ungleich-intern/k8s-config.git'
    targetRevision: HEAD
    helm:
      parameters:
        - name: storage.data.storageClass
          value: rook-ceph-block-hdd
        - name: storage.data.size
          value: 200Gi
        - name: storage.db.storageClass
          value: rook-ceph-block-ssd
        - name: storage.db.size
          value: 10Gi
        - name: storage.letsencrypt.storageClass
          value: rook-ceph-block-hdd
        - name: storage.letsencrypt.size
          value: 50Mi
        - name: letsencryptStaging
          value: 'no'
        - name: fqdn
          value: 'code.verua.online'
  project: default
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
  info:
    - name: 'redmine-url'
      value: 'https://redmine.ungleich.ch/issues/ISSUEID'
    - name: 'support-url'
      value: 'https://support.ungleich.ch/Ticket/Display.html?id=TICKETID'
</pre>

h2. Helm related operations and conventions

We use helm charts extensively.

* In production, they are managed via argocd
* In development, helm charts can be developed and deployed manually using the helm utility.

h3. Installing a helm chart

One can use the usual pattern of

<pre>
helm install <releasename> <chartdirectory>
</pre>

However, when testing helm charts, you often want to reinstall/update. The following pattern is "better", because it allows you to reinstall if the chart is already installed:

<pre>
helm upgrade --install <releasename> <chartdirectory>
</pre>

h3. Naming services and deployments in helm charts [Application labels]

* We always have {{ .Release.Name }} to identify the current "instance"
* Deployments:
** use @app: <what it is>@, f.i. @app: nginx@, @app: postgres@, ... (see the sketch below)
* See more about standard labels on
** https://kubernetes.io/docs/concepts/overview/working-with-objects/common-labels/
** https://helm.sh/docs/chart_best_practices/labels/
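
A minimal sketch of this convention in a deployment template (hypothetical nginx example):

<pre>
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-nginx
  labels:
    app: nginx
...
</pre>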

h3. Show all versions of a helm chart

<pre>
helm search repo -l repo/chart
</pre>

For example:

<pre>
% helm search repo -l projectcalico/tigera-operator 
NAME                         	CHART VERSION	APP VERSION	DESCRIPTION                            
projectcalico/tigera-operator	v3.23.3      	v3.23.3    	Installs the Tigera operator for Calico
projectcalico/tigera-operator	v3.23.2      	v3.23.2    	Installs the Tigera operator for Calico
....
</pre>

h3. Show possible values of a chart

<pre>
helm show values <repo/chart>
</pre>

Example:

<pre>
helm show values ingress-nginx/ingress-nginx
</pre>

h3. Show all possible charts in a repo

<pre>
helm search repo REPO
</pre>

h3. Download a chart

For instance for checking it out locally. Use:

<pre>
helm pull <repo/chart>
</pre>

h2. Rook + Ceph

h3. Installation

* Usually directly via argocd

h3. Executing ceph commands

Using the ceph-tools pod as follows:

<pre>
kubectl exec -n rook-ceph -ti $(kubectl -n rook-ceph get pods -l app=rook-ceph-tools -o jsonpath='{.items[*].metadata.name}') -- ceph -s
</pre>

h3. Inspecting the logs of a specific server

<pre>
# Get the related pods
kubectl -n rook-ceph get pods -l app=rook-ceph-osd-prepare 
...

# Inspect the logs of a specific pod
kubectl -n rook-ceph logs -f rook-ceph-osd-prepare-server23--1-444qx
</pre>

h3. Inspecting the logs of the rook-ceph-operator

<pre>
kubectl -n rook-ceph logs -f -l app=rook-ceph-operator
</pre>

h3. (Temporarily) Disabling the rook-operator

* first disable the sync in argocd
* then scale it down

<pre>
kubectl -n rook-ceph scale deployment rook-ceph-operator --replicas=0
</pre>

When done with the work/maintenance, re-enable sync in argocd.
The following command is thus strictly speaking not required, as argocd will fix it on its own:

<pre>
kubectl -n rook-ceph scale deployment rook-ceph-operator --replicas=1
</pre>

h3. Restarting the rook operator

<pre>
kubectl -n rook-ceph delete pods -l app=rook-ceph-operator
</pre>

h3. Triggering server prepare / adding new osds

The rook-ceph-operator triggers/watches/creates pods to maintain hosts. To trigger a full "re scan", simply delete that pod:

<pre>
kubectl -n rook-ceph delete pods -l app=rook-ceph-operator
</pre>

This will cause all the @rook-ceph-osd-prepare-..@ jobs to be recreated and thus OSDs to be created, if new disks have been added.

h3. Removing an OSD

* See "Ceph OSD Management":https://rook.io/docs/rook/v1.7/ceph-osd-mgmt.html
* More specifically: https://github.com/rook/rook/blob/release-1.7/cluster/examples/kubernetes/ceph/osd-purge.yaml
* Then delete the related deployment

Set the OSD id in the osd-purge.yaml and apply it. The OSD should be down before.

<pre>
apiVersion: batch/v1
kind: Job
metadata:
  name: rook-ceph-purge-osd
  namespace: rook-ceph # namespace:cluster
  labels:
    app: rook-ceph-purge-osd
spec:
  template:
    metadata:
      labels:
        app: rook-ceph-purge-osd
    spec:
      serviceAccountName: rook-ceph-purge-osd
      containers:
        - name: osd-removal
          image: rook/ceph:master
          # TODO: Insert the OSD ID in the last parameter that is to be removed
          # The OSD IDs are a comma-separated list. For example: "0" or "0,2".
          # If you want to preserve the OSD PVCs, set `--preserve-pvc true`.
          #
          # A --force-osd-removal option is available if the OSD should be destroyed even though the
          # removal could lead to data loss.
          args:
            - "ceph"
            - "osd"
            - "remove"
            - "--preserve-pvc"
            - "false"
            - "--force-osd-removal"
            - "false"
            - "--osd-ids"
            - "SETTHEOSDIDHERE"
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: ROOK_MON_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  key: data
                  name: rook-ceph-mon-endpoints
            - name: ROOK_CEPH_USERNAME
              valueFrom:
                secretKeyRef:
                  key: ceph-username
                  name: rook-ceph-mon
            - name: ROOK_CEPH_SECRET
              valueFrom:
                secretKeyRef:
                  key: ceph-secret
                  name: rook-ceph-mon
            - name: ROOK_CONFIG_DIR
              value: /var/lib/rook
            - name: ROOK_CEPH_CONFIG_OVERRIDE
              value: /etc/rook/config/override.conf
            - name: ROOK_FSID
              valueFrom:
                secretKeyRef:
                  key: fsid
                  name: rook-ceph-mon
            - name: ROOK_LOG_LEVEL
              value: DEBUG
          volumeMounts:
            - mountPath: /etc/ceph
              name: ceph-conf-emptydir
            - mountPath: /var/lib/rook
              name: rook-config
      volumes:
        - emptyDir: {}
          name: ceph-conf-emptydir
        - emptyDir: {}
          name: rook-config
      restartPolicy: Never
</pre>

Deleting the deployment:

<pre>
[18:05] bridge:~% kubectl -n rook-ceph delete deployment rook-ceph-osd-6
deployment.apps "rook-ceph-osd-6" deleted
</pre>

h3. Placement of mons/osds/etc.

See https://rook.io/docs/rook/v1.11/CRDs/Cluster/ceph-cluster-crd/#placement-configuration-settings

h3. Setting up and managing S3 object storage

h4. Endpoints

| Location | Endpoint |
| p5 | https://s3.k8s.place5.ungleich.ch |

h4. Setting up a storage class

* This will store the buckets of a specific customer

Similar to this:

<pre>
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ungleich-archive-bucket-sc
  namespace: rook-ceph
provisioner: rook-ceph.ceph.rook.io/bucket
reclaimPolicy: Delete
parameters:
  objectStoreName: place5
  objectStoreNamespace: rook-ceph
</pre>

h4. Setting up the Bucket

Similar to this:

<pre>
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: ungleich-archive-bucket-claim
  namespace: rook-ceph
spec:
  generateBucketName: ungleich-archive-ceph-bkt
  storageClassName: ungleich-archive-bucket-sc
  additionalConfig:
    # To set for quota for OBC
    #maxObjects: "1000"
    maxSize: "100G"
</pre>

* See also: https://rook.io/docs/rook/latest-release/Storage-Configuration/Object-Storage-RGW/ceph-object-bucket-claim/#obc-custom-resource

h4. Getting the credentials for the bucket

* Get "public" information from the configmap
* Get secret from the secret

<pre>
name=BUCKETNAME
endpoint=https://s3.k8s.place5.ungleich.ch

cm=$(kubectl -n rook-ceph get configmap -o yaml ${name}-bucket-claim)
sec=$(kubectl -n rook-ceph get secrets -o yaml ${name}-bucket-claim)
AWS_ACCESS_KEY_ID=$(echo "$sec" | yq .data.AWS_ACCESS_KEY_ID | base64 -d ; echo "")
AWS_SECRET_ACCESS_KEY=$(echo "$sec" | yq .data.AWS_SECRET_ACCESS_KEY | base64 -d ; echo "")

bucket_name=$(echo "$cm" | yq .data.BUCKET_NAME)
</pre>

h5. Access via s3cmd

<pre>
s3cmd --endpoint-url ${endpoint} --access_key=${AWS_ACCESS_KEY_ID} --secret_key=${AWS_SECRET_ACCESS_KEY} ls
</pre>

h5. Access via s4cmd

<pre>
s4cmd --endpoint-url ${endpoint} --access-key=${AWS_ACCESS_KEY_ID} --secret-key=${AWS_SECRET_ACCESS_KEY} ls
</pre>

h2. Ingress + Cert Manager

* We deploy "nginx-ingress":https://docs.nginx.com/nginx-ingress-controller/ to get an ingress
* We deploy "cert-manager":https://cert-manager.io/ to handle certificates
* We independently deploy @ClusterIssuer@ to allow the cert-manager app to deploy and the issuer to be created once the CRDs from cert manager are in place

h3. IPv4 reachability

The ingress is by default IPv6 only. To make it reachable from the IPv4 world, get its IPv6 address and configure a NAT64 mapping in Jool.

Steps:

h4. Get the ingress IPv6 address

Use @kubectl -n ingress-nginx get svc ingress-nginx-controller -o jsonpath='{.spec.clusterIP}'; echo ''@

Example:

<pre>
kubectl -n ingress-nginx get svc ingress-nginx-controller -o jsonpath='{.spec.clusterIP}'; echo ''
2a0a:e5c0:10:1b::ce11
</pre>

h4. Add NAT64 mapping

* Update the __dcl_jool_siit cdist type
* Record the two IPs (IPv6 and IPv4)
* Configure all routers (a hedged @jool_siit@ sketch follows below)
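
On the routers this boils down to an EAMT entry mapping the two recorded addresses; a hedged sketch with @jool_siit@ (assumption; addresses taken from the DNS example below):

<pre>
jool_siit eamt add 2a0a:e5c0:10:1b::ce11 147.78.194.23
</pre>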

h4. Add DNS record

To make the ingress usable as a CNAME destination, create an "ingress" DNS record, such as:

<pre>
; k8s ingress for dev
dev-ingress                 AAAA 2a0a:e5c0:10:1b::ce11
dev-ingress                 A 147.78.194.23
</pre>

h4. Add supporting wildcard DNS

If you plan to add various sites under a specific domain, we can add a wildcard DNS entry, such as *.k8s-dev.django-hosting.ch:

<pre>
*.k8s-dev         CNAME dev-ingress.ungleich.ch.
</pre>

h2. Harbor

* We use "Harbor":https://goharbor.io/ as an image registry for our own images. Internal app reference: apps/prod/harbor.
* The admin password is in the password store, it is Harbor12345 by default
* At the moment harbor only authenticates against the internal ldap tree

h3. LDAP configuration

* The url needs to be ldaps://...
* uid = uid
* rest standard

h2. Monitoring / Prometheus

* Via "kube-prometheus":https://github.com/prometheus-operator/kube-prometheus/

Access via ...

* http://prometheus-k8s.monitoring.svc:9090
* http://grafana.monitoring.svc:3000
* http://alertmanager.monitoring.svc:9093

h3. Prometheus Options

* "helm/kube-prometheus-stack":https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack
** Includes dashboards and co.
* "manifest based kube-prometheus":https://github.com/prometheus-operator/kube-prometheus
** Includes dashboards and co.
* "Prometheus Operator (mainly CRD manifests)":https://github.com/prometheus-operator/prometheus-operator

h3. Grafana default password

* If not changed: admin / @prom-operator@
** Can be changed via:

<pre>
    helm:
      values: |-
        configurations: |-
          grafana:
            adminPassword: "..."
</pre>

h2. Nextcloud

h3. How to get the nextcloud credentials

* The initial username is set to "nextcloud"
* The password is autogenerated and saved in a kubernetes secret

<pre>
kubectl get secret RELEASENAME-nextcloud -o jsonpath="{.data.PASSWORD}" | base64 -d; echo ""
</pre>

h3. How to fix "Access through untrusted domain"

* Nextcloud stores the initial domain configuration
* If the FQDN is changed, it will show the error message "Access through untrusted domain"
* To fix, edit /var/www/html/config/config.php and correct the domain (see the sketch below)
* Then delete the pods
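
The relevant part of @config.php@ is the @trusted_domains@ array; a sketch (the FQDN is an example):

<pre>
'trusted_domains' =>
  array (
    0 => 'nextcloud.example.com',
  ),
</pre>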

h3. Running occ commands inside the nextcloud container

* Find the pod in the right namespace

Exec:

<pre>
su www-data -s /bin/sh -c ./occ
</pre>

* -s /bin/sh is needed as the default shell is set to /bin/false

h4. Rescanning files

* If files have been added without nextcloud's knowledge

<pre>
su www-data -s /bin/sh -c "./occ files:scan --all"
</pre>

h2. Sealed Secrets

* Install kubeseal

<pre>
KUBESEAL_VERSION='0.23.0'
wget "https://github.com/bitnami-labs/sealed-secrets/releases/download/v${KUBESEAL_VERSION:?}/kubeseal-${KUBESEAL_VERSION:?}-linux-amd64.tar.gz" 
tar -xvzf kubeseal-${KUBESEAL_VERSION:?}-linux-amd64.tar.gz kubeseal
sudo install -m 755 kubeseal /usr/local/bin/kubeseal
</pre>

* Fetch the public certificate for sealed-secrets

<pre>
kubeseal --fetch-cert > /tmp/public-key-cert.pem
</pre>

* Create the secret

<pre>
# Example:
apiVersion: v1
kind: Secret
metadata:
  name: Release.Name-postgres-config
  annotations:
    secret-generator.v1.mittwald.de/autogenerate: POSTGRES_PASSWORD
    hosting: Release.Name
  labels:
    app.kubernetes.io/instance: Release.Name
    app.kubernetes.io/component: postgres
stringData:
  POSTGRES_USER: postgresUser
  POSTGRES_DB: postgresDBName
  POSTGRES_INITDB_ARGS: "--no-locale --encoding=UTF8"
</pre>

* Convert secret.yaml to sealed-secret.yaml

<pre>
kubeseal -n <namespace> --cert=/tmp/public-key-cert.pem --format=yaml < ./secret.yaml  > ./sealed-secret.yaml
</pre>

* Use the sealed-secret.yaml in the helm-chart directory

* See tickets #11989, #12120

h2. Infrastructure versions

h3. ungleich kubernetes infrastructure v5 (2021-10)

Clusters are configured / setup in this order:

* Bootstrap via kubeadm
* "Networking via calico + BGP (non ECMP) using helm":https://docs.projectcalico.org/getting-started/kubernetes/helm
* "ArgoCD for CD":https://argo-cd.readthedocs.io/en/stable/
** "rook for storage via argocd":https://rook.io/
** haproxy as an IPv4-to-IPv6 proxy into the IPv6-only clusters, via argocd
** "kubernetes-secret-generator for in cluster secrets":https://github.com/mittwald/kubernetes-secret-generator
** "ungleich-certbot managing certs and nginx":https://hub.docker.com/repository/docker/ungleich/ungleich-certbot

h3. ungleich kubernetes infrastructure v4 (2021-09)

* rook is configured via manifests instead of using the rook-ceph-cluster helm chart
* The rook operator is still being installed via helm

h3. ungleich kubernetes infrastructure v3 (2021-07)

* rook is now installed via helm via argocd instead of directly via manifests

h3. ungleich kubernetes infrastructure v2 (2021-05)

* Replaced fluxv2 from ungleich k8s v1 with argocd
** argocd can apply helm templates directly without needing to go through Chart releases
* We are also using argoflow for build flows
* Planned to add "kaniko":https://github.com/GoogleContainerTools/kaniko for image building

h3. ungleich kubernetes infrastructure v1 (2021-01)

We are using the following components:

* "Calico as a CNI":https://www.projectcalico.org/ with BGP, IPv6 only, no encapsulation
** Needed for basic networking
* "kubernetes-secret-generator":https://github.com/mittwald/kubernetes-secret-generator for creating secrets
** Needed so that secrets are not stored in the git repository, but only in the cluster
* "ungleich-certbot":https://hub.docker.com/repository/docker/ungleich/ungleich-certbot
** Needed to get letsencrypt certificates for services
* "rook with ceph rbd + cephfs":https://rook.io/ for storage
** rbd for almost everything, *ReadWriteOnce*
** cephfs for smaller things, multi access *ReadWriteMany*
** Needed for providing persistent storage
* "flux v2":https://fluxcd.io/
** Needed to manage resources automatically