
1 22 Nico Schottelius
h1. The ungleich kubernetes infrastructure and ungleich kubernetes manual
2 1 Nico Schottelius
3 3 Nico Schottelius
{{toc}}
4
5 1 Nico Schottelius
h2. Status
6
7 211 Nico Schottelius
This document is **production**.
8
This document is the ungleich kubernetes infrastructure overview as well as the ungleich kubernetes manual.
9 1 Nico Schottelius
10 10 Nico Schottelius
h2. k8s clusters
11
12 123 Nico Schottelius
| Cluster            | Purpose/Setup     | Maintainer | Master(s)                     | argo                                                   | v4 http proxy | last verified |
13
| c0.k8s.ooo         | Dev               | -          | UNUSED                        |                                                        |               |    2021-10-05 |
14
| c1.k8s.ooo         | retired           |            | -                             |                                                        |               |    2022-03-15 |
15
| c2.k8s.ooo         | Dev p7 HW         | Nico       | server47 server53 server54    | "argo":https://argocd-server.argocd.svc.c2.k8s.ooo     |               |    2021-10-05 |
16
| c3.k8s.ooo         | retired           | -          | -                             |                                                        |               |    2021-10-05 |
17
| c4.k8s.ooo         | Dev2 p7 HW        | Jin-Guk    | server52 server53 server54    |                                                        |               |             - |
18
| c5.k8s.ooo         | retired           |            | -                             |                                                        |               |    2022-03-15 |
19
| c6.k8s.ooo         | Dev p6 VM Jin-Guk | Jin-Guk    |                               |                                                        |               |               |
20
| [[p5.k8s.ooo]]     | production        |            | server34 server36 server38    | "argo":https://argocd-server.argocd.svc.p5.k8s.ooo     | -             |               |
21
| [[p5-cow.k8s.ooo]] | production        | Nico       | server47 server51 server55    | "argo":https://argocd-server.argocd.svc.p5-cow.k8s.ooo |               |    2022-08-27 |
22
| [[p6.k8s.ooo]]     | production        |            | server67 server69 server71    | "argo":https://argocd-server.argocd.svc.p6.k8s.ooo     | 147.78.194.13 |    2021-10-05 |
23 184 Nico Schottelius
| [[p6-cow.k8s.ooo]] | production        |            | server134 server135 server136 | "argo":https://argocd-server.argocd.svc.p6in10.k8s.ooo | ?             |    2023-05-17 |
24 177 Nico Schottelius
| [[p10.k8s.ooo]]    | production        |            | server131 server132 server133 | "argo":https://argocd-server.argocd.svc.p10.k8s.ooo    | 147.78.194.12 |    2021-10-05 |
25 123 Nico Schottelius
| [[k8s.ge.nau.so]]  | development       |            | server107 server108 server109 | "argo":https://argocd-server.argocd.svc.k8s.ge.nau.so  |               |               |
26
| [[dev.k8s.ooo]]    | development       |            | server110 server111 server112 | "argo":https://argocd-server.argocd.svc.dev.k8s.ooo    | -             |    2022-07-08 |
27 164 Nico Schottelius
| [[r1r2p15k8sooo|r1.p15.k8s.ooo]] | production | Nico | server120 | | | 2022-10-30 |
28
| [[r1r2p15k8sooo|r2.p15.k8s.ooo]] | production | Nico | server121 | | | 2022-09-06 |
29 162 Nico Schottelius
| [[r1r2p10k8sooo|r1.p10.k8s.ooo]] | production | Nico | server122 | | | 2022-10-30 |
30
| [[r1r2p10k8sooo|r2.p10.k8s.ooo]] | production | Nico | server123 | | | 2022-10-15 |
31
| [[r1r2p5k8sooo|r1.p5.k8s.ooo]] | production | Nico | server137 | | | 2022-10-30 |
32
| [[r1r2p5k8sooo|r2.p5.k8s.ooo]] | production | Nico | server138 | | | 2022-10-30 |
33
| [[r1r2p6k8sooo|r1.p6.k8s.ooo]] | production | Nico | server139 | | | 2022-10-30 |
34
| [[r1r2p6k8sooo|r2.p6.k8s.ooo]] | production | Nico | server140 | | | 2022-10-30 |
35 21 Nico Schottelius
36 1 Nico Schottelius
h2. General architecture and components overview
37
38
* All k8s clusters are IPv6 only
39
* We use BGP peering to propagate podcidr and serviceCidr networks to our infrastructure
40
* The main public testing repository is "ungleich-k8s":https://code.ungleich.ch/ungleich-public/ungleich-k8s
41 18 Nico Schottelius
** Private configurations are found in the **k8s-config** repository
42 1 Nico Schottelius
43
h3. Cluster types
44
45 28 Nico Schottelius
| **Type/Feature**            | **Development**                | **Production**         |
46
| Min No. nodes               | 3 (1 master, 3 worker)         | 5 (3 master, 3 worker) |
47
| Recommended minimum         | 4 (dedicated master, 3 worker) | 8 (3 master, 5 worker) |
48
| Separation of control plane | optional                       | recommended            |
49
| Persistent storage          | required                       | required               |
50
| Number of storage monitors  | 3                              | 5                      |
51 1 Nico Schottelius
52 43 Nico Schottelius
h2. General k8s operations
53 1 Nico Schottelius
54 46 Nico Schottelius
h3. Cheat sheet / external great references
55
56
* "kubectl cheatsheet":https://kubernetes.io/docs/reference/kubectl/cheatsheet/
57
58 117 Nico Schottelius
h3. Allowing to schedule work on the control plane / removing node taints
59 69 Nico Schottelius
60
* Mostly for single node / test / development clusters
61
* Just remove the master taint as follows
62
63
<pre>
64
kubectl taint nodes --all node-role.kubernetes.io/master-
65 118 Nico Schottelius
kubectl taint nodes --all node-role.kubernetes.io/control-plane-
66 69 Nico Schottelius
</pre>
67 1 Nico Schottelius
68 117 Nico Schottelius
You can check the node taints using @kubectl describe node ...@
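
For example, to show only the taints of a specific node (serverXX is a placeholder):

<pre>
kubectl describe node serverXX | grep -A2 Taints
</pre>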
69 69 Nico Schottelius
70 208 Nico Schottelius
h3. Adding taints
71
72
* For instance to limit nodes to specific customers
73
74
<pre>
75
kubectl taint nodes serverXX customer=CUSTOMERNAME:NoSchedule
76
</pre>
77
78 44 Nico Schottelius
h3. Get the cluster admin.conf
79
80
* On the masters of each cluster you can find the file @/etc/kubernetes/admin.conf@
81
* To be able to administrate the cluster you can copy the admin.conf to your local machine
82
* Multi-cluster debugging becomes very easy if you name the config ~/cX-admin.conf (see the example below)
83
84
<pre>
85
% scp root@server47.place7.ungleich.ch:/etc/kubernetes/admin.conf ~/c2-admin.conf
86
% export KUBECONFIG=~/c2-admin.conf    
87
% kubectl get nodes
88
NAME       STATUS                     ROLES                  AGE   VERSION
89
server47   Ready                      control-plane,master   82d   v1.22.0
90
server48   Ready                      control-plane,master   82d   v1.22.0
91
server49   Ready                      <none>                 82d   v1.22.0
92
server50   Ready                      <none>                 82d   v1.22.0
93
server59   Ready                      control-plane,master   82d   v1.22.0
94
server60   Ready,SchedulingDisabled   <none>                 82d   v1.22.0
95
server61   Ready                      <none>                 82d   v1.22.0
96
server62   Ready                      <none>                 82d   v1.22.0               
97
</pre>
98
99 18 Nico Schottelius
h3. Installing a new k8s cluster
100 8 Nico Schottelius
101 9 Nico Schottelius
* Decide on the cluster name (usually *cX.k8s.ooo*), with X counting upwards
102 28 Nico Schottelius
** Using pXX.k8s.ooo for production clusters of placeXX
103 9 Nico Schottelius
* Use cdist to configure the nodes with requirements like crio
104
* Decide between single or multi node control plane setups (see below)
105 28 Nico Schottelius
** A single control plane is suitable for development clusters
106 9 Nico Schottelius
107 28 Nico Schottelius
Typical init procedure:
108 9 Nico Schottelius
109 206 Nico Schottelius
h4. Single control plane:
110
111
<pre>
112
kubeadm init --config bootstrap/XXX/kubeadm.yaml
113
</pre>
114
115
h4. Multi control plane (HA):
116
117
<pre>
118
kubeadm init --config bootstrap/XXX/kubeadm.yaml --upload-certs
119
</pre>
120
121 10 Nico Schottelius
122 29 Nico Schottelius
h3. Deleting a pod that is hanging in terminating state
123
124
<pre>
125
kubectl delete pod <PODNAME> --grace-period=0 --force --namespace <NAMESPACE>
126
</pre>
127
128
(from https://stackoverflow.com/questions/35453792/pods-stuck-in-terminating-status)
129
130 42 Nico Schottelius
h3. Listing nodes of a cluster
131
132
<pre>
133
[15:05] bridge:~% kubectl get nodes
134
NAME       STATUS   ROLES                  AGE   VERSION
135
server22   Ready    <none>                 52d   v1.22.0
136
server23   Ready    <none>                 52d   v1.22.2
137
server24   Ready    <none>                 52d   v1.22.0
138
server25   Ready    <none>                 52d   v1.22.0
139
server26   Ready    <none>                 52d   v1.22.0
140
server27   Ready    <none>                 52d   v1.22.0
141
server63   Ready    control-plane,master   52d   v1.22.0
142
server64   Ready    <none>                 52d   v1.22.0
143
server65   Ready    control-plane,master   52d   v1.22.0
144
server66   Ready    <none>                 52d   v1.22.0
145
server83   Ready    control-plane,master   52d   v1.22.0
146
server84   Ready    <none>                 52d   v1.22.0
147
server85   Ready    <none>                 52d   v1.22.0
148
server86   Ready    <none>                 52d   v1.22.0
149
</pre>
150
151 41 Nico Schottelius
h3. Removing / draining a node
152
153
Usually @kubectl drain server@ should do the job, but sometimes we need to be more aggressive:
154
155 1 Nico Schottelius
<pre>
156 103 Nico Schottelius
kubectl drain --delete-emptydir-data --ignore-daemonsets serverXX
157 42 Nico Schottelius
</pre>
158
159
h3. Re-adding a node after draining
160
161
<pre>
162
kubectl uncordon serverXX
163 1 Nico Schottelius
</pre>
164 43 Nico Schottelius
165 50 Nico Schottelius
h3. (Re-)joining worker nodes after creating the cluster
166 49 Nico Schottelius
167
* We need to have an up-to-date token
168
* We use different join commands for the workers and control plane nodes
169
170
Generating the join command on an existing control plane node:
171
172
<pre>
173
kubeadm token create --print-join-command
174
</pre>
175
176 50 Nico Schottelius
h3. (Re-)joining control plane nodes after creating the cluster
177 1 Nico Schottelius
178 50 Nico Schottelius
* We generate the token again
179
* We upload the certificates
180
* We need to combine/create the join command for the control plane node
181
182
Example session:
183
184
<pre>
185
% kubeadm token create --print-join-command
186
kubeadm join p10-api.k8s.ooo:6443 --token xmff4i.ABC --discovery-token-ca-cert-hash sha256:longhash 
187
188
% kubeadm init phase upload-certs --upload-certs
189
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
190
[upload-certs] Using certificate key:
191
CERTKEY
192
193
# Then we use these two outputs on the joining node:
194
195
kubeadm join p10-api.k8s.ooo:6443 --token xmff4i.ABC --discovery-token-ca-cert-hash sha256:longhash --control-plane --certificate-key CERTKEY
196
</pre>
197
198
Commands to be used on a control plane node:
199
200
<pre>
201
kubeadm token create --print-join-command
202
kubeadm init phase upload-certs --upload-certs
203
</pre>
204
205
Commands to be used on the joining node:
206
207
<pre>
208
JOINCOMMAND --control-plane --certificate-key CERTKEY
209
</pre>
210 49 Nico Schottelius
211 51 Nico Schottelius
SEE ALSO
212
213
* https://stackoverflow.com/questions/63936268/how-to-generate-kubeadm-token-for-secondary-control-plane-nodes
214
* https://blog.scottlowe.org/2019/08/15/reconstructing-the-join-command-for-kubeadm/
215
216 53 Nico Schottelius
h3. How to fix etcd does not start when rejoining a kubernetes cluster as a control plane
217 52 Nico Schottelius
218
If during the above step etcd does not come up, @kubeadm join@ can hang as follows:
219
220
<pre>
221
[control-plane] Creating static Pod manifest for "kube-apiserver"                                                              
222
[control-plane] Creating static Pod manifest for "kube-controller-manager"                                                     
223
[control-plane] Creating static Pod manifest for "kube-scheduler"                                                              
224
[check-etcd] Checking that the etcd cluster is healthy                                                                         
225
error execution phase check-etcd: etcd cluster is not healthy: failed to dial endpoint https://[2a0a:e5c0:10:1:225:b3ff:fe20:37
226
8a]:2379 with maintenance client: context deadline exceeded                                                                    
227
To see the stack trace of this error execute with --v=5 or higher         
228
</pre>
229
230
Then the problem is likely that the etcd server is still a member of the cluster. We first need to remove it from the etcd cluster and then the join works.
231
232
To fix this we do:
233
234
* Find a working etcd pod
235
* Find the etcd members / member list
236
* Remove the etcd member that should re-join the cluster
237
238
239
<pre>
240
# Find the etcd pods
241
kubectl -n kube-system get pods -l component=etcd,tier=control-plane
242
243
# Get the list of etcd servers with the member id 
244
kubectl exec -n kube-system -ti ETCDPODNAME -- etcdctl --endpoints '[::1]:2379' --cacert /etc/kubernetes/pki/etcd/ca.crt --cert  /etc/kubernetes/pki/etcd/server.crt --key /etc/kubernetes/pki/etcd/server.key member list
245
246
# Remove the member
247
kubectl exec -n kube-system -ti ETCDPODNAME -- etcdctl --endpoints '[::1]:2379' --cacert /etc/kubernetes/pki/etcd/ca.crt --cert  /etc/kubernetes/pki/etcd/server.crt --key /etc/kubernetes/pki/etcd/server.key member remove MEMBERID
248
</pre>
249
250
Sample session:
251
252
<pre>
253
[10:48] line:~% kubectl -n kube-system get pods -l component=etcd,tier=control-plane
254
NAME            READY   STATUS    RESTARTS     AGE
255
etcd-server63   1/1     Running   0            3m11s
256
etcd-server65   1/1     Running   3            7d2h
257
etcd-server83   1/1     Running   8 (6d ago)   7d2h
258
[10:48] line:~% kubectl exec -n kube-system -ti etcd-server65 -- etcdctl --endpoints '[::1]:2379' --cacert /etc/kubernetes/pki/etcd/ca.crt --cert  /etc/kubernetes/pki/etcd/server.crt --key /etc/kubernetes/pki/etcd/server.key member list
259
356891cd676df6e4, started, server65, https://[2a0a:e5c0:10:1:225:b3ff:fe20:375c]:2380, https://[2a0a:e5c0:10:1:225:b3ff:fe20:375c]:2379, false
260
371b8a07185dee7e, started, server63, https://[2a0a:e5c0:10:1:225:b3ff:fe20:378a]:2380, https://[2a0a:e5c0:10:1:225:b3ff:fe20:378a]:2379, false
261
5942bc58307f8af9, started, server83, https://[2a0a:e5c0:10:1:3e4a:92ff:fe79:bb98]:2380, https://[2a0a:e5c0:10:1:3e4a:92ff:fe79:bb98]:2379, false
262
263
[10:48] line:~% kubectl exec -n kube-system -ti etcd-server65 -- etcdctl --endpoints '[::1]:2379' --cacert /etc/kubernetes/pki/etcd/ca.crt --cert  /etc/kubernetes/pki/etcd/server.crt --key /etc/kubernetes/pki/etcd/server.key member remove 371b8a07185dee7e
264
Member 371b8a07185dee7e removed from cluster e3c0805f592a8f77
265 1 Nico Schottelius
266
</pre>
267
268
SEE ALSO
269
270
* We found the solution using https://stackoverflow.com/questions/67921552/re-installed-node-cannot-join-kubernetes-cluster
271 56 Nico Schottelius
272 147 Nico Schottelius
h3. Node labels (adding, showing, removing)
273
274
Listing the labels:
275
276
<pre>
277
kubectl get nodes --show-labels
278
</pre>
279
280
Adding labels:
281
282
<pre>
283
kubectl label nodes LIST-OF-NODES label1=value1 
284
285
</pre>
286
287
For instance:
288
289
<pre>
290
kubectl label nodes router2 router3 hosttype=router 
291
</pre>
292
293
Selecting nodes in pods:
294
295
<pre>
296
apiVersion: v1
297
kind: Pod
298
...
299
spec:
300
  nodeSelector:
301
    hosttype: router
302
</pre>
303
304 148 Nico Schottelius
Removing labels by adding a minus at the end of the label name:
305
306
<pre>
307
kubectl label node <nodename> <labelname>-
308
</pre>
309
310
For instance:
311
312
<pre>
313
kubectl label nodes router2 router3 hosttype- 
314
</pre>
315
316 147 Nico Schottelius
SEE ALSO
317 1 Nico Schottelius
318 148 Nico Schottelius
* https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/
319
* https://stackoverflow.com/questions/34067979/how-to-delete-a-node-label-by-command-and-api
320 147 Nico Schottelius
321 199 Nico Schottelius
h3. Listing all pods on a node
322
323
<pre>
324
kubectl get pods --all-namespaces -o wide --field-selector spec.nodeName=serverXX
325
</pre>
326
327
Found on https://stackoverflow.com/questions/62000559/how-to-list-all-the-pods-running-in-a-particular-worker-node-by-executing-a-comm
328
329 101 Nico Schottelius
h3. Hardware Maintenance using ungleich-hardware
330
331
Use the following manifest and replace the HOST with the actual host:
332
333
<pre>
334
apiVersion: v1
335
kind: Pod
336
metadata:
337
  name: ungleich-hardware-HOST
338
spec:
339
  containers:
340
  - name: ungleich-hardware
341
    image: ungleich/ungleich-hardware:0.0.5
342
    args:
343
    - sleep
344
    - "1000000"
345
    volumeMounts:
346
      - mountPath: /dev
347
        name: dev
348
    securityContext:
349
      privileged: true
350
  nodeSelector:
351
    kubernetes.io/hostname: "HOST"
352
353
  volumes:
354
    - name: dev
355
      hostPath:
356
        path: /dev
357
</pre>
358
359 102 Nico Schottelius
Also see: [[The_ungleich_hardware_maintenance_guide]]
360
361 105 Nico Schottelius
h3. Triggering a cronjob / creating a job from a cronjob
362 104 Nico Schottelius
363
To test a cronjob, we can create a job from a cronjob:
364
365
<pre>
366
kubectl create job --from=cronjob/volume2-daily-backup volume2-manual
367
</pre>
368
369
This creates a job @volume2-manual@ based on the cronjob @volume2-daily-backup@.
370
371 112 Nico Schottelius
h3. su-ing into a user that has nologin shell set
372
373
Users often have nologin set as their shell inside the container. To be able to execute maintenance commands within the
374
container, we can use @su -s /bin/sh@ like this:
375
376
<pre>
377
su -s /bin/sh -c '/path/to/your/script' testuser
378
</pre>
379
380
Found on https://serverfault.com/questions/351046/how-to-run-command-as-user-who-has-usr-sbin-nologin-as-shell
381
382 113 Nico Schottelius
h3. How to print a secret value
383
384
Assuming you want the "password" item from a secret, use:
385
386
<pre>
387
kubectl get secret SECRETNAME -o jsonpath="{.data.password}" | base64 -d; echo "" 
388
</pre>
389
390 209 Nico Schottelius
h3. Fixing the "ImageInspectError"
391
392
If you see this problem:
393
394
<pre>
395
# kubectl get pods
396
NAME                                                       READY   STATUS                   RESTARTS   AGE
397
bird-router-server137-bird-767f65bb47-g4xsh                0/1     Init:ImageInspectError   0          77d
398
bird-router-server137-openvpn-server120-5c987b7ffb-cn9xf   0/1     ImageInspectError        1          159d
399
bird-router-server137-unbound-5c6f5d4bb6-cxbpr             0/1     ImageInspectError        1          159d
400
</pre>
401
402
Fixes so far:
403
404
* correct registries.conf
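
For reference, a minimal @/etc/containers/registries.conf@ in the TOML format used by cri-o; the content below is only an illustrative sketch, not our actual configuration:

<pre>
# /etc/containers/registries.conf (illustrative example)
unqualified-search-registries = ["docker.io"]
</pre>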
405
406
407 173 Nico Schottelius
h3. How to upgrade a kubernetes cluster
408 172 Nico Schottelius
409
h4. General
410
411
* Should be done every X months to stay up-to-date
412
** X probably something like 3-6
413
* kubeadm based clusters
414
* Needs specific kubeadm versions for upgrade
415
* Follow instructions on https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/
416 190 Nico Schottelius
* Finding releases: https://github.com/kubernetes/kubernetes/tree/master/CHANGELOG
417 172 Nico Schottelius
418
h4. Getting a specific kubeadm or kubelet version
419
420
<pre>
421 190 Nico Schottelius
RELEASE=v1.22.17
422
RELEASE=v1.23.17
423 181 Nico Schottelius
RELEASE=v1.24.9
424 1 Nico Schottelius
RELEASE=v1.25.9
425
RELEASE=v1.26.6
426 190 Nico Schottelius
RELEASE=v1.27.2
427
428 187 Nico Schottelius
ARCH=amd64
429 172 Nico Schottelius
430
curl -L --remote-name-all https://dl.k8s.io/release/${RELEASE}/bin/linux/${ARCH}/{kubeadm,kubelet}
431 182 Nico Schottelius
chmod u+x kubeadm kubelet
432 172 Nico Schottelius
</pre>
433
434
h4. Steps
435
436
* kubeadm upgrade plan
437
** On one control plane node
438
* kubeadm upgrade apply vXX.YY.ZZ
439
** On one control plane node
440 189 Nico Schottelius
* kubeadm upgrade node
441
** On all other control plane nodes
442
** On all worker nodes afterwards
443
444 172 Nico Schottelius
445 173 Nico Schottelius
Repeat for all control plane nodes. Then upgrade the kubelet on all other nodes via the package manager.
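
A minimal sketch of the sequence, assuming the kubeadm binary downloaded in the previous step and Alpine/OpenRC based nodes (the version below is only an example):

<pre>
# On the first control plane node
./kubeadm upgrade plan
./kubeadm upgrade apply v1.27.2

# On the remaining control plane nodes, then on the worker nodes
./kubeadm upgrade node

# On every node: replace the kubelet binary/package, then restart it
rc-service kubelet restart
</pre>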
446 172 Nico Schottelius
447 193 Nico Schottelius
h4. Upgrading to 1.22.17
448 1 Nico Schottelius
449 193 Nico Schottelius
* https://v1-22.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/
450 194 Nico Schottelius
* Need to create a kubeadm config (kubeadm.yaml; see the sketch after this list)
451 198 Nico Schottelius
** f.i. using the following
452
** @/usr/local/bin/kubeadm-v1.22.17   upgrade --config kubeadm.yaml --ignore-preflight-errors=CoreDNSUnsupportedPlugins,CoreDNSMigration apply -y v1.22.17@
453 193 Nico Schottelius
* Done for p6 on 2023-10-04
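
A possible way to create the kubeadm.yaml used above is to extract the current cluster configuration from the @kubeadm-config@ ConfigMap (a sketch; adjust as needed):

<pre>
kubectl -n kube-system get configmap kubeadm-config \
  -o jsonpath='{.data.ClusterConfiguration}' > kubeadm.yaml
</pre>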
454
455
h4. Upgrading to 1.23.17
456
457
* https://v1-23.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/
458
* No special notes
459
* Done for p6 on 2023-10-04
460
461
h4. Upgrading to 1.24.17
462
463
* https://v1-24.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/
464
* No special notes
465
* Done for p6 on 2023-10-04
466
467
h4. Upgrading to 1.25.14
468
469
* https://v1-25.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/
470
* No special notes
471
* Done for p6 on 2023-10-04
472
473
h4. Upgrading to 1.26.9
474
475 1 Nico Schottelius
* https://v1-26.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/
476 193 Nico Schottelius
* No special notes
477
* Done for p6 on 2023-10-04
478 188 Nico Schottelius
479 196 Nico Schottelius
h4. Upgrading to 1.27
480 186 Nico Schottelius
481 192 Nico Schottelius
* https://v1-27.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/
482 186 Nico Schottelius
* kubelet will not start anymore
483
* reason: @"command failed" err="failed to parse kubelet flag: unknown flag: --container-runtime"@
484
* /var/lib/kubelet/kubeadm-flags.env contains that parameter
485
* remove it, then start kubelet (see the sketch below)
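
A possible fix, assuming the flag appears as @--container-runtime=remote@ in the usual kubeadm-flags.env layout (verify the file content first; Alpine/OpenRC assumed for the restart):

<pre>
sed -i 's/--container-runtime=remote //' /var/lib/kubelet/kubeadm-flags.env
rc-service kubelet restart
</pre>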
486 192 Nico Schottelius
487 197 Nico Schottelius
h4. Upgrading to 1.28
488 192 Nico Schottelius
489
* https://v1-28.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/
490 186 Nico Schottelius
491 210 Nico Schottelius
h4. Upgrading to 1.29
492
493
* Done for many clusters around 2024-01-10
494
* Unsure if it was properly released
495
* https://v1-29.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/
496
497 186 Nico Schottelius
h4. Upgrade to crio 1.27: missing crun
498
499
Error message
500
501
<pre>
502
level=fatal msg="validating runtime config: runtime validation: \"crun\" not found in $PATH: exec: \"crun\": executable file not found in $PATH"
503
</pre>
504
505
Fix:
506
507
<pre>
508
apk add crun
509
</pre>
510
511 157 Nico Schottelius
h2. Reference CNI
512
513
* Mainly "stupid", but effective plugins
514
* Main documentation on https://www.cni.dev/plugins/current/
515 158 Nico Schottelius
* Plugins
516
** bridge (see the example config after this list)
517
*** Can create the bridge on the host
518
*** But it seems unable to also add host interfaces to it
519
*** Has support for vlan tags
520
** vlan
521
*** creates vlan tagged sub interface on the host
522 160 Nico Schottelius
*** "It's a 1:1 mapping (i.e. no bridge in between)":https://github.com/k8snetworkplumbingwg/multus-cni/issues/569
523 158 Nico Schottelius
** host-device
524
*** moves the interface from the host into the container
525
*** very easy for physical connections to containers
526 159 Nico Schottelius
** ipvlan
527
*** "virtualisation" of a host device
528
*** routing based on IP
529
*** Same MAC for everyone
530
*** Cannot reach the master interface
531
** macvlan
532
*** With mac addresses
533
*** Supports various modes (to be checked)
534
** ptp ("point to point")
535
*** Creates a host device and connects it to the container
536
** win*
537 158 Nico Schottelius
*** Windows implementations
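
Referring back to the bridge plugin above, a minimal plugin configuration sketch (bridge name, VLAN tag and subnet are placeholders, not our actual config; see cni.dev for the authoritative format):

<pre>
{
  "cniVersion": "0.4.0",
  "name": "example-bridge",
  "type": "bridge",
  "bridge": "br0",
  "vlan": 100,
  "ipam": {
    "type": "host-local",
    "ranges": [ [ { "subnet": "2a0a:e5c0:0:cafe::/64" } ] ]
  }
}
</pre>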
538 157 Nico Schottelius
539 62 Nico Schottelius
h2. Calico CNI
540
541
h3. Calico Installation
542
543
* We install "calico using helm":https://docs.projectcalico.org/getting-started/kubernetes/helm
544
* This has the following advantages:
545
** Easy to upgrade
546
** Does not require the OS to configure IPv6/dual-stack settings, as the tigera operator figures things out on its own
547
548
Usually plain calico can be installed directly using:
549
550
<pre>
551 174 Nico Schottelius
VERSION=v3.25.0
552 149 Nico Schottelius
553 1 Nico Schottelius
helm repo add projectcalico https://docs.projectcalico.org/charts
554 167 Nico Schottelius
helm repo update
555 124 Nico Schottelius
helm upgrade --install --namespace tigera calico projectcalico/tigera-operator --version $VERSION --create-namespace
556 1 Nico Schottelius
</pre>
557 92 Nico Schottelius
558
* Check the tags on https://github.com/projectcalico/calico/tags for the latest release
559 62 Nico Schottelius
560
h3. Installing calicoctl
561
562 115 Nico Schottelius
* General installation instructions, including binary download: https://projectcalico.docs.tigera.io/maintenance/clis/calicoctl/install
563
564 62 Nico Schottelius
To be able to manage and configure calico, we need to 
565
"install calicoctl (we choose the version as a pod)":https://docs.projectcalico.org/getting-started/clis/calicoctl/install#install-calicoctl-as-a-kubernetes-pod
566
567
<pre>
568
kubectl apply -f https://docs.projectcalico.org/manifests/calicoctl.yaml
569
</pre>
570
571 93 Nico Schottelius
Or version specific:
572
573
<pre>
574
kubectl apply -f https://github.com/projectcalico/calico/blob/v3.20.4/manifests/calicoctl.yaml
575 97 Nico Schottelius
576
# For 3.22
577
kubectl apply -f https://projectcalico.docs.tigera.io/archive/v3.22/manifests/calicoctl.yaml
578 93 Nico Schottelius
</pre>
579
580 70 Nico Schottelius
And making it easier accessible by alias:
581
582
<pre>
583
alias calicoctl="kubectl exec -i -n kube-system calicoctl -- /calicoctl"
584
</pre>
585
586 62 Nico Schottelius
h3. Calico configuration
587
588 63 Nico Schottelius
By default our k8s clusters "BGP peer":https://docs.projectcalico.org/networking/bgp
589
with an upstream router to propagate podcidr and servicecidr.
590 62 Nico Schottelius
591
Default settings in our infrastructure:
592
593
* We use a full-mesh using the @nodeToNodeMeshEnabled: true@ option
594
* We keep the original next hop so that *only* the server with the pod is announcing it (instead of ecmp)
595 1 Nico Schottelius
* We use private ASNs for k8s clusters
596 63 Nico Schottelius
* We do *not* use any overlay
597 62 Nico Schottelius
598
After installing calico and calicoctl the last step of the installation is usually:
599
600 1 Nico Schottelius
<pre>
601 79 Nico Schottelius
calicoctl create -f - < calico-bgp.yaml
602 62 Nico Schottelius
</pre>
603
604
605
A sample BGP configuration:
606
607
<pre>
608
---
609
apiVersion: projectcalico.org/v3
610
kind: BGPConfiguration
611
metadata:
612
  name: default
613
spec:
614
  logSeverityScreen: Info
615
  nodeToNodeMeshEnabled: true
616
  asNumber: 65534
617
  serviceClusterIPs:
618
  - cidr: 2a0a:e5c0:10:3::/108
619
  serviceExternalIPs:
620
  - cidr: 2a0a:e5c0:10:3::/108
621
---
622
apiVersion: projectcalico.org/v3
623
kind: BGPPeer
624
metadata:
625
  name: router1-place10
626
spec:
627
  peerIP: 2a0a:e5c0:10:1::50
628
  asNumber: 213081
629
  keepOriginalNextHop: true
630
</pre>
631
632 126 Nico Schottelius
h2. Cilium CNI (experimental)
633
634 137 Nico Schottelius
h3. Status
635
636 138 Nico Schottelius
*NO WORKING CILIUM CONFIGURATION FOR IPV6 only modes*
637 137 Nico Schottelius
638 146 Nico Schottelius
h3. Latest error
639
640
It seems cilium does not run on IPv6 only hosts:
641
642
<pre>
643
level=info msg="Validating configured node address ranges" subsys=daemon
644
level=fatal msg="postinit failed" error="external IPv4 node address could not be derived, please configure via --ipv4-node" subsys=daemon
645
level=info msg="Starting IP identity watcher" subsys=ipcache
646
</pre>
647
648
It crashes after that log entry.
649
650 128 Nico Schottelius
h3. BGP configuration
651
652
* The cilium-operator will not start without a correct configmap being present beforehand (see error message below)
653
* Creating the bgp config beforehand as a configmap is thus required.
654
655
The error one gets without the configmap present:
656
657
Pods are hanging with:
658
659
<pre>
660
cilium-bpqm6                       0/1     Init:0/4            0             9s
661
cilium-operator-5947d94f7f-5bmh2   0/1     ContainerCreating   0             9s
662
</pre>
663
664
The error message in the cilium-operator is:
665
666
<pre>
667
Events:
668
  Type     Reason       Age                From               Message
669
  ----     ------       ----               ----               -------
670
  Normal   Scheduled    80s                default-scheduler  Successfully assigned kube-system/cilium-operator-5947d94f7f-lqcsp to server56
671
  Warning  FailedMount  16s (x8 over 80s)  kubelet            MountVolume.SetUp failed for volume "bgp-config-path" : configmap "bgp-config" not found
672
</pre>
673
674
A correct bgp config looks like this:
675
676
<pre>
677
apiVersion: v1
678
kind: ConfigMap
679
metadata:
680
  name: bgp-config
681
  namespace: kube-system
682
data:
683
  config.yaml: |
684
    peers:
685
      - peer-address: 2a0a:e5c0::46
686
        peer-asn: 209898
687
        my-asn: 65533
688
      - peer-address: 2a0a:e5c0::47
689
        peer-asn: 209898
690
        my-asn: 65533
691
    address-pools:
692
      - name: default
693
        protocol: bgp
694
        addresses:
695
          - 2a0a:e5c0:0:14::/64
696
</pre>
697 127 Nico Schottelius
698
h3. Installation
699 130 Nico Schottelius
700 127 Nico Schottelius
Adding the repo
701 1 Nico Schottelius
<pre>
702 127 Nico Schottelius
703 129 Nico Schottelius
helm repo add cilium https://helm.cilium.io/
704 130 Nico Schottelius
helm repo update
705
</pre>
706 129 Nico Schottelius
707 135 Nico Schottelius
Installing + configuring cilium
708 129 Nico Schottelius
<pre>
709 130 Nico Schottelius
ipv6pool=2a0a:e5c0:0:14::/112
710 1 Nico Schottelius
711 146 Nico Schottelius
version=1.12.2
712 129 Nico Schottelius
713
helm upgrade --install cilium cilium/cilium --version $version \
714 1 Nico Schottelius
  --namespace kube-system \
715
  --set ipv4.enabled=false \
716
  --set ipv6.enabled=true \
717 146 Nico Schottelius
  --set enableIPv6Masquerade=false \
718
  --set bgpControlPlane.enabled=true 
719 1 Nico Schottelius
720 146 Nico Schottelius
#  --set ipam.operator.clusterPoolIPv6PodCIDRList=$ipv6pool
721
722
# Old style bgp?
723 136 Nico Schottelius
#   --set bgp.enabled=true --set bgp.announce.podCIDR=true \
724 127 Nico Schottelius
725
# Show possible configuration options
726
helm show values cilium/cilium
727
728 1 Nico Schottelius
</pre>
729 132 Nico Schottelius
730
Using a /64 for ipam.operator.clusterPoolIPv6PodCIDRList fails with:
731
732
<pre>
733
level=fatal msg="Unable to init cluster-pool allocator" error="unable to initialize IPv6 allocator New CIDR set failed; the node CIDR size is too big" subsys=cilium-operator-generic
734
</pre>
735
736 126 Nico Schottelius
737 1 Nico Schottelius
See also https://github.com/cilium/cilium/issues/20756
738 135 Nico Schottelius
739
A /112 seems to actually work.
740
741
h3. Kernel modules
742
743
Cilium requires the following modules to be loaded on the host (not loaded by default):
744
745
<pre>
746 1 Nico Schottelius
modprobe  ip6table_raw
747
modprobe  ip6table_filter
748
</pre>
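
To persist the modules across reboots (assuming Alpine-style /etc/modules handling):

<pre>
cat >> /etc/modules << EOF
ip6table_raw
ip6table_filter
EOF
</pre>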
749 146 Nico Schottelius
750
h3. Interesting helm flags
751
752
* autoDirectNodeRoutes
753
* bgpControlPlane.enabled = true
754
755
h3. SEE ALSO
756
757
* https://docs.cilium.io/en/v1.12/helm-reference/
758 133 Nico Schottelius
759 179 Nico Schottelius
h2. Multus
760 168 Nico Schottelius
761
* https://github.com/k8snetworkplumbingwg/multus-cni
762
* Installing a deployment w/ CRDs
763 150 Nico Schottelius
764 169 Nico Schottelius
<pre>
765 176 Nico Schottelius
VERSION=v4.0.1
766 169 Nico Schottelius
767 170 Nico Schottelius
kubectl apply -f https://raw.githubusercontent.com/k8snetworkplumbingwg/multus-cni/${VERSION}/deployments/multus-daemonset-crio.yml
768
</pre>
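
Once multus is running, additional networks are described as NetworkAttachmentDefinition objects. A sketch using the vlan CNI plugin (interface, VLAN id and subnet are placeholders):

<pre>
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: example-vlan
spec:
  config: '{
      "cniVersion": "0.4.0",
      "type": "vlan",
      "master": "eth0",
      "vlanId": 100,
      "ipam": {
        "type": "host-local",
        "ranges": [ [ { "subnet": "2a0a:e5c0:0:beef::/64" } ] ]
      }
    }'
</pre>

Pods reference it via the annotation @k8s.v1.cni.cncf.io/networks: example-vlan@.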
769 169 Nico Schottelius
770 191 Nico Schottelius
h2. ArgoCD
771 56 Nico Schottelius
772 60 Nico Schottelius
h3. Argocd Installation
773 1 Nico Schottelius
774 116 Nico Schottelius
* See https://argo-cd.readthedocs.io/en/stable/
775
776 60 Nico Schottelius
As there is no configuration management present yet, argocd is installed using
777
778 1 Nico Schottelius
<pre>
779 60 Nico Schottelius
kubectl create namespace argocd
780 1 Nico Schottelius
781
# OR: latest stable
782
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
783
784 191 Nico Schottelius
# OR Specific Version
785
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/v2.3.2/manifests/install.yaml
786 56 Nico Schottelius
787 191 Nico Schottelius
788
</pre>
789 1 Nico Schottelius
790 60 Nico Schottelius
h3. Get the argocd credentials
791
792
<pre>
793
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d; echo ""
794
</pre>
795 52 Nico Schottelius
796 87 Nico Schottelius
h3. Accessing argocd
797
798
In regular IPv6 clusters:
799
800
* Navigate to https://argocd-server.argocd.CLUSTERDOMAIN
801
802
In legacy IPv4 clusters
803
804
<pre>
805
kubectl --namespace argocd port-forward svc/argocd-server 8080:80
806
</pre>
807
808 88 Nico Schottelius
* Navigate to https://localhost:8080
809
810 68 Nico Schottelius
h3. Using the argocd webhook to trigger changes
811 67 Nico Schottelius
812
* To trigger changes, POST JSON to https://argocd.example.com/api/webhook (see the sketch below)
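
A sketch of triggering it manually with curl, imitating a GitHub-style push event (URL and branch are placeholders; normally the Git server sends this payload itself):

<pre>
curl -X POST https://argocd.example.com/api/webhook \
  -H 'Content-Type: application/json' \
  -H 'X-GitHub-Event: push' \
  -d '{"ref": "refs/heads/master", "repository": {"html_url": "https://code.ungleich.ch/ungleich-intern/k8s-config"}}'
</pre>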
813
814 72 Nico Schottelius
h3. Deploying an application
815
816
* Applications are deployed via git towards gitea (code.ungleich.ch) and then pulled by argo
817 73 Nico Schottelius
* Always include the *redmine-url* pointing to the (customer) ticket
818
** Also add the support-url if it exists
819 72 Nico Schottelius
820
Application sample
821
822
<pre>
823
apiVersion: argoproj.io/v1alpha1
824
kind: Application
825
metadata:
826
  name: gitea-CUSTOMER
827
  namespace: argocd
828
spec:
829
  destination:
830
    namespace: default
831
    server: 'https://kubernetes.default.svc'
832
  source:
833
    path: apps/prod/gitea
834
    repoURL: 'https://code.ungleich.ch/ungleich-intern/k8s-config.git'
835
    targetRevision: HEAD
836
    helm:
837
      parameters:
838
        - name: storage.data.storageClass
839
          value: rook-ceph-block-hdd
840
        - name: storage.data.size
841
          value: 200Gi
842
        - name: storage.db.storageClass
843
          value: rook-ceph-block-ssd
844
        - name: storage.db.size
845
          value: 10Gi
846
        - name: storage.letsencrypt.storageClass
847
          value: rook-ceph-block-hdd
848
        - name: storage.letsencrypt.size
849
          value: 50Mi
850
        - name: letsencryptStaging
851
          value: 'no'
852
        - name: fqdn
853
          value: 'code.verua.online'
854
  project: default
855
  syncPolicy:
856
    automated:
857
      prune: true
858
      selfHeal: true
859
  info:
860
    - name: 'redmine-url'
861
      value: 'https://redmine.ungleich.ch/issues/ISSUEID'
862
    - name: 'support-url'
863
      value: 'https://support.ungleich.ch/Ticket/Display.html?id=TICKETID'
864
</pre>
865
866 80 Nico Schottelius
h2. Helm related operations and conventions
867 55 Nico Schottelius
868 61 Nico Schottelius
We use helm charts extensively.
869
870
* In production, they are managed via argocd
871
* In development, helm charts can be developed and deployed manually using the helm utility.
872
873 55 Nico Schottelius
h3. Installing a helm chart
874
875
One can use the usual pattern of
876
877
<pre>
878
helm install <releasename> <chartdirectory>
879
</pre>
880
881
However, you often want to reinstall/update when testing helm charts. The following pattern is "better", because it also works if the release is already installed:
882
883
<pre>
884
helm upgrade --install <releasename> <chartdirectory>
885 1 Nico Schottelius
</pre>
886 80 Nico Schottelius
887
h3. Naming services and deployments in helm charts [Application labels]
888
889
* We always have {{ .Release.Name }} to identify the current "instance"
890
* Deployments:
891
** use @app: <what it is>@, f.i. @app: nginx@, @app: postgres@, ...
892 81 Nico Schottelius
* See more about standard labels on
893
** https://kubernetes.io/docs/concepts/overview/working-with-objects/common-labels/
894
** https://helm.sh/docs/chart_best_practices/labels/
895 55 Nico Schottelius
896 151 Nico Schottelius
h3. Show all versions of a helm chart
897
898
<pre>
899
helm search repo -l repo/chart
900
</pre>
901
902
For example:
903
904
<pre>
905
% helm search repo -l projectcalico/tigera-operator 
906
NAME                         	CHART VERSION	APP VERSION	DESCRIPTION                            
907
projectcalico/tigera-operator	v3.23.3      	v3.23.3    	Installs the Tigera operator for Calico
908
projectcalico/tigera-operator	v3.23.2      	v3.23.2    	Installs the Tigera operator for Calico
909
....
910
</pre>
911
912 152 Nico Schottelius
h3. Show possible values of a chart
913
914
<pre>
915
helm show values <repo/chart>
916
</pre>
917
918
Example:
919
920
<pre>
921
helm show values ingress-nginx/ingress-nginx
922
</pre>
923
924 207 Nico Schottelius
h3. Show all possible charts in a repo
925
926
<pre>
927
helm search repo REPO
928
</pre>
929
930 178 Nico Schottelius
h3. Download a chart
931
932
For instance, to check it out locally, use:
933
934
<pre>
935
helm pull <repo/chart>
936
</pre>
937 152 Nico Schottelius
938 139 Nico Schottelius
h2. Rook + Ceph
939
940
h3. Installation
941
942
* Usually directly via argocd
943
944 71 Nico Schottelius
h3. Executing ceph commands
945
946
Using the ceph-tools pod as follows:
947
948
<pre>
949
kubectl exec -n rook-ceph -ti $(kubectl -n rook-ceph get pods -l app=rook-ceph-tools -o jsonpath='{.items[*].metadata.name}') -- ceph -s
950
</pre>
951
952 43 Nico Schottelius
h3. Inspecting the logs of a specific server
953
954
<pre>
955
# Get the related pods
956
kubectl -n rook-ceph get pods -l app=rook-ceph-osd-prepare 
957
...
958
959
# Inspect the logs of a specific pod
960
kubectl -n rook-ceph logs -f rook-ceph-osd-prepare-server23--1-444qx
961
962 71 Nico Schottelius
</pre>
963
964
h3. Inspecting the logs of the rook-ceph-operator
965
966
<pre>
967
kubectl -n rook-ceph logs -f -l app=rook-ceph-operator
968 43 Nico Schottelius
</pre>
969
970 200 Nico Schottelius
h3. (Temporarily) Disabling the rook operator
971
972
* First disable the sync in argocd
973
* Then scale it down
974
975
<pre>
976
kubectl -n rook-ceph scale deployment rook-ceph-operator --replicas=0
977
</pre>
978
979
When done with the work/maintenance, re-enable sync in argocd.
980
The following command is thus strictly speaking not required, as argocd will fix it on its own:
981
982
<pre>
983
kubectl -n rook-ceph scale deployment rook-ceph-operator --replicas=1
984
</pre>
985
986 121 Nico Schottelius
h3. Restarting the rook operator
987
988
<pre>
989
kubectl -n rook-ceph delete pods  -l app=rook-ceph-operator
990
</pre>
991
992 43 Nico Schottelius
h3. Triggering server prepare / adding new osds
993
994
The rook-ceph-operator triggers/watches/creates pods to maintain hosts. To trigger a full "re scan", simply delete that pod:
995
996
<pre>
997
kubectl -n rook-ceph delete pods -l app=rook-ceph-operator
998
</pre>
999
1000
This will cause all the @rook-ceph-osd-prepare-..@ jobs to be recreated and thus OSDs to be created, if new disks have been added.
1001
1002
h3. Removing an OSD
1003
1004
* See "Ceph OSD Management":https://rook.io/docs/rook/v1.7/ceph-osd-mgmt.html
1005 77 Nico Schottelius
* More specifically: https://github.com/rook/rook/blob/release-1.7/cluster/examples/kubernetes/ceph/osd-purge.yaml
1006 99 Nico Schottelius
* Then delete the related deployment
1007 41 Nico Schottelius
1008 98 Nico Schottelius
Set the OSD id in osd-purge.yaml and apply it. The OSD should be down beforehand.
1009
1010
<pre>
1011
apiVersion: batch/v1
1012
kind: Job
1013
metadata:
1014
  name: rook-ceph-purge-osd
1015
  namespace: rook-ceph # namespace:cluster
1016
  labels:
1017
    app: rook-ceph-purge-osd
1018
spec:
1019
  template:
1020
    metadata:
1021
      labels:
1022
        app: rook-ceph-purge-osd
1023
    spec:
1024
      serviceAccountName: rook-ceph-purge-osd
1025
      containers:
1026
        - name: osd-removal
1027
          image: rook/ceph:master
1028
          # TODO: Insert the OSD ID in the last parameter that is to be removed
1029
          # The OSD IDs are a comma-separated list. For example: "0" or "0,2".
1030
          # If you want to preserve the OSD PVCs, set `--preserve-pvc true`.
1031
          #
1032
          # A --force-osd-removal option is available if the OSD should be destroyed even though the
1033
          # removal could lead to data loss.
1034
          args:
1035
            - "ceph"
1036
            - "osd"
1037
            - "remove"
1038
            - "--preserve-pvc"
1039
            - "false"
1040
            - "--force-osd-removal"
1041
            - "false"
1042
            - "--osd-ids"
1043
            - "SETTHEOSDIDHERE"
1044
          env:
1045
            - name: POD_NAMESPACE
1046
              valueFrom:
1047
                fieldRef:
1048
                  fieldPath: metadata.namespace
1049
            - name: ROOK_MON_ENDPOINTS
1050
              valueFrom:
1051
                configMapKeyRef:
1052
                  key: data
1053
                  name: rook-ceph-mon-endpoints
1054
            - name: ROOK_CEPH_USERNAME
1055
              valueFrom:
1056
                secretKeyRef:
1057
                  key: ceph-username
1058
                  name: rook-ceph-mon
1059
            - name: ROOK_CEPH_SECRET
1060
              valueFrom:
1061
                secretKeyRef:
1062
                  key: ceph-secret
1063
                  name: rook-ceph-mon
1064
            - name: ROOK_CONFIG_DIR
1065
              value: /var/lib/rook
1066
            - name: ROOK_CEPH_CONFIG_OVERRIDE
1067
              value: /etc/rook/config/override.conf
1068
            - name: ROOK_FSID
1069
              valueFrom:
1070
                secretKeyRef:
1071
                  key: fsid
1072
                  name: rook-ceph-mon
1073
            - name: ROOK_LOG_LEVEL
1074
              value: DEBUG
1075
          volumeMounts:
1076
            - mountPath: /etc/ceph
1077
              name: ceph-conf-emptydir
1078
            - mountPath: /var/lib/rook
1079
              name: rook-config
1080
      volumes:
1081
        - emptyDir: {}
1082
          name: ceph-conf-emptydir
1083
        - emptyDir: {}
1084
          name: rook-config
1085
      restartPolicy: Never
1086
1087
1088 99 Nico Schottelius
</pre>
1089
1090 1 Nico Schottelius
Deleting the deployment:
1091
1092
<pre>
1093
[18:05] bridge:~% kubectl -n rook-ceph delete deployment rook-ceph-osd-6
1094 99 Nico Schottelius
deployment.apps "rook-ceph-osd-6" deleted
1095
</pre>
1096 185 Nico Schottelius
1097
h3. Placement of mons/osds/etc.
1098
1099
See https://rook.io/docs/rook/v1.11/CRDs/Cluster/ceph-cluster-crd/#placement-configuration-settings
1100 98 Nico Schottelius
1101 145 Nico Schottelius
h2. Ingress + Cert Manager
1102
1103
* We deploy "nginx-ingress":https://docs.nginx.com/nginx-ingress-controller/ to get an ingress
1104
* We deploy "cert-manager":https://cert-manager.io/ to handle certificates
1105
* We deploy the @ClusterIssuer@ independently, so that the cert-manager app can deploy first and the issuer is created once the CRDs from cert-manager are in place (a generic example is sketched after this list)
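
A generic @ClusterIssuer@ example for illustration (name, email and ingress class are placeholders and may differ from what we actually deploy):

<pre>
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-production
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com
    privateKeySecretRef:
      name: letsencrypt-production-account-key
    solvers:
      - http01:
          ingress:
            class: nginx
</pre>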
1106
1107
h3. IPv4 reachability 
1108
1109
The ingress is by default IPv6 only. To make it reachable from the IPv4 world, get its IPv6 address and configure a NAT64 mapping in Jool.
1110
1111
Steps:
1112
1113
h4. Get the ingress IPv6 address
1114
1115
Use @kubectl -n ingress-nginx get svc ingress-nginx-controller -o jsonpath='{.spec.clusterIP}'; echo ''@
1116
1117
Example:
1118
1119
<pre>
1120
kubectl -n ingress-nginx get svc ingress-nginx-controller -o jsonpath='{.spec.clusterIP}'; echo ''
1121
2a0a:e5c0:10:1b::ce11
1122
</pre>
1123
1124
h4. Add NAT64 mapping
1125
1126
* Update the __dcl_jool_siit cdist type
1127
* Record the two IPs (IPv6 and IPv4)
1128
* Configure all routers
1129
1130
1131
h4. Add DNS record
1132
1133
To make the ingress usable as a CNAME destination, create an "ingress" DNS record, such as:
1134
1135
<pre>
1136
; k8s ingress for dev
1137
dev-ingress                 AAAA 2a0a:e5c0:10:1b::ce11
1138
dev-ingress                 A 147.78.194.23
1139
1140
</pre> 
1141
1142
h4. Add supporting wildcard DNS
1143
1144
If you plan to add various sites under a specific domain, we can add a wildcard DNS entry, such as *.k8s-dev.django-hosting.ch:
1145
1146
<pre>
1147
*.k8s-dev         CNAME dev-ingress.ungleich.ch.
1148
</pre>
1149
1150 76 Nico Schottelius
h2. Harbor
1151
1152 175 Nico Schottelius
* We use "Harbor":https://goharbor.io/ as an image registry for our own images. Internal app reference: apps/prod/harbor.
1153
* The admin password is in the password store; it is Harbor12345 by default
1154 76 Nico Schottelius
* At the moment harbor only authenticates against the internal ldap tree
1155
1156
h3. LDAP configuration
1157
1158
* The url needs to be ldaps://...
1159
* uid = uid
1160
* The rest is standard
1161 75 Nico Schottelius
1162 89 Nico Schottelius
h2. Monitoring / Prometheus
1163
1164 90 Nico Schottelius
* Via "kube-prometheus":https://github.com/prometheus-operator/kube-prometheus/
1165 89 Nico Schottelius
1166 91 Nico Schottelius
Access via the following URLs (see the port-forward sketch after this list):
1167
1168
* http://prometheus-k8s.monitoring.svc:9090
1169
* http://grafana.monitoring.svc:3000
1170
* http://alertmanager.monitoring.svc:9093
1171
1172
1173 100 Nico Schottelius
h3. Prometheus Options
1174
1175
* "helm/kube-prometheus-stack":https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack
1176
** Includes dashboards and co.
1177
* "manifest based kube-prometheus":https://github.com/prometheus-operator/kube-prometheus
1178
** Includes dashboards and co.
1179
* "Prometheus Operator (mainly CRD manifest":https://github.com/prometheus-operator/prometheus-operator
1180
1181 171 Nico Schottelius
h3. Grafana default password
1182
1183
* If not changed: @prom-operator@
1184
1185 82 Nico Schottelius
h2. Nextcloud
1186
1187 85 Nico Schottelius
h3. How to get the nextcloud credentials 
1188 84 Nico Schottelius
1189
* The initial username is set to "nextcloud"
1190
* The password is autogenerated and saved in a kubernetes secret
1191
1192
<pre>
1193 85 Nico Schottelius
kubectl get secret RELEASENAME-nextcloud -o jsonpath="{.data.PASSWORD}" | base64 -d; echo "" 
1194 84 Nico Schottelius
</pre>
1195
1196 83 Nico Schottelius
h3. How to fix "Access through untrusted domain"
1197
1198 82 Nico Schottelius
* Nextcloud stores the initial domain configuration
1199 1 Nico Schottelius
* If the FQDN is changed, it will show the error message "Access through untrusted domain"
1200 82 Nico Schottelius
* To fix, edit /var/www/html/config/config.php and correct the domain
1201 1 Nico Schottelius
* Then delete the pods (see the sketch after this list)
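
A minimal sketch of the procedure, assuming a release called RELEASENAME in namespace NAMESPACE (the label selector is an assumption; double check with @kubectl get pods --show-labels@):

<pre>
# Correct the trusted domain inside the running pod
kubectl -n NAMESPACE exec deploy/RELEASENAME-nextcloud -- \
  sed -i 's/old.example.com/new.example.com/' /var/www/html/config/config.php

# Then delete the pods so they come back with the corrected config
kubectl -n NAMESPACE delete pods -l app.kubernetes.io/name=nextcloud
</pre>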
1202 165 Nico Schottelius
1203
h3. Running occ commands inside the nextcloud container
1204
1205
* Find the pod in the right namespace
1206
1207
Exec:
1208
1209
<pre>
1210
su www-data -s /bin/sh -c ./occ
1211
</pre>
1212
1213
* -s /bin/sh is needed as the default shell is set to /bin/false
1214
1215 166 Nico Schottelius
h4. Rescanning files
1216 165 Nico Schottelius
1217 166 Nico Schottelius
* If files have been added without nextcloud's knowledge
1218
1219
<pre>
1220
su www-data -s /bin/sh -c "./occ files:scan --all"
1221
</pre>
1222 82 Nico Schottelius
1223 201 Nico Schottelius
h2. Sealed Secrets
1224
1225 202 Jin-Guk Kwon
* install kubeseal
1226 1 Nico Schottelius
1227 202 Jin-Guk Kwon
<pre>
1228
KUBESEAL_VERSION='0.23.0'
1229
wget "https://github.com/bitnami-labs/sealed-secrets/releases/download/v${KUBESEAL_VERSION:?}/kubeseal-${KUBESEAL_VERSION:?}-linux-amd64.tar.gz" 
1230
tar -xvzf kubeseal-${KUBESEAL_VERSION:?}-linux-amd64.tar.gz kubeseal
1231
sudo install -m 755 kubeseal /usr/local/bin/kubeseal
1232
</pre>
1233
1234
* create key for sealed-secret
1235
1236
<pre>
1237
kubeseal --fetch-cert > /tmp/public-key-cert.pem
1238
</pre>
1239
1240
* create the secret
1241
1242
<pre>
1243 203 Jin-Guk Kwon
# Example
1244 202 Jin-Guk Kwon
apiVersion: v1
1245
kind: Secret
1246
metadata:
1247
  name: Release.Name-postgres-config
1248
  annotations:
1249
    secret-generator.v1.mittwald.de/autogenerate: POSTGRES_PASSWORD
1250
    hosting: Release.Name
1251
  labels:
1252
    app.kubernetes.io/instance: Release.Name
1253
    app.kubernetes.io/component: postgres
1254
stringData:
1255
  POSTGRES_USER: postgresUser
1256
  POSTGRES_DB: postgresDBName
1257
  POSTGRES_INITDB_ARGS: "--no-locale --encoding=UTF8"
1258
</pre>
1259
1260
* convert secret.yaml to sealed-secret.yaml
1261
1262
<pre>
1263
kubeseal -n <namespace> --cert=/tmp/public-key-cert.pem --format=yaml < ./secret.yaml  > ./sealed-secret.yaml
1264
</pre>
1265
1266
* use sealed-secret.yaml on helm-chart directory
1267 201 Nico Schottelius
1268 205 Jin-Guk Kwon
* Refer to tickets #11989 and #12120
1269 204 Jin-Guk Kwon
1270 1 Nico Schottelius
h2. Infrastructure versions
1271 35 Nico Schottelius
1272 57 Nico Schottelius
h3. ungleich kubernetes infrastructure v5 (2021-10)
1273 1 Nico Schottelius
1274 57 Nico Schottelius
Clusters are configured / setup in this order:
1275
1276
* Bootstrap via kubeadm
1277 59 Nico Schottelius
* "Networking via calico + BGP (non ECMP) using helm":https://docs.projectcalico.org/getting-started/kubernetes/helm
1278
* "ArgoCD for CD":https://argo-cd.readthedocs.io/en/stable/
1279
** "rook for storage via argocd":https://rook.io/
1280 58 Nico Schottelius
** haproxy for in IPv6-cluster-IPv4-to-IPv6 proxy via argocd
1281
** "kubernetes-secret-generator for in cluster secrets":https://github.com/mittwald/kubernetes-secret-generator
1282
** "ungleich-certbot managing certs and nginx":https://hub.docker.com/repository/docker/ungleich/ungleich-certbot
1283
1284 57 Nico Schottelius
1285
h3. ungleich kubernetes infrastructure v4 (2021-09)
1286
1287 54 Nico Schottelius
* rook is configured via manifests instead of using the rook-ceph-cluster helm chart
1288 1 Nico Schottelius
* The rook operator is still being installed via helm
1289 35 Nico Schottelius
1290 57 Nico Schottelius
h3. ungleich kubernetes infrastructure v3 (2021-07)
1291 1 Nico Schottelius
1292 10 Nico Schottelius
* rook is now installed via helm via argocd instead of directly via manifests
1293 28 Nico Schottelius
1294 57 Nico Schottelius
h3. ungleich kubernetes infrastructure v2 (2021-05)
1295 28 Nico Schottelius
1296
* Replaced fluxv2 from ungleich k8s v1 with argocd
1297 1 Nico Schottelius
** argocd can apply helm templates directly without needing to go through Chart releases
1298 28 Nico Schottelius
* We are also using argoflow for build flows
1299
* Planned to add "kaniko":https://github.com/GoogleContainerTools/kaniko for image building
1300
1301 57 Nico Schottelius
h3. ungleich kubernetes infrastructure v1 (2021-01)
1302 28 Nico Schottelius
1303
We are using the following components:
1304
1305
* "Calico as a CNI":https://www.projectcalico.org/ with BGP, IPv6 only, no encapsulation
1306
** Needed for basic networking
1307
* "kubernetes-secret-generator":https://github.com/mittwald/kubernetes-secret-generator for creating secrets
1308
** Needed so that secrets are not stored in the git repository, but only in the cluster
1309
* "ungleich-certbot":https://hub.docker.com/repository/docker/ungleich/ungleich-certbot
1310
** Needed to get letsencrypt certificates for services
1311
* "rook with ceph rbd + cephfs":https://rook.io/ for storage
1312
** rbd for almost everything, *ReadWriteOnce*
1313
** cephfs for smaller things, multi access *ReadWriteMany*
1314
** Needed for providing persistent storage
1315
* "flux v2":https://fluxcd.io/
1316
** Needed to manage resources automatically