
h1. The ungleich kubernetes infrastructure and ungleich kubernetes manual

{{toc}}

h2. Status

This document is **pre-production**.
This document is to become the ungleich kubernetes infrastructure overview as well as the ungleich kubernetes manual.

h2. k8s clusters

| Cluster            | Purpose/Setup     | Maintainer | Master(s)                     | argo                                                   | v4 http proxy | last verified |
| c0.k8s.ooo         | Dev               | -          | UNUSED                        |                                                        |               |    2021-10-05 |
| c1.k8s.ooo         | retired           |            | -                             |                                                        |               |    2022-03-15 |
| c2.k8s.ooo         | Dev p7 HW         | Nico       | server47 server53 server54    | "argo":https://argocd-server.argocd.svc.c2.k8s.ooo     |               |    2021-10-05 |
| c3.k8s.ooo         | retired           | -          | -                             |                                                        |               |    2021-10-05 |
| c4.k8s.ooo         | Dev2 p7 HW        | Jin-Guk    | server52 server53 server54    |                                                        |               |             - |
| c5.k8s.ooo         | retired           |            | -                             |                                                        |               |    2022-03-15 |
| c6.k8s.ooo         | Dev p6 VM Jin-Guk | Jin-Guk    |                               |                                                        |               |               |
| [[p5.k8s.ooo]]     | production        |            | server34 server36 server38    | "argo":https://argocd-server.argocd.svc.p5.k8s.ooo     | -             |               |
| [[p5-cow.k8s.ooo]] | production        | Nico       | server47 server51 server55    | "argo":https://argocd-server.argocd.svc.p5-cow.k8s.ooo |               |    2022-08-27 |
| [[p6.k8s.ooo]]     | production        |            | server67 server69 server71    | "argo":https://argocd-server.argocd.svc.p6.k8s.ooo     | 147.78.194.13 |    2021-10-05 |
| [[p6-cow.k8s.ooo]] | production        |            | server134 server135 server136 | "argo":https://argocd-server.argocd.svc.p6in10.k8s.ooo | ?             |    2023-05-17 |
| [[p10.k8s.ooo]]    | production        |            | server131 server132 server133 | "argo":https://argocd-server.argocd.svc.p10.k8s.ooo    | 147.78.194.12 |    2021-10-05 |
| [[k8s.ge.nau.so]]  | development       |            | server107 server108 server109 | "argo":https://argocd-server.argocd.svc.k8s.ge.nau.so  |               |               |
| [[dev.k8s.ooo]]    | development       |            | server110 server111 server112 | "argo":https://argocd-server.argocd.svc.dev.k8s.ooo    | -             |    2022-07-08 |
| [[r1r2p15k8sooo|r1.p15.k8s.ooo]] | production | Nico | server120 | | | 2022-10-30 |
| [[r1r2p15k8sooo|r2.p15.k8s.ooo]] | production | Nico | server121 | | | 2022-09-06 |
| [[r1r2p10k8sooo|r1.p10.k8s.ooo]] | production | Nico | server122 | | | 2022-10-30 |
| [[r1r2p10k8sooo|r2.p10.k8s.ooo]] | production | Nico | server123 | | | 2022-10-15 |
| [[r1r2p5k8sooo|r1.p5.k8s.ooo]] | production | Nico | server137 | | | 2022-10-30 |
| [[r1r2p5k8sooo|r2.p5.k8s.ooo]] | production | Nico | server138 | | | 2022-10-30 |
| [[r1r2p6k8sooo|r1.p6.k8s.ooo]] | production | Nico | server139 | | | 2022-10-30 |
| [[r1r2p6k8sooo|r2.p6.k8s.ooo]] | production | Nico | server140 | | | 2022-10-30 |

h2. General architecture and components overview

* All k8s clusters are IPv6 only
* We use BGP peering to propagate podcidr and serviceCidr networks to our infrastructure
* The main public testing repository is "ungleich-k8s":https://code.ungleich.ch/ungleich-public/ungleich-k8s
** Private configurations are found in the **k8s-config** repository

h3. Cluster types

| **Type/Feature**            | **Development**                | **Production**         |
| Min No. nodes               | 3 (1 master, 3 worker)         | 5 (3 master, 3 worker) |
| Recommended minimum         | 4 (dedicated master, 3 worker) | 8 (3 master, 5 worker) |
| Separation of control plane | optional                       | recommended            |
| Persistent storage          | required                       | required               |
| Number of storage monitors  | 3                              | 5                      |

h2. General k8s operations

h3. Cheat sheet / external great references

* "kubectl cheatsheet":https://kubernetes.io/docs/reference/kubectl/cheatsheet/

h3. Allowing to schedule work on the control plane / removing node taints

* Mostly for single node / test / development clusters
* Just remove the master taint as follows

<pre>
kubectl taint nodes --all node-role.kubernetes.io/master-
kubectl taint nodes --all node-role.kubernetes.io/control-plane-
</pre>

You can check the node taints using @kubectl describe node ...@

h3. Get the cluster admin.conf

* On the masters of each cluster you can find the file @/etc/kubernetes/admin.conf@
* To be able to administrate the cluster you can copy the admin.conf to your local machine
* Multi-cluster debugging becomes very easy if you name the config ~/cX-admin.conf (see example below)

<pre>
% scp root@server47.place7.ungleich.ch:/etc/kubernetes/admin.conf ~/c2-admin.conf
% export KUBECONFIG=~/c2-admin.conf
% kubectl get nodes
NAME       STATUS                     ROLES                  AGE   VERSION
server47   Ready                      control-plane,master   82d   v1.22.0
server48   Ready                      control-plane,master   82d   v1.22.0
server49   Ready                      <none>                 82d   v1.22.0
server50   Ready                      <none>                 82d   v1.22.0
server59   Ready                      control-plane,master   82d   v1.22.0
server60   Ready,SchedulingDisabled   <none>                 82d   v1.22.0
server61   Ready                      <none>                 82d   v1.22.0
server62   Ready                      <none>                 82d   v1.22.0
</pre>

h3. Installing a new k8s cluster

* Decide on the cluster name (usually *cX.k8s.ooo*), X counting upwards
** Using pXX.k8s.ooo for production clusters of placeXX
* Use cdist to configure the nodes with requirements like crio
* Decide between single or multi node control plane setups (see below)
** Single control plane suitable for development clusters

Typical init procedure:

* Single control plane: @kubeadm init --config bootstrap/XXX/kubeadm.yaml@
* Multi control plane (HA): @kubeadm init --config bootstrap/XXX/kubeadm.yaml --upload-certs@

h3. Deleting a pod that is hanging in terminating state

<pre>
kubectl delete pod <PODNAME> --grace-period=0 --force --namespace <NAMESPACE>
</pre>

(from https://stackoverflow.com/questions/35453792/pods-stuck-in-terminating-status)

h3. Listing nodes of a cluster

<pre>
[15:05] bridge:~% kubectl get nodes
NAME       STATUS   ROLES                  AGE   VERSION
server22   Ready    <none>                 52d   v1.22.0
server23   Ready    <none>                 52d   v1.22.2
server24   Ready    <none>                 52d   v1.22.0
server25   Ready    <none>                 52d   v1.22.0
server26   Ready    <none>                 52d   v1.22.0
server27   Ready    <none>                 52d   v1.22.0
server63   Ready    control-plane,master   52d   v1.22.0
server64   Ready    <none>                 52d   v1.22.0
server65   Ready    control-plane,master   52d   v1.22.0
server66   Ready    <none>                 52d   v1.22.0
server83   Ready    control-plane,master   52d   v1.22.0
server84   Ready    <none>                 52d   v1.22.0
server85   Ready    <none>                 52d   v1.22.0
server86   Ready    <none>                 52d   v1.22.0
</pre>

h3. Removing / draining a node

Usually @kubectl drain server@ should do the job, but sometimes we need to be more aggressive:

<pre>
kubectl drain --delete-emptydir-data --ignore-daemonsets serverXX
</pre>

h3. Readding a node after draining

<pre>
kubectl uncordon serverXX
</pre>

h3. (Re-)joining worker nodes after creating the cluster

* We need to have an up-to-date token
* We use different join commands for the workers and control plane nodes

Generating the join command on an existing control plane node:

<pre>
kubeadm token create --print-join-command
</pre>

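The printed command is then run on the (re-)joining worker node, for example (endpoint taken from the control plane example below; token and hash are placeholders):

<pre>
kubeadm join p10-api.k8s.ooo:6443 --token TOKEN --discovery-token-ca-cert-hash sha256:HASH
</pre>
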
h3. (Re-)joining control plane nodes after creating the cluster

* We generate the token again
* We upload the certificates
* We need to combine/create the join command for the control plane node

Example session:

<pre>
% kubeadm token create --print-join-command
kubeadm join p10-api.k8s.ooo:6443 --token xmff4i.ABC --discovery-token-ca-cert-hash sha256:longhash

% kubeadm init phase upload-certs --upload-certs
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
CERTKEY

# Then we use these two outputs on the joining node:

kubeadm join p10-api.k8s.ooo:6443 --token xmff4i.ABC --discovery-token-ca-cert-hash sha256:longhash --control-plane --certificate-key CERTKEY
</pre>

Commands to be used on a control plane node:

<pre>
kubeadm token create --print-join-command
kubeadm init phase upload-certs --upload-certs
</pre>

Commands to be used on the joining node:

<pre>
JOINCOMMAND --control-plane --certificate-key CERTKEY
</pre>

SEE ALSO

* https://stackoverflow.com/questions/63936268/how-to-generate-kubeadm-token-for-secondary-control-plane-nodes
* https://blog.scottlowe.org/2019/08/15/reconstructing-the-join-command-for-kubeadm/

h3. How to fix etcd does not start when rejoining a kubernetes cluster as a control plane

If during the above step etcd does not come up, @kubeadm join@ can hang as follows:

<pre>
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[check-etcd] Checking that the etcd cluster is healthy
error execution phase check-etcd: etcd cluster is not healthy: failed to dial endpoint https://[2a0a:e5c0:10:1:225:b3ff:fe20:378a]:2379 with maintenance client: context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
</pre>

Then the problem is likely that the etcd server is still a member of the cluster. We first need to remove it from the etcd cluster and then the join works.

To fix this we do:

* Find a working etcd pod
* Find the etcd members / member list
* Remove the etcd member that we want to re-join the cluster

<pre>
# Find the etcd pods
kubectl -n kube-system get pods -l component=etcd,tier=control-plane

# Get the list of etcd servers with the member id
kubectl exec -n kube-system -ti ETCDPODNAME -- etcdctl --endpoints '[::1]:2379' --cacert /etc/kubernetes/pki/etcd/ca.crt --cert  /etc/kubernetes/pki/etcd/server.crt --key /etc/kubernetes/pki/etcd/server.key member list

# Remove the member
kubectl exec -n kube-system -ti ETCDPODNAME -- etcdctl --endpoints '[::1]:2379' --cacert /etc/kubernetes/pki/etcd/ca.crt --cert  /etc/kubernetes/pki/etcd/server.crt --key /etc/kubernetes/pki/etcd/server.key member remove MEMBERID
</pre>

Sample session:

<pre>
[10:48] line:~% kubectl -n kube-system get pods -l component=etcd,tier=control-plane
NAME            READY   STATUS    RESTARTS     AGE
etcd-server63   1/1     Running   0            3m11s
etcd-server65   1/1     Running   3            7d2h
etcd-server83   1/1     Running   8 (6d ago)   7d2h
[10:48] line:~% kubectl exec -n kube-system -ti etcd-server65 -- etcdctl --endpoints '[::1]:2379' --cacert /etc/kubernetes/pki/etcd/ca.crt --cert  /etc/kubernetes/pki/etcd/server.crt --key /etc/kubernetes/pki/etcd/server.key member list
356891cd676df6e4, started, server65, https://[2a0a:e5c0:10:1:225:b3ff:fe20:375c]:2380, https://[2a0a:e5c0:10:1:225:b3ff:fe20:375c]:2379, false
371b8a07185dee7e, started, server63, https://[2a0a:e5c0:10:1:225:b3ff:fe20:378a]:2380, https://[2a0a:e5c0:10:1:225:b3ff:fe20:378a]:2379, false
5942bc58307f8af9, started, server83, https://[2a0a:e5c0:10:1:3e4a:92ff:fe79:bb98]:2380, https://[2a0a:e5c0:10:1:3e4a:92ff:fe79:bb98]:2379, false

[10:48] line:~% kubectl exec -n kube-system -ti etcd-server65 -- etcdctl --endpoints '[::1]:2379' --cacert /etc/kubernetes/pki/etcd/ca.crt --cert  /etc/kubernetes/pki/etcd/server.crt --key /etc/kubernetes/pki/etcd/server.key member remove 371b8a07185dee7e
Member 371b8a07185dee7e removed from cluster e3c0805f592a8f77
</pre>

SEE ALSO

* We found the solution using https://stackoverflow.com/questions/67921552/re-installed-node-cannot-join-kubernetes-cluster

h3. Node labels (adding, showing, removing)

Listing the labels:

<pre>
kubectl get nodes --show-labels
</pre>

Adding labels:

<pre>
kubectl label nodes LIST-OF-NODES label1=value1
</pre>

For instance:

<pre>
kubectl label nodes router2 router3 hosttype=router
</pre>

Selecting nodes in pods:

<pre>
apiVersion: v1
kind: Pod
...
spec:
  nodeSelector:
    hosttype: router
</pre>

Removing labels by adding a minus at the end of the label name:

<pre>
kubectl label node <nodename> <labelname>-
</pre>

For instance:

<pre>
kubectl label nodes router2 router3 hosttype-
</pre>

SEE ALSO

* https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/
* https://stackoverflow.com/questions/34067979/how-to-delete-a-node-label-by-command-and-api

h3. Hardware Maintenance using ungleich-hardware

Use the following manifest and replace the HOST with the actual host:

<pre>
apiVersion: v1
kind: Pod
metadata:
  name: ungleich-hardware-HOST
spec:
  containers:
  - name: ungleich-hardware
    image: ungleich/ungleich-hardware:0.0.5
    args:
    - sleep
    - "1000000"
    volumeMounts:
      - mountPath: /dev
        name: dev
    securityContext:
      privileged: true
  nodeSelector:
    kubernetes.io/hostname: "HOST"

  volumes:
    - name: dev
      hostPath:
        path: /dev
</pre>

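A short usage sketch, assuming the manifest above was saved as @ungleich-hardware.yaml@ with HOST replaced by serverXX and that the image ships a shell:

<pre>
kubectl apply -f ungleich-hardware.yaml
kubectl exec -ti ungleich-hardware-serverXX -- /bin/sh

# Clean up once the maintenance is done
kubectl delete pod ungleich-hardware-serverXX
</pre>
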
Also see: [[The_ungleich_hardware_maintenance_guide]]

h3. Triggering a cronjob / creating a job from a cronjob

To test a cronjob, we can create a job from a cronjob:

<pre>
kubectl create job --from=cronjob/volume2-daily-backup volume2-manual
</pre>

This creates a job volume2-manual based on the cronjob volume2-daily-backup.

h3. su-ing into a user that has nologin shell set

Users often have nologin set as their shell inside the container. To be able to execute maintenance commands within the
container, we can use @su -s /bin/sh@ like this:

<pre>
su -s /bin/sh -c '/path/to/your/script' testuser
</pre>

Found on https://serverfault.com/questions/351046/how-to-run-command-as-user-who-has-usr-sbin-nologin-as-shell

h3. How to print a secret value

Assuming you want the "password" item from a secret, use:

<pre>
kubectl get secret SECRETNAME -o jsonpath="{.data.password}" | base64 -d; echo ""
</pre>

h3. How to upgrade a kubernetes cluster

h4. General

* Should be done every X months to stay up-to-date
** X probably something like 3-6
* kubeadm based clusters
* Needs specific kubeadm versions for upgrade
* Follow instructions on https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/
* Finding releases: https://github.com/kubernetes/kubernetes/tree/master/CHANGELOG

h4. Getting a specific kubeadm or kubelet version

<pre>
RELEASE=v1.22.17
RELEASE=v1.23.17
RELEASE=v1.24.9
RELEASE=v1.25.9
RELEASE=v1.26.6
RELEASE=v1.27.2

ARCH=amd64

curl -L --remote-name-all https://dl.k8s.io/release/${RELEASE}/bin/linux/${ARCH}/{kubeadm,kubelet}
chmod u+x kubeadm kubelet
</pre>

h4. Steps

* kubeadm upgrade plan
** On one control plane node
* kubeadm upgrade apply vXX.YY.ZZ
** On one control plane node
* kubeadm upgrade node
** On all other control plane nodes
** On all worker nodes afterwards

Repeat for all control plane nodes, then upgrade the kubelet on all other nodes via the package manager. A condensed sketch of the sequence is shown below.

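A condensed sketch of that sequence, assuming the kubeadm/kubelet binaries downloaded above and v1.27.2 as the target version (kubelet path and service manager differ per node OS and are assumptions here):

<pre>
# On the first control plane node
kubeadm upgrade plan
kubeadm upgrade apply v1.27.2

# On every other control plane node, afterwards on each worker node
kubeadm upgrade node

# Per node: drain, replace/restart the kubelet, uncordon
kubectl drain serverXX --ignore-daemonsets --delete-emptydir-data
cp kubelet /usr/bin/kubelet && rc-service kubelet restart   # or: systemctl restart kubelet
kubectl uncordon serverXX
</pre>
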
h4. Upgrading to 1.22.17

* https://v1-22.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/
* No special notes
* Done for p6 on 2023-10-04

h4. Upgrading to 1.23.17

* https://v1-23.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/
* No special notes
* Done for p6 on 2023-10-04

h4. Upgrading to 1.24.17

* https://v1-24.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/
* No special notes
* Done for p6 on 2023-10-04

h4. Upgrading to 1.25.14

* https://v1-25.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/
* No special notes
* Done for p6 on 2023-10-04

h4. Upgrading to 1.26.9

* https://v1-26.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/
* No special notes
* Done for p6 on 2023-10-04

h4. Upgrade to kubernetes 1.27

* https://v1-27.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/
* kubelet will not start anymore
* reason: @"command failed" err="failed to parse kubelet flag: unknown flag: --container-runtime"@
* /var/lib/kubelet/kubeadm-flags.env contains that parameter
* remove it, then start kubelet (see the sketch below)

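A hedged sketch of that fix (check the file with an editor first; the exact flag value in @kubeadm-flags.env@ varies per node and the @sed@ expression is an assumption):

<pre>
cat /var/lib/kubelet/kubeadm-flags.env

# Drop the obsolete flag (typically --container-runtime=remote), keeping a backup
sed -i.bak 's/--container-runtime=remote //' /var/lib/kubelet/kubeadm-flags.env

# Start kubelet again (OpenRC; use systemctl on systemd based nodes)
rc-service kubelet restart
</pre>
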
h4. Upgrade to kubernetes 1.28

* https://v1-28.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/

h4. Upgrade to crio 1.27: missing crun

Error message:

<pre>
level=fatal msg="validating runtime config: runtime validation: \"crun\" not found in $PATH: exec: \"crun\": executable file not found in $PATH"
</pre>

Fix:

<pre>
apk add crun
</pre>

h2. Reference CNI

* Mainly "stupid", but effective plugins
* Main documentation on https://www.cni.dev/plugins/current/
* Plugins
** bridge
*** Can create the bridge on the host
*** But seems not to be able to add host interfaces to it as well
*** Has support for vlan tags
** vlan
*** creates vlan tagged sub interface on the host (see the example config below)
*** "It's a 1:1 mapping (i.e. no bridge in between)":https://github.com/k8snetworkplumbingwg/multus-cni/issues/569
** host-device
*** moves the interface from the host into the container
*** very easy for physical connections to containers
** ipvlan
*** "virtualisation" of a host device
*** routing based on IP
*** Same MAC for everyone
*** Cannot reach the master interface
** macvlan
*** With mac addresses
*** Supports various modes (to be checked)
** ptp ("point to point")
*** Creates a host device and connects it to the container
** win*
*** Windows implementations

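To illustrate how these plugins are configured, a minimal vlan plugin configuration could look like the following (interface name, VLAN id and addresses are made-up values, not taken from our infrastructure):

<pre>
{
  "cniVersion": "1.0.0",
  "name": "vlan-example",
  "type": "vlan",
  "master": "eth0",
  "vlanId": 100,
  "ipam": {
    "type": "static",
    "addresses": [
      { "address": "2001:db8:42::10/64", "gateway": "2001:db8:42::1" }
    ]
  }
}
</pre>
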
h2. Calico CNI

h3. Calico Installation

* We install "calico using helm":https://docs.projectcalico.org/getting-started/kubernetes/helm
* This has the following advantages:
** Easy to upgrade
** Does not require the OS to configure IPv6/dual stack settings, as the tigera operator figures things out on its own

Usually plain calico can be installed directly using:

<pre>
VERSION=v3.25.0

helm repo add projectcalico https://docs.projectcalico.org/charts
helm repo update
helm upgrade --install --namespace tigera calico projectcalico/tigera-operator --version $VERSION --create-namespace
</pre>

* Check the tags on https://github.com/projectcalico/calico/tags for the latest release

h3. Installing calicoctl

* General installation instructions, including binary download: https://projectcalico.docs.tigera.io/maintenance/clis/calicoctl/install

To be able to manage and configure calico, we need to
"install calicoctl (we choose the variant running as a pod)":https://docs.projectcalico.org/getting-started/clis/calicoctl/install#install-calicoctl-as-a-kubernetes-pod

<pre>
kubectl apply -f https://docs.projectcalico.org/manifests/calicoctl.yaml
</pre>

Or version specific:

<pre>
kubectl apply -f https://github.com/projectcalico/calico/blob/v3.20.4/manifests/calicoctl.yaml

# For 3.22
kubectl apply -f https://projectcalico.docs.tigera.io/archive/v3.22/manifests/calicoctl.yaml
</pre>

And making it more easily accessible via an alias:

<pre>
alias calicoctl="kubectl exec -i -n kube-system calicoctl -- /calicoctl"
</pre>

h3. Calico configuration

By default our k8s clusters "BGP peer":https://docs.projectcalico.org/networking/bgp
with an upstream router to propagate podcidr and servicecidr.

Default settings in our infrastructure:

* We use a full-mesh using the @nodeToNodeMeshEnabled: true@ option
* We keep the original next hop so that *only* the server with the pod is announcing it (instead of ecmp)
* We use private ASNs for k8s clusters
* We do *not* use any overlay

After installing calico and calicoctl the last step of the installation is usually:

<pre>
calicoctl create -f - < calico-bgp.yaml
</pre>

A sample BGP configuration:

<pre>
---
apiVersion: projectcalico.org/v3
kind: BGPConfiguration
metadata:
  name: default
spec:
  logSeverityScreen: Info
  nodeToNodeMeshEnabled: true
  asNumber: 65534
  serviceClusterIPs:
  - cidr: 2a0a:e5c0:10:3::/108
  serviceExternalIPs:
  - cidr: 2a0a:e5c0:10:3::/108
---
apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  name: router1-place10
spec:
  peerIP: 2a0a:e5c0:10:1::50
  asNumber: 213081
  keepOriginalNextHop: true
</pre>

h2. Cilium CNI (experimental)

h3. Status

*NO WORKING CILIUM CONFIGURATION FOR IPV6 only modes*

h3. Latest error

It seems cilium does not run on IPv6 only hosts:

<pre>
level=info msg="Validating configured node address ranges" subsys=daemon
level=fatal msg="postinit failed" error="external IPv4 node address could not be derived, please configure via --ipv4-node" subsys=daemon
level=info msg="Starting IP identity watcher" subsys=ipcache
</pre>

It crashes after that log entry.

h3. BGP configuration

* The cilium-operator will not start without a correct configmap being present beforehand (see error message below)
* Creating the bgp config beforehand as a configmap is thus required.

The error one gets without the configmap present:

Pods are hanging with:

<pre>
cilium-bpqm6                       0/1     Init:0/4            0             9s
cilium-operator-5947d94f7f-5bmh2   0/1     ContainerCreating   0             9s
</pre>

The error message in the cilium-operator is:

<pre>
Events:
  Type     Reason       Age                From               Message
  ----     ------       ----               ----               -------
  Normal   Scheduled    80s                default-scheduler  Successfully assigned kube-system/cilium-operator-5947d94f7f-lqcsp to server56
  Warning  FailedMount  16s (x8 over 80s)  kubelet            MountVolume.SetUp failed for volume "bgp-config-path" : configmap "bgp-config" not found
</pre>

A correct bgp config looks like this:

<pre>
apiVersion: v1
kind: ConfigMap
metadata:
  name: bgp-config
  namespace: kube-system
data:
  config.yaml: |
    peers:
      - peer-address: 2a0a:e5c0::46
        peer-asn: 209898
        my-asn: 65533
      - peer-address: 2a0a:e5c0::47
        peer-asn: 209898
        my-asn: 65533
    address-pools:
      - name: default
        protocol: bgp
        addresses:
          - 2a0a:e5c0:0:14::/64
</pre>

h3. Installation

Adding the repo:

<pre>
helm repo add cilium https://helm.cilium.io/
helm repo update
</pre>

Installing + configuring cilium:

<pre>
ipv6pool=2a0a:e5c0:0:14::/112

version=1.12.2

helm upgrade --install cilium cilium/cilium --version $version \
  --namespace kube-system \
  --set ipv4.enabled=false \
  --set ipv6.enabled=true \
  --set enableIPv6Masquerade=false \
  --set bgpControlPlane.enabled=true

#  --set ipam.operator.clusterPoolIPv6PodCIDRList=$ipv6pool

# Old style bgp?
#   --set bgp.enabled=true --set bgp.announce.podCIDR=true \

# Show possible configuration options
helm show values cilium/cilium
</pre>

Using a /64 for ipam.operator.clusterPoolIPv6PodCIDRList fails with:

<pre>
level=fatal msg="Unable to init cluster-pool allocator" error="unable to initialize IPv6 allocator New CIDR set failed; the node CIDR size is too big" subsys=cilium-operator-generic
</pre>

See also https://github.com/cilium/cilium/issues/20756

A /112 seems to actually work.

h3. Kernel modules

Cilium requires the following modules to be loaded on the host (not loaded by default):

<pre>
modprobe ip6table_raw
modprobe ip6table_filter
</pre>

h3. Interesting helm flags

* autoDirectNodeRoutes
* bgpControlPlane.enabled = true

h3. SEE ALSO

* https://docs.cilium.io/en/v1.12/helm-reference/

h2. Multus

* https://github.com/k8snetworkplumbingwg/multus-cni
* Installing a deployment w/ CRDs

<pre>
VERSION=v4.0.1

kubectl apply -f https://raw.githubusercontent.com/k8snetworkplumbingwg/multus-cni/${VERSION}/deployments/multus-daemonset-crio.yml
</pre>

h2. ArgoCD

h3. Argocd Installation

* See https://argo-cd.readthedocs.io/en/stable/

As there is no configuration management present yet, argocd is installed using:

<pre>
kubectl create namespace argocd

# Latest stable
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

# OR specific version
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/v2.3.2/manifests/install.yaml
</pre>

h3. Get the argocd credentials

<pre>
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d; echo ""
</pre>

h3. Accessing argocd

In regular IPv6 clusters:

* Navigate to https://argocd-server.argocd.CLUSTERDOMAIN

In legacy IPv4 clusters:

<pre>
kubectl --namespace argocd port-forward svc/argocd-server 8080:80
</pre>

* Navigate to https://localhost:8080

h3. Using the argocd webhook to trigger changes

* To trigger changes, post json to https://argocd.example.com/api/webhook (see the sketch below)

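A minimal sketch of such a trigger using curl, assuming a GitHub/Gitea style push payload (repository URL and branch are placeholders; the exact fields argocd evaluates should be checked against its webhook documentation):

<pre>
curl -X POST https://argocd.example.com/api/webhook \
  -H 'Content-Type: application/json' \
  -H 'X-GitHub-Event: push' \
  -d '{"ref": "refs/heads/main",
       "repository": {"html_url": "https://code.ungleich.ch/ungleich-intern/k8s-config",
                      "default_branch": "main"}}'
</pre>
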
h3. Deploying an application

* Applications are deployed via git towards gitea (code.ungleich.ch) and then pulled by argo
* Always include the *redmine-url* pointing to the (customer) ticket
** Also add the support-url if it exists

Application sample:

<pre>
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: gitea-CUSTOMER
  namespace: argocd
spec:
  destination:
    namespace: default
    server: 'https://kubernetes.default.svc'
  source:
    path: apps/prod/gitea
    repoURL: 'https://code.ungleich.ch/ungleich-intern/k8s-config.git'
    targetRevision: HEAD
    helm:
      parameters:
        - name: storage.data.storageClass
          value: rook-ceph-block-hdd
        - name: storage.data.size
          value: 200Gi
        - name: storage.db.storageClass
          value: rook-ceph-block-ssd
        - name: storage.db.size
          value: 10Gi
        - name: storage.letsencrypt.storageClass
          value: rook-ceph-block-hdd
        - name: storage.letsencrypt.size
          value: 50Mi
        - name: letsencryptStaging
          value: 'no'
        - name: fqdn
          value: 'code.verua.online'
  project: default
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
  info:
    - name: 'redmine-url'
      value: 'https://redmine.ungleich.ch/issues/ISSUEID'
    - name: 'support-url'
      value: 'https://support.ungleich.ch/Ticket/Display.html?id=TICKETID'
</pre>

h2. Helm related operations and conventions
817 55 Nico Schottelius
818 61 Nico Schottelius
We use helm charts extensively.
819
820
* In production, they are managed via argocd
821
* In development, helm chart can de developed and deployed manually using the helm utility.
822
823 55 Nico Schottelius
h3. Installing a helm chart
824
825
One can use the usual pattern of
826
827
<pre>
828
helm install <releasename> <chartdirectory>
829
</pre>
830
831
However often you want to reinstall/update when testing helm charts. The following pattern is "better", because it allows you to reinstall, if it is already installed:
832
833
<pre>
834
helm upgrade --install <releasename> <chartdirectory>
835 1 Nico Schottelius
</pre>
836 80 Nico Schottelius
837
h3. Naming services and deployments in helm charts [Application labels]
838
839
* We always have {{ .Release.Name }} to identify the current "instance"
840
* Deployments:
841
** use @app: <what it is>@, f.i. @app: nginx@, @app: postgres@, ...
842 81 Nico Schottelius
* See more about standard labels on
843
** https://kubernetes.io/docs/concepts/overview/working-with-objects/common-labels/
844
** https://helm.sh/docs/chart_best_practices/labels/
845 55 Nico Schottelius
846 151 Nico Schottelius
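A minimal template sketch of one possible reading of that convention (resource and label values are made up for illustration):

<pre>
apiVersion: apps/v1
kind: Deployment
metadata:
  # e.g. "gitea-customer1-nginx" for a release named "gitea-customer1"
  name: {{ .Release.Name }}-nginx
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
...
</pre>
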
h3. Show all versions of a helm chart
847
848
<pre>
849
helm search repo -l repo/chart
850
</pre>
851
852
For example:
853
854
<pre>
855
% helm search repo -l projectcalico/tigera-operator 
856
NAME                         	CHART VERSION	APP VERSION	DESCRIPTION                            
857
projectcalico/tigera-operator	v3.23.3      	v3.23.3    	Installs the Tigera operator for Calico
858
projectcalico/tigera-operator	v3.23.2      	v3.23.2    	Installs the Tigera operator for Calico
859
....
860
</pre>
861
862 152 Nico Schottelius
h3. Show possible values of a chart
863
864
<pre>
865
helm show values <repo/chart>
866
</pre>
867
868
Example:
869
870
<pre>
871
helm show values ingress-nginx/ingress-nginx
872
</pre>
873
874 178 Nico Schottelius
h3. Download a chart
875
876
For instance for checking it out locally. Use:
877
878
<pre>
879
helm pull <repo/chart>
880
</pre>
881 152 Nico Schottelius
882 139 Nico Schottelius
h2. Rook + Ceph
883
884
h3. Installation
885
886
* Usually directly via argocd
887
888 71 Nico Schottelius
h3. Executing ceph commands
889
890
Using the ceph-tools pod as follows:
891
892
<pre>
893
kubectl exec -n rook-ceph -ti $(kubectl -n rook-ceph get pods -l app=rook-ceph-tools -o jsonpath='{.items[*].metadata.name}') -- ceph -s
894
</pre>
895
896 43 Nico Schottelius
h3. Inspecting the logs of a specific server
897
898
<pre>
899
# Get the related pods
900
kubectl -n rook-ceph get pods -l app=rook-ceph-osd-prepare 
901
...
902
903
# Inspect the logs of a specific pod
904
kubectl -n rook-ceph logs -f rook-ceph-osd-prepare-server23--1-444qx
905
906 71 Nico Schottelius
</pre>
907
908
h3. Inspecting the logs of the rook-ceph-operator
909
910
<pre>
911
kubectl -n rook-ceph logs -f -l app=rook-ceph-operator
912 43 Nico Schottelius
</pre>
913
914 121 Nico Schottelius
h3. Restarting the rook operator
915
916
<pre>
917
kubectl -n rook-ceph delete pods  -l app=rook-ceph-operator
918
</pre>
919
920 43 Nico Schottelius
h3. Triggering server prepare / adding new osds
921
922
The rook-ceph-operator triggers/watches/creates pods to maintain hosts. To trigger a full "re scan", simply delete that pod:
923
924
<pre>
925
kubectl -n rook-ceph delete pods -l app=rook-ceph-operator
926
</pre>
927
928
This will cause all the @rook-ceph-osd-prepare-..@ jobs to be recreated and thus OSDs to be created, if new disks have been added.
929
930
h3. Removing an OSD
931
932
* See "Ceph OSD Management":https://rook.io/docs/rook/v1.7/ceph-osd-mgmt.html
933 77 Nico Schottelius
* More specifically: https://github.com/rook/rook/blob/release-1.7/cluster/examples/kubernetes/ceph/osd-purge.yaml
934 99 Nico Schottelius
* Then delete the related deployment
935 41 Nico Schottelius
936 98 Nico Schottelius
Set osd id in the osd-purge.yaml and apply it. OSD should be down before.
937
938
<pre>
939
apiVersion: batch/v1
940
kind: Job
941
metadata:
942
  name: rook-ceph-purge-osd
943
  namespace: rook-ceph # namespace:cluster
944
  labels:
945
    app: rook-ceph-purge-osd
946
spec:
947
  template:
948
    metadata:
949
      labels:
950
        app: rook-ceph-purge-osd
951
    spec:
952
      serviceAccountName: rook-ceph-purge-osd
953
      containers:
954
        - name: osd-removal
955
          image: rook/ceph:master
956
          # TODO: Insert the OSD ID in the last parameter that is to be removed
957
          # The OSD IDs are a comma-separated list. For example: "0" or "0,2".
958
          # If you want to preserve the OSD PVCs, set `--preserve-pvc true`.
959
          #
960
          # A --force-osd-removal option is available if the OSD should be destroyed even though the
961
          # removal could lead to data loss.
962
          args:
963
            - "ceph"
964
            - "osd"
965
            - "remove"
966
            - "--preserve-pvc"
967
            - "false"
968
            - "--force-osd-removal"
969
            - "false"
970
            - "--osd-ids"
971
            - "SETTHEOSDIDHERE"
972
          env:
973
            - name: POD_NAMESPACE
974
              valueFrom:
975
                fieldRef:
976
                  fieldPath: metadata.namespace
977
            - name: ROOK_MON_ENDPOINTS
978
              valueFrom:
979
                configMapKeyRef:
980
                  key: data
981
                  name: rook-ceph-mon-endpoints
982
            - name: ROOK_CEPH_USERNAME
983
              valueFrom:
984
                secretKeyRef:
985
                  key: ceph-username
986
                  name: rook-ceph-mon
987
            - name: ROOK_CEPH_SECRET
988
              valueFrom:
989
                secretKeyRef:
990
                  key: ceph-secret
991
                  name: rook-ceph-mon
992
            - name: ROOK_CONFIG_DIR
993
              value: /var/lib/rook
994
            - name: ROOK_CEPH_CONFIG_OVERRIDE
995
              value: /etc/rook/config/override.conf
996
            - name: ROOK_FSID
997
              valueFrom:
998
                secretKeyRef:
999
                  key: fsid
1000
                  name: rook-ceph-mon
1001
            - name: ROOK_LOG_LEVEL
1002
              value: DEBUG
1003
          volumeMounts:
1004
            - mountPath: /etc/ceph
1005
              name: ceph-conf-emptydir
1006
            - mountPath: /var/lib/rook
1007
              name: rook-config
1008
      volumes:
1009
        - emptyDir: {}
1010
          name: ceph-conf-emptydir
1011
        - emptyDir: {}
1012
          name: rook-config
1013
      restartPolicy: Never
1014
1015
1016 99 Nico Schottelius
</pre>
1017
1018 1 Nico Schottelius
Deleting the deployment:
1019
1020
<pre>
1021
[18:05] bridge:~% kubectl -n rook-ceph delete deployment rook-ceph-osd-6
1022 99 Nico Schottelius
deployment.apps "rook-ceph-osd-6" deleted
1023
</pre>
1024 185 Nico Schottelius
1025
h3. Placement of mons/osds/etc.
1026
1027
See https://rook.io/docs/rook/v1.11/CRDs/Cluster/ceph-cluster-crd/#placement-configuration-settings
1028 98 Nico Schottelius
1029 145 Nico Schottelius
h2. Ingress + Cert Manager
1030
1031
* We deploy "nginx-ingress":https://docs.nginx.com/nginx-ingress-controller/ to get an ingress
1032
* we deploy "cert-manager":https://cert-manager.io/ to handle certificates
1033
* We independently deploy @ClusterIssuer@ to allow the cert-manager app to deploy and the issuer to be created once the CRDs from cert manager are in place
1034
1035
h3. IPv4 reachability 
1036
1037
The ingress is by default IPv6 only. To make it reachable from the IPv4 world, get its IPv6 address and configure a NAT64 mapping in Jool.
1038
1039
Steps:
1040
1041
h4. Get the ingress IPv6 address
1042
1043
Use @kubectl -n ingress-nginx get svc ingress-nginx-controller -o jsonpath='{.spec.clusterIP}'; echo ''@
1044
1045
Example:
1046
1047
<pre>
1048
kubectl -n ingress-nginx get svc ingress-nginx-controller -o jsonpath='{.spec.clusterIP}'; echo ''
1049
2a0a:e5c0:10:1b::ce11
1050
</pre>
1051
1052
h4. Add NAT64 mapping
1053
1054
* Update the __dcl_jool_siit cdist type
1055
* Record the two IPs (IPv6 and IPv4)
1056
* Configure all routers
1057
1058
1059
h4. Add DNS record
1060
1061
To use the ingress capable as a CNAME destination, create an "ingress" DNS record, such as:
1062
1063
<pre>
1064
; k8s ingress for dev
1065
dev-ingress                 AAAA 2a0a:e5c0:10:1b::ce11
1066
dev-ingress                 A 147.78.194.23
1067
1068
</pre> 
1069
1070
h4. Add supporting wildcard DNS
1071
1072
If you plan to add various sites under a specific domain, we can add a wildcard DNS entry, such as *.k8s-dev.django-hosting.ch:
1073
1074
<pre>
1075
*.k8s-dev         CNAME dev-ingress.ungleich.ch.
1076
</pre>
1077
1078 76 Nico Schottelius
h2. Harbor
1079
1080 175 Nico Schottelius
* We user "Harbor":https://goharbor.io/ as an image registry for our own images. Internal app reference: apps/prod/harbor.
1081
* The admin password is in the password store, it is Harbor12345 by default
1082 76 Nico Schottelius
* At the moment harbor only authenticates against the internal ldap tree
1083
1084
h3. LDAP configuration
1085
1086
* The url needs to be ldaps://...
1087
* uid = uid
1088
* rest standard
1089 75 Nico Schottelius
1090 89 Nico Schottelius
h2. Monitoring / Prometheus
1091
1092 90 Nico Schottelius
* Via "kube-prometheus":https://github.com/prometheus-operator/kube-prometheus/
1093 89 Nico Schottelius
1094 91 Nico Schottelius
Access via ...
1095
1096
* http://prometheus-k8s.monitoring.svc:9090
1097
* http://grafana.monitoring.svc:3000
1098
* http://alertmanager.monitoring.svc:9093
1099
1100
1101 100 Nico Schottelius
h3. Prometheus Options
1102
1103
* "helm/kube-prometheus-stack":https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack
1104
** Includes dashboards and co.
1105
* "manifest based kube-prometheus":https://github.com/prometheus-operator/kube-prometheus
1106
** Includes dashboards and co.
1107
* "Prometheus Operator (mainly CRD manifest":https://github.com/prometheus-operator/prometheus-operator
1108
1109 171 Nico Schottelius
h3. Grafana default password
1110
1111
* If not changed: @prom-operator@
1112
1113 82 Nico Schottelius
h2. Nextcloud
1114
1115 85 Nico Schottelius
h3. How to get the nextcloud credentials 
1116 84 Nico Schottelius
1117
* The initial username is set to "nextcloud"
1118
* The password is autogenerated and saved in a kubernetes secret
1119
1120
<pre>
1121 85 Nico Schottelius
kubectl get secret RELEASENAME-nextcloud -o jsonpath="{.data.PASSWORD}" | base64 -d; echo "" 
1122 84 Nico Schottelius
</pre>
1123
1124 83 Nico Schottelius
h3. How to fix "Access through untrusted domain"
1125
1126 82 Nico Schottelius
* Nextcloud stores the initial domain configuration
1127 1 Nico Schottelius
* If the FQDN is changed, it will show the error message "Access through untrusted domain"
1128 82 Nico Schottelius
* To fix, edit /var/www/html/config/config.php and correct the domain
1129 1 Nico Schottelius
* Then delete the pods
1130 165 Nico Schottelius
1131
h3. Running occ commands inside the nextcloud container
1132
1133
* Find the pod in the right namespace
1134
1135
Exec:
1136
1137
<pre>
1138
su www-data -s /bin/sh -c ./occ
1139
</pre>
1140
1141
* -s /bin/sh is needed as the default shell is set to /bin/false
1142
1143 166 Nico Schottelius
h4. Rescanning files
1144 165 Nico Schottelius
1145 166 Nico Schottelius
* If files have been added without nextcloud's knowledge
1146
1147
<pre>
1148
su www-data -s /bin/sh -c "./occ files:scan --all"
1149
</pre>
1150 82 Nico Schottelius
1151 1 Nico Schottelius
h2. Infrastructure versions
1152 35 Nico Schottelius
1153 57 Nico Schottelius
h3. ungleich kubernetes infrastructure v5 (2021-10)
1154 1 Nico Schottelius
1155 57 Nico Schottelius
Clusters are configured / setup in this order:
1156
1157
* Bootstrap via kubeadm
1158 59 Nico Schottelius
* "Networking via calico + BGP (non ECMP) using helm":https://docs.projectcalico.org/getting-started/kubernetes/helm
1159
* "ArgoCD for CD":https://argo-cd.readthedocs.io/en/stable/
1160
** "rook for storage via argocd":https://rook.io/
1161 58 Nico Schottelius
** haproxy for in IPv6-cluster-IPv4-to-IPv6 proxy via argocd
1162
** "kubernetes-secret-generator for in cluster secrets":https://github.com/mittwald/kubernetes-secret-generator
1163
** "ungleich-certbot managing certs and nginx":https://hub.docker.com/repository/docker/ungleich/ungleich-certbot
1164
1165 57 Nico Schottelius
1166
h3. ungleich kubernetes infrastructure v4 (2021-09)
1167
1168 54 Nico Schottelius
* rook is configured via manifests instead of using the rook-ceph-cluster helm chart
1169 1 Nico Schottelius
* The rook operator is still being installed via helm
1170 35 Nico Schottelius
1171 57 Nico Schottelius
h3. ungleich kubernetes infrastructure v3 (2021-07)
1172 1 Nico Schottelius
1173 10 Nico Schottelius
* rook is now installed via helm via argocd instead of directly via manifests
1174 28 Nico Schottelius
1175 57 Nico Schottelius
h3. ungleich kubernetes infrastructure v2 (2021-05)
1176 28 Nico Schottelius
1177
* Replaced fluxv2 from ungleich k8s v1 with argocd
1178 1 Nico Schottelius
** argocd can apply helm templates directly without needing to go through Chart releases
1179 28 Nico Schottelius
* We are also using argoflow for build flows
1180
* Planned to add "kaniko":https://github.com/GoogleContainerTools/kaniko for image building
1181
1182 57 Nico Schottelius
h3. ungleich kubernetes infrastructure v1 (2021-01)
1183 28 Nico Schottelius
1184
We are using the following components:
1185
1186
* "Calico as a CNI":https://www.projectcalico.org/ with BGP, IPv6 only, no encapsulation
1187
** Needed for basic networking
1188
* "kubernetes-secret-generator":https://github.com/mittwald/kubernetes-secret-generator for creating secrets
1189
** Needed so that secrets are not stored in the git repository, but only in the cluster
1190
* "ungleich-certbot":https://hub.docker.com/repository/docker/ungleich/ungleich-certbot
1191
** Needed to get letsencrypt certificates for services
1192
* "rook with ceph rbd + cephfs":https://rook.io/ for storage
1193
** rbd for almost everything, *ReadWriteOnce*
1194
** cephfs for smaller things, multi access *ReadWriteMany*
1195
** Needed for providing persistent storage
1196
* "flux v2":https://fluxcd.io/
1197
** Needed to manage resources automatically