The ungleich kubernetes infrastructure » History » Version 201

Nico Schottelius, 10/31/2023 01:05 PM

h1. The ungleich kubernetes infrastructure and ungleich kubernetes manual

{{toc}}

h2. Status

This document is **pre-production**.

This document is to become the ungleich kubernetes infrastructure overview as well as the ungleich kubernetes manual.

h2. k8s clusters

| Cluster            | Purpose/Setup     | Maintainer | Master(s)                     | argo                                                   | v4 http proxy | last verified |
| c0.k8s.ooo         | Dev               | -          | UNUSED                        |                                                        |               |    2021-10-05 |
| c1.k8s.ooo         | retired           |            | -                             |                                                        |               |    2022-03-15 |
| c2.k8s.ooo         | Dev p7 HW         | Nico       | server47 server53 server54    | "argo":https://argocd-server.argocd.svc.c2.k8s.ooo     |               |    2021-10-05 |
| c3.k8s.ooo         | retired           | -          | -                             |                                                        |               |    2021-10-05 |
| c4.k8s.ooo         | Dev2 p7 HW        | Jin-Guk    | server52 server53 server54    |                                                        |               |             - |
| c5.k8s.ooo         | retired           |            | -                             |                                                        |               |    2022-03-15 |
| c6.k8s.ooo         | Dev p6 VM Jin-Guk | Jin-Guk    |                               |                                                        |               |               |
| [[p5.k8s.ooo]]     | production        |            | server34 server36 server38    | "argo":https://argocd-server.argocd.svc.p5.k8s.ooo     | -             |               |
| [[p5-cow.k8s.ooo]] | production        | Nico       | server47 server51 server55    | "argo":https://argocd-server.argocd.svc.p5-cow.k8s.ooo |               |    2022-08-27 |
| [[p6.k8s.ooo]]     | production        |            | server67 server69 server71    | "argo":https://argocd-server.argocd.svc.p6.k8s.ooo     | 147.78.194.13 |    2021-10-05 |
| [[p6-cow.k8s.ooo]] | production        |            | server134 server135 server136 | "argo":https://argocd-server.argocd.svc.p6in10.k8s.ooo | ?             |    2023-05-17 |
| [[p10.k8s.ooo]]    | production        |            | server131 server132 server133 | "argo":https://argocd-server.argocd.svc.p10.k8s.ooo    | 147.78.194.12 |    2021-10-05 |
| [[k8s.ge.nau.so]]  | development       |            | server107 server108 server109 | "argo":https://argocd-server.argocd.svc.k8s.ge.nau.so  |               |               |
| [[dev.k8s.ooo]]    | development       |            | server110 server111 server112 | "argo":https://argocd-server.argocd.svc.dev.k8s.ooo    | -             |    2022-07-08 |
| [[r1r2p15k8sooo|r1.p15.k8s.ooo]] | production | Nico | server120 | | | 2022-10-30 |
| [[r1r2p15k8sooo|r2.p15.k8s.ooo]] | production | Nico | server121 | | | 2022-09-06 |
| [[r1r2p10k8sooo|r1.p10.k8s.ooo]] | production | Nico | server122 | | | 2022-10-30 |
| [[r1r2p10k8sooo|r2.p10.k8s.ooo]] | production | Nico | server123 | | | 2022-10-15 |
| [[r1r2p5k8sooo|r1.p5.k8s.ooo]] | production | Nico | server137 | | | 2022-10-30 |
| [[r1r2p5k8sooo|r2.p5.k8s.ooo]] | production | Nico | server138 | | | 2022-10-30 |
| [[r1r2p6k8sooo|r1.p6.k8s.ooo]] | production | Nico | server139 | | | 2022-10-30 |
| [[r1r2p6k8sooo|r2.p6.k8s.ooo]] | production | Nico | server140 | | | 2022-10-30 |

h2. General architecture and components overview

* All k8s clusters are IPv6 only
* We use BGP peering to propagate podcidr and serviceCidr networks to our infrastructure
* The main public testing repository is "ungleich-k8s":https://code.ungleich.ch/ungleich-public/ungleich-k8s
** Private configurations are found in the **k8s-config** repository

h3. Cluster types

| **Type/Feature**            | **Development**                | **Production**         |
| Min No. nodes               | 3 (1 master, 3 worker)         | 5 (3 master, 3 worker) |
| Recommended minimum         | 4 (dedicated master, 3 worker) | 8 (3 master, 5 worker) |
| Separation of control plane | optional                       | recommended            |
| Persistent storage          | required                       | required               |
| Number of storage monitors  | 3                              | 5                      |

h2. General k8s operations

h3. Cheat sheet / great external references

* "kubectl cheatsheet":https://kubernetes.io/docs/reference/kubectl/cheatsheet/

h3. Allowing to schedule work on the control plane / removing node taints

* Mostly for single node / test / development clusters
* Just remove the master taint as follows

<pre>
kubectl taint nodes --all node-role.kubernetes.io/master-
kubectl taint nodes --all node-role.kubernetes.io/control-plane-
</pre>

You can check the node taints using @kubectl describe node ...@

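The taints of all nodes can also be listed in one overview; a small sketch using a custom-columns query (the @serverXX@ name is a placeholder in the style used elsewhere in this document):

```shell
# Show every node together with its taint keys; untainted nodes show <none>
kubectl get nodes -o custom-columns='NODE:.metadata.name,TAINTS:.spec.taints[*].key'

# Or inspect a single node in detail
kubectl describe node serverXX | grep -i taints
```
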
h3. Get the cluster admin.conf

* On the masters of each cluster you can find the file @/etc/kubernetes/admin.conf@
* To be able to administrate the cluster you can copy the admin.conf to your local machine
* Multi-cluster debugging becomes very easy if you name the config ~/cX-admin.conf (see the example below)

<pre>
% scp root@server47.place7.ungleich.ch:/etc/kubernetes/admin.conf ~/c2-admin.conf
% export KUBECONFIG=~/c2-admin.conf
% kubectl get nodes
NAME       STATUS                     ROLES                  AGE   VERSION
server47   Ready                      control-plane,master   82d   v1.22.0
server48   Ready                      control-plane,master   82d   v1.22.0
server49   Ready                      <none>                 82d   v1.22.0
server50   Ready                      <none>                 82d   v1.22.0
server59   Ready                      control-plane,master   82d   v1.22.0
server60   Ready,SchedulingDisabled   <none>                 82d   v1.22.0
server61   Ready                      <none>                 82d   v1.22.0
server62   Ready                      <none>                 82d   v1.22.0
</pre>

h3. Installing a new k8s cluster

* Decide on the cluster name (usually *cX.k8s.ooo*), X counting upwards
** Using pXX.k8s.ooo for production clusters of placeXX
* Use cdist to configure the nodes with requirements like crio
* Decide between single or multi node control plane setups (see below)
** A single control plane is suitable for development clusters

Typical init procedure:

* Single control plane: @kubeadm init --config bootstrap/XXX/kubeadm.yaml@
* Multi control plane (HA): @kubeadm init --config bootstrap/XXX/kubeadm.yaml --upload-certs@

h3. Deleting a pod that is hanging in terminating state

<pre>
kubectl delete pod <PODNAME> --grace-period=0 --force --namespace <NAMESPACE>
</pre>

(from https://stackoverflow.com/questions/35453792/pods-stuck-in-terminating-status)

h3. Listing nodes of a cluster

<pre>
[15:05] bridge:~% kubectl get nodes
NAME       STATUS   ROLES                  AGE   VERSION
server22   Ready    <none>                 52d   v1.22.0
server23   Ready    <none>                 52d   v1.22.2
server24   Ready    <none>                 52d   v1.22.0
server25   Ready    <none>                 52d   v1.22.0
server26   Ready    <none>                 52d   v1.22.0
server27   Ready    <none>                 52d   v1.22.0
server63   Ready    control-plane,master   52d   v1.22.0
server64   Ready    <none>                 52d   v1.22.0
server65   Ready    control-plane,master   52d   v1.22.0
server66   Ready    <none>                 52d   v1.22.0
server83   Ready    control-plane,master   52d   v1.22.0
server84   Ready    <none>                 52d   v1.22.0
server85   Ready    <none>                 52d   v1.22.0
server86   Ready    <none>                 52d   v1.22.0
</pre>

h3. Removing / draining a node

Usually @kubectl drain server@ should do the job, but sometimes we need to be more aggressive:

<pre>
kubectl drain --delete-emptydir-data --ignore-daemonsets serverXX
</pre>

h3. Re-adding a node after draining

<pre>
kubectl uncordon serverXX
</pre>

h3. (Re-)joining worker nodes after creating the cluster

* We need to have an up-to-date token
* We use different join commands for the workers and control plane nodes

Generating the join command on an existing control plane node:

<pre>
kubeadm token create --print-join-command
</pre>

h3. (Re-)joining control plane nodes after creating the cluster

* We generate the token again
* We upload the certificates
* We need to combine/create the join command for the control plane node

Example session:

<pre>
% kubeadm token create --print-join-command
kubeadm join p10-api.k8s.ooo:6443 --token xmff4i.ABC --discovery-token-ca-cert-hash sha256:longhash

% kubeadm init phase upload-certs --upload-certs
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
CERTKEY

# Then we use these two outputs on the joining node:

kubeadm join p10-api.k8s.ooo:6443 --token xmff4i.ABC --discovery-token-ca-cert-hash sha256:longhash --control-plane --certificate-key CERTKEY
</pre>

Commands to be used on a control plane node:

<pre>
kubeadm token create --print-join-command
kubeadm init phase upload-certs --upload-certs
</pre>

Commands to be used on the joining node:

<pre>
JOINCOMMAND --control-plane --certificate-key CERTKEY
</pre>

SEE ALSO

* https://stackoverflow.com/questions/63936268/how-to-generate-kubeadm-token-for-secondary-control-plane-nodes
* https://blog.scottlowe.org/2019/08/15/reconstructing-the-join-command-for-kubeadm/

h3. How to fix etcd not starting when rejoining a kubernetes cluster as a control plane

If during the above step etcd does not come up, @kubeadm join@ can hang as follows:

<pre>
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[check-etcd] Checking that the etcd cluster is healthy
error execution phase check-etcd: etcd cluster is not healthy: failed to dial endpoint https://[2a0a:e5c0:10:1:225:b3ff:fe20:378a]:2379 with maintenance client: context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
</pre>

Then the problem is likely that the etcd server is still a member of the cluster. We first need to remove it from the etcd cluster and then the join works.

To fix this we do:

* Find a working etcd pod
* Find the etcd members / member list
* Remove the etcd member that we want to re-join the cluster

<pre>
# Find the etcd pods
kubectl -n kube-system get pods -l component=etcd,tier=control-plane

# Get the list of etcd servers with the member id
kubectl exec -n kube-system -ti ETCDPODNAME -- etcdctl --endpoints '[::1]:2379' --cacert /etc/kubernetes/pki/etcd/ca.crt --cert /etc/kubernetes/pki/etcd/server.crt --key /etc/kubernetes/pki/etcd/server.key member list

# Remove the member
kubectl exec -n kube-system -ti ETCDPODNAME -- etcdctl --endpoints '[::1]:2379' --cacert /etc/kubernetes/pki/etcd/ca.crt --cert /etc/kubernetes/pki/etcd/server.crt --key /etc/kubernetes/pki/etcd/server.key member remove MEMBERID
</pre>

Sample session:

<pre>
[10:48] line:~% kubectl -n kube-system get pods -l component=etcd,tier=control-plane
NAME            READY   STATUS    RESTARTS     AGE
etcd-server63   1/1     Running   0            3m11s
etcd-server65   1/1     Running   3            7d2h
etcd-server83   1/1     Running   8 (6d ago)   7d2h
[10:48] line:~% kubectl exec -n kube-system -ti etcd-server65 -- etcdctl --endpoints '[::1]:2379' --cacert /etc/kubernetes/pki/etcd/ca.crt --cert /etc/kubernetes/pki/etcd/server.crt --key /etc/kubernetes/pki/etcd/server.key member list
356891cd676df6e4, started, server65, https://[2a0a:e5c0:10:1:225:b3ff:fe20:375c]:2380, https://[2a0a:e5c0:10:1:225:b3ff:fe20:375c]:2379, false
371b8a07185dee7e, started, server63, https://[2a0a:e5c0:10:1:225:b3ff:fe20:378a]:2380, https://[2a0a:e5c0:10:1:225:b3ff:fe20:378a]:2379, false
5942bc58307f8af9, started, server83, https://[2a0a:e5c0:10:1:3e4a:92ff:fe79:bb98]:2380, https://[2a0a:e5c0:10:1:3e4a:92ff:fe79:bb98]:2379, false

[10:48] line:~% kubectl exec -n kube-system -ti etcd-server65 -- etcdctl --endpoints '[::1]:2379' --cacert /etc/kubernetes/pki/etcd/ca.crt --cert /etc/kubernetes/pki/etcd/server.crt --key /etc/kubernetes/pki/etcd/server.key member remove 371b8a07185dee7e
Member 371b8a07185dee7e removed from cluster e3c0805f592a8f77
</pre>

SEE ALSO

* We found the solution using https://stackoverflow.com/questions/67921552/re-installed-node-cannot-join-kubernetes-cluster

h3. Node labels (adding, showing, removing)

Listing the labels:

<pre>
kubectl get nodes --show-labels
</pre>

Adding labels:

<pre>
kubectl label nodes LIST-OF-NODES label1=value1
</pre>

For instance:

<pre>
kubectl label nodes router2 router3 hosttype=router
</pre>

Selecting nodes in pods:

<pre>
apiVersion: v1
kind: Pod
...
spec:
  nodeSelector:
    hosttype: router
</pre>

Removing labels by adding a minus at the end of the label name:

<pre>
kubectl label node <nodename> <labelname>-
</pre>

For instance:

<pre>
kubectl label nodes router2 router3 hosttype-
</pre>

SEE ALSO

* https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/
* https://stackoverflow.com/questions/34067979/how-to-delete-a-node-label-by-command-and-api

h3. Listing all pods on a node

<pre>
kubectl get pods --all-namespaces -o wide --field-selector spec.nodeName=serverXX
</pre>

Found on https://stackoverflow.com/questions/62000559/how-to-list-all-the-pods-running-in-a-particular-worker-node-by-executing-a-comm

h3. Hardware Maintenance using ungleich-hardware

Use the following manifest and replace the HOST with the actual host:

<pre>
apiVersion: v1
kind: Pod
metadata:
  name: ungleich-hardware-HOST
spec:
  containers:
  - name: ungleich-hardware
    image: ungleich/ungleich-hardware:0.0.5
    args:
    - sleep
    - "1000000"
    volumeMounts:
      - mountPath: /dev
        name: dev
    securityContext:
      privileged: true
  nodeSelector:
    kubernetes.io/hostname: "HOST"

  volumes:
    - name: dev
      hostPath:
        path: /dev
</pre>

Also see: [[The_ungleich_hardware_maintenance_guide]]

h3. Triggering a cronjob / creating a job from a cronjob

To test a cronjob, we can create a job from a cronjob:

<pre>
kubectl create job --from=cronjob/volume2-daily-backup volume2-manual
</pre>

This creates a job volume2-manual based on the cronjob volume2-daily-backup.

h3. su-ing into a user that has nologin shell set

Often users have nologin set as their shell inside the container. To be able to execute maintenance commands within the
container, we can use @su -s /bin/sh@ like this:

<pre>
su -s /bin/sh -c '/path/to/your/script' testuser
</pre>

Found on https://serverfault.com/questions/351046/how-to-run-command-as-user-who-has-usr-sbin-nologin-as-shell

h3. How to print a secret value

Assuming you want the "password" item from a secret, use:

<pre>
kubectl get secret SECRETNAME -o jsonpath="{.data.password}" | base64 -d; echo ""
</pre>

h3. How to upgrade a kubernetes cluster

h4. General

* Should be done every X months to stay up-to-date
** X probably something like 3-6
* kubeadm based clusters
* Needs specific kubeadm versions for upgrade
* Follow instructions on https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/
* Finding releases: https://github.com/kubernetes/kubernetes/tree/master/CHANGELOG

h4. Getting a specific kubeadm or kubelet version

<pre>
RELEASE=v1.22.17
RELEASE=v1.23.17
RELEASE=v1.24.9
RELEASE=v1.25.9
RELEASE=v1.26.6
RELEASE=v1.27.2

ARCH=amd64

curl -L --remote-name-all https://dl.k8s.io/release/${RELEASE}/bin/linux/${ARCH}/{kubeadm,kubelet}
chmod u+x kubeadm kubelet
</pre>

h4. Steps

* kubeadm upgrade plan
** On one control plane node
* kubeadm upgrade apply vXX.YY.ZZ
** On one control plane node
* kubeadm upgrade node
** On all other control plane nodes
** On all worker nodes afterwards

Repeat for all control plane nodes. Then upgrade the kubelet on all other nodes via the package manager.

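The steps above can be sketched as one session (the version is a placeholder; run each command on the node indicated in the comment):

```shell
# On the first control plane node: review and apply the upgrade
kubeadm upgrade plan
kubeadm upgrade apply vXX.YY.ZZ

# On each remaining control plane node, and afterwards on each worker node:
kubeadm upgrade node
```
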
h4. Upgrading to 1.22.17

* https://v1-22.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/
* Need to create a kubeadm config map
** e.g. using the following:
** @/usr/local/bin/kubeadm-v1.22.17   upgrade --config kubeadm.yaml --ignore-preflight-errors=CoreDNSUnsupportedPlugins,CoreDNSMigration apply -y v1.22.17@
* Done for p6 on 2023-10-04

h4. Upgrading to 1.23.17

* https://v1-23.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/
* No special notes
* Done for p6 on 2023-10-04

h4. Upgrading to 1.24.17

* https://v1-24.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/
* No special notes
* Done for p6 on 2023-10-04

h4. Upgrading to 1.25.14

* https://v1-25.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/
* No special notes
* Done for p6 on 2023-10-04

h4. Upgrading to 1.26.9

* https://v1-26.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/
* No special notes
* Done for p6 on 2023-10-04

h4. Upgrading to 1.27

* https://v1-27.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/
* kubelet will not start anymore
* reason: @"command failed" err="failed to parse kubelet flag: unknown flag: --container-runtime"@
* /var/lib/kubelet/kubeadm-flags.env contains that parameter
* remove it, then start kubelet

h4. Upgrading to 1.28

* https://v1-28.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/

h4. Upgrade to crio 1.27: missing crun

Error message:

<pre>
level=fatal msg="validating runtime config: runtime validation: \"crun\" not found in $PATH: exec: \"crun\": executable file not found in $PATH"
</pre>

Fix:

<pre>
apk add crun
</pre>

h2. Reference CNI

* Mainly "stupid", but effective plugins
* Main documentation on https://www.cni.dev/plugins/current/
* Plugins
** bridge
*** Can create the bridge on the host
*** But seems unable to also add host interfaces to it
*** Has support for vlan tags
** vlan
*** Creates a vlan-tagged sub-interface on the host
*** "It's a 1:1 mapping (i.e. no bridge in between)":https://github.com/k8snetworkplumbingwg/multus-cni/issues/569
** host-device
*** Moves the interface from the host into the container
*** Very easy for physical connections to containers
** ipvlan
*** "virtualisation" of a host device
*** Routing based on IP
*** Same MAC address for everyone
*** Cannot reach the master interface
** macvlan
*** With mac addresses
*** Supports various modes (to be checked)
** ptp ("point to point")
*** Creates a host device and connects it to the container
** win*
*** Windows implementations

h2. Calico CNI

h3. Calico Installation

* We install "calico using helm":https://docs.projectcalico.org/getting-started/kubernetes/helm
* This has the following advantages:
** Easy to upgrade
** Does not require the OS to configure IPv6/dual-stack settings, as the tigera operator figures things out on its own

Usually plain calico can be installed directly using:

<pre>
VERSION=v3.25.0

helm repo add projectcalico https://docs.projectcalico.org/charts
helm repo update
helm upgrade --install --namespace tigera calico projectcalico/tigera-operator --version $VERSION --create-namespace
</pre>

* Check the tags on https://github.com/projectcalico/calico/tags for the latest release

h3. Installing calicoctl

* General installation instructions, including binary download: https://projectcalico.docs.tigera.io/maintenance/clis/calicoctl/install

To be able to manage and configure calico, we need to
"install calicoctl (we choose the version as a pod)":https://docs.projectcalico.org/getting-started/clis/calicoctl/install#install-calicoctl-as-a-kubernetes-pod

<pre>
kubectl apply -f https://docs.projectcalico.org/manifests/calicoctl.yaml
</pre>

Or version specific:

<pre>
kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.20.4/manifests/calicoctl.yaml

# For 3.22
kubectl apply -f https://projectcalico.docs.tigera.io/archive/v3.22/manifests/calicoctl.yaml
</pre>

And making it more easily accessible via an alias:

<pre>
alias calicoctl="kubectl exec -i -n kube-system calicoctl -- /calicoctl"
</pre>

h3. Calico configuration

By default our k8s clusters "BGP peer":https://docs.projectcalico.org/networking/bgp
with an upstream router to propagate podcidr and servicecidr.

Default settings in our infrastructure:

* We use a full-mesh using the @nodeToNodeMeshEnabled: true@ option
* We keep the original next hop so that *only* the server with the pod is announcing it (instead of ecmp)
* We use private ASNs for k8s clusters
* We do *not* use any overlay

After installing calico and calicoctl the last step of the installation is usually:

<pre>
calicoctl create -f - < calico-bgp.yaml
</pre>

A sample BGP configuration:

<pre>
---
apiVersion: projectcalico.org/v3
kind: BGPConfiguration
metadata:
  name: default
spec:
  logSeverityScreen: Info
  nodeToNodeMeshEnabled: true
  asNumber: 65534
  serviceClusterIPs:
  - cidr: 2a0a:e5c0:10:3::/108
  serviceExternalIPs:
  - cidr: 2a0a:e5c0:10:3::/108
---
apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  name: router1-place10
spec:
  peerIP: 2a0a:e5c0:10:1::50
  asNumber: 213081
  keepOriginalNextHop: true
</pre>

h2. Cilium CNI (experimental)

h3. Status

*NO WORKING CILIUM CONFIGURATION FOR IPV6-ONLY MODE*

h3. Latest error

It seems cilium does not run on IPv6 only hosts:

<pre>
level=info msg="Validating configured node address ranges" subsys=daemon
level=fatal msg="postinit failed" error="external IPv4 node address could not be derived, please configure via --ipv4-node" subsys=daemon
level=info msg="Starting IP identity watcher" subsys=ipcache
</pre>

It crashes after that log entry.

h3. BGP configuration

* The cilium-operator will not start without a correct configmap being present beforehand (see error message below)
* Creating the bgp config beforehand as a configmap is thus required.

The error one gets without the configmap present:

Pods are hanging with:

<pre>
cilium-bpqm6                       0/1     Init:0/4            0             9s
cilium-operator-5947d94f7f-5bmh2   0/1     ContainerCreating   0             9s
</pre>

The error message in the cilium-operator is:

<pre>
Events:
  Type     Reason       Age                From               Message
  ----     ------       ----               ----               -------
  Normal   Scheduled    80s                default-scheduler  Successfully assigned kube-system/cilium-operator-5947d94f7f-lqcsp to server56
  Warning  FailedMount  16s (x8 over 80s)  kubelet            MountVolume.SetUp failed for volume "bgp-config-path" : configmap "bgp-config" not found
</pre>

A correct bgp config looks like this:

<pre>
apiVersion: v1
kind: ConfigMap
metadata:
  name: bgp-config
  namespace: kube-system
data:
  config.yaml: |
    peers:
      - peer-address: 2a0a:e5c0::46
        peer-asn: 209898
        my-asn: 65533
      - peer-address: 2a0a:e5c0::47
        peer-asn: 209898
        my-asn: 65533
    address-pools:
      - name: default
        protocol: bgp
        addresses:
          - 2a0a:e5c0:0:14::/64
</pre>

h3. Installation

Adding the repo:

<pre>
helm repo add cilium https://helm.cilium.io/
helm repo update
</pre>

Installing + configuring cilium:

<pre>
ipv6pool=2a0a:e5c0:0:14::/112

version=1.12.2

helm upgrade --install cilium cilium/cilium --version $version \
  --namespace kube-system \
  --set ipv4.enabled=false \
  --set ipv6.enabled=true \
  --set enableIPv6Masquerade=false \
  --set bgpControlPlane.enabled=true

#  --set ipam.operator.clusterPoolIPv6PodCIDRList=$ipv6pool

# Old style bgp?
#   --set bgp.enabled=true --set bgp.announce.podCIDR=true \

# Show possible configuration options
helm show values cilium/cilium
</pre>

Using a /64 for ipam.operator.clusterPoolIPv6PodCIDRList fails with:

<pre>
level=fatal msg="Unable to init cluster-pool allocator" error="unable to initialize IPv6 allocator New CIDR set failed; the node CIDR size is too big" subsys=cilium-operator-generic
</pre>

See also https://github.com/cilium/cilium/issues/20756

Seems a /112 is actually working.

h3. Kernel modules

Cilium requires the following modules to be loaded on the host (not loaded by default):

<pre>
modprobe ip6table_raw
modprobe ip6table_filter
</pre>

h3. Interesting helm flags

* autoDirectNodeRoutes
* bgpControlPlane.enabled = true

h3. SEE ALSO

* https://docs.cilium.io/en/v1.12/helm-reference/

h2. Multus

* https://github.com/k8snetworkplumbingwg/multus-cni
* Installing a deployment w/ CRDs

<pre>
VERSION=v4.0.1

kubectl apply -f https://raw.githubusercontent.com/k8snetworkplumbingwg/multus-cni/${VERSION}/deployments/multus-daemonset-crio.yml
</pre>

h2. ArgoCD

h3. ArgoCD Installation

* See https://argo-cd.readthedocs.io/en/stable/

As there is no configuration management present yet, argocd is installed using:

<pre>
kubectl create namespace argocd

# Latest stable:
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

# OR a specific version:
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/v2.3.2/manifests/install.yaml
</pre>

748 1 Nico Schottelius
749 60 Nico Schottelius
h3. Get the argocd credentials

<pre>
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d; echo ""
</pre>
h3. Accessing argocd

In regular IPv6 clusters:

* Navigate to https://argocd-server.argocd.CLUSTERDOMAIN

In legacy IPv4 clusters:

<pre>
kubectl --namespace argocd port-forward svc/argocd-server 8080:80
</pre>

* Navigate to https://localhost:8080
h3. Using the argocd webhook to trigger changes

* To trigger changes, POST JSON to https://argocd.example.com/api/webhook
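For testing, the webhook can be triggered by hand; a hedged sketch (the payload mimics a minimal gitea push event, which gitea normally sends itself):

```shell
curl -X POST https://argocd.example.com/api/webhook \
  -H 'Content-Type: application/json' \
  -d '{
        "ref": "refs/heads/master",
        "repository": {
          "html_url": "https://code.ungleich.ch/ungleich-intern/k8s-config"
        }
      }'
```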
h3. Deploying an application

* Applications are deployed via git towards gitea (code.ungleich.ch) and then pulled by argo
* Always include the *redmine-url* pointing to the (customer) ticket
** Also add the support-url if it exists

Application sample:

<pre>
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: gitea-CUSTOMER
  namespace: argocd
spec:
  destination:
    namespace: default
    server: 'https://kubernetes.default.svc'
  source:
    path: apps/prod/gitea
    repoURL: 'https://code.ungleich.ch/ungleich-intern/k8s-config.git'
    targetRevision: HEAD
    helm:
      parameters:
        - name: storage.data.storageClass
          value: rook-ceph-block-hdd
        - name: storage.data.size
          value: 200Gi
        - name: storage.db.storageClass
          value: rook-ceph-block-ssd
        - name: storage.db.size
          value: 10Gi
        - name: storage.letsencrypt.storageClass
          value: rook-ceph-block-hdd
        - name: storage.letsencrypt.size
          value: 50Mi
        - name: letsencryptStaging
          value: 'no'
        - name: fqdn
          value: 'code.verua.online'
  project: default
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
  info:
    - name: 'redmine-url'
      value: 'https://redmine.ungleich.ch/issues/ISSUEID'
    - name: 'support-url'
      value: 'https://support.ungleich.ch/Ticket/Display.html?id=TICKETID'
</pre>
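After pushing to git, the sync state can be inspected from the cluster; a sketch (the application name matches the sample above):

```shell
# List all argocd applications
kubectl -n argocd get applications.argoproj.io

# Sync status of a single application
kubectl -n argocd get applications.argoproj.io gitea-CUSTOMER \
  -o jsonpath='{.status.sync.status}'; echo ""
```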
h2. Helm related operations and conventions

We use helm charts extensively.

* In production, they are managed via argocd
* In development, helm charts can be developed and deployed manually using the helm utility.

h3. Installing a helm chart

One can use the usual pattern of

<pre>
helm install <releasename> <chartdirectory>
</pre>

However, often you want to reinstall/update when testing helm charts. The following pattern is "better", because it allows you to reinstall if the release is already installed:

<pre>
helm upgrade --install <releasename> <chartdirectory>
</pre>
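During development it can be convenient to combine this with namespace creation and a values file; a sketch (all names are placeholders):

```shell
helm upgrade --install myrelease ./mychart \
  --namespace myns --create-namespace \
  -f values-dev.yaml
```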
h3. Naming services and deployments in helm charts [Application labels]

* We always have {{ .Release.Name }} to identify the current "instance"
* Deployments:
** use @app: <what it is>@, f.i. @app: nginx@, @app: postgres@, ...
* See more about standard labels on
** https://kubernetes.io/docs/concepts/overview/working-with-objects/common-labels/
** https://helm.sh/docs/chart_best_practices/labels/
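Applied to a chart template, the convention above might look like this (a sketch; the nginx deployment is illustrative):

```yaml
# templates/deployment.yaml (excerpt)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-nginx
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
```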
h3. Show all versions of a helm chart

<pre>
helm search repo -l repo/chart
</pre>

For example:

<pre>
% helm search repo -l projectcalico/tigera-operator 
NAME                         	CHART VERSION	APP VERSION	DESCRIPTION                            
projectcalico/tigera-operator	v3.23.3      	v3.23.3    	Installs the Tigera operator for Calico
projectcalico/tigera-operator	v3.23.2      	v3.23.2    	Installs the Tigera operator for Calico
....
</pre>
h3. Show possible values of a chart

<pre>
helm show values <repo/chart>
</pre>

Example:

<pre>
helm show values ingress-nginx/ingress-nginx
</pre>
h3. Download a chart

For instance, to inspect a chart locally:

<pre>
helm pull <repo/chart>
</pre>
h2. Rook + Ceph

h3. Installation

* Usually directly via argocd

h3. Executing ceph commands

Using the ceph-tools pod as follows:

<pre>
kubectl exec -n rook-ceph -ti $(kubectl -n rook-ceph get pods -l app=rook-ceph-tools -o jsonpath='{.items[*].metadata.name}') -- ceph -s
</pre>
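To avoid retyping the exec incantation, a small helper function can be sketched (the name @rceph@ is arbitrary):

```shell
# Hypothetical helper: run any ceph command via the ceph-tools pod
rceph() {
    kubectl exec -n rook-ceph -ti \
        "$(kubectl -n rook-ceph get pods -l app=rook-ceph-tools -o jsonpath='{.items[*].metadata.name}')" \
        -- ceph "$@"
}

# Usage: rceph status; rceph osd tree
```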
h3. Inspecting the logs of a specific server

<pre>
# Get the related pods
kubectl -n rook-ceph get pods -l app=rook-ceph-osd-prepare 
...

# Inspect the logs of a specific pod
kubectl -n rook-ceph logs -f rook-ceph-osd-prepare-server23--1-444qx
</pre>

h3. Inspecting the logs of the rook-ceph-operator

<pre>
kubectl -n rook-ceph logs -f -l app=rook-ceph-operator
</pre>
h3. (Temporarily) Disabling the rook-operator

* First disable the sync in argocd
* Then scale it down:

<pre>
kubectl -n rook-ceph scale deployment rook-ceph-operator --replicas=0
</pre>

When done with the work/maintenance, re-enable sync in argocd.
The following command is thus strictly speaking not required, as argocd will fix it on its own:

<pre>
kubectl -n rook-ceph scale deployment rook-ceph-operator --replicas=1
</pre>
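With the argocd CLI logged in, toggling the sync policy can be sketched as follows (the application name @rook-ceph@ is an assumption):

```shell
# Disable automated sync before maintenance
argocd app set rook-ceph --sync-policy none

# Re-enable automated sync when done
argocd app set rook-ceph --sync-policy automated
```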
h3. Restarting the rook operator

<pre>
kubectl -n rook-ceph delete pods -l app=rook-ceph-operator
</pre>
h3. Triggering server prepare / adding new osds

The rook-ceph-operator triggers/watches/creates pods to maintain hosts. To trigger a full "re-scan", simply delete that pod:

<pre>
kubectl -n rook-ceph delete pods -l app=rook-ceph-operator
</pre>

This will cause all the @rook-ceph-osd-prepare-..@ jobs to be recreated and thus OSDs to be created, if new disks have been added.

h3. Removing an OSD

* See "Ceph OSD Management":https://rook.io/docs/rook/v1.7/ceph-osd-mgmt.html
* More specifically: https://github.com/rook/rook/blob/release-1.7/cluster/examples/kubernetes/ceph/osd-purge.yaml
* Then delete the related deployment
Set the OSD id in osd-purge.yaml and apply it. The OSD should be down before purging.

<pre>
apiVersion: batch/v1
kind: Job
metadata:
  name: rook-ceph-purge-osd
  namespace: rook-ceph # namespace:cluster
  labels:
    app: rook-ceph-purge-osd
spec:
  template:
    metadata:
      labels:
        app: rook-ceph-purge-osd
    spec:
      serviceAccountName: rook-ceph-purge-osd
      containers:
        - name: osd-removal
          image: rook/ceph:master
          # TODO: Insert the OSD ID in the last parameter that is to be removed
          # The OSD IDs are a comma-separated list. For example: "0" or "0,2".
          # If you want to preserve the OSD PVCs, set `--preserve-pvc true`.
          #
          # A --force-osd-removal option is available if the OSD should be destroyed even though the
          # removal could lead to data loss.
          args:
            - "ceph"
            - "osd"
            - "remove"
            - "--preserve-pvc"
            - "false"
            - "--force-osd-removal"
            - "false"
            - "--osd-ids"
            - "SETTHEOSDIDHERE"
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: ROOK_MON_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  key: data
                  name: rook-ceph-mon-endpoints
            - name: ROOK_CEPH_USERNAME
              valueFrom:
                secretKeyRef:
                  key: ceph-username
                  name: rook-ceph-mon
            - name: ROOK_CEPH_SECRET
              valueFrom:
                secretKeyRef:
                  key: ceph-secret
                  name: rook-ceph-mon
            - name: ROOK_CONFIG_DIR
              value: /var/lib/rook
            - name: ROOK_CEPH_CONFIG_OVERRIDE
              value: /etc/rook/config/override.conf
            - name: ROOK_FSID
              valueFrom:
                secretKeyRef:
                  key: fsid
                  name: rook-ceph-mon
            - name: ROOK_LOG_LEVEL
              value: DEBUG
          volumeMounts:
            - mountPath: /etc/ceph
              name: ceph-conf-emptydir
            - mountPath: /var/lib/rook
              name: rook-config
      volumes:
        - emptyDir: {}
          name: ceph-conf-emptydir
        - emptyDir: {}
          name: rook-config
      restartPolicy: Never
</pre>
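The OSD can be taken out and marked down beforehand via the ceph-tools pod; a sketch (osd id 6 is a placeholder):

```shell
TOOLS=$(kubectl -n rook-ceph get pods -l app=rook-ceph-tools -o jsonpath='{.items[*].metadata.name}')

# Take the OSD out of the data distribution and mark it down
kubectl exec -n rook-ceph -ti "$TOOLS" -- ceph osd out osd.6
kubectl exec -n rook-ceph -ti "$TOOLS" -- ceph osd down osd.6
```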
Deleting the deployment:

<pre>
[18:05] bridge:~% kubectl -n rook-ceph delete deployment rook-ceph-osd-6
deployment.apps "rook-ceph-osd-6" deleted
</pre>
h3. Placement of mons/osds/etc.

See https://rook.io/docs/rook/v1.11/CRDs/Cluster/ceph-cluster-crd/#placement-configuration-settings
h2. Ingress + Cert Manager

* We deploy "nginx-ingress":https://docs.nginx.com/nginx-ingress-controller/ to get an ingress
* We deploy "cert-manager":https://cert-manager.io/ to handle certificates
* We independently deploy @ClusterIssuer@ to allow the cert-manager app to deploy and the issuer to be created once the CRDs from cert-manager are in place
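A hedged sketch of such a @ClusterIssuer@ (issuer name, contact address and solver are illustrative):

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-production
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com          # placeholder contact address
    privateKeySecretRef:
      name: letsencrypt-production
    solvers:
      - http01:
          ingress:
            class: nginx
```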
h3. IPv4 reachability 

The ingress is by default IPv6 only. To make it reachable from the IPv4 world, get its IPv6 address and configure a NAT64 mapping in Jool.

Steps:

h4. Get the ingress IPv6 address

Use @kubectl -n ingress-nginx get svc ingress-nginx-controller -o jsonpath='{.spec.clusterIP}'; echo ''@

Example:

<pre>
kubectl -n ingress-nginx get svc ingress-nginx-controller -o jsonpath='{.spec.clusterIP}'; echo ''
2a0a:e5c0:10:1b::ce11
</pre>

h4. Add NAT64 mapping

* Update the __dcl_jool_siit cdist type
* Record the two IPs (IPv6 and IPv4)
* Configure all routers

h4. Add DNS record

To use the ingress as a CNAME destination, create an "ingress" DNS record, such as:

<pre>
; k8s ingress for dev
dev-ingress                 AAAA 2a0a:e5c0:10:1b::ce11
dev-ingress                 A 147.78.194.23
</pre>

h4. Add supporting wildcard DNS

If you plan to add various sites under a specific domain, we can add a wildcard DNS entry, such as *.k8s-dev.django-hosting.ch:

<pre>
*.k8s-dev         CNAME dev-ingress.ungleich.ch.
</pre>
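An Ingress that uses the wildcard record together with cert-manager can then be sketched as follows (names, host and issuer are placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: mysite
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-production
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - mysite.k8s-dev.django-hosting.ch
      secretName: mysite-tls
  rules:
    - host: mysite.k8s-dev.django-hosting.ch
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: mysite
                port:
                  number: 80
```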
h2. Harbor

* We use "Harbor":https://goharbor.io/ as an image registry for our own images. Internal app reference: apps/prod/harbor.
* The admin password is in the password store; it is Harbor12345 by default
* At the moment harbor only authenticates against the internal ldap tree

h3. LDAP configuration

* The url needs to be ldaps://...
* uid = uid
* The rest is standard
h2. Monitoring / Prometheus

* Via "kube-prometheus":https://github.com/prometheus-operator/kube-prometheus/

Access via:

* http://prometheus-k8s.monitoring.svc:9090
* http://grafana.monitoring.svc:3000
* http://alertmanager.monitoring.svc:9093
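From a workstation outside the cluster, the services can be reached via port-forwarding; the service names below assume the kube-prometheus defaults:

```shell
kubectl -n monitoring port-forward svc/grafana 3000:3000
kubectl -n monitoring port-forward svc/prometheus-k8s 9090:9090
kubectl -n monitoring port-forward svc/alertmanager-main 9093:9093
```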
h3. Prometheus Options

* "helm/kube-prometheus-stack":https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack
** Includes dashboards and co.
* "manifest based kube-prometheus":https://github.com/prometheus-operator/kube-prometheus
** Includes dashboards and co.
* "Prometheus Operator (mainly CRD manifests)":https://github.com/prometheus-operator/prometheus-operator

h3. Grafana default password

* If not changed: @prom-operator@
h2. Nextcloud

h3. How to get the nextcloud credentials 

* The initial username is set to "nextcloud"
* The password is autogenerated and saved in a kubernetes secret

<pre>
kubectl get secret RELEASENAME-nextcloud -o jsonpath="{.data.PASSWORD}" | base64 -d; echo ""
</pre>

h3. How to fix "Access through untrusted domain"

* Nextcloud stores the initial domain configuration
* If the FQDN is changed, it will show the error message "Access through untrusted domain"
* To fix, edit /var/www/html/config/config.php and correct the domain
* Then delete the pods
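Alternatively, the domain can be corrected with occ inside the nextcloud container (a sketch; the index 1 and the FQDN are placeholders):

```shell
su www-data -s /bin/sh -c "./occ config:system:set trusted_domains 1 --value=cloud.example.com"
```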
h3. Running occ commands inside the nextcloud container

* Find the pod in the right namespace

Exec:

<pre>
su www-data -s /bin/sh -c ./occ
</pre>

* -s /bin/sh is needed as the default shell is set to /bin/false

h4. Rescanning files

* If files have been added without nextcloud's knowledge:

<pre>
su www-data -s /bin/sh -c "./occ files:scan --all"
</pre>
h2. Sealed Secrets

* To be filled in by Jin-Guk
* How we create the keys
* How we create the secrets
* What needs to be installed on the client side
h2. Infrastructure versions

h3. ungleich kubernetes infrastructure v5 (2021-10)

Clusters are configured / set up in this order:

* Bootstrap via kubeadm
* "Networking via calico + BGP (non ECMP) using helm":https://docs.projectcalico.org/getting-started/kubernetes/helm
* "ArgoCD for CD":https://argo-cd.readthedocs.io/en/stable/
** "rook for storage via argocd":https://rook.io/
** haproxy as an in-cluster IPv4-to-IPv6 proxy via argocd
** "kubernetes-secret-generator for in cluster secrets":https://github.com/mittwald/kubernetes-secret-generator
** "ungleich-certbot managing certs and nginx":https://hub.docker.com/repository/docker/ungleich/ungleich-certbot

h3. ungleich kubernetes infrastructure v4 (2021-09)

* rook is configured via manifests instead of using the rook-ceph-cluster helm chart
* The rook operator is still being installed via helm

h3. ungleich kubernetes infrastructure v3 (2021-07)

* rook is now installed via helm via argocd instead of directly via manifests

h3. ungleich kubernetes infrastructure v2 (2021-05)

* Replaced fluxv2 from ungleich k8s v1 with argocd
** argocd can apply helm templates directly without needing to go through Chart releases
* We are also using argoflow for build flows
* Planned to add "kaniko":https://github.com/GoogleContainerTools/kaniko for image building

h3. ungleich kubernetes infrastructure v1 (2021-01)

We are using the following components:

* "Calico as a CNI":https://www.projectcalico.org/ with BGP, IPv6 only, no encapsulation
** Needed for basic networking
* "kubernetes-secret-generator":https://github.com/mittwald/kubernetes-secret-generator for creating secrets
** Needed so that secrets are not stored in the git repository, but only in the cluster
* "ungleich-certbot":https://hub.docker.com/repository/docker/ungleich/ungleich-certbot
** Needed to get letsencrypt certificates for services
* "rook with ceph rbd + cephfs":https://rook.io/ for storage
** rbd for almost everything, *ReadWriteOnce*
** cephfs for smaller things, multi access *ReadWriteMany*
** Needed for providing persistent storage
* "flux v2":https://fluxcd.io/
** Needed to manage resources automatically