
h1. The ungleich kubernetes infrastructure and ungleich kubernetes manual

{{toc}}

h2. Status

This document is **pre-production**.

This document is to become the ungleich kubernetes infrastructure overview as well as the ungleich kubernetes manual.

h2. k8s clusters

| Cluster            | Purpose/Setup     | Maintainer | Master(s)                     | argo                                                   | v4 http proxy | last verified |
| c0.k8s.ooo         | Dev               | -          | UNUSED                        |                                                        |               |    2021-10-05 |
| c1.k8s.ooo         | retired           |            | -                             |                                                        |               |    2022-03-15 |
| c2.k8s.ooo         | Dev p7 HW         | Nico       | server47 server53 server54    | "argo":https://argocd-server.argocd.svc.c2.k8s.ooo     |               |    2021-10-05 |
| c3.k8s.ooo         | retired           | -          | -                             |                                                        |               |    2021-10-05 |
| c4.k8s.ooo         | Dev2 p7 HW        | Jin-Guk    | server52 server53 server54    |                                                        |               |             - |
| c5.k8s.ooo         | retired           |            | -                             |                                                        |               |    2022-03-15 |
| c6.k8s.ooo         | Dev p6 VM Jin-Guk | Jin-Guk    |                               |                                                        |               |               |
| [[p5.k8s.ooo]]     | production        |            | server34 server36 server38    | "argo":https://argocd-server.argocd.svc.p5.k8s.ooo     | -             |               |
| [[p5-cow.k8s.ooo]] | production        | Nico       | server47 server51 server55    | "argo":https://argocd-server.argocd.svc.p5-cow.k8s.ooo |               |    2022-08-27 |
| [[p6.k8s.ooo]]     | production        |            | server67 server69 server71    | "argo":https://argocd-server.argocd.svc.p6.k8s.ooo     | 147.78.194.13 |    2021-10-05 |
| [[p10.k8s.ooo]]    | production        |            | server63 server65 server83    | "argo":https://argocd-server.argocd.svc.p10.k8s.ooo    | 147.78.194.12 |    2021-10-05 |
| [[k8s.ge.nau.so]]  | development       |            | server107 server108 server109 | "argo":https://argocd-server.argocd.svc.k8s.ge.nau.so  |               |               |
| [[dev.k8s.ooo]]    | development       |            | server110 server111 server112 | "argo":https://argocd-server.argocd.svc.dev.k8s.ooo    | -             |    2022-07-08 |
| [[server121.k8s.ooo]] | production | Nico | server121 | | | 2022-09-06 |
| [[server122-3.k8s.ooo]] | production | Nico | server122 | | | 2022-10-30 |
| [[server123.k8s.ooo]] | production | Nico | server123 | | | 2022-10-15 |

h2. General architecture and components overview

* All k8s clusters are IPv6 only
* We use BGP peering to propagate podcidr and serviceCidr networks to our infrastructure
* The main public testing repository is "ungleich-k8s":https://code.ungleich.ch/ungleich-public/ungleich-k8s
** Private configurations are found in the **k8s-config** repository

h3. Cluster types

| **Type/Feature**            | **Development**                | **Production**         |
| Min No. nodes               | 3 (1 master, 3 worker)         | 5 (3 master, 3 worker) |
| Recommended minimum         | 4 (dedicated master, 3 worker) | 8 (3 master, 5 worker) |
| Separation of control plane | optional                       | recommended            |
| Persistent storage          | required                       | required               |
| Number of storage monitors  | 3                              | 5                      |

h2. General k8s operations

h3. Cheat sheet / external great references

* "kubectl cheatsheet":https://kubernetes.io/docs/reference/kubectl/cheatsheet/

h3. Allowing to schedule work on the control plane / removing node taints

* Mostly for single node / test / development clusters
* Just remove the master taint as follows

<pre>
kubectl taint nodes --all node-role.kubernetes.io/master-
kubectl taint nodes --all node-role.kubernetes.io/control-plane-
</pre>

You can check the node taints using @kubectl describe node ...@
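
A quick sketch for verifying that the taints are gone (serverXX is a placeholder for one of your node names):

<pre>
kubectl describe node serverXX | grep -i taints
</pre>
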
h3. Get the cluster admin.conf

* On the masters of each cluster you can find the file @/etc/kubernetes/admin.conf@
* To be able to administer the cluster you can copy the admin.conf to your local machine
* Multi cluster debugging is much easier if you name the config ~/cX-admin.conf (see example below)

<pre>
% scp root@server47.place7.ungleich.ch:/etc/kubernetes/admin.conf ~/c2-admin.conf
% export KUBECONFIG=~/c2-admin.conf
% kubectl get nodes
NAME       STATUS                     ROLES                  AGE   VERSION
server47   Ready                      control-plane,master   82d   v1.22.0
server48   Ready                      control-plane,master   82d   v1.22.0
server49   Ready                      <none>                 82d   v1.22.0
server50   Ready                      <none>                 82d   v1.22.0
server59   Ready                      control-plane,master   82d   v1.22.0
server60   Ready,SchedulingDisabled   <none>                 82d   v1.22.0
server61   Ready                      <none>                 82d   v1.22.0
server62   Ready                      <none>                 82d   v1.22.0
</pre>

h3. Installing a new k8s cluster

* Decide on the cluster name (usually *cX.k8s.ooo*), with X counting upwards
** Use pXX.k8s.ooo for production clusters of placeXX
* Use cdist to configure the nodes with requirements like crio
* Decide between single or multi node control plane setups (see below)
** A single control plane is suitable for development clusters

Typical init procedure (a sample bootstrap config sketch follows below):

* Single control plane: @kubeadm init --config bootstrap/XXX/kubeadm.yaml@
* Multi control plane (HA): @kubeadm init --config bootstrap/XXX/kubeadm.yaml --upload-certs@

h3. Deleting a pod that is hanging in terminating state

<pre>
kubectl delete pod <PODNAME> --grace-period=0 --force --namespace <NAMESPACE>
</pre>

(from https://stackoverflow.com/questions/35453792/pods-stuck-in-terminating-status)

h3. Listing nodes of a cluster

<pre>
[15:05] bridge:~% kubectl get nodes
NAME       STATUS   ROLES                  AGE   VERSION
server22   Ready    <none>                 52d   v1.22.0
server23   Ready    <none>                 52d   v1.22.2
server24   Ready    <none>                 52d   v1.22.0
server25   Ready    <none>                 52d   v1.22.0
server26   Ready    <none>                 52d   v1.22.0
server27   Ready    <none>                 52d   v1.22.0
server63   Ready    control-plane,master   52d   v1.22.0
server64   Ready    <none>                 52d   v1.22.0
server65   Ready    control-plane,master   52d   v1.22.0
server66   Ready    <none>                 52d   v1.22.0
server83   Ready    control-plane,master   52d   v1.22.0
server84   Ready    <none>                 52d   v1.22.0
server85   Ready    <none>                 52d   v1.22.0
server86   Ready    <none>                 52d   v1.22.0
</pre>

h3. Removing / draining a node

Usually @kubectl drain server@ should do the job, but sometimes we need to be more aggressive:

<pre>
kubectl drain --delete-emptydir-data --ignore-daemonsets serverXX
</pre>

h3. Readding a node after draining

<pre>
kubectl uncordon serverXX
</pre>

h3. (Re-)joining worker nodes after creating the cluster

* We need to have an up-to-date token
* We use different join commands for the workers and control plane nodes

Generating the join command on an existing control plane node:

<pre>
kubeadm token create --print-join-command
</pre>

h3. (Re-)joining control plane nodes after creating the cluster

* We generate the token again
* We upload the certificates
* We need to combine/create the join command for the control plane node

Example session:

<pre>
% kubeadm token create --print-join-command
kubeadm join p10-api.k8s.ooo:6443 --token xmff4i.ABC --discovery-token-ca-cert-hash sha256:longhash

% kubeadm init phase upload-certs --upload-certs
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
CERTKEY

# Then we use these two outputs on the joining node:

kubeadm join p10-api.k8s.ooo:6443 --token xmff4i.ABC --discovery-token-ca-cert-hash sha256:longhash --control-plane --certificate-key CERTKEY
</pre>

Commands to be used on a control plane node:

<pre>
kubeadm token create --print-join-command
kubeadm init phase upload-certs --upload-certs
</pre>

Commands to be used on the joining node:

<pre>
JOINCOMMAND --control-plane --certificate-key CERTKEY
</pre>

SEE ALSO

* https://stackoverflow.com/questions/63936268/how-to-generate-kubeadm-token-for-secondary-control-plane-nodes
* https://blog.scottlowe.org/2019/08/15/reconstructing-the-join-command-for-kubeadm/

h3. How to fix etcd not starting when rejoining a kubernetes cluster as a control plane

If during the above step etcd does not come up, @kubeadm join@ can hang as follows:

<pre>
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[check-etcd] Checking that the etcd cluster is healthy
error execution phase check-etcd: etcd cluster is not healthy: failed to dial endpoint https://[2a0a:e5c0:10:1:225:b3ff:fe20:378a]:2379 with maintenance client: context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
</pre>

In that case the node is likely still registered as a member of the etcd cluster. We first need to remove it from the etcd cluster; afterwards the join works.

To fix this we do:

* Find a working etcd pod
* Find the etcd members / member list
* Remove the etcd member that we want to re-join the cluster

<pre>
# Find the etcd pods
kubectl -n kube-system get pods -l component=etcd,tier=control-plane

# Get the list of etcd servers with the member id
kubectl exec -n kube-system -ti ETCDPODNAME -- etcdctl --endpoints '[::1]:2379' --cacert /etc/kubernetes/pki/etcd/ca.crt --cert /etc/kubernetes/pki/etcd/server.crt --key /etc/kubernetes/pki/etcd/server.key member list

# Remove the member
kubectl exec -n kube-system -ti ETCDPODNAME -- etcdctl --endpoints '[::1]:2379' --cacert /etc/kubernetes/pki/etcd/ca.crt --cert /etc/kubernetes/pki/etcd/server.crt --key /etc/kubernetes/pki/etcd/server.key member remove MEMBERID
</pre>

Sample session:

<pre>
[10:48] line:~% kubectl -n kube-system get pods -l component=etcd,tier=control-plane
NAME            READY   STATUS    RESTARTS     AGE
etcd-server63   1/1     Running   0            3m11s
etcd-server65   1/1     Running   3            7d2h
etcd-server83   1/1     Running   8 (6d ago)   7d2h
[10:48] line:~% kubectl exec -n kube-system -ti etcd-server65 -- etcdctl --endpoints '[::1]:2379' --cacert /etc/kubernetes/pki/etcd/ca.crt --cert /etc/kubernetes/pki/etcd/server.crt --key /etc/kubernetes/pki/etcd/server.key member list
356891cd676df6e4, started, server65, https://[2a0a:e5c0:10:1:225:b3ff:fe20:375c]:2380, https://[2a0a:e5c0:10:1:225:b3ff:fe20:375c]:2379, false
371b8a07185dee7e, started, server63, https://[2a0a:e5c0:10:1:225:b3ff:fe20:378a]:2380, https://[2a0a:e5c0:10:1:225:b3ff:fe20:378a]:2379, false
5942bc58307f8af9, started, server83, https://[2a0a:e5c0:10:1:3e4a:92ff:fe79:bb98]:2380, https://[2a0a:e5c0:10:1:3e4a:92ff:fe79:bb98]:2379, false

[10:48] line:~% kubectl exec -n kube-system -ti etcd-server65 -- etcdctl --endpoints '[::1]:2379' --cacert /etc/kubernetes/pki/etcd/ca.crt --cert /etc/kubernetes/pki/etcd/server.crt --key /etc/kubernetes/pki/etcd/server.key member remove 371b8a07185dee7e
Member 371b8a07185dee7e removed from cluster e3c0805f592a8f77
</pre>

SEE ALSO

* We found the solution using https://stackoverflow.com/questions/67921552/re-installed-node-cannot-join-kubernetes-cluster

h3. Node labels (adding, showing, removing)

Listing the labels:

<pre>
kubectl get nodes --show-labels
</pre>

Adding labels:

<pre>
kubectl label nodes LIST-OF-NODES label1=value1
</pre>

For instance:

<pre>
kubectl label nodes router2 router3 hosttype=router
</pre>

Selecting nodes in pods:

<pre>
apiVersion: v1
kind: Pod
...
spec:
  nodeSelector:
    hosttype: router
</pre>

Labels are removed by appending a minus to the label name:

<pre>
kubectl label node <nodename> <labelname>-
</pre>

For instance:

<pre>
kubectl label nodes router2 router3 hosttype-
</pre>

SEE ALSO

* https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/
* https://stackoverflow.com/questions/34067979/how-to-delete-a-node-label-by-command-and-api

h3. Hardware Maintenance using ungleich-hardware

Use the following manifest and replace the HOST with the actual host:

<pre>
apiVersion: v1
kind: Pod
metadata:
  name: ungleich-hardware-HOST
spec:
  containers:
  - name: ungleich-hardware
    image: ungleich/ungleich-hardware:0.0.5
    args:
    - sleep
    - "1000000"
    volumeMounts:
      - mountPath: /dev
        name: dev
    securityContext:
      privileged: true
  nodeSelector:
    kubernetes.io/hostname: "HOST"

  volumes:
    - name: dev
      hostPath:
        path: /dev
</pre>

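Once the pod is running, you can get a shell in it to run the maintenance tools (pod name as in the manifest above with HOST replaced; assuming the image ships /bin/sh):

<pre>
kubectl exec -ti ungleich-hardware-HOST -- /bin/sh
</pre>
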
Also see: [[The_ungleich_hardware_maintenance_guide]]

h3. Triggering a cronjob / creating a job from a cronjob

To test a cronjob, we can create a job from a cronjob:

<pre>
kubectl create job --from=cronjob/volume2-daily-backup volume2-manual
</pre>

This creates a job @volume2-manual@ based on the cronjob @volume2-daily-backup@.

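To follow up on the manually created job, the usual job commands apply (names taken from the example above):

<pre>
kubectl get jobs
kubectl logs job/volume2-manual
</pre>
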
h3. su-ing into a user that has nologin shell set

Often users have @nologin@ set as their shell inside the container. To be able to execute maintenance commands within the
container, we can use @su -s /bin/sh@ like this:

<pre>
su -s /bin/sh -c '/path/to/your/script' testuser
</pre>

Found on https://serverfault.com/questions/351046/how-to-run-command-as-user-who-has-usr-sbin-nologin-as-shell

h3. How to print a secret value

Assuming you want the "password" item from a secret, use:

<pre>
kubectl get secret SECRETNAME -o jsonpath="{.data.password}" | base64 -d; echo ""
</pre>

h2. Calico CNI

h3. Calico Installation

* We install "calico using helm":https://docs.projectcalico.org/getting-started/kubernetes/helm
* This has the following advantages:
** Easy to upgrade
** Does not require us to configure IPv6/dual stack settings, as the tigera operator figures things out on its own

Usually plain calico can be installed directly using:

<pre>
VERSION=v3.24.1

helm repo add projectcalico https://docs.projectcalico.org/charts
helm upgrade --install --namespace tigera calico projectcalico/tigera-operator --version $VERSION --create-namespace
</pre>

* Check the tags on https://github.com/projectcalico/calico/tags for the latest release

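A rough way to check that the operator and the calico pods came up after the helm install (the exact namespaces depend on the chart defaults, so this is only a sketch):

<pre>
kubectl get pods -A | egrep 'tigera|calico'
</pre>
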
h3. Installing calicoctl

* General installation instructions, including binary download: https://projectcalico.docs.tigera.io/maintenance/clis/calicoctl/install

To be able to manage and configure calico, we need to
"install calicoctl (we run it as a pod)":https://docs.projectcalico.org/getting-started/clis/calicoctl/install#install-calicoctl-as-a-kubernetes-pod

<pre>
kubectl apply -f https://docs.projectcalico.org/manifests/calicoctl.yaml
</pre>

Or version specific:

<pre>
kubectl apply -f https://github.com/projectcalico/calico/blob/v3.20.4/manifests/calicoctl.yaml

# For 3.22
kubectl apply -f https://projectcalico.docs.tigera.io/archive/v3.22/manifests/calicoctl.yaml
</pre>

And making it more easily accessible via an alias:

<pre>
alias calicoctl="kubectl exec -i -n kube-system calicoctl -- /calicoctl"
</pre>

h3. Calico configuration

By default our k8s clusters "BGP peer":https://docs.projectcalico.org/networking/bgp
with an upstream router to propagate podcidr and servicecidr.

Default settings in our infrastructure:

* We use a full-mesh using the @nodeToNodeMeshEnabled: true@ option
* We keep the original next hop so that *only* the server with the pod is announcing it (instead of ecmp)
* We use private ASNs for k8s clusters
* We do *not* use any overlay

After installing calico and calicoctl the last step of the installation is usually:

<pre>
calicoctl create -f - < calico-bgp.yaml
</pre>

A sample BGP configuration:

<pre>
---
apiVersion: projectcalico.org/v3
kind: BGPConfiguration
metadata:
  name: default
spec:
  logSeverityScreen: Info
  nodeToNodeMeshEnabled: true
  asNumber: 65534
  serviceClusterIPs:
  - cidr: 2a0a:e5c0:10:3::/108
  serviceExternalIPs:
  - cidr: 2a0a:e5c0:10:3::/108
---
apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  name: router1-place10
spec:
  peerIP: 2a0a:e5c0:10:1::50
  asNumber: 213081
  keepOriginalNextHop: true
</pre>

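To check that the BGP resources were applied and that the sessions are up, something along these lines can be used (a sketch, using the calicoctl alias from above):

<pre>
calicoctl get bgpConfiguration -o yaml
calicoctl get bgpPeer -o yaml

# BGP session state; this one may need to run directly on a node
# rather than through the calicoctl pod alias
calicoctl node status
</pre>
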
h2. Cilium CNI (experimental)

h3. Status

*NO WORKING CILIUM CONFIGURATION FOR IPv6-ONLY MODE*

h3. Latest error

It seems cilium does not run on IPv6-only hosts:

<pre>
level=info msg="Validating configured node address ranges" subsys=daemon
level=fatal msg="postinit failed" error="external IPv4 node address could not be derived, please configure via --ipv4-node" subsys=daemon
level=info msg="Starting IP identity watcher" subsys=ipcache
</pre>

It crashes after that log entry.

h3. BGP configuration

* The cilium-operator will not start without a correct configmap being present beforehand (see error message below)
* Creating the bgp config beforehand as a configmap is thus required.

The error one gets without the configmap present:

Pods are hanging with:

<pre>
cilium-bpqm6                       0/1     Init:0/4            0             9s
cilium-operator-5947d94f7f-5bmh2   0/1     ContainerCreating   0             9s
</pre>

The error message in the cilium-operator is:

<pre>
Events:
  Type     Reason       Age                From               Message
  ----     ------       ----               ----               -------
  Normal   Scheduled    80s                default-scheduler  Successfully assigned kube-system/cilium-operator-5947d94f7f-lqcsp to server56
  Warning  FailedMount  16s (x8 over 80s)  kubelet            MountVolume.SetUp failed for volume "bgp-config-path" : configmap "bgp-config" not found
</pre>

A correct bgp config looks like this:

<pre>
apiVersion: v1
kind: ConfigMap
metadata:
  name: bgp-config
  namespace: kube-system
data:
  config.yaml: |
    peers:
      - peer-address: 2a0a:e5c0::46
        peer-asn: 209898
        my-asn: 65533
      - peer-address: 2a0a:e5c0::47
        peer-asn: 209898
        my-asn: 65533
    address-pools:
      - name: default
        protocol: bgp
        addresses:
          - 2a0a:e5c0:0:14::/64
</pre>

h3. Installation

Adding the repo:

<pre>
helm repo add cilium https://helm.cilium.io/
helm repo update
</pre>

Installing + configuring cilium:

<pre>
ipv6pool=2a0a:e5c0:0:14::/112

version=1.12.2

helm upgrade --install cilium cilium/cilium --version $version \
  --namespace kube-system \
  --set ipv4.enabled=false \
  --set ipv6.enabled=true \
  --set enableIPv6Masquerade=false \
  --set bgpControlPlane.enabled=true

#  --set ipam.operator.clusterPoolIPv6PodCIDRList=$ipv6pool

# Old style bgp?
#   --set bgp.enabled=true --set bgp.announce.podCIDR=true \

# Show possible configuration options
helm show values cilium/cilium
</pre>

Using a /64 for ipam.operator.clusterPoolIPv6PodCIDRList fails with:

<pre>
level=fatal msg="Unable to init cluster-pool allocator" error="unable to initialize IPv6 allocator New CIDR set failed; the node CIDR size is too big" subsys=cilium-operator-generic
</pre>

See also https://github.com/cilium/cilium/issues/20756

A /112 seems to actually work.

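A rough check that the cilium daemonset and operator came up after the install (labels as commonly set by the chart; this is a sketch, not a full validation):

<pre>
kubectl -n kube-system get pods -l k8s-app=cilium
kubectl -n kube-system get pods -l name=cilium-operator
</pre>
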
h3. Kernel modules

Cilium requires the following modules to be loaded on the host (not loaded by default):

<pre>
modprobe  ip6table_raw
modprobe  ip6table_filter
</pre>

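To have the modules loaded again after a reboot, they can be added to the module autoload configuration; the exact file depends on the distribution (an assumption, adjust to the host OS):

<pre>
# systemd-based distributions
printf "ip6table_raw\nip6table_filter\n" > /etc/modules-load.d/cilium.conf

# alpine / busybox style
echo ip6table_raw >> /etc/modules
echo ip6table_filter >> /etc/modules
</pre>
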
h3. Interesting helm flags

* autoDirectNodeRoutes
* bgpControlPlane.enabled = true

h3. SEE ALSO

* https://docs.cilium.io/en/v1.12/helm-reference/

h2. Multus (incomplete/experimental)

(TBD)

h2. ArgoCD

h3. Argocd Installation

* See https://argo-cd.readthedocs.io/en/stable/

As there is no configuration management present yet, argocd is installed using

<pre>
kubectl create namespace argocd

# Specific Version
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/v2.3.2/manifests/install.yaml

# OR: latest stable
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
</pre>

h3. Get the argocd credentials

<pre>
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d; echo ""
</pre>

h3. Accessing argocd

In regular IPv6 clusters:

* Navigate to https://argocd-server.argocd.CLUSTERDOMAIN

In legacy IPv4 clusters:

<pre>
kubectl --namespace argocd port-forward svc/argocd-server 8080:80
</pre>

* Navigate to https://localhost:8080

h3. Using the argocd webhook to trigger changes

* To trigger changes, POST the git server's webhook JSON to https://argocd.example.com/api/webhook (see the sketch below)

h3. Deploying an application

* Applications are deployed via git towards gitea (code.ungleich.ch) and then pulled by argo
* Always include the *redmine-url* pointing to the (customer) ticket
** Also add the support-url if it exists

Application sample

<pre>
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: gitea-CUSTOMER
  namespace: argocd
spec:
  destination:
    namespace: default
    server: 'https://kubernetes.default.svc'
  source:
    path: apps/prod/gitea
    repoURL: 'https://code.ungleich.ch/ungleich-intern/k8s-config.git'
    targetRevision: HEAD
    helm:
      parameters:
        - name: storage.data.storageClass
          value: rook-ceph-block-hdd
        - name: storage.data.size
          value: 200Gi
        - name: storage.db.storageClass
          value: rook-ceph-block-ssd
        - name: storage.db.size
          value: 10Gi
        - name: storage.letsencrypt.storageClass
          value: rook-ceph-block-hdd
        - name: storage.letsencrypt.size
          value: 50Mi
        - name: letsencryptStaging
          value: 'no'
        - name: fqdn
          value: 'code.verua.online'
  project: default
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
  info:
    - name: 'redmine-url'
      value: 'https://redmine.ungleich.ch/issues/ISSUEID'
    - name: 'support-url'
      value: 'https://support.ungleich.ch/Ticket/Display.html?id=TICKETID'
</pre>

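After committing such an Application manifest, the sync status can be checked without the argocd CLI, since Application is a namespaced CRD in the argocd namespace (name taken from the sample above):

<pre>
kubectl -n argocd get applications
kubectl -n argocd describe application gitea-CUSTOMER
</pre>
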
h2. Helm related operations and conventions

We use helm charts extensively.

* In production, they are managed via argocd
* In development, helm charts can be developed and deployed manually using the helm utility.

h3. Installing a helm chart

One can use the usual pattern of

<pre>
helm install <releasename> <chartdirectory>
</pre>

However, when testing helm charts you often want to reinstall/update. The following pattern is "better", because it also works if the release is already installed:

<pre>
helm upgrade --install <releasename> <chartdirectory>
</pre>

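A slightly fuller invocation that can be handy when testing, with namespace handling and value overrides (release name, namespace, values file and keys are placeholders):

<pre>
helm upgrade --install <releasename> <chartdirectory> \
  --namespace <namespace> --create-namespace \
  -f values.yaml \
  --set image.tag=<tag>
</pre>
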
h3. Naming services and deployments in helm charts [Application labels]

* We always have {{ .Release.Name }} to identify the current "instance"
* Deployments:
** use @app: <what it is>@, f.i. @app: nginx@, @app: postgres@, ... (see the sketch below)
* See more about standard labels on
** https://kubernetes.io/docs/concepts/overview/working-with-objects/common-labels/
** https://helm.sh/docs/chart_best_practices/labels/

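A minimal template sketch of how these two conventions can look together in a deployment (names and image are examples only):

<pre>
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-nginx
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.23
</pre>
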
h3. Show all versions of a helm chart

<pre>
helm search repo -l repo/chart
</pre>

For example:

<pre>
% helm search repo -l projectcalico/tigera-operator
NAME                         	CHART VERSION	APP VERSION	DESCRIPTION
projectcalico/tigera-operator	v3.23.3      	v3.23.3    	Installs the Tigera operator for Calico
projectcalico/tigera-operator	v3.23.2      	v3.23.2    	Installs the Tigera operator for Calico
....
</pre>

h3. Show possible values of a chart

<pre>
helm show values <repo/chart>
</pre>

Example:

<pre>
helm show values ingress-nginx/ingress-nginx
</pre>

h2. Rook + Ceph

h3. Installation

* Usually directly via argocd

Manual steps:

<pre>

</pre>

h3. Executing ceph commands

Using the ceph-tools pod as follows:

<pre>
kubectl exec -n rook-ceph -ti $(kubectl -n rook-ceph get pods -l app=rook-ceph-tools -o jsonpath='{.items[*].metadata.name}') -- ceph -s
</pre>

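The same pattern works for other read-only ceph commands; for convenience an alias can be defined, analogous to the calicoctl alias above (a sketch, assuming a single rook-ceph-tools pod):

<pre>
alias rookceph='kubectl exec -n rook-ceph -ti $(kubectl -n rook-ceph get pods -l app=rook-ceph-tools -o jsonpath="{.items[*].metadata.name}") -- ceph'

rookceph health detail
rookceph osd tree
rookceph df
</pre>
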
h3. Inspecting the logs of a specific server

<pre>
# Get the related pods
kubectl -n rook-ceph get pods -l app=rook-ceph-osd-prepare
...

# Inspect the logs of a specific pod
kubectl -n rook-ceph logs -f rook-ceph-osd-prepare-server23--1-444qx
</pre>

h3. Inspecting the logs of the rook-ceph-operator

<pre>
kubectl -n rook-ceph logs -f -l app=rook-ceph-operator
</pre>

h3. Restarting the rook operator

<pre>
kubectl -n rook-ceph delete pods -l app=rook-ceph-operator
</pre>

h3. Triggering server prepare / adding new osds

The rook-ceph-operator triggers/watches/creates pods to maintain hosts. To trigger a full "re-scan", simply delete that pod:

<pre>
kubectl -n rook-ceph delete pods -l app=rook-ceph-operator
</pre>

This will cause all the @rook-ceph-osd-prepare-..@ jobs to be recreated and thus OSDs to be created, if new disks have been added.

h3. Removing an OSD

* See "Ceph OSD Management":https://rook.io/docs/rook/v1.7/ceph-osd-mgmt.html
* More specifically: https://github.com/rook/rook/blob/release-1.7/cluster/examples/kubernetes/ceph/osd-purge.yaml
* Then delete the related deployment

Set the OSD id in the osd-purge.yaml and apply it. The OSD should be down before doing this.

<pre>
apiVersion: batch/v1
kind: Job
metadata:
  name: rook-ceph-purge-osd
  namespace: rook-ceph # namespace:cluster
  labels:
    app: rook-ceph-purge-osd
spec:
  template:
    metadata:
      labels:
        app: rook-ceph-purge-osd
    spec:
      serviceAccountName: rook-ceph-purge-osd
      containers:
        - name: osd-removal
          image: rook/ceph:master
          # TODO: Insert the OSD ID in the last parameter that is to be removed
          # The OSD IDs are a comma-separated list. For example: "0" or "0,2".
          # If you want to preserve the OSD PVCs, set `--preserve-pvc true`.
          #
          # A --force-osd-removal option is available if the OSD should be destroyed even though the
          # removal could lead to data loss.
          args:
            - "ceph"
            - "osd"
            - "remove"
            - "--preserve-pvc"
            - "false"
            - "--force-osd-removal"
            - "false"
            - "--osd-ids"
            - "SETTHEOSDIDHERE"
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: ROOK_MON_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  key: data
                  name: rook-ceph-mon-endpoints
            - name: ROOK_CEPH_USERNAME
              valueFrom:
                secretKeyRef:
                  key: ceph-username
                  name: rook-ceph-mon
            - name: ROOK_CEPH_SECRET
              valueFrom:
                secretKeyRef:
                  key: ceph-secret
                  name: rook-ceph-mon
            - name: ROOK_CONFIG_DIR
              value: /var/lib/rook
            - name: ROOK_CEPH_CONFIG_OVERRIDE
              value: /etc/rook/config/override.conf
            - name: ROOK_FSID
              valueFrom:
                secretKeyRef:
                  key: fsid
                  name: rook-ceph-mon
            - name: ROOK_LOG_LEVEL
              value: DEBUG
          volumeMounts:
            - mountPath: /etc/ceph
              name: ceph-conf-emptydir
            - mountPath: /var/lib/rook
              name: rook-config
      volumes:
        - emptyDir: {}
          name: ceph-conf-emptydir
        - emptyDir: {}
          name: rook-config
      restartPolicy: Never
</pre>

Deleting the deployment:

<pre>
[18:05] bridge:~% kubectl -n rook-ceph delete deployment rook-ceph-osd-6
deployment.apps "rook-ceph-osd-6" deleted
</pre>

h2. Ingress + Cert Manager

* We deploy "nginx-ingress":https://docs.nginx.com/nginx-ingress-controller/ to get an ingress
* We deploy "cert-manager":https://cert-manager.io/ to handle certificates
* We independently deploy @ClusterIssuer@ to allow the cert-manager app to deploy and the issuer to be created once the CRDs from cert manager are in place

h3. IPv4 reachability

The ingress is by default IPv6 only. To make it reachable from the IPv4 world, get its IPv6 address and configure a NAT64 mapping in Jool.

Steps:

h4. Get the ingress IPv6 address

Use @kubectl -n ingress-nginx get svc ingress-nginx-controller -o jsonpath='{.spec.clusterIP}'; echo ''@

Example:

<pre>
kubectl -n ingress-nginx get svc ingress-nginx-controller -o jsonpath='{.spec.clusterIP}'; echo ''
2a0a:e5c0:10:1b::ce11
</pre>

h4. Add NAT64 mapping

* Update the __dcl_jool_siit cdist type
* Record the two IPs (IPv6 and IPv4)
* Configure all routers

h4. Add DNS record

To use the ingress as a CNAME destination, create an "ingress" DNS record, such as:

<pre>
; k8s ingress for dev
dev-ingress                 AAAA 2a0a:e5c0:10:1b::ce11
dev-ingress                 A 147.78.194.23
</pre>

h4. Add supporting wildcard DNS

If you plan to add various sites under a specific domain, add a wildcard DNS entry, such as *.k8s-dev.django-hosting.ch:

<pre>
*.k8s-dev         CNAME dev-ingress.ungleich.ch.
</pre>

h2. Harbor

* We use "Harbor":https://goharbor.io/ for caching and as an image registry. Internal app reference: apps/prod/harbor.
* The admin password is in the password store, auto generated per cluster
* At the moment harbor only authenticates against the internal ldap tree

h3. LDAP configuration

* The url needs to be ldaps://...
* uid = uid
* The rest is standard

h2. Monitoring / Prometheus

* Via "kube-prometheus":https://github.com/prometheus-operator/kube-prometheus/

Access via:

* http://prometheus-k8s.monitoring.svc:9090
* http://grafana.monitoring.svc:3000
* http://alertmanager.monitoring.svc:9093

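These service URLs are only resolvable inside the cluster; from a workstation they can be reached via port-forwarding, for instance (service names as listed above; adjust if your deployment names them differently):

<pre>
kubectl -n monitoring port-forward svc/prometheus-k8s 9090:9090
kubectl -n monitoring port-forward svc/grafana 3000:3000
kubectl -n monitoring port-forward svc/alertmanager 9093:9093
</pre>
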
h3. Prometheus Options

* "helm/kube-prometheus-stack":https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack
** Includes dashboards and co.
* "manifest based kube-prometheus":https://github.com/prometheus-operator/kube-prometheus
** Includes dashboards and co.
* "Prometheus Operator (mainly CRD manifests)":https://github.com/prometheus-operator/prometheus-operator

h2. Nextcloud

h3. How to get the nextcloud credentials

* The initial username is set to "nextcloud"
* The password is autogenerated and saved in a kubernetes secret

<pre>
kubectl get secret RELEASENAME-nextcloud -o jsonpath="{.data.PASSWORD}" | base64 -d; echo ""
</pre>

h3. How to fix "Access through untrusted domain"

* Nextcloud stores the initial domain configuration
* If the FQDN is changed, it will show the error message "Access through untrusted domain"
* To fix, edit /var/www/html/config/config.php and correct the domain (see the sketch below)
* Then delete the pods

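A rough sketch of the manual fix from within the cluster; the pod name is a placeholder and the available editor depends on what the image ships:

<pre>
kubectl exec -ti NEXTCLOUD-POD -- vi /var/www/html/config/config.php
# adjust the 'trusted_domains' entries to the new FQDN, then:
kubectl delete pod NEXTCLOUD-POD
</pre>
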
h2. Infrastructure versions

h3. ungleich kubernetes infrastructure v5 (2021-10)

Clusters are configured / setup in this order:

* Bootstrap via kubeadm
* "Networking via calico + BGP (non ECMP) using helm":https://docs.projectcalico.org/getting-started/kubernetes/helm
* "ArgoCD for CD":https://argo-cd.readthedocs.io/en/stable/
** "rook for storage via argocd":https://rook.io/
** haproxy as an IPv4-to-IPv6 proxy into the IPv6-only cluster, via argocd
** "kubernetes-secret-generator for in cluster secrets":https://github.com/mittwald/kubernetes-secret-generator
** "ungleich-certbot managing certs and nginx":https://hub.docker.com/repository/docker/ungleich/ungleich-certbot

h3. ungleich kubernetes infrastructure v4 (2021-09)

* rook is configured via manifests instead of using the rook-ceph-cluster helm chart
* The rook operator is still being installed via helm

h3. ungleich kubernetes infrastructure v3 (2021-07)

* rook is now installed via helm via argocd instead of directly via manifests

h3. ungleich kubernetes infrastructure v2 (2021-05)

* Replaced fluxv2 from ungleich k8s v1 with argocd
** argocd can apply helm templates directly without needing to go through Chart releases
* We are also using argoflow for build flows
* Planned to add "kaniko":https://github.com/GoogleContainerTools/kaniko for image building

h3. ungleich kubernetes infrastructure v1 (2021-01)

We are using the following components:

* "Calico as a CNI":https://www.projectcalico.org/ with BGP, IPv6 only, no encapsulation
** Needed for basic networking
* "kubernetes-secret-generator":https://github.com/mittwald/kubernetes-secret-generator for creating secrets
** Needed so that secrets are not stored in the git repository, but only in the cluster
* "ungleich-certbot":https://hub.docker.com/repository/docker/ungleich/ungleich-certbot
** Needed to get letsencrypt certificates for services
* "rook with ceph rbd + cephfs":https://rook.io/ for storage
** rbd for almost everything, *ReadWriteOnce*
** cephfs for smaller things, multi access *ReadWriteMany*
** Needed for providing persistent storage
* "flux v2":https://fluxcd.io/
** Needed to manage resources automatically