
1 22 Nico Schottelius
h1. The ungleich kubernetes infrastructure and ungleich kubernetes manual
2 1 Nico Schottelius
3 3 Nico Schottelius
{{toc}}
4
5 1 Nico Schottelius
h2. Status
6
7 28 Nico Schottelius
This document is **pre-production**.
8
This document is to become the ungleich kubernetes infrastructure overview as well as the ungleich kubernetes manual.
9 1 Nico Schottelius
10 10 Nico Schottelius
h2. k8s clusters
11
12 123 Nico Schottelius
| Cluster            | Purpose/Setup     | Maintainer | Master(s)                     | argo                                                   | v4 http proxy | last verified |
13
| c0.k8s.ooo         | Dev               | -          | UNUSED                        |                                                        |               |    2021-10-05 |
14
| c1.k8s.ooo         | retired           |            | -                             |                                                        |               |    2022-03-15 |
15
| c2.k8s.ooo         | Dev p7 HW         | Nico       | server47 server53 server54    | "argo":https://argocd-server.argocd.svc.c2.k8s.ooo     |               |    2021-10-05 |
16
| c3.k8s.ooo         | retired           | -          | -                             |                                                        |               |    2021-10-05 |
17
| c4.k8s.ooo         | Dev2 p7 HW        | Jin-Guk    | server52 server53 server54    |                                                        |               |             - |
18
| c5.k8s.ooo         | retired           |            | -                             |                                                        |               |    2022-03-15 |
19
| c6.k8s.ooo         | Dev p6 VM Jin-Guk | Jin-Guk    |                               |                                                        |               |               |
20
| [[p5.k8s.ooo]]     | production        |            | server34 server36 server38    | "argo":https://argocd-server.argocd.svc.p5.k8s.ooo     | -             |               |
21
| [[p5-cow.k8s.ooo]] | production        | Nico       | server47 server51 server55    | "argo":https://argocd-server.argocd.svc.p5-cow.k8s.ooo |               |    2022-08-27 |
22
| [[p6.k8s.ooo]]     | production        |            | server67 server69 server71    | "argo":https://argocd-server.argocd.svc.p6.k8s.ooo     | 147.78.194.13 |    2021-10-05 |
23
| [[p10.k8s.ooo]]    | production        |            | server63 server65 server83    | "argo":https://argocd-server.argocd.svc.p10.k8s.ooo    | 147.78.194.12 |    2021-10-05 |
24
| [[k8s.ge.nau.so]]  | development       |            | server107 server108 server109 | "argo":https://argocd-server.argocd.svc.k8s.ge.nau.so  |               |               |
25
| [[dev.k8s.ooo]]    | development       |            | server110 server111 server112 | "argo":https://argocd-server.argocd.svc.dev.k8s.ooo    | -             |    2022-07-08 |
26 164 Nico Schottelius
| [[r1r2p15k8sooo|r1.p15.k8s.ooo]] | production | Nico | server120 | | | 2022-10-30 |
27
| [[r1r2p15k8sooo|r2.p15.k8s.ooo]] | production | Nico | server121 | | | 2022-09-06 |
28 162 Nico Schottelius
| [[r1r2p10k8sooo|r1.p10.k8s.ooo]] | production | Nico | server122 | | | 2022-10-30 |
29
| [[r1r2p10k8sooo|r2.p10.k8s.ooo]] | production | Nico | server123 | | | 2022-10-15 |
30
| [[r1r2p5k8sooo|r1.p5.k8s.ooo]] | production | Nico | server137 | | | 2022-10-30 |
31
| [[r1r2p5k8sooo|r2.p5.k8s.ooo]] | production | Nico | server138 | | | 2022-10-30 |
32
| [[r1r2p6k8sooo|r1.p6.k8s.ooo]] | production | Nico | server139 | | | 2022-10-30 |
33
| [[r1r2p6k8sooo|r2.p6.k8s.ooo]] | production | Nico | server140 | | | 2022-10-30 |
34 21 Nico Schottelius
35 1 Nico Schottelius
h2. General architecture and components overview
36
37
* All k8s clusters are IPv6 only
38
* We use BGP peering to propagate podcidr and serviceCidr networks to our infrastructure
39
* The main public testing repository is "ungleich-k8s":https://code.ungleich.ch/ungleich-public/ungleich-k8s
40 18 Nico Schottelius
** Private configurations are found in the **k8s-config** repository
41 1 Nico Schottelius
42
h3. Cluster types
43
44 28 Nico Schottelius
| **Type/Feature**            | **Development**                | **Production**         |
45
| Min No. nodes               | 3 (1 master, 3 worker)         | 5 (3 master, 3 worker) |
46
| Recommended minimum         | 4 (dedicated master, 3 worker) | 8 (3 master, 5 worker) |
47
| Separation of control plane | optional                       | recommended            |
48
| Persistent storage          | required                       | required               |
49
| Number of storage monitors  | 3                              | 5                      |
50 1 Nico Schottelius
51 43 Nico Schottelius
h2. General k8s operations
52 1 Nico Schottelius
53 46 Nico Schottelius
h3. Cheat sheet / external great references
54
55
* "kubectl cheatsheet":https://kubernetes.io/docs/reference/kubectl/cheatsheet/
56
57 117 Nico Schottelius
h3. Allowing to schedule work on the control plane / removing node taints
58 69 Nico Schottelius
59
* Mostly for single node / test / development clusters
60
* Just remove the master taint as follows
61
62
<pre>
63
kubectl taint nodes --all node-role.kubernetes.io/master-
64 118 Nico Schottelius
kubectl taint nodes --all node-role.kubernetes.io/control-plane-
65 69 Nico Schottelius
</pre>
66 1 Nico Schottelius
67 117 Nico Schottelius
You can check the node taints using @kubectl describe node ...@
68 69 Nico Schottelius
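For example (the node name is only an illustration):

<pre>
kubectl describe node server47 | grep Taints
</pre>
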
69 44 Nico Schottelius
h3. Get the cluster admin.conf
70
71
* On the masters of each cluster you can find the file @/etc/kubernetes/admin.conf@
72
* To be able to administrate the cluster you can copy the admin.conf to your local machine
73
* Multi cluster debugging can be very easy if you name the config ~/cX-admin.conf (see example below)
74
75
<pre>
76
% scp root@server47.place7.ungleich.ch:/etc/kubernetes/admin.conf ~/c2-admin.conf
77
% export KUBECONFIG=~/c2-admin.conf    
78
% kubectl get nodes
79
NAME       STATUS                     ROLES                  AGE   VERSION
80
server47   Ready                      control-plane,master   82d   v1.22.0
81
server48   Ready                      control-plane,master   82d   v1.22.0
82
server49   Ready                      <none>                 82d   v1.22.0
83
server50   Ready                      <none>                 82d   v1.22.0
84
server59   Ready                      control-plane,master   82d   v1.22.0
85
server60   Ready,SchedulingDisabled   <none>                 82d   v1.22.0
86
server61   Ready                      <none>                 82d   v1.22.0
87
server62   Ready                      <none>                 82d   v1.22.0               
88
</pre>
89
90 18 Nico Schottelius
h3. Installing a new k8s cluster
91 8 Nico Schottelius
92 9 Nico Schottelius
* Decide on the cluster name (usually *cX.k8s.ooo*), X counting upwards
93 28 Nico Schottelius
** Using pXX.k8s.ooo for production clusters of placeXX
94 9 Nico Schottelius
* Use cdist to configure the nodes with requirements like crio
95
* Decide between single or multi node control plane setups (see below)
96 28 Nico Schottelius
** Single control plane suitable for development clusters
97 9 Nico Schottelius
98 28 Nico Schottelius
Typical init procedure:
99 9 Nico Schottelius
100 28 Nico Schottelius
* Single control plane: @kubeadm init --config bootstrap/XXX/kubeadm.yaml@
101
* Multi control plane (HA): @kubeadm init --config bootstrap/XXX/kubeadm.yaml --upload-certs@
102 10 Nico Schottelius
103 29 Nico Schottelius
h3. Deleting a pod that is hanging in terminating state
104
105
<pre>
106
kubectl delete pod <PODNAME> --grace-period=0 --force --namespace <NAMESPACE>
107
</pre>
108
109
(from https://stackoverflow.com/questions/35453792/pods-stuck-in-terminating-status)
110
111 42 Nico Schottelius
h3. Listing nodes of a cluster
112
113
<pre>
114
[15:05] bridge:~% kubectl get nodes
115
NAME       STATUS   ROLES                  AGE   VERSION
116
server22   Ready    <none>                 52d   v1.22.0
117
server23   Ready    <none>                 52d   v1.22.2
118
server24   Ready    <none>                 52d   v1.22.0
119
server25   Ready    <none>                 52d   v1.22.0
120
server26   Ready    <none>                 52d   v1.22.0
121
server27   Ready    <none>                 52d   v1.22.0
122
server63   Ready    control-plane,master   52d   v1.22.0
123
server64   Ready    <none>                 52d   v1.22.0
124
server65   Ready    control-plane,master   52d   v1.22.0
125
server66   Ready    <none>                 52d   v1.22.0
126
server83   Ready    control-plane,master   52d   v1.22.0
127
server84   Ready    <none>                 52d   v1.22.0
128
server85   Ready    <none>                 52d   v1.22.0
129
server86   Ready    <none>                 52d   v1.22.0
130
</pre>
131
132 41 Nico Schottelius
h3. Removing / draining a node
133
134
Usually @kubectl drain server@ should do the job, but sometimes we need to be more aggressive:
135
136 1 Nico Schottelius
<pre>
137 103 Nico Schottelius
kubectl drain --delete-emptydir-data --ignore-daemonsets serverXX
138 42 Nico Schottelius
</pre>
139
140
h3. Re-adding a node after draining
141
142
<pre>
143
kubectl uncordon serverXX
144 1 Nico Schottelius
</pre>
145 43 Nico Schottelius
146 50 Nico Schottelius
h3. (Re-)joining worker nodes after creating the cluster
147 49 Nico Schottelius
148
* We need to have an up-to-date token
149
* We use different join commands for the workers and control plane nodes
150
151
Generating the join command on an existing control plane node:
152
153
<pre>
154
kubeadm token create --print-join-command
155
</pre>
156
157 50 Nico Schottelius
h3. (Re-)joining control plane nodes after creating the cluster
158 1 Nico Schottelius
159 50 Nico Schottelius
* We generate the token again
160
* We upload the certificates
161
* We need to combine/create the join command for the control plane node
162
163
Example session:
164
165
<pre>
166
% kubeadm token create --print-join-command
167
kubeadm join p10-api.k8s.ooo:6443 --token xmff4i.ABC --discovery-token-ca-cert-hash sha256:longhash 
168
169
% kubeadm init phase upload-certs --upload-certs
170
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
171
[upload-certs] Using certificate key:
172
CERTKEY
173
174
# Then we use these two outputs on the joining node:
175
176
kubeadm join p10-api.k8s.ooo:6443 --token xmff4i.ABC --discovery-token-ca-cert-hash sha256:longhash --control-plane --certificate-key CERTKEY
177
</pre>
178
179
Commands to be used on a control plane node:
180
181
<pre>
182
kubeadm token create --print-join-command
183
kubeadm init phase upload-certs --upload-certs
184
</pre>
185
186
Commands to be used on the joining node:
187
188
<pre>
189
JOINCOMMAND --control-plane --certificate-key CERTKEY
190
</pre>
191 49 Nico Schottelius
192 51 Nico Schottelius
SEE ALSO
193
194
* https://stackoverflow.com/questions/63936268/how-to-generate-kubeadm-token-for-secondary-control-plane-nodes
195
* https://blog.scottlowe.org/2019/08/15/reconstructing-the-join-command-for-kubeadm/
196
197 53 Nico Schottelius
h3. How to fix etcd does not start when rejoining a kubernetes cluster as a control plane
198 52 Nico Schottelius
199
If during the above step etcd does not come up, @kubeadm join@ can hang as follows:
200
201
<pre>
202
[control-plane] Creating static Pod manifest for "kube-apiserver"                                                              
203
[control-plane] Creating static Pod manifest for "kube-controller-manager"                                                     
204
[control-plane] Creating static Pod manifest for "kube-scheduler"                                                              
205
[check-etcd] Checking that the etcd cluster is healthy                                                                         
206
error execution phase check-etcd: etcd cluster is not healthy: failed to dial endpoint https://[2a0a:e5c0:10:1:225:b3ff:fe20:37
207
8a]:2379 with maintenance client: context deadline exceeded                                                                    
208
To see the stack trace of this error execute with --v=5 or higher         
209
</pre>
210
211
Then the problem is likely that the etcd server is still a member of the cluster. We first need to remove it from the etcd cluster and then the join works.
212
213
To fix this we do:
214
215
* Find a working etcd pod
216
* Find the etcd members / member list
217
* Remove the etcd member that we want to re-join the cluster
218
219
220
<pre>
221
# Find the etcd pods
222
kubectl -n kube-system get pods -l component=etcd,tier=control-plane
223
224
# Get the list of etcd servers with the member id 
225
kubectl exec -n kube-system -ti ETCDPODNAME -- etcdctl --endpoints '[::1]:2379' --cacert /etc/kubernetes/pki/etcd/ca.crt --cert  /etc/kubernetes/pki/etcd/server.crt --key /etc/kubernetes/pki/etcd/server.key member list
226
227
# Remove the member
228
kubectl exec -n kube-system -ti ETCDPODNAME -- etcdctl --endpoints '[::1]:2379' --cacert /etc/kubernetes/pki/etcd/ca.crt --cert  /etc/kubernetes/pki/etcd/server.crt --key /etc/kubernetes/pki/etcd/server.key member remove MEMBERID
229
</pre>
230
231
Sample session:
232
233
<pre>
234
[10:48] line:~% kubectl -n kube-system get pods -l component=etcd,tier=control-plane
235
NAME            READY   STATUS    RESTARTS     AGE
236
etcd-server63   1/1     Running   0            3m11s
237
etcd-server65   1/1     Running   3            7d2h
238
etcd-server83   1/1     Running   8 (6d ago)   7d2h
239
[10:48] line:~% kubectl exec -n kube-system -ti etcd-server65 -- etcdctl --endpoints '[::1]:2379' --cacert /etc/kubernetes/pki/etcd/ca.crt --cert  /etc/kubernetes/pki/etcd/server.crt --key /etc/kubernetes/pki/etcd/server.key member list
240
356891cd676df6e4, started, server65, https://[2a0a:e5c0:10:1:225:b3ff:fe20:375c]:2380, https://[2a0a:e5c0:10:1:225:b3ff:fe20:375c]:2379, false
241
371b8a07185dee7e, started, server63, https://[2a0a:e5c0:10:1:225:b3ff:fe20:378a]:2380, https://[2a0a:e5c0:10:1:225:b3ff:fe20:378a]:2379, false
242
5942bc58307f8af9, started, server83, https://[2a0a:e5c0:10:1:3e4a:92ff:fe79:bb98]:2380, https://[2a0a:e5c0:10:1:3e4a:92ff:fe79:bb98]:2379, false
243
244
[10:48] line:~% kubectl exec -n kube-system -ti etcd-server65 -- etcdctl --endpoints '[::1]:2379' --cacert /etc/kubernetes/pki/etcd/ca.crt --cert  /etc/kubernetes/pki/etcd/server.crt --key /etc/kubernetes/pki/etcd/server.key member remove 371b8a07185dee7e
245
Member 371b8a07185dee7e removed from cluster e3c0805f592a8f77
246 1 Nico Schottelius
247
</pre>
248
249
SEE ALSO
250
251
* We found the solution using https://stackoverflow.com/questions/67921552/re-installed-node-cannot-join-kubernetes-cluster
252 56 Nico Schottelius
253 147 Nico Schottelius
h3. Node labels (adding, showing, removing)
254
255
Listing the labels:
256
257
<pre>
258
kubectl get nodes --show-labels
259
</pre>
260
261
Adding labels:
262
263
<pre>
264
kubectl label nodes LIST-OF-NODES label1=value1 
265
266
</pre>
267
268
For instance:
269
270
<pre>
271
kubectl label nodes router2 router3 hosttype=router 
272
</pre>
273
274
Selecting nodes in pods:
275
276
<pre>
277
apiVersion: v1
278
kind: Pod
279
...
280
spec:
281
  nodeSelector:
282
    hosttype: router
283
</pre>
284
285 148 Nico Schottelius
Removing labels by adding a minus at the end of the label name:
286
287
<pre>
288
kubectl label node <nodename> <labelname>-
289
</pre>
290
291
For instance:
292
293
<pre>
294
kubectl label nodes router2 router3 hosttype- 
295
</pre>
296
297 147 Nico Schottelius
SEE ALSO
298 1 Nico Schottelius
299 148 Nico Schottelius
* https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/
300
* https://stackoverflow.com/questions/34067979/how-to-delete-a-node-label-by-command-and-api
301 147 Nico Schottelius
302 101 Nico Schottelius
h3. Hardware Maintenance using ungleich-hardware
303
304
Use the following manifest and replace the HOST with the actual host:
305
306
<pre>
307
apiVersion: v1
308
kind: Pod
309
metadata:
310
  name: ungleich-hardware-HOST
311
spec:
312
  containers:
313
  - name: ungleich-hardware
314
    image: ungleich/ungleich-hardware:0.0.5
315
    args:
316
    - sleep
317
    - "1000000"
318
    volumeMounts:
319
      - mountPath: /dev
320
        name: dev
321
    securityContext:
322
      privileged: true
323
  nodeSelector:
324
    kubernetes.io/hostname: "HOST"
325
326
  volumes:
327
    - name: dev
328
      hostPath:
329
        path: /dev
330
</pre>
331
332 102 Nico Schottelius
Also see: [[The_ungleich_hardware_maintenance_guide]]
333
334 105 Nico Schottelius
h3. Triggering a cronjob / creating a job from a cronjob
335 104 Nico Schottelius
336
To test a cronjob, we can create a job from a cronjob:
337
338
<pre>
339
kubectl create job --from=cronjob/volume2-daily-backup volume2-manual
340
</pre>
341
342
This creates a job @volume2-manual@ based on the cronjob @volume2-daily-backup@.
343
344 112 Nico Schottelius
h3. su-ing into a user that has nologin shell set
345
346
Users often have nologin set as their shell inside the container. To be able to execute maintenance commands within the
347
container, we can use @su -s /bin/sh@ like this:
348
349
<pre>
350
su -s /bin/sh -c '/path/to/your/script' testuser
351
</pre>
352
353
Found on https://serverfault.com/questions/351046/how-to-run-command-as-user-who-has-usr-sbin-nologin-as-shell
354
355 113 Nico Schottelius
h3. How to print a secret value
356
357
Assuming you want the "password" item from a secret, use:
358
359
<pre>
360
kubectl get secret SECRETNAME -o jsonpath="{.data.password}" | base64 -d; echo "" 
361
</pre>
362
363 157 Nico Schottelius
h2. Reference CNI
364
365
* Mainly "stupid", but effective plugins
366
* Main documentation on https://www.cni.dev/plugins/current/
367 158 Nico Schottelius
* Plugins
368
** bridge
369
*** Can create the bridge on the host
370
*** But seems not to be able to add host interfaces to it as well
371
*** Has support for vlan tags
372
** vlan
373
*** creates vlan tagged sub interface on the host
374 160 Nico Schottelius
*** "It's a 1:1 mapping (i.e. no bridge in between)":https://github.com/k8snetworkplumbingwg/multus-cni/issues/569
375 158 Nico Schottelius
** host-device
376
*** moves the interface from the host into the container
377
*** very easy for physical connections to containers
378 159 Nico Schottelius
** ipvlan
379
*** "virtualisation" of a host device
380
*** routing based on IP
381
*** Same MAC for everyone
382
*** Cannot reach the master interface
383
** macvlan
384
*** With mac addresses
385
*** Supports various modes (to be checked)
386
** ptp ("point to point")
387
*** Creates a host device and connects it to the container
388
** win*
389 158 Nico Schottelius
*** Windows implementations
390 157 Nico Schottelius
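For illustration, a minimal config for the bridge plugin could look like the following sketch (bridge name, vlan id and prefix are made up):

<pre>
{
  "cniVersion": "0.4.0",
  "name": "examplenet",
  "type": "bridge",
  "bridge": "br0",
  "vlan": 100,
  "ipam": {
    "type": "host-local",
    "ranges": [
      [ { "subnet": "2001:db8:42::/64" } ]
    ]
  }
}
</pre>
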
391 62 Nico Schottelius
h2. Calico CNI
392
393
h3. Calico Installation
394
395
* We install "calico using helm":https://docs.projectcalico.org/getting-started/kubernetes/helm
396
* This has the following advantages:
397
** Easy to upgrade
398
** No need to manually configure IPv6/dual-stack settings, as the tigera operator figures things out on its own
399
400
Usually plain calico can be installed directly using:
401
402
<pre>
403 149 Nico Schottelius
VERSION=v3.24.1
404
405 120 Nico Schottelius
helm repo add projectcalico https://docs.projectcalico.org/charts
406 124 Nico Schottelius
helm upgrade --install --namespace tigera calico projectcalico/tigera-operator --version $VERSION --create-namespace
407 1 Nico Schottelius
</pre>
408 92 Nico Schottelius
409
* Check the tags on https://github.com/projectcalico/calico/tags for the latest release
410 62 Nico Schottelius
411
h3. Installing calicoctl
412
413 115 Nico Schottelius
* General installation instructions, including binary download: https://projectcalico.docs.tigera.io/maintenance/clis/calicoctl/install
414
415 62 Nico Schottelius
To be able to manage and configure calico, we need to 
416
"install calicoctl (we choose the version as a pod)":https://docs.projectcalico.org/getting-started/clis/calicoctl/install#install-calicoctl-as-a-kubernetes-pod
417
418
<pre>
419
kubectl apply -f https://docs.projectcalico.org/manifests/calicoctl.yaml
420
</pre>
421
422 93 Nico Schottelius
Or version specific:
423
424
<pre>
425
kubectl apply -f https://github.com/projectcalico/calico/blob/v3.20.4/manifests/calicoctl.yaml
426 97 Nico Schottelius
427
# For 3.22
428
kubectl apply -f https://projectcalico.docs.tigera.io/archive/v3.22/manifests/calicoctl.yaml
429 93 Nico Schottelius
</pre>
430
431 70 Nico Schottelius
And making it more easily accessible via an alias:
432
433
<pre>
434
alias calicoctl="kubectl exec -i -n kube-system calicoctl -- /calicoctl"
435
</pre>
436
437 62 Nico Schottelius
h3. Calico configuration
438
439 63 Nico Schottelius
By default our k8s clusters "BGP peer":https://docs.projectcalico.org/networking/bgp
440
with an upstream router to propagate podcidr and servicecidr.
441 62 Nico Schottelius
442
Default settings in our infrastructure:
443
444
* We use a full-mesh using the @nodeToNodeMeshEnabled: true@ option
445
* We keep the original next hop so that *only* the server with the pod is announcing it (instead of ecmp)
446 1 Nico Schottelius
* We use private ASNs for k8s clusters
447 63 Nico Schottelius
* We do *not* use any overlay
448 62 Nico Schottelius
449
After installing calico and calicoctl the last step of the installation is usually:
450
451 1 Nico Schottelius
<pre>
452 79 Nico Schottelius
calicoctl create -f - < calico-bgp.yaml
453 62 Nico Schottelius
</pre>
454
455
456
A sample BGP configuration:
457
458
<pre>
459
---
460
apiVersion: projectcalico.org/v3
461
kind: BGPConfiguration
462
metadata:
463
  name: default
464
spec:
465
  logSeverityScreen: Info
466
  nodeToNodeMeshEnabled: true
467
  asNumber: 65534
468
  serviceClusterIPs:
469
  - cidr: 2a0a:e5c0:10:3::/108
470
  serviceExternalIPs:
471
  - cidr: 2a0a:e5c0:10:3::/108
472
---
473
apiVersion: projectcalico.org/v3
474
kind: BGPPeer
475
metadata:
476
  name: router1-place10
477
spec:
478
  peerIP: 2a0a:e5c0:10:1::50
479
  asNumber: 213081
480
  keepOriginalNextHop: true
481
</pre>
482
483 126 Nico Schottelius
h2. Cilium CNI (experimental)
484
485 137 Nico Schottelius
h3. Status
486
487 138 Nico Schottelius
*NO WORKING CILIUM CONFIGURATION FOR IPV6 only modes*
488 137 Nico Schottelius
489 146 Nico Schottelius
h3. Latest error
490
491
It seems cilium does not run on IPv6 only hosts:
492
493
<pre>
494
level=info msg="Validating configured node address ranges" subsys=daemon
495
level=fatal msg="postinit failed" error="external IPv4 node address could not be derived, please configure via --ipv4-node" subsys=daemon
496
level=info msg="Starting IP identity watcher" subsys=ipcache
497
</pre>
498
499
It crashes after that log entry
500
501 128 Nico Schottelius
h3. BGP configuration
502
503
* The cilium-operator will not start without a correct configmap being present beforehand (see error message below)
504
* Creating the bgp config beforehand as a configmap is thus required.
505
506
The error one gets without the configmap present:
507
508
Pods are hanging with:
509
510
<pre>
511
cilium-bpqm6                       0/1     Init:0/4            0             9s
512
cilium-operator-5947d94f7f-5bmh2   0/1     ContainerCreating   0             9s
513
</pre>
514
515
The error message in the cilium-operator is:
516
517
<pre>
518
Events:
519
  Type     Reason       Age                From               Message
520
  ----     ------       ----               ----               -------
521
  Normal   Scheduled    80s                default-scheduler  Successfully assigned kube-system/cilium-operator-5947d94f7f-lqcsp to server56
522
  Warning  FailedMount  16s (x8 over 80s)  kubelet            MountVolume.SetUp failed for volume "bgp-config-path" : configmap "bgp-config" not found
523
</pre>
524
525
A correct bgp config looks like this:
526
527
<pre>
528
apiVersion: v1
529
kind: ConfigMap
530
metadata:
531
  name: bgp-config
532
  namespace: kube-system
533
data:
534
  config.yaml: |
535
    peers:
536
      - peer-address: 2a0a:e5c0::46
537
        peer-asn: 209898
538
        my-asn: 65533
539
      - peer-address: 2a0a:e5c0::47
540
        peer-asn: 209898
541
        my-asn: 65533
542
    address-pools:
543
      - name: default
544
        protocol: bgp
545
        addresses:
546
          - 2a0a:e5c0:0:14::/64
547
</pre>
548 127 Nico Schottelius
549
h3. Installation
550 130 Nico Schottelius
551 127 Nico Schottelius
Adding the repo
552 1 Nico Schottelius
<pre>
553 127 Nico Schottelius
554 129 Nico Schottelius
helm repo add cilium https://helm.cilium.io/
555 130 Nico Schottelius
helm repo update
556
</pre>
557 129 Nico Schottelius
558 135 Nico Schottelius
Installing + configuring cilium
559 129 Nico Schottelius
<pre>
560 130 Nico Schottelius
ipv6pool=2a0a:e5c0:0:14::/112
561 1 Nico Schottelius
562 146 Nico Schottelius
version=1.12.2
563 129 Nico Schottelius
564
helm upgrade --install cilium cilium/cilium --version $version \
565 1 Nico Schottelius
  --namespace kube-system \
566
  --set ipv4.enabled=false \
567
  --set ipv6.enabled=true \
568 146 Nico Schottelius
  --set enableIPv6Masquerade=false \
569
  --set bgpControlPlane.enabled=true 
570 1 Nico Schottelius
571 146 Nico Schottelius
#  --set ipam.operator.clusterPoolIPv6PodCIDRList=$ipv6pool
572
573
# Old style bgp?
574 136 Nico Schottelius
#   --set bgp.enabled=true --set bgp.announce.podCIDR=true \
575 127 Nico Schottelius
576
# Show possible configuration options
577
helm show values cilium/cilium
578
579 1 Nico Schottelius
</pre>
580 132 Nico Schottelius
581
Using a /64 for ipam.operator.clusterPoolIPv6PodCIDRList fails with:
582
583
<pre>
584
level=fatal msg="Unable to init cluster-pool allocator" error="unable to initialize IPv6 allocator New CIDR set failed; the node CIDR size is too big" subsys=cilium-operator-generic
585
</pre>
586
587 126 Nico Schottelius
588 1 Nico Schottelius
See also https://github.com/cilium/cilium/issues/20756
589 135 Nico Schottelius
590
Seems a /112 is actually working.
591
592
h3. Kernel modules
593
594
Cilium requires the following modules to be loaded on the host (not loaded by default):
595
596
<pre>
597 1 Nico Schottelius
modprobe  ip6table_raw
598
modprobe  ip6table_filter
599
</pre>
600 146 Nico Schottelius
601
h3. Interesting helm flags
602
603
* autoDirectNodeRoutes
604
* bgpControlPlane.enabled = true
605
606
h3. SEE ALSO
607
608
* https://docs.cilium.io/en/v1.12/helm-reference/
609 133 Nico Schottelius
610 150 Nico Schottelius
h2. Multus (incomplete/experimental)
611
612
(TBD)
613
614 122 Nico Schottelius
h2. ArgoCD 
615 56 Nico Schottelius
616 60 Nico Schottelius
h3. Argocd Installation
617 1 Nico Schottelius
618 116 Nico Schottelius
* See https://argo-cd.readthedocs.io/en/stable/
619
620 60 Nico Schottelius
As there is no configuration management present yet, argocd is installed using
621
622 1 Nico Schottelius
<pre>
623 60 Nico Schottelius
kubectl create namespace argocd
624 86 Nico Schottelius
625 96 Nico Schottelius
# Specific Version
626
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/v2.3.2/manifests/install.yaml
627 86 Nico Schottelius
628
# OR: latest stable
629 60 Nico Schottelius
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
630 56 Nico Schottelius
</pre>
631 1 Nico Schottelius
632 116 Nico Schottelius
633 1 Nico Schottelius
634 60 Nico Schottelius
h3. Get the argocd credentials
635
636
<pre>
637
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d; echo ""
638
</pre>
639 52 Nico Schottelius
640 87 Nico Schottelius
h3. Accessing argocd
641
642
In regular IPv6 clusters:
643
644
* Navigate to https://argocd-server.argocd.CLUSTERDOMAIN
645
646
In legacy IPv4 clusters
647
648
<pre>
649
kubectl --namespace argocd port-forward svc/argocd-server 8080:80
650
</pre>
651
652 88 Nico Schottelius
* Navigate to https://localhost:8080
653
654 68 Nico Schottelius
h3. Using the argocd webhook to trigger changes
655 67 Nico Schottelius
656
* To trigger changes, post a JSON payload to https://argocd.example.com/api/webhook (see the example below)
657
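A hand-triggered example, assuming a GitHub-compatible push payload (URL and repository below are placeholders):

<pre>
curl -X POST https://argocd.example.com/api/webhook \
  -H "Content-Type: application/json" \
  -H "X-GitHub-Event: push" \
  -d '{"ref": "refs/heads/master", "repository": {"html_url": "https://code.ungleich.ch/ungleich-intern/k8s-config", "default_branch": "master"}}'
</pre>
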
658 72 Nico Schottelius
h3. Deploying an application
659
660
* Applications are deployed via git towards gitea (code.ungleich.ch) and then pulled by argo
661 73 Nico Schottelius
* Always include the *redmine-url* pointing to the (customer) ticket
662
** Also add the support-url if it exists
663 72 Nico Schottelius
664
Application sample
665
666
<pre>
667
apiVersion: argoproj.io/v1alpha1
668
kind: Application
669
metadata:
670
  name: gitea-CUSTOMER
671
  namespace: argocd
672
spec:
673
  destination:
674
    namespace: default
675
    server: 'https://kubernetes.default.svc'
676
  source:
677
    path: apps/prod/gitea
678
    repoURL: 'https://code.ungleich.ch/ungleich-intern/k8s-config.git'
679
    targetRevision: HEAD
680
    helm:
681
      parameters:
682
        - name: storage.data.storageClass
683
          value: rook-ceph-block-hdd
684
        - name: storage.data.size
685
          value: 200Gi
686
        - name: storage.db.storageClass
687
          value: rook-ceph-block-ssd
688
        - name: storage.db.size
689
          value: 10Gi
690
        - name: storage.letsencrypt.storageClass
691
          value: rook-ceph-block-hdd
692
        - name: storage.letsencrypt.size
693
          value: 50Mi
694
        - name: letsencryptStaging
695
          value: 'no'
696
        - name: fqdn
697
          value: 'code.verua.online'
698
  project: default
699
  syncPolicy:
700
    automated:
701
      prune: true
702
      selfHeal: true
703
  info:
704
    - name: 'redmine-url'
705
      value: 'https://redmine.ungleich.ch/issues/ISSUEID'
706
    - name: 'support-url'
707
      value: 'https://support.ungleich.ch/Ticket/Display.html?id=TICKETID'
708
</pre>
709
710 80 Nico Schottelius
h2. Helm related operations and conventions
711 55 Nico Schottelius
712 61 Nico Schottelius
We use helm charts extensively.
713
714
* In production, they are managed via argocd
715
* In development, helm charts can be developed and deployed manually using the helm utility.
716
717 55 Nico Schottelius
h3. Installing a helm chart
718
719
One can use the usual pattern of
720
721
<pre>
722
helm install <releasename> <chartdirectory>
723
</pre>
724
725
However, when testing helm charts you often want to reinstall/update. The following pattern is "better", because it also works if the release is already installed:
726
727
<pre>
728
helm upgrade --install <releasename> <chartdirectory>
729 1 Nico Schottelius
</pre>
730 80 Nico Schottelius
731
h3. Naming services and deployments in helm charts [Application labels]
732
733
* We always have {{ .Release.Name }} to identify the current "instance"
734
* Deployments:
735
** use @app: <what it is>@, f.i. @app: nginx@, @app: postgres@, ...
736 81 Nico Schottelius
* See more about standard labels on
737
** https://kubernetes.io/docs/concepts/overview/working-with-objects/common-labels/
738
** https://helm.sh/docs/chart_best_practices/labels/
739 55 Nico Schottelius
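As an illustration of these conventions, the top of a deployment template in a chart might look like this (sketch only):

<pre>
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-nginx
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
...
</pre>
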
740 151 Nico Schottelius
h3. Show all versions of a helm chart
741
742
<pre>
743
helm search repo -l repo/chart
744
</pre>
745
746
For example:
747
748
<pre>
749
% helm search repo -l projectcalico/tigera-operator 
750
NAME                         	CHART VERSION	APP VERSION	DESCRIPTION                            
751
projectcalico/tigera-operator	v3.23.3      	v3.23.3    	Installs the Tigera operator for Calico
752
projectcalico/tigera-operator	v3.23.2      	v3.23.2    	Installs the Tigera operator for Calico
753
....
754
</pre>
755
756 152 Nico Schottelius
h3. Show possible values of a chart
757
758
<pre>
759
helm show values <repo/chart>
760
</pre>
761
762
Example:
763
764
<pre>
765
helm show values ingress-nginx/ingress-nginx
766
</pre>
767
768
769 139 Nico Schottelius
h2. Rook + Ceph
770
771
h3. Installation
772
773
* Usually directly via argocd
774
775
Manual steps:
776
777
<pre>
778
779
</pre>
780 43 Nico Schottelius
781 71 Nico Schottelius
h3. Executing ceph commands
782
783
Using the ceph-tools pod as follows:
784
785
<pre>
786
kubectl exec -n rook-ceph -ti $(kubectl -n rook-ceph get pods -l app=rook-ceph-tools -o jsonpath='{.items[*].metadata.name}') -- ceph -s
787
</pre>
788
789 43 Nico Schottelius
h3. Inspecting the logs of a specific server
790
791
<pre>
792
# Get the related pods
793
kubectl -n rook-ceph get pods -l app=rook-ceph-osd-prepare 
794
...
795
796
# Inspect the logs of a specific pod
797
kubectl -n rook-ceph logs -f rook-ceph-osd-prepare-server23--1-444qx
798
799 71 Nico Schottelius
</pre>
800
801
h3. Inspecting the logs of the rook-ceph-operator
802
803
<pre>
804
kubectl -n rook-ceph logs -f -l app=rook-ceph-operator
805 43 Nico Schottelius
</pre>
806
807 121 Nico Schottelius
h3. Restarting the rook operator
808
809
<pre>
810
kubectl -n rook-ceph delete pods  -l app=rook-ceph-operator
811
</pre>
812
813 43 Nico Schottelius
h3. Triggering server prepare / adding new osds
814
815
The rook-ceph-operator triggers/watches/creates pods to maintain hosts. To trigger a full "re scan", simply delete that pod:
816
817
<pre>
818
kubectl -n rook-ceph delete pods -l app=rook-ceph-operator
819
</pre>
820
821
This will cause all the @rook-ceph-osd-prepare-..@ jobs to be recreated and thus OSDs to be created, if new disks have been added.
822
823
h3. Removing an OSD
824
825
* See "Ceph OSD Management":https://rook.io/docs/rook/v1.7/ceph-osd-mgmt.html
826 77 Nico Schottelius
* More specifically: https://github.com/rook/rook/blob/release-1.7/cluster/examples/kubernetes/ceph/osd-purge.yaml
827 99 Nico Schottelius
* Then delete the related deployment
828 41 Nico Schottelius
829 98 Nico Schottelius
Set the OSD id in osd-purge.yaml and apply it. The OSD should be down before doing so.
830
831
<pre>
832
apiVersion: batch/v1
833
kind: Job
834
metadata:
835
  name: rook-ceph-purge-osd
836
  namespace: rook-ceph # namespace:cluster
837
  labels:
838
    app: rook-ceph-purge-osd
839
spec:
840
  template:
841
    metadata:
842
      labels:
843
        app: rook-ceph-purge-osd
844
    spec:
845
      serviceAccountName: rook-ceph-purge-osd
846
      containers:
847
        - name: osd-removal
848
          image: rook/ceph:master
849
          # TODO: Insert the OSD ID in the last parameter that is to be removed
850
          # The OSD IDs are a comma-separated list. For example: "0" or "0,2".
851
          # If you want to preserve the OSD PVCs, set `--preserve-pvc true`.
852
          #
853
          # A --force-osd-removal option is available if the OSD should be destroyed even though the
854
          # removal could lead to data loss.
855
          args:
856
            - "ceph"
857
            - "osd"
858
            - "remove"
859
            - "--preserve-pvc"
860
            - "false"
861
            - "--force-osd-removal"
862
            - "false"
863
            - "--osd-ids"
864
            - "SETTHEOSDIDHERE"
865
          env:
866
            - name: POD_NAMESPACE
867
              valueFrom:
868
                fieldRef:
869
                  fieldPath: metadata.namespace
870
            - name: ROOK_MON_ENDPOINTS
871
              valueFrom:
872
                configMapKeyRef:
873
                  key: data
874
                  name: rook-ceph-mon-endpoints
875
            - name: ROOK_CEPH_USERNAME
876
              valueFrom:
877
                secretKeyRef:
878
                  key: ceph-username
879
                  name: rook-ceph-mon
880
            - name: ROOK_CEPH_SECRET
881
              valueFrom:
882
                secretKeyRef:
883
                  key: ceph-secret
884
                  name: rook-ceph-mon
885
            - name: ROOK_CONFIG_DIR
886
              value: /var/lib/rook
887
            - name: ROOK_CEPH_CONFIG_OVERRIDE
888
              value: /etc/rook/config/override.conf
889
            - name: ROOK_FSID
890
              valueFrom:
891
                secretKeyRef:
892
                  key: fsid
893
                  name: rook-ceph-mon
894
            - name: ROOK_LOG_LEVEL
895
              value: DEBUG
896
          volumeMounts:
897
            - mountPath: /etc/ceph
898
              name: ceph-conf-emptydir
899
            - mountPath: /var/lib/rook
900
              name: rook-config
901
      volumes:
902
        - emptyDir: {}
903
          name: ceph-conf-emptydir
904
        - emptyDir: {}
905
          name: rook-config
906
      restartPolicy: Never
907
908
909 99 Nico Schottelius
</pre>
910
911
Deleting the deployment:
912
913
<pre>
914
[18:05] bridge:~% kubectl -n rook-ceph delete deployment rook-ceph-osd-6
915
deployment.apps "rook-ceph-osd-6" deleted
916 98 Nico Schottelius
</pre>
917
918 145 Nico Schottelius
h2. Ingress + Cert Manager
919
920
* We deploy "nginx-ingress":https://docs.nginx.com/nginx-ingress-controller/ to get an ingress
921
* we deploy "cert-manager":https://cert-manager.io/ to handle certificates
922
* We deploy @ClusterIssuer@ objects independently, so that the cert-manager app can be deployed first and the issuer created once the cert-manager CRDs are in place (see the sample below)
923
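A typical ACME/letsencrypt @ClusterIssuer@ looks roughly like this (name and email are placeholders):

<pre>
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-production
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com
    privateKeySecretRef:
      name: letsencrypt-production
    solvers:
      - http01:
          ingress:
            class: nginx
</pre>
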
924
h3. IPv4 reachability 
925
926
The ingress is by default IPv6 only. To make it reachable from the IPv4 world, get its IPv6 address and configure a NAT64 mapping in Jool.
927
928
Steps:
929
930
h4. Get the ingress IPv6 address
931
932
Use @kubectl -n ingress-nginx get svc ingress-nginx-controller -o jsonpath='{.spec.clusterIP}'; echo ''@
933
934
Example:
935
936
<pre>
937
kubectl -n ingress-nginx get svc ingress-nginx-controller -o jsonpath='{.spec.clusterIP}'; echo ''
938
2a0a:e5c0:10:1b::ce11
939
</pre>
940
941
h4. Add NAT64 mapping
942
943
* Update the __dcl_jool_siit cdist type
944
* Record the two IPs (IPv6 and IPv4)
945
* Configure all routers
946
947
948
h4. Add DNS record
949
950
To be able to use the ingress as a CNAME destination, create an "ingress" DNS record, such as:
951
952
<pre>
953
; k8s ingress for dev
954
dev-ingress                 AAAA 2a0a:e5c0:10:1b::ce11
955
dev-ingress                 A 147.78.194.23
956
957
</pre> 
958
959
h4. Add supporting wildcard DNS
960
961
If you plan to add various sites under a specific domain, you can add a wildcard DNS entry, such as *.k8s-dev.django-hosting.ch:
962
963
<pre>
964
*.k8s-dev         CNAME dev-ingress.ungleich.ch.
965
</pre>
966
967 76 Nico Schottelius
h2. Harbor
968
969
* We use "Harbor":https://goharbor.io/ for caching and as an image registry. Internal app reference: apps/prod/harbor.
970
* The admin password is in the password store, auto generated per cluster
971
* At the moment harbor only authenticates against the internal ldap tree
972
973
h3. LDAP configuration
974
975
* The url needs to be ldaps://...
976
* uid = uid
977
* rest standard
978 75 Nico Schottelius
979 89 Nico Schottelius
h2. Monitoring / Prometheus
980
981 90 Nico Schottelius
* Via "kube-prometheus":https://github.com/prometheus-operator/kube-prometheus/
982 89 Nico Schottelius
983 91 Nico Schottelius
Access via ...
984
985
* http://prometheus-k8s.monitoring.svc:9090
986
* http://grafana.monitoring.svc:3000
987
* http://alertmanager.monitoring.svc:9093
988
989
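If these cluster-internal names are not reachable from your workstation, a port-forward does the job, for instance:

<pre>
kubectl -n monitoring port-forward svc/grafana 3000
</pre>
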
990 100 Nico Schottelius
h3. Prometheus Options
991
992
* "helm/kube-prometheus-stack":https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack
993
** Includes dashboards and co.
994
* "manifest based kube-prometheus":https://github.com/prometheus-operator/kube-prometheus
995
** Includes dashboards and co.
996
* "Prometheus Operator (mainly CRD manifest":https://github.com/prometheus-operator/prometheus-operator
997
998 82 Nico Schottelius
h2. Nextcloud
999
1000 85 Nico Schottelius
h3. How to get the nextcloud credentials 
1001 84 Nico Schottelius
1002
* The initial username is set to "nextcloud"
1003
* The password is autogenerated and saved in a kubernetes secret
1004
1005
<pre>
1006 85 Nico Schottelius
kubectl get secret RELEASENAME-nextcloud -o jsonpath="{.data.PASSWORD}" | base64 -d; echo "" 
1007 84 Nico Schottelius
</pre>
1008
1009 83 Nico Schottelius
h3. How to fix "Access through untrusted domain"
1010
1011 82 Nico Schottelius
* Nextcloud stores the initial domain configuration
1012 1 Nico Schottelius
* If the FQDN is changed, it will show the error message "Access through untrusted domain"
1013 82 Nico Schottelius
* To fix, edit /var/www/html/config/config.php and correct the domain (see the snippet below)
1014 1 Nico Schottelius
* Then delete the pods
1015 165 Nico Schottelius
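The relevant part of config.php is the trusted_domains array (the domain below is only an example):

<pre>
  'trusted_domains' =>
  array (
    0 => 'nextcloud.example.com',
  ),
</pre>
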
1016
h3. Running occ commands inside the nextcloud container
1017
1018
* Find the pod in the right namespace
1019
1020
Exec:
1021
1022
<pre>
1023
su www-data -s /bin/sh -c ./occ
1024
</pre>
1025
1026
* -s /bin/sh is needed as the default shell is set to /bin/false
1027
1028
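To run occ from outside the pod, something along these lines works (namespace and pod name are placeholders; it assumes the container's working directory is /var/www/html):

<pre>
kubectl -n NAMESPACE exec -ti NEXTCLOUD-POD -- su www-data -s /bin/sh -c "./occ status"
</pre>
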
1029 82 Nico Schottelius
1030 1 Nico Schottelius
h2. Infrastructure versions
1031 35 Nico Schottelius
1032 57 Nico Schottelius
h3. ungleich kubernetes infrastructure v5 (2021-10)
1033 1 Nico Schottelius
1034 57 Nico Schottelius
Clusters are configured / setup in this order:
1035
1036
* Bootstrap via kubeadm
1037 59 Nico Schottelius
* "Networking via calico + BGP (non ECMP) using helm":https://docs.projectcalico.org/getting-started/kubernetes/helm
1038
* "ArgoCD for CD":https://argo-cd.readthedocs.io/en/stable/
1039
** "rook for storage via argocd":https://rook.io/
1040 58 Nico Schottelius
** haproxy as IPv4-to-IPv6 proxy into the IPv6-only cluster, deployed via argocd
1041
** "kubernetes-secret-generator for in cluster secrets":https://github.com/mittwald/kubernetes-secret-generator
1042
** "ungleich-certbot managing certs and nginx":https://hub.docker.com/repository/docker/ungleich/ungleich-certbot
1043
1044 57 Nico Schottelius
1045
h3. ungleich kubernetes infrastructure v4 (2021-09)
1046
1047 54 Nico Schottelius
* rook is configured via manifests instead of using the rook-ceph-cluster helm chart
1048 1 Nico Schottelius
* The rook operator is still being installed via helm
1049 35 Nico Schottelius
1050 57 Nico Schottelius
h3. ungleich kubernetes infrastructure v3 (2021-07)
1051 1 Nico Schottelius
1052 10 Nico Schottelius
* rook is now installed via helm via argocd instead of directly via manifests
1053 28 Nico Schottelius
1054 57 Nico Schottelius
h3. ungleich kubernetes infrastructure v2 (2021-05)
1055 28 Nico Schottelius
1056
* Replaced fluxv2 from ungleich k8s v1 with argocd
1057 1 Nico Schottelius
** argocd can apply helm templates directly without needing to go through Chart releases
1058 28 Nico Schottelius
* We are also using argoflow for build flows
1059
* Planned to add "kaniko":https://github.com/GoogleContainerTools/kaniko for image building
1060
1061 57 Nico Schottelius
h3. ungleich kubernetes infrastructure v1 (2021-01)
1062 28 Nico Schottelius
1063
We are using the following components:
1064
1065
* "Calico as a CNI":https://www.projectcalico.org/ with BGP, IPv6 only, no encapsulation
1066
** Needed for basic networking
1067
* "kubernetes-secret-generator":https://github.com/mittwald/kubernetes-secret-generator for creating secrets
1068
** Needed so that secrets are not stored in the git repository, but only in the cluster
1069
* "ungleich-certbot":https://hub.docker.com/repository/docker/ungleich/ungleich-certbot
1070
** Needed to get letsencrypt certificates for services
1071
* "rook with ceph rbd + cephfs":https://rook.io/ for storage
1072
** rbd for almost everything, *ReadWriteOnce*
1073
** cephfs for smaller things, multi access *ReadWriteMany*
1074
** Needed for providing persistent storage
1075
* "flux v2":https://fluxcd.io/
1076
** Needed to manage resources automatically