h1. The ungleich kubernetes infrastructure and ungleich kubernetes manual

{{toc}}

h2. Status

This document is **pre-production**.
This document is to become the ungleich kubernetes infrastructure overview as well as the ungleich kubernetes manual.

h2. k8s clusters

| Cluster         | Purpose/Setup     | Maintainer   | Master(s)                                                | argo | rook | v4 http proxy | last verified |
| c0.k8s.ooo      | Dev               | -            | UNUSED                                                   |      |      |               |    2021-10-05 |
| c1.k8s.ooo      | Dev p6 VM         | Nico         | 2a0a-e5c0-2-11-0-62ff-fe0b-1a3d.k8s-1.place6.ungleich.ch |      |      |               |    2021-10-05 |
| c2.k8s.ooo      | Dev p7 HW         | Nico         | server47 server53 server54                               | x    | x    |               |    2021-10-05 |
| c3.k8s.ooo      | Test p7 PI        | -            | UNUSED                                                   |      |      |               |    2021-10-05 |
| c4.k8s.ooo      | Dev2 p7 HW        | Fran/Jin-Guk | server52 server53 server54                               |      |      |               |             - |
| c5.k8s.ooo      | Dev p6 VM Amal    | Nico/Amal    | 2a0a-e5c0-2-11-0-62ff-fe0b-1a46.k8s-1.place6.ungleich.ch |      |      |               |               |
| c6.k8s.ooo      | Dev p6 VM Jin-Guk | Jin-Guk      |                                                          |      |      |               |               |
| [[p6.k8s.ooo]]  | production        |              | server67 server69 server71                               | x    | x    | 147.78.194.13 |    2021-10-05 |
| [[p10.k8s.ooo]] | production        |              | server63 server65 server83                               | x    | x    | 147.78.194.12 |    2021-10-05 |

h2. General architecture and components overview

* All k8s clusters are IPv6 only
* We use BGP peering to propagate podcidr and serviceCidr networks to our infrastructure
* The main public testing repository is "ungleich-k8s":https://code.ungleich.ch/ungleich-public/ungleich-k8s
** Private configurations are found in the **k8s-config** repository

h3. Cluster types

| **Type/Feature**            | **Development**                | **Production**         |
| Min No. nodes               | 3 (1 master, 3 worker)         | 5 (3 master, 3 worker) |
| Recommended minimum         | 4 (dedicated master, 3 worker) | 8 (3 master, 5 worker) |
| Separation of control plane | optional                       | recommended            |
| Persistent storage          | required                       | required               |
| Number of storage monitors  | 3                              | 5                      |

h2. General k8s operations

h3. Cheat sheet / great external references

* "kubectl cheatsheet":https://kubernetes.io/docs/reference/kubectl/cheatsheet/

h3. Allowing to schedule work on the control plane

* Mostly for single node / test / development clusters
* Just remove the master taint as follows

<pre>
kubectl taint nodes --all node-role.kubernetes.io/master-
</pre>
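
To restore the default behaviour later, the taint can be re-added (a sketch; serverXX stands for the control plane node):

<pre>
kubectl taint nodes serverXX node-role.kubernetes.io/master=:NoSchedule
</pre>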

h3. Get the cluster admin.conf

* On the masters of each cluster you can find the file @/etc/kubernetes/admin.conf@
* To be able to administrate the cluster you can copy the admin.conf to your local machine
* Multi cluster debugging can be very easy if you name the config ~/cX-admin.conf (see example below)

<pre>
% scp root@server47.place7.ungleich.ch:/etc/kubernetes/admin.conf ~/c2-admin.conf
% export KUBECONFIG=~/c2-admin.conf
% kubectl get nodes
NAME       STATUS                     ROLES                  AGE   VERSION
server47   Ready                      control-plane,master   82d   v1.22.0
server48   Ready                      control-plane,master   82d   v1.22.0
server49   Ready                      <none>                 82d   v1.22.0
server50   Ready                      <none>                 82d   v1.22.0
server59   Ready                      control-plane,master   82d   v1.22.0
server60   Ready,SchedulingDisabled   <none>                 82d   v1.22.0
server61   Ready                      <none>                 82d   v1.22.0
server62   Ready                      <none>                 82d   v1.22.0
</pre>
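
When working with several clusters, the configuration can also be selected per invocation instead of exporting KUBECONFIG (file names follow the ~/cX-admin.conf pattern above):

<pre>
KUBECONFIG=~/c2-admin.conf kubectl get nodes
</pre>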

h3. Installing a new k8s cluster

* Decide on the cluster name (usually *cX.k8s.ooo*), X counting upwards
** Using pXX.k8s.ooo for production clusters of placeXX
* Use cdist to configure the nodes with requirements like crio
* Decide between single or multi node control plane setups (see below)
** Single control plane suitable for development clusters

Typical init procedure:

* Single control plane: @kubeadm init --config bootstrap/XXX/kubeadm.yaml@
* Multi control plane (HA): @kubeadm init --config bootstrap/XXX/kubeadm.yaml --upload-certs@
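
The referenced @bootstrap/XXX/kubeadm.yaml@ is cluster specific. A minimal sketch of what such a file can contain (apiVersion matches kubeadm v1.22; endpoint and CIDRs are placeholders, not values from our clusters):

<pre>
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.22.0
controlPlaneEndpoint: "cX-api.k8s.ooo:6443"
networking:
  # IPv6 only cluster: pod and service networks are IPv6 CIDRs
  podSubnet: 2a0a:e5c0:XXXX:XXXX::/64
  serviceSubnet: 2a0a:e5c0:XXXX:XXXX::/108
</pre>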

h3. Deleting a pod that is hanging in terminating state

<pre>
kubectl delete pod <PODNAME> --grace-period=0 --force --namespace <NAMESPACE>
</pre>

(from https://stackoverflow.com/questions/35453792/pods-stuck-in-terminating-status)

h3. Listing nodes of a cluster

<pre>
[15:05] bridge:~% kubectl get nodes
NAME       STATUS   ROLES                  AGE   VERSION
server22   Ready    <none>                 52d   v1.22.0
server23   Ready    <none>                 52d   v1.22.2
server24   Ready    <none>                 52d   v1.22.0
server25   Ready    <none>                 52d   v1.22.0
server26   Ready    <none>                 52d   v1.22.0
server27   Ready    <none>                 52d   v1.22.0
server63   Ready    control-plane,master   52d   v1.22.0
server64   Ready    <none>                 52d   v1.22.0
server65   Ready    control-plane,master   52d   v1.22.0
server66   Ready    <none>                 52d   v1.22.0
server83   Ready    control-plane,master   52d   v1.22.0
server84   Ready    <none>                 52d   v1.22.0
server85   Ready    <none>                 52d   v1.22.0
server86   Ready    <none>                 52d   v1.22.0
</pre>

h3. Removing / draining a node

Usually @kubectl drain server@ should do the job, but sometimes we need to be more aggressive:

<pre>
kubectl drain --delete-emptydir-data --ignore-daemonsets server23
</pre>

h3. Re-adding a node after draining

<pre>
kubectl uncordon serverXX
</pre>

h3. (Re-)joining worker nodes after creating the cluster

* We need to have an up-to-date token
* We use different join commands for the workers and control plane nodes

Generating the join command on an existing control plane node:

<pre>
kubeadm token create --print-join-command
</pre>
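
The printed command is then run as-is on the worker that should (re-)join (token and hash below are placeholders):

<pre>
kubeadm join p10-api.k8s.ooo:6443 --token TOKEN --discovery-token-ca-cert-hash sha256:HASH
</pre>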

h3. (Re-)joining control plane nodes after creating the cluster

* We generate the token again
* We upload the certificates
* We need to combine/create the join command for the control plane node

Example session:

<pre>
% kubeadm token create --print-join-command
kubeadm join p10-api.k8s.ooo:6443 --token xmff4i.ABC --discovery-token-ca-cert-hash sha256:longhash

% kubeadm init phase upload-certs --upload-certs
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
CERTKEY

# Then we use these two outputs on the joining node:

kubeadm join p10-api.k8s.ooo:6443 --token xmff4i.ABC --discovery-token-ca-cert-hash sha256:longhash --control-plane --certificate-key CERTKEY
</pre>

Commands to be used on a control plane node:

<pre>
kubeadm token create --print-join-command
kubeadm init phase upload-certs --upload-certs
</pre>

Commands to be used on the joining node:

<pre>
JOINCOMMAND --control-plane --certificate-key CERTKEY
</pre>

SEE ALSO

* https://stackoverflow.com/questions/63936268/how-to-generate-kubeadm-token-for-secondary-control-plane-nodes
* https://blog.scottlowe.org/2019/08/15/reconstructing-the-join-command-for-kubeadm/

h3. How to fix etcd does not start when rejoining a kubernetes cluster as a control plane

If during the above step etcd does not come up, @kubeadm join@ can hang as follows:

<pre>
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[check-etcd] Checking that the etcd cluster is healthy
error execution phase check-etcd: etcd cluster is not healthy: failed to dial endpoint https://[2a0a:e5c0:10:1:225:b3ff:fe20:378a]:2379 with maintenance client: context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
</pre>

Then the problem is likely that the etcd server is still a member of the cluster. We first need to remove it from the etcd cluster and then the join works.

To fix this we do:

* Find a working etcd pod
* Find the etcd members / member list
* Remove the etcd member that we want to re-join the cluster

<pre>
# Find the etcd pods
kubectl -n kube-system get pods -l component=etcd,tier=control-plane

# Get the list of etcd servers with the member id
kubectl exec -n kube-system -ti ETCDPODNAME -- etcdctl --endpoints '[::1]:2379' --cacert /etc/kubernetes/pki/etcd/ca.crt --cert /etc/kubernetes/pki/etcd/server.crt --key /etc/kubernetes/pki/etcd/server.key member list

# Remove the member
kubectl exec -n kube-system -ti ETCDPODNAME -- etcdctl --endpoints '[::1]:2379' --cacert /etc/kubernetes/pki/etcd/ca.crt --cert /etc/kubernetes/pki/etcd/server.crt --key /etc/kubernetes/pki/etcd/server.key member remove MEMBERID
</pre>

Sample session:

<pre>
[10:48] line:~% kubectl -n kube-system get pods -l component=etcd,tier=control-plane
NAME            READY   STATUS    RESTARTS     AGE
etcd-server63   1/1     Running   0            3m11s
etcd-server65   1/1     Running   3            7d2h
etcd-server83   1/1     Running   8 (6d ago)   7d2h
[10:48] line:~% kubectl exec -n kube-system -ti etcd-server65 -- etcdctl --endpoints '[::1]:2379' --cacert /etc/kubernetes/pki/etcd/ca.crt --cert /etc/kubernetes/pki/etcd/server.crt --key /etc/kubernetes/pki/etcd/server.key member list
356891cd676df6e4, started, server65, https://[2a0a:e5c0:10:1:225:b3ff:fe20:375c]:2380, https://[2a0a:e5c0:10:1:225:b3ff:fe20:375c]:2379, false
371b8a07185dee7e, started, server63, https://[2a0a:e5c0:10:1:225:b3ff:fe20:378a]:2380, https://[2a0a:e5c0:10:1:225:b3ff:fe20:378a]:2379, false
5942bc58307f8af9, started, server83, https://[2a0a:e5c0:10:1:3e4a:92ff:fe79:bb98]:2380, https://[2a0a:e5c0:10:1:3e4a:92ff:fe79:bb98]:2379, false

[10:48] line:~% kubectl exec -n kube-system -ti etcd-server65 -- etcdctl --endpoints '[::1]:2379' --cacert /etc/kubernetes/pki/etcd/ca.crt --cert /etc/kubernetes/pki/etcd/server.crt --key /etc/kubernetes/pki/etcd/server.key member remove 371b8a07185dee7e
Member 371b8a07185dee7e removed from cluster e3c0805f592a8f77
</pre>

SEE ALSO

* We found the solution using https://stackoverflow.com/questions/67921552/re-installed-node-cannot-join-kubernetes-cluster

h2. Calico CNI

h3. Calico Installation

* We install "calico using helm":https://docs.projectcalico.org/getting-started/kubernetes/helm
* This has the following advantages:
** Easy to upgrade
** Does not require us to configure IPv6/dual stack settings as the tigera operator figures out things on its own

Usually plain calico can be installed directly using:

<pre>
helm repo add projectcalico https://docs.projectcalico.org/charts
helm install calico projectcalico/tigera-operator --version v3.20.2
</pre>

h3. Installing calicoctl

To be able to manage and configure calico, we need to
"install calicoctl (we choose the version as a pod)":https://docs.projectcalico.org/getting-started/clis/calicoctl/install#install-calicoctl-as-a-kubernetes-pod

<pre>
kubectl apply -f https://docs.projectcalico.org/manifests/calicoctl.yaml
</pre>

To make it easier to use, we create an alias:

<pre>
alias calicoctl="kubectl exec -i -n kube-system calicoctl -- /calicoctl"
</pre>
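
A quick check that the pod and the alias work (lists the calico node objects):

<pre>
calicoctl get nodes
</pre>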

h3. Calico configuration

By default our k8s clusters "BGP peer":https://docs.projectcalico.org/networking/bgp
with an upstream router to propagate podcidr and servicecidr.

Default settings in our infrastructure:

* We use a full-mesh using the @nodeToNodeMeshEnabled: true@ option
* We keep the original next hop so that *only* the server with the pod is announcing it (instead of ecmp)
* We use private ASNs for k8s clusters
* We do *not* use any overlay

After installing calico and calicoctl the last step of the installation is usually:

<pre>
calicoctl create -f - < bgp-config-this-cluster.yaml
</pre>

A sample BGP configuration:

<pre>
---
apiVersion: projectcalico.org/v3
kind: BGPConfiguration
metadata:
  name: default
spec:
  logSeverityScreen: Info
  nodeToNodeMeshEnabled: true
  asNumber: 65534
  serviceClusterIPs:
  - cidr: 2a0a:e5c0:10:3::/108
  serviceExternalIPs:
  - cidr: 2a0a:e5c0:10:3::/108
---
apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  name: router1-place10
spec:
  peerIP: 2a0a:e5c0:10:1::50
  asNumber: 213081
  keepOriginalNextHop: true
</pre>
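
Whether the configuration has been applied can be checked by querying the objects again (a sketch):

<pre>
calicoctl get bgpconfiguration default -o yaml
calicoctl get bgppeer
</pre>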

h2. ArgoCD / ArgoWorkFlow

h3. Argocd Installation

As there is no configuration management present yet, argocd is installed using

<pre>
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
</pre>

* See https://argo-cd.readthedocs.io/en/stable/

h3. Get the argocd credentials

<pre>
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d; echo ""
</pre>

h3. Using the argocd webhook to trigger changes

* To trigger changes, post json to https://argocd.example.com/api/webhook (see the sketch below)
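
A minimal sketch of triggering such a refresh by hand, assuming a Gogs/Gitea style push payload (header and payload fields are assumptions, adjust them to the actual webhook provider):

<pre>
curl -X POST https://argocd.example.com/api/webhook \
     -H "Content-Type: application/json" \
     -H "X-Gogs-Event: push" \
     -d '{"ref": "refs/heads/master", "repository": {"html_url": "https://code.ungleich.ch/ungleich-intern/k8s-config"}}'
</pre>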

h3. Deploying an application

* Applications are deployed via git to gitea (code.ungleich.ch) and then pulled by argo
* Always include the redmine-url pointing to the (customer) ticket

Application sample:

<pre>
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: gitea-CUSTOMER
  namespace: argocd
spec:
  destination:
    namespace: default
    server: 'https://kubernetes.default.svc'
  source:
    path: apps/prod/gitea
    repoURL: 'https://code.ungleich.ch/ungleich-intern/k8s-config.git'
    targetRevision: HEAD
    helm:
      parameters:
        - name: storage.data.storageClass
          value: rook-ceph-block-hdd
        - name: storage.data.size
          value: 200Gi
        - name: storage.db.storageClass
          value: rook-ceph-block-ssd
        - name: storage.db.size
          value: 10Gi
        - name: storage.letsencrypt.storageClass
          value: rook-ceph-block-hdd
        - name: storage.letsencrypt.size
          value: 50Mi
        - name: letsencryptStaging
          value: 'no'
        - name: fqdn
          value: 'code.verua.online'
  project: default
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
  info:
    - name: 'redmine-url'
      value: 'https://redmine.ungleich.ch/issues/ISSUEID'
    - name: 'support-url'
      value: 'https://support.ungleich.ch/Ticket/Display.html?id=TICKETID'
</pre>

h2. Helm related operations

We use helm charts extensively.

* In production, they are managed via argocd
* In development, helm charts can be developed and deployed manually using the helm utility.

h3. Installing a helm chart

One can use the usual pattern of

<pre>
helm install <releasename> <chartdirectory>
</pre>

However, when testing helm charts you often want to reinstall/update. The following pattern is "better", because it also works if the release is already installed:

<pre>
helm upgrade --install <releasename> <chartdirectory>
</pre>

h2. Rook / Ceph Related Operations

h3. Executing ceph commands

Using the ceph-tools pod as follows:

<pre>
kubectl exec -n rook-ceph -ti $(kubectl -n rook-ceph get pods -l app=rook-ceph-tools -o jsonpath='{.items[*].metadata.name}') -- ceph -s
</pre>
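
As with calicoctl, an alias makes this easier for day-to-day use (the alias name is our own choice; any ceph subcommand can follow it):

<pre>
alias rookceph='kubectl exec -n rook-ceph -ti $(kubectl -n rook-ceph get pods -l app=rook-ceph-tools -o jsonpath="{.items[*].metadata.name}") -- ceph'
rookceph osd tree
rookceph df
</pre>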

h3. Inspecting the logs of a specific server

<pre>
# Get the related pods
kubectl -n rook-ceph get pods -l app=rook-ceph-osd-prepare
...

# Inspect the logs of a specific pod
kubectl -n rook-ceph logs -f rook-ceph-osd-prepare-server23--1-444qx
</pre>

h3. Inspecting the logs of the rook-ceph-operator

<pre>
kubectl -n rook-ceph logs -f -l app=rook-ceph-operator
</pre>

h3. Triggering server prepare / adding new osds

The rook-ceph-operator triggers/watches/creates pods to maintain hosts. To trigger a full "re-scan", simply delete that pod:

<pre>
kubectl -n rook-ceph delete pods -l app=rook-ceph-operator
</pre>

This recreates all the @rook-ceph-osd-prepare-..@ jobs and thus creates OSDs for any newly added disks.

h3. Removing an OSD

* See "Ceph OSD Management":https://rook.io/docs/rook/v1.7/ceph-osd-mgmt.html (a sketch of the procedure follows below)

h2. Infrastructure versions

h3. ungleich kubernetes infrastructure v5 (2021-10)

Clusters are configured / set up in this order:

* Bootstrap via kubeadm
* "Networking via calico + BGP (non ECMP) using helm":https://docs.projectcalico.org/getting-started/kubernetes/helm
* "ArgoCD for CD":https://argo-cd.readthedocs.io/en/stable/
** "rook for storage via argocd":https://rook.io/
** haproxy as IPv4-to-IPv6 proxy into the IPv6-only cluster, via argocd
** "kubernetes-secret-generator for in cluster secrets":https://github.com/mittwald/kubernetes-secret-generator
** "ungleich-certbot managing certs and nginx":https://hub.docker.com/repository/docker/ungleich/ungleich-certbot

h3. ungleich kubernetes infrastructure v4 (2021-09)

* rook is configured via manifests instead of using the rook-ceph-cluster helm chart
* The rook operator is still being installed via helm

h3. ungleich kubernetes infrastructure v3 (2021-07)

* rook is now installed via helm via argocd instead of directly via manifests

h3. ungleich kubernetes infrastructure v2 (2021-05)

* Replaced fluxv2 from ungleich k8s v1 with argocd
** argocd can apply helm templates directly without needing to go through Chart releases
* We are also using argoflow for build flows
* Planned to add "kaniko":https://github.com/GoogleContainerTools/kaniko for image building

h3. ungleich kubernetes infrastructure v1 (2021-01)

We are using the following components:

* "Calico as a CNI":https://www.projectcalico.org/ with BGP, IPv6 only, no encapsulation
** Needed for basic networking
* "kubernetes-secret-generator":https://github.com/mittwald/kubernetes-secret-generator for creating secrets
** Needed so that secrets are not stored in the git repository, but only in the cluster
* "ungleich-certbot":https://hub.docker.com/repository/docker/ungleich/ungleich-certbot
** Needed to get letsencrypt certificates for services
* "rook with ceph rbd + cephfs":https://rook.io/ for storage
** rbd for almost everything, *ReadWriteOnce*
** cephfs for smaller things, multi access *ReadWriteMany*
** Needed for providing persistent storage
* "flux v2":https://fluxcd.io/
** Needed to manage resources automatically