h1. The ungleich kubernetes infrastructure and ungleich kubernetes manual

{{toc}}

h2. Status

This document is **pre-production**.
This document is to become the ungleich kubernetes infrastructure overview as well as the ungleich kubernetes manual.

h2. k8s clusters

| Cluster         | Purpose/Setup     | Maintainer   | Master(s)                                                | argo | rook | v4 http proxy | last verified |
| c0.k8s.ooo      | Dev               | -            | UNUSED                                                   |      |      |               |    2021-10-05 |
| c1.k8s.ooo      | Dev p6 VM         | Nico         | 2a0a-e5c0-2-11-0-62ff-fe0b-1a3d.k8s-1.place6.ungleich.ch |      |      |               |    2021-10-05 |
| c2.k8s.ooo      | Dev p7 HW         | Nico         | server47 server53 server54                               | x    | x    |               |    2021-10-05 |
| c3.k8s.ooo      | Test p7 PI        | -            | UNUSED                                                   |      |      |               |    2021-10-05 |
| c4.k8s.ooo      | Dev2 p7 HW        | Fran/Jin-Guk | server52 server53 server54                               |      |      |               |             - |
| c5.k8s.ooo      | Dev p6 VM Amal    | Nico/Amal    | 2a0a-e5c0-2-11-0-62ff-fe0b-1a46.k8s-1.place6.ungleich.ch |      |      |               |               |
| c6.k8s.ooo      | Dev p6 VM Jin-Guk | Jin-Guk      |                                                          |      |      |               |               |
| [[p6.k8s.ooo]]  | production        |              | server67 server69 server71                               | x    | x    | 147.78.194.13 |    2021-10-05 |
| [[p10.k8s.ooo]] | production        |              | server63 server65 server83                               | x    | x    | 147.78.194.12 |    2021-10-05 |

h2. General architecture and components overview

* All k8s clusters are IPv6 only
* We use BGP peering to propagate podcidr and serviceCidr networks to our infrastructure
* The main public testing repository is "ungleich-k8s":https://code.ungleich.ch/ungleich-public/ungleich-k8s
** Private configurations are found in the **k8s-config** repository

h3. Cluster types

| **Type/Feature**            | **Development**                | **Production**         |
| Min No. nodes               | 3 (1 master, 3 worker)         | 5 (3 master, 3 worker) |
| Recommended minimum         | 4 (dedicated master, 3 worker) | 8 (3 master, 5 worker) |
| Separation of control plane | optional                       | recommended            |
| Persistent storage          | required                       | required               |
| Number of storage monitors  | 3                              | 5                      |

h2. General k8s operations

h3. Cheat sheet / external great references

* "kubectl cheatsheet":https://kubernetes.io/docs/reference/kubectl/cheatsheet/

h3. Get the cluster admin.conf

* On the masters of each cluster you can find the file @/etc/kubernetes/admin.conf@
* To be able to administer the cluster you can copy the admin.conf to your local machine
* Multi-cluster debugging becomes very easy if you name the config ~/cX-admin.conf (see the example below)

<pre>
% scp root@server47.place7.ungleich.ch:/etc/kubernetes/admin.conf ~/c2-admin.conf
% export KUBECONFIG=~/c2-admin.conf
% kubectl get nodes
NAME       STATUS                     ROLES                  AGE   VERSION
server47   Ready                      control-plane,master   82d   v1.22.0
server48   Ready                      control-plane,master   82d   v1.22.0
server49   Ready                      <none>                 82d   v1.22.0
server50   Ready                      <none>                 82d   v1.22.0
server59   Ready                      control-plane,master   82d   v1.22.0
server60   Ready,SchedulingDisabled   <none>                 82d   v1.22.0
server61   Ready                      <none>                 82d   v1.22.0
server62   Ready                      <none>                 82d   v1.22.0
</pre>
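
For debugging several clusters at once, @KUBECONFIG@ also accepts a colon separated list of files; kubectl then merges them and @--context@ selects the cluster. Note that kubeadm names every context kubernetes-admin@kubernetes by default, so contexts may need a @kubectl config rename-context@ first. A possible workflow (file names as in the example above):

<pre>
% export KUBECONFIG=~/c1-admin.conf:~/c2-admin.conf
% kubectl config get-contexts
% kubectl --context CONTEXTNAME get nodes
</pre>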

h3. Installing a new k8s cluster

* Decide on the cluster name (usually *cX.k8s.ooo*), X counting upwards
** Use pXX.k8s.ooo for production clusters of placeXX
* Use cdist to configure the nodes with requirements like crio
* Decide between single or multi node control plane setups (see below)
** A single control plane is suitable for development clusters

Typical init procedure (see the example config sketch below):

* Single control plane: @kubeadm init --config bootstrap/XXX/kubeadm.yaml@
* Multi control plane (HA): @kubeadm init --config bootstrap/XXX/kubeadm.yaml --upload-certs@
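
The actual kubeadm.yaml files are kept with the private configuration. Purely as an illustration (not the real bootstrap config), a minimal kubeadm.yaml for an IPv6-only cluster using crio could look roughly like this; endpoint, CIDRs and version are placeholders:

<pre>
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.22.0
controlPlaneEndpoint: "cX-api.k8s.ooo:6443"
networking:
  podSubnet: 2a0a:e5c0:XXXX::/48
  serviceSubnet: 2a0a:e5c0:XXXX::/108
</pre>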

h3. Deleting a pod that is hanging in terminating state

<pre>
kubectl delete pod <PODNAME> --grace-period=0 --force --namespace <NAMESPACE>
</pre>

(from https://stackoverflow.com/questions/35453792/pods-stuck-in-terminating-status)

h3. Listing nodes of a cluster

<pre>
[15:05] bridge:~% kubectl get nodes
NAME       STATUS   ROLES                  AGE   VERSION
server22   Ready    <none>                 52d   v1.22.0
server23   Ready    <none>                 52d   v1.22.2
server24   Ready    <none>                 52d   v1.22.0
server25   Ready    <none>                 52d   v1.22.0
server26   Ready    <none>                 52d   v1.22.0
server27   Ready    <none>                 52d   v1.22.0
server63   Ready    control-plane,master   52d   v1.22.0
server64   Ready    <none>                 52d   v1.22.0
server65   Ready    control-plane,master   52d   v1.22.0
server66   Ready    <none>                 52d   v1.22.0
server83   Ready    control-plane,master   52d   v1.22.0
server84   Ready    <none>                 52d   v1.22.0
server85   Ready    <none>                 52d   v1.22.0
server86   Ready    <none>                 52d   v1.22.0
</pre>

h3. Removing / draining a node

Usually @kubectl drain server@ should do the job, but sometimes we need to be more aggressive:

<pre>
kubectl drain --delete-emptydir-data --ignore-daemonsets server23
</pre>

h3. Re-adding a node after draining

<pre>
kubectl uncordon serverXX
</pre>

h3. (Re-)joining worker nodes after creating the cluster

* We need to have an up-to-date token
* We use different join commands for the workers and control plane nodes

Generating the join command on an existing control plane node:

<pre>
kubeadm token create --print-join-command
</pre>
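
The printed command is then run as root on the worker that should (re-)join the cluster; it has the following shape (token and hash are cluster specific):

<pre>
kubeadm join p10-api.k8s.ooo:6443 --token TOKEN --discovery-token-ca-cert-hash sha256:HASH
</pre>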

h3. (Re-)joining control plane nodes after creating the cluster

* We generate the token again
* We upload the certificates
* We need to combine/create the join command for the control plane node

Example session:

<pre>
% kubeadm token create --print-join-command
kubeadm join p10-api.k8s.ooo:6443 --token xmff4i.ABC --discovery-token-ca-cert-hash sha256:longhash

% kubeadm init phase upload-certs --upload-certs
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
CERTKEY

# Then we use these two outputs on the joining node:

kubeadm join p10-api.k8s.ooo:6443 --token xmff4i.ABC --discovery-token-ca-cert-hash sha256:longhash --control-plane --certificate-key CERTKEY
</pre>

Commands to be used on a control plane node:

<pre>
kubeadm token create --print-join-command
kubeadm init phase upload-certs --upload-certs
</pre>

Commands to be used on the joining node:

<pre>
JOINCOMMAND --control-plane --certificate-key CERTKEY
</pre>

SEE ALSO

* https://stackoverflow.com/questions/63936268/how-to-generate-kubeadm-token-for-secondary-control-plane-nodes
* https://blog.scottlowe.org/2019/08/15/reconstructing-the-join-command-for-kubeadm/

h3. How to fix etcd does not start when rejoining a kubernetes cluster as a control plane

If during the above step etcd does not come up, @kubeadm join@ can hang as follows:

<pre>
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[check-etcd] Checking that the etcd cluster is healthy
error execution phase check-etcd: etcd cluster is not healthy: failed to dial endpoint https://[2a0a:e5c0:10:1:225:b3ff:fe20:378a]:2379 with maintenance client: context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
</pre>

Then the problem is likely that the etcd server is still a member of the cluster. We first need to remove it from the etcd cluster, then the join works.

To fix this we do:

* Find a working etcd pod
* Find the etcd members / member list
* Remove the etcd member that we want to re-join to the cluster

<pre>
# Find the etcd pods
kubectl -n kube-system get pods -l component=etcd,tier=control-plane

# Get the list of etcd servers with the member id
kubectl exec -n kube-system -ti ETCDPODNAME -- etcdctl --endpoints '[::1]:2379' --cacert /etc/kubernetes/pki/etcd/ca.crt --cert /etc/kubernetes/pki/etcd/server.crt --key /etc/kubernetes/pki/etcd/server.key member list

# Remove the member
kubectl exec -n kube-system -ti ETCDPODNAME -- etcdctl --endpoints '[::1]:2379' --cacert /etc/kubernetes/pki/etcd/ca.crt --cert /etc/kubernetes/pki/etcd/server.crt --key /etc/kubernetes/pki/etcd/server.key member remove MEMBERID
</pre>

Sample session:

<pre>
[10:48] line:~% kubectl -n kube-system get pods -l component=etcd,tier=control-plane
NAME            READY   STATUS    RESTARTS     AGE
etcd-server63   1/1     Running   0            3m11s
etcd-server65   1/1     Running   3            7d2h
etcd-server83   1/1     Running   8 (6d ago)   7d2h
[10:48] line:~% kubectl exec -n kube-system -ti etcd-server65 -- etcdctl --endpoints '[::1]:2379' --cacert /etc/kubernetes/pki/etcd/ca.crt --cert /etc/kubernetes/pki/etcd/server.crt --key /etc/kubernetes/pki/etcd/server.key member list
356891cd676df6e4, started, server65, https://[2a0a:e5c0:10:1:225:b3ff:fe20:375c]:2380, https://[2a0a:e5c0:10:1:225:b3ff:fe20:375c]:2379, false
371b8a07185dee7e, started, server63, https://[2a0a:e5c0:10:1:225:b3ff:fe20:378a]:2380, https://[2a0a:e5c0:10:1:225:b3ff:fe20:378a]:2379, false
5942bc58307f8af9, started, server83, https://[2a0a:e5c0:10:1:3e4a:92ff:fe79:bb98]:2380, https://[2a0a:e5c0:10:1:3e4a:92ff:fe79:bb98]:2379, false

[10:48] line:~% kubectl exec -n kube-system -ti etcd-server65 -- etcdctl --endpoints '[::1]:2379' --cacert /etc/kubernetes/pki/etcd/ca.crt --cert /etc/kubernetes/pki/etcd/server.crt --key /etc/kubernetes/pki/etcd/server.key member remove 371b8a07185dee7e
Member 371b8a07185dee7e removed from cluster e3c0805f592a8f77
</pre>

SEE ALSO

* We found the solution using https://stackoverflow.com/questions/67921552/re-installed-node-cannot-join-kubernetes-cluster

h2. Calico CNI

h3. Calico Installation

* We install "calico using helm":https://docs.projectcalico.org/getting-started/kubernetes/helm
* This has the following advantages:
** Easy to upgrade
** Does not require the OS to configure IPv6/dual-stack settings, as the tigera operator figures things out on its own

Usually plain calico can be installed directly using:

<pre>
helm repo add projectcalico https://docs.projectcalico.org/charts
helm install calico projectcalico/tigera-operator --version v3.20.2
</pre>

h3. Installing calicoctl

To be able to manage and configure calico, we need to
"install calicoctl (we choose the version as a pod)":https://docs.projectcalico.org/getting-started/clis/calicoctl/install#install-calicoctl-as-a-kubernetes-pod

<pre>
kubectl apply -f https://docs.projectcalico.org/manifests/calicoctl.yaml
</pre>
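
With the pod based install, calicoctl is invoked through @kubectl exec@. The upstream manifest (at the time of writing) creates a pod called @calicoctl@ in the @kube-system@ namespace, so an alias along the following lines keeps the usual CLI syntax working; verify pod name and namespace in your cluster. The @-i@ flag is needed so that manifests can be piped in via stdin:

<pre>
alias calicoctl="kubectl exec -i -n kube-system calicoctl -- /calicoctl"

calicoctl get nodes
calicoctl create -f - < bgp-config-this-cluster.yaml
</pre>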

h3. Calico configuration

By default our k8s clusters "BGP peer":https://docs.projectcalico.org/networking/bgp
with an upstream router to propagate podcidr and servicecidr.

Default settings in our infrastructure:

* We use a full mesh using the @nodeToNodeMeshEnabled: true@ option
* We keep the original next hop so that *only* the server with the pod is announcing it (instead of ecmp)
* We use private ASNs for k8s clusters
* We do *not* use any overlay

After installing calico and calicoctl the last step of the installation is usually:

<pre>
calicoctl create -f - < bgp-config-this-cluster.yaml
</pre>

A sample BGP configuration:

<pre>
---
apiVersion: projectcalico.org/v3
kind: BGPConfiguration
metadata:
  name: default
spec:
  logSeverityScreen: Info
  nodeToNodeMeshEnabled: true
  asNumber: 65534
  serviceClusterIPs:
  - cidr: 2a0a:e5c0:10:3::/108
  serviceExternalIPs:
  - cidr: 2a0a:e5c0:10:3::/108
---
apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  name: router1-place10
spec:
  peerIP: 2a0a:e5c0:10:1::50
  asNumber: 213081
  keepOriginalNextHop: true
</pre>

h2. ArgoCD / ArgoWorkFlow

h3. Argocd Installation

As there is no configuration management present yet, argocd is installed using:

<pre>
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
</pre>

* See https://argo-cd.readthedocs.io/en/stable/

h3. Get the argocd credentials

<pre>
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d; echo ""
</pre>
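
To reach the argocd web UI without an ingress, a port-forward to the @argocd-server@ service (as created by the upstream install manifest) works:

<pre>
kubectl -n argocd port-forward svc/argocd-server 8080:443

# then log in as user "admin" with the password from above at https://localhost:8080
</pre>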

h3. Using the argocd webhook to trigger changes

* To trigger changes, POST a JSON payload to https://argocd.example.com/api/webhook (see the example below)
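
argocd expects the payload of a git provider webhook (github, gitlab, ...) on this endpoint. A minimal sketch using a github style push event; hostname, branch and repository URL are placeholders and the exact fields argocd evaluates may differ per provider:

<pre>
curl -X POST https://argocd.example.com/api/webhook \
  -H "Content-Type: application/json" \
  -H "X-GitHub-Event: push" \
  -d '{"ref": "refs/heads/master", "repository": {"html_url": "https://code.ungleich.ch/ungleich-public/ungleich-k8s", "default_branch": "master"}}'
</pre>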

h2. Helm related operations

We use helm charts extensively.

* In production, they are managed via argocd
* In development, helm charts can be developed and deployed manually using the helm utility

h3. Installing a helm chart

One can use the usual pattern of

<pre>
helm install <releasename> <chartdirectory>
</pre>

However, when testing helm charts you often want to reinstall/update. The following pattern is "better", because it also works when the release is already installed:

<pre>
helm upgrade --install <releasename> <chartdirectory>
</pre>
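
The same pattern extends when the chart needs its own namespace or custom values; these are standard helm flags, the names below are placeholders:

<pre>
helm upgrade --install <releasename> <chartdirectory> \
  --namespace <namespace> --create-namespace \
  -f values.yaml
</pre>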

h2. Rook / Ceph Related Operations

h3. Inspecting the logs of a specific server

<pre>
# Get the related pods
kubectl -n rook-ceph get pods -l app=rook-ceph-osd-prepare
...

# Inspect the logs of a specific pod
kubectl -n rook-ceph logs -f rook-ceph-osd-prepare-server23--1-444qx
</pre>

h3. Triggering server prepare / adding new osds

The rook-ceph-operator triggers/watches/creates pods to maintain hosts. To trigger a full "re-scan", simply delete that pod:

<pre>
kubectl -n rook-ceph delete pods -l app=rook-ceph-operator
</pre>

This will cause all the @rook-ceph-osd-prepare-..@ jobs to be recreated and thus OSDs to be created, if new disks have been added.

h3. Removing an OSD

* See "Ceph OSD Management":https://rook.io/docs/rook/v1.7/ceph-osd-mgmt.html (rough outline below)

h2. Infrastructure versions

h3. ungleich kubernetes infrastructure v5 (2021-10)

Clusters are configured / setup in this order:

* Bootstrap via kubeadm
* "Networking via calico + BGP (non ECMP) using helm":https://docs.projectcalico.org/getting-started/kubernetes/helm
* "ArgoCD for CD":https://argo-cd.readthedocs.io/en/stable/
** "rook for storage via argocd":https://rook.io/
** haproxy as an IPv4-to-IPv6 proxy in front of the IPv6-only cluster, via argocd
** "kubernetes-secret-generator for in cluster secrets":https://github.com/mittwald/kubernetes-secret-generator
** "ungleich-certbot managing certs and nginx":https://hub.docker.com/repository/docker/ungleich/ungleich-certbot

h3. ungleich kubernetes infrastructure v4 (2021-09)

* rook is configured via manifests instead of using the rook-ceph-cluster helm chart
* The rook operator is still being installed via helm

h3. ungleich kubernetes infrastructure v3 (2021-07)

* rook is now installed via helm via argocd instead of directly via manifests

h3. ungleich kubernetes infrastructure v2 (2021-05)

* Replaced fluxv2 from ungleich k8s v1 with argocd
** argocd can apply helm templates directly without needing to go through Chart releases
* We are also using argoflow for build flows
* Planned to add "kaniko":https://github.com/GoogleContainerTools/kaniko for image building

h3. ungleich kubernetes infrastructure v1 (2021-01)

We are using the following components:

* "Calico as a CNI":https://www.projectcalico.org/ with BGP, IPv6 only, no encapsulation
** Needed for basic networking
* "kubernetes-secret-generator":https://github.com/mittwald/kubernetes-secret-generator for creating secrets
** Needed so that secrets are not stored in the git repository, but only in the cluster
* "ungleich-certbot":https://hub.docker.com/repository/docker/ungleich/ungleich-certbot
** Needed to get letsencrypt certificates for services
* "rook with ceph rbd + cephfs":https://rook.io/ for storage
** rbd for almost everything, *ReadWriteOnce*
** cephfs for smaller things, multi access *ReadWriteMany*
** Needed for providing persistent storage
* "flux v2":https://fluxcd.io/
** Needed to manage resources automatically