h1. The ungleich kubernetes infrastructure and ungleich kubernetes manual

{{toc}}

h2. Status

This document is **pre-production**.

This document is to become the ungleich kubernetes infrastructure overview as well as the ungleich kubernetes manual.

h2. k8s clusters

| Cluster           | Purpose/Setup     | Maintainer | Master(s)                     | argo                                                  | v4 http proxy | last verified |
| c0.k8s.ooo        | Dev               | -          | UNUSED                        |                                                       |               |    2021-10-05 |
| c1.k8s.ooo        | retired           |            | -                             |                                                       |               |    2022-03-15 |
| c2.k8s.ooo        | Dev p7 HW         | Nico       | server47 server53 server54    | "argo":https://argocd-server.argocd.svc.c2.k8s.ooo    |               |    2021-10-05 |
| c3.k8s.ooo        | retired           | -          | -                             |                                                       |               |    2021-10-05 |
| c4.k8s.ooo        | Dev2 p7 HW        | Jin-Guk    | server52 server53 server54    |                                                       |               |             - |
| c5.k8s.ooo        | retired           |            | -                             |                                                       |               |    2022-03-15 |
| c6.k8s.ooo        | Dev p6 VM Jin-Guk | Jin-Guk    |                               |                                                       |               |               |
| [[p5.k8s.ooo]]    | production        |            | server34 server36 server38    | "argo":https://argocd-server.argocd.svc.p5.k8s.ooo    |             - |               |
| [[p6.k8s.ooo]]    | production        |            | server67 server69 server71    | "argo":https://argocd-server.argocd.svc.p6.k8s.ooo    | 147.78.194.13 |    2021-10-05 |
| [[p10.k8s.ooo]]   | production        |            | server63 server65 server83    | "argo":https://argocd-server.argocd.svc.p10.k8s.ooo   | 147.78.194.12 |    2021-10-05 |
| [[k8s.ge.nau.so]] | development       |            | server107 server108 server109 | "argo":https://argocd-server.argocd.svc.k8s.ge.nau.so |               |               |

h2. General architecture and components overview

* All k8s clusters are IPv6 only
* We use BGP peering to propagate podcidr and serviceCidr networks to our infrastructure
* The main public testing repository is "ungleich-k8s":https://code.ungleich.ch/ungleich-public/ungleich-k8s
** Private configurations are found in the **k8s-config** repository

h3. Cluster types

| **Type/Feature**            | **Development**                | **Production**         |
| Min No. nodes               | 3 (1 master, 3 worker)         | 5 (3 master, 3 worker) |
| Recommended minimum         | 4 (dedicated master, 3 worker) | 8 (3 master, 5 worker) |
| Separation of control plane | optional                       | recommended            |
| Persistent storage          | required                       | required               |
| Number of storage monitors  | 3                              | 5                      |

h2. General k8s operations

h3. Cheat sheet / external great references

* "kubectl cheatsheet":https://kubernetes.io/docs/reference/kubectl/cheatsheet/

h3. Allowing to schedule work on the control plane

* Mostly for single node / test / development clusters
* Just remove the master taint as follows

<pre>
kubectl taint nodes --all node-role.kubernetes.io/master-
</pre>

h3. Get the cluster admin.conf

* On the masters of each cluster you can find the file @/etc/kubernetes/admin.conf@
* To be able to administrate the cluster you can copy the admin.conf to your local machine
* Multi cluster debugging becomes very easy if you name the config ~/cX-admin.conf (see the example below)

<pre>
% scp root@server47.place7.ungleich.ch:/etc/kubernetes/admin.conf ~/c2-admin.conf
% export KUBECONFIG=~/c2-admin.conf
% kubectl get nodes
NAME       STATUS                     ROLES                  AGE   VERSION
server47   Ready                      control-plane,master   82d   v1.22.0
server48   Ready                      control-plane,master   82d   v1.22.0
server49   Ready                      <none>                 82d   v1.22.0
server50   Ready                      <none>                 82d   v1.22.0
server59   Ready                      control-plane,master   82d   v1.22.0
server60   Ready,SchedulingDisabled   <none>                 82d   v1.22.0
server61   Ready                      <none>                 82d   v1.22.0
server62   Ready                      <none>                 82d   v1.22.0
</pre>

h3. Installing a new k8s cluster

* Decide on the cluster name (usually *cX.k8s.ooo*), X counting upwards
** Use pXX.k8s.ooo for production clusters of placeXX
* Use cdist to configure the nodes with requirements like crio
* Decide between single or multi node control plane setups (see below)
** A single control plane is suitable for development clusters

Typical init procedure (an illustrative kubeadm.yaml sketch follows below):

* Single control plane: @kubeadm init --config bootstrap/XXX/kubeadm.yaml@
* Multi control plane (HA): @kubeadm init --config bootstrap/XXX/kubeadm.yaml --upload-certs@

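The actual bootstrap configurations live in the private **k8s-config** repository. As an illustration only, a minimal @kubeadm.yaml@ for an IPv6-only cluster could look roughly like this (cluster name, endpoint and prefixes are placeholders, not our real values):

<pre>
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.22.0
clusterName: cX.k8s.ooo
# HA setups point this at the shared API endpoint
controlPlaneEndpoint: "cX-api.k8s.ooo:6443"
networking:
  # IPv6-only pod and service networks (placeholder prefixes)
  podSubnet: 2a0a:e5c0:XXXX:100::/56
  serviceSubnet: 2a0a:e5c0:XXXX:1::/108
</pre>
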
h3. Deleting a pod that is hanging in terminating state

<pre>
kubectl delete pod <PODNAME> --grace-period=0 --force --namespace <NAMESPACE>
</pre>

(from https://stackoverflow.com/questions/35453792/pods-stuck-in-terminating-status)

h3. Listing nodes of a cluster

<pre>
[15:05] bridge:~% kubectl get nodes
NAME       STATUS   ROLES                  AGE   VERSION
server22   Ready    <none>                 52d   v1.22.0
server23   Ready    <none>                 52d   v1.22.2
server24   Ready    <none>                 52d   v1.22.0
server25   Ready    <none>                 52d   v1.22.0
server26   Ready    <none>                 52d   v1.22.0
server27   Ready    <none>                 52d   v1.22.0
server63   Ready    control-plane,master   52d   v1.22.0
server64   Ready    <none>                 52d   v1.22.0
server65   Ready    control-plane,master   52d   v1.22.0
server66   Ready    <none>                 52d   v1.22.0
server83   Ready    control-plane,master   52d   v1.22.0
server84   Ready    <none>                 52d   v1.22.0
server85   Ready    <none>                 52d   v1.22.0
server86   Ready    <none>                 52d   v1.22.0
</pre>

h3. Removing / draining a node

Usually @kubectl drain server@ should do the job, but sometimes we need to be more aggressive:

<pre>
kubectl drain --delete-emptydir-data --ignore-daemonsets serverXX
</pre>

h3. Readding a node after draining

<pre>
kubectl uncordon serverXX
</pre>

h3. (Re-)joining worker nodes after creating the cluster

* We need to have an up-to-date token
* We use different join commands for the workers and control plane nodes

Generating the join command on an existing control plane node:

<pre>
kubeadm token create --print-join-command
</pre>

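The printed command is then run as-is on the worker that should (re-)join. A sketch of such a session (token and hash are made up):

<pre>
# On an existing control plane node
% kubeadm token create --print-join-command
kubeadm join p10-api.k8s.ooo:6443 --token TOKEN --discovery-token-ca-cert-hash sha256:HASH

# On the (re-)joining worker node
% kubeadm join p10-api.k8s.ooo:6443 --token TOKEN --discovery-token-ca-cert-hash sha256:HASH
</pre>
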
h3. (Re-)joining control plane nodes after creating the cluster

* We generate the token again
* We upload the certificates
* We need to combine/create the join command for the control plane node

Example session:

<pre>
% kubeadm token create --print-join-command
kubeadm join p10-api.k8s.ooo:6443 --token xmff4i.ABC --discovery-token-ca-cert-hash sha256:longhash

% kubeadm init phase upload-certs --upload-certs
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
CERTKEY

# Then we use these two outputs on the joining node:

kubeadm join p10-api.k8s.ooo:6443 --token xmff4i.ABC --discovery-token-ca-cert-hash sha256:longhash --control-plane --certificate-key CERTKEY
</pre>

Commands to be used on a control plane node:

<pre>
kubeadm token create --print-join-command
kubeadm init phase upload-certs --upload-certs
</pre>

Commands to be used on the joining node:

<pre>
JOINCOMMAND --control-plane --certificate-key CERTKEY
</pre>

SEE ALSO

* https://stackoverflow.com/questions/63936268/how-to-generate-kubeadm-token-for-secondary-control-plane-nodes
* https://blog.scottlowe.org/2019/08/15/reconstructing-the-join-command-for-kubeadm/

h3. How to fix etcd does not start when rejoining a kubernetes cluster as a control plane

If during the above step etcd does not come up, @kubeadm join@ can hang as follows:

<pre>
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[check-etcd] Checking that the etcd cluster is healthy
error execution phase check-etcd: etcd cluster is not healthy: failed to dial endpoint https://[2a0a:e5c0:10:1:225:b3ff:fe20:378a]:2379 with maintenance client: context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
</pre>

Then the problem is likely that the etcd server is still a member of the cluster. We first need to remove it from the etcd cluster and then the join works.

To fix this we do:

* Find a working etcd pod
* Find the etcd members / member list
* Remove the etcd member that we want to re-join the cluster

<pre>
# Find the etcd pods
kubectl -n kube-system get pods -l component=etcd,tier=control-plane

# Get the list of etcd servers with the member id
kubectl exec -n kube-system -ti ETCDPODNAME -- etcdctl --endpoints '[::1]:2379' --cacert /etc/kubernetes/pki/etcd/ca.crt --cert  /etc/kubernetes/pki/etcd/server.crt --key /etc/kubernetes/pki/etcd/server.key member list

# Remove the member
kubectl exec -n kube-system -ti ETCDPODNAME -- etcdctl --endpoints '[::1]:2379' --cacert /etc/kubernetes/pki/etcd/ca.crt --cert  /etc/kubernetes/pki/etcd/server.crt --key /etc/kubernetes/pki/etcd/server.key member remove MEMBERID
</pre>

Sample session:

<pre>
[10:48] line:~% kubectl -n kube-system get pods -l component=etcd,tier=control-plane
NAME            READY   STATUS    RESTARTS     AGE
etcd-server63   1/1     Running   0            3m11s
etcd-server65   1/1     Running   3            7d2h
etcd-server83   1/1     Running   8 (6d ago)   7d2h
[10:48] line:~% kubectl exec -n kube-system -ti etcd-server65 -- etcdctl --endpoints '[::1]:2379' --cacert /etc/kubernetes/pki/etcd/ca.crt --cert  /etc/kubernetes/pki/etcd/server.crt --key /etc/kubernetes/pki/etcd/server.key member list
356891cd676df6e4, started, server65, https://[2a0a:e5c0:10:1:225:b3ff:fe20:375c]:2380, https://[2a0a:e5c0:10:1:225:b3ff:fe20:375c]:2379, false
371b8a07185dee7e, started, server63, https://[2a0a:e5c0:10:1:225:b3ff:fe20:378a]:2380, https://[2a0a:e5c0:10:1:225:b3ff:fe20:378a]:2379, false
5942bc58307f8af9, started, server83, https://[2a0a:e5c0:10:1:3e4a:92ff:fe79:bb98]:2380, https://[2a0a:e5c0:10:1:3e4a:92ff:fe79:bb98]:2379, false

[10:48] line:~% kubectl exec -n kube-system -ti etcd-server65 -- etcdctl --endpoints '[::1]:2379' --cacert /etc/kubernetes/pki/etcd/ca.crt --cert  /etc/kubernetes/pki/etcd/server.crt --key /etc/kubernetes/pki/etcd/server.key member remove 371b8a07185dee7e
Member 371b8a07185dee7e removed from cluster e3c0805f592a8f77
</pre>

SEE ALSO

* We found the solution using https://stackoverflow.com/questions/67921552/re-installed-node-cannot-join-kubernetes-cluster

h3. Hardware Maintenance using ungleich-hardware

Use the following manifest and replace the HOST with the actual host:

<pre>
apiVersion: v1
kind: Pod
metadata:
  name: ungleich-hardware-HOST
spec:
  containers:
  - name: ungleich-hardware
    image: ungleich/ungleich-hardware:0.0.5
    args:
    - sleep
    - "1000000"
    volumeMounts:
      - mountPath: /dev
        name: dev
    securityContext:
      privileged: true
  nodeSelector:
    kubernetes.io/hostname: "HOST"

  volumes:
    - name: dev
      hostPath:
        path: /dev
</pre>

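After applying the manifest, the usual workflow is to exec into the pod and run the hardware tooling from there. A sketch (the manifest filename is arbitrary and the tools available depend on what the ungleich-hardware image ships):

<pre>
# Apply the manifest (with HOST replaced, here serverXX)
kubectl apply -f ungleich-hardware-serverXX.yaml

# Enter the pod
kubectl exec -ti ungleich-hardware-serverXX -- /bin/sh

# Inside the pod, e.g. inspect a disk (assuming smartmontools is included)
smartctl -a /dev/sda
</pre>
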
Also see: [[The_ungleich_hardware_maintenance_guide]]

h3. Triggering a cronjob / creating a job from a cronjob

To test a cronjob, we can create a job from a cronjob:

<pre>
kubectl create job --from=cronjob/volume2-daily-backup volume2-manual
</pre>

This creates a job volume2-manual based on the cronjob volume2-daily-backup.

h3. su-ing into a user that has nologin shell set

Oftentimes users have nologin set as their shell inside the container. To be able to execute maintenance commands within the container, we can use @su -s /bin/sh@ like this:

<pre>
su -s /bin/sh -c '/path/to/your/script' testuser
</pre>

Found on https://serverfault.com/questions/351046/how-to-run-command-as-user-who-has-usr-sbin-nologin-as-shell

h2. Calico CNI

h3. Calico Installation

* We install "calico using helm":https://docs.projectcalico.org/getting-started/kubernetes/helm
* This has the following advantages:
** Easy to upgrade
** Does not require the OS to configure IPv6/dual stack settings, as the tigera operator figures things out on its own

Usually plain calico can be installed directly using:

<pre>
helm repo add projectcalico https://docs.projectcalico.org/charts
helm install --namespace tigera calico projectcalico/tigera-operator --version v3.23.1
</pre>

* Check the tags on https://github.com/projectcalico/calico/tags for the latest release

h3. Installing calicoctl

To be able to manage and configure calico, we need to "install calicoctl (we choose the version as a pod)":https://docs.projectcalico.org/getting-started/clis/calicoctl/install#install-calicoctl-as-a-kubernetes-pod

<pre>
kubectl apply -f https://docs.projectcalico.org/manifests/calicoctl.yaml
</pre>

Or version specific:

<pre>
kubectl apply -f https://github.com/projectcalico/calico/blob/v3.20.4/manifests/calicoctl.yaml

# For 3.22
kubectl apply -f https://projectcalico.docs.tigera.io/archive/v3.22/manifests/calicoctl.yaml
</pre>

And making it easier to access via an alias:

<pre>
alias calicoctl="kubectl exec -i -n kube-system calicoctl -- /calicoctl"
</pre>

h3. Calico configuration

By default our k8s clusters "BGP peer":https://docs.projectcalico.org/networking/bgp
with an upstream router to propagate podcidr and servicecidr.

Default settings in our infrastructure:

* We use a full-mesh using the @nodeToNodeMeshEnabled: true@ option
* We keep the original next hop so that *only* the server with the pod is announcing it (instead of ecmp)
* We use private ASNs for k8s clusters
* We do *not* use any overlay

After installing calico and calicoctl the last step of the installation is usually:

<pre>
calicoctl create -f - < calico-bgp.yaml
</pre>

A sample BGP configuration:

<pre>
---
apiVersion: projectcalico.org/v3
kind: BGPConfiguration
metadata:
  name: default
spec:
  logSeverityScreen: Info
  nodeToNodeMeshEnabled: true
  asNumber: 65534
  serviceClusterIPs:
  - cidr: 2a0a:e5c0:10:3::/108
  serviceExternalIPs:
  - cidr: 2a0a:e5c0:10:3::/108
---
apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  name: router1-place10
spec:
  peerIP: 2a0a:e5c0:10:1::50
  asNumber: 213081
  keepOriginalNextHop: true
</pre>

h2. ArgoCD / ArgoWorkFlow

h3. Argocd Installation

As there is no configuration management present yet, argocd is installed using:

<pre>
kubectl create namespace argocd

# Specific Version
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/v2.3.2/manifests/install.yaml

# OR: latest stable
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
</pre>

* See https://argo-cd.readthedocs.io/en/stable/

h3. Get the argocd credentials

<pre>
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d; echo ""
</pre>

h3. Accessing argocd

In regular IPv6 clusters:

* Navigate to https://argocd-server.argocd.CLUSTERDOMAIN

In legacy IPv4 clusters:

<pre>
kubectl --namespace argocd port-forward svc/argocd-server 8080:80
</pre>

* Navigate to https://localhost:8080

h3. Using the argocd webhook to trigger changes

* To trigger changes, POST JSON to https://argocd.example.com/api/webhook (see the sketch below)

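Argo CD expects a Git provider style push payload on that endpoint. A rough sketch using curl with a Gogs/Gitea-style push event (Gitea sends Gogs-compatible webhooks; the header and payload shown here are illustrative and trimmed down):

<pre>
curl -X POST https://argocd.example.com/api/webhook \
  -H "Content-Type: application/json" \
  -H "X-Gogs-Event: push" \
  -d '{
        "ref": "refs/heads/master",
        "repository": {
          "html_url": "https://code.ungleich.ch/ungleich-intern/k8s-config",
          "default_branch": "master"
        }
      }'
</pre>

Normally this payload is sent by a webhook configured in gitea itself, so that a push triggers an immediate refresh instead of waiting for the next poll.
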
h3. Deploying an application

* Applications are deployed via git towards gitea (code.ungleich.ch) and then pulled by argo
* Always include the *redmine-url* pointing to the (customer) ticket
** Also add the support-url if it exists

Application sample:

<pre>
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: gitea-CUSTOMER
  namespace: argocd
spec:
  destination:
    namespace: default
    server: 'https://kubernetes.default.svc'
  source:
    path: apps/prod/gitea
    repoURL: 'https://code.ungleich.ch/ungleich-intern/k8s-config.git'
    targetRevision: HEAD
    helm:
      parameters:
        - name: storage.data.storageClass
          value: rook-ceph-block-hdd
        - name: storage.data.size
          value: 200Gi
        - name: storage.db.storageClass
          value: rook-ceph-block-ssd
        - name: storage.db.size
          value: 10Gi
        - name: storage.letsencrypt.storageClass
          value: rook-ceph-block-hdd
        - name: storage.letsencrypt.size
          value: 50Mi
        - name: letsencryptStaging
          value: 'no'
        - name: fqdn
          value: 'code.verua.online'
  project: default
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
  info:
    - name: 'redmine-url'
      value: 'https://redmine.ungleich.ch/issues/ISSUEID'
    - name: 'support-url'
      value: 'https://support.ungleich.ch/Ticket/Display.html?id=TICKETID'
</pre>

h2. Helm related operations and conventions

We use helm charts extensively.

* In production, they are managed via argocd
* In development, helm charts can be developed and deployed manually using the helm utility

h3. Installing a helm chart

One can use the usual pattern of

<pre>
helm install <releasename> <chartdirectory>
</pre>

However, when testing helm charts you often want to reinstall/update. The following pattern is "better", because it also works when the release is already installed:

<pre>
helm upgrade --install <releasename> <chartdirectory>
</pre>

h3. Naming services and deployments in helm charts [Application labels]

* We always have {{ .Release.Name }} to identify the current "instance"
* Deployments:
** use @app: <what it is>@, f.i. @app: nginx@, @app: postgres@, ... (see the sketch below)
* See more about standard labels on
** https://kubernetes.io/docs/concepts/overview/working-with-objects/common-labels/
** https://helm.sh/docs/chart_best_practices/labels/

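A minimal sketch of how these conventions could look in a chart's deployment template (purely illustrative; the names and image are made up):

<pre>
apiVersion: apps/v1
kind: Deployment
metadata:
  # {{ .Release.Name }} identifies the instance, the app label says what it is
  name: {{ .Release.Name }}-nginx
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.21
</pre>
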
h2. Rook / Ceph Related Operations

h3. Executing ceph commands

Using the ceph-tools pod as follows:

<pre>
kubectl exec -n rook-ceph -ti $(kubectl -n rook-ceph get pods -l app=rook-ceph-tools -o jsonpath='{.items[*].metadata.name}') -- ceph -s
</pre>

h3. Inspecting the logs of a specific server

<pre>
# Get the related pods
kubectl -n rook-ceph get pods -l app=rook-ceph-osd-prepare
...

# Inspect the logs of a specific pod
kubectl -n rook-ceph logs -f rook-ceph-osd-prepare-server23--1-444qx
</pre>

h3. Inspecting the logs of the rook-ceph-operator

<pre>
kubectl -n rook-ceph logs -f -l app=rook-ceph-operator
</pre>

h3. Triggering server prepare / adding new osds

The rook-ceph-operator triggers/watches/creates pods to maintain hosts. To trigger a full "re-scan", simply delete that pod:

<pre>
kubectl -n rook-ceph delete pods -l app=rook-ceph-operator
</pre>

This will cause all the @rook-ceph-osd-prepare-..@ jobs to be recreated and thus OSDs to be created, if new disks have been added.

h3. Removing an OSD

* See "Ceph OSD Management":https://rook.io/docs/rook/v1.7/ceph-osd-mgmt.html
* More specifically: https://github.com/rook/rook/blob/release-1.7/cluster/examples/kubernetes/ceph/osd-purge.yaml
* Then delete the related deployment

Set the OSD id in the osd-purge.yaml and apply it. The OSD should be down before purging it.

<pre>
apiVersion: batch/v1
kind: Job
metadata:
  name: rook-ceph-purge-osd
  namespace: rook-ceph # namespace:cluster
  labels:
    app: rook-ceph-purge-osd
spec:
  template:
    metadata:
      labels:
        app: rook-ceph-purge-osd
    spec:
      serviceAccountName: rook-ceph-purge-osd
      containers:
        - name: osd-removal
          image: rook/ceph:master
          # TODO: Insert the OSD ID in the last parameter that is to be removed
          # The OSD IDs are a comma-separated list. For example: "0" or "0,2".
          # If you want to preserve the OSD PVCs, set `--preserve-pvc true`.
          #
          # A --force-osd-removal option is available if the OSD should be destroyed even though the
          # removal could lead to data loss.
          args:
            - "ceph"
            - "osd"
            - "remove"
            - "--preserve-pvc"
            - "false"
            - "--force-osd-removal"
            - "false"
            - "--osd-ids"
            - "SETTHEOSDIDHERE"
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: ROOK_MON_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  key: data
                  name: rook-ceph-mon-endpoints
            - name: ROOK_CEPH_USERNAME
              valueFrom:
                secretKeyRef:
                  key: ceph-username
                  name: rook-ceph-mon
            - name: ROOK_CEPH_SECRET
              valueFrom:
                secretKeyRef:
                  key: ceph-secret
                  name: rook-ceph-mon
            - name: ROOK_CONFIG_DIR
              value: /var/lib/rook
            - name: ROOK_CEPH_CONFIG_OVERRIDE
              value: /etc/rook/config/override.conf
            - name: ROOK_FSID
              valueFrom:
                secretKeyRef:
                  key: fsid
                  name: rook-ceph-mon
            - name: ROOK_LOG_LEVEL
              value: DEBUG
          volumeMounts:
            - mountPath: /etc/ceph
              name: ceph-conf-emptydir
            - mountPath: /var/lib/rook
              name: rook-config
      volumes:
        - emptyDir: {}
          name: ceph-conf-emptydir
        - emptyDir: {}
          name: rook-config
      restartPolicy: Never
</pre>

Deleting the deployment:

<pre>
[18:05] bridge:~% kubectl -n rook-ceph delete deployment rook-ceph-osd-6
deployment.apps "rook-ceph-osd-6" deleted
</pre>

h2. Harbor

* We use "Harbor":https://goharbor.io/ for caching and as an image registry. Internal app reference: apps/prod/harbor.
* The admin password is in the password store, auto generated per cluster
* At the moment harbor only authenticates against the internal ldap tree

h3. LDAP configuration

* The url needs to be ldaps://...
* uid = uid
* the rest remains at the standard settings

h2. Monitoring / Prometheus

* Via "kube-prometheus":https://github.com/prometheus-operator/kube-prometheus/

Access via the in-cluster service URLs:

* http://prometheus-k8s.monitoring.svc:9090
* http://grafana.monitoring.svc:3000
* http://alertmanager.monitoring.svc:9093

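If these cluster-internal names are not resolvable from your workstation, a port-forward is a quick alternative. A sketch, assuming the kube-prometheus defaults (namespace @monitoring@, service names as listed above):

<pre>
kubectl --namespace monitoring port-forward svc/prometheus-k8s 9090
kubectl --namespace monitoring port-forward svc/grafana 3000
kubectl --namespace monitoring port-forward svc/alertmanager 9093
</pre>

Then open http://localhost:9090 (or :3000 / :9093) locally.
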
h3. Prometheus Options

* "helm/kube-prometheus-stack":https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack
** Includes dashboards and co.
* "manifest based kube-prometheus":https://github.com/prometheus-operator/kube-prometheus
** Includes dashboards and co.
* "Prometheus Operator (mainly CRD manifests)":https://github.com/prometheus-operator/prometheus-operator

h2. Nextcloud

h3. How to get the nextcloud credentials

* The initial username is set to "nextcloud"
* The password is autogenerated and saved in a kubernetes secret

<pre>
kubectl get secret RELEASENAME-nextcloud -o jsonpath="{.data.PASSWORD}" | base64 -d; echo ""
</pre>

h3. How to fix "Access through untrusted domain"

* Nextcloud stores the initial domain configuration
* If the FQDN is changed, it will show the error message "Access through untrusted domain"
* To fix, edit /var/www/html/config/config.php and correct the domain (see the sketch below)
* Then delete the pods

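A sketch of what this looks like in practice (the @trusted_domains@ array is standard Nextcloud configuration; the label used for deleting the pods depends on the chart and is only an example):

<pre>
# Relevant part of /var/www/html/config/config.php inside the nextcloud container:
#   'trusted_domains' =>
#     array (
#       0 => 'nextcloud.example.com',   // corrected FQDN
#     ),

# Afterwards recreate the pods, e.g. via a label selector
kubectl delete pods -l app.kubernetes.io/name=nextcloud
</pre>
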
h2. Infrastructure versions

h3. ungleich kubernetes infrastructure v5 (2021-10)

Clusters are configured / setup in this order:

* Bootstrap via kubeadm
* "Networking via calico + BGP (non ECMP) using helm":https://docs.projectcalico.org/getting-started/kubernetes/helm
* "ArgoCD for CD":https://argo-cd.readthedocs.io/en/stable/
** "rook for storage via argocd":https://rook.io/
** haproxy as an IPv4-to-IPv6 proxy into the IPv6-only clusters, via argocd
** "kubernetes-secret-generator for in cluster secrets":https://github.com/mittwald/kubernetes-secret-generator
** "ungleich-certbot managing certs and nginx":https://hub.docker.com/repository/docker/ungleich/ungleich-certbot

h3. ungleich kubernetes infrastructure v4 (2021-09)

* rook is configured via manifests instead of using the rook-ceph-cluster helm chart
* The rook operator is still being installed via helm

h3. ungleich kubernetes infrastructure v3 (2021-07)

* rook is now installed via helm via argocd instead of directly via manifests

h3. ungleich kubernetes infrastructure v2 (2021-05)

* Replaced fluxv2 from ungleich k8s v1 with argocd
** argocd can apply helm templates directly without needing to go through Chart releases
* We are also using argoflow for build flows
* Planned to add "kaniko":https://github.com/GoogleContainerTools/kaniko for image building

h3. ungleich kubernetes infrastructure v1 (2021-01)

We are using the following components:

* "Calico as a CNI":https://www.projectcalico.org/ with BGP, IPv6 only, no encapsulation
** Needed for basic networking
* "kubernetes-secret-generator":https://github.com/mittwald/kubernetes-secret-generator for creating secrets
** Needed so that secrets are not stored in the git repository, but only in the cluster
* "ungleich-certbot":https://hub.docker.com/repository/docker/ungleich/ungleich-certbot
** Needed to get letsencrypt certificates for services
* "rook with ceph rbd + cephfs":https://rook.io/ for storage
** rbd for almost everything, *ReadWriteOnce*
** cephfs for smaller things, multi access *ReadWriteMany*
** Needed for providing persistent storage
* "flux v2":https://fluxcd.io/
** Needed to manage resources automatically