h1. The ungleich kubernetes infrastructure and ungleich kubernetes manual

{{toc}}

h2. Status

This document is **pre-production**.

This document is to become the ungleich kubernetes infrastructure overview as well as the ungleich kubernetes manual.

h2. k8s clusters

| Cluster         | Purpose/Setup     | Maintainer | Master(s)                  | argo                                                | IPv4 HTTP proxy | last verified |
| c0.k8s.ooo      | Dev               | -          | UNUSED                     |                                                     |                 |    2021-10-05 |
| c1.k8s.ooo      | retired           |            | -                          |                                                     |                 |    2022-03-15 |
| c2.k8s.ooo      | Dev p7 HW         | Nico       | server47 server53 server54 | "argo":https://argocd-server.argocd.svc.c2.k8s.ooo  |                 |    2021-10-05 |
| c3.k8s.ooo      | retired           | -          | -                          |                                                     |                 |    2021-10-05 |
| c4.k8s.ooo      | Dev2 p7 HW        | Jin-Guk    | server52 server53 server54 |                                                     |                 |             - |
| c5.k8s.ooo      | retired           |            | -                          |                                                     |                 |    2022-03-15 |
| c6.k8s.ooo      | Dev p6 VM Jin-Guk | Jin-Guk    |                            |                                                     |                 |               |
| [[p5.k8s.ooo]]  | production        |            | server34 server36 server38 | "argo":https://argocd-server.argocd.svc.p5.k8s.ooo  |               - |               |
| [[p6.k8s.ooo]]  | production        |            | server67 server69 server71 | "argo":https://argocd-server.argocd.svc.p6.k8s.ooo  |   147.78.194.13 |    2021-10-05 |
| [[p10.k8s.ooo]] | production        |            | server63 server65 server83 | "argo":https://argocd-server.argocd.svc.p10.k8s.ooo |   147.78.194.12 |    2021-10-05 |
| fnnf            | development       | Nico       | server75                   |                                                     |                 |               |

h2. General architecture and components overview

* All k8s clusters are IPv6 only
* We use BGP peering to propagate podcidr and serviceCidr networks to our infrastructure
* The main public testing repository is "ungleich-k8s":https://code.ungleich.ch/ungleich-public/ungleich-k8s
** Private configurations are found in the **k8s-config** repository

h3. Cluster types

| **Type/Feature**            | **Development**                | **Production**         |
| Minimum number of nodes     | 3 (1 master, 3 worker)         | 5 (3 master, 3 worker) |
| Recommended minimum         | 4 (dedicated master, 3 worker) | 8 (3 master, 5 worker) |
| Separation of control plane | optional                       | recommended            |
| Persistent storage          | required                       | required               |
| Number of storage monitors  | 3                              | 5                      |

h2. General k8s operations

h3. Cheat sheet / great external references

* "kubectl cheatsheet":https://kubernetes.io/docs/reference/kubectl/cheatsheet/

h3. Allowing work to be scheduled on the control plane

* Mostly for single node / test / development clusters
* Just remove the master taint as follows

<pre>
kubectl taint nodes --all node-role.kubernetes.io/master-
</pre>
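
On newer kubernetes releases the @master@ taint is being renamed to @control-plane@; if the command above reports nothing to remove, the following variant should work (check the actual taints with @kubectl describe node NODENAME@):

<pre>
kubectl taint nodes --all node-role.kubernetes.io/control-plane-
</pre>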

h3. Get the cluster admin.conf

* On the masters of each cluster you can find the file @/etc/kubernetes/admin.conf@
* To be able to administer the cluster you can copy the admin.conf to your local machine
* Multi cluster debugging becomes very easy if you name the config @~/cX-admin.conf@ (see example below)

<pre>
% scp root@server47.place7.ungleich.ch:/etc/kubernetes/admin.conf ~/c2-admin.conf
% export KUBECONFIG=~/c2-admin.conf
% kubectl get nodes
NAME       STATUS                     ROLES                  AGE   VERSION
server47   Ready                      control-plane,master   82d   v1.22.0
server48   Ready                      control-plane,master   82d   v1.22.0
server49   Ready                      <none>                 82d   v1.22.0
server50   Ready                      <none>                 82d   v1.22.0
server59   Ready                      control-plane,master   82d   v1.22.0
server60   Ready,SchedulingDisabled   <none>                 82d   v1.22.0
server61   Ready                      <none>                 82d   v1.22.0
server62   Ready                      <none>                 82d   v1.22.0
</pre>
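
To debug several clusters in parallel, kubectl can also merge multiple config files; a small sketch (note that kubeadm names every context "kubernetes-admin@kubernetes" by default, so the contexts may need renaming first):

<pre>
% export KUBECONFIG=~/c1-admin.conf:~/c2-admin.conf
% kubectl config get-contexts
</pre>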

h3. Installing a new k8s cluster

* Decide on the cluster name (usually *cX.k8s.ooo*), X counting upwards
** Using pXX.k8s.ooo for production clusters of placeXX
* Use cdist to configure the nodes with requirements like crio
* Decide between single or multi-node control plane setups (see below)
** Single control plane suitable for development clusters

Typical init procedure:

* Single control plane: @kubeadm init --config bootstrap/XXX/kubeadm.yaml@
* Multi control plane (HA): @kubeadm init --config bootstrap/XXX/kubeadm.yaml --upload-certs@
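
A minimal sketch of what such a @bootstrap/XXX/kubeadm.yaml@ could contain (all values below are illustrative assumptions, not our actual configuration):

<pre>
# Sketch only: endpoint and subnets are placeholders
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.22.0
controlPlaneEndpoint: "cX-api.k8s.ooo:6443"
networking:
  # IPv6-only subnets, matching our architecture
  podSubnet: 2a0a:e5c0:XX:1::/64
  serviceSubnet: 2a0a:e5c0:XX:3::/108
</pre>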

h3. Deleting a pod that is stuck in terminating state

<pre>
kubectl delete pod <PODNAME> --grace-period=0 --force --namespace <NAMESPACE>
</pre>

(from https://stackoverflow.com/questions/35453792/pods-stuck-in-terminating-status)

h3. Listing nodes of a cluster

<pre>
[15:05] bridge:~% kubectl get nodes
NAME       STATUS   ROLES                  AGE   VERSION
server22   Ready    <none>                 52d   v1.22.0
server23   Ready    <none>                 52d   v1.22.2
server24   Ready    <none>                 52d   v1.22.0
server25   Ready    <none>                 52d   v1.22.0
server26   Ready    <none>                 52d   v1.22.0
server27   Ready    <none>                 52d   v1.22.0
server63   Ready    control-plane,master   52d   v1.22.0
server64   Ready    <none>                 52d   v1.22.0
server65   Ready    control-plane,master   52d   v1.22.0
server66   Ready    <none>                 52d   v1.22.0
server83   Ready    control-plane,master   52d   v1.22.0
server84   Ready    <none>                 52d   v1.22.0
server85   Ready    <none>                 52d   v1.22.0
server86   Ready    <none>                 52d   v1.22.0
</pre>

h3. Removing / draining a node

Usually @kubectl drain server@ should do the job, but sometimes we need to be more aggressive:

<pre>
kubectl drain --delete-emptydir-data --ignore-daemonsets server23
</pre>

h3. Re-adding a node after draining

<pre>
kubectl uncordon serverXX
</pre>

h3. (Re-)joining worker nodes after creating the cluster

* We need to have an up-to-date token
* We use different join commands for the workers and control plane nodes

Generating the join command on an existing control plane node:

<pre>
kubeadm token create --print-join-command
</pre>
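
The printed command is then executed on the worker that should (re-)join; it looks roughly like this (token and hash are placeholders, compare the example session in the next section):

<pre>
kubeadm join p10-api.k8s.ooo:6443 --token TOKEN --discovery-token-ca-cert-hash sha256:HASH
</pre>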

h3. (Re-)joining control plane nodes after creating the cluster

* We generate the token again
* We upload the certificates
* We need to combine/create the join command for the control plane node

Example session:

<pre>
% kubeadm token create --print-join-command
kubeadm join p10-api.k8s.ooo:6443 --token xmff4i.ABC --discovery-token-ca-cert-hash sha256:longhash

% kubeadm init phase upload-certs --upload-certs
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
CERTKEY

# Then we use these two outputs on the joining node:

kubeadm join p10-api.k8s.ooo:6443 --token xmff4i.ABC --discovery-token-ca-cert-hash sha256:longhash --control-plane --certificate-key CERTKEY
</pre>

Commands to be used on a control plane node:

<pre>
kubeadm token create --print-join-command
kubeadm init phase upload-certs --upload-certs
</pre>

Commands to be used on the joining node:

<pre>
JOINCOMMAND --control-plane --certificate-key CERTKEY
</pre>

SEE ALSO

* https://stackoverflow.com/questions/63936268/how-to-generate-kubeadm-token-for-secondary-control-plane-nodes
* https://blog.scottlowe.org/2019/08/15/reconstructing-the-join-command-for-kubeadm/

h3. How to fix etcd not starting when rejoining a kubernetes cluster as a control plane

If during the above step etcd does not come up, @kubeadm join@ can hang as follows:

<pre>
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[check-etcd] Checking that the etcd cluster is healthy
error execution phase check-etcd: etcd cluster is not healthy: failed to dial endpoint https://[2a0a:e5c0:10:1:225:b3ff:fe20:378a]:2379 with maintenance client: context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
</pre>

Then the problem is likely that the etcd server is still a member of the cluster. We first need to remove it from the etcd cluster and then the join works.

To fix this we do:

* Find a working etcd pod
* Find the etcd members / member list
* Remove the etcd member that we want to re-join the cluster

<pre>
# Find the etcd pods
kubectl -n kube-system get pods -l component=etcd,tier=control-plane

# Get the list of etcd servers with the member id
kubectl exec -n kube-system -ti ETCDPODNAME -- etcdctl --endpoints '[::1]:2379' --cacert /etc/kubernetes/pki/etcd/ca.crt --cert /etc/kubernetes/pki/etcd/server.crt --key /etc/kubernetes/pki/etcd/server.key member list

# Remove the member
kubectl exec -n kube-system -ti ETCDPODNAME -- etcdctl --endpoints '[::1]:2379' --cacert /etc/kubernetes/pki/etcd/ca.crt --cert /etc/kubernetes/pki/etcd/server.crt --key /etc/kubernetes/pki/etcd/server.key member remove MEMBERID
</pre>

Sample session:

<pre>
[10:48] line:~% kubectl -n kube-system get pods -l component=etcd,tier=control-plane
NAME            READY   STATUS    RESTARTS     AGE
etcd-server63   1/1     Running   0            3m11s
etcd-server65   1/1     Running   3            7d2h
etcd-server83   1/1     Running   8 (6d ago)   7d2h
[10:48] line:~% kubectl exec -n kube-system -ti etcd-server65 -- etcdctl --endpoints '[::1]:2379' --cacert /etc/kubernetes/pki/etcd/ca.crt --cert /etc/kubernetes/pki/etcd/server.crt --key /etc/kubernetes/pki/etcd/server.key member list
356891cd676df6e4, started, server65, https://[2a0a:e5c0:10:1:225:b3ff:fe20:375c]:2380, https://[2a0a:e5c0:10:1:225:b3ff:fe20:375c]:2379, false
371b8a07185dee7e, started, server63, https://[2a0a:e5c0:10:1:225:b3ff:fe20:378a]:2380, https://[2a0a:e5c0:10:1:225:b3ff:fe20:378a]:2379, false
5942bc58307f8af9, started, server83, https://[2a0a:e5c0:10:1:3e4a:92ff:fe79:bb98]:2380, https://[2a0a:e5c0:10:1:3e4a:92ff:fe79:bb98]:2379, false

[10:48] line:~% kubectl exec -n kube-system -ti etcd-server65 -- etcdctl --endpoints '[::1]:2379' --cacert /etc/kubernetes/pki/etcd/ca.crt --cert /etc/kubernetes/pki/etcd/server.crt --key /etc/kubernetes/pki/etcd/server.key member remove 371b8a07185dee7e
Member 371b8a07185dee7e removed from cluster e3c0805f592a8f77
</pre>

SEE ALSO

* We found the solution using https://stackoverflow.com/questions/67921552/re-installed-node-cannot-join-kubernetes-cluster

h2. Calico CNI

h3. Calico installation

* We install "calico using helm":https://docs.projectcalico.org/getting-started/kubernetes/helm
* This has the following advantages:
** Easy to upgrade
** Does not require us to configure IPv6/dual-stack settings, as the tigera operator figures things out on its own

Usually plain calico can be installed directly using:

<pre>
helm repo add projectcalico https://docs.projectcalico.org/charts
helm install calico projectcalico/tigera-operator --version v3.20.4
</pre>

* Check the tags on https://github.com/projectcalico/calico/tags for the latest release

h3. Installing calicoctl

To be able to manage and configure calico, we need to
"install calicoctl (we choose to run it as a pod)":https://docs.projectcalico.org/getting-started/clis/calicoctl/install#install-calicoctl-as-a-kubernetes-pod

<pre>
kubectl apply -f https://docs.projectcalico.org/manifests/calicoctl.yaml
</pre>

Or version-specific (note the raw URL, the github HTML page cannot be applied directly):

<pre>
kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.20.4/manifests/calicoctl.yaml
</pre>

And making it more easily accessible via an alias:

<pre>
alias calicoctl="kubectl exec -i -n kube-system calicoctl -- /calicoctl"
</pre>

h3. Calico configuration

By default our k8s clusters "BGP peer":https://docs.projectcalico.org/networking/bgp
with an upstream router to propagate podcidr and servicecidr.

Default settings in our infrastructure:

* We use a full mesh using the @nodeToNodeMeshEnabled: true@ option
* We keep the original next hop so that *only* the server with the pod is announcing it (instead of ECMP)
* We use private ASNs for k8s clusters
* We do *not* use any overlay

After installing calico and calicoctl the last step of the installation is usually:

<pre>
calicoctl create -f - < calico-bgp.yaml
</pre>

A sample BGP configuration:

<pre>
---
apiVersion: projectcalico.org/v3
kind: BGPConfiguration
metadata:
  name: default
spec:
  logSeverityScreen: Info
  nodeToNodeMeshEnabled: true
  asNumber: 65534
  serviceClusterIPs:
  - cidr: 2a0a:e5c0:10:3::/108
  serviceExternalIPs:
  - cidr: 2a0a:e5c0:10:3::/108
---
apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  name: router1-place10
spec:
  peerIP: 2a0a:e5c0:10:1::50
  asNumber: 213081
  keepOriginalNextHop: true
</pre>
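
To verify that the configuration was accepted, the objects can be read back via calicoctl (using the alias from above); a small check sketch:

<pre>
calicoctl get bgpconfiguration default -o yaml
calicoctl get bgppeer
</pre>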

h2. ArgoCD / Argo Workflows

h3. ArgoCD installation

As there is no configuration management present yet, argocd is installed using:

<pre>
kubectl create namespace argocd

# Version 2.2.3
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/v2.2.3/manifests/install.yaml

# OR: latest stable
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
</pre>

* See https://argo-cd.readthedocs.io/en/stable/

h3. Get the argocd credentials

<pre>
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d; echo ""
</pre>

h3. Accessing argocd

In regular IPv6 clusters:

* Navigate to https://argocd-server.argocd.CLUSTERDOMAIN

In legacy IPv4 clusters:

<pre>
kubectl --namespace argocd port-forward svc/argocd-server 8080:80
</pre>

* Navigate to https://localhost:8080

h3. Using the argocd webhook to trigger changes

* To trigger changes, post JSON to https://argocd.example.com/api/webhook
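
ArgoCD's webhook endpoint understands the push events of the common git hosting systems; a hedged sketch using a GitHub-style payload (URL and repository below are placeholders):

<pre>
curl -X POST https://argocd.example.com/api/webhook \
  -H 'Content-Type: application/json' \
  -H 'X-GitHub-Event: push' \
  -d '{"ref": "refs/heads/master", "repository": {"html_url": "https://code.ungleich.ch/ungleich-intern/k8s-config"}}'
</pre>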

h3. Deploying an application

* Applications are deployed via git towards gitea (code.ungleich.ch) and then pulled by argo
* Always include the *redmine-url* pointing to the (customer) ticket
** Also add the support-url if it exists

A sample application:

<pre>
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: gitea-CUSTOMER
  namespace: argocd
spec:
  destination:
    namespace: default
    server: 'https://kubernetes.default.svc'
  source:
    path: apps/prod/gitea
    repoURL: 'https://code.ungleich.ch/ungleich-intern/k8s-config.git'
    targetRevision: HEAD
    helm:
      parameters:
        - name: storage.data.storageClass
          value: rook-ceph-block-hdd
        - name: storage.data.size
          value: 200Gi
        - name: storage.db.storageClass
          value: rook-ceph-block-ssd
        - name: storage.db.size
          value: 10Gi
        - name: storage.letsencrypt.storageClass
          value: rook-ceph-block-hdd
        - name: storage.letsencrypt.size
          value: 50Mi
        - name: letsencryptStaging
          value: 'no'
        - name: fqdn
          value: 'code.verua.online'
  project: default
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
  info:
    - name: 'redmine-url'
      value: 'https://redmine.ungleich.ch/issues/ISSUEID'
    - name: 'support-url'
      value: 'https://support.ungleich.ch/Ticket/Display.html?id=TICKETID'
</pre>

h2. Helm related operations and conventions

We use helm charts extensively.

* In production, they are managed via argocd
* In development, helm charts can be developed and deployed manually using the helm utility.

h3. Installing a helm chart

One can use the usual pattern of

<pre>
helm install <releasename> <chartdirectory>
</pre>

However, when testing helm charts you often want to reinstall/update. The following pattern is "better", because it also works if the release is already installed:

<pre>
helm upgrade --install <releasename> <chartdirectory>
</pre>

h3. Naming services and deployments in helm charts [Application labels]

* We always have @{{ .Release.Name }}@ to identify the current "instance"
* Deployments:
** use @app: <what it is>@, e.g. @app: nginx@, @app: postgres@, ...
* See more about standard labels on
** https://kubernetes.io/docs/concepts/overview/working-with-objects/common-labels/
** https://helm.sh/docs/chart_best_practices/labels/
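
A sketch of how these conventions could look in a chart template (the nginx deployment is purely illustrative):

<pre>
apiVersion: apps/v1
kind: Deployment
metadata:
  # The release name identifies the instance, the app label what it is
  name: {{ .Release.Name }}-nginx
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.21
</pre>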

h2. Rook / Ceph related operations

h3. Executing ceph commands

Using the ceph-tools pod as follows:

<pre>
kubectl exec -n rook-ceph -ti $(kubectl -n rook-ceph get pods -l app=rook-ceph-tools -o jsonpath='{.items[*].metadata.name}') -- ceph -s
</pre>
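
For repeated use, an alias analogous to the calicoctl one above can help (a sketch; assumes the tools deployment is named @rook-ceph-tools@):

<pre>
alias rookceph="kubectl exec -n rook-ceph -ti deploy/rook-ceph-tools -- ceph"
# e.g.: rookceph -s, rookceph osd tree
</pre>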

h3. Inspecting the logs of a specific server

<pre>
# Get the related pods
kubectl -n rook-ceph get pods -l app=rook-ceph-osd-prepare
...

# Inspect the logs of a specific pod
kubectl -n rook-ceph logs -f rook-ceph-osd-prepare-server23--1-444qx
</pre>

h3. Inspecting the logs of the rook-ceph-operator

<pre>
kubectl -n rook-ceph logs -f -l app=rook-ceph-operator
</pre>

h3. Triggering server prepare / adding new OSDs

The rook-ceph-operator triggers/watches/creates pods to maintain hosts. To trigger a full "re-scan", simply delete that pod:

<pre>
kubectl -n rook-ceph delete pods -l app=rook-ceph-operator
</pre>

This will cause all the @rook-ceph-osd-prepare-..@ jobs to be recreated and thus OSDs to be created, if new disks have been added.

h3. Removing an OSD

* See "Ceph OSD Management":https://rook.io/docs/rook/v1.7/ceph-osd-mgmt.html
* More specifically: https://github.com/rook/rook/blob/release-1.7/cluster/examples/kubernetes/ceph/osd-purge.yaml

h2. Harbor

* We use "Harbor":https://goharbor.io/ for caching and as an image registry. Internal app reference: apps/prod/harbor.
* The admin password is in the password store, auto-generated per cluster
* At the moment harbor only authenticates against the internal LDAP tree

h3. LDAP configuration

* The url needs to be ldaps://...
* uid = uid
* the rest is standard

h2. Monitoring / Prometheus

* Via "kube-prometheus":https://github.com/prometheus-operator/kube-prometheus/

Access via:

* http://prometheus-k8s.monitoring.svc:9090
* http://grafana.monitoring.svc:3000
* http://alertmanager.monitoring.svc:9093
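
These names resolve inside the cluster only; from a workstation, port-forwarding is one way in (service names as deployed by kube-prometheus; adjust if they differ):

<pre>
kubectl --namespace monitoring port-forward svc/grafana 3000:3000
</pre>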

h2. Nextcloud

h3. How to get the nextcloud credentials

* The initial username is set to "nextcloud"
* The password is autogenerated and saved in a kubernetes secret

<pre>
kubectl get secret RELEASENAME-nextcloud -o jsonpath="{.data.PASSWORD}" | base64 -d; echo ""
</pre>

h3. How to fix "Access through untrusted domain"

* Nextcloud stores the initial domain configuration
* If the FQDN is changed, it will show the error message "Access through untrusted domain"
* To fix, edit @/var/www/html/config/config.php@ and correct the domain
* Then delete the pods
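
The relevant part of @config.php@ is the trusted domains array (the hostname below is a placeholder):

<pre>
'trusted_domains' =>
array (
  0 => 'nextcloud.example.com',
),
</pre>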

h2. Infrastructure versions

h3. ungleich kubernetes infrastructure v5 (2021-10)

Clusters are configured / set up in this order:

* Bootstrap via kubeadm
* "Networking via calico + BGP (non ECMP) using helm":https://docs.projectcalico.org/getting-started/kubernetes/helm
* "ArgoCD for CD":https://argo-cd.readthedocs.io/en/stable/
** "rook for storage via argocd":https://rook.io/
** haproxy as an IPv4-to-IPv6 proxy for IPv6-only clusters, via argocd
** "kubernetes-secret-generator for in-cluster secrets":https://github.com/mittwald/kubernetes-secret-generator
** "ungleich-certbot managing certs and nginx":https://hub.docker.com/repository/docker/ungleich/ungleich-certbot

h3. ungleich kubernetes infrastructure v4 (2021-09)

* rook is configured via manifests instead of using the rook-ceph-cluster helm chart
* The rook operator is still being installed via helm

h3. ungleich kubernetes infrastructure v3 (2021-07)

* rook is now installed via helm via argocd instead of directly via manifests

h3. ungleich kubernetes infrastructure v2 (2021-05)

* Replaced fluxv2 from ungleich k8s v1 with argocd
** argocd can apply helm templates directly without needing to go through Chart releases
* We are also using argoflow for build flows
* Planned to add "kaniko":https://github.com/GoogleContainerTools/kaniko for image building

h3. ungleich kubernetes infrastructure v1 (2021-01)

We are using the following components:

* "Calico as a CNI":https://www.projectcalico.org/ with BGP, IPv6 only, no encapsulation
** Needed for basic networking
* "kubernetes-secret-generator":https://github.com/mittwald/kubernetes-secret-generator for creating secrets
** Needed so that secrets are not stored in the git repository, but only in the cluster
* "ungleich-certbot":https://hub.docker.com/repository/docker/ungleich/ungleich-certbot
** Needed to get letsencrypt certificates for services
* "rook with ceph rbd + cephfs":https://rook.io/ for storage
** rbd for almost everything, *ReadWriteOnce*
** cephfs for smaller things, multi-access *ReadWriteMany*
** Needed for providing persistent storage
* "flux v2":https://fluxcd.io/
** Needed to manage resources automatically