h1. The ungleich kubernetes infrastructure and ungleich kubernetes manual

{{toc}}

h2. Status

This document is **pre-production**.
This document is to become the ungleich kubernetes infrastructure overview as well as the ungleich kubernetes manual.

h2. k8s clusters

| Cluster         | Purpose/Setup     | Maintainer   | Master(s)                                                | Last verified |
| c0.k8s.ooo      | Dev               | -            | UNUSED                                                   | 2021-10-05    |
| c1.k8s.ooo      | Dev p6 VM         | Nico         | 2a0a-e5c0-2-11-0-62ff-fe0b-1a3d.k8s-1.place6.ungleich.ch | 2021-10-05    |
| c2.k8s.ooo      | Dev p7 HW         | Nico         | server47 server53 server54                               | 2021-10-05    |
| c3.k8s.ooo      | Test p7 PI        | -            | UNUSED                                                   | 2021-10-05    |
| c4.k8s.ooo      | Dev2 p7 HW        | Fran/Jin-Guk | server52 server53 server54                               | -             |
| c5.k8s.ooo      | Dev p6 VM Amal    | Nico/Amal    | 2a0a-e5c0-2-11-0-62ff-fe0b-1a46.k8s-1.place6.ungleich.ch | -             |
| c6.k8s.ooo      | Dev p6 VM Jin-Guk | Jin-Guk      |                                                          | -             |
| [[p6.k8s.ooo]]  | Production        |              | server67 server69 server71                               | 2021-10-05    |
| [[p10.k8s.ooo]] | Production        |              | server63 server65 server83                               | 2021-10-05    |

h2. General architecture and components overview

* All k8s clusters are IPv6 only
* We use BGP peering to propagate the podCidr and serviceCidr networks to our infrastructure (see the sketch below)
* The main public testing repository is "ungleich-k8s":https://code.ungleich.ch/ungleich-public/ungleich-k8s
** Private configurations are found in the **k8s-config** repository
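
To illustrate the BGP setup, here is a minimal sketch of the Calico resources involved (the AS numbers, prefixes and peer address are placeholders; the real values live in the **k8s-config** repository):

<pre>
# BGPConfiguration: advertise the service CIDR of the cluster
apiVersion: projectcalico.org/v3
kind: BGPConfiguration
metadata:
  name: default
spec:
  asNumber: 65534                 # placeholder cluster AS
  serviceClusterIPs:
  - cidr: 2a0a:e5c0:XXXX::/108    # placeholder serviceCidr
---
# BGPPeer: peer with the upstream router that redistributes the routes
apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  name: upstream-router
spec:
  peerIP: 2a0a:e5c0:XXXX::1       # placeholder router address
  asNumber: 65530                 # placeholder upstream AS
</pre>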

h3. Cluster types

| **Type/Feature**            | **Development**                | **Production**         |
| Minimum number of nodes     | 3 (1 master, 3 worker)         | 5 (3 master, 3 worker) |
| Recommended minimum         | 4 (dedicated master, 3 worker) | 8 (3 master, 5 worker) |
| Separation of control plane | optional                       | recommended            |
| Persistent storage          | required                       | required               |
| Number of storage monitors  | 3                              | 5                      |

h2. General k8s operations

h3. Cheat sheet / external great references

* "kubectl cheatsheet":https://kubernetes.io/docs/reference/kubectl/cheatsheet/

h3. Get the argocd credentials

<pre>
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d; echo ""
</pre>
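
To use these credentials with the argocd CLI or web UI, something along these lines should work (assuming the default @argocd-server@ service name; the local port is arbitrary):

<pre>
# In one terminal: forward the argocd API/UI to the local machine
kubectl -n argocd port-forward svc/argocd-server 8443:443

# In another terminal: log in as admin with the password obtained above
argocd login localhost:8443 --username admin --insecure
</pre>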

h3. Get the cluster admin.conf

* On the masters of each cluster you can find the file @/etc/kubernetes/admin.conf@
* To administer the cluster you can copy the admin.conf to your local machine
* Multi cluster debugging becomes very easy if you name the config ~/cX-admin.conf (see example below)

<pre>
% scp root@server47.place7.ungleich.ch:/etc/kubernetes/admin.conf ~/c2-admin.conf
% export KUBECONFIG=~/c2-admin.conf
% kubectl get nodes
NAME       STATUS                     ROLES                  AGE   VERSION
server47   Ready                      control-plane,master   82d   v1.22.0
server48   Ready                      control-plane,master   82d   v1.22.0
server49   Ready                      <none>                 82d   v1.22.0
server50   Ready                      <none>                 82d   v1.22.0
server59   Ready                      control-plane,master   82d   v1.22.0
server60   Ready,SchedulingDisabled   <none>                 82d   v1.22.0
server61   Ready                      <none>                 82d   v1.22.0
server62   Ready                      <none>                 82d   v1.22.0
</pre>
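
With one admin.conf per cluster, switching clusters is then just a matter of pointing KUBECONFIG at a different file, for example:

<pre>
% KUBECONFIG=~/c1-admin.conf kubectl get nodes
% KUBECONFIG=~/c2-admin.conf kubectl get nodes
</pre>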

h3. Installing a new k8s cluster

* Decide on the cluster name (usually *cX.k8s.ooo*), X counting upwards
** Use *pXX.k8s.ooo* for production clusters of placeXX
* Use cdist to configure the nodes with requirements like crio
* Decide between single or multi node control plane setups (see below)
** A single control plane is suitable for development clusters

Typical init procedure:

* Single control plane: @kubeadm init --config bootstrap/XXX/kubeadm.yaml@
* Multi control plane (HA): @kubeadm init --config bootstrap/XXX/kubeadm.yaml --upload-certs@
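
The @bootstrap/XXX/kubeadm.yaml@ files live in the ungleich-k8s repository. As a rough sketch (not the actual file; the cluster name, endpoint and prefixes below are placeholders), such a config carries the cluster name, the control plane endpoint and the IPv6 pod/service CIDRs:

<pre>
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
clusterName: cX.k8s.ooo
controlPlaneEndpoint: "cX.k8s.ooo:6443"
networking:
  podSubnet: 2a0a:e5c0:XXXX::/64        # placeholder podCidr
  serviceSubnet: 2a0a:e5c0:YYYY::/108   # placeholder serviceCidr
</pre>

Additional control plane nodes and workers then join with the commands printed by @kubeadm init@, or later via @kubeadm token create --print-join-command@ on an existing master.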

h3. Deleting a pod that is hanging in terminating state

<pre>
kubectl delete pod <PODNAME> --grace-period=0 --force --namespace <NAMESPACE>
</pre>

(from https://stackoverflow.com/questions/35453792/pods-stuck-in-terminating-status)

h3. Listing nodes of a cluster

<pre>
[15:05] bridge:~% kubectl get nodes
NAME       STATUS   ROLES                  AGE   VERSION
server22   Ready    <none>                 52d   v1.22.0
server23   Ready    <none>                 52d   v1.22.2
server24   Ready    <none>                 52d   v1.22.0
server25   Ready    <none>                 52d   v1.22.0
server26   Ready    <none>                 52d   v1.22.0
server27   Ready    <none>                 52d   v1.22.0
server63   Ready    control-plane,master   52d   v1.22.0
server64   Ready    <none>                 52d   v1.22.0
server65   Ready    control-plane,master   52d   v1.22.0
server66   Ready    <none>                 52d   v1.22.0
server83   Ready    control-plane,master   52d   v1.22.0
server84   Ready    <none>                 52d   v1.22.0
server85   Ready    <none>                 52d   v1.22.0
server86   Ready    <none>                 52d   v1.22.0
</pre>

h3. Removing / draining a node

Usually @kubectl drain server@ should do the job, but sometimes we need to be more aggressive:

<pre>
kubectl drain --delete-emptydir-data --ignore-daemonsets server23
</pre>
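
If the node should not only be drained but removed from the cluster entirely, it can afterwards be deleted from the API (serverXX being the node name):

<pre>
kubectl delete node serverXX
</pre>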

h3. Re-adding a node after draining

<pre>
kubectl uncordon serverXX
</pre>

h2. Rook / Ceph Related Operations

h3. Inspecting the logs of a specific server

<pre>
# Get the related pods
kubectl -n rook-ceph get pods -l app=rook-ceph-osd-prepare
...

# Inspect the logs of a specific pod
kubectl -n rook-ceph logs -f rook-ceph-osd-prepare-server23--1-444qx
</pre>

h3. Triggering server prepare / adding new OSDs

The rook-ceph-operator triggers/watches/creates pods to maintain hosts. To trigger a full "re-scan", simply delete that pod:

<pre>
kubectl -n rook-ceph delete pods -l app=rook-ceph-operator
</pre>

This will cause all the @rook-ceph-osd-prepare-..@ jobs to be recreated and thus OSDs to be created if new disks have been added.
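
To follow what the operator is doing after the restart, the prepare jobs and the resulting OSD pods can be watched (using the labels rook applies to these pods):

<pre>
# Watch the prepare jobs being recreated
kubectl -n rook-ceph get pods -l app=rook-ceph-osd-prepare -w

# Check that new OSD pods show up
kubectl -n rook-ceph get pods -l app=rook-ceph-osd
</pre>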

h3. Removing an OSD

* See "Ceph OSD Management":https://rook.io/docs/rook/v1.7/ceph-osd-mgmt.html

h2. Infrastructure versions

h3. ungleich kubernetes infrastructure v3

* rook is now installed via helm through argocd instead of directly via manifests (see the sketch below)
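
A sketch of what such an argocd @Application@ can look like (the chart version and values are placeholders; the actual definition is kept in the **k8s-config** repository):

<pre>
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: rook-ceph
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://charts.rook.io/release
    chart: rook-ceph
    targetRevision: v1.7.3              # placeholder chart version
  destination:
    server: https://kubernetes.default.svc
    namespace: rook-ceph
  syncPolicy:
    automated: {}
</pre>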

h3. ungleich kubernetes infrastructure v2

* Replaced fluxv2 from ungleich k8s v1 with argocd
** argocd can apply helm templates directly without needing to go through Chart releases
* We are also using argoflow for build flows
* Planned to add "kaniko":https://github.com/GoogleContainerTools/kaniko for image building

h3. ungleich kubernetes infrastructure v1

We are using the following components:

* "Calico as a CNI":https://www.projectcalico.org/ with BGP, IPv6 only, no encapsulation
** Needed for basic networking
* "kubernetes-secret-generator":https://github.com/mittwald/kubernetes-secret-generator for creating secrets
** Needed so that secrets are not stored in the git repository, but only in the cluster (see the sketch after this list)
* "ungleich-certbot":https://hub.docker.com/repository/docker/ungleich/ungleich-certbot
** Needed to get letsencrypt certificates for services
* "rook with ceph rbd + cephfs":https://rook.io/ for storage
** rbd for almost everything, *ReadWriteOnce*
** cephfs for smaller things, multi access *ReadWriteMany*
** Needed for providing persistent storage
* "flux v2":https://fluxcd.io/
** Needed to manage resources automatically
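
For illustration, kubernetes-secret-generator is driven by an annotation on an otherwise empty Secret; a minimal sketch, assuming the upstream annotation format documented in the linked repository (the secret name is a placeholder):

<pre>
apiVersion: v1
kind: Secret
metadata:
  name: some-service-password
  annotations:
    secret-generator.v1.mittwald.de/autogenerate: password
data: {}
</pre>

The operator then fills in @data.password@ with a random value inside the cluster, so only the annotated stub lives in git.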