h1. The ungleich kubernetes infrastructure and ungleich kubernetes manual

{{toc}}

h2. Status

This document is **pre-production**. It is intended to become the ungleich kubernetes infrastructure overview as well as the ungleich kubernetes manual.

h2. k8s clusters

| **Cluster** | **Purpose** |
| c0.k8s.ooo | Dev |
| c1.k8s.ooo | Dev p6 |
| c2.k8s.ooo | Demo/Semiprod |
| c3.k8s.ooo | Dev p6 |
| c4.k8s.ooo | active-dev p7 |
| p6.k8s.ooo | planned |
| p10.k8s.ooo | production |

Typical init procedure:

* Single control plane: @kubeadm init --config bootstrap/XXX/kubeadm.yaml@
* Multi control plane (HA): @kubeadm init --config bootstrap/XXX/kubeadm.yaml --upload-certs@

h2. General architecture and components overview

* All k8s clusters are IPv6 only
* We use BGP peering to propagate the podCidr and serviceCidr networks to our infrastructure
* The main / public testing repository is "ungleich-k8s":https://code.ungleich.ch/ungleich-public/ungleich-k8s
** Private configurations are found in the **k8s-config** repository

h2. ungleich kubernetes infrastructure v1

We are using the following components:

* "Calico as a CNI":https://www.projectcalico.org/ with BGP, IPv6 only, no encapsulation
** Needed for basic networking
* "kubernetes-secret-generator":https://github.com/mittwald/kubernetes-secret-generator for creating secrets
** Needed so that secrets are not stored in the git repository, but only in the cluster
* "ungleich-certbot":https://hub.docker.com/repository/docker/ungleich/ungleich-certbot
** Needed to get letsencrypt certificates for services
* "rook with ceph rbd + cephfs":https://rook.io/ for storage
** rbd for almost everything, *ReadWriteOnce*
** cephfs for smaller things, multi access *ReadWriteMany*
** Needed for providing persistent storage
* "flux v2":https://fluxcd.io/
** Needed to manage resources automatically

h3. Persistent storage setup

* 3 or 5 monitors

h3. Cluster types

| **Type/Feature** | **Development** | **Production** |
| Min no. of nodes | 3 (1 master, 3 worker) | 5 (3 master, 3 worker) |
| Recommended minimum | 4 (dedicated master, 3 worker) | 8 (3 master, 5 worker) |
| Separation of control plane | optional | recommended |
| Persistent storage | required | required |
| Number of storage monitors | 3 | 5 |

h2. Operations

h3. Installing a new k8s cluster

* Decide on the cluster name (usually *cX.k8s.ooo*), X counting upwards
* Use cdist to configure the nodes with requirements like crio
* Decide between single or multi node control plane setups (see below)
* Set up DNS delegation and glue records:
** kube-dns.kube-system.svc.cX AAAA ...
** kube-dns.kube-system.svc.cX A ...
** cX NS kube-dns.kube-system.svc.cX
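The @bootstrap/XXX/kubeadm.yaml@ files referenced in the init procedure above live in the private **k8s-config** repository. For orientation only, a minimal config for an IPv6-only cluster could look roughly like the sketch below; the cluster name, control plane endpoint and CIDRs are placeholders and not the values we actually use.

<pre><code class="yaml">
# Sketch only: a possible kubeadm config for an IPv6-only cluster.
# All names, endpoints and CIDRs are placeholders; the real files are in k8s-config.
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
clusterName: cX.k8s.ooo
controlPlaneEndpoint: "cX.k8s.ooo:6443"
networking:
  # podSubnet and serviceSubnet are the networks announced to the
  # infrastructure via BGP (Calico)
  podSubnet: "2001:db8:1::/48"
  serviceSubnet: "2001:db8:2::/108"
</code></pre>

With such a file in place, the commands from "Typical init procedure" above apply unchanged; @--upload-certs@ is only needed for the multi control plane (HA) variant.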
h2. Open Issues / To be discussed

* "Maybe add Autoscaling support?":https://github.com/kubernetes-sigs/metrics-server
** https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/
** https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler/
* Certainly deploy in-cluster monitoring
** "prometheus-operator":https://github.com/prometheus-operator/prometheus-operator operator + CRDs
** "kube-prometheus":https://github.com/prometheus-operator/kube-prometheus complete example, based on prometheus-operator
** "kubernetes dashboard":https://github.com/kubernetes/dashboard generic cluster overview, basically kubectl for a browser + graphs
** "kube-prometheus-stack via helm":https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack#kube-prometheus-stack
*** Looks most fitting, testing it in #9468
* Matrix/Notification bot
** Informing about changes in the cluster
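Since flux v2 already manages resources in the clusters, one possible way to roll out kube-prometheus-stack, should it be chosen after the testing in #9468, is a HelmRepository/HelmRelease pair. This is only a sketch; the namespace, release name and intervals are placeholders, not a decision.

<pre><code class="yaml">
# Sketch only: flux v2 HelmRepository + HelmRelease for kube-prometheus-stack.
# Namespace, release name and intervals are placeholders.
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: HelmRepository
metadata:
  name: prometheus-community
  namespace: flux-system
spec:
  interval: 1h
  url: https://prometheus-community.github.io/helm-charts
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: kube-prometheus-stack
  namespace: monitoring
spec:
  interval: 10m
  chart:
    spec:
      chart: kube-prometheus-stack
      sourceRef:
        kind: HelmRepository
        name: prometheus-community
        namespace: flux-system
</code></pre>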