Revision 13 (Nico Schottelius, 07/14/2021 09:37 AM) → Revision 14/219 (Nico Schottelius, 07/14/2021 09:53 AM)

h1. The ungleich kubernetes infrastructure 

 {{toc}} 

 h2. Status 

 This document is **pre-production** 

 

 h2. k8s clusters 

 | Cluster | Purpose | Init/Notes | 
 | c0.k8s.ooo | Dev | @kubeadm init --config k8s/c0/kubeadm.yaml --upload-certs@ | 
 | c1 | Dev | undef | 
 | c2.k8s.ooo | Demo/Semiprod | @kubeadm init --config k8s/c2/kubeadm.yaml --upload-certs@ | 
 | p5.k8s.ooo | Prod | Place5 | 
 | p6.k8s.ooo | Prod | Place6 | 
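
 The init commands in the table can be sketched as a fuller bootstrap sequence. The join address, token and hash below are placeholders for the values printed by @kubeadm init@, not real values:

```
# Bootstrap the first control plane node of a new cluster cX
# (config path follows the pattern used in the table above).
kubeadm init --config k8s/cX/kubeadm.yaml --upload-certs

# Join a worker node; address, token and hash are placeholders
# taken from the output of kubeadm init.
kubeadm join [2001:db8::1]:6443 \
    --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```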

 h2. General architecture and components overview 

 * All k8s clusters are IPv6 only 
 * We use BGP peering to propagate the pod and service CIDR networks to our infrastructure 
 * The main / public repository is "ungleich-k8s":https://code.ungleich.ch/ungleich-public/ungleich-k8s 
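
 As an illustration of the BGP setup, a Calico configuration along these lines advertises the service IPs to an upstream router; the ASNs, CIDR and peer address below are placeholders, not our actual values:

```yaml
# Hypothetical BGPConfiguration: advertise the (IPv6-only) service CIDR.
apiVersion: projectcalico.org/v3
kind: BGPConfiguration
metadata:
  name: default
spec:
  asNumber: 65534                 # placeholder cluster ASN
  serviceClusterIPs:
    - cidr: 2001:db8:1::/108      # placeholder service CIDR
---
# Hypothetical BGPPeer pointing at an upstream router.
apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  name: upstream-router
spec:
  peerIP: 2001:db8::1             # placeholder router address
  asNumber: 65533                 # placeholder peer ASN
```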

 h2. ungleich kubernetes infrastructure v1 

 We are using the following components: 

 * "Calico as a CNI":https://www.projectcalico.org/ with BGP, IPv6 only, no encapsulation 
 * "kubernetes-secret-generator":https://github.com/mittwald/kubernetes-secret-generator for creating secrets 
 * "ungleich-certbot":https://hub.docker.com/repository/docker/ungleich/ungleich-certbot to get certificates 
 * "rook with ceph rbd + cephfs":https://rook.io/ for storage 
 ** rbd for almost everything, *ReadWriteOnce* 
 ** cephfs for smaller things, multi access *ReadWriteMany* 
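
 The two access patterns translate into PVCs roughly like the following sketch; the storage class names depend on the rook setup and are assumptions here:

```yaml
# rbd-backed volume, single-node access (class name assumed).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data
spec:
  storageClassName: rook-ceph-block
  accessModes: [ ReadWriteOnce ]
  resources:
    requests:
      storage: 10Gi
---
# cephfs-backed volume, multi-node access (class name assumed).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-assets
spec:
  storageClassName: rook-cephfs
  accessModes: [ ReadWriteMany ]
  resources:
    requests:
      storage: 5Gi
```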

 h3. Persistent storage setup  

 * 3 (development) or 5 (production) ceph monitors 
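
 In a rook-managed cluster the monitor count is set in the CephCluster spec; this excerpt is a sketch using rook's usual names, not our actual manifest. An odd count keeps quorum when a single monitor fails:

```yaml
# Hypothetical excerpt of a rook CephCluster spec.
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  mon:
    count: 3                    # 5 on production clusters
    allowMultiplePerNode: false # spread monitors across nodes
```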

 h3. Cluster types 

 | **Type/Feature** | **Development** | **Production** | 
 | Min No. nodes | 3 (1 master, 3 worker) | 5 (3 master, 3 worker) | 
 | Recommended minimum | 4 (dedicated master, 3 worker) | 8 (3 master, 5 worker) | 
 | Separation of control plane | optional | recommended | 
 | Persistent storage | required | required | 
 | Number of storage monitors | 3 | 5 | 

 h2. Operations 

 h3. Installing a new k8s cluster 

 * Decide on the cluster name (usually *cX.k8s.ooo*), X counting upwards 
 * Use cdist to configure the nodes with prerequisites such as CRI-O 
 * Decide between single or multi node control plane setups (see below) 
 * Setup DNS delegation and glue records: 
 ** kube-dns.kube-system.svc.cX AAAA ... 
 ** kube-dns.kube-system.svc.cX A ... 
 ** cX NS kube-dns.kube-system.svc.cX 
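
 In the parent zone, the delegation and glue records from the list above look roughly like this; the addresses are placeholders (documentation prefixes), not our real ones:

```
; Hypothetical fragment of the k8s.ooo zone for a cluster cX.
cX                           NS    kube-dns.kube-system.svc.cX
kube-dns.kube-system.svc.cX  AAAA  2001:db8::53
kube-dns.kube-system.svc.cX  A     192.0.2.53
```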



 h2. Open Issues / To be discussed 

 * "Maybe add Autoscaling support?":https://github.com/kubernetes-sigs/metrics-server 
 ** https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/ 
 ** https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler/ 
 * Certainly deploy in-cluster monitoring 
 ** "prometheus-operator":https://github.com/prometheus-operator/prometheus-operator CR 
 ** "kube-prometheus":https://github.com/prometheus-operator/kube-prometheus complete example, based on prometheus-operator 
 ** "kubernetes dashboard":https://github.com/kubernetes/dashboard generic cluster overview, basically kubectl for a broswer + graphs 
 ** "kube-prometheus-stack via helm":https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack#kube-prometheus-stack 
 *** Looks most fitting, testing it in #9468 
 * Matrix/Notification bot 
 ** Informing about changes in the cluster