h1. Uptime objectives

{{toc}}

h2. Uptime definitions

|_. Uptime % |_. Downtime / year |
| 99 | 87.6h or 3.65 days |
| 99.9 | 8.76h |
| 99.99 | 0.876h or 52.56 minutes |
| 99.999 | 5.26 minutes |
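
These figures follow directly from an 8760-hour (365-day) year. A minimal sketch to reproduce the table:

<pre><code class="python">
# Allowed downtime per year for a given uptime percentage,
# assuming a 365-day (8760-hour) year.
HOURS_PER_YEAR = 365 * 24

def downtime_per_year(uptime_percent):
    """Return the allowed downtime in hours per year."""
    return HOURS_PER_YEAR * (1 - uptime_percent / 100)

for p in (99, 99.9, 99.99, 99.999):
    h = downtime_per_year(p)
    print("%g%% -> %.3fh (%.2f minutes)" % (p, h, h * 60))
</code></pre>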
13
14 1 Nico Schottelius
h2. Power Supply
15
16
* What: Power supply to all systems
* Setup:
** Core systems are connected to UPSes that last between 7 and 30 minutes
** Virtualisation systems are not (yet) fully connected to UPS (to be finished 2019-07)
* Uptime objective
** Prior to full UPS installation: <= 24h downtime/year (within 99%)
** After UPS installation: 99.9%
*** Probably better than this, as most power outages last under one minute (see the estimate sketch after this list)
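
A rough way to estimate the residual downtime once everything is on UPS; the outage profile below is purely hypothetical (assumed values, not measured data):

<pre><code class="python">
# Rough estimate of residual downtime once all systems are on UPS.
# The outage profile is purely hypothetical (NOT measured data):
# short outages are bridged by the UPS, only the excess duration of
# long outages counts as downtime.
UPS_BRIDGE_MIN = 7  # worst-case UPS runtime in minutes (see setup)

# (duration in minutes, occurrences per year) -- assumed values
outages = [(0.5, 10), (2, 2), (45, 1)]

residual = sum(max(0, d - UPS_BRIDGE_MIN) * n for d, n in outages)
uptime = 100 * (1 - residual / (365 * 24 * 60))
print("residual: %.1f min/year -> %.4f%% uptime" % (residual, uptime))
</code></pre>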

h2. L2 Internal Network

* What: The connection between servers, routers and switches
* Setup: All systems are connected twice internally, usually via fiber
* Expected outages
** Single switch outage: no outage, possibly brief packet loss (LACP link failure detection can take a few seconds; see the bond-status sketch after this list)
** Double switch outage: full outage, requires manual replacement
* Uptime objectives
** From 2019: >= 99.999%
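
A minimal sketch for checking the state of such a dual link on Linux, assuming the standard bonding driver and a device named @bond0@ (the actual interface naming may differ):

<pre><code class="python">
# Minimal sketch: report link state of a Linux LACP bond by parsing
# the bonding proc file. The device name "bond0" is an assumption.
from pathlib import Path

def bond_links(bond="bond0"):
    """Return (links up, links total) for the bonding device."""
    text = Path("/proc/net/bonding/" + bond).read_text()
    # The first "MII Status" line is the bond itself; the rest are
    # the slave (per-switch) links.
    states = [line.split(":", 1)[1].strip()
              for line in text.splitlines()
              if line.startswith("MII Status")][1:]
    return sum(s == "up" for s in states), len(states)

up, total = bond_links()
print("%d/%d links up" % (up, total))  # 1/2 == single-switch outage
</code></pre>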

h2. L2 External Network

* What: the network between the different locations
* Setup:
** Provided by local (electricity) companies
** No additional active equipment / same as internal network
* Expected outages
** One outage in 2018, which could be bridged via WiFi
** If an outage happens, it is long (typically a cable cut by excavation work)
** However, such outages are very rare
** Mid-term, geo-redundant lines are planned
** Geo redundancy might be achieved starting 2020
* Uptime objectives
** 2019: >= 99.99%
** 2020: >= 99.999%
** 2021: >= 99.999%

h2. L3 External Network

* What: the external (uplink) networks
* Setup
** Currently 2 uplinks
** Soon: 2 individual uplinks plus a third central uplink
* Expected outages
** BGP support was added in 2019
** Outage simulations are still pending (see the sketch after this list)
* Uptime objectives
** 2019: >= 99.99%
** 2020: >= 99.999%
** 2021: >= 99.999%
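
A hedged sketch of what such an outage simulation could look like; the interface name @uplink0@, the ping target and the convergence delay are placeholders, not the actual test procedure:

<pre><code class="python">
# Sketch of an uplink outage simulation (requires root): take one
# uplink down, wait for routing to converge, verify reachability,
# then restore the link. "uplink0" and 9.9.9.9 are placeholders.
import subprocess, time

def ok(*cmd):
    return subprocess.run(cmd, capture_output=True).returncode == 0

assert ok("ping", "-c", "3", "9.9.9.9"), "no reachability before test"
ok("ip", "link", "set", "dev", "uplink0", "down")
time.sleep(30)  # allow routes to be withdrawn and traffic to reroute
reachable = ok("ping", "-c", "3", "9.9.9.9")
ok("ip", "link", "set", "dev", "uplink0", "up")
print("failover OK" if reachable else "failover FAILED")
</code></pre>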

h2. Routers

* What: the central routers
* Setup
** Two routers running Linux with keepalived
** Both routers are rebooted periodically -> an outage during that window would be critical, but is unlikely (see the pre-reboot check sketch after this list)
** Routers are connected to UPS
** Routers are running RAID1
* Expected outages
** Machines are rather reliable
** If one machine has to be replaced, the replacement can be prepared while the other router is active
** Rare events: since 2017 there has been no router-related downtime
* Uptime objectives
** 2019: >= 99.99%
** 2020: >= 99.999%
** 2021: >= 99.999%
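
A minimal sketch of a safety check before a periodic reboot, assuming the keepalived peer is reachable at a hypothetical address @10.0.0.2@ (the VRRP virtual IP then stays on the surviving router):

<pre><code class="python">
# Sketch: refuse to reboot this router unless the keepalived peer
# answers pings, so the virtual IP always has a machine to fail
# over to. The peer address 10.0.0.2 is a placeholder.
import subprocess, sys

PEER = "10.0.0.2"

def peer_alive():
    return subprocess.run(["ping", "-c", "3", "-W", "1", PEER],
                          capture_output=True).returncode == 0

if not peer_alive():
    sys.exit("peer router unreachable -- refusing to reboot")
subprocess.run(["reboot"])
</code></pre>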

h2. VMs on servers

* What: Servers host VMs; in case of a hardware defect, VMs need to be restarted on a different server
* Setup:
** Servers have dual power connections
** Servers are used (second-hand) hardware
** Servers are being monitored (Prometheus + Consul)
** It is not yet clear how to detect servers that are about to fail
** So far 3 servers have been affected (out of about 30)
** Restarting a VM takes a couple of seconds, as its data is distributed in Ceph
** Detection is not yet reliably automated -> needs to be finished in 2019 (see the sketch after this list)
* Expected outages
** At the moment servers "run until they die"
** In the future servers should be periodically rebooted to detect broken hardware (live migrations enable this)
** While a server downtime affects all of its VMs (up to 100 per server), it is a rare event
* Uptime objectives (per VM)
** 2019: >= 99.99%
** 2020: >= 99.999%
** 2021: >= 99.999%
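
A minimal sketch of what the automated detection could build on, querying the Prometheus HTTP API for targets whose @up@ metric is 0; the Prometheus address is a placeholder and this is not the finished automation:

<pre><code class="python">
# Sketch: list servers whose Prometheus "up" metric is 0, i.e.
# candidates for "host died, restart its VMs elsewhere".
# The Prometheus URL is a placeholder, not the real endpoint.
import json, urllib.parse, urllib.request

PROM = "http://prometheus.example.com:9090"

query = urllib.parse.urlencode({"query": "up == 0"})
with urllib.request.urlopen(PROM + "/api/v1/query?" + query) as resp:
    down = json.load(resp)["data"]["result"]

for series in down:
    instance = series["metric"].get("instance", "?")
    print("down:", instance, "-> restart its VMs on another server")
</code></pre>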

h2. Storage backends

* What: the Ceph storage that contains the data of VMs and services
* Setup
** A disk image is striped into 4MB blocks
** Each block is saved 3x
* Expected outages
** Downtime only happens if 3 disks fail within a short time window
** A single disk failure instantly triggers re-replication
** Disks (HDD, SSD) range from 600GB to 10TB
** The slow-case rebuild speed is around 200MB/s
** Thus the slowest rebuild window is 14.56h (see the calculation after this list)
* Uptime objectives (per image)
** 2019: >= 99.999%
** 2020: >= 99.999%
** 2021: >= 99.999%
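
The 14.56h figure follows from rebuilding the largest disk at the slow-case rate, using binary units:

<pre><code class="python">
# Reproduces the worst-case rebuild window above: re-replicating
# the largest disk (10TB, binary units) at 200MB/s.
TiB = 1024 ** 4
MiB = 1024 ** 2

disk_bytes = 10 * TiB
rate = 200 * MiB  # bytes per second, slow-case rebuild speed
hours = disk_bytes / rate / 3600
print("worst-case rebuild window: %.2fh" % hours)  # -> 14.56h
</code></pre>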