h1. The ungleich ceph handbook

{{toc}}

h2. Status

This document is **IN PRODUCTION**.

h2. Introduction

This article describes the ungleich storage architecture that is based on ceph. It describes our architecture as well as maintenance commands. Required for

h2. Communication guide

Usually when a disk fails, no customer communication is necessary, as the failure is automatically compensated/rebalanced by ceph. However, if multiple disk failures happen at the same time, I/O speed might be reduced and thus customer experience impacted.

For this reason, communicate with customers whenever I/O recovery settings are temporarily tuned.

h2. Adding a new disk/ssd to the ceph cluster

h3. For Dell servers

First find the disk, then add it to the operating system:

<pre>
megacli -PDList -aALL  | grep -B16 -i unconfigur

# Sample output:
[19:46:50] server7.place6:~#  megacli -PDList -aALL  | grep -B16 -i unconfigur
Enclosure Device ID: N/A
Slot Number: 0
Enclosure position: N/A
Device Id: 0
WWN: 0000000000000000
Sequence Number: 1
Media Error Count: 0
Other Error Count: 0
Predictive Failure Count: 0
Last Predictive Failure Event Seq Number: 0
PD Type: SATA

Raw Size: 894.252 GB [0x6fc81ab0 Sectors]
Non Coerced Size: 893.752 GB [0x6fb81ab0 Sectors]
Coerced Size: 893.75 GB [0x6fb80000 Sectors]
Sector Size:  0
Firmware state: Unconfigured(good), Spun Up
</pre>

Then add the disk to the OS:

<pre>
megacli -CfgLdAdd -r0 [enclosure:slot] -aX

# Sample call, if enclosure and slot are KNOWN (aka not N/A)
megacli -CfgLdAdd -r0 [32:0] -a0

# Sample call, if enclosure is N/A
megacli -CfgLdAdd -r0 [:0] -a0
</pre>

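After the logical drive has been created, the new block device should show up in the operating system. A minimal sketch to verify this (the device name /dev/sdX is only an example and depends on the server):

<pre>
# List block devices and check the kernel log for the newly attached drive
lsblk
dmesg | tail

# The new logical drive typically appears as an additional /dev/sdX device
</pre>
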
h2. Moving a disk/ssd to another server

(needs to be described better)

Generally speaking (a rough command sketch follows this list):

* Stop the OSD and remove the monit configuration on the server you want to take the disk out of
* umount the disk
* Take the disk out
* Discard the preserved cache on the server you took it out of
* Insert the disk into the new server
* Clear the foreign configuration (megacli)
* The disk will now appear in the OS, ceph/udev will automatically start the OSD
* Create the monit configuration file so that monit watches the OSD
* Reload monit

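A rough sketch of the matching commands, assuming the OSD id is XX; the monit service name and config path below are assumptions and may differ per host, so verify them before use:

<pre>
# On the old server: stop the OSD and remove it from monit supervision
monit stop ceph-osd-XX               # service name as defined in the monit config (assumed)
rm /etc/monit/conf.d/ceph-osd-XX     # monit config path is an assumption
monit reload

# umount the OSD filesystem, then pull the disk
umount /var/lib/ceph/osd/ceph-XX

# Still on the old server: discard the preserved controller cache of the removed drive
megacli -GetPreservedCacheList -a0
megacli -DiscardPreservedCache -L<lsid from the list above> -a0

# On the new server, after inserting the disk: clear the foreign RAID configuration
megacli -CfgForeign -Scan -a0
megacli -CfgForeign -Clear -a0

# ceph/udev should start the OSD automatically; then add the monit config and reload
monit reload
</pre>
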
h2. Removing a disk/ssd

To permanently remove a failed disk from a cluster, use ***ceph-osd-stop-remove-permanently*** from the ungleich-tools repo. Warning: if the disk is still active, the OSD will be shut down AND removed from the cluster -> all data of that disk will need to be rebalanced.

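Example invocation (the osd id 23 is only an example; run the script on the host that holds the disk):

<pre>
/opt/ungleich-tools/ceph-osd-stop-remove-permanently 23

# Afterwards the cluster rebalances; watch the progress from any ceph monitor
ceph -s
</pre>
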
h2. Handling DOWN osds with filesystem errors

If an email arrives with the subject "monit alert -- Does not exist osd.XX-whoami", the filesystem of an OSD cannot be read anymore. It is very likely that the disk / ssd is broken. Steps that need to be done (see the command sketch after this list):

* Log in to any ceph monitor (cephX.placeY.ungleich.ch)
* Check **ceph -s**, find the host using **ceph osd tree**
* Log in to the affected host
* Run the following commands:
** ls /var/lib/ceph/osd/ceph-XX
** dmesg
* Create a new ticket in the datacenter light project
** Subject: "Replace broken OSD.XX on serverX.placeY.ungleich.ch"
** Add (partial) output of the above commands
** Use /opt/ungleich-tools/ceph-osd-stop-remove-permanently XX, where XX is the osd id, to remove the disk from the cluster
** Remove the physical disk from the host, check whether there is warranty on it and if yes
*** Create a short letter to the vendor, including the technical details from above
*** Record when you sent it in
*** Put the ticket into status waiting
** If there is no warranty, dispose of it

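The diagnostic part of the above as a consolidated sketch (XX stands for the osd id from the monit alert):

<pre>
# On a ceph monitor: overall state and locating the affected OSD/host
ceph -s
ceph osd tree | grep -i down

# On the affected host: check whether the OSD filesystem is still readable
ls /var/lib/ceph/osd/ceph-XX
dmesg | tail

# Once the ticket is created, remove the OSD permanently
/opt/ungleich-tools/ceph-osd-stop-remove-permanently XX
</pre>
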
h2. Change ceph speed for i/o recovery

By default we want to keep I/O recovery traffic low to not impact customer experience. However, when multiple disks fail at the same time, we might want to prioritise recovery for data safety over performance.

The default configuration on our servers contains:

<pre>
[osd]
osd max backfills = 1
osd recovery max active = 1
osd recovery op priority = 2
</pre>

The important settings are *osd max backfills* and *osd recovery max active*; the priority is always kept low so that regular I/O has priority.

To adjust the number of backfills *per osd* and to change the *number of threads* used for recovery, we can run the following on any node with the admin keyring:

<pre>
ceph tell osd.* injectargs '--osd-max-backfills Y'
ceph tell osd.* injectargs '--osd-recovery-max-active X'
</pre>

where Y and X are the values that we want to use. Experience shows that Y=5 and X=5 roughly doubles to triples the recovery performance, whereas Y=10 and X=10 increases recovery performance about 5 times.

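For example, to temporarily speed up recovery with the Y=5 / X=5 values mentioned above, and to revert to our defaults afterwards:

<pre>
# Speed up recovery (roughly 2-3x according to the experience above)
ceph tell osd.* injectargs '--osd-max-backfills 5'
ceph tell osd.* injectargs '--osd-recovery-max-active 5'

# When recovery has finished, go back to the defaults from the [osd] section
ceph tell osd.* injectargs '--osd-max-backfills 1'
ceph tell osd.* injectargs '--osd-recovery-max-active 1'
</pre>
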
h2. Debug scrub errors / inconsistent pg message

From time to time disks don't save what they are told to save. Ceph scrubbing detects these errors and the cluster switches to HEALTH_ERR. Use *ceph health detail* to find out which placement groups (*pgs*) are affected. Usually a ***ceph pg repair <number>*** fixes the problem.

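A short sketch of the usual sequence (the pg id 2.7d is only an example, use the one reported by ceph):

<pre>
# Find the inconsistent placement groups
ceph health detail | grep -i inconsistent

# Repair the affected pg (example id)
ceph pg repair 2.7d
</pre>
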
If this does not help, consult https://ceph.com/geen-categorie/ceph-manually-repair-object/.