
The ungleich ceph handbook » History » Version 20

Nico Schottelius, 02/26/2019 02:17 PM

h1. The ungleich ceph handbook

{{toc}}

h2. Status

This document is **IN PRODUCTION**.

h2. Introduction

This article describes the ungleich storage architecture that is based on ceph. It describes our architecture as well as maintenance commands.

h2. Communication guide

Usually no customer communication is necessary when a disk fails, as ceph automatically compensates for the failure and rebalances the data. However, if multiple disks fail at the same time, I/O speed might be reduced and thus customer experience impacted.

For this reason, communicate with customers whenever I/O recovery settings are temporarily tuned.

h2. Analysing

h3. ceph osd df tree

Using @ceph osd df tree@ you can see not only the disk usage per OSD, but also the number of PGs on each OSD. This is especially useful for seeing how well the OSDs are balanced.
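
As a quick sketch, the per-OSD PG counts can be pulled out of that output and sorted. This assumes the OSD name is the last column and PGS the second-to-last column of @ceph osd df tree@ (true for the ceph generation in use here, but verify on your version):

```shell
# List "osd.N <pg count>", sorted by PG count (lowest first).
# Assumes the osd name is the last field and PGS the second-to-last
# field of "ceph osd df tree" output -- check your ceph version.
pgs_per_osd() {
    awk '$NF ~ /^osd\./ { print $NF, $(NF-1) }' | sort -k2 -n
}

# Usage: ceph osd df tree | pgs_per_osd
```

OSDs at the top of the list are candidates for receiving more PGs when rebalancing.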

h2. Adding a new disk/ssd to the ceph cluster

h3. For Dell servers

First find the disk and then add it to the operating system.

<pre>
megacli -PDList -aALL  | grep -B16 -i unconfigur

# Sample output:
[19:46:50] server7.place6:~#  megacli -PDList -aALL  | grep -B16 -i unconfigur
Enclosure Device ID: N/A
Slot Number: 0
Enclosure position: N/A
Device Id: 0
WWN: 0000000000000000
Sequence Number: 1
Media Error Count: 0
Other Error Count: 0
Predictive Failure Count: 0
Last Predictive Failure Event Seq Number: 0
PD Type: SATA

Raw Size: 894.252 GB [0x6fc81ab0 Sectors]
Non Coerced Size: 893.752 GB [0x6fb81ab0 Sectors]
Coerced Size: 893.75 GB [0x6fb80000 Sectors]
Sector Size:  0
Firmware state: Unconfigured(good), Spun Up
</pre>

Then add the disk to the OS:

<pre>
megacli -CfgLdAdd -r0 [enclosure position:slot] -aX # X = adapter number (host is 0, marray is 1)

# Sample call, if enclosure and slot are KNOWN (aka not N/A)
megacli -CfgLdAdd -r0 [32:0] -a0

# Sample call, if enclosure is N/A
megacli -CfgLdAdd -r0 [:0] -a0
</pre>
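
The enclosure handling can be wrapped in a small helper. This is a hypothetical sketch (the function is not part of ungleich-tools); it only builds and prints the command, so the output can be reviewed before running it:

```shell
# Hypothetical helper: build the megacli call for creating a RAID-0
# logical drive from one disk. An unknown ("N/A") enclosure produces
# the "[:slot]" form shown above.
megacli_add_cmd() {
    enclosure="$1"; slot="$2"; adapter="${3:-0}"
    if [ -z "$enclosure" ] || [ "$enclosure" = "N/A" ]; then
        echo "megacli -CfgLdAdd -r0 [:${slot}] -a${adapter}"
    else
        echo "megacli -CfgLdAdd -r0 [${enclosure}:${slot}] -a${adapter}"
    fi
}

# Usage: megacli_add_cmd N/A 0      # enclosure unknown, slot 0
#        megacli_add_cmd 32 0      # enclosure 32, slot 0
```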

h2. Moving a disk/ssd to another server

(needs to be described better)

Generally speaking:

* /opt/ungleich-tools/ceph-osd-stop-disable does the following:
** Stops the OSD and removes the monit configuration on the server you want to take it out of
** Unmounts the disk
* Take the disk out
* Discard the preserved cache on the server you took it out of
** using megacli
* Insert it into the new server
* Clear the foreign configuration
** using megacli
* The disk will now appear in the OS, and ceph/udev will automatically start the OSD (!)
** No manual creation of the OSD is required!
* Verify that the disk exists and that the OSD is started
** using *ps aux*
** using *ceph osd tree*
* */opt/ungleich-tools/monit-ceph-create-start osd.XX* # where XX is the osd number
** Creates the monit configuration file so that monit watches the OSD
** Reloads monit
* Verify monit using *monit status*
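
The two megacli steps above can be sketched as follows. The option names are real MegaCli options, but exact flags vary between MegaCli versions, so treat this as a hedged sketch; by default the function only prints the commands for review:

```shell
# Print (default) or execute (pass "" as first argument) the megacli
# steps used when moving a disk between servers.
# Flags may differ per MegaCli version -- verify before running.
megacli_move_disk_steps() {
    run="${1:-echo}"
    # on the old server, after pulling the disk: drop the preserved cache
    $run megacli -GetPreservedCacheList -aAll
    $run megacli -DiscardPreservedCache -Lall -aAll
    # on the new server, after inserting the disk: clear foreign config
    $run megacli -CfgForeign -Scan -aAll
    $run megacli -CfgForeign -Clear -aAll
}

# Usage: megacli_move_disk_steps        # review the commands
```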

h2. Removing a disk/ssd

To permanently remove a failed disk from a cluster, use ***ceph-osd-stop-remove-permanently*** from the ungleich-tools repo. Warning: if the disk is still active, the OSD will be shut down AND removed from the cluster -> all data of that disk will need to be rebalanced.

h2. Handling DOWN osds with filesystem errors

If an email arrives with the subject "monit alert -- Does not exist osd.XX-whoami", the filesystem of an OSD cannot be read anymore. It is highly likely that the disk/ssd is broken. Steps that need to be done:

* Log in to any ceph monitor (cephX.placeY.ungleich.ch)
* Check **ceph -s**, find the host using **ceph osd tree**
* Log in to the affected host
* Run the following commands:
** ls /var/lib/ceph/osd/ceph-XX
** dmesg
* Create a new ticket in the datacenter light project
** Subject: "Replace broken OSD.XX on serverX.placeY.ungleich.ch"
** Add (partial) output of the above commands
** Use /opt/ungleich-tools/ceph-osd-stop-remove-permanently XX, where XX is the osd id, to remove the disk from the cluster
** Remove the physical disk from the host, check whether it is still under warranty, and if so:
*** Write a short letter to the vendor, including the technical details from above
*** Record when you sent it in
*** Put the ticket into status waiting
** If there is no warranty, dispose of it
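
The diagnostic output for the ticket can be collected in one go. This is a hedged sketch (the function name and output format are hypothetical, not an ungleich-tools script):

```shell
# Collect the diagnostic output for the replacement ticket.
# Argument: the osd id, e.g. "12".
osd_diag() {
    id="$1"
    echo "== ls /var/lib/ceph/osd/ceph-${id} =="
    # an I/O error here is exactly the evidence the ticket needs
    ls "/var/lib/ceph/osd/ceph-${id}" 2>&1 || true
    echo "== recent kernel messages =="
    dmesg 2>/dev/null | tail -n 50
}

# Usage: osd_diag 12 > /tmp/osd-12-diag.txt
```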

h2. Change ceph speed for i/o recovery

By default we want to keep I/O recovery traffic low to not impact customer experience. However, when multiple disks fail at the same time, we might want to prioritise recovery for data safety over performance.

The default configuration on our servers contains:

<pre>
[osd]
osd max backfills = 1
osd recovery max active = 1
osd recovery op priority = 2
</pre>

The important settings are *osd max backfills* and *osd recovery max active*; the priority is always kept low so that regular I/O has priority.

To adjust the number of backfills *per osd* and to change the *number of threads* used for recovery, we can run the following on any node with the admin keyring:

<pre>
ceph tell osd.* injectargs '--osd-max-backfills Y'
ceph tell osd.* injectargs '--osd-recovery-max-active X'
</pre>

where Y and X are the values that we want to use. Experience shows that Y=5 and X=5 doubles to triples the recovery performance, whereas X=10 and Y=10 increases recovery performance 5 times.
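
The two injectargs calls can be wrapped in a small helper (hypothetical, not an existing ungleich tool). By default it only prints the commands so they can be reviewed; pass @run@ as the second argument to execute them:

```shell
# Print (default) or run the injectargs calls that change recovery
# speed; the same value is used for backfills and recovery ops.
set_recovery_speed() {
    value="$1"; mode="${2:-print}"
    for opt in osd-max-backfills osd-recovery-max-active; do
        cmd="ceph tell osd.* injectargs '--${opt} ${value}'"
        if [ "$mode" = "run" ]; then
            eval "$cmd"
        else
            echo "$cmd"
        fi
    done
}

# Usage: set_recovery_speed 5        # print what would be run
#        set_recovery_speed 1 run    # execute: back to the defaults
```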

h2. Debug scrub errors / inconsistent pg message

From time to time disks don't save what they are told to save. Ceph scrubbing detects these errors and switches to HEALTH_ERR. Use *ceph health detail* to find out which placement groups (*pgs*) are affected. Usually a ***ceph pg repair <number>*** fixes the problem.

If this does not help, consult https://ceph.com/geen-categorie/ceph-manually-repair-object/.
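
When several pgs are inconsistent at once, the repair commands can be generated from the health output. A hedged sketch, assuming @ceph health detail@ reports lines of the form "pg 2.7 is active+clean+inconsistent, ..." (true for this ceph generation):

```shell
# Print a "ceph pg repair" command for every inconsistent pg reported
# by "ceph health detail". Review the list before running any of them.
repair_cmds() {
    awk '$1 == "pg" && /inconsistent/ { print "ceph pg repair " $2 }'
}

# Usage: ceph health detail | repair_cmds
```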

h2. Move servers into the osd tree

New servers have their buckets placed outside the **default root** and thus need to be moved inside.
Output might look as follows:

<pre>
[11:19:27] server5.place6:~# ceph osd tree
ID  CLASS   WEIGHT    TYPE NAME        STATUS REWEIGHT PRI-AFF 
 -3           0.87270 host server5                             
 41     ssd   0.87270     osd.41           up  1.00000 1.00000 
 -1         251.85580 root default                             
 -7          81.56271     host server2                         
  0 hdd-big   9.09511         osd.0        up  1.00000 1.00000 
  5 hdd-big   9.09511         osd.5        up  1.00000 1.00000 
...
</pre>

Use **ceph osd crush move serverX root=default** (where serverX is the new server),
which will move the bucket into the right place:

<pre>
[11:21:17] server5.place6:~# ceph osd crush move server5 root=default
moved item id -3 name 'server5' to location {root=default} in crush map
[11:32:12] server5.place6:~# ceph osd tree
ID  CLASS   WEIGHT    TYPE NAME        STATUS REWEIGHT PRI-AFF 
 -1         252.72850 root default                             
...
 -3           0.87270     host server5                         
 41     ssd   0.87270         osd.41       up  1.00000 1.00000 
</pre>

h2. How to fix existing osds with wrong partition layout

In the first version of DCL we used a filestore-based layout with 3 partitions.
In the second version of DCL, including OSD autodetection, we use a bluestore-based layout with 2 partitions.

To convert, we delete the old OSD, clean the partitions and create a new OSD:

h3. Inactive OSD

If the OSD is *not active*, we can do the following:

* Find the OSD number: mount the partition and find the whoami file

<pre>
root@server2:/opt/ungleich-tools# mount /dev/sda2 /mnt/
root@server2:/opt/ungleich-tools# cat /mnt/whoami 
0
root@server2:/opt/ungleich-tools# umount  /mnt/
</pre>

* Verify in the *ceph osd tree* output that the OSD is on that server
* Delete the OSD:
** ceph osd crush remove $osd_name
** ceph osd rm $osd_name

Then continue below as described in "Recreating the OSD".

h3. Remove Active OSD

* Use /opt/ungleich-tools/ceph-osd-stop-remove-permanently OSDID to stop and remove the OSD
* Then continue below as described in "Recreating the OSD".

h3. Recreating the OSD

* Create an empty partition table
** fdisk /dev/sdX
** g
** w
* Create a new OSD
** /opt/ungleich-tools/ceph-osd-create-start /dev/sdX CLASS # use hdd, ssd, ... for the CLASS
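
The interactive fdisk sequence above (@g@ creates a new empty GPT table, @w@ writes it) can be scripted. A sketch; note that piping these keystrokes into fdisk irreversibly destroys the partition table of the given disk:

```shell
# Emit the fdisk keystrokes for "new empty GPT table" (g) and "write" (w).
wipe_keystrokes() {
    printf 'g\nw\n'
}

# Usage (DANGER, wipes the partition table of /dev/sdX):
#   wipe_keystrokes | fdisk /dev/sdX
#   /opt/ungleich-tools/ceph-osd-create-start /dev/sdX hdd
```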

h2. How to fix unfound pg

Refer to https://redmine.ungleich.ch/issues/6388

* Check the health state
** ceph health detail
* Check which server has that osd
** ceph osd tree
* Check which VMs are running on that server
** virsh list
* Check the pg map
** ceph osd map [osd pool] [VMID]
* Revert the pg
** ceph pg [PGID] mark_unfound_lost revert
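
Since reverting discards unfound objects, it is worth building and double-checking the command before running it. A hypothetical helper (not an existing tool) that rejects obviously malformed pg ids:

```shell
# Hypothetical helper: build the revert command for a pg id such as "2.1f".
# Only run the printed command after confirming the pg in "ceph health detail".
revert_cmd() {
    case "$1" in
        *.*) echo "ceph pg $1 mark_unfound_lost revert" ;;
        *)   echo "invalid pg id: $1" >&2; return 1 ;;
    esac
}

# Usage: revert_cmd 2.1f
```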