Nico Schottelius, 10/23/2018 07:53 PM

The ungleich ceph handbook


This document is WORK IN PROGRESS.


This article describes the ungleich storage architecture, which is based on ceph, as well as the maintenance commands we use. Required for

Communication guide

Usually no customer communication is necessary when a disk fails, as ceph automatically compensates for the failure and rebalances the data. However, if multiple disks fail at the same time, I/O speed might be reduced and thus customer experience impacted.

For this reason, communicate with customers whenever I/O recovery settings are temporarily tuned.

Adding a new disk/ssd to the ceph cluster

For Dell servers

First find the disk, then add it to the operating system:

megacli -PDList -aALL  | grep -B16 -i unconfigur

# Sample output:
[19:46:50] server7.place6:~#  megacli -PDList -aALL  | grep -B16 -i unconfigur
Enclosure Device ID: N/A
Slot Number: 0
Enclosure position: N/A
Device Id: 0
WWN: 0000000000000000
Sequence Number: 1
Media Error Count: 0
Other Error Count: 0
Predictive Failure Count: 0
Last Predictive Failure Event Seq Number: 0

Raw Size: 894.252 GB [0x6fc81ab0 Sectors]
Non Coerced Size: 893.752 GB [0x6fb81ab0 Sectors]
Coerced Size: 893.75 GB [0x6fb80000 Sectors]
Sector Size:  0
Firmware state: Unconfigured(good), Spun Up
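When several disks are attached, scanning the output by eye is error-prone. A small helper (hypothetical, not part of our tooling) can pull the slot numbers of all Unconfigured(good) disks out of the megacli output format shown above:

```shell
# Hypothetical helper: print the slot number of every disk whose
# firmware state is Unconfigured(good), given megacli -PDList output
# on stdin (format as in the sample above).
unconfigured_slots() {
    awk '/^Slot Number:/ { slot = $3 }
         /^Firmware state: Unconfigured\(good\)/ { print slot }'
}

# megacli -PDList -aALL | unconfigured_slots
```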

Then add the disk to the OS:

megacli -CfgLdAdd -r0 [enclosure:slot] -aX

# Sample call, if enclosure and slot are KNOWN (aka not N/A)
megacli -CfgLdAdd -r0 [32:0] -a0

# Sample call, if enclosure is N/A
megacli -CfgLdAdd -r0 [:0] -a0
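The enclosure handling above can be wrapped in a small function. This is a hypothetical sketch, not part of our tooling, that builds the correct megacli call for both the known-enclosure and the N/A case:

```shell
# Hypothetical helper: build the megacli call for a given enclosure,
# slot and adapter, handling the "enclosure is N/A" case shown above.
mk_cfgldadd() {
    local enclosure="$1" slot="$2" adapter="$3"
    if [ "$enclosure" = "N/A" ]; then
        echo "megacli -CfgLdAdd -r0 [:${slot}] -a${adapter}"
    else
        echo "megacli -CfgLdAdd -r0 [${enclosure}:${slot}] -a${adapter}"
    fi
}

# mk_cfgldadd 32 0 0    # -> megacli -CfgLdAdd -r0 [32:0] -a0
# mk_cfgldadd N/A 0 0   # -> megacli -CfgLdAdd -r0 [:0] -a0
```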

Moving a disk/ssd to another server

Removing a disk/ssd

Handling DOWN osds with filesystem errors

If an email arrives with the subject "monit alert -- Does not exist osd.XX-whoami", the filesystem of an OSD can no longer be read. It is highly likely that the disk / ssd is broken. Steps that need to be done:

  • Login to any ceph monitor
  • Check ceph -s and find the affected host using ceph osd tree
  • Login to the affected host
  • Run the following commands:
    • ls /var/lib/ceph/osd/ceph-XX
    • dmesg
  • Create a new ticket in the datacenter light project
    • Subject: "Replace broken OSD.XX on"
    • Add (partial) output of above commands
    • Use /opt/ungleich-tools/ceph-osd-stop-remove-permanently XX, where XX is the osd id, to remove the disk from the cluster
    • Remove the physical disk from the host and check whether there is warranty on it; if yes:
      • Create a short letter to the vendor, including the technical details from above
      • Record when you sent it in
      • Put ticket into status waiting
    • If there is no warranty, dispose of it
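The monit subject already carries the OSD id. A hypothetical helper (not part of /opt/ungleich-tools) could extract it, so the right id is passed on to ceph-osd-stop-remove-permanently:

```shell
# Hypothetical helper: extract the OSD id from a monit alert subject
# such as "monit alert -- Does not exist osd.23-whoami".
osd_id_from_subject() {
    echo "$1" | sed -n 's/.*osd\.\([0-9][0-9]*\)-whoami.*/\1/p'
}

# id="$(osd_id_from_subject 'monit alert -- Does not exist osd.23-whoami')"
# /opt/ungleich-tools/ceph-osd-stop-remove-permanently "$id"
```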

Change ceph speed for i/o recovery

By default we want to keep I/O recovery traffic low so as not to impact customer experience. However, when multiple disks fail at the same time, we might want to prioritise recovery for data safety over performance.

The default configuration on our servers contains:

osd max backfills = 1
osd recovery max active = 1
osd recovery op priority = 2

The important settings are osd max backfills and osd recovery max active; the priority is always kept low so that regular I/O has priority.

To adjust the number of backfills per osd and the number of threads used for recovery, we can run the following on any node with the admin keyring:

ceph tell osd.* injectargs '--osd-max-backfills Y'
ceph tell osd.* injectargs '--osd-recovery-max-active X'

where Y and X are the values that we want to use. Experience shows that Y=5 and X=5 doubles or triples the recovery performance, whereas Y=10 and X=10 increases recovery performance about 5 times.
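To avoid typing both injectargs calls each time, the two knobs can be wrapped in one function; a hypothetical sketch, assuming the injectargs syntax above:

```shell
# Hypothetical wrapper: set both recovery knobs on all OSDs at once.
# Assumes the admin keyring is present on this node, as above.
set_recovery_speed() {
    local backfills="$1" active="$2"
    ceph tell 'osd.*' injectargs "--osd-max-backfills ${backfills}"
    ceph tell 'osd.*' injectargs "--osd-recovery-max-active ${active}"
}

# set_recovery_speed 5 5   # faster recovery during multi-disk failure
# set_recovery_speed 1 1   # back to our defaults once recovery is done
```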

Debug scrub errors / inconsistent pg message

From time to time disks don't store what they were told to store. Ceph scrubbing detects these errors and the cluster switches to HEALTH_ERR. Use ceph health detail to find out which placement groups (pgs) are affected. Usually ceph pg repair <number> fixes the problem.
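A hypothetical sketch of that workflow, assuming the usual "pg <id> is ... inconsistent" lines in the ceph health detail output:

```shell
# Hypothetical helper: print the ids of inconsistent pgs, given
# 'ceph health detail' output on stdin.
inconsistent_pgs() {
    sed -n 's/^ *pg \([^ ]*\) is .*inconsistent.*/\1/p'
}

# ceph health detail | inconsistent_pgs | while read -r pg; do
#     ceph pg repair "$pg"
# done
```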

If this does not help, consult
