h1. Create new pool and place new osd

create a pool (manual):
https://docs.ceph.com/docs/jewel/rados/operations/pools/#create-a-pool

<pre>
ceph osd pool create {pool-name} {pg-num} [{pgp-num}] [replicated] \
     [crush-ruleset-name] [expected-num-objects]
</pre>

5~10 OSDs  --> pg_num 512
10~50 OSDs --> pg_num 4096
more than 50 OSDs --> calculate it yourself with pgcalc (rough formula below)

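What pgcalc does boils down to roughly this rule of thumb (the OSD count, the ~100 target PGs per OSD and the replica size below are assumed example numbers, not values from this cluster):

<pre>
# pg_num ~= (num_osds * target_pgs_per_osd) / replica_count, rounded to the nearest power of two
# example with assumed values: 6 OSDs, ~100 PGs per OSD, size 3
#   6 * 100 / 3 = 200  -->  round up to 256
</pre>
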
<pre>
ceph osd pool create xruk-ssd-pool 128 128
</pre>

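To double-check that the pool exists and got the expected pg_num, something like this should do (xruk-ssd-pool is the pool created above):

<pre>
ceph osd pool ls detail
ceph osd pool get xruk-ssd-pool pg_num
</pre>
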
create a CRUSH rule:
https://docs.ceph.com/docs/master/rados/operations/crush-map/#creating-a-rule-for-a-replicated-pool

<pre>
ceph osd crush rule create-replicated <rule-name> <root> <failure-domain> <class>
</pre>

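The <class> the rule refers to (here the custom device class xruk-ssd) has to exist on the new OSDs before the rule can use it. A minimal sketch for tagging them, assuming osd.10 and osd.11 are the new SSD OSDs (example IDs, adjust to the real ones):

<pre>
# osd.10 / osd.11 are example IDs
# an OSD carries only one device class, so drop the auto-assigned one first
ceph osd crush rm-device-class osd.10 osd.11
ceph osd crush set-device-class xruk-ssd osd.10 osd.11

# check the class and its members
ceph osd crush class ls
ceph osd crush class ls-osd xruk-ssd
</pre>
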
<pre>
ceph osd crush rule create-replicated xruk-ssd-rule default host xruk-ssd
</pre>

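To confirm the rule was created and see how it is built:

<pre>
ceph osd crush rule ls
ceph osd crush rule dump xruk-ssd-rule
</pre>
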
set the rule on the pool:
https://docs.ceph.com/docs/master/rados/operations/crush-map/#device-classes

<pre>
ceph osd pool set <pool-name> crush_rule <rule-name>
</pre>

<pre>
ceph osd pool set xruk-ssd-pool crush_rule xruk-ssd-rule
</pre>

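Switching the rule makes Ceph move the pool's data onto the OSDs of the xruk-ssd class; to verify the setting and keep an eye on the rebalance:

<pre>
ceph osd pool get xruk-ssd-pool crush_rule
ceph -s        # recovery/backfill progress
ceph osd df    # per-OSD usage should shift toward the xruk-ssd OSDs
</pre>
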
enable the application on the pool:

<pre>
ceph osd pool application enable <pool-name> <app-name>, where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications
</pre>

<pre>
~# ceph osd pool application enable xruk-ssd-pool rbd
enabled application 'rbd' on pool 'xruk-ssd-pool'
</pre>
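
As a last check, the application tag shows up in the pool listing, and since the pool is enabled for rbd, a throw-away test image can be created on it (test-image is just an example name):

<pre>
ceph osd pool ls detail | grep xruk-ssd-pool

# optional smoke test on the new pool
rbd create --pool xruk-ssd-pool --size 1G test-image
rbd ls --pool xruk-ssd-pool
rbd rm --pool xruk-ssd-pool test-image
</pre>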