h1. Create new pool and place new osd

h2. 1. create a new pool

manual
https://docs.ceph.com/docs/jewel/rados/operations/pools/#create-a-pool

h3. check the current pools

<pre>
ceph osd lspools
</pre>
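
If a more detailed listing is useful (pg_num, size, and the crush rule assigned to each pool), the command below shows it; this is an optional extra check, not part of the original notes:

<pre>
ceph osd pool ls detail
</pre>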

h3. create a new pool

<pre>
ceph osd pool create {pool-name} {pg-num} [{pgp-num}] [replicated] \
     [crush-ruleset-name] [expected-num-objects]
</pre>

5~10 OSDs  --> pg_num 512
10~50 OSDs --> pg_num 4096
more than 50 OSDs --> calculate pg_num with pgcalc
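
As a rough sketch of what pgcalc does (the commonly quoted rule of thumb, not taken from the manual above): total PGs ~= (number of OSDs * 100) / replica count, rounded to the nearest power of two, then split across the pools sharing those OSDs. The figures below are assumed values, for illustration only:

<pre>
ex) 60 OSDs, replica size 3
    (60 * 100) / 3 = 2000  --> nearest power of two --> 2048 PGs in total
</pre>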

<pre>
ex) ceph osd pool create xruk-ssd-pool 128 128
</pre>
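
To confirm the pg settings actually applied to the new pool (an optional check, not in the original notes):

<pre>
ceph osd pool get xruk-ssd-pool pg_num
ceph osd pool get xruk-ssd-pool pgp_num
</pre>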

h2. 2. create a crush rule for the new pool

manual
https://docs.ceph.com/docs/master/rados/operations/crush-map/#creating-a-rule-for-a-replicated-pool

h3. check the current rules

<pre>
ceph osd crush rule ls
</pre>
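
Because the new rule is modeled on an existing one, it can help to dump a current rule first and copy its layout; this inspection step is an addition to the original notes, and the rule name in the example is just a placeholder:

<pre>
ceph osd crush rule dump <rule-name>

ex) ceph osd crush rule dump replicated_rule
</pre>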

h3. create a new crush rule (copy from an existing rule)

<pre>
ceph osd crush rule create-replicated <rule-name> <root> <failure-domain> <class>
</pre>

<pre>
ex) ceph osd crush rule create-replicated xruk-ssd-rule default host xruk-ssd
</pre>
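
The rule above selects OSDs by the device class xruk-ssd, so newly added OSDs must carry that class before the rule can place data on them. A minimal sketch, assuming the new OSDs are osd.10 and osd.11 (placeholder IDs) and that any auto-assigned class has to be removed first:

<pre>
ceph osd crush rm-device-class osd.10 osd.11
ceph osd crush set-device-class xruk-ssd osd.10 osd.11
</pre>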

h2. 3. assign the crush rule to the new pool

manual
https://docs.ceph.com/docs/master/rados/operations/crush-map/#device-classes

- assign the new crush rule to the new pool
<pre>
ceph osd pool set <pool-name> crush_rule <rule-name>
</pre>

<pre>
ex) ceph osd pool set xruk-ssd-pool crush_rule xruk-ssd-rule
</pre>
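
To check that the pool actually picked up the new rule (an extra verification step, not in the original notes):

<pre>
ceph osd pool get xruk-ssd-pool crush_rule
</pre>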

h2. 4. update the key info to access the new pool

manual
https://docs.ceph.com/docs/mimic/rados/operations/user-management/

h3. check the current key

- check the list of keys
<pre>
ceph auth ls
</pre>

- get info about OpenNebula's key
<pre>
ceph auth get client.libvirt
</pre>

- update a key's info
<pre>
ceph auth caps USERTYPE.USERID {daemon} 'allow [r|w|x|*|...] [pool={pool-name}] [namespace={namespace-name}]' [{daemon} 'allow [r|w|x|*|...] [pool={pool-name}] [namespace={namespace-name}]']

mon 'profile {name}'
osd 'profile {name} [pool={pool-name} [namespace={namespace-name}]]'
</pre>

<pre>
ex) ceph auth caps client.libvirt mon 'profile rbd' osd 'profile rbd pool=xruk-ssd-pool, profile rbd pool=hdd, profile rbd pool=ssd'
</pre>

- verify the updated key

<pre>
ceph auth get client.libvirt
</pre>

h2. 5. enable the ceph application on the new pool

<pre>
ceph osd pool application enable <pool-name> <app-name>, where <app-name> is 'cephfs', 'rbd', 'rgw', or a freeform name for custom applications
</pre>

<pre>
ex)
~# ceph osd pool application enable xruk-ssd-pool rbd
enabled application 'rbd' on pool 'xruk-ssd-pool'
</pre>
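
To double-check which application ended up enabled on the pool (an optional verification, not part of the original notes):

<pre>
ceph osd pool application get xruk-ssd-pool
</pre>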