h1. Create new pool and place new osd

{{toc}}

h2. 1. create a new pool

manual
https://docs.ceph.com/docs/jewel/rados/operations/pools/#create-a-pool

h3. check the current pools

<pre>
ceph osd lspools
</pre>

h3. create a new pool

<pre>
ceph osd pool create {pool-name} {pg-num} [{pgp-num}] [replicated] \
     [crush-ruleset-name] [expected-num-objects]
</pre>

5~10 osds --> pg_num 512
10~50 osds --> pg_num 4096
more than 50 osds --> needs calculation (pgcalc)

<pre>
ex) ceph osd pool create xruk-ssd-pool 128 128
</pre>
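
To confirm the pool was created with the intended pg count, the values can be read back. A minimal check, reusing the pool name from the example above:

<pre>
# read back the pg/pgp count of the new pool
ceph osd pool get xruk-ssd-pool pg_num
ceph osd pool get xruk-ssd-pool pgp_num
</pre>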

h2. 2. create a crush rule for the new pool

manual 
https://docs.ceph.com/docs/master/rados/operations/crush-map/#creating-a-rule-for-a-replicated-pool

h3. check the current rules

<pre>
ceph osd crush rule ls
</pre>

h3. create a new crush rule (copy from an existing rule)

<pre>
ceph osd crush rule create-replicated <rule-name> <root> <failure-domain> <class>
</pre>

<pre>
ex) ceph osd crush rule create-replicated xruk-ssd-rule default host xruk-ssd
</pre>
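
The rule above only matches OSDs that carry the @xruk-ssd@ device class, so new OSDs have to be tagged with that class before the rule places data on them. A sketch, where @osd.42@ is a placeholder id:

<pre>
# inspect the known device classes and the new rule
ceph osd crush class ls
ceph osd crush rule dump xruk-ssd-rule

# tag a new osd with the class used by the rule
# (remove any auto-assigned class first, otherwise set-device-class refuses)
ceph osd crush rm-device-class osd.42
ceph osd crush set-device-class xruk-ssd osd.42
</pre>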

h2. 3. assign the crush rule to the new pool

manual
https://docs.ceph.com/docs/master/rados/operations/crush-map/#device-classes

- assign the new crush rule to the new pool
<pre>
ceph osd pool set <pool-name> crush_rule <rule-name>
</pre>

<pre>
ex) ceph osd pool set xruk-ssd-pool crush_rule xruk-ssd-rule
</pre>
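
To verify the assignment, the pool's crush_rule can be read back; a minimal check using the names from the example:

<pre>
# show which crush rule the pool uses now
ceph osd pool get xruk-ssd-pool crush_rule
</pre>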

h2. 4. update the key info to access the new pool

manual
https://docs.ceph.com/docs/mimic/rados/operations/user-management/

h3. check the current key

- check the list of keys
<pre>
ceph auth ls
</pre>

- get info about the OpenNebula key
<pre>
ceph auth get client.libvirt
</pre>

- update the key's info (note: @ceph auth caps@ replaces the key's whole capability list, so every pool the key still needs must be listed again, not only the new one)
84
<pre>
85 2 Jin-Guk Kwon
ceph auth caps USERTYPE.USERID {daemon} 'allow [r|w|x|*|...] [pool={pool-name}] [namespace={namespace-name}]' [{daemon} 'allow [r|w|x|*|...] [pool={pool-name}] [namespace={namespace-name}]']
86
87
mon 'profile {name}' 
88
osd 'profile {name} [pool={pool-name} [namespace={namespace-name}]]'
89 1 Jin-Guk Kwon
</pre>
90
91
<pre>
92
ex) ceph auth caps client.libvirt mon 'profile rbd' osd 'profile rbd pool=xruk-ssd-pool, profile rbd pool=hdd, profile rbd pool=ssd'
93 6 Jin-Guk Kwon
</pre>
94
95
- verify a updated key
96
97
<pre>
98
ceph auth get client.libvirt
99 2 Jin-Guk Kwon
</pre>
100
101 5 Jin-Guk Kwon
h2. 5. set ceph application enable
102 1 Jin-Guk Kwon
103
<pre>
104
ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications
105
</pre>
106
107
<pre>
108 3 Jin-Guk Kwon
ex)
109 1 Jin-Guk Kwon
~# ceph osd pool application enable xruk-ssd-pool rbd
110
enabled application 'rbd' on pool 'xruk-ssd-pool'
111
</pre>