h1. Create new pool and place new osd
h2. 1. create a new pool
Manual: https://docs.ceph.com/docs/jewel/rados/operations/pools/#create-a-pool
h3. check the current pools
<pre>
ceph osd lspools
</pre>
h3. create a new pool
<pre>
ceph osd pool create {pool-name} {pg-num} [{pgp-num}] [replicated] \
[crush-ruleset-name] [expected-num-objects]
</pre>
* 5~10 OSDs --> pg_num 512
* 10~50 OSDs --> pg_num 4096
* more than 50 OSDs --> calculate pg_num yourself (pgcalc)
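These thresholds follow the Jewel manual. The rule of thumb behind them (the same one pgcalc automates) is roughly (number of OSDs * 100) / replica size, rounded to the nearest power of two; a quick sketch, assuming 6 OSDs and a pool size of 3:
<pre>
# total PGs ~= (num_osds * 100) / pool_size, rounded to the nearest power of two
echo $(( (6 * 100) / 3 ))   # 200 -> 256, or 128 if you prefer to round down
</pre>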
<pre>
ex) ceph osd pool create xruk-ssd-pool 128 128
</pre>
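To confirm the pool exists and got the expected pg_num/pgp_num (using the example pool name above):
<pre>
ceph osd pool ls detail | grep xruk-ssd-pool
</pre>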
h2. 2. create a crush rule for new pool
Manual: https://docs.ceph.com/docs/master/rados/operations/crush-map/#creating-a-rule-for-a-replicated-pool
h3. check the current rules
<pre>
ceph osd crush rule ls
</pre>
h3. create a new rule (modeled on an existing rule)
<pre>
ceph osd crush rule create-replicated <rule-name> <root> <failure-domain> <class>
</pre>
<pre>
ex) ceph osd crush rule create-replicated xruk-ssd-rule default host xruk-ssd
</pre>
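To check that the new rule picked up the intended failure domain and device class (using the example rule name above):
<pre>
ceph osd crush rule dump xruk-ssd-rule
</pre>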
h2. 3. assign a crush rule to new pool
Manual: https://docs.ceph.com/docs/master/rados/operations/crush-map/#device-classes
- assign the new crush rule to the new pool
<pre>
ceph osd pool set <pool-name> crush_rule <rule-name>
</pre>
<pre>
ex) ceph osd pool set xruk-ssd-pool crush_rule xruk-ssd-rule
</pre>
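To verify the assignment (using the example names above):
<pre>
ceph osd pool get xruk-ssd-pool crush_rule
</pre>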
h2. 4. update the key info to access the new pool
Manual: https://docs.ceph.com/docs/mimic/rados/operations/user-management/
h3. check the current key
- list all existing keys
<pre>
ceph auth ls
</pre>
- get info about OpenNebula's key
<pre>
ceph auth get client.libvirt
</pre>
- update the key's capabilities (note: ceph auth caps replaces the existing caps completely, so list every pool the key still needs)
<pre>
ceph auth caps USERTYPE.USERID {daemon} 'allow [r|w|x|*|...] [pool={pool-name}] [namespace={namespace-name}]' \
    [{daemon} 'allow [r|w|x|*|...] [pool={pool-name}] [namespace={namespace-name}]']

# or, with capability profiles:
ceph auth caps USERTYPE.USERID mon 'profile {name}' \
    osd 'profile {name} [pool={pool-name}] [namespace={namespace-name}]'
</pre>
<pre>
ex) ceph auth caps client.libvirt mon 'profile rbd' osd 'profile rbd pool=xruk-ssd-pool, profile rbd pool=hdd, profile rbd pool=ssd'
</pre>
- verify the updated key
<pre>
ceph auth get client.libvirt
</pre>
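As an optional smoke test, the updated key can be used to list images in the new pool; a sketch, assuming the client.libvirt keyring sits in the default location on the host:
<pre>
rbd --id libvirt -p xruk-ssd-pool ls
</pre>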
h2. 5. enable a ceph application on the pool
<pre>
ceph osd pool application enable <pool-name> <app-name>
# <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications
</pre>
<pre>
ex)
~# ceph osd pool application enable xruk-ssd-pool rbd
enabled application 'rbd' on pool 'xruk-ssd-pool'
</pre>
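To confirm which applications are enabled on the pool:
<pre>
ceph osd pool application get xruk-ssd-pool
</pre>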