Revision 1 (Jin-Guk Kwon, 09/13/2020 02:37 PM) → Revision 2/14 (Jin-Guk Kwon, 09/14/2020 08:20 AM)
h1. Create new pool and place new osd
create a pool manually
https://docs.ceph.com/docs/jewel/rados/operations/pools/#create-a-pool
<pre>
ceph osd pool create {pool-name} {pg-num} [{pgp-num}] [replicated] \
[crush-ruleset-name] [expected-num-objects]
</pre>
5~10 OSDs --> pg_num 512
10~50 OSDs --> pg_num 4096
more than 50 OSDs --> needs calculation (pgcalc)
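The rule of thumb above can be sketched as a small shell helper (hypothetical name, not a ceph command); it approximates the pgcalc formula of roughly 100 PGs per OSD divided by the pool's replica count, rounded up to the next power of two:

```shell
# Hypothetical helper sketching the pgcalc rule of thumb:
# target ~100 PGs per OSD, divided by the replica count,
# rounded up to the next power of two.
suggest_pg_num() {
  local osds=$1 replicas=$2
  local target=$(( osds * 100 / replicas ))
  local pg=1
  while [ "$pg" -lt "$target" ]; do
    pg=$(( pg * 2 ))
  done
  echo "$pg"
}

suggest_pg_num 9 3   # 9 OSDs, 3 replicas -> 512
```

Note this formula gives per-pool values that can differ from the fixed brackets above; for more than 50 OSDs, use the official pgcalc tool.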
<pre>
ceph osd pool create xruk-ssd-pool 128 128
</pre>
create a CRUSH rule
https://docs.ceph.com/docs/master/rados/operations/crush-map/#creating-a-rule-for-a-replicated-pool
<pre>
ceph osd crush rule create-replicated <rule-name> <root> <failure-domain> <class>
</pre>
<pre>
ceph osd crush rule create-replicated xruk-ssd-rule default host xruk-ssd
</pre>
set the rule on the pool
https://docs.ceph.com/docs/master/rados/operations/crush-map/#device-classes
<pre>
ceph osd pool set <pool-name> crush_rule <rule-name>
</pre>
<pre>
ceph osd pool set xruk-ssd-pool crush_rule xruk-ssd-rule
</pre>
update the client key caps
https://docs.ceph.com/docs/mimic/rados/operations/user-management/
<pre>
ceph auth caps USERTYPE.USERID {daemon} 'allow [r|w|x|*|...] [pool={pool-name}] [namespace={namespace-name}]' [{daemon} 'allow [r|w|x|*|...] [pool={pool-name}] [namespace={namespace-name}]']
mon 'profile {name}'
osd 'profile {name} [pool={pool-name} [namespace={namespace-name}]]'
</pre>
<pre>
ceph auth caps client.libvirt mon 'profile rbd' osd 'profile rbd pool=xruk-ssd-pool, profile rbd pool=hdd, profile rbd pool=ssd'
</pre>
enable the pool application
<pre>
ceph osd pool application enable <pool-name> <app-name>
</pre>
where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
<pre>
~# ceph osd pool application enable xruk-ssd-pool rbd
enabled application 'rbd' on pool 'xruk-ssd-pool'
</pre>
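As a final check (assuming the pool name used above), the enabled applications can be listed:

```shell
# Confirm which applications are enabled on the pool
ceph osd pool application get xruk-ssd-pool
```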