Create new pool and place new osd
1. create a new pool
manual
https://docs.ceph.com/docs/jewel/rados/operations/pools/#create-a-pool
check the current pools
ceph osd lspools
create a new pool
ceph osd pool create {pool-name} {pg-num} [{pgp-num}] [replicated] \ [crush-ruleset-name] [expected-num-objects]
osds are 5~10 --> pg_num 512
osds are 10~50 --> pg_num 4096
osds are more than 50 --> pg_num needs calculation (pgcalc); see the note after the example below
ex) ceph osd pool create xruk-ssd-pool 128 128
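- rough rule of thumb behind pgcalc (a sketch for a single replicated pool; the calculator also weights by the expected data per pool): total PGs = (number of osds x 100) / replica size, rounded up to the next power of two
ex) 6 osds with size 3 --> (6 x 100) / 3 = 200 --> pg_num 256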
2. create a crush rule for the new pool
manual
https://docs.ceph.com/docs/master/rados/operations/crush-map/#creating-a-rule-for-a-replicated-pool
check the current rules
ceph osd crush rule ls
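- to inspect an existing rule before modeling the new one on it (replicated_rule is the usual default rule name; adjust to the output of the command above)
ex) ceph osd crush rule dump replicated_rule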
create a new crush rule (copy from an existing rule)
ceph osd crush rule create-replicated <rule-name> <root> <failure-domain> <class>
ex) ceph osd crush rule create-replicated xruk-ssd-rule default host xruk-ssd
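- the xruk-ssd class in the example only matches osds that actually carry that device class; a minimal sketch to tag the new osds (osd.12 is a placeholder id, and rm-device-class is only needed if the osd already has a class)
ceph osd crush class ls
ceph osd crush rm-device-class osd.12
ceph osd crush set-device-class xruk-ssd osd.12
ceph osd crush tree --show-shadow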
3. assign a crush rule to the new pool
manual
https://docs.ceph.com/docs/master/rados/operations/crush-map/#device-classes
- assign the new crush rule to the new pool
ceph osd pool set <pool-name> crush_rule <rule-name>
ex) ceph osd pool set xruk-ssd-pool crush_rule xruk-ssd-rule
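- verify the assignment (pool and rule names follow the example above); after the change, existing data in the pool rebalances onto the osds matched by the new rule
ceph osd pool get xruk-ssd-pool crush_rule
ceph pg ls-by-pool xruk-ssd-pool | head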
4. update the key info to access the new pool
manual
https://docs.ceph.com/en/latest/rados/operations/user-management/
check the current key
- check the list of keys
ceph auth ls
- get info about the OpenNebula key (client.libvirt)
ceph auth get client.libvirt
- update the key's info (generic form, or with mon/osd profiles as in the example below)
ceph auth caps USERTYPE.USERID {daemon} 'allow [r|w|x|*|...] [pool={pool-name}] [namespace={namespace-name}]' [{daemon} 'allow [r|w|x|*|...] [pool={pool-name}] [namespace={namespace-name}]']
ceph auth caps USERTYPE.USERID mon 'profile {name}' osd 'profile {name} [pool={pool-name}] [namespace={namespace-name}]'
ex) ceph auth caps client.libvirt mon 'profile rbd' osd 'profile rbd pool=xruk-ssd-pool, profile rbd pool=hdd, profile rbd pool=ssd'
- verify the updated key
ceph auth get client.libvirt
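- optionally test access with the updated key from a client node (assumes the client.libvirt keyring is available there, e.g. under /etc/ceph/)
ex) rbd --id libvirt -p xruk-ssd-pool ls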
5. enable the ceph application on the pool
ceph osd pool application enable <pool-name> <app-name>, where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications
ex) ~# ceph osd pool application enable xruk-ssd-pool rbd
enabled application 'rbd' on pool 'xruk-ssd-pool'
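- verify the application metadata on the pool (it should list rbd)
ex) ceph osd pool application get xruk-ssd-pool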
* remove pool and crush rule
https://docs.ceph.com/en/latest/rados/operations/pools/?highlight=ceph%20osd%20pool#delete-a-pool
https://docs.ceph.com/en/latest/rados/operations/crush-map/#device-classes
- check the current pool's rule
ex) ceph osd pool get ssd-pool crush_rule
- to remove a pool, the mon_allow_pool_delete flag must be set to true in the monitors' configuration; otherwise the monitors will refuse to remove the pool
ex) ceph tell mon.\* injectargs '--mon-allow-pool-delete=true'
- remove pool
ex) ceph osd pool delete ssd-pool ssd-pool --yes-i-really-really-mean-it
- disable the monitor flag
ex) ceph tell mon.\* injectargs '--mon-allow-pool-delete=false'
- check pool list
ceph osd lspools
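- before removing a crush rule, make sure no remaining pool still references it (a quick check)
ex) ceph osd pool ls detail | grep crush_rule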
- remove crush rule
ex) ceph osd crush rule rm ssd-rule
- check rule list
ceph osd crush rule ls