h1. Create new pool and place new osd 

 {{toc}} 

 h2. 1. create a new pool 

 manual 
 https://docs.ceph.com/docs/jewel/rados/operations/pools/#create-a-pool 

h3. check the current pools

 <pre> 
 ceph osd lspools 
 </pre> 

 h3. create a new pool 

 <pre> 
 ceph osd pool create {pool-name} {pg-num} [{pgp-num}] [replicated] \ 
      [crush-ruleset-name] [expected-num-objects] 
 </pre> 

- less than 5 OSDs --> pg_num 128
- 5~10 OSDs --> pg_num 512
- 10~50 OSDs --> pg_num 4096
- more than 50 OSDs --> needs calculation (pgcalc; see the rule-of-thumb sketch below)
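
For more than 50 OSDs, a commonly cited rule of thumb (a rough sketch only; pgcalc is the authoritative tool) is: total pg_num ≈ (number of OSDs × 100) / replica count, rounded up to the nearest power of two. For example, with 60 OSDs and replica size 3:

<pre>
(60 * 100) / 3 = 2000  -->  next power of two  -->  pg_num 2048
</pre>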

 <pre> 
 ex) ceph osd pool create xruk-ssd-pool 128 128 
 </pre> 
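
The new pool and its PG settings can be confirmed with, for example:

<pre>
ceph osd lspools
ceph osd pool get xruk-ssd-pool pg_num
ceph osd pool get xruk-ssd-pool pgp_num
</pre>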

h2. 2. create a crush rule for the new pool

 manual  
 https://docs.ceph.com/docs/master/rados/operations/crush-map/#creating-a-rule-for-a-replicated-pool 

h3. check the current rules

 <pre> 
 ceph osd crush rule ls 
 </pre> 

h3. create a new crush rule (modeled on an existing rule)

 <pre> 
 ceph osd crush rule create-replicated <rule-name> <root> <failure-domain> <class> 
 </pre> 

 <pre> 
 ex) ceph osd crush rule create-replicated xruk-ssd-rule default host xruk-ssd 
 </pre> 
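
The example assumes a device class named xruk-ssd already exists on the new OSDs. The class, its member OSDs, and the resulting rule can be inspected with:

<pre>
ceph osd crush class ls
ceph osd crush class ls-osd xruk-ssd
ceph osd crush rule dump xruk-ssd-rule
</pre>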

h2. 3. assign the crush rule to the new pool

 manual 
 https://docs.ceph.com/docs/master/rados/operations/crush-map/#device-classes 

- assign the new crush rule to the new pool
 <pre> 
 ceph osd pool set <pool-name> crush_rule <rule-name> 
 </pre> 

 <pre> 
 ex) ceph osd pool set xruk-ssd-pool crush_rule xruk-ssd-rule 
 </pre> 
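
To verify that the pool now uses the new rule (data placement follows the rule as soon as it is set):

<pre>
ceph osd pool get xruk-ssd-pool crush_rule
ceph -s
</pre>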

 

h2. 4. update the key info to access the new pool

 manual 
 https://docs.ceph.com/en/latest/rados/operations/user-management/ 

 h3. check the current key 

- check the list of keys
 <pre> 
 ceph auth ls 
 </pre> 

- get info about OpenNebula's key (client.libvirt)
 <pre> 
 ceph auth get client.libvirt 
 </pre> 

- update the key's capabilities
 <pre> 
 ceph auth caps USERTYPE.USERID {daemon} 'allow [r|w|x|*|...] [pool={pool-name}] [namespace={namespace-name}]' [{daemon} 'allow [r|w|x|*|...] [pool={pool-name}] [namespace={namespace-name}]'] 

 mon 'profile {name}'  
 osd 'profile {name} [pool={pool-name} [namespace={namespace-name}]]' 
 </pre> 

 <pre> 
 ex) ceph auth caps client.libvirt mon 'profile rbd' osd 'profile rbd pool=xruk-ssd-pool, profile rbd pool=hdd, profile rbd pool=ssd' 
 </pre> 

- verify the updated key

 <pre> 
 ceph auth get client.libvirt 
 </pre> 
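
As an extra sanity check (a sketch, assuming the client.libvirt keyring is available on the host running the command), the updated caps can be exercised by listing the new pool with that identity:

<pre>
rbd ls -p xruk-ssd-pool --id libvirt
</pre>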

 

h2. 5. enable an application on the new pool

 <pre> 
ceph osd pool application enable <pool-name> <app-name>
(<app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications)
 </pre> 

 <pre> 
 ex) 
 ~# ceph osd pool application enable xruk-ssd-pool rbd 
 enabled application 'rbd' on pool 'xruk-ssd-pool' 
 </pre> 
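
The enabled application can be verified afterwards (Luminous or later):

<pre>
ceph osd pool application get xruk-ssd-pool
</pre>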

 h2. remove pool and crush rule 

- check the pool's current crush rule

 <pre> 
 ex) ceph osd pool get ssd-pool crush_rule 
 </pre> 

- To remove a pool, the mon_allow_pool_delete flag must be set to true in the monitors' configuration; otherwise the monitors will refuse to remove the pool.

 <pre> 
 ex) ceph tell mon.\* injectargs '--mon-allow-pool-delete=true' 
 </pre> 
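
On newer releases (Mimic or later) the flag can alternatively be set persistently through the centralized config database instead of injectargs:

<pre>
ceph config set mon mon_allow_pool_delete true
</pre>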

 - remove pool 

 <pre> 
 ex) ceph osd pool delete ssd-pool ssd-pool --yes-i-really-really-mean-it 
 </pre> 

- disable the monitor flag again

 <pre> 
 ex) ceph tell mon.\* injectargs '--mon-allow-pool-delete=false' 
 </pre> 

 - check pool list 

 <pre> 
 ceph osd lspools 
 </pre> 

 - remove crush rule 

 <pre> 
 ex) ceph osd crush rule rm ssd-rule 
 </pre> 

 - check rule list 

 <pre> 
 ceph osd crush rule ls 
 </pre>