h1. Create new pool and place new osd

h2. Create the pool

Manual: https://docs.ceph.com/docs/jewel/rados/operations/pools/#create-a-pool

<pre>
ceph osd pool create {pool-name} {pg-num} [{pgp-num}] [replicated] \
     [crush-ruleset-name] [expected-num-objects]
</pre>

Choose pg_num from the number of OSDs:

5~10 OSDs --> pg_num 512
10~50 OSDs --> pg_num 4096
more than 50 OSDs --> calculate pg_num yourself (pgcalc)

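To check how many OSDs the cluster has before picking pg_num (just a sanity check, not part of the original notes):

<pre>
# total / up / in OSD count
ceph osd stat

# OSDs per host, including their device class
ceph osd tree
</pre>
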
<pre>
ceph osd pool create xruk-ssd-pool 128 128
</pre>

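Afterwards the new pool and its pg_num can be verified; a minimal check using the standard ceph CLI (not from the original page):

<pre>
# xruk-ssd-pool should be listed with pg_num 128
ceph osd pool ls detail

# or query pg_num directly
ceph osd pool get xruk-ssd-pool pg_num
</pre>
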
h2. Create a CRUSH rule

Manual: https://docs.ceph.com/docs/master/rados/operations/crush-map/#creating-a-rule-for-a-replicated-pool

<pre>
ceph osd crush rule create-replicated <rule-name> <root> <failure-domain> <class>
</pre>

<pre>
ceph osd crush rule create-replicated xruk-ssd-rule default host xruk-ssd
</pre>

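The <class> argument (xruk-ssd here) is a CRUSH device class, so the new OSDs must carry that class for the rule to match them. A rough sketch of how this could be checked and assigned; osd.12 is only a placeholder id:

<pre>
# list existing device classes and inspect the new rule
ceph osd crush class ls
ceph osd crush rule dump xruk-ssd-rule

# assign the class to a new OSD
# (remove any auto-assigned class first, set-device-class refuses to overwrite)
ceph osd crush rm-device-class osd.12
ceph osd crush set-device-class xruk-ssd osd.12
</pre>
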
h2. Set the rule on the pool

Manual: https://docs.ceph.com/docs/master/rados/operations/crush-map/#device-classes

<pre>
ceph osd pool set <pool-name> crush_rule <rule-name>
</pre>

<pre>
ceph osd pool set xruk-ssd-pool crush_rule xruk-ssd-rule
</pre>

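To confirm the rule is actually active on the pool (sanity check, not part of the original steps):

<pre>
# should return: crush_rule: xruk-ssd-rule
ceph osd pool get xruk-ssd-pool crush_rule
</pre>
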
h2. Update the key

Manual: https://docs.ceph.com/docs/mimic/rados/operations/user-management/

<pre>
ceph auth caps USERTYPE.USERID {daemon} 'allow [r|w|x|*|...] [pool={pool-name}] [namespace={namespace-name}]' [{daemon} 'allow [r|w|x|*|...] [pool={pool-name}] [namespace={namespace-name}]']

mon 'profile {name}'
osd 'profile {name} [pool={pool-name} [namespace={namespace-name}]]'
</pre>

<pre>
ceph auth caps client.libvirt mon 'profile rbd' osd 'profile rbd pool=xruk-ssd-pool, profile rbd pool=hdd, profile rbd pool=ssd'
</pre>

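ceph auth caps replaces all existing caps of the user, so the key should now list all three pools. One way to double-check (not from the original page):

<pre>
# print the key and caps of client.libvirt
ceph auth get client.libvirt
</pre>
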
h2. Enable the application on the pool

<pre>
ceph osd pool application enable <pool-name> <app-name>, where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications
</pre>

<pre>
~# ceph osd pool application enable xruk-ssd-pool rbd
enabled application 'rbd' on pool 'xruk-ssd-pool'
</pre>
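
The enabled application can be read back as well (quick check, not part of the original notes):

<pre>
# should report 'rbd' for xruk-ssd-pool
ceph osd pool application get xruk-ssd-pool
</pre>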