h1. Create new pool and place new osd

{{toc}}

h2. 1. create a new pool

manual
https://docs.ceph.com/docs/jewel/rados/operations/pools/#create-a-pool

h3. check the current pools

<pre>
ceph osd lspools
</pre>

h3. create a new pool

<pre>
ceph osd pool create {pool-name} {pg-num} [{pgp-num}] [replicated] \
     [crush-ruleset-name] [expected-num-objects]
</pre>

5~10 OSDs --> pg_num 512
10~50 OSDs --> pg_num 4096
more than 50 OSDs --> calculate pg_num with pgcalc (see the sketch below)
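
For more than 50 OSDs, a minimal sketch of the usual pgcalc rule of thumb (this assumes the common target of ~100 PGs per OSD; check the official pgcalc tool for your actual cluster):

<pre>
# pg_num ~= (number of OSDs * 100) / replica count, rounded to a power of two
# e.g. 60 OSDs with 3 replicas: (60 * 100) / 3 = 2000 --> pg_num 2048
</pre>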

<pre>
ex) ceph osd pool create xruk-ssd-pool 128 128
</pre>
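
- (optional) a quick check of the new pool's pg_num:

<pre>
ceph osd pool get xruk-ssd-pool pg_num
</pre>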

h2. 2. create a crush rule for new pool

manual
https://docs.ceph.com/docs/master/rados/operations/crush-map/#creating-a-rule-for-a-replicated-pool

h3. check the current rules

<pre>
ceph osd crush rule ls
</pre>

h3. create a new crush rule (copy from an existing rule)

<pre>
ceph osd crush rule create-replicated <rule-name> <root> <failure-domain> <class>
</pre>

<pre>
ex) ceph osd crush rule create-replicated xruk-ssd-rule default host xruk-ssd
</pre>
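
The example rule above selects OSDs by the device class xruk-ssd, so the new OSDs must carry that class. A minimal sketch of assigning it (osd.12 is a hypothetical OSD id; an automatically assigned class must be removed before it can be changed):

<pre>
ceph osd crush rm-device-class osd.12
ceph osd crush set-device-class xruk-ssd osd.12
</pre>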

h2. 3. assign a crush rule to new pool

manual
https://docs.ceph.com/docs/master/rados/operations/crush-map/#device-classes

- assign the new crush rule to the new pool

<pre>
ceph osd pool set <pool-name> crush_rule <rule-name>
</pre>

<pre>
ex) ceph osd pool set xruk-ssd-pool crush_rule xruk-ssd-rule
</pre>
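
- (optional) verify the assignment with the same command used in the removal section below:

<pre>
ceph osd pool get xruk-ssd-pool crush_rule
</pre>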

h2. 4. update the key info to access the new pool

manual
https://docs.ceph.com/en/latest/rados/operations/user-management/

h3. check the current key

- check the list of keys

<pre>
ceph auth ls
</pre>

- get info about OpenNebula's key

<pre>
ceph auth get client.libvirt
</pre>

- update the key's capabilities (note: ceph auth caps replaces the existing capabilities, so list every pool the key still needs, not just the new one)

<pre>
ceph auth caps USERTYPE.USERID {daemon} 'allow [r|w|x|*|...] [pool={pool-name}] [namespace={namespace-name}]' [{daemon} 'allow [r|w|x|*|...] [pool={pool-name}] [namespace={namespace-name}]']

mon 'profile {name}'
osd 'profile {name} [pool={pool-name} [namespace={namespace-name}]]'
</pre>

<pre>
ex) ceph auth caps client.libvirt mon 'profile rbd' osd 'profile rbd pool=xruk-ssd-pool, profile rbd pool=hdd, profile rbd pool=ssd'
</pre>

- verify the updated key

<pre>
ceph auth get client.libvirt
</pre>

h2. 5. set ceph application enable

<pre>
ceph osd pool application enable <pool-name> <app-name>
</pre>

where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications

<pre>
ex)
~# ceph osd pool application enable xruk-ssd-pool rbd
enabled application 'rbd' on pool 'xruk-ssd-pool'
</pre>
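
- (optional) check which applications are enabled on the pool:

<pre>
ceph osd pool application get xruk-ssd-pool
</pre>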

h2. remove pool and crush rule

- check the current pool's rule

<pre>
ex) ceph osd pool get ssd-pool crush_rule
</pre>

- to remove a pool, the mon_allow_pool_delete flag must be set to true in the monitors' configuration; otherwise the monitors will refuse to remove the pool

<pre>
ex) ceph tell mon.\* injectargs '--mon-allow-pool-delete=true'
</pre>

- remove the pool (the pool name must be given twice as a safety measure)

<pre>
ex) ceph osd pool delete ssd-pool ssd-pool --yes-i-really-really-mean-it
</pre>

- disable the monitor flag again

<pre>
ex) ceph tell mon.\* injectargs '--mon-allow-pool-delete=false'
</pre>

- check the pool list

<pre>
ceph osd lspools
</pre>

- remove the crush rule

<pre>
ex) ceph osd crush rule rm ssd-rule
</pre>

- check the rule list

<pre>
ceph osd crush rule ls
</pre>