This repository was archived by the owner on Mar 26, 2020. It is now read-only.

Commit 79fb1e9

Doc: detailed changes for user doc

First set of changes improving the user-facing documentation. The sections on managing volumes, features, known issues, and troubleshooting are yet to be worked on. Signed-off-by: Hari Gowtham <hgowtham@redhat.com>

1 parent 8d5b37d commit 79fb1e9

3 files changed: +283 −0 lines changed

Lines changed: 104 additions & 0 deletions
# Managing Trusted Storage Pools

### Overview

A trusted storage pool (TSP) is a trusted network of storage servers (peers). More about TSPs can be found [here](https://docs.gluster.org/en/latest/Administrator%20Guide/Storage%20Pools/).

The corresponding glusterd2 commands are described below.

- [Adding Servers](#adding-servers)
- [Listing Servers](#listing-servers)
- [Viewing Peer Status](#peer-status)
- [Removing Servers](#removing-servers)
<a name="adding-servers"></a>
### Adding Servers

To add a server to a TSP, run the peer add command from a server that is already in the pool:

    # glustercli peer add <server>

For example, to add a new server (server2) to the cluster described above, probe it from one of the other servers:

    server1# glustercli peer add server2
    Peer add successful
    +--------------------------------------+---------+-----------------------+-----------------------+
    |                  ID                  |  NAME   |   CLIENT ADDRESSES    |    PEER ADDRESSES     |
    +--------------------------------------+---------+-----------------------+-----------------------+
    | fd0aaa07-9e5f-4265-b778-e49514874ca2 | server2 | 127.0.0.1:24007       | server2:24008         |
    |                                      |         | 192.168.122.193:24007 | 192.168.122.193:24008 |
    +--------------------------------------+---------+-----------------------+-----------------------+

Verify the peer status from the first server (server1):

    server1# glustercli peer status
    +--------------------------------------+---------+-----------------------+-----------------------+--------+-------+
    |                  ID                  |  NAME   |   CLIENT ADDRESSES    |    PEER ADDRESSES     | ONLINE |  PID  |
    +--------------------------------------+---------+-----------------------+-----------------------+--------+-------+
    | d82734dc-57c0-44ef-a682-8b59c43d0cef | server1 | 127.0.0.1:24007       | 192.168.122.18:24008  | yes    | 1269  |
    |                                      |         | 192.168.122.18:24007  |                       |        |       |
    | fd0aaa07-9e5f-4265-b778-e49514874ca2 | server2 | 127.0.0.1:24007       | 192.168.122.193:24008 | yes    | 18657 |
    |                                      |         | 192.168.122.193:24007 |                       |        |       |
    +--------------------------------------+---------+-----------------------+-----------------------+--------+-------+
<a name="listing-servers"></a>
### Listing Servers

To list all nodes in the TSP:

    server1# glustercli peer list
    +--------------------------------------+---------+-----------------------+-----------------------+--------+-------+
    |                  ID                  |  NAME   |   CLIENT ADDRESSES    |    PEER ADDRESSES     | ONLINE |  PID  |
    +--------------------------------------+---------+-----------------------+-----------------------+--------+-------+
    | d82734dc-57c0-44ef-a682-8b59c43d0cef | server1 | 127.0.0.1:24007       | 192.168.122.18:24008  | yes    | 1269  |
    |                                      |         | 192.168.122.18:24007  |                       |        |       |
    | fd0aaa07-9e5f-4265-b778-e49514874ca2 | server2 | 127.0.0.1:24007       | 192.168.122.193:24008 | yes    | 18657 |
    |                                      |         | 192.168.122.193:24007 |                       |        |       |
    +--------------------------------------+---------+-----------------------+-----------------------+--------+-------+
<a name="peer-status"></a>
### Viewing Peer Status

To view the status of the peers in the TSP:

    server1# glustercli peer status
    +--------------------------------------+---------+-----------------------+-----------------------+--------+-------+
    |                  ID                  |  NAME   |   CLIENT ADDRESSES    |    PEER ADDRESSES     | ONLINE |  PID  |
    +--------------------------------------+---------+-----------------------+-----------------------+--------+-------+
    | d82734dc-57c0-44ef-a682-8b59c43d0cef | server1 | 127.0.0.1:24007       | 192.168.122.18:24008  | yes    | 1269  |
    |                                      |         | 192.168.122.18:24007  |                       |        |       |
    | fd0aaa07-9e5f-4265-b778-e49514874ca2 | server2 | 127.0.0.1:24007       | 192.168.122.193:24008 | yes    | 18657 |
    |                                      |         | 192.168.122.193:24007 |                       |        |       |
    +--------------------------------------+---------+-----------------------+-----------------------+--------+-------+
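A script can check the ONLINE column of this output to confirm every peer is up. A minimal sketch, assuming the tabular `glustercli peer status` output shown above; `all_peers_online` is a hypothetical helper (not part of glustercli) that reads the table on stdin:

```shell
#!/bin/sh
# Succeed only if no data row of the status table shows "no" in the
# ONLINE column (field 6 when splitting on "|").
all_peers_online() {
    awk -F'|' '$6 ~ /^[[:space:]]*no[[:space:]]*$/ { bad = 1 }
               END { exit bad }'
}

# Hypothetical usage:
#   glustercli peer status | all_peers_online && echo "all peers online"
```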
<a name="removing-servers"></a>
### Removing Servers

To remove a server from the TSP, run the following command from another server in the pool:

    # glustercli peer remove <peerid>

For example, to remove server2 from the trusted storage pool:

    server1# glustercli peer remove fd0aaa07-9e5f-4265-b778-e49514874ca2
    Peer remove success

***Note:*** For now, peer remove works only with the peer ID, which you can get from peer status.

Verify the peer status:

    server1# glustercli peer status
    +--------------------------------------+---------+----------------------+----------------------+--------+------+
    |                  ID                  |  NAME   |   CLIENT ADDRESSES   |    PEER ADDRESSES    | ONLINE | PID  |
    +--------------------------------------+---------+----------------------+----------------------+--------+------+
    | d82734dc-57c0-44ef-a682-8b59c43d0cef | server1 | 127.0.0.1:24007      | 192.168.122.18:24008 | yes    | 1269 |
    |                                      |         | 192.168.122.18:24007 |                      |        |      |
    +--------------------------------------+---------+----------------------+----------------------+--------+------+
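Because peer remove takes only an ID, a helper can map a peer name to its ID by parsing the status table. A minimal sketch under that assumption; `peer_id_by_name` is a hypothetical helper (not a glustercli command) that reads the table on stdin:

```shell
#!/bin/sh
# Print the ID column of the row whose NAME column matches exactly.
# Splitting on "|": field 2 is ID, field 3 is NAME.
peer_id_by_name() {
    awk -F'|' -v name="$1" '
        $3 ~ ("^[[:space:]]*" name "[[:space:]]*$") {
            gsub(/[[:space:]]/, "", $2)
            print $2
        }'
}

# Hypothetical usage:
#   glustercli peer status | peer_id_by_name server2 | xargs glustercli peer remove
```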

doc/setting-up-volumes.md

Lines changed: 121 additions & 0 deletions
# Setting up GlusterFS Volumes

This doc covers the commands that differ under GD2. For information about volume types and related concepts, refer [here](https://docs.gluster.org/en/latest/Administrator%20Guide/Setting%20Up%20Volumes/).

## Creating New Volumes

### Creating Distributed Volumes

`# glustercli volume create --name <VOLNAME> <UUID1>:<brick1> .. <UUIDn>:<brickm>`

where n is the number of servers and m is the number of bricks; m can be equal to or greater than n.

For example, a four node distributed volume:

    # glustercli volume create --name testvol server1:/export/brick1/data server2:/export/brick2/data server3:/export/brick3/data server4:/export/brick4/data
    testvol Volume created successfully
    Volume ID: 15c1611d-aae6-44f0-ae8d-fa04f31f5c99
### Creating Replicated Volumes

`# glustercli volume create --name <VOLNAME> --replica <count> <UUID1>:<brick1> .. <UUIDn>:<brickm>`

where n is the server count and m is the number of bricks.

For example, to create a replicated volume across two storage servers:

    # glustercli volume create testvol server1:/exp1 server2:/exp2 --replica 2
    testvol Volume created successfully
    Volume ID: 15c1611d-aae6-44f0-ae8d-fa04f31f5c99

> **Note**:
>
> - GlusterD2 creates a replicated volume even if more than one brick of a replica set is present on the same peer. For example, a four brick replicated volume where two bricks of the replica set are on the same peer (server1):
>
>       # glustercli volume create --name <VOLNAME> --replica 4 server1:/brick1 server1:/brick2 server2:/brick2 server3:/brick3
>       <VOLNAME> Volume created successfully
>       Volume ID: 15c1611d-aae6-44f0-ae8d-fa04f31f5c99
### Arbiter configuration for replica volumes

`# glustercli volume create <VOLNAME> --replica 2 --arbiter 1 <UUID1>:<brick1> <UUID2>:<brick2> <UUID3>:<brick3>`

> **Note:**
>
> 1) It is specified as replica 2, not replica 3, even though there are 3 bricks per replica set (arbiter included).
> 2) The arbiter configuration can be used to create distributed-replicate volumes as well.
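To make the brick accounting concrete: each subvolume of an arbiter volume consumes the replica count plus the arbiter count in bricks, even though the command says replica 2. A minimal sketch; `bricks_per_subvol` is an illustrative helper, not part of glustercli:

```shell
#!/bin/sh
# Bricks consumed per subvolume: data replicas plus arbiter bricks.
# The arbiter brick stores only file metadata, not file data.
bricks_per_subvol() {
    replica=$1
    arbiter=$2
    echo $((replica + arbiter))
}

# replica 2 + arbiter 1 -> each subvolume needs 3 bricks.
```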
## Creating Distributed Replicated Volumes

`# glustercli volume create --name <VOLNAME> <UUID1>:<brick1> .. <UUIDn>:<brickm> --replica <count>`

where n is the number of servers and m is the number of bricks.

For example, a distributed (replicated) volume with four bricks on two servers and a two-way mirror:

    # glustercli volume create --name testvol server1:/export/brick1/data server2:/export/brick2/data server1:/export/brick3/data server2:/export/brick4/data --replica 2
    testvol Volume created successfully
    Volume ID: 15c1611d-aae6-44f0-ae8d-fa04f31f5c99

For example, to create a six node distributed (replicated) volume with a two-way mirror:

    # glustercli volume create testvol server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4 server5:/exp5 server6:/exp6 --replica 2
    testvol Volume created successfully
    Volume ID: 15c1611d-aae6-44f0-ae8d-fa04f31f5c99

> **Note**:
>
> - GlusterD2 creates a distributed replicated volume even if more than one brick of a replica set is present on the same peer. For example, a four brick distributed (replicated) volume where both bricks of each replica set are on the same peer:
>
>       # glustercli volume create --name <VOLNAME> --replica 2 server1:/brick1 server1:/brick2 server2:/brick3 server2:/brick4
>       <VOLNAME> Volume created successfully
>       Volume ID: 15c1611d-aae6-44f0-ae8d-fa04f31f5c99
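Bricks are grouped into replica sets in the order they appear on the command line, which is why brick placement in the commands above matters. A minimal sketch of that grouping; `replica_sets` is an illustrative helper, not a glustercli command:

```shell
#!/bin/sh
# Print how a brick list is grouped into replica sets, taking bricks
# in command-line order, <replica-count> at a time.
replica_sets() {
    count=$1; shift
    set_no=1; i=0; line=""
    for brick in "$@"; do
        line="$line $brick"
        i=$((i + 1))
        if [ "$i" -eq "$count" ]; then
            echo "replica set $set_no:$line"
            set_no=$((set_no + 1)); i=0; line=""
        fi
    done
}

# replica_sets 2 server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4
# -> replica set 1: server1:/exp1 server2:/exp2
#    replica set 2: server3:/exp3 server4:/exp4
```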
## Creating Dispersed Volumes

`# glustercli volume create --name <VOLNAME> --disperse <COUNT> <UUID1>:<brick1> .. <UUIDn>:<brickm>`

For example, a four node dispersed volume:

    # glustercli volume create --name testvol --disperse 4 server{1..4}:/export/brick/data
    testvol Volume created successfully
    Volume ID: 15c1611d-aae6-44f0-ae8d-fa04f31f5c99

For example, to create a six node dispersed volume:

    # glustercli volume create testvol --disperse 6 server{1..6}:/export/brick/data
    testvol Volume created successfully
    Volume ID: 15c1611d-aae6-44f0-ae8d-fa04f31f5c99

The redundancy count is automatically set to 2 here.
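The relationship between disperse count, redundancy, and usable capacity can be sketched as follows; the function name is illustrative, and the arithmetic assumes the standard erasure-coding layout described in the upstream docs:

```shell
#!/bin/sh
# Data bricks in a dispersed subvolume = disperse count - redundancy.
# The volume survives up to <redundancy> simultaneous brick failures,
# and usable capacity is <data bricks> x <brick size>.
dispersed_data_bricks() {
    disperse=$1
    redundancy=$2
    echo $((disperse - redundancy))
}

# disperse 6, redundancy 2 -> 4 data bricks worth of usable capacity.
```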
## Creating Distributed Dispersed Volumes

`# glustercli volume create --name <VOLNAME> --disperse <COUNT> <UUID1>:<brick1> .. <UUIDn>:<brickm>`

For example, to create a distributed dispersed volume with six bricks and a disperse count of 3 (two dispersed subvolumes):

    # glustercli volume create testvol --disperse 3 server1:/export/brick/data{1..6}
    testvol Volume created successfully
    Volume ID: 15c1611d-aae6-44f0-ae8d-fa04f31f5c99
## Starting Volumes

You must start a volume before you try to mount it.

To start a volume:

    # glustercli volume start <VOLNAME>

For example, to start testvol:

    # glustercli volume start testvol
    Volume testvol started successfully

doc/user_guide.md

Lines changed: 58 additions & 0 deletions
# User guide

## Glusterd2 and glusterfs

This section helps in understanding how glusterd2 (GD2) fits in with glusterfs.

### Glusterfs

GlusterFS is a scalable distributed network filesystem. More about gluster can be found [here](https://docs.gluster.org/en/latest/).
***Note:*** An understanding of Glusterfs is necessary to use Glusterd2.

#### Glusterd

Glusterd is the management daemon for glusterfs. It serves as the Gluster elastic volume manager, overseeing glusterfs processes and coordinating dynamic volume operations, such as adding and removing volumes across multiple storage servers non-disruptively.

Glusterd runs on all the servers. Commands are issued to glusterd using the CLI, which is a part of glusterd and can be used on any server running glusterd.

#### Glusterd2

Glusterd2 is the next version of glusterd and is maintained as a separate project for now.
It works along with the glusterfs binaries; more about this is explained in the installation section.

Glusterd2 has its own CLI, which is different from glusterd's CLI.

**Note:** There are other ways to communicate with glusterd2, which are explained in the architecture as well as the [configuring GD2]() section.
## Installation

Note: Glusterd and the gluster CLI (the first version) are installed with glusterfs. Glusterd2 has to be installed separately as of now.

## Configuring GD2

## Using GD2

### Basic Tasks

- [Starting and stopping GD2](doc/managing-the-glusterd2-service.md)
- [Managing Trusted Storage Pools](doc/managing-trusted-storage-pool.md)
- [Setting Up Storage](https://docs.gluster.org/en/latest/Administrator%20Guide/setting-up-storage/)
- [Setting Up Volumes](doc/setting-up-volumes.md)
- [Setting Up Clients](https://docs.gluster.org/en/latest/Administrator%20Guide/Setting%20Up%20Clients/)
- [Managing GlusterFS Volumes](doc/managing-volumes.md)

### Features

- [Geo-replication](doc/geo-replication.md)
- [Snapshot](doc/snapshot.md)
- [Bit-rot](doc/bitrot.md)
- [Quota](doc/quota.md)

## Known Issues

**IMPORTANT:** Do not use glusterd and glusterd2 together; do not file bugs if you do.

[Known issues](doc/known-issues.md)

## Troubleshooting
