GlusterFS Distribution
Below, we create a distributed volume using two bricks (XFS filesystems). This spreads files, and therefore I/O, across the two bricks.
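The bricks themselves are assumed to already exist as XFS filesystems mounted under /data on each node. If you still need to prepare one, a minimal sketch (the block device /dev/sdb1 is hypothetical; the 512-byte inode size leaves room for GlusterFS extended attributes):

root # mkfs.xfs -i size=512 /dev/sdb1
root # mkdir -p /data
root # mount /dev/sdb1 /data
root # mkdir /data/dist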
root # gluster peer status
No peers present
root # gluster peer probe rhs-lab2
Probe successful
root # gluster peer status
Number of Peers: 1

Hostname: rhs-lab2
Uuid: 6b6c9ffc-da79-4d24-8325-086d44869338
State: Peer in Cluster (Connected)
root # gluster peer probe rhs-lab3
Probe successful
root # gluster peer probe rhs-lab4
Probe successful
root # gluster peer status
Number of Peers: 3

Hostname: rhs-lab2
Uuid: 6b6c9ffc-da79-4d24-8325-086d44869338
State: Peer in Cluster (Connected)

Hostname: rhs-lab3
Uuid: cbcd508e-5f80-4224-91df-fd5f8e12915d
State: Peer in Cluster (Connected)

Hostname: rhs-lab4
Uuid: a02f68d8-88af-4b79-92d8-1057dd85af45
State: Peer in Cluster (Connected)
root # gluster volume create dist rhs-lab1:/data/dist rhs-lab2:/data/dist
Creation of volume dist has been successful. Please start the volume to access data.
root # gluster volume info

Volume Name: dist
Type: Distribute
Volume ID: f9758871-20dc-4728-9576-a5bb5b24ca4f
Status: Created
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: rhs-lab1:/data/dist
Brick2: rhs-lab2:/data/dist
root # gluster volume start dist
Starting volume dist has been successful
root # gluster volume info

Volume Name: dist
Type: Distribute
Volume ID: f9758871-20dc-4728-9576-a5bb5b24ca4f
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: rhs-lab1:/data/dist
Brick2: rhs-lab2:/data/dist
root # mount -t glusterfs rhs-lab1:/dist /mnt/dist
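To make the client mount persistent across reboots, an /etc/fstab entry along these lines can be used (a sketch; the _netdev option delays mounting until the network is up, and /mnt/dist must already exist):

rhs-lab1:/dist   /mnt/dist   glusterfs   defaults,_netdev   0 0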
GlusterFS Mirroring
Below, we mirror data between two bricks (XFS filesystems). This provides redundancy and can also improve read performance.
root # gluster volume create mirror replica 2 rhs-lab1:/data/mirror rhs-lab2:/data/mirror
Creation of volume mirror has been successful. Please start the volume to access data.
root # gluster volume start mirror
Starting volume mirror has been successful
root # gluster volume info mirror

Volume Name: mirror
Type: Replicate
Volume ID: 4edacef8-982c-46a9-be7e-29e34fa40f95
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: rhs-lab1:/data/mirror
Brick2: rhs-lab2:/data/mirror
root # install -d /mnt/mirror
root # mount -t glusterfs rhs-lab1:/mirror /mnt/mirror
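To confirm that replication is working, you can write a file through the client mount and check that it shows up on the underlying brick of each node (a sketch; ssh access to rhs-lab2 is assumed):

root # echo hello > /mnt/mirror/testfile
root # ls /data/mirror/
testfile
root # ssh rhs-lab2 ls /data/mirror/
testfile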
Growing GlusterFS
Now we will add a new brick to our distributed volume, then run an (optional) rebalance so that files are spread evenly. The rebalance moves some of the existing files onto our new brick on rhs-lab3:
root # gluster volume add-brick dist rhs-lab3:/data/dist
Add Brick successful
root # gluster volume rebalance dist start
Starting rebalance on volume dist has been successful
After the rebalance, the distributed volume is evenly balanced again, with roughly one third of the existing files moved to rhs-lab3.
root # gluster volume rebalance dist status
     Node  Rebalanced-files         size      scanned     failures         status
---------       -----------  -----------  -----------  -----------   ------------
localhost                 0            0            0            0      completed
 rhs-lab4                 0            0            0            0      completed
 rhs-lab3                 0            0            0            0      completed
 rhs-lab2                 0            0            0            0      completed
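You can sanity-check the resulting spread by counting the files on each brick directly (a rough sketch; passwordless ssh to the peers is assumed):

root # ls /data/dist | wc -l
root # ssh rhs-lab2 ls /data/dist | wc -l
root # ssh rhs-lab3 ls /data/dist | wc -l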
Growing a GlusterFS Replicated Volume
You can grow a replicated volume by adding pairs of bricks:
root # gluster volume add-brick mirror rhs-lab3:/data/mirror rhs-lab4:/data/mirror
Add Brick successful
root # gluster volume info mirror

Volume Name: mirror
Type: Distributed-Replicate
Volume ID: 4edacef8-982c-46a9-be7e-29e34fa40f95
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: rhs-lab1:/data/mirror
Brick2: rhs-lab2:/data/mirror
Brick3: rhs-lab3:/data/mirror
Brick4: rhs-lab4:/data/mirror
GlusterFS Brick Migration
Here is how you migrate data off an existing brick and onto a new brick:
root # gluster volume replace-brick dist rhs-lab3:/data/dist rhs-lab4:/data/dist start
replace-brick started successfully
root # gluster volume replace-brick dist rhs-lab3:/data/dist rhs-lab4:/data/dist status
Number of files migrated = 0        Migration complete
root # gluster volume replace-brick dist rhs-lab3:/data/dist rhs-lab4:/data/dist commit
replace-brick commit successful
root # gluster volume info

Volume Name: dist
Type: Distribute
Volume ID: f9758871-20dc-4728-9576-a5bb5b24ca4f
Status: Started
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: rhs-lab1:/data/dist
Brick2: rhs-lab2:/data/dist
Brick3: rhs-lab4:/data/dist

Volume Name: mirror
Type: Distributed-Replicate
Volume ID: 4edacef8-982c-46a9-be7e-29e34fa40f95
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: rhs-lab1:/data/mirror
Brick2: rhs-lab2:/data/mirror
Brick3: rhs-lab3:/data/mirror
Brick4: rhs-lab4:/data/mirror
Removing a Brick
Here's how you remove a brick. The add-brick and remove-brick commands will ensure that you don't break mirrors, so if you are working with a replicated volume you will need to remove both bricks of a replica pair together.
root # gluster volume remove-brick dist rhs-lab4:/data/dist start
Remove Brick start successful
root # gluster volume remove-brick dist rhs-lab4:/data/dist status
     Node  Rebalanced-files         size      scanned     failures         status
---------       -----------  -----------  -----------  -----------   ------------
localhost                 0            0            0            0    not started
 rhs-lab3                 0            0            0            0    not started
 rhs-lab2                 0            0            0            0    not started
 rhs-lab4                 0            0            0            0      completed
root # gluster volume remove-brick dist rhs-lab4:/data/dist commit
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
Remove Brick commit successful
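For the replicated volume built earlier, the equivalent operation removes an entire replica pair at once; a sketch following the same start/status/commit workflow (this would shrink the mirror volume from 2 x 2 back to 1 x 2):

root # gluster volume remove-brick mirror rhs-lab3:/data/mirror rhs-lab4:/data/mirror start
root # gluster volume remove-brick mirror rhs-lab3:/data/mirror rhs-lab4:/data/mirror status
root # gluster volume remove-brick mirror rhs-lab3:/data/mirror rhs-lab4:/data/mirror commit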
Geo-replication
At the local GlusterFS site:
root # gluster volume create georep rhs-lab1:/data/georep
Creation of volume georep has been successful. Please start the volume to access data.
root # gluster volume start georep
Starting volume georep has been successful
root # gluster volume info georep

Volume Name: georep
Type: Distribute
Volume ID: 001bc914-74ad-48e6-846a-1767a5b2cb58
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: rhs-lab1:/data/georep
root # mkdir /mnt/georep
root # mount -t glusterfs rhs-lab1:/georep /mnt/georep
root # cd /mnt/georep/
root # ls
root # df -h .
Filesystem        Size  Used Avail Use% Mounted on
rhs-lab1:/georep  5.1G   33M  5.0G   1% /mnt/georep
At the remote site, set up a georep-dr volume:
root # gluster volume create georep-dr rhs-lab4:/data/georep-dr
root # gluster volume start georep-dr
Back on the local side, check the geo-replication status and start the session:
root # gluster volume geo-replication georep status
MASTER               SLAVE                                              STATUS
--------------------------------------------------------------------------------
root # gluster volume geo-replication georep ssh://rhs-lab4::georep-dr start
Starting geo-replication session between georep & ssh://rhs-lab4::georep-dr has been successful
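Once the session is started, you can monitor it with the same status subcommand (a sketch; the exact output columns vary between GlusterFS versions):

root # gluster volume geo-replication georep ssh://rhs-lab4::georep-dr status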
GlusterFS Security
By default, any GlusterFS host on your LAN can probe your peers and join your trusted storage pool. You can secure GlusterFS with iptables by restricting the TCP ports it uses to trusted hosts only.
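A minimal sketch using iptables, assuming a hypothetical trusted subnet of 10.0.1.0/24 and the default port layout of GlusterFS in this era: TCP 24007-24008 for the management daemons, one TCP port per brick starting at 24009 (49152 on GlusterFS 3.4 and later), and port 111 for portmapper. Adjust the ranges for your brick count and version:

root # iptables -A INPUT -p tcp -s 10.0.1.0/24 --dport 24007:24047 -j ACCEPT
root # iptables -A INPUT -p tcp -s 10.0.1.0/24 --dport 111 -j ACCEPT
root # iptables -A INPUT -p udp -s 10.0.1.0/24 --dport 111 -j ACCEPT
root # iptables -A INPUT -p tcp --dport 24007:24047 -j DROP
root # iptables -A INPUT -p tcp --dport 111 -j DROP
root # iptables -A INPUT -p udp --dport 111 -j DROP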