By Hitesh Jethva, Alibaba Cloud Tech Share Author. Tech Share is Alibaba Cloud’s incentive program to encourage the sharing of technical knowledge and best practices within the cloud community.
GlusterFS is a free, open source, and scalable network filesystem designed for data-intensive tasks such as cloud storage and media streaming. GlusterFS is made up of two components: a server and a client. The server runs glusterfsd, and the client is used to mount the exported filesystem. With GlusterFS, you can achieve high availability by distributing data across multiple volumes/nodes, while the client accesses the storage as if it were local. GlusterFS is a file-based, scale-out storage system that allows you to combine large numbers of commodity storage and compute resources into a high-performance, virtualized pool, scaling both capacity and performance on demand from terabytes to petabytes.
In this tutorial, we will set up a two-replica GlusterFS volume across three Alibaba Cloud Elastic Compute Service (ECS) instances running Ubuntu 16.04: two storage servers (GlusterFS1 and GlusterFS2) and one client (GlusterFS-Client).
First, log in to your Alibaba Cloud ECS Console and create three ECS instances, choosing Ubuntu 16.04 as the operating system with at least 2GB RAM each. Connect to each instance and log in as the root user.
Once you are logged in, run the following command on each instance to update the base system with the latest available packages.
apt-get update -y
Before starting, you will need to set up the /etc/hosts file on each instance so that the instances can reach one another by hostname. You can do this by editing the /etc/hosts file on each instance:
nano /etc/hosts
Add the following lines:
192.168.0.101 GlusterFS1
192.168.0.102 GlusterFS2
192.168.0.103 GlusterFS-Client
Save and close the file. Then verify hostname resolution using the following commands:
ping -c 3 GlusterFS1
ping -c 3 GlusterFS2
ping -c 3 GlusterFS-Client
Next, you will need to install GlusterFS on both GlusterFS instances. GlusterFS is not available in the default Ubuntu 16.04 repository, so you will need to add the Gluster PPA first. You can do this by running the following commands on both instances:
apt-get install software-properties-common -y
add-apt-repository ppa:gluster/glusterfs-3.10
Output:
GlusterFS 3.10
More info: https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.10
Press [ENTER] to continue or ctrl-c to cancel adding it
gpg: keyring `/tmp/tmpj4keidrx/secring.gpg' created
gpg: keyring `/tmp/tmpj4keidrx/pubring.gpg' created
gpg: requesting key 3FE869A9 from hkp server keyserver.ubuntu.com
gpg: /tmp/tmpj4keidrx/trustdb.gpg: trustdb created
gpg: key 3FE869A9: public key "Launchpad PPA for Gluster" imported
gpg: Total number processed: 1
gpg: imported: 1 (RSA: 1)
OK
Once the repository is added, update the package index and install GlusterFS by running the following commands:
apt-get update -y
apt-get install glusterfs-server -y
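You can optionally confirm the installed version on each instance before proceeding:
gluster --version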
Next, start the GlusterFS service and enable it to start at boot time with the following commands:
systemctl start glusterfs-server
systemctl enable glusterfs-server
You can check the status of GlusterFS with the following command:
systemctl status glusterfs-server
Output:
glusterfs-server.service - LSB: GlusterFS server
Loaded: loaded (/etc/init.d/glusterfs-server; bad; vendor preset: enabled)
Active: active (running) since Mon 2018-08-06 22:16:27 IST; 1min 1s ago
Docs: man:systemd-sysv-generator(8)
CGroup: /system.slice/glusterfs-server.service
└─8030 /usr/sbin/glusterd -p /var/run/glusterd.pid
Aug 06 22:16:22 Node1 systemd[1]: Starting LSB: GlusterFS server...
Aug 06 22:16:22 Node1 glusterfs-server[8019]: * Starting glusterd service glusterd
Aug 06 22:16:27 Node1 glusterfs-server[8019]: ...done.
Aug 06 22:16:27 Node1 systemd[1]: Started LSB: GlusterFS server.
Aug 06 22:17:23 Node1 systemd[1]: Started LSB: GlusterFS server.
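GlusterFS peers communicate over TCP port 24007 for management and, in the 3.x series, one port per brick starting at 49152. You can confirm that glusterd is listening and, if a host firewall such as ufw is active on your instances, open these ports between the nodes (a sketch, assuming the 192.168.0.0/24 network from the /etc/hosts file above; adjust the port range to the number of bricks you expect):
ss -tlnp | grep glusterd
ufw allow from 192.168.0.0/24 to any port 24007:24008 proto tcp
ufw allow from 192.168.0.0/24 to any port 49152:49251 proto tcp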
Next, you will need to create a partition on the additional data disk (/dev/sdb) attached to each GlusterFS instance. You can do this by running the following command on both instances:
fdisk /dev/sdb
Output:
Welcome to fdisk (util-linux 2.27.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Device does not contain a recognized partition table.
Created a new DOS disklabel with disk identifier 0x96eae0dd.
Command (m for help): n
Partition type
p primary (0 primary, 0 extended, 4 free)
e extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 1):
First sector (2048-4194303, default 2048):
Last sector, +sectors or +size{K,M,G,T,P} (2048-4194303, default 4194303):
Created a new partition 1 of type 'Linux' and of size 2 GiB.
Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
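If you prefer to script this step rather than answer the fdisk prompts, parted can create the same single primary partition non-interactively (a sketch; adjust the device name if your data disk is not /dev/sdb):
parted -s /dev/sdb mklabel msdos
parted -s /dev/sdb mkpart primary ext4 1MiB 100%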
Now, format the partition with the following command:
mkfs.ext4 /dev/sdb1
Output:
mke2fs 1.42.13 (17-May-2015)
Creating filesystem with 524032 4k blocks and 131072 inodes
Filesystem UUID: d8fc7e2b-a3a3-4e7d-b278-51cf8395c3b2
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912
Allocating group tables: done
Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done
Next, create a storage directory for GlusterFS and mount the partition (/dev/sdb1) on it:
mkdir /glusterfs
mount /dev/sdb1 /glusterfs
Next, make the mount persistent across reboots by editing the /etc/fstab file:
nano /etc/fstab
Add the following line:
/dev/sdb1 /glusterfs ext4 defaults 0 0
Save and close the file when you are finished.
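Device names such as /dev/sdb1 can change if disks are later added or reordered, so you may prefer to reference the partition by UUID in /etc/fstab instead. Look up the UUID with blkid and substitute it into the fstab line (the UUID below is just the example value from the mkfs output above):
blkid /dev/sdb1
UUID=d8fc7e2b-a3a3-4e7d-b278-51cf8395c3b2 /glusterfs ext4 defaults 0 0
Either way, you can test the entry without rebooting:
umount /glusterfs
mount -a
df -h /glusterfs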
Next, you will need to create a trusted storage pool on the GlusterFS1 instance by adding GlusterFS2 to it. You can do this by running the following command on the GlusterFS1 server:
gluster peer probe GlusterFS2
Output:
peer probe: success.
You can then verify the status of the trusted storage pool with the following command:
gluster peer status
You can also list the storage pool with the following command:
gluster pool list
Output:
UUID                                    Hostname        State
64fca937-4fde-4d13-bd85-a05ba906e1f1    GlusterFS2      Connected
eda74d66-597d-4d80-a408-e20093401fea    localhost       Connected
Next, you will need to create a brick directory named gvol0 inside the mounted filesystem on both GlusterFS instances:
mkdir /glusterfs/gvol0
Now, create a volume named "gvol0" with two replicas by running the following command on the GlusterFS1 instance:
gluster volume create gvol0 replica 2 GlusterFS1:/glusterfs/gvol0 GlusterFS2:/glusterfs/gvol0
Output:
volume create: gvol0: success: please start the volume to access data
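Note that a two-replica volume is vulnerable to split-brain if the two servers lose contact with each other while clients continue writing. If you can add a third instance, an arbiter brick stores only file metadata but lets the volume resolve such conflicts; a sketch of the equivalent command, assuming a hypothetical third server named GlusterFS3 with the same brick path:
gluster volume create gvol0 replica 3 arbiter 1 GlusterFS1:/glusterfs/gvol0 GlusterFS2:/glusterfs/gvol0 GlusterFS3:/glusterfs/gvol0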
Now, start the volume with the following command:
gluster volume start gvol0
You can now check the status of the created volume with the following command:
gluster volume info gvol0
Output:
Volume Name: gvol0
Type: Replicate
Volume ID: 94f27972-9ecf-49f1-810c-67d3c6d219ce
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: GlusterFS1:/glusterfs/gvol0
Brick2: GlusterFS2:/glusterfs/gvol0
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
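While gluster volume info shows the volume configuration, you can also confirm that the brick processes themselves are online with:
gluster volume status gvol0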
Next, you will need to install the glusterfs-client package on the GlusterFS-Client instance. As with the servers, the package is not available in the default Ubuntu 16.04 repository, so you will need to add the Gluster PPA first. You can do this by running the following commands:
apt-get install software-properties-common -y
add-apt-repository ppa:gluster/glusterfs-3.10
Once the repository is added, update the package index and install the GlusterFS client by running the following commands:
apt-get update -y
apt-get install glusterfs-client -y
Next, create a directory to mount GlusterFS filesystem:
mkdir /glusterfs
Now, mount the GlusterFS file system on /glusterfs with the following command:
mount -t glusterfs GlusterFS1:/gvol0 /glusterfs
You can verify the mounted GlusterFS file system with the following command:
cat /proc/mounts | grep glusterfs
Output:
GlusterFS1:/gvol0 /glusterfs fuse.glusterfs rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072 0 0
Next, make the mount persistent across reboots by editing the /etc/fstab file:
nano /etc/fstab
Add the following line:
GlusterFS1:/gvol0 /glusterfs glusterfs defaults,_netdev 0 0
Save and close the file when you are finished.
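Note that with the mount command and fstab entry above, the client fetches the volume layout from GlusterFS1 at mount time, so a mount attempted while GlusterFS1 is down will fail even though GlusterFS2 holds a full replica. As an optional hardening step, you can name a fallback server for fetching the volume file (option name as in the GlusterFS 3.x FUSE client):
mount -t glusterfs -o backup-volfile-servers=GlusterFS2 GlusterFS1:/gvol0 /glusterfs
The matching fstab line would be:
GlusterFS1:/gvol0 /glusterfs glusterfs defaults,_netdev,backup-volfile-servers=GlusterFS2 0 0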
The GlusterFS storage pool and volume are now configured. It's time to test GlusterFS replication and high availability.
To check replication, mount the GlusterFS volume on both GlusterFS instances:
On GlusterFS1:
mount -t glusterfs GlusterFS1:/gvol0 /mnt
On GlusterFS2:
mount -t glusterfs GlusterFS2:/gvol0 /mnt
Now, go to the GlusterFS Client instance and create some files on the mounted filesystem:
touch /glusterfs/test1
touch /glusterfs/test2
Now, verify that the files have been replicated to both GlusterFS instances by running the following command:
On GlusterFS1:
ls -l /mnt
You should see the same files that we created on the GlusterFS-Client:
total 0
-rw-r--r-- 1 root root 0 Aug 6 22:39 test1
-rw-r--r-- 1 root root 0 Aug 6 22:39 test2
On GlusterFS2:
ls -l /mnt
You should see the same files that we created on the GlusterFS-Client:
total 0
-rw-r--r-- 1 root root 0 Aug 6 22:39 test1
-rw-r--r-- 1 root root 0 Aug 6 22:39 test2
Replication is now working fine.
To check high availability, shut down the GlusterFS1 instance.
Now, go to the GlusterFS client instance and check the availability of the files:
On GlusterFS Client:
ls -l /glusterfs/
You should still see the files even though GlusterFS1 is down; the GlusterFS client connects to all replica bricks directly, so it keeps serving data from GlusterFS2.
Next, create some files on GlusterFS Client:
On GlusterFS Client:
touch /glusterfs/test3
touch /glusterfs/test4
touch /glusterfs/test5
All these files are written only to GlusterFS2 while GlusterFS1 is down. Now, start the GlusterFS1 instance again and mount the GlusterFS filesystem:
On GlusterFS1:
mount -t glusterfs GlusterFS1:/gvol0 /mnt
Now, check the /mnt directory:
ls -l /mnt
Output:
-rw-r--r-- 1 root root 0 Aug 6 22:39 test1
-rw-r--r-- 1 root root 0 Aug 6 22:39 test2
-rw-r--r-- 1 root root 0 Aug 6 22:58 test3
-rw-r--r-- 1 root root 0 Aug 6 22:58 test4
-rw-r--r-- 1 root root 0 Aug 6 22:58 test5
You should see all five files that we created on the GlusterFS-Client in the above output, which means high availability is working fine.
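Behind the scenes, GlusterFS's self-heal daemon copies the files created during the outage back to the brick on GlusterFS1. You can watch that process, and confirm that nothing is left to heal, with the following command on either server:
gluster volume heal gvol0 info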