Distributed Replicated Storage Across Four Storage Nodes With GlusterFS On CentOS 5.4


This tutorial shows how to combine four single storage servers (running CentOS 5.4) into one distributed replicated storage with GlusterFS. Nodes 1 and 2 (replicate1) as well as nodes 3 and 4 (replicate2) will mirror each other, and replicate1 and replicate2 will be combined into one larger storage volume (distribute). Basically, this is RAID10 over network: if you lose one server from replicate1 and one server from replicate2, the distributed volume continues to work. The client system (CentOS 5.4 as well) will be able to access the storage as if it were a local filesystem. GlusterFS is a clustered file system capable of scaling to several petabytes. It aggregates various storage bricks over Infiniband RDMA or TCP/IP interconnect into one large parallel network file system. Storage bricks can be made of any commodity hardware such as x86_64 servers with SATA-II RAID and Infiniband HBA.
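The layout can be sketched roughly like this (the names replicate1, replicate2, and distribute correspond to the volume names used in the client configuration later in this tutorial):

                        distribute  (~ RAID0)
                      /             \
         replicate1 (~ RAID1)   replicate2 (~ RAID1)
          /         \             /         \
     server1      server2     server3      server4
 192.168.0.100 192.168.0.101 192.168.0.102 192.168.0.103

The client (192.168.0.104) mounts the distribute volume at /mnt/glusterfs.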

I do not issue any guarantee that this will work for you!

 

1 Preliminary Note

In this tutorial I use five systems, four servers and a client:

  • server1.example.com: IP address 192.168.0.100 (server)
  • server2.example.com: IP address 192.168.0.101 (server)
  • server3.example.com: IP address 192.168.0.102 (server)
  • server4.example.com: IP address 192.168.0.103 (server)
  • client1.example.com: IP address 192.168.0.104 (client)

All five systems should be able to resolve the other systems’ hostnames. If this cannot be done through DNS, you should edit the /etc/hosts file so that it contains the following lines on all five systems:

vi /etc/hosts

[...]
192.168.0.100   server1.example.com     server1
192.168.0.101   server2.example.com     server2
192.168.0.102   server3.example.com     server3
192.168.0.103   server4.example.com     server4
192.168.0.104   client1.example.com     client1
[...]

(It is also possible to use IP addresses instead of hostnames in the following setup. If you prefer to use IP addresses, you don’t have to care about whether the hostnames can be resolved or not.)
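To quickly check that name resolution works, you can, for example, ping each system once by name from every host (an optional sanity check):

ping -c 1 server1.example.com
ping -c 1 server2.example.com
ping -c 1 server3.example.com
ping -c 1 server4.example.com
ping -c 1 client1.example.com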

 

2 Setting Up The GlusterFS Servers

server1.example.com/server2.example.com/server3.example.com/server4.example.com:

GlusterFS isn’t available as a package for CentOS 5.4, therefore we have to build it ourselves. First we install the prerequisites:

yum groupinstall 'Development Tools'

yum groupinstall 'Development Libraries'

yum install libibverbs-devel fuse-devel

Then we download the latest GlusterFS release from http://www.gluster.org/download.php and build it as follows:

cd /tmp
wget http://ftp.gluster.com/pub/gluster/glusterfs/2.0/LATEST/glusterfs-2.0.9.tar.gz
tar xvfz glusterfs-2.0.9.tar.gz
cd glusterfs-2.0.9
./configure

At the end of the ./configure command, you should see something like this:

[...]
GlusterFS configure summary
===========================
FUSE client        : yes
Infiniband verbs   : yes
epoll IO multiplex : yes
Berkeley-DB        : yes
libglusterfsclient : yes
argp-standalone    : no

[root@server1 glusterfs-2.0.9]#

make && make install
ldconfig

Check the GlusterFS version afterwards (should be 2.0.9):

glusterfs --version

[root@server1 glusterfs-2.0.9]# glusterfs --version
glusterfs 2.0.9 built on Mar 1 2010 15:34:50
Repository revision: v2.0.9
Copyright (c) 2006-2009 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.
[root@server1 glusterfs-2.0.9]#

Next we create a few directories:

mkdir /data/
mkdir /data/export
mkdir /data/export-ns
mkdir /etc/glusterfs

Now we create the GlusterFS server configuration file /etc/glusterfs/glusterfsd.vol which defines which directory will be exported (/data/export) and what client is allowed to connect (192.168.0.104 = client1.example.com):

vi /etc/glusterfs/glusterfsd.vol

volume posix
  type storage/posix
  option directory /data/export
end-volume

volume locks
  type features/locks
  subvolumes posix
end-volume

volume brick
  type performance/io-threads
  option thread-count 8
  subvolumes locks
end-volume

volume server
  type protocol/server
  option transport-type tcp
  option auth.addr.brick.allow 192.168.0.104
  subvolumes brick
end-volume

Please note that it is possible to use wildcards for the IP addresses (like 192.168.*) and that you can specify multiple IP addresses separated by commas (e.g. 192.168.0.104,192.168.0.105).
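For example, each of the following auth lines would be valid in the server volume above (the second one is an illustration with a hypothetical second client at 192.168.0.105):

option auth.addr.brick.allow 192.168.*
option auth.addr.brick.allow 192.168.0.104,192.168.0.105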

Afterwards we create the following symlink…

ln -s /usr/local/sbin/glusterfsd /sbin/glusterfsd

… and then the system startup links for the GlusterFS server and start it:

chkconfig --levels 35 glusterfsd on
/etc/init.d/glusterfsd start
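If you want to verify that the GlusterFS server is actually running and accepting connections, a quick optional check is to look for it in the socket list (by default, GlusterFS 2.0 listens on TCP port 6996):

netstat -tap | grep glusterfsd

You should see a glusterfsd process in LISTEN state.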

3 Setting Up The GlusterFS Client

client1.example.com:

GlusterFS isn’t available as a package for CentOS 5.4, therefore we have to build it ourselves. First we install the prerequisites:

yum groupinstall 'Development Tools'

yum groupinstall 'Development Libraries'

yum install libibverbs-devel fuse-devel

Then we load the fuse kernel module…

modprobe fuse

… and create the file /etc/rc.modules with the following contents so that the fuse kernel module will be loaded automatically whenever the system boots:

vi /etc/rc.modules

modprobe fuse

Make the file executable:

chmod +x /etc/rc.modules
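To verify that the fuse module is actually loaded, you can optionally run:

lsmod | grep fuse

If the module is loaded, lsmod prints a line for it; if there is no output, run modprobe fuse again.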

Then we download the GlusterFS 2.0.9 sources (please note that this must be the same version as the one installed on the servers!) and build GlusterFS as follows:

cd /tmp
wget http://ftp.gluster.com/pub/gluster/glusterfs/2.0/LATEST/glusterfs-2.0.9.tar.gz
tar xvfz glusterfs-2.0.9.tar.gz
cd glusterfs-2.0.9
./configure

At the end of the ./configure command, you should see something like this:

[...]
GlusterFS configure summary
===========================
FUSE client        : yes
Infiniband verbs   : yes
epoll IO multiplex : yes
Berkeley-DB        : yes
libglusterfsclient : yes
argp-standalone    : no

make && make install
ldconfig

Check the GlusterFS version afterwards (should be 2.0.9):

glusterfs --version

[root@client1 glusterfs-2.0.9]# glusterfs --version
glusterfs 2.0.9 built on Mar 1 2010 15:58:06
Repository revision: v2.0.9
Copyright (c) 2006-2009 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.
[root@client1 glusterfs-2.0.9]#

Then we create the following two directories:

mkdir /mnt/glusterfs
mkdir /etc/glusterfs

Next we create the file /etc/glusterfs/glusterfs.vol:

vi /etc/glusterfs/glusterfs.vol

volume remote1
  type protocol/client
  option transport-type tcp
  option remote-host server1.example.com
  option remote-subvolume brick
end-volume

volume remote2
  type protocol/client
  option transport-type tcp
  option remote-host server2.example.com
  option remote-subvolume brick
end-volume

volume remote3
  type protocol/client
  option transport-type tcp
  option remote-host server3.example.com
  option remote-subvolume brick
end-volume

volume remote4
  type protocol/client
  option transport-type tcp
  option remote-host server4.example.com
  option remote-subvolume brick
end-volume

volume replicate1
  type cluster/replicate
  subvolumes remote1 remote2
end-volume

volume replicate2
  type cluster/replicate
  subvolumes remote3 remote4
end-volume

volume distribute
  type cluster/distribute
  subvolumes replicate1 replicate2
end-volume

volume writebehind
  type performance/write-behind
  option window-size 1MB
  subvolumes distribute
end-volume

volume cache
  type performance/io-cache
  option cache-size 512MB
  subvolumes writebehind
end-volume
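A note on how this file is structured: each volume references the ones below it via subvolumes, so the translators form a stack (the protocol/client bricks at the bottom, then replicate, distribute, write-behind, and io-cache on top), and the client mounts the top-most volume, here cache, because no other volume references it. If you ever want to mount a specific volume from the file instead, the glusterfs client accepts a volume name option; to the best of my knowledge this works in the 2.0 series, but treat it as an assumption:

glusterfs -f /etc/glusterfs/glusterfs.vol --volume-name=cache /mnt/glusterfs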

Make sure you use the correct server hostnames or IP addresses in the option remote-host lines!

That’s it! Now we can mount the GlusterFS filesystem to /mnt/glusterfs with one of the following two commands:

glusterfs -f /etc/glusterfs/glusterfs.vol /mnt/glusterfs

or

mount -t glusterfs /etc/glusterfs/glusterfs.vol /mnt/glusterfs

You should now see the new share in the outputs of…

mount

[root@client1 ~]# mount
/dev/mapper/VolGroup00-LogVol00 on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/sda1 on /boot type ext3 (rw)
tmpfs on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
glusterfs#/etc/glusterfs/glusterfs.vol on /mnt/glusterfs type fuse (rw,allow_other,default_permissions,max_read=131072)
[root@client1 ~]#

… and…

df -h

[root@client1 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
29G  2.2G   25G   9% /
/dev/sda1              99M   13M   82M  14% /boot
tmpfs                 187M     0  187M   0% /dev/shm
glusterfs#/etc/glusterfs/glusterfs.vol
56G  2.3G   54G   4% /mnt/glusterfs
[root@client1 ~]#

(The size of the distributed storage is the sum of replicate1 and replicate2, where each replicated volume is only as big as its smallest brick. In the example above, each brick offers about 28GB, so each replicated pair contributes about 28GB, which adds up to the 56GB shown by df.)

Instead of mounting the GlusterFS share manually on the client, you could modify /etc/fstab so that the share gets mounted automatically when the client boots.

Open /etc/fstab and append the following line:

vi /etc/fstab

[...]
/etc/glusterfs/glusterfs.vol  /mnt/glusterfs  glusterfs  defaults  0  0
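If the share fails to mount at boot because the network is not yet up when the entry is processed, it may help to add the _netdev mount option, which defers the mount until networking is available (this is a general fstab mechanism; I have not tested it with this exact setup):

/etc/glusterfs/glusterfs.vol  /mnt/glusterfs  glusterfs  defaults,_netdev  0  0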

To test if your modified /etc/fstab is working, reboot the client:

reboot
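(If you prefer to test the fstab entry without rebooting, you can also unmount the share and let mount process /etc/fstab again; this is just an optional shortcut:)

umount /mnt/glusterfs
mount -a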

After the reboot, you should find the share in the outputs of…

df -h

… and…

mount

 

4 Testing

Now let’s create some test files on the GlusterFS share:

client1.example.com:

touch /mnt/glusterfs/test1
touch /mnt/glusterfs/test2
touch /mnt/glusterfs/test3
touch /mnt/glusterfs/test4
touch /mnt/glusterfs/test5
touch /mnt/glusterfs/test6

Now let’s check the /data/export directory on server1.example.com, server2.example.com, server3.example.com, and server4.example.com. You will notice that replicate1 as well as replicate2 hold only a part of the files/directories that make up the GlusterFS share on the client, but the nodes that make up replicate1 (server1 and server2) or replicate2 (server3 and server4) contain the same files (mirroring). The distribute translator uses a hash of each file name to decide whether a file is stored on replicate1 or replicate2:

server1.example.com:

ls -l /data/export

[root@server1 ~]# ls -l /data/export
total 0
-rw-r--r-- 1 root root 0 2010-02-23 15:41 test1
-rw-r--r-- 1 root root 0 2010-02-23 15:41 test2
-rw-r--r-- 1 root root 0 2010-02-23 15:41 test4
-rw-r--r-- 1 root root 0 2010-02-23 15:41 test5
[root@server1 ~]#

server2.example.com:

ls -l /data/export

[root@server2 ~]# ls -l /data/export
total 0
-rw-r--r-- 1 root root 0 2010-02-23 15:41 test1
-rw-r--r-- 1 root root 0 2010-02-23 15:41 test2
-rw-r--r-- 1 root root 0 2010-02-23 15:41 test4
-rw-r--r-- 1 root root 0 2010-02-23 15:41 test5
[root@server2 ~]#

server3.example.com:

ls -l /data/export

[root@server3 ~]# ls -l /data/export
total 0
-rw-r--r-- 1 root root 0 2010-02-23 15:41 test3
-rw-r--r-- 1 root root 0 2010-02-23 15:41 test6
[root@server3 ~]#

server4.example.com:

ls -l /data/export

[root@server4 ~]# ls -l /data/export
total 0
-rw-r--r-- 1 root root 0 2010-02-23 15:41 test3
-rw-r--r-- 1 root root 0 2010-02-23 15:41 test6
[root@server4 ~]#

Now we shut down server1.example.com and server4.example.com and delete some files on the GlusterFS share on client1.example.com.

server1.example.com/server4.example.com:

shutdown -h now

client1.example.com:

rm -f /mnt/glusterfs/test5
rm -f /mnt/glusterfs/test6

The changes should be visible in the /data/export directory on server2.example.com and server3.example.com:

server2.example.com:

ls -l /data/export

[root@server2 ~]# ls -l /data/export
total 0
-rw-r--r-- 1 root root 0 2010-02-23 15:41 test1
-rw-r--r-- 1 root root 0 2010-02-23 15:41 test2
-rw-r--r-- 1 root root 0 2010-02-23 15:41 test4
[root@server2 ~]#

server3.example.com:

ls -l /data/export

[root@server3 ~]# ls -l /data/export
total 0
-rw-r--r-- 1 root root 0 2010-02-23 15:41 test3
[root@server3 ~]#

Let’s boot server1.example.com and server4.example.com again and take a look at the /data/export directory:

server1.example.com:

ls -l /data/export

[root@server1 ~]# ls -l /data/export
total 0
-rw-r--r-- 1 root root 0 2010-02-23 15:41 test1
-rw-r--r-- 1 root root 0 2010-02-23 15:41 test2
-rw-r--r-- 1 root root 0 2010-02-23 15:41 test4
-rw-r--r-- 1 root root 0 2010-02-23 15:41 test5
[root@server1 ~]#

server4.example.com:

ls -l /data/export

[root@server4 ~]# ls -l /data/export
total 0
-rw-r--r-- 1 root root 0 2010-02-23 15:41 test3
-rw-r--r-- 1 root root 0 2010-02-23 15:41 test6
[root@server4 ~]#

As you see, server1.example.com and server4.example.com haven’t noticed the changes that happened while they were down. This is easy to fix: all we need to do is invoke a read command on the GlusterFS share on client1.example.com, e.g.:

client1.example.com:

ls -l /mnt/glusterfs/

[root@client1 ~]# ls -l /mnt/glusterfs/
total 0
-rw-r--r-- 1 root root 0 2010-02-23 15:41 test1
-rw-r--r-- 1 root root 0 2010-02-23 15:41 test2
-rw-r--r-- 1 root root 0 2010-02-23 15:41 test3
-rw-r--r-- 1 root root 0 2010-02-23 15:41 test4
[root@client1 ~]#

Now take a look at the /data/export directory on server1.example.com and server4.example.com again, and you should see that the changes have been replicated to these nodes:

server1.example.com:

ls -l /data/export

[root@server1 ~]# ls -l /data/export
total 0
-rw-r--r-- 1 root root 0 2010-02-23 15:41 test1
-rw-r--r-- 1 root root 0 2010-02-23 15:41 test2
-rw-r--r-- 1 root root 0 2010-02-23 15:41 test4
[root@server1 ~]#

server4.example.com:

ls -l /data/export

[root@server4 ~]# ls -l /data/export
total 0
-rw-r--r-- 1 root root 0 2010-02-23 15:41 test3
[root@server4 ~]#

 

  • GlusterFS: http://www.gluster.org/
  • CentOS: http://www.centos.org/
