High-Availability Storage With GlusterFS On Mandriva 2010.0 – Automatic File Replication Across Two Storage Servers


This tutorial shows how to set up high-availability storage with two storage servers (Mandriva 2010.0) that use GlusterFS. Each storage server will be a mirror of the other, and files will be replicated automatically across both servers. The client system (Mandriva 2010.0 as well) will be able to access the storage as if it were a local filesystem. GlusterFS is a clustered file system capable of scaling to several petabytes. It aggregates various storage bricks over InfiniBand RDMA or TCP/IP interconnect into one large parallel network file system. Storage bricks can be made of any commodity hardware, such as x86_64 servers with SATA-II RAID and InfiniBand HBA.

I do not issue any guarantee that this will work for you!

 

1 Preliminary Note

In this tutorial I use three systems, two servers and a client:

  • server1.example.com: IP address 192.168.0.100 (server)
  • server2.example.com: IP address 192.168.0.101 (server)
  • client1.example.com: IP address 192.168.0.102 (client)

All three systems should be able to resolve the other systems’ hostnames. If this cannot be done through DNS, you should edit the /etc/hosts file so that it looks as follows on all three systems:

vi /etc/hosts

127.0.0.1       localhost.localdomain   localhost
192.168.0.100   server1.example.com     server1
192.168.0.101   server2.example.com     server2
192.168.0.102   client1.example.com     client1

(It is also possible to use IP addresses instead of hostnames in the following setup. If you prefer IP addresses, you don't have to worry about whether the hostnames can be resolved.)

 

2 Setting Up The GlusterFS Servers

server1.example.com/server2.example.com:

GlusterFS is available as a package for Mandriva 2010.0, so we can install it as follows:

urpmi glusterfs-server

The command

glusterfs --version

should now show the GlusterFS version that you’ve just installed (2.0.6 in this case):

[root@server1 administrator]# glusterfs --version
glusterfs 2.0.6 built on Sep 20 2009 06:40:50
Repository revision: v2.0.6
Copyright (c) 2006-2009 Z RESEARCH Inc. <http://www.zresearch.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.
[root@server1 administrator]#

Next we create a few directories (note that the configuration below only exports /data/export; /data/export-ns is not referenced by it, but creating it does no harm):

mkdir /data/
mkdir /data/export
mkdir /data/export-ns

Now we create the GlusterFS server configuration file /etc/glusterfs/glusterfsd.vol which defines which directory will be exported (/data/export) and what client is allowed to connect (192.168.0.102 = client1.example.com):

vi /etc/glusterfs/glusterfsd.vol

# POSIX storage translator: the actual data is stored in /data/export
volume posix
  type storage/posix
  option directory /data/export
end-volume

# POSIX locking on top of the storage translator
volume locks
  type features/locks
  subvolumes posix
end-volume

# I/O threads for better concurrency
volume brick
  type performance/io-threads
  option thread-count 8
  subvolumes locks
end-volume

# Export the "brick" volume over TCP to the allowed client(s)
volume server
  type protocol/server
  option transport-type tcp
  option auth.addr.brick.allow 192.168.0.102
  subvolumes brick
end-volume

Please note that it is possible to use wildcards for the IP addresses (like 192.168.*) and that you can specify multiple IP addresses separated by commas (e.g. 192.168.0.102,192.168.0.103).
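For example, to allow every client from the 192.168.0.x network, the allow line could look like this:

option auth.addr.brick.allow 192.168.0.*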

Afterwards we restart the GlusterFS server:

/etc/init.d/glusterfsd restart
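To verify that the daemon is actually running and accepting connections (GlusterFS 2.x should be listening on TCP port 6996 by default), you can run:

netstat -tap | grep glusterfsd

If you also want glusterfsd to be started automatically at boot time, you can enable its init script (assuming Mandriva's standard SysV tools):

chkconfig glusterfsd on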

3 Setting Up The GlusterFS Client

client1.example.com:

On the client, we install the GlusterFS client packages as follows:

urpmi glusterfs-client glusterfs-server

Then we create the following directory:

mkdir /mnt/glusterfs

Next we create the file /etc/glusterfs/glusterfs.vol:

vi /etc/glusterfs/glusterfs.vol

# Connection to the first storage server
volume remote1
  type protocol/client
  option transport-type tcp
  option remote-host server1.example.com
  option remote-subvolume brick
end-volume

# Connection to the second storage server
volume remote2
  type protocol/client
  option transport-type tcp
  option remote-host server2.example.com
  option remote-subvolume brick
end-volume

# Mirror all file operations to both servers
volume replicate
  type cluster/replicate
  subvolumes remote1 remote2
end-volume

# Buffer writes before sending them over the network
volume writebehind
  type performance/write-behind
  option window-size 1MB
  subvolumes replicate
end-volume

# Cache reads on the client
volume cache
  type performance/io-cache
  option cache-size 512MB
  subvolumes writebehind
end-volume

Make sure you use the correct server hostnames or IP addresses in the option remote-host lines!

That’s it! Now we can mount the GlusterFS filesystem to /mnt/glusterfs with one of the following two commands:

glusterfs -f /etc/glusterfs/glusterfs.vol /mnt/glusterfs

or

mount -t glusterfs /etc/glusterfs/glusterfs.vol /mnt/glusterfs

You should now see the new share in the outputs of…

mount

[root@client1 administrator]# mount
/dev/sda1 on / type ext4 (rw,relatime)
none on /proc type proc (rw)
/dev/sda6 on /home type ext4 (rw,relatime)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
rpc_pipefs on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
nfsd on /proc/fs/nfsd type nfsd (rw)
/etc/glusterfs/glusterfs.vol on /mnt/glusterfs type fuse.glusterfs (rw,allow_other,default_permissions,max_read=131072)
[root@client1 administrator]#

… and…

df -h

[root@client1 administrator]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1              12G  1.5G  9.8G  13% /
/dev/sda6              16G  172M   16G   2% /home
/etc/glusterfs/glusterfs.vol
29G  1.7G   26G   6% /mnt/glusterfs
[root@client1 administrator]#

(server1.example.com and server2.example.com each have 29GB of space for the GlusterFS filesystem, but because the data is mirrored, the client doesn’t see 58GB (2 x 29GB), but only 29GB.)

Instead of mounting the GlusterFS share manually on the client, you could modify /etc/fstab so that the share gets mounted automatically when the client boots.

Open /etc/fstab and append the following line:

vi /etc/fstab

[...]
/etc/glusterfs/glusterfs.vol  /mnt/glusterfs  glusterfs  defaults  0  0
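(Should the share fail to mount at boot time because the network is not up yet at that point, you can try adding the generic _netdev fstab option, which tells the init scripts to mount the filesystem only after the network has been started; whether this is needed depends on your init setup:)

/etc/glusterfs/glusterfs.vol  /mnt/glusterfs  glusterfs  defaults,_netdev  0  0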

To test if your modified /etc/fstab is working, reboot the client:

reboot
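If you'd rather not reboot, you can also test the new entry right away by unmounting the share and letting mount process /etc/fstab again:

umount /mnt/glusterfs
mount -a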

In either case, you should now find the share in the outputs of…

df -h

… and…

mount

 

4 Testing

Now let’s create some test files on the GlusterFS share:

client1.example.com:

touch /mnt/glusterfs/test1
touch /mnt/glusterfs/test2

Now let’s check the /data/export directory on server1.example.com and server2.example.com. The test1 and test2 files should be present on each node:

server1.example.com/server2.example.com:

ls -l /data/export

[root@server1 administrator]# ls -l /data/export
total 0
-rw-r--r-- 1 root root 0 2009-12-18 15:37 test1
-rw-r--r-- 1 root root 0 2009-12-18 15:37 test2
[root@server1 administrator]#

Now we shut down server1.example.com and add/delete some files on the GlusterFS share on client1.example.com.

server1.example.com:

shutdown -h now

client1.example.com:

touch /mnt/glusterfs/test3
touch /mnt/glusterfs/test4
rm -f /mnt/glusterfs/test2

The changes should be visible in the /data/export directory on server2.example.com:

server2.example.com:

ls -l /data/export

[root@server2 administrator]# ls -l /data/export
total 0
-rw-r--r-- 1 root root 0 2009-12-18 15:37 test1
-rw-r--r-- 1 root root 0 2009-12-18 15:39 test3
-rw-r--r-- 1 root root 0 2009-12-18 15:39 test4
[root@server2 administrator]#

Let’s boot server1.example.com again and take a look at the /data/export directory:

server1.example.com:

ls -l /data/export

[root@server1 administrator]# ls -l /data/export
total 0
-rw-r--r-- 1 root root 0 2009-12-18 15:37 test1
-rw-r--r-- 1 root root 0 2009-12-18 15:37 test2
[root@server1 administrator]#

As you see, server1.example.com hasn't noticed the changes that happened while it was down. This is easy to fix: all we need to do is invoke a read command on the GlusterFS share on client1.example.com, e.g.:

client1.example.com:

ls -l /mnt/glusterfs/

[root@client1 administrator]# ls -l /mnt/glusterfs/
total 0
-rw-r--r-- 1 root root 0 2009-12-18 15:37 test1
-rw-r--r-- 1 root root 0 2009-12-18 15:39 test3
-rw-r--r-- 1 root root 0 2009-12-18 15:39 test4
[root@client1 administrator]#

Now take a look at the /data/export directory on server1.example.com again, and you should see that the changes have been replicated to that node:

server1.example.com:

ls -l /data/export

[root@server1 administrator]# ls -l /data/export
total 0
-rw-r--r-- 1 root root 0 2009-12-18 15:37 test1
-rw-r--r-- 1 root root 0 2009-12-18 15:39 test3
-rw-r--r-- 1 root root 0 2009-12-18 15:39 test4
[root@server1 administrator]#
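Note that in this GlusterFS version the replicate translator heals a file when it is accessed, which is why the simple ls above was enough for our few test files. On a share with many files and subdirectories, you can trigger a self-heal of everything by recursively reading the whole share on the client, for example:

ls -lR /mnt/glusterfs/ > /dev/null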

 

5 Links

  • GlusterFS: http://www.gluster.org/
  • Mandriva: http://www.mandriva.com/
