
Setting Up A Standalone Storage Server With GlusterFS And Samba On Debian Squeeze

This tutorial shows how to set up a standalone storage server on Debian 6.0, using GlusterFS and Samba, plus custom scripts and settings to make life easier 😉

I do not issue any guarantee that this will work for you!

I do not issue any guarantee that you will understand my poor English 😉


1 Preliminary Note

This tutorial is based on Falko Timme's article.

Linux Distribution:

I'm using the Debian 6.0 (Squeeze) distribution. The installation of Debian is very simple, so I'm not going to explain it. Just remember that you'll need a disk or partition dedicated exclusively to data.


In this tutorial I use three systems: two storage nodes and a Windows client:

  • node1: IP address
  • node2: IP address
  • MS Windows client: IP address

2 Preparing The Nodes

We have to make sure that both nodes are up to date and have SSH and any other software we like or need installed:


apt-get update

apt-get install mc ssh

Both nodes must be able to resolve the other system's hostname:


vi /etc/hosts

127.0.0.1      localhost
<node1 IP>     node1
<node2 IP>     node2


Checking All Settings


ping -c 1 node2

PING ( 56(84) bytes of data.
64 bytes from ( icmp_req=1 ttl=64 time=0.818 ms

--- ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.818/0.818/0.818/0.000 ms


ping -c 1 node1

PING ( 56(84) bytes of data.
64 bytes from ( icmp_req=1 ttl=64 time=0.802 ms

--- ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.802/0.802/0.802/0.000 ms
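The manual pings above can also be wrapped in a small helper that verifies every hostname in one go. This is a sketch of my own (not from the original article); it assumes the node1/node2 names from /etc/hosts, and uses getent, which consults the same resolver (including /etc/hosts) that other tools use.

```shell
# check_hosts: verify that every given hostname resolves; fail on the
# first one that does not. A helper sketch, not part of the original setup.
check_hosts() {
  for h in "$@"; do
    if getent hosts "$h" > /dev/null; then
      echo "$h: ok"
    else
      echo "$h: NOT resolvable" >&2
      return 1
    fi
  done
}

# on each node: check_hosts node1 node2
```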


3 Setting Up The Data Disks

Each node has a dedicated data disk, and we need to set it up on both nodes:


fdisk /dev/sdb

Command (m for help): <– n
Command action
e   extended
p   primary partition (1-4) <– p
Partition number (1-4, default 1): <– 1
First sector (1-1305, default 1): <– ENTER
Using default value 1
Last sector, +sectors or +size{K,M,G} (1-1305, default 1305): <– ENTER
Using default value 1305

Command (m for help): <– t
Selected partition 1
Hex code (type L to list codes): <– 83
Command (m for help): <– w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.

Now run

fdisk -l

and you should find /dev/sdb1 in the output on both nodes:

Device Boot Start End Blocks Id System
/dev/sdb1 1 1305 10482381 83 Linux

Now we create a filesystem on /dev/sdb1 and mount it on the /data directory:


mkfs.ext3 /dev/sdb1
mkdir /data/
vi /etc/fstab

/dev/sdb1 /data ext3 defaults  0 0 

Now run

mount -a

After that, you should find the share in the output of

df -h

/dev/sdb1 9,9G 151M 9,2G 2% /data

and in the output of

mount

/dev/sdb1 on /data type ext3 (rw)
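If you script the node setup, the fstab edit can be made idempotent so that running it twice doesn't duplicate the entry. A sketch of my own; the function takes the fstab path as a parameter only to keep it testable.

```shell
# add_fstab_entry: append the /data mount line only if /dev/sdb1 is not
# already listed, so re-running the setup leaves a single entry.
add_fstab_entry() {
  fstab=$1
  line='/dev/sdb1 /data ext3 defaults 0 0'
  grep -qs '^/dev/sdb1[[:space:]]' "$fstab" || echo "$line" >> "$fstab"
}

# on each node: add_fstab_entry /etc/fstab && mount -a
```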

4 Setting Up The GlusterFS Servers


apt-get install glusterfs-server

Next we create a few directories:

mkdir /data/export
mkdir /data/export-ns

Now we need to create the GlusterFS server configuration file /etc/glusterfs/glusterfsd.vol:

cp /etc/glusterfs/glusterfsd.vol /etc/glusterfs/glusterfsd.vol_orig
cat /dev/null > /etc/glusterfs/glusterfsd.vol
vi /etc/glusterfs/glusterfsd.vol

volume posix
  type storage/posix
  option directory /data/export
end-volume

volume locks
  type features/locks
  subvolumes posix
end-volume

volume brick
  type performance/io-threads
  option thread-count 8
  subvolumes locks
end-volume

volume server
  type protocol/server
  option transport-type tcp
  option auth.addr.brick.allow
  subvolumes brick
end-volume

At last we can start the GlusterFS server:

/etc/init.d/glusterfs-server start
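Before moving on to the clients, it's worth checking that the daemon actually started and that the brick directory is in place. A small sanity-check sketch of my own (not from the original article):

```shell
# check_gluster_server: fail early if the brick directory is missing or if
# no glusterfsd process is running. The directory argument keeps it generic.
check_gluster_server() {
  brick=$1
  if [ ! -d "$brick" ]; then
    echo "missing brick directory: $brick" >&2
    return 1
  fi
  if ! pgrep glusterfsd > /dev/null 2>&1; then
    echo "glusterfsd does not appear to be running" >&2
    return 1
  fi
  echo "server looks ok"
}

# on each node: check_gluster_server /data/export
```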


5 Setting Up The GlusterFS Client

In this case, the MS Windows client needs access to both nodes via SMB. That's why both nodes work as GlusterFS server and client at the same time.
On both nodes:


First we need to create the client configuration file:

cp /etc/glusterfs/glusterfs.vol /etc/glusterfs/glusterfs.vol_orig
cat /dev/null > /etc/glusterfs/glusterfs.vol
vi /etc/glusterfs/glusterfs.vol

volume remote1
  type protocol/client
  option transport-type tcp
  option remote-host <node1 IP>
  option remote-subvolume brick
end-volume

volume remote2
  type protocol/client
  option transport-type tcp
  option remote-host <node2 IP>
  option remote-subvolume brick
end-volume

volume replicate
  type cluster/replicate
  subvolumes remote1 remote2
end-volume

volume writebehind
  type performance/write-behind
  option window-size 1MB
  subvolumes replicate
end-volume

volume cache
  type performance/io-cache
  option cache-size 512MB
  subvolumes writebehind
end-volume

Done! Our cluster is set up. Finally, we can mount the GlusterFS filesystem on the /home directory with one of the following two commands:

glusterfs -f /etc/glusterfs/glusterfs.vol /home


mount -t glusterfs /etc/glusterfs/glusterfs.vol /home

You should now see the mounted share:

df -h

/dev/sdb1 9,9G 151M 9,2G 2% /data
/etc/glusterfs/glusterfs.vol 9,9G 151M 9,2G 2% /home

As you can see, the same storage is mounted twice. That's because the GlusterFS server uses the /data directory, while the GlusterFS client uses the /home directory.

Of course we want the share to be mounted automatically when the servers start. The best way is to append the following line to /etc/rc.local (before the exit 0 line):

/bin/mount -t glusterfs /etc/glusterfs/glusterfs.vol /home
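A slightly more defensive variant for rc.local only mounts when /home is not already mounted. This is a sketch of my own; the mounts-file argument exists purely so the check can be tested against a sample file instead of the live /proc/mounts.

```shell
# is_mounted: true if the given mountpoint appears in the mounts table
# (defaults to /proc/mounts; the second argument is only for testing).
is_mounted() {
  grep -qs " $1 " "${2:-/proc/mounts}"
}

# in /etc/rc.local, before "exit 0":
# is_mounted /home || /bin/mount -t glusterfs /etc/glusterfs/glusterfs.vol /home
```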


Testing …

on node2

… run …

watch ls /home

on node1

… run …

touch /home/test.file

on node2

… you should see …

Every 2.0s: ls /home                Tue Dec 25 13:12:30 2012

test.file

That's it, the cluster is up and running. You may do some more tests to see how GlusterFS works.

6 Configuring Samba

In this case we assume that we have three departments.
Every employee has access to his home directory and to the company's public directory.
First, we need to install Samba:


apt-get install samba

Next, we need to configure /etc/samba/smb.conf on both nodes:


cp /etc/samba/smb.conf /etc/samba/smb.conf_orig
cat /dev/null > /etc/samba/smb.conf
vi /etc/samba/smb.conf

[global]
netbios name = CLUSTER

[public]
comment = Public directory
path = /home/public
force user = nobody
force group = nogroup
read only = No
create mask = 0664
directory mask = 0775
guest ok = Yes

[homes]
comment = Home directory %S
valid users = %S
read only = No
create mask = 0700
directory mask = 0700
browseable = No

[Dep_1]
comment = Department 1
path = /home/Dep_1
guest ok = no
browseable = yes
writeable = yes
create mask = 0660
directory mask = 0770
write list = @Dep_1

[Dep_2]
comment = Department 2
path = /home/Dep_2
guest ok = no
browseable = yes
writeable = yes
create mask = 0660
directory mask = 0770
write list = @Dep_2

[Dep_3]
comment = Department 3
path = /home/Dep_3
guest ok = no
browseable = yes
writeable = yes
create mask = 0660
directory mask = 0770
write list = @Dep_3

Create the user groups on both nodes:


groupadd Dep_1
groupadd Dep_2
groupadd Dep_3

Create the directories and set privileges (only on node1):

on node1

mkdir /home/public
mkdir /home/Dep_1
mkdir /home/Dep_2
mkdir /home/Dep_3
chmod 0770 /home/Dep_1
chmod 0770 /home/Dep_2
chmod 0770 /home/Dep_3
chown root:Dep_1 /home/Dep_1
chown root:Dep_2 /home/Dep_2
chown root:Dep_3 /home/Dep_3
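The per-department commands above all follow one pattern, so they can be collapsed into a loop. A sketch of my own; the base-directory parameter exists only to keep the function testable outside /home, and chown still needs root plus an existing Dep_* group, hence the guard.

```shell
# setup_department: create one department share with group-only access.
setup_department() {
  base=$1
  dep=$2
  mkdir -p "$base/$dep"
  chmod 0770 "$base/$dep"
  # chown needs root and the group to exist; ignore failure in dry runs
  chown "root:$dep" "$base/$dep" 2>/dev/null || true
}

# on node1: for d in Dep_1 Dep_2 Dep_3; do setup_department /home "$d"; done
```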

Make sure that all is fine…
… run …

ls -l /home/

… as a result, you should see:

drwxrwx--- 2 root Dep_1 4096 12-25 13:50 Dep_1
drwxrwx--- 2 root Dep_2 4096 12-25 13:50 Dep_2
drwxrwx--- 2 root Dep_3 4096 12-25 13:50 Dep_3
drwxrwxrwx 2 root root 4096 12-25 13:50 public


7 Custom Scripts And Settings To Make Life Easier 😉

How many times have you heard "Where is my very important file?" Sometimes users delete the wrong files from their shares. With Samba we can "avoid" this: at least we can recover the file, if we use the "recycle" module.

Append the following lines to the [global] section of /etc/samba/smb.conf:


vi /etc/samba/smb.conf

recycle:repository = /home/TRASH/%u_%I_%S
recycle:keeptree = TRUE
recycle:versions = TRUE
recycle:touch = TRUE

Next, append the following line to the [homes] section:

vfs objects = recycle  

Create TRASH directory:

mkdir /home/TRASH
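The recycle repository grows forever unless something empties it. A small housekeeping sketch of my own (not in the original article; the 30-day cutoff is an arbitrary choice) that could run from cron:

```shell
# purge_trash: delete recycled files older than N days, then remove any
# directories the deletions left empty.
purge_trash() {
  trash=$1
  days=${2:-30}
  find "$trash" -type f -mtime +"$days" -delete
  find "$trash" -mindepth 1 -type d -empty -delete
}

# e.g. a daily cron job: purge_trash /home/TRASH 30
```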

Of course, after that we need to restart Samba, which means we have to log in on every node and run:

/etc/init.d/samba restart

It would be easier if we didn't need to log in twice. We can arrange that in a few simple steps.
First we need to decide which node is more important; let's say it will be node1.
Next run the following command on node1:

on node1

ssh-keygen -t rsa

The ssh-keygen -t rsa command generates two files: one contains the private key, the other the public key.
Now we have to copy the public key to the other node and rename it to "authorized_keys".
Run these commands:

cp /root/.ssh/ /root/.ssh/authorized_keys
scp /root/.ssh/authorized_keys root@node2:~/authorized_keys

and run the following commands on node2:

on node2

mkdir /root/.ssh
mv /root/authorized_keys /root/.ssh

That's it. Now we can run the following command on node1 without having to enter any passwords:

ssh node2 /etc/init.d/samba restart
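With passwordless SSH in place, both restarts can go through one helper. A sketch of my own; the RUN variable is only a hook so the commands can be dry-run (set RUN=echo) instead of executed.

```shell
# samba_all: run the same init-script action on this node and on node2.
# Leave RUN unset for real use; set RUN=echo to only print the commands.
samba_all() {
  action=${1:-restart}
  $RUN /etc/init.d/samba "$action"
  $RUN ssh node2 /etc/init.d/samba "$action"
}

# real use on node1: samba_all restart
```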


8 The Very Last Part

To make sure that both nodes can be accessed via SMB with the same user accounts, we need to add the Linux and Samba user accounts on both nodes. Of course, we don't want to let users log in to either node's shell. We can do it this way (run the following commands):

on node1

Remember, be nice and say hello to every new user.

mkdir /root/skel
touch /root/skel/hello
echo "Hello My Friend" > /root/skel/hello

Next we create a useful script:

touch /root/
chmod 700 /root/
vi /root/

#!/bin/bash
#Positional parameters: user name, department, password (see usage below)
USER=$1
DEPARTMENT=$2
PASSWORD=$3

backup_time=`date +%Y.%m.%d_%H:%M`

#Add Linux user account
/usr/sbin/useradd -d /home/$DEPARTMENT/$USER -g $DEPARTMENT -G users,$DEPARTMENT -m -k /root/skel -s /bin/false $USER

#Set up directory permissions
chmod 700 /home/$DEPARTMENT/$USER

#Set up the Samba user's password
echo -ne "$PASSWORD\n$PASSWORD\n" | smbpasswd -s -a $USER

#Make a backup copy and sync the account files to node2
cp /etc/passwd /root/passwd.copy.$backup_time
cp /etc/group /root/group.copy.$backup_time
tar czvf /root/tdb_$backup_time.tgz /var/lib/samba
ssh node2 /etc/init.d/samba stop
scp /etc/passwd node2:/etc/passwd
scp /etc/group node2:/etc/group
scp -r /var/lib/samba/* node2:/var/lib/samba/
ssh node2 /etc/init.d/samba start

From now on, if you want to add a new user, just run …

cd /root
./ john Dep_1 john25


Let's run some tests:

On the MS Windows client:

Press START -> Run -> and try to access \\

If everything is OK, do the same for node2: \\

On both servers you should have access to John's directory and to the public directory.
Now create a file while logged in on node1 //, then log in to node2 // and make sure that your new file is there.
After that, open John's home directory on node1 again and remove the file you just created.
Then log in to node1 via SSH and run the following command…

ls /home/TRASH

… you should see this result …

john_192.168.20.7_Dep_1
… next run …

ls /home/TRASH/john_192.168.20.7_Dep_1

… you should see all removed files and directories.
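Restoring is just a copy back out of the repository: because recycle:keeptree is enabled, the relative path inside TRASH mirrors the share. A sketch with a hypothetical helper name of my own:

```shell
# restore_file: copy one recycled file from the trash repository back to
# its destination, recreating intermediate directories (keeptree layout).
restore_file() {
  trash_dir=$1   # e.g. /home/TRASH/john_192.168.20.7_Dep_1
  rel=$2         # path of the file relative to the share root
  dest=$3        # share root to restore into, e.g. /home/Dep_1
  mkdir -p "$(dirname "$dest/$rel")"
  cp -p "$trash_dir/$rel" "$dest/$rel"
}

# e.g.: restore_file /home/TRASH/john_192.168.20.7_Dep_1 docs/report.txt /home/Dep_1
```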

Of course you may want to use a CTDB cluster to manage Samba, and you probably should.
I didn't use CTDB because I have only two nodes and I want to be able (in the future) to use both servers separately, with no errors (and no connection between them).
Let me know if you have a better option 😉



Links

  • GlusterFS:
  • Falko Timme's article: