Openfiler 2.99 Active/Passive With Corosync, Pacemaker And DRBD


Openfiler is a Linux-based NAS/SAN application which can deliver storage over NFS/SMB/iSCSI and FTP. It has a web interface through which you can control these services. This howto is based on the latest version of Openfiler at the time of writing; you can download it from the official homepage, www.openfiler.com.

Thanks to the Openfiler team for making this howto possible.

 

1. Create Systems With the Following Setup:

  • hostname: filer01
  • eth0: 10.10.11.101
  • eth1: 10.10.50.101
  • 500MB Meta partition
  • 4GB+ Data partition

  • hostname: filer02
  • eth0: 10.10.11.102
  • eth1: 10.10.50.102
  • 500MB Meta partition
  • 4GB+ Data partition

virtual IP: 10.10.11.105 (do not assign this to any adapter; we will create it later with corosync)

 

1.1 Create hosts file for easier access

root@filer01 ~# nano /etc/hosts

On filer01 add:

10.10.50.102	filer02

 

root@filer02 ~# nano /etc/hosts

On filer02 add:

10.10.50.101	filer01

 

1.2 Create/Exchange SSH Keys for easier file exchange

root@filer01 ~# ssh-keygen -t dsa

Generating public/private dsa key pair.
Enter file in which to save the key (/root/.ssh/id_dsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_dsa.
Your public key has been saved in /root/.ssh/id_dsa.pub.
The key fingerprint is:

Do the same on filer02.

root@filer02 ~# ssh-keygen -t dsa

Then exchange the files:

root@filer01 ~# scp ~/.ssh/id_dsa.pub root@filer02:~/.ssh/authorized_keys

root@filer02 ~# scp ~/.ssh/id_dsa.pub root@filer01:~/.ssh/authorized_keys

And now you can exchange files between the nodes without entering a password.
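A quick way to verify the key exchange worked (assuming the /etc/hosts entries from section 1.1 are in place):

```shell
# Should print the remote hostname without prompting for a password.
root@filer01 ~# ssh filer02 hostname
```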

 

2. Create meta/data Partition on both filers

Before we can actually start the cluster, we have to prepare both systems and let the data and meta partitions sync before they can be used by corosync/pacemaker, because the first cluster configuration will start drbd and take over control of this service. So this time we prepare our partitions before we do the actual cluster configuration, as we did in openfiler 2.3.

 

2.1 Create DRBD Setup

Edit /etc/drbd.conf on filer01 and filer02:

# You can find an example in  /usr/share/doc/drbd.../drbd.conf.example
include "drbd.d/global_common.conf";
#include "drbd.d/*.res";
resource meta {
 on filer01 {
  device /dev/drbd0;
  disk /dev/sdb1;
  address 10.10.50.101:7788;
  meta-disk internal;
 }
 on filer02 {
  device /dev/drbd0;
  disk /dev/sdb1;
  address 10.10.50.102:7788;
  meta-disk internal;
 }
}
resource data {
 on filer01 {
  device /dev/drbd1;
  disk /dev/sdb2;
  address 10.10.50.101:7789;
  meta-disk internal;
 }
 on filer02 {
  device /dev/drbd1;
  disk /dev/sdb2;
  address 10.10.50.102:7789;
  meta-disk internal;
 }
}

 

After that, create the DRBD metadata on the partitions. If you get errors at this step, wipe the old filesystem from the partition with the dd command below, and if /etc/fstab contains any lines related to the /meta partition, remove them. (This happens when you create the meta partitions in the installation phase.)

dd if=/dev/zero of=/dev/drbdX

root@filer01 ~# drbdadm create-md meta
root@filer01 ~# drbdadm create-md data

root@filer02 ~# drbdadm create-md meta
root@filer02 ~# drbdadm create-md data

Now you can start up drbd with:

service drbd start

on both nodes.

Make one node primary:

root@filer01 ~# drbdsetup /dev/drbd0 primary -o
root@filer01 ~# drbdsetup /dev/drbd1 primary -o
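Before continuing you can watch the synchronisation progress. The devices are usable while the sync runs, but both resources should eventually report UpToDate/UpToDate:

```shell
root@filer01 ~# cat /proc/drbd
# During the sync you will see something like (example output):
#  0: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent
# and once the sync has finished:
#  0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate
```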

 

2.2 Prepare the Configuration Partition

root@filer01 ~# mkfs.ext3 /dev/drbd0

root@filer01 ~# service openfiler stop

 

2.2.1 Openfiler to meta-Partition

root@filer01 ~# mkdir /meta
root@filer01 ~# mount /dev/drbd0 /meta
root@filer01 ~# mv /opt/openfiler/ /opt/openfiler.local
root@filer01 ~# mkdir /meta/opt
root@filer01 ~# cp -a /opt/openfiler.local /meta/opt/openfiler
root@filer01 ~# ln -s /meta/opt/openfiler /opt/openfiler
root@filer01 ~# rm /meta/opt/openfiler/sbin/openfiler
root@filer01 ~# ln -s /usr/sbin/httpd /meta/opt/openfiler/sbin/openfiler
root@filer01 ~# rm /meta/opt/openfiler/etc/rsync.xml
root@filer01 ~# ln -s /opt/openfiler.local/etc/rsync.xml /meta/opt/openfiler/etc/
root@filer01 ~# mkdir -p /meta/etc/httpd/conf.d

 

2.2.2 Samba/NFS/ISCSI/PROFTPD Configuration Files to Meta Partition

root@filer01 ~# service nfslock stop
root@filer01 ~# umount -a -t rpc-pipefs
root@filer01 ~# mkdir /meta/etc
root@filer01 ~# mv /etc/samba/ /meta/etc/
root@filer01 ~# ln -s /meta/etc/samba/ /etc/samba
root@filer01 ~# mkdir -p /meta/var/spool
root@filer01 ~# mv /var/spool/samba/ /meta/var/spool/
root@filer01 ~# ln -s /meta/var/spool/samba/ /var/spool/samba
root@filer01 ~# mkdir -p /meta/var/lib
root@filer01 ~# mv /var/lib/nfs/ /meta/var/lib/
root@filer01 ~# ln -s /meta/var/lib/nfs/ /var/lib/nfs
root@filer01 ~# mv /etc/exports /meta/etc/
root@filer01 ~# ln -s /meta/etc/exports /etc/exports
root@filer01 ~# mv /etc/ietd.conf /meta/etc/
root@filer01 ~# ln -s /meta/etc/ietd.conf /etc/ietd.conf
root@filer01 ~# mv /etc/initiators.allow /meta/etc/
root@filer01 ~# ln -s /meta/etc/initiators.allow /etc/initiators.allow
root@filer01 ~# mv /etc/initiators.deny /meta/etc/
root@filer01 ~# ln -s /meta/etc/initiators.deny /etc/initiators.deny
root@filer01 ~# mv /etc/proftpd /meta/etc/
root@filer01 ~# ln -s /meta/etc/proftpd/ /etc/proftpd

 

2.2.3 httpd Modules for Openfiler

root@filer01 ~# rm /opt/openfiler/etc/httpd/modules
root@filer01 ~# ln -s /usr/lib64/httpd/modules /opt/openfiler/etc/httpd/modules

Now start Openfiler and see if it runs:

root@filer01 ~# service openfiler start

2.2.4 filer02 Openfiler Configuration

root@filer02 ~# service openfiler stop
root@filer02 ~# mkdir /meta
root@filer02 ~# mv /opt/openfiler/ /opt/openfiler.local
root@filer02 ~# ln -s /meta/opt/openfiler /opt/openfiler

 

2.2.5 Samba/NFS/ISCSI/ProFTPD Configuration Files to Meta Partition

root@filer02 ~# service nfslock stop
root@filer02 ~# umount -a -t rpc-pipefs
root@filer02 ~# rm -rf /etc/samba/
root@filer02 ~# ln -s /meta/etc/samba/ /etc/samba
root@filer02 ~# rm -rf /var/spool/samba/
root@filer02 ~# ln -s /meta/var/spool/samba/ /var/spool/samba
root@filer02 ~# rm -rf /var/lib/nfs/
root@filer02 ~# ln -s /meta/var/lib/nfs/ /var/lib/nfs
root@filer02 ~# rm -rf /etc/exports
root@filer02 ~# ln -s /meta/etc/exports /etc/exports
root@filer02 ~# rm /etc/ietd.conf
root@filer02 ~# ln -s /meta/etc/ietd.conf /etc/ietd.conf
root@filer02 ~# rm /etc/initiators.allow
root@filer02 ~# ln -s /meta/etc/initiators.allow /etc/initiators.allow
root@filer02 ~# rm /etc/initiators.deny
root@filer02 ~# ln -s /meta/etc/initiators.deny /etc/initiators.deny
root@filer02 ~# rm -rf /etc/proftpd
root@filer02 ~# ln -s /meta/etc/proftpd/ /etc/proftpd

 

2.3 Prepare the Data Partition

Change the lvm filter in the

/etc/lvm/lvm.conf

file from:

filter = [ "a/.*/" ]

to

filter = [ "a|drbd[0-9]|", "r|.*|" ]
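The new filter makes LVM scan only the DRBD devices and ignore the raw /dev/sdb partitions underneath them, so the same volume group is not detected twice. The accept/reject behaviour can be sketched with a small shell function (an illustration only; LVM uses its own regex matching, but the pattern is the same):

```shell
# Emulate the lvm.conf filter: "a|drbd[0-9]|" accepts drbd devices,
# "r|.*|" rejects everything else.
lvm_filter() {
  case "$1" in
    *drbd[0-9]*) echo accept ;;  # matched by a|drbd[0-9]|
    *)           echo reject ;;  # everything else hits r|.*|
  esac
}
lvm_filter /dev/drbd1   # -> accept (LVM scans the replicated device)
lvm_filter /dev/sdb2    # -> reject (the raw backing partition is ignored)
```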

Copy this file to the other filer node:

root@filer01 ~# scp /etc/lvm/lvm.conf root@filer02:/etc/lvm/lvm.conf

After that we can create the actual used stuff:

root@filer01 ~# pvcreate /dev/drbd1
root@filer01 ~# vgcreate data /dev/drbd1
root@filer01 ~# lvcreate -L 400M -n filer data
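If you want to verify that LVM now sees only the DRBD device, the standard LVM reporting tools can be used (output will vary with your disk sizes):

```shell
root@filer01 ~# pvs   # should list /dev/drbd1 as the only physical volume
root@filer01 ~# vgs   # the volume group "data" on top of it
root@filer01 ~# lvs   # the logical volume "filer" inside "data"
```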

 

3. Start Corosync and create a configuration for it:

3.1 Create Corosync authkey

root@filer01~# corosync-keygen

( The key generator gathers entropy from the local keyboard, so type on the machine's real console; keystrokes in an SSH terminal will not help. )

Copy the authkey file to the other node and fix the file permissions:

root@filer01~# scp /etc/corosync/authkey root@filer02:/etc/corosync/authkey
root@filer02~# chmod 400 /etc/corosync/authkey

 

3.2 Create a file named pcmk in /etc/corosync/service.d/

root@filer01~# vi /etc/corosync/service.d/pcmk

service {
        # Load the Pacemaker Cluster Resource Manager
        name: pacemaker
        ver:  0
 }

 

3.2.1 Exchange this file to the other node

root@filer01~# scp /etc/corosync/service.d/pcmk root@filer02:/etc/corosync/service.d/pcmk

 

3.3 Create the corosync.conf file and change bindnetaddr to match your LAN network

root@filer01~# vi /etc/corosync/corosync.conf

# Please read the corosync.conf.5 manual page
compatibility: whitetank
totem {
        version: 2
        secauth: off
        threads: 0
        interface {
                ringnumber: 0
                bindnetaddr: 10.10.50.0
                mcastaddr: 226.94.1.1
                mcastport: 5405
                ttl: 1
        }
}
logging {
        fileline: off
        to_stderr: no
        to_logfile: yes
        to_syslog: yes
        logfile: /var/log/cluster/corosync.log
        debug: off
        timestamp: on
        logger_subsys {
                subsys: AMF
                debug: off
        }
}
amf {
        mode: disabled
}

 

3.3.1 Exchange the file to the other node

root@filer01~# scp /etc/corosync/corosync.conf root@filer02:/etc/corosync/corosync.conf

4. Prepare everything for the first corosync start

First we prepare our nodes for a restart. For this we disable some services that will be handled by corosync at a later point.

root@filer01~# chkconfig --level 2345 openfiler off
root@filer01~# chkconfig --level 2345 nfs-lock off
root@filer01~# chkconfig --level 2345 corosync on

Do the same on the other node:

root@filer02~# chkconfig --level 2345 openfiler off
root@filer02~# chkconfig --level 2345 nfs-lock off
root@filer02~# chkconfig --level 2345 corosync on

Now restart both nodes and check whether corosync runs properly as shown in the next part. Don't enable drbd, as this will be handled by corosync.

 

4.1 Check if corosync started properly

root@filer01~# ps auxf
root      3480  0.0  0.8 534456  4112 ?        Ssl  19:15   0:00 corosync
root      3486  0.0  0.5  68172  2776 ?        S    19:15   0:00  \_ /usr/lib64/heartbeat/stonith
106       3487  0.0  1.0  67684  4956 ?        S    19:15   0:00  \_ /usr/lib64/heartbeat/cib
root      3488  0.0  0.4  70828  2196 ?        S    19:15   0:00  \_ /usr/lib64/heartbeat/lrmd
106       3489  0.0  0.6  68536  3096 ?        S    19:15   0:00  \_ /usr/lib64/heartbeat/attrd
106       3490  0.0  0.6  69064  3420 ?        S    19:15   0:00  \_ /usr/lib64/heartbeat/pengine
106       3491  0.0  0.7  76764  3488 ?        S    19:15   0:00  \_ /usr/lib64/heartbeat/crmd

root@filer02~# crm_mon --one-shot -V

crm_mon[3602]: 2011/03/24_19:32:07 ERROR: unpack_resources: Resource start-up disabled since no STONITH resources have been defined
crm_mon[3602]: 2011/03/24_19:32:07 ERROR: unpack_resources: Either configure some or disable STONITH with the stonith-enabled option
crm_mon[3602]: 2011/03/24_19:32:07 ERROR: unpack_resources: NOTE: Clusters with shared data need STONITH to ensure data integrity
============
Last updated: Thu Mar 24 19:32:07 2011
Stack: openais
Current DC: filer01 - partition with quorum
Version: 1.1.2-c6b59218ee949eebff30e837ff6f3824ed0ab86b
2 Nodes configured, 2 expected votes
0 Resources configured.
============

Online: [ filer01 filer02 ]

 

4.2 Configure the Cluster as Follows

Before configuring anything, monitor the status of the cluster from filer02:

root@filer02~# crm_mon

 

4.2.1 How to configure the cluster step by step

root@filer01~# crm configure
crm(live)configure# property stonith-enabled="false"
crm(live)configure# property no-quorum-policy="ignore"

crm(live)configure# rsc_defaults $id="rsc-options" \
> resource-stickiness="100"

crm(live)configure# primitive ClusterIP ocf:heartbeat:IPaddr2 \
> params ip="10.10.11.105" cidr_netmask="32" \
> op monitor interval="30s"

crm(live)configure# primitive MetaFS ocf:heartbeat:Filesystem \
> params device="/dev/drbd0" directory="/meta" fstype="ext3"

crm(live)configure# primitive lvmdata ocf:heartbeat:LVM \
> params volgrpname="data"

crm(live)configure# primitive drbd_meta ocf:linbit:drbd \
> params drbd_resource="meta" \
> op monitor interval="15s"

crm(live)configure# primitive drbd_data ocf:linbit:drbd \
> params drbd_resource="data" \
> op monitor interval="15s"

crm(live)configure# primitive openfiler lsb:openfiler

crm(live)configure# primitive iscsi lsb:iscsi-target

crm(live)configure# primitive samba lsb:smb

crm(live)configure# primitive nfs lsb:nfs
crm(live)configure# primitive nfs-lock lsb:nfs-lock

crm(live)configure# group g_drbd drbd_meta drbd_data
crm(live)configure# group g_services MetaFS lvmdata openfiler ClusterIP iscsi samba nfs nfs-lock

crm(live)configure# ms ms_g_drbd g_drbd \
> meta master-max="1" master-node-max="1" \
> clone-max="2" clone-node-max="1" \
> notify="true"

crm(live)configure# colocation c_g_services_on_g_drbd inf: g_services ms_g_drbd:Master
crm(live)configure# order o_g_services_after_g_drbd inf: ms_g_drbd:promote g_services:start

crm(live)configure# commit

In the monitor process you started earlier you can now watch the resources come up one after another.

root@filer01 ~# crm_mon

 

4.2.2 Troubleshooting

If you get any errors because you ran commit before the end of the configuration, you need to do a resource cleanup, as in this example:

root@filer01~# crm
crm(live)# resource cleanup MetaFS

 

4.2.3 Verify the config

To verify the config:

root@filer01~# crm configure show

node filer01 \
attributes standby="off"
node filer02 \
attributes standby="off"

primitive ClusterIP ocf:heartbeat:IPaddr2 \
params ip="10.10.11.105" cidr_netmask="32" \
op monitor interval="30s"

primitive MetaFS ocf:heartbeat:Filesystem \
params device="/dev/drbd0" directory="/meta" fstype="ext3"

primitive drbd_data ocf:linbit:drbd \
params drbd_resource="data" \
op monitor interval="15s"

primitive drbd_meta ocf:linbit:drbd \
params drbd_resource="meta" \
op monitor interval="15s"

primitive lvmdata ocf:heartbeat:LVM \
params volgrpname="data"

primitive openfiler lsb:openfiler

primitive iscsi lsb:iscsi-target

primitive samba lsb:smb

primitive nfs lsb:nfs
primitive nfs-lock lsb:nfs-lock

group g_drbd drbd_meta drbd_data
group g_services MetaFS lvmdata openfiler ClusterIP iscsi samba nfs nfs-lock

ms ms_g_drbd g_drbd \
meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
colocation c_g_services_on_g_drbd inf: g_services ms_g_drbd:Master
order o_g_services_after_g_drbd inf: ms_g_drbd:promote g_services:start
property $id="cib-bootstrap-options" \
dc-version="1.1.2-c6b59218ee949eebff30e837ff6f3824ed0ab86b" \
cluster-infrastructure="openais" \
expected-quorum-votes="2" \
stonith-enabled="false" \
no-quorum-policy="ignore" \
last-lrm-refresh="1301801257"
rsc_defaults $id="rsc-options" \
resource-stickiness="100"

 

5. Adapt the Setup to Your Needs

Contrary to Openfiler 2.3, where you had to manually exchange the haresource file after each change to the services, here the configuration is replicated automatically no matter on which node you change it. Furthermore, you can tailor the setup and remove services from the configuration above: it starts all services Openfiler can offer, but in the end you only need to run the ones you actually use.
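For example, if you only serve iSCSI, you could redefine the service group without samba and nfs. This is only a sketch; the resource names are the ones created in section 4.2.1, and you should verify the result with crm_mon afterwards:

```shell
root@filer01~# crm configure
crm(live)configure# delete g_services
crm(live)configure# group g_services MetaFS lvmdata openfiler ClusterIP iscsi
crm(live)configure# commit
```

Depending on the pacemaker version you may have to stop the group first (crm resource stop g_services) before the delete is accepted.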
