How To Create A RAID1 Setup On An Existing CentOS/RedHat 6.0 System


This tutorial is for turning a single disk CentOS 6 system into a two disk RAID1 system. The GRUB bootloader will be configured in such a way that the system will still be able to boot if one of the hard drives fails (no matter which one).

NOTE: Everything has to be done as root:

su -
enter root password

In this example the initial layout for the hard disks was:

Disk with the installed OS ("original"):

Device      Mountpoint   Size
--------------------------------
/dev/sdb                 ~1002GB
/dev/sdb1   /boot        256MB
/dev/sdb2   /            24GB
/dev/sdb3   swap         4GB
/dev/sdb5   /var         4GB
/dev/sdb6   /home        ~900GB

We will be adding the other hard disk, /dev/sda (~1002GB), as the "target" disk.

1. Back everything up! You will want your data back if the conversion goes wrong. Trust me on this!
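
For example, a minimal sketch using rsync to a separately mounted backup disk (the device /dev/sdX1 and the mount point /mnt/backup are placeholders; adjust them to wherever your backup actually lives):

mkdir -p /mnt/backup
mount /dev/sdX1 /mnt/backup
rsync -aAXH --one-file-system / /boot /var /home /mnt/backup/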

2. Verify Backup! See above.

3. Create partitions on /dev/sda identical to the partitions on /dev/sdb:

sfdisk -d /dev/sdb | sfdisk /dev/sda
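
To make sure the copy worked, compare the two partition tables; the layouts should now be identical:

sfdisk -l /dev/sda
sfdisk -l /dev/sdb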

4. We load a few kernel modules (to avoid a reboot):

modprobe linear
modprobe raid0
modprobe raid1

5. Now run:

cat /proc/mdstat

The output should look as follows:

root@server:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1]
unused devices: <none>

Here we see that the RAID kernel modules are loaded and working, but there are no RAID sets yet.

6. Run the following commands:

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 missing
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 missing
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda5 missing
mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sda6 missing

This creates the RAID devices md0 to md3 in a degraded state, because the second member of each array is still missing.

7. If you want to use GRUB 0.97 (the default in CentOS 5 and 6) on RAID 1, you need to specify an older metadata version than the default. Add the option "--metadata=0.90" to the commands above. Otherwise GRUB will respond with "Filesystem type unknown, partition type 0xfd" and refuse to install. This is supposedly not necessary with GRUB 2.

Like this:

mdadm --create /dev/md0 --metadata=0.90 --level=1 --raid-devices=2 /dev/sda1 missing

8. Check the output of

cat /proc/mdstat

# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sda2[0]
473792 blocks [2/1] [U_]

md2 : active raid1 sda5[0]
4980032 blocks [2/1] [U_]

md3 : active raid1 sda6[0]
3349440 blocks [2/1] [U_]

md0 : active raid1 sda1[0]
80192 blocks [2/1] [U_]

unused devices: <none>

9. Create an mdadm.conf from your current configuration:

mdadm --detail --scan > /etc/mdadm.conf

10. Display the contents of the file:

cat /etc/mdadm.conf

You should now see one ARRAY line for each of our (still degraded) RAID arrays.
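
With 0.90 metadata the entries should look roughly like this sketch (the exact fields depend on the mdadm version, and the UUIDs here are placeholders for your own):

ARRAY /dev/md0 metadata=0.90 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
ARRAY /dev/md1 metadata=0.90 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
ARRAY /dev/md2 metadata=0.90 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
ARRAY /dev/md3 metadata=0.90 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx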

11. We use dracut to rebuild the initramfs with the new mdadm.conf:

mv /boot/initramfs-$(uname -r).img /boot/initramfs-$(uname -r).img.old

dracut --mdadmconf --force /boot/initramfs-$(uname -r).img $(uname -r)
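
To check that the rebuilt initramfs really picked up the mdadm configuration, you can optionally inspect it with lsinitrd (part of the dracut package); this is just a sanity check:

lsinitrd /boot/initramfs-$(uname -r).img | grep mdadm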

12. Create the filesystems on these new software raid devices:

mkfs.ext2 /dev/md0 # For /boot ext2 is good
mkfs.ext4 /dev/md1 # For / ext4 is good
mkfs.ext4 /dev/md2 # For /var ext4 is good
mkfs.ext4 /dev/md3 # For /home ext4 is good
mkswap -c /dev/sda3 # We want swap partitions on both drives for performance
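
Optionally, you can give the new filesystems labels so they are easier to recognise later (labels such as "/ROOT" and "/HOME" show up in the example blkid output further down); this sketch is entirely optional:

e2label /dev/md1 /ROOT
e2label /dev/md3 /HOME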

13. Copy the data from the existing (and still running) partitions to the newly created raid partitions:

mkdir /mnt/raid
mount /dev/md0 /mnt/raid
cd /boot; find . -depth | cpio -pmd /mnt/raid

(If SELinux is in use also do this:

touch /mnt/raid/.autorelabel

)

sync
umount /mnt/raid

mount /dev/md1 /mnt/raid
cd / ; find . -depth -xdev | grep -v '^\./tmp/' | cpio -pmd /mnt/raid
sync
umount /mnt/raid

NOTES: You really do not want to copy the files in /tmp and /var/tmp.
The command above also creates the empty mount points such as 'proc' and 'dev' and does not miss files like /.autofsck.

mount /dev/md2 /mnt/raid
cd /var; find . -depth | cpio -pmd /mnt/raid
sync
umount /mnt/raid

mount /dev/md3 /mnt/raid
cd /home; find . -depth | cpio -pmd /mnt/raid
sync
umount /mnt/raid

At this point we have created our RAID devices and manually mirrored the existing data onto them.
To make sure that the system will boot from the RAID devices, we have to change some entries in /etc/fstab and /boot/grub/menu.lst.

14. Open another console window and run:

blkid | grep /dev/md

Here you will see the UUID for each md type filesystem. It should look something like this:

/dev/md0: UUID="0b0fddf7-1160-4c70-8e76-5a5a5365e07d" TYPE="ext2"
/dev/md1: LABEL="/ROOT" UUID="36d389c4-fc0f-4de7-a80b-40cc6dece66f" TYPE="ext4"
/dev/md2: UUID="47fbbe32-c756-4ea6-8fd6-b34867be0c84" TYPE="ext4"
/dev/md3: LABEL="/HOME" UUID="f92cc249-c1af-456b-a291-ee1ea9ef8e22" TYPE="ext4"

Note the UUID for /dev/md0 and copy it; it will be pasted into fstab as shown below.

mount /dev/md1 /mnt/raid

In /mnt/raid/etc/fstab change the line containing the mount point /boot to the UUID of the new /dev/md0 filesystem:

UUID=0b0fddf7-1160-4c70-8e76-5a5a5365e07d /boot ext2 defaults 1 1

Repeat this for the new / filesystem on /dev/md1: find its UUID, copy it, and change the line containing the mount point / to:

UUID=36d389c4-fc0f-4de7-a80b-40cc6dece66f / ext4 defaults 1 1

For /var and /home we keep mounting from the old disk for the moment. Add the UUID lines for the new md devices as well, but leave them commented out; we will switch to them later.

The lines for /var become:

/dev/sdb5 /var ext4 defaults 1 2
#UUID=47fbbe32-c756-4ea6-8fd6-b34867be0c84 /var ext4 defaults 1 2

The lines for /home become:

/dev/sdb6 /home ext4 defaults 1 2
#UUID=f92cc249-c1af-456b-a291-ee1ea9ef8e22 /home ext4 defaults 1 2
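
Putting it together, the relevant part of /mnt/raid/etc/fstab should now look roughly like this sketch (using the example UUIDs from above; any other lines, such as swap or tmpfs, stay as they are on your system):

UUID=0b0fddf7-1160-4c70-8e76-5a5a5365e07d /boot ext2 defaults 1 1
UUID=36d389c4-fc0f-4de7-a80b-40cc6dece66f / ext4 defaults 1 1
/dev/sdb5 /var ext4 defaults 1 2
#UUID=47fbbe32-c756-4ea6-8fd6-b34867be0c84 /var ext4 defaults 1 2
/dev/sdb6 /home ext4 defaults 1 2
#UUID=f92cc249-c1af-456b-a291-ee1ea9ef8e22 /home ext4 defaults 1 2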

Next:

umount /mnt/raid

15. Mount /dev/md0 again to /mnt/raid.

In /mnt/raid/grub/menu.lst change the entry for the kernel to

kernel PATH-TO-KERNEL ro root=/dev/md1 SOME OPTIONS

Make sure that the kernel line no longer contains an option that excludes md devices (on a stock CentOS 6 install this is typically rd_NO_MD)!
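
As a rough sketch of what this might look like on a stock CentOS 6 install (the kernel version and remaining options will differ on your system; the important changes are root=/dev/md1 and the removed rd_NO_MD):

# before:
kernel /vmlinuz-2.6.32-431.el6.x86_64 ro root=UUID=... rd_NO_LUKS rd_NO_LVM rd_NO_MD rd_NO_DM crashkernel=auto rhgb quiet

# after:
kernel /vmlinuz-2.6.32-431.el6.x86_64 ro root=/dev/md1 rd_NO_LUKS rd_NO_LVM rd_NO_DM crashkernel=auto rhgb quiet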

Just to be sure the system will boot from the RAID array, also copy /mnt/raid/grub/menu.lst to /boot/grub/menu.lst and /mnt/raid/etc/fstab to /etc/fstab.

You could make backup copies of these files first, but that's the coward's way.

16. Reboot the machine.

Enter the system BIOS, and choose the new disk as the one that your system boots from. Save the BIOS setting and boot.

17. Assuming the reboot went smoothly, change the existing partitions of the old drive to be raid device partitions:

Check the partition tables to confirm which disk is the old and which is the new one:

fdisk -l

Examine the output. Both disks will still show partitions of type 83 (Linux) at this point, so identify the old system disk by checking which partitions are not yet members of the md arrays (compare with cat /proc/mdstat if in doubt).

Using fdisk, cfdisk or parted, change the partition type to 0xfd (Linux raid autodetect) on that disk's partitions sdb1, sdb2, sdb5 and sdb6. Note that I am assuming here that this is still /dev/sdb.
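
A minimal sketch with interactive fdisk (assuming the old disk really is still /dev/sdb); repeat the t/number/fd sequence for partitions 1, 2, 5 and 6 before writing:

fdisk /dev/sdb
t    (change a partition's type)
1    (partition number; repeat for 2, 5 and 6)
fd   (Linux raid autodetect)
w    (write the new table and exit)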

Run

partprobe

Add the newly modified partitions to the RAIDs to make them complete. Note that once again I am assuming that the old disk still shows up as sdb.

mdadm /dev/md0 -a /dev/sdb1
mdadm /dev/md1 -a /dev/sdb2
mdadm /dev/md2 -a /dev/sdb5
mdadm /dev/md3 -a /dev/sdb6

To see what's going on, use (in a new console window as root):

watch -n 5 cat /proc/mdstat

The output should look similar to the one below and will be updated every 5 seconds:

Personalities : [raid1]
md1 : active raid1 sdb2[1] sda2[0]
473792 blocks [2/1] [U_]
[===>.................] recovery = 25.0% (118448/473792) finish=2.4min speed=2412

md2 : active raid1 sdb5[1] sda5[0]
4980032 blocks [2/1] [U_]
resync=DELAYED

md3 : active raid1 sdb6[1] sda6[0]
3349440 blocks [2/1] [U_]
resync=DELAYED

md0 : active raid1 sdb1[1] sda1[0]
80192 blocks [2/2] [UU]

unused devices: <none>

As soon as all the md devices have finished recovering, your system is in essence up and running. This is also the moment to switch the /var and /home entries in /etc/fstab to the commented-out UUID lines from step 14, so that they are mounted from the md devices on the next boot.
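
If you want to double-check each array, mdadm can show its detailed state; every array should now report two active devices and a clean state:

mdadm --detail /dev/md0
mdadm --detail /dev/md1
mdadm --detail /dev/md2
mdadm --detail /dev/md3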

Next we add some additional steps to improve redundancy.

First, the system should be able to boot even if the first hard disk fails. For this to happen, the following step has to be done:

18. Create a boot record on the second hard disk.

THESE INSTRUCTIONS ASSUME YOU ARE USING OLD STYLE GRUB. FOR GRUB2 SEE FUTURE INSTRUCTIONS!

To create a boot record on the second hard disk, start a grub shell:

# grub

grub>

Set the root device temporarily to the second disk:

grub> root (hd1,0)

Filesystem type is ext2fs, partition type is 0xfd

grub> setup (hd1)

Checking if "/boot/grub/stage1" exists ... no
Checking if "/grub/stage1" exists ... yes
Checking if "/grub/stage2" exists ... yes
Checking if "/grub/e2fs_stage1_5" exists ... yes
Running "embed /grub/e2fs_stage1_5 (hd1)" ... 16 sectors embedded.
succeeded
Running "install /grub/stage1 (hd1) (hd1)1+16 p (hd1,0)/grub/stage2 /grub/grub.conf" ... succeeded
Done.

Repeat for the first disk:

grub> root (hd0,0)

Filesystem type is ext2fs, partition type is 0xfd

grub> setup (hd0)

Checking if "/boot/grub/stage1" exists ... no
Checking if "/grub/stage1" exists ... yes
Checking if "/grub/stage2" exists ... yes
Checking if "/grub/e2fs_stage1_5" exists ... yes
Running "embed /grub/e2fs_stage1_5 (hd0)" ... 16 sectors embedded.
succeeded
Running "install /grub/stage1 (hd0) (hd0)1+16 p (hd0,0)/grub/stage2 /grub/grub.conf" ... succeeded
Done.

grub> quit

Reboot the system:

reboot

It should boot without problems.

If so, disconnect the first disk (sda) and try again. Does it boot?

If so, power off, reconnect sda, disconnect the second disk (sdb). Does it boot?
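
Keep in mind that each of these pull tests leaves the arrays degraded again. After reconnecting a disk, check the device names with fdisk -l (they may have shifted), re-add its partitions, and let the arrays resync before the next test. Assuming the reconnected disk is still /dev/sdb, for example:

mdadm /dev/md0 -a /dev/sdb1
mdadm /dev/md1 -a /dev/sdb2
mdadm /dev/md2 -a /dev/sdb5
mdadm /dev/md3 -a /dev/sdb6
watch -n 5 cat /proc/mdstat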
