How To Set Up Software RAID1 On A Running LVM System (Incl. GRUB2 Configuration) (Ubuntu 10.04)


This guide explains how to set up software RAID1 on an already running LVM system (Ubuntu 10.04). The GRUB2 bootloader will be configured in such a way that the system will still be able to boot if one of the hard drives fails (no matter which one).

I do not issue any guarantee that this will work for you!

 

1 Preliminary Note

In this tutorial I’m using an Ubuntu 10.04 system with two hard drives, /dev/sda and /dev/sdb which are identical in size. /dev/sdb is currently unused, and /dev/sda has the following partitions (this is the default Ubuntu 10.04 LVM partitioning scheme – you should find something similar on your system unless you chose to manually partition during the installation of the system):

  • /dev/sda1: /boot partition, ext2;
  • /dev/sda2: extended, contains /dev/sda5;
  • /dev/sda5: used for LVM (volume group server1), contains / (volume root) and swap (volume swap_1).

In the end I want to have the following situation:

  • /dev/md0 (made up of /dev/sda1 and /dev/sdb1): /boot partition, ext2;
  • /dev/md1 (made up of /dev/sda5 and /dev/sdb5): LVM (volume group server1), contains / (volume root) and swap (volume swap_1).

This is the current situation:

df -h

root@server1:~# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/server1-root
4.5G  809M  3.5G  19% /
none                  243M  176K  242M   1% /dev
none                  247M     0  247M   0% /dev/shm
none                  247M   36K  247M   1% /var/run
none                  247M     0  247M   0% /var/lock
none                  247M     0  247M   0% /lib/init/rw
/dev/sda1             228M   17M  199M   8% /boot
root@server1:~#

fdisk -l

root@server1:~# fdisk -l

Disk /dev/sda: 5368 MB, 5368709120 bytes
255 heads, 63 sectors/track, 652 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0006b7b7

Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          32      248832   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2              32         653     4990977    5  Extended
Partition 2 does not end on cylinder boundary.
/dev/sda5              32         653     4990976   8e  Linux LVM

Disk /dev/sdb: 5368 MB, 5368709120 bytes
255 heads, 63 sectors/track, 652 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/sdb doesn’t contain a valid partition table
root@server1:~#

pvdisplay

root@server1:~# pvdisplay
--- Physical volume ---
PV Name               /dev/sda5
VG Name               server1
PV Size               4.76 GiB / not usable 2.00 MiB
Allocatable           yes
PE Size               4.00 MiB
Total PE              1218
Free PE               3
Allocated PE          1215
PV UUID               bsF5F5-s2RN-ed1h-zjeb-4mAJ-aktq-kEn86r

root@server1:~#

vgdisplay

root@server1:~# vgdisplay
--- Volume group ---
VG Name               server1
System ID
Format                lvm2
Metadata Areas        1
Metadata Sequence No  3
VG Access             read/write
VG Status             resizable
MAX LV                0
Cur LV                2
Open LV               2
Max PV                0
Cur PV                1
Act PV                1
VG Size               4.76 GiB
PE Size               4.00 MiB
Total PE              1218
Alloc PE / Size       1215 / 4.75 GiB
Free  PE / Size       3 / 12.00 MiB
VG UUID               hMwXAh-zZsA-w39k-g6Bg-LW4W-XX8q-EbyXfA

root@server1:~#

lvdisplay

root@server1:~# lvdisplay
--- Logical volume ---
LV Name                /dev/server1/root
VG Name                server1
LV UUID                b5A1R5-Zhml-LSNy-v7WY-NVmD-yX1w-tuQVUW
LV Write Access        read/write
LV Status              available
# open                 1
LV Size                4.49 GiB
Current LE             1149
Segments               1
Allocation             inherit
Read ahead sectors     auto
- currently set to     256
Block device           251:0

--- Logical volume ---
LV Name                /dev/server1/swap_1
VG Name                server1
LV UUID                2UuF7C-zxKA-Hgz1-gZHe-rFlq-cKW7-jYVCzp
LV Write Access        read/write
LV Status              available
# open                 1
LV Size                264.00 MiB
Current LE             66
Segments               1
Allocation             inherit
Read ahead sectors     auto
- currently set to     256
Block device           251:1

root@server1:~#

 

2 Installing mdadm

The most important tool for setting up RAID is mdadm. Let’s install it like this:

aptitude install initramfs-tools mdadm

Afterwards, we load a few kernel modules (to avoid a reboot):

modprobe linear
modprobe multipath
modprobe raid0
modprobe raid1
modprobe raid5
modprobe raid6
modprobe raid10
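
If you prefer, the same modules can also be loaded in a single loop (a minimal sketch, equivalent to the commands above):

# load all of the md personalities listed above in one go
for module in linear multipath raid0 raid1 raid5 raid6 raid10; do modprobe $module; done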

Now run

cat /proc/mdstat

The output should look as follows:

root@server1:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
unused devices: <none>
root@server1:~#

 

3 Preparing /dev/sdb

To create a RAID1 array on our already running system, we must prepare the /dev/sdb hard drive for RAID1, then copy the contents of our /dev/sda hard drive to it, and finally add /dev/sda to the RAID1 array.

First, we copy the partition table from /dev/sda to /dev/sdb so that both disks have exactly the same layout:

sfdisk -d /dev/sda | sfdisk --force /dev/sdb

The output should be as follows:

root@server1:~# sfdisk -d /dev/sda | sfdisk --force /dev/sdb
Checking that no-one is using this disk right now …
Warning: extended partition does not start at a cylinder boundary.
DOS and Linux will interpret the contents differently.
OK

Disk /dev/sdb: 652 cylinders, 255 heads, 63 sectors/track

sfdisk: ERROR: sector 0 does not have an msdos signature
/dev/sdb: unrecognized partition table type
Old situation:
No partitions found
New situation:
Units = sectors of 512 bytes, counting from 0

Device Boot    Start       End   #sectors  Id  System
/dev/sdb1   *      2048    499711     497664  83  Linux
/dev/sdb2        501758  10483711    9981954   5  Extended
/dev/sdb3             0         –          0   0  Empty
/dev/sdb4             0         –          0   0  Empty
/dev/sdb5        501760  10483711    9981952  8e  Linux LVM
Warning: partition 1 does not end at a cylinder boundary
Successfully wrote the new partition table

Re-reading the partition table …

If you created or changed a DOS partition, /dev/foo7, say, then use dd(1)
to zero the first 512 bytes:  dd if=/dev/zero of=/dev/foo7 bs=512 count=1
(See fdisk(8).)
root@server1:~#

The command

fdisk -l

should now show that both HDDs have the same layout:

root@server1:~# fdisk -l

Disk /dev/sda: 5368 MB, 5368709120 bytes
255 heads, 63 sectors/track, 652 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0006b7b7

Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          32      248832   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2              32         653     4990977    5  Extended
Partition 2 does not end on cylinder boundary.
/dev/sda5              32         653     4990976   8e  Linux LVM

Disk /dev/sdb: 5368 MB, 5368709120 bytes
255 heads, 63 sectors/track, 652 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *           1          32      248832   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sdb2              32         653     4990977    5  Extended
Partition 2 does not end on cylinder boundary.
/dev/sdb5              32         653     4990976   8e  Linux LVM
root@server1:~#

Next we must change the partition types of /dev/sdb1 and /dev/sdb5 to Linux raid autodetect:

fdisk /dev/sdb

root@server1:~# fdisk /dev/sdb

WARNING: DOS-compatible mode is deprecated. It’s strongly recommended to
switch off the mode (command ‘c’) and change display units to
sectors (command ‘u’).

Command (m for help): <-- m
Command action
a   toggle a bootable flag
b   edit bsd disklabel
c   toggle the dos compatibility flag
d   delete a partition
l   list known partition types
m   print this menu
n   add a new partition
o   create a new empty DOS partition table
p   print the partition table
q   quit without saving changes
s   create a new empty Sun disklabel
t   change a partition’s system id
u   change display/entry units
v   verify the partition table
w   write table to disk and exit
x   extra functionality (experts only)

Command (m for help): <-- t
Partition number (1-5): <-- 1
Hex code (type L to list codes): <-- L

0  Empty           24  NEC DOS         81  Minix / old Lin bf  Solaris
1  FAT12           39  Plan 9          82  Linux swap / So c1  DRDOS/sec (FAT-
2  XENIX root      3c  PartitionMagic  83  Linux           c4  DRDOS/sec (FAT-
3  XENIX usr       40  Venix 80286     84  OS/2 hidden C:  c6  DRDOS/sec (FAT-
4  FAT16 <32M      41  PPC PReP Boot   85  Linux extended  c7  Syrinx
5  Extended        42  SFS             86  NTFS volume set da  Non-FS data
6  FAT16           4d  QNX4.x          87  NTFS volume set db  CP/M / CTOS / .
7  HPFS/NTFS       4e  QNX4.x 2nd part 88  Linux plaintext de  Dell Utility
8  AIX             4f  QNX4.x 3rd part 8e  Linux LVM       df  BootIt
9  AIX bootable    50  OnTrack DM      93  Amoeba          e1  DOS access
a  OS/2 Boot Manag 51  OnTrack DM6 Aux 94  Amoeba BBT      e3  DOS R/O
b  W95 FAT32       52  CP/M            9f  BSD/OS          e4  SpeedStor
c  W95 FAT32 (LBA) 53  OnTrack DM6 Aux a0  IBM Thinkpad hi eb  BeOS fs
e  W95 FAT16 (LBA) 54  OnTrackDM6      a5  FreeBSD         ee  GPT
f  W95 Ext’d (LBA) 55  EZ-Drive        a6  OpenBSD         ef  EFI (FAT-12/16/
10  OPUS            56  Golden Bow      a7  NeXTSTEP        f0  Linux/PA-RISC b
11  Hidden FAT12    5c  Priam Edisk     a8  Darwin UFS      f1  SpeedStor
12  Compaq diagnost 61  SpeedStor       a9  NetBSD          f4  SpeedStor
14  Hidden FAT16 <3 63  GNU HURD or Sys ab  Darwin boot     f2  DOS secondary
16  Hidden FAT16    64  Novell Netware  af  HFS / HFS+      fb  VMware VMFS
17  Hidden HPFS/NTF 65  Novell Netware  b7  BSDI fs         fc  VMware VMKCORE
18  AST SmartSleep  70  DiskSecure Mult b8  BSDI swap       fd  Linux raid auto
1b  Hidden W95 FAT3 75  PC/IX           bb  Boot Wizard hid fe  LANstep
1c  Hidden W95 FAT3 80  Old Minix       be  Solaris boot    ff  BBT
1e  Hidden W95 FAT1
Hex code (type L to list codes): <-- fd
Changed system type of partition 1 to fd (Linux raid autodetect)

Command (m for help): <-- t
Partition number (1-5): <-- 5
Hex code (type L to list codes): <-- fd
Changed system type of partition 5 to fd (Linux raid autodetect)

Command (m for help): <-- w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
root@server1:~#

The command

fdisk -l

should now show that /dev/sdb1 and /dev/sdb5 are of the type Linux raid autodetect:

root@server1:~# fdisk -l

Disk /dev/sda: 5368 MB, 5368709120 bytes
255 heads, 63 sectors/track, 652 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0006b7b7

Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          32      248832   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2              32         653     4990977    5  Extended
Partition 2 does not end on cylinder boundary.
/dev/sda5              32         653     4990976   8e  Linux LVM

Disk /dev/sdb: 5368 MB, 5368709120 bytes
255 heads, 63 sectors/track, 652 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *           1          32      248832   fd  Linux raid autodetect
Partition 1 does not end on cylinder boundary.
/dev/sdb2              32         653     4990977    5  Extended
Partition 2 does not end on cylinder boundary.
/dev/sdb5              32         653     4990976   fd  Linux raid autodetect
root@server1:~#

To make sure that there are no remains from previous RAID installations on /dev/sdb, we run the following commands:

mdadm --zero-superblock /dev/sdb1
mdadm --zero-superblock /dev/sdb5

If there are no remains from previous RAID installations, each of the above commands will throw an error like this one (which is nothing to worry about):

root@server1:~# mdadm --zero-superblock /dev/sdb1
mdadm: Unrecognised md component device - /dev/sdb1
root@server1:~#

Otherwise the commands will not display anything at all.
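
If you are not sure whether a partition still carries md metadata, you can inspect it first, for example:

mdadm --examine /dev/sdb1

On a clean partition this typically reports something like "mdadm: No md superblock detected on /dev/sdb1"; on a partition with leftover metadata it prints the old superblock details instead.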

4 Creating Our RAID Arrays

Now let's create our RAID arrays /dev/md0 and /dev/md1. /dev/sdb1 will be added to /dev/md0 and /dev/sdb5 to /dev/md1. /dev/sda1 and /dev/sda5 can't be added right now (because the system is currently running on them), therefore we use the placeholder missing in the following two commands:

mdadm --create /dev/md0 --level=1 --raid-disks=2 missing /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-disks=2 missing /dev/sdb5

The command

cat /proc/mdstat

should now show that you have two degraded RAID arrays ([_U] or [U_] means that an array is degraded while [UU] means that the array is ok):

root@server1:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md1 : active raid1 sdb5[1]
4990912 blocks [2/1] [_U]

md0 : active raid1 sdb1[1]
248768 blocks [2/1] [_U]

unused devices: <none>
root@server1:~#

Next we create a filesystem (ext2) on our non-LVM RAID array /dev/md0:

mkfs.ext2 /dev/md0

Now we come to our LVM RAID array /dev/md1. To prepare it for LVM, we run:

pvcreate /dev/md1

Then we add /dev/md1 to our volume group server1:

vgextend server1 /dev/md1

The output of

pvdisplay

should now be similar to this:

root@server1:~# pvdisplay
--- Physical volume ---
PV Name               /dev/sda5
VG Name               server1
PV Size               4.76 GiB / not usable 2.00 MiB
Allocatable           yes
PE Size               4.00 MiB
Total PE              1218
Free PE               3
Allocated PE          1215
PV UUID               bsF5F5-s2RN-ed1h-zjeb-4mAJ-aktq-kEn86r

--- Physical volume ---
PV Name               /dev/md1
VG Name               server1
PV Size               4.76 GiB / not usable 1.94 MiB
Allocatable           yes
PE Size               4.00 MiB
Total PE              1218
Free PE               1218
Allocated PE          0
PV UUID               rQf0Rj-Nn9l-VgbP-0kIr-2lve-5jlC-TWTBGp

root@server1:~#

The output of

vgdisplay

should be as follows:

root@server1:~# vgdisplay
--- Volume group ---
VG Name               server1
System ID
Format                lvm2
Metadata Areas        2
Metadata Sequence No  4
VG Access             read/write
VG Status             resizable
MAX LV                0
Cur LV                2
Open LV               2
Max PV                0
Cur PV                2
Act PV                2
VG Size               9.52 GiB
PE Size               4.00 MiB
Total PE              2436
Alloc PE / Size       1215 / 4.75 GiB
Free  PE / Size       1221 / 4.77 GiB
VG UUID               hMwXAh-zZsA-w39k-g6Bg-LW4W-XX8q-EbyXfA

You have new mail in /var/mail/root
root@server1:~#
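
For a more compact view of the same information, the short-form LVM commands can be used as well (a sketch):

pvs
vgs
lvs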

5 Adjusting The System To RAID1

Next we must adjust /etc/mdadm/mdadm.conf (which doesn't contain any information about our new RAID arrays yet) to the new situation:

cp /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf_orig
mdadm --examine --scan >> /etc/mdadm/mdadm.conf

Display the contents of the file:

cat /etc/mdadm/mdadm.conf

In the file you should now see details about our two (degraded) RAID arrays:

# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays

# This file was auto-generated on Wed, 16 Jun 2010 20:01:25 +0200
# by mkconf $Id$
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=90f05e41:bf936896:325ecf68:79913751
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=1ab36b7f:3e2031c0:325ecf68:79913751
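
To double-check that the appended ARRAY lines match the arrays that are actually running, you can compare them with the output of:

mdadm --detail --scan

The UUIDs it reports should match the ones now listed in /etc/mdadm/mdadm.conf.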

Next we modify /etc/fstab. Comment out the current /boot line and add the line /dev/md0 /boot ext2 defaults 0 2 instead so that the file looks as follows:

vi /etc/fstab

# /etc/fstab: static file system information.
#
# Use 'blkid -o value -s UUID' to print the universally unique identifier
# for a device; this may be used with UUID= as a more robust way to name
# devices that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
proc            /proc           proc    nodev,noexec,nosuid 0       0
/dev/mapper/server1-root /               ext4    errors=remount-ro 0       1
# /boot was on /dev/sda1 during installation
#UUID=67b1337f-89a2-4729-a6c8-6d43ba82d1f1 /boot           ext2    defaults        0       2
/dev/md0 /boot           ext2    defaults        0       2
/dev/mapper/server1-swap_1 none            swap    sw              0       0
/dev/fd0        /media/floppy0  auto    rw,user,noauto,exec,utf8 0       0
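
If you prefer the UUID-style entries that Ubuntu uses by default (as in the commented-out /boot line above), you can look up the UUID of the new array and use that instead of /dev/md0; a sketch, with the UUID left as a placeholder:

blkid -o value -s UUID /dev/md0
# then use a line like the following in /etc/fstab:
# UUID=<uuid-printed-above> /boot           ext2    defaults        0       2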

Next replace /dev/sda1 with /dev/md0 in /etc/mtab:

vi /etc/mtab

/dev/mapper/server1-root / ext4 rw,errors=remount-ro 0 0
proc /proc proc rw,noexec,nosuid,nodev 0 0
none /sys sysfs rw,noexec,nosuid,nodev 0 0
none /sys/fs/fuse/connections fusectl rw 0 0
none /sys/kernel/debug debugfs rw 0 0
none /sys/kernel/security securityfs rw 0 0
none /dev devtmpfs rw,mode=0755 0 0
none /dev/pts devpts rw,noexec,nosuid,gid=5,mode=0620 0 0
none /dev/shm tmpfs rw,nosuid,nodev 0 0
none /var/run tmpfs rw,nosuid,mode=0755 0 0
none /var/lock tmpfs rw,noexec,nosuid,nodev 0 0
none /lib/init/rw tmpfs rw,nosuid,mode=0755 0 0
/dev/md0 /boot ext2 rw 0 0

Now on to the GRUB2 bootloader. Create the file /etc/grub.d/09_swraid1_setup as follows:

cp /etc/grub.d/40_custom /etc/grub.d/09_swraid1_setup
vi /etc/grub.d/09_swraid1_setup

#!/bin/sh
exec tail -n +3 $0
# This file provides an easy way to add custom menu entries.  Simply type the
# menu entries you want to add after this comment.  Be careful not to change
# the 'exec tail' line above.
menuentry 'Ubuntu, with Linux 2.6.32-21-server' --class ubuntu --class gnu-linux --class gnu --class os {
        recordfail
        insmod raid
        insmod mdraid
        insmod ext2
        set root='(md0)'
        linux   /vmlinuz-2.6.32-21-server root=/dev/mapper/server1-root ro   quiet
        initrd  /initrd.img-2.6.32-21-server
}

Make sure you use the correct kernel version in the menuentry stanza (in the linux and initrd lines). You can find it by running

uname -r

or by taking a look at the current menuentry stanzas in the ### BEGIN /etc/grub.d/10_linux ### section of /boot/grub/grub.cfg. Also make sure that you use the correct volume group in the linux line: if your volume group isn't named server1, use something other than root=/dev/mapper/server1-root. Again, take a look at the current menuentry stanzas in the ### BEGIN /etc/grub.d/10_linux ### section of /boot/grub/grub.cfg to find the correct value.
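
A quick way to pull both values out of the current configuration (a sketch; the paths assume the default Ubuntu layout):

grep menuentry /boot/grub/grub.cfg                 # lists the existing menu entries and their kernel versions
grep -m1 'root=/dev/mapper' /boot/grub/grub.cfg    # shows the root= value currently in use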

The important part of our new menuentry stanza is the line set root='(md0)': it makes sure that we boot from our RAID1 array /dev/md0 (which will hold the /boot partition) instead of from /dev/sda or /dev/sdb, so the system will still be able to boot even if one of the hard drives fails.

Run

update-grub

to write our new kernel stanza from /etc/grub.d/09_swraid1_setup to /boot/grub/grub.cfg.

Next we adjust our ramdisk to the new situation:

update-initramfs -u
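
Before we install the bootloader and reboot, the data itself has to be moved onto the new arrays: the LVM physical extents from /dev/sda5 to /dev/md1, and the contents of the /boot partition to /dev/md0 (otherwise the system would still run from /dev/sda after the reboot). The following is a sketch of those steps; it assumes the volume group is named server1 and the device names match the ones above, so double-check them against pvdisplay on your own system:

pvmove -i 2 /dev/sda5 /dev/md1   # move all physical extents off /dev/sda5 onto /dev/md1 (this can take some time)
vgreduce server1 /dev/sda5       # remove /dev/sda5 from the volume group ...
pvremove /dev/sda5               # ... and delete its LVM signature

Then change the partition type of /dev/sda5 to fd (Linux raid autodetect) with fdisk /dev/sda (commands t, 5, fd, w, as shown for /dev/sdb above) and add the partition to the array:

mdadm --add /dev/md1 /dev/sda5   # wait until cat /proc/mdstat shows [UU] for md1

Finally copy the contents of the current /boot partition onto /dev/md0:

mkdir /mnt/md0
mount /dev/md0 /mnt/md0
cd /boot
cp -dpRx . /mnt/md0              # preserve permissions, links and timestamps; stay on this filesystem
cd /
umount /mnt/md0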

6 Preparing GRUB2

Afterwards we must make sure that the GRUB2 bootloader is installed on both hard drives, /dev/sda and /dev/sdb:

grub-install /dev/sda
grub-install /dev/sdb

Now we reboot the system and hope that it boots ok from our RAID arrays:

reboot

 

7 Preparing /dev/sda

If all goes well, you should now find /dev/md0 in the output of

df -h

root@server1:~# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/server1-root
4.5G  816M  3.4G  19% /
none                  242M  196K  242M   1% /dev
none                  247M     0  247M   0% /dev/shm
none                  247M   40K  247M   1% /var/run
none                  247M     0  247M   0% /var/lock
none                  247M     0  247M   0% /lib/init/rw
/dev/md0              236M   23M  201M  11% /boot
root@server1:~#

The output of

cat /proc/mdstat

should be as follows:

root@server1:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sdb1[1]
248768 blocks [2/1] [_U]

md1 : active raid1 sda5[0] sdb5[1]
4990912 blocks [2/2] [UU]

unused devices: <none>
root@server1:~#

The outputs of pvdisplay, vgdisplay, and lvdisplay should be as follows:

pvdisplay

root@server1:~# pvdisplay
--- Physical volume ---
PV Name               /dev/md1
VG Name               server1
PV Size               4.76 GiB / not usable 1.94 MiB
Allocatable           yes
PE Size               4.00 MiB
Total PE              1218
Free PE               3
Allocated PE          1215
PV UUID               rQf0Rj-Nn9l-VgbP-0kIr-2lve-5jlC-TWTBGp

root@server1:~#

vgdisplay

root@server1:~# vgdisplay
--- Volume group ---
VG Name               server1
System ID
Format                lvm2
Metadata Areas        1
Metadata Sequence No  9
VG Access             read/write
VG Status             resizable
MAX LV                0
Cur LV                2
Open LV               2
Max PV                0
Cur PV                1
Act PV                1
VG Size               4.76 GiB
PE Size               4.00 MiB
Total PE              1218
Alloc PE / Size       1215 / 4.75 GiB
Free  PE / Size       3 / 12.00 MiB
VG UUID               hMwXAh-zZsA-w39k-g6Bg-LW4W-XX8q-EbyXfA

root@server1:~#

lvdisplay

root@server1:~# lvdisplay
--- Logical volume ---
LV Name                /dev/server1/root
VG Name                server1
LV UUID                b5A1R5-Zhml-LSNy-v7WY-NVmD-yX1w-tuQVUW
LV Write Access        read/write
LV Status              available
# open                 1
LV Size                4.49 GiB
Current LE             1149
Segments               1
Allocation             inherit
Read ahead sectors     auto
- currently set to     256
Block device           251:0

--- Logical volume ---
LV Name                /dev/server1/swap_1
VG Name                server1
LV UUID                2UuF7C-zxKA-Hgz1-gZHe-rFlq-cKW7-jYVCzp
LV Write Access        read/write
LV Status              available
# open                 1
LV Size                264.00 MiB
Current LE             66
Segments               1
Allocation             inherit
Read ahead sectors     auto
- currently set to     256
Block device           251:1

root@server1:~#

Now we must change the partition type of /dev/sda1 to Linux raid autodetect as well:

fdisk /dev/sda

root@server1:~# fdisk /dev/sda

WARNING: DOS-compatible mode is deprecated. It’s strongly recommended to
switch off the mode (command ‘c’) and change display units to
sectors (command ‘u’).

Command (m for help): <-- t
Partition number (1-5): <-- 1
Hex code (type L to list codes): <-- fd
Changed system type of partition 1 to fd (Linux raid autodetect)

Command (m for help): <-- w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table. The new table will be used at
the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.
root@server1:~#
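
The same change can also be made non-interactively with sfdisk; a sketch (the option name depends on the util-linux version, so check the sfdisk man page first):

sfdisk --change-id /dev/sda 1 fd    # older util-linux releases (e.g. the one shipped with Ubuntu 10.04)
# sfdisk --part-type /dev/sda 1 fd  # newer util-linux releases use this option name instead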

Now we can add /dev/sda1 to the /dev/md0 RAID array:

mdadm --add /dev/md0 /dev/sda1

Now take a look at

cat /proc/mdstat

root@server1:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sda1[0] sdb1[1]
248768 blocks [2/2] [UU]

md1 : active raid1 sda5[0] sdb5[1]
4990912 blocks [2/2] [UU]

unused devices: <none>
root@server1:~#
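
If you want more detail about an array than /proc/mdstat shows, you can also run, for example:

mdadm --detail /dev/md0

which lists every member device together with its current state (active sync, rebuilding, faulty, and so on).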

Then adjust /etc/mdadm/mdadm.conf to the new situation:

cp /etc/mdadm/mdadm.conf_orig /etc/mdadm/mdadm.conf
mdadm --examine --scan >> /etc/mdadm/mdadm.conf

/etc/mdadm/mdadm.conf should now look something like this:

cat /etc/mdadm/mdadm.conf

# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays

# This file was auto-generated on Wed, 16 Jun 2010 20:01:25 +0200
# by mkconf $Id$
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=90f05e41:bf936896:325ecf68:79913751
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=1ab36b7f:3e2031c0:325ecf68:79913751

Now we delete /etc/grub.d/09_swraid1_setup…

rm -f /etc/grub.d/09_swraid1_setup

… and update our GRUB2 bootloader configuration:

update-grub
update-initramfs -u

Now if you take a look at /boot/grub/grub.cfg, you should find that the menuentry stanzas in the ### BEGIN /etc/grub.d/10_linux ### section look pretty much the same as what we had in /etc/grub.d/09_swraid1_setup (they should now also be set to boot from /dev/md0 instead of (hd0,1) or (hd1,1)). That is why we no longer need /etc/grub.d/09_swraid1_setup.
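
A quick way to check this (a sketch):

grep 'set root' /boot/grub/grub.cfg

The entries generated by 10_linux should now reference the md device (something like set root='(md0)') instead of a plain (hd0,1) or (hd1,1) root.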

Reboot the system:

reboot

It should boot without problems.

That’s it – you’ve successfully set up software RAID1 on your running LVM system!

8 Testing

Now let’s simulate a hard drive failure. It doesn’t matter if you select /dev/sda or /dev/sdb here. In this example I assume that /dev/sdb has failed.

To simulate the hard drive failure, you can either shut down the system and physically remove /dev/sdb, or you can soft-remove it like this:

mdadm --manage /dev/md0 --fail /dev/sdb1
mdadm --manage /dev/md1 --fail /dev/sdb5

mdadm --manage /dev/md0 --remove /dev/sdb1
mdadm --manage /dev/md1 --remove /dev/sdb5

Shut down the system:

shutdown -h now

Then put in a new /dev/sdb drive (if you simulate a failure of /dev/sda, you should now put /dev/sdb in /dev/sda‘s place and connect the new HDD as /dev/sdb!) and boot the system. It should still start without problems.

Now run

cat /proc/mdstat

and you should see that we have a degraded array:

root@server1:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sda1[0]
248768 blocks [2/1] [U_]

md1 : active raid1 sda5[0]
4990912 blocks [2/1] [U_]

unused devices: <none>
root@server1:~#

The output of

fdisk -l

should look as follows:

root@server1:~# fdisk -l

Disk /dev/sda: 5368 MB, 5368709120 bytes
255 heads, 63 sectors/track, 652 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0006b7b7

Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          32      248832   fd  Linux raid autodetect
Partition 1 does not end on cylinder boundary.
/dev/sda2              32         653     4990977    5  Extended
Partition 2 does not end on cylinder boundary.
/dev/sda5              32         653     4990976   fd  Linux raid autodetect

Disk /dev/sdb: 5368 MB, 5368709120 bytes
255 heads, 63 sectors/track, 652 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/sdb doesn’t contain a valid partition table

Disk /dev/md1: 5110 MB, 5110693888 bytes
2 heads, 4 sectors/track, 1247728 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/md1 doesn’t contain a valid partition table

Disk /dev/md0: 254 MB, 254738432 bytes
2 heads, 4 sectors/track, 62192 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/md0 doesn’t contain a valid partition table
root@server1:~#

Now we copy the partition table of /dev/sda to /dev/sdb:

sfdisk -d /dev/sda | sfdisk --force /dev/sdb

root@server1:~# sfdisk -d /dev/sda | sfdisk --force /dev/sdb
Checking that no-one is using this disk right now …
Warning: extended partition does not start at a cylinder boundary.
DOS and Linux will interpret the contents differently.
OK

Disk /dev/sdb: 652 cylinders, 255 heads, 63 sectors/track

sfdisk: ERROR: sector 0 does not have an msdos signature
/dev/sdb: unrecognized partition table type
Old situation:
No partitions found
New situation:
Units = sectors of 512 bytes, counting from 0

Device Boot    Start       End   #sectors  Id  System
/dev/sdb1   *      2048    499711     497664  fd  Linux raid autodetect
/dev/sdb2        501758  10483711    9981954   5  Extended
/dev/sdb3             0         –          0   0  Empty
/dev/sdb4             0         –          0   0  Empty
/dev/sdb5        501760  10483711    9981952  fd  Linux raid autodetect
Warning: partition 1 does not end at a cylinder boundary
Successfully wrote the new partition table

Re-reading the partition table …

If you created or changed a DOS partition, /dev/foo7, say, then use dd(1)
to zero the first 512 bytes:  dd if=/dev/zero of=/dev/foo7 bs=512 count=1
(See fdisk(8).)
root@server1:~#

Afterwards we remove any remains of a previous RAID array from /dev/sdb …

mdadm --zero-superblock /dev/sdb1
mdadm --zero-superblock /dev/sdb5

… and add /dev/sdb to the RAID arrays:

mdadm -a /dev/md0 /dev/sdb1
mdadm -a /dev/md1 /dev/sdb5

Now take a look at

cat /proc/mdstat

root@server1:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sdb1[1] sda1[0]
248768 blocks [2/2] [UU]

md1 : active raid1 sdb5[2] sda5[0]
4990912 blocks [2/1] [U_]
[========>............]  recovery = 42.1% (2101760/4990912) finish=0.5min speed=87573K/sec

unused devices: <none>
root@server1:~#
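
While the rebuild is running, you can follow its progress with, for example:

watch -n 2 cat /proc/mdstat

(press CTRL+C to leave watch).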

Wait until the synchronization has finished:

root@server1:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sdb1[1] sda1[0]
248768 blocks [2/2] [UU]

md1 : active raid1 sdb5[1] sda5[0]
4990912 blocks [2/2] [UU]

unused devices: <none>
root@server1:~#

Then install the bootloader on both HDDs:

grub-install /dev/sda
grub-install /dev/sdb

That’s it. You’ve just replaced a failed hard drive in your RAID1 array.

 

Links

  • The Software-RAID Howto: http://tldp.org/HOWTO/Software-RAID-HOWTO.html
  • Ubuntu: http://www.ubuntu.com/
