How To Resize LVM Software RAID1 Partitions (Shrink & Grow)


This article describes how you can shrink and grow existing software RAID1 partitions with LVM on top (if you don’t use LVM, please read this guide instead: How To Resize RAID Partitions (Shrink & Grow) (Software RAID)). I have tested this with logical volumes that use ext3 as the file system. I will describe this procedure for an intact RAID array and also a degraded RAID array.

I do not issue any guarantee that this will work for you!

1 Preliminary Note

A few days ago I found out that one of my servers had a degraded RAID1 array (/dev/md1, made up of /dev/sda5 and /dev/sdb5; /dev/sda5 had failed, /dev/sdb5 was still active):

server1:~# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sdb5[1]
4988032 blocks [2/1] [_U]

md0 : active raid1 sda1[0] sdb1[1]
248896 blocks [2/2] [UU]

unused devices: <none>
server1:~#

I tried to fix it (using this tutorial), but unfortunately at the end of the sync process (at 99.9%), the sync stopped and started over again. As I found out, this happened because there were some defective sectors at the end of the (working) partition /dev/sdb5; this is what showed up in /var/log/kern.log:

Nov 22 18:51:06 server1 kernel: sdb: Current: sense key: Aborted Command
Nov 22 18:51:06 server1 kernel: end_request: I/O error, dev sdb, sector 1465142856

So this was the worst case that could happen: /dev/sda dead and /dev/sdb about to die. To fix this, my plan was to shrink /dev/md1 so that it leaves out the broken sectors at the end of /dev/sdb5, add the new /dev/sda5 (from the replacement hard drive) to /dev/md1, let the sync finish, remove /dev/sdb5 from the array, replace /dev/sdb with a new hard drive, add the new /dev/sdb5 to /dev/md1, and finally grow /dev/md1 again.

This is one of the use cases for the following procedures (I will describe the process for an intact array and a degraded array).

I have used two small hard drives (5GB) for this tutorial – that way I didn’t have to wait that long for the resyncing to finish. Of course, on real systems you will have much larger hard drives.

Please note that /dev/md1 contains the volume group /dev/server1 with the logical volumes /dev/server1/root (which is the system partition, mount point /) and /dev/server1/swap_1 (swap).

Because I have to resize my system partition (/dev/server1/root), I have to use a rescue system (e.g. Knoppix Live-CD) to resize the array. If the array you want to resize does not contain your system partition, you probably don’t need to boot into a rescue system; but in either case, make sure that the logical volumes are unmounted!
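
If the array you want to resize does not hold the system partition, unmounting and deactivating the affected logical volumes is enough. A minimal sketch, assuming a hypothetical second array /dev/md2 carrying a volume group datavg with a single logical volume data:

umount /dev/datavg/data   # hypothetical names - unmount every filesystem that lives on the array
vgchange -an datavg       # deactivate the volume group before touching the array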

2 Intact Array

I will describe how to resize the array /dev/md1, made up of /dev/sda5 and /dev/sdb5. This is the current situation:

cat /proc/mdstat

server1:~# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sda5[0] sdb5[1]
4988032 blocks [2/2] [UU]

md0 : active raid1 sda1[0] sdb1[1]
248896 blocks [2/2] [UU]

unused devices: <none>
server1:~#

df -h

server1:~# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/server1-root
4.5G  741M  3.5G  18% /
tmpfs                 126M     0  126M   0% /lib/init/rw
udev                   10M   68K   10M   1% /dev
tmpfs                 126M     0  126M   0% /dev/shm
/dev/md0              236M   18M  206M   8% /boot
server1:~#

pvdisplay

server1:~# pvdisplay
--- Physical volume ---
PV Name /dev/md1
VG Name server1
PV Size 4.75 GB / not usable 0
Allocatable yes (but full)
PE Size (KByte) 4096
Total PE 1217
Free PE 0
Allocated PE 1217
PV UUID Ntrsmz-m0o1-WAPD-xhsb-YpH7-0okm-qfdBQG

server1:~#

vgdisplay

server1:~# vgdisplay
--- Volume group ---
VG Name server1
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 9
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 2
Open LV 2
Max PV 0
Cur PV 1
Act PV 1
VG Size 4.75 GB
PE Size 4.00 MB
Total PE 1217
Alloc PE / Size 1217 / 4.75 GB
Free PE / Size 0 / 0
VG UUID X3ZYTy-39yq-20k7-GCGk-vKVU-Xe0i-REdEu0

server1:~#

lvdisplay

server1:~# lvdisplay
--- Logical volume ---
LV Name /dev/server1/root
VG Name server1
LV UUID 3ZgGnd-Sq1s-Rchu-92b9-DpAX-mk24-0aOMm2
LV Write Access read/write
LV Status available
# open 1
LV Size 4.50 GB
Current LE 1151
Segments 1
Allocation inherit
Read ahead sectors 0
Block device 253:0

--- Logical volume ---
LV Name /dev/server1/swap_1
VG Name server1
LV UUID KM6Yq8-jQaQ-rkP8-6f4t-zrXA-Jk13-yFrWi2
LV Write Access read/write
LV Status available
# open 2
LV Size 264.00 MB
Current LE 66
Segments 1
Allocation inherit
Read ahead sectors 0
Block device 253:1

server1:~#

2.1 Shrinking An Intact Array

Boot into your rescue system and activate all needed modules:

modprobe md
modprobe linear
modprobe multipath
modprobe raid0
modprobe raid1
modprobe raid5
modprobe raid6
modprobe raid10

Then activate your RAID arrays…

cp /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf_orig
mdadm --examine --scan >> /etc/mdadm/mdadm.conf

mdadm -A --scan

… and start LVM:

/etc/init.d/lvm start
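
Before going on, it may be worth a quick check that the arrays really came up and that LVM sees the volume group, for example:

cat /proc/mdstat
mdadm --detail /dev/md1
vgdisplay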

Run

e2fsck -f /dev/server1/root

to check the file system.

/dev/md1 has a size of 5GB; I want to shrink it to 4GB. The layers have to be shrunk from the inside out, and each layer must stay at least as small as the one that contains it. The file system lives inside the logical volume /dev/server1/root, so the file system must be <= the LV (I shrink the file system to 2GB). The logical volumes (we have two of them, /dev/server1/root and /dev/server1/swap_1) in turn live inside the physical volume (PV) /dev/md1, so LV /dev/server1/root + LV /dev/server1/swap_1 must be <= the PV (I shrink /dev/server1/root to 2.5GB and delete /dev/server1/swap_1, see the next paragraph). The PV finally sits on the RAID array /dev/md1 that we want to shrink, so the PV must be <= /dev/md1 (I shrink the PV to 3GB). These intermediate sizes are deliberately on the small side; everything is grown back to its final size once /dev/md1 has been shrunk.

As /dev/server1/swap_1 is located at the end of the PV, we can delete it, shrink the PV, and then recreate /dev/server1/swap_1 afterwards so that /dev/server1/root is guaranteed to fit into our PV. If in your case the swap LV is not the last LV on the PV, there is no need to delete it, but you must make sure that you shrink whatever LV comes last far enough that all LVs fit into the shrunken PV.
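
If you are not sure where each LV sits inside the PV, you can look at the physical segment layout, for example with:

pvdisplay --maps /dev/md1
lvs -o +seg_pe_ranges server1

The LV whose segments end at the highest physical extent is the one that has to be shrunk (or removed) before the PV can be shrunk.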

So I shrink /dev/server1/root’s filesystem to 2GB (make sure you use a big enough value so that all your files and directories fit into it!):

resize2fs /dev/server1/root 2G

… and the /dev/server1/root LV to 2.5GB:

lvreduce -L2.5G /dev/server1/root

Then I delete the /dev/server1/swap_1 LV (not necessary if swap is not at the end of your hard drive – in this case make sure you shrink the last LV on the drive so that it fits into the PV!)…

lvremove /dev/server1/swap_1

… and resize the PV to 3GB:

pvresize --setphysicalvolumesize 3G /dev/md1

Now we shrink /dev/md1 to 4GB. The --size value must be given in KiB (4 x 1024 x 1024 = 4194304); make sure it is divisible by 64:

mdadm --grow /dev/md1 --size=4194304
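
If you prefer to let the shell do the conversion, a small sketch (4 GiB expressed in KiB):

NEWSIZE=$((4 * 1024 * 1024))   # 4194304
echo $((NEWSIZE % 64))         # should print 0, i.e. divisible by 64
mdadm --grow /dev/md1 --size=$NEWSIZE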

Now I grow the PV to the largest possible value (if you don’t specify a size, pvresize will use the largest possible value):

pvresize /dev/md1

Now let’s check the output of

vgdisplay

root@Knoppix:~# vgdisplay
--- Volume group ---
VG Name server1
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 26
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size 4.00 GB
PE Size 4.00 MB
Total PE 1023
Alloc PE / Size 640 / 2.50 GB
Free PE / Size 383 / 1.50 GB
VG UUID X3ZYTy-39yq-20k7-GCGk-vKVU-Xe0i-REdEu0

root@Knoppix:~#

As you see, we have 383 free PE, so we can recreate the /dev/server1/swap_1 LV (which had 66 PE before we deleted it):

lvcreate --name swap_1 -l 66 server1

mkswap /dev/server1/swap_1
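
Note that mkswap writes a new UUID to the swap volume. If /etc/fstab on the server references the swap LV by its /dev/mapper path, nothing changes; if it references it by UUID, you will have to adjust it. After booting back into the normal system you can compare the two, for example with:

blkid /dev/server1/swap_1
grep swap /etc/fstab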

Let’s check

vgdisplay

again:

root@Knoppix:~# vgdisplay
--- Volume group ---
VG Name server1
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 27
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 2
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size 4.00 GB
PE Size 4.00 MB
Total PE 1023
Alloc PE / Size 706 / 2.76 GB
Free PE / Size 317 / 1.24 GB
VG UUID X3ZYTy-39yq-20k7-GCGk-vKVU-Xe0i-REdEu0

root@Knoppix:~#

We still have 317 free PE, so we can extend our /dev/server1/root LV:

lvextend -l +317 /dev/server1/root

Now we resize /dev/server1/root’s filesystem to the largest possible value (if you don’t specify a size, resize2fs will use the largest possible value)…

resize2fs /dev/server1/root

… and run a file system check again:

e2fsck -f /dev/server1/root
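
If you want to double-check the new file system size while still in the rescue system (the LV is not mounted, so df is of no use here), dumpe2fs can report it, for example:

dumpe2fs -h /dev/server1/root | grep -E 'Block (count|size)'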

That’s it – you can now boot into the normal system again.

After the reboot you should see that /dev/md1 is now smaller:

cat /proc/mdstat

server1:~# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sda5[0] sdb5[1]
4194304 blocks [2/2] [UU]

md0 : active raid1 sda1[0] sdb1[1]
248896 blocks [2/2] [UU]

unused devices: <none>
server1:~#

df -h

server1:~# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/server1-root
3.7G  742M  3.0G  20% /
tmpfs                 126M     0  126M   0% /lib/init/rw
udev                   10M   68K   10M   1% /dev
tmpfs                 126M     0  126M   0% /dev/shm
/dev/md0              236M   18M  206M   8% /boot
server1:~#

pvdisplay

server1:~# pvdisplay
--- Physical volume ---
PV Name /dev/md1
VG Name server1
PV Size 4.00 GB / not usable 0
Allocatable yes (but full)
PE Size (KByte) 4096
Total PE 1023
Free PE 0
Allocated PE 1023
PV UUID Ntrsmz-m0o1-WAPD-xhsb-YpH7-0okm-qfdBQG

server1:~#

vgdisplay

server1:~# vgdisplay
--- Volume group ---
VG Name server1
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 28
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 2
Open LV 2
Max PV 0
Cur PV 1
Act PV 1
VG Size 4.00 GB
PE Size 4.00 MB
Total PE 1023
Alloc PE / Size 1023 / 4.00 GB
Free PE / Size 0 / 0
VG UUID X3ZYTy-39yq-20k7-GCGk-vKVU-Xe0i-REdEu0

server1:~#

lvdisplay

server1:~# lvdisplay
--- Logical volume ---
LV Name /dev/server1/root
VG Name server1
LV UUID 3ZgGnd-Sq1s-Rchu-92b9-DpAX-mk24-0aOMm2
LV Write Access read/write
LV Status available
# open 1
LV Size 3.74 GB
Current LE 957
Segments 2
Allocation inherit
Read ahead sectors 0
Block device 253:0

--- Logical volume ---
LV Name /dev/server1/swap_1
VG Name server1
LV UUID sAzi1B-pKdf-dM1E-Swx0-mgse-RFMP-ns50GQ
LV Write Access read/write
LV Status available
# open 2
LV Size 264.00 MB
Current LE 66
Segments 1
Allocation inherit
Read ahead sectors 0
Block device 253:1

server1:~#

2.2 Growing An Intact Array

Boot into your rescue system and activate all needed modules:

modprobe md
modprobe linear
modprobe multipath
modprobe raid0
modprobe raid1
modprobe raid5
modprobe raid6
modprobe raid10

Then activate your RAID arrays…

cp /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf_orig
mdadm --examine --scan >> /etc/mdadm/mdadm.conf

mdadm -A --scan

…and start LVM:

/etc/init.d/lvm start

Now we can grow /dev/md1 as follows:

mdadm --grow /dev/md1 --size=max

--size=max means the largest possible value. You can also specify a size in KiB (see the previous chapter).
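
Depending on the mdadm and kernel versions, growing the array may trigger a resync of the newly added space; you can keep an eye on it, for example, with:

cat /proc/mdstat
mdadm --detail /dev/md1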

Then we grow the PV to the largest possible value…

pvresize /dev/md1

… and take a look at

vgdisplay

root@Knoppix:~# vgdisplay
--- Volume group ---
VG Name server1
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 29
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 2
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size 4.75 GB
PE Size 4.00 MB
Total PE 1217
Alloc PE / Size 1023 / 4.00 GB
Free PE / Size 194 / 776.00 MB
VG UUID X3ZYTy-39yq-20k7-GCGk-vKVU-Xe0i-REdEu0

root@Knoppix:~#

We have 194 free PE that we can allocate to our /dev/server1/root LV:

lvextend -l +194 /dev/server1/root

Then we run a file system check…

e2fsck -f /dev/server1/root

…, resize the file system…

resize2fs /dev/server1/root

… and check the file system again:

e2fsck -f /dev/server1/root

Afterwards you can boot back into your normal system.

3 Degraded Array

I will describe how to resize the degraded array /dev/md1, made up of /dev/sda5 and /dev/sdb5, where /dev/sda5 has failed:

cat /proc/mdstat

server1:~# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sdb5[1]
4988032 blocks [2/1] [_U]

md0 : active raid1 sda1[0] sdb1[1]
248896 blocks [2/2] [UU]

unused devices: <none>
server1:~#

df -h

server1:~# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/server1-root
4.5G  741M  3.5G  18% /
tmpfs                 126M     0  126M   0% /lib/init/rw
udev                   10M   68K   10M   1% /dev
tmpfs                 126M     0  126M   0% /dev/shm
/dev/md0              236M   18M  206M   8% /boot
server1:~#

pvdisplay

server1:~# pvdisplay
--- Physical volume ---
PV Name /dev/md1
VG Name server1
PV Size 4.75 GB / not usable 0
Allocatable yes (but full)
PE Size (KByte) 4096
Total PE 1217
Free PE 0
Allocated PE 1217
PV UUID Ntrsmz-m0o1-WAPD-xhsb-YpH7-0okm-qfdBQG

server1:~#

vgdisplay

server1:~# vgdisplay
--- Volume group ---
VG Name server1
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 9
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 2
Open LV 2
Max PV 0
Cur PV 1
Act PV 1
VG Size 4.75 GB
PE Size 4.00 MB
Total PE 1217
Alloc PE / Size 1217 / 4.75 GB
Free PE / Size 0 / 0
VG UUID X3ZYTy-39yq-20k7-GCGk-vKVU-Xe0i-REdEu0

server1:~#

lvdisplay

server1:~# lvdisplay
--- Logical volume ---
LV Name /dev/server1/root
VG Name server1
LV UUID 3ZgGnd-Sq1s-Rchu-92b9-DpAX-mk24-0aOMm2
LV Write Access read/write
LV Status available
# open 1
LV Size 4.50 GB
Current LE 1151
Segments 1
Allocation inherit
Read ahead sectors 0
Block device 253:0

--- Logical volume ---
LV Name /dev/server1/swap_1
VG Name server1
LV UUID KM6Yq8-jQaQ-rkP8-6f4t-zrXA-Jk13-yFrWi2
LV Write Access read/write
LV Status available
# open 2
LV Size 264.00 MB
Current LE 66
Segments 1
Allocation inherit
Read ahead sectors 0
Block device 253:1

server1:~#

3.1 Shrinking A Degraded Array

Before we boot into the rescue system, we must make sure that /dev/sda5 is really removed from the array:

mdadm --manage /dev/md1 --fail /dev/sda5
mdadm --manage /dev/md1 --remove /dev/sda5

Then we overwrite the superblock on /dev/sda5 (this is very important: if you forget it, the system might not boot anymore after the resize!):

mdadm --zero-superblock /dev/sda5
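
If you want to be sure that the superblock is really gone, mdadm --examine should no longer find any RAID metadata on the partition, and /dev/sda5 should no longer show up in the array:

mdadm --examine /dev/sda5
cat /proc/mdstat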

Boot into your rescue system and activate all needed modules:

modprobe md
modprobe linear
modprobe multipath
modprobe raid0
modprobe raid1
modprobe raid5
modprobe raid6
modprobe raid10

Then activate your RAID arrays…

cp /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf_orig
mdadm --examine --scan >> /etc/mdadm/mdadm.conf

mdadm -A --scan

… and start LVM:

/etc/init.d/lvm start

Run

e2fsck -f /dev/server1/root

to check the file system.

/dev/md1 has a size of 5GB; I want to shrink it to 4GB. The layers have to be shrunk from the inside out, and each layer must stay at least as small as the one that contains it. The file system lives inside the logical volume /dev/server1/root, so the file system must be <= the LV (I shrink the file system to 2GB). The logical volumes (we have two of them, /dev/server1/root and /dev/server1/swap_1) in turn live inside the physical volume (PV) /dev/md1, so LV /dev/server1/root + LV /dev/server1/swap_1 must be <= the PV (I shrink /dev/server1/root to 2.5GB and delete /dev/server1/swap_1, see the next paragraph). The PV finally sits on the RAID array /dev/md1 that we want to shrink, so the PV must be <= /dev/md1 (I shrink the PV to 3GB). These intermediate sizes are deliberately on the small side; everything is grown back to its final size once /dev/md1 has been shrunk.

As /dev/server1/swap_1 is located at the end of the PV, we can delete it, shrink the PV, and then recreate /dev/server1/swap_1 afterwards so that /dev/server1/root is guaranteed to fit into our PV. If in your case the swap LV is not the last LV on the PV, there is no need to delete it, but you must make sure that you shrink whatever LV comes last far enough that all LVs fit into the shrunken PV.

So I shrink /dev/server1/root’s filesystem to 2GB (make sure you use a big enough value so that all your files and directories fit into it!):

resize2fs /dev/server1/root 2G

… and the /dev/server1/root LV to 2.5GB:

lvreduce -L2.5G /dev/server1/root

Then I delete the /dev/server1/swap_1 LV (not necessary if swap is not at the end of your hard drive – in this case make sure you shrink the last LV on the drive so that it fits into the PV!)…

lvremove /dev/server1/swap_1

… and resize the PV to 3GB:

pvresize --setphysicalvolumesize 3G /dev/md1

Now we shrink /dev/md1 to 4GB. The --size value must be given in KiB (4 x 1024 x 1024 = 4194304); make sure it is divisible by 64:

mdadm --grow /dev/md1 --size=4194304

Now I grow the PV to the largest possible value (if you don’t specify a size, pvresize will use the largest possible value):

pvresize /dev/md1

Now let’s check the output of

vgdisplay

root@Knoppix:~# vgdisplay
--- Volume group ---
VG Name server1
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 26
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size 4.00 GB
PE Size 4.00 MB
Total PE 1023
Alloc PE / Size 640 / 2.50 GB
Free PE / Size 383 / 1.50 GB
VG UUID X3ZYTy-39yq-20k7-GCGk-vKVU-Xe0i-REdEu0

root@Knoppix:~#

As you see, we have 383 free PE, so we can recreate the /dev/server1/swap_1 LV (which had 66 PE before we deleted it):

lvcreate --name swap_1 -l 66 server1

mkswap /dev/server1/swap_1

Let’s check

vgdisplay

again:

root@Knoppix:~# vgdisplay
--- Volume group ---
VG Name server1
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 27
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 2
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size 4.00 GB
PE Size 4.00 MB
Total PE 1023
Alloc PE / Size 706 / 2.76 GB
Free PE / Size 317 / 1.24 GB
VG UUID X3ZYTy-39yq-20k7-GCGk-vKVU-Xe0i-REdEu0

root@Knoppix:~#

We still have 317 free PE, so we can extend our /dev/server1/root LV:

lvextend -l +317 /dev/server1/root

Now we resize /dev/server1/root’s filesystem to the largest possible value (if you don’t specify a size, resize2fs will use the largest possible value)…

resize2fs /dev/server1/root

… and run a file system check again:

e2fsck -f /dev/server1/root

Then boot into the normal system again and run the following two commands to add /dev/sda5 back to the array /dev/md1:

mdadm --zero-superblock /dev/sda5
mdadm -a /dev/md1 /dev/sda5

Take a look at

cat /proc/mdstat

and you should see that /dev/sdb5 and /dev/sda5 are now being synced.
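
To follow the rebuild continuously, you could use watch, for example:

watch -n 10 cat /proc/mdstat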

3.2 Growing A Degraded Array

Before we boot into the rescue system, we must make sure that /dev/sda5 is really removed from the array:

mdadm --manage /dev/md1 --fail /dev/sda5
mdadm --manage /dev/md1 --remove /dev/sda5

Then we overwrite the superblock on /dev/sda5 (this is very important: if you forget it, the system might not boot anymore after the resize!):

mdadm --zero-superblock /dev/sda5

Boot into your rescue system and activate all needed modules:

modprobe md
modprobe linear
modprobe multipath
modprobe raid0
modprobe raid1
modprobe raid5
modprobe raid6
modprobe raid10

Then activate your RAID arrays…

cp /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf_orig
mdadm --examine --scan >> /etc/mdadm/mdadm.conf

mdadm -A --scan

…and start LVM:

/etc/init.d/lvm start

Now we can grow /dev/md1 as follows:

mdadm --grow /dev/md1 --size=max

--size=max means the largest possible value. You can also specify a size in KiB (see the previous chapter).

Then we grow the PV to the largest possible value…

pvresize /dev/md1

… and take a look at

vgdisplay

root@Knoppix:~# vgdisplay
--- Volume group ---
VG Name server1
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 29
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 2
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size 4.75 GB
PE Size 4.00 MB
Total PE 1217
Alloc PE / Size 1023 / 4.00 GB
Free PE / Size 194 / 776.00 MB
VG UUID X3ZYTy-39yq-20k7-GCGk-vKVU-Xe0i-REdEu0

root@Knoppix:~#

We have 194 free PE that we can allocate to our /dev/server1/root LV:

lvextend -l +194 /dev/server1/root

Then we run a file system check…

e2fsck -f /dev/server1/root

…, resize the file system…

resize2fs /dev/server1/root

… and check the file system again:

e2fsck -f /dev/server1/root

Then boot into the normal system again and run the following two commands to add /dev/sda5 back to the array /dev/md1:

mdadm --zero-superblock /dev/sda5
mdadm -a /dev/md1 /dev/sda5

Take a look at

cat /proc/mdstat

and you should see that /dev/sdb5 and /dev/sda5 are now being synced.

4 Links

Knoppix: http://www.knoppix.net/
