
Virtualization With KVM On A Debian Lenny Server


This guide explains how you can install and use KVM for creating and running virtual machines on a Debian Lenny server. I will show how to create image-based virtual machines and also virtual machines that use a logical volume (LVM). KVM is short for Kernel-based Virtual Machine and makes use of hardware virtualization, i.e., you need a CPU that supports hardware virtualization, e.g. Intel VT or AMD-V.

I do not issue any guarantee that this will work for you!

 

1 Preliminary Note

I’m using a machine with the hostname server1.example.com and the IP address 192.168.0.100 here as my KVM host.

We also need a desktop system where we install virt-manager so that we can connect to the graphical console of the virtual machines that we install. I’m using an Ubuntu 8.10 desktop here.

 

2 Installing KVM

Debian Lenny KVM Host:

First check if your CPU supports hardware virtualization – if this is the case, the command

egrep '(vmx|svm)' --color=always /proc/cpuinfo

should display something, e.g. like this:

server1:~# egrep '(vmx|svm)' --color=always /proc/cpuinfo
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall
nx mmxext fxsr_opt rdtscp lm 3dnowext 3dnow rep_good pni cx16 lahf_lm cmp_legacy svm extapic cr8_legacy 3dnowprefetch
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall
nx mmxext fxsr_opt rdtscp lm 3dnowext 3dnow rep_good pni cx16 lahf_lm cmp_legacy svm extapic cr8_legacy 3dnowprefetch
server1:~#

If nothing is displayed, then your processor doesn’t support hardware virtualization, and you must stop here.
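The check above can be wrapped in a small script that reports the result in plain words, which is handy before automated installs (a minimal sketch; the message strings are my own):

```shell
#!/bin/sh
# Count the cpuinfo flag lines that advertise Intel VT (vmx) or AMD-V (svm).
# grep -c prints the number of matching lines (0 if none).
count=$(grep -c -E '(vmx|svm)' /proc/cpuinfo || true)
if [ "$count" -gt 0 ]; then
    echo "Hardware virtualization supported ($count matching flag lines)"
else
    echo "No vmx/svm flags found - KVM will not work on this CPU"
fi
```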

To install KVM and virtinst (a tool to create virtual machines), we run

aptitude install kvm libvirt-bin virtinst

Afterwards we must add the user we are currently logged in as (root) to the libvirt group:

adduser `id -un` libvirt

You need to log out and log back in for the new group membership to take effect.
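After logging back in, you can verify that the new group membership is active for your session (a small sketch using standard tools):

```shell
# id -nG prints the names of all groups the current session belongs to;
# the change made by adduser only shows up after a fresh login.
if id -nG | grep -qw libvirt; then
    echo "libvirt group membership is active"
else
    echo "libvirt not in current groups - log out and back in"
fi
```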

To check if KVM has successfully been installed, run

virsh -c qemu:///system list

It should display something like this:

server1:~# virsh -c qemu:///system list
Id Name                 State
----------------------------------

server1:~#

If it displays an error instead, then something went wrong.

Next we need to set up a network bridge on our server so that our virtual machines can be accessed from other hosts as if they were physical systems in the network.

To do this, we install the package bridge-utils

aptitude install bridge-utils

… and configure a bridge. Open /etc/network/interfaces:

vi /etc/network/interfaces

Before the modification, my file looks as follows:

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
#allow-hotplug eth0
#iface eth0 inet dhcp
auto eth0
iface eth0 inet static
        address 192.168.0.100
        netmask 255.255.255.0
        network 192.168.0.0
        broadcast 192.168.0.255
        gateway 192.168.0.1

I change it so that it looks like this:

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
#allow-hotplug eth0
#iface eth0 inet dhcp
auto eth0
iface eth0 inet manual

auto br0
iface br0 inet static
        address 192.168.0.100
        network 192.168.0.0
        netmask 255.255.255.0
        broadcast 192.168.0.255
        gateway 192.168.0.1
        bridge_ports eth0
        bridge_fd 9
        bridge_hello 2
        bridge_maxage 12
        bridge_stp off

(Make sure you use the correct settings for your network!)

Restart the network…

/etc/init.d/networking restart

… and run

ifconfig

It should now show the network bridge (br0):

server1:~# ifconfig
br0       Link encap:Ethernet  HWaddr 00:1e:90:f3:f0:02
inet addr:192.168.0.100  Bcast:192.168.0.255  Mask:255.255.255.0
inet6 addr: fe80::21e:90ff:fef3:f002/64 Scope:Link
UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
RX packets:6 errors:0 dropped:0 overruns:0 frame:0
TX packets:14 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:350 (350.0 B)  TX bytes:1456 (1.4 KiB)

eth0      Link encap:Ethernet  HWaddr 00:1e:90:f3:f0:02
inet6 addr: fe80::21e:90ff:fef3:f002/64 Scope:Link
UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
RX packets:43262 errors:0 dropped:0 overruns:0 frame:0
TX packets:23574 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:63379451 (60.4 MiB)  TX bytes:1868584 (1.7 MiB)
Interrupt:251 Base address:0xc000

lo        Link encap:Local Loopback
inet addr:127.0.0.1  Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING  MTU:16436  Metric:1
RX packets:8 errors:0 dropped:0 overruns:0 frame:0
TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:560 (560.0 B)  TX bytes:560 (560.0 B)

server1:~#
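To double-check that eth0 is really enslaved to the bridge, you can run brctl show, or look under /sys, where the kernel lists each bridge's ports (a sketch; br0 is the bridge name configured above):

```shell
# Every bridge exposes its enslaved ports under /sys/class/net/<bridge>/brif.
if [ -d /sys/class/net/br0/brif ]; then
    echo "Ports enslaved to br0:"
    ls /sys/class/net/br0/brif
else
    echo "br0 not found - check /etc/network/interfaces and restart networking"
fi
```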

3 Installing virt-viewer Or virt-manager On Your Ubuntu 8.10 Desktop

Ubuntu 8.10 Desktop:

We need a means of connecting to the graphical console of our guests – we can use virt-manager (see KVM Guest Management With Virt-Manager On Ubuntu 8.10) for this. I’m assuming that you’re using an Ubuntu 8.10 desktop.

Run

sudo aptitude install virt-manager

to install virt-manager.

 

4 Creating A Debian Lenny Guest (Image-Based)

Debian Lenny KVM Host:

Now let’s go back to our Debian Lenny KVM host.

Take a look at

man virt-install

to learn how to use it.

To create a Debian Lenny guest (in bridging mode) with the name vm10, 512MB of RAM, two virtual CPUs, and the disk image ~/vm10.qcow2 (with a size of 12GB), insert the Debian Lenny Netinstall CD into the CD drive and run

virt-install --connect qemu:///system -n vm10 -r 512 --vcpus=2 -f ~/vm10.qcow2 -s 12 -c /dev/cdrom --vnc --noautoconsole --os-type linux --os-variant debianLenny --accelerate --network=bridge:br0 --hvm

Of course, you can also create an ISO image of the Debian Lenny Netinstall CD…

dd if=/dev/cdrom of=~/debian-500-amd64-netinst.iso

… and use the ISO image in the virt-install command:

virt-install --connect qemu:///system -n vm10 -r 512 --vcpus=2 -f ~/vm10.qcow2 -s 12 -c ~/debian-500-amd64-netinst.iso --vnc --noautoconsole --os-type linux --os-variant debianLenny --accelerate --network=bridge:br0 --hvm

The output is as follows:

server1:~# virt-install --connect qemu:///system -n vm10 -r 512 --vcpus=2 -f ~/vm10.qcow2 -s 12 -c ~/debian-500-amd64-netinst.iso --vnc --noautoconsole --os-type linux --os-variant debianLenny --accelerate --network=bridge:br0 --hvm

Starting install…
Creating storage file…  100% |=========================|  12 GB    00:00
Creating domain…                                                 0 B 00:00
Domain installation still in progress. You can reconnect to
the console to complete the installation process.
server1:~#

5 Connecting To The Guest

Ubuntu 8.10 Desktop:

The KVM guest will now boot from the Debian Lenny Netinstall CD and start the Debian installer – that’s why we need to connect to the graphical console of the guest. You can do this with virt-manager on the Ubuntu 8.10 desktop (see KVM Guest Management With Virt-Manager On Ubuntu 8.10).

Run

sudo virt-manager

on the Ubuntu desktop to start virt-manager.

In virt-manager, connect to the KVM host:

Type in the root password of the KVM host:

You should see vm10 as running. Mark that guest and click on the Open button to open the graphical console of the guest:

Type in the root password of the KVM host again:

You should now be connected to the graphical console of the guest and see the Debian installer:

Now install Debian as you would normally do on a physical system. Please note that at the end of the installation the Debian guest needs a reboot. The guest will then stop, so you need to start it again, either with virt-manager or from the command line of our Debian Lenny KVM host:

Debian Lenny KVM Host:

virsh --connect qemu:///system

start vm10

quit

Afterwards, you can connect to the guest again with virt-manager and configure the guest. If you install OpenSSH (package openssh-server) in the guest, you can connect to it with an SSH client (such as PuTTY).

 

6 Managing A KVM Guest

Debian Lenny KVM Host:

KVM guests can be managed through virsh, the “virtual shell”. To connect to the virtual shell, run

virsh --connect qemu:///system

This is how the virtual shell looks:

server1:~# virsh --connect qemu:///system
Welcome to virsh, the virtualization interactive terminal.

Type:  'help' for help with commands
'quit' to quit

virsh #

You can now type in commands on the virtual shell to manage your guests. Run

help

to get a list of available commands:

virsh # help
Commands:

help            print help
attach-device   attach device from an XML file
attach-disk     attach disk device
attach-interface attach network interface
autostart       autostart a domain
capabilities    capabilities
connect         (re)connect to hypervisor
console         connect to the guest console
create          create a domain from an XML file
start           start a (previously defined) inactive domain
destroy         destroy a domain
detach-device   detach device from an XML file
detach-disk     detach disk device
detach-interface detach network interface
define          define (but don’t start) a domain from an XML file
domid           convert a domain name or UUID to domain id
domuuid         convert a domain name or id to domain UUID
dominfo         domain information
domname         convert a domain id or UUID to domain name
domstate        domain state
domblkstat      get device block stats for a domain
domifstat       get network interface stats for a domain
dumpxml         domain information in XML
edit            edit XML configuration for a domain
find-storage-pool-sources discover potential storage pool sources
find-storage-pool-sources-as find potential storage pool sources
freecell        NUMA free memory
hostname        print the hypervisor hostname
list            list domains
migrate         migrate domain to another host
net-autostart   autostart a network
net-create      create a network from an XML file
net-define      define (but don’t start) a network from an XML file
net-destroy     destroy a network
net-dumpxml     network information in XML
net-edit        edit XML configuration for a network
net-list        list networks
net-name        convert a network UUID to network name
net-start       start a (previously defined) inactive network
net-undefine    undefine an inactive network
net-uuid        convert a network name to network UUID
nodeinfo        node information
pool-autostart  autostart a pool
pool-build      build a pool
pool-create     create a pool from an XML file
pool-create-as  create a pool from a set of args
pool-define     define (but don’t start) a pool from an XML file
pool-define-as  define a pool from a set of args
pool-destroy    destroy a pool
pool-delete     delete a pool
pool-dumpxml    pool information in XML
pool-edit       edit XML configuration for a storage pool
pool-info       storage pool information
pool-list       list pools
pool-name       convert a pool UUID to pool name
pool-refresh    refresh a pool
pool-start      start a (previously defined) inactive pool
pool-undefine   undefine an inactive pool
pool-uuid       convert a pool name to pool UUID
quit            quit this interactive terminal
reboot          reboot a domain
restore         restore a domain from a saved state in a file
resume          resume a domain
save            save a domain state to a file
schedinfo       show/set scheduler parameters
dump            dump the core of a domain to a file for analysis
shutdown        gracefully shutdown a domain
setmem          change memory allocation
setmaxmem       change maximum memory limit
setvcpus        change number of virtual CPUs
suspend         suspend a domain
ttyconsole      tty console
undefine        undefine an inactive domain
uri             print the hypervisor canonical URI
vol-create      create a vol from an XML file
vol-create-as   create a volume from a set of args
vol-delete      delete a vol
vol-dumpxml     vol information in XML
vol-info        storage vol information
vol-list        list vols
vol-path        convert a vol UUID to vol path
vol-name        convert a vol UUID to vol name
vol-key         convert a vol UUID to vol key
vcpuinfo        domain vcpu information
vcpupin         control domain vcpu affinity
version         show version
vncdisplay      vnc display

virsh #

list

shows all running guests;

list --all

shows all guests, running and inactive:

virsh # list --all
Id Name                 State
----------------------------------
1 vm10                 running

virsh #

If you modify a guest’s xml file (located in the /etc/libvirt/qemu/ directory), you must redefine the guest:

define /etc/libvirt/qemu/vm10.xml

Please note that whenever you modify the guest’s xml file in /etc/libvirt/qemu/, you must run the define command again!

To start a stopped guest, run:

start vm10

To stop a guest, run

shutdown vm10

To immediately stop it (i.e., pull the power plug), run

destroy vm10

Suspend a guest:

suspend vm10

Resume a guest:

resume vm10

These are the most important commands.

Type

quit

to leave the virtual shell.
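Instead of entering the interactive shell, every virsh command can also be passed directly on the command line, which is useful in scripts and cron jobs (a sketch, guarded so it does nothing on a machine without virsh):

```shell
# One-shot form: virsh runs the given command and exits immediately.
if command -v virsh >/dev/null 2>&1; then
    virsh --connect qemu:///system list --all
    virsh --connect qemu:///system dominfo vm10
else
    echo "virsh is not installed on this machine"
fi
```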

7 Creating An LVM-Based Guest

Debian Lenny KVM Host:

LVM-based guests have some advantages over image-based guests. They are not as heavy on hard disk IO, and they are easier to back up (using LVM snapshots).
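The snapshot-based backup mentioned above works roughly as follows (a sketch with assumed names: guest volume /dev/vg0/vm11, snapshot vm11-snap, backup target /backup/vm11.img; it only acts when the volume actually exists):

```shell
# Create a 2GB copy-on-write snapshot, copy the frozen state, then drop it.
# The guest keeps running; the snapshot preserves a consistent point in time.
if [ -b /dev/vg0/vm11 ]; then
    lvcreate -L2G -s -n vm11-snap /dev/vg0/vm11
    dd if=/dev/vg0/vm11-snap of=/backup/vm11.img bs=1M
    lvremove -f /dev/vg0/vm11-snap
else
    echo "/dev/vg0/vm11 does not exist - nothing to back up"
fi
```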

To use LVM-based guests, you need a volume group that has some free space that is not allocated to any logical volume. In this example, I use the volume group /dev/vg0 with a size of approx. 465GB…

vgdisplay

server1:~# vgdisplay
--- Volume group ---
VG Name               vg0
System ID
Format                lvm2
Metadata Areas        1
Metadata Sequence No  3
VG Access             read/write
VG Status             resizable
MAX LV                0
Cur LV                2
Open LV               2
Max PV                0
Cur PV                1
Act PV                1
VG Size               465.28 GB
PE Size               4.00 MB
Total PE              119112
Alloc PE / Size       59842 / 233.76 GB
Free  PE / Size       59270 / 231.52 GB
VG UUID               gnUCYV-mYXj-qxpM-PEat-tdXS-wumf-6FK3rA

server1:~#

… that contains the logical volume /dev/vg0/root with a size of approx. 232GB and the logical volume /dev/vg0/swap_1 (about 1GB) – the rest is not allocated and can be used for KVM guests:

lvdisplay

server1:~# lvdisplay
--- Logical volume ---
LV Name                /dev/vg0/root
VG Name                vg0
LV UUID                kMYrHg-d0ox-yc6y-1eNR-lB2R-yMIn-WFgzSZ
LV Write Access        read/write
LV Status              available
# open                 1
LV Size                232.83 GB
Current LE             59604
Segments               1
Allocation             inherit
Read ahead sectors     auto
- currently set to     256
Block device           254:0

--- Logical volume ---
LV Name                /dev/vg0/swap_1
VG Name                vg0
LV UUID                SUI0uq-iTsy-7EnZ-INNz-gjvu-tqLD-rGSegE
LV Write Access        read/write
LV Status              available
# open                 2
LV Size                952.00 MB
Current LE             238
Segments               1
Allocation             inherit
Read ahead sectors     auto
- currently set to     256
Block device           254:1

server1:~#
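Before carving out volumes for more guests, it is worth checking how much unallocated space is left in the volume group (vgs comes with the lvm2 package; the check is guarded so it is harmless on other machines):

```shell
# Print name, total size, and free space of vg0 in gigabytes.
if command -v vgs >/dev/null 2>&1 && [ -e /dev/vg0 ]; then
    vgs --units g -o vg_name,vg_size,vg_free vg0
else
    echo "lvm2 tools or volume group vg0 not available"
fi
```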

I will now create the virtual machine vm11 as an LVM-based guest. I want vm11 to have 20GB of disk space, so I create the logical volume /dev/vg0/vm11 with a size of 20GB:

lvcreate -L20G -n vm11 vg0

Afterwards, we use the virt-install command again to create the guest:

virt-install --connect qemu:///system -n vm11 -r 512 --vcpus=2 --disk path=/dev/vg0/vm11 -c ~/debian-500-amd64-netinst.iso --vnc --noautoconsole --os-type linux --os-variant debianLenny --accelerate --network=bridge:br0 --hvm

Please note that instead of -f ~/vm11.qcow2 I use --disk path=/dev/vg0/vm11, and I don't need the -s switch to define the disk space anymore because the disk space is defined by the size of the logical volume vm11 (20GB).

Now follow chapter 5 to install that guest.

 

8 Converting Image-Based Guests To LVM-Based Guests

Debian Lenny KVM Host:

Now let's assume we want to convert our image-based guest vm10 into an LVM-based guest. This is how we do it:

First make sure the guest is stopped:

virsh --connect qemu:///system

shutdown vm10

quit

Then create a logical volume (e.g. /dev/vg0/vm10) that has the same size as the image file. The image is 12GB, so the logical volume must be 12GB in size as well:

lvcreate -L12G -n vm10 vg0
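Rather than reading the size off by hand, you can ask qemu-img for the image's virtual size and feed that to lvcreate (a sketch; the exact output format of qemu-img info can vary between versions, so check the printed command before running it):

```shell
# Extract the virtual size (e.g. "12G") from qemu-img info and print the
# matching lvcreate command instead of hard-coding -L12G.
if command -v qemu-img >/dev/null 2>&1 && [ -f ~/vm10.qcow2 ]; then
    size=$(qemu-img info ~/vm10.qcow2 | awk '/virtual size/ {print $3}')
    echo "lvcreate -L${size} -n vm10 vg0"
else
    echo "qemu-img or ~/vm10.qcow2 not available"
fi
```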

Now you can convert the image:

qemu-img convert ~/vm10.qcow2 -O raw /dev/vg0/vm10

Afterwards you can delete the disk image:

rm -f ~/vm10.qcow2

Now we must open the guest’s xml configuration file /etc/libvirt/qemu/vm10.xml

vi /etc/libvirt/qemu/vm10.xml

… and change the following section…

[...]
    <disk type='file' device='disk'>
      <source file='/root/vm10.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
[...]

… so that it looks as follows:

[...]
    <disk type='file' device='disk'>
      <source file='/dev/vg0/vm10'/>
      <target dev='vda' bus='virtio'/>
    </disk>
[...]

Afterwards we must redefine the guest:

virsh --connect qemu:///system

define /etc/libvirt/qemu/vm10.xml

Still on the virsh shell, we can start the guest…

start vm10

… and leave the virsh shell:

quit

 

  • KVM: http://kvm.qumranet.com/
  • Debian: http://www.debian.org/
  • Ubuntu: http://www.ubuntu.com/
