Virtualization With KVM On A Fedora 14 Server

This guide explains how you can install and use KVM for creating and running virtual machines on a Fedora 14 server. I will show how to create image-based virtual machines and also virtual machines that use a logical volume (LVM). KVM is short for Kernel-based Virtual Machine and makes use of hardware virtualization, i.e., you need a CPU that supports hardware virtualization, e.g. Intel VT or AMD-V.

I do not issue any guarantee that this will work for you!


1 Preliminary Note

I'm using a Fedora 14 server as my KVM host.

We also need a desktop system where we install virt-manager so that we can connect to the graphical console of the virtual machines that we install. I’m using a Fedora 14 desktop here.


2 Installing KVM

Fedora 14 KVM Host:

First check if your CPU supports hardware virtualization – if this is the case, the command

egrep '(vmx|svm)' --color=always /proc/cpuinfo

should display something, e.g. like this:

[root@server1 ~]# egrep '(vmx|svm)' --color=always /proc/cpuinfo
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall
nx mmxext fxsr_opt rdtscp lm 3dnowext 3dnow rep_good nopl pni cx16 lahf_lm cmp_legacy svm extapic cr8_legacy 3dnowprefetch
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall
nx mmxext fxsr_opt rdtscp lm 3dnowext 3dnow rep_good nopl pni cx16 lahf_lm cmp_legacy svm extapic cr8_legacy 3dnowprefetch
[root@server1 ~]#

If nothing is displayed, then your processor doesn’t support hardware virtualization, and you must stop here.
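The same check can be wrapped in a small helper for use in provisioning scripts. A minimal sketch (the function name and the sample flags line are my own, not part of any tool):

```shell
#!/bin/sh
# Return success (exit 0) if a /proc/cpuinfo flags line indicates
# Intel VT (vmx) or AMD-V (svm) support.
supports_hw_virt() {
    echo "$1" | grep -Eq '(vmx|svm)'
}

# Sample flags line, shortened from the output above.
flags="fpu vme de pse tsc msr pae svm extapic cr8_legacy 3dnowprefetch"

if supports_hw_virt "$flags"; then
    echo "hardware virtualization: supported"
else
    echo "hardware virtualization: NOT supported - stop here"
fi
```

In practice you would feed it the real flags, e.g. `supports_hw_virt "$(grep ^flags /proc/cpuinfo | head -n1)"`.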

To install KVM and virtinst (a tool to create virtual machines), we run

yum install kvm qemu libvirt python-virtinst qemu-kvm

Then start the libvirt daemon:

/etc/init.d/libvirtd start

To check if KVM has successfully been installed, run

virsh -c qemu:///system list

It should display something like this:

[root@server1 ~]# virsh -c qemu:///system list
 Id Name                 State
----------------------------------

[root@server1 ~]#

If it displays an error instead, then something went wrong.

Next we need to set up a network bridge on our server so that our virtual machines can be accessed from other hosts as if they were physical systems in the network.

To do this, we install the package bridge-utils

yum install bridge-utils

… and configure a bridge.

I disable Fedora’s NetworkManager and enable “normal” networking. NetworkManager is good for desktops where network connections can change (e.g. LAN vs. WLAN), but on a server you usually don’t change network connections:

chkconfig NetworkManager off
chkconfig --levels 35 network on
/etc/init.d/network restart
/etc/init.d/NetworkManager stop

Check whether your /etc/resolv.conf still lists all nameservers that you've previously configured:

cat /etc/resolv.conf

If nameservers are missing, run

vi /etc/resolv.conf

and add the missing nameservers again.
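For reference, /etc/resolv.conf simply lists one nameserver per line; the search domain and the addresses below are placeholders, not values from this setup:

```
search example.com
nameserver 192.168.0.1
nameserver 192.168.0.2
```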

To configure the bridge, create the file /etc/sysconfig/network-scripts/ifcfg-br0 (use the DNS1 (and any further DNS entries), GATEWAY, IPADDR, NETMASK and SEARCH values from the /etc/sysconfig/network-scripts/ifcfg-eth0 file):

vi /etc/sysconfig/network-scripts/ifcfg-br0
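A typical bridge file looks like the following sketch. The key names are the standard Fedora network-scripts keys; all address values are placeholders that you must replace with the values from your ifcfg-eth0:

```
DEVICE=br0
TYPE=Bridge
BOOTPROTO=static
DNS1=192.168.0.1
GATEWAY=192.168.0.1
IPADDR=192.168.0.100
NETMASK=255.255.255.0
SEARCH="example.com"
ONBOOT=yes
```

Note that TYPE=Bridge is case-sensitive.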


Modify /etc/sysconfig/network-scripts/ifcfg-eth0 as follows (comment out BOOTPROTO, DNS1 (and all other DNS servers, if any), GATEWAY, IPADDR, NETMASK, and SEARCH, set NM_CONTROLLED to no, and add BRIDGE=br0):

vi /etc/sysconfig/network-scripts/ifcfg-eth0

NAME="System eth0"
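Putting those changes together, the modified file might look like this sketch (the commented-out address values are placeholders; the HWADDR is taken from the ifconfig output below and will differ on your system):

```
DEVICE=eth0
NAME="System eth0"
HWADDR=00:1E:90:F3:F0:02
ONBOOT=yes
NM_CONTROLLED=no
BRIDGE=br0
#BOOTPROTO=static
#DNS1=192.168.0.1
#GATEWAY=192.168.0.1
#IPADDR=192.168.0.100
#NETMASK=255.255.255.0
#SEARCH="example.com"
```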

Then reboot the system:

reboot

After the reboot, run

ifconfig
It should now show the network bridge (br0):

[root@server1 ~]# ifconfig
br0       Link encap:Ethernet  HWaddr 00:1E:90:F3:F0:02
inet addr:  Bcast:  Mask:
inet6 addr: fe80::21e:90ff:fef3:f002/64 Scope:Link
RX packets:31 errors:0 dropped:0 overruns:0 frame:0
TX packets:69 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:3330 (3.2 KiB)  TX bytes:9896 (9.6 KiB)

eth0      Link encap:Ethernet  HWaddr 00:1E:90:F3:F0:02
inet6 addr: fe80::21e:90ff:fef3:f002/64 Scope:Link
RX packets:50 errors:0 dropped:0 overruns:0 frame:0
TX packets:38 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:7340 (7.1 KiB)  TX bytes:5367 (5.2 KiB)
Interrupt:44 Base address:0xe000

lo        Link encap:Local Loopback
inet addr:  Mask:
inet6 addr: ::1/128 Scope:Host
RX packets:1 errors:0 dropped:0 overruns:0 frame:0
TX packets:1 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:100 (100.0 b)  TX bytes:100 (100.0 b)

virbr0    Link encap:Ethernet  HWaddr 1A:D9:31:07:5D:6E
inet addr:  Bcast:  Mask:
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:13 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 b)  TX bytes:3229 (3.1 KiB)

[root@server1 ~]#


3 Installing virt-manager On Your Fedora 14 Desktop

Fedora 14 Desktop:

We need a means of connecting to the graphical console of our guests – we can use virt-manager for this. I’m assuming that you’re using a Fedora 14 desktop.

Become root…

su

… and run…

yum install virt-manager libvirt qemu-system-x86

… to install virt-manager.

(If you're using an Ubuntu 10.10 desktop, you can install virt-manager as follows:

sudo aptitude install virt-manager

)

4 Creating A Debian Lenny Guest (Image-Based)

Fedora 14 KVM Host:

Now let’s go back to our Fedora 14 KVM host.

Take a look at

man virt-install

to learn how to use it.

I want to create my virtual machines in the directory /vm (they cannot be created in the /root directory because the qemu user doesn’t have read permissions in that directory), so I have to create it first:

mkdir /vm

(If you try to create a virtual machine in the /root directory, you will get errors similar to this one:

[root@server1 ~]# virt-install –connect qemu:///system -n vm10 -r 512 –vcpus=2 -f ~/vm10.qcow2 -s 12 -c ~/debian-500-amd64-netinst.iso –vnc –noautoconsole –os-type linux –os-variant debianlenny –accelerate –network=bridge:br0 –hvm

Starting install…
Creating storage file vm10.qcow2                                                          |  12 GB     00:00
ERROR    internal error Process exited while reading console log output: char device redirected to /dev/pts/2
qemu: could not open disk image /root/vm10.qcow2: Permission denied

Domain installation may not have been
successful.  If it was, you can restart your domain
by running 'virsh start vm10'; otherwise, please
restart your installation.
ERROR    internal error Process exited while reading console log output: char device redirected to /dev/pts/2
qemu: could not open disk image /root/vm10.qcow2: Permission denied
Traceback (most recent call last):
File "/usr/sbin/virt-install", line 972, in <module>
File "/usr/sbin/virt-install", line 834, in main
start_time, guest.start_install)
File "/usr/sbin/virt-install", line 896, in do_install
dom = install_func(conscb, progresscb, wait=(not wait))
File "/usr/lib/python2.6/site-packages/virtinst/", line 798, in start_install
return self._do_install(consolecb, meter, removeOld, wait)
File "/usr/lib/python2.6/site-packages/virtinst/", line 899, in _do_install
self.domain = self.conn.createLinux(install_xml, 0)
File "/usr/lib64/python2.6/site-packages/", line 1147, in createLinux
if ret is None:raise libvirtError('virDomainCreateLinux() failed', conn=self)
libvirtError: internal error Process exited while reading console log output: char device redirected to /dev/pts/2
qemu: could not open disk image /root/vm10.qcow2: Permission denied

[root@server1 ~]#

)
To create a Debian Lenny guest (in bridging mode) with the name vm10, 512MB of RAM, two virtual CPUs, and the disk image /vm/vm10.qcow2 (with a size of 12GB), insert the Debian Lenny Netinstall CD into the CD drive and run

virt-install --connect qemu:///system -n vm10 -r 512 --vcpus=2 -f /vm/vm10.qcow2 -s 12 -c /dev/cdrom --vnc --noautoconsole --os-type linux --os-variant debianlenny --accelerate --network=bridge:br0 --hvm

Of course, you can also create an ISO image of the Debian Lenny Netinstall CD…

dd if=/dev/cdrom of=/vm/debian-500-amd64-netinst.iso

… and use the ISO image in the virt-install command:

virt-install --connect qemu:///system -n vm10 -r 512 --vcpus=2 -f /vm/vm10.qcow2 -s 12 -c /vm/debian-500-amd64-netinst.iso --vnc --noautoconsole --os-type linux --os-variant debianlenny --accelerate --network=bridge:br0 --hvm

The output is as follows:

[root@server1 ~]# virt-install --connect qemu:///system -n vm10 -r 512 --vcpus=2 -f /vm/vm10.qcow2
-s 12 -c /dev/cdrom --vnc --noautoconsole --os-type linux --os-variant debianlenny --accelerate --network=bridge:br0 --hvm

Starting install…
Creating storage file vm10.qcow2                              |  12 GB     00:00
Creating domain…                                            |    0 B     00:01
Domain installation still in progress. You can reconnect to
the console to complete the installation process.
[root@server1 ~]#

5 Connecting To The Guest

Fedora 14 Desktop:

The KVM guest will now boot from the Debian Lenny Netinstall CD and start the Debian installer; that's why we need to connect to the graphical console of the guest. You can do this with virt-manager on the Fedora 14 desktop. Run

virt-manager

as a normal user (not root) on the desktop to start virt-manager (this is exactly the same on an Ubuntu desktop).

When you start virt-manager for the first time, you will most likely see the message "Unable to open a connection to the libvirt management daemon." You can ignore this because we don't want to connect to the local libvirt daemon, but to the one on our Fedora 14 KVM host. Click on Close…


… and go to File > Add Connection… to connect to our Fedora 14 KVM host:


Select QEMU/KVM as Hypervisor, then check Connect to remote host, select SSH in the Method drop-down menu, type in root as the Username, and enter the hostname or IP address of the Fedora 14 KVM host in the Hostname field. Then click on Connect:


If this is the first connection to the remote KVM server, you must type in yes and click on OK:


Afterwards type in the root password of the Fedora 14 KVM host:


You should see vm10 as running. Mark that guest and click on the Open button to open the graphical console of the guest:


Type in the root password of the KVM host again:


You should now be connected to the graphical console of the guest and see the Debian installer:


Now install Debian as you would normally do on a physical system. Please note that at the end of the installation, the Debian guest needs a reboot. The guest will then stop, so you need to start it again, either with virt-manager or like this on our Fedora 14 KVM host command line:

Fedora 14 KVM Host:

virsh --connect qemu:///system

start vm10


Afterwards, you can connect to the guest again with virt-manager and configure the guest. If you install OpenSSH (package openssh-server) in the guest, you can connect to it with an SSH client (such as PuTTY).

6 Managing A KVM Guest

Fedora 14 KVM Host:

KVM guests can be managed through virsh, the “virtual shell”. To connect to the virtual shell, run

virsh --connect qemu:///system

This is how the virtual shell looks:

[root@server1 ~]# virsh --connect qemu:///system
Welcome to virsh, the virtualization interactive terminal.

Type:  'help' for help with commands
'quit' to quit

virsh #

You can now type in commands on the virtual shell to manage your guests. Run

help

to get a list of available commands:

virsh # help

help            print help
attach-device   attach device from an XML file
attach-disk     attach disk device
attach-interface attach network interface
autostart       autostart a domain
capabilities    capabilities
cd              change the current directory
connect         (re)connect to hypervisor
console         connect to the guest console
cpu-baseline    compute baseline CPU
cpu-compare     compare host CPU with a CPU described by an XML file
create          create a domain from an XML file
start           start a (previously defined) inactive domain
destroy         destroy a domain
detach-device   detach device from an XML file
detach-disk     detach disk device
detach-interface detach network interface
define          define (but don't start) a domain from an XML file
domid           convert a domain name or UUID to domain id
domuuid         convert a domain name or id to domain UUID
dominfo         domain information
domjobinfo      domain job information
domjobabort     abort active domain job
domname         convert a domain id or UUID to domain name
domstate        domain state
domblkstat      get device block stats for a domain
domifstat       get network interface stats for a domain
dommemstat      get memory statistics for a domain
domblkinfo      domain block device size information
domxml-from-native Convert native config to domain XML
domxml-to-native Convert domain XML to native config
dumpxml         domain information in XML
edit            edit XML configuration for a domain
find-storage-pool-sources discover potential storage pool sources
find-storage-pool-sources-as find potential storage pool sources
freecell        NUMA free memory
hostname        print the hypervisor hostname
list            list domains
migrate         migrate domain to another host
migrate-setmaxdowntime set maximum tolerable downtime
net-autostart   autostart a network
net-create      create a network from an XML file
net-define      define (but don't start) a network from an XML file
net-destroy     destroy a network
net-dumpxml     network information in XML
net-edit        edit XML configuration for a network
net-list        list networks
net-name        convert a network UUID to network name
net-start       start a (previously defined) inactive network
net-undefine    undefine an inactive network
net-uuid        convert a network name to network UUID
iface-list      list physical host interfaces
iface-name      convert an interface MAC address to interface name
iface-mac       convert an interface name to interface MAC address
iface-dumpxml   interface information in XML
iface-define    define (but don't start) a physical host interface from an XML file
iface-undefine  undefine a physical host interface (remove it from configuration)
iface-edit      edit XML configuration for a physical host interface
iface-start     start a physical host interface (enable it / "if-up")
iface-destroy   destroy a physical host interface (disable it / "if-down")
managedsave     managed save of a domain state
managedsave-remove Remove managed save of a domain
nodeinfo        node information
nodedev-list    enumerate devices on this host
nodedev-dumpxml node device details in XML
nodedev-dettach dettach node device from its device driver
nodedev-reattach reattach node device to its device driver
nodedev-reset   reset node device
nodedev-create  create a device defined by an XML file on the node
nodedev-destroy destroy a device on the node
nwfilter-define define or update a network filter from an XML file
nwfilter-undefine undefine a network filter
nwfilter-dumpxml network filter information in XML
nwfilter-list   list network filters
nwfilter-edit   edit XML configuration for a network filter
pool-autostart  autostart a pool
pool-build      build a pool
pool-create     create a pool from an XML file
pool-create-as  create a pool from a set of args
pool-define     define (but don't start) a pool from an XML file
pool-define-as  define a pool from a set of args
pool-destroy    destroy a pool
pool-delete     delete a pool
pool-dumpxml    pool information in XML
pool-edit       edit XML configuration for a storage pool
pool-info       storage pool information
pool-list       list pools
pool-name       convert a pool UUID to pool name
pool-refresh    refresh a pool
pool-start      start a (previously defined) inactive pool
pool-undefine   undefine an inactive pool
pool-uuid       convert a pool name to pool UUID
secret-define   define or modify a secret from an XML file
secret-dumpxml  secret attributes in XML
secret-set-value set a secret value
secret-get-value Output a secret value
secret-undefine undefine a secret
secret-list     list secrets
pwd             print the current directory
quit            quit this interactive terminal
exit            quit this interactive terminal
reboot          reboot a domain
restore         restore a domain from a saved state in a file
resume          resume a domain
save            save a domain state to a file
schedinfo       show/set scheduler parameters
dump            dump the core of a domain to a file for analysis
shutdown        gracefully shutdown a domain
setmem          change memory allocation
setmaxmem       change maximum memory limit
setvcpus        change number of virtual CPUs
suspend         suspend a domain
ttyconsole      tty console
undefine        undefine an inactive domain
update-device   update device from an XML file
uri             print the hypervisor canonical URI
vol-create      create a vol from an XML file
vol-create-from create a vol, using another volume as input
vol-create-as   create a volume from a set of args
vol-clone       clone a volume.
vol-delete      delete a vol
vol-wipe        wipe a vol
vol-dumpxml     vol information in XML
vol-info        storage vol information
vol-list        list vols
vol-pool        returns the storage pool for a given volume key or path
vol-path        returns the volume path for a given volume name or key
vol-name        returns the volume name for a given volume key or path
vol-key         returns the volume key for a given volume name or path
vcpuinfo        domain vcpu information
vcpupin         control domain vcpu affinity
version         show version
vncdisplay      vnc display
snapshot-create Create a snapshot
snapshot-current Get the current snapshot
snapshot-delete Delete a domain snapshot
snapshot-dumpxml Dump XML for a domain snapshot
snapshot-list   List snapshots for a domain
snapshot-revert Revert a domain to a snapshot

virsh #

The command

list

shows all running guests;

list --all

shows all guests, running and inactive:

virsh # list --all
 Id Name                 State
----------------------------------
  1 vm10                 running

virsh #
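The list output is easy to post-process in scripts. Here is a sketch that extracts the names of running guests; the output is hard-coded so the snippet is self-contained, but in practice you would pipe `virsh --connect qemu:///system list --all` into the awk command:

```shell
#!/bin/sh
# Extract the names of all guests in "running" state from
# captured `virsh list --all` output (sample hard-coded below).
sample=' Id Name                 State
----------------------------------
  1 vm10                 running
  2 vm11                 shut off'

# Column 3 is the first word of the state; "shut off" never matches "running".
running=$(echo "$sample" | awk '$3 == "running" { print $2 }')
echo "$running"
```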

If you modify a guest's XML file (located in the /etc/libvirt/qemu/ directory), you must redefine the guest for the changes to take effect:

define /etc/libvirt/qemu/vm10.xml
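Such edits can also be scripted. A sketch that rewrites the <memory> element (libvirt stores it in KiB) with sed; the XML here is a trimmed sample, not a complete domain file:

```shell
#!/bin/sh
# Bump a guest's RAM from 512MB (524288 KiB) to 1024MB (1048576 KiB)
# by rewriting the <memory> element of a (sample) domain XML snippet.
xml='<domain type="kvm"><name>vm10</name><memory>524288</memory></domain>'

new_xml=$(echo "$xml" | sed 's|<memory>[0-9]*</memory>|<memory>1048576</memory>|')
echo "$new_xml"
```

After editing the real file under /etc/libvirt/qemu/ this way, you must run the define command again.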

To start a stopped guest, run:

start vm10

To stop a guest, run

shutdown vm10

To immediately stop it (i.e., pull the power plug), run

destroy vm10

Suspend a guest:

suspend vm10

Resume a guest:

resume vm10

These are the most important commands. Type

quit

to leave the virtual shell.

7 Creating An LVM-Based Guest

Fedora 14 KVM Host:

LVM-based guests have some advantages over image-based guests. They cause less hard disk I/O, and they are easier to back up (using LVM snapshots).

To use LVM-based guests, you need a volume group with some free space that is not allocated to any logical volume. In this example, I use the volume group /dev/vg_server1 with a size of approx. 465GB…

vgdisplay

[root@server1 ~]# vgdisplay
--- Volume group ---
VG Name vg_server1
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 3
VG Access read/write
VG Status resizable
Cur LV 2
Open LV 2
Max PV 0
Cur PV 1
Act PV 1
VG Size 465.56 GB
PE Size 4.00 MB
Total PE 119184
Alloc PE / Size 26420 / 103.20 GB
Free PE / Size 92764 / 362.36 GB
VG UUID aHRSbB-piY1-maoZ-OWPy-DHIy-Bl2F-MPD0y2

[root@server1 ~]#

… that contains the logical volume /dev/vg_server1/lv_root with a size of approx. 98GB and the logical volume /dev/vg_server1/lv_swap (about 5.5GB) – the rest is not allocated and can be used for KVM guests:

lvdisplay

[root@server1 ~]# lvdisplay
--- Logical volume ---
LV Name /dev/vg_server1/lv_root
VG Name vg_server1
LV UUID QCl4x8-zR8r-yYZE-dNp1-leQk-ei9n-vTCcb4
LV Write Access read/write
LV Status available
# open 1
LV Size 97.66 GB
Current LE 25000
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:0

--- Logical volume ---
LV Name /dev/vg_server1/lv_swap
VG Name vg_server1
LV UUID rRg2Ua-WBbi-8bjn-TC0E-DBf2-Gcr2-k1nivK
LV Write Access read/write
LV Status available
# open 1
LV Size 5.55 GB
Current LE 1420
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:1

[root@server1 ~]#
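As a quick sanity check, vgdisplay's Free PE / Size line can be recomputed from the extent count: 92764 free extents of 4MB each are roughly 362.36GB. A minimal sketch (numbers taken from the vgdisplay output above):

```shell
#!/bin/sh
# Recompute "Free PE / Size" from the extent count and PE size.
free_pe=92764
pe_size_mb=4

free_mb=$(( free_pe * pe_size_mb ))   # 371056 MB
free_gb=$(awk "BEGIN { printf \"%.2f\", $free_mb / 1024 }")

echo "Free: $free_gb GB"
```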

I will now create the virtual machine vm11 as an LVM-based guest. I want vm11 to have 20GB of disk space, so I create the logical volume /dev/vg_server1/vm11 with a size of 20GB:

lvcreate -L20G -n vm11 vg_server1
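LVM allocates space in whole physical extents, so with the 4MB PE size shown by vgdisplay the 20GB volume corresponds to 5120 extents; the equivalent extent-based call would be lvcreate -l 5120 -n vm11 vg_server1. The conversion:

```shell
#!/bin/sh
# Convert a logical volume size in GB into LVM physical extents
# (PE size taken from the vgdisplay output above: 4 MB).
lv_size_gb=20
pe_size_mb=4

extents=$(( lv_size_gb * 1024 / pe_size_mb ))
echo "$extents extents"
```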

Afterwards, we use the virt-install command again to create the guest:

virt-install --connect qemu:///system -n vm11 -r 512 --vcpus=2 --disk path=/dev/vg_server1/vm11 -c /vm/debian-500-amd64-netinst.iso --vnc --noautoconsole --os-type linux --os-variant debianlenny --accelerate --network=bridge:br0 --hvm

Please note that instead of -f /vm/vm11.qcow2 I use --disk path=/dev/vg_server1/vm11, and I don't need the -s switch to define the disk space anymore because the disk space is defined by the size of the logical volume vm11 (20GB).

Now follow chapter 5 to install that guest.