
Virtualization With KVM On An OpenSUSE 11.4 Server


This guide explains how you can install and use KVM for creating and running virtual machines on an OpenSUSE 11.4 server. I will show how to create image-based virtual machines as well as virtual machines that use an LVM logical volume. KVM is short for Kernel-based Virtual Machine and makes use of hardware virtualization, i.e., you need a CPU that supports hardware virtualization, e.g. Intel VT or AMD-V.

I do not issue any guarantee that this will work for you!

1 Preliminary Note

I’m using an OpenSUSE 11.4 server with the hostname server1.example.com and the IP address 192.168.0.100 here as my KVM host.

We also need a desktop system where we install virt-manager so that we can connect to the graphical console of the virtual machines that we install. I’m using an OpenSUSE 11.4 desktop here.

 

2 Installing KVM

OpenSUSE 11.4 KVM Host:

First check if your CPU supports hardware virtualization – if this is the case, the command

egrep '(vmx|svm)' --color=always /proc/cpuinfo

should display something, e.g. like this:

server1:~ # egrep '(vmx|svm)' --color=always /proc/cpuinfo
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall
nx mmxext fxsr_opt rdtscp lm 3dnowext 3dnow rep_good extd_apicid pni cx16 lahf_lm cmp_legacy svm extapic cr8_legacy 3dnowprefetch lbrv
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall
nx mmxext fxsr_opt rdtscp lm 3dnowext 3dnow rep_good extd_apicid pni cx16 lahf_lm cmp_legacy svm extapic cr8_legacy 3dnowprefetch lbrv
server1:~ #

If nothing is displayed, then your processor doesn’t support hardware virtualization, and you must stop here.
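The same check can be scripted, e.g. to count how many CPU threads advertise one of the virtualization flags. This is a minimal sketch of the test above; nothing beyond /proc/cpuinfo is assumed:

```shell
# count processor entries in /proc/cpuinfo that advertise
# Intel VT (vmx) or AMD-V (svm)
if [ -r /proc/cpuinfo ]; then
  count=$(grep -c -E 'vmx|svm' /proc/cpuinfo || true)
  echo "CPU threads with hardware virtualization flags: $count"
else
  echo "/proc/cpuinfo not readable - is this a Linux system?"
fi
```

A count of 0 means the same as an empty egrep result: no hardware virtualization support.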

One of the dependencies that gets installed when we install KVM is Python 2.7, but Python 2.7 conflicts with the package patterns-openSUSE-minimal_base. Therefore we must uninstall that package first. To do so, start YaST:

yast2

In YaST, go to Software > Software Management:


Type patterns-openSUSE-minimal_base in the Search field and press ENTER. The package should be listed as installed (i) in the main window. Mark the package and press the ENTER key until there’s a minus (-) sign in front of the package (the minus stands for uninstall), then hit [Accept]:


As a replacement for the package, some other packages need to be installed. Accept the selection by hitting [OK]:


Leave YaST afterwards.

To install KVM and virtinst (a tool to create virtual machines), we run

yast2 -i kvm libvirt libvirt-python qemu virt-manager

Then create the system startup links for libvirtd…

chkconfig --add libvirtd

… and start the libvirt daemon:

/etc/init.d/libvirtd start

To check if KVM has successfully been installed, run

virsh -c qemu:///system list

It should display something like this:

server1:~ # virsh -c qemu:///system list
Id Name                 State
———————————-

server1:~ #

If it displays an error instead, then something went wrong.

Next we need to set up a network bridge on our server so that our virtual machines can be accessed from other hosts as if they were physical systems in the network.

To do this, we install the package bridge-utils

yast2 -i bridge-utils

… and configure a bridge.

To configure the bridge, create the file /etc/sysconfig/network/ifcfg-br0 as follows (make sure you use the IPADDR setting from the /etc/sysconfig/network/ifcfg-eth0 file):

vi /etc/sysconfig/network/ifcfg-br0

STARTMODE='auto'
BOOTPROTO='static'
IPADDR='192.168.0.100/24'
MTU=''
NETMASK=''
NETWORK=''
BROADCAST=''
USERCONTROL=no
NAME='Bridge 0'
NM_CONTROLLED=no
BRIDGE='yes'
BRIDGE_PORTS='eth0'
BRIDGE_AGEINGTIME='300'
BRIDGE_FORWARDDELAY='0'
BRIDGE_HELLOTIME='2'
BRIDGE_MAXAGE='20'
BRIDGE_PATHCOSTS='19'
BRIDGE_PORTPRIORITIES=
BRIDGE_PRIORITY=
BRIDGE_STP='on'

(If you get the message You do not have a valid vim binary package installed. Please install either “vim”, “vim-enhanced” or “gvim”., please run

yast2 -i vim

to install vi and try again.)

Modify /etc/sysconfig/network/ifcfg-eth0 as follows (set IPADDR to 0.0.0.0 and change STARTMODE to hotplug):

vi /etc/sysconfig/network/ifcfg-eth0

BOOTPROTO='static'
BROADCAST=''
ETHTOOL_OPTIONS=''
IPADDR='0.0.0.0'
MTU=''
NAME='MCP77 Ethernet'
NETMASK=''
NETWORK=''
REMOTE_IPADDR=''
STARTMODE='hotplug'
USERCONTROL='no'

Then restart the network:

/etc/init.d/network restart

Afterwards, run

ifconfig

It should now show the network bridge (br0):

server1:~ # ifconfig
br0       Link encap:Ethernet  HWaddr 00:1E:90:F3:F0:02
inet addr:192.168.0.100  Bcast:192.168.0.255  Mask:255.255.255.0
inet6 addr: fe80::21e:90ff:fef3:f002/64 Scope:Link
UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
RX packets:45 errors:0 dropped:0 overruns:0 frame:0
TX packets:45 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:4499 (4.3 Kb)  TX bytes:5703 (5.5 Kb)

eth0      Link encap:Ethernet  HWaddr 00:1E:90:F3:F0:02
inet6 addr: fe80::21e:90ff:fef3:f002/64 Scope:Link
UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
RX packets:58 errors:0 dropped:0 overruns:0 frame:0
TX packets:65 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:5995 (5.8 Kb)  TX bytes:7381 (7.2 Kb)
Interrupt:41 Base address:0xe000

lo        Link encap:Local Loopback
inet addr:127.0.0.1  Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING  MTU:16436  Metric:1
RX packets:2 errors:0 dropped:0 overruns:0 frame:0
TX packets:2 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:100 (100.0 b)  TX bytes:100 (100.0 b)

server1:~ #
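You can also verify the bridge through sysfs, which shows which ports are enslaved to it. This is a sketch; br0 and eth0 are the interface names used in this guide, so adjust them if yours differ:

```shell
# check that the bridge device exists and list its enslaved ports
if [ -d /sys/class/net/br0 ]; then
  echo "br0 exists; enslaved ports:"
  ls /sys/class/net/br0/brif
else
  echo "br0 not found - check /etc/sysconfig/network/ifcfg-br0 and restart the network"
fi
```

If eth0 is listed under brif, the bridge is wired up correctly.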

3 Installing virt-manager On Your OpenSUSE 11.4 Desktop

OpenSUSE 11.4 Desktop:

We need a means of connecting to the graphical console of our guests – we can use virt-manager for this. I’m assuming that you’re using an OpenSUSE 11.4 desktop.

Become root…

su

… and run…

yast2 -i virt-manager libvirt

… to install virt-manager.

(If you’re using an Ubuntu 10.10 desktop, you can install virt-manager as follows:

sudo aptitude install virt-manager

)

 

4 Creating A Debian Squeeze Guest (Image-Based)

OpenSUSE 11.4 KVM Host:

Now let’s go back to our OpenSUSE 11.4 KVM host.

Take a look at

virt-install --help

to learn how to use it.

We will create our image-based virtual machines in the directory /var/lib/libvirt/images/, which was created automatically when we installed KVM in chapter 2.

To create a Debian Squeeze guest (in bridging mode) with the name vm10, 512MB of RAM, two virtual CPUs, and the disk image /var/lib/libvirt/images/vm10.img (with a size of 12GB), insert the Debian Squeeze Netinstall CD into the CD drive and run

virt-install --connect qemu:///system -n vm10 -r 512 --vcpus=2 --disk path=/var/lib/libvirt/images/vm10.img,size=12 -c /dev/cdrom --vnc --noautoconsole --os-type linux --os-variant debiansqueeze --accelerate --network=bridge:br0 --hvm

Of course, you can also create an ISO image of the Debian Squeeze Netinstall CD (please create it in the /var/lib/libvirt/images/ directory because later on I will show how to create virtual machines through virt-manager from your OpenSUSE 11.4 desktop, and virt-manager will look for ISO images in the /var/lib/libvirt/images/ directory)…

dd if=/dev/cdrom of=/var/lib/libvirt/images/debian-6.0.0-amd64-netinst.iso

… and use the ISO image in the virt-install command:

virt-install --connect qemu:///system -n vm10 -r 512 --vcpus=2 --disk path=/var/lib/libvirt/images/vm10.img,size=12 -c /var/lib/libvirt/images/debian-6.0.0-amd64-netinst.iso --vnc --noautoconsole --os-type linux --os-variant debiansqueeze --accelerate --network=bridge:br0 --hvm

The output is as follows:

server1:~ # virt-install --connect qemu:///system -n vm10 -r 512 --vcpus=2 --disk path=/var/lib/libvirt/images/vm10.img,size=12 -c /var/lib/libvirt/images/debian-6.0.0-amd64-netinst.iso --vnc --noautoconsole --os-type linux --os-variant debiansqueeze --accelerate --network=bridge:br0 --hvm

Starting install...
Allocating 'vm10.img'     100% |=========================|  12 GB    00:00
Creating domain...                                                 0 B 00:00
Domain installation still in progress. You can reconnect to
the console to complete the installation process.
server1:~ #
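Before reusing the ISO image for further guests, you can sanity-check it. This is a minimal sketch using the path from this chapter; qemu-img ships with the qemu package installed earlier:

```shell
# show size and format information for the Debian netinstall ISO
ISO=/var/lib/libvirt/images/debian-6.0.0-amd64-netinst.iso
if [ -f "$ISO" ]; then
  ls -lh "$ISO"
  if command -v qemu-img >/dev/null; then
    qemu-img info "$ISO"
  fi
else
  echo "$ISO not found on this machine"
fi
```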

5 Connecting To The Guest

OpenSUSE 11.4 Desktop:

The KVM guest will now boot from the Debian Squeeze Netinstall CD and start the Debian installer – that’s why we need to connect to the graphical console of the guest. You can do this with virt-manager on the OpenSUSE 11.4 desktop.

Run

virt-manager

as a normal user (not root) on the desktop to start virt-manager (this is exactly the same on an Ubuntu desktop).

When you start virt-manager for the first time, you will most likely see the following message (Could not detect a default hypervisor.). You can ignore this because we don’t want to connect to the local libvirt daemon, but to the one on our OpenSUSE 11.4 KVM host.


Go to File > Add Connection… to connect to our OpenSUSE 11.4 KVM host:


Select QEMU/KVM as Hypervisor, then check Connect to remote host, select SSH from the Method drop-down menu, fill in root in the Username field, type in the hostname (server1.example.com) or IP address (192.168.0.100) of the OpenSUSE 11.4 KVM host in the Hostname field, and click on Connect:


If this is the first connection to the remote KVM server, you must type in yes and click on OK:


Afterwards type in the root password of the OpenSUSE 11.4 KVM host:


You should see vm10 as running. Mark that guest and click on the Open button to open the graphical console of the guest:


Type in the root password of the KVM host again (you might have to minimize the vm10 Virtual Machine window to see the password window):


You should now be connected to the graphical console of the guest and see the Debian installer:


Now install Debian as you would normally do on a physical system. Please note that at the end of the installation, the Debian guest needs a reboot. The guest will then stop, so you need to start it again, either with virt-manager or like this on our OpenSUSE 11.4 KVM host command line:

OpenSUSE 11.4 KVM Host:

virsh --connect qemu:///system

start vm10

quit

Afterwards, you can connect to the guest again with virt-manager and configure the guest. If you install OpenSSH (package openssh-server) in the guest, you can connect to it with an SSH client (such as PuTTY).

6 Creating A Debian Squeeze Guest (Image-Based) From The Desktop With virt-manager

Instead of creating a virtual machine from the command line (as shown in chapter 4), you can as well create it from the OpenSUSE 11.4 desktop using virt-manager (of course, the virtual machine will be created on the OpenSUSE 11.4 KVM host – in case you ask yourself if virt-manager is able to create virtual machines on remote systems).

To do this, click on the following button:


The New VM dialogue comes up. Fill in a name for the VM (e.g. vm11), select Local install media (ISO image or CDROM), and click on Forward:


Next select Linux in the OS type drop-down menu and Debian Squeeze in the Version drop-down menu, then check Use ISO image and click on the Browse… button:


Select the debian-6.0.0-amd64-netinst.iso image that you created in chapter 4 and click on Choose Volume:


Now click on Forward:


Assign memory and the number of CPUs to the virtual machine and click on Forward:


Now we come to the storage. Check Enable storage for this virtual machine, select Create a disk image on the computer’s hard drive, specify the size of the hard drive (e.g. 12GB), and check Allocate entire disk now. Then click on Forward:


Now we come to the last step of the New VM dialogue. Go to the Advanced options section. Select Specify shared device name; the Bridge name field will then appear where you fill in br0 (the name of the bridge which we created in chapter 2). Click on Finish afterwards:


The disk image for the VM is now being created.

Afterwards, the VM will start. Type in the root password of the OpenSUSE 11.4 KVM host:


You should now be connected to the graphical console of the guest and see the Debian installer:


Now install Debian as you would normally do on a physical system.

7 Managing A KVM Guest

OpenSUSE 11.4 KVM Host:

KVM guests can be managed through virsh, the “virtual shell”. To connect to the virtual shell, run

virsh --connect qemu:///system

This is how the virtual shell looks:

server1:~ # virsh --connect qemu:///system
Welcome to virsh, the virtualization interactive terminal.

Type:  'help' for help with commands
'quit' to quit

virsh #

You can now type in commands on the virtual shell to manage your guests. Run

help

to get a list of available commands:

virsh # help
Grouped commands:

Domain Management (help keyword 'domain'):
attach-device                  attach device from an XML file
attach-disk                    attach disk device
attach-interface               attach network interface
autostart                      autostart a domain
console                        connect to the guest console
cpu-baseline                   compute baseline CPU
cpu-compare                    compare host CPU with a CPU described by an XML file
create                         create a domain from an XML file
define                         define (but don't start) a domain from an XML file
destroy                        destroy a domain
detach-device                  detach device from an XML file
detach-disk                    detach disk device
detach-interface               detach network interface
domid                          convert a domain name or UUID to domain id
domjobabort                    abort active domain job
domjobinfo                     domain job information
domname                        convert a domain id or UUID to domain name
domuuid                        convert a domain name or id to domain UUID
domxml-from-native             Convert native config to domain XML
domxml-to-native               Convert domain XML to native config
dump                           dump the core of a domain to a file for analysis
dumpxml                        domain information in XML
edit                           edit XML configuration for a domain
managedsave                    managed save of a domain state
managedsave-remove             Remove managed save of a domain
maxvcpus                       connection vcpu maximum
memtune                        Get or set memory parameters
migrate                        migrate domain to another host
migrate-setmaxdowntime         set maximum tolerable downtime
reboot                         reboot a domain
restore                        restore a domain from a saved state in a file
resume                         resume a domain
save                           save a domain state to a file
schedinfo                      show/set scheduler parameters
setmaxmem                      change maximum memory limit
setmem                         change memory allocation
setvcpus                       change number of virtual CPUs
shutdown                       gracefully shutdown a domain
start                          start a (previously defined) inactive domain
suspend                        suspend a domain
ttyconsole                     tty console
undefine                       undefine an inactive domain
update-device                  update device from an XML file
vcpucount                      domain vcpu counts
vcpuinfo                       detailed domain vcpu information
vcpupin                        control domain vcpu affinity
version                        show version
vncdisplay                     vnc display

Domain Monitoring (help keyword 'monitor'):
domblkinfo                     domain block device size information
domblkstat                     get device block stats for a domain
domifstat                      get network interface stats for a domain
dominfo                        domain information
dommemstat                     get memory statistics for a domain
domstate                       domain state
list                           list domains

Host and Hypervisor (help keyword 'host'):
capabilities                   capabilities
connect                        (re)connect to hypervisor
freecell                       NUMA free memory
hostname                       print the hypervisor hostname
nodeinfo                       node information
qemu-monitor-command           Qemu Monitor Command
sysinfo                        print the hypervisor sysinfo
uri                            print the hypervisor canonical URI

Interface (help keyword 'interface'):
iface-define                   define (but don't start) a physical host interface from an XML file
iface-destroy                  destroy a physical host interface (disable it / "if-down")
iface-dumpxml                  interface information in XML
iface-edit                     edit XML configuration for a physical host interface
iface-list                     list physical host interfaces
iface-mac                      convert an interface name to interface MAC address
iface-name                     convert an interface MAC address to interface name
iface-start                    start a physical host interface (enable it / "if-up")
iface-undefine                 undefine a physical host interface (remove it from configuration)

Network Filter (help keyword 'filter'):
nwfilter-define                define or update a network filter from an XML file
nwfilter-dumpxml               network filter information in XML
nwfilter-edit                  edit XML configuration for a network filter
nwfilter-list                  list network filters
nwfilter-undefine              undefine a network filter

Networking (help keyword 'network'):
net-autostart                  autostart a network
net-create                     create a network from an XML file
net-define                     define (but don't start) a network from an XML file
net-destroy                    destroy a network
net-dumpxml                    network information in XML
net-edit                       edit XML configuration for a network
net-info                       network information
net-list                       list networks
net-name                       convert a network UUID to network name
net-start                      start a (previously defined) inactive network
net-undefine                   undefine an inactive network
net-uuid                       convert a network name to network UUID

Node Device (help keyword 'nodedev'):
nodedev-create                 create a device defined by an XML file on the node
nodedev-destroy                destroy a device on the node
nodedev-dettach                dettach node device from its device driver
nodedev-dumpxml                node device details in XML
nodedev-list                   enumerate devices on this host
nodedev-reattach               reattach node device to its device driver
nodedev-reset                  reset node device

Secret (help keyword 'secret'):
secret-define                  define or modify a secret from an XML file
secret-dumpxml                 secret attributes in XML
secret-get-value               Output a secret value
secret-list                    list secrets
secret-set-value               set a secret value
secret-undefine                undefine a secret

Snapshot (help keyword 'snapshot'):
snapshot-create                Create a snapshot
snapshot-current               Get the current snapshot
snapshot-delete                Delete a domain snapshot
snapshot-dumpxml               Dump XML for a domain snapshot
snapshot-list                  List snapshots for a domain
snapshot-revert                Revert a domain to a snapshot

Storage Pool (help keyword 'pool'):
find-storage-pool-sources-as   find potential storage pool sources
find-storage-pool-sources      discover potential storage pool sources
pool-autostart                 autostart a pool
pool-build                     build a pool
pool-create-as                 create a pool from a set of args
pool-create                    create a pool from an XML file
pool-define-as                 define a pool from a set of args
pool-define                    define (but don't start) a pool from an XML file
pool-delete                    delete a pool
pool-destroy                   destroy a pool
pool-dumpxml                   pool information in XML
pool-edit                      edit XML configuration for a storage pool
pool-info                      storage pool information
pool-list                      list pools
pool-name                      convert a pool UUID to pool name
pool-refresh                   refresh a pool
pool-start                     start a (previously defined) inactive pool
pool-undefine                  undefine an inactive pool
pool-uuid                      convert a pool name to pool UUID

Storage Volume (help keyword 'volume'):
vol-clone                      clone a volume.
vol-create-as                  create a volume from a set of args
vol-create                     create a vol from an XML file
vol-create-from                create a vol, using another volume as input
vol-delete                     delete a vol
vol-dumpxml                    vol information in XML
vol-info                       storage vol information
vol-key                        returns the volume key for a given volume name or path
vol-list                       list vols
vol-name                       returns the volume name for a given volume key or path
vol-path                       returns the volume path for a given volume name or key
vol-pool                       returns the storage pool for a given volume key or path
vol-wipe                       wipe a vol

Virsh itself (help keyword 'virsh'):
cd                             change the current directory
echo                           echo arguments
exit                           quit this interactive terminal
help                           print help
pwd                            print the current directory
quit                           quit this interactive terminal

virsh #

list

shows all running guests;

list --all

shows all guests, running and inactive:

virsh # list --all

Id Name                 State
———————————-
3 vm10                 running
4 vm11                 running

virsh #

If you modify a guest’s xml file (located in the /etc/libvirt/qemu/ directory), you must redefine the guest:

define /etc/libvirt/qemu/vm10.xml

Please note that whenever you modify the guest’s xml file in /etc/libvirt/qemu/, you must run the define command again!

To start a stopped guest, run:

start vm10

To stop a guest, run

shutdown vm10

To immediately stop it (i.e., pull the power plug), run

destroy vm10

Suspend a guest:

suspend vm10

Resume a guest:

resume vm10

These are the most important commands.

Type

quit

to leave the virtual shell.
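All of the virsh commands shown above can also be run non-interactively by appending them to the virsh invocation, which is handy in scripts. A sketch, assuming the guest vm10 from the previous chapters exists:

```shell
# run single virsh commands without entering the interactive shell
if command -v virsh >/dev/null; then
  virsh --connect qemu:///system list --all || echo "could not reach libvirtd"
  virsh --connect qemu:///system dominfo vm10 || echo "vm10 is not defined"
else
  echo "virsh is not installed on this machine"
fi
```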

8 Creating An LVM-Based Guest

OpenSUSE 11.4 KVM Host:

LVM-based guests have some advantages over image-based guests. They are not as heavy on hard disk IO, and they are easier to back up (using LVM snapshots).

To use LVM-based guests, you need a volume group that has some free space that is not allocated to any logical volume. In this example, I use the volume group /dev/vg_server1 with a size of approx. 465GB…

vgdisplay

server1:~ # vgdisplay
--- Volume group ---
VG Name               vg_server1
System ID
Format                lvm2
Metadata Areas        1
Metadata Sequence No  3
VG Access             read/write
VG Status             resizable
MAX LV                0
Cur LV                2
Open LV               2
Max PV                0
Cur PV                1
Act PV                1
VG Size               465.61 GiB
PE Size               4.00 MiB
Total PE              119195
Alloc PE / Size       27136 / 106.00 GiB
Free  PE / Size       92059 / 359.61 GiB
VG UUID               jv7lY3-pHqK-4kCc-C9oJ-mczc-PL7R-bf9QJL

server1:~ #

… that contains the logical volume /dev/vg_server1/lv_root with a size of approx. 100GB and the logical volume /dev/vg_server1/lv_swap (about 6GB) – the rest is not allocated and can be used for KVM guests:

lvdisplay

server1:~ # lvdisplay
--- Logical volume ---
LV Name                /dev/vg_server1/lv_root
VG Name                vg_server1
LV UUID                60W4MZ-VMn0-s0Tr-0B2r-1xc1-Pjmx-tRvaRd
LV Write Access        read/write
LV Status              available
# open                 1
LV Size                100.00 GiB
Current LE             25600
Segments               1
Allocation             inherit
Read ahead sectors     auto
- currently set to     256
Block device           253:0

--- Logical volume ---
LV Name                /dev/vg_server1/lv_swap
VG Name                vg_server1
LV UUID                1vByAo-pwVs-GmMJ-hxHi-jZqj-CWQj-2euYdU
LV Write Access        read/write
LV Status              available
# open                 1
LV Size                6.00 GiB
Current LE             1536
Segments               1
Allocation             inherit
Read ahead sectors     auto
- currently set to     256
Block device           253:1

server1:~ #

I will now create the virtual machine vm12 as an LVM-based guest. I want vm12 to have 20GB of disk space, so I create the logical volume /dev/vg_server1/vm12 with a size of 20GB:

lvcreate -L20G -n vm12 vg_server1
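You can verify the new volume before handing it to the guest. A minimal sketch using the LVM2 userland tools; vg_server1 and vm12 are the names from this example:

```shell
# show the newly created logical volume and its size in gigabytes
if command -v lvs >/dev/null; then
  lvs --units g /dev/vg_server1/vm12 || echo "logical volume not found"
else
  echo "lvs not available - install the lvm2 package"
fi
```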

Afterwards, we use the virt-install command again to create the guest:

virt-install --connect qemu:///system -n vm12 -r 512 --vcpus=2 --disk path=/dev/vg_server1/vm12 -c /var/lib/libvirt/images/debian-6.0.0-amd64-netinst.iso --vnc --noautoconsole --os-type linux --os-variant debiansqueeze --accelerate --network=bridge:br0 --hvm

Please note that instead of --disk path=/var/lib/libvirt/images/vm12.img,size=20 I use --disk path=/dev/vg_server1/vm12, and I don’t need to define the disk size anymore because it is determined by the size of the logical volume vm12 (20GB).

Now follow chapter 5 to install that guest.

 

9 Converting Image-Based Guests To LVM-Based Guests

OpenSUSE 11.4 KVM Host:

Now let’s assume we want to convert our image-based guest vm10 into an LVM-based guest. This is how we do it:

First make sure the guest is stopped:

virsh --connect qemu:///system

shutdown vm10

quit

Then create a logical volume (e.g. /dev/vg_server1/vm10) that has the same size as the image file – the image has 12GB, so the logical volume must have 12GB of size as well:

lvcreate -L12G -n vm10 vg_server1

Now convert the disk image:

qemu-img convert /var/lib/libvirt/images/vm10.img -O raw /dev/vg_server1/vm10
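Before deleting the source image, it is worth comparing the source and target with qemu-img. This is a sketch using the paths from this example; both should report the same virtual size (12GB):

```shell
# show format and virtual size of the source image and the target volume
for IMG in /var/lib/libvirt/images/vm10.img /dev/vg_server1/vm10; do
  if [ -e "$IMG" ] && command -v qemu-img >/dev/null; then
    qemu-img info "$IMG" || true
  else
    echo "$IMG not present on this machine (or qemu-img missing)"
  fi
done
```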

Afterwards you can delete the disk image:

rm -f /var/lib/libvirt/images/vm10.img

Now we must open the guest’s xml configuration file /etc/libvirt/qemu/vm10.xml…

vi /etc/libvirt/qemu/vm10.xml

… and change the following section…

[...]
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw'/>
      <source file='/var/lib/libvirt/images/vm10.img'/>
      <target dev='vda' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </disk>
[...]

… so that it looks as follows:

[...]
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw'/>
      <source dev='/dev/vg_server1/vm10'/>
      <target dev='vda' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </disk>
[...]

Afterwards we must redefine the guest:

virsh --connect qemu:///system

define /etc/libvirt/qemu/vm10.xml

Still on the virsh shell, we can start the guest…

start vm10

… and leave the virsh shell:

quit

 

  • KVM: http://kvm.qumranet.com/
  • OpenSUSE: http://www.opensuse.org/
  • Debian: http://www.debian.org/
  • Ubuntu: http://www.ubuntu.com/

 

 

 

 

 

 
