
Virtualization With KVM On Ubuntu 10.10


This guide explains how you can install and use KVM for creating and running virtual machines on an Ubuntu 10.10 server. I will show how to create image-based virtual machines and also virtual machines that use a logical volume (LVM). KVM is short for Kernel-based Virtual Machine and makes use of hardware virtualization, i.e., you need a CPU that supports hardware virtualization, e.g. Intel VT or AMD-V.

I do not issue any guarantee that this will work for you!

 

1 Preliminary Note

I’m using a machine with the hostname server1.example.com and the IP address 192.168.0.100 here as my KVM host.

Because we will run all the steps from this tutorial with root privileges, we can either prepend each command with sudo, or we become root right now by typing

sudo su

 

2 Installing KVM And vmbuilder

First check whether your CPU supports hardware virtualization. If it does, the command

egrep '(vmx|svm)' --color=always /proc/cpuinfo

should display something like this:

root@server1:~# egrep '(vmx|svm)' --color=always /proc/cpuinfo
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext
fxsr_opt rdtscp lm 3dnowext 3dnow rep_good nopl pni cx16 lahf_lm cmp_legacy svm extapic cr8_legacy 3dnowprefetch
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext
fxsr_opt rdtscp lm 3dnowext 3dnow rep_good nopl pni cx16 lahf_lm cmp_legacy svm extapic cr8_legacy 3dnowprefetch
root@server1:~#

If nothing is displayed, then your processor doesn’t support hardware virtualization, and you must stop here.
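For scripting the same check, it can be wrapped in a small helper. This is just a sketch; has_hw_virt is a hypothetical helper name, and the optional file argument exists only so the function can be tested against sample data (on Ubuntu, the kvm-ok tool from the cpu-checker package performs a more thorough check):

```shell
# has_hw_virt reads /proc/cpuinfo by default and succeeds if the CPU
# advertises Intel VT (vmx) or AMD-V (svm).
has_hw_virt() {
    grep -Eq '(vmx|svm)' "${1:-/proc/cpuinfo}"
}

if has_hw_virt; then
    echo "CPU supports hardware virtualization (Intel VT or AMD-V)"
else
    echo "no hardware virtualization support - KVM will not work here"
fi
```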

To install KVM and vmbuilder (a script to create Ubuntu-based virtual machines), we run

aptitude install ubuntu-virt-server python-vm-builder kvm-pxe

Answer the questions that come up during the installation as follows:

General type of mail configuration: <-- Internet Site
System mail name: <-- server1.example.com

Afterwards we must add the user we are currently logged in as (root) to the groups libvirtd and kvm:

adduser `id -un` libvirtd
adduser `id -un` kvm

You need to log out and log back in for the new group memberships to take effect.
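After logging back in, you can verify that the memberships are active. A small sketch; in_group is a hypothetical helper, not part of any package:

```shell
# in_group checks whether the current login session carries a given
# supplementary group; memberships added with adduser only show up
# after a fresh login.
in_group() {
    id -nG | tr ' ' '\n' | grep -qx "$1"
}

for g in libvirtd kvm; do
    if in_group "$g"; then
        echo "already a member of $g"
    else
        echo "not yet in $g - log out and log back in"
    fi
done
```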

To check if KVM has successfully been installed, run

virsh -c qemu:///system list

It should display something like this:

root@server1:~# virsh -c qemu:///system list
Id Name                 State
----------------------------------

root@server1:~#

If it displays an error instead, then something went wrong.

Next we need to set up a network bridge on our server so that our virtual machines can be accessed from other hosts as if they were physical systems in the network.

To do this, we install the package bridge-utils

aptitude install bridge-utils

… and configure a bridge. Open /etc/network/interfaces:

vi /etc/network/interfaces

Before the modification, my file looks as follows:

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet static
        address 192.168.0.100
        netmask 255.255.255.0
        network 192.168.0.0
        broadcast 192.168.0.255
        gateway 192.168.0.1

I change it so that it looks like this:

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet manual

auto br0
iface br0 inet static
        address 192.168.0.100
        network 192.168.0.0
        netmask 255.255.255.0
        broadcast 192.168.0.255
        gateway 192.168.0.1
        bridge_ports eth0
        bridge_fd 9
        bridge_hello 2
        bridge_maxage 12
        bridge_stp off

(Make sure you use the correct settings for your network!)

Restart the network…

/etc/init.d/networking restart

… and run

ifconfig

It should now show the network bridge (br0):

root@server1:~# ifconfig
br0       Link encap:Ethernet  HWaddr 00:1e:90:f3:f0:02
          inet addr:192.168.0.100  Bcast:192.168.0.255  Mask:255.255.255.0
          inet6 addr: fe80::21e:90ff:fef3:f002/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:15 errors:0 dropped:0 overruns:0 frame:0
          TX packets:19 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1076 (1.0 KB)  TX bytes:1766 (1.7 KB)

eth0      Link encap:Ethernet  HWaddr 00:1e:90:f3:f0:02
          inet6 addr: fe80::21e:90ff:fef3:f002/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:37204 errors:0 dropped:0 overruns:0 frame:0
          TX packets:20197 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:53840040 (53.8 MB)  TX bytes:1655487 (1.6 MB)
          Interrupt:44 Base address:0xa000

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

virbr0    Link encap:Ethernet  HWaddr d2:80:51:63:84:92
          inet addr:192.168.122.1  Bcast:192.168.122.255  Mask:255.255.255.0
          inet6 addr: fe80::d080:51ff:fe63:8492/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:468 (468.0 B)

root@server1:~#

Before we start our first virtual machine, I recommend rebooting the system:

reboot

If you don’t do this, you might get an error like open /dev/kvm: Permission denied in the virtual machine logs in the /var/log/libvirt/qemu/ directory.

 

3 Creating An Image-Based VM

We can now create our first VM - an image-based VM. (If you expect lots of traffic and many read and write operations for that VM, use an LVM-based VM instead, as shown in chapter 6; image-based VMs are heavy on hard disk IO.)

I want to create my virtual machines in the directory /vm (they cannot be created in the /root directory because the libvirt-qemu user doesn’t have read permissions in that directory), so I have to create it first:

mkdir /vm

We will create a new directory for each VM, e.g. /vm/vm1, /vm/vm2, /vm/vm3, and so on, because each VM will have a subdirectory called ubuntu-kvm, and there can be just one such directory per VM directory. If you try to create a second VM in /vm/vm1, for example, you will get an error message saying ubuntu-kvm already exists (unless you run vmbuilder with the --dest=DESTDIR argument):

root@server1:/vm/vm1# vmbuilder kvm ubuntu -c vm2.cfg
2009-05-07 16:32:44,185 INFO     Cleaning up
ubuntu-kvm already exists
root@server1:/vm/vm1#

We will use the vmbuilder tool to create VMs (see the vmbuilder link at the end of this guide for more details). vmbuilder uses a template, located in the /etc/vmbuilder/libvirt/ directory, to create virtual machines. First we create a copy of it:

mkdir -p /vm/vm1/mytemplates/libvirt
cp /etc/vmbuilder/libvirt/* /vm/vm1/mytemplates/libvirt/

Now we come to the partitioning of our VM. We create a file called vmbuilder.partition

vi /vm/vm1/vmbuilder.partition

… and define the desired partitions as follows:

root 8000
swap 4000
---
/var 20000

This defines a root partition (/) with a size of 8000 MB, a swap partition of 4000 MB, and a /var partition of 20000 MB. The --- line means that the following partition (/var in this example) goes onto a separate disk image, i.e., this example creates two disk images: one holding root and swap, and one holding /var. Of course, you are free to define whatever partitions you like (as long as you also define root and swap), and they can all live in a single disk image; this is just an example.
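As a sanity check, the sizes can be summed per disk image. The following awk sketch (image_sizes is a hypothetical helper; the call is commented out so you can run it against your own file) starts a new image whenever it sees the --- separator:

```shell
# Print the total size (in MB) of each disk image described by a
# vmbuilder.partition file; a '---' line starts a new image.
image_sizes() {
    awk 'BEGIN { img = 1 }
         /^---/ { img++; next }
         NF == 2 { total[img] += $2 }
         END { for (i = 1; i <= img; i++)
                   printf "disk image %d: %d MB\n", i, total[i] }' "$1"
}
# image_sizes vmbuilder.partition
```

For the file above, this prints 12000 MB for the first image (root plus swap) and 20000 MB for the second (/var).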

I want to install openssh-server in the VM. To make sure that each VM gets a unique OpenSSH host key, we must not install openssh-server while the VM image is being created. Instead, we create a script called boot.sh that will be executed when the VM boots for the first time. It installs openssh-server (thereby generating a unique key) and also forces the user (I will use the default username administrator for my VMs together with the default password Kreationnext) to change the password at the first login:

vi /vm/vm1/boot.sh

# This script will run the first time the virtual machine boots
# It is run as root.

# Expire the user account
passwd -e administrator

# Install openssh-server
apt-get update
apt-get install -qqy --force-yes openssh-server

Make sure you replace the username administrator with your default login name.

(You can find more about this here: https://help.ubuntu.com/community/JeOSVMBuilder#First%20boot)

(You can also define a “first login” script as described here: https://help.ubuntu.com/community/JeOSVMBuilder#First%20login)

Now take a look at

vmbuilder kvm ubuntu --help

to learn about the available options.

To create our first VM, vm1, we go to the VM directory…

cd /vm/vm1/

… and run vmbuilder, e.g. as follows:

vmbuilder kvm ubuntu --suite=lucid --flavour=virtual --arch=amd64 --mirror=http://de.archive.ubuntu.com/ubuntu -o --libvirt=qemu:///system --ip=192.168.0.101 --gw=192.168.0.1 --part=vmbuilder.partition --templates=mytemplates --user=administrator --name=Administrator --pass=Kreationnext --addpkg=vim-nox --addpkg=unattended-upgrades --addpkg=acpid --firstboot=/vm/vm1/boot.sh --mem=256 --hostname=vm1 --bridge=br0

Most of the options are self-explanatory. --part specifies the file with the partitioning details, relative to our working directory (that's why we had to go to our VM directory before running vmbuilder), --templates specifies the directory that holds the template file (again relative to our working directory), and --firstboot specifies the firstboot script. --libvirt=qemu:///system tells KVM to add this VM to the list of available virtual machines. --addpkg allows you to specify Ubuntu packages that you want to have installed during the VM creation (see above why you shouldn't add openssh-server to that list and use the firstboot script instead). --bridge sets up a bridged network; as we have created the bridge br0 in chapter 2, we specify that bridge here.

With --mirror you can specify an official Ubuntu mirror, e.g. http://de.archive.ubuntu.com/ubuntu. If you leave out --mirror, the default Ubuntu repository (http://archive.ubuntu.com/ubuntu) will be used.

If you specify an IP address with the --ip switch, make sure that you also specify the correct gateway IP using the --gw switch (otherwise vmbuilder will assume that it is the first valid address in the network, which might not be correct). Usually the gateway IP is the same one that you use in /etc/network/interfaces (see chapter 2).
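For a /24 network like the one in this guide, that fallback amounts to taking the network address and incrementing the last octet. A sketch of the assumption; default_gw is a hypothetical helper, and real networks may put the gateway elsewhere:

```shell
# For a /24, the "first valid address" is the network address with the
# last octet incremented - which may or may not be your real gateway.
default_gw() {
    net=$1                                   # e.g. 192.168.0.0
    echo "${net%.*}.$(( ${net##*.} + 1 ))"
}
default_gw 192.168.0.0    # prints 192.168.0.1
```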

For the --suite switch, I have used lucid (Ubuntu 10.04 LTS); at the time of this writing, maverick (Ubuntu 10.10) was not yet supported.

The build process can take a few minutes.

Afterwards, you can find an XML configuration file for the VM in /etc/libvirt/qemu/ (=> /etc/libvirt/qemu/vm1.xml):

ls -l /etc/libvirt/qemu/

root@server1:~/vm1# ls -l /etc/libvirt/qemu/
total 8
drwxr-xr-x 3 root root 4096 2010-11-12 16:12 networks
-rw------- 1 root root 1759 2010-11-12 16:39 vm1.xml
root@server1:~/vm1#

The disk images are located in the ubuntu-kvm/ subdirectory of our VM directory:

ls -l /vm/vm1/ubuntu-kvm/

root@server1:/vm/vm1# ls -l /vm/vm1/ubuntu-kvm/
total 454876
-rwx---r-x 1 root root        95 2010-11-12 16:39 run.sh
-rw-r--r-- 1 root root 343605248 2010-11-12 16:38 tmpEauAHb.qcow2
-rw-r--r-- 1 root root 122421248 2010-11-12 16:39 tmpGCsM0p.qcow2
root@server1:/vm/vm1#

4 Creating A Second VM

If you want to create a second VM (vm2), here’s a short summary of the commands:

mkdir -p /vm/vm2/mytemplates/libvirt
cp /etc/vmbuilder/libvirt/* /vm/vm2/mytemplates/libvirt/

vi /vm/vm2/vmbuilder.partition

vi /vm/vm2/boot.sh

cd /vm/vm2/
vmbuilder kvm ubuntu --suite=lucid --flavour=virtual --arch=amd64 --mirror=http://de.archive.ubuntu.com/ubuntu -o --libvirt=qemu:///system --ip=192.168.0.102 --gw=192.168.0.1 --part=vmbuilder.partition --templates=mytemplates --user=administrator --name=Administrator --pass=Kreationnext --addpkg=vim-nox --addpkg=unattended-upgrades --addpkg=acpid --firstboot=/vm/vm2/boot.sh --mem=256 --hostname=vm2 --bridge=br0

(Please note that you don't have to create a new directory for the VM (/vm/vm2) if you pass the -d DESTDIR argument to the vmbuilder command - it allows you to create a VM in a directory where you've already created another VM. In that case you don't have to create new vmbuilder.partition and boot.sh files and don't have to modify the template, but can simply use the existing files:

cd /vm/vm1/
vmbuilder kvm ubuntu --suite=lucid --flavour=virtual --arch=amd64 --mirror=http://de.archive.ubuntu.com/ubuntu -o --libvirt=qemu:///system --ip=192.168.0.102 --gw=192.168.0.1 --part=vmbuilder.partition --templates=mytemplates --user=administrator --name=Administrator --pass=Kreationnext --addpkg=vim-nox --addpkg=unattended-upgrades --addpkg=acpid --firstboot=/vm/vm1/boot.sh --mem=256 --hostname=vm2 --bridge=br0 -d vm2-kvm

)

 

5 Managing A VM

VMs can be managed through virsh, the “virtual shell”. To connect to the virtual shell, run

virsh --connect qemu:///system

This is how the virtual shell looks:

root@server1:~/vm2# virsh --connect qemu:///system
Connecting to uri: qemu:///system
Welcome to virsh, the virtualization interactive terminal.

Type:  'help' for help with commands
'quit' to quit

virsh #

You can now type in commands on the virtual shell to manage your VMs. Run

help

to get a list of available commands:

virsh # help
Commands:

help            print help
attach-device   attach device from an XML file
attach-disk     attach disk device
attach-interface attach network interface
autostart       autostart a domain
capabilities    capabilities
cd              change the current directory
connect         (re)connect to hypervisor
console         connect to the guest console
cpu-baseline    compute baseline CPU
cpu-compare     compare host CPU with a CPU described by an XML file
create          create a domain from an XML file
start           start a (previously defined) inactive domain
destroy         destroy a domain
detach-device   detach device from an XML file
detach-disk     detach disk device
detach-interface detach network interface
define          define (but don't start) a domain from an XML file
domid           convert a domain name or UUID to domain id
domuuid         convert a domain name or id to domain UUID
dominfo         domain information
domjobinfo      domain job information
domjobabort     abort active domain job
domname         convert a domain id or UUID to domain name
domstate        domain state
domblkstat      get device block stats for a domain
domifstat       get network interface stats for a domain
dommemstat      get memory statistics for a domain
domblkinfo      domain block device size information
domxml-from-native Convert native config to domain XML
domxml-to-native Convert domain XML to native config
dumpxml         domain information in XML
edit            edit XML configuration for a domain
find-storage-pool-sources discover potential storage pool sources
find-storage-pool-sources-as find potential storage pool sources
freecell        NUMA free memory
hostname        print the hypervisor hostname
list            list domains
migrate         migrate domain to another host
migrate-setmaxdowntime set maximum tolerable downtime
net-autostart   autostart a network
net-create      create a network from an XML file
net-define      define (but don't start) a network from an XML file
net-destroy     destroy a network
net-dumpxml     network information in XML
net-edit        edit XML configuration for a network
net-list        list networks
net-name        convert a network UUID to network name
net-start       start a (previously defined) inactive network
net-undefine    undefine an inactive network
net-uuid        convert a network name to network UUID
iface-list      list physical host interfaces
iface-name      convert an interface MAC address to interface name
iface-mac       convert an interface name to interface MAC address
iface-dumpxml   interface information in XML
iface-define    define (but don’t start) a physical host interface from an XML file
iface-undefine  undefine a physical host interface (remove it from configuration)
iface-edit      edit XML configuration for a physical host interface
iface-start     start a physical host interface (enable it / "if-up")
iface-destroy   destroy a physical host interface (disable it / "if-down")
managedsave     managed save of a domain state
managedsave-remove Remove managed save of a domain
nodeinfo        node information
nodedev-list    enumerate devices on this host
nodedev-dumpxml node device details in XML
nodedev-dettach dettach node device from its device driver
nodedev-reattach reattach node device to its device driver
nodedev-reset   reset node device
nodedev-create  create a device defined by an XML file on the node
nodedev-destroy destroy a device on the node
nwfilter-define define or update a network filter from an XML file
nwfilter-undefine undefine a network filter
nwfilter-dumpxml network filter information in XML
nwfilter-list   list network filters
nwfilter-edit   edit XML configuration for a network filter
pool-autostart  autostart a pool
pool-build      build a pool
pool-create     create a pool from an XML file
pool-create-as  create a pool from a set of args
pool-define     define (but don't start) a pool from an XML file
pool-define-as  define a pool from a set of args
pool-destroy    destroy a pool
pool-delete     delete a pool
pool-dumpxml    pool information in XML
pool-edit       edit XML configuration for a storage pool
pool-info       storage pool information
pool-list       list pools
pool-name       convert a pool UUID to pool name
pool-refresh    refresh a pool
pool-start      start a (previously defined) inactive pool
pool-undefine   undefine an inactive pool
pool-uuid       convert a pool name to pool UUID
secret-define   define or modify a secret from an XML file
secret-dumpxml  secret attributes in XML
secret-set-value set a secret value
secret-get-value Output a secret value
secret-undefine undefine a secret
secret-list     list secrets
pwd             print the current directory
quit            quit this interactive terminal
exit            quit this interactive terminal
reboot          reboot a domain
restore         restore a domain from a saved state in a file
resume          resume a domain
save            save a domain state to a file
schedinfo       show/set scheduler parameters
dump            dump the core of a domain to a file for analysis
shutdown        gracefully shutdown a domain
setmem          change memory allocation
setmaxmem       change maximum memory limit
setvcpus        change number of virtual CPUs
suspend         suspend a domain
ttyconsole      tty console
undefine        undefine an inactive domain
update-device   update device from an XML file
uri             print the hypervisor canonical URI
vol-create      create a vol from an XML file
vol-create-from create a vol, using another volume as input
vol-create-as   create a volume from a set of args
vol-clone       clone a volume.
vol-delete      delete a vol
vol-wipe        wipe a vol
vol-dumpxml     vol information in XML
vol-info        storage vol information
vol-list        list vols
vol-pool        returns the storage pool for a given volume key or path
vol-path        returns the volume path for a given volume name or key
vol-name        returns the volume name for a given volume key or path
vol-key         returns the volume key for a given volume name or path
vcpuinfo        domain vcpu information
vcpupin         control domain vcpu affinity
version         show version
vncdisplay      vnc display
snapshot-create Create a snapshot
snapshot-current Get the current snapshot
snapshot-delete Delete a domain snapshot
snapshot-dumpxml Dump XML for a domain snapshot
snapshot-list   List snapshots for a domain
snapshot-revert Revert a domain to a snapshot

virsh #

list

shows all running VMs;

list --all

shows all VMs, running and inactive:

virsh # list --all
Id Name                 State
----------------------------------
- vm1                  shut off
- vm2                  shut off

virsh #

Before you start a new VM for the first time, you must define it from its XML file (located in the /etc/libvirt/qemu/ directory):

define /etc/libvirt/qemu/vm1.xml

Please note that whenever you modify the VM's XML file in /etc/libvirt/qemu/, you must run the define command again!

Now you can start the VM:

start vm1

After a few moments, you should be able to connect to the VM with an SSH client such as PuTTY; log in with the default username and password. After the first login you will be prompted to change the password.

list

should now show the VM as running:

virsh # list
Id Name                 State
----------------------------------
1 vm1                  running

virsh #

To stop a VM, run

shutdown vm1

To immediately stop it (i.e., pull the power plug), run

destroy vm1

Suspend a VM:

suspend vm1

Resume a VM:

resume vm1

These are the most important commands.

Type

quit

to leave the virtual shell.

6 Creating An LVM-Based VM

LVM-based VMs have some advantages over image-based VMs. They are not as heavy on hard disk IO, and they are easier to back up (using LVM snapshots).
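The snapshot-based backup mentioned above can be sketched roughly like this. Everything here is illustrative: snapshot_backup is a hypothetical helper, the 2 GB snapshot size and the /backup/vm3.img.gz target are assumptions, and the example call is commented out because it requires root and the vg0 volume group set up below:

```shell
# Back up a logical volume via a temporary LVM snapshot: the snapshot
# freezes a crash-consistent view of the volume while the VM keeps
# running, and is removed again once the copy is done.
snapshot_backup() {
    lv=$1; out=$2
    lvcreate -L2G -s -n "$(basename "$lv")-snap" "$lv"
    dd if="${lv}-snap" bs=4M | gzip > "$out"
    lvremove -f "${lv}-snap"
}
# snapshot_backup /dev/vg0/vm3 /backup/vm3.img.gz
```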

To use LVM-based VMs, you need a volume group that has some free space that is not allocated to any logical volume. In this example, I use the volume group /dev/vg0 with a size of approx. 465GB…

vgdisplay

root@server1:~# vgdisplay
--- Volume group ---
VG Name               vg0
System ID
Format                lvm2
Metadata Areas        1
Metadata Sequence No  3
VG Access             read/write
VG Status             resizable
MAX LV                0
Cur LV                2
Open LV               2
Max PV                0
Cur PV                1
Act PV                1
VG Size               465.29 GB
PE Size               4.00 MB
Total PE              119114
Alloc PE / Size       24079 / 94.06 GB
Free  PE / Size       95035 / 371.23 GB
VG UUID               hUDyB2-hGR5-T7gI-wxt6-p4Om-PT6l-Bgbi85

root@server1:~#

… that contains the logical volumes /dev/vg0/root with a size of approx. 100GB and /dev/vg0/swap_1 with a size of 1GB – the rest is not allocated and can be used for VMs:

lvdisplay

root@server1:~# lvdisplay
--- Logical volume ---
LV Name                /dev/vg0/root
VG Name                vg0
LV UUID                5PHWtQ-5XuQ-jgvu-uFrJ-f889-w46a-cIRFcb
LV Write Access        read/write
LV Status              available
# open                 1
LV Size                93.13 GB
Current LE             23841
Segments               1
Allocation             inherit
Read ahead sectors     auto
- currently set to     256
Block device           252:0

--- Logical volume ---
LV Name                /dev/vg0/swap_1
VG Name                vg0
LV UUID                N25s1p-AQWJ-X2WH-FAyA-xlS6-ettD-55ZHE8
LV Write Access        read/write
LV Status              available
# open                 2
LV Size                952.00 MB
Current LE             238
Segments               1
Allocation             inherit
Read ahead sectors     auto
- currently set to     256
Block device           252:1

root@server1:~#

I will now create the virtual machine vm3 as an LVM-based VM. We can use the vmbuilder command again; vmbuilder supports the --raw option, which allows it to write the VM directly to a block device (e.g. /dev/vg0/vm3).

mkdir -p /vm/vm3/mytemplates/libvirt
cp /etc/vmbuilder/libvirt/* /vm/vm3/mytemplates/libvirt/

Make sure that you create all partitions in just one image file, i.e., don't use the --- separator in the vmbuilder.partition file:

vi /vm/vm3/vmbuilder.partition

root 8000
swap 2000
/var 10000

vi /vm/vm3/boot.sh

# This script will run the first time the virtual machine boots
# It is run as root.

# Expire the user account
passwd -e administrator

# Install openssh-server
apt-get update
apt-get install -qqy --force-yes openssh-server

As you can see from the vmbuilder.partition file, the VM will use a maximum of 20GB (8000 + 2000 + 10000 MB), so we create a logical volume called /dev/vg0/vm3 with a size of 20GB now:

lvcreate -L20G -n vm3 vg0
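The -L20G figure can be double-checked against the partition file. A small sketch; required_mb is a hypothetical helper, and the call is commented out so you can point it at your own file:

```shell
# Sum the partition sizes in a vmbuilder.partition file, in MB; the
# target logical volume must be at least this large.
required_mb() {
    awk '$1 != "---" && NF == 2 { total += $2 } END { print total }' "$1"
}
# required_mb /vm/vm3/vmbuilder.partition   -> 20000
```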

We can now create the new VM as follows (please note the --raw=/dev/vg0/vm3 switch!):

cd /vm/vm3/
vmbuilder kvm ubuntu --suite=lucid --flavour=virtual --arch=amd64 --mirror=http://de.archive.ubuntu.com/ubuntu -o --libvirt=qemu:///system --ip=192.168.0.103 --gw=192.168.0.1 --part=vmbuilder.partition --raw=/dev/vg0/vm3 --templates=mytemplates --user=administrator --name=Administrator --pass=Kreationnext --addpkg=vim-nox --addpkg=unattended-upgrades --addpkg=acpid --firstboot=/vm/vm3/boot.sh --mem=256 --hostname=vm3 --bridge=br0

You can now use virsh to manage the VM:

virsh --connect qemu:///system

Run the define command first…

define /etc/libvirt/qemu/vm3.xml

… before you start the VM:

start vm3

 

  • KVM (Ubuntu Community Documentation): https://help.ubuntu.com/community/KVM
  • vmbuilder: https://help.ubuntu.com/community/JeOSVMBuilder
  • JeOS and vmbuilder: http://doc.ubuntu.com/ubuntu/serverguide/C/jeos-and-vmbuilder.html
  • Ubuntu: http://www.ubuntu.com/

 
