Virtualization in Fedora 8
Fedora 8 includes support for both the KVM and the Xen virtualization platforms. For more information on different virtualization platforms, see http://virt.kernelnewbies.org/TechComparison.
More information on Xen itself can be found at http://wiki.xensource.com/xenwiki/ and the Fedora Xen page. More information on KVM can be found at http://kvm.qumranet.com/kvmwiki.
Fedora is following the 3.0.x Xen line. Xen 3.0.0 was released in December of 2005 and is incompatible with guests using the previous Xen 2.0.x releases.
Quick Start
Setting up Xen and guests in Fedora 8 has some significant changes and improvements since the release of Fedora Core 6. The following guide will explain how to set up Xen and KVM, and how to create and manage guests using either the command line or GUI interface.
System Requirements
- For Xen, GRUB, the default boot loader, is required[[FootNote(This is required because the system actually boots the Xen hypervisor, which then starts the Linux kernel using the Multiboot standard.)]]
- For KVM, the system must have a CPU with virtualization support.
- Sufficient storage space for the guest operating systems. A minimal command-line Fedora system requires around 600 MB of storage; a standard desktop Fedora system requires around 3 GB.
- Generally speaking, at least 256 MB of RAM per guest, plus 256 MB for the base OS. Practically speaking, it is hard to do useful work with virtualization on less than 1 GB of RAM.
Requirements for Para-virtualized Guests
Any x86_64 or ia64 CPU is supported for running para-virtualized guests with Xen. For i386 hardware, a CPU with the PAE extension is required. Many older laptops (particularly those based on Pentium Mobile / Centrino) do not have PAE support. To determine if a CPU has PAE support, run the following command:
grep pae /proc/cpuinfo
flags : fpu tsc msr pae mce cx8 apic mtrr mca cmov pat pse36 mmx fxsr sse syscall mmxext 3dnowext 3dnow up ts
The above output shows a CPU that does have PAE support. If the command returns nothing, then the CPU does not have PAE support.
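The same check can be wrapped in a small script that prints a yes/no answer instead of raw flags; this is just a convenience sketch around the grep above:

```shell
# Report whether the CPU advertises PAE (required for i386 para-virt guests).
if grep -q '\bpae\b' /proc/cpuinfo; then
    pae_status="PAE supported"
else
    pae_status="PAE not supported"
fi
echo "$pae_status"
```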
Fully-virtualized guests (HVM/Intel-VT/AMD-V)
To run fully virtualized guests in Xen or KVM, host CPU support is needed. This is typically referred to as Intel VT or AMD-V. To check for Intel VT support, look for the 'vmx' flag; for AMD-V support, check for the 'svm' flag:
For Intel:
grep vmx /proc/cpuinfo
flags : fpu tsc msr pae mce cx8 apic mtrr mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm syscall nx lm constant_tsc pni monitor ds_cpl vmx est tm2 cx16 xtpr lahf_lm
For AMD:
grep svm /proc/cpuinfo
flags : fpu tsc msr pae mce cx8 apic mtrr mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt rdtscp lm 3dnowext 3dnow pni cx16 lahf_lm cmp_legacy svm cr8_legacy
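Both checks can be combined into a single command that reports whether either flag is present; a minimal sketch:

```shell
# 'vmx' = Intel VT, 'svm' = AMD-V; either one allows fully-virtualized guests.
if grep -qE '\b(vmx|svm)\b' /proc/cpuinfo; then
    hvm_status="full virtualization supported"
else
    hvm_status="full virtualization not supported"
fi
echo "$hvm_status"
```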
Installing the Virtualization Software
When doing a fresh install of Fedora 8, the virtualization packages can be installed by selecting Virtualization in the Base Group in the installer.
For an existing Fedora 8 installation, the Xen kernel, KVM, and other virtualization tools can be installed by running the following command:
su -c "yum groupinstall 'Virtualization'"
Enter the root password when prompted.
This installs python-virtinst, kvm, qemu, and virt-manager. Optional packages in this group are xen, kernel-xen, and gnome-applet-vm.
If kernel-xen is installed, there will be an entry in the file /boot/grub/grub.conf for booting the xen kernel. The xen kernel is not set as the default boot option.
To set GRUB to boot kernel-xen by default, edit /boot/grub/grub.conf and set the default to the xen entry[[FootNote(Note that future kernel-xen packages can be set as the default kernel by editing /etc/sysconfig/kernel)]].
This is an example /boot/grub/grub.conf
configured to boot into the Xen hypervisor:
default=0
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Fedora (2.6.21-2950.fc8xen)
        root (hd0,0)
        kernel /xen.gz-2.6.21-2950.fc8
        module /vmlinuz-2.6.21-2950.fc8xen ro root=/dev/vg_system/lv_root rhgb quiet
        module /initrd-2.6.21-2950.fc8xen.img
title Fedora (2.6.23.1-49.fc8)
        root (hd0,0)
        kernel /vmlinuz-2.6.23.1-49.fc8 ro root=/dev/vg_system/lv_root rhgb quiet
        initrd /initrd-2.6.23.1-49.fc8.img
Verify Virtualization is Enabled
Fedora 8 supports multiple underlying virtualization platforms, so verifying that virtualization is enabled depends on which platform is being used. For example, when using KVM the command to display all domains on the local system is virsh -c qemu:///system list. When using Xen, the command is virsh -c xen:///system list.
To verify that virtualization is enabled on the system, run the following command, where <URI> is a valid URI that libvirt can recognize. For more details on URIs, see http://libvirt.org/uri.html.
su -c "virsh -c <URI> list"
 Name        ID Mem(MiB) VCPUs State   Time(s)
 Domain-0     0      610     1 r-----  12492.1
The above output indicates that there is an active hypervisor. If virtualization is not enabled, an error similar to the following will appear:
su -c "virsh -c <URI> list"
libvir: error : operation failed: xenProxyOpen
error: failed to connect to the hypervisor
error: no valid connection
If the above error appears, make sure that:
- For Xen, make sure the system is running the Xen kernel and that xend is running
- For KVM, make sure that libvirtd is running
- For either, make sure the URI is properly specified (see http://libvirt.org/uri.html for details)
Configuring Remote Management
Fedora 8 adds the ability to manage virtual domains in a secure manner from remote hosts. To use these features, choose one of the following methods for the remote host to communicate with the hypervisor:
- Create SSH keys for root, and use ssh-agent and ssh-add before launching virt-manager.
- Set up a local certificate authority and issue x509 certificates to all servers and clients. For information on configuring this option, refer to http://libvirt.org/remote.html.
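The SSH option can be sketched as follows. The key path here is illustrative; in practice, generate the key for root and do it only once:

```shell
# Generate a dedicated passwordless key, then load it into an agent
# so virt-manager can connect without prompting. Paths are illustrative.
keydir=$(mktemp -d)
ssh-keygen -q -t rsa -f "$keydir/id_rsa" -N ""
if command -v ssh-agent >/dev/null 2>&1; then
    eval "$(ssh-agent -s)" >/dev/null
    ssh-add "$keydir/id_rsa"
fi
```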
Building a Fedora Guest System
With Fedora 8, installation of Fedora 8 guests using anaconda is supported. The installation can be started on the command line via the virt-install program or in the GUI program virt-manager.
Building a Fedora Guest System using virt-install
Start the interactive install process by running the virt-install command-line program:
su -c "/usr/sbin/virt-install"
Enter the root password when prompted.
The following questions about the new guest OS will be presented. This information can also be passed as command-line options; run with an argument of --help for more details. In particular, kickstart options can be passed with -x ks=options.
- What is the name of your virtual machine? This is the label that will identify the guest OS. This label is used for various virsh commands and also appears in virt-manager and the GNOME panel Xen applet.
- How much RAM should be allocated (in megabytes)? This is the amount of RAM to allocate for the guest instance in megabytes (e.g. 256). Note that installation with less than 256 megabytes is not recommended.
- What would you like to use as the disk (path)? The local path and file name of the file to serve as the disk image for the guest (eg, /home/joe/xenbox1). This will be exported as a full disk to your guest.
- How large would you like the disk to be (in gigabytes)? The size of the virtual disk for the guest (only asked if the file specified above does not already exist). 4.0 gigabytes is a reasonable size for a "default" install.
- Would you like to enable graphics support (yes or no): Should the graphical installer be used?
- What is the install location? This is the path to a Fedora 8 installation tree in the format used by anaconda. NFS, FTP, and HTTP locations are all supported. Examples include:
nfs:my.nfs.server.com:/path/to/test2/tree/
http://my.http.server.com/path/to/tree/
The installation will then commence. If graphics were enabled, a VNC window will open and present the graphical installer. If graphics were not enabled, the standard text installer will appear. Proceed as normal with the installation.
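For non-interactive use, the same answers can be supplied up front as options. The following is only a sketch: the guest name, disk path, sizes, and install-tree URL are placeholders, not recommendations. The command is built in a variable and printed for review; run it via su -c when satisfied:

```shell
# Hypothetical non-interactive virt-install invocation (all values are
# placeholders). Review the echoed command, then run: su -c "$cmd"
cmd="/usr/sbin/virt-install --name demoguest --ram 256 \
 --file /var/lib/xen/images/demoguest.img --file-size 4 \
 --vnc --location http://my.http.server.com/path/to/tree/"
echo "$cmd"
```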
Building a Fedora Guest System using virt-manager
Start the GUI Virtual Machine Manager by selecting it from the "Applications-->System Tools" menu, or by running the following command as root:
su -c "virt-manager"
Enter the root password when prompted.
- Open a connection to a hypervisor by choosing File-->Open connection...
- Choose "qemu" for KVM, or "Xen" for Xen.
- Choose "local" or select a method to connect to a remote hypervisor
- After a connection is opened, click the new icon next to the hypervisor, or right click on the active hypervisor and select "New" (Note - the new icon is going to be improved to make it easier to see)
- A wizard will present the same questions as the virt-install command-line utility (see descriptions above). The wizard assumes that a graphical installation is desired and does not prompt for this option.
- On the last page of the wizard there is a "Finish" button. When this is clicked, the guest OS is provisioned. After a few moments a VNC window should appear. Proceed with the installation as normal.
Building a Fedora Guest System using 'cobbler' and 'koan'
Cobbler is a tool for configuring a provisioning server for PXE, Xen, and existing systems. See http://cobbler.et.redhat.com for details. The following instructions are rather minimal and more configuration options are available.
First, set up a provisioning server:
su -c "yum install cobbler"
man cobbler     # read the docs!
cobbler check   # validate that the system is configured correctly
cobbler distro add --name=myxendistro --kernel=/path/to/vmlinuz --initrd=/path/to/initrd.img
cobbler profile add --name=myxenprofile --distro=myxendistro [--kickstart=/path/to/kickstart]
cobbler list    # review the configuration
cobbler sync    # apply the configuration to the filesystem
Alternatively, cobbler can import a Fedora rsync mirror and create profiles automatically from there. Some of the imported distros will be Xen profiles and some will be for bare metal. Usage of the Xen profiles will be required. See the manpage for details.
cobbler import --mirror=rsync://your-fedora-mirror --mirror-name=fedora
cobbler sync
On the system that will host the image:
su -c "yum install koan"
koan --virt --profile=myxenprofile --server=hostname-of-cobbler-server
After Installation
When the installation of the guest operating system is complete, it can be managed using the GUI virt-manager program or on the command line using virsh.
Managing Virtual Machines graphically with virt-manager
Start the GUI Virtual Machine Manager by selecting it from the "Applications-->System Tools" menu, or by running the following command:
virt-manager
If you are not root, you will be prompted to enter the root password. Choose Run unprivileged to operate in a read-only, non-root mode.
- Choose "Local Xen Host" and click "Connect" in the "Open Connection" dialog window.
- The list of virtual machines is displayed in the main window. The first machine is called "Domain 0"; this is the host computer.
- If a machine is not listed, it is probably not running. To start up a machine select "File-->Restore a saved machine..." and select the file that serves as the guest's disk.
- The display lists the status, CPU and memory usage for each machine. Additional statistics can be selected under the "View" menu.
- Double click the name of a machine to open the virtual console.
- From the virtual console, select "View-->Details" to access the machine's properties and change its hardware configuration.
- To access the serial console (if there is a problem with the graphical console), select "View-->Serial Console".
For further information about virt-manager, consult the project website. Bugs in the virt-manager tool should be reported in Bugzilla against the 'virt-manager' component.
Managing Virtual Machines from the command line with virsh
Virtual machines can be managed on the command line with the virsh utility. The virsh utility is built around the libvirt management API and has a number of advantages over the traditional Xen xm tool:
- virsh has a stable set of commands whose syntax and semantics are preserved across updates to the underlying virtualization platform.
- virsh can be used as an unprivileged user for read-only operations (e.g. listing domains, listing domain statistics).
- virsh can manage domains running under Xen or KVM with no perceptible difference to the user.
To start a virtual machine:
su -c "virsh -c <URI> create <name of virtual machine>"
To list the virtual machines currently running:
su -c "virsh -c <URI> list"
To gracefully power off a guest:
su -c "virsh -c <URI> shutdown <virtual machine (name | id | uuid)>"
To save a snapshot of the machine to a file:
su -c "virsh -c <URI> save <virtual machine (name | id | uuid)> <filename>"
To restore a previously saved snapshot:
su -c "virsh -c <URI> restore <filename>"
To export the configuration file of a virtual machine:
su -c "virsh -c <URI> dumpxml <virtual machine (name | id | uuid)>"
For a complete list of commands available for use with virsh:
su -c "virsh help"
Or consult the manual page: man 1 virsh
Bugs in the virsh tool should be reported in Bugzilla against the 'libvirt' component.
Managing Virtual Machines from the command line with qemu-kvm
KVM virtual machines can also be managed on the command line using the qemu-kvm command. See man qemu-kvm for more details.
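As a concrete sketch, a minimal qemu-kvm invocation to boot an existing disk image might look like the following; the image path, memory size, and VNC display are assumptions, not defaults:

```shell
# Boot an existing image with 512 MB of RAM, exposing the display on VNC :1.
# Path and sizes are illustrative only.
qemu-kvm -m 512 -hda /var/lib/libvirt/images/demoguest.img -vnc :1
```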
Troubleshooting
SELinux
The SELinux policy in Fedora 8 has the necessary rules to allow use of Xen with SELinux enabled. The main caveat to be aware of is that any file-backed disk images need to be in a special directory: /var/lib/xen/images. This applies both to regular disk images and to ISO images. Block-device-backed disks are already labelled correctly to allow them to pass SELinux checks.
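For example, to move an image created elsewhere into the expected directory and re-apply the correct SELinux label (the image name and source path here are hypothetical):

```shell
# Move the image into the policy-approved directory, then relabel it.
su -c "mv /home/joe/guest1.img /var/lib/xen/images/"
su -c "restorecon -v /var/lib/xen/images/guest1.img"
```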
Log files
There are two log files stored on the host system to assist with debugging Xen-related problems. The file /var/log/xen/xend.log holds the same information reported with xm log. Unfortunately these log messages are often very short and contain little useful information. The following is the output from trying to create a domain running a NetBSD/xen kernel:
[2005-06-27 02:23:02 xend] ERROR (SrvBase:163) op=create: Error creating domain: (0, 'Error')
Traceback (most recent call last):
  File "/usr/lib/python2.4/site-packages/xen/xend/server/SrvBase.py", line 107, in _perform
    val = op_method(op, req)
  File "/usr/lib/python2.4/site-packages/xen/xend/server/SrvDomainDir.py", line 71, in op_create
    raise XendError("Error creating domain: " + str(ex))
XendError: Error creating domain: (0, 'Error')
The second file, /var/log/xen/xend-debug.log, usually contains much more detailed information. Trying to start the NetBSD/xen kernel will result in the following log output:
ERROR: Will only load images built for Xen v3.0
ERROR: Actually saw: 'GUEST_OS=netbsd,GUEST_VER=2.0,XEN_VER=2.0,LOADER=generic,BSD_SYMTAB'
ERROR: Error constructing guest OS
When reporting errors, always include the output from both /var/log/xen/xend.log and /var/log/xen/xend-debug.log.
If starting fully-virtualized domains (i.e. running an unmodified OS), there are also logs in /var/log/xen/qemu-dm*.log which can contain useful information.
Finally, hypervisor logs can be seen by running the command
xm dmesg
Serial Console
Host serial console access
For more difficult problems, a serial console can be very helpful. If the Xen kernel itself has died and the hypervisor has generated an error, there is no way to record the error persistently on the local host. A serial console lets you capture it on a remote host.
The Xen host must be set up for serial console output, and a remote host must exist to capture it. For the console output, set the appropriate options in /etc/grub.conf:
title Fedora Core (2.6.17-1.2600.fc6xen)
        root (hd0,2)
        kernel /xen.gz-2.6.17-1.2600.fc6 com1=38400,8n1 sync_console
        module /vmlinuz-2.6.17-1.2600.fc6xen ro root=LABEL=/ rhgb quiet console=ttyS0 console=tty pnpacpi=off
        module /initrd-2.6.17-1.2600.fc6xen.img
for a 38400-bps serial console on com1 (i.e. /dev/ttyS0 on Linux). The "sync_console" option works around a problem that can cause hangs with asynchronous hypervisor console output, and "pnpacpi=off" works around a problem that breaks input on the serial console. "console=ttyS0 console=tty" means that kernel errors get logged both on the normal VGA console and on the serial console. Once that is done, install and set up ttywatch to capture the information on a remote host connected by a standard null-modem cable. For example, on the remote host:
su -c "ttywatch --name myhost --port /dev/ttyS0"
This logs output from /dev/ttyS0 into the file /var/log/ttywatch/myhost.log.
Paravirt guest serial console access
Para-virtualized guest OSes automatically have a serial console configured and plumbed through to the Domain-0 OS. It can be accessed from the command line using:
su -c "virsh console <domain name>"
Alternatively, the graphical virt-manager program can display the serial console. Simply display the 'console' or 'details' window for the guest and select 'View -> Serial console' from the menu bar.
Full Virt guest serial console access
Fully-virtualized guest OSes will automatically have a serial console configured, but the guest kernel will not be configured to use it out of the box. To enable the guest console in a fully-virtualized Linux guest, edit /etc/grub.conf in the guest and add 'console=ttyS0 console=tty0' to the kernel line. This ensures that all kernel messages are sent to both the serial console and the regular graphical console. The serial console can then be accessed in the same way as for para-virtualized guests:
su -c "virsh console <domain name>"
Alternatively, the graphical virt-manager program can display the serial console. Simply display the 'console' or 'details' window for the guest and select 'View -> Serial console' from the menu bar.
Accessing data on a guest disk image
There are two tools which can help greatly in accessing data within a guest disk image: lomount and kpartx.
- lomount
su -c "lomount -t ext3 -diskimage /xen/images/fc5-file.img -partition 1 /mnt/boot"
lomount only works with small disk images and cannot deal with LVM volumes, so for more complex cases, kpartx (from the device-mapper-multipath RPM) is preferred:
- kpartx
su -c "yum install device-mapper-multipath"
su -c "kpartx -av /dev/xen/guest1"
add map guest1p1 : 0 208782 linear /dev/xen/guest1 63
add map guest1p2 : 0 16563015 linear /dev/xen/guest1 208845
Note that this only works for block devices, not for images installed on regular files. To use file images, set up a loopback device for the file first:
su -c "losetup -f"
/dev/loop0
su -c "losetup /dev/loop0 /xen/images/fc5-file.img"
su -c "kpartx -av /dev/loop0"
add map loop0p1 : 0 208782 linear /dev/loop0 63
add map loop0p2 : 0 12370050 linear /dev/loop0 208845
In this case we have added an image formatted as a default Fedora install, so it has two partitions: one /boot, and one LVM volume containing everything else. They are accessible under /dev/mapper:
su -c "ls -l /dev/mapper/ | grep guest1"
brw-rw---- 1 root disk 253,  6 Jun  6 10:32 xen-guest1
brw-rw---- 1 root disk 253, 14 Jun  6 11:13 guest1p1
brw-rw---- 1 root disk 253, 15 Jun  6 11:13 guest1p2
su -c "mount /dev/mapper/guest1p1 /mnt/boot/"
To access LVM volumes on the second partition, rescan LVM with vgscan and activate the volume group on that partition (named "VolGroup00" by default) with vgchange -ay:
su -c "kpartx -a /dev/xen/guest1"
su -c "vgscan"
Reading all physical volumes.  This may take a while...
Found volume group "VolGroup00" using metadata type lvm2
su -c "vgchange -ay VolGroup00"
2 logical volume(s) in volume group "VolGroup00" now active
su -c "lvs"
LV       VG         Attr   LSize   Origin Snap% Move Log Copy%
LogVol00 VolGroup00 -wi-a-   5.06G
LogVol01 VolGroup00 -wi-a- 800.00M
su -c "mount /dev/VolGroup00/LogVol00 /mnt/"
...
su -c "umount /mnt"
su -c "vgchange -an VolGroup00"
su -c "kpartx -d /dev/xen/guest1"
Frequently Asked Questions
- Q: I am trying to start the xend service and nothing happens; then when I run virsh list I get the following:
Error: Error connecting to xend: Connection refused. Is xend running?
Alternatively, I run xend start manually and get the following error:
ERROR: Could not obtain handle on privileged command interface (2 = No such file or directory)
Traceback (most recent call last):
  File "/usr/sbin/xend", line 33, in ?
    from xen.xend.server import SrvDaemon
  File "/usr/lib/python2.4/site-packages/xen/xend/server/SrvDaemon.py", line 21, in ?
    import relocate
  File "/usr/lib/python2.4/site-packages/xen/xend/server/relocate.py", line 26, in ?
    from xen.xend import XendDomain
  File "/usr/lib/python2.4/site-packages/xen/xend/XendDomain.py", line 33, in ?
    import XendDomainInfo
  File "/usr/lib/python2.4/site-packages/xen/xend/XendDomainInfo.py", line 37, in ?
    import image
  File "/usr/lib/python2.4/site-packages/xen/xend/image.py", line 30, in ?
    xc = xen.lowlevel.xc.xc()
RuntimeError: (2, 'No such file or directory')
A: You have rebooted your host into a kernel that is not a Xen hypervisor kernel. (Yes, I did this myself in testing.) You either need to select the Xen hypervisor kernel at boot time or set the Xen hypervisor kernel as the default in your grub.conf file.
- Q: When creating a guest, the message "Invalid argument" is displayed.
A: This usually indicates that the kernel image you are trying to boot is incompatible with the hypervisor. This will be seen if, for example, you try to run an FC5 (non-PAE) kernel on FC6 (which is PAE-only), or try to run a bare-metal kernel.
- Q: When I do a yum update and get a new kernel, the grub.conf default kernel switches back to the bare-metal kernel instead of the Xen kernel.
A: The default kernel RPM can be changed in /etc/sysconfig/kernel. If it is set to 'kernel-xen', then the Xen kernel will always be set as the default option in grub.conf.
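A sketch of the relevant /etc/sysconfig/kernel settings (variable names as used by the Fedora kernel update scripts):

```shell
# /etc/sysconfig/kernel
UPDATEDEFAULT=yes          # update the GRUB default when new kernels are installed
DEFAULTKERNEL=kernel-xen   # but only for kernels from this package
```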
Getting Help
If the Troubleshooting section above does not help you to solve your problem, check Red Hat Bugzilla for existing bug reports on Xen in Fedora 8. The product is "Fedora", and the component is "kernel" for bugs related to the Xen kernel and "xen" for bugs related to the tools. These reports contain useful advice from fellow Xen testers and often describe workarounds.
For general Xen issues and useful information, check the Xen project documentation and mailing list archives.
Finally, discussion of Fedora Xen support issues occurs on the Fedora Xen mailing list.
References
- http://www-128.ibm.com/developerworks/linux/library/l-linux-kvm/?ca=dgr-lnxw07LinuxKVM
- http://kerneltrap.org/node/8088