Using virtualization on Fedora
Fedora provides virtualization with both the KVM and the Xen virtualization platforms. For information on other virtualization platforms, refer to http://virt.kernelnewbies.org/TechComparison.
Xen supports para-virtualized guests as well as fully virtualized guests with para-virtualized drivers. Para-virtualization is faster than full virtualization but does not work with non-Linux operating systems or Linux operating systems without the Xen kernel extensions. Xen fully virtualized guests are slower than KVM fully virtualized guests.
KVM offers fast full virtualization, which requires the virtualization instruction sets on your processor. KVM requires an x86 Intel or AMD processor with virtualization extensions enabled. Without these extensions KVM falls back to QEMU software virtualization.
Other virtualization products and packages are available but are not covered by this guide.
For information on Xen, refer to http://wiki.xensource.com/xenwiki/ and the Fedora Xen pages.
For information on KVM, refer to http://kvm.qumranet.com/kvmwiki.
Fedora uses Xen version 3.0.x. Xen 3.0.0 was released in December of 2005 and is incompatible with guests created using Xen 2.0.x versions.
Installing and configuring Fedora for virtualized guests
This section covers setting up Xen, KVM or both on your system. After the successful completion of this section you will be able to create virtualized guest operating systems.
System requirements
The common system requirements for virtualization on Fedora are:
- At least 600MB of hard disk storage per guest. A minimal command-line Fedora system requires 600MB of storage. Standard Fedora desktop guests require at least 3GB of space.
- At least 256MB of RAM per guest, plus 256MB for the base OS. At least 756MB is recommended for each guest running a modern operating system. A good rule of thumb is to consider how much memory the operating system normally requires and allocate that much to the virtualized guest.
- Xen host (Domain-0) support requires Fedora 8. Support will return once paravirt_ops features are implemented in the upstream kernel.
Additional requirements for para-virtualized guests
- Xen; KVM does not support para-virtualization at this time. The kernel-xen package is required for versions of Fedora older than 10.
- Any x86-64 or Intel Itanium CPU or any x86 CPU with the PAE extensions. Many older laptops (particularly those based on Pentium Mobile / Centrino) do not have PAE support. To determine if a CPU has PAE extensions, execute:
$ grep pae /proc/cpuinfo
flags : fpu tsc msr pae mce cx8 apic mtrr mca cmov pat pse36 mmx fxsr sse syscall mmxext 3dnowext 3dnow up ts
The above output shows a CPU with the PAE extensions. If the command returns nothing, then the CPU does not support para-virtualization.
Additional requirements for fully virtualized guests
Full virtualization with Xen or KVM requires a CPU with virtualization extensions, that is, the Intel VT or AMD-V extensions.
Verify whether your Intel CPU has Intel VT support (the 'vmx' flag):
$ grep vmx /proc/cpuinfo
flags : fpu tsc msr pae mce cx8 apic mtrr mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm syscall nx lm constant_tsc pni monitor ds_cpl vmx est tm2 cx16 xtpr lahf_lm
On some Intel-based systems (usually laptops) the Intel VT extensions are disabled in the BIOS. Enter the BIOS and enable Intel VT or Vanderpool Technology, which is usually located in the CPU options or Chipset menus.
Verify whether your AMD CPU has AMD-V support (the 'svm' flag):
$ grep svm /proc/cpuinfo
flags : fpu tsc msr pae mce cx8 apic mtrr mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt rdtscp lm 3dnowext 3dnow pni cx16 lahf_lm cmp_legacy svm cr8_legacy
VIA Nano processors use the 'vmx' instruction set.
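Both flags can be checked with a single command; this is a convenience sketch, and the exact flags line printed will vary by processor:
$ grep -E 'vmx|svm' /proc/cpuinfo
If the command prints a flags line, the CPU has hardware virtualization extensions; if it prints nothing, only QEMU software virtualization is available.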
You can use QEMU software emulation for full virtualization. Software virtualization is far slower than virtualization using the Intel VT or AMD-V extensions. QEMU can also emulate other processor architectures such as ARM or PowerPC.
Installing the virtualization packages
When installing Fedora, the virtualization packages can be installed by selecting Virtualization in the Base Group in the installer. | For existing Fedora installations, QEMU, KVM, and other virtualization tools can be installed by running the following command:
su -c "yum groupinstall 'Virtualization'"
This will install qemu-kvm, python-virtinst, qemu, virt-manager, virt-viewer, and all needed dependencies. Optional packages in this group are gnome-applet-vm and virt-top.
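After installation you can check that the KVM kernel modules are loaded; a minimal sketch, where the module listed alongside kvm will be kvm_intel or kvm_amd depending on your processor:
su -c "lsmod | grep kvm"
If no kvm module is listed, hardware virtualization may be disabled in the BIOS or unsupported by the CPU.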
Introduction to virtualization with Fedora
Fedora supports multiple virtualization platforms. Different platforms require slightly different methods.
When using KVM, the command to display all domains on the local system is virsh -c qemu:///system list. When using Xen, the same command is virsh -c xen:///system list. Be aware of this subtle variation.
To verify that virtualization is enabled on the system, run the following command, where <URI> is a valid URI that libvirt can recognize. For more details on URIs, see http://libvirt.org/uri.html.
$ su -c "virsh -c <URI> list"
 Name        ID   Mem(MiB)   VCPUs   State    Time(s)
 Domain-0    0    610        1       r-----   12492.1
The above output indicates that there is an active hypervisor. If virtualization is not enabled an error similar to the following appears:
$ su -c "virsh -c <URI> list"
libvir: error : operation failed: xenProxyOpen
error: failed to connect to the hypervisor
error: no valid connection
If the above error appears, make sure that:
- For Xen, ensure xend is running.
- For KVM, ensure libvirtd is running.
- For either, ensure the URI is properly specified (see http://libvirt.org/uri.html for details).
Creating a Fedora guest
The installation of Fedora guests using anaconda is supported. The installation can be started on the command line via the virt-install program or in the GUI program virt-manager. You will be prompted for the type of virtualization (that is, KVM or Xen and para-virtualization or full virtualization) to use during the guest creation process.
Creating a Fedora guest with virt-install
virt-install is a command-line tool for creating virtualized guests. To start the interactive install process, run the virt-install command:
su -c "/usr/sbin/virt-install"
The following questions about the new guest will be presented:
- What is the name of your virtual machine? This is the label that will identify the guest OS. This label is used with virsh commands and virt-manager (Virtual Machine Manager).
- How much RAM should be allocated (in megabytes)? This is the amount of RAM to allocate for the guest instance in megabytes (e.g. 256). Note that installation with less than 256 megabytes is not recommended.
- What would you like to use as the disk (path)? The local path and file name of the file to serve as the disk image for the guest (e.g. /home/joe/xenbox1). This will be exported as a full disk to your guest.
- How large would you like the disk to be (in gigabytes)? The size of the virtual disk for the guest (only appears if the file specified above does not already exist). 4.0 gigabytes is a reasonable size for a "default" install.
- Would you like to enable graphics support (yes or no): Should the graphical installer be used?
- What is the install location? This is the path to a Fedora installation tree in the format used by anaconda. NFS, FTP, and HTTP locations are all supported. Examples include:
nfs:my.nfs.server.com:/path/to/test2/tree/
http://my.http.server.com/path/to/tree/
ftp://my.ftp.server.com/path/to/tree
These options can also be passed as command-line options; execute virt-install --help for details. virt-install can also use kickstart files, for example virt-install -x ks=kickstart-file-name.ks.
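As an illustration, a fully non-interactive invocation might look like the following sketch. The guest name, disk path, and installation tree URL are hypothetical, and option names may differ slightly between virt-install versions, so check virt-install --help on your system:
su -c "/usr/sbin/virt-install --name fedora-guest --ram 512 --file /var/lib/libvirt/images/fedora-guest.img --file-size 4 --vnc --location http://my.http.server.com/path/to/tree/"
Placing the disk image under /var/lib/libvirt/images keeps it in the directory expected by the SELinux policy (see the SELinux section below).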
If graphics were enabled, a VNC window will open and present the graphical installer. If graphics were not enabled, a text installer will appear. Proceed with the Fedora installation.
Creating a Fedora guest with virt-manager
Start the GUI Virtual Machine Manager by selecting it from the "Applications-->System Tools" menu, or by running the following command:
su -c "virt-manager"
Enter the root password when prompted.
- Open a connection to a hypervisor by choosing File-->Open connection...
- Choose "qemu" for KVM, or "Xen" for Xen.
- Choose "local" or select a method to connect to a remote hypervisor
- After a connection is opened, click the new icon next to the hypervisor, or right click on the active hypervisor and select "New" (Note - the new icon is going to be improved to make it easier to see)
- A wizard will present the same questions as appear with the
virt-install
command-line utility (see descriptions above). The wizard assumes that a graphical installation is desired and does not prompt for this option. - On the last page of the wizard there is a "Finish" button. When this is clicked, the guest OS is provisioned. After a few moments a VNC window should appear. Proceed with the installation as normal.
Remote management
The following remote management options are available:
- Create SSH keys for root, and use ssh-agent and ssh-add before launching virt-manager.
- Set up a local certificate authority and issue x509 certs to all servers and clients. For information on configuring this option, refer to http://libvirt.org/remote.html.
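For the SSH option, a typical workflow looks like the following sketch; the hostname remotehost.example.com is hypothetical:
ssh-keygen -t rsa
ssh-copy-id root@remotehost.example.com
su -c "virsh -c qemu+ssh://root@remotehost.example.com/system list"
The same qemu+ssh:// URI can be used when opening a remote connection from virt-manager.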
Guest system administration
When the installation of the guest operating system is complete, it can be managed using the GUI virt-manager program or on the command line using virsh.
Managing guests with virt-manager
Start the Virtual Machine Manager. Virtual Machine Manager is in the "Applications-->System Tools" menu, or execute:
su -c "virt-manager"
If you are not root, you will be prompted to enter the root password. Choose Run unprivileged to operate in a read-only, non-root mode.
- Choose "Local Xen Host" and click "Connect" in the "Open Connection" dialog window.
- The list of virtual machines is displayed in the main window. The first machine is called "Domain 0"; this is the host computer.
- If a machine is not listed, it is probably not running. To start up a machine select "File-->Restore a saved machine..." and select the file that serves as the guest's disk.
- The display lists the status, CPU and memory usage for each machine. Additional statistics can be selected under the "View" menu.
- Double click the name of a machine to open the virtual console.
- From the virtual console, select "View-->Details" to access the machine's properties and change its hardware configuration.
- To access the serial console (if there is a problem with the graphical console) select "View-->Serial Console".
For further information about virt-manager, consult the project website. Bugs in the virt-manager tool should be reported in Bugzilla against the 'virt-manager' component.
Managing guests with virsh
The virsh command is a safe alternative to the xm command. virsh provides error checking and many other useful features over the xm command.
Guests can be managed on the command line with the virsh utility. The virsh utility is built around the libvirt management API and has a number of advantages over the traditional Xen xm tool:
- virsh has a stable set of commands whose syntax and semantics are preserved across updates to the underlying virtualization platform.
- virsh can be used as an unprivileged user for read-only operations (e.g. listing domains, listing domain statistics).
- virsh can manage domains running under Xen or KVM with no perceptible difference to the user.
To start a virtual machine:
su -c "virsh -c <URI> create <name of virtual machine>"
To list the virtual machines currently running:
su -c "virsh -c <URI> list"
To gracefully power off a guest:
su -c "virsh -c <URI> shutdown <virtual machine (name | id | uuid)>"
To save a snapshot of the machine to a file:
su -c "virsh -c <URI> save <virtual machine (name | id | uuid)> <filename>"
To restore a previously saved snapshot:
su -c "virsh -c <URI> restore <filename>"
To export the configuration file of a virtual machine:
su -c "virsh -c <URI> dumpxml <virtual machine (name | id | uuid)"
For a complete list of commands available for use with virsh:
su -c "virsh help"
Or consult the manual page: man 1 virsh
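As a concrete illustration, assuming a KVM host and a guest named fedora-guest (the guest name is hypothetical, and start assumes the guest has already been defined):
su -c "virsh -c qemu:///system list --all"
su -c "virsh -c qemu:///system start fedora-guest"
su -c "virsh -c qemu:///system dumpxml fedora-guest > /tmp/fedora-guest.xml"
su -c "virsh -c qemu:///system shutdown fedora-guest"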
Bugs in the virsh tool should be reported in Bugzilla against the 'libvirt' component.
Managing guests with qemu-kvm
KVM virtual machines can also be managed on the command line using the 'qemu-kvm' command. See man qemu-kvm for more details.
Troubleshooting virtualization
SELinux
The SELinux policy in Fedora has the necessary rules to allow the use of virtualization. The main caveat to be aware of is that any file-backed disk images need to be in the directory /var/lib/libvirt/images. This applies both to regular disk images and ISO images. Block device backed disks are already labelled correctly to allow them to pass SELinux checks.
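If an existing image is moved into /var/lib/libvirt/images, its SELinux context can be reset with restorecon; a sketch, with a hypothetical image name:
su -c "mv /home/joe/guest1.img /var/lib/libvirt/images/"
su -c "restorecon -v /var/lib/libvirt/images/guest1.img"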
Beginning with Fedora 11, virtual machines under SELinux are isolated from each other with sVirt.
Log files
The graphical interface, virt-manager, used to create and manage virtual machines, logs to $HOME/.virt-manager/virt-manager.log.
The virt-install tool, used to create virtual machines, logs to $HOME/.virtinst/virt-install.log.
Logging from virt-manager and virt-install may be increased by setting the environment variable LIBVIRT_DEBUG=1.
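For example, to launch the graphical tool with debug logging enabled (a minimal sketch):
su -c "LIBVIRT_DEBUG=1 virt-manager"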
See http://libvirt.org/logging.html for details.
All QEMU command lines executed by libvirt are logged to /var/log/libvirt/qemu/$DOMAIN.log, where $DOMAIN is the name of the guest.
The libvirtd daemon is responsible for handling connections from tools such as virsh and virt-manager. The level and type of logging produced by libvirtd may be modified in /etc/libvirt/libvirtd.conf.
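For example, the following settings in /etc/libvirt/libvirtd.conf increase verbosity and send output to a file; treat this as a sketch, since option names and defaults can vary between libvirt versions:
log_level = 1
log_outputs = "1:file:/var/log/libvirt/libvirtd.log"
Restart libvirtd after editing the file so the new settings take effect.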
There are two log files stored on the host system to assist with debugging Xen related problems. The file /var/log/xen/xend.log holds the same information reported with the 'xm log' command.
The second file, /var/log/xen/xend-debug.log, usually contains much more detailed information.
When reporting errors, always include the output from both /var/log/xen/xend.log and /var/log/xen/xend-debug.log.
If starting a fully-virtualized domain (i.e. an unmodified guest OS), there are also logs in /var/log/xen/qemu-dm*.log which can contain useful information.
Xen hypervisor logs can be seen by running the 'xm dmesg' command.
Serial console access for troubleshooting and management
Serial console access is useful for debugging kernel crashes and can be very helpful for remote management. Accessing the serial consoles of Xen kernels or virtualized guests is slightly different from the normal procedure.
Host serial console access
If the Xen kernel itself has died and the hypervisor has generated an error, there is no way to record the error persistently on the local host. Serial console lets you capture it on a remote host.
The Xen host must be set up for serial console output, and a remote host must exist to capture it. For the console output, set the appropriate options in /etc/grub.conf:
title Fedora
        root (hd0,1)
        kernel /vmlinuz-current.running.version com1=38400,8n1 sync_console
        module /vmlinuz-current.running.version ro root=LABEL=/ rhgb quiet console=ttyS0 console=tty pnpacpi=off
        module /initrd-current.running.version
for a 38400-bps serial console on com1 (i.e. /dev/ttyS0 on Linux). The "sync_console" option works around a problem that can cause hangs with asynchronous hypervisor console output, and "pnpacpi=off" works around a problem that breaks input on the serial console. "console=ttyS0 console=tty" means that kernel errors get logged both on the normal VGA console and on the serial console. Once that is done, install and set up ttywatch to capture the information on a remote host connected by a standard null-modem cable. For example, on the remote host:
su -c "ttywatch --name myhost --port /dev/ttyS0"
This will log output from /dev/ttyS0 into the file /var/log/ttywatch/myhost.log.
Para-virtualized guest serial console access
A para-virtualized guest OS will automatically have a serial console configured and plumbed through to the Domain-0 OS. It can be accessed from the command line using:
su -c "virsh console <domain name>"
Alternatively, the graphical virt-manager program can display the serial console. Simply display the 'console' or 'details' window for the guest and select 'View -> Serial console' from the menu bar.
Fully virtualized guest serial console access
A fully-virtualized guest OS will automatically have a serial console configured, but the guest kernel will not be configured to use it out of the box. To enable the guest console in a fully-virtualized Linux guest, edit /etc/grub.conf in the guest and add 'console=ttyS0 console=tty0' to the kernel line (see the example line at the end of this section). This ensures that all kernel messages are sent to the serial console as well as the regular graphical console. The serial console can then be accessed in the same way as for para-virtualized guests:
su -c "virsh console <domain name>"
Alternatively, the graphical virt-manager program can display the serial console. Simply display the 'console' or 'details' window for the guest and select 'View -> Serial console' from the menu bar.
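An edited kernel line in the guest's /etc/grub.conf might look like the following sketch, reusing the placeholder kernel name from the host example above:
kernel /vmlinuz-current.running.version ro root=LABEL=/ rhgb quiet console=ttyS0 console=tty0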
Accessing data on guest disk images
There are two tools which can help greatly in accessing data within a guest disk image: lomount and kpartx.
- lomount
su -c "lomount -t ext3 -diskimage /xen/images/fc5-file.img -partition 1 /mnt/boot"
lomount only works with small disk images and cannot deal with LVM volumes, so for more complex cases, kpartx (from the device-mapper-multipath RPM) is preferred:
- kpartx
su -c "yum install device-mapper-multipath" su -c "kpartx -av /dev/xen/guest1" add map guest1p1 : 0 208782 linear /dev/xen/guest1 63 add map guest1p2 : 0 16563015 linear /dev/xen/guest1 208845
Note that this only works for block devices, not for images installed on regular files. To use file images, set up a loopback device for the file first:
su -c "losetup -f" /dev/loop0 su -c "losetup /dev/loop0 /xen/images/fc5-file.img" su -c "kpartx -av /dev/loop0" add map loop0p1 : 0 208782 linear /dev/loop0 63 add map loop0p2 : 0 12370050 linear /dev/loop0 208845
In this case we have added an image formatted as a default Fedora install, so it has two partitions: one /boot, and one LVM volume containing everything else. They are accessible under /dev/mapper:
su -c "ls -l /dev/mapper/ | grep guest1" brw-rw---- 1 root disk 253, 6 Jun 6 10:32 xen-guest1 brw-rw---- 1 root disk 253, 14 Jun 6 11:13 guest1p1 brw-rw---- 1 root disk 253, 15 Jun 6 11:13 guest1p2 su -c "mount /dev/mapper/guest1p1 /mnt/boot/"
To access LVM volumes on the second partition, rescan LVM with vgscan and activate the volume group on that partition (named "VolGroup00" by default) with vgchange -ay:
su -c "kpartx -a /dev/xen/guest1" su -c "vgscan" Reading all physical volumes. This may take a while... Found volume group "VolGroup00" using metadata type lvm2 su -c "vgchange -ay VolGroup00" 2 logical volume(s) in volume group "VolGroup00" now active su -c "lvs" LV VG Attr LSize Origin Snap% Move Log Copy% LogVol00 VolGroup00 -wi-a- 5.06G LogVol01 VolGroup00 -wi-a- 800.00M su -c "mount /dev/VolGroup00/LogVol00 /mnt/" ... su -c "umount /mnt" su -c "vgchange -an VolGroup00" su -c "kpartx -d /dev/xen/guest1"
Getting help
If the Troubleshooting section above does not help you to solve your problem, check the list of existing virtualization bugs, and search the archives of the mailing lists in the resources section. If you believe your problem is a previously undiscovered bug, please report it to Bugzilla.
Resources
- Fedora fedora-virt mailing list
- Xen discussion
  - Fedora fedora-xen mailing list
  - Xensource xen-users mailing list
- Virtual Machine Manager, virt-inst and related tools
  - Red Hat et-mgmt-tools mailing list
- Libvirt discussion
  - Red Hat libvir-list mailing list
References
- http://www-128.ibm.com/developerworks/linux/library/l-linux-kvm/?ca=dgr-lnxw07LinuxKVM
- http://kerneltrap.org/node/8088
Previous Fedora Virtualization Guides: