Revision as of 18:21, 10 July 2009
Virtualization
In this section, we cover discussion of Fedora virtualization technologies on the @et-mgmt-tools-list, @fedora-xen-list, @libvirt-list and @ovirt-devel-list lists.
Contributing Writer: Dale Bewley
Enterprise Management Tools List
This section contains the discussion happening on the et-mgmt-tools list.
More Device Support in virt-manager 'Add Hardware' Wizard
Cole Robinson patched[1] virt-manager to implement adding of virtual video devices in the 'Add Hardware' wizard. Cole also implemented[2] attaching serial and parallel devices. Both these features were added to virt-install[3]. Serial ports can be directed to sockets listening on remote hosts, for example: --serial udp,host=192.168.10.20:4444. That may come in handy for the F12 Hostinfo feature[4].
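As a sketch of how this might look in a full invocation (the guest name, disk path, and install tree below are hypothetical; only the --serial syntax comes from the announcement):

```shell
# Hypothetical guest name, disk path, and install tree; the --serial
# syntax is the one shown in the post.
virt-install \
    --name f11-test \
    --ram 1024 \
    --disk path=/var/lib/libvirt/images/f11-test.img,size=8 \
    --location http://download.fedoraproject.org/pub/fedora/linux/releases/11/Fedora/x86_64/os/ \
    --serial udp,host=192.168.10.20:4444

# On 192.168.10.20, something like netcat can then capture the console:
nc -u -l 4444
```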
Xen, Windows, and ACPI
Guido Günther noted[1] that virt-install disables ACPI and APIC for Windows XP guests. He added that it seems "that Windows XP is working fine with acpi/apic enabled which has the immediate advantage that poweroff via ACPI works as expected. So does it make sense to handle winxp the same win2k3?". Windows 2003 guests have ACPI enabled.
Pasi Kärkkäinen confirmed[2] this on the xen-devel list, relaying that "Keir Fraser replied that ACPI with Windows has been working properly at least since Xen 3.1.0 days". Pasi then updated the Xen wiki page[3].
Fedora Virtualization List
This section contains the discussion happening on the fedora-virt list.
New Mailing List and New Releases of libguestfs
Richard Jones announced[1] the creation of a new list[2] dedicated to "libguestfs/guestfish/virt-inspector discussion/development". The current release is now 1.0.57[3], but Richard is so fast that may change by the time you read this.
Recent new features:
- virt-df - like 'df' for virtual machines
- New Perl library called Sys::Guestfs::Lib
- Now available for EPEL
- Tab completion in guestfish now completes files and devices
- Big change to the code generator
- Lots more regression tests
- guestfish commands: time, glob, more, less
- new commands: readdir, mknod*, umask, du, df*, head*, tail*, wc*, mkdtemp, scrub, sh, sh-lines.
- Debian native[4] (debootstrap, debirf) support
See the previous release announcement for 1.0.41 in FWN#179[5] and be sure to see the project homepage[6] for extensive usage examples.
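For a quick taste of the tools mentioned above, a sketch (the image path is hypothetical, and this assumes libguestfs 1.0.57 or later is installed):

```shell
# 'df' for a guest disk image, without booting the guest:
virt-df /var/lib/libvirt/images/f11-test.img

# Browse the same image read-only with the interactive shell:
guestfish --ro -a /var/lib/libvirt/images/f11-test.img
```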
- ↑ http://www.redhat.com/archives/fedora-virt/2009-July/msg00107.html
- ↑ http://www.redhat.com/mailman/listinfo/libguestfs
- ↑ http://www.redhat.com/archives/libguestfs/2009-July/msg00011.html
- ↑ http://www.redhat.com/archives/fedora-virt/2009-July/msg00088.html
- ↑ http://fedoraproject.org/wiki/FWN/Issue179#New_Release_libguestfs_1.0.41
- ↑ http://libguestfs.org/
Fedora Virt Status Update
Mark McLoughlin posted[1] another Fedora Virt Status Update reminding that Fedora 12 is quickly approaching with the Feature Freeze on 2009-07-28.
Also mentioned were:
- Details of a fix for "a dramatic slowdown in virtio-blk performance in F-11 guests"[2]
- Note on Xen Dom0 support.
- New wiki pages created.
- Detailed run-down of current virt bugs.
USB Passthrough to Virtual Machines
Mark McLoughlin posted instructions[1] for attaching a USB device to a guest using virt-manager in Fedora 11. This could previously (FWN#165[2]) be accomplished only on the command line.
Unfortunately for those wishing to manage their iPhone or newer iPods in a guest (yours truly included), KVM does not yet support the required USB 2.0.
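For the record, the command-line route looks roughly like this (the vendor/product IDs and guest name below are placeholders, not real values; look up the real IDs with lsusb):

```shell
# Describe the USB device to pass through in a small XML fragment.
# The 0x1234/0x5678 IDs are placeholders -- find yours with 'lsusb'.
cat > usb-device.xml <<'EOF'
<hostdev mode='subsystem' type='usb'>
  <source>
    <vendor id='0x1234'/>
    <product id='0x5678'/>
  </source>
</hostdev>
EOF

# Attach it to a running guest (guest name is hypothetical):
# virsh attach-device Narwhal usb-device.xml
```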
Libvirt List
This section contains the discussion happening on the libvir-list.
New Release libvirt 0.6.5
Daniel Veillard announced[1] a new libvirt release, version 0.6.5. libvirt 0.6.4 was released[2] on May 29.
- ↑ http://www.redhat.com/archives/libvir-list/2009-July/msg00060.html
- ↑ http://fedoraproject.org/wiki/FWN/Issue179#New_Release_libvirt_0.6.4
F11 and KVM Migrations
Scott Baker tried[1] "to do a 'migration' from one host to another and I'm getting an error." "Where can I look next to figure out why it didn't work?"
virsh # migrate --live Narwhal qemu+ssh://10.1.1.1/system
error: operation failed: failed to start listening VM
Daniel Veillard suggested checking /var/log/libvirt/qemu/Narwhal.log on the target server. It turned out that one server was running x86_64 while the other was i586.
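Since a mismatch like this is easy to rule out up front, something along these lines could be run before attempting a migration (the target hostname is hypothetical, and the helper function is ours, not part of libvirt):

```shell
# Compare machine architectures of the migration source and destination.
# 'uname -m' prints e.g. x86_64 or i586; live migration between
# different architectures is not expected to work.
same_arch() {
    [ "$1" = "$2" ]   # success (0) only when both architectures match
}

src_arch=$(uname -m)
# On a real deployment, query the target over ssh (hypothetical host):
#   dst_arch=$(ssh root@10.1.1.1 uname -m)
dst_arch=$src_arch   # placeholder so the sketch runs standalone

if same_arch "$src_arch" "$dst_arch"; then
    echo "architectures match: migration worth attempting"
else
    echo "mismatch ($src_arch vs $dst_arch): live migration will fail"
fi
```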
Chris Lalancette said[2] "that's just not going to work. In theory it might work, but it's never been tested, so I'm not surprised it doesn't. In general migration is extremely finicky when it comes to CPU versions, and versions of the software." He suggested trying again after starting libvirtd by hand with debugging turned up.
LIBVIRT_DEBUG=1 /usr/sbin/libvirtd --verbose --listen
- ↑ http://www.redhat.com/archives/libvir-list/2009-July/msg00187.html
- ↑ http://www.redhat.com/archives/libvir-list/2009-July/msg00202.html
The Role of libvirtd
Hugh Brock described[1] client desires to make "libvirtd be a one-stop shop for everything they need to do on a virtualization host, including things we have traditionally held out-of-scope for libvirt. A partial list of those things would include:"
- In-depth multipath config management
- Hardware lifecycle management (power-off, reboot, etc.)
- HA configuration
Hugh then asked "why *not* expand the scope of libvirtd to be a one-stop shop for managing a node? Is there a really good reason it shouldn't have the remaining capabilities libvirt users want?"
Daniel Berrange replied[2] "This is essentially suggesting that libvirtd become a general purpose RPC layer for all remote management tasks. At which point you have just re-invented QPid/AMQP or CIM or any number of other general purpose message buses."
libvirtd has a core well defined goal:

 - Provide a remote proxy for libvirt API calls

if you want todo anything more than that you should be considering an
alternative remote management system. We already have 2 good ones to
choose from supported with libvirt

 - QPid/AMQP, with libvirt-qpid agent + your own custom agents
 - CIM, with libvirt-CIM + your own custom CIM providers

Both of these offer other benefits besides just pluggable support
for other functionality. In particular

 - Non-blocking asynchronous RPC calls
 - Assured delivery for RPC calls
 - Scalable network architecture / topology
 - Inter-operability with plugins written by other projects/vendors

Furthermore, adding more plugins to libvirtd means we will never
be able to reduce its privileges to an acceptable level, because we'll
never know what capabilities the plugins may want.
Hugh countered[3]
I understand your point -- certainly we want to use existing RPC
mechanisms for libvirt and node management, not maintain our own.
However, given a libvirt-qpid daemon on the node that handles RPC over
QMF (for example), is there not some value in having libvirt expose a
consistent API for the operations people want to do on a host regardless
of whether they have directly to do with managing a virtual machine or
not?

I will note that when I presented the large client with the option of
QMF talking to multiple agents on the node but exposing (effectively) a
single API and a single connection, they seemed much happier. So perhaps
the right way to attack this is with the ovirt-qpid daemon we are
currently working on.

Daniel V., any further thoughts on this?
Daniel Berrange replied[4]

> consistent API for the operations people want to do on a host regardless
> of whether they have directly to do with managing a virtual machine or
> not?

I don't really see any value in that - you're just putting in another
abstraction layer where none need exist. Just have whatever QMF agent
you write talk directly to the thing you need to manage. If someone
wants to write a QMF agent to managing cluster software, they don't
need to introduce an artificial dependancy on libvirtd, when their
agent could talk directly to the cluster software being managed, and
thus be useful without libvirt deployed.

> I will note that when I presented the large client with the option of
> QMF talking to multiple agents on the node but exposing (effectively) a
> single API and a single connection, they seemed much happier. So perhaps
> the right way to attack this is with the ovirt-qpid daemon we are
> currently working on.

A client application cannot tell whether a remote service is implemented
by a single agent, or multiple agents, nor do they see the concept of
a connection. All they see is a set of objects, representing everything
on the message bus. So again for clients, there is no need for everything
to be in one agent.
Daniel Veillard was[5] "a bit sympathetic to the suggestion though."
I think libvirt API
should help run those virtualization nodes, I would not open the gate
like completely, but if we could provide all APIs needed to manage the
node on a day by day basis then I think this is not really beyond our
scope. I think that netcf is an example of such API where we start to
add admin services for the purpose of running virtualization. Things
like rebooting or shutting down the node would fit in this, maybe
editing a drive partition too.

HA configuration starts to be a bit stretched, I would expect this to
be set once at creation and not part of the routine maintainance, so
probably out of scope, multipath is a bit more in scope we discussed
this already.

Basically if we take the idea of a stripped down Node used only for
virtualization, then except for operations which are first time setup
options or maintainance, I think we should try to cover the requirements
of normal operations of that node. To some extend that means we would
step on the toes of CIM, but we would stick to a subset that's sure.
- ↑ http://www.redhat.com/archives/libvir-list/2009-July/msg00179.html
- ↑ http://www.redhat.com/archives/libvir-list/2009-July/msg00182.html
- ↑ http://www.redhat.com/archives/libvir-list/2009-July/msg00183.html
- ↑ http://www.redhat.com/archives/libvir-list/2009-July/msg00184.html
- ↑ http://www.redhat.com/archives/libvir-list/2009-July/msg00186.html
Upcoming 0.6.5 release and switching to git
Daniel Veillard posted[1] about the upcoming 0.6.5 release and the switch of the project repository over to git.
- ↑ http://www.redhat.com/archives/libvir-list/2009-July/msg00007.html
libvirt Repositories Mirrored on Gitorious
Daniel Berrange announced[1] "I have created a libvirt project[2] on gitorious which has a mirror of the master branch of the libvirt.git repository. This mirror is *readonly* and updated automatically every 15 minutes. The purpose of this mirror is to allow people to easily publish their personal libvirt working repos to the world. The master upstream repository for libvirt does not change[3]".
- ↑ http://www.redhat.com/archives/libvir-list/2009-July/msg00252.html
- ↑ http://gitorious.org/libvirt
- ↑ http://libvirt.org/git
virsh Dump for QEMU Guests
Paolo Bonzini submitted[1] a patch (RHBZ #507551) that "uses a stop/migrate/cont combination to implement "virsh dump" for QEMU guests. The code is mostly based on qemudDomainSave, except that the XML prolog is not included as it is not needed to examine the dump with e.g. crash."
- ↑ http://www.redhat.com/archives/libvir-list/2009-July/msg00255.html
Fedora-Xen List
This section contains the discussion happening on the fedora-xen list.
Xen dom0 Forward Ported to Latest Kernel
Previously, Xen dom0 support in Fedora was provided by forward-porting the XenSource patches from kernel 2.6.18 to the version found in the Fedora release at the time. This consumed developer resources and led to separate kernel and kernel-xen packages for a time. As of Fedora 9[1] this practice was deemed[2] untenable, and support for hosting Xen guests was dropped from Fedora.
Work has since focused on creating a paravirt operations dom0[3] kernel based on the most recent upstream vanilla kernel. This work is incomplete and not expected to be done before F12 or even F13. However, experimental dom0 kernels[4] have been created for the adventurous.
Pasi Kärkkäinen tells[5] us the Xen 2.6.18 patches have now been forward-ported to the current 2.6.29 and 2.6.30 kernels. "Forward-porting has been done by Novell for OpenSUSE. Novell also has a forward-port to 2.6.27 for SLES11."
The patches can be found here[6], here[7], and here[8].
Pasi added "These patches are still more stable and mature than the pv_ops dom0 code.. Also, these patches have the full Xen feature set (pv_ops still lacks some features)."
More history is available[9].
- ↑ http://docs.fedoraproject.org/release-notes/f9/en_US/sn-Virtualization.html
- ↑ https://www.redhat.com/archives/fedora-xen/2007-November/msg00106.html
- ↑ http://fedoraproject.org/wiki/Features/XenPvopsDom0
- ↑ http://fedoraproject.org/wiki/FWN/Issue170#Experimental_Dom0_Kernel_Update
- ↑ http://www.redhat.com/archives/fedora-xen/2009-July/msg00000.html
- ↑ http://www.nabble.com/2.6.30-dom0-Xen-patches-td24293721.html
- ↑ http://code.google.com/p/gentoo-xen-kernel/downloads/list
- ↑ http://x17.eu/xen/
- ↑ http://fedoraproject.org/wiki/Virtualization/History