Enable kernel acceleration for kvm networking
Summary
Enable kernel acceleration for kvm networking
Owner
- Name: Michael S. Tsirkin
Current status
- Targeted release: Fedora 13
- Last updated: 2009-07-22
- Percentage of completion: 20%
Detailed Description
vhost net moves the task of converting virtio descriptors to skbs and back from qemu userspace to the kernel driver.
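As a rough illustration of where that boundary moves, the sketch below shows how a userspace VMM could hand a tap device over to the in-kernel driver. This is a minimal sketch based on the interface being proposed upstream, not the finished implementation: the /dev/vhost-net node, the VHOST_SET_OWNER and VHOST_NET_SET_BACKEND ioctls, and queue index 0 are assumptions, and real qemu code must also program the memory table, vring layout, and eventfds.

 /* Illustrative sketch only: names below follow the vhost interface as
  * proposed upstream and may differ from what finally ships. */
 #include <fcntl.h>
 #include <stdio.h>
 #include <sys/ioctl.h>
 #include <linux/vhost.h>          /* VHOST_SET_OWNER, VHOST_NET_SET_BACKEND */

 int attach_vhost_backend(int tap_fd)
 {
     int vhost_fd = open("/dev/vhost-net", O_RDWR);
     if (vhost_fd < 0) {
         perror("open /dev/vhost-net");
         return -1;
     }

     /* Claim the vhost device for the calling process. */
     if (ioctl(vhost_fd, VHOST_SET_OWNER, NULL) < 0) {
         perror("VHOST_SET_OWNER");
         return -1;
     }

     /* Attach the tap fd as the backend for virtqueue 0 (assumed here).
      * From this point the kernel moves packets between the vring and the
      * tap device without a round trip through qemu userspace. */
     struct vhost_vring_file backend = { .index = 0, .fd = tap_fd };
     if (ioctl(vhost_fd, VHOST_NET_SET_BACKEND, &backend) < 0) {
         perror("VHOST_NET_SET_BACKEND");
         return -1;
     }

     return vhost_fd;
 }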
Benefit to Fedora
Using a kernel module reduces latency and improves packets per second for small packets.
Scope
The work is all upstream in the kernel and qemu. Guest code is already upstream; host/qemu work is in progress. For Fedora 13 we will likely have to backport some of it.
Milestones:
- Guest Kernel:
  - MSI-X support in virtio net
- Host Kernel:
  - iosignalfd, irqfd, eventfd polling (see the eventfd sketch after this list)
  - finalize kernel/user interface
  - socket polling
  - virtio transport with copy from/to user
  <- at this point it can be used in production; the rest are optimizations we will most likely need
  - mergeable buffers
  - TX credits using destructor (or: poll device status)
  - TSO/GSO
  - pin memory with get_user_pages
  - profile and tune
- qemu:
  - MSI-X support in virtio net
  - raw sockets support in qemu, promisc mode
  - connect to kernel backend with MSI-X
  - migration
  - PCI interrupts emulation
  <- at this point it can be used in production; the rest are optimizations we will most likely need
  - programming MAC
  - TSO/GSO
  - profile and tune
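Several of the milestones above (iosignalfd, irqfd, eventfd polling) rely on the same kernel primitive: an eventfd that turns a guest virtqueue kick into a counter the vhost worker thread can poll, with completions signalled back to the guest as interrupts via irqfd. The sketch below demonstrates only that primitive; the KVM/vhost wiring around it was still being finalized and is not shown, so the "guest" and "host" roles are simulated within one process.

 /* Self-contained demonstration of the eventfd counter that ioeventfd/irqfd
  * build on; both the signalling and the polling side run in this process. */
 #include <stdint.h>
 #include <stdio.h>
 #include <unistd.h>
 #include <sys/eventfd.h>

 int main(void)
 {
     int efd = eventfd(0, 0);              /* counter starts at zero */
     if (efd < 0) {
         perror("eventfd");
         return 1;
     }

     uint64_t kick = 1;
     write(efd, &kick, sizeof(kick));      /* "guest kick": bump the counter */
     write(efd, &kick, sizeof(kick));      /* a second kick coalesces with it */

     uint64_t pending;
     read(efd, &pending, sizeof(pending)); /* consumer sees both kicks at once */
     printf("pending kicks: %llu\n", (unsigned long long)pending);

     close(efd);
     return 0;
 }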
Test Plan
Guest:
- WHQL networking tests
Networking:
- Various MTU sizes
- Broadcasts, multicasts
- Ethtool
- Latency tests
- Bandwidth tests
- UDP testing
- Guest to guest communication
- More types of protocol testing
- Guest vlans
- Test combinations of multiple vNICs on the guests
- With/without {IP|TCP|UDP} offload
Virtualization:
- Live migration
Kernel side:
- Load/unload driver
User Experience
Users should see faster networking, at least in cases of SR-IOV or a dedicated per-guest network device.
Dependencies
- Kernel acceleration is implemented in the kernel RPM and depends on changes in qemu-kvm to work correctly.
Contingency Plan
- We will not turn it on by default if it turns out to be unstable.