Multiqueue virtio-net
Summary
Multiqueue virtio-net provides an approach that scales network performance with the number of vcpus by allowing the guest to transfer packets through more than one virtqueue pair.
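A minimal sketch of how such a configuration is typically exposed on the host, assuming QEMU's tap backend and the virtio-net-pci device; the netdev ID and the queue count below are illustrative only:

    # Host side: create a tap backend with 4 queues and attach a virtio-net device
    # with multiqueue enabled.  'vectors' is usually 2*queues + 2 (one MSI-X vector
    # per TX and per RX queue, plus control and configuration).
    qemu-kvm <other guest options> \
        -netdev tap,id=hostnet0,vhost=on,queues=4 \
        -device virtio-net-pci,netdev=hostnet0,mq=on,vectors=10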
Owner
- Name: Jason Wang
- Email: <jasowang@redhat.com>
Current status
- Targeted release: Fedora 19
- Last updated: Jan 29th 2013
- Percentage of completion: 66%
Detailed Description
Today's high-end servers have more processors, and guests running on them tend to have an increasing number of vcpus. The scalability of the protocol stack in the guest is restricted by the single-queue virtio-net:
- The network performance does not scale as the number of vcpus increases: the guest cannot transmit or receive packets in parallel because virtio-net has only one TX and one RX queue, so the virtio-net driver must be synchronized before sending and receiving packets. Even though there are software techniques such as RFS to spread the load across different processors, they only address one direction of the traffic and are really expensive in the guest because they depend on IPIs, which bring extra overhead in a virtualized environment.
- Multiqueue NICs have become more common and are well supported by the Linux kernel, but the current virtual NIC cannot take advantage of multiqueue support: the tap and virtio-net back-ends must serialize the concurrent transmission/reception requests coming from different cpus.
In order to remove those bottlenecks, we must allow parallel packet processing by introducing multiqueue support in both the back-end and the guest driver. Ideally, packet handling can then be done by the processors in parallel without interleaving, and network performance will scale with the number of vcpus.
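As a rough illustration of the guest side, once the multiqueue driver is in place the number of active queue pairs can be selected through the standard ethtool channel interface; the interface name and queue count here are placeholders:

    # Guest side: enable 4 combined (TX/RX) queue pairs on the virtio-net interface.
    ethtool -L eth0 combined 4
    # Show the currently enabled and maximum supported channel counts.
    ethtool -l eth0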