Re: AF_VSOCK status

On Tue, Apr 05, 2016 at 07:34:59PM +0700, Antoine Martin wrote:
> Forgive me if these questions are obvious; I am not a kernel developer.
> From what I am reading here:
> http://lists.linuxfoundation.org/pipermail/virtualization/2015-December/030935.html
> the code has been removed from mainline. Is it queued for 4.6? If not,
> when are you planning on resubmitting it?

The patches are on the list (latest version sent last week):
http://comments.gmane.org/gmane.linux.kernel.virtualization/27455

They are marked "Request For Comments" only because the VIRTIO
specification changes have not been approved yet.  Once the spec is
approved, the patches can be seriously considered for merging.

There will definitely be a v6 with Claudio Imbrenda's locking fixes.

> We now have a vsock transport merged into xpra, which works very well
> with the kernel and qemu versions found here:
> http://qemu-project.org/Features/VirtioVsock
> Congratulations on making this easy to use!
> Is the upcoming revised interface likely to cause incompatibilities with
> existing binaries?

Userspace applications should not notice a difference.

> It seems impossible for the host to connect to a guest: the guest has to
> initiate the connection. Is this a feature / known limitation or am I
> missing something? For some of our use cases, it would be more practical
> to connect in the other direction.

Host->guest connections have always been allowed.  I just checked that
it works with the latest code in my repo:

  guest# nc-vsock -l 1234
  host# nc-vsock 3 1234
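
For reference, here is a minimal C sketch of the host side of that
connection using the AF_VSOCK socket API.  This is only an
illustration; the guest CID (3) and port (1234) are simply the values
from the nc-vsock example above:

  /* Sketch only: host-side client connecting to a guest vsock listener.
   * Assumes the guest CID is 3 and the guest is listening on port 1234,
   * as in the nc-vsock example above. */
  #include <stdio.h>
  #include <string.h>
  #include <unistd.h>
  #include <sys/socket.h>
  #include <linux/vm_sockets.h>

  int main(void)
  {
      struct sockaddr_vm addr;
      int fd = socket(AF_VSOCK, SOCK_STREAM, 0);
      if (fd < 0) {
          perror("socket");
          return 1;
      }

      memset(&addr, 0, sizeof(addr));
      addr.svm_family = AF_VSOCK;
      addr.svm_cid = 3;      /* guest CID from the example above */
      addr.svm_port = 1234;  /* port the guest is listening on */

      if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
          perror("connect");
          close(fd);
          return 1;
      }

      /* ... read()/write() on fd like any other stream socket ... */
      close(fd);
      return 0;
  }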

> In terms of raw performance, I am getting about 10Gbps on an Intel
> Skylake i7 (the data stream arrives from the OS socket recv syscall
> split into 256KB chunks). That's good, but not much faster than
> virtio-net, and since the packets avoid all sorts of OS-layer
> overheads I was hoping to get a little closer to the ~200Gbps memory
> bandwidth that this CPU+RAM combination is capable of. Am I dreaming
> or just doing it wrong?

virtio-vsock has not been optimized yet, but the priority is not to
make something faster than virtio-net.  virtio-vsock is not for
applications that are trying to squeeze out every last drop of
performance.  Instead, the goal is to have a transport for
guest<->hypervisor services that need to be zero-configuration.
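
To make "zero-configuration" concrete, here is a sketch of a guest-side
listener: it binds to VMADDR_CID_ANY on a fixed port and needs no IP
address, DHCP, or firewall setup.  The port number (1234) is just an
example:

  /* Sketch only: guest-side vsock listener on an example port.
   * The host reaches it purely by CID + port; no IP configuration. */
  #include <stdio.h>
  #include <string.h>
  #include <unistd.h>
  #include <sys/socket.h>
  #include <linux/vm_sockets.h>

  int main(void)
  {
      struct sockaddr_vm addr;
      int fd = socket(AF_VSOCK, SOCK_STREAM, 0);
      if (fd < 0) {
          perror("socket");
          return 1;
      }

      memset(&addr, 0, sizeof(addr));
      addr.svm_family = AF_VSOCK;
      addr.svm_cid = VMADDR_CID_ANY;  /* accept connections from any CID */
      addr.svm_port = 1234;           /* example port */

      if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
          listen(fd, 1) < 0) {
          perror("bind/listen");
          close(fd);
          return 1;
      }

      int conn = accept(fd, NULL, NULL);
      if (conn < 0) {
          perror("accept");
          close(fd);
          return 1;
      }

      /* ... handle the connection, then clean up ... */
      close(conn);
      close(fd);
      return 0;
  }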

> How hard would it be to introduce a virtio mmap-like transport of some
> sort so that the guest and host could share some memory region?
> I assume this would give us the best possible performance when
> transferring large amounts of data? (we already have a local mmap
> transport we could adapt)

Shared memory is beyond the scope of virtio-vsock and is unlikely to
be added.  There are existing ways to achieve that without involving
virtio-vsock, such as vhost-user or ivshmem.
