Re: Is anyone working on VirtFS (FUSE over VirtIO)

From: Emil Tsalapatis <freebsd-lists_at_etsalapatis.com>
Date: Sun, 14 Jul 2024 14:02:48 UTC
Hi David, Warner,

    I'm glad you find this approach interesting! I've been meaning to
update the virtio-dbg patch for a while but unfortunately haven't found the
time in the month since I uploaded it... I'll update it soon to address the
reviews and split the userspace device emulation code out of the patch to
make reviewing easier (thanks Alan for the suggestion). If you have any
questions or feedback, please let me know.

WRT virtiofs itself, I've been working on it too but haven't found the
time to clean it up and upload it. I have a messy but working
implementation here
<https://github.com/etsal/freebsd-src/tree/virtiofs-head>. The changes to
FUSE itself are indeed minimal, because it is enough to redirect the
messages into a virtiofs device instead of sending them to a local FUSE
device; the virtiofs device and the FUSE device are both simple
bidirectional queues. I'm not sure how to deal with directly mapping files
between host and guest just yet, because the Linux driver uses its DAX
interface for that, but it should be possible.
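
For anyone curious what that redirection looks like in practice, a rough
sketch of the send path is below. The vtfs_* names and the request layout
are made up for illustration rather than taken from the branch; only the
sglist_* and virtqueue_* calls are the stock FreeBSD virtio KPI.

/*
 * Rough sketch only: push one serialized FUSE request onto a virtiofs
 * virtqueue instead of handing it to /dev/fuse.  The vtfs_* names and
 * the request layout are illustrative, not the actual patch; the
 * sglist_* and virtqueue_* calls are FreeBSD's standard virtio KPI.
 */
#include <sys/param.h>
#include <sys/errno.h>
#include <sys/malloc.h>
#include <sys/sglist.h>

#include <dev/virtio/virtio.h>
#include <dev/virtio/virtqueue.h>

struct vtfs_request {
	void	*vr_in;		/* serialized FUSE request (header + body) */
	size_t	 vr_inlen;
	void	*vr_out;	/* buffer the host fills with the FUSE reply */
	size_t	 vr_outlen;
};

static int
vtfs_enqueue(struct virtqueue *vq, struct vtfs_request *req)
{
	struct sglist *sg;
	int error;

	/* One device-readable segment (request), one writable (reply). */
	sg = sglist_alloc(2, M_NOWAIT);
	if (sg == NULL)
		return (ENOMEM);
	error = sglist_append(sg, req->vr_in, req->vr_inlen);
	if (error == 0)
		error = sglist_append(sg, req->vr_out, req->vr_outlen);
	if (error == 0)
		error = virtqueue_enqueue(vq, req, sg, 1, 1);
	if (error == 0)
		virtqueue_notify(vq);	/* kick the host-side FUSE server */
	sglist_free(sg);
	return (error);
}

On completion the driver just dequeues the cookie and hands the reply back
through the same path that answers from /dev/fuse take today, which is
essentially the fuse_ipc.c factoring David describes below.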

Emil

On Sun, Jul 14, 2024 at 3:11 AM David Chisnall <theraven@freebsd.org> wrote:

> Wow, that looks incredibly useful.  Not needing bhyve / qemu (nested, if
> your main development environment is a VM) to test virtio drivers would be
> a huge productivity win.
>
> David
>
> On 13 Jul 2024, at 23:06, Warner Losh <imp@bsdimp.com> wrote:
>
> Hey David,
>
> You might want to check out https://reviews.freebsd.org/D45370, which has
> the testing framework as well as hints at other work that Emil Tsalapatis
> has done for virtiofs. It looks quite interesting. Anything he's done
> that's at odds with what I've said just shows where my analysis was flawed
> :) This looks quite promising, but I've not had the time to look at it in
> detail yet.
>
> Warner
>
> On Sat, Jul 13, 2024 at 2:44 AM David Chisnall <theraven@freebsd.org>
> wrote:
>
>> On 31 Dec 2023, at 16:19, Warner Losh <imp@bsdimp.com> wrote:
>>
>>
>> Yea. The FUSE protocol is going to be the challenge here. For this to be
>> useful, the VirtioFS support on the FreeBSD side needs to be 100% in the
>> kernel, since you can't have userland in the loop. This isn't so terrible,
>> though, since our VFS interface provides a natural breaking point for
>> converting the requests into FUSE requests. The trouble, I fear, is that a
>> mismatch between FreeBSD's VFS abstraction layer and Linux's will cause
>> issues (many years ago, the weakness of FreeBSD VFS caused problems for a
>> company doing caching, though things have no doubt improved since those
>> days). Second, there's a KVM tie-in for the direct-mapped pages between the
>> VM and the hypervisor. I'm not sure how that works on the client (FreeBSD)
>> side (though the description also says it's mapped via a PCI BAR, so maybe
>> the VM OS doesn't care).
>>
>>
>> From what I can tell from a little bit of looking at the code, our FUSE
>> implementation has a fairly cleanly abstracted layer (in fuse_ipc.c) for
>> handling the message queue.  For VirtioFS, it would 'just' be necessary to
>> factor out the bits here that do uio into something that talks to a VirtIO
>> ring.  I don't know what the VFS limitations are, but since the protocol
>> for VirtioFS is the kernel <-> userspace protocol for FUSE, it seems that
>> any functionality that works with FUSE filesystems in userspace would work
>> with VirtioFS filesystems.
>>
>> The shared buffer cache bits are nice but optional, so they could be done
>> in a later version once the basic functionality works.
>>
>> David
>>
>>
>