ZFS subvolume support inside Bhyve vm
Сергей Мамонов
mrqwer88@gmail.com
Fri Mar 11 00:15:58 UTC 2016
Hello!
Yes, zvols look awesome. But which driver do you use for them? And what
about disk-usage overhead in the guest?
virtio-blk doesn't support fstrim (ahci-hd supports it, but is it slower?
"*At this point virtio-blk is indeed faster than ahci-hd on high IOPS*").
On Linux with KVM we use the virtio-scsi driver, which supports fstrim,
but as far as I can see it is not yet available for bhyve in 10.2-STABLE.
And I am not alone with this question -
https://lists.freebsd.org/pipermail/freebsd-virtualization/2015-March/003442.html
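
For reference, a minimal sketch of the two disk emulations in question (the
zroot/vm0 names and the tap0 interface are made up for illustration):

    # create a zvol to back the guest disk
    zfs create -V 20G zroot/vm0

    # ahci-hd: guest TRIM/fstrim reaches the zvol, so freed blocks are
    # returned to the pool
    bhyve -c 2 -m 2G -H \
      -s 0,hostbridge -s 1,lpc \
      -s 2,virtio-net,tap0 \
      -s 3,ahci-hd,/dev/zvol/zroot/vm0 \
      -l com1,stdio vm0

    # virtio-blk: reportedly faster on high IOPS, but without TRIM the
    # zvol only ever grows:
    #   -s 3,virtio-blk,/dev/zvol/zroot/vm0

In a Linux guest the difference shows up with "fstrim -v /", which fails on
a virtio-blk disk because the device does not advertise discard support.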
2016-03-11 2:45 GMT+03:00 Paul Vixie <paul@redbarn.org>:
>
>
> Pavel Odintsov wrote:
>
>> Hello, Dear Community!
>>
>> I would like to ask about plans for this storage-engine approach. I like
>> ZFS very much and we are storing about half a petabyte of data here.
>>
>> But when we are speaking about VMs we have to use zvols or even raw
>> file-based images, and those discard all ZFS benefits.
>>
>
> i use zvols for my bhyves and they have two of the most important zfs
> advantages:
>
> 1. snapshots.
>
>> root@mm1:/home/vixie # zfs list|grep fam
>> zroot1/vms/family 55.7G 3.84T 5.34G -
>> root@mm1:/home/vixie # zfs snap zroot1/vms/family@before
>>
>> [family.redbarn:amd64] touch /var/tmp/after
>>
>> root@mm1:/home/vixie # zfs snap zroot1/vms/family@after
>> root@mm1:/home/vixie # mkdir /mnt/before /mnt/after
>> root@mm1:/home/vixie # zfs clone zroot1/vms/family@before zroot1/before
>> root@mm1:/home/vixie # fsck_ffs -p /dev/zvol/zroot1/beforep2
>> ...
>> /dev/zvol/zroot1/beforep2: 264283 files, 1118905 used, 11575625 free
>> (28697 frags, 1443366 blocks, 0.2% fragmentation)
>> root@mm1:/home/vixie # mount -r /dev/zvol/zroot1/beforep2 /mnt/before
>>
>> root@mm1:/home/vixie # zfs clone zroot1/vms/family@after zroot1/after
>> root@mm1:/home/vixie # fsck_ffs -p /dev/zvol/zroot1/afterp2
>> ...
>> /dev/zvol/zroot1/afterp2: 264284 files, 1118905 used, 11575625 free
>> (28697 frags, 1443366 blocks, 0.2% fragmentation)
>> root@mm1:/home/vixie # mount -r /dev/zvol/zroot1/afterp2 /mnt/after
>>
>> root@mm1:/home/vixie # ls -l /mnt/{before,after}/var/tmp/after
>> ls: /mnt/before/var/tmp/after: No such file or directory
>> -rw-rw-r-- 1 vixie wheel 0 Mar 10 22:52 /mnt/after/var/tmp/after
>>
>
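
The cleanup half of that workflow, and actually reverting the vm disk to a
snapshot, would look something like this (same dataset names as above):

    # detach and destroy the inspection clones
    umount /mnt/before /mnt/after
    zfs destroy zroot1/before
    zfs destroy zroot1/after

    # revert the vm disk to @before (with the guest stopped; -r also
    # discards the newer @after snapshot and everything written since)
    zfs rollback -r zroot1/vms/family@before
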
> 2. storage redundancy, read caching, and write caching:
>
>> root@mm1:/home/vixie # zpool status | tr -d '\t'
>> pool: zroot1
>> state: ONLINE
>> scan: scrub repaired 0 in 2h24m with 0 errors on Thu Mar 10 12:24:13 2016
>> config:
>>
>> NAME                                           STATE     READ WRITE CKSUM
>> zroot1                                         ONLINE       0     0     0
>>   mirror-0                                     ONLINE       0     0     0
>>     gptid/2427e651-d9cc-11e3-b8a1-002590ea750a ONLINE       0     0     0
>>     gptid/250b0f01-d9cc-11e3-b8a1-002590ea750a ONLINE       0     0     0
>>   mirror-1                                     ONLINE       0     0     0
>>     gptid/d35bb315-da08-11e3-b17f-002590ea750a ONLINE       0     0     0
>>     gptid/d85ad8be-da08-11e3-b17f-002590ea750a ONLINE       0     0     0
>> logs
>>   mirror-2                                     ONLINE       0     0     0
>>     ada0s1                                     ONLINE       0     0     0
>>     ada1s1                                     ONLINE       0     0     0
>> cache
>>   ada0s2                                       ONLINE       0     0     0
>>   ada1s2                                       ONLINE       0     0     0
>>
>> errors: No known data errors
>>
>
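
For completeness, log and cache vdevs like those are attached with plain
zpool commands (device names as in the status output above; the slices are
presumably on SSDs):

    # mirrored slices as a separate intent log for synchronous writes
    zpool add zroot1 log mirror ada0s1 ada1s1
    # the remaining slices as L2ARC read cache (cache vdevs are striped,
    # never mirrored)
    zpool add zroot1 cache ada0s2 ada1s2
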
> so while i'd love to chroot a bhyve driver to some place in the middle of
> the host's file system and then pass VFS right on through, more or less the
> way mount_nullfs does, i am pretty comfortable with zvol UFS, and i think
> it's misleading to say that zvol UFS lacks all ZFS benefits.
>
> --
> P Vixie
>
>
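
For comparison, the nullfs-style passthrough described above already exists
for jails, just not for bhyve guests; a hypothetical host-side example
(paths invented):

    # expose a subtree of the host filesystem inside a jail's root
    mount_nullfs /zroot1/guests/shared /jails/family/mnt/shared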