Would anybody port DragonFlyBSD's HAMMER fs to FreeBSD?
Miroslav Lachman
000.fbsd at quip.cz
Wed Oct 1 22:25:38 UTC 2008
lhmwzy wrote:
> Yes, this is a way.
> I would do as you said if I needed to do so.
>
> 2008/10/1 Jeremy Chadwick <koitsu at freebsd.org>:
>
>>On Wed, Oct 01, 2008 at 02:29:12PM +0800, lhmwzy wrote:
>>
>>>That's it.
>>>Since we don't have the skill, what we can do is wait.
>>>
>>>Waiting is such a bad thing.......
>>
>>If this functionality is really something you want/need, you should
>>consider finding a kernel programmer who would be willing to port it,
>>for financial exchange (in English: you will be paying them $XX/hour
>>to port it to FreeBSD).
>>
>>This has happened in the past for some key features. Like I said, it
>>all depends on how much it matters to you.
HAMMER seems good, but at this time it is more important to finish ZFS
integration into FreeBSD: fix all known issues, do more testing, reach a
wider audience, and make it production ready. Not because ZFS is better
- maybe it is worse - that does not matter. I think it is better to have
one successful port finished than two filesystems in a non-production
state. FreeBSD currently lags behind other operating systems in
supported filesystems, and UFS2 is insufficient for today's storage
requirements. Once we have ZFS production ready, we can talk about
other filesystems.
I can't do any programming to port any filesystem, nor write patches.
All I can do is testing and reporting - and I am doing that.
I have run some stress tests of ZFS. Currently I have one ZFS mount with
56 snapshots taken during heavy tasks like copying or removing large
numbers of small files (mainly cp -R /usr/ports /tank/test/$i in loops,
plus tarring/untarring tasks), some large file creation with dd in the
background, etc. Everything is running fine on FreeBSD 7.0 amd64 with
4GB RAM and some kernel tuning:
vm.kmem_size="1024M"
vm.kmem_size_max="1024M"
kern.maxvnodes="400000"
vfs.zfs.prefetch_disable="1"
vfs.zfs.arc_min="16M"
vfs.zfs.arc_max="64M"
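For reference, the stress load described above looks roughly like the
sketch below (the tank/test dataset name, the iteration count and the dd
sizes are placeholders, not my exact commands):

#!/bin/sh
# Sketch of the stress load: copy the ports tree in a loop, tar/untar
# it, and snapshot the dataset, while dd writes a large file in the
# background. Paths, counts and sizes are placeholders.
dd if=/dev/zero of=/tank/test/bigfile bs=1m count=10240 &

i=1
while [ $i -le 50 ]; do
    cp -R /usr/ports /tank/test/$i
    tar -cf /tank/test/ports-$i.tar -C /tank/test/$i .
    mkdir -p /tank/test/untar-$i
    tar -xf /tank/test/ports-$i.tar -C /tank/test/untar-$i
    zfs snapshot tank/test@stress-$i
    i=$((i + 1))
done
wait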
There are 53202511 inodes on the ZFS partition. The zpool was created
over two slices of two disks (a mirror):
              capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
tank         434G  10.5G     75  1.24K   618K  5.76M
  mirror     434G  10.5G     75  1.24K   618K  5.76M
    ad4s2       -      -     13    328   918K  5.76M
    ad6s2       -      -     16    326  1.09M  5.76M
----------  -----  -----  -----  -----  -----  -----
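(For completeness, a pool laid out like this would have been created with
something along the lines of the commands below - just a sketch; the
tank/test dataset is my assumption based on the paths above.)

zpool create tank mirror ad4s2 ad6s2
zfs create tank/test
zpool iostat -v tank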
I have had no ZFS crashes, but as I read on the mailing lists, there are
still some problems, so let those be fixed and let things settle down
before porting another good filesystem.
Just my €0.02
Miroslav Lachman