BigDisk project: du(1) 64bit clean.
Julian Elischer
julian at elischer.org
Tue Jan 4 22:57:34 PST 2005
M. Warner Losh wrote:
> In message: <41DB2B24.6050005 at elischer.org>
> Julian Elischer <julian at elischer.org> writes:
> :
> :
> : Pawel Jakub Dawidek wrote:
> :
> : >Hi.
> : >
> : >I want you to look at two patches which make du(1) 64-bit clean.
> : >This work is part of the BigDisk project:
> : >
> : > http://www.freebsd.org/projects/bigdisk/
> : >
> : >
> : One thing that needs to be done is a secondary-storage fsck,
> : one that doesn't try to put everything in RAM.
> : Basically this will mean extracting all the metadata from the filesystem into
> : files and running sort operations of various kinds on them
> : to order the data in ways that allow consistency to be checked.
> : It will take a bit longer than a RAM fsck but maybe not as much as
> : one might fear.
> : We all remember those "sort a mag-tape larger than RAM"
> : lessons from CS101, don't we?
> : At least it doesn't have to be "in place" so merge sorts are OK. :-)
> :
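For concreteness, here is a minimal sketch of the tape-style external
merge sort being described: split the extracted metadata into RAM-sized
runs, qsort(3) each run out to a temporary file, then do a k-way merge
that holds only one record per run in memory.  All of the names here
(struct rec, make_runs, merge_runs, MAXRUNS) are hypothetical, not from
any existing fsck code.

/*
 * Out-of-core merge sort of fixed-size metadata records.
 * Hypothetical sketch; names are illustrative only.
 */
#include <stdio.h>
#include <stdlib.h>

struct rec {                       /* one extracted metadata record */
        unsigned long long key;    /* e.g. block or inode number */
        unsigned long long val;
};

#define RUN_RECS (1024 * 1024)     /* records sorted in RAM per run */
#define MAXRUNS  16                /* a real tool would merge recursively */

static int
cmp_rec(const void *a, const void *b)
{
        const struct rec *ra = a, *rb = b;

        return ((ra->key > rb->key) - (ra->key < rb->key));
}

/* Phase 1: sort the input a RAM-sized chunk at a time, writing each */
/* sorted chunk out as a "run" file.  Returns the number of runs.    */
static int
make_runs(FILE *in, char names[MAXRUNS][32])
{
        struct rec *buf = malloc(RUN_RECS * sizeof(*buf));
        size_t n;
        int runs = 0;

        while (runs < MAXRUNS &&
            (n = fread(buf, sizeof(*buf), RUN_RECS, in)) > 0) {
                qsort(buf, n, sizeof(*buf), cmp_rec);
                snprintf(names[runs], 32, "fsck.run.%d", runs);
                FILE *out = fopen(names[runs], "wb");
                fwrite(buf, sizeof(*buf), n, out);
                fclose(out);
                runs++;
        }
        free(buf);
        return (runs);
}

/* Phase 2: k-way merge of the runs.  Only one record per run lives  */
/* in RAM, so memory use is flat regardless of filesystem size.      */
/* (A heap would replace the linear scan for large run counts.)      */
static void
merge_runs(int runs, char names[MAXRUNS][32], FILE *out)
{
        FILE *f[MAXRUNS];
        struct rec cur[MAXRUNS];
        int live[MAXRUNS], i, best;

        for (i = 0; i < runs; i++) {
                f[i] = fopen(names[i], "rb");
                live[i] = (fread(&cur[i], sizeof(cur[i]), 1, f[i]) == 1);
        }
        for (;;) {
                for (best = -1, i = 0; i < runs; i++)
                        if (live[i] &&
                            (best < 0 || cur[i].key < cur[best].key))
                                best = i;
                if (best < 0)
                        break;
                fwrite(&cur[best], sizeof(cur[best]), 1, out);
                live[best] =
                    (fread(&cur[best], sizeof(cur[best]), 1, f[best]) == 1);
        }
        for (i = 0; i < runs; i++)
                fclose(f[i]);
}

Memory use in the merge phase is one record per run, so it stays flat no
matter how much metadata the filesystem has; the price is the extra
sequential I/O to write and re-read the runs.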
> : Why?
> :
> : A bitmap of 1TB of 512-byte records is 244MB, so on a 4GB machine
> : with 3GB available to the process you can't even fit the bitmaps into
> : memory for a 12TB filesystem, let alone the other metadata.
> :
> : Going to 2048-byte frags helps, but you still run into a limit.
> : Last I tried it, you needed about 600MB per TB of filesystem to check.
> :
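Those numbers are easy to re-derive (assuming decimal TB and one bitmap
bit per frag, as above):

#include <stdio.h>

int
main(void)
{
        unsigned long long tbyte = 1000ULL * 1000 * 1000 * 1000;
        int fragsize[] = { 512, 2048 };
        int i;

        for (i = 0; i < 2; i++) {
                unsigned long long frags = tbyte / fragsize[i];
                unsigned long long bytes = frags / 8; /* 1 bit per frag */
                printf("%4d-byte frags: %llu MB of bitmap per TB\n",
                    fragsize[i], bytes / 1000000);
        }
        return (0);
}

which prints 244MB per TB for 512-byte frags (about 2.9GB of bitmaps for
a 12TB filesystem) and 61MB per TB for 2048-byte frags.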
> : So I think a special fsck that uses files is a must for really big
> : filesystems, unless they (the filesystems) can be broken up in
> : a logical way (IBM did that many years ago, I believe).
> : I think you should add that to your list.
>
> I think that a large part of this could be reduced by using simple
> arrays, which are more memory-efficient, rather than lists...
>
> Warner
But that just pushes the problem a bit further away.
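The savings from arrays are real but constant-factor, which is the point
above; a quick comparison (hypothetical struct names, 64-bit pointers
assumed):

#include <stdio.h>

struct rec {                    /* hypothetical metadata record */
        unsigned long long key, val;            /* 16 bytes */
};

struct node {                   /* the same record on a linked list */
        struct node *next;                      /* +8 bytes of pointer */
        struct rec r;
};

int
main(void)
{
        printf("array entry: %zu bytes, list node: %zu bytes\n",
            sizeof(struct rec), sizeof(struct node));
        /* plus per-node malloc(3) bookkeeping, which an array avoids */
        return (0);
}

Cutting 24 bytes per entry to 16 helps, but memory still grows linearly
with filesystem size, so an out-of-core check is eventually needed
either way.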