cvs commit: src/sys/fs/msdosfs msdosfs_denode.c
Bruce Evans
bde at zeta.org.au
Thu Sep 8 12:54:38 PDT 2005
On Thu, 8 Sep 2005, Dmitry Pryanishnikov wrote:
> On Thu, 8 Sep 2005, Mike Silbersack wrote:
>>> entries begin at byte offsets from the start of the media with identical
>>> low-order 32 bits; e.g., 64-bit offsets
>>>
>>> 0x0000000000001000 and
>>> 0x0000000100001000
>>
>> Hm, maybe it wouldn't be too difficult to create, then. There is an option
>> to have compressed filesystems, so if one wrote a huge filesystem with
>> files that all contained zeros, perhaps it would compress well enough.
There is an option to create sparse files, and a driver to map files to
devices. These can easily be used to create multi-terabyte file systems
using only a few KB of physical disk before newfs[_msdos] and only a
few MB of physical disk after newfs*. Use a maximal blocksize to minimize
metadata. I think someone fixed the overflow bug at 4G sectors in md(4),
so it is now easy to create multi-petabyte file systems. msdosfs with
FAT32 seems to be limited to a measly 8TB (2^28 FAT entries with a cluster
size of 32KB; the FAT itself is 1GB at 4 bytes per entry, and FreeBSD would
panic trying to malloc() the 32MB in-use bitmap at 1 bit per cluster).
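For reference, the arithmetic behind those limits can be checked with a
trivial user-land program (an illustrative sketch, not FreeBSD code; it
assumes 28-bit FAT32 cluster numbers, 4-byte FAT entries and a
1-bit-per-cluster in-use bitmap):
%%%
#include <stdint.h>
#include <stdio.h>

int
main(void)
{
	uint64_t nclust = 1ULL << 28;	/* max FAT32 cluster count */
	uint64_t clustsize = 32 * 1024;	/* maximal cluster size */

	/* 2^28 clusters * 32KB = 8TB of addressable data. */
	printf("max fs size: %ju bytes\n", (uintmax_t)(nclust * clustsize));
	/* 2^28 entries * 4 bytes = 1GB for the FAT itself. */
	printf("FAT size:    %ju bytes\n", (uintmax_t)(nclust * 4));
	/* 2^28 bits = 32MB for the in-core in-use bitmap. */
	printf("FAT bitmap:  %ju bytes\n", (uintmax_t)(nclust / 8));
	return (0);
}
%%%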
>> If you just started creating a lot of equally sized files containing zero
>> as their content, maybe it could be done via a script. Yeah, you could
>> just call truncate in some sort of shell script loop until you have enough
>> files, then go back and try reading file "000001", and that should cause
>> the panic, right?
>
> Our task is slightly different: it's not our files that should start at the
> magic offset, but their _directory entries_. I think this is achievable by
> creating a new FAT32 filesystem, then (in strict order) a directory, a large
> (approx. 4GB) file in it, a second directory, a file in it, and then looking
> up the first file. In order to get a panic we just have to tune the size of
> the large file. If I have enough time I'll try to prepare such a regression test.
msdosfs has pessimized block allocation to maximize fragmentation. This
involves random allocation of the first block in every file (including
directories). The randomness makes the chance of a collision very small
and not much affected by the presence of large files. However, it is
easy to change the allocation policy to maximize collisions. I use
the following to test allocation policies:
%%%
Index: msdosfs_fat.c
===================================================================
RCS file: /home/ncvs/src/sys/fs/msdosfs/msdosfs_fat.c,v
retrieving revision 1.35
diff -u -2 -r1.35 msdosfs_fat.c
--- msdosfs_fat.c 29 Dec 2003 11:59:05 -0000 1.35
+++ msdosfs_fat.c 26 Apr 2004 05:03:55 -0000
@@ -68,4 +68,6 @@
 #include <fs/msdosfs/fat.h>
 
+static int fat_allocpolicy = 1;
+
 /*
  * Fat cache stats.
@@ -796,9 +808,30 @@
 	len = 0;
 
-	/*
-	 * Start at a (pseudo) random place to maximize cluster runs
-	 * under multiple writers.
-	 */
-	newst = random() % (pmp->pm_maxcluster + 1);
+	switch (fat_allocpolicy) {
+	case 0:
+		newst = start;
+		break;
+	case 1:
+		newst = pmp->pm_nxtfree;
+		break;
+	case 5:
+		newst = (start == 0 ? pmp->pm_nxtfree : start);
+		break;
+	case 2:
+		/* FALLTHROUGH */
+	case 3:
+		if (start != 0) {
+			newst = fat_allocpolicy == 2 ? start : pmp->pm_nxtfree;
+			break;
+		}
+		/* FALLTHROUGH */
+	default:
+		/*
+		 * Start at a (pseudo) random place to maximize cluster runs
+		 * under multiple writers.
+		 */
+		newst = random() % (pmp->pm_maxcluster + 1);
+	}
+
 	foundl = 0;
 
%%%
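As posted, the policy is fixed at compile time: 0 always starts the search
at the caller's hint (deterministic, first-fit-style allocation, which makes
offsets predictable and so is presumably what you want when trying to arrange
collisions), 1 always starts at pmp->pm_nxtfree, 5 uses pm_nxtfree only when
there is no hint, 2 and 3 prefer the hint or pm_nxtfree respectively when a
hint exists, and anything else keeps the old randomized behaviour. To switch
policies without recompiling, the knob could be hooked to a sysctl, e.g.
(untested sketch; the node name debug.fat_allocpolicy is made up):
%%%
/* In msdosfs_fat.c, next to the variable added by the patch above. */
#include <sys/param.h>
#include <sys/kernel.h>
#include <sys/sysctl.h>

static int fat_allocpolicy = 1;		/* the variable from the patch */

SYSCTL_INT(_debug, OID_AUTO, fat_allocpolicy, CTLFLAG_RW,
    &fat_allocpolicy, 0, "msdosfs cluster allocation start policy");
%%%
Then something like "sysctl debug.fat_allocpolicy=0" would select the
deterministic policy at run time.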
Bruce