Terabyte-FS
Tobias
c4 at portad.se
Tue Jun 22 15:23:16 GMT 2004
Okay, here's another of those "we can't use a terabyte filesystem" reports
again... :)
vinum -> create -f /etc/vinum.conf
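The /etc/vinum.conf itself isn't included in the report; for a single concatenated volume over these drives it would presumably look something like the sketch below (drive and device names taken from the listing that follows; the actual file and the `length 0` choice are assumptions):

```
# Hypothetical reconstruction of /etc/vinum.conf -- not the actual file.
drive 200a device /dev/ad2s1d
drive 200b device /dev/ad4s1d
# ... one "drive" line per remaining disk ...
volume fetus
  plex org concat
    sd length 0 drive 200a   # length 0 = use the rest of the drive
    sd length 0 drive 200b
    # ... one "sd" line per drive ...
```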
13 drives:
D 200a State: up /dev/ad2s1d A: 0/194474 MB (0%)
D 200b State: up /dev/ad4s1d A: 0/194474 MB (0%)
D 200c State: up /dev/ad5s1d A: 0/194474 MB (0%)
D 200d State: up /dev/ad6s1d A: 0/194474 MB (0%)
D 200e State: up /dev/ad7s1d A: 0/194474 MB (0%)
D 120a State: up /dev/ad8s1d A: 0/117796 MB (0%)
D 120b State: up /dev/ad9s1d A: 0/117796 MB (0%)
D 120c State: up /dev/ad10s1d A: 0/117796 MB (0%)
D 120d State: up /dev/ad11s1d A: 0/117796 MB (0%)
D 120e State: up /dev/ad12s1d A: 0/117796 MB (0%)
D 120f State: up /dev/ad13s1d A: 0/117796 MB (0%)
D 120g State: up /dev/ad14s1d A: 0/117796 MB (0%)
D 120h State: up /dev/ad15s1d A: 0/117796 MB (0%)
1 volumes:
V fetus State: up Plexes: 1 Size: 1869 GB
1 plexes:
P fetus.p0 C State: up Subdisks: 13 Size: 1869 GB
13 subdisks:
S fetus.p0.s0 State: up D: 200a Size: 189 GB
S fetus.p0.s1 State: up D: 200b Size: 189 GB
S fetus.p0.s2 State: up D: 200c Size: 189 GB
S fetus.p0.s3 State: up D: 200d Size: 189 GB
S fetus.p0.s4 State: up D: 200e Size: 189 GB
S fetus.p0.s5 State: up D: 120a Size: 115 GB
S fetus.p0.s6 State: up D: 120b Size: 115 GB
S fetus.p0.s7 State: up D: 120c Size: 115 GB
S fetus.p0.s8 State: up D: 120d Size: 115 GB
S fetus.p0.s9 State: up D: 120e Size: 115 GB
S fetus.p0.s10 State: up D: 120f Size: 115 GB
S fetus.p0.s11 State: up D: 120g Size: 115 GB
S fetus.p0.s12 State: up D: 120h Size: 115 GB
(root at st1:~) newfs -O 2 /dev/vinum/fetus
/dev/vinum/fetus: 1914745.1MB (3921397976 sectors) block size 16384,
fragment size 2048
using 10420 cylinder groups of 183.77MB, 11761 blks, 23552 inodes.
newfs: can't read old UFS1 superblock: read error from block device:
Invalid argument
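The sector count newfs prints is suggestive: assuming 512-byte sectors, the volume is past the 2^31-sector (~1 TiB) mark, so one plausible explanation is a signed 32-bit sector number overflowing somewhere in the block-device path while newfs probes for an old UFS1 superblock. A quick sanity check of the numbers:

```python
# Sanity-check the size newfs reported, assuming 512-byte sectors.
SECTOR = 512                 # bytes per sector (assumption)
sectors = 3921397976         # from the newfs output above

total_bytes = sectors * SECTOR
print(total_bytes)           # 2007755763712 bytes, ~1.83 TiB

# A signed 32-bit sector number tops out at 2**31 - 1 (~1 TiB),
# which this volume exceeds.
print(sectors > 2**31 - 1)   # True
```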
(root at st1:~) uname -a
FreeBSD st1 5.2.1-RELEASE FreeBSD 5.2.1-RELEASE #0: Mon Feb 23 20:45:55 GMT
2004 root at wv1u.btc.adaptec.com:/usr/obj/usr/src/sys/GENERIC i386
I also got the same message with 5.1-RELEASE, so it isn't new to 5.2.1.
One solution would be to divide it into two parts, e.g. all the 200 GB
drives in one volume and the rest in another, but it would be much nicer to
have it all as one unit.
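If the ~1 TiB (2^31 sectors of 512 bytes) boundary really is the culprit, a back-of-the-envelope check shows both halves of such a split would stay under it (GB figures taken from the subdisk listing above; the limit itself is an assumption):

```python
# Check the proposed split against an assumed 2**31-sector limit.
SECTOR = 512
LIMIT = 2**31                 # sectors, ~1 TiB at 512 bytes/sector

def gb_to_sectors(gb):
    """GiB -> 512-byte sectors (1 GiB = 2**30 bytes)."""
    return gb * 2**30 // SECTOR

part_200s = gb_to_sectors(5 * 189)   # five "200 GB" subdisks
part_120s = gb_to_sectors(8 * 115)   # eight "120 GB" subdisks

print(part_200s < LIMIT)   # True
print(part_120s < LIMIT)   # True
```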
I'll try the latest -CURRENT later tonight and see if anything has changed
by then; not that I think it has.