large disk > 8 TB
Mark Carlson
carlsonmark at gmail.com
Tue Dec 11 15:19:04 PST 2007
On 12/11/07, Ivan Voras <ivoras at freebsd.org> wrote:
> Michael Fuckner wrote:
> > Lan Tran wrote:
> >> I have a Dell PERC 6/E controller connected to an external Dell MD1000
> >> storage enclosure, which I set up as RAID 6. The RAID BIOS reports 8.5
> >> TB. I installed 7.0-BETA4 amd64, and sysinstall/dmesg.boot detect this
> >> correctly:
> >> mfid1: <MFI Logical Disk> on mfi1
> >> mfid1: 8578560MB (17568890880 sectors) RAID volume 'raid6' is optimal
> >>
> >> However, after I created a ZFS zpool on this device, it only shows
> >> 185 GB:
> >> # zpool create tank /dev/mfid1s1d
> >> # zpool list
> >> NAME   SIZE   USED   AVAIL   CAP   HEALTH   ALTROOT
> >> tank   185G   111K   185G     0%   ONLINE   -
> >>
> >> also with 'df -h':
> >> # df -h tank
> >> Filesystem    Size    Used   Avail   Capacity   Mounted on
> >> tank          182G      0B    182G         0%   /tank
> >>
> >
> > The main purpose of ZFS is doing software RAID (which is even faster
> > than HW RAID nowadays).
> >
> > You should export all disks separately to the OS, and then you don't
> > have the 4GB limit wrapping the size to 185GB.
>
> This is the wrong way around. Why would something wrap drive sizes at a
> 32-bit limit? The driver and the GEOM systems are 64-bit clean; if this
> is a problem in ZFS, it's a serious one.
>
> I don't have the drive capacity to create a large array, but I assume
> someone has tested ZFS on large arrays (Pawel?)
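(For what it's worth, the 185G that zpool reports is exactly what you
would get if the 64-bit sector count were truncated to 32 bits
somewhere along the way:

  17568890880 mod 2^32 = 389021696 sectors
  389021696 sectors * 512 bytes = 199179108352 bytes ~= 185 GiB

That is just arithmetic on the numbers quoted above, not a diagnosis of
where the truncation actually happens.)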
If there is a bug here, I'm not sure people will see it for a while in
practice. I mean, who uses hardware RAID to create an 8.5TB "disk" and
then builds a zpool with that one "disk"?
In this case, it appears there was some misconfiguration that has since
been remedied: hardware RAID seems to have been disabled, and all the
disks are now exported separately and used to create the zpool, with
success.
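For anyone hitting the same thing, a rough sketch of that kind of setup
(the mfid2 through mfid5 names below are placeholders; substitute
whatever the individual drives show up as on your system). raidz2 is
the double-parity ZFS analogue of the RAID 6 set it replaces:

# zpool create tank raidz2 mfid2 mfid3 mfid4 mfid5
# zpool list
# zpool status tank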
> Can you run "diskinfo -v" on the large array (the 8.5 TB one) and
> verify the system sees it all?
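For reference, the output would look roughly like the following (the
byte and sector values are just what the dmesg lines above imply, and
exact diskinfo formatting varies between versions):

# diskinfo -v mfid1
mfid1
        512             # sectorsize
        8995272130560   # mediasize in bytes
        17568890880     # mediasize in sectors

If "mediasize in sectors" shows the full 17568890880, GEOM is seeing
the whole array and the truncation would have to be happening above
that layer.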