Growing UFS beyond 2 TB
Nick Gustas
freebsd-fs at tychl.net
Thu May 24 17:34:52 UTC 2007
Richard Noorlandt wrote:
> Hi everybody,
>
> I'm currently configuring a large fileserver (dual Opteron and an Areca
> 1160 for hardware RAID), and I'm running into some partitioning
> problems. Currently, I have 6 500 GB drives to put in a main RAID-6
> array, giving me 2 TB of usable storage. Now, I want this 2 TB to be
> partitioned into several separate partitions of various sizes. The last
> partition will be 1 TB, and will be the most important partition on the
> array.
>
> Now my problem is that this 1 TB partition must be able to grow beyond
> 2 TB at a later stage (after adding extra HDs). If I understand
> correctly, it is not possible to grow a UFS partition beyond 2 TB when
> the drive is partitioned with fdisk. One should use GPT instead.
> However, it appears that GPT currently has no way to resize partitions,
> leaving me no way to enlarge the 1 TB partition and run growfs.
>
> Does anyone have a suggestion? Or am I overlooking something? I can
> hardly imagine that what I want is very rare, so I think there must be
> some solution.
>
> Best regards,
>
> Richard
You should be able to simply finish the online expansion of the array,
recreate your GPT partitions, and run growfs. I recently did an OCE
(online capacity expansion) on an array with a single GPT partition
containing ZFS on a 3ware controller, and it worked fine. Destroying the
GPT partitioning doesn't hurt any data as long as you put it back
exactly the same way.
The order of operations was:
OCE finished, one drive added to array.
zpool export threeware
gpt destroy da0
gpt create da0
gpt add da0
zpool import threeware
Everything was fine: all the data was there, just in a bigger partition.
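The five steps above can be sketched as a tiny dry-run script. This is
my own wrapper, not a FreeBSD tool; it only echoes the commands, so
running it touches nothing until you deliberately pipe its output to sh:

```shell
#!/bin/sh
# Dry-run sketch of the ZFS-on-GPT expansion sequence above.
# expand_plan is a made-up helper name; it only prints the commands.
expand_plan() {
    disk=$1
    pool=$2
    echo "zpool export $pool"
    echo "gpt destroy $disk"
    echo "gpt create $disk"
    echo "gpt add $disk"
    echo "zpool import $pool"
}

# Review the plan first; pipe it to sh only once it looks right.
expand_plan da0 threeware
```

After the OCE completes, `expand_plan da0 threeware | sh` would replay
the exact sequence from the transcript.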
I see no reason this wouldn't work with UFS over 2 TB, assuming you have
the growfs patches that have been posted in the last few weeks.
In your case, I would record the commands you used to create the GPT
partitions in the first place, or at least the output of
gpt show <disk>. Assuming two main partitions you aren't going to
resize, and a third that you are, the procedure would be:
OCE finished
umount /partitions
gpt destroy da0
gpt create da0
gpt add -i 1 -s youroriginalsizehere da0
gpt add -i 2 -s youroriginalsizehere da0
gpt add -i 3 da0 # use remainder of disk for your last partition,
whatever that may be
growfs /dev/da0p3
mount /partitions
The data in the first two partitions should be intact, and assuming
growfs did its job, your third should be there too, just bigger.
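One way to avoid retyping sizes by hand is to regenerate the gpt add
commands from the recorded gpt show listing. A sketch: the function
names here are made up, and the sample listing is copied from the
transcript in this mail; in practice you would feed in your own saved
output:

```shell
#!/bin/sh
# Sketch: turn a saved 'gpt show' listing back into the 'gpt add'
# commands that recreate the partitions after the GPT is destroyed.
recorded_gpt_show() {
cat <<'EOF'
         34  100000000      1  GPT part - FreeBSD UFS/UFS2
  100000034  100000000      2  GPT part - FreeBSD UFS/UFS2
  200000034 1000000000      3  GPT part - FreeBSD UFS/UFS2
EOF
}

# Partitions 1 and 2 keep their exact original sizes; partition 3 is
# recreated without -s so it takes the rest of the grown disk.
regen_adds() {
    recorded_gpt_show | awk '
        $3 == 1 || $3 == 2 { printf "gpt add -i %d -s %d da0\n", $3, $2 }
        $3 == 3            { printf "gpt add -i %d da0\n", $3 }'
}

regen_adds
```

Getting the -s values for the fixed partitions exactly right is the one
step where a typo costs you data, which is why generating them from the
recorded listing is safer than transcribing them.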
I have a few minutes and a free 7.0 system handy here. I don't have time
to do an actual array expansion, but I can run the gpt commands you
would use. Example below:
da0 at twa0 bus 0 target 0 lun 0
da0: <AMCC 9650SE-24M DISK 3.08> Fixed Direct Access SCSI-5 device
da0: 100.000MB/s transfers
da0: 762918MB (1562456064 512 byte sectors: 255H 63S/T 97258C)
Let's say you used this sequence of commands to get started:
AMD64# gpt create da0
AMD64# gpt add -i 1 -s 100000000 da0
da0p1 added
AMD64# gpt add -i 2 -s 100000000 da0
da0p2 added
AMD64# gpt add -i 3 -s 1000000000 da0
da0p3 added
AMD64# gpt show da0
      start       size  index  contents
          0          1         PMBR
          1          1         Pri GPT header
          2         32         Pri GPT table
         34  100000000      1  GPT part - FreeBSD UFS/UFS2
  100000034  100000000      2  GPT part - FreeBSD UFS/UFS2
  200000034 1000000000      3  GPT part - FreeBSD UFS/UFS2
 1200000034  362455997
 1562456031         32         Sec GPT table
 1562456063          1         Sec GPT header
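The arithmetic in that listing can be checked mechanically: each
partition starts where the previous one ends, and the secondary table
plus header occupy the last 33 sectors of the disk. A quick sanity
check, pure arithmetic copied from the listing above, safe to run
anywhere:

```shell
#!/bin/sh
# Sanity-check the 'gpt show' layout: consecutive start offsets, and
# the free space left between partition 3 and the secondary GPT table.
TOTAL=1562456064                    # da0: 1562456064 512-byte sectors
P1_START=34;  P1_SIZE=100000000
P2_START=$((P1_START + P1_SIZE));  P2_SIZE=100000000
P3_START=$((P2_START + P2_SIZE));  P3_SIZE=1000000000
FREE_START=$((P3_START + P3_SIZE))
SEC_TABLE_START=$((TOTAL - 33))     # 32-sector table + 1-sector header
FREE_SIZE=$((SEC_TABLE_START - FREE_START))
echo "p2=$P2_START p3=$P3_START free=$FREE_SIZE"
```

Every figure comes out matching the listing, including the 362455997
free sectors that the last partition will absorb after the expansion.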
Again, I don't have free time to expand the array, so I initially capped
the size of p3 to simulate the later growth.
AMD64# newfs /dev/da0p1
/dev/da0p1: 48828.1MB (100000000 sectors) block size 16384, fragment size 2048
using 266 cylinder groups of 183.72MB, 11758 blks, 23552 inodes.
AMD64# newfs /dev/da0p2
/dev/da0p2: 48828.1MB (100000000 sectors) block size 16384, fragment size 2048
using 266 cylinder groups of 183.72MB, 11758 blks, 23552 inodes.
AMD64# newfs /dev/da0p3
/dev/da0p3: 488281.2MB (1000000000 sectors) block size 16384, fragment size 2048
using 2658 cylinder groups of 183.72MB, 11758 blks, 23552 inodes.
super-block backups (for fsck -b #) at:
AMD64# mkdir /1 /2 /3
AMD64# mount /dev/da0p1 /1
AMD64# mount /dev/da0p2 /2
AMD64# mount /dev/da0p3 /3
AMD64# df -h /1 /2 /3
Filesystem     Size    Used   Avail  Capacity  Mounted on
/dev/da0p1      46G    4.0K     42G     0%     /1
/dev/da0p2      46G    4.0K     42G     0%     /2
/dev/da0p3     462G    4.0K    425G     0%     /3
AMD64# cp /COPYRIGHT /1
AMD64# cp /COPYRIGHT /2
AMD64# cp /COPYRIGHT /3
AMD64# md5 /1/COPYRIGHT /2/COPYRIGHT /3/COPYRIGHT
MD5 (/1/COPYRIGHT) = 0b9d198f2b3abf7587682d7291dbcb8b
MD5 (/2/COPYRIGHT) = 0b9d198f2b3abf7587682d7291dbcb8b
MD5 (/3/COPYRIGHT) = 0b9d198f2b3abf7587682d7291dbcb8b
AMD64# umount /1
AMD64# umount /2
AMD64# umount /3
(array instantly expanded here)
AMD64# gpt destroy da0
AMD64# gpt create da0
AMD64# gpt add -i 1 -s 100000000 da0
da0p1 added
AMD64# gpt add -i 2 -s 100000000 da0
da0p2 added
AMD64# gpt add -i 3 da0
da0p3 added
AMD64# growfs /dev/da0p3
We strongly recommend you to make a backup before growing the Filesystem
Did you backup your data (Yes/No) ? Yes
new file systemsize is: 340613999 frags
Warning: 33020 sector(s) cannot be allocated.
growfs: 665245.6MB (1362422976 sectors) block size 16384, fragment size 2048
using 3621 cylinder groups of 183.72MB, 11758 blks, 23552 inodes.
AMD64# mount /dev/da0p1 /1
AMD64# mount /dev/da0p2 /2
AMD64# mount /dev/da0p3 /3
AMD64# df -h /1 /2 /3
Filesystem     Size    Used   Avail  Capacity  Mounted on
/dev/da0p1      46G     12K     42G     0%     /1
/dev/da0p2      46G     12K     42G     0%     /2
/dev/da0p3     629G     12K    579G     0%     /3
AMD64# md5 /1/COPYRIGHT /2/COPYRIGHT /3/COPYRIGHT
MD5 (/1/COPYRIGHT) = 0b9d198f2b3abf7587682d7291dbcb8b
MD5 (/2/COPYRIGHT) = 0b9d198f2b3abf7587682d7291dbcb8b
MD5 (/3/COPYRIGHT) = 0b9d198f2b3abf7587682d7291dbcb8b
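As a quick cross-check of the growfs output above: 1362422976 sectors of
512 bytes is indeed about 665245.6 MB, and df's smaller 629G figure is
the same filesystem after metadata overhead. The conversion is plain
arithmetic and runs anywhere:

```shell
#!/bin/sh
# Convert the grown filesystem size reported by growfs back to MB.
SECTORS=1362422976
BYTES=$((SECTORS * 512))
MB=$((BYTES / 1048576))             # 1 MB = 1048576 bytes
echo "$MB MB"                       # prints "665245 MB", matching 665245.6MB
```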
growfs scares me more than gpt destroy :)
Hope this helps!