Resizing a zpool as a VMware ESXi guest ...
Matthew Grooms
mgrooms at shrew.net
Fri Oct 10 20:42:55 UTC 2014
All,
I am a long time user and advocate of FreeBSD and manage several
deployments of FreeBSD in a few data centers. Now that these
environments are almost always virtual, it makes sense for FreeBSD to
support basic features such as dynamic disk resizing. It looks like
most of the parts are intended to work. Kudos to the FreeBSD Foundation
for seeing the need and sponsoring the dynamic growth of mounted UFS
filesystems via growfs. Unfortunately, it appears that there are still
problems in this area, such as ...
a) CAM/GEOM not recognizing when a drive's size has increased
b) zpool not recognizing when a GPT partition's size has increased
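To make (a) concrete, an easy way to check is to compare what the LUN
itself reports with what GEOM currently believes about the provider.
Both commands are in the base system; the exact flags are from memory,
so treat this as a sketch:

  camcontrol readcap da0 -h    # capacity as reported by the device (SCSI READ CAPACITY)
  diskinfo -v da0              # capacity as GEOM currently sees it

If the two disagree after growing the virtual disk, the device is
advertising the new size but the kernel has not re-read it.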
For example, if I do an install of FreeBSD 10 on VMware using ZFS, I see
the following ...
root at zpool-test:~ # gpart show
=>      34  16777149  da0  GPT  (8.0G)
        34      1024    1  freebsd-boot  (512K)
      1058   4194304    2  freebsd-swap  (2.0G)
   4195362  12581821    3  freebsd-zfs  (6.0G)
If I increase the VM disk size using VMware to 16G and rescan using
camcontrol, this is what I see ...
root at zpool-test:~ # camcontrol rescan all
Re-scan of bus 0 was successful
Re-scan of bus 1 was successful
Re-scan of bus 2 was successful
root at zpool-test:~ # gpart show
=>      34  16777149  da0  GPT  (8.0G)
        34      1024    1  freebsd-boot  (512K)
      1058   4194304    2  freebsd-swap  (2.0G)
   4195362  12581821    3  freebsd-zfs  (6.0G)
gpart still reports the old 8G size; the rescan completes, but it
apparently does not cause the capacity of the existing disk to be
re-read. If I reboot the VM, it picks up the correct size ...
root at zpool-test:~ # gpart show
=>      34  16777149  da0  GPT  (16G) [CORRUPT]
        34      1024    1  freebsd-boot  (512K)
      1058   4194304    2  freebsd-swap  (2.0G)
   4195362  12581821    3  freebsd-zfs  (6.0G)
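As an aside, my (unverified) understanding is that camcontrol rescan
only looks for new and departed devices; it does not re-read the
capacity of an existing one. If the reprobe subcommand that I believe
was recently added to camcontrol in CURRENT is available, something
like this might make the first reboot unnecessary:

  camcontrol reprobe da0    # assumption: re-reads the capacity and notifies GEOM

I have not tested that on 10.0, so take it as a guess rather than a
recipe.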
Now I have 16G to play with, though the GPT is flagged [CORRUPT]
because the backup header is no longer at the last LBA of the grown
disk. A gpart recover rewrites it at the new end of the disk, and then
I can expand the freebsd-zfs partition to claim the additional space ...
root at zpool-test:~ # gpart recover da0
da0 recovered
root at zpool-test:~ # gpart show
=>        34  33554365  da0  GPT  (16G)
          34      1024    1  freebsd-boot  (512K)
        1058   4194304    2  freebsd-swap  (2.0G)
     4195362  12581821    3  freebsd-zfs  (6.0G)
    16777183  16777216       - free -  (8.0G)
root at zpool-test:~ # gpart resize -i 3 da0
root at zpool-test:~ # gpart show
=>        34  33554365  da0  GPT  (16G)
          34      1024    1  freebsd-boot  (512K)
        1058   4194304    2  freebsd-swap  (2.0G)
     4195362  29359037    3  freebsd-zfs  (14G)
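Before touching the pool it is probably worth confirming that the
partition provider itself really grew, not just the table entry. gpart
list shows the Mediasize that consumers of each partition actually see
(assuming the usual da0p3 naming):

  gpart list da0 | grep -E 'Name:|Mediasize:'

If da0p3 does not report roughly 14G here, the zpool steps below have
no chance of working.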
Now I want the zpool to claim the space in the newly expanded 14G partition ...
root at zpool-test:~ # zpool status
  pool: zroot
 state: ONLINE
  scan: none requested
config:

        NAME                                          STATE     READ WRITE CKSUM
        zroot                                         ONLINE       0     0     0
          gptid/352086bd-50b5-11e4-95b8-0050569b2a04  ONLINE       0     0     0
root at zpool-test:~ # zpool set autoexpand=on zroot
root at zpool-test:~ # zpool online -e zroot gptid/352086bd-50b5-11e4-95b8-0050569b2a04
root at zpool-test:~ # zpool list
NAME    SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
zroot  5.97G   876M  5.11G    14%  1.00x  ONLINE  -
The zpool still only shows 5.11G free. Let's reboot and try
again ...
root at zpool-test:~ # zpool set autoexpand=on zroot
root at zpool-test:~ # zpool online -e zroot gptid/352086bd-50b5-11e4-95b8-0050569b2a04
root at zpool-test:~ # zpool list
NAME    SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
zroot  14.0G   876M  13.1G     6%  1.00x  ONLINE  -
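For anyone who wants to dig into the second half of this: diskinfo
works on any GEOM provider, so comparing what the gptid provider that
backs the pool reports before and after this reboot should show
whether the label provider or ZFS itself was hanging on to the old
size. The path is obviously specific to my pool:

  diskinfo -v /dev/gptid/352086bd-50b5-11e4-95b8-0050569b2a04    # size of the provider ZFS actually opens
  zpool get autoexpand zroot                                     # confirm the property stuck

Either answer would help narrow down where a fix belongs.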
Now I have 13.1G free. I can hand this space to any of my ZFS datasets
and the change is picked up immediately. So the question remains: why
do I need to reboot the OS twice before new disk space can be used by
a dataset?
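For reference, this is the sequence I would expect to be able to run
online, with no reboots at all; it is the same set of commands as
above, with the unverified reprobe guess standing in for reboot number
one:

  camcontrol reprobe da0      # guess: needs a newer camcontrol; plain rescan is not enough
  gpart recover da0           # move the backup GPT header to the new end of the disk
  gpart resize -i 3 da0       # grow the freebsd-zfs partition into the free space
  zpool set autoexpand=on zroot
  zpool online -e zroot gptid/352086bd-50b5-11e4-95b8-0050569b2a04
  zpool list                  # should show the new size immediately

If that worked as written, none of the hand-holding above would be
needed.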
FreeBSD is first and foremost a server operating system. Servers are
commonly deployed in data centers, and virtual environments are now
the rule in data centers rather than the exception. VMware still holds
the vast majority of the private virtual environment market, so I
assume that most people would expect things like this to work out of
the box.
Did I miss a required step or is this fixed in CURRENT?
Thanks,
-Matthew