Issues with XEN and ZFS

Rodney W. Grimes freebsd-rwg at pdx.rh.CN85.dnsmgr.net
Mon Feb 11 15:43:48 UTC 2019


> Thanks for the testing!
> 
> On Fri, Feb 08, 2019 at 07:35:04PM +0000, Eric Bautsch wrote:
> > Hi.
> > 
> > 
> > Brief abstract: I'm having ZFS/Xen interaction issues with the disks being
> > declared unusable by the dom0.
> >
> > 
> > The longer bit:
> > 
> > I'm new to FreeBSD, so my apologies for all the stupid questions. I'm trying
> > to migrate from Linux as my virtual platform host (very bad experiences with
> > stability, let's leave it at that). I'm hosting mostly Solaris VMs (that
> > being my choice of OS, but again, Betamax/VHS, need I say more), as well as
> > a Windows VM (because I have to) and a Linux VM (as a future desktop via
> > thin clients as and when I have to retire my SunRay solution which also runs
> > on a VM for lack of functionality).
> > 
> > So, I got xen working on FreeBSD now after my newbie mistake was pointed out to me.
> > 
> > However, I seem to be stuck again:
> > 
> > I have, in this initial test server, only two disks. They are SATA hanging
> > off the on-board SATA controller. The system is one of those Shuttle XPC
> > cubes, an older one I had hanging around with 16GB memory and I think 4
> > cores.
> > 
> > I've given the dom0 2GB of memory and 2 cores to start with.
> 
> 2GB might be too low when using ZFS; I would suggest 4G as a
> minimum for reasonable performance, or even 8G.  ZFS is quite
> memory hungry.

2GB should not be too low; I comfortably run ZFS in 1G.  ZFS is a
"free memory hog": by design it uses all the memory it can.  Unfortunately
the "free" aspect is often overlooked and ZFS does not return memory when
it should, leading to OOM kills; those are bugs and need to be fixed.

If you are going to run ZFS at all, I strongly suggest overriding
the ARC size with vfs.zfs.arc_max= in /boot/loader.conf, setting it to
something more reasonable than the default of ~95% of host memory.

For a DOM0 I would start at 50% of memory (so 1G in this case),
monitor the DOM0 internally with top, and slowly increase this limit
until free memory drops to the 256MB region.  If the workload
on the DOM0 changes dramatically you may need to readjust.
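
As a concrete sketch (assuming the 1G starting point above; the value
is a byte count), the /boot/loader.conf line would look like:

vfs.zfs.arc_max="1073741824"

After a reboot you can verify the cap and watch the live ARC size from
inside the DOM0 with:

sysctl vfs.zfs.arc_max
sysctl kstat.zfs.misc.arcstats.size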

> 
> > The root filesystem is zfs with a mirror between the two disks.
> > 
> > The entire thing is dead easy to blow away and re-install as I was very
> > impressed how easy the FreeBSD automatic installer was to understand and
> > pick up, so I have it all scripted. If I need to blow stuff away to test, no
> > problem and I can always get back to a known configuration.
> > 
> > 
> > As I only have two disks, I have created a zfs volume for the Xen domU thus:
> > 
> > zfs create -V40G -o volmode=dev zroot/nereid0
> > 
> > 
> > The domU nereid is defined thus:
> > 
> > cat - << EOI > /export/vm/nereid.cfg
> > builder = "hvm"
> > name = "nereid"
> > memory = 2048
> > vcpus = 1
> > vif = [ 'mac=00:16:3E:11:11:51,bridge=bridge0',
> >         'mac=00:16:3E:11:11:52,bridge=bridge1',
> >         'mac=00:16:3E:11:11:53,bridge=bridge2' ]
> > disk = [ '/dev/zvol/zroot/nereid0,raw,hda,rw' ]
> > vnc = 1
> > vnclisten = "0.0.0.0"
> > serial = "pty"
> > EOI
> > 
> > nereid itself also auto-installs; it's a Solaris 11.3 instance.
> > 
> > 
> > As it tries to install, I get this in the dom0:
> > 
> > Feb  8 18:57:16 bianca.swangage.co.uk kernel: (ada1:ahcich1:0:0:0):
> > WRITE_FPDMA_QUEUED. ACB: 61 18 a0 ef 88 40 46 00 00 00 00 00
> > Feb  8 18:57:16 bianca.swangage.co.uk last message repeated 4 times
> > Feb  8 18:57:16 bianca.swangage.co.uk kernel: (ada1:ahcich1:0:0:0): CAM
> > status: CCB request was invalid
> 
> That's weird, and I would say it's not related to ZFS; the same could
> likely happen with UFS, since this is an error message from the
> disk controller hardware.

CCB invalid; that's not good: we sent a command to the drive/controller
that it does not like.
This drive may need to be quirked in some way, or there may be
some kind of hardware issue here.
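
As a first step in digging (a suggestion, not a diagnosis), it may be
worth looking at what the drive and its tagged-queueing setup report,
e.g.:

camcontrol identify ada1
camcontrol tags ada1 -v

and comparing that against the ada1/ahcich1 attach messages in dmesg.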

> 
> Can you test whether the same happens _without_ Xen running?
> 
> Ie: booting FreeBSD without Xen and then doing some kind of disk
> stress test, like fio [0].
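
For reference, a minimal fio run against the zvol might look something
like the following (the names and sizes are placeholders; note that this
overwrites the zvol, so run it before installing the domU, or point
--filename at a scratch file instead):

fio --name=ztest --filename=/dev/zvol/zroot/nereid0 \
    --ioengine=posixaio --rw=randwrite --bs=4k --size=4g \
    --runtime=60 --time_based
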
> 
> Thanks, Roger.
> 
> [0] https://svnweb.freebsd.org/ports/head/benchmarks/fio/

-- 
Rod Grimes                                                 rgrimes at freebsd.org

