compiling on nfs directories
Rick Macklem
rmacklem at uoguelph.ca
Wed Dec 17 23:07:36 UTC 2014
Russell L. Carter wrote:
>
>
> On 12/16/14 20:39, Russell L. Carter wrote:
> >
> >
> > On 12/16/14 11:37, John Baldwin wrote:
> >> On Monday, December 15, 2014 3:59:29 pm Rick Macklem wrote:
> >
> > [...]
> >
> >>> What I suspect might cause this is one of two things:
> >>> 1 - The modify time of the file is now changing at a time the Linux
> >>>     client doesn't expect, due to changes in ZFS or maybe TOD clock
> >>>     resolution. (At one time, the TOD clock was only at a resolution
> >>>     of 1sec, so the client wouldn't see the modify time change often.
> >>>     I think it is now at a much higher resolution, but would have to
> >>>     look at the code/test to be sure.)
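(To illustrate the point above, here is a minimal C sketch, untested and
with a made-up path, that writes a file twice in quick succession and
prints the mtime it sees each time.  With 1-second timestamps the second
write can leave the reported mtime unchanged, so a client that compares
mtimes would not notice it.)

/* mtime_res.c - rough check of the mtime granularity a client sees.
 * The path below is hypothetical; point it at a file on the mount or
 * file system you want to test.
 */
#include <sys/stat.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

static void
show_mtime(const char *path)
{
    struct stat sb;

    if (stat(path, &sb) == -1) {
        perror("stat");
        return;
    }
    printf("%s mtime = %lld.%09ld\n", path,
        (long long)sb.st_mtim.tv_sec, sb.st_mtim.tv_nsec);
}

int
main(void)
{
    const char *path = "/mnt/nfs/testfile";    /* hypothetical path */
    int fd;

    if ((fd = open(path, O_CREAT | O_WRONLY | O_TRUNC, 0644)) == -1) {
        perror("open");
        return (1);
    }
    (void)write(fd, "a", 1);
    show_mtime(path);
    usleep(100000);        /* 0.1s, well under one second */
    (void)write(fd, "b", 1);
    show_mtime(path);
    close(fd);
    return (0);
}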
> >>
> >> No, it's still only a second resolution on FreeBSD by default.  You can
> >> make this precise on the NFS server by setting the vfs.timestamp_precision
> >> sysctl to 3.  We should probably be using that by default for at least
> >> server-class systems.
> >>
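(For anyone wanting to try it: the straightforward way is just
"sysctl vfs.timestamp_precision=3" as root on the server, with a line in
/etc/sysctl.conf if it should survive a reboot.  Below is a rough C
sketch of the same thing via sysctlbyname(3), only to illustrate that
the knob is a plain integer sysctl.)

/* set_tsprec.c - read vfs.timestamp_precision and set it to 3.
 * Must run as root on the NFS server for the set to succeed.
 */
#include <sys/types.h>
#include <sys/sysctl.h>
#include <stdio.h>

int
main(void)
{
    int oldval, newval = 3;    /* 3 => sec + nsec, max. precision */
    size_t oldlen = sizeof(oldval);

    if (sysctlbyname("vfs.timestamp_precision", &oldval, &oldlen,
        &newval, sizeof(newval)) == -1) {
        perror("sysctlbyname");
        return (1);
    }
    printf("vfs.timestamp_precision: %d -> %d\n", oldval, newval);
    return (0);
}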
> >
> > Hmm, what's this? Let's see:
> >
> > rcarter at feyerabend> uname -a
> > FreeBSD feyerabend.n1.pinyon.org 10.1-STABLE FreeBSD 10.1-STABLE #1
> > r275516+3a52b5f(stable-jhb-em): Sat Dec 6 10:37:16 MST 2014
> > toor at feyerabend.n1.pinyon.org:/usr/obj/usr/src/sys/RLCGSV amd64
> > rcarter at feyerabend> man -k vfs.timestamp_precision
> > vfs.timestamp_precision: nothing appropriate
> > rcarter at feyerabend> sysctl -d vfs.timestamp_precision
> > vfs.timestamp_precision: File timestamp precision (0: seconds,
> > 1: sec + ns accurate to 1/HZ, 2: sec + ns truncated to ms,
> > 3+: sec + ns (max. precision))
> > rcarter at feyerabend> sysctl vfs.timestamp_precision
> > vfs.timestamp_precision: 0
> >
> > Ah, that's *VERY* interesting.  I am unfortunately leaving the
> > physical vicinity of my server farm soon, so not the right time for
> > experiments.  But I have been whining for some time now about what
> > looks to be very similar to gerrit.kuehn's symptoms.  I see them on
> > installworlds via NFS v4.1, on -current or stable/10-trunk.  About 9
> > out of 10 installs fail trying to rebuild parts of the tree.  I
> > finally resorted to copying /usr/obj* around and then just mounting
> > /usr/src via NFS.  ick.  Oh, and also buildworld/buildkernel -j1.  A
> > pity on a cluster where 8 cores/system are the norm.  But now I have
> > something sensible to try.  Looking forward to it.
>
> After figuring out a way to test this reversibly, I tried the following:
>
> server & client vfs.timestamp_precision=3, make -j12 buildworld/kernel,
> and make installworld -j1 on the client => fail, in /usr/src/sys/boot
>
> server & client vfs.timestamp_precision=0, make -j1 build/install, succeeds.
>
> Worth a shot anyway...
>
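(While testing, a small C sketch like the one below might help show what
make is up against: it compares the mtimes of two files at full timespec
resolution, roughly the "is the target newer than the source" check.
The file arguments are whatever you want to compare, e.g. a source file
and the object just built from it on the NFS mount.)

/* cmp_mtime.c - print and compare the mtimes of two files, including
 * the nanosecond field, as seen on whatever file system they live on.
 */
#include <sys/stat.h>
#include <stdio.h>

int
main(int argc, char **argv)
{
    struct stat a, b;

    if (argc != 3) {
        fprintf(stderr, "usage: cmp_mtime file1 file2\n");
        return (1);
    }
    if (stat(argv[1], &a) == -1 || stat(argv[2], &b) == -1) {
        perror("stat");
        return (1);
    }
    printf("%s: %lld.%09ld\n", argv[1],
        (long long)a.st_mtim.tv_sec, a.st_mtim.tv_nsec);
    printf("%s: %lld.%09ld\n", argv[2],
        (long long)b.st_mtim.tv_sec, b.st_mtim.tv_nsec);
    if (a.st_mtim.tv_sec > b.st_mtim.tv_sec ||
        (a.st_mtim.tv_sec == b.st_mtim.tv_sec &&
        a.st_mtim.tv_nsec > b.st_mtim.tv_nsec))
        printf("%s is newer than %s\n", argv[1], argv[2]);
    else
        printf("%s is not newer than %s\n", argv[1], argv[2]);
    return (0);
}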
If this is using an exported ZFS volume, it would be nice if you
could do the same test using an exported UFS file system, to see if
this is ZFS related.
rick
> Cheers,
> Russell
>
> > Happy holidays, and cheers!
> > Russell