Change default VFS timestamp precision?

John Baldwin jhb at freebsd.org
Thu Dec 18 19:49:33 UTC 2014


On Wednesday, December 17, 2014 2:09:14 pm Poul-Henning Kamp wrote:
> --------
> In message <CAJ-Vmokkc-p4-keMExxT+wyjugA8zYRS2XRv6VucWnfH0iw_Pw at mail.gmail.com>
> , Adrian Chadd writes:
> 
> >> I think it is over 10 years ago that make(1) first started seeing
> >> identical timestamps on files which weren't actually modified at the
> >> same time.
> >>
> >> In most Makefiles this doesn't matter, but there are cases where it
> >> does, in particular in less integrated families of makefiles than our
> >> own.
> >
> >Surely there have to be better ways of doing this stuff. Computers keep
> >getting faster; it wouldn't be out of the realm of possibility that we
> >could see a compiler read, compile and spit out a .o inside of a
> >millisecond. (Obviously not C++, but..)
> 
> A millisecond is pushing it; all things considered, it would have to
> be an utterly trivial source file for an utterly trivial language.
> 
> Given that it has epsilon cost, switching to TSP_HZ should be a
> no-brainer; I've been running that for ages.
> 
> Why TSP_USEC exists is beyond me; it's slower and worse than TSP_NSEC.
> 
> But going to TSP_NSEC by default seems unwarranted to me.

Eh, the use case I most care about is back-to-back updates to a directory on
an NFS server.  Those can certainly occur less than a millisecond apart
(e.g. rm foo* in a directory results in multiple unlink() calls, each of
which updates the mtime of the directory).
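
To make that concrete, here is a rough userland sketch (file names and error
handling are invented/omitted for illustration) that performs two unlink()
calls back to back and compares the directory's mtime in between; with
coarse precision both updates can land on the same timestamp even though the
directory changed twice:

	#include <sys/stat.h>
	#include <stdint.h>
	#include <stdio.h>
	#include <unistd.h>

	int
	main(void)
	{
		struct stat sb1, sb2;

		/* Assume ./dir already contains files "a" and "b". */
		unlink("dir/a");
		stat("dir", &sb1);
		unlink("dir/b");
		stat("dir", &sb2);

		/*
		 * With TSP_SEC or TSP_HZ the two mtimes are frequently
		 * identical, so an NFS client that validates its cache
		 * against the directory's mtime can miss the second change.
		 */
		printf("mtime1 = %jd.%09ld\nmtime2 = %jd.%09ld\n",
		    (intmax_t)sb1.st_mtim.tv_sec, sb1.st_mtim.tv_nsec,
		    (intmax_t)sb2.st_mtim.tv_sec, sb2.st_mtim.tv_nsec);
		return (0);
	}
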
I don't understand why you think TSP_USEC is slower than TSP_NSEC:
microtime() and nanotime() both just call bintime() and then convert the
result using essentially the same math.
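
For reference, both paths look roughly like this (paraphrased from
sys/kern/kern_tc.c and sys/sys/time.h; exact details vary by version):

	void
	microtime(struct timeval *tvp)
	{
		struct bintime bt;

		bintime(&bt);			/* one timecounter read */
		bintime2timeval(&bt, tvp);	/* scale the fraction to usec */
	}

	void
	nanotime(struct timespec *tsp)
	{
		struct bintime bt;

		bintime(&bt);			/* same timecounter read */
		bintime2timespec(&bt, tsp);	/* scale the fraction to nsec */
	}

	/*
	 * Both conversions boil down to a multiply and a shift on the
	 * 64-bit fraction, roughly:
	 *   tv_usec = (1000000    * (uint32_t)(bt.frac >> 32)) >> 32;
	 *   tv_nsec = (1000000000 * (uint32_t)(bt.frac >> 32)) >> 32;
	 * so neither has a meaningful cost advantage.
	 */
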
However, I think I buy Jilles' argument that TSP_USEC is likely to give more
stable results (i.e. a non-decreasing mtime) when back-to-back updates are
performed on different CPUs (assuming some amount of TSC jitter, for
example).
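
As a contrived illustration of that point (the numbers below are invented,
not measured), a few hundred nanoseconds of cross-CPU skew can make raw
nanosecond timestamps step backwards, while truncating to microseconds
hides the skew:

	#include <stdio.h>

	int
	main(void)
	{
		/* Hypothetical: two back-to-back updates on different CPUs. */
		long first_nsec = 1900;		/* update seen on CPU 0 */
		long second_nsec = 1400;	/* update on CPU 1, skewed back */

		/* TSP_NSEC stores the raw values: mtime appears to go backwards. */
		printf("nsec: %ld -> %ld (%s)\n", first_nsec, second_nsec,
		    second_nsec >= first_nsec ? "ok" : "went backwards");

		/*
		 * TSP_USEC truncates both to the same microsecond, so the
		 * observed mtimes stay non-decreasing.
		 */
		printf("usec: %ld -> %ld (%s)\n", first_nsec / 1000,
		    second_nsec / 1000,
		    second_nsec / 1000 >= first_nsec / 1000 ?
		    "ok" : "went backwards");
		return (0);
	}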

-- 
John Baldwin

