svn commit: r326095 - head/usr.sbin/bsdinstall/scripts
Bruce Evans
brde at optusnet.com.au
Sat Nov 25 11:09:18 UTC 2017
On Fri, 24 Nov 2017, Ian Lepore wrote:
> On Fri, 2017-11-24 at 22:25 +1100, Bruce Evans wrote:
>> On Thu, 23 Nov 2017, Devin Teske wrote:
>> [...]
>>
>> ntpdate's man page claims this, but is wrong AFAIK. It says that the
>> functionality of ntpdate is now available in ntpd(8) using -q. However,
>> ntpd -q is far from having equivalent functionality. According to both
>> testing of the old version and its current man page, it does the same slow
>> syncing as a normal ntpd startup; it just doesn't daemonize and exits once
>> it has synced. With the old version, this step takes 35-40 seconds even
>> when starting with the time already synced to within a few microseconds
>> (by a previous invocation of ntpd), while ntpdate syncs (perhaps not so
>> well) with a server half the world away in about 1 second.
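For concreteness, the two invocations being compared are roughly the
following (the pool hostname is only a placeholder; ntpd is assumed to pick
up its servers from /etc/ntp.conf):

    # set the clock once with ntpdate, as the old rc scripts did
    time ntpdate 0.freebsd.pool.ntp.org

    # the documented replacement: step the clock once with ntpd -q,
    # which exits instead of daemonizing (-g allows a large first step)
    time ntpd -g -q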
>
> Ahh, the good ol' days, when ntpdate was fast by default. Not
> anymore...
>
> unicorn# time ntpdate ntp.hippie.lan
> 24 Nov 15:21:31 ntpdate[734]: adjust time server [...] offset -0.000123 sec
> 0.013u 0.006s 0:06.13 0.1% 192+420k 0+0io 0pf+0w
>
> If you want the fast old sub-second behavior these days, you have to
> add -p1. Or, better yet, use sntp -r <server>.
The default of -p4 hasn't changed, but its speed has. I get the following
times for ntpdate -q -pN:
- old ntpdate -p1  0.31 seconds  (my system -> US server, ping latency 180 ms)
-             -p2  0.52
-             -p3  0.83
-             -p4  0.95  (default for -p)
- new ntpdate -p1  0.37 seconds  (freefall -> same US server)
-             -p2  2.39
-             -p3  4.36
-             -p4  6.36  (default)
- old ntpdate -p8  0.10  (max for -p)  (my system -> localhost, ping latency 0.014 ms)
- new ntpdate -p8  fail  (freefall -> localhost, ping latency 0.060 ms)
- old ntpdate -p8  0.10  (my LAN -> my system, ping latency 0.120 ms)
- new ntpdate -pN  same as US server  (freefall -> FreeBSD server, ping latency 80 ms)
- old ntpdate -p8  0.24  (my system -> ISP server, ping latency 12 ms)
This shows that old ntpdate -pN takes approximately N times the ping latency.
New ntpdate takes that for N = 1; for larger N it takes almost 2 seconds more
for each increment of N. ktrace shows many sleeps of 100 msec between
sendto/recvfrom pairs.
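A sketch of how these numbers can be reproduced (the server name is just a
placeholder; ktrace.out is assumed to land in the current directory):

    # time one-shot queries with increasing sample counts
    for n in 1 2 3 4; do
        /usr/bin/time ntpdate -q -p$n 0.freebsd.pool.ntp.org
    done

    # trace the syscalls to see the delays between the sendto/recvfrom
    # pairs (the sleeps may show up as nanosleep or as select timeouts)
    ktrace ntpdate -q -p4 0.freebsd.pool.ntp.org
    kdump | egrep 'sendto|recvfrom|nanosleep|select'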
> I'm not sure where you're coming up with numbers like "35 seconds" for
> ntpd to initially step the clock. The version we're currently
> distributing in base takes the same 6-7 seconds as ntpdate (assuming
> you've used 'iburst' in ntp.conf). That's true in the normal startup
> case, or when doing ntpd -qG to mimic ntpdate.
This is for old ntpd [-q] with iburst and maxpoll 6, to the ISP server. To
the LAN server, ntpd -q takes the same 35 seconds. Normal startup with
ntpd -N (high priority) takes about the same time.
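For reference, that sort of configuration is an ntp.conf line along the
lines of (the hostname is just a placeholder):

    server ntp.myisp.example iburst maxpoll 6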
The examples in /etc/defaults/rc.conf don't give a hint about the -p flag
for ntpdate or the -N flag for ntpd. Low -p values are probably good
enough for ntpdate before ntpd.
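Something like the following in /etc/rc.conf would do (the flags are only an
illustration and should be merged with whatever one already sets; the rc
script takes the ntpdate servers from ntp.conf or ntpdate_hosts):

    ntpdate_enable="YES"
    ntpdate_flags="-b -p1"   # -b steps the clock; a low -p is enough here
    ntpd_enable="YES"
    ntpd_flags="-N"          # -N runs ntpd at high priority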
> If there is an ntpd.drift file, ntpd is essentially sync'd as soon as
> it steps.
If the drift file is correct.
I do have a correct drift file, and the above times are with it. With a
correct drift file and ntpdate before ntpd, ntpd is essentially synced
as soon as it starts :-). When calibrating the drift file manually, I verify
this by killing ntpd soon after it starts and observing the drift using
ntpdate -q.
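Roughly, that manual check looks like this (the server name and the delays
are illustrative):

    service ntpd start
    sleep 60 && service ntpd stop   # kill it soon after it has stepped
    # query the offset now and again a few hours later; how fast the
    # offset grows is the residual drift of the free-running clock
    ntpdate -q ntp.myisp.example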
> If there is not, it does a clock step, then does 300 seconds
> of frequency training during which the clock can drift pretty far off-
> time. It used to be possible to shorten the frequency training
> interval with the 'tinker stepout' command, but the ntpd folks
> decoupled that (always inappropriately overloaded) behavior between
> stepout interval and training interval. There is no longer any way to
> control the training interval at all, which IMO is a serious regression
> in ntpd (albeit noticed primarily by those of us who DO have an atomic
> clock and get a microsecond-accurate measurement of frequency drift in
> just 2 seconds).
Is there any use for ntp as a client if you have an atomic clock? Just to
validate both it and ntpd?
Bruce