Time Synchronization

The Root of All Timing: Understanding root delay and root dispersion in NTP

February 25, 2021 by Douglas Arnold 2 Comments

Five Minute Facts About Packet Timing

If you examine an NTP packet you will see the fields root delay and root dispersion. See the diagram in Figure 1 below, from RFC 5905, which defines NTP version 4, the current version.

[Figure 1: NTP packet header from RFC 5905, including the root delay and root dispersion fields]

You might ask: what is with all this “root” stuff? Root in this case refers to the root of the time distribution spanning tree. The root is a stratum 1 NTP server, usually something with a GNSS receiver to get UTC. If you are particularly fond of tree analogies, you can think of higher stratum NTP servers as branches, and clients as the leaves.

The root delay is the round-trip packet delay from a client to a stratum 1 server. This is important because it gives a crude estimate of the worst-case time transfer error between a client, or higher stratum server acting as a client, and a stratum 1 server. In fact, it is the worst-case contribution to the time transfer error due to network asymmetry: the error if all of the round-trip delay were in one direction and none in the other. Okay, some of the network delay has to be in each direction, but this is an upper bound. The root delay is also used in clock steering algorithms to identify falsetickers, that is, servers with bad time or ones that sit on the other side of a highly asymmetric network path. So, it is important.

The root dispersion tells you how much error is added due to other factors. One factor is the error introduced by a client due to the inaccuracy of its clock frequency. Using NTP, a higher stratum server can set its clock to a lower stratum server, but if its clock frequency is off, then an error is introduced.

Recall that an NTP time transfer involves four timestamps as shown in Figure 2.

[Figure 2: The four timestamps, t1 through t4, of an NTP time transfer]

From the four timestamps, the client can compute both its offset with respect to the server and the round-trip delay from the client to the server and back:

Offset = [(t2 + t3) − (t1 + t4)] / 2

Delay = (t4 − t1) − (t3 − t2)

Dispersion = DR × (t4 − t1) + timestamping errors

where DR is the drift rate of the client's clock, equal to its fractional frequency error. The timestamping errors term includes things like errors due to the finite resolution of the clock, and delays in reading the clock when fetching a timestamp. The sum of the root dispersion and half the root delay is called the root distance, and is the total worst-case timing error accumulated between the stratum 1 server and the client.
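Here is a small Python sketch of these formulas. The timestamp values, drift rate, and timestamping-error figure below are hypothetical, chosen only to illustrate the arithmetic:

   # Offset, delay, and dispersion from the four timestamps of one
   # NTP exchange. All values are in seconds.
   def ntp_sample(t1, t2, t3, t4, drift_rate=15e-6, ts_error=1e-6):
       offset = ((t2 + t3) - (t1 + t4)) / 2  # server clock minus client clock
       delay = (t4 - t1) - (t3 - t2)         # round trip, minus server hold time
       dispersion = drift_rate * (t4 - t1) + ts_error
       return offset, delay, dispersion

   # A hypothetical exchange with a ~31 ms round trip:
   offset, delay, disp = ntp_sample(100.000, 100.0152, 100.0153, 100.031)
   root_distance = disp + delay / 2          # worst-case error vs. this server
   print(offset, delay, disp, root_distance)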

When a client gets time from a higher stratum server, the root delay is the sum of the delays from all of the client-server pairs along the path to the root of the timing spanning tree. In the case when a client is getting time from multiple servers, the best root delay is used. This applies to root dispersion as well. An example is shown in Figure 3.

[Figure 3: Example of root delay and root dispersion accumulating across multiple strata]

Okay, so now you are ready to amaze your friends with your mastery of NTP root delay and root dispersion.

If you have any questions about packet timing, don’t hesitate to send me an email at doug.arnold@meinberg-usa.com, or visit our website at www.meinbergglobal.com.

Comments

May 30, 2023 at 9:44 am

Why do root servers (stratum 1) have zero dispersion? While a refclock probably has no dispersion by definition, the clock following the refclock is not perfect. Similarly for root delay: even when the refclock is the “perfect time”, doesn’t the stratum-1 server have to communicate with the refclock, introducing some non-constant delays?


June 1, 2023 at 9:28 pm

You are correct that real implementations of stratum 1 servers are not perfect. However, the definitions of root delay and root dispersion in the RFCs that define versions of NTP do not include any imperfections in stratum 1 servers. In practice, it is nearly always true that the limitations of the server are negligible compared to the delay and dispersion contributions from the network.


How NTP works

Figure 5 shows the basic workflow of NTP. Device A and Device B are connected over a network. They have their own independent system clocks, which need to be automatically synchronized through NTP. Assume that:

Prior to system clock synchronization between Device A and Device B, the clock of Device A is set to 10:00:00 am while that of Device B is set to 11:00:00 am.

Device B is used as the NTP time server, so Device A synchronizes to Device B.

It takes 1 second for an NTP message to travel from one device to the other.

Figure 5: Basic workflow of NTP


The synchronization process is as follows:

Device A sends Device B an NTP message, which is timestamped when it leaves Device A. The timestamp is 10:00:00 am (T1).

When this NTP message arrives at Device B, it is timestamped by Device B. The timestamp is 11:00:01 am (T2).

When the NTP message leaves Device B, Device B timestamps it. The timestamp is 11:00:02 am (T3).

When Device A receives the NTP message, the local time of Device A is 10:00:03 am (T4).

Up to now, Device A can calculate the following parameters based on the timestamps:

The round-trip delay of the NTP message: Delay = (T4 − T1) − (T3 − T2) = 2 seconds.

Time difference between Device A and Device B: Offset = ((T2 − T1) + (T3 − T4))/2 = 1 hour.

Based on these parameters, Device A can synchronize its own clock to the clock of Device B.
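To make the arithmetic concrete, here is the same calculation as a quick Python check, with the timestamps converted to seconds since midnight on each device's own clock:

   T1 = 10*3600 + 0       # 10:00:00 am, read from Device A's clock
   T2 = 11*3600 + 1       # 11:00:01 am, read from Device B's clock
   T3 = 11*3600 + 2       # 11:00:02 am, read from Device B's clock
   T4 = 10*3600 + 3       # 10:00:03 am, read from Device A's clock

   delay = (T4 - T1) - (T3 - T2)
   offset = ((T2 - T1) + (T3 - T4)) / 2
   print(delay)           # 2 (seconds)
   print(offset / 3600)   # 1.0 (hour; Device B is ahead of Device A)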

This is a rough description of how NTP works. For more information, see RFC 1305.


NTP round-trip delay

Dec 25, 2017

We all know that the delay between an NTP server and an NTP client influences the precision of the time. There is a lot of information about this on the Internet, which I already knew, but I was surprised by how dramatic the effect could be.

I have been running my own rubidium-disciplined NTP server since this summer, and I try to adjust this stratum-1 server very accurately. Besides the rubidium NTP server I also have a GPS-based and a DCF77-based time source as references. To adjust these servers I need a reference from the Internet. Located near Vienna in Austria, I looked for stratum-1 servers with low RTT. My decision was to take the servers from the Bundesamt für Eich- und Vermessungswesen (BEV) in Vienna. BEV is also the official time source for Austria.

With a rubidium-disciplined NTP server it’s actually possible to adjust the server's offset to “real time” to within a millisecond or so. I realized that the reference couldn’t be adjusted so precisely: the offset was sometimes higher and sometimes lower. So I looked at the GPS-based NTP server and found the same behavior.

Below you can see the offset between November 19th and 24th. The graph shows an average value of all 3 NTP servers at BEV (178.189.127.147, 178.189.127.148 and 178.189.127.147). In the image it is listed as 178.189.127.14

[Graph: clock offset over the six days (offset_6days.png)]

We can clearly see that during the first 3 days the offset is about zero, and then it is much higher (with a negative value).

Averaging over the first 3 days gives −5.8 microseconds.

[Graph: average offset, first three days (offset_before.png)]

Doing the same for the next 3 days gives −37.1 microseconds, so the difference is more than 31 microseconds.

[Graph: average offset, last three days (offset_after.png)]

As I saw the same phenomenon on the rubidium server, I was sure it was not my hardware. But I couldn’t believe it was a problem at BEV; after all, it is the official time source for Austria. But between me and BEV there is a third component: the network.

Looking at other statistics, I quickly found the issue. The round-trip delay had changed slightly.

[Graph: round-trip delay over the six days (roundtripdelay_6days.png)]

During the first 3 days of observation it was 4.226 milliseconds,

[Graph: round-trip delay, first three days (roundtripdelay_before.png)]

and it changed to 4.143 milliseconds during the next days, which is only 83 microseconds less.

[Graph: round-trip delay, last three days (roundtripdelay_after.png)]

Very obviously it’s not only the round-trip time. I assume that the delay difference between sending and receiving packets (the asymmetry) also changed: a perfectly symmetric change in the round-trip delay would not shift the measured offset at all, so the 31-microsecond shift in offset implies the asymmetry itself changed. See also my experiments with ADSL connectivity, NTP and ADSL and NTP and ADSL p2.

I have to accept that although I have a highly precise time server, I never know exactly how close I am to the real time.


How well will ntpd work when the latency is highly variable?

I have an application where we are using some non-standard networking equipment (cannot be changed) that goes into a dormant state between traffic bursts. The network latency is very high for the first packet since it's essentially waking the system, waiting for it to reconnect, and then making the first round-trip. Subsequent messages (provided they are within the next minute or so) are much faster, but still highly-latent. A typical set of pings will look like 2500ms, 900ms, 880ms, 885ms, 900ms, 890ms, etc.

Given that NTP uses several round trips before computing the offset, how well can I expect ntpd to work over this kind of link? Will the initially slow first round trip be ignored based on the much different (and faster) following messages to/from the ntp server?

— John Gardeniers

The short answer is yes: NTP will prefer measurements with low round-trip times over those with high round-trip times. There used to be a calldelay option to tell NTP about this problem, which was typically created by networks using dial-on-demand technologies that impose a call delay. However, NTP now handles this automatically.

If you want to speed up the initial time sync, you may wish to use the iburst keyword on the server / peer lines. This tells NTP to use a burst of 8 spaced-out packets instead of 1 to measure the time, and it will ignore any packets that have much higher round-trip times. If you have the bandwidth, you can use burst to make it send 8 packets every time it measures the time.
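For reference, the relevant ntp.conf lines might look roughly like this (a sketch; the server name is a placeholder):

   # /etc/ntp.conf -- illustrative fragment
   # iburst: send a burst of eight packets at startup for faster initial sync.
   server ntp.example.com iburst

   # burst: additionally send a burst at every poll; only use this against
   # servers you are permitted to load this heavily.
   # server ntp.example.com iburst burst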

— David Schwartz


Combining PTP with NTP to get the best of both worlds

There are two supported protocols in Red Hat Enterprise Linux for synchronization of computer clocks over a network. The older and more well-known protocol is the Network Time Protocol (NTP). In its fourth version, NTP is defined by IETF in RFC 5905 . The newer protocol is the Precision Time Protocol (PTP), which is defined in the IEEE 1588-2008 standard.

The reference implementation of NTP is provided in the ntp package. Starting with Red Hat Enterprise Linux 7.0 (and now in Red Hat Enterprise Linux 6.8) a more versatile NTP implementation is also provided via the chrony package, which can usually synchronize the clock with better accuracy and has other advantages over the reference implementation. PTP is implemented in the linuxptp package.

With two different protocols designed for synchronization of clocks, there is an obvious question as to which one is better. PTP was designed for local networks with broadcast/multicast transmission and, in ideal conditions, the system clock can be synchronized with sub-microsecond accuracy to the reference time. NTP was primarily designed for synchronization over the Internet using unicast, where it can usually achieve accuracy in the single-digit millisecond range. As currently implemented in chrony and ntp, in local networks (again, in ideal conditions) the accuracy can get to within tens of microseconds. However, accuracy in ideal conditions isn't the only thing that matters. There are usually other criteria that need to be considered, like resiliency, security, and cost. As will be explained later, each protocol has some advantages over the other, and the best choice may actually be to use them both at the same time in order to combine their respective advantages.

The basic principles of the two protocols are the same. Computers or other devices that have a clock are connected in a network and form a hierarchy of time sources in which time is distributed from top to bottom. The devices on top are normally synchronized to a reference time source (e.g. a timing signal from a GPS receiver). Devices "below" periodically exchange timestamps with their time sources in order to measure the offset of their clocks. The clocks are continuously adjusted to correct for random variations in their rate (due to effects like thermal changes) and to minimize the observed offset.

[Figure: hierarchy of time sources distributing time from reference clocks down to clients]

In NTP, one level of the hierarchy is called stratum. The devices on top are stratum 1 servers, below them are stratum 2 clients, which are servers to stratum 3 clients, and so on. In PTP there are slaves, which are synchronized to their masters. Each communication path has one master and its slaves can be masters on other communication paths. The master on top is called grandmaster (GM). A device that has ports in two or more communication paths (i.e. it can be a slave and also master of other slaves at the same time) is a boundary clock (BC). Clocks with one port are ordinary clocks (OC). The group of all clocks that are directly or indirectly synchronized to each other using the protocol is called a PTP domain.


Source Selection

The first important difference between NTP and PTP is in how time sources are selected when multiple paths to the top of the hierarchy are available. In NTP, the selection happens on the client side. An NTP client receives timestamps from all of its possible sources and it's up to the client to decide which sources are acceptable for synchronization. The servers just tell the client what time they think it is and its maximum expected error. The client checks whether the measurements overlap and a majority of the sources agree on what time it is. Sources that are off are rejected as serving false time (falsetickers). From the remaining sources, those with the best statistics and the shortest paths to the reference time sources on stratum 1 servers are selected. The measurements are combined and the result is used for the adjustment of the local clock. This source selection process makes the protocol very resilient against failures. The client will stay synchronized for as long as it has enough good sources to outvote the falsetickers. With three sources, it can detect one falseticker, with five sources it can detect two falsetickers, and so on.
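To illustrate the voting idea, here is a simplified Python sketch. It is not the actual selection algorithm from RFC 5905, which intersects error intervals with considerably more bookkeeping; it only shows how a majority of overlapping estimates can outvote a falseticker:

   # Each source reports (name, offset, max_error): a claim that the true
   # offset lies in the interval [offset - max_error, offset + max_error].
   def select_truechimers(sources):
       best_group = []
       for _, offset, err in sources:
           lo, hi = offset - err, offset + err
           # All sources whose intervals overlap this candidate's interval.
           group = [s for s in sources
                    if s[1] - s[2] <= hi and s[1] + s[2] >= lo]
           if len(group) > len(best_group):
               best_group = group
       return best_group

   servers = [("a", 0.002, 0.005), ("b", -0.001, 0.004),
              ("c", 0.350, 0.010),  # falseticker: disagrees with the rest
              ("d", 0.003, 0.006)]
   print([name for name, _, _ in select_truechimers(servers)])  # ['a', 'b', 'd']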

In PTP, each slave has only one master and there is only one grandmaster in a PTP domain. Masters on the communication paths are selected by the Best Master Clock (BMC) algorithm, which runs independently on each clock in the domain. There is no negotiation between the clocks. The current master announces the attributes of its grandmaster (class, accuracy, priority, etc.) to other clocks on the path, and if there is a clock with a better grandmaster, or one closer to the same grandmaster in the network topology, it will switch to the master state and the previous master will stop when it sees it's no longer the best master on the communication path. The selection may take several iterations before it stabilizes, but ultimately there will be just one grandmaster and all slaves will be synchronized with it through masters on the shortest paths.

When a network link or master fails, another clock on a path to the grandmaster can take over. When the grandmaster fails, another clock with a reference time source can become the grandmaster. There are optional mechanisms in PTP that allow fast reselection. But the assumption here is that a failing clock fails completely, or at least is able to detect its failure and stop working. There is no resiliency against other failures. When the synchronization of a (grand)master fails or degrades for some reason (e.g. its clock becomes unstable, or a network link becomes congested or asymmetric), the master will still be the clock with the best attributes on the communication path, and all clocks synchronized to it will fail with it. A single failure can disrupt synchronization in the whole PTP domain, even if there are redundant network paths and multiple clocks with a reference time source.

In order to have resiliency against any kind of failure, it's necessary to run multiple PTP domains on separate networks, where clocks at the bottom of the hierarchy are connected to all of them and implement some logic for their selection. Ideally, PTP devices in different domains should be from different vendors to avoid simultaneous failures (e.g. due to bugs in handling of rare events like GPS week rollover, leap seconds, etc.).

Synchronization in NTP

NTP has three synchronization modes: a client/server mode, a symmetric mode between two peers, and a broadcast mode. The modes differ in how they measure the offset and network delay.

The most commonly used mode is the client/server mode. In this mode, the client periodically sends a request to the server using a client mode packet and the server responds immediately with a server mode packet. From each exchange the client gets four timestamps: time when the request was sent (t1), time when it was received by the server (t2), time when the response (from the server) was sent (t3) and time when it was received back at the client (t4). Note that t1 and t4 are measured by the client's clock and t2 and t3 are measured by the server's clock. The offset of the client's clock is the difference between the midpoints of intervals [t1, t4] and [t2, t3]. The delay is the round-trip time not including the server's processing time (i.e. the length of the local interval [t1, t4] without remote interval [t2, t3]).
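In symbols, with the four timestamps as defined above (the same formulas given earlier):

   offset = ((t2 − t1) + (t3 − t4)) / 2
   delay  = (t4 − t1) − (t3 − t2)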

[Figure: timestamp exchange in the NTP client/server mode]

The assumption here is that the delays were identical in both directions. If not, the measured offset will have an error equal to half of the difference in the delays. As the client has both offset and delay for each measurement, it can throw away measurements that have an unusually large delay, assuming the extra delay was asymmetric (which it usually is) and the measured offset has a large error.
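A minimal sketch of such delay-based filtering in Python (the threshold is illustrative; this is not the actual filter used by chrony or ntp):

   # Keep only samples whose delay is close to the minimum observed delay;
   # extra delay is usually asymmetric and corrupts the measured offset.
   def filter_samples(samples, factor=1.5):
       # samples: list of (offset, delay) pairs from recent exchanges
       min_delay = min(delay for _, delay in samples)
       return [(o, d) for o, d in samples if d <= factor * min_delay]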

The symmetric mode is similar to the client/server mode, except it allows synchronization in both directions. It's typically used between NTP servers operating at the same stratum as a backup in case one of them loses its upstream synchronization. A symmetric mode packet is basically a client request and server response at the same time. Normally, peers don't respond immediately, but they both send packets in their own interval. Similarly to the client/server mode, after each received packet each peer has four timestamps from which it can calculate the offset and delay. The only difference is that the [t2, t3] interval may be significantly longer.

[Figure: timestamp exchange in the NTP symmetric mode]

The broadcast mode is very different from the client/server and symmetric modes. It is fully supported in ntp, but chrony supports it only as a broadcast server, not as a client. The main purpose of the broadcast mode is to simplify configuration of clients in very large networks. Instead of configuring each client with a list of NTP servers (or distributing the list via DHCP for instance), all clients can use the same configuration file, which just enables reception of broadcast or multicast NTP packets.

A broadcast server periodically sends a broadcast mode packet to an IP broadcast or multicast address. Its clients normally don't send any requests. After each received packet they have only two timestamps: the remote time when the packet was sent by the server (t3) and the local time when the packet was received (t4). That's not enough to calculate both offset and delay. The delay is measured independently using client mode packets first, and the offset can then be calculated by subtracting half of the delay from the difference between t4 and t3.
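In other words, once the delay has been measured by the initial client-mode exchange:

   offset = (t4 − t3) − delay / 2

where the result is the client's offset relative to the server.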

[Figure: timestamp exchange in the NTP broadcast mode]

The assumption here is that the delay doesn't change over time. If it does change, and it almost always does at least a bit due to variations in delays in network switches and routers, the error in the measured offset is equal to the change in the delay. That's twice as much as the error in the client/server mode due to asymmetry in the delay. As the client doesn't know the delay for each measurement, it can't easily discard those that were significantly delayed and have a large error in the offset. This means the broadcast mode is less accurate (and less secure even if authentication is enabled) than the other modes and should generally be avoided.

Synchronization in PTP

The synchronization of clocks in PTP is similar to the NTP broadcast mode. Typically, all clocks on the communication path send messages to the same IP or layer 2 (e.g. Ethernet) multicast address. PTP also supports unicast messaging, but it doesn't add any new message types or change anything in how the offset and delay are measured. It just changes the addressing of messages, which can be useful in larger networks to reduce the amount of PTP traffic. Note that linuxptp currently does not support unicast transmissions.

A PTP master periodically sends sync messages, which are received by its slaves. This gives them two timestamps, remote time when the message was sent and local time when the message was received. As in the NTP broadcast mode, that's not enough to calculate the offset of the clock. The network delay between the master and slave has to be measured first.

There are two mechanisms by which it can be measured: end-to-end (E2E) using delay request/response and peer-to-peer (P2P) using peer delay request/response. With the E2E mechanism the slave sends a delay request and the master immediately responds with the timestamp of when it received the request. This gives the slave the two missing timestamps, which allow it to calculate the delay exactly as in the NTP client/server mode. With the P2P mechanism, both request and response are timestamped, but the slave generally doesn't know the remote timestamps exactly, only their difference. This is sufficient to calculate the delay directly without using the timestamps from the sync message. When the slave knows the delay, it can calculate the offset with each sync message.

Unlike in the NTP broadcast mode, where clients normally measure the delay only once on start, PTP slaves measure the delay periodically. The rate is controlled by the master and it's normally a fraction of the rate of sync messages (i.e. slaves usually have more timestamps from the sync messages than delay response messages). The standard PTP approach is to calculate the offset and delay independently. Alternatively, with the E2E delay mechanism it's possible to use the four most recent timestamps (two for sync message and two for delay response) and calculate the offset and delay at the same time as in the NTP client/server mode. This allows more effective filtering, but reduces the number of samples. Which of the two works better depends on the stability of the clock and the amount of jitter in the measurements.

Accuracy of Synchronization

The accuracy of the NTP and PTP synchronization ultimately depends on the accuracy of the measured offset of the clock against its source. This error accumulates in the synchronization hierarchy. The clients and slaves don't know how accurate their clocks really are; they just try to minimize the observed offset by adjusting the rate of their clocks. The error has a variable component, which can be reduced with multiple measurements by filtering and averaging, and a constant component, which generally can't be detected. If the error is stable and the clock is also stable, the offset can be reduced to very small values, but that doesn't necessarily mean the clock is also accurate. This is very important when looking at the offsets reported in the ptp4l, chronyd, or ntpd logs, or values printed by the pmc, chronyc, or ntpq programs. Small offsets generally indicate low network jitter and a stable clock, but they don't say much about accuracy, as there may still be a large constant error.

Asymmetry in Network Delay

One source of the error is asymmetry in the network delay. The calculation of the offset assumes the delay in the two directions is exactly the same, but that's rarely the case. For instance, if packets sent from A to B take 200 microseconds and packets sent from B to A only 100 microseconds, the measured offset will have an error of 50 microseconds. If A can keep its offset close to zero, it will actually be running 50 microseconds ahead of B.
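Written out for this example, the error is half the difference between the one-way delays:

   error = (d_AB − d_BA) / 2 = (200 µs − 100 µs) / 2 = 50 µs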

[Figure: offset error due to asymmetric network delay]

The asymmetry has multiple sources. It may be in the physical or link layer, the packets may go over different network paths (e.g. due to asymmetric routing), and there may be different processing and queueing delays in switches and routers. There is nothing the clients/slaves can do to measure this error. It can be corrected only if it's measured externally by other means (e.g. with a reference time source connected directly to the client/slave). In PTP there is an option called delayAsymmetry intended for this correction.

Fortunately, this error has an upper bound. The clients/slaves don't know the asymmetry between the delays, but they do know the round-trip delay. If the packet was received instantly after it was sent in one direction and the other direction took the whole delay, the error in the offset would be equal to half of the round-trip delay. This is the maximum error due to asymmetry in the network delay between two devices. In order to determine the maximum error relative to the reference time source, it's necessary to know the accumulated round-trip delay over all levels of the hierarchy to the reference time source.

In NTP this value is called root delay. It's exchanged in NTP packets together with root dispersion, which estimates the maximum error of the synchronization itself, taking into account stability of the clocks on the path to the reference time source. Both values can be monitored using the chronyc or ntpq programs. In PTP this information is not exchanged, which means only slaves of the grandmaster can estimate their maximum error and it's also less reliable due to the limitations of the broadcast synchronization as the delay is normally calculated independently from the offset.

The asymmetry in the network delay due to switches and routers can be corrected if they support a special correction field in PTP messages. A PTP-aware switch or router that supports this correction is called a transparent clock (TC). When it's forwarding a PTP message which includes this field, the difference between the time of reception and time of transmission is added to the value in the field. The slave includes these corrections in the calculation of the delay and offset, which improves the accuracy significantly. In an ideal case, the error drops to zero as if the slave and master were connected directly. NTP doesn't have anything like that. In order to avoid this error all NTP devices would have to be connected directly.

Timestamping Errors

Another source of the error in the offset is inaccuracy of the timestamping itself (i.e. the transmit or receive timestamp doesn't correspond exactly to the time when the packet was actually sent or received by the NIC).

[Figure: inaccuracy of transmit and receive timestamping]

On a Linux machine, there are basically three different places where the timestamps can be generated:

  • In user space (i.e. the NTP/PTP daemon), typically before making a send() system call and after a select() or poll() system call.
  • In the kernel, before the packet is copied to the NIC ring buffer and when the NIC issues an interrupt after receiving a packet. This is called software timestamping.
  • In the NIC itself, when the packet enters and leaves the link or physical layer. This is called hardware timestamping.

Software timestamping is more accurate than user-space timestamping, because it doesn't include the context switch, processing of the packet in the kernel and waiting in the network stack. Hardware timestamping is more accurate than software timestamping, because it doesn't include waiting in the NIC. However, there are several issues with HW and SW timestamping that make them more difficult to use than user-space timestamping.

The first issue is that not every NIC and network driver supports HW timestamping. In the past only a few selected NICs had support for HW timestamping, but it's more common with modern hardware. Also, some NICs don't allow timestamping of arbitrary packets, and support is limited to PTP packets. SW timestamping depends entirely on the driver. Supported timestamping features can be verified with the ethtool -T command.

The second issue is that with HW timestamping the NIC has its own clock, which is independent from the system clock, and there has to be some form of synchronization between these two clocks. HW timestamping can be so accurate that the weakest link of the synchronization may actually be between the NIC and the system clock (!). Measuring the offset between the two clocks involves sending messages over the PCIe bus, and the round-trip delay is typically a few microseconds. As the asymmetry in the delay on the bus and the time the NIC needs to respond are unknown, the error in the offset may actually be close to half of the round-trip delay. There is no easy way to measure this error. Even if the NIC clock is accurate to a few nanoseconds, the system clock may still be off by a microsecond.

The third issue is with servers/masters sending packets which are supposed to include the transmit timestamp of the packet itself. With SW timestamping the kernel would have to know where to put the timestamp in the packet. With HW timestamping it would have to be done by the NIC. The Linux kernel supports some NICs that can do this with PTP packets, which are called one-step clocks. Some NICs have a special "launch time" feature that would allow sending packets with an accurate transmit timestamp by pre-programming the time of the transmission instead of making modifications in the packet, but the kernel doesn't support that yet.

If the NIC can't modify the packet, the protocol itself has to provide some mechanism to send the transmit timestamp to the client/slave later. PTP has follow-up messages and the devices that use them are called two-step clocks. The NTP specification doesn't have anything like that (yet). The reference implementation supports special interleaved variants of the symmetric and broadcast modes, which allow the peer/server to send the transmit timestamp of the previous packet, but it doesn't support HW timestamping on Linux, so there is currently no practical use for it. The client/server mode could have an interleaved variant too if the server was allowed to keep some state for each client.

Both NTP implementations currently use SW timestamping for reception and user-space timestamping for transmission; linuxptp supports SW and HW timestamping for both reception and transmission. With HW timestamping the synchronization of the two clocks is separate. The NIC clock is synchronized by ptp4l and the system clock is synchronized to the NIC clock by phc2sys .

Similarly to the network delay, the error in the measured offset doesn't depend on the absolute error in the timestamping, but asymmetry in the errors between the server/master and client/slave. If the sum of the error in the transmit timestamp and receive timestamp for packets sent from A to B is similar to the sum of errors in timestamps for packets sent from B to A, the errors will cancel out. There may be a large asymmetry between the errors in transmit and receive timestamps on one side, but that's not a problem if the other side has a similar asymmetry. For this reason it's recommended to use the same combination of timestamping on both sides and ideally also the same model of NIC. Mixing different combinations of timestamping or different NICs may increase the error in the measured offset.

One source of the error in user-space and SW timestamping is interrupt coalescing. When the NIC receives a packet, it may wait for more packets before interrupting the kernel in order to reduce the number of interrupts, but this means the user-space or SW timestamp is made later and has a larger error. On some NICs interrupt coalescing can be configured with the ethtool -C command. Different NICs and drivers have different configurations. Adjusting the values for a shorter delay may reduce the error in the receive timestamp, but without a reference time source it's difficult to tell how the asymmetry between the server/master and client/slave has changed and whether the accuracy has actually improved.

[Figure: measured differences between user-space/SW and HW timestamps]

These graphs show differences between user-space/SW and HW timestamps that were collected over several hours at one-second intervals with a gigabit Ethernet NIC. Some patterns can be seen in the plot of error vs. packets, which correspond to increased network and CPU load. The distribution shows that the user-space transmit timestamps are most likely to be around 30 microseconds early. The SW receive timestamps are most of the time only about 6 microseconds late, which indicates the interrupt coalescing on this NIC is adaptive. If this were a client/slave using user-space/SW timestamps for synchronization, and the errors in the timestamping on the server/master were perfectly symmetric, the client/slave would be running about 12 microseconds ahead of the server/master.

Combining PTP with NTP

In order to get both accuracy and resiliency at the same time, it would be useful if PTP and NTP could be combined. PTP would be the primary source for synchronization of the clock when everything is working as expected. NTP would keep the PTP sources in check and allow for fallback between different PTP sources, or to NTP servers when all PTP sources fail.

In Red Hat Enterprise Linux, this is possible. Programs from the linuxptp package can be used in combination with an NTP daemon. A PTP clock on a NIC is synchronized by ptp4l and is used as a reference clock by chronyd or ntpd for synchronization of the system clock. The phc2sys program has an option to work as a shared memory (SHM) reference clock, which is supported by both NTP daemons. With multiple NICs supporting HW timestamping, there is one ptp4l instance and one phc2sys instance for each PTP clock. To make the configuration easy, linuxptp also includes a timemaster program, which from a single configuration file can create configuration files for all the other programs and start them as needed. It supports both chronyd and ntpd; chronyd is preferred as it can synchronize the clock with better accuracy.

The timemaster configuration file is /etc/timemaster.conf. It specifies NTP servers, network interfaces in PTP domains, and also options for chronyd/ntpd, ptp4l, or phc2sys. This is a minimal example of the configuration file using a single PTP source and an NTP source; the listing below is a sketch reconstructed from the description that follows, with a placeholder server name:
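   [ntp_server ntp.example.com]
   # poll the NTP server every 16 seconds (2^4)
   minpoll 4
   maxpoll 4

   [ptp_domain 0]
   interfaces eth0
   # maximum expected error of the PTP refclock:
   # 10 us round trip, i.e. +/- 5 us
   delay 1e-05

   [timemaster]
   ntp_program chronyd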

In this configuration timemaster configures and starts chronyd with one NTP server and one SHM reference clock. The NTP server is polled every 16 seconds. The PTP clock on the eth0 interface is synchronized by ptp4l in the PTP domain number 0 and phc2sys provides the PTP clock as a SHM reference clock to chronyd .

The delay option sets the maximum expected error of the PTP clock due to asymmetry in network delays and timestamping to ±5 microseconds. This value can't be provided by PTP, so it needs to be specified in the configuration file. The default value is 100 microseconds (maximum error of ±50 microseconds), which should cover errors in SW timestamping, but in most cases with HW timestamping it would be unnecessarily large. Setting the delay to a smaller value will prevent chronyd from combining the PTP source with close NTP servers in local network, which are expected to have a much larger error than the PTP source, and it will also make the detection of falsetickers more sensitive.

In normal conditions the system clock is synchronized to the PTP clock. If ptp4l switches to an unsynchronized state (e.g. after a complete failure of a PTP master), phc2sys will stop updating the SHM refclock and chronyd will switch to the NTP source when the estimated error of the system clock becomes comparable to the estimated error of the NTP source. A short loss of the PTP source doesn't cause an immediate switch to a significantly worse NTP source. If synchronization of one source fails in such a way that it gives false time, there will be a problem. The two sources won't agree with each other and chronyd will have to stop the synchronization as it doesn't know which one is correct. A warning message will be written to the system log. If the configuration file specified a second NTP server, the falseticker could be identified and the synchronization would continue uninterrupted.

The next example shows a more resilient configuration using two different PTP domains and three NTP servers (again a sketch, with placeholder server names and interfaces):
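   [ntp_server ntp1.example.com]
   minpoll 4
   maxpoll 4

   [ntp_server ntp2.example.com]
   minpoll 4
   maxpoll 4

   [ntp_server ntp3.example.com]
   minpoll 4
   maxpoll 4

   [ptp_domain 0]
   interfaces eth0
   delay 1e-05

   [ptp_domain 1]
   interfaces eth1
   delay 1e-05

   [timemaster]
   ntp_program chronyd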

timemaster will now start chronyd with two ptp4l instances and two phc2sys instances. The following diagram illustrates how this all works together.

[Figure: timemaster setup combining two PTP domains and three NTP servers]

This configuration is resilient against up to four sources failing completely and up to two sources giving false time. In normal conditions all five sources are expected to give true time, but only the two PTP sources are used for synchronization of the system clock. Combining the PTP sources may improve its accuracy as their average is likely to be closer to the true time. If one PTP source fails, the accuracy won't degrade significantly. If both PTP sources fail completely or they start giving false time, the synchronization will fall back to a combination of the NTP sources.

If the PTP grandmasters can also serve time over NTP and are specified as the two of the three NTP sources, this configuration will still be resilient against any failure of a grandmaster, even though it's used as two separate sources. The grandmaster clocks should ideally be from different vendors in order to avoid four sources failing at the same time in the same way due to a firmware or software bug, which could outvote the third NTP source. If the two PTP domains are in separate networks, this configuration will be resilient also against network failures.

The following examples show what the chronyc sources command prints in this configuration when different failures of PTP sources are simulated. In the first example everything is working as expected and all sources agree with each other. Only the two PTP sources are used for synchronization (indicated with the * and + symbols).

In the next example synchronization in one PTP domain failed. The source is off by about 200 microseconds and it's drifting away. It doesn't agree with other sources, so it's rejected as a falseticker (indicated with the x symbol). The other PTP source still works as expected and is used for synchronization.

In the last example the other PTP source failed completely (? symbol) and the NTP sources are used for synchronization. The accuracy of the clock is significantly worse than before.

When the PTP sources are fixed, they will be used for synchronization and the accuracy will improve again. Failure of any NTP source would be handled in the same way. The synchronization will work correctly for as long as the number of remaining good sources is larger than the number of falsetickers.

Here is an overview of main features that are currently specified in the protocols and that have an effect on accuracy, resiliency, or security:

Both NTP and PTP have some strong advantages over the other. PTP in ideal conditions with HW timestamping and transparent clocks can effectively eliminate the effect of the network on the measurements and synchronize the system clock with sub-microsecond accuracy. NTP is highly resilient. It works with multiple sources, estimates their errors, and selects only good sources for synchronization.

NTP supports authentication with symmetric keys in order to allow clients to verify the authenticity and integrity of received packets and to prevent attackers from synchronizing them to a false time. The PTP specification includes an experimental security extension, which is not supported in linuxptp. For NTP there is also Autokey (RFC 5906), which is based on public-key cryptography, but it's no longer considered secure and should be avoided. It's supposed to be replaced by the new Network Time Security (NTS) protocol, which will probably be specified for both NTP and PTP.

We can expect that both protocols will be improved over time. It will be interesting to see which one will be the first to allow both highly resilient and highly accurate synchronization. NTP will need to specify a new extension field for delay corrections, which will have to be supported in networking devices. For timestamp corrections, the specification could just include ntpd's interleaved modes, possibly extended for the client/server mode, so servers don't have to be configured to accept symmetric associations. Alternatively, a new extension field could be introduced to request follow-up messages. PTP may need more substantial changes. It may need to allow multiple independent masters on one communication path, provide slaves with more information, and allow them to select between masters.

Irrespective of future improvements, as mentioned above, it is possible to get both accuracy and resiliency at the same time by combining PTP and NTP through the use of timemaster.

Interested in learning more?  Questions on PTP, NTP, or timemaster?  Please don't hesitate to reach out.

About the author

Miroslav Lichvar


Protocol Basics – The Network Time Protocol

Back at the end of June 2012 [0] there was a brief IT hiccup as the world adjusted the Coordinated Universal Time (UTC) standard by adding an extra second to the last minute of the 30th of June. Normally such an adjustment would pass unnoticed by all but a small dedicated collection of time keepers, but this time the story spread out into the popular media as numerous Linux systems hiccupped over this additional second, and they supported some high-profile services, including a major air carrier's reservation and ticketing backend system. The entire topic of time, time standards, and the difficulty of keeping a highly stable and regular clock standard in sync with a slightly wobbly rotating Earth has been a longstanding debate in the International Telecommunication Union Radiocommunication Sector (ITU-R) standards body that oversees this coordinated time standard. However, I am not sure that anyone would argue that the challenge of synchronizing a strict time signal with a less than perfectly rotating planet is sufficient reason to discard the concept of a coordinated time standard and just let each computer system drift away on its own concept of time. These days we have become used to a world that operates on a consistent time standard, and we have become used to our computers operating at sub-second accuracy. But how do they do so? In this article I will look at how a consistent time standard is spread across the Internet, and examine the operation of the Network Time Protocol (NTP).

Some communications protocols in the IP protocol suite are quite recent, whereas others have a long and rich history that extends back to the start of the Internet. The ARPANET switched over to use the TCP/IP protocol suite in January 1983, and by 1985 NTP was in operation on the network. Indeed it has been asserted that NTP is the longest running, continuously operating, distributed application on the Internet [1].

The objective of NTP is simple: to allow a client to synchronize its clock with UTC time, and to do so with a high degree of accuracy and a high degree of stability. Within the scope of a WAN, NTP will provide an accuracy of small numbers of milliseconds. As the network scope gets finer, the accuracy of NTP can increase, allowing for submillisecond accuracy on LANs and sub-microsecond accuracy when using a precision time source such as a Global Positioning System (GPS) receiver or a caesium oscillator.

If a collection of clients all use NTP, then this set of clients can operate with a synchronized clock signal. A shared data model, where the modification time of the data is of critical importance, is one example of the use of NTP in a networked context. (I have relied on NTP timer accuracy at the microsecond level when trying to combine numerous discrete data sources, such as a web log on a server combined with a Domain Name System (DNS) query log from DNS resolvers and a packet trace.)

NTP, Time, and Timekeeping

To consider NTP, it is necessary to consider the topic of timekeeping itself.

NTP is designed to allow a computer to be aware of three critical metrics for timekeeping: the offset of the local clock to a selected reference clock, the round-trip delay of the network path between the local computer and the selected reference clock server, and the dispersion of the local clock, which is a measure of the maximum error of the local clock relative to the reference clock. Each of these components is maintained separately in NTP. They provide not only precision measurements of offset and delay, to allow the local clock to be adjusted to synchronize with a reference clock signal, but also definitive maximum error bounds of the synchronization process, so that the user interface can determine not only the time, but the quality of the time as well.

Universal Time Standards

It would be reasonable to expect that the time is just the time, but that is not the case. The Universal Time reference standard has several versions; two of them, UT1 and UTC, are of interest to network timekeeping.

UT1 is the principal form of Universal Time. Although conceptually it is Mean Solar Time at 0° longitude, precise measurements of the Sun are difficult. Hence, it is computed from observations of distant quasars using long baseline interferometry, laser ranging of the Moon and artificial satellites, as well as the determination of GPS satellite orbits. UT1 is the same everywhere on Earth, and is proportional to the rotation angle of the Earth with respect to distant quasars, specifically the International Celestial Reference Frame (ICRF), neglecting some small adjustments.

The observations allow the determination of a measure of the Earth's angle with respect to the ICRF, called the Earth Rotation Angle (ERA), which serves as a modern replacement for Greenwich Mean Sidereal Time. UT1 is required to follow the relationship

   ERA = 2π (0.7790572732640 + 1.00273781191135448 Tu) radians

   where Tu = (Julian UT1 date − 2451545.0)

Coordinated Universal Time (UTC) is an atomic timescale that approximates UT1. It is the international standard on which civil time is based. It ticks SI seconds, in step with International Atomic Time (TAI). It usually has 86,400 SI seconds per day, but is kept within 0.9 seconds of UT1 by the introduction of occasional intercalary leap seconds. As of 2012 these leaps have always been positive, with a day of 86,401 seconds. [9]

NTP uses UTC, as distinct from the Greenwich Mean Time (GMT), as the reference clock standard. UTC uses the TAI time standard, based on the measurement of 1 second as 9,192,631,770 periods of the radiation emitted by a caesium-133 atom in the transition between the two hyperfine levels of its ground state, implying that, like UTC itself, NTP has to incorporate leap second adjustments from time to time.

NTP is an “absolute” time protocol, so that local time zones—and conversion of the absolute time to a calendar date and time with reference to a particular location on the Earth’s surface—are not an intrinsic part of the NTP protocol. This conversion from UTC to the wall-clock time, namely the local date and time, is left to the local host.

Servers and Clients

NTP uses the concepts of server and client. A server is a source of time information, and a client is a system that is attempting to synchronize its clock to a server.

Servers can be either a primary server or a secondary server. A primary server (sometimes also referred to as a stratum 1 server, using terminology borrowed from the time reference architecture of the telephone network) is a server that receives a UTC time signal directly from an authoritative clock source, such as a configured atomic clock or, very commonly these days, a GPS signal source. A secondary server receives its time signal from one or more upstream servers, and distributes its time signal to one or more downstream servers and clients. Secondary servers can be thought of as clock signal repeaters, and their role is to relieve the client query load from the primary servers while still being able to provide their clients with a clock signal of comparable quality to that of the primary servers. The secondary servers need to be arranged in a strict hierarchy in terms of upstream and downstream, and the stratum terminology is often used to assist in this process.

As noted previously, a stratum 1 server receives its time signal from a UTC reference source. A stratum 2 server receives its time signal from a stratum 1 server, a stratum 3 server from stratum 2 servers, and so on. A stratum n server can peer with many stratum n – 1 servers in order to maintain a reference clock signal. This stratum framework is used to avoid synchronization loops within a set of time servers.

Clients peer with servers in order to synchronize their internal clocks to the NTP time signal.

The NTP Protocol

At its most basic, the NTP protocol is a clock request transaction, where a client requests the current time from a server, passing its own time with the request. The server adds its time to the data packet and passes the packet back to the client. When the client receives the packet, the client can derive two essential pieces of information: the reference time at the server and the elapsed time, as measured by the local clock, for a signal to pass from the client to the server and back again. Repeated iterations of this procedure allow the local client to remove the effects of network jitter and thereby gain a stable value for the offset between the local clock and the reference clock standard at the server. This value can then be used to adjust the local clock so that it is synchronized with the server. Further iterations of this protocol exchange allow the local client to continuously correct the local clock to address local clock skew.

NTP operates over the User Datagram Protocol (UDP). An NTP server listens for client NTP packets on port 123. The NTP server is stateless and responds to each received client NTP packet in a simple transactional manner by adding fields to the received packet and passing the packet back to the original sender, without reference to preceding NTP transactions.

Upon receipt of a client NTP packet, the receiver time-stamps receipt of the packet as soon as possible within the packet assembly logic of the server. The packet is then passed to the NTP server process. This process interchanges the IP Header Address and Port fields in the packet, overwrites numerous fields in the NTP packet with local clock values, time-stamps the egress of the packet, recalculates the checksum, and sends the packet back to the client.

The NTP packets sent by the client to the server and the responses from the server to the client use a common format, as shown in Figure 1.

Figure 1: NTP Message Format

The header fields of the NTP message are shown in Figure 1, and the codes used in the Reference Identifier field are listed in Figure 2.

Figure 2: Reference Identifier Codes (from RFC 4330)

The next four fields use a 64-bit time-stamp value. This value is an unsigned 32-bit seconds value and a 32-bit fractional part. In this notation the value 2.5 would be represented by the 64-bit hexadecimal string 0x00000002.80000000, that is, an integer part of 2 and a fractional part of 0x80000000, or one half.

The unit of time is in seconds, and the epoch is 1 January 1900, meaning that the NTP time will cycle in the year 2036 (two years before the 32-bit Unix time cycle event in 2038).

The smallest time fraction that can be represented in this format is 2^-32 seconds, or about 232 picoseconds.
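A minimal sketch of this 32.32 fixed-point encoding, with illustrative helper names, shows both the representation of 2.5 and the resolution limit:

    def to_ntp64(seconds):
        # scale by 2^32 and keep 64 bits: high 32 = seconds, low 32 = fraction
        return int(seconds * (1 << 32)) & 0xFFFFFFFFFFFFFFFF

    def from_ntp64(value):
        return (value >> 32) + (value & 0xFFFFFFFF) / (1 << 32)

    assert to_ntp64(2.5) == 0x0000000280000000   # integer 2, fraction 0x80000000
    print(2 ** -32)                              # ~2.33e-10 s, about 232 ps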

The basic operation of the protocol is that a client sends a packet to a server and records the time the packet left the client in the Origin Timestamp field (T1). The server records the time the packet was received (T2). A response packet is then assembled with the original Origin Timestamp and the Receive Timestamp equal to the packet receive time, and then the Transmit Timestamp is set to the time that the message is passed back toward the client (T3). The client then records the time the packet arrived (T4), giving the client four time measurements, as shown in Figure 3.

Figure 3: NTP Transaction Timestamps (from RFC 4330)

These four parameters are passed into the client timekeeping function to drive the clock synchronization function, which we will look at in the next section.

The optional Key and Message Digest fields allow a client and a server to share a secret 128-bit key, and use this shared secret to generate a 128-bit MD5 hash of the key and the NTP message fields. This construct allows a client to detect attempts to inject false responses from a man-in-the-middle attack.

The final part of this overview of the protocol operation is the polling frequency algorithm. An NTP client sends a message at regular intervals to an NTP server. This regular interval is commonly set to be 16 seconds. If the server is unreachable, NTP backs off from this polling rate, doubling the back-off time at each unsuccessful poll attempt, down to a minimum rate of one poll attempt every 36 hours. When NTP is attempting to resynchronize with a server, it increases its polling frequency and sends a burst of eight packets spaced at 2-second intervals.

When the client clock is operating within a sufficiently small offset from the server clock, NTP lengthens the polling interval and sends the eight-packet burst every 256 to 512 seconds (roughly 4 to 8 minutes).

Timekeeping on the Client

The next part of the operation of NTP is how an NTP process on a client uses the information generated by the periodic polls to a server to moderate the local clock.

From an NTP poll transaction, the client can estimate the delay between the client and the server. Using the time fields described in Figure 3, the transmission delay can be calculated as the total time from transmission of the poll to reception of the response minus the recorded time for the server to process the poll and generate a response:

   δ = (T4 – T1) – (T3 – T2)

The offset of the client clock from the server clock can also be estimated by the following:

 Θ = ½ [(T2 – T1) + (T3 – T4)]

It should be noted that this calculation assumes that the network path delay from the client to the server is the same as the path delay from the server to the client.
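In code, the two estimates are one line each. The sketch below is a direct transcription of the formulas above (the function name is an illustrative choice):

    def offset_and_delay(t1, t2, t3, t4):
        # t1/t4 are read from the client clock, t2/t3 from the server clock
        delay = (t4 - t1) - (t3 - t2)            # time spent on the network
        offset = ((t2 - t1) + (t3 - t4)) / 2     # assumes a symmetric path
        return offset, delay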

NTP uses the minimum of the last eight delay measurements as δ₀. The selected offset, Θ₀, is the one measured at the lowest delay. The values (Θ₀, δ₀) become the NTP update value.
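A toy version of this minimum-delay filter might look like the following; the real ntpd filter also tracks sample age and error, so treat this purely as a sketch:

    from collections import deque

    class ClockFilter:
        def __init__(self):
            self.samples = deque(maxlen=8)   # the last eight (offset, delay) pairs

        def add(self, offset, delay):
            self.samples.append((offset, delay))

        def update(self):
            # the lowest-delay sample is the least distorted by queuing
            return min(self.samples, key=lambda s: s[1])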

When a client is configured with a single server, the client clock is adjusted by a slew operation to bring the offset with the server clock to zero, as long as the server offset value is within an acceptable range.

When a client is configured with numerous servers, the client will use a selection algorithm to select the preferred server to synchronize against from among the candidate servers. Clustering of the time signals is performed to reject outlier servers, and then the algorithm selects the server with the lowest stratum with minimal offset and jitter values. The algorithm used by NTP to perform this operation is Marzullo’s Algorithm [ 2 ] .

When NTP is configured on a client, it attempts to keep the client clock synchronized against the reference time standard. To do this task NTP conventionally adjusts the local time by small offsets (larger offsets may cause side effects on running applications, as has been found when processing leap seconds). This small adjustment is undertaken by an adjtime() system call, which slews the clock by altering the frequency of the software clock until the time correction is achieved. Slewing the clock is a slow process for large time offsets; a typical slew rate is 0.5 ms per second.
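The slow pace of slewing is easy to quantify. Assuming the typical rate quoted above, removing a one-second offset takes over half an hour:

    offset_s = 1.0         # correction still to be applied, in seconds
    slew_rate = 0.0005     # 0.5 ms of adjustment per second of real time
    print(offset_s / slew_rate)   # 2000.0 seconds, roughly 33 minutes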

Obviously this informal description has taken a rather complex algorithm and some rather detailed math formulas without addressing the details. If you are interested in how NTP operates at a more detailed level, consult the RFCs and resources cited throughout this document, which will take you far deeper into the algorithms and the underlying models of clock selection and synchronization than I have done here.

NTP is in essence an extremely simple stateless transaction protocol that provides a quite surprising outcome. From a regular exchange of simple clock readings between a client and a server, it is possible for the client to train its clock to maintain a high degree of precision despite the possibility of potential problems in the stability and accuracy of the local clock and despite the fact that this time synchronization is occurring over network paths that impose a noise element in the form of jitter in the packet exchange between client and server. Much of today’s distributed Internet service infrastructure relies on a common time base, and this base is provided by the common use of the Network Time Protocol.


How Does NTP Work?

The Network Time Protocol (NTP) is a system for synchronizing the clocks of hosts and clients across the Internet. NTP is a protocol intended to synchronize all computers participating in the network to within a few milliseconds of Coordinated Universal Time (UTC). The core of the protocol is NTP’s clock discipline algorithm that adjusts the local computer’s clock time and tick frequency in response to an external source — such as another trusted NTP server, a radio or satellite receiver, or a telephone modem. A core problem in NTP is establishing the trust and accuracy of nodes in the NTP network. This is done through a combination of selection and filtering algorithms to choose from the most reliable and accurate peer in the synchronization network.

An argument can be made that the Network Time Protocol (NTP) is the longest running, continuously operating, distributed application on the Internet, with roots that can be traced back to 1979. The first documentation of the protocol was made available in 1981 as Internet Engineering Note 173 (IEN-173). Since then, it has evolved into Network Time Protocol Version 4, documented in RFC 5905 and ported to almost every client and server platform available today. NTP is running on millions of servers and clients across the world to keep accurate time on devices throughout the Internet.

NTP Network Architecture #

NTP uses a hierarchical network architecture that forms a tree structure. Each level of this hierarchy is called a stratum and is assigned a number, starting with zero for reference hardware clocks. A stratum 1 server is synchronized with a stratum 0 device, and this relationship continues so that a server synchronized to a stratum \( n \) server runs at stratum \(n + 1\). The stratum number therefore represents the distance from an accurate reference clock. In general, the stratum of a node in the network is an indication of quality and reliability, but this is not always the case; it is common to find stratum 3 time sources that are higher quality than other stratum 2 time sources.

Wikipedia has a good set of definitions for the different strata, summarized here with slight modification: stratum 0 devices are high-precision reference clocks, such as atomic clocks and GPS receivers; stratum 1 servers are computers attached directly to stratum 0 devices; stratum 2 servers synchronize over the network to stratum 1 servers; and this hierarchy continues up to stratum 15, with stratum 16 indicating an unsynchronized device. The following figure depicts the hierarchical nature of the NTP system. Reference clocks at the top of the hierarchy are accurate timepieces, and stratum 1 consists of computers directly attached to those timepieces. From there, increasing stratum numbers indicate where each computer synchronizes its time data from.

Within the NTP network, clients poll one or more servers at an interval to retrieve updated timestamp information.

The Local Clock Model #

Making the NTP system as accurate and reliable as possible first requires an accurate and reliable local clock on host systems. For most purposes, the local clock is assumed to be a quartz crystal clock, the typical digital timepiece used in watches and computers. When a voltage is applied to the crystal it changes shape; when the voltage is removed, the crystal returns to its original shape (generating a small amount of voltage in the process). This change and return of shape happens at a stable frequency that can be adjusted during the manufacturing process. Once set, the crystal can maintain the frequency over long periods of time.

The local computer system uses the frequency that crystal oscillates at to increment the logical clock of the computer. Since the crystal oscillates at a pre-defined rate, the computer system can use this rate as a model of time. Quartz is accurate enough that it can maintain time within a few milliseconds per day. Over time however, these imperfections of a few milliseconds per day can accumulate, making the clock inaccurate. Enter NTP. NTP adjusts these inaccuracies at periodic intervals using corrections that are received by more accurate time servers. For hosts requiring the highest reliability, the computer is attached to a local hardware clock such as an atomic clock and receives time information directly from that attached clock.

The Phase and Frequency Locked Loop (PLL/FLL) #

A phase-locked loop or PLL is any control system that generates an output signal whose phase is related to the phase of an input signal. The simplest version of a PLL is an electronic circuit consisting of a variable frequency oscillator and a phase detector that operate in a continuous feedback loop. The oscillator generates a periodic signal, and the phase detector compares the phase of that signal with the phase of the input periodic signal, adjusting the oscillator to keep the phases matched.

The following figure provides a more in-depth example.

In the figure, \(x_i\) represents the reference timestamp from a reliable source and \(c_i\) the local timestamp, both at the \(i\)th update. The difference between these timestamps, \(x_i - c_i\), is the input offset, which is processed by the clock filter. The filter records the most recent offsets and selects one from the set as the output offset used to adjust the local clock. The loop filter then produces the correction required for the local clock oscillator. If a correction is required, the local clock is gradually skewed to the correct value so that the clock shows smooth time indications and so that time values are monotonically increasing.

An alternative to a PLL is an FLL, which operates on the same principles but adjusts frequency rather than phase. Evidence shows that a PLL usually works better when network jitter is the dominant source of clock error, and an FLL works better when the natural wander of the oscillator is dominant. NTP v4 uses a combination of both a PLL and an FLL and combines those factors into a local clock adjustment.

A completely exhaustive explanation of the PLL/FLL algorithm in NTP requires 90 pages of documentation , but the overall ideas can be summarized fairly succinctly.

An NTP client receives time data from one or more connected servers and uses that time data to compute a phase or frequency correction to apply to the local clock. If the time correction is only a slight change, the correction is applied gradually (called slewing ) in order to avoid clock jumps. If the local clock is off by a large amount, the adjustment may be applied all at once.

The adjustment to the local clock is called the clock discipline algorithm and it is implemented similar to the simple PLL feedback control system. The following diagram, from the presentation Network Time Protocol (NTP) General Overview , shows the basic overview of the process:

Here, an NTP client is connected to three NTP servers and receives timestamp data from all of them. Having multiple servers act as NTP peers provides some redundancy and diversity to the system. The clock filters select the statistically best time offset from the previous eight time samples received by the client, and the selection and clustering algorithms further narrow the dataset by pruning outliers from the result set. The combining algorithm then computes a weighted average of the time offsets.

The output of this first process serves as the input to the PLL/FLL loop. The loop itself continuously adjusts the local clock phase and frequency in response to the input given from the filtered and averaged set of NTP servers.
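A toy discipline loop that combines a proportional (phase) term with an integral (frequency) term captures the flavour of this design. The gains and class structure here are invented for illustration; ntpd's actual loop is far more elaborate:

    class ToyDiscipline:
        def __init__(self, phase_gain=0.1, freq_gain=0.01):
            self.phase_gain = phase_gain   # PLL-like proportional gain
            self.freq_gain = freq_gain     # FLL-like integral gain
            self.freq = 0.0                # accumulated frequency correction (s/s)

        def update(self, offset, poll_interval):
            # frequency term absorbs persistent drift; phase term slews offset
            self.freq += self.freq_gain * offset / poll_interval
            return self.phase_gain * offset, self.freq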

The Clock Synchronization Algorithm #

During typical operation, an NTP client regularly polls one or more NTP servers to receive updated time data. Upon receipt of new data, the client computes the time offset and round-trip delay from the server as in the following figure.

The time offset \( \theta \), the difference in absolute time between the two clocks, is defined mathematically by

$$ {\displaystyle \theta ={\frac {(t_{2}-t_{1})+(t_{3}-t_{4})}{2}}.} $$

More intuitively, the offset is the difference between the two clocks' readings, with the two averaged terms cancelling out the time it takes for packets to travel between the client and the server.

Using the time data, we can also calculate the network delay \( \delta \):

$$ {\displaystyle \delta ={(t_{4}-t_{1})-(t_{3}-t_{2})} .} $$

  • \(t1\) is the client’s timestamp of the request packet transmission,
  • \(t2\) is the server’s timestamp of the request packet reception,
  • \(t3\) is the server’s timestamp of the response packet transmission and
  • \(t4\) is the client’s timestamp of the response packet reception.

At the very heart of NTP are the algorithms used to improve the accuracy of the values for \( \theta \) and \( \delta \) using filtering and selection algorithms. The complexity of these algorithms varies depending on the statistical properties of the path between peers and the accuracies required. For example, if two nodes are on the same gigabit LAN, the path delays between messages sent between peers are usually within or below any required clock accuracies. In a case like this, the raw offsets delivered by the receive procedure can be used to directly adjust the local clock. In other cases, two nodes may be distributed widely over the Internet and the delay might be much larger than acceptable.

Clock Filtering Algorithm #

There are a number of algorithms for filtering time-offset data to remove glitches that fall into roughly two broad categories: majority-subset algorithms and clustering algorithms . Majority-subset algorithms attempt to separate good subsets of data from bad subsets of data by comparing statistics like mean and variance to select the best clock from a population of different clocks. Clustering algorithms work, on the other hand, by removing outliers to improve the overall offset estimate for a clock given a series of observations.

The full implementation of NTP’s clock filtering algorithm is described fairly succinctly in the presentation NTP Architecture, Protocol and Algorithms .

Clock Selection Algorithm #

Likely the single most important factor in maintaining high reliability within NTP is choosing a peer. Whenever an event comes in and new offset estimates are calculated for a peer, the peer selection algorithm is used to determine which peer should be selected as the clock source.

Within the NTP network, a key design assumption that helps with the selection algorithm is that accurate clocks are relatively numerous and can be represented by random variables narrowly distributed close to UTC, while erroneous clocks are relatively rare and can be represented by random variables widely distributed throughout the measurement space.

The peer selection procedure thus begins by constructing a list of candidate peers by stratum. To be included on the candidate list the peer must pass certain sanity checks. For example, one check requires that the peer must not be the host itself. Another check requires the peers must have reachability registers that are at least half full, which avoids using data from low quality associations or obviously broken implementations. If no candidates pass the sanity checks, the existing clock selection, if any, is cancelled and the local clock free-runs at its intrinsic frequency.

The list is then pruned from the end to be no longer than a maximum size, currently set to five. Starting from the beginning, the list is truncated at the first entry where the number of different strata in the list exceeds a maximum, currently set to two. This procedure is designed to favor those peers near the head of the candidate list, which are at the lowest stratum and lowest delay and presumably can provide the most accurate time.
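A sketch of just this pruning step, assuming candidates already carry stratum and delay attributes (the names and the ranking key are illustrative):

    from collections import namedtuple

    Peer = namedtuple("Peer", "name stratum delay")
    MAX_CANDIDATES = 5    # list pruned to at most five entries
    MAX_STRATA = 2        # truncate once more than two strata appear

    def prune(candidates):
        ranked = sorted(candidates, key=lambda p: (p.stratum, p.delay))
        survivors, strata = [], set()
        for peer in ranked[:MAX_CANDIDATES]:
            strata.add(peer.stratum)
            if len(strata) > MAX_STRATA:
                break                 # favors low-stratum, low-delay peers
            survivors.append(peer)
        return survivors

    print(prune([Peer("a", 2, 0.02), Peer("b", 1, 0.01), Peer("c", 3, 0.05)]))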

The full implementation of NTP’s selection algorithm is also given in the presentation NTP Architecture, Protocol and Algorithms .

Clock Combining Algorithm #

The combine algorithm is the simplest of the bunch: it computes the final clock offset by averaging the surviving clocks that have been filtered and selected to produce a final offset used for adjusting the clock using the PLL/FLL clock control system.
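A minimal sketch of such an average, here weighting each survivor by the reciprocal of its root distance in the style of RFC 5905's combine algorithm (a plain mean is the simplest variant):

    def combine(survivors):
        # survivors: (offset, root_distance) pairs, both in seconds
        weights = [1.0 / max(dist, 1e-9) for _, dist in survivors]
        total = sum(w * off for w, (off, _) in zip(weights, survivors))
        return total / sum(weights)

    print(combine([(0.012, 0.05), (0.010, 0.02), (0.015, 0.10)]))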

NTP Message Formats #

An NTP timestamp records the number of standard seconds relative to UTC. UTC started measuring time on 1 January 1972, and the conceptual NTP clock was set to 2,272,060,800.0 at that same point in time: the number of standard seconds between 1 January 1900 (the conceptual start of NTP time) and 1 January 1972 (the start of UTC).
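That constant is easy to verify with ordinary date arithmetic:

    from datetime import datetime, timezone

    ntp_epoch = datetime(1900, 1, 1, tzinfo=timezone.utc)
    utc_start = datetime(1972, 1, 1, tzinfo=timezone.utc)
    print((utc_start - ntp_epoch).total_seconds())   # 2272060800.0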

An NTP timestamp is encoded as a 64-bit unsigned number, where the first 32 bits encode the integer portion of the timestamp and the last 32 bits encode the fractional portion. This format allows for convenient multiple-precision arithmetic and easy conversion to other time formats as required. The 32 fractional bits of a timestamp provide a resolution of about 232 picoseconds, which is likely more than enough precision for practical applications over networked systems.

NTP hosts and clients exchange timestamp records by copying the current value of the local clock to a timestamp variable in response to an incoming NTP message. An NTP host parses the incoming message, sets the timestamp on the message, and returns it to the client.

Some of the key fields are described here:

  • LI (Leap Indicator): Warns of an impending leap second to be inserted or deleted at the end of the current day.
  • VN (Version Number): Identifies the present NTP version.
  • Mode, Stratum, Precision: Indicate the current operating mode, stratum and precision.
  • Poll: Controls the interval between NTP messages sent by the host to a peer. The sending host always uses the minimum of its own poll interval and the peer poll interval.
  • Root Delay/Dispersion: Indicate the estimated roundtrip delay and estimated dispersion relative to the primary reference source.
  • Reference ID, Reference Timestamp: Identify the reference clock and the time of its last update, intended primarily for management functions.
  • Origin Timestamp: The time when the last received NTP message was originated, copied from its transmit timestamp field upon arrival.
  • Receive Timestamp: The local time when the latest NTP message was received.
  • Transmit Timestamp: The local time when the latest NTP message was transmitted.

The NTP state machine running on each host machine maintains state variables for each of the above quantities as well as recording the IP address and ports of the host and its peers, a timer recording the interval between transmitting NTP messages, a register recording if the peer is reachable or not, as well as data measuring the current and estimated measured delay and offset associated with each single observation.

NTP also tracks the current clock source which identifies the clock that is currently being used to track time and the local clock time derived from the logical clock on the host machine.

In normal client-server operation, a server receives a message with this format from a peer, and the server populates the message with updated timestamp data before sending it back to the peer.

Each NTP host also sets a reachability shift register for each peer they communicate with. Each time a message is received the lowest order bit in the register is set to one, and the remaining positions are shifted to the left. The peer is considered reachable if the register is not zero, that is, if the peer has sent at least one message in the last eight update intervals. Peers who are unreachable may have their state information cleared.
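Since the register is a single byte shifted once per update interval, a toy implementation takes only a few lines (the function name is illustrative):

    def update_reach(reach, packet_received):
        reach = (reach << 1) & 0xFF    # age out the oldest of eight samples
        if packet_received:
            reach |= 1                 # record a packet in this interval
        return reach

    reach = 0
    for _ in range(8):                 # eight answered polls in a row
        reach = update_reach(reach, True)
    print(oct(reach))                  # 0o377: fully reachable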

NTP is a system of clients and peers that communicate timestamp information in a hierarchical format where nodes closer to the root of the hierarchy are, in general, considered more accurate keepers of time than nodes lower down in the hierarchy. Each NTP host responds to incoming messages requesting time information and keeps an active peer association for each of the clients to track key information.

In response to receiving time information, an NTP peer runs a process that includes the selection, cluster, and combine algorithms, which mitigate among the various servers and reference clocks to determine the most accurate and reliable candidates to synchronize the clock with. The selection algorithm uses Byzantine fault detection principles to discard the presumably incorrect candidates, called "falsetickers", from the incident population, leaving only good candidates called "truechimers". A truechimer is a clock that maintains timekeeping accuracy to a previously published and trusted standard, while a falseticker is a clock that shows misleading or inconsistent time. The cluster algorithm uses statistical principles to find the most accurate set of truechimers. The combine algorithm computes the final clock offset by statistically averaging the surviving truechimers.

Once an appropriate clock adjustment has been determined, the adjustment itself is processed by a phase-locked and frequency-locked loop.

Resources #

This article provides an introduction to NTP, and should provide a good overview for someone wanting to learn more about the protocol. If you want to learn more, there are several great resources to choose from:

  • RFC 1129 Internet Time Synchronization: The Network Time Protocol
  • RFC 1305 Network Time Protocol (Version 3) Specification, Implementation and Analysis
  • RFC 5905 Network Time Protocol Version 4: Protocol and Algorithms Specification
  • Network Time Synchronization Research Project

Introduction to Network Time Protocol (NTP)

Network Time Protocol (NTP) is a protocol designed to enable synchronizing the clocks of computer systems and devices on a TCP/IP-based network to a shared time source. First developed by David L. Mills of the University of Delaware in the early 1980s, NTP is now on its fourth version (NTPv4).

NTP supports four modes of operation: Client/Server mode, Symmetric (Peer) mode, Multicast mode and Broadcast mode. This blog briefly introduces the Server/Client mode, which is supported by some of our devices.

In Client/Server mode, both the client and the server have a time axis that represents their respective clock times. The workflow of time synchronization between the client and the server is as follows:

  • When the client wants to synchronize itself to the time of the server, the client will send an NTP request message to the NTP server. This message includes a record of the time when it leaves the client (t1);
  • After receiving the message, the server will add a record to the message indicating the time the message reached the server (t2);
  • After a period of processing, the message is returned to the client, and the time it leaves the server is recorded in the message (t3);
  • The client receives the message and records the time it arrives at the client (t4). 


The round-trip delay of the NTP message can be calculated as (t4-t1) - (t3-t2), and the time difference between the client and the server is ((t2-t1) + (t3-t4)) / 2. The client can adjust its own time based on the above two parameters to synchronize itself with the server time.
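The whole exchange fits in a short script. The sketch below sends a minimal SNTP mode-3 request over UDP and applies the two formulas above; pool.ntp.org is just an example server, and error handling is omitted:

    import socket
    import struct
    import time

    NTP_DELTA = 2208988800   # seconds between the 1900 NTP and 1970 Unix epochs

    def sntp_query(server="pool.ntp.org", port=123, timeout=2.0):
        request = bytearray(48)
        request[0] = (4 << 3) | 3        # LI=0, version 4, mode 3 (client)
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.settimeout(timeout)
            t1 = time.time()
            sock.sendto(request, (server, port))
            response, _ = sock.recvfrom(1024)
            t4 = time.time()
        words = struct.unpack("!12I", response[:48])
        # Receive (t2) and Transmit (t3) timestamps are 32.32 fixed point
        t2 = words[8] + words[9] / 2**32 - NTP_DELTA
        t3 = words[10] + words[11] / 2**32 - NTP_DELTA
        offset = ((t2 - t1) + (t3 - t4)) / 2
        delay = (t4 - t1) - (t3 - t2)
        return offset, delay

    print(sntp_query())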

NTP is typically applied in scenarios that require the clocks of all devices on a network to be consistent, since it achieves highly accurate time synchronization efficiently. For example, the clocks of all devices in a parking lot billing system can be synchronized.

Magewell Ultra Encode  devices and Pro Convert encoders can be configured to use public or private NTP servers as a reference time source. This allows multiple encoders to be synchronized, and allows multiple decoders receiving the streams to synchronize them for output based on timecode in the SEI information in the streams.


Troubleshoot and Debug Network Time Protocol (NTP) Issues


Introduction

This document describes how to troubleshoot Network Time Protocol (NTP) issues with   debug    commands and the   show ntp   command.

Prerequisites

Requirements

There are no specific requirements for this document.

Components Used

This document is not restricted to specific software and hardware versions.

The information in this document was created from the devices in a specific lab environment. All of the devices used in this document started with a cleared (default) configuration. If your network is live, ensure that you understand the potential impact of any command.

NTP show Commands

Before you look at the cause of NTP problems, you must understand the use of and output from these commands:

show ntp association

show ntp association detail

show ntp status

Note: Use the Command Lookup Tool in order to obtain more information on the commands used in this section. Only registered Cisco users can access internal tools and information.

Note: The Output Interpreter Tool  supports certain show commands. Use the Output Interpreter Tool in order to view an analysis of show command output. Only registered Cisco users can access internal tools and information.

An NTP association can be a peer association (one system is willing to synchronize to the other system or to allow the other system to synchronize to it) or a server association (only one system synchronizes to the other system and not the other way around).

This is an example of output from the show ntp association command:

This is an example of output from the show ntp association detail command:

Terms already defined in the show ntp association section are not repeated here.

Packet data is valid if tests 1 to 4 are passed. The data is then used in order to calculate offset, delay, and dispersion.

Packet header is valid if tests 5 to 8 are passed. Only packets with a valid header can be used to determine whether a peer can be selected for synchronization.

The sanity checks have failed, so time from the server is not accepted. The server is unsynced.

The peer/server time is valid. The local client accepts this time if this peer becomes the primary.

The peer/server time is invalid, and time cannot be accepted.

ref ID: Each peer/server is assigned a reference ID (label).

time: This is the last time stamp received from that peer/server.

our mode / peer mode: This is the state of the local client/peer.

our poll intvl / peer poll intvl: This is the poll interval from our poll to this peer or from the peer to the local machine.

root delay: Root delay is the delay in milliseconds to the root of the NTP setup. Stratum 1 clocks are considered to be at the root of an NTP setup/design. In the example, all three servers can be the root because they are at stratum 1.

root dispersion: Root dispersion is the maximum clock time difference that was ever observed between the local clock and the root clock. Refer to the explanation of 'disp' under show ntp association for more details.

This is an estimate of the maximum difference between the time on the stratum 0 source and the time measured by the client; it consists of components for round trip time, system precision, and clock drift since the last actual read of the stratum source.

In a large NTP setup (NTP servers at stratum 1 in the internet, with servers that source time at different strata) with servers/clients at multiple strata, NTP synchronization topology must be organized in order to produce the highest accuracy, but must never be allowed to form a time sync loop. An additional factor is that each increment in stratum involves a potentially unreliable time server, which introduces additional measurement errors. The selection algorithm used in NTP uses a variant of the Bellman-Ford distributed routing algorithm in order to compute the minimum-weight spanning trees rooted on the primary servers. The distance metric used by the algorithm consists of the stratum plus the synchronization distance, which itself consists of the dispersion plus one-half the absolute delay. Thus, the synchronization path always takes the minimum number of servers to the root; ties are resolved on the basis of maximum error.
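The distance metric itself is simple enough to state in a few lines; the server tuples below are invented values purely for illustration:

    def metric(stratum, dispersion, delay):
        # stratum plus synchronization distance (dispersion + |delay| / 2)
        return stratum + dispersion + abs(delay) / 2

    servers = {"s1": (1, 0.004, 0.030), "s2": (2, 0.001, 0.002)}
    best = min(servers, key=lambda name: metric(*servers[name]))
    print(best)   # "s1": stratum dominates; ties are broken by maximum error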

delay: This is the round trip delay to the peer.

precision: This is the precision of the peer clock in Hz.

version: This is the NTP version number used by the peer.

org time: This is the time stamp of the NTP packet originator; in other words, it is the peer time stamp when it created the NTP packet but before it sent the packet to the local client.

rcv time: This is the time stamp when the local client received the message. The difference between org time and rcv time is the offset for this peer. In the example, primary 10.4.2.254 has these times:

The difference is the offset of 268.3044 msec.

xmt time: This is the transmit time stamp for the NTP packet the local client sends to this peer/server.

filtdelay: This is the round trip delay in milliseconds of each sample.

filtoffset: This is the clock offset in milliseconds of each sample.

filterror: This is the approximate error of each sample.

A sample is the last NTP packet received. In the example, primary 10.4.2.254 has these values:

These eight samples correspond to the value of the reach field, which shows whether the local client received the last eight NTP packets.

This is an example of output from the show ntp status command:

Terms already defined in the show ntp association section or the show ntp association detail section are not repeated.

Troubleshoot NTP with Debugs

Some of the most common causes of NTP issues are:

  • NTP packets are not received.
  • NTP packets are received, but are not processed by the NTP process on the Cisco IOS.
  • NTP packets are processed, but erroneous factors or packet data causes the loss of synchronization.
  • NTP clock-period is manually set.

Important debug commands that help isolate the cause of these issues include:

  • debug ip packet <acl>
  • debug ntp packets
  • debug ntp validity
  • debug ntp sync
  • debug ntp events

The next sections illustrate the use of debugs in order to resolve these common issues.

Note: Refer to Important Information on Debug Commands before you use debug commands.

NTP Packets Not Received

Use the debug ip packet command in order to check if NTP packets are received and sent. Since debug output can be chatty, you can limit debug output with the use of Access Control Lists (ACLs). NTP uses User Datagram Protocol (UDP) port 123.

  • Create ACL 101:
    access-list 101 permit udp any any eq 123
    access-list 101 permit udp any eq 123 any
    NTP packets usually have a source and destination port of 123, so this also helps:
    permit udp any eq 123 any eq 123
  • Use this ACL in order to limit output from the debug ip packet command:
    debug ip packet 101
  • If the issue is with particular peers, narrow ACL 101 down to those peers. If the peer is 172.16.1.1, change ACL 101 to:
    access-list 101 permit udp host 172.16.1.1 any eq 123
    access-list 101 permit udp any eq 123 host 172.16.1.1

This example output indicates that packets are not sent:

Once you confirm that NTP packets are not received, you must:

  • Check if NTP is configured correctly.
  • Check if an ACL blocks NTP packets.
  • Check for routing issues to the source or destination IP.

NTP Packets Not Processed

With both the debug ip packet and debug ntp packets commands enabled, you can see the packets that are received and transmitted, and you can see that NTP acts on those packets. For every NTP packet received (as shown by debug ip packet), there is a corresponding entry generated by debug ntp packets.

This is the debug output when the NTP process works on received packets:

This is an example where NTP does not work on received packets. Although NTP packets are received (as shown by debug ip packets), the NTP process does not act on them. For NTP packets that are sent out, a corresponding debug ntp packets output is present, because the NTP process has to generate the packet. The issue is specific to received NTP packets that are not processed.

Loss of Synchronization

Loss of synchronization can occur if the dispersion and/or delay value for a server goes very high. High values indicate that the packets take too long to get to the client from the server/peer in reference to the root of the clock. So, the local machine cannot trust the accuracy of the time present in the packet, because it does not know how long it took for the packet to get here.

NTP is meticulous about time and will not synchronize with another device that it cannot trust or cannot adjust in a way that makes it trustworthy.

If there is a saturated link and buffering occurs along the way, the packets get delayed as they come to the NTP client. So, the timestamp contained in a subsequent NTP packet can occasionally vary a lot, and the local client cannot really adjust for that variance.

NTP does not offer a method to turn off the validation of these packets unless you use SNTP (Simple Network Time Protocol). SNTP is not much of an alternative because it is not widely supported in software.

If you experience loss of synchronization, you must check the links:

  • Are they saturated?
  • Are there any kinds of drops on your wide-area network (WAN) links?
  • Does encryption occur?

Monitor the reach value from the show ntp associations detail command. The highest value is 377. If the value is 0 or low, NTP packets are received intermittently, and the local client goes out of sync with the server.

The debug ntp validity command indicates whether the NTP packet failed sanity or validity checks and reveals the reason for the failure. Compare this output to the sanity tests specified in RFC 1305 that are used in order to test the NTP packet received from a server. Eight tests are defined:

This is sample output from the debug ntp validity command:

You can use the debug ntp packets command in order to see the time that the peer/server gives you in the received packet. The local machine likewise tells the peer/server the time it knows in the transmitted packet.

In this sample output, the time stamps in the received packet from the server and the packet sent to another server are the same, which indicates that the client NTP is in sync.

This is an example of output when the clocks are not in sync. Notice the time difference between the xmit packet and the rcv packet. The peer dispersion can be at the max value of 16000, and the reach for the peer can show 0.

debug ntp sync and debug ntp events

The debug ntp sync command produces one-line outputs that show whether the clock has synced or the sync has changed. The command is generally enabled with debug ntp events.

The debug ntp events command shows any NTP events that occur, which helps you determine if a change in the NTP triggered an issue such as clocks that go out of sync. (In other words, if your happily synced clocks suddenly go crazy, you know to look for a change or trigger!)

This is an example of both debugs. Initially, the client clocks were synced. The debug ntp events command shows that an NTP peer stratum change occurred, and the clocks then went out of sync.

NTP Clock-period Manually Set

The Cisco.com website warns that:

"The ntp clock-period command is automatically generated to reflect the correction factor that constantly changes when the copy running-configuration startup-configuration command is entered to save the configuration to NVRAM. Do not attempt to manually use the ntp clock-period command. Ensure that you remove this command line when you copy configuration files to other devices."

The clock-period value is dependent on the hardware, so it differs for every device.

The ntp clock-period command automatically appears in the configuration when you enable NTP. The command is used in order to adjust the software clock. The 'adjustment value' compensates for the 4 msec tick interval, so that, with the minor adjustment, you have 1 second at the end of the interval.

If the device has calculated that its system clock loses time (perhaps there needs to be a frequency compensation from the base level of the router), it automatically adds this value to the system clock in order to maintain its synchronicity.

Note: This command must not be changed by the user.

The default NTP clock-period for a router is 17179869 and is essentially used in order to start the NTP process.

The conversion formula is 17179869 * 2^(-32) = 0.00399999995715916156768798828125, or approximately 4 milliseconds.

For example, the system clock for the Cisco 2611 routers (one of the Cisco 2600 Series Routers) was found to be slightly out-of-sync and could be resynchronized with this command:

This equals 17208078 * 2^(-32) = 0.0040065678767859935760498046875, or a little over 4 milliseconds.
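Both conversions can be checked directly, since the clock-period value is expressed in units of 2^(-32) seconds:

    for value in (17179869, 17208078):
        print(value, "->", value * 2 ** -32, "seconds")
    # 17179869 -> ~0.00400000 s (the default 4 ms tick)
    # 17208078 -> ~0.00400657 s (the recalibrated value in the example)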

Cisco recommends that you let the router run for a week or so in normal network conditions and then use the wr mem command in order to save the value. This gives you an accurate figure for next reboot and allows NTP to synchronize more quickly.

Use the no ntp clock-period command when you save the configuration for use on another device, because this command drops the clock-period back to the default of that particular device. The router can then recalculate the true value (though this can reduce the accuracy of the system clock during that recalculation period).

Remember that this value is hardware dependent, so if you copy a configuration and use it on different devices, you can cause problems. Cisco plans to replace NTP version 3 with version 4 in order to resolve this issue.

If you are not aware of these issues, you can decide to manually tinker with this value. In order to migrate from one device to another, you can decide to copy the old configuration and paste it on the new device. Unfortunately, because the ntp clock-period command appears in the running-config and startup-config, NTP clock-period is pasted on the new device. When this happens, NTP on the new client always goes out of sync with the server with a high peer dispersion value.

Instead, clear the NTP clock-period with the no ntp clock-period command, then save the configuration. The router eventually calculates a clock-period appropriate for itself.

The ntp clock-period command is no longer available in Cisco IOS software Version 15.0 or later; the parser now rejects the command with the error:

You are not allowed to configure the clock-period manually, and the clock-period is not allowed in the running-config. Even if the command was saved to the startup config by an earlier Cisco IOS version (such as 12.4), the parser rejects it when the startup config is copied to the running config on boot-up.

The new, replacement command is ntp clear drift.

Related Information

  • Support Forum Thread: NTP clock-period not configured
  • Network Time Protocol: Best Practices White Paper
  • Troubleshoot Network Time Protocol (NTP)
  • Cisco Technical Support & Downloads

How to eliminate the delay in NTP synchronization?

Currently, I am doing NTP time synchronization between two Ubuntu PCs connected by a LAN cable. After I finished the setup, I found that the delay is around 0.170 s (170 ms), which is not acceptable since I expect a delay less than 30 ms. What could be the reasons causing the delay? And how could I solve it?


  • how do you measure this delay? –  pim Commented Apr 24, 2018 at 9:17

I assume you use ntpq , which gives delay in milliseconds. 0.170 in the delay column of ntpq -p means a delay of 0.170ms, not 0.170 seconds.

0.170ms is a reasonable delay over a switched ethernet. 170ms does not make a lot of sense.

Explanation of the output is given in the ntpq documentation :

peers Obtains a current list of peers of the server, along with a summary of each peer's state. Summary information includes the address of the remote peer, the reference ID (0.0.0.0 if this is unknown), the stratum of the remote peer, the type of the peer (local, unicast, multicast or broadcast), when the last packet was received, the polling interval, in seconds, the reachability register, in octal, and the current estimated delay, offset and dispersion of the peer, all in milliseconds. The character at the left margin of each line shows the synchronization status of the association and is a valuable diagnostic tool. The encoding and meaning of this character, called the tally code, is given later in this page.

My formatting.


  • Oh my god. I think you are correct. I thought the unit of delay in ntpq -p is in seconds. Thank you very much. –  Teddy_NTU Commented Apr 25, 2018 at 5:57


