Real-World Computing: Time

When the first true personal computer, the Apple II, was introduced in 1977, the state of the art did not include a built-in clock. To the Apple II, one day was the same as the next; one moment in time was indistinguishable from another. The operating system, Apple DOS, did not keep track of time the way you might expect an operating system to. The only way a program could recognize the passage of time was to rely on the CPU clock frequency, just over 1 MHz. Knowing exactly how many CPU clock cycles each instruction took to run (which was easy to predict in the simple days of the 6502 CPU), a program could determine how much time had elapsed. However, this was impractical for long-term timekeeping, since every piece of machine code that might be executed would have to be known and analyzed in advance. I recall seeing some add-on products that provided real-time clock data for timing-sensitive applications.

With the introduction of the IBM PC in 1981, personal computers gained a simple and effective way to keep time. The operating system, IBM PC-DOS, was automatically interrupted by the hardware timer about 18.2 times per second. At each interrupt, PC-DOS would increment an internal counter. When the correct number of these "timer ticks" had occurred, PC-DOS would increment the seconds value of its current time. In this way it could accurately keep track of time on a long-term basis. Days, months, or even years later, the 18.2-times-per-second heartbeat would keep the current date and time updated, regardless of what else might be running on the computer.
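The odd-looking 18.2 figure falls out of the PC's timer hardware: the 8253 interval timer was driven at about 1,193,182 Hz (the 14.31818 MHz master crystal divided by 12) and, by default, counted down a full 16-bit cycle of 65,536 before firing an interrupt. A quick check of the arithmetic (the constants are from the IBM PC hardware; the script itself is just an illustration):

```python
PIT_HZ = 14_318_180 / 12    # 8253 timer input clock, ~1,193,182 Hz
DIVISOR = 65_536            # default: full 16-bit countdown per interrupt

tick_hz = PIT_HZ / DIVISOR          # interrupts per second
ticks_per_day = tick_hz * 86_400    # counter increments per day

print(f"{tick_hz:.4f} ticks/second")    # ~18.2065
print(f"{ticks_per_day:.0f} ticks/day") # ~1.57 million
```

Dividing a counter that ticks roughly 1.57 million times a day down to whole seconds is all the bookkeeping PC-DOS had to do.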

The IBM PC, however, did not keep track of the time when it was turned off. Every time the computer was started, it would assume the current time was 1:00 AM on Tuesday, January 1, 1980, unless you told it otherwise. The IBM PC AT introduced a "real-time clock" chip, which not only kept track of the time independently of the operating system, but also had a battery backup to keep the clock running while the rest of the computer was turned off. When the PC AT was started, it would read the current date and time from the real-time clock chip, and PC-DOS would use that to initialize its date and time. From that point onward, PC-DOS would use the 18.2 Hz heartbeat to update the clock.

The introduction of the real-time clock chip made working with the computer somewhat easier. However, the poor quality of the real-time clock hardware was a nuisance. If you wear a wristwatch, even a $5 one, you almost certainly have a better timepiece on your wrist than the one inside your computer. The real-time clock could easily gain or lose several minutes per day. Real-time clock hardware has improved -- today's computers are pretty good at keeping reasonably accurate time.

Today, there are two major issues with time on your computer: accuracy and time zone.

Accuracy

Look at the current time shown on your computer. How close is that to the actual time? You can hear the current time announced on radio and TV stations, or by calling a telephone time service, but how accurate are they? Why does this matter?

In today's networked world, computers almost inevitably talk to one another. Since you're reading this, your computer has probably recently made a connection to the computer serving this web site. Part of the HTTP protocol involves passing date and time information between the browser and the web server. In particular, the web server might send back an "Expires:" header, which tells the browser the date and time after which it can no longer rely on a cached copy of the downloaded content. If your computer's clock is not correct, it might misinterpret the date and time sent with the "Expires:" header. For example, if your computer's clock is accidentally set one year in the past, then your browser will think it can cache the content for a year longer than it should. If the content should expire tomorrow, your browser may incorrectly cache it for the next 366 days.
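The effect of a skewed clock on "Expires:" handling can be sketched in a few lines of Python. The header format is the standard HTTP date format; the dates and the one-year-slow clock are hypothetical values chosen to match the example above:

```python
from datetime import datetime, timedelta, timezone
from email.utils import parsedate_to_datetime

def cache_lifetime(expires_header: str, local_now: datetime) -> timedelta:
    """How long a browser whose clock reads local_now would keep a cached copy."""
    expires = parsedate_to_datetime(expires_header)  # parses HTTP date format
    return expires - local_now

true_now = datetime(2024, 6, 1, tzinfo=timezone.utc)
header = "Sun, 02 Jun 2024 00:00:00 GMT"    # content really expires "tomorrow"

print(cache_lifetime(header, true_now))                        # 1 day, correct
print(cache_lifetime(header, true_now - timedelta(days=365)))  # ~366 days
```

The server's header is the same in both cases; only the browser's idea of "now" differs, and that difference goes straight into the cache lifetime.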

The browser example is one that would not be significantly affected by small inaccuracies in your computer's time. If your clock is 30 seconds fast, you'll probably never notice the effect this has on browser content caching. However, consider the NFS file server protocol. When an NFS client writes a file on the server, the server decides what the last-modified timestamp should be. If the client then views that timestamp, it will appear to be in the future if the client's clock is behind the server's, and too far in the past if the client's clock is ahead of the server's. This can have serious effects on programs such as make, which rely on accurate timestamp information. It is essential that networked computers all keep the same time, preferably to within a second of each other.
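Here is a sketch of how clock skew confuses a make-style rebuild check. The function is a simplification of what make does (it compares the timestamps it gets from stat); the timestamps are hypothetical:

```python
def needs_rebuild(target_mtime: float, source_mtime: float) -> bool:
    """make rebuilds a target when any of its sources is newer than it."""
    return source_mtime > target_mtime

# The client compiles output.o locally, stamping it with its own clock (1000),
# then saves source.c to an NFS server whose clock is 30 seconds ahead (1030).
target_mtime = 1000.0   # stamped by the client
source_mtime = 1030.0   # stamped by the server

# The source *appears* newer than the freshly built object file, so make
# rebuilds needlessly; with the skew reversed, make could wrongly skip a
# rebuild instead -- a far more dangerous failure.
print(needs_rebuild(target_mtime, source_mtime))  # True
```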

The good news is that since computers are so well networked, they can easily and quickly check their time with known timekeeper servers, and update their own clocks if necessary. There are several different ways to do this in common use today, in increasing order of complexity:

The Daytime Protocol

The Daytime Protocol is defined in RFC 867. Whenever a client connects, the server sends its date and time as a human-readable character string. Since the exact format of the string is not defined, the protocol is of limited use to programs, which cannot know exactly what to expect. However, it is still useful for diagnostics, if you can find a server that supports it.
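A minimal Daytime client is just a TCP connection to port 13; whatever text the server sends back before closing the connection is the answer. A sketch, assuming you can reach a server that still offers the service (the NIST host name in the comment is such an assumption):

```python
import socket

def clean(raw: bytes) -> str:
    # RFC 867 leaves the format open; just decode and tidy the text.
    return raw.decode("ascii", errors="replace").strip()

def daytime(host: str, port: int = 13, timeout: float = 5.0) -> str:
    """Read the Daytime Protocol (RFC 867) string from host."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        chunks = []
        while True:
            data = s.recv(1024)
            if not data:        # server closes the connection when done
                break
            chunks.append(data)
    return clean(b"".join(chunks))

# Example (requires network): print(daytime("time.nist.gov"))
```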

The Time Protocol

The Time Protocol is defined in RFC 868. Whenever a client connects, the server sends its date and time as a machine-readable binary number: the number of seconds that have elapsed since midnight GMT on January 1, 1900. Clients can use this service to set their clocks automatically.
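A Time Protocol client reads a single 32-bit big-endian integer from TCP port 37 and shifts it from the 1900 epoch to whatever epoch the local system uses (1970 on Unix). The 2,208,988,800-second constant is simply the gap between the two epochs: 70 years of 365 days plus 17 leap days. A sketch (the NIST host name in the comment is an assumption):

```python
import socket
import struct
from datetime import datetime, timezone

RFC868_TO_UNIX = 2_208_988_800   # seconds from 1900-01-01 to 1970-01-01

def rfc868_to_unix(seconds_since_1900: int) -> int:
    return seconds_since_1900 - RFC868_TO_UNIX

def time_protocol(host: str, port: int = 37, timeout: float = 5.0) -> datetime:
    """Fetch the time via RFC 868 and return it as a UTC datetime."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        raw = s.recv(4)                       # server sends 4 bytes, then closes
    (since_1900,) = struct.unpack("!I", raw)  # 32-bit unsigned, big-endian
    return datetime.fromtimestamp(rfc868_to_unix(since_1900), tz=timezone.utc)

# Example (requires network): print(time_protocol("time.nist.gov"))
```

Note that a 32-bit count of seconds from 1900 wraps around early in 2036, one reason later protocols carry more information.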

Simple Network Time Protocol

The Simple Network Time Protocol is defined in RFC 2030. It is similar in concept to the Time Protocol, but the information passed between the client and the server is much more detailed. For example, the resolution of the time information in SNTP is about 200 picoseconds (light travels only 6 cm in 200 ps). Today's computers are fast enough that resolution much finer than one second is genuinely needed.

Network Time Protocol

The Network Time Protocol is defined in RFC 1305. NTP is a full client-server protocol: the client runs continuously and periodically polls one or more NTP servers to determine the correct time. Even if some of the servers don't actually have the correct time, the client can discard their data and use the known good data from the other servers. The NTP client also cooperates with the operating system to adjust the rate of the system clock, so that it remains as accurate as possible between polls.
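At the heart of the clock adjustment is the offset NTP computes from four timestamps: t0 (request sent, client clock), t1 (request received, server clock), t2 (reply sent, server clock), and t3 (reply received, client clock). Assuming the network delay is the same in both directions, the client's clock error and the round-trip delay work out as follows (the numbers are a made-up illustration):

```python
def ntp_offset_and_delay(t0, t1, t2, t3):
    """Clock offset and round-trip delay from the four NTP timestamps.

    A positive offset means the local clock is behind the server's; the
    estimate is exact only when the outbound and return delays are equal.
    """
    offset = ((t1 - t0) + (t2 - t3)) / 2
    delay = (t3 - t0) - (t2 - t1)
    return offset, delay

# Client clock 5 s slow, 0.1 s of network delay in each direction:
offset, delay = ntp_offset_and_delay(t0=100.0, t1=105.1, t2=105.1, t3=100.2)
print(offset, delay)   # offset ~5.0 s, delay ~0.2 s
```

In the real protocol these are 64-bit NTP timestamps and the client filters many such samples before touching the clock, but the arithmetic is the same.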

Time Zone

Greg Hewgill <>