In What Time Is It? we covered the human mess of timekeeping – sundials, calendars, time zones, daylight saving, and the volunteer-maintained database that keeps your phone from lying to you. All of that assumes we know what a “second” actually is. But what is a second? How do you count one? And what happens when you count very, very carefully?
What are we actually counting?
A second used to be defined as 1/86,400 of a mean solar day. Simple enough – divide the day into hours, the hours into minutes, the minutes into seconds, done. The problem is that the Earth’s rotation isn’t constant. Tidal friction from the moon is gradually slowing us down – “gradually” here meaning that the day lengthens by roughly 2.3 milliseconds per century, which sounds negligible until you’re trying to land a spacecraft or synchronise financial transactions across continents.
In 1967, the 13th General Conference on Weights and Measures decoupled the second from the Earth entirely. A second is now defined as 9,192,631,770 periods of the radiation corresponding to the transition between two hyperfine levels of the ground state of the caesium-133 atom. This is a mouthful, but it has the enormous advantage of being the same everywhere in the universe (with a caveat we’ll get to in Time Is Weirder Than You Think), and it’s measurable to extraordinary precision.
The trouble is that atomic seconds and solar days are now measuring different things. Atomic time marches on with metronomic precision. Solar time wobbles and drifts. They disagree, and the disagreement grows over time.
From quartz to caesium
Before atomic clocks, the most precise portable timekeepers were quartz crystal oscillators. The key property is piezoelectricity: certain crystals, when mechanically stressed, generate a voltage – and conversely, when voltage is applied, they vibrate. A quartz crystal cut to the right shape and size vibrates at a very stable frequency. In a wristwatch, that frequency is typically 32,768 Hz. That’s 2 to the power of 15, chosen because it can be divided down to exactly one pulse per second using a simple binary counter circuit – fifteen halvings and you’re there.
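The divide-down chain is simple enough to sketch in a few lines – a toy illustration of the counting logic, not actual watch firmware:

```python
# A 32,768 Hz crystal reaches 1 Hz after fifteen halvings,
# which is why watchmakers chose a power of two.
freq = 32_768  # Hz, the standard watch crystal frequency
halvings = 0
while freq > 1:
    freq //= 2       # each flip-flop stage in the counter halves the frequency
    halvings += 1

print(freq, halvings)  # 1 Hz after 15 stages
```

Any non-power-of-two frequency would leave a remainder somewhere in the chain; 32,768 divides down to exactly one pulse per second.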
The first quartz clock was built at Bell Telephone Laboratories in 1927 by Warren Marrison and J.W. Horton. It was roughly the size of a large refrigerator. By the late 1960s, Seiko had miniaturised the technology enough to fit it on a wrist – the Seiko Astron, released on Christmas Day 1969, was the world’s first commercially available quartz wristwatch. It cost as much as a small car. Within a decade, quartz watches were cheap enough to give away as promotional items. The Swiss watch industry, which had dominated mechanical horology for centuries, was nearly destroyed in what’s now called the Quartz Crisis. Accuracy that had once required master craftsmen and hand-finished movements was suddenly available from a factory in Japan for a few dollars.
Quartz watches were accurate to within a few seconds per month – far better than any mechanical watch. But quartz crystals aren’t perfect. Their frequency drifts with temperature, age, and mechanical stress. A temperature-compensated crystal oscillator (TCXO) can hold stability to within a second or two per year. An oven-controlled crystal oscillator (OCXO), which keeps the crystal at a constant temperature in a tiny heated enclosure, does better still – a few milliseconds per day. For everyday timekeeping, even basic quartz is more than adequate. For science, navigation, and telecommunications, it matters enormously that “a few seconds per month” isn’t zero.
The leap to atomic timekeeping came from the insight that atoms are, in a sense, nature’s own frequency standards. The idea was first proposed by Isidor Rabi at Columbia University in 1945, building on his Nobel Prize-winning work on molecular beam magnetic resonance. Every caesium-133 atom in the universe vibrates at exactly the same frequency when it transitions between two specific energy states. No manufacturing variation. No wear. No temperature drift (at least, not in the transition itself). If you can build a device that locks onto that frequency and counts the vibrations, you have a clock that’s stable to a degree that mechanical and quartz clocks can’t approach.
The first working caesium beam clock, built by Louis Essen and Jack Parry at the National Physical Laboratory in Teddington, England, began operating in 1955. Within two years it had demonstrated accuracy of one second in 300 years – already orders of magnitude better than any quartz oscillator. By 1967, it was good enough that the international scientific community decided to redefine the second itself based on the caesium atom rather than the Earth’s rotation. The atom had become more reliable than the planet.
Atomic clocks and their limits
How they work. A caesium beam clock works by exposing a beam of caesium-133 atoms to microwave radiation and tuning the frequency until the maximum number of atoms change energy states. That peak frequency – 9,192,631,770 Hz exactly, by definition – is the second. Hydrogen maser clocks use a similar principle with hydrogen atoms and are more stable over short periods, making them excellent for applications that need precise frequency over hours rather than years.
Optical lattice clocks represent the current frontier. They use atoms (often strontium or ytterbium) trapped in a lattice of laser light and interrogated with optical-frequency lasers rather than microwaves. The higher frequency means finer measurement. The best optical lattice clocks at NIST and JILA in the US, and at the University of Tokyo, have demonstrated accuracy of roughly one second in 15 billion years – longer than the age of the universe (Bloom et al., 2014, Nature). In 2024, the BIPM began formally considering redefining the second based on optical clocks.
But even they drift. Every clock, no matter how precise, has some uncertainty. The best caesium fountain clocks are accurate to roughly one second in 300 million years. Optical lattice clocks are better by orders of magnitude, but “better” isn’t “perfect”. No clock is perfect. This is a fundamental consequence of quantum mechanics: measurement always has uncertainty.
Clock ensembles. UTC is not kept by a single clock. It’s derived from a weighted average of approximately 450 atomic clocks in laboratories across more than 80 countries. The Bureau International des Poids et Mesures (BIPM) in Paris collects data from all of them, weights each clock by its past performance and stability, and computes a combined timescale called UTC. The results are published retrospectively in a document called Circular T, which means that UTC is, strictly speaking, only known after the fact. The UTC that your phone shows you is actually an approximation, steered to match the BIPM’s post-hoc calculation as closely as possible.
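The weighting idea can be shown with a toy sketch – the lab names, offsets, and weights below are invented, and the BIPM’s real algorithm (ALGOS) is far more sophisticated:

```python
# Toy ensemble timescale: average several clocks' readings, weighting
# each clock by a (hypothetical) score derived from past stability.
readings = {           # each clock's reported offset vs. a reference, in seconds
    "lab_a": 0.000_003,
    "lab_b": -0.000_001,
    "lab_c": 0.000_002,
}
weights = {            # hypothetical weights from past performance
    "lab_a": 0.5,
    "lab_b": 0.3,
    "lab_c": 0.2,
}

# The ensemble estimate: a weighted mean of the individual clocks.
ensemble = sum(readings[c] * weights[c] for c in readings) / sum(weights.values())
```

A poorly behaved clock gets a small weight, so one bad instrument can’t drag the whole timescale off course.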
Leap years and leap seconds
Most people know about leap years. The Earth takes approximately 365.2422 days to orbit the sun, so every four years we add a day to February to stop the calendar drifting away from the seasons. Except every 100 years we skip the leap year. Except every 400 years we don’t skip it. So 1900 wasn’t a leap year, but 2000 was. This approximation is good to about one day in 3,236 years, which is close enough that nobody currently alive needs to worry about the next correction.
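The full Gregorian rule fits in a single expression:

```python
def is_leap_year(year: int) -> bool:
    """Gregorian rule: divisible by 4, except centuries,
    except centuries divisible by 400."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# 1900 wasn't a leap year, but 2000 was:
print(is_leap_year(1900), is_leap_year(2000))  # False True
```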
Leap seconds are a much more recent and much messier invention. Since atomic clocks and the Earth’s rotation disagree, the International Earth Rotation and Reference Systems Service (IERS) periodically adds a leap second to UTC to keep it within 0.9 seconds of solar time. They’ve done this 27 times since 1972, always on the last day of June or December. All 27 have been positive – adding a second because the Earth is slowing down. But in recent years the Earth has unexpectedly sped up slightly, and for a while there was serious discussion about whether we’d need a negative leap second – removing a second, something that has never been done and that most software has never been tested against. The prospect of 23:59:58 being followed directly by 00:00:00, skipping 23:59:59 entirely, was enough to give the timekeeping community genuine anxiety.
This sounds harmless but it drives software engineers to quiet despair. A leap second means that the sequence 23:59:59 is followed by 23:59:60 before 00:00:00. Most software doesn’t expect a minute to have 61 seconds. When a leap second was inserted in 2012, it crashed Reddit, Gawker, LinkedIn, Foursquare, and Yelp because of a Linux kernel bug in the way NTP interacted with the high-resolution timer system.
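The assumption is baked into everyday tooling. Python’s standard datetime, for instance, refuses to represent the 61st second at all:

```python
from datetime import datetime

# The leap second inserted at the end of 2016 was 23:59:60 UTC -
# a moment Python's datetime simply cannot express.
try:
    datetime(2016, 12, 31, 23, 59, 60)
    representable = True
except ValueError:   # "second must be in 0..59"
    representable = False

print(representable)  # False
```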
Google’s approach is to “smear” the leap second – they slightly slow down their clocks over a period of hours so the extra second is absorbed gradually. Amazon does something similar, though with a different smear profile. This is practical but means that during the smear window, Google’s clocks disagree with Amazon’s, and both disagree with everyone else’s, and a timestamp generated on one platform during that window doesn’t mean quite the same thing as a timestamp generated on another. If you’re processing financial transactions that cross cloud providers during a leap second smear, you’d best not think too hard about what “the same time” means.
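A linear smear is easy to sketch. Assuming a 24-hour window, as in the scheme Google has described publicly, the offset between smeared time and true UTC grows linearly from zero to one full second:

```python
# Sketch of a linear leap-second smear over a 24-hour window:
# every smeared second is 1/86,400 longer than an SI second, so the
# extra second is fully absorbed by the end of the window.
SMEAR_WINDOW = 86_400  # seconds over which the smear is spread

def smear_offset(seconds_into_window: float) -> float:
    """Seconds by which smeared time lags UTC, from 0 up to 1."""
    return seconds_into_window / SMEAR_WINDOW

print(smear_offset(0))       # 0.0 at the start of the window
print(smear_offset(43_200))  # 0.5 halfway through
print(smear_offset(86_400))  # 1.0 - the whole leap second absorbed
```

A provider using a different window or a cosine-shaped profile would report different offsets at the same instant – which is exactly the cross-platform disagreement described above.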
The good news, or bad news depending on your perspective, is that in 2022 the General Conference on Weights and Measures voted to abolish leap seconds by 2035. UTC and solar time will be allowed to drift apart, with a correction planned at some larger threshold – perhaps a “leap minute” in a century or so. Astronomers who need solar time will adjust. The rest of us will stop having to worry about 61-second minutes.
Describing a moment
Given all of this, how do you actually specify an exact moment in time?
You might think a timestamp like “2026-04-28T14:30:00Z” does the job. And it does, mostly. The “Z” means UTC, which is a specific timescale maintained by a weighted average of atomic clocks around the world. But UTC includes leap seconds, which makes the relationship between any two UTC timestamps ambiguous unless you know how many leap seconds occurred between them.
This is where TAI – International Atomic Time – comes in. TAI is a pure count of SI seconds since an epoch in 1958, with no leap seconds. It’s the “true” atomic timescale. UTC is defined as TAI minus some whole number of seconds (currently 37). If you want to measure the exact duration between two events, TAI is what you want. If you want to know roughly what angle the sun is at, UTC is what you want.
Then there’s GPS time, which started counting at the same moment as UTC in January 1980 and has never inserted a leap second since. GPS time is currently 18 seconds ahead of UTC.
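The relationships between these timescales are just constant offsets – constants that change every time a leap second is inserted. A sketch, using the values current at the time of writing:

```python
# Offsets between timescales as of the 37-leap-second era (post-2016).
# These constants must be updated whenever a leap second is inserted.
TAI_MINUS_UTC = 37  # seconds: TAI has counted every SI second since 1958
GPS_MINUS_UTC = 18  # seconds: GPS was aligned with UTC in January 1980

def utc_to_tai(utc_seconds: float) -> float:
    return utc_seconds + TAI_MINUS_UTC

def utc_to_gps(utc_seconds: float) -> float:
    return utc_seconds + GPS_MINUS_UTC

# TAI and GPS time differ by a constant 19 seconds - the leap seconds
# that accumulated between 1958 and 1980 are frozen into GPS time.
print(utc_to_tai(0) - utc_to_gps(0))  # 19
```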
And there are others. TDB, Barycentric Dynamical Time, used for solar system ephemerides. TCG, Geocentric Coordinate Time, which ticks slightly faster than clocks on Earth’s surface because it’s defined for a clock at rest and infinitely far from the Earth’s gravitational field. Each serves a different purpose, each disagrees with the others by small but significant amounts.
The point is that “what time is it?” is never a single question. It’s really “what time is it, in which timescale, as measured by which clock, where?”
For most software, the practical answer is: use UTC, store it as an ISO 8601 string or a Unix timestamp (seconds since midnight on 1 January 1970, UTC, not counting leap seconds), and convert to local time for display only. This works for the vast majority of applications. But if you need to compute precise durations across leap second boundaries, or compare timestamps from different systems that may have been smearing at different rates, or handle historical dates in jurisdictions that have changed their timezone rules, “just use UTC” stops being simple fast. The rabbit hole is always deeper than it looks.
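In Python, that advice looks like this – storing the instant in UTC and converting only at the display boundary (Australia/Perth is used purely as an example zone):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # the tz database, in the stdlib since Python 3.9

# Store the moment in UTC...
stored = datetime(2026, 4, 28, 14, 30, tzinfo=timezone.utc)

# ...and convert to local time only when showing it to a user.
local = stored.astimezone(ZoneInfo("Australia/Perth"))  # UTC+8, no DST

print(stored.isoformat())  # 2026-04-28T14:30:00+00:00
print(local.isoformat())   # 2026-04-28T22:30:00+08:00
```

The stored value never changes; only its rendering does, so timestamps from different users remain directly comparable.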
NTP and time synchronisation
Having an accurate clock is only half the problem. You also need to get that accuracy to the devices that need it. This is the job of the Network Time Protocol, NTP.
NTP was designed by David Mills at the University of Delaware in 1985, and its descendants still synchronise nearly every clock on the internet. The protocol works by exchanging timestamps between a client and a server, measuring the round-trip delay, and using the result to estimate the offset between the two clocks. The clever bit is in the statistics – NTP uses filtering algorithms to reject noisy measurements and converge on the best estimate of the true time.
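The core calculation uses the four timestamps of a single exchange – client send, server receive, server send, client receive – to estimate offset and delay together. A minimal sketch of that arithmetic:

```python
# NTP's on-wire calculation from one request/response exchange:
# t0 = client send, t1 = server receive, t2 = server send, t3 = client receive.
def ntp_offset_delay(t0: float, t1: float, t2: float, t3: float) -> tuple[float, float]:
    offset = ((t1 - t0) + (t2 - t3)) / 2  # estimated error of the client clock
    delay = (t3 - t0) - (t2 - t1)         # round-trip time spent on the network
    return offset, delay

# A client whose clock runs 0.05 s slow, with 10 ms of delay each way:
offset, delay = ntp_offset_delay(t0=100.000, t1=100.060, t2=100.061, t3=100.021)
print(offset, delay)  # 0.05 s offset, 0.02 s round-trip delay
```

The formula assumes the network path is symmetric; when it isn’t, half the asymmetry leaks into the offset estimate, which is why the real protocol filters many samples rather than trusting one.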
The system is hierarchical. Stratum 0 sources are the reference clocks themselves – caesium standards, GPS receivers, radio stations like DCF77 in Germany or WWVB in the US that broadcast time signals. Stratum 1 servers are directly connected to a Stratum 0 source. Stratum 2 servers synchronise to Stratum 1, and so on. Your laptop or phone is typically Stratum 3 or 4, synchronised to a pool of public NTP servers.
That pool – the NTP Pool Project – is another piece of critical internet infrastructure run almost entirely by volunteers. Over 4,000 servers donated by individuals and organisations around the world, serving billions of time queries per day. When your phone synchronises its clock, it’s probably talking to a server that someone is running in their spare time, on their own hardware, at their own expense. Like the tz database, like the DNS root servers, like so much of the infrastructure the modern world depends on – it works because people choose to make it work. There’s no contract. There’s no SLA. There’s just a community that thinks accurate time matters enough to donate the resources.
The accuracy you can achieve depends on your network. On a local network, NTP can keep clocks within a few hundred microseconds. Over the internet, a few milliseconds is typical. For applications that need tighter synchronisation – financial trading, for instance, or telecommunications – Precision Time Protocol (PTP, IEEE 1588) operates at the hardware level, timestamping packets as they enter and leave the network interface card, and can achieve sub-microsecond accuracy.
GPS is also a time-distribution system, not just a positioning one. In fact, positioning is time distribution – a GPS receiver determines its position by measuring the time it takes signals to arrive from multiple satellites, then solving for the intersection. Each GPS satellite carries multiple atomic clocks (a mix of caesium and rubidium) and broadcasts precise time signals. A GPS receiver on the ground can determine the time to within roughly 10 nanoseconds. Many NTP Stratum 1 servers use GPS as their reference source.
But GPS is a US military system. It was built by the Department of Defense, it’s operated by the US Space Force, and the US government retains the right to degrade or deny the civilian signal at will. They did exactly that until May 2000 – a deliberate error called Selective Availability that made civilian GPS accurate to about 100 metres instead of 10. The military got the good signal. Everyone else got the blurred one.
That dependency on a single nation’s military made other countries nervous. The European Union built Galileo, which began offering initial services in 2016 – a civilian-controlled system from the start, with no equivalent of Selective Availability. Russia has GLONASS, first completed in the mid-1990s and restored to full global coverage in 2011. China has BeiDou, globally operational since 2020. India has NavIC covering the Indian subcontinent.
Modern receivers use multiple constellations simultaneously. Your phone probably tracks GPS, Galileo, and GLONASS at once. More satellites in view means better geometry, faster fixes, and improved accuracy – from roughly 3-5 metres with GPS alone to under 1 metre with multi-constellation receivers. For timing applications, using multiple independent constellations also provides redundancy: if one system has a problem, the others keep you synchronised.
When synchronisation fails, the consequences are real. In 2016, a GPS ground station error introduced a 13-microsecond timing glitch that propagated to GPS-disciplined clocks worldwide. Telecommunications networks that relied on GPS for synchronisation experienced disruptions. In 2019, a Galileo outage left receivers without a valid time signal for several days. Having multiple constellations didn’t prevent the Galileo outage, but it meant that receivers tracking GPS and GLONASS simultaneously kept working while Galileo was down. Redundancy isn’t a theoretical benefit – it’s the difference between “the system degraded” and “the system failed.”
Radio time signals offer a terrestrial alternative. MSF in the UK broadcasts from Anthorn in Cumbria on 60 kHz. DCF77 in Germany broadcasts from Mainflingen near Frankfurt on 77.5 kHz. WWVB in the US broadcasts from Fort Collins, Colorado on 60 kHz. These long-wave signals can reach well over a thousand kilometres and are used by “radio-controlled” clocks and watches – the ones that seem to magically stay accurate without any intervention. They receive the signal, typically at night when propagation is best, and correct themselves against it. The system is elegant and low-tech compared to GPS, but limited in precision to roughly a millisecond and in range to whatever the transmitter can cover.
The dependency chain is worth noting: your phone’s clock depends on NTP, which depends on Stratum 1 servers, which depend on atomic clocks or GPS, which depends on the satellites’ onboard atomic clocks, which depend on the ground control system that monitors and corrects them against the master clock at the US Naval Observatory. Every link in the chain adds a tiny bit of uncertainty. The time on your phone is an estimate, steering toward a post-hoc average of 450 clocks, computed in Paris, distributed through a hierarchy of servers and satellites, and corrected for relativistic effects that Einstein predicted in 1915. It’s close enough. It’s never exact.
Time in the financial markets
Nowhere is the practical importance of precise time synchronisation more visible than in financial trading. The EU’s MiFID II regulation, which came into force in January 2018, requires that timestamps on financial transactions be accurate to within one millisecond of UTC for most trading activities, and within 100 microseconds for high-frequency trading. The US SEC has similar requirements. This isn’t paranoia – it’s about being able to reconstruct the exact order of events when disputes arise or markets crash.
High-frequency trading firms spend millions on low-latency connections and precise clock synchronisation. A difference of a few microseconds can determine who gets a trade filled and who doesn’t. Some firms use rubidium or caesium oscillators at their trading sites, disciplined by GPS, to ensure their timestamps are as close to UTC as hardware allows. Others lease dedicated fibre connections to minimise and stabilise network latency between their servers and the exchange.
The irony is that all this infrastructure – GPS-disciplined atomic clocks, PTP synchronisation, nanosecond-accurate timestamps – exists to coordinate an activity (buying and selling financial instruments) that is fundamentally a human invention. We built clocks precise enough to measure relativistic effects, and we use them to work out who pressed “buy” first.
Jet lag and the body clock
Your body has its own clock, and it doesn’t care about any of this.
The circadian rhythm is an internal cycle that runs on approximately 24.1 hours – slightly longer than a solar day. It’s governed primarily by the suprachiasmatic nucleus, a tiny region in the hypothalamus, and it’s synchronised to the outside world mainly by light. Czeisler et al. demonstrated the near-24-hour intrinsic period in a landmark 1999 study in Science, confirming that humans kept in constant dim light still cycle with remarkable regularity.
Jet lag is what happens when you cross time zones faster than your body can adjust – roughly one day of recovery per time zone crossed. Eastward travel is generally worse than westward because you’re shortening the day, and your body’s natural cycle is slightly longer than 24 hours, making it easier to extend the day than compress it. Living in Perth, I feel this every time I fly to Europe – eight or nine time zones east is brutal, while the return trip west is noticeably easier.
Chronic circadian disruption is an occupational hazard for long-haul flight crews. Studies have linked it to increased cancer risk – the International Agency for Research on Cancer (IARC) classifies shift work involving circadian disruption as “probably carcinogenic to humans” (Group 2A). The body’s clock is not a metaphor. It’s a biological mechanism, and forcing it out of sync has measurable health consequences.
The clock inside everything
We’ve gone from sticks in the ground to laser-trapped atoms oscillating hundreds of trillions of times per second. The precision is breathtaking. But precision brings its own strange problems. When your clocks are accurate enough to detect the difference in gravity between the floor and the ceiling, “what time is it?” stops being a simple question and starts being a question about the structure of spacetime itself.
That’s where things get truly weird. In Time Is Weirder Than You Think, we’ll see what happens when Einstein enters the picture – why GPS satellites need relativistic corrections, why the core of the Earth is younger than the surface, and why time might not flow at all.