How TLS Works: From Trusted Networks to Trust-No-One

October 19, 2026 · 35 min read

Part of Under the Hood — deep dives into the technology we use every day.

The internet was built on trust. Not cryptographic trust – the human kind. People who knew each other, working on machines they controlled, connected by wires they could see. That assumption held for about twenty years. Then it shattered, and we’ve been picking up the pieces ever since.

Trust on a wire

In 1969, the first message sent over ARPANET travelled from UCLA to the Stanford Research Institute. It was supposed to be “LOGIN”. The system crashed after “LO”. A fitting start.

ARPANET connected a small community – universities, defence contractors, government laboratories. Everyone on the network knew, more or less, everyone else. The protocols that emerged from this world – TCP/IP, designed by Vint Cerf and Bob Kahn (which I covered in detail in How Computer Networks Work) – were engineered for reliability and interoperability. They were not engineered for confidentiality. Data travelled in plaintext because there was nobody untrustworthy to read it.

This wasn’t naive. It was a reasonable assumption for a network of a few dozen nodes in friendly institutions. The protocols were solving a genuine, hard problem: how to get packets reliably from one machine to another across unreliable links. Adding encryption to that mix would have been premature complexity in a world where the threat model was “the cable might break”, not “someone is listening”.

But networks grow. And when they grow beyond the walls of the community that built them, the assumption of trust doesn’t scale.

The first attacks – when trust ran out

The early internet was remarkably open. On shared Ethernet networks – which used hubs, not switches – every machine on the local segment could see every packet. Packet sniffing wasn’t a sophisticated attack; it was a matter of putting your network interface into promiscuous mode and reading what came past. On a university LAN in the early 1990s, you could watch passwords, emails, and file transfers scroll by in plain text.

In 1988, the Morris Worm tore through the internet, infecting roughly 10% of the 60,000 machines then connected. It wasn’t a cryptographic attack – it exploited buffer overflows in sendmail and fingerd, and weak password guessing. But it was a watershed moment. The network was no longer a friendly neighbourhood. RFC 1135, delightfully titled “The Helminthiasis of the Internet”, documented the incident and its fallout.

As the web went commercial in the early 1990s, the stakes changed. People started typing credit card numbers into web forms. Those numbers travelled across the open internet in plaintext HTTP, readable by anyone with access to any router or network segment along the path. This was obviously unsustainable.

The problem was made visceral to the general public much later, in 2010, when Eric Butler released Firesheep – a Firefox extension that let anyone on an open WiFi network hijack other people’s sessions on Facebook, Twitter, and dozens of other sites, with a single click. It wasn’t a new attack. But it turned a theoretical problem into something your non-technical friend could demonstrate at a coffee shop. The resulting panic accelerated the push toward universal HTTPS.

Netscape’s answer – SSL is born

In 1994, Netscape Communications had a problem. They wanted to build an internet commerce platform – one where people could safely buy things through a web browser. But HTTP offered no confidentiality whatsoever. They needed a way to encrypt traffic between the browser and the server, transparently, without requiring users to understand cryptography.

The result was the Secure Sockets Layer protocol – SSL.

SSL 1.0 was never released publicly. It had fundamental design flaws that were caught during internal review. Netscape quietly shelved it and started again.

SSL 2.0, released in 1995, was the first public version. It worked, in the sense that it encrypted traffic, but it had serious vulnerabilities. It was susceptible to man-in-the-middle attacks during the handshake, allowed cipher suite downgrade attacks (where an attacker could force weaker encryption), and was vulnerable to truncation attacks where an attacker could silently close a connection. Wagner and Schneier’s analysis catalogued the problems.

SSL 3.0 (1996) was a complete redesign, led by Paul Kocher working with Netscape’s engineering team. It was substantially stronger and became the foundation for everything that followed. RFC 6101 later documented it for historical purposes.

The core idea behind SSL – and its successor, TLS – is elegant:

Before sending secrets, two parties negotiate a shared secret over an insecure channel, then use that secret to encrypt everything after.

That negotiation is the handshake. It’s the cleverest part of the whole system, and it’s worth understanding.

The handshake – what actually happens

Imagine you’re in a crowded room. You need to agree on a secret language with someone on the other side of the room. Everyone can hear everything you shout. How do you agree on a secret without anyone else learning it?

That’s the problem the TLS handshake solves. Here’s what happens, step by step:

1. ClientHello. Your browser sends a message to the server: “Here’s what I support.” This includes the TLS versions it can speak, the cipher suites it understands (which combinations of key exchange, encryption, and integrity algorithms), and a block of random bytes.

2. ServerHello. The server replies: “Let’s use this.” It picks a TLS version and cipher suite from the client’s list, and sends its own block of random bytes.

3. Certificate. The server sends its digital certificate – essentially a document that says “I am example.com, and here’s my public key, signed by a trusted authority.” The certificate includes a chain of signatures leading back to a root Certificate Authority that your browser trusts. (More on this in the PKI section.)

4. Key Exchange. This is where the magic happens. Using the chosen key exchange algorithm, client and server agree on a shared secret – even though the entire conversation has been visible to any eavesdropper. How this works depends on the algorithm; we’ll explore the maths shortly.

5. Finished. Both sides compute a verification hash of the entire handshake transcript so far, encrypted with the new shared secret. If both sides arrive at the same value, neither the messages nor the participants have been tampered with. From this point on, everything is encrypted.

In TLS 1.2 and earlier, a full handshake required two round trips between client and server (2-RTT). TLS 1.3 simplified this dramatically – the key exchange parameters are sent in the ClientHello itself, reducing the handshake to a single round trip (1-RTT). There’s even a 0-RTT resumption mode where a client that has connected before can send encrypted application data in its very first message, though this comes with replay caveats we’ll discuss later.
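You can peek at the raw material of a ClientHello locally with Python’s ssl module – a sketch of what this machine’s client is prepared to offer. The exact versions and suites printed will vary with your Python and OpenSSL build:

```python
import ssl

# Inspect what this client would advertise in its ClientHello,
# before any server is involved.
ctx = ssl.create_default_context()
print("versions:", ctx.minimum_version.name, "to", ctx.maximum_version.name)

# The cipher suites this client supports, in preference order.
for suite in ctx.get_ciphers()[:5]:
    print(suite["protocol"], suite["name"])
```

Run against a real server, the same context drives the full handshake described above – version selection, suite negotiation, and certificate checks included.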

The maths – how secrets travel in public

You don’t need to understand the mathematics to use TLS. But knowing why it works builds justified confidence – and makes the whole system feel less like magic and more like engineering.

RSA

In 1977, Ron Rivest, Adi Shamir, and Leonard Adleman published “A Method for Obtaining Digital Signatures and Public-Key Cryptosystems” – the paper that introduced RSA, the first practical public-key cryptosystem.

RSA is built on a simple asymmetry: multiplying two large prime numbers together is easy; factoring the result back into those primes is computationally infeasible for sufficiently large numbers. If you multiply two 1,024-bit primes, you get a 2,048-bit product. Any computer can do the multiplication in microseconds. No computer on Earth can do the reverse in a useful timeframe.

This asymmetry creates a pair of keys: a public key (which you share with the world) and a private key (which you keep secret). Anything encrypted with the public key can only be decrypted with the private key, and vice versa. In early SSL, the client would generate a random secret, encrypt it with the server’s public RSA key, and send it over. Only the server (with the private key) could decrypt it. Both sides then used that shared secret for symmetric encryption.
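The trapdoor is easy to see with the classic textbook numbers – a toy sketch only, since real keys use primes hundreds of digits long (and `pow(e, -1, phi)` needs Python 3.8+):

```python
# Toy RSA with the classic textbook primes. Never use numbers this small.
p, q = 61, 53
n = p * q                  # public modulus: 3233
phi = (p - 1) * (q - 1)    # 3120
e = 17                     # public exponent, coprime with phi
d = pow(e, -1, phi)        # private exponent: modular inverse of e (2753)

msg = 42
cipher = pow(msg, e, n)    # encrypt with the public key
plain = pow(cipher, d, n)  # decrypt with the private key
assert plain == msg
```

Anyone can compute `cipher` from the public pair (n, e); only the holder of d can reverse it – unless they can factor n back into p and q.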

It worked. But it had a problem: if someone recorded the encrypted traffic and later obtained the server’s private key (through a breach, a legal order, or a mathematical breakthrough), they could decrypt every past session. There was no forward secrecy.

Diffie-Hellman key exchange

A year before RSA, in 1976, Whitfield Diffie and Martin Hellman published “New Directions in Cryptography” – a paper that introduced the idea of public-key cryptography and described the Diffie-Hellman key exchange.

The best analogy for DH is paint mixing. Imagine this:

  1. Alice and Bob publicly agree on a shared base colour – say, yellow. Everyone can see this.
  2. Alice picks a secret colour (say, red) and mixes it with yellow. She sends the resulting orange to Bob. Everyone can see the orange.
  3. Bob picks a secret colour (say, blue) and mixes it with yellow. He sends the resulting green to Alice. Everyone can see the green.
  4. Alice takes Bob’s green and mixes in her secret red. She gets a brownish colour.
  5. Bob takes Alice’s orange and mixes in his secret blue. He gets the same brownish colour.
  6. Nobody watching – who only saw yellow, orange, and green – can work out the final colour. Unmixing paint is hard.

The mathematical version uses modular exponentiation instead of paint, and the “unmixing” problem is the discrete logarithm problem – believed to be computationally infeasible for large numbers. The beauty is that both parties arrive at the same shared secret without ever transmitting it.
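The paint-mixing story translates directly into code. A toy sketch with a tiny group (p = 23, g = 5) – real deployments use 2048-bit groups or elliptic curves:

```python
p, g = 23, 5             # public parameters – the shared "yellow" everyone sees

a = 6                    # Alice's secret "red" (never transmitted)
b = 15                   # Bob's secret "blue" (never transmitted)

A = pow(g, a, p)         # Alice's public mix, shouted across the room: 8
B = pow(g, b, p)         # Bob's public mix, shouted across the room: 19

# Each side combines the other's public value with their own secret.
shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob == 2   # same secret, never sent on the wire
```

An eavesdropper sees p, g, A, and B – recovering a or b from those is the discrete logarithm problem, trivial at this size and infeasible at real sizes.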

Elliptic Curve Cryptography (ECC)

In 1985, Neal Koblitz and Victor Miller independently proposed using elliptic curves for cryptography. The key insight: unlike the ordinary discrete logarithm problem, the elliptic curve discrete logarithm problem has no known sub-exponential attack, which means you can achieve the same level of security with dramatically smaller keys.

A 256-bit ECC key provides roughly the same security as a 3,072-bit RSA key. Smaller keys mean faster handshakes, less bandwidth, and lower computational cost – a significant advantage for mobile devices and high-traffic servers.

ECDHE – the modern default

Put the pieces together and you get ECDHE: Elliptic Curve Diffie-Hellman Ephemeral. This is the key exchange algorithm used by default in TLS 1.3, and the dominant choice in TLS 1.2 deployments.

The “Ephemeral” part is crucial. Instead of reusing the server’s long-term key for key exchange (as RSA key exchange did), ECDHE generates a fresh key pair for every single session. The session key is derived from the ephemeral exchange, used once, and then discarded.

This gives you forward secrecy. Even if an attacker later obtains the server’s long-term private key – through a breach, a court order, or advances in computing – they cannot decrypt past sessions. Each session’s key was unique and is gone. TLS 1.3 made forward secrecy mandatory by removing RSA key exchange entirely. It was one of the most important decisions in the specification.
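The ephemeral idea can be sketched with a toy Diffie-Hellman group (p = 23, g = 5) – real ECDHE does the same dance over curves like X25519, but the lifecycle is identical:

```python
import secrets

p, g = 23, 5   # toy group for illustration; real ECDHE uses elliptic curves

def ephemeral_keypair():
    # A brand-new secret for every session – nothing long-lived to steal.
    priv = secrets.randbelow(p - 2) + 1
    return priv, pow(g, priv, p)

a_priv, a_pub = ephemeral_keypair()   # client side, this session only
b_priv, b_pub = ephemeral_keypair()   # server side, this session only

session_key = pow(b_pub, a_priv, p)
assert session_key == pow(a_pub, b_priv, p)
# Both private values are now discarded. A recording of a_pub and b_pub,
# even combined with the server's long-term key, reveals nothing about
# session_key – that is forward secrecy.
```

The server’s long-term key still matters, but only for signing the exchange – proving who you negotiated with, not protecting what you said.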

Symmetric encryption – the workhorse

The asymmetric cryptography (RSA, ECDHE) is only used for the handshake – to agree on a shared secret. Once both sides have that secret, they switch to symmetric encryption for the actual data: algorithms where the same key encrypts and decrypts.

Why? Speed. Symmetric ciphers are orders of magnitude faster than asymmetric ones. The dominant choices today are:

  • AES-GCM (Advanced Encryption Standard in Galois/Counter Mode) – fast, widely supported, hardware-accelerated on most modern CPUs.
  • ChaCha20-Poly1305 – designed by Daniel Bernstein, excellent on devices without AES hardware acceleration (many ARM processors).

Both provide authenticated encryption: they encrypt the data and guarantee it hasn’t been tampered with. The days of encrypt-then-hope-nobody-flips-a-bit are over.
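Authenticated encryption can be sketched in stdlib Python as encrypt-then-MAC – an illustration of the shape (ciphertext plus integrity tag), not the real AES-GCM or ChaCha20-Poly1305 constructions:

```python
import hashlib
import hmac
import secrets

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy hash-based keystream standing in for a real stream cipher.
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def seal(key: bytes, nonce: bytes, plaintext: bytes) -> bytes:
    ct = bytes(a ^ b for a, b in zip(plaintext, keystream(key, nonce, len(plaintext))))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    return ct + tag                          # ciphertext || integrity tag

def open_(key: bytes, nonce: bytes, sealed: bytes) -> bytes:
    ct, tag = sealed[:-32], sealed[-32:]
    expected = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("tampered ciphertext")   # a flipped bit is detected
    return bytes(a ^ b for a, b in zip(ct, keystream(key, nonce, len(ct))))

key, nonce = secrets.token_bytes(32), secrets.token_bytes(12)
sealed = seal(key, nonce, b"hello")
assert open_(key, nonce, sealed) == b"hello"
```

Real AEAD modes fuse encryption and authentication into one pass, but the guarantee is the same: decryption fails loudly if anything was modified in transit.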

PKI – the web of trust that isn’t

The handshake gives you encryption. But encryption alone isn’t enough. If you’re encrypting your traffic to an attacker who’s pretending to be your bank, you’ve achieved nothing useful. You need authentication – proof that the server you’re talking to is who it claims to be.

Certificate Authorities

TLS solves this with Certificate Authorities (CAs) – trusted third parties who verify the identity of a server and sign its certificate. When your browser connects to a site, the server presents its certificate. Your browser checks the signature chain: was this certificate signed by a CA I trust?

Your operating system and browser ship with a pre-installed list of roughly 100-150 trusted root CAs. These are organisations like DigiCert, Let’s Encrypt, Sectigo, and GlobalSign. If the certificate chain leads back to one of these roots, your browser shows the padlock.

Certificate chains

In practice, root CAs don’t sign end-entity certificates directly. Instead, they sign intermediate CA certificates, and those intermediates sign the server’s certificate. The chain looks like:

Root CA → Intermediate CA → Your Server’s Certificate

Why the extra layer? Root CA private keys are extraordinarily valuable. They’re kept offline, in hardware security modules, in physically secured facilities. If a root key were compromised, every certificate it ever signed would be suspect. By using intermediates for day-to-day signing, the root key stays locked away. If an intermediate is compromised, only that intermediate’s certificates need to be revoked – the root survives.

X.509 certificates

The standard format for TLS certificates is X.509, defined in RFC 5280. An X.509 certificate contains:

  • Subject: who the certificate is for (e.g., CN=example.com)
  • Issuer: who signed it (the CA)
  • Public key: the server’s public key
  • Validity period: not-before and not-after dates
  • Signature: the CA’s cryptographic signature over all of the above

When your browser verifies a certificate, it checks that the signature is valid, the certificate hasn’t expired, the domain matches, and the issuing CA chains back to a trusted root. It’s a lot of checking, and it all happens in milliseconds.
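Python’s ssl module performs the same battery of checks; the default client context enables all of them out of the box:

```python
import ssl

# The default context loads the platform's root store and turns on
# the same checks a browser performs during the handshake.
ctx = ssl.create_default_context()
assert ctx.verify_mode == ssl.CERT_REQUIRED   # chain must reach a trusted root
assert ctx.check_hostname is True             # certificate must match the domain
# Validity dates (not-before / not-after) are checked during the
# handshake itself; an expired certificate fails the connection.
```

Disabling either check (as many copy-pasted snippets sadly do) reintroduces exactly the attacker-pretending-to-be-your-bank problem TLS authentication exists to solve.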

The trust problem

Here’s the uncomfortable truth about PKI: you’re trusting every CA in the root store. If any single one of them is compromised or malicious, they can issue a valid certificate for any domain on the internet. Your browser will accept it without complaint.

This isn’t theoretical.

In 2011, DigiNotar, a Dutch CA, was breached. Attackers issued fraudulent certificates for Google, Yahoo, Mozilla, and other major sites. These fake certificates were used to intercept the Gmail traffic of Iranian users – a state-level surveillance operation. When the breach was discovered, every major browser removed DigiNotar from its trust store. The company went bankrupt.

The same year, Comodo (now Sectigo) was breached through compromised registration authority accounts. Fraudulent certificates were issued for major domains including google.com, yahoo.com, and live.com.

These incidents demonstrated that the CA model has a fundamental fragility: it’s only as strong as the weakest CA in the store.

Certificate Transparency

Google’s response to the CA trust problem was Certificate Transparency (CT), described in RFC 6962 and originally proposed by Ben Laurie and Adam Langley.

The idea is simple and powerful: every certificate issued by a CA must be logged in public, append-only logs. These logs are cryptographically verifiable (using Merkle trees) and open for anyone to audit. If a CA issues a rogue certificate, it will appear in the logs. Domain owners can monitor the logs for unexpected certificates. Browsers can require that certificates come with a Signed Certificate Timestamp (SCT) proving they were logged.

CT doesn’t prevent a CA from issuing a bad certificate. But it makes it visible – and in practice, that’s almost as good. The knowledge that you’ll be caught is a powerful deterrent.
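The append-only property rests on Merkle trees. Here’s a toy tree over four “certificates” – RFC 6962’s real trees add domain-separation prefix bytes that this sketch omits:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

# Leaves are hashes of the logged certificates.
leaves = [h(c) for c in [b"cert-a", b"cert-b", b"cert-c", b"cert-d"]]

# Hash pairs upward until a single root remains.
level = leaves
while len(level) > 1:
    level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
root = level[0]

# Changing any certificate changes the root. A log whose root is widely
# witnessed cannot quietly rewrite or remove an entry.
```

Auditors only need the latest root plus short consistency proofs to confirm the log has grown append-only – they never have to replay every entry.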

CAA records

As an additional defence, domain owners can publish CAA (Certification Authority Authorization) records in DNS, as specified in RFC 8659. A CAA record says “only these CAs are allowed to issue certificates for this domain.” CAs are required to check CAA records before issuing a certificate. It’s not bulletproof – a compromised CA can ignore the check – but it adds another layer of accountability.

The Crypto Wars – US export restrictions

The story of TLS cannot be told without the story of how the United States government tried to control cryptography, and how that effort backfired spectacularly.

Cryptography as a munition

Under the International Traffic in Arms Regulations (ITAR) and later the Export Administration Regulations (EAR), the US government classified strong cryptographic software as a munition – in the same category as bombs and fighter jets. Exporting strong encryption outside the United States required a licence that was, in practice, nearly impossible to obtain.

The result was absurd. Netscape shipped two versions of its browser: a domestic version with 128-bit encryption, and an export version limited to 40-bit encryption. The export version was laughably weak – 40-bit keys could be brute-forced in hours even with 1990s hardware. But those were the rules. If you were outside the United States, your “secure” connection was secured with the cryptographic equivalent of a screen door.

This created a two-tier internet: strong security for Americans, weak security for everyone else.

Daniel Bernstein and code as speech

In 1995, Daniel Bernstein – then a mathematics PhD student at UC Berkeley, and the same person behind the TAI64 timestamp format I mentioned in What Time Is It? – wanted to publish a paper about a cryptographic algorithm he’d developed called Snuffle. The US State Department told him he’d need an arms export licence to put the source code on the internet. Bernstein sued.

The case, Bernstein v. United States, went through several rounds in the courts. In 1999, the Ninth Circuit Court of Appeals ruled that source code is protected speech under the First Amendment, and that the export regulations were an unconstitutional prior restraint. It was a landmark decision – one of the first to establish that code is expression.

Phil Zimmermann and PGP

In 1991, Phil Zimmermann published PGP (Pretty Good Privacy) – an encryption tool that gave ordinary people access to strong cryptography. He uploaded it to the internet. The US government opened a criminal investigation, treating Zimmermann as if he had exported weapons.

The investigation dragged on for three years. It was eventually dropped in 1996, but not before making Zimmermann a cause célèbre for digital rights and inadvertently popularising PGP far more than it might have been otherwise.

The book trick

The absurdity of treating code as a munition reached its peak with the “book trick.” The RSA algorithm could be printed in a book – protected by the First Amendment as published material – physically carried out of the United States (books aren’t munitions), and then scanned and OCR’d back into working code abroad.

People actually did this. Applied Cryptography by Bruce Schneier was legally exported with source code listings inside. Adam Back printed the RSA algorithm on a t-shirt, three lines of Perl, with “This shirt is classified as a munition” on the back. You could wear a weapon through customs.

Relaxation and its ghosts

In 2000, the Clinton administration significantly relaxed export controls on commercial encryption. The crypto wars were, for the most part, over. But the damage was already done.

Those weak 40-bit and 512-bit export cipher suites had been baked into SSL/TLS implementations worldwide. Even after the restrictions were lifted, the code remained – dormant in millions of servers and clients, waiting to be exploited.

Fifteen years later, it was.

In 2015, the FREAK attack (Beurdouche et al.) showed that an attacker could force modern clients and servers to downgrade to export-grade RSA – and then break the weakened encryption. The same year, the Logjam attack (Adrian et al.) did the same for export-grade Diffie-Hellman. Both attacks exploited cipher suites that had been obsolete for a decade but were never removed from production systems.

The ghost of 1990s policy haunted 2015 infrastructure. It’s a perfect example of how security debt compounds: a bad decision doesn’t just affect the present; it lies dormant for years, waiting for someone clever enough to exploit it.

SSL to TLS – the versions and why they matter

The evolution from SSL to TLS is a story of progressive hardening – each version responding to real attacks, each deprecation removing options that turned out to be dangerous.

Version | Year | RFC  | Status                     | Key change
SSL 1.0 | 1994 | none | Never released             | Internal Netscape prototype; fundamental flaws found in review
SSL 2.0 | 1995 | none | Deprecated 2011 (RFC 6176) | First public release; vulnerable to downgrade, MITM, and truncation attacks
SSL 3.0 | 1996 | 6101 | Deprecated 2015 (RFC 7568) | Complete redesign by Paul Kocher; much stronger foundation
TLS 1.0 | 1999 | 2246 | Deprecated 2021 (RFC 8996) | SSL handed to IETF; essentially SSL 3.1; vulnerable to BEAST
TLS 1.1 | 2006 | 4346 | Deprecated 2021 (RFC 8996) | Explicit IVs fixed BEAST-class attacks; low adoption (most jumped to 1.2)
TLS 1.2 | 2008 | 5246 | Current (widely deployed)  | AEAD ciphers (AES-GCM), SHA-256, flexible cipher negotiation
TLS 1.3 | 2018 | 8446 | Current (recommended)      | Major cleanup: mandatory forward secrecy, 1-RTT handshake, removed all legacy insecure options

TLS 1.0 (1999) was essentially SSL 3.1, renamed when the protocol was handed from Netscape to the IETF for standardisation. The changes were minor. The vulnerabilities that would later be exploited by the BEAST attack (Duong & Rizzo, 2011) were already present: predictable initialisation vectors in CBC mode.

TLS 1.1 (2006) fixed the BEAST-class vulnerabilities by requiring explicit initialisation vectors. But adoption was slow – most deployments skipped straight to TLS 1.2.

TLS 1.2 (2008) was the real workhorse. It introduced AEAD (Authenticated Encryption with Associated Data) cipher suites, most importantly AES-GCM. It supported SHA-256 for integrity. It allowed much more flexible cipher suite negotiation. TLS 1.2 carried the web for a decade and remains widely deployed.

TLS 1.3 (2018, RFC 8446) was a radical simplification. Eric Rescorla led the specification effort, and the result was a protocol that removed every insecure option accumulated over twenty years:

  • No more RSA key exchange (which provided no forward secrecy)
  • No more CBC mode ciphers (source of BEAST, POODLE, Lucky 13)
  • No more RC4 (broken)
  • No more SHA-1 (broken)
  • No more export ciphers (the ghost of the crypto wars)
  • No more compression (source of CRIME)
  • No more renegotiation vulnerabilities

What remained was a clean, fast, secure core. Forward secrecy is mandatory. The handshake is one round trip. The attack surface is dramatically reduced. If you can choose only one version of TLS to support, choose 1.3.

Notable attacks and vulnerabilities

Every major TLS vulnerability has taught us something, and every lesson has been burned into the specification as a permanent fix. Here’s the roll call.

BEAST (2011)

Thai Duong and Juliano Rizzo demonstrated that CBC mode in TLS 1.0 used predictable initialisation vectors, allowing an attacker to decrypt portions of the encrypted traffic. The attack required the attacker to be on the same network and to inject chosen plaintext (typically through JavaScript in a browser). TLS 1.1’s explicit IVs were the fix.

CRIME (2012) and BREACH (2013)

Rizzo and Duong showed that TLS-level compression leaked information about the plaintext through the compressed size – by making repeated requests with slightly different content and observing the size, an attacker could reconstruct session cookies byte by byte. The fix: disable TLS compression (TLS 1.3 doesn’t support it at all). Gluck, Harris, and Prado then applied the same principle to HTTP-level compression (gzip) in the BREACH attack. Even with TLS compression disabled, if the server compressed HTTP responses, the same information leak existed. BREACH is harder to fix at the protocol level – it requires application-level mitigations like per-request CSRF tokens and response randomisation.

Heartbleed (2014)

The most famous TLS vulnerability wasn’t in the protocol at all – it was in the implementation. A buffer over-read bug in OpenSSL’s heartbeat extension (CVE-2014-0160) allowed an attacker to read up to 64KB of server memory per request. That memory could contain anything: private keys, session cookies, user credentials, plaintext data.

The bug had been present in OpenSSL for over two years before it was discovered. It affected an estimated 17% of the internet’s TLS-enabled servers. The response was enormous: mass certificate revocations, key rotations, and a hard look at the fact that critical internet infrastructure was maintained by a handful of underfunded volunteers. The Core Infrastructure Initiative (now the Open Source Security Foundation) was created partly in response.

Heartbleed didn’t reveal a flaw in TLS. It revealed a flaw in how we fund and maintain the software that implements TLS.

POODLE (2014)

Bodo Möller, Thai Duong, and Krzysztof Kotowicz at Google demonstrated a padding oracle attack on SSL 3.0’s CBC mode. An attacker who could trigger many connections (through JavaScript, say) could recover one byte of plaintext per 256 attempts on average. It was the final nail in SSL 3.0’s coffin – the protocol was deprecated in RFC 7568 the following year.

FREAK (2015)

Beurdouche et al. showed that many servers still accepted export-grade RSA cipher suites (512-bit keys) and that clients could be tricked into using them. The 512-bit keys could then be factored in about seven hours on Amazon EC2 for roughly $100. A twenty-year-old policy decision was still breaking security. The fix: remove export cipher suites from all implementations.

Logjam (2015)

Adrian et al. demonstrated the same downgrade attack against export-grade Diffie-Hellman (512-bit). Worse, they showed that even non-export 1024-bit Diffie-Hellman groups were within reach of nation-state attackers. The fix: use at least 2048-bit DH groups, or (better) switch to ECDHE. TLS 1.3 requires ECDHE with strong curves.

ROBOT (2017)

Bock, Somorovsky, and Young showed that Bleichenbacher’s 1998 RSA padding oracle attack – nearly twenty years old – was still exploitable against many TLS implementations. The attack exploits subtle differences in server behaviour when processing valid versus invalid RSA PKCS#1 v1.5 padding. Despite being known for two decades, implementation errors kept reintroducing the vulnerability. TLS 1.3’s removal of RSA key exchange eliminated the attack surface entirely.

The pattern across all of these is clear: each attack removed an unsafe option from the protocol. The cumulative effect was TLS 1.3 – a specification that is secure not by adding defences, but by removing every feature that had ever been exploited.

The cost of certificates – from $300 to free

For much of TLS history, the technology worked fine but the economics didn’t. SSL certificates cost money – often hundreds of dollars per year. Certificate Authorities charged for the identity verification process, with prices varying by certificate type:

  • Domain Validation (DV): cheapest, proves you control the domain
  • Organisation Validation (OV): more expensive, verifies the organisation exists
  • Extended Validation (EV): most expensive, thorough background checks, used to get the green address bar

For a large company, $300 a year was trivial. For a small business, a personal blog, or an open-source project, it was a barrier. The result was a web where HTTPS was the exception rather than the rule. In 2015, only about 40% of web page loads in Chrome used HTTPS, according to the Google Transparency Report.

StartSSL and the early attempts

In 2005, StartCom launched StartSSL, offering free DV certificates. It was the first major free CA. But StartSSL had a troubled ending: it was quietly acquired by WoSign, a Chinese CA, in 2015. WoSign was subsequently caught backdating certificates to circumvent the SHA-1 deprecation deadline. Mozilla’s investigation led to both WoSign and StartCom being distrusted by all major browsers in 2016. The free certificate market needed a trustworthy player.

Let’s Encrypt

In 2015, the Internet Security Research Group (ISRG), founded by EFF, Mozilla, Akamai, Cisco, and the University of Michigan, launched Let’s Encrypt. Public beta began in December 2015, and general availability followed in April 2016.

Let’s Encrypt was different from previous CAs in several important ways:

  • Free: DV certificates at no cost, forever
  • Automated: Certificates are issued and renewed through the ACME protocol (Automated Certificate Management Environment, RFC 8555) – no human intervention required
  • Short-lived: 90-day certificates (compared to the industry standard of one to two years), encouraging automation and limiting the window of exposure if a key is compromised

The ACME protocol works by having the client prove control of the domain through automated challenges:

  • HTTP-01: Place a specific file at a specific URL on the domain
  • DNS-01: Create a specific DNS TXT record for the domain

A typical setup using the certbot client:

sudo certbot --nginx -d example.com -d www.example.com

That’s it. Certbot contacts Let’s Encrypt, proves domain control, obtains the certificate, configures Nginx, and sets up automatic renewal via a cron job or systemd timer. What used to require a manual process every year now runs unattended.

As of 2024, Let’s Encrypt serves over 300 million active certificates and has issued billions since launch. The stats page shows the curve – it’s a hockey stick.

The ecosystem follows

Let’s Encrypt proved the model, and the ecosystem followed:

  • Cloudflare Universal SSL (2014): Free TLS for all sites on Cloudflare. Massive impact – millions of sites got HTTPS overnight.
  • AWS Certificate Manager (2016): Free TLS certificates for AWS services (CloudFront, ELB, API Gateway). Further normalised the idea that HTTPS should cost nothing.

The result: HTTPS adoption went from ~40% in 2015 to well over 95% by 2025. The economic barrier to TLS is gone. There is no longer a reason for any public website to be served over plain HTTP.

HSTS – closing the last gap

Even with HTTPS available and free, there’s a subtle vulnerability in the transition. When you type example.com into your browser, the first request goes to http://example.com – unencrypted. The server responds with a redirect to https://example.com. But that first, unencrypted request is vulnerable.

In 2009, Moxie Marlinspike demonstrated this with sslstrip, a tool that sat between the user and the server on a local network. It intercepted the HTTP-to-HTTPS redirect and rewrote it, keeping the user on HTTP while making a legitimate HTTPS connection to the server. The user saw no padlock, but most people didn’t check. Credentials, session cookies – all captured in plaintext.

HTTP Strict Transport Security

The fix is HSTS (RFC 6797) – a response header that tells the browser: “From now on, never connect to this domain over HTTP. Always use HTTPS. And don’t even try HTTP first – go straight to HTTPS.”

The header looks like this:

Strict-Transport-Security: max-age=63072000; includeSubDomains; preload
  • max-age: How long (in seconds) the browser should remember this instruction. 63072000 is two years.
  • includeSubDomains: Apply the same rule to all subdomains.
  • preload: A signal that the domain should be included in browser preload lists.

Once a browser receives this header over a valid HTTPS connection, it will refuse to make HTTP connections to that domain for the specified period. Redirects become unnecessary because the browser never tries HTTP in the first place.
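The header above is simple enough to assemble mechanically. Here is a minimal sketch of a helper that builds the value from its three directives – the directive names come from RFC 6797, while the function name and defaults are my own:

```python
# Sketch: constructing a Strict-Transport-Security header value.
# Directive names per RFC 6797; helper name and defaults are illustrative.

def build_hsts(max_age: int, include_subdomains: bool = True, preload: bool = False) -> str:
    """Build a Strict-Transport-Security header value from its directives."""
    parts = [f"max-age={max_age}"]          # seconds the browser remembers the rule
    if include_subdomains:
        parts.append("includeSubDomains")   # apply to every subdomain too
    if preload:
        parts.append("preload")             # signal eligibility for preload lists
    return "; ".join(parts)

# Two years, all subdomains, eligible for the preload list:
value = build_hsts(63072000, include_subdomains=True, preload=True)
print(value)  # max-age=63072000; includeSubDomains; preload
```

Whatever emits it, the header only takes effect when received over a valid HTTPS connection – browsers ignore HSTS sent over plain HTTP, precisely because an attacker could inject it there.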

The HSTS Preload List

There’s still a bootstrap problem: the very first time a user visits a site, they haven’t received the HSTS header yet. That first request is still vulnerable.

The solution is the HSTS Preload List – a list of domains hardcoded into browsers that must always be accessed over HTTPS, from the very first connection. If your domain is on the preload list, there is no vulnerable first request. Chrome, Firefox, Safari, and Edge all use the same list.

Entire top-level domains are on the preload list. If you register a .dev or .app domain, it can never be accessed over plain HTTP – those TLDs were added to the preload list by Google before they were opened for registration.

Beyond the transport layer

TLS protects the transport layer, but security is a system property. Several application-level concerns interact with TLS in important ways.

Session hijacking

If a session cookie is sent over HTTP – even once – an attacker on the same network can steal it and impersonate the user. This is exactly what Firesheep demonstrated. The session cookie is a bearer token: whoever has it is you, as far as the server is concerned.

The fix is the Secure cookie flag, which tells the browser: “Only send this cookie over HTTPS connections. Never over HTTP.” Combined with HSTS (which ensures the browser never makes HTTP requests to the domain), this closes the session hijacking vector.

Three cookie flags are essential for security, specified in RFC 6265bis:

Set-Cookie: session=abc123; Secure; HttpOnly; SameSite=Lax
  • Secure: Only send over HTTPS.
  • HttpOnly: JavaScript cannot access this cookie. Prevents cross-site scripting (XSS) attacks from stealing session tokens.
  • SameSite: Controls whether the cookie is sent with cross-site requests. Strict means never. Lax means only on top-level navigations (clicking a link). None means always (but requires Secure). This mitigates cross-site request forgery (CSRF).
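Python's standard library can emit all three flags. The sketch below uses `http.cookies.SimpleCookie` (the `samesite` attribute needs Python 3.8+); the cookie name and value are illustrative, matching the example header above:

```python
# Sketch: setting the three security flags with the stdlib http.cookies
# module. Flag semantics per RFC 6265bis; cookie contents are made up.
from http.cookies import SimpleCookie

jar = SimpleCookie()
jar["session"] = "abc123"
jar["session"]["secure"] = True      # only ever sent over HTTPS
jar["session"]["httponly"] = True    # invisible to JavaScript (XSS mitigation)
jar["session"]["samesite"] = "Lax"   # sent only on top-level navigations (CSRF mitigation)

header = jar.output(header="Set-Cookie:")
print(header)
```

Most web frameworks expose the same three switches under similar names; the important part is that all three are on by default for session cookies, not bolted on later.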

Form hijacking

Suppose your login page is served over HTTP, but the form posts to an HTTPS endpoint. Sounds fine? It’s not.

An attacker on the network can modify the HTTP-served page in transit. They can change the form’s action attribute to point to their own server. The user, seeing a normal-looking login page, enters their credentials, which are sent to the attacker. The attacker then forwards them to the real server, and the user never knows.

This is why the entire site must be HTTPS – not just the pages that handle sensitive data. Any HTTP page can be modified in transit to do anything.

Mixed content

If an HTTPS page loads resources (scripts, stylesheets, images) over HTTP, those resources can be intercepted and tampered with. A modified JavaScript file can do anything the page can do – read form data, redirect the user, install a keylogger.

Modern browsers block mixed active content (scripts, stylesheets, iframes loaded over HTTP on an HTTPS page) and warn about mixed passive content (images, audio, video). The fix is simple: ensure all resources use HTTPS.
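The active/passive distinction is mechanical enough to sketch. The crude scanner below flags `http://` URLs in common resource attributes of an HTML page, mimicking the browser's block-versus-warn split – real browsers make this decision during resource loading, and the page markup here is invented for illustration:

```python
# Sketch: a crude mixed-content audit for a page meant to be served over
# HTTPS. Active content (scripts, stylesheets, iframes) would be blocked
# by modern browsers; passive content (e.g. images) triggers a warning.
from html.parser import HTMLParser

class MixedContentScanner(HTMLParser):
    ACTIVE = {"script", "link", "iframe"}   # resources that run or style the page

    def __init__(self):
        super().__init__()
        self.active, self.passive = [], []

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name in ("src", "href") and value and value.startswith("http://"):
                (self.active if tag in self.ACTIVE else self.passive).append((tag, value))

page = """
<html><head>
  <script src="http://cdn.example/app.js"></script>
  <link rel="stylesheet" href="https://cdn.example/site.css">
</head><body>
  <img src="http://images.example/logo.png">
</body></html>
"""

scanner = MixedContentScanner()
scanner.feed(page)
print("would be blocked:", scanner.active)
print("would warn:", scanner.passive)
```

A simpler fix than auditing, where possible, is protocol-relative or absolute `https://` URLs for every resource, plus a Content-Security-Policy of `upgrade-insecure-requests` as a safety net.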

Current issues and open questions

TLS in 2026 is more secure and more widely deployed than ever. But several significant challenges remain.

The CA trust model

The fundamental fragility hasn’t changed: every CA in the root store can issue certificates for any domain. Certificate Transparency helps with detection, but it doesn’t prevent issuance. The gap between “a rogue certificate is issued” and “someone notices it in the CT logs” is a window of vulnerability.

Proposals to constrain CAs further – such as CAA records (RFC 8659), which let domain owners declare in DNS which CAs may issue for them; DANE (DNS-Based Authentication of Named Entities, RFC 6698); and stricter CT enforcement – are active areas of work, but none has yet replaced the CA model entirely.

Post-quantum cryptography

The key exchange algorithms at the heart of TLS – ECDHE, RSA, Diffie-Hellman – are all vulnerable to a sufficiently powerful quantum computer running Shor’s algorithm. A quantum computer with enough stable qubits could solve the discrete logarithm and integer factorisation problems that these algorithms rely on.

No such computer exists today, but the threat is taken seriously because of “harvest now, decrypt later” attacks: an adversary can record encrypted traffic today and decrypt it when quantum computers become available.

NIST’s Post-Quantum Cryptography project selected ML-KEM (Module-Lattice-Based Key-Encapsulation Mechanism), based on the CRYSTALS-Kyber algorithm, as the first post-quantum KEM standard (FIPS 203, finalised 2024). Chrome, Firefox, and other clients are already shipping hybrid key exchange – initially X25519Kyber768, now the standardised X25519MLKEM768 – which combines a classical ECDHE exchange with a post-quantum KEM. If the post-quantum algorithm turns out to be flawed, the classical exchange still provides security. If quantum computers arrive, the post-quantum exchange provides protection.
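The crucial property of a hybrid exchange is how the two secrets are combined: the final key is derived from both, so an attacker must break both algorithms. The sketch below illustrates only that combination step with an HKDF over concatenated secrets – the real TLS 1.3 key schedule is more involved, and the secret values here are dummy placeholders, not real X25519 or ML-KEM outputs.

```python
# Illustrative sketch of the combination step in a hybrid key exchange.
# Both shared secrets feed the derivation, so recovering the session key
# requires breaking BOTH the classical and the post-quantum exchange.
import hashlib
import hmac

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    # HKDF-Extract (RFC 5869): condense input keying material into a PRK.
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def hkdf_expand(prk: bytes, info: bytes, length: int = 32) -> bytes:
    # HKDF-Expand (RFC 5869): stretch the PRK into output keying material.
    out, block, counter = b"", b"", 1
    while len(out) < length:
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        out += block
        counter += 1
    return out[:length]

ecdhe_secret = b"\x01" * 32   # placeholder for the X25519 shared secret
mlkem_secret = b"\x02" * 32   # placeholder for the ML-KEM shared secret

# Concatenate the two secrets, then derive the session key from both.
prk = hkdf_extract(b"\x00" * 32, ecdhe_secret + mlkem_secret)
session_key = hkdf_expand(prk, b"hybrid handshake sketch")
print(session_key.hex())
```

Change either input secret and the derived key changes completely – which is exactly the hedge the hybrid construction buys.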

This is happening now, not in the future. If you look at the key exchange in your browser’s connection details for many major sites, you’ll see the hybrid algorithm already in use.

0-RTT replay

TLS 1.3’s 0-RTT resumption mode allows a client to send encrypted application data in its very first message, using a pre-shared key from a previous session. This saves a full round trip of latency – significant for mobile connections and distant servers.

But 0-RTT data has a critical caveat: it can be replayed. An attacker who captures the 0-RTT message can resend it, and the server may process it again. For idempotent requests (like GET), this is usually harmless. For state-changing requests (like POST), it could be dangerous – imagine a replayed payment request.

RFC 8446 section 8 explicitly warns about this. Servers must carefully evaluate which requests are safe to accept in 0-RTT mode. Most implementations limit 0-RTT to safe, idempotent operations.
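One mitigation RFC 8446 discusses is single-use tickets: the server remembers which early-data tickets it has already accepted and rejects repeats. The sketch below combines that with a method allowlist; everything here – class name, ticket format, the unbounded set standing in for a time-windowed cache – is illustrative, not a real implementation:

```python
# Sketch: server-side gating of 0-RTT early data. Two checks: only
# idempotent methods are eligible, and each ticket is accepted once.
# Real servers bound the seen-ticket cache to the ticket validity window.

class EarlyDataGate:
    SAFE_METHODS = {"GET", "HEAD", "OPTIONS"}   # idempotent requests only

    def __init__(self):
        self.seen_tickets = set()

    def accept(self, ticket_id: str, method: str) -> bool:
        if method not in self.SAFE_METHODS:
            return False                        # state changes wait for 1-RTT
        if ticket_id in self.seen_tickets:
            return False                        # same ticket seen twice: replay
        self.seen_tickets.add(ticket_id)
        return True

gate = EarlyDataGate()
print(gate.accept("t1", "GET"))    # True  – first use, safe method
print(gate.accept("t1", "GET"))    # False – replayed ticket rejected
print(gate.accept("t2", "POST"))   # False – state-changing request rejected
```

Rejected early data is not an error in practice: the server simply falls back to processing the request after the full handshake, trading the saved round trip for safety.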

Certificate lifecycle shortening

In 2024, Apple proposed reducing maximum certificate lifetimes from 398 days to just 47 days through CA/Browser Forum ballot SC-081, which passed in 2025 with a phased schedule. The argument: shorter lifetimes reduce the window of exposure if a key is compromised, push the industry toward full automation, and reduce reliance on revocation mechanisms (which have historically been unreliable).

The tradeoff: every organisation that manages TLS certificates must have robust, automated certificate management. Manual renewal every 47 days is not viable. This is good for security – automation is more reliable than humans – but it’s a significant operational shift for organisations still doing certificate management by hand.
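What "robust, automated certificate management" means in practice is a renewal check that runs on a schedule, not a human with a calendar. A common rule of thumb – the one ACME clients such as certbot apply to 90-day certificates – is to renew once roughly a third of the lifetime remains. The helper below is a sketch of that rule applied to hypothetical dates; the function name and threshold are my own choices, not a standard.

```python
# Sketch: deciding when an automated job should renew a certificate.
# Rule of thumb (assumption): renew when a third or less of the
# certificate's lifetime remains.
from datetime import datetime, timedelta

def should_renew(not_before: datetime, not_after: datetime, now: datetime) -> bool:
    """Return True once remaining validity drops to one third of the lifetime."""
    lifetime = not_after - not_before
    remaining = not_after - now
    return remaining <= lifetime / 3

# A hypothetical 47-day certificate:
issued = datetime(2026, 1, 1)
expires = issued + timedelta(days=47)

print(should_renew(issued, expires, issued + timedelta(days=10)))  # False
print(should_renew(issued, expires, issued + timedelta(days=35)))  # True
```

With a 47-day lifetime this threshold fires around day 31 – comfortably often enough that a failed renewal attempt leaves time for retries and alerts before expiry.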

Encrypted Client Hello (ECH)

There’s one piece of information that TLS doesn’t encrypt: the Server Name Indication (SNI) in the ClientHello message. SNI tells the server which website the client wants to connect to – necessary when multiple sites share a single IP address. But because it’s sent before encryption begins, anyone monitoring the connection can see which site you’re visiting, even if they can’t see the content.

Encrypted Client Hello (draft-ietf-tls-esni) addresses this by encrypting the SNI using a public key published in the server’s DNS records. It is still an IETF draft, but Cloudflare supports it server-side and Firefox and Chrome enable it by default when a site advertises the necessary HTTPS DNS records. When widely deployed, it will close one of the last metadata leakage channels in TLS.

TLS everywhere vs. inspection

A tension exists in enterprise environments: organisations often deploy TLS inspection proxies that terminate employees’ TLS connections, inspect the plaintext, and then re-encrypt the traffic with their own certificate (installed as a trusted root on corporate devices). This provides visibility for security monitoring, malware detection, and data loss prevention.

But it breaks the end-to-end model that TLS was designed to provide. The proxy can see everything – passwords, personal communications, health records. The user’s browser shows a padlock, but the encryption terminates at the proxy, not the intended server.

There’s no easy answer here. Both sides have legitimate concerns. The important thing is transparency: users should know when their traffic is being inspected, and the practice should be subject to policy and oversight.

Trust, layered

TLS is not one thing. It’s layers of trust, mathematics, policy, and engineering, refined over thirty years of real-world attacks.

Every attack made it stronger. Every cost barrier removed made it more universal. Every failed export restriction made the community more determined to keep cryptography free and open.

The protocol has survived fundamental design errors (SSL 2.0), implementation catastrophes (Heartbleed), state-level attacks (DigiNotar), and the slow poison of policy decisions that took two decades to fully clean up (export ciphers). It survived not because it was designed perfectly from the start, but because the community that builds and maintains it responds to failure by making things better. It’s the same lesson the GreenBox team learned in their threat modelling work: security is a practice, not a feature.

The web in 2026 is more secure than it has ever been. Over 95% of traffic is encrypted. Certificates are free. Forward secrecy is the default. Post-quantum key exchange is being deployed proactively, before quantum computers arrive. The handshake is faster than ever.

And the work continues. Certificate lifetimes are shrinking. Encrypted Client Hello is approaching standardisation. CT logs are being enforced more strictly. The CA trust model is being scrutinised and constrained.

In A Gentle Guide to Typography, I wrote about how the history of written communication is a history of standardisation – from scribes to printing presses to Unicode. The history of TLS is similar. It’s the story of a community standardising trust, one painful vulnerability at a time.

Security isn’t a product you buy or a box you tick. It’s a practice – a community effort, sustained across decades, by researchers who find the flaws, engineers who fix them, and standards bodies that codify the lessons. Use HTTPS. Automate your certificate management. Update your dependencies. Keep an eye on the CT logs for your domains. And maybe, once in a while, take a moment to appreciate the extraordinary engineering that makes it all work – invisibly, billions of times a day, every time your browser shows that little padlock.