Close your eyes for a second. Actually, don’t – you’re reading. But imagine you did. The world behind your eyelids is dark, maybe some vague phosphene blobs drifting about, and definitely no colour. Now open them (you already have, stay with me) and suddenly: colour. Everywhere. The blue of the sky. The green of a leaf. The specific, impossible-to-describe orange of a Western Australian sunset over the Indian Ocean.
Here’s the thing, though. None of that colour is out there. Not really. Colour is not a property of objects. It’s a perception – a hallucination that your brain and eyes have agreed to maintain, moment by moment, for your entire life (as Stephen Palmer explores in Vision Science). The leaf isn’t green. It absorbs most wavelengths of visible light and reflects the ones we’ve collectively decided to call “green.” The sunset isn’t orange. It’s a particular cocktail of scattered photons that your visual cortex has decided to render in a shade that makes you want to pull over and take a photo.
Colour is, arguably, the greatest conjuring trick in biology.
So let’s pull back the curtain.
What even is colour?
Light is electromagnetic radiation. That’s it. It’s the same stuff as radio waves, microwaves, X-rays, and gamma rays – just at a different frequency. The slice of the electromagnetic spectrum that human eyes can detect runs from roughly 380 nanometres (violet) to about 700 nanometres (red). That’s a laughably narrow band. If the full electromagnetic spectrum were a piano keyboard stretching from here to the Sun, visible light would be a single key somewhere around middle C.
Within that tiny window, different wavelengths correspond (loosely) to different perceived colours. Short wavelengths look violet and blue. Medium wavelengths look green and yellow. Long wavelengths look orange and red. But – and this is crucial – the wavelength doesn’t contain the colour. The colour happens inside your head. A 580-nanometre photon isn’t “yellow.” It’s an oscillating electromagnetic wave that, when it hits certain cells in your retina, triggers a cascade of electrochemical signals that your visual cortex interprets as yellow.
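Those loose wavelength-to-name bands can be sketched in a few lines of Python. The band edges below are rough conventions for illustration, not standards; perception varies between observers:

```python
def rough_colour_name(wavelength_nm: float):
    """Name most observers would report for a single wavelength of light.
    Band edges are approximate -- this is a convention, not a measurement."""
    bands = [
        (380, 450, "violet"),
        (450, 495, "blue"),
        (495, 570, "green"),
        (570, 590, "yellow"),
        (590, 620, "orange"),
        (620, 700, "red"),
    ]
    for lo, hi, name in bands:
        if lo <= wavelength_nm < hi:
            return name
    return None  # outside the visible window
```

The function returns a label, but as the paragraph above insists: the photon carries only a wavelength. The "yellow" happens in the observer.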
This distinction matters. It’s the reason two people can look at the same object and disagree about its colour. It’s the reason The Dress broke the internet in 2015. And it’s the reason mantis shrimp and humans can look at the same coral reef and see entirely different worlds.
How we see colour
Your retina is lined with two types of photoreceptor cells: rods and cones. Rods are the workhorses of dim-light vision – there are about 120 million of them, and they’re exquisitely sensitive, but they don’t do colour. They see the world in greyscale. Cones are the colour specialists. You’ve got roughly 6 million of them, concentrated in a small central area called the fovea, and they come in three varieties.
This is where it gets good.
The three types of cone are called S, M, and L – short, medium, and long wavelength. S-cones peak at around 420nm (blue-ish). M-cones peak at around 530nm (green-ish). L-cones peak at around 560nm (red-ish, though honestly it’s more yellow-green – the naming is a bit generous, as Bowmaker and Dartnall showed). Every colour you’ve ever seen is your brain’s interpretation of the ratio of signals from these three cone types. When all three fire roughly equally, you see white. When L-cones fire strongly and S-cones barely fire at all, you see red. When M and L fire together but S stays quiet, you see yellow.
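The "ratio of signals" idea can be sketched in code. The peak wavelengths below match the figures above; the Gaussian curve shapes and widths are invented for illustration, not measured sensitivity curves:

```python
import math

# Crude Gaussian stand-ins for the three cone sensitivity curves.
# Peaks (420, 530, 560 nm) come from the text; the widths are
# illustrative assumptions only.
CONES = {"S": (420.0, 40.0), "M": (530.0, 50.0), "L": (560.0, 55.0)}

def cone_responses(wavelength_nm: float) -> dict:
    """Relative response of each cone type to a single wavelength."""
    return {
        name: math.exp(-((wavelength_nm - peak) ** 2) / (2 * width**2))
        for name, (peak, width) in CONES.items()
    }
```

Feed it 580 nm and L and M respond strongly while S barely stirs: exactly the ratio the brain reads as "yellow".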
Your brain is basically running a three-channel mixing desk, all day, every day, and it’s so good at it that you never notice.
But there’s more going on than simple mixing. In the 1870s, the German physiologist Ewald Hering proposed what’s now called opponent process theory: your brain doesn’t just blend three signals, it processes them as opposing pairs – red versus green, blue versus yellow, light versus dark (as Hering described in Outlines of a Theory of the Light Sense). This is why you can imagine a reddish yellow (orange) or a bluish green (teal), but you cannot imagine a reddish green or a bluish yellow. Those pairings are wired into the system as opposites.
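A minimal sketch of that opponent recoding, with three cone signals in and Hering's three pairs out (the channel weights here are a textbook-style simplification, not physiological values):

```python
def opponent_channels(l: float, m: float, s: float):
    """Recode three cone signals into Hering's opposing pairs.
    Weights are a simplified illustration, not measured values."""
    red_green = l - m              # > 0 reads reddish, < 0 greenish
    blue_yellow = s - (l + m) / 2  # > 0 reads bluish, < 0 yellowish
    luminance = l + m              # light vs dark (S contributes little)
    return red_green, blue_yellow, luminance
```

Because red-versus-green is a single axis, a signal cannot sit on both sides of it at once. That is the wiring-level reason a "reddish green" is unimaginable, while a reddish yellow (positive red-green, negative blue-yellow) is just orange.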
And then there’s the really wild bit.
Magenta doesn’t exist.
Not as a wavelength, anyway. Look at a rainbow – a real one, or a diagram of the visible spectrum. Violet is at one end, red is at the other, and there’s no magenta in between. There’s no single wavelength of light that produces the colour magenta. Your brain invents it to bridge the gap between red and violet, because it needs a complete colour wheel to make sense of the world. Magenta is a neurological fiction. You’re seeing a colour that the universe never bothered to make.
How other things see colour
We’re not the only ones hallucinating. Almost every animal with eyes has its own version of colour, and some of them make ours look embarrassingly limited.
Dogs are dichromats – they have two types of cone instead of three. They see blues and yellows perfectly well, but reds and greens collapse into a murky brownish-grey. That red ball you’re throwing for your dog? It’s not red to them. It’s roughly the colour of the grass it lands on. (They find it by smell, obviously. They’re dogs.)
Bees can’t see red at all, but they can see ultraviolet – a colour we can’t even imagine. Flowers that look plain white or yellow to us are covered in ultraviolet patterns that act as landing strips for pollinators. There’s a whole layer of visual information painted across every meadow that we’re completely blind to.
Birds are tetrachromats – four types of cone, including one sensitive to UV light. A male bird that looks drab brown to us might be a riot of UV patterns to a female of the same species (as Hart documented). We’ve been misunderstanding bird plumage for centuries because we literally couldn’t see what they were showing each other.
And then there are mantis shrimp.
Mantis shrimp have sixteen types of photoreceptor. Sixteen. We have three and we get rainbows, sunsets, and Rothko paintings. These creatures have sixteen and they’re the size of a prawn. For years, scientists assumed mantis shrimp must see the most incredibly rich, nuanced colour palette in the animal kingdom. But research by Marshall, Oberwinkler, and later Thoen et al. suggests the truth is stranger: mantis shrimp don’t process colour the way we do. Rather than comparing signals between receptors (like our opponent processing), they appear to recognise wavelengths individually, more like a barcode scanner than a painter’s palette (Marshall and Oberwinkler, 1999; Thoen et al., 2014). They’re doing something we don’t fully understand yet, and it might not be “colour vision” in any sense we’d recognise.
Oh, and about humans. Most of us are trichromats, but a small number of people – almost exclusively women, for genetic reasons we’ll get to – may be tetrachromats, carrying a fourth type of cone that lets them distinguish colours the rest of us see as identical. The estimated range: where a typical person sees about a million distinct colours, a true functional tetrachromat might see up to a hundred million. If you’ve ever had an argument about whether something is beige or taupe and lost badly, now you know why.
Colour blindness
About 8% of men and 0.5% of women have some form of colour vision deficiency. That’s not a rounding error. In a lecture hall of 200 people, statistically eight of them are seeing the slides differently from everyone else. If you’re designing software, data visualisations, traffic systems, or anything visual for a general audience, you need to care about this.
The most common forms are red-green colour blindness, which comes in two flavours. Protanopia means the L-cones (red-sensitive) are missing entirely. Deuteranopia means the M-cones (green-sensitive) are missing. In both cases, reds and greens become difficult or impossible to distinguish. There are also milder versions – protanomaly and deuteranomaly – where the cones are present but their peak sensitivity has shifted, so colours are muted rather than absent (as Neitz and Neitz described).
Tritanopia is much rarer: the S-cones (blue-sensitive) are missing, making it hard to tell blue from green and yellow from violet. And at the far end of the spectrum (no pun intended) is rod monochromacy, or achromatopsia – a complete absence of functioning cones. People with achromatopsia see the world entirely through rods: greyscale, extremely sensitive to bright light, and with reduced visual acuity. It’s exceptionally rare, affecting roughly 1 in 30,000 people, though the incidence is famously much higher on the Pacific atoll of Pingelap, where Oliver Sacks documented the condition in The Island of the Colorblind.
The genetics are straightforward, if a bit unfair. The genes for L-cone and M-cone photopigments sit on the X chromosome. Men have one X, women have two. If a man’s single X carries a faulty copy, he’s colour blind. A woman needs faulty copies on both X chromosomes (the molecular basis is well understood). This is why red-green colour blindness is roughly sixteen times more common in men than women.
Colour blindness can also be acquired. Cataracts filter out short wavelengths, shifting the world yellowish. Glaucoma can damage the optic nerve. Certain chemicals – notably some solvents and medications – can degrade cone function. Ageing itself gradually yellows the lens and reduces colour discrimination, which is why your grandparents’ lounge room colour choices might be more adventurous than you’d expect (age-related changes are described in The Aging Eye).
The Ishihara test – those circles made of coloured dots with a number hidden inside – has been the standard screening tool since Shinobu Ishihara published it in 1917. It’s beautifully simple: if you can see the number, your cone responses are distinguishing the foreground from the background. If you can’t, they’re not.
The practical impacts ripple through daily life. Traffic lights are designed with position redundancy (top, middle, bottom) precisely because red and green are the hardest colours for the most people. But for UI design, data visualisation, and wayfinding, the lesson is this: never encode meaning in colour alone. Use shape, position, texture, labels. If the only way to tell “error” from “success” in your interface is red versus green, roughly one in twelve of your male users is guessing (Brettel et al. simulated what this looks like).
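To see why colour-only encoding fails, here is a deliberately crude sketch of red-green collapse. It just averages the red and green channels, roughly what losing the L-M comparison does; it is nothing like a clinically accurate simulation (see Brettel et al. for the real method), but it makes the design point:

```python
def simulate_red_green_loss(rgb):
    """VERY crude red-green deficiency sketch: average the red and
    green channels. Not a clinical simulation -- see Brettel et al.
    for physiologically grounded methods."""
    r, g, b = rgb
    rg = (r + g) / 2
    return (rg, rg, b)

error_badge = (0.85, 0.10, 0.10)    # red "error" indicator
success_badge = (0.10, 0.70, 0.10)  # green "success" indicator

sim_error = simulate_red_green_loss(error_badge)
sim_success = simulate_red_green_loss(success_badge)
# The two statuses collapse to nearly the same colour -- hence the rule:
# always pair the colour with a shape, label, or icon.
```

Run the two badge colours through it and they land within a few percent of each other on every channel, which is precisely the situation one in twelve male users is in.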
Who named the colours, and why?
In 1666, Isaac Newton stuck a glass prism in a beam of sunlight and watched the white light fan out into a spectrum of colours. He could have carved it up any number of ways, but he chose seven: red, orange, yellow, green, blue, indigo, and violet. Why seven? Because Newton was deeply interested in the harmony between light and sound, and he wanted the colours to correspond to the seven notes of a musical scale (as he described in Opticks). Indigo is basically a vanity pick – most people can’t reliably distinguish it from blue or violet, but Newton needed that seventh colour to complete his analogy. We’ve been memorising it in school ever since.
But naming colours goes far deeper than Newton.
Homer, composing the Iliad and the Odyssey around 2,700 years ago, described the sea as “wine-dark” – oinops in Greek. Not blue. Wine-dark. This isn’t because Homer was colourblind (a popular theory, but unlikely for an entire civilisation). It’s because, as Gladstone argued in Studies on Homer, the ancient Greeks didn’t have a distinct word for blue. They had words for light, dark, red, yellow, and green-ish, but the sky and the sea were described with terms we’d translate as shining, dark, or grey. This isn’t a Greek quirk – it turns out that the pattern of colour naming follows a remarkably consistent order across languages worldwide.
In 1969, Brent Berlin and Paul Kay published Basic Color Terms, a landmark study of colour vocabulary across 98 languages. They found that languages develop colour terms in a near-universal sequence: first black and white (or dark and light), then red, then green and/or yellow, then blue, then brown, then a cluster of purple, pink, orange, and grey (Berlin and Kay, Basic Color Terms). Every language they studied that had a word for blue also had words for all the earlier colours. No language had a word for blue but not for red.
This raises a genuinely unsettling question: if you don’t have a word for a colour, can you see it?
The Sapir-Whorf hypothesis, or linguistic relativity, suggests that language shapes perception. Research with the Himba people of Namibia – whose language has different colour boundaries from English – showed that they could distinguish between shades of green that English speakers found identical, but struggled with a blue-green distinction that English speakers found obvious (Roberson et al., 2005). More recent work by Winawer et al. found that Russian speakers, who have separate basic words for light blue (goluboy) and dark blue (siniy), can discriminate those shades faster than English speakers, who lump them both under “blue” (Winawer et al., 2007).
Language might not determine what you can see, but it does seem to influence how quickly and easily you see it.
One more bit of colour-naming trivia that’s too good to skip: the colour orange is named after the fruit, not the other way around. Before the word “orange” entered English (via Old French, from the Arabic naranj, from the Sanskrit naranga), English speakers called that colour geoluhread – literally “yellow-red” (as documented in Maerz and Paul’s A Dictionary of Color). The fruit arrived in Europe, people noticed its colour, and only then did the colour get its own name. For centuries before that, English had no word for orange. It was just… yellow-red.
A history of colour: from caves to screens
Humans have been making colour on purpose for a very long time.
The oldest known pigment use dates to around 100,000 years ago at Blombos Cave in South Africa, where archaeologists found ochre processing toolkits – grinding stones, bone containers, and lumps of red and yellow iron oxide that had been deliberately scraped and mixed (Henshilwood et al., 2011). These weren’t paints in the artistic sense (the cave paintings at Lascaux and Altamira would come 80,000 years later), but they’re evidence that our ancestors were already interested in producing and applying colour to things. In Australia, ochre has been traded across vast distances for tens of thousands of years, and the rock art traditions of the First Nations peoples – at sites like Arnhem Land and the Kimberley – represent the longest continuous artistic tradition on Earth.
The first synthetic pigment was probably Egyptian blue, a calcium copper silicate created by heating sand, copper, and natron (a naturally occurring flux) to around 900 degrees Celsius. It dates to roughly 2500 BC, and it was used on everything from tomb walls to the Bust of Nefertiti (Jaksch et al., 1983). The Egyptians worked out how to make colour that didn’t exist in nature, and they did it four and a half thousand years ago. If that doesn’t make you feel a bit inadequate, nothing will.
Then there’s Tyrian purple. The Phoenicians of Tyre (in modern Lebanon) discovered that a particular species of predatory sea snail, Bolinus brandaris, produces a mucus that turns deep reddish-purple in sunlight. Extracting the dye was nightmarish: each snail yields one or two drops of precursor fluid, and you need roughly 10,000 snails to produce a single gram of dye (Koren, 2005). The smell was apparently so appalling that dye works were banished to the outskirts of cities. But the resulting colour was extraordinary – rich, lightfast, and impossible to reproduce any other way. By weight, Tyrian purple was worth more than gold. This is why purple became the colour of emperors and royalty: it was literally the most expensive substance in the ancient world.
Vermillion – mercury sulphide, also known as cinnabar when found naturally – gave the world a brilliant, warm red. The Chinese were mining it by at least 2000 BC, and the Romans used it lavishly. It’s gorgeous. It’s also toxic, because it’s full of mercury. This would become a theme (Gettens et al., 1972).
Ultramarine was made from ground lapis lazuli, a semi-precious stone mined almost exclusively in what is now Afghanistan. It produced a blue so deep and luminous that, in Renaissance Europe, it cost more per ounce than gold. Painters reserved it for the most sacred subjects – the robes of the Virgin Mary, the cloaks of Christ. If you see a medieval painting where the blue robes look especially vivid, somebody paid a fortune for that (Plesters, 1993).
Lead white (basic lead carbonate) was the finest white pigment for centuries – smooth, opaque, and warm. It was also, obviously, lead. Painters who ground it inhaled lead dust; women who used it as face powder absorbed it through their skin. It caused lead poisoning on an industrial scale, and it wasn’t fully replaced in house paint until the twentieth century (as Needleman documented).
And then there’s Scheele’s Green, an arsenic-based pigment introduced in 1775 by the Swedish chemist Carl Wilhelm Scheele. It was a beautiful, vivid green, cheap to produce, and wildly popular for wallpapers, fabrics, and paints. It was also, in damp conditions, capable of releasing arsine gas – a lethal arsenic compound. There’s a persistent (though debated) theory that Napoleon was slowly killed by the green wallpaper in his bedroom on Saint Helena (Jones and Ledingham, 1982). Whether or not it actually finished him off, the wallpaper samples that survive do test positive for arsenic.
The democratic revolution in colour came in 1856, when an eighteen-year-old English chemistry student named William Henry Perkin accidentally synthesised a purple dye while trying to make quinine. He called it mauveine, and it was the first synthetic aniline dye (the story is told in Garfield’s Mauve). Before Perkin, vivid colours were expensive luxuries. After Perkin, they were industrial products. The synthetic dye industry exploded, producing colours cheaper, brighter, and more lightfast than anything nature could provide. Colour, for the first time in history, became affordable to everyone.
Colour standardisation came later. Pantone launched its Matching System in 1963, giving designers and printers a shared vocabulary of numbered colour swatches (documented in Pantone: The 20th Century in Color). The German RAL system had been doing something similar since 1927 for industrial applications. Albert Munsell’s colour system – organising colours by hue, value, and chroma in three-dimensional space – dates to 1905 and remains influential in soil science, art education, and paint mixing to this day (Munsell, A Color Notation, 1905). The CIE 1931 colour space, developed by the International Commission on Illumination, was the first mathematically defined colour model to map human colour perception, and it underpins virtually every digital colour system we use today (CIE, 1931).
Colour on screen: how displays work
The history of getting colour onto a screen is a history of increasingly clever tricks with light.
CRT (cathode ray tube) displays – the big, heavy, warm monitors and televisions that dominated from the 1950s to the early 2000s – worked by firing three electron guns at a glass screen coated with tiny dots of phosphor in red, green, and blue. Each gun excited its corresponding phosphor, which glowed briefly. Vary the intensity of each gun, and you could mix any colour in the RGB gamut. The electron beam scanned across the screen line by line, painting the image from top to bottom, fast enough that your eye saw a stable picture. It was beautifully physical: you could feel the static electricity on the glass (Keller’s The Cathode-Ray Tube covers the technology).
LCD (liquid crystal display) screens replaced CRTs by doing something entirely different. A backlight (originally fluorescent tubes, now LEDs) shines through a matrix of liquid crystal cells, each sandwiched between two polarising filters. Applying voltage to a cell twists the crystals, changing how much light passes through. In front of each cell sits a colour filter – red, green, or blue – so each subpixel passes only one colour of light. Combine three subpixels and you’ve got one pixel that can display a wide range of colours by varying the brightness of each channel (den Boer, Active Matrix Liquid Crystal Displays).
OLED (organic light-emitting diode) displays represent a fundamental shift: instead of filtering a backlight, each pixel produces its own light using organic compounds that emit photons when current flows through them. No backlight means true blacks (a pixel that’s off emits no light at all), thinner panels, wider viewing angles, and generally more vivid colours. The trade-off is that organic materials degrade over time – blue OLEDs especially – though manufacturing improvements have made this less of a practical concern (Tsujimura, OLED Display Fundamentals and Applications).
MicroLED is the next frontier: like OLED in that each pixel is self-emitting, but using inorganic LEDs instead of organic compounds. Higher brightness, longer lifespan, no burn-in risk. The challenge is manufacturing: placing millions of microscopic LEDs precisely on a substrate is extraordinarily difficult and expensive. It’s coming, but slowly (Huang et al., 2020).
And then there’s Apple Vision Pro, which raises entirely new colour challenges. It uses micro-OLED displays (one for each eye) running at about 23 million pixels total, but the clever part is the passthrough AR: cameras capture the real world, and the headset composites virtual objects into that feed in real time. Getting the colour of virtual objects to match the colour of the real environment – under varying lighting conditions, through the distortion of camera optics and display optics – is a problem that requires real-time colour science at a level that would have made the CIE committee weep (Apple Vision Pro display specifications).
Calibrating displays (and did we calibrate paper?)
Here’s a thing that will bother you once you know it: the white you see on your screen right now is almost certainly not the same white I see on mine. Every display panel is slightly different. Backlights age, shifting colour temperature. Factory calibration profiles are usually optimised for “pop” – punchy colours that look impressive in a shop – not for accuracy (Berns discusses this in Principles of Color Technology).
This is why colour profiles exist. The most common is sRGB, a colour space proposed in 1996 by HP and Microsoft and standardised by the IEC in 1999, covering roughly 35% of the colours humans can perceive – a deliberately modest range designed to be reproducible on cheap monitors. For professional photo and print work, Adobe RGB expands the gamut, especially in greens and cyans. DCI-P3, originally defined for digital cinema projection, is the standard Apple uses for its wide-colour displays. And Rec. 2020, defined for ultra-high-definition television, covers about 75% of visible colours – though no consumer display can actually reproduce it all yet (ITU-R BT.2020).
Hardware calibration uses a colorimeter – a small device you stick to your screen that measures the actual colours being emitted. Products like the Datacolor SpyderX or X-Rite i1Display Pro measure your screen’s output against known reference values, then build a correction profile (an ICC profile) that maps what your screen does produce to what it should produce (Green, Understanding and Using ICC Profiles). It’s the difference between “close enough” and “that proof matches the print.”
There’s also gamma – the relationship between the signal value and the brightness the screen produces. A gamma of 2.2 is the standard for most displays, meaning the relationship is nonlinear: a signal value of 50% doesn’t produce 50% brightness, it produces about 22%. This nonlinearity roughly matches how human vision perceives brightness (we’re more sensitive to differences in dark tones than light ones), which is why it works as a standard (Poynton, Digital Video and HD).
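The gamma relationship is a one-line power law. A sketch (real sRGB uses a piecewise curve that closely approximates gamma 2.2, but the pure power law shows the idea):

```python
def encode_gamma(linear: float, gamma: float = 2.2) -> float:
    """Linear light intensity -> display signal value."""
    return linear ** (1 / gamma)

def decode_gamma(signal: float, gamma: float = 2.2) -> float:
    """Display signal value -> linear light intensity."""
    return signal ** gamma
```

`decode_gamma(0.5)` comes out at about 0.218: a 50% signal yields roughly 22% of maximum brightness, exactly the nonlinearity described above.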
And yes – to answer the question you might be thinking – we absolutely calibrated paper. The entire print industry runs on colour management. ICC profiles for printers describe exactly how a given printer, ink, and paper combination produces colour. Paper has a white point (no paper is truly white – some are warm, some cool, some slightly yellow). Ink has limiting – there’s a maximum amount of ink a paper can absorb before it bleeds or curls. Pantone swatches are physical calibration tools: standardised ink samples printed on standardised paper, used to verify that the colour you specified is the colour you got (Sharma, Understanding Color Management).
And then there’s metamerism – the phenomenon where two colours match perfectly under one light source and look completely different under another. The paint in the tin matches the paint on the wall in the shop’s fluorescent lighting, but under your kitchen’s warm LEDs they’re visibly different shades (covered rigorously in Wyszecki and Stiles, Color Science). Metamerism is the bane of paint shops, fashion retailers, and anyone who’s ever tried to match a thread colour to a fabric under three different lights.
Colour in cinema
Film and colour have one of the great love stories in technology.
Early film was monochrome, but people wanted colour almost immediately. Hand-tinting individual frames was common in the silent era – painstaking work, usually done by women in factories, painting each frame with tiny brushes. Then came Technicolor.
The three-strip Technicolor process, perfected in the 1930s, was a mechanical marvel. Light entering the camera hit a beam-splitting prism that directed it to three separate strips of black-and-white film, each filtered to record only red, green, or blue light. These three records were then recombined in a dye-transfer printing process that produced stunningly saturated, stable colour prints (Higgins, 1999). The result was the hyper-vivid palette of The Wizard of Oz (1939) and Gone with the Wind (1939) – colours so rich they looked like someone had turned the saturation dial past maximum. Technicolor was colour as event.
Later, single-strip colour films from Kodak and Fuji replaced the unwieldy three-strip process. Each had its own colour science – its own personality. Kodachrome, introduced in 1935, was legendary for its warmth, saturation, and archival stability. Paul Simon wrote a song about it. National Geographic shot on it for decades. Its reds were warm, its blues were deep, and its grain had a character that digital processing still tries to emulate (as chronicled in Focal Encyclopedia of Photography).
Ektachrome was Kodak’s other major slide film, cooler and crisper than Kodachrome. Fuji’s Velvia pushed saturation even further – landscape photographers loved it, portrait photographers found it terrifying. Each film stock had a response curve – a graph showing how it translated real-world light into recorded density – and that curve was the film’s fingerprint (Ansel Adams discussed this in The Negative).
Film development was its own dark art. C-41 is the standard process for colour negative film, producing the orange-masked negatives you’d get from a one-hour photo lab. E-6 processes colour reversal (slide) film. Cross-processing – deliberately running one type of film through the wrong chemistry (E-6 film in C-41 chemicals, for instance) – produces wild colour shifts and contrast changes that became an aesthetic in fashion and music photography (Langford, Basic Photography). Push processing (extending development time) and pull processing (shortening it) let photographers adjust effective exposure and contrast after the fact – the analogue ancestor of dragging a slider in Lightroom.
Why does film “look” different from digital? Several reasons. Film has grain – physical clumps of silver halide crystals that add a textured, organic randomness absent from digital noise. Film has halation – a soft glow around bright highlights caused by light bouncing off the film base. And film has highlight rolloff – the characteristic way it handles overexposure, compressing highlights gracefully rather than clipping them to hard white (Kennel, Color and Mastering for Digital Cinema). Digital sensors are linear; film is not. That nonlinearity is most of what people mean when they say film “feels” different.
In the digital era, colour in cinema is handled through log and RAW workflows. Cameras like the ARRI Alexa (whose ALEV sensor is beloved by colourists for its natural skin tones and wide dynamic range), RED, and Sony Venice shoot in log colour spaces – deliberately flat, desaturated images that preserve maximum dynamic range for grading in post-production (ARRI colour science documentation). Colourists then apply LUTs (Look-Up Tables) – mathematical transformations that remap the flat footage into a specific colour look. The teal-and-orange grade that dominates modern Hollywood? That’s a LUT. The desaturated bleach-bypass look of Saving Private Ryan? Also colour grading (van Hurkman, Color Correction Handbook).
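Mechanically, a LUT is just a table plus interpolation. Here is a minimal one-dimensional version; the LUTs colourists use are 3D (an RGB triple in, an RGB triple out), but the mechanism is identical. The "shadow lift" table below is an invented toy look:

```python
def apply_lut_1d(value: float, lut: list) -> float:
    """Apply a 1D look-up table to a channel value in [0, 1],
    linearly interpolating between table entries."""
    pos = value * (len(lut) - 1)   # position within the table
    lo = int(pos)
    hi = min(lo + 1, len(lut) - 1)
    frac = pos - lo
    return lut[lo] * (1 - frac) + lut[hi] * frac

# A toy "lift the shadows" look: dark values get pushed up.
shadow_lift = [0.05, 0.30, 0.55, 0.78, 1.00]
```

Apply it to flat log footage channel by channel and you have, in miniature, what a grade does: a fixed remapping from recorded values to the intended look.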
Cameras and colour
Every digital camera faces a fundamental problem: the sensor sees in greyscale. A silicon photodiode doesn’t know what colour a photon is – it just counts how many arrive. To get colour, you need a trick.
The most common trick is the Bayer filter, invented by Bryce Bayer at Kodak in 1976. It’s a mosaic of tiny colour filters placed over the sensor – one filter per pixel, in a repeating pattern of red, green, green, blue. Why twice as much green? Because human vision is most sensitive to green wavelengths, so more green information produces a more detailed luminance signal (Bayer, US Patent 3,971,065). Each pixel only records one colour, so a demosaicing algorithm reconstructs the full-colour image by interpolating the missing channels from neighbouring pixels. Every JPEG from a Bayer-filter camera is, at a fundamental level, a best guess.
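The RGGB layout and the simplest (bilinear) interpolation step can be sketched directly. Real demosaicing algorithms are edge-aware and far more sophisticated, but the "best guess" nature is the same:

```python
def bayer_channel(row: int, col: int) -> str:
    """Which colour an RGGB-mosaic pixel records at (row, col)."""
    if row % 2 == 0:
        return "R" if col % 2 == 0 else "G"
    return "G" if col % 2 == 0 else "B"

def interpolate_green_at(mosaic, row, col):
    """Estimate the missing green value at a red or blue site by
    averaging the four green neighbours -- bilinear demosaicing,
    the simplest possible reconstruction."""
    candidates = [(row - 1, col), (row + 1, col), (row, col - 1), (row, col + 1)]
    vals = [mosaic[r][c] for r, c in candidates
            if 0 <= r < len(mosaic) and 0 <= c < len(mosaic[0])]
    return sum(vals) / len(vals)
```

Half the sites in every 2×2 tile are green, which is where the extra luminance detail comes from; the other two channels at each pixel are always reconstructed, never measured.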
Sigma took a different approach with the Foveon sensor: three layers of silicon stacked vertically, each absorbing a different wavelength range. Every pixel captures red, green, and blue simultaneously – no mosaic, no interpolation, theoretically sharper colour detail. In practice, Foveon sensors struggle in low light and the colour science is… divisive. But the concept is elegant (Sigma Foveon white paper).
White balance is the correction your camera applies to account for the colour of ambient light. Daylight is bluish (~5500K colour temperature). Incandescent bulbs are orange (~2700K). Fluorescent lights are a murky green. Your brain automatically adjusts for this – a white shirt looks white to you under any light source. Your camera is not so clever. Get the white balance wrong and your indoor photos look orange, your outdoor photos look blue, and your fluorescent-lit photos look like they were taken in a hospital circa 1987 (Hunt and Pointer, Measuring Colour).
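One classic white-balance estimate is the "grey world" assumption: the scene, averaged over enough pixels, should be neutral grey. A sketch of that idea (real cameras estimate the illuminant far more carefully, but this is the starting point):

```python
def grey_world_balance(pixels):
    """Grey-world white balance: assume the scene averages to neutral
    grey, and scale each channel so its mean matches the overall mean.
    A sketch -- production pipelines use more robust illuminant estimates."""
    n = len(pixels)
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    grey = sum(means) / 3
    gains = [grey / m for m in means]  # per-channel correction gains
    return [tuple(p[c] * gains[c] for c in range(3)) for p in pixels]
```

Feed it a scene shot under incandescent light (red channel inflated across the board) and the per-channel means come out equal: the orange cast is gone.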
Smartphone computational photography has pushed colour science into entirely new territory. Modern phones don’t just capture a single exposure – they capture multiple frames, align them, stack them, and apply machine learning models to produce the final image. HDR stacking recovers detail in highlights and shadows simultaneously. Night mode combines dozens of exposures to synthesise colour information from almost no light. And the colour tuning itself is learned from millions of images, trained to produce results that look “pleasing” rather than strictly accurate (Liba et al., 2019). Your phone’s photos don’t show you what the scene looked like. They show you what a neural network thinks the scene should have looked like. That’s a philosophical shift, even if the photos look great.
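The core of frame stacking is simple averaging: sensor noise is random from frame to frame, so the mean converges on the true scene. A sketch with a hypothetical three-pixel scene (real pipelines also align the frames and reject outliers before merging):

```python
import random

def stack_frames(frames):
    """Average several noisy exposures pixel by pixel -- the core idea
    behind night-mode stacking, minus alignment and outlier rejection."""
    n = len(frames)
    return [sum(f[i] for f in frames) / n for i in range(len(frames[0]))]

random.seed(0)
true_scene = [0.2, 0.5, 0.8]  # hypothetical three-pixel scene
# Simulate 32 noisy captures of the same scene:
frames = [[v + random.gauss(0, 0.1) for v in true_scene] for _ in range(32)]
stacked = stack_frames(frames)
# Averaging 32 frames cuts the noise standard deviation by sqrt(32), ~5.7x.
```

The machine-learning colour tuning sits on top of this: stacking recovers a clean signal, and the learned model then decides what that signal should look like.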
The feel of colour: sensation and synesthesia
Colour doesn’t just live in your eyes. It seeps into your other senses in ways that are measurable, repeatable, and occasionally very strange.
Red feels hot. Blue feels cold. You knew that already. But here’s the thing: it’s not universal, and it’s not entirely learned. Research shows that people in climate-controlled rooms report feeling warmer when the room is lit in red and cooler in blue, even when the actual temperature is identical (Fanger et al., 1977). The association between colour and temperature is partly cultural (red taps, blue taps), but it has a perceptual grounding that crosses cultures.
Dark colours feel heavier. Studies going back to the early 1960s have shown that people consistently judge dark-coloured objects as heavier than light-coloured ones of identical weight. Moving companies reportedly paint boxes light colours for this reason – workers handle them faster, believing they’re lighter (Wright and Rainwater, 1962).
High-pitched sounds “look” light in colour; low-pitched sounds “look” dark. This cross-modal association goes back to Wolfgang Köhler’s experiments in the 1920s with nonsense words and shapes – originally “takete” and “maluma”, later popularised as “bouba/kiki” by Ramachandran and Hubbard: rounded shapes are associated with low sounds and soft, warm colours; spiky shapes with high sounds and sharp, cool colours (Köhler, Gestalt Psychology). These aren’t arbitrary pairings. Something in the architecture of human perception makes them feel right.
Then there’s synesthesia – a genuine neurological phenomenon where stimulation of one sense triggers automatic, involuntary experience in another. Grapheme-colour synesthesia means seeing letters or numbers in specific, consistent colours: A is always red, 7 is always green, and these mappings are stable over years. Chromesthesia means hearing colours: musical notes trigger colour experiences. Large-scale screening puts the prevalence of synesthesia in some form at around 4% of the population (Simner et al., 2006). It’s not a disorder. It’s a feature – a form of cross-wiring in sensory processing that produces a richer (if occasionally overwhelming) perceptual experience.
There’s even emerging research on haptic colour perception – the idea that people associate certain colours with textures and can, in some limited experimental conditions, detect colour differences through touch via unconscious texture-colour mappings (Slobodenyuk et al., 2015). The evidence is preliminary, but the implication is gorgeous: colour might be something you can feel.
Emotion, culture, and colour
Colour and emotion are tangled together so deeply that it’s hard to know where one ends and the other begins.
Red grabs attention. It raises heart rate. It’s associated with danger, passion, anger, and love – sometimes all at once. Studies by Elliot and Maier have shown that seeing red before a test can impair performance (it triggers avoidance motivation), while wearing red in competitive sports is associated with a small but statistically significant advantage (Elliot and Maier, 2014).
Blue is the world’s favourite colour, consistently across cultures and surveys. It’s associated with calm, trust, and competence – which is why banks, tech companies, and social media platforms swim in it. Facebook is blue because Mark Zuckerberg is red-green colourblind and blue is the colour he can see most vividly (as reported in The New Yorker). It’s the most consequential design decision made by a colour vision deficiency in history.
But colour associations are slippery. They vary with culture, age, and personal experience in ways that resist easy generalisation.
White means purity and weddings in the West. In China and India, it’s the colour of mourning and death. Red means luck and celebration in China; it means stop and danger in most of the West. Purple signifies royalty in Europe (thanks to Tyrian purple and the expense it carried), but it’s a colour of mourning in Thailand (Gage, Color and Meaning).
Pink for girls, blue for boys is barely a century old, and for much of its early history, the convention was reversed. A 1918 article in the American trade publication Earnshaw’s Infants’ Department stated: “The generally accepted rule is pink for the boys, and blue for the girls. The reason is that pink, being a more decided and stronger colour, is more suitable for the boy, while blue, which is more delicate and dainty, is prettier for the girl” (Earnshaw’s Infants’ Department, June 1918). The switch happened gradually through the mid-twentieth century, driven more by marketing than by any innate preference. Studies by Palmer and Schloss on colour preference show that while there are small statistical differences between groups, the variation within any group is far larger than the variation between groups (Palmer and Schloss, 2010).
Age matters too. Children reliably prefer bright, saturated colours. As people age, preferences tend to shift toward more muted, complex tones – though whether this is biological (the lens yellows) or experiential (you’ve just seen more colours) is debated.
Then there’s colour in marketing and design. Fast food chains use red and yellow because research suggests these colours stimulate appetite and create a sense of urgency – McDonald’s, KFC, Burger King, and Pizza Hut didn’t arrive at the same palette by coincidence (Singh, 2006). Exit signs are green in Europe, Australia, and most of Asia (following ISO 3864), but red in the United States – a legacy of different design philosophies about whether “exit” should mean “safe direction” (green) or “pay attention” (red) (ISO 3864-1:2011).
What colour tells us about ourselves
Here’s what I find genuinely moving about all of this.
Colour isn’t a thing in the world. It’s a conversation between the world and your particular brain, your particular eyes, your particular life. The sunset you see is not the sunset your dog sees, or a mantis shrimp sees, or a tetrachromat sees. It’s not even the same sunset you saw last year, because your lens has yellowed a fraction and your cones have lost a few photopigment molecules.
Every colour you experience is, in the most literal sense, unique to you.
Humans figured this out early. A hundred thousand years ago at Blombos Cave, someone ground ochre into powder and decided that colour was worth making on purpose. Forty thousand years ago, someone crawled deep into a cave in what is now southern France and painted a horse on the wall by firelight using four pigments and a reed blowpipe. The Anangu people of Central Australia have been mixing ochre and applying it to rock surfaces, bodies, and sacred objects in unbroken tradition for tens of thousands of years.
We went from ochre to Egyptian blue to Tyrian purple to Scheele’s arsenic green to Perkin’s mauveine to Pantone 186 C to DCI-P3 to micro-OLED panels firing photons into your retinas at 90 frames per second.
And through all of it, colour has never been anything more – or less – than a shared hallucination. A beautiful, useful, occasionally lethal hallucination that we’ve spent a hundred millennia learning to control, name, argue about, and love.
The light doesn’t care what colour it is.
But we do. And that might be the most human thing about us.