
The Miracle of Digital Imaging

“In a discussion lasting not more than an hour, the basic structure of the CCD was sketched out on the blackboard, the principles of operation defined, and some preliminary ideas concerning applications were developed,” Smith recalled in a recent interview. It didn’t take long before they realized that the hypothetical CCD could do more than simply store data: it could create images, thanks to the phenomenon called the photoelectric effect.

Contrary to popular belief, Albert Einstein won his Nobel Prize in 1921 not for his work on relativity but for his explanation of this manifestation of quantum theory. When photons strike the atoms of certain materials, they knock electrons loose, in essence generating an electric current. The well-known effect had already been used for years in phototubes, “electric eyes,” and solar cells. “In the old days it was done with some kind of a vacuum tube. You might have some kind of special coating on a piece of metal and then a high voltage, and then light would hit it, and it would release electrons and you would collect it,” explains Tim Madden, an electronics engineer at Argonne National Laboratory. But unlike those familiar devices, the CCD doesn’t use the effect to produce current.
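
In the silicon that CCDs are made from, the bar a photon must clear is the crystal’s band gap, roughly 1.1 electron volts; any photon of visible light carries more energy than that and can knock an electron free. A few lines of Python, using approximate physical constants and wavelengths chosen purely for illustration, show the arithmetic:

```python
# Rough illustration: why visible light can free electrons in silicon.
# A photon's energy is E = h * c / wavelength; silicon's band gap is about 1.1 eV.
PLANCK_H = 6.626e-34        # joule-seconds
LIGHT_SPEED = 3.0e8         # meters per second
JOULES_PER_EV = 1.602e-19
SILICON_BAND_GAP_EV = 1.1   # approximate room-temperature value

for color, wavelength_nm in [("blue", 450), ("green", 550), ("red", 650)]:
    energy_ev = PLANCK_H * LIGHT_SPEED / (wavelength_nm * 1e-9) / JOULES_PER_EV
    frees_electron = energy_ev > SILICON_BAND_GAP_EV
    print(f"{color} photon at {wavelength_nm} nm carries {energy_ev:.2f} eV "
          f"-> frees an electron: {frees_electron}")
```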

Instead, the silicon surface of the CCD is divided into a grid of millions of separate cells or pixels, each sensitive to light. The cells are essentially capacitors, holders of electrical charge. When photons strike the cells, electrons are released in the silicon in numbers proportional to photon input—the more photons, the more electrons set free. When a voltage is applied to a row of the CCD array, each cell’s “bucket” of charge is passed along to the next cell in line, like pails of water in a bucket brigade, until it reaches the edge of the CCD, where it is read out and the amount of charge in each cell is registered. “So each pixel is kind of like a bucket of charge, and it just gets transferred from pixel to pixel over to the edge,” says Madden. It’s this passing through or “coupling” of electrical charge among the cells that gives the CCD its name.
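
The bucket-brigade picture is concrete enough to capture in a few lines of code. The Python sketch below is only a toy model, with made-up photoelectron counts and none of the real device’s clock voltages or output amplifier, but it shows the essential move: on each step, every charge packet in a row shifts one cell toward the edge, and whatever reaches the edge is registered as a number.

```python
# Toy model of CCD readout: on each step, every charge packet in a row shifts
# one cell toward the edge, and the packet at the edge is read out as a number.
def read_out_row(row):
    buckets = list(row)                # hypothetical photoelectron counts
    measured = []
    for _ in range(len(buckets)):
        measured.append(buckets[-1])   # the cell at the edge is registered
        buckets = [0] + buckets[:-1]   # everything else shifts one cell over
    return measured

image = [
    [10, 52, 80],
    [ 5, 96, 31],
    [ 0, 17, 64],
]  # a made-up 3 x 3 patch of pixels

print([read_out_row(row) for row in image])
# [[80, 52, 10], [31, 96, 5], [64, 17, 0]]  (each row emerges edge cell first)
```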

With the charge of each pixel reduced to a string of digital data, the precise pattern of light captured by the entire CCD can be reconstructed almost perfectly. The efficiency of the charge transfer across the CCD is remarkably high, usually well over 99 percent, so few if any data are lost. Once digitized, the image can be stored indefinitely, not to mention endlessly modified or enhanced by computer tweaking of the data. To register color images, filters in front of the CCD (and in line with the lens usually needed to focus light onto it) pass only particular colors, so the intensity each pixel records can be tagged with the color it represents, information recorded as part of the entire image data set.
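
That better-than-99-percent figure is more demanding than it sounds, because the hand-offs compound: a charge packet from a pixel far from the readout edge may be passed along thousands of times before it is measured. A quick back-of-the-envelope calculation, with illustrative numbers for the transfer count and the efficiencies, makes the point:

```python
# How much charge survives a long chain of bucket-brigade hand-offs,
# for a few illustrative per-transfer efficiencies.
transfers = 2000   # a plausible path length for a pixel far from the readout edge
for efficiency in (0.99, 0.999, 0.99999):
    surviving = efficiency ** transfers
    print(f"per-transfer efficiency {efficiency}: "
          f"{surviving:.3g} of the original charge survives {transfers} transfers")
```

Real scientific CCDs are typically specified with per-transfer efficiencies quoted as a long string of nines, which is what keeps the losses across the whole array negligible.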

Not long after that afternoon brainstorming session, Boyle and Smith had the technicians at Bell work up a prototype, which performed exactly as they had predicted. After they publicly announced their invention in 1970—demonstrating, among other things, the use of a CCD in a video camera—“all hell broke loose,” Boyle remembers. Soon CCDs became the hottest thing in the semiconductor world, with engineers and scientists at Bell and elsewhere rushing to develop and perfect the device for myriad applications. Fairchild Electronics became the first to enter the commercial market.

Astronomers immediately realized the enormous potential of digital imaging and enthusiastically embraced the CCD. With their science entirely dependent on collecting and analyzing light, the advent of a light-detecting device hundreds and eventually thousands of times more sensitive than the most advanced photographic film was nothing less than a revolution. Even the most sensitive chemical film emulsion might capture only a handful of the photons hurtling in from a distant galaxy, while a CCD could register 90 percent of them. “[CCDs] are so incredibly sensitive you can pick up individual photons,” says Madden. That precision gives astronomers the ability to obtain pictures of unprecedented clarity and quality without the hours-long exposures formerly necessary with photographic film.
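
That sensitivity translates directly into time at the telescope: the smaller the fraction of arriving photons a detector actually registers, the longer it must stare to collect enough of them. A rough comparison, using made-up numbers for the photon arrival rate and illustrative quantum efficiencies of a few percent for film and 90 percent for a CCD, shows why exposures of hours shrink to minutes:

```python
# Rough comparison of exposure times: the time needed to detect a fixed number
# of photons scales inversely with the detector's quantum efficiency.
photons_needed = 10_000   # detected photons for a usable measurement (made up)
arrival_rate = 50         # photons per second from a faint object (made up)

for detector, quantum_efficiency in [("photographic film", 0.02), ("CCD", 0.90)]:
    seconds = photons_needed / (arrival_rate * quantum_efficiency)
    print(f"{detector}: about {seconds / 3600:.1f} hours "
          f"({seconds / 60:.0f} minutes) of exposure")
```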

The first astronomical images captured by CCDs, taken in 1974, hardly needed that kind of sensitivity: they were pictures of our familiar, nearby Moon. But CCD technology rapidly became the standard imaging tool in astronomy, not only because of its sensitivity but also because of its versatility. CCDs can “see” not only in visible light but in other parts of the electromagnetic spectrum as well, from X-rays to the near-infrared. The power to digitally manipulate and analyze imagery has also proven invaluable. Although it’s true that some professional astronomers still operate telescopes inside darkened observatory domes on cold mountaintops, they’re no longer peering into eyepieces in metal cages high above the floor but sitting comfortably in warm offices, gazing at computer screens displaying images collected by the CCD arrays inside their telescopes.