Sic Transit Transistor
It began as a crude, poorly understood device. Within ten years it was changing the world. And today it’s history.
On the greeting-card racks this past Christmas could be seen a minor technological miracle—a Christmas card that upon opening showed a small yellow light that glowed while the card played a tinny but recognizable version of “Jingle Bells.” The yellow light was the latest addition to a novelty that made its first appearance a couple of years ago—the electronic greeting card. The thing that is most notable about this minor miracle is just how ordinary it seems, how easily it is shrugged off by a public that accepts it as simply another application of microelectronics, perhaps to join the ranks of Pong (the early video game), talking scales, and wristwatches that incorporate calculators and thermometers.
Most of those picking up the singing Christmas card have at least a vague idea of how it works. There is, of course, a battery-powered “chip”—an integrated circuit, combining hundreds or thousands of electronic functions on a fingernail-sized slice of crystal. This device, at the heart of modern electronics technology from guided missiles to computers to pocket calculators, is the result of the harnessing of a class of substances called semiconductors.
Our mastery of these materials began in earnest almost forty years ago, with the invention of the transistor. This most people realize; less well understood is the real role of the transistor in the microelectronics revolution. As some saw at its introduction, in 1948, the transistor brought to electronics new capabilities, presenting to engineers challenges and opportunities hitherto unknown and pointing ahead to a “solid-state” technology that would have a very different look and feel from vacuum-tube electronics. Yet the transistor, as it first appeared and as it was applied over the first decade of its existence, was also the final stage in an older technical tradition, an electronics conceived and built around circuits of individual components. In hindsight, it is possible to appreciate the single, discrete transistor as a truly transitional technology. It was distinctively different from older ways of doing things, but it was not fully a part of the revolution to come.
In the years after World War II, few scientists were looking for a revolution in electronics. The war had seen the flourishing of radio and related technologies beyond anything that could have been imagined before the conflict began. While there were indeed wartime technical and scientific achievements that were more spectacular—the atomic bomb, for instance—to many observers it was electronics that gave the Allies the real technical margin of victory. Radar, sonar, field-communications systems, and the proximity fuze were recognized by soldiers and civilians alike as technical accomplishments of the first order, the creations of engineers and physicists intent on demonstrating how advanced technology could make the critical difference in the war effort.
Many people were also aware of the creation of another new device, the electronic digital computer, which saw limited wartime use but promised to open up whole new realms of technical possibilities. The years after the war were expected to see the consolidation of the great technical strides that had been taken under military pressure and the application of these new capabilities to areas that would improve civilian life. The world eagerly awaited the war-delayed spread of television and FM radio, and industry was hungry to see what the new electronics could do in the massive conversion to a consumer economy. No fundamental technical change was sought or anticipated; everyone would be busy making the best use of the still bright, new technologies of the day.
Only a few years after the war’s end, however, there began the most profound technical revolution since the beginning of electronics itself at the turn of the century. At the end of 1947, three physicists working at the Bell Telephone Laboratories in Murray Hill, New Jersey, learned how to make a piece of semiconductor crystal behave like a vacuum-tube amplifier. The story of John Bardeen, Walter Brattain, William Shockley, and the invention of the transistor is a well-known tale of the successful application of science in an industrial setting, a research program that earned the investigators the Nobel Prize for physics and the Bell System its most valuable patent since the one Alexander Graham Bell took out for the telephone in 1876.
Not so well known is just how the transistor came to have the revolutionary impact it did. It was one thing to bring together advanced scientific knowledge and sophisticated technical insight to create a new component that worked on novel principles and in distinctly different ways from older ones. It was quite another to make that component a useful—not to say profoundly important—part of the technological world. Doing so presented a challenge to the engineers and designers who were the shapers of that world, a challenge that initially went unrecognized and unappreciated by people who were comfortable and confident with the established technical order of things. It took close to a decade from the time of the transistor’s invention at Murray Hill to the point where the new technology began to have a real impact on modern life—and began to lead to other, more far-reaching inventions.
The source of the electronics engineers’ confidence at war’s end was mastery of the vacuum tube. The history of the electronic vacuum tube began in 1883 with Thomas Edison’s observation of the passage of an electric current from the hot, glowing filament of one of his incandescent lamps, across the high vacuum of the bulb, to an extra electrode introduced into the bulb’s glass envelope. Edison thought this effect might be somehow useful but couldn’t think of a good application and so did nothing more with it. An Englishman, John Ambrose Fleming, figured out two decades later that this was just the kind of device needed by the infant technology of radio.
Since the Edison-effect current will flow only in one direction, from the filament (or cathode) to the other electrode (or anode) but not the reverse, Fleming called his invention a valve. It was useful for radio because the response of the valve to a radio-wave signal, which is a kind of alternating current, was a direct current (hence the valve is also called a rectifier). Fleming’s valve gave the radio pioneers a reliable and sensitive detector of signals and thus began displacing the “cat’s-whisker” detector—a semiconductor crystal, ironically—that had been depended on up to then.
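Since everything here turns on the valve passing current in only one direction, a small sketch may make the point concrete. The Python fragment below is not from the article; the signal, its amplitude, and its period are purely illustrative. It models an ideal one-way valve and shows that while an alternating signal averages out to zero, its rectified version does not, which is what made the device usable as a detector.

```python
import math

def ideal_valve(current):
    """Pass current flowing one way; block the reverse (an ideal rectifier)."""
    return current if current > 0 else 0.0

# A toy alternating signal: 100 samples of a sine wave (amplitude and period
# are arbitrary choices made only for illustration).
signal = [math.sin(2 * math.pi * t / 20) for t in range(100)]

rectified = [ideal_valve(i) for i in signal]

# The raw alternating current averages out to zero; the rectified current
# does not, so its average can register the presence and strength of a signal.
print(f"average of raw signal:       {sum(signal) / len(signal):+.3f}")
print(f"average of rectified signal: {sum(rectified) / len(rectified):+.3f}")
```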
The real possibilities of vacuum-tube technology, however, were revealed by the American Lee de Forest in 1906. De Forest discovered that if he placed a third electrode between the cathode and the anode and allowed the radio signal to determine the voltage in this element (called the grid), the vacuum tube could be used as a powerful amplifier of the signal. De Forest called his invention an audion, but it became more generally known as a triode (just as the valve was referred to as a diode). The subsequent history of radio, and indeed of all electronics technology, for the next four decades involved the design of still more complex tubes—tetrodes, pentodes, hexodes, and even higher “odes”—and more complex circuits to make the best use of them. The “odes” were shortly joined by the “trons,” such as the pliotron (a very-high-vacuum triode), the ignitron (for very-high-power applications), and the magnetron (for the generation of very-high-frequency signals).
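The triode’s trick, as described above, is that a small voltage on the grid controls a much larger current between cathode and anode. A rough sketch, using a hypothetical transconductance and load-resistor value chosen only for illustration (neither figure comes from the article), shows how that control becomes voltage amplification:

```python
# Idealized triode amplifier: a small grid-voltage swing produces a plate-current
# swing (set by the tube's transconductance), which in turn develops a much larger
# voltage swing across a load resistor in the plate circuit.
# Both constants below are assumed values, chosen only to illustrate the idea.

TRANSCONDUCTANCE = 0.005    # amperes of plate current per volt on the grid (assumed)
LOAD_RESISTANCE = 10_000    # ohms in the plate circuit (assumed)

def output_swing(grid_swing_volts):
    """Voltage swing across the load for a given voltage swing on the grid."""
    plate_current_swing = TRANSCONDUCTANCE * grid_swing_volts
    return plate_current_swing * LOAD_RESISTANCE

weak_signal = 0.010            # a 10-millivolt radio signal applied to the grid
gain = output_swing(1.0)       # volts out per volt in
print(f"{weak_signal * 1000:.0f} mV in -> {output_swing(weak_signal) * 1000:.0f} mV out "
      f"(a voltage gain of about {gain:.0f})")
```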
When, in 1933, the electronics industry sought to show off its wonders at the Chicago World’s Fair, one of the features of the display was a giant vacuum tube designed to explain the basic phenomenon to fairgoers. Other exhibits showed off some of the literally hundreds of different tubes that were used for a great variety of purposes, from ever-improving radio receivers to machine controls, phonograph amplifiers, short-wave radio sets, and even “electric eyes.” Despite the Great Depression, the 1930s saw the vigorous expansion of electronics from the already impressive displays of Chicago in 1933 to the truly spectacular show put on at New York’s World’s Fair in 1939, where fully electronic television began regular broadcasting to the American public for the first time.
The coming of war in the 1940s, therefore, found the American electronics engineers well equipped to make important contributions to the nation’s defense. Physicists and engineers at the MIT Radiation Laboratory devised tubes that made radar a reliable and widely applicable tool. Researchers at the Applied Physics Laboratory outside Baltimore built another, less famous but also strategically vital device, the proximity fuze. This consisted of a radar unit reduced in size to fit into an antiaircraft or artillery shell and set to detonate the shell at the desired distance from its target. It required the design of vacuum tubes that were both rugged enough to withstand being fired from a gun and small enough to fit into a circuit placed in the head of a five-inch shell. Engineers thus began speaking of “ruggedization” and “miniaturization” as new and important design challenges.
These challenges redirected the attention of some to the possible uses of semiconductors. Ever since the vacuum tube had replaced the cat’s whisker, there had been only limited interest in these materials. Semiconductors, as the name implies, do not conduct electric currents as well as common metallic conductors (like copper, silver, or aluminum) but they perform better than true insulators (like glass, rubber, or most plastics). The contact between a semiconductor and a conductor can be designed so that current will pass easily in only one direction, acting as a rectifier. Before the war, numerous applications were found for rectifiers made of the semiconductor materials copper oxide and selenium. Ruggedization and miniaturization compelled many engineers to pay closer attention to these materials, but there was little theory to guide the applications of any of the semiconductors, and further uses would require more knowledge about how they really behaved.
Even before the war, and again during it, some electronics engineers asked if a semiconductor crystal couldn’t be made to act like a triode amplifier as well as a diode rectifier. Isolated efforts showed that this was not, in fact, easily done. Besides, the ever-increasing power and versatility of vacuum tubes made the question of a semiconductor amplifier seem relatively unimportant. But tubes had their limitations, and as more applications were explored, these limitations became increasingly apparent. In high-frequency applications, tube designers often ran into severe difficulties in combining power, signal response, and reliability. When long-term reliability was necessary, as in telephone networks or undersea cables, tubes fell short of what was needed. And when thousands of tubes were used together, as in the new giant digital computers, the problems of keeping such large numbers of heat-producing devices going caused numerous headaches. Considerations such as these were at work at Bell Labs when plans were made for the research that led to the transistor.
Even in the 1930s, some engineers had perceived the limitations of the mechanical switching technology that was at the heart of the telephone network. People like Mervin Kelly, Bell’s vice-president for research, saw that electronic switching, with its instantaneous response to signals, held out a possible answer. The roadblock was in the nightmare of depending on huge banks of hot vacuum tubes, each liable to break down after a few months of constant use. As Kelly saw it, a solid-state switching device, requiring no power at all most of the time and capable of responding immediately to an incoming signal, was not only desirable, it was becoming necessary if the telephone system was not to drown in its own complexity and size.
So, with the war winding to a close in the summer of 1945, a research group in solid-state physics was organized at Bell Labs. From the summer of 1945 until the end of 1947, the solid-state physics team under William Shockley performed experiments on silicon and germanium crystals, aided by the superior materials-handling capabilities of Bell chemists and metallurgists. The primary goal of their experiments was a semiconductor amplifier, a replacement for the vacuum triode. The transistor effect was first observed in December of 1947, and the people at Bell knew instantly that they had an important invention on their hands. It was so important, in fact, that they didn’t allow word to get out for six months; patent documents were drawn up and further experiments were pursued to make sure that the investigators knew just how their device worked.
On June 30, 1948, an elaborate public announcement and demonstration was staged in New York City, complete with a giant model of the point-contact transistor and a radio and a television with the new devices substituted for tubes. The public response was muted—The New York Times, for example, ran the story on page 46, at the end of its “News of Radio” column. Perhaps it was the sheer simplicity of the device that deceived onlookers. The Times described it as consisting “solely of two fine wires that run down to a pinhead of solid semi-conductive material soldered to a metal base.” The first transistor, housed in a metal cylinder a little less than an inch long and a quarter-inch across, was not much smaller than some of the miniature tubes that had been made for the military.
Nonetheless, some people were impressed, even if they could only dimly perceive the implications. The popular magazine Radio-Craft, for example, featured the Bell announcement in an article headlined ECLIPSE OF THE RADIO TUBE: “The implications of this development, once it emerges from the laboratory and is placed in commercial channels, are staggering.” This comment was evoked less by the transistor’s size than by its most conspicuous other feature, its low power requirement. “No longer will it be necessary to supply power—whether it be by batteries or filament heating transformers—to heat an electron-emitting cathode to incandescence.” Since all vacuum tubes begin with the Edison effect, they must be “lit” like light bulbs and thus require considerable power. The transistor required far less power and would not burn out, so it promised a reliability and longevity that tube designers could only dream about.
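A back-of-envelope comparison suggests why the heated cathode was such a burden. Every figure in the sketch below is an assumption made for illustration, not a number from the article: a common small-tube heater rating, a five-tube table radio, and a generous per-stage power budget for an early transistor circuit.

```python
# Rough arithmetic on "lighting" tubes versus running transistors.
# All figures are assumptions for the sketch: a 6.3-volt, 0.3-ampere heater
# (a common small-tube rating), five stages, and 10 milliwatts per transistor stage.

HEATER_VOLTS = 6.3
HEATER_AMPS = 0.3
STAGES = 5

heater_power = HEATER_VOLTS * HEATER_AMPS * STAGES    # watts for the heaters alone
transistor_power = 10e-3 * STAGES                     # watts for five transistor stages

print(f"tube heaters alone: about {heater_power:.1f} W")
print(f"transistor stages:  about {transistor_power:.2f} W")
print(f"roughly a {heater_power / transistor_power:.0f}-to-1 difference, "
      f"before the tubes' plate supply is even counted")
```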
But the new point-contact transistor was in fact a very frustrating device. For a component that was supposed to be based on sound understanding of physical theory, it instead behaved more like the whimsical devices jury-rigged by amateurs in the first decades of radio. The first transistors were “noisy,” easy to overload, restricted in their frequency responses, and easily damaged or contaminated. Transistor makers had great difficulty producing what they wanted. Some spoke of the “wishing in” effect, where the tester of a finished device was reduced to simply hoping it would work right. And there was the “friendly” effect—the tendency of a transistor to acknowledge on a testing oscilloscope a simple wave of the hand in its direction. At first only 20 percent of the transistors made were workable. One engineer was quoted later as saying, “The transistor in 1949 didn’t seem like anything very revolutionary to me. It just seemed like another one of those crummy jobs that required one hell of a lot of overtime and a lot of guff from my wife.”
The early point-contact transistor was frustrating. Only one in five worked.
To complicate things further, the transistor might do the same sorts of things a tube could do, but it didn’t do them in the same way. This meant that circuit designers—the engineers responsible for turning components into workable tools and instruments—would have to reconfigure their often very complex circuits to accommodate the transistor’s special characteristics. For many, it hardly seemed worth it. But others made enormous efforts to get the transistor working and out into the world of applications.
By 1955 the transistor’s future importance in computers was obvious.
Besides the Bell System itself, the most important impetus to transistor development was provided by the U.S. military. The value of miniature electronics had been clear to some military men at least since the 1930s, when the Army Signal Corps made the first walkie-talkies. By the beginning of World War II, the corps had produced the Handie-Talkie, a six-pound one-piece radio that was essential to field communications. Experience with this device, however, highlighted the limitations of vacuum-tube technology. Under rugged battlefield conditions, especially in extreme heat and cold, the instruments were prone to rapid breakdown. Small as the radios were, the virtues of even greater reductions in size were apparent to anyone who had used them in the field, and the problems of heavy, quickly exhausted batteries were a constant annoyance.
The Signal Corps thus expressed interest in transistors right away and even started manufacturing them on a very small scale as early as 1949. Other branches of the military quickly followed, and a number of the first applications of the transistor were in experimental military devices, particularly computerlike instruments. The military services pressed Bell to speed up development and to begin quantity production, and by 1953 they were providing half the funding for Bell’s transistor research and underwriting the costs of manufacturing facilities.
With this kind of encouragement, the researchers at Bell Labs were able to make considerable improvements in transistor technology. The most important of these was the making of the first junction transistor in 1951. Relying on the theoretical work of William Shockley and very sophisticated crystal-growing techniques developed by Gordon Teal and others, the junction transistor utilized the internal properties of semiconductors rather than the surface effects that the point-contact device depended on. In the long run this was a much more reliable technology, and it provided the fundamental basis for all subsequent transistor development. While the making of junction transistors was initially extremely difficult, the result was a true marvel, even smaller than the original transistors, using even less power, and behaving as well as the best vacuum tubes in terms of noise or interference. Soon after the announcement of the junction transistor, Bell began making arrangements to license companies that wished to exploit the new technology, holding seminars and training sessions for outside engineers, and compiling what was known about transistors into fat volumes of technical papers.
By 1953 Bell and the military had finally begun to make use of the transistor, but there was still no commercial, publicly available application. The transistor still meant nothing to the public at large. This changed suddenly, not in a way that affected a large number of people, but in a way for which a relatively small number were very grateful. In December 1952 the Sonotone Corporation announced that it was offering for sale a hearing aid in which a transistor replaced one of the three vacuum tubes. The device wasn’t any smaller than earlier models, but the transistor saved on battery costs. This opened the floodgates to exploitation of the new technology by dozens of hearing-aid manufacturers, and within months there were all-transistor hearing aids that promised to rapidly transform a technology that had been growing steadily but slowly since the 1930s.
As manufacturers redesigned circuits and found other miniature components to fit into them, hearing aids shrank radically. Just as important, however, were the savings on batteries. Despite the fact that transistors cost about five times what comparable tubes did, it was estimated that a hearing-aid user saved the difference in reduced battery costs in only one year. It is no wonder that the transistor took the hearing-aid industry by storm. In that first year, 1953, transistors were found in three-quarters of all hearing aids; by 1954, fully 97 percent of all instruments used only transistors. At the same time, hearing-aid sales increased 50 percent in one year (from 225,000 to 335,000). The transistor was showing a capacity for transforming older technologies that only the most prescient could have foreseen.
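The battery arithmetic is easy to check in outline. The dollar figures in the sketch below are hypothetical, since the article gives only the ratios (roughly a fivefold price premium, recouped in about a year of battery savings); the point is simply how such a payback works out.

```python
# Hypothetical payback calculation for the transistor hearing aid.
# The dollar amounts are assumed for illustration; only the fivefold price
# ratio and the roughly one-year payback come from the article.

tube_price = 2.00                        # dollars per tube (assumed)
transistor_price = 5 * tube_price        # "about five times what comparable tubes did"
price_premium = transistor_price - tube_price

monthly_battery_saving = 0.70            # dollars per month (assumed)

months_to_break_even = price_premium / monthly_battery_saving
print(f"a ${price_premium:.2f} premium repaid in about {months_to_break_even:.0f} months")
```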
The Regency, the first all-transistor radio, made its appearance in time for Christmas of 1954 and was an instant success at $49.95. Other radios quickly followed, and the “transistorization” of consumer electronics was under way. Still, the early transistor radio was largely a novelty, only slowly displacing older, more familiar instruments. Much more important was the application of the transistor to digital computers. The first truly electronic programmable digital computer, the ENIAC, had been completed in February 1946. It contained 17,468 tubes, took up 3,000 cubic feet of space, consumed 174 kilowatts, and weighed 30 tons. Many engineers saw from the beginning the great promise that the transistor held out for computer technology, but it was not until 1955 that IBM began marketing its first transistorized machine. Size was reduced, the air conditioning that the tubes required was gone, and power consumption was reduced 95 percent. It was obvious to all that the marriage of the transistor and the digital computer would be an important and fruitful one.
Still, in 1955 the transistor remained a minor element in the larger scheme of things. Transistor production up to that time totaled less than four million units (over about seven years). That many vacuum tubes were being made in the United States every two working days. And as exciting as the computer might be, it represented a tiny market—only 150 computer systems were sold in the United States that year. Over the next few years, however, transistor production and use expanded rapidly. As more companies entered the business, fabrication techniques were improved, and engineers continued to design circuits that used semiconductor devices.
Nevertheless, the microelectronics revolution might have remained a quiet and limited affair but for two great events, one within the industry, the other a dramatic intrusion from the outside world. On October 4, 1957, the Soviet Union put Sputnik I into orbit around the Earth, ushering in an era of tense and vigorous competition for the mastery of space technology. The already considerable interest of the U.S. government in microelectronics now reached a fever pitch.
With Noyce’s integrated circuits, the revolution revealed its true form.
That same year, a small group of physicists and engineers, former employees of the transistor pioneer William Shockley, set up their own shop and laboratory in a warehouse in Mountain View, California, at the northern end of the Santa Clara valley. At the head of this new enterprise, called Fairchild Semiconductor, was the twenty-nine-year-old Robert Noyce. Within a couple of years Noyce and his colleagues had put together the elements of a new semiconductor technology that would shape the future in ways that the transistor itself never could have done.
In 1958 Fairchild’s Jean Hoerni invented the planar technique for making transistors. Building on processes developed at Bell and General Electric, the planar technique allowed for simplification and refinement of transistor manufacture beyond anything yet possible. It consisted of making transistors by creating layers of silicon and silicon dioxide on a thinly sliced “wafer” of silicon crystal. This layering allowed careful control of the electronic properties of the fabricated material and also produced a quantity of very small, flat transistors in one sequence of operations. More important, however, the process led Robert Noyce to another, far more fundamental invention: the integrated circuit.
Noyce was neither the first to conceive of putting an entire electronic circuit on a single piece of semiconductor material nor the first to accomplish the feat. The idea had been around for some years, and in the summer of 1958 Jack Kilby, an engineer at Texas Instruments in Dallas, built the first true integrated circuit, a simple circuit on a piece of germanium. Kilby’s device was a piece of very creative engineering, but it depended on putting together the circuit components by hand. Noyce’s great contribution was to show how the planar process allowed construction of circuits on single pieces of silicon by the same mechanical and chemical procedures formerly used to make individual transistors. Noyce’s process, which he described in early 1959 and demonstrated a few months later, allowed for the design and manufacture of circuits so small and so complex that the elements could not even be seen by the unaided eye. Indeed, in Noyce’s circuits, the transistors were reduced to little dots visible only under a microscope. Finally the microelectronics revolution had revealed its true form.
No one in 1948 could have predicted what that form would be. Indeed, even in 1962, when integrated circuits first came to market, the extraordinary nature of the changes to which they would lead was only dimly perceived by a few. The integrated circuit overcame what some engineers referred to as the “tyranny of numbers.” By the late 1950s, large computers had as many as two hundred thousand separate components, and even with transistors the construction and testing of such large machines was becoming an engineering nightmare. Still larger computers, with millions of components, were envisioned, especially by military and space planners. The integrated circuit, with its ability to combine hundreds and later even thousands of elements into a single unit, made such complex machines possible. Just as importantly, however, it made them cheap. By the end of the 1960s, computer memory chips that could handle a thousand bits of information were available.
In 1971 the first microprocessor was introduced on the market by Intel Corporation. Sometimes called a “computer on a chip,” the microprocessor demonstrated that sufficiently complicated integrated circuits could be mass-produced and could bring large, sophisticated computers down to such a tiny size and small cost that they could be inserted into every area of life. The home computer, the modern automobile’s instrumentation, the digital watch, the bank’s electronic teller machine, and even the electronic greeting card all testify to the power and pervasiveness of the integrated circuit’s triumph over size and complexity.
The appearance of the integrated circuit little more than ten years after the announcement of the transistor’s invention marked the onset of the next stage in the evolution of electronics, a stage that a quarter-century later has still not run its course. The transistor as a discrete device, as a replacement for the vacuum tube, is still important, but in a more profound sense, the transistor was a transitional technology. It showed engineers and others the way beyond the tube, even though the truly revolutionary paths of electronics in the late twentieth century were to be blazed by other, far more complex inventions.