Why You Need To Understand Y2K
THE NATURAL PROCESS OF REFLECTING ON THE PASSING century involves much pride in technological achievement. It is, after all, the century of wireless communications, jet travel, space exploration, and computer networks. Marvelous, therefore, is the irony that surrounds all this in the form of the “Y2K Problem” or the “Millennium Bug.” This century of technological triumph, in which the creation of imposing instruments of data manipulation and exchange has been perhaps the most surprising, and least expected, of all our technologies, ends with near panic at the prospect of malfunctioning machines and systems all over the globe, ranging from the elevator down the hall to the worldwide air-traffic control system. Even in late 1999 it is still too early to tell just how far this panic is unjustified hysteria and how much it embodies legitimate concern, but we already know that billions of dollars—and euros, pounds, yen, and the like—have been and are being spent to control the problem. Alongside this is the accompanying flood of warnings, pronouncements, prescriptions, and analyses. Entire shelves of books, reams of articles, and more Web sites than can be counted have been devoted to Y2K. In the midst of this flood, it’s a bit surprising how little attention has been paid to the historical significance of the phenomenon, either to its longer-term importance or to what it can tell us about history and our historical sense of the technologies around us. In fact it provides an extraordinary opportunity to explore fundamental questions about technological change and technological confidence, indeed about the most fundamental relationships between past and present.
The following comment appeared in a guest editorial column in the Washington Post a year ago: “The Year 2000 (or Y2K) Problem is a glitch that makes computers confused about the correct date. It has also caused worries that are now blossoming into full-blown, completely unwarranted hysteria…. Why Y2K? In a frugal but misguided attempt to save space, many computer programs were written to express dates with only two digits; thus ‘98’ for 1998.”
This kind of explanation is a common one. Programmers were “frugal but misguided.” Really? When I asked a historian colleague whose first career was as a programmer in the late 1960s and early 1970s, he agreed heartily with the “frugal” part but certainly not with the “misguided.” Which raises an interesting set of questions: Is the Y2K Problem the result of one or more “mistakes,” steps taken or decisions made that could, with appropriate knowledge or foresight, have been avoided? If so, what mistakes, by whom, and when? If not, is this a completely unavoidable situation, an inevitable product of computer technology? Or does the answer lie in some kind of middle territory—not the fault of mistakes or fundamentally part of the technology of the information age, but instead a product of the values and expectations of modern technological civilization?
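Whatever the verdict on the programmers, the mechanism of the glitch itself is easy to see in miniature. The following is a minimal Python sketch, not code from any actual legacy system; the function and the figures in it are invented purely for illustration:

```python
# Illustrative only: a date stored the way the editorial describes,
# with "98" standing for 1998.
def years_elapsed(start_yy: int, current_yy: int) -> int:
    """Naive elapsed-time calculation on two-digit years."""
    return current_yy - start_yy

# Through 1999 the shortcut works:
print(years_elapsed(65, 99))   # 1965 to 1999 -> 34

# At the rollover, "00" reads as a smaller number than "99":
print(years_elapsed(65, 0))    # 1965 to 2000 -> -65 instead of 35
```

Any calculation, comparison, or sort that treats the two stored digits as the whole year inherits the same ambiguity.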
In all the literature about Y2K, it is widely understood that saving on computer memory usage in the early days of computing is at the root of the situation, but from this point on there is an interesting range of opinions. The suggestions include: Programmers were simply stupid; programmers (or those who paid them) were greedy and shortsighted; and programmers knew this would make for a great money-making opportunity as corporations and governments grew desperate for their services. The real answer lies not with any of these notions of stupidity, duplicity, or conspiracy but rather with the nature of technologies and of the decisions that shape them and their places in the economy and society at large.
Paul Ceruzzi, a computer historian at the Smithsonian Institution who has been exploring the origins of Y2K, notes that technologies frequently just barely work when they are being developed. They are not designed to attain some sort of theoretical optimum but rather inch their way toward useful functionality by small steps that are often crude and tentative. The first electronic digital computers, devised in the mid- and late 1940s, depended on vacuum tubes, not transistors, and they broke down after every 15 or 20 minutes of operation when a tube burned out. A fundamental measure of technical progress for such devices was the mean time between failures, or “mtbf.” Even through the 1950s the mtbf of a computer was often measured in days. These were still tender machines, requiring enormous patience and much care. Just getting them to work at all for any length of time was the operators’ and the programmers’ key aim. They also had very little memory, and their input and output devices were glacially slow by today’s standards and likewise prone to failure.
This fact has been much neglected in discussions about the beginnings of Y2K, but it is fundamental. It is also fundamental to the history of many complex technologies, yet its implications are ignored. If getting a machine or system to work is very difficult, then the primary value in design lies in its working at all, not in its achieving some ideal. Still, once a working design has been achieved and proves itself robust, the features of that design will prove difficult to dislodge. In fact, these features will typically be changed only when they must, and not before.
In the case of early computers, additional constraints pushed the design of both hardware and software in particular, important directions. In the first computers, program and data storage was physical and mechanical, embodied in paper tape or punched cards. The 80-column Hollerith card (or IBM card) was the symbol of computer regimentation for an entire generation, its famous adage “Do Not Bend, Fold, Spindle, or Mutilate” representing the demands and constraints that the computer placed on its users and subjects. With such technical limitations, it is small wonder that every incentive existed, from the beginning, to reduce data entry and size to a minimum.
Even when memory changed to magnetic storage, costs stayed high, and there was a need to keep programmers’ codes short. Magnetic disk storage, introduced in the late 1950s, helped increase computer use and reliability enormously, but in 1963 leasing a megabyte of disk storage on a computer still cost about $175 a month—around $900 in today’s dollars. Compare this with our costs of less than 10 cents a month per leased megabyte on a modern mainframe computer; the difference is roughly 9,000 times. By the early 1970s storage costs had fallen to about $130 per megabyte per month in current dollars, but that was still about 1,300 times today’s rate. Ten years later it was about 22 times what we pay today.
Some rather crude, but still plausible, calculations can gauge just what kind of difference was made by accommodating shorter dates. In the standard American style a complete date is written as 05/28/1999. Early programmers quickly dropped the separators (the slashes) and truncated the year, yielding 052899. This eliminated 40 percent of the original number of characters, and the truncation of the year alone was half the savings. It has been estimated that date fields may occupy as much as 3 percent of all computer data (or at least did in the early years, when data didn’t usually involve extensive text fields). The elimination of the century figures (19) from dates could thus yield savings in data space on the order of 1 percent. In light of the high memory costs of the time, this is not at all a trivial amount, particularly over years or even decades. The decision to truncate dates was a rational strategy for saving scarce resources; to have done otherwise would have been wasteful.
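One plausible way to reconstruct that back-of-the-envelope estimate is sketched below in Python. The 3 percent share of data is the estimate quoted above; the rest is simply counting characters, and the exact accounting is an assumption rather than anything the early programmers wrote down:

```python
full_date = "05/28/1999"     # 10 characters in the standard American style
packed = "052899"            # separators dropped, century truncated: 6 characters

chars_saved = len(full_date) - len(packed)   # 4 characters saved
print(chars_saved / len(full_date))          # 0.4 -> the 40 percent reduction
print(2 / chars_saved)                       # 0.5 -> the century "19" is half the savings

date_share_of_data = 0.03    # estimated share of all data taken up by date fields
century_share_of_date = 2/8  # "19" out of the 8 digits of MMDDYYYY
print(date_share_of_data * century_share_of_date)   # ~0.0075, on the order of 1 percent
```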
The turning of the digits at this year’s end is not exactly an unpredictable event, so the real historical question is not why early code was written with two-digit years but why it persisted. This persistence, more than the original decision to save memory space with dates, is seen as the “mistake” behind Y2K. To be sure, there is clearly some point at which it is a mistake to write computer programs that will fail in a number of years, but that point is indeterminate, and the long life of computer code and of programming practices that fail to accommodate Y2K highlights a number of important lessons.
First, short-term returns almost always win out over long-term solutions, particularly in competitive environments. Second, the persistence of computer code, short date fields and all, is not an accident or error; it reflects a nearly universal, if not in fact necessary, aspect of the way all technologies develop. Third, even as the century runs out and 2000 draws closer, forces remain at work that make the old code more valuable rather than less so. Finally, the economy of using old code and old standards for dates persists even when memory costs are no longer very relevant. These less-than-obvious points are crucial to any appreciation of the lessons of Y2K.
One of the most important commercial breakthroughs in computing history was the introduction in 1964 of IBM’s System/360 computer line. This involved a significant redesign of the mainframe computer, taking full advantage of transistorized printed circuits. IBM designed a range of 360s and associated equipment to serve large corporations, small companies, universities, and government agencies. So expanded were the capacities of the new machine that four-digit years could be accommodated in the basic code, thus avoiding any year 2000 problem. But at this point IBM was in heated competition with Honeywell, and when the System/360 was approaching its release date, Honeywell announced an inexpensive but powerful new machine designed to emulate the older basic IBM model, the 1401. This emulation called for two-digit year coding and all the other memory-saving tricks of the older computers.
The Honeywell H-200 began selling well, and IBM feared a significant loss of sales. The company responded by accelerating efforts to develop hardware that would allow the System/360 to emulate the 1401, so customers could continue to use their old programs. This worked brilliantly from both an engineering and a commercial point of view, and IBM’s dominance of the mainframe computer market was secured. But the emulation of the 1401—what we now call “downward compatibility”—locked in the two-digit year codes for another generation. One important lesson here is that short-term advantages, in the real world of business and engineering, almost always triumph over long-term ideals. This should not surprise us one bit, but we overlook it all the time in our interpretations of modern technology.
Were computer programmers aware of what they were doing? Did they believe that the programs they wrote in the 1960s were not relevant to what would happen 30 or 40 years later? They certainly were aware that computer code is very valuable and very persistent; programmers learn early to recycle old code whenever they have the chance. Not only does this save initial effort, but it also saves debugging time down the road, since the more tried and true the code you use, the less the chance of new errors in a program. Just getting a computer’s programs to work at all is typically the programmer’s core mission; anyone who has memories of debugging even a simple BASIC program will appreciate this fact. This leaves very little room for the niceties of worrying about the implications of recycled code or what might happen a few decades away.
As the paleontologist and historian Stephen Jay Gould has pointed out, this is essentially how nature works in the course of evolution. New species always come from old ones, and in the course of evolution useful patterns and forms persist from species to species. This persistence is economical, although it often carries with it unuseful consequences. The nipples on a male’s chest have no utility whatsoever; nature just happens to operate with a single human plan, which is, in the course of sexual development, differentiated into a useful female variety, leaving the male nipples as mere artifacts. The appendix, too, is a historical marker—a bit like old computer code.
There are other ways in which the past lives in the present, even in the rational, planned world of technology, and appreciating this is important too. One of the most notable technological examples, adduced by Gould himself, is the standard QWERTY typewriter and computer keyboard. As the Stanford economic historian Paul A. David first pointed out, the QWERTY keyboard is suboptimal, in the sense that it is not designed to speed up typing. This is because when Christopher Latham Sholes built the original practical mechanical typewriter around 1870, he discovered that if adjacent keys were struck consecutively, they were likely to jam. The sensible design solution was to determine what letter pairings were most common in English and separate them in the keyboard layout. The resulting QWERTY keyboard, introduced by the Remington typewriter makers, actually increased overall typing speed, but by reducing mechanical jamming rather than by enhancing the typist’s ability to hit keys fast. As typewriter mechanisms and construction improved, first in mechanical devices and later in motorized and then electronic typewriters and word processors, the advantages of the QWERTY layout diminished and then disappeared. Alternative designs have been introduced over the past century, but to little avail.
David refers to the phenomenon by which long-gone historical circumstances determine current conditions as “path dependence.” This simply means that, as he puts it, “where things are heading depends upon where they have been.” Gould, for his part, breaks the phenomenon down into two elements, which he calls “contingency” and “incumbency.” Contingency refers to the fact that the path of development is often determined by specific circumstances not in any particular way inherent in the system but the product of chance, or at least of a confluence of conditions quite unpredictable from the initial state of the system. Incumbency refers to the fact that things tend to remain as they are unless forced to change. In David’s language, some systems are governed or influenced by positive feedback, meaning that a given tendency tends to reinforce itself over time. This is particularly important language for economists, for they tend to describe economic systems in terms of negative feedback, focusing on self-correction, or the achievement of equilibrium through competing and countervailing forces such as supply and demand. Their emphasis on negative feedback, David points out, leads to ahistorical explanations for things, because in a system that is constantly self-adjusting, the past states of the system are not very relevant. Positive feedback, on the other hand, makes history very relevant indeed.
The Y2K problem is clearly an example of incumbency at work: The two-digit year code was a good solution to a very real technical problem (just as the QWERTY keyboard had been), but it outlived its usefulness once computer memory became cheap (just as the keyboard had done with the coming of electric typewriters, not to mention computers). It was difficult to wean ourselves from the convenient shorthand of the two-digit year until near panic began to move things.
David has pointed out that just as much to blame for the Y2K crisis as initial programming decisions is the modern engineering strategy known by the acronym JIT, for “just in time.” This attempts to maximize productivity by putting off actions for as long as possible. Once resources have been allocated to a particular expenditure or set of actions, they cannot be recovered, goes the logic, so if circumstances change or better solutions emerge, those resources have been squandered or at least spent less efficiently than they might have been. So JIT engineering tells us to wait until alternative actions are no longer possible. To some degree this often happens anyway, even without conscious decisions, for a competitive system will almost always favor relatively certain short-term returns over more problematic long-term ones.
This tendency to think only in the short term is encouraged by another key element in the Y2K story, the belief, or at least hope, that technological change will sweep away old problems. The twentieth century has seen its share of shocks to technological optimism, but these have generally proved limited and short-term. While the assumptions of inevitable economic and social progress that accompanied Victorian technological swagger were dealt mortal blows in our century, confidence in technology itself has rarely been challenged until our own generation. The apogee of this confidence can be conveniently marked—at least for Americans—by Apollo 11’s landing on the surface of the moon 30 years ago. Until the Challenger disaster, in 1986, the technologies of the space age were the consummate symbols of confidence, even in the face of occasional setbacks.
It was in the midst of such assurance that the key Y2K decisions were being made. It seems a little startling to think of it now, but in 1969 computers were still not a very large part of the scheme of things, at least not by today’s measure. They were in their adolescence, if not their infancy, and the logic of history to the computer makers of the day was clear: Computers and programs could be expected to continue to change radically. The plummeting cost of memory and of other elements of the computer did indeed transform the technology over the next decades, but it hardly made obsolete all the key elements of the system. In fact, the increasing importance of computers made ever more valuable certain of those elements, particularly reliable computer code and dependable and flexible software. Even as the year 2000 approached, good old code became more, not less, valuable. Why else is Microsoft, and not IBM, the corporate gorilla of the computer age?
This raises the question of the relationship between technological improvement, on the one hand, and obsolescence, on the other. Historians like to see change in complex systems in terms of an advancing front with occasional salients and reverse salients, which assumes that the progress of the vanguard will inevitably entail change all along the line. This is not, in fact, always—or maybe even generally—the case. It is not at all unusual for systems to accommodate improvement in some areas and not in others. The unimproved sectors are not necessarily reverse salients, awaiting catch-up by inventors and developers, but often are adequate, useful, economical elements of the system that resist obsolescence. But this is not part of the usual view of change we carry around with us. Indeed, incumbency is a fundamental fact of technological life and change.
We tend to grossly understate the rational, economical, sensible conservatism of our technological world. This is particularly true in our characterization of the technologies of the twentieth century. There is a widespread belief, almost desperately clung to in some quarters, that technological change is necessarily an accelerating phenomenon and that change in our own time is thus faster and more thorough than at any time before us. This is a fundamental element of our technological confidence: If change is ever faster, then the likelihood that “problems” will fall before the assault of our scientists and engineers is naturally always increasing.
The idea of accelerating change is older than the century. The anthropologist Lewis Morgan wrote in 1877 that “human progress, from first to last, has been in a ratio not rigorously but essentially geometrical.” Almost 50 years later the sociologist William F. Ogburn came to a similar conclusion, and Ogburn’s pupil Hornell Hart wrote in the mid-1950s that “the reality of acceleration in technological growth can scarcely be doubted by any intelligent person who is acquainted with the facts.” The sense is that invention by its nature is a kind of multiplier effect and therefore must increase geometrically.
This sort of claim is all around us. In a piece in London’s Daily Telegraph last year Peter Cochrane, a British Telecom research manager, proclaimed: “We are surfing a wave of change that is getting bigger and faster than ever before.” Earlier this year The Economist discussed the dynamics of innovation in Schumpeterian waves of “successive industrial revolutions” with the conclusion that the cycles are shortening—from Joseph Schumpeter’s 50 to 60 years to something closer to 30 years today. The notion that change is ever swifter and more profound as we head toward the future is so commonplace as to hardly require comment.
The danger of this idea is made wonderfully apparent by the Y2K problem. To be sure, the first makers and users of digital computers in the 1950s and 1960s did witness breathtaking change, and certainly machines and practices in 1970 bore little resemblance to those of 1950. The computer of 1990 was also different from that of 1970, but the differences were more a matter of scale than of kind, and while scale (memory, speed, interconnections, etc.) counts for a great deal in computing, the change itself is not of the same character as in the early years of the technology. The Y2K problem brings home powerfully the real implications of differing qualities of technological change. The sizable literature on the problem uses the term legacy systems to refer to computers and programs that are not Y2K compliant because of their retention of old design practices and assumptions. But legacy simply means that history is still with us.
For instance, the history of COBOL, the programming language behind many of the systems causing concern, is very much with us. COBOL was devised in the late 1950s and early 1960s, largely under U.S. government sponsorship, to provide businesses with a publicly available programming language that could be adapted for many different computers. Its core was a compiler that Grace Murray Hopper had written in the early 1950s for the UNIVAC computer. One of COBOL’s great distinctions was that it made possible the use of ordinary English expressions in instructions and data definitions, but its significance for the Y2K story lies in its success and the legacy that it left.
Used in the computers of all major companies, mandated by the U.S. government for its machines, relatively easy to learn and write, COBOL became, to use Gould’s term, incumbent, and despite the great technical changes of the next several decades of the computer age, its programs and their derivatives remained everywhere. But COBOL had been written in the days of punched cards and of 10-cents-a-bit memories. It could, particularly thanks to something called the PICTURE clause, handle date fields of any length, but programmers took advantage of this only when they had to (elders of the Mormon Church, for example, needed it for genealogical records). Thousands, and then millions, of lines of COBOL code were written with two-digit years. A few of the COBOL designers pointed out the limitations of this, but standardization, not adaptation, was the spirit behind its use. The Pentagon mandated the two-digit year in all its COBOL code, and the standard—at first a compelling economy, later an anachronism—became nearly universal. Thus are legacies created in the computer world.
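To picture what those records looked like, here is a hedged illustration in Python rather than COBOL; the fixed-width layout and the names in it are invented, and only the two-digit year convention comes from the account above. A PICTURE clause fixes the width of each field, and once the year field holds just two digits, anything that compares or sorts on that field misorders records across the century boundary:

```python
# Invented fixed-width records: a 10-character name followed by a
# 6-character date field in YYMMDD form, in the spirit of a COBOL
# PICTURE layout with a two-digit year.
records = [
    "DOE JOHN  981231",   # December 31, 1998
    "ROE JANE  000101",   # January 1, 2000
]

# Sorting on the YYMMDD field puts the year-2000 record first,
# because "00..." collates ahead of "98...".
for rec in sorted(records, key=lambda r: r[10:16]):
    print(rec)
```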
Our technologies always tend to be compromises rather than ideal systems. We need to appreciate the sheer difficulty of simply getting something novel to work for the first time, and to work in a manner the market will tolerate and accept. Then we can better understand what our technologies don’t do as well as what they accomplish. The made world is not the product of plan or logic, though these have their place. It is much more the product of historical process. Our technologies contain within themselves their stubborn histories, and they are unimaginable and unfathomable without an understanding of them. At century’s end our real challenge may be to muster the candor and honesty needed to confront this simple fact.