Why Things Bite Back
Is our technology leading us toward an ideally frictionless and disembodied information society and a world where hunger and disease have finally been vanquished? Or, on the other hand, is it dragging us down in a calamitous decline that will end only in a global environmental apocalypse? Neither, of course, argues Edward Tenner, the author of a bracing new book titled Why Things Bite Back: Technology and the Revenge of Unintended Consequences (Alfred A. Knopf, $26.00). Tenner, a historian and former science editor at Princeton University Press, rejects the idea that technology constitutes an irresistible demiurge dominating human affairs. He also debunks the opposite but equally widespread illusion that given the right dose of managerial expertise, human beings can predict and control the impact of technological change. He believes that technological developments nearly always have “revenge effects”—unforeseen results that create new problems or undo existing solutions to old problems. Why Things Bite Back is a history of the unexpected malfunctions that led to the formulation of Murphy’s Law (“If anything can go wrong, it will”). But it is not a fatalistic prediction of disaster; it is a call for alertness, anticipation, and adaptation. Tenner’s view of technology is nuanced and convincing partly because it stems from his own experience, his efforts to navigate the promise and peril of a technological society in the late twentieth century.
I talked to him last fall in Princeton, New Jersey.
Maybe you could start by telling us a little about your own background and how you went from being a historian to being a science editor and then the author of this book.
I started out with a Ph.D. in European history from the University of Chicago and was one of those people who did not find the right slot at the right time. Instead I took a number of other positions that actually (talk about unintended consequences) turned out to be better for me than if I had published my work on popular uprisings in early-nineteenth-century Germany.
You had worked with William McNeill, the author of Plagues and Peoples, at Chicago?
Yes, that was a tremendous experience. I saw how closely science history, the history of medicine, and biological history were interwoven with political history and social history—especially the decisive role of contagious disease in the European conquest of America. I saw the artificiality of the compartments prevailing in history departments where people believe, “Well, it’s history of science, it’s history of technology, I don’t have to know that. That’s a job for those specialists in that field.” Had I been successful in my job search at that time, I might very well have become that kind of historian. Things that once seemed disasters turned out surprisingly well.
Well, I think the British historian E. P. Thompson said somewhere that all history was the history of unintended consequences. That kind of sweeping assertion raises eyebrows, but it has weight.
A lot. It’s striking to me how little scholarship has actually been generated by that notion, which you would think would be so fecund. Maybe it’s because we live in a culture that sanctifies will and choice and the mastery of fate. We shy away from the implications of this subversive idea—for example, how the Voting Rights Act, pushed for mainly by Democrats, helped lead to the beginning of the resurgence of the Republican party, as it pushed many white Southern Democrats toward crossing party lines.
What was the germ of Why Things Bite Back?
The general idea came as I was pondering the phrase “pushing the envelope.” It intrigued me that no matter how many safety precautions we build into things, some people will find ways to work around them. For example, the early buyers of cars with antilock brake systems tended to be risk-taking drivers who used those systems to drive faster and in more dangerous circumstances and who might thus have even more accidents. I thought that was fascinating behavior. I started to look for other instances of pushing the envelope. Since the 1980s I have kept dozens of files on ideas; I see something in a newspaper or a magazine and save it for a possible essay. These files have lives of their own. I have three five-drawer filing cabinets plus a dozen or two boxes in storage just with folders like that. Pushing the envelope was one progenitor of Why Things Bite Back. Another was the computerization of the 1980s. It made me see that pushing the envelope was not confined to high-risk behavior on the road or in the air. It was everywhere.
And so you began to sidle up to the idea of the revenge effect?
That concept really came much later. First I had a set of behaviors still without a name. In fact I didn’t fully realize it was a set of behaviors. Much social science begins with a theory and looks for evidence to test the theory. Instead I was intrigued by things that still had no name, let alone a theory. These impelled me to find all the theory that could apply and ultimately to build my own.
What pieces of the puzzle did you find in the computerized office?
I realized by the mid-eighties that it had been very hard to foresee how computers would alter the behavior of organizations. Electronics had made vast changes, but not the ones that people had expected. People had believed that networked personal computers would create “the paperless office”; instead they caused paper to proliferate. I wrote an essay that explored the technological reasons for this explosion. I thought that the forecasters of computers, far from being too technical or too preoccupied with the technical, hadn’t paid enough attention to the built-in limitations of electronic hardware, such as the resolution of screens, legibility, and so forth. The personal computer encourages people to write longer documents but is a very inefficient way to read longer documents. I recently downloaded the 1996 Republican and Democratic platforms from the Internet. Good news for people tired of sound bites: The Republican document with appendices runs to sixty-two pages, and the Democratic platform about half that. If you printed out those two programs at the Princeton Public Library, you’d have to pay almost ten dollars for the paper, and if you used an ink jet at home, your cost for very expensive ink-jet ink would be at least half that. And you’d face a ten- or fifteen-minute limit in the public library for reading the stuff on-screen. Information has a way of not staying free.
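[A rough back-of-envelope check of those printing costs. The page counts come from the interview; the per-page prices below are illustrative assumptions, not figures Tenner cites.]

```python
# Back-of-envelope sketch of the platform-printing costs mentioned above.
# Page counts are from the interview; per-page prices are assumptions
# chosen only to show how the quoted totals could arise.
republican_pages = 62            # "sixty-two pages" with appendices
democratic_pages = 31            # "about half that"
total_pages = republican_pages + democratic_pages

library_price_per_page = 0.10    # assumed library printing charge, dollars
inkjet_ink_per_page = 0.05       # assumed home ink-jet ink cost, dollars

print(f"Library printout: ${total_pages * library_price_per_page:.2f}")
# ~$9.30 -- "almost ten dollars"
print(f"Home ink-jet ink: ${total_pages * inkjet_ink_per_page:.2f}")
# ~$4.65 -- "at least half that"
```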
I found other, social reasons for people’s reliance on paper. Many aren’t confident that even tape backups will preserve the integrity of important documents. What if a tape gets magnetized or mangled in a drive? So they use paper as an additional backup.
Some people would argue that that’s a mere vestige of the print culture we are superseding.
The transition argument is one of these characteristic lines of the techno-Babbitts. We are undoubtedly in a transitional period, but we always have been and always will be, and the boosters themselves are always talking about how rapidly things change. So, we’re not soon going to be in a time of real stability. The paradox is that there was a technologically much steadier period when huge organizations like IBM and AT&T regulated and implemented change according to very slow and gradual protocols. They would introduce a new generation of machines and use pricing and other incentives to phase out the old. “Forced migration,” it was called. They would hire the world’s greatest designers to make one telephone instrument that was absolutely optimal, and that was it. Then, maybe twenty-five years later, they would design another optimal telephone. And telephones were built to last a full generation; they were leased. In that environment it was possible to foresee where change was going because it was being managed so closely by what I call deep organizations.
Something like vertically integrated corporations, with their own research departments, their own “deep” knowledge, and so on?
That’s right. There’s an interesting example in the computer industry of how much “shallower” organizations have become. Until now Intel’s microprocessor designs, though ingenious, have not really been unique; they’ve been adaptations of other people’s architecture, of minicomputers and mainframes. Now Intel has reached the limit of that approach and has to establish its own advanced-research department and produce a new generation of designs. The difference between Intel or Microsoft and the older giants is that these new organizations have much, much shorter product cycles, and they have done far less basic research. They are more sensitive to tremors in the marketplace, more vulnerable potentially to competitors. They appear to rule but have less control. So who can foresee the end point of the transition?
So this unpredictability, combined with people’s cultural attachment to hard copy and the technical difficulty of reading from video monitors, undid the dream of the paperless future. This is the sort of pattern you then see emerging in a tremendous variety of fields. In the book you range everywhere from the kudzu vine to the tennis racket and discover that technological innovations introduced with the most benign intentions have left their innovators hoisted with their own petard—unable to accomplish what they set out to do, whether it was to reduce the amount of paper in the office or regenerate the leached-out soil of Southern cotton fields. Are unintended consequences a result of inadequate planning, or are they something deeper than that?
Sometimes there are conflicting plans within the same organization or between rival organizations. The office-equipment manufacturers were as much promoters of the use of paper as they were of paperlessness. After all, it was Hewlett-Packard that increased the speed and efficiency of laser printers and made them the most effective way to present information. It was Xerox that after starting to promote the idea of the paperless office very soon saw that it was really in the document business. What it wanted was the shortest link possible between the computer on your desk and your printer. So the manufacturers (or different parts of the manufacturers) and software vendors were pursuing very different agendas.
And working at cross-purposes?
Yes. I saw lots of other cases in which people didn’t see how complex the technology they were using was and how complex the interactions between it and society were. Sometimes it’s just a matter of scale, what happens when enough people start using something. If enough people have car alarms, for example, and thieves are discouraged from stealing cars in a conventional way, carjacking may be one of the results. The criminal mind will say, “Hey, if I can’t jimmy the car and hot-wire it, I’ll just get a pistol and take it that way.”
One of your key targets is the illusion of control. Yet in fact, there has been a genuine increase in control over many natural events—over medical emergencies, for instance—but we don’t feel more secure. What is your explanation for the unease that seems to pervade discussions of the human condition and the future even in the most industrialized countries?
In the case of medicine, when you increase what you control, you also necessarily push up against what you still can’t control. Take the concept, for example, of pathocenosis. Mirko Grmek, a physician who wrote the standard history of AIDS, has speculated that diseases have a community of their own, are part of a total biological regime. If we start to suppress one group of organisms, the bacteria, not only do antibiotic-resistant bacteria arise, but, if Grmek’s hypothesis is right, we may also open a new opportunity in the body for a totally different group of organisms, viruses (including the AIDS virus), to take advantage of. They’re taking over the turf that we thought we had reclaimed from the harmful bacteria. This idea is unproved, but I think it points to a weakness of increasing control, which is that the more success we have with one approach, the larger the field we open for some other kind of trouble.
In talking about the management of medical and other disasters, you argue that we’ve moved from the catastrophic to the chronic.
As I was writing the book, I was struck by how often the problems of the late twentieth century (or at least the perceived problems), when compared with those of the late nineteenth century, are no longer those of big, sudden, overwhelming forces. We shouldn’t be complacent about the continued loss of life and limb in industrial accidents, but if you look at workers’ compensation statistics, the most serious costs are linked to lower-back pain and carpal tunnel syndrome, conditions that arise gradually from repeated, small insults to the body. The asbestos curtain in theaters once promised safety from catastrophic fire; now asbestos is a symbol of all that is carcinogenic. The response to a perceived catastrophic risk turns out to be a solution that promotes a chronic risk. Global warming and the depletion of the ozone layer result partly from the invention of fluorocarbons, which were designed to keep refrigerators from exploding. That was a serious problem in the 1920s. Disasters mobilize people to find solutions, and those solutions in turn sometimes carry a chronic risk that we find out about only later.
As you mention with respect to the Titanic disaster, unintended consequences can be benign as well as malign. The sinking of the Titanic led to improved communication and navigation in the North Atlantic and, ultimately, safer ocean liners, just as the wreck of the Spanish Armada led to technological improvements. But sometimes those improvements create their own chronic problems.
People are encouraged by the apparent safety of some activity to repeat it more and more often and only later learn that there was some hidden risk. Think of the X-ray fluoroscopes that we used for fitting shoes in the 1950s. Though there was no apparent catastrophic damage associated with them, there was a level of chronic risk. This is difficult to study because we often can understand it only by using statistical tools that are themselves now subjects of controversy. People naturally feel much more uneasy than they did when these catastrophic problems were more common, for at least everybody was working on them and improvement was steady. There were safety catches, brakes, interlocks. There was greater faith in progress because we didn’t need an extra layer of statistics and long-term studies to say whether something was safe or not.
The discipline of risk assessment is in some ways a formal acknowledgment of the problems that you’re describing. But why has Why Things Bite Back found a wide audience outside the risk assessment industry?
Most of all, I think, because the book comes out of my own experience, as an enthusiastic but wary user of technology, which is what many people are in industrial society. We are used to buying something with certain expectations, after hearing certain promises, and then finding all kinds of unexpected difficulties when using it—and sometimes problems arising from its benefits themselves. It’s really an obvious perspective. A number of people have told me that what they like most about the book is that after you’ve read it, the main arguments seem obvious—but obvious in a way they weren’t before.
Your book weaves many strands of thought that have been around a while but have never really been pulled together into a convincing overall argument. How does it fit in with other present-day discussions of the relationship between technology and society?
I have not been mainly a corporate consultant, as some popular forecasters have been. Nor have I run a farm, large or small, as some technology critics have done. Unlike still others in both camps, I am not a career college teacher. Many of these writers have been effective oracles, wise men and women. My role has been different. I wrote the book from the standpoint of somebody who had used technology to do a certain job, scientific publishing. I was an early user of some technologies, especially the Internet, because so many of Princeton’s scientific authors were already there. I was making appointments and corresponding about manuscripts on the Net. I was bringing authors of electronic manuscripts together with the press’s specialists. I saw how many strange things can happen when you start implementing new technologies, some pleasant surprises but also many traps. As a writer I am a stand-in for the reader. I’m constantly referring back to my own life as a user, and I’m always trying out new products. Right now I’m looking for a laptop. I turn my own experiences, and others’, into stories that readers might find intriguing and useful.
There’s no sense that you have an ideological agenda. If you’re driven by anything, it’s curiosity about how things work and don’t work. And you give a name to a source of pervasive, diffuse unease.
It’s not just unease but ambiguity, a mixture of enthusiasm and disappointment and maybe expectation and frustration. Perhaps that’s to the good of industry. I’ve heard that some economists argue that the disappointment of the purchaser plays a very important part in helping drive the market, by bringing consumers back for more.
Grant McCracken, a marketing historian, calls it the Diderot effect. He refers to a little essay by Diderot about getting a new dressing gown as a gift and then finding that his old writing desk looked shabby when he was sitting at it in his new garment, so he bought a new writing desk, and finally he redecorated the entire room because the desk was too grand for these old furnishings. Yet in the end he felt less comfortable in the new surroundings than when he had been working in his old dressing gown at his old writing desk. Perhaps people have that experience in the high-tech world too. They may feel less at ease, maybe even less able to write with the latest multimedia release of their software program than they did with WordPerfect 5.1. Often we reach a technological optimum; something is so refined on a certain level that it’s very hard to improve on it further without making it harder to use.
Doesn’t that undermine the corporate version of progress, the idea of progress that we’re always being fed by TV commercials? Your book suggests that human agency, misjudgment, poor planning, accident, all these things profoundly upset any linear notion of technological progress. You speak to people’s experiences of technology in a way that the advertised version never does.
I don’t think I necessarily attack the portraits of the radiant future that advertisers might be painting. Those are good and necessary sometimes, provided people recognize them for what they are, a certain kind of rhetorical statement. Some of these statements have given us really valuable things, and in some cases we could do worse than to go back to some of the Utopias of the past. There were some genuinely good ideas in, say, 1930s world’s fair fantasies, ideas that perhaps have been ridiculed too much when you consider the kind of suburban sprawl that we have instead. Many of these plans called for more intelligently integrated systems of transportation than America has developed. I don’t dismiss Utopian thinking. And I don’t blame manufacturers, advertisers, for touting what they’re doing. That’s in the nature of their business. People know how to interpret these messages.
But doesn’t the idea of a technological optimum directly conflict with the impulse toward novelty, growth, ceaseless innovation, and planned obsolescence that seems to be at the heart of a kind of market-driven technological change? This is nowhere more evident, it seems to me, than in the computer industry at the moment. Is there an insoluble conflict here?
It’s not necessarily insoluble. We can agree that certain things are, at least at the moment, optimal while others are ready for major changes. One problem for the software industry is the failure to develop any major new application in the last few years, any important new category, with the major exception of some Internet software. Web browsers are wonderful, but not in the same way as the early word processors or spreadsheets were. The industry is in a baroque period of glorious stagnation. Some companies have developed excellent products but don’t have the marketing power to sell them or support them properly. Often these aren’t given a proper chance: for example, Nota Bene, a word-processing program oriented toward writers and academic users. The producers did not establish themselves rapidly on campus, their primary market. Instead of using site licenses to sell whole departments and even campuses, they sold copies one by one. Even at large universities only a handful of people were actually using a program targeted at that very market.
What are the ecological implications of some of the stories you tell?
My views on ecological issues began to take shape right after graduate school. Working with William McNeill on Plagues and Peoples, helping him with the bibliography, was one of my most important intellectual experiences. Not only did it show me the unity of the biological and the historical, but it also made me aware of how radically humanity had been modifying its environment well before modern industrial technology arose. Humanity as a species has altered nature so powerfully that many other organisms might well classify us as a pest. Treatises in environmental ethics could be written around that thought. If other organisms could write books, they would certainly be talking about this pest invasion and what could be done to stem the tide.
So, the problem is humanity rather than modernity.
That’s right, though technology certainly accelerates it. You can destroy forests a lot faster with earthmoving equipment. But on the other hand, good old-fashioned slash and burn could create a lot of havoc even with very primitive technology. Anybody who looks back to the nineteenth century as a more wholesome time should look at some pictures of the nineteenth-century landscape. By the mid-twentieth century, industrialization had reduced the amount of acreage under cultivation and given a new life to forests. It also let us have the leisure to appreciate nature. It provided technology like high-quality binoculars and compact cameras with telephoto lenses, so we could observe birds without shooting them first. The relationship between technology and nature is much more complex than mere antagonism.
If we take a longer historical view and recognize that pre-industrial cultures had their own ways of pillaging the natural environment, what are the implications of this for ecological movements? We don’t want to sentimentalize pre-industrial culture, even though we can appreciate its pastoral landscapes. So what do we want to do?
First of all, we have to recognize that we’re on a technological treadmill and we can’t just step off it and go back to the soil. I’m not dismissing simple living—I think a lot of it is wonderful—but it can be combined with technical innovation. What about a lawn mower that would be ergonomically optimized for the best exercise while you were mowing your lawn? Wouldn’t that seem preferable to people riding on power mowers and then driving to health clubs to get exercise on treadmills? There are lots of things like that still to be developed. Some of them might use new materials, new technology. Nothing says that advanced technology can’t be simple, enjoyable, and beneficial. But I don’t think it’s possible for everybody to have his or her own farm.
But if we are on a technology treadmill, what powers it? I want to get away from the idea of the demiurge, yet that may be what you’re suggesting.
In some ways we power the treadmill. Every technological solution that we use reaches a certain endpoint, and then we have to develop something new; we have to develop a replacement. But we can slow the treadmill down. Take antibiotic resistance. It’s inevitable. But we can retard our exhaustion of the power of an antibiotic. We can avoid using it where it will do no good, especially against a viral infection. We can complete a course of treatment rather than stop when symptoms disappear; this reduces the chance that a resistant strain will survive and spread. Our caution will give us more time to develop new approaches.
I’m still concerned about the lack of human agency in the treadmill metaphor, even though you’ve just acknowledged that we can slow it down. For several decades there was an inevitable quality to the nuclear arms race. Now for geopolitical reasons there no longer is. This is not to say that the arms race has ceased, but it has taken a new form that we couldn’t have predicted during the Cold War. This suggests it may be possible not merely to slow the treadmill but actually to stop it and reconfigure it in some way. The impulse to do the research is still going to be there, because the money is still there and the wars are still there, but those reasons are not as compelling as they once were.
There’s the intensifying research that is needed to build bigger and bigger weapons, but there’s also a kind of maintenance research. If you’re going to have this horrible stuff, then you have to be sure that it works. It reminds me of the lawsuit in which some condemned prisoners charged that the lethal injection their state was planning to use had not been approved as safe by the Food and Drug Administration. What is intriguing to me about the end of the arms race as we used to know it is that it reflects the same shift from the catastrophic to the chronic—from the eyeball-to-eyeball confrontation of mutually assured destruction to the management of multiple crises and terrorist threats.
You have said that “we are learning the limits of intensity”—the limits of pulling out the heavy technological artillery whenever we can.
Yes. We went through a phase probably at its peak just after the Second World War, when it appeared that the answer to all questions was simply to pile on resources, as in the massive application of DDT. This resource-intensive approach to the technology of the period affected social policies too. To solve housing problems, we built massive high-rise apartment complexes. It was not that different actually from the thinking of the Soviets on technology and other research. It just fell under different auspices, and different people benefited. The Vietnam War played a big part in the reorientation of this kind of technological thinking. In Vietnam a combination of superior technology and more of everything was going to assure victory. That didn’t happen, partly because politics imposed limits but also partly because of the limits of the technology itself in that environment. It just couldn’t perform as promised.
What about in medical technology? Is there an awakening to the limits of intensity there?
I think so. When I had a back problem as I was writing the book, I was informed by my primary-care physician that I would almost certainly need back surgery. I went for an MRI scan, and there was a multicolored chart of my spinal cord, and sure enough, it showed an apparently disastrous herniated disk. I was sent with it to a neurosurgeon. After he had compared my MRI scan with my reactions to a few simple tests in the office—I was by then free from pain—he said there was no point to operating. Since then guidelines have come out from the National Research Council that firmly discourage surgery in the great majority of cases of back pain. We have moved from aggressive intervention to conservative medicine, helped by an understanding that many people who have equally alarming-looking MRI scans turn out to have no pain at all.
Is there along with that a growing awareness of the body and its environment as part of a whole system, rather than just the much closer focus on specific disease entities?
There certainly is. But it has been hard for medicine to do much about the undoubtedly important relationships between surroundings and mental states and health. Take carpal tunnel syndrome. Two newsrooms may use the same software, the same computers, the same keyboards, yet one of them may have an incidence of carpal tunnel syndrome several times higher than the other. The reported rate probably has a lot to do with stress and management relations around the office. But it’s tough for a physician to take out a prescription pad and write, “A nicer boss.” So we reach a limit of medical intervention.
Of course, many people are turning to alternative therapies, but it isn’t clear what benefits they bring. We may be on our way back to the four humors theory. The idea of temperament is making a comeback with developmental psychologists noticing that children from a very early age seem to have a personality, a core, a way of looking at the world that remains relatively unchanged. And then there is a lot of extravagant speculation about what the mapping of the human genome is going to accomplish. Some people involved in this project are reviving the illusion of control. They probably should be more modest about what we’ll really be able to get from the project, at least in the next generation.
In spite of the growing awareness of the limits of intensity, there still is a tendency to resort to a rhetoric of complete transformation and Utopian expectation. Futurology is still big business. In some ways Why Things Bite Back is an extended cautionary tale about the limits of prediction. Is there a future to futurology? Should there be?
The biggest problem of many futurists is their ahistorical and antihistorical bias, their assumption that technology is introducing some sharp discontinuity between the nature of our past experience and the environment that we’re entering. Some imply that the human condition is just so different now that none of the old rules apply. (Perhaps historians should be more involved in debates about the future.) This is a very good way of writing off vast human experience. Yet when you go back to some of the predictions made about airplanes or radios in the early twentieth century, they are strikingly similar to those being made now about the Internet. That isn’t to say that it’s wrong to be excited about the Internet. Lots of kids who tinkered with radios went on to important careers in broadcasting. There were kids fascinated by planes who became commercial pilots, military pilots.
I don’t think there’s anything wrong with optimism about technology. I think it’s great and to some extent necessary. People need that to motivate them. But I think while they seek the joy of trying something new and seeing it work, they should also be prepared to encounter both unexpected benefits and unexpected problems. Often the payoffs of new ideas have not been those that were expected.
I’ve been looking into the story of a device called the Rolamite, invented by a man named Donald Wilkes, who worked for Sandia Laboratories. It’s a bearing or switch with two rollers and a band around them. The idea is that by using different styles of band and different cutouts in the band, you can get this little device to do different things, to serve as an accelerometer, for example. There was a great fanfare about the Rolamite in the 1960s in both the professional engineering press and media like The New York Times. Donald Wilkes left his job at Sandia Laboratories, raised money, started Rolamite, Inc., and hired engineers. But the Rolamite turned out to be the second best way to do everything, as one engineer said. And it came at just the time when microprocessors were beginning to do the more sophisticated work that the Rolamite was supposed to handle. At the low end it was too expensive to fabricate to the tolerances needed and still be economical. At the high end it was losing out to the lower and lower costs of solid-state control.
There were just three important exceptions, which go to show how radically unexpected technology can be. One was a postal scale you may still see in garage sales with a kind of twisted band in the front; that has Rolamite geometry in it. The second was the use of Rolamites as accelerometers in thermonuclear weapons, probably because they can’t be spooked by electronic countermeasures. The third one is the automobile air bag. Apparently just about every air bag made has a Rolamite in it, because for an accelerometer at the right tolerance, the Rolamite performs better than any alternative technology.
Air bags were probably the last thing on Wilkes’s mind—they didn’t exist at the time—yet the Rolamite has turned out to be a vital element in their design. Wilkes also told me that his experience with the Rolamite gave him ideas for other inventions with similar geometry. If he had known about all the things that would go wrong when he had his idea for the Rolamite, he might still have his old job at Sandia, but would either he or society be better off?
So when I’m talking about unintended consequences, I’m not suggesting that people should be so timid about things going wrong that they not try anything. I mean only that they should keep in mind that the outcome of what they’re doing—positive or negative—is likely to be very different from what they expected it to be.