The Antique Machines Your Life Depends On
WHY CREAKY OLD 1960S-ERA mainframes are still being used to keep air travel safe
DURING THE 1960S AN OVERBURDENED Federal Aviation Administration (FAA) sought to ease the work of its air-traffic controllers by introducing computers. Like any government agency, it awarded its contracts to the lowest bidder. The system that resulted was obsolescent almost from the start, and when the FAA’s managers tried to improve it, they couldn’t. They could only change bits and pieces at a time, which left the traveling public relying for safety on a makeshift patchwork. In most industries today’s computers are as different from those of the 1960s as a Boeing 767 is from a Curtiss Jenny. Yet much of the hardware from the FAA’s original system remains in use, and the obstacles to even small changes, let alone a complete overhaul, have only gotten more formidable.
The FAA’s most basic task is to keep airliners from colliding. This was not so hard in the early days, when planes were slow. Pilots could follow the rule of “see and be seen,” watching out for one another like motorists on a freeway. But jet airliners were too fast for this. At jet speeds—550 miles per hour or more—a pilot might have a clear view of an onrushing airliner yet be unable to react in time.
Through the 1960s see-and-be-seen remained the rule in the immediate vicinity of airports, where even the latest jets flew slowly during takeoffs and landings. But away from the airports, between major cities, everyone relied on radar. Each plane showed up as a blip on a radar screen, and while radar couldn’t determine the planes’ altitudes, the array of blips showed the locations of the aircraft, following them as they flew. Watchful controllers checked to see that they all remained safely separated. Pilots no longer looked out the window to determine where it was safe to fly. Instead they received radioed instructions from controllers on the ground, placing their passengers’ lives in the hands of these unseen specialists.
BY ITSELF A RADAR SCREEN showed nothing more than an array of slowly moving blips. Controllers had to know the identity of each blip and its speed, altitude, and direction. To identify the blips, they relied on small movable plastic tabs called “shrimp boats,” writing an airliner’s flight number on each tab. Controllers placed these bits of plastic alongside the blips, directly on the radar screen, and moved them by hand as the blips changed position. The radar screens were large and horizontal, providing room for plenty of blips and shrimp boats while ensuring that the latter would not slide off.
To learn what each blip was doing, controllers used flight progress strips. Each of these was a standard form printed on a narrow strip of paper about nine inches long. Blocks on the form held the necessary information in a standardized format: the flight number, type of aircraft, departure airport, departure time, route of flight, air speed, requested altitude, and destination. Each such strip corresponded to a particular airliner. A controller sat in front of a rack that displayed these strips and kept rearranging them in order of altitude and speed.
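Seen with modern eyes, each strip was simply a small structured record. Here is a minimal sketch in Python of what one held; the field names are illustrative inventions, not the FAA’s own form layout:

    from dataclasses import dataclass

    @dataclass
    class FlightStrip:
        flight_number: str          # e.g. "TW 840"
        aircraft_type: str          # e.g. "B707"
        departure_airport: str
        departure_time: str
        route: str                  # filed route of flight
        airspeed_knots: int
        requested_altitude_ft: int
        destination: str

Every time a controller filled out or corrected a strip, he was in effect updating one of these records by hand.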
By using “strips and blips” and keeping watch on the radar screen with its shrimp boats, a controller could maintain a clear understanding of his air traffic. However, no single controller could handle all the traffic around a busy airport or within a large block of controlled airspace. Each controller dealt with only a sector, a portion of airspace, relying on other controllers to handle the other sectors. When planes entered his sector, fellow controllers would hand him the appropriate flight strips, and when planes left his sector, he would hand those strips to those sectors’ controllers.
This system worked well as long as no one had to handle too many aircraft. Some air-traffic centers had as many as fifty controllers, which helped spread the workload. But as early as 1956, before jets entered service, Business Week noted that “piston-engine airplanes are already moving so fast that these slips of paper have to be handed from controller to controller at top speed.” Air traffic more than doubled between 1955 and 1965, leaping from 43 million to 103 million passengers, while the speed of aircraft increased markedly. Yet the FAA couldn’t ease the controllers’ workload by hiring more of them and making their sectors smaller. If a sector was too small, its air traffic would change too quickly for a controller to follow, particularly when he was losing his sharpness late in a long working day.
But help was on the way by 1965. The first improvement was to make the radar blips more visible. Early radars had simply bounced their beams off airplanes in flight, amplifying the weak radar returns to create the blips. But by 1965 airliners were carrying transponders as onboard equipment. These responded to a transmitted radar beam by returning an amplified pulse, which showed up brightly on a radar screen.
There was considerable interest in a new type of transponder that would send out a coded string of pulses reporting a plane’s flight number, speed, and altitude. A computer would decode this information and display it on the radar screen using alphanumerics. These displays of information, presented in highly compact form, would replace the shrimp boats by providing a label for each blip. Each label would follow its moving blip automatically.
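The label itself would be nothing more than a compact string built from the decoded transponder report. A rough sketch of the idea, assuming the report has already been parsed into its fields; the formatting shown is invented for illustration, not the FAA’s:

    def make_label(flight_number: str, altitude_ft: int, groundspeed_kt: int) -> str:
        # Altitude is shown compactly in hundreds of feet, as on a modern data block.
        return f"{flight_number} {altitude_ft // 100:03d} {groundspeed_kt}"

    print(make_label("TW840", 31000, 480))   # -> "TW840 310 480"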
With the blips labeled, a controller could keep track of many more of them. This would help him keep pace with burgeoning air traffic and increasingly crowded airspace. In addition, by using this system, the radar display would effectively become three-dimensional. A blip’s position on the screen would show its plane’s location, while the attached label would tell its altitude.
The FAA experimented with such arrangements during the early 1960s, but it took a fatal accident to get things moving. On December 4, 1965, two airliners collided over Danbury, Connecticut. One of them, a Boeing 707, lost twenty feet of a wing along with its hydraulic system. It nevertheless managed to stagger the forty-five miles to Kennedy Airport and made a safe landing. But the collision smashed the tail of the other plane, a Constellation, which pancaked to a crash landing on a hillside. It broke apart and burst into flame, yet its breakup opened escape routes for the passengers. Of fifty-four people on board, only four died. An FAA regional director said, “It’s fantastic that people walked away. The pilot must have done a wonderful job.” (The pilot, Charles J. White, was one of the four who died.)
Still, the FAA could not trust to such luck as a matter of routine. Following this collision, it faced an urgent need for new equipment that could improve air safety in the critical New York area, which was the nation’s busiest. Fortunately this agency was ready to pull a rabbit from a hat, and the hat was in Atlanta. A year earlier that city’s airport had received an experimental system called ARTS (Automated Radar Terminal System). It amounted to no more than a basic computer linked to a single radar installation, but it could, as planned, label blips for a three-dimensional overview.
THE FAA’S DEPUTY ADMINISTRATOR, David Thomas, ordered this equipment to be duplicated in New York, where it would serve a new radar center. This brought together under one roof the radar controllers who had previously worked at La Guardia, Kennedy, and Newark, New Jersey, airports. ARTS now allowed these controllers’ radar to offer a comprehensive view of all air traffic within the New York metropolitan area. Without ARTS such a metropolitan view would have swamped controllers by presenting them with far more blips than they could have handled with shrimp boats. But ARTS, tagging each blip with an alphanumeric label, allowed controllers to track many more of them at a time. This made the metrowide overview into a useful tool instead of a source of confusion.
Why was this important? Until then a controller’s radar had shown only the traffic near his own airport. This kept the number of blips within bounds, but there was no way to coordinate air-traffic control at New York’s three major airports. A controller in the La Guardia tower might watch an outgoing airliner with great care, unaware that it was on a collision course with another plane inbound to Newark. Each airport had specific routes for approach and departure, designed to prevent such collisions, yet airliners sometimes strayed from those routes. This had brought a particularly bad collision in 1960, over Brooklyn, that had claimed 136 lives. With its metrowide view ARTS offered an extra margin of safety.
ARTS thus represented one path whereby computers could assist air-traffic control. A second path involved the flight-progress strips. Their data initially came from pilots’ flight plans, which aircrews filed before takeoff, but inevitably the data needed updates to reflect delays or course changes encountered en route. Controllers therefore continually had to correct the numbers on existing strips or fill out new ones that reflected the updates.
But a computer could receive and store flight-plan data and updates for every plane within a wide region, printing out new strips every time the data changed. The computer also could respond intelligently to the data, tracing a plane’s anticipated route on a stored map showing the nation’s sectors of airspace as if they were so many counties. The route would pass through a predictable set of sectors, with each one assigned to a specific controller. The computer thus could print out the latest version of a flight strip for every controller who wanted it. Each of these people would then receive his strips from a printer at his workstation, instead of having his colleagues hand them to him like hot potatoes.
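The bookkeeping behind this is straightforward in outline. A minimal sketch, assuming the sector map has been reduced to a lookup from route fix to sector and from sector to controller position; every name below is hypothetical:

    # Hypothetical sector map: which sector owns each route fix,
    # and which controller position works each sector.
    FIX_TO_SECTOR = {"JFK": "N90", "SBJ": "ZNY-42", "PSB": "ZOB-17", "ORD": "C90"}
    SECTOR_TO_POSITION = {"N90": "pos-3", "ZNY-42": "pos-7", "ZOB-17": "pos-2", "C90": "pos-5"}

    def positions_needing_strips(route: list[str]) -> list[str]:
        """Return the controller positions that should get a fresh strip for this flight."""
        sectors = []
        for fix in route:
            sector = FIX_TO_SECTOR.get(fix)
            if sector and sector not in sectors:
                sectors.append(sector)
        return [SECTOR_TO_POSITION[s] for s in sectors]

    # A flight filed JFK -> SBJ -> PSB -> ORD would print strips at pos-3, pos-7, pos-2, and pos-5.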
The success of ARTS whetted the FAA’s appetite, and in 1967 it gave out contracts for a comprehensive system that would process both flight-strip and radar data. At this point the FAA did not aim to provide these services at airport control towers but rather between airports, at its regional air-traffic centers. The nation had twenty-eight such centers, each controlling traffic within a block of airspace the size of California or Montana. Within these vast swaths air controllers had the responsibility of using radar to watch aircraft en route, far from airports.
IBM WON A CONTRACT TO PROVIDE both the computers and their programming. Burroughs received a separate contract and proceeded to build digitizers that would translate radar and transponder signals into language the computers could understand. Raytheon came in as a third contractor, building display consoles. These looked like radar screens, but their blips were actually computer-generated.
IBM’s task was particularly difficult. To create its nationwide map of sectors, the software had to know the boundaries between blocks of airspace viewed by different radar screens. When a blip with its attached label approached the boundary between two such blocks, the system had to execute a handoff. The first controller would push a button, causing the blip’s label to blink. It would continue to blink until the new controller pressed a key to confirm the transfer.
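The handoff amounts to a tiny two-party confirmation protocol, and the blinking label is simply its intermediate state. A toy sketch of the logic, not IBM’s implementation:

    class Handoff:
        """One flight being passed from one controller to the next."""
        def __init__(self, flight: str):
            self.flight = flight
            self.offered = False     # first controller has pushed the handoff button
            self.accepted = False    # receiving controller has pressed the acceptance key

        def offer(self):
            self.offered = True

        def accept(self):
            if self.offered:
                self.accepted = True

        @property
        def label_blinking(self) -> bool:
            # The label blinks from the moment of the offer until the acceptance.
            return self.offered and not self.accepted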
Besides all this, the computer had to know the difference between blips that represented airplanes and radar returns from airport structures or other buildings, filtering out the latter. It had to accept radar inputs from a number of sites, working with existing equipment of disparate types that had come from the FAA, the Air Force, and the Navy. The computer had to create radar mosaics, taking data from several radars operating simultaneously, evaluating the data for validity, and selecting the most accurate information for display to the controller.
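Building such a mosaic comes down to choosing, for each target, the most trustworthy of several overlapping reports. A minimal sketch under the simplifying assumption that each report already carries a single validity score:

    from dataclasses import dataclass

    @dataclass
    class RadarReport:
        site: str        # which radar saw the target
        x: float         # position in the mosaic's common coordinates
        y: float
        quality: float   # validity score: signal strength, recency, and so on

    def mosaic_pick(reports):
        """Keep only the most accurate report for display to the controller."""
        valid = [r for r in reports if r.quality > 0]
        return max(valid, key=lambda r: r.quality) if valid else None

The real system, of course, had to do this continuously for hundreds of targets while reconciling radars of disparate types.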
All this meant that IBM’s software had to incorporate a great deal of knowledge about the FAA’s procedures and practices. This brought problems, for the programmers didn’t understand air-traffic control and the controllers didn’t understand programming. At one point IBM had five hundred programmers debugging its software. When complete, it contained more than 475,000 lines of code, far more than any previous computer program.
These efforts focused on the FAA’s regional centers, with their California-size blocks of airspace. The FAA also needed similar systems for city airports and for control centers serving metropolitan areas, such as the original ARTS center in New York. This is where the agency made its big mistake.
IN FEBRUARY 1969 IT CONTRACTED with Sperry Univac to build a new version, ARTS III, for use at city airports and metropolitan centers. Like IBM’s system, Sperry’s processed both radar and flight-strip information in real time, producing broad radar overviews with each blip neatly labeled. But right at the start ARTS III featured a suite of computers that were already approaching obsolescence. These bulky mainframes had less power than the personal computers of the 1980s would have.
The FAA was in the business of looking ahead. Its officials knew that eventually ARTS III would need better hardware to cope with continuing growth of air traffic. And Sperry was a major player, at least in 1969. It had beaten IBM to the computer market and had launched the commercial industry by selling its Univacs early in the 1950s. (In time it would merge with Burroughs to form Unisys, an electronics powerhouse.) So Sperry seemed an entirely reasonable choice to build the next generation of ARTS. No one knew that the company would fade in importance and fail to build the powerful mainframes that the FAA would need.
After 1970 the nation’s regional air-traffic centers received elements of the IBM system at a rapid pace. Computer processing of flight-strip data reached completion in February 1973, when the last regional center received this capability. Radar data processing came to all these centers by August 1975. At major airports installation of Sperry’s ARTS III also went ahead quickly. Chicago’s O’Hare received the first setup late in 1970. The sixty-third, completing the program, went in at Dallas-Fort Worth, also in August 1975.
Two weeks later, a regional center in Miami became the last to receive radar data processing. Acting FAA Administrator James Dow went to that center to inaugurate the complete system. He called the occasion “one of those rare times when we can talk, without exaggeration, of reaching a milestone.” One of the air controllers added that the new equipment was like “driving a Cadillac instead of riding a bike.”
These computer systems helped the nation cope with its continuing rapid growth in passenger traffic, from 169 million passengers in 1970 to 297 million in 1980. Yet while the FAA now had plenty of equipment, it was another question altogether whether the agency had enough controllers to use it. The FAA tended to treat its people harshly, and it had always been able to get away with it. The work of air-traffic control had much appeal to air-minded young people. The FAA routinely received twenty applications for each position it filled.
Like the Marines, professional controllers saw themselves as an elite group, the few and the proud. “We are the best in the business,” said one. “We are the front line, and we make it work. It sounds macho, but it’s true.”
“I love it,” said a colleague. “When it gets down and dirty, and I’m turning and burning twelve planes, I get on a high. It’s addictive. It’s an ego thing.” A man with ten years of experience added that “you had to be the best controller in the facility, or well on your way to claiming the title. The Clint Eastwood syndrome was alive and well where we worked.”
David Jenkins, a Boston University behavioral scientist and the author of a study of controllers, described them as independent people with extreme self-confidence. They needed it, for their responsibility was huge. “You could never admit you had any limitations,” cautioned a ten-year tower chief. “If you did, and they got reported, you were in bad trouble. What’s much worse were the errors you didn’t know about. You got scared. We were always running scared. You had to believe in the inevitability of a mistake; otherwise you got too gung ho. After an incident [such as a near-miss that might have become a collision] you were never any good. You worked traffic, you stayed cool, and you puked your guts out in the bathroom afterward.”
The job definitely was not a matter of working nine to five and then heading home for a pleasant weekend with the family. Controllers often worked ten hours a day, six days a week, with the overtime mandatory. No rule required advance notice of compulsory overtime; an unexpected call-up on Saturday was par for the course. Controllers with medical problems held on to their jobs until overwhelmed by stress. Then the FAA dismissed them.
A union, the Professional Air Traffic Controllers Organization (PATCO), showed increasing militancy beginning in 1968, and in 1981 PATCO’s president, Robert Poli, issued far-reaching demands for shorter hours and more pay. The FAA offered much less, and PATCO responded with a strike. That was illegal, for the controllers were federal employees, forbidden to strike or hold a walkout. President Reagan, though elected the previous year with PATCO’s support, issued an ultimatum: Go back to work or lose your jobs. Most chose to stay out, and the FAA promptly fired 11,345 of them. They would never be rehired.
Reagan’s action amounted to a gamble, betting that the new computerized systems would allow the FAA to maintain air safety with a greatly reduced staff. Despite the firings, it still had nearly 5,000 employees with whom to run the nation’s airways, including nonstrikers, supervisors, military controllers, and people who had never joined PATCO in the first place. In addition, the FAA stretched its limited resources through a practice known as flow control. This amounted to accepting that the system could handle only a restricted number of flights, with others being delayed on the ground or canceled entirely. It worked; passengers continued to fly.
The FAA then turned to the task of training a new generation of replacement controllers. This took a while. In 1985, four years after the strike, the agency had only two-thirds as many fully qualified controllers as in 1981. Yet the system was handling 50 percent more passengers. Mandatory overtime helped. In 1985 this smaller work force put in nearly a million hours of overtime, more than twice as much as the larger staff of 1981. The burdens were severe, but the harried controllers could console themselves by reflecting that at least they still had their jobs, and the strikers didn’t. In the control towers those strikers had once been knights of the air, even though many had no more than a high-school diploma. Tossed to the vagaries of the job market, they were making new lives in a host of less glamorous occupations: selling cars or insurance, driving trucks, working as file clerks, bartenders, bill collectors, or chimney sweeps.
FOR THE FAA THE LACK OF staff might have brought a loss of safety and a surge of crashes. In fact safety actually improved. During the three years of 1978 through 1980, before the strike, America’s airlines sustained 382 deaths of passengers and crew. For 1983-85, when the system was recovering after the strike while coping with many more passengers, the total was 216. Only about one passenger in five million was getting killed.
How did this happen? Flow control helped, restricting the number of planes in the air to no more than what people and equipment could handle. In addition, the FAA increased the required distances between airliners. It believed, correctly, that even if controllers were about to keel over from fatigue, airplanes would still fly safely if each one was surrounded by much more empty airspace than usual.
Yet as the FAA trained new controllers to work with the existing system, it also pursued a sweeping new plan that would again rebuild the airways, with advanced computers and greatly improved equipment. As FAA Administrator J. Lynn Helms wrote in 1982, “The present system does have serious limitations. It is labor intensive, which makes it expensive to operate and maintain. Even more important, it has very little ability to handle future traffic growth or future automation needs. These limitations result from an aging physical plant and inefficient procedures and practices. For instance, the present system still has many vacuum-tube systems.”
HELMS’S PROGRAM, THE NATIONAL Airspace System Plan, got under way in 1982 and sought to do much more than advance beyond vacuum tubes. Radar installations would take on new tasks, such as watching for small but severe windstorms and keeping track of aircraft on fog-shrouded runways. Existing data links and weather-reporting services would receive substantial improvements. Airports would acquire new transmitters to guide aircraft to bad-weather landings. Radars and computers of the 1970s would give way to better versions.
Helms and his successors hoped to prevent ever-mounting traffic from overloading the system. But they soon came up against a big problem: software. Major software packages can run to a million lines or more. But under the best of circumstances, a skilled and experienced air-traffic control programmer can write four to five lines per day of fully validated and documented code. This statement may seem absurd to anyone who has written a program of a hundred lines or more and gotten it up and running in a single day. However, the FAA’s real-time systems face very stringent requirements, and their accuracy is literally a matter of life and death.
Because of the system’s complexity, no single programmer can hope to write more than a tiny fraction of the whole. Each person’s contributions must dovetail with those of many others, and the complete codes must then stand up to stringent tests. The few-lines-per-day figure arises when a software specialist writes a thousand lines in the span of a week and then spends an entire year in debugging and development. These difficulties were already clear during the late 1960s, when IBM needed five hundred programmers to create the FAA’s initial 475,000-line code. Since then the agency has shown that it will go to almost any length to avoid rewriting its existing codes.
These programs serve three types of air-traffic centers: control towers at individual airports, metropolitan centers covering large areas around cities, and regional centers that handle huge blocks of airspace. The regional centers were the focus for the FAA’s first major computer effort, and here the software problem has remained manageable. The reason lies with the contractor, IBM.
The code has grown through the years from 475,000 lines to 1.23 million, in a mix of assembly language and a language called Jovial. Assembly code is clumsy and hard to use, while few people still know how to work with Jovial. Nevertheless, IBM has maintained software compatibility across the decades. The regional centers initially used System/360 computers dating to the mid-1960s; they now use the IBM 3083, which has greater power and far more memory. Because the two are compatible, the software has carried over with little difficulty. The new computers went in between 1986 and 1988, speeding response by a factor of ten and boosting the number of planes each regional center could track from 400 to 3,000.
The metropolitan centers have encountered substantially greater difficulties. They use ARTS III, and here the standard software has the form of a proprietary code known as Ultra, written in assembly language. The standard computer, the Sperry Univac 8303, dates to the late 1960s. It stores 256 kilobytes in main memory, while processing 500,000 instructions per second. Gary Stix of Scientific American notes that it would need eight times as much memory merely to play the game Flight Simulator. Even the secretaries’ word processors have more power.
This has led repeatedly to tales of woe. Controllers at the Dallas-Fort Worth metropolitan center, late in the 1980s, had been losing data blocks on their radar screens. During peak travel times they also found that data blocks were being attached to the wrong blips. They predicted that their system would go down on a particular Saturday in 1989, when the University of Texas would play its annual football showdown with the University of Oklahoma, bringing a swarm of fans in their private aircraft. They were right. The computer dropped all data blocks, leaving controllers to struggle with unlabeled blips, as in pre-computer days.
New York had a similar story. By 1989 that city’s metropolitan center, serving all three major airports, was regularly running at more than 90 percent of capacity. Controllers were using home-brewed software to delete some important information on planes outside the control area, hoping to prevent the system from dropping planes at random.
THEN THERE WAS LOS ANGELES, the nation’s third-busiest airport, behind only O’Hare and Atlanta. Sometimes, though rarely, its system would go down for as much as ten minutes. One controller described this situation as “like being on the freeway during rush hour, and all the cars lose their lights at once.” Heavy traffic came every evening, with this metropolitan center orchestrating up to seventy-five jetliners per hour on final approach. At those times the overloaded computer might wipe information about a plane’s altitude and speed from the radar screen or even switch this information among the blips. Other problems could lead the radar to show “XXX” instead of presenting the altitude. Or the computer might stop posting updates on aircraft, freezing on their last known speed, altitude, and heading.
Why has the FAA put up with this? Why can’t it just order a set of powerful computers and put such problems to rest? The problem is software compatibility. The new computers would have to run the old Ultra code, and commercially available workstations do not accommodate it. Nor will the FAA rewrite Ultra to run on today’s computers, for that would plunge it anew into the thicket of software development.
Instead the FAA is turning again to its standard computer for metropolitan air-traffic centers, the Univac 8303. It is purchasing new ones from Unisys, the one contractor that can still build them, even though outside the FAA this computer would be seen only in museums. Within those centers these Univacs are receiving new solid-state memories and additional input-output processors, along with extra workstations. At the New York metropolitan center the FAA has gone further, offloading some of the main computer’s tasks and handing them over to other workstations.
Such heroics smack of an attempt to keep World War I dreadnoughts in service by fitting them with modern missiles. Few other computer users would try them. Yet the FAA has no choice, for on one point it will never budge: Ultra, and hence the old Univac computers that run it, must stand at the core of its metropolitan centers. The alternative—a full-blown program of new software development—is too dreadful to contemplate.
Amid such makeshift efforts the FAA has managed to cope with another big increase in traffic, from 297 million passengers in 1980 to 466 million in 1990. Yet life at the operational centers has often shown little change. In 1990, a decade after the strike, the Los Angeles metropolitan center was supposed to have fifty-seven fully qualified controllers but had only twenty-eight. Six-day workweeks were still standard.
Its Univac computers were not its only museum pieces. The two-way radios were thirty years old and had problems of their own. As one controller put it, “The radios are so bad and so full of static that I can tell when the lights come on at the airport, because I hear it. Sometimes that’s about all I can hear.” At times he and his associates couldn’t talk to a plane and found themselves asking one pilot to pass on a message to another one. In the words of a captain, “You always know when controllers are having equipment problems: when they keep asking your altitude. I really feel sorry for them. It happens all the time.”
Where computer programming has remained manageable, the FAA has indeed succeeded in introducing new equipment. In recent years the airways have received new radar installations, automated weather-observation stations, and better radio and data connections. But at the metro centers, ARTS III has remained the standard. The FAA tried for some time to have IBM replace it with a new version, the Advanced Automation System (AAS). But its programming ran to as much as 1.5 million lines of code, and the effort repeatedly bogged down. The FAA still hopes to use AAS in its regional centers, but it has abandoned plans to replace ARTS III at the metro level, where traffic is much denser and more demanding.
Still, the agency has not been at a loss. Today’s airliners feature plane-to-plane data links. An onboard system called TCAS (Traffic Alert and Collision Avoidance System) takes advantage of those links and provides each airliner with its own specialized radar. A plane’s computer interrogates the transponders of nearby aircraft and determines their bearing, distance, altitude, and relative velocity. If it detects a plane nearby, the computer sounds a warning using synthesized speech: “Traffic, traffic!” If there is danger of collision, it will give further advice: “Climb, climb!” or “Descend, descend!” The system requires its own software, but here too the programming problems have remained manageable.
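The heart of the scheme can be caricatured in a few lines of code. The thresholds and timing tests below are invented for illustration and are far cruder than the real TCAS logic:

    def tcas_advisory(own_alt_ft: float, intruder_alt_ft: float,
                      range_nm: float, closure_kt: float) -> str:
        """Crude caricature of a TCAS decision: warn first, then advise a vertical escape."""
        if closure_kt <= 0 or range_nm > 10:
            return "clear"
        seconds_to_closest = range_nm / closure_kt * 3600   # time until the paths cross
        if seconds_to_closest > 45:
            return "clear"
        if seconds_to_closest > 25:
            return "Traffic, traffic!"          # traffic advisory: look for the intruder
        # Resolution advisory: escape vertically, away from the intruder.
        return "Climb, climb!" if own_alt_ft >= intruder_alt_ft else "Descend, descend!"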
Since 1993 federal law has mandated its use on all aircraft carrying more than thirty passengers. TCAS has had its quirks. Early versions sometimes sounded “Traffic, traffic!” in response to airplanes on the ground. Even bridges could bring a TCAS alert when they mounted transponders to make them visible to shipboard radar in fog. One manufacturer’s system had a software bug that produced returns from nonexistent aircraft. All operational TCAS systems could detect their own planes’ transponders and would command evasive maneuvers wherein an aircraft would try to escape from itself.
OPERATING EXPERIENCE brought changes to the software, which reduced these false alarms. And the system has indeed saved lives. In 1992 Jim Portale, a captain with USAir, wrote an article in the Journal of Air Traffic Control describing how TCAS saved the day at a moment when an air-traffic controller was definitely not on the ball.
Approaching an airport, Portale obeyed his TCAS and flew under a commuter plane with several hundred feet of vertical separation. He then flew into heavy thunderclouds as he descended to 5,000 feet during his approach, knowing that the clouds and rain would persist to the ground. He expected that at any moment TCAS would give the message “Clear of conflict.” Instead it sounded anew, “Traffic! Traffic!” Then it warned, “Descend, descend!” There was no time to talk to the controller. Yet Portale did not wish to descend, for he was flying at an assigned altitude while surrounded by thick clouds. He wrote that to depart from this altitude was “one of the most uncomfortable decisions you’ll ever deal with in peacetime aviation.”
“Increase descent!” shouted TCAS, at a volume he had not heard before. Its cockpit display underlined the warning with an angry red arc that pointed to a very rapid recommended descent rate of 2,500 feet per minute. There was no time for a smooth maneuver. Portale pushed forward hard on the controls and rolled in a little right bank.
“There it is!” yelled his copilot.
Portale looked up. Directly ahead and above he saw the landing lights of a Boeing 757, approaching at close to 600 mph. For a moment he looked into its left engine, which came so close that it more than filled the entire width of his windshield. The controller’s instructions had vectored Portale’s plane directly into the 757’s path. Portale later learned that although the 757 also had TCAS, it was operating in a mode that gave only traffic alerts, giving no instructions that could have brought evasive action.
Even so, in the words of a leader of the Air Line Pilots Association, TCAS is “a last-ditch bulwark against total system failure—a midair collision. We realize that the controller has a set of assumptions in his mind about the current state of his airspace. TCAS intervenes only when that set of assumptions is not reflected in reality.”
TCAS thus represents one more patch on the overall system. It certainly is no cure-all, and it does not change the basic situation. The FAA continues to rely on harried controllers deployed in less than adequate numbers, using obsolescent computers and other equipment. But even though everyone just muddles through, the system works—usually.