Blood On Demand
TRANSFUSIONS USED TO BE VERY PAINFUL AND VERY DANGEROUS. A FEW DEDICATED DOCTORS KNEW THERE HAD TO BE A BETTER WAY.
ON JUNE 22, 1918, JUST A FEW MONTHS BEFORE THE END of World War I, a provocative article appeared in the British Medical Journal. The author was Capt. Oswald H. Robertson, a physician in the U.S. Army who had been treating battlefield casualties on the front lines. In his paper Robertson described his successful performance of transfusions using blood collected in advance and stored, rather than immediate transfusions from donor to recipient. “There is a definite need in front area medical work of a method for giving transfusions rapidly,” he wrote. “At casualty clearing stations during the busy time of an attack, it is obviously impossible to perform transfusions by the usual methods.”
Since before the turn of the century, researchers had been experimenting with the preservation and storage of blood after its removal from the human body. The results looked good in the laboratory, but until Robertson began his experiments, no human being had successfully received a transfusion of blood that had been stored for more than about an hour. In fact, no doctor had even attempted such a transfusion.
Robertson believed that a raging war with mounting casualties was as good a time as any to try this new technology. Soon after he arrived in France, he began collecting blood and preserving it with sodium citrate and glucose. In essence, Robertson set up the world’s first system of blood banking. It worked well, saving the lives of scores of soldiers who would otherwise have died from loss of blood. Yet it would take another 15 years before doctors everywhere were persuaded to accept blood storage.
The publication of Robertson’s paper came 100 years after the first successful human-to-human blood transfusion, though the idea had been discussed and occasionally tried for centuries before. In 1818 the English obstetrician and physiologist James Blundell was distressed at the high mortality rate attributed to postpartum hemorrhage and wondered if replacing the lost blood could save the lives of women who had just given birth. Three days before Christmas, Blundell collected blood from several donors and injected it into a patient suffering from internal bleeding. The patient died two days later, probably from her disease. But the mere fact that Blundell had transferred blood from one person to another without killing either one was a triumph in itself. Over the next 11 years Blundell transfused 10 patients, half of whom survived.
In retrospect, that success rate was remarkable, considering his ignorance of blood types and anticlotting techniques and his use of crude, unsterilized instruments. Blundell published the results of his transfusion work, which inspired other physicians to perform their own experiments. Hundreds of transfusions were reported throughout Europe during the nineteenth century, with fatality rates generally matching Blundell’s 50 percent. Without improved survival, the European medical community eventually lost its enthusiasm, and by the 1880s transfusion had fallen out of favor.
The technology was revived and improved, however, in the United States, an unlikely site for innovation since in the late nineteenth century American medical training fell far short of the Continental standard. While the United States had many dedicated and knowledgeable doctors, Europe was unquestionably the place to go for those seeking a thorough medical education. Training to be a doctor in 1900 was less rigorous than training to be a hairdresser today. American medical schools generally had few entrance requirements and provided no clinical training. Most doctors learned their trade through apprenticeships with practicing physicians, which provided the mentors with extra income and cheap labor.
IN THE NATION’S MORE PROGRESSIVE MEDICAL SCHOOLS and hospitals, however, a strong interest in blood transfusion was developing. While still outside the scope of what was considered mainstream medicine, it nevertheless was starting to evolve into a specialty of its own. In principle, at least, two of the major hurdles to successful transfusion had been overcome. The first was the discovery of three of the four basic blood groups, A, B, and C (the last later renamed O), by Karl Landsteiner at Vienna’s Institute of Pathology in 1901 and 1902. (The fourth group, AB, was discovered by two of Landsteiner’s colleagues a few years later.) The ability to match the blood types of donor and recipient would greatly reduce the rate of fatalities from transfusion reactions, which had mystified doctors by the seemingly haphazard way they occurred.
Significant as it was, this breakthrough was largely ignored for more than a decade. Few physicians considered Landsteiner’s work of much importance or even seemed to be aware of it, despite the large numbers of patients who continued to experience post-transfusion fever, chills, kidney pain, bloody urine, or death. There was more to this lukewarm response than mere conservatism, though that did play a part. Until World War I testing for blood types was less than completely reliable and took several hours to complete. Since transfusion was still reserved for desperate cases, these factors often made testing too slow and cumbersome. Surgeons would sometimes perform a crude version of compatibility testing by injecting a small amount of blood (typically 15 milliliters, or about a tablespoon), waiting a minute or two, and watching to see if the patient exhibited any negative reaction.
Reports about blood-type research did catch the attention of Reuben Ottenberg, a pathologist and hematologist at Mount Sinai Hospital in New York City. In 1907 he performed the first transfusion using blood screened for compatibility, and over the next few years he performed 125 successful transfusions without a bad reaction. Still, the surgeons at Mount Sinai were not interested. “I offered to do compatibility testing,” Ottenberg wrote years later, “but many of the surgeons did not accept the offer.” It seemed that they were unwilling to share their turf and wanted to be “free from influence by the laboratory men.” Blood typing and cross-matching between donor and recipient would not become a standard procedure until the middle of the 1920s.
The second major development was the improvement of transfusion practice. Widespread acceptance and use of antiseptics and sterile techniques greatly reduced the incidence of infections, and increased skill and better instruments helped lessen the trauma involved in blood removal and injection. These changes made transfusions safer and easier to bear for both donor and recipient, but they did nothing to increase the blood supply. A doctor still needed to have a donor or donors present in the operating room at the time of the transfusion. The next great need, then, would be a way to decouple the donor and the recipient by storing collected blood.
Of all the problems still facing the physician, blood’s unique ability to coagulate, or clot, was the greatest. Coagulation begins almost immediately when blood is exposed to air. Clot formation is a protective mechanism that seals a wound to prevent excessive blood loss and keep foreign particles from entering the bloodstream. If blood didn’t clot, even a small cut could be fatal, as was often the case for hemophiliacs before modern therapy. But while clotting is essential to the survival of the human body, it was the ultimate scourge to doctors trying to move blood from one host to another. Transfusing blood was a race against time.
At first doctors sought to circumvent blood clotting by using “direct transfusion.” In this procedure, blood flowed directly from the donor’s artery to the recipient’s vein, avoiding all contact with air. While successful in preventing clotting, it was a tedious and traumatic ordeal for all involved. There were two basic variations of direct-transfusion techniques. In one, the blood vessels of donor and recipient were sutured together, while in the other, they were connected by means of a cannula, or flexible tube. Both procedures were extremely time-consuming and required painstaking surgical skill and expertise. It was also impossible to gauge how much blood was being transfused. Some donors were exsanguinated to the point of passing out, while recipients occasionally died from volume overload—too much blood delivered too quickly.
The donor faced stresses beyond the loss of blood. Because these procedures connected the donor’s blood vessels directly to the recipient’s, the two had to lie side by side. The patient needing the transfusion was often desperately ill or severely injured, and it could be unnerving for the donor to lie beside a patient in such a condition. Sometimes the patient even died during the transfusion. Moreover, aside from the psychological effects, the donor risked losing the use of his or her hand (which is where the incision was usually made), since cutting the radial artery could cause a permanent impairment in circulation.
For all these reasons, the direct method was too arduous for use on a large scale. Even at a major medical center like Mount Sinai, only about 20 transfusions were being performed per year by 1910. The difficulties with direct transfusion gave rise to what became known as “indirect transfusion,” in which blood was passed between donor and recipient using syringes and stopcocks. While the process was far less intricate and eliminated the necessity of dissecting blood vessels in the donor or the patient, it was cumbersome in its own right.
The syringe method relied on a series of glass syringes, which were filled and emptied in succession. A rival technique was the stopcock method, popularized in 1915 by the surgeon Lester Unger, of Mount Sinai. He began by connecting a container for blood to the donor and one already filled with saline to the recipient. When he had collected enough blood, Unger turned a stopcock to connect the blood to the recipient and the saline to the donor. Many physicians found this process easier, although both methods had their fierce proponents.
Whatever apparatus was used, transfusion was still a race against the onset of clotting and required at least two highly trained practitioners. As a result, it remained complex and expensive, entirely dependent on the availability of a donor and a lineup of qualified staff. This realization prompted some researchers to look at the process from another angle. Instead of trying to beat the clock, it might be better to try to slow it down.
Richard Lewisohn was one of the first to try this. After receiving his education in Germany, he went to New York in 1906, joined the staff at Mount Sinai the following year, and stayed there for the duration of his long and distinguished surgical career. When he began practice at Mount Sinai, Lewisohn regularly performed blood transfusions using both the direct and indirect procedures. He found all the rushing about to be tiresome and looked for a way to decrease or eliminate clotting. The substance he decided to work with was sodium citrate, a chemical that had long been used as an anticoagulant in blood collected for laboratory purposes.
Lewisohn was not the first to experiment with anticoagulants, and other scientists had already speculated on the idea of using citrate for human transfusion. Throughout the latter part of the nineteenth century, in fact, doctors had tried to delay clotting with an assortment of substances (including hirudin, a chemical secreted by leeches), all of which were found to be too toxic for general use. Previous research had established that calcium plays an important role in blood clotting, and sodium citrate binds with calcium ions, inhibiting the coagulation process. The problem was finding a balance of toxicity and efficacy. The solution had to contain enough sodium citrate to bind calcium, but not enough to poison the patient.
For four years Lewisohn experimented with animals and test tubes, trying to determine the minimum concentration of sodium citrate needed to prevent coagulation. He found that 0.2 percent sodium citrate was sufficient to prevent clotting for up to three days and that up to 5 grams—the amount found in 2.5 liters of citrate-treated blood—could be safely introduced into adults intravenously. He published a paper in January 1915 reporting the results of his work, which included successful transfusions of citrated blood in two patients. Later that year he published a second paper describing 22 transfusions given to 18 patients.
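As a rough check of those figures (my arithmetic, not Lewisohn’s), a 0.2 percent concentration in 2.5 liters of citrated blood does indeed come to about 5 grams of sodium citrate, assuming the mixture weighs roughly one gram per milliliter:

\[
0.002 \times 2500\ \text{g} \approx 5\ \text{g of sodium citrate.}
\]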
ALTHOUGH HE WAS NOT THE FIRST TO TRY SODIUM CITRATE, Lewisohn is associated with its use as an anticoagulant because he discovered the optimum concentration. He was extremely enthusiastic about this discovery. Its simplicity, compared with the methods then in use, was astounding. A doctor could remove blood from a donor, put it in a glass jar, mix in the sodium citrate, and then inject it into the recipient later. A team of specialists was no longer required, for one person could easily perform the procedure. It could be done at a slow and careful pace, and more important, close contact between donor and recipient was no longer required. The donor could be in the next room or could even leave the premises once blood had been drawn.
Dr. Howard Lilienthal was the first Mount Sinai surgeon to perform a transfusion using Lewisohn’s citrate method. He wrote: “The ease and simplicity of this transfusion was most amazing to me, who had so often suffered more than the patient in performing this life-saving operation.” Yet to Lewisohn’s surprise, not all physicians shared his and Lilienthal’s enthusiasm. Some were downright hostile. Transfusion medicine had developed into a highly skilled practice and a very lucrative one, and the idea of changing it into a simple procedure that any doctor could perform did not sit well with the transfusion elite. Furthermore, Lester Unger presented his stopcock method that very same year, and he was not about to be deprived of his triumph. Unger became one of the citrate method’s most vocal and tireless opponents. Lewisohn wrote some 40 years later that Mount Sinai had “two camps, almost equally divided.”
In fact there were legitimate reasons to question the new method. Unger pointed out that patients who received citrated blood were subject to “frequent chills and fever and occasionally vomiting,” more so than with direct transfusions. In 1921 Bertram Bernheim, an enthusiastic champion of the citrate method, reported that such reactions followed between 20 and 40 percent of all citrated transfusions, compared with only 5 percent of transfusions of unmodified blood. Opponents accused sodium citrate of injuring, and even destroying, red and white blood cells and platelets, though modern research has proved that accusation false. During the 1920s and early 1930s Unger’s stopcock method became the most widespread transfusion technique, and it seemed that the use of sodium citrate might fall completely out of favor.
Lewisohn knew that citrate itself could not be causing these post-transfusion reactions, since it didn’t remain in the blood long enough to trigger any damage. Within a few minutes of entering the body, it was metabolized by the liver and rendered harmless. So what was behind the unfavorable results?
In 1923 a research paper reported that bacteria originating from incompletely sterilized containers were responsible for some post-transfusion fevers. Ironically, the simplicity of the citrate procedure may have been its very undoing. Because less skilled personnel were able to perform transfusions of citrated blood, they may have been lax in keeping the equipment sterile.
Publication of this report did not quiet the controversy or cause a rapid switchover to citrated blood. But in 1931 a department was established at Mount Sinai for cleaning and sterilizing equipment and preparing solutions. Soon after, the rate of post-transfusion reactions from citrated blood dropped from 12 percent to 1 percent. A report from the Soviet Union said that when Soviet hospitals adopted the Mount Sinai technique, their post-transfusion reaction rate declined from 53 percent to 2 percent. Finally, it seemed, the citrate war was approaching its end. Lewisohn noted that a decade later, when the first blood banks were successfully established, all the reports of damage to cells and platelets were “suddenly stilled.”
A parallel development also had its start in 1915, as the citrate war took hold in the United States and a real war was devastating Europe. Beyond the horrific numbers killed by rifle fire, artillery, and mines, thousands of wounded soldiers were dying afterward from shock, and no attempts were being made to perform blood transfusions. Much of the credit for the eventual introduction of battlefield transfusions belongs to two physicians, both with the surname Robertson. One laid the groundwork for the acceptance of transfusions, while the other revolutionized the way they were done.
After completing medical school in Toronto, Lawrence Bruce Robertson interned at Bellevue Hospital in New York City, where he worked with Edward Lindeman, the inventor of a popular syringe-and-cannula transfusion technique. Upon returning to Canada, he began performing blood transfusions and published the results of his work. Robertson volunteered for military service the day after Canada entered the war, was commissioned a lieutenant in Canada’s Army Medical Corps, and arrived in Europe early in 1915.
“Wound shock,” as it was called on the front lines, was a common cause of death on the battlefields of World War I. It occurred when a loss of blood led to impaired circulation. Most surgeons in Great Britain before the war had used intravenous saline solution to treat shock, but while saline replaced the volume of lost blood, it lacked blood’s life-supporting components. Not surprisingly, it produced disappointing results.
In a series of reports published between 1916 and 1918, Robertson championed the superiority of blood transfusion as a treatment for wound shock. He succeeded in breaking down opposition and set the stage for the next pioneering doctor, who would arrive in 1917. But while Lawrence Robertson was aware of sodium citrate’s use as an anticoagulant, he considered its practicality unproven and relied exclusively on variations of the syringe and stopcock methods.
Oswald Hope Robertson harbored no such doubts. After graduating from Harvard Medical School, he had gone to the Rockefeller Institute for Medical Research in New York City to work in the laboratory of the pathologist Francis Peyton Rous. Rous would later achieve fame and (in 1966, at age 87) receive a Nobel Prize for his work with malignant tumors, but in 1915 his research was focused on blood. With the outbreak of the war, even though the United States was not yet involved, he realized the importance of an adequate blood supply. He directed his investigations toward a means of preserving not whole blood but red blood cells—and not just for a few hours or days, but for weeks if necessary.
Human red cells, Rous found, began to break down rather quickly no matter how much citrate was used. Within a week the blood was often unusable. So, working with another researcher, Joseph Turner, he tried preserving citrated red blood cells with a variety of sugars. They found that in a solution of three parts blood, two parts citrate solution, and five parts dextrose solution, the cells remained intact for about a month. The cells could be separated from the watery solution by allowing them to settle out over a period of four to five days.
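To put those proportions in concrete terms (an illustrative calculation of my own, not a recipe from Rous and Turner), blood accounts for three of the mixture’s ten parts, so a liter of the preserved solution would contain roughly:

\[
\tfrac{3}{10}\times 1000\ \text{mL} = 300\ \text{mL blood},\qquad
\tfrac{2}{10}\times 1000\ \text{mL} = 200\ \text{mL citrate solution},\qquad
\tfrac{5}{10}\times 1000\ \text{mL} = 500\ \text{mL dextrose solution.}
\]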
Robertson brought this knowledge with him when he arrived in France in May 1917, soon after America’s entry into the war. He is credited with introducing the citrate method to British military hospitals. But Robertson had bigger plans: He wanted to have a blood supply ready and waiting before casualties arrived, rather than scramble for donors afterward.
One problem was that at that point only rabbits had been transfused with long-term preserved blood. While they had shown no adverse reactions, no preserved blood transfusion had ever been attempted on a human. But under the dire circumstances of war, Robertson was willing to take a chance on a new technology. By October he was bringing preserved blood up to the front lines in a makeshift ice chest, and later on his transfusion teams set up large underground bunkers where blood was drawn, typed, and cross-matched.
Despite Robertson’s wartime success, virtually no interest arose after the war in establishing civilian blood banks. This may have had to do with the citrate controversy, which gained momentum in the immediate postwar years. Another problem was that transfusion technology was still in its infancy. The number of civilian transfusions remained small, partly because of the lack of blood banks, but more likely because of the continued precariousness of the procedure.
When the citrate controversy was finally resolved, in the early 1930s, it was only a matter of time before the idea of large-scale blood storage caught on. The Soviets were the earliest to embrace it. The world’s first blood bank opened in a Leningrad hospital in 1932, and by the mid-1930s the Soviet government had set up about 60 large centers and more than 500 smaller ones. Blood banking in the United States started in 1935, when a group of anesthesiologists at the Mayo Clinic, in Rochester, Minnesota, began to collect and store citrated blood.
ONE MAJOR IMPETUS FOR LARGE-SCALE BLOOD STORAGE may have come from the clouds of war that were once again looming over Europe and Asia. When the Spanish Civil War erupted in 1936, a system of mobile blood-transfusion units was set up to deliver blood to the front lines. If it could be done there, it was clearly time to modernize transfusion services at home as well.
Not surprisingly, Oswald Robertson had a hand in the first blood bank on American soil. By 1937 he was a professor of medicine at the University of Chicago, where he acted as an adviser to Bernard Fantus, director of therapeutics at the Solutions Laboratory at Cook County Hospital. Fantus established a blood donation, collection, and preservation service at his facility and coined the term blood bank.
One year later, in 1938, Mount Sinai Hospital opened its first blood bank, with refrigerated shelves lined with bottles of citrated blood. Several other medical facilities soon opened blood banks of their own. However, the problems of blood storage had not been completely solved, and new methods were still needed to improve transfusion. Between 1939 and 1950 several more major discoveries transformed blood banking and transfusion science, allowing it to become an indispensable part of modern medicine.
In 1939, while working at Columbia Presbyterian Hospital in New York City, Dr. Charles Drew speculated that plasma could be used as a substitute for whole blood in some cases. Since it does not contain red blood cells, plasma does not need to be typed and cross-matched, thus making it ideal for emergency use. Even better, Drew developed a method to dehydrate plasma by freeze-drying, which allowed for lengthy storage and easy transportation (plasma was the first major use of freeze-drying technology, which became familiar after World War II in the manufacture of instant coffee). Dehydrated plasma could be reconstituted by adding water just before the transfusion.
Building on Drew’s work, in 1940 Edwin Cohn, a professor of biological chemistry at Harvard Medical School, developed a process called cold ethanol fractionation that enabled plasma to be separated into its components, including the proteins albumin, gamma globulin, and fibrinogen. Albumin was found to be an excellent substitute for whole blood in the treatment of shock. It could also be stored for lengthy periods without spoiling, and it was easily shipped.
On the battlefields, however, front-line medical personnel found that in many cases plasma simply would not do, and whole blood was needed to treat the severe shock that followed traumatic injury. As this became clear, medical units improvised ways to ensure the necessary supply of blood. One British hospital in North Africa sent men out to scavenge beer bottles, which were sterilized, filled with blood, and buried to keep them cool. American field hospitals brought in noncombat troops with Type O blood (the universal donor), who waited around until a transfusion was needed.
It took nearly two years after America’s entry into the war for Army and Navy brass to fully appreciate the need for whole blood. When they did, though, reaction was swift. By late 1943 the military was committed to ensuring an adequate supply, and in the Normandy invasion of 1944, field hospitals and trucks were equipped with insulated blood containers and refrigeration equipment for storage. When the supply from local sources proved inadequate, blood was collected in America and airlifted daily across the Atlantic.
After the war the nation’s experience with military blood collection and storage spread rapidly through the civilian world. By 1950 the country had 1,500 hospital blood banks, 46 community centers, and 31 American Red Cross regional blood centers. It was also an exceptional year for the development of blood-banking technology, with two major breakthroughs. The first was the introduction by Dr. Carl Walter and Dr. William P. Murphy of the plastic bag for storing blood. Plastic bags were unbreakable and disposable (eliminating the risk of bacteriological contamination that was prevalent in reusable glass bottles), more compact when empty, and easier to store when full. They also prevented air exposure and allowed blood products to be frozen at ultralow temperatures that would fracture glass.
The second major discovery of 1950 was the use of glycerol cryoprotectant for freezing red blood cells. Over the course of the previous decade, doctors had come to realize that outside of war zones, few people required transfusions of whole blood. Separating blood into its components was far more economical and of greater medical use. Plasma could be frozen for up to a year, but it was impossible to freeze red blood cells without harming them. This meant that they could be stored for only a short time. In 1949 the English scientists Audrey Smith and Christopher Polge, experimenting with the preservation of sperm, found that they could significantly reduce freezing damage if cells were first soaked in glycerol, a colorless liquid that is chemically similar to ethylene glycol. The following year Smith, who went on to become a founder of the field of cryobiology, showed that glycerol worked with red blood cells as well.
Modern blood banking still relies on these basic techniques, and while a number of further advances have been made, particularly in screening the blood supply for diseases, the basic technology of blood storage remains much as it was 50 years ago. Until someone develops a working substitute, the continuing need to collect blood pint by pint will remain one of the biggest bottlenecks in modern medicine. Yet it’s nowhere near as traumatic—for donor, recipient, and medical staff—as it was in the days before blood could be stored, and that’s why blood banking has been justly called one of the half-dozen most important medical advances of the twentieth century.