
Osler Library of the History of Medicine, McGill University

Dr. William Osler (second from left) at the Johns Hopkins Hospital, Baltimore, where, in the 1890s, he created the first residency program for training physicians after medical school

In the 1890s, Sir William Osler, now regarded as something of a demigod in American medicine, created at the Johns Hopkins Hospital a novel system for training physicians after graduation from medical school. It required young physicians to reside in the hospital full-time without pay, sometimes for years, to learn how to care for patients under the close supervision of senior physicians.

This was the first residency program. Despite the monastic existence, the long hours, and the rigid hierarchy, Osler’s residents apparently loved it. They felt exalted to be able to learn the practice of medicine under the tutelage of great physicians who based their teachings on science, inquiry, and argument, not tradition. And far from bridling at being at the bottom of the pyramid, they virtually worshiped their teachers, who in turn generally lavished great attention and affection on their charges. Osler’s innovation spread rapidly, and the residency system is still the essential feature of teaching hospitals throughout the country.

Residents are young doctors who have completed medical school and are learning their chosen specialty by caring for patients under the supervision of senior physicians, called attendings. Residents in their first year are called interns. As in Osler’s time, residents work long hours, although they no longer live in the hospital and are now paid a modest salary. The length of this training varies; a program in internal medicine, for example, takes three years to complete. Following that, many go on to a few more years of training in subspecialties (for example, cardiology, a subspecialty of internal medicine), and at this point they are called fellows.

Together residents and fellows, who now number about 120,000 across the country, are called house officers, and their training is termed graduate medical education (GME). The teaching hospitals where most of this takes place are often affiliated with medical schools, which in turn are often part of universities, and together they make up sometimes gigantic conglomerates, called academic medical centers.

Although Osler’s idea lives on, graduate medical education has changed enormously over the years, and these changes are the subject of Kenneth Ludmerer’s meticulous new book, Let Me Heal. Ludmerer, a senior faculty physician and professor of the history of medicine at Washington University in St. Louis, sounds a warning. The Oslerian ideal of faculty and residents forming close relationships and thinking together about each patient is in trouble. Instead, residents, with little supervision, are struggling to keep up with staggering workloads, and have little time or energy left for learning. Attending physicians, for their part, are often too occupied with their own research and clinical practices—often in labs and offices outside of the hospital—to pay much attention to the house officers.

The implications for the public are profound. Nearly anyone admitted to a teaching hospital—and these are the most prestigious hospitals in the country—can expect to be cared for by residents and fellows. Whether house officers are well trained and, most important, whether they have the time to provide good care are crucial. Yet until Ludmerer’s book, there has been very little critical attention to these questions. It’s simply assumed that when you are admitted to a teaching hospital, you will get the best care possible. It’s odd that something this important would be regarded in such a Panglossian way.

Ludmerer refers to graduate medical education in the period between the world wars, following Osler, as the “educational era,” by which he means that the highest priority of teaching hospitals was education. Heads of departments were omnipresent on the wards, and knew the house officers intimately. A network of intense, often lifelong mentorships formed. Ludmerer gives a fascinating account of the propagation of talent; for example, William Halsted, the first chief of surgery at Johns Hopkins, had seventeen chief residents, eleven of whom subsequently established their own surgical residency programs at other institutions. Of their 166 chief residents, eighty-five became prominent faculty members at university medical schools. The influence of the giants of the era of education still reaches us through three, four, or five generations of disciples, and house officers quote Osler even today.

There was a strong moral dimension to this system. Ludmerer writes that “house officers learned that medicine is a calling, that altruism is central to being a true medical professional, and that the ideal practitioner placed the welfare of his patients above all else.” Commercialism was antithetical to teaching hospitals in the era of education. “Teaching hospitals regularly acknowledged that they served the public,” writes Ludmerer, “and they competed with each other to be the best, not the biggest or most profitable.”


Indeed, teaching hospitals deliberately limited their growth to maintain the ideal setting for teaching and research. Ludmerer offers the example of the prestigious Peter Bent Brigham Hospital in Boston (now named the Brigham and Women’s Hospital), which in its 1925 annual report declared that it had “more patients than it can satisfactorily handle…. The last thing it desires is to augment this by patients who otherwise will secure adequate professional service.” They also kept prices as low as possible, and delivered large amounts of charity care. With few exceptions, members of the faculty did not patent medical discoveries or accept gifts from industry, and regularly waived fees for poor patients.

To be sure, this golden age was not pure gold. These physicians were, on the whole, paternalistic toward patients; by today’s standards, many were elitist, sexist, and racist. But they were utterly devoted to what they were doing, and to one another, and put that commitment ahead of everything, including their own self-interest.

World War II brought great changes. In the postwar prosperity, the United States began to invest heavily in science and medicine, with rapid expansion of the National Institutes of Health (NIH), which in turn poured money into research at academic medical centers. In addition, the growth of health insurance led to more hospital admissions. In 1965, the creation of Medicare and Medicaid accelerated this growth enormously. According to Ludmerer, between 1965 and 1990, the number of full-time faculty in medical schools increased more than fourfold, NIH funding increased elevenfold, and revenues of academic medical centers from clinical treatment increased nearly two hundredfold.

Initially, in the couple of decades following the war, the influx of money and the rapid growth simply gave momentum to the trajectory begun in the era of education. Reinforced by leaders who had trained during that era, the established traditions endured, and teaching hospitals for the most part defended their commitment to educational excellence and public service. However, the close-knit, personal character of graduate medical education began to unravel. By the late 1970s, academic medical centers began to take on the character of large businesses, both in their size and complexity and in their focus on growth and maximizing revenue. Even in centers that were technically nonprofit, the benefits of expansion accrued to everyone who worked there, particularly the executives and administrators. In 1980, Arnold Relman wrote a landmark article in The New England Journal of Medicine, warning of the emergence of a “medical-industrial complex.”

The growing commercialization of teaching hospitals was exacerbated by a change in the method of payment for hospital care. Health care costs were rising rapidly and unsustainably, and in the 1980s health insurers responded with what has been termed “the revolt of the payers.” Previously, most insurers had paid hospitals on a “fee-for-service” basis, in which payment was made for each consultation, test, treatment, or other service provided. But now Medicare and other insurers, in an effort to control costs, began to reimburse hospitals less liberally, using “prospective payment” methods, in which the hospital received a fixed payment for each patient’s admission according to the diagnosis. Whatever part of that payment was not spent was the hospital’s gain; if the hospital spent more, it was a loss. Hospitals now had a strong incentive to get patients in and out as fast as possible.

Quite suddenly, the torrent of clinical revenue that had so swollen academic medical centers slowed. Many hospitals did not survive in the new environment (the total number of US hospitals decreased by nearly 20 percent between 1980 and 2000). Those that stayed afloat did so by promoting high-revenue subspecialty and procedural care, for example heart catheterization and orthopedic and heart surgery, which were still lucratively rewarded. They also developed more extensive relationships with pharmaceutical and biotech companies and manufacturers of medical devices, which paid them for exclusive marketing rights to drugs or technologies developed by faculty, as well as for access to both patients and faculty for research and marketing purposes.1

Such arrangements would generally have been considered unacceptable conflicts of interest, or at least distasteful, just a generation or two earlier. Most importantly, hospitals tried to maximize market share—through mergers and acquisitions, and by attracting as many paying patients as possible—so that they could bargain more effectively for better reimbursement. And they kept shortening the length of stay, in response to the financial incentives of prospective payment. In just one decade, according to Ludmerer, the average length of stay was cut roughly in half. He calls this time “the era of high throughput”—referring to shuttling more and more patients, more and more quickly, in and out of the hospital.

These changes severely eroded the early priority placed on teaching. The revered chairman-teacher virtually disappeared, consumed by the responsibilities of running a large organization, and was replaced by program directors, who were often junior faculty physicians. Members of the faculty were under pressure to increase their clinical productivity and to obtain research funding, and for the most part, they were promoted and paid on the basis of these income-generating activities, not teaching. Ludmerer writes that some faculty members “noted sardonically that the surest way not to receive tenure in a clinical department was to win an award for teaching.”


Being an attending physician was once considered a great honor, but increasingly programs had trouble finding faculty willing to do the job. The demotion and marginalizing of teaching was not lost on residents. Ludmerer refers to “the physical and emotional abandonment of the house staff by the senior faculty.” Quite suddenly, the important mentoring relationships that had characterized graduate medical education over most of the last century became the exception rather than the rule.


Musée d’Orsay, Paris/RMN-Grand Palais/Art Resource

‘Before the Operation,’ or ‘Dr. Péan Teaching His Discovery of the Compression of Blood Vessels at Saint-Louis Hospital,’ 1887; painting by Henri Gervex

At the same time, the rapid turnover of patients changed the pace of care. Reliable data on residents’ workloads—as measured by the number of patients cared for by each resident—are in surprisingly short supply, and residency programs vary. But it is likely that in most major teaching hospitals, the size of the house staff did not keep up with the number of admissions.2 The increase in the intensity and pace of medical care was as important as the number of admissions. Because of the incentives to discharge patients as soon as possible, the average length of stay in US hospitals decreased from sixteen days in 1971 to five days in 2011. At the same time, the number and complexity of tests and treatments per patient increased hugely. In short, there was much more to do, for more patients, in much less time. The result was that by the 1990s, many house officers felt they were providing care on a production line that had sped wildly out of control.

I was one of them. (I was also among those interviewed for Ludmerer’s book.) When I started my first year of residency in internal medicine at the Massachusetts General Hospital in 1998, there were 20 percent more patient admissions per intern in my residency program than there had been just three years earlier. The sheer number and complexity of my patients was nearly overwhelming—and I was worried that at best, they were not getting the care they had a right to expect, and at worst, that they were not safe. Ludmerer describes well the pressures under which we worked:

In the era of high throughput…house officers found it increasingly difficult to act in accord with the traditional expectations of professionalism. In particular, it became extremely difficult for them to be thorough and attentive to detail and to make certain that their patients’ problems were recognized and addressed…. The patient volume was too high, the turnover of patients too great, and time simply in too short supply. Interns and residents in every specialty often experienced panic and anxiety as they struggled to care for far more patients than time reasonably permitted. The only way that they could cope with the high patient load was by cutting corners.

Moreover, my receptiveness to teaching by senior physicians plummeted. There was very little time for such instruction, and when it did occur I was unable to concentrate because I was constantly reviewing in my mind the work that needed to be done, or being called away to tend to my many patients’ sudden needs. It seemed an unaffordable luxury to learn about the mechanisms or finer points of disease. Whereas previous generations had emphasized searching for and learning from what was new or unusual—what stretched and challenged current understanding—now training emphasized classification of patients into categories and algorithms in order to cope safely with the pace.

For example, many teaching hospitals provide their residents with “protocols” (often in the form of flow diagrams, or prewritten orders for tests and treatments) for common problems such as chest pain, pneumonia, heart failure, and stroke. While protocols may make residents more efficient and provide a basic safety check, they also devalue innovation and individual initiative, and discourage thoughtful consideration of unusual or unique features of individual patients. As Ludmerer points out, while standardization may impose a floor on performance, it may also impose a ceiling.

With so little time to think about patients, we would order batteries of tests roughly corresponding to whatever anatomic area was brought to our attention, sometimes before we’d even seen the patient. This was the only way to make sure (we hoped) that we wouldn’t “miss anything.” The tendency to overtest began as a survival technique, but by the end of residency it was ingrained as a style of practice—and this excessive use of tests is one driver of health costs.

Ludmerer notes that before World War II, house officers nearly universally described a “wonderful happiness at work”—despite their very long hours. Today, half of residents (and more in some specialties) experience “burnout,” a syndrome of emotional exhaustion with feelings of alienation from patients and low personal accomplishment. These young doctors look at older physicians—some of whom still carry the aura of a deep professional contentment—across a widening gulf.

Paradoxically, it was in this setting—in which house officers often felt there were not enough hours in the day to do their work—that the agency responsible for accrediting graduate medical education programs chose to limit the hours residents could work. In 2003, the Accreditation Council for Graduate Medical Education (ACGME) restricted the workweek to no more than eighty hours and the length of a shift to no more than twenty-four hours. In 2011, the limit on the length of a shift for first-year residents was decreased further to sixteen hours. The ACGME was responding largely to public pressure, which had been galvanized by several highly publicized unexpected deaths in teaching hospitals and by a prominent report from the Institute of Medicine in 1999 called “To Err Is Human,” which documented an unacceptably high rate of deaths from medical errors. To the average person, the long hours worked by residents were an obvious target for intervention, since fatigue might reasonably be considered a cause of errors.

But the narrow focus on work hours reflected a fundamental misunderstanding of the nature of residents’ work. The long hours were not some sort of arbitrary hazing, dictated from above by residency programs or attending physicians. Rather, for the most part, residents chose their own hours, to fit their workload. During my first year in residency (before the ACGME work-hour limitations were put in place), I chose to work ninety to one hundred hours per week (sometimes more), because that was what I had to do to keep up. If someone had told me that I had only eighty hours to do the same work, I would have despaired. In the absence of accompanying changes in workload, the regulations meant that residents had to accomplish the same work in less time—leaving even less time for education or thorough patient care.

Residency programs responded to the new regulations with ingenious, byzantine schedules designed to eliminate the peaks of workload by spreading it as evenly as possible among the house officers, including interns. But these fragmented schedules require frequent handoffs (in which a patient is transferred from one resident to another), which themselves create extra work and increase the risk of error. Moreover, such efforts are only shell games: they do nothing to reduce the overall workload, as measured by the total amount of patient care for which the residents are responsible. It is telling that when the ACGME surveyed residents following the implementation of the 2011 regulations, more residents reported a negative effect than a positive effect on patient safety and on their education, and more than twice as many disapproved of the regulations as approved of them.

“In the mad rush to limit resident work hours,” Ludmerer writes, “the importance of the learning environment was generally overlooked, as if nothing else mattered but the amount of time at work.” What really mattered was the loss of teachers and mentors, and the loss of time for education and thorough, considered care of patients. And what also mattered was that, increasingly, residents trained in an environment where money seemed to talk more loudly than traditional professional values. (This is all too clear to residents when their attending physician spends minimal time teaching or caring for patients, but has the time to take numerous paid speaking engagements as a consultant for Pfizer, or to develop a start-up company in collaboration with Syntonix.) What was needed was a concerted effort to restore the centrality of teaching (including through pay and promotion), and to protect residents and their teachers from having to care for too many patients in too little time.

The only effective way to do the latter is to hire more residents to carry the load—or to divert some patients to nonteaching services, in which patients are not cared for by residents, but by physicians who have finished their training and are hired for the purpose. To some extent both of these methods have been employed in response to the work-hour regulations. But overall, these recent efforts to reduce residents’ patient load have not been forceful enough to counteract the increased pace of work experienced by residents in the era of high throughput and limited work hours. One reason is that hiring more residents is expensive for hospitals, and replacing them with already trained physicians is even more so.

To add to the problem, just when the residency system is most in need, public support for it (currently about $15 billion per year, mainly from Medicare) is most in jeopardy. Proposals for drastic cuts have been made in federal budget proposals as well as by presidential commissions, legislative agencies, and influential health policy organizations, and there is even debate about whether the public should pay for residency training at all.3 Make no mistake: if graduate medical education is regarded as a discretionary luxury, medical care at our best hospitals will suffer. Public support for graduate medical education is essential, but teaching hospitals must ensure that it is used for its intended purpose, rather than to support wide-ranging endeavors with little connection to education, such as paying large numbers of faculty members who never teach residents, expanding facilities to compete with other institutions, and funding research and industry-related ventures in which residents do not substantially participate.

Ludmerer is sympathetic to the difficulties encountered by teaching hospitals as they try to maintain their learning environment and service ethic in an increasingly competitive market, with increasingly inadequate public support. But he also sees inadequate funding as a consequence of graduate medical education’s failures: the better a profession serves its purpose and the public, the greater the likelihood of public support. He is merciless in his indictment of leaders in graduate medical education:

As the pressures to increase the throughput of patients grew stronger, professional leaders did little to counteract the tide…. Academic leaders, faculty members, clinical departments, medical schools, and teaching hospitals…benefited too much from the enhanced revenues they received for succumbing to the forces to increase throughput. They followed the money, resting content with the status quo as long as clinical revenues and their own pay continued to grow, even if graduate medical education and patient care might have suffered in the process.

Ludmerer quotes one sorrowful chairman of medicine at Johns Hopkins (a century after Osler) who described the failure of medical leaders to defend the educational mission of teaching hospitals as follows: “Almost all the king’s men and king’s horses did nothing, said nothing, protested nothing, and raised no money to prevent the tarnishing of one of the greatest jewels the world has ever known.”

This may sound overstated, but the stakes are indeed high: doctors are only as good as their training. Medical education is essentially a verbal tradition: knowledge is imparted by physicians talking and demonstrating what they mean at the patient’s bedside much more than through the written word. A resident may read extensively about endocarditis (an infection of a heart valve) in a textbook, but this cannot replace the experience of watching an attending physician make this diagnosis by discovering the subtle physical signs of small infected clots and inflammatory deposits throughout the body—called “Osler’s nodes” when they appear in the hands and feet. A weak link in the connection between generations is therefore a critical disruption. The powerful traditions of medicine’s golden era still reverberate. If we can restore protected time for good teaching and good patient care, they will flourish. Otherwise, not too long from now, we may hear Osler’s footsteps in the hospital hallway—but when we turn around he will not be there.