The Residency and the Hospital: The Consequences of Codependency
In the final decades of the 19th century, the American general hospital served as a social welfare institution for the poor as often as it served as a medical institution for the sick: It provided food, warmth, and cleanliness to the impoverished, along with splints and dressing changes to the injured.1 To the extent that hospitals did offer medical treatment, they catered to the “worthy poor,” composed of the urban working class, many of them recent immigrants, a group whose illness was attributed to misfortune rather than to immoral behavior.2 Anyone with even modest resources preferred to stay at home for medical care, and the vast majority did. After all, most of the available medical treatment could easily be provided at home, assuming the patient had a home, his own bed, enough to eat, and a caregiver. The stethoscope and the thermometer were the only medical instruments in widespread use, and although anesthesia had been introduced in the 1840s, routine surgery would have to await the development of aseptic technique in the 1880s. X-rays did not arrive on the scene until 1896, and only 6 therapeutic agents (such as medications, vaccines, and hormones) were commonly used in 1913, compared with 35 in 1943. Given that the hospital was not the site of most medical care, it was not the most obvious candidate to serve as the home for graduate medical education. Yet it was during this period that the hospital and advanced medical education became intimately and inextricably intertwined, with enduring and portentous implications, as Kenneth Ludmerer describes in his comprehensive and insightful book, Let Me Heal: The Opportunity to Preserve Excellence in American Medicine.
The hospital was not the only available option for graduate medical education at the end of the 19th century. Young physicians, as well as medical students, had long sought clinical experience through apprenticeships to practicing physicians. The hospital dispensary, which would evolve into the outpatient clinic, was another site for educating the graduates of America's medical schools. But it was the hospital that prevailed, in no small measure because of the inspiring example set by Johns Hopkins University in Baltimore, Maryland, as well as by the convenience of a ready supply of cases. The critical step was the adoption by the Johns Hopkins Hospital of the German model of the scientific clinician when it opened its doors in 1889. In Germany, the medical capital of the world, physicians were steeped both in bedside medicine and in biologic science, and they were expected to become clinical investigators, applying one to the other. Pioneers at Hopkins, such as William Osler (in medicine) and William Halsted (in surgery), were enamored of the German system and chose to build residency programs in its image. Also crucial to the choice of the hospital as the primary teaching site was the large concentration of charity patients, patients felt to be deserving of medical care, provided they submitted to the indignities of serving as teaching material. Just as hospital physicians (the 19th-century counterpart of today's attending physicians) saw their role as stewards of the lower classes—and benefited from the opportunity to advance their careers by the experience and connections afforded by a hospital practice—so, too, did the newly defined interns and residents.1 And so began the fateful linkage between the hospital and the residency, marked, as Ludmerer delineates so well, by an ongoing tension between the service needs of hospitals and the educational needs of young physicians.
The codependence of hospitals and residency programs has had profound consequences for American health care. For example, today, when much of the disease burden is in the form of chronic illness, residency education continues to focus on acute medical problems, largely because moving residency training out of the hospital and into the outpatient setting has proved challenging. Tying residency to the hospital, a “total institution” traditionally structured to accommodate the needs of physicians rather than patients,3 is arguably antithetical to physicians-in-training mastering “patient-centered care,” the contemporary model of optimal care.4 Although both hospitals and residency programs are affected by scientific advances and dominant societal trends—the Civil Rights movement, feminism, and consumerism are 3 that Ludmerer addresses—the overwhelming reality is that the hospital and the residency, like a binary star system in which the 2 stars revolve around a common center of gravity, exert a strong influence on each other.
One of the earliest changes in the hospital that produced corresponding modifications in the residency was the growth in patient acuity. With advances in medicine between the wars, hospitals shed their safety-net role and became exclusively medical institutions, leading to an increase in the proportion of seriously ill patients. And because hospitals now had something to offer middle-class patients as well as the poor—in 1928, a physician at Massachusetts General Hospital commented that for the first time, acutely ill patients were better off in the hospital than at home—the volume of admissions rose as well. To care for so many sick people, hospitals turned increasingly to residents, abandoning the pyramidal system in which only a handful of interns were allowed to stay on for a second or third year of training. The multiyear residency became the norm and, no longer restricted to the best and the brightest, evolved away from the scientific investigator model and toward a more strictly clinical experience.
After World War II, another seismic shift in hospitals had a dramatic impact on residency training. The demand for hospital care accelerated further, due in part to the rapid pace of medical discoveries and in part to the rise in private health insurance. During the war, employers circumvented existing wage controls by negotiating an exemption for fringe benefits. As a result, employer-provided health insurance flourished, quickly becoming the norm and making hospital care affordable for many more patients.5 To absorb the influx of patients but also to benefit from the revenue they brought, the hospital arranged with insurers to allow payment for services rendered by residents to “private” patients who had insurance but whose physician was not on the medical staff, spelling the end of the “indigent ward.” The collapse of the rigid divide between charity patients (who were taken care of by residents) and private patients (who were not) ushered in a new era. No longer would the demeaning treatment of ward patients be tolerated. No longer would the role of residents include improving the “character” of patients; henceforth, residency was all about healing.
Perhaps the greatest change for the hospital—with important implications for residency training—was the introduction of Medicare in 1965. The program required hospitals to abandon segregated wards as a condition for receiving Medicare payments; the integration of the hospital nudged residents into treating their patients of color respectfully. In addition, Medicare provided an enormous financial boost to hospitals through its direct subsidy of graduate medical education (by paying the salaries of residents and teaching faculty) and its indirect subsidy (by offering a higher reimbursement to teaching hospitals to account for the increased complexity of illness and longer length of stay compared with community hospitals). The funds kept the teaching hospital solvent, but at a price: Because salaries for residents were tied to the care they provided for hospitalized patients, off-campus ambulatory training was not covered. Allowing residents to learn how to care for patients in a private physician's office or in a health maintenance organization was problematic; residency programs would instead teach residents greater and greater technologic proficiency through exposure to continuous bedside hemodialysis, intra-aortic balloon pumps, and the like—techniques most would never use after graduation.
Just as the hospital influenced the residency, so too did the residency influence the hospital, although residency programs were always the smaller star in the binary system. One of the major ways that residency affected the hospital was through the “power of the pen,” the control exercised by residents over the ordering of tests and procedures (now supplanted by the click as electronic order entry replaces the traditional pen and paper). Because residents were learners, and because they were expected to be exceptionally thorough, they were encouraged to be profligate in their test ordering. With MRI scans and PET scans, endoscopies and echocardiograms all paid for by third-party payers, there was little attempt to rein in residents' proclivity for greater certainty through more testing. Such behavior increased the cost of medical care, affected physicians' behavior after they entered clinical practice, and put residents and hospitals on a technological treadmill.
Let Me Heal makes clear that as graduate medical education goes, so goes American health care: Well-trained physicians, steeped in a culture of scientific medicine and dedicated to patients, are essential for the public's health. Allowing the commercialization of health care to seep into the fabric of residency training, Ludmerer cautions, will have a pernicious effect. The solution to the problem, the historical narrative suggests, will require modifications in the structure of residency programs, but that will not be possible without simultaneously reforming the institution with which residencies are tightly linked: the general hospital.
Ludmerer K. Let Me Heal: The Opportunity to Preserve Excellence in American Medicine. New York, NY: Oxford University Press; 2014.