Applicants' Self-Reported Priorities in Selecting a Residency Program
Abstract
Background
Residency recruitment is a high-stakes activity for all participants, yet there is limited information about how applicants choose among programs.
Objective
This study evaluated the importance applicants place on various residency program attributes; whether applicant priorities vary by sex, race/ethnicity, or specialty choice; and whether the importance of these factors changes over time.
Methods
Highly ranked applicants to residency programs at 2 academic medical centers were surveyed annually from 2004 to 2012 regarding the importance of 26 characteristics in selecting a program. Mean importance ratings for each factor were analyzed to assess priorities for the overall applicant group and to determine whether priorities differed among subgroups defined by sex, race/ethnicity, and specialty.
Results
Of 9669 applicants surveyed, 6285 (65%) responded. The 5 factors with the highest importance ratings (overall and across all subgroups) were the program's ability to prepare residents for future training or position, resident esprit de corps, faculty availability and involvement in teaching, depth and breadth of faculty, and variety of patients and clinical resources. Small but significant differences in the ratings of some factors by sex and/or specialty group were identified. Institution-level characteristics, such as call rooms, salary, and benefits, were relatively unimportant. Applicant priorities were stable over the 9-year study period.
Conclusions
Highly ranked applicants to competitive residency programs value educational aspects of the program most highly, along with resident morale. Top factors were consistent across subgroups and over the 9 years of the study. These findings have implications for resident recruitment strategies.
Editor's Note: The online version of this article contains the postmatch survey used in the study and an importance of factors table.
Introduction
Resident recruitment is a high-stakes, resource-intensive activity for teaching institutions, graduate medical education programs, and applicants. Hospitals direct substantial resources toward attracting top candidates, recognizing that residents affect the quality and efficiency of patient care, and the institution's reputation. Applicants devote substantial time, money, and emotional energy to selecting a program, which requires a multiyear commitment and has implications for their future careers.
Limited information is available to help programs optimize their recruitment process or inform applicants about how their peers make these decisions. Previous survey studies were often limited to a single specialty with a small number of respondents.1–6
The goals of this study were to (1) identify the factors influencing residents' selection of their residency program; (2) determine whether sex, race/ethnicity, or specialty affects the factors important to applicants; and (3) assess whether applicant priorities changed from 2004 to 2012, given increasing medical student debt and the apparent shift toward “controllable lifestyle” specialties.7,8 We hypothesized that applicants would prioritize academic factors over factors related to quality of life or program environment, but that the importance of quality of life would increase over the study period. We also hypothesized that priorities would vary according to applicants' specialty, sex, and race/ethnicity.
Methods
We surveyed applicants ranked highly by residency programs at the Brigham and Women's Hospital or Massachusetts General Hospital between 2004 and 2012. The survey instrument was developed based on a literature review and included 26 program characteristics (table 1). Some characteristics had been assessed in prior studies; others had not been studied previously. Content validation was accomplished through review by experts, including program directors. Cognitive pretesting with 4 interns who had recently selected residency programs informed minor revisions.
The anonymous electronic survey (available as online supplemental material) was distributed via e-mail survey software during approximately the same week each year. Participants were asked to provide their sex and race/ethnicity (except in 2012) and to rate program characteristics for their influence on program selection on a scale of 1 (no importance) to 5 (critically important).
Specialties were categorized as “procedural” or “nonprocedural” (by the authors), and as “controllable lifestyle” versus other specialties according to previous literature (table 2).9 Surgical subspecialties not previously categorized were considered “uncontrollable lifestyle”; dentistry was included among the controllable lifestyle specialties.
The Partners HealthCare System Institutional Review Board considered this study exempt.
Data were analyzed using SAS statistical software, version 9.3 (SAS Institute Inc). Lacking information on classification variables from nonresponders, we assumed that responders were representative of the population of interest. Ratings of program characteristics were analyzed as both continuous and ordinal measures and were considered significant only if both analyses concurred.
Differences in mean factor scores by sex, race/ethnicity, and specialty, and across years, were tested by 1-way analysis of variance with accommodation for variance heterogeneity across groups. Tests of sex, race/ethnicity, and specialty weighted the data for each year according to the number of respondents in that year. Effect sizes (ES) for differences between groups were calculated as the absolute difference in means divided by the pooled standard deviation and were categorized according to Cohen's convention, using thresholds of 0.2, 0.5, and 0.8 for “small,” “medium,” and “large” effects.10 Ordinal scores were compared using cumulative logistic regression. A multivariate analysis of variance with sex, race/ethnicity, and 2 specialty categorizations (controllable/uncontrollable lifestyle and procedural/nonprocedural) as simultaneous independent predictors was used to test for differences in multivariate means across all 26 residency program characteristics. Because of the large number of analyses, univariate tests of association with each program characteristic used a Bonferroni correction, with P < .002 considered statistically significant. To assess whether potential duplicate survey responses in the overall data set may have affected the results, secondary analyses were conducted on data subsets that excluded multiple programs in the same specialty.
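For readers who want the calculations behind these thresholds made explicit, the effect size and the Bonferroni-adjusted significance level described above can be written out as follows. This is a sketch based on the description in the text (Cohen's d with a pooled standard deviation and an alpha of .05 divided across the 26 characteristics), not a reproduction of the authors' SAS code:

\[
\mathrm{ES} = \frac{\lvert \bar{x}_1 - \bar{x}_2 \rvert}{s_{\mathrm{pooled}}},
\qquad
s_{\mathrm{pooled}} = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}}
\]

\[
\alpha_{\mathrm{Bonferroni}} = \frac{.05}{26} \approx .0019, \quad \text{hence the reported threshold of } P < .002
\]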
Results
Of 9669 surveyed applicants, 6285 (65%) responded; table 3 shows the response rate by year. Respondents' demographic characteristics and the specialties to which they applied are shown in table 4.
Factor ratings overall and for each subgroup are provided as online supplemental material. Applicants assigned the greatest importance (based on average ratings) to the program's ability to prepare residents for their next training (fellowship) position or first job, resident morale and esprit de corps, faculty availability/involvement in teaching, depth and breadth of the faculty, and variety of patients and clinical resources. These 5 factors were rated as most important every year, with minor variations in the order (figure). The 3 least important factors—child care, call rooms, and program brochure/website—were also consistent.



Figure: Five Factors Most Influential on Residency Program Selection (2004–2012)
Subgroup analysis according to sex, race/ethnicity, or specialty type (procedural versus nonprocedural and controllable versus uncontrollable lifestyle specialties) identified the same 5 factors as most important with minor variation in the priority order and the same 3 as least important. These findings were unchanged when programs with potential duplicative survey responses were excluded from the analysis.
Although overall priorities were aligned across all subgroups, statistically significant differences (P < .002) with meaningful effect sizes (ES > 0.20) were identified in the average ratings of some factors. Women placed greater emphasis on the institution's climate for women (ES = 0.96), climate for underrepresented minorities (ES = 0.29), and resident and faculty diversity (ES = 0.29). The largest race/ethnicity–related differences in importance ratings were for faculty and resident diversity (ES = 0.66) and the climate for minorities (ES = 0.87). Among the 26 program factors assessed, underrepresented minority (URM) applicants rated 25 factors, and Asian applicants rated 24 factors, as more important than did non-Hispanic white applicants; 20 of these differences were statistically significant.
Applicants to controllable lifestyle specialties assigned greater importance to ambulatory training (ES = 0.33) and the cost of living (ES = 0.21) than applicants to uncontrollable lifestyle specialties, and applicants to nonprocedural specialties indicated higher ratings for ambulatory training (ES = 0.42) and didactics (ES = 0.31) than procedural specialty applicants.
Discussion
The results of our study suggest that top residency applicants weigh educational aspects of programs most heavily, along with resident morale and esprit de corps, supporting similar findings from studies involving smaller groups of applicants.1,3,4,6,11,12 However, our observation that program websites and brochures have little influence on program selection conflicts with prior reports,13–16 perhaps because promotional materials matter more in the initial decision to apply to a program than in the eventual choice among programs. Similarly, geographic location was only moderately important to applicants in this study but has been identified as a high priority by other investigators.5,6,11,17 This may relate to our focus on applicants who had already decided to apply; it is logical that geography plays a lesser role when applicants choose among locations they have already deemed acceptable than it does in initial decisions about where to apply.
Interestingly, certain aspects of the residency environment, such as duty hours and the balance between clinical service and education, did not appear to be primary considerations for applicants in this study. This may reflect, in part, that Accreditation Council for Graduate Medical Education duty hour requirements and an increased emphasis on curriculum-based activities over “service” have made programs similar in these dimensions.
In an era when 84% of US medical school graduates carry debt averaging more than $150,000,18 residency salary and benefits might be expected to strongly influence program choice, but this study and others suggest otherwise.17 This finding could relate to the limited variability in salaries and benefits among programs nationally, making this a less useful discriminating factor.19 Also, applicants in this study rated the importance of health care benefits higher than they did salary, which may reflect the high cost of health insurance and a growing number of residency applicants with dependents.17
The literature is unclear about the effect of sex on residency program choice. Earlier studies found no statistically significant sex-based differences but may not have been sufficiently powered.20 More recent literature identifies an emerging pattern of differences.3,21 Several single-specialty studies indicate that women are more likely to value camaraderie among residents, conferences, and didactic teaching, whereas men are more likely to be influenced by salary.22–24 Our study supports these findings in that women rated resident morale, esprit de corps, and didactics more highly than men did—but with minimal effect sizes in each case. The same was true of men's higher rating of the importance of the salary scale.
A diverse health care workforce is a national priority, and enhancing the recruitment of underrepresented minorities is an explicit goal for many residency programs. The results of this study indicate that applicants of all races/ethnicities are most strongly influenced by the same factors and—not surprisingly—that a program's existing diversity and its climate for minorities are weighted more heavily by minority candidates. Our finding that the average ratings for most factors were significantly higher among minority applicants (with URM ratings greater than those of Asian applicants, and Asian ratings greater than those of non-Hispanic white applicants) is difficult to interpret and may relate to cultural differences in responding to surveys rather than to meaningful distinctions in program selection priorities. The only other study we found that reported ratings of residency selection factors for URMs versus non-Hispanic whites and Asians showed the same phenomenon.23
To our knowledge, this is the only long-term study of residency applicant priorities, and its 9-year time frame provides new information regarding potential trends. Using subject anonymity to encourage candid responses and timing the survey to minimize recall bias represent important methodological strengths. The large number of respondents, strong response rate, and inclusion of multiple specialties support a robust analysis of subpopulations, which adds substantially to the limited literature addressing these questions. In addition, striking consistency in the most important and least important factors across subgroups and over time supports the reliability of these results.
This study has several limitations. It included applicants to 2 large, elite academic health centers that recruit a selective group of residents and was limited to those who could have matched at these institutions. Although these hospitals draw applicants from across the United States, our findings are likely to be most generalizable to highly competitive candidates. Nevertheless, these results should be of general interest because many institutions seek to attract this group of applicants. Because responses were anonymous, we could not eliminate potential “second surveys” submitted by applicants applying to programs at both institutions in specialties where parallel programs are offered. However, secondary analyses of subsets of programs without the possibility of duplicate surveys confirmed the findings reported here.
Another limitation is that some characteristics influencing residency selection may not have been included in the survey. In addition, family medicine applicants were not surveyed because the participating institutions do not have family medicine residency programs. Finally, results should be interpreted in light of the timing of the survey because applicants may be influenced by 1 set of criteria in choosing which programs to apply to, and by a somewhat different set in determining their final ranking.
What are the implications of these results? Our data suggest that program leaders should focus on optimizing the quality of clinical education by strengthening faculty engagement and providing a rich variety of patient experiences, while remaining attentive to resident morale. These findings may also be helpful as institutions consider how to allocate limited funds: Benefits appear to be at least as important as salary and, while maximizing both is desirable, modest improvements are unlikely to have a major impact on recruitment. In addition, the findings may prompt program directors to reexamine the recruitment process to highlight program features that correspond to applicant priorities. For example, promoting interaction between applicants and faculty—demonstrating faculty interest and accessibility—may be more fruitful than a tour of call rooms and other facilities.
Conclusion
A large-scale survey of residency applicants found highly reproducible priorities—across subgroups and over 9 years—providing reassurance that residency program selection is influenced primarily by educational factors and by resident morale/esprit de corps. With this in mind, programs should continue efforts to optimize clinical preparation and to foster an environment of teamwork for the next generation of physicians.

Author Notes
Roy Phitayakorn, MD, MHPE (MEd), is Director of Surgical Education Research, Massachusetts General Hospital, and Instructor in Surgery, Harvard Medical School; E. A. Macklin, PhD, is Assistant in Biostatistics, Biostatistics Center, Department of Medicine, Massachusetts General Hospital, and Instructor in Medicine, Harvard Medical School; during this project, J. Goldsmith, MS, MEd, was Director of Strategic Initiatives, Office of Graduate Medical Education, Partners HealthCare System, and is now Administrative Director, Division of Global Health Equity, Brigham and Women’s Hospital; and Debra F. Weinstein, MD, is Vice President for Graduate Medical Education, Partners HealthCare System, and Associate Professor of Medicine, Harvard Medical School.
Funding: The authors report no external funding source for this study.
Conflict of interest: The authors declare they have no competing interests.
The authors would like to thank Anne Rigg and Elizabeth Gilliam for their help in revising this manuscript.



