To the Editor: Re-Evaluating Research Incentives—and Firefighting, a Response to Hughes et al
We thank Mr. Hughes and Dr. Checkett for their interest in our work.1 Despite their concerns that our piece provides inadequate context and depth, we find much common ground. We agree that subpar research training, inadequate peer review, and flawed methodological approaches are important causal factors in perpetuating low-quality scientific research, and we support advocacy to improve these areas. Where we differ is in our appraisal of the incentives for authors to produce voluminous low-quality work, and what the consequences of restructuring those incentives might be.
Regardless of the availability of high-quality mentors or previous research training, the incentive for most medical students to participate in research is the reasonable belief that doing so will confer an advantage in competitive residency selection processes.2,3 Many residency programs receive more than 100 applications for each available position.4 A program director who receives 1000 applications and spends just 10 minutes reviewing each would require 10 weeks of professional time to read each application once.5 Applicants recognize that, in such an environment, a lengthier list of publications may determine whether they are offered an interview at all.
Limiting the number of publications that could be considered in the initial review would remove the incentive for applicants to report longer and longer lists of publications. It might—as Hughes and Checkett suggest—create the unintended consequence of incentivizing applicants previously uninvolved in research to produce at least 3-5 publications to avoid having their application screened out. Yet such applicants face a similar incentive in the current system—the major difference is that, presently, even those with many publications still feel pressure to produce a continually inflating quantity of research to survive initial application screening. Furthermore, we are optimistic that even an increase in this “check box” mentality would be more than offset by the benefits of changing program directors’ incentives when reviewing applicants’ research output. If applicants can no longer be distinguished by the length of their publication list, programs seeking future physician-scientists will need to examine the quality of previous research contributions. It is not at all clear to us why such programs would favor poorly conducted systematic reviews and meta-analyses (or, even if they did, why policymakers or authors of clinical guidelines would view such work any differently than they currently do). Nor is it clear why incentives aimed at increasing the quality of research would lead to more authorship misrepresentation. The pressure for quantity over quality seems the more likely driver of unethical authorship, as authorship misrepresentation is associated with a greater number of reported works on residency applications.6
Finally, we agree that limiting the number of publications is not a panacea (and were careful to acknowledge as much).7 In an ideal world, researchers of all types would have access to high-quality mentorship and robust research methods training (and the incentives to use them). Limiting the number of publications that may be listed in a residency application in no way precludes advocacy for, or improvement of, these systemic issues. It is, however, a simple and immediately achievable policy that could limit some of the harm in the present system. We find Hughes and Checkett’s firefighting analogy apt and would extend it further. Should we gaze upon a burning building with a hose in hand, yet refuse to open the nozzle simply because doing so would not improve the building code or the sprinklers? Or should we use all our resources to extinguish the immediate danger, while simultaneously working to improve the underlying systemic issues?