Integration of Continuous Quality Improvement Methods Into Annual Program and Institutional Evaluation
ABSTRACT
Background
The Accreditation Council for Graduate Medical Education's Next Accreditation System requires continuous program improvement as part of program evaluation for residency training institutions and programs.
Objective
To improve the institutional- and program-level evaluation processes, to operationalize a culture of continuous quality improvement (CQI), and to increase the quality and achievement of action items, the Wayne State University Office of Graduate Medical Education (WSU GME) incorporated CQI elements into its program evaluation process.
Methods
Across 4 academic years, WSU GME phased the following 4 CQI elements into the evaluation process at the program and institutional levels, including the annual program evaluation (APE) and the annual institutional review: (1) an APE document template; (2) a SMART (specific, measurable, accountable, realistic, timely) format for program and institutional goals; (3) dashboards of program and institutional metrics; and (4) plan-do-study-act cycles for each action item.
Results
Action item goals improved in adherence to the SMART format: in 2014, 38% (18 of 48) omitted at least 1 field, compared with 0% in 2018. Complete action items were resolved faster than incomplete ones: 1.7 years compared with 2.4 years (t(43.3) = 2.87, P = .003). The implementation of CQI in the APE was well received by program leadership.
Conclusions
After leveraging CQI methods, both descriptions of institutional- and program-level goals and the time required for their achievement improved, with overall program director and program coordinator satisfaction.
Introduction
Continuous quality improvement (CQI) is a model that focuses on innovation and growth.1–3 The Accreditation Council for Graduate Medical Education's (ACGME) Next Accreditation System (NAS) explicitly adopts CQI principles, such as the principle that improvement requires change and that change must be monitored.4 The ACGME Common Program Requirements state that the annual program evaluation (APE)—a process of monitoring and tracking program performance—serves as “part of the program's continuous improvement process.”5 This move to CQI requires new strategies and tactics for institutions and programs.
Because CQI mirrors ACGME's NAS, and our clinical faculty are already familiar with CQI in the context of clinical care improvement, the Wayne State University Office of Graduate Medical Education (WSU GME) chose CQI as an appropriate framework for program and institutional improvement. The authors found no scholarship directly applying CQI principles to GME program evaluation, but did find precedents for the use of CQI-like tools in GME more generally. Lypson and colleagues6 used plan-do-study-act (PDSA) cycles to improve their program review process, but did not integrate PDSA cycles into the process itself. Amedee and Piazza7 developed an annual institutional review (AIR) resembling a quality control process, which integrates APEs to allow better institutional oversight of programs. The Educational Innovations Project included program directors who shared improvement tactics and experiences8 that were not modeled on formal CQI processes but were constructive and valuable for participants.9 None of this work incorporated robust managerial practices into APE and AIR processes. This article describes the introduction of CQI methods into the APE and AIR evaluation processes over 4 years, from the 2014–2015 through 2017–2018 academic years.
We hypothesized that introducing CQI methods into the APE process would improve goal setting and achievement. We introduced CQI incrementally from 2014 to 2018 and studied programs' action items to test this hypothesis.
Methods
Participants
The WSU GME provides accreditation oversight and support to 7 programs that together train, on average, 120 to 125 residents and fellows a year. Programs range in duration from a 1-year transitional year program (16 residents) to a 5-year otolaryngology program (12 residents).
Description of Intervention
The introduction of CQI elements started in 2014–2015, with elements added each academic year (see timeline in figure 1). Each element is described below:
-
APE document template (2014): The WSU GME provided every program with an APE document template. Programs completed this document, which was then peer-reviewed by another program director who submitted written feedback to the WSU GME and made suggestions for improvement verbally at the Graduate Medical Education Committee (GMEC) meeting. Revised documents were approved at a subsequent GMEC meeting.
-
SMART goal format (2014): The template required every action item for program and institutional improvement to include each SMART field: specific aims, measurable outcomes, accountable parties, realistic goals, and time for completion. This format is used in business10 and has been shown to improve residents' independent learning plans.11
-
Use of SMART goals for the AIR (2014): WSU GME wrote institutional SMART goals to make the AIR process analogous to the open peer review of the APEs. Each year, an executive summary of the AIR detailing SMART goals and PDSA cycles of their implementation was distributed to program directors and institutional leadership.
-
Dashboards (2017): WSU GME developed program and institutional dashboards by combining metrics from multiple sources (ACGME and internal GME surveys, resident in-training examination scores and milestones ratings, accreditation status and citation counts, match performance, etc) into standardized measures of important constructs (resident performance, program quality, faculty development, and graduate performance), with meaningful cut-points for low, moderate, acceptable, and exemplary performance set from national standards and institutional expectations. Metrics were combined using Bayesian inference to produce point estimates and 80% credible intervals for each construct (figures 1 and 2; a minimal sketch of such a combination follows this list).
-
PDSA cycles (2018): The SMART goal format was redesigned and tables were added detailing the PDSA process below each action item (figure 1). The table was based on content from the Institute for Healthcare Improvement (IHI) Open School.
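As a minimal sketch of how such a Bayesian combination might work (assuming a simple conjugate normal model; the function name, weights, and prior below are hypothetical), each standardized metric (z-score) can be treated as a noisy observation of a latent construct such as "resident performance," weighted by a precision reflecting confidence in its source; the posterior then yields a point estimate and an 80% credible interval:

```python
import numpy as np
from scipy import stats

def combine_metrics(z_scores, precisions, prior_mean=0.0, prior_precision=1.0):
    """Combine standardized metrics (z-scores) into one construct estimate.

    Assumes a conjugate normal model: each metric is a noisy observation of
    the latent construct, with a known precision acting as its weight. The
    prior represents institutional expectations (mean 0 = average performance
    on the standardized scale).
    """
    z = np.asarray(z_scores, dtype=float)
    w = np.asarray(precisions, dtype=float)

    # Posterior precision, mean, and standard deviation of the latent construct
    post_precision = prior_precision + w.sum()
    post_mean = (prior_precision * prior_mean + (w * z).sum()) / post_precision
    post_sd = 1.0 / np.sqrt(post_precision)

    # 80% credible interval (10th to 90th percentile of the posterior)
    lo, hi = stats.norm.interval(0.80, loc=post_mean, scale=post_sd)
    return post_mean, (lo, hi)

# Example: three hypothetical sources of evidence for "resident performance"
# (in-training exam, milestones ratings, survey responses), each standardized,
# with weights reflecting confidence in each source.
estimate, interval = combine_metrics(z_scores=[0.4, 0.8, -0.1],
                                     precisions=[2.0, 3.0, 1.0])
print(f"Point estimate: {estimate:.2f}, 80% CrI: ({interval[0]:.2f}, {interval[1]:.2f})")
```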



Faculty Development and Training
The WSU GME trained program directors and coordinators in these methods and provided online learning tools. The APE document template included instructions and WSU GME provided clarification as needed. WSU GME leadership discussed CQI elements at the GMEC subcommittee of compliance and improvement, the program coordinator meeting, and individual meetings between GME leadership and program directors. Transparency of the peer-review process at GMEC meetings was designed to reinforce a shared culture of the importance and purpose of program evaluation through CQI.
Analysis
We gave each SMART action item a unique identifier and recorded the year it was introduced, whether it was complete (ie, included all content in the SMART format), the number of years it persisted, and the year of its resolution, if any. An action item was deemed resolved if the program evaluation committee marked it resolved on the APE document or if it did not appear on the next year's APE document. To track the success of action items, we computed the number of years each action item persisted and the frequency of single-year action items for each academic year. The numbers of years of persistence of complete and incomplete SMART goals were compared using Welch-corrected independent samples t tests.
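As a minimal sketch of this comparison (using hypothetical persistence values, not the study data), the Welch-corrected test can be run with scipy.stats.ttest_ind and equal_var=False:

```python
from scipy import stats

# Hypothetical persistence data (years each action item remained open);
# the actual study data are not reproduced here.
complete_items = [1, 1, 1, 2, 1, 3, 1, 2, 1, 1]     # all SMART fields filled in
incomplete_items = [2, 3, 1, 4, 2, 3, 2, 1, 3, 2]   # at least 1 field missing

# Welch-corrected independent samples t test (does not assume equal variances)
result = stats.ttest_ind(complete_items, incomplete_items, equal_var=False)
print(f"t = {result.statistic:.2f}, P = {result.pvalue:.3f}")
```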
The Institutional Review Board at Wayne State University reviewed this study and determined that it qualified for exemption.
Results
All sponsored programs successfully completed an APE document template each year of this study. Across the 7 programs, 151 distinct action items were identified. The mean number of years each action item persisted decreased linearly with the year of its introduction (2014–2015: 3.1 years; 2015–2016: 2.4 years; 2016–2017: 1.5 years; 2017–2018 action items were excluded from this analysis). This was not an artifact of the decreasing follow-up time available: the percentage of action items persisting only 1 year increased each year (2014–2015: 33%; 2015–2016: 63%; 2016–2017: 88%).
Adherence to the SMART format improved each year. In 2014, 38% (18 of 48) of action items had at least 1 missing field, most often the accountable field. This percentage decreased over time: 21% (10 of 47) in 2015, 19% (12 of 62) in 2016, 19% (10 of 53) in 2017, and 0% (0 of 25) in 2018. An examination of the quality of action items found improvement in the scope and clarity of goals over time (table). Of the incomplete action items, 65% (20 of 31) persisted more than 1 year (mean 2.4 years), compared with 35% (33 of 94) of complete action items (mean 1.7 years). The difference in persistence rates was statistically significant (t(50.7) = 2.93, P = .003), as was the difference in the number of years of persistence (t(43.3) = 2.87, P = .003).
Program directors and coordinators generally received the intervention positively and fully participated in APE document completion and peer review each year. Informal feedback to the WSU GME was constructive and encouraging.
Dashboard construction required the collection and organization of data into a single database and the development of a new approach for standardizing and combining metrics into a hierarchical framework of important constructs. Once developed, new data were easily added to the database, and the program used to generate updated dashboards was simple to run. The AIR, which presented SMART goals based on dashboard metrics and PDSA cycles for their resolution, was used for ongoing faculty development of program leadership. The WSU GME also subscribed to and made available resources at the IHI's Open School for further faculty and staff development.
Discussion
Action item quality improved over the years: SMART goals were more complete and focused, clearer, and resolved in less time. Action items with no empty fields in the SMART format were less likely to persist for more than 1 year. Introducing CQI methods was an iterative, multiyear process requiring time to train faculty and staff, but was well-accepted by program directors and coordinators.
In the spirit of the continuous improvement model, we introduced the CQI elements incrementally, responding to feedback from program directors and coordinators along the way. Challenges included turnover of program directors and coordinators, which necessitated ongoing professional development on the CQI model. In addition, at the urging of program coordinators, we changed the APE template from an Excel spreadsheet to a Word document to simplify data entry.
This study is limited by the focus on 1 medium-sized institution, thus the findings may be less generalizable to smaller or larger GME sponsoring institutions. The amount of faculty development needed may vary across institutions since the effectiveness of SMART goals depends on context and training.12 Also, while we tracked the adherence to SMART goals and action item completion, we did not measure whether this directly led to program improvement or increased resident competence. The iterative introduction of CQI over 4 years precluded a rigorous test of the impact of the methodology on program improvement.
This project will continue with adjustments for observed successes, implementation difficulties, and tactical shortcomings. We will continue to track the impact of these CQI-focused methods on our programs' quality. We will also continue to weigh the investment, primarily of time, in developing tools and training faculty and staff against the observed gains in program autonomy, program leadership effectiveness, and faculty and staff satisfaction.
Conclusions
Our GME office introduced CQI methods into its APE and AIR processes over 4 years. During that time, action items listed in APE documents became better defined and more likely to be resolved quickly. Using a systematic goal-setting process (SMART) and a systematic multi-source assessment process (dashboards), WSU GME has begun to successfully leverage widely accepted CQI methods to capitalize on program and institutional opportunities and strengths and to foster a more effective culture of educational improvement.

Figure 1: Timeline of CQI Methods With Examples From Annual Program Evaluation Document Templates
Abbreviations: CQI, continuous quality improvement; APE, annual program evaluation; SMART, specific, measurable, accountable, realistic, timely; AIR, annual institutional review; GME, graduate medical education; PDSA, plan-do-study-act.

Figure 2: Dashboard Summary Showing 3 Levels of Hierarchy^a
a The standardized estimate of the construct of “resident performance” (top level) is a combination of standardized estimates of subconstructs (such as “professionalism”), which is a weighted combination of standardized measures from multiple sources (year-adjusted milestones ratings, ACGME survey and institutional graduate medical education survey responses).
Author Notes
Funding: The authors report no external funding source for this study.
Conflict of interest: The authors declare they have no competing interests.
The authors would like to thank Kanye Gardner, Director of Continuous Quality Improvement, Wayne State University School of Medicine, and Robert Dace-Smith, Program Management Specialist, Wayne State University School of Medicine, for their valuable consultation on continuous quality improvement methods and processes; Dr. Heidi Kenaga, Research Coordinator, for her manuscript preparation and editorial consulting; and Martha Jordan, Wayne State University Graduate Medical Education Administrative Director, for her tireless work in implementing the strategies outlined in this article.



