The Importance of Competency-Based Programmatic Assessment in Graduate Medical Education

Online Publication Date: 23 Apr 2021
Page Range: 113–119
DOI: 10.4300/JGME-D-20-00856.1

The transition to competency-based medical education (CBME) began in earnest for accredited graduate medical education (GME) programs with the introduction of the Outcome Project in 2001.1 In 2007, the Accreditation Council for Graduate Medical Education (ACGME) began exploring Milestones.2 The Next Accreditation System (NAS) launched in 2013 with 3 core aims: strengthen the peer-review accreditation system to prepare physicians for practice in the 21st century, promote the transition to outcomes-based accreditation and medical education, and reduce the burden of traditional structure and process-based approaches.3 The NAS implemented multiple major changes. First, the Milestones defined the 6 general competencies in developmental narrative terms. By 2014, almost all participating GME programs were required to submit semiannual resident Milestones evaluations within the accreditation process. Second, all programs were also required to implement clinical competency committees (CCCs) to use group-based decision-making for judging learner progress.3

Prior to the NAS launch, an international group in 2010 identified 4 overarching principles required for effective CBME: focus on outcomes of the educational process, emphasis on acquirable abilities, learner-centeredness, and deemphasis on time-based education.4 van Melle and colleagues extended these principles with their CBME Core Components Framework.5 This framework (Table 1) identifies 5 essential components of competency-based training programs that medical educators must, ideally, address to implement CBME. Table 1 also identifies gaps in the implementation of these components and offers potential goals and approaches to close those gaps. While this discussion focuses on the fifth core component, programmatic assessment, each component is essential to implementing CBME.

Table 1 Core Components Framework for Competency-Based Medical Education (CBME)

Operationalizing the NAS continues to be a work in progress. The transition from a time-based model, which relies on time and volume as proxies for competence, to outcomes-based medical education remains a major challenge for the US GME system. The COVID-19 pandemic has further exposed many limitations of a time-based system and has disrupted traditional faculty-learner interactions, time-based rotation schedules using fixed learning venues, and previously developed approaches to assessment. Prior to the pandemic, a number of studies showed significant gaps and variability in the assessments programs used to make decisions about their learners' progression on the Milestones.6,7 For example, in a study of 14 CCCs by Schumacher and colleagues, only one program reported using multisource feedback, and no programs reported using clinical performance data as part of their program of assessment.8 In the fall of 2020, the ACGME released guidance for assessment during the pandemic that highlighted the importance of programmatic assessment and the need to continue assessing all of the competencies to ensure graduates are prepared for unsupervised practice.9

Shifts in training venues and in the individuals available to conduct direct observation (secondary to redeployment) have made assessment opportunities more challenging to capture.10-12 These new and evolving realities create an opportunity to redouble efforts to realize an outcomes-based GME system. To accelerate change, the GME system and the NAS need to further integrate the original 4 principles with the 5 core components of CBME. One area requiring heightened effort is programmatic assessment, which is essential to fully achieving the promise of outcomes-based education to meet the needs of the public. This perspective presents key aspects of successful programmatic assessment for residencies and fellowships, with a focus on newer concepts to enhance effectiveness.

Programmatic Assessment in the NAS

A core principle of CBME is that a program must know whether a learner demonstrates the expected level of competence before the learner advances. Doing so requires clear definitions of desired outcomes and assessment systems that accurately identify whether learners have made sufficient progress and ultimately achieved graduation outcomes. The components of programmatic assessment described in Table 1 are essential to this process.5 High-quality assessment can generate data and insights to support and drive effective feedback, coaching, self-regulated learning, and professional growth.13

System of Programmatic Assessment

Systems thinking is necessary for effective programmatic assessment. A programmatic assessment system can be defined as a group of individuals who work together on a regular and longitudinal basis to perform, review, and improve assessments.14 Individuals involved in this system include program directors and associate program directors, core faculty, peers, staff, and patients. Additionally, CCCs and program evaluation committees (PECs) convene subgroups of this assessment system to provide individual learner assessment and overall training program assessment, respectively. This group must share the goals of programmatic assessment, possess a shared understanding of clinical and educational outcomes, create interdependent links between individual learner assessment and program evaluation, process information about learner performance (ie, both feedback and feed-forward mechanisms), and commit to producing trainees fully prepared to enter the next phase of their professional careers. Done correctly, systematic programmatic assessment utilizes both qualitative and quantitative data and professional judgment to optimize learning, facilitates decision-making regarding learner progression toward desired outcomes, and informs programmatic quality improvement activities.14

An idealized GME assessment system is represented in Figure 1. As conceptualized in this figure, programmatic assessment includes all of the activities within the box, using multiple assessment methods and tools to generate robust data that inform the judgment of the CCC regarding learner progression. This judgment is then presented as a recommendation to the program director while also providing feedback to both faculty and learners. Building programmatic assessment requires implementing an integrated combination of assessment methods and tools to determine a learner's developmental progression in each of the 6 general competencies. While not a complete list, Table 2 provides a core menu of assessment tools and methods appropriate for each general competency.

Figure 1 The GME Assessment System

Table 2 Examples of Recommended Core Assessment Tools/Methods by Competency to Support Programmatic Assessment

Programmatic assessment should also sample appropriately across all learning venues and at the expected levels of learning. The Milestones provide a basic rubric for developmental progression within the competencies. Miller's Pyramid is a useful framework to help a program choose the right type of assessment for the developmental stage of the learner (Figure 2).15 While assessment at the GME level should emphasize the “does” of Miller's Pyramid, programmatic assessment should include appropriate approaches across the full continuum from “knows” to “does.” Ultimately, the majority of assessment should rely on work-based methods such as direct observation, multisource feedback, clinical performance measures, and methods to probe clinical reasoning in patient care. Finally, tracking where, how, and how frequently assessments are completed will help ensure robust assessment across all necessary competency domains throughout the program (Figure 3). This programmatic assessment “map” is essential to ensuring that the core abilities learners need are being taught and assessed.
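To make the idea of an assessment “map” concrete, here is a minimal illustrative sketch, in code, of how such a matrix could be represented and audited. It is not part of the original article or any ACGME tool: the competency labels are the 6 general competencies, but the assessment records, venue names, Miller's Pyramid tags, and the coverage rule (every competency should be sampled by at least one work-based method) are all hypothetical assumptions for illustration.

```python
# Illustrative sketch of a programmatic assessment "map": each recorded
# assessment notes the competency sampled, the tool/method used, the
# learning venue, and the Miller's Pyramid level it targets.
# All entries and rules below are hypothetical, not ACGME requirements.
from collections import defaultdict
from dataclasses import dataclass

COMPETENCIES = [
    "Patient Care",
    "Medical Knowledge",
    "Practice-Based Learning and Improvement",
    "Interpersonal and Communication Skills",
    "Professionalism",
    "Systems-Based Practice",
]

# Assumed set of work-based ("does"-level) methods for the coverage check.
WORK_BASED = {
    "direct observation",
    "multisource feedback",
    "clinical performance measures",
    "chart-stimulated recall",
}

@dataclass
class Assessment:
    competency: str
    tool: str          # e.g., "direct observation"
    venue: str         # e.g., "inpatient wards"
    miller_level: str  # "knows", "knows how", "shows how", or "does"

def audit_map(assessments):
    """Flag competencies with no work-based assessment in the map."""
    by_competency = defaultdict(list)
    for a in assessments:
        by_competency[a.competency].append(a)
    return [
        c for c in COMPETENCIES
        if not {a.tool for a in by_competency[c]} & WORK_BASED
    ]

# Hypothetical usage: these 2 records leave 5 of the 6 competencies
# without work-based sampling, signaling gaps to address.
records = [
    Assessment("Patient Care", "direct observation", "inpatient wards", "does"),
    Assessment("Medical Knowledge", "in-training exam", "classroom", "knows"),
]
print(audit_map(records))
```

In practice, the same audit could also group records by venue or by Miller's Pyramid level to answer the “where” and “at what level” questions the map is meant to surface.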

Figure 2 Assessing for the Desired Outcome

Figure 3 Programmatic Assessment Mapping Matrix

Programmatic Assessment and the Human Element

The quality of the data generated by assessment programs and by individual assessment methods and tools is highly dependent on the faculty who use them. While energy is routinely spent designing and perfecting assessment tools, most of the variability in the data these instruments generate is due to the human element.16 Rather than pursuing the “perfect tool,” programs are better served by ensuring that faculty understand the educational goals and outcomes and share an understanding, or mental model, of how the assessment program documents the developmental progression of learners toward those outcomes. It is no longer adequate for assessment to document only what has been learned. This same information must be shared with learners to help catalyze and define their future learning path.17 Assessment and the feedback it generates should address both what has been learned (assessment “of learning”) and the next step in development (assessment “for learning”).

Learner Role in Assessment

The learner's role in assessment has received woefully little attention in medical education. The NAS requires that residents and fellows develop individualized learning plans and leverage assessment data longitudinally to support their professional development. Learners must understand the role of assessment and use assessment data during their training, and in preparation for unsupervised practice, to support continuous professional development. A philosophy beginning to gain traction in medical education is coproduction.18 Coproduction is based on the principle of restoring individual agency for learning and assessment to the trainee, rather than assuming it rests only with faculty. Coproduction in assessment positions the learner as an active partner who generates their own self-assessments, with agency to seek assessment, feedback, and coaching, and to help determine which approaches to future learning will be most helpful. These behaviors help struggling learners meet expectations while ensuring that learners at or above the expected level of competency continue to pursue mastery. Coproduction extends and refines the CBME concept of tailored learning, or learner-centeredness.5

The Role of Milestones and Entrustable Professional Activities in Programmatic Assessment

The NAS Milestones provide a framework for assessing learners' developmental progression in the 6 general competencies. Describing an individual's Milestones progress provides a road map for interpreting rotation-based assessment data (especially work-based assessments) to define that individual's learning trajectories. The Milestones should guide the synthetic judgment the CCC completes semiannually. Milestones were not designed to be used as stand-alone faculty evaluation forms.19 If learner trajectories consistently miss expected targets in any area of general competency growth, programs should critically review curriculum content, delivery, and assessment to ensure the educational program is providing the appropriate learning environment.20 Through this process, programs can identify and remove or improve ineffective learning and assessment activities as part of programmatic quality improvement.
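As a purely hypothetical illustration of this kind of trajectory review (not an ACGME method and not described in the original article), the sketch below compares a learner's semiannual Milestones ratings for each subcompetency against program-set expected targets; the numeric scale, the subcompetency labels, the targets, and the “consistently missing” rule are all assumptions for the example.

```python
# Illustrative sketch: flag subcompetencies whose semiannual CCC Milestones
# ratings consistently fall below program-defined expected targets.
# The scale, targets, and thresholds here are assumptions, not ACGME rules.

def off_trajectory(ratings, targets, tolerance=0.5, min_misses=2):
    """Return True if ratings fall short of the expected target by more
    than `tolerance` in at least `min_misses` consecutive reviews."""
    consecutive = 0
    for rating, target in zip(ratings, targets):
        if rating < target - tolerance:
            consecutive += 1
            if consecutive >= min_misses:
                return True
        else:
            consecutive = 0
    return False

# Hypothetical data: ratings over 4 semiannual reviews on an assumed
# numeric scale, with one expected target per review period.
learner_ratings = {
    "PC-1 (hypothetical subcompetency)": [3.0, 4.0, 5.0, 6.0],
    "SBP-2 (hypothetical subcompetency)": [2.0, 2.0, 2.5, 3.0],
}
expected_targets = [3.0, 4.0, 5.0, 6.0]

for subcompetency, ratings in learner_ratings.items():
    if off_trajectory(ratings, expected_targets):
        print(f"Off trajectory: {subcompetency}; "
              f"review curriculum, delivery, and assessment")
```

A persistent pattern of flags across many learners would point toward a program-level curriculum or assessment problem rather than an individual learner issue, which is the quality improvement loop described above.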

The Milestones themselves can, and will need to, improve. In 2016, the ACGME launched the Milestones 2.0 project to refine and revise all of the initial Milestones sets.21 Milestones 2.0 addresses the substantial variability in content and developmental progression in the initial subspecialty Milestones and simplifies and standardizes the language used to describe developmental progression. The ongoing Milestones 2.0 initiative has identified a set of standardized, or harmonized, subcompetencies for the 4 general competencies other than patient care and medical knowledge. Once complete, this evolution of the subspecialty Milestones will guide programs as they review and update their educational programs to ensure they continue to meet educational outcomes.

As the NAS has evolved, interest in entrustable professional activities (EPAs) has also grown. While the use of EPAs is not required for ACGME accreditation, EPAs have gained support as a strategy for structuring clinical assessment. EPAs were introduced by ten Cate as a framework to define and assess the essential clinical activities required of a profession.22 EPAs describe the essential work of the profession, whereas Milestones and competencies frame attributes of the learner's abilities. While profession-level EPAs are valuable, programs can also develop customized EPAs to document achievement of desired outcomes for specific rotations (Box 1).

Programmatic Assessment Success

Programmatic assessment must be “fit for purpose.”14 Does an assessment program's combination of tools and methods help determine and guide learners' developmental progression, and does it allow for feedback that informs individual learning plans and program-level improvement? If an assessment is elegantly designed and deployed but does not generate data that inform these outcomes, it is insufficient. Hauer and colleagues identified 6 principles that can help programs avoid inadequate programmatic assessment and that should guide all programs as they implement and continuously improve their assessment systems (Box 2).23

Conclusions

Programmatic assessment, approached through a systems lens, is essential to ensuring desired outcomes in GME. Its elements include high-quality, multifaceted assessment methods and tools; group decision-making using best practices in group dynamics; longitudinal and developmental thinking in assessment; and a philosophy of coproduction, with learners as active partners. Without each of these, especially learners as active partners, GME risks producing learners with a limited capacity for self-directed, lifelong learning. The disruptions caused by the COVID-19 pandemic have further reinforced the importance of programmatic assessment.

Copyright: 2021
Author Notes

Corresponding author: Eric S. Holmboe, MD, MACP, FRCP, Accreditation Council for Graduate Medical Education, eholmboe@acgme.org, Twitter @boedudley