Faculty- and Program-Level Adaptations to Competency-Based Assessment Demands

Online Publication Date: 15 Dec 2023
Page Range: 742 – 743
DOI: 10.4300/JGME-D-23-00712.1

The Challenge

In the past, “great intern” and “best resident ever” were commonly used phrases found in trainee assessments. Now assessors must adopt a competency-based growth mindset to document residents’ and fellows’ progress toward meeting specialty-specific ACGME Milestones and provide them with guidance for improvement. Concurrently, assessors must accept that their implicit biases and experiences affect how they teach and assess and that everyone assesses performance differently. These factors raise concerns about the accuracy of assessments. Given evolving assessment responsibilities, combined with a renewed focus on fairness and equity in assessment, ongoing professional development for clinical faculty who perform assessments is critical. Faculty and program leadership often have limited expertise in identifying and prioritizing the steps needed to improve their assessments.

What Is Known

Competent assessors understand how and when to use the assessment methods best suited to their teaching environments. They understand the relationship between the purpose of assessment (eg, low-stakes versus high-stakes decisions) and the quantity and quality of assessment evidence needed. While skills in selecting and utilizing assessment methods aligned with a program’s goals are important, faculty must also understand the difference between individual assessment events (eg, the Objective Structured Clinical Examination, direct versus indirect observation, feedback) and the overall assessment system.1,2 This is often referred to as assessment literacy.3

Competent assessors also demonstrate:

  • Awareness of fairness issues that can affect competency-based assessment (CBA) systems. These include concerns over equity (comparability of learning experiences and assessment opportunities)3,4 and equality (equal treatment). Faculty development in the areas of rater errors (eg, assessing learners against national or cohort norms rather than competencies) and the impact of biases on assessment practices is needed.1,4

  • Understanding of the need for ongoing formative assessment: trainees often require actionable feedback to meet Milestones.1

  • Understanding that trainees’ skills and knowledge must be assessed across multiple contexts and settings, and by multiple raters, to provide an accurate view of performance.1,2

A training program’s mission and values will influence the types of assessment evidence and practices deemed credible and feasible. For instance, for programs that value research, assessment of the quantity and quality of trainees’ scholarly products is more meaningful than for programs that place a high value on trainees’ skills in patient-centered care or advocacy.

How You Can Start TODAY

What Can Individual Faculty Do?

  1. Ask about the purpose of an assessment. Faculty should have a clear understanding of the assessment’s purpose, as this affects the information needed for completion. Is the assessment used to provide timely formative feedback so that trainees can improve their performance? Are the data being provided to a Clinical Competency Committee (CCC) to support high-stakes decisions? The intended use should be clear to faculty as they complete assessments.

  2. Keep Milestones at the forefront of your mind. When completing assessments, faculty must attend to the defined, specialty-specific Milestones that trainees are striving to achieve. Aligning assessment tools with Milestones reminds assessors to provide feedback based on defined metrics rather than on norm-referenced comparisons or unconscious bias. Data regarding non-Milestone competencies can be added in the comments section.

  3. Set specific expectations with trainees for providing feedback. Frequent, timely, formative feedback is a CBA cornerstone. Setting defined goals, whether they be completing a formative assessment each week or after each clinical day, highlights that providing ongoing actionable feedback is a priority. Department goals or recognition for faculty who most consistently provide trainees with timely feedback may provide additional incentives for faculty.

  4. Solicit trainees’ understanding. If trainees do not understand the feedback communicated through assessment tools, they cannot effectively use it to drive learning or promote self-development. Checking trainees’ understanding of the feedback they receive can reveal consistent gaps, which in turn can inform the need for long-term professional development and potentially avoid the “no one ever told me this before” phenomenon.

What You Can Do LONG TERM

What Can Programs Do to Help Their Faculty?

  1. Faculty development. Implementing assessor development can feel overwhelming. If your training program does not provide continuous development on topics and skills related to assessment literacy, consider leveraging offerings that may be available via your sponsoring institution’s education departments (eg, graduate medical education, undergraduate medical education, or faculty development offices). Joint faculty development with other training programs, courses from accreditation agencies, or specialty-specific professional organizations may be options for programs with fewer resources.

  2. Where and when to provide development. Optimally, regular faculty development would be embedded in a program’s existing activities (eg, faculty meetings, competency committee sessions, retreats) to avoid additional time commitments. If you currently offer ongoing faculty development, incorporate sessions focused on key elements of assessment literacy (Table). When possible, encourage trainees to attend these sessions to enrich the dialogue and hone their own assessment literacy skills.

  3. Develop your leaders. Program and block directors, as well as CCC members, must be able to identify high-quality sources of assessment evidence (eg, faculty-, trainee-, or program-generated) that are relatively free of rating errors and biases, especially when the evidence is used for high-stakes decisions such as promotion.1

  4. Monitor for rater errors and bias. Bias can permeate assessment systems through the learning environment, assessment tools, and assessors. Rater errors (eg, comparing trainee performance to others instead of to Milestones, or being lenient) harm the assessment system. To identify and minimize these problems, training programs must engage in continuous quality improvement of their assessment systems, with a specific focus on identifying rater errors and potential biases. Harness the capabilities of your CCC and Program Evaluation Committee to develop this focus on program quality improvement.

  5. Provide faculty feedback. As faculty continue to develop their assessment literacy skills, they need to receive reinforcing and modifying feedback about their assessment practices. Faculty need to be aware of assessment errors and whether they are acting as role models for feedback. The training program needs to not only discover and report this information, but also direct faculty to resources to improve their assessment literacy.

Table Essential Knowledge and Skills for Assessment Literacy in Competency-Based Systems

Author Notes

Corresponding author: Jeanette Zhang, MD, University of Florida, Jacksonville, Florida, USA, jeanette.zhang@jax.ufl.edu