Coaching Residents in the Ambulatory Setting: Faculty Direct Observation and Resident Reflection
ABSTRACT
Background
Direct observation can be valuable for learners' skill development in graduate medical education, but it occurs infrequently. Information on best practices for direct observation interventions in the ambulatory setting, and on how to optimize trainee learning from them, is limited.
Objective
We explored the impact of a focused outpatient direct observation and coaching intervention on internal medicine residents.
Methods
Using a behavior checklist based on tenets of clinical excellence, 2 faculty preceptors observed outpatient primary care visits with 96% (46 of 48) of the internal medicine residents in 2017. Residents self-assessed their performance after the visit using the same checklist. Next, a focused coaching feedback session, emphasizing reflection, was structured to highlight areas of discrepancy between resident self-assessment and coach observation (blind spots), and residents were asked to identify goals for practice improvement.
Results
Common blind spots in resident self-assessment related to collaborating with patients while using the electronic health record (48%, 21 of 44), hand washing (43%, 20 of 46), and asking thoughtful questions (40%, 18 of 45). At 1-month follow-up, 90% of responses (38 of 42) indicated a change in practice toward stated goals often or sometimes. All residents reported that the intervention felt comfortable, and 98% (45 of 46) noted that it helped them identify new behaviors to incorporate into clinical practice.
Conclusions
Structured episodes of direct observation and coaching in the outpatient setting, with a behavior checklist, appear acceptable and useful for internal medicine residents' learning and development.
Introduction
Direct clinical observation is an essential component of assessment in residency education, codified in the United States in the Accreditation Council for Graduate Medical Education (ACGME) Milestones.1,2 Available data suggest that episodes of direct observation can contribute to clinical development and improve the accuracy of trainee evaluation, yet direct observation occurs less frequently than recommended or desired, due in large part to faculty time constraints and a lack of structure to consistently accommodate it.3–5
The clinical coaching paradigm, a training strategy based on direct observation with attention to discrete skills development, is an increasingly popular model in medical education. Coaching places an emphasis on reflection. Learners are encouraged to think about what they did well and how they might care for their patients more effectively in the future.6,7 An important goal of coaching is to help learners achieve deliberate practice, usually through rigorous self-assessment.8 While clinical coaching generally involves a longitudinal relationship with a learner and serial observations over time, it can also occur in isolated episodes.9,10 In contrast to teaching, where competence is the goal, the objective for coaching is excellence.
Procedural specialties have embraced the coaching model in recent years, demonstrating acceptability to learners and an impact in improving surgical technique.11 Information on clinical coaching in other areas of medicine is sparse. Ambulatory practice is a crucial aspect of internal medicine residency training in which direct observation is often limited, because encounters occur behind closed doors and several residents are supervised by 1 attending physician. Data suggest that structured episodes of direct observation in ambulatory settings might be valuable for educational assessment and resident skills development,12,13 although no published studies, to our knowledge, have examined the benefits of adding structured coaching in this clinical setting.
We hypothesized that direct observation and coaching by experienced preceptors would identify important deficiencies in clinical practice unrecognized by residents and that coaching would help residents create discrete performance goals. We sought to evaluate the differences between resident self-assessment and faculty observation and to assess the impact of coaching feedback on goal setting and achievement in an academic general internal medicine (GIM) practice.
Methods
Setting and Context
This intervention was implemented at the Johns Hopkins Bayview GIM practice in Baltimore, Maryland, between March and June 2017. The practice serves as the primary continuity clinic for all 48 residents in the Johns Hopkins Bayview Internal Medicine Residency Program. Residents are expected to complete at least 2 partially observed encounters each year with a GIM or geriatrics faculty preceptor using a mini-clinical evaluation exercise (mini-CEX) format.14 These exercises typically focus on 1 particular portion of the visit (eg, physical examination) and rarely span the entire encounter length.
Coaching Intervention
Residents were introduced to the intervention in advance and told that it was designed as a quality improvement initiative to optimize their growth in the ambulatory setting. Learners understood that the assessment was intended to be formative.
Two GIM faculty preceptors, with 19 years of combined experience, conducted full-length direct observations of clinical encounters once with each resident in the program. Both coaches participated in the intervention design and completed a 9-month teaching skills course prior to the intervention.15 Coaching faculty had longitudinal relationships with many residents.
Coaching sessions were structured to improve on existing strategies for direct observation in several ways.16 The workflow for the coaching episode is shown in figure 1. Faculty evaluated trainees using a 30-item checklist of behaviors based on the mini-CEX and published tenets of clinical excellence, which had previously been developed for coaching outpatient attending physicians.17 The checklist was developed with consideration of 6 domains of clinical excellence relevant to the ambulatory setting: (1) professionalism and humanism; (2) communication and interpersonal skills; (3) use of the electronic health record (EHR); (4) diagnostic acumen; (5) skillful negotiation of the health care system; and (6) medical knowledge. In pilot testing, coaches watched several faculty members caring for patients to optimize ease of use, enhance clarity, and ensure acceptable interrater agreement.



[Figure 1. Coaching Session Workflow]
Immediately after the encounter, residents were given a copy of the checklist to complete as a self-assessment. Next, a 15- to 30-minute focused coaching feedback session was led by the faculty coach. Together, the pair reviewed the 2 copies of the completed checklist to identify areas of disagreement in assessment. The coaching feedback discussion was informed by specific, concrete examples of resident behavior and language during the encounter. Particular emphasis was given to items in which resident self-assessment differed from faculty objective assessment—blind spots. Blind spots were presented to residents as (1) “surprising good news,” when coaches observed residents execute checklist behaviors the residents self-reported as not performed; and (2) “surprising bad news,” when coaches did not observe behaviors the resident reported they had completed.
The study was approved by the Johns Hopkins School of Medicine Institutional Review Board.
Evaluation
After the coaching feedback session, residents independently completed a learning plan that asked them to identify at least 2 goals for their outpatient clinical practice over the ensuing weeks. Finally, residents completed a brief evaluation, with Likert scale response options, that addressed the acceptability and usefulness of coaching.
One month after the coaching intervention, residents received individualized e-mails reminding them of their 2 goals and asking whether they had implemented those changes in their practice.
Cumulative time to execute the coaching was approximately 55 hours. Checklist development and training took 1.5 hours each. The time for coaching sessions, including preparation, direct observation, and debriefing, ranged from 35 minutes to 1.5 hours (average, 1 hour). An administrative assistant scheduled coaching sessions and e-mailed follow-up surveys to residents (2 hours total).
Results
Forty-six of 48 residents (96%) were directly observed during the study period (54% [n = 25] were men, 28% [n = 13] were in the primary care residency track, 35% [n = 16] were interns, 33% [n = 15] were postgraduate year 2 [PGY-2], and 33% [n = 15] were PGY-3).
Behaviors: Observed and Self-Assessed
The most common items in which directly observed behaviors by coaches differed from resident self-assessment (blind spots) are shown in box 1. Among the 208 blind spots identified, “surprising good news” (n = 110, 53%) was slightly more common than “surprising bad news” (n = 98, 47%). More detailed results are shown in figure 2.



[Figure 2. Accuracy of Self-Assessment and Blind Spots]
The proportions of checklist item behaviors completed (as noted in direct observation) and accurately self-assessed by residents are shown in the table. The least commonly performed behaviors were acknowledging the role of the computer (34%, 15 of 44), collaborating with patients when using the EHR (48%, 21 of 44), assessing understanding via teach back (51%, 22 of 43), hand washing (54%, 25 of 46), and using appropriate physical examination techniques (64%, 28 of 44).
There were no differences by sex for behaviors (women, 87% [479 of 549 items completed]; men, 85% [562 of 662]) or for blind spots in self-assessment (women, 24% [133 of 549]; men, 23% [154 of 662]).
Self-assessed Change in Practice
The most frequently reported resident goals for improvement following the coaching session are shown in box 2. On the 1-month follow-up survey assessing self-reported progress toward goals, we collected 42 responses from 23 of the 48 residents (48%). Residents reported a change in practice toward their stated goal often in 45% (n = 19), sometimes in 45% (n = 19), rarely in 2% (n = 1), and never in 7% (n = 3) of responses. No resident reported always changing practice toward a stated goal.
Intervention Acceptability
Residents felt comfortable being coached (100% agreed or strongly agreed); all agreed that coaching added value beyond traditional precepting, and all indicated they would like to be coached in the future. Slightly fewer agreed that coaching identified blind spots in their practice (87%, 40 of 46), and 98% (45 of 46) reported that coaching helped them identify new behaviors to incorporate into clinical practice.
Discussion
Direct observation of internal medicine residents in the outpatient setting by experienced coaches using a behavioral checklist revealed differences between resident-reported and faculty-observed behaviors. These discrepancies prompted formative coaching by faculty and goal setting by residents, with some residents reporting continued focus on those goals 1 month later. The coaching sessions were acceptable to residents.
The most robust literature on clinical coaching exists in surgery, where incorporating direct observation and focused feedback on procedural skills was shown to improve technical performance.11 Previous studies examining direct observation in the outpatient setting have shown acceptability among faculty and changes in practice.12,13 A longitudinal coaching program in pediatrics was associated with greater integration of resident self-assessment into feedback sessions and improved resident confidence in faculty feedback,18 and other models of direct observation, such as videotaping, have shown promise in improving the quality of feedback.19
We found differences between resident self-evaluation and faculty-observed performance, consistent with previous literature showing the limited accuracy of physician self-assessment of clinical skills.20–23 Integration of the computer and/or EHR into the visit was a commonly omitted and inaccurately self-assessed behavior. This was not surprising given documented concerns about the effect of EHRs on patient-physician interactions and the lack of standards for resident training and evaluation in EHR use during patient visits.24,25
Resident-identified goals for improvement following coaching aligned closely with the least frequently performed behaviors and with the blind spots identified in resident self-assessments, suggesting that incorporating resident self-assessment into coaching sessions helped inform goal setting. This may be important, as goal setting and self-assessment of clinical performance are components of professional development.26,27
This study has several limitations. The behavioral checklist was developed from previously identified tenets of clinical excellence and was used only by faculty members involved in its development; interrater agreement may be lower when other faculty use it. Residents were not trained to use the checklist and may not have interpreted items the same way faculty did. We relied on self-reported changes in practice, which may be subject to social desirability bias.28 Finally, our prospective cohort study lacked a control group, and we did not assess goal setting in residents who did not receive the coaching intervention.
Additional study is needed to further elucidate the utility and broader applicability of direct observation with coaching. Next steps could include introducing serial resident-coaching observations by the same coach to evaluate the effect of longitudinal coaching on clinical performance and expanding checklist use to other faculty preceptors to examine interrater agreement and to determine the need for training in tool use to guide coaching interactions.
Conclusion
Our study demonstrated that an outpatient direct observation and coaching intervention illuminated important blind spots in residents' clinical self-assessment and was well received by residents.


Author Notes
Funding: This study was supported by the Johns Hopkins Osler Center for Clinical Excellence. Dr Wright is the Anne Gaines and G. Thomas Miller Professor of Medicine and is supported through the Johns Hopkins Center for Innovative Medicine.
Conflict of interest: The authors declare they have no competing interests.
Data from this work were presented as a poster at the Society of General Internal Medicine National Meeting, Washington, DC, April 19–22, 2017.



