Supervising Trainee Inter-Visit Care Using EHR Learning Analytics: Formative Training Tool or Threat to Well-Being?

Online Publication Date: 14 Feb 2025
Page Range: 12 – 15
DOI: 10.4300/JGME-D-24-00496.1

It’s Friday afternoon. A continuity clinic preceptor looks at their intern’s electronic health record (EHR) inbasket. They see 150 unread messages, with many unaddressed phone calls and results. They panic, thinking about what critical results may have been missed.

Meanwhile, the intern receives a page from their clinic preceptor about their inbasket. They panic, thinking about messages they were uncertain how to address. Inbasket management has been a huge burden this year. They don’t know where to start.

Does this scenario feel familiar? EHR workload contributes significantly to burnout among medical trainees.1-3 As educators, we must train residents to interact with the EHR proficiently and efficiently, as these skills are critical to their future success and well-being. In addition, competency in digital health within the EHR is included in the Accreditation Council for Graduate Medical Education Milestones.4 In 2021, 20% of internal medicine program directors self-reported awareness of adverse events or near misses resulting from unsupervised resident inbasket decisions.5 Fellowship programs have also struggled with oversight of trainee inbasket management.6 Despite this, little has been published on how to supervise inter-visit care (care provided between visits), and very few programs have a structured monitoring process.5 EHR user-level performance metrics (eg, Epic Signal or Cerner Lights On Network), which quantify time spent in inbasket activities, have been suggested as a way to identify struggling residents and offer personalized training.5,7 While these objective performance metrics offer the opportunity for enhanced supervision and actionable feedback for EHR skill improvement, we worry about how trainees will receive them. We are concerned that if these performance data are presented poorly or interpreted out of context, they may worsen resident anxiety, reduce internal motivation, and ultimately increase the risk of burnout.

How Do We Train Residents to Use the EHR?

EHR training after initial onboarding varies widely by institution, and there are few studies to guide EHR training within graduate medical education.8 Some programs have created more robust curricula that use regular mentorship to improve EHR proficiency.9 This approach is supported by prior research suggesting that EHR training should be longitudinal, one-on-one, and tailored to learner needs.2,5,9 We implemented a similar training model at our institution, yet faculty evaluations continue to identify struggling residents, who cite inbasket management as a major stressor. A survey of our residents revealed that 45 of 55 (82%) feel that managing their inbasket leads to burnout. Improved EHR training, particularly training aimed at improving efficiency, has been shown to reduce rates of burnout among learners.10,11 To mentor residents toward greater proficiency and efficiency with their EHR inbasket, we decided to incorporate individual performance data into continuity clinic feedback meetings.

Learning Analytics to Improve Feedback on Inter-Visit Care

Learning analytics is the process of using data to provide learners with individualized feedback.12,13 EHR analytics programs, such as Epic Signal or Cerner Lights On Network, access user-level data to inform physicians about their EHR utilization, including efficiency metrics (Figure 1). These programs have been used to guide one-on-one training that helps physicians improve the efficiency of their EHR processes.14 User data have also been used to identify burnout risk through metrics such as “pajama time” (time spent in the EHR outside of business hours).15 Trainee user data can be used similarly to help residency programs identify residents struggling with inter-visit care and offer personalized coaching. EHR user data offer great potential to improve trainee digital competency through individualized training and increased trainee supervision. However, because these data are derived from cursor movements and clicks, they should be interpreted as an imperfect view of resident behavior. We worry that these performance data could be misused for summative evaluation and provoke learner self-criticism, which may worsen, rather than reduce, rates of burnout among trainees.
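To make a metric like “pajama time” concrete, the following minimal Python sketch tallies after-hours EHR events per user. It assumes a hypothetical CSV export of audit-log events; Epic Signal and Cerner Lights On Network report such metrics through their own proprietary interfaces, so the file name, column names, and business-hours window here are illustrative only.

```python
import csv
from datetime import datetime, time

# Hypothetical export: one row per EHR action with a user ID and timestamp.
AUDIT_LOG = "ehr_audit_log.csv"  # assumed columns: user_id, timestamp (ISO 8601)

# Illustrative "business hours"; real analytics programs define their
# own (vendor-specific) windows for pajama time.
BUSINESS_START = time(7, 0)
BUSINESS_END = time(18, 0)

def is_pajama_time(ts: datetime) -> bool:
    """True if an EHR action occurred outside business hours or on a weekend."""
    after_hours = ts.time() < BUSINESS_START or ts.time() >= BUSINESS_END
    weekend = ts.weekday() >= 5  # Saturday or Sunday
    return after_hours or weekend

def pajama_event_counts(path: str) -> dict[str, int]:
    """Count after-hours EHR events per user from the hypothetical export."""
    counts: dict[str, int] = {}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            ts = datetime.fromisoformat(row["timestamp"])
            if is_pajama_time(ts):
                counts[row["user_id"]] = counts.get(row["user_id"], 0) + 1
    return counts

if __name__ == "__main__":
    for user, n in sorted(pajama_event_counts(AUDIT_LOG).items()):
        print(f"{user}: {n} after-hours EHR events")
```

Even in this simplified form, the sketch illustrates why such counts are an imperfect proxy: an event outside business hours says nothing about why the resident was in the chart at that time.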

Figure 1 Features of EHR Data Analytics Programs

Precautions and Lessons Learned on Using EHR Performance Data

To standardize how EHR user data are framed to trainees, institutions may wish to identify a faculty member as an EHR analytics “champion.” This gives the program a key individual who can access the data and receive advanced training on the analytics program, as data interpretation can be nuanced. Since the goal is to benefit resident well-being and learning, we recommend that faculty involve residents in implementing the process. We recruited a resident for the curricular redesign team and held informal focus groups, in which residents said the EHR analytics data should be provided one-on-one by a longitudinal mentor whom they trust. Next, we trained existing clinic mentors on interpreting EHR analytics data and communicating this feedback to residents as an opportunity for growth. We held a case-based training session for the mentors emphasizing key learning goals, and we created resident “phenotypes” to help faculty more easily interpret the data (Figure 2). The training emphasized that the data should be presented as formative information to identify areas for growth, with the ultimate goals of improving residents’ quality of life and enhancing patient safety.
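As one illustration of how such phenotypes might be operationalized, the sketch below maps a resident’s inbasket metrics to a label a mentor could use to open the conversation. The metric names, thresholds, and labels are invented for this example; the actual phenotypes a program uses (such as those in Figure 2) would be defined locally.

```python
from dataclasses import dataclass

@dataclass
class InbasketMetrics:
    """Hypothetical per-resident metrics a program might pull from its EHR
    analytics tool; field names and units are illustrative, not vendor fields."""
    unread_messages: int
    median_result_turnaround_days: float
    minutes_in_inbasket_per_day: float

def phenotype(m: InbasketMetrics) -> str:
    """Map metrics to an illustrative phenotype label for mentor discussion.
    Thresholds here are invented for this sketch and would need local tuning."""
    if m.unread_messages > 100:
        return "overwhelmed backlog"   # triage strategies come first
    if m.median_result_turnaround_days > 7:
        return "slow turnaround"       # may reflect deferred follow-up reminders
    if m.minutes_in_inbasket_per_day > 60:
        return "inefficient workflow"  # candidate for efficiency-tool coaching
    return "on track"

# Example: the intern from the opening vignette (fabricated values).
print(phenotype(InbasketMetrics(150, 9.0, 55.0)))  # -> "overwhelmed backlog"
```

A rule-based label like this is only a starting point for conversation; as discussed below, the underlying numbers still require interpretation in context.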

Figure 2 EHR Analytics Resident Phenotypes

The mentor-resident feedback sessions should be conversational, not unidirectional. The data should not be presented as disciplinary (“good” or “bad”) or comparative (“better” or “worse” than other residents). Both those delivering and those receiving the data should be reminded of its limitations (Figure 1). The data must also be interpreted in context. For instance, a resident with a slow result turnaround time may be keeping certain results in their inbasket for quality improvement projects or as follow-up reminders despite having already addressed the result. While delivering feedback, faculty should offer specific ways residents can improve (eg, use efficiency tools, reduce overdocumentation, use ancillary support effectively). The Box offers an example of how we use EHR user data to give residents feedback on their inbasket. Programs may offer residents opportunities to gain further insight into their personal data through the faculty champion or peer mentoring, and they could create optional asynchronous modules to introduce efficiency tools. Spaced delivery of feedback helps residents understand their process improvement over time and encourages a growth mindset. After implementation, programs should periodically assess how residents perceive the use of their data; we plan to survey residents on self-perception of inbasket management and burnout. Clinical competency committees should not use raw EHR analytics data, given its inaccuracies and the contextual understanding it requires. However, programs may uncover professionalism or serious patient safety issues, which could be escalated to program leadership.
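As a sketch of how spaced feedback could surface a trend, the snippet below compares a resident’s most recent metric value with the average of prior sessions, so the mentor can frame the conversation around growth rather than a single snapshot. The data structure and the numbers are fabricated for illustration.

```python
from statistics import mean

def summarize_trend(sessions: list[dict[str, float]], metric: str) -> str:
    """Compare the latest value of a metric against the average of prior
    sessions; for time-based metrics, lower values suggest improvement."""
    values = [s[metric] for s in sessions]
    if len(values) < 2:
        return f"{metric}: baseline {values[0]:.1f} (no trend yet)"
    prior, latest = mean(values[:-1]), values[-1]
    direction = "improving" if latest < prior else "worth discussing"
    return f"{metric}: {latest:.1f} vs prior avg {prior:.1f} ({direction})"

# Example: three feedback sessions for one resident (fabricated values).
history = [
    {"minutes_in_inbasket_per_day": 75.0},
    {"minutes_in_inbasket_per_day": 62.0},
    {"minutes_in_inbasket_per_day": 48.0},
]
print(summarize_trend(history, "minutes_in_inbasket_per_day"))
```

Presenting a within-resident trend, rather than a comparison against peers, keeps the emphasis formative, consistent with the noncomparative framing recommended above.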

Conclusion

EHR user data present a great opportunity for residency programs to enhance individual trainee education and experience. While one of our goals is to improve resident well-being, this feedback process may secondarily improve inbasket supervision and patient safety. However, it is critical that we avoid introducing these data as punitive, as doing so may worsen resident well-being, as has been seen with other competency-based resident assessments.16,17 Using performance data for assessment may increase learner anxiety through fear of digital surveillance and curtail internal motivation. To avoid these pitfalls, programs with similar initiatives should frame them as formative feedback rather than summative evaluation, involve residents in the implementation process, and reevaluate how individual data are being used. To support the well-being of our residents and prepare them to safely manage patient care in an EHR-centric health care system, residency programs must establish data-driven, individualized, and formative feedback sessions on inbasket management with supportive, longitudinal mentors.

Author Notes

Corresponding author: Zachary Boggs, MD, University of Virginia School of Medicine, Charlottesville, Virginia, USA, zb3ca@uvahealth.org, X @ZachBoggsMD