Resident Ratings of Communication Skills Using the Kalamazoo Adapted Checklist
ABSTRACT
Background
The Kalamazoo Essential Elements Communication Checklist–Adapted (KEECC-A) is a well-regarded instrument for evaluating communication and interpersonal skills. To date, little research has assessed the accuracy of residents' self-ratings of their communication skills.
Objective
To assess whether residents can accurately self-rate their communication skills using the KEECC-A during an objective structured clinical examination (OSCE).
Methods
A group of 104 residents from 8 specialties completed a multistation OSCE as part of an institutional communication skills curriculum conducted at a single institution. Standardized patients (SPs) and observers were trained in rating communication skills using the KEECC-A. Standardized patient ratings and resident self-ratings were completed immediately following each OSCE encounter, and trained observers rated archived videotapes of the encounters.
Results
Resident self-ratings and SP ratings using the KEECC-A were significantly correlated (r(104) = 0.238, P = .02), as were resident self-ratings and observer ratings (r(104) = 0.284, P = .004). The correlation between SP and observer ratings (r(104) = 0.378, P = .001) was larger in magnitude but not significantly different (P > .05) from the resident/SP and resident/observer correlations.
Conclusions
The results suggest that residents, with a modicum of training using the KEECC-A, can accurately rate their own communication and interpersonal skills during an OSCE. Using trained observers to rate resident communication skills provides a unique opportunity for evaluating SP and resident self-ratings. Our findings also lend further support for the reliability and validity of the KEECC-A.
Introduction
The objective structured clinical examination (OSCE) is widely used for assessing communication and interpersonal skills in undergraduate and graduate medical education.1,2 OSCEs are used to provide feedback to residents and medical students following standardized patient (SP) encounters.3,4 Resident self-ratings with the Kalamazoo Essential Elements Communication Checklist–Adapted (KEECC-A)5 are used at Wayne State University School of Medicine, Detroit, Michigan, as part of the OSCE to promote resident self-reflection.
Self-ratings are important because they require self-reflection and self-monitoring, which are essential for lifelong learning and improvement. Yet few studies have included resident self-ratings,6,7 and only 1 has incorporated self-ratings using the KEECC-A. Joyce et al5 compared faculty, SP, and resident self-ratings, and reported that only the correlation between faculty and SP ratings was statistically significant (r = 0.31, P < .001); resident self-ratings were not significantly correlated with faculty ratings (r = 0.09, P > .05) or SP ratings (r = 0.12, P > .05). Notably, no KEECC-A training was offered to residents prior to that OSCE.
The goal of this study was to determine whether residents given a modicum of training with the KEECC-A could accurately rate their own communication skills during an OSCE. We compared the scores from the self-ratings with ratings by trained objective raters (ie, individuals not involved in the clinical encounter itself). Given the extensive KEECC-A training the SPs received, we expected a strong correlation between the SPs' scores and those of the trained observers, and lower correlations between resident self-ratings and the ratings provided by the observers and the SPs.
Methods
Participants
A total of 104 residents from 8 specialties (dermatology, family medicine, internal medicine, neurology, orthopedic surgery, physical medicine and rehabilitation, otolaryngology, and transitional year) participated in an institutional OSCE in 2012. Sixty-one participants (59%) were men, 35 (34%) were international medical graduates, 47 (45%) were postgraduate year (PGY)–1, 36 (35%) were PGY-2, 14 (13%) were PGY-3, 5 (5%) were PGY-4, and 2 (2%) were PGY-5.
Measures
The KEECC-A2 is a 7-item rating scale of physician communication skills developed through expert consensus. The items are (1) builds relationships, (2) opens the discussion, (3) gathers information, (4) understands the patient's perspective, (5) shares information, (6) reaches agreement, and (7) provides closure. Items are rated on a 5-point Likert scale (1, poor, to 5, excellent), so each encounter yields a score of 7 to 35. Total scores for the 3 SP encounters were summed to provide an overall score (range, 21–105) for each resident. The KEECC-A was completed by the SPs and residents, and later by the observers. All OSCE encounters were double coded. A minimal sketch of this scoring scheme, with hypothetical ratings, is shown below.
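```python
# Minimal sketch of the KEECC-A scoring described above (hypothetical ratings,
# not study data). Each encounter: 7 items rated 1-5; the overall score is the
# sum across the 3 SP encounters.
encounters = [
    [4, 5, 3, 4, 4, 5, 4],  # encounter 1: one rating per KEECC-A item
    [5, 4, 4, 4, 5, 4, 5],  # encounter 2
    [3, 4, 4, 5, 4, 4, 4],  # encounter 3
]
encounter_totals = [sum(items) for items in encounters]  # each in 7..35
overall = sum(encounter_totals)                          # in 21..105
print(encounter_totals, overall)  # [29, 31, 28] 88
```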
Procedures
Standardized Patients
Eight SPs received training for this OSCE in role portrayal and in rating resident performance using the KEECC-A. The SPs had 5 years of experience with the program and yearly training using the KEECC-A. For this OSCE, SPs received additional training (three 2.5-hour modules) to familiarize them with their case portrayals. Standardized patients watched 3 videos of 1 of the 4 OSCE cases and rated 2 different residents' communication skills with the KEECC-A. Standardized patients were required to reach 85% agreement with expert ratings developed for each case. During the OSCE, SPs were given 3 minutes to score each resident's communication skills immediately following the encounter.
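The exact agreement rule is not specified beyond the 85% threshold; the sketch below is one plausible reading, assuming item-level exact agreement between SP and expert KEECC-A ratings.

```python
# Hypothetical sketch of the 85% agreement criterion (assumed rule: proportion
# of KEECC-A items rated identically by the SP and the expert standard).
def percent_agreement(sp_ratings, expert_ratings):
    """Proportion of KEECC-A items (1-5 scale) rated identically."""
    assert len(sp_ratings) == len(expert_ratings)
    matches = sum(s == e for s, e in zip(sp_ratings, expert_ratings))
    return matches / len(sp_ratings)

# Example: agreement across the 7 KEECC-A items for one videotaped encounter.
sp = [4, 5, 3, 4, 4, 5, 4]
expert = [4, 5, 3, 4, 5, 5, 4]
print(f"{percent_agreement(sp, expert):.0%}")  # 86% -> meets the 85% criterion
```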
Residents
The residents were familiarized with the KEECC-A prior to the OSCE during a general orientation. In addition, each department provided a didactic session about the OSCE and the KEECC-A self-ratings. This 30-minute session occurred approximately 1 month before the OSCE, and a copy of the KEECC-A was sent to the residents via a reminder e-mail to allow them to review it prior to the OSCE. During the OSCE, residents were given 3 minutes after each patient encounter to reflect on and rate their performance using the KEECC-A.
Observers
Raters were doctoral candidates in clinical psychology, trained through background reading and KEECC-A ratings of live patient encounters. The observers then rated videotaped encounters from family medicine resident OSCEs conducted in 2010–2011. Once raters reached an acceptable level of agreement (intraclass correlation coefficient ≥ 0.70), they coded the OSCEs used in this project.
The study was designated as exempt by the Wayne State University Institutional Review Board.
Analysis
Correlations among the SP, observer, and resident OSCE total scores were calculated using the Spearman rho. KEECC-A total scores were used because of the unidimensionality of the scale items.2 Correlations were compared using standard effect size estimates (small, 0.10; medium, 0.30; and large, 0.50)8 and using Fisher r-to-z comparisons.
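As a minimal illustration of this analysis (not the authors' code), the sketch below computes Spearman correlations among hypothetical rater scores and compares two of them with a Fisher r-to-z test. Note that this simple form of the test treats the two correlations as independent, whereas the correlations in this study share a common sample.

```python
# Illustrative sketch: Spearman correlations among three rater types and a
# Fisher r-to-z comparison of two correlations. All data are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 104  # number of residents

# Hypothetical KEECC-A total scores (summed across 3 encounters) per rater type.
resident = rng.integers(21, 106, n)
sp = rng.integers(21, 106, n)
observer = rng.integers(21, 106, n)

rho_res_sp, p_res_sp = stats.spearmanr(resident, sp)
rho_res_obs, p_res_obs = stats.spearmanr(resident, observer)
rho_sp_obs, p_sp_obs = stats.spearmanr(sp, observer)

def fisher_r_to_z_test(r1, r2, n1, n2):
    """Compare two correlations via Fisher's r-to-z transformation
    (assumes the correlations come from independent samples)."""
    z1, z2 = np.arctanh(r1), np.arctanh(r2)
    se = np.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
    z = (z1 - z2) / se
    p_one_tailed = stats.norm.sf(abs(z))
    return z, p_one_tailed

z, p = fisher_r_to_z_test(rho_sp_obs, rho_res_sp, n, n)
print(f"SP/observer vs resident/SP: z = {z:.2f}, one-tailed P = {p:.3f}")
```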
Results
Intercorrelations of the KEECC-A total scores between the 3 rater types are reported in the table. The internal consistency (coefficient α) of the KEECC-A for observers was 0.93, and interrater agreement (intraclass correlation coefficient) with the Spearman-Brown correction for double coding ranged from 0.63 to 0.82, with good overall reliability of 0.74.9
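For reference, the Spearman-Brown correction applied to double-coded encounters takes the standard prophecy form (the single-rater value below is back-calculated for illustration, not reported in the study):

$$\rho_k = \frac{k\,\rho_1}{1 + (k-1)\,\rho_1}, \qquad \text{so for } k = 2: \quad \rho_2 = \frac{2\rho_1}{1 + \rho_1}.$$

For example, a single-rater intraclass correlation of about 0.59 corresponds to a double-coded reliability of 2(0.59)/1.59 ≈ 0.74, matching the overall value reported above.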
The magnitude of the relationship between observers and SPs exceeded a medium effect size, while the correlations between resident self-ratings and the 2 other rater groups fell in the small to small-to-medium range. This pattern suggests that the relationship between the 2 more objective raters was stronger than either correlation involving resident self-ratings. However, Fisher (1-tailed) r-to-z comparisons failed to show statistically significant differences between any of the correlations: observer/SP versus resident/SP (z = 1.19, P = .12) and observer/SP versus resident/observer (z = 0.88, P = .19). Resident self-ratings correlated significantly with observer (P = .004) and SP (P = .02) ratings, suggesting that residents can accurately rate their own communication skills.
Discussion
With a modest amount of training, residents were able to provide ratings of their communication skills that were consistent with those of SPs and trained observers. This is the first project to use independent observers as KEECC-A raters in an OSCE. Observers provided a unique opportunity to evaluate the ratings made by residents and SPs, as observers are removed from the encounter and thus less prone to bias. In contrast to the faculty raters in the study by Joyce et al,5 the observers in this study received extensive training and achieved a good level of interrater reliability. Nonfaculty observers also may have more time available to learn the coding system and participate in the process, as they may have lower clinical and educational demands than physician faculty.
Although the correlations between SP and observer/faculty ratings exceeded a medium effect size in both the current study and the study by Joyce et al,5 the 2 studies differed in how resident self-ratings related to the ratings of observers and SPs. In the current study, both correlations were statistically significant and of small-to-medium effect size, whereas both correlations in the study by Joyce et al5 were nonsignificant and of small effect size. These results indicate that the self-ratings in the current study were more robust than those in previous research, in which residents received no prior training.
This study also adds to the literature on the assessment of communication skills by using observer ratings as a criterion variable. Although the observer ratings should not be considered the “gold standard” for ratings of interpersonal and communication skills, they provide a more objective rating than ratings by SPs or clinical faculty raters. Thus, the findings also lend support for the reliability and validity of the KEECC-A rating scale for use in applied OSCE settings.
The study has several limitations. It was conducted at a single institution, reducing the generalizability of the findings. Another limitation is the lack of a control group. Future research should examine how varying levels of training affect the accuracy of resident self-ratings of interpersonal communication.
Conclusion
We demonstrated that residents with a modicum of training using the KEECC-A can accurately rate their own communication and interpersonal skills during an OSCE.
Author Notes
Funding: The authors report no external funding source for this study.
Conflict of interest: The authors declare they have no competing interests.