Dermatology Curriculum for Internal Medicine Residents: A Randomized Trial
Abstract
Background
Physicians in specialties other than dermatology care for the majority of patients with skin diseases, yet most physicians receive little training in dermatology.
Objectives
The primary objective of this study was to determine whether internal medicine residents would show a sizable (20%) improvement in posttraining scores after completing 1 of 3 assigned curricula. A secondary objective was to determine whether improvement differed significantly among the 3 resident cohorts after completion of their curricula. Finally, we explored the residents' change in perceived clinical knowledge postcurriculum.
Methods
Thirty-six postgraduate year 2 internal medicine residents were randomized to complete 1 of 3 one-month dermatology curricula (didactic, clinical, or combined). The main outcome measure was performance on different sets of Medical Knowledge Self-Assessment Program (MKSAP)-15 questions at study entry and completion. A secondary outcome was self-rated performance in 3 clinical domains.
Results
All participants completed the study. All curricula led to an improvement in MKSAP-15 scores, but only residents who completed the didactic curriculum demonstrated a 20% improvement in posttraining scores. More residents completing the clinical and didactic curricula rated their clinical performance as improved than residents who completed the combined curriculum.
Conclusions
While all 3 curricula led to improvement, as measured by a standardized assessment, a didactic curriculum in dermatology resulted in the largest improvement in knowledge as measured by a multiple-choice test.
Editor's Note: The online version of this article contains tables of descriptive statistics for the analyses performed in this study.
Introduction
Physicians other than dermatologists manage two-thirds of skin disease–related visits, and many of these physicians are internists,1 yet nearly two-thirds of internists do not feel competent in diagnosing and treating common skin diseases,2 and there is a substantial need to improve their education.3 Unfortunately, dermatology education is a substantial weakness for many internal medicine residencies, and the Accreditation Council for Graduate Medical Education has cited programs for their lack of dermatology training.4 To our knowledge, no validated assessments exist to show which of the traditional curricula, namely didactic and clinical, is more effective.
Methods
We conducted a randomized controlled trial between July 2010 and June 2011 at the University of Texas Southwestern Medical Center at Dallas (UT Southwestern). The study incorporated a 1-month block of dermatology education that used 3 different curricula: a didactic, a clinical, and a didactic plus clinical curriculum combined. Residents completed the 1-month block at different times throughout the year.
The didactic curriculum was composed of weekly 2-hour presentations over 4 weeks (8 hours total). The presentations covered emergency dermatology, skin cancer, leg ulcers, infections, infestations, and inflammatory dermatoses. The topics had been identified from a needs assessment based on the format proposed by Kern et al5 that is integral to medical curriculum development. Our needs assessment used the core competencies outlined by Hansra et al,2 and a focus group that included internal medicine chief residents, internal medicine residents, and dermatology faculty. This process sought to ensure that our 13 competencies focused on topics that internists, dermatologists, residents, and faculty deemed important to the education of internal medicine residents, rather than on the isolated needs of internists or dermatologists.6,7
The clinical curriculum consisted of 9 days of outpatient dermatology clinic at a county hospital and 5 days on dermatology consultation service at the university, county, and pediatric teaching hospitals affiliated with UT Southwestern. Typical diagnoses encountered in the outpatient clinics included psoriasis, acne, blistering disease, skin cancer, sarcoidosis, lichen planus, eczema, contact dermatitis, mycosis fungoides, and warts. The most common consultations on the inpatient service were simple drug rashes, Stevens-Johnson syndrome, toxic epidermal necrolysis, disseminated bacterial and deep fungal infections, and vasculitis. Residents had an observer role in the clinic and consultation service. Diagnoses seen were consistent with topics covered by the Medical Knowledge Self-Assessment Program (MKSAP) test questions. The dermatology clinic had a consistent schedule and similar exposure to common dermatologic diagnoses, and the consult service can be assumed to have provided a similar experience for all residents. The combined curriculum involved completion of both the full didactic and clinical curricula. The time requirements for the clinical and combined curricula were 52 hours and 60 hours, respectively.
Participants were selected from the 2010–2011 class of internal medicine postgraduate year (PGY)–2 residents at UT Southwestern. Thirty-six residents were randomized to the 3 curricula.
The main outcome measure was performance differences on 2 different sets of questions from the MKSAP-15, a set of standardized multiple-choice questions developed by the American College of Physicians for internists. The MKSAP-15 is commonly used as preparation for certification examinations, and a recently developed section focusing on dermatology was used for the study. It is the only available validated measure of performance for dermatology knowledge in internists.4
To assess preintervention and postintervention dermatology knowledge, the participants completed sets of 20 questions from the MKSAP-15. The preintervention and postintervention question sets were similar with regard to the number of questions on diagnosis (13 versus 12, respectively), questions on treatment and/or management (7 versus 8, respectively), and questions containing images (9 versus 10, respectively). Preintervention testing was performed on the first day of the curriculum prior to any clinical or didactic work. Postintervention testing was planned for day 28 of the curriculum, and this was the case for 33 of the residents; 3 residents completed the test 3 days after the dermatology curriculum ended. The evaluator of both assessments was blinded to the purposes of the study, participants' identity, and intervention assignment. The 2 question sets contained no overlapping items, to prevent question recognition, and participants were randomized to the set administered at study entry and at completion.
The primary hypothesis was that there would be significant (20%) improvement in the mean posttraining examination scores after completion of each of the 3 curricula. The secondary hypothesis was that there would be a significantly greater increase in the mean posttraining examination scores for the cohort completing the combined curriculum than in the cohorts completing the didactic or clinical curriculum alone. Based on the desired effect size of 20%, a power calculation determined that a minimum of 10 participants per arm would be needed to obtain 80% power.
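The sample-size reasoning above can be sketched with a standard normal-approximation formula for a paired design. The standardized effect size used below (Cohen's d ≈ 1.0) is an assumption for illustration, not a value reported in the study:

```python
# Hypothetical sketch of a power calculation for a paired pre/post design.
# The study specifies 80% power and a 20% score improvement; the mapping of
# that improvement to a standardized effect size (d ~ 1.0) is assumed here.
import math
from statistics import NormalDist

def paired_sample_size(d: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Normal-approximation sample size per arm for a two-sided paired test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided critical value
    z_beta = NormalDist().inv_cdf(power)           # quantile for desired power
    return math.ceil(((z_alpha + z_beta) / d) ** 2)

print(paired_sample_size(1.0))  # -> 8 under these assumptions
```

With a small-sample t-distribution correction, such an estimate typically rises by 1 to 2 participants, consistent with the minimum of 10 per arm reported.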
Surveys were administered preintervention and postintervention at the same time as the MKSAP-15 to measure residents' self-perceived clinical abilities. Residents were asked to rate their ability as “below average, average, or above average” compared to the “average internal medicine resident” regarding their knowledge in 2 domains: common dermatoses and appropriate topical regimens, and ability to form a differential diagnosis for a dermatological condition. A third question asked residents to state whether they “could” or “could not” describe primary and secondary lesions. No explicit definition of the “average internal medicine resident” was provided.
The UT Southwestern Institutional Review Board deemed this study exempt.
We used the Jonckheere-Terpstra test to look for concordance in trends of the MKSAP test scores over the categories of self-assessment answers. Self-rated responses were evaluated preintervention and postintervention. Analyses were performed for all 36 residents combined and separately for the 12 residents in each of the 3 intervention groups. A P value ≤ .05 was deemed significant.
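The Jonckheere-Terpstra test assesses whether values tend to increase across ordered groups, which matches its use here (test scores ordered by self-assessment category). A minimal sketch, using the normal approximation without tie correction and with illustrative data rather than the study's:

```python
# Sketch of the Jonckheere-Terpstra trend test (normal approximation,
# no tie correction). Groups must be supplied in the hypothesized order,
# e.g. self-assessment change from "worsened" to "improved".
import math
from itertools import combinations
from statistics import NormalDist

def jonckheere_terpstra(groups):
    """Return (J statistic, z score, one-sided p) for ordered groups."""
    # J counts pairs (x, y) with x from an earlier group and y from a
    # later group where x < y; ties contribute 0.5 each.
    j = sum((x < y) + 0.5 * (x == y)
            for g1, g2 in combinations(groups, 2)
            for x in g1 for y in g2)
    sizes = [len(g) for g in groups]
    n = sum(sizes)
    mean = (n * n - sum(s * s for s in sizes)) / 4
    var = (n * n * (2 * n + 3) - sum(s * s * (2 * s + 3) for s in sizes)) / 72
    z = (j - mean) / math.sqrt(var)
    p = 1 - NormalDist().cdf(z)  # one-sided: increasing trend
    return j, z, p

# Illustrative data with a clearly increasing trend across 3 ordered groups:
j, z, p = jonckheere_terpstra([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
```

For these illustrative groups, every cross-group pair is increasing, giving J = 27 and z = 3.0 with a one-sided p of about .001.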
Results
All 36 randomized residents remained in their assigned arm of the study and completed the 1-month curriculum, all examinations, and all surveys.
The mean postcurriculum MKSAP-15 scores increased for residents taught with all 3 curricula (figure 1). However, only the didactic curriculum achieved the desired increase of 20% (P = .007) between precurriculum and postcurriculum scores; thus, only the didactic curriculum achieved the hypothesized primary endpoint, and no further testing was needed.



Citation: Journal of Graduate Medical Education 6, 2; 10.4300/JGME-D-13-00272.1
In their self-rating of competence, most residents reported increased ability, rating themselves as "average" or "above average" for each clinical activity compared to their preintervention survey (figure 2). The clinical and didactic groups had a comparable proportion of residents reporting increased ability (from "below average" to "average") for all 3 questions, which was higher than the proportion for residents completing the combined curriculum. The preintervention scores and the change in preintervention to postintervention self-assessment scores for all residents showed a significant concordance with multiple-choice scores for 2 of 3 self-rated survey questions (figure 3). Analyzed by intervention group, the didactic and combined curriculum groups had a significant concordance for 1 of 3 self-rated survey questions (figure 3; table).
Discussion
Contrary to our hypothesis, the combined curriculum did not show improvement over the individual curricula on either outcome (MKSAP or survey). Our study showed that an isolated didactic curriculum is sufficient for knowledge acquisition as measured by a multiple-choice test, such as in-training and certifying examinations. We recognize that other curricula may lead to greater improvement in clinical performance and that the choice of which dermatology curriculum to use should rest on the goals of each internal medicine program. However, given that the time allotted for teaching has decreased by 25%8 and the constraints on resident work hours have increased, it is important that a residency curriculum be efficient for both teacher and learner. Because a didactic curriculum requires a smaller time commitment than a clinical one, we believe that such a curriculum is an efficient resource for internists in training and should not be overlooked. However, it must be recognized that this type of curriculum is most likely effective at improving examination scores, and its usefulness for clinical skill acquisition is uncertain.
Our results also show a statistically significant concordance in the trends of multiple-choice scores and categories of self-assessment answers. This helps to validate the use of multiple-choice scores as the primary outcome measure in the study and shows that multiple-choice scores appear to concord with internal medicine residents' self-reported confidence in their clinical abilities. The use of self-ratings enabled us to measure components of clinical ability (comprehension, analysis, and application), as outlined by Bloom's taxonomy, with directed questions in these realms.9 The didactic group demonstrated not only improvement in factual knowledge but also increased confidence in clinical knowledge, comparable to that of residents completing a clinical rotation. For future studies, we believe that self-rated data could be made more valid by concentrating the questions on topics in dermatology deemed most important to internal medicine residents.
Limitations of this study included its small sample size, short follow-up period, and testing by means of a single outcome, namely the standardized multiple-choice test. The sample size limited the power of the results, although the didactic group still improved by the predetermined amount.
Finally, clinical skill acquisition is measured best by a standardized clinical examination rather than by a multiple-choice test or a self-rated survey; our outcome measure is therefore poorly suited to assessing clinical skills and is most appropriate for demonstrating factual knowledge.
Conclusion
It is important that internists receive adequate training in the management of skin disease. We tested 3 commonly used models of education to assess which would produce the greatest improvement in dermatology knowledge in internal medicine residents. All groups showed improvement in postintervention knowledge, yet the didactic group was the only group that achieved the desired improvement of 20%.

Mean Precurriculum and Postcurriculum Medical Knowledge Self-Assessment Program (MKSAP)-15 Scores by Type of Curriculum
Percentage changes between precurriculum and postcurriculum scores were 13% (clinical), 20% (didactic), and 12% (clinical plus didactic). P = .007; 95% confidence interval, 5.8 to 33.4 (Wilcoxon signed rank test).

Residents' Change in Perceived Knowledge
Change in perceived knowledge is shown as a decrease in "below average" responses or an increase in "average" responses from preintervention to postintervention.

Results of the Post-Preintervention Change in Multiple-Choice Question Test Scores (y-axis) Compared to Change in Self-Assessment (SA) Responses (x-axis) for the 3 Questions
For the SA responses, a score of −1 indicates the SA rating worsened, whereas a score of +1 indicates that it improved.
Author Notes
Rachael Cayce, MD, is Assistant Professor, University of Texas Southwestern Medical Center at Dallas; Paul Bergstresser, MD, is Professor, University of Texas Southwestern Medical Center at Dallas; Kathleen Hesterman, MD, is an Intern, Tulane University; and Daniel Condie, BS, is a Medical Student, University of Texas Southwestern Medical Center at Dallas.
Funding: The authors report no external funding source for this study.
Conflict of interest: The authors declare they have no competing interests.
The authors would like to thank Arturo Dominguez, MD, Assistant Professor in Dermatology; Carol Croft, MD, Professor in Internal Medicine; and Beverley Huet, MS, Assistant Professor in Clinical Sciences, at the University of Texas Southwestern Medical Center for helping to execute this project.



