Filling the Void: Defining Invasive Bedside Procedural Competency for Internal Medicine Residents
Abstract
Background
Residents perform invasive bedside procedures in most training programs. To date, there is no universal approach for determining competency and ensuring quality and safety of care.
Objective
We developed and implemented an assessment of central venous catheter insertion competency for internal medicine and internal medicine–pediatrics residents, using measurements for knowledge, skill, and attitude and linking them to procedural outcomes.
Methods
We conducted a cohort study of a 4-week, resident-run procedure service from July 2007 through June 2011 at a large academic medical center. Knowledge was assessed by using a written test, technical skill by using a checklist, and attitude by self- and supervisor assessments of residents' confidence and capability. Competence was defined as (1) a minimum written test score (70%); (2) a perfect checklist score; (3) a resident's self-assessed confidence and capability scores of 4 or 5 of 5; and (4) faculty rating of the resident's confidence and capability as 5 of 5. A composite success rate was based on procedural outcomes (ie, completed procedure, fewer than 3 forward needle passes, and no immediate complications) and was compared with checklist scores.
Results
A total of 148 internal medicine and medicine–pediatrics residents inserted 639 catheters, and 53 (36%) achieved competence by the end of 4 weeks. Residents judged to be competent by checklist scores had a higher composite success rate than those deemed not competent.
Conclusions
Our multi-factorial criteria for defining central venous catheter insertion competency effectively discriminated, using data from procedural outcomes, between residents judged competent and those judged not competent.
Editor's Note: The online version of this article contains the performance checklist used in the study.
Introduction
In an effort to improve patient safety, residency programs are seeking new methods of teaching invasive bedside procedures.1–3 Studies confirm that simulation-based training,4,5 use of ultrasound,6,7 adherence to a checklist,8 team training,9 and direct observation10 decrease complication rates, thereby improving safe practices.
In 2007, the American Board of Internal Medicine (ABIM) ceased to mandate technical competency for some procedures, recommending instead active participation in a predetermined number of procedures after simulation-based learning.11 Although internists perform procedures relatively infrequently after training,12 some academic programs continue to rely on residents to perform them as a component of their training. In the absence of endorsed criteria for measuring competency, some training programs devised their own systems. Smith et al3 created a medical procedure service and evaluated patient outcomes but did not specifically define a competency threshold. Dong et al2 and Huang et al8 provided validity evidence for a central line checklist by using a task trainer but did not link the learner's skill to patient outcomes. Studies suggest that to provide a comprehensive assessment of trainees' abilities, a range of tools should be used.13
The purpose of our study was to build on the work of others in developing and evaluating new criteria for procedural competency that included knowledge, skill, and attitude coupled with the assessment of procedural outcome measures.
Methods
Setting
In 2007, we implemented a simulation-based, blended curricular approach to invasive bedside procedural education at Jackson Memorial Hospital (JMH), a 1500-bed, tertiary care, urban academic medical center affiliated primarily with the University of Miami Miller School of Medicine.
Participants
Our study included 171 internal medicine and internal medicine–pediatrics residents at JMH from July 2007 through June 2011. Complete data were collected for 148 residents (86.5%). Participants provided written informed consent prior to the collection of baseline data. A total of 117 residents (79%) were in postgraduate year (PGY)-2, 30 residents (20%) were in PGY-3, and 1 (< 1%) was in PGY-4. Annually, the internal medicine training program enrolls 40 categorical residents, whereas the internal medicine–pediatrics program enrolls 4 residents.
Intervention
Our cohort study of an educational intervention was limited to central venous catheter (CVC) insertion. Participants underwent a single 4-hour training session, including instruction on the use of real-time ultrasound for vascular access, and were then assigned to a 4-week rotation on the procedure team, a resident-run service that, on consultation, performs procedures throughout the hospital; details have been reported previously (box).14 All procedures were directly supervised by the attending physician, who documented each CVC placement by using a skills checklist (provided as online supplemental material). The content of the skills checklist was created by local experts but included information similar to that in previously published checklists,2,8 as well as national infection control standards and the Institute for Healthcare Improvement Central Line Bundle for prevention of central line-associated bloodstream infections.15 In contrast to other studies,2,8 the checklist was applied consistently both in the simulation laboratory and at the patient's bedside. Residents who did not score 100% on the checklist were offered remediation in the simulation laboratory and were not permitted to perform the procedure without supervision until their score improved.
Variables for Analysis
Participant baseline characteristics included sex, country of medical school matriculation, postgraduate year of training, and prior procedural experience (number of procedures before training). The anatomic site and accompanying checklist scores were also recorded for each CVC insertion. Finally, a composite success rate was determined.
Baseline knowledge was assessed using a written pretest consisting of 10 questions that addressed topics such as indications, contraindications, and relevant anatomy. The test was based on information from the New England Journal of Medicine article on central venous catheterization17 and was reviewed for content validity by a multidisciplinary group of local experts. Each participant completed an identical written post-test immediately after the instructional session. Each resident's baseline technical skills were evaluated on a CVC hands-on training model (item BPH600f, Blue Phantom, Kirkland, WA) by an attending physician using the checklist. Each item was weighted equally, and the attending physician rated the resident's performance by level of completion (0 = completely missed; 1 = incompletely performed or recalled out of order; 2 = completely performed). The checklist was also used each time the resident performed the procedure on a patient (postchecklist).
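For readers who want to see the arithmetic, the sketch below scores a checklist under these rules. It is a minimal illustration only: the 0/1/2 rating scale and equal item weighting come from the text above, while the function name and the 5-item example are hypothetical.

```python
# Illustrative sketch of the checklist scoring described above.
# The 0/1/2 rating scale and equal weighting come from the text;
# the function name and example items are hypothetical.

RATING_SCALE = {
    0: "completely missed",
    1: "incompletely performed or recalled out of order",
    2: "completely performed",
}

def checklist_percentage(item_ratings: list[int]) -> float:
    """Return the checklist score as a percentage of the maximum.

    Each item is weighted equally and rated 0, 1, or 2, so the
    maximum possible score is 2 * (number of items).
    """
    if any(r not in RATING_SCALE for r in item_ratings):
        raise ValueError("each item must be rated 0, 1, or 2")
    return 100.0 * sum(item_ratings) / (2 * len(item_ratings))

# Example: on a hypothetical 5-item checklist, one step recalled out of
# order yields 90%, so the resident would not meet the perfect-score criterion.
print(checklist_percentage([2, 2, 2, 1, 2]))  # 90.0
```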
Supervising faculty were instructors in the simulation-based component. They graded checklists until their scoring mirrored that of the principal author (J.D.L.). If concordance was not achieved in the simulation center, the principal author proctored the supervisor at the bedside, evaluating each procedure individually until the scores agreed. Scoring was also checked routinely in real time and recalibrated as necessary to ensure consistent grading. After each procedure, the resident and attending physician assessed the resident's level of confidence and capability on a scale of 1 to 5 (1 = none and 5 = complete).
The project was approved by the Institutional Review Board.
Outcome Measures
The primary outcome measure was the determination of competence according to proposed criteria, consisting of (1) attainment of the minimum passing score (MPS) of 70% on the written test; (2) achievement of a perfect checklist score; (3) resident self-assessment of confidence and capability scores of 4 or 5 of 5; and (4) faculty rating of the resident's confidence and capability as 5 of 5. The secondary outcome measure was the composite success rate, defined as completion of the procedure with few attempts (ie, fewer than 3 forward needle passes) and without immediate complications (ie, periprocedural and/or postprocedural within 24 hours; complications included arterial puncture, hemorrhage, pneumothorax, and tip malposition on chest x-ray [other than superior vena cava–right atrial junction]). Any single procedure that did not meet all of these criteria was deemed not successful. The tertiary aim was to categorize and evaluate the remainder of the residents as follows (an illustrative sketch of these categories appears after the list):
- Borderline competent—residents who attained the MPS and achieved a perfect checklist score, coupled with confidence and capability ratings of 4 or 5 of 5 from the resident but 4 of 5 from faculty
- Not competent—residents who did not reach the MPS, did not achieve a perfect checklist score, or received at least 1 rating of 3 or less on the confidence and capability scales
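To make the decision rule unambiguous, the sketch below encodes the competency categories and the composite success criterion exactly as defined above. This is an illustrative rendering, not study code: the class, function, and field names are ours.

```python
# Minimal sketch of the competency categories and the composite success
# criterion defined above. Thresholds come from the text; all names are
# hypothetical.

from dataclasses import dataclass

MPS = 70  # minimum passing score on the written test, %

@dataclass
class ResidentAssessment:
    written_test_pct: float  # written test score, %
    checklist_pct: float     # checklist score, %
    self_rating: int         # resident's confidence/capability, 1-5
    faculty_rating: int      # faculty rating of the resident, 1-5

def categorize(a: ResidentAssessment) -> str:
    passed_mps = a.written_test_pct >= MPS
    perfect_checklist = a.checklist_pct == 100
    if passed_mps and perfect_checklist and a.self_rating >= 4 and a.faculty_rating == 5:
        return "competent"
    if passed_mps and perfect_checklist and a.self_rating >= 4 and a.faculty_rating == 4:
        return "borderline competent"
    return "not competent"

def procedure_successful(completed: bool, forward_passes: int,
                         complication: bool) -> bool:
    """Composite success: completed procedure, fewer than 3 forward
    needle passes, and no immediate (periprocedural or 24-hour)
    complication."""
    return completed and forward_passes < 3 and not complication

# Example: a perfect checklist and a self-rating of 5, but a faculty
# rating of 4, places the resident in the borderline competent category.
print(categorize(ResidentAssessment(80, 100, 5, 4)))  # borderline competent
```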
Data Analysis
Categorical data were reported as frequencies and percentages and were analyzed with logistic regression. Continuous data were reported as means and standard errors and were analyzed with a general linear model. Statistical significance was defined as P < .05. SAS version 9.3 software (SAS Institute, Cary, NC) was used for all analyses.
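All analyses were performed in SAS; as a rough analog for readers who work in other environments, the sketch below shows how the same two model families might be fit in Python with statsmodels. The dataset, file name, and variable names are hypothetical.

```python
# Illustrative analog of the analyses described above, using Python's
# statsmodels rather than the SAS procedures actually employed.
# The file and column names are hypothetical.

import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("residents.csv")  # hypothetical per-resident dataset

# Categorical outcome (eg, competent coded 0/1) vs a predictor:
# logistic regression.
logit = smf.logit("competent ~ prior_procedures", data=df).fit()
print(logit.summary())

# Continuous outcome (eg, checklist score) across competency groups:
# general linear model.
glm = smf.ols("checklist_score ~ C(group)", data=df).fit()
print(glm.summary())
```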
Results
A total of 171 residents participated in the procedure team rotation. Under direct attending physician supervision, residents inserted CVCs using real-time ultrasound guidance for a variety of clinical indications. Reported information reflects only residents with complete data, defined as a postintervention written test plus the checklist scores and attitude ratings needed to evaluate at least 1 CVC attempt. After excluding the 18 participants with incomplete records and the 5 who did not perform an insertion, we assessed a total of 148 residents during 639 procedures. Fifty percent of the patients were referred by medical teaching teams, and the majority were located on the general medical wards.
The greatest number of CVCs (416 [65%]) were inserted into the internal jugular vein, followed by 173 (27%) in the femoral and 49 (8%) in the subclavian veins. Insertion site was missing for 1 procedure. Each site was chosen based on the clinical indication for the procedure as well as patient-specific characteristics (eg, patency of the selected vein).
The sample population of trainees is described in table 1a. When our criteria were applied to each resident's first procedure in a patient, only 4 residents began the training with established competency. Of the 148 residents who completed all assessments, 53 (36%) ultimately achieved the primary outcome of competency, 40 (27%) were judged borderline competent, and the remaining 55 (37%) were deemed not competent.
Sixty-nine percent of all procedures were scored as 100% on the checklist. In only 7 (1%) procedures did a resident rate himself or herself lower than 4 of 5 despite achieving a 100% checklist score and receiving 5 of 5 from the supervisor; in 5 of these 7 instances, the resident ultimately achieved competency. Conversely, there were 11 (2%) instances in which a resident scored less than 100% and received at least 1 faculty rating below 4 of 5, yet self-assessed 4 of 5 or higher. None of these residents were ultimately deemed competent.
The data demonstrated significant improvement in the postintervention written test and checklist scores (P < .001) within each group. Residents reported performing an average of 7 (range: 0–25) insertions prior to the simulation-based program; in our study population, each resident performed an average of 4 (range: 1–12). Those who had performed a higher number of procedures before the training were more likely to be assessed as competent (P = .05). However, the number of procedures performed during the rotation differed significantly among the groups (P < .001): those performing the most procedures were categorized as competent and those performing the fewest, not competent.
Procedural outcomes were reviewed individually and as a composite measure (table 1b). There was insufficient evidence to demonstrate a difference in the rate of completed procedures between groups, but a significant difference was noted among groups in the rate of multiple attempts (P = .002), favoring the borderline and competent groups. While the overall difference among groups in complication rate was not statistically significant (P = .08), a significant difference was demonstrated between those who were not competent and those who were (P = .03). Taken together, the composite success rate showed a significant difference among all groups (P = .001).
Table 2a shows outcomes by final self-assessment regardless of checklist score. There was a significant difference in team experience (P = .02) and multiple attempt rate (P = .02), favoring those with higher ratings. Table 2b demonstrates the breakdown of outcomes based on individual self-assessed rating. Similarly, table 3a reports outcomes by faculty rating. Those whom the faculty rated highest performed more procedures while on the team than those not rated as highly (P < .001). More importantly, those rated highest by the faculty demonstrated a higher composite success rate (P = .02). Table 3b lists the breakdown of outcomes based on faculty rating.
Discussion
To date, there has been no universal standard for determining that residents are competent to perform invasive bedside procedures.3,18–20 The approach we developed combines objective assessments of knowledge and skill with subjective assessments of attitude, ultimately linking them to procedural outcomes. While others have reported the utility of a similar assessment instrument,8 historically, self-assessments of performance have been challenged.21,22 Our project builds on prior work that called for clearer standards to define competency.3
The average prior experience in our sample was higher than the recent ABIM numerical threshold, yet only 4 residents were competent at the outset. These results support work that has found that prior experience is not a sufficient predictor of competency.2 Of note, the number of procedures performed while on the team was associated with attainment of competence. Perhaps the number of procedures is not as important as the educational process; this may provide further support for the “progression of competency.”19,20 Compared with those judged not competent, the competent group had a significantly higher composite success rate. Data support the contention that multiple attempts result in a higher complication rate.3,23 That no differences in overall complication rates among groups were noted was not surprising, given the low overall incidence rate (6%); however, the competent group had a significantly lower rate than those who were not competent.
With respect to subjective assessments, a lower rate of multiple attempts was noted for individuals who rated themselves higher on self-assessment, and a higher composite success rate was found for the group rated highest by the faculty.
One limitation of our study is that participants were evaluated during a brief 4-week experience on the procedure team rotation, and it is possible that if the evaluation period had continued after their time on the team, others might have met the criteria necessary to be judged competent. Second, the proposed definition was used during a single procedure, CVC insertion. Third, our sample size was small. A more robust sample size may demonstrate an association between procedural outcomes and self-assessment. Finally, the study was conducted with internal medicine and medicine–pediatrics residents at a single institution. Further study is needed to evaluate generalizability to other institutions and specialties.
Conclusion
We have demonstrated that applying our multi-faceted definition (assessments of knowledge, skill, and attitude coupled with procedure-specific outcomes) can determine an individual's competency for a particular procedure within a short time frame. During our brief 4-week study period, slightly more than one-third of residents achieved the threshold for competency; most did not. We offer this model to other training programs challenged to define invasive bedside procedural competency, and as a starting point for further work to define the optimal training model and training length to ensure competence and safe, effective patient care.
Author Notes
Joshua D. Lenchus, DO, is Associate Professor, Departments of Medicine and Anesthesiology, University of Miami Miller School of Medicine; Cristiane Mocelin Carvalho, MD, was Fellow, Division of Nephrology, Department of Medicine, Jackson Memorial Hospital; Kaitlyn Ferreri, BS, is Research Support Coordinator, University of Miami-Jackson Memorial Hospital Center for Patient Safety, Miller School of Medicine; Jill Steiner Sanko, MS, ARNP-BC, was Research and Simulation Specialist, University of Miami-Jackson Memorial Hospital Center for Patient Safety, Miller School of Medicine; Kristopher L. Arheart, EdD, is Professor, Department of Epidemiology and Public Health, University of Miami Miller School of Medicine; Maureen Fitzpatrick, MSN, ARNP-BC, is Senior Research Nurse Associate, University of Miami-Jackson Memorial Hospital Center for Patient Safety, Miller School of Medicine; and S. Barry Issenberg, MD, is Professor, Department of Medicine, University of Miami Miller School of Medicine.
Funding: This study was partially funded by the Florida Medical Malpractice Joint Underwriting Association. Funds received from the grant were used for partial salary support for K.L.A. and J.D.L., full salary support for K.F., and procurement of equipment, specifically mannequins and ultrasound units. The granting organization had no role in the design or conduct of the study; collection, management, analysis, or interpretation of the data; or preparation, review, or approval of the manuscript.
The authors wish to thank Jesus Seda, BS, and Lisa Rosen, MA, for their editorial support and incredible assistance.