Examining Residents' Strategic Mindfulness During Self-Regulated Learning of a Simulated Procedural Skill

Ryan Brydges, PhD,
Rose Hatala, MD, MSc, and
Maria Mylopoulos, PhD
Online Publication Date: 01 Jul 2016
Page Range: 364 – 371
DOI: 10.4300/JGME-D-15-00491.1

ABSTRACT

Background: Simulation-based training is currently embedded in most health professions education curricula. Without evidence for how trainees think about their simulation-based learning, some training techniques may not support trainees' learning strategies.

Objective: This study explored how residents think about and self-regulate learning during a lumbar puncture (LP) training session using a simulator.

Methods: In 2010, 20 of 45 postgraduate year 1 internal medicine residents attended a mandatory procedural skills training boot camp. Independently, residents practiced the entire LP skill on a part-task trainer using a clinical LP tray and proper sterile technique. We interviewed participants regarding how they thought about and monitored their learning processes, and then we conducted a thematic analysis of the interview data.

Results: The analysis suggested that participants considered what they could and could not learn from the simulator; they developed their self-confidence by familiarizing themselves with the LP equipment and repeating the LP algorithmic steps. Participants articulated an idiosyncratic model of learning they used to interpret the challenges and successes they experienced. Participants reported focusing on obtaining cerebrospinal fluid and memorizing the “routine” version of the LP procedure. They did not report much thinking about their learning strategies (eg, self-questioning).

Conclusions: During simulation-based training, residents described assigning greater weight to achieving procedural outcomes and tended to think that the simulated task provided them with routine, generalizable skills. Over this typical 1-hour session, trainees did not appear to consider their strategic mindfulness (ie, awareness and use of learning strategies).

Introduction

Simulation-based training has become a standard component of the curriculum at many medical schools and residency training programs in North America.1 Despite this widespread use, health professions education researchers have not developed a full understanding of trainees' conceptions of learning (ie, their system of knowledge and beliefs about learning) in the simulation environment. Without an understanding of trainees' strategies and beliefs about learning, educators run the risk of designing training experiences that unintentionally hinder rather than promote skill acquisition.2 There is a need for systematic studies of how cycles of learning unfold in authentic training contexts, like the simulation environment.3-5

Previously, researchers interviewed trainees about their perceptions and experiences during simulation-based training, with a primary focus on how the simulated experience reflected participants' clinical practice.6,7 Researchers in nursing interviewed students and found that they situated their learning by identifying the benefits and limitations of simulation-based training when practicing “fundamental nursing”8 and urethral catheterization skills.9 Researchers in medicine used a technique called self-regulated learning (SRL) microanalysis to ask students brief open-ended questions before, during, and after performing simulated venipuncture4 and a paper-based diagnostic reasoning task.7 In those studies, medical students reported using strategies to accomplish the task, like identifying contextual factors when making a diagnosis, yet most did not report thinking strategically about learning, like modifying an ineffective learning plan.10

To expand our understanding, researchers must study how trainees' thinking about learning evolves throughout a longer training session, as most residency training programs now use simulation “boot camps” (ie, half-day to full-week intensive training courses).11 While some of these programs have implemented training to a mastery standard, most do not have the faculty or equipment that mastery training requires. Thus, it is common for trainees to practice on simulators without consistent, direct instruction, requiring them to self-regulate their learning during significant portions of training.

Simulation-based training is ideal for studying how trainees learn to learn, as the modality encourages learning without the distractions of patient care and allows trainees to concentrate on their individual experiences and SRL.4 We define SRL as learning, with or without supervision, by an individual who actively modulates affect (ie, emotions), cognition, and behavior using supports designed to facilitate achievement of desired learning goals.12,13 To be successful, trainees must self-regulate how they learn foundational content, how they use content to accomplish a task, and how they use strategies to learn the content and its clinical applications. Learning of content and learning of strategies may not occur hand-in-hand; for example, a trainee may choose to repetitively practice a specific aspect of lumbar puncture (LP; eg, setting up the manometer), yet he or she may not self-monitor how well his or her goals are accomplished during those repetitions. Theorists have defined the latter capability as the awareness and use of effective learning strategies, which they term “strategic content learning”14 or “strategic mindfulness.”4 Recent evidence suggests that clinical trainees benefit when they are guided to be mindful of effective learning strategies, such as using self-questioning to improve clinical reasoning.3,15

We studied how postgraduate year (PGY) 1 residents think about and regulate their learning, particularly whether and how they exhibit strategic mindfulness during training on 2 clinical scenarios (ie, easy and difficult) using a part-task LP simulator.

Methods

Design and Participants

We conducted this interview study in the context of a larger trial comparing SRL to instructor-led training.16 Forty-five of 49 PGY-1 internal medicine residents (92%) participated in the larger trial. We randomly assigned 20 of the participants (44%) to an SRL group, representing our criterion sample.17 Thirteen residents reported having performed 0 to 2 previous LPs, and 3 reported performing more than 5 (overall range, 0 to 10 LPs; median, 1). Thus, we sampled a heterogeneous spectrum of novices, all having performed 10 or fewer previous LPs.
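For readers who want a concrete picture of this sampling step, the short Python sketch below mirrors the random assignment of 20 of the 45 trial participants to the SRL criterion sample. Every identifier, the seed, and the prior-LP counts are invented placeholders, not study data.

```python
import random
import statistics

# Hypothetical reconstruction of the assignment described above;
# every value here is a synthetic placeholder, not study data.
rng = random.Random(0)  # fixed seed so the sketch is reproducible

participants = [f"PGY1-{i:02d}" for i in range(1, 46)]  # the 45 trial participants
srl_group = rng.sample(participants, 20)                # criterion sample (44%)

# Invented prior LP counts, skewed toward low experience (reported median 1,
# range 0-10); a real analysis would use the residents' self-reports.
prior_lps = {pid: rng.choice([0, 0, 0, 1, 1, 1, 2, 2, 3, 6, 10]) for pid in srl_group}
print(len(srl_group), statistics.median(prior_lps.values()))
```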

Data Collection

Participants were not required to complete any prereading prior to the simulation intervention. At the start of the session, we informed them that the objective was to learn LP, and we did not provide a specific orientation toward focusing either on the procedure's processes (ie, steps of LP) or outcomes (ie, attaining cerebrospinal fluid [CSF]). Thus, we allowed residents to interpret the objective and set their own goals. Residents knew the course would end with a posttest for study purposes, and that their program would not use the test scores.

Participants in the SRL group were sequestered from their peers to ensure that they learned independently. Six to 8 trainees participated in the course per day over a total of 3 days. Participants had access to an instructional video18 as a resource during 50 minutes of practice on 2 versions (easy and difficult) of the same LP simulator (Kyoto Kagaku Co Ltd, Kyoto, Japan). The easy version simulated a patient of average weight, whereas the difficult version simulated an obese patient, which made it more difficult for residents to identify landmarks. They began practice on the easy simulator and self-regulated when to transition to the difficult simulator (progression point) and when to undertake the posttest of their LP performance; they received no explicit instructions for either decision. During practice, instructors did not provide explicit performance feedback, so participants relied on their own self-judgments. Immediately following the posttest, participants interacted 1-on-1 with an instructor for feedback and guidance. This directed self-regulated learning intervention has been shown to be comparable to, and in some ways better than, training at a 1:4 instructor-to-trainee ratio.16

Two investigators (R.B., R.H.) developed the interview guide using relevant SRL literature. After reaching consensus on the guide, the investigators conducted 2 semistructured interviews: (1) when participants decided to transition between the easy and difficult simulators (progression point interviews), and (2) after interacting with the instructor (exit interviews). Both interviews lasted the length of time required to work through the interview guides, approximately 5 to 10 minutes and 30 to 45 minutes, respectively. Using indirect questions, we asked participants to discuss processes related to monitoring their learning, and how they decided to advance within and then ultimately end their practice (interview guide provided as online supplemental material). We did not probe participants on the specifics of LP skills and did not ask any direct questions about simulation-based training. As part of the interviews, we focused on SRL core processes, particularly self-monitoring (ie, in-the-moment awareness of how learning is progressing) and self-assessment (ie, postperformance evaluation).19-21

The University of British Columbia Behavioural Research Ethics Board approved the study protocol.

Data Analysis

Data analysis used NVivo software (QSR International Pty Ltd, Melbourne, Australia). We did not link the interview data to any individual, as we were exploring the aggregate of residents' thoughts and behaviors related to strategic mindfulness as the construct of interest, rather than exploring individual differences.

We conducted a thematic analysis of the interview transcripts. Initially, the 2 interviewers (R.B., R.H.) independently coded 3 randomly selected transcripts, and a third member with qualitative expertise (M.M.) was added as a peer debriefer to enhance credibility of the emerging coding structure.22 During coding, we read the transcripts multiple times, looking for patterns and/or unique insights in the data. We coded the data from both types of interviews together because the team decided that both addressed our construct of strategic mindfulness. Coding was (1) inductive, in that we allowed codes to emerge as we read the transcripts, and (2) deductive, in that we used principles from SRL theory as sensitizing concepts to inform interpretation.23 Therefore, each researcher generated codes using his or her own informed perspective while being attentive to emergent, unexpected, or disconfirming codes, an approach referred to as analyst triangulation.24

We met as a group to review each member's codes. We went through this process 3 times, iteratively categorizing codes until the team agreed on a final grouping of codes they judged as sufficiently representing the data (Table 1). We treated the data comprehensively, meaning we aimed to account for all the codes we generated in this final thematic structure.25

Table 1. A List of Our Initial Codes and the Final Themes

One member of the research team applied the final thematic structure to all transcripts using NVivo software. Iterative team meetings during this process addressed study rigor, including coding reliability. As a final step, we used NVivo to organize the data according to our themes and selected representative quotes to serve as exemplars of each theme. Throughout this process, we maintained an audit trail, including research team notes, individuals' preliminary coding, the evolving grouping of codes into themes, and our decision process for choosing representative quotes.
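The team established coding reliability through iterative consensus meetings rather than a formal statistic. Purely as an illustration of how such agreement could be quantified, the Python sketch below computes Cohen's kappa between two hypothetical coders applying the final themes to the same excerpts, then groups excerpts by theme to shortlist representative quotes; all labels and excerpts are invented, not study data.

```python
from collections import Counter, defaultdict

def cohens_kappa(coder_a, coder_b):
    """Two-rater Cohen's kappa over parallel lists of theme labels."""
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    labels = set(coder_a) | set(coder_b)
    expected = sum((freq_a[lab] / n) * (freq_b[lab] / n) for lab in labels)
    return (observed - expected) / (1 - expected)

# Invented labels: two coders independently assign one of the 3 final themes
# ("context", "confidence", "model") to the same 6 interview excerpts.
coder_1 = ["context", "confidence", "model", "context", "model", "confidence"]
coder_2 = ["context", "confidence", "model", "confidence", "model", "confidence"]
print(f"kappa = {cohens_kappa(coder_1, coder_2):.2f}")  # kappa = 0.75

# Grouping excerpts by one coder's labels to shortlist representative quotes
excerpts = [f"excerpt {i}" for i in range(1, 7)]
by_theme = defaultdict(list)
for excerpt, theme in zip(excerpts, coder_1):
    by_theme[theme].append(excerpt)
```

In a qualitative design like this one, such a statistic would at most flag divergent codes for discussion; it is not a substitute for the consensus process the team actually used.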

Results

Our analysis resulted in 3 themes that helped us understand how participants thought about regulating their learning, and how those thoughts shaped their self-regulated learning of lumbar puncture: (1) becoming aware of the simulation context, (2) defining confidence and comfort in context, and (3) developing a model of learning. Representative quotes (quotes 1 to 13) for each theme are included in Tables 2 through 4.

Table 2. Representative Quotes for the Theme “Becoming Aware of the Simulation Context”

Table 3. Representative Quotes for the Theme “Defining Confidence and Comfort in Context”

Table 4. Representative Quotes for the Theme “Developing a Model of Learning”

Theme 1: Becoming Aware of the Simulation Context

Typical educational models used by the simulation community emphasize the design of the environment and what it can do for the learner. In medical education research, for example, SRL is often defined as the absence of instructor presence, rather than as an activity in which the learner engages.12,26 In contrast with this teacher-centered view, our participants articulated an awareness of their agency in regulating learning (quote 1). Participants were aware of their control over learning and discussed their ability to capitalize on affordances of simulation, like repetitive practice with troubleshooting that does not affect a patient's safety (quote 2). Their awareness made them mindful of the limitations of the simulation environment, particularly in how the patient is represented (quote 3). Participants appeared to analyze the context and largely understood that their learning was bounded by constraints that supported and limited what and how they could learn.

Theme 2: Defining Confidence and Comfort in Context

Participants explained that they linked their self-confidence to associated feelings of comfort and to specific environmental factors. Identifying and using the same equipment as in clinical practice led some to articulate a sense of comfort (quote 4). Beyond a sense of comfort with the equipment, participants also discussed links between confidence and being able to practice the LP process (quote 5). Furthermore, the opportunity for repetitive practice of these experiences served to confirm their sense of confidence (quote 6).

The extent to which participants discussed terms like comfort and confidence suggests that they sought links between these subjective feelings and elements in the learning context, whether those were tangible pieces of equipment or the feeling of knowing that comes with repetitive practice.27 As others have noted, however, confidence is not necessarily indicative of competence.28 Moreover, confidence is only 1 component of a trainee's model of learning.

Theme 3: Developing a Model of Learning

Social cognitivist theorists note a tension between learners' focus on processes (ie, the specific steps in an LP) and their focus on outcomes (ie, obtaining CSF).29,30 Our analysis showed that participants discussed the importance of learning the processes of LP (quote 7). However, rather than emphasizing process throughout the entire cycle of SRL (ie, from preparation to performance to reflection), participants appeared to prioritize forming a perfect plan (quote 8). In so doing, they appeared to consider the LP algorithm as something to be memorized, and their monitoring involved identifying which steps they did not remember (quotes 9 and 10).

In addition to reflecting after performance, only some participants described reflecting on their actions in the moment (quote 11); for most, thoughts about process seemed restricted to the time before and after (rather than during) their performance. Furthermore, in thinking about process, participants emphasized learning strategies from other contexts (eg, memorizing, not forgetting) that may not be beneficial in a simulation context. While participants discussed processes richly, they tended to focus primarily on procedural outcomes, with an aim of “getting the fluid” (quote 12). They seemed to define their success according to outcomes rather than processes (quote 13).

Discussion

We studied how residents think about and participate in SRL during a common situation where they must manage their own learning experience with a part-task simulator. Residents reported that they worked to discover the affordances and limitations of the simulator; to form links between self-confidence, comfort, and certain environmental factors; and to develop a model of learning they used to interpret performance challenges and successes. Our findings have implications for instructional design of simulation-based training, and add to the growing discourse of SRL in health professions education.3-5,10,12,31

Integration With Previous Research and Implications

Theories of SRL suggest that novices must first learn the processes of successful task completion before emphasizing performance outcomes.29,30 Instead, our participants prioritized the outcomes of LP. In developing their model of learning, participants appeared to think about regulating many learning strategies described in SRL theories, yet how they prioritized those strategies did not match the theoretical ideal. This incomplete integration may have impacted performance, given our previously published finding16 that this group's average immediate posttest score was 3.25 ± 0.21 (mean ± standard error) on a 5-point global rating scale, where 3 represented minimally competent performance (ie, ready only to perform LP under direct supervision).
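As a point of reference for that summary statistic, the standard error reported above is the sample standard deviation divided by the square root of n. The minimal Python sketch below reproduces a similar summary from synthetic ratings; the values are invented stand-ins, not the study's raw scores.

```python
import math
import statistics

# Synthetic 5-point global ratings for n = 20 trainees (not the study's data),
# chosen so the summary lands near the reported 3.25 +/- 0.21 (mean +/- SE).
ratings = [2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3, 4, 4, 4, 4, 4, 4, 5, 5]
mean = statistics.mean(ratings)
se = statistics.stdev(ratings) / math.sqrt(len(ratings))  # SE = SD / sqrt(n)
print(f"{mean:.2f} +/- {se:.2f}")  # -> 3.30 +/- 0.21
```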

A primary finding, then, is the tension in how residents prioritized their learning of performance outcomes over performance processes. That preference may be problematic, given research showing that novice learners benefit most when they prioritize processes over outcomes, which may help them better understand how certain actions lead to certain products.29,30 Medical trainees may be influenced to emphasize outcomes because the medical training system encourages residents to adopt an outcome-focused perspective32,33 or because the simulator's most salient form of feedback was the flow of CSF. The implications are that educators must attend to (1) the preconceived notions trainees bring with them to the learning context, including how they interpret the objectives educators set for the session; and (2) the feedback (and other information) provided by the simulator and how trainees might use that information to interpret learning challenges and learning successes. Informed by these findings, the University of British Columbia internal medicine program now explicitly encourages trainees to first set process goals, followed by outcome goals, during procedural skills simulation-based training sessions.

Regarding the processes of LP, residents discussed their need to memorize or get “all the steps in my head” primarily by reflecting on the steps they forgot. As discussed, a process focus can benefit learning; however, the residents' emphasis on memorizing a “perfect plan” aligns with previous research showing that residents tend to view such knowledge as static once it is acquired.33 We argue that emphasizing such rigid plans during simulation-based training might limit performance in the more complicated clinical context. A previous study showed that struggling trainees self-monitored their learning effectively, though they did not adapt their plans (ie, plans became rigid) based on what they had monitored.10 The implications are that educators must help residents to (1) recognize that a perfect plan in a simulation context likely is not perfect in all contexts; (2) monitor how their plan unfolds and flexibly incorporate feedback to rework an ineffective plan; and (3) understand how to modify an initial plan to address different medical conditions or patients. Trainees may be prompted to reflectively monitor and adapt their plans more readily if educators design simulation-based training to reflect the variability of clinical practice, such as changing patient anatomy or using levels of difficulty or challenge.

Residents' responses to the interviews suggest that they prioritized their learning of content over their learning of strategies. They seemed to emphasize how they learn LP rather than how they learn to learn procedures in general. This absence of strategic mindfulness in the early stage of training was also found in a recent interview-based study of medical students' clinical reasoning.10 In our study, residents may have prioritized content due to the design of the session: the title was “LP skills training,” the objective was to “learn LP,” the instructional video emphasized how to complete an LP, and the entire room was set up for LP training.

Without being explicitly asked to do so, novice trainees may not spontaneously consider their learning strategies, or they may not have conscious awareness of their use of strategies at an early stage of training.3,34 In addition, the 1-hour practice time may have been too short for their learning processes to develop to the point that they could talk about them. Whatever the explanation, previous studies have shown the benefits of training that supports trainees as they experiment with learning strategies.14 We advocate for educators to design simulation-based training that teaches trainees both the content and the strategies of SRL that will prepare them for future learning in various contexts.35,36

Limitations of our study include a small sample size from a single institution. As is typical in the interpretive paradigm, we chose quotes that we judged to represent each theme, knowing that they do not reflect the experience and perspective of individual residents in the study. We might have had different results had we recruited more advanced residents, provided a longer training session, or focused on residents in a procedural specialty. One interviewer (R.H.) was the internal medicine program director at the time, which may have affected participants' responses. A recommended next step is to study trainees' strategic mindfulness and self-regulation in different training environments with different types and levels of instructor involvement. We also suggest that training should target residents' abilities to be adaptive and flexible33,37 in how they approach clinical problems by emphasizing clinical variability (eg, simulators with a broad range of difficulty or performance with different team members present in uniquely constructed contexts).38,39

Conclusion

Aspects of the training environment, such as a simulator's limitations, can impact the models of SRL residents develop and use. Our findings suggest that residents assign the greatest weight to achieving procedural outcomes, and that they believe that part-task simulators enable them to learn routine versions of a procedural skill. We recommend that in addition to focusing on the content to be learned, educators and researchers design training experiences that raise awareness of and potentially include assessment of how trainees use learning strategies.

References

1. Passiment M, Sacks H, Huang G. Medical simulation in medical education: results of an AAMC survey. Association of American Medical Colleges. 2011. https://www.aamc.org/download/259760/data. Accessed March 22, 2016.
2. Brydges R, Peets A, Issenberg SB, Regehr G. Divergence in student and educator conceptual structures during auscultation training. Med Educ. 2013;47(2):198-209.
3. Brydges R, Butler D. A reflective analysis of medical education research on self-regulation in learning and practice. Med Educ. 2012;46(1):71-79.
4. Cleary TJ, Sandars J. Assessing self-regulatory processes during clinical skill performance: a pilot study. Med Teach. 2011;33(7):e368-e374.
5. Durning SJ, Cleary TJ, Sandars J, Hemmer P, Kokotailo P, Artino AR. Perspective: viewing “strugglers” through a different lens: how a self-regulated learning perspective can help medical educators with assessment and remediation. Acad Med. 2011;86(4):488-495.
6. Dieckmann P, Manser T, Wehner T, Rall M. Reality and fiction cues in medical patient simulation: an interview study with anesthesiologists. J Cogn Eng Decis Mak. 2007;1(2):148-168.
7. Horcik Z, Savoldelli G, Poizat G, Durand M. A phenomenological approach to novice nurse anesthetists' experience during simulation-based training sessions. Simul Healthc. 2014;9(2):94-101.
8. Reilly A, Spratt C. The perceptions of undergraduate student nurses of high-fidelity simulation-based learning: a case report from the University of Tasmania. Nurse Educ Today. 2007;27(6):542-550.
9. Johannesson E, Silen C, Kvist J, Hult H. Students' experiences of learning manual clinical skills through simulation. Adv Health Sci Educ Theory Pract. 2013;18(1):99-114.
10. Artino AR Jr, Cleary TJ, Dong T, Hemmer PA, Durning SJ. Exploring clinical reasoning in novices: a self-regulated learning microanalytic approach. Med Educ. 2014;48(3):280-291.
11. Wayne DB, Cohen ER, Singer BD, Moazed F, Barsuk JH, Lyons EA, et al. Progress toward improving medical school graduates' skills via a “boot camp” curriculum. Simul Healthc. 2014;9(1):33-39.
12. Brydges R, Manzone J, Shanks D, Hatala R, Hamstra SJ, Zendejas B, et al. Self-regulated learning in simulation-based training: a systematic review and meta-analysis. Med Educ. 2015;49(4):368-378.
13. Sitzmann T, Ely K. A meta-analysis of self-regulated learning in work-related training and educational attainment: what we know and where we need to go. Psychol Bull. 2011;137(3):421-442.
14. Butler DL. The strategic content learning approach to promoting self-regulated learning: a report of three studies. J Educ Psychol. 1998;90(4):682-697.
15. Chamberland M, Mamede S, St-Onge C, Rivard MA, Setrakian J, Lévesque A, et al. Students' self-explanations while solving unfamiliar cases: the role of biomedical knowledge. Med Educ. 2013;47(11):1109-1116.
16. Brydges R, Nair P, Ma I, Shanks D, Hatala R. Directed self-regulated learning versus instructor-regulated learning in simulation training. Med Educ. 2012;46(7):648-656.
17. Patton MQ. Qualitative Evaluation & Research Methods. Thousand Oaks, CA: Sage Publications; 1990.
18. Ellenby MS, Tegtmeyer K, Lai S, Braner DA. Videos in clinical medicine. Lumbar puncture. N Engl J Med. 2006;355(13):e12.
19. Eva KW, Regehr G. “I'll never play professional football” and other fallacies of self-assessment. J Contin Educ Health Prof. 2008;28(1):14-19.
20. Eva KW, Regehr G. Exploring the divergence between self-assessment and self-monitoring. Adv Health Sci Educ Theory Pract. 2011;16(3):311-329.
21. Regehr G, Eva K. Self-assessment, self-direction, and the self-regulating professional. Clin Orthop Relat Res. 2006;449:34-38.
22. Lincoln YS, Guba EG. Naturalistic Inquiry. Newbury Park, CA: Sage Publications; 1985.
23. Blumer H. Symbolic Interactionism: Perspective and Method. Berkeley, CA: University of California Press; 1986.
24. Patton MQ. Enhancing the quality and credibility of qualitative analysis. Health Serv Res. 1999;34(5, pt 2):1189-1208.
25. Silverman D. Doing Qualitative Research: A Practical Handbook. Thousand Oaks, CA: Sage Publications; 2013.
26. Murad MH, Varkey P. Self-directed learning in health professions education. Ann Acad Med Singapore. 2008;37(7):580-590.
27. Koriat A, Nussinson R, Bless H, Shaked N. Information-based and experience-based metacognitive judgments: evidence from subjective confidence. In: Dunlosky J, Bjork RA, eds. A Handbook of Memory and Metamemory. Mahwah, NJ: Erlbaum; 2008:117-134.
28. Wayne DB, Butter J, Siddall VJ, Fudala MJ, Wade LD, Feinglass J, et al. Graduating internal medicine residents' self-assessment and performance of advanced cardiac life support skills. Med Teach. 2006;28(4):5.
29. Zimmerman BJ, Kitsantas A. Developmental phases in self-regulation: shifting from process goals to outcome goals. J Educ Psychol. 1997;89(1):29-36.
30. Zimmerman BJ, Kitsantas A. Acquiring writing revision skill: shifting from process to outcome self-regulatory goals. J Educ Psychol. 1999;91(2):241-250.
31. Sandars J, Cleary TJ. Self-regulation theory: applications to medical education: AMEE Guide No. 58. Med Teach. 2011;33(11):875-886.
32. Cook D, West C. Perspective: reconsidering the focus on “outcomes research” in medical education: a cautionary note. Acad Med. 2013;88(2):162-167.
33. Mylopoulos M, Regehr G, Ginsburg S. Exploring residents' perceptions of expertise and expert development. Acad Med. 2011;86(suppl 10):46-49.
34. Evensen DH, Salisbury-Glennon JD, Glenn J. A qualitative study of six medical students in a problem-based curriculum: toward a situated model of self-regulation. J Educ Psychol. 2001;93(4):659-676.
35. Bransford JD, Schwartz DL. Rethinking transfer: a simple proposal with multiple implications. Rev Res Educ. 1999;24:61-100.
36. Schwartz DL, Bransford JD, Sears D. Efficiency and innovation in transfer. In: Mestre JP, ed. Transfer of Learning From a Modern Multidisciplinary Perspective. Greenwich, CT: Information Age Publishing; 2005:1-51.
37. Mylopoulos M, Regehr G. Putting the expert together again. Med Educ. 2011;45(9):920-926.
38. Cook DA, Hamstra SJ, Brydges R, Zendejas B, Szostek JH, Wang AT, et al. Comparative effectiveness of instructional design features in simulation-based education: systematic review and meta-analysis. Med Teach. 2013;35(1):e867-e898.
39. Hatala RM, Brooks LR, Norman GR. Practice makes perfect: the critical role of mixed practice in the acquisition of ECG interpretation skills. Adv Health Sci Educ Theory Pract. 2003;8(1):17-26.
Copyright: 2016

Author Notes

Corresponding author: Ryan Brydges, PhD, University of Toronto, Department of Medicine, 190 Elizabeth Street, Toronto, ON M5G 2C4 Canada, 416.340.3202, ryan.brydges@utoronto.ca

Funding: The authors report no external funding source for this study.

Conflict of interest: The authors declare they have no competing interests.

The authors would like to thank Clare Bouchal, David Shanks, Parvathy Nair, and Irene Ma for supporting our data collection. They would also like to thank all those at the Centre for Excellence for Simulation Education & Innovation at the University of British Columbia, in Vancouver, Canada, who contributed to the study.

Editor's Note: The online version of this article contains the interview guide used in the study.

Received: 30 Sept 2015
Accepted: 12 Feb 2016