Preventable mistakes occur frequently and can lead to patient harm and death. The emergency department (ED) is notoriously prone to such errors, and evidence suggests that improving teamwork is key to reducing error rates in acute care settings. However, only a few strategies are in place to train team skills and communication in interprofessional situations. Our goal was to conceptualize, implement, and evaluate a training module for students of three professions involved in emergency care. The objective was to sensitize participants to barriers to team skills and communication across professional boundaries.
Peer-assisted learning (PAL) is the development of new knowledge and skills through active learning support from peers. Benefits of PAL include the introduction of teaching skills for students, the creation of a safe learning environment, and the efficient use of faculty time. We present a novel approach to PAL in an emergency medicine (EM) clerkship curriculum: an inexpensive, tablet-based app with which students cooperatively present and perform low-fidelity, case-based simulations. This approach promotes accountability for student learning, fosters teaching skills, and economizes faculty presence.
Emergency medicine (EM) trainees must achieve expertise across the broad spectrum of clinical skills critical to EM practice, reaching competence in only a few short years. While EM training includes didactics, self-directed learning, and periodic assessments, the key learning occurs while caring for patients under the supervision of experienced physicians. While early medical education often focuses on the transmission and retention of data, learners must ultimately gain practical experience applying clinical reasoning, learning to work in teams, and approaching complicated problems and procedures. The strategic implementation of problem-solving strategies, heuristic approaches, and metacognitive skills produces the type of understanding that allows the novice to become the expert.
In today’s team-oriented healthcare environment, high-quality patient care requires physicians to possess not only medical knowledge and technical skills but also crisis resource management (CRM) skills. In emergency medicine (EM), the high acuity and dynamic environment make physicians’ CRM skills particularly critical to healthcare team success. The Accreditation Council for Graduate Medical Education core competencies that guide residency program curricula include CRM skills; however, EM residency programs are not given specific instructions on how to teach these skills to their trainees. This article describes a simulation-based CRM course designed specifically for novice EM residents.
Emergency medicine (EM) is a unique clinical learning environment. The Accreditation Council for Graduate Medical Education Clinical Learning Environment Review (CLER) Pathways to Excellence calls for “hands-on training” in disclosure of medical error (DME) during residency. Training and practicing key elements of DME with standardized patients (SP) may enhance EM residents’ preparedness to perform this crucial skill in a clinical setting.
The WestJEM Blog and Podcast Watch presents high-quality, open-access educational blogs and podcasts in emergency medicine (EM), based on the ongoing ALiEM Approved Instructional Resources (AIR) and AIR-Professional series. Both series critically appraise resources using an objective scoring rubric. This installment of the Blog and Podcast Watch highlights the topic of neurologic emergencies from the AIR series.
Point-of-care ultrasound (POCUS) is expanding across all medical specialties. As the benefits of ultrasound technology become apparent, efforts to integrate ultrasound into pre-clinical medical education are growing. Our objective was to describe our process of integrating POCUS as an educational tool into the medical school curriculum, and how such efforts are perceived by students.
Derivation of Two Critical Appraisal Scores for Trainees to Evaluate Online Educational Resources: A METRIQ Study
Teresa M. Chan, MD, MHPE, et al.
Online educational resources (OERs), like blogs and podcasts, increasingly augment or replace traditional medical education resources such as textbooks and lectures. Trainees’ ability to evaluate these resources is poor, and few quality assessment aids have been developed to assist them. This study aimed to derive a quality evaluation instrument for this purpose.
Volume 17, Issue 3, May 2016
Ryan Miller, BS, et al.
Introduction: Over the past decade, medical students have witnessed a decline in the
opportunities to perform technical skills during their clinical years. Ultrasound-guided central
venous access (USG-CVA) is a critical procedure commonly performed by emergency medicine,
anesthesia, and general surgery residents, often during their first month of residency. However, the
acquisition of skills required to safely perform this procedure is often deficient upon graduation from
medical school. To ameliorate this lack of technical proficiency, ultrasound simulation models have
been introduced into undergraduate medical education to train venous access skills. Criticisms of
simulation models are the innate lack of realistic tactile qualities, as well as the lack of anatomical
variances when compared to living patients. The purpose of our investigation was to design and
evaluate a life-like and reproducible training model for USG-CVA using a fresh cadaver.
Methods: This was a cross-sectional study at an urban academic medical center. An 18-point
procedural knowledge tool and an 18-point procedural skill evaluation tool were administered
during a cadaver lab at the beginning and end of the surgical clerkship. During the fresh cadaver
lab, procedure-naïve third-year medical students were trained to perform ultrasound-guided
central venous access of the femoral and internal jugular vessels. Preparation of the fresh
cadaver model involved placement of thin-walled latex tubing in the anatomic locations of the
femoral and internal jugular veins, respectively.
Results: Fifty-six third-year medical students participated in this study during their surgical
clerkship. The fresh cadaver model provided high-quality, lifelike ultrasound images despite
numerous cannulation attempts. Technical skill scores improved from an average score of 3 to 12
(p<0.001) and procedural knowledge scores improved from an average score of 4 to 8 (p<0.001).
Conclusion: The use of this novel cadaver model prevented extravasation of fluid, maintained
ultrasound-imaging quality, and proved to be an effective educational model allowing third-year
medical students to improve and maintain their technical skills.
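As a concrete illustration of the pre/post comparison reported above, the sketch below runs a paired test on simulated 18-point scores. This is a minimal sketch with made-up data; the abstract does not specify which statistical test produced the reported p-values, so the paired t-test here is an assumption.

```python
# Hypothetical re-creation of the pre/post skill-score comparison above.
# Data are simulated; the paired t-test is an assumed analysis choice.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# 56 students, 18-point skill tool: means loosely match the reported
# averages (~3 before training, ~12 after).
pre = np.clip(rng.normal(3, 1.5, size=56), 0, 18)
post = np.clip(rng.normal(12, 2.0, size=56), 0, 18)

t_stat, p_value = stats.ttest_rel(pre, post)
print(f"pre mean={pre.mean():.1f}, post mean={post.mean():.1f}, p={p_value:.2g}")
```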
Volume 16, Issue 6, November 2015.
Ambrose H. Wong, MD, et al.
Assessment of medical students in their emergency
medicine (EM) clerkship is often based on clinical shift
evaluations and written examinations. Clinical evaluations
offer some insight into students’ ability to apply knowledge to
clinical problems, but are notoriously unreliable, with score
variance that may be driven as much by error as by actual
student performance.
Volume 16, Issue 6, November 2015.
Meghan Schott, MD, et al.
Introduction: Emergency medicine (EM) milestones are used to assess residents’ progress. While
some milestone validity evidence exists, standardized tools to reliably assess residents are
lacking, raising the concern that we may not truly be measuring what we intend to assess. The
purpose of this study was to design a direct observation milestone assessment
instrument supported by validity and reliability evidence. In addition, such a tool would further lend
validity evidence to the EM milestones by demonstrating their accurate measurement.
Methods: This was a multi-center, prospective, observational validity study conducted at eight
institutions. The Critical Care Direct Observation Tool (CDOT) was created to assess EM residents
during resuscitations. This tool was designed using a modified Delphi method focused on content,
response process, and internal structure validity. Paying special attention to content validity, the
CDOT was developed by an expert panel, maintaining the use of the EM milestone wording. We
built response process and internal consistency by piloting and revising the instrument. Raters
were faculty who routinely assess residents on the milestones. A brief training video on utilization
of the instrument was completed by all. Raters used the CDOT to assess simulated videos of three
residents at different stages of training in a critical care scenario. We measured reliability using
Fleiss’ kappa and intraclass correlations.
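For readers unfamiliar with these reliability statistics, the sketch below shows how Fleiss’ kappa can be computed from a subject-by-rater matrix using statsmodels. The ratings are invented placeholders, not the study’s data.

```python
# Minimal sketch of the inter-rater reliability analysis named above.
# Rows are rated items, columns are raters, and cell values are the
# rating category each rater assigned (hypothetical data).
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# 10 items rated by 5 raters on a 0-4 milestone-style scale (made up).
ratings = np.array([
    [2, 3, 2, 2, 3],
    [1, 1, 2, 1, 1],
    [4, 3, 4, 4, 4],
    [0, 1, 0, 0, 1],
    [2, 2, 2, 3, 2],
    [3, 3, 3, 3, 4],
    [1, 2, 1, 1, 2],
    [4, 4, 3, 4, 4],
    [0, 0, 1, 0, 0],
    [2, 3, 3, 2, 2],
])

# aggregate_raters converts subject-by-rater data into per-category counts.
table, _ = aggregate_raters(ratings)
print(f"Fleiss' kappa = {fleiss_kappa(table):.2f}")
```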
Results: Two versions of the CDOT were used: one used the milestone levels as global rating
scales with anchors, and the second reflected a current trend of a checklist response system.
Although the raters who used the CDOT routinely rate residents in their practice, they did not score
the residents’ performances in the videos comparably, which led to poor reliability. The Fleiss’ kappa
of each of the items measured on both versions of the CDOT was near zero.
Conclusion: The validity and reliability of the current EM milestone assessment tools have yet to
be determined. This study is a rigorous attempt to collect validity evidence in the development of
a direct observation assessment instrument. However, despite strict attention to validity evidence,
inter-rater reliability was low. The potential sources of reducible variance include rater- and
instrument-based error. Based on this study, there may be concerns for the reliability of other EM
milestone assessment tools that are currently in use.
Volume 16, Issue 6, November 2015.
Mark Favot, MD, et al.
Introduction: The curriculum in most emergency medicine (EM) clerkships includes very little
formalized training in point-of-care ultrasound. Medical schools have begun to implement ultrasound
training in the pre-clinical curriculum, and the EM clerkship is an appropriate place to build upon this
training. The objectives are (1) to evaluate the effectiveness of implementing a focused ultrasound
curriculum within an established EM clerkship and (2) to obtain feedback from medical students
regarding the program.
Methods: We conducted a prospective cohort study of medical students during an EM clerkship
year from July 1, 2011, to June 30, 2012. Participants included fourth-year medical students
(n=45) enrolled in the EM clerkship at our institution. The students underwent a structured program
focused on the Focused Assessment with Sonography for Trauma (FAST) exam and ultrasound-guided
vascular access. At the conclusion of the rotation, they took a 10-item multiple-choice test assessing
knowledge and image interpretation skills. A cohort of EM residents (n=20) also took the multiple-choice
test but did not participate in the training with the students. We used an independent-samples
t-test to examine differences in test scores between the groups.
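A minimal sketch of this analysis step, with placeholder scores standing in for the real data (an independent-samples t-test via scipy):

```python
# Hypothetical sketch of the between-group comparison described in
# Methods: an independent-samples t-test on 10-item test scores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
student_scores = rng.integers(5, 11, size=45)   # 45 clerkship students
resident_scores = rng.integers(4, 10, size=20)  # 20 EM residents

t_stat, p_value = stats.ttest_ind(student_scores, resident_scores)
# df = 45 + 20 - 2 = 63, matching the reported t(63)
print(f"t={t_stat:.2f}, p={p_value:.3f}")
```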
Results: The medical students in the ultrasound training program scored significantly higher on
the multiple-choice test than the EM residents, t(63)=2.3, p<0.05. The feedback from the students
indicated that 82.8% were using ultrasound on their current rotations and the majority (55.2%) felt
that the one-on-one scanning shift was the most valuable aspect of the curriculum.
Discussion: Our study supports an ultrasound training program for medical students in the EM
clerkship. After completing the training, students performed at least as well as EM residents on a
knowledge-based exam.
Volume 16, Issue 6, November 2015.
Mira Mamtani, MD, et al.
Graduate medical education is increasingly focused on patient safety and quality improvement; training
programs must adapt their curricula to address these changes. We propose a novel curriculum for
emergency medicine (EM) residency training programs specifically addressing patient safety, systems-based
management, and practice-based performance improvement, called “EM Debates.” Following
implementation of this educational curriculum, we performed a cross-sectional study to evaluate the
curriculum through resident self-assessment. Additionally, a cross-sectional study to determine the
ED clinical competency committee’s (CCC) ability to assess residents on specific competencies was
performed. Residents were overall very positive towards the implementation of the debates. Of those
participating in a debate, 71% felt that it improved their individual performance within a specific topic,
and 100% of those that led a debate felt that they could propose an evidence-based approach to
a specific topic. The CCC found that it was easier to assess milestones in patient safety, systemsbased
management, and practice-based performance improvement (sub-competencies 16, 17,
and 19) compared to prior to the implementation of the debates. The debates have been a helpful
venue to teach EM residents about patient safety concepts, identifying medical errors, and process
improvement.
Volume 16, Issue 6, November 2015.
Joseph House, MD, et al.
Introduction: Faculty educational contributions are hard to quantify, but in an era of limited
resources it is essential to link funding with effort. The purpose of this study was to determine the
feasibility of an educational value unit (EVU) system in an academic emergency department and
to examine its effect on faculty behavior, particularly on conference attendance and completion of
trainee evaluations.
Methods: A taskforce representing education, research, and clinical missions was convened
to develop a method of incentivizing productivity for an academic emergency medicine faculty.
Domains of educational contributions were defined and assigned a value based on time expended.
A 30-hour EVU threshold for achievement was aligned with departmental goals. Targets included
educational presentations, completion of trainee evaluations and attendance at didactic conferences.
We compared performance during the year preceding implementation with the year following it.
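To make the EVU bookkeeping concrete, here is a minimal sketch of how such a system might tally hours against the 30-hour threshold. The activity names and per-activity values are hypothetical illustrations, not the taskforce’s actual weightings.

```python
# Illustrative EVU tally: each educational activity carries a time-based
# value; hours are summed per faculty member and checked against the
# 30-hour threshold. Values below are hypothetical.
EVU_VALUES = {
    "didactic_conference_hour": 1.0,
    "educational_presentation": 2.0,
    "trainee_evaluation_month": 1.5,
}

def total_evus(activities: dict[str, int]) -> float:
    """Sum EVU hours for one faculty member's logged activities."""
    return sum(EVU_VALUES[name] * count for name, count in activities.items())

faculty_log = {"didactic_conference_hour": 25, "educational_presentation": 3,
               "trainee_evaluation_month": 8}

evus = total_evus(faculty_log)
print(f"total EVUs = {evus:.1f}, meets 30-hour threshold: {evus >= 30}")
```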
Results: Faculty (N=50) attended significantly more hours of didactic conferences (22.7 v. 34.5
hours, p<0.005) and completed trainee evaluations in more months (5.9 v. 8.8 months, p<0.005). During
the pre-implementation year, 84% (42/50) met the 30-hour threshold, compared with 94% (47/50)
post-implementation (p=0.11). Mean total EVUs increased significantly (94.4 v. 109.8 hours,
p=0.04) resulting from increased conference attendance and evaluation completion without a change
in other categories.
Conclusion: In a busy academic department there are many work allocation pressures. An EVU
system integrated with an incentive structure to recognize faculty contributions increases the
importance of educational responsibilities. We propose an EVU model that could be implemented
and adjusted for differing departmental priorities at other academic departments.
Volume 16, Issue 6, November 2015.
Katherine Hiller, MD, MPH
Introduction: There is great variation in the knowledge base of emergency medicine (EM) interns
in July. The first objective knowledge assessment during residency does not occur until eight months
later, in February, when the American Board of EM (ABEM) administers the in-training examination
(ITE). In 2013, the National Board of Medical Examiners (NBME) released the EM Advanced Clinical
Examination (EM-ACE), an assessment intended for fourth-year medical students. Administration of
the EM-ACE to interns at the start of residency may provide an earlier opportunity to assess the new
EM residents’ knowledge base. The primary objective of this study was to determine the correlation
of the NBME EM-ACE, given early in residency, with the EM ITE. Secondary objectives included
determination of the correlation of the United States Medical Licensing Examination (USMLE) Step 1
or 2 scores with early intern EM-ACE and ITE scores and the effect, if any, of clinical EM experience
on examination correlation.
Methods: This was a multi-institutional, observational study. Entering EM interns at six residencies
took the EM-ACE in July 2013 and the ABEM ITE in February 2014. We collected scores for the EM-ACE
and ITE, age, gender, weeks of clinical EM experience in residency prior to the ITE, and USMLE
Step 1 and 2 scores. Pearson’s correlation and linear regression were performed.
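The core of this analysis (Pearson’s correlation plus a simple linear regression of ITE on EM-ACE) can be sketched as follows. The scores are simulated placeholders, not the study data, and the multivariable model is omitted.

```python
# Minimal sketch of the Methods analysis: Pearson correlation between
# EM-ACE and ITE scores, then a linear regression of ITE on EM-ACE.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
em_ace = rng.normal(70, 8, size=62)                  # scaled EM-ACE scores
ite = 40 + 0.4 * em_ace + rng.normal(0, 4, size=62)  # ITE percent correct

r, p = stats.pearsonr(em_ace, ite)
slope, intercept, r_value, p_value, stderr = stats.linregress(em_ace, ite)
print(f"Pearson r={r:.2f}, slope={slope:.2f} (p={p_value:.2g}), R^2={r_value**2:.2f}")
```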
Results: Sixty-two interns took the EM-ACE and the ITE. The Pearson’s correlation coefficient
between the ITE and the EM-ACE was 0.62. R-squared was 0.5 (adjusted 0.4). The coefficient of
determination was 0.41 (95% CI [0.3-0.8]). For every increase of one in the scaled EM-ACE score,
we observed a 0.4% increase in the EM in-training score. In a linear regression model using all
available variables (EM-ACE, gender, age, clinical exposure to EM, and USMLE Step 1 and Step 2
scores), only the EM-ACE score was significantly associated with the ITE (p<0.05). We observed
significant collinearity among the EM-ACE, ITE, and USMLE scores. Gender, age, and number of
weeks of EM prior to the ITE had no effect on the relationship between EM-ACE and the ITE.
Conclusion: Given early in the intern year, the EM-ACE score showed a positive correlation with the ITE.
Clinical EM experience prior to the in-training exam did not affect the correlation.