Volume 16, Issue 6, November 2015.
Jessica N. Schmidt, MD, MPH, et al.
Introduction: Quality resident education in point-of-care ultrasound (POC US) is becoming
increasingly important in emergency medicine (EM); however, the best methods to evaluate
competency in graduating residents have not been established. We sought to design and implement
a rigorous assessment of image acquisition and interpretation in POC US in a cohort of graduating
residents at our institution.
Methods: We evaluated nine senior residents in both image acquisition and image interpretation for
five core US skills (focused assessment with sonography for trauma (FAST), aorta, echocardiogram
(ECHO), pelvic, central line placement). Image acquisition was evaluated using an observed clinical skills exam (OSCE) directed assessment with a standardized patient model. Image interpretation was measured
with a multiple-choice exam including normal and pathologic images.
Results: Residents performed well on image acquisition, with an average score of 85.7% on core skills and 74% when advanced skills (ovaries, advanced ECHO, advanced aorta) were included. Residents scored well, though slightly lower, on image interpretation, with an average score of 76%.
Conclusion: Senior residents performed well on core POC US skills as evaluated with a rigorous
assessment tool. This tool may be further developed for use by other EM programs in evaluating graduating residents.
Volume 16, Issue 6, November 2015.
Erin Dehon, PhD, et al.
Introduction: Few studies have documented the value of mentoring for medical students, and
research has been limited to subjective outcomes (e.g., job satisfaction, perceived career preparation) rather than objective ones. This study examined whether having a mentor is associated with
match outcome (where a student matched based on their rank order list [ROL]).
Methods: We sent a survey link to all emergency medicine (EM) program coordinators to distribute
to their residents. EM residents were surveyed about whether they had a mentor during medical
school. Match outcome was assessed by asking residents where they matched on their ROL (e.g.,
first choice, fifth choice). They were also asked about rank in medical school, type of degree (MD vs.
DO), and performance on standardized tests. Residents who indicated having a mentor completed
the Mentorship Effectiveness Scale (MES), which evaluates behavioral characteristics of the
mentor and yields a total score. We assessed correlations among these variables using Pearson’s
correlation coefficient. A post-hoc analysis using an independent-samples t-test was conducted to compare differences in MES scores between those who matched to their first or second choice and those who matched to their third or higher choice.
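As an illustration of the analyses just described, the following is a minimal sketch in Python using scipy, with hypothetical data standing in for the survey responses; the variable names, group sizes, and simulated values are ours, not the study's. The group sizes are arbitrary but chosen to sum to 196, consistent with the t(194) reported in the Results.

```python
# Minimal sketch of the analyses described above, on hypothetical data;
# the simulated values are illustrative only, not the study's dataset.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
had_mentor = rng.integers(0, 2, size=297)      # 1 = had a mentor in medical school
match_rank = rng.integers(1, 8, size=297)      # position matched on the ROL

# Correlation between mentorship and match outcome (point-biserial,
# computed here as a Pearson correlation on the 0/1 indicator)
r, p = stats.pearsonr(had_mentor, match_rank)

# Post-hoc comparison of MES totals: matched 1st/2nd choice vs. 3rd or higher
mes_top = rng.normal(51.3, 10.1, size=120)     # hypothetical MES totals
mes_rest = rng.normal(43.6, 17.1, size=76)
t, p_t = stats.ttest_ind(mes_top, mes_rest)    # pooled-variance by default

# Cohen's d from the pooled standard deviation
n1, n2 = len(mes_top), len(mes_rest)
pooled_sd = np.sqrt(((n1 - 1) * mes_top.var(ddof=1) +
                     (n2 - 1) * mes_rest.var(ddof=1)) / (n1 + n2 - 2))
d = (mes_top.mean() - mes_rest.mean()) / pooled_sd
```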
Results: Participants were a convenience sample of 297 EM residents. Of those, 199 (67%)
reported having a mentor during medical school. Contrary to our hypothesis, there was no significant
correlation between having a mentor and match outcome (r=0.06, p=0.29). Match outcome was
associated with class rank (r=0.13, p=0.03), satisfaction with match outcome (r=-0.37, p<0.001),
and type of degree (r=0.12, p=0.04). Among those with mentors, a t-test revealed that the MES
score was significantly higher among those who matched to their first or second choice (M=51.31,
SD=10.13) compared to those who matched to their third or higher choice (M=43.59, SD=17.12),
t(194)=3.65, p<0.001, d=0.55.
Conclusion: Simply having a mentor during medical school is not associated with match outcome, but
having an effective mentor is associated with a more favorable match outcome among medical
students applying to EM programs.
Volume 16, Issue 6, November 2015.
Katja Goldflam, MD, et al.
Introduction: In 2012 the Accreditation Council for Graduate Medical Education (ACGME)
introduced the Next Accreditation System (NAS), which implemented milestones to assess the
competency of residents and fellows. While attending evaluation and feedback are crucial for resident
development, perhaps equally important is a resident’s self-assessment. If a resident does not
accurately self-assess, clinical and professional progress may be compromised. The objective of our
study was to compare emergency medicine (EM) resident milestone evaluation by EM faculty with
the same resident’s self-assessment.
Methods: This is an observational, cross-sectional study that was performed at an academic,
four-year EM residency program. Twenty-five randomly chosen residents were asked to complete milestone self-assessments using eight ACGME sub-competencies deemed by residency leadership as
representative of core EM principles. These residents were also evaluated by 20 faculty members.
The milestone levels were evaluated on a nine-point scale. We calculated the average difference
between resident self-ratings and faculty ratings, and used t-tests to determine the statistical significance of the difference in scores.
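The abstract does not specify which t-test variant was used; one plausible reading, sketched below in Python with hypothetical ratings, is a paired comparison of each resident's self-rating against the mean of that resident's faculty ratings.

```python
# Hypothetical sketch of the self- vs. faculty-rating comparison for one
# sub-competency; the abstract does not specify the exact t-test variant.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_residents = 18
# Self-ratings on the nine-point milestone scale (hypothetical)
self_ratings = rng.integers(4, 10, size=n_residents).astype(float)
# Mean of each resident's ~16 faculty ratings (hypothetical)
faculty_means = self_ratings - rng.uniform(0.5, 2.0, size=n_residents)

diff = self_ratings - faculty_means            # the average difference of interest
t, p = stats.ttest_rel(self_ratings, faculty_means)
print(f"mean difference = {diff.mean():.2f}, t = {t:.2f}, p = {p:.2g}")
```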
Results: Eighteen of the 25 residents completed self-evaluations. Each resident was assessed by an average
of 16 attendings (min=10, max=20). Residents gave themselves significantly higher milestone ratings than attendings did for each sub-competency examined (p<0.0001).
Conclusion: Residents overestimated their abilities in every sub-competency assessed. This
underscores the importance of feedback and assessment transparency. More attention needs to be
paid to methods by which residency leadership can make residents’ self-perception of their clinical
ability more congruent with that of their teachers and evaluators. The major limitation of our study is the small sample size of both residents and attendings.
Volume 16, Issue 6, November 2015.
Jay Khadpe, MD, et al.
Technologies and techniques for knowledge translation are rapidly evolving, and graduate medical education (GME) curricula need to keep pace with these advances to reach learners effectively. Technologies such as blogs, microblogs, wikis, podcasts, and vodcasts have the potential to expand upon current didactic models by adding dimensions and engaging learners in modalities not previously available.
Volume 16, Issue 6, November 2015.
Mark Favot, MD, et al.
Introduction: The curriculum in most emergency medicine (EM) clerkships includes very little
formalized training in point-of-care ultrasound. Medical schools have begun to implement ultrasound
training in the pre-clinical curriculum, and the EM clerkship is an appropriate place to build upon this
training. The objectives of this study were (1) to evaluate the effectiveness of implementing a focused ultrasound
curriculum within an established EM clerkship and (2) to obtain feedback from medical students
regarding the program.
Methods: We conducted a prospective cohort study of medical students during an EM clerkship
year from July 1, 2011, to June 30, 2012. Participants included fourth-year medical students
(n=45) enrolled in the EM clerkship at our institution. The students underwent a structured program
covering the focused assessment with sonography for trauma (FAST) exam and ultrasound-guided
vascular access. At the conclusion of the rotation, they took a 10-item multiple-choice test assessing knowledge and image interpretation skills. A cohort of EM residents (n=20) also took the multiple-choice test but did not participate in the training with the students. We used an independent-samples t-test to examine differences in test scores between the groups.
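The t(63) reported in the Results is consistent with a pooled-variance (Student's) independent-samples t-test, since df = n1 + n2 - 2 = 45 + 20 - 2 = 63. A minimal sketch with hypothetical scores:

```python
# Pooled-variance independent-samples t-test; df = 45 + 20 - 2 = 63,
# matching the t(63) reported. Scores here are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
student_scores = rng.integers(6, 11, size=45)   # n=45 students, 10-item test
resident_scores = rng.integers(5, 10, size=20)  # n=20 EM residents

# equal_var=True selects the classic pooled-variance form of the test
t, p = stats.ttest_ind(student_scores, resident_scores, equal_var=True)
print(f"t({len(student_scores) + len(resident_scores) - 2}) = {t:.2f}, p = {p:.3f}")
```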
Results: The medical students in the ultrasound training program scored significantly higher on
the multiple-choice test than the EM residents, t(63)=2.3, p<0.05. The feedback from the students
indicated that 82.8% were using ultrasound on their current rotations, and the majority (55.2%) felt
that the one-on-one scanning shift was the most valuable aspect of the curriculum.
Discussion: Our study supports an ultrasound training program for medical students in the EM clerkship. After completing the training, students were able to perform at least as well as EM residents on a knowledge-based exam.
Volume 16, Issue 6, November 2015.
Mira Mamtani, MD, et al.
Graduate medical education is increasingly focused on patient safety and quality improvement; training programs must adapt their curricula to address these changes. We propose a novel curriculum for emergency medicine (EM) residency training programs, called “EM Debates,” specifically addressing patient safety, systems-based management, and practice-based performance improvement. Following
implementation of this educational curriculum, we performed a cross-sectional study to evaluate the
curriculum through resident self-assessment. Additionally, we performed a cross-sectional study to determine the ED clinical competency committee’s (CCC) ability to assess residents on specific competencies. Overall, residents were very positive toward the implementation of the debates. Of those
participating in a debate, 71% felt that it improved their individual performance within a specific topic,
and 100% of those who led a debate felt that they could propose an evidence-based approach to
a specific topic. The CCC found that it was easier to assess milestones in patient safety, systemsbased
management, and practice-based performance improvement (sub-competencies 16, 17,
and 19) than before the implementation of the debates. The debates have been a helpful venue for teaching EM residents about patient safety concepts, the identification of medical errors, and process improvement.
Volume 16, Issue 6, November 2015.
Joseph House, MD, et al.
Introduction: Faculty educational contributions are hard to quantify, but in an era of limited
resources it is essential to link funding with effort. The purpose of this study was to determine the
feasibility of an educational value unit (EVU) system in an academic emergency department and
to examine its effect on faculty behavior, particularly on conference attendance and completion of
trainee evaluations.
Methods: A taskforce representing education, research, and clinical missions was convened
to develop a method of incentivizing productivity for an academic emergency medicine faculty.
Domains of educational contributions were defined and assigned a value based on time expended.
A 30-hour EVU threshold for achievement was aligned with departmental goals. Targets included
educational presentations, completion of trainee evaluations, and attendance at didactic conferences. We compared performance during the years preceding and following implementation.
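As an illustration, the following is a minimal Python sketch of pre/post comparisons of this kind, with hypothetical per-faculty EVU tallies; the abstract does not state whether the study's tests were paired, so the paired form below is an assumption based on the same faculty being measured in both years.

```python
# Hypothetical sketch of the pre/post comparisons; whether the study's
# tests were paired is not stated in the abstract, so this is one reading.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_faculty = 50
pre_evus = rng.normal(94.4, 25.0, size=n_faculty)    # hypothetical totals
post_evus = rng.normal(109.8, 25.0, size=n_faculty)

# Same faculty measured in both years, so a paired comparison is natural
t, p = stats.ttest_rel(pre_evus, post_evus)

# Proportion meeting the 30-hour threshold: 42/50 pre vs. 47/50 post.
# A Fisher exact test on the 2x2 table is one reasonable choice here.
table = [[42, 8], [47, 3]]
odds_ratio, p_threshold = stats.fisher_exact(table)
```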
Results: Faculty (N=50) attended significantly more didactic conferences (22.7 v. 34.5 hours, p<0.005) and completed more trainee evaluations (5.9 v. 8.8 months of evaluations completed, p<0.005). During
the pre-implementation year, 84% (42/50) met the 30-hour threshold, compared with 94% (47/50) in the post-implementation year (p=0.11). Mean total EVUs increased significantly (94.4 v. 109.8 hours, p=0.04), resulting from increased conference attendance and evaluation completion, without a change
in other categories.
Conclusion: In a busy academic department, there are many work-allocation pressures. An EVU
system integrated with an incentive structure to recognize faculty contributions increases the
importance of educational responsibilities. We propose an EVU model that other academic departments could implement and adjust to their own priorities.
Volume 16, Issue 6, November 2015.
Katherine Hiller, MD, MPH
Introduction: There is great variation in the knowledge base of emergency medicine (EM) interns
in July. The first objective knowledge assessment during residency does not occur until eight months
later, in February, when the American Board of EM (ABEM) administers the in-training examination
(ITE). In 2013, the National Board of Medical Examiners (NBME) released the EM Advanced Clinical
Examination (EM-ACE), an assessment intended for fourth-year medical students. Administration of
the EM-ACE to interns at the start of residency may provide an earlier opportunity to assess the new
EM residents’ knowledge base. The primary objective of this study was to determine the correlation
of the NBME EM-ACE, given early in residency, with the EM ITE. Secondary objectives included determining the correlation of United States Medical Licensing Examination (USMLE) Step 1 or 2 scores with early intern EM-ACE and ITE scores, and assessing the effect, if any, of clinical EM experience on the correlation between the examinations.
Methods: This was a multi-institutional, observational study. Entering EM interns at six residencies
took the EM-ACE in July 2013 and the ABEM ITE in February 2014. We collected scores for the EM-ACE and ITE, age, gender, weeks of clinical EM experience in residency prior to the ITE, and USMLE
Step 1 and 2 scores. Pearson’s correlation and linear regression were performed.
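A minimal sketch of the correlation and simple regression step in Python, with hypothetical paired scores; scipy's linregress returns the slope, r, and hence R-squared together. The simulated relationship below is illustrative only, not the study data.

```python
# Hypothetical sketch of the correlation and simple linear regression;
# the simulated scores are illustrative only, not the study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
em_ace = rng.normal(70.0, 8.0, size=62)              # scaled EM-ACE scores
ite = 0.4 * em_ace + rng.normal(40.0, 3.0, size=62)  # hypothetical ITE scores

res = stats.linregress(em_ace, ite)
print(f"r = {res.rvalue:.2f}, R^2 = {res.rvalue**2:.2f}")
print(f"slope = {res.slope:.2f} ITE points per EM-ACE point "
      f"(stderr {res.stderr:.2f})")
```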
Results: Sixty-two interns took the EM-ACE and the ITE. The Pearson’s correlation coefficient
between the ITE and the EM-ACE was 0.62. R-squared was 0.5 (adjusted 0.4). The regression coefficient was 0.41 (95% CI [0.3-0.8]): for every one-point increase in the scaled EM-ACE score, we observed a 0.4% increase in the EM in-training score. In a linear regression model using all
available variables (EM-ACE, gender, age, clinical exposure to EM, and USMLE Step 1 and Step 2
scores), only the EM-ACE score was significantly associated with the ITE (p<0.05). We observed
significant collinearity among the EM-ACE, ITE, and USMLE scores. Gender, age, and number of weeks of EM prior to the ITE had no effect on the relationship between the EM-ACE and the ITE.
Conclusion: Given early in the intern year, the EM-ACE score showed a positive correlation with the ITE score.
Clinical EM experience prior to the in-training exam did not affect the correlation.