Educational Research and Practice

Growing a Specialty-Specific Community of Practice in Education Scholarship

Volume 16, Issue 6, November 2015.
Jeffrey N. Love, MD, et al.

Emergency medicine (EM) educators have many
masters. These include our hospital administrations, which
expect efficient patient care reflecting the priorities of safety
and quality; the Accreditation Council for Graduate Medical
Education, which has introduced a new competency-based
standard by which our learners must be educated; and, last but
not least, our learners, who are using new educational modalities
based on expanding digital platforms. To be successful,
educators must satisfy each of these masters against a
backdrop of increasing regulation, decreasing funding, and
information technology that appears to decrease our time with
patients, and perhaps with learners, in clinical practice.

Read More

Emergency Medicine: On the Frontlines of Medical Education Transformation

Volume 16, Issue 6, November 2015.
Eric S. Holmboe, MD

Emergency medicine (EM) has always been on the frontlines of healthcare in the United States.
I experienced this reality first hand as a young general medical officer assigned to an emergency
department (ED) in a small naval hospital in the 1980s. For decades the ED has been the only site
where patients could not be legally denied care. Despite increased insurance coverage for millions of
Americans as a result of the Affordable Care Act, ED directors report an increase in patient volumes
in a recent survey.1
EDs care for patients from across the socioeconomic spectrum suffering from a
wide range of clinical conditions. As a result, the ED is still one of few components of the American
healthcare system where social justice is enacted on a regular basis. Constant turbulence in the
healthcare system, major changes in healthcare delivery, technological advances and shifting
demographic trends necessitate that EM constantly adapt and evolve as a discipline in this complex
environment.

Read More

Morbidity and Mortality Conference in Emergency Medicine Residencies and the Culture of Safety

Volume 16, Issue 6, November 2015.
Emily L. Aaronson, MD, et al.

Introduction: Morbidity and mortality conferences (M+M) are a traditional part of residency training
and are mandated by the Accreditation Council for Graduate Medical Education. This study’s objective
was to determine the goals, structure, and the prevalence of practices that foster strong safety
cultures in the M+Ms of U.S. emergency medicine (EM) residency programs.
Methods: The authors conducted a national survey of U.S. EM residency program directors. The
survey instrument evaluated five domains of M+M (Organization and Infrastructure; Case Finding;
Case Selection; Presentation; and Follow up) based on the validated Agency for Healthcare
Research & Quality Safety Culture survey.
Results: There was an 80% (151/188) response rate. The primary objectives of M+M were
discussing adverse outcomes (53/151, 35%), identifying systems errors (47/151, 31%) and
identifying cognitive errors (26/151, 17%). Fifty-six percent (84/151) of institutions have anonymous
case submission, with 10% (15/151) maintaining complete anonymity during the presentation and
21% (31/151) maintaining partial anonymity. Forty-seven percent (71/151) of programs report a
formal process to follow up on systems issues identified at M+M. Forty-four percent (67/151) of
programs report regular debriefing with residents who have had their cases presented.
Conclusion: The structure and goals of M+Ms in EM residencies vary widely. Many programs lack
features of M+M that promote a non-punitive response to error, such as anonymity. Other programs
lack features that support strong safety cultures, such as following up on systems issues or reporting
back to residents on improvements. Further research is warranted to determine if M+M structure is
related to patient safety culture in residency programs.

Read More

Are Live Ultrasound Models Replaceable? Traditional versus Simulated Education Module for FAST Exam

Volume 16, Issue 6, November 2015.
Suzanne Bentley, MD, MPH, et al.

Introduction: The focused assessment with sonography for trauma (FAST) is a commonly used and
life-saving tool in the initial assessment of trauma patients. The recommended emergency medicine
(EM) curriculum includes ultrasound and studies show the additional utility of ultrasound training for
medical students. EM clerkships vary and often do not contain formal ultrasound instruction. Time
constraints for facilitating lectures and hands-on learning of ultrasound are challenging. Limitations
on didactics call for development and inclusion of novel educational strategies, such as simulation.
The objective of this study was to compare test scores, survey responses, and ultrasound
performance between medical students trained on an ultrasound simulator and those trained via a
traditional, hands-on patient format.
Methods: This was a prospective, blinded, controlled educational study focused on EM clerkship
medical students. After all received a standardized lecture with pictorial demonstration of image
acquisition, students were randomized into two groups: control group receiving traditional training
method via practice on a human model and intervention group training via practice on an ultrasound
simulator. Participants were tested and surveyed on indications and interpretation of FAST and training
and confidence with image interpretation and acquisition before and after this educational activity.
Evaluation of FAST skills was performed on a human model to emulate patient care and practical skills
were scored via objective structured clinical examination (OSCE) with critical action checklist.
Results: There was no significant difference between control group (N=54) and intervention group
(N=39) on pretest scores, prior ultrasound training/education, or ultrasound comfort level in general
or on FAST. All students (N=93) showed significant improvement from pre- to post-test scores and
significant improvement in comfort level using ultrasound in general and on FAST (p<0.001). There
was no significant difference between groups on OSCE scores of FAST on a live model. Overall, no
differences were demonstrated between groups trained on human models versus simulator.
Discussion: There was no difference between groups in knowledge-based ultrasound test scores,
surveyed comfort levels with ultrasound, or students’ abilities to perform and interpret the FAST on
human models.
Conclusion: These findings suggest that an ultrasound simulator is a suitable alternative method
for ultrasound education. Additional uses of ultrasound simulation should be explored in the future.

Read More

Teaching and Assessing ED Handoffs: A Qualitative Study Exploring Resident, Attending, and Nurse Perceptions

Volume 16, Issue 6, November 2015.
Moira Flanigan, BA, et al.

Introduction: The Accreditation Council for Graduate Medical Education requires that residency
programs ensure resident competency in performing safe, effective handoffs. Understanding
resident, attending, and nurse perceptions of the key elements of a safe and effective emergency
department (ED) handoff is a crucial step to developing feasible, acceptable educational
interventions to teach and assess this fundamental competency. The aim of our study was to identify
the essential themes of ED-based handoffs and to explore the key cultural and interprofessional
themes that may be barriers to developing and implementing successful ED-based educational
handoff interventions.
Methods: Using a grounded theory approach and constructivist/interpretivist research paradigm, we
analyzed data from three primary and one confirmatory focus groups (FGs) at an urban, academic
ED. FG protocols were developed using open-ended questions that sought to understand what
participants felt were the crucial elements of ED handoffs. ED residents, attendings, a physician
assistant, and nurses participated in the FGs. FGs were observed, hand-transcribed, audio-recorded,
and subsequently transcribed. We analyzed data using an iterative process of theme and
subtheme identification. Saturation was reached during the third FG, and the fourth confirmatory
group reinforced the identified themes. Two team members analyzed the transcripts separately and
identified the same major themes.
Results: ED providers identified that crucial elements of ED handoff include the following: 1) Culture
(provider buy-in, openness to change, shared expectations of sign-out goals); 2) Time (brevity,
interruptions, waiting); 3) Environment (physical location, ED factors); 4) Process (standardization,
information order, tools).
Conclusion: Key participants in the ED handoff process perceive that the crucial elements of
intershift handoffs involve the themes of culture, time, environment, and process. Attention to these
themes may improve the feasibility and acceptance of educational interventions that aim to teach
and assess handoff competency.

Read More

The Impact of Medical Student Participation in Emergency Medicine Patient Care on Departmental Press Ganey Scores

Volume 16, Issue 6, November 2015.
Aaron W. Bernard, MD, et al.

Introduction: Press Ganey (PG) scores are used by public entities to gauge the quality of patient
care from medical facilities in the United States. Academic health centers (AHCs) are charged
with educating the new generation of doctors, but rely heavily on PG scores for their business
operation. AHCs need to know what impact medical student involvement has on patient care and
their PG scores.
Purpose: We sought to identify the impact students have on emergency department (ED) PG scores
related to overall visit and the treating physician’s performance.
Methods: This was a retrospective, observational cohort study of discharged ED patients who
completed PG satisfaction surveys at one academic, and one community-based ED. Outcomes
were responses to questions about the overall visit assessment and doctor’s care, measured on a
five-point scale. We compared the distribution of responses for each question through proportions
with 95% confidence intervals (CIs) stratified by medical student participation. For each question, we
constructed a multivariable ordinal logistic regression model including medical student involvement
and other independent variables known to affect PG scores.
Results: We analyzed 2,753 encounters, of which 259 (9.4%) had medical student involvement. For
all questions, there were no appreciable differences in patient responses when stratifying by medical
student involvement. In regression models, medical student involvement was not associated with
PG score for any outcome, including overall rating of care (odds ratio [OR] 1.10, 95% CI [0.90-1.34])
or likelihood of recommending our EDs (OR 1.07, 95% CI [0.86-1.32]). Findings were similar when
each ED was analyzed individually.
Conclusion: We found that medical student involvement in patient care did not adversely impact
ED PG scores in discharged patients. Neither overall scores nor physician-specific scores were
impacted. Results were similar at both the academic medical center and the community teaching
hospital at our institution.
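The comparison above rests on proportions with 95% confidence intervals, stratified by medical student involvement. As a minimal illustrative sketch in pure Python (not the authors' code; their primary analysis was a multivariable ordinal logistic regression), a simple Wald interval for one such proportion can be computed as follows:

```python
import math


def proportion_ci(successes: int, n: int, z: float = 1.96):
    """Wald 95% confidence interval for a simple proportion.

    Illustrative only: the study's stratified comparisons would use
    this kind of interval per response category, alongside regression
    modeling for adjusted estimates.
    """
    p = successes / n
    se = math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - z * se), min(1.0, p + z * se)


# Example using a figure from the abstract: 259 of 2,753 encounters
# had medical student involvement (about 9.4%).
p, lo, hi = proportion_ci(259, 2753)
```

The Wald interval is the textbook large-sample form; with thousands of survey responses, as here, it behaves essentially identically to more refined intervals such as Wilson's.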

Read More

What is the Prevalence and Success of Remediation of Emergency Medicine Residents?

Volume 16, Issue 6, November 2015.
Mark Silverberg, MD

Introduction: The primary objective of this study was to determine the prevalence of remediation,
competency domains for remediation, the length, and success rates of remediation in emergency
medicine (EM).
Methods: We developed the survey in SurveyMonkey™ with attention to content and response
process validity. EM program directors reported how many residents had been placed on
remediation in the last three years. Details regarding the remediation were collected, including
indication, length and success. We reported descriptive data and estimated a multinomial logistic
regression model.
Results: We obtained 126/158 responses (79.7%). Ninety percent of programs had at least one
resident on remediation in the last three years. The prevalence of remediation was 4.4%. Indications
for remediation ranged from difficulties with one core competency to all six competencies (mean
1.9). The most common were medical knowledge (MK) (63.1% of residents), patient care (46.6%)
and professionalism (31.5%). Mean length of remediation was eight months (range 1-36 months).
Remediation was successful for 59.9% of remediated residents; 31.3% reported ongoing remediation. In
8.7%, remediation was deemed “unsuccessful.” Training year at time of identification for remediation
(post-graduate year [PGY] 1), longer time spent in remediation, and concerns with practice-based
learning (PBLI) and professionalism were found to have statistically significant association with
unsuccessful remediation.
Conclusion: Remediation in EM residencies is common, with the most common areas being MK
and patient care. The majority of residents are successfully remediated. PGY level, length of time
spent in remediation, and the remediation of the competencies of PBLI and professionalism were
associated with unsuccessful remediation.

Read More

Results from the First Year of Implementation of CONSULT: Consultation with Novel Methods and Simulation for UME Longitudinal Training

Volume 16, Issue 6, November 2015.
Keme Carter, MD, et al.

Introduction: An important area of communication in healthcare is the consultation. Existing literature
suggests that formal training in consultation communication is lacking. We aimed to conduct a targeted
needs assessment of third-year students on their experience calling consultations, and based on these
results, develop, pilot, and evaluate the effectiveness of a consultation curriculum for different learner
levels that can be implemented as a longitudinal curriculum.
Methods: Baseline needs assessment data were gathered using a survey completed by third-year
students at the conclusion of the clinical clerkships. The survey assessed students’ knowledge of
the standardized consultation, experience and comfort calling consultations, and previous instruction
received on consultation communication. Implementation of the consultation curriculum began the
following academic year. Second-year students were introduced to Kessler’s 5 Cs consultation
model through a didactic session consisting of a lecture, viewing of “trigger” videos illustrating
standardized and informal consults, followed by reflection and discussion. Curriculum effectiveness
was assessed through pre- and post- curriculum surveys that assessed knowledge of and comfort
with the consultation process. Fourth-year students participated in a consultation curriculum that
provided instruction on the 5 Cs model and allowed for continued practice of consultation skills through
simulation during the Emergency Medicine clerkship. Proficiency in consult communication in this
cohort was assessed using two assessment tools, the Global Rating Scale and the 5 Cs Checklist.
Results: The targeted needs assessment of third-year students indicated that 93% of students
have called a consultation during their clerkships, but only 24% received feedback. Post-curriculum,
second-year students identified more components of the 5 Cs model (4.04 vs. 4.81, p<0.001) and
reported greater comfort with the consultation process (0% vs. 69%, p<0.001). Post-curriculum,
fourth-year students scored higher in all criteria measuring consultation effectiveness (p<0.001 for
all) and included more necessary items in simulated consultations (62% vs. 77%, p<0.001).
Conclusion: While third-year medical students reported calling consultations, few felt comfortable
and formal training was lacking. A curriculum in consult communication for different levels of learners
can improve knowledge and comfort prior to clinical clerkships and improve consultation skills prior
to residency training.

Read More

Does the Concept of the “Flipped Classroom” Extend to the Emergency Medicine Clinical Clerkship?

Volume 16, Issue 6, November 2015.
Corey Heitz, MD, et al.

Introduction: Linking educational objectives and clinical learning during clerkships can be difficult.
Clinical shifts during emergency medicine (EM) clerkships provide a wide variety of experiences,
some of which may not be relevant to recommended educational objectives. Students can be
directed to standardize their clinical experiences, and this improves performance on examinations.
We hypothesized that applying a “flipped classroom” model to the clinical clerkship would improve
performance on multiple-choice testing when compared to standard learning.
Methods: Students at two institutions were randomized to complete two of four selected EM
clerkship topics in a “flipped fashion,” and two others in a standard fashion. For flipped topics,
students were directed to complete chief complaint-based asynchronous modules prior to a shift,
during which they were directed to focus on the chief complaint. For the other two topics, modules
were to be performed at the students’ discretion, and shifts would not have a theme. At the end
of the four-week clerkship, a 40-question multiple-choice examination was administered with 10
questions per topic. We compared performance on flipped topics with those performed in standard
fashion. Students were surveyed on perceived effectiveness, ability to follow the protocol, and
willingness of preceptors to allow a chief-complaint focus.
Results: Sixty-nine students participated; examination scores for 56 were available for analysis. For
the primary outcome, no difference was seen between the flipped method and standard (p=0.494).
A mixed model approach showed no effect of flipped status, protocol adherence, or site of rotation
on the primary outcome of exam scores. Students rated the concept of the flipped clerkship highly
(3.48/5). Almost one third (31.1%) of students stated that they were unable to adhere to the protocol.
Conclusion: Preparation for a clinical shift with pre-assigned, web-based learning modules followed
by an attempt at chief-complaint-focused learning during a shift did not result in improvements in
performance on a multiple-choice assessment of knowledge; however, one third of participants did
not adhere strictly to the protocol. Future investigations should ensure performance of pre-assigned
learning as well as clinical experiences, and consider alternate measures of knowledge.

Read More

Coordinating a Team Response to Behavioral Emergencies in the Emergency Department: A Simulation-Enhanced Interprofessional Curriculum

Volume 16, Issue 6, November 2015.
Ambrose H. Wong, MD, et al.

Introduction: While treating potentially violent patients in the emergency department (ED), both patients
and staff may be subject to unintentional injury. Emergency healthcare providers are at the greatest risk
of experiencing physical and verbal assault from patients. Preliminary studies have shown that a team-based
approach with targeted staff training has significant positive outcomes in mitigating violence in
healthcare settings. Staff attitudes toward patient aggression have also been linked to workplace safety,
but current literature suggests that providers experience fear and anxiety while caring for potentially
violent patients. The objectives of the study were (1) to develop an interprofessional curriculum focused
on improving teamwork and staff attitudes toward patient violence using simulation-enhanced education
for ED staff, and (2) to assess attitudes toward patient aggression at the pre- and post-curriculum
implementation stages using a survey-based study design.
Methods: Formal roles and responsibilities for each member of the care team, including positioning
during restraint placement, were predefined in conjunction with ED leadership. Emergency medicine
residents, nurses, and hospital police officers were assigned to interprofessional teams. The curriculum
started with an introductory lecture discussing de-escalation techniques and restraint placement as
well as core tenets of interprofessional collaboration. Next, we conducted two simulation scenarios
using standardized participants (SPs) and structured debriefing. The study consisted of a survey-based
design comparing pre- and post-intervention responses via a paired Student t-test to assess changes
in staff attitudes. We used the validated Management of Aggression and Violence Attitude Scale
(MAVAS), consisting of 30 Likert-scale questions grouped into four themed constructs.
Results: One hundred sixty-two ED staff members completed the course with >95% staff
participation, generating a total of 106 paired surveys. Constructs for internal/biomedical factors,
external/staff factors, and situational/interactional perspectives on patient aggression significantly
improved (p<0.0001, p<0.002, and p<0.0001, respectively). Staff attitudes toward management of patient
aggression did not significantly change (p=0.542). Multiple quality improvement initiatives were
successfully implemented, including the creation of an interprofessional crisis management alert and
response protocol. Staff members described appreciation for our simulation-based curriculum and
welcomed the interaction with SPs during their training.
Conclusion: A structured simulation-enhanced interprofessional intervention was successful in
improving multiple facets of ED staff attitudes toward behavioral emergency care.

Read More

Assessing the Impact of Video-based Training on Laceration Repair: A Comparison to the Traditional Workshop Method

Volume 16, Issue 6, November 2015.
Ambrose H. Wong, MD, et al.

Read More

Development of an Objective Structured Clinical Examination for Assessment of Clinical Skills in an Emergency Medicine Clerkship

Volume 16, Issue 6, November 2015.
Sharon Bord, MD, et al.

Assessment of medical students in their emergency
medicine (EM) clerkship is often based on clinical shift
evaluations and written examinations. Clinical evaluations
offer some insight into students’ ability to apply knowledge to
clinical problems, but are notoriously unreliable, with score
variance that may be driven as much by error as by actual
student performance.

Read More

Direct Observation Assessment of Milestones: Problems with Reliability

Volume 16, Issue 6, November 2015.
Meghan Schott, MD, et al.

Introduction: Emergency medicine (EM) milestones are used to assess residents’ progress. While
some milestone validity evidence exists, there is a lack of standardized tools available to reliably
assess residents. Inherent to this is a concern that we may not be truly measuring what we intend
to assess. The purpose of this study was to design a direct observation milestone assessment
instrument supported by validity and reliability evidence. In addition, such a tool would further lend
validity evidence to the EM milestones by demonstrating their accurate measurement.
Methods: This was a multi-center, prospective, observational validity study conducted at eight
institutions. The Critical Care Direct Observation Tool (CDOT) was created to assess EM residents
during resuscitations. This tool was designed using a modified Delphi method focused on content,
response process, and internal structure validity. Paying special attention to content validity, the
CDOT was developed by an expert panel, maintaining the use of the EM milestone wording. We
built response process and internal consistency by piloting and revising the instrument. Raters
were faculty who routinely assess residents on the milestones. A brief training video on utilization
of the instrument was completed by all. Raters used the CDOT to assess simulated videos of three
residents at different stages of training in a critical care scenario. We measured reliability using
Fleiss’ kappa and interclass correlations.
Results: Two versions of the CDOT were used: one used the milestone levels as global rating
scales with anchors, and the second reflected a current trend of a checklist response system.
Although the raters who used the CDOT routinely rate residents in their practice, they did not score
the residents’ performances in the videos comparably, which led to poor reliability. The Fleiss’ kappa
of each of the items measured on both versions of the CDOT was near zero.
Conclusion: The validity and reliability of the current EM milestone assessment tools have yet to
be determined. This study is a rigorous attempt to collect validity evidence in the development of
a direct observation assessment instrument. However, despite strict attention to validity evidence,
inter-rater reliability was low. The potential sources of reducible variance include rater- and
instrument-based error. Based on this study, there may be concerns for the reliability of other EM
milestone assessment tools that are currently in use.
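The abstract reports near-zero Fleiss' kappa across raters. As a hedged, self-contained sketch of that statistic (pure Python on made-up counts; not the CDOT instrument or the study's actual ratings), kappa can be computed from a subjects-by-categories table of rater counts:

```python
def fleiss_kappa(table):
    """Fleiss' kappa for inter-rater agreement.

    table[i][j] = number of raters who assigned subject i to category j;
    each subject must be rated by the same number of raters.
    Illustrative implementation only, using toy data below.
    """
    n_subjects = len(table)
    n_raters = sum(table[0])
    # Observed per-subject agreement P_i
    p_i = [
        (sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
        for row in table
    ]
    p_bar = sum(p_i) / n_subjects
    # Chance agreement from the marginal category proportions
    n_categories = len(table[0])
    totals = [sum(row[j] for row in table) for j in range(n_categories)]
    p_j = [t / (n_subjects * n_raters) for t in totals]
    p_e = sum(p * p for p in p_j)
    return (p_bar - p_e) / (1 - p_e)


# Three raters, two subjects, two categories, perfect agreement:
# kappa = 1.0; disagreement pushes kappa toward (or below) zero.
kappa = fleiss_kappa([[3, 0], [0, 3]])
```

A kappa near zero, as found for both CDOT versions, means the raters agreed no more often than chance marginal frequencies would predict, regardless of how carefully the item content was developed.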

Read More

Development and Implementation of an Emergency Medicine Podcast for Medical Students: EMIGcast

Volume 16, Issue 6, November 2015.
Andrew Lichtenheld, BS, et al.

Podcasts, episodic digital audio recordings downloaded
through web syndication or streamed online, have
been shown to be an effective instructional method in
undergraduate health professions education, and are
increasingly used for self-directed learning. Emergency
medicine (EM) has embraced podcasting: over 80% of
EM residents report listening to podcasts and a substantial
number identify podcasts as the most valuable use
of their educational time. Despite proven efficacy in
undergraduate medical education and remarkable popularity
with EM residents and attendings, there remain few EM
podcasts targeted to medical students. Given that podcast
effectiveness correlates with how well content matches the
listener needs, a podcast specific to EM-bound medical
students may optimally engage this target audience.

Read More

Ready for Discharge? A Survey of Discharge Transition-of-Care Education and Evaluation in Emergency Medicine Residency Programs

Volume 16, Issue 6, November 2015.
Fiona E. Gallahue, MD, et al.

This study aimed to assess current education and practices of emergency medicine (EM) residents
as perceived by EM program directors to determine if there are deficits in resident discharge handoff
training. This survey study was guided by the Kern model for medical curriculum development.
A six-member Council of EM Residency Directors (CORD) Transitions of Care task force of EM
physicians performed these steps and constructed a survey. The survey was distributed to program
residency directors via the CORD listserve and/or direct contact. There were 119 responses to the
survey, which were collected using an online survey tool. Over 71% of the 167 Accreditation Council
for Graduate Medical Education (ACGME)-accredited EM residency programs were represented. Of
those responding, 42.9% of programs reported formal training regarding discharges during initial
orientation and 5.9% reported structured curriculum outside of orientation. A majority (73.9%) of
programs reported that EM residents were not routinely evaluated on their discharge proficiency.
Despite the ACGME requirements requiring formal handoff curriculum and evaluation, many
programs do not provide formal curriculum on the discharge transition of care or evaluate EM
residents on their discharge proficiency.

Read More

Combined Versus Detailed Evaluation Components in Medical Student Global Rating Indexes

Volume 16, Issue 6, November 2015.
Kim L. Askew, MD, et al.

Introduction: To determine if there is any correlation between any of the 10 individual components
of a global rating index on an emergency medicine (EM) student clerkship evaluation form. If there
is correlation, to determine if a weighted average of highly correlated components loses predictive
value for the final clerkship grade.
Methods: This study reviewed medical student evaluations collected over two years of a required
fourth-year rotation in EM. Evaluation cards, comprised of a detailed 10-part evaluation, were
completed after each shift. We used a correlation matrix between evaluation category average
scores, using Spearman’s rho, to determine if there was any correlation of the grades between any
of the 10 items on the evaluation form.
Results: A total of 233 students completed the rotation over the two-year period of the study. There
were strong correlations (>0.80) between assessment components of medical knowledge, history
taking, physical exam, and differential diagnosis. There were also strong correlations between
assessment components of team rapport, patient rapport, and motivation. When these highly
correlated components were combined to produce a four-component model, linear regression
demonstrated similar predictive power in terms of final clerkship grade (R²=0.71, 95% CI 0.65–0.77
and R²=0.69, 95% CI 0.63–0.76 for the full and reduced models, respectively).
Conclusion: This study revealed that several components of the evaluation card had a high degree
of correlation. Combining the correlated items, a reduced model containing four items (clinical skills,
interpersonal skills, procedural skills, and documentation) was as predictive of the student’s clinical
grade as the full 10-item evaluation. Clerkship directors should be aware of the performance of their
individual global rating scales when assessing medical student performance, especially if attempting
to measure greater than four components.
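The correlation matrix described above used Spearman's rho. As a minimal self-contained sketch (illustrative only; the study correlated averages across ten evaluation categories, not the toy vectors here), rho is simply the Pearson correlation of the rank vectors, with ties given average ranks:

```python
def _ranks(values):
    """1-based average ranks; tied values share the mean of their ranks."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of sorted positions i..j, 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks


def spearman_rho(x, y):
    """Spearman's rho = Pearson correlation computed on the ranks."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5


# Any strictly increasing relationship gives rho = 1.0, which is why
# components scoring in lockstep (e.g., >0.80 here) carry little
# independent information on a global rating card.
rho = spearman_rho([1, 2, 3, 4], [10, 20, 30, 40])
```

Because rho depends only on ranks, it is the natural choice for ordinal evaluation-card scores, where the spacing between adjacent grades is not meaningful.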

Read More

Effect of Doximity Residency Rankings on Residency Applicants’ Program Choices

Volume 16, Issue 6, November 2015.
Aimee M. Rolston, MD, MS, et al.

Introduction: Choosing a residency program is a stressful and important decision. Doximity
released residency program rankings by specialty in September 2014. This study sought to
investigate the impact of those rankings on residency application choices made by fourth year
medical students.
Methods: A 12-item survey was administered in October 2014 to fourth year medical students
at three schools. Students indicated their specialty, awareness of and perceived accuracy of the
rankings, and the rankings’ impact on the programs to which they chose to apply. Descriptive
statistics were reported for all students and those applying to Emergency Medicine (EM).
Results: A total of 461 (75.8%) students responded, with 425 applying in one of the 20 Doximity
ranked specialties. Of the 425, 247 (58%) were aware of the rankings and 177 looked at them. On
a 1-100 scale (100=very accurate), students reported a mean ranking accuracy rating of 56.7 (SD
20.3). Forty-five percent of students who looked at the rankings modified the number of programs to
which they applied. The majority added programs. Of the 47 students applying to EM, 18 looked at
the rankings and 33% changed their application list with most adding programs.
Conclusion: The Doximity rankings had real effects on students applying to residencies as almost
half of students who looked at the rankings modified their program list. Additionally, students found
the rankings to be moderately accurate. Graduating students might benefit from emphasis on more
objective characterization of programs to assess in light of their own interests and personal/career
goals.

Read More

Introducing Medical Students into the Emergency Department: The Impact upon Patient Satisfaction

Volume 16, Issue 6, November 2015.
Christopher Kiefer, MD, et al.

Introduction: Performance on patient satisfaction surveys is becoming increasingly important for
practicing emergency physicians and the introduction of learners into a new clinical environment
may impact such scores. This study aimed to quantify the impact of introducing fourth-year medical
students on patient satisfaction in two university-affiliated community emergency departments (EDs).
Methods: Two community-based EDs in the Indiana University Health (IUH) system began
hosting medical students in March 2011 and October 2013, respectively. We analyzed responses
from patient satisfaction surveys at each site for seven months before and after the introduction
of students. Two components of the survey, “Would you recommend this ED to your friends
and family?” and “How would you rate this facility overall?” were selected for analysis, as they
represent the primary questions reviewed by the Centers for Medicare &amp; Medicaid Services (CMS) as part of
value-based purchasing. We evaluated the percentage of positive responses for adult, pediatric,
and all patients combined.
Results: Analysis did not reveal a statistically significant difference in the percentage of positive
responses to the “would you recommend” question at either clinical site for the adult and
pediatric subgroups or for the all-patient group. At one of the sites, there was significant
improvement in the percentage of positive response to the “overall rating” question following the
introduction of medical students when all patients were analyzed (60.3% to 68.2%, p=0.038).
However, there was no statistically significant difference in the “overall rating” when the pediatric or
adult subgroups were analyzed at this site and no significant difference was observed in any group
at the second site.
Conclusion: The introduction of medical students in two community-based EDs was not associated
with a statistically significant difference in overall patient satisfaction, but was associated with a
significant positive effect on the overall rating of the ED at one of the two clinical sites studied.
Further study is needed to evaluate the effect of medical student learners upon patient satisfaction in
settings outside of a single health system.
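The abstract above compares percentages of positive responses (e.g., 60.3% vs. 68.2%, p=0.038) but does not name the statistical test used. A common choice for comparing two independent proportions is the two-proportion z-test; the sketch below uses hypothetical response counts for illustration only.

```python
import math

def two_proportion_z(count1, n1, count2, n2):
    """Two-sided two-proportion z-test on positive-response counts.

    Returns (z statistic, two-sided p-value) using the pooled-proportion
    standard error and the standard normal CDF (via math.erf).
    """
    p1, p2 = count1 / n1, count2 / n2
    pooled = (count1 + count2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical counts: 181/300 positive before students, 205/300 after.
# These sample sizes are NOT from the study; they are illustrative.
z, p = two_proportion_z(181, 300, 205, 300)
```

With larger samples the same percentage difference yields a smaller p-value, which is why survey volume matters as much as the raw percentages when interpreting CMS-style satisfaction results.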

Read More

Teaching Emotional Intelligence: A Control Group Study of a Brief Educational Intervention for Emergency Medicine Residents

Volume 16, Issue 6, November 2015.
Diane L. Gorgas, MD, et al.

Introduction: Emotional Intelligence (EI) is defined as an ability to perceive another’s emotional
state combined with an ability to modify one’s own. Physicians with this ability are at a distinct
advantage, both in fostering teams and in making sound decisions. Studies have shown that
higher physician EI is associated with a lower incidence of burnout, longer careers, more positive
patient-physician interactions, increased empathy, and improved communication skills. We explored
the potential for EI to be learned as a skill (as opposed to being an innate ability) through a brief
educational intervention with emergency medicine (EM) residents.
Methods: This study was conducted at a large urban EM residency program. Residents were
randomized to either EI intervention or control groups. The intervention was a two-hour session
focused on improving the skill of social perspective taking (SPT), a skill related to social awareness.
Due to time limitations, we used a 10-item sample of the Hay 360 Emotional Competence Inventory
to measure EI at three time points for the training group: before (pre) and after (post) training, and at
six-months post training (follow up); and at two time points for the control group: pre- and follow up.
The preliminary analysis was a four-way analysis of variance with one repeated measure: Group x
Gender x Program Year over Time. We also completed post-hoc tests.
Results: Thirty-three EM residents participated in the study (33 of 36, 92%), 19 in the EI intervention
group and 14 in the control group. We found a significant interaction effect between Group and
Time (p<0.05). Post-hoc tests revealed a significant increase in EI scores from Time 1 to 3 for the EI
intervention group (62.6% to 74.2%), but no statistical change was observed for the controls (66.8%
to 66.1%, p=0.77). We observed no main effects involving gender or level of training.
Conclusion: Our brief EI training showed a delayed but statistically significant positive impact on
EM residents six months after the SPT-focused intervention. One possible explanation for this
finding is that residents required time to process and apply the EI skills training in order for us to
detect measurable change. More rigorous measurement will be needed in future studies to aid in the
interpretation of our findings.

Read More

Correlation of Simulation Examination to Written Test Scores for Advanced Cardiac Life Support Testing: Prospective Cohort Study

Volume 16, Issue 6, November 2015.
Suzanne L. Strom, MD, et al.

Introduction: Traditional Advanced Cardiac Life Support (ACLS) courses are evaluated using written
multiple-choice tests. High-fidelity simulation is a widely used adjunct to didactic content, and has been
used in many specialties as a training resource as well as an evaluative tool. There are no data to our
knowledge that compare simulation examination scores with written test scores for ACLS courses.
Objective: To compare and correlate a novel high-fidelity simulation-based evaluation with
traditional written testing for senior medical students in an ACLS course.
Methods: We performed a prospective cohort study to determine the correlation between
simulation-based evaluation and traditional written testing in a medical school simulation center. Students
were tested on a standard acute coronary syndrome/ventricular fibrillation cardiac arrest scenario.
Our primary outcome measure was correlation of exam results for 19 volunteer fourth-year medical
students after a 32-hour ACLS-based Resuscitation Boot Camp course. Our secondary outcome
was comparison of simulation-based vs. written outcome scores.
Results: The composite average score on the written evaluation was substantially higher (93.6%)
than the simulation performance score (81.3%, absolute difference 12.3%, 95% CI [10.6-14.0%],
p<0.00005). We found a statistically significant moderate correlation between simulation scenario
test performance and traditional written testing (Pearson r=0.48, p=0.04), supporting the validity of
the new evaluation method.
Conclusion: Simulation-based ACLS evaluation methods correlate with traditional written testing
and demonstrate resuscitation knowledge and skills. Simulation may be a more discriminating and
challenging testing method, as students scored higher on the written evaluation than on the
simulation assessment.
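The moderate correlation reported above (Pearson r=0.48) is the standard product-moment coefficient on paired scores. A minimal sketch in pure Python, using hypothetical written/simulation score pairs rather than the study's data:

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient for paired samples."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Covariance numerator and the two standard-deviation terms.
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sy = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical paired percentage scores (written exam, simulation exam).
written = [95, 90, 97, 88, 93, 96, 91, 94]
simulation = [85, 70, 88, 72, 80, 90, 75, 78]
r = pearson_r(written, simulation)
```

A moderate r indicates the two assessments rank students similarly but are far from interchangeable, consistent with the authors' suggestion that simulation tests partly distinct skills.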

Read More

How Does Emergency Department Crowding Affect Medical Student Test Scores and Clerkship Evaluations?

Volume 16, Issue 6, November 2015.
Grant Wei, MD, et al.

Introduction: The effect of emergency department (ED) crowding has been recognized as a
concern for more than 20 years; its effect on productivity, medical errors, and patient satisfaction
has been studied extensively. Little research has reviewed the effect of ED crowding on medical
education. Prior studies that have considered this effect have shown no correlation between ED
crowding and resident perception of quality of medical education.
Objective: To determine whether ED crowding, as measured by the National ED Overcrowding
Scale (NEDOCS) score, has a quantifiable effect on medical student objective and subjective
experiences during emergency medicine (EM) clerkship rotations.
Methods: We collected end-of-rotation examinations and medical student evaluations for 21 EM
rotation blocks between July 2010 and May 2012, with a total of 211 students. NEDOCS scores were
calculated for each corresponding period. Weighted regression analyses examined the correlation
between components of the medical student evaluation, student test scores, and the NEDOCS score
for each period.
Results: When all 21 rotations are included in the analysis, NEDOCS scores showed a negative
correlation with medical student test scores (regression coefficient = -0.16, p=0.04) and three
elements of the rotation evaluation (attending teaching, communication, and systems-based
practice; p<0.05). We excluded an outlying NEDOCS score from the analysis and obtained similar
results. When the data were controlled for effect of month of the year, only student test score
remained significantly correlated with NEDOCS score (p=0.011). No part of the medical student
rotation evaluation attained significant correlation with the NEDOCS score (p≥0.34 in all cases).
Conclusion: ED overcrowding demonstrates a small negative association with medical
student performance on end-of-rotation examinations. Additional studies are recommended to further
evaluate this effect.

Read More

Medical Student Performance on the National Board of Medical Examiners Emergency Medicine Advanced Clinical Examination and the National Emergency Medicine M4 Exams

Volume 16, Issue 6, November 2015.
Katherine Hiller, MD, MPH, et al.

Introduction: In April 2013, the National Board of Medical Examiners (NBME) released an Advanced
Clinical Examination (ACE) in emergency medicine (EM). In addition to this new resource, CDEM
(Clerkship Directors in EM) provides two online, high-quality, internally validated examinations.
National usage statistics are available for all three examinations; however, it is currently unknown how
students entering an EM residency perform compared with the entire national cohort. This information
may help educators interpret the examination scores of both EM-bound and non-EM-bound students.
Objectives: The objective of this study was to compare EM clerkship examination performance
between students who matched into an EM residency in 2014 and students who did not.
Comparisons were made using the EM-ACE and both versions of the National fourth-year medical
student (M4) EM examinations.
Method: In this retrospective multi-institutional cohort study, the EM-ACE and either Version 1 (V1)
or Version 2 (V2) of the National EM M4 examination were given to students taking a fourth-year EM
rotation at five institutions between April 2013 and February 2014. We collected examination performance,
including the scaled EM-ACE score, percent correct on the EM M4 exams, and 2014 NRMP
Match status. Student t-tests were performed on the examination averages of students who matched
in EM as compared with those who did not.
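The Student t-tests described above compare mean examination scores between the matched and unmatched groups. The sketch below implements the pooled-variance (Student) two-sample t statistic on hypothetical score lists; it is an illustration of the method, not the study's analysis.

```python
import math

def student_t(a, b):
    """Pooled-variance two-sample Student t statistic.

    Returns (t statistic, degrees of freedom); assumes equal variances.
    """
    na, nb = len(a), len(b)
    mean_a, mean_b = sum(a) / na, sum(b) / nb
    var_a = sum((x - mean_a) ** 2 for x in a) / (na - 1)
    var_b = sum((x - mean_b) ** 2 for x in b) / (nb - 1)
    # Pooled variance weights each sample variance by its degrees of freedom.
    pooled = ((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2)
    t = (mean_a - mean_b) / math.sqrt(pooled * (1 / na + 1 / nb))
    return t, na + nb - 2

# Hypothetical exam percentages for EM-bound vs. non-EM-bound students.
em_bound = [72, 68, 75, 70, 74, 69]
non_em_bound = [66, 70, 64, 69, 67, 65, 68]
t_stat, df = student_t(em_bound, non_em_bound)
```

The t statistic is then compared against the t distribution with the returned degrees of freedom to obtain a p-value; note the heavily unbalanced group sizes in the study (e.g., n=47 vs. n=256) reduce the power of such comparisons.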
Results: A total of 606 students from five different institutions took both the EM-ACE and one of the
EM M4 exams; 94 (15.5%) students matched in EM in the 2014 Match. The mean score for EM-bound
students on the EM-ACE, V1 and V2 of the EM M4 exams were 70.9 (n=47, SD=9.0), 84.4 (n=36,
SD=5.2), and 83.3 (n=11, SD=6.9), respectively. Mean scores for non-EM-bound students were 68.0
(n=256, SD=9.7), 82.9 (n=243, SD=6.5), and 74.5 (n=13, SD=5.9), respectively. There was a significant
difference in mean scores between EM-bound and non-EM-bound students for the EM-ACE (p=0.05)
and V2 (p&lt;0.01) but not V1 (p=0.18) of the National EM M4 examination.
Conclusion: Students who successfully matched in EM scored higher on all three exams at the
end of their EM clerkship, although the difference did not reach statistical significance for V1.

Read More

Contact Information

WestJEM/ Department of Emergency Medicine
UC Irvine Health

3800 W Chapman Ave Ste 3200
Orange, CA 92868, USA
Phone: 1-714-456-6389
Email: editor@westjem.org


WestJEM
ISSN: 1936-900X
e-ISSN: 1936-9018

CPC-EM
ISSN: 2474-252X

Our Philosophy

Emergency Medicine is a specialty which closely reflects societal challenges and consequences of public policy decisions. The emergency department specifically deals with social injustice, health and economic disparities, violence, substance abuse, and disaster preparedness and response. This journal focuses on how emergency care affects the health of the community and population, and conversely, how these societal challenges affect the composition of the patient population who seek care in the emergency department. The development of better systems to provide emergency care, including technology solutions, is critical to enhancing population health.