From Ratings to Insights: Unlocking the Power of EPA Data through Rasch Analysis
Session Type: Workshop
No
Yes
The widespread implementation of Entrustable Professional Activities (EPAs) generates rich data at individual institutions. However, many programs struggle to extract meaningful insights from these data beyond simple descriptive statistics. Without guidance on practical utilization, programs may struggle to assess residents from collected EPAs because the ratings are ordinal (e.g., the distance between “limited participation” and “direct supervision” may differ from the distance between subsequent categories), rater standards can be inconsistent, and EPA items vary in difficulty. These complexities can limit the use of EPA data and are often missed by descriptive statistics.
Many-facet Rasch modeling (MFRM) offers a solution. It transforms ordinal EPA ratings into interval-scale measures, accounting for item difficulty, rater severity, and rating scale function. More importantly, it generates practical tools you can use immediately: variable maps that function as assessment dashboards, insights for Clinical Competency Committee (CCC) discussions, and evidence to guide rater training.
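To make the model concrete: under MFRM, the log-odds of a resident receiving rating category k rather than k−1 equals the resident's ability minus the item's difficulty, the rater's severity, and the category threshold, all on a shared logit scale. A minimal from-scratch sketch of these category probabilities is below (Python for illustration; the workshop materials use R, and the facet values here are invented for the example):

```python
import math

def mfrm_category_probs(theta, difficulty, severity, thresholds):
    """Category probabilities under the many-facet Rasch model.

    Log-odds of category k over k-1: theta - difficulty - severity - thresholds[k-1]

    theta       -- resident ability (logits)
    difficulty  -- EPA item difficulty (logits)
    severity    -- rater severity (logits)
    thresholds  -- step thresholds tau_1..tau_m (logits)
    Returns a list of probabilities for categories 0..m.
    """
    # Cumulative sums of the step log-odds give unnormalized log-probabilities.
    logits = [0.0]
    for tau in thresholds:
        logits.append(logits[-1] + (theta - difficulty - severity - tau))
    z = max(logits)  # subtract the max to stabilize the exponentials
    expvals = [math.exp(l - z) for l in logits]
    total = sum(expvals)
    return [e / total for e in expvals]

# Illustrative (invented) facets: an average resident (theta = 0) on a
# moderately easy EPA, rated by a severe attending (+1 logit).
probs = mfrm_category_probs(theta=0.0, difficulty=-0.5, severity=1.0,
                            thresholds=[-1.0, 0.0, 1.5])
```

Raising `severity` shifts probability mass toward lower categories, which is exactly why two residents with identical performance can carry different raw EPA ratings.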
Audience: This 90-minute hands-on workshop is designed for program directors, associate program directors, residents, and clinician educators who collect EPA data and want actionable insights. No prior experience with statistics or programming is required.
Module 1: Understanding the Rasch Model (Without the Statistical Anxiety)
We will introduce MFRM by showing what it does. Using a concrete example with simulated EPA data, we will demonstrate how the model reveals insights that are invisible with traditional analysis: which residents are truly struggling versus which appear to struggle because they have been rated by harsh attendings, which EPA items are unexpectedly difficult, and whether your rating scale categories are functioning as intended.
Interactive exercise: Following the conceptual overview, participants will engage in a short, interactive quiz using a polling platform. The multiple-choice questions are designed to challenge common assumptions about EPA data and reinforce the fundamental principles of Rasch measurement.
Module 2: Your New Assessment Dashboard: The Variable Map
Using simulated data, we will demonstrate how variable maps become powerful visual tools for program leadership. Participants will learn to interpret these maps for multiple practical applications: (1) identifying residents who may need additional support or remediation; (2) presenting assessment data to a CCC with visual evidence of resident progress; (3) determining which EPA items might be redundant or poorly calibrated; and (4) identifying attendings who may benefit from rater training.
Interactive exercise: Participants will analyze variable maps from three different programs and role-play a CCC discussion, using the map to justify decisions and identify assessment system improvements.
Module 3: Hands-On Analysis with Your Data
We will provide annotated R code that participants can adapt to their own institutional EPA data. The code generates everything participants need: variable maps formatted as dashboards, diagnostic reports, and CCC-ready visualizations. Facilitators will walk through each section, explaining how to modify it for their specific context.
Interactive exercise: Participants will run the code on the provided simulated data, then identify specific ways they would use the outputs at their institution. Facilitators will provide individualized guidance on implementation strategies.
90-minute workshop
Yes
Yes
Convey how Many-facet Rasch modeling (MFRM) refines EPA data by accounting for rater severity, item difficulty, and rating scale functioning
Interpret variable (Wright) maps as assessment dashboards to support Clinical Competency Committee decisions and identify residents needing intervention
Apply provided code to generate actionable reports from institutional EPA data for program improvement and accreditation
Use diagnostic statistics to identify specific actions (e.g., rater training, item review) to strengthen the assessment system
| Activity Order | Title of Presentation or Activity | Presenter/Faculty Name | Presenter/Faculty Email | Time allotted in minutes for activity |
|---|---|---|---|---|
| 1 | Module 1: Understanding the Rasch Model (Without the Statistical Anxiety) | Emily Witt | EWITT@mgh.harvard.edu | 35 |
| 2 | Module 2: Your New Assessment Dashboard: The Variable Map | Jonah Thomas | JTHOMAS@MGH.HARVARD.EDU | 35 |
| 3 | Module 3: Hands-On Analysis with Your Data | Dandan Chen | dchen43@mgh.harvard.edu | 20 |
