Quick Shot V - AI/Recruitment
SIGNAL BOOST: EVALUATING THE EFFECT OF SIGNALING ON GENERAL SURGERY INTERVIEW RATES
Akshat Sanan, Nicholas J Iglesias, Talia R Arcieri, Ana M Reyes, Marina M Tabbara, Megan V Laurendeau, Nikita M Shah, Vanessa W Hui, Laurence R Sands, Chad M Thorson; University of Miami Miller School of Medicine
Introduction: The Electronic Residency Application Service has introduced signaling to help applicants express interest in specific residency programs, but its impact on interview offers remains unclear. We aimed to evaluate trends in interview rates associated with signaling across general surgery residency programs during the 2024–2025 application cycle.
Methodology: We queried Residency Explorer, extracting program variables including affiliation (university-affiliated, community-based university-affiliated, or community-based), location, and interview rate. Signal Boost (SB) was defined as the difference between the proportions of interviews offered to signaled and to non-signaled applicants. Associations between SB and program characteristics were assessed using non-parametric tests.
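The SB calculation and group comparison described above can be sketched as follows. This is a minimal illustration with hypothetical interview rates, not the study's actual data; the Mann-Whitney U here is computed directly from its definition rather than via a statistics library.

```python
# Sketch (hypothetical data): each program's Signal Boost (SB) is the
# signaled interview rate minus the non-signaled interview rate; groups
# of programs are then compared non-parametrically.

def signal_boost(signal_rate, non_signal_rate):
    """SB = proportion of signaled applicants interviewed minus
    proportion of non-signaled applicants interviewed."""
    return signal_rate - non_signal_rate

def mann_whitney_u(group_a, group_b):
    """Mann-Whitney U statistic for group_a vs group_b
    (definition-based count; ties contribute 0.5, no tie correction)."""
    u = 0
    for a in group_a:
        for b in group_b:
            if a > b:
                u += 1
            elif a == b:
                u += 0.5
    return u

# Hypothetical SB values for two affiliation groups
university = [signal_boost(0.23, 0.03), signal_boost(0.25, 0.04),
              signal_boost(0.21, 0.02)]
community_univ = [signal_boost(0.22, 0.06), signal_boost(0.18, 0.05),
                  signal_boost(0.15, 0.04)]

u_stat = mann_whitney_u(university, community_univ)
```

In practice one would use a library routine (e.g. a Mann-Whitney or Kruskal-Wallis test) to obtain p-values; the sketch only shows how the SB metric and the rank-based comparison fit together.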
Results: A total of 194 programs were included. University-affiliated programs demonstrated a higher median SB than community-based university-affiliated general surgery residency programs (p = 0.047). Programs ranked in the Doximity top 50 exhibited lower SB than those ranked 51–100 (p = 0.033). Regional variation was also observed, with higher SB in the West North Central region compared to the Middle Atlantic (p = 0.046) and South Atlantic (p = 0.049) regions.
Conclusions: The impact of residency program signaling varies significantly by program type, ranking, and geographic region, highlighting the need for more precise guidance on signaling practices to promote transparency in the residency selection process.
| | Signal Interview Rate | Non-Signal Interview Rate | Median Signal Boost | p-value |
| --- | --- | --- | --- | --- |
| Affiliation | | | | 0.047 |
| University-Affiliated | 23% | 3% | 0.195 | |
| Community-Based University-Affiliated | 22% | 6% | 0.170 | |
| Region | | | | 0.007 |
| Pacific | 32% | 5% | 0.230 | |
| Mountain | 24% | 6.5% | 0.175 | |
| West North Central | 37% | 4% | 0.320 | |
| West South Central | 31% | 4% | 0.275 | |
| East North Central | 22% | 3.5% | 0.180 | |
| East South Central | 28% | 5% | 0.240 | |
| New England | 22% | 3% | 0.200 | |
| Middle Atlantic | 22% | 5% | 0.145 | |
| South Atlantic | 19% | 5% | 0.155 | |
HARNESSING ARTIFICIAL INTELLIGENCE (AI) FOR SURGICAL EDUCATION: COMPARING AI- AND SURGEON-GENERATED FEEDBACK IN CONSENT TRAINING
Catherine Gbekie, BS, Brianna M Peet, BS, Viemma Nwigwe, MD, Grace B Simmons, AB, Daniel P Pacella, BS, Angel Rosario, MD, MPH; Columbia University Vagelos College of Physicians and Surgeons
Background: Effective surgical consent requires empathy, clarity, and cultural sensitivity. Traditional consent training has relied on in-person simulations, dyadic role-playing, and live feedback, all methods that are variable and resource-intensive. Large language models now offer trainees controlled, judgment-free environments in which to practice communication skills with immediate, individualized feedback; however, their reliability and educational value remain insufficiently validated.
Objectives: To compare (1) the fidelity of AI-generated feedback to resident and attending surgeon-generated feedback and (2) the quality of AI-generated versus human-generated feedback.
Methods: Six medical students completed a simulated appendectomy consent involving a culturally sensitive component. Each encounter transcript received AI-generated and human-generated feedback using the same 17-item informed-consent rubric (0–3 scale; maximum score 51). The primary outcome was the difference in mean item-level rubric scores between AI and human evaluators (positive = higher AI), using human ratings as the reference. The secondary analysis evaluated feedback quality using a meta-evaluation model adapted from the modified Completed Clinical Evaluation Report Rating (CCERR) scale (8 items, 1–5 scale; maximum score 40) measuring Educational Utility and Constructiveness, Specificity and Justification, and Comprehensiveness. Data were collected October through December 2025, with additional responses pending.
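The primary outcome above, the per-item difference in mean rubric scores (AI minus human), can be sketched as below. The rubric item names and scores are hypothetical placeholders, not the study's data.

```python
# Sketch (hypothetical scores): per-item discrepancy between AI and human
# evaluators on a shared rubric, reported as mean AI score minus mean
# human score (positive = AI scored the encounter higher).
from statistics import mean

ai_scores = {
    "Explains Alternatives": [3, 3, 2],
    "Assesses Capacity": [3, 2, 3],
}
human_scores = {
    "Explains Alternatives": [1, 1, 2],
    "Assesses Capacity": [2, 1, 1],
}

def item_discrepancy(ai, human):
    """Mean AI score minus mean human score, computed per rubric item."""
    return {item: mean(ai[item]) - mean(human[item]) for item in ai}

deltas = item_discrepancy(ai_scores, human_scores)
```

Using human ratings as the reference, a positive delta for an item flags it as one where the AI systematically inflates scores, which is how item-level discrepancies like the Δ1.79 for Explains Alternatives would surface.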
Results: Preliminary results from 13 of 43 invited evaluators (N=26 evaluations) showed average rubric score discrepancies of 0.57–1.79 points, with largest differences for Explains Alternatives (Δ1.79), Assesses Capacity (Δ1.40), and Explains Benefits (Δ1.25). In secondary analysis, AI-generated feedback (N=6) universally outscored human-generated (N=20) feedback across all CCERR quality domains (Utility, 4.79 vs. 3.00; Specificity, 4.53 vs. 3.00; Comprehensiveness, 4.89 vs. 3.36), resulting in overall higher average scores for AI feedback (37.8 vs. 12.1 out of 40, Δ25.7). Notably, AI feedback was consistently complete, while surgeons frequently omitted qualitative or justification commentary, reflecting real-world time constraints for delivering live, constructive feedback.
Conclusions: AI feedback exhibited superior comprehensiveness, structural quality, and actionability but inflated performance scores and reduced scoring precision. These findings highlight AI’s potential as a scalable supplemental feedback tool, while underscoring the need for continued human oversight to preserve rubric fidelity and evaluative sophistication.
BRINGING THE TRAUMA BAY TO THE BACKROADS: IMPLEMENTING A PORTABLE SIMULATION CURRICULUM FOR RURAL HOSPITALS — THE PILOT YEAR
Zackery Aldaher, DO1, Gabrielle Moore, MD1, Kendall Via1, Cole Harp, DO1, Adnan Alseidi, MD, FACS2, The Georgia Trauma Commission2, Bao Ling Adam, PhD1, Erika Simmerman Mabes, MD, FACS1; 1Medical College of Georgia, 2The Georgia Trauma Commission
Introduction
Rural hospitals face significant gaps in trauma care due to limited access to specialists, procedural training, and standardized team-based education. In Georgia, over 20% of trauma patients come from rural areas where high-fidelity trauma training is unavailable. A statewide needs assessment identified deficiencies in procedural skills, interdisciplinary communication, and trauma system activation, guiding the design of a pilot curriculum presented at a multidisciplinary rural trauma conference. The curriculum was adapted to become a statewide, portable, interactive trauma simulation outreach course tailored for rural providers and piloted at participating hospitals.
Methods
In partnership with the Georgia Trauma Commission, we identified rural hospitals willing to participate in the pilot year of curriculum implementation. Providers at participating hospitals attended sessions combining interactive lectures and procedural skills stations (airway management, hemorrhage control, chest trauma). Participants completed pre- and post-course knowledge tests and Likert-scale confidence surveys, with paired t-tests performed for analysis. Qualitative feedback was also collected and analyzed using inductive coding.
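The paired pre/post comparison described above can be sketched as follows. The Likert scores are hypothetical and the t statistic is computed from its definition with stdlib helpers; an actual analysis would use a statistics package to obtain the p-value.

```python
# Sketch (hypothetical data): paired t-test on pre- vs post-course
# Likert confidence scores. t = mean(diff) / (stdev(diff) / sqrt(n)).
from math import sqrt
from statistics import mean, stdev

pre = [2, 3, 2, 1, 3, 2]   # hypothetical pre-course confidence (1-5 Likert)
post = [4, 5, 4, 2, 5, 4]  # hypothetical post-course confidence

diffs = [after - before for before, after in zip(pre, post)]
t_stat = mean(diffs) / (stdev(diffs) / sqrt(len(diffs)))
```

Because each participant serves as their own control, the test is run on the per-participant differences rather than on the two groups independently.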
Results
To date, 4 rural hospitals have participated in the 2025 pilot year, with 83 participants (9% MD/DO/APP, 68% RN, 16% EMS, 7% RT). Participants demonstrated significant improvement in procedural confidence across all skills (Table 1) and in trauma stabilization knowledge. 86% of participants were strongly satisfied with the course, and over 90% strongly agreed it should continue. Qualitative themes emphasized the value of hands-on learning, efficient teaching, and clinical relevance. Suggested improvements included having more instructors and additional trauma scenarios.
Conclusion
The pilot year of this portable rural trauma simulation curriculum is demonstrating strong feasibility, high engagement, and measurable educational benefits across various rural settings. This program fills an important gap in trauma readiness and offers a scalable model for delivering tailored, multidisciplinary trauma education in low-resource environments. Ongoing growth includes regional partnerships, the development of a simulation toolkit, and train-the-trainer programs to ensure sustainability and wide dissemination.
Table 1: Participant (n=83) pre/post survey data indicating confidence in procedural skills following interactive trauma curriculum. All skills showed a significant increase in confidence (p<0.05) post training with largest effects in FAST Exam, Chest Tube and Needle Decompression.

