Authors: D'Angelo AL, Law KE, Cohen ER, Greenberg JA, Kwan C, Greenberg C, Wiegmann DA, Pugh CM
Journal: Surgery, Volume 158, Issue 5, Pages 1408-14
Publish Date: 2015 Nov
PubMed ID: 26003910
PMC ID: 4604013
Abstract

The aim of this study was to assess validity of a human factors error assessment method for evaluating resident performance during a simulated operative procedure.

Seven postgraduate year 4-5 residents had 30 minutes to complete a simulated laparoscopic ventral hernia (LVH) repair on day 1 of a national, advanced laparoscopic course. Faculty provided immediate feedback on operative errors, and residents participated in a final product analysis of their repairs. Residents then received didactic and hands-on training regarding several advanced laparoscopic procedures during a lecture session and animate lab. On day 2, residents performed a nonequivalent LVH repair using a simulator. Three investigators reviewed and coded videos of the repairs using previously developed human error classification systems.

Residents committed 121 total errors on day 1 compared with 146 on day 2. One of 7 residents successfully completed the LVH repair on day 1 compared with all 7 residents on day 2 (P = .001). The majority of errors (85%) committed on day 2 were technical and occurred during the last 2 steps of the procedure. There were significant differences in error type (P ≤ .001) and level (P = .019) from day 1 to day 2. The proportion of omission errors decreased from day 1 (33%) to day 2 (14%). In addition, there were more technical and commission errors on day 2.

The error assessment tool was successful in categorizing performance errors, supporting known-groups validity evidence. Evaluating resident performance through error classification has great potential in facilitating our understanding of operative readiness.

Full Text: Available on PubMed Central