Comparative assessment of three standardized robotic surgery training methods

Cited by: 92
Authors
Hung, Andrew J. [1 ]
Jayaratna, Isuru S. [1 ]
Teruya, Kara [1 ]
Desai, Mihir M. [1 ]
Gill, Inderbir S. [1 ]
Goh, Alvin C. [2 ]
Affiliations
[1] Univ So Calif, Keck Sch Med, USC Inst Urol, Hillard & Roclyn Herzog Ctr Robot Surg, Los Angeles, CA 90033 USA
[2] Methodist Hosp, Dept Urol, Methodist Inst Technol Innovat & Educ, Houston, TX 77030 USA
Keywords
clinical competence; robotics; laparoscopy; computer simulation; education, medical; CONSTRUCT-VALIDATION; LEARNING-CURVE; SIMULATOR; SKILLS; PERFORMANCE; FACE
DOI
10.1111/bju.12045
Chinese Library Classification
R5 [Internal Medicine]; R69 [Urology (Genitourinary Diseases)]
Subject Classification Codes
1002; 100201
Abstract
Objectives: To evaluate three standardized robotic surgery training methods (inanimate, virtual reality and in vivo) for their construct validity, and to explore the concept of cross-method validity, whereby relative performance on each method is compared.

Materials and Methods: Robotic surgical skills were prospectively assessed in 49 participating surgeons, classified as 'novice/trainee' (urology residents, previous experience < 30 cases; n = 38) or 'expert' (faculty surgeons, previous experience ≥ 30 cases; n = 11). Three standardized, validated training methods were used: (i) structured inanimate tasks; (ii) virtual reality exercises on the da Vinci Skills Simulator (Intuitive Surgical, Sunnyvale, CA, USA); and (iii) a standardized robotic surgical task in a live porcine model, with performance graded by the Global Evaluative Assessment of Robotic Skills (GEARS) tool. A Kruskal-Wallis test was used to evaluate performance differences between novices and experts (construct validity). Spearman's correlation coefficient (rho) was used to measure the association of performance across the inanimate, simulation and in vivo methods (cross-method validity).

Results: Novice and expert surgeons had previously performed a median (range) of 0 (0-20) and 300 (30-2000) robotic cases, respectively (P < 0.001). Construct validity: experts consistently outperformed residents with all three methods (P < 0.001). Cross-method validity: overall performance of inanimate tasks correlated significantly with virtual reality robotic performance (rho = -0.7, P < 0.001) and with in vivo robotic performance based on GEARS (rho = -0.8, P < 0.0001). Virtual reality performance and in vivo tissue performance were also strongly correlated (rho = 0.6, P < 0.001).

Conclusions: We propose the novel concept of cross-method validity, which may provide a method of evaluating the relative value of various forms of skills education and assessment. We externally confirmed the construct validity of each featured training tool.
Pages: 864-871
Number of pages: 8
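
The statistical approach described in the abstract (Kruskal-Wallis for novice-vs-expert differences, Spearman's rho for agreement between training methods) can be sketched in a few lines of Python with scipy. This is a minimal illustration, not the authors' analysis code; the group sizes, score scales and all data below are synthetic assumptions.

# Sketch of construct-validity and cross-method-validity analysis (synthetic data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical per-surgeon overall scores for each training method.
# Simulator and GEARS scores: higher = better. Inanimate scores here are
# assumed time-based (lower = better), one plausible reason the abstract
# reports negative rho values against the other two methods.
n_novice, n_expert = 38, 11
sim_novice = rng.normal(60, 10, n_novice)              # simulator overall score, novices
sim_expert = rng.normal(85, 5, n_expert)               # simulator overall score, experts
simulator = np.concatenate([sim_novice, sim_expert])
inanimate = rng.normal(300, 60, n_novice + n_expert)   # seconds to complete inanimate tasks
gears = rng.normal(18, 4, n_novice + n_expert)         # GEARS rating (6-30 scale)

# Construct validity: do experts outperform novices on the simulator?
h_stat, p_construct = stats.kruskal(sim_novice, sim_expert)
print(f"Kruskal-Wallis H = {h_stat:.2f}, P = {p_construct:.4f}")

# Cross-method validity: association of performance across methods.
rho_sim_gears, p1 = stats.spearmanr(simulator, gears)
rho_inan_gears, p2 = stats.spearmanr(inanimate, gears)
print(f"simulator vs GEARS:        rho = {rho_sim_gears:.2f}, P = {p1:.4f}")
print(f"inanimate (time) vs GEARS: rho = {rho_inan_gears:.2f}, P = {p2:.4f}")

With the study's real per-surgeon scores in place of the synthetic arrays, the same two calls would reproduce the construct-validity and cross-method-validity statistics reported above.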