Training benchmarks based on validated composite scores for the RobotiX robot-assisted surgery simulator on basic tasks
Keywords: Adult; Surgeons; Virtual Reality; Benchmarking; Robotic Surgical Procedures; Surveys and Questionnaires; Task Performance and Analysis; Humans; Laparoscopy; Clinical Competence; Simulation Training
Subject classification: 03 medical and health sciences; 0302 clinical medicine
Sustainable Development Goal: 16. Peace & justice
Radboudumc research themes: Radboudumc 10: Reconstructive and regenerative medicine (RIMLS: Radboud Institute for Molecular Life Sciences); Radboudumc 14: Tumours of the digestive tract (RIHS: Radboud Institute for Health Sciences); Radboudumc 18: Healthcare improvement science (RIHS: Radboud Institute for Health Sciences)
Article type: Original Article
DOI: 10.1007/s11701-020-01080-9
Publication Date: 2020-04-20
AUTHORS (6)
ABSTRACT
The RobotiX robot-assisted virtual reality simulator aims to aid the training of novice surgeons outside the operating room. This study aimed to establish validity evidence on multiple levels for the RobotiX simulator for basic skills. Participants were assigned to a novice, laparoscopically experienced, or robotically experienced group based on their minimally invasive surgical experience. Two basic tasks were performed: wristed manipulation (Task 1) and vessel energy dissection (Task 2). Performance scores were recorded, and a questionnaire on realism, didactic value, and usability was administered (content). Composite scores (0–100), pass/fail values, and alternative benchmark scores were calculated. Twenty-seven novice, 21 laparoscopically experienced, and 13 robotically experienced participants were recruited. Content validity evidence was rated positively overall. Statistically significant differences between novice and robotically experienced participants (construct) were found for movements left (Task 1 p = 0.009), movements right (Task 1 p = 0.009, Task 2 p = 0.021), path length left (Task 1 p = 0.020), and time (Task 1 p = 0.040, Task 2 p < 0.001). Composite scores differed significantly between robotically experienced and novice participants for Task 1 (85.5 versus 77.1, p = 0.044) and Task 2 (80.6 versus 64.9, p = 0.001). The pass/fail cut-off was 75/100 for Task 1 (46% false positives, 9.1% false negatives) and 71/100 for Task 2 (39% false positives, 7.0% false negatives). With the calculated benchmark scores, only a minority of novices passed on multiple parameters. Validity evidence on multiple levels was thus assessed for two basic robot-assisted surgical simulation tasks, and the calculated benchmark scores can be used in future surgical simulation training.
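The reported pass/fail values pair a composite-score cut-off with the resulting misclassification rates: the false-positive percentage is the share of novices who would pass at that cut-off, and the false-negative percentage is the share of experienced participants who would fail. The sketch below illustrates that arithmetic only; the per-participant composite scores in it are hypothetical, and the abstract does not state how the cut-offs themselves were derived.

```python
# Illustrative sketch (not the authors' code): given composite scores (0-100) for a
# novice group and a robotically experienced group, compute the false-positive and
# false-negative percentages at a candidate pass/fail cut-off. Only the cut-offs
# (75 for Task 1, 71 for Task 2) come from the abstract; the scores are made up.

def fp_fn_percentages(novice_scores, expert_scores, cutoff):
    """False positives: novices who pass; false negatives: experts who fail."""
    fp = 100.0 * sum(s >= cutoff for s in novice_scores) / len(novice_scores)
    fn = 100.0 * sum(s < cutoff for s in expert_scores) / len(expert_scores)
    return fp, fn

if __name__ == "__main__":
    # Hypothetical composite scores, loosely consistent with the reported group means.
    novices = [62, 70, 74, 77, 79, 81, 83, 68, 76, 72]
    experts = [88, 84, 79, 91, 86, 82, 90]

    for task, cutoff in (("Task 1", 75), ("Task 2", 71)):
        fp, fn = fp_fn_percentages(novices, experts, cutoff)
        print(f"{task}: cut-off {cutoff}/100 -> "
              f"{fp:.1f}% false positive, {fn:.1f}% false negative")
```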