Medical simulators provide a controlled environment for training and assessment. However, they require the presence of an experienced examiner to provide performance feedback, which makes the assessment process inefficient and expensive. We have developed an autonomous, fully automatic speech-based checklist system. In the future, this approach may be implemented in the operating room and emergency room, facilitating the development of automatic assistive technologies for these domains.
Recent advances in deep learning and computer vision have led to a growing number of studies focusing on automatic analysis of surgical video data. Since the use of video is an integral part of minimally invasive surgery (MIS), most of these studies have focused on laparoscopic and robotic surgery. In contrast, video capture is not well established in open surgery. Thus, open surgery has not benefited from the many advantages that computer vision and deep learning methods offer for skill training and automatic assistance.
The Fundamentals of Laparoscopic Surgery (FLS) program teaches the knowledge, judgment, and technical skills expected in laparoscopic surgery.
As part of our research, we evaluated two essential tasks in the exam: suturing and peg transfer. The goal of this work is to use computer vision to develop automatic feedback based on video data alone. Such automatic feedback will allow residents to practice on their own without the need for a supervising human expert.