Recognition of Instrument Passing and Group Attention for Understanding Intraoperative State of Surgical Team

  • Yokoyama Koji
    Graduate School of Informatics, Kyoto University
  • Yamamoto Goshiro
    Graduate School of Informatics, Kyoto University; Kyoto University Hospital; Graduate School of Medicine, Kyoto University
  • Liu Chang
    Kyoto University Hospital
  • Sugiyama Osamu
    Graduate School of Medicine, Kyoto University
  • Santos Luciano HO
    Graduate School of Informatics, Kyoto University; Kyoto University Hospital; Graduate School of Medicine, Kyoto University
  • Kuroda Tomohiro
    Graduate School of Informatics, Kyoto University; Kyoto University Hospital; Graduate School of Medicine, Kyoto University

Abstract

Appropriate evaluation of the intraoperative state of a surgical team is essential for improving teamwork and hence for maintaining a safe surgical environment. Traditional methods for evaluating intraoperative team states, such as interviews and self-check questionnaires administered to each surgical team member, require considerable human effort, are time-consuming, and can be biased by individual recall. One effective solution is to analyze surgical video and track important team activities, such as whether the members are complying with the surgical procedure or are being distracted by unexpected events. However, because of the complexity of situations in an operating room, identifying team activities without human effort remains challenging. In this work, we propose a novel approach that automatically recognizes and quantifies intraoperative activities from surgical videos. As a first step, we focus on recognizing two activities that especially involve multiple individuals: (a) the passing of clean-packaged surgical instruments, a representative interaction between surgical technologists such as the circulating nurse and the scrub nurse, and (b) group attention that may be attracted by unexpected events. We record surgical videos as input and apply pose estimation and particle filters to extract each individual's face orientation, body orientation, and arm raising. These results, coupled with individual IDs, are then fed into an estimation model that outputs the probability of each target activity. Simultaneously, a person model is generated and bound to each individual, describing all activities that person is involved in along the timeline. We tested our method using videos of simulated activities. The results showed that the system was able to recognize instrument passing and group attention with F1 = 0.95 and F1 = 0.66, respectively. We also implemented a system with an interface that automatically annotates intraoperative activities along the video timeline, and invited feedback from surgical technologists. The results suggest that the quantified and visualized activities can help improve understanding of the intraoperative state of the surgical team.
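To make the described pipeline concrete, the following is a minimal, hypothetical sketch of how per-person features (face orientation, body orientation, arm raising) could be scored for the two target activities. The feature extraction step (pose estimation and particle filtering) is assumed to have already produced these values; the class name `PersonFrame`, the scoring functions, and all thresholds are illustrative assumptions, not the authors' actual estimation model.

```python
# Hypothetical sketch: score the two target activities from per-frame features
# assumed to be already extracted for each tracked person. All names and
# thresholds here are illustrative, not the paper's implementation.
from dataclasses import dataclass
from math import cos, radians
from typing import List

@dataclass
class PersonFrame:
    person_id: str      # tracked individual ID
    face_yaw: float     # face orientation in degrees
    body_yaw: float     # body orientation in degrees
    arm_raised: bool    # whether an arm is raised (e.g. holding out an instrument)

def passing_score(a: PersonFrame, b: PersonFrame) -> float:
    """Score how likely persons a and b are passing an instrument:
    bodies roughly facing each other and at least one arm raised."""
    facing = max(0.0, -cos(radians(a.body_yaw - b.body_yaw)))  # ~1 when opposed
    arm = 1.0 if (a.arm_raised or b.arm_raised) else 0.0
    return facing * arm

def group_attention_score(frames: List[PersonFrame], spread_deg: float = 20.0) -> float:
    """Score group attention: fraction of people whose face orientation
    falls within spread_deg of the group's mean face orientation."""
    if len(frames) < 2:
        return 0.0
    mean_yaw = sum(f.face_yaw for f in frames) / len(frames)
    aligned = sum(abs(f.face_yaw - mean_yaw) <= spread_deg for f in frames)
    return aligned / len(frames)

# Example: two nurses facing each other, one holding out a packaged instrument.
nurse_a = PersonFrame("circulating_nurse", face_yaw=0.0, body_yaw=0.0, arm_raised=True)
nurse_b = PersonFrame("scrub_nurse", face_yaw=180.0, body_yaw=180.0, arm_raised=False)
print(passing_score(nurse_a, nurse_b))            # close to 1.0
print(group_attention_score([nurse_a, nurse_b]))  # low: faces not aligned
```

In practice such frame-level scores would be smoothed over time and combined with the per-person IDs to build the timeline of activities bound to each person model, as described in the abstract.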
