A deep-learning model based on fusion images of chest radiography and X-ray sponge images supports human visual characteristics of retained surgical items detection

  • Kawakubo, Masateru
    Department of Health Sciences, Faculty of Medical Sciences, Kyushu University
  • Waki, Hiroto
    Department of Radiological Technology, Hyogo Medical University Hospital
  • Shirasaka, Takashi
    Division of Radiology, Department of Medical Technology, Kyushu University Hospital; Department of Diagnostic Radiology, Faculty of Life Sciences, Kumamoto University
  • Kojima, Tsukasa
    Division of Radiology, Department of Medical Technology, Kyushu University Hospital; Department of Health Sciences, Graduate School of Medical Sciences, Kyushu University
  • Mikayama, Ryoji
    Division of Radiology, Department of Medical Technology, Kyushu University Hospital
  • Hamasaki, Hiroshi
    Division of Radiology, Department of Medical Technology, Kyushu University Hospital; Department of Health Sciences, Graduate School of Medical Sciences, Kyushu University
  • Akamine, Hiroshi
    Division of Radiology, Department of Medical Technology, Kyushu University Hospital; Department of Health Sciences, Graduate School of Medical Sciences, Kyushu University
  • Kato, Toyoyuki
    Division of Radiology, Department of Medical Technology, Kyushu University Hospital
  • Baba, Shingo
    Department of Clinical Radiology, Graduate School of Medical Sciences, Kyushu University
  • Ushiro, Shin
    Division of Patient Safety, Kyushu University Hospital; Japan Council for Quality Health Care
  • Ishigami, Kousei
    Department of Clinical Radiology, Graduate School of Medical Sciences, Kyushu University

Description

[Purpose]
A novel deep-learning software package was previously proposed that uses post-processed images generated by fusing normal post-operative chest radiographs with X-ray images of surgical sponges; however, its detectability of retained surgical items has not been sufficiently compared with human visual evaluation. In this study, we investigated the association between the detectability of retained surgical items by deep learning and by human subjective evaluation.

[Methods]
A deep learning model was constructed from 2987 training images and 1298 validation images, each generated by post-processing the fusion of a normal post-operative chest radiograph with an X-ray image of a surgical sponge. A further 800 images were then used for testing: 400 with and 400 without a surgical sponge. The detection characteristics of retained sponges were compared between the model and an observer with 10 years of clinical experience using receiver operating characteristic (ROC) analysis.

[Results]
The deep learning model and the observer yielded, respectively: probability cutoff values of 0.37 and 0.45; areas under the curve of 0.87 and 0.76; sensitivities of 85% and 61%; and specificities of 73% and 92%.

[Conclusion]
For the detection of surgical sponges, the deep learning model showed higher sensitivity, whereas the human observer showed higher specificity. These complementary characteristics indicate that a deep learning system supporting humans could aid the clinical workflow in operating rooms for the prevention of retained surgical items.
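The ROC comparison reported in the Results (AUC, probability cutoff, sensitivity, and specificity) can be sketched in pure Python. This is an illustrative minimal implementation, not the study's code, and the scores and labels below are placeholder values, not the study's data; the cutoff is chosen by Youden's J statistic, a common (assumed) criterion that the abstract does not specify.

```python
# Sketch of an ROC analysis: AUC by the trapezoidal rule, plus the
# Youden-index cutoff and the sensitivity/specificity at that cutoff.
# Scores and labels are illustrative placeholders, not the study's data.

def roc_metrics(scores, labels):
    """Return (auc, best_cutoff, sensitivity, specificity).

    scores: predicted probabilities that a sponge is retained.
    labels: 1 if a sponge is actually present, else 0.
    """
    pos = sum(labels)
    neg = len(labels) - pos
    best = (-1.0, None, None, None)  # (Youden J, cutoff, sens, spec)
    points = []
    # Sweep every observed score as a candidate cutoff
    # (score >= cutoff -> predicted positive).
    for cut in sorted(set(scores), reverse=True):
        tp = sum(1 for s, y in zip(scores, labels) if s >= cut and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= cut and y == 0)
        sens = tp / pos        # true-positive rate
        spec = 1 - fp / neg    # 1 - false-positive rate
        points.append((fp / neg, sens))
        j = sens + spec - 1    # Youden's J statistic
        if j > best[0]:
            best = (j, cut, sens, spec)
    # Trapezoidal AUC over ROC points, anchored at (0, 0) and (1, 1).
    points = [(0.0, 0.0)] + points + [(1.0, 1.0)]
    auc = sum((x2 - x1) * (y1 + y2) / 2
              for (x1, y1), (x2, y2) in zip(points, points[1:]))
    return auc, best[1], best[2], best[3]

# Illustrative run: perfectly separable scores give AUC = 1.0.
scores = [0.9, 0.8, 0.7, 0.3, 0.2, 0.1]
labels = [1, 1, 1, 0, 0, 0]
auc, cutoff, sens, spec = roc_metrics(scores, labels)
print(auc, cutoff, sens, spec)  # 1.0 0.7 1.0 1.0
```

In the study's setting, the same kind of sweep would be run once over the model's 800 test-image probabilities and once over the observer's ratings, yielding the two ROC curves being compared.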
