Simultaneous cosegmentation of tumors in PET-CT images using deep fully convolutional networks

  • Zisha Zhong
    Department of Electrical and Computer Engineering, The University of Iowa, Iowa City, IA 52242, USA
  • Yusung Kim
    Department of Radiation Oncology, University of Iowa Hospitals and Clinics, Iowa City, IA 52242, USA
  • Kristin Plichta
    Department of Radiation Oncology, University of Iowa Hospitals and Clinics, Iowa City, IA 52242, USA
  • Bryan G. Allen
    Department of Radiation Oncology, University of Iowa Hospitals and Clinics, Iowa City, IA 52242, USA
  • Leixin Zhou
    Department of Electrical and Computer Engineering, The University of Iowa, Iowa City, IA 52242, USA
  • John Buatti
    Department of Radiation Oncology, University of Iowa Hospitals and Clinics, Iowa City, IA 52242, USA
  • Xiaodong Wu
    Department of Electrical and Computer Engineering, The University of Iowa, Iowa City, IA 52242, USA

Abstract

Purpose

To investigate the use and efficiency of three-dimensional (3D) deep fully convolutional networks (DFCN) for simultaneous tumor cosegmentation in dual-modality positron emission tomography (PET) - computed tomography (CT) images of nonsmall cell lung cancer (NSCLC).

Methods

We used DFCN-based cosegmentation of NSCLC tumors in PET-CT images, exploiting both the CT and the PET information. The proposed method consists of two coupled 3D U-Nets with an encoder-decoder architecture, each of which communicates with the other so that complementary information is shared between PET and CT. The weighted average of sensitivity and positive predictive value (denoted Score), the Dice similarity coefficient (DSC), and the average symmetric surface distance (ASSD) were used to assess performance on 60 PET-CT image pairs. A Simultaneous Truth and Performance Level Estimation (STAPLE) consensus of three expert physicians' delineations served as the reference. The proposed DFCN framework was compared with three graph-based cosegmentation methods.
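The abstract does not give implementation details for the coupled networks. As an illustration only, the following is a minimal PyTorch sketch of one way two coupled 3D encoder-decoder streams could share complementary PET and CT features; the depth, channel widths, the name `CoupledUNet3D`, and the concatenation-based fusion at the bottleneck are all assumptions, not the authors' published architecture.

```python
# Illustrative sketch only: a tiny coupled two-stream 3D encoder-decoder
# with cross-modality feature sharing. Sizes and fusion are assumptions.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3x3 convolutions, each followed by batch norm and ReLU.
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, 3, padding=1),
        nn.BatchNorm3d(out_ch),
        nn.ReLU(inplace=True),
        nn.Conv3d(out_ch, out_ch, 3, padding=1),
        nn.BatchNorm3d(out_ch),
        nn.ReLU(inplace=True),
    )

class CoupledUNet3D(nn.Module):
    """Two parallel 3D U-Net streams (PET and CT) that exchange
    encoder features at the bottleneck by concatenation."""
    def __init__(self, base=16):
        super().__init__()
        self.pool = nn.MaxPool3d(2)
        self.up = nn.Upsample(scale_factor=2, mode="trilinear",
                              align_corners=False)
        # Separate encoders for each modality.
        self.enc_pet1 = conv_block(1, base)
        self.enc_pet2 = conv_block(base, base * 2)
        self.enc_ct1 = conv_block(1, base)
        self.enc_ct2 = conv_block(base, base * 2)
        # Each decoder sees the fused PET+CT bottleneck plus its own skip.
        self.dec_pet = conv_block(base * 4 + base, base)
        self.dec_ct = conv_block(base * 4 + base, base)
        self.out_pet = nn.Conv3d(base, 1, 1)
        self.out_ct = nn.Conv3d(base, 1, 1)

    def forward(self, pet, ct):
        p1 = self.enc_pet1(pet)             # PET encoder, level 1
        p2 = self.enc_pet2(self.pool(p1))   # PET encoder, level 2
        c1 = self.enc_ct1(ct)               # CT encoder, level 1
        c2 = self.enc_ct2(self.pool(c1))    # CT encoder, level 2
        fused = torch.cat([p2, c2], dim=1)  # share complementary features
        up = self.up(fused)
        seg_pet = self.out_pet(self.dec_pet(torch.cat([up, p1], dim=1)))
        seg_ct = self.out_ct(self.dec_ct(torch.cat([up, c1], dim=1)))
        return seg_pet, seg_ct  # per-modality tumor logits
```

For even-sized volumes, `CoupledUNet3D()(torch.randn(1, 1, 32, 32, 32), torch.randn(1, 1, 32, 32, 32))` returns one tumor-probability map (as logits) per modality, mirroring the paper's idea of one segmentation on PET and one on CT.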
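For concreteness, here is a minimal NumPy/SciPy sketch of the three evaluation metrics named above, assuming binary 3D masks. The 50/50 weighting in `score` is an assumption; the abstract does not state the weights behind the reported Scores.

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice(pred, ref):
    # DSC = 2|A ∩ B| / (|A| + |B|)
    inter = np.logical_and(pred, ref).sum()
    return 2.0 * inter / (pred.sum() + ref.sum())

def score(pred, ref, w=0.5):
    # Weighted average of sensitivity and positive predictive value;
    # the weight w=0.5 is an assumed placeholder.
    tp = np.logical_and(pred, ref).sum()
    sensitivity = tp / ref.sum()   # TP / (TP + FN)
    ppv = tp / pred.sum()          # TP / (TP + FP)
    return w * sensitivity + (1.0 - w) * ppv

def assd(pred, ref, spacing=(1.0, 1.0, 1.0)):
    # Average symmetric surface distance: mean distance from each surface
    # voxel of one mask to the nearest surface voxel of the other.
    pred, ref = pred.astype(bool), ref.astype(bool)
    surf_p = pred & ~binary_erosion(pred)   # boundary voxels of prediction
    surf_r = ref & ~binary_erosion(ref)     # boundary voxels of reference
    d_to_r = distance_transform_edt(~surf_r, sampling=spacing)
    d_to_p = distance_transform_edt(~surf_p, sampling=spacing)
    return (d_to_r[surf_p].sum() + d_to_p[surf_r].sum()) / (
        surf_p.sum() + surf_r.sum())
```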
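The reference standard is built with STAPLE (Warfield et al., 2004). As a rough sketch of the idea only, below is a simplified binary EM version that estimates a consensus mask together with each rater's sensitivity and specificity; the initialization values and the fixed global prior are simplifications of the full algorithm.

```python
import numpy as np

def staple_binary(segs, tol=1e-6, max_iter=100):
    """Simplified binary STAPLE: EM estimate of a consensus segmentation
    plus per-rater sensitivity p and specificity q."""
    d = np.stack([s.astype(bool).ravel() for s in segs])  # raters x voxels
    r, _ = d.shape
    w = d.mean(axis=0)                   # initial consensus probabilities
    p = np.full(r, 0.9)                  # initial sensitivities (assumed)
    q = np.full(r, 0.9)                  # initial specificities (assumed)
    prior = w.mean()                     # global tumor prior (kept fixed)
    for _ in range(max_iter):
        # E-step: posterior probability that each voxel is tumor.
        a = prior * np.prod(np.where(d, p[:, None], 1 - p[:, None]), axis=0)
        b = (1 - prior) * np.prod(np.where(d, 1 - q[:, None], q[:, None]),
                                  axis=0)
        w_new = a / (a + b + 1e-12)
        # M-step: re-estimate each rater's performance parameters.
        p = (d * w_new).sum(axis=1) / (w_new.sum() + 1e-12)
        q = (~d * (1 - w_new)).sum(axis=1) / ((1 - w_new).sum() + 1e-12)
        if np.abs(w_new - w).max() < tol:
            w = w_new
            break
        w = w_new
    return (w >= 0.5).reshape(segs[0].shape)  # consensus mask
```

Given three expert masks `m1, m2, m3`, `staple_binary([m1, m2, m3])` yields a consensus of the kind used as the evaluation reference here.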
Results

Strong agreement was observed between the proposed DFCN cosegmentation and the STAPLE references on the PET-CT images. The average DSCs on CT and PET were 0.861 ± 0.037 and 0.828 ± 0.087, respectively, with DFCN, compared with 0.638 ± 0.165 and 0.643 ± 0.141, respectively, with the graph-based cosegmentation method. The proposed DFCN cosegmentation using both PET and CT also outperformed deep learning segmentation using either PET or CT alone.

Conclusions

The proposed DFCN cosegmentation outperforms existing graph-based segmentation methods and shows promise for further integration with quantitative multimodality imaging tools in clinical trials.
