Generating Training Data Using Python Scripts for Automatic Extraction of Landmarks from Tooth Models

  • Kato Akiko
    Department of Oral Anatomy, School of Dentistry, Aichi Gakuin University; Center for Advanced Oral Science, School of Dentistry, Aichi Gakuin University
  • Hori Miki
    Center for Advanced Oral Science, School of Dentistry, Aichi Gakuin University; Department of Dental Materials Science, School of Dentistry, Aichi Gakuin University
  • Hori Tadasuke
    Center for Advanced Oral Science, School of Dentistry, Aichi Gakuin University
  • Jincho Makoto
    Center for Advanced Oral Science, School of Dentistry, Aichi Gakuin University
  • Sekine Hironao
    Center for Advanced Oral Science, School of Dentistry, Aichi Gakuin University
  • Kawai Tatsushi
    Center for Advanced Oral Science, School of Dentistry, Aichi Gakuin University; Department of Dental Materials Science, School of Dentistry, Aichi Gakuin University

Abstract

The automatic extraction of landmarks from three-dimensional tooth models using artificial intelligence is crucial for advancing dental anatomy studies. However, the difficulty of collecting sufficient data for artificial intelligence training hinders progress in this field. To automatically identify anatomical landmarks on tooth models, this paper proposes a method for generating a substantial amount of training data from the digital data of a single human maxillary canine. Human maxillary canine data were loaded into Blender, and the landmark was defined as the cusp of the canine. The coordinate values of the centroid of the landmark were then used as the correct-answer labels. A total of 22,896 training images were generated using Python scripts and split into training and validation datasets. Pairs of images and label data were fed into the artificial intelligence network, which comprises four convolutional layers and one max-pooling layer. The accuracy of the trained artificial intelligence was evaluated on 915 images of 200 × 200 pixels that differed from the training images. The average Euclidean distance error of the artificial intelligence-predicted coordinate values of the landmark was 6.1 pixels. Our approach, based on training data generated in Blender using Python scripts, provides a powerful solution when obtaining a sufficient amount of medical field data for artificial intelligence-assisted procedures is challenging.
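
The abstract describes a Blender-and-Python pipeline: many views of a single canine model are rendered, and the 2D pixel position of the cusp centroid is recorded as the label for each image. The sketch below shows one way such a script could look; it is an illustration under stated assumptions, not the authors' actual code. The object names ("Canine", "CuspLandmark"), the 20° angular sampling, the CSV label format, and the assumption that the landmark Empty is parented to the tooth mesh are all hypothetical; only the 200 × 200 render size is taken from the abstract.

    # Minimal sketch: run inside Blender (Python console or text editor).
    # Assumptions: mesh object "Canine", an Empty "CuspLandmark" parented to it
    # at the cusp centroid, an output folder //train/ next to the .blend file.
    import csv
    import math
    import bpy
    from bpy_extras.object_utils import world_to_camera_view

    scene = bpy.context.scene
    cam = scene.camera
    tooth = bpy.data.objects["Canine"]           # assumed object name
    landmark = bpy.data.objects["CuspLandmark"]  # assumed Empty at the cusp

    scene.render.resolution_x = 200
    scene.render.resolution_y = 200

    rows = []
    index = 0
    for rx in range(0, 360, 20):                 # assumed angular sampling
        for rz in range(0, 360, 20):
            tooth.rotation_euler = (math.radians(rx), 0.0, math.radians(rz))
            bpy.context.view_layer.update()      # refresh matrices after rotating

            # Project the landmark's world position into normalized camera space,
            # then convert to pixel coordinates with a top-left image origin.
            co_world = landmark.matrix_world.translation
            co_ndc = world_to_camera_view(scene, cam, co_world)
            px = co_ndc.x * scene.render.resolution_x
            py = (1.0 - co_ndc.y) * scene.render.resolution_y

            filename = f"canine_{index:05d}.png"
            scene.render.filepath = f"//train/{filename}"
            bpy.ops.render.render(write_still=True)

            rows.append([filename, px, py])
            index += 1

    # Write the cusp-centroid pixel coordinates as the correct-answer labels.
    with open(bpy.path.abspath("//train/labels.csv"), "w", newline="") as f:
        csv.writer(f).writerows(rows)

Each rendered image, paired with its labeled (px, py) coordinates, can then be fed to the network of four convolutional layers and one max-pooling layer described in the abstract, and the Euclidean distance between predicted and labeled pixel coordinates gives the reported error metric.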
