DEEP LEARNING OF UNDERWATER IMAGES AND TRAINING DATA CRITERIA FOR UNDERSTANDING FEEDING INJURY OF ACANTHOPAGRUS SCHLEGELII IN NORI SEAWEED FARMS

Bibliographic Information

Other Title
  • ノリ養殖漁場でのクロダイの食害把握に向けた水中画像の深層学習と教師データ基準の検討

Abstract

The decreasing production of cultured nori seaweed in Okayama Prefecture has been partly attributed to feeding damage by Acanthopagrus schlegelii. We applied a deep learning-based detection model (YOLOv5) to underwater camera images of the target species to understand its feeding behavior efficiently. It is well known that both the quality and quantity of training data greatly affect the detection accuracy of such a model. However, establishing objective labeling conditions was challenging because some images taken in the natural environment were unclear. In this study, we used the Segment Anything Model (SAM), which was recently developed to segment image regions based on a huge training dataset, to create labeling criteria for unclear images efficiently. We analyzed the characteristics of images that were difficult for SAM to detect when counting the target species in actual sea areas. We also considered which unclear images should be selectively added to the training data based on the judgment of fisheries professionals. The results showed that including both SAM-detectable and non-detectable unclear images of the target species in the training data would be effective in improving accuracy.
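
The abstract does not specify how SAM detectability was judged, so the following is a minimal sketch under assumed conditions: SAM's automatic mask generator is run on each underwater frame, and a frame is flagged as "SAM-non-detectable" if no sufficiently confident mask overlaps a rough box an annotator drew around the suspected fish. The checkpoint path, thresholds, and overlap heuristic are illustrative assumptions, not the authors' actual criteria.

```python
# Sketch only: flag frames where SAM finds no confident mask near a rough fish box.
# Assumptions: a local ViT-B SAM checkpoint, hand-drawn rough boxes, and the
# predicted_iou / stability_score thresholds below are all hypothetical choices.
import cv2
import numpy as np
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
mask_generator = SamAutomaticMaskGenerator(sam)

def sam_detectable(frame_bgr: np.ndarray,
                   rough_box: tuple,
                   min_iou: float = 0.85,
                   min_stability: float = 0.90) -> bool:
    """Return True if SAM yields at least one confident mask overlapping the
    rough box (x0, y0, x1, y1) drawn around the suspected target fish."""
    image = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)  # SAM expects RGB uint8
    x0, y0, x1, y1 = rough_box
    for m in mask_generator.generate(image):
        bx, by, bw, bh = m["bbox"]  # SAM returns boxes in XYWH format
        overlaps = not (bx + bw < x0 or bx > x1 or by + bh < y0 or by > y1)
        if overlaps and m["predicted_iou"] >= min_iou and m["stability_score"] >= min_stability:
            return True
    return False

# Frames where this returns False would be routed to fisheries experts for review;
# per the abstract's conclusion, a mix of SAM-detectable and non-detectable unclear
# frames would then be added to the YOLOv5 training set.
```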
