Estimating Maize-Leaf Coverage in Field Conditions by Applying a Machine Learning Algorithm to UAV Remote Sensing Images

  • Chengquan Zhou
    Institute of Agricultural Equipment, Zhejiang Academy of Agricultural Sciences (ZAAS), Zhejiang 310000, China
  • Hongbao Ye
    Institute of Agricultural Equipment, Zhejiang Academy of Agricultural Sciences (ZAAS), Zhejiang 310000, China
  • Zhifu Xu
    Institute of Agricultural Equipment, Zhejiang Academy of Agricultural Sciences (ZAAS), Zhejiang 310000, China
  • Jun Hu
    Institute of Agricultural Equipment, Zhejiang Academy of Agricultural Sciences (ZAAS), Zhejiang 310000, China
  • Xiaoyan Shi
    Institute of Agricultural Equipment, Zhejiang Academy of Agricultural Sciences (ZAAS), Zhejiang 310000, China
  • Shan Hua
    Institute of Agricultural Equipment, Zhejiang Academy of Agricultural Sciences (ZAAS), Zhejiang 310000, China
  • Jibo Yue
    Key Laboratory of Quantitative Remote Sensing in Agriculture of Ministry of Agriculture P. R. China, Beijing Research Center for Information Technology in Agriculture, Beijing 100089, China
  • Guijun Yang
    Key Laboratory of Quantitative Remote Sensing in Agriculture of Ministry of Agriculture P. R. China, Beijing Research Center for Information Technology in Agriculture, Beijing 100089, China

Description

<jats:p>Leaf coverage is an indicator of plant growth rate and predicted yield, and thus it is crucial to plant-breeding research. Robust image segmentation of leaf coverage from remote-sensing images acquired by unmanned aerial vehicles (UAVs) in varying environments can be used directly for large-scale coverage estimation, and is a key component of high-throughput field phenotyping. We therefore propose an image-segmentation method based on machine learning to extract relatively accurate coverage information from the orthophoto generated after preprocessing. The image-analysis pipeline, comprising dataset augmentation, background removal, classifier training, and noise reduction, generates a set of binary masks from which leaf coverage is obtained. We compare the proposed method with three conventional methods (Hue-Saturation-Value, an edge-detection-based algorithm, and random forest) and a state-of-the-art deep-learning method, DeepLabv3+. The proposed method improves indicators such as Qseg, Sr, Es, and mIOU by 15% to 30%. The experimental results show that this approach is less limited by radiation conditions and that the protocol can easily be implemented for extensive sampling at low cost. Accordingly, we recommend using red-green-blue (RGB)-based technology alongside conventional equipment for acquiring the leaf coverage of agricultural crops.</jats:p>
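The core output of the pipeline described above is a binary vegetation mask from which coverage is read as a pixel fraction. The following is a minimal sketch of that final step, not the paper's actual classifier: it substitutes a simple excess-green (ExG) threshold for the trained machine-learning model, and the function name `leaf_coverage` and the threshold value are illustrative assumptions.

```python
import numpy as np

def leaf_coverage(rgb: np.ndarray, exg_threshold: float = 0.1) -> float:
    """Estimate leaf coverage as the fraction of vegetation pixels.

    A simplified stand-in for the paper's pipeline: an excess-green
    (ExG) index separates vegetation from background, and coverage is
    the ratio of mask pixels to total pixels. The threshold is an
    illustrative value, not one reported in the paper.
    """
    img = rgb.astype(np.float64)
    total = img.sum(axis=2)
    total[total == 0] = 1.0            # avoid division by zero on black pixels
    r = img[..., 0] / total            # chromatic coordinates
    g = img[..., 1] / total
    b = img[..., 2] / total
    exg = 2.0 * g - r - b              # excess-green vegetation index
    mask = exg > exg_threshold         # binary vegetation mask
    return float(mask.mean())          # coverage = vegetation-pixel fraction

# Synthetic example: left half green "leaf", right half brown "soil"
img = np.zeros((10, 10, 3), dtype=np.uint8)
img[:, :5] = (40, 180, 50)    # green pixels
img[:, 5:] = (120, 90, 60)    # brown pixels
print(leaf_coverage(img))     # → 0.5
```

In the paper's full method, the thresholding step is replaced by a trained classifier and followed by noise reduction before the coverage ratio is computed.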

Cited by (1)
