Image-processing Method for Multi-lens Multispectral Cameras: Registration and Derivation of Reflectance

  • Lim Jihyun
Research Center for Agricultural Information Technology, National Agriculture and Food Research Organization; Institute for Agro-Environmental Sciences, National Agriculture and Food Research Organization
  • Ishihara Mitsunori
    Research Center for Agricultural Information Technology, National Agriculture and Food Research Organization
  • Tsunematsu Hiroshi
    Research Center for Agricultural Information Technology, National Agriculture and Food Research Organization
  • Sugiura Ryo
    Research Center for Agricultural Information Technology, National Agriculture and Food Research Organization

Bibliographic Information

Other Title
  • レジストレーションと反射率導出を中核とした多眼式マルチスペクトルカメラの画像処理方法 (Image-processing method for multi-lens multispectral cameras centered on registration and derivation of reflectance)


Abstract

<p> Multispectral cameras for drone sensing have multiple image sensors, each with a different viewing angle and focal point. Aligning the images of all bands (registration) is therefore an essential step before the images of two or more bands are combined in an analysis such as calculation of the normalized difference vegetation index. We propose a feature-based registration method that uses the OpenCV open-source computer vision library to align multispectral images simply and inexpensively. The method also corrects for lens effects, including distortion; converts pixel values on images from digital numbers to values of radiance and reflectance; and exports these results as geotagged images for further processing. In the registration process, one band of the images is used as the reference image, and the others are set as sensed images. Each sensed image is then aligned to the reference image by a 3×3 homography matrix, which is estimated with OpenCV from the corresponding keypoints between the reference image and that sensed image. We investigated the method with multiple sets of images taken by three major drone-mountable multispectral cameras (MicaSense RedEdge-3, Parrot Sequoia+, and DJI P4 Multispectral), with five feature-detector-descriptor algorithms (AKAZE, SIFT, SURF, BRISK, and ORB), and with each band tested as the reference. We found that the green band was best for the reference. The success rates of AKAZE and SIFT were similar and exceeded 89% for all image sets, but the processing time was shorter for SIFT than for AKAZE, especially for large images. We also describe practical methods for deriving reflectance from the images of each camera.</p>

