Image-processing Method for Multi-lens Multispectral Cameras: Registration and Derivation of Reflectance

  • 林 志炫
    Research Center for Agricultural Information Technology, NARO / Institute for Agro-Environmental Sciences, NARO
  • 石原 光則
    Research Center for Agricultural Information Technology, NARO
  • 常松 浩史
    Research Center for Agricultural Information Technology, NARO
  • 杉浦 綾
    Research Center for Agricultural Information Technology, NARO

Bibliographic Information

Alternate titles
  • レジストレーションと反射率導出を中核とした多眼式マルチスペクトルカメラの画像処理方法 (original Japanese title)
  • レジストレーション ト ハンシャリツ ドウシュツ オ チュウカク ト シタ タガンシキ マルチスペクトルカメラ ノ ガゾウ ショリ ホウホウ (katakana reading of the Japanese title)

Abstract

Multispectral cameras for drone sensing have multiple image sensors, each of which has a different viewing angle and focal point. Aligning the images of all bands (registration) is an essential step before using the images of two or more bands for an analysis such as calculation of the normalized difference vegetation index. We suggest a feature-based registration technique that uses the OpenCV open-source computer vision library to align multispectral images simply and inexpensively. This method also corrects for lens effects including distortion, converts pixel values on images from digital numbers to values of radiance and reflectance, and exports these results as a geotagged image for further processing. In the process of multispectral image registration, one band of the images is used as the reference image, and the others are set as sensed images. The sensed images are then aligned to the reference image by using 3×3 homography matrices, which are estimated with OpenCV from the corresponding keypoints between the reference image and each sensed image. We investigated the method with multiple sets of images taken by three major drone-mountable multispectral cameras (MicaSense RedEdge-3, Parrot Sequoia+, and DJI P4 Multispectral), with five feature-detector-descriptor algorithms (AKAZE, SIFT, SURF, BRISK, and ORB), and with each band serving as the reference. We found that the green band was the best reference. The success rates of AKAZE and SIFT were similar and exceeded 89% for all image sets, but the processing time was shorter for SIFT than for AKAZE, especially for large images. We also describe reasonable methods for deriving reflectance from the images of each camera.
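The registration step described in the abstract (detect keypoints in a reference band and a sensed band, match them, estimate a 3×3 homography, and warp the sensed band into the reference frame) can be reproduced with standard OpenCV calls. The sketch below is an illustration under assumptions, not the authors' implementation: the choice of detector, the RANSAC reprojection threshold of 5.0 pixels, and the function and file names are all hypothetical.

```python
import cv2
import numpy as np

def register_band(reference, sensed, detector=None):
    """Align a sensed band to the reference band via a feature-based homography."""
    if detector is None:
        detector = cv2.AKAZE_create()  # cv2.SIFT_create(), cv2.ORB_create(), etc. also work

    # Detect keypoints and compute descriptors in both band images.
    kp_ref, des_ref = detector.detectAndCompute(reference, None)
    kp_sen, des_sen = detector.detectAndCompute(sensed, None)

    # Brute-force matching: Hamming distance for binary descriptors (AKAZE, ORB, BRISK),
    # L2 distance for float descriptors (SIFT, SURF).
    norm = cv2.NORM_HAMMING if des_ref.dtype == np.uint8 else cv2.NORM_L2
    matcher = cv2.BFMatcher(norm, crossCheck=True)
    matches = matcher.match(des_ref, des_sen)

    # Coordinates of the corresponding keypoints in each image.
    pts_ref = np.float32([kp_ref[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    pts_sen = np.float32([kp_sen[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # Estimate the 3x3 homography (sensed -> reference) with RANSAC to reject mismatches.
    H, _ = cv2.findHomography(pts_sen, pts_ref, cv2.RANSAC, 5.0)

    # Resample the sensed band onto the reference band's pixel grid.
    h, w = reference.shape[:2]
    aligned = cv2.warpPerspective(sensed, H, (w, h))
    return aligned, H
```

Usage might look like the following, with the green band as the reference (the band the paper found to work best) and hypothetical file names standing in for one camera's per-band images; RANSAC is used so that a few incorrect keypoint matches do not distort the estimated homography.

```python
green = cv2.imread("band_green.tif", cv2.IMREAD_GRAYSCALE)  # reference band
red = cv2.imread("band_red.tif", cv2.IMREAD_GRAYSCALE)      # sensed band
red_aligned, H = register_band(green, red)
```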
