Global Self-localization based on Classification and Semantic segmentation of Omni-directional Images

Bibliographic Information

Other Title
  • 全方位画像の分類とセマンティックセグメンテーションによる大域自己位置推定

Abstract

In order for a robot to move autonomously, it must estimate its location by recognizing the surrounding environment. In this study, we propose a deep neural network model for global self-localization based on omnidirectional image classification and semantic segmentation. The model consists of a “spatial category estimation module” based on image classification, a “surrounding region distribution estimation module” based on semantic segmentation, and a “global location analysis module” that performs global self-localization from the results of the first two modules. The accuracy of image classification and semantic segmentation was evaluated in experiments using an omnidirectional image dataset captured with a THETA V camera. In addition, we evaluated global location estimation through an experiment in which subjects were asked to plot locations on a map from global location descriptions generated by the “global location analysis module”. These results confirm that the proposed model achieves global self-localization when enough region labels are available to identify locations.
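For illustration, the following is a minimal sketch, not the authors' implementation, of the three-module pipeline described in the abstract: a classification branch for spatial categories, a segmentation branch for surrounding region labels, and an analysis step that combines the two into a location estimate. All class and function names, the ResNet-18 backbones, the 1x1-convolution segmentation head, and the label counts are assumptions made for the sketch.

```python
# A hedged sketch of the three-module architecture from the abstract.
# Backbone choices and label counts are assumptions, not from the paper.
import torch
import torch.nn as nn
import torchvision.models as models

NUM_SPATIAL_CATEGORIES = 6   # assumed, e.g. corridor / lab / lobby ...
NUM_REGION_LABELS = 10       # assumed number of semantic region labels

class SpatialCategoryModule(nn.Module):
    """Image-classification branch: omnidirectional image -> spatial category."""
    def __init__(self):
        super().__init__()
        self.backbone = models.resnet18(weights=None)
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features,
                                     NUM_SPATIAL_CATEGORIES)

    def forward(self, x):
        return self.backbone(x)  # (B, NUM_SPATIAL_CATEGORIES) logits

class RegionDistributionModule(nn.Module):
    """Semantic-segmentation branch: per-pixel region labels around the robot."""
    def __init__(self):
        super().__init__()
        encoder = models.resnet18(weights=None)
        # Drop the average-pool and fc layers, keep the convolutional trunk.
        self.features = nn.Sequential(*list(encoder.children())[:-2])
        self.head = nn.Conv2d(512, NUM_REGION_LABELS, kernel_size=1)
        self.up = nn.Upsample(scale_factor=32, mode="bilinear",
                              align_corners=False)

    def forward(self, x):
        return self.up(self.head(self.features(x)))  # (B, L, H, W) logits

def analyze_global_location(category_logits, segmentation_logits):
    """Global-location analysis: combine the spatial category with the
    distribution of region labels into inputs for a location description."""
    category = category_logits.argmax(dim=1)    # (B,) spatial category
    labels = segmentation_logits.argmax(dim=1)  # (B, H, W) region labels
    # Fraction of pixels per region label, a crude stand-in for the
    # "surrounding region distribution" named in the abstract.
    dist = torch.stack([(labels == l).float().mean(dim=(1, 2))
                        for l in range(NUM_REGION_LABELS)], dim=1)
    return category, dist

# Usage on a dummy equirectangular image (a THETA V frame resized to 512x1024).
img = torch.randn(1, 3, 512, 1024)
cat, dist = analyze_global_location(SpatialCategoryModule()(img),
                                    RegionDistributionModule()(img))
print(cat.shape, dist.shape)  # torch.Size([1]) torch.Size([1, 10])
```

In this sketch the analysis step only reduces the segmentation to a label histogram; the paper's module additionally turns such statistics into textual global location descriptions, which subjects then plotted on a map.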

Journal
  • Proceedings of the Annual Conference of JSAI, JSAI2020

Details

  • CRID
    1390003825189593344
  • NII Article ID
    130007857224
  • DOI
    10.11517/pjsai.jsai2020.0_3rin462
  • Text Lang
    ja
  • Data Source
    • JaLC
    • CiNii Articles
  • Abstract License Flag
    Disallowed
