Global Self-localization based on Classification and Semantic segmentation of Omni-directional Images
-
- RYU Sio
- Soka University
-
- MURATA Yuki
- Soka University Graduate School of Engineering
-
- ATSUMI Masayasu
- Soka University Graduate School of Engineering
Bibliographic Information
- Other Title
-
- 全方位画像の分類とセマンティックセグメンテーションによる大域自己位置推定
Description
<p>For a robot to move autonomously, it must estimate its location by recognizing the surrounding environment. In this study, we propose a deep neural network model for global self-localization based on omnidirectional image classification and semantic segmentation. The model consists of three modules: a “spatial category estimation module” based on image classification, a “surrounding region distribution estimation module” based on semantic segmentation, and a “global location analysis module” that performs global self-localization from the results of spatial category estimation and surrounding region distribution estimation. The accuracy of image classification and semantic segmentation was evaluated in experiments on an omnidirectional image dataset captured with a THETA-V camera. In addition, global location estimation was evaluated in an experiment in which subjects were asked to plot, on a map, the locations given by descriptions generated by the “global location analysis module”. These results confirm that the proposed model achieves global self-localization when enough region labels are available to identify locations.</p>
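The three-module pipeline in the abstract can be sketched in outline as follows. This is a minimal illustrative sketch, not the paper's implementation: the module functions here are stand-in stubs (the actual modules are deep networks for classification and semantic segmentation), and the labels, percentages, and description format are assumptions introduced only to show how the analysis module might combine the two estimates into a textual global-location description.

```python
# Hypothetical sketch of the three-module model described in the abstract.
# All labels and the description format are illustrative assumptions.
from collections import Counter

def spatial_category_module(image):
    # Stand-in for the image-classification network: one spatial
    # category label for the whole omnidirectional image.
    return "corridor"

def region_distribution_module(image):
    # Stand-in for the semantic-segmentation network: a per-pixel
    # label map, flattened here to a list of region labels.
    return ["wall"] * 50 + ["door"] * 30 + ["elevator"] * 20

def global_location_analysis(category, label_map):
    # Combines both estimates into a textual location description,
    # listing surrounding regions by their share of the label map.
    dist = Counter(label_map)
    total = sum(dist.values())
    regions = ", ".join(
        f"{label} ({count / total:.0%})" for label, count in dist.most_common()
    )
    return f"In a {category}, surrounded by: {regions}."

image = None  # stand-in for an omnidirectional image
description = global_location_analysis(
    spatial_category_module(image), region_distribution_module(image)
)
print(description)
```

A description of this kind is what, per the abstract, subjects were asked to plot on a map in the evaluation.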
Journal
-
- Proceedings of the Annual Conference of JSAI
-
Proceedings of the Annual Conference of JSAI JSAI2020 (0), 3Rin462-3Rin462, 2020
The Japanese Society for Artificial Intelligence
Details
-
- CRID
- 1390003825189593344
-
- NII Article ID
- 130007857224
-
- Text Lang
- ja
-
- Data Source
-
- JaLC
- CiNii Articles
-
- Abstract License Flag
- Disallowed