Visual Navigation Based on Semantic Segmentation Using Only a Monocular Camera as an External Sensor

  • Miyamoto Ryusuke
    Department of Computer Science, School of Science and Technology, Meiji University
  • Adachi Miho
    Department of Computer Science, Graduate School of Science and Technology, Meiji University
  • Ishida Hiroki
    Department of Computer Science, Graduate School of Science and Technology, Meiji University
  • Watanabe Takuto
    Department of Computer Science, Graduate School of Science and Technology, Meiji University
  • Matsutani Kouchi
    Department of Computer Science, Graduate School of Science and Technology, Meiji University
  • Komatsuzaki Hayato
    Department of Computer Science, Graduate School of Science and Technology, Meiji University
  • Sakata Shogo
    Department of Computer Science, Graduate School of Science and Technology, Meiji University
  • Yokota Raimu
    Department of Computer Science, Graduate School of Science and Technology, Meiji University
  • Kobayashi Shingo
    Department of Computer Science, Graduate School of Science and Technology, Meiji University


Abstract

The most popular external sensor for robots capable of autonomous movement is 3D LiDAR. However, even though autonomous movement itself can be achieved using only 3D LiDAR, robots that operate in environments where humans live their daily lives are typically also equipped with cameras, so that they can obtain the same information that is presented to humans. Studies on autonomous movement using only visual sensors remain relatively few, but this approach is effective at reducing the cost of sensing devices per robot. To reduce the number of external sensors required for autonomous movement, this paper proposes a novel visual navigation scheme that uses only a monocular camera as an external sensor. The key concept of the proposed scheme is to select a target point in the input image toward which the robot can move, based on the results of semantic segmentation, so that road following and obstacle avoidance are performed simultaneously. Additionally, a novel scheme called virtual LiDAR, also based on the results of semantic segmentation, is proposed to estimate the orientation of the robot relative to the current path within a traversable area. Experiments conducted during the Tsukuba Challenge 2019 demonstrated that a robot can operate in a real environment containing several obstacles, such as humans and other robots, provided that correct semantic segmentation results are available.
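The following is a minimal Python sketch of the target-point idea described above, assuming the segmentation output is an (H, W) array of per-pixel class IDs and that a single ROAD label marks traversable pixels; the class IDs, band limits, and selection rule are illustrative assumptions, not taken from the paper. Steering toward the centroid of road pixels in a band ahead of the robot couples road following with obstacle avoidance, since obstacle pixels never enter the centroid.

```python
import numpy as np

# Hypothetical class ID; the paper's label set may differ.
ROAD = 0  # traversable surface; any other ID is non-traversable

def select_target_point(seg, band_top=0.55, band_bottom=0.75):
    """Return (row, col) of a steering target, or None if no road is visible.

    seg: (H, W) integer array of per-pixel class IDs.
    The target is the centroid of road pixels inside a horizontal band
    ahead of the robot; obstacle pixels are excluded automatically, so
    road following and obstacle avoidance happen in a single step.
    """
    h, w = seg.shape
    top = int(h * band_top)
    band = seg[top:int(h * band_bottom), :]
    rows, cols = np.nonzero(band == ROAD)
    if rows.size == 0:
        return None  # no traversable pixels in the band
    return top + int(rows.mean()), int(cols.mean())
```

The virtual LiDAR idea can be sketched in the same setting: rays cast from the bottom-center of the segmented image return the distance, in pixels, to the first non-traversable pixel, yielding a scan-like range profile. Below, the heading offset relative to the path is estimated naively as the angle of the longest free beam; the paper's actual estimator may differ, and beam count and field of view are illustrative assumptions.

```python
import numpy as np

def virtual_lidar(seg, road_id=0, num_beams=61, fov_deg=120.0):
    """Emulate a 2D laser scan on a segmented image.

    Rays fan out from the bottom-center pixel; each beam reports the
    distance (in pixels) to the first non-traversable pixel it hits.
    """
    h, w = seg.shape
    origin = np.array([h - 1.0, w / 2.0])
    angles = np.deg2rad(np.linspace(-fov_deg / 2, fov_deg / 2, num_beams))
    ranges = np.zeros(num_beams)
    for i, a in enumerate(angles):
        step = np.array([-np.cos(a), np.sin(a)])  # 0 rad points straight up
        pos = origin.copy()
        while True:
            pos += step
            r, c = int(round(pos[0])), int(round(pos[1]))
            if not (0 <= r < h and 0 <= c < w) or seg[r, c] != road_id:
                break
            ranges[i] += 1.0
    return angles, ranges

def heading_offset(angles, ranges):
    """Naive orientation estimate: the longest free beam approximates
    the direction of the path, so its angle is the heading error."""
    return angles[int(np.argmax(ranges))]
```

In use, a controller would call select_target_point on each frame to obtain a steering target and heading_offset to correct the robot's orientation; both operate on the segmentation mask alone, which is what lets a monocular camera replace a range sensor.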
