Spherical Convolution Empowered Viewport Prediction in 360 Video Multicast with Limited FoV Feedback

  • Jie Li
    Hefei University of Technology, Hefei, Anhui, China
  • Ling Han
    Hefei University of Technology, Hefei, Anhui, China
  • Chong Zhang
    Hefei University of Technology, Hefei, Anhui, China
  • Qiyue Li
    Hefei University of Technology, Hefei, Anhui, China
  • Zhi Liu
    The University of Electro-Communications, Japan

Description

Field of view (FoV) prediction is critical in 360-degree video multicast, a key component of emerging virtual reality and augmented reality applications. Most current prediction methods that combine saliency detection with FoV information neither account for the fact that the distortion of projected 360-degree video invalidates the weight sharing of traditional convolutional networks, nor adequately consider the difficulty of obtaining complete multi-user FoV information; both shortcomings degrade prediction performance. This article proposes a spherical convolution-empowered FoV prediction method: a multi-source prediction framework that combines salient features extracted from the 360-degree video with limited FoV feedback information. A spherical convolutional neural network replaces the traditional two-dimensional convolutional neural network, eliminating the weight-sharing failure caused by video projection distortion. Specifically, salient spatiotemporal features are extracted by a spherical convolution-based saliency detection model, and the limited feedback FoV information is modeled as a time series by a spherical convolution-empowered gated recurrent unit (GRU) network. Finally, the extracted salient video features are combined with the modeled feedback to predict future user FoVs. Experimental results show that the proposed method outperforms other prediction methods.
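The core building block described above is the spherical convolution. The abstract does not include an implementation, but the idea — placing each kernel tap on the tangent plane of the sphere at every output pixel, so one shared weight set always sees an undistorted neighborhood regardless of latitude — can be sketched as below. This is a minimal SphereNet-style sketch in PyTorch, assuming equirectangular input frames and a 3x3 kernel; the class name `SphericalConv2d` and all sizes are illustrative assumptions, not the authors' exact design.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F


class SphericalConv2d(nn.Module):
    """Minimal SphereNet-style spherical convolution on equirectangular frames.

    For every output pixel, the 3x3 kernel taps are placed on the tangent
    plane of the sphere at that pixel (inverse gnomonic projection) and
    gathered with grid_sample, so one shared weight set sees an undistorted
    neighborhood at every latitude.
    """

    def __init__(self, in_ch, out_ch, height, width):
        super().__init__()
        # The 3x3 weights are applied to the resampled taps as a strided conv.
        self.weight = nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=3)
        self.register_buffer("grid", self._make_grid(height, width))

    @staticmethod
    def _make_grid(H, W):
        # Latitude / longitude of every pixel center of the frame.
        lat = (0.5 - (torch.arange(H) + 0.5) / H) * math.pi        # (H,)
        lon = ((torch.arange(W) + 0.5) / W - 0.5) * 2 * math.pi    # (W,)
        lat = lat.view(H, 1, 1, 1)
        lon = lon.view(1, W, 1, 1)
        # Tangent-plane offsets of the taps, one pixel of arc apart.
        d = math.tan(math.pi / H)
        dy = torch.tensor([d, 0.0, -d]).view(1, 1, 3, 1)
        dx = torch.tensor([-d, 0.0, d]).view(1, 1, 1, 3)
        rho = torch.sqrt(dx ** 2 + dy ** 2).clamp(min=1e-8)
        nu = torch.atan(rho)
        # Inverse gnomonic projection: tangent-plane point -> sphere point.
        lat_s = torch.asin((torch.cos(nu) * torch.sin(lat)
                            + dy * torch.sin(nu) * torch.cos(lat) / rho
                            ).clamp(-1.0, 1.0))
        lon_s = lon + torch.atan2(
            dx * torch.sin(nu),
            rho * torch.cos(lat) * torch.cos(nu)
            - dy * torch.sin(lat) * torch.sin(nu))
        # Back to grid_sample's normalized [-1, 1] pixel coordinates.
        u = (lon_s / math.pi + 1.0) % 2.0 - 1.0          # wrap longitude
        v = (-2.0 * lat_s / math.pi).expand(H, W, 3, 3)
        grid = torch.stack([u, v], dim=-1)               # (H, W, 3, 3, 2)
        return grid.permute(0, 2, 1, 3, 4).reshape(1, 3 * H, 3 * W, 2)

    def forward(self, x):                                # x: (B, C, H, W)
        taps = F.grid_sample(x, self.grid.expand(x.size(0), -1, -1, -1),
                             padding_mode="border", align_corners=False)
        return self.weight(taps)                         # (B, out_ch, H, W)
```

Near the poles the tangent-plane taps spread across many longitudes of the equirectangular image — exactly the latitude-dependent footprint that a fixed 2D kernel cannot reproduce, which is the weight-sharing failure the paper targets.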
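The feedback branch can likewise be sketched as a convolutional GRU whose gate convolutions are spherical, followed by a late fusion with the saliency map. Everything below is again an assumption-labeled sketch: `SphericalConvGRUCell`, `ViewportPredictor`, the channel counts, and the single-layer fusion head are hypothetical stand-ins; the paper's exact architecture may differ.

```python
class SphericalConvGRUCell(nn.Module):
    """ConvGRU cell whose gate convolutions are spherical (hypothetical
    layer sizes; not the authors' exact configuration)."""

    def __init__(self, in_ch, hid_ch, height, width):
        super().__init__()
        self.conv_z = SphericalConv2d(in_ch + hid_ch, hid_ch, height, width)
        self.conv_r = SphericalConv2d(in_ch + hid_ch, hid_ch, height, width)
        self.conv_h = SphericalConv2d(in_ch + hid_ch, hid_ch, height, width)

    def forward(self, x, h):
        z = torch.sigmoid(self.conv_z(torch.cat([x, h], dim=1)))   # update gate
        r = torch.sigmoid(self.conv_r(torch.cat([x, h], dim=1)))   # reset gate
        h_new = torch.tanh(self.conv_h(torch.cat([x, r * h], dim=1)))
        return (1 - z) * h + z * h_new


class ViewportPredictor(nn.Module):
    """Runs the GRU over the sequence of feedback FoV maps, then fuses its
    final state with the saliency map to score future viewport locations."""

    def __init__(self, hid_ch=16, height=64, width=128):
        super().__init__()
        self.hid_ch = hid_ch
        self.gru = SphericalConvGRUCell(1, hid_ch, height, width)
        self.head = SphericalConv2d(hid_ch + 1, 1, height, width)

    def forward(self, fov_seq, saliency):
        # fov_seq:  (B, T, 1, H, W) binary FoV maps from users that fed back
        # saliency: (B, 1, H, W) spatiotemporal saliency of the current frame
        B, T, _, H, W = fov_seq.shape
        h = fov_seq.new_zeros(B, self.hid_ch, H, W)
        for t in range(T):
            h = self.gru(fov_seq[:, t], h)
        # Late fusion: concatenate saliency with the temporal state.
        return torch.sigmoid(self.head(torch.cat([h, saliency], dim=1)))
```

A call such as `ViewportPredictor()(torch.rand(2, 8, 1, 64, 128).round(), torch.rand(2, 1, 64, 128))` yields a `(2, 1, 64, 128)` per-pixel FoV probability map. Late fusion is one plausible reading of "the extracted salient video features are combined"; because only partial multi-user feedback is available, the saliency branch supplies a content prior wherever the feedback maps are empty.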
