Regional Time-Series Coding Network and Multi-View Image Generation Network for Short-Time Gait Recognition

  • Wenhao Sun
    School of Automation and Electrical Engineering, Tianjin University of Technology and Education, Tianjin 300222, China
  • Guangda Lu
    School of Automation and Electrical Engineering, Tianjin University of Technology and Education, Tianjin 300222, China
  • Zhuangzhuang Zhao
    School of Automation and Electrical Engineering, Tianjin University of Technology and Education, Tianjin 300222, China
  • Tinghang Guo
    School of Automation and Electrical Engineering, Tianjin University of Technology and Education, Tianjin 300222, China
  • Zhuanping Qin
    School of Automation and Electrical Engineering, Tianjin University of Technology and Education, Tianjin 300222, China
  • Yu Han
    School of Automation and Electrical Engineering, Tianjin University of Technology and Education, Tianjin 300222, China

Abstract

Gait recognition is an important research direction in biometric authentication. In practical applications, however, the available gait data are often short, while successful recognition typically requires a long, complete gait video; gait images captured from different views also strongly affect recognition performance. To address these problems, we design a gait data generation network that expands the cross-view image data required for gait recognition, providing sufficient input for the feature extraction branch that uses the gait silhouette as its criterion. In addition, we propose a gait motion feature extraction network based on regional time-series coding. By independently coding the time series of joint motion data within different body regions, and then combining the per-region time-series features through a secondary coding stage, we capture the distinctive motion relationships between body regions. Finally, bilinear matrix decomposition pooling is used to fuse the spatial silhouette features and the motion time-series features, enabling complete gait recognition from shorter video input. We validate the silhouette image branch and the motion time-series branch on the OUMVLP-Pose and CASIA-B datasets, respectively, and employ evaluation metrics such as the IS entropy value and Rank-1 accuracy to demonstrate the effectiveness of the proposed network. We also collect gait motion data in the real world and test the complete two-branch fusion network on them. The experimental results show that the proposed network effectively extracts the time-series features of human motion and expands multi-view gait data, and the real-world tests confirm that the method is effective and feasible for gait recognition with short-time video as input.
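To make the two core ideas in the abstract concrete, the following is a minimal sketch (not the authors' published code) of per-region temporal coding with a secondary cross-region encoder, fused with a silhouette feature by low-rank (factorized) bilinear pooling. The region grouping, encoder types, and layer sizes are assumptions for illustration only.

```python
# Sketch of regional time-series coding + factorized bilinear fusion.
# Region split, GRU encoders, and dimensions are illustrative assumptions,
# not the architecture reported in the paper.
import torch
import torch.nn as nn

# Hypothetical grouping of 18 pose joints into body regions.
REGIONS = {
    "torso": [0, 1, 2, 5, 8, 11],
    "left_arm": [5, 6, 7],
    "right_arm": [2, 3, 4],
    "left_leg": [11, 12, 13],
    "right_leg": [8, 9, 10],
}

class RegionalTemporalEncoder(nn.Module):
    """Encode each region's joint sequence independently, then apply a
    secondary encoder over the concatenated region features."""
    def __init__(self, coord_dim=2, region_dim=64, out_dim=256):
        super().__init__()
        self.region_encoders = nn.ModuleDict({
            name: nn.GRU(len(idx) * coord_dim, region_dim, batch_first=True)
            for name, idx in REGIONS.items()
        })
        # Secondary coding: models motion relations between regions.
        self.cross_region = nn.GRU(region_dim * len(REGIONS), out_dim, batch_first=True)

    def forward(self, pose_seq):                               # (B, T, J, C)
        B, T, _, _ = pose_seq.shape
        region_feats = []
        for name, idx in REGIONS.items():
            x = pose_seq[:, :, idx, :].reshape(B, T, -1)       # per-region joint series
            h, _ = self.region_encoders[name](x)               # (B, T, region_dim)
            region_feats.append(h)
        fused, _ = self.cross_region(torch.cat(region_feats, dim=-1))
        return fused[:, -1]                                    # (B, out_dim) summary

class FactorizedBilinearFusion(nn.Module):
    """Low-rank bilinear pooling of silhouette and motion features, standing in
    for the bilinear matrix decomposition pooling described in the abstract."""
    def __init__(self, dim_a, dim_b, factor=8, out_dim=256):
        super().__init__()
        self.proj_a = nn.Linear(dim_a, factor * out_dim)
        self.proj_b = nn.Linear(dim_b, factor * out_dim)
        self.factor, self.out_dim = factor, out_dim

    def forward(self, a, b):
        joint = self.proj_a(a) * self.proj_b(b)                        # element-wise interaction
        joint = joint.view(-1, self.out_dim, self.factor).sum(dim=2)   # sum-pool over factors
        joint = torch.sign(joint) * torch.sqrt(joint.abs() + 1e-6)     # power normalization
        return nn.functional.normalize(joint, dim=-1)

# Example: a short 30-frame pose clip and a precomputed silhouette feature.
pose_clip = torch.randn(4, 30, 18, 2)
silhouette_feat = torch.randn(4, 512)
motion_feat = RegionalTemporalEncoder()(pose_clip)
gait_feat = FactorizedBilinearFusion(512, 256)(silhouette_feat, motion_feat)
print(gait_feat.shape)  # torch.Size([4, 256])
```

The factorized form keeps the bilinear interaction between the two branches while avoiding a full outer-product feature, which is the usual motivation for matrix-decomposition-style pooling when fusing heterogeneous features.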

Published in

  • Entropy

Entropy 25 (6), 837, 2023-05-23

    MDPI AG

Cited by (1)
