Temporal Distance Matrices for Squat Classification
Description
When working out, it is necessary to perform the same action many times for it to be effective. If an action such as a squat or a bench press is performed with poor form, it can lead to serious injuries in the long term. To address this, we present an action dataset of squats in which different types of poor form are annotated, covering a diversity of users and backgrounds, and propose a model based on temporal distance matrices for the classification task. We first run a 3D pose detector, then normalize the pose and compute its distance matrix, in which each element is the distance between two joints. This representation is invariant to differences between individuals, global translation, and global rotation, allowing for high generalization to real-world data. Our classification model is a CNN with 1D convolutions. Results show that our method significantly outperforms existing approaches on this task.
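The distance-matrix representation described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: the 17-joint skeleton and the random rotation/translation used for the invariance check are assumptions for demonstration, not details taken from the paper.

```python
import numpy as np

def distance_matrix(pose):
    """Pairwise Euclidean distances between joints for one frame.

    pose: (J, 3) array of 3D joint positions. J = 17 below is a
    hypothetical joint count; the paper's exact skeleton is not
    specified here. Returns a symmetric (J, J) matrix D with
    D[i, j] = ||pose[i] - pose[j]||.
    """
    diff = pose[:, None, :] - pose[None, :, :]   # (J, J, 3) pairwise offsets
    return np.linalg.norm(diff, axis=-1)

# Invariance check: rotating and translating the whole skeleton
# leaves the distance matrix unchanged, since pairwise distances
# depend only on relative joint positions.
rng = np.random.default_rng(0)
pose = rng.normal(size=(17, 3))                  # e.g. a 17-joint skeleton

Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))     # random orthogonal matrix
moved = pose @ Q.T + np.array([1.0, -2.0, 0.5])  # rotate + translate

D1 = distance_matrix(pose)
D2 = distance_matrix(moved)
assert np.allclose(D1, D2)                       # same representation
```

Stacking one such matrix per frame (flattened to its upper triangle) yields the temporal sequence that a 1D-convolutional classifier can consume.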
Published in
- 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 2533-2542, 2019-06-01
IEEE