Abstract

Skeleton-based human action recognition is an important research area with many practical applications. Most existing methods rely on a single representation of skeletal sequences, which cannot fully capture the complex characteristics of human movement. This paper presents LFHAR (Latent Features for Human Action Recognition), a new framework that exploits multiple spatio-temporal latent representations to improve the extraction of action features. Our method captures how skeletal poses evolve over time and combines motion information from both individual joints and connected body parts. The proposed approach applies graph-based processing to each skeleton frame in a sequence and then arranges the resulting graph features into spatio-temporal matrices. We evaluated LFHAR on standard benchmarks to demonstrate its stability and effectiveness. Our method achieves significant improvements, with 2.7% higher accuracy on NTU-RGB+D 60 and 2.1% higher accuracy on NTU-RGB+D 120 compared to baseline methods. These results confirm that the LFHAR framework effectively improves skeleton-based action recognition performance.
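The abstract describes the pipeline only at a high level. The minimal NumPy sketch below is not the authors' implementation; the skeleton layout, the weight matrix W, and all function names are illustrative assumptions. It shows one way the three ingredients mentioned above could fit together: a graph step applied to each skeleton frame, the per-frame features stacked into a spatio-temporal matrix, and motion computed from individual joints and from connected body parts (bones).

import numpy as np

# Hypothetical skeleton layout (not from the paper): 5 joints in a chain.
NUM_JOINTS = 5
EDGES = [(0, 1), (1, 2), (2, 3), (3, 4)]  # parent-child bone connections

def normalized_adjacency(num_joints, edges):
    """Symmetrically normalized adjacency with self-loops, as in standard GCNs."""
    A = np.eye(num_joints)
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
    return d_inv_sqrt @ A @ d_inv_sqrt

def frame_graph_features(frame_xyz, A_hat, W):
    """One graph step on a single skeleton frame: A_hat @ X @ W."""
    return A_hat @ frame_xyz @ W

def build_spatio_temporal_matrix(sequence_xyz, A_hat, W):
    """Process each frame with the graph step, then stack the flattened
    per-frame features into a (T, joints * feature_dim) matrix."""
    per_frame = [frame_graph_features(X_t, A_hat, W).reshape(-1) for X_t in sequence_xyz]
    return np.stack(per_frame, axis=0)

def joint_and_bone_motion(sequence_xyz, edges):
    """Joint motion: frame-to-frame displacement of each joint.
    Bone motion: frame-to-frame change of each bone vector (connected body parts)."""
    joint_motion = np.diff(sequence_xyz, axis=0)                               # (T-1, J, 3)
    bones = np.stack([sequence_xyz[:, j] - sequence_xyz[:, i] for i, j in edges], axis=1)
    bone_motion = np.diff(bones, axis=0)                                       # (T-1, |E|, 3)
    return joint_motion, bone_motion

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    T, FEAT_DIM = 8, 3                      # 8 frames of 3-D joint coordinates
    sequence = rng.normal(size=(T, NUM_JOINTS, FEAT_DIM))
    A_hat = normalized_adjacency(NUM_JOINTS, EDGES)
    W = rng.normal(size=(FEAT_DIM, 4))      # toy learnable projection
    st_matrix = build_spatio_temporal_matrix(sequence, A_hat, W)
    jm, bm = joint_and_bone_motion(sequence, EDGES)
    print(st_matrix.shape, jm.shape, bm.shape)   # (8, 20) (7, 5, 3) (7, 4, 3)

In this sketch the spatio-temporal matrix and the joint/bone motion tensors are separate outputs; how LFHAR actually combines these latent representations is specified in the full paper, not here.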

First Page

88

Last Page

95

