Automatic Evaluation Method for Functional Movement Screening Based on a Dual-Stream Network and Feature Fusion

Cited by: 2
Authors
Lin, Xiuchun [1]
Chen, Renguang [2]
Feng, Chen [3]
Chen, Zhide [2]
Yang, Xu [4]
Cui, Hui [5]
Affiliations
[1] Fujian Inst Educ, Fuzhou 350025, Peoples R China
[2] Fujian Normal Univ, Coll Comp & Cyber Secur, Fuzhou 350117, Peoples R China
[3] Fuzhou Polytech, Dept Informat Engn, Fuzhou 350108, Peoples R China
[4] Minjiang Univ, Fuzhou Inst Oceanog, Coll Comp & Data Sci, Fuzhou 350108, Peoples R China
[5] Monash Univ, Dept Software Syst & Cybersecur, Melbourne, Vic 3800, Australia
Funding
National Natural Science Foundation of China;
Keywords
RAFT; dual stream; feature fusion; functional movement screening;
DOI
10.3390/math12081162
Chinese Library Classification (CLC)
O1 [Mathematics];
Discipline Code
0701; 070101;
Abstract
Functional Movement Screening (FMS) is a movement-pattern quality assessment system used to evaluate basic movement capabilities such as flexibility, stability, and pliability. Movement impairments and abnormal postures can be identified by observing atypical movements and postures of the body. However, the reliability, validity, and accuracy of functional movement screening are difficult to verify because the assessment is subjective. To address this, this paper presents an automatic evaluation method for functional movement screening based on a dual-stream network and feature fusion. First, the RAFT algorithm estimates the optical flow of a video, generating a set of optical flow images that represent the motion between consecutive frames. The optical flow images and the original video frames are then fed into separate I3D streams, which captures spatiotemporal features better than a single-stream approach. In addition, this paper introduces a simple but effective attention fusion method that combines the features extracted from the optical flow stream with those from the original frames, enabling the network to focus on the most relevant parts of the input and thereby improving prediction accuracy. The fused features are used to predict the four FMS score categories. The proposed fusion produced better correlation results than more complex fusion schemes, improving accuracy by 3% over the best-performing alternative. Tests on a public dataset show that the proposed method achieves state-of-the-art evaluation metrics, improving accuracy by approximately 4% over the currently best methods. The use of deep learning makes the identification of human movement impairments and abnormal postures more objective and reliable.
Pages: 16
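The dual-stream design described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the small Conv3d encoders stand in for the two I3D branches, the RAFT optical flow is assumed to be precomputed (e.g., with torchvision's raft_large) and supplied as a 2-channel clip, and the attention fusion is reduced to learned softmax weights over the two stream features before a 4-class FMS classifier.

```python
# Minimal sketch (not the authors' code) of a dual-stream pipeline with a
# simple attention fusion head and a 4-class FMS score classifier.
# Assumptions: the tiny Conv3d encoders stand in for the two I3D branches,
# and RAFT optical flow is precomputed and passed in as a 2-channel clip.
import torch
import torch.nn as nn


class TinyEncoder(nn.Module):
    """Placeholder 3D-CNN branch standing in for one I3D stream."""

    def __init__(self, in_channels: int, feat_dim: int = 256):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(64, feat_dim, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d(1),  # global spatiotemporal pooling
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.features(x).flatten(1)  # (B, feat_dim)


class DualStreamFMS(nn.Module):
    """RGB + optical-flow streams fused by learned attention weights."""

    def __init__(self, feat_dim: int = 256, num_classes: int = 4):
        super().__init__()
        self.rgb_stream = TinyEncoder(in_channels=3, feat_dim=feat_dim)
        self.flow_stream = TinyEncoder(in_channels=2, feat_dim=feat_dim)
        # Attention fusion: score each stream's feature, softmax over streams.
        self.attn = nn.Linear(feat_dim, 1)
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, rgb: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
        f_rgb = self.rgb_stream(rgb)                      # (B, D)
        f_flow = self.flow_stream(flow)                   # (B, D)
        feats = torch.stack([f_rgb, f_flow], dim=1)       # (B, 2, D)
        weights = torch.softmax(self.attn(feats), dim=1)  # (B, 2, 1)
        fused = (weights * feats).sum(dim=1)              # (B, D)
        return self.classifier(fused)                     # FMS score logits


if __name__ == "__main__":
    model = DualStreamFMS()
    rgb_clip = torch.randn(2, 3, 16, 112, 112)   # (batch, C, T, H, W)
    flow_clip = torch.randn(2, 2, 16, 112, 112)  # RAFT flow: 2 channels (u, v)
    print(model(rgb_clip, flow_clip).shape)      # torch.Size([2, 4])
```

In practice, pretrained I3D backbones would replace the placeholder encoders, and the four output logits correspond to the four FMS score categories mentioned in the abstract.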