Pedestrian Action Prediction Based on Deep Features Extraction of Human Posture and Traffic Scene

Cited: 5
Authors
Diem-Phuc Tran [1]
Nguyen Gia Nhu [1]
Van-Dung Hoang [2]
Affiliations
[1] Duy Tan Univ, Da Nang, Vietnam
[2] Quang Binh Univ, Dong Hoi, Quang Binh, Vietnam
Keywords
Deep learning; Pedestrian action prediction; Deep-feature extraction; People detection; Linear classifier; ORIENTED GRADIENTS; MOTION;
DOI
10.1007/978-3-319-75420-8_53
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
The paper proposes a solution for pedestrian action prediction from single images. The prediction is based on analyzing human posture in the context of the surrounding traffic scene. Most existing solutions rely on motion properties extracted from sequential frames (video); they may achieve high accuracy, but performance is slow because the relationships between frames must be analyzed. This paper instead analyzes the relationship between pedestrian posture and the traffic scene within a single image, with the expectation of preserving accuracy without analyzing inter-frame motion. The approach consists of two phases: human detection and pedestrian action prediction. First, human detection is solved by applying the aggregate channel features (ACF) method. Pedestrian action is then predicted by extracting deep features from the detected region with a convolutional neural network (CNN) and applying a classifier trained on features extracted from a pedestrian image dataset. The minimum accuracy is 82% and the maximum is 97%, with an average response time of 0.6 s per identified pedestrian.
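As a rough illustration of the two-phase pipeline described above (a people detector, deep feature extraction, then a linear classifier), a minimal Python sketch follows. It is not the authors' implementation: OpenCV's HOG-based people detector stands in for the ACF detector, a pretrained torchvision VGG-16 stands in for the paper's CNN feature extractor, and scikit-learn's LinearSVC stands in for the trained linear classifier; the training data and action labels (e.g. crossing vs. not crossing) are hypothetical placeholders.

```python
# Sketch of a detect-then-classify pipeline for pedestrian action prediction
# from a single image. Assumptions (not from the paper): HOG replaces ACF,
# VGG-16 replaces the paper's CNN, LinearSVC replaces its linear classifier.
import cv2
import numpy as np
import torch
from torchvision import models, transforms
from sklearn.svm import LinearSVC

# --- phase 1: people detection (HOG stand-in for ACF) ---
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def detect_pedestrians(image_bgr):
    """Return bounding boxes (x, y, w, h) of detected people."""
    boxes, _ = hog.detectMultiScale(image_bgr, winStride=(8, 8))
    return boxes

# --- phase 2a: deep feature extraction with a pretrained CNN ---
backbone = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
backbone.classifier = backbone.classifier[:-1]  # drop final FC layer, keep 4096-d features
backbone.eval()

preprocess = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def extract_features(image_bgr, box):
    """Crop a detected pedestrian and return its deep feature vector."""
    x, y, w, h = box
    crop = cv2.cvtColor(image_bgr[y:y + h, x:x + w], cv2.COLOR_BGR2RGB)
    with torch.no_grad():
        feat = backbone(preprocess(crop).unsqueeze(0))
    return feat.squeeze(0).numpy()

# --- phase 2b: linear classifier over deep features ---
def train_action_classifier(train_feats, train_labels):
    """train_feats/train_labels come from a labeled pedestrian dataset (hypothetical)."""
    clf = LinearSVC()
    clf.fit(np.asarray(train_feats), np.asarray(train_labels))
    return clf

def predict_actions(image_bgr, clf):
    """Detect pedestrians in one image and predict an action label for each."""
    return [clf.predict(extract_features(image_bgr, box)[None, :])[0]
            for box in detect_pedestrians(image_bgr)]
```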
Pages: 563-572
Number of pages: 10