A hybrid spatial-temporal deep learning architecture for lane detection

Cited by: 23
Authors
Dong, Yongqi [1 ]
Patil, Sandeep [2 ]
van Arem, Bart [1 ]
Farah, Haneen [1 ]
Affiliations
[1] Delft Univ Technol, Fac Civil Engn & Geosci, Dept Transport & Planning, Delft, Netherlands
[2] Delft Univ Technol, Fac Mech Maritime & Mat Engn, Delft, Netherlands
Keywords
LINE DETECTION; TRACKING
DOI
10.1111/mice.12829
Chinese Library Classification (CLC): TP39 [Applications of Computers]
Discipline Classification Codes: 081203; 0835
Abstract
Accurate and reliable lane detection is vital for the safe performance of lane-keeping assistance and lane departure warning systems. However, under certain challenging circumstances it is difficult to detect lanes accurately from a single image, as is mostly done in the current literature. Since lane markings are continuous lines, lanes that are hard to detect accurately in the current frame can often be better inferred when information from previous frames is incorporated. This study proposes a novel hybrid spatial-temporal (ST) sequence-to-one deep learning architecture that makes full use of the ST information in multiple continuous image frames to detect the lane markings in the last frame. Specifically, the hybrid model integrates: (a) a single-image feature extraction module equipped with a spatial convolutional neural network; (b) an ST feature integration module constructed with an ST recurrent neural network; and (c) an encoder-decoder structure that casts this image segmentation problem as end-to-end supervised learning. Extensive experiments reveal that the proposed architecture can effectively handle challenging driving scenes and outperforms available state-of-the-art methods.
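To make the sequence-to-one idea in the abstract concrete, the sketch below shows one plausible way to wire the three components together in PyTorch: a shared per-frame CNN encoder standing in for the spatial feature extractor, a hand-rolled ConvLSTM cell standing in for the ST recurrent module, and a transposed-convolution decoder producing a lane mask for the last frame only. All module names, channel sizes, and the ConvLSTM formulation are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a hybrid spatial-temporal sequence-to-one lane segmenter.
# Assumed/illustrative: layer sizes, the simple CNN encoder (the paper uses a
# spatial CNN), and this particular ConvLSTM cell.
import torch
import torch.nn as nn


class ConvLSTMCell(nn.Module):
    """Single convolutional LSTM cell operating on feature maps."""
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.hid_ch = hid_ch
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c


class SeqToOneLaneNet(nn.Module):
    """Encode each frame with a shared CNN, fuse frames with a ConvLSTM,
    decode only the last hidden state into a lane segmentation mask."""
    def __init__(self, hid_ch=64):
        super().__init__()
        self.encoder = nn.Sequential(              # (a) per-frame spatial features
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.st_cell = ConvLSTMCell(64, hid_ch)    # (b) spatial-temporal fusion
        self.decoder = nn.Sequential(              # (c) upsample back to input size
            nn.ConvTranspose2d(hid_ch, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),      # lane logits
        )

    def forward(self, frames):                     # frames: (B, T, 3, H, W)
        b, t, _, h, w = frames.shape
        hx = frames.new_zeros(b, self.st_cell.hid_ch, h // 4, w // 4)
        cx = torch.zeros_like(hx)
        for step in range(t):                      # sequence-to-one: keep last state
            hx, cx = self.st_cell(self.encoder(frames[:, step]), (hx, cx))
        return self.decoder(hx)                    # logits for the last frame


if __name__ == "__main__":
    model = SeqToOneLaneNet()
    clip = torch.randn(2, 5, 3, 128, 256)          # batch of 5-frame clips
    print(model(clip).shape)                       # torch.Size([2, 1, 128, 256])
```

Because only the final hidden state is decoded, supervision is applied to the last frame's lane mask, which is what makes the formulation "sequence-to-one" rather than per-frame segmentation.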
Pages: 67-86
Page count: 20