A Two-Stage Automatic Container Code Recognition Method Considering Environmental Interference

Cited: 0
Authors
Yu, Meng [1 ]
Zhu, Shanglei [1 ]
Lu, Bao [2 ]
Chen, Qiang [3 ]
Wang, Tengfei [1 ]
Affiliations
[1] Wuhan Univ Technol, Sch Transportat & Logist Engn, Wuhan 430063, Peoples R China
[2] China Tianjin Ocean Shipping Agcy Co Ltd, Tianjin 300456, Peoples R China
[3] Qingdao New Qianwan Container Terminal Co Ltd, Qingdao 266000, Peoples R China
Source
APPLIED SCIENCES-BASEL | 2024, Vol. 14, Issue 11
Funding
National Key Research and Development Program of China;
Keywords
maritime management; ship monitoring; video image recognition; convolutional neural network; feature fusion;
DOI
10.3390/app14114779
Chinese Library Classification
O6 [Chemistry];
Subject Classification Code
0703;
Abstract
Automatic Container Code Recognition (ACCR) is critical for enhancing the efficiency of container terminals. However, existing ACCR methods frequently fail to achieve satisfactory performance in complex environments at port gates. In this paper, we propose an approach for accurate, fast, and compact container code recognition by utilizing YOLOv4 for container region localization and Deeplabv3+ for character recognition. To enhance the recognition speed and accuracy of YOLOv4 and Deeplabv3+, and to facilitate their deployment at gate entrances, we introduce several improvements. First, we optimize the feature-extraction process of YOLOv4 and Deeplabv3+ to reduce their computational complexity. Second, we enhance the multi-scale recognition and loss functions of YOLOv4 to improve the accuracy and speed of container region localization. Furthermore, we adjust the dilated convolution rates of the ASPP module in Deeplabv3+. Finally, we replace two upsampling structures in the decoder of Deeplabv3+ with transposed convolution upsampling and sub-pixel convolution upsampling. Experimental results on our custom dataset demonstrate that our proposed method, C-YOLOv4, achieves a container region localization accuracy of 99.76% at a speed of 56.7 frames per second (FPS), while C-Deeplabv3+ achieves an average pixel classification accuracy (MPA) of 99.88% and an FPS of 11.4. The overall recognition success rate and recognition speed of our approach are 99.51% and 2.3 ms per frame, respectively. Moreover, C-YOLOv4 and C-Deeplabv3+ outperform existing methods in complex scenarios.
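The decoder modification summarized above (replacing the two bilinear upsampling stages of Deeplabv3+ with transposed-convolution upsampling and sub-pixel-convolution upsampling) can be illustrated with a minimal PyTorch sketch. This is not the authors' implementation; the channel count (256), the 4x scale factor, and the module names are illustrative assumptions.

# Minimal sketch (assumed parameters, not the paper's code) of two learned
# 4x upsampling modules of the kind described for the C-Deeplabv3+ decoder.
import torch
import torch.nn as nn


class TransposedUpsample(nn.Module):
    """4x upsampling via a strided transposed convolution."""
    def __init__(self, channels: int):
        super().__init__()
        self.up = nn.ConvTranspose2d(channels, channels, kernel_size=4, stride=4)

    def forward(self, x):
        return self.up(x)


class SubPixelUpsample(nn.Module):
    """4x upsampling via sub-pixel convolution (PixelShuffle)."""
    def __init__(self, channels: int, scale: int = 4):
        super().__init__()
        # Expand channels by scale^2, then rearrange channels into spatial pixels.
        self.proj = nn.Conv2d(channels, channels * scale ** 2, kernel_size=3, padding=1)
        self.shuffle = nn.PixelShuffle(scale)

    def forward(self, x):
        return self.shuffle(self.proj(x))


if __name__ == "__main__":
    feat = torch.randn(1, 256, 32, 32)          # hypothetical decoder feature map
    print(TransposedUpsample(256)(feat).shape)  # torch.Size([1, 256, 128, 128])
    print(SubPixelUpsample(256)(feat).shape)    # torch.Size([1, 256, 128, 128])

Both modules learn their upsampling weights, unlike fixed bilinear interpolation; the sub-pixel variant trades a wider intermediate channel dimension for a parameter-free rearrangement step.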
Pages: 20