A convolutional neural network approach for visual recognition in wheel production lines

Cited by: 2
Authors
Tong, Zheming [1 ,2 ]
Gao, Jie [1 ,2 ]
Tong, Shuiguang [1 ,2 ]
Affiliations
[1] Zhejiang Univ, State Key Lab Fluid Power & Mechatron Syst, Hangzhou 310027, Peoples R China
[2] Zhejiang Univ, Sch Mech Engn, Hangzhou 310027, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Wheel production; manufacturing; image recognition; convolutional neural network; digital image processing; ALGORITHM; EXPOSURE; IMAGES;
DOI
10.1177/1729881420926879
Chinese Library Classification (CLC)
TP24 [Robotics]
Subject classification codes
080202; 1405
Abstract
China has been the world's largest automotive manufacturing country since 2008, and its automotive wheel industry has grown steadily in step with the automobile industry. A visual recognition system that automatically classifies wheel types is a key component of the wheel production line. Traditional recognition methods rely mainly on matching extracted features, and their accuracy, robustness, and processing speed often degrade considerably in actual production. To overcome this problem, we propose a convolutional neural network approach that adaptively classifies wheel types on actual production lines with complex visual backgrounds. The essential steps of wheel identification are image acquisition, image preprocessing, and classification. An image-differencing algorithm and a histogram technique are applied to the acquired wheel images to remove track disturbances. The preprocessed wheel images are then organized into training and test sets. The approach improves the residual network ResNet-18 and evaluates the resulting model on the wheel test data. Experiments show that the method achieves an accuracy above 98% on nearly 70,000 wheel images and processes a single image at the millisecond level.
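The abstract outlines the pipeline (image acquisition, differencing- and histogram-based preprocessing, ResNet-18 classification) but gives no implementation details. The Python sketch below is therefore only an illustration under stated assumptions: the reference background frame, the use of Otsu's histogram-based threshold as a stand-in for the paper's histogram technique, the specific ResNet-18 adaptation (replacing only the final fully connected layer), and the names remove_track_background, build_wheel_classifier, classify_wheel, and num_wheel_types are all illustrative assumptions, not the authors' implementation.

# Minimal sketch of the described pipeline: background removal by image
# differencing plus a histogram-based threshold, then classification with a
# ResNet-18 backbone. All names, thresholds, and model changes here are
# assumptions for illustration, not the paper's exact method.

import cv2
import numpy as np
import torch
import torch.nn as nn
from torchvision import models, transforms


def remove_track_background(frame: np.ndarray, background: np.ndarray) -> np.ndarray:
    """Suppress the conveyor-track background via image differencing and a
    histogram-derived threshold (Otsu's method is used here as a stand-in)."""
    gray_frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray_bg = cv2.cvtColor(background, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray_frame, gray_bg)                    # image differencing
    _, mask = cv2.threshold(diff, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # histogram-based threshold
    return cv2.bitwise_and(frame, frame, mask=mask)            # keep only the wheel region


def build_wheel_classifier(num_wheel_types: int) -> nn.Module:
    """ResNet-18 backbone with the final fully connected layer replaced to match
    the number of wheel types (an assumed adaptation, not the paper's changes)."""
    model = models.resnet18(weights=None)                      # torchvision >= 0.13 API
    model.fc = nn.Linear(model.fc.in_features, num_wheel_types)
    return model


preprocess = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])


def classify_wheel(model: nn.Module, frame: np.ndarray, background: np.ndarray) -> int:
    """Run the acquisition -> preprocessing -> classification path on one image."""
    cleaned = remove_track_background(frame, background)
    rgb = cv2.cvtColor(cleaned, cv2.COLOR_BGR2RGB)             # OpenCV is BGR; PIL expects RGB
    x = preprocess(rgb).unsqueeze(0)                           # add batch dimension
    model.eval()
    with torch.no_grad():
        logits = model(x)
    return int(logits.argmax(dim=1).item())                    # predicted wheel-type index

In practice the background frame would be captured from an empty section of the track, and the classifier would be trained on the labeled wheel images before being used for inference.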
Pages: 13