Robotics multi-modal recognition system via computer-based vision

Cited: 0
Authors
Shahin, Mohammad [1 ]
Chen, F. Frank [1 ]
Hosseinzadeh, Ali [1 ]
Bouzary, Hamed [1 ]
Shahin, Awni [2 ]
Affiliations
[1] Mechanical Engineering Department, The University of Texas at San Antonio, San Antonio, United States
[2] Faculty of Education, Mu’tah University, Karak, Jordan
Abstract
This paper presents a multi-modal recognition (MMR) system that eliminates the need for barcodes and radio-frequency identification (RFID) systems. Both technologies have limitations: barcodes require the scanner to have a direct line of sight to the code, are more susceptible to errors, and can be hard to locate when affixed to oddly shaped products. RFID may overcome these problems, but its readings can be disturbed when the tag is attached to a metallic background. The proposed MMR system monitors items flowing one by one down a conveyor belt and verifies that they match their reference images, enabling robotic identification of items while picking them up, sorting them, or turning them to a desired orientation. As the business landscape evolves and competition from low-cost nations grows, new models must be created that provide a competitive edge by combining the Lean paradigm with Industry 4.0 technical advancements. This paper contributes to this field by assessing the supporting function of state-of-the-art MMR algorithms in Lean manufacturing and by exploring how MMR could be integrated into Lean manufacturing settings to enable a competitive manufacturing process in a Lean 4.0 environment. A dataset of 21,000 vegetable images across 15 classes was used to demonstrate the recent development and application of image analysis and computer vision systems in object recognition, yielding an overall detection F1-score of 85.08%. © The Author(s), under exclusive licence to Springer-Verlag London Ltd., part of Springer Nature 2024.
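
The abstract reports an overall detection F1-score of 85.08% on the 15-class vegetable dataset but gives no computation details, and it does not state whether the score is micro- or macro-averaged. The short Python sketch below is a hypothetical illustration, not the authors' code, of how a macro-averaged F1-score over multiple classes is commonly computed from a recognizer's predictions; the class names and label sequences are invented for the example.

# Hypothetical sketch, not the authors' implementation: macro-averaged F1
# over multi-class predictions, the kind of score summarized by the
# reported 85.08% overall detection F1 on the 15 vegetable classes.

def macro_f1(y_true, y_pred):
    """Average per-class F1 over every class that appears in the ground truth."""
    per_class_f1 = []
    for c in sorted(set(y_true)):
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if (tp + fp) else 0.0
        recall = tp / (tp + fn) if (tp + fn) else 0.0
        f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
        per_class_f1.append(f1)
    return sum(per_class_f1) / len(per_class_f1)

if __name__ == "__main__":
    # Invented labels for a handful of conveyor-belt frames (class names are examples only).
    truth = ["tomato", "potato", "carrot", "tomato", "cabbage", "carrot"]
    preds = ["tomato", "potato", "tomato", "tomato", "cabbage", "carrot"]
    print(f"Macro F1: {macro_f1(truth, preds):.4f}")
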
DOI
Not available
Article in Press
Related Papers
50 in total (items 41-50 shown)
  • [41] Multi-modal Sensing for Behaviour Recognition. Wang, Ziwei; Liu, Jiajun; Arablouei, Reza; Bishop-Hurley, Greg; Matthews, Melissa; Borges, Paulo. Proceedings of the 28th Annual International Conference on Mobile Computing and Networking (ACM MobiCom 2022), 2022: 900-902.
  • [42] A Multi-modal Searching Algorithm in Computer Go Based on Test. Li, Xiali; Wu, Licheng. Proceedings of the 2015 Chinese Intelligent Automation Conference: Intelligent Information Processing, 2015, 336: 143-149.
  • [43] A brain-computer interface based on multi-modal attention. Zhang, Dan; Wang, Yijun; Maye, Alexander; Engel, Andreas K.; Gao, Xiaorong; Hong, Bo; Gao, Shangkai. 2007 3rd International IEEE/EMBS Conference on Neural Engineering, Vols 1 and 2, 2007: 414+.
  • [44] Speech recognition with multi-modal features based on neural networks. Kim, Myung Won; Ryu, Joung Woo; Kim, Eun Ju. Neural Information Processing, Pt 2, Proceedings, 2006, 4233: 489-498.
  • [45] Multi-modal human motion recognition based on behaviour tree. Yang, Qin; Zhou, Zhenhua. International Journal of Biometrics, 2024, 16(3-4): 381-398.
  • [46] Empirical Mode Decomposition Based Multi-Modal Activity Recognition. Hu, Lingyue; Zhao, Kailong; Zhou, Xueling; Ling, Bingo Wing-Kuen; Liao, Guozhao. Sensors, 2020, 20(21): 1-15.
  • [47] Multi-Modal Pain Intensity Recognition Based on the SenseEmotion Database. Thiam, Patrick; Kessler, Viktor; Amirian, Mohammadreza; Bellmann, Peter; Layher, Georg; Zhang, Yan; Velana, Maria; Gruss, Sascha; Walter, Steffen; Traue, Harald C.; Schork, Daniel; Kim, Jonghwa; Andre, Elisabeth; Neumann, Heiko; Schwenker, Friedhelm. IEEE Transactions on Affective Computing, 2021, 12(3): 743-760.
  • [48] Multi-modal haptic image recognition based on deep learning. Han, Dong; Nie, Hong; Chen, Jinbao; Chen, Meng; Deng, Zhen; Zhang, Jianwei. Sensor Review, 2018, 38(4): 486-493.
  • [49] Multi-Modal Fusion Emotion Recognition Based on HMM and ANN. Xu, Chao; Cao, Tianyi; Feng, Zhiyong; Dong, Caichao. Contemporary Research on e-Business Technology and Strategy, 2012, 332: 541-550.
  • [50] Tactile texture recognition of multi-modal bionic finger based on multi-modal CBAM-CNN interpretable method. Ma, Feihong; Li, Yuliang; Chen, Meng. Displays, 2024, 83.