Deep Learning-based Multimodal Control Interface for Human-Robot Collaboration

Cited by: 37
Authors
Liu, Hongyi [1 ]
Fang, Tongtong [1 ]
Zhou, Tianyu [1 ]
Wang, Yuquan [1 ]
Wang, Lihui [1 ]
Affiliations
[1] KTH Royal Inst Technol, Brinellvagen 68, S-11428 Stockholm, Sweden
Keywords
Human-robot collaboration; Deep learning; Robot control;
DOI
10.1016/j.procir.2018.03.224
Chinese Library Classification
T [Industrial Technology];
Discipline Code
08 ;
Abstract
In human-robot collaborative manufacturing, industrial robots are required to dynamically change their pre-programmed tasks and collaborate with human operators at the same workstation. However, traditional industrial robots are driven by pre-programmed control code, which cannot support the emerging needs of human-robot collaboration. In response to this need, this research explored a deep learning-based multimodal robot control interface for human-robot collaboration. Three methods were integrated into the multimodal interface: voice recognition, hand motion recognition, and body posture recognition. Deep learning was adopted as the algorithm for classification and recognition, and human-robot collaboration-specific datasets were collected to support it. The results presented at the end of the paper show the potential of adopting deep learning in human-robot collaboration systems. (C) 2018 The Authors. Published by Elsevier B.V. Peer-review under responsibility of the scientific committee of the 51st CIRP Conference on Manufacturing Systems.
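The abstract describes integrating voice, hand motion, and body posture recognition into one control interface. The paper does not specify how the modalities are combined; as a purely illustrative sketch (not the authors' method), one common option is late fusion, averaging per-modality class probabilities into a single command decision. The command names, logits, and weights below are hypothetical:

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def fuse_commands(voice_logits, hand_logits, posture_logits,
                  weights=(1.0, 1.0, 1.0)):
    """Late fusion: weighted average of per-modality class probabilities.

    Each *_logits array holds one classifier's raw scores over the same
    set of robot commands; the fused argmax is the command to execute.
    """
    probs = (weights[0] * softmax(voice_logits)
             + weights[1] * softmax(hand_logits)
             + weights[2] * softmax(posture_logits)) / sum(weights)
    return int(np.argmax(probs)), probs

# Hypothetical logits over 4 commands: stop, resume, slow, handover.
voice   = np.array([2.0, 0.1, 0.3, 0.2])   # voice classifier favours "stop"
hand    = np.array([1.5, 0.2, 0.9, 0.1])   # hand motion agrees
posture = np.array([0.4, 0.3, 1.8, 0.2])   # posture suggests "slow"

cmd, probs = fuse_commands(voice, hand, posture)
```

Because two of the three modalities favour the same class, the fused decision here is "stop"; weighting lets a more reliable modality (e.g. voice in a quiet cell) dominate.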
Pages: 3-8
Page count: 6