Grasp Detection Based on Faster Region CNN

Cited by: 0
Authors
Luo, Zihan [1 ]
Tang, Biwei [1 ]
Jiang, Shan [1 ]
Pang, Muye [1 ]
Xiang, Kui [1 ]
Affiliations
[1] Wuhan Univ Technol, Sch Automat, Intelligent Syst Res Inst, Wuhan, Hubei, Peoples R China
DOI
10.1109/icarm49381.2020.9195274
CLC Number (Chinese Library Classification)
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
Robot grasping has recently attracted increasing research interest owing to its fundamental importance in robotics. This paper introduces a deep-learning grasp detection model built on an improved Faster Region-based Convolutional Neural Network (Faster R-CNN). Unlike generic object detection, the ground-truth boxes in grasp detection have arbitrary orientations, so handling orientation is a key challenge. To address it, this paper represents the grasp rectangle with five-dimensional parameters. An improved Region Proposal Network (RPN) outputs tilted graspable regions, including their size, location, orientation, and a score for the grasp or non-grasp class. The RPN extracts candidate proposals with an efficient CNN rather than the slower selective-search method. In the classification branch, a softmax function determines whether an anchor box is foreground or background; the regression branch regresses the rotation angle. In addition, an improved Non-Maximum Suppression (NMS) generates the optimal inclined predicted grasp rectangle. To cope with the limited size of the Cornell Grasp Dataset, data augmentation and transfer learning are applied during training. In testing, the proposed model achieves detection accuracies of 92.3% under image-wise splitting and 92.5% under object-wise splitting on the Cornell Grasp Dataset.
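To make the abstract's two core ideas concrete, the Python sketch below illustrates the five-dimensional (x, y, w, h, theta) grasp-rectangle parameterization that is standard in the grasp-detection literature, together with a greedy NMS over inclined rectangles. This is a minimal illustration under stated assumptions, not the paper's implementation: the names GraspRect, rotated_iou, and nms_rotated are hypothetical, the shapely library is used only as a convenient way to compute rotated-box overlap, and the paper's specific NMS improvement is not reproduced here.

```python
# Illustrative sketch only: the (x, y, w, h, theta) grasp parameterization and a
# greedy NMS over inclined rectangles. All names and the shapely-based overlap
# are assumptions for illustration, not the paper's actual code.
import math
from dataclasses import dataclass

from shapely.geometry import Polygon  # used here only for rotated-box IoU


@dataclass
class GraspRect:
    """Five-dimensional grasp rectangle: center, size, and in-plane rotation."""
    x: float      # center x (pixels)
    y: float      # center y (pixels)
    w: float      # gripper opening width
    h: float      # gripper jaw height
    theta: float  # rotation angle from the horizontal axis (radians)

    def corners(self):
        """Four corner points of the tilted rectangle, in order."""
        c, s = math.cos(self.theta), math.sin(self.theta)
        return [(self.x + dx * c - dy * s, self.y + dx * s + dy * c)
                for dx, dy in [(-self.w / 2, -self.h / 2), (self.w / 2, -self.h / 2),
                               (self.w / 2, self.h / 2), (-self.w / 2, self.h / 2)]]


def rotated_iou(a: GraspRect, b: GraspRect) -> float:
    """Intersection-over-union between two tilted rectangles."""
    pa, pb = Polygon(a.corners()), Polygon(b.corners())
    union = pa.union(pb).area
    return pa.intersection(pb).area / union if union > 0 else 0.0


def nms_rotated(rects, scores, iou_thresh=0.3):
    """Greedy NMS: keep the highest-scoring rectangle, then discard remaining
    rectangles whose rotated IoU with a kept one exceeds the threshold."""
    order = sorted(range(len(rects)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if rotated_iou(rects[best], rects[i]) < iou_thresh]
    return keep
```

The key difference from standard axis-aligned NMS is the overlap test: because grasp rectangles are tilted, IoU must be computed between arbitrary convex quadrilaterals rather than axis-aligned boxes, which is what motivates an "improved" NMS for inclined predictions; the 0.3 threshold above is a placeholder choice.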
Pages: 323 - 328
Page count: 6
Related Papers
50 in total
  • [31] Roadside Traffic Sign Detection Based on Faster R-CNN
    Fu, Xingyu
    Fang, Bin
    Qian, Jiye
    Wu, Zhenni
    Zhu, Jiajie
    Du, Tongxin
    ICMLC 2019: 2019 11TH INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND COMPUTING, 2019, : 439 - 444
  • [32] Express parcel detection based on improved faster regions with CNN features
    Wu, Cuiling
    Duan, Xiaodong
    Ning, Tao
    JOURNAL OF INTELLIGENT & FUZZY SYSTEMS, 2023, 45 (03) : 4223 - 4238
  • [33] Face Detection With Different Scales Based on Faster R-CNN
    Wu, Wenqi
    Yin, Yingjie
    Wang, Xingang
    Xu, De
    IEEE TRANSACTIONS ON CYBERNETICS, 2019, 49 (11) : 4017 - 4028
  • [34] Aerial Target Detection Based on Improved Faster R-CNN
    Feng Xiaoyu
    Mei Wei
    Hu Dashuai
    ACTA OPTICA SINICA, 2018, 38 (06)
  • [35] Traffic sign detection method based on Faster R-CNN
    Wu, Linxiu
    Li, Houjie
    He, Jianjun
    Chen, Xuan
    2018 INTERNATIONAL SEMINAR ON COMPUTER SCIENCE AND ENGINEERING TECHNOLOGY (SCSET 2018), 2019, 1176
  • [36] Insulator Defect Detection Based on Improved Faster R-CNN
    Tang, Jinpeng
    Wang, Jiang
    Wang, Hailin
    Wei, Jiyi
    Wei, Yijian
    Qin, Mingsheng
    2022 4TH ASIA ENERGY AND ELECTRICAL ENGINEERING SYMPOSIUM (AEEES 2022), 2022, : 541 - 546
  • [37] Feature Optimization for Pedestrian Detection based on Faster R-CNN
    Ren, Mengxue
    Lu, Shuhua
    2019 INTERNATIONAL CONFERENCE ON IMAGE AND VIDEO PROCESSING, AND ARTIFICIAL INTELLIGENCE, 2019, 11321
  • [38] Gas mask wearing detection based on Faster R-CNN
    Wang, Bangrong
    Wang, Jun
    Xu, Xiaofeng
    Bao, Xianglin
    JOURNAL OF AMBIENT INTELLIGENCE AND SMART ENVIRONMENTS, 2023, 16 (01) : 57 - 71
  • [39] Inshore ship detection based on improved Faster R-CNN
    Tan, Xiangyu
    Tian, Tian
    Li, Hang
    MIPPR 2019: AUTOMATIC TARGET RECOGNITION AND NAVIGATION, 2020, 11429
  • [40] Lung Nodule Detection based on Faster R-CNN Framework
    Su, Ying
    Li, Dan
    Chen, Xiaodong
    COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE, 2021, 200 (200)