Language-guided Robot Grasping: CLIP-based Referring Grasp Synthesis in Clutter

Cited by: 0
Authors
Tziafas, Georgios [1 ]
Xu, Yucheng [2 ]
Goel, Arushi [2 ]
Kasaei, Mohammadreza [2 ]
Li, Zhibin [3 ]
Kasaei, Hamidreza [1 ]
Affiliations
[1] Univ Groningen, Groningen, Netherlands
[2] Univ Edinburgh, Edinburgh, Midlothian, Scotland
[3] UCL, London, England
Funding
EU Horizon 2020 Programme
Keywords
Language-Guided Robot Grasping; Referring Grasp Synthesis; Visual Grounding
DOI
Not available
CLC Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Robots operating in human-centric environments require the integration of visual grounding and grasping capabilities to effectively manipulate objects based on user instructions. This work focuses on the task of referring grasp synthesis, which predicts a grasp pose for an object referred to in natural language in cluttered scenes. Existing approaches often employ multi-stage pipelines that first segment the referred object and then propose a suitable grasp, and they are evaluated on simple datasets or simulators that do not capture the complexity of natural indoor scenes. To address these limitations, we develop a challenging benchmark based on cluttered indoor scenes from the OCID dataset, for which we generate referring expressions and connect them with 4-DoF grasp poses. Further, we propose a novel end-to-end model (CROG) that leverages the visual grounding capabilities of CLIP to learn grasp synthesis directly from image-text pairs. Our results show that a vanilla integration of CLIP with pretrained models transfers poorly to our challenging benchmark, while CROG achieves significant improvements in terms of both grounding and grasping. Extensive robot experiments in both simulation and on hardware demonstrate the effectiveness of our approach in challenging interactive object grasping scenarios that include clutter.
Pages: 17
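The abstract describes grounding a referring expression by comparing CLIP-style image and text features. As a minimal illustrative sketch only (not the paper's CROG model), the core selection step can be written as a cosine-similarity argmax over candidate region embeddings; the function name `ground_referred_region` and the toy 2-D embeddings below are hypothetical, and in practice the features would come from a pretrained CLIP encoder.

```python
import numpy as np

def ground_referred_region(region_embs: np.ndarray, text_emb: np.ndarray) -> int:
    """Return the index of the region whose embedding best matches the text.

    region_embs: (N, D) array of per-region image embeddings
    text_emb:    (D,) embedding of the referring expression
    """
    # L2-normalise both sides so the dot product equals cosine similarity,
    # mirroring how CLIP-style models compare image and text features.
    r = region_embs / np.linalg.norm(region_embs, axis=1, keepdims=True)
    t = text_emb / np.linalg.norm(text_emb)
    return int(np.argmax(r @ t))

# Toy example with 2-D embeddings: the second region aligns with the query.
regions = np.array([[1.0, 0.0], [0.0, 1.0]])
print(ground_referred_region(regions, np.array([0.1, 0.9])))  # prints 1
```

An end-to-end model like CROG replaces this two-stage compare-then-select step with a single network trained directly on image-text pairs, which is what the abstract contrasts against multi-stage pipelines.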