Compositional Networks Enable Systematic Generalization for Grounded Language Understanding

Cited by: 0
Authors
Kuo, Yen-Ling [1 ]
Katz, Boris [1 ]
Barbu, Andrei [1 ]
Affiliations
[1] MIT, CSAIL & CBMM, Cambridge, MA 02139 USA
Keywords
DOI
None available
CLC Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Humans are remarkably flexible when understanding new sentences that include combinations of concepts they have never encountered before. Recent work has shown that while deep networks can mimic some human language abilities when presented with novel sentences, systematic variation uncovers the limitations in the language-understanding abilities of networks. We demonstrate that these limitations can be overcome by addressing the generalization challenges in the gSCAN dataset, which explicitly measures how well an agent is able to interpret novel linguistic commands grounded in vision, e.g., novel pairings of adjectives and nouns. The key principle we employ is compositionality: that the compositional structure of networks should reflect the compositional structure of the problem domain they address, while allowing other parameters to be learned end-to-end. We build a general-purpose mechanism that enables agents to generalize their language understanding to compositional domains. Crucially, our network has the same state-of-the-art performance as prior work while generalizing its knowledge when prior work does not. Our network also provides a level of interpretability that enables users to inspect what each part of the network learns. Robust grounded language understanding without dramatic failures and without corner cases is critical to building safe and fair robots; we demonstrate the significant role that compositionality can play in achieving that goal.
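The compositionality principle in the abstract — network structure mirroring the problem's compositional structure — can be illustrated with a toy grounding sketch. This is a hypothetical, hand-written analogue, not the paper's learned architecture: each word gets its own independent predicate module, and a phrase's meaning is the composition of its parts, so a novel adjective-noun pairing is grounded correctly even if that pairing never co-occurred before.

```python
# Toy sketch (hypothetical; not the paper's implementation):
# one predicate per word, composed to mirror the phrase structure.

ADJECTIVES = {
    "red":    lambda obj: obj["color"] == "red",
    "yellow": lambda obj: obj["color"] == "yellow",
}
NOUNS = {
    "circle": lambda obj: obj["shape"] == "circle",
    "square": lambda obj: obj["shape"] == "square",
}

def ground(command, scene):
    """Return scene objects matching an 'adjective noun' command."""
    adj, noun = command.split()
    keep_adj, keep_noun = ADJECTIVES[adj], NOUNS[noun]
    # Composition: the phrase predicate is the conjunction of the
    # independently defined word predicates.
    return [o for o in scene if keep_adj(o) and keep_noun(o)]

scene = [
    {"color": "red",    "shape": "circle"},
    {"color": "yellow", "shape": "square"},
    {"color": "red",    "shape": "square"},
]
# "yellow" and "square" may never have been paired before, yet the
# composed predicate grounds the phrase correctly:
print(ground("yellow square", scene))  # → [{'color': 'yellow', 'shape': 'square'}]
```

In the paper's setting the word modules are learned end-to-end rather than hand-coded, but the generalization argument is the same: because composition is structural rather than memorized, novel combinations come for free.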
Pages: 216-226
Page count: 11