Emergent Communication: Generalization and Overfitting in Lewis Games

Cited by: 0
Authors
Rita, Mathieu [1 ]
Tallec, Corentin [2 ]
Michel, Paul [2 ]
Grill, Jean-Bastien [2 ]
Pietquin, Olivier [3 ]
Dupoux, Emmanuel [4 ,5 ]
Strub, Florian [2 ]
Affiliations
[1] INRIA, Paris, France
[2] DeepMind, London, England
[3] Google Res, Brain Team, Mountain View, CA USA
[4] INRIA, CNRS, EHESS, ENS PSL, Paris, France
[5] Meta AI Res, New York, NY USA
Funding
European Research Council
Keywords
LANGUAGE EVOLUTION; COMPRESSION; DYNAMICS;
DOI
Not available
CLC number
TP18 [Artificial Intelligence Theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Lewis signaling games are a class of simple communication games for simulating the emergence of language. In these games, two agents must agree on a communication protocol in order to solve a cooperative task. Previous work has shown that agents trained to play this game with reinforcement learning tend to develop languages that display undesirable properties from a linguistic point of view (lack of generalization, lack of compositionality, etc.). In this paper, we aim to provide a better understanding of this phenomenon by analytically studying the learning problem in Lewis games. As a core contribution, we demonstrate that the standard objective in Lewis games can be decomposed into two components: a co-adaptation loss and an information loss. This decomposition enables us to surface two potential sources of overfitting, which we show may undermine the emergence of a structured communication protocol. In particular, when we control for overfitting on the co-adaptation loss, we recover desired properties in the emergent languages: they are more compositional and generalize better.
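The abstract's core claim is that the standard Lewis-game objective splits into a co-adaptation loss and an information loss. The sketch below is not the authors' implementation; it is a minimal numerical check, assuming a discrete game with an input distribution p(x), a stochastic speaker pi(m|x), and a listener rho(x|m), that the expected listener cross-entropy decomposes exactly into an expected KL divergence between the speaker-induced posterior and the listener (a term involving both agents) plus the conditional entropy of inputs given messages (a term involving the speaker alone). All variable names are illustrative, and whether this identity matches the paper's exact definitions of the two losses is an assumption here.

import numpy as np

# Illustrative sketch (not the paper's code): decompose the expected listener
# cross-entropy of a toy discrete Lewis game into a KL-type term and a
# conditional-entropy term.

rng = np.random.default_rng(0)
n_inputs, n_messages = 5, 4

p_x = rng.dirichlet(np.ones(n_inputs))                   # input distribution p(x)
pi = rng.dirichlet(np.ones(n_messages), size=n_inputs)   # speaker pi(m|x): rows sum to 1
rho = rng.dirichlet(np.ones(n_inputs), size=n_messages)  # listener rho(x|m): rows sum to 1

joint = p_x[:, None] * pi                 # joint distribution p(x, m)
p_m = joint.sum(axis=0)                   # message marginal p(m)
q_x_given_m = (joint / p_m).T             # speaker-induced posterior q(x|m), shape (m, x)

# Full objective: E_{x,m}[-log rho(x|m)]
total_loss = -(joint * np.log(rho.T)).sum()

# Information-type term: conditional entropy H(X|M), a function of the speaker only
info_term = -(p_m[:, None] * q_x_given_m * np.log(q_x_given_m)).sum()

# Co-adaptation-type term: E_m[ KL(q(.|m) || rho(.|m)) ], a function of both agents
coadapt_term = (p_m[:, None] * q_x_given_m
                * (np.log(q_x_given_m) - np.log(rho))).sum()

# The split is exact: cross-entropy = conditional entropy + expected KL
assert np.isclose(total_loss, coadapt_term + info_term)
print(total_loss, coadapt_term + info_term)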
Pages: 16
Related papers
50 items in total
  • [1] Emergent Communication for Numerical Concepts Generalization
    Zhou, Enshuai
    Hao, Yifan
    Zhang, Rui
    Guo, Yuxuan
    Du, Zidong
    Zhang, Xishan
    Song, Xinkai
    Wang, Chao
    Zhou, Xuehai
    Guo, Jiaming
    Yi, Qi
    Peng, Shaohui
    Huang, Di
    Chen, Ruizhi
    Guo, Qi
    Chen, Yunji
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 16, 2024, : 17609 - 17617
  • [2] OVERFITTING AND GENERALIZATION IN LEARNING DISCRETE PATTERNS
    LING, CX
    NEUROCOMPUTING, 1995, 8 (03) : 341 - 347
  • [3] Deep reinforcement learning with emergent communication for coalitional negotiation games
    Chen, Siqi
    Yang, Yang
    Su, Ran
    MATHEMATICAL BIOSCIENCES AND ENGINEERING, 2022, 19 (05) : 4592 - 4609
  • [4] Emergent Linguistic Phenomena in Multi-Agent Communication Games
    Graesser, Laura
    Cho, Kyunghyun
    Kiela, Douwe
    2019 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING AND THE 9TH INTERNATIONAL JOINT CONFERENCE ON NATURAL LANGUAGE PROCESSING (EMNLP-IJCNLP 2019): PROCEEDINGS OF THE CONFERENCE, 2019, : 3700 - 3710
  • [5] On overfitting, generalization, and randomly expanded training sets
    Karystinos, GN
    Pados, DA
    IEEE TRANSACTIONS ON NEURAL NETWORKS, 2000, 11 (05): : 1050 - 1057
  • [6] Generalization despite overfitting in quantum machine learning models
    Peters, Evan
    Schuld, Maria
    QUANTUM, 2023, 7
  • [7] Emergent Multiplayer Games
    Wodarczyk, Sebastian
    von Mammen, Sebastian
    2020 IEEE CONFERENCE ON GAMES (IEEE COG 2020), 2020, : 33 - 40
  • [8] Compositionality and Generalization in Emergent Languages
    Chaabouni, Rahma
    Kharitonov, Eugene
    Bouchacourt, Diane
    Dupoux, Emmanuel
    Baroni, Marco
    58TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2020), 2020, : 4427 - 4442
  • [9] A Common Generalization of Budget Games and Congestion Games
    Kiyosue, Fuga
    Takazawa, Kenjiro
    ALGORITHMIC GAME THEORY, SAGT 2022, 2022, 13584 : 258 - 274
  • [10] A common generalization of budget games and congestion games
    Kiyosue, Fuga
    Takazawa, Kenjiro
    JOURNAL OF COMBINATORIAL OPTIMIZATION, 2024, 48 (03)