Dissecting Recall of Factual Associations in Auto-Regressive Language Models

Cited: 0
Authors
Geva, Mor [1]
Bastings, Jasmijn [1]
Filippova, Katja [1]
Globerson, Amir [2,3]
Affiliations
[1] Google DeepMind, London, England
[2] Tel Aviv Univ, Tel Aviv, Israel
[3] Google Res, Mountain View, CA, USA
Keywords
DOI
Not available
CLC Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Transformer-based language models (LMs) are known to capture factual knowledge in their parameters. While previous work looked into where factual associations are stored, little is known about how they are retrieved internally during inference. We investigate this question through the lens of information flow. Given a subject-relation query, we study how the model aggregates information about the subject and relation to predict the correct attribute. With interventions on attention edges, we first identify two critical points where information propagates to the prediction: one from the relation positions followed by another from the subject positions. Next, by analyzing the information at these points, we unveil a three-step internal mechanism for attribute extraction. First, the representation at the last-subject position goes through an enrichment process, driven by the early MLP sublayers, to encode many subject-related attributes. Second, information from the relation propagates to the prediction. Third, the prediction representation "queries" the enriched subject to extract the attribute. Perhaps surprisingly, this extraction is typically done via attention heads, which often encode subject-attribute mappings in their parameters. Overall, our findings introduce a comprehensive view of how factual associations are stored and extracted internally in LMs, facilitating future research on knowledge localization and editing.
Pages: 12216-12235
Page count: 20
Related Papers
50 items in total
  • [1] Painter: Teaching Auto-regressive Language Models to Draw Sketches
    Pourreza, Reza
    Bhattacharyya, Apratim
    Panchal, Sunny
    Lee, Mingu
    Madan, Pulkit
    Memisevic, Roland
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS, ICCVW, 2023, : 305 - 314
  • [2] ESTIMATION AND FORECASTING IN AUTO-REGRESSIVE MODELS
    MALINVAUD, E
    ECONOMETRICA, 1962, 30 (01) : 198 - 201
  • [3] Quantile approximations in auto-regressive portfolio models
    Ahcan, Ales
    Masten, Igor
    Polanec, Saso
    Perman, Mihael
    JOURNAL OF COMPUTATIONAL AND APPLIED MATHEMATICS, 2011, 235 (08) : 1976 - 1983
  • [4] PICARD: Parsing Incrementally for Constrained Auto-Regressive Decoding from Language Models
    Scholak, Torsten
    Schucher, Nathan
Bahdanau, Dzmitry
    2021 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2021), 2021, : 9895 - 9901
  • [5] MARLIN: Mixed-Precision Auto-Regressive Parallel Inference on Large Language Models
    Frantar, Elias
    Castro, Roberto L.
    Chen, Jiale
    Hoefler, Torsten
    Alistarh, Dan
PROCEEDINGS OF THE 30TH ACM SIGPLAN ANNUAL SYMPOSIUM ON PRINCIPLES AND PRACTICE OF PARALLEL PROGRAMMING, PPOPP 2025, 2025, : 239 - 251
  • [6] Facial expression recognition using auto-regressive models
    Dornaika, Fadi
    Davoine, Franck
    18TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION, VOL 2, PROCEEDINGS, 2006, : 520 - +
  • [7] Mixed frequency structural vector auto-regressive models
    Foroni, Claudia
    Marcellino, Massimiliano
    JOURNAL OF THE ROYAL STATISTICAL SOCIETY SERIES A-STATISTICS IN SOCIETY, 2016, 179 (02) : 403 - 425
  • [8] Damage Detection Using Vector Auto-Regressive Models
    Huang, Zongming
    Liu, Gang
    Todd, Michael
    Mao, Zhu
    HEALTH MONITORING OF STRUCTURAL AND BIOLOGICAL SYSTEMS 2013, 2013, 8695
  • [9] The tensor auto-regressive model
    Hill, Chelsey
    Li, James
    Schneider, Matthew J.
    Wells, Martin T.
    JOURNAL OF FORECASTING, 2021, 40 (04) : 636 - 652
  • [10] Probing Pre-trained Auto-regressive Language Models for Named Entity Typing and Recognition
    Epure, Elena V.
    Hennequin, Romain
LREC 2022: THIRTEENTH INTERNATIONAL CONFERENCE ON LANGUAGE RESOURCES AND EVALUATION, 2022, : 1408 - 1417