- [1] LM Transparency Tool: Interactive Tool for Analyzing Transformer Language Models. Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics, Vol. 3: System Demonstrations, 2024: 51-60
- [2] Evaluative Concepts Encoded in Metaphorical Language. Revista Genero & Direito, 2019, 8(06): 478-491
- [3] Analyzing Redundancy in Pretrained Transformer Models. Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2020: 4908-4926
- [4] Analyzing the Structure of Attention in a Transformer Language Model. BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP at ACL 2019, 2019: 63-76
- [6] Are Structural Concepts Universal in Transformer Language Models? Towards Interpretable Cross-Lingual Generalization. Findings of the Association for Computational Linguistics (EMNLP 2023), 2023: 13951-13976
- [8] Structural Guidance for Transformer Language Models. 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (ACL-IJCNLP 2021), Vol. 1, 2021: 3735-3745
- [9] Staged Training for Transformer Language Models. International Conference on Machine Learning, Vol. 162, 2022