Editorial for Special Issue on Pre-trained Large Language Models for Information Processing

Cited by: 0
Authors
Wang, Bin [1 ]
Kawahara, Tatsuya [2 ]
Li, Haizhou [3 ]
Meng, Helen [4 ]
Wu, Chung-Hsien [5 ]
Affiliations
[1] ASTAR, Inst Infocomm Res, Singapore, Singapore
[2] Kyoto Univ, Kyoto, Japan
[3] Chinese Univ Hong Kong, Shenzhen, Peoples R China
[4] Chinese Univ Hong Kong, Hong Kong, Peoples R China
[5] Natl Cheng Kung Univ, Tainan, Taiwan
Keywords
DOI
10.1561/116.00004100
Chinese Library Classification (CLC)
TM [Electrical Technology]; TN [Electronic and Communication Technology];
Subject Classification
0808; 0809;
Abstract
Pages: 3
Related Papers
50 items in total
  • [31] Adopting Pre-trained Large Language Models for Regional Language Tasks: A Case Study
    Gaikwad, Harsha
    Kiwelekar, Arvind
    Laddha, Manjushree
    Shahare, Shashank
    INTELLIGENT HUMAN COMPUTER INTERACTION, IHCI 2023, PT I, 2024, 14531 : 15 - 25
  • [32] Sorting through the noise: Testing robustness of information processing in pre-trained language models
    Pandia, Lalchand
    Ettinger, Allyson
    2021 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2021), 2021, : 1583 - 1596
  • [33] Synergizing Large Language Models and Pre-Trained Smaller Models for Conversational Intent Discovery
    Liang, Jinggui
    Liao, Lizi
    Fei, Hao
    Jiang, Jing
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: ACL 2024, 2024, : 14133 - 14147
  • [34] Guest Editorial Introduction to the Issue on Pre-Trained Models for Multi-Modality Understanding
    Zhou, Wengang
    Deng, Jiajun
    Sebe, Niculae
    Tian, Qi
    Yuille, Alan L.
    Spampinato, Concetto
    Hammal, Zakia
    IEEE TRANSACTIONS ON MULTIMEDIA, 2024, 26 : 8291 - 8296
  • [35] A Study on Accessing Linguistic Information in Pre-Trained Language Models by Using Prompts
    Di Marco, Marion
    Haemmerl, Katharina
    Fraser, Alexander
    2023 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING, EMNLP 2023, 2023, : 7328 - 7336
  • [36] Fusion of Root and Affix Information with Pre-trained Language Models for Text Classification
    Wu, Yujia
    Zhang, Xuan
    Xiao, Guohua
    Ren, Hong
    ADVANCED INTELLIGENT COMPUTING TECHNOLOGY AND APPLICATIONS, PT III, ICIC 2024, 2024, 14877 : 488 - 498
  • [37] Efficient Data Learning for Open Information Extraction with Pre-trained Language Models
    Fan, Zhiyuan
    He, Shizhu
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (EMNLP 2023), 2023, : 13056 - 13063
  • [38] Model-Agnostic Syntactical Information for Pre-Trained Programming Language Models
    Saberi, Iman
    Fard, Fatemeh H.
    2023 IEEE/ACM 20TH INTERNATIONAL CONFERENCE ON MINING SOFTWARE REPOSITORIES, MSR, 2023, : 183 - 193
  • [39] Injecting Descriptive Meta-information into Pre-trained Language Models with Hypernetworks
    Duan, Wenying
    He, Xiaoxi
    Zhou, Zimu
    Rao, Hong
    Thiele, Lothar
    INTERSPEECH 2021, 2021, : 3216 - 3220
  • [40] Clinical efficacy of pre-trained large language models through the lens of aphasia
    Cong, Yan
    Lacroix, Arianna N.
    Lee, Jiyeon
    SCIENTIFIC REPORTS, 2024, 14 (01):