HyperPELT: Unified Parameter-Efficient Language Model Tuning for Both Language and Vision-and-Language Tasks

Cited by: 0
Authors
Zhang, Zhengkun [1 ]
Guo, Wenya [1 ]
Meng, Xiaojun [2 ]
Wang, Yasheng [2 ]
Wang, Yadao [2 ]
Jiang, Xin [2 ]
Liu, Qun [2 ]
Yang, Zhenglu [1 ]
Affiliations
[1] Nankai Univ, CS, TKLNDST, Tianjin, Peoples R China
[2] Huawei Technol, Noah's Ark Lab, Beijing, Peoples R China
DOI: not available
Abstract
With the scale and capacity of pretrained models growing rapidly, parameter-efficient language model tuning has emerged as a popular paradigm for solving various NLP and Vision-and-Language (V&L) tasks. In this paper, we design a unified parameter-efficient multitask learning framework that works effectively on both NLP and V&L tasks. In particular, we use a shared hypernetwork that takes trainable hyper-embeddings and the visual modality as input, and outputs weights for different modules in a pretrained language model, such as the parameters inserted into multi-head attention blocks (i.e., prefix-tuning) and feed-forward blocks (i.e., adapter-tuning). Our proposed framework adds fewer trainable parameters in multi-task learning while achieving superior performance and transfer ability compared to state-of-the-art methods. Empirical results on the GLUE benchmark and multiple V&L tasks confirm the effectiveness of our framework.
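As a rough illustration of the mechanism described in the abstract (not the authors' released implementation), the PyTorch sketch below shows a shared hypernetwork that maps a trainable hyper-embedding, optionally fused with projected visual features, to prefix key/value parameters for attention blocks and down/up adapter weights for feed-forward blocks. The class name, the additive fusion of visual features, and all dimensions (embed_dim, prefix_len, adapter_dim, etc.) are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn as nn

class HyperPELTSketch(nn.Module):
    """Minimal sketch: hyper-embedding (+ visual features) -> prefix and adapter weights."""

    def __init__(self, embed_dim=64, hidden_dim=128, d_model=768,
                 n_heads=12, prefix_len=8, adapter_dim=24):
        super().__init__()
        self.n_heads, self.prefix_len = n_heads, prefix_len
        self.d_model, self.adapter_dim = d_model, adapter_dim
        # Shared encoder over the hyper-embedding (assumed architecture).
        self.encoder = nn.Sequential(nn.Linear(embed_dim, hidden_dim), nn.ReLU())
        # One output head per target module: prefix key/value vectors for
        # multi-head attention, down/up projections for the feed-forward adapter.
        self.prefix_head = nn.Linear(hidden_dim, 2 * prefix_len * d_model)
        self.adapter_head = nn.Linear(hidden_dim, 2 * d_model * adapter_dim)

    def forward(self, hyper_emb, visual_feat=None):
        # hyper_emb: trainable (embed_dim,) task/layer embedding.
        # visual_feat: optional (embed_dim,) projection of image features;
        # simple additive fusion is assumed here for illustration.
        z = hyper_emb if visual_feat is None else hyper_emb + visual_feat
        h = self.encoder(z)
        prefix_kv = self.prefix_head(h).view(
            2, self.prefix_len, self.n_heads, self.d_model // self.n_heads)
        flat = self.adapter_head(h)
        down, up = flat.split(self.d_model * self.adapter_dim)
        adapter = (down.view(self.d_model, self.adapter_dim),
                   up.view(self.adapter_dim, self.d_model))
        return prefix_kv, adapter

# Only the hypernetwork and the hyper-embeddings would be trained;
# the backbone language model stays frozen.
hypernet = HyperPELTSketch()
task_emb = nn.Parameter(torch.randn(64))
prefix_kv, (adapter_down, adapter_up) = hypernet(task_emb)
print(prefix_kv.shape, adapter_down.shape, adapter_up.shape)
```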
Pages: 11442-11453
Number of pages: 12