50 records in total
- [31] A Pre-Trained Language Model Based on LED for Tibetan Long Text Summarization. Proceedings of the 2024 27th International Conference on Computer Supported Cooperative Work in Design (CSCWD 2024), 2024: 992-997
- [34] Efficient Utilization of Large Pre-Trained Models for Low Resource ASR. 2023 IEEE International Conference on Acoustics, Speech, and Signal Processing Workshops (ICASSPW), 2023
- [35] Low-Resource Dialogue Summarization with Domain-Agnostic Multi-Source Pretraining. 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP 2021), 2021: 80-91
- [36] Pre-trained Personalized Review Summarization with Effective Salience Estimation. Findings of the Association for Computational Linguistics (ACL 2023), 2023: 10743-10754
- [37] Modeling Content Importance for Summarization with Pre-trained Language Models. Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2020: 3606-3611
- [38] Pre-trained Word Embedding based Parallel Text Augmentation Technique for Low-Resource NMT in Favor of Morphologically Rich Languages. Proceedings of the Third International Conference on Computer Science and Application Engineering (CSAE 2019), 2019
- [39] Adapting Generative Pre-trained Language Model for Open-domain Multimodal Sentence Summarization. Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2023), 2023: 195-204
- [40] Grounding Dialogue History: Strengths and Weaknesses of Pre-trained Transformers. AIxIA 2020 - Advances in Artificial Intelligence, 2021, 12414: 263-279