50 entries in total
- [1] P2P: Tuning Pre-trained Image Models for Point Cloud Analysis with Point-to-Pixel Prompting. Advances in Neural Information Processing Systems 35 (NeurIPS 2022), 2022.
- [2] Instance-aware Dynamic Prompt Tuning for Pre-trained Point Cloud Models. 2023 IEEE/CVF International Conference on Computer Vision (ICCV 2023), 2023, pp. 14115–14124.
- [3] Controllable Generation from Pre-trained Language Models via Inverse Prompting. KDD '21: Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, 2021, pp. 2450–2460.
- [4] Probing Power by Prompting: Harnessing Pre-trained Language Models for Power Connotation Framing. 17th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2023), 2023, pp. 873–885.
- [7] Context Analysis for Pre-trained Masked Language Models. Findings of the Association for Computational Linguistics: EMNLP 2020, 2020, pp. 3789–3804.
- [9] MSP: Multi-Stage Prompting for Making Pre-trained Language Models Better Translators. Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL 2022), Vol. 1: Long Papers, 2022, pp. 6131–6142.
- [10] Pre-Trained Image Processing Transformer. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2021), 2021, pp. 12294–12305.