50 entries in total
- [1] UserBERT: Pre-training User Model with Contrastive Self-supervision. Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR '22), 2022, pp. 2087-2092.
- [2] SLIP: Self-supervision Meets Language-Image Pre-training. Computer Vision, ECCV 2022, Part XXVI, vol. 13686, 2022, pp. 529-544.
- [3] Multilingual Pre-training with Self-supervision from Global Co-occurrence. Findings of the Association for Computational Linguistics: ACL 2023, 2023, pp. 7526-7543.
- [4] PTUM: Pre-training User Model from Unlabeled User Behaviors via Self-supervision. Findings of the Association for Computational Linguistics: EMNLP 2020, 2020, pp. 1939-1944.
- [6] LiRA: Learning Visual Speech Representations from Audio through Self-supervision. Interspeech 2021, 2021, pp. 3011-3015.
- [7] CTAL: Pre-training Cross-modal Transformer for Audio-and-Language Representations. Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP 2021), 2021, pp. 3966-3977.
- [8] Audio-Visual Contrastive Learning with Temporal Self-Supervision. Thirty-Seventh AAAI Conference on Artificial Intelligence, vol. 37, no. 7, 2023, pp. 7996-8004.
- [10] Pre-training Mention Representations in Coreference Models. Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2020, pp. 8534-8540.