共 50 条
- [2] Boost Supervised Pretraining for Visual Transfer Learning: Implications of Self-Supervised Contrastive Representation Learning THIRTY-SIXTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FOURTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE / THE TWELVETH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2022, : 2307 - 2315
- [3] EFFECTS OF PRETRAINING ON SOUND DISCRIMINATION-LEARNING JOURNAL OF SPEECH AND HEARING RESEARCH, 1963, 6 (02): : 171 - 180
- [4] Multimodal pretraining for unsupervised protein representation learning BIOLOGY METHODS & PROTOCOLS, 2024, 9 (01):
- [5] Learning Multiple Visual Tasks while Discovering their Structure 2015 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2015, : 131 - 139
- [6] How Useful is Self-Supervised Pretraining for Visual Tasks? 2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2020, : 7343 - 7352
- [7] Multistain Pretraining for Slide Representation Learning in Pathology COMPUTER VISION - ECCV 2024, PT XXXIII, 2025, 15091 : 19 - 37
- [9] Pretraining Methods for Dialog Context Representation Learning 57TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2019), 2019, : 3836 - 3845
- [10] Same Representation, Different Attentions: Shareable Sentence Representation Learning from Multiple Tasks PROCEEDINGS OF THE TWENTY-SEVENTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2018, : 4616 - 4622