共 50 条
- [42] SELF-TRAINING AND PRE-TRAINING ARE COMPLEMENTARY FOR SPEECH RECOGNITION 2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021), 2021, : 3030 - 3034
- [43] CycleNER: An Unsupervised Training Approach for Named Entity Recognition PROCEEDINGS OF THE ACM WEB CONFERENCE 2022 (WWW'22), 2022, : 2916 - 2924
- [45] Entity Enhanced BERT Pre-training for Chinese NER PROCEEDINGS OF THE 2020 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP), 2020, : 6384 - 6396
- [46] Table Pre-training: A Survey on Model Architectures, Pre-training Objectives, and Downstream Tasks PROCEEDINGS OF THE THIRTY-FIRST INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2022, 2022, : 5426 - 5435
- [47] In Defense of Image Pre-Training for Spatiotemporal Recognition COMPUTER VISION, ECCV 2022, PT XXV, 2022, 13685 : 675 - 691
- [50] Extract-Select: A Span Selection Framework for Nested Named Entity Recognition with Generative Adversarial Training FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2022), 2022, : 85 - 96