Counterfactual Contrastive Learning: Robust Representations via Causal Image Synthesis

Cited: 0
Authors
Roschewitz, Melanie [1 ]
Ribeiro, Fabio de Sousa [1 ]
Xia, Tian [1 ]
Khara, Galvin [2 ]
Glocker, Ben [1 ,2 ]
Affiliations
[1] Imperial Coll London, London, England
[2] Kheiron Med Technol, London, England
Funding
UK Engineering and Physical Sciences Research Council; European Research Council;
Keywords
Contrastive learning; Counterfactuals; Model robustness;
DOI
10.1007/978-3-031-73748-0_3
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Contrastive pretraining is well-known to improve downstream task performance and model generalisation, especially in limited label settings. However, it is sensitive to the choice of augmentation pipeline. Positive pairs should preserve semantic information while destroying domain-specific information. Standard augmentation pipelines emulate domain-specific changes with pre-defined photometric transformations, but what if we could simulate realistic domain changes instead? In this work, we show how to utilise recent progress in counterfactual image generation to this effect. We propose CF-SimCLR, a counterfactual contrastive learning approach which leverages approximate counterfactual inference for positive pair creation. Comprehensive evaluation across five datasets, on chest radiography and mammography, demonstrates that CF-SimCLR substantially improves robustness to acquisition shift with higher downstream performance on both in- and out-of-distribution data, particularly for domains which are under-represented during training.
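The abstract's core idea is that the positive view in contrastive pretraining need not come from a photometric augmentation: it can be a counterfactual of the same image rendered under a different acquisition domain, plugged into the standard SimCLR objective. As a rough illustration, the sketch below implements the generic SimCLR NT-Xent loss in NumPy, where `z1[i]` and `z2[i]` would be encoder embeddings of an image and its positive view (in CF-SimCLR, the domain counterfactual). This is a minimal sketch of the standard contrastive objective only; the counterfactual image generation itself (a separate causal image-synthesis model in the paper) is not shown, and the function name is illustrative, not taken from the authors' code.

```python
import numpy as np

def nt_xent(z1, z2, temperature=0.5):
    """NT-Xent (SimCLR) contrastive loss for a batch of positive pairs.

    z1[i] and z2[i] are embeddings of a positive pair -- e.g. an image
    and its domain counterfactual (CF-SimCLR) instead of a photometric
    augmentation (standard SimCLR). Shapes: (n, d) each.
    """
    z = np.concatenate([z1, z2], axis=0)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # cosine similarity
    sim = z @ z.T / temperature                       # (2n, 2n) logits
    np.fill_diagonal(sim, -np.inf)                    # exclude self-pairs
    n = len(z1)
    # row i's positive sits at i+n (and vice versa) in the stacked batch
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()

# Demo: perfectly matched positive pairs should score a lower loss
# than pairs whose positives have been mismatched by a shift.
rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
aligned = nt_xent(z, z)
shifted = nt_xent(z, np.roll(z, 1, axis=0))
```

The design point the loss makes concrete: whatever produces the second view only enters through `z2`, so swapping photometric augmentations for counterfactual domain changes leaves the objective untouched.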
Pages: 22-32
Page count: 11