Revisiting Contrastive Learning through the Lens of Neighborhood Component Analysis: an Integrated Framework

Cited by: 0
|
Authors
Ko, Ching-Yun [1 ]
Mohapatra, Jeet [1 ]
Liu, Sijia [2 ]
Chen, Pin-Yu [3 ]
Daniel, Luca [1 ]
Weng, Tsui-Wei [4 ]
Affiliations
[1] MIT, Cambridge, MA 02139 USA
[2] MSU, E Lansing, MI USA
[3] IBM Res AI, Armonk, NY USA
[4] UCSD, La Jolla, CA USA
Source
INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 162 | 2022
Funding
U.S. National Science Foundation;
Keywords
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
As a seminal tool in self-supervised representation learning, contrastive learning has gained unprecedented attention in recent years. In essence, contrastive learning aims to leverage pairs of positive and negative samples for representation learning, which relates to exploiting neighborhood information in a feature space. By investigating the connection between contrastive learning and neighborhood component analysis (NCA), we provide a novel stochastic nearest neighbor viewpoint of contrastive learning and subsequently propose a series of contrastive losses that outperform the existing ones. Under our proposed framework, we present a new methodology for designing integrated contrastive losses that simultaneously achieve good accuracy and robustness on downstream tasks. With the integrated framework, we achieve up to 6% improvement on the standard accuracy and 17% improvement on the robust accuracy.
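The NCA connection in the abstract rests on the classic stochastic-nearest-neighbor objective (Goldberger et al., 2004): each point picks a neighbor with probability given by a softmax over negative squared distances in the embedding space, and the loss rewards picking a same-class neighbor. Below is a minimal NumPy sketch of that standard NCA objective for intuition only; it is not the paper's proposed integrated contrastive losses, and all function and variable names are illustrative.

```python
import numpy as np

def nca_loss(embeddings, labels):
    """Standard NCA objective: negative mean log-probability that each
    point stochastically selects a neighbor of the same class, where
    selection probabilities are a softmax over -squared distances."""
    n = embeddings.shape[0]
    # Pairwise squared Euclidean distances, shape (n, n).
    d2 = np.sum((embeddings[:, None, :] - embeddings[None, :, :]) ** 2, axis=-1)
    np.fill_diagonal(d2, np.inf)  # a point never selects itself
    logits = -d2
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)  # row i = selection distribution of point i
    same = labels[:, None] == labels[None, :]
    np.fill_diagonal(same, False)
    p_correct = (p * same).sum(axis=1)  # prob. of picking a same-class neighbor
    return -np.log(p_correct + 1e-12).mean()
```

With two tight, well-separated clusters this loss is near zero when labels match the clusters and large when they do not, which is the neighborhood structure the abstract's contrastive-learning viewpoint exploits.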
Pages: 26