Cross-Modal Retrieval via Deep and Bidirectional Representation Learning

Cited by: 109
Authors
He, Yonghao [1 ]
Xiang, Shiming [1 ]
Kang, Cuicui [2 ]
Wang, Jian [1 ]
Pan, Chunhong [1 ]
Affiliations
[1] Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, Beijing 100190, Peoples R China
[2] Chinese Acad Sci, Inst Informat Engn, Beijing 100093, Peoples R China
Funding
Beijing Natural Science Foundation; National Natural Science Foundation of China;
Keywords
Bidirectional modeling; convolutional neural network; cross-modal retrieval; representation learning; word embedding;
DOI
10.1109/TMM.2016.2558463
CLC Classification Number
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Cross-modal retrieval emphasizes understanding inter-modality semantic correlations, which is often achieved by designing a similarity function. A central requirement of such a function is that the similarity between different modalities must be computable. In this paper, a deep and bidirectional representation learning model is proposed to address image-text cross-modal retrieval. Owing to the solid progress of deep learning in computer vision and natural language processing, it is reliable to extract semantic representations from both raw image and text data using deep neural networks. Therefore, in the proposed model, two convolution-based networks are adopted to accomplish representation learning for images and texts. After passing through these networks, images and texts are mapped into a common space, in which cross-modal similarity is measured by cosine distance. A bidirectional network architecture is then designed to capture the defining property of cross-modal retrieval: bidirectional search. This architecture is characterized by simultaneously involving matched and unmatched image-text pairs during training. Accordingly, a learning framework with a maximum likelihood criterion is developed, and the network parameters are optimized via backpropagation and stochastic gradient descent. Extensive experiments are conducted to evaluate the proposed method on three publicly released datasets: IAPRTC-12, Flickr30k, and Flickr8k. The overall results show that the proposed architecture is effective and that the learned representations carry good semantics, achieving superior cross-modal retrieval performance.
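To illustrate the idea described in the abstract, the following is a minimal sketch, not the authors' implementation: two linear projections stand in for the paper's convolution-based image and text networks, both modalities are mapped into a shared space, similarity is measured by cosine, and a likelihood-style objective is computed over matched and unmatched pairs in both retrieval directions before a single SGD step. Feature dimensions, batch size, and the use of pre-extracted features are assumptions for illustration.

import torch
import torch.nn as nn
import torch.nn.functional as F

IMG_DIM, TXT_DIM, COMMON_DIM = 4096, 300, 256  # assumed feature sizes

img_proj = nn.Linear(IMG_DIM, COMMON_DIM)   # stand-in for the image network
txt_proj = nn.Linear(TXT_DIM, COMMON_DIM)   # stand-in for the text network

optimizer = torch.optim.SGD(
    list(img_proj.parameters()) + list(txt_proj.parameters()), lr=0.01)

def bidirectional_nll(img_feats, txt_feats):
    """Negative log-likelihood of the matched pairs, computed in both directions."""
    img_emb = F.normalize(img_proj(img_feats), dim=1)
    txt_emb = F.normalize(txt_proj(txt_feats), dim=1)
    sim = img_emb @ txt_emb.t()               # cosine similarity matrix (batch x batch)
    targets = torch.arange(sim.size(0))       # diagonal entries are the matched pairs
    loss_i2t = F.cross_entropy(sim, targets)       # image -> text retrieval
    loss_t2i = F.cross_entropy(sim.t(), targets)   # text -> image retrieval
    return loss_i2t + loss_t2i

# One SGD step on a hypothetical mini-batch of pre-extracted features.
img_batch, txt_batch = torch.randn(32, IMG_DIM), torch.randn(32, TXT_DIM)
loss = bidirectional_nll(img_batch, txt_batch)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f"bidirectional loss: {loss.item():.4f}")

The softmax over a batch serves only to contrast each matched pair with the unmatched pairs in both directions; the paper's actual networks and likelihood formulation should be taken from the text itself.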
Pages: 1363-1377
Number of pages: 15