SELF-REPRESENTATION CONVOLUTIONAL NEURAL NETWORKS
Cited: 0
Authors:
Gao, Hongchao [1,2]; Wang, Xi [1]; Li, Yujia [1,2]; Han, Jizhong [1]; Hu, Songlin [1]; Li, Ruixuan [3]
Affiliations:
[1] Chinese Acad Sci, Inst Informat Engn, Beijing 100093, Peoples R China
[2] Univ Chinese Acad Sci, Sch Cyber Secur, Beijing 100049, Peoples R China
[3] Huazhong Univ Sci & Technol, Sch Comp Sci & Technol, Wuhan 430074, Hubei, Peoples R China
Funding:
National Natural Science Foundation of China;
Keywords:
Convolutional Neural Networks;
Image Classification;
Self-Representation Convolution;
DOI:
10.1109/ICME.2019.00288
Chinese Library Classification:
TP31 [Computer Software];
Subject Classification Codes:
081202 ;
0835 ;
Abstract:
A traditional convolutional neural network (CNN) learns a number of fixed kernels (filters), which are used to obtain representations of fixed patterns. The knowledge representations of CNNs are therefore limited by the number of kernels. In this paper, we present a Self-Representation Convolutional (SRC) layer that obtains richer knowledge representations of images by fully exploiting the self-similarity between adjacent pixels. The SRC layer comprises a learnable local correlation measurement, which measures the importance of adjacent pixels to the current pixel, and two learnable linear parameters that perform linear projections on the adjacent pixels and on the weighted-sum vectors, respectively. Compared with regular convolutional layers, SRC layers not only obtain comparable knowledge representations but also reduce the number of learnable parameters by a factor of 3x to 56x. Empirically, CNNs with SRC layers, called Self-Representation Convolutional Neural Networks (SRCNN), achieve strong performance on a range of visual datasets (SVHN, CIFAR-10 and CIFAR-100) while enjoying significant parameter and FLOPs savings.
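The layer structure described in the abstract can be illustrated with a minimal PyTorch sketch, assuming one plausible reading: per-position attention scores over a k x k neighbourhood stand in for the learnable local correlation measurement, and two channel-wise linear maps act as the projections on the adjacent pixels and on the weighted-sum vectors. The class name SRCLayer, the score network, and all shapes are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SRCLayer(nn.Module):
    """Sketch of a self-representation convolution over a k x k neighbourhood.

    Hypothetical interpretation of the abstract: a learnable correlation scores
    each neighbour against the centre pixel, neighbours are linearly projected,
    aggregated by the scores, and the weighted sum is projected again.
    """

    def __init__(self, in_channels, out_channels, kernel_size=3):
        super().__init__()
        self.k = kernel_size
        # learnable local correlation measurement (assumption: a small scoring net)
        self.correlation = nn.Linear(2 * in_channels, 1)
        # linear projection applied to adjacent pixels (channel-wise, no k*k factor)
        self.proj_neigh = nn.Linear(in_channels, in_channels)
        # linear projection applied to the weighted-sum vectors
        self.proj_out = nn.Linear(in_channels, out_channels)

    def forward(self, x):
        b, c, h, w = x.shape
        pad = self.k // 2
        # gather k x k neighbourhoods: (b, c*k*k, h*w) -> (b, h*w, k*k, c)
        patches = F.unfold(x, self.k, padding=pad)
        patches = patches.view(b, c, self.k * self.k, h * w).permute(0, 3, 2, 1)
        # centre pixel features: (b, h*w, 1, c)
        center = x.view(b, c, h * w).permute(0, 2, 1).unsqueeze(2)
        # importance of each adjacent pixel to the current pixel
        scores = self.correlation(torch.cat([center.expand_as(patches), patches], dim=-1))
        weights = torch.softmax(scores, dim=2)                          # (b, h*w, k*k, 1)
        # project neighbours, then aggregate with the learned weights
        weighted_sum = (weights * self.proj_neigh(patches)).sum(dim=2)  # (b, h*w, c)
        out = self.proj_out(weighted_sum)                               # (b, h*w, out_c)
        return out.permute(0, 2, 1).view(b, -1, h, w)
```

Under this reading, the learnable weights scale roughly with c*c + c*out_c rather than c*out_c*k*k as in a regular convolution, which is one way the reported parameter savings could arise; the exact factor in the paper is not reproduced here.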
Pages: 1672-1677
Page count: 6