Reducing redundancy in the bottleneck representation of autoencoders

Cited by: 5
Authors
Laakom, Firas [1 ]
Raitoharju, Jenni [2 ,3 ]
Iosifidis, Alexandros [4 ]
Gabbouj, Moncef [1 ]
Affiliations
[1] Tampere Univ, Fac Informat Technol & Commun Sci, Tampere, Finland
[2] Univ Jyvaskyla, Fac Informat Technol, Jyvaskyla, Finland
[3] Finnish Environm Inst, Qual Informat, Helsinki, Finland
[4] Aarhus Univ, Dept Elect & Comp Engn, Aarhus, Denmark
Funding
Academy of Finland;
Keywords
Autoencoders; Unsupervised learning; Diversity; Feature representation; Dimensionality reduction; Image denoising; Image compression;
DOI
10.1016/j.patrec.2024.01.013
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Autoencoders (AEs) are a type of unsupervised neural network that can be used to solve various tasks, e.g., dimensionality reduction, image compression, and image denoising. An AE has two goals: (i) compress the original input to a low-dimensional space at the bottleneck of the network topology using an encoder, and (ii) reconstruct the input from the bottleneck representation using a decoder. Encoder and decoder are optimized jointly by minimizing a distortion-based loss, which implicitly forces the model to keep only the information in the input data that is needed for reconstruction and to reduce redundancies. In this paper, we propose a scheme to explicitly penalize feature redundancies in the bottleneck representation. To this end, we propose an additional loss term, based on the pairwise covariances of the network units, which complements the data reconstruction loss and forces the encoder to learn a more diverse and richer representation of the input. We tested our approach across different tasks, namely dimensionality reduction, image compression, and image denoising. Experimental results show that the proposed loss consistently leads to superior performance compared to the standard AE loss.
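As a rough illustration of the idea in the abstract, the sketch below implements one plausible form of the covariance-based penalty: the sum of squared off-diagonal entries of the covariance matrix of the bottleneck activations, added to a mean-squared reconstruction loss with weight alpha. The record does not give the paper's exact formulation, so both the penalty form and the weight alpha are assumptions, not the authors' definitive implementation.

import torch
import torch.nn.functional as F

def redundancy_penalty(z):
    """Sum of squared off-diagonal covariances of the bottleneck units.

    z: (batch_size, bottleneck_dim) tensor of encoder outputs;
    assumes batch_size > 1 so the sample covariance is defined.
    """
    z_centered = z - z.mean(dim=0, keepdim=True)
    cov = (z_centered.T @ z_centered) / (z.shape[0] - 1)  # (d, d) sample covariance
    off_diag = cov - torch.diag(torch.diag(cov))          # zero out the variances
    return (off_diag ** 2).sum()

def ae_loss(x, x_hat, z, alpha=0.01):
    """Reconstruction error plus the weighted redundancy penalty.

    alpha is a hypothetical balancing hyperparameter, not specified
    in this record.
    """
    return F.mse_loss(x_hat, x) + alpha * redundancy_penalty(z)

Penalizing the off-diagonal covariances pushes distinct bottleneck units toward decorrelation, which matches the stated goal of a more diverse, less redundant representation.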
Pages: 202-208
Page count: 7
Related Papers
50 records in total
  • [1] Sentence Bottleneck Autoencoders from Transformer Language Models
    Montero, Ivan
    Pappas, Nikolaos
    Smith, Noah A.
    2021 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2021), 2021, : 1822 - 1831
  • [2] Do Autoencoders Need a Bottleneck for Anomaly Detection?
    Yong, Bang Xiang
    Brintrup, Alexandra
    IEEE ACCESS, 2022, 10 : 78455 - 78471
  • [3] REDUCING THE LABOR BOTTLENECK
    Metal Casting Design and Purchasing, 2023, 25
  • [5] Image Compression: Sparse Coding vs. Bottleneck Autoencoders
    Watkins, Yijing
    Iaroshenko, Oleksandr
    Sayeh, Mohammad
    Kenyon, Garrett
    2018 IEEE SOUTHWEST SYMPOSIUM ON IMAGE ANALYSIS AND INTERPRETATION (SSIAI), 2018, : 17 - 20
  • [6] Reducing Dimensionality of Data Using Autoencoders
    Janakiramaiah, B.
    Kalyani, G.
    Narayana, S.
    Krishna, T. Bala Murali
    SMART INTELLIGENT COMPUTING AND APPLICATIONS, VOL 2, 2020, 160 : 51 - 58
  • [7] Leveraging Autoencoders for Better Representation Learning
    Achary, Maria
    Abraham, Siby
    JOURNAL OF COMPUTER INFORMATION SYSTEMS, 2024
  • [8] Data-driven detector signal characterization with constrained bottleneck autoencoders
    Jesus-Valls, C.
    Lux, T.
    Sanchez, F.
    JOURNAL OF INSTRUMENTATION, 2022, 17 (06)
  • [9] Denoised Bottleneck Features From Deep Autoencoders for Telephone Conversation Analysis
    Janod, Killian
    Morchid, Mohamed
    Dufour, Richard
    Linares, Georges
    De Mori, Renato
    IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2017, 25 (09) : 1505 - 1516
  • [10] Reducing the Bottleneck in Discovery of Novel Antibiotics
    Marcus B. Jones
    William C. Nierman
    Yue Shan
    Bryan C. Frank
    Amy Spoering
    Losee Ling
    Aaron Peoples
    Ashley Zullo
    Kim Lewis
    Karen E. Nelson
    Microbial Ecology, 2017, 73 : 658 - 667