MIXTURE MODEL AUTO-ENCODERS: DEEP CLUSTERING THROUGH DICTIONARY LEARNING

Cited by: 3
Authors
Lin, Alexander [1 ]
Song, Andrew H. [2 ]
Ba, Demba [1 ]
Affiliations
[1] Harvard Univ, Sch Engn & Appl Sci, Boston, MA 02138 USA
[2] MIT, 77 Massachusetts Ave, Cambridge, MA 02139 USA
Keywords
deep clustering; auto-encoder; dictionary learning; mixture model; sparsity
DOI
10.1109/ICASSP43922.2022.9747848
CLC Classification
O42 [Acoustics]
Subject Classification
070206; 082403
Abstract
State-of-the-art approaches for clustering high-dimensional data utilize deep auto-encoder architectures. Many of these networks require a large number of parameters and suffer from a lack of interpretability, due to the black-box nature of the auto-encoders. We introduce Mixture Model Auto-Encoders (MixMate), a novel architecture that clusters data by performing inference on a generative model. Built on ideas from sparse dictionary learning and mixture models, MixMate comprises several auto-encoders, each tasked with reconstructing data in a distinct cluster, while enforcing sparsity in the latent space. Through experiments on various image datasets, we show that MixMate achieves competitive performance versus state-of-the-art deep clustering algorithms, while using orders of magnitude fewer parameters.
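The abstract's core idea (several auto-encoders, each reconstructing one cluster with a sparse latent code, with cluster assignment driven by reconstruction quality) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the ISTA-based sparse encoder, the per-cluster linear dictionaries, and the assignment-by-reconstruction-error rule are assumptions chosen to mirror the described sparse-dictionary-learning view, and all names (`sparse_code`, `assign_cluster`) are hypothetical.

```python
import numpy as np

def soft_threshold(x, lam):
    # Proximal operator of the L1 penalty; zeroes small entries,
    # which is what enforces sparsity in the latent code.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def sparse_code(x, D, lam=0.1, n_iter=50):
    # ISTA: iterative soft-thresholding to approximately solve
    #   min_z ||x - D z||_2^2 + lam ||z||_1
    # This plays the role of one cluster's "encoder"; D z is its "decoder".
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    z = np.zeros(D.shape[1])
    for _ in range(n_iter):
        z = soft_threshold(z + (D.T @ (x - D @ z)) / L, lam / L)
    return z

def assign_cluster(x, dictionaries, lam=0.1):
    # Each cluster has its own dictionary (auto-encoder); assign x to
    # the cluster whose sparse reconstruction has the lowest error.
    errors = []
    for D in dictionaries:
        z = sparse_code(x, D, lam)
        errors.append(float(np.sum((x - D @ z) ** 2)))
    return int(np.argmin(errors))
```

Under this reading, the parameter count stays small because each cluster is summarized by a single dictionary matrix rather than a deep black-box network, which is consistent with the abstract's claim of using orders of magnitude fewer parameters.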
Pages: 3368-3372 (5 pages)