Compact learning for multi-label classification

Cited by: 14
Authors
Lv, Jiaqi [1 ,2 ]
Wu, Tianran [1 ,2 ]
Peng, Chenglun [1 ,2 ]
Liu, Yunpeng [1 ,2 ]
Xu, Ning [1 ,2 ]
Geng, Xin [1 ,2 ]
Affiliations
[1] Southeast Univ, Sch Comp Sci & Engn, Nanjing, Peoples R China
[2] Southeast Univ, Minist Educ, Key Lab Comp Network & Informat Integrat, Nanjing, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Machine learning; Multi-label classification; Label compression; Compact learning; REDUCTION;
DOI
10.1016/j.patcog.2021.107833
CLC number
TP18 [Theory of artificial intelligence];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Multi-label classification (MLC) studies the problem where each instance is associated with multiple relevant labels, which leads to exponential growth of the output space. This poses a great challenge for exploring latent label relationships and the intrinsic correlation between the feature and label spaces. MLC has given rise to a framework named label compression (LC), which seeks a compact space for efficient learning. Nevertheless, most existing LC methods either fail to consider the influence of the feature space or are misguided by problematic original features, which may instead degrade performance. In this paper, we present a compact learning (CL) framework that embeds the features and labels simultaneously and with mutual guidance. The proposal is a versatile concept that does not rigidly adhere to any specific embedding method and is independent of the subsequent learning process. Following its spirit, a simple yet effective implementation called compact multi-label learning (CMLL) is proposed to learn a compact low-dimensional representation for both spaces. CMLL maximizes the dependence between the embedded label and feature spaces while simultaneously minimizing the label-space recovery loss. Theoretically, we provide a general analysis for different embedding methods. Practically, we conduct extensive experiments to validate the effectiveness of the proposed method. (c) 2021 Elsevier Ltd. All rights reserved.
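For context on the LC pipeline the abstract builds on, below is a minimal sketch in Python (NumPy, scikit-learn) of a generic three-step label-compression scheme: embed the labels into a low-dimensional space, learn a map from features to the compact codes, then decode back to the label space. It is illustrative only and is not the paper's CMLL algorithm; the SVD embedding and ridge regressor are placeholder choices, and all data are synthetic.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Synthetic toy data (all placeholder values): n instances, d features, L labels.
rng = np.random.default_rng(0)
n, d, L, k = 200, 20, 15, 5              # k: dimension of the compact label space
X = rng.normal(size=(n, d))
Y = (rng.random(size=(n, L)) < 0.2).astype(float)

# 1) Encode: project the label matrix onto its top-k principal directions.
#    This is the feature-agnostic SVD embedding of classic LC methods;
#    CMLL would instead learn this embedding jointly with the feature space.
Y_mean = Y.mean(axis=0)
_, _, Vt = np.linalg.svd(Y - Y_mean, full_matrices=False)
V = Vt[:k].T                              # (L, k) projection matrix
Z = (Y - Y_mean) @ V                      # (n, k) compact label codes

# 2) Learn: fit any multi-output regressor from features to the compact codes.
reg = Ridge(alpha=1.0).fit(X, Z)

# 3) Decode: map predictions back to the original label space and threshold.
Y_hat = (reg.predict(X) @ V.T + Y_mean > 0.5).astype(int)
print("training-set Hamming loss:", float(np.mean(Y_hat != Y)))
```

Per the abstract, compact learning departs from this feature-agnostic recipe at step 1: the label and feature embeddings are learned together with mutual guidance, maximizing their dependence while keeping the label space recoverable.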
Pages: 11
Related papers
50 records in total
  • [1] Compact Multi-Label Learning
    Shen, Xiaobo
    Liu, Weiwei
    Tsang, Ivor W.
    Sun, Quan-Sen
    Ong, Yew-Soon
    THIRTY-SECOND AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTIETH INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE / EIGHTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2018, : 4066 - 4073
  • [2] Metric Learning for Multi-label Classification
    Brighi, Marco
    Franco, Annalisa
    Maio, Dario
    STRUCTURAL, SYNTACTIC, AND STATISTICAL PATTERN RECOGNITION, S+SSPR 2020, 2021, 12644 : 24 - 33
  • [3] Hyperspherical Learning in Multi-Label Classification
    Ke, Bo
    Zhu, Yunquan
    Li, Mengtian
    Shu, Xiujun
    Qiao, Ruizhi
    Ren, Bo
    COMPUTER VISION, ECCV 2022, PT XXV, 2022, 13685 : 38 - 55
  • [4] On active learning in multi-label classification
    Brinker, K
    FROM DATA AND INFORMATION ANALYSIS TO KNOWLEDGE ENGINEERING, 2006, : 206 - 213
  • [5] Learning multi-label scene classification
    Boutell, MR
    Luo, JB
    Shen, XP
    Brown, CM
    PATTERN RECOGNITION, 2004, 37 (09) : 1757 - 1771
  • [6] Joint learning of multi-label classification and label correlations
    He, Zhi-Fen
    Yang, Ming
    Liu, Hui-Dong
    Ruan Jian Xue Bao/Journal of Software, 2014, 25 (09): : 1967 - 1981
  • [7] Scalable Label Distribution Learning for Multi-Label Classification
    Zhao, Xingyu
    An, Yuexuan
    Qi, Lei
    Geng, Xin
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024,
  • [8] Learning Label Specific Features for Multi-Label Classification
    Huang, Jun
    Li, Guorong
    Huang, Qingming
    Wu, Xindong
    2015 IEEE INTERNATIONAL CONFERENCE ON DATA MINING (ICDM), 2015, : 181 - 190
  • [9] Active learning for hierarchical multi-label classification
    Nakano, Felipe Kenji
    Cerri, Ricardo
    Vens, Celine
    DATA MINING AND KNOWLEDGE DISCOVERY, 2020, 34 (05) : 1496 - 1530
  • [10] Compositional metric learning for multi-label classification
Sun, Yan-Ping
Zhang, Min-Ling
    Frontiers of Computer Science, 2021, (05) : 70 - 81