Deep multi-label learning for image distortion identification

Cited by: 13
Authors
Liang, Dong [1 ]
Gao, Xinbo [1 ]
Lu, Wen [1 ]
He, Lihuo [1 ]
Affiliations
[1] Xidian Univ, Sch Elect Engn, Video & Image Proc Syst Lab, Xian 710071, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Image distortion identification; Multi-label learning; Convolutional neural network; Multi-task learning; Deep learning; QUALITY ASSESSMENT; CLASSIFICATION;
DOI
10.1016/j.sigpro.2020.107536
Chinese Library Classification (CLC)
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology];
Discipline Classification Code
0808 ; 0809 ;
Abstract
Image distortion identification is important for image processing system enhancement, image distortion correction, and image quality assessment. Although images may suffer from varying numbers of distortions as they pass through different systems, most previous research on image distortion identification has focused on identifying a single distortion in an image. In this paper, we propose a CNN-based multi-label learning model (called MLLNet) to identify distortions in different scenarios, including images with no distortion, a single distortion, and multiple distortions. Concretely, we transform the multi-label classification for image distortion identification into a number of multi-class classifications and use a deep multi-task CNN model to train all associated classifiers simultaneously. For an unseen image, the trained CNN model predicts all of the classifications at the same time and fuses them into a final multi-label classification. Extensive experiments demonstrate that the proposed algorithm achieves good performance on several databases. Moreover, the network architecture of the CNN model can be flexibly adjusted according to different requirements. (C) 2020 Elsevier B.V. All rights reserved.
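The abstract's central idea — reducing multi-label distortion identification to several jointly trained multi-class tasks, then fusing the per-task predictions — can be sketched as follows. This is a minimal illustrative toy, not the paper's MLLNet: the sizes (`K`, `L`), the random linear "trunk" standing in for the CNN, and the fusion rule (a distortion is present iff its head's argmax is not the "absent" class) are all assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (not from the paper): K distortion types, each with its
# own multi-class head over L + 1 classes (class 0 = "absent", 1..L = levels).
K, L, D, IMG = 4, 3, 16, 32

# Stand-in for the shared CNN trunk: one fixed random linear layer + ReLU.
W_trunk = rng.standard_normal((D, IMG))
# One softmax head per distortion type; in the real model these are trained
# jointly as a multi-task network on top of the shared features.
W_heads = [rng.standard_normal((L + 1, D)) for _ in range(K)]

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def identify_distortions(image_vec):
    """Run all K multi-class heads and fuse them into one multi-label result."""
    feat = np.maximum(W_trunk @ image_vec, 0.0)        # shared features
    preds = [softmax(Wh @ feat) for Wh in W_heads]     # one prediction per task
    labels = [int(np.argmax(p)) for p in preds]        # 0 = absent, 1..L = level
    # Fusion: keep only the distortion types whose head predicts "present".
    return {k: lvl for k, lvl in enumerate(labels) if lvl > 0}

result = identify_distortions(rng.standard_normal(IMG))
print(result)  # maps predicted distortion-type indices to severity levels
```

One appeal of this decomposition, which the abstract also notes, is flexibility: heads can be added or removed per distortion type without redesigning the shared trunk.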
Pages: 14