No-reference synthetic image quality assessment with convolutional neural network and local image saliency

Cited by: 22
Authors
Wang, Xiaochuan [1 ]
Liang, Xiaohui [1 ]
Yang, Bailin [2 ]
Li, Frederick W. B. [3 ]
Affiliations
[1] Beihang Univ, State Key Lab Virtual Real Technol & Syst, Beijing 100191, Peoples R China
[2] Zhejiang Gongshang Univ, Sch Comp Sci & Informat Engn, Hangzhou 310018, Peoples R China
[3] Univ Durham, Dept Comp Sci, Durham, England
Funding
National Key R&D Program of China; National Natural Science Foundation of China;
Keywords
image quality assessment; synthetic image; depth-image-based rendering (DIBR); convolutional neural network; local image saliency; DATABASE;
DOI
10.1007/s41095-019-0131-6
CLC Classification
TP31 [Computer Software];
Subject Classification Codes
081202; 0835;
Abstract
Depth-image-based rendering (DIBR) is widely used in 3DTV, free-viewpoint video, and interactive 3D graphics applications. Typically, synthetic images generated by DIBR-based systems incorporate various distortions, particularly geometric distortions induced by object dis-occlusion. Ensuring the quality of synthetic images is critical to maintaining adequate system service. However, traditional 2D image quality metrics are ineffective for evaluating synthetic images as they are not sensitive to geometric distortion. In this paper, we propose a novel no-reference image quality assessment method for synthetic images based on convolutional neural networks, introducing local image saliency as prediction weights. Due to the lack of existing training data, we construct a new DIBR synthetic image dataset as part of our contribution. Experiments were conducted on both the public benchmark IRCCyN/IVC DIBR image dataset and our own dataset. Results demonstrate that our proposed metric outperforms traditional 2D image quality metrics and state-of-the-art DIBR-related metrics.
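For illustration, the sketch below shows the kind of pipeline the abstract describes: a small CNN regresses a quality score per image patch, and the per-patch scores are pooled into an image-level score using local saliency values as weights. The layer sizes, patch size, and saliency source are assumptions made for demonstration, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class PatchQualityCNN(nn.Module):
    """Small CNN that regresses a scalar quality score per image patch
    (illustrative layer sizes, not the authors' architecture)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # global average pooling -> (N, 64, 1, 1)
        )
        self.regressor = nn.Linear(64, 1)

    def forward(self, patches):
        # patches: (N, 3, H, W) -> per-patch quality scores of shape (N,)
        feats = self.features(patches).flatten(1)
        return self.regressor(feats).squeeze(1)


def saliency_weighted_pooling(patch_scores, patch_saliency, eps=1e-8):
    """Pool per-patch scores into one image-level score, weighting each patch
    by its local saliency value (this weighting scheme is an assumption)."""
    weights = patch_saliency / (patch_saliency.sum() + eps)
    return (weights * patch_scores).sum()


if __name__ == "__main__":
    model = PatchQualityCNN().eval()
    patches = torch.rand(16, 3, 32, 32)   # 16 patches cropped from one synthetic image
    saliency = torch.rand(16)             # hypothetical mean saliency of each patch
    with torch.no_grad():
        scores = model(patches)
    image_score = saliency_weighted_pooling(scores, saliency)
    print(f"predicted quality score: {image_score.item():.4f}")
```

In this sketch, saliency-weighted pooling makes distortions in visually salient regions (e.g., disocclusion artifacts around object boundaries) contribute more to the final score than distortions in low-attention regions.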
Pages: 193-208
Number of pages: 16
Related Papers
50 records in total
  • [31] Feature-segmentation strategy based convolutional neural network for no-reference image quality assessment
    Shen, Lili
    Hang, Ning
    Hou, Chunping
    MULTIMEDIA TOOLS AND APPLICATIONS, 2020, 79 (17-18) : 11891 - 11904
  • [32] CONVOLUTIONAL NEURAL NETWORK AND SALIENCY SELECTION FOR BLIND IMAGE QUALITY ASSESSMENT
    Chetouani, Aladine
    2018 25TH IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2018, : 2835 - 2839
  • [33] No-Reference Image Quality Assessment Using Image Saliency for JPEG Compressed Images
    Song, Zengjie
    Zhang, Jiangshe
    Liu, Junmin
    JOURNAL OF IMAGING SCIENCE AND TECHNOLOGY, 2016, 60 (06)
  • [34] No-Reference Light Field Image Quality Assessment Exploiting Saliency
    Lamichhane, Kamal
    Neri, Michael
    Battisti, Federica
    Paudyal, Pradip
    Carli, Marco
    IEEE TRANSACTIONS ON BROADCASTING, 2023, 69 (03) : 790 - 800
  • [35] No-Reference Image Quality Assessment for Multiple Distortions Using Saliency Map Based on Dual-Convolutional Neural Networks
    Li, Jian-Jun
    Xu, Lan-Lan
    Wang, Zhi-Hui
    Chang, Chin-Chen
    JOURNAL OF INTERNET TECHNOLOGY, 2017, 18 (07): 1701 - 1710
  • [36] AN ACCURATE DEEP CONVOLUTIONAL NEURAL NETWORKS MODEL FOR NO-REFERENCE IMAGE QUALITY ASSESSMENT
    Bare, Bahetiyaer
    Li, Ke
    Yan, Bo
    2017 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO (ICME), 2017, : 1356 - 1361
  • [37] SGDNet: An End-to-End Saliency-Guided Deep Neural Network for No-Reference Image Quality Assessment
    Yang, Sheng
    Jiang, Qiuping
    Lin, Weisi
    Wang, Yongtao
    PROCEEDINGS OF THE 27TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA (MM'19), 2019, : 1383 - 1391
  • [38] No-Reference Quality Assessment Based on Dual-Channel Convolutional Neural Network for Underwater Image Enhancement
    Hu, Renzhi
    Luo, Ting
    Jiang, Guowei
    Lin, Zhiqiang
    He, Zhouyan
    ELECTRONICS, 2024, 13 (22)
  • [39] No-reference Stereoscopic Image Quality Assessment Based on Visual Saliency Region
    Wang, Xin
    Sheng, Yuxia
    2019 CHINESE AUTOMATION CONGRESS (CAC2019), 2019, : 2070 - 2074
  • [40] A No-Reference Image Quality Assessment
    Kemalkar, Aniket K.
    Bairagi, Vinayak K.
    2013 IEEE INTERNATIONAL CONFERENCE ON EMERGING TRENDS IN COMPUTING, COMMUNICATION AND NANOTECHNOLOGY (ICE-CCN'13), 2013, : 462 - 465