Saliency-Guided No-Reference Omnidirectional Image Quality Assessment via Scene Content Perceiving

Cited: 0
Authors
Zhang, Youzhi [1 ]
Wan, Lifei [2 ]
Liu, Deyang [1 ,3 ,4 ]
Zhou, Xiaofei [5 ]
An, Ping [6 ]
Shan, Caifeng [3 ,7 ]
Affiliations
[1] Anqing Normal Univ, Sch Comp & Informat, Anqing 246000, Peoples R China
[2] Ningbo Univ, Fac Informat Sci & Engn, Ningbo 315211, Peoples R China
[3] Shandong Univ Sci & Technol, Coll Elect Engn & Automat, Qingdao 266590, Peoples R China
[4] Anhui Normal Univ, Anhui Prov Key Lab Network & Informat Secur, Wuhu 241002, Anhui, Peoples R China
[5] Hangzhou Dianzi Univ, Sch Automat, Hangzhou 310061, Peoples R China
[6] Shanghai Univ, Sch Commun & Informat Engn, Shanghai 200444, Peoples R China
[7] Nanjing Univ, Sch Intelligence Sci & Technol, Nanjing 210023, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Distortion; Feature extraction; Visualization; Measurement; Semantics; Quality assessment; Indexes; Virtual reality; Three-dimensional displays; Human visual system (HVS); hypernetwork; image no-reference quality assessment; omnidirectional images (OIs); saliency map; Transformer; CNN;
DOI
10.1109/TIM.2024.3485447
CLC Classification Number
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Discipline Code
0808; 0809;
Abstract
Due to the widespread application of virtual reality (VR) technology, omnidirectional images (OIs) have attracted remarkable attention from both academia and industry. In contrast to a natural 2-D image, an OI contains 360° × 180° panoramic content, which poses great challenges for no-reference quality assessment. In this article, we propose a saliency-guided no-reference OI quality assessment (OIQA) method based on scene content understanding. Inspired by the fact that humans use hierarchical representations to grade images, we extract multiscale features from each projected viewport. We then integrate texture removal and background detection techniques to obtain the saliency map of each viewport, which is subsequently used to guide the fusion of multiscale features from the low level to the high level. Furthermore, motivated by the human way of understanding content, we leverage a self-attention-based Transformer to build nonlocal mutual dependencies that perceive distortion and scene variations in each viewport. Moreover, we propose a content perception hypernetwork that adaptively generates the weights and biases of the quality regressor, which helps the model understand scene content and learn the perception rule of the quality assessment procedure. Comprehensive experiments validate that the proposed method achieves competitive performance on two available databases. The code is publicly available at https://github.com/ldyorchid/SCP-OIQA.
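As a rough illustration of one ingredient of the pipeline described above, the sketch below shows saliency-weighted pooling of multiscale viewport features in NumPy. All function names, array shapes, and the nearest-neighbour resizing are illustrative assumptions, not the authors' implementation (which is available at the repository linked in the abstract).

```python
import numpy as np

def resize_nn(img, h, w):
    """Nearest-neighbour resize of a 2-D map to (h, w)."""
    ys = np.arange(h) * img.shape[0] // h
    xs = np.arange(w) * img.shape[1] // w
    return img[np.ix_(ys, xs)]

def saliency_guided_fusion(features, saliency):
    """Pool each feature scale with saliency weights and concatenate.

    features: list of (H_i, W_i, C_i) arrays, ordered low- to high-level.
    saliency: (H, W) map in [0, 1].
    Returns a 1-D descriptor of length sum(C_i).
    """
    pooled = []
    for feat in features:
        h, w, _ = feat.shape
        sal = resize_nn(saliency, h, w)[..., None]        # (h, w, 1)
        weighted = feat * sal                             # emphasise salient regions
        denom = sal.sum() + 1e-8                          # avoid division by zero
        pooled.append(weighted.sum(axis=(0, 1)) / denom)  # saliency-weighted average
    return np.concatenate(pooled)

# Toy viewport: two scales with 8 and 4 channels.
rng = np.random.default_rng(0)
feats = [rng.random((64, 64, 8)), rng.random((32, 32, 4))]
sal = rng.random((64, 64))
desc = saliency_guided_fusion(feats, sal)
print(desc.shape)  # (12,)
```

A learned fusion (as in the paper's low-to-high guidance) would replace the simple weighted average with trainable layers, but the saliency map plays the same role: re-weighting spatial locations before features from different scales are combined.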
Pages: 15
Related Papers
50 records
  • [1] No-reference stereoscopic image quality assessment based on saliency-guided binocular feature consolidation
    Xu, Xiaogang
    Zhao, Yang
    Ding, Yong
    ELECTRONICS LETTERS, 2017, 53 (22) : 1468 - 1469
  • [2] Saliency-Guided Transformer Network combined with Local Embedding for No-Reference Image Quality Assessment
    Zhu, Mengmeng
    Hou, Guanqun
    Chen, Xinjia
    Xie, Jiaxing
    Lu, Haixian
    Che, Jun
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS (ICCVW 2021), 2021, : 1953 - 1962
  • [3] 3D No-Reference Image Quality Assessment via Transfer Learning and Saliency-Guided Feature Consolidation
    Xu, Xiaogang
    Shi, Bufan
    Gu, Zijin
    Deng, Ruizhe
    Chen, Xiaodong
    Krylov, Andrey S.
    Ding, Yong
    IEEE ACCESS, 2019, 7 : 85286 - 85297
  • [4] SGDNet: An End-to-End Saliency-Guided Deep Neural Network for No-Reference Image Quality Assessment
    Yang, Sheng
    Jiang, Qiuping
    Lin, Weisi
    Wang, Yongtao
    PROCEEDINGS OF THE 27TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA (MM'19), 2019, : 1383 - 1391
  • [5] Saliency-guided convolution neural network-transformer fusion network for no-reference image quality assessment
    Wu, Lipeng
    Cui, Ziguan
    Gan, Zongliang
    Tang, Guijin
    Liu, Feng
    JOURNAL OF ELECTRONIC IMAGING, 2024, 33 (06)
  • [6] Saliency-Guided Local Full-Reference Image Quality Assessment
    Varga, Domonkos
    SIGNALS, 2022, 3 (03): : 483 - 496
  • [7] Saliency-Guided Deep Framework for Image Quality Assessment
    Hou, Weilong
    Gao, Xinbo
    IEEE MULTIMEDIA, 2015, 22 (02) : 46 - 55
  • [8] Saliency-Guided Quality Assessment of Screen Content Images
    Gu, Ke
    Wang, Shiqi
    Yang, Huan
    Lin, Weisi
    Zhai, Guangtao
    Yang, Xiaokang
    Zhang, Wenjun
    IEEE TRANSACTIONS ON MULTIMEDIA, 2016, 18 (06) : 1098 - 1110
  • [9] No-reference Omnidirectional Image Quality Assessment Based on Joint Network
    Zhang, Chaofan
    Liu, Shiguang
    PROCEEDINGS OF THE 30TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2022, 2022, : 943 - 951
  • [10] No-Reference Light Field Image Quality Assessment Exploiting Saliency
    Lamichhane, Kamal
    Neri, Michael
    Battisti, Federica
    Paudyal, Pradip
    Carli, Marco
    IEEE TRANSACTIONS ON BROADCASTING, 2023, 69 (03) : 790 - 800