Saliency-Guided No-Reference Omnidirectional Image Quality Assessment via Scene Content Perceiving

Cited by: 0
|
Authors
Zhang, Youzhi [1 ]
Wan, Lifei [2 ]
Liu, Deyang [1 ,3 ,4 ]
Zhou, Xiaofei [5 ]
An, Ping [6 ]
Shan, Caifeng [3 ,7 ]
Affiliations
[1] Anqing Normal Univ, Sch Comp & Informat, Anqing 246000, Peoples R China
[2] Ningbo Univ, Fac Informat Sci & Engn, Ningbo 315211, Peoples R China
[3] Shandong Univ Sci & Technol, Coll Elect Engn & Automat, Qingdao 266590, Peoples R China
[4] Anhui Normal Univ, Anhui Prov Key Lab Network & Informat Secur, Wuhu 241002, Anhui, Peoples R China
[5] Hangzhou Dianzi Univ, Sch Automat, Hangzhou 310061, Peoples R China
[6] Shanghai Univ, Sch Commun & Informat Engn, Shanghai 200444, Peoples R China
[7] Nanjing Univ, Sch Intelligence Sci & Technol, Nanjing 210023, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Distortion; Feature extraction; Visualization; Measurement; Semantics; Quality assessment; Indexes; Virtual reality; Three-dimensional displays; Human visual system (HVS); hypernetwork; image no-reference quality assessment; omnidirectional images (OIs); saliency map; Transformer; CNN;
DOI
10.1109/TIM.2024.3485447
Chinese Library Classification (CLC) Code
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology];
Discipline Classification Code
0808 ; 0809 ;
Abstract
Due to the widespread application of the virtual reality (VR) technique, the omnidirectional image (OI) has attracted remarkable attention from both academia and industry. In contrast to a natural 2-D image, an OI contains 360° × 180° panoramic content, which presents great challenges for no-reference quality assessment. In this article, we propose a saliency-guided no-reference OI quality assessment (OIQA) method based on scene content understanding. Inspired by the fact that humans use hierarchical representations to grade images, we extract multiscale features from each projected viewport. Then, we integrate the texture removal and background detection techniques to obtain the corresponding saliency map of each viewport, which is subsequently utilized to guide the multiscale feature fusion from the low-level feature to the high-level one. Furthermore, motivated by the way humans understand content, we leverage a self-attention-based Transformer to build nonlocal mutual dependencies to perceive the variations of distortion and scene in each viewport. Moreover, we also propose a content perception hypernetwork to adaptively generate weights and biases for the quality regressor, which is conducive to understanding the scene content and learning the perception rule for the quality assessment procedure. Comprehensive experiments validate that the proposed method achieves competitive performance on two publicly available databases. The code is publicly available at https://github.com/ldyorchid/SCP-OIQA.
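The content-perception hypernetwork described in the abstract can be illustrated with a minimal sketch: a hypernetwork maps content features to the weights and bias of a per-image linear quality regressor, which is then applied to the fused quality features. This is a toy illustration with random, untrained parameters and hypothetical feature dimensions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def hyper_regressor(content_feat, quality_feat):
    """Hypernetwork sketch: content features generate the parameters
    (weights + bias) of a linear quality regressor, which is then
    applied to the quality features of the same image."""
    d_c = content_feat.shape[-1]
    d_q = quality_feat.shape[-1]
    # Hypernetwork parameters (random here; learned in a real model).
    W_h = rng.standard_normal((d_c, d_q + 1)) * 0.1
    generated = content_feat @ W_h              # per-image regressor params
    w, b = generated[..., :d_q], generated[..., d_q]
    # Apply the generated regressor to predict one score per image.
    return np.sum(w * quality_feat, axis=-1) + b

content = rng.standard_normal((4, 16))  # e.g. Transformer scene features
quality = rng.standard_normal((4, 32))  # e.g. fused multiscale features
scores = hyper_regressor(content, quality)
print(scores.shape)  # (4,)
```

The key design point is that the regressor parameters are not shared across images: each image's scene content determines its own quality-to-score mapping, which is how the hypernetwork adapts the perception rule to the scene.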
Pages: 15
Related Papers
50 records
  • [31] HVS-Based Perception-Driven No-Reference Omnidirectional Image Quality Assessment
    Liu, Yun
    Yin, Xiaohua
    Wang, Yan
    Yin, Zixuan
    Zheng, Zhi
    IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2023, 72
  • [34] Saliency-based deep convolutional neural network for no-reference image quality assessment
    Jia, Sen
    Zhang, Yang
    MULTIMEDIA TOOLS AND APPLICATIONS, 2018, 77 (12): 14859 - 14872
  • [35] Nature Scene Statistics Approach Based On ICA for No-Reference Image Quality Assessment
    Zhang, Dong
    Ding, Yong
    Zheng, Ning
    2012 INTERNATIONAL WORKSHOP ON INFORMATION AND ELECTRONICS ENGINEERING, 2012, 29 : 3589 - 3593
  • [36] Cluster-Based Saliency-Guided Content-Aware Image Retargeting
    Kang, Li-Wei
    Tseng, Ching-Yu
    Jheng, Chao-Long
    Weng, Ming-Fang
    Hsu, Chao-Yung
    JOURNAL OF ELECTRONIC SCIENCE AND TECHNOLOGY, 2017, 15 (02): 141 - 146
  • [37] 360° video quality assessment based on saliency-guided viewport extraction
    Yang, Fanxi
    Yang, Chao
    An, Ping
    Huang, Xinpeng
    MULTIMEDIA SYSTEMS, 2024, 30 (02)
  • [39] A Visual Saliency-Based Neural Network Architecture for No-Reference Image Quality Assessment
    Ryu, Jihyoung
    APPLIED SCIENCES-BASEL, 2022, 12 (19):
  • [40] No-Reference Stereoscopic Image Quality Assessment
    Akhter, Roushain
    Sazzad, Z. M. Parvez
    Horita, Y.
    Baltes, J.
    STEREOSCOPIC DISPLAYS AND APPLICATIONS XXI, 2010, 7524