No Reference Quality Assessment of Stereo Video Based on Saliency and Sparsity

Cited by: 40
Authors
Yang, Jiachen [1 ]
Ji, Chunqi [1 ]
Jiang, Bin [1 ]
Lu, Wen [2 ]
Meng, Qinggang [3 ]
Affiliations
[1] Tianjin Univ, Sch Elect & Informat Engn, Tianjin 300072, Peoples R China
[2] Xidian Univ, Sch Elect Engn, Xian 710071, Shaanxi, Peoples R China
[3] Loughborough Univ, Dept Comp Sci, Loughborough LE11 3TU, Leics, England
Funding
National Natural Science Foundation of China
Keywords
Stereoscopic video quality assessment; saliency; sparse representation; stacked auto-encoder (SAE); sparsity; VISUAL-ATTENTION; INDUCED INDEX; IMAGES; REPRESENTATION; PERCEPTION;
DOI
10.1109/TBC.2018.2789583
CLC Classification
TM [Electrical Engineering]; TN [Electronics & Communication Technology]
Discipline Codes
0808; 0809
Abstract
With the popularity of video technology, stereoscopic video quality assessment (SVQA) has become increasingly important. Existing SVQA methods perform poorly because they do not fully exploit the information in the videos. In this paper, we jointly consider multiple sources of information in the videos and construct a simple model, based on saliency and sparsity, to combine and analyze the resulting features. First, we use the 3-D saliency map of the sum map, which retains the basic information of the stereoscopic video, as a tool for evaluating video quality. Second, we apply sparse representation to decompose the sum map of the 3-D saliency into coefficients and compute features from these sparse coefficients to obtain an effective description of the video content. Next, to reduce the correlation between the features, we feed them into a stacked auto-encoder, which maps the vectors to a higher-dimensional space under a sparsity constraint, and then pass the result to a support vector machine to obtain the final quality score. Throughout this process, saliency and sparsity are exploited to extract and simplify the features. Experiments show that the scores produced by the proposed method agree well with subjective scores.
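The pipeline described in the abstract (sparse-code a saliency map over a learned dictionary, pool statistics of the coefficients into features, regress to a quality score) can be sketched roughly as follows. This is a minimal illustration using scikit-learn, not the authors' implementation: the patch size, dictionary size, pooled statistics, and the random stand-in data are all assumptions, and the stacked auto-encoder stage is omitted in favor of feeding the pooled features directly to a support vector regressor.

```python
# Hypothetical sketch of the sparse-coefficient feature step: patches of a
# saliency map are sparse-coded over a learned dictionary, summary statistics
# of the coefficients form the feature vector, and a support vector regressor
# maps features to a quality score. Patch/dictionary sizes are illustrative.
import numpy as np
from sklearn.decomposition import DictionaryLearning
from sklearn.svm import SVR

rng = np.random.default_rng(0)

def patch_features(saliency_map, dico, patch=8):
    """Sparse-code non-overlapping patches and pool coefficient statistics."""
    h, w = saliency_map.shape
    patches = np.asarray([
        saliency_map[i:i + patch, j:j + patch].ravel()
        for i in range(0, h - patch + 1, patch)
        for j in range(0, w - patch + 1, patch)
    ])
    codes = dico.transform(patches)              # sparse coefficients
    # Pooled statistics: mean magnitude, fraction of nonzeros, mean energy.
    return np.array([
        np.abs(codes).mean(),
        (codes != 0).mean(),
        (codes ** 2).sum() / len(patches),
    ])

# Stand-in data: random "saliency maps" with made-up subjective scores.
maps = [rng.random((32, 32)) for _ in range(20)]
scores = rng.random(20) * 5                      # pretend MOS in [0, 5]

# Learn a small dictionary from 8x8 patches of a few maps.
train_patches = np.vstack([
    m[i:i + 8, j:j + 8].ravel()
    for m in maps[:5] for i in range(0, 25, 8) for j in range(0, 25, 8)
])
dico = DictionaryLearning(n_components=16, transform_algorithm="omp",
                          transform_n_nonzero_coefs=4, max_iter=10,
                          random_state=0).fit(train_patches)

X = np.vstack([patch_features(m, dico) for m in maps])
model = SVR(kernel="rbf").fit(X, scores)
pred = model.predict(X)
print(X.shape, pred.shape)
```

The OMP transform enforces the sparsity of the coefficients, so the pooled "fraction of nonzeros" statistic directly reflects the sparsity cue the paper builds on; the paper's actual features, dictionary, and regressor differ.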
Pages: 341-353 (13 pages)