No-reference video quality objective assessment method based on the content visual perception and transmission distortion

Cited by: 0
Authors
Yao J. [1 ,2 ]
Tang H. [1 ]
Shen J. [1 ]
Affiliations
[1] School of Computer Engineering, Nanjing Institute of Technology, Nanjing
[2] School of Information and Communication Engineering, Xi’an Jiaotong University, Xi’an
Keywords
human visual system characteristics; luminance and chrominance transmission delay; video contents; Video Quality Assessment (VQA)
DOI
10.37188/OPE.20223022.2923
Abstract
This paper proposes a Video Quality Assessment (VQA) method based on video-content perception, developed by analyzing the influence of video content, transmission delay, and encoding and decoding distortion on VQA, combined with human visual system characteristics and their mathematical models. In this method, the video content is described by the texture complexity, local contrast, and temporal information of the video frames, together with their visual perception; on this basis a video-content perception model is built, which allows the influence of video content and its visual perception on VQA to be investigated. The relationship between bit rate and video quality is then analyzed and modeled to study the impact of the video bit rate on video quality. Subsequently, a VQA model for the quality degradation caused by transmission delay distortion is designed by incorporating the characteristics of video transmission delay. Finally, a convex optimization method is used to fuse the above three models, yielding a no-reference VQA model that accounts for video content, encoding and decoding distortion, transmission delay distortion, and human visual system characteristics. The proposed VQA model was tested and verified on videos from several established and open-source video databases, and its performance was compared with that of 17 existing VQA models. The results show that the Pearson linear and Spearman rank-order correlation coefficients of the proposed model reach minimums of 0.8773 and 0.8336 and maximums of 0.9383 and 0.9438, respectively, indicating good generalization performance and low complexity. Considering overall efficiency in terms of accuracy, generalization performance, and complexity, the proposed model is an excellent VQA model. © 2022 Chinese Academy of Sciences. All rights reserved.
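As a rough illustration of the content descriptors named in the abstract (texture complexity, local contrast, and temporal information) and of the Pearson/Spearman correlation metrics used in the evaluation, the following Python sketch shows one plausible way to compute them. It is not the authors' implementation: the gradient-based texture measure, the 8×8 local-contrast window, and the P.910-style temporal-information definition are assumptions made for illustration only.

```python
# Minimal sketch of content descriptors and VQA evaluation metrics.
# Not the authors' method; feature definitions here are illustrative assumptions.
import numpy as np
from scipy import ndimage
from scipy.stats import pearsonr, spearmanr


def texture_complexity(frame: np.ndarray) -> float:
    """Mean Sobel gradient magnitude of a grayscale frame (proxy for texture complexity)."""
    f = frame.astype(np.float64)
    gx = ndimage.sobel(f, axis=1)
    gy = ndimage.sobel(f, axis=0)
    return float(np.mean(np.hypot(gx, gy)))


def local_contrast(frame: np.ndarray, win: int = 8) -> float:
    """Average local standard deviation over win x win neighborhoods (assumed window size)."""
    f = frame.astype(np.float64)
    mean = ndimage.uniform_filter(f, size=win)
    mean_sq = ndimage.uniform_filter(f * f, size=win)
    return float(np.mean(np.sqrt(np.clip(mean_sq - mean * mean, 0.0, None))))


def temporal_information(frames: np.ndarray) -> float:
    """ITU-T P.910-style TI: maximum over time of the std of successive frame differences."""
    diffs = np.diff(frames.astype(np.float64), axis=0)
    return float(np.max(diffs.std(axis=(1, 2))))


def evaluate_vqa_model(predicted: np.ndarray, subjective: np.ndarray) -> dict:
    """PLCC and SROCC between predicted quality scores and subjective (MOS) scores."""
    plcc, _ = pearsonr(predicted, subjective)
    srocc, _ = spearmanr(predicted, subjective)
    return {"PLCC": plcc, "SROCC": srocc}


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    video = rng.integers(0, 256, size=(30, 144, 176), dtype=np.uint8)  # 30 synthetic gray frames
    print("texture complexity:", texture_complexity(video[0]))
    print("local contrast:", local_contrast(video[0]))
    print("temporal information:", temporal_information(video))
    print(evaluate_vqa_model(rng.random(20), rng.random(20)))
```

In practice these per-frame descriptors would be pooled over the sequence and combined with the bit-rate and delay-distortion models described in the abstract; the fusion step (convex optimization over the three sub-models) is not reproduced here.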
Pages: 2923-2938
Number of pages: 15