Video colorization method based on fusion of multi-source colorization results using dual reference frames

Cited by: 0
Authors:
Meng H. [1]
Tang J. [1]
Dai L. [1]
Affiliations:
[1] School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing
Keywords:
dual reference frames; fusion; occlusion; video colorization
DOI:
10.3969/j.issn.1001-0505.2024.01.023
Abstract:
To better leverage reference frame information, a colorization method for black-and-white videos based on the fusion of multi-source colorization results from dual reference frames is proposed. First, a hard attention fusion submodule fuses the color information from the two reference frames, preventing the color blurring that unreasonable reference information would otherwise cause during coloring in the dual-frame semantic matching module. Then, a multi-source colorization result fusion module combines the colorization results of the dual-frame optical flow propagation module and the dual-frame semantic matching module with occlusion information, producing a better final colorization. Experimental results show that the method achieves a peak signal-to-noise ratio (PSNR) of 37.36 dB, a structural similarity (SSIM) of 0.9805, and a color distribution consistency (CDC) index of 0.003748 on the Davis30 test set. These results demonstrate that the method can fully exploit the information in the dual reference frames, colorizing grayscale frames through multiple fusion mechanisms and generating visually pleasing and temporally consistent colorization results. © 2024 Southeast University. All rights reserved.
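The abstract only names the two fusion steps; as a rough illustration of the ideas described (hard attention over two reference-matched color candidates, then occlusion-guided blending of flow-propagated and semantically matched colors), here is a minimal PyTorch-style sketch. All function names, tensor shapes, and the soft occlusion-map convention are assumptions made for illustration, not the authors' implementation.

```python
# Illustrative sketch only (not the paper's code). Assumes per-pixel color
# candidates and matching-confidence maps have already been computed upstream.
import torch

def hard_attention_fuse(colors_ref1, colors_ref2, sim_ref1, sim_ref2):
    """Hard-attention fusion of two reference-matched color candidates.

    colors_ref*: (B, 2, H, W) ab-channel colors matched from each reference
                 by the semantic matching step (assumed precomputed).
    sim_ref*:    (B, 1, H, W) matching-confidence maps for each reference.

    Rather than softly blending both references (which can blur colors when
    one reference matches poorly), each pixel copies the color of the single
    reference with the higher matching confidence.
    """
    pick_ref1 = (sim_ref1 >= sim_ref2).float()  # hard 0/1 selection mask
    return pick_ref1 * colors_ref1 + (1.0 - pick_ref1) * colors_ref2

def multi_source_fuse(colors_flow, colors_match, occlusion):
    """Occlusion-guided fusion of the two colorization sources.

    colors_flow:  (B, 2, H, W) colors propagated by optical flow
                  (temporally consistent, but unreliable at occlusions).
    colors_match: (B, 2, H, W) colors from dual-frame semantic matching
                  (robust to occlusion, less temporally stable).
    occlusion:    (B, 1, H, W) soft map in [0, 1]; 1 means the
                  flow-propagated color is unreliable at that pixel.
    """
    return (1.0 - occlusion) * colors_flow + occlusion * colors_match

if __name__ == "__main__":
    B, H, W = 1, 64, 64
    c1, c2 = torch.randn(B, 2, H, W), torch.randn(B, 2, H, W)
    s1, s2 = torch.rand(B, 1, H, W), torch.rand(B, 1, H, W)
    colors_match = hard_attention_fuse(c1, c2, s1, s2)
    colors_flow, occ = torch.randn(B, 2, H, W), torch.rand(B, 1, H, W)
    out = multi_source_fuse(colors_flow, colors_match, occ)
    print(out.shape)  # torch.Size([1, 2, 64, 64])
```

In the paper, both the matching confidences and the occlusion map would be predicted by learned modules; the fixed threshold and random inputs above stand in only to make the fusion arithmetic concrete.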
Pages: 183-191
Number of pages: 8