Deformable Attention Network for Efficient Space-Time Video Super-Resolution

Cited by: 0
Authors
Wang, Hua [1, 2]
Chamchong, Rapeeporn [1]
Chomphuwiset, Phatthanaphong [3]
Pawara, Pornntiwa [1]
Affiliations
[1] Mahasarakham Univ, Fac Informat, Dept Comp Sci, Maha Sarakham, Thailand
[2] Putian Univ, New Engn Ind Coll, Putian, Peoples R China
[3] MQ Sq, Bangkok, Thailand
Keywords
image enhancement; image processing; image resolution;
DOI
10.1049/ipr2.70026
CLC Classification Code
TP18 [Theory of Artificial Intelligence]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Space-time video super-resolution (STVSR) aims to construct high space-time resolution video sequences from low frame rate, low-resolution video sequences. While recent STVSR works combine temporal interpolation and spatial super-resolution in a unified framework, they face challenges in computational complexity across both the temporal and spatial dimensions, particularly in achieving accurate intermediate frame interpolation and efficient temporal information utilisation. To address these challenges, we propose a deformable attention network for efficient STVSR. Specifically, we introduce a deformable interpolation block that employs hierarchical feature fusion to handle complex inter-frame motions at multiple scales, enabling more accurate intermediate frame generation. To fully utilise temporal information, we design a temporal feature shuffle block (TFSB) that efficiently exchanges complementary information across multiple frames. Additionally, we develop a motion feature enhancement block incorporating a channel attention mechanism to selectively emphasise motion-related features, further boosting the effectiveness of the TFSB. Experimental results on benchmark datasets demonstrate that the proposed method achieves competitive performance on STVSR tasks.
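The record does not include the authors' implementation. As a minimal illustration of the kind of channel attention mechanism the abstract describes for the motion feature enhancement block, the PyTorch sketch below re-weights feature channels via global pooling and a small bottleneck MLP. The class name MotionFeatureEnhancement, the reduction ratio, and the tensor shapes are assumptions made for illustration, not the paper's actual design.

# Minimal sketch (not the authors' released code): an SE-style channel
# attention block of the kind the abstract attributes to the motion
# feature enhancement block.
import torch
import torch.nn as nn


class MotionFeatureEnhancement(nn.Module):
    """Re-weights feature channels so motion-related channels are emphasised."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # global spatial context per channel
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction, bias=False),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels, bias=False),
            nn.Sigmoid(),                            # per-channel weights in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, _, _ = x.shape
        w = self.pool(x).view(n, c)                  # (N, C) channel descriptors
        w = self.fc(w).view(n, c, 1, 1)              # (N, C, 1, 1) attention weights
        return x * w                                 # emphasise informative channels


if __name__ == "__main__":
    feats = torch.randn(2, 64, 32, 32)               # toy multi-frame features
    out = MotionFeatureEnhancement(64)(feats)
    print(out.shape)                                 # torch.Size([2, 64, 32, 32])

Per the abstract, such a block would presumably be applied to the fused multi-frame features produced by the TFSB so that motion-related channels are emphasised before reconstruction.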
Pages: 13
Related Papers
50 records in total
  • [1] STDAN: Deformable Attention Network for Space-Time Video Super-Resolution
    Wang, Hai
    Xiang, Xiaoyu
    Tian, Yapeng
    Yang, Wenming
    Liao, Qingmin
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, 35 (08) : 10606 - 10616
  • [2] STDAN: Deformable Attention Network for Space-Time Video Super-Resolution
    Wang, Hai
    Xiang, Xiaoyu
    Tian, Yapeng
    Yang, Wenming
    Liao, Qingmin
    arXiv, 2022,
  • [3] MEGAN: Memory Enhanced Graph Attention Network for Space-Time Video Super-Resolution
    You, Chenyu
    Han, Lianyi
    Feng, Aosong
    Zhao, Ruihan
    Tang, Hui
    Fan, Wei
    2022 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV 2022), 2022, : 3946 - 3956
  • [4] Space-Time Distillation for Video Super-Resolution
    Xiao, Zeyu
    Fu, Xueyang
    Huang, Jie
    Cheng, Zhen
    Xiong, Zhiwei
    2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021, 2021, : 2113 - 2122
  • [5] Temporal Modulation Network for Controllable Space-Time Video Super-Resolution
    Xu, Gang
    Xu, Jun
    Li, Zhen
    Wang, Liang
    Sun, Xing
    Cheng, Ming-Ming
    2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021, 2021, : 6384 - 6393
  • [6] Learning for Unconstrained Space-Time Video Super-Resolution
    Shi, Zhihao
    Liu, Xiaohong
    Li, Chengqi
    Dai, Linhui
    Chen, Jun
    Davidson, Timothy N.
    Zhao, Jiying
    IEEE TRANSACTIONS ON BROADCASTING, 2022, 68 (02) : 345 - 358
  • [7] Space-Time Super-Resolution from a Single Video
    Shahar, Oded
    Faktor, Alon
    Irani, Michal
    2011 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2011,
  • [8] Space-time super-resolution
    Shechtman, E
    Caspi, Y
    Irani, M
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2005, 27 (04) : 531 - 545
  • [9] Space-time super-resolution with motion-perceptive deformable alignment
    Cai, Zhuojun
    Tian, Xiang
    Chen, Ze
    Chen, Yaowu
    JOURNAL OF ELECTRONIC IMAGING, 2021, 30 (03)
  • [10] Space-Time Video Super-Resolution Using Temporal Profiles
    Xiao, Zeyu
    Xiong, Zhiwei
    Fu, Xueyang
    Liu, Dong
    Zha, Zheng-Jun
    MM '20: PROCEEDINGS OF THE 28TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, 2020, : 664 - 672