Enhanced LDR Detail Rendering for HDR Fusion by TransU-Fusion Network

Cited by: 2
|
Authors
Song, Bo [1 ]
Gao, Rui [1 ]
Wang, Yong [2 ]
Yu, Qi [1 ]
Affiliations
[1] Univ Elect Sci & Technol China, Sch Integrated Circuit Sci & Engn, State Key Lab Elect Thin Films & Integrated Devices, Chengdu 610054, Peoples R China
[2] Chengdu Image Design Technol Co Ltd, 171 Hele 2nd St, Chengdu 610213, Peoples R China
Source
SYMMETRY-BASEL | 2023, Vol. 15, Issue 7
Keywords
HDR fusion; DFTB; DRDB; U-Net; ghosting artifact; DYNAMIC-RANGE; IMAGES;
DOI
10.3390/sym15071463
CLC Number
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences];
Discipline Classification Code
07 ; 0710 ; 09 ;
Abstract
High Dynamic Range (HDR) images are widely used in automotive, aerospace, AI, and other fields but are limited by the maximum dynamic range that a CMOS image sensor can capture in a single acquisition. HDR images are therefore usually synthesized from multiple exposures through image processing techniques. One of the most challenging tasks in fusing multi-frame Low Dynamic Range (LDR) images into HDR is eliminating the ghosting artifacts caused by motion. Traditional algorithms generally use optical flow to align dynamic scenes before fusion, which works well for small-scale motion but produces obvious ghosting artifacts when the motion magnitude is large. Recently, attention mechanisms have been introduced in the alignment stage to strengthen the network's ability to remove ghosts; however, significant ghosting artifacts still occur in scenes with large-scale motion or over-saturated regions. We propose a novel Distilled Feature Transformer Block (DFTB) structure to distill and re-extract information from the deep image features obtained after U-Net downsampling, achieving ghost removal at the semantic level for HDR fusion. We introduce a Feature Distillation Transformer Block (FDTB), based on the Swin-Transformer and the RFDB structure, which uses multiple distillation connections to learn more discriminative feature representations. For ghost removal in multi-exposure fusion of dynamic scenes, previous deep learning methods already suppress ghosting very effectively, and ghost residue from moving objects is almost imperceptible in the synthesized HDR image. Our method therefore focuses on preserving the details of the LDR images more completely after ghost removal, so as to synthesize high-quality HDR images. With the proposed FDTB, the edge and texture details of the synthesized HDR image are preserved more faithfully, showing that FDTB is more effective at retaining detail during fusion. Furthermore, we propose a new deep framework based on DFTB, called TransU-Fusion, for fusing deep image features and removing ghosts. First, we use the U-Net encoder to extract image features from the different exposures and map them into feature spaces of different dimensions; by exploiting the symmetry of the U-Net structure, these features are ultimately output as HDR images at the original resolution. Then, we further fuse the high-dimensional features using a Dilated Residual Dense Block (DRDB) to expand the receptive field, which helps repair over-saturated regions. The transformer in DFTB performs low-pass filtering on the low-dimensional features and interacts with global information to remove ghosts. Finally, the processed features are merged and output through the decoder as an HDR image free of ghosting artifacts. Tests on public datasets and comparisons with baseline and state-of-the-art models demonstrate our model's excellent information fusion ability and stronger ghost removal capability.
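
The abstract describes an encoder-bottleneck-decoder pipeline: per-exposure U-Net encoding, DRDB-based fusion with dilated convolutions to enlarge the receptive field, transformer-based global interaction (DFTB), and a symmetric decoder. The following is a minimal, hedged PyTorch sketch of that data flow, not the authors' implementation: the module names (SimpleDRDB, SimpleDFTB, TransUFusionSketch), the channel widths, the 6-channel input (each LDR frame concatenated with its gamma-mapped HDR-domain version, a common convention in deep HDR fusion), and the use of a plain TransformerEncoderLayer in place of the Swin-based block with distillation connections are all assumptions made for illustration.

```python
# Hedged sketch of a TransU-Fusion-style pipeline. Assumes three LDR exposures,
# each given as a 6-channel tensor (LDR + HDR-domain concat). All module names
# and sizes are illustrative stand-ins, not the paper's implementation.
import torch
import torch.nn as nn

class SimpleDRDB(nn.Module):
    """Dilated Residual Dense Block: densely connected dilated convs + local fusion."""
    def __init__(self, channels, growth=32, dilation=2):
        super().__init__()
        self.convs = nn.ModuleList()
        c = channels
        for _ in range(3):
            self.convs.append(nn.Conv2d(c, growth, 3, padding=dilation, dilation=dilation))
            c += growth
        self.fuse = nn.Conv2d(c, channels, 1)  # local feature fusion back to input width

    def forward(self, x):
        feats = [x]
        for conv in self.convs:
            feats.append(torch.relu(conv(torch.cat(feats, dim=1))))
        return x + self.fuse(torch.cat(feats, dim=1))  # residual connection

class SimpleDFTB(nn.Module):
    """Transformer over flattened spatial tokens (a stand-in for the Swin-based
    DFTB with distillation connections described in the abstract)."""
    def __init__(self, channels, heads=4):
        super().__init__()
        self.block = nn.TransformerEncoderLayer(
            d_model=channels, nhead=heads, dim_feedforward=channels * 2,
            batch_first=True)

    def forward(self, x):
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)   # (B, H*W, C)
        tokens = self.block(tokens)             # global interaction across the frame
        return tokens.transpose(1, 2).reshape(b, c, h, w)

class TransUFusionSketch(nn.Module):
    """Encoder-bottleneck-decoder HDR fusion: shared encoder per exposure,
    DRDB + transformer at the bottleneck, skip connections into the decoder."""
    def __init__(self, in_ch=6, base=32):
        super().__init__()
        self.enc1 = nn.Conv2d(in_ch, base, 3, padding=1)
        self.enc2 = nn.Conv2d(base, base * 2, 3, stride=2, padding=1)   # downsample
        self.drdb = SimpleDRDB(base * 2 * 3)   # fuse the 3 exposures at the bottleneck
        self.dftb = SimpleDFTB(base * 2 * 3)
        self.up = nn.ConvTranspose2d(base * 2 * 3, base, 4, stride=2, padding=1)
        self.out = nn.Conv2d(base + base * 3, 3, 3, padding=1)  # decoder with skip concat

    def forward(self, ldr_low, ldr_mid, ldr_high):
        shallow, deep = [], []
        for x in (ldr_low, ldr_mid, ldr_high):
            s = torch.relu(self.enc1(x))
            shallow.append(s)
            deep.append(torch.relu(self.enc2(s)))
        bottleneck = torch.cat(deep, dim=1)
        bottleneck = self.dftb(self.drdb(bottleneck))  # widen receptive field, then global attention
        up = torch.relu(self.up(bottleneck))
        return torch.sigmoid(self.out(torch.cat([up] + shallow, dim=1)))

# Smoke test with three fake 6-channel exposure inputs.
if __name__ == "__main__":
    frames = [torch.randn(1, 6, 64, 64) for _ in range(3)]
    print(TransUFusionSketch()(*frames).shape)  # torch.Size([1, 3, 64, 64])
```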
Pages: 17