Enhanced LDR Detail Rendering for HDR Fusion by TransU-Fusion Network

Cited: 2
Authors
Song, Bo [1 ]
Gao, Rui [1 ]
Wang, Yong [2 ]
Yu, Qi [1 ]
Affiliations
[1] Univ Elect Sci & Technol China, Sch Integrated Circuit Sci & Engn, State Key Lab Elect Thin Films & Integrated Device, Chengdu 610054, Peoples R China
[2] Chengdu Image Design Technol Co Ltd, 171 Hele 2nd St, Chengdu 610213, Peoples R China
Source
SYMMETRY-BASEL | 2023, Vol. 15, Iss. 07
Keywords
HDR fusion; DFTB; DRDB; U-Net; ghosting artifact; DYNAMIC-RANGE; IMAGES;
DOI
10.3390/sym15071463
Chinese Library Classification
O [Mathematical Sciences and Chemistry]; P [Astronomy, Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences];
Subject Classification Codes
07 ; 0710 ; 09 ;
Abstract
High Dynamic Range (HDR) images are widely used in automotive, aerospace, AI, and other fields but are limited by the maximum dynamic range of a single acquisition with a CMOS image sensor. HDR images are therefore usually synthesized from multiple exposures through image processing techniques. One of the most challenging tasks in fusing multiframe Low Dynamic Range (LDR) images into HDR is eliminating the ghosting artifacts caused by motion. Traditional algorithms generally use optical flow to align dynamic scenes before fusion, which achieves good results for small-scale motion but produces obvious ghosting artifacts when the motion magnitude is large. Recently, attention mechanisms have been introduced in the alignment stage to strengthen the network's ability to remove ghosts; however, significant ghosting artifacts still occur in scenes with large-scale motion or oversaturated areas. We propose a novel Distilled Feature Transformer Block (DFTB) structure that distills and re-extracts information from the deep image features obtained after U-Net downsampling, achieving ghost removal at the semantic level for HDR fusion. We also introduce a Feature Distillation Transformer Block (FDTB), based on the Swin Transformer and the RFDB structure, which uses multiple distillation connections to learn more discriminative feature representations. For the multiexposure moving-scene HDR deghosting task, previous deep learning methods already remove ghosts almost completely, and ghost residue from moving objects is rarely observable in the synthesized HDR image. Our method therefore focuses on preserving the details of the LDR images more completely after ghost removal, in order to synthesize a high-quality HDR image.
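The "multiple distillation connections" that FDTB borrows from RFDB can be sketched as follows. This is a minimal numpy stand-in, not the paper's implementation: the function name, the distillation ratio, and the ReLU placeholder (in place of the actual Swin-Transformer layers) are all illustrative assumptions.

```python
import numpy as np

def feature_distillation_block(x, steps=3, distill_ratio=0.25):
    """Sketch of RFDB-style distillation connections on a (C, H, W) feature map.

    At each step a 'distilled' slice of channels is set aside unchanged and
    the remainder is processed further (here a toy ReLU stands in for the
    Swin-Transformer layers used in FDTB). All distilled slices plus the
    final refined features are concatenated, so shallow and deep features
    both contribute to the output representation.
    """
    distilled = []
    for _ in range(steps):
        c = x.shape[0]
        keep = max(1, int(c * distill_ratio))
        distilled.append(x[:keep])         # distillation connection: kept as-is
        x = np.maximum(x[keep:], 0.0)      # further processing of the remainder
    distilled.append(x)                    # final refined features
    return np.concatenate(distilled, axis=0)

feat = np.random.randn(64, 8, 8)
out = feature_distillation_block(feat)
print(out.shape)  # channel count is preserved: (64, 8, 8)
```

Because each step only partitions the channel axis, the concatenated output has the same channel count as the input, which makes such a block easy to drop into an existing pipeline.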
With the proposed FDTB, the edge and texture details of the synthesized HDR image are preserved more faithfully, showing that FDTB better retains detail during image fusion. Furthermore, we propose a new deep framework based on DFTB for fusing and deghosting deep image features, called TransU-Fusion. First, the U-Net encoder extracts image features from the differently exposed inputs and maps them to feature spaces of different dimensions; by exploiting the symmetry of the U-Net structure, these feature maps can ultimately be output as an HDR image at the original size. The high-dimensional features are then further fused with a Dilated Residual Dense Block (DRDB), which expands the receptive field and is beneficial for repairing oversaturated regions. The transformer in DFTB performs low-pass filtering on the low-dimensional features and exchanges global information to remove ghosts. Finally, the processed features are merged and output through the decoder as an HDR image free of ghosting artifacts. Tests on public datasets and comparisons with benchmark and state-of-the-art models demonstrate our model's excellent information fusion ability and stronger ghost removal capability.
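Why dilated convolutions in the DRDB enlarge the receptive field (and thus help repair oversaturated regions from distant context) follows from simple arithmetic: each stacked layer adds (kernel − 1) × dilation pixels to the receptive field. The sketch below illustrates this; the specific dilation rates are assumptions for illustration, not the ones used in the paper.

```python
def dilated_receptive_field(kernel=3, dilations=(1, 2, 3)):
    """Receptive field of stacked dilated convolutions, as used in a
    Dilated Residual Dense Block (dilation rates here are illustrative)."""
    rf = 1
    for d in dilations:
        rf += (kernel - 1) * d  # each layer adds (k-1)*dilation pixels
    return rf

print(dilated_receptive_field())                     # -> 13 with dilations 1, 2, 3
print(dilated_receptive_field(dilations=(1, 1, 1)))  # -> 7 with plain convolutions
```

Three dilated 3x3 layers thus see a 13-pixel-wide context versus 7 for plain convolutions, at the same parameter cost, which is the motivation for using dilation at the bottleneck.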
Pages: 17
Related Papers
50 records
  • [1] Efficient Detail-enhanced Exposure Correction Based on Auto-fusion for LDR Image
    Chen, Jiayi
    Lan, Xuguang
    Yang, Meng
    2016 IEEE 18TH INTERNATIONAL WORKSHOP ON MULTIMEDIA SIGNAL PROCESSING (MMSP), 2016,
  • [2] Detail-Enhanced Exposure Fusion
    Li, Zheng Guo
    Zheng, Jing Hong
    Rahardja, Susanto
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2012, 21 (11) : 4672 - 4676
  • [3] Infrared and Visible Image Fusion Using Detail Enhanced Channel Attention Network
    Cui, Yinghan
    Du, Huiqian
    Mei, Wenbo
    IEEE ACCESS, 2019, 7 : 182185 - 182197
  • [4] Generative Adversarial Network Using Weighted Loss Map and Regional Fusion Training for LDR-to-HDR Image Conversion
    Jung, Sung-Woon
    Kwon, Hyuk-Ju
    Son, Dong-Min
    Lee, Sung-Hak
    IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS, 2020, E103D (11) : 2398 - 2402
  • [5] Detail-enhanced multimodal medical image fusion
    Yang, Guocheng
    Chen, Leiting
    Qiu, Hang
    2014 IEEE 17TH INTERNATIONAL CONFERENCE ON COMPUTATIONAL SCIENCE AND ENGINEERING (CSE), 2014, : 1611 - 1615
  • [6] Detail-Enhanced Multi-Scale Exposure Fusion
    Li, Zhengguo
    Wei, Zhe
    Wen, Changyun
    Zheng, Jinghong
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2017, 26 (03) : 1243 - 1252
  • [7] SAFNet: Selective Alignment Fusion Network for Efficient HDR Imaging
    Kong, Lingtong
    Li, Bo
    Xiong, Yike
    Zhang, Hao
    Gu, Hong
    Chen, Jinwei
    COMPUTER VISION - ECCV 2024, PT XXVI, 2025, 15084 : 256 - 273
  • [8] A Lightweight Detail-Fusion Progressive Network for Image Deraining
    Ding, Siyi
    Zhu, Qing
    Zhu, Wanting
    ADVANCED INTELLIGENT COMPUTING TECHNOLOGY AND APPLICATIONS, ICIC 2023, PT V, 2023, 14090 : 75 - 87
  • [9] Deep HDR Deghosting by Motion-Attention Fusion Network
    Xiao, Yifan
    Veelaert, Peter
    Philips, Wilfried
    SENSORS, 2022, 22 (20)
  • [10] DEAF-Net: Detail-Enhanced Attention Feature Fusion Network for Retinal Vessel Segmentation
    Cai, Pengfei
    Li, Biyuan
    Sun, Gaowei
    Yang, Bo
    Wang, Xiuwei
    Lv, Chunjie
    Yan, Jun
    JOURNAL OF IMAGING INFORMATICS IN MEDICINE, 2025, 38 (01): : 496 - 519