Multi-Exposure Image Fusion via Multi-Scale and Context-Aware Feature Learning

Cited by: 9
Authors
Liu, Yu [1 ,2 ]
Yang, Zhigang [1 ,2 ]
Cheng, Juan [1 ,2 ]
Chen, Xun [3 ]
Affiliations
[1] Hefei Univ Technol, Dept Biomed Engn, Hefei 230009, Peoples R China
[2] Hefei Univ Technol, Anhui Prov Key Lab Measuring Theory & Precis Instr, Hefei 230009, Peoples R China
[3] Univ Sci & Technol China, Dept Elect Engn & Informat Sci, Hefei 230027, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Feature extraction; Semantics; Image fusion; Decoding; Transforms; Transformers; Visualization; Auto-encoder; global contextual information; multi-exposure image fusion; multi-scale features; Transformer; QUALITY ASSESSMENT;
DOI
10.1109/LSP.2023.3243767
CLC Number
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology];
Discipline Code
0808; 0809;
Abstract
In this letter, a deep learning (DL)-based multi-exposure image fusion (MEF) method via multi-scale and context-aware feature learning is proposed, aiming to overcome the limitations of existing traditional and DL-based methods. The proposed network is based on an auto-encoder architecture. First, an encoder that combines a convolutional network and a Transformer is designed to extract multi-scale features and capture global contextual information. Then, a multi-scale feature interaction (MSFI) module is devised to enrich the scale diversity of the extracted features using cross-scale fusion and atrous spatial pyramid pooling (ASPP). Finally, a decoder with a nest-connection architecture is introduced to reconstruct the fused image. Experimental results show that the proposed method outperforms several representative traditional and DL-based MEF methods in terms of both visual quality and objective assessment.
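As a rough illustration of the pipeline the abstract describes (CNN + Transformer encoder, MSFI module with cross-scale fusion and ASPP, nest-connection decoder), below is a minimal PyTorch sketch. All channel widths, dilation rates, the single-channel luminance inputs, the two-exposure setting, the mean-based feature fusion rule, and the simplified nest connections are illustrative assumptions, not the authors' actual configuration from the letter.

```python
# Minimal sketch of the architecture outlined in the abstract; hyperparameters
# and the fusion rule are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvBlock(nn.Module):
    def __init__(self, cin, cout):
        super().__init__()
        self.body = nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1),
                                  nn.ReLU(inplace=True))
    def forward(self, x):
        return self.body(x)

class ASPP(nn.Module):
    # Atrous spatial pyramid pooling; dilation rates (1, 2, 4) are assumed.
    def __init__(self, c, rates=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(c, c, 3, padding=r, dilation=r) for r in rates])
        self.fuse = nn.Conv2d(c * len(rates), c, 1)
    def forward(self, x):
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

class Encoder(nn.Module):
    # CNN stem producing three scales + one Transformer layer for global context.
    def __init__(self, c=32):
        super().__init__()
        self.s1, self.s2, self.s3 = ConvBlock(1, c), ConvBlock(c, c), ConvBlock(c, c)
        self.ctx = nn.TransformerEncoderLayer(d_model=c, nhead=4, batch_first=True)
    def forward(self, x):
        f1 = self.s1(x)
        f2 = self.s2(F.max_pool2d(f1, 2))
        f3 = self.s3(F.max_pool2d(f2, 2))
        b, c, h, w = f3.shape
        tokens = self.ctx(f3.flatten(2).transpose(1, 2))  # global self-attention
        return [f1, f2, tokens.transpose(1, 2).reshape(b, c, h, w)]

class MSFI(nn.Module):
    # Cross-scale fusion (resize + concat + 1x1 conv) followed by ASPP per scale.
    def __init__(self, c=32):
        super().__init__()
        self.mix = nn.ModuleList([nn.Conv2d(3 * c, c, 1) for _ in range(3)])
        self.aspp = nn.ModuleList([ASPP(c) for _ in range(3)])
    def forward(self, feats):
        out = []
        for i, f in enumerate(feats):
            resized = [F.interpolate(g, size=f.shape[-2:], mode='bilinear',
                                     align_corners=False) for g in feats]
            out.append(self.aspp[i](self.mix[i](torch.cat(resized, dim=1))))
        return out

class Decoder(nn.Module):
    # Simplified stand-in for the nest-connection decoder: each finer level
    # fuses the upsampled coarser level before a final 1x1 reconstruction.
    def __init__(self, c=32):
        super().__init__()
        self.d2, self.d1 = ConvBlock(2 * c, c), ConvBlock(2 * c, c)
        self.out = nn.Conv2d(c, 1, 1)
    def forward(self, feats):
        f1, f2, f3 = feats
        up = lambda x, ref: F.interpolate(x, size=ref.shape[-2:],
                                          mode='bilinear', align_corners=False)
        f2 = self.d2(torch.cat([f2, up(f3, f2)], dim=1))
        f1 = self.d1(torch.cat([f1, up(f2, f1)], dim=1))
        return torch.sigmoid(self.out(f1))

class MEFNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder, self.msfi, self.decoder = Encoder(), MSFI(), Decoder()
    def forward(self, under, over):
        # Element-wise mean of encoder features is a placeholder fusion rule,
        # not the strategy used in the letter.
        fu, fo = self.encoder(under), self.encoder(over)
        fused = [(a + b) / 2 for a, b in zip(fu, fo)]
        return self.decoder(self.msfi(fused))

if __name__ == "__main__":
    net = MEFNet()
    y = net(torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64))
    print(y.shape)  # torch.Size([1, 1, 64, 64])
```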
Pages: 100-104
Number of pages: 5
Related Papers
50 records in total
  • [31] MCGFF-Net: a multi-scale context-aware and global feature fusion network for enhanced polyp and skin lesion segmentation
    Li, Yanxiang
    Meng, Wenzhe
    Ma, Dehua
    Xu, Siping
    Zhu, Xiaoliang
    VISUAL COMPUTER, 2024
  • [32] Medical Image Fusion Based on Multi-Scale Feature Learning and Edge Enhancement
    Xiao Wanxin
    Li Huafeng
    Zhang Yafei
    Xie Minghong
    Li Fan
    LASER & OPTOELECTRONICS PROGRESS, 2022, 59 (06)
  • [33] AUTOMATIC EXPOSURE COMPENSATION FOR MULTI-EXPOSURE IMAGE FUSION
    Kinoshita, Yuma
    Shiota, Sayaka
    Kiya, Hitoshi
    2018 25TH IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2018, : 883 - 887
  • [34] Enhancing image visuality by multi-exposure fusion
    Yan, Qingsen
    Zhu, Yu
    Zhou, Yulin
    Sun, Jinqiu
    Zhang, Lei
    Zhang, Yanning
    PATTERN RECOGNITION LETTERS, 2019, 127 : 66 - 75
  • [35] Single image defogging via multi-exposure image fusion and detail enhancement
    Mao, Wenjing
    Zheng, Dezhi
    Chen, Minze
    Chen, Juqiang
    JOURNAL OF SAFETY SCIENCE AND RESILIENCE, 2024, 5 (01): : 37 - 46
  • [36] CDMC-Net: Context-Aware Image Deblurring Using a Multi-scale Cascaded Network
    Zhao, Qian
    Zhou, Dongming
    Yang, Hao
    NEURAL PROCESSING LETTERS, 2023, 55 (04) : 3985 - 4006
  • [37] A novel fusion approach of multi-exposure image
    Kong, Jun
    Wang, Rujuan
    Lu, Yingha
    Feng, Xue
    Zhang, Jingbuo
    EUROCON 2007: THE INTERNATIONAL CONFERENCE ON COMPUTER AS A TOOL, VOLS 1-6, 2007, : 1458 - 1464
  • [38] An Improved Multi-Exposure Image Fusion Algorithm
    Xiang, Huyan
    Ma Xi-rong
    MEMS, NANO AND SMART SYSTEMS, PTS 1-6, 2012, 403-408 : 2200 - 2205
  • [39] Review of Multi-Exposure Image Fusion Methods
    Zhu Xinli
    Zhang Yasheng
    Fang Yuqiang
    Zhang Xitao
    Xu Jieping
    Luo Di
    LASER & OPTOELECTRONICS PROGRESS, 2023, 60 (22)
  • [40] A Method for Fast Multi-Exposure Image Fusion
    Choi, Seungcheol
    Kwon, Oh-Jin
    Lee, Jinhee
    IEEE ACCESS, 2017, 5 : 7371 - 7380