Boths: Super Lightweight Network-Enabled Underwater Image Enhancement

Cited by: 15
Authors
Liu, Xu [1 ]
Lin, Sen [2 ]
Chi, Kaichen [3 ]
Tao, Zhiyong [4 ]
Zhao, Yang [1 ]
Affiliations
[1] Hefei Univ Technol, Sch Comp Sci & Informat Engn, Hefei 230601, Peoples R China
[2] Shenyang Ligong Univ, Sch Automat & Elect Engn, Shenyang 110159, Peoples R China
[3] Northwestern Polytech Univ, Sch Artificial Intelligence Opt & Elect, Xian 710072, Peoples R China
[4] Liaoning Tech Univ, Sch Elect & Informat Engn, Huludao 125105, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
3-D attention learning; high- and low-frequency loss functions; structure and detail interaction; underwater image enhancement; QUALITY;
DOI
10.1109/LGRS.2022.3230049
Chinese Library Classification (CLC)
P3 [Geophysics]; P59 [Geochemistry];
Discipline Classification Code
0708; 070902;
Abstract
Since light is scattered and absorbed by water, underwater images suffer from inherent degradation (e.g., haze, color shift), consequently impeding the development of remotely operated vehicles (ROVs). Toward this end, we propose a novel method, referred to as Best of Both Worlds (Boths). With only 0.0064 M parameters, Boths can be considered a super lightweight neural network for underwater image enhancement. On the whole, it operates at three levels: structure and detail features; pixel and channel dimensions; and high- and low-frequency information. Each of these three levels represents the "Best of Both Worlds." First, by letting structure and detail features interact, Boths can attend to both aspects at the same time. Further, our network can simultaneously consider the channel and pixel dimensions through 3-D attention learning, which is closer to human visual perception. Lastly, the proposed model can focus on high- and low-frequency information through a novel loss function based on wavelet transforms. Upon analysis and evaluation, Boths shows superior performance compared with state-of-the-art (SOTA) methods. Our models and datasets are publicly available at: https://github.com/perseveranceLX/Boths.
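The abstract stays at a high level, so the following is a minimal PyTorch sketch of two of the ideas it names: a parameter-free 3-D attention in the spirit of SimAM (weighting channel and pixel positions jointly) and a wavelet-based loss that penalizes low- and high-frequency sub-bands separately via a single-level Haar decomposition. This is an illustrative reconstruction from the abstract only, not the authors' released implementation; the names Attention3D, haar_dwt, FrequencyLoss, and all weights are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def haar_dwt(x):
    """Single-level orthonormal Haar decomposition of a (B, C, H, W) tensor
    with even H and W; returns one low-frequency and three high-frequency
    sub-bands at half resolution."""
    a = x[:, :, 0::2, 0::2]
    b = x[:, :, 0::2, 1::2]
    c = x[:, :, 1::2, 0::2]
    d = x[:, :, 1::2, 1::2]
    ll = (a + b + c + d) / 2   # low-frequency approximation
    lh = (-a - b + c + d) / 2  # high-frequency detail sub-band
    hl = (-a + b - c + d) / 2  # high-frequency detail sub-band
    hh = (a - b - c + d) / 2   # high-frequency detail sub-band
    return ll, lh, hl, hh


class FrequencyLoss(nn.Module):
    """Hypothetical high-/low-frequency loss: L1 on the Haar sub-bands,
    with separate weights for the low- and high-frequency parts."""

    def __init__(self, low_weight=1.0, high_weight=1.0):
        super().__init__()
        self.low_weight = low_weight
        self.high_weight = high_weight

    def forward(self, pred, target):
        ll_p, lh_p, hl_p, hh_p = haar_dwt(pred)
        ll_t, lh_t, hl_t, hh_t = haar_dwt(target)
        low = F.l1_loss(ll_p, ll_t)
        high = (F.l1_loss(lh_p, lh_t)
                + F.l1_loss(hl_p, hl_t)
                + F.l1_loss(hh_p, hh_t))
        return self.low_weight * low + self.high_weight * high


class Attention3D(nn.Module):
    """Parameter-free 3-D attention in the spirit of SimAM: an importance
    score is computed per (channel, pixel) position, so channel and spatial
    attention are applied jointly rather than as separate 1-D/2-D maps."""

    def __init__(self, eps=1e-4):
        super().__init__()
        self.eps = eps

    def forward(self, x):
        _, _, h, w = x.shape
        n = h * w - 1
        d = (x - x.mean(dim=(2, 3), keepdim=True)).pow(2)
        v = d.sum(dim=(2, 3), keepdim=True) / n   # per-channel variance
        e = d / (4 * (v + self.eps)) + 0.5        # per-position importance
        return x * torch.sigmoid(e)               # 3-D attention weights


# Illustrative usage (assumed tensors):
#   feats = Attention3D()(feats)
#   loss = FrequencyLoss(low_weight=1.0, high_weight=0.5)(enhanced, reference)
```

The separate low/high weights reflect the abstract's claim that the loss attends to both frequency bands; how the actual paper balances them is not stated here.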
Pages: 5
Related Papers
50 records in total
  • [31] FeNet: Feature Enhancement Network for Lightweight Remote-Sensing Image Super-Resolution
    Wang, Zheyuan
    Li, Liangliang
    Xue, Yuan
    Jiang, Chenchen
    Wang, Jiawen
    Sun, Kaipeng
    Ma, Hongbing
    IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2022, 60
  • [32] Underwater image super-resolution and enhancement via progressive frequency-interleaved network
    Wang, Li
    Xu, Lizhong
    Tian, Wei
    Zhang, Yunfei
    Feng, Hui
    Chen, Zhe
    JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2022, 86
  • [33] A Proxy Agent for Small Network-Enabled Devices
    Lu, H. Karen
    Ali, Asad M.
    2008 IEEE INTERNATIONAL PERFORMANCE, COMPUTING AND COMMUNICATIONS CONFERENCE (IPCCC 2008), 2008, : 445 - 449
  • [34] Development of a Network-Enabled Traffic Light System
    bin Onn, Aizuddin
    Salim, Safyzan
    Ahmad, Muhammad Shukri
    Jamil, Muhammad Mahadi Abdul
    2014 IEEE INTERNATIONAL CONFERENCE ON CONTROL SYSTEM COMPUTING AND ENGINEERING, 2014, : 241 - 244
  • [35] Characterizing Collaboration in Social Network-enabled Routing
    Mohaisen, Manar
    Mohaisen, Aziz
    KSII TRANSACTIONS ON INTERNET AND INFORMATION SYSTEMS, 2016, 10 (04): 1643 - 1660
  • [36] Finland moves to create 'network-enabled defence'
    Skinner, Tony
    JANE'S DEFENCE WEEKLY, 2006, (JULY)
  • [37] A very lightweight image super-resolution network
    Bai, Haomou
    Liang, Xiao
    SCIENTIFIC REPORTS, 2024, 14 (01):
  • [38] Applying NetSolve's network-enabled server
    Casanova, H
    Dongarra, J
    IEEE COMPUTATIONAL SCIENCE & ENGINEERING, 1998, 5 (03): 57 - 67
  • [39] Enhancing Upscaled Image Resolution Using Hybrid Generative Adversarial Network-Enabled Frameworks
    Geetha, R.
    Jebamalar, G. Belshia
    Shiney, S. Arumai
    Dao, Nhu-Ngoc
    Moon, Hyeonjoon
    Cho, Sungrae
    IEEE ACCESS, 2024, 12 : 27784 - 27793
  • [40] NETWORK-ENABLED CAPABILITY OF LOGISTICS BUSINESS GROUPS
    Li, Hong-qi
    LISS 2011: PROCEEDINGS OF THE 1ST INTERNATIONAL CONFERENCE ON LOGISTICS, INFORMATICS AND SERVICE SCIENCE, VOL 2, 2011, : 35 - 39