Searching for Energy-Efficient Hybrid Adder-Convolution Neural Networks

Cited by: 9
Authors:
Li, Wenshuo [1 ]
Chen, Xinghao [1 ]
Bai, Jinyu [1 ,3 ]
Ning, Xuefei [2 ,4 ]
Wang, Yunhe [1 ]
Affiliations:
[1] Huawei Noah's Ark Lab, Hong Kong, Peoples R China
[2] Huawei TCS Lab, Hong Kong, Peoples R China
[3] Beihang Univ, Sch Integrated Circuit Sci & Engn, Beijing, Peoples R China
[4] Tsinghua Univ, Dept Elect Engn, Beijing, Peoples R China
DOI:
10.1109/CVPRW56347.2022.00211
Chinese Library Classification: TP301 [Theory and Methods]
Discipline code: 081202
Abstract:
As convolutional neural networks (CNNs) see ever wider use in computer vision, their energy consumption has become a focus of research. For edge devices, both battery life and inference latency are critical and directly affect user experience. Recently, great progress has been made in the design of neural architectures and new operators. Neural architecture search has steadily improved network performance and, to a degree, freed engineers from manual design. New operators, such as AdderNets, make it possible to further improve the energy efficiency of neural networks. In this paper, we explore fusing the new adder operator with the common convolution operator in a state-of-the-art lightweight network, GhostNet, to search for models with better energy efficiency and performance. Our proposed search equilibrium strategy ensures that the adder and convolution operators are treated fairly during the search, and the resulting model matches GhostNet's accuracy of 73.9% on the ImageNet dataset at an extremely low energy consumption of 0.612 mJ. At the same energy consumption as GhostNet, accuracy reaches 74.3%, which is 0.4% higher than the original GhostNet.
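The adder operator referenced above (from AdderNets) replaces the multiply-accumulate of convolution with a negated L1 distance between input patches and filters, so the layer needs only additions and absolute values. A minimal NumPy sketch under that definition (the function name, naive loop structure, and stride-1/no-padding choices are illustrative, not taken from the paper):

```python
import numpy as np

def adder2d_naive(x, w):
    """Adder 'convolution' in the AdderNet sense: the output at each
    location is the negative sum of absolute differences between the
    input patch and the filter, instead of a dot product.
    x: (C, H, W) input; w: (T, C, K, K) filters.
    Returns: (T, H-K+1, W-K+1) output (stride 1, no padding)."""
    C, H, W = x.shape
    T, _, K, _ = w.shape
    out = np.zeros((T, H - K + 1, W - K + 1))
    for t in range(T):
        for i in range(H - K + 1):
            for j in range(W - K + 1):
                patch = x[:, i:i + K, j:j + K]
                # No multiplications: sum of absolute differences, negated
                # so that a closer patch-filter match gives a larger response.
                out[t, i, j] = -np.abs(patch - w[t]).sum()
    return out
```

Because additions cost far less energy than multiplications in hardware, layers of this form are cheaper than standard convolutions of the same shape; the paper's search decides, per layer, which of the two operators to use.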
Pages: 1942-1951 (10 pages)
Related Papers (50 total):
  • [1] Hybrid Convolution Architecture for Energy-Efficient Deep Neural Network Processing
    Kim, Suchang
    Jo, Jihyuck
    Park, In-Cheol
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS I-REGULAR PAPERS, 2021, 68 (05) : 2017 - 2029
  • [2] An Energy-efficient Convolution Unit for Depthwise Separable Convolutional Neural Networks
    Chong, Yi Sheng
    Goh, Wang Ling
    Ong, Yew Soon
    Nambiar, Vishnu P.
    Do, Anh Tuan
    2021 IEEE INTERNATIONAL SYMPOSIUM ON CIRCUITS AND SYSTEMS (ISCAS), 2021,
  • [3] A Novel Energy-Efficient Hybrid Full Adder Circuit
    Sharma, Trapti
    Kumre, Laxmi
    ADVANCES IN DATA AND INFORMATION SCIENCES, VOL 1, 2018, 38 : 105 - 114
  • [4] Energy-Efficient Hybrid Adder Design by Using Inexact Lower Bits Adder
    Kim, Sunghyun
    Kim, Youngmin
    2016 IEEE ASIA PACIFIC CONFERENCE ON CIRCUITS AND SYSTEMS (APCCAS), 2016, : 355 - 357
  • [5] Energy-Efficient Hybrid Full Adder (EEHFA) for Arithmetic Applications
    Rajagopal, Thiruvengadam
    Chakrapani, Arvind
    NATIONAL ACADEMY SCIENCE LETTERS-INDIA, 2022, 45 (02): 165 - 168
  • [6] Energy-Efficient Hybrid Full Adder (EEHFA) for Arithmetic Applications
    Thiruvengadam Rajagopal
    Arvind Chakrapani
    National Academy Science Letters, 2022, 45 : 165 - 168
  • [7] E2GC: Energy-efficient Group Convolution in Deep Neural Networks
    Jha, Nandan Kumar
    Saini, Rajat
    Nag, Subhrajit
    Mittal, Sparsh
    2020 33RD INTERNATIONAL CONFERENCE ON VLSI DESIGN AND 2020 19TH INTERNATIONAL CONFERENCE ON EMBEDDED SYSTEMS (VLSID), 2020, : 155 - 160
  • [8] Global-Local Convolution with Spiking Neural Networks for Energy-efficient Keyword Spotting
    Wang, Shuai
    Zhang, Dehao
    Shi, Kexin
    Wang, Yuchen
    Wei, Wenjie
    Wu, Jibin
    Zhang, Malu
    INTERSPEECH 2024, 2024, : 4523 - 4527
  • [9] New energy-efficient hybrid wide-operand adder architecture
    Jafarzadehpour, Fereshteh
    Molahosseini, Amir Sabbagh
    Zarandi, Azadeh Alsadat Emrani
    Sousa, Leonel
    IET CIRCUITS DEVICES & SYSTEMS, 2019, 13 (08) : 1221 - 1231
  • [10] Design of energy-efficient and high-speed hybrid decimal adder
    Mashayekhi, Negin
    Jaberipur, Ghassem
    Reshadinezhad, Mohammad Reza
    Moghimi, Shekoofeh
    JOURNAL OF SUPERCOMPUTING, 2025, 81 (03):