Urban sound classification using neural networks on embedded FPGAs

Cited by: 1
Authors
Belloch, Jose A. [1 ]
Coronado, Raul [1 ]
Valls, Oscar [2 ]
del Amor, Rocio [2 ]
Leon, German [3 ]
Naranjo, Valery [2 ]
Dolz, Manuel F. [3 ]
Amor-Martin, Adrian [4 ]
Pinero, Gema [5 ]
Affiliations
[1] Univ Carlos III Madrid, Dept Tecnol Elect, Avda Univ 30, Leganes 28911, Madrid, Spain
[2] Univ Politecn Valencia, Inst Univ Invest Tecnol Centrada Ser Humano HUMAN, Camino Vera S-N, Valencia 46022, Spain
[3] Univ Jaume I Castellon, Dept Ingn & Ciencia Comp, Avda Sos Baynat s-n, Castellon de La Plana 12071, Spain
[4] Univ Carlos III Madrid, Dept Teoria Senal & Comunicac, Avda Univ 30, Madrid 28911, Spain
[5] Univ Politecn Valencia, Inst Telecomunicac & Aplicac Multimedia, Camino Vera S-N, E-46022 Valencia, Spain
Source
JOURNAL OF SUPERCOMPUTING | 2024, Vol. 80, Issue 9
Keywords
FPGA; Sound classification; Hardware acceleration; Convolutional neural networks; Deep learning;
DOI
10.1007/s11227-024-05947-8
Chinese Library Classification (CLC) number
TP3 [Computing technology; computer technology];
Subject classification code
0812;
Abstract
Sound classification using neural networks has recently produced very accurate results. Many applications rely on this type of sound classifier, such as monitoring the type of activity in a city or identifying different animal species in natural environments. While traditional acoustic processing applications have been developed on high-performance computing platforms equipped with expensive multi-channel audio interfaces, the Internet of Things (IoT) paradigm calls for more flexible and energy-efficient systems. Although software-based platforms exist for implementing general-purpose neural networks, they are not optimized for sound classification and waste energy and computational resources. In this work, we use FPGAs to develop an ad hoc system in which only the hardware needed for our application is synthesized, resulting in faster and more energy-efficient circuits. The results show that our implementation is accelerated by a factor of 35 compared to a software-based implementation on a Raspberry Pi.
Pages: 13176-13186
Number of pages: 11
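
The abstract does not detail the network itself, so the following is a minimal, illustrative sketch (not the authors' design) of the kind of compact convolutional classifier commonly trained on log-mel spectrograms of urban sound clips and later quantized and synthesized for an embedded FPGA; the layer widths, input shape, and ten-class output are assumptions.

# Minimal sketch, assuming a log-mel spectrogram front end and ten
# urban sound classes; illustrative only, not the paper's model.
import torch
import torch.nn as nn

class SmallUrbanSoundCNN(nn.Module):
    def __init__(self, n_classes: int = 10):
        super().__init__()
        # Two small convolutional blocks keep the parameter count low
        # enough for embedded deployment; widths are illustrative only.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),  # global pooling to a fixed-size vector
            nn.Flatten(),
            nn.Linear(32, n_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, n_mels, n_frames) log-mel spectrogram
        return self.classifier(self.features(x))

if __name__ == "__main__":
    model = SmallUrbanSoundCNN()
    dummy = torch.randn(1, 1, 64, 128)  # one 64-mel, 128-frame clip
    print(model(dummy).shape)           # torch.Size([1, 10])

A trained model of this size would normally be quantized (for example, to 8-bit integers) before synthesis, since fixed-point arithmetic maps far more efficiently onto FPGA logic than floating point.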