Fully forward mode training for optical neural networks

Cited by: 6
Authors
Xue, Zhiwei [1 ,2 ,3 ,4 ]
Zhou, Tiankuang [1 ,2 ,3 ]
Xu, Zhihao [1 ,2 ,3 ,4 ]
Yu, Shaoliang [5 ]
Dai, Qionghai [2 ,3 ,6 ]
Fang, Lu [1 ,2 ,3 ]
Affiliations
[1] Tsinghua Univ, Dept Elect Engn, Beijing, Peoples R China
[2] Tsinghua Univ, Beijing Natl Res Ctr Informat Sci & Technol, Beijing, Peoples R China
[3] Tsinghua Univ, Inst Brain & Cognit Sci, Beijing, Peoples R China
[4] Tsinghua Univ, Shenzhen Int Grad Sch, Shenzhen, Peoples R China
[5] Zhejiang Lab, Res Ctr Intelligent Optoelect Comp, Hangzhou, Peoples R China
[6] Tsinghua Univ, Dept Automat, Beijing, Peoples R China
Funding
National Natural Science Foundation of China; China Postdoctoral Science Foundation;
Keywords
INVERSE DESIGN; ARTIFICIAL-INTELLIGENCE; BACKPROPAGATION; TIME;
DOI
10.1038/s41586-024-07687-4
Chinese Library Classification (CLC)
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biosciences]; N [General Natural Sciences];
Subject Classification Codes
07 ; 0710 ; 09 ;
Abstract
Optical computing promises to improve the speed and energy efficiency of machine learning applications [1-6]. However, current approaches to efficiently train these models are limited by in silico emulation on digital computers. Here we develop a method called fully forward mode (FFM) learning, which implements the compute-intensive training process on the physical system. The majority of the machine learning operations are thus efficiently conducted in parallel on site, alleviating numerical modelling constraints. In free-space and integrated photonics, we experimentally demonstrate optical systems with state-of-the-art performance for a given network size. FFM learning shows that training the deepest optical neural networks, with millions of parameters, achieves accuracy equivalent to the ideal model. It supports all-optical focusing through scattering media with a resolution at the diffraction limit; it can also image in parallel the objects hidden outside the direct line of sight at over a kilohertz frame rate, and it can conduct all-optical processing with light intensity as weak as sub-photon per pixel (5.40 × 10^18 operations-per-second-per-watt energy efficiency) at room temperature. Furthermore, we prove that FFM learning can automatically search non-Hermitian exceptional points without an analytical model. FFM learning not only facilitates orders-of-magnitude-faster learning processes, but can also advance applied and theoretical fields such as deep neural networks, ultrasensitive perception and topological photonics. We present fully forward mode learning, which conducts machine learning operations on site, leading to faster learning and promoting advancement in numerous fields.
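The abstract describes training that relies only on physical forward propagation, with no digital backpropagation. As a rough conceptual analogue only (this is not the paper's reciprocity-based FFM procedure; the model, function names, and hyperparameters below are illustrative inventions), one can sketch forward-only training numerically: estimate each gradient entry from pairs of forward evaluations and descend on the result.

```python
import numpy as np

# Toy numerical analogue: train a small "optical" layer using forward
# evaluations only (central finite differences), with no backpropagation.
# The real FFM method instead measures gradients physically, in parallel,
# by exploiting optical reciprocity; this sketch only conveys the idea of
# learning from forward passes alone.

def forward(weights, x):
    """Stand-in for a physical forward pass through the system."""
    return np.tanh(weights @ x)

def loss(weights, x, target):
    y = forward(weights, x)
    return float(np.mean((y - target) ** 2))

def forward_only_grad(weights, x, target, eps=1e-4):
    """Estimate dL/dW from forward evaluations alone (two per parameter)."""
    grad = np.zeros_like(weights)
    for idx in np.ndindex(weights.shape):
        w_plus = weights.copy()
        w_plus[idx] += eps
        w_minus = weights.copy()
        w_minus[idx] -= eps
        grad[idx] = (loss(w_plus, x, target) - loss(w_minus, x, target)) / (2 * eps)
    return grad

# Fit a 4x4 weight matrix so the forward pass maps x to a fixed target.
W = np.zeros((4, 4))
x = np.array([1.0, -0.5, 0.25, 0.75])
target = np.array([0.5, -0.5, 0.25, 0.0])

for _ in range(200):
    W -= 0.5 * forward_only_grad(W, x, target)

print(loss(W, x, target))  # loss decreases toward zero
```

Note the cost trade-off this sketch makes explicit: a digital finite-difference scheme needs two forward passes per parameter, which is exactly the bottleneck that evaluating gradients physically and in parallel on the optical system is meant to remove.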
Pages: 280-286
Page count: 17
Related Papers
50 records in total
  • [41] Salp Swarm Algorithm (SSA) for Training Feed-Forward Neural Networks
    Bairathi, Divya
    Gopalani, Dinesh
    SOFT COMPUTING FOR PROBLEM SOLVING, SOCPROS 2017, VOL 1, 2019, 816 : 521 - 534
  • [42] Analysis of Training Parameters of Feed Forward Neural Networks for WiFi RSSI Modeling
    Bogdandy, Bence
    Toth, Zsolt
    2019 IEEE 15TH INTERNATIONAL SCIENTIFIC CONFERENCE ON INFORMATICS (INFORMATICS 2019), 2019, : 273 - 277
  • [43] A training-time analysis of robustness in feed-forward neural networks
    Alippi, C
    Sana, D
    Scotti, F
    2004 IEEE INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, VOLS 1-4, PROCEEDINGS, 2004, : 2853 - 2858
  • [44] An ensemble of differential evolution and Adam for training feed-forward neural networks
    Xue, Yu
    Tong, Yiling
    Neri, Ferrante
    INFORMATION SCIENCES, 2022, 608 : 453 - 471
  • [45] FASTER TRAINING USING FUSION OF ACTIVATION FUNCTIONS FOR FEED FORWARD NEURAL NETWORKS
    Asaduzzaman, Md.
    Shahjahan, Md.
    Murase, Kazuyuki
    INTERNATIONAL JOURNAL OF NEURAL SYSTEMS, 2009, 19 (06) : 437 - 448
  • [46] Hybrid learning schemes for fast training of feed-forward neural networks
    Karayiannis, NB
    MATHEMATICS AND COMPUTERS IN SIMULATION, 1996, 41 (1-2) : 13 - 28
  • [47] PARALLEL, SELF-ORGANIZING, HIERARCHICAL NEURAL NETWORKS WITH FORWARD BACKWARD TRAINING
    DENG, SW
    ERSOY, OK
    CIRCUITS SYSTEMS AND SIGNAL PROCESSING, 1993, 12 (02) : 223 - 246
  • [49] Hybrid training of feed-forward neural networks with particle swarm optimization
    Carvalho, M.
    Ludermir, T. B.
    NEURAL INFORMATION PROCESSING, PT 2, PROCEEDINGS, 2006, 4233 : 1061 - 1070
  • [50] An evolutionary approach to training feed-forward and recurrent neural networks.
    Riley, J
    Ciesielski, VB
    1998 SECOND INTERNATIONAL CONFERENCE ON KNOWLEDGE-BASED INTELLIGENT ELECTRONIC SYSTEMS, KES '98, PROCEEDINGS, VOL 3, 1998, : 596 - 602