Fully forward mode training for optical neural networks

Cited: 6
Authors
Xue, Zhiwei [1 ,2 ,3 ,4 ]
Zhou, Tiankuang [1 ,2 ,3 ]
Xu, Zhihao [1 ,2 ,3 ,4 ]
Yu, Shaoliang [5 ]
Dai, Qionghai [2 ,3 ,6 ]
Fang, Lu [1 ,2 ,3 ]
Affiliations
[1] Tsinghua Univ, Dept Elect Engn, Beijing, Peoples R China
[2] Tsinghua Univ, Beijing Natl Res Ctr Informat Sci & Technol, Beijing, Peoples R China
[3] Tsinghua Univ, Inst Brain & Cognit Sci, Beijing, Peoples R China
[4] Tsinghua Univ, Shenzhen Int Grad Sch, Shenzhen, Peoples R China
[5] Zhejiang Lab, Res Ctr Intelligent Optoelect Comp, Hangzhou, Peoples R China
[6] Tsinghua Univ, Dept Automat, Beijing, Peoples R China
Funding
National Natural Science Foundation of China; China Postdoctoral Science Foundation;
Keywords
INVERSE DESIGN; ARTIFICIAL-INTELLIGENCE; BACKPROPAGATION; TIME;
DOI
10.1038/s41586-024-07687-4
Chinese Library Classification
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences];
Subject Classification Codes
07 ; 0710 ; 09 ;
Abstract
Optical computing promises to improve the speed and energy efficiency of machine learning applications1-6. However, current approaches to training these models efficiently are limited by in silico emulation on digital computers. Here we develop a method called fully forward mode (FFM) learning, which implements the compute-intensive training process on the physical system itself. The majority of the machine learning operations are thus conducted efficiently in parallel on site, alleviating numerical-modelling constraints. In free-space and integrated photonics, we experimentally demonstrate optical systems with state-of-the-art performance for a given network size. FFM learning shows that training the deepest optical neural networks, with millions of parameters, achieves accuracy equivalent to that of the ideal model. It supports all-optical focusing through scattering media at diffraction-limited resolution; it can also image, in parallel, objects hidden outside the direct line of sight at over a kilohertz frame rate, and can conduct all-optical processing with light intensities as weak as sub-photon per pixel (5.40 × 10^18 operations-per-second-per-watt energy efficiency) at room temperature. Furthermore, we prove that FFM learning can automatically search for non-Hermitian exceptional points without an analytical model. FFM learning not only enables orders-of-magnitude-faster learning processes, but can also advance applied and theoretical fields such as deep neural networks, ultrasensitive perception and topological photonics. We present fully forward mode learning, which conducts machine learning operations on site, leading to faster learning and promoting advancement in numerous fields.
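The abstract describes training carried out entirely through forward passes on the physical system, with no digital backpropagation through a numerical model. As a conceptual sketch only, and not the paper's optical implementation, the hypothetical NumPy example below trains a toy model using simultaneous-perturbation (SPSA-style) gradient estimates, which require nothing but two forward evaluations per step; all names in it are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a physical system: a linear map we can only query forward.
X = rng.normal(size=(64, 8))
true_w = rng.normal(size=8)
y = X @ true_w

def forward_loss(w):
    # Plays the role of a physical forward pass plus loss readout (MSE).
    return np.mean((X @ w - y) ** 2)

# Forward-only training: estimate the gradient from two forward evaluations
# along a random perturbation direction, never differentiating the model.
w = np.zeros(8)
eps, lr = 1e-3, 0.05
losses = [forward_loss(w)]
for step in range(500):
    v = rng.choice([-1.0, 1.0], size=8)   # random Rademacher direction
    g = (forward_loss(w + eps * v) - forward_loss(w - eps * v)) / (2 * eps)
    w -= lr * g * v                        # SPSA-style parameter update
    losses.append(forward_loss(w))

print(f"initial loss {losses[0]:.3f} -> final loss {losses[-1]:.4f}")
```

Because the update uses only forward evaluations of the system, the same loop shape applies whether `forward_loss` is simulated or measured from hardware; the paper's FFM method replaces such stochastic estimates with gradients obtained optically via forward propagation.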
Pages: 280-286
Number of pages: 17
Related Papers
50 records total
  • [21] Training of the feed forward artificial neural networks using dragonfly algorithm
    Gulcu, Saban
    APPLIED SOFT COMPUTING, 2022, 124
  • [22] Training Algorithm with Incomplete Data for Feed-Forward Neural Networks
    Song-Yee Yoon
    Soo-Young Lee
    Neural Processing Letters, 1999, 10 : 171 - 179
  • [23] Training Recurrent Neural Networks via Forward Propagation Through Time
    Kag, Anil
    Saligrama, Venkatesh
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139, 2021, 139
  • [24] Differential evolution training algorithm for feed-forward neural networks
    Ilonen, J
    Kamarainen, JK
    Lampinen, J
    NEURAL PROCESSING LETTERS, 2003, 17 (01) : 93 - 105
  • [25] Constructing and training feed-forward neural networks for pattern classification
    Jiang, XD
    Wah, AHKS
    PATTERN RECOGNITION, 2003, 36 (04) : 853 - 867
  • [26] A Greedy Iterative Layered Framework for Training Feed Forward Neural Networks
    Custode, L. L.
    Tecce, C. L.
    Bakurov, I.
    Castelli, M.
    Della Cioppa, A.
    Vanneschi, L.
    APPLICATIONS OF EVOLUTIONARY COMPUTATION, EVOAPPLICATIONS 2020, 2020, 12104 : 513 - 529
  • [27] Evolutionary approach to training feed-forward and recurrent neural networks
    Riley, Jeff
    Ciesielski, Victor B.
    International Conference on Knowledge-Based Intelligent Electronic Systems, Proceedings, KES, 1998, 3 : 596 - 602
  • [28] Training of large-scale feed-forward neural networks
    Seiffert, Udo
    2006 IEEE INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORK PROCEEDINGS, VOLS 1-10, 2006, : 5324 - 5329
  • [29] Differential Evolution Training Algorithm for Feed-Forward Neural Networks
    Jarmo Ilonen
    Joni-Kristian Kamarainen
    Jouni Lampinen
    Neural Processing Letters, 2003, 17 : 93 - 105
  • [30] Multiplication units in feed-forward neural networks and its training
    Li, DZ
    Hirasawa, K
    Hu, JL
    Murata, J
    ICONIP'02: PROCEEDINGS OF THE 9TH INTERNATIONAL CONFERENCE ON NEURAL INFORMATION PROCESSING: COMPUTATIONAL INTELLIGENCE FOR THE E-AGE, 2002, : 75 - 79