Efficient training of unitary optical neural networks

Cited by: 3
Authors
Lu, Kunrun [1 ]
Guo, Xianxin [2 ,3 ]
Affiliations
[1] Harbin Inst Technol, Sch Sci, Weihai 264209, Shandong, Peoples R China
[2] Univ Oxford, Clarendon Lab, Parks Rd, Oxford OX1 3PU, England
[3] Lurtis Ltd, Wood Ctr Innovat, Quarry Rd, Oxford OX3 8SB, England
Keywords
ALGORITHMS; DESIGN;
DOI
10.1364/OE.500544
Chinese Library Classification (CLC)
O43 [Optics];
Subject classification codes
070207; 0803
Abstract
Deep learning has profoundly reshaped the technology landscape in numerous scientific areas and industrial sectors. This advancement is, nevertheless, confronted with severe bottlenecks in digital computing. Optical neural networks present a promising alternative owing to their ultra-high computing speed and energy efficiency. In this work, we present a systematic study of the unitary optical neural network (UONN) as an approach to optical deep learning. Our results show that the UONN can be trained to high accuracy through gradient descent optimization constrained to the special unitary group, and that it is robust against physical imperfections and noise, making it more suitable for physical implementation than existing ONNs.
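The record gives only the abstract, so the paper's exact optimizer is not specified here. As a rough illustration of one common way to do gradient descent constrained to the unitary group (an assumption, not necessarily the authors' method), the toy sketch below fits a 4x4 target unitary `V` by taking Riemannian steps `U ← expm(-η·(G U† − U G†)) U`; since `G U† − U G†` is skew-Hermitian, the matrix exponential is unitary and `U` stays exactly on the unitary group at every step. The Frobenius loss and all variable names are illustrative.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
n = 4

# Illustrative target: a random unitary from the QR decomposition
# of a complex Gaussian matrix.
M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
V, _ = np.linalg.qr(M)

U = np.eye(n, dtype=complex)  # initialize at the identity
lr = 0.1                      # step size (eta)

def loss(U):
    """Frobenius-norm distance to the target unitary."""
    return float(np.linalg.norm(U - V) ** 2)

for _ in range(300):
    G = 2.0 * (U - V)                        # Euclidean gradient of the loss
    W = G @ U.conj().T - U @ G.conj().T      # skew-Hermitian descent direction
    U = expm(-lr * W) @ U                    # retraction: U remains unitary
```

Because the update multiplies `U` by a unitary factor, no re-orthogonalization step is needed; the unitarity error stays at the level of floating-point round-off even after many iterations.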
Pages: 39616-39623
Page count: 8
Related papers
50 records in total
  • [41] Efficient Training of Very Deep Neural Networks for Supervised Hashing
    Zhang, Ziming
    Chen, Yuting
    Saligrama, Venkatesh
    2016 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2016, : 1487 - 1495
  • [42] An Efficient Approach to Escalate the Speed of Training Convolution Neural Networks
    Pabitha, P.
    Jayasimhan, Anusha
    CHINA COMMUNICATIONS, 2024, 21 (02) : 258 - 269
  • [43] Efficient training of Time Delay Neural Networks for sequential patterns
    Cancelliere, R
    Gemello, R
    NEUROCOMPUTING, 1996, 10 (01) : 33 - 42
  • [45] GradInit: Learning to Initialize Neural Networks for Stable and Efficient Training
    Zhu, Chen
    Ni, Renkun
    Xu, Zheng
    Kong, Kezhi
    Huang, W. Ronny
    Goldstein, Tom
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [46] EFFICIENT GENETIC ALGORITHMS FOR TRAINING LAYERED FEEDFORWARD NEURAL NETWORKS
    Yoon, B. J.
    Holmes, D. J.
    Langholz, G.
    Kandel, A.
    INFORMATION SCIENCES, 1994, 76 (1-2) : 67 - 85
  • [47] An Experimental Perspective for Computation-Efficient Neural Networks Training
    Yin, Lujia
    Chen, Xiaotao
    Qin, Zheng
    Zhang, Zhaoning
    Feng, Jinghua
    Li, Dongsheng
    ADVANCED COMPUTER ARCHITECTURE, 2018, 908 : 168 - 178
  • [48] Enabling Efficient Training of Convolutional Neural Networks for Histopathology Images
    Alali, Mohammed H.
    Roohi, Arman
    Deogun, Jitender S.
    IMAGE ANALYSIS AND PROCESSING, ICIAP 2022 WORKSHOPS, PT I, 2022, 13373 : 533 - 544
  • [49] An Efficient Supervised Training Algorithm for Multilayer Spiking Neural Networks
    Xie, Xiurui
    Qu, Hong
    Liu, Guisong
    Zhang, Malu
    Kurths, Juergen
    PLOS ONE, 2016, 11 (04):
  • [50] Depth Dropout: Efficient Training of Residual Convolutional Neural Networks
    Guo, Jian
    Gould, Stephen
    2016 INTERNATIONAL CONFERENCE ON DIGITAL IMAGE COMPUTING: TECHNIQUES AND APPLICATIONS (DICTA), 2016, : 343 - 349