Learning with Holographic Reduced Representations

Cited by: 0
Authors
Ganesan, Ashwinkumar [1 ,4 ]
Gao, Hang [1 ]
Gandhi, Sunil [1 ]
Raff, Edward [1 ,2 ,3 ]
Oates, Tim [1 ]
Holt, James [2 ]
McLean, Mark [2 ]
Affiliations
[1] Univ Maryland Baltimore Cty, Baltimore, MD 21228 USA
[2] Lab Phys Sci, Catonsville, MD USA
[3] Booz Allen Hamilton, McLean, VA USA
[4] Amazon, Seattle, WA USA
DOI: not available
Chinese Library Classification: TP18 [Artificial Intelligence Theory]
Subject Classification Codes: 081104; 0812; 0835; 1405
Abstract
Holographic Reduced Representations (HRRs) are a method for performing symbolic AI on top of real-valued vectors [1] by associating each vector with an abstract concept and providing mathematical operations to manipulate the vectors as if they were classic symbolic objects. The method has seen little use outside of older symbolic AI work and cognitive science. Our goal is to revisit this approach to understand whether it is viable for enabling a hybrid neural-symbolic approach to learning as a differentiable component of a deep learning architecture. HRRs today are not effective in a differentiable solution due to numerical instability, a problem we solve by introducing a projection step that forces the vectors to exist at a well-behaved point in space. In doing so we improve the concept retrieval efficacy of HRRs by over 100x. Using multi-label classification, we demonstrate how to leverage the symbolic HRR properties to develop an output layer and loss function that learns effectively, and which allows us to investigate some of the pros and cons of an HRR neuro-symbolic learning approach. Our code can be found at https://github.com/NeuromorphicComputationResearchProgram/Learning-with-Holographic-Reduced-Representations
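A minimal sketch of the ideas the abstract describes, under standard HRR assumptions: binding is circular convolution (computed in the frequency domain), unbinding multiplies by the complex conjugate, and the stabilizing projection step is assumed here to normalize every Fourier coefficient to unit magnitude, which makes the conjugate an exact inverse. The function names and the dimensionality are illustrative, not taken from the paper's code.

```python
import numpy as np

def projection(x):
    # Assumed projection step: normalize each Fourier coefficient to
    # magnitude 1, placing x at a numerically well-behaved point where
    # the conjugate acts as an exact inverse under binding.
    f = np.fft.fft(x)
    return np.real(np.fft.ifft(f / np.abs(f)))

def bind(a, b):
    # HRR binding: circular convolution, done in the frequency domain.
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def unbind(s, a):
    # HRR unbinding: multiply by the complex conjugate of a's spectrum,
    # the (approximate) inverse of binding with a.
    return np.real(np.fft.ifft(np.fft.fft(s) * np.conj(np.fft.fft(a))))

rng = np.random.default_rng(0)
d = 1024
# Concept vectors drawn from N(0, 1/d), then projected.
a = projection(rng.normal(0.0, 1.0 / np.sqrt(d), d))
b = projection(rng.normal(0.0, 1.0 / np.sqrt(d), d))

s = bind(a, b)        # associate concept a with value b
b_hat = unbind(s, a)  # retrieve b back out of the bound pair
sim = b_hat @ b / (np.linalg.norm(b_hat) * np.linalg.norm(b))
```

With the projection applied, `sim` is essentially 1.0, since each Fourier magnitude of `a` is 1 and unbinding cancels binding exactly; without the projection, retrieval is only approximate and degrades as more pairs are superimposed.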
Pages: 15