Multi-modal actuation with the activation bit vector machine

Cited by: 2
Author
Schmidtke, H. R. [1 ]
Affiliation
[1] POB 11 01 29, D-19001 Schwerin, Germany
Source
Keywords
Symbol grounding problem; Vector symbolic architectures; Action verbs; Activation bit vector machine; Context logic;
DOI
10.1016/j.cogsys.2020.10.022
CLC classification number
TP18 [Theory of Artificial Intelligence];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Research towards a new approach to the abstract symbol grounding problem showed that, through model counting, there is a correspondence between logical/linguistic and coordinate representations in the visuospatial domain. The logical/verbal description of a spatial layout directly gives rise to a coordinate representation that can be drawn, with the drawing reflecting what is described. The main characteristic of this logical property is that it needs no semantic information or ontology beyond a separation into symbols/words referring to relations and symbols/words referring to objects. Moreover, the complete mechanism can be implemented efficiently on a brain-inspired cognitive architecture, the Activation Bit Vector Machine (ABVM), which belongs to the family of Vector Symbolic Architectures. However, the natural language fragment captured previously was restricted to simple predication sentences, the corresponding logical fragment being atomic Context Logic (CLA), and the only actuation modality leveraged was visualization. This article extends the approach in all three respects: by adding a third category, action verbs, we move to a fragment of first-order Context Logic (CL1), and modalities requiring a temporal dimension, such as film and music, become available. The article presents an ABVM generating sequences of images from texts. (C) 2020 Elsevier B.V. All rights reserved.
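The abstract does not detail the ABVM's internals, but the Vector Symbolic Architecture family it belongs to shares a common core: symbols (here, relation words and object words) are encoded as high-dimensional bit vectors, combined by binding (XOR) and bundling (majority vote), and queried by unbinding. The following is a minimal generic VSA sketch, not the authors' ABVM; all names (`LEFT_OF`, `ARG1`, etc.) are hypothetical placeholders for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # VSAs rely on very high-dimensional vectors

def random_hv():
    """Random dense binary hypervector assigned to a symbol/word."""
    return rng.integers(0, 2, D, dtype=np.uint8)

def bind(a, b):
    """Binding via XOR: associates a role with a filler; self-inverse."""
    return a ^ b

def bundle(*vs):
    """Bundling via bitwise majority vote: superposes several vectors."""
    return (np.sum(vs, axis=0) > len(vs) / 2).astype(np.uint8)

def sim(a, b):
    """Normalized Hamming similarity (1.0 = identical, ~0.5 = unrelated)."""
    return 1.0 - np.mean(a != b)

# Encode a simple predication, e.g. "the box is left of the table":
LEFT_OF, BOX, TABLE = random_hv(), random_hv(), random_hv()
ARG1, ARG2 = random_hv(), random_hv()
sentence = bundle(LEFT_OF, bind(ARG1, BOX), bind(ARG2, TABLE))

# Unbinding the role ARG1 yields a noisy copy of its filler:
# the probe is much closer to BOX than to TABLE.
probe = bind(sentence, ARG1)
assert sim(probe, BOX) > sim(probe, TABLE)
```

Because XOR is its own inverse, `bind(sentence, ARG1)` peels the `ARG1`-bound component out of the superposition; the other bundled terms contribute only noise, which the high dimensionality keeps far from any stored symbol.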
Pages: 162-175
Page count: 14