Multi-modal actuation with the activation bit vector machine

Cited by: 2
Authors
Schmidtke, H. R. [1 ]
Institution
[1] POB 11 01 29, D-19001 Schwerin, Germany
Source
Keywords
Symbol grounding problem; Vector symbolic architectures; Action verbs; Activation bit vector machine; Context logic;
DOI
10.1016/j.cogsys.2020.10.022
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Research towards a new approach to the abstract symbol grounding problem showed that, through model counting, there is a correspondence between logical/linguistic and coordinate representations in the visuospatial domain. The logical/verbal description of a spatial layout directly gives rise to a coordinate representation that can be drawn, with the drawing reflecting what is described. The main characteristic of this logical property is that it needs no semantic information or ontology apart from a separation into symbols/words referring to relations and symbols/words referring to objects. Moreover, the complete mechanism can be implemented efficiently on a brain-inspired cognitive architecture, the Activation Bit Vector Machine (ABVM), which belongs to the family of Vector Symbolic Architectures. However, the natural language fragment captured previously was restricted to simple predication sentences, the corresponding logical fragment being atomic Context Logic (CLA), and the only actuation modality leveraged was visualization. This article extends the approach in all three respects: by adding a third category, action verbs, we move to a fragment of first-order Context Logic (CL1), with modalities requiring a temporal dimension, such as film and music, becoming available. The article presents an ABVM generating sequences of images from texts. (C) 2020 Elsevier B.V. All rights reserved.
Pages: 162-175
Page count: 14
Related papers (50 total)
  • [21] Multi-Modal Approaches for Post-Editing Machine Translation
    Herbig, Nico
    Pal, Santanu
    van Genabith, Josef
    Krueger, Antonio
    CHI 2019: PROCEEDINGS OF THE 2019 CHI CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS, 2019,
  • [22] Multi-modal Machine Learning Model for Interpretable Malware Classification
    Lisa, Fahmida Tasnim
    Islam, Sheikh Rabiul
    Kumar, Neha Mohan
    EXPLAINABLE ARTIFICIAL INTELLIGENCE, PT III, XAI 2024, 2024, 2155 : 334 - 349
  • [23] A Multi-Modal and Collaborative Human–Machine Interface for a Walking Robot
    J. Estremera
    E. Garcia
    P. Gonzalez de Santos
    Journal of Intelligent and Robotic Systems, 2002, 35 : 397 - 425
  • [24] Multi-Modal 2020: Multi-Modal Argumentation 30 Years Later
    Gilbert, Michael A.
    INFORMAL LOGIC, 2022, 42 (03): : 487 - 506
  • [25] Multi-modal Machine Learning Investigation of Telework and Transit Connections
    Deirdre Edward
    Jason Soria
    Amanda Stathopoulos
    Data Science for Transportation, 2024, 6 (2):
  • [26] Multi-modal neural machine translation with deep semantic interactions
    Su, Jinsong
    Chen, Jinchang
    Jiang, Hui
    Zhou, Chulun
    Lin, Huan
    Ge, Yubin
    Wu, Qingqiang
    Lai, Yongxuan
    INFORMATION SCIENCES, 2021, 554 : 47 - 60
  • [27] Multi-Modal Hate Speech Recognition Through Machine Learning
    Institute of Electrical and Electronics Engineers Inc.
  • [28] Machine Learning of Multi-Modal Influences on Airport Pushback Delays
    Kicinger, Rafal
    Krozel, Jimmy
    Chen, Jit-Tat
    Schelling, Steven
    AIAA AVIATION FORUM AND ASCEND 2024, 2024,
  • [29] Machine Learning Based Multi-Modal Transportation Network Planner
    Manghat, Neeraj Menon
    Gopalakrishna, Vaishak
    Bonthu, Sai
    Hunt, Victor
    Helmicki, Arthur
    McClintock, Doug
    INTERNATIONAL CONFERENCE ON TRANSPORTATION AND DEVELOPMENT 2024: TRANSPORTATION SAFETY AND EMERGING TECHNOLOGIES, ICTD 2024, 2024, : 380 - 389
  • [30] Multi-modal Hate Speech Detection using Machine Learning
    Boishakhi, Fariha Tahosin
    Shill, Ponkoj Chandra
    Alam, Md Golam Rabiul
    2021 IEEE INTERNATIONAL CONFERENCE ON BIG DATA (BIG DATA), 2021, : 4496 - 4499