Multimodal Machine Learning: A Survey and Taxonomy

Cited by: 2031
Authors
Baltrusaitis, Tadas [1 ]
Ahuja, Chaitanya [2 ]
Morency, Louis-Philippe [2 ]
Affiliations
[1] Microsoft Corp, Cambridge CB1 2FB, England
[2] Carnegie Mellon Univ, Language Technol Inst, Pittsburgh, PA 15213 USA
Funding
National Science Foundation (NSF), USA;
Keywords
Multimodal; machine learning; introductory; survey; EMOTION RECOGNITION; NEURAL-NETWORKS; SPEECH; TEXT; FUSION; VIDEO; LANGUAGE; MODELS; GENERATION; ALIGNMENT;
DOI
10.1109/TPAMI.2018.2798607
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Our experience of the world is multimodal - we see objects, hear sounds, feel texture, smell odors, and taste flavors. Modality refers to the way in which something happens or is experienced, and a research problem is characterized as multimodal when it includes multiple such modalities. In order for Artificial Intelligence to make progress in understanding the world around us, it needs to be able to interpret such multimodal signals together. Multimodal machine learning aims to build models that can process and relate information from multiple modalities. It is a vibrant multi-disciplinary field of increasing importance and with extraordinary potential. Instead of focusing on specific multimodal applications, this paper surveys the recent advances in multimodal machine learning itself and presents them in a common taxonomy. We go beyond the typical early and late fusion categorization and identify broader challenges that are faced by multimodal machine learning, namely: representation, translation, alignment, fusion, and co-learning. This new taxonomy will enable researchers to better understand the state of the field and identify directions for future research.
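The abstract contrasts the classical early/late fusion split with the survey's broader taxonomy. As a minimal, hypothetical sketch (not taken from the paper), the Python snippet below illustrates the two fusion strategies for two modalities; the module names, feature dimensions, and averaging weights are illustrative assumptions only.

# Hypothetical sketch: early vs. late fusion for two modalities
# (e.g., audio and visual features). Dimensions are assumptions.
import torch
import torch.nn as nn

class EarlyFusion(nn.Module):
    """Concatenate modality features, then learn a joint classifier."""
    def __init__(self, dim_audio=40, dim_visual=128, num_classes=6):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(dim_audio + dim_visual, 64),
            nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, audio, visual):
        joint = torch.cat([audio, visual], dim=-1)  # feature-level fusion
        return self.classifier(joint)

class LateFusion(nn.Module):
    """Score each modality separately, then average the decisions."""
    def __init__(self, dim_audio=40, dim_visual=128, num_classes=6):
        super().__init__()
        self.audio_head = nn.Linear(dim_audio, num_classes)
        self.visual_head = nn.Linear(dim_visual, num_classes)

    def forward(self, audio, visual):
        # decision-level fusion: equal-weight average of per-modality logits
        return 0.5 * (self.audio_head(audio) + self.visual_head(visual))

if __name__ == "__main__":
    audio = torch.randn(8, 40)    # batch of audio feature vectors
    visual = torch.randn(8, 128)  # batch of visual feature vectors
    print(EarlyFusion()(audio, visual).shape)  # torch.Size([8, 6])
    print(LateFusion()(audio, visual).shape)   # torch.Size([8, 6])

The survey's taxonomy goes beyond this dichotomy, also covering representation, translation, alignment, and co-learning challenges.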
Pages: 423-443
Page count: 21
Related papers
50 records in total
  • [21] Machine Learning for Multimodal Interaction: Preface
    Popescu-Belis, Andrei
    Stiefelhagen, Rainer
    Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2008, 5237 LNCS
  • [22] Machine Learning in Multimodal Medical Imaging
    Xia, Yong
    Ji, Zexuan
    Krylov, Andrey
    Chang, Hang
    Cai, Weidong
    BIOMED RESEARCH INTERNATIONAL, 2017, 2017
  • [23] Label distribution for multimodal machine learning
    Ren, Yi
    Xu, Ning
    Ling, Miaogen
    Geng, Xin
    FRONTIERS OF COMPUTER SCIENCE, 2022, 16 (01)
  • [24] Multimodal Machine Learning for Credit Modeling
    Nguyen, Cuong, V
    Das, Sanjiv R.
    He, John
    Yue, Shenghua
    Hanumaiah, Vinay
    Ragot, Xavier
    Zhang, Li
    2021 IEEE 45TH ANNUAL COMPUTERS, SOFTWARE, AND APPLICATIONS CONFERENCE (COMPSAC 2021), 2021, : 1754 - 1759
  • [27] Deep Multimodal Representation Learning: A Survey
    Guo, Wenzhong
    Wang, Jianwen
    Wang, Shiping
    IEEE ACCESS, 2019, 7 : 63373 - 63394
  • [28] Toward a taxonomy of trust for probabilistic machine learning
    Broderick, Tamara
    Gelman, Andrew
    Meager, Rachael
    Smith, Anna L.
    Zheng, Tian
    SCIENCE ADVANCES, 2023, 9 (07):
  • [29] Exploring the Landscape of Machine Unlearning: A Comprehensive Survey and Taxonomy
    Shaik, Thanveer
    Tao, Xiaohui
    Xie, Haoran
    Li, Lin
    Zhu, Xiaofeng
    Li, Qing
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024,
  • [30] Machine Learning on Graphs: A Model and Comprehensive Taxonomy
    Chami, Ines
    Abu-El-Haija, Sami
    Perozzi, Bryan
    Re, Christopher
    Murphy, Kevin
    JOURNAL OF MACHINE LEARNING RESEARCH, 2022, 23 : 1 - 64