DMRFNet: Deep Multimodal Reasoning and Fusion for Visual Question Answering and explanation generation
Cited by: 0
Authors: Zhang, Weifeng [1]; Yu, Jing [2]; Zhao, Wenhong [3]; Ran, Chuan [4]
Affiliations:
[1] Jiaxing University, Zhejiang, China
[2] Institute of Information Engineering, Chinese Academy of Sciences, Beijing, China
[3] Nanhu College, Jiaxing University, Zhejiang, China
[4] IBM Corporation, NC, United States
Source: Information Fusion | 2021, Vol. 72
Keywords: Artificial intelligence; Natural language processing systems; Visual languages
DOI: none listed
Abstract:
Visual Question Answering (VQA), which aims to answer questions posed in natural language about the content of an image, has attracted extensive attention from the artificial intelligence community. Multimodal reasoning and fusion is a central component of recent VQA models. However, most existing VQA models are still insufficient at reasoning over and fusing clues from multiple modalities. Furthermore, they lack interpretability, since they disregard explanations. We argue that reasoning over and fusing the multiple relations implied across modalities contributes to more accurate answers and explanations. In this paper, we design an effective multimodal reasoning and fusion model to achieve fine-grained multimodal reasoning and fusion. Specifically, we propose the Multi-Graph Reasoning and Fusion (MGRF) layer, which adopts pre-trained semantic relation embeddings to reason over complex spatial and semantic relations between visual objects and to fuse these two kinds of relations adaptively. MGRF layers can be further stacked in depth to form the Deep Multimodal Reasoning and Fusion Network (DMRFNet), which sufficiently reasons over and fuses multimodal relations. Furthermore, an explanation generation module is designed to justify the predicted answer. This justification reveals the motive behind the model's decision and enhances the model's interpretability. Quantitative and qualitative experimental results on the VQA 2.0 and VQA-E datasets show DMRFNet's effectiveness. © 2021 Elsevier B.V.
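The MGRF layer described in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: shapes, the question-conditioned gate, and the toy adjacency matrices are all illustrative assumptions. It only shows the general pattern the abstract names: reason over a spatial-relation graph and a semantic-relation graph separately, then fuse the two views adaptively, with the output keeping the input shape so layers can be stacked.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def relation_attention(V, A):
    """One graph-attention pass over object features V (N, d),
    with attention restricted to pairs related under adjacency A (N, N)."""
    scores = V @ V.T / np.sqrt(V.shape[1])   # pairwise object affinities
    scores = np.where(A > 0, scores, -1e9)   # mask out unrelated pairs
    return softmax(scores, axis=-1) @ V      # aggregate neighbor features

def mgrf_layer(V, A_spatial, A_semantic, q):
    """Sketch of one MGRF-style layer: reason over two relation graphs,
    then fuse the two views with a question-conditioned scalar gate
    (hypothetical gating; the paper's fusion is learned)."""
    H_sp = relation_attention(V, A_spatial)    # spatial-relation view
    H_se = relation_attention(V, A_semantic)   # semantic-relation view
    g = 1.0 / (1.0 + np.exp(-(H_sp @ q - H_se @ q)))  # per-object gate in (0, 1)
    return g[:, None] * H_sp + (1 - g[:, None]) * H_se

rng = np.random.default_rng(0)
N, d = 5, 8                       # 5 detected objects, 8-dim features (toy sizes)
V = rng.normal(size=(N, d))       # object features
A_sp = (rng.random((N, N)) > 0.5).astype(float)  # toy spatial graph
A_se = (rng.random((N, N)) > 0.5).astype(float)  # toy semantic graph
np.fill_diagonal(A_sp, 1.0)       # each object attends to itself
np.fill_diagonal(A_se, 1.0)
q = rng.normal(size=d)            # question embedding
H = mgrf_layer(V, A_sp, A_se, q)
print(H.shape)                    # same (N, d) shape as V, so layers stack
```

Because the output has the same shape as the input, `mgrf_layer` can be applied repeatedly to emulate the stacking that the abstract says forms DMRFNet.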
Pages: 70-79