Segmentation from Natural Language Expressions

Cited by: 226
Authors
Hu, Ronghang [1 ]
Rohrbach, Marcus [1 ,2 ]
Darrell, Trevor [1 ]
Affiliations
[1] Univ Calif Berkeley, EECS, Berkeley, CA 94720 USA
[2] ICSI, Berkeley, CA USA
Keywords
Natural language; Segmentation; Recurrent neural network; Fully convolutional network
DOI
10.1007/978-3-319-46448-0_7
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
In this paper we approach the novel problem of segmenting an image based on a natural language expression. This differs from traditional semantic segmentation over a predefined set of semantic classes: e.g., the phrase "two men sitting on the right bench" requires segmenting only the two people on the right bench and no one standing or sitting on another bench. Previous approaches suitable for this task were limited to a fixed set of categories and/or rectangular regions. To produce pixelwise segmentation for the language expression, we propose an end-to-end trainable recurrent and convolutional network model that jointly learns to process visual and linguistic information. In our model, a recurrent neural network encodes the referential expression into a vector representation, and a fully convolutional network extracts a spatial feature map from the image and outputs a spatial response map for the target object. We demonstrate on a benchmark dataset that our model produces high-quality segmentation output from the natural language expression, and outperforms baseline methods by a large margin.
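The fusion stage the abstract describes (a sentence embedding from the recurrent network combined with a spatial feature map from the fully convolutional network, yielding a per-pixel response) can be sketched by tiling the language vector over every spatial location, concatenating along the channel axis, and applying a per-pixel linear classifier. This is a minimal NumPy sketch of that idea, not the authors' implementation; all function names and shapes here are illustrative:

```python
import numpy as np

def fuse_language_and_vision(lang_vec, feat_map):
    """Tile a sentence embedding over every spatial location of a
    visual feature map and concatenate along the channel axis.

    lang_vec: (D_t,) sentence embedding (e.g. from an LSTM).
    feat_map: (D_v, H, W) spatial feature map (e.g. from an FCN).
    Returns a (D_t + D_v, H, W) array: one fused vector per pixel.
    """
    d_t = lang_vec.shape[0]
    _, h, w = feat_map.shape
    tiled = np.broadcast_to(lang_vec[:, None, None], (d_t, h, w))
    return np.concatenate([tiled, feat_map], axis=0)

def response_map(fused, weights, bias):
    """A 1x1-convolution-style per-pixel linear classifier that turns the
    fused map into a single-channel spatial response map.

    weights: (D_t + D_v,) classifier weights, bias: scalar.
    """
    return np.tensordot(weights, fused, axes=([0], [0])) + bias
```

Upsampling the (H, W) response map back to the input resolution and thresholding it would then give the final pixelwise segmentation.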
Pages: 108 - 124
Page count: 17
Related Papers
50 items total
  • [21] A method of extracting and evaluating good and bad reputations for natural language expressions
    Fuketa, M
    Kadoya, Y
    Atlam, E
    Kunikata, T
    Morita, K
    Kashiji, S
    Aoe, JI
    INTERNATIONAL JOURNAL OF INFORMATION TECHNOLOGY & DECISION MAKING, 2005, 4 (02) : 177 - 196
  • [22] Augmenting an Answer Set Based Controlled Natural Language with Temporal Expressions
    Schwitter, Rolf
    PRICAI 2019: TRENDS IN ARTIFICIAL INTELLIGENCE, PT I, 2019, 11670 : 500 - 513
  • [23] Dynamic Multimodal Instance Segmentation Guided by Natural Language Queries
    Margffoy-Tuay, Edgar
    Perez, Juan C.
    Botero, Emilio
    Arbelaez, Pablo
    COMPUTER VISION - ECCV 2018, PT XI, 2018, 11215 : 656 - 672
  • [24] Human Awareness Viewed from Natural Language Concept Formation Focusing on affective words related to facial expressions
    Yokota, Masao
    2013 IEEE INTERNATIONAL CONFERENCE ON CYBERNETICS (CYBCONF), 2013,
  • [25] Application of natural language modeling techniques in natural gas segmentation in seismic reflection images
    de Mello, Henrique Ribeiro
    de Paiva, Anselmo Cardoso
    Silva, Aristófanes Correa
    Braz Junior, Geraldo
    de Almeida, João Dallyson Sousa
    Quintanilha, Darlan Bruno Pontes
    Gattass, Marcelo
    NEURAL COMPUTING AND APPLICATIONS, 2025, 37 (4): 2383 - 2409
  • [26] Semantic expansion of geographic web queries based on natural language positioning expressions
    Department of Computer Science, Federal University of Minas Gerais, Minas Gerais, Brazil
    TRANSACTIONS IN GIS, 2007, (3): 377 - 397
  • [27] Applying semantic knowledge to the automatic processing of temporal expressions and events in natural language
    Llorens, Hector
    Saquete, Estela
    Navarro-Colorado, Borja
    INFORMATION PROCESSING & MANAGEMENT, 2013, 49 (01) : 179 - 197
  • [28] Detection of Expressions of Violence Targeting Health Workers with Natural Language Processing Techniques
    Arisoy, Merve Varol
    Yalcinkaya, Mehmet Ali
    Gurfidan, Remzi
    Arisoy, Ayhan
    APPLIED SCIENCES-BASEL, 2025, 15 (04):
  • [29] Self-supervised Meta Auxiliary Learning for Actor and Action Video Segmentation from Natural Language
    Ye, Linwei
    Wang, Zhenhua
    ARTIFICIAL INTELLIGENCE, CICAI 2023, PT I, 2024, 14473 : 317 - 328
  • [30] Segmentation According to Natural Examples: Learning Static Segmentation from Motion Segmentation
    Ross, Michael G.
    Kaelbling, Leslie Pack
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2009, 31 (04) : 661 - 676