Multi-modal spatial querying

Cited: 0
Author
Egenhofer, MJ
Institution
Keywords
DOI
None
CLC Number
TP7 [Remote Sensing Technology];
Subject Classification Codes
081102 ; 0816 ; 081602 ; 083002 ; 1404 ;
Abstract
People who use multiple channels at the same time communicate more successfully about spatial problems than those who rely exclusively on either voice or pictures. To achieve a similarly successful interaction between a person and a geographic information system (GIS), we use two concurrent communication channels, graphics and speech, to construct a multimodal spatial query language in which users interact with a geographic database by drawing sketches of the desired configuration while simultaneously talking about the spatial objects and spatial relations drawn. Through the combined use of graphics and speech, more intuitive and more precise specifications of spatial queries become possible. The key to this interaction is the exploitation of complementary or redundant information present in both the graphical and the verbal descriptions of the same spatial scenes. A multiple-resolution model of spatial relations captures the essential aspects of a sketch and its corresponding verbal description. The model stresses topological properties, such as containment and neighborhood, and considers metrical properties, such as distances and directions, as refinements where necessary. This model enables the retrieval of similar, not only exact, matches between a spatial query and a geographic database. Such new methods of multimodal spatial querying and spatial similarity retrieval will empower experts as well as novice users to perform spatial searches more easily, ultimately providing new user communities with access to spatial databases.
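To make the multiple-resolution idea concrete, here is a minimal, hypothetical Python sketch, not the paper's actual formalism (which builds on intersection-based topological relations). The names Rect, topological_relation, NEIGHBORS, and relation_distance are all illustrative assumptions: the code classifies a coarse topological relation between two axis-aligned rectangles and scores dissimilarity as shortest-path distance in a small conceptual-neighborhood graph, so near-miss configurations rank just below exact matches.

```python
from collections import deque
from dataclasses import dataclass


@dataclass(frozen=True)
class Rect:
    """Axis-aligned rectangle standing in for a sketched region."""
    xmin: float
    ymin: float
    xmax: float
    ymax: float


def topological_relation(a: Rect, b: Rect) -> str:
    """Coarse topological relation between two rectangles (a much-simplified
    stand-in for intersection-based region relations)."""
    if a.xmax < b.xmin or b.xmax < a.xmin or a.ymax < b.ymin or b.ymax < a.ymin:
        return "disjoint"  # no shared points
    if a.xmax == b.xmin or b.xmax == a.xmin or a.ymax == b.ymin or b.ymax == a.ymin:
        return "meet"      # boundaries touch, interiors do not intersect
    if (a.xmin, a.ymin, a.xmax, a.ymax) == (b.xmin, b.ymin, b.xmax, b.ymax):
        return "equal"
    if a.xmin <= b.xmin and a.ymin <= b.ymin and b.xmax <= a.xmax and b.ymax <= a.ymax:
        return "contains"
    if b.xmin <= a.xmin and b.ymin <= a.ymin and a.xmax <= b.xmax and a.ymax <= b.ymax:
        return "inside"
    return "overlap"


# Hypothetical conceptual-neighborhood graph: an edge joins relations that a
# small deformation can turn into one another, so graph distance approximates
# dissimilarity. (The edge set is a simplification chosen for this sketch.)
NEIGHBORS = {
    "disjoint": {"meet"},
    "meet": {"disjoint", "overlap"},
    "overlap": {"meet", "contains", "inside", "equal"},
    "contains": {"overlap", "equal"},
    "inside": {"overlap", "equal"},
    "equal": {"overlap", "contains", "inside"},
}


def relation_distance(r1: str, r2: str) -> int:
    """Shortest-path distance between relations (0 means identical)."""
    if r1 == r2:
        return 0
    seen, frontier = {r1}, deque([(r1, 0)])
    while frontier:
        rel, d = frontier.popleft()
        for nxt in NEIGHBORS[rel]:
            if nxt == r2:
                return d + 1
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, d + 1))
    return len(NEIGHBORS)  # not reached: the graph above is connected


if __name__ == "__main__":
    sketched = topological_relation(Rect(0, 0, 4, 4), Rect(1, 1, 3, 3))  # contains
    stored = topological_relation(Rect(0, 0, 4, 4), Rect(2, 2, 6, 6))    # overlap
    # Distance 1: the stored scene is a near miss, ranked just below an exact match.
    print(sketched, stored, relation_distance(sketched, stored))
```

Keeping the topological relation as the primary key and treating metric properties (distances, directions) as optional refinements is what lets a query like this return similar, not only exact, configurations.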
Pages: 785-799
Page count: 15
Related Papers
50 records in total
  • [1] Physical Querying with Multi-Modal Sensing
    Baek, Iljoo
    Stine, Taylor
    Dash, Denver
    Xiao, Fanyi
    Sheikh, Yaser
    Movshovitz-Attias, Yair
    Chen, Mei
    Hebert, Martial
    Kanade, Takeo
    2014 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV), 2014, : 183 - 190
  • [2] Indescribable Multi-modal Spatial Evaluator
    Kong, Lingke
    Qi, X. Sharon
    Shen, Qijin
    Wang, Jiacheng
    Zhang, Jingyi
    Hu, Yanle
    Zhou, Qichao
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023, : 9853 - 9862
  • [3] QuMinS: Fast and scalable querying, mining and summarizing multi-modal databases
    Cordeiro, Robson L. F.
    Guo, Fan
    Haverkamp, Donna S.
    Horne, James H.
    Hughes, Ellen K.
    Kim, Gunhee
    Romani, Luciana A. S.
    Coltri, Priscila P.
    Souza, Tamires T.
    Traina, Agma J. M.
    Traina, Caetano, Jr.
    Faloutsos, Christos
    INFORMATION SCIENCES, 2014, 264 : 211 - 229
  • [4] CODESPIDER: Automatic Code Querying with Multi-modal Conjunctive Query Synthesis
    Wang, Chengpeng
    COMPANION PROCEEDINGS OF THE 2022 ACM SIGPLAN INTERNATIONAL CONFERENCE ON SYSTEMS, PROGRAMMING, LANGUAGES, AND APPLICATIONS: SOFTWARE FOR HUMANITY, SPLASH COMPANION 2022, 2022, : 63 - 65
  • [5] Spatial mapping of multi-modal data in neuroscience
    Hawrylycz, Mike
    Sunkin, Susan
    Ng, Lydia
    METHODS, 2015, 73 : 1 - 3
  • [6] Multi-modal examination of spatial heterogeneity in the astrocytoma microenvironment
    Moffet, Joel
    Kriel, Jurgen
    Lu, Tianyao
    Freytag, Lutz
    Whittle, James
    Best, Sarah
    Freytag, Saskia
    CANCER RESEARCH, 2024, 84 (06)
  • [7] Multi-modal examination of spatial heterogeneity in the astrocytoma microenvironment
    Lu, T.
    Freytag, L.
    Moffet, J.
    Kriel, J.
    Whittle, J.
    Freytag, S.
    Best, S. A.
    NEURO-ONCOLOGY, 2024, 26 : V23 - V24
  • [8] Multi-modal anchor adaptation learning for multi-modal summarization
    Chen, Zhongfeng
    Lu, Zhenyu
    Rong, Huan
    Zhao, Chuanjun
    Xu, Fan
    NEUROCOMPUTING, 2024, 570
  • [9] Flexible Dual Multi-Modal Hashing for Incomplete Multi-Modal Retrieval
    Wei, Yuhong
    An, Junfeng
    INTERNATIONAL JOURNAL OF IMAGE AND GRAPHICS, 2024,
  • [10] Multi-Modal 2020: Multi-Modal Argumentation 30 Years Later
    Gilbert, Michael A.
    INFORMAL LOGIC, 2022, 42 (03): : 487 - 506