Selective learning of spatial configuration and object identity in visual search

Cited by: 0
Authors
Nobutaka Endo
Yuji Takeda
Affiliations
[1] National Institute of Advanced Industrial Science and Technology (AIST), Visual Cognition Group, Institute for Human Science and Biomedical Engineering
Keywords
Conditioned Stimulus; Target Location; Spatial Configuration; Object Identity; Repetition Condition
DOI
Not available
Abstract
To conduct an efficient visual search, visual attention must be guided to a target appropriately. Previous studies have suggested that attention can be quickly guided to a target when the spatial configurations of search objects or the object identities have been repeated. This phenomenon is termed contextual cuing. In this study, we investigated the effect of learning spatial configurations, object identities, and a combination of both configurations and identities on visual search. The results indicated that participants could learn the contexts of spatial configurations, but not of object identities, even when both configurations and identities were completely correlated (Experiment 1). On the other hand, when only object identities were repeated, an effect of identity learning could be observed (Experiment 2). Furthermore, an additive effect of configuration learning and identity learning was observed when, in some trials, each context was the relevant cue for predicting the target (Experiment 3). Participants could learn only the context that was associated with target location (Experiment 4). These findings indicate that when multiple contexts are redundant, contextual learning occurs selectively, depending on the predictability of the target location.
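As an illustration of the repetition manipulation described in the abstract, the following Python sketch generates search displays in which a fixed set of "repeated" displays (their spatial configurations, object identities, and target locations held constant across blocks) is intermixed with freshly generated "new" displays on every block. This is a hypothetical sketch, not the authors' materials; the grid size, set size, and identity set (GRID, N_ITEMS, IDENTITIES) are assumptions chosen only for illustration.

```python
import random

# Hypothetical sketch of the repetition manipulation (not the authors' code).
# "Repeated" displays keep the same spatial configuration, object identities,
# and target location on every block, so the context predicts the target;
# "new" displays are regenerated each block.

GRID = [(x, y) for x in range(8) for y in range(6)]  # assumed display grid
N_ITEMS = 12                                          # assumed set size
IDENTITIES = list("ABCDEFGHJKLM")                     # assumed distractor identities


def make_display(rng: random.Random) -> dict:
    """Build one search display: item locations, item identities, target index."""
    locations = rng.sample(GRID, N_ITEMS)
    identities = rng.sample(IDENTITIES, N_ITEMS)
    target_index = rng.randrange(N_ITEMS)
    return {"locations": locations, "identities": identities, "target": target_index}


def make_block_generator(n_repeated: int = 8, n_new: int = 8, seed: int = 0):
    """Return a function that yields one shuffled block of trials per call."""
    rng = random.Random(seed)
    repeated = [make_display(rng) for _ in range(n_repeated)]  # fixed across blocks

    def next_block() -> list:
        new = [make_display(rng) for _ in range(n_new)]        # fresh every block
        trials = repeated + new
        rng.shuffle(trials)
        return trials

    return next_block


if __name__ == "__main__":
    next_block = make_block_generator()
    for block_number in range(3):
        print(f"block {block_number}: {len(next_block())} trials")
```

Under these assumptions, the repeated displays correspond to the fully correlated case of Experiment 1 (configuration and identity both predict the target location); decorrelating the two contexts, as in the later experiments, would amount to re-sampling identities independently of locations on repeated trials.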
Pages: 293-302
Number of pages: 9