Attention-based texture segregation

Cited by: 0
Authors
Thomas V. Papathomas
Andrei Gorea
Akos Feher
Tiffany E. Conway
Affiliations
[1] Rutgers University, Laboratory of Vision Research
[2] CNRS, Laboratoire de Psychologie Expérimentale
[3] René Descartes University
Source
Perception & Psychophysics | 1999 / Volume 61
Keywords
Visual Search; Luminance Contrast; Orientation Contrast; Chromatic Contrast; Texture Segregation;
DOI
Not available
Abstract
Luminance- or color-defined ±45°-oriented bars were arranged to yield single-feature or double-conjunction texture pairs. In the former, the global edge between two regions is formed by differences in one attribute (orientation, or color, or luminance). In the color/orientation double-conjunction pair, one region has +45° red and −45° green textels, the other −45° red and +45° green textels (the luminance/orientation double-conjunction pair is similar); such a pair contains a single-feature orientation edge in the subset of red (or green) textels, and a color edge in the subset of +45° (or −45°) textels. We studied whether edge detection improved when observers were instructed to attend to such subsets. Two groups of observers participated: in the test group, the stimulus construction was explained to observers, and they were cued to attend to one subset. The control group ran through the same total number of sessions without explanations or cues. The effect of cuing was weak but statistically significant. Feature cuing was more effective for color/orientation than for luminance/orientation conjunctions. Within each stimulus category, performance was nearly the same regardless of which subset was attended to. On average, a global performance improvement occurred over time even without cuing, but some observers did not improve with either cuing or practice. We discuss these results in the context of one- versus two-stage segregation theories, as well as by reference to signal enhancement versus noise suppression. We conclude that texture segregation can be improved by attentional strategies aimed at isolating specific stimulus features.
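To make the stimulus construction concrete, the following sketch (a hypothetical illustration in Python/NumPy, not code from the study; the function and variable names are assumptions) builds a color/orientation double-conjunction pair as described above and shows that restricting analysis to the red subset leaves a pure single-feature orientation edge.

# Hypothetical sketch of the double-conjunction texture pair described
# in the abstract; names and layout are illustrative, not from the paper.
import numpy as np

def make_double_conjunction(rows=8, cols=16, seed=0):
    """Left region: +45 deg red and -45 deg green textels.
    Right region: -45 deg red and +45 deg green textels.
    Colors are assigned at random, so neither color nor orientation
    alone distinguishes the two regions."""
    rng = np.random.default_rng(seed)
    is_red = rng.random((rows, cols)) < 0.5      # color of each textel
    left = np.arange(cols) < cols // 2           # region membership per column
    # Orientation depends on the conjunction of color and region.
    ori = np.where(is_red == left[np.newaxis, :], +45, -45)
    color = np.where(is_red, 'R', 'G')
    return ori, color

ori, color = make_double_conjunction()

# Pooled over color, each region contains both orientations; pooled over
# orientation, each region contains both colors, so no single feature
# marks the global boundary. Attending to the red subset, however,
# exposes a single-feature orientation edge:
red_ori = np.where(color == 'R', ori, 0)   # 0 marks non-red textels
print(red_ori[0])  # red textels: +45 in the left half, -45 in the right half

Analogously, selecting only the +45° (or −45°) textels leaves a pure color edge across the same boundary, which is the subset structure observers in the test group were cued to exploit.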
Pages: 1399–1410
Page count: 11