Unsupervised Visual Attention and Invariance for Reinforcement Learning

Cited by: 14
Authors:
Wang, Xudong [1 ]
Lian, Long [1 ]
Yu, Stella X. [1 ]
Affiliation:
[1] Univ Calif Berkeley, ICSI, Berkeley, CA 94720 USA
DOI:
10.1109/CVPR46437.2021.00661
CLC classification: TP18 [Artificial Intelligence Theory]
Subject classification codes: 081104; 0812; 0835; 1405
Abstract
Vision-based reinforcement learning (RL) is successful, but how to generalize it to unknown test environments remains challenging. Existing methods focus on training an RL policy that is universal to changing visual domains, whereas we focus on extracting visual foreground that is universal, feeding clean invariant vision to the RL policy learner. Our method is completely unsupervised, without manual annotations or access to environment internals. Given videos of actions in a training environment, we learn how to extract foregrounds with unsupervised keypoint detection, followed by unsupervised visual attention to automatically generate a foreground mask per video frame. We can then introduce artificial distractors and train a model to reconstruct the clean foreground mask from noisy observations. Only this learned model is needed during test to provide distraction-free visual input to the RL policy learner. Our Visual Attention and Invariance (VAI) method significantly outperforms the state-of-the-art on visual domain generalization, gaining 15~49% (61~229%) more cumulative rewards per episode on DeepMind Control (our Drawer-World Manipulation) benchmarks. Our results demonstrate that it is not only possible to learn domain-invariant vision without any supervision, but freeing RL from visual distractions also makes the policy more focused and thus far better.
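The training idea sketched in the abstract — overlay artificial distractors on clean observations, then fit a model to recover the distraction-free foreground mask — can be illustrated with a toy example. The sketch below is not the authors' VAI implementation (which uses unsupervised keypoint detection, visual attention, and a learned reconstruction network); it is a minimal NumPy stand-in in which the "model" is a per-pixel logistic classifier, and the frame sizes, distractor intensities, and learning rate are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_sample(h=16, w=16):
    """A clean foreground mask (a bright square) plus artificial distractor patches."""
    mask = np.zeros((h, w))                # clean foreground mask (training target)
    y, x = rng.integers(0, h - 4), rng.integers(0, w - 4)
    mask[y:y + 4, x:x + 4] = 1.0
    frame = mask.copy()                    # observation starts as the clean frame
    for _ in range(3):                     # overlay dimmer random distractor patches
        dy, dx = rng.integers(0, h - 3), rng.integers(0, w - 3)
        frame[dy:dy + 3, dx:dx + 3] = np.maximum(frame[dy:dy + 3, dx:dx + 3], 0.4)
    return frame, mask

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Build a batch of (distractor-corrupted observation, clean mask) pairs.
frames, masks = zip(*(make_sample() for _ in range(100)))
X = np.concatenate([f.ravel() for f in frames])
Y = np.concatenate([m.ravel() for m in masks])

# Toy "model": per-pixel logistic regression p(foreground) = sigmoid(w * intensity + b),
# trained to reconstruct the clean mask from the noisy frame.
w = b = 0.0
for _ in range(1000):
    p = sigmoid(w * X + b)
    w -= 2.0 * np.mean((p - Y) * X)       # full-batch gradient step on logistic loss
    b -= 2.0 * np.mean(p - Y)

# At test time, only this learned model is applied to new noisy observations.
frame, mask = make_sample()
pred = (sigmoid(w * frame + b) > 0.5).astype(float)
```

Because foreground pixels (intensity 1.0) and distractors (0.4) are linearly separable here, the learned decision threshold lands between them and `pred` recovers the clean `mask` despite the distractors; the real method instead learns masks from keypoint-derived attention, with no hand-set intensity gap.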
Pages: 6673 - 6683
Number of pages: 11
Related papers (50 in total)
  • [21] Unsupervised learning of visual structure
    Edelman, S
    Intrator, N
    Jacobson, JS
    BIOLOGICALLY MOTIVATED COMPUTER VISION, PROCEEDINGS, 2002, 2525 : 629 - 642
  • [22] Unsupervised learning of visual taxonomies
    Bart, Evgeniy
    Porteous, Ian
    Perona, Pietro
    Welling, Max
    2008 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, VOLS 1-12, 2008, : 2166 - +
  • [23] DeepRare: Generic Unsupervised Visual Attention Models
    Kong, Phutphalla
    Mancas, Matei
    Gosselin, Bernard
    Po, Kimtho
    ELECTRONICS, 2022, 11 (11)
  • [24] Delving into Inter-Image Invariance for Unsupervised Visual Representations
    Xie, Jiahao
    Zhan, Xiaohang
    Liu, Ziwei
    Ong, Yew-Soon
    Loy, Chen Change
    INTERNATIONAL JOURNAL OF COMPUTER VISION, 2022, 130 : 2994 - 3013
  • [25] Delving into Inter-Image Invariance for Unsupervised Visual Representations
    Xie, Jiahao
    Zhan, Xiaohang
    Liu, Ziwei
    Ong, Yew-Soon
    Loy, Chen Change
    INTERNATIONAL JOURNAL OF COMPUTER VISION, 2022, 130 (12) : 2994 - 3013
  • [26] An Improved Reinforcement Learning Method Based on Unsupervised Learning
    Chang, Xin
    Li, Yanbin
    Zhang, Guanjie
    Liu, Donghui
    Fu, Changjun
    IEEE ACCESS, 2024, 12 : 12295 - 12307
  • [27] Focus of attention in reinforcement learning
    Li, Lihong
    Bulitko, Vadim
    Greiner, Russell
    JOURNAL OF UNIVERSAL COMPUTER SCIENCE, 2007, 13 (09) : 1246 - 1269
  • [28] Unsupervised Representation Learning in Deep Reinforcement Learning: A Review
    Botteghi, Nicolo
    Poel, Mannes
    Brune, Christoph
    IEEE CONTROL SYSTEMS MAGAZINE, 2025, 45 (02): : 26 - 68
  • [29] A brainlike learning system with supervised, unsupervised, and reinforcement learning
    Sasakawa, Takafumi
    Hu, Jinglu
    Hirasawa, Kotaro
    ELECTRICAL ENGINEERING IN JAPAN, 2008, 162 (01) : 32 - 39
  • [30] Unsupervised Learning for Robust Fitting: A Reinforcement Learning Approach
    Truong, Giang
    Le, Huu
    Suter, David
    Zhang, Erchuan
    Gilani, Syed Zulqarnain
    2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021, 2021, : 10343 - 10352