Single Stage Virtual Try-On Via Deformable Attention Flows

Cited by: 32
Authors
Bai, Shuai [1 ]
Zhou, Huiling [1 ]
Li, Zhikang [1 ]
Zhou, Chang [1 ]
Yang, Hongxia [1 ]
Affiliations
[1] Alibaba Grp, DAMO Acad, Hangzhou, Peoples R China
Source
COMPUTER VISION - ECCV 2022
Keywords
Virtual try-on; Single stage; Deformable attention flows
DOI
10.1007/978-3-031-19784-0_24
CLC classification number
TP18 [Artificial Intelligence Theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
Virtual try-on aims to generate a photo-realistic fitting result given an in-shop garment and a reference person image. Existing methods usually build multi-stage frameworks that handle clothes warping and body blending separately, or rely heavily on intermediate parser-based labels, which may be noisy or even inaccurate. To address these challenges, we propose a single-stage try-on framework based on a novel Deformable Attention Flow (DAFlow), which applies the deformable attention scheme to multi-flow estimation. With only pose keypoints as guidance, self- and cross-deformable attention flows are estimated for the reference person and the garment images, respectively. By sampling multiple flow fields, feature-level and pixel-level information from different semantic areas is simultaneously extracted and merged through the attention mechanism. This enables clothes warping and body synthesis at the same time, leading to photo-realistic results in an end-to-end manner. Extensive experiments on two try-on datasets demonstrate that our proposed method achieves state-of-the-art performance both qualitatively and quantitatively. Furthermore, additional experiments on two other image editing tasks illustrate the versatility of our method for multi-view synthesis and image animation. Code will be made available at https://github.com/OFA-Sys/DAFlow.
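
To make the flow-plus-attention idea concrete, the following is a minimal PyTorch sketch of warping source features with several sampled flow fields and merging the warped copies with per-pixel attention weights. It is an illustration under our own simplifications, not the released implementation: the helper names warp_with_flow and deformable_attention_flow, the tensor shapes, and the single-scale setting are assumptions; see the repository linked above for the actual code.

# Minimal illustrative sketch (not the official DAFlow code): warp source
# features with K sampled flow fields and merge them via attention weights.
import torch
import torch.nn.functional as F

def warp_with_flow(feat, flow):
    """Warp feat (B, C, H, W) by a dense pixel-offset field flow (B, 2, H, W)."""
    b, _, h, w = feat.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, device=feat.device, dtype=feat.dtype),
        torch.arange(w, device=feat.device, dtype=feat.dtype),
        indexing="ij",
    )
    base = torch.stack((xs, ys), dim=0).unsqueeze(0)        # (1, 2, H, W), x then y
    coords = base + flow                                    # displaced sampling positions
    # Normalize coordinates to [-1, 1] as required by grid_sample.
    gx = 2.0 * coords[:, 0] / max(w - 1, 1) - 1.0
    gy = 2.0 * coords[:, 1] / max(h - 1, 1) - 1.0
    grid = torch.stack((gx, gy), dim=-1)                    # (B, H, W, 2)
    return F.grid_sample(feat, grid, align_corners=True)

def deformable_attention_flow(src_feat, flows, attn_logits):
    """Merge K warped copies of src_feat using per-pixel attention.

    src_feat:    (B, C, H, W) source features (e.g. the garment branch)
    flows:       (B, K, 2, H, W) K sampled flow fields
    attn_logits: (B, K, H, W) unnormalized attention over the K samples
    """
    k = flows.shape[1]
    warped = torch.stack(
        [warp_with_flow(src_feat, flows[:, i]) for i in range(k)], dim=1
    )                                                        # (B, K, C, H, W)
    attn = torch.softmax(attn_logits, dim=1).unsqueeze(2)    # (B, K, 1, H, W)
    return (attn * warped).sum(dim=1)                        # (B, C, H, W)

# Example usage with toy tensors; in practice the flows and attention logits
# would be predicted by the flow-estimation network.
if __name__ == "__main__":
    b, c, h, w, k = 1, 16, 32, 24, 4
    feat = torch.randn(b, c, h, w)
    flows = torch.randn(b, k, 2, h, w)
    logits = torch.randn(b, k, h, w)
    out = deformable_attention_flow(feat, flows, logits)
    print(out.shape)  # torch.Size([1, 16, 32, 24])
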
Pages: 409-425
Number of pages: 17
Related papers
50 items in total
  • [1] Parser-Free Virtual Try-on via Distilling Appearance Flows
    Ge, Yuying
    Song, Yibing
    Zhang, Ruimao
    Ge, Chongjian
    Liu, Wei
    Luo, Ping
    2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021, 2021, : 8481 - 8489
  • [2] Attention-based Video Virtual Try-On
    Tsai, Wen-Jiin
    Tien, Yi-Cheng
    PROCEEDINGS OF THE 2023 ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA RETRIEVAL, ICMR 2023, 2023, : 209 - 216
  • [3] Image-based Virtual Try-on via Channel Attention and Appearance Flow
    He, Chao
    Liu, Rong
    E, Jinxuan
    Liu, Ming
    2024 5TH INTERNATIONAL CONFERENCE ON COMPUTING, NETWORKS AND INTERNET OF THINGS, CNIOT 2024, 2024, : 198 - 203
  • [4] Virtual try-on based on attention U-Net
    Hu, Xinrong
    Zhang, Junyu
    Huang, Jin
    Liang, JinXing
    Yu, Feng
    Peng, Tao
VISUAL COMPUTER, 2022, 38 (9-10): 3365 - 3376
  • [5] SVTON: Simplified Virtual Try-On
    Islam, Tasin
    Miron, Alina
    Liu, XiaoHui
    Li, Yongmin
    2022 21ST IEEE INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND APPLICATIONS, ICMLA, 2022, : 369 - 374
  • [6] Powering Virtual Try-On via Auxiliary Human Segmentation Learning
    Ayush, Kumar
    Jandial, Surgan
    Chopra, Ayush
    Krishnamurthy, Balaji
    2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS (ICCVW), 2019, : 3193 - 3196
  • [7] Size Does Matter: Size-aware Virtual Try-on via Clothing-oriented Transformation Try-on Network
    Chen, Chieh-Yun
    Chen, Yi-Chung
    Shuai, Hong-Han
    Cheng, Wen-Huang
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION, ICCV, 2023, : 7479 - 7488
  • [8] Regularized Adversarial Training for Single-shot Virtual Try-On
    Kikuchi, Kotaro
    Yamaguchi, Kota
    Simo-Serra, Edgar
    Kobayashi, Tetsunori
    2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS (ICCVW), 2019, : 3149 - 3152
  • [9] TOAC: Try-On Aligning Conformer for Image-Based Virtual Try-On Alignment
    Wang, Yifei
    Xiang, Wang
    Zhang, Shengjie
    Xue, Dizhan
    Qian, Shengsheng
    ARTIFICIAL INTELLIGENCE, CICAI 2023, PT II, 2024, 14474 : 29 - 40