Consistent Arbitrary Style Transfer Using Consistency Training and Self-Attention Module

Cited by: 3
Authors
Zhou, Zheng [1 ]
Wu, Yue [2 ]
Zhou, Yicong [1 ]
Affiliations
[1] Univ Macau, Dept Comp & Informat Sci, Taipa, Macao, Peoples R China
[2] Amazon Alexa Nat Understanding, Manhattan Beach, CA 90007 USA
Keywords
Image color analysis; Adaptation models; Transformers; Learning systems; Visualization; Training; Loss measurement; Arbitrary style transfer (AST); consistent training; self-attention (SA); style inconsistency
DOI
10.1109/TNNLS.2023.3298383
CLC number
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Arbitrary style transfer (AST) has garnered considerable attention for its ability to apply an unlimited range of styles to content images. Although existing methods achieve impressive results, they may overlook style consistency and fail to capture crucial style patterns, so that minor disturbances to the style input lead to inconsistent style transfer (ST). To tackle this issue, we conduct a mathematical analysis of inconsistent ST and develop a style inconsistency measure (SIM) to quantify the inconsistency between generated images. Moreover, we propose a consistent AST (CAST) framework that effectively captures and transfers essential style features into content images. The CAST framework incorporates an intersection-over-union-preserving crop (IoUPC) module to obtain style pairs that differ only by a minor disturbance, a self-attention (SA) module to learn the crucial style features, and a style inconsistency loss regularization (SILR) to facilitate consistent feature learning for consistent stylization. The framework not only provides an effective solution for consistent ST; existing AST methods also achieve more consistent stylization when embedded into it. Extensive experiments demonstrate that CAST effectively transfers style patterns while preserving consistency and achieves state-of-the-art performance.
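Since the record stops at the abstract, the following is a minimal PyTorch sketch of the three components it names, intended only for orientation. The crop geometry of IoUPC, the Gram-matrix form used here to measure style inconsistency, and the attention layout are all assumptions for illustration; the paper defines its own SIM, SA module, and SILR.

```python
# Minimal sketch of the abstract's three components. All implementation
# details (crop geometry, Gram-matrix inconsistency, layer sizes) are
# assumptions for illustration, not the authors' exact design.
import torch
import torch.nn as nn
import torch.nn.functional as F


def ioup_crop_pair(style, crop=224, min_iou=0.7, max_tries=50):
    """IoUPC (assumed behavior): sample two random crops of a style image
    whose boxes overlap with IoU >= min_iou, yielding a style pair that
    differs only by a minor disturbance."""
    _, _, h, w = style.shape
    y1 = torch.randint(h - crop + 1, (1,)).item()
    x1 = torch.randint(w - crop + 1, (1,)).item()
    for _ in range(max_tries):
        y2 = torch.randint(h - crop + 1, (1,)).item()
        x2 = torch.randint(w - crop + 1, (1,)).item()
        # IoU of two equal-size axis-aligned crop boxes
        inter = max(0, crop - abs(y1 - y2)) * max(0, crop - abs(x1 - x2))
        if inter / (2 * crop * crop - inter) >= min_iou:
            break  # falls back to the last sample if never reached
    return (style[:, :, y1:y1 + crop, x1:x1 + crop],
            style[:, :, y2:y2 + crop, x2:x2 + crop])


class SelfAttention(nn.Module):
    """Standard self-attention over feature maps (assumed form of the SA module)."""
    def __init__(self, c):
        super().__init__()
        self.q = nn.Conv2d(c, c // 8, 1)
        self.k = nn.Conv2d(c, c // 8, 1)
        self.v = nn.Conv2d(c, c, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.q(x).flatten(2).transpose(1, 2)              # B x HW x C/8
        k = self.k(x).flatten(2)                              # B x C/8 x HW
        attn = F.softmax(q @ k / (q.shape[-1] ** 0.5), dim=-1)
        v = self.v(x).flatten(2)                              # B x C x HW
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return x + self.gamma * out


def gram(f):
    b, c, h, w = f.shape
    f = f.flatten(2)
    return f @ f.transpose(1, 2) / (c * h * w)


def silr_loss(feats_a, feats_b):
    """SILR (assumed form): penalize style inconsistency between the two
    stylized outputs produced from the IoUPC pair, measured here as the
    Gram-matrix distance of their encoder features."""
    return sum(F.mse_loss(gram(fa), gram(fb)) for fa, fb in zip(feats_a, feats_b))
```

In training, one would stylize a content image with both crops from ioup_crop_pair and apply silr_loss to the encoder features of the two outputs, alongside the usual content and style losses.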
Pages
16845-16856 (12 pages)
Related papers
50 in total
  • [21] Unsupervised image-to-image translation by semantics consistency and self-attention
    Zhang, Zhibin
    Xue, Wanli
    Fu, Guokai
    OPTOELECTRONICS LETTERS, 2022, 18 (03) : 175 - 180
  • [22] On Recognizing Texts of Arbitrary Shapes with 2D Self-Attention
    Lee, Junyeop
    Park, Sungrae
    Baek, Jeonghun
    Oh, Seong Joon
    Kim, Seonghyeon
    Lee, Hwalsuk
    2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS (CVPRW 2020), 2020, : 2326 - 2335
  • [23] Preserving Global and Local Temporal Consistency for Arbitrary Video Style Transfer
    Wu, Xinxiao
    Chen, Jialu
    MM '20: PROCEEDINGS OF THE 28TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, 2020, : 1791 - 1799
  • [24] Self-attention transfer networks for speech emotion recognition
    Zhao, Ziping
    Wang, Keru
    Bao, Zhongtian
    Zhang, Zixing
    Cummins, Nicholas
    Sun, Shihuang
    Wang, Haishuai
    Tao, Jianhua
    Schuller, Björn W.
    VIRTUAL REALITY & INTELLIGENT HARDWARE, 2021, 3 (01) : 43 - 54
  • [25] Att-Net: Enhanced emotion recognition system using lightweight self-attention module
    Mustaqeem
    Kwon, Soonil
    APPLIED SOFT COMPUTING, 2021, 102
  • [26] AdaAttN: Revisit Attention Mechanism in Arbitrary Neural Style Transfer
    Liu, Songhua
    Lin, Tianwei
    He, Dongliang
    Li, Fu
    Wang, Meiling
    Li, Xin
    Sun, Zhengxing
    Li, Qian
    Ding, Errui
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, : 6629 - 6638
  • [27] Arbitrary style transfer based on Attention and Covariance-Matching
    Peng, Haiyuan
    Qian, Wenhua
    Cao, Jinde
    Tang, Shan
    COMPUTERS & GRAPHICS-UK, 2023, 116 : 298 - 307
  • [28] Arbitrary Style Transfer With Fused Convolutional Block Attention Modules
    Xin, Haitao
    Li, Li
    IEEE ACCESS, 2023, 11 : 44977 - 44988
  • [29] SATS: Self-attention transfer for continual semantic segmentation
    Qiu, Yiqiao
    Shen, Yixing
    Sun, Zhuohao
    Zheng, Yanchong
    Chang, Xiaobin
    Zheng, Weishi
    Wang, Ruixuan
    PATTERN RECOGNITION, 2023, 138
  • [30] PEGANs: Phased Evolutionary Generative Adversarial Networks with Self-Attention Module
    Xue, Yu
    Tong, Weinan
    Neri, Ferrante
    Zhang, Yixia
    MATHEMATICS, 2022, 10 (15)