InitialGAN: A Language GAN With Completely Random Initialization

Cited by: 0
Authors
Ren, Da [1 ]
Li, Qing [1 ]
Affiliations
[1] Hong Kong Polytech Univ, Dept Comp, Hong Kong, Peoples R China
Keywords
Generators; Training; Maximum likelihood estimation; Generative adversarial networks; Sampling methods; Transforms; Transformers; Generative adversarial network (GAN); text generation;
DOI
10.1109/TNNLS.2023.3315778
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Text generative models trained via maximum likelihood estimation (MLE) suffer from the notorious exposure bias problem, and generative adversarial networks (GANs) have shown potential to tackle it. Existing language GANs adopt estimators such as REINFORCE or continuous relaxations to model word probabilities. The inherent limitations of these estimators lead current models to rely on pretraining techniques (MLE pretraining or pretrained embeddings). Representation modeling methods (RMMs), which are free from those limitations, have nevertheless been seldom explored because of their poor performance in previous attempts. Our analyses reveal that invalid sampling methods and unhealthy gradients are the main contributors to this unsatisfactory performance. In this work, we present two techniques to tackle these problems: dropout sampling and a fully normalized long short-term memory network (LSTM). Based on these two techniques, we propose InitialGAN, whose parameters are randomly initialized in full. In addition, we introduce a new evaluation metric, least coverage rate (LCR), to better evaluate the quality of generated samples. The experimental results demonstrate that InitialGAN outperforms both MLE and the other compared models. To the best of our knowledge, this is the first time a language GAN has outperformed MLE without using any pretraining techniques.
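The exposure bias the abstract refers to can be illustrated with a toy sketch (not from the paper; the corpus, model, and function names here are hypothetical): a next-token model trained with teacher forcing only ever conditions on ground-truth prefixes, but at inference it conditions on its own outputs, so a single off-distribution token can derail generation.

```python
from collections import defaultdict, Counter

# Hypothetical toy corpus; a bigram count table stands in for a trained model.
corpus = ["<s>", "a", "b", "a", "b", "</s>"]

bigram = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigram[prev][nxt] += 1

def predict(prev):
    """Greedy next-token prediction; unseen contexts fall back to </s>."""
    if prev in bigram:
        return bigram[prev].most_common(1)[0][0]
    return "</s>"

# Teacher forcing (training regime): every prediction conditions on the
# TRUE previous token, so the model never sees its own mistakes.
teacher_forced = [predict(prev) for prev in corpus[:-1]]

# Free-running generation (inference regime): each prediction conditions
# on the MODEL's previous output instead of the gold prefix.
def generate(start, steps=4):
    out, tok = [], start
    for _ in range(steps):
        tok = predict(tok)
        out.append(tok)
        if tok == "</s>":
            break
    return out

print(teacher_forced)    # predictions under gold prefixes
print(generate("<s>"))   # predictions under the model's own prefixes
print(generate("c"))     # an off-distribution context degenerates at once
```

Because training and inference condition on different prefix distributions, errors made at inference compound; this train/test mismatch is what GAN-based approaches such as the one described above aim to avoid by training the generator on its own samples.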
Pages: 18431-18444
Page count: 14