Training Energy-Based Models for Time-Series Imputation

Authors
Brakel, Philemon [1 ]
Stroobandt, Dirk [1 ]
Schrauwen, Benjamin [1 ]
Affiliations
[1] Univ Ghent, Dept Elect & Informat Syst, B-9000 Ghent, Belgium
Keywords
neural networks; energy-based models; time-series; missing values; optimization;
DOI
Not available
Chinese Library Classification (CLC)
TP [Automation & Computer Technology];
Discipline Classification Code
0812;
Abstract
Imputing missing values in high-dimensional time-series is a difficult problem. This paper presents a strategy for training energy-based graphical models for imputation directly, bypassing difficulties that probabilistic approaches would face. The training strategy is inspired by recent work on optimization-based learning (Domke, 2012) and allows complex neural models with convolutional and recurrent structures to be trained for imputation tasks. In this work, we use this training strategy to derive learning rules for three substantially different neural architectures. Inference in these models is done by either truncated gradient descent or variational mean-field iterations. In our experiments, we found that the training methods outperform the Contrastive Divergence learning algorithm. Moreover, the training methods can easily handle missing values in the training data itself during learning. We demonstrate the performance of this learning scheme and the three models we introduce on one artificial and two real-world data sets.
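The abstract describes inference by truncated gradient descent on an energy function, with missing entries updated while observed entries stay clamped. The sketch below is a minimal illustration of that idea only, under a deliberately simple assumption: a quadratic smoothness energy E(x) = Σ_t (x[t+1] − x[t])², not the paper's convolutional or recurrent neural architectures. The function names `energy` and `impute` are hypothetical, chosen for this example.

```python
import numpy as np

def energy(x):
    """Quadratic smoothness energy: sum of squared first differences."""
    d = np.diff(x)
    return np.sum(d ** 2)

def impute(x, missing_mask, steps=200, lr=0.1):
    """Truncated gradient descent on the energy, updating only missing entries.

    Observed entries (missing_mask == False) remain clamped to their values,
    mirroring the clamped-inference setting the abstract describes.
    """
    x = x.copy()
    for _ in range(steps):
        # Gradient of sum_t (x[t+1] - x[t])^2 with respect to each x[t].
        d = np.diff(x)
        grad = np.zeros_like(x)
        grad[:-1] -= 2 * d
        grad[1:] += 2 * d
        x[missing_mask] -= lr * grad[missing_mask]
    return x

# Toy series with one missing value; the smoothness energy pulls the
# missing entry toward the average of its clamped neighbours.
series = np.array([0.0, 1.0, np.nan, 3.0, 4.0])
mask = np.isnan(series)
series[mask] = 0.0  # arbitrary initialization of the missing entry
result = impute(series, mask)
# result[2] converges to 2.0, the energy minimizer between 1.0 and 3.0
```

Truncating the number of descent steps (here `steps=200`) is what makes the inference procedure differentiable end-to-end in the optimization-based learning framework the paper builds on; this sketch only performs the inner inference loop, not the outer learning.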
Pages: 2771-2797
Number of pages: 27