Semisupervised learning of hierarchical latent trait models for data visualization

Cited by: 4
Authors
Nabney, IT [1]
Sun, Y
Tino, P
Kabán, A
Affiliations
[1] Aston Univ, Neural Comp Res Grp, Birmingham B4 7ET, W Midlands, England
[2] Univ Hertfordshire, Sch Comp Sci, Hatfield AL10 9AB, Herts, England
[3] Univ Birmingham, Sch Comp Sci, Birmingham B15 2TT, W Midlands, England
Funding
UK Biotechnology and Biological Sciences Research Council (BBSRC);
Keywords
hierarchical model; latent trait model; magnification factors; data visualization; document mining;
DOI
10.1109/TKDE.2005.49
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104; 0812; 0835; 1405;
Abstract
Recently, we have developed the hierarchical Generative Topographic Mapping (HGTM), an interactive method for visualizing large high-dimensional real-valued data sets. In this paper, we propose a more general visualization system by extending HGTM in three ways, allowing the user to visualize a wider range of data sets and better supporting the model development process. 1) We integrate HGTM with noise models from the exponential family of distributions. The basic building block is the Latent Trait Model (LTM). This enables us to visualize data of an inherently discrete nature, e.g., collections of documents, in a hierarchical manner. 2) We give the user a choice of initializing the child plots of the current plot in either interactive or automatic mode. In the interactive mode, the user selects "regions of interest," whereas in the automatic mode, an unsupervised minimum message length (MML)-inspired construction of a mixture of LTMs is employed. The unsupervised construction is particularly useful when high-level plots are covered with dense clusters of highly overlapping data projections, making it difficult to use the interactive mode. Such a situation often arises when visualizing large data sets. 3) We derive general formulas for magnification factors in latent trait models. Magnification factors are a useful tool for improving our understanding of the visualization plots, since they can highlight the boundaries between data clusters. We illustrate our approach on a toy example and evaluate it on three more complex real data sets.
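The magnification factors mentioned in point 3 measure how strongly the smooth mapping from the latent space to the data space locally stretches area; in GTM-style models this is commonly computed as sqrt(det(JᵀJ)), where J is the Jacobian of the mapping. The following is a minimal, hedged sketch of that idea for an RBF-based mapping y(x) = W·φ(x); the paper's general exponential-family formulas are not reproduced here, and all function names, shapes, and parameters below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Illustrative sketch (not the paper's code): magnification factor for a
# GTM/LTM-style mapping y(x) = W @ phi(x), with Gaussian RBF basis
# functions phi over a 2D latent space. The magnification factor at a
# latent point x is sqrt(det(J^T J)), where J = dy/dx.

def rbf_phi(x, centers, sigma):
    # Gaussian basis activations phi_m(x) = exp(-||x - c_m||^2 / (2 sigma^2))
    d2 = np.sum((centers - x) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def rbf_phi_jacobian(x, centers, sigma):
    # d phi_m / d x_k = phi_m(x) * (c_mk - x_k) / sigma^2, shape (M, 2)
    phi = rbf_phi(x, centers, sigma)
    return (phi[:, None] * (centers - x)) / sigma ** 2

def magnification_factor(x, W, centers, sigma):
    # J = W @ dphi/dx has shape (D, 2); area stretch = sqrt(det(J^T J))
    J = W @ rbf_phi_jacobian(x, centers, sigma)
    return float(np.sqrt(np.linalg.det(J.T @ J)))

# Toy setup: 9 RBF centres on the latent plane, mapping into 5-D data space.
rng = np.random.default_rng(0)
centers = rng.uniform(-1.0, 1.0, size=(9, 2))
W = rng.normal(size=(5, 9))
mf = magnification_factor(np.zeros(2), W, centers, sigma=0.5)
```

Evaluating `magnification_factor` on a grid of latent points and plotting the values as a background shading of the visualization plot is the usual way such factors are used to reveal cluster boundaries.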
Pages: 384-400 (17 pages)