Manipulating and measuring variation in deep neural network (DNN) representations of objects

Cited by: 0
Authors
Chow, Jason K. [1 ]
Palmeri, Thomas J. [1 ]
Affiliation
[1] Vanderbilt Univ, Dept Psychol, 111 21st Ave South, Nashville, TN 37240 USA
Keywords
Deep neural networks; Individual differences; Simulation; Visual perception; INDIVIDUAL-DIFFERENCES; ORGANIZATION; INFORMATION; MODEL
DOI
10.1016/j.cognition.2024.105920
Chinese Library Classification (CLC): B84 [Psychology]
Discipline classification codes: 04; 0402
Abstract
We explore how DNNs can be used to develop a computational understanding of individual differences in high-level visual cognition given their ability to generate rich, meaningful object representations informed by their architecture, experience, and training protocols. As a first step to quantifying individual differences in DNN representations, we systematically explored the robustness of a variety of representational similarity measures: Representational Similarity Analysis (RSA), Centered Kernel Alignment (CKA), and Projection-Weighted Canonical Correlation Analysis (PWCCA), with an eye to how these measures are used in cognitive science, cognitive neuroscience, and vision science. To manipulate object representations, we next created a large set of models varying in random initial weights and random training image order, training image frequencies, training category frequencies, and model size and architecture, and measured the representational variation caused by each manipulation. We examined both small (All-CNN-C) and commonly used large (VGG and ResNet) DNN architectures. To provide a comparison for the magnitude of representational differences, we established a baseline based on the representational variation caused by image-augmentation techniques used to train those DNNs. We found that variation in model randomization and model size never exceeded baseline. By contrast, differences in training image frequency and training category frequencies caused representational variation that exceeded baseline, with training category frequency manipulations exceeding baseline earlier in the networks. These findings provide insights into the magnitude of representational variations that can be expected with a range of manipulations and provide a springboard for further exploration of systematic model variations aimed at modeling individual differences in high-level visual cognition.
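To make the similarity measures named in the abstract concrete, the following is a minimal sketch (not code from the paper) of RSA and linear CKA applied to two representation matrices of shape stimuli x units. It is simplified on purpose: RSA in practice often compares RDM upper triangles with a Spearman rank correlation, and the paper additionally uses PWCCA, which is omitted here; all function names and the toy data are illustrative assumptions.

```python
import numpy as np

def rdm(X):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between the representations of each pair of stimuli (rows of X)."""
    return 1.0 - np.corrcoef(X)

def rsa(X, Y):
    """RSA score, simplified here to the Pearson correlation between
    the upper triangles of the two models' RDMs."""
    iu = np.triu_indices(X.shape[0], k=1)
    return np.corrcoef(rdm(X)[iu], rdm(Y)[iu])[0, 1]

def linear_cka(X, Y):
    """Linear Centered Kernel Alignment between two column-centered
    representation matrices (stimuli x units); 1.0 means identical
    representational geometry up to rotation/scaling."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    num = np.linalg.norm(Y.T @ X, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return num / den

# Toy example: a representation, a perturbed copy, and an unrelated one.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 64))            # 50 stimuli x 64 units
noisy = X + 0.1 * rng.normal(size=X.shape)
unrelated = rng.normal(size=(50, 64))

print(round(linear_cka(X, X), 3))        # self-similarity is 1.0
print(linear_cka(X, noisy) > linear_cka(X, unrelated))
```

Note the design difference the paper exploits: RSA compares second-order structure (pairwise dissimilarities), so the two models need not have the same number of units, while linear CKA compares the (centered) feature spaces directly and is invariant to orthogonal transformations and isotropic scaling.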
Pages: 19