Quantification of the transferability of features between deep neural networks

Cited by: 4
Authors
Orhand, Romain [1 ]
Khodji, Hiba [1 ]
Hutt, Amarin [1 ]
Jeannin-Girardon, Anne [1 ]
Affiliations
[1] Univ Strasbourg, ICube Lab, UMR 7357, 300 Bd Sebastien Brant,CS 10413, F-67412 Illkirch Graffenstaden, France
Keywords
transfer learning; feature transferability quantification; convolutional neural networks; deep learning;
DOI
10.1016/j.procs.2021.08.015
CLC number
TP [automation technology, computer technology];
Subject classification code
0812 ;
Abstract
The computationally expensive nature of Deep Neural Networks, along with their significant hunger for labeled data, can impair their overall performance. Among other techniques, this challenge can be tackled by Transfer Learning, which consists in reusing the knowledge previously learned by a model: this method is widely used and has proven effective in enhancing the performance of models in low-resource contexts. However, there are relatively few contributions regarding the actual transferability of features in a deep learning model. This paper presents QUANTA (QUANtitative TrAnsferability), a method for quantifying the transferability of the features of a given model. A QUANTA is a two-parameter layer added to a target model at the level at which one wants to study the transferability of the corresponding layer in a source model. As data from the target domain are fed to both the source and the target models, the parameters of the QUANTA layer are trained in such a way that a mutually exclusive quantification occurs between the (trained and frozen) source model and the (trainable) target model. The proposed approach is evaluated in a set of experiments on a visual recognition task using Convolutional Neural Networks. The results show that QUANTA is a promising tool for quantifying the transferability of the features of a source model, as well as a new way of assessing the quality of a transfer. (C) 2021 The Authors. Published by Elsevier B.V. This is an open access article under the CC BY-NC-ND license (https://creativecommons.org/licenses/by-nc-nd/4.0). Peer-review under responsibility of the scientific committee of KES International.
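The abstract describes QUANTA as a two-parameter layer that arbitrates between frozen source features and trainable target features so that their contributions are mutually exclusive. The sketch below is only an illustration of that idea, not the authors' implementation (see the paper via the DOI above for the actual formulation): the class name `QuantaLayer` and the use of a softmax over the two parameters to obtain weights that sum to one are assumptions.

```python
import numpy as np

class QuantaLayer:
    """Hypothetical sketch of a QUANTA-style layer: two scalar trainable
    parameters are normalized with a softmax, yielding mutually exclusive
    weights (they sum to 1) for the frozen source features and the
    trainable target features at a given layer."""

    def __init__(self):
        # The layer's two trainable parameters, one per feature stream.
        # With equal parameters, both streams receive weight 0.5.
        self.params = np.zeros(2)

    def weights(self):
        # Numerically stable softmax over the two parameters.
        e = np.exp(self.params - self.params.max())
        return e / e.sum()  # (w_source, w_target), summing to 1

    def forward(self, source_features, target_features):
        # Convex combination of the two feature streams; after training,
        # w_source can be read as a transferability score for this layer.
        w_source, w_target = self.weights()
        return w_source * source_features + w_target * target_features

layer = QuantaLayer()
mixed = layer.forward(np.ones(4), np.zeros(4))
```

In this reading, once the target model and the QUANTA parameters are trained on target-domain data, the learned source weight quantifies how much the frozen source layer's features were actually reused.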
Pages: 138 - 147
Page count: 10
Related papers
50 records in total
  • [1] Disrupting adversarial transferability in deep neural networks
    Wiedeman, Christopher
    Wang, Ge
    PATTERNS, 2022, 3 (05):
  • [2] Transferability of features for neural networks links to adversarial attacks and defences
    Kotyan, Shashank
    Matsuki, Moe
    Vargas, Danilo Vasconcellos
    PLOS ONE, 2022, 17 (04):
  • [3] Transferable Normalization: Towards Improving Transferability of Deep Neural Networks
    Wang, Ximei
    Jin, Ying
    Long, Mingsheng
    Wang, Jianmin
    Jordan, Michael I.
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019), 2019, 32
  • [4] Graphon Neural Networks and the Transferability of Graph Neural Networks
    Ruiz, Luana
    Chamon, Luiz F. O.
    Ribeiro, Alejandro
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 33, NEURIPS 2020, 2020, 33
  • [5] Transferability of coVariance Neural Networks
    Sihag, Saurabh
    Mateos, Gonzalo
    McMillan, Corey
    Ribeiro, Alejandro
    IEEE JOURNAL OF SELECTED TOPICS IN SIGNAL PROCESSING, 2024, 18 (02) : 199 - 215
  • [6] Exploring Transferability in Deep Neural Networks with Functional Data Analysis and Spatial Statistics
    McAllister, Richard
    Sheppard, John
    2019 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2019,
  • [7] On Correlation of Features Extracted by Deep Neural Networks
    Ayinde, Babajide O.
    Inanc, Tamer
    Zurada, Jacek M.
    2019 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2019,
  • [8] How transferable are features in deep neural networks?
    Yosinski, Jason
    Clune, Jeff
    Bengio, Yoshua
    Lipson, Hod
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 27 (NIPS 2014), 2014, 27
  • [9] Size and temperature transferability of direct and local deep neural networks for atomic forces
    Kuritz, Natalia
    Gordon, Goren
    Natan, Amir
    PHYSICAL REVIEW B, 2018, 98 (09)
  • [10] Exploring the Effect of Randomness on Transferability of Adversarial Samples Against Deep Neural Networks
    Zhou, Yan
    Kantarcioglu, Murat
    Xi, Bowei
    IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2023, 20 (01) : 83 - 99