Disorder-Invariant Implicit Neural Representation

Cited: 3
Authors
Zhu, Hao [1 ]
Xie, Shaowen [1 ]
Liu, Zhen [1 ]
Liu, Fengyi [1 ]
Zhang, Qi [2 ]
Zhou, You [1 ]
Lin, Yi [3 ]
Ma, Zhan [1 ]
Cao, Xun [1 ]
Affiliations
[1] Nanjing Univ, Sch Elect Sci & Engn, Nanjing 210023, Peoples R China
[2] Tencent AI Lab, Shenzhen 518054, Peoples R China
[3] Fudan Univ, Zhongshan Hosp, Dept Cardiovasc Surg, Shanghai 200032, Peoples R China
Keywords
Three-dimensional displays; Encoding; Task analysis; Optimization; Inverse problems; Frequency modulation; Training; Disorder-invariance; hash-table; implicit neural representation; inverse problem optimization
DOI
10.1109/TPAMI.2024.3366237
CLC Classification Number
TP18 [Theory of Artificial Intelligence]
Discipline Code
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Implicit neural representation (INR) characterizes the attributes of a signal as a function of the corresponding coordinates, and has emerged as a powerful tool for solving inverse problems. However, the expressive power of INR is limited by the spectral bias in network training. In this paper, we find that this frequency-related problem can be largely alleviated by re-arranging the coordinates of the input signal, for which we propose the disorder-invariant implicit neural representation (DINER): a hash-table augmented onto a traditional INR backbone. Given discrete signals that share the same histogram of attributes but differ in arrangement order, the hash-table projects their coordinates into the same distribution, so the mapped signal can be better modeled by the subsequent INR network, significantly alleviating the spectral bias. Furthermore, the expressive power of DINER is determined by the width of the hash-table. Different widths correspond to different geometric elements in the attribute space, e.g., a 1D curve, a 2D curved plane, and a 3D curved volume when the width is set to 1, 2, and 3, respectively. The larger the area covered by these geometric elements, the stronger the expressive power. Experiments not only demonstrate that DINER generalizes across INR backbones (MLP versus SIREN) and tasks (image/video representation, phase retrieval, refractive index recovery, and neural radiance field optimization), but also show its superiority over state-of-the-art algorithms in both quality and speed.
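The abstract describes DINER's core mechanism: a learnable hash-table that maps every discrete input coordinate to a low-dimensional code, which a conventional INR backbone (MLP or SIREN) then maps to the signal attribute. Below is a minimal PyTorch sketch of that idea; the class names, table width, layer sizes, and training loop are illustrative assumptions for a toy image-fitting setup, not the authors' released implementation.

```python
# Minimal sketch of a DINER-style model: a full-resolution learnable hash-table
# (one entry per discrete coordinate) followed by a small SIREN-style backbone.
import torch
import torch.nn as nn

class Sine(nn.Module):
    """Sine activation used by SIREN-style networks."""
    def __init__(self, w0=30.0):
        super().__init__()
        self.w0 = w0
    def forward(self, x):
        return torch.sin(self.w0 * x)

class DINERSketch(nn.Module):
    def __init__(self, num_coords, table_width=2, hidden=64, out_dim=3):
        super().__init__()
        # One learnable code per discrete coordinate; training re-arranges the
        # input into a distribution the backbone can fit with less spectral bias.
        self.hash_table = nn.Parameter(
            torch.empty(num_coords, table_width).uniform_(-1e-4, 1e-4))
        # Small sine-activated MLP backbone mapping codes to signal attributes.
        self.backbone = nn.Sequential(
            nn.Linear(table_width, hidden), Sine(),
            nn.Linear(hidden, hidden), Sine(),
            nn.Linear(hidden, out_dim),
        )
    def forward(self, coord_idx):
        # coord_idx: LongTensor of flattened coordinate indices, shape (B,)
        return self.backbone(self.hash_table[coord_idx])

# Toy usage: fit a 64x64 "image" by indexing its 4096 pixels directly.
if __name__ == "__main__":
    H = W = 64
    model = DINERSketch(num_coords=H * W)
    target = torch.rand(H * W, 3)          # stand-in for per-pixel attributes
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    idx = torch.arange(H * W)
    for _ in range(100):
        opt.zero_grad()
        loss = ((model(idx) - target) ** 2).mean()
        loss.backward()
        opt.step()
```

In this sketch the table width of 2 corresponds to the "2D curved plane" case mentioned in the abstract; setting table_width to 1 or 3 would give the 1D-curve and 3D-curved-volume variants.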
Pages: 5463-5478
Page count: 16
Related Papers (50 in total)
  • [1] DINER: Disorder-Invariant Implicit Neural Representation
    Xie, Shaowen
    Zhu, Hao
    Liu, Zhen
    Zhang, Qi
    Zhou, You
    Cao, Xun
    Ma, Zhan
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR, 2023, : 6143 - 6152
  • [2] Neural explicit and implicit knowledge representation
    Neagu, CD
    Palade, V
    KES'2000: FOURTH INTERNATIONAL CONFERENCE ON KNOWLEDGE-BASED INTELLIGENT ENGINEERING SYSTEMS & ALLIED TECHNOLOGIES, VOLS 1 AND 2, PROCEEDINGS, 2000, : 213 - 216
  • [3] Regularize implicit neural representation by itself
    Li, Zhemin
    Wang, Hongxia
    Meng, Deyu
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023, : 10280 - 10288
  • [4] MINER: Multiscale Implicit Neural Representation
    Saragadam, Vishwanath
    Tan, Jasper
    Balakrishnan, Guha
    Baraniuk, Richard G.
    Veeraraghavan, Ashok
    COMPUTER VISION, ECCV 2022, PT XXIII, 2022, 13683 : 318 - 333
  • [5] Implicit neural representation for image demosaicking
    Kerepecky, Tomas
    Sroubek, Filip
    Flusser, Jan
    DIGITAL SIGNAL PROCESSING, 2025, 159
  • [6] Neural explicit and implicit knowledge representation
    Neagu, Ciprian-Daniel
    Palade, Vasile
    International Conference on Knowledge-Based Intelligent Electronic Systems, Proceedings, KES, 2000, 1 : 213 - 216
  • [7] Neural Knitworks: Patched neural implicit representation networks
    Czerkawski, Mikolaj
    Cardona, Javier
    Atkinson, Robert
    Michie, Craig
    Andonovic, Ivan
    Clemente, Carmine
    Tachtatzis, Christos
    PATTERN RECOGNITION, 2024, 151
  • [8] Representation theory and invariant neural networks
    Wood, J
    Shawe-Taylor, J
    DISCRETE APPLIED MATHEMATICS, 1996, 69 (1-2) : 33 - 60
  • [9] Dynamic multiplexed intensity diffraction tomography using a spatiotemporal regularization-driven disorder-invariant multilayer perceptron
    Luo, Haixin
    Chen, Haiwen
    Xu, Jie
    Wan, Mingming
    Zhong, Liyun
    Lu, Xiaoxu
    Tian, Jindong
    OPTICS EXPRESS, 2024, 32 (22): 39117 - 39133
  • [10] Ultrasound Confidence Maps with Neural Implicit Representation
    Yesilkaynak, Vahit Bugra
    Duque, Vanessa Gonzalez
    Wysocki, Magdalena
    Velikova, Yordanka
    Mateus, Diana
    Navab, Nassir
    MEDICAL IMAGE UNDERSTANDING AND ANALYSIS, PT II, MIUA 2024, 2024, 14860 : 89 - 100