MNSS: Neural Supersampling Framework for Real-Time Rendering on Mobile Devices

Cited by: 6
Authors
Yang, Sipeng [1 ]
Zhao, Yunlu [1 ]
Luo, Yuzhe [1 ]
Wang, He [2 ]
Sun, Hongyu [3 ]
Li, Chen [3 ]
Cai, Binghuang [3 ]
Jin, Xiaogang [1 ]
Affiliations
[1] Zhejiang Univ, State Key Lab CAD&CG, Hangzhou 310027, Peoples R China
[2] Univ Leeds, Sch Comp, Leeds LS2 9JT, England
[3] OPPO US Res Ctr, Bellevue, WA 98005 USA
Funding
National Natural Science Foundation of China;
Keywords
Rendering (computer graphics); Real-time systems; Image reconstruction; Image resolution; Videos; Artificial intelligence; Neural networks; Deep learning; neural supersampling; real-time rendering; IMAGE SUPERRESOLUTION; QUALITY ASSESSMENT;
DOI
10.1109/TVCG.2023.3259141
Chinese Library Classification
TP31 [Computer Software];
Discipline Codes
081202; 0835;
Abstract
Although neural supersampling has achieved great success in improving image quality across various applications, its high computational cost still prevents its use in many real-time rendering applications. Most existing methods are computationally expensive and require high-performance hardware, ruling out platforms with limited resources such as smartphones. To this end, we propose a new supersampling framework for real-time rendering that reconstructs a high-quality image from a low-resolution one and is lightweight enough to run on smartphones within a real-time budget. Our model takes the renderer-generated low-resolution content as input and produces high-resolution, anti-aliased results. To maximize sampling efficiency, we propose using an alternating sub-pixel sample pattern during rasterization, which allows us to keep the reconstruction model small while maintaining high image quality. By accumulating new samples into a high-resolution history buffer, we introduce an efficient history check and reuse scheme that improves temporal stability. To our knowledge, this is the first work to bring real-time neural supersampling to mobile devices. Because no suitable training data exists, we present a new dataset containing 57 training and test sequences from three game scenes. Furthermore, based on the rendered motion vectors and a visual perception study, we introduce a new metric, inter-frame structural similarity (IF-SSIM), to quantitatively measure the temporal stability of rendered videos. Extensive evaluations demonstrate that our supersampling model outperforms existing and alternative solutions in both performance and temporal stability.
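The abstract only names the IF-SSIM metric; the core idea is to compare each frame not with a reference image but with the previous frame warped by the renderer's motion vectors, so that residual flicker lowers the score. Below is a minimal sketch of that idea, assuming backward per-pixel motion vectors, nearest-neighbour warping, and a simplified single-window SSIM; the paper's exact formulation (windowing, constants, handling of disocclusions) may differ.

```python
import numpy as np

def ssim(a, b, c1=(0.01 * 255) ** 2, c2=(0.03 * 255) ** 2):
    """Simplified single-window SSIM over two grayscale images in [0, 255]."""
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
           ((mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2))

def warp(prev, mv):
    """Backward-warp `prev` using per-pixel motion vectors mv[y, x] = (dy, dx),
    i.e. the current pixel (y, x) came from prev[y + dy, x + dx].
    Nearest-neighbour sampling, clamped at the image border."""
    h, w = prev.shape
    ys, xs = np.mgrid[0:h, 0:w]
    sy = np.clip(ys + np.round(mv[..., 0]).astype(int), 0, h - 1)
    sx = np.clip(xs + np.round(mv[..., 1]).astype(int), 0, w - 1)
    return prev[sy, sx]

def if_ssim(curr, prev, mv):
    """Inter-frame SSIM: similarity of the current frame to the
    motion-warped previous frame; close to 1 means temporally stable."""
    return ssim(curr, warp(prev, mv))
```

Averaging `if_ssim` over all consecutive frame pairs of a sequence would then give a single temporal-stability score for a rendered video.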
Pages: 4271-4284
Page count: 14