MNSS: Neural Supersampling Framework for Real-Time Rendering on Mobile Devices

Cited by: 6
Authors
Yang, Sipeng [1 ]
Zhao, Yunlu [1 ]
Luo, Yuzhe [1 ]
Wang, He [2 ]
Sun, Hongyu [3 ]
Li, Chen [3 ]
Cai, Binghuang [3 ]
Jin, Xiaogang [1 ]
Affiliations
[1] Zhejiang Univ, State Key Lab CAD&CG, Hangzhou 310027, Peoples R China
[2] Univ Leeds, Sch Comp, Leeds LS2 9JT, England
[3] OPPO US Res Ctr, Bellevue, WA 98005 USA
Funding
National Natural Science Foundation of China;
Keywords
Rendering (computer graphics); Real-time systems; Image reconstruction; Image resolution; Videos; Artificial intelligence; Neural networks; Deep learning; neural supersampling; real-time rendering; IMAGE SUPERRESOLUTION; QUALITY ASSESSMENT;
DOI
10.1109/TVCG.2023.3259141
CLC Classification
TP31 [Computer software];
Subject Classification Code
081202 ; 0835 ;
Abstract
Although neural supersampling has achieved great success in improving image quality across many applications, its high computational demand still prevents its use in a wide range of real-time rendering applications. Most existing methods are computationally expensive and require high-performance hardware, ruling out hardware-constrained platforms such as smartphones. To this end, we propose a new supersampling framework for real-time rendering that reconstructs a high-quality image from a low-resolution one and is lightweight enough to run on smartphones within a real-time budget. Our model takes renderer-generated low-resolution content as input and produces high-resolution, anti-aliased results. To maximize sampling efficiency, we propose using an alternating sub-pixel sample pattern during rasterization, which allows us to use a relatively small reconstruction model while maintaining high image quality. By accumulating new samples into a high-resolution history buffer, an efficient history check and reuse scheme improves temporal stability. To our knowledge, this is the first work to bring real-time neural supersampling to mobile devices. Because of the absence of suitable training data, we present a new dataset containing 57 training and test sequences from three game scenes. Furthermore, based on the rendered motion vectors and a visual perception study, we introduce a new metric, inter-frame structural similarity (IF-SSIM), to quantitatively measure the temporal stability of rendered videos. Extensive evaluations demonstrate that our supersampling model outperforms existing and alternative solutions in both performance and temporal stability.
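The abstract describes IF-SSIM as a temporal-stability metric built from rendered motion vectors; a plausible reading is that the previous frame is motion-compensated (warped) toward the current frame and the two are compared with SSIM, so that flicker lowers the score while consistent motion does not. The sketch below illustrates that idea only; the function names, the nearest-neighbor warp, and the simplified single-window SSIM are assumptions for illustration, not the paper's actual definition.

```python
import numpy as np

def warp_with_motion_vectors(prev_frame, motion_vectors):
    """Back-project the previous frame using per-pixel motion vectors.

    motion_vectors[y, x] = (dx, dy) in pixels, mapping a current-frame
    pixel to its position in the previous frame (nearest-neighbor fetch).
    """
    h, w = prev_frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_x = np.clip(np.round(xs + motion_vectors[..., 0]).astype(int), 0, w - 1)
    src_y = np.clip(np.round(ys + motion_vectors[..., 1]).astype(int), 0, h - 1)
    return prev_frame[src_y, src_x]

def ssim_global(a, b, data_range=1.0, k1=0.01, k2=0.03):
    """Simplified SSIM using one global window over the whole image."""
    c1, c2 = (k1 * data_range) ** 2, (k2 * data_range) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / (
        (mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2))

def if_ssim(curr_frame, prev_frame, motion_vectors):
    """Hypothetical IF-SSIM: SSIM between the current frame and the
    motion-compensated previous frame. Higher = more temporally stable."""
    return ssim_global(curr_frame, warp_with_motion_vectors(prev_frame, motion_vectors))
```

Under this reading, a static scene rendered identically in consecutive frames scores 1.0, while frame-to-frame shimmer or flicker drives the score down even when per-frame (spatial) quality is unchanged.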
Pages: 4271 - 4284
Page count: 14
Related Papers
50 records in total
  • [1] Neural Supersampling for Real-time Rendering
    Xiao, Lei
    Nouri, Salah
    Chapman, Matt
    Fix, Alexander
    Lanman, Douglas
    Kaplanyan, Anton
    ACM TRANSACTIONS ON GRAPHICS, 2020, 39 (04):
  • [2] Classifier Guided Temporal Supersampling for Real-time Rendering
    Guo, Yu-Xiao
    Chen, Guojun
    Dong, Yue
    Tong, Xin
    COMPUTER GRAPHICS FORUM, 2022, 41 (07) : 237 - 246
  • [3] Real-time Photorealistic Rendering for Mobile Devices
    Ha, Inwoo
    Ahn, Minsu
    Lee, Hyong-Euk
    2014 IEEE INTERNATIONAL CONFERENCE ON CONSUMER ELECTRONICS (ICCE), 2014, : 500 - 501
  • [4] Towards Real-Time Neural Volumetric Rendering on Mobile Devices: A Measurement Study
    Wang, Zhe
    Zhu, Yifei
    PROCEEDINGS OF THE 2024 SIGCOMM WORKSHOP ON EMERGING MULTIMEDIA SYSTEMS, EMS 2024, 2024, : 8 - 13
  • [5] NeARportation: A Remote Real-time Neural Rendering Framework
    Hiroi, Yuichi
    Itoh, Yuta
    Rekimoto, Jun
    28TH ACM SYMPOSIUM ON VIRTUAL REALITY SOFTWARE AND TECHNOLOGY, VRST 2022, 2022,
  • [6] REFRAME: Reflective Surface Real-Time Rendering for Mobile Devices
    Ji, Chaojie
    Li, Yufeng
    Liao, Yiyi
    COMPUTER VISION - ECCV 2024, PT XLV, 2025, 15103 : 234 - 252
  • [7] Real-Time Neural Light Field on Mobile Devices
    Cao, Junli
    Wang, Huan
    Chemerys, Pavlo
    Shakhrai, Vladislav
    Hu, Ju
    Fu, Yun
    Makoviichuk, Denys
    Tulyakov, Sergey
    Ren, Jian
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023, : 8328 - 8337
  • [8] Temporally Stable Real-Time Joint Neural Denoising and Supersampling
    Thomas, Manu Mathew
    Liktor, Gabor
    Peters, Christoph
    Kim, Sungye
    Vaidyanathan, Karthik
    Forbes, Angus G.
    PROCEEDINGS OF THE ACM ON COMPUTER GRAPHICS AND INTERACTIVE TECHNIQUES, 2022, 5 (03)
  • [9] ExtraNet: Real-time Extrapolated Rendering for Low-latency Temporal Supersampling
    Guo, Jie
    Fu, Xihao
    Lin, Liqiang
    Ma, Hengjun
    Guo, Yanwen
    Liu, Shiqiu
    Yan, Ling-Qi
    ACM TRANSACTIONS ON GRAPHICS, 2021, 40 (06):
  • [10] Optimization Strategies for Real-Time Rendering of Virtual Scenes on Heterogeneous Mobile Devices
    Gai, Wei
    Bao, Xiyu
    Qi, Meng
    Wang, Yafang
    Liu, Juan
    de Melo, Gerard
    Wang, Lu
    Cui, Lizhen
    Yang, Chenglei
    Meng, Xiangxu
    2019 IEEE SMARTWORLD, UBIQUITOUS INTELLIGENCE & COMPUTING, ADVANCED & TRUSTED COMPUTING, SCALABLE COMPUTING & COMMUNICATIONS, CLOUD & BIG DATA COMPUTING, INTERNET OF PEOPLE AND SMART CITY INNOVATION (SMARTWORLD/SCALCOM/UIC/ATC/CBDCOM/IOP/SCI 2019), 2019, : 395 - 400