Visual saliency guided textured model simplification

Cited by: 0
Authors
Bailin Yang
Frederick W. B. Li
Xun Wang
Mingliang Xu
Xiaohui Liang
Zhaoyi Jiang
Yanhui Jiang
Affiliations
[1] Zhejiang Gongshang University, School of Computer Science and Information Engineering
[2] University of Durham, School of Engineering and Computing Sciences
[3] University of Zhengzhou, College of Computer Science
[4] Beihang University, State Key Lab of Virtual Reality Technology and Systems
[5] Hunan University, School of Business
Source
The Visual Computer | 2016, Vol. 32
Keywords
Visual saliency; Textured model; Model reduction; Simplification; Texture space optimization
DOI: not available
Abstract
Mesh geometry can be used to model both object shape and details. When texture maps are involved, it is common to let the mesh geometry model mainly the object shape and let the texture maps model most of the object details, optimising the data size and complexity of the object. To support efficient object rendering and transmission, model simplification can be applied to reduce the modelling data. However, existing methods do not adequately consider how object features are jointly represented by mesh geometry and texture maps, and therefore have difficulty identifying and preserving important features in simplified objects. To address this, we propose a visual saliency detection method for simplifying textured 3D models. We produce good simplification results by jointly processing the mesh geometry and texture map to generate a unified saliency map that identifies visually important object features. Results show that our method offers better object rendering quality than existing methods.
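To illustrate the idea described in the abstract, below is a minimal Python/NumPy sketch of how a unified per-vertex saliency map could be built from mesh geometry and a texture map and then used to bias edge-collapse costs during simplification. This is not the authors' algorithm: the normal-deviation geometric term, the texture-gradient term, the blending weight alpha, and all function names are illustrative assumptions, and the mesh is assumed to be given as NumPy arrays (vertices (V,3), faces (F,3), per-vertex UVs (V,2) in [0,1]) with a grayscale texture (H,W).

    # Minimal sketch (not the authors' implementation) of a unified
    # geometry + texture saliency map guiding textured model simplification.
    import numpy as np

    def face_normals(vertices, faces):
        """Area-weighted (unnormalised) face normals."""
        v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
        return np.cross(v1 - v0, v2 - v0)

    def vertex_normals(vertices, faces):
        """Area-weighted vertex normals, normalised to unit length."""
        fn = face_normals(vertices, faces)
        vn = np.zeros_like(vertices)
        for i in range(3):
            np.add.at(vn, faces[:, i], fn)
        return vn / (np.linalg.norm(vn, axis=1, keepdims=True) + 1e-12)

    def geometric_saliency(vertices, faces):
        """Proxy for geometric saliency: deviation of a vertex normal
        from the mean normal of its one-ring neighbours."""
        vn = vertex_normals(vertices, faces)
        acc = np.zeros_like(vertices)
        cnt = np.zeros(len(vertices))
        edges = np.vstack([faces[:, [0, 1]], faces[:, [1, 2]], faces[:, [2, 0]]])
        for a, b in edges:
            acc[a] += vn[b]; acc[b] += vn[a]
            cnt[a] += 1; cnt[b] += 1
        mean_nbr = acc / (cnt[:, None] + 1e-12)
        mean_nbr /= np.linalg.norm(mean_nbr, axis=1, keepdims=True) + 1e-12
        return 1.0 - np.einsum('ij,ij->i', vn, mean_nbr)   # 0 = flat, 2 = sharp

    def texture_saliency(uvs, texture):
        """Texture-space saliency: gradient magnitude of the texture,
        sampled at each vertex's UV coordinate (nearest-neighbour lookup)."""
        gy, gx = np.gradient(texture.astype(np.float64))
        grad = np.hypot(gx, gy)
        h, w = texture.shape
        px = np.clip((uvs[:, 0] * (w - 1)).astype(int), 0, w - 1)
        py = np.clip(((1.0 - uvs[:, 1]) * (h - 1)).astype(int), 0, h - 1)
        return grad[py, px]

    def unified_saliency(vertices, faces, uvs, texture, alpha=0.5):
        """Blend normalised geometric and texture saliency into one map."""
        def norm01(x):
            return (x - x.min()) / (x.max() - x.min() + 1e-12)
        return alpha * norm01(geometric_saliency(vertices, faces)) \
             + (1.0 - alpha) * norm01(texture_saliency(uvs, texture))

    # During simplification, such a map would typically scale the usual
    # quadric edge-collapse cost so salient regions are collapsed last, e.g.
    #   cost(e) = (1 + k * max(S[v_i], S[v_j])) * quadric_error(e)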
Pages: 1415-1432
Number of pages: 17