Active Contour Model Using Fast Fourier Transformation for Salient Object Detection

Cited by: 0
Authors
Khan, Umer Sadiq [1 ]
Zhang, Xingjun [1 ]
Su, Yuanqi [1 ]
Affiliations
[1] Xi An Jiao Tong Univ, Sch Comp Sci & Technol, Xian 710049, Peoples R China
Keywords
active contours; frequency domain; FFT; Fourier force function; salient object detection; LEVEL SET; IMAGE SEGMENTATION; MUMFORD; FORMULATION; EVOLUTION; DRIVEN; COLOR;
DOI
10.3390/electronics10020192
CLC Classification
TP [Automation Technology, Computer Technology];
Discipline Code
0812 ;
Abstract
The active contour model is a widely studied technique for salient object detection. Most active contour models for saliency detection are developed in the context of natural scenes, and their behavior on synthetic and medical images is not well investigated. Existing active contour models handle many kinds of image complexity efficiently but face challenges on synthetic and medical images, such as obtaining a precise, automatically fitted contour and the high computational cost of initialization. Our aim is to detect the object boundary automatically, without re-initialization, so that the subsequent evolution extracts the salient object. To this end, we propose a simple, novel numerical solution scheme that applies the fast Fourier transform (FFT) to the active contour (snake) differential equations. It offers two major enhancements: it completely avoids approximating expensive spatial derivatives with finite differences, and its regularization scheme can be extended more generally. In addition, the FFT-based solver is significantly faster than the traditional spatial-domain solution. Finally, the model uses a Fourier force function to fit curves naturally and extract salient objects from the background. Compared with state-of-the-art methods, the proposed method achieves at least a 3% increase in accuracy on three diverse sets of images. Moreover, it runs very fast: its average running time is about one twelfth of the baseline's.
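The abstract's core idea, solving the snake evolution equations in the frequency domain, can be illustrated with a minimal sketch. The classical semi-implicit snake update (I + tau*A)x_new = x + tau*F has a circulant internal-energy operator A for a closed contour, so the linear solve diagonalizes under the DFT into a per-frequency division. The function name, parameter defaults, and eigenvalue formula below are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def snake_step_fft(x, y, fx, fy, alpha=0.1, beta=0.01, tau=1.0):
    """One semi-implicit snake update solved in the Fourier domain.

    For a closed contour sampled at n points, the internal-energy
    operator A = -alpha*D2 + beta*D4 (D2, D4: periodic second- and
    fourth-difference matrices) is circulant, so (I + tau*A) is
    diagonalized by the DFT; the banded solve of the spatial-domain
    method becomes an elementwise division per frequency.
    Hypothetical sketch; parameter values are illustrative.
    """
    n = len(x)
    w = 2.0 * np.pi * np.fft.fftfreq(n)  # discrete angular frequencies
    # Eigenvalues of A: D2 stencil [1,-2,1] -> -4 sin^2(w/2),
    # D4 stencil [1,-4,6,-4,1] -> 16 sin^4(w/2).
    lam = 4.0 * alpha * np.sin(w / 2) ** 2 + 16.0 * beta * np.sin(w / 2) ** 4
    denom = 1.0 + tau * lam
    # Explicit external force, implicit internal force, solved per frequency.
    x_new = np.real(np.fft.ifft(np.fft.fft(x + tau * fx) / denom))
    y_new = np.real(np.fft.ifft(np.fft.fft(y + tau * fy) / denom))
    return x_new, y_new
```

Because the zero-frequency eigenvalue is 0, the contour centroid is preserved exactly when the external force is zero, while higher frequencies are damped, which is the smoothing behavior the internal energy is meant to provide.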
Pages: 1-20
Page count: 20
Related Papers
50 items in total
  • [41] A contour tracking method of large motion object using optical flow and active contour model
    Choi, Jin Woo
    Whangbo, Taeg Keun
    Kim, Cheong Ghil
    MULTIMEDIA TOOLS AND APPLICATIONS, 2015, 74 (01) : 199 - 210
  • [43] Salient Object Detection Based on Background Model
    Zhang, Yanbang
    Zhang, Fen
    Guo, Lei
    2018 37TH CHINESE CONTROL CONFERENCE (CCC), 2018, : 9374 - 9378
  • [44] Computational model for salient object detection with anisotropy
    Wu, Di
    Sun, Xiudong
    Xu, Yuannan
    Jiang, Yongyuan
    Hou, Chunfeng
    APPLIED OPTICS, 2012, 51 (11) : 1742 - 1748
  • [45] An Adaptive Computational Model for Salient Object Detection
    Zhang, Wei
    Wu, Q. M. Jonathan
    Wang, Guanghui
    Yin, Haibing
    IEEE TRANSACTIONS ON MULTIMEDIA, 2010, 12 (04) : 300 - 316
  • [46] Recursive Contour-Saliency Blending Network for Accurate Salient Object Detection
    Ke, Yun Yi
    Tsubono, Takahiro
    2022 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV 2022), 2022, : 1360 - 1370
  • [47] Contour-Aware Recurrent Cross Constraint Network for Salient Object Detection
    Yao, Cuili
    Kong, Yuqiu
    Feng, Lin
    Jin, Bo
    Si, Hui
    IEEE ACCESS, 2020, 8 (08): : 218739 - 218751
  • [48] Object detection and recognition using contour based edge detection and fast R-CNN
    Rani, Shilpa
    Ghai, Deepika
    Kumar, Sandeep
    MULTIMEDIA TOOLS AND APPLICATIONS, 2022, 81 (29) : 42183 - 42207
  • [50] Complementary Trilateral Decoder for Fast and Accurate Salient Object Detection
    Zhao, Zhirui
    Xia, Changqun
    Xie, Chenxi
    Li, Jia
    PROCEEDINGS OF THE 29TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2021, 2021, : 4967 - 4975