Active Contour Model Using Fast Fourier Transformation for Salient Object Detection

Cited: 0
Authors
Khan, Umer Sadiq [1 ]
Zhang, Xingjun [1 ]
Su, Yuanqi [1 ]
Affiliations
[1] Xi An Jiao Tong Univ, Sch Comp Sci & Technol, Xian 710049, Peoples R China
Keywords
active contours; frequency domain; FFT; Fourier force function; salient object detection; LEVEL SET; IMAGE SEGMENTATION; MUMFORD; FORMULATION; EVOLUTION; DRIVEN; COLOR;
DOI
10.3390/electronics10020192
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
The active contour model is a widely studied technique for salient object detection. Most active contour models for saliency detection are developed in the context of natural scenes, and their behavior on synthetic and medical images is not well investigated. Existing active contour models perform efficiently in many complex scenarios but face challenges on synthetic and medical images, such as obtaining a precise, automatically fitted contour within limited time and the expensive computational cost of initialization. Our aim is to detect the object boundary automatically, without re-initialization, so that the contour evolution extracts the salient object. To this end, we propose a simple, novel numerical solution scheme that applies the fast Fourier transform (FFT) to the active contour (snake) differential equations and offers two major enhancements: first, it completely avoids approximating the expensive spatial derivatives with finite differences, and the regularization scheme can be extended more generally; second, the FFT-based solution is significantly faster than the traditional solution in the spatial domain. Finally, the model applies a Fourier force function to fit curves naturally and extract salient objects from the background. Compared with state-of-the-art methods, the proposed method achieves at least a 3% increase in accuracy on three diverse sets of images. Moreover, it runs very fast: the average running time of the proposed method is about one twelfth of the baseline.
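The core idea of solving the snake evolution in the frequency domain can be illustrated with a minimal sketch. The following is an illustrative reconstruction of a standard FFT-accelerated semi-implicit snake update (in the spirit of Ihlow and Seiffert, related paper [38]), not the paper's exact Fourier force formulation; the function name, parameter values, and external force are assumptions for demonstration only.

```python
import numpy as np

def fft_snake_step(x, y, fx, fy, alpha=0.1, beta=0.01, gamma=1.0):
    """One semi-implicit snake update solved in the Fourier domain (illustrative sketch).

    x, y   : coordinates of a closed contour sampled at N points
    fx, fy : external force evaluated at the contour points
    alpha  : elasticity (first-derivative) weight
    beta   : rigidity (second-derivative) weight
    gamma  : time step
    """
    n = len(x)
    # Eigenvalues of the circulant internal-energy operator
    # -alpha * d^2/ds^2 + beta * d^4/ds^4 for a closed (periodic) contour.
    omega = 2.0 * np.pi * np.arange(n) / n
    lam = alpha * (2.0 - 2.0 * np.cos(omega)) + beta * (2.0 - 2.0 * np.cos(omega)) ** 2
    denom = 1.0 + gamma * lam
    # Semi-implicit Euler step (I + gamma*A) x_new = x + gamma*f,
    # diagonalized by the DFT because A is circulant.
    x_new = np.real(np.fft.ifft(np.fft.fft(x + gamma * fx) / denom))
    y_new = np.real(np.fft.ifft(np.fft.fft(y + gamma * fy) / denom))
    return x_new, y_new
```

For a closed contour of N points, each update costs O(N log N) and avoids assembling or inverting the banded stiffness matrix used by the traditional spatial-domain solution.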
Pages: 1-20
Number of pages: 20
Related Papers
50 records in total
  • [31] Joint Learning of Salient Object Detection, Depth Estimation and Contour Extraction
    Zhao, Xiaoqi
    Pang, Youwei
    Zhang, Lihe
    Lu, Huchuan
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2022, 31 : 7350 - 7362
  • [32] Fast Active Contour for Object Tracking in Image Sequence
    Fekir, Abdelkader
    Benamrane, Nacera
    2014 IEEE/ACS 11TH INTERNATIONAL CONFERENCE ON COMPUTER SYSTEMS AND APPLICATIONS (AICCSA), 2014, : 184 - 189
  • [33] Depthwise Nonlocal Module for Fast Salient Object Detection Using a Single Thread
    Li, Haofeng
    Li, Guanbin
    Yang, Binbin
    Chen, Guanqi
    Lin, Liang
    Yu, Yizhou
    IEEE TRANSACTIONS ON CYBERNETICS, 2021, 51 (12) : 6188 - 6199
  • [34] Hierarchical Contour Closure-Based Holistic Salient Object Detection
    Liu, Qing
    Hong, Xiaopeng
    Zou, Beiji
    Chen, Jie
    Chen, Zailiang
    Zhao, Guoying
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2017, 26 (09) : 4537 - 4552
  • [35] The Online Detection of Faulty Insulator Using Fast Fourier Transformation
    Yang, YingHong
    Wang, LiChun
    2014 2ND INTERNATIONAL CONFERENCE ON SYSTEMS AND INFORMATICS (ICSAI), 2014, : 175 - 179
  • [36] Active Contours in the Complex Domain for Salient Object Detection
    Khan, Umer Sadiq
    Zhang, Xingjun
    Su, Yuanqi
    APPLIED SCIENCES-BASEL, 2020, 10 (11):
  • [37] Semi-supervised Active Salient Object Detection
    Lv, Yunqiu
    Liu, Bowen
    Zhang, Jing
    Dai, Yuchao
    Li, Aixuan
    Zhang, Tong
    PATTERN RECOGNITION, 2022, 123
  • [38] Snakes revisited - Speeding up active contour models using the Fast Fourier Transform
    Ihlow, A
    Seiffert, U
    PROCEEDINGS OF THE EIGHTH IASTED INTERNATIONAL CONFERENCE ON INTELLIGENT SYSTEMS AND CONTROL, 2005, : 416 - 420
  • [39] Moving object segmentation and detection for monocular robot based on active contour model
    Liu, PR
    Meng, MQH
    Liu, PX
    ELECTRONICS LETTERS, 2005, 41 (24) : 1320 - 1322
  • [40] Object Tracking Using the Parametric Active Contour Model in Video Streams
    Ciecholewski, Marcin
    PROCEEDINGS OF THE 9TH INTERNATIONAL CONFERENCE ON COMPUTER RECOGNITION SYSTEMS, CORES 2015, 2016, 403 : 421 - 429