Evolutionary Multi-view Face Tracking on Pixel Replaced Image in Video Sequence

Cited by: 0
Authors
Sato, Junya [1 ]
Akashi, Takuya [2 ]
Affiliations
[1] Iwate Univ, Grad Sch Engn, Dept Design & Media Technol, Morioka, Iwate, Japan
[2] Iwate Univ, Fac Engn, Dept Elect Engn & Comp Sci, Morioka, Iwate, Japan
Keywords
DOI
None available
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Nowadays, many computer vision techniques are applied in practical applications such as surveillance and facial recognition systems. Some of these applications focus on extracting information about people. However, people may feel psychological stress about having their personal information, such as their face, behavior, and clothing, recorded. Therefore, privacy protection for images and videos is necessary; specifically, detection and tracking methods should operate on privacy-protected images. Simple methods such as blurring and pixelating exist for this purpose and are often used, for example, in news programs. However, because such methods merely average pixel values, no features useful for detection and tracking remain, and the preprocessed images cannot be used. To solve this problem, we have proposed a shuffle filter and a multi-view face tracking method based on a genetic algorithm (GA). The filter protects privacy by changing pixel locations while preserving the color information. Since the color information is preserved, tracking can be achieved by basic template matching with a histogram. Moreover, by using a GA instead of a sliding window to search for the subject in the image, the search becomes more efficient. However, the tracking accuracy is still low and the preprocessing time is long; improving both is the purpose of this research. In the experiments, the improved method is compared with our previous work, CAMSHIFT, an online learning method, and a face detector. The results indicate that the accuracy of the proposed method is higher than that of the other methods.
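The key property the abstract relies on can be illustrated with a minimal sketch: shuffling pixel locations destroys the recognizable image content while leaving every pixel value in place, so the color histogram used for template matching is unchanged. The function names, the use of a seeded permutation as a "key", and the histogram parameters below are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

def shuffle_filter(image, seed=0):
    """Privacy-protect an image by permuting pixel locations.

    Sketch of the idea only (assumed details, not the paper's exact filter):
    the seed acts as a key, pixel *positions* are scrambled, but every pixel
    *value* is kept, so global color statistics survive.
    """
    h, w = image.shape[:2]
    rng = np.random.default_rng(seed)
    perm = rng.permutation(h * w)        # one permutation over all positions
    flat = image.reshape(h * w, -1)      # (num_pixels, channels)
    return flat[perm].reshape(image.shape)

def color_histogram(image, bins=8):
    """Concatenated per-channel histogram, a simple tracking feature."""
    return np.concatenate([
        np.histogram(image[..., c], bins=bins, range=(0, 256))[0]
        for c in range(image.shape[-1])
    ])

# The histogram survives shuffling, so histogram-based matching can still
# score candidate windows on the privacy-protected frames.
img = np.random.randint(0, 256, size=(32, 32, 3), dtype=np.uint8)
shuffled = shuffle_filter(img, seed=42)
assert not np.array_equal(img, shuffled)  # image content is scrambled
assert np.array_equal(color_histogram(img), color_histogram(shuffled))
```

In the paper's tracker, a GA searches over candidate window positions and scales, using a histogram similarity like the one above as the fitness function, instead of exhaustively sliding a window over the frame.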
Pages: 322 - 327
Page count: 6
Related Papers
50 items in total
  • [1] Robust multi-view face tracking
    Ho, K
    Yoo, DH
    Jung, SU
    Chung, MJ
    2005 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, 2005, : 3628 - 3633
  • [2] Constrained Multi-View Video Face Clustering
    Cao, Xiaochun
    Zhang, Changqing
    Zhou, Chengju
    Fu, Huazhu
    Foroosh, Hassan
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2015, 24 (11) : 4381 - 4393
  • [3] A new multi-view face tracking algorithm
    Ma, Bo
    Zhou, Yue
    Shanghai Jiaotong Daxue Xuebao/Journal of Shanghai Jiaotong University, 2010, 44 (07): : 902 - 906
  • [4] Incremental Multi-view Face Tracking Based on General View Manifold
    Wei, Wei
    Zhang, Yanning
    COMPUTER VISION - ACCV 2009, PT II, 2010, 5995 : 150 - 159
  • [5] Multi-view frontal face image generation: A survey
    Ning, Xin
    Nan, Fangzhe
    Xu, Shaohui
    Yu, Lina
    Zhang, Liping
    CONCURRENCY AND COMPUTATION-PRACTICE & EXPERIENCE, 2023, 35 (18):
  • [6] Multi-view Face Detection using Normalized Pixel Difference feature
    Micheal, A. Annie
    Geetha, P.
    2017 INTERNATIONAL CONFERENCE ON COMMUNICATION AND SIGNAL PROCESSING (ICCSP), 2017, : 988 - 992
  • [7] Coding of multi-view image sequences with video sensors
    Flierl, Markus
    Girod, Bernd
    2006 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, ICIP 2006, PROCEEDINGS, 2006, : 609 - +
  • [8] Probabilistic face tracking using boosted multi-view detector
    Li, PH
    Wang, HJ
    ADVANCES IN MULTIMEDIA INFORMATION PROCESSING - PCM 2004, PT 2, PROCEEDINGS, 2004, 3332 : 577 - 584
  • [9] Unsupervised Object of Interest Discovery in Multi-view Video Sequence
    Thummanuntawat, Thanaphat
    Kumwilaisak, Wuttipong
    Chinrungrueng, Jatuporn
    11TH INTERNATIONAL CONFERENCE ON ADVANCED COMMUNICATION TECHNOLOGY, VOLS I-III, PROCEEDINGS: UBIQUITOUS ICT CONVERGENCE MAKES LIFE BETTER!, 2009, : 1622 - 1627
  • [10] MULTI-VIEW METRIC LEARNING FOR MULTI-VIEW VIDEO SUMMARIZATION
    Wang, Linbo
    Fang, Xianyong
    Guo, Yanwen
    Fu, Yanwei
    2016 INTERNATIONAL CONFERENCE ON CYBERWORLDS (CW), 2016, : 179 - 182