High-Definition Image Formation Using Multi-stage Cycle Generative Adversarial Network with Applications in Image Forensic

Cited: 0
Authors
Danish Arif
Zahid Mehmood
Amin Ullah
Simon Winberg
Affiliations
[1] University of Cape Town, Department of Electrical Engineering
[2] University of Engineering and Technology, Department of Computer Engineering
[3] University of Central Punjab, Department of Software Engineering, Faculty of Information Technology and Computer Science
Keywords
Generative adversarial network; Multistage cycle generative adversarial network; Face sketch; Image forensic; Image translation; Smart cities;
DOI: not available
Abstract
In the modern world, human safety and crime control are daunting tasks, and the number of street crime cases increases each year. In many cases the culprit is unknown, and the challenge is to identify the right culprit among possibly hundreds of candidates in densely populated public spaces, as in targeted killings or vehicle snatching. In these situations, a sketch artist working for the police forensic department often produces a drawing of the culprit's face based on descriptions from the victims or witnesses to the crime. However, hand-drawn sketches can be an inefficient means of matching facial photographs of the culprit, particularly when the matching software is designed around images of real faces. In this research work, a novel technique is proposed to generate hyperreal high-definition (H-HD) face images of the culprit from a single hand-drawn face sketch. 'Hyperreal' is used here as it is in the arts: making the image, albeit based on a person's recollection, truer to reality through a deep understanding of how the real subject would appear. To perform this image translation from sketch to H-HD face, two techniques are presented in this article, namely the cycle generative adversarial network (CGAN) and the multistage cycle generative adversarial network (MS-CGAN). MS-CGAN uses multiple layers as stages and minimizes the cycle-consistency and generative adversarial losses. CGAN uses paired data for training, whereas MS-CGAN uses unpaired data. The training results show that the MSE loss of the proposed technique is lower than that of CGAN. GANs can be evaluated in three ways: qualitative, quantitative, and observational. In this paper, a quantitative comparison of CGAN and MS-CGAN is made based on pixel-to-pixel comparison, and an observational analysis is performed on feedback from human observers.
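The cycle-consistency and adversarial losses mentioned above can be sketched as follows. This is an illustrative NumPy formulation of the standard CycleGAN-style objectives, not the authors' implementation; the function names, the L1 form of the cycle term, the least-squares adversarial variant, and the weight `lam` are all assumptions:

```python
import numpy as np

def cycle_consistency_loss(x, x_cycled, lam=10.0):
    """L1 penalty ||F(G(x)) - x||_1: a sketch translated to a photo and
    back should reproduce the original sketch (lam is a common default)."""
    return lam * float(np.mean(np.abs(x - x_cycled)))

def lsgan_generator_loss(d_fake):
    """Least-squares adversarial loss for the generator: discriminator
    scores on generated photos should approach 1 (i.e. look real)."""
    return float(np.mean((d_fake - 1.0) ** 2))

# Toy example: a perfect reconstruction incurs zero cycle loss,
# while any deviation from the original sketch is penalized.
sketch = np.random.rand(8, 8)
print(cycle_consistency_loss(sketch, sketch))        # -> 0.0
print(cycle_consistency_loss(sketch, sketch + 0.1))  # > 0
```

In a multistage variant, each stage would contribute its own pair of these terms, with training driven by their sum.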
According to the observational evaluation, 54% of participants voted for MS-CGAN, whereas 46% rated CGAN as the better performer. Two types of pixel-to-pixel comparison are performed in terms of mean square error (MSE) and root mean square error (RMSE): color-to-color comparison and sketch-to-sketch comparison. For color-to-color image comparison, CGAN achieved an MSE of 2.312 and an RMSE of 1.521, whereas MS-CGAN achieved an MSE of 2.232 and an RMSE of 1.494. For sketch-to-sketch pixel comparison, CGAN achieved an MSE of 1.901 and an RMSE of 1.379, whereas MS-CGAN achieved an MSE of 1.810 and an RMSE of 1.345. The development of MS-CGAN and the research in this article are aimed at helping police forensic departments generate a true-to-life H-HD face of a culprit and thereby contribute toward the overarching goal of maintaining a peaceful society.
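The pixel-to-pixel scores reported in the abstract are related by RMSE = sqrt(MSE); for instance, sqrt(2.232) ≈ 1.494 matches the MS-CGAN color-to-color figures. A minimal sketch of such a comparison, assuming the two images are equal-sized arrays (the helper names are hypothetical):

```python
import numpy as np

def mse(img_a, img_b):
    """Mean squared error between two equal-sized images."""
    a = np.asarray(img_a, dtype=np.float64)
    b = np.asarray(img_b, dtype=np.float64)
    return float(np.mean((a - b) ** 2))

def rmse(img_a, img_b):
    """Root mean squared error: RMSE = sqrt(MSE)."""
    return float(np.sqrt(mse(img_a, img_b)))

# Sanity check of the reported relationship: MSE 2.232 -> RMSE ~1.494.
print(round(float(np.sqrt(2.232)), 3))  # -> 1.494
```

Lower values on both metrics indicate a closer pixel-level match, which is the basis for the quantitative comparison between CGAN and MS-CGAN.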
Pages: 3887-3896
Page count: 9