Robust coal granularity estimation via deep neural network with an image enhancement layer

Cited by: 7
Authors
Xi, Chen [1 ,2 ,3 ]
Feng, Hua-Yi [3 ]
Wang, Jia-Le [3 ]
Affiliations
[1] Key Lab Civil Aircraft Airworthiness Technol, Tianjin, Peoples R China
[2] Civil Aviat Univ China, Coll Safety Sci & Engn, Tianjin, Peoples R China
[3] Tianjin Meiteng Technol CO LTD, Dept Intelligent Percept, Tianjin, Peoples R China
Keywords
Coal granularity estimation; deep neural network; dust removal; image enhancement; industrial and mining intelligence; PARTICLE-SIZE DISTRIBUTION; SEGMENTATION METHOD; ATTENUATION; ALGORITHM;
DOI
10.1080/09540091.2021.2015290
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Accurate granularity estimation from ore images is vital for automatic geometric-parameter detection and composition analysis in the ore dressing process. Machine-learning-based methods have been widely used for multi-scenario ore granularity estimation. However, the adhesion of coal particles in the images usually reduces segmentation accuracy, because powdery coal fills the gaps between blocky pieces and blurs the edge contrast between them. As a result, coal granularity estimation is still largely carried out empirically. To address this problem, we propose a novel coal granularity estimation method based on a deep neural network called Res-SSD. To further improve detection performance, we add an image enhancement layer to Res-SSD. Since the dust generated during production and transportation seriously degrades image quality, we first propose an image denoising method based on dust modelling. Second, by investigating the imaging characteristics of coal, we propose the optical balance transformation (OBT), which increases the distinguishability of coal in dark zones while also suppressing overexposed spots. Experimental results show that the proposed method outperforms classic and state-of-the-art methods in accuracy while achieving comparable speed.
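The abstract describes an enhancement layer that removes the dust veil and rebalances exposure before detection. As a rough illustration only, the sketch below chains a standard dark-channel-prior dehazing step with a simple two-gamma tone balance; the paper's actual dust model and OBT are not specified in the abstract, so every function, parameter, and file name here is a hypothetical stand-in rather than the authors' method.

```python
# Illustrative pre-processing stage placed in front of a detector such as
# Res-SSD. Dehazing uses the classic dark-channel prior; the tone step is a
# generic shadow-lift / highlight-compression blend standing in for OBT.
import cv2
import numpy as np


def dehaze_dark_channel(bgr_u8, patch=15, omega=0.95, t_min=0.1):
    """Remove a dust/haze-like veil with the dark-channel prior."""
    img = bgr_u8.astype(np.float32) / 255.0
    kernel = np.ones((patch, patch), np.uint8)
    dark = cv2.erode(img.min(axis=2), kernel)
    # Atmospheric light: mean colour of the brightest 0.1% dark-channel pixels.
    n_top = max(1, int(dark.size * 0.001))
    idx = np.unravel_index(np.argsort(dark, axis=None)[-n_top:], dark.shape)
    A = img[idx].mean(axis=0)
    # Transmission estimate, then scene-radiance recovery.
    t = 1.0 - omega * cv2.erode((img / A).min(axis=2), kernel)
    t = np.clip(t, t_min, 1.0)[..., None]
    return np.clip((img - A) / t + A, 0.0, 1.0)


def balance_tone(img, gamma_dark=0.6, gamma_bright=1.4, thresh=0.5):
    """Brighten dark zones and compress overexposed spots (OBT stand-in)."""
    v = img.mean(axis=2, keepdims=True)          # rough per-pixel luminance
    lifted = np.power(img, gamma_dark)           # lift shadow detail
    compressed = np.power(img, gamma_bright)     # tame bright spots
    w = np.clip((v - thresh) / (1.0 - thresh), 0.0, 1.0)
    return (1.0 - w) * lifted + w * compressed


def enhance(bgr_u8):
    """Full enhancement layer: dehaze, then tone balance; returns uint8."""
    out = balance_tone(dehaze_dark_channel(bgr_u8))
    return (out * 255.0).astype(np.uint8)


if __name__ == "__main__":
    frame = cv2.imread("coal_belt_frame.png")    # hypothetical input frame
    cv2.imwrite("coal_belt_enhanced.png", enhance(frame))
```

In a full pipeline of the kind the abstract outlines, such an enhancement stage would simply run ahead of the detector, so the granularity estimator sees dehazed, tone-balanced frames instead of raw camera output.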
Pages: 472-491
Number of pages: 20
Related Papers
50 records in total
  • [1] Image operator forensics and sequence estimation using robust deep neural network
    Agarwal, Saurabh
    Jung, Ki-Hyun
    MULTIMEDIA TOOLS AND APPLICATIONS, 2023, 83 (16) : 47431 - 47454
  • [2] Image operator forensics and sequence estimation using robust deep neural network
    Saurabh Agarwal
    Ki-Hyun Jung
    Multimedia Tools and Applications, 2024, 83 : 47431 - 47454
  • [3] Binaural Deep Neural Network for Robust Speech Enhancement
    Jiang, Yi
    Liu, Runsheng
    2014 IEEE INTERNATIONAL CONFERENCE ON SIGNAL PROCESSING, COMMUNICATIONS AND COMPUTING (ICSPCC), 2014, : 692 - 695
  • [4] Image Annotation Via Deep Neural Network
    Sun Chengjian
    Zhu, Songhao
    Shi, Zhe
    2015 14TH IAPR INTERNATIONAL CONFERENCE ON MACHINE VISION APPLICATIONS (MVA), 2015, : 518 - 521
  • [5] Deep Convolutional Neural Network for Ultrasound Image Enhancement
    Perdios, Dimitris
    Vonlanthen, Manuel
    Besson, Adrien
    Martinez, Florian
    Arditi, Marcel
    Thiran, Jean-Philippe
    2018 IEEE INTERNATIONAL ULTRASONICS SYMPOSIUM (IUS), 2018,
  • [6] Image resolution enhancement via image restoration using neural network
    Zhang, Shuangteng
    Lu, Yihong
    JOURNAL OF ELECTRONIC IMAGING, 2011, 20 (02)
  • [7] Robust deep convolutional neural network against image distortions
    Wang, Liang-Yao
    Chen, Sau-Gee
    Chien, Feng-Tsun
    APSIPA TRANSACTIONS ON SIGNAL AND INFORMATION PROCESSING, 2021, 10
  • [8] Target attention deep neural network for infrared image enhancement
    Wang, Dong
    Lai, Rui
    Guan, Juntao
    INFRARED PHYSICS & TECHNOLOGY, 2021, 115
  • [9] Nonlinear Image Interpolation via Deep Neural Network
    Zhou, Wentian
    Li, Xin
    Reynolds, Daryl S.
    2017 FIFTY-FIRST ASILOMAR CONFERENCE ON SIGNALS, SYSTEMS, AND COMPUTERS, 2017, : 228 - 232
  • [10] Image denoising via deep network based on edge enhancement
    Chen X.
    Zhan S.
    Ji D.
    Xu L.
    Wu C.
    Li X.
    Journal of Ambient Intelligence and Humanized Computing, 2023, 14 (11) : 14795 - 14805