Efficient virtual-to-real dataset synthesis for amodal instance segmentation of occlusion-aware rockfill material gradation detection

Cited by: 5
Authors
Hu, Yike [1 ]
Wang, Jiajun [1 ]
Wang, Xiaoling [1 ]
Yu, Jia [1 ]
Zhang, Jun [1 ]
Affiliation
[1] State Key Laboratory of Hydraulic Engineering Simulation and Safety, Tianjin 300072, People's Republic of China
Funding
National Natural Science Foundation of China;
Keywords
Gradation detection; Rockfill materials; Amodal instance segmentation; Dataset synthesis; Generative adversarial network; Image style transfer; IDENTIFICATION;
DOI
10.1016/j.eswa.2023.122046
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Image-based gradation detection methods for rockfill materials mostly ignore the occluded regions of particles, and for the mainstream deep learning methods, dataset annotation is time-consuming and labour-intensive. This study proposes an efficient virtual-to-real dataset synthesis method for rapid dataset synthesis and occlusion-aware gradation detection. Instead of photographing or scanning real particles at a large cost in time and labour, a Diffusion-GAN trained with 600 virtual images generated by 3D modeling efficiently produces 50,000 varied individual particle images, which are used to synthesize initial stacked particle images together with amodal annotations. A post-processing CycleGAN is then proposed, combining CycleGAN with image processing to preserve the background, which effectively converts the style of the synthetic images from virtual to real. The proposed dataset synthesis method is 50 times faster than manual labelling. Occlusion-aware gradation detection employs the Bilayer Convolutional Network (BCNet) to predict both the visible and occluded areas of particles; its maximum absolute error is 4.72%, lower than the 7.73% error produced by ignoring occluded regions. The AP50 of BCNet trained with the synthetic dataset is 0.941, very close to the result obtained with the real dataset.
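The synthesis step described in the abstract works because stacked images are composited from individual particle cutouts, so the full (amodal) mask of every particle is known before later pastes occlude it. The minimal NumPy sketch below illustrates that idea only; the function name, the RGBA-cutout input format, and the assumption that each particle lies fully inside the canvas are illustrative choices, not the authors' implementation.

```python
import numpy as np


def paste_particles(canvas_hw, particles, positions):
    """Composite particle cutouts onto a blank canvas, back to front.

    canvas_hw : (H, W) size of the output canvas.
    particles : list of RGBA arrays of shape (h, w, 4); alpha > 0 marks particle pixels.
    positions : list of (top, left) paste coordinates; later entries are drawn on top.
                Assumes every cutout fits entirely inside the canvas (no clipping).

    Returns the composited RGB image plus, per particle, the amodal mask
    (full extent, including occluded pixels) and the visible mask
    (extent not covered by particles pasted later).
    """
    H, W = canvas_hw
    image = np.zeros((H, W, 3), dtype=np.uint8)
    amodal_masks, visible_masks = [], []

    for rgba, (top, left) in zip(particles, positions):
        h, w = rgba.shape[:2]
        alpha = rgba[..., 3] > 0

        # Amodal mask: the whole particle footprint, recorded before any occlusion.
        amodal = np.zeros((H, W), dtype=bool)
        amodal[top:top + h, left:left + w] = alpha

        # Paste the cutout; the pixels it covers start as its visible region...
        image[amodal] = rgba[..., :3][alpha]
        visible = amodal.copy()

        # ...and are removed from the visible masks of previously pasted particles,
        # which this new particle now occludes.
        for prev in visible_masks:
            prev &= ~amodal

        amodal_masks.append(amodal)
        visible_masks.append(visible)

    return image, amodal_masks, visible_masks
```

Because both mask sets fall out of the compositing step itself, no manual occlusion annotation is needed; an amodal instance segmentation network such as BCNet can then be supervised on the visible and amodal masks jointly.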
Pages: 18