Matrix cracking and delamination detection in GFRP laminates using pre-trained CNN models

Cited by: 6
Authors
Chaupal, Pankaj [1 ]
Rohit, S. [1 ]
Rajendran, Prakash [1 ]
Affiliations
[1] Natl Inst Technol, Dept Mech Engn, Tiruchirappalli 620015, Tamil Nadu, India
Keywords
Undamaged; Damaged; Laminated composite structure; Data augmentation; CNN models; Transfer learning; DAMAGE DETECTION; COMPOSITE-MATERIALS; CLASSIFICATION;
DOI
10.1007/s40430-023-04060-w
CLC classification
TH [Machinery and Instrument Industry]
Subject classification code
0802
Abstract
The demand for fiber-reinforced polymer composites is increasing steadily because of their superior mechanical properties, high specific strength and stiffness, and high corrosion resistance. Matrix cracking and delamination are among the most significant types of damage in laminated composite structures. The main objective of this work is to classify a randomly oriented chopped glass fiber composite laminate as undamaged or damaged (matrix cracking and delamination) using five distinct convolutional neural network (CNN) models with transfer learning. A microscopic examination of the composite laminate is conducted in the thickness direction, at a scale of 100 μm, immediately before and after a three-point bending test. Data augmentation techniques such as mirroring, rotation, affine transformation, and noise addition are then applied, yielding a total of 13,464 images from 2 base images. These images are fed to five deep CNN models: VGG-16, ResNet-101, NasNetMobile, MobileNet-V2, and DenseNet-201. Using the augmented image dataset, the pre-trained CNN models with transfer learning are trained, validated, and tested with a 70:15:15 split. Comparative studies are then carried out on the total trainable and non-trainable parameters, computation time, training, validation, and testing accuracy, and F1 score of the different CNN models. VGG-16 has 138 million trainable parameters and therefore requires the longest computation time, whereas MobileNet-V2 has 3 million trainable parameters and requires the shortest. DenseNet-201, VGG-16, MobileNet-V2, and NasNetMobile converge quickly and produce the lowest validation loss and highest accuracy, whereas ResNet-101 does not converge easily, leading to the highest validation loss and lowest accuracy. The highest training, validation, and testing accuracies and F1 scores were observed for VGG-16, NasNetMobile, MobileNet-V2, and DenseNet-201, and the lowest for ResNet-101. Overall, MobileNet-V2 performed best when accuracy is weighed against total parameter count and computation time.
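As a rough illustration of the workflow described in the abstract (image augmentation followed by transfer learning on a frozen ImageNet-pre-trained backbone with a 70:15:15 split), a minimal Keras/TensorFlow sketch is given below. The directory layout, image size, augmentation strengths, and training hyperparameters are assumptions for illustration and are not taken from the paper.

```python
# Minimal sketch: augment microscopy images, then train a new classification
# head on a frozen, ImageNet-pre-trained MobileNet-V2 (undamaged vs. damaged).
# Paths, image size, and hyperparameters are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = (224, 224)  # assumed input resolution for the ImageNet backbone

def augment(image):
    """Augmentations named in the abstract: mirroring, rotation, additive noise.
    (Affine transformations would need e.g. keras_cv and are omitted here.)"""
    image = tf.image.random_flip_left_right(image)
    image = tf.image.rot90(image, k=tf.random.uniform([], 0, 4, dtype=tf.int32))
    image += tf.random.normal(tf.shape(image), stddev=0.02)
    return tf.clip_by_value(image, 0.0, 1.0)

def preprocess(image, label, training=False):
    image = tf.cast(image, tf.float32) / 255.0
    if training:
        image = augment(image)
    return image * 2.0 - 1.0, label  # MobileNet-V2 expects inputs in [-1, 1]

# Assumed layout: data/undamaged/*.png and data/damaged/*.png
train_ds, holdout_ds = tf.keras.utils.image_dataset_from_directory(
    "data", validation_split=0.30, subset="both", seed=42,
    image_size=IMG_SIZE, batch_size=32, label_mode="binary")

# Approximate the paper's 70:15:15 split by halving the 30% holdout.
n = tf.data.experimental.cardinality(holdout_ds)
val_ds = holdout_ds.take(n // 2).map(preprocess)
test_ds = holdout_ds.skip(n // 2).map(preprocess)
train_ds = train_ds.map(lambda x, y: preprocess(x, y, training=True))

# Transfer learning: freeze the pre-trained backbone, train only the new head.
base = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
base.trainable = False

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.2),
    layers.Dense(1, activation="sigmoid"),  # binary: undamaged / damaged
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=10)
print(model.evaluate(test_ds))
```

The same head-swapping setup applies to the other four backbones compared in the paper (VGG-16, ResNet-101, NasNetMobile, DenseNet-201); only the `tf.keras.applications` constructor and its expected input preprocessing change.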
Pages: 13
Related papers (50 in total)
  • [41] Robust and Transferable Anomaly Detection in Log Data using Pre-Trained Language Models
    Ott, Harold
    Bogatinovski, Jasmin
    Acker, Alexander
    Nedelkoski, Sasho
    Kao, Odej
    2021 IEEE/ACM INTERNATIONAL WORKSHOP ON CLOUD INTELLIGENCE (CLOUDINTELLIGENCE 2021), 2021, : 19 - 24
  • [42] Visual Tracking by Structurally Optimizing Pre-Trained CNN
    Liu, Chang
    Liu, Peng
    Zhao, Wei
    Tang, Xianglong
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2020, 30 (09) : 3153 - 3166
  • [43] Refining Pre-Trained Motion Models
    Sun, Xinglong
    Harley, Adam W.
    Guibas, Leonidas J.
    2024 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, ICRA 2024, 2024, : 4932 - 4938
  • [44] Efficiently Robustify Pre-Trained Models
    Jain, Nishant
    Behl, Harkirat
    Rawat, Yogesh Singh
    Vineet, Vibhav
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION, ICCV, 2023, : 5482 - 5492
  • [45] Pre-trained Models for Sonar Images
    Valdenegro-Toro, Matias
    Preciado-Grijalva, Alan
    Wehbe, Bilal
    OCEANS 2021: SAN DIEGO - PORTO, 2021,
  • [46] Pre-trained D-CNN Models for Detecting Complex Events in Unconstrained Videos
    Robinson, Joseph P.
    Fu, Yun
    SENSING AND ANALYSIS TECHNOLOGIES FOR BIOMEDICAL AND COGNITIVE APPLICATIONS 2016, 2016, 9871
  • [47] Pre-Trained Language Models and Their Applications
    Wang, Haifeng
    Li, Jiwei
    Wu, Hua
    Hovy, Eduard
    Sun, Yu
    ENGINEERING, 2023, 25 : 51 - 65
  • [48] μBERT: Mutation Testing using Pre-Trained Language Models
    Degiovanni, Renzo
    Papadakis, Mike
    2022 IEEE 15TH INTERNATIONAL CONFERENCE ON SOFTWARE TESTING, VERIFICATION AND VALIDATION WORKSHOPS (ICSTW 2022), 2022, : 160 - 169
  • [49] Devulgarization of Polish Texts Using Pre-trained Language Models
    Klamra, Cezary
    Wojdyga, Grzegorz
    Zurowski, Sebastian
    Rosalska, Paulina
    Kozlowska, Matylda
    Ogrodniczuk, Maciej
    COMPUTATIONAL SCIENCE, ICCS 2022, PT II, 2022, : 49 - 55
  • [50] What Matters for Out-of-Distribution Detectors using Pre-trained CNN?
    Kim, Dong-Hee
    Lee, Jaeyoon
    Chung, Ki-Seok
    PROCEEDINGS OF THE 17TH INTERNATIONAL JOINT CONFERENCE ON COMPUTER VISION, IMAGING AND COMPUTER GRAPHICS THEORY AND APPLICATIONS (VISAPP), VOL 4, 2022, : 264 - 273