An image-based approach to predict instantaneous cutting forces using convolutional neural networks in end milling operation

Citations: 18
Authors
Su, Shuo [1 ]
Zhao, Gang [1 ,2 ]
Xiao, Wenlei [1 ,2 ]
Yang, Yiqing [1 ,2 ]
Cao, Xian [1 ]
Affiliations
[1] Beihang Univ, Sch Mech Engn & Automat, Beijing 100191, Peoples R China
[2] Beihang Univ, MIIT Key Lab Aeronaut Intelligent Mfg, Beijing 100191, Peoples R China
Keywords
Instantaneous cutting forces; Mechanistic force model; Convolutional neural network (CNN); Digital twin; COEFFICIENTS; SURFACE;
DOI
10.1007/s00170-021-07156-6
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Cutting force detection can contribute to predicting the productivity and quality of end milling operations. Instantaneous cutting force prediction for digital twins of end milling operations must be both near real-time and accurate. This paper proposes an image-based approach in which the higher-dimensional image representation carries more useful information and reduces the complexity of computing geometric data. The cutter frame image (CFI) is used as one of the inputs to a convolutional neural network (CNN) that predicts instantaneous cutting forces. Because massive training data are easier to generate than to measure, the approach trains the CNN on cutting forces produced by a mechanistic force model rather than on experimental cutting forces. Under a tested cutting condition, the correlation coefficient R² between predicted and simulated results is 0.9999 and the average time cost per image is 0.057 s, which validates that the image-based method can predict instantaneous cutting forces accurately and efficiently in the digital twin.
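The abstract outlines the pipeline: render a cutter frame image (CFI) for the current tool-workpiece engagement, feed it to a CNN, and regress the instantaneous force components, with training targets generated by a mechanistic force model rather than experiments. The following PyTorch sketch is a minimal, hypothetical illustration of that pipeline; the network depth, image size, channel counts, and training loop are assumptions made for readability, not the authors' published architecture.

# Hypothetical sketch: a CNN mapping a cutter frame image (CFI) to the
# instantaneous cutting force components [Fx, Fy, Fz], trained on forces
# simulated by a mechanistic model. Sizes and hyperparameters are assumed.
import torch
import torch.nn as nn

class ForceCNN(nn.Module):
    def __init__(self, in_channels: int = 1, n_forces: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64, 64),
            nn.ReLU(),
            nn.Linear(64, n_forces),  # outputs [Fx, Fy, Fz]
        )

    def forward(self, cfi: torch.Tensor) -> torch.Tensor:
        return self.regressor(self.features(cfi))

if __name__ == "__main__":
    model = ForceCNN()
    # Placeholder batch of CFIs; real CFIs would be rendered from the
    # tool-workpiece engagement geometry at each rotation angle.
    cfi_batch = torch.rand(8, 1, 64, 64)
    # Targets come from a mechanistic force model simulation, not experiments.
    simulated_forces = torch.rand(8, 3)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(5):  # a few illustrative training steps
        optimizer.zero_grad()
        loss = loss_fn(model(cfi_batch), simulated_forces)
        loss.backward()
        optimizer.step()

Once trained on simulated data, inference reduces to a single forward pass per CFI, which is consistent with the near real-time requirement the abstract places on the digital-twin setting.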
Pages: 1657-1669
Page count: 13