Yarn state detection based on lightweight network and knowledge distillation

Cited by: 0
Authors
Ren G. [1 ]
Tu J. [1 ]
Li Y. [1 ]
Qiu Z. [1 ]
Shi W. [1 ]
Affiliation
[1] Zhejiang Key Laboratory of Modern Textile Equipment Technology, Zhejiang Sci-Tech University, Hangzhou, Zhejiang
Keywords
deployment model; knitting; knowledge distillation; lightweight neural network; transfer learning; yarn recognition;
DOI
10.13475/j.fzxb.20220508801
Abstract
Objective The knotting machine in the circular weft knitting production line draws in the yarn ends from different yarn packages through a yarn guide tube and feeds the absorbed yarn to the knotting device to complete the knotting process. To address the difficulty of detecting the multi-state, multi-type yarn absorbed into the yarn guide tube of the knotting machine, a detection scheme based on machine vision was proposed to monitor the number and color of yarns in the guide tube in real time and thereby ensure the reliability of the joint. Method To overcome the limitations of conventional yarn detection, an image classification method based on deep learning was proposed. The 3,500 collected images were divided into a training set of 2,800 images and a test set of 700 images, with 560 images further separated from the training set as the validation set. A lightweight self-built student network was then constructed by stacking depthwise separable convolutions. To overcome the student network's low accuracy and weak generalization, a combined training method of transfer learning and knowledge distillation was used to train the self-built network, and the weights of the final trained student network were deployed on the mobile terminal. Results Experimental results showed that the teacher network trained with transfer learning reached a validation accuracy above 92% after the first epoch, and the convergence of its loss curve was also significantly accelerated (Fig. 9). When the student network was trained by knowledge distillation, the settings of the loss weight a and the distillation temperature T showed no clear pattern in their effect on validation accuracy, but accuracy generally improved over the 95.7% achieved by the student network before distillation (Tab. 3).
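The parameter savings that motivate building the student network from depthwise separable convolutions can be illustrated with a simple count. This is a generic sketch; the layer sizes below are hypothetical and not taken from the paper.

```python
# Parameter-count comparison between a standard convolution and a
# depthwise separable convolution (the building block stacked in the
# self-built student network). Layer sizes are illustrative only.

def standard_conv_params(k: int, c_in: int, c_out: int) -> int:
    # A standard k x k convolution mixes space and channels in one step.
    return k * k * c_in * c_out

def depthwise_separable_params(k: int, c_in: int, c_out: int) -> int:
    # Depthwise step: one k x k filter per input channel.
    # Pointwise step: a 1 x 1 convolution to mix channels.
    return k * k * c_in + c_in * c_out

k, c_in, c_out = 3, 64, 128
std = standard_conv_params(k, c_in, c_out)        # 73728
sep = depthwise_separable_params(k, c_in, c_out)  # 8768
print(f"standard: {std}, separable: {sep}, ratio: {std / sep:.1f}x")
```

For a 3×3 layer with 64 input and 128 output channels, the separable form needs roughly 8× fewer parameters, which is what makes the stacked student network small enough for embedded deployment.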
When the loss weight a was set to 0.2 and the distillation temperature T to 3, the best result was obtained, with a top-1 validation accuracy of 99.57%. Comparative inference experiments were conducted on the student network before and after distillation, the teacher network, and typical lightweight networks (Tab. 4). The student network's test-set accuracy improved from 96.00% before distillation to 99.28% after distillation. In addition, compared with current typical lightweight models, the self-built lightweight student network had fewer parameters and less computation, which in turn reduced the model's forward inference time. When the trained self-built network was deployed on the embedded terminal for practical testing, the predicted probability for a single yarn exceeded 70% (Fig. 11), while that for a double yarn exceeded 80%. After repeated tests, the actual yarn detection accuracy reached 98.86% (Tab. 5). A difference in test accuracy between the PC and the embedded terminal was observed. The analysis showed that, on the one hand, the PC-side test used previously captured images, whereas the embedded-side test ran on a live video stream; on the other hand, some precision of the weight parameters may be lost during model quantization, acceleration, and deployment. Conclusion The yarn detection results show that the scheme meets practical application needs and lays a solid foundation for the later application and popularization of the knotting machine.
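The distillation objective tuned above (loss weight a, temperature T) can be sketched as a weighted sum of the hard-label cross-entropy and a soft-target term computed from temperature-scaled teacher and student logits. This is a minimal generic sketch, not the paper's implementation; how a weights the two terms, and the T² scaling of the soft term, are standard-practice assumptions, with the paper's best setting (a = 0.2, T = 3) used as defaults.

```python
import math

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; higher T gives a softer distribution.
    scaled = [z / T for z in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    s = sum(exps)
    return [e / s for e in exps]

def distillation_loss(student_logits, teacher_logits, label, a=0.2, T=3.0):
    # Hard-label cross-entropy at T = 1.
    hard = -math.log(softmax(student_logits)[label])
    # Soft-target term: KL divergence between temperature-softened
    # teacher and student distributions, scaled by T^2 so its magnitude
    # stays comparable to the hard term as T grows.
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    soft = (T ** 2) * sum(pt * (math.log(pt) - math.log(ps))
                          for pt, ps in zip(p_t, p_s))
    return a * hard + (1 - a) * soft

loss = distillation_loss([2.0, 0.5, 0.1], [1.8, 0.9, 0.2], label=0)
print(f"combined loss: {loss:.4f}")
```

When the student's softened distribution matches the teacher's, the soft term vanishes and only the hard-label loss remains, which is why sweeping a and T (as in Tab. 3) trades off imitation of the teacher against fitting the ground-truth labels.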
In addition, although the current yarn detection device is designed for the knotting machine, it can be applied to many other yarn detection scenarios with only optimization and upgrading of the hardware and algorithm. Its compact size and low cost give it clear commercial promotion value and significance. © 2023 China Textile Engineering Society. All rights reserved.
Pages: 205-212 (7 pages)