Reducing Test Cases with Attention Mechanism of Neural Networks

Cited: 0
Authors
Zhang, Xing [1 ]
Chen, Jiongyi [1 ]
Feng, Chao [1 ]
Li, Ruilin [1 ]
Su, Yunfei [1 ]
Zhang, Bin [1 ]
Lei, Jing [1 ]
Tang, Chaojing [1 ]
Institution
[1] Natl Univ Def Technol, Changsha, Peoples R China
Keywords
DOI
N/A
CLC Number
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
As fuzzing techniques become more effective at triggering program crashes, triaging crashes with less human effort has become increasingly important. To this end, test case reduction, which reduces a crashing input to its minimal form, plays an important role, especially when analyzing programs with random, complex, or large inputs. However, existing solutions rely on random algorithms or pre-defined rules, which are inaccurate and error-prone in many cases because of implementation variance in program internals. In this paper, we present SCREAM, a new approach that leverages neural networks to reduce test cases. In particular, by feeding the network a program's crashing and non-crashing inputs, the network learns to approximate the computation from the program entry point to the crash point and implicitly encodes which input bytes are significant to the crash. Because the trained network's parameters are not directly interpretable, we leverage the attention mechanism to explain the network, that is, to extract the significance of each input byte to the crash. Finally, the significant input bytes are re-assembled into the failure-inducing input. The main challenges of our approach are designing a proper dataset augmentation algorithm and a suitable network structure. To this end, we develop a unique dataset augmentation technique that generates adequate and highly differentiable samples and expands the search space of crashing inputs. Our research also features a novel network structure that can capture dependencies between input blocks in long sequences. We evaluated SCREAM on 41 representative programs. The results show that SCREAM outperforms state-of-the-art solutions in accuracy and efficiency. This improvement is made possible by the network's ability to summarize the significance of input bytes over multiple rounds of mutation, which tolerates the perturbation that occurs in the random reduction of a single crashing input.
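To make the described pipeline concrete, the following is a minimal sketch of the core idea, assuming a simple additive-attention classifier in PyTorch: a network trained to separate crashing from non-crashing inputs, whose attention weights are then read back as per-byte significance scores. All identifiers here (ByteAttentionClassifier, reduce_input, keep_ratio) are illustrative assumptions, not SCREAM's actual architecture, which uses a more elaborate structure to capture dependencies between input blocks in long sequences; the training samples would come from the paper's augmentation step, i.e., mutated variants of the crashing input labeled by re-executing the target program.

    import torch
    import torch.nn as nn

    class ByteAttentionClassifier(nn.Module):
        """Toy additive-attention classifier over raw input bytes."""
        def __init__(self, embed_dim=32, max_len=512):
            super().__init__()
            self.embed = nn.Embedding(256, embed_dim)    # one vector per byte value
            self.pos = nn.Embedding(max_len, embed_dim)  # learned positional encoding
            self.score = nn.Linear(embed_dim, 1)         # additive attention scorer
            self.classify = nn.Linear(embed_dim, 2)      # crash vs. non-crash logits

        def forward(self, x):
            # x: (batch, seq_len) tensor of byte values in [0, 255];
            # inputs longer than max_len would need chunking in practice
            positions = torch.arange(x.size(1), device=x.device)
            h = self.embed(x) + self.pos(positions)                  # (B, L, D)
            attn = torch.softmax(self.score(h).squeeze(-1), dim=-1)  # (B, L)
            pooled = (attn.unsqueeze(-1) * h).sum(dim=1)             # (B, D)
            return self.classify(pooled), attn

    def reduce_input(model, crashing_input, keep_ratio=0.25):
        """Keep only the bytes the attention layer scores as most significant."""
        x = torch.tensor(list(crashing_input), dtype=torch.long).unsqueeze(0)
        with torch.no_grad():
            _, attn = model(x)
        k = max(1, int(len(crashing_input) * keep_ratio))
        kept = attn[0].topk(k).indices.sort().values  # keep original byte order
        return bytes(crashing_input[i] for i in kept.tolist())

In this sketch, the model would first be trained with cross-entropy on the augmented crash/non-crash samples; reduce_input then keeps the top-scoring bytes in their original order, and the reduced input would be validated by re-executing the target to confirm it still triggers the same crash.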
Pages: 2075-2092
Number of Pages: 18
Related Papers
50 records
  • [1] Central Attention Mechanism for Convolutional Neural Networks
    Geng, Y.X.
    Wang, L.
    Wang, Z.Y.
    Wang, Y.G.
    IAENG International Journal of Computer Science, 2024, 51 (10) : 1642 - 1648
  • [2] Visualization of Convolutional Neural Networks with Attention Mechanism
    Yuan, Meng
    Tie, Bao
    Lin, Dawei
    HUMAN CENTERED COMPUTING, HCC 2021, 2022, 13795 : 82 - 93
  • [3] Probabilistic Attention Map: A Probabilistic Attention Mechanism for Convolutional Neural Networks
    Liu, Yifeng
    Tian, Jing
    SENSORS, 2024, 24 (24)
  • [4] Attention mechanism in neural networks: where it comes and where it goes
    Soydaner, Derya
NEURAL COMPUTING & APPLICATIONS, 2022, 34 (16): 13371 - 13385
  • [5] Cropout: A General Mechanism for Reducing Overfitting on Convolutional Neural Networks
    Hou, Wenbo
    Wang, Wenhai
    Liu, Ruo-Ze
    Lu, Tong
    2019 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2019,
  • [6] A generic shared attention mechanism for various backbone neural networks
    Huang, Zhongzhan
    Liang, Senwei
    Liang, Mingfu
    NEUROCOMPUTING, 2025, 611
  • [7] Utilizing the Attention Mechanism for Accuracy Prediction in Quantized Neural Networks
    Wei, Lu
    Ma, Zhong
    Yang, Chaojie
    Yao, Qin
    Zheng, Wei
    MATHEMATICS, 2025, 13 (05)
  • [8] A Generalized Attention Mechanism to Enhance the Accuracy Performance of Neural Networks
    Jiang, Pengcheng
    Neri, Ferrante
    Xue, Yu
    Maulik, Ujjwal
    INTERNATIONAL JOURNAL OF NEURAL SYSTEMS, 2024, 34 (12)