A Cross-Attention Multi-Scale Performer With Gaussian Bit-Flips for File Fragment Classification

Cited by: 0
Authors
Liu, Sisung [1 ]
Park, Jeong Gyu [2 ]
Kim, Hyeongsik [3 ]
Hong, Je Hyeong [1 ,3 ]
Affiliations
[1] Hanyang Univ, Dept Artificial Intelligence, Seoul 04763, South Korea
[2] Hanyang Univ, Dept Elect Engn, Seoul 04763, South Korea
[3] Hanyang Univ, Dept Artificial Intelligence Semicond Engn, Seoul 04763, South Korea
Keywords
Transformers; Feature extraction; Data models; Adaptation models; Accuracy; Attention mechanisms; Computational modeling; Training; Electronic mail; Data augmentation; File fragment classification; transformer; multi-scale attention; cross-attention; performer;
DOI
10.1109/TIFS.2025.3539527
CLC number
TP301 [Theory and Methods];
Discipline code
081202;
Abstract
File fragment classification is a crucial task in digital forensics and cybersecurity, and has recently achieved significant improvement through the deployment of convolutional neural networks (CNNs) compared to traditional handcrafted feature-based methods. However, CNN-based models exhibit inherent biases that can limit their effectiveness on larger datasets. To address this limitation, we propose the Cross-Attention Multi-Scale Performer (XMP) model, which integrates the attention mechanisms of transformer encoders with the feature extraction capabilities of CNNs. Compared to our conference work, we additionally introduce a new Gaussian Bit-Flip (GBFlip) method for binary data augmentation, largely inspired by bit-flipping errors in digital systems, which improves model performance. Furthermore, we incorporate a fine-tuning approach and demonstrate that XMP adapts more effectively to diverse datasets than other CNN-based competitors without extensive hyperparameter tuning. Our experimental results on two public file fragment classification datasets show that XMP surpasses other CNN-based and RCNN-based models, achieving state-of-the-art performance in file fragment classification both with and without fine-tuning. Our code is available at https://github.com/DominicoRyu/XMP_TIFS.
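The abstract describes GBFlip only at a high level (binary data augmentation inspired by bit-flipping errors). The snippet below is a minimal illustrative sketch, not the authors' implementation: it assumes the fraction of flipped bits in a raw byte fragment is drawn from a Gaussian distribution, and the function name gbflip and the parameters mean_ratio and std_ratio are hypothetical. See the paper and the linked repository for the actual formulation.

    import numpy as np

    def gbflip(fragment, mean_ratio=0.01, std_ratio=0.005, rng=None):
        # Hypothetical sketch of a Gaussian bit-flip style augmentation.
        # fragment: 1-D np.uint8 array holding the raw bytes of a file fragment.
        if rng is None:
            rng = np.random.default_rng()
        bits = np.unpackbits(fragment)                  # bytes -> individual bits
        # Sample the flip ratio from a Gaussian and clip it to a valid range.
        ratio = float(np.clip(rng.normal(mean_ratio, std_ratio), 0.0, 1.0))
        n_flips = int(round(ratio * bits.size))         # Gaussian-sampled flip budget
        if n_flips > 0:
            idx = rng.choice(bits.size, size=n_flips, replace=False)
            bits[idx] ^= 1                              # flip the selected bit positions
        return np.packbits(bits)                        # bits -> bytes again

    # Example usage (hypothetical): augment a 512-byte fragment read from disk.
    # frag = np.frombuffer(open("sample.bin", "rb").read(512), dtype=np.uint8)
    # aug = gbflip(frag)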
Pages: 2109-2121
Number of pages: 13
Related papers
50 records in total
  • [1] Multi-scale cross-attention transformer encoder for event classification
    Hammad, A.
    Moretti, S.
    Nojiri, M.
    JOURNAL OF HIGH ENERGY PHYSICS, 2024, 2024 (03)
  • [2] CrossViT: Cross-Attention Multi-Scale Vision Transformer for Image Classification
    Chen, Chun-Fu
    Fan, Quanfu
    Panda, Rameswar
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, : 347 - 356
  • [3] CrossFormer: Multi-scale cross-attention for polyp segmentation
    Chen, Lifang
    Ge, Hongze
    Li, Jiawei
    IET IMAGE PROCESSING, 2023, 17 (12) : 3441 - 3452
  • [4] CERVICAL CELL CLASSIFICATION USING MULTI-SCALE FEATURE FUSION AND CHANNEL-WISE CROSS-ATTENTION
    Shi, Jun
    Zhu, Xinyu
    Zhang, Yuan
    Zheng, Yushan
    Jiang, Zhiguo
    Zheng, Liping
    2023 IEEE 20TH INTERNATIONAL SYMPOSIUM ON BIOMEDICAL IMAGING, ISBI, 2023,
  • [5] MCADNet: A Multi-Scale Cross-Attention Network for Remote Sensing Image Dehazing
    Tao, Tao
    Xu, Haoran
    Guan, Xin
    Zhou, Hao
    MATHEMATICS, 2024, 12 (23)
  • [6] Multi-scale network with shared cross-attention for audio–visual correlation learning
    Jiwei Zhang
    Yi Yu
    Suhua Tang
    Wei Li
    Jianming Wu
    Neural Computing and Applications, 2023, 35 : 20173 - 20187
  • [7] Diabetic retinopathy grading based on multi-scale residual network and cross-attention module
    Singh, Atul Kumar
    Madarapu, Sandeep
    Ari, Samit
    DIGITAL SIGNAL PROCESSING, 2025, 157
  • [8] Multi-scale network with shared cross-attention for audio-visual correlation learning
    Zhang, Jiwei
    Yu, Yi
    Tang, Suhua
    Li, Wei
    Wu, Jianming
    NEURAL COMPUTING & APPLICATIONS, 2023, 35 (27): : 20173 - 20187
  • [9] Multi-scale Sparse Network with Cross-Attention Mechanism for image-based butterflies fine-grained classification
    Li, Maopeng
    Zhou, Guoxiong
    Cai, Weiwei
    Li, Jiayong
    Li, Mingxuan
    He, Mingfang
    Hu, Yahui
    Li, Liujun
    APPLIED SOFT COMPUTING, 2022, 117
  • [10] Multi-Scale Cross-Attention Fusion Network Based on Image Super-Resolution
    Ma, Yimin
    Xu, Yi
    Liu, Yunqing
    Yan, Fei
    Zhang, Qiong
    Li, Qi
    Liu, Quanyang
    APPLIED SCIENCES-BASEL, 2024, 14 (06):