The rapid expansion of network connectivity has significantly increased network traffic, introducing new cybersecurity challenges and heightened vulnerability to cyber attacks. To address these challenges, researchers have leveraged intelligent techniques such as machine learning (ML) to improve attack detection accuracy in network traffic. However, ML models often suffer from data imbalance in their training sets. This imbalance, typically caused by the uneven distribution of attack classes, hampers the classification performance of ML models in network intrusion detection. To mitigate class imbalance, various resampling techniques can be employed. This study evaluates several of these techniques: Random Oversampling, SMOTE, ADASYN, Random Undersampling, Tomek Links, and SMOTE-Tomek. Using the UNSW-NB15 dataset, we trained and tested ML models based on the Decision Tree, Random Forest, Gradient Boosting, XGBoost, and 1D-CNN algorithms. Our analysis demonstrates that resampling techniques significantly affect ML model performance. The Tomek Links technique applied to the 1D-CNN model achieved the highest performance, with an accuracy of 75.27%, a precision of 87.58%, and an F1-score of 76.22%. Notably, the best recall score, 67.57%, was obtained from the 1D-CNN model without resampling. These findings offer valuable guidance to researchers and engineers in selecting appropriate resampling techniques when developing robust detection models for network traffic attacks.
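To illustrate the simplest of the resampling techniques evaluated, the sketch below implements Random Oversampling in plain Python: minority-class samples are duplicated at random until every class matches the majority-class count. This is an illustrative sketch only, not the study's pipeline; in practice, libraries such as imbalanced-learn provide this together with SMOTE, ADASYN, Tomek Links, and SMOTE-Tomek.

```python
import random
from collections import Counter


def random_oversample(X, y, seed=0):
    """Balance classes by duplicating minority-class samples at random.

    A minimal sketch of Random Oversampling. X is a list of feature
    vectors, y the matching list of class labels.
    """
    rng = random.Random(seed)
    counts = Counter(y)
    target = max(counts.values())  # majority-class size
    X_res, y_res = list(X), list(y)
    for label, n in counts.items():
        # Indices of this class's samples in the original data
        idx = [i for i, lab in enumerate(y) if lab == label]
        # Duplicate random samples until the class reaches the target size
        for _ in range(target - n):
            i = rng.choice(idx)
            X_res.append(X[i])
            y_res.append(label)
    return X_res, y_res


# Toy imbalanced set: 4 "normal" flows vs 1 "attack" flow
X = [[0.1], [0.2], [0.3], [0.4], [0.9]]
y = ["normal"] * 4 + ["attack"]
X_bal, y_bal = random_oversample(X, y)
print(Counter(y_bal))  # both classes now have 4 samples
```

Undersampling-based techniques such as Tomek Links work in the opposite direction, removing majority-class samples that sit on the class boundary rather than duplicating minority ones.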