Video Anomaly Detection (VAD) under weak supervision operates with limited, video-level annotations. It plays a pivotal role in surveillance and security applications such as public safety, patient monitoring, and autonomous vehicles. Moreover, VAD extends to industrial settings, where it supports worker safety, real-time production quality monitoring, and predictive maintenance. These diverse applications highlight the versatility of VAD and its potential to transform processes across industries, beyond traditional surveillance. The majority of existing studies focus on mitigating critical shortcomings of VAD, such as false alarms and misdetections. These challenges can be addressed effectively by capturing the intricate spatiotemporal patterns within video data. Therefore, the proposed work, Swin Transformer-based Hybrid Temporal Adaptive Module (ST-HTAM) Abnormal Event Detection, introduces an intuitive temporal module while leveraging the strengths of the Swin (Shifted window-based) Transformer for spatial analysis. The novel aspect of this work lies in the hybridization of global self-attention and Convolutional-Long Short Term Memory (C-LSTM) networks, which capture global and local temporal dependencies, respectively. By extracting these spatial and temporal components, the proposed method, ST-HTAM, offers a comprehensive understanding of anomalous events and thereby enhances the accuracy and robustness of Weakly Supervised VAD (WS-VAD). Finally, an anomaly scoring mechanism is employed in the classification step to detect anomalies in test videos. The proposed system is tailored to real-time operation and highlights the dual focus on sophisticated Artificial Intelligence (AI) techniques and their impactful use cases across diverse domains. Comprehensive experiments on benchmark datasets demonstrate that ST-HTAM substantially outperforms state-of-the-art approaches. Code is available at https://github.com/Shalmiyapaulraj78/STHTAM-VAD.
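
The sketch below illustrates the hybrid temporal idea described above: a global multi-head self-attention branch combined with a convolutional-recurrent branch over per-snippet Swin features, followed by a snippet-level anomaly scoring head. It is a minimal illustration, not the authors' released implementation; the layer sizes, the Conv1d + LSTM stand-in for the C-LSTM, the concatenation-based fusion, and the MLP scoring head are all assumptions.

```python
# Minimal sketch of a hybrid temporal module (assumed design, for illustration only).
import torch
import torch.nn as nn

class HybridTemporalModule(nn.Module):
    def __init__(self, feat_dim=768, num_heads=8, hidden_dim=256):
        super().__init__()
        # Global branch: multi-head self-attention over the temporal (snippet) axis.
        self.attn = nn.MultiheadAttention(feat_dim, num_heads, batch_first=True)
        # Local branch: Conv1d + LSTM as a stand-in for the C-LSTM described in the paper.
        self.local_conv = nn.Conv1d(feat_dim, feat_dim, kernel_size=3, padding=1)
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        # Snippet-level anomaly scoring head (assumed MLP + sigmoid).
        self.score = nn.Sequential(
            nn.Linear(feat_dim + hidden_dim, 128), nn.ReLU(),
            nn.Linear(128, 1), nn.Sigmoid())

    def forward(self, x):               # x: (batch, snippets, feat_dim) Swin features
        g, _ = self.attn(x, x, x)       # global temporal dependencies
        l = self.local_conv(x.transpose(1, 2)).transpose(1, 2)
        l, _ = self.lstm(l)             # local temporal dependencies
        fused = torch.cat([g, l], dim=-1)
        return self.score(fused).squeeze(-1)  # per-snippet anomaly scores in [0, 1]

# Usage on dummy features: 2 videos, 32 snippets, 768-dim Swin embeddings.
scores = HybridTemporalModule()(torch.randn(2, 32, 768))  # shape (2, 32)
```

In such a design, the attention branch relates every snippet to every other snippet (global context), while the convolutional-recurrent branch emphasizes short-range temporal continuity; the exact fusion and training objective in ST-HTAM may differ from this sketch.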