Federated learning has emerged as a transformative paradigm that enables collaborative model training across distributed clients while preserving data privacy. However, federated learning systems are vulnerable to backdoor attacks, in which malicious clients inject harmful triggers into the global model, undermining its security and reliability. Traditional defenses often struggle to balance robust protection with high model accuracy, leaving federated learning systems exposed to significant risk. In this article, we present SHIELD-FL (Synaptic Harmonization for Intelligent and Enhanced Learning Defense), a novel framework designed to provide comprehensive backdoor defense in federated learning environments. At the core of SHIELD-FL is SYNAPSE (Synaptic Neuron Adjustment for Protective System Enhancement), an innovative metric that leverages L2-norm analysis to detect and identify neurons influenced by backdoor triggers. This targeted approach enables precise adjustment and pruning of compromised neurons, neutralizing backdoor threats while preserving overall model performance. SHIELD-FL further strengthens protection through a coordinated, system-wide strategy applied across all clients, ensuring robust defense against backdoor attacks throughout the federated learning network. We rigorously evaluated SHIELD-FL on multiple datasets, and the results consistently show that the proposed model outperforms state-of-the-art defenses, achieving superior accuracy and resilience against backdoor attacks. Our approach provides a unified and effective solution for securing federated learning-based intrusion detection systems against emerging threats, marking a significant advancement in the field of security.
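To make the core idea concrete, the following is a minimal sketch of an L2-norm-based neuron screening step of the kind the abstract describes. It is an illustrative assumption, not the paper's actual SYNAPSE metric: the function names (`synapse_scores`, `prune_neurons`), the median/MAD outlier rule, and the threshold `k` are all hypothetical choices for demonstration.

```python
import numpy as np

def synapse_scores(update, k=6.0):
    """Hypothetical SYNAPSE-style screen: score each output neuron by the
    L2 norm of its incoming-weight update, then flag neurons whose norm is
    a robust outlier relative to the rest of the layer."""
    norms = np.linalg.norm(update, axis=1)           # per-neuron L2 norms
    med = np.median(norms)
    mad = np.median(np.abs(norms - med)) + 1e-12     # robust spread estimate
    return norms, (norms - med) / mad > k            # scores, outlier mask

def prune_neurons(update, mask):
    """Neutralize flagged neurons by zeroing their incoming weights."""
    cleaned = update.copy()
    cleaned[mask] = 0.0
    return cleaned

# Toy layer update: 8 neurons x 16 inputs; neuron 3 carries an abnormally
# large update, mimicking a backdoor-influenced neuron.
rng = np.random.default_rng(0)
update = rng.normal(0, 0.01, size=(8, 16))
update[3] += 0.5                                     # implanted outlier
norms, mask = synapse_scores(update)
cleaned = prune_neurons(update, mask)
```

In this toy example the implanted neuron's update norm dwarfs the layer median, so it is flagged and zeroed while benign neurons pass through unchanged; a real defense would fold such scores into the server-side aggregation across clients.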