Emotion recognition is widely applied in medicine, education, and human-computer interaction, with three main classes of approach: non-physiological signals, physiological signals, and hybrid signals. Hybrid methods are promising, not least because purely non-physiological signals, such as facial expressions, are easy to manipulate deliberately. Our study proposes a hybrid approach that combines EEG and facial expression data using decision-level fusion. We validated the approach on the DEAP database, focusing on binary classification along the arousal and valence dimensions, so the model assigns emotions to four quadrants: HVLA, HVHA, LVLA, and LVHA (high/low valence crossed with high/low arousal). From the dataset, we extracted 17 features from 32 EEG channels across 5 frequency bands for each subject, and an SVM with an RBF kernel achieved an accuracy of 54.49%. For facial expression classification, we preprocessed video frames from each subject's trials and trained a CNN, obtaining a validation accuracy of 68.36%. In the fusion step, we combined the predicted probabilities of the four labels from the two unimodal classifiers by weighted averaging to obtain the final emotion classification. These methods and results contribute to the fields of affective computing and emotion recognition.
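
To make the EEG branch concrete, the following is a minimal sketch of a four-class RBF-kernel SVM over flattened per-trial feature vectors using scikit-learn. The feature layout (32 channels × 5 bands × 17 features per trial), the random stand-in data, and the train/test split are assumptions for illustration, not the implementation used in the study.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Assumed shapes: DEAP has 32 subjects x 40 trials; we pretend the
# 17 features per channel-band pair were already extracted and flattened.
n_trials = 1280                     # 32 subjects x 40 trials
n_features = 32 * 5 * 17            # channels x bands x features (assumed layout)

rng = np.random.default_rng(0)
X = rng.normal(size=(n_trials, n_features))  # stand-in for real EEG features
y = rng.integers(0, 4, size=n_trials)        # 0..3 -> HVLA, HVHA, LVLA, LVHA

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

# RBF-kernel SVM with probability estimates enabled, so the later fusion
# step can consume per-class probabilities rather than hard labels.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```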
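
The abstract does not specify the facial-expression network, so the sketch below is only a plausible baseline CNN in Keras: the input size (48×48 grayscale face crops), the layer widths, and the training settings are all assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Hypothetical baseline; the actual architecture and frame resolution
# used in the study are not stated in the abstract.
model = models.Sequential([
    layers.Input(shape=(48, 48, 1)),        # assumed grayscale face crops
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(4, activation="softmax"),  # HVLA, HVHA, LVLA, LVHA
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(frames_train, labels_train,
#           validation_data=(frames_val, labels_val))
```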
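
The fusion step itself is described explicitly: the per-class probabilities from the two unimodal classifiers are combined by weighted averaging. Below is a minimal sketch, assuming a single scalar weight `w_eeg`; the actual weight values are not stated in the abstract.

```python
import numpy as np

LABELS = ["HVLA", "HVHA", "LVLA", "LVHA"]

def fuse_probabilities(p_eeg: np.ndarray, p_face: np.ndarray,
                       w_eeg: float = 0.5) -> np.ndarray:
    """Weighted average of per-class probabilities from the two modalities.

    w_eeg is an assumed hyperparameter; (1 - w_eeg) goes to the CNN branch.
    """
    p = w_eeg * p_eeg + (1.0 - w_eeg) * p_face
    return p / p.sum(axis=-1, keepdims=True)  # renormalise for safety

# Example: one trial's predicted probabilities from each modality.
p_eeg = np.array([0.10, 0.55, 0.20, 0.15])   # SVM (EEG) output
p_face = np.array([0.25, 0.30, 0.35, 0.10])  # CNN (facial) output

fused = fuse_probabilities(p_eeg, p_face, w_eeg=0.4)
print(LABELS[int(np.argmax(fused))], fused)
```

One natural motivation for unequal weights is to let the stronger modality dominate the final decision; here that would be the CNN, given its higher validation accuracy (68.36% versus 54.49% for the SVM).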