Background and motivation: Breast cancer detection remains a critical challenge in medical imaging because of the complexity of tumor features and the variability of breast tissue. Conventional mammography performs poorly in dense tissue, leading to missed diagnoses. Digital Breast Tomosynthesis (DBT) offers improved 3D imaging but imposes a significant computational burden. This study proposes a novel framework that combines a Fully Elman Neural Network (FENN) with feature fusion to improve the accuracy and reliability of breast cancer diagnosis.

Materials and methods: Mammogram images from the CBIS-DDSM dataset and DBT images from the BreastCancer-Screening-DBT dataset were used. Preprocessing applied Extended-Tuned Adaptive Frost Filtering (Ext-AFF) to reduce noise and enhance image quality. Features were extracted with a Disentangled Variational Autoencoder (D-VAE), capturing critical texture information, and fused with Deep Generalized Canonical Correlation Analysis (Dg-CCA) to maximize feature correlation across the two modalities. A Fully Elman Neural Network then classified each case as benign, malignant, biopsy-proven cancer, or normal tissue.

Results: The proposed FENN-based framework achieved superior classification performance compared with existing methods, with significant improvements in accuracy, sensitivity, specificity, and the Matthews correlation coefficient (MCC). Fusing mammogram and DBT images increased discriminative power, reducing both false positives and false negatives across the breast cancer classes.

Discussion and conclusion: Integrating mammogram and DBT image data with advanced machine learning techniques such as D-VAE and FENN enhances diagnostic precision. The proposed framework shows promise for improving clinical decision-making in breast cancer screening by overcoming the limitations of traditional imaging methods.
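For readers unfamiliar with the classifier, the defining feature of an Elman network is a context (recurrent) connection that feeds the previous hidden state back into the hidden layer. The following is a minimal NumPy sketch of that update applied to a fused feature vector, not the paper's implementation: all names, dimensions, and the decision to split the fused vector into sequence steps are illustrative assumptions.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())  # subtract max for numerical stability
    return e / e.sum()

class ElmanNet:
    """Minimal Elman network: hidden layer with a context connection
    that feeds the previous hidden state back in at each step."""
    def __init__(self, n_in, n_hidden, n_classes, seed=0):
        rng = np.random.default_rng(seed)
        self.W_x = rng.normal(0, 0.1, (n_hidden, n_in))      # input -> hidden
        self.W_h = rng.normal(0, 0.1, (n_hidden, n_hidden))  # context -> hidden
        self.b_h = np.zeros(n_hidden)
        self.W_y = rng.normal(0, 0.1, (n_classes, n_hidden)) # hidden -> output
        self.b_y = np.zeros(n_classes)

    def forward(self, seq):
        """seq: iterable of input vectors (here, chunks of a fused feature
        vector). Returns class probabilities after the final step."""
        h = np.zeros_like(self.b_h)
        for x in seq:
            # Elman update: current input plus fed-back previous hidden state
            h = np.tanh(self.W_x @ x + self.W_h @ h + self.b_h)
        return softmax(self.W_y @ h + self.b_y)

# Hypothetical usage: a 64-dim fused feature vector split into 4 steps of 16,
# mapped to the four classes used in the study (benign, malignant,
# biopsy-proven cancer, normal).
fused = np.random.default_rng(1).normal(size=64)
net = ElmanNet(n_in=16, n_hidden=32, n_classes=4)
probs = net.forward(fused.reshape(4, 16))
```

In a trained system the weights would of course be learned (e.g. by backpropagation through time) rather than randomly initialized; the sketch only shows the forward recurrence that distinguishes the Elman architecture.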
The system's ability to handle complex interdependencies in imaging data offers substantial potential for earlier and more accurate diagnosis.

Future directions: Future work will focus on clinical deployment of the framework, incorporating real-time image acquisition and analysis to shorten time to diagnosis. Scaling the system to large datasets of varying image quality will further validate its robustness and applicability across diverse clinical environments.