The digitization of financial transactions has led to a rise in credit card fraud, necessitating robust measures to secure digital financial systems against fraudsters. Traditional centralized approaches to fraud detection, despite their effectiveness, often fail to preserve the confidentiality of financial data. Federated Learning (FL) has therefore emerged as a promising solution, enabling secure and private model training across organizations. However, the practical deployment of FL is hindered by data heterogeneity among institutions, which complicates model convergence. To address this issue, we propose FedFusion, which fuses local and global models to harness the strengths of both, ensuring convergence even when clients' data are heterogeneous to the point of complete feature discrepancy. In our setup, three distinct datasets with entirely different feature sets are assigned to separate federated clients. Prior to FL training, each dataset is preprocessed to select significant features, and three deep learning models are compared. The Multilayer Perceptron (MLP), identified as the best-performing model, is then trained in a personalized manner on each dataset. These trained MLPs serve as local models, while the shared MLP architecture acts as the global model. FedFusion then trains all clients adaptively, optimizing the fusion proportions. Experimental results demonstrate the approach's superiority, achieving detection rates of 99.74%, 99.70%, and 96.61% for clients 1, 2, and 3, respectively, highlighting the effectiveness of FedFusion in addressing data heterogeneity and paving the way for more secure and efficient fraud detection in digital finance.
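The abstract does not specify the fusion rule itself. A minimal sketch, assuming the fusion is a layer-wise convex combination of local and global model weights with a per-client fusion proportion `alpha` (both the function name and the combination rule are illustrative assumptions, not the paper's stated method):

```python
import numpy as np

def fuse_parameters(local_params, global_params, alpha):
    """Layer-wise convex combination of local and global weights.

    alpha is the proportion given to the local model; (1 - alpha)
    goes to the global model. In FedFusion this proportion would be
    optimized per client during training (assumed, not specified
    in the abstract).
    """
    return {
        name: alpha * local_params[name] + (1.0 - alpha) * global_params[name]
        for name in local_params
    }

# Toy example with a single 2x2 weight matrix per model.
local = {"w": np.ones((2, 2))}
global_ = {"w": np.zeros((2, 2))}
fused = fuse_parameters(local, global_, alpha=0.7)
```

A larger `alpha` keeps a client's model closer to its personalized local weights, which is what allows convergence when clients share no features; a smaller `alpha` pulls it toward the shared global model.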