The development of AI technology has led to an increase in the amount and variety of data. Deep neural networks (DNNs) are widely used in computer vision, speech recognition, and recommender systems, all of which require large amounts of user data. However, privacy constraints often prevent this data from being collected, processed, and analyzed centrally in a single machine or data center. Federated learning (FL) enables model updates and parameter exchanges among multiple devices or data centers without sharing raw data, thereby protecting data privacy while preserving model accuracy. Nevertheless, recent studies have shown that the information transmitted during FL training still poses privacy risks: private user data can be inferred from locally shared model outputs. This paper presents a secure federated learning scheme that combines differential privacy (DP) with homomorphic encryption (HE). The proposed scheme uses the Laplace mechanism to perturb each client's local model parameters, and applies fully homomorphic encryption (FHE) based on ring learning with errors (RLWE) to prevent theft by malicious attackers. Extensive experiments show that our scheme achieves model performance competitive with the FL baseline while improving computational efficiency. Furthermore, our privacy analysis demonstrates that the approach effectively prevents malicious attackers from stealing or reconstructing private data, providing strong privacy protection.
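The Laplace perturbation step mentioned above can be illustrated with a minimal sketch. This is not the paper's implementation; the sensitivity value, clipping strategy, and privacy budget `epsilon` below are illustrative assumptions, and the parameters are represented as a flat NumPy array for simplicity.

```python
import numpy as np

def laplace_perturb(params, sensitivity, epsilon, rng=None):
    """Perturb model parameters with Laplace noise of scale sensitivity/epsilon.

    params      : flattened array of local model parameters (assumed shape)
    sensitivity : L1 sensitivity of the released parameters (illustrative value)
    epsilon     : per-round privacy budget (illustrative value)
    """
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon, size=params.shape)
    return params + noise

# Example: perturb a toy weight vector before sharing it with the server
weights = np.array([0.5, -1.2, 0.3])
noisy_weights = laplace_perturb(weights, sensitivity=1.0, epsilon=1.0)
```

A smaller `epsilon` yields a larger noise scale and hence stronger privacy at the cost of accuracy, which is the trade-off the experiments above evaluate against the FL baseline.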