Forest fires often cause significant ecological damage and loss of human life because they spread rapidly and are difficult to extinguish. To improve fire detection efficiency, we propose FFD-YOLO (Forest Fire Detection model based on YOLO), an improved model built on YOLOv8. First, to enable the model to effectively capture flame edges and spatial information, we design LEIEM (Light Edge Information Extraction Module) and integrate it into the YOLOv8 backbone. Second, to improve the model's ability to detect flames at multiple scales, we develop a mechanism called SLSA (Strip Large Kernel Spatial Attention) and combine it with ECA (Efficient Channel Attention) to form the DF (Dynamic Fusion) module, which replaces the original upsampling components of YOLOv8. In addition, we construct a synthetic dataset containing pseudo-fire examples, such as toy lights resembling flames, to strengthen the model's resistance to interference. We also develop a complementary system that transmits detected fire information to forest rangers, improving the efficiency of forest fire response. FFD-YOLO achieves a 2.9% improvement in $AP_{0.5}$ over YOLOv8 and meets real-time detection requirements. The code and dataset will be available at https://github.com/ZehuaChenLab/FFD-YOLO.
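
For readers unfamiliar with the attention components named above, the following is a minimal PyTorch sketch of one plausible realization, assuming SLSA uses decomposed 1×k / k×1 depthwise strip convolutions (a common construction for strip large-kernel attention) and that ECA follows the published ECA-Net design (global average pooling followed by a 1D convolution across channels). The DF wiring shown here, applying both attentions and fusing before a 2× upsample, is a hypothetical illustration, not the authors' exact architecture.

```python
# Hypothetical sketch of the attention components named in the abstract.
# SLSA internals and the DF fusion wiring are assumptions; only ECA follows
# the published ECA-Net design (global pooling + 1D conv over channels).
import torch
import torch.nn as nn


class SLSA(nn.Module):
    """Strip Large Kernel Spatial Attention (assumed: decomposed depthwise strips)."""

    def __init__(self, channels: int, k: int = 11):
        super().__init__()
        # A k x k large kernel approximated by 1 x k and k x 1 depthwise strips.
        self.strip_h = nn.Conv2d(channels, channels, (1, k), padding=(0, k // 2), groups=channels)
        self.strip_v = nn.Conv2d(channels, channels, (k, 1), padding=(k // 2, 0), groups=channels)
        self.proj = nn.Conv2d(channels, channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        attn = torch.sigmoid(self.proj(self.strip_v(self.strip_h(x))))
        return x * attn  # spatial reweighting


class ECA(nn.Module):
    """Efficient Channel Attention (Wang et al., ECA-Net)."""

    def __init__(self, k: int = 3):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.pool(x)                              # (B, C, 1, 1)
        y = self.conv(y.squeeze(-1).transpose(1, 2))  # 1D conv across channel descriptors
        y = torch.sigmoid(y.transpose(1, 2).unsqueeze(-1))
        return x * y                                  # channel reweighting


class DF(nn.Module):
    """Dynamic Fusion upsampling block (hypothetical wiring: SLSA + ECA, then 2x upsample)."""

    def __init__(self, channels: int):
        super().__init__()
        self.slsa = SLSA(channels)
        self.eca = ECA()
        self.up = nn.Upsample(scale_factor=2, mode="nearest")

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.up(self.slsa(x) + self.eca(x))


# Usage: a DF block could stand in for an nn.Upsample in the YOLOv8 neck.
feat = torch.randn(1, 256, 40, 40)
print(DF(256)(feat).shape)  # torch.Size([1, 256, 80, 80])
```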