Accurate detection of vulnerable road users (VRUs) is critical for enhancing traffic safety and advancing autonomous driving systems. However, because VRUs are small and move unpredictably, existing detection methods struggle to provide stable and accurate results under real-time conditions. To overcome these challenges, this paper proposes an improved VRU detection algorithm based on YOLOv8, named VRU-YOLO. First, we redesign the neck structure and construct a Detail Enhancement Feature Pyramid Network (DEFPN) to strengthen the extraction and fusion of small-target features. Second, the Spatial Pyramid Pooling Fast (SPPF) module of the YOLOv8 network is replaced with a novel Feature Pyramid Convolution Fast (FPCF) module based on dilated convolution, effectively mitigating feature loss for small targets. Additionally, a lightweight Optimized Shared Detection Head (OSDH-Head) is introduced, reducing computational complexity while improving detection efficiency. Finally, to address the shortcomings of traditional loss functions in shape matching and computational efficiency, we propose the Wise-Powerful Intersection over Union (WPIoU) loss function, which further optimizes the regression of target bounding boxes. Experimental results on a custom-built multi-source VRU dataset show that the proposed model improves precision, recall, mAP50, and mAP50:95 by 1.3%, 3.4%, 3.3%, and 1.8%, respectively, compared with the baseline model. Moreover, in a generalization test on the small-target remote sensing dataset VisDrone2019, the VRU-YOLO model achieved an mAP50 of 31%. This study demonstrates that the improved model offers more efficient performance in small-object detection scenarios, making it well suited for VRU detection in complex road environments.
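As context for the WPIoU loss mentioned above, the sketch below shows the plain Intersection over Union computation that IoU-family bounding-box regression losses build on. The abstract does not specify the Wise-Powerful weighting terms, so only the base IoU and the conventional `1 - IoU` loss form are shown; the box format and function names here are illustrative assumptions, not the paper's implementation.

```python
def iou(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Intersection rectangle (empty if the boxes do not overlap)
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    # Union = sum of areas minus the intersection
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def iou_loss(pred, target):
    """Basic IoU regression loss; WPIoU would add further penalty/weighting terms."""
    return 1.0 - iou(pred, target)

pred = (0.0, 0.0, 2.0, 2.0)
gt = (1.0, 1.0, 3.0, 3.0)
print(iou(pred, gt))       # intersection 1, union 7 -> ~0.1429
print(iou_loss(pred, gt))  # -> ~0.8571
```

For small targets such as VRUs, plain IoU gives near-zero gradients when boxes barely overlap, which is the kind of deficiency that penalty-augmented variants like the proposed WPIoU aim to remedy.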