Federated Learning (FL) is an emerging technique for training Machine/Deep Learning models over distributed Edge Devices (EDs) while facing three challenges: device heterogeneity, resource-constrained devices, and Non-IID (non-independent and identically distributed) data. In standard FL, the centralized server must wait for model parameters from the slowest participating EDs before global aggregation, which increases waiting time due to device heterogeneity. Asynchronous FL resolves the issue of device heterogeneity; however, it requires frequent model parameter transfers, resulting in a straggler effect. Further, frequent asynchronous updates over Non-IID data across participating EDs can degrade training accuracy. To overcome these challenges, in this paper we present a new Federated Semi-Asynchronous Split Learning (Fed-SASL) strategy. Fed-SASL employs semi-asynchronous aggregation, in which the centralized cloud server aggregates model parameters received from participating EDs without waiting for all devices. This strategy significantly reduces training time and communication overhead. Additionally, split learning is employed to handle slow EDs by partitioning the neural network model according to the computational capacity of each device, thereby reducing the burden on stragglers. Extensive results over a real-time testbed and a benchmark dataset demonstrate the effectiveness of the proposed strategy over existing ones.
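To make the semi-asynchronous aggregation idea concrete, the following is a minimal illustrative sketch, not the paper's actual algorithm: a cloud server buffers incoming ED updates and aggregates as soon as a small buffer fills, rather than waiting for every device. The names `buffer_size` and `staleness_decay`, and the staleness-weighted averaging rule, are assumptions introduced here for illustration only.

```python
# Minimal sketch of semi-asynchronous aggregation (illustrative assumptions:
# buffer_size, staleness_decay, and the weighting rule are not from the paper).
import numpy as np

class SemiAsyncServer:
    def __init__(self, init_weights, buffer_size=3, staleness_decay=0.5):
        self.global_weights = np.asarray(init_weights, dtype=float)
        self.round = 0                    # global aggregation round counter
        self.buffer = []                  # pending (weights, client_round) updates
        self.buffer_size = buffer_size    # aggregate once this many updates arrive
        self.staleness_decay = staleness_decay

    def receive_update(self, client_weights, client_round):
        """Buffer an update from one ED; aggregate when the buffer is full,
        instead of waiting for every participating device."""
        self.buffer.append((np.asarray(client_weights, dtype=float), client_round))
        if len(self.buffer) >= self.buffer_size:
            self._aggregate()
        return self.global_weights, self.round

    def _aggregate(self):
        # Down-weight stale updates: older client rounds count less.
        acc = np.zeros_like(self.global_weights)
        total = 0.0
        for w, client_round in self.buffer:
            alpha = self.staleness_decay ** (self.round - client_round)
            acc += alpha * w
            total += alpha
        self.global_weights = acc / total
        self.round += 1
        self.buffer.clear()


# Usage: three fast EDs trigger aggregation while slower EDs keep training.
server = SemiAsyncServer(init_weights=[0.0, 0.0], buffer_size=3)
for ed in range(3):
    server.receive_update(client_weights=[1.0 + ed, 2.0], client_round=0)
print(server.global_weights, server.round)
```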