Multirobot path planning in complex environments is a challenging research area. This article proposes a path planning method for multirobot systems based on distributed multiagent deep reinforcement learning. We propose a multirobot dynamic window approach (MRDWA) in which a central controller facilitates sensor information sharing among robots, enabling locally optimal path planning that accounts for the behavior of the other robots. The output velocity information is incorporated into the observation function to form an efficient, low-dimensional state representation. Additionally, we employ the multiagent deep deterministic policy gradient (MADDPG) reinforcement learning algorithm to map part of the observation information directly to motion commands for multiple robots, enabling effective obstacle avoidance strategies. An improved action module refines the output by using linear- and angular-velocity increments together with an action selector. Furthermore, we introduce a multirobot reward module that uses heuristic functions to guide the robots toward feasible paths quickly and efficiently, and we propose a multirobot dynamic-constraint reward function to optimize the resulting trajectories. The MRDWA-MADDPG algorithm is validated through simulations and real-world experiments, demonstrating its effectiveness in diverse, complex multirobot path planning scenarios. Our method outperforms conventional algorithms in success rate, arrival time, and overall decision making in complex scenarios. Moreover, compared with other learning-based methods, it computes faster, trains in less time, produces smoother trajectories, and is easier to deploy on real robots.
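To make the abstract's core building block concrete, the sketch below illustrates the classical single-robot dynamic window approach that MRDWA extends: admissible velocities are sampled from the window reachable within one control step under acceleration limits, short trajectories are forward-simulated, and a heuristic score trades off goal progress against obstacle clearance. All function names, the `limits` dictionary, and the cost weights are hypothetical choices for illustration, not the authors' implementation.

```python
import math

def dynamic_window(v, w, dt, limits):
    """Velocities reachable within one control step under acceleration limits.

    `limits` is a hypothetical dict: v_max, w_max (speed bounds),
    a_max, aw_max (linear/angular acceleration bounds).
    """
    v_lo = max(0.0, v - limits["a_max"] * dt)
    v_hi = min(limits["v_max"], v + limits["a_max"] * dt)
    w_lo = max(-limits["w_max"], w - limits["aw_max"] * dt)
    w_hi = min(limits["w_max"], w + limits["aw_max"] * dt)
    return v_lo, v_hi, w_lo, w_hi

def score(pose, v, w, goal, obstacles, dt, horizon=1.0):
    """Heuristic score: goal proximity plus obstacle clearance after a rollout."""
    x, y, th = pose
    t = 0.0
    while t < horizon:                    # forward-simulate unicycle motion
        x += v * math.cos(th) * dt
        y += v * math.sin(th) * dt
        th += w * dt
        t += dt
    goal_term = -math.hypot(goal[0] - x, goal[1] - y)   # closer is better
    clearance = min((math.hypot(ox - x, oy - y) for ox, oy in obstacles),
                    default=10.0)
    return goal_term + 0.5 * min(clearance, 2.0)        # illustrative weights

def dwa_step(pose, v, w, goal, obstacles, limits, dt=0.1, n=5):
    """Return the best (v, w) command sampled from the dynamic window."""
    v_lo, v_hi, w_lo, w_hi = dynamic_window(v, w, dt, limits)
    best, best_cmd = -float("inf"), (v, w)
    for i in range(n):
        for j in range(n):
            vv = v_lo + (v_hi - v_lo) * i / (n - 1)
            ww = w_lo + (w_hi - w_lo) * j / (n - 1)
            s = score(pose, vv, ww, goal, obstacles, dt)
            if s > best:
                best, best_cmd = s, (vv, ww)
    return best_cmd
```

In the multirobot setting described above, the shared sensor information from the central controller would enter through the `obstacles` term, and MADDPG replaces the hand-tuned scoring with learned velocity increments.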