With the continuous development of autonomous driving technology, accurately predicting the future trajectories of pedestrians has become a critical element in ensuring system safety and reliability. However, most existing studies on pedestrian trajectory prediction rely on fixed camera perspectives, which limits comprehensive observation of pedestrian movement and makes them unsuitable for direct application to pedestrian trajectory prediction from the ego-vehicle perspective in autonomous vehicles. To address this problem, this paper proposes a pedestrian trajectory prediction method under the ego-vehicle perspective based on a Multi-Pedestrian Information Fusion Network (MPIFN), which achieves accurate prediction of pedestrians' future trajectories by integrating their social information, local environmental information, and temporal information. A Local Environmental Information Extraction Module that combines deformable convolution with conventional convolution and pooling operations is constructed to extract local information from complex environments more effectively. By dynamically adjusting the sampling positions of the convolutional kernels, this module enhances the model's adaptability to irregular and complex shapes. Meanwhile, a pedestrian spatiotemporal information extraction module and a multimodal feature fusion module are developed to comprehensively integrate social and environmental information. Experimental results show that the proposed method achieves state-of-the-art performance on two ego-vehicle driving datasets, JAAD and PSI. Specifically, on the JAAD dataset, the Center Final Mean Squared Error (CF_MSE) is 4 063 and the Center Mean Squared Error (C_MSE) is 829. On the PSI dataset, the Average Root Mean Square Error (ARB) and Final Root Mean Square Error (FRB) also achieve outstanding performance, with values of 18.08/29.21/44.98 and 25.27/54.62/93.09 for prediction horizons of 0.5 s, 1.0 s, and 1.5 s, respectively. © 2024 SAE-China. All rights reserved.
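The abstract does not specify the exact architecture of the Local Environmental Information Extraction Module, but the core idea it names, deformable convolution, can be illustrated with a minimal single-channel NumPy sketch. The offsets here are supplied externally for illustration (in the actual model they would be learned by an extra convolutional branch), and the function names are hypothetical, not taken from the paper:

```python
import numpy as np

def bilinear_sample(img, y, x):
    """Bilinearly sample a 2-D array at fractional coordinates (y, x), zero-padded outside."""
    H, W = img.shape
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    val = 0.0
    for dy in (0, 1):
        for dx in (0, 1):
            yy, xx = y0 + dy, x0 + dx
            if 0 <= yy < H and 0 <= xx < W:
                # Weights fall off linearly with distance to each integer neighbor.
                val += (1 - abs(y - yy)) * (1 - abs(x - xx)) * img[yy, xx]
    return val

def deformable_conv2d(img, weight, offsets):
    """Toy single-channel deformable convolution (valid padding, stride 1).

    img:     (H, W) input feature map
    weight:  (k, k) kernel
    offsets: (H_out, W_out, k*k, 2) per-position (dy, dx) shifts of each
             kernel sampling point; zeros reduce this to ordinary convolution
    """
    H, W = img.shape
    k = weight.shape[0]
    H_out, W_out = H - k + 1, W - k + 1
    out = np.zeros((H_out, W_out))
    for i in range(H_out):
        for j in range(W_out):
            acc = 0.0
            for a in range(k):
                for b in range(k):
                    dy, dx = offsets[i, j, a * k + b]
                    # Sample at the offset position instead of the rigid grid point.
                    acc += weight[a, b] * bilinear_sample(img, i + a + dy, j + b + dx)
            out[i, j] = acc
    return out
```

With all offsets set to zero the result matches a standard convolution over a fixed grid; nonzero learned offsets let the kernel deform toward irregular pedestrian and scene shapes, which is the adaptability the module exploits.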