Pedestrian trajectory prediction is a crucial task in autonomous driving. Building an effective prediction model depends heavily on exploiting pedestrians' motion characteristics, their interactions with one another, and their interactions with the environment. Traditional trajectory prediction models, however, often fail to capture complex real-world scenarios. To address these challenges, this paper proposes an enhanced pedestrian trajectory prediction model, M(2)Tames, which incorporates comprehensive motion, interaction, and semantic context factors. M(2)Tames provides an interaction module (IM), which consists of an improved multi-head masked temporal attention mechanism (M(2)Tea) and an interaction inference module (I-2). M(2)Tea thoroughly characterizes the historical trajectories and potential interactions, while I-2 determines the precise interaction types. The IM then adaptively aggregates useful neighbor features to generate a more accurate interactive feature map, which is fed into the final layer of the U-Net encoder and fused with the encoder's output. Furthermore, by adopting the U-Net architecture, M(2)Tames can learn and interpret scene semantic information, enhancing its understanding of the spatial relationships between pedestrians and their surroundings. These innovations improve the accuracy and adaptability of the model in predicting pedestrian trajectories. Finally, M(2)Tames is evaluated on the ETH/UCY and SDD datasets in short- and long-term settings. The results demonstrate that M(2)Tames outperforms the state-of-the-art model MSRL by 2.49% (ADE) and 8.77% (FDE) in the short-term setting and surpasses the best-performing Y-Net by 6.89% (ADE) and 1.12% (FDE) in long-term prediction. Excellent performance is also shown on the ETH/UCY datasets.
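To make the core mechanism concrete, the following is a minimal NumPy sketch of multi-head masked temporal attention over a single pedestrian's history embeddings. It is an illustration only, not the paper's implementation: the function name, the use of identity query/key/value projections, and the causal mask are simplifying assumptions, and M(2)Tea's learned projections, interaction masks, and neighbor aggregation are omitted.

```python
import numpy as np

def masked_multihead_temporal_attention(X, num_heads, mask):
    """Illustrative sketch of multi-head masked temporal attention.

    X:    (T, D) array of history-trajectory embeddings (T time steps).
    mask: (T, T) boolean array; True means "position i may attend to j".
    Identity projections are used for brevity; a real model would apply
    learned Q/K/V projections per head.
    """
    T, D = X.shape
    assert D % num_heads == 0, "embedding dim must split evenly across heads"
    dh = D // num_heads
    out = np.zeros_like(X)
    for h in range(num_heads):
        Xh = X[:, h * dh:(h + 1) * dh]          # per-head slice of the features
        scores = Xh @ Xh.T / np.sqrt(dh)        # scaled dot-product scores
        scores = np.where(mask, scores, -1e9)   # masked positions get ~zero weight
        w = np.exp(scores - scores.max(axis=-1, keepdims=True))
        w /= w.sum(axis=-1, keepdims=True)      # softmax over allowed time steps
        out[:, h * dh:(h + 1) * dh] = w @ Xh    # weighted sum of (identity) values
    return out

# Usage: a causal temporal mask restricts each step to its own past.
T, D = 8, 16
rng = np.random.default_rng(0)
X = rng.standard_normal((T, D))
causal_mask = np.tril(np.ones((T, T), dtype=bool))
Y = masked_multihead_temporal_attention(X, num_heads=4, mask=causal_mask)
```

With a causal mask, the first time step can attend only to itself, so its output equals its input under identity projections; in the full model, the masked attention map would instead be combined with I-2's inferred interaction types before neighbor features are aggregated.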