Deep learning has made remarkable progress in medical image segmentation. However, artifacts are pervasive in MRI and pose a substantial challenge, often causing deep learning models to fail in real-world scenarios. Automatically extracting target regions from artifact-corrupted MRIs therefore remains a critical task in medical image segmentation. This study presents an approach that combines few-shot learning with diffusion models. The proposed method comprises two modules, an artifact removal module and a segmentation module, both built on pre-trained models; as a result, it segments MRIs from previously unseen datasets without additional training. The artifact removal module takes artifact-corrupted MRIs as query images and reconstructs artifact-free MRIs with a pre-trained 2D diffusion model. The segmentation module then uses few-shot learning with a curated support set to segment the target region. Validation on the OASIS dataset demonstrates the method's segmentation accuracy and generalization ability. Compared with existing methods, our approach is more practical: it handles MRIs with artifacts and performs well on previously unseen data. Data and code are available at: https://github.com/fanchenlex/DiffusionUniverseg.
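To make the two-stage pipeline described above concrete, the following is a minimal sketch of the inference flow, assuming PyTorch. The class names `DiffusionArtifactRemover`, `FewShotSegmenter`, and the function `segment_with_artifact_removal` are hypothetical placeholders standing in for the pre-trained artifact removal and segmentation modules; they are not part of the released code.

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins for the two pre-trained components described in the
# abstract: a 2D diffusion model used for artifact removal and a few-shot
# segmentation network conditioned on a support set.
class DiffusionArtifactRemover(nn.Module):
    """Placeholder: reconstructs an artifact-free MRI slice from a corrupted query."""
    def forward(self, query_slice: torch.Tensor) -> torch.Tensor:
        # A real implementation would run the reverse diffusion process guided
        # by the artifact-corrupted query; here the input is passed through.
        return query_slice

class FewShotSegmenter(nn.Module):
    """Placeholder: predicts a query mask given (image, mask) support pairs."""
    def forward(self, query_slice: torch.Tensor,
                support_images: torch.Tensor,
                support_masks: torch.Tensor) -> torch.Tensor:
        # A real implementation would condition on the support set; here an
        # all-zero mask with the query's spatial shape is returned.
        return torch.zeros_like(query_slice)

def segment_with_artifact_removal(query_slice, support_images, support_masks,
                                  remover: nn.Module, segmenter: nn.Module):
    """Two-stage inference: remove artifacts, then segment via few-shot learning."""
    with torch.no_grad():
        clean_slice = remover(query_slice)  # artifact removal module
        mask = segmenter(clean_slice, support_images, support_masks)  # segmentation module
    return clean_slice, mask

# Usage with dummy tensors: one 1x256x256 query slice and a 5-pair support set.
query = torch.rand(1, 1, 256, 256)
sup_imgs = torch.rand(1, 5, 1, 256, 256)
sup_masks = torch.randint(0, 2, (1, 5, 1, 256, 256)).float()
clean, pred_mask = segment_with_artifact_removal(
    query, sup_imgs, sup_masks, DiffusionArtifactRemover(), FewShotSegmenter())
```

Because both modules are pre-trained and only applied at inference time, no gradient updates are needed for a new dataset; swapping the support set is sufficient to target a new anatomical structure.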