Accurate segmentation and grading of brain tumors from multi-modal magnetic resonance imaging (MRI) play a vital role in the diagnosis and treatment of brain tumors. Gene expression in glioma also influences the selection of treatment strategies and the assessment of patient survival, including the mutation status of isocitrate dehydrogenase (IDH), the 1p/19q co-deletion status, and the Ki67 expression level. However, obtaining medical image annotations is both time-consuming and expensive, and performing tasks such as brain tumor segmentation, grading, and genotype prediction directly on unlabeled multi-modal MRI is challenging. To address this issue, we propose a comprehensive multi-modal domain adaptive aid (CMDA) framework built on hospital datasets from multiple centers, which effectively mitigates the distributional differences between labeled source datasets and unlabeled target datasets. Specifically, a comprehensive diagnostic module is proposed to simultaneously accomplish brain tumor segmentation, grading, genotyping, and glioma subtype classification. Furthermore, to learn the data distribution between labeled public datasets and unlabeled local hospital datasets, we treat the semantic segmentation results as outputs that capture the similarity between data sources, and we employ adversarial learning to help the network acquire domain knowledge. Experimental results show that our end-to-end CMDA framework outperforms methods based on direct transfer learning as well as other state-of-the-art unsupervised methods.
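
To make the output-space adversarial adaptation concrete, the following is a minimal sketch, assuming a PyTorch implementation. The `Discriminator` module, the `adaptation_step` helper, and the `lambda_adv` weight are illustrative assumptions for a generic segmentation-output adversarial scheme, not the actual CMDA code.

```python
# Minimal sketch (assumed PyTorch) of output-space adversarial adaptation:
# a segmentation network produces tumor maps for labeled source scans and
# unlabeled target scans; a small discriminator tries to tell which domain
# a softmax map came from, and the segmenter is trained to fool it.
# All names below are illustrative placeholders, not the CMDA implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Discriminator(nn.Module):
    """Patch-level domain classifier over segmentation probability maps."""
    def __init__(self, num_classes):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(num_classes, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 1, 4, stride=2, padding=1),  # logit: 1 = source, 0 = target
        )

    def forward(self, prob_map):
        return self.net(prob_map)

def adaptation_step(seg_net, disc, src_img, src_mask, tgt_img,
                    opt_seg, opt_disc, lambda_adv=0.001):
    """One training step: supervised loss on source + adversarial loss on target."""
    bce = nn.BCEWithLogitsLoss()

    # --- update the segmenter -------------------------------------------------
    opt_seg.zero_grad()
    src_logits = seg_net(src_img)
    loss_seg = F.cross_entropy(src_logits, src_mask)        # labeled source scans
    tgt_prob = F.softmax(seg_net(tgt_img), dim=1)            # unlabeled target scans
    d_tgt = disc(tgt_prob)
    loss_adv = bce(d_tgt, torch.ones_like(d_tgt))            # fool D: target looks like source
    (loss_seg + lambda_adv * loss_adv).backward()
    opt_seg.step()

    # --- update the discriminator on detached probability maps ----------------
    opt_disc.zero_grad()
    src_prob = F.softmax(src_logits.detach(), dim=1)
    d_src, d_tgt = disc(src_prob), disc(tgt_prob.detach())
    loss_d = bce(d_src, torch.ones_like(d_src)) + bce(d_tgt, torch.zeros_like(d_tgt))
    loss_d.backward()
    opt_disc.step()
    return loss_seg.item(), loss_d.item()
```

In this kind of scheme the discriminator never sees raw images, only segmentation probability maps, so the segmenter is pushed to produce target-domain predictions whose spatial structure is indistinguishable from source-domain predictions; the auxiliary grading and genotyping heads described above would share the same adapted backbone.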