In autonomous driving research, perception in adverse weather remains a challenge due to the scarcity of datasets covering extreme weather conditions. To address this problem, this paper introduces an end-to-end multi-task perception system that combines supervised learning on labeled data with unsupervised domain-adaptive learning for adverse weather. The key innovations of this system include: a multi-task learning framework that simultaneously handles object detection, lane line detection, and drivable area detection, improving both efficiency and cost-effectiveness for autonomous driving in complex environments; a domain adaptation strategy that exploits unlabeled adverse-weather data, enabling the system to perform robustly without requiring labels for harsh weather conditions; and strong generalization ability, demonstrated by a detection mAP of 83.86%, a drivable area mIoU of 91.59%, and a lane detection accuracy of 83.9% on the BDD100K dataset, as well as an mAP of 74.85% on the Cityscapes fog dataset without additional training, highlighting its effectiveness in unseen, adverse conditions. The scalable and generalizable solution presented in this paper enables high-performance navigation in a variety of extreme environments. By combining supervised and unsupervised learning techniques, the model not only copes with severe weather but also generalizes to unseen scenarios.
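The abstract does not specify the implementation, so the following is a minimal PyTorch sketch of one common realization of the described setup: a shared backbone with detection, lane, and drivable-area heads, trained with supervised losses on labeled clear-weather data and a gradient-reversal domain classifier on unlabeled adverse-weather images. The ResNet-18 backbone, the head shapes, the module names (`MultiTaskDAModel`, `training_step`), and the gradient-reversal mechanism are illustrative assumptions, not the authors' method.

```python
# Minimal sketch (assumptions, not the paper's implementation) of a shared-backbone
# multi-task model with gradient-reversal domain adaptation for adverse weather.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet18


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass, negated (scaled) gradient in the backward pass."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None


class MultiTaskDAModel(nn.Module):
    def __init__(self, num_det_classes=10, lam=0.1):
        super().__init__()
        backbone = resnet18(weights=None)
        self.encoder = nn.Sequential(*list(backbone.children())[:-2])  # shared features (512 ch)
        self.lam = lam
        # Simplified task heads operating at feature-map resolution.
        self.det_head = nn.Conv2d(512, num_det_classes, 1)   # object detection (stub head)
        self.lane_head = nn.Conv2d(512, 2, 1)                # lane line vs. background
        self.area_head = nn.Conv2d(512, 2, 1)                # drivable area vs. background
        # Domain classifier: clear weather (0) vs. adverse weather (1).
        self.domain_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(512, 2)
        )

    def forward(self, x):
        feats = self.encoder(x)
        return {
            "det": self.det_head(feats),
            "lane": self.lane_head(feats),
            "area": self.area_head(feats),
            # Reversed gradients push the encoder toward weather-invariant features.
            "domain": self.domain_head(GradReverse.apply(feats, self.lam)),
        }


def training_step(model, clear_batch, adverse_images, optimizer):
    """One step: supervised multi-task losses on labeled clear-weather data,
    adversarial domain loss on unlabeled adverse-weather images."""
    # Labels are assumed to be provided at feature-map resolution for brevity.
    imgs, lane_gt, area_gt = clear_batch
    out_src = model(imgs)
    out_tgt = model(adverse_images)
    sup_loss = (F.cross_entropy(out_src["lane"], lane_gt)
                + F.cross_entropy(out_src["area"], area_gt))
    dom_labels = torch.cat([torch.zeros(imgs.size(0), dtype=torch.long),
                            torch.ones(adverse_images.size(0), dtype=torch.long)])
    dom_logits = torch.cat([out_src["domain"], out_tgt["domain"]])
    loss = sup_loss + F.cross_entropy(dom_logits, dom_labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch, the adverse-weather images contribute only through the domain classifier, so no adverse-weather labels are needed, which mirrors the label-free adaptation claim in the abstract.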