In recent years, the proliferation of Large Language Models (LLMs) has catalyzed advances across many professional domains. Despite these strides, no LLM had previously attained accreditation in a human professional qualification examination. This study presents the first LLM in China to qualify in such an examination, scoring 62 points on the 2023 tax qualification exam and 80 points on the 2022 exam, surpassing the 60 points scored by its human counterparts. This work is distinguished by a training methodology that departs from conventional professional-domain training: rather than fine-tuning a single-task model directly, we first fine-tune a complex multi-task model and then refine a single-task model from it, an approach that proves markedly more effective. Within the professional domain, we further introduce strategies that significantly improve the quality of the LLM's generated responses. In addition, this study contributes a comprehensive set of solutions tailored to the tax law domain that can be transferred to analogous domains, offering new insights for integrating LLMs into specific professional contexts. Our findings not only underscore the potential of LLMs in professional examinations but also provide practical guidelines for deploying LLMs effectively in specialized domains, fostering a new paradigm for domain-specific application.
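
To make the two-stage methodology concrete, the following is a minimal sketch assuming a causal LM fine-tuned with the Hugging Face transformers and datasets libraries; the checkpoint name, data files, and hyperparameters are hypothetical placeholders, not the setup used in this work.

```python
# Minimal sketch of the two-stage approach: multi-task fine-tuning first,
# then single-task refinement. All names (checkpoint, data files,
# hyperparameters) are hypothetical placeholders, not the paper's setup.
from datasets import concatenate_datasets, load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

BASE = "base-llm-checkpoint"  # hypothetical base model

tokenizer = AutoTokenizer.from_pretrained(BASE)
if tokenizer.pad_token is None:  # GPT-style tokenizers lack a pad token
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(BASE)
# mlm=False makes the collator pad batches and set causal-LM labels
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=1024)

# Stage 1: fine-tune on a pooled mixture of related professional tasks
# (e.g., statute QA, tax calculation) to obtain the multi-task model.
multi_task = concatenate_datasets([
    load_dataset("json", data_files="tax_statute_qa.json")["train"],
    load_dataset("json", data_files="tax_calculation.json")["train"],
]).map(tokenize, batched=True, remove_columns=["text"])

Trainer(
    model=model,
    args=TrainingArguments(output_dir="stage1-multitask",
                           num_train_epochs=2,
                           per_device_train_batch_size=4),
    train_dataset=multi_task,
    data_collator=collator,
).train()

# Stage 2: refine the stage-1 weights on the single target task
# (exam-style questions), typically with a smaller learning rate.
exam_task = (load_dataset("json", data_files="exam_questions.json")["train"]
             .map(tokenize, batched=True, remove_columns=["text"]))

Trainer(
    model=model,  # same object: refinement continues from stage-1 weights
    args=TrainingArguments(output_dir="stage2-singletask",
                           num_train_epochs=1,
                           learning_rate=1e-5,
                           per_device_train_batch_size=4),
    train_dataset=exam_task,
    data_collator=collator,
).train()
```

The essential point of the sketch is that stage 2 continues from the stage-1 weights, so the single-task refinement starts from a model already shaped by the broader multi-task mixture rather than from the base checkpoint.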