In recent years, Autonomous Underwater Vehicles (AUVs) have seen remarkable technological progress, and their trajectory tracking control has emerged as a crucial research focus. To address the difficulty of obtaining precise model parameters and coping with complex, dynamic underwater environments, data-driven approaches such as reinforcement learning (RL) have gained increasing attention. However, traditional RL methods often require large amounts of data and behave unpredictably during early exploration, which hinders real-world deployment. To overcome these limitations, this paper proposes an expert-demonstrated soft actor-critic (ESAC) control scheme for AUV trajectory tracking. The method uses expert control data as demonstrations for the RL agent, accelerating the learning process and improving safety. In addition, a Long Short-Term Memory (LSTM) network serves as the policy network, effectively processing the AUV's sequential state information and enhancing control precision. Simulations and comparisons with other typical RL-based controllers demonstrate the superiority of the proposed method, and lake trials further validate its feasibility. The results show that the ESAC-LSTM scheme achieves faster convergence and higher control accuracy, making it well suited to complex underwater environments.
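The core idea of seeding the RL agent with expert demonstrations can be illustrated with a minimal sketch. The class below is a hypothetical demonstration-seeded replay buffer, not the paper's implementation: expert transitions are stored separately and a fixed fraction of each training batch is drawn from them, so early updates rely on safe expert data rather than random exploration. All names and the `demo_fraction` parameter are illustrative assumptions.

```python
import random
from collections import deque


class DemoSeededReplayBuffer:
    """Hypothetical replay buffer mixing expert demonstrations into batches.

    Expert transitions are kept in a separate, never-evicted store; each
    sampled batch contains a fixed fraction of them, which is one common
    way to let demonstrations accelerate early RL training.
    """

    def __init__(self, capacity, demo_fraction=0.25, seed=0):
        self.buffer = deque(maxlen=capacity)  # agent's own transitions
        self.demos = []                       # expert transitions, never evicted
        self.demo_fraction = demo_fraction
        self.rng = random.Random(seed)

    def add_demo(self, transition):
        # Insert an expert (state, action, reward, next_state, done) tuple.
        self.demos.append(transition)

    def add(self, transition):
        # Insert a transition collected by the learning agent.
        self.buffer.append(transition)

    def sample(self, batch_size):
        # Reserve a fraction of the batch for expert data, fill the rest
        # with the agent's own experience.
        n_demo = min(len(self.demos), int(batch_size * self.demo_fraction))
        batch = self.rng.sample(self.demos, n_demo) if n_demo else []
        n_agent = min(len(self.buffer), batch_size - n_demo)
        if n_agent:
            batch += self.rng.sample(list(self.buffer), n_agent)
        return batch
```

In a SAC training loop, `sample()` would feed both actor and critic updates, so the expert data shapes the policy from the first gradient step onward.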