Knowledge Graph Completion (KGC) is important for many applications, such as question-answering systems, search engines, and recommendation systems. However, applying deep reinforcement learning to this task faces specific challenges that affect completion accuracy and stability, including sparse rewards, complex multi-step reasoning, the absence of domain-specific rules, overestimation, and the coupling of value and policy. To address these challenges, this paper presents GCATRL, a reinforcement learning model that integrates the Twin Delayed Deep Deterministic Policy Gradient with Correlation and Attention mechanisms (CATD3) and Generative Adversarial Networks (GANs). First, we adopt a graph convolutional network (GCN) in a preprocessing step to represent the relations and entities of the knowledge graph as continuous vectors. Next, we combine a Wasserstein GAN (WGAN) with the designed gated recurrent unit (HOGRU) and introduce an attention mechanism to record the path trajectory sequences formed while traversing the knowledge graph, dynamically generating new subgraphs at appropriate times so that the traversal process can continue. Finally, CATD3 is used to optimize rewards and mitigate the adversarial loss. Experimental results demonstrate that the proposed model outperforms traditional algorithms on multiple tasks across multiple datasets.