This study introduces two novel models for graph representation learning: Context Self-Supervised Learning Graph Auto-Encoders (CSSL-GAE) and Context Self-Supervised Learning Variational Graph Auto-Encoders (CSSL-VGAE). These models combine graph auto-encoders (GAE) and variational graph auto-encoders (VGAE) with context self-supervised learning (CSSL) to improve performance on link prediction tasks. We first outline the link prediction problem and its relevance to graph data. We then review the GAE and VGAE models, emphasizing their strength in learning latent representations of graph structure, and introduce CSSL, which enriches node representations by exploiting contextual information from neighboring nodes. The combined models, CSSL-GAE and CSSL-VGAE, achieve strong performance on link prediction, exhibiting greater expressive power and better generalization. Experiments on the standard Cora, Citeseer, and Pubmed datasets show significant improvements over models relying on GAE, VGAE, or CSSL alone. Finally, we discuss the potential impact of these models on graph representation learning and outline directions for future research.
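For orientation, the latent-variable formulation that GAE and VGAE build on is standard (Kipf and Welling, 2016); a minimal sketch follows, assuming the usual GCN encoder and inner-product decoder. The exact encoders used by the CSSL-augmented variants may differ from this baseline.

% Standard VGAE formulation; the CSSL variants proposed in this work
% may modify the encoder. X denotes node features, A the adjacency matrix.
\begin{align}
  q(\mathbf{Z} \mid \mathbf{X}, \mathbf{A})
    &= \prod_{i=1}^{N} \mathcal{N}\!\left(\mathbf{z}_i \mid \boldsymbol{\mu}_i,\ \operatorname{diag}(\boldsymbol{\sigma}_i^{2})\right),
    \qquad \boldsymbol{\mu} = \mathrm{GCN}_{\mu}(\mathbf{X}, \mathbf{A}),\quad
    \log \boldsymbol{\sigma} = \mathrm{GCN}_{\sigma}(\mathbf{X}, \mathbf{A}), \\
  p(A_{ij} = 1 \mid \mathbf{z}_i, \mathbf{z}_j)
    &= \sigma\!\left(\mathbf{z}_i^{\top} \mathbf{z}_j\right),
\end{align}

where link prediction scores each candidate edge $(i, j)$ by the decoded probability $\sigma(\mathbf{z}_i^{\top} \mathbf{z}_j)$; the non-variational GAE replaces the distribution $q$ with a deterministic embedding $\mathbf{Z} = \mathrm{GCN}(\mathbf{X}, \mathbf{A})$.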