Incomplete multi-view clustering presents greater challenges than traditional multi-view clustering. Although significant progress has been made in this field in recent years, multi-view clustering relies on the consistency and completeness of views to ensure the accurate transmission of data information. However, data loss is inevitable during collection and transmission, leading to partially missing views and increasing the difficulty of joint learning on incomplete multi-view data. To address this issue, we propose a multi-view contrastive learning framework based on the attention mechanism. Previous contrastive learning methods mainly focus on the relationships between isolated sample pairs, which limits their robustness. In contrast, our method selects positive samples from both global and local perspectives by utilizing a nearest-neighbor graph, maximizing the correlation between the local features and the latent features of each view. In addition, we employ a cross-view encoder network with a self-attention structure to fuse the low-dimensional representations of the views into a joint representation, and we guide the learning of this joint representation with a high-confidence structure. Furthermore, we introduce graph-constraint learning to exploit potential neighbor relationships among instances and facilitate data reconstruction. Experimental results on six multi-view datasets demonstrate that our method is significantly more effective than existing methods.
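
To make the fusion step concrete, below is a minimal PyTorch sketch of how per-view latent codes could be fused into a joint representation with self-attention while masking out missing views. The module name CrossViewFusion, the masking convention, and the mean-pooling readout are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class CrossViewFusion(nn.Module):
    """Fuse per-view latent codes into a joint representation via
    self-attention over the view axis (a sketch under assumed names,
    not the authors' exact architecture)."""

    def __init__(self, latent_dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(latent_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(latent_dim)

    def forward(self, view_codes: torch.Tensor, view_mask: torch.Tensor) -> torch.Tensor:
        # view_codes: (batch, num_views, latent_dim), one row per view encoder.
        # view_mask:  (batch, num_views) boolean, True where a view is MISSING,
        #             so attention ignores the absent views of incomplete samples.
        attended, _ = self.attn(view_codes, view_codes, view_codes,
                                key_padding_mask=view_mask)
        fused = self.norm(attended + view_codes)  # residual connection + layer norm
        # Mean-pool only over the views that are actually present.
        present = (~view_mask).unsqueeze(-1).float()
        joint = (fused * present).sum(dim=1) / present.sum(dim=1).clamp(min=1.0)
        return joint  # (batch, latent_dim)

# Example: 2 samples, 3 views, 16-dim latent codes; view 2 of sample 0 is missing.
codes = torch.randn(2, 3, 16)
mask = torch.tensor([[False, False, True],
                     [False, False, False]])
joint = CrossViewFusion(latent_dim=16)(codes, mask)
print(joint.shape)  # torch.Size([2, 16])
```

The key design point the sketch illustrates is that missing views are handled in the attention itself (via the key padding mask) rather than by imputing them first, so the joint representation is built only from the views each sample actually has.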