Human perception possesses the remarkable ability to mentally reconstruct the complete structure of occluded objects, which has inspired researchers to pursue amodal instance segmentation for a more comprehensive understanding of the scene. Previous works have shown promising results, but they often capture contextual dependencies in an unsupervised way, which can lead to spurious dependencies and unreliable feature representations. To tackle this problem, we propose a Pixel Affinity-Parsing (PAP) module trained with the Pixel Affinity Loss (PAL). Embedded into a CNN, the PAP module leverages learned contextual priors to guide the network to explicitly distinguish different relationships between pixels, thus capturing intra-class and inter-class contextual dependencies in a non-local and supervised way. This yields robust feature representations and prevents the network from making erroneous predictions. To demonstrate the effectiveness of the PAP module, we design the Pixel Affinity-Parsing Network (PAPNet). Notably, PAPNet also introduces shape priors to guide the amodal mask refinement process, preventing implausible shapes in the predicted masks. Consequently, with the dual guidance of contextual and shape priors, PAPNet can reconstruct the full shape of occluded objects accurately and plausibly. Experimental results demonstrate that the proposed PAPNet outperforms existing state-of-the-art methods on multiple amodal datasets. Specifically, on the KINS dataset, PAPNet achieves 37.1% AP, 60.6% AP50 and 39.8% AP75, surpassing C2F-Seg by 0.6%, 2.4% and 2.8%. On the D2SA dataset, PAPNet achieves 71.70% AP, 85.98% AP50 and 77.10% AP75, surpassing PGExp by 0.75% in AP50 and 0.33% in AP75 while remaining comparable in AP. On the COCOA-cls dataset, PAPNet achieves 41.29% AP, 60.95% AP50 and 46.17% AP75, surpassing PGExp by 3.74%, 3.21% and 4.76%. On the CWALT dataset, PAPNet achieves 72.51% AP, 85.02% AP50 and 80.47% AP75, surpassing VRSPNet by 5.38%, 0.07% and 5.35%. The code is available at https://github.com/jiaoZ7688/PAP-Net.
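To make the idea of supervised pixel-affinity parsing concrete, the following is a minimal PyTorch sketch, not the authors' implementation: it predicts pairwise affinity logits between pixel embeddings, aggregates context non-locally with the resulting affinities, and supervises the affinities against a same-label target. The module and function names (PixelAffinityParsing, pixel_affinity_loss) and the way the target map is built from labels are assumptions for illustration only; the actual PAP/PAL design is defined in the paper and repository.

import torch
import torch.nn as nn
import torch.nn.functional as F

class PixelAffinityParsing(nn.Module):  # hypothetical name, illustrative only
    def __init__(self, in_ch, key_ch=64):
        super().__init__()
        self.embed = nn.Conv2d(in_ch, key_ch, kernel_size=1)  # pixel embeddings for affinity
        self.value = nn.Conv2d(in_ch, in_ch, kernel_size=1)   # features to aggregate
        self.out = nn.Conv2d(in_ch, in_ch, kernel_size=1)

    def forward(self, x):
        b, c, h, w = x.shape
        e = self.embed(x).flatten(2)                            # (B, K, N), N = H*W
        affinity_logits = torch.einsum('bkn,bkm->bnm', e, e)    # (B, N, N) pixel-pair relations
        affinity = torch.sigmoid(affinity_logits)               # explicit, supervisable affinities
        v = self.value(x).flatten(2)                            # (B, C, N)
        norm = affinity.sum(-1, keepdim=True).transpose(1, 2).clamp(min=1e-6)  # (B, 1, N)
        ctx = torch.einsum('bcm,bnm->bcn', v, affinity) / norm  # affinity-weighted context
        ctx = ctx.view(b, c, h, w)
        return x + self.out(ctx), affinity_logits

def pixel_affinity_loss(affinity_logits, labels):
    # labels: (B, N) class or instance ids at the feature resolution (an assumed
    # construction); target is 1 where two pixels share a label, else 0.
    target = (labels.unsqueeze(2) == labels.unsqueeze(1)).float()  # (B, N, N)
    return F.binary_cross_entropy_with_logits(affinity_logits, target)

In this reading, the supervision term is what makes the contextual dependencies "supervised" rather than learned implicitly, while the affinity-weighted aggregation provides the non-local context; how the loss is weighted against the segmentation objectives is left to the paper.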