Despite their capacity to convey knowledge, most existing knowledge graphs (KGs) are built for specific domains from low-resource data sources, especially those in non-global languages, and thus inevitably suffer from incompleteness. The automatic discovery of missing triples for KG completion is therefore hindered by the challenging long-tail relation problem in low-resource KGs, and few-shot learning models trained on rich-resource KGs cannot tackle this challenge because they generalize poorly. To alleviate the impact of this intractable long-tail problem on low-resource KG completion, in this paper we propose a novel few-shot learning framework empowered by multi-view task representation generation. The framework consists of four components: a few-shot learner, a perturbed few-shot learner, a relation knowledge distiller, and a pairwise contrastive distiller. The key idea is to exploit different views of each few-shot task to improve and regularize the training of the few-shot learner. Instead of augmenting each few-shot task with complicated task designs, we generate multi-view representations of the task using the relation knowledge distiller and the perturbed few-shot learner, which are obtained by distilling knowledge from a KG encoder and by perturbing the few-shot learner, respectively. The pairwise contrastive distiller then uses these multi-view representations in a teacher-student framework to distill into the few-shot learner the knowledge of how relations are represented from different views, thereby facilitating few-shot learning. Extensive experiments on several real-world low-resource KGs validate the effectiveness of the proposed method.
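
To make the multi-view idea concrete, the following is a minimal, hypothetical sketch of pairwise contrastive distillation between two views of the same few-shot task. The module and function names (FewShotLearner, pairwise_contrastive_distill), the use of dropout as the perturbation, and the InfoNCE-style objective are illustrative assumptions rather than the paper's exact formulation.

```python
# Hypothetical sketch, not the paper's exact method: a few-shot learner and a
# dropout-perturbed copy produce two views of each task's relation representation,
# and an InfoNCE-style loss aligns them pairwise across a batch of tasks.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FewShotLearner(nn.Module):
    """Encodes a few-shot task (its support triples) into one relation representation."""

    def __init__(self, dim: int, dropout: float = 0.0):
        super().__init__()
        self.proj = nn.Linear(2 * dim, dim)
        self.dropout = nn.Dropout(dropout)

    def forward(self, head_emb: torch.Tensor, tail_emb: torch.Tensor) -> torch.Tensor:
        # head_emb, tail_emb: (num_support, dim); mean-pool pairwise features into a task vector.
        pair = torch.cat([head_emb, tail_emb], dim=-1)
        return self.dropout(self.proj(pair)).mean(dim=0)


def pairwise_contrastive_distill(student_views: torch.Tensor,
                                 teacher_views: torch.Tensor,
                                 temperature: float = 0.1) -> torch.Tensor:
    """Each task's student view should match the teacher view of the same task
    (the positive pair) rather than the teacher views of other tasks in the batch."""
    s = F.normalize(student_views, dim=-1)   # (batch, dim)
    t = F.normalize(teacher_views, dim=-1)   # (batch, dim)
    logits = s @ t.t() / temperature         # (batch, batch) similarity matrix
    targets = torch.arange(s.size(0), device=s.device)
    return F.cross_entropy(logits, targets)


# Usage sketch for one batch of few-shot tasks (random embeddings stand in for a KG encoder).
dim, batch, num_support = 64, 8, 5
learner = FewShotLearner(dim)                          # student: the few-shot learner
perturbed_learner = FewShotLearner(dim, dropout=0.3)   # teacher view: perturbed copy
perturbed_learner.load_state_dict(learner.state_dict())

heads = torch.randn(batch, num_support, dim)
tails = torch.randn(batch, num_support, dim)

student = torch.stack([learner(h, t) for h, t in zip(heads, tails)])
teacher = torch.stack([perturbed_learner(h, t) for h, t in zip(heads, tails)])

loss = pairwise_contrastive_distill(student, teacher.detach())
loss.backward()  # gradients only update the student few-shot learner
```

In this sketch the teacher views are detached so that only the few-shot learner is updated by the contrastive signal; a second teacher view from a KG-encoder-based relation knowledge distiller could be plugged into the same loss in place of (or alongside) the perturbed learner.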