Hyperspectral image classification (HIC) with noisy labels has recently attracted increasing interest. However, existing methods usually neglect to explore feature-dependent knowledge to reduce label noise and therefore perform poorly when the noise ratio is high or clean samples are limited. In this article, a novel triple contrastive representation learning (TCRL) framework is proposed from a deep clustering perspective for robust HIC with noisy labels. The TCRL explores cluster-, instance-, and structure-level representations of HIC by defining a triple learning loss. First, strong and weak transformations are defined for hyperspectral data augmentation. Then, a simple yet effective lightweight spectral prior attention-based network (SPAN) is presented for spatial-spectral feature extraction from all augmented samples. In addition, cluster- and instance-level contrastive learning is performed on two projection subspaces for clustering and distinguishing samples, respectively. Meanwhile, structure-level representation learning is employed to maximize the consistency of the data after different projections. By exploiting the feature-dependent information learned through triple representation learning, the proposed end-to-end TCRL effectively alleviates the overfitting of classifiers to noisy labels. Extensive experiments have been conducted on three public datasets with various noise ratios and two types of noise. The results show that the proposed TCRL provides more robust classification than state-of-the-art methods when trained on noisy datasets, especially when clean samples are limited. The code will be available at https://github.com/Zhangxy1999.
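
For reference, the sketch below illustrates one way a triple loss of this kind (instance-, cluster-, and structure-level terms over weakly and strongly augmented views) could be assembled; it is not the authors' released implementation, and all function names, loss formulations, and hyperparameters (e.g., the temperature of 0.5, equal loss weights) are illustrative assumptions.

```python
# Minimal sketch (assumed PyTorch) of combining instance-, cluster-, and
# structure-level losses over two augmented views; not the official TCRL code.
import torch
import torch.nn.functional as F

def instance_nt_xent(z1, z2, temperature=0.5):
    """Instance-level contrastive (NT-Xent) loss between two views of a batch."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)                      # (2N, d)
    sim = z @ z.t() / temperature                        # pairwise similarities
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float('-inf'))           # drop self-similarity
    # positive of sample i is its other view: i+N for the first half, i-N for the second
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

def cluster_contrastive(p1, p2, temperature=1.0):
    """Cluster-level term: contrast cluster-assignment columns across the two views."""
    c1, c2 = p1.t(), p2.t()                               # (K, N) assignment columns
    return instance_nt_xent(c1, c2, temperature)

def structure_consistency(z1, z2):
    """Structure-level term: keep pairwise similarity structure consistent across projections."""
    s1 = F.normalize(z1, dim=1) @ F.normalize(z1, dim=1).t()
    s2 = F.normalize(z2, dim=1) @ F.normalize(z2, dim=1).t()
    return F.mse_loss(s1, s2)

def triple_loss(z_weak, z_strong, p_weak, p_strong):
    """Sum of the three terms; equal weighting is an illustrative assumption."""
    return (instance_nt_xent(z_weak, z_strong)
            + cluster_contrastive(p_weak, p_strong)
            + structure_consistency(z_weak, z_strong))
```

Here `z_weak`/`z_strong` would be instance-subspace embeddings and `p_weak`/`p_strong` cluster-assignment probabilities produced by the feature extractor and its two projection heads for the weakly and strongly augmented views.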