Real-world datasets inevitably contain noisy labels, which cause deep networks to overfit the corrupted supervision and suffer degraded performance. Prior work typically relies on sample selection to alleviate the impact of noisy labels. However, existing methods tend to treat the selected clean data equally, neglecting that different clean samples carry different amounts of useful information. To this end, we propose a novel and effective approach, termed DDCS (Delving Deeper into Clean Samples), to mitigate the detrimental effects of noisy labels. Specifically, we first progressively select clean samples using the small-loss criterion, splitting the training data into a clean subset and a noisy subset. Subsequently, we devise a biased loss re-weighting scheme that places greater emphasis on clean samples containing more valuable information. Moreover, inspired by metric learning, we employ the circle loss on the clean subset to mine hard clean samples, improving performance by encouraging better decision boundaries. Finally, we incorporate contrastive learning on the selected noisy subset, further improving generalization by fully exploiting the noisy data. Comprehensive experiments and ablation studies demonstrate the effectiveness and superiority of our approach.
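
To make the selection and re-weighting steps concrete, the following is a minimal PyTorch-style sketch of small-loss sample selection and a biased re-weighting of the clean subset. The function names (`small_loss_split`, `weighted_clean_loss`), the fixed `keep_ratio`, and the softmax-over-loss weighting are illustrative assumptions for exposition only, not the paper's exact formulation; in particular, treating higher-loss clean samples as "more valuable" is one plausible reading of the biased re-weighting scheme.

```python
import torch
import torch.nn.functional as F

def small_loss_split(logits, labels, keep_ratio):
    """Split a batch into clean/noisy index sets via the small-loss criterion:
    samples with the smallest per-sample cross-entropy are treated as clean."""
    per_sample_loss = F.cross_entropy(logits, labels, reduction="none")
    num_clean = max(1, int(keep_ratio * labels.size(0)))
    order = torch.argsort(per_sample_loss)       # ascending loss
    clean_idx = order[:num_clean]                # small-loss samples -> clean subset
    noisy_idx = order[num_clean:]                # remaining samples  -> noisy subset
    return clean_idx, noisy_idx

def weighted_clean_loss(logits, labels, clean_idx):
    """Biased re-weighting on the clean subset (assumed scheme): harder clean
    samples (larger loss) receive larger weights, so more informative samples
    contribute more to the training objective."""
    clean_loss = F.cross_entropy(logits[clean_idx], labels[clean_idx], reduction="none")
    weights = torch.softmax(clean_loss.detach(), dim=0)   # weights sum to 1 over the clean subset
    return (weights * clean_loss).sum()
```

In a full training loop, the clean indices would additionally feed the circle-loss term for hard-sample mining, and the noisy indices would be routed to the contrastive objective with their labels discarded.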