Camouflaged Object Detection (COD) is a challenging task in computer vision due to the high visual similarity between camouflaged objects and their surrounding environments. Traditional methods that rely on late-stage fusion of high-level semantic features and low-level visual features have reached a performance plateau, limiting their ability to accurately localize camouflaged objects and segment their boundaries. To address these limitations, this paper proposes the Cross-layer Semantic Guidance Network (CSGNet), a framework designed to progressively integrate semantic and visual features across multiple stages. CSGNet introduces two modules: the Cross-Layer Interaction Module (CLIM) and the Semantic Refinement Module (SRM). CLIM facilitates continuous cross-layer semantic interaction, refining high-level semantic information to provide consistent and effective guidance for detecting camouflaged objects. SRM then leverages this refined semantic guidance to enhance low-level visual features, employing feature-level attention to suppress background noise and highlight critical object details. This progressive integration strategy enables precise object localization and accurate boundary segmentation in challenging scenarios. Extensive experiments on three widely used COD benchmark datasets (CAMO, COD10K, and NC4K) demonstrate the effectiveness of CSGNet, which achieves state-of-the-art performance with a mean absolute error (M) of 0.042 on CAMO, 0.020 on COD10K, and 0.029 on NC4K.
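To make the guidance mechanism described above concrete, the following is a minimal, hypothetical sketch (not the authors' released code) of how refined high-level semantic features might act as feature-level attention over low-level visual features, as the SRM is described. All class, parameter, and channel names here are assumptions for illustration only.

```python
# Hypothetical sketch of semantic-guided feature refinement (PyTorch).
# This is one possible reading of the SRM idea, not the paper's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SemanticRefinementSketch(nn.Module):
    """Refine low-level features using attention derived from semantic guidance."""
    def __init__(self, low_channels: int, semantic_channels: int):
        super().__init__()
        # Project semantic guidance to the low-level channel dimension.
        self.project = nn.Conv2d(semantic_channels, low_channels, kernel_size=1)
        # Channel attention: which low-level channels the guidance deems relevant.
        self.channel_attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(low_channels, low_channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial attention: where the guidance suggests the object lies.
        self.spatial_attn = nn.Sequential(
            nn.Conv2d(low_channels, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )
        self.fuse = nn.Conv2d(low_channels, low_channels, kernel_size=3, padding=1)

    def forward(self, low_feat: torch.Tensor, semantic_feat: torch.Tensor) -> torch.Tensor:
        # Upsample semantic guidance to the low-level spatial resolution.
        guidance = F.interpolate(semantic_feat, size=low_feat.shape[-2:],
                                 mode="bilinear", align_corners=False)
        guidance = self.project(guidance)
        # Suppress background channels, then emphasize likely object regions.
        refined = low_feat * self.channel_attn(guidance)
        refined = refined * self.spatial_attn(guidance)
        # Residual connection keeps the original fine-grained detail.
        return self.fuse(refined + low_feat)

if __name__ == "__main__":
    low = torch.randn(1, 64, 88, 88)    # low-level visual features (assumed shape)
    sem = torch.randn(1, 256, 22, 22)   # high-level semantic features (assumed shape)
    out = SemanticRefinementSketch(64, 256)(low, sem)
    print(out.shape)  # torch.Size([1, 64, 88, 88])
```

Under these assumptions, applying such a block at several decoder stages, with the semantic guidance itself updated by cross-layer interaction, would correspond to the progressive integration strategy summarized in the abstract.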