Three-dimensional (3D) segmentation of neurons is a crucial step in the digital reconstruction of neurons and serves as an important foundation for brain science research. In neuron segmentation, the U-Net and its variants have shown promising results. However, because these methods focus primarily on learning spatial-domain features, they overlook the abundant global information available in the frequency domain. Furthermore, insufficient processing of contextual features by skip connections and redundant features arising from simple channel concatenation in the decoder limit the accurate segmentation of neuronal fiber structures. To address these problems, we propose an encoder-decoder segmentation network that integrates frequency-domain and spatial-domain features to enhance neuron reconstruction. To simplify the segmentation task, we first divide the neuron images into neuronal cubes. We then design 3D FregSNet, which leverages both frequency-domain and spatial-domain features to segment the target neurons within these cubes. Next, we introduce a multiscale attention fusion module (MAFM) that exploits spatial and channel position information to enhance contextual feature representation. In addition, a feature selection module (FSM) is incorporated to adaptively select discriminative features from both the encoder and decoder, assigning greater weight to critical neuron locations and significantly improving segmentation performance. Finally, the segmented nerve fiber cubes are assembled into complete neurons and digitally reconstructed using available neuron tracing algorithms. In experiments, we evaluated 3D FregSNet on two challenging 3D neuron image datasets (the BigNeuron dataset and the CWMBS dataset). Compared with other advanced segmentation methods, 3D FregSNet extracts target neurons more accurately from noisy images with weakly visible neuronal fibers, effectively improving the performance of 3D neuron segmentation and reconstruction.
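To make the idea of combining frequency-domain and spatial-domain features concrete, the sketch below shows one possible way such a block could be built in PyTorch: a local 3D convolution path alongside a global path that filters the 3D FFT of the feature map with learnable complex weights. This is only an illustrative assumption inspired by Fourier-filter layers, not the actual 3D FregSNet design; the class name `FrequencySpatialBlock` and all parameters are hypothetical.

```python
# Minimal sketch (assumed PyTorch implementation) of a block that fuses a
# spatial 3D-convolution path with a frequency-domain path; names and
# hyperparameters are illustrative, not taken from the 3D FregSNet paper.
import torch
import torch.nn as nn


class FrequencySpatialBlock(nn.Module):
    """Fuses local spatial features with globally filtered frequency features."""

    def __init__(self, channels, depth, height, width):
        super().__init__()
        # Spatial path: ordinary local 3D convolution.
        self.spatial = nn.Sequential(
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm3d(channels),
            nn.ReLU(inplace=True),
        )
        # Frequency path: learnable complex filter applied to the real FFT of
        # the feature map, giving a global receptive field.
        self.freq_weight = nn.Parameter(
            torch.randn(channels, depth, height, width // 2 + 1, 2) * 0.02
        )
        # 1x1x1 convolution to fuse the two concatenated paths.
        self.fuse = nn.Conv3d(2 * channels, channels, kernel_size=1)

    def forward(self, x):
        # x: (batch, channels, D, H, W)
        spatial_feat = self.spatial(x)

        # Filter in the frequency domain, then transform back to space.
        freq = torch.fft.rfftn(x, dim=(-3, -2, -1), norm="ortho")
        weight = torch.view_as_complex(self.freq_weight)
        freq_feat = torch.fft.irfftn(
            freq * weight, s=x.shape[-3:], dim=(-3, -2, -1), norm="ortho"
        )

        # Concatenate both paths and project back to the input channel count.
        return self.fuse(torch.cat([spatial_feat, freq_feat], dim=1))


if __name__ == "__main__":
    block = FrequencySpatialBlock(channels=16, depth=32, height=32, width=32)
    cube = torch.randn(1, 16, 32, 32, 32)  # feature map of one neuronal cube
    print(block(cube).shape)               # torch.Size([1, 16, 32, 32, 32])
```

In this sketch the frequency path captures global context in a single step, which is the kind of information a purely convolutional spatial path tends to miss; how the actual network weights and fuses the two domains is described in the method section of the paper.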