Automatic medical image segmentation plays a pivotal role in clinical diagnosis. Over the past decades, medical image segmentation has made remarkable progress with the aid of convolutional neural networks (CNNs). However, extracting context information and disease features for dense segmentation remains challenging because of the low contrast between lesions and the background in medical images. To address this issue, we propose a novel enhanced feature fusion scheme in this work. First, we develop a global feature enhancement module, which captures long-range global dependencies in the spatial domain and enhances global feature learning. Second, we propose a channel fusion attention module to extract multi-scale context information and alleviate the incoherence of semantic information among features at different scales. Then, we combine these two schemes to produce richer context information and to enhance feature contrast. In addition, we remove the decoder with progressive deconvolution operations used in classical U-shaped networks and utilize only the features of the last three layers to generate predictions. We conduct extensive experiments on three public datasets: the polyp segmentation dataset, the ISIC-2018 dataset, and the Synapse Multi-Organ Segmentation dataset. The experimental results demonstrate the superior performance and robustness of our method in comparison with state-of-the-art methods.
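The sketch below illustrates, under stated assumptions, the two kinds of modules the abstract describes: a non-local-style spatial attention block for capturing long-range global dependencies, and a channel-attention block that fuses two scales of features. All class and parameter names (`GlobalFeatureEnhancement`, `ChannelFusionAttention`, reduction ratios) are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GlobalFeatureEnhancement(nn.Module):
    """Non-local-style spatial attention: each position attends to all
    others, capturing long-range dependencies across the spatial domain."""
    def __init__(self, channels: int, reduction: int = 2):
        super().__init__()
        inner = channels // reduction
        self.query = nn.Conv2d(channels, inner, kernel_size=1)
        self.key = nn.Conv2d(channels, inner, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)  # (B, HW, C')
        k = self.key(x).flatten(2)                    # (B, C', HW)
        attn = F.softmax(q @ k, dim=-1)               # (B, HW, HW) affinities
        v = self.value(x).flatten(2).transpose(1, 2)  # (B, HW, C)
        out = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        return x + self.gamma * out                   # residual enhancement

class ChannelFusionAttention(nn.Module):
    """Fuses feature maps from two scales and reweights channels, easing
    the semantic incoherence between shallow and deep features."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, low: torch.Tensor, high: torch.Tensor) -> torch.Tensor:
        # Upsample the coarser (deeper) features to the finer resolution.
        high = F.interpolate(high, size=low.shape[-2:], mode="bilinear",
                             align_corners=False)
        fused = self.fuse(torch.cat([low, high], dim=1))
        w = self.mlp(fused.mean(dim=(2, 3)))           # squeeze: (B, C)
        return fused * w.unsqueeze(-1).unsqueeze(-1)   # channel reweighting

# Smoke test with two of the last encoder stages at typical strides.
if __name__ == "__main__":
    f3 = torch.randn(1, 64, 32, 32)
    f4 = torch.randn(1, 64, 16, 16)
    gfe = GlobalFeatureEnhancement(64)
    cfa = ChannelFusionAttention(64)
    print(cfa(gfe(f3), gfe(f4)).shape)  # torch.Size([1, 64, 32, 32])
```

Consistent with the abstract's design, predictions would be generated directly from the last three enhanced and fused feature maps, without a progressive deconvolution decoder; how those three outputs are combined into a final mask is not specified here.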