Affective analysis aims to understand human sentiment states and is widely applied in human-computer interaction and social sentiment analysis. Compared with unimodal approaches, multimodal sentiment analysis (MSA) exploits the complementary information and differences across modalities, and can therefore better represent the sentiment humans actually express. Existing MSA methods usually ignore the ambiguity of multimodal data and the uncertain influence of redundant features on sentiment discriminability. To address these issues, we propose a fuzzy attention fusion-based MSA method, called FFMSA. FFMSA alleviates the heterogeneity of multimodal data through shared and private subspaces, and resolves ambiguity with a fuzzy attention mechanism based on continuous-value decision making, yielding accurate sentiment features for downstream tasks. The private subspace refines the latent features within each modality through constraints on their uniqueness, while the shared subspace learns common features using a nonparametric independence criterion. By constructing sample pairs for unsupervised contrastive learning, we use fuzzy c-means to model uncertainty, constraining the similarity between similar samples and thereby strengthening the shared feature representation. Furthermore, we adopt a multi-angle modeling approach to capture the consistency and complementarity of the modalities, dynamically adjusting the interactions between modalities through the fuzzy attention mechanism to achieve comprehensive sentiment fusion. Experimental results on two datasets demonstrate that FFMSA outperforms state-of-the-art approaches in MSA and emotion recognition, achieving binary sentiment classification accuracies of 85.8% and 86.4% on CMU-MOSI and CMU-MOSEI, respectively.
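To make the fuzzy attention fusion idea concrete, the following is a minimal sketch of how continuous-valued (fuzzy) gating could modulate cross-modal attention. The paper does not specify its exact formulation, so the Gaussian membership functions, the class name `FuzzyAttentionFusion`, and all hyperparameters here are illustrative assumptions, not FFMSA's actual implementation.

```python
import torch
import torch.nn as nn


class FuzzyAttentionFusion(nn.Module):
    """Hypothetical sketch: cross-modal attention whose output is gated by
    Gaussian fuzzy membership values, so each interaction is a continuous
    decision in [0, 1] rather than a hard softmax pick."""

    def __init__(self, dim: int, n_rules: int = 4):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        # Learnable centers and widths of the Gaussian membership functions.
        self.centers = nn.Parameter(torch.randn(n_rules, dim))
        self.log_sigma = nn.Parameter(torch.zeros(n_rules, dim))
        self.scale = dim ** -0.5

    def membership(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, dim) -> normalized firing strengths (batch, seq, n_rules).
        diff = x.unsqueeze(2) - self.centers                      # (B, S, R, D)
        sigma = self.log_sigma.exp()
        mu = torch.exp(-0.5 * ((diff / sigma) ** 2).mean(dim=-1))  # (B, S, R)
        return mu / (mu.sum(dim=-1, keepdim=True) + 1e-8)

    def forward(self, query_mod: torch.Tensor, key_mod: torch.Tensor) -> torch.Tensor:
        # Standard cross-modal attention: queries from one modality,
        # keys/values from another.
        q, k, v = self.q(query_mod), self.k(key_mod), self.v(key_mod)
        attn = torch.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)
        # Fuzzy gating: each query token's strongest membership acts as a
        # continuous confidence that scales how much cross-modal evidence
        # is admitted, with a soft residual fallback to the original features.
        gate = self.membership(query_mod).max(dim=-1).values.unsqueeze(-1)  # (B, S, 1)
        return gate * (attn @ v) + (1 - gate) * query_mod


# Usage with illustrative shapes: fuse text features with acoustic features.
text = torch.randn(8, 20, 128)
audio = torch.randn(8, 20, 128)
fused = FuzzyAttentionFusion(dim=128)(text, audio)
print(fused.shape)  # torch.Size([8, 20, 128])
```

The soft residual in the last line of `forward` is one plausible reading of "continuous-value decision making": rather than choosing between the attended cross-modal features and the original unimodal ones, the membership gate blends them, letting redundant or ambiguous interactions be down-weighted smoothly.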