Federated learning aims to train models without sharing private data, but many privacy risks remain. Recent studies have shown that federated learning is vulnerable to membership inference attacks. Weights, as important parameters of neural networks, have been proven effective for membership inference attacks, but exploiting them incurs significant overhead. To address this issue, in this paper, we propose a bias-based method for efficient membership inference attacks against federated learning. Unlike the weights, which determine the orientation of the decision surface, the bias also plays an important role, determining how far the surface shifts along that orientation. Moreover, the number of bias parameters is far smaller than the number of weights. We consider two types of attacks, a local attack and a global attack, corresponding to two possible types of insiders: a participant and the central aggregator. For the local attack, we design a neural network-based inference that fully learns the vertical bias changes of member data and non-member data. For the global attack, we design a difference comparison-based inference to determine the data source. Extensive experimental results on four public datasets show that the proposed method achieves state-of-the-art inference accuracy. Moreover, experiments demonstrate the effectiveness of the proposed method against several commonly used defenses.
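To make the overhead argument concrete, the minimal sketch below (illustrative only; the architecture and layer sizes are assumptions, not taken from the paper) counts weight and bias parameters in a small PyTorch MLP. Weights grow with the product of adjacent layer widths, while biases grow only with the number of neurons, so a bias-based attacker observes orders of magnitude fewer values per round.

```python
import torch.nn as nn

# Hypothetical model for illustration; not the architecture used in the paper.
model = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(),
    nn.Linear(256, 128), nn.ReLU(),
    nn.Linear(128, 10),
)

# For a fully connected layer, weights number in_features * out_features,
# while biases number only out_features.
n_weights = sum(p.numel() for name, p in model.named_parameters()
                if name.endswith("weight"))
n_biases = sum(p.numel() for name, p in model.named_parameters()
               if name.endswith("bias"))

print(f"weights: {n_weights}, biases: {n_biases}")
# weights: 234752, biases: 394 -- roughly a 600x difference in this toy case
```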