Convolutional neural networks (CNNs) have demonstrated significant performance improvements in many application scenarios. However, their high computational complexity and large model size have limited their deployment on mobile and embedded devices. Various approaches have been proposed to compress CNNs. Filter pruning is widely considered a promising solution, as it can significantly speed up inference and reduce memory consumption. However, most existing approaches prune filters with manually allocated compression ratios, which relies heavily on individual expertise and is unfriendly to non-expert users. In this paper, we propose an Automatic Compression Ratio Allocation (ACRA) scheme, based on binary search, for pruning convolutional neural networks. Specifically, ACRA provides two strategies for allocating compression ratios automatically. First, the uniform pruning strategy assigns the same compression ratio to every layer; this ratio is obtained by binary search against a target FLOPs reduction for the whole network. Second, the sensitivity-based pruning strategy assigns each layer a compression ratio according to that layer's sensitivity, measured by its impact on accuracy. Experimental results on VGG-11 and VGG-16 demonstrate that our scheme reduces FLOPs significantly while maintaining a high accuracy level. Specifically, for VGG-16 on the CIFAR-10 dataset, we reduce FLOPs by 29.18% with only a 1.24% decrease in accuracy.
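To make the uniform strategy concrete, the sketch below shows how a single per-layer compression ratio could be found by binary search against a target network-wide FLOPs reduction. This is a minimal illustration under assumed conventions, not the authors' implementation: the FLOPs model (multiply-accumulates per convolution), the toy layer configuration, and the function names `conv_flops`, `total_flops`, and `uniform_ratio` are all assumptions introduced here.

```python
# Illustrative sketch of uniform compression-ratio allocation via binary
# search (assumed details; not the paper's actual code).

def conv_flops(in_ch, out_ch, k, out_h, out_w):
    # Multiply-accumulate count for one conv layer.
    return in_ch * out_ch * k * k * out_h * out_w

def total_flops(layers, ratio):
    # Apply the same pruning ratio to every layer's output channels.
    # Pruning one layer's outputs also shrinks the next layer's inputs.
    flops, prev_keep = 0, None
    for (in_ch, out_ch, k, h, w) in layers:
        keep_in = prev_keep if prev_keep is not None else in_ch
        keep_out = max(1, round(out_ch * (1.0 - ratio)))
        flops += conv_flops(keep_in, keep_out, k, h, w)
        prev_keep = keep_out
    return flops

def uniform_ratio(layers, target_reduction, tol=1e-4):
    # FLOPs decrease monotonically as the ratio grows, so binary search
    # converges on the ratio matching the target reduction.
    base = total_flops(layers, 0.0)
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        reduction = 1.0 - total_flops(layers, mid) / base
        if reduction < target_reduction:
            lo = mid   # prune more aggressively
        else:
            hi = mid   # prune less
    return (lo + hi) / 2

# Toy VGG-like stack: (in_ch, out_ch, kernel, out_h, out_w).
layers = [(3, 64, 3, 32, 32), (64, 128, 3, 16, 16), (128, 256, 3, 8, 8)]
r = uniform_ratio(layers, target_reduction=0.30)
print(f"ratio = {r:.4f}, FLOPs cut = "
      f"{1 - total_flops(layers, r) / total_flops(layers, 0.0):.2%}")
```

Note that because pruning a layer's output channels also reduces the following layer's input channels, the network-wide FLOPs reduction is not linear in the ratio; the binary search handles this implicitly, since FLOPs remain monotone in the ratio.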