The goal of few-shot learning is to learn a solution to a problem from a limited number of training samples. In recent years, as deep neural network-based vision algorithms have been widely deployed, the problem of data scarcity has become increasingly prominent, prompting extensive study of few-shot learning algorithms in both academia and industry. This paper first analyzes the proposal estimation bias in the classic transfer-learning paradigm for few-shot object detection, and then proposes an improved scheme that combines weight imprinting and model decoupling. On the one hand, we extend the weight imprinting algorithm to the general Faster R-CNN framework to enhance fine-tuning performance; on the other hand, we exploit model decoupling to mitigate over-fitting in data-scarce scenarios. Our proposed method achieves top accuracy of 12.3, 15.0, and 18.9 nAP on the COCO novel set under the 5-shot, 10-shot, and 30-shot settings, and 57.7 and 60.2 nAP50 on the VOC Split 3 novel set under the 5-shot and 10-shot settings. Compared with the most recently published studies, our method delivers competitive detection performance on novel categories through fine-tuning alone. Moreover, it retains the original network architecture and is practical for real industrial scenarios.
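To make the weight imprinting ingredient concrete, the sketch below illustrates the standard imprinting idea on which our extension builds: the embeddings of the few available samples of each novel class are L2-normalized and averaged, and the resulting prototype is copied in as that class's classifier weight row. This is a minimal illustration only; the function and variable names (e.g. imprint_novel_weights, features_per_class) are hypothetical and do not correspond to our detector's actual interfaces.

```python
import torch
import torch.nn.functional as F

def imprint_novel_weights(classifier, features_per_class):
    """Append imprinted weight rows for novel classes to a linear classifier.

    classifier: torch.nn.Linear whose rows are per-class weight vectors
                (assumed bias-free, used with normalized features).
    features_per_class: dict mapping novel class id -> tensor of shape
                        (n_shots, feature_dim) holding backbone embeddings
                        of the few annotated samples (hypothetical inputs).
    """
    new_rows = []
    for cls_id in sorted(features_per_class):
        # L2-normalize each shot embedding, then average into a class prototype.
        feats = F.normalize(features_per_class[cls_id], dim=1)
        proto = F.normalize(feats.mean(dim=0), dim=0)
        new_rows.append(proto)

    # Build an enlarged classifier whose new rows are the imprinted prototypes.
    with torch.no_grad():
        expanded = torch.cat([classifier.weight, torch.stack(new_rows)], dim=0)
    imprinted = torch.nn.Linear(classifier.in_features, expanded.size(0), bias=False)
    with torch.no_grad():
        imprinted.weight.copy_(expanded)
    return imprinted
```

Imprinted weights give the novel classes a sensible starting point before fine-tuning, which is the behavior the abstract refers to when describing the fine-tuning stage.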