This work studies the problem of distributed classification in peer-to-peer (P2P) networks. While there has been a significant amount of work on distributed classification, most existing algorithms are not designed for P2P networks. Indeed, as server-less and router-less systems, P2P networks impose several challenges for distributed classification: (1) it is not practical to have global synchronization in large-scale P2P networks; (2) there are frequent topology changes caused by frequent failure and recovery of peers; and (3) there are frequent on-the-fly data updates on each peer. In this paper, we propose an ensemble paradigm for distributed classification in P2P networks. Under this paradigm, each peer builds its local classifiers on the local data, and the results from all local classifiers are then combined by plurality voting. To build local classifiers, we adopt the learning algorithm of pasting bites to generate multiple local classifiers on each peer from the local data. To combine local results, we propose a general form of Distributed Plurality Voting (DPV) protocol for dynamic P2P networks. This protocol preserves single-site validity in dynamic networks and supports both one-shot query and continuous monitoring modes of computation. We theoretically prove that the condition C0 for sending messages used in DPV0 is locally communication-optimal for achieving these properties. Finally, experimental results on real-world P2P networks show that: (1) the proposed ensemble paradigm is effective even when there are thousands of local classifiers; (2) in most cases, the DPV0 algorithm is local in the sense that voting is processed using information gathered from a very small vicinity, whose size is independent of the network size; and (3) DPV0 is significantly more communication-efficient than existing algorithms for distributed plurality voting.
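
The following is a minimal, self-contained sketch in Python of the ensemble paradigm summarized above: each peer pastes small "bites" of its local data to train several local classifiers, and a query is labeled by plurality voting over all local predictions. The function names, the bite size, and the use of scikit-learn decision trees as base learners are illustrative assumptions, not the paper's implementation; in particular, the message-passing logic of the DPV protocol is omitted and the vote is computed centrally here only for exposition.

    # Sketch of pasting bites per peer plus plurality voting over all local
    # classifiers; the helper names and parameters are hypothetical.
    from collections import Counter

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier


    def build_local_classifiers(X_local, y_local, n_bites=5, bite_size=100, seed=0):
        """Train several classifiers on small random 'bites' of one peer's local data."""
        rng = np.random.default_rng(seed)
        classifiers = []
        for _ in range(n_bites):
            # Sample a small bite of the local data (with replacement).
            idx = rng.choice(len(X_local), size=min(bite_size, len(X_local)), replace=True)
            classifiers.append(DecisionTreeClassifier().fit(X_local[idx], y_local[idx]))
        return classifiers


    def plurality_vote(all_classifiers, x):
        """Combine the predictions of every local classifier by plurality voting."""
        votes = [int(clf.predict(x.reshape(1, -1))[0]) for clf in all_classifiers]
        label, _ = Counter(votes).most_common(1)[0]
        return label


    if __name__ == "__main__":
        # Two simulated peers, each with its own local data; a query point is
        # then labeled by voting over the union of all local classifiers.
        rng = np.random.default_rng(1)
        peers = [(rng.normal(size=(200, 4)), rng.integers(0, 3, size=200)) for _ in range(2)]
        ensemble = [clf for X, y in peers for clf in build_local_classifiers(X, y)]
        print(plurality_vote(ensemble, rng.normal(size=4)))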