This paper studies distributed (strongly convex) optimization over multi-agent networks subject to finite-rate communications. We propose the first distributed algorithm that achieves geometric convergence to the exact solution of the problem, thus matching the rate of the centralized gradient algorithm (albeit with different constants). The algorithm combines gradient tracking with a quantized perturbed consensus scheme. We also investigate the impact of design and network parameters, such as the number of bits, the algorithm step-size, and the network connectivity, on the convergence rate. Finally, numerical results validate our theoretical findings and demonstrate an interesting trade-off among solution accuracy, convergence time, and communication cost, the latter defined as the total number of bits transmitted over a single link to achieve a target solution error.
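For reference, a generic gradient-tracking iteration with quantized exchanges can be sketched as follows; this is an illustrative template under assumed notation (local costs $f_i$, consensus weights $w_{ij}$ over the neighborhood $\mathcal{N}_i$, step-size $\alpha$, and a finite-rate quantizer $Q(\cdot)$), not necessarily the exact update proposed in the paper:
\begin{align*}
  x_i^{k+1} &= \sum_{j \in \mathcal{N}_i \cup \{i\}} w_{ij}\, Q\big(x_j^{k}\big) - \alpha\, y_i^{k},\\
  y_i^{k+1} &= \sum_{j \in \mathcal{N}_i \cup \{i\}} w_{ij}\, Q\big(y_j^{k}\big) + \nabla f_i\big(x_i^{k+1}\big) - \nabla f_i\big(x_i^{k}\big),
\end{align*}
where $y_i^k$ tracks the network-average gradient. The quantized perturbed consensus scheme refines such an update so that the quantization errors do not preclude exact, geometrically fast convergence.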