In recent years, many applications have been driven by advances in Machine Learning (ML). Nowadays, it is common to see industrial-strength machine learning jobs that involve millions of model parameters, terabytes of training data, and weeks of training. Good efficiency, i.e., fast completion time of a specific ML training job, is therefore a key feature of a successful ML system. While the completion time of a long-running ML job is determined by the time required to reach model convergence, it is also largely influenced by the values of various system settings. In this paper, we contribute techniques towards building self-tuning parameter servers. Parameter Server (PS) is a popular system architecture for large-scale machine learning systems; by self-tuning we mean that while a long-running ML job is iteratively training the expert-suggested model, the system is also iteratively learning which system setting is more efficient for that job and applying it online. Our techniques are general enough to apply to various PS-style ML systems. Experiments on TensorFlow show that our techniques can reduce the completion times of a variety of long-running TensorFlow jobs by 1.4x to 18x.
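To make the self-tuning idea concrete, below is a minimal sketch, not the paper's actual algorithm, of an online tuning loop interleaved with training: the job trains in windows of iterations, and between windows an epsilon-greedy bandit picks which system setting to apply next based on observed throughput. The setting space (`num_ps_threads`), the throughput numbers, and `train_window` are all hypothetical stand-ins for a real PS system's knobs and measurements.

```python
import random

# Hypothetical discrete search space of system settings; a real PS-style
# system would expose knobs such as server/worker counts or sync intervals.
SETTINGS = [{"num_ps_threads": t} for t in (1, 2, 4, 8)]

def train_window(setting, steps=100):
    """Stand-in for running `steps` training iterations under `setting`.

    Returns the observed throughput (steps/sec); simulated here with
    made-up base rates plus noise.
    """
    base = {1: 50.0, 2: 90.0, 4: 120.0, 8: 110.0}[setting["num_ps_threads"]]
    return base * random.uniform(0.9, 1.1)

def self_tune(windows=50, epsilon=0.2):
    """Epsilon-greedy tuning: mostly exploit the best-known setting,
    occasionally explore another one, applying each choice online
    while training continues."""
    stats = {i: [0.0, 0] for i in range(len(SETTINGS))}  # [sum, count]
    for _ in range(windows):
        if random.random() < epsilon or all(c == 0 for _, c in stats.values()):
            i = random.randrange(len(SETTINGS))          # explore
        else:
            i = max(stats, key=lambda k: stats[k][0] / stats[k][1])  # exploit
        throughput = train_window(SETTINGS[i])  # training proceeds meanwhile
        stats[i][0] += throughput
        stats[i][1] += 1
    best = max(stats, key=lambda k: stats[k][0] / max(stats[k][1], 1))
    return SETTINGS[best]

if __name__ == "__main__":
    print("best setting found:", self_tune())
```

The key design point this sketch illustrates is that tuning costs no dedicated downtime: every measurement window is also a window of useful training, so the job converges while the system learns its own configuration.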