Diffusion models (DMs) have achieved impressive results on low-level vision tasks, and recent studies attempt to design efficient diffusion models for image super-resolution (SR). However, these studies have mainly focused on reducing the number of parameters and FLOPs through various network designs; although such designs decrease parameter counts and floating-point operations, they do not necessarily reduce actual running time. To make DM inference faster on limited computational resources while retaining quality and flexibility, we propose a Reparameterized lightweight Diffusion model SR network (RDSR), which consists of a Latent Prior Encoder (LPE), a Reparameterized Decoder (RepD), and a diffusion model conditioned on degraded images. Specifically, we first pretrain the LPE, which takes paired HR and LR patches as input and maps them from pixel space to latent space. RepD has a VGG-like inference-time body composed of nothing but a stack of 3x3 convolutions and ReLU, while its training-time counterpart has a multi-branch topology. The diffusion model serves as a bridge between the LPE and RepD: the LPE supervises the reverse diffusion process with a distillation loss, and the output of the reverse process acts as a modulator that guides RepD to reconstruct high-quality results. RDSR effectively reduces GPU memory consumption and improves inference speed. Extensive experiments on SR benchmarks demonstrate the superiority of RDSR over state-of-the-art DM-based methods: RDSR-2.2M achieves 30.11 dB PSNR on the DIV2K100 dataset, surpassing DM-based models of comparable size, while striking a good trade-off among parameters, efficiency, and accuracy, running 55.8x faster than DiffIR on an Intel(R) Xeon(R) Platinum 8255C CPU.
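To make the reparameterization idea behind RepD concrete, the sketch below shows a generic RepVGG-style block: a multi-branch training-time structure that is folded into a single 3x3 convolution plus ReLU at inference. The specific branch composition (3x3 conv + 1x1 conv + identity) and the class name RepBlock are assumptions for illustration, not the paper's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class RepBlock(nn.Module):
    """Training-time multi-branch block that can be collapsed, at inference,
    into a plain 3x3 convolution followed by ReLU (hypothetical sketch)."""

    def __init__(self, channels):
        super().__init__()
        self.conv3x3 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv1x1 = nn.Conv2d(channels, channels, 1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        # Multi-branch training-time forward: 3x3 + 1x1 + identity.
        return self.act(self.conv3x3(x) + self.conv1x1(x) + x)

    def reparameterize(self):
        """Fold the 1x1 and identity branches into the 3x3 kernel and bias,
        yielding an equivalent single-branch Conv2d + ReLU for inference."""
        c = self.conv3x3.out_channels
        w = self.conv3x3.weight.data.clone()
        b = self.conv3x3.bias.data.clone()
        # 1x1 branch: zero-pad its kernel to 3x3 (centered) and add.
        w += F.pad(self.conv1x1.weight.data, [1, 1, 1, 1])
        b += self.conv1x1.bias.data
        # Identity branch: a 3x3 kernel with 1 at the center of its own channel.
        ident = torch.zeros_like(w)
        for i in range(c):
            ident[i, i, 1, 1] = 1.0
        w += ident
        fused = nn.Conv2d(c, c, 3, padding=1)
        fused.weight.data, fused.bias.data = w, b
        return nn.Sequential(fused, nn.ReLU(inplace=True))


if __name__ == "__main__":
    # Sanity check: multi-branch and fused forms give the same output.
    block = RepBlock(16).eval()
    x = torch.randn(1, 16, 32, 32)
    with torch.no_grad():
        assert torch.allclose(block(x), block.reparameterize()(x), atol=1e-5)
```

Because the fused inference-time body is just stacked 3x3 convolutions and ReLU, it avoids the memory-access overhead of multi-branch topologies, which is what allows the parameter/FLOP reductions to translate into actual wall-clock speedups.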