Super-resolution of satellite imagery is a critical task for enhancing the utility of remote sensing data across applications such as urban planning, disaster management, and environmental monitoring. Traditional interpolation methods often fail to recover fine details, while deep-learning-based approaches, including convolutional neural networks (CNNs) and generative adversarial networks (GANs), have significantly advanced super-resolution performance. Recent studies have explored large-scale models, such as Transformer-based architectures and diffusion models, demonstrating improved texture realism and generalization across diverse datasets. However, these methods typically incur high computational costs and require extensive training data, which makes real-world deployment challenging. To address these limitations, we propose the multi-branch generative prior integration network (MBGPIN), a novel framework that integrates multiscale feature extraction, hybrid attention mechanisms, and generative priors derived from pretrained VQGAN models. The dual-pathway architecture of the MBGPIN comprises a feature extraction pathway for spatial features and a generative prior pathway for external guidance, dynamically fused by an adaptive generative prior fusion (AGPF) module. Extensive experiments on benchmark datasets such as UC Merced, NWPU-RESISC45, and RSSCN7 demonstrate that the MBGPIN outperforms state-of-the-art methods, including large-scale super-resolution models: it achieves higher peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) scores while preserving high-frequency details and complex textures. The MBGPIN is also computationally efficient, requiring fewer floating-point operations (FLOPs) and achieving faster inference, which makes it scalable for real-world applications.
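To make the dual-pathway and adaptive-fusion idea concrete, the following is a minimal PyTorch sketch of how spatial features and external prior features might be blended with a learned per-pixel gate. The class names (`AGPFBlock`, `DualPathwaySR`), the gating design, and the simple convolution standing in for a pretrained VQGAN encoder are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn


class AGPFBlock(nn.Module):
    """Sketch of adaptive fusion of spatial features with generative-prior
    features: a learned, spatially varying gate decides per pixel how much
    of the external prior to blend into the restoration features."""

    def __init__(self, channels: int):
        super().__init__()
        # Gate predicted from the concatenated pathways (hypothetical design).
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, feat: torch.Tensor, prior: torch.Tensor) -> torch.Tensor:
        g = self.gate(torch.cat([feat, prior], dim=1))  # per-pixel fusion weights
        return g * prior + (1.0 - g) * feat             # adaptive blend


class DualPathwaySR(nn.Module):
    """Minimal dual-pathway skeleton: a spatial feature pathway plus a
    prior pathway, fused and upsampled via pixel shuffle."""

    def __init__(self, channels: int = 64, scale: int = 4):
        super().__init__()
        self.feature_path = nn.Sequential(            # spatial feature pathway
            nn.Conv2d(3, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        # Stand-in for a pretrained VQGAN encoder; in practice its
        # weights would be loaded from a checkpoint and frozen.
        self.prior_path = nn.Conv2d(3, channels, 3, padding=1)
        self.fuse = AGPFBlock(channels)
        self.upsample = nn.Sequential(                # pixel-shuffle upsampler
            nn.Conv2d(channels, 3 * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),
        )

    def forward(self, lr: torch.Tensor) -> torch.Tensor:
        fused = self.fuse(self.feature_path(lr), self.prior_path(lr))
        return self.upsample(fused)


if __name__ == "__main__":
    sr = DualPathwaySR()
    out = sr(torch.randn(1, 3, 64, 64))
    print(out.shape)  # torch.Size([1, 3, 256, 256])
```

The gated blend keeps the prior pathway from overwhelming the spatial features in regions where the low-resolution input is already informative, which is one plausible way a fusion module could trade off fidelity against hallucinated texture.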