Image super-resolution is a vision task that reconstructs high-resolution images from low-resolution inputs. Transformer-based methods are currently widely applied to image super-resolution and have yielded promising results. However, because they compute attention across the entire image to capture long-range dependencies, existing Vision Transformer (ViT) approaches to super-resolution reconstruction incur high computational costs, thereby increasing system overhead. Other researchers have proposed methods based on manually designed sparse attention mechanisms; however, because these approaches acquire receptive fields in a manner similar to traditional convolutions, they do not fully exploit the Transformer's advantage in extracting global information, resulting in suboptimal reconstruction performance. To leverage the Transformer's ability to capture long-range dependencies, this paper introduces a novel network called RASHAT. In RASHAT, we propose an Adaptive Sparse Hybrid Attention Block (ASHAB). This module introduces Bi-level Routing Attention (BRA) and incorporates both Channel Attention (CA) and (Shifted) Window Multi-head Self-Attention ((S)W-MSA). Together, these components capture long-range dependencies, global context, and local dependencies within the image. Additionally, the model employs an Overlapping Cross-Attention Block (OCAB) to enhance information interaction between neighboring pixels. During training, we introduce a novel composite loss function that combines a frequency-domain loss with a pixel loss, further improving model performance. Extensive experiments demonstrate that, benefiting from the sparse attention provided by BRA, RASHAT achieves performance comparable to the current state-of-the-art model (20.8M parameters) with significantly fewer parameters (11.6M). These results hold across multiple commonly used benchmark datasets.
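The abstract states only that the composite loss combines a frequency-domain term with a pixel term; the exact formulation is not given here. The following is a minimal sketch under assumed choices: an L1 pixel loss, a frequency loss computed as the L1 distance between 2D Fourier transforms, and a hypothetical weighting factor `lam`.

```python
import torch
import torch.nn.functional as F

def composite_loss(sr: torch.Tensor, hr: torch.Tensor, lam: float = 0.05) -> torch.Tensor:
    """Illustrative composite loss (assumed form, not the paper's exact definition).

    sr, hr: super-resolved and ground-truth images of shape (B, C, H, W).
    lam: hypothetical weight balancing the frequency-domain term.
    """
    # Pixel-domain term: L1 distance between reconstruction and ground truth.
    pixel_loss = F.l1_loss(sr, hr)

    # Frequency-domain term: L1 distance between the 2D FFTs of both images,
    # computed over the spatial dimensions (magnitude of the complex residual).
    sr_fft = torch.fft.fft2(sr, norm="ortho")
    hr_fft = torch.fft.fft2(hr, norm="ortho")
    freq_loss = (sr_fft - hr_fft).abs().mean()

    return pixel_loss + lam * freq_loss
```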