Recently, single-image super-resolution (SISR) based on convolutional neural networks (CNNs) has faced several challenges, including large numbers of network parameters, limited receptive fields, and an inability to capture global context information. To address these issues, we propose an image super-resolution method based on attention-aggregated hierarchical features (AHSR), which improves the performance of the super-resolution (SR) network by optimizing convolutional operations and integrating effective attention modules. AHSR first uses a high-frequency filter so that the abundant low-frequency information bypasses the main network, allowing the network to concentrate on learning high-frequency details. To aggregate spatial information within the image, expand the receptive field, and extract local structural features more effectively, we replace spatial convolution with the shift operation, which requires zero parameters and zero FLOPs. In addition, we introduce a multi-Dconv head transposed attention module to improve the aggregation of cross-hierarchical feature information, yielding enhanced features that incorporate contextual information. Extensive experimental results show that, compared with other advanced SR models, the proposed AHSR method recovers image details better with fewer model parameters and lower computational complexity.
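The shift operation mentioned above can be illustrated with a minimal sketch: channels are partitioned into groups, and each group is displaced one pixel in a different direction, so that a following 1x1 convolution can mix spatially offset information. The function name, the four-direction grouping, and the zero-padding at the borders are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def shift_features(x, groups=4):
    """Parameter-free spatial shift (illustrative sketch).

    Partitions the channels of x (shape: C, H, W) into `groups` equal
    groups and shifts each group one pixel up, down, left, or right;
    any remaining channels are left unshifted. The operation has zero
    learnable parameters and zero FLOPs -- it is pure memory movement.
    Vacated border positions are zero-padded (an assumption).
    """
    out = np.zeros_like(x)
    c = x.shape[0] // groups
    out[0 * c:1 * c, :-1, :] = x[0 * c:1 * c, 1:, :]   # shift up
    out[1 * c:2 * c, 1:, :] = x[1 * c:2 * c, :-1, :]   # shift down
    out[2 * c:3 * c, :, :-1] = x[2 * c:3 * c, :, 1:]   # shift left
    out[3 * c:4 * c, :, 1:] = x[3 * c:4 * c, :, :-1]   # shift right
    out[4 * c:] = x[4 * c:]                            # leftover channels pass through
    return out
```

Because each group sees a different one-pixel displacement, stacking shifts with pointwise convolutions enlarges the receptive field without any convolutional weights in the spatial dimension.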
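The transposed-attention idea can likewise be sketched: the attention map is computed across channels (a C x C affinity matrix) rather than across spatial positions, so the cost scales with the channel count instead of the quadratic number of pixels. This is a simplified NumPy sketch under stated assumptions: the depthwise-convolution projections implied by "multi-Dconv" are omitted, and the input is used directly as query, key, and value.

```python
import numpy as np

def transposed_attention(x, num_heads=2):
    """Channel-wise (transposed) self-attention, illustrative sketch.

    x: feature map of shape (C, H, W). For each head, channels are
    flattened to vectors of length H*W, L2-normalized, and a
    (head_dim x head_dim) channel-affinity matrix is softmax-normalized
    and applied to the values. Q/K/V projections are omitted here
    (an assumption for brevity).
    """
    c, h, w = x.shape
    head_dim = c // num_heads
    out = np.empty_like(x)
    for i in range(num_heads):
        q = x[i * head_dim:(i + 1) * head_dim].reshape(head_dim, h * w)
        v = q.copy()
        # L2-normalize each channel vector before the dot product
        qn = q / (np.linalg.norm(q, axis=1, keepdims=True) + 1e-8)
        attn = qn @ qn.T                                   # (head_dim, head_dim) affinities
        attn = np.exp(attn - attn.max(axis=1, keepdims=True))
        attn /= attn.sum(axis=1, keepdims=True)            # softmax over channels
        out[i * head_dim:(i + 1) * head_dim] = (attn @ v).reshape(head_dim, h, w)
    return out
```

Because the attention matrix is C x C rather than (HW) x (HW), each output channel becomes a weighted mixture of all channels, injecting global context at a cost independent of the spatial resolution.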