Recent advances in single image super-resolution (SISR) have shown promising results, but the best-performing networks tend to be computationally heavy, making them unsuitable for edge devices. Achieving better reconstruction quality with fewer parameters therefore remains an open research problem. To address this issue, we propose a separable feature complementary network with branch-wise attention and multi-scale spatial attention (SFCN-BMSA). The network contains a feature complementary module, which uses a limited number of small convolution kernels to combine long-range features from different positions on the feature map and exploits them to enhance image reconstruction. In addition, we design a feature fusion module with branch-wise attention, which fuses the features of different branches according to the importance of each branch. Finally, we design a multi-scale spatial attention module, which uses three 5×5 dilated convolutions to compute attention at different spatial scales and combines them to obtain more refined attention maps while benefiting from a larger receptive field. Experiments show that the proposed network achieves better reconstruction results with fewer parameters.
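To illustrate the idea of branch-wise attention, the following is a minimal sketch, assuming a PyTorch implementation; the global-pooling, linear scoring, and softmax weighting below are our assumptions for illustration, not necessarily the exact module proposed in the paper.

```python
# Sketch (assumption): fuse several branch outputs with learned per-branch
# importance weights derived from globally pooled features.
import torch
import torch.nn as nn

class BranchWiseAttentionFusion(nn.Module):
    def __init__(self, channels, num_branches):
        super().__init__()
        # Predict one importance score per branch from pooled branch features.
        self.score = nn.Linear(channels * num_branches, num_branches)

    def forward(self, branches):
        # branches: list of K tensors, each of shape (N, C, H, W)
        pooled = torch.cat([b.mean(dim=(2, 3)) for b in branches], dim=1)  # (N, C*K)
        weights = torch.softmax(self.score(pooled), dim=1)                 # (N, K)
        stacked = torch.stack(branches, dim=1)                             # (N, K, C, H, W)
        # Weight each branch by its importance and sum to fuse.
        return (stacked * weights[:, :, None, None, None]).sum(dim=1)      # (N, C, H, W)
```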
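Similarly, a minimal sketch of how a multi-scale spatial attention module built from three 5×5 dilated convolutions could be realized is given below; the specific dilation rates, fusion convolution, and sigmoid gating are assumptions made for illustration rather than the paper's exact design.

```python
# Sketch (assumption): spatial attention computed at three scales via
# dilated 5x5 convolutions, then combined into a single refined attention map.
import torch
import torch.nn as nn

class MultiScaleSpatialAttention(nn.Module):
    def __init__(self, channels, dilations=(1, 2, 3)):
        super().__init__()
        # Three 5x5 dilated convolutions; padding = 2*d keeps the spatial size.
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, 1, kernel_size=5, padding=2 * d, dilation=d)
            for d in dilations
        ])
        # Fuse the per-scale maps into one attention map.
        self.fuse = nn.Conv2d(len(dilations), 1, kernel_size=1)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        maps = torch.cat([branch(x) for branch in self.branches], dim=1)
        attention = self.sigmoid(self.fuse(maps))
        return x * attention  # reweight features spatially

# Usage: y = MultiScaleSpatialAttention(64)(torch.randn(1, 64, 32, 32))
```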