RGB-D salient object detection (SOD) aims to detect salient objects by fusing the saliency cues contained in RGB and depth images. Although the cross-modal fusion strategies employed by existing RGB-D SOD models can effectively fuse information from different modalities, most of them ignore the contextual information that is gradually diluted during feature fusion, and they do not fully exploit the common and global information shared by the two modalities. In addition, although the decoding strategies adopted by existing RGB-D SOD models can effectively decode multi-level fused features, most of them neither fully mine the semantic information contained in high-level fused features and the detail information contained in low-level fused features, nor fully utilize such information to guide decoding, which leads to saliency maps with incomplete structures and impoverished details. To overcome these problems, we propose a feature complementary fusion and information-guided network (FCFIG-Net) for RGB-D SOD, which consists of a feature complementary fusion encoder and an information-guided decoder. In FCFIG-Net, the encoder and decoder cooperate to enhance and fuse multi-modal features and contextual information during encoding, and to fully exploit the semantic and detail information in the features during decoding. Concretely, we first design a feature complementary enhancement module (FCEM), which enhances the representational capability of the features from each modality by exploiting the complementarity between them. Then, to supplement the contextual information that is gradually diluted during feature fusion, we design a global contextual information extraction module (GCIEM) that extracts global contextual information from the deep encoded features. Furthermore, we design a multi-modal feature fusion module (MFFM), which fully fuses the bimodal features with the global contextual information after mining and enhancing the common and global information contained in the bimodal features. Using FCEM, GCIEM, MFFM, and a ResNet-50 backbone, we construct the feature complementary fusion encoder. We further design a guidance decoding unit (GDU) and, combining GDU with an existing cascaded decoder, build an information-guided decoder (IGD), which achieves high-quality step-by-step decoding of the multi-level fused features by fully utilizing the semantic information in the high-level fused features and the detail information in the low-level fused features. Extensive experiments on six widely used RGB-D datasets show that FCFIG-Net achieves state-of-the-art performance.
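To make the two-stream encoder/decoder organization described above concrete, the following is a minimal structural sketch in PyTorch. Only the overall wiring follows the text (two ResNet-50 streams, per-level fusion, global context injection from the deepest features, and top-down step-by-step decoding); the internals of FCEM, GCIEM, MFFM, and GDU are not specified here, so they are replaced with simple placeholder operations and the class/parameter names (`PlaceholderFusion`, `FCFIGNetSketch`, `mid_channels`) are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.models as models


class PlaceholderFusion(nn.Module):
    """Stand-in for FCEM + MFFM at one level: enhance and fuse RGB/depth features,
    optionally injecting the global context produced by GCIEM (internals assumed)."""
    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.proj = nn.Conv2d(in_channels, out_channels, kernel_size=1)

    def forward(self, rgb, depth, context=None):
        fused = self.proj(rgb + depth)                 # placeholder complementary fusion
        if context is not None:                        # re-introduce the diluted global context
            ctx = F.interpolate(context, size=fused.shape[2:],
                                mode="bilinear", align_corners=False)
            fused = fused + ctx                        # 1-channel ctx broadcasts over channels
        return fused


class FCFIGNetSketch(nn.Module):
    """Wiring only: two ResNet-50 streams, per-level fusion, top-down guided decoding."""
    def __init__(self, mid_channels=64):
        super().__init__()

        def make_stream():
            net = models.resnet50(weights=None)
            return nn.ModuleList([
                nn.Sequential(net.conv1, net.bn1, net.relu, net.maxpool, net.layer1),
                net.layer2, net.layer3, net.layer4,
            ])

        self.rgb_stream = make_stream()
        self.depth_stream = make_stream()
        enc_channels = [256, 512, 1024, 2048]                 # ResNet-50 stage output channels
        self.gciem = nn.Conv2d(2 * enc_channels[-1], 1, 1)    # placeholder GCIEM -> context map
        self.fusions = nn.ModuleList(
            PlaceholderFusion(c, mid_channels) for c in enc_channels)
        self.gdus = nn.ModuleList(                            # placeholder GDUs of the cascaded decoder
            nn.Conv2d(2 * mid_channels, mid_channels, 3, padding=1)
            for _ in enc_channels[:-1])
        self.head = nn.Conv2d(mid_channels, 1, 1)             # saliency prediction head

    def forward(self, rgb, depth):
        feats, r, d = [], rgb, depth
        for r_block, d_block in zip(self.rgb_stream, self.depth_stream):
            r, d = r_block(r), d_block(d)
            feats.append((r, d))
        context = self.gciem(torch.cat([r, d], dim=1))        # global context from deepest features
        fused = [fuse(rf, df, context) for fuse, (rf, df) in zip(self.fusions, feats)]
        x = fused[-1]                                         # start decoding from the highest level
        for gdu, low in zip(self.gdus[::-1], fused[-2::-1]):  # step-by-step top-down decoding
            x = F.interpolate(x, size=low.shape[2:], mode="bilinear", align_corners=False)
            x = gdu(torch.cat([x, low], dim=1))               # high-level semantics guide low-level details
        return torch.sigmoid(self.head(x))                    # saliency map at 1/4 input resolution


if __name__ == "__main__":
    rgb = torch.randn(1, 3, 256, 256)
    depth = torch.randn(1, 3, 256, 256)                       # depth assumed replicated to 3 channels
    print(FCFIGNetSketch()(rgb, depth).shape)                 # torch.Size([1, 1, 64, 64])
```

The sketch is only meant to clarify the data flow: per-level bimodal fusion in the encoder, a single global context signal compensating for dilution, and a cascaded decoder that mixes high-level and low-level fused features at each step.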