Previous salient object detection (SOD) methods have mainly focused on favorable illumination conditions while neglecting performance under low-light conditions, which significantly impedes the development of related downstream tasks. In this work, considering that it is impractical to annotate large-scale labels for this task, we present a framework (HDNet) that detects salient objects in low-light images by learning from synthetic images. Our HDNet consists of a foreground highlight sub-network (HNet) and an appearance-aware detection sub-network (DNet), which are learned jointly in an end-to-end manner. Specifically, to highlight the foreground objects, we design HNet to adaptively estimate parameters that adjust the dynamic range of each pixel; it can be trained with only the weak supervision signals provided by the salient object labels. In addition, we design a simple detection network (DNet) with a contextual feature fusion module and a multi-scale feature refinement module for detailed feature fusion and refinement. Furthermore, we contribute the first annotated dataset for salient object detection in low-light images (SOD-LL), comprising 6,000 labeled synthetic images (SOD-LLS) and 2,000 labeled real images (SOD-LLR). Experimental results on SOD-LL and on other low-light videos in the wild demonstrate the effectiveness and generalization ability of our method. Our dataset and code are available at https://github.com/Ylinyuan/HDNet.
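
To make the per-pixel dynamic-range adjustment concrete, the sketch below illustrates one plausible parameterization in NumPy. The abstract does not specify HNet's adjustment formula, so the per-pixel gamma map used here is purely an assumption for illustration, not the paper's actual method; in HDNet the map would be predicted by HNet rather than hand-set.

```python
import numpy as np

def adjust_dynamic_range(image, gamma_map):
    """Per-pixel dynamic-range adjustment (illustrative only).

    image: H x W array of intensities in [0, 1] (low-light input)
    gamma_map: H x W array of per-pixel exponents; in HDNet these
               would be the parameters estimated by HNet (hypothetical
               parameterization -- the adjustment formula is assumed here)
    A gamma below 1 brightens dark pixels, highlighting the foreground.
    """
    return np.clip(image, 1e-6, 1.0) ** gamma_map

# Toy example: brighten a uniformly dark 4x4 image.
img = np.full((4, 4), 0.1)      # dark input
gamma = np.full((4, 4), 0.5)    # gamma < 1 brightens
out = adjust_dynamic_range(img, gamma)
```

Because the exponent is a full map rather than a scalar, each pixel can be brightened by a different amount, which is what lets such a module emphasize foreground objects selectively.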