As object detection techniques advance, detecting large objects has become less challenging; small-object detection, however, remains a significant hurdle. DSOD-YOLO is a lightweight small-object detection network based on YOLOv8, designed to balance detection accuracy with model efficiency. To detect small objects accurately, the network employs a dual-backbone feature-extraction architecture that enhances the extraction of small-object details, addressing the loss of detail in deep models. In addition, a Channel-Scale Adaptive Module (FASD) adaptively selects feature channels and image scales according to the feature information required, mitigating the sparsity of small-object features and the information loss that occurs during feature propagation. To strengthen contextual information and further improve small-object detection, a lightweight Context and Spatial Feature Calibration Network (CSFCN) is integrated. Through its two core modules, Context Feature Calibration (CFC) and Spatial Feature Calibration (SFC), CSFCN performs context correction and spatial feature calibration based on pixel context similarity and channel dimensions, respectively. To reduce model complexity, the network is pruned, yielding lightweight small-object detection. Furthermore, knowledge distillation is employed, with a large model acting as a teacher network to guide DSOD-YOLO, leading to further accuracy improvements. Experimental results demonstrate that DSOD-YOLO outperforms state-of-the-art algorithms such as YOLOv9 and YOLOv10 on multiple small-object datasets. In addition, a new small-object dataset (SmallDark) is constructed for low-light conditions, on which the proposed method also surpasses existing algorithms.
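The abstract mentions knowledge distillation with a large teacher model guiding DSOD-YOLO but does not specify the distillation objective. As a minimal sketch only, assuming the common temperature-scaled KL-divergence formulation of logit distillation (not necessarily the authors' exact loss), the core computation can be illustrated as:

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax: higher T produces softer distributions,
    # exposing the teacher's relative confidences across classes.
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # KL divergence from the softened student distribution to the softened
    # teacher distribution, scaled by T^2 so gradient magnitudes stay
    # comparable across temperatures. This is a generic illustration, not
    # the specific loss used in DSOD-YOLO.
    p = softmax(teacher_logits, temperature)  # teacher "soft targets"
    q = softmax(student_logits, temperature)  # student predictions
    return temperature ** 2 * sum(
        pi * math.log(pi / qi) for pi, qi in zip(p, q)
    )
```

In practice this term is typically combined with the standard detection loss on ground-truth labels, with a weighting factor balancing the two; a matched student and teacher yield a loss near zero, while larger disagreement yields a larger penalty.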