With the increasing complexity of urban traffic, object detection has become critical to autonomous driving and intelligent traffic management, and the demand for real-time, efficient detection systems continues to grow. However, existing algorithms often suffer from large parameter counts and high computational costs, which limit their applicability in resource-constrained environments. To address this issue, we propose L-YOLO, an improved lightweight road object detection algorithm based on YOLOv8s. First, L-HGNetV2 replaces the backbone network of YOLOv8s to improve the efficiency of feature extraction and fusion. Second, a small-object detection layer is added to the feature fusion network, and the original C2f modules are replaced with new CStar modules; this improves the capture of features and contextual information for small vehicle targets without a significant increase in computation. Third, the CIoU loss function is replaced by the FPIoU2 loss function, enhancing the model's robustness. Finally, layer-adaptive magnitude-based pruning (LAMP) is applied to the convolutional layer channels, substantially reducing the computational burden and parameter count while maintaining accuracy, thereby improving operational efficiency. On the public KITTI dataset, L-YOLO achieves an mAP50 of 93.8%, a 2.5% improvement over YOLOv8s, while the parameter count decreases from 11.12 M to 3.58 M and the computational load from 28.4 GFLOPs to 14.2 GFLOPs.
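To make the pruning step more concrete, the sketch below shows LAMP-style scoring adapted to convolutional output channels, assuming a PyTorch model. The channel-norm magnitude proxy, the function names, and the global pruning ratio are illustrative assumptions rather than the authors' exact pipeline; in practice a dependency-aware structured-pruning tool would remove the selected channels, and the network would be fine-tuned afterwards.

```python
import torch
import torch.nn as nn

def lamp_channel_scores(conv: nn.Conv2d) -> torch.Tensor:
    """Per-output-channel LAMP scores for one conv layer.

    Channel 'magnitude' is taken as the squared L2 norm of its filter
    (an illustrative proxy; the original LAMP formulation scores
    individual weights rather than whole channels).
    """
    # Squared norm of each output filter: shape (out_channels,)
    sq = conv.weight.detach().flatten(1).pow(2).sum(dim=1)
    # Sort ascending, then divide each magnitude by the sum of itself and
    # all larger magnitudes in the same layer (the LAMP denominator).
    order = torch.argsort(sq)
    sorted_sq = sq[order]
    tail_sums = torch.flip(torch.cumsum(torch.flip(sorted_sq, [0]), 0), [0])
    scores = torch.empty_like(sq)
    scores[order] = sorted_sq / tail_sums.clamp_min(1e-12)
    return scores

def select_channels_to_prune(model: nn.Module, prune_ratio: float = 0.5):
    """Rank all conv output channels by LAMP score and mark the
    lowest-scoring fraction for removal (one global threshold, so the
    per-layer sparsity adapts automatically)."""
    entries = []  # (score, layer_name, channel_index)
    for name, module in model.named_modules():
        if isinstance(module, nn.Conv2d):
            for idx, s in enumerate(lamp_channel_scores(module)):
                entries.append((s.item(), name, idx))
    entries.sort(key=lambda e: e[0])
    n_prune = int(len(entries) * prune_ratio)
    return entries[:n_prune]  # channels a structured-pruning tool would drop
```

Because every layer's largest-magnitude channel receives a score close to 1, a single global threshold never strips a layer entirely, which is what makes the magnitude criterion "layer-adaptive" without per-layer sparsity tuning.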