Road damage detection is essential for maintaining road traffic safety. Complex image backgrounds commonly lead to missed and false detections; to address this challenge, an improved road damage detection algorithm based on YOLOv8, EADG-YOLO, is proposed. First, an efficient stem module, EIEStem, is introduced to extract edge information of road damage, improving the model's detection performance. Second, a multi-scale fusion attention module is designed for the neck network to suppress complex backgrounds and regions outside the targets of interest, effectively reducing the model's attention to irrelevant information. Meanwhile, the dynamic upsampler DySample is adopted to avoid weakening target feature information and to improve upsampling accuracy. Finally, a lightweight detection head is formulated to reduce the detection head's parameters while achieving effective aggregation of multi-scale information. Experimental results on the RDD2022 dataset show that EADG-YOLO outperforms the other tested models, achieving a mean average precision (mAP@0.5) of 87.3% while reducing computational cost by 7% compared with YOLOv8, making it well suited for deployment in resource-constrained and edge-computing scenarios such as embedded systems and mobile devices. These findings validate that the proposed method significantly enhances the accuracy and efficiency of road damage detection.