After a long period of research and development, 2D image object detection has improved greatly in both efficiency and accuracy, yet handling motion-blurred images remains a significant challenge. Most current deblurring algorithms are too computationally intensive to meet the demands of real-time tasks. To address this problem, we propose a real-time deblurring network (RT-Deblur) based on generative adversarial networks. Specifically, we design a fast Fourier transform residual (FFTRes) block that allows the model to achieve better performance with fewer residual blocks, reducing the parameter count to meet real-time requirements. To improve both deblurring quality and the accuracy of object detection after deblurring, we develop a weighted loss function that combines the discriminator loss, MSE loss, multi-scale frequency reconstruction loss, and a perceptual YOLO loss. To validate the effectiveness of deblurring for object detection, we construct two blurred object detection datasets based on REDS and GoPro. Extensive comparative experiments on object detection before and after deblurring with YOLOv5s show that our network achieves leading mAP scores: RT-Deblur improves mAP from 0.486 to 0.566 and from 0.17 to 0.433 on the two blurred datasets, respectively.
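
The abstract does not specify the internals of the FFTRes block; the following PyTorch sketch shows one plausible realization, assuming a common design in which a frequency-domain branch (2-D FFT, 1x1 convolutions over the stacked real and imaginary parts, inverse FFT) is summed with a spatial convolution branch and an identity skip. The class name `FFTResBlock` and the `channels` parameter are illustrative, not taken from the paper.

```python
import torch
import torch.nn as nn

class FFTResBlock(nn.Module):
    """Hypothetical sketch of an FFT residual (FFTRes) block."""

    def __init__(self, channels: int):
        super().__init__()
        # Spatial branch: standard 3x3 conv residual path
        self.spatial = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        # Frequency branch: 1x1 convs mix the stacked real/imaginary parts
        self.freq = nn.Sequential(
            nn.Conv2d(2 * channels, 2 * channels, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(2 * channels, 2 * channels, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        _, _, h, w = x.shape
        # Frequency branch: FFT -> conv on stacked real/imag -> inverse FFT
        y = torch.fft.rfft2(x, norm="backward")
        y = torch.cat([y.real, y.imag], dim=1)
        y = self.freq(y)
        real, imag = torch.chunk(y, 2, dim=1)
        y = torch.fft.irfft2(torch.complex(real, imag), s=(h, w), norm="backward")
        # Residual sum of identity, spatial branch, and frequency branch
        return x + self.spatial(x) + y
```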
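The composition of the weighted loss can be sketched in the same spirit. Only the four components are named in the abstract; the weight values, the scale pyramid, and all function and argument names below (`rt_deblur_loss`, `disc_fake_logits`, `yolo_feats_restored`, `yolo_feats_sharp`) are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def rt_deblur_loss(restored, sharp, disc_fake_logits,
                   yolo_feats_restored, yolo_feats_sharp,
                   weights=(1.0, 1.0, 0.1, 0.01)):
    """Hypothetical weighted loss combining the four terms named in the abstract.

    Weights and formulations are placeholders, not the paper's values.
    """
    w_mse, w_freq, w_adv, w_yolo = weights

    # Pixel-wise MSE between the restored image and the sharp ground truth
    l_mse = F.mse_loss(restored, sharp)

    # Multi-scale frequency reconstruction: L1 in the FFT domain over a pyramid
    l_freq = 0.0
    for s in (1, 2, 4):
        r = F.avg_pool2d(restored, s) if s > 1 else restored
        g = F.avg_pool2d(sharp, s) if s > 1 else sharp
        l_freq = l_freq + F.l1_loss(
            torch.view_as_real(torch.fft.rfft2(r)),
            torch.view_as_real(torch.fft.rfft2(g)),
        )

    # Generator-side adversarial loss from the discriminator's logits
    l_adv = F.binary_cross_entropy_with_logits(
        disc_fake_logits, torch.ones_like(disc_fake_logits))

    # Perceptual YOLO loss: match detector feature maps of restored vs. sharp
    l_yolo = sum(F.mse_loss(a, b)
                 for a, b in zip(yolo_feats_restored, yolo_feats_sharp))

    return w_mse * l_mse + w_freq * l_freq + w_adv * l_adv + w_yolo * l_yolo
```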