In bad weather, existing deep-learning-based road object detection algorithms often fail to detect road objects properly in fog, rain, or snow. This is largely because well-known training datasets contain few bad-weather images, and little real bad-weather data is available for training. In this paper, we propose a method that synthesizes rain into road images to produce training data for road object detection in bad weather. The proposed method consists of fog synthesis and rain streak generation. The fog synthesis step estimates and corrects the fog transmission value using depth information from a clear image, and reproduces the fog appearance of real road images through Gaussian filtering and temporal filtering. Rain streaks are generated using the Unity3D engine. We trained road object detectors on the synthesized rainy images and on the original rain-free images before synthesis, and compared the two results: training with rain-synthesized images improves the detection rate by up to 53%. Experimental results also show that the proposed method improves vehicle detection performance by about 3.5% on real rainy videos.
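The depth-based fog synthesis described above is commonly built on the standard atmospheric scattering model, I = J·t + A·(1 − t) with t = exp(−β·d). The sketch below illustrates that idea only; it is not the authors' implementation. The function name `synthesize_fog`, the scattering coefficient `beta`, the atmospheric light `airlight`, and the Gaussian smoothing parameter `sigma` are illustrative assumptions, and the abstract's transmission-correction and temporal-filtering details are only roughly approximated here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def synthesize_fog(clear_rgb, depth_m, beta=0.08, airlight=0.9, sigma=2.0):
    """Add synthetic fog to a clear road image (illustrative sketch).

    Uses the atmospheric scattering model I = J*t + A*(1 - t),
    with transmission t = exp(-beta * depth).

    clear_rgb : float array (H, W, 3) in [0, 1], clear (rain/fog-free) image
    depth_m   : float array (H, W), per-pixel depth in meters
    beta      : assumed scattering coefficient (larger -> denser fog)
    airlight  : assumed atmospheric light A (grayish-white)
    sigma     : Gaussian smoothing of the transmission map, standing in
                for the smoothing step mentioned in the abstract
    """
    # Transmission map from depth (Beer-Lambert attenuation).
    t = np.exp(-beta * depth_m)
    # Smooth the transmission map so fog boundaries look soft, not sharp.
    t = np.clip(gaussian_filter(t, sigma=sigma), 0.0, 1.0)[..., np.newaxis]
    # Blend scene radiance with atmospheric light.
    return np.clip(clear_rgb * t + airlight * (1.0 - t), 0.0, 1.0)

def temporal_smooth(prev_t, cur_t, alpha=0.8):
    """Exponential smoothing of transmission maps across video frames,
    a rough stand-in for the temporal filtering mentioned in the abstract."""
    return cur_t if prev_t is None else alpha * prev_t + (1.0 - alpha) * cur_t
```

In a video pipeline, each frame's transmission map would be smoothed temporally before blending, and the Unity3D-rendered rain streaks would then be composited over the fogged frames; the exact compositing used by the authors is not described in the abstract.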