Earthquakes are among the most destructive natural disasters, causing human casualties and economic losses. This paper presents a remote sensing method that produces an earthquake damage map by fusing post-event optical and radar images, using deep learning, a fuzzy inference system, and segmentation. The first part of the paper involves designing a convolutional neural network (CNN) and classifying images at the pixel level with three approaches: (1) classification by the proposed CNN, (2) use of the proposed CNN as a feature extractor, and (3) classification via a support vector machine and a fuzzy inference system. The second part implements the superior approach in three modes: (1) using optical and radar data individually, (2) fusing optical and radar images at the pixel level, and (3) exploiting only the optical image. Decision-level fusion is then performed over the damage maps generated in these three modes. Finally, the segmented image is combined with the pixel-level damage map to generate an object-level damage map. Experimental results for the city of Sarpol-e Zahab in western Iran, which experienced a magnitude 7.3 earthquake on November 12, 2017, showed that applying the fuzzy decision maker to the features extracted by the CNN improves the results compared with classification by the CNN alone. The results also showed that using radar data in conjunction with optical data outperforms using optical data alone, and that decision-level fusion yields further accuracy improvements. Furthermore, in all experiments, object-based methods outperformed pixel-based methods.
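To make the CNN-as-feature-extractor idea mentioned above concrete, the sketch below shows a generic pixel-patch CNN whose pooled features are classified by a support vector machine. It is a minimal illustration only: the layer sizes, patch size, number of input channels (optical plus radar bands), feature dimension, and the use of an RBF-kernel SVM are assumptions for demonstration, not the architecture, training procedure, or fuzzy inference system reported in the paper.

```python
# Illustrative sketch only: a small CNN used as a pixel-wise feature extractor,
# with an SVM trained on the extracted features. All sizes and labels below
# are hypothetical placeholders, not the paper's reported configuration.
import numpy as np
import torch
import torch.nn as nn
from sklearn.svm import SVC


class PatchCNN(nn.Module):
    """Maps an image patch (optical + radar channels) to a feature vector."""

    def __init__(self, in_channels=4, feat_dim=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global pooling -> 32-d vector per patch
        )
        self.fc = nn.Linear(32, feat_dim)

    def forward(self, x):
        z = self.features(x).flatten(1)
        return self.fc(z)


def extract_features(model, patches):
    """patches: (N, C, H, W) tensor of pixel-centered patches."""
    model.eval()
    with torch.no_grad():
        return model(patches).cpu().numpy()


# Hypothetical usage: fit an SVM on CNN features of labeled patches, then
# predict damaged / undamaged labels for new pixel-centered patches.
model = PatchCNN()
train_patches = torch.randn(100, 4, 16, 16)       # placeholder training patches
train_labels = np.random.randint(0, 2, size=100)  # placeholder damage labels
svm = SVC(kernel="rbf").fit(extract_features(model, train_patches), train_labels)

test_patches = torch.randn(10, 4, 16, 16)
pred = svm.predict(extract_features(model, test_patches))
```

In the same spirit, the fuzzy decision maker discussed in the abstract could replace the SVM as the classifier operating on the CNN features; the sketch only illustrates the general two-stage (feature extraction, then separate classification) structure.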