Cloud cover and the long revisit cycles of satellites cause gaps in optical imagery and pose a significant obstacle to the continuity of Earth observation missions. Recently, synthetic aperture radar (SAR)-to-optical image translation (S2OIT) has emerged as an approach to reconstructing the missing information in optical remote sensing images. However, previous studies ignored the difference in imaging mechanisms between SAR and optical data, producing color distortion, image blurriness, and loss of texture detail in the generated optical images. To tackle these challenges, we propose a multitemporal S2OIT network (MTS2ONet) for high-quality optical image generation. The proposed model comprises two subnetworks: a change feature extraction subnetwork (Change_Extractor) and an S2OIT subnetwork (S2O_Translator). The first subnetwork extracts change features from SAR images captured at dates T and T+1 and translates them from the SAR domain to the optical domain. The S2O_Translator then integrates the optical image at date T+1 with the change features extracted by the Change_Extractor to generate the optical image at date T. In addition, we produce a dual-temporal SAR-optical dataset, DTSEN1-2, for model evaluation. Experiments on the DTSEN1-2 dataset show that our method surpasses state-of-the-art (SOTA) methods on peak signal-to-noise ratio (PSNR; 36.0435), structural similarity index measure (SSIM; 0.9896), learned perceptual image patch similarity (LPIPS; 0.0443), and root mean square error (RMSE; 0.0174), and yields preferable visual results. Our dataset and code can be accessed via the following link: https://github.com/hopeupup/MTS2ONet.
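The two-stage dataflow described above can be sketched as follows. `change_extractor` and `s2o_translator` are hypothetical stand-ins for the paper's learned subnetworks: the simple array operations here only illustrate the flow of inputs and outputs, not the actual architecture.

```python
import numpy as np

def change_extractor(sar_t, sar_t1):
    # Placeholder for the Change_Extractor subnetwork: extract change
    # features from the two SAR acquisitions. The real model is a learned
    # SAR-to-optical-domain translator; here we take a signed difference.
    return sar_t - sar_t1

def s2o_translator(optical_t1, change_features):
    # Placeholder for the S2O_Translator subnetwork: fuse the known
    # optical image at date T+1 with SAR-derived change features to
    # reconstruct the optical image at date T.
    return np.clip(optical_t1 + change_features, 0.0, 1.0)

# Toy inputs: single-channel 4x4 "images" with values in [0, 1]
rng = np.random.default_rng(0)
sar_t = rng.random((4, 4))
sar_t1 = rng.random((4, 4))
optical_t1 = rng.random((4, 4))

features = change_extractor(sar_t, sar_t1)
optical_t = s2o_translator(optical_t1, features)
print(optical_t.shape)  # (4, 4)
```

The key design point is that the generator never has to hallucinate a full scene: it only translates the *change* between the two SAR dates and applies it to a real optical reference.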
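The reported PSNR and RMSE are standard reconstruction metrics; a minimal sketch of how they are computed and how they relate, assuming images normalized to [0, 1]:

```python
import numpy as np

def rmse(pred, target):
    # Root mean square error over all pixels; lower is better
    return np.sqrt(np.mean((pred - target) ** 2))

def psnr(pred, target, max_val=1.0):
    # Peak signal-to-noise ratio in dB; higher is better
    mse = np.mean((pred - target) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

# Toy example: a "prediction" that is the target plus mild noise
rng = np.random.default_rng(0)
target = rng.random((64, 64, 3))
pred = np.clip(target + rng.normal(0.0, 0.02, target.shape), 0.0, 1.0)

print(f"RMSE: {rmse(pred, target):.4f}")
print(f"PSNR: {psnr(pred, target):.2f} dB")
```

For max_val = 1, the two are linked by PSNR = 20·log10(1/RMSE), so the reported RMSE of 0.0174 and PSNR of 36.0435 dB are consistent measures of the same pixel-wise error; SSIM and LPIPS additionally capture structural and perceptual similarity.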