In the real world, completely static scenes do not exist. Monocular depth estimation in dynamic scenes refers to recovering the depth of both the dynamic foreground and the static background from a single image; compared with traditional stereo estimation methods, it offers advantages in flexibility and cost-effectiveness. It is of strong research relevance with broad development prospects, and it plays a key role in downstream tasks such as 3D reconstruction and autonomous driving. With the rapid development of deep learning, self-supervised learning, which requires no ground-truth depth labels, has attracted considerable attention. Scholars worldwide have proposed a series of self-supervised monocular depth estimation algorithms for handling dynamic objects in scenes, laying a research foundation for related fields. However, a comprehensive analysis of these methods has yet to be conducted. To address this gap, this study systematically reviews and summarizes the progress of deep-learning-based self-supervised monocular depth estimation in dynamic scenes. First, the basic models of self-supervised monocular depth estimation based on deep learning are summarized, how self-supervised constraints are applied between images is analyzed and explained, and a basic framework diagram of self-supervised monocular depth estimation based on consecutive frames is presented. The effect of dynamic objects on images is explained from four aspects: epipolar lines, triangulation, fundamental matrix estimation, and reprojection error. Second, commonly used datasets and evaluation metrics for monocular depth estimation research are introduced. The KITTI and Cityscapes datasets provide continuous outdoor image sequences, while the NYU Depth V2 dataset provides indoor dynamic scenes; these are generally used for model training.
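The self-supervised constraint mentioned above is typically a photometric reprojection error between consecutive frames: each target pixel is back-projected with the predicted depth, transformed by the predicted camera motion, and projected into the source view, where the warped image should match the target in static regions. A minimal NumPy sketch of this idea follows; the function names, the intrinsics `K`, and the relative pose `T_t_to_s` are illustrative assumptions, not from the surveyed papers.

```python
import numpy as np

def reproject(depth_t, K, T_t_to_s):
    """Back-project target pixels to 3D with the predicted depth, apply the
    predicted relative pose, and project into the source view.
    Returns (u, v) sampling coordinates in the source image, shape (2, h, w).
    (Hypothetical sketch: K is the 3x3 intrinsic matrix, T_t_to_s a 4x4 pose.)"""
    h, w = depth_t.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=0).reshape(3, -1)  # homogeneous pixels
    cam = (np.linalg.inv(K) @ pix) * depth_t.reshape(1, -1)         # 3D points, target frame
    cam_h = np.vstack([cam, np.ones((1, cam.shape[1]))])
    src = K @ (T_t_to_s @ cam_h)[:3]                                # project into source view
    return (src[:2] / src[2:]).reshape(2, h, w)

def photometric_loss(I_t, I_s_warped):
    """L1 photometric reprojection error. Static regions should match after
    warping; moving objects violate the rigid-scene assumption and leave
    large residuals -- the root cause of the dynamic-object problem."""
    return float(np.mean(np.abs(I_t - I_s_warped)))
```

With an identity relative pose the warp returns the original pixel grid, so the residual of a truly static, motionless scene is zero; any nonzero residual there comes from object motion, occlusion, or depth/pose error.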
The Make3D dataset has depth labels but discontinuous images and is generally used to test the generalization ability of a model. Algorithms are quantitatively analyzed using root mean square error (RMSE), logarithmic root mean square error (RMSE log), absolute relative error (Abs Rel), squared relative error (Sq Rel), and accuracy (Acc), and the performance of classic monocular depth estimation models in dynamic scenes is compared and analyzed. Then, on the basis of how dynamic objects are handled, two research directions are summarized and analyzed: robust depth estimation in dynamic scenes, and dynamic object tracking and depth estimation. In the first, dynamic objects are extracted and treated as outliers during model training to minimize their effect, so that the model learns solely from static background information. In the second, the dynamic foreground and static background are accurately distinguished and the two regions are processed separately. Various algorithms that detect and segment dynamic objects based on optical flow, semantic information, and other cues while estimating their motion are explained, and the advantages and disadvantages of each type of algorithm are summarized and analyzed on the basis of commonly used evaluation criteria. Finally, future directions for monocular depth estimation in dynamic scenes are discussed in terms of network model optimization, online learning and generalization, real-time operation on embedded devices, and domain adaptation of self-supervised learning. © 2024 Science Press. All rights reserved.
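The five metrics named above have standard definitions in the monocular depth literature. A minimal NumPy sketch, assuming `gt` and `pred` are arrays of valid (positive) ground-truth and predicted depths of the same shape; the threshold 1.25 for Acc is the commonly used delta < 1.25 variant:

```python
import numpy as np

def depth_metrics(gt, pred):
    """Standard monocular depth evaluation metrics over valid depth pairs."""
    thresh = np.maximum(gt / pred, pred / gt)
    acc = float((thresh < 1.25).mean())                        # Acc: delta < 1.25
    rmse = float(np.sqrt(((gt - pred) ** 2).mean()))           # RMSE
    rmse_log = float(np.sqrt(((np.log(gt) - np.log(pred)) ** 2).mean()))  # RMSE log
    abs_rel = float((np.abs(gt - pred) / gt).mean())           # Abs Rel
    sq_rel = float((((gt - pred) ** 2) / gt).mean())           # Sq Rel
    return {"rmse": rmse, "rmse_log": rmse_log,
            "abs_rel": abs_rel, "sq_rel": sq_rel, "acc": acc}
```

Error metrics (RMSE, RMSE log, Abs Rel, Sq Rel) are lower-is-better; Acc is higher-is-better, which is why comparison tables in this area report the two groups separately.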