Recently, Diffusion Models (DMs) have achieved remarkable success in image restoration tasks. However, DMs are neither flexible nor adaptive when dealing with uncertain, multiple forms of image degradation (e.g., noise, blur, and so on) because they lack degradation priors, which results in undesirable boundary artifacts. In addition, DMs require a large number of inference iterations to restore a clean image, consuming massive computational resources. To address the aforementioned limitations, we propose an adaptive, unified two-stage restoration method based on a latent diffusion model, termed APDiff, which can effectively and adaptively handle real-world images with various degradation types. Specifically, in Stage I, we pre-train a Degradation Adaptive Prompt Learning Network (DAPLNet-S1) to adaptively obtain a degradation prompt by exploring the differences between low-quality (LQ) and ground-truth (GT) images. The prompt is then encoded into the latent space as key discriminative information for different degraded images. In Stage II, we propose a latent diffusion model that directly estimates a degradation prompt, consistent with that of the pre-trained DAPLNet-S1, using only LQ images. Meanwhile, to restore images with different degradations effectively, we design a Prompt-Guided Fourier Transformer Restorer that integrates the extracted prompt, enhancing the model's ability to characterize both global frequency features and local spatial information. Since the generated prompts are low-dimensional latent vector representations, the computational complexity of the diffusion model is significantly reduced. Consequently, during inference, our method takes only 0.09 s to restore an image from SPA+. Extensive experiments demonstrate that APDiff achieves state-of-the-art performance on multi-degradation restoration tasks.
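To make the two-stage pipeline concrete, the following is a minimal PyTorch sketch of the Stage II inference flow described above: a lightweight latent refiner stands in for the diffusion-based prompt estimator, and a FiLM-style modulated restorer with a global FFT branch loosely echoes the Prompt-Guided Fourier Transformer Restorer. All class names, layer choices, and hyperparameters (PromptDiffusionEstimator, PromptGuidedRestorer, prompt_dim, num_steps) are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn


class PromptDiffusionEstimator(nn.Module):
    """Hypothetical Stage-II module: iteratively refines a random latent into a
    low-dimensional degradation prompt, conditioned on LQ-image features.
    A crude stand-in for the paper's latent diffusion prompt estimator
    (not a full DDPM; no noise schedule is modeled here)."""

    def __init__(self, prompt_dim=64, num_steps=4):
        super().__init__()
        self.prompt_dim = prompt_dim
        self.num_steps = num_steps
        # Globally pooled LQ features condition each refinement step.
        self.cond_encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.GELU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.GELU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.denoiser = nn.Sequential(
            nn.Linear(prompt_dim + 64, 128), nn.GELU(),
            nn.Linear(128, prompt_dim),
        )

    def forward(self, lq):
        cond = self.cond_encoder(lq)  # (B, 64)
        z = torch.randn(lq.size(0), self.prompt_dim, device=lq.device)
        # Refinement happens in a low-dimensional latent space, which is why
        # the prompt-generation cost stays small relative to image-space DMs.
        for _ in range(self.num_steps):
            z = z - self.denoiser(torch.cat([z, cond], dim=1))
        return z


class PromptGuidedRestorer(nn.Module):
    """Hypothetical restorer: injects the degradation prompt via per-channel
    scale/shift modulation and mixes in a global frequency branch via FFT,
    loosely echoing the prompt-guided Fourier design in the abstract."""

    def __init__(self, prompt_dim=64, channels=32):
        super().__init__()
        self.head = nn.Conv2d(3, channels, 3, padding=1)
        self.to_mod = nn.Linear(prompt_dim, 2 * channels)  # FiLM-style params
        self.freq_mix = nn.Conv2d(2 * channels, channels, 1)
        self.tail = nn.Conv2d(channels, 3, 3, padding=1)

    def forward(self, lq, prompt):
        feat = self.head(lq)
        scale, shift = self.to_mod(prompt).chunk(2, dim=1)
        feat = feat * (1 + scale[..., None, None]) + shift[..., None, None]
        # Global frequency branch: the FFT magnitude carries degradation cues
        # (e.g., blur suppresses high frequencies, noise lifts them).
        freq = torch.fft.fft2(feat, norm="ortho").abs()
        feat = self.freq_mix(torch.cat([feat, freq], dim=1))
        return lq + self.tail(feat)  # residual restoration


# Usage: Stage II needs only the LQ image, mirroring the inference setting.
estimator = PromptDiffusionEstimator()
restorer = PromptGuidedRestorer()
lq = torch.rand(1, 3, 128, 128)  # dummy low-quality input
with torch.no_grad():
    prompt = estimator(lq)            # estimate degradation prompt from LQ only
    restored = restorer(lq, prompt)   # prompt-guided restoration
print(restored.shape)  # torch.Size([1, 3, 128, 128])
```

The sketch is meant only to show why prompt estimation is cheap: the diffusion-style refinement operates on a 64-dimensional vector rather than a full-resolution image, while the restorer consumes the prompt once as conditioning.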