In recent years, diffusion probabilistic models have become a central topic in computer vision. Text-to-image models such as Imagen, Latent Diffusion Models, and Stable Diffusion have demonstrated remarkable generative capability and attracted considerable attention; however, they often lack the ability to precisely edit real-world images. In this paper, we propose a novel ControlNet-based image editing framework that modifies real images conditioned on pose maps, scribble maps, and other control signals, without any training or fine-tuning. Given a guiding image as input, we first edit the initial noise generated from the guiding image to influence the generation process. Features extracted from the guiding image are then injected directly into the generation process of the translated image. In addition, we construct a classifier guidance term based on the strong correspondences between intermediate features of the ControlNet branches, and the editing signals are converted into gradients that guide the sampling direction. Finally, we demonstrate high-quality results of the proposed method on image editing tasks.
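
The following is a minimal sketch of the classifier-guidance idea described above, assuming a PyTorch setting. Here `feature_fn` is a hypothetical stand-in for an intermediate ControlNet branch, and the MSE correspondence loss, step size, and tensor shapes are illustrative assumptions rather than the paper's exact formulation; the intent is only to show how a feature-correspondence editing signal can be turned into a gradient on the latent that steers the sampling direction.

```python
import torch
import torch.nn.functional as F


def guidance_gradient(latent, guide_latent, feature_fn, scale=1.0):
    """Convert a feature-correspondence editing signal into a gradient on the latent.

    `feature_fn` is a placeholder for an intermediate ControlNet branch: it maps a
    latent to a feature map. The loss encourages the features of the image being
    edited to stay close to the corresponding features of the guiding image.
    """
    latent = latent.detach().requires_grad_(True)
    feats_edit = feature_fn(latent)                     # features of the edited image
    with torch.no_grad():
        feats_guide = feature_fn(guide_latent)          # features of the guiding image
    loss = F.mse_loss(feats_edit, feats_guide)          # correspondence loss (assumed form)
    grad, = torch.autograd.grad(loss, latent)           # editing signal -> gradient
    return scale * grad


# Toy usage: a random convolution stands in for the feature extractor.
if __name__ == "__main__":
    feature_fn = torch.nn.Conv2d(4, 8, kernel_size=3, padding=1)
    latent = torch.randn(1, 4, 64, 64)
    guide_latent = torch.randn(1, 4, 64, 64)
    grad = guidance_gradient(latent, guide_latent, feature_fn)
    latent = latent - 0.1 * grad                        # nudge the sampling direction
    print(grad.shape)
```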