The high cost of manual annotation for medical images leads to an extreme scarcity of annotated samples for image segmentation. Moreover, the scales of target regions in medical images vary widely, and local features such as the texture and contours of some targets (e.g., skin lesions and polyps) are often poorly distinguished. To address these problems, this paper proposes a novel semi-supervised medical image segmentation method based on multi-scale knowledge discovery and multi-task ensemble, incorporating two key improvements. First, to detect targets of various scales and attend to local information, a multi-scale knowledge discovery (MSKD) framework is introduced that discovers multi-scale semantic features and dense spatial detail features from cross-level inputs (the full image and its patches). Second, integrating the ideas of multi-task learning and ensemble learning, the method leverages the knowledge discovered by MSKD to perform three subtasks: semantic constraints on the target regions, reliability learning for unlabeled data, and representation learning for local features. Each subtask is treated as a weak learner that focuses on learning features unique to its task, and through the three-task ensemble the model achieves multi-task feature sharing. Finally, comparative experiments on skin lesion, polyp, and multi-object cell nucleus segmentation datasets demonstrate the superior segmentation accuracy and robustness of the proposed method.
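The cross-level (image and patches) input scheme mentioned above can be sketched in a few lines. This is a minimal illustration only: the function names, the non-overlapping square-patch scheme, and the dictionary pairing are assumptions for exposition, not the paper's actual MSKD implementation.

```python
def extract_patches(image, patch_size):
    """Split a 2D image (a list of equal-length rows) into
    non-overlapping square patches, scanned row-major."""
    h = len(image)
    patches = []
    for top in range(0, h, patch_size):
        for left in range(0, len(image[0]), patch_size):
            patch = [row[left:left + patch_size]
                     for row in image[top:top + patch_size]]
            patches.append(patch)
    return patches

def cross_level_inputs(image, patch_size):
    """Pair the full image with its patches, mirroring the idea of
    feeding both image-level and patch-level views to the model."""
    return {"image": image, "patches": extract_patches(image, patch_size)}

# Toy 4x4 "image" split into 2x2 patches -> 4 patches
image = [[r * 4 + c for c in range(4)] for r in range(4)]
inputs = cross_level_inputs(image, 2)
```

In the method itself, the image-level view would supply multi-scale semantic features while the patch-level view supplies dense spatial detail; the sketch only shows how the two levels of input are formed.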