In modern healthcare, the precision of medical image segmentation holds immense significance for diagnosis and treatment planning. Deep learning techniques such as CNNs, U-Nets, and Transformers have revolutionized this field by automating previously labor-intensive manual segmentation. However, challenges such as intricate structures and indistinct features persist and degrade accuracy, and researchers continue to address them to unlock the full potential of medical image segmentation in transforming healthcare. To enhance the precision of brain tumor MRI image segmentation, our study introduces three novel feature-enhanced hybrid U-Net models (FE-HU-NET): FE1-HU-NET, FE2-HU-NET, and FE3-HU-NET. Our approach encompasses three main aspects. First, we emphasize feature enhancement during image preprocessing, applying a distinct image enhancement technique (CLAHE, MHE, or MBOBHE) to each model. Second, we tailor the architecture of the U-Net model through a personalized layered design to improve segmentation results. Third, we employ a CNN model in post-processing to refine segmentation outcomes through additional convolutional layers. The HU-Net module, shared across the three models, integrates a customized U-Net layer and a CNN. We also introduce an alternative feature-enhanced variant, FE4-HU-NET, built on the DeepLabv3 model; incorporating CLAHE for image enhancement and bolstered by CNN layers, this variant offers a distinct approach. Rigorous experimentation shows that the proposed framework distinguishes complex brain tissues more effectively than current state-of-the-art models, achieving accuracy rates exceeding 99% on two publicly available datasets. Performance metrics such as the Jaccard index, sensitivity, and specificity further substantiate the effectiveness of our hybrid U-Net models.

(c) 2023 The Author(s). Published by Elsevier B.V. on behalf of King Saud University. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
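The CLAHE preprocessing step mentioned above can be illustrated with a minimal sketch. This is a simplified, pure-NumPy illustration of contrast-limited adaptive histogram equalization: it performs per-tile clipped histogram equalization but omits the inter-tile bilinear interpolation of full CLAHE, so tile seams may be visible. The function name, tile grid, and clip limit are assumptions for demonstration, not the paper's actual preprocessing pipeline.

```python
import numpy as np

def clahe_simplified(img, clip_limit=0.01, tiles=(8, 8), nbins=256):
    """Simplified CLAHE for an 8-bit grayscale image.

    Each tile's histogram is clipped at `clip_limit` (a fraction of the
    tile's pixel count), the clipped excess is redistributed uniformly,
    and the tile is remapped through the resulting CDF. Assumes the
    image dimensions are divisible by the tile grid.
    """
    out = np.empty_like(img, dtype=np.uint8)
    h, w = img.shape
    th, tw = h // tiles[0], w // tiles[1]
    clip = clip_limit * th * tw  # maximum counts allowed per histogram bin
    for i in range(tiles[0]):
        for j in range(tiles[1]):
            tile = img[i * th:(i + 1) * th, j * tw:(j + 1) * tw]
            hist, _ = np.histogram(tile, bins=nbins, range=(0, 256))
            excess = np.maximum(hist - clip, 0).sum()
            hist = np.minimum(hist, clip) + excess / nbins
            cdf = hist.cumsum()
            cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min() + 1e-9) * 255
            out[i * th:(i + 1) * th, j * tw:(j + 1) * tw] = \
                cdf[tile].astype(np.uint8)
    return out

# Demo on a synthetic low-contrast "slice" (values confined to 90-150).
rng = np.random.default_rng(0)
slice_u8 = (rng.random((128, 128)) * 60 + 90).astype(np.uint8)
enhanced = clahe_simplified(slice_u8)
```

Full CLAHE implementations (e.g. OpenCV's `cv2.createCLAHE` or scikit-image's `exposure.equalize_adapthist`) additionally interpolate between neighboring tile mappings to avoid blocking artifacts.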