Transformers have been widely studied for medical image segmentation. However, owing to the scarcity of high-quality annotated medical image data and constraints on computational efficiency, Transformer models struggle to extract diverse global features and are prone to attention collapse. This paper therefore proposes MLCA-UNet, a lightweight medical image segmentation network that integrates multiscale linear attention and convolutional attention. The network consists of an encoding layer, a convolutional attention layer, and a decoding layer. First, to enrich the diversity of medical image features and improve the representation of fine segmentation details, a multiscale linear attention module is designed to capture features at different receptive fields. Second, to counteract the collapse that Transformers exhibit when learning attention, a convolutional attention module is designed to realize self-attention while diversifying features. Finally, to verify the effectiveness of the proposed method, experiments were conducted on the ACDC cardiac dataset, the ISIC skin lesion dataset, the BUSI breast ultrasound dataset, and the TNSCUI thyroid nodule ultrasound dataset. The results show that MLCA-UNet outperforms existing mainstream segmentation networks: it achieves the best Dice similarity coefficient (Dice) on the ACDC and ISIC datasets, 92.36% and 90.81% respectively, and the highest Intersection over Union (IoU) on the BUSI and TNSCUI datasets, 73.27% and 77.52% respectively. MLCA-UNet attains this accuracy with better inference efficiency and a smaller parameter count, striking a favorable balance between model size and accuracy.
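The abstract does not specify the internals of the two modules; the following is a minimal NumPy sketch of the general ideas they build on, under stated assumptions: a softmax-free linear attention of the form φ(Q)(φ(K)ᵀV) with an ELU+1 feature map (a common formulation, not necessarily the paper's), and a multiscale branch that pools over several window sizes as a 1-D stand-in for parallel depthwise convolutions with different receptive fields. All names here are illustrative.

```python
import numpy as np

def elu_plus_one(x):
    # Positive kernel feature map: ELU(x) + 1 > 0, so attention weights stay positive.
    return np.where(x > 0, x + 1.0, np.exp(x))

def linear_attention(q, k, v):
    """Softmax-free attention phi(Q) (phi(K)^T V), O(N) in sequence length N.

    q, k, v: (N, d) arrays. Hypothetical single-head sketch, not the paper's module.
    """
    qp, kp = elu_plus_one(q), elu_plus_one(k)
    kv = kp.T @ v                      # (d, d) summary, size independent of N
    z = qp @ kp.sum(axis=0)            # (N,) per-token normalizer
    return (qp @ kv) / z[:, None]      # each row is a convex combination of rows of v

def multiscale_branch(x, kernel_sizes=(3, 5, 7)):
    """Average features over windows of several sizes and combine them --
    a 1-D stand-in for parallel depthwise convolutions with different
    receptive fields (the multiscale idea, not the paper's exact design)."""
    n, _ = x.shape
    out = np.zeros_like(x)
    for ks in kernel_sizes:
        pad = ks // 2
        xp = np.pad(x, ((pad, pad), (0, 0)), mode="edge")
        out += np.stack([xp[i:i + ks].mean(axis=0) for i in range(n)])
    return out / len(kernel_sizes)

rng = np.random.default_rng(0)
x = rng.standard_normal((16, 8))       # 16 tokens, 8 channels
y = linear_attention(x, x, x) + multiscale_branch(x)
print(y.shape)                         # (16, 8)
```

Because the feature map is positive, each attention output row is a convex combination of the value rows, which avoids the unbounded logits of softmax attention while keeping the cost linear in the number of tokens.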