Deep learning methods hold great promise for disease diagnosis as medical big data continues to grow. However, deep learning models often contain millions of parameters and must be trained on large, diverse medical datasets to reach the accuracy required for clinical applications. The cross-domain nature, decentralization, and privacy constraints of medical data have limited progress in this field. Federated learning (FL) addresses these challenges by sharing a model through the exchange of model parameters between clients and a server, rather than the raw data. In the medical setting, however, data quality can differ substantially across institutions, producing imbalances in data volume and labeling that can significantly degrade model performance. Traditional FL approaches typically aggregate parameters by simple averaging or weighted averaging, ignoring the Non-IID (non-independent and identically distributed) nature of client data. In this paper, we propose Feddaw, a novel FL approach designed for the non-IID distribution of medical data. Feddaw reduces the negative impact of label distribution shift by constraining the probability weighting factor of the CNN classification layer during client-side local training. In addition, the server evaluates the accuracy of each client model in every round and aggregates the models with accuracy-based weights to counteract the effect of differing sample shifts across clients. Experimental results show that Feddaw outperforms traditional FL methods in medical disease diagnosis.
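The server-side accuracy-based weight aggregation described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes each client returns its model parameters as a list of arrays, and that the server has already measured each client's validation accuracy for the round; all function and variable names are hypothetical.

```python
import numpy as np

def accuracy_weighted_aggregate(client_params, client_accuracies):
    """Aggregate client model parameters, weighting each client by its
    server-measured validation accuracy for the current round.

    client_params: list (one entry per client) of lists of np.ndarray,
        where each inner list holds that client's layer parameters.
    client_accuracies: list of floats in [0, 1], one per client.
    Returns a list of np.ndarray with the aggregated layer parameters.
    """
    accs = np.asarray(client_accuracies, dtype=float)
    weights = accs / accs.sum()  # higher-accuracy clients get larger weight
    aggregated = []
    for layer_idx in range(len(client_params[0])):
        # Stack the same layer from every client: shape (n_clients, ...)
        stacked = np.stack([p[layer_idx] for p in client_params])
        # Weighted average across the client axis
        aggregated.append(np.tensordot(weights, stacked, axes=1))
    return aggregated
```

Compared with plain FedAvg-style averaging, this weighting lets clients whose local updates generalize better (as judged on the server) contribute more to the global model, which is the intended counterweight to skewed client data.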