Face recognition (FR) technology involves an inherent interplay among accuracy, privacy, and fairness. To investigate this interplay, we present a deep learning FR model trained on the racially balanced BUPT-Balancedface dataset and incorporate differential privacy (DP) into a private variant of the model to ensure data confidentiality while keeping fairness in focus. We analyze the verification accuracy of the private models (under different privacy budgets) and the non-private model on a variety of benchmark FR datasets. Our results show that the non-private model achieves accuracy comparable to current state-of-the-art models. The private model exhibits trade-offs between privacy and accuracy and between privacy and fairness: strengthening privacy tends to reduce both accuracy and fairness. Our findings indicate that DP reduces accuracy unevenly across demographic groups and suggest that tuning the privacy budget allows a better balance among privacy, accuracy, and fairness. Furthermore, we extend our experiments to reflect real-world bias by training the private model on the imbalanced CASIA-WebFace dataset, where accuracy variability and fairness disparities are amplified, demonstrating the impact of dataset composition on the interplay between privacy, accuracy, and fairness. Additionally, we show that traditional membership inference attacks (MIAs) can compromise privacy in FR systems, and we introduce a more realistic, identity-based MIA (I-MIA) tailored specifically to FR. Our analysis demonstrates that DP significantly mitigates the privacy risks posed by both traditional MIAs and the proposed I-MIA.
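
To make the setup concrete, the following is a minimal sketch of DP training followed by a traditional membership inference test. It assumes DP-SGD via the Opacus library; the tiny CNN, random stand-in tensors, and the epsilon/delta values are illustrative placeholders, not the paper's actual architecture or hyperparameters, and the loss-threshold attack (in the style of Yeom et al.) stands in for traditional MIAs, not the proposed I-MIA, whose construction is not specified here.

```python
# Minimal sketch: DP-SGD training (Opacus) + loss-threshold MIA.
# All names, shapes, and budgets below are illustrative assumptions.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine

NUM_IDS = 10  # placeholder identity count; a real run would use BUPT-Balancedface

# Placeholder embedding network with an identity-classification head.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, NUM_IDS),
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.CrossEntropyLoss()

train_x = torch.randn(64, 3, 32, 32)         # stand-in face crops
train_y = torch.randint(0, NUM_IDS, (64,))   # stand-in identity labels
train_loader = DataLoader(TensorDataset(train_x, train_y), batch_size=16)

# Attach DP-SGD: clips per-sample gradients and adds calibrated noise so the
# full training run satisfies (epsilon, delta)-DP for the chosen budget.
privacy_engine = PrivacyEngine()
model, optimizer, train_loader = privacy_engine.make_private_with_epsilon(
    module=model,
    optimizer=optimizer,
    data_loader=train_loader,
    target_epsilon=8.0,   # privacy budget; smaller = stronger privacy
    target_delta=1e-5,
    epochs=1,
    max_grad_norm=1.0,    # per-sample gradient clipping bound
)

for x, y in train_loader:
    optimizer.zero_grad()
    criterion(model(x), y).backward()
    optimizer.step()

# Loss-threshold MIA: flag a sample as a training-set member if its loss is
# low; DP should shrink the loss gap between members and non-members,
# weakening the attack.
@torch.no_grad()
def member_scores(model, x, y):
    losses = nn.functional.cross_entropy(model(x), y, reduction="none")
    return -losses  # higher score = more likely a member

member_s = member_scores(model, train_x, train_y)
nonmember_s = member_scores(model, torch.randn(64, 3, 32, 32),
                            torch.randint(0, NUM_IDS, (64,)))
threshold = torch.cat([member_s, nonmember_s]).median()
attack_acc = 0.5 * ((member_s > threshold).float().mean()
                    + (nonmember_s <= threshold).float().mean())
print(f"MIA balanced accuracy: {attack_acc:.2f}  (~0.5 means privacy holds)")
```

Varying `target_epsilon` in this sketch mirrors the privacy-budget sweep described above: smaller budgets inject more noise, which suppresses the attack's balanced accuracy toward chance but also degrades verification accuracy, and unevenly so across demographic groups.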