The literature on the use of machine learning (ML) models for the estimation of real estate prices is growing rapidly. However, the black-box nature of the proposed models hinders their adoption by market players such as appraisers, assessors, mortgage lenders, fund managers, real estate agents, or investors. Explaining the outputs of these ML models can thus foster their adoption by domain experts. However, very few studies in the literature focus on exploiting the transparency of eXplainable Artificial Intelligence (XAI) approaches in this context. This paper fills this research gap and presents an experiment on the French real estate market using ML models coupled with Shapley values to explain their predictions. The dataset used contains 1,505,033 transactions spanning seven years across nine major French cities. All the processing steps for preparing the data and for building and explaining the ML models are presented transparently. At the global level, beyond the predictive capacity of the models, the results show the similarities and differences among these nine real estate submarkets in terms of the most important predictors of property prices (e.g., living area, land area, location variables, number of dwellings in a condominium), trends over the years, the differences between the apartment and house markets, and the impact of sales before completion. At the local level, the results show how one can easily interpret and evaluate the contribution of each feature value to any single prediction, thereby providing essential support for understanding and adoption by domain experts. The results are discussed with respect to the existing literature in the real estate field, and several future research avenues are proposed.
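
The following is a minimal sketch of the global and local Shapley-value explanation workflow summarized above, using the open-source shap library with a tree-based regressor. The French transaction data and the paper's exact models are not reproduced here; the public California housing dataset and the GradientBoostingRegressor serve only as illustrative stand-ins.

```python
import shap
from sklearn.datasets import fetch_california_housing
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# Stand-in tabular housing data; the paper's own dataset of French
# transactions is not reproduced here.
X, y = fetch_california_housing(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# Tree-based price model (the paper's exact model choice may differ).
model = GradientBoostingRegressor().fit(X_train, y_train)

# Shapley values: one additive feature contribution per prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)

# Global level: rank features by their mean absolute contribution
# across all predictions.
shap.summary_plot(shap_values, X_test)

# Local level: decompose a single predicted price into per-feature
# contributions around the expected (average) prediction.
shap.force_plot(explainer.expected_value, shap_values[0, :], X_test.iloc[0, :])
```

In this setup, the summary plot corresponds to the global comparison of submarket-level price drivers, while the force plot corresponds to the local, per-transaction explanations discussed above.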