As social media platforms grapple with the proliferation of misinformation, flagging systems have emerged as vital tools that alert users to potential falsehoods while preserving free speech. The efficacy of these systems hinges on how users interpret and react to the flags. This study examines the influence of warning flags on user perceptions, assessing their effect on the perceived accuracy of information, the propensity to share content, and users' trust in the warnings, especially when supplemented with fact-checking explanations. In a within-subjects experiment involving 348 American participants, we simulated a social media feed presenting a series of true and false COVID-19-related headlines under three conditions: with flags, with flags and explanatory text, and without any intervention. Explanatory content was derived from fact-checking sites linked to the news items. Our findings indicate that false news is perceived as less accurate when flagged or accompanied by explanatory text. The presence of explanatory text was also associated with greater trust in the flags. Notably, participants with high levels of neuroticism and a deliberative cognitive style showed greater trust in warning flags accompanied by explanatory text. Conversely, participants with conservative leanings exhibited distrust towards social media flagging systems. These results underscore the importance of clear explanations within flagging mechanisms and support a user-centric model for their design, emphasising transparency and engagement as essential to counteracting misinformation on social media.