Aim: Advanced Large Language Models (LLMs), such as ChatGPT, are known for their human-like expression and reasoning abilities and are used in many fields, including radiology. This study is, to our knowledge, the first to evaluate and compare the effectiveness of LLMs in simplifying Magnetic Resonance Imaging (MRI) findings in Turkish. Material and Methods: We simplified 50 fictional MRI findings in Turkish using different LLMs, including ChatGPT-4, Gemini 1.5 Pro, Claude 3 Opus, and Perplexity. We compared the responses using Ateşman's readability index and word count. Additionally, three radiologists assessed the medical accuracy, consistency of suggestions, and comprehensibility of the answers, scoring each model on a scale of 1 to 5. Results: There was no statistically significant difference between the scores of Gemini 1.5 Pro (mean: 4.9; median: 5.0), Claude 3 Opus (mean: 4.8; median: 5.0), and ChatGPT-4 (mean: 4.8; median: 5.0) (p>0.05). However, there was a significant difference between the scores of Gemini 1.5 Pro and Perplexity (mean: 3.7; median: 4.0) (p<0.001). According to the readability index, Gemini 1.5 Pro had the highest mean score (59.3), significantly higher than the other LLMs (p<0.005). In terms of word count, ChatGPT-4 used the most words (151.5), while Perplexity used the fewest (88.4). Discussion: This study is the first to evaluate the ability of LLMs to simplify MRI findings in Turkish. The results suggest that radiologists find these models effective in making radiology reports more understandable. However, additional research is necessary to confirm these findings.