With the surge in data volume, big data processing faces unprecedented challenges, and large models have become a major research focus due to their powerful data processing capabilities. This paper examines the performance bottlenecks of large models in big data processing and proposes a series of performance optimization strategies. Drawing on a review of existing data processing technologies and large model architectures, together with optimization theory and practice, this study introduces a comprehensive optimization mechanism spanning resource scheduling, model computational efficiency, and storage and I/O. Experiments were conducted in a cloud computing environment to validate these strategies. The results indicate that the optimization strategies significantly enhanced performance across data of different scales, improved load balancing and resource utilization, and increased system stability. This research enriches the theoretical study of big data processing and provides effective optimization avenues for the practical application of large models in fields such as data mining and parallel computing. It also offers guidance for feature engineering and data preprocessing and points to directions for future research.