The Internet of Vehicles (IoV) has gained prominence with advances in intelligent connected-vehicle technologies, generating vast amounts of personal data through onboard sensors and communication devices. Traditional approaches that transfer raw data for centralized processing risk compromising user privacy and struggle to meet real-time requirements. This paper introduces a fully homomorphic encryption (FHE) model optimized for IoV federated learning (FL), addressing the slow processing speeds inherent in existing FHE techniques. By training models on local data and leveraging GPU acceleration, the proposed framework reduces communication overhead and speeds up homomorphic operations. The optimization targets the critical computations of homomorphic multiplication, the number theoretic transform (NTT), and the Chinese Remainder Theorem (CRT), combined with kernel fusion and parallel processing strategies. Experimental evaluations show that the GPU-accelerated FHE framework improves execution efficiency substantially: CRT computation is accelerated by 103.6%, NTT efficiency improves by 143.6%, and homomorphic multiplication achieves an overall efficiency gain of 98.49%. On the MNIST dataset, the average execution time of homomorphic multiplication drops to 31.6 ms from 2312.3 ms on the CPU; on the CIFAR-10 dataset, it drops to 67.1 ms from 3700.9 ms. In terms of model accuracy, the proposed system achieves over 90% accuracy on MNIST and shows substantial improvement on CIFAR-10, with a notably rapid rise in accuracy. These results confirm that the framework can meet the low-latency demands of IoV applications while preserving data privacy.
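
To make the kind of parallelism concrete, the following is a minimal CUDA sketch of one stage of an iterative Cooley–Tukey NTT, the workload that the NTT and homomorphic-multiplication optimizations target. It is an illustrative sketch, not the paper's implementation: it assumes a word-size prime modulus q, 32-bit RNS coefficients, input already in bit-reversed order, and a twiddle table laid out so that twiddles[half + j] = w^(j * n / (2 * half)) mod q for a primitive n-th root of unity w; the names ntt_stage, ntt_inplace, and the launch parameters are placeholders.

```cuda
#include <cstdint>
#include <cuda_runtime.h>

__device__ inline uint32_t mul_mod(uint32_t a, uint32_t b, uint32_t q) {
    // 64-bit product reduced mod q (plain reduction; a production kernel
    // would typically use Barrett or Montgomery multiplication instead).
    return (uint32_t)((uint64_t)a * b % q);
}

// One Cooley-Tukey butterfly stage: each thread handles one pair (i, i + half).
__global__ void ntt_stage(uint32_t *a, const uint32_t *twiddles,
                          int n, int half, uint32_t q) {
    int t = blockIdx.x * blockDim.x + threadIdx.x;
    if (t >= n / 2) return;
    int group = t / half;               // which butterfly group
    int j     = t % half;               // offset inside the group
    int i     = group * 2 * half + j;   // lower index of the pair
    uint32_t w = twiddles[half + j];    // stage twiddle (layout assumed above)
    uint32_t u = a[i];
    uint32_t v = mul_mod(a[i + half], w, q);
    a[i]        = (u + v) % q;
    a[i + half] = (u + q - v) % q;      // add q first to avoid underflow
}

// Host driver: log2(n) stages, each stage launching n/2 independent butterflies.
void ntt_inplace(uint32_t *d_a, const uint32_t *d_twiddles, int n, uint32_t q) {
    const int threads = 256;
    const int blocks  = (n / 2 + threads - 1) / threads;
    for (int half = 1; half < n; half *= 2) {
        ntt_stage<<<blocks, threads>>>(d_a, d_twiddles, n, half, q);
    }
    cudaDeviceSynchronize();
}
```

Because every butterfly within a stage is independent, each stage maps directly onto n/2 GPU threads; the per-stage kernel launches here are exactly the kind of overhead that kernel fusion, as mentioned in the abstract, aims to reduce.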