Autonomous vehicle (AV) applications have recently drawn significant attention to the privacy of collected sensing data, and federated learning (FL) has emerged as a prominent solution to the privacy risks of sharing raw data. This work introduces FL into the AV domain: vehicles train models locally and upload local updates to a server instead of transmitting raw datasets, thereby shielding the data from potential exposure by a malicious centralized cloud server (CC-server). However, a malicious CC-server may still infer the content of an AV's dataset by analyzing the uploaded gradient models. To counter this, various privacy-preserving federated learning (PPFL) approaches have been proposed to prevent a malicious CC-server from mounting revealing or forgery attacks on the uploaded gradient models. Nonetheless, the PPFL framework also opens the door for a malicious AV to poison the global model by uploading forged or duplicated trained models to the CC-server. This paper proposes an efficient privacy-preserving federated learning framework that detects forged and duplicated uploaded gradient models (PPFL-DP). Specifically, we design a threshold ($\varepsilon$) algorithm based on Euclidean distance that serves as an effective detector of poisoning attacks. Furthermore, to preserve privacy within the threshold ($\varepsilon$) method, we combine group-oblivious signcryption (GOSC) with homomorphic cryptography to develop a sign-homomorphic encryption (SHE) protocol. The SHE protocol preserves privacy in FL while remaining compatible with the threshold ($\varepsilon$) design, thereby achieving privacy preservation and poisoning-attack detection simultaneously.
Experimental results and theoretical analysis show that the proposed SHE protocol protects client privacy efficiently while maintaining high accuracy in detecting forged uploaded models, without degrading the quality of the global model.
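As an illustration of the threshold ($\varepsilon$) idea, the following minimal Python sketch flags uploaded updates whose Euclidean distance from a reference point exceeds $\varepsilon$. The coordinate-wise-median reference, the function name, and the flagging rule are our own illustrative assumptions; the paper's exact construction (and its encrypted-domain evaluation under the SHE protocol) may differ.

```python
import numpy as np

def detect_poisoned_updates(updates, epsilon):
    """Flag gradient updates whose Euclidean distance from the
    coordinate-wise median of all updates exceeds the threshold epsilon.

    updates: list of 1-D numpy arrays (flattened local gradient models).
    Returns a list of booleans; True marks a suspected forged update.
    """
    reference = np.median(np.stack(updates), axis=0)
    flags = []
    for u in updates:
        distance = float(np.linalg.norm(u - reference))
        flags.append(distance > epsilon)  # far from consensus => suspect
    return flags

# Example: two honest updates near the consensus, one implausible outlier
honest_a = np.array([0.10, -0.20, 0.05])
honest_b = np.array([0.12, -0.18, 0.04])
forged   = np.array([5.00,  4.00, -6.00])
print(detect_poisoned_updates([honest_a, honest_b, forged], epsilon=1.0))
# → [False, False, True]
```

A near-zero pairwise distance between two uploads would analogously signal a duplicated model, complementing the outlier check above.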