With the development of intelligent vehicles, increasing research has focused on achieving human-like driving. As an important component of intelligent vehicle control, car-following control should ensure safety, tracking performance, and comfort while remaining acceptable to human drivers. In this paper, we propose pHybrid, a car-following control strategy based on a hybrid of reinforcement learning (RL) and supervised learning (SL). RL is used to achieve multi-objective collaborative optimization in car-following control, and SL is used to achieve human-like car following. By combining the complementary advantages of the two learning methods, pHybrid achieves high-performance car following while matching the personalized car-following characteristics of human drivers. RL serves as the main framework of pHybrid. In addition, a personalized car-following reference model (PCRM) of human drivers based on Gaussian mixture regression and a motion uncertainty model of the preceding vehicle (MUMPV) based on a sequence-to-sequence network are established and incorporated into the RL framework. The PCRM leads pHybrid to learn the distinct characteristics of individual human drivers and improves the anthropomorphism of pHybrid; the MUMPV enables pHybrid to account for dynamic changes in the traffic environment and become more robust. pHybrid is trained and tested on the HighD dataset, and its generalizability is verified on a self-built real-vehicle data collection platform. The results show that pHybrid can match human drivers' personalized car-following characteristics and can outperform human drivers in safety, comfort, and tracking of the preceding vehicle.
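To make the two ingredients named above concrete, the sketch below illustrates (a) a Gaussian-mixture-regression reference model in the spirit of the PCRM, mapping a car-following state to a human-like acceleration, and (b) a hybrid reward that combines RL objectives (safety, tracking, comfort) with an SL-style imitation term toward that reference. This is a minimal illustration, not the authors' implementation: the feature set (gap, relative speed, ego speed), the reward weights, and the thresholds are assumptions chosen for readability.

```python
# Minimal sketch (illustrative assumptions, not the paper's implementation).
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.mixture import GaussianMixture


class PCRM:
    """Personalized car-following reference via Gaussian mixture regression."""

    def __init__(self, n_components: int = 5):
        self.gmm = GaussianMixture(n_components=n_components, covariance_type="full")
        self.d_x = None  # number of input features

    def fit(self, states: np.ndarray, accels: np.ndarray) -> None:
        # states: (N, d_x), e.g. [gap, relative speed, ego speed]; accels: (N,)
        self.d_x = states.shape[1]
        self.gmm.fit(np.hstack([states, accels.reshape(-1, 1)]))

    def predict(self, state: np.ndarray) -> float:
        # Gaussian mixture regression: condition the fitted joint mixture on the state.
        dx = self.d_x
        resp, cond_mean = [], []
        for pi, mu, cov in zip(self.gmm.weights_, self.gmm.means_, self.gmm.covariances_):
            mu_x, mu_y = mu[:dx], mu[dx:]
            S_xx, S_yx = cov[:dx, :dx], cov[dx:, :dx]
            # Component responsibility and conditional mean of the acceleration.
            resp.append(pi * multivariate_normal.pdf(state, mean=mu_x, cov=S_xx))
            cond_mean.append(mu_y + S_yx @ np.linalg.solve(S_xx, state - mu_x))
        resp = np.asarray(resp) / np.sum(resp)
        return float(resp @ np.concatenate(cond_mean))


def hybrid_reward(gap, accel, jerk, ref_accel, desired_gap=20.0, w=(1.0, 1.0, 0.5, 1.0)):
    """Illustrative multi-objective reward: tracking + safety + comfort + imitation."""
    r_track = -abs(gap - desired_gap) / desired_gap   # track a desired gap
    r_safe = -10.0 if gap < 2.0 else 0.0              # penalize near-collisions
    r_comfort = -(jerk ** 2)                          # favor smooth accelerations
    r_imitate = -abs(accel - ref_accel)               # stay close to the PCRM output
    return w[0] * r_track + w[1] * r_safe + w[2] * r_comfort + w[3] * r_imitate
```

In this framing, the imitation term plays the role of the SL component (pulling the policy toward the personalized reference), while the remaining terms express the multi-objective RL optimization described above.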