In federated learning, clients can apply local differential privacy to the uploaded gradients or parameters to reduce the potential risk of privacy leakage. However, most existing solutions set a uniform privacy level for all clients, which cannot meet users' individual privacy needs. A few schemes introduce personalized differential privacy, but they ignore the data utility of the whole system and the issue of fair client sampling. In this paper, we propose a novel framework of fair federated learning with personalized local differential privacy (PLFa-FL), which achieves fair client sampling while balancing privacy and data utility. First, we propose a fair sampling mechanism that combines each client's local loss value with its historical participation record. Then, we analyze the impact of the privacy budget threshold on model performance. To balance privacy and data utility, we design a privacy budget waste function that determines the optimized privacy budget threshold and the privacy budget each client actually spends. Experiments on the MNIST and EMNIST datasets confirm that PLFa-FL compares favorably against baseline methods in terms of model performance, running time, and fairness.
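
To make the sampling step concrete, the following is a minimal sketch of one way a loss-based utility term and a participation-based fairness term could be combined into client sampling probabilities. The function name `sampling_probabilities`, the trade-off weight `alpha`, and the specific normalization are illustrative assumptions, not the exact rule defined in PLFa-FL.

```python
import numpy as np

def sampling_probabilities(local_losses, participation_counts, current_round, alpha=0.5):
    """Combine local loss values and historical participation into sampling weights.

    local_losses         : latest reported local loss per client
    participation_counts : number of rounds each client has been selected so far
    current_round        : index of the current communication round (>= 1)
    alpha                : assumed trade-off between utility (loss) and fairness (participation)
    """
    losses = np.asarray(local_losses, dtype=float)
    counts = np.asarray(participation_counts, dtype=float)

    # Clients with larger local loss are treated as more informative for the global model.
    loss_term = losses / losses.sum()

    # Clients selected less often so far receive a larger fairness boost.
    fairness_term = 1.0 - counts / current_round
    fairness_term = fairness_term / fairness_term.sum()

    scores = alpha * loss_term + (1.0 - alpha) * fairness_term
    return scores / scores.sum()

# Example: 5 clients, sample 2 of them in round 10 without replacement.
rng = np.random.default_rng(0)
probs = sampling_probabilities(
    local_losses=[0.9, 0.4, 1.2, 0.3, 0.7],
    participation_counts=[6, 2, 5, 1, 4],
    current_round=10,
)
selected = rng.choice(5, size=2, replace=False, p=probs)
print(probs, selected)
```

In this sketch, a client that has rarely participated but reports a high local loss receives a high sampling probability, which is the qualitative behavior the fair sampling mechanism aims for; the paper's mechanism specifies the precise scoring rule.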