Federated learning (FL) has gained widespread attention as a distributed machine learning (ML) technique that protects data while training on local devices. Unlike conventional centralized training, FL does not share raw data between clients and the server, thereby safeguarding potentially sensitive information. However, FL systems remain vulnerable to attacks, and commonly used defenses, such as encryption and blockchain technologies, often incur significant computational and communication costs, making them impractical for devices with restricted resources. To tackle this challenge, we present a privacy-preserving FL system designed specifically for resource-constrained devices that leverages compressive sensing and differential privacy (DP). We implement a weight-pruning-based compressive sensing method whose compression ratio adapts to resource availability. In addition, we employ DP to add noise to each gradient before it is sent to the central server for aggregation, thereby protecting gradient privacy. Evaluation results demonstrate that our proposed method achieves slightly better accuracy than state-of-the-art methods such as DP-federated averaging, DP-FedOpt, and adaptive Gaussian clipping DP (AGC-DP) on the MNIST, Fashion-MNIST, and Human Activity Recognition datasets. Furthermore, our approach attains this higher accuracy with a lower total communication cost and shorter training time than the current state-of-the-art methods. Finally, we comprehensively evaluate our method's resilience against poisoning attacks, showing that it resists them better than existing state-of-the-art approaches.
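The client-side pipeline described above (prune the update, then add DP noise before uploading) could be sketched roughly as follows. This is a minimal illustration, not the paper's exact algorithm: the function name, parameter defaults, magnitude-based pruning, and the use of L2 clipping with the Gaussian mechanism are all assumptions for the sake of a concrete example.

```python
import numpy as np

def prune_and_privatize(grad, compression_ratio=0.5, clip_norm=1.0,
                        noise_multiplier=1.0, rng=None):
    """Illustrative client-side update: (1) magnitude-based pruning to a
    target compression ratio, (2) L2 clipping to bound sensitivity,
    (3) Gaussian noise for differential privacy. All names and defaults
    here are hypothetical, not taken from the paper."""
    rng = np.random.default_rng() if rng is None else rng
    g = np.asarray(grad, dtype=float)

    # (1) Keep only the k largest-magnitude entries, zeroing the rest.
    k = max(1, int(round(compression_ratio * g.size)))
    threshold = np.sort(np.abs(g).ravel())[-k]
    pruned = np.where(np.abs(g) >= threshold, g, 0.0)

    # (2) Clip the pruned update to L2 norm <= clip_norm.
    norm = np.linalg.norm(pruned)
    clipped = pruned * min(1.0, clip_norm / (norm + 1e-12))

    # (3) Add Gaussian noise calibrated to the clipping bound.
    return clipped + rng.normal(0.0, noise_multiplier * clip_norm,
                                size=g.shape)
```

In a deployment, a resource-constrained client would pick `compression_ratio` from its current budget, and the server would aggregate the sparse, noisy updates as usual.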