Federated learning enhances intelligent transportation systems by enabling collaborative model training across multiple clients. However, when many participants train oversized models on local data, the volume of parameters exchanged grows substantially and communication efficiency drops. To address this issue, we propose the Federated Client and Global (FEDCG) pruning method, a two-stage pruning strategy applied at both the client and the server. The method uses mutual information to assess the importance of individual neurons or filters in a neural network, enabling global pruning on the server. In intelligent transportation systems, federated learning connects multiple sub-models, such as those for autonomous driving and vehicle detection, and pruning removes the redundant parameters they exchange. To handle architectural differences among the pruned client models, we use a parameter aggregation strategy that preserves the effectiveness of the global model. FEDCG first performs preliminary pruning on each client to reduce upload overhead, then prunes further on the server while aggregating the effective parameters from all clients into a global model. Because the global model is built from the pruned client models, it remains both efficient and accurate. Extensive experiments demonstrate that FEDCG reduces communication overhead in both the upload and download phases while maintaining high accuracy and robustness across diverse datasets and neural network architectures, providing a practical tool for federated learning in intelligent transportation systems.
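The two-stage scoring and aggregation can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: it assumes per-filter activations are globally average-pooled into feature vectors and scored with scikit-learn's mutual_info_classif, and the helper names filter_importance, prune_mask, and server_aggregate are hypothetical.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif


def filter_importance(activations: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """Score each filter by the mutual information between its pooled
    activation and the class label (higher = more important).

    activations: (n_samples, n_filters), e.g. global-average-pooled
    feature maps collected on a client's local data.
    """
    return mutual_info_classif(activations, labels, random_state=0)


def prune_mask(scores: np.ndarray, prune_ratio: float) -> np.ndarray:
    """Keep the (1 - prune_ratio) fraction of filters with the highest scores."""
    n_keep = max(1, int(round(len(scores) * (1.0 - prune_ratio))))
    keep = np.argsort(scores)[::-1][:n_keep]
    mask = np.zeros(len(scores), dtype=bool)
    mask[keep] = True
    return mask


def server_aggregate(client_masks: list, client_weights: list,
                     global_ratio: float):
    """Second-stage (server-side) step: average each filter's weights over
    the clients that retained it, then prune the least-retained filters."""
    masks = np.stack(client_masks)       # (n_clients, n_filters) booleans
    weights = np.stack(client_weights)   # (n_clients, n_filters, fan_in)
    kept_counts = masks.sum(axis=0)      # how many clients kept each filter
    # Average only over the clients that retained each filter.
    denom = np.maximum(kept_counts, 1)[:, None]
    avg_weights = (weights * masks[..., None]).sum(axis=0) / denom
    # Use retention counts as a simple global importance proxy.
    global_mask = prune_mask(kept_counts.astype(float), global_ratio)
    return avg_weights, global_mask


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    acts = rng.normal(size=(200, 16))                 # 200 samples, 16 filters
    y = (acts[:, 0] + acts[:, 1] > 0).astype(int)     # labels depend on filters 0 and 1
    scores = filter_importance(acts, y)
    mask = prune_mask(scores, prune_ratio=0.5)        # client-side stage
    print("kept filters:", np.flatnonzero(mask))
```

In this sketch, the client-side stage (filter_importance plus prune_mask) reduces what each client uploads, and server_aggregate stands in for the server-side stage; the retention-count proxy for global importance is an assumption made here for brevity, whereas FEDCG scores importance with mutual information at both stages.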