In deep learning, generalization under distribution shift remains a challenge. Domain generalization addresses one form of this shift: the task is to train models that perform well on unseen data drawn from different but related domains. In this paper, we introduce novel approaches to domain generalization that integrate a sparse auto-encoder before the classifier for multi-class image classification. Our three approaches focus on reconstructing class-specific features while discarding domain-specific information. All three improve model performance on a synthetic dataset, and one also yields a slight improvement on the iWildCam 2020 dataset. These results underscore the importance of exploring techniques that strengthen deep learning models' resilience to distribution shift.
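To make the described architecture concrete, the sketch below shows one plausible way a sparse auto-encoder bottleneck could sit between a feature extractor and the classifier head; it is a minimal illustration, not the authors' exact model, and the layer sizes, loss weights, and module names are assumptions introduced for this example.

```python
# Hedged sketch: a sparse auto-encoder inserted before the classifier.
# Dimensions, sparsity weight, and loss composition are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SparseAEClassifier(nn.Module):
    def __init__(self, feat_dim=512, code_dim=128, num_classes=10):
        super().__init__()
        self.encoder = nn.Linear(feat_dim, code_dim)    # maps backbone features to a sparse code
        self.decoder = nn.Linear(code_dim, feat_dim)    # reconstructs the input features
        self.classifier = nn.Linear(code_dim, num_classes)

    def forward(self, feats):
        code = torch.relu(self.encoder(feats))
        recon = self.decoder(code)
        logits = self.classifier(code)
        return logits, recon, code


def total_loss(logits, recon, code, feats, labels, lam_rec=1.0, lam_sparse=1e-3):
    # Classification loss + reconstruction of (ideally class-specific) features
    # + an L1 penalty on the code, which encourages sparsity.
    ce = F.cross_entropy(logits, labels)
    rec = F.mse_loss(recon, feats)
    sparsity = code.abs().mean()
    return ce + lam_rec * rec + lam_sparse * sparsity
```

In this reading, the reconstruction term pushes the code to retain information needed to rebuild the features, while the classification term and sparsity penalty bias it toward compact, class-relevant structure rather than domain-specific detail.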