As the information age progresses, the explosive growth of text data has made effective text classification a critical challenge. Traditional classification methods often underperform or incur high computational costs when faced with heterogeneous data, limited labeled data, or domain-specific data. To address these challenges, this paper proposes a novel text classification model, GZclassifier, designed to improve both accuracy and efficiency. GZclassifier employs two distinct compressors to process the input text and computes compression-based distances in parallel to perform classification. This dual-compressor approach strengthens the model's ability to handle diverse and sparse data. We conducted extensive experimental evaluations on a range of public datasets, including few-shot learning scenarios, to assess the proposed method's performance. The results demonstrate that our model significantly outperforms traditional methods in classification accuracy, robustness, and computational efficiency. GZclassifier's ability to handle limited labeled data and domain-specific contexts highlights its potential as an efficient solution for real-world text classification tasks. This study not only advances the field of text classification but also demonstrates the model's practical applicability across a variety of text processing scenarios.
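As a rough illustration of the dual-compressor, distance-based classification idea sketched above, the following is a minimal Python example. It assumes gzip and bz2 as the two compressors, a normalized compression distance (NCD) blended across both, and a simple k-nearest-neighbour vote; these choices, the helper names, and the weighting scheme are illustrative assumptions rather than the paper's exact method, and the parallel distance computation is omitted for brevity.

```python
import gzip
import bz2


def ncd(x: str, y: str, compressor) -> float:
    """Normalized compression distance between two strings under one compressor
    (hypothetical helper; not the paper's exact formulation)."""
    cx = len(compressor.compress(x.encode("utf-8")))
    cy = len(compressor.compress(y.encode("utf-8")))
    cxy = len(compressor.compress((x + " " + y).encode("utf-8")))
    return (cxy - min(cx, cy)) / max(cx, cy)


def dual_ncd(x: str, y: str, weight: float = 0.5) -> float:
    """Blend the distances produced by two compressors (here gzip and bz2)
    into a single score; equal weighting is an assumption."""
    return weight * ncd(x, y, gzip) + (1.0 - weight) * ncd(x, y, bz2)


def classify(query: str, train: list, k: int = 3) -> str:
    """k-nearest-neighbour vote over the blended compression distance.
    `train` is a list of (text, label) pairs."""
    scored = sorted(train, key=lambda pair: dual_ncd(query, pair[0]))
    top_labels = [label for _, label in scored[:k]]
    return max(set(top_labels), key=top_labels.count)


if __name__ == "__main__":
    train = [
        ("the team won the championship game last night", "sports"),
        ("the striker scored twice in the final match", "sports"),
        ("the central bank raised interest rates again", "finance"),
        ("stock markets fell after the earnings report", "finance"),
    ]
    print(classify("the goalkeeper saved a penalty in extra time", train, k=3))
```

In practice, the per-example distance computations are independent, so they could be distributed across worker processes, which is one way the parallelism mentioned above might be realized.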