Cognitive reasoning holds a significant place in Natural Language Processing (NLP). Yet the zero-shot setting, which mirrors real-life situations more closely than the supervised setting, remains relatively underexplored. While a few studies have employed Large Language Models (LLMs) for zero-shot cognitive reasoning tasks, they still face two key challenges: 1) Traditional approaches rely on the chain-of-thought (CoT) mechanism, in which the LLM is given a "think step by step" prompt; however, this zero-shot strategy cannot leverage multiple similar demonstrations and is susceptible to reasoning errors. 2) Previous CoT methods have focused predominantly on intricate mathematical reasoning, overlooking the fact that conventional NLP tasks, such as sentiment analysis and question answering, can also be reframed as cognitive reasoning processes; consequently, LLMs can likewise be harnessed for zero-shot cognitive reasoning problems in NLP. To address these issues, we introduce a generative CoT approach for zero-shot cognitive reasoning tasks. Experimental results demonstrate that our approach outperforms existing state-of-the-art methods across three categories of tasks: sentiment analysis, question answering, and mathematical reasoning.
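For concreteness, the sketch below illustrates the standard zero-shot CoT prompting baseline referred to above: a "think step by step" trigger elicits a rationale, and a second prompt extracts the final answer. This is a minimal illustration of the baseline mechanism, not our generative CoT method; `call_llm` is a hypothetical placeholder for whichever LLM API is used.

```python
# Minimal sketch of zero-shot chain-of-thought (CoT) prompting,
# the baseline mechanism described above.

def call_llm(prompt: str) -> str:
    """Hypothetical placeholder: send `prompt` to an LLM and return its completion."""
    raise NotImplementedError("wire up an LLM provider here")

def zero_shot_cot(question: str) -> str:
    # Stage 1: reasoning extraction. The trigger phrase elicits
    # intermediate reasoning steps without any demonstrations.
    reasoning_prompt = f"Q: {question}\nA: Let's think step by step."
    reasoning = call_llm(reasoning_prompt)

    # Stage 2: answer extraction. Condition on the generated
    # rationale and ask for the final answer only.
    answer_prompt = f"{reasoning_prompt} {reasoning}\nTherefore, the answer is"
    return call_llm(answer_prompt)
```

Because this baseline conditions on a single self-generated rationale and no demonstrations, an early reasoning error propagates directly to the final answer, which is the weakness noted in challenge 1 above.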