Large Language Models (LLMs) are a powerful tool for solving diverse tasks formulated in natural language. Recent work has demonstrated their capacity to solve tasks related to Knowledge Graphs, such as Knowledge Graph Completion or Knowledge Graph Reasoning, even in Zero- or Few-Shot paradigms. However, given a particular input, they do not always produce the same output, and the intermediate reasoning steps they report are sometimes invalid, even when the final answer is satisfactory. Moreover, the use of LLMs has mostly been studied for static knowledge graphs, while temporal ones remain overlooked. To highlight opportunities and challenges in knowledge graph related tasks, we experiment with ChatGPT on graph completion and reasoning over both static and temporal facts, using three different prompting techniques in Zero- and One-Shot contexts, on a Task-Oriented Dialogue system use case. Our results show that ChatGPT can solve the given tasks, but often in a nondeterministic way.