Graph representation learning has attracted tremendous attention due to its remarkable performance in a variety of real-world applications. However, because data labeling is time- and resource-intensive, current supervised graph representation learning models frequently suffer from label sparsity. In light of this, graph few-shot learning has been proposed to mitigate the performance degradation caused by limited annotated data. While recent advances in graph few-shot learning achieve promising performance, they typically rely on a generic feature embedding shared across tasks. Because the data distribution differs from task to task, feature embeddings should ideally be tailored to the given task. In this work, we propose a novel Task-Aware Graph Model (TAGM) to learn task-aware node embeddings. Specifically, we design a new graph cell that combines a graph convolution layer, which aggregates and updates graph information, with a two-layer linear transformation for node feature transformation. On this basis, we encode task information to learn a binary weight mask set and a gradient mask set: the weight mask set selects different network parameters for different tasks, while the gradient mask set dynamically updates the selected parameters in a task-specific manner during optimization. As a result, our model is more sensitive to task identity and adapts better to each task's graph input. Extensive experiments on three graph-structured datasets demonstrate that our proposed method generally outperforms state-of-the-art few-shot learning baselines.
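
The masking mechanism described above can be sketched as follows. This is a minimal illustrative example, not the authors' implementation: the mask generators (`W_mask`, `G_mask`), the thresholding rule, and all dimensions are assumptions introduced only to show how a task embedding could select a parameter subnetwork (weight mask) and gate which parameters receive gradient updates (gradient mask).

```python
import numpy as np

rng = np.random.default_rng(0)

def binary_mask(task_emb, proj):
    # Hypothetical mask generator: project the task embedding and
    # threshold the sigmoid output at 0.5 to obtain a binary mask.
    return (1.0 / (1.0 + np.exp(-(proj @ task_emb))) > 0.5).astype(float)

# Toy dimensions (assumed for illustration only).
d_task, n_params = 4, 6
task_emb = rng.normal(size=d_task)               # encoded task information
theta = rng.normal(size=n_params)                # shared network parameters
W_mask = rng.normal(size=(n_params, d_task))     # weight-mask generator
G_mask = rng.normal(size=(n_params, d_task))     # gradient-mask generator

w = binary_mask(task_emb, W_mask)   # weight mask: selects a task-specific subnetwork
g = binary_mask(task_emb, G_mask)   # gradient mask: gates which parameters update

theta_task = theta * w              # task-aware parameters used in the forward pass
grad = rng.normal(size=n_params)    # stand-in for a backpropagated gradient
theta = theta - 0.1 * (g * grad)    # only gradient-masked entries are updated
```

Different task embeddings would produce different masks, so two tasks can use (and update) different slices of the same shared parameter set.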