Recommender systems provide users with tailored assistance by learning from their interactions with a system and recommending items that match their preferences and interests. Typical recommender systems treat recommendation as a static procedure, disregarding the fact that users' preferences change over time. Reinforcement learning (RL) approaches are among the most advanced and recent techniques for addressing this challenge, capturing a user's interest through their most recent interactions with the system. However, most recent research on RL-based recommender systems generates recommendations from the user's recent interactions alone, without taking into account the context in which those interactions occur. Context has a strong influence on users' interests, behaviors, and ratings, e.g., mood, time, day type, companion, social circle, and location. In this paper, we propose a context-aware deep reinforcement learning-based recommender system centered on context-specific state modeling, in which states are constructed from the user's most recent context. In parallel, we propose a list-wise version of the context-aware recommender agent that recommends a list of items to the user at each interaction step based on their context. The findings of the study indicate that modeling users' preferences together with contextual variables improves the performance of RL-based recommender systems. Furthermore, we evaluate the proposed method on context-based datasets in an offline environment, where the evaluation measures demonstrate its merit in comparison with existing studies. More precisely, the context-aware recommender agent achieves a highest Precision@5, MAP@10, and NDCG@10 of 77%, 76%, and 74%, respectively.
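To make the idea of context-specific state modeling concrete, the following is a minimal illustrative sketch (not the paper's actual implementation) of how an RL state could be formed by concatenating embeddings of the user's most recent interactions with an encoding of contextual variables such as mood, day type, and companion. All function names, vocabularies, and dimensions below are hypothetical.

```python
import numpy as np

EMBED_DIM = 32    # dimensionality of each item embedding (assumed)
HISTORY_LEN = 5   # number of most recent interactions kept in the state

def encode_context(context):
    """One-hot encode a few categorical context variables and concatenate them."""
    moods = ["happy", "sad", "neutral"]
    day_types = ["weekday", "weekend", "holiday"]
    companions = ["alone", "family", "friends", "partner"]
    vec = []
    for value, vocab in [(context["mood"], moods),
                         (context["day_type"], day_types),
                         (context["companion"], companions)]:
        vec.extend(1.0 if value == v else 0.0 for v in vocab)
    return np.array(vec, dtype=np.float32)

def build_state(recent_item_embeddings, context):
    """Concatenate the flattened interaction history with the context vector."""
    history = np.concatenate(recent_item_embeddings[-HISTORY_LEN:])
    return np.concatenate([history, encode_context(context)])

# Example: five random item embeddings and one context observation.
rng = np.random.default_rng(0)
items = [rng.normal(size=EMBED_DIM).astype(np.float32) for _ in range(HISTORY_LEN)]
state = build_state(items, {"mood": "happy",
                            "day_type": "weekend",
                            "companion": "friends"})
print(state.shape)  # (5 * 32 + 10,) -> (170,)
```

Under this sketch, the agent's policy network would consume `state` and score candidate items; the list-wise variant would simply return the top-k scored items at each interaction step rather than a single item.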