The integration of Generative Artificial Intelligence (GenAI) in education has introduced innovative approaches to assessment. One such approach is AI chatbot-based assessment, which uses large language models to provide students with timely and consistent feedback. However, the extent to which AI chatbots can generate assessments comparable to those of human evaluators in educational contexts remains underexplored. This study compared the grades and feedback provided by AI chatbots, peers, and the course instructor for student projects in a higher education course. The participants were 76 undergraduate students who engaged in a group project involving three phases: questionnaire development, peer assessment, and chatbot-based assessment. Employing a mixed-methods approach, this study quantitatively compared project grades and qualitatively analyzed feedback quality. Results indicated that AI chatbots consistently assigned higher grades than human assessors, whereas peer and instructor grades were notably lower and closely aligned with each other. Content analysis revealed that chatbots generally provided higher-quality feedback than peers, offering detailed insights and specific guidance for improvement, though they occasionally included irrelevant or contradictory information that required student intervention. Conversely, peer feedback was more personalized and context-sensitive. These findings underscore the continued importance of human judgment and suggest that integrating chatbot-based assessments with traditional methods can leverage their complementary strengths to enrich student learning.