AGENTIC LLM SYSTEM FOR AUTOMATED PROJECT EVALUATION
DOI: https://doi.org/10.5281/zenodo.19452087

Keywords: LLMs, Agents, Software Engineering, Future Challenges

Abstract
Recent advancements in Large Language Models (LLMs) have significantly influenced software engineering practices, yet complex software evaluation tasks often require collaborative and specialized approaches. This project explores an agentic LLM-based system designed for automated project evaluation across different stages of the Software Development Life Cycle (SDLC). The proposed system utilizes multiple intelligent agents that cooperate to analyze project components such as requirements, source code, documentation, testing outputs, and debugging information. Each agent performs a specific role, such as code review, static analysis, requirement validation, or testing assessment, and the agents' outputs are combined into comprehensive evaluation results. The system integrates modern agentic frameworks, communication protocols, and language model selection strategies to ensure accurate and scalable evaluation. Additionally, the study examines challenges such as coordination among agents, computational cost management, and effective data handling. By leveraging multi-agent collaboration, the proposed approach aims to improve the efficiency, reliability, and automation of project assessment in software engineering environments.
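The architecture described above, in which specialized agents each evaluate one project component and an orchestrator aggregates their findings, can be sketched in miniature. All class, field, and artifact names below are hypothetical illustrations; real agents would delegate their analysis to an LLM or static-analysis tool rather than the toy heuristics shown here:

```python
from dataclasses import dataclass


@dataclass
class Finding:
    """One evaluation result produced by an agent."""
    agent: str
    severity: str   # e.g. "minor" or "major"
    message: str


class Agent:
    """Base class: each agent inspects project artifacts for its role."""
    name = "base"

    def evaluate(self, artifacts: dict) -> list[Finding]:
        raise NotImplementedError


class CodeReviewAgent(Agent):
    """Toy code-review agent: flags unresolved TODO markers."""
    name = "code_review"

    def evaluate(self, artifacts: dict) -> list[Finding]:
        findings = []
        for path, code in artifacts.get("source", {}).items():
            if "TODO" in code:
                findings.append(
                    Finding(self.name, "minor", f"Unresolved TODO in {path}")
                )
        return findings


class RequirementAgent(Agent):
    """Toy requirement-validation agent: checks requirements exist."""
    name = "requirements"

    def evaluate(self, artifacts: dict) -> list[Finding]:
        if not artifacts.get("requirements"):
            return [Finding(self.name, "major", "No requirements provided")]
        return []


class Orchestrator:
    """Runs every agent over the shared artifacts and merges the reports."""

    def __init__(self, agents: list[Agent]):
        self.agents = agents

    def run(self, artifacts: dict) -> dict[str, list[Finding]]:
        return {agent.name: agent.evaluate(artifacts) for agent in self.agents}


orchestrator = Orchestrator([CodeReviewAgent(), RequirementAgent()])
report = orchestrator.run({
    "source": {"app.py": "def main():\n    pass  # TODO: implement"},
    "requirements": ["The system shall log all evaluations"],
})
```

In this sketch the orchestrator runs agents sequentially over a shared artifact dictionary; the coordination and cost-management challenges the abstract raises arise once agents run concurrently, exchange messages, or each incur per-call LLM costs.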
License

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.