Exploring the Impact of Generative AI for StandUp Report Recommendations in Software Capstone Project Development

Abstract
StandUp Reports play an important role in capstone software engineering courses, facilitating progress tracking, obstacle identification, and team collaboration. Despite their significance, however, students often struggle to write StandUp Reports that are clear, concise, and actionable. This paper investigates the impact of using generative AI to produce StandUp Report recommendations, aiming to help students improve the quality and effectiveness of their reports. In a semester-long capstone course, 179 students participated in 16 real-world software development projects. They submitted weekly StandUp Reports with the assistance of an AI-powered Slack bot, which analyzed their initial reports and offered suggestions for improving them, generated with both the GPT-3.5 and early-access GPT-4 APIs. After each submission, students could voluntarily answer a survey on usability and suggestion preference. In addition, we conducted a linguistic analysis of the recommendations generated by the two models to gauge reading ease and comprehension complexity. Our findings indicate that the AI-based recommendation system helped students improve the overall quality of their StandUp Reports over the course of the semester. Students expressed a high level of satisfaction with the tool and a strong willingness to continue using it. The survey results show that students perceived a slight improvement when using GPT-4 compared to GPT-3.5. Finally, a computational linguistic analysis of the recommendations demonstrates that both models significantly improve the alignment between the generated texts and the students' educational level, thereby improving on the quality of the students' original texts.
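The abstract describes the system only at a high level. For concreteness, the sketch below illustrates one way such a pipeline could pair an LLM-generated recommendation with a readability check of the kind used in the linguistic analysis. It is a minimal illustrative sketch, not the authors' implementation: the prompt wording, the model identifier, and the use of the textstat package for Flesch scores are assumptions.

```python
# Minimal sketch of a StandUp Report recommendation pipeline.
# Assumptions: OpenAI Python SDK v1+, the `textstat` package, and
# hypothetical prompt text (not taken from the paper).
from openai import OpenAI
import textstat

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def suggest_improvements(report: str, model: str = "gpt-4") -> str:
    """Ask the model to suggest a clearer version of a StandUp Report."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system",
             "content": ("You review weekly StandUp Reports from a software "
                         "capstone course. Suggest a clearer, more concise, "
                         "and more actionable version of the report.")},
            {"role": "user", "content": report},
        ],
    )
    return response.choices[0].message.content


def readability(text: str) -> dict:
    """Score a text for reading ease and grade-level complexity."""
    return {
        "flesch_reading_ease": textstat.flesch_reading_ease(text),
        "flesch_kincaid_grade": textstat.flesch_kincaid_grade(text),
    }


if __name__ == "__main__":
    original = ("Worked on stuff for the backend. Some tests fail, "
                "will look at it. Meeting tomorrow maybe.")
    suggestion = suggest_improvements(original)
    print("Original scores:  ", readability(original))
    print("Suggestion scores:", readability(suggestion))
```

Comparing the score dictionaries for the original report and the suggested rewrite mirrors, in miniature, the paper's comparison of reading ease and comprehension complexity between student texts and model-generated recommendations.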
Keywords
Generative AI, Large Language Models, ChatGPT