Large language models for sustainable assessment and feedback in higher education.
- Abstract:
Nowadays, there is growing attention on enhancing the quality of teaching, learning and assessment processes. As a recent EU report underlines, assessment and feedback remain a problematic area with regard to training educational professionals and adopting new practices. In fact, traditional summative assessment practices still predominate in European countries, against the recommendations of the Bologna Process guidelines, which promote alternative assessment practices that seem crucial for engaging students and equipping them with lifelong learning skills, also through the use of technology. Looking at the literature, a series of sustainability problems arise when these demands meet real-world teaching, particularly when academic instructors face the assessment of large classes. With the fast advancement of Large Language Models (LLMs) and their increasing availability, affordability and capability, part of the solution to these problems might be at hand. In fact, LLMs can process large amounts of text, summarise it and give feedback on it according to predetermined criteria. The insights from that analysis can be used both to give feedback to the student and to help the instructor assess the text. With the proper pedagogical and technological framework, LLMs can relieve instructors of some of these time-related sustainability issues, and thus of being forced to rely on multiple-choice tests and similar formats. For this reason, as a first step, we are designing and validating a theoretical framework and a teaching model for fostering the use of LLMs in assessment practice, identifying the approaches that can be most beneficial. [ABSTRACT FROM AUTHOR]
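
Where the abstract describes LLMs summarising a student text and giving feedback against predetermined criteria, a minimal sketch of such a criterion-driven feedback pass might look as follows. Everything here is illustrative and not taken from the paper: the rubric, the prompt wording, and the `call_llm` helper (a placeholder for whichever LLM client an instructor actually uses) are all assumptions.

```python
# Illustrative sketch only: a criterion-driven feedback pass over one student text.
# The rubric, prompt wording, and call_llm stub are assumptions, not the paper's method.

RUBRIC = {
    "argument": "Is the main claim stated clearly and supported with evidence?",
    "structure": "Is the text organised into a coherent introduction, body and conclusion?",
    "sources": "Are sources cited and used appropriately?",
}


def build_prompt(student_text: str, rubric: dict[str, str]) -> str:
    """Assemble a prompt asking the model to summarise the text and
    comment on each predetermined criterion."""
    criteria = "\n".join(f"- {name}: {question}" for name, question in rubric.items())
    return (
        "You are assisting a university instructor with formative feedback.\n"
        "First, summarise the student text in 2-3 sentences.\n"
        "Then, for each criterion below, give one short, constructive comment.\n\n"
        f"Criteria:\n{criteria}\n\nStudent text:\n{student_text}"
    )


def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM client call (hosted API or local model).
    Replace the body with an actual client invocation."""
    return "[model response would appear here]"


def feedback_for(student_text: str) -> str:
    """Produce criterion-based feedback for one submission; in a large class this
    would be looped over all submissions, with the output shown to the student
    and used by the instructor as an assessment aid."""
    return call_llm(build_prompt(student_text, RUBRIC))


if __name__ == "__main__":
    print(feedback_for("Example essay text..."))
```

Keeping the criteria in a single rubric structure is one way the "predetermined criteria" mentioned in the abstract could be made explicit and reused consistently across a whole class, for both student-facing feedback and instructor-facing assessment.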