This week I would like to write about the part of education that has been affected most severely by AI: assessments. There are two main reasons that make assessments so susceptible to AI: (1) we do not have any reliable tools to detect AI-generated work, and (2) there are no “AI-proof” assessments – in one way or another, students can get support from AI for any assignment. Consequently, I would like to share some insights I gathered from our colleagues and the Internet that may be helpful for our assessment strategies.

Grade students, not AI

The most frustrating experience for lecturers is the feeling that they are grading and giving feedback to AI, not to students. Avoiding this frustration requires careful pre-planning. When creating an assignment, we should ask ourselves: “will the submissions reflect the students’ efforts or the AI’s?” A recommended strategy is to integrate AI directly into the assignment, so that the students’ task is not to generate something from scratch but to improve what AI initially produces. This strategy also mimics how most students will work in the future, that is, getting a draft from AI and improving it. We can be confident that AI will never replicate all the intricacies of our courses, and we will be able to notice when students use the knowledge they gained from the course to add value to what AI can produce.

AI-level is the new F

Given that students will use AI in all assignments, our grading scale should evolve to reflect this reality. Work that is only AI-generated, or at the level of what AI can produce, will be the new baseline for an “F” – not because it is plagiarism, but because it shows that no human is needed for this level of work; AI can do it alone. From that baseline, the incremental value students add to the AI output will determine their grade. To illustrate, consider this hypothetical grading scale:
F – AI Equivalent: Work that doesn’t surpass what a typical AI system could produce on its own.
E – Basic Enhancement: Introduces slight improvements to the AI output, such as correcting minor errors or adding superficial details. While there is some attempt to refine the content, the enhancements lack depth and do not significantly elevate the quality of the work.
D – Rudimentary Insight: Demonstrates an initial effort to add value to the AI-generated content by incorporating foundational insights or expanding on basic concepts. The enhancements show a rudimentary understanding but remain largely surface-level.
C – Thoughtful Contribution: Adds meaningful personal insights or perspectives to the AI-generated content, showing clear engagement with the topic. The work reflects a solid understanding of the material and shows deliberate effort to improve upon the AI-generated foundation.
B – Analytical Enhancement: Significantly expands on AI output with well-reasoned arguments, relevant examples, or thoughtful critiques, demonstrating strong critical thinking.
A – Innovative Advancement: Uses AI output as a foundation for original ideas, creative problem-solving, or novel connections, showcasing exceptional intellectual contribution and subject mastery.

Transparency goes both ways

At the beginning of our courses we should set clear guidelines and expectations around AI. We should also let students know how we will be using AI throughout the course. This approach will allow students to be more open and forthcoming about their own AI use. Many students still think that AI is something we do not approve of, something they should use in secrecy.

Share your experiences

As lecturers, most of us are going through similar challenges. It is therefore beneficial that we share both our positive and negative experiences with each other. You can also submit your insights to ai@fek.lu.se, where I can curate them and share them with everyone through this newsletter and our workshops.
Thank you for reading and have a nice weekend!