AI in education: OCR’s response to the DfE Call for Evidence
25 August 2023
Embrace the opportunities of AI in education and be clear-sighted about the limits. That was the clear message in OCR’s response to the Department for Education’s Call for Evidence on generative artificial intelligence (AI) in education. The response covered the opportunities and risks of using AI, setting out key immediate and future considerations for the DfE. Here's a summary of the OCR response:
Opportunities and benefits
There are many ways in which AI can be used in education, including:
- supporting a wide range of administrative and management activities within schools and colleges;
- assisting teachers in generating class materials and tests, and helping to manage their workload;
- providing alternative research avenues for students;
- allowing students to spend more time on higher-order skills (interpreting rather than generating);
- creating live assessment questions and papers;
- automating marking or assisting human marking.
Subjects
AI can bring benefits to all subjects. It can generate large numbers of exam questions, especially in STEM subjects (although it struggles with more complex questions that require context), and has the potential to provide highly reliable marking in discursive subjects. Computer Science can benefit from AI interpreting code, and performance and creative subjects could use AI to interpret sound recordings and other media. It can also support personalised learning and assessment. Our extensive conversations with teachers indicate little or no correlation between the subject taught and teachers' confidence in using AI.
Human expertise
AI can augment what educational professionals can achieve, but human expertise must remain at the centre of its use, setting the parameters and approving outputs. Our society and economy will need people who can use AI discerningly, with insight and good judgement. These thinking skills, and the knowledge that underpins them, will be key to its effective use. Some people will be able to use AI tools more effectively than others. We need to reflect on how these skills are best developed in young people, and on whether the current curriculum and assessment models remain appropriate.
Concerns
- Impact of AI on Non-Examined Assessment (NEA): The impact on coursework and project work has raised concerns that AI can be used to produce high-quality work that is not the authentic work of the student. While Awarding Organisations (AOs) have so far managed only a small number of malpractice cases relating to the misuse of AI in NEA, there is no guarantee this will remain the case in 2023/24: the technology continues to improve and to be used more widely, and detection software is not wholly reliable. The Joint Council for Qualifications (JCQ) is currently developing its position on controlling the risks of malpractice in students' use of AI within Non-Examined Assessments.
- Other concerns about using generative AI in educational settings: These include unauthorised disclosures; data protection and the use of confidential or third-party data; intellectual property and copyright risks; inaccuracies, bias and liability; an increased risk of cyber-attacks; environmental sustainability; and the mixed success of detection tools.
Risks
All these concerns inevitably affect our use of generative AI. We operate in a highly regulated industry in which maintaining public confidence in the products we offer is a legal necessity. Using AI to support teaching and learning, or for formative assessment, carries less risk than using it to generate or mark an assessment that contributes to the awarding of a high-stakes qualification. As an Awarding Organisation, we must exercise great caution when considering the use of generative AI, particularly in areas involving high-impact decisions for learners. Errors in such decision-making can have adverse political, legal, regulatory and/or reputational consequences for the industry. Generative AI also presents major ethical concerns for the education sector.
Future use
AI is both too promising and too risky for governments to take a hands-off approach:
- Curriculum and assessment: AI is already having a massive impact on education, work and wider society. It is incumbent on the education system to equip young people with the skills they will need to use AI in their future lives. We need to consider revising the current curriculum, especially at Key Stage 4, so that it better reflects, and prepares people for, the AI revolution. The DfE should set out plans for a review of current subject content and assessment constructs ahead of the next round of qualification and curriculum reform. It should also ensure that any subject content criteria for qualifications reflect the impact and broadening use of AI in education in terms of the knowledge and skills students are required to have for each qualification.
- Access to technology: There is a role for the DfE in ensuring adequate and equal access to technology. Schools need quality digital infrastructure: high-speed internet access with enough laptops and tablets to teach and test well. This is not just for generative AI, but to take advantage of other technological opportunities, including digital assessment.
- Best practice: The DfE could also play a role in encouraging the capture and dissemination of best practice in the use of generative AI. The government needs to look carefully at which technologies work in classrooms today and decide which of those we need more of. There are opportunities to enhance learning and assessment with tried and tested means that already exist. At the same time, we cannot put generative AI back in its box. We have to embrace the opportunities for education while being clear-sighted about the limits.
Related news: Jill Duffy, OCR Chief Executive, writes in The Independent on the future role of technology and assessment.