Step 3. Teach Students How to Use AI Tools Ethically and Responsibly
Each institution has slightly different policies on how academics, professional staff, and students may use AI in their work.
If AI tools are permitted in teaching and learning, the first step for educators is to understand the AI governance landscape and relevant policies at their affiliated institutions before designing and delivering training that helps students understand AI, AI ethics, and AI governance and become AI-literate. Educators may design and deliver AI literacy training based on a Bloom's taxonomy mapped to AI literacy.
As an educator, I have noticed that the UK higher education sector has focused on exploring the use of AI tools in teaching, learning, and research, and on increasing efficiencies, since ChatGPT became one of the most popular AI tools in 2023. The sector was initially concerned with whether AI tools could be used ethically and responsibly in teaching and learning, and some institutions chose to discourage the use of AI tools in assessment because of the risk of plagiarism.
For example, at the Universities of Oxford and Cambridge, the "unauthorised use of AI tools in examinations and other assessed work" is considered academic misconduct. Researchers, including Warschauer, also suggest that AI tools may discourage students from writing confidently and critically.
Despite these concerns, the UK higher education sector has recently focused more on AI ethics and governance. For example, in July 2023 the Russell Group published five principles for empowering educators and students to use AI tools ethically and responsibly, and in September 2023 the Quality Assurance Agency for Higher Education responded to the Department for Education's call for evidence on generative artificial intelligence in education.