Policies
A global UNESCO survey of more than 450 schools and universities, conducted in May, found that fewer than 10% had developed institutional policies or formal guidance related to AI.
“It’s all ad hoc. It’s all piecemeal,” said Dr. Tony Kashani, affiliate faculty and dissertation chair for the doctor of education in educational and professional practice at Antioch University.
Similarly, a June report from Tyton Partners noted that “only 3% of institutions have developed a formal policy regarding the use of AI tools, and most (58%) indicated they will begin to develop one ‘soon.’”
Although Gonzaga University hasn’t developed an AI-specific policy, its academic integrity policy broadly covers cheating that involves the use of AI.
“Theoretically, students are not allowed to use the tool that is not specified by the professor to help them in their work,” said Dr. Justin Marquis, director of instructional design and delivery at the university.
Likewise, Dr. Melissa Vito, vice provost of academic innovation at The University of Texas at San Antonio, noted that when her institution began considering AI policies in early 2023, most of the discourse centered on the concern that the technology would lead to cheating. The university has since moved to a more engaged and educational approach to the issue.
“I didn’t want to start with policies because we don’t really know exactly what we’re dealing with,” said Vito. “We did decide not to go down a policy route where it was either going to be this or that but to keep faculty in their role to govern and guide their courses. It’s a dynamic issue. It’s not like a one-and-done. It continues to evolve.”
As a result, a group of “faculty champions” worked to develop broad guidelines for faculty on the use of AI, along with best practices that can be updated as understanding of the models continues to evolve. Vito explained that faculty can bar the use of AI in their classes by saying so on their syllabi.
National Louis University has issued guidance to its faculty regarding AI, but its recommendations are not prescriptive.
“We believe in academic freedom and our faculty having the opportunity to make those choices,” said Dr. Bettyjo Bouchey, vice provost for digital strategy and operations at the university. If faculty allow the use of AI, she added, learning assessments should be reoriented so that AI doesn’t diminish the learning process.
Like others, Barnard College believes in faculty autonomy. For that reason, it’s up to individual faculty members to decide whether and how to allow AI in their classes.
“It’s a spectrum; there’s no mandate to use it or not to use it,” said Dr. Melanie Hibbert, director of instructional media and academic technology services and the Sloate Media Center at the college.
The Barnard College Center for Engaged Pedagogy developed resources to help faculty reach a decision about the use of AI in their classes. One graphic walks them through various considerations to arrive at one of four stances: closed, restricted, conditional, or open. For example, the restricted stance encourages faculty to “consider which learning outcomes may be negatively impacted.”
Sample syllabus statements were also developed, both for faculty who forbid AI and for faculty who are open to it. The statement open to AI notes that students will be informed about when and how to use tools like ChatGPT, while also advising that uses outside the stipulated guidelines are not permitted.