Welcome to the Wild, Wild West of AI and the Higher Education Institution
Published by: WCET | 5/11/2023
Tags: Artificial Intelligence, ChatGPT, Higher Education, Policy
A perusal of Inside Higher Ed, the Chronicle of Higher Education, or the internet in general turns up countless fears that generative AI, especially in the form of large language models such as ChatGPT, will erode academic integrity.
Take Jeremy Weissman’s opinion piece in Inside Higher Ed where he compares ChatGPT with the early days of the COVID pandemic.
Calling ChatGPT and generative AI a “plague upon education,” Weissman opines “In these early days of the GPT spread, we are largely defenseless against this novel threat to human intelligence and academic integrity. A return to handwritten and oral in-class assignments—a lockdown response—may be the only immediate effective solution as we wait for more robust protections to arise.”
As a result of such fears, most of the discussion around generative AI and institutional policy has revolved around academic integrity, but there are myriad other areas that institutions need to be aware of and address through policy.
Recently, WCET conducted a survey of college and university leaders regarding the use of generative AI on their campuses. Of the more than 600 respondents, only 8 percent indicated that their institutions had implemented policies around artificial intelligence, and most of those policies (21 percent) addressed academic integrity. Another 57 percent of respondents reported that their institutions were planning or developing policies.
Note: Full analysis of this survey, including institutional recommendations, will be published in June.
That lack of institutional policy is borne out by research conducted by Primary Research Group earlier this year, which found that only 14 percent of college administrators reported the existence of institutional guidelines and only 18 percent of instructors reported having policies and guidelines on the use of generative AI in their classes.
A lack of other policies notwithstanding, academic integrity is often the first policy area that institutions and faculty address, and such policies run the gamut from completely outlawing any use of generative AI to allowing for its usage with appropriate attribution. For example, the University of Missouri’s general academic dishonesty policy states, “Academic honesty is fundamental to the activities and principles of the University… Any effort to gain an advantage not given to all students is dishonest whether or not the effort is successful.” The institution’s informational page on AI usage goes on to state, “Students who use ChatGPT and similar programs improperly are seeking to gain an unfair advantage, which means they are committing academic dishonesty.”
The Ohio State University takes a slightly different tack, outlawing the use of generative AI tools unless an instructor explicitly gives permission for students to use them. The institution’s academic integrity and artificial intelligence page states, “To maintain a culture of integrity and respect, these generative AI tools should not be used in the completion of course assignments unless an instructor for a given course specifically authorizes their use… [T]hese tools should be used only with the explicit and clear permission of each individual instructor, and then only in the ways allowed by the instructor.” Most institutions crafting generative AI academic integrity policy appear to be adapting existing academic integrity policies as well as ceding the development of such policy to instructors.
Although there is currently no comprehensive directory of course-level AI usage policies, Lance Eaton has begun to crowdsource examples of such policies. A perusal of this collection of classroom policies indicates that most can be categorized into two areas: bans on generative AI and use of generative AI with attribution. One such policy outlawing the use of AI reads, “Some student work may be submitted to AI or plagiarism detection tools in order to ensure that student work product is human created. The submission of AI generated answers constitutes plagiarism and is a violation of CSCC’s student code of conduct.” Or, as one instructor from Northeast Lakeview College submitted for that institution’s ENGL 1301, 1302, 2322, 2323, and 2338 courses: “Unless otherwise explicitly instructed, students are not allowed to use any alternative generation tools for any type of submission in this course. Every submission should be an original composition that the student themselves wholly created for this course.”
Most of the sample policies catalogued by Eaton treat AI-generated content like any other non-student-generated content and require attribution if used. For example, for theater courses at one small liberal arts college, syllabi contain the following policy: “All work submitted in this course must be your own. Contributions from anyone or anything else—including AI sources, must be properly quoted and cited every time they are used. Failure to do so constitutes an academic integrity violation, and I will follow the institution’s policy to the letter in those instances.” Some policies go further still. Ethan Mollick’s, at the Wharton School of the University of Pennsylvania, proclaims, “I expect you to use AI (ChatGPT and image generation tools, at a minimum), in this class. In fact, some assignments will require it. Learning to use AI is an emerging skill.” Mollick goes on to warn students, “Be aware of the limits of ChatGPT: If you provide minimum effort prompts, you will get low quality results… Don’t trust anything it says. If it gives you a number or fact, assume it is wrong unless you either know the answer or can check in with another source… AI is a tool but one that you need to acknowledge using. Please include a paragraph at the end of any assignment that uses AI explaining what you used the AI for and what prompts you used to get the results… Be thoughtful about when this tool is useful. Don’t use it if it isn’t appropriate for the case or circumstances.”
There are numerous policy areas beyond academic integrity that institutions need to take into consideration when determining AI usage on their campuses. Perhaps chief among these is data privacy and data security. Large language model artificial intelligence is built on the ingestion of massive amounts of data, and data entered into current generative models (such as ChatGPT) could be stored. Faculty and staff must therefore be cautioned against providing generative AI tools with FERPA-protected student data that might compromise student data privacy.

Additionally, institutions may want to consider intellectual property policies that address the creation of generative AI assisted works. There is currently considerable discussion around whether or not AI-generated work can be copyrighted, and institutions would benefit from intellectual property policies that speak directly to generative AI.

Finally, institutions should consider the ways in which generative AI can impact accessibility. Although generative AI can function as an accommodation for some students, not all generative AI tools are currently accessible to all users. As faculty begin to incorporate generative AI into their courses, institutions should consider what to do when an AI tool does not meet ADA accessibility requirements.
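On the data privacy front, some campuses may choose to pair policy with lightweight technical guardrails. The following is a minimal sketch, in Python, of what scrubbing likely student identifiers from a prompt before it is sent to an external generative AI service could look like; the `REDACTION_PATTERNS` table and the `redact` helper are hypothetical illustrations, not part of any vendor’s or institution’s toolkit.

```python
import re

# Hypothetical example patterns; a real tool would use the identifier
# formats specific to the institution's student information system.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "STUDENT_ID": re.compile(r"\b\d{9}\b"),  # e.g., a nine-digit campus ID
}

def redact(text: str) -> str:
    """Replace likely student identifiers with placeholder tokens
    before the text is sent to an external generative AI service."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Summarize the advising notes for student 123456789 (jdoe@example.edu)."
print(redact(prompt))
# Prints: Summarize the advising notes for student [STUDENT_ID] ([EMAIL]).
```

Pattern matching alone would not catch names or other free-text identifiers; a production approach would layer in named-entity detection and human review before any FERPA-protected record leaves campus systems.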
Institutions cannot afford to wait to address generative AI and should begin developing policies now. As Daniel Dolan and Ekin Yasin write in their March 23, 2023 Inside Higher Ed piece, “A Guide to Generative AI Policy Making,” institutions should respond to generative AI with speed, strategic purpose, and “inclusive focus on equitable student value.”
Although WCET will be providing our members with more specific recommendations in the coming months, there are general steps institutions can take now.
As one respondent in the recent WCET generative AI survey put it, “It’s the wild, wild west. And we don’t have any horses.”
Generative AI isn’t going anywhere; we have seen its use and complexity grow by leaps and bounds in just the last six months. Just as institutions have developed intellectual property, privacy, data security, academic integrity, and accessibility policies, they now need to revisit those policies in light of generative AI.
We cannot afford to stick our collective heads in the sand. It’s time to saddle up and ride into the wild, wild west.