The North Dakota University System plans to launch a task force that will investigate the risks and opportunities associated with the use of the ChatGPT artificial intelligence platform on campus.
The task force is expected to include the heads of all universities in the state system and be led by Andrew Armacost, the president of the University of North Dakota. The creation of the task force was first reported Wednesday by Joe Banish of the Grand Forks Herald.
ChatGPT’s language generation model is trained to produce text that could pass as written by a human. With the right prompts, the platform could be used to write a college-level essay — a use case that has sparked concern among educators worried about cheating, plagiarism and waning standards of academic integrity.
“ChatGPT quickly generated significant buzz around our campus,” Armacost told the Grand Forks Herald. “I’ve experimented with it myself and its responses can be quite sophisticated.”
A professor at the University of Pennsylvania’s Wharton School published a research paper earlier this year stating that responses generated by ChatGPT would earn a B- or B grade on an MBA-level exam in operations management. Professors at the University of Minnesota have similarly reported that ChatGPT’s answers to law school exams would earn passing grades, though at a level low enough to place a real student on academic probation.
The potential for students to rely too heavily on ChatGPT, asking it to write essays or answer homework questions, is a reported focus of North Dakota’s task force, which is also expected to consider positive applications of the platform in curriculum development.
OpenAI, the startup behind ChatGPT, short for Chat Generative Pre-trained Transformer, says on its website that it is open to receiving feedback from educators on topics such as academic dishonesty, plagiarism detection, accuracy and the perpetuation of false information or harmful biases.
While OpenAI says that ChatGPT can help educators brainstorm lesson plans or develop quiz questions, the company states that the tool should not be used to assess students’ work.
“Models today are subject to biases and inaccuracies, and they are unable to capture the full complexity of a student or an educational context,” the OpenAI website reads. “Consequently, using these models to make decisions about a student is not appropriate.”