The growing optimism around the use of artificial intelligence tools in higher education underscores the need for guardrails and guidance on best practices, according to University of North Dakota faculty members who spoke on a panel this week.
The panelists, who were joined by OpenAI President and ChatGPT co-creator Greg Brockman, said the technology could revolutionize their fields of study and warned that discussions around the ethical uses of AI should not be overshadowed by its novelty.
“You cannot talk about the benefits without talking about the risks,” Brockman said. “You cannot talk about all the wonderful things that we’ll do without also thinking about: ‘But what if it got in the wrong hands?’ I think the truth is that we as a society again need to pick. We need to think about how we want to navigate through this, and there’s going to be ups and downs.”
Faculty panelists in fields involving the arts and humanities expressed more concerns over the ethical implications of artificial intelligence and its effects in the classroom, whereas professors in science and mathematics said their fields have previously experienced similar shifts with tools like the online answer engine Wolfram Alpha.
“I think we were hit by earlier and simpler things that don’t really resemble intelligence so much,” said Bryce Christopherson, an assistant professor of mathematics at UND. “But I think it will start to change the discipline of mathematics as these things begin to write proofs. … I think it will be a confusing line we have to grapple with and I don’t know how we will.”
The emergence of generative AI will help revolutionize entire fields of study, the panelists agreed. LexisNexis, which contains one of the largest legal databases, is planning to integrate generative AI into its services later this year. It will be able to write cited memos and briefs and enable lawyers to do their jobs more efficiently, said Carolyn Williams, a professor with UND's School of Law.
“They’re going to be able to write these types of documents much more quickly and hopefully lower the price of legal services to get justice and reform for groups of people that need pro bono services or who need a lower cost of services,” Williams said.
When thinking about how educators will have to adapt to the new opportunities AI presents, some of the faculty panelists suggested reframing the way they teach students to write. They also weighed the idea of using AI to assess students' work.
“I would hesitate to give feedback to students by running it through something like ChatGPT and running it through generative AI,” Williams said. “As a professor, one of the things I want to be able to [do] is learn and understand where it is [a student] might be struggling … and that’s such a personal thing that as a professor I would personally not want to use generative AI in that way to give feedback to my students.”
‘Very good at mediocrity’
But outside of providing feedback and grading work, most panelists were hesitant to draw hard lines regarding the use of AI.
“I think, right now, if a student was going to write a paper, I think generative AI and ChatGPT are very good at mediocrity,” said Emily Cherry Oliver, department chair of theater arts at UND. “But a year from now, I can’t guarantee that. So I think where I lean toward are the ethics and that’s where humans need to be part of creating the ethics around the tool.”
Beyond higher education, Brockman said he and his team at OpenAI are talking with world leaders about the need for an international organization to develop cross-governmental cooperation and help develop best practices for artificial general intelligence tools, similar to how the International Atomic Energy Agency advocates for the peaceful use of nuclear energy. He said governments are far more receptive to those conversations now than they were even a year ago.
“I think that if we can actually come together as humanity to build this kind of system, then I think we actually have a shot at something just truly unimaginable,” Brockman said. “You realize that we do need to work together, we do need some sort of international guardrails and oversight.”