
How higher ed is handling AI ethics

As AI advances, universities and colleges are researching its potential effects on the humans it’s intended to serve.


As artificial intelligence advances, powering everything from virtual assistants to autonomous vehicles, universities and colleges are researching its potential effects on the humans it’s intended to serve.

Along with establishing research centers and proposing new programs to explore concerns around incorporating AI into everyday life, higher education is also looking at how best to adopt AI for everyday operations. Researchers at the higher education IT organization Educause recently listed AI as one of the six technologies most crucial to the future of higher education. However, they also found that AI carries the highest perceived risk among those technologies.

Developing AI guidelines

A University of California working group looked at opportunities and potential ethical problems with AI and published a report of its findings in October. The system said it plans to adopt the report’s recommendations, which included publishing a database of any AI use and creating AI departmental councils. “Other entities have deployed AI and then realized that it’s producing discriminatory or less efficient outcomes,” Brandie Nonnecke, the founding director of UC’s CITRIS Policy Lab and one of the working group’s co-chairs, said in a press release. “We’re at a critical point where we can establish governance mechanisms that provide necessary scrutiny and oversight.”

Funding AI research

Pennsylvania State University’s Center for Socially Responsible Artificial Intelligence offers seed funding for collaborative and pilot projects, targeting proposals that consider social and ethical implications of using AI. Its first funding round, announced in March, went to five early-stage projects, including research on how to use AI to support students with autism and a project exploring the psychology of how people interact with technology. The center began accepting submissions in October for its next funding round.

Launching AI labs

Many colleges and universities are launching labs and research centers dedicated to AI ethics and research, such as the University of Notre Dame, which last year partnered with IBM on an ethics lab. Syracuse University in 2019 announced an AI policy and ethics research center. The University of Rhode Island launched a lab in its library in 2018. Though the lab is not specifically dedicated to AI ethics, hosting it in the library is intended to prompt a broader discussion about AI across disciplines, Karim Boughida, dean of university libraries at URI, told EdScoop. “You’re not here just to program, you have to be aware of your biases, the biases of others and the biases in the data,” he said. “Many people who are engineers say, ‘Oh, that’s not my problem, I just program.’”

Designing AI certificates

Some higher education institutions are now offering certificates for continuing education on AI and AI ethics. San Francisco State University in 2019 launched a graduate certificate program that educates students on decision-making and ethical concepts associated with AI-based systems, such as autonomous vehicles. Georgia State University in October announced a graduate certificate called “Trustworthy AI Systems,” which covers data security, privacy and ethics. The university claimed it was the first AI certificate to also address cybersecurity.

Exploring robotics ethics

The University of Texas at Austin’s “Convergent, Responsible, and Ethical AI Training Experience for Roboticists,” or CREATE, program launched this year with the goal of informing trainees and graduate students about the privacy and security implications of creating robots that work alongside humans or in homes. “Given the potentially disruptive consequences of artificial intelligence (AI)-based systems, humanity cannot afford to wait until problems arise to consider their impacts on society,” an award abstract read. “AI’s ethical and societal implications must be considered as systems are designed, developed, and deployed.”