
AI can help higher ed, but biased data can harm, warns data scientist

Algorithms help predict how students, faculty and institutions can succeed, but the data they rely on can harm stakeholders, data scientist Cathy O'Neil said during a recent online conference.

Artificial intelligence can help predict how students, faculty and higher education institutions can succeed, but understanding how AI algorithms work and how they can fail is essential for avoiding harm, data scientist Cathy O’Neil said during an online conference Thursday.

Algorithms are increasingly used in higher education with the goal of improving instruction and uncovering paths to success for students and for universities themselves. But according to O’Neil, biased data and misused algorithms can end up doing stakeholders more harm than good.

“Algorithms are everywhere. They’re basically replacing bureaucratic processes, deciding who deserves a job, who gets insurance at what costs, who gets a loan. It even decides how long someone should go to jail based on what they think the risk of being rearrested is,” O’Neil said. “[But] no algorithm is itself a bad algorithm or a good algorithm. I can only really say the problems with algorithms in contexts.”

Mount St. Mary’s University, for example, fired several university leaders in 2016 after the university’s president used a survey tool to predict which freshmen wouldn’t be successful in college and pushed them out to improve retention rates.


“The whole idea here was to get rid of these struggling freshmen before the official count day for the U.S. News and World Report,” O’Neil said.

Although the algorithm in the Mount St. Mary’s case harmed students, the same kind of data, identifying what students might be struggling with, can instead be used to help them, she said.

At the University of Texas at Austin, advisers used an algorithm similar to Mount St. Mary’s to identify struggling students. But instead of using the results to dismiss students from the university, advisers connected them with resources, like tutoring, to help them overcome the challenges they were facing.

In addition to how algorithms are used, the data they rely on to make predictions can also determine whether they cause harm, O’Neil said.

In the context of university advising, data about a student’s interests, educational background, socioeconomic status and place of residence can be used to help them decide what to study. But the quality of the data determines the quality of the algorithm’s decisions, O’Neil said.


“It’s really dangerous to take data that has embedded cultural bias and then plug it into an algorithm that memorizes and propagates this bias because it will cause bias in the future,” she said.
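Her point about memorizing and propagating bias can be made concrete with a small sketch. The example below is hypothetical and not from O’Neil’s talk: it uses synthetic records in which students from one ZIP code were historically under-served, and a toy model that simply memorizes group-level outcome rates, so the bias embedded in the data resurfaces in every future prediction.

```python
# Hypothetical sketch of a model "memorizing and propagating" bias.
# All data here is synthetic and illustrative only.

from collections import defaultdict

# Synthetic historical records: (zip_code, succeeded). Suppose past advising
# under-served students from ZIP "B", so their recorded outcomes are worse --
# a bias embedded in the data, not a fact about the students themselves.
history = [("A", True)] * 80 + [("A", False)] * 20 \
        + [("B", True)] * 40 + [("B", False)] * 60

# "Training": memorize the historical success rate per ZIP code.
counts = defaultdict(lambda: [0, 0])  # zip_code -> [successes, total]
for zip_code, succeeded in history:
    counts[zip_code][0] += int(succeeded)
    counts[zip_code][1] += 1

def predict_success(zip_code):
    """Predict 'will succeed' if the historical rate for that ZIP is >= 50%."""
    successes, total = counts[zip_code]
    return successes / total >= 0.5

# The model now flags every future student from ZIP "B" as unlikely to
# succeed, reproducing the historical bias for every new cohort.
print(predict_success("A"))  # True
print(predict_success("B"))  # False
```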

To address these potential failures, O’Neil said institutions need to carefully consider how algorithms can have a negative impact.

“The most important question is, for whom does this algorithm fail?” she said. “Who will be harmed by this algorithm and what would that harm look like?”
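One way to operationalize that question, sketched below with invented data and an invented toy model rather than anything from O’Neil’s talk, is to audit an algorithm’s error rate per group instead of relying on overall accuracy, which can look strong while one group absorbs nearly all of the mistakes.

```python
# Hypothetical sketch of the audit question "for whom does this algorithm
# fail?": compare error rates across groups, not just overall accuracy.
# The records and the predict function are invented for illustration.

def error_rate_by_group(records, predict):
    """records: list of (group, features, actual_outcome) tuples."""
    errors, totals = {}, {}
    for group, features, actual in records:
        totals[group] = totals.get(group, 0) + 1
        if predict(features) != actual:
            errors[group] = errors.get(group, 0) + 1
    return {g: errors.get(g, 0) / totals[g] for g in totals}

# Example: 90% overall accuracy hides the fact that the model is wrong for
# every commuter student -- exactly the harm a per-group audit surfaces.
records = (
    [("on-campus", {"gpa": 3.5}, True)] * 90   # predicted correctly below
  + [("commuter",  {"gpa": 3.5}, False)] * 10  # predicted incorrectly below
)
predict = lambda features: features["gpa"] >= 3.0  # naive toy model
print(error_rate_by_group(records, predict))
# {'on-campus': 0.0, 'commuter': 1.0}
```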


Written by Betsy Foresman

Betsy Foresman was an education reporter for EdScoop from 2018 through early 2021, where she wrote about the virtues and challenges of innovative technology solutions used in higher education and K-12 spaces. Foresman also covered local government IT for StateScoop, on occasion. Foresman graduated from Texas Christian University in 2018 — go Frogs! — with a BA in journalism and psychology. During her senior year, she worked as an intern at the Center for Strategic and International Studies in Washington, D.C., and moved back to the capital after completing her degree because, like Shrek, she feels most at home in the swamp. Foresman previously worked at Scoop News Group as an editorial fellow.
