AI sometimes makes the grade

Can a computer program be taught to measure student success in terms beyond a grade-point average? That’s one goal behind the growing use of artificial intelligence across universities, which are increasingly turning to AI to help with admissions, financial aid and campus safety.

At SUNY Empire State College, New York State’s online public university, where more than half of students are between 25 and 49 years old, an AI-powered chatbot installed earlier this year has cut calls to the student information center by nearly 30%. More questions about course selections and financial aid packages are now answered by a bot named “Blue,” after the university’s bluebird mascot.

Other AI-based software is designed to scan the internet for students’ social-media content and help university officials identify students who might pose a threat to themselves or others. Sprinklr, a customer-management software company, is selling a version of its social-media monitoring software to the higher-ed market that’s capable of identifying keywords in what students post on platforms like Reddit, Facebook, Instagram and Twitter, flagging suspicious content for closer review.

But AI in higher education has also run into some of the same bias issues found in other settings, like law enforcement. The University of Texas at Austin abandoned an AI-driven admissions system, developed by its graduate computer science department, that reduced staff time spent reviewing applications by 74% but was found to have the potential to reinforce historical biases based on superficial criteria.

One important consideration, said JC Bonilla, chief analytics officer at education software publisher Element451, is that AI processes are only as good as the humans who train them. “We need administrators to be better at understanding what student success is so that AI can pick it up,” he told EdScoop last month.
