
Researchers want to know how bad actors are tricking machine learning

University of Arizona researcher Gregory Ditzler aims to improve the ability of algorithms to spot false data and adapt to changing environments.
Robot hands typing on a keyboard (Getty Images)

To keep emerging technologies like autonomous vehicles and facial recognition systems secure, researchers at the University of Arizona are developing algorithms to identify bad actors carrying out malicious cyber activity.

According to Gregory Ditzler, an assistant professor of electrical and computer engineering at the University of Arizona, many machine learning algorithms can be easily tricked when false data is introduced into their environments, a technique adversaries can exploit to threaten devices and digital systems. Ditzler, who conducts machine learning research at UA, aims to improve these algorithms and, in turn, strengthen cybersecurity. The National Science Foundation has recognized his work as critically important to the future of cybersecurity, awarding him a five-year, $500,000 Faculty Early Career Development Award to support his research, the university announced Monday.

In recent years, machine learning has been increasingly integrated into digital systems and tools like voice assistants to recognize speech, cameras to detect facial features, GPS services to plan optimal routes and self-driving cars to navigate traffic.

“Machine learning is such a hot topic right now because it’s integrated into everything we use in our daily lives – from the computers we use to create Word documents to the cell phones we use to make phone calls, take photos and text,” Ditzler said in a press release.


But according to Ditzler, algorithms can often be tricked into making the wrong calculations.

In a study of Tesla’s self-driving vehicles conducted by Keen Security Lab, researchers were able to confuse the car’s autopilot system, which relies on machine learning, by placing stickers on the road that imitated the appearance of a lane line. The car detected the stickers, interpreted them to mean the lane was veering left and steered directly into oncoming traffic. In his research at the University of Arizona, Ditzler is investigating what causes this kind of confusion.

Beyond autonomous vehicles, if an adversary introduces false data into an algorithm’s learning environment, tricking it into misidentifying features and making incorrect calculations, a wide swath of technologies that rely on machine learning can be exposed to cyber threats.
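To see how that kind of data poisoning works in principle, here is a minimal sketch that flips labels in a classifier’s training data and compares the result with a cleanly trained model. The synthetic dataset, the scikit-learn logistic regression model and the 30 percent poisoning rate are all assumptions chosen purely for illustration; this is not Ditzler’s method or any system described in the article.

```python
# Illustrative sketch only: a label-flipping poisoning attack on a simple
# classifier, using synthetic data. All choices here are assumptions made
# for demonstration, not a description of any real deployed system.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic two-class data standing in for a real sensor or traffic feed.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Clean model: trained on unmodified data.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Poisoned model: an "adversary" flips the labels of 30% of the training
# set, injecting false data into the learning environment.
rng = np.random.default_rng(0)
poisoned = y_train.copy()
flip_idx = rng.choice(len(poisoned), size=int(0.3 * len(poisoned)), replace=False)
poisoned[flip_idx] = 1 - poisoned[flip_idx]
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, poisoned)

print("clean accuracy:   ", accuracy_score(y_test, clean_model.predict(X_test)))
print("poisoned accuracy:", accuracy_score(y_test, poisoned_model.predict(X_test)))
```

Running the sketch typically shows the poisoned model’s accuracy dropping well below the clean model’s, even though nothing about the model itself changed, only the data it was allowed to learn from.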

Ditzler’s research also looks at machine learning in nonstationary environments. Algorithms can be developed to recognize a particular set of security threats, like fraudulent activity in bank accounts, but with the threat landscape constantly evolving, machine learning models need to be able to learn continuously and adapt to changes.

“If you took data from 10 years ago to make a model for investing in the stock market and apply it to today’s economy, it wouldn’t work,” Ditzler said. “Many algorithms are static. You train them and deploy them, but realistically they have to be able to change over time.”
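As a rough illustration of that point, the sketch below trains a static model on synthetic “old” data and compares it with a model that keeps updating as the data distribution shifts. The data, the scikit-learn SGD classifier and the amount of drift are all assumptions for demonstration and do not come from Ditzler’s research.

```python
# Illustrative sketch only: a static model degrades when the data
# distribution drifts, while an incrementally updated model adapts.
# The "old" and "new" distributions are synthetic assumptions.
import numpy as np
from sklearn.linear_model import SGDClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)

def sample(center, n=1000):
    """Two classes separated along the first feature, offset by `center`."""
    y = rng.integers(0, 2, n)
    X = rng.normal(size=(n, 2)) + np.c_[center + 3 * y, np.zeros(n)]
    return X, y

# Train once on "10-year-old" data, then the world shifts.
X_old, y_old = sample(center=0.0)
X_new, y_new = sample(center=4.0)   # same classes, but the features have drifted

static_model = SGDClassifier(loss="log_loss", random_state=0)
static_model.fit(X_old, y_old)
print("static model on drifted data:",
      accuracy_score(y_new, static_model.predict(X_new)))

# An adaptive model keeps learning from a small stream of recent data.
adaptive_model = SGDClassifier(loss="log_loss", random_state=0)
adaptive_model.partial_fit(X_old, y_old, classes=[0, 1])
for _ in range(5):  # a few incremental passes over the newest samples
    adaptive_model.partial_fit(X_new[:200], y_new[:200])
print("adaptive model on drifted data:",
      accuracy_score(y_new[200:], adaptive_model.predict(X_new[200:])))
```

The static model keeps applying a decision boundary fit to conditions that no longer exist, while the incrementally updated model shifts its boundary as new samples arrive, which is the adaptability Ditzler describes.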


Written by Betsy Foresman

Betsy Foresman was an education reporter for EdScoop from 2018 through early 2021, where she wrote about the virtues and challenges of innovative technology solutions used in higher education and K-12 spaces. Foresman also covered local government IT for StateScoop, on occasion. Foresman graduated from Texas Christian University in 2018 — go Frogs! — with a BA in journalism and psychology. During her senior year, she worked as an intern at the Center for Strategic and International Studies in Washington, D.C., and moved back to the capital after completing her degree because, like Shrek, she feels most at home in the swamp. Foresman previously worked at Scoop News Group as an editorial fellow.
