AI in classrooms can raise red flags on privacy
Artificial intelligence (AI) is making its way into classrooms through a wide range of products and services, bringing both great promise and real concern about what it means for students, teachers and administrators alike.
The topic was center stage at the Consortium for School Networking (CoSN) annual conference earlier this year, and the association has now released a new report for its members that outlines the kinds of privacy issues that AI may raise.
For instance, many AI applications record voices and may store the recordings on the technology provider’s servers. Audio recordings that include a child’s voice are defined as personal information under the Children’s Online Privacy Protection Act (COPPA), with only a narrowly defined exception. School districts’ legal counsel can advise on how federal and state student data privacy laws and regulations apply, and CoSN suggests consulting them before introducing any AI software into classrooms.
The K-12 association provided a checklist of 10 questions to ask before bringing AI technologies into classrooms, including:
- Always start with “why.” Why is it important to bring AI into the classroom? What is the educational goal and how will AI help achieve it?
- What data will be shared with the technology provider? Will any personal information be used only to support classroom purposes? When will it be deleted?
- If the AI technologies make decisions based on algorithms, are teachers informed about how best to interpret, use and build context around those decisions, so that their expertise guides any application or implementation?
- Do concerns about data management and protection outweigh the benefit to students?
- Has the school or school system provided information to parents explaining the use and benefits of AI technologies, including measures taken to protect student data in partnership with the providers?