From anonymity to data ownership, George Mason University professor Priscilla Regan identifies the key nodes in the policy discussion.
Priscilla Regan, professor at George Mason University. (FTC)
"It's important that we sort of separate out and identify the separate concerns so that we can ensure that policy is really addressing all of those concerns," she said at a panel on student privacy and education technology hosted by the U.S. Federal Trade Commission in December.
During her remarks, Regan offered a framework for thinking about privacy in education, identifying six broad concerns that educators, technology companies and policymakers ought to keep in mind when discussing the use and safeguarding of student data.
The big six, according to Regan:
- Organizational information privacy concerns: Federal and state laws generally regulate the collection, use, retention and disclosure of personal information. This becomes more complex as greater quantities of data are collected and as qualitative information — such as behavior — is derived and collected. Parental concerns are heightened as citizens generally become more aware of data collection activities.
- Anonymity: Part of privacy, in many people’s minds, is the ability to remain anonymous, sometimes called “practical obscurity.” As more and more data is gathered and retained, it becomes more difficult to anonymize that information, and the evolving use of technologies such as artificial intelligence makes remaining anonymous less feasible. “This is where we get into sort of the algorithmic searches, the use of artificial intelligence, the fact that personally identifiable information is sort of a less meaningful concept,” Regan said.
- Surveillance and tracking: As personalized learning, online learning and online testing become more common, applications are monitoring and analyzing what students are doing, when and where they are working, and who else may be working on similar things. “Things like how long it might take to read a page, the patterns and the ways in which students are reading and responding, which gives some indication, then, of the students' thought processes,” Regan said, all of which facilitate ongoing monitoring.
- Autonomy: There is a risk that using analytics to determine students’ strengths and weaknesses and building a personalized learning experience around that may narrow students’ options too early, by limiting the avenues for their curiosity and creativity. And today’s students are more aware of being monitored and channeled toward particular disciplines; they may self-censor what they’re doing.
- Bias, discrimination and due process: Another part of the concept of privacy is fairness — treating people equally and without discrimination.
“This is obviously critical in the edtech and the education environment generally, because of the importance of education to equal opportunity,” Regan said. The kinds of algorithmic analyses in place now can make bias and discrimination harder to identify, and thus harder to reverse. As with concerns about autonomy, judging students early may lead to discrimination.
- Data ownership: As data is generated, collected and analyzed, the question arises of who owns the information: the individual, the school or a third party such as the application vendor? Generally, school records are owned by the school, but laws such as FERPA (the Family Educational Rights and Privacy Act) ensure parental rights. This issue is something to be addressed in schools’ contracts with vendors.
Reach the reporter at firstname.lastname@example.org and follow her on Twitter @WaitPatience and @edscoop_news.