Most Rio students are familiar with GoGuardian, software that allows teachers to monitor students’ online activity in the classroom. While primarily used to keep students on task and prevent cheating, GoGuardian also has other uses.
In 2018, GoGuardian launched Beacon, a tool designed to warn administrators when students’ online activity shows signs that they intend to harm themselves or others.
“By combining real-time scans and a proprietary machine-learning engine, only GoGuardian is able to provide comprehensive, context-aware analytics to intelligently assess the content students are creating and interacting with online via web searches, social media, chat, forums, email and online collaboration tools,” GoGuardian said in a statement.
GoGuardian says that Beacon aims to help administrators prevent injury and violence among their students, and to help these students get assistance from professionals before it’s too late.
On its website, GoGuardian sorts the terms and phrases that Beacon flags into five categories: active planning, suicide ideation, self-harm, help and support, and suicide research.
At first glance, Beacon seems like a helpful tool for keeping students safe in a society experiencing high rates of self-harm among young people.
GoGuardian states on its website that the software was developed with the help of mental health experts, and that it has prevented physical harm to more than 18,000 students since March 2020.
But the validity of this claim is called into question when we look at specific instances of Beacon at work.
The New York Times highlighted several stories involving Beacon flagging a student’s work. In one, Beacon recognized that a student had messaged her friend that she was planning to overdose on anxiety medication, and police arrived in time to help.
But in another, words from a poem a student had written years earlier were incorrectly flagged, sending police to the student’s home late at night and traumatizing the family.
At the end of the day, Beacon’s system is AI. And as teachers and students have learned with the surge in use of chatbots like ChatGPT, AI makes mistakes. In this case, those mistakes can be both costly and dangerous.
Beyond Beacon, GoGuardian has already made repeated errors in identifying content to block for students.
“We identified multiple categories of non-explicit content that are regularly marked as harmful or dangerous, including: College application sites and college websites; counseling and therapy sites; sites with information about drug abuse; sites with information about LGBTQ issues; sexual health sites; sites with information about gun violence; sites about historical topics; sites about political parties and figures; medical and health sites; news sites; and general educational sites,” said Jason Kelley, a writer for the Electronic Frontier Foundation, a non-profit organization aimed at protecting civil liberties online.
Beacon is likely to have similar issues to the rest of GoGuardian’s flagging features.
In addition, the Electronic Frontier Foundation found, using data from several school districts, that Beacon was effective only at identifying the most obvious, direct threats of harm, those categorized as “active planning.”
In the other categories, Beacon has been far less successful.
The success of Beacon also relies on the people it notifies. To actually help students, Beacon must not only correctly identify signs of potential danger; it must also alert professionals who can genuinely help.
But the same issue arises here as with many other attempts at mental health support in schools: school officials are rarely trained mental health professionals.
In some scenarios, GoGuardian can contact emergency services directly, which is unlikely to actually help the student and only adds stress to an already tense situation.
If the flag is a false positive, opening the door to multiple armed officers has no positive outcome. If the student actually is in danger, sending a trained mental health professional instead would likely be more helpful.
Ultimately, GoGuardian’s Beacon is another attempt at addressing an important issue in the education system.
However, a much better solution would be to actually invest in hiring trained mental health professionals on every campus, rather than trying to replace those roles with AI.
Beacon could even have a role in this kind of system. Other monitoring software, such as Honorlock, uses a combination of machine learning and real people. Honorlock, which monitors students during assessments to prevent cheating, flags a student if there is too much noise, if the student moves around too much, or if it otherwise suspects academic integrity may be breached.
Instead of immediately shutting down the assessment or taking some other extreme measure, Honorlock alerts an employee, who checks whether the student actually appears to be cheating. If so, further measures can be taken. If it was a false positive, no harm is done.
Something similar could easily be applied to Beacon: the AI could flag a potential sign of danger, which would then be sent to a real person to determine whether there is an actual threat or a false positive.
This system would still allow Beacon to have a positive influence while avoiding potentially negative, even traumatic, repercussions for students.