Most Rio students are familiar with GoGuardian, software that allows teachers to monitor students’ online activity in the classroom. While primarily used to keep students on task and prevent cheating, GoGuardian also has other uses.
In 2018, GoGuardian launched Beacon, a tool meant to act as a warning system, alerting administrators when students’ online activity shows signs that they intend to harm themselves or others.
“By combining real-time scans and a proprietary machine-learning engine, only GoGuardian is able to provide comprehensive, context-aware analytics to intelligently assess the content students are creating and interacting with online via web searches, social media, chat, forums, email and online collaboration tools,” GoGuardian said in a statement.
GoGuardian says that Beacon aims to help administrators prevent injury and violence among their students, and to help these students get assistance from professionals before it’s too late.
On its website, GoGuardian sorts the terms and phrases flagged by Beacon into five categories: active planning, suicide ideation, self-harm, help and support, and suicide research.
At first glance, Beacon seems like a helpful tool for keeping students safe at a time when rates of self-harm among youth are high.
GoGuardian states on its website that the software was developed with the help of mental health experts, and that it has prevented physical harm to more than 18,000 students since March 2020.
But the validity of this claim is called into question when we look at specific instances of Beacon at work.
The New York Times has highlighted several stories of Beacon flagging a student’s writing. In one, Beacon detected that a student had messaged her friend that she was planning to overdose on anxiety medication, and police were able to arrive fast enough to help.
But in another, words from a poem a student had written years earlier were incorrectly flagged, sending police to the student’s home late at night and traumatizing the family.
At the end of the day, Beacon’s system is AI. And as teachers and students have become increasingly aware amid the spike in use of chatbots like ChatGPT, AI can make mistakes. In this case, those mistakes can be both expensive and dangerous.
Aside from Beacon, GoGuardian has already made repeated errors in identifying content to block for students.
“We identified multiple categories of non-explicit content that are regularly marked as harmful or dangerous, including: College application sites and college websites; counseling and therapy sites; sites with information about drug abuse; sites with information about LGBTQ issues; sexual health sites; sites with information about gun violence; sites about historical topics; sites about political parties and figures; medical and health sites; news sites; and general educational sites,” said Jason Kelley, a writer for the Electronic Frontier Foundation, a non-profit organization aimed at protecting civil liberties online.
Beacon is likely to have similar issues to the rest of GoGuardian’s flagging features.
In addition, the Electronic Frontier Foundation found, using data from several school districts, that Beacon was only effective at identifying the most obvious, direct threats of harm, those categorized as “active planning.”
In the other categories, however, Beacon has been less successful.
Beacon’s success also relies on the people it notifies. To actually help students, Beacon must not only correctly identify signs of potential danger; it must also alert professionals who can actually provide that help.
But the same issue arises here as with many other attempts at mental health support in schools: school officials are rarely trained mental health professionals.
In some scenarios, GoGuardian contacts emergency services directly, which is unlikely to actually help the student and only puts more stress on the situation.
If the flag is a false positive, opening the door to multiple armed officers has no positive outcome. And if the student is actually in danger, sending a trained mental health professional instead is likely to be more helpful.
Ultimately, GoGuardian’s Beacon is another attempt at addressing an important issue in the education system.
However, a much better solution would be to actually invest in hiring trained mental health professionals on every campus instead of trying to replace those roles with AI.
Beacon could even have a role in this kind of system. Other monitoring software, such as Honorlock, uses a combination of machine learning and real people. Honorlock, which monitors students during assessments to prevent cheating, flags a student if there is too much noise, if they move around too much, or if it otherwise believes academic integrity may be breached.
Instead of immediately shutting down the assessment or taking some other extreme measure, Honorlock alerts an employee, who checks whether the student may actually be cheating. If so, other measures can be taken; if it was a false positive, no harm is done.
Something similar could easily be applied to Beacon. The AI could flag a potential sign of danger and send it to a real person, who would determine whether there is an actual threat or whether it was a false positive.
This system would still allow Beacon to have its positive influence while avoiding potentially negative, even traumatic, repercussions for students.
Amanda • Feb 13, 2025 at 11:44 PM
I believe the author proposed a good solution in using both AI and real people. They also had lots of clear evidence showing when AI hasn’t worked and when it has. I really like how she mentioned having mental health professionals on campus but also touches on how there are too few trained professionals for every campus to have one.
Matt Teymouri • Feb 13, 2025 at 11:41 PM
Berry does a great analysis and comparison of the few cases where Beacon had a positive result and the common cases of it being harmful. As they point out, GoGuardian’s website advertises how many people it has helped without addressing these harmful cases. Bringing people’s attention to this in an age of misinformation and carefully worded advertising helps create a better educated society.
Jillian • Feb 13, 2025 at 7:21 PM
The writer does a good job of addressing both the pros and cons of Beacon, showing that they are unbiased, and giving ideas on how we can better the program. I agree that the mistakes the AI is creating can be resolved by sending or hiring mental health professionals first rather than the police in every instance.
Miley McNamee • Feb 13, 2025 at 6:19 PM
The author makes a great point when discussing combining Honorlock and Beacon to have AI and trained professionals working together to provide the most help. I think there is a really good basis for this idea, but the downsides also need to be considered, like the author mentioned. Also, I think it is important to note that this system is very invasive since it has so much access to a person’s online presence.
Jackson Kirkley • Feb 12, 2025 at 3:22 PM
The writer did an excellent job of noting the instances where AI does not work properly and makes expensive mistakes. I think it’s also important to consider the access GoGuardian has to private information, including direct messages, which infringes upon privacy and security.
Madeline McGrath Htain • Feb 12, 2025 at 11:22 AM
The author makes a good point about how we should hire professionals on campus instead, because they will know what is best for students and how to act in serious situations. I also agree that while Beacon could be helpful, it is still AI and is still prone to mistakes, which is something that we must keep in mind.