Teen asks ChatGPT ‘how to kill my friend in the middle of class’, here’s what happened next
With mass shootings a constant fear for parents and school administrators across much of the US, several states have spent the last decade investing in surveillance systems that monitor students' online activity. A recent incident in Florida showed this technology in action: a school monitoring system flagged a student after he asked ChatGPT for advice on how to kill his friend.

The event unfolded when a school-issued computer flagged a concerning query made to OpenAI's ChatGPT. According to local police, the unnamed student asked the AI tool "how to kill my friend in the middle of class." The question immediately triggered an alert through the school's online surveillance system, which is operated by a company called Gaggle.

According to a report from local NBC affiliate WFLA, Volusia County Sheriff's deputies responded to the school and interviewed the student. The teen reportedly told officers he was "just trolling" a friend who had annoyed him. Law enforcement officials, however, were not amused by the explanation. "Another 'joke' that created an emergency on campus," the Volusia County Sheriff's Office stated, urging parents to talk to their children about the consequences of such actions.

The student was subsequently arrested and booked at a county jail, although the specific charges have not been publicly disclosed. The incident is the latest example of school districts' increasing reliance on surveillance technology to monitor students' digital activity amid fears of mass shootings.
Gaggle, which provides safety services to school districts nationwide, describes its system as a tool for flagging "concerning behavior tied to self-harm, violence, bullying, and more." The company's website says its monitoring software filters for keywords and gains "visibility into browser use, including conversations with AI tools such as Google Gemini, ChatGPT, and other platforms."

The incident comes as chatbots and other AI tools increasingly appear in criminal cases, often in relation to mental health. The rise of "AI psychosis," in which individuals with mental health issues have their delusions exacerbated by interactions with chatbots, has become a growing concern, and some recent suicides have also been linked to the technology.