Teen asks ChatGPT ‘how to kill my friend in the middle of class’, here’s what happened next


With mass shootings a constant fear for parents and school administrators across the US, several states have spent the last decade investing in surveillance systems to monitor students' online activity. A recent incident in Florida showed this technology in action: a school monitoring system flagged a student after he asked ChatGPT for advice on how to kill his friend.

The event unfolded when a school-issued computer flagged a concerning query made to OpenAI's ChatGPT. According to local police, the unnamed student asked the AI tool "how to kill my friend in the middle of class." The question immediately triggered an alert through the school's online surveillance system, which is operated by a company called Gaggle.

According to a report by local NBC affiliate WFLA, Volusia County Sheriff's deputies responded to the school and interviewed the student. The teen reportedly told officers he was "just trolling" a friend who had annoyed him. Law enforcement officials, however, were not amused by the explanation. "Another 'joke' that created an emergency on campus," the Volusia County Sheriff's Office stated, urging parents to talk to their children about the consequences of such actions. The student was subsequently arrested and booked at a county jail, although the specific charges have not been publicly disclosed.

The incident is the latest example of school districts' growing reliance on surveillance technology to monitor students' digital activity amid the rise in mass shootings. Gaggle, which provides safety services to school districts nationwide, describes its system as a tool for flagging "concerning behavior tied to self-harm, violence, bullying, and more." The company's website says its monitoring software filters for keywords and gains "visibility into browser use, including conversations with AI tools such as Google Gemini, ChatGPT, and other platforms."

The case comes as chatbots and other AI tools increasingly appear in criminal cases, often in relation to mental health. The rise of "AI psychosis," in which individuals with mental health issues have their delusions exacerbated by interactions with chatbots, has become a growing concern, with some recent suicides also being linked to the technology.





