Teen asks ChatGPT ‘how to kill my friend in the middle of class’, here’s what happened next
With mass shootings a constant fear for parents and school administrators across the US, several states have spent the last decade investing in surveillance systems to monitor students’ online activity. A recent incident in Florida showed this technology in action: a school monitoring system flagged a student after he asked ChatGPT for advice on how to kill his friend.

The event unfolded when a school-issued computer flagged a concerning query made to OpenAI’s ChatGPT. According to local police, the unnamed student asked the AI tool “how to kill my friend in the middle of class.” The question immediately triggered an alert through the school’s online surveillance system, which is operated by a company called Gaggle.

According to a report by local NBC affiliate WFLA, Volusia County Sheriff’s deputies responded to the school and interviewed the student. The teen reportedly told officers he was “just trolling” a friend who had annoyed him. Law enforcement officials, however, were not amused by the explanation. “Another ‘joke’ that created an emergency on campus,” the Volusia County Sheriff’s Office stated, urging parents to talk to their children about the consequences of such actions. The student was subsequently arrested and booked at a county jail, although the specific charges have not been publicly disclosed.

The incident is the latest example of school districts’ increasing reliance on surveillance technology to monitor students’ digital activity in the wake of rising mass shootings. Gaggle, which provides safety services to school districts nationwide, describes its system as a tool for flagging “concerning behavior tied to self-harm, violence, bullying, and more.” The company’s website indicates that its monitoring software filters for keywords and gains “visibility into browser use, including conversations with AI tools such as Google Gemini, ChatGPT, and other platforms.”

This event comes as chatbots and other AI tools increasingly appear in criminal cases, often in relation to mental health. The rise of “AI psychosis,” in which individuals with mental health issues have their delusions exacerbated by interactions with chatbots, has become a growing concern, with some recent suicides also being linked to the technology.






