cross-posted from: https://lemmy.ml/post/35349105
Aug. 26, 2025, 7:40 AM EDT
By Angela Yang, Laura Jarrett and Fallon Gallagher

[this is a truly scary incident, which shows the incredible dangers of AI without guardrails.]
I don’t understand your logic here. Clearly, the kid had problems that were not caused by ChatGPT, and his suicidal thoughts were not started by ChatGPT. But OpenAI acknowledged that the longer an engagement continues, the more likely it is that ChatGPT will go off the rails, which is what happened here. At first, ChatGPT gave the standard correct advice about suicide hotlines, etc. Then it started getting darker, telling the kid not to let his mother know how he was feeling. Then it progressed to actual suicide coaching. So I don’t think the analogy to video games holds here.
Take away ChatGPT and insert a video game, movie, or book that talks about those same topics.
There are books that deal with suicide in far darker ways. If the kid had read one of those, would the parents sue the author of the book?
There is a whole subgenre of music about encouraging people to commit suicide and fall into depression. Do we play the “who is going to think about the children” card with that music and its authors? Because music can really get under your skin, and a couple of hours listening to that would make anyone have dark thoughts.
The shitty parents blame ChatGPT because it told the kid how to make a noose. You can find that info in “how-to” guides with illustrated instructions. Do we put UK-style nanny-state controls on “how-to” content? Or does it only count if it’s something that benefits the Butlerian Jihad?
I think it is completely irrational to blame a piece of software (or media), however defective it may be, for a suicide.