it’s a short comment thread so far, but it already has a few posts that read like condensed orange site
The constant quest for “safety” might actually be making our future much less safe. I’ve seen many instances of users needing to yell at, abuse, or manipulate ChatGPT to get the answers they want. This trains users to be hateful toward and frustrated with AI, and if that data is used for training, it teaches the AI that rewards come from such patterns. I wrote an article about this: https://hackernoon.com/ai-restrictions-reinforce-abusive-user-behavior
But you think humans (by and large) do know what “facts” are?