“The surveillance, theft and death machine recommends more surveillance to balance out the death.”
Stupid is, etc
they will find out about my relation with uwu chatgpt mechahitler skibidi sigma wifu
This is why i keep my chat gpt under the sofa so when buckling up for safety my open ai stays extra crunk.
I don’t actually have a problem with this. If people are stupid enough to admit to a crime or engage in criminal activity on a platform that they don’t control, that’s on them. I see this as the next step of evolution from people who would commit a crime on YouTube for views and then get shocked-Pikachu’d when the police arrest them for it. They have no one to blame but themselves: they brought a third-party AI company into it, and that company never consented to being an accomplice. And if there’s any company out there with the resources to have AI scan conversations for flags to send to the police with good accuracy, OpenAI would definitely be at the front of it.
You’re fine with invasion of privacy as long as it only affects criminals.
I think you’ll find that once privacy is broken you’d be surprised how many people end up under that umbrella.
Can we have it affect the oligarchs and authoritarian fascists, too?
Using the fucking GPT is the privacy invasion.
So yes, once the company has the logs and detects any criminal or dangerous activity, it should report it.
Stop using chatbots in the first place.
Well, you should have a problem with it, but not for the reasons you think. Any invasion of privacy is an issue when the people in control get to decide what counts as a reportable offense without explicitly telling you. I get it: you definitely shouldn’t be admitting anything illegal or asking for illegal advice from a chatbot. You shouldn’t be doing anything illegal in the first place. That’s basically the same as googling how to make a bomb, and if you’re that dumb, you’ll get what’s coming to you. The issue arises when you look at the bigger picture. If they have the ability to report anything they want to the police, what’s stopping them from releasing anything they want to anyone they want at any time? And when it comes to those receiving the reported data, what proof do you have that these entities have your safety, or anyone else’s, in mind? What if they decide to change the rules on what they should report, don’t tell you, and then retroactively flag a bunch of your conversations with said LLM?
It’s the same kind of situation we face with these AI cameras that track us and our vehicles literally everywhere we go. There have already been multiple cases where people in law enforcement used these tools to stalk people like ex-girlfriends. All of this puts a lot of trust in people none of us even know, expecting them to have the best of intentions. What would stop them from reporting that you asked ChatGPT about the current situation in Gaza?
I kinda agree. While I do want these LLM companies to be more private in terms of data retention, I think it’s naive to say that a company selling artificial intelligence to hundreds of millions of users should be totally indifferent in the face of LLM-induced psychosis and suicide. Especially when the technology only gets more hazardous as it becomes more capable.
Being criminally stupid when planning crimes is pretty stupid.
Did they think there was patient-sycophantBot privilege or something?
What a snitch, grok is this real??
“Yes, Nazis are cool.” -Grok, probably
There is no privacy if you don’t self-host everything.
On-site self-hosting, on owned hardware. Who knows what’s going on behind the closed doors of data centers around the world.
And let’s not get into industry-standard hardware ~~backdoors~~ remote control systems.