• 0 Posts
  • 48 Comments
Joined 2 years ago
Cake day: August 14th, 2023

  • Not necessarily. It would certainly provide an attack vector, namely the data connection between profiles, but if it is implemented in a controllable manner (see Qubes OS, and the policy sketch below), it's fine. The only issue I see with GrapheneOS in this scenario is that there is no uncompromised host to do the verification, so I honestly don't know how something safe could be implemented. I also suspect the devs don't really want to, since there are already ways to achieve this, some of which OP has described.
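For a concrete picture of what "controllable" cross-profile connections can look like, here is a rough sketch of a Qubes RPC policy in the 4.1+ policy-file syntax. The qube names and the specific rules are invented for illustration, not a recommendation:

```
# /etc/qubes/policy.d/30-user.policy  (Qubes 4.1+ syntax; names illustrative)
# Nothing may be copied out of the 'vault' qube:
qubes.FileCopy        *   vault    @anyvm   deny
# Pasting into 'banking' always requires explicit user confirmation:
qubes.ClipboardPaste  *   @anyvm   banking  ask
```

The point is that the connection between domains exists, but every use of it is mediated by an explicit, user-controlled policy rather than being open by default.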

  • That argument doesn't hold, because we humans, no matter how advanced our language is, still follow rules. Without rules in language, we would not understand what the other person was saying. Granted, we learn these rules by listening, repeating, and using what sounds right. But the exact same thing happens with LLMs: they learn from the data we feed them. It's not as if we hand them the rules of English and they can then only understand English. The first time they come into contact with the concept of grammar is in their training data, which, most often in English, tells them about grammar.

    We all follow rules; that's exactly how we work. We're still a lot smarter than LLMs, so they might seem vastly inferior. And while I do believe that most complex organisms have "deeper thought", in the sense that our thinking has more layers and is generally better suited to the real world, there is no way I'm not gonna call a neural network that can answer complex questions, some of which may never have been asked in the history of mankind, an AI. Because it is very much intelligent. It's just not alive.

    We humans tend to think of ourselves too favorably. "We" are just a neural network as well, only a different kind. A computer is similar to the human brain, but a wire is not; where do you draw the line?
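To make the "rules are learned from data, not programmed in" point concrete, here is a minimal sketch: a toy bigram model in Python that is only ever shown raw text, yet ends up preferring word orders it has observed over ones it hasn't. The corpus is invented for illustration; a real LLM is a neural network doing something far richer, but the learn-from-data principle is the same:

```python
from collections import defaultdict

# Raw training text -- no grammar rules are supplied anywhere.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count how often each word follows each other word.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def probability(prev, nxt):
    """P(nxt | prev), estimated purely from the observed data."""
    total = sum(counts[prev].values())
    return counts[prev][nxt] / total if total else 0.0

print(probability("the", "cat"))  # 0.25 -- "the cat" was seen in the data
print(probability("cat", "the"))  # 0.0  -- "cat the" never appears
```

The model was never told that articles precede nouns; the preference simply falls out of the counts, which is the data-driven analogue of the rule-following described above.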