Yeah, they aren’t trained to make “correct” responses, just reasonable-looking ones; they aren’t truth systems. However, I’m not sure what a truth system would even look like. At a certain point truth/fact becomes subjective, which means we probably have a fundamental problem with how we think about and evaluate these systems.
I mean, it’s the whole reason programming languages were created: natural language is ambiguous.
Yeah, the existence of solipsism drives the point about truth home. The thing is, LLMs outright lie without knowing they’re lying, because there’s no understanding there. It’s statistics at the token level.
AI is not my field, so I don’t know, either.