Model Evaluation and Threat Research is an AI research charity that investigates the threat posed by AI agents. That sounds a bit AI-doomsday-cult, and they take funding from the AI doomsday cult organisat…
I talked to Microsoft Copilot three times for work-related reasons because I couldn't find something in the documentation. I was lied to all three times: it either made things up about how the feature I asked about works, or invented entirely new configuration settings.
In fairness, the MSDN documentation is prone to this too.
By “this” I mean having what looks like a comprehensive section about the thing you want, while the actual information you need isn't there, and you only discover that after combing through the whole thing.
Claude does this ALL the time too. It NEEDS to give a solution and rarely says “I don't know,” so it will just make up a solution it thinks is right without actually checking whether that solution exists. It will dream up programs or libraries that don't exist and never have, OR it will tell you something can do a thing it has never been able to do.
And that's just how all these LLMs have been built: they MUST provide a solution, so they all lie. They've been programmed this way to maximise profits. GitHub Copilot is a bit better because it sits with me in my code, so its suggestions usually work; it can see the context and what's around it. Claude is absolute garbage, MS Copilot is about the same calibre as Claude if not worse, and ChatGPT is only good for content writing or bouncing ideas off.
LLMs are just sophisticated text-prediction engines. They don't know anything, so they can't produce an “I don't know”: they can always generate a text prediction, and they can't think.
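That point can be made concrete with a toy sketch. The dictionary of scores and the made-up setting names below are purely illustrative assumptions, not anything a real model or product contains; the sketch only shows that greedy text prediction has no abstain branch, so *something* always comes out.

```python
# Toy sketch of why a pure text predictor can't say "I don't know":
# it always emits whichever continuation scores highest, even when
# every candidate is a poor fit. (Hypothetical scores, not a real LLM.)

# Fake "learned" probabilities for the next words after a prompt
# asking about a configuration setting.
next_word_scores = {
    "SetWidgetTimeout": 0.31,   # plausible-sounding but invented setting
    "EnableWidgetSync": 0.28,   # another invented setting
    "I don't know": 0.02,       # honest refusals rarely dominate training text
}

def predict_next(scores):
    # Greedy decoding: pick the highest-probability continuation.
    # There is no "abstain" path; some token is always returned.
    return max(scores, key=scores.get)

print(predict_next(next_word_scores))  # → SetWidgetTimeout
```

The invented setting wins simply because it scores highest, which mirrors the hallucinated configuration settings described above.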
Tool use, reasoning, and chain of thought are the things that set LLM systems apart. While you're correct in the most basic sense, it's like saying a car is only a platform with wheels: it's reductive of the capabilities.
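A minimal sketch of what tool use changes: instead of answering from prediction alone, the system consults a source first and can honestly report a miss. The `docs` dictionary and the setting names are illustrative assumptions, not a real API.

```python
# Sketch of a tool-grounded answer step: look the question up in a
# trusted source before responding, and admit it when the lookup fails.

docs = {
    "MaxRetryCount": "Number of retry attempts before the request fails.",
}

def answer(setting_name):
    # Step 1: consult the "tool" (here, a documentation lookup).
    entry = docs.get(setting_name)
    # Step 2: only answer when the lookup succeeded; otherwise abstain.
    if entry is None:
        return "I don't know - that setting isn't in the docs."
    return entry

print(answer("MaxRetryCount"))   # grounded answer from the source
print(answer("WidgetTimeout"))   # honestly reports the gap
```

The difference from the pure-prediction picture is that “I don't know” is now a reachable branch, gated on an actual check rather than on token probabilities.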