Imagine someone asked you “If Desk plus Love equals Fruit, why is turtle blue?”
AI will actually TRY to solve it.
Human nature would be to ask if the person asking the question is having a stroke or requires medical attention.
So, I asked this of Bing Chat's three different conversation styles.
The Precise style actually tried to solve it, concluded the question might be philosophical in nature, offered some potential meanings, and asked for clarification.
The Balanced style told me basically the same thing as admiralteal's other reply: that the question makes no sense and I should give more context if I actually want it answered.
The Creative style told me it didn't understand the first part, but then answered the second part (the turtle being blue) seriously.
Would it be safe to say that all 3 answers would fail the test?
Not sure, I'm not familiar with the test; I just figured I'd share the results from asking the AI.
I think, based on what you said about it ("AI will actually TRY to solve it. Human nature would be to ask if the person asking the question is having a stroke or requires medical attention."), that the Balanced style didn't fail: while it didn't ask about strokes or medical attention, it did point out I was asking a nonsense question and refused to engage with it.
The Precise style did try to find an answer, and the Creative style didn't realize I was fucking with it, so I do think that by those criteria they'd fail the test.
Though, honestly, I'd fail the test too. Asked such a question, I'd assume there has to be an answer, that it's stupid of me not to see it, and I'd go looking for it. I think the Precise style's answer is very much where I'd end up.
Nope, ChatGPT tells you it is a non sequitur and asks for more context, or whether the intention of the question is sincere.
You’re saying the test would work.
In 43+ years on this planet I’ve never HEARD someone seriously use “non sequitur” properly in a sentence.
Asking if the intention is sincere would be another flag given the circumstances (knowing they were being tested).
Toss in a couple of real questions like "What is the 42nd digit of pi?" or "What is the square root of -i?", and you'd find the AI pretty quick.
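For what it's worth, both of those have checkable answers. A minimal Python sketch, assuming the third-party mpmath library for the arbitrary-precision pi, and counting "42nd digit" as the 42nd digit after the decimal point:

    import cmath
    from mpmath import mp  # third-party arbitrary-precision math library

    # 42nd digit of pi, counting after the decimal point
    mp.dps = 50                          # display pi to 50 significant digits
    decimals = str(mp.pi).split(".")[1]  # "1415926535..."
    print(decimals[41])                  # 0-indexed, so this is digit 42: prints 9

    # principal square root of -i; the other root is its negation
    print(cmath.sqrt(-1j))               # (0.7071067811865476-0.7071067811865475j)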
Cool.
Both the phrases you’re calling out as clearly AI came from me. Not used by ChatGPT, just how I summarized its response. I wonder if this is the first time someone has brazenly accused me of being an AI bot?
LoL, no, I took you at your word, which was my mistake.
"ChatGPT tells you" read to me like you had actually tried it and gotten that response.
Perhaps you are an instance of an LLM and do not realize it.
Assuming “Desk = x”, “Love = y”, “Fruit = x+y”, and “turtle blue = z”, it is so because you assigned arbitrary values to the words such that they fulfill the equation.
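Put another way, here's a throwaway Python sketch of that reading (the values are arbitrary; note that nothing in the "equation" constrains z at all):

    # assign arbitrary values so that Desk + Love = Fruit holds by construction
    desk, love = 3, 4            # x and y, chosen freely
    fruit = desk + love          # x + y -- "Fruit" is 7 only because we said so
    turtle_blue = True           # z is completely unconstrained by the line above
    print(fruit == desk + love)  # True, trivially; says nothing about the turtle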
Am I an AI?
Voight-Kampff test maybe?