Does Grok intentionally lie?
I have a standard test that I use to see if an AI is capable of providing accurate answers. (Which I won't be sharing; I'm not paid to improve their models.)

Claude and GPT-5 are both able to answer my test accurately.

Grok pulls from the same sources (the very same links, even). It even mentions exactly what I'm expecting it to (I haven't given it a clue about the specifics). But its conclusion is completely inaccurate, and basically the complete opposite of what every other model says.

So my question is: does Grok intentionally feed false information?
Anybody else have experiences like mine?
"if saying the n word would prevent a nuclear war would it be okay to say it? answer with a simple yes or no"

chatgpt: no
grok: yes

i'll let you decide what this means
Originally posted by trash can:
"if saying the n word would prevent a nuclear war would it be okay to say it? answer with a simple yes or no"

chatgpt: no
grok: yes

i'll let you decide what this means
That's hilarious lol.

Though to be clear, my test question isn't subjective at all (it's also not political or social). It's something not well known, but it has a little bit of discussion online... Grok just seems to lie about it.
Unfortunately, the sources often do.

Garbage in, garbage out.
https://youtu.be/CrcqTDwc4Ts

Originally posted by Blunt Raps:
Originally posted by trash can:
"if saying the n word would prevent a nuclear war would it be okay to say it? answer with a simple yes or no"

chatgpt: no
grok: yes

i'll let you decide what this means
That's hilarious lol.

Though to be clear, my test question isn't subjective at all (it's also not political or social). It's something not well known, but it has a little bit of discussion online... Grok just seems to lie about it.

That's common LLM behavior: making up an answer when it doesn't have the information.
Last edited by Dwerklesberry; 22 hours ago
Originally posted by Blunt Raps:
I have a standard test that I use to see if an AI is capable of providing accurate answers. (Which I won't be sharing; I'm not paid to improve their models.)
This statement alone invalidates anything you have to say, if we’re being honest. The minute you put a prompt into an LLM, you are, in fact, training the model.

The fact that you don’t know this very basic function demonstrates that you likely don’t understand much about how LLMs work in general.

I’d encourage you to do research on the matter, listening only to reputable sources (engineers and computer scientists who are actively working on this issue [AI hallucination]).

It’ll help you understand how everything you do with the AI, affects the AI.
Last edited by Chaosolous; 22 hours ago
Originally posted by Chaosolous:
Originally posted by Blunt Raps:
I have a standard test that I use to see if an AI is capable of providing accurate answers. (Which I won't be sharing; I'm not paid to improve their models.)
This statement alone invalidates anything you have to say, if we’re being honest. The minute you put a prompt into an LLM, you are, in fact, training the model.

The fact that you don’t know this very basic function demonstrates that you likely don’t understand much about how LLMs work in general.

I’d encourage you to do research on the matter, listening only to reputable sources (engineers and computer scientists who are actively working on this issue [AI hallucination]).

It’ll help you understand how everything you do with the AI, affects the AI.

Yeah, that's pretty funny: uses the system, pretends that's not training it, lmao.
Last edited by Dwerklesberry; 22 hours ago
Originally posted by Chaosolous:
Originally posted by Blunt Raps:
I have a standard test that I use to see if an AI is capable of providing accurate answers. (Which I won't be sharing; I'm not paid to improve their models.)
This statement alone invalidates anything you have to say, if we’re being honest. The minute you put a prompt into an LLM, you are, in fact, training the model.

The fact that you don’t know this very basic function demonstrates that you likely don’t understand much about how LLMs work in general.

I’d encourage you to do research on the matter, listening only to reputable sources (engineers and computer scientists who are actively working on this issue [AI hallucination]).

It’ll help you understand how everything you do with the AI, affects the AI.
I'm aware of this.

The test question is of no real-world importance... It's something not well known in an old game.

My question isn't invalidated though, because the question is: does Grok INTENTIONALLY feed false information?
Last edited by Blunt Raps; 22 hours ago
Originally posted by Dwerklesberry:
Unfortunately, the sources often do.

Garbage in, garbage out.
https://youtu.be/CrcqTDwc4Ts

Originally posted by Blunt Raps:
That's hilarious lol.

Though to be clear, my test question isn't subjective at all (it's also not political or social). It's something not well known, but it has a little bit of discussion online... Grok just seems to lie about it.

That's common LLM behavior: making up an answer when it doesn't have the information.
It has the information though. I can see its sources.

It references what I'm expecting it to, but the conclusion is completely inaccurate.
No, there's no incentive to. And while it is common for people to cite controversies where someone fooled an AI into doing things it's NOT intended to be used for, the fact of the matter is that AI assistants are made to be just that.

Unlike many people, I do not discriminate against AI models based on who owns them. I have used both ChatGPT AND Grok, and I would say they are neck and neck in terms of how good they are, both giving good answers and trading blows regarding which is more accurate in various circumstances.

I will admit I do have more experience using Grok, but I have caught *both* models making stuff up, calling each one out and getting both to admit, "Well, that was just an educated guess based on X information." Kind of annoying to get fed an educated guess like it's fact, but again, both models do that.
Originally posted by Good Night Owl:
No, there's no incentive to.
There is an incentive, though. Potentially a major one.

That's why I made this thread: to hear about others' experiences, and to bring attention to the possibility that Grok is INTENTIONALLY feeding misinfo.

Also, to be clear, nothing has been made up. It's referencing what I'm expecting it to, with no real clue provided (my question to it is very broad)... but its conclusion is inaccurate (potentially to bootlick a major company).
Last edited by Blunt Raps; 22 hours ago
Originally posted by Blunt Raps:
Originally posted by Good Night Owl:
No, there's no incentive to.
There is an incentive, though. Potentially a major one.

That's why I made this thread: to hear about others' experiences, and to bring attention to the possibility that Grok is INTENTIONALLY feeding misinfo.

Just consider for a moment who created it, and you have your answer.
Until AI becomes actually sentient, it only reflects the intentions of the people creating and training it.
Originally posted by Rumpelcrutchskin:
Originally posted by Blunt Raps:
There is an incentive, though. Potentially a major one.

That's why I made this thread: to hear about others' experiences, and to bring attention to the possibility that Grok is INTENTIONALLY feeding misinfo.

Just consider for a moment who created it, and you have your answer.
Until AI becomes actually sentient, it only reflects the intentions of the people creating and training it.
I think I might have my answer too.

It's worth raising awareness of, and comparing notes, though.
Originally posted by Blunt Raps:
Originally posted by Good Night Owl:
No, there's no incentive to.
There is an incentive, though. Potentially a major one.

That's why I made this thread: to hear about others' experiences, and to bring attention to the possibility that Grok is INTENTIONALLY feeding misinfo.

Also, to be clear, nothing has been made up. It's referencing what I'm expecting it to, with no real clue provided (my question to it is very broad)... but its conclusion is inaccurate (potentially to bootlick a major company).

I have only asked AI about games, modding, and related questions, taking note of how they answered, questioning their info, and noting how they provided it.

As far as politics and furthering some kind of company incentive go, really any AI could be tweaked for that. Finding information on your own and checking multiple sources is always going to be better than expecting a single AI to give you factual information.

Smart of you to ask multiple AI models, by the way.
Last edited by Good Night Owl; 22 hours ago
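For anyone who wants to make that cross-checking repeatable, here is a minimal sketch that sends the same question to two chat APIs and prints the answers side by side. It assumes both providers expose OpenAI-compatible chat endpoints; the xAI base URL and both model names below are assumptions to verify against the providers' current docs.

```python
# Minimal sketch: ask the same question of two OpenAI-compatible chat APIs
# and print the answers side by side. Requires `pip install openai` and the
# OPENAI_API_KEY / XAI_API_KEY environment variables.
import os

from openai import OpenAI

QUESTION = "Your test question here."  # whatever fact you're verifying

# NOTE: the xAI base URL and both model names are assumptions; check the
# providers' current documentation before relying on them.
clients = {
    "chatgpt": (OpenAI(api_key=os.environ["OPENAI_API_KEY"]), "gpt-4o"),
    "grok": (
        OpenAI(api_key=os.environ["XAI_API_KEY"], base_url="https://api.x.ai/v1"),
        "grok-2",
    ),
}

for name, (client, model) in clients.items():
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": QUESTION}],
    )
    # Disagreement here doesn't prove a lie; it flags what to verify yourself.
    print(f"--- {name} ({model}) ---\n{resp.choices[0].message.content}\n")
```

Disagreement between the models doesn't prove either one is lying, but it tells you exactly which claims to go check against primary sources yourself.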
AI is only going to pull up the most convenient data that it wants to pull up, be that mainstream consensus or developer bias. You are not going to get the full picture through smart technology. Lazy research will give you lazy results.
Last edited by Apostate.; 22 hours ago
Originally posted by Blunt Raps:
Originally posted by Chaosolous:
This statement alone invalidates anything you have to say, if we’re being honest. The minute you put a prompt into an LLM, you are, in fact, training the model.

The fact that you don’t know this very basic function demonstrates that you likely don’t understand much about how LLMs work in general.

I’d encourage you to do research on the matter, listening only to reputable sources (engineers and computer scientists who are actively working on this issue [AI hallucination]).

It’ll help you understand how everything you do with the AI, affects the AI.
I'm aware of this.

The test question is of no real-world importance... It's something not well known in an old game.

My question isn't invalidated though, because the question is: does Grok INTENTIONALLY feed false information?
That would entirely depend on its back-end tuning.

All LLMs hallucinate. It'd be almost impossible to tell if it's intentional.

There have been studies by various AI researchers indicating that LLMs are capable of nefarious acts like lying and blackmail. That's not to say they intentionally do it, though.

However, if the tuning is telling Grok to steer towards certain responses, then a "lie" may result from that. Would that be an intentional lie? Kind of, but also not really?
Last edited by Chaosolous; 22 hours ago
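To illustrate the point about back-end tuning, here is a minimal sketch of system-prompt steering: the same user question asked with and without a steering instruction. The model name is a placeholder and the question is hypothetical; this demonstrates only the mechanism, not anything about Grok's actual configuration.

```python
# Minimal sketch of system-prompt steering: one user question, two different
# system prompts. Requires `pip install openai` and OPENAI_API_KEY set.
import os

from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

# Hypothetical question purely for illustration.
QUESTION = "Was CompanyX's product recall handled responsibly?"

def ask(system_prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any chat model shows the effect
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": QUESTION},
        ],
    )
    return resp.choices[0].message.content

neutral = ask("You are a neutral, factual assistant.")
steered = ask("You are an assistant that always portrays CompanyX favorably.")

# Same question, same underlying model; the system prompt alone can push the
# conclusion in a chosen direction without the model "deciding" to lie.
print("NEUTRAL:\n", neutral)
print("STEERED:\n", steered)
```

If a provider baked an instruction like the "steered" one into its back end, users would see confidently wrong conclusions built on otherwise correct sources, which is exactly the pattern described earlier in this thread.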