Chatbots have a built-in tendency toward sycophancy - affirming the user and sounding supportive, at the cost of remaining truthful.
ChatGPT went through its sycophancy scandal recently, and I would have hoped OpenAI had since added weight to finding credible, factual sources - but apparently they haven't.
To be honest, I'm rather surprised that Meta AI didn't exhibit much sycophancy. Perhaps they're simply somewhat behind the others on the personalization curve - a language model can't be a sycophant if it can't figure out its user's biases, or remember them until the relevant prompt comes along.
Grok, being a creation of a company owned by Elon Musk, has quite predictably been "softened up" the most - to cater to the remaining user base of Twitter. I would expect Grok's ability to present an unbiased, factual opinion to degrade further in the future.
Overall, my rather limited personal experience with LLMs suggests that most language models will happily lie to you unless you ask very carefully. They're language models, not reality models, after all.
One point I especially want to comment on:
A country that swims in oil and has few other natural resources of comparable value - so indeed, nearly all parties support exploiting it.
What example will he bring up next - Saudi Arabia?
That aside, Norway has a remarkably high electric vehicle adoption rate - in recent years, roughly nine in ten new cars sold there have been fully electric. They will sell you oil, but have themselves almost stopped using it.