[–] WanderingThoughts@europe.pub 10 points 1 day ago* (last edited 1 day ago) (9 children)

You can tell it to switch that off permanently with custom instructions. It makes the thing a whole lot easier to deal with. Of course, that would be bad for engagement, so they're not going to do that by default.
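
If you're going through the API instead of the ChatGPT app, a standing system message does roughly the same job as the custom instructions box. A minimal sketch with the official openai Python SDK (the model name and exact wording are just placeholders):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# A standing system message is the API-side equivalent of ChatGPT's
# custom instructions box. The wording below is only an illustration.
NO_SYCOPHANCY = (
    "Do not compliment me or praise my questions. "
    "Skip filler like 'Great question!' or 'You're absolutely right!'. "
    "Answer directly, and say so plainly when I'm wrong."
)

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": NO_SYCOPHANCY},
        {"role": "user", "content": "Why does my systemd unit keep restarting?"},
    ],
)
print(resp.choices[0].message.content)
```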

[–] AbsolutelyClawless@piefed.social 11 points 1 day ago (1 children)

I sometimes use ChatGPT when I'm stuck troubleshooting an issue. I had to do exactly this because it became extremely annoying: I'd correct it for giving me incorrect information and it would still be "sucking up" to me with "Nice catch!" and "You're absolutely right!". The fact that an average person doesn't find that creepy, unflattering and/or annoying is the really scary part.

[–] merc@sh.itjust.works 3 points 1 day ago (2 children)

Just don't think that turning off the sycophancy improves the quality of the responses. It's still just responding to your questions with essentially "what would a plausible answer to this question look like?"

[–] AbsolutelyClawless@piefed.social 1 point 11 hours ago

I'm well aware of how LLMs work. I take every response with a grain of salt and don't just run with it. However, I understand many people take everything LLMs regurgitate at face value and that's definitely a massive problem. I'm not a fan of these tools, but they do come in handy.

[–] WanderingThoughts@europe.pub 0 points 1 day ago (1 children)

You can set default instructions telling it to always be factual, always provide a link to back up its answer, and to give an overall reliability score along with why it came to that score. That stops it from making stuff up, and allows you to quickly verify. It's not perfect, but it's so much better than just trusting what it puts on the screen.
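
For example, the default instructions could look something like this (a sketch with the openai Python SDK; the wording is a guess at the kind of thing meant, not a prompt taken from anywhere):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# A guess at how such default instructions might be phrased; adjust to taste.
FACT_CHECK_INSTRUCTIONS = (
    "Only state things you can support with a source. "
    "For every factual claim, include a link to the page it came from. "
    "End every answer with a reliability score out of 10 and a brief "
    "explanation of how you arrived at that score."
)

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": FACT_CHECK_INSTRUCTIONS},
        {"role": "user", "content": "When did Debian 12 become the stable release?"},
    ],
)
print(resp.choices[0].message.content)
```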

[–] merc@sh.itjust.works 6 points 1 day ago (1 children)

"That stops it from making stuff up"

No it doesn't. That's simply not how LLMs work. They're "making stuff up" 100% of the time. If the training data is good, the stuff they're making up more or less matches the training data. If the training data isn't good, they'll make up stuff that sounds plausible.

[–] WanderingThoughts@europe.pub 2 points 1 day ago* (last edited 1 day ago) (1 children)

If you ask it for sources/links, these days it'll search the web and pull information from the pages instead of relying only on its training data. That doesn't work for everything, of course. And the biggest risk is that all the sites get polluted with slop, so the sources themselves become worthless over time.
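
The links at least make spot-checking cheap. A rough sketch of that follow-up step, assuming the requests library and a made-up sample answer:

```python
import re
import requests

# Made-up sample answer from a model that was told to cite its sources.
answer = (
    "The release went stable in June 2023.\n"
    "Source: https://example.com/release-announcement\n"
    "Reliability: 8/10 - matches the project's announcement page."
)

# Pull out every link and check that it at least resolves. This doesn't
# prove the page says what the model claims, only that the link isn't dead.
for url in re.findall(r"https?://\S+", answer):
    try:
        status = requests.head(url, allow_redirects=True, timeout=10).status_code
    except requests.RequestException as exc:
        status = f"error: {exc}"
    print(url, "->", status)
```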

[–] merc@sh.itjust.works 2 points 1 day ago

Sounds infallible; you should use it to submit cases to courts. I hear they love it when people cite things that AI tells them are factual cases.
