this post was submitted on 30 Apr 2026
99 points (86.7% liked)
Technology
42854 readers
A nice place to discuss rumors, happenings, innovations, and challenges in the technology sphere. We also welcome discussions on the intersections of technology and society. If it’s technological news or discussion of technology, it probably belongs here.
Remember the overriding ethos on Beehaw: Be(e) Nice. Each user you encounter here is a person, and should be treated with kindness (even if they’re wrong, or use a Linux distro you don’t like). Personal attacks will not be tolerated.
This community's icon was made by Aaron Schneider, under the CC-BY-NC-SA 4.0 license.
founded 4 years ago
Thanks for specifying a legitimate use case for this tool. I understand that Google Search has been the most valuable programming tool for a very long time, so it makes sense that LLMs would be helpful in the same kind of way. Search engine technology is quite different from blockchain or VR in terms of consumer and business demand.
For my purposes of news and history research, the unreliability of LLMs negates their usefulness as an assistant: since I have to check every claim and examine the references anyway, it's more time-effective to skip the questionable output and do the research myself in the first place. How have you managed the unreliability issue with the volume of data you're dealing with? Is your kind of data less likely to be unreliable because it's something the LLM is more likely to process correctly?
The same way as with any other information resource, like Wikipedia or some random Reddit post: trust but verify. Always review the code, point out mistakes, and call out potential edge cases. Especially with newer thinking models, hallucinations are minimal; it's mostly miscommunication in the request, which you can detect in the thinking stream, stop, and correct. Rubberducking makes you better at communicating ideas in general, and providing enough context with the request is everything.
A lot of it has to do with the type of model you're using, too, and with having a decent global rules file tailored to how you want it to respond. If you don't like how the model is responding, try another one. If it keeps making the same mistake, put a rule in the global rules file or ask it to save a permanent memory.
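A global rules file is usually just a plain Markdown or text file the tool reads at the start of every session (the exact filename and location depend on the tool, e.g. a CLAUDE.md for Claude Code or a rules file in Cursor). The contents below are purely a hypothetical sketch of the kind of guidance people put in one:

```
# Global rules (hypothetical example)
- Ask clarifying questions before making large changes.
- Prefer small, reviewable diffs; never refactor unrelated code.
- Always call out edge cases and failure modes you are unsure about.
- Do not invent APIs; if a library call is uncertain, say so.
- Keep responses short; show code first, explanation second.
```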
Claude Opus does well at work, but it's rather expensive for home use. I use Kimi reasoning models in Kagi for search questions, and Qwen/GLM hybrid models for local use. It takes a bit of setup and tweaking to get the local stuff working, but LLMs are good at knowing how their own models work, so I just had Kimi help me with some of the harder troubleshooting.
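For the local setup, the usual pattern is to run the model behind an OpenAI-compatible server (llama.cpp's llama-server, LM Studio, and similar tools all expose one) and talk to it with the standard OpenAI client. A minimal sketch, assuming a server on localhost:8080 and a placeholder model name, neither of which is from the comment above:

```python
# Minimal sketch: talking to a locally hosted model through an
# OpenAI-compatible endpoint (llama.cpp, LM Studio, etc.).
# The port and model name are placeholders for whatever your
# local server actually exposes.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8080/v1",  # local server, not the OpenAI cloud
    api_key="not-needed",                 # local servers usually ignore the key
)

response = client.chat.completions.create(
    model="qwen2.5-coder",  # placeholder; use the model id your server reports
    messages=[
        {"role": "system", "content": "You are a careful coding assistant."},
        {"role": "user", "content": "Review this function for edge cases: ..."},
    ],
)

print(response.choices[0].message.content)
```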
I can tell you are experienced with rubberducking. Thanks for the detailed answer.