This is incorrect. They are in fact completely deterministic. Studies have shown that when all inputs, weights, and sampling parameters (such as temperature and the random seed) are held fixed, they produce the exact same token sequences (outputs). The appearance of non-determinism is a result of pseudo-random values (another thing which is deterministic but merely appears otherwise) and user ignorance (in the technical sense, not the value-judgement sense). In fact, the process of 'tuning' LLMs is heavily focused on adjusting input values to surface preferred outputs, which would not work in a non-deterministic system.
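A minimal sketch of that point, assuming a toy logits vector standing in for a real model's output (the function name and values are illustrative, not any real library's API): with the seed and temperature fixed, repeated sampling produces an identical token sequence.

```python
import numpy as np

def sample_tokens(logits, temperature, seed, n_tokens=5):
    # Seeded PRNG: the "randomness" is fully reproducible given the seed.
    rng = np.random.default_rng(seed)
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())  # stable softmax
    probs /= probs.sum()
    return [rng.choice(len(probs), p=probs) for _ in range(n_tokens)]

logits = np.array([2.0, 1.0, 0.5, 0.1])  # stand-in for model output
run_a = sample_tokens(logits, temperature=0.8, seed=42)
run_b = sample_tokens(logits, temperature=0.8, seed=42)
assert run_a == run_b  # identical inputs + seed -> identical tokens
```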
Yes, but we don't trust humans not to `rm` what they shouldn't either, which is why the `--no-preserve-root` flag exists. `ls` is not supposed to perform write actions. Agentic LLMs are. And just as you wouldn't build and test on your production server in case the code you execute has an unexpected adverse effect, you shouldn't be running LLM agents in a location or way where the actions they perform could have an unexpected adverse effect either. The genre of jokes about a new employee bringing down Prod or deleting source code is older than most people (which, to be fair, given that the median age is 31, is true of a lot of things).

LLMs are just a class of software. They're not good or bad any more than a hammer is good or bad (and, like a hammer, can be used to build or to destroy).
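To illustrate the containment idea, here is a hedged sketch (the guard, the `WRITE_COMMANDS` set, and `run_agent_command` are all hypothetical, not any real agent framework's API) of gating an agent's write-capable shell commands behind an explicit sandbox flag, much as `rm` demands `--no-preserve-root` before touching `/`:

```python
import shlex
import subprocess

# Hypothetical guard for an agentic tool runner: write-capable commands
# are refused unless the caller explicitly opts into the sandbox.
WRITE_COMMANDS = {"rm", "mv", "dd", "chmod", "chown", "tee"}

def run_agent_command(command: str, sandboxed: bool = False) -> str:
    argv = shlex.split(command)
    if argv and argv[0] in WRITE_COMMANDS and not sandboxed:
        raise PermissionError(f"refusing to run {argv[0]!r} outside the sandbox")
    result = subprocess.run(argv, capture_output=True, text=True, check=True)
    return result.stdout

print(run_agent_command("ls -l"))  # read-only: allowed anywhere

try:
    run_agent_command("rm -rf ./scratch")  # write action, not sandboxed
except PermissionError as err:
    print(err)
```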
The problem isn't LLMs; it's the entities who control the most powerful ones (corporations and governments), and what those entities are doing with them: using them as weapons against us rather than as tools to aid us.
I think this kind of rhetoric is best saved for a time when AI is not one of the most harmful forces in society. Argue it's a hammer all you like; people aren't going to be receptive when that hammer is currently being used to beat their faces in, and making that argument at such a time isn't exactly sympathetic.