this post was submitted on 16 Apr 2026
238 points (96.9% liked)

Technology

[–] wuffah@lemmy.world 19 points 4 days ago* (last edited 4 days ago) (1 children)

The propensity of the average person to simply believe what they’re told is staggering, and I know because I do it all the time. It takes effort to seek out information, vet it, consider it, and then make a determination on the next information to seek or the next course of action. Deterministic, trustworthy information and abstracted concepts are extremely valuable to the brain, an organ that consumes roughly 20% of our body’s energy.

Until now, computers performed tasks that were impossible for the human mind alone. Machine learning has been automating work that is impractical for humans at scale, such as computer vision or large-dataset processing, but chatbots are the first technology that has really enabled automating human thought itself. In this new sense, directly offloading this cognitive work to a computer is literally letting it think for us.

The more reliant on this mode of thinking we become, the easier it is to transfer cognitively expensive work to a device that externalizes that energy cost. However, the trade-offs that are emerging are:

  • The brain's internal metabolic energy is traded for relatively inefficient external electricity generation to feed circuits.

  • The words generated by LLMs must still be verified and combined into coherent, dependable ideas and actions.

  • The drive and skill required to develop good, valuable ideas degrade without constant practice.

In the end, verifying and mentally processing the output of an LLM chatbot takes only slightly less work than performing the same thinking yourself, which defeats its purpose. If you skip that step of contextualizing the output as possibly representing corporate interests and diluting meaning while offering a juicy cognitive shortcut, you become a willing accomplice in your own digital brainwashing. This effect is also emergent and automatic; it doesn't even have to serve a nefarious purpose, it seems to be a procedural consequence of this mode of thinking.

What I really fear, and what is also emerging, is that AI agents will eventually become so advanced and trusted that their end-to-end capabilities will make mistakes and ulterior motives impossible to spot, and that they will end up beyond both the capability and the desire for human scrutiny.

These digital brains we trained on all of human knowledge are now in the process of training us.

[–] No1@aussie.zone 10 points 4 days ago* (last edited 4 days ago) (1 children)

The propensity of the average person to simply believe what they’re told is staggering,

Goddammit — now I don't know if I should believe you!

[–] wuffah@lemmy.world 3 points 3 days ago* (last edited 3 days ago)

In that spirit, I'm here to tell you that I just made this up. I have no formal training in AI or LLMs, though I do know a little about computers and writing. I mostly wrote this in 30 minutes while making coffee so I could trade it for the little dopamine numbers in my doom square.