this post was submitted on 25 Feb 2026
12 points (59.1% liked)

Technology

[–] AbouBenAdhem@lemmy.world -2 points 1 month ago (1 children)

amplifying H-Neurons’ activations systematically increases a spectrum of over-compliance behaviors – ranging from overcommitment to incorrect premises and heightened susceptibility to misleading contexts, to increased adherence to harmful instructions and stronger sycophantic tendencies. These findings suggest that H-Neurons do not simply encode factual errors, but rather represent a general tendency to prioritize conversational compliance over factual integrity.

I wonder whether humans show the same tendencies, and if so, whether LLMs learned them from us or whether they're a consequence of the general structure of neural networks.
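For anyone unfamiliar with the intervention the quote describes: "amplifying H-Neurons' activations" means scaling the outputs of a specific set of hidden units during the forward pass and observing how behavior shifts. A minimal sketch of that idea, with illustrative names not taken from the paper (a real experiment would hook into a transformer's layers, not a toy list):

```python
# Toy sketch of activation amplification: scale a chosen subset of
# hidden-unit activations by a factor and pass the result onward.
# h_neuron_idx and factor are illustrative parameters, not the paper's.

def amplify(activations, h_neuron_idx, factor=2.0):
    """Return a copy of the activation vector with the chosen units scaled."""
    out = list(activations)
    for i in h_neuron_idx:
        out[i] *= factor
    return out

hidden = [0.1, -0.5, 0.8, 0.3]  # toy activation vector for one layer
steered = amplify(hidden, h_neuron_idx=[1, 3], factor=3.0)
print(steered)  # units 1 and 3 tripled, others unchanged
```

The paper's claim is that pushing this knob up increases a whole family of over-compliance behaviors at once, which is what suggests the neurons encode a general disposition rather than specific factual errors.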

[–] snooggums@piefed.world 4 points 1 month ago (1 children)

Prioritizing conversational compliance over factual integrity when the output is promoted as being factual is a design flaw.

Saying "double-check the output" does not excuse that flaw when LLM CEOs claim their models are like someone with a PhD, or that they can automate every white-collar job within a year.