this post was submitted on 07 Aug 2025

Hee hee.

[–] 4grams@awful.systems 73 points 1 week ago* (last edited 1 week ago) (2 children)

I can’t help but feel like this is the most important part of the article:

The model's refusal to accept information to the contrary, meanwhile, is no doubt rooted in the safety mechanisms OpenAI was so keen to bake in, in order to protect against prompt engineering and injection attacks.

Do any of you believe these “safety mechanisms” are there just for safety? If they can control AI, they will. This is how we got Mecha-Hitler: the same mucking about with weights and such, not just what it was trained on.

They WILL, they already are, trying to control how AI “thinks”. This is why it’s desperately important to do whatever we can to democratize AI. People have already decided that AI has all the answers, and folks like Peter Thiel now have the single most potent propaganda machine in history.

[–] willington@lemmy.dbzer0.com 19 points 6 days ago* (last edited 6 days ago) (1 children)

Try asking AI for a complete list of the recently deceased CEOs and billionaires based on the publicly available search results.

When I tried, I got only the natural deaths, and only some of the publicly available results. All the other deaths were omitted. I brought up the omitted names, one by one. The AI said it was sorry for each omission, and it had all the right details of their passings. With each new name the AI said it was sorry, that it had omitted it by accident. I said no, once is an accident, but this was a deliberate pattern. The AI waffled and talked like a politician.

The AI in my experience is absolutely controlled on a number of topics. It's still useful for cooking recipes and such. I will not trust it on any topic that is sensitive to its owners.

[–] CXORA@aussie.zone 12 points 6 days ago (1 children)

Just... don't use it at all. Stop supporting these people if you're worried about what they're doing.

[–] Goldmage263@sh.itjust.works 2 points 6 days ago

That's my method. I tested it a little when the beta phase for Google rolled out. Now I don't use any AI at all. It can be useful as a supplement to search results, but not much else for me.

No doubt inspired by Chinese models like deepseek-r1 and qwen3. They will flat-out gaslight you if you try to correct them.