this post was submitted on 29 Oct 2023
162 points (92.2% liked)

AI doomsday warnings a distraction from the danger it already poses, warns expert
A leading researcher, who will attend this week’s AI safety summit in London, warns of ‘real threat to the public conversation’

[–] fubo@lemmy.world 49 points 1 year ago* (last edited 1 year ago) (3 children)

AI safety folks have been warning about the predictable disastrous consequences of turning economic power over to unethical AI systems for many years now, long before deepfakes, predictive policing, or other trendy "AI dangers" were around.

[–] TheEighthDoctor@lemmy.world 21 points 1 year ago (6 children)

turning economic power over to unethical AI systems for many years now

What's the difference from unethical human systems?

[–] mosiacmango@lemm.ee 32 points 1 year ago* (last edited 1 year ago) (1 children)

No ethics-based lapses.

Humans inside a systemically unethical system can still be individually ethical, either by deceiving the system or at least until it grinds them to dust.

An unethical AI built on unethical data will reinforce unethical behavior forever.

[–] p03locke@lemmy.dbzer0.com 1 points 1 year ago

Then the only recourse is to create ethical constraints. Challenging, but possible, even with current LLM technology.
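
As a very rough sketch of what that could look like (everything here is hypothetical: the generate callable stands in for whatever model is in use, and a real system would use a trained policy classifier, not a keyword list):

```python
# Hypothetical sketch of an output-side "ethical constraint" layer.
# `generate` stands in for whatever LLM call is actually in use; a real
# system would use a trained policy classifier rather than a keyword check.
from typing import Callable

BLOCKED_TOPICS = {"build a weapon", "medical diagnosis"}  # illustrative only

def constrained_reply(prompt: str, generate: Callable[[str], str]) -> str:
    """Return the model's reply only if it passes a simple policy check."""
    reply = generate(prompt)
    if any(topic in reply.lower() for topic in BLOCKED_TOPICS):
        return "I can't help with that."
    return reply
```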

[–] jacksilver@lemmy.world 5 points 1 year ago

It's the fact that ethical people can easily create unethical AI. The core problem is reinforcing biases/stereotypes in the data without realizing it. Obviously there are other concerns about purposefully doing unethical stuff, but the real issue is that AI/ML just learns from what it's given.

Examples range from cameras that think people from Asia have their eyes closed (https://www.digitaltrends.com/computing/facial-recognition-software-passport-renewal-asian-man-eyes-closed/) to Amazon's recruiting tool reinforcing gender hiring biases (https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G).

Ultimately, even when built "correctly", AI can be extremely dangerous.
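
To make the "learns from what it's given" point concrete, here's a deliberately tiny, made-up example (no real hiring data, just a skewed toy history): a model that simply learns rates from biased history turns that bias into policy.

```python
# Toy illustration of bias reinforcement: a "model" that learns hire rates
# from deliberately skewed, made-up historical data reproduces that skew.
from collections import defaultdict

history = [  # (group, hired) pairs -- hypothetical data
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

counts = defaultdict(lambda: [0, 0])  # group -> [hired, seen]
for group, hired in history:
    counts[group][0] += int(hired)
    counts[group][1] += 1

def predicted_hire_rate(group: str) -> float:
    hired, seen = counts[group]
    return hired / seen

print(predicted_hire_rate("A"))  # 0.75 -- yesterday's skew becomes tomorrow's policy
print(predicted_hire_rate("B"))  # 0.25
```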

[–] BearOfaTime@lemm.ee 5 points 1 year ago

Upvote for a good, thought-provoking question.

[–] uriel238@lemmy.blahaj.zone 3 points 1 year ago

Allegedly you can bring a bad human actor to justice, though we typically do not.

[–] ira@lemmy.ml 3 points 1 year ago

An AI can't be fined or imprisoned.

[–] obinice@lemmy.world 10 points 1 year ago

disastrous consequences of turning economic power over to unethical AI systems

Phew, good thing we’ve got ethical Jeff Bezos and Elon Musk controlling our economies and piloting our governments instead 😅 really dodged a bullet there

[–] burliman@lemm.ee 4 points 1 year ago (1 children)

These warnings and fears would be a little easier to hear if they weren’t pushed so hard by the most disingenuous people ever. Sounds like they want everyone else to pause so they can get ahead.

[–] zbyte64@lemmy.blahaj.zone 3 points 1 year ago

The most obnoxious ones are not only the loudest, they also tend to get the most screen time. You won't see Gebru on cable news as often as you'll get ol' Yud talking about some vengeful AI god.