this post was submitted on 29 Sep 2024
205 points (99.5% liked)
Technology
That's a deep rabbit hole, and it can't be stated as a known fact. It's absolutely true of LLMs right now, but at some point the line could be crossed. If and when that happens, how, and by what definition are questions in a long-running debate that is nowhere near resolved.
It's entirely possible that an AGI/ASI could come about that is both superintelligent and self-conscious yet still has no sense of morality. And how can we, at a human level of intelligence, even comprehend what's possible? Therein lies the real danger: we have no idea what we could be heading towards.
To be a moral agent, your actions towards others need to have consequences for yourself, whether those consequences are direct, social, emotional, or something else. And intelligence in itself doesn't provide those consequences.
The closest you could get, with AGI alone, would be to hardcode it with ethical principles, but that's another matter. (I mention this because people often conflate ethics and morality, even though they're two different cans of worms.)