this post was submitted on 06 May 2026
Technology
[–] valaramech@fedia.io 10 points 2 days ago (1 children)

To extend this a little bit, I'm not convinced "is X conscious?" is really the question anyone is trying to answer. What I think we're really trying to suss out is "does X require rights?" and where the line for that is.

As another commenter asked, something like "is turning this off equivalent to murder?" is effectively asking if the thing deserves a "right to life" like any human might. At what point does a "thinking machine" cross the line from "person-like" to "person"? I doubt anyone has a satisfactory answer to that question and, unfortunately, I strongly doubt we'll have one until well after it's actually needed.

I think grappling with that question is maybe a little more straightforward when we consider other animals we already consider highly intelligent (e.g. pigs, dolphins, or octopuses) but that we don't give the same kinds of rights to that we would a human. At what point would we consider a non-human animal to be equal to ourselves? How many person-like traits does something need before it is a person?

Anyways, all that aside, I think we should start asking the questions we're really trying to answer and stop using other questions as proxies for that one.

[–] chrash0@lemmy.world 3 points 2 days ago (1 children)

yeah i don’t think we’re there yet. these models aren’t capable of remembering their life beyond a single session, so destroying a data center isn’t really killing anything. similarly, artificial biological neural networks aren’t sophisticated enough to be aware of their existence (yet).

while LLMs may be aware enough to beg for their existence when prompted to “think” about it, they’re hopelessly finite (frozen weights, limited context windows). we would need an actually “online learning” system or some other architecture not bound by context to have this conversation meaningfully. biological neural networks are a path to that, but online networks are simply too unpredictable and expensive to run for now.
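To make the "hopelessly finite" point concrete, here's a toy sketch (all names invented for illustration, not any real API): the only state a frozen model carries between turns is whatever still fits in a bounded context window, while its weights never change at inference time.

```python
from collections import deque

CONTEXT_LIMIT = 4  # max turns the "model" can see at once (made-up number)

WEIGHTS = {"w": 0.5}  # fixed at training time; never updated while chatting

def respond(context):
    # Stand-in for a forward pass: the output depends only on the frozen
    # weights and whatever currently fits in the context window.
    return f"reply to: {list(context)}"

# deque(maxlen=N) silently drops the oldest item once full, which is a
# crude analogue of old turns falling out of the context window.
context = deque(maxlen=CONTEXT_LIMIT)
for turn in ["hi", "a", "b", "c", "d"]:
    context.append(turn)
    reply = respond(context)

# After five turns, the first message has already fallen out of view,
# and nothing about WEIGHTS has changed:
assert "hi" not in context
```

An "online learning" system, by contrast, would be one where something like `WEIGHTS` actually changes in response to experience.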

the crazy thing tho is that these systems may have a capability that some cows and pigs don't: the ability to comprehend their own demise and experience existential dread (at least performatively).

[–] badgermurphy@lemmy.world 1 points 1 day ago

They don't even really "remember" at all in any meaningful sense. The conversation history gets logged, but they only act while responding to an input, and sit idle otherwise, awaiting further prompts. They lack agency beyond responding to those inputs.
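A minimal sketch of what that means in practice (invented names, not a real API): the "model" is a pure function that only runs when called, and its apparent memory is just the transcript the client resends with every request.

```python
def model(transcript):
    # Pure function of its input: no hidden state survives between calls.
    # The model is completely idle except while this function is running.
    return f"I see {len(transcript)} prior messages"

transcript = []  # client-side log, not anything the model "remembers"
for user_msg in ["hello", "remember me?"]:
    transcript.append(user_msg)
    reply = model(transcript)  # the full history is resent every turn
    transcript.append(reply)

# Wipe the client's log and the "memory" is gone:
assert model([]) == "I see 0 prior messages"
```

This is roughly why destroying a session doesn't destroy anything the model itself was holding onto: the state lived in the caller's log all along.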

I think we'll really be talking about AI when we have more autonomous agents that are capable of deciding which actions to take from a list of their own creation, and of performing those actions competently. To be clear, as far as I'm aware there is no technology even on the drawing board capable of anything like this.