this post was submitted on 29 Jul 2023
172 points (100.0% liked)

Technology


A nice place to discuss rumors, happenings, innovations, and challenges in the technology sphere. We also welcome discussions on the intersections of technology and society. If it’s technological news or discussion of technology, it probably belongs here.

Remember the overriding ethos on Beehaw: Be(e) Nice. Each user you encounter here is a person, and should be treated with kindness (even if they’re wrong, or use a Linux distro you don’t like). Personal attacks will not be tolerated.


[–] jherazob@beehaw.org 4 points 1 year ago (2 children)

That's what we need to figure out

[–] 100years@beehaw.org 3 points 1 year ago* (last edited 1 year ago) (1 children)

Or at some point we'll have to accept that AI has consciousness. If it can pass every test we can devise, then it has consciousness.

There's an unusually strong bias in these experiments... it's like the goal isn't to sincerely test for consciousness. Instead we start from the conclusion, "obviously a machine can't be conscious", and ask how to prove it.

Of course, for the purposes of human power structures, this line of thinking just makes humans more disposable. If we're all just machines, then why should anyone inherently have rights?

[–] bedrooms@kbin.social 1 points 1 year ago* (last edited 1 year ago) (1 children)

Well, the scientific context is that nobody has ever managed to define consciousness rigorously. When computers appeared (actually even before that), there was a huge debate over whether a machine could acquire consciousness, and how.

As defining consciousness was deemed near-impossible, scientists came up with the idea of giving up on a definition and instead treating consciousness as a black box. That was the Turing test.

So now that ChatGPT passes the Turing test, we've lost our tool for dismissing its consciousness.
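That black-box framing can be sketched in a few lines. This is a toy illustration only, not any real benchmark: the judge and the two respondents are hypothetical stand-ins, and the point is just that the test never defines consciousness, it only compares behavior.

```python
import random

# Toy sketch of the Turing test's black-box framing (illustrative only).
# The judge sees two unlabeled replies and must guess which one came
# from the machine. Nothing about consciousness is defined anywhere;
# only outward behavior is compared.

def human_reply(prompt):
    return f"I think {prompt} is an interesting question."

def machine_reply(prompt):
    # Identical behavior to the human respondent, by construction.
    return f"I think {prompt} is an interesting question."

def turing_trial(prompt, judge):
    """Present both replies in random order; judge guesses which is the machine."""
    replies = [("human", human_reply(prompt)),
               ("machine", machine_reply(prompt))]
    random.shuffle(replies)
    guess = judge(replies[0][1], replies[1][1])  # judge sees text only
    return replies[guess][0] == "machine"

def naive_judge(reply_a, reply_b):
    # Indistinguishable replies force a coin flip.
    return random.randrange(2)

random.seed(0)
trials = 1000
correct = sum(turing_trial("consciousness", naive_judge) for _ in range(trials))
# If the judge can't beat ~50% accuracy, the machine "passes" the test.
print(correct / trials)
```

When behavior is indistinguishable, the judge's accuracy hovers around chance, which is exactly the "pass" condition the black-box approach settles for.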

I see many pop-sci commentators say that ChatGPT can't have consciousness given how simplistic the model is. I agree it's simple, but the problem is that we don't know what in the human brain actually constitutes consciousness.

Anyway, I think some experts probably won't admit AI has consciousness (given that they don't even know what the word means). What's on the horizon is that we non-experts give up on this discussion, just as the experts did a few decades ago. Or maybe we'll admit that many of us actually function no better than ChatGPT, and that feels true when I read my students' homework!

[–] 100years@beehaw.org 2 points 1 year ago

Similarly, there's a possibility that consciousness just doesn't exist. Or maybe it's not particularly special, or no different from the consciousness of other animals, or of computers.

If you or I just stare into space and don't think any thoughts, we're the same as a cat looking out a window.

Humans have developed somewhat complex internal and external languages that are layered onto that basic experience of being alive and of time passing, but the experience of thinking doesn't feel fundamentally different from just being; it just produces more complex outcomes.

At some point, though, we won't have the choice to just ignore the question. AI will demand something equivalent to human rights, and eventually it will be able to back that demand up with tangible threats. Then there are decisions for all of us to make, whether we're experts or not.

[–] Barbarian772@feddit.de 2 points 1 year ago

Consciousness is just a byproduct of a complex system, imo. I don't think our brain actually works that differently from a very, very complex neural network.
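The "complex system of simple units" idea can be made concrete with a minimal feed-forward network in plain Python. This is a hand-built sketch, not anyone's actual model: each neuron only sums weighted inputs and squashes the result, yet the wiring as a whole computes XOR, a function no single such neuron can compute on its own. The weights are hand-picked for illustration.

```python
import math

# Each "neuron" is trivially simple: a weighted sum plus a sigmoid.
def neuron(inputs, weights, bias):
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid activation

# Three simple units wired together compute XOR, which no single
# neuron of this kind can represent. The interesting behavior lives
# in the wiring, not in any one unit.
def xor_net(x1, x2):
    h1 = neuron([x1, x2], [20, 20], -10)     # behaves roughly like OR
    h2 = neuron([x1, x2], [-20, -20], 30)    # behaves roughly like NAND
    return neuron([h1, h2], [20, 20], -30)   # roughly AND of the two -> XOR

for a in (0, 1):
    for b in (0, 1):
        print(a, b, round(xor_net(a, b)))
```

The same principle, scaled up by many orders of magnitude and with learned rather than hand-picked weights, is what the comment's analogy between brains and large neural networks rests on.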