this post was submitted on 14 Aug 2024
-62 points (17.0% liked)

[–] braindefragger@lemmy.world 43 points 2 months ago (1 children)

It’s an LLM with well-documented processes and limitations. Not going to even watch this waste of bits.

[–] Rezbit@lemmy.world 31 points 2 months ago (1 children)

Philosopher doesn't really understand what an LLM is

[–] hendrik@palaver.p3x.de 10 points 2 months ago (2 children)

I like the video. I think it's fun to argue with ChatGPT. Just don't expect anything to come of it, or to get any closer to objective truth that way. ChatGPT just backpedals and gets caught up in lies and contradictions with what it said earlier.

[–] Zwiebel@feddit.org 11 points 2 months ago
[–] TheBigBrother@lemmy.world 7 points 2 months ago

Stopped watching it when the VPN ad appeared.

[–] Telorand@reddthat.com 3 points 2 months ago (1 children)

This all hinges on the definition of "conscious." You can make a valid syllogism that defines it, but that doesn't necessarily represent a reasonable or accurate summary of what consciousness is. There's no current consensus among philosophers and scientists on what consciousness is, and many presume an anthropocentric model.

I can't watch the video right now, but I was able to get ChatGPT to concede, in a few minutes, that it might be conscious, the nature of which is sufficiently different from humans so as to initially not appear conscious.

[–] UraniumBlazer@lemm.ee -5 points 2 months ago (2 children)

Exactly. Which is what makes this entire thing quite interesting.

Alex here (the interrogator in the video) is involved in AI safety research. Questions like "do the ethical frameworks of AI match those of humans?" and "how do we get AI to not misinterpret inputs and do something dangerous?" are very important to answer.

Following this comes the idea of consciousness. Can machine learning models feel pain? Can we unintentionally put such models into immense eternal pain? What even is the nature of pain?

Alex demonstrated that ChatGPT was lying intentionally. Can it lie intentionally for other things? What about the question of consciousness itself? Could we build models that intentionally fail the Turing test? Should we be scared of such a possibility?

Questions like these are really interesting. Unfortunately, they are shot down immediately on Lemmy, which is pretty disappointing.

[–] conciselyverbose@sh.itjust.works 5 points 2 months ago (1 children)

Alex demonstrated that ChatGPT was lying intentionally

No, he most certainly did not. LLMs have no agency. "Intentionally" doing anything isn't possible.

[–] Telorand@reddthat.com 2 points 2 months ago (2 children)

Questions like these are really interesting. Unfortunately, they are shot down immediately on Lemmy, which is pretty disappointing.

It's just because AI stuff is overhyped pretty much everywhere as a panacea to solve all ~~capitalist~~ ills. Seems every other article, no matter the subject or demographic, is about how AI is changing/ruining it.

I do think that grappling with the idea of consciousness is a necessary component of the human experience, and AI is another way for us to continue figuring out what it means to be conscious, self-aware, or a free agent. I also agree that it's interesting to try to break AI and push it to its limits, but then, breaking software is in my professional interests!

[–] Ilandar@aussie.zone 2 points 2 months ago (1 children)

I do think that grappling with the idea of consciousness is a necessary component of the human experience, and AI is another way for us to continue figuring out what it means to be conscious, self-aware, or a free agent

You might be interested in the book 'The Naked Neanderthal' by Ludovic Slimak. He is an archaeologist but the book is quite philosophical and explores this idea of learning about humanity through the study of other forms of intelligence (Neanderthals). Here are some opening paragraphs from the book to give you an idea of what I mean:

The interstellar perspective, this suggestion of distant intelligences, reminds us that we humans are alone, orphans, the only living conscious beings capable of analysing the mysteries of the universe that surrounds us. There are countless other forms of animal intelligence, but no consciousness with which we can exchange ideas, compare ourselves, or have a conversation.

These distant intelligences outside of us perhaps do exist in the immensity of space - the ultimate enigma. And yet we know for certain that they have existed in a time which appears distant to us but in fact is extremely close.

The real enigma is that these intelligences from the past became progressively extinct over the course of millennia; there was a tipping point in the history of humanity, the last moment when a consciousness external to humanity as we conceive it existed, encountered us, rubbed shoulders with us. This lost otherness still haunts us in our hopes and fears of artificial intelligence, the instrumentalized rebirth of a consciousness that does not belong to us.

[–] Telorand@reddthat.com 1 points 2 months ago

Sounds cool! I'll see if my local libraries have a copy. Thanks for the rec!

[–] UraniumBlazer@lemm.ee 1 points 2 months ago (1 children)

It's just because AI stuff is overhyped pretty much everywhere as a panacea to solve all ~~capitalist~~ ills. Seems every other article, no matter the subject or demographic, is about how AI is changing/ruining it.

Agreed :(

You know what's sad? Communities that look at this from a neutral, objective position (while still being fun) exist on Reddit. I really don't want to keep using it though. But I see nothing like that on Lemmy.

[–] Telorand@reddthat.com 3 points 2 months ago

Lemmy is still in its infancy, and we're the early adopters. It will come into its own in due time, just like Reddit did.