this post was submitted on 18 Sep 2023
57 points (91.3% liked)

DeepMind’s cofounder: Generative AI is just a phase. What’s next is interactive AI.

DeepMind cofounder Mustafa Suleyman wants to build a chatbot that does a whole lot more than chat. In a recent conversation I had with him, he told me that generative AI is just a phase. What’s next is interactive AI: bots that can carry out tasks you set for them by calling on other software…

[–] SnipingNinja@slrpnk.net 2 points 1 year ago (1 children)

I'm not saying I have a definition or a way to get there, just that it hasn't actually demonstrated that it understands (as shown by the tasks where it fails).

[–] Barack_Embalmer@lemmy.world 1 points 1 year ago (1 children)

I still don't understand what you mean. If you don't have a criterion for "actually" understanding, how has it demonstrably failed?

[–] SnipingNinja@slrpnk.net 1 points 1 year ago (1 children)

I don't have an exact example for you to test out, so I'll try to explain in general terms:

Let's say you give ChatGPT a task that a human can do easily, but ChatGPT fails at it consistently. Isn't that proof that it doesn't understand?

It might be hard to grasp without an example, but the problem with any specific example is that OpenAI can become aware of it and tweak the algorithm to correct just that one case.

One example I remembered while typing this is how it fails at giving you a list of words that fit a certain criterion, like having a specific number of letters. It's not the best example I've come across, but it still seems to fail at this one.
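To make that concrete, here's a minimal sketch (in Python) of the check that prompt implies; the word list is a made-up stand-in for a model's reply, not actual ChatGPT output.

```python
# Minimal sketch: verify whether every word in a reply actually has
# the requested number of letters. The list below is a hypothetical
# stand-in for model output, not a real ChatGPT response.

def words_with_wrong_length(words, target_length):
    """Return the words that don't match the requested letter count."""
    return [w for w in words if len(w) != target_length]

model_output = ["planet", "garden", "bridge", "umbrella"]  # hypothetical reply to "list some 6-letter words"
print(words_with_wrong_length(model_output, 6))  # -> ['umbrella'], the kind of slip being described
```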

Anyway, hopefully you get my point about the lack of understanding.

[–] Barack_Embalmer@lemmy.world 1 points 1 year ago (1 children)

Fair enough, but it just seems like a fluffy distinction.

And I don't think they "tweak the algorithm" so much as generate a load more training data for that one specific task to get it up to spec.
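As a rough illustration of what that could look like, here's a minimal sketch that generates synthetic prompt/completion pairs for the letter-count task; the word list, field names, and format are assumptions made for the example, not OpenAI's actual pipeline or data.

```python
import json
import random

# Hypothetical sketch: produce extra training examples targeting the
# "list words with exactly N letters" task, the kind of task-specific
# data one might add to a fine-tuning set. Everything here is made up
# for illustration.

WORDS = ["planet", "garden", "bridge", "stream", "cat", "dog", "umbrella", "sky"]

def make_example(n_letters, count=3):
    candidates = [w for w in WORDS if len(w) == n_letters]
    chosen = random.sample(candidates, min(count, len(candidates)))
    return {
        "prompt": f"List {count} words that have exactly {n_letters} letters.",
        "completion": ", ".join(chosen),
    }

# Emit a handful of examples in a JSONL-like form.
for _ in range(3):
    print(json.dumps(make_example(n_letters=6)))
```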

In any case, humans make mistakes on lots of stuff too, so if the criterion for "true" understanding is to make no mistakes, then humans cannot be said to understand either.

[–] SnipingNinja@slrpnk.net 1 points 1 year ago

As I said, my example wasn't the best one, but you're right that by that standard humans could be judged badly too.