this post was submitted on 25 Nov 2023

Machine Learning

[–] El_Minadero@alien.top 0 points 10 months ago (43 children)

I mean, everyone is sort of ignoring the fact that no ML technique has been shown to do anything more than mimic statistical aspects of its training set. Is statistical mimicry AGI? On some performance benchmarks, better statistical mimicry does appear to approach capabilities we associate with AGI.
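
A minimal illustration of what I mean by "statistical mimicry" in the narrowest sense: a toy bigram model that can only reproduce word co-occurrence statistics from its training text, with no representation of meaning or reasoning. The corpus and function names here are made up for the example.

```python
import random
from collections import defaultdict

def train_bigram(text):
    """Count how often each word follows each other word in the training text."""
    counts = defaultdict(lambda: defaultdict(int))
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def sample(counts, start, length=10):
    """Generate text by sampling each next word in proportion to its training-set counts."""
    word, out = start, [start]
    for _ in range(length):
        followers = counts.get(word)
        if not followers:
            break
        choices, weights = zip(*followers.items())
        word = random.choices(choices, weights=weights)[0]
        out.append(word)
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the rug"
model = train_bigram(corpus)
print(sample(model, "the"))  # e.g. "the dog sat on the mat" -- fluent-looking, purely statistical
```

Obviously a modern transformer is vastly more sophisticated than this, but the training objective is still essentially "match the statistics of the corpus."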

I personally am quite suspicious of the idea that the best lever to pull is just giving it more parameters. Our own brains have such complicated neural/psychological circuitry for executive function, long- and short-term memory, Type 1 and Type 2 thinking, "internal" dialog and visual models, and, more importantly, the ability to few-shot learn the logical underpinnings of an example set. Without a fundamental change in how we train NNs, or even in our conception of what an effective NN is to begin with, we're not going to see the paradigm shift everyone's been waiting for.

[–] nemoknows@alien.top 1 points 10 months ago (8 children)

See, the trouble with the Turing test is that the linguistic capabilities of the most sophisticated models well exceed those of the dumbest humans.

[–] davikrehalt@alien.top 1 points 10 months ago (3 children)

I think we can just call the Turing test passed in this case.

[–] redd-zeppelin@alien.top 1 points 9 months ago

The Turing test was passed in the 60s by rule-based systems like ELIZA. It's not a great test.

Is ChatGPT Passing the Turing Test Really Important? https://youtu.be/wdCzGwQv4rI
