this post was submitted on 09 Nov 2023
Machine Learning
I doubt that any current model is in the “emerging AGI” category (even by their own metric of “general ability and metacognitive abilities like learning new skills”).
The models we currently have are fundamentally unable to update their own weights, so they do not “learn new skills”. I also don’t like how they use “wide range of tasks” as a metric. Yes, LLMs outperform many humans at things like standardized tests, but I have yet to see an LLM that can consistently play tic-tac-toe at the level of a 5-year-old without a paragraph of prompt engineering.
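To make “consistently plays within the rules” concrete, here is a toy sketch (not from the comment, just an illustration) of how you could score a model’s tic-tac-toe play: check every proposed move for legality before accepting it. The board representation and function names are my own assumptions.

```python
# Toy legality checker for tic-tac-toe, to make "plays within the rules"
# measurable. The board is a list of 9 cells ("X", "O", or " " for empty);
# a move is an index 0-8.

def is_legal_move(board, move):
    """A move is legal if it targets an empty cell on the 3x3 board."""
    return 0 <= move < 9 and board[move] == " "

def apply_move(board, move, player):
    """Return a new board with the player's mark, or raise on an illegal move."""
    if not is_legal_move(board, move):
        raise ValueError(f"illegal move {move} by {player}")
    new_board = list(board)
    new_board[move] = player
    return new_board
```

A harness like this lets you count, over many games, how often a model's proposed move is rejected, which is one way to quantify the consistency being questioned above.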
I’m not the most educated on this topic (I’m still just a student studying machine learning), but I think many researchers are overestimating the abilities of LLMs.
If I write out a one-paragraph text on how to play a game I've just made up called "Madeupoly," and you read it, we'd say that you learned a new skill. If we prompt an LLM with the same text, and it can play within the rules afterward, couldn't we say it has also learned a new skill?