[–] ThisBeObliterated@alien.top 1 points 10 months ago

TBH, even though I appreciate the effort in creating a research roadmap, once you put the AGI sticker on it, it feels more like planting landmarks to generate buzz down the road. Meanwhile, a lot of features from other AGI definitions, such as autonomous agency, multimodal/sensory learning, world modeling and interactivity, are conveniently left out ("non-physical" tasks, hey, our lab doesn't work with those, eh, but we can totes do AGI). This caters neither to the academics who are tired of loaded monikers in the field, nor to the futurology enthusiasts who have a much wider definition of AGI.

[–] ThisBeObliterated@alien.top 1 points 10 months ago (2 children)

Well, you sort of answered the question yourself - the fact that prompting works in some cases means you don't strictly need weight updates for new skills to be learned. That doesn't mean prompting is an end-all solution, but for DeepMind it seems enough to consider LLMs "emerging AGI".

Most people entering the field now (in the literal sense, i.e. academia, not some random r/singularity ramblers) disregard current LLM capabilities, yet the level of reasoning LLMs show today would have been deemed almost a fantasy 5 years ago.