this post was submitted on 09 Nov 2023
1 points (100.0% liked)

Machine Learning

Paper: https://arxiv.org/abs/2311.02462

Abstract:

We propose a framework for classifying the capabilities and behavior of Artificial General Intelligence (AGI) models and their precursors. This framework introduces levels of AGI performance, generality, and autonomy. It is our hope that this framework will be useful in an analogous way to the levels of autonomous driving, by providing a common language to compare models, assess risks, and measure progress along the path to AGI. To develop our framework, we analyze existing definitions of AGI, and distill six principles that a useful ontology for AGI should satisfy. These principles include focusing on capabilities rather than mechanisms; separately evaluating generality and performance; and defining stages along the path toward AGI, rather than focusing on the endpoint. With these principles in mind, we propose 'Levels of AGI' based on depth (performance) and breadth (generality) of capabilities, and reflect on how current systems fit into this ontology. We discuss the challenging requirements for future benchmarks that quantify the behavior and capabilities of AGI models against these levels. Finally, we discuss how these levels of AGI interact with deployment considerations such as autonomy and risk, and emphasize the importance of carefully selecting Human-AI Interaction paradigms for responsible and safe deployment of highly capable AI systems.

https://preview.redd.it/64biopsh79zb1.png?width=797&format=png&auto=webp&s=9af1c5085938dac000aaf23aa1b306133b01edb4
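
To make the two-axis ontology described in the abstract concrete, here is a minimal sketch (not code from the paper) of how the classification could be represented as a data structure. The performance and generality level names and percentile thresholds follow the paper's "Levels of AGI" table; the SystemRating class and the example rating at the end are illustrative assumptions, not claims about any particular system.

```python
from dataclasses import dataclass
from enum import Enum, IntEnum


class Performance(IntEnum):
    """Depth of capability, benchmarked against skilled adult humans."""
    NO_AI = 0
    EMERGING = 1     # equal to or somewhat better than an unskilled human
    COMPETENT = 2    # at least 50th percentile of skilled adults
    EXPERT = 3       # at least 90th percentile of skilled adults
    VIRTUOSO = 4     # at least 99th percentile of skilled adults
    SUPERHUMAN = 5   # outperforms 100% of humans


class Generality(Enum):
    """Breadth of capability."""
    NARROW = "Narrow"    # a clearly scoped task or set of tasks
    GENERAL = "General"  # a wide range of non-physical tasks


@dataclass
class SystemRating:
    """Hypothetical container pairing a system with a cell of the levels table."""
    name: str
    performance: Performance
    generality: Generality

    def label(self) -> str:
        return f"{self.performance.name.capitalize()} {self.generality.value} AI"


# Illustrative assumption only, not a claim about any specific product.
rating = SystemRating("example-chatbot", Performance.EMERGING, Generality.GENERAL)
print(rating.label())  # -> "Emerging General AI"
```

Using an ordered IntEnum for performance mirrors the abstract's framing of progress as discrete stages along the path to AGI rather than a single endpoint, so systems can be compared level by level.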

[–] Platapos@alien.top 1 points 1 year ago (1 children)

Machine learning is becoming sociology 2.0 in academia instead of remaining firmly within STEM, and it really sucks. These papers are meaningless beyond padding the resumes of grifters and deserve pushback. AI as a field is software, coding, and maybe some math, not smart-sounding essays and research papers from the same folks who built their careers around the cryptocurrency/NFT/Web3.0 dungpile while having zero hard skills beyond talking themselves into cushy jobs at startups.

[–] APaperADay@alien.top 1 points 1 year ago (1 children)

"grifters"

"same folks who built their careers around the cryptocurrency/NFT/Web3.0 dungpile while having zero hard skills"

Insults like this are completely uncalled-for. All of the authors of this paper are accomplished ML researchers, and none of them are connected to what you've called the "cryptocurrency/NFT/Web3.0 dungpile".

[–] Platapos@alien.top 1 points 1 year ago (1 children)

It’s a mixed bag. I see a few researchers who have co-authored dozens of papers very similar to this one, and others who actually seem to build models that move machine learning forward. I still hold to the belief that anyone who doesn’t have copious amounts of programming experience should not be involved in academia related to machine learning. It’s not a space suited to people without deep hands-on experience in the topic.

[–] currentscurrents@alien.top 1 points 1 year ago

Jascha Sohl-Dickstein, one of the paper's authors, invented diffusion models; he's a pretty big name in the field.

"anyone who doesn’t have copious amounts of programming experience should not be involved in academia related to machine learning"

ML research is very heavy on math and statistics. In general, the skills it requires are quite different from the skills required for programming.