this post was submitted on 09 Mar 2024
99 points (89.6% liked)

Technology


So-called "emergent" behavior in LLMs may not be the breakthrough that researchers think.

[–] kromem@lemmy.world 6 points 8 months ago

I'm not sure why they're describing it as "a new paper" - it came out in May 2023 (and as such notably used only GPT-3, not GPT-4, which is where some of the biggest leaps to date have been documented).

For those interested in the debate on this, the rebuttal by Jason Wei (from the original emergent abilities paper, and also the guy behind the CoT prompting paper) is interesting: https://www.jasonwei.net/blog/common-arguments-regarding-emergent-abilities

In particular, I find his argument at the end compelling:

Another popular example of emergence which also underscores qualitative changes in the model is chain-of-thought prompting, for which performance is worse than answering directly for small models, but much better than answering directly for large models. Intuitively, this is because small models can’t produce extended chains of reasoning and end up confusing themselves, while larger models can reason in a more-reliable fashion.
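
To make Wei's contrast concrete, here's a minimal sketch of direct prompting versus few-shot CoT prompting. The `complete()` helper is hypothetical, a stand-in for whichever model API you actually use:

```python
# Minimal sketch: direct prompting vs. chain-of-thought (CoT) prompting.
# `complete()` is a hypothetical stand-in for a real model API call.

def complete(prompt: str) -> str:
    """Hypothetical stand-in for a call to your LLM provider; replace with a
    real client. Returns a placeholder here so the sketch runs as-is."""
    return "<model output for: " + prompt[:40] + "...>"

QUESTION = (
    "A bat and a ball cost $1.10 in total. The bat costs $1.00 more "
    "than the ball. How much does the ball cost?"
)

# Direct prompting: ask for the answer outright.
direct = complete(f"Q: {QUESTION}\nA:")

# CoT prompting: prepend an exemplar whose answer spells out its reasoning,
# inviting the model to emit intermediate steps before the final answer.
# Per Wei's argument, small models confuse themselves with the extra steps
# and do worse than direct answering; large models do much better.
COT_EXEMPLAR = (
    "Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. "
    "Each can has 3 tennis balls. How many tennis balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 tennis balls each is 6 "
    "tennis balls. 5 + 6 = 11. The answer is 11.\n\n"
)
cot = complete(COT_EXEMPLAR + f"Q: {QUESTION}\nA:")
```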

If you follow the recent evolution of prompting research, there's a definite pattern of increasing reliance on the model's inherent capabilities.

Whether that's using analogy to solve similar problems (https://openreview.net/forum?id=AgDICX1h50) or self-determining the optimal strategy for a given problem (https://arxiv.org/abs/2402.03620), there are double-digit performance gains in state-of-the-art models from having them perform actions that less sophisticated models simply cannot.
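
Both techniques boil down to wrapping the problem in meta-instructions that only a sufficiently capable model can follow. Here's a rough sketch of what those prompts look like - the wording is my paraphrase of the papers' ideas, not their exact templates:

```python
def analogical_prompt(problem: str) -> str:
    """Analogical prompting, roughly: have the model self-generate related
    exemplars and their solutions before tackling the target problem."""
    return (
        f"Problem: {problem}\n\n"
        "First, recall three relevant problems and explain how each is solved.\n"
        "Then, using those solutions as guidance, solve the problem above."
    )


def self_discover_prompt(problem: str, modules: list[str]) -> str:
    """Self-Discover-style prompting, roughly: have the model select and
    compose reasoning strategies before executing them on the problem."""
    listed = "\n".join(f"- {m}" for m in modules)
    return (
        f"Problem: {problem}\n\n"
        f"Candidate reasoning modules:\n{listed}\n\n"
        "Select the modules most useful for this problem, adapt them into a "
        "step-by-step plan, then follow the plan to solve the problem."
    )
```

The same wrapper that buys double-digit gains in a frontier model can actively hurt a smaller one, because the weaker model can't execute the meta-instructions coherently.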

The compounding effects of competence alone mean that progress here isn't going to follow a linear trajectory.
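
A toy way to see why the gains compound (illustrative numbers, not from either paper): if each reasoning step succeeds independently with probability p, an n-step task succeeds end-to-end with probability p^n, so modest per-step improvements translate into outsized jumps on long tasks.

```python
# Toy model of compounding competence (illustrative numbers only):
# if each reasoning step succeeds independently with probability p,
# an n-step task succeeds end-to-end with probability p ** n.
for p in (0.90, 0.95, 0.99):
    for n in (5, 10, 20):
        print(f"per-step {p:.2f}, {n:2d} steps -> success {p ** n:5.1%}")

# A per-step bump from 0.90 to 0.99 moves 20-step success from ~12% to ~82%:
# roughly linear gains in step reliability, nonlinear gains on long tasks.
```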