this post was submitted on 25 Nov 2023
Machine Learning
I'm not an expert by any means, just someone who is interested and reads AI news, but lately it seems like optimisation and efficiency are doing more to improve LLM performance than simply increasing parameter counts. Research is also clearly pointing at different architectures, beyond transformers, to improve performance. I'd be surprised if GPT-5, which is probably 2-3 years away, were just a mere extension of GPT-4, i.e. an LLM with many more parameters. These statements from Bill seem a little short-sighted and at odds with the general consensus.
I am also aware of the Dunning–Kruger effect and how it may be tricking me into thinking I somewhat understand things I actually have no idea about lol