this post was submitted on 25 Nov 2023

LocalLLaMA

Community to discuss Llama, the family of large language models created by Meta AI.


So RWKV 7B v5 is 60% trained now. I saw that the multilingual capabilities are better than Mistral's, and the English capabilities are close to Mistral's, except for HellaSwag and ARC, where it's a little behind. All the benchmarks are on the RWKV Discord, and you can google the pros/cons of RWKV, though most of those refer to v4.

Thoughts?

[–] artelligence_consult@alien.top 1 points 10 months ago (2 children)

SIGNIFICANTLY less - it is not a transformer that goes fully quadratic with context length.
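
For intuition on the scaling difference, here is a back-of-the-envelope sketch (my own illustration, not RWKV's or any transformer's actual kernels): attention touches every previous token for each new token, while a recurrent model does a fixed amount of work per token.

```python
# Rough cost model (illustrative only): a transformer's attention looks at every
# previous token for each new token, while an RNN-style model does constant work
# per token.

def attention_ops(seq_len: int) -> int:
    # each token attends to all tokens up to and including itself -> ~T^2 / 2
    return sum(t for t in range(1, seq_len + 1))

def recurrent_ops(seq_len: int) -> int:
    # one fixed-size state update per token -> ~T
    return seq_len

for T in (1_000, 8_000, 32_000):
    print(f"T={T:>6}: attention ~{attention_ops(T):>13,} ops, "
          f"recurrent ~{recurrent_ops(T):>7,} ops")
```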

[–] involviert@alien.top 1 points 10 months ago (1 children)
[–] Disastrous_Elk_6375@alien.top 1 points 10 months ago

Nope, it's an RNN without attention, with some tricks to enable parallel training.
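
To sketch what "RNN without attention, trained in parallel" can look like, here is a toy linear recurrence (a simplified illustration assuming PyTorch; this is not RWKV's actual time-mixing math): because the recurrence is linear, the same outputs can be computed token by token at inference or for the whole sequence at once during training.

```python
import torch

# Toy linear recurrence:  h_t = decay * h_{t-1} + x_t
# (not RWKV's real formulas, just the general idea)

def sequential(x: torch.Tensor, decay: float) -> torch.Tensor:
    # inference-style: update a fixed-size state one token at a time
    h = torch.zeros(x.shape[-1])
    outs = []
    for t in range(x.shape[0]):
        h = decay * h + x[t]
        outs.append(h)
    return torch.stack(outs)

def parallel(x: torch.Tensor, decay: float) -> torch.Tensor:
    # training-style: the weight of x_s inside h_t is decay^(t-s) for s <= t,
    # so the whole sequence can be computed with one matrix multiply
    T = x.shape[0]
    idx = torch.arange(T)
    powers = (idx[:, None] - idx[None, :]).clamp(min=0).float()
    weights = torch.tril(torch.ones(T, T)) * decay ** powers
    return weights @ x  # (T, T) @ (T, D) -> (T, D)

x = torch.randn(6, 4)
print(torch.allclose(sequential(x, 0.9), parallel(x, 0.9), atol=1e-5))  # True
```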

[–] Aaaaaaaaaeeeee@alien.top 1 points 10 months ago

It's basically... 0?

From GitHub:

More friendly than usual GPT. Because you don't need to keep a huge context (or kv cache). You just need the hidden state of the last single token.
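
As a rough illustration of that point (toy numbers I'm assuming here: fp16, 32 layers, hidden size 4096 — not the exact dimensions or state layout of RWKV 7B or any particular transformer): a KV cache grows linearly with context length, while a recurrent state stays the same size no matter how long the context is.

```python
# Toy memory comparison (illustrative assumptions: fp16, 32 layers, hidden 4096;
# not the real numbers for RWKV 7B or Mistral 7B).

BYTES = 2      # fp16
LAYERS = 32
HIDDEN = 4096

def kv_cache_bytes(context_len: int) -> int:
    # one key vector + one value vector per token, per layer
    return context_len * LAYERS * 2 * HIDDEN * BYTES

def recurrent_state_bytes() -> int:
    # a fixed-size state per layer, independent of how many tokens were seen
    return LAYERS * HIDDEN * BYTES

for ctx in (2_048, 16_384, 131_072):
    print(f"ctx={ctx:>7}: kv cache ~{kv_cache_bytes(ctx) / 2**20:8.1f} MiB, "
          f"recurrent state ~{recurrent_state_bytes() / 2**20:5.2f} MiB")
```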