this post was submitted on 14 Nov 2023

LocalLLaMA


Community to discuss Llama, the family of large language models created by Meta AI.

founded 10 months ago
top 5 comments
[–] dqUu3QlS@alien.top 1 points 10 months ago (1 children)
[–] esotericloop@alien.top 1 points 10 months ago

See, you're attending to the initial token across all layers and heads. :P
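(The joke refers to the attention-sink trick the thread is about: the model piles attention mass onto the first token(s), so a streaming KV cache pins those and evicts the middle. A minimal sketch of that eviction policy, with hypothetical names and sizes:)

```python
# Illustrative only: keep the first `num_sink` "attention sink" tokens
# plus a sliding window of the most recent tokens; drop everything between.
def positions_to_keep(n_cached, num_sink=4, window=1020):
    if n_cached <= num_sink + window:
        return list(range(n_cached))  # cache still fits, keep everything
    sinks = list(range(num_sink))  # initial tokens that soak up attention
    recent = list(range(n_cached - window, n_cached))  # recent context
    return sinks + recent

print(positions_to_keep(3000)[:6])  # [0, 1, 2, 3, 1980, 1981]
```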

[–] Tiny_Nobody6@alien.top 1 points 10 months ago

IYH kindly post the paper link

[–] Knopty@alien.top 1 points 10 months ago (1 children)

If you're wondering whether it can be implemented: there was a modified transformers library. The author essentially forked transformers with the needed changes, released it as attention_sinks, and presented it as a drop-in solution:

https://github.com/tomaarsen/attention_sinks/
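Per the project README at the time, usage was a drop-in swap of the transformers auto classes, roughly like this (treat the exact kwarg names as approximate):

```python
# attention_sinks mirrors transformers' API; only the import changes.
from attention_sinks import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    device_map="auto",
    attention_sink_size=4,            # initial tokens pinned as "sinks"
    attention_sink_window_size=1020,  # sliding window of recent tokens
)
```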

But that fork was impossible to maintain, so the transformers devs suggested he turn it into a patch for transformers itself and maintain that, so it could be properly incorporated into the library and stay future-proof.

The author has been working on that patch since the beginning of October:

https://github.com/huggingface/transformers/pull/26681
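(For anyone reading later: that PR grew into the KV-cache refactor that merged after this thread and added a SinkCache class to transformers. A sketch along the lines of the merged API:)

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, SinkCache

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf", device_map="auto"
)

inputs = tokenizer("A long streaming prompt...", return_tensors="pt").to(model.device)
# Pin 4 sink tokens inside a 1024-token cache window.
cache = SinkCache(window_length=1024, num_sink_tokens=4)
out = model.generate(**inputs, past_key_values=cache, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```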

[–] WAHNFRIEDEN@alien.top 1 points 10 months ago

it's already implemented in llama.cpp