this post was submitted on 28 Nov 2023
LocalLLaMA
Community to discuss Llama, the family of large language models created by Meta AI.
It's not going to help because the model data is much larger than the cache and the access pattern is basically long sequential reads.
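A rough back-of-envelope sketch of why that is: when the working set is streamed sequentially and dwarfs the cache, nearly every access is a miss, so throughput is bounded by RAM bandwidth rather than cache size. All figures below (model size, bandwidth, cache size) are illustrative assumptions, not measurements:

```python
# Why cache size doesn't matter for sequentially streamed weights.
# Assumed figures for illustration only.

model_bytes = 7e9 * 0.5    # assumed: 7B params at ~4-bit quantization (~3.5 GB)
ram_bandwidth = 50e9       # assumed: ~50 GB/s dual-channel DDR5
l3_cache_bytes = 32e6      # assumed: 32 MB L3 cache

# The weights are ~100x larger than L3, and each pass reads them
# start to finish, so almost every access misses the cache.
print(f"weights / L3 cache: {model_bytes / l3_cache_bytes:.0f}x")

# Upper bound on tokens/sec if every token requires one full
# read of the weights from RAM.
print(f"max tokens/sec: {ram_bandwidth / model_bytes:.1f}")
```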
It might help for LLMs, since a lot of values are cached and reused on each decoding loop, but it's still highly unlikely to make a difference.
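For scale, here is a sketch of how large that per-loop cached state (the KV cache) gets; the architecture numbers are assumed, loosely Llama-7B-like, so at long context it too outgrows any CPU cache and ends up streamed from RAM:

```python
# Approximate KV cache size for an assumed Llama-7B-like model.
# Assumed architecture figures for illustration only.

n_layers, n_heads, head_dim = 32, 32, 128  # assumed architecture
bytes_per_value = 2                        # fp16
seq_len = 4096                             # assumed context length

# 2x for keys and values, accumulated across all layers and heads.
kv_bytes = 2 * n_layers * n_heads * head_dim * bytes_per_value * seq_len
print(f"KV cache at {seq_len} tokens: {kv_bytes / 1e9:.1f} GB")  # ~2.1 GB
```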