this post was submitted on 17 Nov 2023

LocalLLaMA

Community to discuss Llama, the family of large language models created by Meta AI.

I am running a LLaMA 13B model (via GPT4All) and am finding inference quite slow, especially for summarization. Does anyone have recommendations for models that can summarize 4k+ token inputs extremely quickly?

a_beautiful_rhind@alien.top 1 points 2 years ago

You can try going down to 7B; it will be slightly faster.
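
If you go that route, here is a minimal sketch using the gpt4all Python bindings to load a 7B model and run a summarization prompt. The model filename and the n_ctx argument are assumptions, not something from the thread: substitute whichever 7B GGUF build you actually have, and only pass n_ctx if your gpt4all version supports it.

```python
from gpt4all import GPT4All

# Hypothetical filename -- substitute the 7B GGUF build you have downloaded.
MODEL_NAME = "llama-2-7b-chat.Q4_0.gguf"

# A 7B quantized model needs roughly half the memory of a 13B one and
# generates tokens noticeably faster on the same hardware.
# n_ctx is an assumption: raise it only if your gpt4all version accepts it,
# otherwise a 4k+ token document may not fit in the default context window.
model = GPT4All(MODEL_NAME, n_ctx=4096)

long_text = "..."  # the 4k+ token document to summarize

prompt = f"Summarize the following text in a few sentences:\n\n{long_text}"

# max_tokens caps the length of the generated summary.
summary = model.generate(prompt, max_tokens=300)
print(summary)
```

The speedup comes purely from the smaller parameter count; expect the summaries to be somewhat lower quality than with 13B.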