this post was submitted on 17 Nov 2023

LocalLLaMA


Community for discussing Llama, the family of large language models created by Meta AI.


I am running a LLaMA 13B model (via GPT4All) and am finding inference times quite slow, especially for summarization. Does anyone have recommendations for models that can summarize 4k+ tokens of input extremely quickly?
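For reference, a minimal sketch of the kind of call involved, using the gpt4all Python bindings; the model filename and document here are placeholders, not the poster's exact setup:

```python
from gpt4all import GPT4All

# Placeholder filename: any GGUF model that GPT4All can load works here.
# The model's context window must be large enough to hold the 4k+ token input.
model = GPT4All("llama-13b.Q4_0.gguf")

document = "..."  # the 4k+ token text to summarize

prompt = f"Summarize the following text in a few sentences:\n\n{document}\n\nSummary:"
summary = model.generate(prompt, max_tokens=256)
print(summary)
```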

top 3 comments
ttkciar@alien.top 1 points 2 years ago

orca-mini-3b is good at fast summarization, but it lies a lot, so ymmv.

a_beautiful_rhind@alien.top 1 points 2 years ago

You can try going down to a 7B model; it will be slightly faster.

FlishFlashman@alien.top 1 points 2 years ago

Please be specific: what is "quite slow," and what would count as "extremely quickly"? Use numbers with units that include a unit of time (e.g., tokens per second).

What hardware are you running on? Without changing hardware, your best bet is a smaller model (in terms of parameter count), a smaller quantization of a 13B model, or both.
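One way to get such numbers: a rough tokens-per-second sketch with the gpt4all Python bindings (the filename is a placeholder, and the token count is approximated by whitespace splitting rather than the model's tokenizer):

```python
import time
from gpt4all import GPT4All

model = GPT4All("llama-13b.Q4_0.gguf")  # placeholder filename

prompt = "Summarize the following text: ..."  # your usual summarization prompt

start = time.time()
output = model.generate(prompt, max_tokens=256)
elapsed = time.time() - start

# Whitespace word count is only a rough proxy for the true token count.
approx_tokens = len(output.split())
print(f"{approx_tokens} tokens in {elapsed:.1f}s "
      f"(~{approx_tokens / elapsed:.1f} tokens/s)")
```

Comparing runs of this across a 13B quant, a 7B quant, and something like orca-mini-3b makes it easy to see how much each step down actually buys.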