this post was submitted on 30 Nov 2023

LocalLLaMA


Community to discuss Llama, the family of large language models created by Meta AI.


Using GPT4All, I only get 13 tokens per second. Is there any way to speed this up? Perhaps a custom config of llama.cpp, or some other LLM backend.

The model is Mistral OpenOrca.

Does the type of model affect tokens per second?
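My rough back-of-envelope, assuming generation is memory-bandwidth bound: the base M1's unified memory does about 68 GB/s, and a 4-bit 7B GGUF is roughly 4 GB, so every generated token has to stream the whole model once, which would put the ceiling around 68 / 4 ≈ 17 tokens per second. If that's right, 13 t/s is already close to the limit and a smaller quant would be the main lever. Is that reasoning correct?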

What is your setup for quants and model type?

How do I get the fastest tokens per second on an M1 with 16 GB?
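For reference, this is the kind of custom llama.cpp setup I had in mind, a minimal sketch using the llama-cpp-python bindings (the model path is just an example, and I'm assuming the wheel was built with Metal support, which is the default on Apple Silicon):

```python
# pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama(
    model_path="./mistral-7b-openorca.Q4_K_M.gguf",  # example path to a local GGUF
    n_gpu_layers=-1,  # offload every layer to the Metal GPU
    n_ctx=2048,       # smaller context leaves more of the 16 GB free
    n_threads=4,      # the M1 has 4 performance cores
)

out = llm("Q: Why is the sky blue? A:", max_tokens=64)
print(out["choices"][0]["text"])
```

Would tweaking n_gpu_layers and n_threads like this actually beat GPT4All's defaults, or is there a better backend for Apple Silicon?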
