This post was submitted on 31 Oct 2023
LocalLLaMA
Community for discussing Llama, the family of large language models created by Meta AI.
Perhaps you are using the wrong fork of KoboldAI; I get many more tokens per second. Did you open Task Manager and check that the GPU memory used actually increases when loading and using the model?
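If you'd rather script that check than eyeball Task Manager, here's a minimal sketch (assuming an NVIDIA card with nvidia-smi on the PATH) that prints the VRAM currently in use; run it before and after loading the model:

```python
# Minimal VRAM check, assuming an NVIDIA GPU and nvidia-smi on the PATH.
import subprocess

def used_vram_mib() -> int:
    # Ask nvidia-smi for used GPU memory in MiB, without header or units.
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=memory.used", "--format=csv,noheader,nounits"],
        text=True,
    )
    # One line per GPU; take the first one.
    return int(out.splitlines()[0].strip())

if __name__ == "__main__":
    print(f"VRAM in use: {used_vram_mib()} MiB")
```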
Otherwise, try out KoboldCpp. It needs GGUF instead of GPTQ, but no special fork. With CuBLAS enabled you should get good token speeds for a 13B model.
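For reference, here's a hedged sketch of launching KoboldCpp with CuBLAS and GPU offloading via Python's subprocess. The model filename and layer count are placeholders (not from this thread), and flags can vary by version, so check `python koboldcpp.py --help` on your build:

```python
# Sketch: launch KoboldCpp with the CuBLAS backend and GPU layer offloading.
# The model file and layer count below are placeholders, not from the thread.
import subprocess

subprocess.run([
    "python", "koboldcpp.py",
    "--usecublas",                  # use CuBLAS for prompt processing/generation
    "--gpulayers", "43",            # layers to offload; tune for your model and VRAM
    "--port", "5001",               # default port for the KoboldAI-compatible API
    "mythomax-l2-13b.Q4_K_M.gguf",  # hypothetical GGUF model file
])
```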
Can I get KoboldCpp working with SillyTavern without too much of a headache?
Sure, it provides the same API as KoboldAI.
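To illustrate, here's a minimal sketch of hitting that KoboldAI-style endpoint directly, assuming KoboldCpp's default port 5001 (SillyTavern's KoboldAI connection type talks to the same URL):

```python
# Sketch: call KoboldCpp's KoboldAI-compatible /api/v1/generate endpoint.
# Assumes the default port 5001; sampler values here are arbitrary examples.
import json
import urllib.request

payload = {
    "prompt": "Once upon a time",
    "max_length": 80,      # number of tokens to generate
    "temperature": 0.7,
}
req = urllib.request.Request(
    "http://localhost:5001/api/v1/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["results"][0]["text"])
```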
I'm now using a 4-bit GPTQ version of the same model. After generation completes, VRAM usage goes up to 16.2 GB (out of 24 GB), and as far as I can tell nothing else is using the GPU (no browser windows with YouTube, etc.).
I'm still only getting a bit under 4.00 tokens per second, so I don't think anything is being offloaded to the CPU.
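One way to sanity-check that tokens-per-second figure is to time a fixed-length generation over the API. A rough sketch, using the same assumed endpoint as above (generation can stop early at an EOS token, so treat the result as a ballpark):

```python
# Rough tokens/sec estimate: time one generation and divide the requested
# token count by wall-clock time. Ignores prompt processing and assumes the
# model actually emits max_length tokens, so it's only a ballpark check.
import json
import time
import urllib.request

N_TOKENS = 200
payload = {"prompt": "Once upon a time", "max_length": N_TOKENS}
req = urllib.request.Request(
    "http://localhost:5001/api/v1/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
start = time.perf_counter()
urllib.request.urlopen(req).read()
elapsed = time.perf_counter() - start
print(f"~{N_TOKENS / elapsed:.2f} tokens/sec")
```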