Which mode do you use? Chat, chat-instruct or instruct?
The GGUF one has 140 layers, more than the textgen UI supports (128), so the slowness may be because some layers are running on the CPU (check your terminal output when loading the model). But you can manually change the source code and raise the max value of the n_gpu_layers slider (just grep for it).
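To make that concrete, the slider is just a standard Gradio component, so it's a one-line change to its `maximum`. This is only a sketch: the exact file (something like modules/ui_model_menu.py) and the surrounding arguments vary by version, so grep for n_gpu_layers first and edit whatever your version actually has.

```python
import gradio as gr

# Illustrative only: grep for "n_gpu_layers" to find where your copy of
# text-generation-webui defines this slider, then raise `maximum`.
n_gpu_layers = gr.Slider(
    label="n-gpu-layers",
    minimum=0,
    maximum=256,  # was 128; Goliath's GGUF reports 140 layers, so they didn't all fit
    value=140,    # offload every layer once the cap is raised
    step=1,
)
```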
This is the only helpful answer, because it's the correct one.
Or open the UI, go to the Model page, right-click the layers slider -> Inspect Element,
and update the max value of the input field from 128 to 256.
Can't believe that worked lol! Thank you so much. The speed increased significantly!
I mean, it makes sense. The values were simply chosen as a reasonable range at the time.
There was nothing hard-coded about them; they were just the limits set for the UI.
It certainly is interesting though.
Why don't you use exl2? Assuming it's the A100 80GB, you can run up to 5bpw, I think.
I have done quants at 3, 4.5 and 4.85bpw.
https://huggingface.co/Panchovix/goliath-120b-exl2
https://huggingface.co/Panchovix/goliath-120b-exl2-rpcal
I have 2x4090 + 1x3090; I get 2 t/s on GGUF (all layers on GPU) vs 10 t/s on exllamav2.
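If you want to see what's happening under the hood, here's a minimal sketch of loading an exl2 quant with exllamav2's Python API, based on the project's example scripts; the model path and sampler values are placeholders and the API may shift between versions. In text-generation-webui you can skip all of this and just pick the ExLlamav2 loader on the Model tab.

```python
# Rough sketch of running an exl2 quant with exllamav2 directly.
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "/models/goliath-120b-exl2-4.5bpw"  # placeholder: local download of the HF repo
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)  # lazy cache so autosplit can place layers
model.load_autosplit(cache)               # spreads layers across all visible GPUs

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8  # placeholder sampler values
settings.top_p = 0.9

print(generator.generate_simple("Hello, my name is", settings, 200))
```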
Thanks, will try this. No idea how these really work, so that is why I am asking :)
Sorry for the little side-track, but how much context are you able to squeeze into your 3 GPUs with Goliath's 4-bit quant?
I'm considering adding another 3090 to my own dual-GPU setup just to run this model.
I tested 4K and it worked fine at 4.5bpw. The max will probably be about 6K. I didn't use the 8-bit cache.
Now, 4.5bpw is kind of overkill; ~4.12 bpw is about the same as 4-bit 128g GPTQ, and that would let you use a lot more context.
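Rough back-of-the-envelope on why the lower bpw buys you context, assuming ~118B parameters for Goliath (the exact count is on the model card):

```python
# Weight memory is roughly params * bits-per-weight / 8 bytes.
params = 118e9  # assumption for Goliath-120B; check the model card

for bpw in (4.85, 4.5, 4.12, 3.0):
    gib = params * bpw / 8 / 1024**3
    print(f"{bpw:>4} bpw ~ {gib:5.1f} GiB of weights")

# Dropping from 4.5 to ~4.12 bpw frees roughly 5 GiB across the cards,
# which you can spend on KV cache (i.e. more context) instead.
```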
That is awesome. What kind of platform do you use for that 3-GPU setup?
Wait, what? I am getting 2-3 t/s on 3x P40 running Goliath GGUF Q4_K_S.
I use it through openrouter.ai, at around 200k tokens per dollar.