this post was submitted on 13 Nov 2023
LocalLLaMA
Community to discuss Llama, the family of large language models created by Meta AI.
The GGUF one has 140 layers, more than the textgen UI's slider allows (128). So the slowness is likely because some layers are running on the CPU (check your terminal output when loading the model). But you can manually change the source code and set the max value of the n_gpu_layers slider higher (just grep for it).
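To see why the cap matters, here's a minimal sketch (using the numbers from this thread; `layer_split` is a made-up helper, not part of the web UI) of how layers that don't fit under the n_gpu_layers cap fall back to the CPU:

```python
def layer_split(total_layers, n_gpu_layers_cap):
    """Return (gpu_layers, cpu_layers) when the UI caps n_gpu_layers."""
    gpu = min(total_layers, n_gpu_layers_cap)
    cpu = total_layers - gpu
    return gpu, cpu

# With the slider capped at 128, a 140-layer model runs 12 layers on CPU:
print(layer_split(140, 128))  # (128, 12)
# After raising the cap, the whole model can be offloaded:
print(layer_split(140, 256))  # (140, 0)
```

Those 12 CPU layers are enough to drag generation speed down noticeably, which matches the symptom described.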
Or open the UI, go to the model page, right-click the layers slider → Inspect Element, and update the input field's max value from 128 to 256.
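The inspect-element trick above just raises the `max` attribute on the slider's input element. A rough sketch of the same change as a console snippet (the helper and the stand-in element are hypothetical; in the browser you'd pass the actual element you located via right-click → Inspect):

```javascript
// Hypothetical helper: lift the cap on a slider/number input element.
function raiseSliderMax(input, newMax) {
  input.max = String(newMax); // e.g. raise the 128 cap to 256
  return input;
}

// Stand-in object for the DOM element, just to show the effect;
// in the browser this would come from document.querySelector(...).
const fakeInput = { max: "128", value: "128" };
raiseSliderMax(fakeInput, 256);
console.log(fakeInput.max); // "256"
```

Note this is a client-side change only; it works here because the backend accepts the higher value even though the UI didn't offer it.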
Can't believe that worked lol! Thank you so much. The speed increased significantly!
I mean, it makes sense. The values weren't hard-coded limits; they were simply chosen as a reasonable range for the UI at the time.
It certainly is interesting though.