this post was submitted on 13 Nov 2023

LocalLLaMA

Community to discuss Llama, the family of large language models created by Meta AI.

I am talking about this particular model:

https://huggingface.co/TheBloke/goliath-120b-GGUF

I specifically use: goliath-120b.Q4_K_M.gguf

I can run it on runpod.io on this A100 instance at a tolerable speed, but it is way too slow for generating long-form text.

https://preview.redd.it/fz28iycv860c1.png?width=350&format=png&auto=webp&s=cd034b6fb6fe80f209f5e6d5278206fd714a1b10

These are my settings in text-generation-webui:

https://preview.redd.it/vw53pc33960c1.png?width=833&format=png&auto=webp&s=0fccbeac0994447cf7b7462f65d79f2e8f8f1969

Any advice? Thanks

[–] whtne047htnb@alien.top 1 points 10 months ago (4 children)

The GGUF model has 140 layers, more than the textgen UI's slider allows (128). So the slowness may be because some of the layers are running on the CPU (check your terminal output when loading the model). But you can manually edit the source code and raise the maximum of the n_gpu_layers slider (just grep for it); a sketch of what that edit might look like is below.
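
For reference, a minimal sketch of the kind of edit meant here, assuming the slider is a Gradio gr.Slider defined somewhere under the webui's modules/ directory. The file location, variable name, and exact keyword arguments below are assumptions, not verified against the repository; only the maximum value matters.

    # Hypothetical location, e.g. a file under modules/ (find the real one by
    # grepping the source tree for "n-gpu-layers" or "n_gpu_layers").
    import gradio as gr

    # Before: the UI itself caps GPU offload at 128 layers.
    n_gpu_layers = gr.Slider(label="n-gpu-layers", minimum=0, maximum=128, value=0)

    # After: raise the maximum so all 140 layers of the 120B GGUF can be offloaded.
    n_gpu_layers = gr.Slider(label="n-gpu-layers", minimum=0, maximum=256, value=0)

The 256 here mirrors the value suggested in the browser-side workaround in the next comment.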

[–] kruk2@alien.top 1 points 10 months ago (2 children)

Or open the UI, go to the Model page, right-click on the layers slider -> Inspect Element,
and update the max value of the input field from 128 to 256.

[–] abandonedexplorer@alien.top 0 points 10 months ago (1 children)

Can't believe that worked lol! Thank you so much. The speed increased significantly!

[–] MINIMAN10001@alien.top 1 points 10 months ago

I mean, it makes sense. The values were simply chosen for being a reasonable window at the time.

There was nothing hard-coded about them; they were simply a range of values that had been set for the UI.

It certainly is interesting though.
