this post was submitted on 12 Nov 2023

LocalLLaMA

Community to discuss Llama, the family of large language models created by Meta AI.

I was trying to run some models with text-generation-webui: mistralai_Mistral-7B-v0.1, NousResearch_Llama-2-7b-hf and stabilityai_StableBeluga-7B, without much success. I remember I had to switch to 4-bit loading, and even then I couldn't get them to respond well in chat. So before I go further: is 8GB of VRAM the problem? Stable Diffusion works beautifully on the same card. Could you share your experiences with an 8GB GPU?
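
(For reference, here is roughly what I mean by 4-bit loading, as a standalone script outside the webui. This is only a sketch, assuming transformers, accelerate and bitsandbytes are installed; the model ID is one of the ones I tested.)

```python
# Sketch: load a 7B model in 4-bit so it fits in ~8GB of VRAM
# (assumes: pip install transformers accelerate bitsandbytes)
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "mistralai/Mistral-7B-v0.1"  # one of the models mentioned above

# fp16 weights for a 7B model are ~14GB, so 4-bit quantization is needed on an 8GB card
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # let accelerate place layers on the GPU (and CPU if it overflows)
)

prompt = "Explain in one sentence why the sky is blue."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```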

1 comment
Semi_Tech@alien.top · 1 point · 1 year ago

Download koboldcpp and get the GGUF version of any model you want, preferably a 7B from our pal TheBloke. Only get Q4 or higher quantization; Q6 is a bit slower but works well. In koboldcpp.exe, select CuBLAS and set the GPU layers to 35-40.

You should get about 5 T/s or more.

This is the simplest way to run LLMs, in my testing.
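
(Not what the commenter used, but the same recipe — a Q4 GGUF plus partial GPU offload — can be sketched in Python with llama-cpp-python if you prefer a script over the koboldcpp exe. The filename below is just an example; point it at whatever GGUF you actually download.)

```python
# Sketch: GGUF + GPU layer offload via llama-cpp-python instead of koboldcpp
# (assumes: pip install llama-cpp-python, built with CUDA support)
from llama_cpp import Llama

llm = Llama(
    model_path="./mistral-7b-v0.1.Q4_K_M.gguf",  # example filename from a TheBloke GGUF repo
    n_gpu_layers=35,  # offload ~35 layers to the GPU, as suggested above
    n_ctx=4096,       # context window
)

out = llm("Q: What is the capital of France?\nA:", max_tokens=32, stop=["\n"])
print(out["choices"][0]["text"].strip())
```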