this post was submitted on 22 Nov 2023

LocalLLaMA


Community to discuss Llama, the family of large language models created by Meta AI.


Hello, this has probably been asked a bazillion times, but I can't find an example. I have installed Stable Diffusion and LLaMA on my new PC, but neither appears to be using my new RTX 4080 for generation. Text and image generation is very slow, and GPU utilisation stays at 0–4% throughout. Any idea how this could be addressed? I'm no expert, so I don't know what to change.

It is on a laptop, by the way: an NVIDIA RTX 4080 (Laptop) GPU and a 12th Gen Intel CPU.
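For what it's worth, a quick sanity check in this situation (assuming the tools run on a PyTorch-based stack, which most Stable Diffusion and LLaMA frontends do) is whether the Python environment they use can see the GPU at all. A CPU-only PyTorch build is a common cause of 0% GPU utilisation. A minimal sketch:

```python
def cuda_status():
    """Return a short description of CUDA availability in this environment."""
    try:
        import torch
    except ImportError:
        # PyTorch isn't installed here; the app may use its own environment
        return "PyTorch is not installed in this environment"
    if torch.cuda.is_available():
        return f"CUDA available: {torch.cuda.get_device_name(0)}"
    # A CPU-only torch build is the usual culprit when utilisation stays at 0%
    return f"CUDA NOT available (torch {torch.__version__}, possibly a CPU-only build)"

print(cuda_status())
```

Run this with the same Python interpreter (or virtual environment) that the Stable Diffusion / LLaMA frontend uses; if it reports CUDA as unavailable, reinstalling PyTorch with CUDA support is the first thing to try.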

Thanks in advance!

[–] FlishFlashman@alien.top 1 points 11 months ago

What software are you using to run LLaMA and Stable Diffusion?

What version of the LLaMA model are you trying to run? How many parameters? What quantization?