this post was submitted on 29 Nov 2023

LocalLLaMA


Community to discuss Llama, the family of large language models created by Meta AI.

 

I am considering purchasing a 3090, primarily for use with Code Llama. Is it a good investment? I haven't been able to find any relevant videos on YouTube and would like to understand more about its inference speed.

EgeTheAlmighty@alien.top · 11 months ago

I have a 4090 at work, and quantized 34B models barely fit in its 24 GB of VRAM; I get around 20 tokens per second of output. My personal computer has a laptop 3080 Ti with 16 GB of VRAM, which can't handle anything larger than 13B models but still gets about 20 tokens per second. Keep in mind these numbers are for quantizations optimized for speed, so depending on the model and quantization you use, it might be slower.
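For a concrete starting point, here is a minimal sketch of how one might run a speed-optimized quantized Code Llama within these VRAM budgets, assuming the llama-cpp-python bindings and a GGUF quantization downloaded locally. The model filename and generation parameters below are illustrative assumptions, not details from this thread.

```python
# Minimal sketch (assumption: llama-cpp-python is installed and a GGUF
# quantization of Code Llama 13B is already on disk; the file path below
# is a placeholder, not a file referenced in this thread).
from llama_cpp import Llama

llm = Llama(
    model_path="./codellama-13b-instruct.Q4_K_M.gguf",  # ~4-bit file, fits in 16 GB VRAM
    n_gpu_layers=-1,  # offload all layers to the GPU
    n_ctx=4096,       # context window size
)

out = llm(
    "Write a Python function that reverses a linked list.",
    max_tokens=256,
    temperature=0.2,
)
print(out["choices"][0]["text"])
```

A 34B quantization works the same way on a 24 GB card; the main knob is picking a quantization small enough that every layer fits on the GPU, since spilling layers to the CPU is what drags down tokens per second.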