this post was submitted on 29 Nov 2023

LocalLLaMA


Community to discuss Llama, the family of large language models created by Meta AI.


I am considering purchasing a 3090, primarily for use with Code Llama. Is it a good investment? I haven't been able to find any relevant videos on YouTube and would like to understand more about the inference speeds I could expect.

[–] opi098514@alien.top 1 points 11 months ago

For a 34B model you should be fine. I run 34B models on my dual 3060s and it's very nice, usually around 20 tokens a second. If you run a 7B model you get basically instant results; with Mistral 7B I'm getting almost 60 tokens a second. It's crazy. But it really depends on what you are using it for and how much accuracy you need.
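
If you want to check the tokens-per-second numbers yourself, here is a minimal sketch of one way to measure throughput locally, assuming a 4-bit GGUF quant of Code Llama 34B and the llama-cpp-python bindings; the model filename and prompt below are placeholders, not anything from this thread:

```python
# Rough tokens-per-second check with llama-cpp-python (pip install llama-cpp-python).
# Assumes a 4-bit GGUF of Code Llama 34B is already downloaded; the path is a placeholder.
import time
from llama_cpp import Llama

llm = Llama(
    model_path="codellama-34b-instruct.Q4_K_M.gguf",  # placeholder filename
    n_gpu_layers=-1,  # offload all layers; a 4-bit 34B should fit in a 3090's 24 GB
    n_ctx=4096,
)

prompt = "Write a Python function that checks whether a string is a palindrome."
start = time.time()
out = llm(prompt, max_tokens=256)
elapsed = time.time() - start

generated = out["usage"]["completion_tokens"]
print(f"{generated} tokens in {elapsed:.1f}s -> {generated / elapsed:.1f} tok/s")
```

Running something like this with your own prompts is the quickest way to see whether a single 3090 gives you speeds you're happy with before committing to the purchase.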