this post was submitted on 12 Nov 2023

LocalLLaMA


Community to discuss about Llama, the family of large language models created by Meta AI.

Curious if anyone got the whole rig and realized they didn't really need it, etc.

[–] jubjub07@alien.top 1 points 10 months ago (1 children)

I built my 2x3090 rig with parts from eBay... motherboard (Gigabyte X299), i9 CPU, 64GB RAM, and two 3090s. I did spring for a new, heavy-duty PSU and a case with big fans.

All in, I spent about $2k.

The system runs 70B models like Llama-2-70B-Orca-200k just fine at 11 T/s...
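For context on why a 70B model fits on two 24 GB cards at all, here's a rough back-of-the-envelope memory estimate. The numbers (4-bit quantization at ~0.5 bytes/weight, a few GB of overhead for KV cache and activations) are my own assumptions, not from the post:

```python
# Rough check: does a ~4-bit-quantized 70B model fit in 2x 24 GB of VRAM?
PARAMS = 70e9            # parameter count of a 70B model
BYTES_PER_PARAM = 0.5    # ~4-bit quantization (assumption, e.g. a Q4-style GGUF)
OVERHEAD_GB = 4          # rough allowance for KV cache / activations (assumption)
GB = 1024**3

weights_gb = PARAMS * BYTES_PER_PARAM / GB
total_gb = weights_gb + OVERHEAD_GB
vram_gb = 2 * 24

print(f"weights ~{weights_gb:.1f} GB, total ~{total_gb:.1f} GB, available {vram_gb} GB")
print("fits" if total_gb <= vram_gb else "does not fit")
```

So ~33 GB of weights plus overhead lands comfortably under 48 GB, which is why the 2x3090 setup is a popular sweet spot for 70B models.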

I feel like there's not a ton of downside - I think the 3090s will be valuable for a while yet, and that's over half the value of the system.

Having the hardware right here means I can have things running all the time - when I read about a new model, I can download it and play with it in minutes. Spinning up a RunPod feels frustratingly slow to me. I went that route for a while, but found that the friction involved meant I tried fewer things. A system that might be slower but is always available just works for my way of working.

So no "regerts" here.

[–] Infamous_Charge2666@alien.top 1 points 10 months ago

lol I just finished a nearly identical system, just a tad stronger: X299X mobo, i9-10980XE, 2x 3090 Ti, 256GB RAM, 72TB HDD (WD Reds) + 4TB Samsung 990 Pro, for $3.5k