I bought a fancy 4090 rig and returned it, but mostly because I want a more powerful rig
LocalLLaMA
A community for discussing Llama, the family of large language models created by Meta AI.
What did you go with?
I am loving my rig buys too
Last time I built one was 3 years ago, and I used it mainly for gaming. I managed to convince myself that this time it's going to be different, so I'm building a new one, but let's see.
I moved from a desktop with a GTX 1070 (and a laptop with a 1050) to a laptop with a 3080 Ti, specifically so I can run video games when I'm not running LLMs.
My only two regrets are the downgrades in RAM (64GB -> 32GB) and storage (4TB HDD -> 2TB M.2 NVMe), but they're not critical.
I thought about upgrading the desktop, but it wouldn't have been a minor upgrade, so after doing the math it turned out getting a laptop was better. ~A year and a half later I still think so.
Not for me, because the stuff that's good for AI is also good for video games and doesn't hurt for the creative stuff I use my computer for either.
The only regret I had after buying a 2nd-hand 3090 this summer is that local models, while still impressive, weren't there yet. After experimenting with Kobold, ST, and other tools, I eventually went back to GPT because, sadly, after tasting the best, every model I tried felt really boring and too simple. I'm still using the card for image generation and some other AI stuff.
I'm really not a fan of this situation; if one day we can get a local model close to GPT-3.5 for roleplay, I'll ditch OAI ASAP. But I don't expect that soon.
maybe a little haha
I built my 2x3090 with parts from eBay: MB (X299 Giga), i9 CPU, 64GB RAM, and two 3090s. I did spring for a new, heavy-duty PSU and a case with big fans.
All in, I spent about $2k.
System runs 70B models like Llama-2-70B-Orca-200k just fine at 11 T/s...
I feel like there's not a ton of downside - I think the 3090s will be valuable for a while yet, and that's over half the value of the system.
Having the hardware right here means I can have things running all the time: when I read about a new model, I can download it and play with it in minutes. Spinning up a RunPod feels frustratingly slow to me. I went that route for a while, but found that the friction involved meant I tried fewer things. A system that might be slower, but is always available, just works for my way of working.
So no "regerts" here.
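A rough sanity check (my own arithmetic, not from the comment) of why a 70B model fits on two 3090s: at a ~4-bit quantization, the weights alone come in just under the combined 48GB of VRAM.

```python
# Back-of-the-envelope VRAM estimate for a quantized 70B model.
# Assumption (mine, not the commenter's): a ~4-bit quant averages
# roughly 4.5 bits per parameter once scales/zero-points are counted.
PARAMS = 70e9
BITS_PER_PARAM = 4.5

model_gb = PARAMS * BITS_PER_PARAM / 8 / 1e9  # bytes -> decimal GB
vram_gb = 2 * 24                              # two RTX 3090s

print(f"model ~= {model_gb:.1f} GB, VRAM = {vram_gb} GB")
# ~39.4 GB of weights in 48 GB of VRAM leaves headroom
# for the KV cache and activations.
```

The remaining ~8GB is what the KV cache and context length eat into, which is why longer contexts on 70B models can still push some layers off the GPUs.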
lol, I just finished a nearly identical system, just a tad stronger: X299X mobo, i9-10980XE, 2x 3090 Ti, 256GB RAM, 72TB HDD (WD Reds) + 4TB Samsung 990 Pro, for $3.5k.
I was building an app and then realized it was cheaper to just call the inference API for Llama on Azure, lol. My local Llama plans are on hold now.
Depends on what your "whole rig" is. If it's just a Mac Studio or a 4090, then it's fine. If it's a whole server/enterprise build, then you're better off renting it out to someone. Enterprise GPUs are real low on stock rn.
I used LLMs as an excuse to buy a new high-end rig. You know what it's been doing for the months since I built it? Playing 2023 games at 4K, 120fps+.
I might have a brief regret that it's not being used for what I bought it for, but I'm still using it.
Nope. I have a 2x3090 system and I'm planning to buy another 3090 system so I can do SD LoRA training while still using Dolphin 70B.
It's been the opposite, and I've watched my 3090 age like fine wine.
I've spent a bunch of money on my PC over the years, including €1200 for an ASUS TUF 3090 (the last one they had), and never regretted it. That said, until AI, all I did with it was play Skyrim. The case is 20 years old (Antec P85), and the CPU is 10+ years old: an i7-2600K under water with a 50% OC, plus 32GB RAM.
That said, the wife is buying me a new one for Xmas, and I'm pondering 128GB RAM. She also loved the case I want (Fractal Design North) so much she wants me to buy one for our son for Christmas, as she hates the one he has at present.
So no. Never regretted spending money on hardware :)
120 thousand rubles.
I was an idiot when assembling the PC and somehow inexplicably focused on the processor, so the video card is quite weak. However, after a while, I realized that this was for the better. I can use a 70B model at 1 token per second. Maybe in the future I will buy another video card so I can offload more layers to the GPU for faster processing.
3060 12GB & 13600K
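The "more layers" idea above is partial GPU offload: put as many transformer layers as fit in VRAM on the card and run the rest on the CPU. A rough sketch of the arithmetic (my own assumptions: a ~40GB 4-bit 70B quant, and Llama-2-70B's 80 layers):

```python
# Estimate how many of a 70B model's layers fit on a 12GB RTX 3060.
# Assumptions (mine): ~40 GB quantized model, 80 layers, and ~2 GB
# of VRAM reserved for the KV cache, scratch buffers, and the desktop.
model_gb = 40.0
n_layers = 80
vram_gb = 12.0
reserved_gb = 2.0

gb_per_layer = model_gb / n_layers        # ~0.5 GB per layer
usable_gb = vram_gb - reserved_gb
offload = int(usable_gb / gb_per_layer)   # layers that fit on the GPU

print(f"~{offload} of {n_layers} layers fit on the GPU")
```

Under these assumptions only a quarter of the layers fit, which is consistent with the ~1 token/s the commenter sees: the CPU-resident layers dominate the per-token time.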
I bought a 4090 primarily for running LLMs, but I won't regret it, because I'll also use it for PCVR and playing games like Cyberpunk and Elden Ring. So the secret is to have backup plans.
I'm waiting for the RTX 5090 to release. I heard it's going to have 32GB of VRAM. Right now I only have a 2060 with 6GB of VRAM, which is barely enough (or not enough) for a lot of AI things, and it's slow at AI tasks.
If you make money with it, you won't regret it, because you have to stay competitive and choose whatever you need to do your work.
For a hobby, anything is good, and if you like fun projects you probably won't get bored; even if you do, you can play games on your new setup.
I love automation everywhere, and sometimes I write simple bots in scripting languages: tiny robotic creatures that roam the internet and collect data I want to analyze. These days, with the possibility of running an LLM at home, I'm even more excited to experiment with new things.
My hardware is a 15th-century wagon, but I still have fun with it, and when I bought it, it was the best investment an average person could make in a 3rd-world country.
Winter has enveloped us in its chilly embrace. In my quest for warmth, I realized I needed a heater. But then, a memory dawned on me – I had bought one before! It was during those long nights of training a model with a 100k dataset, which made the room toastier. Now, thanks to that, everyone in this house can enjoy a peaceful and warm winter.
No, I'm just glad I asked my build shop for a high-end gaming machine last year, before all this exploded onto the scene. The RTX 3060 is good, but I should have gone a level up, because I get my fair share of CUDA memory errors when I try to build LoRAs in one step.
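A common workaround for those out-of-memory errors (a general technique, not something the commenter describes) is gradient accumulation: shrink the per-step micro-batch so it fits in VRAM, and step the optimizer only every few micro-batches so the effective batch size stays the same. The arithmetic, with illustrative numbers:

```python
# Gradient accumulation arithmetic: trade per-step VRAM for more steps.
# All numbers below are illustrative assumptions, not measurements.
target_batch = 16   # effective batch size the LoRA recipe wants
micro_batch = 2     # what actually fits in a 12GB card at once

accum_steps = target_batch // micro_batch  # optimizer step every N micro-batches
effective = micro_batch * accum_steps

print(f"accumulate {accum_steps} micro-batches of {micro_batch} "
      f"for an effective batch of {effective}")
```

Training takes more wall-clock time this way, but the per-step activation memory scales with the micro-batch, not the effective batch, so the run fits on the smaller card.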