this post was submitted on 12 Nov 2023

LocalLLaMA


Community to discuss Llama, the family of large language models created by Meta AI.

founded 10 months ago

On the one hand, I get that this is very much an 'enthusiast' sub, and many of you are doing this because you were the type to have a 4090 already.

On the other, as someone interested in LLMs, Stable Diffusion, and AI generally, I'm not sure investing in the hardware to run these things locally makes economic sense at all. I spec'd out a damned nice workstation at Micro Center the other day and the bill was over $4,000. The GPU alone was over $1,700.

If you take a really sober look at the numbers, how does running your own system make sense over renting hardware from RunPod or a similar service? The overall sentiment I get from reading the posts here is that a large majority of users are using their 3090s to crank out smut. Hey, no judgment, but do you really think RunPod cares what you run as long as it doesn't put them in legal jeopardy?

A 4090 is $0.50/hr on some services. Even if you assume 10 h/wk of usage over something like 5 years, that's still probably less than the depreciation and power costs of running it locally.
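To make that claim concrete, here's a back-of-envelope sketch of the comparison. The $0.50/hr rate, the 10 h/wk for 5 years, and the $1,700 GPU price come from the post; the resale value, board power, and electricity rate are my assumptions and will vary.

```python
# Back-of-envelope rental-vs-ownership comparison.
# Assumed values are marked; adjust for your own situation.

RENT_PER_HOUR = 0.50      # $/hr for a rented 4090 (from the post)
HOURS_PER_WEEK = 10       # usage assumed in the post
YEARS = 5
WEEKS_PER_YEAR = 52

hours = HOURS_PER_WEEK * WEEKS_PER_YEAR * YEARS   # 2,600 h total
rental_total = hours * RENT_PER_HOUR              # $1,300

GPU_PRICE = 1700.0        # purchase price (from the post)
RESALE_VALUE = 400.0      # assumed resale value after 5 years
GPU_WATTS = 450           # approximate 4090 board power under load
KWH_PRICE = 0.15          # assumed electricity rate, $/kWh

power_total = hours * GPU_WATTS / 1000 * KWH_PRICE   # ~$175
local_total = (GPU_PRICE - RESALE_VALUE) + power_total

print(f"rental: ${rental_total:,.0f}")   # rental: $1,300
print(f"local:  ${local_total:,.0f}")    # local:  $1,476
```

At these assumed numbers the GPU alone (ignoring the rest of the $4,000 workstation) already costs more than renting; heavier usage or a higher resale value shifts the break-even point.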

TL;DR: I know some of you are doing this simply 'because you can,' but the value proposition looks sketchy to an outsider.

[–] Moist_Influence1022@alien.top 1 points 10 months ago

As someone who spends a lot of time on a chair in front of a PC, both as a hobby and for work, I treated myself to an early Christmas present with a dual 3090 machine.

I used to game a lot, but those days are over. It's still nice to be able to play the latest games on maximum graphics, but it's also great to have the capability to play around with the big boy LLMs out there.

Right now, I'm experimenting with so much stuff, trying different frameworks like AutoGen and MemGPT. I can tinker around without that nagging thought in the back of my mind saying, 'Man, you're wasting money,' or 'Be more efficient,' and so on, if you know what I mean.

If it were just for the sake of trying LLMs, then definitely not; I would stick to cloud solutions.