this post was submitted on 12 Nov 2023

LocalLLaMA

Community to discuss Llama, the family of large language models created by Meta AI.

On the one hand, I get that this is very much an 'enthusiast' sub, and many of you are doing this because you were the type to have a 4090 already.

On the other, as someone interested in LLMs, Stable Diffusion, and AI generally, I'm not sure investing in the hardware to run these things locally makes economic sense at all. I spec'd out a damned nice workstation at Micro Center the other day and the bill was over $4,000. The GPU alone was over $1,700.

If you take a really sober look at the numbers, how does running your own system make sense over renting hardware at RunPod or a similar service? The overall sentiment I get from reading the posts here is that a large majority of users are using their 3090s to crank out smut. Hey, no judgement, but do you really think RunPod cares what you run as long as it doesn't put them in legal jeopardy?

A 4090 is $0.50/hr on some services. Even if you assume 10 h/wk of usage over five years, that's only about 2,600 hours, or roughly $1,300 in rental fees, which is still probably less than the depreciation and power costs of running the same card locally.
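For what it's worth, here's that back-of-the-envelope math as a quick script. The $0.50/hr rate and 10 h/wk usage come from above; the GPU price, resale value, power draw, and electricity rate are assumptions I've plugged in for illustration, so treat the output as a rough sketch:

```python
# Back-of-the-envelope: renting a 4090 vs. buying one.
# All hardware/power figures are illustrative assumptions, not quoted prices.

rental_rate = 0.50        # $/hr for a cloud 4090 (from the post)
hours_per_week = 10
years = 5
total_hours = hours_per_week * 52 * years      # 2,600 hours

rental_cost = total_hours * rental_rate        # $1,300

gpu_price = 1700.0        # assumed local 4090 price ($)
resale_value = 400.0      # assumed resale value after 5 years ($)
power_draw_kw = 0.45      # assumed ~450 W draw under load
electricity_rate = 0.15   # assumed $/kWh
power_cost = total_hours * power_draw_kw * electricity_rate  # ~$176

ownership_cost = (gpu_price - resale_value) + power_cost

print(f"Renting: ${rental_cost:,.0f}")
print(f"Owning:  ${ownership_cost:,.0f} (depreciation + power)")

# Hours at which buying starts to win: depreciation divided by
# the per-hour saving versus renting.
break_even = (gpu_price - resale_value) / (rental_rate - power_draw_kw * electricity_rate)
print(f"Break-even: ~{break_even:,.0f} GPU-hours")
```

Under those assumptions, renting stays cheaper until roughly 3,000 GPU-hours of use, so the answer mostly comes down to how heavily you actually use the card.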

TLDR: I know some of you are doing this simply 'because you can', but to an outsider the value proposition looks sketchy.

[–] openLLM4All@alien.top 1 point 1 year ago

I've used RunPod in the past but got a bit frustrated when I couldn't get a full desktop to run whatever tools I wanted on the same box. I shifted to renting VMs instead, which has been nice: I can switch between a text-generation UI, LM Studio, etc. on the same rented machine.