this post was submitted on 08 Apr 2024
69 points (84.8% liked)
Technology
It's a lot less than playing a video game. The fans on my GPU spin up harder and for longer whenever I'm gaming.
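For anyone who wants to check this on their own machine, here's a rough sketch (assuming an NVIDIA card and `nvidia-smi` on the PATH) that averages the reported board power draw; run it once while generating images and once while gaming and compare the numbers:

```python
# Minimal sketch: sample GPU board power via nvidia-smi and average it.
# Assumes an NVIDIA GPU and nvidia-smi available on the PATH.
import subprocess
import time

def sample_power_watts(samples: int = 10, interval_s: float = 1.0) -> float:
    readings = []
    for _ in range(samples):
        out = subprocess.run(
            ["nvidia-smi", "--query-gpu=power.draw",
             "--format=csv,noheader,nounits"],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
        readings.append(float(out.splitlines()[0]))  # first GPU only
        time.sleep(interval_s)
    return sum(readings) / len(readings)

if __name__ == "__main__":
    print(f"average draw: {sample_power_watts():.0f} W")
```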
I think the training side shouldn't be neglected and might be what's at play here. Facebook has a 350k GPU cluster being set up to train AI models, and typical state-of-the-art models have required months of training. Imagine the power consumption. It's not about one person running a small quantized model at home.
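Some back-of-envelope numbers just to make the scale concrete (the 350k GPU figure is from the comment above; the per-GPU wattage and training duration are my own assumptions, not reported figures):

```python
# Rough estimate only; per-GPU draw and run length are assumptions,
# and cooling/infrastructure overhead (PUE) is ignored.
gpus = 350_000            # cluster size mentioned above
watts_per_gpu = 700       # assumed draw of a datacenter GPU under load
training_days = 90        # assumed "months on end" training run

power_mw = gpus * watts_per_gpu / 1e6              # megawatts while training
energy_gwh = power_mw * 24 * training_days / 1000  # gigawatt-hours total

print(f"~{power_mw:.0f} MW sustained, ~{energy_gwh:.0f} GWh over {training_days} days")
# -> ~245 MW sustained, ~529 GWh over 90 days
```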
Such training can be done in places with plenty of water to spare. Like so many of these "we're running out of X!" fears, basic economics will start putting on the brakes long before we crash into a wall.
That's not what I replied to though.
You said running an image-generating AI on your GPU is less demanding than a video game. While that's possibly true, the topics of water scarcity and energy demand aren't about what one person runs on one GPU, hence my response.
The person I replied to was only commenting on how much it costs for people to "refine prompts", which isn't where the problems lie. People at home on consumer hardware can't be the ones causing issues at scale, which is what I pointed out. We weren't talking about training costs at all.
Besides, the GPU clusters that models are trained on are far outnumbered by non-training datacenters, which also use water for cooling. It seems weird to bring that up as an issue while ignoring the rest of the cloud computing industry. I've never seen numbers on how much these GPU clusters consume versus conventional use; if you have any, I'd like to see them.