this post was submitted on 10 May 2026
850 points (98.6% liked)

Technology

[–] prenatal_confusion@feddit.org 17 points 2 days ago (1 children)

Is that a realistic approximation of energy usage? It seems like a lot to me, even over the span of one day.

[–] josephmbasile@lemmy.world 17 points 2 days ago (1 children)

Definitely off by a few orders of magnitude.

[–] Greyghoster@aussie.zone 15 points 2 days ago (3 children)

17 GWh is about the same energy as the Hiroshima bomb (63 terajoules ≈ 17.5 GWh), and a 9 GW data centre turns essentially all of its input power into heat, so it releases one Hiroshima's worth of energy roughly every two hours. Pretty scary when looked at like that.

[–] humanspiral@lemmy.ca 4 points 2 days ago

17 GW of heat is both an underestimate and an overestimate.

> 3,600 of those industrial-scale generators to power Stratos

Caterpillar 2.5 MW generators have a maximum efficiency of about 45%, so peak thermal power is roughly 20 GW (9 GW / 0.45). That is roughly 27 Hiroshimas' worth of energy per day.
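A back-of-envelope check in Python (the generator count and 45% efficiency figures are from the comment above; the 63 TJ Hiroshima yield is a commonly cited estimate):

```python
# Peak heat output of the generator fleet vs Hiroshima-bomb energy.
# Assumes 3,600 x 2.5 MW generators at 45% fuel-to-electric efficiency.
GEN_COUNT = 3600
GEN_MW = 2.5
EFFICIENCY = 0.45
BOMB_TJ = 63                                # commonly cited Hiroshima yield

electric_gw = GEN_COUNT * GEN_MW / 1000     # 9.0 GW electrical
thermal_gw = electric_gw / EFFICIENCY       # ~20 GW of fuel heat (all of it
                                            # eventually rejected as heat)
bomb_gwh = BOMB_TJ * 1e12 / 3.6e12          # ~17.5 GWh per bomb
bombs_per_day = thermal_gw * 24 / bomb_gwh  # ~27

print(f"{thermal_gw:.1f} GW thermal, {bombs_per_day:.0f} Hiroshimas/day")
```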

It's an overestimate because datacenter CPU/GPU capacity utilization is, on average, under 10%. The plant could still run at full output and export the surplus power, though, trapping all that heat in a valley.

[–] Pulsar@lemmy.world 2 points 2 days ago

Not that it would matter for this conversation, but at hyperscaler scale the energy required for mechanical loads is under 20% of the compute load. It wouldn't surprise me if ~10% can be achieved at multi-GW scale. That puts total power at about 11 GW.
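The overhead arithmetic, sketched in Python (the 20% mechanical-load figure is the commenter's; treating it as a flat multiplier on the compute load is my simplification):

```python
# Rough total site power if mechanical/administrative overhead is ~20% of
# the compute load (i.e. PUE of about 1.2). Numbers are illustrative.
compute_gw = 9.0
overhead_fraction = 0.20
total_gw = compute_gw * (1 + overhead_fraction)  # 10.8 GW, "about 11 GW"

print(f"{total_gw:.1f} GW total site power")
```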

[–] towerful@programming.dev 2 points 2 days ago (2 children)

Does "9GW data center" not mean "a data center that consumes 9GW of power"?
Or is it "9GW of computers + 5GW of cooling + something"?

[–] Pulsar@lemmy.world 4 points 2 days ago

9 GW should be the compute-load goal, to which you need to add the mechanical and administrative loads. At larger scales they gain significant efficiencies, which translates into market advantages.

[–] humanspiral@lemmy.ca 3 points 2 days ago (1 children)

It's 9 GW of consumption and roughly 20 GW of total heat generation (fuel input at ~45% generator efficiency).

[–] FauxLiving@lemmy.world 1 points 18 hours ago (1 children)

For comparison, a blast furnace making steel uses on the order of 3600 GWh/yr and the energy comes primarily from coal.

9 GW is a high number for a datacenter, but industrial processes use much more, and much dirtier, energy.

That's also one datacenter, and the largest, whereas there are many, many blast furnaces running all over the world.

[–] humanspiral@lemmy.ca 1 points 15 hours ago

9 GW if run 24/7 (capacity utilization is actually low on average in the US) is 78.8 TWh/year, about 22x one blast furnace. And natural gas is not that much cleaner than coal from a CO2/GHG warming perspective.
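The annual-energy comparison as a quick Python sketch (the 3,600 GWh/yr blast-furnace figure comes from the comment above; continuous 24/7 operation is the stated assumption):

```python
# Annual energy of the 9 GW site at 24/7 operation vs one blast furnace.
site_gw = 9.0
hours_per_year = 8760
site_twh = site_gw * hours_per_year / 1000  # 78.84 TWh/yr
furnace_twh = 3600 / 1000                   # 3.6 TWh/yr (figure from thread)
ratio = site_twh / furnace_twh              # ~22x one blast furnace

print(f"{site_twh:.1f} TWh/yr, {ratio:.0f}x a blast furnace")
```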