this post was submitted on 21 Jun 2024
102 points (100.0% liked)

[–] Dudewitbow@lemmy.zip 5 points 4 months ago (1 children)

faster ram generally has diminishing returns for system use, however it does matter for gpu compute on an igpu (e.g. gaming, and ML/AI would make use of the increased memory bandwidth).

it's not easy to simply push a wider bus, because memory bus width directly affects die design complexity, and thus cost. it's cheaper to push memory clocks than to design a die with a wider bus.
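to make the tradeoff concrete, here's a minimal sketch of the peak-bandwidth math (the DDR5 figures below are typical examples, not numbers from this thread): bandwidth scales with both bus width and transfer rate, so a wider bus and faster clocks are two routes to the same number.

```python
def peak_bandwidth_gbs(bus_width_bits: int, transfers_mt_s: float) -> float:
    """Peak DRAM bandwidth in GB/s: (bus width in bytes) * (MT/s) / 1000."""
    return bus_width_bits / 8 * transfers_mt_s / 1000

# typical dual-channel DDR5-6000 desktop: 128-bit bus
print(peak_bandwidth_gbs(128, 6000))  # 96.0 GB/s

# same transfer rate on a hypothetical 256-bit bus doubles it,
# but at the cost of a more complex (pricier) memory controller and board routing
print(peak_bandwidth_gbs(256, 6000))  # 192.0 GB/s
```

this is why igpu performance responds so well to faster ram: the bus width is fixed by the platform, so clocks are the only lever left.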

[–] Paragone@lemmy.world 2 points 4 months ago

Computational-Fluid-Dynamics simulations are RAM-limited, iirc.

I'm presuming many AI models are, too, since some of them require stupendous amounts of RAM, which no non-server machine would have.

"diminishing returns" is what Intel's "beloved" Celeron garbage was pushing.

When I ran Memtest86+ (or the other version, I don't remember) and saw how insanely slow RAM was compared with L2 or L3 cache, and then discovered how incredible the machine upgrade from SATA to NVMe was...

Get the fastest NVMe and RAM you can: it puts your CPU where it should have been all along. That difference between a "normal" build and an effective build is the misframing the whole industry has been establishing for decades.

_ /\ _