this post was submitted on 25 Feb 2025
535 points (98.4% liked)

[–] wise_pancake@lemmy.ca 9 points 4 hours ago* (last edited 4 hours ago)

Question about how shared VRAM works

Do I need to specify the split in the BIOS, with that allocation then fixed at runtime, or can VRAM be allocated dynamically as the workload demands?

On macOS you don't really have to think about this, so I'm wondering how this compares.
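For what it's worth, on Linux the BIOS setting only fixes the dedicated carve-out; the amdgpu driver can additionally hand the GPU system memory on demand from the shared "GTT" pool. A minimal sketch for inspecting both pools via sysfs, assuming the APU shows up as card0 (the index may differ on your system):

```python
# Read the amdgpu memory counters from sysfs (values are in bytes).
# mem_info_vram_*: the fixed BIOS carve-out; mem_info_gtt_*: the
# dynamically shared pool the driver can hand out at runtime.
from pathlib import Path

def read_mib(counter: str, card: str = "card0") -> float:
    """Return an amdgpu mem_info_* counter converted to MiB."""
    path = Path(f"/sys/class/drm/{card}/device/{counter}")
    return int(path.read_text()) / 2**20

for counter in ("mem_info_vram_total", "mem_info_vram_used",
                "mem_info_gtt_total", "mem_info_gtt_used"):
    print(f"{counter}: {read_mib(counter):,.0f} MiB")
```

The cap on the dynamic pool is a driver setting (the amdgpu.gttsize module parameter), so on Linux the BIOS carve-out matters less than the menu suggests.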

[–] 01189998819991197253@infosec.pub 29 points 6 hours ago (2 children)

Soldered-on RAM and GPU. Strange for Framework.

[–] secret300@lemmy.sdf.org 5 points 3 hours ago (2 children)

Yeah, the soldered RAM is for sure making me doubt Framework now.

[–] Nalivai@lemmy.world 1 points 4 minutes ago

Apparently AMD wasn't able to make socketed RAM work; the timings aren't viable. So Framework has the choice of doing it this way or not doing it at all.

[–] Jyek@sh.itjust.works 9 points 1 hour ago

Signal integrity is a real issue with DIMM modules. It's the same reason you don't see modular VRAM on GPUs: if the RAM needs to behave like VRAM, it needs to run at VRAM speeds.

[–] enumerator4829@sh.itjust.works 24 points 5 hours ago (2 children)

Apparently AMD couldn’t make the signal integrity work out with socketed RAM. (source: LTT video with Framework CEO)

IMHO: Up until now, using soldered RAM was lazy and cheap bullshit. But I do think we are at the limit of what's reasonable to do over socketed RAM. In high-performance datacenter applications, socketed RAM is on its way out (see: MI300A, Grace-{Hopper,Blackwell}, Xeon Max), with on-board memory gaining ground. I think we'll see the same trend in consumer hardware as well. Requirements on memory bandwidth and latency are going up with recent trends like powerful integrated graphics and AI slop, and socketed RAM simply won't keep up.

It's sad, but in a few generations I think only the lower-end consumer CPUs will still be usable with socketed RAM. I'm betting the high-performance consumer CPUs will require not just soldered but on-board RAM.

Finally, some Grace Hopper to make everyone happy: https://youtube.com/watch?v=gYqF6-h9Cvg

[–] barsoap@lemm.ee 8 points 3 hours ago (1 children)

I definitely wouldn't mind soldered RAM if there's still an expansion socket. Solder in at least a reasonable minimum (16G?), and not the cheap stuff but memory that can actually use the signal-integrity advantage; I may want more RAM, and it's fine if the extra is a bit slower. You can leave out the DIMM slot, but then give us at least one free PCIe x16 expansion slot, one in addition to the GPU slot. PCIe latency isn't stellar, but on the upside, expansion boards would come with their own memory controllers, and if push comes to shove you can configure the faster RAM as cache and the expansion RAM as swap.

Heck, throw the memory into the CPU package. It's not like there's ever a situation where you don't need RAM.

[–] enumerator4829@sh.itjust.works 3 points 1 hour ago (2 children)

All your RAM needs to be the same speed unless you want to open up a rabbit hole. All attempts at mixed-speed memory so far have kinda flopped. You can make very good use of such systems, but I've only seen it succeed with software specifically tailored for that use case (say, databases or simulations).

The way I see it, RAM in the future will be on-package and non-expandable. CXL might get some traction, but naah.

[–] fiddlesticks@lemmy.dbzer0.com 2 points 58 minutes ago (1 children)

Couldn't you just treat the socketed RAM like another layer of memory? Effectively, L1–L3 stay on the CPU, "L4" would be the soldered RAM, and "L5" would be the extra socketed RAM. Alternatively, couldn't you just treat it like really fast swap?

[–] barsoap@lemm.ee 1 points 31 minutes ago

Using it as cache would reduce total capacity, since a cache implies coherence: it only holds copies of data that also lives in the slower tier. And treating it as ordinary swap would mean copying pages into main memory before you access them, which is silly when you can address it directly. That is, you'd want to write a couple of lines of kernel code to use it effectively, but it's nowhere close to rocket science. Nowhere near as complicated as making proper use of NUMA architectures.
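To put toy numbers on that tradeoff, here's a minimal sketch; the capacities, latencies, and hit rates are invented placeholders, not measurements of any real system:

```python
# Two-tier memory: fast on-board RAM plus a slower socketed/PCIe tier.
# All figures below are illustrative placeholders.
fast_gib, slow_gib = 32, 64    # capacity of each tier
fast_ns, slow_ns = 80, 400     # rough access latencies

# Exclusive tiering (swap-like, pages accessed in place):
# usable capacity is the sum of both tiers.
capacity_exclusive = fast_gib + slow_gib

# Inclusive caching: the fast tier holds copies of slow-tier data,
# so it adds no net capacity.
capacity_inclusive = slow_gib

print(f"exclusive tiering: {capacity_exclusive} GiB usable")
print(f"inclusive caching: {capacity_inclusive} GiB usable")

# Average access latency under caching depends entirely on hit rate.
for hit_rate in (0.90, 0.99):
    avg_ns = hit_rate * fast_ns + (1 - hit_rate) * slow_ns
    print(f"hit rate {hit_rate:.0%}: ~{avg_ns:.0f} ns average access")
```

With a high hit rate the cache arrangement gets near-fast-tier latency, but you pay for it in usable capacity, which is exactly the tradeoff being argued here.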

[–] barsoap@lemm.ee 1 points 38 minutes ago* (last edited 28 minutes ago) (1 children)

The cache hierarchy has flopped? People aren't using swap?

NUMA also hasn't flopped, it's just that most systems aren't multi-socket or clusters. Different memory speeds connected to the same CPU aren't ideal, and you wouldn't build a system that way from scratch, but among upgraded systems it's not rare at all, and software-wise the worst that'll happen is you run at the lower memory speed. Which you'd get anyway if you only had socketed RAM.

[–] Jyek@sh.itjust.works 1 points 29 minutes ago (1 children)

In systems where memory speeds are mismatched, everything runs at the slowest module's speed, literally making the faster soldered memory slower. Why even have soldered memory at that point?

[–] barsoap@lemm.ee 1 points 21 minutes ago* (last edited 20 minutes ago)

I'd assume the soldered memory has a dedicated memory controller. There's also no hard requirement that a single controller can't drive different channels at different speeds; the only hard requirement is that one channel runs at one speed.

...and the whole thing becomes completely irrelevant when we're talking about PCIe expansion cards, whose memory sits behind their own controller; the host memory controller doesn't care.
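For a sense of scale: peak theoretical bandwidth is just transfer rate times bus width, so independently clocked pools add up instead of averaging down. A quick sketch using commonly quoted nominal specs (not measurements):

```python
# Peak theoretical bandwidth = transfers per second * bus width in bytes.
def peak_gbps(mt_per_s: int, bus_bits: int) -> float:
    """Nominal bandwidth in GB/s for a memory interface."""
    return mt_per_s * (bus_bits / 8) / 1000

soldered = peak_gbps(8000, 256)  # LPDDR5X-8000 on the 256-bit bus quoted for Ryzen AI Max
dimm = peak_gbps(5600, 128)      # generic dual-channel DDR5-5600 DIMMs (2 x 64-bit)

print(f"soldered LPDDR5X: ~{soldered:.0f} GB/s")  # ~256 GB/s
print(f"socketed DDR5:    ~{dimm:.1f} GB/s")      # ~89.6 GB/s
```

Run the pools from separate controllers and you keep the full fast-pool bandwidth where it matters; clock them together and the fast pool gets dragged down to DIMM speed, which is Jyek's point above.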

[–] unphazed@lemmy.world 7 points 5 hours ago (2 children)

Honestly, I upgrade every few years and usually have to purchase a new mobo anyhow. I do think this could lead to fewer options for mobos, though.

[–] confusedbytheBasics@lemm.ee 3 points 3 hours ago

I get it, but imagine the GPU-style markup once every mobo ships with a set amount of RAM. You'll have two otherwise identical boards, differing by $30 worth of memory, with a price spread of $200+. Not fun.

[–] enumerator4829@sh.itjust.works 4 points 5 hours ago (1 children)

I don’t think you are wrong, but I don’t think you go far enough. In a few generations, the only option for top performance will be a SoC. You’ll get to pick which SoC you want and what box you want to put it in.

[–] GamingChairModel@lemmy.world 4 points 4 hours ago (1 children)

> the only option for top performance will be a SoC

System in a Package (SiP) at least. It might not be efficient to etch the logic and that much memory onto the same silicon die, as the latest and greatest TSMC node will likely be much more expensive per square mm than the cutting-edge memory node at Samsung or whichever foundry is making the memory.

But with the way advanced packaging has been going over the last decade or so, it's going to be hard to compete with the latency/throughput of an in-package interposer. You can only do so much with the vias/pathways on a printed circuit board.

You are correct, I'm referring to on-package. Need more coffee.

[–] Jollyllama@lemmy.world 13 points 5 hours ago

Calling it a gaming PC feels misleading. It's definitely geared more towards enterprise/AI workloads. If you want upgradeable, just buy a regular Framework. This desktop is interesting, but it's niche and doesn't seem like it's for gamers.

[–] vga@sopuli.xyz 3 points 5 hours ago (1 children)

It's kinda cool but seems a bit expensive at this moment.

[–] ArchRecord@lemm.ee 5 points 3 hours ago

For the performance, it's actually quite reasonable. 4070-like GPU performance, 128GB of memory, and basically the newest Ryzen CPU performance, plus a case, power supply, and fan, will run you about the same price as buying a 4070, case, fan, power supply, and CPU of similar performance. Except you'll actually get a faster CPU with the Framework one, and you'll also get more memory that's accessible by the GPU (up to the full 128GB minus whatever the CPU is currently using).

[–] 0x0@programming.dev 23 points 10 hours ago (5 children)

> The Framework Desktop is powered by an AMD Ryzen AI Max processor, a Radeon 8060S integrated GPU, and between 32GB and 128GB of soldered-in RAM.
>
> The CPU and GPU are one piece of silicon, and they're soldered to the motherboard. The RAM is also soldered down and not upgradeable once you've bought it, setting it apart from nearly every other board Framework sells.

It'd raise an eyebrow if it were a laptop, but it's a freakin' desktop. Fuck you, Framework.

[–] surph_ninja@lemmy.world 11 points 7 hours ago (1 children)

I get the frustration with a system being so locked down, but if 32GB is the minimum, I don't really see the problem. This PC will be outdated before you really need to upgrade the RAM to play new games.

[–] muelltonne@feddit.org 11 points 6 hours ago* (last edited 6 hours ago) (3 children)

It's not just about upgrading. It's also about being able to repair your computer. RAM likes to go bad, and on a normal PC you can replace it easily: buy a cheap stick, take out the old RAM, put in the new one, and you'll have a working computer again. Quick and easy; even your grandpa is able to run Memtest and do a quick swap. But if you solder down everything, the whole PC becomes electronic waste, as most people won't be able to solder RAM.

[–] UnsavoryMollusk@lemmy.world 14 points 9 hours ago* (last edited 9 hours ago)

At first I was skeptical during the announcement, but then I saw the amount of RAM and the rack. IMHO it's not for end users but for businesses. In fact, we have workloads that would fit that computer perfectly, so why not?
