this post was submitted on 08 Mar 2026
169 points (92.9% liked)

Technology

[–] brucethemoose@lemmy.world 44 points 3 days ago

The issue with AI is “now”

Can they power it with solar? Nuclear? Hell, even a natural gas plant? Nope, the data centers need the power right this second, so they get gas turbines on site. Same with cooling; evaporative is just the quickest and cheapest to set up.

Same with its architecture. There’s no time to fix temperature/sampling issues, no time to try bitnet or any of a bazillion interesting papers that came out. A shippable product (model) is needed yesterday; just scale up what we have. “Fail” a single experiment? Your team is fired, which is exactly what happened at Meta.

Everything has to happen right now because of corporate FOMO. So, while this is an interesting musing and maybe Intel or someone will play with it, the actual AI labs could not care less because they can’t get it immediately.

[–] CubitOom@infosec.pub 85 points 3 days ago* (last edited 3 days ago)

Or we could just, like... not do the terrible thing that is bad in every way.

[–] edgemaster72@lemmy.world 49 points 3 days ago (2 children)

and then the datacenters adopt that tech and hoard it all too

[–] JasonDJ@lemmy.zip 69 points 3 days ago (4 children)

That's the idea. It's pretty worthless for home use, but for AI workloads it might make sense; the problem is that it's not quite scalable yet.

Essentially, if you've got 256 Tb/s going over 200 km of fiber, that means there are quite literally 32,000,000,000 bytes (32 GB) "in flight", living on the fiber at any point in time.

So essentially it's a revolving sushi belt of bytes, roughly as large as London (inside the M25), moving at nearly the speed of light.

Of course, it doesn't have to be the size of London. You could wind it into something about the size of a softball. Theoretically.
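The arithmetic above checks out; a quick editorial sketch (the 200,000 km/s figure for light in glass is the usual rough value for silica fiber, n ≈ 1.5):

```python
# Back-of-envelope: how many bytes are "in flight" on a long fiber loop?
BANDWIDTH_BPS = 256e12         # 256 Tb/s, as claimed above
FIBER_KM = 200                 # loop length
LIGHT_IN_GLASS_KM_S = 200_000  # ~c / 1.5 for silica fiber (rough assumption)

delay_s = FIBER_KM / LIGHT_IN_GLASS_KM_S  # one-way transit time: 1 ms
in_flight_bits = BANDWIDTH_BPS * delay_s  # bits stored on the fiber itself
print(in_flight_bits / 8 / 1e9)           # ≈ 32 GB
```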

It's a cool idea and Carmack is no doubt a brilliant man. It seems far fetched but it's kind of been done before... https://en.wikipedia.org/wiki/Core_rope_memory

[–] Morphit@feddit.uk 16 points 3 days ago (1 children)

It's an optical delay-line memory. Early computer memories were acoustic in some manner.

I can't imagine that the latency of 'delay line RAM' would be acceptable to anyone today. Maybe there's some clever multiplexing that could improve that, but it would surely add more complexity than just making more RAM ICs.

[–] tal@lemmy.today 7 points 3 days ago* (last edited 3 days ago)

Neural net computation has predictable access patterns, so instead of using the thing as a random access memory with latency incurred by waiting for the bit you want to get around to you, I expect that you can load the memory appropriately such that you always have the appropriate bit showing up at the time you need it. I'd guess that it probably needs something like the ability to buffer a small amount of data to get and keep multiple fiber coils in synch due to thermal expansion.
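The scheduling idea can be sketched with a toy model (purely hypothetical; a plain Python list stands in for the circulating loop, and `DelayLine`/`wait_for` are made-up names for illustration):

```python
# Toy delay-line memory: data circulates past a single read head.
# With a predictable (sequential) schedule, every read costs nothing extra;
# a random access pays an average wait of half a revolution.
class DelayLine:
    def __init__(self, words):
        self.words = list(words)  # contents circulating in order
        self.pos = 0              # index currently passing the head

    def tick(self):
        self.pos = (self.pos + 1) % len(self.words)

    def read_at_head(self):
        return self.words[self.pos]

def wait_for(line, word):
    """Ticks spent waiting for a specific word to come around."""
    ticks = 0
    while line.read_at_head() != word:
        line.tick()
        ticks += 1
    return ticks

line = DelayLine([f"w{i}" for i in range(8)])
# Sequential schedule: consume each word exactly as it arrives, no stalls.
out = []
for _ in range(8):
    out.append(line.read_at_head())
    line.tick()
print(out)  # -> ['w0', 'w1', ..., 'w7']
```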

The Hacker's Jargon File has an anecdote about doing something akin to that with drum memory, "The Story of Mel".

http://www.catb.org/~esr/jargon/html/story-of-mel.html

[–] Schmoo@slrpnk.net 4 points 2 days ago (1 children)

moving at nearly the speed of light.

Couldn't resist being a bit of a stickler but 🤓 erm... technically it is moving at the speed of light through a medium, which is slightly less than c, the speed of light in a vacuum. Fun fact, when things move faster than the speed of light through a medium - such as water - it produces Cherenkov radiation, the glowing blue light associated with some nuclear reactors, which is sorta like a sonic boom but with light instead of sound.
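The "slightly less than c" is easy to put a number on (n ≈ 1.468 is an assumed typical group index for silica fiber at 1550 nm):

```python
C_VACUUM_KM_S = 299_792.458  # speed of light in vacuum
N_SILICA = 1.468             # typical group index for silica fiber (assumed)

v_fiber = C_VACUUM_KM_S / N_SILICA
print(round(v_fiber))        # ≈ 204,000 km/s, about 68% of c
```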

[–] JasonDJ@lemmy.zip 4 points 2 days ago

That's pretty cool but I did say 'nearly' :-)

Also would it really be Random Access Memory? Seems like we would have to optimize a lot of things for sequential data access

[–] vacuumflower@lemmy.sdf.org 1 points 3 days ago

Also, optical fiber is used a lot on battlefields now. It just remains there afterwards. There's a lot of it to be collected.

[–] mrnobody@reddthat.com 11 points 3 days ago (1 children)

Then all the necessary mineral prices will shoot up 3,648%.

[–] Goodlucksil@lemmy.dbzer0.com 1 points 3 days ago (1 children)

Is that a decimal comma or a digit separator comma

[–] mrnobody@reddthat.com 1 points 3 days ago

Digit separator comma

[–] RobotToaster@mander.xyz 29 points 3 days ago (4 children)

I'm pretty sure 200km of fibre isn't going to be cheap either

[–] adespoton@lemmy.ca 23 points 3 days ago

Fibre is just strands of extruded glass; one of the most common substances on earth.

Sure beats the blood minerals needed for memory, and to scale up, you just extrude longer strands.

[–] e8CArkcAuLE@piefed.social 10 points 3 days ago (1 children)

there is a bit of surplus of fibre wire in Ukraine, i hear… /s

I'm not sure which job sounds less appealing: collecting it or splicing it.

[–] acosmichippo@lemmy.world 5 points 3 days ago

could be cheaper than enterprise grade DIMMs.

[–] NotMyOldRedditName@lemmy.world 6 points 2 days ago* (last edited 2 days ago) (1 children)

So we'll soon have houses built with a place to hold a spool of 200 km of multi-strand fiber (which shouldn't be too big; Ukrainian drones carry 40 km of single strand, but this could be 10 or 20 strands) and we can plug our computers into it.

[–] KairuByte@lemmy.dbzer0.com 2 points 2 days ago (1 children)

You can carry multiple wavelengths over a single strand.

[–] NotMyOldRedditName@lemmy.world 1 points 2 days ago* (last edited 2 days ago)

The article was saying the spool would give them 32 GB of RAM, hence the multi-strand thought. We're going to want hundreds of GB to run a decent model.

[–] whaleross@lemmy.world 14 points 3 days ago (1 children)

Throw in some AI and a Blockchain and you'll get the cryptobros hooked. Then use it to store NFTs.

[–] boonhet@sopuli.xyz 6 points 3 days ago

It's literally being proposed for AI. As in, AI doesn't TECHNICALLY need RAM; it could also use SAM (sequential access memory), and this stuff could provide excellent sequential access performance.

[–] solrize@lemmy.ml 22 points 3 days ago

Delay line memory in gigabytes? Bold indeed.

[–] ms_lane@lemmy.world 10 points 3 days ago (1 children)

It's an interesting idea, but what's the floor space for a pair of 256 Tb/s fibre transceivers vs. 32 GB of HBM?

If it's not significantly less, this doesn't seem like it'd be particularly helpful outside of workloads that actually stream data at that rate.

[–] tal@lemmy.today 5 points 3 days ago* (last edited 3 days ago)

I'm assuming that the point is the bandwidth.

goes looking for HBM bandwidth

https://en.wikipedia.org/wiki/High_Bandwidth_Memory

It says that HBM4, which came out one year ago, can do 2 TiB/s.

[–] eleitl@lemmy.zip 7 points 3 days ago

They never mention the word latency even once. It's a delay-line SAM, and the speed of light in glass is some 200,000 km/s, so a 200 km loop takes a full millisecond to come back around. That's hard drive latency.
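The latency concern is easy to quantify (illustrative numbers; the comparison figures for DRAM/NVMe/HDD are rough order-of-magnitude assumptions):

```python
LOOP_KM = 200
SPEED_KM_S = 200_000  # light in glass, roughly

worst_case_wait_s = LOOP_KM / SPEED_KM_S  # one full revolution of the loop
print(worst_case_wait_s * 1e3)            # -> 1.0 (ms)

# For scale: DRAM ~100 ns, NVMe ~100 us, 7200 rpm HDD ~4-10 ms.
# An average half-revolution wait (~0.5 ms) sits near HDD seek territory,
# which is why random access to a delay line is a non-starter.
```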

[–] humanspiral@lemmy.ca 3 points 2 days ago

While bandwidth is high, storage is low. Even dropping the speed to 10 Tb/s, it would only mean 1.25 GB of effective RAM.
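Capacity here is just bandwidth times transit time, so it scales linearly with both; a small sketch of that point (function name and the 200,000 km/s fiber speed are illustrative assumptions):

```python
def delay_line_capacity_gb(bandwidth_tbps, loop_km, speed_km_s=200_000):
    """Bytes stored on the fiber = bandwidth * one-way transit time."""
    delay_s = loop_km / speed_km_s
    return bandwidth_tbps * 1e12 * delay_s / 8 / 1e9

print(delay_line_capacity_gb(256, 200))  # ≈ 32 GB, the headline figure
print(delay_line_capacity_gb(10, 200))   # ≈ 1.25 GB at the reduced rate
```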

[–] architect@thelemmy.club 3 points 2 days ago

No. I will always have computers. Fuck you.

[–] tal@lemmy.today 7 points 3 days ago

Note that this is from last month, though I haven't seen it submitted.

[–] sturmblast@lemmy.world 4 points 3 days ago (1 children)
[–] ooterness@lemmy.world 6 points 3 days ago* (last edited 3 days ago) (2 children)

Were you talking to John Carmack or John Connor?

[–] Sxan@piefed.zip 0 points 2 days ago

I prefer þe real John... John Carter.

[–] sepi@piefed.social 3 points 3 days ago (1 children)
[–] boonhet@sopuli.xyz 8 points 3 days ago (2 children)

If you read the article, it's sequential access but that's fine for AI use.

[–] ryannathans@aussie.zone 4 points 3 days ago (1 children)
[–] boonhet@sopuli.xyz 3 points 3 days ago

Yes, but for a niche use case where SAM is fine, not for consumers

[–] sepi@piefed.social 2 points 3 days ago (1 children)

I read the article title and it said RAM. Now you're trying to pull a sam altman bamboozle - "it's not random, it's sequential" - then it ain't RAM.

Fuck the law and fuck the article yeehaw

[–] boonhet@sopuli.xyz 1 points 2 days ago

Current title seems a bit better.

A cure for the memory crisis? John Carmack envisions fiber cables replacing RAM for AI usage, which would mean a better future for us all

Essentially, since the access patterns in AI usage are predictable, they could hypothetically replace their heavy RAM usage with this. Which would mean more RAM for the rest of us.

[–] geekwithsoul@piefed.social 2 points 3 days ago (2 children)

I don't pretend to understand how this would actually work, but wouldn't this essentially be like token ring networking but used as memory?

[–] cmnybo@discuss.tchncs.de 11 points 3 days ago

It's delay line memory. It was common back in the days of vacuum tube computers.

[–] tal@lemmy.today 5 points 3 days ago* (last edited 3 days ago)

A little bit, but normally Token Ring didn't just keep data running around in a circle on and on.


Token Ring works more like a roundabout, where you enter at a given computer on the ring and then exit at another device. Without looking, I suspect that, like Internet Protocol packets, Token Ring probably had a TTL (time-to-live) field in its frames to keep a mis-addressed packet from forever running around in circles.

Also, I'm assuming that an implementation of Carmack's idea would have only one...I don't know the right term, might be "repeater". You need to have some device to receive the data and then retransmit them to keep the signal strong and from spreading out. You wouldn't want to have a ton of those, because otherwise it'd add cost. On Token Ring, you'd have a bunch of transceivers, to have a bunch of "exits", since the whole point is to move data from one device to another.

[–] just_another_person@lemmy.world 1 points 3 days ago* (last edited 3 days ago) (3 children)

This is... incredibly stupid. This man has done so many drugs he no longer realizes how computers or electricity works.

ETA: https://www.reddit.com/r/answers/comments/23nd6a/i_remember_in_the_90s_illegal_or_black_box_cable/

[–] Shadow@lemmy.ca 18 points 3 days ago (2 children)
[–] nova_ad_vitum@lemmy.ca 11 points 3 days ago (1 children)

The lack of investment in more production capacity for RAM is based on a roughly 3-year horizon for this insane extra AI demand.

Creating workable consumer-grade alternatives with delay line memory of all things would take longer than that, and the market would collapse the moment AI demand for RAM dried up. This is one of those things that is theoretically possible but due to both technology and market conditions will absolutely not be a thing.

[–] tal@lemmy.today 15 points 3 days ago

Creating workable consumer-grade alternatives

I think that this is intended not to replace DIMMs in PCs, but to replace HBM for AI use. If you're doing neural net computation, you have very predictable access patterns, so you can store your edge weights such that the desired data is showing up at just the right time.

[–] doorknob88@lemmy.world 7 points 3 days ago* (last edited 1 day ago) (1 children)
[–] greybeard@feddit.online 1 points 2 days ago

Maybe he confused him with John McAfee.
