[–] Valmond@lemmy.world 15 points 1 month ago (8 children)

I'm all for it, and it's just the usual "Moore's law" trend; I just wonder if we'll hit a wall where (most!) users simply won't need it?

[–] originalucifer@moist.catsweat.com 34 points 1 month ago (2 children)

most users already don't use what they've got. it's more about reducing physical size for the masses... these new techs will allow for even smaller storage for thinner, more efficient devices.

i think only some power users (i'm a data hoarder) and commercial interests care about bulk storage

[–] Gradually_Adjusting@lemmy.world 22 points 1 month ago (1 children)

I'm slightly surprised that losing faith in corporations as good stewards of our cultural content (wantonly deleting cherished shows, for example) hasn't driven a larger move towards personal ownership of media. In a world where anything that fails to be profitable faces destruction, owning your stuff has never been a better idea.

[–] thejml@lemm.ee 13 points 1 month ago (1 children)

People, in general, don't care. I don't necessarily mean that in a bad way; it's more that they just don't notice until the show they searched for isn't available, and then they shrug it off and move on to another one they can watch. Most people I know don't want to keep large catalogs of things they like around, because they only watch a single movie a few times in their lives. They watch it and then they're good for years or more. There's so much content out there that there's no way they're going to rewatch things, and no way they're going to miss it, because they're having enough trouble keeping up with all the new stuff. On top of that, the convenience of just turning on the tube and hitting play, versus finding the disc and storing and organizing it, is huge. And ripping it and then keeping a large amount of storage local, online, and healthy for the purpose is outside their technical wheelhouse (and budget, at times).

Honestly, I'm a big proponent of buying physical media… but I've greatly reduced what I rip/buy/keep, knowing there's only so much time left in my personal hourglass and I've got better things to do than worry about all that upkeep. When I kick the bucket, no one is going to care about it all. Maybe they'll keep a few interesting ones, but the rest will likely just sit on someone else's shelf. In the meantime, how many times am I really going to watch some of these things?

[–] Gradually_Adjusting@lemmy.world 1 points 1 month ago

Emphasis on "slightly"

[–] beejjorgensen@lemmy.sdf.org 3 points 1 month ago (1 children)

I can't believe how much mileage I've gotten out of the 512GB SSDs in my laptops. And my "big" backup disks are hand-me-down 1TB HDDs my friend didn't need. I don't do video, though.

[–] originalucifer@moist.catsweat.com 3 points 1 month ago (1 children)

my collection is small compared to some, but i've got about 22 4TB drives in use in various arrays... it's mostly video.

[–] beejjorgensen@lemmy.sdf.org 2 points 1 month ago

I remember it being a big space sink when I was editing video. Now all I have is DVD rips of my collection and those are nice and compact.

[–] IHeartBadCode@fedia.io 15 points 1 month ago

Thermal is a wall to contend with as well. At the moment SSDs get their density from 3D stacking the planes of substrate that make up the memory cells. Each layer contributes some heat, and at some point the layers in the middle get too hot: they're heated by the layers below and not close enough to the top to dissipate that heat upwards fast enough.

One way to address this was the multi-level cell (MLC), where instead of on/off, the voltage stored in the cell represents multiple bits; as a rough illustration, 0-1.5v = 00, 1.6-3v = 01, 3.1-4.5v = 10, 4.6-5v = 11. But that requires sense amplifiers that can handle it; they aren't outright difficult to etch, they just add complexity to ensure the amplifier reads the correct value. We've since moved to eight-level cells (TLC, three bits per cell) and sixteen-level cells (QLC, four bits), and the error correction behind the sense amplifiers is wild. But all NAND floating-gate transistors (FGMOS) leak, and the more levels you pack into a cell, the smaller the gap between them, so even small leaks can be the difference between sensing one level and the next. So at some point packing more levels into the cell will just lead to a cell that leaks too quickly for the word "storage" to be applied to the device. It's not really storage any longer if powering the device off for half a year puts all the data at risk.
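
To put rough numbers on that margin argument, here is a small sketch (the 5-volt window and evenly spaced levels are illustrative assumptions, not real NAND specs, which work with threshold-voltage distributions):

```python
# Illustrative sketch only: real NAND uses threshold-voltage windows and
# per-level charge distributions, not these exact numbers.
VOLTAGE_WINDOW = 5.0  # assumed usable voltage range of a cell, in volts

def level_margin(bits_per_cell: int) -> float:
    """Approximate spacing between adjacent voltage levels."""
    levels = 2 ** bits_per_cell
    return VOLTAGE_WINDOW / (levels - 1)

for bits, name in [(1, "SLC"), (2, "MLC"), (3, "TLC"), (4, "QLC")]:
    print(f"{name}: {2 ** bits} levels, ~{level_margin(bits):.2f} V between levels")
# SLC: 2 levels,  ~5.00 V between levels
# MLC: 4 levels,  ~1.67 V between levels
# TLC: 8 levels,  ~0.71 V between levels
# QLC: 16 levels, ~0.33 V between levels
```

The point is just that each extra bit per cell roughly halves the margin, so the same amount of charge leakage becomes proportionally more dangerous.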

So once going upwards and packing levels hits a wall, the next direction is moving out. But the further you move outward, the further the physical memory cells sit from the controller. That's a non-zero distance, and the speed of light is only so fast. One light-nanosecond is about 300 millimetres, so a device running a 1GHz clock has at most that distance to cover in a single tick, and that's the ideal case: heat, quantum effects, and so on all conspire to make it less than ideal. So you can only go so far out before you start needing caches at the in-between steps and scheduling of block accesses, which makes the whole thing more complex and can slow it down.
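
As a back-of-the-envelope check on that timing budget (free-space speed of light only; real board traces propagate slower and logic delays eat further into the margin):

```python
# Back-of-the-envelope: how far a signal can travel in one clock tick.
# Uses the free-space speed of light; real traces are roughly half that.
C_MM_PER_NS = 299.8  # speed of light in millimetres per nanosecond

def reach_per_tick_mm(clock_ghz: float, round_trip: bool = True) -> float:
    """Maximum one-way distance reachable within a single clock period."""
    period_ns = 1.0 / clock_ghz
    distance = C_MM_PER_NS * period_ns
    return distance / 2 if round_trip else distance

print(f"1 GHz, one-way:    ~{reach_per_tick_mm(1.0, round_trip=False):.0f} mm")
print(f"1 GHz, round trip: ~{reach_per_tick_mm(1.0):.0f} mm")
# ~300 mm one-way, ~150 mm if the answer has to come back in the same tick
```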

And there are ways to get around that as well, but all of them begin to really increase the cost, like multi-port chips accessed over multi-channel buses, basically creating a small network of chips inside your SSD, sort of like how a lot of CPUs are swapping over to chiplet designs. We can absolutely keep going, but there's a cost attached to that "keep going" that's going to be hard to bring down. So there will be a point where the cost-to-utility equation for end users starts playing a much larger role long before we hit some physical wall.

That said, around 200 layers was once thought to be the wall for stacking due to heat; some creative work got the number past 300, but the chips do indeed generate a lot more heat these days. Maybe heat sinks and fans for your SSD aren't too far off in the future; passive cooling with a heat sink is already becoming vogue for SSDs.

The article indicated that Samsung and SK hynix predict being able to hit 1000+ layers, which is crazy to think about. Even with the tricks employed today to get heat out of the middle layers faster, I don't see how those same tricks get past 500+ layers without a major change in how the cells are produced, and usually there's a lot of R&D behind a change like that. So maybe they've been working on something nobody else knows about, or maybe they're planning active cooling for SSDs? Who knows.

Either way, 1000+ layers is wild to think about, but I'm pretty sure such chips won't come down in price as quickly as some consumers might hope. As things get more complex, the time before prices start to drop gets longer, and that slows overall demand for more density: only the buyers who see the higher cost as worth their specific need remain, and that becomes a very niche set of applications.

[–] Beacon@fedia.io 12 points 1 month ago

Spinning disk drives hit that point for me a while ago.

[–] fishos@lemmy.world 10 points 1 month ago (2 children)

The issue is, every time we make a great leap in storage medium, we tend to use that new storage for BIGGER files: higher-quality media and all that. Back in the day, the average movie file was measured in megabytes; now it's gigabytes. Think about an old floppy with 1.4 MB of space and how many text files you stored on it. You couldn't ever imagine needing more. Then came pictures and music files. Video files. Then higher-resolution pictures and videos. Suddenly even your text documents aren't just raw .txt files, but Word documents and interactive PDFs.

As storage improves, what we expect to be able to carry around with us or have in our home computer changes. I'm currently running a home server with 18TB of storage, an amount I would never have dreamed of possessing 20 years ago, and yet here I am debating when to grab that 24TB drive because I can already see myself running out of space in a few months.

This is all to say that I really don't think there will ever be a maximum amount a user could need. Give them that maximum and within a week they'll have figured out a way to use it to capacity. I think video games show my point: first cartridge/disc size limitations, then the transition to digital distribution and ballooning install sizes.

[–] andrew_bidlaw@sh.itjust.works 4 points 1 month ago (1 children)

This demand is also dictated by what companies see as a default setup; now it's 0.5TB+ SSDs as system drives. Windows 10/11 doesn't work on HDDs because its update and security services can overwhelm the disk's speed and make the system unresponsive. If you're given older hardware by your employer, good luck: your OS and other programs assume they don't need to limit either speed or size, and the only way to keep using the same features is to upgrade.

[–] fishos@lemmy.world 2 points 1 month ago* (last edited 1 month ago)

Exactly. Eventually what we see now as cutting edge will become "bare minimum" or even "obsolete" hardware. One day the camera on your phone will be taking such high-resolution pictures by default that anything less than a TB of onboard storage will seem quaint.

[–] DJDarren@thelemmy.club 2 points 1 month ago

Our family's first proper PC back in around '93 had a 1GB HDD. I remember strutting about at school like I was the top shit because of how great my computer was.

These days I have a modded iPod mini with 128GB that I'm getting close to needing to increase because of my love of 320kbps MP4 files.

[–] WalrusDragonOnABike@lemmy.today 7 points 1 month ago* (last edited 1 month ago) (1 children)

It's already been 6 years since the first 100TB SSD was released, and last I checked I don't think anyone has bothered to dethrone it. Density and the number of layers possible have both increased since then. I imagine part of it is just a performance issue, though; ten 10TB SSDs are going to be faster than one 100TB SSD.

At the consumer level, smaller form factors will probably mean more density is still useful. Things like Steam Deck drives will benefit for a while.

[–] jlh@lemmy.jlh.name 2 points 1 month ago

64TB SSDs are fairly common in the enterprise market now; I don't think they were 6 years ago. It's possible we'll see 128TB SSDs become fairly common in servers in a few years.

[–] EldritchFeminity@lemmy.blahaj.zone 5 points 1 month ago (1 children)

They'll be useful for gamers, at least. With the increasing trend of companies caring less about properly optimizing the size of game installs and expecting gamers to have SSDs for texture loading on the fly, these drives will definitely see use. I currently have a 4TB HDD that has over 2.3TB of Steam games installed on it right now (roughly 100 games from tiny indie games to big AAA releases that are 40-80 gigs in size), and several newer games have an SSD listed as one of their minimum requirements.

[–] Valmond@lemmy.world 2 points 1 month ago

Ya, I didn't say it will instantly be useless 😁. I'd pick up a 4TB or more, because why not?

My first SSD was a 256GB (I really splurged on that one); now I have a 2TB that cost less. Soon it will be 20TB, and then 200TB, etc. Will video games grow that fast? My thought is they won't, and that's all, I guess 😊
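
Purely to illustrate that doubling argument (the growth rate is a made-up figure, not a forecast), here's a rough sketch of how long the jumps from 256GB onwards would take if capacity at a fixed price doubled every ~2.5 years:

```python
# Hypothetical projection only: assumes a constant doubling period for
# capacity at a fixed price, which real NAND pricing does not guarantee.
import math

START_CAPACITY_TB = 0.256  # the 256GB drive mentioned above
DOUBLING_YEARS = 2.5       # assumed doubling period (made-up figure)

def years_to_reach(target_tb: float) -> float:
    """Years until the target capacity at the assumed doubling rate."""
    doublings = math.log2(target_tb / START_CAPACITY_TB)
    return doublings * DOUBLING_YEARS

for target_tb in (2, 20, 200):
    print(f"{target_tb} TB: ~{years_to_reach(target_tb):.0f} years after the 256GB drive")
# 2 TB: ~7 years, 20 TB: ~16 years, 200 TB: ~24 years
```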

[–] Dudewitbow@lemmy.zip 4 points 1 month ago

NAND density is always useful at the ultra-portable end, in applications like phones, portable gaming devices, microcontroller boards and such, where space or PCIe lanes are often the limiting factor. When the capacity of NAND grows, the options get better, as NAND usually doubles in capacity per chip.

[–] frezik@midwest.social 2 points 1 month ago (1 children)

We've already hit a perceived user-experience limit. In blind tests, the difference in responsiveness between SATA and NVMe SSDs isn't always apparent (people sometimes say the SATA drive is faster), even though the speed difference on paper is substantial.

IMO, programmers haven't exploited the possibilities of extremely fast mass storage yet. The orders-of-magnitude difference in speed isn't fully realized; it's not just faster, it's faster in a way that requires new approaches. Multicore CPUs forced that kind of rethink over a decade ago, but this change has gone relatively unnoticed by programmers.
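
One hedged illustration of the kind of shift meant here (my own example, not something from the thread; the filename and sizes are made up): with NVMe-class latency you can often memory-map a large on-disk dataset and let the OS page it in on demand, instead of pre-loading everything into RAM or building an elaborate cache layer in front of the disk.

```python
# Sketch under assumptions: "samples.f32" is a hypothetical file of float32
# values written by some earlier step; the point is the access pattern.
import numpy as np

# Map the file instead of reading it all into RAM; the OS pages data in
# on demand, and NVMe latency is low enough that random access stays cheap.
data = np.memmap("samples.f32", dtype=np.float32, mode="r")

# Touch a random subset; only the pages actually accessed are read from disk.
idx = np.random.default_rng(0).integers(0, len(data), size=1_000)
print(float(data[idx].mean()))
```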

[–] Valmond@lemmy.world 1 points 1 month ago (1 children)

Well, maybe, but it's just storage, like an HDD or RAM.

But to do what (outside scientific software)?

[–] frezik@midwest.social 1 points 1 month ago

Make everything faster. Space that isn't used for caching data is space that's wasted.

This isn't necessarily about apps that run on your desktop or phone. Most code in the world runs on servers, and the use cases are different.