Seagate. The company that sold me an HDD which broke down two days after the warranty expired.
No thanks.
laughing in Western Digital HDD running for about 10 years now
I had the opposite experience. My Seagates have been running for over a decade now. The one time I went with Western Digital, both drives crapped out in a few years.
Funny because I have a box of Seagate consumer drives recovered from systems going to recycling that just won't quit. And my experience with WD drives is the same as your experience with Seagate.
Edit: now that I think about it, my WD experience is from many years ago. But the Seagate drives I have are not new either.
Avoid these like the plague. I made the mistake of buying two 16 TB Exos drives a couple of years ago and have had to RMA them 3 times already.
I stopped buying Seagates when I had 4 of their 2TB Barracuda drives die within 6 months... I was constantly RMAing them. Finally got pissed, sold them, and bought WD Reds; still got 2 of the Reds in my NAS playing hot backups with nearly 8 years of power-on time.
Lmao the HDD in the first machine I built in the mid 90s was 1.2GB
My dad had a 286 with a 40MB hard drive in it. When it spun up it sounded like a plane taking off. A few years later he had a 486 and got a 2GB Seagate hard drive. It was an unimaginable amount of space at the time.
The computer industry in the 90s (and presumably the 80s, I just don't remember it) was wild. Hardware would be completely obsolete every other year.
Just one would be a great backup, but I’m not ready to run a server with 30TB drives.
I'm here for it. The 8-disk server is normally a great form factor for size, data density, and redundancy with RAID6/raidz2.
This would net around 180TB in that form factor. That would go a long way for a long while.
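Napkin math, if you want to check that (a rough sketch that assumes raidz2 usable space is N−2 disks and handwaves ZFS overhead and TB-vs-TiB):

```python
# Rough usable capacity for an 8-bay raidz2 pool of 30 TB drives.
# Handwaves ZFS metadata/slop overhead and TB-vs-TiB differences.
disks = 8
drive_tb = 30
parity = 2  # raidz2 keeps two drives' worth of parity

usable_tb = (disks - parity) * drive_tb
print(f"{usable_tb} TB usable")  # -> 180 TB usable
```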
I dunno if you would want to run raidz2 with disks this large. The resilver times would be absolutely bazonkers, I think. I have 24 TB drives in my server and run mirrored vdevs because the chances of one of those drives failing during a raidz2 resilver is just too high. I can't imagine what it'd be like with 30 TB disks.
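For anyone curious, here's a toy model of that risk. Big assumption flagged up front: it treats failures as independent at a fixed annualized failure rate, which real-world correlated failures and unrecoverable read errors violate, so take the output as a floor, not a forecast:

```python
import math

# Toy model: chance that at least one surviving drive fails while a
# resilver runs. ASSUMES independent failures at a constant annualized
# failure rate (AFR); correlated failures and unrecoverable read errors
# make the real risk higher, so treat this as a floor.
afr = 0.02            # assumed 2% AFR per drive
survivors = 7         # 8-disk raidz2 with one drive being replaced
resilver_days = 3.5   # the 100 MB/s full-rewrite estimate for 30 TB

p_one = 1 - math.exp(math.log(1 - afr) * resilver_days / 365)
p_any = 1 - (1 - p_one) ** survivors
print(f"{p_any:.2%}")  # ~0.14% under these (optimistic) assumptions
```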
I thought I read somewhere that larger drives had a higher chance of failure. A quick look around suggests that's untrue for newer drives.
One problem is that larger drives take longer to rebuild the RAID array when one drive needs replacing. You're sitting there for days hoping that no other drive fails while the process runs. Current SATA and SAS standards are already as fast as spinning platters could possibly go; making the interface faster won't help anything.
There was some debate among storage engineers about whether they even want drives bigger than 20TB. The reduced risk of data loss during a rebuild is worth trading off some density. That will probably hold until SSDs get closer to the price per TB of spinning platters (not necessarily the same; possibly more like double the price).
If you're writing at 100 MB/s, it'll still take 300,000 seconds to write 30TB. 300,000 seconds is 5,000 minutes, or 83.3 hours, or about 3.5 days. In some contexts, that's a long time to be exposed to the risk of some other hardware failure.
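Same arithmetic, scriptable (assuming a sustained 100 MB/s and vendor-style decimal units):

```python
# Time to rewrite a full 30 TB drive at a sustained 100 MB/s,
# using vendor-style decimal units (1 TB = 1,000,000 MB).
capacity_mb = 30 * 1_000_000
speed_mb_s = 100

seconds = capacity_mb / speed_mb_s
print(f"{seconds:,.0f} s = {seconds / 60:,.0f} min "
      f"= {seconds / 3600:.1f} h = {seconds / 86400:.1f} days")
# -> 300,000 s = 5,000 min = 83.3 h = 3.5 days
```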
Yep. It’s a little nerve-wracking when I replace a RAID drive in our NAS, but I do it before there’s a problem with a drive. I can mount the old one back in, or try another new drive. I’ve only ever had one new drive DOA, here’s hoping those stay few and far between.
I mean, cool and all, but call me when SATA or M.2 SSDs are 10TB for $250, then we'll talk.
Not sure whether we'll arrive there; the tech is definitely entering the taper-out phase of the sigmoid. Capacity might very well still get cheaper, even 3x cheaper, but don't, in any way, expect SSDs to simultaneously keep up on write performance; that ship has long since sailed. The more bits they squeeze into a single cell, the slower it gets, and the price per cell isn't going to change much any more: silicon has hit a price wall, and it's been a while since the newest, smallest node was also the cheapest.
OTOH, how often do you write a terabyte in one go at full tilt?
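For a sense of scale, here's that question as numbers. The speeds are assumed ballparks, not benchmarks:

```python
# How long writing 1 TB in one go takes at a few assumed sustained
# speeds. These are illustrative ballparks, not benchmark numbers.
terabyte_mb = 1_000_000
for label, mb_s in [("SLC cache burst", 3000),
                    ("TLC sustained", 1000),
                    ("QLC after cache", 300),
                    ("spinning rust", 100)]:
    minutes = terabyte_mb / mb_s / 60
    print(f"{label:>16}: {minutes:6.1f} min")
```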
How can someone without programming skills make a cloud server at home for cheap?
(Like connected to WiFi and that’s it)
Yes. You'll have to learn some new things regardless, but you don't need to know how to program.
What are you hoping to make happen?
Not programming skills, but sysadmin skills.
Buy a used server on eBay (companies often sell their old servers for cheap when they upgrade). Buy a bunch of HDDs. Install Linux and set up the HDDs in a ZFS pool.
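If you go the ZFS route, the pool setup itself is basically one command. A minimal sketch, with the pool name "tank" and the device paths as placeholders (double-check which disks you're about to wipe, and prefer /dev/disk/by-id/ paths in practice):

```python
import subprocess

# Minimal sketch: one 8-disk raidz2 pool named "tank".
# The device paths below are PLACEHOLDERS -- use stable
# /dev/disk/by-id/ names in practice, and be absolutely sure
# these are the disks you intend to wipe.
disks = [f"/dev/sd{letter}" for letter in "bcdefghi"]

subprocess.run(["zpool", "create", "tank", "raidz2", *disks], check=True)
subprocess.run(["zfs", "create", "tank/backups"], check=True)
subprocess.run(["zpool", "status", "tank"], check=True)
```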