this post was submitted on 11 Mar 2026
132 points (95.2% liked)

submitted 1 day ago* (last edited 21 hours ago) by rook@lemmy.zip to c/selfhosted@lemmy.world
 

New update: my current setup is a Dell PowerEdge T310 with 6x4TB SAS drives, a Xeon CPU, and 12GB ECC RAM, all parts stock. No hardware RAID. 2.5GbE network card. Should I just replace the 6 drives with larger capacities? That will probably be more than $10/TB... I haven't bought the 16 drives yet; they are used 4TB SAS drives, which turn out to be about $40 each.

Current storage: 8TB used out of 14... and lots of cold drives waiting to be copied, probably 10TB+. Is it worth copying all the cold storage drives to the redundant NAS?

Update: budget is $200-600. The reason for the build is that I found cheap 4TB drives for almost $10/terabyte, so I want to use as many of them as I can.

I am trying to build my final NAS build as a beginner.

I have a 6x4TB Dell server, but it's not enough.

I am currently trying to build the final boss of my NASes: 16x4TB with TrueNAS and RAID.

I am unsure of what parts to buy as I am a complete beginner.

I found a case that can hold all 14 drives.

I need a motherboard, CPU, RAM, and a PSU.

I am on a budget, kind of.

What motherboard do you recommend? One pulled from a workstation, with CPU and RAM? A server board? A normal consumer board with a normal consumer CPU? The motherboard should have some PCIe slots for 2 SATA cards and one 2.5GbE card.

What CPU to run all these drives?

What RAM, and how much? 16? 32? ECC or non-ECC? DDR4? DDR3?

Power supply: 850w or more?

All parts should be able to support the 16 drives with headroom...

I would appreciate any help on this build, I want to build this as soon as possible.

Thanks

top 45 comments
[–] Shimitar@downonthestreet.eu 7 points 23 hours ago* (last edited 23 hours ago) (1 children)

I wouldn't use more than 4 or 6 disks in a home environment. Especially with mechanical drives, 24/7 power consumption would get me very worried.

I run 4x8TB SSDs: not cheap, but solid, low power AND low heat (even more important).

Also consider heat dissipation: at home you most likely don't have constant temperature and humidity, so many spinning disks can suffer from heat, and that will kill them faster.

Longevity... with so much space I would expect to keep it running a decade or more, so factor in 10x365x24 hours of operation: energy consumed, heat dissipated, and failure rate.

On top of that, whatever CPU and RAM you throw at it is meaningless; anything will work, even an Intel N100 NUC. Having enough cables and ports, on the other hand... well.

[–] SomethingBurger@jlai.lu 5 points 23 hours ago* (last edited 22 hours ago) (1 children)

20W/drive means 30x24x0.2 kWh (about 144 kWh) each month for 10 drives. At 0.20€/kWh, that's about 29€/month, cheaper than a 20TB Hetzner box. That's assuming all drives are always spinning; an idle drive uses more like 5W.
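That arithmetic is easy to sketch in a few lines (a sketch only, using the same assumptions as above: 20W per always-spinning drive, 0.20€/kWh, a 30-day month):

```python
# Rough monthly electricity cost for a set of always-spinning drives.
# Assumed figures (not measured): 20 W per active drive, 0.20 EUR/kWh.

def monthly_cost_eur(drives: int, watts_per_drive: float = 20.0,
                     eur_per_kwh: float = 0.20, hours: float = 30 * 24) -> float:
    kwh = drives * watts_per_drive / 1000 * hours  # energy over one month
    return kwh * eur_per_kwh

print(round(monthly_cost_eur(10), 2))  # 28.8 EUR/month for 10 drives
```

Swap in your own per-drive wattage and tariff; idle drives at ~5W cut the figure to roughly a quarter.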

[–] Shimitar@downonthestreet.eu 2 points 20 hours ago (1 children)

10x4TB = 40TB can be achieved with 4x12TB drives (actually 36TB in RAID5).

Those 12TB drives doubtfully use much more power each than the 4TB ones, so the 28€/month probably cuts down to about 14€/month, if not less.

Over 120 months (10 years) of uptime, you'd save enough to justify cutting down from 10 drives to 4.
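The capacity claim checks out with single-parity RAID5 math (a sketch; RAID5 sacrifices one drive's worth of capacity to parity):

```python
def raid5_usable_tb(drives: int, size_tb: int) -> int:
    # Single-parity RAID5: one drive's worth of capacity goes to parity.
    return (drives - 1) * size_tb

print(raid5_usable_tb(4, 12))   # 36 TB usable from 4x12TB
print(raid5_usable_tb(10, 4))   # 36 TB usable from 10x4TB
```

Interestingly, both layouts land on the same 36TB usable, so the comparison really does come down to power, heat, and redundancy.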

[–] Passerby6497@lemmy.world -1 points 19 hours ago (1 children)

But going with more smaller drives gives you higher IO and the ability to have more concurrent failures before disaster. Losing a disk during resilvering is horrible when you're only running with 1 redundant drive normally.

[–] Shimitar@downonthestreet.eu 2 points 19 hours ago* (last edited 19 hours ago)

Yes, more redundancy is good and indeed worth having. Still, 5x12TB drives are probably more energy- and heat-efficient than 10x4TB ones.

Even if I got 10x4TB drives for free I wouldn't use them. Maybe a couple for backups or cold storage, but not active 24/7 in a domestic RAID environment.

I actually have 4x6TB HDDs that I retired in favor of the 4x8TB SSDs; I use two for local backup and keep two as spares to replace them when they fail.

4x8TB in RAID5 provides 24TB of total space, which is far more than I need, and the risk of a double failure is mitigated by a proper 3-2-1 backup strategy.

As for the higher I/O, frankly I've never felt the need. A 1Gbps home network is always the bottleneck anyway, and if you require that kind of disk throughput on your network, you're doing something wrong.

Even many 4K video streams would saturate your LAN before saturating your disks, unless you store uncompressed video.

[–] Flipper@feddit.org 129 points 1 day ago (1 children)

You say you are on a budget, yet you talk about 128 gigs of RAM.

Maybe you should clarify what your budget is.

[–] Jolteon@lemmy.zip 12 points 1 day ago

Maybe the budget was planned out before RAM prices spiked. 128 gigs of used server RAM was not that expensive before that happened.

[–] philanthropicoctopus@thelemmy.club 43 points 1 day ago (1 children)

Where are people getting drives at $10/tb?

Where I live it's $50/tb

[–] remon@ani.social 13 points 1 day ago

In the past!

My 20TB drives cost me $17 per TB 2 years ago. The exact same model is now at $33 per TB :(

[–] q@piefed.social 4 points 1 day ago

That sounds like a nightmare tbh. So many failure points, so much heat and power usage, and cables.

I have 6 out of 8 bays filled and still feel like it's a lot to worry about and manage if something fails.

[–] blitzen@lemmy.ca 33 points 1 day ago* (last edited 1 day ago) (1 children)

Why 16 drives? Do you already have 16 4tb drives?

[–] JGrffn@lemmy.world 38 points 1 day ago (2 children)

I also went with 16 drives, but they were 20TB each. OP, if you don't already have those 4TB drives, reconsider the count and the sizes; 4TB can't be the price sweet spot for HDDs...

[–] Humanius@lemmy.world 11 points 1 day ago* (last edited 1 day ago)

It would seem that the sweet spot for HDDs is as high as 16 to 24 TB at the moment (at least here in the Netherlands).
You can get a 24TB Seagate Barracuda for €479,- right now, which comes out to about €20 / TB.

If you specifically want a NAS drive though the best "bang for the buck" appears to be a 28TB Seagate IronWolf Pro for €688,- coming out to about €25 / TB.

Edit: Personally I run 8TB drives in my server, which are currently €209,- (€26 / TB) for a regular Seagate Barracuda, and €289 (€36 / TB) for a Seagate IronWolf Pro. Funnily enough 4TB drives would actually be better for NAS drives at €132,90 (€33 / TB) for a WD Red Plus.
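Price-per-terabyte comparisons like the ones above are easy to tabulate (using the prices quoted in this comment; they are examples, not live data):

```python
# EUR price and capacity (TB) as quoted in this comment; not live prices.
drives = {
    "24TB Barracuda": (479.00, 24),
    "28TB IronWolf Pro": (688.00, 28),
    "8TB Barracuda": (209.00, 8),
    "4TB WD Red Plus": (132.90, 4),
}

# Print cheapest per-TB first.
for name, (price, tb) in sorted(drives.items(), key=lambda kv: kv[1][0] / kv[1][1]):
    print(f"{name}: {price / tb:.2f} EUR/TB")
```

Updating the dictionary with current local prices makes it trivial to spot the sweet spot in your own market.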

[–] Gork@sopuli.xyz 5 points 1 day ago* (last edited 1 day ago)

If I ever got a lucky Amazon mistake where I order one 4 TB drive but a box of 16 comes in, I would set up a full *arr stack.

Probably won't be that lucky though.

[–] Bloefz@lemmy.world 4 points 1 day ago

Ehhh one thing I've learned over the years, it doesn't matter how much storage I buy. Within a few weeks it'll be full.

[–] scrubbles@poptalk.scrubbles.tech 23 points 1 day ago (2 children)

No more Storage Full warnings.

Is that a challenge?

[–] ergonomic_importer@piefed.ca 7 points 1 day ago

Just one more drive, bro. Please, just one more.

[–] ulterno@programming.dev 0 points 1 day ago

Fix it by simply turning off "Low Disk Space" warnings in System Settings.
Mix that with keeping your / and your home cache, local, share, etc. directories on a non-data drive and you get no warnings, only errors when a write fails.

[–] hesh@quokk.au 15 points 1 day ago (1 children)

I would consider fewer, larger drives

[–] frongt@lemmy.zip 7 points 1 day ago (1 children)

I would seek the best price per terabyte while still allowing redundancy.

[–] hesh@quokk.au 1 points 1 day ago (1 children)

True, but I would factor in some kind of cost/longevity penalty for a larger number of drives. Even if 16x4 is a bit cheaper than 4x16 today, will it die faster?

[–] frongt@lemmy.zip 3 points 1 day ago (1 children)

At these scales, I don't think it's measurable, if statistically significant at all.

In any case, you should always be ready to replace a drive that fails. I buy used because they're significantly cheaper (or at least they used to be) and I've never had any major failures.

[–] Onomatopoeia@lemmy.cafe 2 points 1 day ago

And while more drives means more opportunities for failure, it also means that when a failed drive is replaced, the replacement is likely from a different manufacturing period.

I have a 5-drive NAS in which I've been upgrading a single drive every 6 months. This has the benefit of slowly increasing capacity while also ensuring the drives are of different ages and so less likely to fail simultaneously. (Now I'm waiting for prices to come back down, dammit.)

[–] Theoriginalthon@lemmy.world 1 points 22 hours ago

Have a look at the guides on the serverbuilds.net forums, such as https://forums.serverbuilds.net/t/guide-nas-killer-5-0/

The NAS Killer series of posts (4.0, 5.0, 6.0, etc.) lists a bunch of CPUs and motherboards with approximate eBay prices, along with RAM, disks, and so on. I used it as a reference when building my cheap NAS for home, mainly the motherboard/CPU sections.

[–] atzanteol@sh.itjust.works 13 points 1 day ago* (last edited 1 day ago)

You're talking about a lot of storage - it might be worth investing in some low-end server hardware. A Dell tower or something, maybe one off eBay if you're looking to cut costs.

I picked up a PowerEdge T110II a long time ago and it's been... flawless. Just a simple server with a 4x4TB RAID5. No hardware problems (aside from occasional disk failures over the years), easy to manage. It costs a bit more - but server hardware is often just more reliable and for a NAS that's job #1. This server just runs.

I just upgraded the memory in it to 32GB for ~$100USD. Before that it had 8GB. I needed more for restic doing backups. I probably could have gotten away with 16GB but I figured I'd max it out for that price.

[–] vane@lemmy.world 11 points 1 day ago

It's better to buy 4x 16-20TB drives and expand storage than to buy 16x4TB drives. Also, 16 3.5-inch HDDs alone draw around 200W of power.

[–] linuxguy@piefed.ca 7 points 1 day ago (1 children)

Take a look at https://diskprices.com/ for the best price per TB. Backblaze has been pretty great about sharing their hardware specs and builds. Maybe get some ideas from them https://www.backblaze.com/blog/open-source-data-storage-server/

[–] B0rax@feddit.org 1 points 1 day ago

They already have the disks, they are looking for the rest of the build.

[–] blitzen@lemmy.ca 11 points 1 day ago* (last edited 1 day ago)

Honestly, I bet it would be cheaper to replace a few, or even all, of the 4TB drives in your current setup with larger drives.

[–] sefra1@lemmy.zip 1 points 1 day ago

I have never built a machine like that, so I guess I can't help you much, but like another comment said, it seems like a pain to maintain. I usually have trouble with SATA cables losing contact, and with that setup there are many cables prone to losing contact.

As for RAM, I wouldn't worry about it at all; unless you use ZFS, 4GB should be more than enough, even 2 or less. RAM is expensive now, so you may want to use as little as possible unless you already have it lying around. Does TrueNAS use ZFS? If so, you may want to use another filesystem like Btrfs, or test how well ZFS works with the RAM you have. I'm not sure ZFS is worth the trouble. I wouldn't buy extra RAM.

As for the CPU, I don't think it matters much, but like I said, I have never tried your setup. Even an ancient Sandy Bridge should work fine if it's just a personal NAS with HDDs, even with encryption. It works fine on my NAS.

Also, if you have access to free old computers, you can try a ghetto setup where each computer only handles 4 drives and you then join them together on a master computer via NBD or NVMe over Ethernet (works with SATA too). But that seems like an even bigger pain to maintain, and it increases your power consumption by a lot.

[–] farcaller@fstab.sh 7 points 1 day ago

You really want ECC RAM and a motherboard/CPU combo that supports it.

[–] KairuByte@lemmy.dbzer0.com 6 points 1 day ago

Honestly, you might want to look into proper server hardware. There are many out there that support dozens of drives, assuming you’re willing to go with a blade. Even if you explicitly want a tower, server hardware is where you’re going to get the best support.

You’ll most likely also want to increase the size of your drives. Assuming you’re being smart and utilizing RAID, you’re going to be losing a bunch of that storage.

[–] Onomatopoeia@lemmy.cafe 3 points 1 day ago* (last edited 1 day ago)

Others have mentioned power: you may want to do some math on drive cost vs. power consumption. There'll be a drive-size point where the higher cost is worth it, because fewer drives consume less power than more drives.

Having built a number of systems, I'm a LOT more conscious of power draw today for things that will run 24/7. My ancient NAS, for example, draws about 15 watts at idle with 5 drives (it will spin down drives).

More drives will always mean more power, so maybe fewer but larger drives makes sense. You may pay more up front, but monthly power costs never go away.

Also, I've built a 10-drive NAS like this (because I had the drives, the case, the mobo and the RAM). It produced a lot of heat while doing anything, and it was a significant power hog, around 200W when running. And it really didn't idle very well (I've run it with Unraid, TrueNAS and Proxmox).
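The purchase-plus-power math suggested above can be sketched like this (all prices and wattages here are hypothetical illustrations, not quotes):

```python
def ten_year_cost_eur(n_drives: int, price_each: float, watts_each: float,
                      years: float = 10, eur_per_kwh: float = 0.20) -> float:
    # Purchase price plus electricity over the service life, always powered on.
    energy_kwh = n_drives * watts_each / 1000 * 24 * 365 * years
    return n_drives * price_each + energy_kwh * eur_per_kwh

# Hypothetical comparison: many cheap small drives vs. a few larger ones,
# both idling at ~6 W per drive (made-up numbers).
print(round(ten_year_cost_eur(16, 40, 6)))   # 2322 EUR
print(round(ten_year_cost_eur(4, 300, 6)))   # 1620 EUR
```

Even with these made-up numbers, the few-large-drives option wins over a decade despite the much higher sticker price, which is the parent's point.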

[–] BlackEco@lemmy.blackeco.com 3 points 1 day ago

What's the case? Does it have the ability to hot-swap drives (even with a side panel off)? That can come in really handy if one of your drives fails.

[–] empireOfLove2@lemmy.dbzer0.com 2 points 1 day ago (1 children)

ABSOLUTELY ECC memory, 32GB or higher if you can afford it these days, as TrueNAS does benefit from decent cache space, especially with so many drives to spread data slices across.

Realistically, unless you expect multiple concurrent users, any 4-core or better CPU from 2015 onward will be plenty to manage the array. No need for dedicated server hardware unless the price is right.

I have a Dell PowerEdge t3 SOHO/small-business server tower that I gutted and turned into a 5x8TB config. It only has a middling 4-core Xeon 1225v5, and I never get above 50% CPU usage when maxing the drives out. More CPU is needed if you're doing filesystem compression or serving multiple concurrent users.

[–] Onomatopoeia@lemmy.cafe 3 points 1 day ago (1 children)

I've never run into issues running desktop hardware without ECC as servers, going back to the '90s.

I just don't think the extra cost is worthwhile; I'm not running systems/services that would fail catastrophically without ECC (or suffer weird bitflips that corrupt a transaction).

[–] empireOfLove2@lemmy.dbzer0.com 1 points 18 hours ago

I've never run into issues either, but in any situation where data integrity is somewhat important, ECC is generally a very good idea. It's never a problem until suddenly it is.

I don't give a crap about my Minecraft server having ECC, but a storage server where cached data gets written to disk, I'd rather have ECC ensure nothing gets corrupted.

[–] Humanius@lemmy.world 2 points 1 day ago* (last edited 1 day ago)

There is no real clarification of what the budget is, so I will assume it is tight.
My advice assumes you are looking for the best bang for the buck.

The case looks like a good option, assuming that those are 3.5 inch bays.
It should give you plenty of space for expansion in the future if you want to do that

RAM prices are pretty nuts right now, so I would definitely not go balls to the wall with 128 GB of RAM.
16 GB of RAM should be more than plenty for a NAS server. Maybe you can even get away with 8GB?
I'm using 16 GB of DDR3 RAM in my own NAS server (which is also running Jellyfin and Nextcloud) and it's running fine.

Speaking of DDR3.. Have you considered buying your CPU, motherboard and RAM second hand?
From what I hear the prices of DDR3 RAM are not nearly as elevated as those of DDR4 and DDR5 RAM, and DDR3 is plenty sufficient for a simple NAS.

Be sure not to skimp on the power supply. Most consumer power supplies aren't built for running a NAS's worth of HDDs.
I'm running a Corsair RM550x in my server, which is capable of supplying 130W on the 5V rail.
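A rough way to sanity-check that rail figure (a sketch; it assumes roughly 0.7A at 5V per idle 3.5-inch HDD, which varies by model, so check your drives' datasheets):

```python
def five_volt_load_w(drives: int, amps_per_drive: float = 0.7) -> float:
    # Steady-state 5 V draw; spin-up surges mostly load the 12 V rail instead.
    return drives * amps_per_drive * 5.0

print(round(five_volt_load_w(16), 1))  # 56.0 W, well under a 130 W rail
```

The 12V rail and staggered spin-up matter more for the startup surge, but this kind of estimate shows why the 5V rating is worth reading before buying.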

Good luck with your server build!

[–] Decronym@lemmy.decronym.xyz 2 points 1 day ago* (last edited 18 hours ago)

Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I've seen in this thread:

NAS: Network-Attached Storage
NUC: Next Unit of Computing (Intel's brand of small computers)
PSU: Power Supply Unit
RAID: Redundant Array of Independent Disks (mass storage)
SSD: Solid State Drive (mass storage)
ZFS: Solaris/Linux filesystem focusing on data integrity

[Thread #156 for this comm, first seen 11th Mar 2026, 21:50] [FAQ] [Full list] [Contact] [Source code]

[–] Ferrous@lemmy.ml 2 points 1 day ago

Hey, you basically defined my system.

A TrueNAS Scale machine running 4x16TB drives. I use a cheap Rosewill 4U server rack case. It has hot-swap drive bays in front. Big plus.

The brain is an AMD 5950X on an ASRock X570 Steel Legend with 128GB of the cheapest Crucial DDR4 ECC I could find. Also running an RTX 2080 for Jellyfin transcoding.

My consumer mobo is the bottleneck. Given that my end goal is a 10GbE NIC and an LSI card for more SATA ports, I'm going to have to get creative with M.2 slots. I might plug a 10GbE NIC into an M.2 slot.

The PSU is a 1kW Fractal, platinum rated. Way overkill, but the high efficiency is key.

You'll notice my build uses a lot of gaming parts: I simply harvested my old parts when I upgraded my gaming PC. Despite this, it still idles under 200 watts. My point is not that you should seek out gaming parts, but if you happen to have any on hand, they can be effectively leveraged given the price increases on new parts.

The biggest thing: use ECC. This is non-negotiable for your setup. ECC saved me a couple of weeks ago when my 5950X started randomly crapping out. So far no issues after setting a fixed voltage. ZFS and ECC go together like peas in a pod.

[–] LodeMike@lemmy.today 2 points 1 day ago (2 children)

Just in case you don't know: most drives aren't rated for running this many in one case.

[–] fizzle@quokk.au 4 points 1 day ago (1 children)

Yeah earlier in my journey I had a bunch of cheap drives packed in close. They didn't last. Heat kills drives.

[–] LodeMike@lemmy.today 3 points 1 day ago (1 children)

Oh it's the heat? I thought it was vibration (I actually don't know).

[–] fizzle@quokk.au 3 points 1 day ago

My rudimentary understanding of physics suggests that vibrations will be more harmful as heat increases.

[–] ZeldaFreak@lemmy.world 2 points 1 day ago

Also they aren't rated to get screamed at: https://www.youtube.com/watch?v=tDacjrSCeq4