For $1k I would start with a UniFi UDM-Pro, an Intel NUC, and a Synology NAS.
I regret getting a UDM-Pro and recently swapped it for an N5105 OPNsense box. Luckily they hold their value, so I didn't lose any money on the UDMP.
Why do you regret that choice?
I have a UniFi system: APs, switches, CKG2, gateway. I'm looking to add a CKG2+ and some PoE cameras.
Honest question... why do people who know how to build one buy a NAS like Synology? Aren't you just paying double or triple for the same result you could get building the NAS from scratch?
We use Synology at work to avoid paying CALs on a Windows Server VM.
I bought a QNAP a long time ago, never again. It was like $3k with disks for 6 x 6 TB drives, maybe 10 years ago. They constantly get hacked; a bunch of their NASes were getting crypto-lockered because some dev hard-coded an admin password, IIRC. Their software does a bunch of shit I don't need, and it runs like shit now with just me using it. I'm going to reset it soon once I get my data off.
My NAS now is an R730xd with 12 x 12 TB drives in it running TrueNAS. Granted, my electric bill is a car payment with all my stuff, but it only cost me about $1,500 for disks, and the server was super cheap and has a 10 gig connection.
Granted, some of it is cool if you're still learning: one click and you can have a MySQL/PHP server on there, etc. I thought about getting a Synology, but all the bells and whistles it offers with apps are things I can just run on a real server.
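For anyone weighing a build like that 12 x 12 TB box, the usable-capacity math is easy to script. This is a rough sketch that counts only parity/mirror overhead (it ignores RAIDZ padding, spares, and the TB-vs-TiB gap, so treat the numbers as upper bounds):

```python
def usable_tb(drives: int, size_tb: float, layout: str) -> float:
    """Approximate usable capacity for one vdev, counting only redundancy overhead."""
    if layout == "mirror":      # striped mirrors (RAID10-style): half the drives
        return drives // 2 * size_tb
    if layout == "raidz1":      # one drive's worth of parity
        return (drives - 1) * size_tb
    if layout == "raidz2":      # two drives' worth of parity
        return (drives - 2) * size_tb
    raise ValueError(f"unknown layout: {layout}")

# 12 x 12 TB as a single RAIDZ2 vdev: roughly 120 TB usable.
print(usable_tb(12, 12, "raidz2"))   # -> 120.0
# The same drives as striped mirrors: roughly 72 TB usable.
print(usable_tb(12, 12, "mirror"))   # -> 72.0
```

The gap between those two numbers is usually what decides the layout: mirrors rebuild faster and perform better, RAIDZ2 gives you far more space per drive.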
Reliability and lower power consumption than most of the Frankenstein-DIY cheap stuff recommended here ;)
After my past Ubiquiti experiences I can't agree on the UDM...
I'm still a beginner at it, but I would say not to over-prioritize cores. RAM will be your bottleneck first. I say this as someone with 36 physical cores and like 90% of them idle.
u/diffraa , this is a key point.
At $dayjob, we use 4 GB per core for application workloads and it works well. Databases get 16 GB per core. Memcached gets 32 GB per core. In development we use 16 GB per core because there isn't heavy load.
My own homelab is built around a bunch of quad cores with 32 GB of memory. The memory has come in useful. Having 64 GB per quad core would be even better, but was not possible when I built the systems many years ago (I bought super cheap $40 motherboards with only two slots). For my initial purpose getting 2x 1 GB sticks would have been enough, but I'm glad I bought more as I use all the memory now.
If you don't know what you want to do, I would get 8 GB of memory per core at minimum, and in a lightly loaded homelab, 16 GB per core is totally reasonable. I would only get less memory if you know you're going to hit the CPUs hard with particular tasks that share memory or use little memory, and even then I would get minimum 4 GB per core.
, but $1,000 in cash
Not sure how this would help me; I've spent $10k or more, but I could get a t-shirt, I guess?
N5105 NAS board, 32-64 GB of RAM, 1x 500 GB NVMe SSD, some sort of case, and a bunch of HDDs. I like the 8 TB IronWolfs; they're cheap enough, but large enough.
Maybe the N6005 if you can find it. But it's a great server and handles most self-hosted stuff. I run Ubuntu Server on it; it's just the cleanest and easiest to use, no GUI needed.
What's nice is it's super low power, and cheap. So you can eventually migrate to a more powerful Proxmox server on mini PCs, like the NAB6, then just turn the N5105 into a TrueNAS server, and even duplicate it for backups, and triplicate (if you're really feeling it) for redundancy. Getting a 2nd and 3rd Proxmox mini PC enables HA on VMs. So yeah, that's my goal. ATM I've still got to migrate to Proxmox.
Same, but with an N100 motherboard. Asus and ASRock have some ITX boards with this chip.
I loved migrating to 3 NUCs from a 2015 Synology, so I think you are 100% correct. (It allowed me to use TB networking for a 26 GbE Ceph network.)
TB = Thunderbolt?
Bought a Dell R630 from eBay for a decent price, but I wish I'd spent more on larger-capacity hard drives. I bought a bunch of old 600 GB HDDs running RAID 10, and right now I'm afraid to replace them.
At least 2 mini desktops with as much RAM and SSD as I can get in them. Running Proxmox and TrueNAS, then setting up my Jellyfin and Home Assistant, and the rest will be a playground. I am a simple man.
UDM-PRO, USW-Aggregation, USW-Enterprise-24-POE, U6-LR… build a server with i5/32GB NVMe boot drive, then some RAID drives… I took out a loan in this scenario as $1,000 wouldn’t cover my entire rack getting blown up.
https://www.servethehome.com/introducing-project-tinyminimicro-home-lab-revolution/ Small footprint, low wattage, and a modern CPU that can run anything I can throw at it; just get a lot of RAM. I'd run Ubuntu or Debian with all apps in Docker containers, maybe install Cockpit if I wanted a web GUI, and run VMs if I want via KVM: https://ubuntu.com/blog/kvm-hyphervisor If you want to go the NAS/Plex route, you can add an HDD via 10G USB. Great Level1Techs video about mini PC home servers: https://youtu.be/GmQdlLCw-5k?si=VrdfDRfmpNHCZz-H
Everyone here is recommending tiny labs, but what if you need lots of TBs? Is there a solution then? I have a MicroServer Gen8 (which is plenty powerful) but need way more space, and was going to buy something that can fit 10+ hard drives...
There are lots of solutions.
Cheap:
Buy a full tower PC case with room for 10+ HDDs. Lots of options, like those from Fractal and Cooler Master, etc.
Enterprise (expensive):
Buy a JBOD with a backplane that you plug all your disks into, then plug that into a server.
Can you make ZFS pools across devices with Proxmox? Otherwise I don't know what you do for storage redundancy or RAID, unless you run something like Longhorn or Ceph across the cluster; all those machines have a single drive.
Depends on the requirements. Is the purpose to learn virtualization management? Linux sysadmin stuff? Virtual networking + firewalls? For my purposes it’s all of the above and more.
Having said that, I have not had an ounce of trouble out of an Intel NUC 12 Pro NUC12WSHv5. So for $1,000 I'd start with that and add NVMe storage and as much RAM as my budget allows. Running ESXi 8.
3 OptiPlex 7040 Micros - put 32 GB RAM and a 2 TB SSD in each and call it a day.
Legit same
I already have one 7040 Micro and I really wish I had two more for this exactly. Just cluster those puppies.
An AM4 mobo + 4500 = <€150. Cheap ATX case + PSU = €75. That leaves you €75 for RAM to price-match those 7040s, with lots more expandability and ECC support.
I like the OptiPlex Micros because they're small.
I figure you need to buy networking equipment too?
What is your job? Do you have exposure to life cycled hardware?
2-3 second-hand small form factor PCs running Proxmox, plus a cheap 2-bay Synology NAS for backups.
2 bay?
I would buy a single $1000 42u rack…
And do what with it? if your budget was $1000 and you only bought a rack, it'd be empty.
That's it. A $1000 rack and call it a day! Done.
Homerack achieved
All used: a 2019-ish Intel NUC i7 with 32-64 GB RAM running ESXi 7, a 4-bay QNAP or Synology with a Celeron and 8 TB spinners, a TP-Link ER605, an Omada PoE switch, and an Omada AP.
You end up with a great setup for VMs, a reliable Plex server using the NAS CPU, multi-WAN, rock-solid VPN, and a UniFi/Meraki-like experience, and you don't notice it on the electric bill, your ears, the shelf, or the room temperature.
This doesn’t differ at all from my existing setup. My only regret was not starting with 64GB of RAM on the NUC instead of the 32GB I started with.
I would just get a nice AMD board with IPMI, drop in a good Ryzen CPU, any Linux (Debian, any RHEL-based distro, or even Proxmox), and tons of drives plus a few NVMe RAIDs. Pretty much about that.
Wish I had skipped the Frankenstein and mini PC steps.
Here are two reasons enterprise servers are the way to go:
- Remote management is awesome. Remote KVM, remote serial terminal, mounting ISOs remotely. If your homelab is in a not-so-accessible place (e.g. cupboard or garage), this saves so much frustration.
- High quality rack rails. You're more likely to be tinkering around the back of your server than a company that throws it in a data centre. It's almost like rack rails were built for homelabs.
I wouldn't worry too much about noise. $1000 will easily get you an R730 or T630.
- Supermicro H11SSL-N6 with an Epyc 7551P with 128G memory - €600
- PSU - €60ish
- Pile of refurb 4TiB disks - €100
- Mikrotik hAP ax² - €80
- HP Procurve 2848 - €40
- Misc gubbins - €180
There's a server, networking gear, and storage. I can sort the rest out later.
$1k wouldn't get me started for the electrical runs and cabinets for the hardware.
I have a rack full of R710s that barely get used anymore because energy is so freaking expensive. I’d either do everything in the cloud or use lots of low powered machines at home.
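The energy point is worth putting numbers on. A quick sketch of annual running cost for a 24/7 load; the wattages and the $0.30/kWh rate below are made-up examples, not measurements of an R710:

```python
def annual_cost(watts: float, usd_per_kwh: float) -> float:
    """Annual electricity cost in USD for a load drawing `watts` 24/7."""
    kwh_per_year = watts / 1000 * 24 * 365
    return kwh_per_year * usd_per_kwh

# A dual-socket server idling around 200 W vs. a 15 W mini PC, at $0.30/kWh:
print(round(annual_cost(200, 0.30)))  # -> 526
print(round(annual_cost(15, 0.30)))   # -> 39
```

At those assumed figures, the mini PC pays for itself in a year or two of savings, which is why "lots of low-powered machines" keeps coming up in this thread.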
Just chiming in that the consensus on Mini PC clusters is pretty cool.
Completely agree. That's where it's at!
A bigger NAS with more drive bays
I would do pretty much what I do now with two mini PCs and my desktop PC running background services in a three node cluster. I change my mind too often though and just did a bit of a rebuild over the holiday, so by next weekend I may have a completely different goal.
I have considered replacing the desktop with a laptop for more portability.
I would also not mind getting a 2.5 Gbps switch. I have all 2.5 Gbps devices on the network except the switch which is a little silly.
I'd do almost what I have now: a compact (ITX/mATX) board with a C612 chipset and a 2600 v3/v4 Xeon, maxed out with memory. SAS board/NVMe/10G if you want/need it. Silent and efficient for 24/7.
I would buy a second-hand workstation with all the PCIe slots I could get. They are bargains, and you can pull/upgrade CPUs as needed. Need more RAM? Put the second CPU in. Don't need it? Pull it out.
I'd separate my storage and put that in its own server.
Then I'd probably go for multiple low-energy SFF "servers" instead of one powerful one.
Dell PowerEdge budget server. An R720 can have good specs for cheap on eBay. Get a Ubiquiti switch for VLANs. Firewall brand of your choice; I did a TZ400W. You should have some money left over to buy an endpoint as well. Then install VMware and build out a VM environment of your choice. I chose Windows just to continue learning the systems I administer.