this post was submitted on 10 Nov 2023

Self-Hosted Main

A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.


Currently running one box with Proxmox, split into several VMs:

  • 1 core devoted to HTPC VM
  • 2 cores devoted to Linux VM for hosting game servers
  • 1 core devoted to Portainer and ~25 containers
  • 1 core reserved for running VMs for fun (Windows, different Linux distros)

My main concern is that my Portainer VM is handling quite a bit at this point. I haven't noticed any performance degradation yet, but I'm wondering if I could benefit from adding another machine to my homelab to host some services.

How do you tend to organize, separate, and split resources between your hosted services? What steps did you take to begin growing your homelab? The next big step for me is getting an HDD enclosure to serve out more storage than my one HDD allows, but I'm posing this question from a CPU/RAM resources perspective.

top 7 comments
[–] ithilelda@alien.top 1 points 1 year ago

one VM for everything, the other for the router. you don't even need to limit CPU resources on Proxmox. it just works™.

anyway, I do plan to get another box for my game servers because I don't want them to mess with my perfectly fine internet backbone lol.

[–] randomcoww@alien.top 1 points 1 year ago (1 children)

I've been downsizing.

A few years ago I had three Xeon E5 rack servers with 64GB of RAM each and a bunch of HDDs.

I then cut it down to three mini PCs with 16GB of RAM each.

Just recently I cut it down even further to just my single Linux desktop running Kubernetes in the background.

Why? I realized that my interests are all on the software side and more hardware doesn't contribute much to improving my experience either building or using the lab.

I also tend to make improvements to my software stack when resources are more restricted, and my lab has become much more reliable and efficient over the years.

[–] remotelove@lemmy.ca 1 points 1 year ago* (last edited 1 year ago)

I downsized as well for the same reasons. Less equipment means less care and feeding. My main desktop is powerful enough to have a few VMs running in the background while I play games.

There is a mini PC at my workbench that I use to display reference material for my other hobbies, and it also controls my CNC when needed.

My CAD work and games require a powerful workstation so all my money went into that. Also, electric bills can start to add up when servers are running 24x7. Plus, they can be loud.

I am going to invest in another mini PC soon to act as another router and firewall, but that is about it.

[–] SadGrimReaper@alien.top 1 points 1 year ago

I would try to move from VMs to LXC containers; they use fewer resources.
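For what it's worth, on Proxmox that move mostly comes down to recreating the guest with the `pct` tool. A minimal sketch, assuming a Debian template; the VMID, hostname, and storage names are placeholders:

```
# Fetch a container template (exact filename depends on what `pveam available` lists)
pveam update
pveam download local debian-12-standard_12.7-1_amd64.tar.zst

# Create an unprivileged container with modest resources; enable nesting
# if you plan to run Docker/Portainer inside it
pct create 120 local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst \
  --hostname docker-host \
  --cores 1 --memory 2048 \
  --rootfs local-lvm:16 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp \
  --unprivileged 1 --features nesting=1
pct start 120
```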

[–] lilolalu@alien.top 1 points 1 year ago

I don't think this makes any sense at all. None of your VMs has a fixed resource requirement... ALL of them can make use of more resources when they need them. That's why the Proxmox developers (and the Linux kernel developers, and the Android developers) spend a LOT of time thinking about governing resources. It makes no sense for an idling OS to sit on a reserved core while your 25 containers need to do work in parallel.
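To make that concrete: rather than pinning cores, Proxmox lets every guest see all cores and resolves contention with scheduler weights. A rough sketch with `qm`; the VMIDs and weights are placeholders, not a recommendation:

```
# All VMs see 4 cores; cpuunits is the relative scheduler weight under
# contention, and cpulimit is an optional hard cap in cores' worth of time
qm set 101 --cores 4 --cpuunits 2048              # container/Portainer VM: highest priority
qm set 102 --cores 4 --cpuunits 1024              # game server VM
qm set 103 --cores 4 --cpuunits 512 --cpulimit 2  # HTPC VM: capped at 2 cores' worth
```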

I think you should rethink the overall concept of your setup.

[–] Comakip@alien.top 1 points 1 year ago

What's the purpose of buying more hardware if everything is working fine? It makes more sense to buy more if you notice a bottleneck.

[–] Apart_Mistake_8412@alien.top 1 points 1 year ago

There have been some great responses here; I'm adding my own.

I've been in the Infrastructure/Security space for 22 years and homelabbing for at least that long, if not longer. I've had a 42U rack in my apartment, a dedicated server room in a previous home, all the way down to a single RPi 3 hosting what I needed. Things change, and it's a hobby for me, so it's definitely fun to experiment, but here's what I've learned over the years.

In an enterprise/production environment, especially with today's "cloud everything", machines that are not being utilized are wasted resources. In both professional and homelab settings, I don't start to look at machine expansion until I've hit 70-80% utilization of CPU and RAM. The reasoning: if I'm constantly at 30-40% utilization, the server(s) are doing their job with plenty of headroom. When utilization starts to approach a consistent 70-80%, I ask myself the following question.

"Do I need to add another machine or am I simply hosting things that I don't use or need?". This will drive the action as to whether or not to scale out my machine/lab or to reduce the workloads.

I don't think there's a best practice here. It's going to come down to an individual answering that question for themselves.

If you don't have it already, I would highly recommend adding some monitoring to your system, something like the Prometheus/Grafana/node_exporter stack. This will give you a good feel for what's happening on your Proxmox host and in each VM, and you'll have some data to drive your decision further.
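Not prescribing the exact setup, but a minimal sketch of that stack looks something like this; the hostnames and scrape targets are assumptions for illustration:

```
# Run node_exporter wherever you want metrics from (listens on :9100)
docker run -d --name node-exporter --net host --pid host \
  -v /:/host:ro,rslave prom/node-exporter --path.rootfs=/host

# Then merge a scrape job like this into prometheus.yml:
#   scrape_configs:
#     - job_name: 'homelab-nodes'
#       static_configs:
#         - targets: ['pve-host:9100', 'portainer-vm:9100']
```

Grafana then sits on top of Prometheus, and one of the stock Node Exporter dashboards gets you most of the way.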

One thing to keep in mind: let's say you have a 4-core/8-thread machine running Proxmox. The kernel scheduler will allocate CPU threads as fast as it can and reclaim memory back to the system. That means you could have 8-10 VMs on your Proxmox host, each with 1-2 cores, and unless they are all being heavily utilized at the same time, overprovisioning CPU cores is totally fine. Memory is more important in that regard, as is having fairly quick underlying VM storage to avoid IO delay.
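The memory-reclaim part maps to KVM's balloon driver, which Proxmox drives per VM once the guest has virtio_balloon loaded. A sketch, with a placeholder VMID:

```
# Let VM 104 float between 2GB and 6GB; when the host comes under memory
# pressure, Proxmox shrinks idle guests back toward their balloon minimum
qm set 104 --memory 6144 --balloon 2048
```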

The setup I'm working on codifying at the moment is an 8-core N305/16GB RAM/500GB M.2 mini PC running Proxmox. This machine has dual NICs, so I can use a trunk for VLANs and whatnot. I keep all networking services there, such as multiple resolvers (Pi-hole or whatever), the UniFi controller, and anything that's not coupled directly to my homelab services.
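For the trunk piece, the usual Proxmox pattern is a VLAN-aware bridge on the second NIC; the interface name and VLAN range below are assumptions:

```
# /etc/network/interfaces fragment: second NIC carries the tagged trunk
auto vmbr1
iface vmbr1 inet manual
    bridge-ports enp2s0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094

# Guests pick their VLAN via the tag on their virtual NIC, e.g.:
#   qm set 105 --net0 virtio,bridge=vmbr1,tag=20
```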

The second machine is an i5-8279U (4c/8t) mini PC with 32GB RAM and 2TB M.2 PCIe 3 NVMe storage, also running Proxmox. It runs all of the main services in my lab/home and does quite well. If I want to test a new piece of software or just experiment, I tend to use it temporarily to better understand the workload needs of the deployment/VM. If I decide that something is very resource-intensive and will push the overall host to 70-80% utilization, I start looking at adding another node.

I treat my "fleet" very much as a utility. For instance, for Home Assistant (which I'm currently not on, but moving back to soon), I might allocate 2 cores, 4GB of RAM, and 40GB of storage, and leave it at that. Keeping things smallish and monitoring closely over a period of weeks and months helps me determine whether I'm "rightsized" for whatever it is I'm running. Having the monitoring stack in place really helps me understand what's going on and how I should pivot, if at all.
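For anyone newer to Proxmox, that sizing translates to something like the following at creation time; the VMID, VM name, and storage name are placeholders:

```
# 2 cores / 4GB RAM / 40GB disk, matching the allocation above
qm create 110 --name homeassistant \
  --cores 2 --memory 4096 \
  --scsi0 local-lvm:40 --scsihw virtio-scsi-pci \
  --net0 virtio,bridge=vmbr0 \
  --ostype l26 --boot order=scsi0
```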

If I see small persistent spikes in utilization (say, 5-10 minutes a few times a day where things spike), I simply ignore them, because that's just part of how things go. But if utilization is pegged at that level for a long time, 12-24 hours as an example, then it's time to start going down the path of resource allocation, be it changing the Proxmox settings or maybe adding another machine.
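If you've set up the Prometheus stack mentioned earlier, that "pegged for 12-24 hours" heuristic can be encoded as an alert rule rather than eyeballed; the threshold, window, and file path here are assumptions:

```
# Fires only after CPU has stayed above 80% busy for 12 straight hours;
# reference this file under rule_files: in prometheus.yml
cat > /etc/prometheus/rules/sustained-cpu.yml <<'EOF'
groups:
  - name: homelab
    rules:
      - alert: SustainedHighCPU
        expr: 100 - (avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[10m])) * 100) > 80
        for: 12h
        annotations:
          summary: "{{ $labels.instance }} has been above 80% CPU for 12h"
EOF
```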

I really don't think about growing my homelab; I just think about what I need to get the job done and to experiment, and move on from there.

I hope this was helpful.