PRTG
Only reason I keep a Windows box around!
Does overspeccing your hardware so much that performance issues never come up count?
For normal people, Grafana and Prometheus are the typical good answers.
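To make that concrete, here's a minimal sketch of exposing a custom metric with the `prometheus_client` Python library, which Prometheus then scrapes and Grafana graphs. The port, metric name, and fake reading are placeholders of mine, not anything prescribed by the stack.

```python
# Minimal custom exporter: serves /metrics for Prometheus to scrape.
# Port 8000, the metric name, and the fake reading are placeholders.
import random
import time

from prometheus_client import Gauge, start_http_server

demo_load = Gauge("homelab_demo_load", "Example gauge for a Grafana panel")

if __name__ == "__main__":
    start_http_server(8000)             # exposes http://<host>:8000/metrics
    while True:
        demo_load.set(random.random())  # stand-in for a real sensor/reading
        time.sleep(15)
```

Add a scrape job pointing at port 8000 and the `homelab_demo_load` series is ready to chart in Grafana.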
Flag a warning when usage hits 5% so you can start saving for the next server
Zabbix
I second this. I just set my instance up a couple weeks ago.
While researching, I saw many people saying that it’s very good but hard to set up. I disagree with that to some degree: setup itself is extremely easy, and configuring it the way you want isn’t as easy, but it’s mostly just way more time consuming. Time consuming != hard, though. Just take the time to tweak it how you want.
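A lot of that configuration time can also be scripted away through the Zabbix API. Here's a hedged sketch with the community `pyzabbix` library; the URL, credentials, host list, and the group/template IDs are placeholders based on a stock install.

```python
# Sketch: bulk-register hosts through the Zabbix API instead of clicking through the UI.
# Server URL, credentials, IPs, and the group/template IDs are placeholders.
from pyzabbix import ZabbixAPI

zapi = ZabbixAPI("http://zabbix.example.lan")
zapi.login("Admin", "zabbix")  # default credentials; change these on a real install

for name, ip in [("nas", "192.168.1.10"), ("pihole", "192.168.1.11")]:
    zapi.host.create(
        host=name,
        interfaces=[{"type": 1, "main": 1, "useip": 1,
                     "ip": ip, "dns": "", "port": "10050"}],  # Zabbix agent interface
        groups=[{"groupid": "2"}],            # "Linux servers" in a default install
        templates=[{"templateid": "10001"}],  # assumed: a stock Linux-by-agent template
    )
```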
I second this. Setup was a breeze for me compared to checkmk.
What specs did you put it on?
Well, I run my containers in Kubernetes.
And it more or less includes full support for Prometheus/Grafana/Alertmanager/etc.
So I use that.
I have questions about this. I’ll be getting another Pi or two and was considering putting k8s on them. Would I be able to set them up with kubernetes and then import my existing Docker containers from my current Pi to them?
Yup. You can do that.
Although you wouldn't "import" your existing containers. But you can:
- Create manifests for your containers (Kubernetes runs the exact same Docker containers), or find Helm charts for them (see the sketch below).
- Import the storage from Docker into your new PVs/PVCs.
I would suggest learning Kubernetes first, though. The learning curve can be rather steep.
Also, Rancher + k3s would work perfectly for your Pis.
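To illustrate the manifest route above: the same Deployment you'd write as YAML can also be driven from the official `kubernetes` Python client. This is only a sketch; the name, image, port, and namespace are placeholders, and it assumes your kubeconfig already points at the k3s cluster.

```python
# Sketch: a Deployment (normally a YAML manifest) built with the kubernetes Python client.
# Name, image, port, and namespace are placeholders.
from kubernetes import client, config

config.load_kube_config()  # uses your local kubeconfig, e.g. the one k3s generates

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="pihole"),
    spec=client.V1DeploymentSpec(
        replicas=1,
        selector=client.V1LabelSelector(match_labels={"app": "pihole"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "pihole"}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(
                    name="pihole",
                    image="pihole/pihole:latest",   # the same image Docker was running
                    ports=[client.V1ContainerPort(container_port=80)],
                ),
            ]),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```

In practice most people keep the YAML (or a Helm chart) under version control instead, but the object structure is identical.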
Prometheus and Grafana for production environments
I really enjoy practicing with Datadog, though it gets expensive really quickly and is overkill for my 6-7 hosts, many VMs, and 20-ish containers.
We use it at work, but monitoring isn’t my team’s responsibility so I try to understand how it all fits together by practicing with it at home.
I think Datadog should have a homelabber tier (above the free 5 physical hosts) that allows people to tinker. I honestly think it would net them more customers.
Ahhh, Datadog, the sleazy used-car salesman of the observability market. Seriously, they're hucksters.
Telegraf with the Docker input plugin installed on the host, writing to InfluxDB and displayed in Grafana, both running in Docker containers.
Here is a screenshot of my Server Performance Grafana dashboard.
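If you also want to push your own one-off metrics into the same InfluxDB that Telegraf feeds, here's a hedged sketch using the `influxdb-client` library (InfluxDB 2.x API); the URL, token, org, bucket, and measurement are placeholders, and a 1.x setup would use the older `influxdb` package instead.

```python
# Sketch: write a custom point into InfluxDB 2.x alongside Telegraf's metrics.
# URL, token, org, bucket, and the measurement/fields are placeholders.
from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS

with InfluxDBClient(url="http://influxdb:8086", token="my-token", org="homelab") as db:
    write_api = db.write_api(write_options=SYNCHRONOUS)
    point = (Point("backup_job")
             .tag("host", "nas")
             .field("duration_s", 42.0))
    write_api.write(bucket="telegraf", record=point)
```

The point then shows up in the same Grafana datasource next to the Telegraf measurements.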
Telegraf, InfluxDB and Grafana
Netdata
Does netdata support multiple servers? Can I see statistics for all hosts in a centralized way?
Thank you!
I’m curious which monitoring tool is the easiest to deploy and maintain. I’m looking to deploy a monitoring solution via Docker on an existing server. I wasn’t a fan of Zabbix’s Docker deployment.
vSphere lol
Datadog.
Mostly my own eyes.. /s
I run Dozzle as a container on my host, and for the in-depth stuff I use 'docker stats' on the Docker host's CLI.
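If you ever want those same numbers in a script rather than a terminal, here's a rough sketch with the `docker` Python SDK; the CPU-percent math mirrors what the CLI does, and the stats field names come from the Docker stats API.

```python
# Sketch: a scriptable take on `docker stats`, using the docker SDK (docker-py).
import docker

client = docker.from_env()
for container in client.containers.list():
    s = container.stats(stream=False)  # one-shot stats snapshot as a dict
    cpu_delta = (s["cpu_stats"]["cpu_usage"]["total_usage"]
                 - s["precpu_stats"]["cpu_usage"]["total_usage"])
    sys_delta = (s["cpu_stats"].get("system_cpu_usage", 0)
                 - s["precpu_stats"].get("system_cpu_usage", 0))
    cpus = s["cpu_stats"].get("online_cpus", 1)
    cpu_pct = (cpu_delta / sys_delta) * cpus * 100 if sys_delta > 0 else 0.0
    mem_mib = s["memory_stats"].get("usage", 0) / 1024 / 1024
    print(f"{container.name}: {cpu_pct:.1f}% CPU, {mem_mib:.0f} MiB")
```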
Netdata, works well for me
Home Assistant.
Zabbix and a TIG stack
New Relic
Munin. It’s highly portable and works with many operating systems.
I'm definitely alone but... Checkmk. AMA.
Grafana, VictoriaMetrics (a drop-in replacement for Prometheus with better storage efficiency and an enhanced query language), Loki, plus Telegraf and Promtail for metrics and logs respectively.
Example with provisioned datasources and dashboards here:
https://gitlab.com/homelab_software/monitoring
It’s possible to ship metrics and logs with cAdvisor and other tools.
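Because VictoriaMetrics keeps the Prometheus querying API, existing dashboards and scripts don't care about the swap. A hedged sketch querying it with `requests`; the host/port (8428 is the single-node default) and the Telegraf-style metric name are assumptions for this example.

```python
# Sketch: query VictoriaMetrics via its Prometheus-compatible /api/v1/query endpoint.
# Host/port and the metric name are placeholders for this example.
import requests

VM_URL = "http://victoriametrics:8428"  # single-node default port; adjust to your setup

resp = requests.get(
    f"{VM_URL}/api/v1/query",
    params={"query": 'cpu_usage_idle{cpu="cpu-total"}'},  # assumed Telegraf cpu metric
    timeout=10,
)
resp.raise_for_status()
for series in resp.json()["data"]["result"]:
    timestamp, value = series["value"]
    print(series["metric"].get("host", "?"), value)
```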