[–] ervwalter@alien.top 3 points 11 months ago

The point of purchasing a registered domain name and connecting it to a public DNS server is to make it resolvable from any Internet location. If you only ever want to use the domain name internally, you don't need a public domain name; you can make up your own internal domain name and serve it from your local DNS. To avoid future conflicts with public domains, I'd probably use a TLD that doesn't exist (i.e., not .com or the like).
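
For example, here's a minimal dnsmasq sketch of that idea -- the "home.lan" TLD and the addresses below are made up for illustration:

    # /etc/dnsmasq.conf -- minimal internal-domain sketch (made-up names/IPs)
    domain=home.lan                     # hand the internal domain out via DHCP
    local=/home.lan/                    # never forward home.lan queries upstream
    address=/nas.home.lan/192.168.1.10  # answer nas.home.lan locally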

[–] ervwalter@alien.top 2 points 11 months ago

All software has bugs. Sometimes bugs let you do things you weren't intended to be able to do (e.g. access data on a NAS without knowing the login password). Your NAS might have a bug that hasn't been discovered (or publicized) yet, or that hasn't been fixed yet.

If you put your NAS on the internet, you give "bad guys" an opportunity to exploit those bugs to get your data or to use your NAS as a jumping-off point to attack other things inside your home network.

[–] ervwalter@alien.top 1 point 11 months ago

I accomplish more or less what you are looking for, but it's not an appliance--it's a system of tools that I set up myself and maintain (which I enjoy). But it sounds like you want to avoid doing that.

My solution includes:

  • A small Proxmox cluster so that if any single host dies, VMs can move to another host. This cluster approach is only necessary to protect against hardware failure--if that isn't something you care to protect against initially, you can do all of the rest with a single Proxmox host.
  • On that Proxmox cluster, a few VMs. I run these as VMs because that makes it super easy to snapshot each VM before making experimental changes (i.e. trivial rollback) and super easy to back up each VM to my NAS (again, easy rollback for unplanned problems that get introduced without me noticing right away)
    • A VM for Home Assistant. I prefer Home Assistant OS to running it in a docker container myself. It is easier for me to manage this way.
    • A VM for just Scrypted. Scrypted is easier to deploy if you can run it in host networking mode, which could in theory interfere with other docker containers, so I keep it on an isolated VM. Extra VMs are easy with Proxmox, so there is little downside.
    • A VM running docker where most everything else runs. Docker containers are managed via Portainer using docker-compose files
      • Docker-compose files (called "stacks" in Portainer) live in a private GitHub repo, and when I make changes in GitHub, Portainer pulls them down and updates the running containers with the new compose file (see the compose sketch after this list).
      • I run the Renovate bot on my GitHub repo, which notices when my containers are out of date and creates a pull request with a recommended upgrade. I can either approve those manually or create rules to auto-merge them.
      • Because all the docker-compose files are in a git repo, rolling back after a problematic upgrade is usually trivial (unless the data got converted as part of the upgrade, which might require restoring a VM backup in the worst case)
    • A VM running Ubuntu that I use for development (connected to remotely with Visual Studio Code). This is also the Linux VM I use to launch Ansible playbooks that remotely do things like apt upgrades on the other VMs (HA excluded); see the playbook sketch after this list.
  • One of the containers I run is uptime-kuma, which monitors the general health of all my other services and notifies me via Telegram and email if a VM or container dies or starts to look unhealthy.
  • Another container I run is homepage, a dashboard that lets me get to all my services and also has widgets to surface more health information.

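As a concrete sketch of the compose-file flow above -- the stack below is hypothetical (the paths and pinned tag are illustrative; Renovate just needs a pinned tag to propose version bumps against):

    # stacks/uptime-kuma/docker-compose.yml -- hypothetical stack in the git repo
    services:
      uptime-kuma:
        image: louislam/uptime-kuma:1.23.0  # pinned tag so Renovate can PR upgrades
        restart: unless-stopped
        ports:
          - "3001:3001"                     # uptime-kuma's default web port
        volumes:
          - ./data:/app/data
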
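And a minimal sketch of the Ansible apt-upgrade idea (the inventory group name is made up; run it with ansible-playbook against your own inventory):

    # upgrade.yml -- apt upgrades across the other VMs
    - hosts: homelab_vms
      become: true
      tasks:
        - name: Upgrade all apt packages
          ansible.builtin.apt:
            update_cache: true
            upgrade: dist
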
This is not at all turnkey and took some time to put together, but I find it to be relatively low ongoing maintenance now that it is set up. And I have pretty good high availability and great rollback/recovery support in the event that something goes sideways with an upgrade or some configuration change I make manually.

[–] ervwalter@alien.top 1 point 1 year ago

It depends on your goals of course.

Personally, I use Proxmox on a couple machines for a couple reasons:

  1. It's way, way easier to back up an entire VM than it is to back up a bare-metal physical device. And when you back up a VM, because the VM is "virtual hardware", you can (and I have) restore it to the same machine or to brand-new hardware easily and it will "just work". This is especially useful in the case that hardware dies.
  2. I want high availability. A few things I do in my homelab I personally consider "critical" to my home happiness. They aren't really critical, but I don't want to be without them if I can avoid it. And by having multiple Proxmox hosts, I get automatic failover. If one machine dies or crashes, the VMs automatically start up on the other machine (see the sketch after this list).

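For reference, both of those boil down to a handful of built-in Proxmox commands (the VM ID, storage name, and backup path below are made up):

    # snapshot before an experimental change; roll back if it goes sideways
    qm snapshot 101 pre-upgrade
    qm rollback 101 pre-upgrade

    # back up the whole VM to NAS-backed storage; restore it anywhere later
    vzdump 101 --storage nas-backups --mode snapshot
    qmrestore /mnt/pve/nas-backups/dump/vzdump-qemu-101.vma.zst 101

    # mark the VM as highly available so it restarts on another cluster node
    ha-manager add vm:101
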
Is that overkill? Yes. But I wouldn't say it "doesn't make sense". It makes sense but just isn't necessary.

Fudge topping on ice cream isn't necessary either, but it sure is nice.

[–] ervwalter@alien.top 1 point 1 year ago

I do something similar:

Incoming traffic ---[https traffic]---> reverse proxy ---[https traffic]---> real services (emby, etc).

The traffic from my browser to the reverse proxy is encrypted with TLS certs from Let's Encrypt. Whenever possible (and it usually is), I configure the real services to expose HTTPS endpoints, even if only with self-signed certs. That way the proxy-to-service traffic is also encrypted.
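
As one way to wire that up -- a minimal Caddy sketch (the hostname and backend address are made up; Caddy fetches the Let's Encrypt cert automatically, and other proxies have equivalent knobs):

    # Caddyfile -- TLS on both hops; the backend uses a self-signed cert
    emby.example.com {
        reverse_proxy https://192.168.1.20:8920 {
            transport http {
                tls_insecure_skip_verify   # accept the backend's self-signed cert
            }
        }
    }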