this post was submitted on 24 Sep 2025
150 points (95.2% liked)

Selfhosted


Curious to know what the experiences are for those who are sticking to bare metal. Would like to better understand what keeps such admins from migrating to containers, Docker, Podman, Virtual Machines, etc. What keeps you on bare metal in 2025?

(page 2) 50 comments
[–] sylver_dragon@lemmy.world 8 points 4 days ago (1 children)

I started self-hosting in the days well before containers (early 2000s). Having been through that hell, I'm very happy to have containers.
I like to tinker with new things, and on bare-metal installs that has a way of adding cruft to servers and slowly pushing the system into an unstable state. That's my own fault, but I'm a simple person who likes simple solutions. There are also the classic issues of dependency hell and flat-out incompatible software. While these have gotten much better over the years, isolating applications avoids them completely. It also makes OS and hardware upgrades less likely to break things.

These days, I run everything in containers. My wife and I play games like Valheim together, and I have a Dockerfile template I use to build self-hosted servers in a container. The Dockerfile usually needs just a few tweaks for the AppID, exposed ports, and mount points for save data. That, paired with a docker-compose.yaml (also built from a template), means I usually have a container up and running in fairly short order. The update process could probably be better (I currently just rebuild the image), but it gets the job done.
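A stripped-down version of that kind of template might look like the sketch below. The base image, paths, and start command are illustrative assumptions, not the commenter's actual files; 896660 is Valheim's dedicated-server AppID, and rebuilding the image re-runs the `app_update` step, which matches the "just rebuild to update" workflow:

```dockerfile
# Hypothetical Steam dedicated-server template; only the ARGs usually
# change per game.
FROM steamcmd/steamcmd:latest
ARG APP_ID=896660
ARG GAME_PORT=2456

RUN useradd -m steam
USER steam

# Install (or, on rebuild, update) the dedicated server via steamcmd
RUN steamcmd +force_install_dir /home/steam/server \
    +login anonymous +app_update ${APP_ID} validate +quit

EXPOSE ${GAME_PORT}/udp
VOLUME /home/steam/server/saves
WORKDIR /home/steam/server
CMD ["./valheim_server.x86_64"]
```

The matching docker-compose.yaml would then just publish the UDP port and bind-mount the save directory.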

[–] savvywolf@pawb.social 8 points 4 days ago

I've always done things on bare metal, having started self-hosting before containers were common. I've recently switched my server to NixOS, which also solves the dependency-hell problem that containers are supposed to solve.
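For readers unfamiliar with the NixOS approach: services are declared in the system configuration rather than installed imperatively, so each gets its own dependency closure and a rebuild reproduces the whole machine. A minimal sketch (the specific service modules here are examples, not this commenter's actual config):

```nix
# Hypothetical fragment of /etc/nixos/configuration.nix
{ config, pkgs, ... }:
{
  services.jellyfin.enable = true;     # example service module
  services.postgresql.enable = true;   # each service pins its own deps

  networking.firewall.allowedTCPPorts = [ 8096 5432 ];
}
```

`nixos-rebuild switch` then converges the machine to exactly this state, with rollbacks available from the boot menu.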

[–] kutsyk_alexander@lemmy.world 8 points 4 days ago* (last edited 4 days ago) (7 children)

I use a Raspberry Pi 4 with a 16 GB SD card. I simply don't have the memory or CPU power for 15 separate database containers, one for every service I want to use.


All I have is Minecraft and a Discord bot, so I don't think that justifies VMs.

[–] Strider@lemmy.world 6 points 4 days ago

Erm. I'd just say there's no benefit in adding layers just for the sake of it.

It's just different needs. Say I have a machine I run a dedicated database on: I'd install it directly, because, as said, there's no advantage in making it more complicated.

[–] pedro@lemmy.dbzer0.com 4 points 4 days ago* (last edited 4 days ago) (4 children)

I've not cracked the Docker nut yet. I don't get how to back up my containers and their data. I'd also need to transfer my Plex database into its container while switching from Windows to Linux. I love Linux, but I haven't figured out these two things yet.

[–] hperrin@lemmy.ca 4 points 4 days ago

Anything you want to back up (data directories, media directories, DB data) you'd map with a bind mount to a directory on the host. Then you can back it up just like everything else on the host.
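In compose terms, that bind mount is one line per directory. The service name, image, and paths below are placeholders for illustration:

```yaml
# Hypothetical compose snippet: ./data on the host is bind-mounted into
# the container, so backing up ./data backs up the service's state.
services:
  app:
    image: example/app:latest    # placeholder image
    volumes:
      - ./data:/var/lib/app      # host path : container path
```

Anything the container writes under /var/lib/app lands in ./data on the host, where your normal backup tooling can see it.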

[–] Passerby6497@lemmy.world 3 points 4 days ago* (last edited 4 days ago)

All your Docker data can be saved to a mapped local disk; then backup is the same as it ever was. Throw Borg or something at it and you're gold.

Look into docker compose and volumes to get an idea of where to start.
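For the Borg suggestion above, the basic shape is two commands: one to create an encrypted repository, one to archive the mapped data directory on a schedule. The repository path and data directory here are assumptions for illustration:

```shell
# One-time: create an encrypted repo on the backup disk (path is an example)
borg init --encryption=repokey /mnt/backup/docker-repo

# Recurring: archive the host directory all containers bind-mount into,
# then thin out old archives
borg create --stats /mnt/backup/docker-repo::'{hostname}-{now:%Y-%m-%d}' /srv/docker
borg prune --keep-daily=7 --keep-weekly=4 /mnt/backup/docker-repo
```

Stopping database containers (or using their dump tools) before the `borg create` avoids archiving a DB file mid-write.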

[–] boiledham@lemmy.world 2 points 4 days ago

You would leave your Plex config and DB files on the disk and map them into the container via a volume (the -v parameter if you're using the command line rather than docker-compose). The same goes for any other container where you want to persist data on the drive.
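Concretely, the Windows-to-Linux migration described above might look like this; the host paths are examples, and the official image's convention of keeping its database under /config is what makes the carried-over data visible:

```shell
# Hypothetical migration: the Plex database/config copied over from the
# Windows install lives on the host and is mapped in read-write; the
# media library is mapped read-only.
docker run -d --name plex \
  -p 32400:32400 \
  -v /srv/plex/config:/config \
  -v /srv/media:/media:ro \
  plexinc/pms-docker
```

Upgrading is then just pulling a newer image and recreating the container; the database on the host is untouched.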

[–] kiol@lemmy.world 6 points 4 days ago (1 children)

Are you concerned about your self-hosted bare metal machine being a single point of failure? Or, are you concerned it will be difficult to reproduce?

[–] 30p87@feddit.org 4 points 4 days ago

Considering I have a full backup, all my services are Arch packages, and all important data is on its own drive, I'm not concerned about anything.

[–] SailorFuzz@lemmy.world 2 points 3 days ago (4 children)

Mainly that I don't understand how to use containers... or VMs, that well... I have an old MyCloud NAS and a little pucky PC that I wanted to run simple QoL services on... Home Assistant, Jellyfin, etc...

I got Proxmox installed on it, and I can access it... I don't know what the fuck I'm doing... There was a website that let you just run shell scripts to install a lot of things... but now none of those work because it says my version of Proxmox is wrong (when it's not?)... so those don't work...

And at least VMs are easy(ish) to understand. Fake computer with an OS... easy. I've built PCs before; I get it... Containers just never want to work, or I don't understand wtf to do to make them work.

I wanted to run Zulip or Rocket.Chat for internal messaging around the house (my wife and I both work at home, and our kid does home/virtual school)... I wanted to use a container because a service that simple doesn't feel like it needs a whole VM... but it won't work...

[–] melfie@lemy.lol 5 points 4 days ago* (last edited 4 days ago)

I use k3s and enjoy benefits like the following over bare metal:

  • Configuration as code where my whole setup is version controlled in git
  • Containers and avoiding dependency hell
  • Built-in reverse proxy with the Traefik ingress controller. Combined with DNS in my OpenWrt router, all of my self-hosted apps can be accessed via appname.lan (e.g., jellyfin.lan, forgejo.lan)
  • Declarative network policies with Calico, mainly to make sure nothing phones home
  • Managing secrets securely in git with Bitnami Sealed Secrets
  • Liveness probes that automatically “turn it off and on again” when something goes wrong

These are just some of the benefits, and that's for a single server. Add more servers and the benefits compound.
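The appname.lan routing in the list above is the standard Kubernetes Ingress pattern, handled by the Traefik controller that ships with k3s. A sketch for one app (names, namespace, and port are illustrative, not this commenter's manifests):

```yaml
# Hypothetical Ingress: Traefik routes http://jellyfin.lan to the
# jellyfin Service, assuming the router's DNS points *.lan at the node.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: jellyfin
spec:
  ingressClassName: traefik
  rules:
    - host: jellyfin.lan
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: jellyfin
                port:
                  number: 8096
```

Checking this into git alongside the Deployment is what makes the whole setup reproducible as code.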

Edit:

Sorry, I realize this post is asking why go bare metal, not why k3s and containers are great. 😬

[–] lka1988@lemmy.dbzer0.com 4 points 4 days ago (3 children)

I run my NAS and Home Assistant on bare metal.

  • NAS: OMV on a Mac mini with a separate drive case
  • Home Assistant: HAOS on a Lenovo M710q, since 1) it has a USB Zigbee adapter and 2) HAOS on bare metal is more flexible

Both of those are much easier to manage on bare metal. Everything else runs virtualized on my Proxmox cluster, whether it's Docker stacks on a dedicated VM, an application that I want to run separately in an LXC, or something heavier in its own VM.

[–] hperrin@lemmy.ca 3 points 4 days ago

There’s one thing I’m hosting on bare metal, a WebDAV server. I’m running it on the host because it uses PAM for authentication, and that doesn’t work in a container.

[–] brucethemoose@lemmy.world 4 points 4 days ago* (last edited 4 days ago) (2 children)

In my case it’s performance and sheer RAM need.

GLM 4.5 needs like 112GB RAM and absolutely every megabyte of VRAM from the GPU, at least without the quantization getting too compressed to use. I’m already swapping a tiny bit and simply cannot afford the overhead.

I think containers may slow down CPU<->GPU transfers slightly, but don’t quote me on that.

[–] Kurious84@lemmings.world 3 points 4 days ago

Anything you want dedicated performance for, or that needs fine-tuning for a specific performance use case. They're out there.

[–] Routhinator@startrek.website 2 points 4 days ago

I'm running Kubernetes on bare metal.

[–] frezik@lemmy.blahaj.zone 2 points 4 days ago (1 children)

My file server is also the container/VM host. It does NAS duties while containers/VMs do the other services.

OPNsense is its own box because I prefer to separate it for security reasons.

Pi-hole is on its own RPi because that was easier to set up. I might move that functionality to the AdGuard plugin on OPNsense.

[–] Surp@lemmy.world 3 points 4 days ago (1 children)

What are you doing running your VMs on bare metal? Time is a flat circle.

[–] eleitl@lemmy.zip 2 points 4 days ago

Obviously, you host your own hypervisor on own or rented bare metal.

[–] jaemo@sh.itjust.works 3 points 4 days ago

I generally abstract into Docker anything I don't want to bother with, and just have it work.

If I'm working on something that requires lots of back and forth syncing between host and container, I'll run that on bare metal and have it talk to things in docker.

I.e., working on an app or a website in my language of choice on my framework of choice, while Postgres and Redis live in Docker. Just the app I'm messing with and its direct dependencies run outside.
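That hybrid setup is easy to reproduce: publish the backing services' ports to localhost and point the host-side app at them. Image tags and the throwaway credential below are assumptions for illustration:

```yaml
# Hypothetical dev compose file: the app runs on the host, while its
# backing services run in Docker, reachable only on localhost.
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: devonly   # placeholder, local dev only
    ports:
      - "127.0.0.1:5432:5432"
  redis:
    image: redis:7
    ports:
      - "127.0.0.1:6379:6379"
```

The app on the host then connects to localhost:5432 and localhost:6379 exactly as if the services were installed on bare metal.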
