this post was submitted on 24 Sep 2023
30 points (94.1% liked)

Selfhosted


I'm running Docker on Ubuntu Server with around 50 containers, most of them administered via Portainer. Configuration files and small databases for the container applications are stored on the local SSD; media and larger files are stored on a NAS.

NAS data and the container folders are backed up.

I have a second identical machine doing nothing. What would you recommend researching to add resilience to this setup? Top priority is quick and easy restoration should the SSD fail - everything else is relatively easy to replace.

I'll create an SSD RAID but I like the idea of a second host.

top 17 comments
[–] peter@feddit.uk 15 points 1 year ago (2 children)

You can use docker swarm (or a better container orchestrator) to have the containers automatically fail over to the second host
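For example, a guarded sketch of that setup (the advertise address and the service/image names are hypothetical; the init step prints the actual join token for the second host):

```shell
# Guarded sketch: promote the existing host to a swarm manager, join
# the second host, and redeploy an app as a swarm service. The address
# and the freshrss service/image are hypothetical examples.
if command -v docker >/dev/null 2>&1; then
  # On host 1 (the existing engine; running containers are untouched):
  docker swarm init --advertise-addr 192.168.1.10 || true
  # Prints the `docker swarm join ...` command to run on host 2:
  docker swarm join-token worker || true
  # One replica: swarm restarts it on the surviving node if its
  # current node goes down (its data must be reachable from both).
  docker service create --detach --name freshrss --replicas 1 \
    freshrss/freshrss:latest || true
else
  echo "docker not found; commands shown for reference only"
fi
SWARM_SKETCH=done
```

Note that only workloads redeployed as swarm services (or stacks) gain failover; containers started the old way keep running but stay pinned to their host.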

[–] mhzawadi@lemmy.horwood.cloud 9 points 1 year ago (1 children)

Swarm will also spread the load across both hosts, but all your data would need to be accessible from both hosts

[–] Sim@lemmy.nz 1 points 1 year ago (2 children)

Thanks. That means I'd need to move all data off the hosts onto, say, a NAS, at which point the NAS becomes the single point of failure. Can I run a swarm without doing that, but still duplicate everything from host 1 to host 2, so host 2 could take over relatively seamlessly (apart from local DNS and moving port forwarding to nginx on the remaining host)?

[–] Still@programming.dev 2 points 1 year ago (1 children)

I think you can run Ceph or GlusterFS to share files across the cluster

[–] Mio@feddit.nu 2 points 1 year ago

I think 3 nodes are required for that
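GlusterFS's answer to that three-node requirement is an arbiter: a lightweight third node that stores only metadata for quorum, not a third copy of the data. A guarded sketch (hostnames `host1`, `host2`, `arbiter` and the brick paths are hypothetical):

```shell
# Guarded sketch: a replicated GlusterFS volume across the two Docker
# hosts plus an arbiter node that satisfies quorum without holding a
# full third replica. Hostnames and brick paths are hypothetical.
if command -v gluster >/dev/null 2>&1; then
  gluster peer probe host2 || true
  gluster peer probe arbiter || true
  gluster volume create appdata replica 3 arbiter 1 \
    host1:/bricks/appdata host2:/bricks/appdata arbiter:/bricks/appdata || true
  gluster volume start appdata || true
else
  echo "gluster CLI not found; commands shown for reference only"
fi
GLUSTER_SKETCH=done
```

Each Docker host then mounts the volume (e.g. with `mount -t glusterfs host1:/appdata /mnt/appdata`) so container data survives either host's disk failing.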

[–] mhzawadi@lemmy.horwood.cloud 2 points 1 year ago

Yes, you could sync the two hosts' data, and you can also use both hosts as nginx upstreams.

[–] Sim@lemmy.nz 1 points 1 year ago

Thanks. Can I use my existing single-node Docker install to start a new swarm, or do I have to start from scratch?

[–] eluvatar@programming.dev 8 points 1 year ago (1 children)

Container orchestration is what you're looking for. Kubernetes is the most popular, but it might be overkill; it's hard to say based on your setup. It's definitely useful experience to know how to run it, though.

[–] Sim@lemmy.nz 1 points 1 year ago (1 children)

Thanks. Could I achieve a simple 2-host solution with Kubernetes though?

[–] eluvatar@programming.dev 3 points 1 year ago

Nothing about k8s is simple. But yes you can achieve that.

Take a look at Rancher for actually running a cluster.

[–] mertn@lemmy.world 4 points 1 year ago (1 children)

I put my Docker containers on a mirrored ZFS pool and keep enough spare parts on hand in case of breakdowns.

[–] Sim@lemmy.nz 1 points 1 year ago (1 children)

So you have Docker itself on a single host (with spare parts) and all the containers on fault-tolerant storage, and the most work you'd have to do in the event of a host drive failure is reinstall the OS and Docker itself?

[–] mertn@lemmy.world 2 points 1 year ago

I have the OS (with Docker) mirrored too, so no reinstalling, just swapping disks or other parts in case of a failure. I hope. A motherboard swap is the worst for downtime: I've done one and needed to fiddle with network settings to get the server up again, because the network interface name changed.
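For reference, the data-pool half of that layout looks roughly like the following guarded sketch (pool, dataset, and device names are hypothetical; mirroring the OS disk itself is normally set up at install time):

```shell
# Guarded sketch: a two-disk ZFS mirror for container data, plus a
# dataset to hold the Docker bind mounts. Names are hypothetical.
if command -v zpool >/dev/null 2>&1; then
  zpool create tank mirror /dev/sda /dev/sdb || true
  zfs create tank/appdata || true
  zpool status tank || true   # both disks should show ONLINE
else
  echo "zfs tools not found; commands shown for reference only"
fi
ZFS_SKETCH=done
```

With the mirror in place, either disk can fail and the pool keeps serving data; `zpool replace` swaps in a new disk and resilvers automatically.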

[–] mplewis@lemmy.globe.pub 4 points 1 year ago

Learning K8s is a lot to take on, but it will pay off as your needs expand in the long term — and if you decide to go into infra/ops at work.

[–] Mio@feddit.nu 2 points 1 year ago

It might be enough to rsync everything to the secondary regularly and have the inactive machine monitor the active one, starting all services if the active machine stops responding.
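A minimal sketch of that pattern, meant to run from cron on the standby machine. The host address, paths, and compose file are hypothetical (203.0.113.1 is a reserved documentation address):

```shell
# Single-shot check, e.g. run every minute from cron on the standby.
# If the primary misses the pings, the local stack is brought up.
# Host address, paths, and the compose file are hypothetical.
PRIMARY="${PRIMARY:-203.0.113.1}"

if ping -c 3 -W 2 "$PRIMARY" >/dev/null 2>&1; then
  # Primary is alive: refresh the local copy of its container data.
  rsync -a --delete "$PRIMARY:/opt/containers/" /opt/containers/ || true
  STATE=standby
else
  # Primary unreachable: bring up the local copies of the services.
  # docker compose -f /opt/containers/compose.yml up -d
  STATE=active
fi
echo "watchdog state: $STATE"
```

The catch with this approach is split-brain: if only the network link between the two hosts drops, both machines end up active, so the check should ideally ping something beyond just the primary.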

[–] adam@kbin.pieho.me 1 points 1 year ago

The issue with orchestration is that you still need a way to share those small databases and config files.

Docker has OK NFS support, so you'd want to move the files to NAS shares and have the containers mount those. Without some way to centralise or spread the files out, you won't cover your SSD failure case. Once you've got that going, Docker Swarm will probably cover your needs just fine.

You could go with K8s, but based on your setup that's a bit overkill (unless you're doing it as a learning exercise, in which case go nuts).
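A sketch of that NFS route in a compose/stack file, using the NFS options of Docker's built-in `local` volume driver (the server address, export path, and freshrss service are hypothetical examples):

```yaml
services:
  freshrss:
    image: freshrss/freshrss:latest
    volumes:
      - app_config:/config

volumes:
  app_config:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=192.168.1.20,rw,nfsvers=4"
      device: ":/export/appdata/freshrss"
```

Because the volume lives on the NAS rather than on a host's SSD, any node that can reach the share can mount it, which is what lets swarm reschedule the container onto the other host.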

[–] Decronym@lemmy.decronym.xyz 1 points 1 year ago* (last edited 1 year ago)

Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I've seen in this thread:

Fewer Letters More Letters
DNS Domain Name Service/System
HTTP Hypertext Transfer Protocol, the Web
NAS Network-Attached Storage
k8s Kubernetes container management package
nginx Popular HTTP server

4 acronyms in this thread; the most compressed thread commented on today has 8 acronyms.
