this post was submitted on 01 Nov 2023

Self-Hosted Main


A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.


I am trying to set up a restic job to back up my Docker stacks, and with half of everything owned by root it becomes problematic. I've been wanting to look at Podman so everything isn't owned by root, but for now I want to back up the work I've built.
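For what it's worth, a minimal sketch of such a restic job might look like the following. Every path here (the repo, the password file, the `/opt/docker` source, the exclude pattern) is a placeholder, not something from this thread; running the job as root via cron or a systemd timer also sidesteps the "owned by root" problem, since restic can then read the root-owned files directly.

```shell
#!/bin/sh
# Hedged sketch of a restic backup job for a docker-compose directory.
# All paths below are assumptions for illustration.
export RESTIC_REPOSITORY=/mnt/backup/restic-repo
export RESTIC_PASSWORD_FILE=/root/.restic-pass

if command -v restic >/dev/null 2>&1; then
    # one-time setup would be: restic init
    restic backup /opt/docker --exclude '**/cache' --tag docker-stacks \
        || echo "restic exited nonzero (no repo configured here)" >&2
else
    echo "restic not installed; would back up /opt/docker to $RESTIC_REPOSITORY" >&2
fi
```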

Also, how do you deal with Docker containers that have databases? Do you have to create exports for every container that runs some form of database?
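The usual pattern is a pre-backup dump step per database container, so restic backs up a consistent dump file rather than live database files. A sketch of that step follows; the container name, database, and paths are all made up, and the real dump line (shown in a comment) would use `docker exec` with the database's own dump tool:

```shell
#!/bin/sh
# Sketch of a per-container "dump before backup" step.
# Container, database, and directory names are hypothetical.
set -eu
DUMP_DIR=$(mktemp -d)            # stand-in for a real dump directory on the backup volume
STAMP=$(date +%Y%m%d-%H%M%S)

# In real use this line would be something like:
#   docker exec my-postgres pg_dump -U app appdb > "$DUMP_DIR/appdb-$STAMP.sql"
echo "-- pretend pg_dump output --" > "$DUMP_DIR/appdb-$STAMP.sql"

# prune: keep only the 7 newest dumps so the backup repo doesn't balloon
ls -1t "$DUMP_DIR"/appdb-*.sql | tail -n +8 | xargs -r rm --
echo "wrote $DUMP_DIR/appdb-$STAMP.sql"
```

The restic job then only needs to include the dump directory; it never has to read the database's data files while they are being written.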

I've spent the last few days moving all my Docker containers to a dedicated machine. I was using a mix of NFS and local storage before, but now I am doing everything on local NVMe. My original plan was to keep everything on NFS and handle backups there, and I might go back to that.

[–] root-node@alien.top 1 points 1 year ago (3 children)

For backups I use Nautical Backup.

For the "owned by root" problem, I ensure all my docker compose files have PUID and PGID set to 1000 (the user my docker runs under). All 20 of my containers run like this with no issues.

How are you launching your containers? Docker Compose is the way; I have set the following in all mine:

environment:
  - PUID=1000
  - PGID=1000

user: "1000:1000"
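For context, a fuller sketch of how those settings sit in a compose service (the image and volume paths are placeholders). Note that PUID/PGID are an environment-variable convention honored only by images built to read them, such as the linuxserver.io images, while `user:` is enforced by Docker itself for any image:

```yaml
services:
  app:
    image: nginx:alpine        # placeholder image
    environment:
      - PUID=1000              # honored only by images that read these vars
      - PGID=1000              # (e.g. the linuxserver.io images)
    user: "1000:1000"          # enforced by Docker itself, regardless of image
    volumes:
      - ./config:/etc/nginx    # files the process creates here are owned by 1000:1000
```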
[–] Not_your_guy_buddy42@alien.top 1 points 1 year ago (1 children)

Hey, this is where I am stuck just now: I want to keep the docker volumes, as bind mounts, also on my NAS share. If the containers run as a separate non-root user (say 1001), then I can mount that share as 1001... sounds good, right?

But somebody suggested running each container as its own user. Then I would need lots of differently owned directories. I wonder if I could keep mounting subdirs of the same NAS share as different users, so each container has its own file access? Perhaps that is overkill.
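One way that idea can work without multiple mounts: mount the share once, then give each container its own subdirectory owned by that container's UID. A sketch under made-up UIDs (1001, 1002) and a made-up layout; the `chown` lines, which need root on the real share, are shown as comments:

```shell
#!/bin/sh
# Sketch: one NAS share, one subdirectory per container user.
# UIDs and layout are hypothetical.
set -eu
SHARE=$(mktemp -d)        # stand-in for the real mount point, e.g. /mnt/nas

# each app gets its own dir, closed to everyone else (mode 700)
install -d -m 700 "$SHARE/app1" "$SHARE/app2"

# on the real share (as root) you would then hand each dir to its user:
#   chown 1001:1001 "$SHARE/app1"
#   chown 1002:1002 "$SHARE/app2"
echo "created $SHARE/app1 and $SHARE/app2"
```

Each container then bind-mounts only its own subdirectory, so per-container file access falls out of ordinary Unix permissions.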

(For OP: I've been on a self-hosting binge the past week, trying to work my way toward at least the general direction of best practice. For the container databases I've started using tiredofit/docker-db-backup (it does database dumps), and I also discovered jdfranel's docker backup, which looks great. I save the dumps on a volume mounted from the NAS. The NAS runs btrfs, and there is a folder replication (snapshots) tool. So far, so good.)

[–] root-node@alien.top 1 points 1 year ago

...running each container from their own user...

Ideally this is the perfect option from a security standpoint, as is giving each container its own network too.

In a homelab it's not really required unless you are exposing your network to the internet, or you want to get better at creating/managing containers.

If you are just starting out, just keep everything simple.

[–] human_with_humanity@alien.top 1 points 1 year ago (1 children)

Do you add both the user and env variables, or just one?

[–] root-node@alien.top 1 points 1 year ago

I add both because why not. It doesn't hurt.