this post was submitted on 21 Jul 2023
12 points (83.3% liked)

Selfhosted

A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.


Hi all. I have been running my own Lemmy instance for a while now. I set it up as an experiment, then realized that I liked having my own instance: it makes me (mostly) immune to outages caused by things outside my control, defederation drama, etc. So I decided to stick with having my own instance. But obviously the amount of space it takes keeps growing, ~~and I apparently have zero foresight~~ and I only have so much space on the SSD that I initially put Lemmy on. So I wanted to migrate everything over to my NAS.

I am mounting a volume on my NAS via NFS. I copied over my whole lemmy directory with cp -a, and it appeared that all of the permissions and file ownership copied over properly. However, when I run the containers, the postgres container constantly crashes. The logs alternate between "Permission denied" and "chmod: Operation not permitted" forever. I opened a shell in the container to see what was going on, and I could see that the container's root user could not cd into /var/lib/postgres/data, but the postgres user could.
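One thing worth ruling out first is whether the copy really preserved what you think it did. A quick sanity check of cp -a's mode preservation, on throwaway directories (the paths here are stand-ins, not the real SSD/NAS ones; on the real setup you'd compare the two trees with ls -ln, since numeric UIDs, not names, are what matter across an NFS mount):

```shell
# Illustrative only: verify cp -a carries the mode bits over.
src=$(mktemp -d)
chmod 700 "$src"                  # mimic postgres's 700 data directory
dst=$(mktemp -d)
cp -a "$src" "$dst/copy"          # -a preserves mode, ownership, timestamps
stat -c '%a' "$src" "$dst/copy"   # prints 700 twice if the mode survived
```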

I have no_root_squash set for my NFS share, if that is important, but I doubt it is even relevant, since this is only the root user inside the container. I'm running my Lemmy instance with rootless podman, so root inside the container actually maps to the UID of the user running the podman commands outside the container. That said, when I run this on my local filesystem, my podman user can't access the postgres volume outside the container, yet root inside the container can access it just fine.

I hope this isn't too confusing, and I hope that someone can help me with this. I know it is a very specific setup being rootless podman and trying to run it on an NFS share.

Today is also the first time I have ever tried using NFS; my NAS was always using SMB before, but I needed file ownership to do this. So it's very possible I just need to tweak some NFS settings.

Edit:

I sort of got it working, but it's mega hacky. It's not a permanent solution, but it gives me some insight into what is going wrong.

I set the permissions on the postgres volume on my host to g+rx, and it worked. However, as soon as the container started, it changed the permissions back to 700. The thing is, "root" doesn't actually need access to the directory; the postgres user has access, and that's all that needs it. So this actually works. But if I need to restart the container for any reason, it no longer works, and I would need to set the permissions back to g+rx every time, which is just not a good solution.
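For what it's worth, the workaround can at least be scripted so it doesn't have to be remembered. This is just a sketch of what the edit describes, demonstrated on a throwaway directory; the real volume path and container name would be your own:

```shell
# Stand-in for the real postgres volume under /mnt/nfs_share/...
PGDATA=$(mktemp -d)
chmod 700 "$PGDATA"      # what postgres resets the directory to on startup
chmod g+rx "$PGDATA"     # the fix that has to be reapplied before each start
stat -c '%a' "$PGDATA"   # prints 750: group can traverse again
# podman start <your-postgres-container>   # then (re)start the container
```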

[–] Max_P@lemmy.max-p.me 5 points 1 year ago (2 children)

When using rootless podman, the actual ID being mapped might be something from /etc/sub{uid,gid}. Those are well above the maximum unsigned 16-bit integer typically used, so I'm not sure NFS can even map them.
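To make that mapping concrete: rootless podman maps container UID 0 to the invoking user's own UID, and container UID N (for N ≥ 1) to the start of that user's /etc/subuid range plus N − 1. The numbers below are assumptions for illustration only (a subuid range starting at 100000, and UID 999 for the postgres user inside the image; check your own /etc/subuid and image):

```shell
# Assumed values -- read yours from /etc/subuid and the image's passwd file.
SUBUID_START=100000   # first subordinate UID allotted to the podman user
CONTAINER_UID=999     # e.g. the postgres user in the official image

# Container UID N (N >= 1) appears on the host as SUBUID_START + N - 1,
# which lands far above 65535, the 16-bit ceiling mentioned above.
HOST_UID=$((SUBUID_START + CONTAINER_UID - 1))
echo "container UID ${CONTAINER_UID} -> host UID ${HOST_UID}"
```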

I found this quickly, that probably applies to you: https://www.redhat.com/sysadmin/rootless-podman-nfs

[–] dandroid@dandroid.app 1 points 1 year ago

Oh thank you. I am going to read through this now.

[–] dandroid@dandroid.app 1 points 1 year ago (2 children)

Hmm, I'm not 100% sure this is the scenario I am in. My user's home directory is on the local filesystem, not on the NFS share, so the images are being stored locally. The docker-compose file, the config files, and the volumes are the things that are on the NFS share.

I also think it's worth pointing out that the pictrs container is working fine, and it also uses weird UIDs that are over 100,000.

[–] Max_P@lemmy.max-p.me 2 points 1 year ago

Could be some missing NFS features, then: make sure you're using NFSv4.2, with locking and as many other features as possible enabled. It's a database; it's going to be picky. Maybe it's failing to lock the files.
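Pinning the protocol version is just a mount option. A hypothetical /etc/fstab entry (the hostname and export path are placeholders, not from the thread):

```shell
# Hypothetical /etc/fstab entry; "vers=4.2" pins the protocol version and
# "hard" keeps retrying on server stalls, which is what you want under a
# database rather than silent I/O errors:
#
#   nas.local:/export/podman  /mnt/nfs_share  nfs4  vers=4.2,rw,hard  0  0
#
# After remounting, "nfsstat -m" (from nfs-utils) shows the vers= that was
# actually negotiated for each NFS mount.
```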

[–] fkn@lemmy.world 1 points 1 year ago* (last edited 1 year ago) (1 children)

It's possible that the ownership/group is wrong. Is there a reason you used cp -a instead of rsync -a? The rsync version is a much closer duplicate than the cp version.

Edit: also, if the base folder you are mounting into docker has different permissions, this can happen.

[–] dandroid@dandroid.app 1 points 1 year ago* (last edited 1 year ago)

I did try it with rsync -a and got the same results. :(

Edit: Oh, I just saw your edit. The base folder could be the problem. The folder structure leading up to the problem is /mnt/nfs_share/podman/lemmy/volumes/postgres/. The postgres folder is what is being mounted and where the problem is. The whole lemmy folder is what was copied, so the folder holding the problem folder should have the correct ownership and permissions. But could something upstream, all the way up to the podman folder, cause issues all the way down?
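Parent permissions can do exactly that: a directory missing the execute (search) bit for a given UID blocks access to everything below it, no matter how permissive the leaf is. A small helper (my own, for illustration) prints every component of a path from the leaf up, demonstrated here on a throwaway tree; on the real system you'd point it at /mnt/nfs_share/podman/lemmy/volumes/postgres (namei -l does the same in one command):

```shell
# Print mode and numeric owner:group for each component of a path, from
# the leaf up to the root, to spot a parent that blocks traversal.
check_path() {
  p=$1
  while [ -n "$p" ] && [ "$p" != "/" ]; do
    stat -c '%A %u:%g %n' "$p"
    p=${p%/*}            # strip the last path component
  done
}

mkdir -p /tmp/demo_vols/postgres   # throwaway stand-in for the real tree
check_path /tmp/demo_vols/postgres
```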

podman is also the name of my user that runs the podman commands, and the name of the folder that holds all the stuff belonging to that user. I know, that's confusing. ~~Did I mention that I had zero foresight?~~