Hi All. I have been running my own lemmy instance for a while now. I set it up sort of as an experiment, and then I realized that I liked having my own instance, as it makes me (mostly) immune to outages due to things outside my control, defederation drama, etc. So I decided that I am going to stick with having my own instance. But obviously the amount of space it is taking grows, ~~and I apparently have zero foresight~~ and I only have so much space on the SSD that I initially put lemmy on. So I wanted to migrate everything over to my NAS.
I am mounting a volume on my NAS via NFS. I copied over my whole lemmy directory with `cp -a`, and it appeared that all of the permissions and file ownership copied over properly. However, when I run the containers, the postgres container is constantly crashing. The logs say "Permission denied" and then "chmod operation not permitted" back and forth forever. I opened a shell in the container to see what was going on, and I could see that the container's root user could not `cd` into `/var/lib/postgres/data`, but the postgres user could.
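For what it's worth, here's the kind of sanity check I did to convince myself `cp -a` really preserves ownership and mode bits (just a throwaway temp dir, not my real volume):

```shell
# Copy a restricted directory with cp -a and compare both sides.
src=$(mktemp -d); dst=$(mktemp -d)
mkdir "$src/pgdata" && chmod 700 "$src/pgdata"
cp -a "$src/pgdata" "$dst/"
# Numeric UID, GID, and mode should be identical on source and copy:
stat -c '%u %g %a' "$src/pgdata" "$dst/pgdata"
```

Both lines of output matched for me, so the copy itself wasn't the problem.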
I have no_root_squash set for my NFS share, if that matters, but I doubt it is even relevant, since it is only root *inside the container* that is being denied, not real root on the host. I'm running my lemmy instance with rootless podman, so root inside the container actually maps to the UID of the user running the podman commands outside the container. That said, when I run this on my local filesystem, my podman user can't access the postgres volume from outside the container, but root inside the container can access it just fine.
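For anyone not familiar with how the rootless mapping works, here's the arithmetic with placeholder numbers (check your own `/etc/subuid` entry and the image's postgres UID; 999 is what the official postgres image uses, and the live mapping can be confirmed with `podman unshare cat /proc/self/uid_map`):

```shell
# Hypothetical values -- e.g. an /etc/subuid entry of "myuser:100000:65536"
SUBUID_START=100000
CONTAINER_UID=999   # the postgres user inside the official postgres image

# Rootless podman maps container root (UID 0) to your own host UID, and
# container UID N (N >= 1) to SUBUID_START + N - 1 on the host.
HOST_UID=$((SUBUID_START + CONTAINER_UID - 1))
echo "$HOST_UID"   # 100998
```

So the NFS server sees requests coming from that high host UID, not from root and not from your own user.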
I hope this isn't too confusing, and I hope that someone can help me with this. I know it is a very specific setup being rootless podman and trying to run it on an NFS share.
Today is also the first time I have ever tried using NFS, as my NAS was always shared over SMB before, but I needed file ownership preserved to do this. So it's very possible I just need to tweak some NFS settings.
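For reference, the export on the NAS looks roughly like this (the path and client subnet are placeholders, not my real ones):

```
/volume1/lemmy  192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash)
```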
Edit:
I sort of got it working, but it's mega hacky. It's not a permanent solution, but it gives me some insight into what is going wrong.
I set the permissions on the postgres volume on my host to g+rx, and it worked. However, as soon as the container started, it changed the permissions back to 700. The thing is, "root" doesn't actually need access to the directory; the postgres user has access, and that's all that needs it. So this actually works. But if I need to restart the container for any reason, it no longer works, and I would need to set the permissions to g+rx again every time, which is just not a good solution.
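You can reproduce the permission dance locally without NFS in the picture at all (again just a temp dir, not my real volume):

```shell
# postgres resets its data dir to 700 on startup, which undoes the
# manual g+rx workaround every time the container restarts.
d="$(mktemp -d)/pgdata"
mkdir -p "$d"
chmod 700 "$d"      # what postgres enforces on the data directory
stat -c '%a' "$d"   # prints 700
chmod g+rx "$d"     # the manual workaround
stat -c '%a' "$d"   # prints 750
```

Which is why re-applying g+rx by hand after every restart is the only way this "fix" holds.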
Unfortunately, no. I ended up adding another SSD to my server and am just running it there now.