darmok

joined 1 year ago
[–] darmok@darmok.xyz 1 points 1 year ago

I've noticed something similar on my instance in some cases as well. Nothing obvious is logged as an error either; it just seems like the comment was never sent. In my case CPU usage is minimal, so it doesn't look like a resource issue on the receiving side.

I suspect it may be a resource issue on the sending side, possibly that it can't keep up with the number of subscribers. I know there was some discussion from the devs about needing to increase the number of federation workers to keep up, so that's another possibility.

It's definitely problematic though. I was contemplating implementing some kind of "resync this entire post and all of its comments" operation via the Lemmy API to get things back in sync. But if it is a resource issue on the sending server, I'm also hesitant to add a bunch more API calls to the mix. I think some kind of resync functionality will be necessary in the end.
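Roughly what I had in mind is something like the sketch below, assuming the public /api/v3/post and /api/v3/comment/list endpoints. The instance URL and post id are placeholders, and the part that would actually reconcile the fetched comments with the local database is left out.

import requests

# Instance that "owns" the post (the sending side) and the post's id there.
# Both values are placeholders for illustration.
INSTANCE = "https://example-instance.tld"
POST_ID = 12345

def fetch_post_and_comments(instance, post_id):
    # Fetch the post itself
    post = requests.get(f"{instance}/api/v3/post", params={"id": post_id}).json()

    # Page through every comment on the post
    comments = []
    page = 1
    while True:
        resp = requests.get(
            f"{instance}/api/v3/comment/list",
            params={"post_id": post_id, "limit": 50, "page": page, "sort": "New"},
        ).json()
        batch = resp.get("comments", [])
        if not batch:
            break
        comments.extend(batch)
        page += 1

    return post, comments

post, comments = fetch_post_and_comments(INSTANCE, POST_ID)
print(f"Fetched {len(comments)} comments for post {POST_ID}")

The missing piece is comparing that against what the local instance already has and pulling in whatever never arrived, which is the part I'm not sure there's a clean way to do yet.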

[–] darmok@darmok.xyz 1 points 1 year ago

I was just thinking about doing something like this to migrate some of my communities over (and was even planning on writing it in Python). I just ran it and it worked perfectly. Thank you, this saved me a bunch of time!

[–] darmok@darmok.xyz 3 points 1 year ago (2 children)

I ran into this at one point as well. You can unset the private instance flag manually in the database and restart to get up and running again.

First connect to psql:

docker-compose exec postgres psql -U lemmy

Then run an update to unset the flag:

lemmy=# update local_site set private_instance = false;

That should update exactly 1 row (there's only 1 row in that table on a typical install).
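If you want to sanity check it before restarting, the flag should now read false (f) in the same psql session:

lemmy=# select private_instance from local_site;
 private_instance
------------------
 f
(1 row)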

[–] darmok@darmok.xyz 8 points 1 year ago

I think some of the difficulty right now is on the presentation side. It might not be as noticeable an issue if we had a way to aggregate and view posts from related communities in a single consolidated view. I'm hoping the tooling around this will improve over time.