this post was submitted on 15 Nov 2023
69 points (97.3% liked)


I've had a Lemmy instance running on a VPS with 100 GB of storage for a few months and it has filled up. I've been searching for ways to reduce the amount of storage used but so far I am coming up empty. Can anyone point me in the right direction?

top 35 comments
[–] bdonvr@thelemmy.club 41 points 1 year ago* (last edited 1 year ago) (2 children)

Firstly, move pict-rs to object storage. My instance's pict-rs uses 150GB alone. I pay less than $2/mo to put it on Cloudflare R2. Backblaze B2 might be even cheaper. Instructions: https://crates.io/crates/pict-rs#filesystem-to-object-storage-migration
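For reference, the docker-compose side of it ends up looking roughly like this. Treat it as a sketch: the PICTRS__STORE__* variable names come from the pict-rs docs linked above, and the endpoint, bucket and keys are placeholders you'd swap for your own R2/B2 values (the actual filesystem-to-object-storage migration is a separate one-off step covered in those instructions):

    # docker-compose.yml (snippet): point pict-rs at S3-compatible object storage
    # Variable names follow the pict-rs configuration docs; all values below are placeholders
    pictrs:
      image: asonix/pictrs:0.4
      environment:
        - PICTRS__STORE__TYPE=object_storage
        - PICTRS__STORE__ENDPOINT=https://<account-id>.r2.cloudflarestorage.com
        - PICTRS__STORE__BUCKET_NAME=<your-bucket>
        - PICTRS__STORE__REGION=auto        # R2 uses "auto"; B2 and others want a real region
        - PICTRS__STORE__ACCESS_KEY=<access-key-id>
        - PICTRS__STORE__SECRET_KEY=<secret-access-key>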

If that doesn't help enough and you're comfortable with SQL, you can purge the unnecessary entries in the received_activity table.

Command: delete from received_activity where published < NOW() - INTERVAL '3 days'; (Lemmy mangles the less-than sign when posting, so double-check the command after copying)

Then do a vacuum full received_activity; to reclaim the space.

This deleted 98 million entries for me and reduced my database size from 49GB to 20GB when I started running out of space a week ago. No other effects as far as I can tell. Thanks @illecors@lemmy.cafe

[–] TrinityTek@lemmy.world 6 points 1 year ago

Thank you, that SQL command looks like exactly what I'm after! I'm going to give that a shot. I appreciate the help!

[–] MrRazamataz@lemmy.razbot.xyz 2 points 1 year ago (2 children)

What is stored in received_activity? Anything important? I mean, obviously it's something the instance has received from other instances, but is that data then stored somewhere else too (e.g. comments kept in their own table)?

[–] Die4Ever@programming.dev 3 points 11 months ago* (last edited 11 months ago) (1 children)

in v0.19.0 Lemmy will automatically delete entries over 7 days old

https://github.com/LemmyNet/lemmy/issues/4113

https://github.com/LemmyNet/lemmy/commit/cb01427dcff14b3d88e30220695fc97978786a9a

currently it waits 3 months before deleting

[–] MrRazamataz@lemmy.razbot.xyz 1 points 11 months ago (1 children)
[–] Die4Ever@programming.dev 3 points 11 months ago* (last edited 11 months ago) (1 children)

it's in alpha currently, but you could still run it

I think you might have to use the :dev tag to get this update. It's a bit risky to stay on that tag though; maybe wait for the next Docker image of an alpha release.

[–] MrRazamataz@lemmy.razbot.xyz 2 points 11 months ago (1 children)
[–] Die4Ever@programming.dev 2 points 11 months ago (1 children)

Well now you can use :0.19.0-rc.5 :)

[–] MrRazamataz@lemmy.razbot.xyz 2 points 11 months ago (1 children)
[–] Die4Ever@programming.dev 2 points 11 months ago (1 children)

oh btw 3rd party apps aren't working with 0.19.0 yet, because of changes to the authentication API

[–] MrRazamataz@lemmy.razbot.xyz 1 points 11 months ago

Good to know. I'll check it out.

[–] bdonvr@thelemmy.club 3 points 1 year ago* (last edited 1 year ago) (1 children)

As far as I've been told it's basically just a log of all received activities. Nothing references it.

Nothing seems to have gone wrong in the past week on thelemmy.club since I removed it. I do have backups though.

[–] MrRazamataz@lemmy.razbot.xyz 1 points 11 months ago

Nice, thank you.

[–] scrubbles@poptalk.scrubbles.tech 24 points 1 year ago (3 children)

Instead of using the hard drive for pictrs, I suggest using its S3 capabilities and migrating to bucket-based storage. You'll save way more money and keep the expensive VPS hard drive just doing text and DB things. I think I spend maybe a dollar a month on S3 storage.

[–] elscallr@lemmy.world 4 points 11 months ago

For anyone doing this, set up your spending and budget alerts and actions. It's possible to accidentally fuck something up and end up with an AWS bill that'll suck, but this will give you some measure of protection in case you accidentally misconfigure something.

[–] TrinityTek@lemmy.world 2 points 1 year ago (1 children)

Thanks for the suggestion! As Nix asked, do you happen to know of a guide or any documentation I could reference for this?

[–] hitagi@ani.social 19 points 1 year ago (1 children)

pict-rs has the option to compress images. Ours is set to WebP with a maximum of 1280 pixels on either side.
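If you want to try something similar, the pict-rs settings for it look roughly like this in docker-compose form. Treat the option names as approximate: this assumes pict-rs 0.4-style config (newer versions nest these options differently), so check the configuration reference for the version you're actually running:

    # docker-compose.yml (snippet): have pict-rs re-encode uploads as WebP, capped at 1280px per side
    # Option names assume pict-rs 0.4-style config; verify against your pict-rs version
    pictrs:
      environment:
        - PICTRS__MEDIA__FORMAT=webp
        - PICTRS__MEDIA__MAX_WIDTH=1280
        - PICTRS__MEDIA__MAX_HEIGHT=1280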

[–] TrinityTek@lemmy.world 2 points 1 year ago (1 children)

That sounds like a good idea. Do you know of any documentation for this? I'd like to give it a try.

[–] lemmy@linkopath.com 17 points 1 year ago* (last edited 1 year ago) (2 children)

I ran into the same problem and ended up switching to S3-compatible object storage with Vultr. It's been a while since I did it, but here are the links I used to figure it out. My instance is deployed with Lemmy-Easy-Deploy.

I used a combination of:

https://lemmy.world/post/538280

https://github.com/ubergeek77/Lemmy-Easy-Deploy/blob/main/ADVANCED_CONFIGURATION.md

https://git.asonix.dog/asonix/pict-rs/#user-content-filesystem-to-object-storage-migration

Good luck!

[–] TrinityTek@lemmy.world 2 points 1 year ago

Wow, thank you for these great resources! I will check it out. I really appreciate it!

[–] jeena@jemmy.jeena.net 1 points 1 year ago (1 children)

So I wanted to try it and found that Synology offers 15 GB for free, so I gave it a shot and it was extremely easy to do. Set up a bucket, fill in the ENV variables in docker-compose, restart, and it works. Impressive. I might really need to think about getting a good deal and doing that for my other fediverse stuff like Mastodon, PeerTube and Matrix too; I think it's equally easy there.

[–] jeena@jemmy.jeena.net 1 points 1 year ago* (last edited 1 year ago) (1 children)

Ah, but all my old pictures are not available anymore ...

[–] Die4Ever@programming.dev 12 points 1 year ago* (last edited 1 year ago) (1 children)

in v0.19.0 you could try disabling pictrs caching https://github.com/LemmyNet/lemmy/commit/1d23df37d86cc5cb6d7d9efaaf4360ecc9a9796f

    cache_external_link_previews: false

I don't think that will clear the existing cache though

[–] Dave@lemmy.nz 7 points 1 year ago (1 children)

Isn't this only in the as-yet-unreleased version 0.19?

Asking as someone with a 165GB pictrs cache...

[–] Die4Ever@programming.dev 3 points 1 year ago

yes you're right, also that config flag was renamed to cache_external_link_previews

[–] ramble81@lemm.ee 8 points 1 year ago (1 children)

I love how OP doesn’t say it, but everyone immediately goes “it’s the pictures”

[–] Scrath@feddit.de 11 points 1 year ago

Honestly, what else would it be? Text takes ridiculously little storage compared to a single picture at a decent resolution.

[–] Shadow@lemmy.ca 4 points 1 year ago* (last edited 1 year ago) (1 children)

Are you on the latest version?

Is the space used by pictrs, or your db?

[–] TrinityTek@lemmy.world 1 points 1 year ago (1 children)

Sadly, no. My server has been a bit neglected, but it's been plugging along and working fine for the most part. I need to upgrade, though. And I assume it's pictrs, but to be honest I haven't checked. I just noticed today that it was running poorly, and when I checked, the drive was full.

[–] Shadow@lemmy.ca 1 points 1 year ago

The last release (or maybe the one before) did a bunch of DB cleanup that reduced the DB size by about 20GB.