83 points
Deduplication tool (lemmy.world)
submitted 4 months ago* (last edited 4 months ago) by Agility0971@lemmy.world to c/linux@lemmy.ml
 

I'm in the process of setting up a proper backup solution; however, over the years I've done a few quick-and-dirty copy-pastes of home directories from different systems. Now I have to pay off that technical debt and remove the duplicates. I'm looking for a deduplication tool that will:

  • accept a destination directory
  • delete the source locations after the operation
  • if two files have the same content, delete the redundant copy
  • if files have different content, move one and rename it to avoid a name collision

I tried doing this in Nautilus, but it only compares file names, not file contents. E.g. if two photos have the same content but different names, it will still keep a redundant copy.

Edit: Some comments suggested using duperemove on btrfs. That replaces identical file content with pointers to the same data on disk. This is not what I intend; I want to remove the redundant files completely.

Edit 2: Another quite cool solution is to use hardlinks: replace all occurrences of the same data with a hardlink, then traverse the redundant directories and delete whatever is a link. The remaining files will be unique. I'm not going for this myself as I don't trust myself to write a bug-free implementation.

all 45 comments
[–] fartsparkles@sh.itjust.works 25 points 4 months ago

I don't know about deduping mid-transfer, but these two have been helpful over the years:

[–] fungos@lemmy.eco.br 13 points 4 months ago (1 children)
[–] utopiah@lemmy.ml 2 points 4 months ago

Neat, wasn't aware of it, thanks for sharing.

[–] lemmyvore@feddit.nl 13 points 4 months ago (1 children)

Use Borg Backup. It has built-in deduplication: it works with chunks, not files, and will recognize identical chunks and avoid storing them multiple times. It will deduplicate your files and will find duplicated chunks even in files you didn't know had duplicates. You can keep your files duplicated or clean them out, it doesn't matter; the Borg backups will be optimized either way.
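For anyone new to it, a minimal sketch of that workflow (the repo path is just a placeholder):

    # Create a repo once, then archive into it; identical chunks across
    # archives and across files are stored only once.
    borg init --encryption=repokey /mnt/backup/borg-repo
    borg create --stats /mnt/backup/borg-repo::home-{now} ~/
    # Later runs reuse existing chunks, so duplicated data costs almost nothing.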

[–] FryAndBender@lemmy.world 3 points 4 months ago

Here are the stats from a backup of 1 server with approx. 600 GB:


                   Original size      Compressed size    Deduplicated size
This archive:          592.44 GB            553.58 GB             13.79 MB
All archives:           14.81 TB             13.94 TB            599.58 GB

                   Unique chunks         Total chunks
Chunk index:             2760965             19590945

13 MB... nice

[–] chtk@feddit.nl 12 points 4 months ago (1 children)

jdupes is my go-to solution for file deduplication. It should be able to remove duplicate files. I don't know how much control it gives you over which duplicate to remove though.
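For reference, a typical invocation might look like this (paths are examples; check the man page before deleting anything):

    # List duplicate groups recursively without touching anything
    jdupes -r ~/merged-home
    # Delete duplicates without prompting, keeping the first file in each group
    jdupes -r -d -N ~/merged-home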

[–] lars@lemmy.sdf.org 1 points 4 months ago

It is so so fast

[–] MalReynolds@slrpnk.net 10 points 4 months ago

Be aware that halfway decent backup solutions dedupe. Which is not to say you shouldn't clean your shit up. I vote https://github.com/qarmin/czkawka.

[–] lurch@sh.itjust.works 8 points 4 months ago

Make sure to do a first backup before you use deduplication, just in case it goes sideways.

[–] deadbeef79000@lemmy.nz 3 points 4 months ago (1 children)

I have exactly the same problem.

I got as far as using fdupes to identify duplicates and delete the extras. It was slow.

Thinking about some of the other comments... if you use a tool to create hardlinks first, you could then traverse the entire tree and delete any file that has more than one hardlink. The two phases could be done piecemeal, and both are cancellable and restartable.
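A rough sketch of that second phase, assuming the hardlinking pass has already run (the path is a placeholder):

    # Any regular file in the redundant tree with a link count above 1
    # still exists somewhere else, so it can be removed here.
    find /path/to/redundant-copy -type f -links +1 -delete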

[–] Agility0971@lemmy.world 1 points 4 months ago (1 children)

That sounds doable. I would however not trust myself to code something bug-free on the first go xD

[–] deadbeef79000@lemmy.nz 1 points 4 months ago

Backup backup backup! If you have btrfs then just take a snapshot first: it's instant.

One could do a non-destructive rename first, e.g. prepend deleteme. to the file name, sanity check it, then 'roll back' by renaming without the prefix, or commit and delete anything with the prefix.
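A minimal sketch of that tag-then-commit idea, assuming GNU find and bash (path and prefix are placeholders):

    # Tag instead of delete: prepend "deleteme." to every still-hardlinked file
    find /path/to/redundant-copy -type f -links +1 -print0 |
      while IFS= read -r -d '' f; do
        mv -- "$f" "$(dirname -- "$f")/deleteme.$(basename -- "$f")"
      done

    # After sanity checking, commit by deleting everything that was tagged
    find /path/to/redundant-copy -type f -name 'deleteme.*' -delete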

[–] HumanPerson@sh.itjust.works 3 points 4 months ago (1 children)

I believe ZFS has deduplication built in, if you want a separate backup partition. Not sure about its reliability though. Personally I just have a script that keeps a backup and an oldbackup, and they are both fairly small. I keep a file in my home dir called excluded for things like Linux ISOs that don't need to be backed up.
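That kind of rotation can be just a few lines; a hypothetical sketch using rsync and an ~/excluded pattern file (the directory names here are made up):

    # Keep one previous generation, then refresh the current backup
    rm -rf ~/backups/oldbackup
    mv ~/backups/backup ~/backups/oldbackup
    rsync -a --exclude-from="$HOME/excluded" "$HOME/" ~/backups/backup/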

[–] GenderNeutralBro@lemmy.sdf.org 1 points 4 months ago

BTRFS also supports deduplication, but not automatically. duperemove will do it and you can set it up on a cron task if you want.
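For example (the mount point is a placeholder; -d actually submits the dedupe rather than just scanning):

    # Recursively find and deduplicate identical extents on a btrfs mount
    duperemove -dhr /mnt/btrfs-pool/home
    # Example crontab entry to repeat it weekly:
    # 0 3 * * 0  duperemove -dhr /mnt/btrfs-pool/home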

[–] ninekeysdown@lemmy.world 3 points 4 months ago
[–] Kualk@lemm.ee 3 points 4 months ago (2 children)

hardlink

One of the most underrated tools, and frequently already installed on your system. It recognizes BTRFS. Be aware that there are multiple versions of it in the wild.

It is unattended.

https://www.man7.org/linux/man-pages/man1/hardlink.1.html
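Since the versions differ, it's worth a dry run first; with the util-linux version linked above, something like:

    # Show what would be linked without changing anything
    hardlink --dry-run --verbose ~/merged-home
    # Then do it for real
    hardlink --verbose ~/merged-home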

[–] Tramort@programming.dev 1 points 4 months ago (1 children)

Is hardlink the same as ln without the -s switch?

I tried reading the page but it's not clear

[–] deadbeef79000@lemmy.nz 3 points 4 months ago* (last edited 4 months ago) (1 children)

ln creates a hard link, ln -s creates a symlink.

So, yes, the hardlink tool effectively replaces a file's duplicates with hard links automatically, as if you'd used ln manually.
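In other words:

    ln file.txt hard.txt      # hard link: a second name for the same inode
    ln -s file.txt soft.txt   # symlink: a new file that merely points at the path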

[–] Tramort@programming.dev 2 points 4 months ago

Ahh! Cool! Thanks for the explanation.

[–] Agility0971@lemmy.world 1 points 4 months ago

This will indeed save space, but I don't want links either. I want unique files.

[–] JetpackJackson@feddit.de 2 points 4 months ago (2 children)

Instead of trying to parse the old stuff, could you just run something like borg and then delete the old copypaste backup? Or are there other files there that you need to go through? I ask because I went through a similar thing switching my backups from rsync to borg

[–] Agility0971@lemmy.world 0 points 4 months ago (1 children)

I had multiple systems which at some point were syncing with syncthing, but over time I stopped using my desktop computer and the syncthing setup went unmaintained. I had to remove the SSD from the old desktop, so I yoinked the home directory and saved it onto my laptop. As you can probably tell, a lot of stuff got duplicated and a lot of stuff diverged over time. My idea is to merge everything into my laptop's home directory and only then look at the diverged files manually, as that would be less work. I don't think doing a backup with all my redundant files would be a good idea, as the initial backup would include other backups and a lot of duplicated files.

[–] JetpackJackson@feddit.de 1 points 4 months ago

Ah ok gotcha.

[–] biribiri11@lemmy.ml 2 points 4 months ago (2 children)

As said previously, Borg is a fully deduplicating incremental archiver, complete with compression. You can use relative paths temporarily to build up your backups and a full backup history, then use something like pika to browse the archives to ensure a complete history.

[–] Agility0971@lemmy.world -2 points 4 months ago (2 children)

I did not ask for a backup solution, but for a deduplication tool

[–] biribiri11@lemmy.ml 3 points 4 months ago* (last edited 4 months ago)

Tbf you did start your post with

I’m in the process of starting a proper backup

So you're going to end up with at least a few people talking about how to onboard your existing backups into a proper backup solution (like borg). Your bullet points can probably be organized into a shell script with sync, but why? A proper backup solution with a full backup history is going to be way more useful than dumping all your files into a directory and renaming in case something clobbers. I don't see the point in doing anything other than tarring your old backups and using borg import-tar (docs). It feels like you're trying to go from one half-baked, odd backup solution to another, instead of just going with a full, complete solution.
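A sketch of that import-tar route, with placeholder repo and archive names:

    # Wrap the old copy-paste backup in a tar, then fold it into the borg repo
    tar -cf old-desktop-home.tar -C /path/to/old-home .
    borg import-tar /mnt/backup/borg-repo::old-desktop-home old-desktop-home.tar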

[–] rotopenguin@infosec.pub -2 points 4 months ago* (last edited 4 months ago)

Use rm with the redundant files option.

rm -rf /

[–] ninekeysdown@lemmy.world 2 points 4 months ago
[–] kylian0087@lemmy.dbzer0.com 2 points 4 months ago

Take a look at Borg. It is a very well-suited backup tool that has deduplication.

[–] geoma@lemmy.ml 1 points 4 months ago (1 children)

What about folders? Because sometimes when you have duplicated folders (sometimes with a lot of nested subfolders), a file deduplicator will take forever. Do you know of software that works with duplicate folders?

[–] Agility0971@lemmy.world 1 points 4 months ago (1 children)

What do you mean that a file deduplication will take forever if there are duplicated directories? That the scan will take forever or that manual confirmation will take forever?

[–] geoma@lemmy.ml 1 points 4 months ago

That manual confirmation will take forever

[–] boredsquirrel@slrpnk.net 1 points 4 months ago
[–] possiblylinux127@lemmy.zip 1 points 4 months ago (1 children)

I use rsync and ZFS snapshots

[–] deadbeef79000@lemmy.nz 1 points 4 months ago (1 children)

For backup or for file-level deduplication?

If the latter, how?

[–] slavanap@lemmy.world 2 points 4 months ago* (last edited 4 months ago)

1. rsync allows syncing hardlinks correctly.

2. ZFS has pretty fast block-level deduplication (zfs set dedup=edonr,verify) with a block size of 1 MB (zfs set recordsize=1M).

3. In reality I tried to achieve a proper data structure, but it was way too time-consuming and left me no time for anything else, so I settled on ZFS as a history backtrack where I can roll back if I accidentally delete something important, getting all its aforementioned benefits.
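Put together, the setup described reads roughly like this (pool/dataset names are placeholders):

    zfs set dedup=edonr,verify tank/backups   # block-level dedup on the backup dataset
    zfs set recordsize=1M tank/backups        # 1 MB blocks
    rsync -aH ~/ /tank/backups/home/          # -H keeps hard links intact
    zfs snapshot tank/backups@$(date +%F)     # cheap point-in-time history to roll back to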

[–] utopiah@lemmy.ml 0 points 4 months ago (3 children)

I don't actually know, but I bet that's relatively costly, so I would at least try to be mindful of efficiency, e.g.:

  • use find to start only with large files, e.g. > 1 GB (depends on your own threshold)
  • look for a "cheap" way to find duplicates, e.g. exact same size (far from perfect, yet I bet it is sufficient in most cases)

then, after trying a couple of times:

  • find a "better" way to detect duplicates, e.g. SHA1 (quite expensive)
  • lower the threshold to include more files, e.g. > 0.1 GB

and possibly heuristics, e.g.:

  • directories where all filenames are identical, maybe based on locate/updatedb, which is most likely already indexing your entire filesystem

Why do I suggest all this rather than a tool? Because I bet a lot of decisions have to be made manually.
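A rough sketch of the first two passes, assuming GNU find and coreutils (paths and thresholds are examples):

    # Pass 1: list files over 1 GiB with their sizes so equal sizes sort together
    find ~/merged-home -type f -size +1G -printf '%s\t%p\n' | sort -n > candidates.tsv

    # Pass 2: hash only those candidates and print groups with identical SHA-1
    cut -f2 candidates.tsv | xargs -d '\n' sha1sum | sort | uniq -w40 --all-repeated=separate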

[–] utopiah@lemmy.ml 3 points 4 months ago (1 children)

fclones https://github.com/pkolaczk/fclones looks great, but I haven't used it so I can't vouch for it.

[–] paris@lemmy.blahaj.zone 2 points 4 months ago* (last edited 4 months ago)

I was using Radarr/Sonarr to download files via qBittorrent and then hardlink them to an organized directory for Jellyfin, but I set up my container volume mappings incorrectly and it was only copying the files over, not hardlinking them. When I realized this, I fixed the volume mappings and ended up using fclones to deduplicate the existing files and it was amazing. It did exactly what I needed it to and it did it fast. Highly recommend fclones.

I've used it on Windows as well, but I've had much more trouble there since I like to write the output to a file first to double check it before catting the information back into fclones to actually deduplicate the files it found. I think running everything as admin works but I don't remember.
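That review-first workflow looks roughly like this (path is a placeholder):

    fclones group ~/media > dupes.txt    # find duplicate groups and save the report
    less dupes.txt                       # sanity-check before acting on it
    fclones remove < dupes.txt           # drop redundant copies listed in the report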

[–] utopiah@lemmy.ml 1 points 4 months ago

If you use rmlint as others suggested, here is how to check the paths of the dupes:

jq -c '.[] | select(.type == "duplicate_file").path' rmlint.json

[–] utopiah@lemmy.ml 1 points 4 months ago* (last edited 4 months ago)

FWIW I just did a quick test with rmlint, and as a user I would definitely not trust an automated tool to remove files on my filesystem. If it's for a proper data filesystem, basically a database, sure, but otherwise there is plenty of legitimate duplication, e.g. ./node_modules, so the risk of breaking things is relatively high. IMHO it's better to learn why there are duplicates on a case-by-case basis, but again I don't know your specific use case, so maybe it'd fit.

PS: I imagine it'd be good for a content library, e.g ebooks, ROMs, movies, etc.

[–] BCsven@lemmy.ca 0 points 4 months ago

FSlint will do some of these things once you configure its actions.