this post was submitted on 19 Oct 2025
      42 points (95.7% liked)
      Programming
    you are viewing a single comment's thread
Yes, but decompression speed is even more important, no? My internet connection gets 40 MB/s and my SSD 500+ MB/s, so if my decompressor runs at under 40 MB/s it's slowing down my updates / boot time, and it would be better to use a weaker compressor.
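If you want to see where your own machine lands, the zstd CLI has a built-in benchmark mode - a quick sketch, assuming any representative file (here called sample.tar):

```sh
# Benchmark zstd levels 3 through 19 on a sample file; the output reports
# both compression and decompression speed in MB/s per level, which you can
# compare against your ~40 MB/s download speed.
zstd -b3 -e19 sample.tar
```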
Arch has used it since 2021 for kernel images (https://archlinux.org/news/moving-to-zstandard-images-by-default-on-mkinitcpio/) and since 2019 for packages (https://lists.archlinux.org/pipermail/arch-dev-public/2019-December/029739.html).
Brotli is mainly good because it has a huge built-in dictionary that includes common HTTP headers and HTML structures, so those don't need to be part of the compressed file. I would assume (without testing) that zstd would win against brotli more clearly if you trained a similar dictionary for it, or just fed a random WARC file to --patch-from.

Cloudflare started supporting zstd and has used it as the default since 2024 (https://blog.cloudflare.com/new-standards/), citing compression speed as the main reason (since it compresses on the fly). It shipped in Chrome in 2024: https://chromestatus.com/feature/6186023867908096
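If anyone wants to try that themselves, here's a rough sketch with the zstd CLI (file names are made up; assumes you have a pile of saved HTML responses to train on):

```sh
# Train a dictionary from sample web pages, then use it for (de)compression.
zstd --train samples/*.html -o web.dict
zstd -D web.dict page.html -o page.html.zst
zstd -d -D web.dict page.html.zst -o page.out.html

# Or skip training and use an existing reference file directly, which is
# what --patch-from does (here a WARC file stands in for the reference;
# decompressing later needs the same --patch-from=reference.warc).
zstd --patch-from=reference.warc page.html -o page-patched.zst
```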
The RFC mentions dictionaries, but they are not currently used for the zstd content-encoding.
And there is a proposed standard for how zstd dictionaries could be served from a domain https://datatracker.ietf.org/doc/rfc9842/
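For reference, the negotiation in that RFC looks roughly like this (header names are from the RFC; the hash is a placeholder, and dcb/dcz are the dictionary-aware brotli/zstd content codings):

```sh
# 1. A response opts in to being reused as a dictionary for matching URLs:
#      Use-As-Dictionary: match="/app/*"
# 2. The client later advertises the stored dictionary's hash plus the
#    dictionary-aware codings it can decode:
curl -H 'Available-Dictionary: :placeholder-sha256-base64=:' \
     -H 'Accept-Encoding: zstd, br, dcb, dcz' \
     https://example.com/app/main.js
# 3. If the server recognizes the hash, it can reply with
#    "Content-Encoding: dcz", i.e. zstd compressed against that dictionary.
```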
Let me revise that statement: it's better in every metric (compression speed, compressed size, feature set, and most importantly decompression speed) than every other compressor I'm aware of - apart from xz and bz2, and potentially other non-LZ compressors, on best compression ratio. And I'm not sure whether it beats lzo/lz4 at the very fast levels (negative numbers on zstd).
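The negative levels are exposed on the CLI as --fast=N, so that comparison is easy to run yourself (assumes zstd and lz4 are installed; sample.tar is any test file):

```sh
# zstd's negative compression levels trade ratio for speed:
time zstd --fast=3 -c sample.tar > /dev/null
time lz4 -1 -c sample.tar > /dev/null
```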
What struck me as weird was that you were kind of calling it AI hype crap, when they are developing this for their own use and publishing it (not to make money). I'm assuming that based on how much work they put into open-sourcing the zstd format and how deeply it is now used in FOSS projects that don't care at all about Facebook. The format they are introducing uses explicitly structured data descriptions to guide the compressor - a structure that can be generated from a struct or class definition, and yes, potentially much more easily by an LLM, but I don't think that is hooey. So I assumed you had no idea what you were talking about.
Your Cloudflare post literally says "a new compression algorithm that we have found compresses data 42% faster than Brotli while maintaining almost the same compression levels." Yes, I get that in some circumstances where compression speed is important, this might be very useful. I don't see the point in talking in circles any further - thank you for the information.
Like I said, brotli contains a large built-in dictionary for web content / HTTP, which means you can't compare it directly to other compressors on web content. The reason they do a comparison like that is that hardcoded dictionaries are not part of the zstd content-encoding, because baking one in is iffy.
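It's easy to see for yourself - a minimal sketch, assuming the brotli and zstd CLIs are installed; on a tiny, typical HTML file, brotli usually wins because most of the substrings are already in its hardcoded dictionary:

```sh
printf '<!DOCTYPE html><html><head><meta charset="utf-8"></head><body></body></html>' > tiny.html
# Compare compressed sizes; brotli's built-in web dictionary typically
# gives it the edge on small web content like this.
brotli -c tiny.html | wc -c
zstd -19 -c tiny.html | wc -c
```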