One thing I'd like to see from image formats and libraries is better support for very high resolution images. Like, images where you're zooming into and out of a very large, high-resolution image and probably only looking at a small part of the image at any given point.
I was playing around with some high-resolution images a while back, and I was quite surprised by how poor the situation is. Try viewing a very high resolution PNG in your favorite image-viewing program, and it'll probably choke.
At least on Linux, the standard native image viewers don't do a great job here, and as best I can tell, the norm is to use web-based viewers. These work around poor image-format support for high resolutions by generating versions of the image at multiple pre-scaled levels and then slicing each level into tiles, saving each tile as a separate image, so that a web browser just pulls down a handful of appropriate tiles from a web server. Viewers and library APIs need to be able to work with an image without having to decode the whole thing.
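To make the pre-scaled-levels-plus-tiles scheme concrete, here's a minimal sketch in Python using Pillow. The tile size and file layout are made up for illustration; real web viewers follow conventions like Deep Zoom or IIIF:

```python
# Sketch: build a tile pyramid from a large image. Level 0 is full
# resolution; each subsequent level is half the size of the previous one.
import os
from PIL import Image

# Let Pillow open very large images (disables its decompression-bomb check).
Image.MAX_IMAGE_PIXELS = None

TILE = 256  # illustrative tile edge length, in pixels

def build_pyramid(src_path, out_dir):
    img = Image.open(src_path)
    level = 0
    while True:
        os.makedirs(os.path.join(out_dir, str(level)), exist_ok=True)
        # Slice the current level into fixed-size tiles.
        for y in range(0, img.height, TILE):
            for x in range(0, img.width, TILE):
                tile = img.crop((x, y, min(x + TILE, img.width),
                                 min(y + TILE, img.height)))
                tile.save(os.path.join(out_dir, str(level), f"{x}_{y}.png"))
        if img.width <= TILE and img.height <= TILE:
            break
        # Halve the image for the next (coarser) level.
        img = img.resize((max(1, img.width // 2), max(1, img.height // 2)))
        level += 1
```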
gliv used to do very smooth GPU-accelerated panning and zooming. I'd like to be able to do the same for very high-resolution images, decoding and loading visible data into video memory as required.
I'd guess that better parallel encoding and decoding support goes hand in hand with solving this, since limiting the portion of the image that needs to be decoded is probably a prerequisite both for parallel decoding and for efficient high-resolution processing.
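To illustrate that connection: once an image is stored as independently compressed tiles, each tile can be decoded on its own thread. A rough sketch, assuming tiles laid out as in the pyramid example above:

```python
# Sketch: decode independently compressed tiles in parallel.
# Paths follow the made-up layout from the earlier pyramid example.
from concurrent.futures import ThreadPoolExecutor
from PIL import Image

def decode_tile(path):
    with Image.open(path) as tile:
        tile.load()              # force the actual decode work
        return path, tile.size

# The four level-0 tiles covering the top-left 512x512 region.
tile_paths = [f"pyramid/0/{x}_{y}.png" for y in (0, 256) for x in (0, 256)]

with ThreadPoolExecutor() as pool:
    for path, size in pool.map(decode_tile, tile_paths):
        print(path, size)
```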
JPEG2000 has been able to do exactly what you want for decades.
There is a reason why TIFF is one of the most popular formats for raster geographic datasets :)
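As a sketch of what that looks like in practice: rasterio exposes windowed reads on (Geo)TIFFs, so only the blocks overlapping the window you ask for get fetched and decompressed. The file name here is a placeholder:

```python
# Read just one window of a large GeoTIFF; only the tiles/strips
# overlapping the window are decoded. 'elevation.tif' is a placeholder.
import rasterio
from rasterio.windows import Window

with rasterio.open("elevation.tif") as src:
    # A 512x512 window from band 1, starting at column 4096, row 8192.
    block = src.read(1, window=Window(4096, 8192, 512, 512))
    print(block.shape, src.crs)
```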
Yeah, I have a couple of PNGs over 800 MB that only GIMP will open properly. I need to look into pyramidal TIFFs.
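In case it helps: pyvips (libvips) can convert a huge PNG into a tiled, pyramidal TIFF without holding the whole image in memory. A sketch with placeholder file names:

```python
# Stream-convert a large PNG into a tiled, pyramidal TIFF with pyvips.
import pyvips

image = pyvips.Image.new_from_file("huge.png", access="sequential")
image.tiffsave(
    "huge_pyramid.tif",
    tile=True,             # compress in rectangular tiles, not strips
    pyramid=True,          # embed pre-scaled reduced-resolution levels
    compression="deflate",
    tile_width=256,
    tile_height=256,
)
```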
Again, that would be TIFF. TIFF images can be encoded either with each line compressed separately or with rectangular tiles compressed separately, and separately compressed blocks can be read and decompressed in parallel. I have some >100GiB TIFFs containing elevation maps for entire countries, and my very old laptop can happily zoom and pan around in them with virtually no delay.
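For the read side, tifffile's zarr interface is one way to pull an arbitrary window out of a huge tiled TIFF so that only the overlapping tiles get decompressed. A sketch with a placeholder file name, assuming tifffile plus the zarr 2-style API:

```python
# Random access into a huge tiled TIFF: slicing the zarr view decodes
# only the tiles overlapping the requested window.
# 'country_dem.tif' is a placeholder file name.
import tifffile
import zarr

with tifffile.TiffFile("country_dem.tif") as tif:
    store = tif.pages[0].aszarr()       # lazy; one chunk = one TIFF tile
    z = zarr.open(store, mode="r")
    window = z[8192:8704, 4096:4608]    # materializes only these tiles

print(window.shape)
```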