Where the design community meets.
Founder at Optimage Joined over 5 years ago
PNGyu uses pngquant internally, so the results should be comparable to ImageOptim in lossy mode.
It's unfortunate, though, that some other tools get ahead of ImageOptim due to aggressive configuration. But that comes at an even greater cost in visual quality.
I made the Clean History plugin for this. It uses the official Versions API and has an option to automatically remove document versions on close. Free and open source.
Optimage is different on many levels.
Lossless compression. PNG is about 3-10% smaller on average. The best test I could find is https://css-ig.net/png-tools-overview. Total savings: Optimage 1 787 523 B vs ZopfliPNG (used in ImageOptim) 1 507 775 B. That's about 3%. In the reduction subset it's about 7% on top of ImageOptim. JPEG and GIF are about the same, although individual GIFs can be smaller. Optimage also supports SVG, PDF, ICO and ICNS out of the box; ImageOptim requires Node.js to be installed for SVGO.
Lossy compression. Optimage, at least for my own projects, is good enough for automatic lossy compression on arbitrary images. That means gradients won't be broken by color quantization, and JPEGs won't end up overly compressed just because the average error is low (which happens with many other tools). This is where the numbers deviate the most. My bar is high here: not JPEGmini and TinyPNG, but pngquant, Zopfli, Guetzli and beyond.
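The "average error is low" trap can be sketched in a few lines. This is purely illustrative (an assumption, not Optimage's actual algorithm): accept a lossy result only if the worst local error stays small, which catches banding in gradients even when the mean error looks fine.

```python
# Toy sketch of an automatic lossy/lossless decision (NOT Optimage's code):
# gate on the maximum error, not just the average error.

def quantize(pixels, levels):
    """Posterize 8-bit values to `levels` evenly spaced values."""
    step = 255 / (levels - 1)
    return [round(round(p / step) * step) for p in pixels]

def accept_lossy(pixels, levels, max_err=8):
    q = quantize(pixels, levels)
    avg = sum(abs(a - b) for a, b in zip(pixels, q)) / len(pixels)
    worst = max(abs(a - b) for a, b in zip(pixels, q))
    # The average alone looks harmless in both cases; the max-error
    # guard is what rejects visible banding.
    return worst <= max_err, avg, worst

gradient = list(range(256))  # a smooth ramp, the worst case for banding
ok_mild, avg_mild, worst_mild = accept_lossy(gradient, 64)
ok_hard, avg_hard, worst_hard = accept_lossy(gradient, 8)
print(ok_mild, ok_hard)  # mild quantization accepted, harsh rejected
```

A real implementation would of course use a perceptual error metric rather than raw pixel differences, but the shape of the decision is the same.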
App. Pausing, file renaming, a configurable destination folder, auto-scaling parallel processing (image compression is slow, but it should not stand in the way of other activities), etc.
Things like color management, conversion to sRGB (only doing it if the attached profile is actually different), auto rotation for photos using Orientation tags (it's lossless in lossless mode), etc.
States eliminate repeating work. They naturally expand to artboards and allow exportable symbols with per-resolution adjustments.
I even implemented the latter in the plugin for Photoshop a while back. But that alone would not fix it.
It is time-consuming: a single image takes seconds. You need all the performance you can get.
Type trials, palette sorting, filter brute forcing, nearly-optimal Deflate compression, etc. It's way more than 6 lines of code to do it properly.
With bit-optimal parsing and elaborate heuristics you can probably get close and keep top performance.
The 23.3 kB is 254 colors. The 20.4 kB is just 25 colors, and just 15.1 kB (-26.1%) losslessly compressed with a better tool. That's the difference.
Thanks! Do you have a link?
Edit: Found it. It’s all straight to the point! For anyone interested here's the link.
Special thanks for including test images. Let's see if I can improve on that one.
"hard problem to solve in the browser" - what do you mean by that?
Just check out code complexity in the linked projects.
I don't think that tools that you mentioned can do a better job than UPNG.
I've updated the comment with the link to image. Can you make it this small without changing the number of colors?
Dithering may improve the visual quality, but it also usually increases the file size (by creating noise), which is why I avoid it.
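The size effect is easy to demonstrate with a toy 1-D version (an illustration only, not UPNG's or pngquant's code): error diffusion spreads quantization error as high-frequency noise, which Deflate compresses worse than clean banding.

```python
import zlib

def posterize(pixels, step=32, dither=False):
    """Quantize 8-bit values to multiples of `step`, optionally carrying
    the error to the next pixel (a 1-D error-diffusion sketch)."""
    out, err = [], 0.0
    for p in pixels:
        v = p + (err if dither else 0)
        q = max(0, min(255, round(v / step) * step))
        err = v - q  # diffuse the quantization error forward
        out.append(q)
    return bytes(out)

gradient = bytes(i % 256 for i in range(4096))  # repeated smooth ramp
plain = zlib.compress(posterize(gradient, dither=False), 9)
dithered = zlib.compress(posterize(gradient, dither=True), 9)
print(len(plain), len(dithered))  # compare the two stream sizes
```

On smooth gradients the undithered stream is long runs of identical bytes, so it compresses far better; the dithered one trades that away for smoother-looking output.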
It's not for everything. But without it, images containing gradients have noticeable quality degradation. Dithering can also be applied selectively.
Image compression is a hard problem to solve in the browser. UPNG minifier appears to be winning just by selecting fewer colors into an image palette without any dithering. This may result in a smaller file size but visual quality suffers a lot, and there’s still a lot to optimize.
Color quantization is just the beginning. Have a look at pngquant (used by TinyPNG), OptiPNG, libdeflate, Zopfli, ECT, etc. This is what it takes to properly compress PNG images. There's room for improvement too. That's why I'm making a new image optimization tool, Optimage. One challenge is to actually remove any sliders and choose compression parameters automatically.
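The "no sliders" idea can be sketched with zlib alone (an assumption-level illustration, not Optimage's implementation): brute-force the encoder's parameters and keep whichever output is smallest, so the user never picks a setting.

```python
import zlib

def deflate(data, level, strategy):
    c = zlib.compressobj(level, zlib.DEFLATED, zlib.MAX_WBITS,
                         zlib.DEF_MEM_LEVEL, strategy)
    return c.compress(data) + c.flush()

STRATEGIES = (zlib.Z_DEFAULT_STRATEGY, zlib.Z_FILTERED, zlib.Z_RLE)

def smallest_deflate(data):
    """Try every level/strategy combination; return the smallest stream."""
    return min(
        (deflate(data, level, s) for level in range(1, 10) for s in STRATEGIES),
        key=len,
    )

sample = bytes(range(256)) * 64
best = smallest_deflate(sample)
```

Real tools extend the same idea across far more dimensions (filters, palettes, block splits), which is exactly why proper optimization is slow.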
For comparison, the bunny.png (57.9 KB) can be further reduced by 58.7%. Even at lower quality it can be losslessly reduced by a good 10-30% with the right tools. The same applies to virtually any online service.
I do agree that PDF is not the best format for vector graphics. Vector drawables give more freedom, e.g. adaptive icons. I guess it was a bet on proven and widely used technology. Also, PostScript/PDF is deeply integrated into Core Graphics.
The difference in rendering is mainly in anti-aliasing, which is superior in Core Graphics, at least for flat shapes. In my tests even gradients become a problem. It is fixable, though.