Battle of the JPEG decoders

Is knusperli really a better JPEG decoder? Google claims so, but can it be objectively demonstrated with easily understood metrics? And what about jpeg2png, a utility which takes the same idea as knusperli but a different approach to estimating the unknown coefficients: better, or worse? Or jpegqs, yet another program? Four programs for decoding JPEG, each using a distinct but related method for reconstructing the original image: which will produce the most faithful recreation of the original? It's a battle royale of image decoders!

The metric used in these tests is mean absolute error: simple, but effective enough for comparing the decoders. The test data consists of a set of illustrations and a set of photos, none of which have ever before been subjected to lossy compression. The comparison graph is easy to read: it records the fidelity of each program's output against the original, measured as mean absolute error in RGB space, across the full range of JPEG quality levels, with the test files encoded by imagemagick.

This is not a perfect measure: it bears little relation to the subjective experience of viewing an image. It does, though, serve as a rough basis for comparison.
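The scoring itself is straightforward. A minimal sketch of the error metric described above, assuming the original and decoded images have already been loaded as HxWx3 uint8 numpy arrays (e.g. via Pillow; the function name is my own):

```python
import numpy as np

def mean_absolute_error(original, decoded):
    """Mean absolute error between two RGB images.

    Both arguments are HxWx3 uint8 arrays; the result is the mean of the
    per-channel absolute differences, so 0.0 means a pixel-perfect match."""
    a = original.astype(np.float64)
    b = decoded.astype(np.float64)
    if a.shape != b.shape:
        raise ValueError("images must have identical dimensions")
    return float(np.abs(a - b).mean())
```

Running this for each decoder's output at each JPEG quality level yields one curve per decoder.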


A graph comparing the four decoders on the illustration set.

The first attempt fails, but a quick look at the testing log shows the problem:

convert-im6.q16: Corrupt JPEG data: premature end of data segment `tempjpeg2.jpg' @ warning/jpeg.c/JPEGWarningHandler/386.
convert-im6.q16: Corrupt JPEG data: premature end of data segment `tempjpeg2.jpg' @ warning/jpeg.c/JPEGWarningHandler/386.
convert-im6.q16: Corrupt JPEG data: premature end of data segment `tempjpeg2.jpg' @ warning/jpeg.c/JPEGWarningHandler/386.
convert-im6.q16: Corrupt JPEG data: premature end of data segment `tempjpeg2.jpg' @ warning/jpeg.c/JPEGWarningHandler/386.

There's a problem with jpegqs: it works well, when it works, but it's prone to failing at times for unclear reasons. Worse, when it fails it still generates output, but corrupted output: something to be cautious of.

Correct that by interpolating the missing data points, and some clear features can be observed. As expected, imagemagick does the worst: as a baseline JPEG decoder, it makes no attempt to intelligently estimate the information lost in quantising the DCT coefficients. Surprisingly though, knusperli comes in a consistent third: it barely outperforms imagemagick. Better might be expected from a program backed by Google, and yet both jpeg2png and jpegqs (when it works) perform better. Between those two, the winner is too close to call.

The 'metadecoder' performs best of all: it averages knusperli, jpeg2png and jpegqs in the hope that their combined output may be superior to any of them alone. It is, though it is arguable whether the difference is enough to justify the effort - especially in light of jpegqs's occasional corrupting of images.
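The averaging step of this 'metadecoder' can be sketched as follows (a minimal sketch, assuming each decoder's output has been loaded as an HxWx3 uint8 array; the function name is my own):

```python
import numpy as np

def metadecode(decoded_images):
    """Average several decoders' outputs pixel-wise.

    decoded_images: a list of HxWx3 uint8 arrays, one per decoder, all of
    identical dimensions. The mean is taken in float to avoid overflow,
    then rounded and clipped back to the uint8 range."""
    stack = np.stack([img.astype(np.float64) for img in decoded_images])
    averaged = stack.mean(axis=0)
    return np.clip(averaged.round(), 0, 255).astype(np.uint8)
```

Weighted averages or per-region selection might do better still, but a plain mean is what was tested here.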


A graph comparing the four decoders on the photo set.

How about photos? There were more problems with files not processing - this time jpeg2png failing on images of large dimensions, with an error of 'jpeg2png: jpeg invalid coef w size'. Clearly these programs should be used with proper error handling. Correct that, and you get a very similar graph. The differences are slight, though once again imagemagick scores worst, if only barely. Once again the 'metadecoder' comes top, followed by jpeg2png.
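A minimal wrapper along these lines would catch the clean failures, though notably not jpegqs's silent corruption, since that exits successfully (a sketch; the jpeg2png invocation shown in the docstring is illustrative, not checked against the real CLI):

```python
import subprocess

def decode_safely(cmd):
    """Run a decoder command, treating a nonzero exit status as failure.

    cmd is an argument list, e.g. ["jpeg2png", "in.jpg", "-o", "out.png"]
    (flags illustrative). Returns (succeeded, stderr_text) so the caller
    can log the decoder's complaint and skip or interpolate that data point."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode != 0:
        return False, result.stderr.strip()
    return True, result.stderr.strip()
```

An exit-status check is enough for jpeg2png, which fails cleanly; detecting jpegqs's valid-but-mangled output would require comparing the image content itself.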


Firstly, it is clear that intelligently estimating the precision lost from the quantised DCT coefficients does work: imagemagick is the only program tested that makes no such attempt, and it produces the worst score for both photos and illustrations at every quality setting. The three programs that do use intelligent estimation each take a different approach, but no one of them is clearly and consistently superior. For photographs, the differences in score are so slight as to be barely noticeable. For illustrations, the 'winner' depends on the image being processed, though both jpegqs and jpeg2png show a consistent, if slight, edge over knusperli.

These numbers also suggest there is yet scope for improvement: simply averaging the three programs' outputs can produce a result that is arguably superior to any of them alone. Perhaps grounds for a future project.

Jpegqs should not be used at all. There are times when it corrupts an image silently. Most of these failures could be detected, but one case encountered during these tests could not: the output file was perfectly valid, but the image within it was a mangled mess. As the program cannot be trusted, it should not be used until this bug is fixed. This is unfortunate, as jpegqs otherwise outperforms all others on illustrations.

Jpeg2png also fails on large images, but this is a less severe problem: it exits cleanly, reports the error, and produces no output file.

There is one way in which knusperli is superior: it never failed to work as intended. Otherwise, it is less accurate at reconstructing the original image than either of its two rivals.