# Battle of the JPEG decoders
JPEG is a lossy image codec; the loss comes from quantisation of DCT coefficients. Because quantisation discards precision, any JPEG file could have been produced from a very large set of source images, all of which are equally valid solutions to the equation that is JPEG, with no single correct choice among them. A typical JPEG decoder simply picks the middle of each coefficient's possible range.
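The midpoint heuristic can be sketched in a few lines. This is an illustration of the idea, not code from any of the decoders discussed; the function names and the example quantisation step are mine.

```python
def coeff_interval(k, q):
    # All original DCT coefficient values that round to quantised index k
    # with quantisation step q lie in the interval [q*(k-0.5), q*(k+0.5)].
    return (q * (k - 0.5), q * (k + 0.5))

def naive_dequant(k, q):
    # A plain decoder reconstructs the midpoint of that interval: simply q*k.
    return q * k

lo, hi = coeff_interval(3, 16)
print(lo, hi, naive_dequant(3, 16))  # prints: 40.0 56.0 48
```

A smarter decoder is free to pick any value inside that interval, ideally one that makes the decoded image more plausible.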
Picking the midpoint is only a heuristic, though. With a better heuristic it is possible to decode a JPEG file in a way that is likely to be closer to the intended original image. Three programs attempt to do this.
The best-tested of these, knusperli, originated as an internal project within Google. Its rivals are jpegqs and jpeg2png. But how do they really stack up, and how do they compare to a naïve decoder? Here I test them all, in the great battle royale of JPEG decoders.
The test data consists of a set of illustrations and a set of photos, none of which have ever before been subjected to lossy compression. The comparison graph is easy to read: it records the fidelity of each program's output relative to the original, across the full range of JPEG quality levels as produced by imagemagick, measured by mean absolute error in RGB space. A basic metric, but effective. It is not a perfect measure: it bears little resemblance to the subjective experience of viewing an image. It does, though, serve as a rough point of comparison.
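For concreteness, mean absolute error over RGB values can be computed as below. This is a minimal sketch of the metric, not the exact script used for the tests; the tiny two-pixel "images" are invented for illustration.

```python
def mean_absolute_error(a, b):
    # MAE over two flat sequences of RGB values (0-255); lower is better.
    assert len(a) == len(b)
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

# Two tiny "images", each two pixels, as flat R,G,B sequences.
original = [255, 0, 0,  0, 255, 0]
decoded  = [250, 3, 0,  4, 255, 2]
print(mean_absolute_error(original, decoded))
```

In practice the per-pixel values would come from decoding the original and the JPEG round-trip to raw RGB first.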
## Illustrations
=> illu_1.svg A graph comparing the four decoders.
The first attempt fails, but a quick look at the testing log shows the problem:
```
5,10.0574345877572,9.53529166869773,9.47017332570166,9.73976228800171
convert-im6.q16: Corrupt JPEG data: premature end of data segment `tempjpeg2.jpg' @ warning/jpeg.c/JPEGWarningHandler/386.
convert-im6.q16: Corrupt JPEG data: premature end of data segment `tempjpeg2.jpg' @ warning/jpeg.c/JPEGWarningHandler/386.
10,5.76795429061573,5.49035856973362,5.09245728169581,31.0333659469523
15,5.19145182557439,4.71494841266756,4.46769966310163,4.46022287510738
20,4.52086173103067,4.40141509353086,3.97840772048307,4.03713815597479
convert-im6.q16: Corrupt JPEG data: premature end of data segment `tempjpeg2.jpg' @ warning/jpeg.c/JPEGWarningHandler/386.
convert-im6.q16: Corrupt JPEG data: premature end of data segment `tempjpeg2.jpg' @ warning/jpeg.c/JPEGWarningHandler/386.
```
The files reading as corrupt are all produced by jpegqs. There's a problem: it works well when it works, but it is prone to failing at times, for unclear reasons. Worse, it still generates output, just corrupted output: something to be cautious of. This is a serious problem! Using jpegqs without verifying its output could lead to corruption and loss of data.
Correct for that by interpolating the missing numbers, and some clear features emerge. As expected, imagemagick does the worst: as a basic JPEG decoder, it makes no attempt to intelligently estimate the information lost in quantising the DCT coefficients. Surprisingly, though, knusperli consistently comes in third, barely outperforming imagemagick. Better might be expected from a program backed by Google, yet both jpeg2png and jpegqs (when it works) perform better. Between those two, the winner is too close to call.
The 'metadecoder' is an average of the knusperli, jpeg2png and jpegqs outputs, in the hope that the combination may be superior to any of them alone. It is not.
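The 'metadecoder' is nothing more exotic than a per-pixel average of the three outputs. A minimal sketch, using invented three-pixel outputs rather than real decoded data:

```python
def metadecode(outputs):
    # Average per-pixel values across several decoders' outputs,
    # each given as a flat sequence of equal length.
    n = len(outputs)
    return [sum(px) / n for px in zip(*outputs)]

knusperli = [10, 20, 30]
jpeg2png  = [12, 18, 33]
jpegqs    = [11, 19, 30]
print(metadecode([knusperli, jpeg2png, jpegqs]))  # prints: [11.0, 19.0, 31.0]
```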
## Photos
=> photo_1.svg A graph comparing the four decoders.
How about photos? There were more problems with files failing to process, this time jpeg2png failing on images of large dimensions with the error 'jpeg2png: jpeg invalid coef w size'. Clearly these programs should be used with proper error handling. Correct for that, and the graph looks very similar. The differences are slight, though once again imagemagick scores worst, if only barely. This time the 'metadecoder' comes top, followed by jpeg2png.
## Conclusions
Firstly, intelligent estimation of the precision lost in quantising the DCT coefficients clearly pays off: imagemagick, the only program tested that makes no such attempt, produces the worst score for both photos and illustrations at every quality setting. The three programs that do use intelligent estimation each take a different approach, but no one of them is clearly superior every time. For photographs, the differences in score are so slight as to be barely noticeable. For illustrations, the 'winner' depends on the image being processed, though both jpegqs and jpeg2png show a consistent, if slight, advantage over knusperli.
These numbers also suggest there is scope for improvement: simply averaging the three programs' outputs can produce a result that is arguably superior to any of them alone, if only for photos. Perhaps grounds for a future project.
Jpegqs should not be used at all. At times it corrupts an image silently. Most of these failures could be detected, but one case encountered during these tests could not: the file it output was perfectly valid, but the image within was a mangled mess. As the program cannot be trusted, it should not be used until this bug is fixed. This is unfortunate, as jpegqs otherwise outperforms all others on illustrations.
Jpeg2png also fails on large images, but this is a less severe problem: it exits cleanly, reports the error, and produces no output file, so the failure is easily detected.
There is one way in which knusperli is superior: it never failed to work as intended, processing every test file correctly. Otherwise it is less accurate at reconstructing the original image than either of its two rivals, though still superior to imagemagick.