The first pages of a new section on graphics programming in Java include some information on the performance of the different types (formats) of BufferedImage.
As you will be aware if you have used this class, BufferedImage supports a range of internal formats, specified by a constant at the time the image object is created. It is sometimes not obvious which format to choose among apparently similar options. For example, TYPE_3BYTE_BGR and TYPE_INT_RGB are functionally similar, as are TYPE_4BYTE_ABGR and TYPE_INT_ARGB. But is there any performance difference between packing a pixel into a single int and storing separate bytes per component? And how much of a performance hit is it to include an alpha (transparency) component?
Or perhaps it is better to opt for one of the USHORT types, which store a pixel in only two bytes, thus requiring less data throughput and presumably offering higher performance?
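As a quick way to see how these functionally similar constants differ under the hood, the following sketch (class name and layout are my own, not from the article's tests) creates a one-pixel image of each type and reports the bits per pixel its colour model actually uses:

```java
import java.awt.image.BufferedImage;

public class ImageFormats {

    // How many bits does one pixel occupy in an image of the given type?
    public static int bitsPerPixel(int type) {
        BufferedImage img = new BufferedImage(1, 1, type);
        return img.getColorModel().getPixelSize();
    }

    public static void main(String[] args) {
        System.out.println("TYPE_INT_RGB:        "
                + bitsPerPixel(BufferedImage.TYPE_INT_RGB));
        System.out.println("TYPE_3BYTE_BGR:      "
                + bitsPerPixel(BufferedImage.TYPE_3BYTE_BGR));
        System.out.println("TYPE_INT_ARGB:       "
                + bitsPerPixel(BufferedImage.TYPE_INT_ARGB));
        System.out.println("TYPE_4BYTE_ABGR:     "
                + bitsPerPixel(BufferedImage.TYPE_4BYTE_ABGR));
        // The two-byte-per-pixel option mentioned above:
        System.out.println("TYPE_USHORT_565_RGB: "
                + bitsPerPixel(BufferedImage.TYPE_USHORT_565_RGB));
    }
}
```

Note that the reported bit depth reflects the colour model, not necessarily how the pixels are laid out in memory: the INT types pack each pixel into one element of an int array, while the BYTE types use separate array elements per component.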
As an example, results of some actual performance tests of BufferedImage.setRGB() are given. Integer storage turns out to perform better overall than byte-by-byte storage, as might be expected. But on the test system, a perhaps surprising finding is that, combined with integer storage, including a transparency component actually increased performance, presumably because that combination is closest to the native image format used on the system. Despite the throughput argument, the 1- and 2-byte-per-pixel formats performed poorly. The moral of the story: measurement is as important as common-sense assumptions!
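A test along these lines can be reproduced with a simple harness such as the one below (a rough sketch of my own, not the article's actual benchmark; absolute numbers will vary by machine and JVM, and a serious measurement would use a framework such as JMH rather than hand-timed loops):

```java
import java.awt.image.BufferedImage;

public class SetRgbBenchmark {
    static final int W = 512, H = 512;

    // Fill every pixel via setRGB() and return the elapsed time in nanoseconds.
    static long timeFill(BufferedImage img) {
        long start = System.nanoTime();
        for (int y = 0; y < img.getHeight(); y++) {
            for (int x = 0; x < img.getWidth(); x++) {
                img.setRGB(x, y, 0xFF123456);
            }
        }
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        int[] types = {
            BufferedImage.TYPE_INT_RGB, BufferedImage.TYPE_INT_ARGB,
            BufferedImage.TYPE_3BYTE_BGR, BufferedImage.TYPE_4BYTE_ABGR,
            BufferedImage.TYPE_USHORT_565_RGB
        };
        String[] names = {
            "INT_RGB", "INT_ARGB", "3BYTE_BGR", "4BYTE_ABGR", "USHORT_565_RGB"
        };
        for (int i = 0; i < types.length; i++) {
            BufferedImage img = new BufferedImage(W, H, types[i]);
            timeFill(img);                 // warm-up pass for the JIT
            long ns = timeFill(img);
            System.out.printf("%-15s %8.2f ms%n", names[i], ns / 1e6);
        }
    }
}
```

Even this crude version illustrates the point of the tests: the ranking of formats on any given machine is hard to predict from first principles, so it is worth measuring on the actual target system.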
Comments/discussion about BufferedImage are welcome here or on the corresponding page of the Javamex forums.