The first pages of a new section on graphics programming in Java include some information on the performance of different types (formats) of BufferedImage.
As you will be aware if you have used this class, BufferedImage provides a range of different internal formats, specified by a constant at the time of creating the image object. Sometimes it can be non-obvious which format to opt for among apparently functionally similar choices. For example, TYPE_3BYTE_BGR vs TYPE_INT_RGB are functionally similar, as are TYPE_INT_ARGB vs TYPE_4BYTE_ABGR. But is there any performance difference between using an int or separate bytes per component? And how much of a performance hit is it to include an alpha (transparency) component?
Or perhaps it is better to opt for one of the USHORT types, allowing storage in only 2 bytes per pixel, thus requiring less data throughput and presumably giving higher performance?
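By way of orientation, the format is chosen once, at construction time, by passing one of the TYPE_ constants; the image's ColorModel can then be queried to see, for example, whether an alpha component is actually stored. A minimal sketch (the image dimensions here are arbitrary):

import java.awt.image.BufferedImage;

public class ImageFormatExample {
    public static void main(String[] args) {
        // The internal format is fixed at construction time via the type constant.
        BufferedImage intRgb   = new BufferedImage(640, 480, BufferedImage.TYPE_INT_RGB);
        BufferedImage intArgb  = new BufferedImage(640, 480, BufferedImage.TYPE_INT_ARGB);
        BufferedImage bgr3Byte = new BufferedImage(640, 480, BufferedImage.TYPE_3BYTE_BGR);
        BufferedImage rgb565   = new BufferedImage(640, 480, BufferedImage.TYPE_USHORT_565_RGB);

        // The ColorModel tells us whether the chosen format stores alpha.
        System.out.println(intRgb.getColorModel().hasAlpha());   // false
        System.out.println(intArgb.getColorModel().hasAlpha());  // true
    }
}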
As an example, some actual performance tests of BufferedImage.setRGB() are given. Integer storage is shown to perform better overall than byte-by-byte storage, as might be expected. But on the test system, a perhaps surprising finding is that, when combined with integer storage, including a transparency component actually increased performance, presumably because this combination is closest to the native image format used on that system. Despite the throughput argument, 1- and 2-byte-per-pixel formats performed poorly. The moral of the story is that measurement is as important as common-sense assumptions!
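The actual benchmark code isn't reproduced on this page, but a rough sketch of the kind of measurement involved is given below. The image size, pass count and list of formats are illustrative assumptions rather than the real test parameters, and serious measurements would also want proper JVM warm-up and repeated runs:

import java.awt.image.BufferedImage;

public class SetRGBBenchmark {

    // Arbitrary image size and pass count, chosen purely for illustration.
    private static final int W = 1024, H = 1024, PASSES = 20;

    public static void main(String[] args) {
        int[] types = {
            BufferedImage.TYPE_INT_RGB, BufferedImage.TYPE_INT_ARGB,
            BufferedImage.TYPE_3BYTE_BGR, BufferedImage.TYPE_4BYTE_ABGR,
            BufferedImage.TYPE_USHORT_565_RGB, BufferedImage.TYPE_BYTE_GRAY
        };
        for (int type : types) {
            BufferedImage img = new BufferedImage(W, H, type);
            long start = System.nanoTime();
            for (int pass = 0; pass < PASSES; pass++) {
                for (int y = 0; y < H; y++) {
                    for (int x = 0; x < W; x++) {
                        // setRGB() always takes a packed ARGB int; the cost of converting
                        // it to the image's internal format is part of what we are timing.
                        int argb = 0xFF000000 | ((x & 0xFF) << 16) | ((y & 0xFF) << 8) | (pass & 0xFF);
                        img.setRGB(x, y, argb);
                    }
                }
            }
            long ms = (System.nanoTime() - start) / 1000000;
            System.out.println("Image type " + type + ": " + ms + " ms");
        }
    }
}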
Comments/discussion about BufferedImage are welcome here or in the corresponding page of the Javamex forums.
The Javamex companion blog. This blog includes both technical articles relating to the programming information that you'll find on the Javamex site and information covering the IT industry more generally.
Wednesday, May 2, 2012
iPad advertising trouble again...
Apple continues to come under pressure over the advertising of the iPad 3. This time, it's the turn of the UK's Advertising Standards Authority to press Apple into removing references to "4G".
Monday, April 16, 2012
New Javamex forum
The old Javamex forum is shortly to be retired. A new Javamex forum has now been set up to take its place. From now on, if you wish to ask a question about Java or the Javamex web site, please do so on the new forum.
Note that you will need to re-register on the new forum.
Labels: Java answers, Java forum, Java QA, Java question, Javamex forum
New article: file system notifications
A new article has been added on file system notifications. The article explains how your Java app can ask the underlying O/S to notify it of modifications to files in particular directories, e.g. for monitoring log files, watching for files created by an external process, or keeping track of files opened by your application.
The article also looks at some of the limitations and pitfalls of using Java's WatchService API.
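As a taster, the core of the WatchService API looks roughly like the sketch below, which watches a single directory for created and modified files. The "logs" directory name is just an example, and note that the watch is not recursive; subdirectories need their own registrations:

import java.io.IOException;
import java.nio.file.*;

public class DirWatchExample {
    public static void main(String[] args) throws IOException, InterruptedException {
        Path dir = Paths.get("logs");   // directory to monitor (hypothetical)
        WatchService watcher = FileSystems.getDefault().newWatchService();

        // Register for creation and modification events in this directory.
        dir.register(watcher,
                StandardWatchEventKinds.ENTRY_CREATE,
                StandardWatchEventKinds.ENTRY_MODIFY);

        while (true) {
            WatchKey key = watcher.take();          // blocks until events are available
            for (WatchEvent<?> event : key.pollEvents()) {
                if (event.kind() == StandardWatchEventKinds.OVERFLOW) {
                    continue;                        // some events may have been lost
                }
                Path changed = (Path) event.context();
                System.out.println(event.kind().name() + ": " + dir.resolve(changed));
            }
            if (!key.reset()) {                      // directory no longer accessible
                break;
            }
        }
    }
}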
Saturday, February 25, 2012
Win a £20 iTunes voucher
To celebrate the launch of the new LetterMeister game published on the Javamex site earlier this week, enter the LetterMeister hi score competition to have the chance of winning a £20 iTunes gift certificate!
Wednesday, February 15, 2012
If we were to "fix" the Internet today, would we get it right?
Prof Alan Woodward of Surrey University presents an interesting viewpoint today on the state of our current Internet infrastructure. Practically all of the present "security" features were shoehorned in on top of an infrastructure that was never really designed with security in mind. With the benefit of hindsight, maybe what we need is simply a new infrastructure, designed from the ground up to meet our current needs and patterns of use, be that in terms of security or other features.
On the other hand, security isn't the only feature absent from basic Internet infrastructure because it was not thought of in the 1970s. It is probably for similar historical reasons that the Internet crosses many political boundaries that some of our current governments appear to wish it didn't cross.
So if we were to re-design the Internet today, some questions arise:
- the infrastructure that we have today met the needs and capabilities of the 1970s; how would we guarantee that a new infrastructure invented today wouldn't simply reflect the needs and capabilities of 2012? In 20 years' time, would there be a similar conversation ("well, you see, quantum decryption wasn't a real threat back in the 2010s")?
- what would the political pressures be on an Internet infrastructure invented in 2012? How many back doors into the security features would governments try to force into the specification? How much pressure would there be for the application of content filters and bandwidth allocation to reflect the degree of bribery (sorry, "funding") provided by such-and-such corporation to the political parties involved in legislating the infrastructure?
We should also be careful not to mask political failure as being a purely technological problem. On some level, identity theft and other cybercrimes occur both because our technology permits it and because, one way or another, our political structures still leave the risk-benefit tradeoff stacked in favour of the criminals in question.
Wednesday, January 25, 2012
Symantec advise users to disable pcAnywhere
According to a white paper released by Symantec, the source code to various of its products that hacker group Anonymous recently threatened to disclose was stolen in 2006, and users are advised to disable pcAnywhere until further notice. Specifically, the paper states:
"pcAnywhere is a product that allows for direct PC to PC communication and this does expose some risk if the compromised code is actually released."
This seems to imply that pcAnywhere is based on security through obscurity. (Presumably, the same security risk actually exists, albeit to a lesser extent, whether or not somebody releases the source code: whatever information is in the source code is in principle available by reverse engineering the compiled code.)
To me, the event underlines at least two lessons:
- this is precisely what may happen if you rely on security through obscurity
- if you have some security-sensitive source code stolen, "right now" would be a good time to review the stolen code, rather than 6 years later...
Or is there something about the content of the white paper and the incident in general that I'm misunderstanding?
"pcAnywhere is a product that allows for direct PC to PC communication and this does expose some risk if the compromised code is actually released."
This seems to imply that pcAnywhere is based on security through obscurity. (Presumably, the same security risk actually exists, albeit to a lesser extent, whether or not somebody releases the source code: whatever information is in the source code is in principle available by reverse engineering the compiled code.)
To me, the event underlines at least two lessons:
- this is precisely what may happen if you rely on security through obscurity
- if you have some security-sensitive source code stolen, "right now" would be a good time to review the stolen code, rather than 6 years later...
Or is there something about the content of the white paper and the incident in general that I'm misunderstanding?