The need for performance metrics and comparison
Since we released AW 1.0, and more generally with every release of any AOP / interceptor framework (AspectJ, JBoss AOP, Spring AOP, CGLIB, etc.), the same questions are raised: "what is the performance cost of such an approach?" and "how much do I lose per method invocation when an advice / interceptor is applied?".
This is indeed an issue that needs to be carefully addressed, and one that has in fact shaped the design of every sufficiently mature framework of this kind.
We are probably all wary of the cost of java.lang.reflect despite its relative power, and usually, even before evaluating semantic robustness and general ease of use - which are cornerstones of the AOP landscape - we start running some hello-world benchmarks.
We started AWbench for exactly that purpose: to offer a single place to measure the relative performance of AOP / interceptor frameworks, and even to run the measurements on your own.