
The need for performance metrics and comparison

Since we released AW 1.0, and more generally for every release of any AOP / interceptor framework (AspectJ, JBoss AOP, Spring AOP, Cglib, etc.), the same questions are always raised: "what is the performance cost of such an approach?" and "how much do I lose per method invocation when an advice / interceptor is applied?".

This is indeed an issue that needs to be carefully addressed, and one that has in fact affected the design of every sufficiently mature framework.

We are probably all wary of the cost of java.lang.reflect despite its relative power, and usually, even before starting to evaluate semantics, robustness, and general ease of use, we start with some "hello world" benchmarks.

We started AWbench for that purpose: to offer a single place to measure the relative performance of AOP / interceptor frameworks, and even to measure it on your own.

Beyond providing a performance comparison, AWbench is a good place to see the semantic differences and the ease of use of each framework. A simple "lines of code" metric could be provided as well (source + external files such as XML + Ant glue).

Introducing awbench


AWbench is a micro-benchmark suite that aims to stay simple. The test application is very simple, and AWbench is mainly the glue around the test application that applies one or more very simple advice / interceptors from the framework of your choice.

AWbench comes with an Ant script that allows you to run it on your own box, and to contribute improvements if you know of any for a particular framework.

What (what are the AOP constructs benched)

So far, AWbench includes only method execution pointcuts, since call side pointcuts are not supported by proxy-based frameworks (Spring AOP, Cglib, dynaop, etc.).

The awbench.method.Execution class is the test application, and it contains one method per construct to bench. An important point is that bytecode-based AOP may provide much better performance for before and after advice, as well as much better performance when it comes to accessing contextual information.
Indeed, proxy-based frameworks are very likely to use reflection to give the user access to intercepted method parameters at runtime from within an advice, while bytecode-based AOP may use more advanced constructs to provide access at the speed of a statically compiled call.
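The cost of a reflective call is easy to feel in isolation. The following standalone sketch (not part of AWbench; the class and method names are made up for illustration) times a direct, statically compiled call against the equivalent java.lang.reflect call, which also pays for boxing the int argument and return value:

```java
import java.lang.reflect.Method;

public class ReflectCost {
    public int inc(int i) { return i + 1; }

    public static void main(String[] args) throws Exception {
        ReflectCost target = new ReflectCost();
        Method m = ReflectCost.class.getMethod("inc", int.class);
        int n = 1_000_000;

        long t0 = System.nanoTime();
        int a = 0;
        for (int i = 0; i < n; i++) a = target.inc(a);                 // statically compiled call
        long direct = System.nanoTime() - t0;

        t0 = System.nanoTime();
        int b = 0;
        for (int i = 0; i < n; i++) b = (Integer) m.invoke(target, b); // reflective call + boxing
        long reflective = System.nanoTime() - t0;

        System.out.println("direct:     " + direct / 1000 + " us");
        System.out.println("reflective: " + reflective / 1000 + " us");
    }
}
```

The absolute numbers depend heavily on the JVM and warm-up, which is exactly why AWbench measures each framework's advice dispatch under the same conditions instead of relying on such an isolated figure.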

The current scope is thus, for method execution pointcuts:

  • before advice (no contextual information access)
  • before advice, static information (method signature etc.)
  • before advice, contextual information accessed reflectively
  • before advice, contextual information accessed with explicit framework capabilities (only supported by AspectJ and AspectWerkz 2.x)
  • before + after advice
  • around advice, optimized (AspectJ and AspectWerkz 2.x provide specific optimizations: thisJoinPointStaticPart vs thisJoinPoint)
  • around advice, non-optimized
  • 2 around advice, contextual information access

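The optimized vs non-optimized around advice distinction can be illustrated with a toy model (this is not the AspectJ API, just a sketch of the idea): the static part of a join point describes only the signature and can be created once at weave time, while a full join point also carries runtime state and must be allocated, with boxed arguments, on every invocation.

```java
public class JoinPointModel {
    // Static part: signature information only, created once per join point shadow.
    static final class StaticPart {
        final String signature;
        StaticPart(String signature) { this.signature = signature; }
    }

    // Full join point: also carries runtime state, so it must be allocated
    // (with boxed arguments) on every single invocation.
    static final class JoinPoint {
        final StaticPart staticPart;
        final Object target;
        final Object[] args;
        JoinPoint(StaticPart staticPart, Object target, Object[] args) {
            this.staticPart = staticPart;
            this.target = target;
            this.args = args;
        }
    }

    static final StaticPart STEP = new StaticPart("int Execution.step(int)");

    // thisJoinPointStaticPart-style advice: reuses the shared StaticPart, no allocation.
    static int stepOptimized(int i) {
        String sig = STEP.signature; // static info is available for free
        return i + 1;
    }

    // thisJoinPoint-style advice: one JoinPoint and one boxed argument per call.
    static int stepUnoptimized(Object target, int i) {
        JoinPoint jp = new JoinPoint(STEP, target, new Object[] { i });
        return (Integer) jp.args[0] + 1;
    }

    public static void main(String[] args) {
        System.out.println(stepOptimized(41));
        System.out.println(stepUnoptimized(new Object(), 41));
    }
}
```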

By accessing contextual information we mean:

  • accessing a method parameter using its real type (i.e. unboxing might be needed when a proxy-based approach is used)
  • accessing the advised instance using its real type (i.e. casting might be needed when a proxy-based approach is used)

A pseudo code block is thus likely to be:
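The original code block is missing from this version of the page; a minimal, runnable sketch of what such an advice looks like in a proxy-based approach is shown below, using a plain java.lang.reflect.Proxy in place of a real framework (the Service / ServiceImpl names are made up for illustration). Note the unboxing of the parameter and the cast of the advised instance:

```java
import java.lang.reflect.Proxy;

public class ContextAccess {
    public interface Service { int step(int i); }

    public static class ServiceImpl implements Service {
        public int step(int i) { return i + 1; }
    }

    // Wraps the target in a dynamic proxy whose handler plays the role of a
    // "before" advice that needs contextual information.
    public static Service advised(ServiceImpl target) {
        return (Service) Proxy.newProxyInstance(
                Service.class.getClassLoader(),
                new Class<?>[] { Service.class },
                (proxy, method, args) -> {
                    // the parameter arrives boxed as an Object: unboxing is needed
                    int i = (Integer) args[0];
                    // the advised instance is only known as Object: casting is needed
                    ServiceImpl real = (ServiceImpl) target;
                    if (i < 0) throw new IllegalArgumentException("before advice: " + i);
                    return method.invoke(real, args);
                });
    }

    public static void main(String[] args) {
        System.out.println(advised(new ServiceImpl()).step(41));
    }
}
```

A bytecode-based framework can instead weave the advice so that the parameter and the target are accessed with their real types directly, avoiding the boxing and the cast altogether.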

Who (what are the AOP framework / Proxy framework benched)

AWbench is extensible. So far it includes the following:

Full blown AOP

Proxy based AOP, interceptor frameworks

  • Spring AOP
  • Cglib proxy
  • AspectWerkz extensible AOP container
  • Spring AOP within the AspectWerkz extensible AOP container

How (general rules)

What's next?

Running awbench on your own

