
The need for performance metrics and comparison

Since we released AW 1.0, and more generally with every release of any AOP / interceptor framework (AspectJ, JBoss AOP, Spring AOP, cglib, etc.), the same questions are raised: "what is the performance cost of such an approach?", "how much do I lose per method invocation when an advice / interceptor is applied?".

This is indeed an issue that needs to be carefully addressed, and one that has in fact shaped the design of every sufficiently mature framework of this kind.

We are probably all wary of the cost of java.lang.reflect despite its power, and usually, even before starting to evaluate semantic robustness and general ease of use - which are cornerstones of the AOP landscape - we start with some hello world benchmarks.
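As a rough illustration, here is a minimal sketch of the kind of hello world micro-benchmark people tend to write to see the reflection overhead for themselves. It is not part of AWbench; the ReflectionBench class name, the iteration count, and the timing approach are ours, and a naive loop like this is only indicative since it ignores many JIT effects.

    import java.lang.reflect.Method;

    public class ReflectionBench {

        public int work(int i) {
            return i + 1;
        }

        public static void main(String[] args) throws Exception {
            final int ITERATIONS = 1000000; // illustrative count, not an AWbench setting
            ReflectionBench target = new ReflectionBench();
            Method method = ReflectionBench.class.getMethod("work", int.class);

            // warm up both paths so the JIT has compiled them before we time anything
            for (int i = 0; i < ITERATIONS; i++) {
                target.work(i);
                method.invoke(target, i);
            }

            // time direct invocation
            long start = System.nanoTime();
            long sum = 0;
            for (int i = 0; i < ITERATIONS; i++) {
                sum += target.work(i);
            }
            long directNs = System.nanoTime() - start;

            // time reflective invocation of the same method
            start = System.nanoTime();
            for (int i = 0; i < ITERATIONS; i++) {
                sum += (Integer) method.invoke(target, i);
            }
            long reflectiveNs = System.nanoTime() - start;

            System.out.println("direct:     " + directNs / ITERATIONS + " ns/call");
            System.out.println("reflective: " + reflectiveNs / ITERATIONS + " ns/call (sum=" + sum + ")");
        }
    }

Such a toy comparison says nothing about the semantics or robustness of a framework, which is exactly why a more systematic harness is needed.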

We started AWbench for that purpose: to offer a single place to measure the relative performance of AOP / interceptor frameworks, and to let you measure it on your own.

Introducing awbench
  - What (which AOP constructs are benchmarked)
  - Who (which AOP / proxy frameworks are benchmarked)
  - How (general rules)

Full blown AOP
  - AspectWerkz
  - AspectJ
  - JBoss AOP

Proxy based AOP, interceptor frameworks
  - Spring AOP
  - Cglib proxy
  - AspectWerkz extensible AOP container
  - Spring AOP within the AspectWerkz extensible AOP container

What's next?

Running awbench on your own

Contributing
