Why Another Tool for Language Recognition? Why not reuse open-source and well-known libraries like ANTLR or JavaCC? This is the first question asked by any developer discovering SSLR. Of course, this option was seriously considered and had big advantages, but we decided to start from scratch for the following reasons:
- The Sonar team is addicted to TDD, and we think that existing tools don't fit well with TDD: they require some code generation and don't provide any simple and quick way to unit test each part of a source code analyser, such as a single parsing rule (see the test sketch after this list).
- The Sonar team is addicted to the KISS principle, so we think that a Java developer should be able to do everything from their favorite Java IDE, directly in Java.
- This technology is also used to analyse legacy languages like COBOL, which require some very specific lexing and preprocessing features. Implementing those features would have required us to fully master the implementation of the existing tools, so we would not have benefited from a black-box approach.
- In any case, the ultimate goal of SSLR is to provide a complete stack for source code analysis which goes far beyond parsing. Sooner or later, SSLR will provide out of the box the required material to:
  - Feed and query a symbol table
  - Feed and visit a control flow graph
  - Feed and query a complete dependency graph
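
To make the TDD point concrete, here is a minimal sketch of what unit testing a single parsing rule can look like. It assumes the sslr-testing-harness module and its Assertions entry point together with JUnit; the toy grammar and its rule keys are hypothetical, and exact API names may vary between SSLR versions.

```java
import static org.sonar.sslr.tests.Assertions.assertThat;

import com.sonar.sslr.api.Grammar;
import org.junit.Test;
import org.sonar.sslr.grammar.GrammarRuleKey;
import org.sonar.sslr.grammar.LexerlessGrammarBuilder;

public class AdditiveExpressionTest {

  // Hypothetical rule keys for a toy arithmetic grammar.
  private enum Keys implements GrammarRuleKey {
    ADDITIVE_EXPRESSION, NUMBER
  }

  private static Grammar createGrammar() {
    LexerlessGrammarBuilder b = LexerlessGrammarBuilder.create();
    b.rule(Keys.NUMBER).is(b.regexp("[0-9]+"));
    b.rule(Keys.ADDITIVE_EXPRESSION).is(
        Keys.NUMBER, b.zeroOrMore(b.firstOf("+", "-"), Keys.NUMBER));
    b.setRootRule(Keys.ADDITIVE_EXPRESSION);
    return b.build();
  }

  @Test
  public void should_match_additive_expressions() {
    // A single parsing rule is exercised in isolation, with no code
    // generation step and no need to parse a whole compilation unit.
    assertThat(createGrammar().rule(Keys.ADDITIVE_EXPRESSION))
        .matches("1+2-3")
        .notMatches("1+");
  }
}
```

Because the rule is plain Java, this runs like any other JUnit test and fits naturally into a red-green-refactor cycle.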
Here are the main features of SSLR:
- Easy integration and use
  - Just add a dependency on a JAR file (< 200 KB)
  - No special step to add to the build process
  - No "untouchable" generated code
- Everything in Java
  - Definition of the grammar and the lexer directly in Java code (see the sketch after this list)
  - No break in IDE support (syntax highlighting, code navigation, refactoring, etc.)
- Mature and production ready
  - This technology is already used in production to analyse millions of lines of COBOL, PL/SQL, C, C#, ... code
  - Good performance
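
As an illustration of defining a grammar in plain Java, here is a sketch using SSLR's LexerlessGrammarBuilder, in which lexing concerns (whitespace, identifiers, literals) are expressed as ordinary grammar rules. The tiny assignment language and its rule keys are hypothetical, and method names may differ between SSLR versions.

```java
import com.sonar.sslr.api.Grammar;
import org.sonar.sslr.grammar.GrammarRuleKey;
import org.sonar.sslr.grammar.LexerlessGrammarBuilder;

public class MiniGrammar {

  // Hypothetical rule keys for a tiny language of statements like "x = 42;".
  public enum Keys implements GrammarRuleKey {
    COMPILATION_UNIT, STATEMENT, IDENTIFIER, NUMBER, WHITESPACE
  }

  public static Grammar create() {
    LexerlessGrammarBuilder b = LexerlessGrammarBuilder.create();

    // Lexing is handled by ordinary rules: no separate generated lexer.
    b.rule(Keys.WHITESPACE).is(b.regexp("\\s*+")).skip();
    b.rule(Keys.IDENTIFIER).is(b.regexp("[a-zA-Z_][a-zA-Z0-9_]*+"), Keys.WHITESPACE);
    b.rule(Keys.NUMBER).is(b.regexp("[0-9]++"), Keys.WHITESPACE);

    // String literals such as "=" and ";" are matched verbatim.
    b.rule(Keys.STATEMENT).is(
        Keys.IDENTIFIER, "=", Keys.WHITESPACE, Keys.NUMBER, ";", Keys.WHITESPACE);
    b.rule(Keys.COMPILATION_UNIT).is(
        Keys.WHITESPACE, b.oneOrMore(Keys.STATEMENT), b.endOfInput());

    b.setRootRule(Keys.COMPILATION_UNIT);
    return b.build();
  }
}
```

Since the whole definition is ordinary Java code, IDE navigation, refactoring, and completion work on grammar rules just like on any other class.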