Why yet another tool for language recognition? Why not reuse open-source and well-known libraries like ANTLR or JavaCC? This is the first question asked by any developer discovering SSLR. Of course, this option was seriously studied and had big advantages, but we decided to start from scratch for the following reasons:
- The SonarQube team is addicted to TDD, and we think that existing tools don't fit well with TDD: they require code generation and don't provide any simple and quick way to unit test each part of a source code analyser, such as an individual parsing rule.
- The SonarQube team is addicted to KISS, so we think that a Java developer should be able to do everything from their favorite IDE.
- This technology is also used to analyse legacy languages, such as COBOL, which require very specific lexing and preprocessing features. Implementing those features would have required fully mastering the internals of the existing tools, so we would not have benefited from a black-box approach.
- In any case, the ultimate goal of SSLR is to provide a complete compiler front-end stack, which goes well beyond parsing. Sooner or later, SSLR will provide the material required to fully implement a:
- Symbol table (currently in beta)
- Control flow graph
- Data flow analysis
- LLVM IR emitter