This is the cyclomatic complexity, also known as the McCabe metric. Whenever the control flow of a method splits, the complexity counter is incremented by one.
|Complexity /class||class_complexity||Average complexity by class.|
|Complexity /file||file_complexity||Average complexity by file.|
|Complexity /method||function_complexity||Average complexity by method.|
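As an illustrative sketch of the counting rule above (using Python's `ast` module on Python code, not SonarQube's actual Java analyzer, and with an assumed set of decision node types):

```python
import ast

# Node types treated as control-flow splits; each adds 1 to the complexity.
# (Illustrative choice, not SonarQube's exact rule set.)
DECISION_NODES = (ast.If, ast.While, ast.For, ast.ExceptHandler, ast.BoolOp)

def cyclomatic_complexity(source: str) -> int:
    """Complexity = 1 (the single entry path) + one per control-flow split."""
    tree = ast.parse(source)
    complexity = 1
    for node in ast.walk(tree):
        if isinstance(node, DECISION_NODES):
            complexity += 1
    return complexity

code = """
def clamp(x, lo, hi):
    if x < lo:
        return lo
    if x > hi:
        return hi
    return x
"""
print(cyclomatic_complexity(code))  # 3 (1 + two if statements)
```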
A class's afferent couplings is a measure of how many other classes use the specific class.
Depth in Tree
The depth of inheritance tree (DIT) metric provides, for each class, a measure of how many inheritance levels separate it from the top of the object hierarchy.
A class's efferent couplings is a measure of how many different classes are used by the specific class.
File cycles
Minimal number of file cycles detected inside a package to be able to identify all undesired dependencies.
File edges weight
Number of file dependencies inside a package.
File dependencies to cut
Number of file dependencies to cut in order to remove all cycles between packages.
File tangles
file_tangles = file_feedback_edges.
File tangle index
File tangle index = 2 * (file_tangles / file_edges_weight) * 100.
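The formula above can be sketched as a small function; the package below with 20 file dependencies, 2 of them feedback edges, is a hypothetical example:

```python
def file_tangle_index(file_tangles: int, file_edges_weight: int) -> float:
    # file_tangles = file_feedback_edges (dependencies going against the
    # dominant direction); a perfectly layered package scores 0%.
    return 2 * (file_tangles / file_edges_weight) * 100

# Hypothetical package: 20 file dependencies, 2 of them feedback edges.
print(file_tangle_index(2, 20))  # 20.0
```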
Lack of cohesion of methods. See LCOM4 documentation page.
Number of children
The number of children of a class is the number of direct and indirect descendants of this class.
Package cycles
Minimal number of package cycles detected to be able to identify all undesired dependencies.
Package dependencies to cut
Number of package dependencies to cut in order to remove all cycles between packages.
Package tangle index
Level of tangle of the packages. Best value (0%) means that there is no cycle and worst value (100%) means that packages are really tangled. This metric is computed with the following formula: 2 * (File dependencies to cut / Number of file dependencies between packages) * 100.
Response for class
See RFC documentation page.
Package edges weight
Total number of file dependencies between packages.
Suspect file dependencies
File dependencies to cut in order to remove cycles between files inside a package. Note that cycles between files inside a package do not always indicate a poor-quality architecture.
|Suspect LCOM4 density||Density of files having a LCOM4 greater than 1.|
|Blank comments||Number of non-significant comments lines (empty comment line, comment line containing only special characters, etc.).|
Number of javadoc, multi-line comment and single-line comment lines. Empty comment lines, header file comments (mainly used to define the license) and commented-out lines of code are not included.
Density of comment lines = Number of Comment lines / (Number of Lines of code + Number of Comment lines) * 100
With such a formula: 50% means that there are as many comment lines as lines of code, and 100% means that the file contains only comment lines.
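The density formula and the two boundary cases just mentioned can be checked directly:

```python
def comment_lines_density(ncloc: int, comment_lines: int) -> float:
    # Density of comment lines = CL / (LOC + CL) * 100
    return comment_lines / (ncloc + comment_lines) * 100

print(comment_lines_density(100, 100))  # 50.0  (as many comment lines as code)
print(comment_lines_density(0, 50))     # 100.0 (comment lines only)
```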
|Public documented API (%)||public_documented_api_density||Density of public documented API = (Public API - Public undocumented API) / Public API * 100|
|Public undocumented API||public_undocumented_api||Public API without a comment header.|
Number of duplicated blocks of lines.
|Duplicated files||duplicated_files||Number of files involved in a duplication.|
|Duplicated lines||duplicated_lines||Number of lines involved in a duplication.|
|Duplicated lines (%)||duplicated_lines_density||Density of duplication = duplicated_lines / lines * 100|
Number of active reviews (status not closed).
|False-positive reviews||Number of false-positive reviews.|
|New unreviewed violations||Number of new unreviewed violations.|
|Unassigned reviews||Number of unassigned reviews.|
|Unplanned reviews||Number of unplanned reviews (not associated with an action plan).|
|Unreviewed violations||Number of unreviewed violations.|
Number of new violations.
New xxxxx violations
Number of new violations with severity xxxxx, xxxxx being Blocker, Critical, Major, Minor or Info.
Number of violations.
Number of violations with severity xxxxx, xxxxx being Blocker, Critical, Major, Minor or Info.
Sum of the violations weighted by the coefficient associated with each severity (Sum(xxxxx_violations * xxxxx_weight)).
Rules compliance index (RCI) = 100 - (weighted_violations / ncloc * 100)
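A sketch of the two formulas above, with hypothetical severity weights (the actual weights are configurable) and a floor at 0% assumed for projects whose weighted violations exceed their lines of code:

```python
# Hypothetical severity weights, not the configured defaults.
WEIGHTS = {"blocker": 10, "critical": 5, "major": 3, "minor": 1, "info": 0}

def weighted_violations(counts: dict) -> int:
    # Sum(xxxxx_violations * xxxxx_weight)
    return sum(counts[sev] * WEIGHTS[sev] for sev in counts)

def rci(weighted: int, ncloc: int) -> float:
    # Rules compliance index, floored at 0% (assumption for extreme cases).
    return max(0.0, 100 - (weighted / ncloc * 100))

counts = {"blocker": 0, "critical": 1, "major": 5, "minor": 10, "info": 3}
w = weighted_violations(counts)  # 1*5 + 5*3 + 10*1 + 3*0 = 30
print(rci(w, 1000))              # 97.0
```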
|Authors by line||authors_by_line||no||The last committer on each line of code.|
This metric is made available by the SCM Activity plugin.
|Blocker Remediation Cost||Remediation cost (in days) to fix all blocker violations.|
|Critical and over Remediation Cost||Remediation cost (in days) to fix all critical and blocker violations.|
|Effort to grade X||Effort (in days) to reach grade X.|
|Major and over Remediation Cost||Remediation cost (in days) to fix all major and critical and blocker violations.|
|Minor and over Remediation Cost||Remediation cost (in days) to fix all minor and major and critical and blocker violations.|
|SQALE Remediation Cost||Remediation cost (in days) to fix all violations.|
Note that these metrics are made available by the SQALE plugin.
Number of getter and setter methods used to read (get) or write (set) a class property.
|Classes||classes||Number of classes (including nested classes, interfaces, enums and annotations).|
|Directories||directories||Number of directories.|
|Files||files||Number of files.|
|Generated Lines||Number of generated lines (Cobol only).|
|Generated lines of code||Number of generated lines of code (Cobol only).|
|Lines||lines||Number of physical lines (number of carriage returns).|
|Lines of code||ncloc||Number of physical lines that contain at least one character which is neither a whitespace, a tabulation nor part of a comment.|
Number of methods.
Notes for Java:
|Packages||packages||Number of packages.|
|Projects||projects||Number of projects in a view.|
|Public API||public_api||Number of public Classes + number of public Methods + number of public Properties (without public final static ones).|
Number of statements as defined in the Java Language Specification but without block definitions. The statement counter is incremented by one each time one of the following keywords is encountered: if, else, while, do, for, switch, break, continue, return, throw, synchronized, catch, finally.
The statement counter is not incremented by a class, method, field or annotation definition, a package declaration or an import declaration.
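As a naive sketch of the counting rule (a real analyzer works on a parse tree, and would also strip comments and string literals before counting):

```python
import re

# Keywords that increment the statement counter, per the rule above.
STATEMENT_KEYWORDS = {"if", "else", "while", "do", "for", "switch", "break",
                      "continue", "return", "throw", "synchronized",
                      "catch", "finally"}

def count_statements(java_source: str) -> int:
    # Crude word-level tokenizer over raw source text (illustrative only).
    tokens = re.findall(r"[A-Za-z_]\w*", java_source)
    return sum(1 for t in tokens if t in STATEMENT_KEYWORDS)

snippet = """
int abs(int x) {
    if (x < 0) {
        return -x;
    }
    return x;
}
"""
print(count_statements(snippet))  # 3 (one if, two return)
```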
On each line of code containing some boolean expressions, the branch coverage simply answers the following question: 'Has each boolean expression been evaluated both to true and false?'. This is the density of possible branches in flow control structures that have been followed during unit tests execution.
It is a mix of Line coverage and Branch coverage. Its goal is to provide an even more accurate answer to the following question: 'How much of the source code has been covered by the unit tests?'.
On a given line of code, Line coverage simply answers the following question: 'Has this line of code been executed during the execution of the unit tests?'. It is the density of lines covered by unit tests.
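Both densities can be sketched from the uncovered counts defined in the table below; the inputs here are hypothetical:

```python
def line_coverage(uncovered_lines: int, lines_to_cover: int) -> float:
    # Density of lines executed at least once during unit tests.
    covered = lines_to_cover - uncovered_lines
    return covered / lines_to_cover * 100

def branch_coverage(uncovered_branches: int, total_branches: int) -> float:
    # Density of branches evaluated both to true and to false.
    return (total_branches - uncovered_branches) / total_branches * 100

# Hypothetical project: 200 coverable lines, 25 never executed;
# 40 branches, 10 never fully evaluated.
print(line_coverage(25, 200))   # 87.5
print(branch_coverage(10, 40))  # 75.0
```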
|Lines to cover||lines_to_cover||Number of lines of code which could be covered by unit tests (for example, blank lines or full comments lines are not considered as lines to cover).|
|New branch coverage||new_branch_coverage||Identical to Branch coverage but restricted to new / updated source code.|
|New coverage||new_coverage||Identical to Coverage but restricted to new / updated source code.|
|New line coverage||new_line_coverage||Identical to Line coverage but restricted to new / updated source code.|
|New lines to cover||new_lines_to_cover||Identical to Lines to cover but restricted to new / updated source code.|
|New uncovered lines||new_uncovered_lines||Identical to Uncovered lines but restricted to new / updated source code.|
|Skipped unit tests||skipped_tests||Number of skipped unit tests.|
|Uncovered branches||uncovered_branches||Number of branches which are not covered by unit tests.|
|Uncovered lines||uncovered_lines||Number of lines of code which are not covered by unit tests.|
|Unit tests||tests||Number of unit tests.|
|Unit tests duration||test_execution_time||Time required to execute all the unit tests.|
|Unit test errors||test_errors||Number of unit tests that have failed with an unexpected exception.|
|Unit test failures||test_failures||Number of unit tests that have failed with an assertion failure.|
|Unit test success density (%)||test_success_density||Test success density = (tests - (test_errors + test_failures)) / tests * 100|
The same kinds of metrics exist for Integration test coverage and Overall test coverage (Unit tests + Integration tests).
Metrics on test execution do not exist for Integration tests and Overall tests.