It is the cyclomatic complexity, also known as the McCabe metric. Whenever the control flow of a function splits, the complexity counter is incremented by one.
|Complexity /class||class_complexity||Average complexity by class.|
|Complexity /file||file_complexity||Average complexity by file.|
|Complexity /function||function_complexity||Average complexity by function.|
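To illustrate the counting rule, here is a minimal Python sketch; the function and its annotations are my own, and exact counting conventions vary by analyzer:

```python
def classify(values):
    # Complexity starts at 1 for the function entry point.
    count = 0
    for v in values:              # +1: the loop splits the control flow
        if v > 0 and v % 2 == 0:  # +1 for the 'if', +1 for the 'and'
            count += 1
    return count                  # cyclomatic complexity: 1 + 3 = 4

print(classify([1, 2, 3, 4]))  # prints 2
```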
The afferent couplings of a class are a measure of how many other classes use this specific class.
Depth in Tree
The depth of inheritance tree (DIT) metric provides for each class a measure of the inheritance levels from the object hierarchy top.
The efferent couplings of a class are a measure of how many different classes are used by this specific class.
Minimal number of file cycles detected inside a directory to be able to identify all undesired dependencies. This metric is available at directory level.
Number of file dependencies inside a directory. This metric is available at directory level.
File dependencies to cut
Number of file dependencies to cut in order to remove all cycles between directories. This metric is available at directory, module and project level.
File tangle = Suspect file dependencies
This metric is available at directory level.
Number of children
File tangle index = 2 * (File tangle / File edges weight) * 100.
Lack of cohesion of methods. See LCOM4 documentation page.
The number of children of a class is the number of direct and indirect descendants of this class. This metric is available at directory level.
Minimal number of directory cycles detected to be able to identify all undesired dependencies. This metric is available at directory, module and project level.
Package dependencies to cut
Number of directory dependencies to cut in order to remove all cycles between directories. This metric is available at directory, module and project level.
Package tangle index
Level of tangle of the directory interdependency. The best value (0%) means that there is no cycle and the worst value (100%) means that directories are really tangled. This metric is computed with the following formula: 2 * (File dependencies to cut / Number of file dependencies between directories) * 100.
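A worked sketch of this formula in Python; the dependency counts below are hypothetical:

```python
def tangle_index(dependencies_to_cut, edges_weight):
    # 2 * (file dependencies to cut / file dependencies between directories) * 100
    if edges_weight == 0:
        return 0.0  # no dependencies between directories: best value
    return 2 * (dependencies_to_cut / edges_weight) * 100

# E.g. cutting 2 of 10 inter-directory file dependencies removes all cycles:
print(tangle_index(2, 10))  # prints 40.0
```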
Response for class
See the RFC documentation page. This metric is available at directory, module and project level.
Package edges weight
Number of file dependencies between directories. This metric is available at directory, module and project level.
File dependencies to cut in order to remove cycles between files inside a directory. Note that cycles between files inside a directory do not always mean a bad quality architecture.
|Suspect LCOM4 density||suspect_lcom4_density||Density of files having a LCOM4 greater than 1|
This metric is available at directory level.
Number of lines containing either a comment or commented-out code. Non-significant comment lines (empty comment lines, comment lines containing only special characters, etc.) do not increase the number of comment lines.
The following piece of code contains 9 comment lines:
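The original sample is not reproduced here; as a stand-in, this hedged Python snippet also contains 9 comment lines under the counting rules above (lines holding only code, and blank lines, are not counted):

```python
# Greatest common divisor, used here only to illustrate comment counting.
# Commented-out code also counts as comment lines:
# def gcd(a, b): return a if b == 0 else gcd(b, a % b)
def gcd(a, b):
    # Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b).
    while b != 0:        # loop until the remainder is zero
        a, b = b, a % b  # one reduction step
    return a             # 'a' now holds the GCD

# An empty comment line ("#" alone) or one containing only special
# characters would not be counted, per the rule above.
print(gcd(12, 18))
```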
Density of comment lines = Comment lines / (Lines of code + Comment lines) * 100
With such a formula:
* 50% means that the number of lines of code equals the number of comment lines
* 100% means that the file only contains comment lines
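A minimal sketch of the computation (the helper name is my own):

```python
def comment_density(lines_of_code, comment_lines):
    # Density of comment lines = Comment lines / (Lines of code + Comment lines) * 100
    return comment_lines / (lines_of_code + comment_lines) * 100

print(comment_density(100, 100))  # prints 50.0: as many comment lines as code lines
print(comment_density(0, 30))     # prints 100.0: the file only contains comments
```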
|Comments in Procedure Divisions||Comments in Procedure divisions (Cobol only)|
|public_documented_api_density||Density of public documented API = (Public API - Public undocumented API) / Public API * 100|
|public_undocumented_api||Public API without comments header.|
Number of duplicated blocks of lines.
For a block of code to be considered as duplicated:
Differences in indentation as well as in string literals are ignored while detecting duplications.
|Duplicated files||duplicated_files||Number of files involved in a duplication.|
|duplicated_lines||Number of lines involved in a duplication.|
|Duplicated lines (%)||duplicated_lines_density|
Density of duplication = Duplicated lines / Lines * 100
Number of active reviews (status not closed).
|False-positive reviews||false_positive_reviews||Number of false-positive reviews.|
Number of new unreviewed violations.
|Unassigned reviews||unassigned_reviews||Number of unassigned reviews.|
|Unplanned reviews||unplanned_reviews||Number of unplanned reviews (not associated with an action plan).|
|Unreviewed violations||unreviewed_violations||Number of unreviewed violations.|
Number of new issues.
New xxxxx issues
Number of new issues with severity xxxxx, xxxxx being blocker, critical, major, minor or info.
Number of issues.
Number of issues with severity xxxxx, xxxxx being blocker, critical, major, minor or info.
|False positive issues||false_positive_issues||Number of false positive issues|
|Open issues||open_issues||Number of issues whose status is Open|
|Confirmed issues||confirmed_issues||Number of issues whose status is Confirmed|
|Reopened issues||reopened_issues||Number of issues whose status is Reopened|
Sum of the issues weighted by the coefficient associated with each severity (Sum(xxxxx_violations * xxxxx_weight)).
Rules compliance index (RCI) = 100 - (Weighted issues / Lines of code * 100)
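A sketch of both formulas in Python; the severity weights below are hypothetical placeholders, as the real coefficients are configurable:

```python
# Hypothetical severity weights; the actual coefficients are configurable.
WEIGHTS = {"blocker": 10, "critical": 5, "major": 3, "minor": 1, "info": 0}

def weighted_issues(counts):
    # Sum(xxxxx_violations * xxxxx_weight)
    return sum(counts.get(sev, 0) * w for sev, w in WEIGHTS.items())

def rci(counts, lines_of_code):
    # RCI = 100 - (Weighted issues / Lines of code * 100);
    # a negative value is rounded to 0%.
    value = 100 - (weighted_issues(counts) / lines_of_code * 100)
    return max(value, 0.0)

print(rci({"major": 2, "minor": 4}, 1000))  # prints 99.0
```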
|Authors by line||authors_by_line||no||The last committer on each line of code.|
Note that this metric is made available by the SCM Activity plugin.
|Blocker Remediation Cost||blocker_remediation_cost||Remediation cost (in days) to fix all blocker violations.|
|Critical and over Remediation Cost||critical_remediation_cost||Remediation cost (in days) to fix all critical and blocker violations.|
|Effort to grade X (X = a, b, c or d)||Effort (in days) to reach grade X.|
|Major and over Remediation Cost||major_remediation_cost||Remediation cost (in days) to fix all major and critical and blocker violations.|
|Minor and over Remediation Cost||minor_remediation_cost||Remediation cost (in days) to fix all minor and major and critical and blocker violations.|
|SQALE Rating||sqale_rating||SQALE rating from A to E.|
|SQALE Remediation Cost||sqale_index||Remediation cost (in days) to fix all violations.|
Note that these metrics are made available by the SQALE plugin.
If the value is negative, it is rounded to 0%.
|Technical debt||sqale_index||Effort to fix all issues. The measure is stored in minutes in the DB.|
|Blocker||Operational/security risk: This issue might make the whole application unstable in production. Ex: calling garbage collector, not closing a socket, etc.|
|Critical||Operational/security risk: This issue might lead to an unexpected behavior in production without impacting the integrity of the whole application. Ex: NullPointerException, badly caught exceptions, lack of unit tests, etc.|
|Major||This issue might have a substantial impact on productivity. Ex: too complex methods, package cycles, etc.|
|Minor||This issue might have a potential and minor impact on productivity. Ex: naming conventions, Finalizer does nothing but call superclass finalizer, etc.|
|Info||Not known or yet well defined security risk or impact on productivity.|
Number of getter and setter functions used to get (read) or set (write) a class property.
|classes||Number of classes (including nested classes, interfaces, enums and annotations).|
|directories||Number of directories.|
|files||Number of files.|
Number of lines generated by Cobol code generators like CA-Telon.
|Generated lines of code||generated_ncloc||Number of lines of code generated by Cobol code generators like CA-Telon.|
|Inside Control Flow Statements||cobol_inside_ctrlflow_statements||Number of inside (intra program) control flow statements (GOBACK, STOP RUN, DISPLAY, CONTINUE, EXIT, RETURN, PERFORM paragraph1 THRU paragraph2).|
|lines||Number of physical lines (number of carriage returns).|
Number of physical lines that contain at least one character which is neither a whitespace, nor a tabulation, nor part of a comment.
|LOCs in Data Divisions||cobol_data_division_ncloc||Number of lines of code in Data divisions. Generated lines of code are excluded.|
|LOCs in Procedure Divisions||cobol_procedure_division_ncloc||Number of lines of code in Procedure divisions. Generated lines of code are excluded.|
Number of functions. Depending on the language, a function is either a function, a method or a paragraph.
|Outside Control Flow Statements||cobol_outside_ctrlflow_statements||Number of outside (inter programs) control flow statements (CALL, EXEC CICS LINK, EXEC CICS XCTL, EXEC SQL, EXEC CICS RETURN).|
|packages||Number of packages.|
|projects||Number of projects in a view.|
Number of public Classes + number of public Functions + number of public Properties
Number of statements.
On each line of code containing some boolean expressions, condition coverage simply answers the following question: 'Has each boolean expression been evaluated both to true and to false?'. This is the density of possible conditions in flow control structures that have been followed during unit test execution.
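Assuming the usual definition (CT and CF are the conditions evaluated at least once to true, respectively to false, and B is the total number of conditions), this can be sketched as:

```python
def condition_coverage(ct, cf, b):
    # ct: conditions evaluated to true at least once
    # cf: conditions evaluated to false at least once
    # b:  total number of conditions
    return (ct + cf) / (2 * b) * 100

print(condition_coverage(3, 2, 4))  # prints 62.5
```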
|Condition coverage on new code||new_branch_coverage|
Identical to Condition coverage but restricted to new / updated source code.
To be computed, this metric requires the SCM Activity plugin.
|Condition coverage hits||branch_coverage_hits_data||List of covered conditions.|
|Condition coverage||see Condition coverage|
|Conditions by line||conditions_by_line||Number of conditions by line.|
|Covered conditions by line||covered_conditions_by_line||Number of covered conditions by line.|
It is a mix of Line coverage and Condition coverage. Its goal is to provide an even more accurate answer to the following question: 'How much of the source code has been covered by the unit tests?'.
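One common way to combine the two measures, shown here as an assumption rather than the tool's exact definition, is Coverage = (CT + CF + LC) / (2*B + EL):

```python
def coverage(ct, cf, b, lc, el):
    # ct, cf: conditions evaluated at least once to true / to false
    # b: total conditions; lc: covered lines; el: total executable lines
    return (ct + cf + lc) / (2 * b + el) * 100

print(round(coverage(3, 2, 4, 90, 100), 1))  # prints 88.0
```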
|Coverage on new code||new_coverage|
Identical to Coverage but restricted to new / updated source code.
To be computed, this metric requires the SCM Activity plugin.
On a given line of code, Line coverage simply answers the following question: 'Has this line of code been executed during the execution of the unit tests?'. It is the density of lines covered by unit tests: Line coverage = LC / EL, where LC = covered lines (lines_to_cover - uncovered_lines) and EL = total number of executable lines (lines_to_cover).
|Line coverage on new code||new_line_coverage||Identical to Line coverage but restricted to new / updated source code.|
|Line coverage hits||coverage_line_hits_data||List of covered lines.|
|lines_to_cover||Number of lines of code which could be covered by unit tests (for example, blank lines or full comments lines are not considered as lines to cover).|
|New branch coverage||new_branch_coverage||Identical to Branch coverage but restricted to new / updated source code.|
|New coverage||new_coverage||Identical to Coverage but restricted to new / updated source code.|
|New line coverage||new_line_coverage||Identical to Line coverage but restricted to new / updated source code.|
|Lines to cover on new code||new_lines_to_cover||Identical to Lines to cover but restricted to new / updated source code.|
|New uncovered lines||new_uncovered_lines||Identical to Uncovered lines but restricted to new / updated source code.|
|Skipped unit tests||skipped_tests||Number of skipped unit tests.|
|Uncovered conditions||uncovered_conditions||Number of conditions which are not covered by unit tests.|
|Uncovered conditions on new code||new_uncovered_conditions||Identical to Uncovered conditions but restricted to new / updated source code.|
|uncovered_lines||Number of lines of code which are not covered by unit tests.|
|Uncovered lines on new code||new_uncovered_lines||Identical to Uncovered lines but restricted to new / updated source code.|
|tests||Number of unit tests.|
|Unit tests duration||test_execution_time||Time required to execute all the unit tests.|
|test_errors||Number of unit tests that have failed.|
|test_failures||Number of unit tests that have failed with an unexpected exception.|
|Unit test success density (%)||test_success_density||Test success density = (Unit tests - (Unit test errors + Unit test failures)) / Unit tests * 100|