
Complexity

...

This is the cyclomatic complexity, also known as the McCabe metric. Whenever the control flow of a function splits, the complexity counter is incremented by one. Each function has a minimum complexity of 1.
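As an illustrative sketch only (not SonarQube's actual implementation), a rough complexity counter for a source snippet containing a single Python function could look like this:

```python
import ast

# Node types that represent a control-flow split; each occurrence adds 1
# to the base complexity of 1. This list is a simplification.
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.BoolOp, ast.ExceptHandler)

def cyclomatic_complexity(source: str) -> int:
    """Approximate cyclomatic complexity of a snippet with one function."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, BRANCH_NODES) for node in ast.walk(tree))

# One 'if' split on top of the minimum complexity of 1 gives 2.
print(cyclomatic_complexity("def f(x):\n    if x > 0:\n        return 1\n    return 0"))
```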

More details

...

Design

...

File cycles

...

file_cycles

...

Minimum number of file cycles inside a directory that must be detected to identify all undesired dependencies. This metric is available at the directory, module and program levels.

...

file_edges_weight

...

Number of file dependencies inside a directory. This metric is available at the directory, module and program levels.

...

File dependencies to cut

...

package_tangles

...

Number of file dependencies to cut in order to remove all cycles between directories. This metric is available at the directory, module and program levels.

 

...

file_tangles

...

File tangle = Suspect file dependencies

This metric is available at the directory, module and program levels.

...

file_tangle_index

...

File tangle index = 2 * (File tangle / File edges weight) * 100.

This metric is available at the directory, module and program levels.
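The file tangle index formula above can be sketched with hypothetical numbers (3 suspect file dependencies out of 40 file edges):

```python
def file_tangle_index(file_tangle: int, file_edges_weight: int) -> float:
    """File tangle index = 2 * (File tangle / File edges weight) * 100."""
    return 2 * (file_tangle / file_edges_weight) * 100

print(file_tangle_index(3, 40))  # 15.0
```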

...

LCOM4

...

lcom4

...

Lack of cohesion of functions. See LCOM4 documentation page. This metric is available at all levels.

...

Package cycles

...

package_cycles

...

Minimum number of directory cycles that must be detected to identify all undesired dependencies. This metric is available at the directory, module and program levels.

...

Package dependencies to cut

...

package_feedback_edges

...

Number of directory dependencies to cut in order to remove all cycles between directories. This metric is available at the directory, module and program levels.

...

Package tangle index

...

package_tangle_index

...

Level of directory interdependency. The best value (0%) means there are no cycles; the worst value (100%) means the directories are completely tangled. This metric is computed with the following formula: 2 * (File dependencies to cut / Number of file dependencies between directories) * 100. This metric is available at the directory, module and program levels.
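With hypothetical numbers (2 dependencies to cut out of 10 file dependencies between directories), the formula works out as follows:

```python
def package_tangle_index(deps_to_cut: int, file_deps_between_dirs: int) -> float:
    """0% means no cycle; 100% means the directories are completely tangled."""
    return 2 * (deps_to_cut / file_deps_between_dirs) * 100

print(package_tangle_index(2, 10))  # 40.0
```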

...

Response for class

...

rfc

...

See RFC documentation page. This metric is available at all levels.

...

Package edges weight

...

package_edges_weight

...

Number of file dependencies between directories. This metric is available at the directory, module and program levels.

...

file_feedback_edges

...

Number of file dependencies to cut in order to remove cycles between files inside a directory. Note that cycles between files inside a directory do not always indicate a poor-quality architecture. This metric is available at the directory level.

...

Documentation

...

Number of lines containing a comment.

More details

...

Density of comment lines = Comment lines / (Lines of code + Comment lines) * 100

With such a formula:

  • 50% means that the number of lines of code equals the number of comment lines
  • 100% means that the file only contains comment lines
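The density formula and the two cases above can be checked with a small calculation:

```python
def comment_density(comment_lines: int, lines_of_code: int) -> float:
    """Density of comment lines = Comment lines / (LOC + Comment lines) * 100."""
    return comment_lines / (lines_of_code + comment_lines) * 100

print(comment_density(50, 50))  # 50.0  -> as many comment lines as code lines
print(comment_density(30, 0))   # 100.0 -> the file only contains comment lines
```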

...

Duplications

...

Duplicated blocks

...

duplicated_blocks

...

Number of duplicated blocks of lines.

...

Density of duplication = Duplicated lines / Lines * 100
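For instance, with hypothetical numbers (20 duplicated lines in a 200-line file):

```python
def duplicated_lines_density(duplicated_lines: int, lines: int) -> float:
    """Density of duplication = Duplicated lines / Lines * 100."""
    return duplicated_lines / lines * 100

print(duplicated_lines_density(20, 200))  # 10.0
```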

Issues

...

New issues

...

new_violations

...

Number of new issues.

...

New xxxxx issues

...

new_xxxxx_violations

...

Number of new issues with severity xxxxx, xxxxx being blocker, critical, major, minor or info.

...

Issues

...

violations

...

Number of issues.

...

xxxxx issues

...

xxxxx_violations

...

Number of issues with severity xxxxx, xxxxx being blocker, critical, major, minor or info.

...

weighted_violations

...

Sum of the issues weighted by the coefficient associated to each severity (Sum(xxxxx_violations * xxxxx_weight)).
To set the weight of each severity, log in as an administrator, go to Settings > General Settings > General > General and set the Rules weight property. The default value is:
( (Blocker_violations * 10) + (Critical_violations * 5) + (Major_violations * 3) + Minor_violations )
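Using the default weights above (Info issues do not appear in the default formula, so a weight of 0 is assumed here) and hypothetical issue counts:

```python
# Default severity weights from the formula above; the weight of 0 for
# 'info' is an assumption, since Info issues are absent from the default formula.
DEFAULT_WEIGHTS = {"blocker": 10, "critical": 5, "major": 3, "minor": 1, "info": 0}

def weighted_violations(counts: dict, weights: dict = DEFAULT_WEIGHTS) -> int:
    """Sum(xxxxx_violations * xxxxx_weight) over all severities."""
    return sum(counts.get(sev, 0) * w for sev, w in weights.items())

# 1*10 + 2*5 + 4*3 + 5*1 = 37
print(weighted_violations({"blocker": 1, "critical": 2, "major": 4, "minor": 5}))
```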

...

Rules compliance

...

violations_density

...

Rules compliance index (RCI) = 100 - (Weighted issues / Lines of code * 100)
If the value is negative, it is rounded to 0%. 
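The RCI formula, including the rounding of negative values to 0%, can be sketched as follows (37 weighted issues in 1,000 lines of code is a hypothetical example):

```python
def rules_compliance_index(weighted_issues: int, lines_of_code: int) -> float:
    """RCI = 100 - (Weighted issues / Lines of code * 100), floored at 0."""
    rci = 100 - (weighted_issues / lines_of_code * 100)
    return max(rci, 0)  # negative values are rounded to 0%

print(rules_compliance_index(37, 1000))   # ~96.3
print(rules_compliance_index(2000, 100))  # 0 (would be negative otherwise)
```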

...

Severity

Severity | Description
Blocker | Operational/security risk: This issue might make the whole application unstable in production. Ex: calling the garbage collector, not closing a socket, etc.
Critical | Operational/security risk: This issue might lead to an unexpected behavior in production without impacting the integrity of the whole application. Ex: NullPointerException, badly caught exceptions, lack of unit tests, etc.
Major | This issue might have a substantial impact on productivity. Ex: too complex methods, package cycles, etc.
Minor | This issue might have a potential and minor impact on productivity. Ex: naming conventions, Finalizer does nothing but call superclass finalizer, etc.
Info | Unknown or not yet well-defined security risk or impact on productivity.

Size

...

Number of getter and setter functions used to get (reading) or set (writing) a class property.

More details

...

Number of lines generated by Cobol code generators like CA-Telon.

...

Number of physical lines that contain at least one character which is neither whitespace, nor a tabulation, nor part of a comment.

More details

...

Number of functions. Depending on the language, a function is either a function or a method or a paragraph.

More details

...

Number of public Classes + number of public Functions + number of public Properties

More details

...

Number of statements.

More details

Tests

...

On each line of code containing some boolean expressions, the branch coverage simply answers the following question: 'Has each boolean expression been evaluated both to true and false?'. This is the density of possible branches in flow control structures that have been followed during unit tests execution.

Code Block
Branch coverage = (CT + CF) / (2*B)

where

CT = branches that have been evaluated to 'true' at least once
CF = branches that have been evaluated to 'false' at least once

B = total number of branches
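The branch coverage formula can be computed directly from these counts; the numbers below (8 branches seen true, 6 seen false, 10 branches total) are hypothetical:

```python
def branch_coverage(ct: int, cf: int, b: int) -> float:
    """Branch coverage = (CT + CF) / (2*B), as a density between 0 and 1.

    ct: branches evaluated to 'true' at least once
    cf: branches evaluated to 'false' at least once
    b:  total number of branches
    """
    return (ct + cf) / (2 * b)

print(branch_coverage(8, 6, 10))  # 0.7
```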

...

It is a mix of Line coverage and Branch coverage. Its goal is to provide an even more accurate answer to the following question: 'How much of the source code has been covered by the unit tests?'.

Code Block
Coverage = (CT + CF + LC)/(2*B + EL)

where

CT = branches that have been evaluated to 'true' at least once
CF = branches that have been evaluated to 'false' at least once
LC = covered lines = lines_to_cover - uncovered_lines

B = total number of branches
EL = total number of executable lines (lines_to_cover)
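Combining the same branch counts with line counts, the overall coverage formula can be sketched as follows (all numbers hypothetical):

```python
def coverage(ct: int, cf: int, lines_to_cover: int, uncovered_lines: int, b: int) -> float:
    """Coverage = (CT + CF + LC) / (2*B + EL), as a density between 0 and 1."""
    lc = lines_to_cover - uncovered_lines  # LC = covered lines
    return (ct + cf + lc) / (2 * b + lines_to_cover)

# (8 + 6 + 80) / (20 + 100) = 94/120
print(coverage(8, 6, 100, 20, 10))
```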

...

On a given line of code, Line coverage simply answers the following question: 'Has this line of code been executed during the execution of the unit tests?'. It is the density of covered lines by unit tests:

Code Block
Line coverage = LC / EL

where

LC = covered lines (lines_to_cover - uncovered_lines)
EL = total number of executable lines (lines_to_cover)
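The line coverage formula reduces to a single division (100 executable lines with 20 uncovered is a hypothetical example):

```python
def line_coverage(lines_to_cover: int, uncovered_lines: int) -> float:
    """Line coverage = LC / EL = (lines_to_cover - uncovered_lines) / lines_to_cover."""
    return (lines_to_cover - uncovered_lines) / lines_to_cover

print(line_coverage(100, 20))  # 0.8
```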

...

The same kinds of metrics exist for Integration test coverage and Overall test coverage (Unit tests + Integration tests).

...


Documentation has been moved to http://redirect.sonarsource.com/doc/metric-definitions.html.