
The ABAP quality check (ATC) data provider allows you to monitor several pre-defined key performance indicators (KPIs). The DP_ATC data provider fetches its data from the Custom Code Management quality cockpit. To display data using the ATC data provider, you must first have tasks configured to run on a regular basis in the quality cockpit.

1. Objectives 

The objective of this wiki is to explain how the ABAP Code Quality data provider works.

2. Supported Renderers

The ATC data provider supports the following renderers:

  • Line Chart
  • Pie Chart
  • Donut Chart
  • Line-Column Chart
  • Dual Line Chart
  • Dual Line-Column
  • Dual Bar-Column
  • Stack Bar Chart
  • Stack Column Chart
  • Stack Column Chart2 Label
  • Column Chart
  • Bar Chart
  • Table History Renderer
  • SLR Renderer
  • Trend Table Renderer
  • Dynamic Table
  • HTML Renderer

3. Tabs description

The General tab contains two attributes:

  • ATC Project: The projects that are running in the quality cockpit
  • Metric: The KPI that you want to visualize


The available metrics are the following:

  • Objects: Number of objects that were analyzed. 
  • Number of violations: Number of errors and warnings together. 
  • Number of errors: Number of errors. 
  • Number of warnings: Number of warnings.
  • Coverage: Represents the number of used and tested objects, i.e. objects whose “last used” column contains a date. If there is no date, the object is not used and is therefore not taken into consideration.
  • Quality: The benchmark result is a grade (between 0 and 3.5) for each category, grouped by application, based on ATC runs. The grade includes two measures:
                  - Number of check violations.
                  - Backlog of check violations, expressed as a percentage of objects with violations.
  • Impact: The number of used objects with violations divided by the total number of objects with violations, then multiplied by 100.
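The simpler metric definitions above can be sketched in a few lines of Python. This is a minimal illustration only: the function and parameter names are assumptions, not the quality cockpit's actual data model, and the impact formula follows the worked example in section 4.7.

```python
# Minimal sketch of the metric definitions above. Names are illustrative
# assumptions, not the quality cockpit's actual data model.

def number_of_violations(errors: int, warnings: int) -> int:
    # "Number of violations" counts errors and warnings together.
    return errors + warnings

def impact_pct(used_objects_with_issues: int, objects_with_violations: int) -> float:
    # Impact, expressed as a percentage (as in the worked example of section 4.7).
    return used_objects_with_issues / objects_with_violations * 100

print(number_of_violations(12, 8))   # 20
print(round(impact_pct(10, 51), 1))  # 19.6
```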

4. Data validation

The user can validate the displayed data through Custom Code Improvement or directly in the quality cockpit by following the steps below:

This section presents all the common steps to validate the available metrics:

  1. Display the list of projects available in the quality cockpit

  2. Select the project name

  3. Select a time range and click the Apply button

  4. Select the job for 30.09.2019

Note that, for the first five metrics, the period selected in the OCC dashboard is the following:

Now use the following information to determine the value of the selected metric. In each case, we compare the data displayed in the OCC dashboard with the data in the quality cockpit:

4.1 Objects

OCC query:

The displayed data is the following:

To validate the displayed result, scroll down to the Object Statistics section and check the value displayed for Total number of objects.

4.2 Number of violations

OCC query:

The displayed data is:

The displayed result is the sum of the number of ATC errors and the number of ATC warnings.

4.3 Number of errors

OCC query:

The displayed data is:

To validate the displayed result, scroll down to the Object Statistics section and check the value displayed for Number of ATC errors.

4.4 Number of warnings

OCC query:


The displayed data is:


To validate the displayed result, scroll down to the Object Statistics section and check the value displayed for Number of ATC warnings.

4.5 Coverage

OCC query:

The displayed data is:

To validate the returned result, select the desired job and click on “Object list”.

Click the Go button.

Count the number of objects whose Used field is not empty.

In this case all Used fields are empty, so the coverage is 0.
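Assuming the object list can be represented as a plain list of entries with a “last used” value (a simplification of the cockpit's table, with hypothetical object names), the coverage count boils down to:

```python
# Hypothetical representation of the object list: each entry's "used"
# field holds the "last used" date, or None when the object was never used.
objects = [
    {"name": "ZCL_EXAMPLE_A", "used": None},  # illustrative names, not real objects
    {"name": "ZCL_EXAMPLE_B", "used": None},
    {"name": "ZCL_EXAMPLE_C", "used": None},
]

# Coverage counts the objects whose "Used" field is not empty.
coverage = sum(1 for obj in objects if obj["used"] is not None)
print(coverage)  # 0 -- all "Used" fields are empty, as in the example above
```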

4.6 Quality

Here is how the quality metric value is calculated:

X is the total number of categories having zero violations.

Y is read from a lookup table according to the backlog percentage.

OCC query:

In the quality cockpit: Prio 1: 12, Prio 2: 8, Prio 3: 10 → X = 0

In the quality cockpit:

Objects with errors: 9

Objects with warnings: 8

Objects with info: 34

→ Objects with violations = 9 + 8 + 34 = 51 and total number of objects = 77
→ Percentage of objects with violations = ( 51 / 77 ) * 100 = 66.23
→ Y = 1
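The backlog-percentage part of the calculation above can be checked with a few lines of Python. Only the percentage is computed here, because the lookup table that maps the backlog percentage to Y is not reproduced in this wiki.

```python
# Figures from the worked example above.
objects_with_errors = 9
objects_with_warnings = 8
objects_with_info = 34
total_objects = 77

# Objects with violations = errors + warnings + info objects.
objects_with_violations = objects_with_errors + objects_with_warnings + objects_with_info
# Backlog, as a percentage of all analyzed objects.
backlog_pct = objects_with_violations / total_objects * 100

print(objects_with_violations)  # 51
print(round(backlog_pct, 2))    # 66.23
```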

4.7 Impact

 OCC query:

In the quality cockpit, we count the number of objects with violations: 9 + 8 + 34 = 51

In the quality cockpit, the number of objects with violations that are used, per priority:

  • Prio 1: 1
  • Prio 2: 3
  • Prio 3: 6

In the quality cockpit, the total number of objects with violations that are used: 1 + 3 + 6 = 10

Percentage = ( 10 / 51 ) * 100 = 19.6 %
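The impact calculation above can be sketched as follows (the per-priority counts are taken from the worked example; the variable names are illustrative):

```python
# Figures from the worked example above.
objects_with_violations = 9 + 8 + 34                            # 51
used_with_violations = {"Prio 1": 1, "Prio 2": 3, "Prio 3": 6}  # per priority

# Total used objects with violations, then impact as a percentage.
total_used = sum(used_with_violations.values())
impact_pct = total_used / objects_with_violations * 100

print(total_used)            # 10
print(round(impact_pct, 1))  # 19.6
```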



