Non-Cumulatives: Performance and Memory Consumption
See SAP Note 1548125, Interesting facts about Inventory Cubes.
- As for all basis cubes, it is important that as many requests as possible are compressed (see note 590370). For inventory cubes the system has to take into account all uncompressed requests when the marker is calculated; this additionally contributes to long runtimes in the case of uncompressed cubes.
- Please use validity-determining characteristics only if really necessary, since too many of them (or a validity-determining characteristic with many characteristic values) decreases performance considerably.
- It is important to restrict the query as far as possible regarding time. This improves the runtime simply by lowering the number of points in time for which non-cumulative (ncum) values have to be calculated. Keep in mind that fewer calculations during query runtime also decrease memory requirements.
- A query with ncum key figures calculates the stock values at a certain point in time for all characteristic values in the drilldown (which meet the query filters, except time restrictions), not only for the values (e.g. materials) for which a movement is posted in the InfoProvider. Hence, when you want to estimate the number of records which have to be retrieved from the database by a certain ncum query, you can use (e.g.) transaction LISTCUBE and apply the formula: 'number of calculated stock values in query' = 'number of reference points' * 'number of requested points in time'
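As a hedged illustration (plain Python, not SAP code), the estimation formula can be sketched with hypothetical numbers; the counts are invented for the sake of the example, not taken from a real system:

```python
# Hypothetical sketch of the estimation formula above.
reference_points = 4    # e.g. marker rows found via LISTCUBE for the query's filters
points_in_time = 12     # e.g. a month drilldown restricted to one calendar year

# 'number of calculated stock values in query'
#   = 'number of reference points' * 'number of requested points in time'
calculated_stock_values = reference_points * points_in_time
print(calculated_stock_values)  # 48
```

With these illustrative numbers the query would have to calculate 48 stock values, regardless of how few movement records actually exist in the cube.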
For Queries with non-cumulative key figures it is especially important to define (global) restrictions regarding time:
Check out the following example to learn how you can use the query runtime statistics and transaction LISTCUBE to estimate the requested/calculated amount of data.
Sample Query with non-cumulative key figure
As described in note 1548125, the query processing changes decisively if non-cumulative key figures are involved. Therefore it is necessary to take a closer look at how to correctly interpret the query runtime statistics in this case. The following query is very simple and defined on an inventory cube (copy of 0IC_C03) with just a couple of data records:
There are only two key figures, the non-cumulative key figure 'Quantity Total Stock' and the cumulative key figure 'Amount'. The query has only the global filter 0calyear=2013, and the two characteristics 'Material' and 'Month' are in the drilldown. The following settings should be used when the query is run in RSRT:
Query Runtime Statistics
We use the Back (F3) button to get the runtime statistics displayed:
Basically there are three database accesses (different DataManager IDs):
- 3 records transported
- 4 records transported, Read Type=N
- 4 records transported
A query with non-cumulative key figures is internally split into two subqueries:
- one handles the 'normal' cumulative key figures
- the other one handles the non-cumulative key figures
The system starts with the cumulative key figures; in our case 3 data records are transferred from the database. For non-cumulative key figures the system needs to read
- the reference points: Read Type=N; 4 records transported
- and the relevant 'movements': 4 records transported
There are only 4 materials in the cube; if calmonth is removed from the drilldown, we get 4 rows.
Hence the system reads the reference points for these 4 materials and then starts the backward calculation for all requested times. Since the query is restricted to the year 2013 and 0calmonth is in the drilldown, 12 stock values at 12 different points in time need to be calculated for every reference point. The final result set therefore has 4*12=48 records. When we add the number of records transferred from the database for the 'normal' key figure, we get 51. This is exactly the value displayed for the internal table SP in the runtime statistics: 3100 OLAP: Read Data = 51 (please see the screenshot above).
In order to be able to do the backward calculation, the system needs all movements for the requested time interval. In our case all movements where 0calday >= 20130101 are read. Hence, the number of movements which need to be processed depends on the time restriction in the query.
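As a rough sketch (again plain Python, not SAP code) of what this backward calculation does for a single material: the stock at an earlier point in time is derived from the reference point (marker) by undoing all movements posted after that point in time. The dates and quantities below are hypothetical:

```python
# Illustrative backward calculation for one material.
# Dates are plain integers in YYYYMMDD form; values are invented.

def stock_at(day, marker_value, movements):
    """Return the stock on 'day', given the marker value (the stock after
    all compressed movements) and the list of (posting_day, delta)
    movements lying between 'day' and the marker's point in time."""
    # Walking backwards from the marker means undoing every movement
    # posted after the requested day.
    return marker_value - sum(delta for d, delta in movements if d > day)

# Marker: 100 pieces, valid after all movements of 2013 are taken into account.
movements = [(20130705, +30), (20130920, -10), (20131115, +20)]

print(stock_at(20131231, 100, movements))  # 100 (no later movements to undo)
print(stock_at(20130801, 100, movements))  # 100 - (-10) - 20 = 90
print(stock_at(20130101, 100, movements))  # 100 - 30 + 10 - 20 = 60
```

This also makes clear why a tighter time restriction helps: the fewer requested points in time and the shorter the interval, the fewer movements have to be read and undone.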
You can also use LISTCUBE to estimate the number of calculated stock values in the query. You need to use the same global filters (see OT-OLAP Filter Values) as in the query, except time restrictions(!), and add the same characteristics to the drilldown. In addition, since we are interested in the reference points, we set 0recordtp=1:
No time restrictions and no restriction regarding number of rows:
Field Selection for Output (drilldown of query):
In this simple case the result is very small (only 3 records):
As in our example above, this number has to be multiplied by the 'number of requested points in time' (which was 12 in our example) in order to get the number of calculated stock values.
Since the data set is normally much bigger, it makes sense to use the following trick to get the number of reference points in LISTCUBE: just set the following breakpoint and check the parameter l_lines. When the result set is very big (e.g. 2 million records), LISTCUBE would not be able to display such a big result anyway.
It is important to restrict the query as far as possible regarding time. This improves the runtime and the memory consumption simply by lowering the number of points in time for which ncum values have to be calculated. In addition, the number of movements which are used to carry out the 'backward calculation' is reduced.
SAP Consulting Notes
1548125 Interesting facts about Inventory Cubes
1160520 BIA and non-cumulative queries