
How to understand query data while archiving and reloading non-cumulative data

Documentation and terminology

As a prerequisite for understanding this note, it is assumed that you have read the online help and the consulting note covering basic terminology such as marker (reference point), validity, calculation of non-cumulative values, archiving, and reloading:

http://help.sap.com/saphelp_nw2004s/helpdata/en/3f/c219fe9a46194aa66d55254fdab182/frameset.htm

SAP Note 1548125 - Interesting facts about Inventory Cubes
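
To make the terminology concrete, here is a minimal Python sketch (toy data, not SAP code) of how a non-cumulative key figure is calculated: the marker (reference point) at the 'infinite' date holds the current stock, and the stock for earlier periods is derived by rolling the movements backward from the marker.

    # Assumed sample data: net stock movement per month (not actual cube content)
    movements = {
        "2013.01": 10,
        "2013.02": 5,
        "2013.03": -3,
        "2013.04": 8,
    }
    marker = sum(movements.values())   # reference point = current stock (20)

    def stock_at(month):
        """Stock at the end of a month = marker minus all later movements."""
        return marker - sum(v for m, v in movements.items() if m > month)

    for month in sorted(movements):
        print(month, stock_at(month))   # 10, 15, 12, 20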

Demonstration of a sample with screenshots (in RSRT and LISTCUBE)

Initial state of the original cube NRNCARC and query

For simplification, 0CALMONTH is displayed instead of 0CALDAY so that the values contained in the cube NRNCARC can be seen.

(screenshot)

The automatically created validity slice contains the interval for which data was loaded:

(screenshot)

The result of the ad hoc query NRNCARC/!NRNCARC with 0CALMONTH in the drilldown would look like this:

(screenshot)

Archive and delete data older than 01.05.2013 from the original cube NRNCARC

Now the archive run is executed and 2 records are moved to the archive. The query result changes for the archived months, as the movements no longer exist for the calculation.

(screenshot)

For the archived time frames the calculated results do not look 'right', so it is best to go to transaction RSDV and restrict the validity so that it excludes the archived time slots.
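
Continuing the Python sketch from above (same assumed toy data), this is why the archived time frames look wrong: the archive run deletes the movements, but the marker keeps its value, so the backward calculation no longer adds up for months before the archiving border. Restricting the validity in RSDV simply hides those months.

    # Archive everything before 2013.03: the movements are deleted,
    # but the marker is unchanged.
    remaining = {"2013.03": -3, "2013.04": 8}
    marker = 20

    def stock_at(month):
        return marker - sum(v for m, v in remaining.items() if m > month)

    print(stock_at("2013.01"))   # 15 instead of the original 10

    # Restricting the validity (as in RSDV) excludes the archived months:
    validity_from = "2013.03"
    for month in ["2013.01", "2013.02", "2013.03", "2013.04"]:
        if month >= validity_from:
            print(month, stock_at(month))   # only 2013.03 (12) and 2013.04 (20)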

The query now displays only the months that still exist in the cube.

Reload and compress data from the archive into the copy of the cube C_NRNCARC

At a later stage, the data needs to be reloaded from the archive. As described in the online help, a copy of the original cube is created and combined with the original provider in a MultiProvider. The data is then reloaded via an export DataSource into the copied provider C_NRNCARC.

Without compression, the result of the ad hoc query for the copied cube C_NRNCARC/!C_NRNCARC would look like this:

(screenshot)

The ad hoc query on the MultiProvider would now show a wrong stock value, as the two markers are added together:

(screenshot)

It is essential to make sure that no additional marker is created or calculated in the copied cube. The request with the data from the archive has to be compressed without marker update (like historical data).

(screenshot)

Implicitly, a marker is created with the value 0. The content of the copied InfoCube with the data from the archive will look like this:

(screenshot)

(CAUTION: if you see markers with a value different from 0, the scenario cannot work. Either the data has not been compressed, or it has been compressed with marker update!)
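
The effect of the compression mode on the marker can be sketched as follows (Python, same assumed toy data): without marker update the marker of the copied cube stays 0; with marker update the reloaded movements would be added to it, and the MultiProvider would count the stock twice.

    reloaded = {"2013.01": 10, "2013.02": 5}   # movements from the archive

    # Compression WITHOUT marker update (like historical data):
    marker_copy = 0

    # Compression WITH marker update would do this instead:
    marker_copy_wrong = 0 + sum(reloaded.values())   # 15

    # In the MultiProvider the markers of both cubes are added together,
    # so only a zero marker in the copy keeps the combined stock correct:
    marker_original = 20
    print(marker_original + marker_copy)         # 20 -> correct stock
    print(marker_original + marker_copy_wrong)   # 35 -> inconsistent stock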

A query on this copy of the cube containing the archived data will not make sense; reporting should always be done on a MultiProvider (a UNION of the original cube and the cube with the archived data).

Reporting on the MultiProvider NRARCM

The original cube NRNCARC contains the following entries:

(screenshot)

The copied cube with the archived data C_NRNCARC has been displayed previously.

LISTCUBE on the MultiProvider NRARCM now displays the data relevant for the calculation:

(screenshot)

It is important that the marker/reference point for the copied cube exists and that its value is '0'.
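
Put together, the union can be sketched like this (Python, same assumed toy data): the original cube contributes the marker and the remaining movements, the copy contributes the reloaded movements with a zero marker, and the backward calculation returns the same values as before the archiving.

    original = {"marker": 20, "moves": {"2013.03": -3, "2013.04": 8}}
    copy     = {"marker": 0,  "moves": {"2013.01": 10, "2013.02": 5}}

    union_marker = original["marker"] + copy["marker"]     # 20 + 0 = 20
    union_moves  = {**original["moves"], **copy["moves"]}  # all movements

    def stock_at(month):
        return union_marker - sum(v for m, v in union_moves.items() if m > month)

    for month in sorted(union_moves):
        print(month, stock_at(month))   # 10, 15, 12, 20 -> as before archiving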

The system can now calculate the result as expected. For a better understanding, the ad hoc query on the MultiProvider is displayed with 0CALMONTH in the rows and 0INFOPROV in the columns:

(screenshot)

Common problems

Compression in the copied cube with marker update

Data reloaded from the archive with marker update creates an additional stock value and makes the results inconsistent. See the effect shown in the sample above and read up on the concept of historical data in the documentation provided.

HANA system

The idea behind the HANA-optimized InfoCubes was to avoid the marker record at the infinite date and instead calculate the stock values forward from the stock initialization date to the highest requested date, and then backward to the lowest requested date.

This is much more efficient and no longer involves the marker update during compression, which was the most common cause of wrong data in non-cumulative queries.

Unfortunately, because of this algorithm, the architecture no longer allows archiving by time.
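
As a rough Python sketch (assumed toy data, greatly simplified compared to the actual algorithm), the difference is that every stock value is now the sum of all movements since the stock initialization date, so deleting old movements would shift every later value:

    init_stock = 0     # stock at the stock initialization date
    movements = {"2013.01": 10, "2013.02": 5, "2013.03": -3, "2013.04": 8}

    def stock_forward(month):
        """Stock at the end of a month = initial stock + all movements up to it."""
        return init_stock + sum(v for m, v in movements.items() if m <= month)

    print([stock_forward(m) for m in sorted(movements)])   # [10, 15, 12, 20]

    # Deleting the 2013.01 movement (archiving by time) would change every
    # later value as well: [5, 2, 10] instead of [15, 12, 20].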

In BW 7.3x this cannot be solved, so you need to use the "normal" InfoCubes for non-cumulative cubes that have to be archived by time.

In BW 7.40 SP05, the marker record for HANA-optimized non-cumulative cubes is reintroduced because of this problem. There are no plans to downport this development to BW 7.3x.

More information on non-cumulatives in BW 7.3 based on HANA can be found here: Non-cumulatives in HANA

 
