
Important Facts concerning the compression of InfoCubes

Performance problems with the compression of requests can have various causes and require a specific analysis. Nevertheless, such problems usually cannot be rated with priority Very High. This is due to the following facts:

  • Queries show correct results independently of where they get the data: the F-table in the case of uncompressed requests or the E-table for compressed requests.
  • The compression process can be canceled (and started again later) without risking inconsistent data in the cube.
  • For some databases, such as Oracle, it is possible to execute queries even while a compression is running (for other databases see SAP note 2198756).
  • The loading processes into a cube are not affected.

There are several things one can do to find out what may have caused the problems.
The following information summarizes:

  • Possible reasons for issues during the compression
  • What to check in case the compression does not work/is too slow
  • Link to more detailed information
  • Helpful SAP notes

Please note that from a technical point of view the suggestions provided in this article are mainly Oracle-related. Nevertheless, the information on how to analyze and proceed can be applied to all other databases as well.

I) Find out how to proceed

Is it possible to cancel an active compression job?

If a long-running compression job blocks other processes on your system, you can cancel it without the risk of generating inconsistencies in the fact tables of the InfoCube. If the compression job covers several requests, the system checks for which of the requests concerned a rollback has to be done. For those requests where the compression has already finished, no rollback takes place and the data can be found in the E-table. This is described in SAP note 407260.

Does a running compression job disturb other activities?

A long-running compression job in general does not disturb other BW processes, so there is no need to cancel it. Nevertheless, you have to be aware of the following influences:

  • Only one compression job can be executed per InfoCube, but you can start a new data load.
  • If you are using e.g. Oracle as your database, you can also execute queries on the relevant InfoCube while compression is running. With other manufacturers’ databases, you may get a warning message if you try to execute a query on an InfoCube while compression is running. You can execute the query once compression is finished.
  • The performance of queries will be affected if you have dropped the secondary indexes of the E-table to speed up the compression process (see the information in the 'Checks' section below).

Nevertheless, for various reasons you may want to get rid of the compression job, or at least find out how much longer it will take to finish. For that decision you first have to:

Estimate the time until the job will be finished

For that purpose, you should first find out whether there is any progress in the job's activities.

To find out if the job is making any progress, observe its work process in transaction SM50. Do the displayed actions, tables, reports, etc. change over time? You can also take a closer look at the running SQL statement in transaction ST04: open the 'Session Monitor' and find the 'Client Process ID' that matches the process ID from SM50. Check together with your database expert whether the SQL statement changes from time to time, and then estimate the time left until the job is finished.
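
If you prefer to check directly on the database, the following is a minimal Oracle sketch of the same approach: it matches the SM50 process ID against V$SESSION and asks V$SESSION_LONGOPS for the database's own progress estimate. The placeholders &client_pid and &sid are examples and have to be replaced with your values.

    -- Find the DB session belonging to the work process shown in SM50;
    -- V$SESSION.PROCESS holds the client process ID.
    SELECT sid, serial#, status, sql_id, last_call_et
      FROM v$session
     WHERE process = '&client_pid';

    -- Long-running operations report their progress here;
    -- TIME_REMAINING is Oracle's own estimate in seconds.
    SELECT opname, target, sofar, totalwork, time_remaining
      FROM v$session_longops
     WHERE sid = &sid
       AND time_remaining > 0;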

The runtime of course depends mainly on the number of records that have to be updated to the E-table. You can see which requests are affected in transaction RSA1 -> Requests tab (columns 'Requests' plus 'added records'). You can expect the processing of about 2 million rows per hour (in the case of non-cumulative key figures about 1 million rows per hour). However, this is only a rule of thumb and depends on both the capacity of your database and the size (number of characteristics and key figures) of the cube. If you want a more precise estimate, you can search for other requests (to the affected cube) that have been compressed successfully in the past. You will probably find a single request with a considerable amount of data. Check the log in the cube administration and find the corresponding job in SM37 to get the runtime. This information can be used to extrapolate the runtime of the affected request and should be a valuable decision guide for the question whether or not to cancel the job. Please note that in the case of very big requests it might even be necessary to reload the data in several smaller requests.
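
As a worked example of the rule of thumb: if RSA1 shows 10,000,000 added records for the requests to be compressed, you can expect roughly 10,000,000 / 2,000,000 = 5 hours (about 10 hours if the cube contains non-cumulative key figures). If you want to cross-check the number of uncompressed rows directly on the database, a minimal Oracle sketch follows; the cube name MYCUBE is an example placeholder, and the F fact table of a customer-defined cube is named /BIC/F<cube>.

    -- Total number of rows still sitting in the F fact table
    -- (i.e. not yet compressed into the E table).
    SELECT COUNT(*) FROM "/BIC/FMYCUBE";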

If you compress more than one request and want to know how far the job has already progressed, you can check the status in RSA1. A clock symbol is shown for all requests where the compression is not yet finished.

II) Cancel the job

If you see that the job hangs without any progress, or if your calculations indicate that finishing the compression will take too much time, you may decide to cancel the job and follow the suggestions below in this article before you start a new compression.

III) Checks/updates before restarting the compression

1. Check database updates

The compression of requests is a process which is executed exclusively by the database. This is why it is particularly important not only to use a current version of the database but also to make sure that all the latest patches and hot fixes are implemented. Hence it is recommended that, in the case of such performance issues, your database administrator checks whether all patches and hotfixes are implemented and the database is up to date. Below you find some SAP notes which may be of use for that purpose.

ORACLE

1705026 - Basic requirements to solve BW performance issues on Oracle
1503709 - Oracle 11.2.0: Patches / Patch collections for 11.2.0.2
1171650 - Automated Oracle DB parameter check
1175996 - Oracle 10g/11g patches check
1431798 - Oracle 11.2.0: Database Parameter Settings

DB2, DB4, DB6

899322 - DB6: DB2 9.1 Standard Parameter Settings

MSSQL

1076022 - Release planning for Microsoft SQL Server 2008 (R2)
62988 - Service Packs for Microsoft SQL Server
1733195 - Microsoft SQL Server version and download list
1860832 - Performance improvement for BW cube compression

2. Existence of the 'P' index (Oracle)

Check the E fact table and make sure that this index exists. Without it, compression is not possible; see SAP note 407260.
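
A minimal Oracle sketch for this check, assuming the example cube name MYCUBE (the E fact table of a customer-defined cube is named /BIC/E<cube>, and SAP names the compression-relevant unique index '<table>~P'):

    -- List the indexes on the E fact table; the 'P' index should
    -- appear as /BIC/EMYCUBE~P with UNIQUENESS = 'UNIQUE'.
    SELECT index_name, uniqueness
      FROM dba_indexes
     WHERE table_name = '/BIC/EMYCUBE';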

3. Deleting secondary indexes of the E fact table

Deleting the secondary indexes on the E fact table of an InfoCube that is to be compressed may be useful (sometimes even necessary) to prevent resource shortages on the database. Since the secondary indexes are needed for reporting (not for compression), queries may take much longer while the secondary E-table indexes are missing. How to drop the indexes on the E fact table and rebuild them when the compression is finished is also described in SAP note 407260.
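
To see which secondary indexes would be involved, a hedged Oracle sketch (the cube name MYCUBE is again an example; the actual dropping and rebuilding should be done with the SAP tools described in note 407260, not with direct DDL on the database):

    -- Secondary indexes are all indexes on the E fact table
    -- except the mandatory '~P' index.
    SELECT index_name
      FROM dba_indexes
     WHERE table_name = '/BIC/EMYCUBE'
       AND index_name NOT LIKE '%~P';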

4. Update statistics

The optimizer tries to find the most efficient access path for every process. After the compression has finished, the distribution of the data in the fact tables has changed and there is much more data in the E-table. To avoid performance issues with your queries it is recommended to update the statistics after the compression as well.
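
On Oracle this can be done, for example, with DBMS_STATS; the schema owner SAPSR3 and the cube name MYCUBE below are assumptions, and on SAP systems the statistics are normally refreshed via BRCONNECT (brconnect -f stats) or the DBA planning calendar (DB13):

    BEGIN
      -- Refresh the optimizer statistics on both fact tables
      -- after the compression has finished.
      DBMS_STATS.GATHER_TABLE_STATS(ownname => 'SAPSR3', tabname => '/BIC/EMYCUBE');
      DBMS_STATS.GATHER_TABLE_STATS(ownname => 'SAPSR3', tabname => '/BIC/FMYCUBE');
    END;
    /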

5. Zero elimination

The compression provides the option to eliminate zero values (records where all key figures are zero). This is an additional task for the database and may have an impact on performance. Therefore you may switch it off temporarily until the problematic request(s) are compressed. If the E-table then contains zero records, you can remove them as described in SAP note 1587759 (solution III.b).
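
To check whether the E-table already contains such zero records, a minimal Oracle sketch; /BIC/ZAMOUNT and /BIC/ZQUANTITY are hypothetical key figure columns of the example cube MYCUBE and have to be replaced with the key figures of your cube:

    -- Count E-table rows in which every key figure is zero;
    -- such rows are candidates for the cleanup in note 1587759.
    SELECT COUNT(*)
      FROM "/BIC/EMYCUBE"
     WHERE "/BIC/ZAMOUNT" = 0
       AND "/BIC/ZQUANTITY" = 0;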

IV) Please provide the following information in case SAP support is requested:

  • Request number(s) (SID)
  • Number of records to be compressed
  • Expected runtime (see point "Estimate the time until the job will be finished" above)
  • Name and starting date/time of the compression job
  • Information regarding the secondary indexes on the E fact table
  • Confirmation that the database has been checked for hotfixes and patches by your Basis administrator

V) Good to know

- Never keep too many uncompressed requests in your InfoCubes (see SAP note 590370)
- For some MSSQL releases the maximum number of partitions is limited to 1000 per table (see SAP note 869407)
- Big requests should be compressed separately
- You can clean up the F-table by removing empty or unused partitions with the report SAP_DROP_EMPTY_FPARTITIONS before you start a compression; this may also help to achieve better performance. See SAP note 430486 and the sketch after this list
- The job log shows DBMAN 368 “Characteristic attribute exists twice”: the error can be removed with report RSCDS_MERGE_DUPL_REF_POINTS. See SAP note 1587759 for more detailed information
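
A hedged Oracle sketch for spotting empty F-table partitions before running SAP_DROP_EMPTY_FPARTITIONS (the cube name MYCUBE is an example; NUM_ROWS relies on up-to-date optimizer statistics):

    -- Partitions of the F fact table whose statistics report zero rows.
    SELECT partition_name, num_rows
      FROM dba_tab_partitions
     WHERE table_name = '/BIC/FMYCUBE'
       AND num_rows = 0;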

VI) SAP notes and KBAs

List of interesting Notes and KBAs:

  • 407260 - FAQs: Compression of InfoCubes
  • 590370 - Too many uncompressed requests (F table partitions)
  • 1778638 - Important Facts concerning the compression of InfoCubes
  • 1472386 - ORA-04031 out of memory error in BW systems
  • 430486 - Overview/repair of F fact table of a BW InfoCube
  • 1012829 - DB6: Long Running BW Compression
  • 1587759 - Duplicate Records in InfoCube Dimensions
  • 869407 - Partitioning on Microsoft SQL Server
  • 2198756 - Due to Cube Being Locked, Query Cannot be Executed at the Moment

http://help.sap.com/saphelp_nw70ehp2/helpdata/en/ca/aa6437e7a4080ee10000009b38f842/frameset.htm
