
This document describes how to analyze, optimize, and manage the size of the SAP Business Warehouse (SAP BW) in your SAP Solution Manager 7.1, be it as part of regular system administration activities, as part of the preparation for an upgrade to SAP Solution Manager 7.2, or in preparation for a SAP HANA migration of a SAP Solution Manager 7.2 system. It does not describe how to reduce the data volume for objects which are not part of the SAP BW.

For a general overview of the various possibilities to delete or archive data in SAP Solution Manager, see SAP Note 2257558.

Important
Before implementing any of the recommendations in this document, carefully analyze the impact, as these recommendations mainly involve the deletion of data! It is strongly recommended to implement them first in a test system. SAP Solution Manager functionalities which can be affected include mainly (but not exclusively) End-to-End (E2E) Diagnostics, Technical Monitoring, (End-)User Experience Monitoring, Business Process Analytics, Data Volume Management, and Custom Code Management.

Analyzing the SAP BW Size in SAP Solution Manager

To store data for reporting, SAP Solution Manager 7.1 includes a SAP Business Warehouse 7.02. While this is a fully functional SAP BW, the way it is used in SAP Solution Manager differs from ‘normal’ SAP BW usage. These differences have to be taken into account when analyzing the size of the data-bearing objects in the SAP BW and when optimizing the data volume.

SAP BW data model in SAP Solution Manager

General structure

As in any SAP BW, SAP Solution Manager uses different data ‘containers’, the so-called InfoProviders, to store data. The most important InfoProvider is the InfoCube, which is optimized for reporting and long-term data storage. In addition, SAP Solution Manager uses the Data Store Object (DSO) as temporary storage, usually holding data until they have been aggregated and moved to the InfoCubes.

The data needed for a specific report (the reports are based on SAP BW Queries) can reside in different InfoProviders; e.g. there can be different InfoCubes with different granularities (older data are more aggregated than recent data), or there can be two identical InfoCubes in the so-called twin cube concept (discussed later). As a Query does not know in which InfoProvider the requested data are stored, the InfoProviders are grouped into so-called MultiProviders, which are logical views on InfoProviders containing ‘similar’ data. The Queries are designed to access the MultiProviders and not the individual InfoProviders directly.

 

A prominent example is the MultiProvider 0CCMPDATA (see Picture 1), which contains monitoring data collected via the E2E Monitoring and Alerting Infrastructure (MAI)[1]. The InfoProviders can be found via the central SAP BW transaction RSA1. The MultiProvider 0CCMPDATA includes various InfoCubes for data of different granularities. The InfoCubes have a suffix 1 or 2, indicating that there are two identical InfoCubes due to the twin cube concept:

Picture 1

While the reporting largely reflects standard SAP BW functionality, data loading and staging differ significantly from ‘standard’ SAP BW usage. There are no SAP BW extractors in the narrower sense, no DataSources, no InfoPackages, no PSA (Persistent Staging Area), no DTPs (Data Transfer Processes), and no process chains[2]. Instead, these functionalities have been replaced in SAP Solution Manager by the Extractor (and Alerting) Framework (EFWK) and several reorganization jobs for aggregation and housekeeping. These jobs are described later. More details about the EFWK can be found here: https://wiki.scn.sap.com/wiki/display/TechOps/EFWK_Home.

Usage of the report SAP_INFOCUBE_DESIGNS

The most important InfoProvider, the InfoCube, uses an extended star schema in which the data are distributed over several database tables grouped under one logical object. The extended star schema is described elsewhere; for our purposes, it is sufficient to know that one data ‘container’ (the InfoCube) consists of several dimension tables and two central fact tables, the F- and the E-fact table. When data are loaded into the InfoCube, each new ‘request’ is written to the F-fact table. These requests are aggregated and written to the E-fact table in a process called compression.

The report SAP_INFOCUBE_DESIGNS can be used to analyze the number of records in the different tables. When executed, the report provides this information for all available InfoCubes. Picture 2 shows the result for the InfoCube 0CCMWCDD1:
Picture 2

 

The first 12 lines are the dimension tables, the last two are the E- and F-fact tables, respectively. In our example, 100% of the records in the fact tables are in the F-fact table, which means that the InfoCube is currently not compressed. In general, InfoCubes should be compressed for performance reasons. However, the package dimension (fourth line from the bottom; the table contains two entries), which holds the request ID, indicates that there are only a few uncompressed requests. In practice, the number of uncompressed requests that is tolerable depends on the database. For Oracle databases, for instance, Queries can already run into performance problems with 30 uncompressed requests (see SAP Note 590370). For more information about InfoCube compression in SAP Solution Manager, see SAP Note 1178655.

While compression can sometimes reduce the data volume, it does not systematically lead to significantly smaller InfoCubes. The performance aspect of compression is much more important.
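If you only want to spot-check a single InfoCube instead of running the full report, you can also compare the row counts of its fact tables directly. The following ABAP sketch is an illustration only (the Z-report name is hypothetical); it assumes the standard fact table naming convention for SAP-delivered (0*) InfoCubes, /BI0/F<cube> and /BI0/E<cube>, and that the InfoCube exists in your system:

   REPORT z_check_compression.
   " Spot-check the compression status of an InfoCube by comparing
   " the row counts of its F-fact table (uncompressed requests) and
   " its E-fact table (compressed data).
   PARAMETERS p_cube TYPE c LENGTH 30 DEFAULT '0CCMWCDD1'.

   DATA: lv_ftab TYPE tabname,
         lv_etab TYPE tabname,
         lv_fcnt TYPE i,
         lv_ecnt TYPE i.

   CONCATENATE '/BI0/F' p_cube INTO lv_ftab.
   CONCATENATE '/BI0/E' p_cube INTO lv_etab.

   SELECT COUNT( * ) FROM (lv_ftab) INTO lv_fcnt.
   SELECT COUNT( * ) FROM (lv_etab) INTO lv_ecnt.

   WRITE: / 'F-fact table:', lv_ftab, lv_fcnt,
          / 'E-fact table:', lv_etab, lv_ecnt.

If (nearly) all records are in the F-fact table, the InfoCube is not compressed.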

Analyzing the size with transaction ST14

To analyze the database of SAP Solution Manager with the goal of identifying the largest objects, especially the largest SAP BW objects (which are not single tables but logical groupings of several tables), transaction ST14 is very helpful. Please note that ST14 is intended for use by SAP support and is not officially released or documented. Nevertheless, it can be used to perform a simple analysis.

To perform such an analysis, proceed as follows: Execute transaction ST14. When asked, select “Business Warehouse” (see Picture 3):

 

 Picture 3

 

Select “Schedule job”. Enter the SAP BW client and an “Analysis title”, and select “Basis Data. Top Objects”. Click “Schedule job” (see Picture 4):
Picture 4
After a few minutes, the result will be available (see Picture 5):
Picture 5
Most important are the “Top 30 Objects”, which can also be tables not belonging to the SAP BW:
Picture 6
In the example above (Picture 6), there is no SAP BW object among the top 30 objects! In this case, it is likely that all efforts to reduce the data volume of the SAP BW will have only a minor effect on the overall database size. However, in many customer systems, most of the top 30 objects are actually InfoCubes. In such a case, you should first analyze whether the housekeeping/reorganization jobs are scheduled and run without issues; in a second step, you can consider reducing the data retention times or restricting the amount of data collected. Both aspects are covered in the following sections of this document.

SAP BW housekeeping and reorganization mechanisms in SAP Solution Manager

While in a standard SAP BW system housekeeping has to be considered explicitly and is usually implemented via dedicated steps in the process chains for data loading and staging, or via separate housekeeping process chains, SAP Solution Manager offers a “black box” approach to housekeeping using dedicated jobs. This has the advantage that administration requires neither SAP BW knowledge nor a deep understanding of the data model in use. On the other hand, the possibilities to influence the data volume by tuning the housekeeping are limited, and it may be difficult to understand why the data volume is growing in case the housekeeping is not working as expected.

There are two main SAP BW housekeeping mechanisms in SAP Solution Manager: the E2E BI Housekeeping job and the Twin Cube Reorganization (implemented via different jobs). These jobs not only delete old data once their retention period has been exceeded, they also take care of InfoCube compression, index checks, statistics creation, etc., as well as of the aggregation from higher to lower granularities.

This chapter describes the different housekeeping/reorganization mechanisms.       

E2E BI Housekeeping

E2E BI Housekeeping is historically the “older” mechanism. It has largely been replaced by the Twin Cube Reorganization for performance reasons. However, it is still used for some important InfoCubes and requires some attention from the SAP Solution Manager administrator.

Description of the E2E BI Housekeeping mechanism

E2E BI Housekeeping consists mainly of two steps:

 1. Aggregation from higher to lower granularities (e.g. from InfoCubes with suffix “H” to InfoCubes with suffix “D”, meaning the aggregation is done from hourly to daily data).

Picture 7

2. Selective deletion of entries which have exceeded their lifetime.

These tasks are performed by the report E2E_HK_CONTROLLER, which is scheduled via the job E2E BI HOUSEKEEPING. By default, this job is scheduled once per day. Because selective deletion is expensive, this job can run for quite a long time; in some large systems it has even run for more than a day, which was the main reason to introduce the Twin Cube Reorganization, which avoids selective deletion. In a SAP Solution Manager 7.1, this job should not run longer than a few hours.
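To get a quick overview of the recent runtimes of this job without clicking through SM37, you can evaluate the standard job header table TBTCO directly. This is a minimal read-only sketch (the Z-report name is hypothetical); the same pattern works for the twin cube reorganization jobs described later (e.g. CCMS_ASR_AGGR_DATA_UPDATE):

   REPORT z_check_hk_job.
   " List the ten most recent runs of a background job, based on the
   " standard job header table TBTCO (the same data shown in SM37).
   PARAMETERS p_job TYPE btcjob DEFAULT 'E2E BI HOUSEKEEPING'.

   DATA: lt_jobs TYPE STANDARD TABLE OF tbtco,
         ls_job  TYPE tbtco.

   SELECT * FROM tbtco INTO TABLE lt_jobs
     WHERE jobname = p_job
     ORDER BY strtdate DESCENDING strttime DESCENDING.

   LOOP AT lt_jobs INTO ls_job FROM 1 TO 10.
     " status: F = finished, A = aborted, R = running, S = released
     WRITE: / ls_job-strtdate, ls_job-strttime,
              ls_job-enddate,  ls_job-endtime, ls_job-status.
   ENDLOOP.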

If you suspect that there could be problems with the E2E BI Housekeeping, you can check the job log and the log of the spool request (via transaction SM37) for more details. In the spool log, you will first find information about the aggregation, e.g. source and target InfoCubes, aggregators etc.:

Picture 8

Picture 9

The aggregation phase concludes with information about the total time needed for the aggregation:

Picture 10

At the end of the spool log, you will find information about the deletion phase:

Picture 11

Maintenance of the table E2E_BI_DELETE

Some control over the retention times and the data volumes of the associated InfoCubes can be gained by maintaining the table E2E_BI_DELETE, which is one of the control tables for the program E2E_HK_CONTROLLER. This is described in detail in the document attached to SAP Note 1480588.

The main idea is to define so-called “deletors”, which are sets of criteria for the deletion of data. The table E2E_BI_DELETE typically looks like this:

Picture 12

By default, the lifetimes are 31 days for “H” cubes and 100 days for “D” cubes (as in the example above; the defaults may differ between SP levels). The easiest way to decrease the data volume is to define shorter data retention times. In addition, you can define deletors with more granular deletion criteria. Some examples can be found in the document attached to SAP Note 1480588. However, please be aware that reducing the retention times can have an impact on the availability of data in the respective applications!
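To review the deletors currently defined in your system, you can display the table via SE16 or with a small read-only ALV sketch like the following (the Z-report name is hypothetical; cl_salv_table is standard):

   REPORT z_show_deletors.
   " Read-only display of the housekeeping control table E2E_BI_DELETE.
   " Maintain the table only as described in SAP Note 1480588.
   DATA: lt_del TYPE STANDARD TABLE OF e2e_bi_delete,
         lo_alv TYPE REF TO cl_salv_table.

   SELECT * FROM e2e_bi_delete INTO TABLE lt_del.

   TRY.
       cl_salv_table=>factory( IMPORTING r_salv_table = lo_alv
                               CHANGING  t_table      = lt_del ).
       lo_alv->display( ).
     CATCH cx_salv_msg.
       WRITE / 'Error creating ALV output.'.
   ENDTRY.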

Housekeeping of the BI Hashtable

Sometimes the table SMD_HASH_TABLE is among the top 30 tables in the SAP BW client of SAP Solution Manager. In this case, you first have to analyze which combinations of InfoCube and InfoObject are responsible for the majority of the entries in the hash table. Afterwards, you can define appropriate housekeeping criteria via the table E2E_BI_DELETE. All this is described in the document attached to SAP Note 1480588.
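For a first rough impression of this distribution, a group-by count on the hash table can help. Important: the column names CUBENAME and IOBJNM in the following sketch are purely illustrative assumptions; check the actual structure of SMD_HASH_TABLE in SE11 (or in the document attached to SAP Note 1480588) and adapt the sketch before running it:

   REPORT z_analyze_hash_table.
   " Count SMD_HASH_TABLE entries per InfoCube/InfoObject combination.
   " CAUTION: the field names CUBENAME and IOBJNM are assumptions for
   " illustration only - verify the real column names in SE11 first.
   TYPES: BEGIN OF ty_cnt,
            cubename TYPE c LENGTH 30,
            iobjnm   TYPE c LENGTH 30,
            cnt      TYPE i,
          END OF ty_cnt.
   DATA: lt_cnt TYPE STANDARD TABLE OF ty_cnt,
         ls_cnt TYPE ty_cnt.

   SELECT cubename iobjnm COUNT( * )
     FROM smd_hash_table
     INTO TABLE lt_cnt
     GROUP BY cubename iobjnm.

   SORT lt_cnt BY cnt DESCENDING.

   LOOP AT lt_cnt INTO ls_cnt FROM 1 TO 20.
     WRITE: / ls_cnt-cubename, ls_cnt-iobjnm, ls_cnt-cnt.
   ENDLOOP.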

InfoCube compression, request-based deletion

If you have identified an InfoCube with a high number of uncompressed requests via the report SAP_INFOCUBE_DESIGNS, and provided that this InfoCube is managed via E2E BI Housekeeping, you can maintain settings for automatic compression via the table E2E_BI_HOUSEKEEP. Details about InfoCube compression in SAP Solution Manager can also be found in SAP Note 1178655. Compression can also be performed manually or via other mechanisms, as described in SAP Note 590370. However, the most recent requests are still needed by the E2E BI Housekeeping mechanism; therefore, it is important not to compress the requests from the current day, and ideally to leave the requests from the last three days uncompressed. By using the “compress cube” option of the E2E BI Housekeeping mechanism (via table E2E_BI_HOUSEKEEP), you ensure that only those requests are compressed which have already been aggregated. Again, this works only for InfoCubes which are managed via E2E BI Housekeeping.

A further option offered by E2E BI Housekeeping is deletion by request. This works only as long as compression is not used for the InfoCube, as compression removes the request information. It can be an option when the total number of requests in the F-fact table stays low (or the database type tolerates a higher number) and the performance of the selective deletion is poor; deletion by request should be much faster.

All these options are described in SAP Note 1480588.

Twin cube reorganization

To overcome the performance problems of the selective deletion in the E2E BI Housekeeping, a new mechanism has been introduced which avoids selective deletion. This so-called twin cube reorganization is a general concept and has been implemented in different flavors using different jobs and configuration tables. The details are described in this section.

Idea behind twin cube reorganization

Instead of using selective deletion to remove old records, the twin cube reorganization deletes the content of a whole InfoCube via truncate commands on database level. However, this means that all data in the respective InfoCube, covering a whole timeframe, are removed at once. To keep records which have not yet reached their retention time, these records have to be stored in a separate InfoCube. Hence, there are always two identical InfoCubes with the suffix 1 or 2 (e.g. 0SMD_PEH1 and 0SMD_PEH2). Assume the data retention time has been defined as n days; a so-called twin cube data loader will initially write new records to the InfoCube with suffix 1. After n days, the data loader switches to the second InfoCube. After 2n days, both InfoCubes are “full”, and the first one is emptied by a twin cube reorganization job. Afterwards, the data loader again writes new records to the first InfoCube. The data volume as a function of time has a characteristic sawtooth shape:

Picture 13

Note that the twin cube concept is not optimal regarding data volume: data are actually kept for up to twice their retention time, and the data volume can be about twice as high as it would be with frequent selective deletion. Furthermore, all records of a cube have to be deleted together in the twin cube concept, while selective deletion allows treating data from different systems/SIDs differently. However, the simplicity of the concept and the much better performance compared to selective deletion outweigh these disadvantages.
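As a back-of-the-envelope illustration (our own calculation, not taken from SAP documentation): a record loaded into a twin cube is physically removed only when its cube is truncated, so with a retention time of n days

   n  ≤  age of a record at deletion  ≤  2n
   average stored volume  ≈  1.5 × n × (data volume per day)

For example, with a lifetime of n = 100 days, a twin cube pair always holds between 100 and 200 days of data (about 150 on average), whereas frequent selective deletion would keep it close to 100 days.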

An important difference compared to the E2E BI Housekeeping is that there is no aggregation as part of the twin cube reorganization. Instead, the data loader has to fill the different aggregates simultaneously. The data are initially stored in DSOs (Data Store Objects), from where they are moved to the InfoCubes (as aggregated values) by the data loader. The reorganization jobs also have to take care of the deletion of records from the DSOs. The exact mechanism may depend on the application area to which the InfoCubes belong or on the respective reorganization job. For statistics data (ABAP statistics records or ASR), for instance, the data flow schematically looks like this:

 

Picture 14

For monitoring data (data collected via the E2E Monitoring and Alerting Infrastructure (MAI) and stored in the MultiProvider 0CCMPDATA) the data flow looks slightly different (in particular, different reorganization jobs are used):

Picture 15

As a SAP Solution Manager administrator, you do not need to understand the differences between the different mechanisms in detail; however, you should know the different jobs which are involved. Furthermore, you should know how to maintain retention times in the different scenarios.

Generic twin cube reorganization job

CCMS_ASR_AGGR_DATA_UPDATE is the twin cube reorganization job which handles most InfoCubes, including, among others, the InfoCubes for ABAP statistics records (ASR) data and other Root Cause Analysis (RCA) related InfoCubes. The job is scheduled once per hour, and its duration should normally not exceed one hour. However, by default up to three instances of the job can run in parallel if the duration of one hour is exceeded (this default value can be changed, see SAP Note 1945500).

From the point of view of the SAP Solution Manager administrator, twin cube reorganization follows a black box approach and does not offer many options for customizing. However, it is possible to adapt the lifetimes of the data using two important tables: ASRSTAT_LIFETIME and ASRSTAT_PROFILES.

In the table ASRSTAT_LIFETIME, several lifetime profiles with corresponding lifetimes are defined according to the scenarios activated in SAP Solution Manager:

Picture 16

The number and names of the profiles can differ from the example above depending on the functionalities activated in your SAP Solution Manager system. The corresponding lifetimes are specified in days and can be changed via different UIs, which will be discussed in the next chapter.

The table ASRSTAT_PROFILES assigns these lifetime profiles to the pairs of twin cubes:

Picture 17

For instance, the pair of twin cubes 0CCMWCDD1 and 0CCMWCDD2 is supplied with data from the DSO 0CCMAWCD. The switch of the twin cube data loader happens according to the profile LIFETIME_DAY, which is set to 100 days in table ASRSTAT_LIFETIME. We can see that the last switch happened from the 9th to the 10th of August. The InfoCube which is currently being loaded with data is 0CCMWCDD1. (Note that the field FIRST_LOAD2 is empty in this example; this information is deleted during the switch.)

The easiest way to reduce the data volume of InfoCubes listed in table ASRSTAT_PROFILES is to reduce the corresponding lifetime in table ASRSTAT_LIFETIME. As mentioned, this is usually done via different UIs and will be discussed later. However, please be aware that this change of lifetimes, too, will have an impact on the availability of data in the respective applications!
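The same read-only ALV pattern shown earlier for E2E_BI_DELETE can be used to inspect the twin cube configuration; a minimal sketch (hypothetical Z-report name; remember to change lifetimes only via the UIs, never directly in the tables, see footnote [4]):

   REPORT z_show_twin_cube_config.
   " Read-only look at the twin cube lifetime configuration.
   DATA: lt_life TYPE STANDARD TABLE OF asrstat_lifetime,
         lt_prof TYPE STANDARD TABLE OF asrstat_profiles,
         lo_alv  TYPE REF TO cl_salv_table.

   SELECT * FROM asrstat_lifetime INTO TABLE lt_life.
   SELECT * FROM asrstat_profiles INTO TABLE lt_prof.

   TRY.
       " display the lifetime profiles first ...
       cl_salv_table=>factory( IMPORTING r_salv_table = lo_alv
                               CHANGING  t_table      = lt_life ).
       lo_alv->display( ).
       " ... then the assignment to the twin cube pairs
       cl_salv_table=>factory( IMPORTING r_salv_table = lo_alv
                               CHANGING  t_table      = lt_prof ).
       lo_alv->display( ).
     CATCH cx_salv_msg.
       WRITE / 'Error creating ALV output.'.
   ENDTRY.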

Errors during twin cube reorganization are logged and can be displayed via the report SMBI_PN_SHOW_MESSAGES. See SAP Note 1708736 for further details. Not all types of error messages are self-explanatory; when in doubt, send an incident to SAP.

Twin cube reorganization for monitoring data

As already mentioned, monitoring data are handled via a separate twin cube reorganization mechanism. There are two important jobs: CCMS_CPH_RAW_DATA_UPDATE (scheduled hourly) and CCMS_CPH_AGGR_DATA_UPDATE[3] (scheduled daily). Only one instance of each job can run at a time. However, the duration of these jobs is usually not a problem.

For the InfoCubes managed by these jobs (MultiProvider 0CCMPDATA), there is no entry in table ASRSTAT_PROFILES. However, the lifetimes are the same as defined in table ASRSTAT_LIFETIME and are configured in the same UI in the Guided Procedure for configuration of System Monitoring (to be discussed later)[4].

Managing the SAP BW Size of SAP Solution Manager in Practice

Data avoidance

One of the easiest but most often overlooked strategies to keep the data volume small is data avoidance. This aspect should be considered as early as when connecting managed systems to SAP Solution Manager.

Since SAP Solution Manager 7.1 SP12, it is possible to connect non-productive managed systems via a “minimal” Managed Systems Configuration, in which only the extractors for E2E Change Analysis and E2E Trace Analysis are scheduled. Considering that usually more than half of the connected systems are non-productive, and that Workload Analysis and Exception Analysis data are not collected for those systems in the minimal configuration, a considerable volume of data can be saved. For more information on Managed Systems Configuration see https://wiki.scn.sap.com/wiki/display/SMSETUP/Home.

When configuring the different functionalities (e.g. System, DB, and Host Monitoring) in SAP Solution Manager, make sure to collect only those data which are actually required. For instance, in the monitoring templates, activate only the required metrics and store them in the SAP BW only if you need them for reporting purposes. Metrics have to be stored in the SAP BW only if (a) you want to keep the history of single metrics, which can be accessed via the Metric Viewer, or (b) they are needed by the other reports in Interactive Reporting (see SAP Note 1909531). If you do not need these functionalities, you can switch off the forwarding of individual metrics to the SAP BW in your monitoring templates, on the “Data Usage” tab of the metric settings (for more information on template maintenance, see https://wiki.scn.sap.com/wiki/display/TechOps/SysMon_Setup_and_Configuration).

Furthermore, you can decide which granularity levels of MAI metrics are stored in the SAP BW. Since SAP Solution Manager 7.1 SP12, the default setting is to store only raw data and hourly averages, which are the data required by the Metric Viewer (showing the history of single metrics). Daily, weekly, and monthly averages are required only for metrics which are used by Interactive Reporting. For details see SAP Note 1999511, especially if your SAP Solution Manager 7.1 is still on SP10/11.

Maintaining data retention times

The most effective strategy to reduce the data volume is to decrease the data retention times, provided the requirements for data availability allow it. The decision to reduce the lifetime of data should be preceded by a careful examination of the top objects in your SAP BW, as described above. In any case, keep in mind that the respective applications will no longer have all data available compared to the default retention times.

E2E BI Housekeeping

As mentioned before, data retention times for data stored in InfoCubes which are managed by E2E BI Housekeeping are defined in table E2E_BI_DELETE. The details are explained in SAP Note 1480588 and will not be repeated here. See also SAP Note 1514022 for Business Process Analytics data.

The InfoCubes for Data Volume Management (DVM) are an exception. The retention times for DVM data can conveniently be defined in step 2 of the Guided Procedure for the configuration of the DVM functionality.

Maintaining table ASRSTAT_LIFETIME via UIs

Retention times for data stored in InfoCubes which are managed by twin cube reorganization are defined in table ASRSTAT_LIFETIME. They can be changed via WebDynpro user interfaces which are part of the Guided Procedures for the configuration of different functionalities; you find these in the work center “SAP Solution Manager: Configuration” (or alternatively via transaction SOLMAN_SETUP). For instance, select “Technical Monitoring” and the radio button “System Monitoring”, and go to step 2.5 “Housekeeping”:

Picture 18

You will find a UI which allows you to change the lifetimes of monitoring data for the different granularity levels:

Picture 19

These values correspond to the profiles LIFETIME_RAW, LIFETIME_HOUR, LIFETIME_DAY, LIFETIME_WEEK and LIFETIME_MONTH in table ASRSTAT_LIFETIME. Please note that these values apply to all InfoCubes listed in table ASRSTAT_PROFILES which are using those profiles (in addition to the InfoCubes for monitoring data, which are actually not managed via ASRSTAT_PROFILES).

The configuration of other monitoring functionalities creates further entries in table ASRSTAT_LIFETIME, e.g. LIFETIME_HOUR_EEM and LIFETIME_DAY_EEM for End-User Experience Monitoring.

Maintaining lifetimes for 0SMD_PEH and 0SMD_PED

The InfoCubes 0SMD_PEH1, 0SMD_PEH2, 0SMD_PED1, and 0SMD_PED2 can grow to a considerable size, and you may want to change the lifetimes for the corresponding profiles 0SMD_PEH and 0SMD_PED. In many cases, changing the table entries directly is not allowed.

However, you can change the lifetimes in a UI which can be accessed directly via the following URL: http(s)://<Host>:<Port>/sap/bc/webdynpro/sap/e2ewa_cfg?show_reorg_settings=X

(<Host> stands for the host of the SAP Solution Manager, <Port> for the ICM-HTTP(S) port).
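For example, assuming a hypothetical host solman.example.com and HTTPS port 44300, the complete URL would be:

   https://solman.example.com:44300/sap/bc/webdynpro/sap/e2ewa_cfg?show_reorg_settings=X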

See also SAP Note 1911191 for more details.

Deleting outdated data manually

In some cases you may not want to wait until data are removed by one of the housekeeping/reorganization jobs (e.g. because a functionality has been decommissioned, or in preparation for an upgrade or an SAP HANA migration). In such cases, you can also delete data manually via transaction RSA1.

 Important: Be careful to analyze the impact before deleting data!

Some InfoCubes are no longer used (e.g. because they have been replaced by others due to a migration to the twin cube concept). Their content can be deleted manually if it is no longer needed (be aware that such InfoCubes can still contain historical data which you may want to keep).

The following InfoCubes are not supplied with new data as of SAP Solution Manager 7.1 SPS 5:

  • 0CCMAETPH
  • 0CCMSAINQ
  • 0CCMSAEXE
  • 0CCMSARC
  • 0CCMARCH
  • 0CCMSARCD
  • 0CCMARCDH
  • 0CCMSARS
  • 0CCMARSH
  • 0CCMSARSD
  • 0CCMARSDH
  • 0CCMSATIM
  • 0CCMSATPH
  • 0CCMSAWSX
  • 0CPH_15M
  • 0CPH_VRKF
  • 0CPH_DAY
  • 0CPH_HOUR
  • 0CPH_MIN
  • 0CPH_MIMA

The following InfoCubes contain only data collected before the upgrade to SAP Solution Manager 7.1 SPS 8 (or higher):

  • 0SMD_PE2H
  • 0SMD_PE2D

See also SAP Note 2260715 for more details.

Another example is the InfoCube 0SM_CCL, which contains data for the “old” CCLM (Custom Code Lifecycle Management) dashboards. This InfoCube has not been in use since SP12, having been replaced by the InfoCube 0SM_CCLM (which, in contrast to its predecessor, is included in housekeeping). If the SAP Solution Manager is on SP12 or higher and the InfoCube 0SM_CCLM exists, the content of InfoCube 0SM_CCL can be deleted.
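If you want to verify this precondition programmatically before deleting anything, you can check the standard BW InfoCube directory table RSDCUBE for an active version of 0SM_CCLM. A minimal sketch (hypothetical Z-report name):

   REPORT z_check_ccl_replacement.
   " Check whether the successor InfoCube 0SM_CCLM exists as an active
   " object before deleting the content of the obsolete cube 0SM_CCL.
   DATA lv_cube TYPE rsdcube-infocube.

   SELECT SINGLE infocube FROM rsdcube INTO lv_cube
     WHERE infocube = '0SM_CCLM'
       AND objvers  = 'A'.

   IF sy-subrc = 0.
     WRITE / '0SM_CCLM exists (active version) - 0SM_CCL content can be deleted.'.
   ELSE.
     WRITE / '0SM_CCLM not found - do NOT delete the content of 0SM_CCL.'.
   ENDIF.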

To delete the content of these InfoCubes manually, go to transaction RSA1 and search for the InfoCube via the “Find” functionality:

Picture 20

Right-click on the InfoCube and select “Delete Data”:

Picture 21

You will be asked if you want to delete the content. Select “Yes”:

Picture 22

When asked if you want to keep the data in the dimension tables, select “No”:

Picture 23

This deletes the complete content of the selected InfoCube. It is also possible to define criteria for selective deletion (right-click the InfoCube → “Manage”; go to the tab “Contents”; choose “Delete Selection”), or to script the selective deletion as sketched below.
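If selective deletion has to be scripted (e.g. for many InfoCubes), the standard BW function module RSDRD_SEL_DELETION can be used. The following is a rough sketch only: the parameter subset and the component names of the RSDRD_* selection structures reflect common usage examples and must be verified in SE37/SE11 for your release; always test such a deletion in a non-productive system first.

   REPORT z_selective_deletion.
   " Sketch: scripted selective deletion via the standard BW function
   " module RSDRD_SEL_DELETION. Verify all names in SE11/SE37 first
   " and test in a sandbox - this deletes data!
   DATA: l_thx_sel TYPE rsdrd_thx_sel,
         l_sx_sel  TYPE rsdrd_sx_sel,
         l_s_range TYPE rsdrd_s_range,
         l_t_msg   TYPE rs_t_msg.

   " Example restriction: all records with 0CALDAY before 2015-01-01
   l_s_range-sign   = 'I'.
   l_s_range-compop = 'LT'.
   l_s_range-low    = '20150101'.
   l_s_range-keyfl  = 'X'.
   APPEND l_s_range TO l_sx_sel-t_range.
   l_sx_sel-iobjnm = '0CALDAY'.
   INSERT l_sx_sel INTO TABLE l_thx_sel.

   CALL FUNCTION 'RSDRD_SEL_DELETION'
     EXPORTING
       i_datatarget      = '0SM_CCL'   " example cube - adapt!
       i_thx_sel         = l_thx_sel
       i_authority_check = 'X'
       i_mode            = 'C'
     CHANGING
       c_t_msg           = l_t_msg.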

Maintaining aggregation settings for ASR data

ASR (statistical records) InfoCubes are frequently found among the top 30 objects. You can analyze the size of the ASR InfoCubes with the report SMBI_ASR_BW_ANALYZE. A typical result is shown here:

Picture 24

Note that the number of records in the InfoCubes should normally be much higher than the number of records in the corresponding DSO; otherwise, this could indicate a problem with the twin cube reorganization. By double-clicking a line, you can display further information for a more detailed analysis. You can even drill down to the level where you can see how many records belong to each of the managed systems.

SAP Note 1457010 describes how you can reduce the data volume by excluding certain data from aggregation into the various InfoCubes. Important: Some reports based on ASRs will no longer work correctly if information is discarded during the aggregation, i.e. they will be missing data. The configuration tool can be found via the work center “SAP Solution Manager Administration” → “Infrastructure” → “BW Reporting” → “Start New Window” → select the line “System Reporting” → “Scenario Setup”. Expand the node of a system/SID and click “Expert Settings” in the category “Statistical Records (ABAP)”.

The document attached to SAP Note 1457010 contains an example explaining how to analyze the ASR InfoCubes using the report SMBI_ASR_BW_ANALYZE and how this information can be used to change the amount of data extracted by the ASR extractors.



[1] Don’t be confused by the naming “Performance Data” in the InfoProvider documentation. Many names in SAP Solution Manager have a historical origin which does not necessarily reflect the current usage of the corresponding object.

[2] Except if these object types have been activated and are in use for a specific reason.

[3] The naming of the jobs has historical reasons and does not necessarily reflect what the jobs are doing.

[4] Although the values are the same, they are taken from a different table. When changing the values via the UI, they are changed in both tables. It is strongly recommended to change them only via the UI, not directly via SE16!
