
Memory issues in APD processing
Many memory issues within the Analysis Process Designer (APD) can be avoided with simple changes to the design. In a number of support cases concerning APD performance, memory consumption turned out to be the cause of APDs failing. This page is intended as a quick guide to simple features that help avoid such problems.


The details below should help reduce the size of the datasets used in APD processing. This should minimise memory-related short dumps and allow APDs to run more efficiently.

Tips for reducing memory consumption

Generally, memory errors and performance issues are caused by large datasets being processed in the APD. The APD may also be designed inefficiently, causing the amount of data to be larger than expected. Below are some tips that can be used to reduce the amount of data processed and, therefore, reduce memory errors.

  • Uncheck "Process Data in Memory":
    1. Start transaction RSANWB
    2. Choose 'Goto' from the menu bar
    3. Click on 'Performance Settings'
    4. Uncheck the flag 'Process Data in Memory'
  • Use APD Partitioning
    1. See KBA 1901962: How Query Partitioning works in APD
  • Check whether the bXML interface for MDX is already available in your system (depends on release and support package)
    1. See SAP Note 1284239: RSCRM: Overcoming the 1 million row limit in RSCRM
    2. This also overcomes the 1 million cell limit in APDs (to be checked against the available memory configuration)
  • Check whether the APD could be replaced by a DTP with 'Query as InfoProvider'
    1. If the APD only extracts data from a BW query into a data target such as a flat file or an InfoProvider
    2. If the query can be released for 'Query as InfoProvider'
    3. See more details in the SAP Wiki: OT-QPROV or in the SAP Online Help: Query as InfoProvider
  • Analyse whether there is scope for improving the design of the APD:
    1. Check the intermediate results at each node
    2. Identify the node at which the APD is failing
    3. BEx conditions are ignored in MDX and cause extra overhead in the APD; delete the conditions and replace them with a FILTER node in the APD (see KBA 2109881: APD: BEx Query with active BEx conditions causes memory and performance problem)
    4. Check whether blank records are filtered out at joins (these can cause extra data)
    5. Reduce the number of records by applying as many filters as possible in the early stages of the APD
  • Overall APD design:
    1. Filter the data into smaller datasets
    2. Use multiple runs
    3. Aggregate all resulting data back into one DSO
  • MultiProvider processing:
    1. A MultiProvider works as a UNION; combined with join conditions, this can generate more records than expected
    2. Check the design of the MultiProvider and its dimensions and try to reduce the number of records
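The "filter early, process in smaller runs, then aggregate" advice above can be sketched in plain Python. This is purely illustrative — an APD is configured graphically in transaction RSANWB, not coded, and all function names here are hypothetical — but it shows why early filtering and partitioned runs keep peak memory bounded by one partition instead of the full dataset.

```python
def aggregate(partition):
    """Placeholder for the per-run processing step (e.g. summarise, then
    load into the target DSO). Here it simply passes rows through."""
    return partition

def process_all_at_once(source, predicate):
    """Load everything, then filter: peak memory ~ size of the full dataset."""
    data = list(source)                # the entire dataset is held in memory
    return [row for row in data if predicate(row)]

def process_in_partitions(source, predicate, partition_size=1000):
    """Filter first and process run by run: peak memory ~ one partition."""
    results = []
    partition = []
    for row in source:
        if predicate(row):             # early filter: unwanted rows are never buffered
            partition.append(row)
        if len(partition) >= partition_size:
            results.extend(aggregate(partition))   # one "run" completes and is released
            partition = []
    if partition:                      # final, possibly shorter run
        results.extend(aggregate(partition))
    return results
```

Both functions produce the same result; the partitioned variant simply never needs the whole dataset in memory at once, which mirrors the effect of APD partitioning and of pushing filters to the early nodes of the process.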

Related Content

Related KBAs

1790516    Memory issues in APD processing

1901962    How Query Partitioning works in APD

2109881    APD: BEx Query with active BEx conditions causes memory and performance problem

Related Notes

605208     RSCRM - Restrictions
605213     RSCRM: Performance
646699     Restrictions during the use of BW queries
751577     APD-FAQ: Data source query
1284239    RSCRM: Overcoming the 1 million row limit in RSCRM
