
This wiki is a DUPLICATE entry. Please refer to wiki ABAP Performance and Tuning instead. 




In ABAP development it is very common to need programs that run as fast as possible, since in most cases there is a huge mass of data to process. Let's try to understand how to work effectively on that subject.

SAP R/3 servers work in three layers:

Database Server: responsible for accessing and updating the database.
Application Server: responsible for application processing (ABAP command interpretation).
Frontend/Presentation Server: responsible for graphical interface processing (R/3 system middleware and kernel).

Reaching good performance means, in most cases, decreasing the communication between these three layers.

Practices to avoid

To have a good performance you should avoid (very important):

  • Database operations
    • Nested SELECT commands (use FOR ALL ENTRIES or a JOIN instead)
    • SELECTs within a loop
    • SELECT * (all fields) instead of selecting only the fields needed
    • GROUP BY and ORDER BY in the SELECT statement (sort the internal table instead)
    • SELECT ... ENDSELECT with EXIT (without UP TO 1 ROWS), or SELECT COUNT, just to check the existence of a record (use SELECT SINGLE)
    • Using fields in the WHERE clause which are not part of an index
    • Omitting a field of the primary key in the WHERE clause for cluster tables (specify all key fields)
    • Using complex SELECTs on buffered tables
    • Executing the same SELECT multiple times in the program
    • Updating a complete row if only one of the fields is to be updated (use UPDATE ... SET)
    • Using JOIN statements when adequate standard views exist
    • Issuing a SELECT with the fields in the field list in a different order than they exist in the database table
    • SELECT ... INTO CORRESPONDING FIELDS OF TABLE (fields are moved one by one)
  • Internal tables
    • Nested loops (use sorted tables and binary search)
    • SORT internal_table instead of SORT internal_table BY field1 field2
    • Moving all fields one by one between two work areas; move the whole work area instead
    • Linear search (use BINARY SEARCH with the READ TABLE statement)
    • LOOP AT itab WHERE with negated conditions such as <> (use LOOP ... ENDLOOP with an IF condition inside the loop)
  • Miscellaneous
    • MOVE-CORRESPONDING (fields are matched and moved one by one)
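The first two database points above can be sketched side by side; the tables VBAK/VBAP and the lt_* variables are only illustrative:

```abap
* Avoid: a SELECT inside a loop fires one database round trip per row.
LOOP AT lt_vbak INTO ls_vbak.
  SELECT posnr matnr FROM vbap
    APPENDING TABLE lt_vbap
    WHERE vbeln = ls_vbak-vbeln.
ENDLOOP.

* Prefer: one array SELECT with FOR ALL ENTRIES
* (always guard against an empty driver table).
IF lt_vbak[] IS NOT INITIAL.
  SELECT vbeln posnr matnr FROM vbap
    INTO TABLE lt_vbap
    FOR ALL ENTRIES IN lt_vbak
    WHERE vbeln = lt_vbak-vbeln.
ENDIF.
```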

Generic Tips

  • Use of WHERE clause wherever possible: Instead of issuing a SELECT without a WHERE clause, it is always better to add one, so that only the data satisfying the condition is retrieved. Without a WHERE clause the query returns every record of the database table, and the filtering then has to be done at application server level; in the process we consume costly database resources that should be used judiciously. When a key field is referred to and a database index exists, the system accesses the data directly, making the query cheaper. To verify that a secondary index is being used, use the "Explain SQL" option in transaction ST05.
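A minimal sketch of pushing the filter down to the database; MARA and the selection value 'FERT' are only illustrative:

```abap
* Prefer: the database returns only matching rows.
SELECT matnr ernam FROM mara
  INTO TABLE lt_mara
  WHERE mtart = 'FERT'.

* Avoid: fetching everything, then filtering on the application server.
SELECT matnr ernam mtart FROM mara INTO TABLE lt_all.
DELETE lt_all WHERE mtart <> 'FERT'.
```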

  • Logical Database: Imagine a logical database defined with four tables, and a program which needs only three of them. If only the first three tables are declared in the TABLES clause, the logical database will still automatically read all the key fields of the fourth table. Because of that, it is sometimes better to declare all the tables and issue a GET only for the key fields actually needed.

  • Database Resources: When possible, use database aggregate functions such as SUM, AVG, MIN and MAX. This reduces the volume of data transferred as well as the internal processing effort.
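For instance, a stock total can be computed by the database itself instead of summing on the application server; MARD and its field LABST are used only for illustration:

```abap
* The database returns a single aggregated value.
SELECT SUM( labst ) FROM mard
  INTO lv_total_stock
  WHERE matnr = lv_matnr
    AND werks = lv_werks.
```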

  • "OR" Operators: Several OR operators can often be replaced by one IN operator: z1 = a1 AND ( z2 = y1 OR z2 = y2 OR z2 = y3 ) can be replaced by z1 = a1 AND z2 IN ( y1, y2, y3 ).
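In a WHERE clause this looks like the following; MARD and the literal values are only illustrative:

```abap
* Several ORs ...
SELECT matnr labst FROM mard INTO TABLE lt_mard
  WHERE werks = '1000'
    AND ( lgort = '0001' OR lgort = '0002' OR lgort = '0003' ).

* ... replaced by one IN:
SELECT matnr labst FROM mard INTO TABLE lt_mard
  WHERE werks = '1000'
    AND lgort IN ('0001', '0002', '0003').
```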

  • Cluster Tables: This kind of table behaves in a particular way when queried with WHERE conditions on non-key fields: since all non-key fields are stored compressed, comparing them means unpacking them first, which costs extra processing. It is therefore better to restrict the WHERE condition to the key fields and delete the unwanted records afterwards with a separate DELETE on the internal table.
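A sketch with the classic cluster table BSEG; the selected fields and the amount filter are illustrative:

```abap
* WHERE on key fields only - no unpacking on the database side.
SELECT bukrs belnr gjahr buzei wrbtr FROM bseg
  INTO TABLE lt_bseg
  WHERE bukrs = p_bukrs
    AND gjahr = p_gjahr.

* Filter the non-key field afterwards on the application server.
DELETE lt_bseg WHERE wrbtr < lv_min_amount.
```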

  • Types: When not selecting all the fields of a database table, use TYPES to define a structure with only the fields you need. This guarantees that no unwanted data is selected, thus decreasing the communication flow from the database to the application layer. A sample follows:
    TYPES: BEGIN OF table_struc,
             field1 TYPE dbtab-field1,
             field2 TYPE dbtab-field2,
           END OF table_struc.

  • Work Areas: For better performance it is preferable to use explicitly declared structures, called work areas, instead of declaring internal tables with header lines. This sometimes also allows you to reuse the same one-record structure for more than one internal table. Use work areas for inserting and updating records. A short code sample follows, complementing the previous item's code:
    DATA: internal_table TYPE STANDARD TABLE OF table_struc.
    DATA: work_area_table TYPE table_struc.
    APPEND work_area_table TO internal_table.
    INSERT work_area_table INTO TABLE internal_table.
    MODIFY internal_table FROM work_area_table INDEX 1.

  • Field-Symbols: Field-symbols are used almost exactly the same way as work areas. Prefer them, however, for read-only access to records. Work areas are still adequate for internal tables whose line structure is smaller than around 100 bytes. A sample of how this resource is used follows:
    FIELD-SYMBOLS: <fs_table> TYPE table_struc.
    LOOP AT internal_table ASSIGNING <fs_table>.
      var1 = <fs_table>-field1.
      var2 = <fs_table>-field2.
    ENDLOOP.
    READ TABLE internal_table ASSIGNING <fs_table> INDEX 5.

Command Tips

INSERT command

For more than one insertion, prefer the array form of the syntax: INSERT db_table FROM TABLE internal_table [ACCEPTING DUPLICATE KEYS]
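A short sketch; ztable and lt_records are illustrative names:

```abap
* One array insert instead of one INSERT per record.
INSERT ztable FROM TABLE lt_records ACCEPTING DUPLICATE KEYS.
IF sy-subrc <> 0.
  " at least one record had a duplicate key and was skipped
ENDIF.
```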

FOR ALL ENTRIES IN itab clause

Remove duplicate records from the driver table; they can make the database do considerably more work.
Also, always make sure that the driver table is NOT empty, otherwise the SELECT will read the whole database table.


IF itab_data[] IS NOT INITIAL.
  SORT itab_data BY material.
  SELECT matnr labst FROM mard INTO TABLE i_mard
    FOR ALL ENTRIES IN itab_data
    WHERE matnr = itab_data-material
      AND werks = p_werks
      AND lgort = p_lgort.
ENDIF.


READ TABLE statement

Always check that you have all the needed keys when reading an internal table. If the data will only be used for reading, use a field-symbol; otherwise use a work area. When using this command only to verify a record's existence, do not forget the TRANSPORTING NO FIELDS clause.

Always try to use READ with BINARY SEARCH, but make sure you've sorted the internal table by the key you are trying to READ with.

And in case you are looking only for the values of two fields (say f1 and f2) from the internal table, use TRANSPORTING f1 f2.

If you need to READ with a pattern, for example to get the entry whose f1 starts with 'AB', use the following technique:

SORT itab1 BY f1.

READ TABLE itab1 WITH KEY f1+0(2) = 'AB' BINARY SEARCH INTO wa1 TRANSPORTING f1.

This illustrates all the tips suggested above with respect to the READ statement.

MODIFY Statements:

Use MODIFY with TRANSPORTING clause, wherever applicable.
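For example, to clear a single field of all matching lines of an internal table; MARD-style names are only illustrative:

```abap
CLEAR ls_mard-labst.
MODIFY lt_mard FROM ls_mard
  TRANSPORTING labst
  WHERE werks = '1000'.
```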

Related SAP notes

  • Note 332856 - Reading buffered master data: Identical database accesses to master data within a transaction can be avoided by using buffered reading. SAP provides special function modules which store the result of the last database request in memory; this value can be reused instead of accessing the database again.

Solutions to performance issues

Performance issues are encountered in a few recurring situations. This section addresses the most common ones.

1. ORA - 1555

Log into CSN and check out the notes 185822 and 3807.

2. SELECT * FROM <table> causes performance issues.

a. Try modifying the query to include maximum keys in the WHERE clause

b. Create secondary indexes on the table; this does improve performance. After creating the secondary indexes, make sure to update the database statistics. If this is not done, the optimizer may not use the secondary indexes properly, which can lead to performance issues, for example during delete operations.

c. Use open cursors / package processing:

    SELECT * FROM <table> INTO TABLE itab
      PACKAGE SIZE g_package_size
      UP TO max_rows ROWS
      WHERE field1 = 'xyz'.
      " <do processing here>
    ENDSELECT.

3. If you are dropping and re-creating the indexes of a table using the function modules DB_DROP_INDEX and DB_CREATE_INDEX, re-think the strategy. When processing a large amount of data, dropping and re-creating the index on each run leads to performance issues; a coding improvement helps here. The error can surface as a DBIF_RSQL_INVALID_CURSOR short dump.

4. When doing parallel processing of a particular transaction, if a table/file is going to be updated, remember to:

a. Lock the table/file during the update.

b. Unlock the object at the end of the update.

If the locking is not done, there will be multiple attempts to update the table/file, which leads to a short dump after a long processing time.
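As a sketch, the lock/unlock pair uses the enqueue and dequeue function modules generated for your lock object; the lock object EZ_ZTABLE, the table ztable and the parameter names below are all hypothetical:

```abap
* Acquire the lock before updating (module generated for lock object EZ_ZTABLE).
CALL FUNCTION 'ENQUEUE_EZ_ZTABLE'
  EXPORTING
    matnr          = lv_key
  EXCEPTIONS
    foreign_lock   = 1
    system_failure = 2
    OTHERS         = 3.
IF sy-subrc = 0.
  UPDATE ztable SET flag = 'X' WHERE matnr = lv_key.
  " Release the lock at the end of the update.
  CALL FUNCTION 'DEQUEUE_EZ_ZTABLE'
    EXPORTING
      matnr = lv_key.
ELSE.
  " Lock held by another process - retry or report the conflict.
ENDIF.
```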

5. For the error message "DBIF_RSQL_INVALID_CURSOR", check on which database it is occurring. If it happens during an update to a table, a code modification like the one below (committing in packages on DB2 systems) can help:

IF sy-dbsys(2) = 'DB'. "only do commit for DB2 systems
  i_dummy = records MOD commit_max.
  IF i_dummy = 0.
    COMMIT WORK.
  ENDIF.
ENDIF.


  1. Unknown User (98f1oprd)

    Page currently in development. Useful content will be welcome. Tks!

  2. Good! I try to help!

  3. Unknown User (uanlq16)

    Hello Rodrigo,

    I like every article regarding performance issues. Yours has a lot of helpful recommendations, but one statement is not correct! It is the bold and strikethrough comment below. This incorrect statement can be seen in many places, also posted by SAP.

    Practices to avoid

    To have a good performance you should avoid:

    • Nested Select commands (use For All Entries instead);
    • Nested Loops (use Parallel Cursor or Indexed Loop);
    • Append commands inside Select statements (use always Into Table instruction);
    • Select * instead of determining the proper fields;
    • Select Single without complete keys (use Select ... Up To 1 Rows... Endselect instead);
    • Absence of indexes or primary key seeks;
    • Paying no attention when selecting from Cluster Tables;
    • Move-Corresponding clause (due to field-by-field comparison);
    • Sort internal_table instead of Sort internal_table by field1 field2.

    Over the last few days I discussed and tested this issue in the forum with a couple of guys, initiated by a BASIS administrator. You can find the thread under Performance "SELECT INTO TABLE" vs. "SELECT IN LOOP". There you can see that an APPEND inside a SELECT ... ENDSELECT is at least not slower; sometimes it is even faster.




  4. Unknown User (98f1oprd)

    Nice, Heinz, good to know it!

     I'll change this information on this page!

     Thank you very much!


  5. Unknown User (p85x2iw)


     SELECT + APPEND is definitely slower than SELECT INTO TABLE. Try this in SE30 -> Tips and Tricks -> SQL Interface -> Array operations (Internal table) -> Select ... into table. I read the forum thread mentioned here, but the tests there were done with table USR05, which is buffered (generic area buffering), so it may not be an appropriate test.

     The reason that SELECT INTO TABLE is faster is the reduced data transfer time from the database server to the application server. With a SELECT ... ENDSELECT, the data is brought to the application server one record at a time (using a cursor size of 1), while with SELECT INTO TABLE it is fetched all at once. If the table is buffered, the difference may not be that much, but for all transactional and master data tables, SELECT INTO TABLE is definitely faster.

     Usually, development environments have only one server which acts as both the database and application server, so there may be (I am not sure) no perceptible difference. In productive systems there is usually a separate database server and several app servers - this is where the difference in performance is highly visible.



  6. Unknown User (s6tb1nx)

    I want to add one more performance-related tip, although it is very specific. While writing code inside a user exit or customer exit, there may be cases when you need a field value which is not available in the current context. Either we can write a SELECT query, or we can get that field value from the call stack (if it is available there); this way we can avoid a SELECT query, and with the help of field-symbols we can achieve it.

    Also one more thing: we can use static internal tables wherever applicable in user exits and customer exits. This can avoid unnecessary SELECT queries.

  7. Performance tips are very good, Thanks for sharing your knowledge. 

    Really these are very helpful to improve the performance.

  8. Unknown User (98cj0bpr)