The performance of any ABAP program mainly depends on the database accesses used in it. The more optimized the selections, the better the performance. Consider the points mentioned in the following sections while writing any ABAP code that accesses the database.
Using all the keys in the SELECT statement
When using the SELECT statement, study the key and always provide as much of the left-most part of the key as possible.
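As a sketch (assuming table VBAP, whose key is MANDT, VBELN, POSNR, and a hypothetical document number), supplying the left-most key field VBELN lets the database use the primary index even though POSNR is not qualified:

```abap
" Hypothetical example: VBAP's key is MANDT, VBELN, POSNR.
" Supplying the left-most key field VBELN allows primary-index access.
TYPES: BEGIN OF ty_item,
         vbeln TYPE vbap-vbeln,
         posnr TYPE vbap-posnr,
         matnr TYPE vbap-matnr,
       END OF ty_item.
DATA lt_items TYPE STANDARD TABLE OF ty_item.

SELECT vbeln posnr matnr
  FROM vbap
  INTO TABLE lt_items
  WHERE vbeln = '0000004711'.   " hypothetical document number
```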
Avoid SELECT *
Avoid the SELECT * command rigorously (unless you need every field in the table), because it drags every field of the table through the I/O bottleneck and thus slows the program.
Fetching Single Record
If the entire key can be qualified, code a SELECT SINGLE, not a SELECT … ENDSELECT. If all the key fields are not available and only the first record is of interest, use SELECT … UP TO 1 ROWS.
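As a sketch of both cases (tables MARA and VBAK, with hypothetical selection values):

```abap
" Full key known: SELECT SINGLE.
DATA lv_mtart TYPE mara-mtart.
SELECT SINGLE mtart FROM mara INTO lv_mtart
  WHERE matnr = 'MAT-001'.                 " hypothetical material number

" Full key not known, only the first hit needed: UP TO 1 ROWS.
DATA lv_vbeln TYPE vbak-vbeln.
SELECT vbeln FROM vbak INTO lv_vbeln UP TO 1 ROWS
  WHERE erdat = sy-datum.
ENDSELECT.
```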
Selecting data into an internal table using an array fetch rather than a SELECT-ENDSELECT loop gives at least a 2x performance improvement. After the data has been read into the internal table, row-level processing can be done:
SELECT ... FROM <dbtab> INTO TABLE <itab> WHERE ...

LOOP AT <itab>.
  " do the row-level processing here
ENDLOOP.
When accessing the database, careful consideration should be given to index access in order to make the program as efficient as possible. Tune the query so that an optimal index can be used.
Structure the WHERE clause to match the index: the fields should appear in the same order as in the index. Use ST05 or SE30 to analyse which index is actually used.
Avoid “INTO CORRESPONDING”
Avoid using INTO CORRESPONDING FIELDS OF TABLE. Instead, mention the fields explicitly if the table fields are not in the same sequence as the selection.
SELECT statement inside LOOP
Do not write SELECT statements inside a loop. Instead, use the FOR ALL ENTRIES Command
Before using the FOR ALL ENTRIES command, check that:
1. The corresponding internal table is not empty. If the internal table is empty, the statement selects ALL the entries in the database table.
2. The internal table is sorted by the fields used in the WHERE clause; this makes the selection faster. (Also delete adjacent duplicates for the key fields.)
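The two checks above can be sketched as follows (lt_vbak is a hypothetical driver table, assumed to be filled elsewhere):

```abap
TYPES: BEGIN OF ty_item,
         vbeln TYPE vbap-vbeln,
         posnr TYPE vbap-posnr,
         matnr TYPE vbap-matnr,
       END OF ty_item.
DATA lt_vbak TYPE STANDARD TABLE OF vbak.
DATA lt_vbap TYPE STANDARD TABLE OF ty_item.

IF lt_vbak IS NOT INITIAL.                 " 1. never FAE on an empty table
  SORT lt_vbak BY vbeln.                   " 2. sort by the WHERE field
  DELETE ADJACENT DUPLICATES FROM lt_vbak COMPARING vbeln.
  SELECT vbeln posnr matnr
    FROM vbap
    INTO TABLE lt_vbap
    FOR ALL ENTRIES IN lt_vbak
    WHERE vbeln = lt_vbak-vbeln.
ENDIF.
```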
Nested SELECT statement
Avoid using nested SELECT statements. Instead, fetch the data into separate internal tables and use nested loops to read them.
Whenever possible, avoid SELECT DISTINCT. Instead, select the data into an internal table, SORT it, and use DELETE ADJACENT DUPLICATES.
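A sketch of this alternative, assuming we want the distinct materials of a hypothetical document in VBAP:

```abap
DATA lt_matnr TYPE STANDARD TABLE OF vbap-matnr.

SELECT matnr FROM vbap INTO TABLE lt_matnr
  WHERE vbeln = '0000004711'.              " hypothetical document number
SORT lt_matnr.
DELETE ADJACENT DUPLICATES FROM lt_matnr.
```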
Use of OR in Where Clause
Do not use OR when selecting data from DB table using an index because the optimizer generally stops if the WHERE condition contains an OR expression.
Instead of:

SELECT * FROM spfli WHERE carrid = 'LH'
  AND ( cityfrom = 'FRANKFURT' OR cityfrom = 'NEWYORK' ).

use the equivalent form:

SELECT * FROM spfli WHERE ( carrid = 'LH' AND cityfrom = 'FRANKFURT' )
  OR ( carrid = 'LH' AND cityfrom = 'NEWYORK' ).
ORDER BY bypasses the table buffer, so performance decreases. If you need sorted data, it is more efficient to read it into an internal table and SORT it than to use ORDER BY. Use ORDER BY in your SELECT only if the sort order matches the index that should be used.
Using the READ statement
When reading a single record in an internal table, READ TABLE WITH KEY is not a direct read. The table should be sorted by the key fields and the command READ TABLE WITH KEY BINARY SEARCH used; otherwise the table is read sequentially from top to bottom until a row matching the key is found.
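A sketch, assuming a VBAP-typed table keyed by VBELN and POSNR with hypothetical key values:

```abap
DATA lt_vbap TYPE STANDARD TABLE OF vbap.
DATA ls_vbap TYPE vbap.

SORT lt_vbap BY vbeln posnr.               " precondition for BINARY SEARCH

READ TABLE lt_vbap INTO ls_vbap
  WITH KEY vbeln = '0000004711'            " hypothetical key values
           posnr = '000010'
  BINARY SEARCH.
IF sy-subrc = 0.
  " row found; its index is in sy-tabix
ENDIF.
```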
APPEND LINES OF
Whenever possible, use APPEND LINES OF to append one internal table to another, instead of looping over the source table and using APPEND for each row.
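For example (lt_src and lt_dest are hypothetical tables of the same line type):

```abap
DATA: lt_src  TYPE STANDARD TABLE OF mara,
      lt_dest TYPE STANDARD TABLE OF mara.

" Append all rows of lt_src in one statement:
APPEND LINES OF lt_src TO lt_dest.

" A subset of rows can be appended as well:
APPEND LINES OF lt_src FROM 1 TO 100 TO lt_dest.
```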
DELETE <itab> WHERE
Use DELETE <itab> WHERE … for deleting records from an internal table. Instead of:

LOOP AT <itab> WHERE <field> = '0001'.
  DELETE <itab>.
ENDLOOP.

use:

DELETE <itab> WHERE <field> = '0001'.
Using the WHERE clause in LOOP … ENDLOOP
Use the WHERE condition of LOOP rather than an IF inside the loop:

LOOP AT itab WHERE name EQ sy-uname.
  ...
ENDLOOP.

is faster than:

LOOP AT itab.
  IF itab-name = sy-uname.
    ...
  ENDIF.
ENDLOOP.

If the table is sorted and the start index is known, the loop can also be restricted with LOOP AT itab FROM l_tabix.
For good modularization, the decision of whether or not to execute a subroutine should be made before the subroutine is called.
IF f1 NE 0.
  PERFORM sub1.
ENDIF.

is better than performing the check on f1 inside the subroutine.
Case vs. Nested IF
When testing fields "equal to" something, one can use either nested IFs or the CASE statement. CASE is better for two reasons: it is easier to read, and beyond about five nested IFs it is also more efficient.
If the number of entries in the internal table is high, use a hashed table with a unique key to access it.
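A sketch of hashed-table access (hypothetical material number):

```abap
DATA lt_mara TYPE HASHED TABLE OF mara WITH UNIQUE KEY matnr.
DATA ls_mara TYPE mara.

" Key access on a hashed table costs constant time,
" independent of the number of entries.
READ TABLE lt_mara INTO ls_mara
  WITH TABLE KEY matnr = 'MAT-001'.        " hypothetical material number
```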
With READ TABLE or MODIFY statements, use TRANSPORTING and specify only the fields actually needed. (When modifying internal table rows, it is best to use a field symbol.)
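Both points sketched together (hypothetical VBAP-typed table and key value):

```abap
DATA lt_vbap TYPE STANDARD TABLE OF vbap.
DATA ls_vbap TYPE vbap.
FIELD-SYMBOLS <ls_vbap> TYPE vbap.

" TRANSPORTING copies only the listed fields into the work area:
READ TABLE lt_vbap INTO ls_vbap
  WITH KEY vbeln = '0000004711'            " hypothetical key value
  TRANSPORTING matnr kwmeng.

" A field symbol modifies the row in place, with no copy-back needed:
LOOP AT lt_vbap ASSIGNING <ls_vbap>.
  <ls_vbap>-kwmeng = <ls_vbap>-kwmeng * 2.
ENDLOOP.
```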
In the case of a logical database (LDB), individual tables can be excluded from selection to improve performance. The section 'Table Selection' in the LDB documentation describes the fields that control this exclusion; they can be set in the application report at INITIALIZATION or at START-OF-SELECTION.
Use WHILE instead of a DO + EXIT construction, as WHILE is easier to understand and faster to execute.
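The two equivalent loops side by side:

```abap
DATA lv_count TYPE i.

" WHILE states the loop condition up front:
WHILE lv_count < 10.
  lv_count = lv_count + 1.
ENDWHILE.

" The equivalent DO + EXIT construction hides the condition inside:
lv_count = 0.
DO.
  IF lv_count >= 10.
    EXIT.
  ENDIF.
  lv_count = lv_count + 1.
ENDDO.
```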
When records a and b have exactly the same structure, it is more efficient to MOVE a TO b than to MOVE-CORRESPONDING a TO b. For example,
MOVE BSEG TO *BSEG.
is better than
MOVE-CORRESPONDING BSEG TO *BSEG.
Order of tests
Ensure that the first condition tested in an IF statement is the one most frequently true. For a logical AND, put the condition most likely to be FALSE first; conversely, for a logical OR, put the condition most likely to be TRUE first. (This only leads to a noticeable performance improvement if the test is performed very many times inside a loop.)
Ensure that the parallel cursor method is used instead of a nested loop wherever possible. The parallel cursor method finds the start index with a READ statement and then loops only over the relevant entries. This gives huge performance benefits. If possible, RFC parallelization can also be considered.
Nested loop:

LOOP AT T1 INTO W1.
  LOOP AT T2 INTO W2 WHERE F1 = W1-F1.
  ENDLOOP.
ENDLOOP.

Parallel cursor method:

SORT T2 BY F1.
LOOP AT T1 INTO W1.
  READ TABLE T2 INTO W2 WITH KEY F1 = W1-F1 BINARY SEARCH.
  IF sy-subrc = 0.
    l_index = sy-tabix.
    LOOP AT T2 INTO W3 FROM l_index.
      IF W3-F1 <> W1-F1.
        EXIT. " leave the inner loop once the key no longer matches
      ENDIF.
      " process W3 here
    ENDLOOP.
  ENDIF.
ENDLOOP.
Use the parallel cursor method for nested loops over internal tables whenever the second internal table contains a considerable number of records.
Inner Joins vs. FOR ALL ENTRIES (FAE)
In most cases, INNER JOIN is better performing than FAE, so it should be used first.
The set of data that can be selected with a view greatly depends on whether the view implements an inner join or an outer join. With an inner join, you only get those records which have an entry in all the tables included in the view. With an outer join, on the other hand, those records that do not have a corresponding entry in some of the tables included in the view are also selected.
The hit list found with an inner join can therefore be a subset of the hit list found with an outer join. Database views implement an inner join: you only get those records which have an entry in all the tables included in the view. Help views and maintenance views, however, implement an outer join. In short, choose the join type based on the requirement.
FOR ALL ENTRIES (FAE) should also be avoided when a large volume of data is expected in the driver internal table, because the single database process may consume more system resources than the upper limit set by the Basis team, and the program may end in a short dump. In such cases, prefer inner joins over FOR ALL ENTRIES.
Also check whether the driver table is empty or not: if it is empty, the SELECT retrieves all the data from the second table.
Try to make use of INDEX tables as much as possible to improve the performance.
E.g.: if data needs to be fetched from the VBAP table based on a material number, which is not a key field, then before looking for a secondary index on material, check whether an index table exists. In this case VAPMA is an index table that returns the SD document number and item where a material exists.
Note: An index table may not exist in every case, and even if one exists, make sure it is active, i.e. the index table contains approximately the same number of entries as the main table.
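A sketch of the two-step access via VAPMA (hypothetical material number):

```abap
TYPES: BEGIN OF ty_key,
         vbeln TYPE vapma-vbeln,
         posnr TYPE vapma-posnr,
       END OF ty_key.
DATA lt_keys TYPE STANDARD TABLE OF ty_key.
DATA lt_vbap TYPE STANDARD TABLE OF vbap.

" Step 1: the index table yields document/item keys for the material.
SELECT vbeln posnr FROM vapma
  INTO TABLE lt_keys
  WHERE matnr = 'MAT-001'.                 " hypothetical material number

" Step 2: read VBAP by its full primary key.
IF lt_keys IS NOT INITIAL.
  SELECT * FROM vbap
    INTO TABLE lt_vbap
    FOR ALL ENTRIES IN lt_keys
    WHERE vbeln = lt_keys-vbeln
      AND posnr = lt_keys-posnr.
ENDIF.
```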
Usage of Cursors over SELECT … INTO / SELECT … ENDSELECT
Cursor is a control structure for successive traversal through data. The rows in the result set will be processed sequentially by the application. Cursor acts as an iterator over a collection of rows in the result set.
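A sketch of explicit cursor handling with package-wise array fetches (hypothetical material number; note that each FETCH overwrites lt_vbap with the next package):

```abap
DATA lv_cursor TYPE cursor.
DATA lt_vbap TYPE STANDARD TABLE OF vbap.

OPEN CURSOR WITH HOLD lv_cursor FOR
  SELECT * FROM vbap
    WHERE matnr = 'MAT-001'.               " hypothetical material number

DO.
  " Array fetch of up to 1000 rows per round trip.
  FETCH NEXT CURSOR lv_cursor
    INTO TABLE lt_vbap
    PACKAGE SIZE 1000.
  IF sy-subrc <> 0.
    EXIT.                                  " no more data
  ENDIF.
  " process the current package in lt_vbap here
ENDDO.

CLOSE CURSOR lv_cursor.
```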
Fetching data from the Database
Data is fetched from the database depending on the data specified in the SELECT statement along with its additions. There are two core operations to get the data from the database.
- OPEN / RE-OPEN
- FETCH
OPEN / RE-OPEN: This starts, or flags off, the process of getting data from the database. It is like the green traffic light that signals it is time to fetch the data from the source location.
FETCH: This locates the database data that satisfies the conditions and transfers it to the application server. The data is transferred in one or more fetches; transferring a block of records at a time is called an array fetch. An array fetch offers better performance than transferring single records in a client/server architecture.
The maximum number of records that can be fetched in one operation is determined by the SAP database interface; the default block size set by SAP for this purpose is 33,792 bytes. The block size for transfers from the application server to the presentation server is likewise fixed, but that value can be changed.
To diagnose performance problems, it is recommended to use the SAP transaction SE30, ABAP/4 Runtime Analysis. The utility allows statistical analysis of transactions and programs.
Use transaction ST05 (SQL Trace) to see which indices your database accesses are using. Check these indices against your WHERE clause to ensure they are actually applicable. Also check the other indices on the table to see whether your WHERE clause should be changed to use one of them.