ABAP/4 Tuning Checklist
1. Is the program YEAR 2000 compliant ?
Check date logic and make sure there are no 2-digit year fields
2. Is the program US ready ?
Taking into consideration the US volumes, will the program ever finish? Can data be selected for just the US - selection parameters at company code level, plant, purch org, sales div, etc...
SQL SELECT statements:
3. Is the program using SELECT * statements ?
Convert them to SELECT column1 column2 or use projection views
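A minimal sketch of the change, assuming the program only needs VBELN and AUART and that P_VKORG is a selection parameter (names are illustrative):

```abap
* Hypothetical internal table with only the needed columns
DATA: BEGIN OF ivbak OCCURS 0,
        vbeln LIKE vbak-vbeln,
        auart LIKE vbak-auart,
      END OF ivbak.

* Before: SELECT * drags every VBAK column across the network.
* After: only the two columns the program actually uses.
SELECT vbeln auart FROM vbak INTO TABLE ivbak
       WHERE vkorg = p_vkorg.
```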
4. Are CHECK statements for table fields embedded in a SELECT ... ENDSELECT loop, or in a LOOP AT ... ENDLOOP ?
Incorporate the CHECK statements into WHERE clause of the SELECT statement, or the LOOP AT statement
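For example (order type 'TA' and the variable names are illustrative):

```abap
DATA: vbeln LIKE vbak-vbeln,
      auart LIKE vbak-auart.

* Before: every row travels to the application server; most are discarded
SELECT vbeln auart FROM vbak INTO (vbeln, auart).
  CHECK auart = 'TA'.              " filtering too late, in ABAP
  "... process the row ...
ENDSELECT.

* After: the database applies the filter
SELECT vbeln auart FROM vbak INTO (vbeln, auart)
       WHERE auart = 'TA'.
  "... process the row ...
ENDSELECT.
```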
5. Do SELECTS on non-key fields use an appropriate DB index or is table buffered ?
Create an index for the table in the data dictionary or buffer tables if they are read only or read mostly
6. Is the program using nested SELECTs to retrieve data ?
Convert nested SELECTs to database views, DB joins (4.0), or SELECT xxx FOR ALL ENTRIES IN itab. Nested open SELECT statements are usually bad, and unless the amount of data is enormous, these types of reads should be avoided. They are usually used because they are the safest way to program - you will never have any memory issues. The technical explanation as to why this is bad follows - feel free to tune out for the remainder of the paragraph. An open SELECT reads just one record from the database at a time. Each time it does, the application server and database server need to communicate. The packet size of this communication is typically much larger than the size of the record, so most of the packet is wasted with this type of read. When system load is heavy, this can slow performance. Combine this with the overhead of accessing the database many times instead of just once.
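A sketch of the FOR ALL ENTRIES variant for a VBAK/VBAP read (table and parameter names are illustrative). Note the guard: FOR ALL ENTRIES against an empty driver table selects everything.

```abap
DATA: BEGIN OF ivbak OCCURS 0,
        vbeln LIKE vbak-vbeln,
      END OF ivbak,
      BEGIN OF ivbap OCCURS 0,
        vbeln LIKE vbap-vbeln,
        posnr LIKE vbap-posnr,
        matnr LIKE vbap-matnr,
      END OF ivbap.

* Headers first, in one array fetch ...
SELECT vbeln FROM vbak INTO TABLE ivbak
       WHERE vkorg = p_vkorg.

* ... then all items for those headers in a second fetch
IF NOT ivbak[] IS INITIAL.           " empty driver table would select everything!
  SELECT vbeln posnr matnr FROM vbap
         INTO TABLE ivbap
         FOR ALL ENTRIES IN ivbak
         WHERE vbeln = ivbak-vbeln.
ENDIF.
```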
7. Are there SELECTs without WHERE condition against files that grow constantly (BKPF/BSEG,MKPF/MSEG,VBAK/VBAP...) ?
Program design is wrong - back to the drawing board
8. Are SELECT accesses to master data files buffered (no duplicate accesses with the same key) ?
Buffer accesses to master data files by storing the data in an internal table and filling the table with READ TABLE...BINARY SEARCH method
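A sketch of such a buffer for material descriptions (MAKT); the buffer table and P_MATNR are illustrative. After a failed READ ... BINARY SEARCH, SY-TABIX points at the insertion position, so INSERT ... INDEX SY-TABIX keeps the buffer sorted for the next lookup.

```abap
DATA: BEGIN OF imakt OCCURS 0,
        matnr LIKE makt-matnr,
        maktx LIKE makt-maktx,
      END OF imakt.

* Look in the local buffer first; hit the database only on a miss
READ TABLE imakt WITH KEY matnr = p_matnr BINARY SEARCH.
IF sy-subrc <> 0.
  imakt-matnr = p_matnr.
  SELECT SINGLE maktx FROM makt INTO imakt-maktx
         WHERE spras = sy-langu
         AND   matnr = p_matnr.
  INSERT imakt INDEX sy-tabix.       " keeps the buffer sorted
ENDIF.
* either way, the IMAKT header line now holds the description
```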
9. Is the program using SELECT ...APPEND ITAB ...ENDSELECT techniques to fill internal tables ?
Change processing to read the data immediately into an internal table (SELECT VBELN AUART ... INTO TABLE IVBAK...)
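Side by side (declarations are illustrative):

```abap
DATA: BEGIN OF ivbak OCCURS 0,
        vbeln LIKE vbak-vbeln,
        auart LIKE vbak-auart,
      END OF ivbak.

* Before: one row per round trip, one APPEND per row
SELECT vbeln auart FROM vbak INTO ivbak.
  APPEND ivbak.
ENDSELECT.

* After: a single array fetch fills the table
SELECT vbeln auart FROM vbak INTO TABLE ivbak.
```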
10. Is the program using SELECT ORDER BY statements ?
Data should be read into an internal table first and then sorted, unless there is an appropriate index on the order by fields
11. Is the program doing calculations/summations that can be done on the database via the SUM, AVG, MIN or MAX functions of the SELECT statement?
use the calculation capabilities of the database via SELECT SUM...
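For instance, totalling order values (P_VKORG is an illustrative parameter):

```abap
DATA: total LIKE vbak-netwr.

* One aggregated number crosses the network instead of every row
SELECT SUM( netwr ) FROM vbak INTO total
       WHERE vkorg = p_vkorg.
```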
12. Do any SELECT statements contain fully-qualified keys (all keys are specified to match a single value in the WHERE clause) but not the keyword SINGLE?
The syntax checker won't catch this & neither does the database optimizer! Such an "incorrect" statement can take 5x longer - add the SINGLE keyword.
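For example, on VBAK the client is implicit and VBELN completes the key (P_VBELN is an illustrative parameter):

```abap
DATA: auart LIKE vbak-auart.

* All key fields are specified, so say SINGLE - no ENDSELECT needed
SELECT SINGLE auart FROM vbak INTO auart
       WHERE vbeln = p_vbeln.
```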
13. Are there any SELECT or LOOP AT structures that are only attempting to retrieve the first match on a generic key lookup?
use the EXIT statement, or UP TO 1 ROWS option in the case of SELECT, to break the loop & avoid unnecessary iterations
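The SELECT case, sketched with illustrative names (the LOOP AT case works the same way with EXIT after the first hit):

```abap
DATA: vbeln LIKE vbap-vbeln.

* Only the first match on the partial key MATNR is needed
SELECT vbeln FROM vbap UP TO 1 ROWS
       INTO vbeln
       WHERE matnr = p_matnr.
ENDSELECT.
```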
14. Do UPDATE/INSERT/DELETE statements occur inside loops?
save all accumulated changes in an internal table and then use the SQL array statements to perform the updates: INSERT <tab> FROM TABLE <itab>.
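Sketch, assuming ZTAB is a custom table and ITAB an internal table of the same line type (both names illustrative):

```abap
* Before: one database round trip per row
LOOP AT itab.
  INSERT INTO ztab VALUES itab.
ENDLOOP.

* After: accumulate the rows in ITAB, then one array operation
INSERT ztab FROM TABLE itab.
```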
15. Are only a few fields being changed by an UPDATE statement?
use the SET <field> = <value> clause of the UPDATE statement rather than updating the entire record
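For example, changing only the delivery block on a VBAK record (parameters illustrative):

```abap
* Only one column changes, so write only that column
UPDATE vbak SET lifsk = p_lifsk
       WHERE vbeln = p_vbeln.
```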
16. Is the program inserting/updating or deleting data in dialog mode (not via an update function module) ?
Make sure that the program issues COMMIT WORK statements when one or more logical units of work (LUWs) have been processed
Internal Table Processing:
(also see 4 & 13 above)
17. Are internal tables processed using READ TABLE itab WITH KEY ... BINARY SEARCH technique ?
Change table accesses to use BINARY SEARCH method. Note that this is ONLY possible if the table is sorted by the access key! Add a SORT if not already there.
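The pattern, with illustrative table and field names:

```abap
SORT itab BY vbeln.            " BINARY SEARCH requires this sort order
READ TABLE itab WITH KEY vbeln = p_vbeln BINARY SEARCH.
IF sy-subrc = 0.
  "... found; SY-TABIX holds the row index ...
ENDIF.
```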
18. Is a semi-sequential scan of a large, sorted internal table being performed by a LOOP AT ... WHERE, or LOOP AT ... with CHECK statements to continue/terminate the loop ?
Use a READ TABLE with BINARY SEARCH to retrieve the first record in the sequence matching the lookup keys, then LOOP AT ... FROM SY-TABIX + 1 until one of the key field values changes; this way you get only the records you need, rather than reading up to the starting point
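A sketch of one variant, which processes the first matching row inside the loop rather than after the READ (names are illustrative; IVBAP must be sorted by VBELN):

```abap
DATA: idx LIKE sy-tabix.

* Position on the first row of the VBELN block, then read forward
READ TABLE ivbap WITH KEY vbeln = p_vbeln BINARY SEARCH.
IF sy-subrc = 0.
  idx = sy-tabix.
  LOOP AT ivbap FROM idx.
    IF ivbap-vbeln <> p_vbeln.
      EXIT.                    " left the matching block - stop the loop
    ENDIF.
    "... process IVBAP ...
  ENDLOOP.
ENDIF.
```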
19. Is a condensed, summary internal table being built by READ TABLE... BINARY SEARCH and then sum, INSERT or APPEND?
Use the COLLECT statement, and SORT the result after the table is complete
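For example, summing item quantities per material (the summary table is illustrative):

```abap
DATA: BEGIN OF isum OCCURS 0,
        matnr  LIKE vbap-matnr,
        kwmeng LIKE vbap-kwmeng,
      END OF isum.

LOOP AT ivbap.
  CLEAR isum.
  isum-matnr  = ivbap-matnr.
  isum-kwmeng = ivbap-kwmeng.
  COLLECT isum.                " adds KWMENG into the row with the same MATNR
ENDLOOP.
SORT isum BY matnr.
```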
20. Is a COLLECT statement being used simply to avoid duplicate entries in an internal table whose fields are all character type (no numeric fields of types I, P or F) ?
Use the READ TABLE ... WITH KEY ... BINARY SEARCH with an INSERT ... INDEX SY-TABIX to build the table
21. Are internal tables being SORTed without any field qualifiers?
Specify the actual fields to be used: SORT <tab> BY <fld1> <fld2> ... The fewer fields, the better!
22. Are internal tables being copied or compared record-by-record?
Use the "[]" table-body operator to refer to the entire table in one line of code:
- TAB2[] = TAB1[] - copy TAB1 to TAB2
- IF TAB1[] EQ TAB2[] - compare TAB1 to TAB2
23. Are strings being built up/broken down manually with code fragments?
use the new CONCATENATE/SPLIT statements, with the SEPARATED BY SPACE option of CONCATENATE if desired
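A small sketch with illustrative variable names:

```abap
DATA: full_name(71),
      first_name(35),
      last_name(35).

CONCATENATE first_name last_name INTO full_name SEPARATED BY space.
SPLIT full_name AT space INTO first_name last_name.
```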
24. Are the older, obsolete string handling functions for finding string length, or centering/right-justifying strings being called?
use the newer strlen( ) built-in function and the WRITE ... TO ... CENTERED / RIGHT-JUSTIFIED statements
25. Are field strings (records) being set to a special character, other than SPACE, via a CLEAR and then a TRANSLATE instruction?
use the new CLEAR <record> WITH <char> statement (esp. useful with standard SAP batch input generation programs to set the "no value for this field" indicator, which has a default symbol of "/")
26. Is an internal table filled with a massive amount of data (approx. 5000 to 10000 records or more, depending on record length), sorted and processed?
- instead, use the FIELD-GROUPS declaration, and the INSERT, EXTRACT, SORT and LOOP instructions to define, create, sort and process such large volumes of data more efficiently
27. Is a logical database-driven program taking too long to select data?
- convert the program to specific SELECT statements, with more appropriate index usage and a minimized sequence of nested SELECTs; don't forget to remove the 'Logical database' name/application from the program's attributes
28. Is a MOVE-CORRESPONDING statement used when it's not needed?
- if moving a complete record with same structure, just use the simple assignment statement: XVBAP = VBAP or MOVE VBAP TO XVBAP
and if moving only a handful of fields, use specific assignments
29. Are there no OBLIGATORY specifications on the key PARAMETERS and/or SELECT-OPTIONS declarations?
- enforce user entry of key fields, thus limiting database accesses to some degree. To read everything, however, the user could still enter "1" - "99999999..." (see next item though), but at least they start to think about what they really need to see.
30. Are SELECT-OPTIONS fully open to the most complex searches even when they need not be?
- Use the keywords NO-EXTENSION and/or NO INTERVALS of the SELECT-OPTIONS declaration, if possible, to limit the number of ranges to only 1 (NO-EXTENSION), or limit the entry to single values only (NO INTERVALS) if that's all that is really needed for the application, so that the SELECT statement's "IN" operation is less complex
31. Producing information messages about "long run-times" can also be beneficial in reducing run-times
- the user will think twice and hopefully restart the program with a smaller range; or in extreme situations, an error message preventing generic input or complete-range input on a key field
32. In especially difficult cases, where no amount of specific statement re-writing, indexing, or buffering increases performance enough, consider re-structuring the main SELECT sequence, if possible, to reduce the total number of records to a manageable size.
- this technique is only possible with certain table sequences that are related via both forward and backward pointing foreign key relationships, and then only makes sense when reversing the logic will definitely reduce the total read accesses, and does not require a complete program re-write of all related routines, etc.