The Database Analyzer parameter check can be used to verify the parameter settings of a database. Parameters set differently from SAP's recommendation for that release are logged in the parameter check log file. Set the parameter values as recommended by the Database Analyzer parameter check.
Check SAP Note 1111426 to learn how to execute the Database Analyzer parameter check.
Additionally, you can read the corresponding parameter note for the SAP MaxDB version that you use. If your parameter settings do not correspond to the recommendations, correct them.
MaxDB determines the statistics on the basis of estimated values that are defined in relation to the table size for each table in the domain.tables system table. See Note 808060 for information about setting the correct estimated values on your system. You should rerun the report mentioned in Note 808060, especially after large data imports. Make sure that you regularly refresh the database statistics using Transaction DB13. Schedule an UPD_CHECK followed by an UPD_COND once a week, at a time when there is little load on the system. Use Transaction DB13 to execute a 'CREATE NEW OPTIMIZER STATISTICS' (UPD_UNCOND) only after large data imports, after a release upgrade, or after a request from SAP MaxDB Support.
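The division of labor between the three DB13 actions can be sketched as follows; this is an illustrative model only, and the table names and the `stats_outdated` flag are assumptions for the example, not MaxDB internals:

```python
def select_tables_for_update(tables, unconditional=False):
    """Return the table names whose optimizer statistics should be refreshed.

    Model: UPD_CHECK flags tables whose statistics look outdated; UPD_COND
    then refreshes only those, while UPD_UNCOND refreshes every table.
    """
    if unconditional:  # UPD_UNCOND: new statistics for all tables
        return [t["name"] for t in tables]
    # UPD_COND: only tables flagged by the preceding UPD_CHECK
    return [t["name"] for t in tables if t["stats_outdated"]]

tables = [
    {"name": "MARA", "stats_outdated": True},
    {"name": "VBAK", "stats_outdated": False},
]
weekly = select_tables_for_update(tables)              # conditional update
after_import = select_tables_for_update(tables, True)  # unconditional update
```

This is why the weekly UPD_CHECK/UPD_COND pair is cheap enough to run routinely, while UPD_UNCOND is reserved for exceptional events.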
Older MaxDB patches sometimes contain errors that can lead to performance problems. Therefore, check whether a newer MaxDB patch than the one you are currently using is available on the SAP Service Marketplace. To find out whether a more recent patch contains performance improvements, see the following SAP MaxDB Features notes:
Use Transaction DBACockpit (-> Space -> Caches) or, in older SAP releases, DB50 (-> Current Status -> Memory Areas -> Caches) to check the hit rate of the data cache. In a balanced system, this should be 99%. If this is not the case, check whether the low hit rate is due to the data cache being too small or due to an unsuitable SQL statement. If necessary, increase the data cache with the MaxDB parameter Data_Cache_Size (versions lower than 7.4), Cache_Size (7.4 or higher), or CacheMemorySize (7.7 or higher).
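The hit rate shown in the cache monitor is simply the share of page accesses served from the cache. A minimal sketch of the calculation and the 99% check (the access counts are made-up example values):

```python
def data_cache_hit_rate(successful_accesses, unsuccessful_accesses):
    """Hit rate in percent: the share of page accesses served from the data cache."""
    total = successful_accesses + unsuccessful_accesses
    if total == 0:
        return 100.0  # no accesses yet, nothing to complain about
    return 100.0 * successful_accesses / total

# A balanced system should reach roughly 99%:
rate = data_cache_hit_rate(successful_accesses=995_000, unsuccessful_accesses=5_000)
needs_attention = rate < 99.0  # if True: cache too small, or an unsuitable SQL statement?
```

Note that a single badly selective SQL statement can drag this average down even when the cache is sized correctly, which is why both causes have to be checked.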
Log entries are not written directly to the log volume, but first into a buffer (LOGQUEUE), so that several log entries can be written together asynchronously with one I/O to the hard disk. Only once a LOGQUEUE page is full is it written to the hard disk.
However, there are situations in which LOG entries must be written from the LOGQUEUE to the hard disk as quickly as possible (good log I/O write performance is less than 2 msec), for example when transactions are completed (COMMIT, ROLLBACK). The transactions wait until the log writer confirms that the log entry is on the hard disk. This means, firstly, that it is important to use the fastest possible hard disks for the log volume(s) and, secondly, that you must ensure that no LOG_QUEUE_OVERFLOWS occur in production operation. If the LOG_IO_QUEUE is full, all transactions that want to write LOG entries must wait until free memory becomes available again in the LOG_IO_QUEUE. If LOG_QUEUE_OVERFLOWS occur (Transaction DBACockpit -> Performance -> Database Analyzer -> Bottlenecks) even though the log I/O write performance is good, increase the LogQueueSize parameter.
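The overflow mechanism can be illustrated with a toy queue model; this is a deliberate simplification (one entry per page, synchronous flush on overflow), not a description of the actual MaxDB log writer:

```python
from collections import deque

class LogQueue:
    """Toy model of the MaxDB log queue: entries are buffered in pages and
    written to the log volume asynchronously; a full queue forces the writer
    to wait for a free page (counted here as an overflow)."""

    def __init__(self, pages):
        self.pages = pages       # analogous to LogQueueSize: number of buffer pages
        self.queue = deque()
        self.overflows = 0

    def write_entry(self, entry):
        if len(self.queue) >= self.pages:
            self.overflows += 1  # transaction must wait until a page is free
            self.flush_one()
        self.queue.append(entry)

    def flush_one(self):
        """Log writer completes one I/O, freeing one page."""
        if self.queue:
            self.queue.popleft()

q = LogQueue(pages=2)
for i in range(5):               # burst of log entries, no flush in between
    q.write_entry(i)
# q.overflows is now 3: with good log I/O times, a larger queue is the fix
```

The same trade-off holds in the real system: a larger LogQueueSize absorbs bursts, but it cannot compensate for a log device that is itself too slow.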
Using "DB02 -> Missing indexes", you can determine whether indexes that should exist according to the R/3 Dictionary are missing at database level. If indexes are listed here, determine the cause and create the indexes where necessary.
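Conceptually this check is a set difference between the two index inventories. A minimal sketch, with made-up index names in the usual TABLE~ID notation:

```python
def missing_indexes(dictionary_indexes, database_indexes):
    """Indexes defined in the R/3 Dictionary but absent at database level
    (the kind of list that "DB02 -> Missing indexes" reports)."""
    return sorted(set(dictionary_indexes) - set(database_indexes))

dictionary = {"VBAK~001", "VBAK~Z01", "MARA~T"}   # illustrative names
database = {"VBAK~001", "MARA~T"}
missing = missing_indexes(dictionary, database)   # the index to investigate
```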
Make sure that the rsdb/max_blocking_factor parameter is set in accordance with the recommendations. As of the 6.10 kernel, FOR ALL ENTRIES is converted into a list, if this is possible. In this context, the rsdb/max_in_blocking_factor parameter must also be set correctly to optimize database performance. For details, see Note 819324 and the notes referenced there.
Note that even in 6.x, FOR ALL ENTRIES is still sometimes converted into OR concatenations, specifically when several conditions in the WHERE clause refer to the FOR ALL ENTRIES list. Therefore, rsdb/max_blocking_factor remains relevant.
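What the blocking factor controls can be shown with a small sketch: the FOR ALL ENTRIES key set is de-duplicated and split into chunks, and each chunk becomes one database call. The chunk size stands in for rsdb/max_in_blocking_factor; the key values are invented for the example:

```python
def split_into_in_lists(keys, blocking_factor):
    """Split a FOR ALL ENTRIES key set into list chunks of at most
    `blocking_factor` values, one database call per chunk (an illustrative
    model of what rsdb/max_in_blocking_factor limits)."""
    unique = sorted(set(keys))          # FOR ALL ENTRIES removes duplicates
    return [unique[i:i + blocking_factor]
            for i in range(0, len(unique), blocking_factor)]

chunks = split_into_in_lists(["A", "C", "B", "A", "D", "E"], blocking_factor=2)
# 5 distinct keys with blocking factor 2 -> 3 database calls
```

A larger blocking factor means fewer round trips, but statements that grow too large can hurt the optimizer, which is why the parameter has a recommended setting rather than "as large as possible".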
To ensure that the database interface generates useful hints, the dbs/ada_use_hints parameter must be set correctly. In this regard, see Note 832544 and the subnotes that are referenced in it.
The _PACKET_SIZE parameter defines the size of the buffer that is used to send and receive SQL data. Normally, you should not change the default value. However, in the BW environment, operations such as array INSERTs can be accelerated by increasing the parameter, since less database communication is then necessary. In this regard, see Notes 545385 and 140739.
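The effect of the packet size on an array INSERT can be estimated with a rough round-trip model; the row size and packet sizes below are invented example figures, not defaults of any release:

```python
import math

def round_trips(rows, bytes_per_row, packet_size):
    """Rough number of client/server round trips for an array INSERT:
    the larger the packet, the more rows fit per trip (illustrative model,
    ignoring protocol overhead within the packet)."""
    rows_per_packet = max(1, packet_size // bytes_per_row)
    return math.ceil(rows / rows_per_packet)

small = round_trips(rows=100_000, bytes_per_row=200, packet_size=36_864)
large = round_trips(rows=100_000, bytes_per_row=200, packet_size=131_072)
# small > large: a bigger _PACKET_SIZE reduces database communication
```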
Check the following points by using Transaction ST10 ("all tables"/"since startup"/"this server"):
Sort the list by the "Rows affected" column, which is a measure of the database load caused by accesses to the table. If buffered tables (the "Buffer state" column is not empty) appear at the very top of the list, check their buffering: large tables and tables with frequent changes should not be buffered. Also check whether the WHERE conditions used by the application match the type of table buffering.
Sort the list by the "Invalidations" and "Changes" columns, and check whether tables with a high number of changes or invalidations would be better removed from buffering.
Sort the list by the "Buffer size" column and check whether buffering the tables with the largest buffer sizes is actually useful.
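The first two checks can be sketched as simple sort-and-filter steps over ST10-style rows; the table names, figures, and the invalidation threshold are made up for the example:

```python
# Illustrative ST10-style analysis: which buffered tables cause the most load,
# and which are invalidated so often that buffering may be counterproductive.
tables = [
    {"name": "T001", "rows_affected": 90_000, "invalidations": 2,   "buffer_state": "valid"},
    {"name": "ZBIG", "rows_affected": 80_000, "invalidations": 950, "buffer_state": "valid"},
    {"name": "VBAK", "rows_affected": 70_000, "invalidations": 0,   "buffer_state": ""},
]

# 1) Sort by "Rows affected": buffered tables at the top deserve a closer look
by_load = sorted(tables, key=lambda t: t["rows_affected"], reverse=True)
top_buffered = [t["name"] for t in by_load if t["buffer_state"]]

# 2) High invalidation counts: candidates for removing the table from buffering
remove_candidates = [t["name"] for t in tables if t["invalidations"] > 100]
```

In this toy data set, ZBIG is both heavily accessed and frequently invalidated, which is exactly the combination where buffering costs more than it saves.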
There is extensive information about monitoring and about the optimization of table buffering in the book entitled "SAP R/3 Performance Optimization" from the SAP PRESS series.
In accordance with Note 819324, check which SQL statements are responsible for the most disk accesses, and whether they can be optimized. You should also optimize SQL statements that have a poor runtime and poor selectivity.
You can use Transaction ST06/OS07 (-> Detail Analysis Menu -> Disk) to determine the load on the hard disks of the database server; the load on a hard disk should not exceed 80%. You can use Transaction DB50 (-> Current Status -> I/O Operations -> Overview) to determine the number of read and write accesses for each data volume. Since backups are not written via this path, there is a separate analysis under I/O Operations -> Backup I/O, although it only returns data while a backup is running. The duration of a savepoint can also provide information about hard disk speed: check the runtime of the savepoint in the kernel log (knldiag), and note how many data pages had to be written for that savepoint. Increased I/O times can also be the result of a high load. You can use Transaction DB50 (-> Current Status -> Activities Overview) to determine the number of physical read accesses carried out since the last database restart; together with the Database Analyzer output, this gives you an impression of the system load. Use the SQL optimization described in Note 819324 to determine the SQL statements that create a high read load; creating suitable indexes for these statements can considerably ease the load on the system. In this context, see also Note 820824 (FAQ: MaxDB), which also provides recommendations for the configuration.
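The savepoint figures from the kernel log can be turned into a throughput estimate. The sketch below assumes 8 KB pages; verify the page size of your instance before interpreting the result, and treat the example figures as invented:

```python
def savepoint_write_rate(pages_written, duration_seconds, page_size=8192):
    """Write throughput of a savepoint in MB/s, computed from the page count
    and runtime reported in the kernel log (8 KB pages assumed)."""
    return pages_written * page_size / duration_seconds / (1024 * 1024)

# e.g. 40,000 pages written in 10 seconds -> 31.25 MB/s; unusually low values
# at moderate system load point to slow disks for the data volumes
rate = savepoint_write_rate(pages_written=40_000, duration_seconds=10)
```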
Use Transaction ST06 or OS07 to check whether there are sufficient CPU and memory resources on the database server. The CPU idle time should average 30% or more per hour, and no more than 20% of the memory should be swapped per hour.
Network problems between R/3 instances and the database server can make database response times appear very long, even though no problem is visible on the MaxDB side. The NIPING tool is a good way of locating problems in the network area. See Note 500235 for details; start a NIPING server on the database server and NIPING clients on the R/3 instances. Check whether the measured bandwidth corresponds to your expectations. If not (or if errors occur), consult your hardware partner.