This page provides introductory information about the hot standby functionality available for SAP MaxDB. The hot standby concept combines high availability cluster software and a storage replication technology (in the sense of hardware-implemented cloning of the data area) with the MaxDB recovery mechanism.
Use Cases and Benefits
The hot standby technology provides robust, fast failover and minimizes the failover impact on applications that work with MaxDB. Setting up a hot standby configuration is currently the most advanced approach to achieving high availability for an SAP MaxDB or SAP liveCache instance.
The solution is particularly useful as a building block of mission-critical SAP SCM scenarios, where it implements liveCache high availability. Hot standby eliminates the long-running rollback operations that can occasionally occur when a traditional liveCache instance restarts after a system crash. It also prevents the performance degradation caused by a cold restart. While any MaxDB instance benefits from this, it is most valuable in memory-centric liveCache configurations, where it makes the time required to restore liveCache business availability more predictable.
Required building blocks
The IT hardware infrastructure for a hot standby MaxDB environment typically comprises two clustered servers with shared-access storage for the log devspaces, plus additional storage capacity for the data devspaces. This additional storage must support the quick creation of a full physical copy of the data without denying read/write access while the copy process is ongoing.
SAP MaxDB supports hot standby through an API specification, and a set of dbmcli commands is available to handle hot standby instances. The hot standby concept is also integrated into the MaxDB Database Manager. A third-party vendor has to provide a library that implements the API and runs as part of the MaxDB runtime environment (RTE). Without this library, a hot standby cannot be configured. Typically, the same vendor also provides the cluster integration software for MaxDB, so that all building blocks integrate seamlessly.
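The actual API that such a vendor library must implement is defined by SAP's specification and is not reproduced here. As a purely illustrative sketch, the interface a storage vendor has to fill in can be thought of along these lines (all class and method names below are hypothetical, not the real API):

```python
from abc import ABC, abstractmethod

class StorageHandler(ABC):
    """Hypothetical sketch of a storage-handling interface that a vendor
    library could implement for the MaxDB runtime environment.
    All names are illustrative; the real API is defined by SAP."""

    @abstractmethod
    def establish_mirror(self, master_volumes: list, standby_volumes: list) -> str:
        """Start a hardware-level copy of the master data devspaces
        onto the standby volumes without blocking master I/O."""

    @abstractmethod
    def split_mirror(self, standby_volumes: list) -> str:
        """Detach the finished physical copy so the standby instance
        can open it independently of the master volumes."""

    @abstractmethod
    def copy_in_progress(self) -> bool:
        """Report whether a hardware-level copy is still running."""
```

A vendor such as HP would back these operations with its array features (e.g. HP Business Copy, as mentioned below); the abstract base class only fixes the contract the runtime environment relies on.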
An overview of how it works
The third-party library enhances the MaxDB runtime environment with storage handling capabilities. It enables the following behavior: a hot standby MaxDB instance can be started on a secondary server. The hot standby maintains its own copy of the data devspaces. It regularly consults the shared-access storage that holds the log devspace of the master MaxDB (which runs on a different server) in order to become aware of recent changes that need to be applied to keep the standby data logically in sync with the master data. If the standby cannot restore its own data devspaces from the information it finds in the active (= 'hot') logs of the master system, it triggers the creation of a storage-level clone via the library. This should rarely be necessary; it usually happens during the initial startup of a standby, or during the first startup after the standby was offline for a longer period while the master system kept processing changes. The procedure automatically builds a consistent copy of the master data without interrupting master operation. Afterwards, the physical copy is split off from the master data volumes and is eventually synchronized logically by the standby instance as described above.
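The catch-up decision described above can be sketched as follows. This is a simplified model of the logic, not the actual MaxDB algorithm: if every change the standby is missing is still available in the master's hot log, replaying the log suffices; if the gap predates the oldest retained log entry, a storage-level clone must be requested first. Function and parameter names are illustrative.

```python
def plan_catch_up(standby_applied_upto: int,
                  master_log_oldest: int,
                  master_log_newest: int):
    """Decide how a standby catches up with the master (illustrative
    logic only). Arguments are log positions; the master's 'hot' log
    covers the range [master_log_oldest, master_log_newest]."""
    if standby_applied_upto >= master_log_oldest:
        # All missing changes are still in the shared log devspace:
        # the standby can simply replay them.
        return ("replay_log", master_log_newest - standby_applied_upto)
    # The standby fell behind the oldest retained log entry: a fresh
    # storage-level clone of the master data is required before
    # log replay can resume.
    return ("request_storage_clone", None)
```

For example, a standby that has applied the log up to position 120 while the master's hot log covers positions 100 to 150 only needs to replay 30 log units, whereas a standby stuck at position 50 would trigger a storage clone.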
Note: There is no need to implement a separate log shipping mechanism in a hot standby configuration, and there is no need for permanent hardware-based volume synchronization during MaxDB runtime.
A third-party implementation is available from Hewlett-Packard as part of the HP Serviceguard cluster extensions for SAP software (SGeSAP 4.51 or higher). It requires HP Integrity servers running HP-UX 11i v2 or higher. Supported storage arrays include a range of HP StorageWorks XP arrays; HP Business Copy is used for the hardware-based copy.