
Technical Information About the BI Accelerator Engine

Processing Data

After you create a BI accelerator index, the data is available on the file server of the BI accelerator server. The data is loaded into main memory when you execute a query for the first time, or when you start a special load program. It remains in main memory until it is displaced, or until a special delete program removes it. Running the delete program can be necessary if, for example, the BI accelerator server does not have enough memory for all BI accelerator indexes and you need to load data from particular InfoCubes, while data from other InfoCubes is not needed at this time.

Table data is stored in main memory column by column. Segmenting data tables vertically in this way is more efficient than the row-based storage of conventional relational database systems: in a conventional database, the system has to scan all the data in the table if no predefined aggregate is available for a query. The BI accelerator engine, by contrast, accesses only those data columns that are relevant. It sorts each column individually and places the required entries at the beginning. This improves performance considerably because the data flows are smaller, and it significantly reduces input/output load and main memory consumption.
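The difference between the two layouts can be sketched as follows. This is an illustrative example only, not SAP's actual implementation; the table and column names are invented:

```python
# Column store vs. row store: a query that aggregates one measure,
# filtered by one attribute, touches only those two columns.

# Row-oriented layout: a list of complete records.
rows = [
    {"customer": "A", "region": "EMEA", "revenue": 100},
    {"customer": "B", "region": "APJ",  "revenue": 250},
    {"customer": "C", "region": "EMEA", "revenue": 175},
]

# Column-oriented layout: one list per column.
columns = {key: [row[key] for row in rows] for key in rows[0]}

# "Total revenue for EMEA" reads only the region and revenue columns;
# the customer column is never touched.
total = sum(
    rev
    for region, rev in zip(columns["region"], columns["revenue"])
    if region == "EMEA"
)
print(total)  # 275
```

In a row store, the same query would have to read every record in full, including columns the query never uses.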

Compressing Data

Data is available on the BI accelerator server in a read-optimized format. The BI accelerator engine uses dictionary-based compression. Integers are used to represent text or values in table cells. Using integers allows efficient numeric coding and intelligent caching strategies.

For example, if a column has a thousand rows and some of its cells contain long texts, efficiency increases significantly if a ten-bit binary number identifies each text during processing (ten bits can distinguish up to 1024 distinct values) and a dictionary is used to look the texts up again afterwards. The datasets that have to be transferred and temporarily stored during the various processing steps are reduced on average by a factor of ten.
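The encoding step can be sketched like this. This is a simplified illustration of dictionary-based compression in general, with invented sample values, not the engine's internal format:

```python
import math

# Dictionary-based compression: each distinct text in a column is
# replaced by a small integer ID. The column is then stored and
# processed as integers; the dictionary restores the texts afterwards.

values = ["EMEA", "APJ", "EMEA", "Americas", "APJ", "EMEA"]

dictionary = {}   # text -> integer ID
encoded = []      # the column, stored as integers
for v in values:
    if v not in dictionary:
        dictionary[v] = len(dictionary)
    encoded.append(dictionary[v])

print(encoded)    # [0, 1, 0, 2, 1, 0]

# Decoding uses the reverse mapping.
reverse = {i: t for t, i in dictionary.items()}
decoded = [reverse[i] for i in encoded]
assert decoded == values

# Bits needed per cell: ceil(log2(number of distinct values)).
# Three distinct values fit in 2 bits; 1024 would fit in 10 bits.
bits_per_cell = max(1, math.ceil(math.log2(len(dictionary))))
print(bits_per_cell)  # 2
```

Because the encoded column is just an array of small integers, comparisons and aggregations operate on fixed-width numbers rather than variable-length texts, which is what enables the numeric coding and caching strategies described above.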

This means that you can perform the entire query processing in the main memory and reduce network traffic between separate landscapes.

Divided (Split) Indexes

The BI accelerator engine can process very large datasets without exceeding the limits of the installed memory architecture. Large tables (fact tables and large X and Y tables) can be split horizontally, saved on different servers, and processed quickly in parallel. The maximum table size before the system splits the index depends on the hardware of the BI accelerator server. Data is distributed to the subindexes in a round-robin procedure. Write, optimize, and read accesses are parallelized on the BI accelerator server.
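The round-robin distribution described above can be sketched as follows. The server names and row contents are hypothetical; this only illustrates the general partitioning scheme, not the engine's actual code:

```python
# Horizontal (round-robin) split: fact-table rows are dealt out to N
# subindexes in turn, so each server holds roughly 1/N of the data
# and a read can be computed per subindex and then merged.

servers = ["blade_0", "blade_1", "blade_2"]          # hypothetical hosts
fact_rows = [{"id": i, "amount": i * 10} for i in range(10)]

# Round-robin assignment: row i goes to subindex i mod N.
subindexes = {s: [] for s in servers}
for i, row in enumerate(fact_rows):
    subindexes[servers[i % len(servers)]].append(row)

# Each subindex computes a partial result in parallel (sequential here);
# the partial results are then merged into the final answer.
partials = [sum(r["amount"] for r in rows) for rows in subindexes.values()]
print(sum(partials))  # 450
```

Round-robin assignment keeps the subindexes close to equal in size without inspecting the data, which is what makes the parallel write, optimize, and read accesses balance well across servers.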

This scalability allows users to make use of sophisticated adaptive computing infrastructures such as blade servers and grid computing.

Index Types

The following index types are available:

  • Normal: In standard cases, the system creates BI accelerator indexes on the BI accelerator server for all the tables in the InfoCube star schema.
  • Flat: An exception arises if the InfoCube star schema has been deconstructed because, for example, one or more dimension tables have grown very large (> 20% of the InfoCube). In this case, the system does not create dimension tables; instead, it de-normalizes the relevant part of the InfoCube star schema (fact and dimension tables).
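The de-normalization behind a flat index can be sketched like this. The schema and attribute names are invented for illustration; they do not reflect actual InfoCube table layouts:

```python
# Flattening a star schema: instead of keeping a separate (oversized)
# dimension table that must be joined at query time, the dimension
# attributes are copied into each fact row.

fact = [
    {"dim_id": 1, "revenue": 100},
    {"dim_id": 2, "revenue": 250},
]
dimension = {1: {"region": "EMEA"}, 2: {"region": "APJ"}}

# De-normalize: merge the dimension attributes into the fact rows.
flat = [{**row, **dimension[row["dim_id"]]} for row in fact]
print(flat[0])  # {'dim_id': 1, 'revenue': 100, 'region': 'EMEA'}
```

The flat layout trades some redundancy (the dimension attributes are repeated per fact row) for queries that no longer need a join against a very large dimension table.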