My personal experience with Hana
When SAP started to announce Hana internally, I was not too thrilled about it. Great, another columnar database meant for analytic queries which will be better than anything else and even support OLTP transactions. Sure, whatever you say. It is not the first time SAP has tried to switch customers from well-established databases to ADABAS or MaxDB, so what will be different this time? How are you going to offset the years of experience the established vendors have?
Then came the killer arguments:
- We are in-memory, not disk based, hence we are faster! Okay, so you are saying other databases don't cache data in memory? Then this is even a downside, as the others cache only the frequently used data, whereas in Hana all data has to reside in memory, right?
- We store data in columns, not in rows, hence we can sum up single columns much faster! Okay, and what if I want to see many columns of the same table, as is common for OLTP queries: show me the values of material xyz and let me edit it. The response was that Hana supports row storage as well. Excellent, so now I can optimize either for BI queries or for OLTP queries, and I have to decide which one when creating the table.
- We are a massively parallel shared-nothing architecture where one single select statement can utilize the entire hardware! Are you saying that as soon as two users query the same database simultaneously, each gets just half the performance?
With these kinds of thoughts in mind I wanted to know the truth in each statement made and to evaluate my concerns. This article takes you through my journey assuming you have the same kind of thoughts.
During my journey I had extensive talks with the Hana developers (they probably hate me by now) whenever I did not understand a result I got. And it turned out I could not have been more wrong with the thoughts above. So let's start...
The core principles of Hana
These are the core techniques Hana utilizes in order to be better:
- In-memory: all data resides in RAM
- Compression
- Columnar storage instead of row storage
- Insert-only
Each of these techniques has advantages but disadvantages as well.
The basic idea is that memory is much faster than disk. A 2012 CPU has a memory bandwidth of 10 GByte/sec and higher, a single disk around 150 MByte/sec, so memory is roughly a factor of 70 faster. If your program uses the same memory frequently, it is cached inside the CPU's L1 or L2 cache, speeding up the bandwidth by another factor of 10. And that is sequential access only; random access is many times worse for a disk system but has no negative impact on RAM.
The downside of memory is the cost of the memory chips themselves (7 USD/GByte for RAM compared to 0.05 USD/GByte for disk as of 2012), and a hardware platform that can hold more memory gets increasingly expensive.
On the other hand, if I need 1 TB of RAM that would be 7000 USD, so while this is a lot compared to a single 100 USD disk, it is not much in absolute terms.
But all I am saying here is: no matter what database you have, make sure it fits into the server's main memory entirely, or at least the important parts of it.
The idea of compression is simple: a single CPU is much faster than the memory bus and the disk, not to mention that multiple CPUs share the same bus, hence compressing data in order to reduce its amount is beneficial as long as the overhead is not too large. Therefore every major database supports compression; however, it is not used that often, since compressing and decompressing database blocks takes its toll, most obviously when data is updated inside a block. You have to uncompress the block, make the change, and then compress it again, hoping it still fits into the same database block; otherwise you need a second one.
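This update overhead is easy to see in a toy sketch. Here zlib stands in for the database's block compression, and the block layout is purely hypothetical:

```python
import zlib

# A hypothetical "database block": 100 rows stored as one compressed chunk.
rows = [f"row-{i:04d};status=OPEN".encode() for i in range(100)]
block = zlib.compress(b"\n".join(rows))

def update_row(block: bytes, row_no: int, new_row: bytes) -> bytes:
    """Change one row: the whole block must be uncompressed and recompressed."""
    rows = zlib.decompress(block).split(b"\n")
    rows[row_no] = new_row                   # the actual change is tiny ...
    return zlib.compress(b"\n".join(rows))   # ... the recompression is not

new_block = update_row(block, 42, b"row-0042;status=CLOSED")
# If new_block is larger than the slot the old block occupied,
# the database needs a second block, exactly the problem described above.
```

The change touches one row, yet the cost is proportional to the whole block.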
Columnar storage versus row storage
For a simple select sum(revenue) the columnar storage is just perfect. You have to read one column only, hence just a fraction of the whole table data is needed. If you have all of one column's data in one file, this will be much faster than with traditional row-oriented tables, where all the table data is in the same database object and you have to read the entire table in order to get at each row's value of that one column.
In case you want to see all columns of a single row like it is typical for OLTP queries, the row storage is much better suited.
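A minimal sketch of the two layouts (plain Python lists, not Hana's actual storage format) makes the difference concrete:

```python
# Row store: each row is a tuple; summing revenue touches every column of every row.
row_store = [(i, f"customer-{i % 50}", 100.0 + i % 7) for i in range(1000)]  # (id, name, revenue)
total_rows = sum(row[2] for row in row_store)

# Column store: each column is its own array; the sum reads the revenue column only.
ids     = [r[0] for r in row_store]
names   = [r[1] for r in row_store]
revenue = [r[2] for r in row_store]
total_cols = sum(revenue)  # touches 1 of 3 columns, a fraction of the data

# Fetching one whole row (the OLTP pattern) is the opposite case: the row store
# has it in one place, the column store must pick position k out of every column.
k = 123
row_from_columns = (ids[k], names[k], revenue[k])
```

Both layouts produce the same answers; what changes is how much data each access pattern has to touch.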
A real database should have read consistency in the sense that when I execute my select statement, I get all data that was committed in the table at that exact moment; I will neither see data committed afterwards, nor will my long-running query fail just because an old value was overwritten by the database writer.
The only major database I know of that does this is Oracle, but you pay a price for it. Whenever the database writer overwrites a block with new data, the old version has to be kept somewhere: in the rollback segments of the database. So a simple update or insert into an existing block requires two operations, the actual change plus saving the old version.
With insert-only, the idea is a different one: in each table the old data is never overwritten, only appended. If you execute an update against an existing row, a new row is written at the end of the table with the same primary key value plus a transaction number. When a select statement is executed, it will see multiple versions and use the one with the highest transaction number less than or equal to the currently active global transaction number in the database.
So it is a tradeoff: tables grow fast with this approach, especially when you update just a single byte, but writes are faster.
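As a sketch, assuming a simplified table of (primary key, transaction number, value) triples; the names and structure are illustrative, not Hana's implementation:

```python
# Insert-only versioning: updates append a new version tagged with a
# transaction number; nothing is ever overwritten in place.
table = []  # list of (primary_key, txn, value)

def upsert(pk, value, txn):
    table.append((pk, txn, value))

def read(pk, snapshot_txn):
    # Use the version with the highest txn <= the reader's snapshot.
    versions = [(txn, val) for key, txn, val in table if key == pk and txn <= snapshot_txn]
    return max(versions)[1] if versions else None

upsert("mat-1", "steel", txn=1)
upsert("mat-1", "steel, grade A", txn=5)

read("mat-1", snapshot_txn=3)  # a query that started at txn 3 sees "steel"
read("mat-1", snapshot_txn=9)  # a later query sees "steel, grade A"
```

Note how a long-running query keeps seeing its own consistent snapshot without any rollback segment: the old version is simply still there.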
My starting point
Okay, this was my starting point. All these technologies are good ideas; other companies have tried them as well and built proper niche products, mostly around analytic use cases. One is a pure in-memory database that needs a regular database to persist the data; into others you can insert data but not update or delete it. Many support compression, but it is usually turned off by customers.
The claim of Hana is to be a database that supports analytic and transactional use cases and is better than other databases in all areas. That should be easy to put into perspective, I thought.
However, the one thing I did overlook at that time was how these technologies benefit from each other. So let us go through a couple of mutual benefits to get the idea.
Combination of Compression with Columnar Storage
Compressing data you can do best whenever there is a repetitive pattern. Let us have a look at a real example, the material master table.
What can you compress better, the entire file or a single column?
The answer is obvious, but nevertheless I exported these columns from the MARA table, all 20'000 rows of my system, into a CSV file (1'033 KB) and zipped the one file with all data plus nine files with one column each.
Obviously the primary key cannot be compressed much, it is half of the data in fact, but all other columns can be: the MANDT file is 303 bytes, the ERSDA file with all the many create dates is 12'803 bytes.
But this is not a fair comparison, as the zip algorithm favors larger datasets, where it can find patterns more easily, and it is a fairly aggressive algorithm. In databases the compression is weaker in order to use fewer CPU cycles, and more importantly, each file is split into database blocks. That means if you have a fairly wide table, one database block might contain only a single row of data to compress, hence almost no compression is possible at all.
With columnar storage we have no issue with that side effect.
So as you see, the technology of using columnar storage has a huge positive side effect on the degree of compression that is possible.
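The effect is easy to reproduce with synthetic data. The columns below only mimic the MARA export (a unique key, a single-valued client, repetitive create dates), and zlib stands in for zip:

```python
import random
import zlib

random.seed(0)
n = 20_000

# Synthetic stand-ins for three MARA-like columns.
matnr = [f"MAT{i:010d}" for i in range(n)]        # primary key, all values unique
mandt = ["800"] * n                                # client: one distinct value
ersda = [f"2012-{random.randint(1, 12):02d}-01" for _ in range(n)]  # few distinct dates

def packed(col):
    return "\n".join(col).encode()

whole_file = packed([";".join(t) for t in zip(matnr, mandt, ersda)])
size_whole = len(zlib.compress(whole_file))
size_columns = sum(len(zlib.compress(packed(c))) for c in (matnr, mandt, ersda))

# The single-valued client column shrinks to almost nothing on its own, while
# the unique key column barely compresses, mirroring the MARA observation.
```

The exact ratios depend on the data, but the repetitive columns compress dramatically once they are stored on their own.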
Combination of Compression with In-Memory
That's an easy one. The more we compress, the less memory we need. From the prices above, disk is roughly 140 times cheaper per GByte than RAM (0.05 USD versus 7 USD); thanks to compression, which usually achieves a factor of 10, disk is effectively only 14 times cheaper. More importantly, for your 10 TB database you do not need a server with 10 TB of RAM, which would be very expensive; one or two standard servers are good enough to handle the volume.
Combination of Compression with Insert-only
Compression has one important downside as well, however: what if a row is updated or even deleted? The compression spans multiple rows, so when you change a value you have to uncompress the entire thing, change the value, and compress it again. With traditional databases you do exactly that. Okay, you do not uncompress the entire table; the table data is split into pages (database blocks) and hence only the impacted page has to be recompressed, but still, you can virtually watch how much compression slows down updates and deletes in traditional databases.
So what does Hana do? Quite simple: it does not update or delete the existing data. It appends the change as a new version with a transaction ID, and when you query the table, you will read the most recent version of each row that was committed before the query's execution time. So suddenly no recompression is needed anymore: data is appended uncompressed at the end of the table, and once the uncompressed area exceeds a limit, it gets compressed and new data is inserted into a new page of the table.
The other advantage of this approach: if a single row is updated multiple times, which row will that be? A booking made 10 years ago? Very unlikely. It will be a recent one, probably one that is still in the uncompressed area.
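A rough sketch of that split between a compressed main area and an uncompressed delta area, with hypothetical names and limits (Hana's real delta merge is of course far more involved):

```python
import json
import zlib

DELTA_LIMIT = 1000  # rows kept uncompressed before merging (hypothetical)

main_pages = []  # compressed pages (bytes)
delta = []       # recent rows, uncompressed

def merge_delta():
    """Compress the current delta into a new main page and start a fresh delta."""
    global delta
    main_pages.append(zlib.compress(json.dumps(delta).encode()))
    delta = []

def insert(row):
    # Writes only ever append to the uncompressed area: no recompression needed.
    delta.append(row)
    if len(delta) >= DELTA_LIMIT:
        merge_delta()

def scan():
    # A query reads the compressed pages plus the small uncompressed delta.
    rows = []
    for page in main_pages:
        rows.extend(json.loads(zlib.decompress(page)))
    return rows + delta

for i in range(2500):
    insert({"pk": i, "qty": i % 7})
# 2500 inserts -> two compressed pages of 1000 rows, 500 rows still in the delta
```

Individual inserts stay cheap; the recompression cost is paid once per merge instead of once per change.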
All four together
As you can see, compression, insert-only, columnar storage, and in-memory all work together; the four features positively impact each other. Remove one, e.g. the columnar storage, and suddenly the compression is not as effective anymore, and appending full rows would take far too much space to be feasible.
Comparing Hana with other databases
So much for the theory, but I want to measure how effective the Hana database actually is. No excuses in the form of: hey, it's a new product and therefore not as mature as other databases. No, I want to know where we are today, with Hana 1.0 SP4.
Of course I will provide outlook on what could be done in future and I certainly will touch on the problems Hana has as well in the following chapters.
But in order to put the results of the tests into perspective, we need to compare them with something. Is installing another database on the same hardware a fair comparison? Maybe. But maybe the server has fast memory but just a single slow disk instead of a fast disk array? Or maybe it is not configured properly? Should we compare against the relational database with empty caches, or with all data already cached in memory just because the amount of data I am testing with fits into memory uncompressed? Maybe the relational database would work better on the database vendor's own hardware?
How can we make it a fair comparison?
Simple answer: you cannot. So what I did was set up everything in favor of the relational database (nice server, powerful disk array, large database cache, data selected multiple times to make sure everything is in memory) and do the worst things possible for Hana. That gives me one extreme of the comparison, and I wanted to know how much slower Hana is then. Where needed, I built variations in Hana to get an idea of the spread we might get.
Chosen table layout for the tests
- The table has a primary key column of type integer
- The table has 100 text columns to make life a misery for the columnar Hana storage
- The 100 text columns are just 20 chars long
- The data is uniformly distributed to break the compression as much as possible, hence Hana's compression will not be very effective
- The average number of distinct values is about 1000 for the text columns
- The amount of data is the equivalent of a 60 GB flat file
- We load 30 million rows
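A generator for such a table might look like the following; a scaled-down sketch with hypothetical names, producing roughly 2 KB per row, which matches 60 GB over 30 million rows:

```python
import random
import string

random.seed(42)

# Test table as described above: an integer key, 100 text columns of 20
# characters, about 1000 distinct values per column, uniformly distributed.
N_COLS, N_DISTINCT, COL_LEN = 100, 1000, 20

# Pre-build the pool of distinct values for each column once.
pools = [
    ["".join(random.choices(string.ascii_uppercase, k=COL_LEN)) for _ in range(N_DISTINCT)]
    for _ in range(N_COLS)
]

def make_row(pk: int) -> list:
    # Uniform pick from each column's pool: the worst case for compression.
    return [pk] + [random.choice(pool) for pool in pools]

rows = [make_row(i) for i in range(1000)]  # scaled down from 30 million rows
```

With only about 1000 distinct values per column, dictionary encoding can still help, but the uniform distribution removes any run-length or sortedness advantage.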
- Performance of Hana - Load an empty table asap
- Performance of Hana - Load an empty table asap - Compression
- Performance of Hana - Single table aggregation
- Performance of Hana - Joins
- Performance of Hana - where clauses on columns
- Performance of Hana - selecting one entire row via primary key
- Performance of Hana - where clause and functions