WiredTiger vs RocksDB
A set of benchmarks comparing WiredTiger to RocksDB
These benchmarks run a version of the LevelDB benchmark as implemented in the WiredTiger RocksDB repository, as at revision 046c7ab20ffaff3ce11a8f45470f133494834226.
WiredTiger is built from the develop branch, as at revision a3f1d6379dcc2b7ffc14443abb0eb860212d280a.
Both source trees were built with release flags.
Test machine:
- 24 Intel(R) Xeon(R) CPU X5650 @ 2.67GHz (2 physical CPUs)
- 144 GB RAM
- 400 GB Intel s3700 SSD
- 6+ disk SAS array using RAID-0 and xfs
Unless otherwise stated, all benchmarks use:
- 180 byte values
- 16 byte keys
- 200 million items in the insert phase
- Bloom filters with 16 bits per key
- 2GB cache size
- Snappy compression enabled
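The settings above map onto WiredTiger roughly as in the sketch below. This is a minimal illustration, not the benchmark driver itself: the `WT_BENCH` home directory, the `lsm:bench` table name, the string key/value formats, and the snappy extension path are all assumptions.

```c
#include <stdlib.h>
#include <wiredtiger.h>

int
main(void)
{
	WT_CONNECTION *conn;
	WT_SESSION *session;

	/* Open with a 2GB cache and load the snappy compressor (path is an assumption). */
	if (wiredtiger_open("WT_BENCH", NULL,
	    "create,cache_size=2GB,"
	    "extensions=[./ext/compressors/snappy/libwiredtiger_snappy.so]",
	    &conn) != 0)
		return (EXIT_FAILURE);	/* the WT_BENCH directory must already exist */
	conn->open_session(conn, NULL, NULL, &session);

	/* LSM table with 16 bits per key of bloom filter and snappy-compressed blocks. */
	session->create(session, "lsm:bench",
	    "key_format=S,value_format=S,"
	    "block_compressor=snappy,lsm=(bloom_bit_count=16)");

	conn->close(conn, NULL);
	return (EXIT_SUCCESS);
}
```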
The overwrite workload consists of a database filled using fillseq, followed by a period of time allowing the LSM tree to stabilise merges. The results are based on inserting 200 million values into the existing database. Overwrite operations in LSM trees are simply inserts - there is no pre-insert search (a sketch follows the table below).
Measurement | WiredTiger SSD | RocksDB SSD |
---|---|---|
Ops/sec | 328486 | 74581 |
micros/op | 3.044 | 13.408 |
Throughput (MB/s) | 61.4 | 13.9 |
99% Latency (micros) | 7.06 | 11.97 |
99.99% Latency (micros) | 25.70 | 1197.96 |
Max Latency (micros) | 201181 | 3152823 |
DB size (GB) | 60 | 50 |
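To make the "no pre-insert search" point concrete, here is a hedged sketch of an overwrite through WiredTiger's cursor API, reusing the assumed `lsm:bench` table and string formats from the configuration sketch above. The cursor simply inserts the new key/value pair; no lookup precedes the write.

```c
#include <wiredtiger.h>

/*
 * Overwrite is a blind insert: the cursor never searches for the key first;
 * the newer entry simply shadows any older one until a merge discards it.
 */
static int
overwrite_one(WT_SESSION *session, const char *key, const char *value)
{
	WT_CURSOR *cursor;
	int ret;

	if ((ret = session->open_cursor(
	    session, "lsm:bench", NULL, "overwrite=true", &cursor)) != 0)
		return (ret);
	cursor->set_key(cursor, key);
	cursor->set_value(cursor, value);
	ret = cursor->insert(cursor);	/* no read-modify-write cycle */
	cursor->close(cursor);
	return (ret);
}
```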
The read random workload consists of a database filled using fillseq, followed by a period of time allowing the LSM tree to stabilise merges. The results are based on 20 threads each reading 10 million items.
This workload is run with a 3GB cache size and compression disabled.
Measurement | WiredTiger SSD | RocksDB SSD |
---|---|---|
Ops/sec | 439498 | 589031 |
micros/op | 2.275 | 1.698 |
99% Latency (micros) | 130.37 | 63.6 |
99.99% Latency (micros) | 348.44 | 167.88 |
Max Latency (micros) | 630089 | 31138 |
DB size (GB) | 37 | 39 |
The WiredTiger database does not completely merge during the compact phase, which is a likely explanation for the performance difference between the two engines. The output for WiredTiger shows `lsm: Merge aborted due to close` and `After compact. LSM chunks: 4`. I suspect that the final merge has finished but not been finalized; during finalization, compact decides that all merges are done and shuts down, throwing away the work done by the final merge.
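For reference, the point lookups issued during the read phase above look roughly like the sketch below, again against the assumed `lsm:bench` table rather than the benchmark's actual driver; a real driver would reuse one cursor per thread instead of reopening it for every lookup.

```c
#include <wiredtiger.h>

/* One random point lookup, as issued by each reader thread. */
static int
read_one(WT_SESSION *session, const char *key)
{
	WT_CURSOR *cursor;
	const char *value;
	int ret;

	if ((ret = session->open_cursor(
	    session, "lsm:bench", NULL, NULL, &cursor)) != 0)
		return (ret);
	cursor->set_key(cursor, key);
	if ((ret = cursor->search(cursor)) == 0)
		ret = cursor->get_value(cursor, &value);
	cursor->close(cursor);
	return (ret);
}
```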
This read random workload consists of a database filled using fillseq, followed by a period of time allowing the LSM tree to stabilise merges, followed by overwriting 50 million items in the table, and then inserting 8192 items to ensure the in-memory chunk contains entries. The results are based on 20 threads each reading 10 million items.
This workload is run with a 3GB cache size and compression disabled.
Measurement | WiredTiger SSD | RocksDB SSD |
---|---|---|
Ops/sec | 281869 | 421521 |
micros/op | 3.548 | 2.372 |
99% Latency (micros) | 215.59 | 131.61 |
99.99% Latency (micros) | 778.77 | 631.45 |
Max Latency (micros) | 146881 | 35479 |
DB size (GB) | 43 | 45 |
What is happening with WiredTiger here is that merges continue during the read phase, since the overwrite phase created new data in the tree. The log for WiredTiger reported `After overwrite LSM chunks: 35`, and then `After read. LSM chunks: 3`. If another compact was added after the overwrite, there would not be any merge operations happening during the read phase.
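Such a compact pass would look roughly like the sketch below, again assuming the `lsm:bench` URI. WT_SESSION::compact drives the outstanding LSM merges, so issuing it between the overwrite and read phases should keep merge work out of the read measurements.

```c
#include <stdio.h>
#include <wiredtiger.h>

/* Drive outstanding LSM merges to completion before the read phase starts. */
static void
compact_after_overwrite(WT_SESSION *session)
{
	int ret;

	if ((ret = session->compact(session, "lsm:bench", NULL)) != 0)
		fprintf(stderr, "compact: %s\n", wiredtiger_strerror(ret));
}
```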
This benchmark inserts 200 million items with random keys from a range of 200 million. The key space is not entirely filled, and some inserts will overwrite existing keys.
Measurement | WiredTiger SSD | RocksDB SSD |
---|---|---|
Ops/sec | 365760 | 255064 |
micros/op | 2.734 | 3.921 |
Throughput (MB/s) | 68.4 | 47.7 |
99% Latency (micros) | 6.74 | 6.97 |
99.99% Latency (micros) | 45.16 | 30.06 |
Max Latency (micros) | 237089 | 332950 |
DB size (GB) | 36 | 24 |
The WiredTiger database size is larger than RocksDB's here. It appears that the compact phase for WiredTiger isn't working as expected; the end of the log shows `LSM chunks: 110`.
This benchmark inserts 200 million items in order into the tree.
Measurement | WiredTiger SSD | RocksDB SSD |
---|---|---|
Ops/sec | 545745 | 724379 |
micros/op | 1.832 | 1.380 |
Throughput (MB/s) | 102 | 135.4 |
99% Latency (micros) | 5.06 | 3.05 |
99.99% Latency (micros) | 52.5 | 40.9 |
Max Latency (micros) | 146517 | 335411 |
DB size (GB) | 25 | 38 |
You can download the raw results here