
perf(rocksdb): ⚡ Better use of Rocksdb multiget and batch writes #86

Merged
merged 9 commits into from
Apr 30, 2024

Conversation

Trantorian1
Collaborator

@Trantorian1 Trantorian1 commented Apr 29, 2024

Pull Request type

Please add the labels corresponding to the type of changes your PR introduces:

  • Feature

What is the current behavior?

Our current usage of RocksDB heavily parallelises database read and write operations when committing. This is not necessarily more efficient, as RocksDB has its own capabilities for retrieving and writing multiple values from and to the database.

What is the new behavior?

Made better use of RocksDB multiget and batch writes. Storage handler commit operations are now synchronous. Note that each commit still runs inside a separate task which is joined, as this still provides better performance, but the commit operation itself is synchronous. See storage_update.rs for more information.
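To make the pattern concrete, here is a minimal, dependency-free sketch of the batching idea: group all puts into one atomic batch and all gets into one multi-read, instead of issuing many parallel single-key operations. `MockColumn` is a hypothetical in-memory stand-in used only for illustration; the actual code goes through RocksDB's `WriteBatch` and multiget APIs.

```rust
use std::collections::HashMap;

// Hypothetical in-memory stand-in for a RocksDB column family, used only to
// illustrate the batch-write / multiget pattern described above.
#[derive(Default)]
struct MockColumn {
    data: HashMap<Vec<u8>, Vec<u8>>,
}

impl MockColumn {
    // One synchronous batched write instead of many parallel `put` calls.
    fn write_batch(&mut self, batch: Vec<(Vec<u8>, Vec<u8>)>) {
        for (key, value) in batch {
            self.data.insert(key, value);
        }
    }

    // One batched read instead of many parallel `get` calls.
    fn multi_get(&self, keys: &[Vec<u8>]) -> Vec<Option<Vec<u8>>> {
        keys.iter().map(|k| self.data.get(k).cloned()).collect()
    }
}

fn main() {
    let mut column = MockColumn::default();
    column.write_batch(vec![
        (b"block_1".to_vec(), b"hash_1".to_vec()),
        (b"block_2".to_vec(), b"hash_2".to_vec()),
    ]);

    let results = column.multi_get(&[b"block_1".to_vec(), b"block_3".to_vec()]);
    assert_eq!(results[0].as_deref(), Some(b"hash_1".as_slice()));
    assert_eq!(results[1], None);
    println!("batch write + multi_get ok");
}
```

The same shape applies per column family in the real backend: build the whole batch up front, then hand it to the database in a single call.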

Changes were also made to the data serialization model, with bincode now being the primary method of serialization instead of parity-scale-codec. StorageContractData no longer uses binary trees to store class hash and nonce history, preferring instead our own revertible History<T> implementation. This should also have the effect of reducing storage usage in this area. New accessors were also added to simplify retrieving the nonce and class hash compared to before.
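As a rough illustration of the idea behind a revertible history (this is a hypothetical sketch, not the actual `History<T>` from the Deoxys codebase): values are appended together with the block number at which they were set, so the latest value is a simple last-element lookup and a revert is a truncation, with no tree structure needed.

```rust
// Hypothetical sketch of a revertible history: append-only entries keyed by
// block number, with revert implemented as truncation.
#[derive(Default)]
struct History<T> {
    // (block_number, value), ordered by insertion (ascending block number).
    entries: Vec<(u64, T)>,
}

impl<T> History<T> {
    fn push(&mut self, block_number: u64, value: T) {
        self.entries.push((block_number, value));
    }

    // Latest value, if any (what an accessor like `get_nonce` would read).
    fn get(&self) -> Option<&T> {
        self.entries.last().map(|(_, v)| v)
    }

    // Drop everything recorded after `block_number`.
    fn revert_to(&mut self, block_number: u64) {
        self.entries.retain(|(n, _)| *n <= block_number);
    }
}

fn main() {
    let mut nonces: History<u64> = History::default();
    nonces.push(1, 10);
    nonces.push(5, 11);
    assert_eq!(nonces.get(), Some(&11));
    nonces.revert_to(2);
    assert_eq!(nonces.get(), Some(&10));
    println!("history revert ok");
}
```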

Work was also done refactoring the primitive types StorageContractData and StorageContractClassData. These have been moved from crates/primitives to a new client/db/src/storage_handler/primitives folder. Please place any struct used as a storage wrapper there in the future.

All in all, this has resulted in a 10% improvement in sync time.

Does this introduce a breaking change?

Yes. Due to the change in storage format, nodes already running will have to restart synchronizing from block 0.

@Trantorian1 Trantorian1 added the feature Request for new feature or enhancement label Apr 29, 2024
@Trantorian1 Trantorian1 self-assigned this Apr 29, 2024
@@ -10,17 +9,17 @@ impl BlockHashView {
     pub fn insert(&mut self, block_number: u64, block_hash: &Felt252Wrapper) -> Result<(), DeoxysStorageError> {
         let db = DeoxysBackend::expose_db();
         let column = db.get_column(Column::BlockNumberToHash);
-        db.put_cf(&column, block_number.encode(), block_hash.encode())
+        db.put_cf(&column, bincode::serialize(&block_number).unwrap(), bincode::serialize(&block_hash).unwrap())
Member
maybe create new Storage error for serialize and deserialize
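One way to act on this suggestion would be to wrap serialization failures in a dedicated error variant instead of calling `.unwrap()`. The variant and helper names below are hypothetical, purely to illustrate the shape; the stand-in serializer mimics a fallible API like `bincode::serialize` without pulling in the dependency.

```rust
// Hypothetical sketch of the suggested error variant. Names are illustrative,
// not the merged code.
#[derive(Debug, PartialEq)]
enum DeoxysStorageError {
    SerializationError(String),
}

// Dependency-free stand-in for a fallible serializer such as
// `bincode::serialize`: little-endian encoding of a u64 never fails here,
// but the Result signature mirrors the real API.
fn try_serialize(block_number: u64) -> Result<Vec<u8>, String> {
    Ok(block_number.to_le_bytes().to_vec())
}

// Map the serializer's error into the storage error instead of unwrapping.
fn serialize_key(block_number: u64) -> Result<Vec<u8>, DeoxysStorageError> {
    try_serialize(block_number).map_err(DeoxysStorageError::SerializationError)
}

fn main() {
    let key = serialize_key(7).expect("serialization should succeed");
    assert_eq!(key, 7u64.to_le_bytes().to_vec());
    println!("error-wrapping sketch ok");
}
```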

pub fn get_at(
pub fn get_nonce(self, contract_address: &ContractAddress) -> Result<Option<Nonce>, DeoxysStorageError> {
self.get(contract_address)
.map(|option| option.map(|contract_data| contract_data.nonce.get().cloned().unwrap_or_default()))
Member
use as_ref() instead of cloned()
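For context on this suggestion, here is a small self-contained illustration (not the PR's code): on an `Option`, `as_ref()` yields `Option<&T>` without copying, whereas `cloned()` eagerly produces an owned `Option<T>`. Borrowing first and cloning only when an owned value is genuinely needed avoids unnecessary allocations.

```rust
fn main() {
    let nonce: Option<String> = Some(String::from("0x1"));

    // Borrow: no allocation, and `nonce` remains usable afterwards.
    let borrowed: Option<&String> = nonce.as_ref();
    assert_eq!(borrowed.map(|s| s.as_str()), Some("0x1"));

    // Clone only at the point an owned value is actually required.
    let owned: String = nonce.as_ref().cloned().unwrap_or_default();
    assert_eq!(owned, "0x1");
    println!("as_ref vs cloned ok");
}
```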

@jbcaron jbcaron merged commit 42f8207 into madara-alliance:main Apr 30, 2024
7 checks passed
Labels
feature Request for new feature or enhancement
3 participants