
Quick and dirty PoC for syncing from Portal history network #2910

Draft
kdeme wants to merge 1 commit into master from quick-poc-import-portal-blocks
Conversation

@kdeme (Contributor) commented Dec 4, 2024

This PR is not intended to be merged. It is a very quick and dirty implementation, meant to test the Portal network and Fluffy code and to verify how long block downloads take compared with executing them.
It somewhat abuses the current import-from-era code to do this.

I think (?) that in an improved version the block downloads should probably lead the implementation and trigger execution (right now it is somewhat the reverse, which makes sense for era files). Perhaps that way the execution could even be offloaded to another thread? A rough sketch of that idea is below.
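
To illustrate the intended flow (this is not code from the PR): a minimal producer/consumer sketch in Python, where download workers lead and feed a bounded queue, while a separate thread executes blocks in order. The names download_block and execute_block are hypothetical placeholders, not Fluffy or nimbus-eth1 APIs.

```python
import queue
import threading

def download_block(number: int) -> bytes:
    # Hypothetical placeholder for a Portal history network block lookup.
    raise NotImplementedError

def execute_block(block: bytes) -> None:
    # Hypothetical placeholder for executing a block (state transition).
    raise NotImplementedError

def run_pipeline(start: int, count: int, workers: int = 512) -> None:
    numbers = queue.Queue()                 # block numbers still to download
    downloaded = queue.Queue(maxsize=8192)  # bounded buffer between the two stages
    for n in range(start, start + count):
        numbers.put(n)

    def downloader() -> None:
        while True:
            try:
                n = numbers.get_nowait()
            except queue.Empty:
                return
            downloaded.put((n, download_block(n)))

    def executor() -> None:
        pending = {}
        next_block = start
        while next_block < start + count:
            n, block = downloaded.get()
            pending[n] = block
            # Downloads complete out of order; execute strictly in order.
            while next_block in pending:
                execute_block(pending.pop(next_block))
                next_block += 1

    download_threads = [threading.Thread(target=downloader) for _ in range(workers)]
    executor_thread = threading.Thread(target=executor)
    executor_thread.start()
    for t in download_threads:
        t.start()
    for t in download_threads:
        t.join()
    executor_thread.join()
```

The bounded buffer is the main point of the sketch: downloads can run ahead of execution without unbounded memory growth, and execution is no longer in the download loop.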

It is also coded without using the JSON-RPC API, as I found that easier for a quick version, but the getBlock call could be changed to use the JSON-RPC alternative.
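
For reference, a sketch of what that JSON-RPC alternative could look like, assuming a node that exposes the standard eth_getBlockByNumber method on localhost:8545 (endpoint and port are assumptions, not what this PR uses):

```python
import json
import urllib.request

def get_block(number: int, url: str = "http://127.0.0.1:8545") -> dict:
    # Fetch a block via the standard eth_getBlockByNumber JSON-RPC method.
    payload = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "eth_getBlockByNumber",
        "params": [hex(number), True],  # True: include full transaction objects
    }
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["result"]
```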

@kdeme (Contributor, Author) commented Dec 4, 2024

Note that it will currently start failing fairly quickly because of issue #2901.

@kdeme force-pushed the quick-poc-import-portal-blocks branch from 4dd1896 to 5fbe874 on January 10, 2025 10:51
@kdeme force-pushed the quick-poc-import-portal-blocks branch from 5fbe874 to 24171c3 on February 11, 2025 11:08
@kdeme (Contributor, Author) commented Feb 11, 2025

Some stats when syncing small blocks (from genesis):

With 512 workers:

INF 2025-02-11 10:25:41.348+01:00 Finished downloading 8192 blocks           startBlock=196609
INF 2025-02-11 10:25:41.437+01:00 Downloading 8192 blocks                    startBlock=204801
INF 2025-02-11 10:25:42.044+01:00 Imported blocks                            blockNumber=204801 slot=1 blocks=204800 txs=127337 mgas=3727 bps=236.4 tps=272.2 mgps=8.210 avgBps=194.1 avgTps=120.7 avgMGps=3.533 elapsed=17m35s48ms

With 1024 workers:

INF 2025-02-11 10:46:20.649+01:00 Finished downloading 8192 blocks           startBlock=237569
INF 2025-02-11 10:46:20.728+01:00 Downloading 8192 blocks                    startBlock=245761
INF 2025-02-11 10:46:21.331+01:00 Imported blocks                            blockNumber=245761 slot=1 blocks=245760 txs=174914 mgas=5076 bps=249.9 tps=279.9 mgps=7.234 avgBps=218.1 avgTps=155.2 avgMGps=4.505 elapsed=18m47s42ms

Note that this is download + processing; processing for these blocks is negligible. Also, these are averages: there are quite a few outliers, sometimes reaching 400 bps and sometimes dropping to 100 bps, mostly depending on lookup failures I think.
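
As a quick sanity check, avgBps in these log lines is simply the total number of imported blocks divided by the total elapsed time:

```python
# avgBps = total imported blocks / total elapsed seconds
print(204800 / (17 * 60 + 35.048))  # ~194.1, matches the 512-worker run
print(245760 / (18 * 60 + 47.042))  # ~218.1, matches the 1024-worker run
```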

At block 2395517 (these blocks are still small compared to today, but already require more tx processing):

INF 2025-02-11 11:40:08.730+01:00 Downloading 8192 blocks                    startBlock=2395517
INF 2025-02-11 11:41:57.350+01:00 Finished downloading 8192 blocks           startBlock=2395517
INF 2025-02-11 11:41:57.487+01:00 Downloading 8192 blocks                    startBlock=2403709
INF 2025-02-11 11:45:41.562+01:00 Imported blocks                            blockNumber=2403709 slot=1 blocks=8192 txs=55316 mgas=3423 bps=24.61 tps=166.2 mgps=10.29 avgBps=24.61 avgTps=166.2 avgMGps=10.29 elapsed=5m32s827ms
INF 2025-02-11 11:47:22.919+01:00 Finished downloading 8192 blocks           startBlock=2403709
INF 2025-02-11 11:47:23.045+01:00 Downloading 8192 blocks                    startBlock=2411901
INF 2025-02-11 11:48:58.297+01:00 Imported blocks                            blockNumber=2411901 slot=1 blocks=16384 txs=112184 mgas=7043 bps=41.64 tps=289.1 mgps=18.40 avgBps=30.94 avgTps=211.8 avgMGps=13.30 elapsed=8m49s560ms

As can be seen, bps drops considerably. But pure downloads run at about 75-80 bps (and a lot of download failures actually occur, meaning a lot of retries), compared to ~30 bps for download + processing. I believe this is a good first indication that, if we can offload processing to another thread, downloads might be able to keep up with processing.

edit: I might have been too early with the above statement, as the range I was testing with seems to include the DDoS days.
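
To make the overlap argument concrete (a rough estimate only, assuming per-block download and execution times simply add up in the current serial loop, and with the same DDoS-range caveat as above):

```python
# If the serial loop does ~30 bps overall and pure download alone does ~75-80 bps,
# the implied execution-only rate is 1 / (1/30 - 1/77) ~ 49 bps. With the two
# stages overlapped on separate threads, throughput would be bounded by the
# slower stage, i.e. roughly min(77, 49) ~ 49 bps for this range.
download_bps = 77.0  # midpoint of the observed 75-80 bps pure-download rate
combined_bps = 30.0  # observed download + processing rate
execute_bps = 1 / (1 / combined_bps - 1 / download_bps)
print(execute_bps, min(download_bps, execute_bps))
```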

1M blocks later:

INF 2025-02-11 12:41:53.523+01:00 Finished downloading 8192 blocks           startBlock=3157351 bps=91.05309148341027
INF 2025-02-11 12:41:53.645+01:00 Downloading 8192 blocks                    startBlock=3165543
INF 2025-02-11 12:41:56.350+01:00 Imported blocks                            blockNumber=3165543 slot=1 blocks=32768 txs=272974 mgas=10114 bps=90.81 tps=734.5 mgps=28.37 avgBps=88.24 avgTps=735.1 avgMGps=27.24 elapsed=6m11s336ms

@kdeme (Contributor, Author) commented Feb 11, 2025

At the 4.4M block range:

INF 2025-02-11 13:25:30.083+01:00 Finished downloading 8192 blocks           startBlock=4431929 bps=59.36405788613555
INF 2025-02-11 13:26:10.645+01:00 Imported blocks                            blockNumber=4440121 slot=1 blocks=8192 txs=642143 mgas=35189 bps=45.89 tps=3597 mgps=197.1 avgBps=45.89 avgTps=3597 avgMGps=197.1 elapsed=2m58s501ms

It should be noted that lots of block lookups fail and need to be retried.
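
A minimal sketch of the kind of retry loop this implies (portal_lookup_block is a hypothetical placeholder, not an actual Fluffy call):

```python
import time

def portal_lookup_block(number: int) -> bytes:
    # Hypothetical placeholder for the real Portal history network lookup.
    raise NotImplementedError

def lookup_with_retries(number: int, attempts: int = 5, delay: float = 0.5) -> bytes:
    # Retry a failing lookup a few times before giving up.
    for i in range(attempts):
        try:
            return portal_lookup_block(number)
        except Exception:
            if i == attempts - 1:
                raise
            time.sleep(delay)
```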
