Introduce long running mode to linera benchmark #3265
base: main
Conversation
This stack of pull requests is managed by Graphite.
linera-client/src/client_context.rs (Outdated)
    };

    #[cfg(feature = "benchmark")]
    fn deserialize_response(response: RpcMessage) -> Option<ChainInfoResponse> {

(You could make it a static method of ChainClient below; one less cfg attribute.)
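A minimal sketch of the reviewer's suggestion, with mock types standing in for the real linera types (all names here are assumptions, not the PR's actual code):

```rust
// Mock stand-ins for the real RpcMessage / ChainInfoResponse / ChainClient.
struct RpcMessage(Option<ChainInfoResponse>);

#[derive(Debug, PartialEq)]
struct ChainInfoResponse(u32);

struct ChainClient;

impl ChainClient {
    // In the real code this associated function would carry
    // #[cfg(feature = "benchmark")]; hanging it off ChainClient means the
    // attribute lives here instead of on a separate free function.
    fn deserialize_response(response: RpcMessage) -> Option<ChainInfoResponse> {
        response.0
    }
}

fn main() {
    let resp = ChainClient::deserialize_response(RpcMessage(Some(ChainInfoResponse(7))));
    assert_eq!(resp, Some(ChainInfoResponse(7)));
    println!("ok");
}
```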
linera-client/src/client_options.rs (Outdated)

@@ -602,6 +602,11 @@ pub enum ClientCommand {
    /// If none is specified, the benchmark uses the native token.
    #[arg(long)]
    fungible_application_id: Option<linera_base::identifiers::ApplicationId>,
Suggested change:
-    /// If provided, will be long running, and block proposals will be send at the
+    /// If provided, will be long running, and block proposals will be sent at the
            num_sent_proposals = 0;
        }
    }
}

This will be slower than running run_benchmark in a loop. What's the advantage?
The goal here is having block proposals be sent at a given, controlled BPS/TPS. This will allow us to ramp it up while watching metrics (client resources like CPU/memory/IO, and validator metrics like proxy latency, etc.). This way we can figure out what the max TPS is for different network configurations, and also play around with parameters to find potential bottlenecks.
But like I said, this is version 1 from the doc. It's not very scalable, so it will become more useful as we build the next versions.
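The controlled-rate idea above can be sketched as a simple pacing loop. This is a hypothetical illustration, not the PR's implementation: `interval_for_bps`, `num_sent_proposals`, and the target of 50 BPS are all assumed names and values.

```rust
use std::time::Duration;

/// Interval between proposals needed to sustain `bps` blocks per second.
/// Hypothetical helper; the PR's actual pacing code may differ.
fn interval_for_bps(bps: u32) -> Duration {
    assert!(bps > 0, "target BPS must be positive");
    Duration::from_secs(1) / bps
}

fn main() {
    // At a target of 50 BPS, one proposal every 20 ms.
    let tick = interval_for_bps(50);
    assert_eq!(tick, Duration::from_millis(20));

    // Naive pacing loop for one second's worth of proposals: the real loop
    // would send a block proposal each tick, sleep until the next deadline,
    // and reset the counter every second (as in the snippet above, where
    // num_sent_proposals is zeroed).
    let mut num_sent_proposals = 0u32;
    while num_sent_proposals < 50 {
        // send_block_proposal() would go here, followed by a sleep.
        num_sent_proposals += 1;
    }
    assert_eq!(num_sent_proposals, 50);
    println!("tick = {:?}", tick);
}
```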
I'm keeping the old version around, btw, because if we ever replace linera-benchmark with something that calls this, we'll probably need the old version for CI, as we won't want it to run continuously.
This is not a blocker from my side, but I don't think we'll need two separate implementations for this in the long run. A limit of blocks per chain that can be set to 1 would make run_benchmark a special case of run_benchmark_long_running.
True, we can think about unifying them at some point, but for now it felt easier to keep them separate. I can look into that in a follow-up PR.
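The reviewer's unification idea can be sketched with a per-chain block limit, where a limit of 1 reduces the long-running mode to the one-shot benchmark. The function name and shape are assumptions for illustration, not the PR's code:

```rust
/// Hypothetical unified accounting: total proposals to send given an
/// optional per-chain block limit. `Some(1)` degenerates to the one-shot
/// run_benchmark behavior (one block per chain); `None` means the
/// long-running mode with no upper bound.
fn proposals_to_send(max_blocks_per_chain: Option<u64>, num_chains: u64) -> u64 {
    match max_blocks_per_chain {
        Some(limit) => limit.saturating_mul(num_chains),
        None => u64::MAX, // long-running: effectively unbounded
    }
}

fn main() {
    // One-shot benchmark: one block on each of 10 chains.
    assert_eq!(proposals_to_send(Some(1), 10), 10);
    // Bounded long run: 3 blocks on each of 4 chains.
    assert_eq!(proposals_to_send(Some(3), 4), 12);
    // Unbounded long-running mode.
    assert_eq!(proposals_to_send(None, 10), u64::MAX);
    println!("ok");
}
```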
Motivation
We need a way of proposing blocks to a network at a fixed BPS (blocks per second) rate.
Proposal
Introduce a long-running mode in which you can specify a target BPS rate, and the benchmark will attempt to sustain it. This is the most naive version and doesn't scale very well.
We need to figure out how much we're going to parallelize this before having another binary run it in different processes with different wallets.
My proposal is that we implement up to Version 3, then replace the "worker tasks" with separate processes using separate wallets.
Test Plan
Ran it locally against a local kind network.
Release Plan