feat: dynamic creation and removal of workers #989
Conversation
Force-pushed from 7b4f72b to 326589e.
These changes will be persisted to the configuration file.

Note that, in order to update the workers, the proxy must be running on the same computer where the command is executed, because the proxy checks that the client address is localhost to avoid security issues.

Both the worker and the proxy will use the `info` log level by default, but it can be changed by setting the `RUST_LOG` environment variable.
nit: I think this sentence should be in another section, as it's not related to the update of workers.
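As context for the logging sentence above, this is roughly how a default `info` level with a `RUST_LOG` override is usually wired up. This is only a sketch assuming a `tracing`/`tracing-subscriber` setup, which is not confirmed by this diff:

```rust
use tracing_subscriber::EnvFilter;

fn init_logging() {
    // Honor RUST_LOG when it is set; otherwise fall back to the `info` level.
    let filter = EnvFilter::try_from_default_env()
        .unwrap_or_else(|_| EnvFilter::new("info"));

    tracing_subscriber::fmt().with_env_filter(filter).init();
}
```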
bin/tx-prover/src/proxy/mod.rs (outdated)
```rust
if let Some(index) = available_workers.iter().position(|w| w == &worker) {
    available_workers.remove(index);
}
if let Some(index) = current_workers.iter().position(|w| w == &worker) {
    current_workers.remove(index);
}
```
Here I think we can use the `Vec::retain` method, which is simpler.
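For reference, a minimal sketch of the suggested form (it assumes `Worker: PartialEq`, which the existing `position(|w| w == &worker)` call already implies). Note that `retain` removes every matching entry, not just the first one, which is fine here since the PR also prevents worker duplication:

```rust
// Keep only the entries that differ from `worker`, dropping any match
// in a single pass instead of locating an index and calling `remove`.
available_workers.retain(|w| w != &worker);
current_workers.retain(|w| w != &worker);
```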
Sounds good.
bin/tx-prover/src/proxy/mod.rs (outdated)
```rust
let mut available_workers = self.available_workers.write().await;
let mut current_workers = self.current_workers.write().await;
```
The fact that we have two separate locks for the separate worker lists may lead to a deadlock (I don't think it's the case here as the lock order is always the same). We may want to unify them so that we don't have that possibility.
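A minimal sketch of what unifying the lists behind one lock could look like; the struct and field names below are illustrative, not the actual ones in this PR:

```rust
use std::sync::Arc;
use tokio::sync::RwLock;

#[derive(Clone, PartialEq)]
struct Worker {
    address: String,
}

// Both lists live behind a single RwLock, so every update goes through
// one `.write().await` and there is no second lock whose acquisition
// order could ever be inverted.
#[derive(Default)]
struct WorkerLists {
    available_workers: Vec<Worker>,
    current_workers: Vec<Worker>,
}

struct ProxyState {
    workers: Arc<RwLock<WorkerLists>>,
}

impl ProxyState {
    async fn remove_worker(&self, worker: &Worker) {
        let mut lists = self.workers.write().await;
        lists.available_workers.retain(|w| w != worker);
        lists.current_workers.retain(|w| w != worker);
    }
}
```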
I could not see a situation in our particular code that leads to a deadlock; as you said, we always lock in the same order. Though if we prefer, I can unify the locks.
Looks good! Thank you! I left some comments inline.
Also, one general question: is it possible to somehow "spoof" the request to make it appear that it is coming from localhost? Mostly trying to understand how secure this approach is.
I have been investigating this: an attacker could impersonate "localhost" from outside the machine only if they have access to the same VPN the server is connected to, or if they gain SSH access. I'm currently investigating Pingora internals a bit more to check if I can create another
cc @bobbinth
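For context on the check being discussed, a sketch of the basic idea (how the client address is actually obtained from Pingora in this PR is not shown here):

```rust
use std::net::SocketAddr;

// Accept update requests only when the TCP source address is a loopback
// address (127.0.0.0/8 or ::1). Requests arriving from other hosts carry
// their own source address and are rejected.
fn is_local_client(client_addr: &SocketAddr) -> bool {
    client_addr.ip().is_loopback()
}
```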
Force-pushed from ce83e2c to 254537b.
Looks good! Thank you! I left a few small comments inline. After these are addressed, we can merge.
Force-pushed from e19984a to 05da62e.
- feat: dynamic creation and removal of workers
- fix: don't allow worker duplication, don't panic on invalid action
- feat: add command to add/remove workers, improve update workers response, improve error handling, add status for workers
- docs: update README
- feat: use two lists instead of Worker Status
- fix: use proxy config from file in update-worker
- docs: improve code documentation
- docs: add entry to changelog, use vec::retain, update readme, adding logging sections
- fix: separate update workers command in two commands
- refactor: use only one vec to store workers
- chore: use X-Workers-Count, add doc for function
- fix: use correct header
- chore: rename X-Worker-Count, fix doc
- chore: rename current_workers, rename current_workers_count
- feat: implement TryInto<WorkerConfig> for &Worker
Force-pushed from 05da62e to e5097fe.
Closes #963
Adds a command to add/remove workers from a running proxy. It updates the configuration file.
Can be used by running:
It works by keeping both a list of available workers and another list for all the workers known to the proxy. The proxy can be updated only if the sender of the update request is localhost.