Always allocate regions to different sleds #7382
Conversation
When performing region allocation _again_ for a particular volume and increasing the redundancy, the region allocation query was not excluding the sleds already used by the existing allocation as candidates for the new region (note that this only matters when the allocation strategy mandates distinct sleds!). This resulted in region replacement and region snapshot replacement having the potential to allocate the replacement region onto the same sled as an existing region.

This commit fixes the region allocation query, and then fixes the fallout: many tests of region replacement or region snapshot replacement were not creating many simulated sled agents, and therefore could not satisfy the fixed region allocation query. The `extra_sled_agents` parameter added in oxidecomputer#7353 was used, along with changing the `DiskTestBuilder` to simulate one zpool on many sleds.

Changing these replacement tests revealed the next problem: because all the simulated sleds share the same IP (::1) and range of ports for the fake Crucible agents, each region allocation would have the same target, and the volume construction request would look like:

```json
"target": [
    "[::1]:1100",
    "[::1]:1100",
    "[::1]:1100"
]
```

This is very confusing for the replacement routines! We can't change the simulated sleds' IPs: they need to bind to localhost to listen for requests. Therefore this commit also adds the idea of a "sled index", and this sled index is used to make each range of Crucible agent ports unique.

This led straight to the next problem: the simulated Pantry assumes that there is really only one simulated sled-agent, and that it has all the storage! This is no longer true in a post-oxidecomputer#7353 world. Snapshots would not be able to complete because the simulated Pantry would only find one of the three regions when searching the first simulated sled-agent's storage.

The real Pantry would construct a Crucible Upstairs and delegate snapshot requests to that, so this commit also adds a simulated Crucible Upstairs that's aware of each simulated sled agent's `Storage`, and can therefore fulfill snapshot requests. Centralizing the place where all simulated sled agents (and Pantry servers!) are created makes this easy: the simulated Upstairs can be aware of all of the `Storage` objects, and the simulated Pantry can use that simulated Upstairs to take fake snapshots.

Fixes oxidecomputer/crucible#1594.
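The shape of the query fix, as a simplified sketch (the CTE and table names follow the fragments quoted in the review below; `existing_zpools` stands for the CTE of zpools used by the current allocation, and this is not the literal query):

```sql
-- Gather the sleds already hosting this volume's regions, then exclude
-- them from the candidate set when the strategy demands distinct sleds.
WITH existing_sleds AS (
    SELECT zpool.sled_id AS id
    FROM zpool
    WHERE zpool.id = ANY(SELECT pool_id FROM existing_zpools)
)
SELECT zpool.id
FROM zpool
WHERE zpool.sled_id NOT IN (SELECT id FROM existing_sleds);
```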
LGTM, just some nits and questions.
```diff
 GoneCheckError(#[source] Error),

 /// The retry loop progenitor operation saw a permanent client error
-#[error("permanent error")]
+#[error("permanent error: {0}")]
```
Nit - this (and the same change a few lines up) is not right. Whatever is printing or logging errors should print the full source chain using something like anyhow's `#` formatting or slog-error-chain, at which point this will result in double printing of messages: `permanent error: some error: some error`
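As a hypothetical illustration of the double printing (a minimal sketch with made-up types, not the actual omicron error enum):

```rust
use thiserror::Error;

#[derive(Debug, Error)]
#[error("some error")]
struct Inner;

#[derive(Debug, Error)]
enum Outer {
    // The message embeds the source via {0} *and* marks it #[source],
    // so a chain-walking reporter renders the inner message twice.
    #[error("permanent error: {0}")]
    Permanent(#[source] Inner),
}

fn main() {
    let err = anyhow::Error::from(Outer::Permanent(Inner));
    // anyhow's alternate formatting appends the source chain:
    println!("{err:#}"); // permanent error: some error: some error
}
```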
fixed in d508812
```sql
INNER JOIN
    sled ON (sled.id = physical_disk.sled_id)
WHERE
    zpool.id = ANY(SELECT pool_id FROM existing_zpools)
```
TIL `= ANY` is equivalent to `IN`. (I would have written this as `zpool.id IN (SELECT ...)`.)
" | ||
existing_sleds AS ( | ||
SELECT | ||
sled.id |
Could this be `zpool.sled_id` and then avoid the joins against `physical_disk`/`sled`?
TIL that zpool has that field! 703e3c7
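In other words, the suggested simplification looks roughly like this (a sketch; the `zpool` column names are assumed from the discussion):

```sql
-- Before: walk zpool -> physical_disk -> sled to recover the sled id.
SELECT sled.id
FROM zpool
INNER JOIN physical_disk ON (physical_disk.id = zpool.physical_disk_id)
INNER JOIN sled ON (sled.id = physical_disk.sled_id)
WHERE zpool.id = ANY(SELECT pool_id FROM existing_zpools);

-- After: zpool carries the sled id directly.
SELECT zpool.sled_id
FROM zpool
WHERE zpool.id = ANY(SELECT pool_id FROM existing_zpools);
```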
```diff
@@ -9,7 +9,7 @@ use syn::{parse_macro_input, ItemFn, Token};
 #[derive(Debug, PartialEq, Eq, Hash)]
 pub(crate) enum NexusTestArg {
     ServerUnderTest(syn::Path),
-    ExtraSledAgents(usize),
+    ExtraSledAgents(u16),
```
This seems fine but out of curiosity - why the change from `usize`?

Edit: From later in the PR, it looks like it's because this is used to calculate a port number? I think I'd be inclined to keep this as a `usize` and map index -> port at the point where that's needed. But this isn't a big deal either way.
I figured changing the type here would cut down on the conversions and checked arithmetic later, and also 65536 seemed like a more reasonable limit for the number of simulated sled agents :)
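A sketch of the trade-off being discussed (illustrative numbers and hypothetical helpers, not the actual port layout):

```rust
// With usize, each use as a port needs a fallible narrowing conversion:
fn port_for_index_usize(index: usize) -> u16 {
    u16::try_from((index + 1) * 1000).expect("sled index out of port range")
}

// With u16, the arithmetic stays in one type; checked_* still guards
// against overflow, and u16 naturally bounds the sled-agent count:
fn port_for_index_u16(index: u16) -> u16 {
    index
        .checked_add(1)
        .and_then(|i| i.checked_mul(1000))
        .expect("sled index out of port range")
}
```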
```diff
@@ -73,8 +73,11 @@ async fn test_sleds_list(cptestctx: &ControlPlaneTestContext) {
         log,
         addr,
         sa_id,
+        // Index starts at 2
```
Might be worth expanding this slightly: "Index starts at 2 because our `nexus_test` config created 2 sled agents already" or something like that.
👍 afe335d
```rust
for sled_agent in cptestctx.all_sled_agents() {
    let zpool_id =
        TypedUuid::from_untyped_uuid(db_read_only_dataset.pool_id);
```
Nit - can we use the specific kind of Uuid here?
```diff
-TypedUuid::from_untyped_uuid(db_read_only_dataset.pool_id);
+ZpoolUuid::from_untyped_uuid(db_read_only_dataset.pool_id);
```
good idea, done in 474e085
sled-agent/src/sim/sled_agent.rs (Outdated)
```diff
@@ -207,7 +174,7 @@ impl SledAgent {
             updates: UpdateManager::new(config.updates.clone()),
             nexus_address,
             nexus_client,
-            disk_id_to_region_ids: Mutex::new(HashMap::new()),
+            simulated_upstairs: simulated_upstairs.clone(),
```
Nit - can we drop this `.clone()`?
```diff
-simulated_upstairs: simulated_upstairs.clone(),
+simulated_upstairs,
```
yep - 7922496
sled-agent/src/sim/storage.rs (Outdated)
```rust
self.next_crucible_port += 100;
assert!(self.next_crucible_port < 1000);
```
I'm missing something here. In `new()` I see

```rust
next_crucible_port: (sled_index + 1) * 1000,
```

which looks like this assertion would always fail (since `next_crucible_port` starts at N-thousand for the Nth sled-agent)?

Similarly confused question: the comment says "10 per dataset", but `next_crucible_port` looks like it's incremented independently of datasets. Is it 10 per dataset or 10 total?
I think both of us are missing something here! This worked for me when I ran the full test suite before opening this PR, but now I'm seeing

```
thread 'test_omdb_env_settings' panicked at sled-agent/src/sim/storage.rs:1604:9:
assertion failed: self.next_crucible_port < 1000
```

What should be asserted is that (the next port - the starting port) is less than the range, which is done in d440dc0. That commit also changes the ranges so that there can be a maximum of 20 datasets/agents per sled (and therefore 50 regions/snapshots per agent), and also clarifies the comment.
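A minimal sketch of the corrected bookkeeping (names and exact constants are assumptions based on the numbers described above, not the code in d440dc0):

```rust
// Each sled gets a 1000-port window; each Crucible agent within a sled
// takes a 50-port slice, so at most 20 agents fit per sled.
const PORTS_PER_SLED: u16 = 1000;
const PORTS_PER_AGENT: u16 = 50;

struct PortAllocator {
    start_port: u16,
    next_crucible_port: u16,
}

impl PortAllocator {
    fn new(sled_index: u16) -> Self {
        let start = (sled_index + 1) * PORTS_PER_SLED;
        Self { start_port: start, next_crucible_port: start }
    }

    fn next_range(&mut self) -> u16 {
        let port = self.next_crucible_port;
        self.next_crucible_port += PORTS_PER_AGENT;
        // Assert on the offset into this sled's window rather than on the
        // absolute port number (the earlier bug): the window is exhausted
        // once 20 agents * 50 ports have been handed out.
        assert!(self.next_crucible_port - self.start_port <= PORTS_PER_SLED);
        port
    }
}
```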
Ahh, that makes sense. Thanks, the new comment is very helpful.
sled-agent/src/sim/upstairs.rs (Outdated)
```rust
    volume_construction_request: &VolumeConstructionRequest,
) {
    let mut inner = self.inner.lock().unwrap();
    // XXX what if existing entry? assert?
```
Can callers "remap" IDs to new VCRs? If not, seems reasonable to either assert or return a Result
, I think? (I guess a variant of this question would be: is it reasonable for a caller to call this multiple times with the same ID -> VCR mapping, and if so should that be okay?)
Actually yeah, VCR changes happen all the time due to read-only parent removal and the various replacements. Updated this comment in da4aca6.
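So overwriting is the intended behavior. Roughly (a sketch with hypothetical field names; the real `VolumeConstructionRequest` comes from the crucible client types):

```rust
use std::collections::HashMap;
use std::sync::Mutex;
use uuid::Uuid;

// Stand-in for crucible's VolumeConstructionRequest.
#[derive(Clone)]
struct VolumeConstructionRequest;

struct SimulatedUpstairs {
    inner: Mutex<Inner>,
}

struct Inner {
    vcr_by_volume: HashMap<Uuid, VolumeConstructionRequest>,
}

impl SimulatedUpstairs {
    fn map_id_to_vcr(&self, id: Uuid, vcr: &VolumeConstructionRequest) {
        let mut inner = self.inner.lock().unwrap();
        // HashMap::insert overwrites any existing entry: remapping an ID
        // to a new VCR is expected, since read-only parent removal and the
        // various replacement operations all rewrite the VCR.
        inner.vcr_by_volume.insert(id, vcr.clone());
    }
}
```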