⁉️ Switch to using the on-demand allocator, instead of the pooling allocator #391

Merged 4 commits into main from awick/on-demand-allocator on Jun 20, 2024

Conversation

@acw (Contributor) commented on Jun 18, 2024:

Fixes #255.

In addition to fixing #255, this also allows the test suite to be run in parallel again. At least on my machines, running `make test` directly would cause memory failures; I'm guessing this was due to the pooling allocator's use of virtual memory. Switching to the on-demand allocator makes things run fine.

This extends the built-in `Limiter` object to use a `StoreLimits` under the hood, as suggested in the issue. It then builds different versions of this limiter depending on whether Viceroy is operating in a component or non-component context, because component-based systems require more instances and more tables than traditional ones.
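To make the design concrete, here is a minimal sketch of what such a limiter could look like on top of wasmtime's `StoreLimits`; the struct and constructor names are illustrative assumptions, not Viceroy's actual source.

```rust
use wasmtime::{StoreLimits, StoreLimitsBuilder};

/// Illustrative wrapper: a limiter backed by wasmtime's StoreLimits.
/// (Hypothetical layout, not taken from Viceroy.)
pub struct Limiter {
    /// StoreLimits implements wasmtime's ResourceLimiter trait, so it can be
    /// handed to Store::limiter directly.
    pub limits: StoreLimits,
}

impl Limiter {
    /// Cap how many instances and tables a store may create. Components
    /// lower to several core instances and tables, so they need higher caps
    /// than a single traditional module.
    pub fn new(max_instances: usize, max_tables: usize) -> Self {
        Limiter {
            limits: StoreLimitsBuilder::new()
                .instances(max_instances)
                .tables(max_tables)
                .build(),
        }
    }
}
```

With a shape like this, a non-component context can pass small caps while the component context passes larger ones; the diff further down constructs the component limiter as `Limiter::new(100, 100)`.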

@elliottt (Contributor) commented:

Interestingly, we've always used the pooling allocator:

config.allocation_strategy(InstanceAllocationStrategy::Pooling {
    strategy: PoolingAllocationStrategy::NextAvailable,
    module_limits,
    instance_limits: InstanceLimits::default(),
});

Should we integrate the StoreLimitsBuilder change that @acfoltzer suggested in this PR as well?
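For reference, a minimal sketch of the two pieces under discussion, using wasmtime's public API (the exact Viceroy call sites and limit values are assumptions): switching the engine to on-demand allocation, and building per-store limits with `StoreLimitsBuilder`.

```rust
use wasmtime::{Config, InstanceAllocationStrategy, StoreLimits, StoreLimitsBuilder};

fn engine_config() -> Config {
    let mut config = Config::new();
    // On-demand allocation maps memory for each instance as it is created,
    // rather than reserving a large pool of virtual address space up front.
    config.allocation_strategy(InstanceAllocationStrategy::OnDemand);
    config
}

fn default_limits() -> StoreLimits {
    // Illustrative values only.
    StoreLimitsBuilder::new()
        .instances(1)
        .tables(1)
        .memories(1)
        .build()
}
```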

@elliottt (Contributor) left a comment:

The Limiter changes look good to me, but I think we need a StoreLimits on ExecuteCtx as well, to ensure that we're enforcing limits on core wasm execution.
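The limits only take effect once the store is pointed at them. A self-contained sketch of that wiring, with a hypothetical `WasmCtx` layout (not Viceroy's actual fields):

```rust
use wasmtime::{Engine, Store, StoreLimits, StoreLimitsBuilder};

struct WasmCtx {
    limits: StoreLimits,
}

fn make_store(engine: &Engine) -> Store<WasmCtx> {
    let ctx = WasmCtx {
        // Illustrative caps only.
        limits: StoreLimitsBuilder::new().instances(1).tables(1).build(),
    };
    let mut store = Store::new(engine, ctx);
    // StoreLimits implements ResourceLimiter, so growth of memories and
    // tables during core wasm execution is checked against these limits.
    store.limiter(|ctx| &mut ctx.limits);
    store
}
```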

lib/src/linking.rs: review thread (outdated, resolved)
@elliottt (Contributor) commented:

Ignore my comments about needing a Limiter on ExecuteCtx, I missed that it's already present on WasmCtx :)

@@ -90,7 +140,7 @@ impl ComponentCtx {
             wasi: builder.build(),
             session,
             guest_profiler: guest_profiler.map(Box::new),
-            limiter: Limiter::default(),
+            limiter: Limiter::new(100, 100),
A contributor commented on this line:

We'll need to change this, but I'm not too worried about it now as we're not officially supporting components yet.

@elliottt (Contributor) left a comment:
🚢

@acw merged commit 36ec2f9 into main on Jun 20, 2024 (7 checks passed).
@acw deleted the awick/on-demand-allocator branch on June 20, 2024 at 21:21.
GeeWee pushed a commit to GeeWee/Viceroy that referenced this pull request on Jul 25, 2024: "…locator (fastly#391)"
Issue closed by this pull request: Use the wasmtime on-demand allocator (#255)