
Set max_background FUSE config to 64 by default. #1137

Merged — 5 commits, Nov 18, 2024
Conversation

@adpeace (Contributor) commented Nov 14, 2024

This improves sequential read performance on instances with multiple 100 Gbps network interfaces. The max_background setting controls the number of requests allowed in the pending queue that FUSE classifies as background, which includes at least some read requests. It also indirectly controls the "congestion threshold", which defaults to 75% of max_background. When the congestion threshold is reached, FUSE stops sending the asynchronous part of readaheads from paged IO to the filesystem.
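The relationship above can be sketched as follows. This is not Mountpoint code; `default_congestion_threshold` is a hypothetical helper illustrating how the Linux kernel derives the default congestion threshold from max_background, and why raising max_background from 16 to 64 also raises the point at which FUSE stops issuing asynchronous readahead.

```rust
// Sketch (not Mountpoint code): the kernel defaults the FUSE congestion
// threshold to 75% of max_background, so raising max_background also
// raises the point at which asynchronous readahead is throttled.
fn default_congestion_threshold(max_background: u16) -> u16 {
    // Linux computes the default as max_background * 3 / 4.
    max_background * 3 / 4
}

fn main() {
    // Old kernel default: max_background = 16 -> threshold 12.
    println!("16 -> {}", default_congestion_threshold(16));
    // This change: max_background = 64 -> threshold 48.
    println!("64 -> {}", default_congestion_threshold(64));
}
```

With the old default of 16, only 12 background requests could be outstanding before readahead was throttled; at 64 that headroom grows to 48, which matches the behaviour this PR relies on.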

Testing on 2-NIC instances shows up to approximately a 29% speed-up on a sequential read workload with 32 open files, from 76.74 Gbps to 99 Gbps, for paged IO. Although we don't have enough instrumentation to fully understand the change in queueing behaviour in FUSE, we think it is likely because, with the higher limit, we can serve enough readahead requests for the object before hitting the congestion threshold, allowing Mountpoint to start prefetching later parts of the object sooner.

The value of 64 was picked by experimentation with values between 16 (the default) and 256, as well as by setting the congestion threshold explicitly. Increasing the value generally improved performance up to 64, after which performance did not improve significantly. We chose the lowest value that achieved the desired performance improvement, to reduce the chance of affecting a workload that wasn't being tested.

As well as the standard regression tests, the change was tested on trn1 instances with a 256KB sequential read workload reading 32 files in parallel over 1, 2, and 4 network interfaces. It does not regress our standard benchmarks nor performance on this test with 1 NIC in use.

This change also temporarily introduces two environment variables to tune the behaviour, so we can isolate this change if a particular workload is found to regress.

Does this change impact existing behavior?

This improves performance on large instance types. There's a risk of regression for workloads we don't test.

Does this change need a changelog entry in any of the crates?

Yes, will submit a separate PR.


By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license and I agree to the terms of the Developer Certificate of Origin (DCO).

@adpeace adpeace added the performance PRs to run benchmarks on label Nov 14, 2024
@adpeace adpeace had a problem deploying to PR integration tests November 14, 2024 19:41 — with GitHub Actions Failure
@adpeace adpeace temporarily deployed to PR integration tests November 14, 2024 20:09 — with GitHub Actions Inactive
Signed-off-by: Andrew Peace <[email protected]>
@dannycjones (Contributor) left a comment


LGTM!

@adpeace adpeace added this pull request to the merge queue Nov 18, 2024
Merged via the queue into awslabs:main with commit 7198bc8 Nov 18, 2024
26 checks passed
@dannycjones dannycjones mentioned this pull request Nov 21, 2024