
Optimize Microsoft.Extensions.Caching.Memory #111050

Open · wants to merge 3 commits into main
Conversation

pentp (Contributor) commented Jan 3, 2025

Remove some allocations and interface calls.

@dotnet-policy-service dotnet-policy-service bot added the community-contribution Indicates that the PR has been added by a community member label Jan 3, 2025
Tagging subscribers to this area: @dotnet/area-extensions-caching
See info in area-owners.md if you want to be subscribed.

mgravell (Member) commented Jan 7, 2025

The main win here is not eagerly allocating the options lists, is that right? Plus using the list-specific iterator instead of `IEnumerable<T>`/`IEnumerator<T>`. Definitely worthwhile.

However, I'm not a huge fan of exposing the naked field, even as internal. I wonder if we could have a private field and an internal getter that returns the span via `CollectionsMarshal`, returning the default/empty span if the list is absent. Alternatively, maybe an internal `HasFoos => _foos is { Count: > 0 };` and just add an `if (options.HasFoos)`? (Although then we still need to think about the enumerator; the span approach avoids that entirely, should be more direct than the custom list iterator, and additionally skips bounds checks.)
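A minimal sketch of the span-returning accessor described above (type and member names are hypothetical; `CollectionsMarshal.AsSpan` requires .NET 5+):

```csharp
using System;
using System.Collections.Generic;
using System.Runtime.InteropServices;

public sealed class Options
{
    private List<string>? _foos;

    // Public surface: lazily allocates on first write.
    public IList<string> Foos => _foos ??= new List<string>();

    // Internal read path: AsSpan(null) yields an empty span, so reads
    // never allocate, never box an enumerator, and iterate without the
    // interface-dispatch cost of IEnumerable<T>.
    internal ReadOnlySpan<string> FoosSpan => CollectionsMarshal.AsSpan(_foos);
}
```

Note that `CollectionsMarshal.AsSpan` accepts a null list and returns an empty span, which gives the "default/empty span if the list is absent" behavior without an explicit null check.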

pentp (Contributor, Author) commented Jan 8, 2025

Yes, the allocation savings come from on-demand allocation of the options lists and from avoiding an enumerator allocation each time the options are used, plus skipping the options allocation entirely in MemoryDistributedCache.Set.

I'll try the span-based variant; it could actually generate less code compared to `List<T>.Enumerator`.
Never mind: `CollectionsMarshal` is only available on .NET 5+.
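For a library that multi-targets below .NET 5, the usual escape hatch is conditional compilation on the SDK-defined target-framework symbols. A sketch (helper and method names hypothetical; summing string lengths stands in for applying each option):

```csharp
using System.Collections.Generic;

internal static class OptionsIterationHelper
{
    internal static int TotalLength(List<string>? items)
    {
        int total = 0;
#if NET5_0_OR_GREATER
        // Span-based path: no enumerator at all, and bounds checks are elided.
        foreach (var s in System.Runtime.InteropServices.CollectionsMarshal.AsSpan(items))
            total += s.Length;
#else
        // Downlevel path: foreach over the concrete List<T> still binds to the
        // allocation-free struct List<T>.Enumerator, just with bounds checks.
        if (items is not null)
            foreach (var s in items)
                total += s.Length;
#endif
        return total;
    }
}
```

Both branches avoid the `IEnumerator<T>` box; only the interface-typed `IEnumerable<T>` path allocates one.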

mgravell (Member) commented Jan 8, 2025

> Yes, the allocation savings come from on-demand allocation of options lists and avoiding an enumerator allocation each time when using options. And skipping options allocation entirely in MemoryDistributedCache.Set.
>
> I'll try the Span based variant - it could actually generate less code compared to List<T>.Enumerator. Nvm, CollectionsMarshal is only available on .NET 5+.

Since this is an internal API and isn't used via IVTA (InternalsVisibleTo) hackery except by the tests, we could actually cheat by typing the property differently on different runtimes, but perhaps we can keep things simple:

```csharp
private List<Whatever>? _foos;
public IList<Whatever> Foos => _foos ??= []; // lazy alloc
internal List<Whatever>? FoosDirect => _foos; // note: never allocs
```

This adds a line of code, but is IMO significantly preferable to exposing the naked field.

pentp (Contributor, Author) commented Jan 17, 2025

This should be ready for merging if there are no other concerns.

Labels: area-Extensions-Caching, community-contribution (indicates that the PR has been added by a community member)