
Add PipeOptions setting for retaining Memory after the PipeReader consumes it #26723

Open
davidfowl opened this issue Jul 8, 2018 · 7 comments
Labels: area-System.IO.Pipelines, backlog-cleanup-candidate (an inactive issue that has been marked for automated closure), no-recent-activity
Milestone: Future

Comments

@davidfowl
Member

Today, the Pipe implementation returns buffers to the underlying buffer pool as soon as the PipeReader fully consumes them. This keeps memory usage low, but it means buffers are handed back to the pool even when the current memory segment (the tail of the linked list) has not been exhausted.

We should have an option to avoid this behavior so that the segment sticks around longer. This will increase idle memory usage, but it can be more efficient in scenarios where you are receiving continuous data, or where you are willing to use more memory to reduce the number of calls to Rent/Return on the underlying MemoryPool<byte>.

namespace System.IO.Pipelines
{
    public class PipeOptions
    {
        /// <summary>
        /// Gets a value that determines how much idle memory should be allowed in the last segment before returning to the underlying MemoryPool<byte>.
        /// </summary>
        public int IdleMemoryThreshold { get; }
    }
}

The setting would default to 0, meaning no idle memory is allowed to sit around (this matches today's behavior). It could be set to anything between the Pipe's minimumSegmentSize setting and MemoryPool<byte>.MaxBufferSize.
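
To make the current behavior concrete, here is a small sketch using a counting MemoryPool<byte>. The `CountingPool` and `Demo` types and their names are mine, not part of the proposal; the point is only to observe that each fully consumed read hands the segment back to the pool, so the next write has to rent again.

```csharp
using System;
using System.Buffers;
using System.IO.Pipelines;
using System.Threading.Tasks;

// A pool that counts Rent/Return calls so the behavior described above
// can be observed. (Illustrative only; not part of the proposed API.)
public sealed class CountingPool : MemoryPool<byte>
{
    public int Rents, Returns;

    public override int MaxBufferSize => Shared.MaxBufferSize;

    public override IMemoryOwner<byte> Rent(int minBufferSize = -1)
    {
        Rents++;
        return new Owner(this, Shared.Rent(minBufferSize));
    }

    protected override void Dispose(bool disposing) { }

    private sealed class Owner : IMemoryOwner<byte>
    {
        private readonly CountingPool _pool;
        private readonly IMemoryOwner<byte> _inner;
        public Owner(CountingPool pool, IMemoryOwner<byte> inner)
            => (_pool, _inner) = (pool, inner);
        public Memory<byte> Memory => _inner.Memory;
        public void Dispose() { _pool.Returns++; _inner.Dispose(); }
    }
}

public static class Demo
{
    public static async Task<(int Rents, int Returns)> RunAsync()
    {
        var pool = new CountingPool();
        var pipe = new Pipe(new PipeOptions(pool));
        for (int i = 0; i < 3; i++)
        {
            await pipe.Writer.WriteAsync(new byte[16]);
            ReadResult result = await pipe.Reader.ReadAsync();
            // Fully consuming the buffer returns its segment to the pool,
            // so the next write must rent a fresh one.
            pipe.Reader.AdvanceTo(result.Buffer.End);
        }
        return (pool.Rents, pool.Returns);
    }

    public static void Main()
    {
        var (rents, returns) = RunAsync().GetAwaiter().GetResult();
        Console.WriteLine($"rents={rents} returns={returns}");
    }
}
```

With a nonzero IdleMemoryThreshold, the idea is that the Return/Rent pair between iterations would disappear for the retained tail segment.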

cc @pakrym @halter73 @mgravell


@pakrym
Contributor

pakrym commented Jul 10, 2018

I would prefer this to be "leave N bytes worth of free segments in the pipe" option.

@halter73
Member

No matter what the IdleMemoryThreshold is, if it's greater than 0, you are effectively wasting 1 block worth of memory for a truly idle Pipe, right? The code you highlighted @davidfowl only affects returning the last block, so how is IdleMemoryThreshold any different than a PreserveIdleMemory bool?

Now if IdleMemoryThreshold represented the maximum amount of memory kept in some per-Pipe pool to avoid contention like @pakrym is suggesting that might be interesting.

If Kestrel were to ever use this feature, we'd probably need some kind of timeout to eventually return blocks after the Pipe has been idle for long enough.

@davidfowl
Member Author

> No matter what the IdleMemoryThreshold is, if it's greater than 0, you are effectively wasting 1 block worth of memory for a truly idle Pipe, right? The code you highlighted @davidfowl only affects returning the last block, so how is IdleMemoryThreshold any different than a PreserveIdleMemory bool?

Right, I originally had a bool but then I was thinking this should be more general and just let you describe how much memory you were willing to keep around.

> Now if IdleMemoryThreshold represented the maximum amount of memory kept in some per-Pipe pool to avoid contention like @pakrym is suggesting that might be interesting.

Agreed. We could look into that as a more general feature. The thing we need a way to solve today, though, is the inefficiency around renting and returning buffers that are fully consumed. In some scenarios the PipeReader's buffers are always fully consumed (such as when writing to another sink), and those suffer from memory being returned to the pool whenever the reader catches up with the writer. Small writes, for example, can cause constant Rent/Return churn far more often than necessary.
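
The fully-consumed scenario above can be sketched as a copy loop. This is illustrative code, not from the issue; `PipeCopy` and `RunDemoAsync` are hypothetical names.

```csharp
using System;
using System.IO;
using System.IO.Pipelines;
using System.Threading.Tasks;

public static class PipeCopy
{
    // Copy loop in which the reader always fully consumes its buffers:
    // every AdvanceTo(buffer.End) lets the Pipe hand the segment back to
    // the pool, so a steady stream of small writes produces constant
    // Rent/Return traffic under today's behavior.
    public static async Task CopyToAsync(PipeReader reader, Stream destination)
    {
        while (true)
        {
            ReadResult result = await reader.ReadAsync();
            foreach (ReadOnlyMemory<byte> segment in result.Buffer)
            {
                await destination.WriteAsync(segment);
            }
            // The whole buffer is marked consumed, so nothing is retained.
            reader.AdvanceTo(result.Buffer.End);
            if (result.IsCompleted) break;
        }
        await reader.CompleteAsync();
    }

    public static async Task<long> RunDemoAsync()
    {
        var pipe = new Pipe();
        await pipe.Writer.WriteAsync(new byte[] { 1, 2, 3 });
        await pipe.Writer.CompleteAsync();

        var destination = new MemoryStream();
        await CopyToAsync(pipe.Reader, destination);
        return destination.Length;
    }

    public static void Main() =>
        Console.WriteLine(RunDemoAsync().GetAwaiter().GetResult()); // prints 3
}
```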

> If Kestrel were to ever use this feature, we'd probably need some kind of timeout to eventually return blocks after the Pipe has been idle for long enough.

Maybe. Or we might need to expose an explicit call to release the consumed memory, plus a way to determine how much memory is held but not in use. That makes things a bit more complicated, but it gives the pipe creator more control...

The first thing to do here is to get some benchmarks for scenarios we think might be faster (like writing output) and see if this moves the needle.

@davidfowl
Member Author

We like this, but will leave it to a future milestone.

@davidfowl
Member Author

Prototype (https://github.com/dotnet/corefx/compare/davidfowl/pool-harder) of basically never returning the rented memory, and instead reusing it from the BufferSegment in the pipe.

@msftgits msftgits transferred this issue from dotnet/corefx Jan 31, 2020
@msftgits msftgits added this to the Future milestone Jan 31, 2020
@maryamariyan maryamariyan added the untriaged label Feb 23, 2020
@BrennanConroy BrennanConroy removed the untriaged label Jul 8, 2020
@dotnet-policy-service bot
Contributor

Due to lack of recent activity, this issue has been marked as a candidate for backlog cleanup. It will be closed if no further activity occurs within 14 more days. Any new comment (by anyone, not necessarily the author) will undo this process.

This process is part of our issue cleanup automation.

@dotnet-policy-service dotnet-policy-service bot added the backlog-cleanup-candidate and no-recent-activity labels Jan 8, 2025