As seen in FasterXML/jackson-databind#3665, the approach taken in our ConcurrentLruCache implementation can result in increased heap memory consumption because of how the read operations queue is structured.
We've experimented with an alternate solution that "flattens" that queue, trading arrays of AtomicReference for a single AtomicReferenceArray. This results in a slight performance decrease but looks acceptable for our use case. We could also decrease the default size of the queues: they're currently sized at "number of CPUs x fixed size", and the use cases present in Spring Framework probably don't need this much memory by default.
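To illustrate the trade-off described above, here is a minimal sketch (not the actual Spring Framework code; names and sizes are hypothetical) contrasting the two layouts. An array of AtomicReference allocates one extra heap object per slot, each with its own object header, while an AtomicReferenceArray backs all slots with a single flat array and still offers per-element volatile/CAS semantics:

```java
import java.util.concurrent.atomic.AtomicReference;
import java.util.concurrent.atomic.AtomicReferenceArray;

public class ReadQueueSketch {

    // Nested layout: one AtomicReference object per slot. A queue of N slots
    // allocates N + 1 heap objects (the array plus N wrappers), each wrapper
    // carrying its own object header.
    static AtomicReference<Object>[] nestedQueue(int size) {
        @SuppressWarnings("unchecked")
        AtomicReference<Object>[] queue = new AtomicReference[size];
        for (int i = 0; i < size; i++) {
            queue[i] = new AtomicReference<>();
        }
        return queue;
    }

    // Flattened layout: a single AtomicReferenceArray holds all slots in one
    // contiguous array while preserving atomic access to each element.
    static AtomicReferenceArray<Object> flatQueue(int size) {
        return new AtomicReferenceArray<>(size);
    }

    public static void main(String[] args) {
        // Hypothetical sizing in the spirit of "number of CPUs x fixed size".
        int size = Runtime.getRuntime().availableProcessors() * 16;

        AtomicReference<Object>[] nested = nestedQueue(size);
        AtomicReferenceArray<Object> flat = flatQueue(size);

        // Both layouts support the same per-slot atomic operations.
        nested[0].lazySet("value");
        flat.lazySet(0, "value");
        System.out.println(nested[0].get().equals(flat.get(0))); // prints "true"
    }
}
```

The per-slot overhead of the nested layout is what multiplies quickly when queues default to "number of CPUs x fixed size" on machines with many cores, which is why flattening reduces heap pressure.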
bclozel changed the title from "Improve heap consumption in ConcurrentLruCache implementation" to "ConcurrentLruCache implementation is using too much heap memory" on Nov 25, 2022.