Adds an Expression pool to reduce allocations. #1037

Open · wants to merge 2 commits into base: ion-11-encoding
Conversation

@tgregg (Contributor) commented Jan 17, 2025

Description of changes:
Allows us to reuse Expression instances, reducing allocations. Please comment if there's a better way to handle the newfound mutability of the Kotlin data classes.

Note the significant repetition of the pool pattern in PooledExpressionFactory. This is actually not easy to avoid without sacrificing performance (e.g. using lambdas, which require allocations and cut into the benefit).
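
Roughly, the pattern that gets repeated per expression type looks like the simplified sketch below (Kotlin here for brevity; the names and structure are illustrative only and are not the actual PooledExpressionFactory, which is a Java class):

```kotlin
// Hypothetical sketch of the per-type pooling pattern, not the real implementation.
class ExpressionPoolSketch {

    // One pool and cursor per concrete expression type. The same few lines are
    // repeated for every type because funneling them through a shared,
    // lambda-based helper would itself allocate and cut into the benefit.
    private val longValues = ArrayList<MutableLongValue>(32)
    private var longValueIndex = 0

    fun longValue(value: Long): MutableLongValue {
        val expression: MutableLongValue
        if (longValueIndex < longValues.size) {
            expression = longValues[longValueIndex]  // reuse a pooled instance
        } else {
            expression = MutableLongValue(0)         // grow the pool on demand
            longValues.add(expression)
        }
        longValueIndex++
        expression.value = value                     // overwrite the mutable field(s)
        return expression
    }

    /** Makes every pooled instance available for reuse. */
    fun clear() {
        longValueIndex = 0
    }

    // Stand-in for one of the now-mutable expression classes.
    class MutableLongValue(var value: Long)
}
```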

This doesn't provide a speedup in my benchmarks, but it does result in a noticeable reduction in allocations. Allocations are a meaningful metric to drive down, though the lack of speedup suggests there remain other significant opportunities for both speed and allocation improvements.

Speed: 248 ms/op -> 247 ms/op (~0%)
Allocations: 189 KB/op -> 175 KB/op (-7.4%)

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.

In PooledExpressionFactory.java:

public class PooledExpressionFactory {

    private static final int POOL_SIZE = 32;
A reviewer (Contributor) commented:
This confused me (briefly) further down the page. Might be worth renaming to INITIAL_POOL_SIZE or something like that.

Earlier in the same file, on the class Javadoc:

import java.util.List;

/**
 * A factory for {@link Expression} instances. Avoids repetitive allocations by pooling instances. {@link #clear()}
A reviewer (Contributor) commented:
This is only used in the EExpressionArgsReader and related code, right? Should it be shared with the macro evaluator so that things like make_string can also use pooled Expression instances?

@tgregg (Author) replied:

Yes, that's a good callout. I'll work on that.

@popematt (Contributor) commented:
> Note the significant repetition of the pool pattern in PooledExpressionFactory. This is actually not easy to avoid without sacrificing performance (e.g. using lambdas, which require allocations and cut into the benefit).

FYI—if you wrote it in Kotlin and used inline functions that accept lambda functions, then you can avoid allocations and other overhead because kotlinc can inline the function at the call site around the functional-typed argument that is passed in.

But absolutely do not take this as a suggestion that you should rewrite it in Kotlin. You already have something that works.
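
As a generic sketch of the mechanism (none of these names come from ion-java; they are made up for illustration):

```kotlin
// A stand-in pooled object; purely illustrative.
class Box(var value: Long = 0)

// A generic pool whose take() accepts a lambda to (re)initialize the instance.
class Pool<T>(private val create: () -> T) {
    private val items = ArrayList<T>(32)
    private var index = 0

    // Because take() is inline, the body of the `init` lambda is compiled
    // directly into the call site; no Function object is allocated per call.
    inline fun take(init: (T) -> Unit): T {
        val item = next()
        init(item)
        return item
    }

    fun next(): T {
        val item = if (index < items.size) items[index] else create().also { items.add(it) }
        index++
        return item
    }

    /** Makes every pooled instance available for reuse. */
    fun clear() { index = 0 }
}

fun main() {
    val pool = Pool { Box() }
    val box = pool.take { it.value = 42 }  // the lambda here costs no allocation
    println(box.value)                     // prints 42
}
```

Because `take` is declared inline, kotlinc copies both its body and the lambda passed to it into each call site, so the functional argument never materializes as an object.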

Commit (title truncated in this view): …xpression pool during evaluation in addition to parsing. In MacroEvaluator:
@@ -52,6 +54,8 @@ class MacroEvaluator {
private var numExpandedExpressions = 0
/** Pool of [ExpansionInfo] to minimize allocation and garbage collection. */
private val expanderPool: ArrayList<ExpansionInfo> = ArrayList(32)
/** Pool of [Expression] to minimize allocation and garbage collection. */
val expressionPool: PooledExpressionFactory = PooledExpressionFactory()
@tgregg (Author) commented on this diff:
Note: this means there are two expression pools: one used when parsing expression arguments, and one used when evaluating expressions (which may produce more expressions). I think this is fine, as both pools are one-time costs, and the shape of the expressions they hold may be very different.
