Use case: Locker Service would like to measure the performance of the browser's `eval()` vs Locker's `secureEval()`. We can write Best benchmarks to measure each independently, but AFAICT there's no way to measure the delta between the two across commits.
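For reference, the two measurements today would live in separate Best benchmarks, along these lines (a minimal sketch using Best's `describe`/`benchmark`/`run` primitives; the import path for `secureEval` and the snippet under test are made up for illustration):

```js
// eval-vs-secure-eval.benchmark.js
// Hypothetical import path; substitute wherever Locker exposes secureEval.
import { secureEval } from 'locker-service';

describe('eval-comparison', () => {
    // Baseline: raw browser eval().
    benchmark('native_eval', () => {
        run(() => {
            eval('1 + 1');
        });
    });

    // Locker's sandboxed equivalent of the same operation.
    benchmark('locker_secureEval', () => {
        run(() => {
            secureEval('1 + 1');
        });
    });
});
```

Each benchmark yields its own timing series across commits, but nothing relates the two series to each other.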
So, for instance, if `eval()` itself regresses between Chrome 71 and Chrome 72, we don't care. But if the delta between our `secureEval()` and `eval()` regresses, then we do care. We can track the two independently and infer this ourselves, but it would be better if Best had something built-in for this use case.
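Today that inference has to happen outside Best. A rough sketch of the manual comparison, assuming median timings have been pulled out of two runs (the result shape here is invented for illustration, not Best's actual output format):

```js
// Flag a regression in the wrapper overhead itself, ignoring movement
// in the native baseline. `previous`/`current` are hypothetical objects
// holding median timings (ms) extracted from two Best runs.
function deltaRegressed(previous, current, threshold = 0.1) {
    const prevDelta = previous.secureEvalMedian - previous.evalMedian;
    const currDelta = current.secureEvalMedian - current.evalMedian;
    return (currDelta - prevDelta) / prevDelta > threshold;
}

// Chrome 71 -> 72 scenario: eval() got slower, but the overhead held
// steady at 0.4ms, so this is not a regression we care about.
deltaRegressed(
    { evalMedian: 0.8, secureEvalMedian: 1.2 }, // previous run
    { evalMedian: 1.0, secureEvalMedian: 1.4 }  // current run
); // => false
```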
Other use cases that might want this: measuring LWC vs Aura, polyfill vs native, etc.