Is your feature request related to a problem or challenge? Please describe what you are trying to do.
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
(This section helps Arrow developers understand the context and why for this feature, in addition to the what)
More of a question than a feature request: I'm wondering whether there are plans to run benchmarks on a consistent basis and store the results, so we can see how DataFusion's performance changes over time. I see there's a conbench directory, which presumably could be hooked up to https://conbench.ursa.dev/ . Is that something I could help get running?
If not, do we have any hardware on which we can consistently run benchmarks? It doesn't need to be as fancy as conbench at the start, nor would it need to run on every commit, but something simple that shows progression (or regression) would be nice.
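To make the "something simple" concrete, here is a minimal sketch of what tracking benchmark timings over time could look like, independent of conbench. This is not DataFusion's actual tooling; the file format, function names, and regression threshold are all hypothetical assumptions for illustration — timings would be appended as JSON lines keyed by commit, and a regression is flagged when the latest run is noticeably slower than the previous one.

```python
# Hypothetical sketch: store benchmark timings per commit and flag regressions.
# The JSONL format, function names, and 10% threshold are illustrative only.
import json
from pathlib import Path


def record_result(history_path, commit, bench_name, seconds):
    """Append one benchmark timing as a JSON line to the history file."""
    with open(history_path, "a") as f:
        f.write(json.dumps({"commit": commit,
                            "bench": bench_name,
                            "seconds": seconds}) + "\n")


def detect_regression(history_path, bench_name, threshold=1.10):
    """Return True if the latest timing for `bench_name` is more than
    `threshold` times slower than the previous timing."""
    runs = []
    for line in Path(history_path).read_text().splitlines():
        rec = json.loads(line)
        if rec["bench"] == bench_name:
            runs.append(rec)
    if len(runs) < 2:
        return False  # not enough history to compare
    return runs[-1]["seconds"] > runs[-2]["seconds"] * threshold
```

A scheduled CI job (it would not need to run on every commit) could call `record_result` after each benchmark run on dedicated hardware and open an alert when `detect_regression` returns True.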
Describe the solution you'd like
A clear and concise description of what you want to happen.
Describe alternatives you've considered
A clear and concise description of any alternative solutions or features you've considered.
Additional context
Add any other context or screenshots about the feature request here.
> I see there's the conbench directory which presumably could somehow be hooked up to this -> https://conbench.ursa.dev/ . Is that something I could help get running?
I think that would be good. I did some work on this but never got as far as actually hooking it up.
I finally got around to organizing some thoughts in this area and filed #5504 to track setting up regular benchmark runs. Any help people could offer would be much appreciated!