Add basic benchmark suite #788

Merged 1 commit on Jul 11, 2023

4 changes: 4 additions & 0 deletions .gitignore
@@ -27,3 +27,7 @@ coverage.xml

# Versionning
param/_version.py

# asv benchmark
benchmarks/.asv
benchmarks/param
94 changes: 94 additions & 0 deletions benchmarks/README.md
@@ -0,0 +1,94 @@
# Benchmarking

`Param` uses ASV (https://asv.readthedocs.io) for benchmarking.

## Preparation

Install `asv` into your environment, for example:
```
hatch shell
cd benchmarks
pip install asv
```

ASV then runs benchmarks in isolated virtual environments that it creates using `virtualenv`.

## Running benchmarks

To run all benchmarks against the default `main` branch:
```
cd benchmarks
asv run
```

The first time this is run, ASV will create a machine file to store information about your machine. It will then create a virtual environment and run each benchmark multiple times to obtain a statistically valid benchmark time.

To list the benchmark timings stored for the `main` branch use:
```
asv show main
```

ASV ships with its own simple web server to display the results interactively in a web browser. To use it:
```
asv publish
asv preview
```
and then open a web browser at the URL specified.

If you just want to quickly run all benchmarks once, e.g. to check for errors, use:
```
asv dev
```
instead of `asv run`.


## Adding new benchmarks

Add new benchmarks to existing or new classes in the `benchmarks/benchmarks` directory. Any class member function with a name that starts with `time` will be identified as a timing benchmark when `asv` is run.

Data that is required to run benchmarks is usually created in the `setup()` member function. This ensures that the time taken to set up the data is not included in the benchmark time. The `setup()` function is called once for each invocation of each benchmark; the data are not cached.
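
As a sketch, a minimal timing benchmark class might look like this (the module, class, and attribute names here are hypothetical, not part of the existing suite):

```python
# e.g. benchmarks/benchmarks/instantiation.py (hypothetical file name)
import param


class ParameterizedInstantiationSuite:
    """Sketch of a timing benchmark: ASV times every method whose name starts with `time`."""

    def setup(self):
        # Created in setup() so that the class-creation cost is excluded
        # from the measured time.
        class P(param.Parameterized):
            x = param.Number(default=0)
            s = param.String(default="")

        self.cls = P

    def time_instantiate(self):
        # Only the body of this method is timed.
        self.cls()
```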

At the top of each benchmark class you can define lists of parameter names (`param_names`) and values (`params`). Each benchmark is repeated for each unique combination of these parameters.
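
For example, a parameterized benchmark could be sketched as follows (again with hypothetical names; ASV passes each parameter value to both `setup()` and the timing method):

```python
import param


class WatcherSuite:
    """Hypothetical benchmark, timed once per value in `params`."""

    params = [1, 10, 100]
    param_names = ["num_watchers"]

    def setup(self, num_watchers):
        class P(param.Parameterized):
            x = param.Number(default=0)

        self.p = P()
        for _ in range(num_watchers):
            # Each watcher fires whenever the value of `x` changes.
            self.p.param.watch(lambda event: None, "x")

    def time_set_x(self, num_watchers):
        # Changing the value triggers all registered watchers.
        self.p.x += 1
```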

If you only want to run a subset of benchmarks, use syntax like:
```
asv run -b ShadeCategorical
```
where the text after the `-b` flag is used as a regex to match benchmark file, class and function names.


## Benchmarking code changes

You can compare the performance of code on different branches and in different commits. Usually, if you want to determine how much faster a new algorithm is, the old code will be in the `main` branch and the new code will be in a new feature branch. Because ASV creates its own virtual environments and checks out the `param` source code into them, your new code must be committed locally to the new feature branch.

To benchmark the latest commits on `main` and your new feature branch, edit `asv.conf.json` to change the line
```
"branches": ["main"],
```
into
```
"branches": ["main", "new_feature_branch"],
```
or similar.

Now when you run `asv run`, the benchmarks will be run against both branches in turn.

Then use
```
asv show
```
to list the commits that have been benchmarked, and
```
asv compare commit1 commit2
```
to give you a side-by-side comparison of the two commits.

You can also run `asv` for single tags and compare them:
```
asv run v0.0.1^!
asv run v0.0.2^!
asv compare v0.0.1 v0.0.2
```

To benchmark a range of commits, for example everything from `v0.0.2` to the current `HEAD`:
```
asv run v0.0.2..HEAD
```
25 changes: 25 additions & 0 deletions benchmarks/asv.conf.json
@@ -0,0 +1,25 @@
{
"version": 1,
"project": "param",
"project_url": "https://param.holoviz.org",
"repo": "..",
"dvcs": "git",
"repo_subdir": "",
"branches": ["main"],
"install_command": ["in-dir={env_dir} python -m pip install {wheel_file}"],
"uninstall_command": ["return-code=any python -m pip uninstall -y {project}"],
"build_command": [
"python -m pip install hatch hatch-vcs",
"PIP_NO_BUILD_ISOLATION=false python -mpip wheel --no-deps --no-index -w {build_cache_dir} {build_dir}"
],
"environment_type": "virtualenv",
"install_timeout": 600,
"show_commit_url": "https://github.com/holoviz/param/commit/",
// "pythons": ["3.8"],
"benchmark_dir": "benchmarks",
"env_dir": ".asv/env",
"results_dir": ".asv/results",
"html_dir": ".asv/html",
"hash_length": 8,
"build_cache_size": 2
}