
Reorganize the turnkey benchmark help page #73

Closed
jeremyfowers opened this issue Dec 15, 2023 · 0 comments · Fixed by #74
Assignees: jeremyfowers
Labels: cleanup (Cleaning up old/unused code and tech debt), documentation (Improvements or additions to documentation), good first issue (Good for newcomers), p1 (Medium priority), ui/ux (Improve the user experience)

Comments

@jeremyfowers (Collaborator)

Problem Statement

The current help page for `turnkey benchmark -h` lists many options in no particular order. This gives new users no clear path to parsing the available options and associating them with their respective phases of the toolchain.

Proposed Solution

Reorganize the help page like this:
 

Discover, build, and then benchmark the model(s) within input file(s).

positional arguments:
  input_files           One or more script (.py), ONNX (.onnx), or input list (.txt) files to be benchmarked

options:
  -h, --help            show this help message and exit

options that specifically apply to the `discover` part of the toolflow:
  --script-args SCRIPT_ARGS
                        Arguments to pass into the target script(s)
  --max-depth MAX_DEPTH
                        Maximum depth to analyze within the model structure of the target script(s)
  --labels [LABELS [LABELS ...]]
                        Only evaluate the scripts that have the provided labels

options that apply to both the `build` and `benchmark` parts of the toolflow:
  --device {nvidia,x86}
                        Type of hardware device to be used for the benchmark (defaults to "x86")
  --runtime {ort,trt,torch-eager,torch-compiled}
                        Software runtime that will be used to collect the benchmark. Must be compatible with the selected device. Automatically selects a sequence if `--sequence` is not used. If this argument is not set, the default runtime of the selected device will be used.
  -d CACHE_DIR, --cache-dir CACHE_DIR
                        Build cache directory where the resulting build and benchmarking artifacts will be stored (defaults to /home/jfowers/.cache/turnkey)
  --lean-cache          Delete all build artifacts except for log files when the command completes

options that apply specifically to the `build` part of the toolflow:
  --sequence {optimize-fp16,optimize-fp32,onnx-fp32}
                        Name of a build sequence that defines the model-to-model transformations used to build the models. Each runtime has a default sequence that it uses.
  --rebuild {if_needed,always,never}
                        Sets the cache rebuild policy (defaults to if_needed)
  --onnx-opset ONNX_OPSET
                        ONNX opset used when creating ONNX files (default=14). Not applicable when the input model is already a .onnx file.

options that apply specifically to the `benchmark` part of the toolflow:
  --iterations ITERATIONS
                        Number of execution iterations of the model to capture the benchmarking performance (e.g., mean latency)
  --rt-args [RT_ARGS [RT_ARGS ...]]
                        Optional arguments provided to the runtime being used

options that apply to all toolflows:
  --use-slurm           Execute on Slurm instead of using local compute resources
  --process-isolation   Isolate evaluating each input into a separate process
  --timeout TIMEOUT     Build timeout, in seconds, after which a build will be canceled (default=3600). Only applies when --process-isolation or --use-slurm is also used.
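The grouped layout above maps directly onto argparse argument groups. A minimal sketch of how it could be implemented (group titles taken from the proposal; the option subset and defaults are abbreviated for illustration, not the actual turnkey source):

```python
import argparse

# Illustrative sketch only: shows that argparse argument groups produce
# sectioned help output like the proposal. Not the real turnkey CLI code.
parser = argparse.ArgumentParser(
    description="Discover, build, and then benchmark the model(s) within input file(s)."
)
parser.add_argument(
    "input_files",
    nargs="+",
    help="One or more script (.py), ONNX (.onnx), or input list (.txt) files to be benchmarked",
)

# Each add_argument_group() call becomes a titled section in the -h output,
# printed in the order the groups are created.
discover = parser.add_argument_group(
    "options that specifically apply to the `discover` part of the toolflow"
)
discover.add_argument("--script-args", help="Arguments to pass into the target script(s)")

shared = parser.add_argument_group(
    "options that apply to both the `build` and `benchmark` parts of the toolflow"
)
shared.add_argument(
    "--device",
    choices=["nvidia", "x86"],
    default="x86",
    help='Type of hardware device to be used for the benchmark (defaults to "x86")',
)

benchmark = parser.add_argument_group(
    "options that apply specifically to the `benchmark` part of the toolflow"
)
benchmark.add_argument(
    "--iterations",
    type=int,
    help="Number of execution iterations of the model",
)

help_text = parser.format_help()
```

Calling `parser.print_help()` then renders each group as its own titled section, so a new user reads the options phase by phase instead of as one flat list.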
@jeremyfowers jeremyfowers added documentation Improvements or additions to documentation good first issue Good for newcomers cleanup Cleaning up old/unused code and tech debt ui/ux Improve the user experience p1 Medium priority labels Dec 15, 2023
@jeremyfowers jeremyfowers self-assigned this Dec 15, 2023
@jeremyfowers jeremyfowers linked a pull request Dec 15, 2023 that will close this issue