Benchmark TorchSharp GPT NEO performance with respect to those of PyTorch and C++ (ONNX) #967
Replies: 4 comments
-
@GeorgeS2019 -- is this an issue or a discussion topic?
-
This issue has been added to the ONNX Runtime backlog. This is a (most likely long-term) request, for the benefit of new users interested in TorchSharp GPT, to consider a contribution that would add a selling point for TorchSharp's production readiness: an "official" benchmark against PyTorch and ONNX C++.
-
A similar class of TorchSharp benchmarks is provided here.
-
Okay, I'm going to convert this to a discussion. This is neither a feature request, a bug report, nor a report of missing features or documentation. If you think benchmarks are important to the future of TorchSharp, please make the case for them in more detail in the discussion thread and build community support for them there. I would suggest broadening the benchmarking case to include scenarios other than GPT.
-
GPT NEO: the Python version of GPT NEO outperforms its ONNX Runtime version in C++
In that issue, PyTorch appears to perform better than the C++ (ONNX Runtime) version.
This is related to the TorchSharp deep NLP / NLG (Natural Language Generation) discussions.
The code is provided.
I hope community members here can work together to benchmark TorchSharp on this hot GPT topic, given the benchmark code and use case provided by the community member from ONNX Runtime.
FYI
@fwaris
@ChengYen-Tang
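As a starting point for such a cross-runtime comparison, a minimal, framework-agnostic latency harness might look like the sketch below. This is only an illustrative assumption of how one could structure the measurement, not the benchmark code referenced above; the dummy workload stands in for a real GPT-Neo forward pass in PyTorch, TorchSharp, or ONNX Runtime.

```python
import statistics
import time

def benchmark(fn, warmup=3, runs=20):
    """Time a zero-argument callable: run a few warmup calls first,
    then collect per-call latencies and report median and p95 in ms."""
    for _ in range(warmup):
        fn()
    latencies = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        latencies.append((time.perf_counter() - start) * 1000.0)
    latencies.sort()
    return {
        "median_ms": statistics.median(latencies),
        "p95_ms": latencies[int(0.95 * (len(latencies) - 1))],
    }

# Dummy workload standing in for a model forward pass; a real benchmark
# would call the PyTorch, TorchSharp, or ONNX Runtime inference entry point
# with identical inputs in each runtime.
stats = benchmark(lambda: sum(i * i for i in range(10_000)))
print(stats)
```

Using warmup runs and reporting median/p95 (rather than mean) makes the comparison less sensitive to JIT compilation, caching, and scheduler noise, which matters when comparing runtimes as different as Python, .NET, and native C++.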