[enhancement] add dlpack support to to_table
#2275
base: main
Conversation
Something weird is happening to codecov with respect to seeing the new files. Will need to be investigated at some point.
/intelci: run
@@ -0,0 +1,332 @@
/*!
Could this file be added through a build dependency instead of being committed into the repository?
Otherwise, could it be linked as a git submodule of the repository where it came from?
That would be problematic if aspects of the API change which aren't reflected in the code that adapts to it. Other frameworks include vendored copies:
https://github.com/pytorch/pytorch/blob/main/aten/src/ATen/dlpack.h
https://github.com/IntelPython/dpctl/blob/master/dpctl/tensor/include/dlpack/dlpack.h
https://github.com/numpy/numpy/blob/main/numpy/_core/src/common/dlpack/dlpack.h
/intelci: run
/intelci: run
Description
This PR introduces `__dlpack__` tensor (https://github.com/dmlc/dlpack) consumption by `to_table`, allowing for zero-copy use of data in oneDAL. This is important for enabling array_api support and is a prerequisite for #2096 (array api dispatching). That PR is in turn a prerequisite for #2100, #2106, #2189, #2206, #2207 and #2209. Sklearn provides array_api support for some algorithms. If we wish to fully support zero copy of sycl_usm inputs, we need to be able to consume array_api inputs due to underlying sklearn dependencies (`validate_data`, `check_array`, etc.). While we support SYCL USM ndarrays (`dpctl`, `dpnp`) via the `__sycl_usm_array_interface__` method in the onedal folder estimators, to properly interface estimators in the sklearnex folder we need to support the `__dlpack__` method of arrays/tensors. This PR does that and greatly simplifies the necessary logic in #2096 and the follow-up PRs. It also provides the added benefit of working with other frameworks which support SYCL GPU data and have `__dlpack__` interfaces (e.g. PyTorch). A minimal usage sketch is shown below.
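For illustration only (not part of this PR's diff): a minimal sketch of the intended consumption path, assuming `onedal.datatypes.to_table`/`from_table` are importable as below and that `to_table` accepts any object exposing the `__dlpack__` protocol once this change lands.

```python
# Illustrative sketch; import paths and behavior are assumptions,
# not a definitive description of the merged API.
import array_api_strict as xp

from onedal.datatypes import from_table, to_table

x = xp.asarray([[1.0, 2.0], [3.0, 4.0]])

# to_table is expected to consume the array through __dlpack__,
# ideally without copying the underlying buffer.
table = to_table(x)

# Round-trip for inspection (on CPU this typically yields a numpy array).
print(from_table(table))
```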
NOTES:
This PR continues from "[enhancement] Refactor `onedal/datatypes` in preparation for dlpack support" #2195 to integrate dlpack support, and it takes code from "DataManagement update" #1568. Please use #1568 as a reference, though it contained some mistakes.
This new functionality is not yet exposed publicly and therefore must be added to the documentation together with "ENH: Array API dispatching" #2096.
This aspect is not yet benchmarkable and has nothing to benchmark against.
Memory leak checking uses the infrastructure from "ENH: Data management update to support SUA ifaces for Homogen OneDAL tables" #2045 (CPU only at the moment; it will be modified if/when PyTorch support is brought online). A sketch of the general pattern is shown below.
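The real checks live in the existing #2045 test infrastructure; the following is only an illustrative sketch of that kind of check, assuming `psutil` for RSS tracking (the actual helpers and thresholds differ).

```python
# Illustrative memory-leak check pattern; not the actual test code.
import gc

import numpy as np
import psutil


def _rss_bytes():
    return psutil.Process().memory_info().rss


def check_no_leak(convert, n_iter=200, tolerance=1 << 20):
    """Repeatedly convert data and assert that resident memory stays stable."""
    gc.collect()
    before = _rss_bytes()
    for _ in range(n_iter):
        data = np.ones((1024, 16))  # stand-in for a dlpack-capable array
        convert(data)               # e.g. to_table(data)
    gc.collect()
    assert _rss_bytes() - before < tolerance, "possible leak in conversion"
```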
Numeric testing is done via onedal's `assert_all_finite` using `array_api_strict` arrays, as that has a simple interface to the backend with no checking of array aspects on the Python side before `to_table`. A rough sketch of the idea is shown below.
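A rough sketch of that testing idea; the module path and public name used for `assert_all_finite` here are assumptions and may not match the actual internal helper.

```python
# Illustrative only; the real tests live in the onedal test suite.
import array_api_strict as xp

# Assumed import path for onedal's finiteness check; may differ in practice.
from onedal.utils.validation import assert_all_finite

x = xp.asarray([[1.0, 2.0], [3.0, float("nan")]])

# The array reaches the backend directly through to_table, so the
# NaN detection below exercises the dlpack consumption path.
try:
    assert_all_finite(x)
except ValueError:
    print("non-finite values detected via the dlpack path")
```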
A special testing class is created to verify SYCL device support and is used in `test_data.py`; an illustrative sketch of such a wrapper is shown below.
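This is not the class from `test_data.py`, only a minimal sketch of what a dlpack-only test wrapper could look like, assuming it simply forwards the DLPack protocol of a wrapped numpy array.

```python
# Hypothetical dlpack-only wrapper; names are illustrative.
import numpy as np


class DummyDLPackArray:
    """Expose only the DLPack protocol so that to_table is forced to
    consume the data through __dlpack__ rather than other interfaces."""

    def __init__(self, data):
        self._data = np.asarray(data)

    def __dlpack__(self, stream=None):
        return self._data.__dlpack__()

    def __dlpack_device__(self):
        # Returns (device_type, device_id); kDLCPU == 1 in the DLPack spec.
        return self._data.__dlpack_device__()
```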
This PR does not include any work on returning `__dlpack__`-supported arrays as results; that must be done in a follow-up PR (which is probably also a good idea in order to ease reviewing). Therefore, this PR is only about array/tensor consumption.
TODO: add a onedal function which checks a dlpack tensor for C-contiguity or F-contiguity, similar to the `flags` attribute of numpy/dpctl/dpnp. This is out of the scope of this PR, but is necessary for `assert_all_finite` support in the next step of the array_api work; a sketch of the idea is shown below.
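As an illustration of that TODO (not code from this PR), a sketch of deriving C/F-contiguity from shape and strides; per the DLPack spec, strides are counted in elements rather than bytes, and a null strides pointer means a compact row-major tensor.

```python
# Hypothetical contiguity helpers mirroring numpy's
# flags['C_CONTIGUOUS'] / flags['F_CONTIGUOUS'].

def is_c_contiguous(shape, strides):
    expected = 1
    for dim, stride in zip(reversed(shape), reversed(strides)):
        if dim > 1 and stride != expected:
            return False
        expected *= dim
    return True


def is_f_contiguous(shape, strides):
    expected = 1
    for dim, stride in zip(shape, strides):
        if dim > 1 and stride != expected:
            return False
        expected *= dim
    return True


print(is_c_contiguous((2, 3), (3, 1)))  # True
print(is_f_contiguous((2, 3), (1, 2)))  # True
```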
PR should start as a draft, then move to the ready-for-review state after CI has passed and all applicable checkboxes are completed.
This approach ensures that reviewers don't spend extra time asking for regular requirements.
You can remove a checkbox as not applicable only if it doesn't relate to this PR in any way.
For example, a PR with a docs update doesn't require performance checkboxes, while a PR with any change to actual code should include them and justify how the change is expected to affect performance (or the justification should be self-evident).
Checklist to comply with before moving PR from draft:
PR completeness and readability
Testing
Performance