Fix typos / tweak GPU support documentation
ericpre committed Mar 10, 2024
1 parent 76fcd94 commit 9fe1e08
Showing 1 changed file with 30 additions and 22 deletions: doc/user_guide/gpu_computations.rst
GPU support in pyxem/hyperspy is a point of active development. Currently, the GPU support
remains in the `beta` stage. This means that the API is not yet stable and may change as development
continues. We are actively seeking feedback from users to help guide the development of the GPU support!

The GPU support is currently limited to NVIDIA GPUs and requires the `cupy <https://cupy.dev>`_ package to be installed.
If you are interested in extending GPU support to other vendors, please let us know!

Just a note that CUDA can be a bit difficult to install depending on your hardware. If you are having
trouble, please let us know by raising an issue and we will try to help you out.
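Before relying on the GPU features, it can help to verify that cupy can actually see a device. A minimal sketch (the helper function below is ours for illustration, not a pyxem API):

.. code-block:: python

    def gpu_available():
        """Return True if cupy is importable and at least one CUDA GPU is visible."""
        try:
            import cupy as cp
        except ImportError:
            return False
        try:
            return cp.cuda.runtime.getDeviceCount() > 0
        except cp.cuda.runtime.CUDARuntimeError:
            # cupy is installed but no usable CUDA runtime/device was found
            return False

    print(gpu_available())

This degrades gracefully: it returns ``False`` rather than raising when cupy or a working CUDA runtime is missing.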


Supported Operations
^^^^^^^^^^^^^^^^^^^^

The following operations are currently supported on the GPU:

- Generic Operations
- 2D Azimuthal Integration
- Template Matching
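The pattern that makes such operations array-library agnostic is module dispatch: an operation asks its input which array library owns it and uses that library's functions. A minimal CPU-only sketch of the idea (the helper below is illustrative, not pyxem's implementation; with cupy installed, ``cupy.get_array_module`` plays this role):

.. code-block:: python

    import numpy as np

    def get_array_module(arr):
        """Illustrative stand-in for cupy.get_array_module: return the library
        that owns `arr` so the same code runs on CPU (numpy) or GPU (cupy)."""
        try:
            import cupy as cp
            if isinstance(arr, cp.ndarray):
                return cp
        except ImportError:
            pass
        return np

    def normalize(pattern):
        # Works unchanged for numpy and cupy arrays.
        xp = get_array_module(pattern)
        return pattern / xp.max(pattern)

    frame = np.array([[1.0, 2.0], [3.0, 4.0]])
    print(normalize(frame).max())  # 1.0 on CPU; the same code would run on GPU

Writing operations against the dispatched module ``xp`` instead of ``np`` directly is what lets one implementation serve both backends.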


Transfer Data To and From the GPU
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

You can transfer data to the GPU using the :meth:`~.signals.CommonDiffraction.to_device` method. This method
will transfer the data to the GPU or use dask to perform the operation in parallel. You can transfer the data
back to the CPU using the :meth:`~.signals.CommonDiffraction.to_host` method.

Note that this will be limited by the number of GPUs you have available.

.. code-block:: python

    import pyxem as pxm

    s = pxm.data.pdnip_glass(lazy=True)
    s.to_device()  # Creates a plan to transfer the data to the GPU
    az = s.get_azimuthal_integral2d(inplace=False)  # automatically uses the GPU method
    az.to_host()  # Creates a plan to transfer the data back to the CPU

Working with Multiple GPUs
^^^^^^^^^^^^^^^^^^^^^^^^^^

Perhaps more useful is the `dask-cuda <https://docs.rapids.ai/api/dask-cuda/stable>`_ package (Linux only),
which allows you to use multiple GPUs and handles the scheduling of the GPU operations for you without the
context management shown above.

.. code-block:: python

    from dask.distributed import Client
    from dask_cuda import LocalCUDACluster

    cluster = LocalCUDACluster()
    client = Client(cluster)

    import pyxem as pxm

    s = pxm.data.pdnip_glass(lazy=True)
    s.to_device()  # Creates a plan to transfer the data to the GPU
    az = s.get_azimuthal_integral2d(inplace=False)  # automatically uses the GPU method
    az.to_host()  # Creates a plan to transfer the data back to the CPU
    az.compute()  # Transfers the data to the GPU in blocks, operates on it, and transfers the result back to the CPU
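The block-wise pattern that the final compute step follows can be sketched on the CPU with plain numpy (illustrative only; dask and pyxem manage the blocking and transfers automatically, and here "to device" and "to host" are identity copies so the sketch runs without a GPU):

.. code-block:: python

    import numpy as np

    to_device = np.asarray  # with cupy this would be cupy.asarray
    to_host = np.asarray    # with cupy this would be cupy.asnumpy

    def process_in_blocks(data, block_size, func):
        """Move one block to the device, operate, move the result back, repeat."""
        out = []
        for start in range(0, len(data), block_size):
            block = to_device(data[start:start + block_size])  # host -> device
            out.append(to_host(func(block)))                   # device -> host
        return np.concatenate(out)

    data = np.arange(8.0)
    result = process_in_blocks(data, block_size=3, func=np.sqrt)
    print(result)

Processing block by block keeps the device memory footprint bounded by the block size rather than the full dataset.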