
Reorganized Backends Docs Page #2481

Open
wants to merge 11 commits into base: main
2 changes: 1 addition & 1 deletion docs/sphinx/index.rst
@@ -31,4 +31,4 @@ You are browsing the documentation for |version| version of CUDA-Q. You can find
Other Versions <versions.rst>

.. |---| unicode:: U+2014 .. EM DASH
:trim:
:trim:
Binary file added docs/sphinx/using/backends/backends.png
60 changes: 24 additions & 36 deletions docs/sphinx/using/backends/backends.rst
@@ -1,41 +1,29 @@
*************************
CUDA-Q Backends
**********************
*************************
The CUDA-Q platform is a powerful tool with many different backends for running hybrid quantum applications and other simulations. This page will help you understand which backends are available and which are the best choices for your purpose.

The figure below groups the backends into four categories and describes the general purpose of each. See the following sections for a breakdown of the backends included in each category.

.. image:: backends.png
:width: 1000

Click on the links below for each category to learn more about the backends it contains. The list further down also covers all of the backends available in CUDA-Q.

.. toctree::
:maxdepth: 3

Circuit Simulation <simulators.rst>
Quantum Hardware (QPUs) <hardware.rst>

.. toctree::
:caption: Backend Targets
:maxdepth: 1

Dynamics Simulation <dynamics.rst>

.. toctree::
:maxdepth: 2

Cloud <cloud.rst>


Simulation <simulators.rst>
Quantum Hardware <hardware.rst>
NVIDIA Quantum Cloud <nvqc.rst>
Multi-Processor Platforms <platform.rst>

**The following is a comprehensive list of the available targets in CUDA-Q:**

* :ref:`anyon <anyon-backend>`
* :ref:`braket <braket-backend>`
* :ref:`density-matrix-cpu <default-simulator>`
* :ref:`fermioniq <fermioniq-backend>`
* :ref:`infleqtion <infleqtion-backend>`
* :ref:`ionq <ionq-backend>`
* :ref:`iqm <iqm-backend>`
* :ref:`nvidia <nvidia-backend>`
* :ref:`nvidia-fp64 <nvidia-fp64-backend>`
* :ref:`nvidia-mgpu <nvidia-mgpu-backend>`
* :ref:`nvidia-mqpu <mqpu-platform>`
* :ref:`nvidia-mqpu-fp64 <mqpu-platform>`
* :doc:`nvqc <nvqc>`
* :ref:`oqc <oqc-backend>`
* :ref:`orca <orca-backend>`
* :ref:`qpp-cpu <qpp-cpu-backend>`
* :ref:`quantinuum <quantinuum-backend>`
* :ref:`quera <quera-backend>`
* :ref:`remote-mqpu <mqpu-platform>`
* :ref:`stim <stim-backend>`
* :ref:`tensornet <tensor-backends>`
* :ref:`tensornet-mps <tensor-backends>`

.. deprecated:: 0.8
The `nvidia-fp64`, `nvidia-mgpu`, `nvidia-mqpu`, and `nvidia-mqpu-fp64` targets can be
enabled as extensions of the unified `nvidia` target (see `nvidia` :ref:`target documentation <nvidia-backend>`).
These target names might be removed in a future release.
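
Any of the targets listed above can be selected by name, either from Python via ``cudaq.set_target`` or at compile time via ``nvq++ --target <name>``. A minimal sketch (using the CPU-based `qpp-cpu` simulator so that no GPU or account setup is assumed):

.. code:: python

import cudaq

# Select a target from the list above by name; qpp-cpu is a local CPU simulator.
cudaq.set_target("qpp-cpu")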
Binary file added docs/sphinx/using/backends/circuitsimulators.png
17 changes: 17 additions & 0 deletions docs/sphinx/using/backends/cloud.rst
@@ -0,0 +1,17 @@
CUDA-Q Cloud Backends
***************************************
CUDA-Q provides a number of options for accessing hardware resources (GPUs and QPUs) through the cloud, giving users more flexible access to simulation and hardware resources. See the links below for more information on running CUDA-Q with cloud resources.


.. toctree::
:maxdepth: 1

Amazon Braket (braket) <cloud/braket.rst>
NVIDIA Quantum Cloud (nvqc) <cloud/nvqc.rst>







112 changes: 112 additions & 0 deletions docs/sphinx/using/backends/cloud/braket.rst
@@ -0,0 +1,112 @@
Amazon Braket
================

Amazon Braket
++++++++++++++

.. _braket-backend:

`Amazon Braket <https://aws.amazon.com/braket/>`__ is a fully managed AWS
service which provides Jupyter notebook environments, high-performance quantum
circuit simulators, and secure, on-demand access to various quantum computers.
To get started, users must enable Amazon Braket in their AWS account by following
`these instructions <https://docs.aws.amazon.com/braket/latest/developerguide/braket-enable-overview.html>`__.
To learn more about Amazon Braket, you can view the `Amazon Braket Documentation <https://docs.aws.amazon.com/braket/>`__
and `Amazon Braket Examples <https://github.com/amazon-braket/amazon-braket-examples>`__.
A list of available devices and regions can be found `here <https://docs.aws.amazon.com/braket/latest/developerguide/braket-devices.html>`__.

Users can run CUDA-Q programs on Amazon Braket with `Hybrid Job <https://docs.aws.amazon.com/braket/latest/developerguide/braket-what-is-hybrid-job.html>`__.
See `this guide <https://docs.aws.amazon.com/braket/latest/developerguide/braket-jobs-first.html>`__ to get started.

Setting Credentials
```````````````````

After enabling Amazon Braket in AWS, set credentials using any of the documented `methods <https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html>`__.
One of the simplest ways is to use `AWS CLI <https://aws.amazon.com/cli/>`__.

.. code:: bash

aws configure

Alternatively, users can set the following environment variables.

.. code:: bash

export AWS_DEFAULT_REGION="<region>"
export AWS_ACCESS_KEY_ID="<key_id>"
export AWS_SECRET_ACCESS_KEY="<access_key>"
export AWS_SESSION_TOKEN="<token>"
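
As an optional sanity check, the configured credentials can be verified before submitting jobs. The sketch below assumes the ``boto3`` Python package is installed (``pip install boto3``):

.. code:: python

import boto3

# Prints the AWS account ID that the current credentials resolve to.
print(boto3.client("sts").get_caller_identity()["Account"])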

Submission from C++
`````````````````````````

To target quantum kernel code for execution in Amazon Braket,
pass the flag ``--target braket`` to the ``nvq++`` compiler.
By default, jobs are submitted to the state vector simulator, `SV1`.

.. code:: bash

nvq++ --target braket src.cpp


To execute your kernels on a different device, pass the ``--braket-machine`` flag to the ``nvq++`` compiler
to specify which machine to submit quantum kernels to:

.. code:: bash

nvq++ --target braket --braket-machine "arn:aws:braket:eu-north-1::device/qpu/iqm/Garnet" src.cpp ...

where ``arn:aws:braket:eu-north-1::device/qpu/iqm/Garnet`` refers to the IQM Garnet QPU.

To emulate the device locally, without submitting through the cloud,
you can also pass the ``--emulate`` flag to ``nvq++``.

.. code:: bash

nvq++ --emulate --target braket src.cpp

To see a complete example for using Amazon Braket backends, take a look at our :doc:`C++ examples <../../examples/examples>`.

Submission from Python
`````````````````````````

The target to which quantum kernels are submitted
can be controlled with the ``cudaq::set_target()`` function.

.. code:: python

cudaq.set_target("braket")

By default, jobs are submitted to the state vector simulator, `SV1`.

To specify which Amazon Braket device to use, set the :code:`machine` parameter.

.. code:: python

device_arn = "arn:aws:braket:eu-north-1::device/qpu/iqm/Garnet"
cudaq.set_target("braket", machine=device_arn)

where ``arn:aws:braket:eu-north-1::device/qpu/iqm/Garnet`` refers to the IQM Garnet QPU.

To emulate the device locally, without submitting through the cloud,
you can also set the ``emulate`` flag to ``True``.

.. code:: python

cudaq.set_target("braket", emulate=True)

The number of shots for a kernel execution can be set through the ``shots_count``
argument to ``cudaq.sample``. By default, the ``shots_count`` is set to 1000.

.. code:: python

cudaq.sample(kernel, shots_count=100)

To see a complete example for using Amazon Braket backends, take a look at our :doc:`Python examples <../../examples/examples>`.
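
For reference, below is a minimal end-to-end sketch (a hypothetical two-qubit Bell-state kernel, submitted to the default `SV1` simulator):

.. code:: python

import cudaq

cudaq.set_target("braket")  # defaults to the SV1 state vector simulator


@cudaq.kernel
def bell():
    qubits = cudaq.qvector(2)
    h(qubits[0])
    x.ctrl(qubits[0], qubits[1])
    mz(qubits)


# 100 shots on the selected device; results are returned as measurement counts.
counts = cudaq.sample(bell, shots_count=100)
print(counts)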

.. note::

The ``cudaq.observe`` API is not yet supported on the `braket` target.



@@ -1,5 +1,6 @@
NVIDIA Quantum Cloud
---------------------
NVIDIA Quantum Cloud (nvqc)
+++++++++++++++++++++++++++

NVIDIA Quantum Cloud (NVQC) offers universal access to the world’s most powerful computing platform,
for every quantum researcher to do their life’s work.
To learn more about NVQC visit this `link <https://www.nvidia.com/en-us/solutions/quantum-computing/cloud>`__.
@@ -8,7 +9,7 @@ Apply for early access `here <https://developer.nvidia.com/quantum-cloud-early-a
Access to the Quantum Cloud early access program requires an NVIDIA Developer account.

Quick Start
+++++++++++
^^^^^^^^^^^
Once you have been approved for early access to NVQC, follow these instructions to use it.

1. Follow the instructions in your NVQC Early Access welcome email to obtain an API Key for NVQC.
@@ -30,7 +31,7 @@ By selecting the `nvqc` target, the quantum circuit simulation will run on NVQC

.. tab:: Python

.. literalinclude:: ../../snippets/python/using/cudaq/nvqc/nvqc_intro.py
.. literalinclude:: ../../../snippets/python/using/cudaq/nvqc/nvqc_intro.py
:language: python
:start-after: [Begin Documentation]

@@ -49,7 +50,7 @@ By selecting the `nvqc` target, the quantum circuit simulation will run on NVQC

.. tab:: C++

.. literalinclude:: ../../snippets/cpp/using/cudaq/nvqc/nvqc_intro.cpp
.. literalinclude:: ../../../snippets/cpp/using/cudaq/nvqc/nvqc_intro.cpp
:language: cpp
:start-after: [Begin Documentation]

@@ -76,7 +77,7 @@ By selecting the `nvqc` target, the quantum circuit simulation will run on NVQC


Simulator Backend Selection
++++++++++++++++++++++++++++
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

NVQC hosts all CUDA-Q simulator backends (see :doc:`simulators`).
You may use the NVQC `backend` (Python) or `--nvqc-backend` (C++) option to select the simulator to be used by the service.
@@ -101,7 +102,7 @@ For example, to request the `tensornet` simulator backend, the user can do the f
By default, the single-GPU single-precision `custatevec-fp32` simulator backend will be selected if backend information is not specified.
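
For instance, a minimal Python sketch using the `backend` option described above to request the `tensornet` simulator:

.. code:: python

import cudaq

# Run on the NVQC managed service and request the tensornet simulator backend.
cudaq.set_target("nvqc", backend="tensornet")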

Multiple GPUs
+++++++++++++
^^^^^^^^^^^^^^

Some CUDA-Q simulator backends are capable of multi-GPU distribution as detailed in :doc:`simulators`.
For example, the `nvidia-mgpu` backend can partition and distribute state vector simulation to multiple GPUs to simulate
@@ -190,7 +191,7 @@ To select a specific number of GPUs on the NVQC managed service, the following `


Multiple QPUs Asynchronous Execution
+++++++++++++++++++++++++++++++++++++
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

NVQC provides scalable QPU virtualization services, whereby clients
can submit asynchronous jobs simultaneously to NVQC. These jobs are
@@ -202,13 +203,13 @@ calculating the expectation value along with parameter-shift gradients simultane

.. tab:: Python

.. literalinclude:: ../../snippets/python/using/cudaq/nvqc/nvqc_mqpu.py
.. literalinclude:: ../../../snippets/python/using/cudaq/nvqc/nvqc_mqpu.py
:language: python
:start-after: [Begin Documentation]

.. tab:: C++

.. literalinclude:: ../../snippets/cpp/using/cudaq/nvqc/nvqc_mqpu.cpp
.. literalinclude:: ../../../snippets/cpp/using/cudaq/nvqc/nvqc_mqpu.cpp
:language: cpp
:start-after: [Begin Documentation]

@@ -230,7 +231,7 @@ calculating the expectation value along with parameter-shift gradients simultane
multi-QPU distribution may not deliver any substantial speedup.
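
As an illustrative sketch (assuming NVQC access is configured as in the Quick Start and the target exposes at least two virtual QPUs), asynchronous expectation-value jobs can be submitted with ``cudaq.observe_async`` and collected later:

.. code:: python

import cudaq
from cudaq import spin

# A simple parameterized ansatz used for the expectation-value evaluations below.


@cudaq.kernel
def ansatz(theta: float):
    qubits = cudaq.qvector(2)
    x(qubits[0])
    ry(theta, qubits[1])
    x.ctrl(qubits[1], qubits[0])


hamiltonian = (5.907 - 2.1433 * spin.x(0) * spin.x(1)
               - 2.1433 * spin.y(0) * spin.y(1)
               + 0.21829 * spin.z(0) - 6.125 * spin.z(1))

# Submit one job per virtual QPU; each call returns immediately with a future.
futures = [
    cudaq.observe_async(ansatz, hamiltonian, theta, qpu_id=i)
    for i, theta in enumerate([0.59, 0.65])
]

# Block on the futures and read out the expectation values.
print([f.get().expectation() for f in futures])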

FAQ
++++
^^^^^

1. How do I get more information about my NVQC API submission?

14 changes: 7 additions & 7 deletions docs/sphinx/using/backends/dynamics.rst
@@ -1,5 +1,5 @@
CUDA-Q Dynamics
*********************************
Dynamics Simulation
+++++++++++++++++++++++++++++++

CUDA-Q enables the design, simulation and execution of quantum dynamics via
the ``evolve`` API. Specifically, this API allows us to solve the time evolution
@@ -8,7 +8,7 @@ backend target, which is based on the cuQuantum library, optimized for performan
on NVIDIA GPU.

Quick Start
+++++++++++
^^^^^^^^^^^^

In the example below, we demonstrate a simple time evolution simulation workflow comprising the
following steps:
@@ -86,7 +86,7 @@ observable. Hence, we convert them into sequences for plotting purposes.


Operator
+++++++++++
^^^^^^^^^^

.. _operators:

@@ -157,7 +157,7 @@ The latter is specified by the dimension map that is provided to the `cudaq.evol


Time-Dependent Dynamics
++++++++++++++++++++++++
^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. _time_dependent:

@@ -219,7 +219,7 @@ the desired value for each parameter:
:end-before: [End Schedule2]

Numerical Integrators
++++++++++++++++++++++
^^^^^^^^^^^^^^^^^^^^^^^^

.. _integrators:

@@ -272,4 +272,4 @@ backend target.
If the output is a '`None`' string, it indicates that your Torch installation does not support CUDA.
In this case, you need to install a CUDA-enabled Torch package via other mechanisms, e.g., building Torch from source or
using their Docker images.
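
As a quick way to check this (a sketch assuming Torch is installed in the same Python environment as CUDA-Q):

.. code:: python

import torch

# Prints the CUDA toolkit version this Torch build was compiled against;
# "None" indicates a CPU-only build.
print(torch.version.cuda)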

