Fix documentation for 6.1 release (#2836)
causten authored Feb 27, 2024
1 parent 540a156 commit aee01f8
Showing 19 changed files with 1,137 additions and 200 deletions.
73 changes: 73 additions & 0 deletions CHANGELOG.md
@@ -3,6 +3,79 @@
Full documentation for MIGraphX is available at
[https://rocmdocs.amd.com/projects/AMDMIGraphX/en/latest/](https://rocmdocs.amd.com/projects/AMDMIGraphX/en/latest/).

## MIGraphX 2.9 for ROCm 6.1.0

### Additions

* Added FP8 support
* Created a Dockerfile with MIGraphX, the ONNX Runtime execution provider, and Torch
* Added support for the `Hardmax`, `DynamicQuantizeLinear`, `QLinearConcat`, `Unique`, `QLinearAveragePool`, `QLinearSigmoid`, `QLinearLeakyRelu`, `QLinearMul`, and `IsInf` operators
* Created website examples for `Whisper`, `Llama-2`, and `Stable Diffusion 2.1`
* Created examples of using the ONNX Runtime MIGraphX execution provider with the `InceptionV3` and `Resnet50` models
* Updated operators to support ONNX opset 19
* Enabled `fuse_pointwise` and `fuse_reduce` in the driver
* Added support for dot-(mul)-softmax-dot offloads to MLIR
* Added BLAS auto-tuning for GEMMs
* Added dynamic shape support for the `Multinomial` operator
* Added FP16 support to the accuracy checker
* Added initial code for running on Windows

### Optimizations

* Improved the output of the `migraphx-driver` command
* Documentation now shows all environment variables
* Made updates needed for general stride support
* Enabled asymmetric quantization
* Added previously unsupported reduction modes for `ScatterND`
* Rewrote softmax for better performance
* Generally improved how quantization is performed to support INT8
* Used `problem_cache` for GEMM tuning
* Improved performance by always using rocMLIR for quantized convolution
* Improved group convolutions by using rocMLIR
* Improved accuracy of FP16 models
* Added previously unsupported reduction modes for `ScatterElements`
* Added concat fusions
* Improved INT8 support to include UINT8
* Allowed reshape ops between `dq` and `quant_op`
* Improved DPP reductions on Navi
* Made the accuracy checker print the whole final buffer
* Added support for handling dynamic `Slice` and `ConstantOfShape` ONNX operators
* Added support for the `dilations` attribute to pooling ops
* Added `layout` attribute support for the LSTM operator
* Improved performance by removing `contiguous` for reshapes
* Handled all `Slice` input variations
* Added parsing of the `scales` attribute in `Upsample` for older opset versions
* Added support for uneven `Split` operations
* Improved unit testing to run in Python virtual environments

### Fixes

* Fixed outstanding issues in autogenerated documentation
* Updated model zoo paths for examples
* Fixed `promote_literals_test` by using an additional if condition
* Fixed exporting API symbols from the dynamic library
* Fixed a bug in the `Pad` operator caused by dimension reduction
* Fixed using `ld` to embed files, and enabled it by default when building shared libraries on Linux
* Fixed `get_version()`
* Fixed `Round` operator inaccuracy
* Fixed a wrong size check when axes are not present for `Slice`
* Set the `.so` version correctly


### Changes

* Cleaned up LSTM and RNN activation functions
* Placed `gemm_pointwise` at a higher priority than `layernorm_pointwise`
* Updated the README to mention the need to include `GPU_TARGETS` when building MIGraphX


### Removals

* Removed unused device kernels from Gather and Pad operators
* Removed int8x4 format



## MIGraphX 2.8 for ROCm 6.0.0

### Additions
22 changes: 12 additions & 10 deletions docs/conf.py
@@ -1,7 +1,7 @@
#####################################################################################
# The MIT License (MIT)
#
# Copyright (c) 2015-2023 Advanced Micro Devices, Inc. All rights reserved.
# Copyright (c) 2015-2024 Advanced Micro Devices, Inc. All rights reserved.
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
@@ -30,9 +30,9 @@

import re

from rocm_docs import ROCmDocs
html_theme = "rocm_docs_theme"
html_theme_options = {"flavor": "rocm-docs-home"}

html_theme_options = {"flavor": "list"}
templates_path = ["."] # Use the current folder for templates

setting_all_article_info = True
@@ -49,17 +49,19 @@
# for PDF output on Read the Docs
project = "AMD MIGraphX Documentation"
author = "Advanced Micro Devices, Inc."
copyright = "Copyright (c) 2023 Advanced Micro Devices, Inc. All rights reserved."
copyright = "Copyright (c) 2024 Advanced Micro Devices, Inc. All rights reserved."
version = version_number
release = version_number

extensions = ["rocm_docs", "rocm_docs.doxygen", "sphinx_collapse"]
external_toc_path = "./sphinx/_toc.yml"
doxygen_root = "doxygen"
doxysphinx_enabled = False
doxygen_project = {
"name": "doxygen",
"path": "doxygen/xml",
}

docs_core = ROCmDocs(left_nav_title)
docs_core.run_doxygen(doxygen_root="doxygen", doxygen_path="doxygen/xml")
docs_core.setup()
html_title = f"ROCm Docs Core {left_nav_title}"

external_projects_current_project = "amdmigraphx"

for sphinx_var in ROCmDocs.SPHINX_VARS:
globals()[sphinx_var] = getattr(docs_core, sphinx_var)
17 changes: 0 additions & 17 deletions docs/contributor_guide.rst

This file was deleted.

161 changes: 161 additions & 0 deletions docs/dev/contributing-to-migraphx.rst
@@ -0,0 +1,161 @@
.. meta::
:description: MIGraphX provides an optimized execution engine for deep learning neural networks
:keywords: MIGraphX, ROCm, library, API

.. _contributing-to-migraphx:

==========================
Contributing to MIGraphX
==========================

This document explains the internal implementation of some commonly used MIGraphX APIs. You can use the information in this document, and in the other documents under the "Contributing to MIGraphX" section, to contribute to the MIGraphX API implementation.
Here is how some basic operations are performed in the MIGraphX framework.

Performing basic operations
----------------------------

A program is a collection of modules, which are collections of instructions to be executed when calling :cpp:any:`eval <migraphx::internal::program::eval>`.
Each instruction has an associated :cpp:any:`operation <migraphx::internal::operation>` which represents the computation to be performed by the instruction.

The following code snippets demonstrate some basic operations using MIGraphX.

Adding literals
******************

Here is an ``add_two_literals()`` function::

    // create the program and get a pointer to the main module
    migraphx::program p;
    auto* mm = p.get_main_module();

    // add two literals to the program
    auto one = mm->add_literal(1);
    auto two = mm->add_literal(2);

    // make the add operation between the two literals and add it to the program
    mm->add_instruction(migraphx::make_op("add"), one, two);

    // compile the program on the reference device
    p.compile(migraphx::ref::target{});

    // evaluate the program and retrieve the result
    auto result = p.eval({}).back();
    std::cout << "add_two_literals: 1 + 2 = " << result << "\n";

In the function above, a simple :cpp:any:`program <migraphx::internal::program>` object is created along with a pointer to its main module.
A program is a collection of modules that starts execution from the main module, so instructions are added to a module rather than to the program object directly.
The :ref:`add_literal <migraphx-module>` function is used to add an instruction that stores the literal number ``1`` while returning an :cpp:any:`instruction_ref <migraphx::internal::instruction_ref>`.
The returned :cpp:any:`instruction_ref <migraphx::internal::instruction_ref>` can be used in another instruction as an input.
The same :ref:`add_literal <migraphx-module>` function is used to add the literal ``2`` to the program.
After the literals are created, the instruction is created to add the numbers. This is done by using the :ref:`add_instruction <migraphx-module>` function with the ``add`` :cpp:any:`operation <migraphx::internal::operation>` created by ``make_op`` and the previously created literals passed as the arguments for the instruction.
You can run this :cpp:any:`program <migraphx::internal::program>` by compiling it for the reference target (CPU) and then running it with :cpp:any:`eval <migraphx::internal::program::eval>`. This prints the result on the console.

To compile the program for the GPU, move the file to ``test/gpu/`` directory and include the given target::

    #include <migraphx/gpu/target.hpp>

Adding Parameters
*******************

While the ``add_two_literals()`` function above demonstrates an add operation on the constant values ``1`` and ``2``,
the following program demonstrates how to pass a parameter (``x``) to a module using the ``add_parameter()`` function::

    migraphx::program p;
    auto* mm = p.get_main_module();
    migraphx::shape s{migraphx::shape::int32_type, {1}};

    // add parameter "x" with the shape s
    auto x   = mm->add_parameter("x", s);
    auto two = mm->add_literal(2);

    // add the "add" instruction between the "x" parameter and "two" to the module
    mm->add_instruction(migraphx::make_op("add"), x, two);
    p.compile(migraphx::ref::target{});

In the code snippet above, an add operation is performed on a parameter of type ``int32`` and the literal ``2``, followed by compilation for the CPU.
To run the program, pass the parameter as a ``parameter_map`` while calling :cpp:any:`eval <migraphx::internal::program::eval>`.
To map the parameter ``x`` to an :cpp:any:`argument <migraphx::internal::argument>` object with an ``int`` data type, a ``parameter_map`` is created as shown below::

    // create a parameter_map object for passing a value to the "x" parameter
    std::vector<int> data = {4};
    migraphx::parameter_map params;
    params["x"] = migraphx::argument(s, data.data());

    auto result = p.eval(params).back();
    std::cout << "add_parameters: 4 + 2 = " << result << "\n";
    EXPECT(result.at<int>() == 6);

Handling Tensor Data
**********************

The above two examples demonstrate scalar operations. To describe multi-dimensional tensors, use the :cpp:any:`shape <migraphx::internal::shape>` class to compute a simple convolution as shown below::

    migraphx::program p;
    auto* mm = p.get_main_module();

    // create shape objects for the input tensor and weights
    migraphx::shape input_shape{migraphx::shape::float_type, {2, 3, 4, 4}};
    migraphx::shape weights_shape{migraphx::shape::float_type, {3, 3, 3, 3}};

    // create the parameters and add the "convolution" operation to the module
    auto input   = mm->add_parameter("X", input_shape);
    auto weights = mm->add_parameter("W", weights_shape);
    mm->add_instruction(migraphx::make_op("convolution", {{"padding", {1, 1}}, {"stride", {2, 2}}}), input, weights);

Most programs take data from allocated buffers that are usually on the GPU. To pass the buffer data as an argument, create :cpp:any:`argument <migraphx::internal::argument>` objects directly from the pointers to the buffers::

    // compile the program
    p.compile(migraphx::ref::target{});

    // buffers allocated by the user
    std::vector<float> a = ...;
    std::vector<float> c = ...;

    // solution vector
    std::vector<float> sol = ...;

    // create the arguments in a parameter_map
    migraphx::parameter_map params;
    params["X"] = migraphx::argument(input_shape, a.data());
    params["W"] = migraphx::argument(weights_shape, c.data());

    // evaluate and confirm the result
    auto result = p.eval(params).back();
    std::vector<float> results_vector(64);
    result.visit([&](auto output) { results_vector.assign(output.begin(), output.end()); });

    EXPECT(migraphx::verify::verify_rms_range(results_vector, sol));

An :cpp:any:`argument <migraphx::internal::argument>` can handle memory buffers from either the GPU or the CPU.
By default, buffers are allocated on the target the program is compiled for: on the CPU when compiling for the CPU, and on the GPU when compiling for the GPU.
To keep the buffers on the CPU even when compiling for the GPU, set the option ``offload_copy=true``.
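
With the internal C++ API this might look like the following fragment. This is a sketch that assumes a ``compile_options`` struct with an ``offload_copy`` member; verify against the current MIGraphX headers before relying on it.

```cpp
#include <migraphx/gpu/target.hpp>
#include <migraphx/program.hpp>

// Compile for the GPU, but keep parameter buffers on the host;
// the runtime then copies data to and from the device around eval().
migraphx::compile_options options;
options.offload_copy = true;
p.compile(migraphx::gpu::target{}, options);
```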

Importing From ONNX
**********************

To make it convenient to use neural networks from other frameworks directly, the MIGraphX ONNX parser allows you to build a :cpp:any:`program <migraphx::internal::program>` directly from an ONNX file.
For usage, refer to the ``parse_onnx()`` function below::

    program p = migraphx::parse_onnx("model.onnx");
    p.compile(migraphx::gpu::target{});

Sample programs
-----------------

You can find all the MIGraphX examples in the `Examples <https://github.com/ROCmSoftwarePlatform/AMDMIGraphX/tree/develop/examples/migraphx>`_ directory.

Build MIGraphX source code
****************************

To build the sample program `ref_dev_examples.cpp <https://github.com/ROCm/AMDMIGraphX/blob/develop/test/ref_dev_examples.cpp>`_, use::

    make -j$(nproc) test_ref_dev_examples

This creates an executable file ``test_ref_dev_examples`` under ``bin/`` in the build directory.

To verify the build, use::

    make -j$(nproc) check

For detailed instructions on building MIGraphX from source, refer to the `README <https://github.com/ROCm/AMDMIGraphX#readme>`_ file.
5 changes: 3 additions & 2 deletions docs/dev/data.rst
@@ -29,8 +29,9 @@ raw_data
:members:
:undoc-members:

.. doxygenfunction:: template<class T, class ...Ts> auto migraphx::internal::visit_all(T &&x, Ts&&... xs)

.. doxygenfunction:: migraphx::internal::visit_all(T&& x, Ts&&... xs)

.. doxygenfunction:: migraphx::internal::visit_all(const std::vector<T>& x)

tensor_view
-----------