Merge with fixes of 087fea06 (19) #248

Merged 84 commits on Aug 22, 2024

Commits (84)
5e564b5
Adds Some Quantization Support for AtenMatmulOp (#3147)
zjgarvey Apr 15, 2024
ae47247
[Stablehlo] Enhance broadcast pattern in matmul Ops (#3161)
Apr 16, 2024
10b6062
[CI] Enable the tests for fx_importer in the CI (#3168)
penguin-wwy Apr 16, 2024
af5509c
[FxImporter] Type conversion to resolve the mismatch between Py type …
penguin-wwy Apr 16, 2024
a0232e9
[MLIR][TORCH] Add OnnxToTorch lowering for ReduceL1 Op (#3146)
vinayakdsci Apr 16, 2024
7a1ad0d
[TorchToLinalg] Adds Support for Remaining Quantized Matmul Cases (#3…
zjgarvey Apr 16, 2024
398aeee
[FxImporter] Fix kwarg operands in fx importer (#3166)
penguin-wwy Apr 16, 2024
e4b11a0
[FxImporter] Fix fx importer test config and clean xfail set (#3176)
penguin-wwy Apr 17, 2024
d2ba956
[Torch] Support Aten_CastLongOp. (#3160)
Apr 17, 2024
3aa81f7
[FxImporter] Replace local_scalar_dense in fx_importer (#3180)
penguin-wwy Apr 17, 2024
b66eabd
[onnx][torch][linalg] Implementing align-corner modes for gridsampler…
afalkenberg1 Apr 17, 2024
491f482
[torch-mlir][sparse] pre-pend named buffers to parameter list (#3178)
aartbik Apr 17, 2024
d4313ee
[Torch] Add decomposition of RepeatInterleaveSelfInt Op (#3075)
Apr 17, 2024
6e5630d
build: manually update PyTorch version (#3170)
vivekkhandelwal1 Apr 18, 2024
4c21e20
[torch] Support rank-0 index for torch index select (#3182)
rsuderman Apr 18, 2024
0e77de9
[torch] Add support for `torch.view` with dynamic shapes (#3164)
rsuderman Apr 18, 2024
be742a9
[onnx] Update the failure triage for onnx (#3186)
rsuderman Apr 18, 2024
6c4f7de
[stablehlo] add aten.clamp.Tensor op conversion support (#3185)
penguin-wwy Apr 19, 2024
0a60734
[FxImporter] Add fx importer to stablehlo e2e test config (#3183)
penguin-wwy Apr 19, 2024
5a98c72
[StableHLO] Fix aten.clamp.Tensor in FxImporter2StableHLO (#3190)
penguin-wwy Apr 19, 2024
790a697
[Torch] Add folder for AtenIntOp, AtenFloatOp (#3189)
Apr 19, 2024
b01245c
[onnx] Fix `onnx.Not` for non-bool inputs (#3187)
rsuderman Apr 19, 2024
ea0ecb6
[stablehlo] add aten.remainder.Tensor op conversion support (#3197)
penguin-wwy Apr 20, 2024
b6b0160
[stablehlo] add aten.fmod.Tensor op conversion support (#3198)
penguin-wwy Apr 21, 2024
733cace
[onnx] Fix `onnx.split` by directly handling slicing (#3194)
rsuderman Apr 21, 2024
8222637
[onnx] Extend op version number of `onnx.ScatterElements` (#3195)
rsuderman Apr 21, 2024
a60e84e
[stablehlo] add aten.expm1 op conversion support (#3199)
penguin-wwy Apr 22, 2024
e5bdd71
[Torch] Emit and decompose prims.iota op (#3132)
penguin-wwy Apr 22, 2024
6abc737
[MLIR][TORCH] Fix OnnxToLinalg lowering issue for Squeeze and Unsquee…
vivekkhandelwal1 Apr 22, 2024
3c252cd
[onnx] Add `onnx-to-torch` lowering for random ops (#3193)
vivekkhandelwal1 Apr 22, 2024
cff2f08
[torch] Add OnnxToTorch lowering for `onnx.ReduceL2` (#3175)
vinayakdsci Apr 23, 2024
797e4cd
[Stablehlo] lowering asin, acos, atan (#3207)
qingyunqu Apr 23, 2024
1f8123b
[Stablehlo] support unary ops which promote to floating point (#3209)
qingyunqu Apr 23, 2024
c1967b6
[Stablehlo] add AtenLog10Op, AtenLog2Op lowering to stablehlo (#3208)
Apr 23, 2024
db3842f
[Stablehlo] support lowering sinh & cosh to stablehlo (#3213)
qingyunqu Apr 23, 2024
ddb29c2
[onnx] Add OnnxToTorch support for `onnx.ConvInteger` (#3179)
jinchen62 Apr 23, 2024
61e6312
Support select_last_index attribute of onnx argmax op (#3192)
jinchen62 Apr 23, 2024
09d4204
Support select_last_index attribute of onnx argmin op (#3212)
jinchen62 Apr 23, 2024
a8ba865
[torch] Adds Quantization Support for `aten.relu` (#3177)
zjgarvey Apr 23, 2024
4da3d71
[Torch] Support AtenProdOp on linalg and stablehlo (#3215)
Apr 24, 2024
42b9ecc
[Stablehlo] Fix AtenSumDimIntListOp when dim==None (#3216)
Apr 24, 2024
f77d883
[onnx] handle dynamic padSize tensor in onnx.Pad (#3214)
PhaneeshB Apr 24, 2024
8a1dbbd
[torchscript] export extra library file name to user (#3203)
qingyunqu Apr 24, 2024
dc470e6
add torch.qint32 to dtype-spec in TorchTypes.td (#3206)
qingyunqu Apr 24, 2024
e18bf42
[stablehlo] Support ConstantPadNdOp in stablehlo (#3211)
Apr 24, 2024
fab2696
[Torch] support aten.trunc (#3219)
qingyunqu Apr 24, 2024
678c03b
Fix nan issue for fp16 torch.randn/randn_like in ConvertAtenUniformOp…
aviator19941 Apr 24, 2024
7be22bb
Update add_ops.md to link torch mlir get started instructions promine…
renxida Apr 24, 2024
7030eac
[stablehlo] Support aten.any and aten.all lowering (#3217)
Apr 25, 2024
9e2fe47
build: manually update PyTorch version (#3210)
vivekkhandelwal1 Apr 25, 2024
4361178
[torch-mlir][sparse] recognize sparse tensor conversion (#3226)
aartbik Apr 25, 2024
b0ba3de
[Torch] support AtenScalarImplicitOp canonicalize with float (#3231)
qingyunqu Apr 25, 2024
2eac8a9
[torch-mlir][sparse] sparse tensor dialect is a legal dialect (#3227)
aartbik Apr 25, 2024
ac11ec7
[MLIR][ONNX] Add OnnxToTorch support for ReduceLogSum Op (#3229)
archana-ramalingam Apr 25, 2024
cd33d8b
[onnx] Update DefaultDomainGtoP.cpp gridsampler (#3228)
afalkenberg1 Apr 26, 2024
122eb69
[stablehlo] add aten left/right shift op conversion support (#3234)
penguin-wwy Apr 26, 2024
634a796
[Torch] fold aten.log (#3223)
qingyunqu Apr 26, 2024
ac85338
[Stablehlo] Support AtenPowScalarOp, AtenTanOp, AtenAsinhOp, AtenAcos…
Apr 26, 2024
9a12a09
[onnx] Support `onnx.OneHot` lowering to `torch` (#3196)
rsuderman Apr 26, 2024
944a6df
Extract the Python APIs in the pt1 dir back to the root (#3237)
penguin-wwy Apr 27, 2024
f173a06
[Torch] emit aten.ne.str and add folder (#3242)
qingyunqu Apr 27, 2024
4fbe77a
[dynamo] Verify the default value is passed by kwargs (#2998)
penguin-wwy Apr 27, 2024
695458d
Fix ArgAnnotation with boolean flag which instructs value semantics (…
qingyunqu Apr 27, 2024
189b3f1
Fix broken link in abstract_interp_lib.md (#2800)
zjgarvey Apr 27, 2024
fb8748b
Switch to pre-commit for lint checks. (#3200)
stellaraccident Apr 27, 2024
466618e
Disable pre-commit on push to main.
stellaraccident Apr 27, 2024
6679728
Fix deprecated uses of cast/dyn_cast/dyn_cast_or_null/isa (#3243)
penguin-wwy Apr 27, 2024
5d4b803
[NFC reformat] Run pre-commit on all files and format misc.
stellaraccident Apr 27, 2024
6877302
[NFC reformat] Applies pre-commit formatting to Python files. (#3244)
stellaraccident Apr 27, 2024
a339d7b
Enable post commit run of pre-commit hooks over all files. (#3245)
stellaraccident Apr 27, 2024
46c0f3c
[Torch] emit aten.log_sigmoid and decompose it to log(sigmoid) (#3246)
qingyunqu Apr 28, 2024
5684dc0
[Torch] emit aten.celu and decompose it (#3247)
Apr 28, 2024
9f64748
[FxImporter] Synchronize the collection of symbolic torch ops (#3236)
penguin-wwy Apr 29, 2024
aed2cf3
[Torch] emit aten.__contains__.str_list and add folder (#3249)
qingyunqu Apr 29, 2024
b218519
[NFC] Update black version (#3256)
penguin-wwy Apr 29, 2024
b1e2241
[ONNX] Fix Onnx.Selu lowering and canonicalizer for IntImplicit op (#…
vivekkhandelwal1 Apr 29, 2024
0a5ff68
[stablehlo] Support PrimsCollapseOp and PrimsSplitDimOp in stablehlo …
Apr 29, 2024
2176176
[FX] Add broadcast test with dynamic dim (#3123)
sjain-stanford Apr 29, 2024
087fea0
build: manually update PyTorch version (#3257)
vivekkhandelwal1 Apr 29, 2024
e344453
Merge commit '087fea06' into bump_to_087fea06
mgehre-amd Aug 20, 2024
9c7e3b8
Fixes
mgehre-amd Aug 21, 2024
aeaceb7
Fix xfail
mgehre-amd Aug 21, 2024
f3e53f2
Update xfail
mgehre-amd Aug 21, 2024
70e6f39
Update xfail
mgehre-amd Aug 21, 2024
Files changed
26 changes: 0 additions & 26 deletions .github/workflows/lint.yml

This file was deleted.

15 changes: 15 additions & 0 deletions .github/workflows/pre-commit-all.yml
@@ -0,0 +1,15 @@
name: pre-commit (all files on push)

on:
push:
branches: [main, post-commit-test]

jobs:
pre-commit:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/setup-python@v3
- uses: pre-commit/[email protected]
with:
extra_args: --color=always --all-files
17 changes: 17 additions & 0 deletions .github/workflows/pre-commit.yml
@@ -0,0 +1,17 @@
name: pre-commit

on:
pull_request:

jobs:
pre-commit:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
# required to grab the history of the PR
fetch-depth: 0
- uses: actions/setup-python@v3
- uses: pre-commit/[email protected]
with:
extra_args: --color=always --from-ref ${{ github.event.pull_request.base.sha }} --to-ref ${{ github.event.pull_request.head.sha }}
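For contributors, this diff-scoped check can be approximated locally before pushing. A minimal sketch, assuming `origin/main` tracks the upstream base branch:

```shell
# Lint only the changes between the base branch and HEAD,
# mirroring the --from-ref/--to-ref invocation used in CI above.
pip install pre-commit
pre-commit run --color=always --from-ref origin/main --to-ref HEAD
```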
2 changes: 1 addition & 1 deletion .github/workflows/releaseSnapshotPackage.yml
@@ -44,7 +44,7 @@ jobs:
uses: ad-m/[email protected]
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
-branch: ${{ env.BRANCH_NAME }}
+branch: ${{ env.BRANCH_NAME }}
tags: true

- name: Create Release
21 changes: 21 additions & 0 deletions .pre-commit-config.yaml
@@ -0,0 +1,21 @@
# See https://pre-commit.com for more information
# See https://pre-commit.com/hooks.html for more hooks
exclude: "GeneratedTorchOps\\.td|abstract_interp_lib_gen\\.py|\\.excalidraw|\\.ipynb"
repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v3.2.0
hooks:
- id: trailing-whitespace
- id: end-of-file-fixer
- id: check-ast
- id: check-yaml
- id: check-added-large-files
- repo: https://github.com/psf/black
rev: 24.4.2
hooks:
- id: black

- repo: https://github.com/pre-commit/mirrors-clang-format
rev: 'v18.1.4'
hooks:
- id: clang-format
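With this config in place, individual hooks can also be run on demand. For example, using the hook ids declared above:

```shell
# Run a single hook from .pre-commit-config.yaml across the whole tree.
pre-commit run black --all-files
pre-commit run clang-format --all-files
```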
2 changes: 0 additions & 2 deletions .style.yapf

This file was deleted.

2 changes: 1 addition & 1 deletion CMakeLists.txt
@@ -247,4 +247,4 @@ add_subdirectory(projects)
# Finish with top-level Python bindings so it can handle additional deps.
if(MLIR_ENABLE_BINDINGS_PYTHON)
add_subdirectory(python)
-endif()
+endif()
8 changes: 4 additions & 4 deletions README.md
@@ -1,4 +1,4 @@
-# The Torch-MLIR Project
+# The Torch-MLIR Project

The Torch-MLIR project aims to provide first class compiler support from the [PyTorch](https://pytorch.org) ecosystem to the MLIR ecosystem.

@@ -8,15 +8,15 @@ necessarily a reflection of the completeness or stability of the code, it
does indicate that the project is not yet endorsed as a component of LLVM.

[PyTorch](https://pytorch.org)
-PyTorch is an open source machine learning framework that facilitates the seamless transition from research and prototyping to production-level deployment.
+PyTorch is an open source machine learning framework that facilitates the seamless transition from research and prototyping to production-level deployment.

[MLIR](https://mlir.llvm.org)
The MLIR project offers a novel approach for building extensible and reusable compiler architectures, which address the issue of software fragmentation, reduce the cost of developing domain-specific compilers, improve compilation for heterogeneous hardware, and promote compatibility between existing compilers.

[Torch-MLIR](https://github.com/llvm/torch-mlir)
Several vendors have adopted MLIR as the middle layer in their systems, enabling them to map frameworks such as PyTorch, JAX, and TensorFlow into MLIR and subsequently lower them to their target hardware. We have observed half a dozen custom lowerings from PyTorch to MLIR, making it easier for hardware vendors to focus on their unique value, rather than needing to implement yet another PyTorch frontend for MLIR. The ultimate aim is to be similar to the current hardware vendors adding LLVM target support, rather than each one implementing Clang or a C++ frontend.

[![Release Build](https://github.com/llvm/torch-mlir/actions/workflows/buildRelease.yml/badge.svg)](https://github.com/llvm/torch-mlir/actions/workflows/buildRelease.yml)
[![pre-commit](https://img.shields.io/badge/pre--commit-enabled-brightgreen?logo=pre-commit)](https://github.com/pre-commit/pre-commit)

## All the roads from PyTorch to Torch MLIR Dialect

@@ -76,7 +76,7 @@ pip install torch-mlir -f https://github.com/llvm/torch-mlir-release/releases/ex

## Demos

-### TorchScript ResNet18
+### TorchScript ResNet18

Standalone script to convert a PyTorch ResNet18 model to MLIR and run it on the CPU backend:
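The script body is collapsed in this diff; as a sketch of how such a demo is typically invoked (the path is an assumption about the repository layout, not shown here):

```shell
# Hypothetical invocation of the ResNet18 demo script (path assumed).
python projects/pt1/examples/torchscript_resnet18.py
```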

20 changes: 15 additions & 5 deletions build_tools/autogen_ltc_backend.py
@@ -30,6 +30,7 @@
TORCHGEN_DIR = Path(torchgen.__path__[0]).resolve()
TORCH_MLIR_DIR = Path(__file__).resolve().parent.parent


def reindent(text, prefix=""):
return indent(dedent(text), prefix)

@@ -75,7 +76,11 @@ def lowering_function(self, schema: LazyIrSchema):
)

# Only create this variable if it's used to avoid Wunused-variable
-operand_idx_counter = "size_t i = 0;" if "i++" in (emplace_arguments_str + emplace_kwarguments) else ""
+operand_idx_counter = (
+    "size_t i = 0;"
+    if "i++" in (emplace_arguments_str + emplace_kwarguments)
+    else ""
+)

return reindent(
f"""
@@ -111,12 +116,16 @@ def __init__(self, binary_dir):
)
assert self.torch_ops_file.exists()
self.binary_dir = Path(binary_dir)
-assert self.binary_dir.is_dir(), f"Binary directory not found: {self.binary_dir}"
+assert (
+    self.binary_dir.is_dir()
+), f"Binary directory not found: {self.binary_dir}"
self.source_yaml = self.binary_dir.joinpath("generated_native_functions.yaml")
self.backend_path = TORCH_MLIR_DIR.joinpath(
"projects", "ltc", "csrc", "base_lazy_backend"
)
-assert self.backend_path.is_dir(), f"Backend path not found: {self.backend_path}"
+assert (
+    self.backend_path.is_dir()
+), f"Backend path not found: {self.backend_path}"
self.generated_path = self.binary_dir.joinpath(
"projects", "ltc", "csrc", "base_lazy_backend", "generated"
)
@@ -168,8 +177,9 @@ def generate_native_functions(self):
if ts_native_yaml_path.exists():
ts_native_yaml = yaml.load(ts_native_yaml_path.read_text(), yaml.CLoader)
else:
logging.warning(f"Could not find `ts_native_functions.yaml` at {ts_native_yaml_path}")

logging.warning(
f"Could not find `ts_native_functions.yaml` at {ts_native_yaml_path}"
)

parsed_yaml = parse_native_yaml(native_yaml_path, tags_yaml_path)
self.native_functions = parsed_yaml.native_functions
9 changes: 4 additions & 5 deletions build_tools/ci/test_posix.sh
@@ -26,17 +26,16 @@ echo "::endgroup::"

case $torch_version in
nightly)
-# Failing with: NotImplementedError:
+# Failing with: NotImplementedError:
# Could not run 'aten::empty.memory_format' with arguments from the 'Lazy' backend.
# As of 2024-01-07
# echo "::group::Run Lazy Tensor Core e2e integration tests"
# python -m e2e_testing.main --config=lazy_tensor_core -v
# echo "::endgroup::"

-# TODO: There is one failing test in this group on stable. It could
-# be xfailed vs excluding entirely.
-echo "::group::Run TorchDynamo e2e integration tests"
-python -m e2e_testing.main --config=torchdynamo -v
+# TODO: Need to verify in the stable version
+echo "::group::Run FxImporter e2e integration tests"
+python -m e2e_testing.main --config=fx_importer -v
echo "::endgroup::"
;;
stable)
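As a usage note, one e2e config can also be narrowed to a single test when debugging. A sketch, where the `--filter` flag and the test name are assumptions about the e2e harness rather than part of this diff:

```shell
# Run only tests matching a regex under the fx_importer config (flag and name assumed).
python -m e2e_testing.main --config=fx_importer -v --filter "ElementwiseReluModule_basic"
```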
8 changes: 3 additions & 5 deletions build_tools/python_deploy/build_linux_packages.sh
@@ -282,7 +282,7 @@ function _check_file_not_changed_by() {

function test_in_tree() {
local torch_version="$1"

echo ":::: Test in-tree"
cmake --build /main_checkout/torch-mlir/build --target check-torch-mlir-all

@@ -308,10 +308,8 @@ function test_in_tree() {
echo ":::: Run Onnx e2e integration tests"
python -m e2e_testing.main --config=onnx -v

-# Dynamo is changing a lot in nightly versions, and thus the implementation
-# tends to become incompatible to the stable version.
-echo ":::: Run TorchDynamo e2e integration tests"
-python -m e2e_testing.main --config=torchdynamo -v
+echo ":::: Run FxImporter e2e integration tests"
+python -m e2e_testing.main --config=fx_importer -v
;;
stable)
echo ":::: Test with stable torch"
12 changes: 7 additions & 5 deletions build_tools/scrape_releases.py
@@ -2,26 +2,28 @@

See https://github.com/llvm/torch-mlir/issues/1374
"""

import argparse
import json

import requests

# Parse arguments
parser = argparse.ArgumentParser()
-parser.add_argument('owner', type=str)
-parser.add_argument('repo', type=str)
+parser.add_argument("owner", type=str)
+parser.add_argument("repo", type=str)
args = parser.parse_args()

# Get releases
response = requests.get(
f"https://api.github.com/repos/{args.owner}/{args.repo}/releases")
f"https://api.github.com/repos/{args.owner}/{args.repo}/releases"
)
body = json.loads(response.content)

# Parse releases
releases = []
for row in body:
-for asset in row['assets']:
+for asset in row["assets"]:
releases.append((asset["name"], asset["browser_download_url"]))

# Output HTML
@@ -33,4 +35,4 @@
html += f" <a href='{url}'>{name}</a><br />\n"
html += """ </body>
</html>"""
-print(html)
+print(html)
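A usage sketch for this script; the owner/repo pair is an assumption based on the release repository referenced in the README above:

```shell
# Print an HTML index of release assets to stdout (owner/repo assumed).
python build_tools/scrape_releases.py llvm torch-mlir-release > index.html
```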
2 changes: 1 addition & 1 deletion docs/abstract_interp_lib.md
@@ -104,7 +104,7 @@ that this is minimal.

## Adding to the abstract interpretation library

-See [Adding a Shape and Dtype Function](adding_a_shape_and_dtype_function.md)
+See [Adding Abstract Interpretation Functions](adding_abstract_interpretation_functions.md)
for details on how to add a shape and dtype function for an operator.

## Rationale
2 changes: 1 addition & 1 deletion docs/add_ops.md
@@ -2,7 +2,6 @@

Collected links and contacts for how to add ops to torch-mlir.


<details>
<summary>Turbine Camp: Start Here</summary>
This document was previously known as `turbine-camp.md` at Nod.ai. "Turbine Camp" is part of Nod.ai's onboarding process, where new Nod.ai folks learn the architecture of our work by adding support for 2 ops to torch-mlir. It was moved into torch-mlir because most of its content is about torch-mlir.
@@ -27,6 +26,7 @@ The details of how we do it and helpful commands to help you set up each repo is
PS: IREE is pronounced Eerie, and hence the ghost icon.

## How to begin
0. Set up torch-mlir according to the instructions here: https://github.com/llvm/torch-mlir/blob/main/docs/development.md
1. You will start by adding support for 2 ops in torch-mlir, to get you familiar with the center of our pipeline. Begin by reading [torch-mlir's documentation on how to implement a new torch op](https://github.com/llvm/torch-mlir/blob/main/docs/Torch-ops-E2E-implementation.md), and set up `llvm/torch_mlir` using https://github.com/llvm/torch-mlir/blob/main/docs/development.md
2. Pick 1 of the yet-unimplemented ops from the following. You should choose something that looks easy to you. **Make sure you create an issue by clicking the little "target" icon to the right of the op, thereby marking the op as yours**
- [TorchToLinalg ops tracking issue](https://github.com/nod-ai/SHARK-Turbine/issues/347)
13 changes: 13 additions & 0 deletions docs/development.md
@@ -26,6 +26,19 @@ python -m pip install -r requirements.txt
python -m pip install -r torchvision-requirements.txt
```

## (Optional) Set up pre-commit

This project uses [pre-commit](https://pre-commit.com/) in its CI. You can also
install it locally to lint and fix your code before the CI complains about it.

```shell
pip install pre-commit
# You can run interactively with `pre-commit run`
# or install hooks so it runs automatically:
pre-commit install
```

## CMake Build

Two setups are possible to build: in-tree and out-of-tree. The in-tree setup is the most straightforward, as it will build LLVM dependencies as well.
Expand Down
1 change: 0 additions & 1 deletion docs/importers/onnx_importer.md
@@ -140,4 +140,3 @@ torch-mlir's representation:

* `ConstantOfShape`: Mapped to `torch.vtensor.literal` with
a corresponding `value` attribute.

1 change: 0 additions & 1 deletion docs/roadmap.md
@@ -277,4 +277,3 @@ directly provided a way to plug into this.

Additionally, we can leverage the [`pytorch-jit-paritybench`](https://github.com/jansel/pytorch-jit-paritybench)
to verify our end-to-end correctness on real models.

2 changes: 1 addition & 1 deletion include/CMakeLists.txt
@@ -1,2 +1,2 @@
add_subdirectory(torch-mlir)
-add_subdirectory(torch-mlir-dialects)
+add_subdirectory(torch-mlir-dialects)