Merge with fixes of 087fea06 (19) #248
Merged
Conversation
1. `onnx.MatMulInteger` now converts to `aten.matmul` instead of `aten.mm`.
2. `aten.matmul`, for ranks >= 2, now allows quantized inputs and will lower to `linalg::quantized_matmul` or `linalg::quantized_batch_matmul`.
3. Added `AtenMatmulOp` to the FuseQuantizeOps rewrite patterns QuantizeOperands, QuantizeTransposedOperands, and QuantizeAccumulator.
4. Added several tests, including some that test `AtenMmOp` with varying quantization signed-ness.
5. Added a quantized matmul mat-vec test to verify the failure to lower to linalg; cleaned out out-of-date code related to common torch-mlir lowering xfails.
6. While debugging a real model with quantized matmuls, I found a bug in the scalarize-shapes pass that resulted from the `aten.full` op folder returning an incompatible result type. This is fixed by the small change here to [lib/Dialect/Torch/IR/TorchOps.cpp](https://github.com/llvm/torch-mlir/compare/main...zjgarvey:torch-mlir:MatMulIntegerFix?expand=1#diff-dc8ed165c207918e606490eee3984b1ad51d7034e6aac36fc046bf47f6f03f4f).
To pass test "MatmulStaticBroadcast_basic" in stablehlo:

```python
class MatmulStaticBroadcast(torch.nn.Module):
    def __init__(self):
        super().__init__()

    @export
    @annotate_args([
        None,
        ([4, 1, 6, 7], torch.float32, True),
        ([8, 1, 5, 7, 6], torch.float32, True),
    ])
    def forward(self, lhs, rhs):
        return torch.matmul(lhs, rhs)


@register_test_case(module_factory=lambda: MatmulStaticBroadcast())
def MatmulStaticBroadcast_basic(module, tu: TestUtils):
    module.forward(tu.rand(4, 1, 6, 7), tu.rand(8, 1, 5, 7, 6))
```
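For reference, the batch dimensions [4, 1] and [8, 1, 5] broadcast to [8, 4, 5], so the result has shape [8, 4, 5, 6, 6]. A quick check in plain PyTorch (illustrative, not part of the patch):

```python
import torch

# Batch dims broadcast ([4,1] vs [8,1,5] -> [8,4,5]); matrix dims give [6,6]
out = torch.matmul(torch.rand(4, 1, 6, 7), torch.rand(8, 1, 5, 7, 6))
assert out.shape == (8, 4, 5, 6, 6)
```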
Replace the torchdynamo e2e with the fx_importer e2e
Adds OnnxToTorch Lowering for the ReduceL1 op.
…vm#3167) The new cases added for quantized matmuls are:
1. vec-vec
2. vec-mat
3. mat-vec

each of which is now lowered to expand(s), `quantized_matmul`, and collapse, as illustrated in the sketch below.
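Conceptually, the rewrite treats a vector operand as a unit-dimension matrix. In plain PyTorch terms (an illustrative sketch of the vec-mat case, not the actual linalg lowering):

```python
import torch

v = torch.rand(7)       # vector operand
m = torch.rand(7, 5)    # matrix operand
# vec-mat: expand the vector to a 1xK matrix, multiply, collapse the unit dim
assert torch.allclose(torch.matmul(v, m), v.unsqueeze(0).matmul(m).squeeze(0))
```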
Remove the `kwarg_only` limitation. For example,

```python
torch.add(x, 3.0, alpha=2)
```

compiled to

```
%0 = torch.aten.add.Scalar %arg0, %float3.000000e00, %int1
```

fixed to

```
%0 = torch.aten.add.Scalar %arg0, %float3.000000e00, %int2
```
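Since `alpha` scales the second operand (`x + alpha * other`), dropping the kwarg changed numerics. A quick check (illustrative only):

```python
import torch

x = torch.tensor([1.0])
# alpha scales the second operand: 1.0 + 2 * 3.0 == 7.0
assert torch.add(x, 3.0, alpha=2).item() == 7.0
```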
Canonicalize `Aten_CastLongOp` into `AtenToDtypeOp`.
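The two forms are equivalent at the PyTorch level (a sanity check, not part of the patch):

```python
import torch

x = torch.tensor([1.5, 2.5])
# _cast_Long is just a conversion to int64
assert torch.equal(x.long(), x.to(torch.int64))
```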
…llvm#3171) Align-corner modes select what the corners mean: either the center of the corner points or the edges of the edge points.

---------
Co-authored-by: Rob Suderman <[email protected]>
Decompose `RepeatInterleaveSelfInt` with the following ops:

```python
def my_repeat_interleave(input, repeats, dim=None):
    if dim is None:
        # Flatten the input and then repeat
        return input.flatten().unsqueeze(-1).tile((1, repeats)).flatten()
    else:
        # Calculate the shape after repeat
        expanded_shape = list(input.shape)
        expanded_shape[dim] *= repeats
        # Repeat the tensor along the specified dimension
        repeat_shape = [1] * (input.dim() + 1)
        repeat_shape[dim + 1] = repeats
        input = input.unsqueeze(-1)
        # Tile and then reshape
        tiled = torch.tile(input, repeat_shape)
        # Rearrange and reshape
        repeated = tiled.reshape(*expanded_shape)
        return repeated
```

I passed the tests of stablehlo and linalg. When testing onnx, strange things happened: in torch-mlir's CI **torch_nightly** and my own environment (torch==2.4.0.dev20240318+cpu), the test **passes**; in torch-mlir's CI **torch_stable**, it **fails**. The test case is `RepeatInterleaveSelfIntNoDimModule_basic`; the result shape should be [120].

```python
class RepeatInterleaveSelfIntNoDimModule(torch.nn.Module):
    def __init__(self):
        super().__init__()

    @export
    @annotate_args([
        None,
        ([3, 4, 5], torch.float32, True),
    ])
    def forward(self, x):
        return x.repeat_interleave(2)


@register_test_case(module_factory=lambda: RepeatInterleaveSelfIntNoDimModule())
def RepeatInterleaveSelfIntNoDimModule_basic(module, tu: TestUtils):
    module.forward(tu.rand(3, 4, 5))
```

The error log is as follows:

```
Unexpected outcome summary: (onnx)

****** Failed tests - 1 tests
    FAIL - "RepeatInterleaveSelfIntNoDimModule_basic"
        @ trace item #0 - call to "forward"
        @ output of call to "forward"
        ERROR: shape (torch.Size([6, 4, 5])) is not equal to golden shape (torch.Size([120]))
```

@rsuderman Would you please help me check what's wrong with my PR? Thanks a lot.
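The decomposition above can be sanity-checked against PyTorch's own `repeat_interleave` (a usage sketch, not part of the patch):

```python
import torch

x = torch.arange(6.0).reshape(2, 3)
# No-dim case: output is flattened, every element repeated twice
assert torch.equal(my_repeat_interleave(x, 2), x.repeat_interleave(2))
# Dim case: elements repeated along the chosen dimension
assert torch.equal(my_repeat_interleave(x, 2, dim=1), x.repeat_interleave(2, dim=1))
```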
Set PyTorch and TorchVision version to nightly release 2024-04-16. Signed-Off By: Vivek Khandelwal <[email protected]>
Need to perform an expand in the case where the `indices` operand is rank-0.
We can map to `tensor.reshape` for handling multiple output dynamic shapes. Later we can perform a more complex analysis for identifying expand/collapse cases from the `tensor.reshape`. Initially we planned to handle this identification at the `torch` level; however, it will be easier to handle once converted to core mlir-dialects.
Reclassifying the sources of failure for various bugs so we can reprioritize the failures that are common.
The FX importer will pass static shapes to the Torch dialect, so it needs to generate StableHLO that satisfies shape inference.
See unit test below:

```
// CHECK-LABEL:   func.func @torch.aten.tensor.float(
// CHECK-NEXT:      torch.vtensor.literal(dense<1.000000e+01> : tensor<f32>) : !torch.vtensor<[],f32>
func.func @torch.aten.tensor.float() -> !torch.vtensor<[],f32> {
  %none = torch.constant.none
  %false = torch.constant.bool false
  %float1.000000e01 = torch.constant.float 1.000000e+01
  %67 = torch.aten.tensor.float %float1.000000e01, %none, %none, %false : !torch.float, !torch.none, !torch.none, !torch.bool -> !torch.vtensor<[],f32>
  return %67 : !torch.vtensor<[],f32>
}

// CHECK-LABEL:   func.func @torch.aten.tensor.int(
// CHECK-NEXT:      torch.vtensor.literal(dense<45> : tensor<si32>) : !torch.vtensor<[],si32>
func.func @torch.aten.tensor.int() -> !torch.vtensor<[],si32> {
  %none = torch.constant.none
  %false = torch.constant.bool false
  %int45 = torch.constant.int 45
  %67 = torch.aten.tensor.int %int45, %none, %none, %false : !torch.int, !torch.none, !torch.none, !torch.bool -> !torch.vtensor<[],si32>
  return %67 : !torch.vtensor<[],si32>
}
```
Need to perform a bool cast to support `onnx.Not` on non-bool inputs.
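In PyTorch terms, the cast-then-negate behavior looks like this (illustrative only, not the actual lowering code):

```python
import torch

x = torch.tensor([0, 1, 2])
# Non-bool inputs are cast to bool first (nonzero -> True), then negated
assert torch.equal(torch.logical_not(x.to(torch.bool)),
                   torch.tensor([True, False, False]))
```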
The previous implementation erroneously mixed up `num_outputs` with `slice_size`. The new version correctly computes the slice size and directly performs slicing rather than leveraging `aten.split.tensor`. This is because `onnx` supports a fixed number of splits, making the size computation more easily computable when lowering to `aten` rather than deferring to `aten.split.tensor`.

---------
Co-authored-by: Robert Suderman <[email protected]>
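A minimal sketch of the corrected size computation (hypothetical helper name, for illustration only):

```python
def split_sizes(dim_size: int, num_outputs: int) -> list[int]:
    # The per-output slice size is the dimension extent divided by the
    # number of requested outputs, not num_outputs itself
    slice_size = dim_size // num_outputs
    return [slice_size] * num_outputs

assert split_sizes(12, 3) == [4, 4, 4]
```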
The version number was set too high. Lowering it supports more cases and allows more tests to pass.

Co-authored-by: Robert Suderman <[email protected]>
…ze op (llvm#2991) This commit also cleans up the OnnxToTorch lowering for the Squeeze and Unsqueeze ops and adds support for handling edge cases. Signed-Off By: Vivek Khandelwal <[email protected]>
This commit adds the OnnxToTorch lowering for Onnx's RandomNormal, RandomNormalLike, RandomUniform, and RandomUniformLike ops.
Like llvm#3130, this gradually replaces the deprecated code https://github.com/llvm/mlir-www/blob/main/website/content/deprecation/_index.md#deprecated
This is part 1 of ~3, formatting all miscellaneous text files and CPP files matched by a first run of pre-commit. These tend to be low change-traffic and are likely not disruptive. Subsequent patches will format Python files and remaining CPP files.
This is a large change because prior to this point, Python files in the project were not consistently formatted. This reformats them all with black defaults. Based on experience with prior projects, if you have a dev/long-term branch with Python patches, you can minimize merge conflicts prior to rebasing to include this commit by running `black` on your modified Python files, squashing, and then rebasing/merging.
The pre-commit hook will only run on changed files, whereas this runs on push and will check everything.
`CELU(x) = max(0, x) + min(0, α * (exp(x / α) − 1))`
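The formula can be checked numerically against `torch.nn.functional.celu` (a quick sanity check, illustrative only):

```python
import torch

x = torch.linspace(-2, 2, 5)
alpha = 1.5
# max(0, x) + min(0, alpha * (exp(x / alpha) - 1))
manual = torch.clamp(x, min=0) + torch.clamp(alpha * (torch.exp(x / alpha) - 1), max=0)
assert torch.allclose(torch.nn.functional.celu(x, alpha=alpha), manual)
```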
* Update black version to support 3.11/3.12 * Reformat code
…lvm#3221) Signed-Off By: Vivek Khandelwal <[email protected]>
This scenario was uncovered in a downstream test that failed with a previous snapshot of torch-mlir. See https://github.com/cruise-automation/mlir-tcp/actions/runs/8605480116/job/23581829102?pr=65.

```
File "/home/runner/.cache/bazel/_bazel_runner/ce288f117ee4ca92dc028a6a28476a3d/sandbox/processwrapper-sandbox/2380/execroot/mlir-tcp/bazel-out/k8-opt-exec-2B5CBBC6/bin/test/AotCompile/broadcast_unit_dim_to_dynamic_with_unchanged_dim_dynamic_torch_exporter.runfiles/pip_deps_torch_mlir/site-packages/torch_mlir/extras/fx_importer.py", line 969, in value_info_to_type
    raise NotImplementedError(
NotImplementedError: Could not deduce type from value info: tensor_meta=None, val=s1, sparsity=None
```

It seems to have been resolved on current HEAD. Adding this test to ensure coverage in the future.
Set PyTorch and TorchVision version to nightly release 2024-04-28. Signed-Off By: Vivek Khandelwal <[email protected]>
cferry-AMD approved these changes on Aug 22, 2024.