[AutoBump] Merge with fixes of f03a5762 (Dec 12) (136) #526

Merged
mgehre-amd merged 11 commits into feature/backport_ea1_ops from bump_to_f03a5762 on Feb 12, 2025

Conversation

mgehre-amd
Collaborator

No description provided.

sahas3 and others added 6 commits December 12, 2024 04:08
…ps and conversion patterns. (llvm#3759)

This PR refactors TorchToTosa to separate the construction of
legal/illegal ops and conversion patterns into their own functions (a
sketch of how these might be wired together follows the list):

1. populateTorchToTosaConversionLegalOps -- populates any ops that are
legal after the conversion pass
2. populateTorchToTosaConversionIllegalOps -- populates any ops that are
illegal after the conversion pass
3. populateTorchToTosaConversionPatterns -- populates the op conversion
patterns
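
A minimal sketch of how the three entry points could be used together.
The signatures and namespaces below are assumptions (they are not shown
in this PR), so treat this as illustrative rather than the actual header:

```cpp
// Sketch only: assumed signatures for the three populate functions, plus one
// possible way a conversion pass could wire them together.
#include "mlir/Transforms/DialectConversion.h"

namespace mlir::torch {
// Assumed signatures -- not copied from the PR.
void populateTorchToTosaConversionLegalOps(ConversionTarget &target);
void populateTorchToTosaConversionIllegalOps(ConversionTarget &target);
void populateTorchToTosaConversionPatterns(TypeConverter &typeConverter,
                                           RewritePatternSet &patterns);
} // namespace mlir::torch

static mlir::LogicalResult
runTorchToTosaLikeConversion(mlir::Operation *op,
                             mlir::TypeConverter &typeConverter) {
  mlir::MLIRContext *ctx = op->getContext();
  mlir::ConversionTarget target(*ctx);
  mlir::RewritePatternSet patterns(ctx);

  // 1. Ops that may remain after the pass.
  mlir::torch::populateTorchToTosaConversionLegalOps(target);
  // 2. Ops that must have been converted away.
  mlir::torch::populateTorchToTosaConversionIllegalOps(target);
  // 3. The Torch -> TOSA rewrite patterns themselves.
  mlir::torch::populateTorchToTosaConversionPatterns(typeConverter, patterns);

  return mlir::applyPartialConversion(op, target, std::move(patterns));
}
```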

Currently, the (il)legality of the ops after the conversion pass runs is
embedded within the conversion pattern itself. Our end goal is to write a
new pass pipeline that converts `torch` ops to a mix of `tosa`, `linalg`,
`tensor`, etc. dialect ops. The reason we also want to emit `tosa` ops
(instead of using the existing `TorchToLinalg` to emit
`linalg`+`tensor`+...) is that some operations, such as `conv2d`, encode
the padding behavior in the op itself in `tosa`, unlike the `linalg`
version -- this helps in lowering `tosa.conv2d` to a custom
implementation that does padding on the fly.

To implement this new pipeline we need to be able to separate the
illegal `tosa` ops from the conversion pattern itself. Otherwise we will
hit an issue for ops like `AtenMaxDimOp`, which can be lowered to both
the `tosa` and the `linalg`+others dialects. Not every `AtenMaxDimOp` can
be lowered successfully to `tosa`, because the implementation uses
`tosa.reshape`, which cannot handle multiple dynamic dimensions, whereas
the `TorchToLinalg` lowering can handle them. With the current behavior,
the pipeline stops as soon as the existing `TorchToTosa` conversion runs,
because `AtenMaxDimOp` is marked as an illegal op.

Essentially, we want to be able to control what the legality of the ops
should be independently of the conversion pattern. This is also in line
with the conversion patterns upstream in llvm-project's MLIR tree, such as
https://github.com/llvm/llvm-project/blob/000e790be35b77a01872851646d54432a203542c/mlir/lib/Conversion/SCFToControlFlow/SCFToControlFlow.cpp#L718
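
For illustration, a hypothetical downstream pipeline (not part of this
PR) could reuse the TOSA patterns but decide the legality of
`AtenMaxDimOp` on its own, leaving the cases with more than one dynamic
dimension for the later `TorchToLinalg` run. The helper below and the
exact type queries are assumptions:

```cpp
// Hypothetical pipeline code, not this PR's diff: mark AtenMaxDimOp legal
// (i.e. leave it untouched for a later TorchToLinalg pass) exactly when the
// tosa.reshape-based lowering would not be able to handle it.
#include "mlir/Transforms/DialectConversion.h"
#include "torch-mlir/Dialect/Torch/IR/TorchOps.h"
#include "llvm/ADT/STLExtras.h"

using namespace mlir;
using namespace mlir::torch;

// Assumed helper, for illustration only.
static bool hasMultipleDynamicDims(Value v) {
  auto ty = dyn_cast<Torch::ValueTensorType>(v.getType());
  if (!ty || !ty.hasSizes())
    return true; // unknown rank/sizes: conservatively keep the op
  return llvm::count(ty.getSizes(), Torch::kUnknownSize) > 1;
}

static void configureMixedLegality(ConversionTarget &target) {
  target.addDynamicallyLegalOp<Torch::AtenMaxDimOp>(
      [](Torch::AtenMaxDimOp op) {
        // "Legal" here means "leave it for the linalg path".
        return hasMultipleDynamicDims(op.getSelf());
      });
}
```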


"THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY."
…m#3916)

The onnx output tensor has a shape of (n, z), where n is the number of
dimensions in the input tensor and z is the number of non-zero elements.
This is different from PyTorch's default behavior, where the dimensions
are reversed.

This commit adds support for 1-d group convolution by transforming it
into a 2-d group convolution, which is already supported.

This commit also refactors the unsqueeze and squeeze tensor utility.
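
For intuition, a small standalone sketch of the shape bookkeeping behind
the 1-d to 2-d rewrite (plain C++, not the PR's MLIR code; where the unit
dimension is inserted is an assumption here):

```cpp
// Standalone illustration: a 1-d group convolution over (N, C, L) can be
// phrased as a 2-d group convolution by unsqueezing a unit spatial dimension,
// running the existing 2-d lowering, and squeezing the unit dimension away.
#include <cstdio>
#include <vector>

int main() {
  std::vector<long> input = {4, 16, 128}; // (N, C, L)
  std::vector<long> weight = {32, 8, 3};  // (O, C/groups, K)

  // Unsqueeze: add a unit height so the already-supported 2-d path applies.
  // Stride/padding/dilation lists would get a matching leading 1 as well.
  std::vector<long> input2d = {input[0], input[1], 1, input[2]};
  std::vector<long> weight2d = {weight[0], weight[1], 1, weight[2]};

  std::printf("2-d input : (%ld, %ld, %ld, %ld)\n", input2d[0], input2d[1],
              input2d[2], input2d[3]);
  std::printf("2-d weight: (%ld, %ld, %ld, %ld)\n", weight2d[0], weight2d[1],
              weight2d[2], weight2d[3]);
  // After the 2-d convolution, the unit dimension is squeezed back out,
  // giving an (N, O, L_out) result.
  return 0;
}
```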

---------

Signed-off-by: Vivek Khandelwal <[email protected]>
…llvm#3918)

We incorrectly relied on the fact that StableHLO registers the sparse
tensor dialect, but when building for, e.g., just the LinAlg backend, the
dependency was missing. This commit fixes that shortcoming.

FIXES: llvm#3816
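
A hedged sketch (not the actual diff) of the kind of explicit
registration that removes the reliance on StableHLO; the function name is
hypothetical:

```cpp
// Sketch only: register the sparse_tensor dialect explicitly rather than
// relying on StableHLO to pull it in, so LinAlg-only builds keep working.
#include "mlir/Dialect/SparseTensor/IR/SparseTensor.h"
#include "mlir/IR/DialectRegistry.h"

void registerBackendDialects(mlir::DialectRegistry &registry) {
  registry.insert<mlir::sparse_tensor::SparseTensorDialect>();
}
```
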
Base automatically changed from bump_to_5a5cc6b3 to bump_to_5077090a February 10, 2025 14:10
@mgehre-amd mgehre-amd requested a review from jorickert February 10, 2025 14:11
Base automatically changed from bump_to_5077090a to feature/backport_ea1_ops February 11, 2025 08:38
[AutoBump] Merge with fixes of 8e0eafd (Dec 13) (138)
[AutoBump] Merge with fixes of 71cb942 (Dec 17) (139)
@mgehre-amd mgehre-amd merged commit 210477d into feature/backport_ea1_ops Feb 12, 2025
4 checks passed
@mgehre-amd mgehre-amd deleted the bump_to_f03a5762 branch February 12, 2025 07:40