fix(ONNX): avoids resizing unsupported dimensions #3945
base: main
**medium structural:** These two helper functions have quite a bit of overlap. If you wanted to cut out a bit of code duplication, I'd recommend passing the `extractIndex` and the expected value (constant `1.0` for "scales" and the size op result for "sizes") as inputs, and determining which of `AtenEqIntOp` or `AtenEqFloatOp` is appropriate based on `sourceValue.getDtype()`.

This would, in my opinion, also make the logic a bit more transparent from within the resize conversion. E.g., if you called the helper function something like `extractAndCompare`, the assert creation loops might look something like:
---
**important nit:** This name is a bit inscrutable to me. Perhaps something like `getSupportedResizeValueList` is of a similar length, but a bit more direct? Not sold on my own suggestion, but I definitely think this name should be modified.
---
I think a later comment addresses this a bit more, since this function doesn't need to be specific to the resize op.
---
**medium nit:** I don't particularly like `transformation` as a generic replacement for `size || scale`, and I would like the variable names here to be simplified considerably.

The external purpose of the first input isn't useful for understanding what this function does in isolation. You could call this input `sourceTensor` or `sourceVector`, since all we are doing here is extracting values from this source and assembling them into a list construct op. This function might also end up being useful elsewhere, and no one would be able to discern that through the overly-specific naming.
---
Since it might improve the extensibility of this function, you could perhaps allow passing this starting index for value extraction as an input to the function. With something like this, a name like
would make for a rather useful function in many places (maybe warranting a move to `Utils`).
---
In fact, this is likely useful in many places in `TorchOnnxToTorch`, because ONNX doesn't have lists of scalars (so they use rank-1 tensors), and many torch ops take lists of scalars as inputs.
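Such a helper might look roughly like the sketch below; all names and op-building calls here are assumptions about the torch-mlir API for illustration, not code from the PR:

```cpp
// Sketch only: extract elements [startIndex, numElements) of a rank-1
// int tensor and assemble them into a !torch.list<int>.
Value createScalarListFromVector(OpBuilder &b, Location loc, Value source,
                                 int64_t numElements, int64_t startIndex) {
  SmallVector<Value> scalars;
  for (int64_t i = startIndex; i < numElements; ++i) {
    // Select element i and convert the 0-d result to a scalar
    // (extractScalarAt is a hypothetical helper).
    scalars.push_back(extractScalarAt(b, loc, source, i));
  }
  return b.create<Torch::PrimListConstructOp>(
      loc, Torch::ListType::get(b.getType<Torch::IntType>()), scalars);
}
```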
---
**nit:** `Torch::ValueTensorType typeOfOutputTensor;` is quite redundant. I know it's the type of a tensor because it's declared as a `Torch::ValueTensorType`. Additionally, `Of` can be removed simply by reordering. `resultType` is already perfectly fine, and if you prefer `output` over `result`, then why not use `outputType`?

When you are contributing to existing code with existing naming conventions, I think it is only reasonable to change existing variable names if you have a very good reason to do so. I don't think this instance is one of those cases.
---
**possibly important nit:** There might be a better way to leverage the LLVM array-like structures. E.g.,
This pre-allocates memory for two `int64_t`s and has a few other benefits. See https://llvm.org/docs/ProgrammersManual.html#llvm-adt-smallvector-h
---
**nit:** I'm not really sure about the choice to use `each` for the iteration variable. It might make reading the initial `for` statement a bit more natural in English, but it does make the body quite awkward to read. I've never seen anyone use this convention (in any repository), so I'm a bit curious what the motivation is.

In my opinion, the reading-comprehension difficulty is not in the setup of the `for` loop, but within the body, so it makes sense to try and maximize readability there instead. E.g., `currDim` or `iterDim` would be preferable in my opinion to `eachDim`, since `eachDim` is like an incomplete universal quantifier and not an individual "thing". My top preference: if there isn't some other loop-independent `dim` that we need to disambiguate with, then why not just use `dim` as the loop variable?
---
**low-priority:** I'm commenting here because we've had a bit of a discussion on `loc`: I'm certainly amenable to using `mlir::Location loc = binder.getLoc();` if you think the explicit typing would be helpful for future devs. That way they can just google `mlir Location` and it might be easier to figure out what this thing is. The use of `auto` for `loc` is pretty widespread in llvm/mlir, but there is a bit of disagreement over which typing convention is preferred, so I'm happy to leave this up to your discretion.
---
**nits:**
- `uint64_t` over `unsigned`. It has the same number of characters, but is more specific.
- `rank`, since the input and output tensors should have the same rank. It's fine either way, but I would prefer `rank`.
---
nit: Although relatively minor since it is a very small loop, it would be best to avoid an implicit copy constructor as in https://llvm.org/docs/CodingStandards.html#beware-unnecessary-copies-with-auto
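A minimal, self-contained illustration of the guideline (all names here are invented for the example): binding the loop variable by `const auto &` avoids the per-iteration copy.

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Iterates without copying each element; writing
// `for (auto name : attrNames)` instead would invoke the
// std::string copy constructor once per iteration.
std::size_t totalNameLength(const std::vector<std::string> &attrNames) {
  std::size_t total = 0;
  for (const auto &name : attrNames)
    total += name.size();
  return total;
}
```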
---
**nit:** I'd prefer `Value` here instead of `auto`. I think the comment in https://llvm.org/docs/CodingStandards.html#use-auto-type-deduction-to-make-code-more-readable is a helpful one.