feat: Fixed conv1d converter when weights are Tensor #2542
Conversation
@apbose: I see my PR is breaking in CI. Is the CI broken? I see a torch-tensorrt dependency-related issue in the workflow run.
@andi4191 There was a problem with CI. I re-triggered it to verify that the latest changes fixed it.
@andi4191 Can you rebase your changes and resolve conflicts?
Thanks, I resolved the conflicts. However, I still see a CI build failure.
Force-pushed from e1a2c15 to ee0bb04 (Signed-off-by: Anurag Dixit <[email protected]>)
@bowang007: Is there anything I can do to get this PR merged? Any update on the root cause of the CI failure?
Hi @andi4191, thanks for supporting this!
Thank you for the update, @bowang007.
Hi @bowang007, is this PR good to merge now?
LGTM
Description
This PR enables the conv1d -> conv2d converter mapping to work when the weights are a Tensor.
Note: the kernel tensor is also unsqueezed when the filterDims are unsqueezed in the conv1d -> conv2d scenario.
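For context, here is a minimal PyTorch sketch of the mapping this converter performs. The helper name `conv1d_as_conv2d` and the exact unsqueeze axes are illustrative assumptions, not the converter's actual code; the point it demonstrates is the one from the note above, that when the filter dimensions gain a dummy spatial dimension, the kernel (weight) tensor must be unsqueezed the same way.

```python
import torch
import torch.nn.functional as F

def conv1d_as_conv2d(x, weight, bias=None, stride=1, padding=0):
    """Illustrative sketch: emulate conv1d by lifting it to conv2d."""
    # (N, C_in, L) -> (N, C_in, 1, L): add a dummy height dimension
    x2d = x.unsqueeze(2)
    # (C_out, C_in, K) -> (C_out, C_in, 1, K): the kernel tensor must
    # be unsqueezed to match the unsqueezed filter dimensions
    w2d = weight.unsqueeze(2)
    y2d = F.conv2d(x2d, w2d, bias=bias,
                   stride=(1, stride), padding=(0, padding))
    # (N, C_out, 1, L_out) -> (N, C_out, L_out): drop the dummy dimension
    return y2d.squeeze(2)

# Sanity check against torch's native conv1d
x = torch.randn(2, 4, 16)
w = torch.randn(8, 4, 3)
assert torch.allclose(F.conv1d(x, w, padding=1),
                      conv1d_as_conv2d(x, w, padding=1), atol=1e-6)
```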