Feat/add instance segmentation #67
SlimSAM doesn't support batched inference?
The thing is that each image can have a different number of detected objects, and in that case batched inference isn't possible straight away, so that's why I implemented it per image. But now that you've mentioned it, I thought about it again and realized that we could "pad" the bboxes with dummy bboxes so that batch inference becomes possible; I'm currently testing it. Let me know @sokovninn if you'd find this small hack better.
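To illustrate the idea: each image's box list is padded up to the batch-wide maximum with dummy boxes, one batched forward pass is run, and the outputs produced for the padding are then discarded. This is only a minimal sketch of the hack being discussed, not the PR's actual implementation; the names `pad_boxes`, `unpad`, and `DUMMY_BOX` are illustrative.

```python
# Sketch of "dummy bbox" padding for batched inference over images with
# varying numbers of detections. Assumed names, not taken from the PR.

DUMMY_BOX = [0.0, 0.0, 1.0, 1.0]  # small placeholder box in xyxy format


def pad_boxes(boxes_per_image):
    """Pad each image's box list to the same length.

    Returns (padded_boxes, valid_counts) so the padded model outputs can
    be trimmed back to the real detections afterwards.
    """
    max_n = max(len(boxes) for boxes in boxes_per_image)
    padded, counts = [], []
    for boxes in boxes_per_image:
        counts.append(len(boxes))
        padded.append(list(boxes) + [DUMMY_BOX] * (max_n - len(boxes)))
    return padded, counts


def unpad(outputs_per_image, counts):
    """Drop the outputs that correspond to dummy boxes."""
    return [out[:n] for out, n in zip(outputs_per_image, counts)]
```

For example, a batch of two images with one and two detections respectively would be padded to two boxes each, and `unpad` would later trim the first image's outputs back down to one.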
Oh, I see. Padding with dummy bboxes is a good solution. However, I'm not sure it will bring any boost in inference speed, but it's worth a try, I think.
Exactly, I'll test it and let you know.
It turned out not to be faster, so I'm not going to use it.