Replies: 9 comments
-
Hello @YiguanLiao, there is a tradeoff between performance and duration. You can increase the slice size, the model confidence threshold, or the postprocess match-metric threshold for faster sliced inference.
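To illustrate the tradeoff, here is a minimal sketch (not SAHI's actual implementation) of how the tile count, and therefore the number of forward passes, shrinks as the slice size grows, assuming SAHI-style overlapping tiling:

```python
import math

def num_slices(dim, slice_size, overlap_ratio):
    """Number of tiles along one image dimension for a given
    slice size and overlap ratio (tiles advance by the stride)."""
    if dim <= slice_size:
        return 1
    step = int(slice_size * (1 - overlap_ratio))
    return math.ceil((dim - slice_size) / step) + 1

# 1920x1080 image, 20% overlap:
# 512px slices  -> 5 x 3 = 15 tiles
# 960px slices  -> 3 x 2 = 6 tiles
```

Going from 512px to 960px slices on a 1920x1080 image cuts the tile count from 15 to 6, i.e. roughly 2.5x fewer forward passes per image.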
-
Thank you @fcakyon. Is it possible for SAHI to use batch inference?
-
Hi, I'm also curious whether the tiles are all run through the model as a batch, or whether each tile is predicted in a loop. This should be possible at least for all tiles that share the same size?
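For what it's worth, tiles of identical size could in principle be stacked into one batch before the forward pass. A rough sketch (plain NumPy, not SAHI's API) of grouping tiles by shape so each group can be batched:

```python
import numpy as np

def group_tiles_for_batching(tiles):
    """Group tiles by (height, width) so each group can be stacked
    into a single batch. Returns {shape: (original_indices, batch_array)}
    so predictions can be mapped back to their source tiles."""
    groups = {}
    for idx, tile in enumerate(tiles):
        groups.setdefault(tile.shape[:2], []).append((idx, tile))
    return {
        shape: ([i for i, _ in items], np.stack([t for _, t in items]))
        for shape, items in groups.items()
    }
```

Edge tiles are often smaller than interior ones, which is why grouping by shape (rather than assuming one uniform size) is needed before stacking.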
-
@YiguanLiao hi, how exactly did you divide your images? You said into 4 parts; is that 4 images of 540x960? Could you share your insights on this, please? Thanks.
-
@bit-scientist I directly use get_sliced_prediction() from the sahi library; you can see the code in this repo.
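For anyone curious what sliced prediction does conceptually, here is a toy sketch of the slice-predict-shift loop. `detect_fn` is a placeholder for any detector that returns boxes in tile-local coordinates; this is an illustration of the idea, not SAHI's code:

```python
def tile_origins(dim, tile, step):
    """Top-left offsets so tiles cover the full dimension, edges included."""
    origins = list(range(0, max(dim - tile, 0) + 1, step))
    if origins[-1] + tile < dim:
        origins.append(dim - tile)
    return origins

def sliced_predict(image_hw, tile, overlap, detect_fn):
    """Run detect_fn on every tile and shift its boxes back to
    full-image coordinates. Boxes are (x1, y1, x2, y2, score)."""
    h, w = image_hw
    step = int(tile * (1 - overlap))
    boxes = []
    for y0 in tile_origins(h, tile, step):
        for x0 in tile_origins(w, tile, step):
            for (x1, y1, x2, y2, s) in detect_fn(x0, y0, tile):
                boxes.append((x1 + x0, y1 + y0, x2 + x0, y2 + y0, s))
    # real sliced inference would now merge overlapping boxes (NMS/NMM)
    return boxes
```

The per-tile loop is exactly why sliced inference is slower than a single full-image pass: each tile costs one forward pass, plus a final merge step.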
-
@YiguanLiao @austinmw
-
@fcakyon Hey, just curious, is this speed improvement planned for the next release or later down the road?
-
This major upgrade is not planned for a near-term release, but other speed improvements are planned for the next few releases. @austinmw
-
Any updates on this?
-
Hi, I divided a 1080x1920 image into four parts, which took 0.0606 seconds. Although the small-object detection results are better, the overall runtime is much slower. Is there a good way to balance the two?
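Assuming the split here means four equal, non-overlapping 540x960 quadrants, a minimal sketch of that division:

```python
import numpy as np

def quadrants(image):
    """Split an image into four equal, non-overlapping quadrants
    (top-left, top-right, bottom-left, bottom-right)."""
    h, w = image.shape[:2]
    hh, hw = h // 2, w // 2
    return [image[:hh, :hw], image[:hh, hw:],
            image[hh:, :hw], image[hh:, hw:]]
```

Note that a non-overlapping split can cut objects in two at tile borders; overlapping slices (as sliced inference typically uses) avoid this at the cost of a few extra tiles.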