Preloading model for faster inference in python script?
Hello there,
We have trained a model with rotated bounding boxes and are getting pretty good results from it. We trained, evaluated, and tested it on our Ubuntu machine using the otdk commands in the terminal.
We now want to use this model in the Python script that controls our robot. The only way we have managed to get this working is by opening a Docker container from our Python script and sending the same otdk infer command using os.system("").
The problem with this method is that we have to put a time.sleep of 10 seconds after this command; otherwise our detections.json file still contains old detections because the inference has not finished yet. The inference itself is pretty fast, but the initial loading of the model takes 7-8 seconds. Because we only run detection on one image every 2-3 minutes, we have to reload the model for every picture and lose a lot of time just waiting for it to load.
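For reference, a minimal sketch of the blocking call we could use instead of the fixed sleep, assuming the command is run with docker exec; the container name, otdk arguments, and paths below are placeholders, not our actual setup:

```python
import subprocess

# Sketch only: container name, otdk arguments, and paths are placeholders.
# subprocess.run blocks until the command exits, so the script continues
# only after inference has finished, instead of sleeping a fixed 10 seconds.
result = subprocess.run(
    ["docker", "exec", "otdk_container",
     "otdk", "infer", "--image", "/data/current.jpg"],
    capture_output=True,
    text=True,
    timeout=60,  # fail loudly instead of hanging forever
)
result.check_returncode()  # raises CalledProcessError if inference failed
# detections.json can now be read safely; it is no longer stale
```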
What would be the best way to get rid of this loading time? Is it possible to preload the model somehow so we can just send a picture to it, or are there other solutions for this?
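To illustrate what we mean by preloading: something like the following long-running worker, where the model is loaded once and new images are picked up from a watch folder. load_model and model.infer are hypothetical names, not the real otdk API:

```python
import json
import time
from pathlib import Path

# Hypothetical sketch: `load_model` and `model.infer` are assumed names,
# not the real otdk Python API (we don't know whether one exists).
from otdk import load_model

WATCH_DIR = Path("/data/incoming")
OUT_FILE = Path("/data/detections.json")

model = load_model("/models/rotated_bbox.model")  # pay the 7-8 s load cost once

while True:
    for image_path in sorted(WATCH_DIR.glob("*.jpg")):
        detections = model.infer(image_path)       # fast: model already in memory
        OUT_FILE.write_text(json.dumps(detections))
        image_path.unlink()                        # remove image once processed
    time.sleep(0.1)                                # avoid busy-waiting
```

The robot script would then just drop an image into the watch folder and read detections.json, without ever paying the model-load cost again.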
Kind regards,
Tjörven