Replies: 5 comments
-
Thanks for reaching out! I've passed this along to the team that owns DeepAR to see if they have any insight.
-
Hello @mstfldmr, DeepAR doesn't support …
-
Hello @jaheba, I already tried it, and it took far too long; I stopped the job after 20 hours. Honestly, I forgot about it after submitting and only noticed it was still running the next day. For comparison, on an inference endpoint of the same instance type I can predict 10k lines in 1 minute.
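For context on the endpoint side of that comparison: a real-time DeepAR endpoint takes a JSON body with an `instances` array and a `configuration` object, so predicting 10k lines means sending payloads of this shape. A minimal sketch, assuming the documented DeepAR inference request format; the series values here are made-up placeholders:

```python
import json

def build_deepar_request(series, num_samples=100):
    """Build the JSON body a DeepAR real-time endpoint expects.

    `series` is a list of dicts with "start" (timestamp string) and
    "target" (list of floats), matching DeepAR's documented input.
    """
    return json.dumps({
        "instances": [
            {"start": s["start"], "target": s["target"]} for s in series
        ],
        "configuration": {
            "num_samples": num_samples,           # Monte Carlo sample count
            "output_types": ["quantiles"],        # ask for quantile forecasts
            "quantiles": ["0.1", "0.5", "0.9"],   # which quantiles to return
        },
    })

# Placeholder single-series payload, not real customer data.
payload = build_deepar_request(
    [{"start": "2020-01-01 00:00:00", "target": [1.0, 2.0, 3.0]}]
)
```

A real invocation would pass `payload` as the body of `invoke_endpoint` on the SageMaker runtime client, with `ContentType="application/json"`.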
-
Hello @mstfldmr, that sounds like something that should not happen. If you feel comfortable sharing some data, could you send me an email at [email protected] so I can look more closely at this specific case?
-
Sorry, it's customer data; I'm not allowed to share it.
-
Describe the bug
I trained a model with the DeepAR algorithm. I can deploy an inference endpoint and get predictions, and batch transform works with 100 lines. However, when I run batch transform with 50k lines, I get an error.
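For a 50k-line JSON Lines input, the batch transform job's splitting and batching settings matter, since they control how the file is chunked into per-request payloads. A hedged sketch of the kind of configuration involved; the job, model, and S3 names are placeholders, while `SplitType`, `BatchStrategy`, and `MaxPayloadInMB` are real `CreateTransformJob` fields:

```python
# Placeholder CreateTransformJob parameters for a large JSON Lines input.
# All names and S3 paths below are illustrative, not from the reporter's setup.
transform_job = {
    "TransformJobName": "deepar-batch-example",      # placeholder job name
    "ModelName": "my-deepar-model",                  # placeholder model name
    "TransformInput": {
        "DataSource": {
            "S3DataSource": {
                "S3DataType": "S3Prefix",
                "S3Uri": "s3://my-bucket/input/test.jsonl",  # placeholder path
            }
        },
        "ContentType": "application/jsonlines",
        "SplitType": "Line",          # treat each JSON line as one record
    },
    "TransformOutput": {"S3OutputPath": "s3://my-bucket/output/"},
    "TransformResources": {"InstanceType": "ml.m5.xlarge", "InstanceCount": 1},
    "BatchStrategy": "MultiRecord",   # pack multiple lines into one request
    "MaxPayloadInMB": 1,              # cap the size of each mini-batch request
}
# A real call would be:
#   boto3.client("sagemaker").create_transform_job(**transform_job)
```

Without `SplitType="Line"`, SageMaker would send the whole file as a single request, which is one plausible way a job that works at 100 lines could fail or stall at 50k.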
To reproduce
Screenshots or logs
The log ends like:
Additional context
Test data looks like: