Runtime tensor error when trying to convert cpu model to tflite #5749
The WSL 2 instance has a default Ubuntu 24 installation.
Hi @itzjac, could you please share the complete script used for model conversion? Providing the full, standalone code would be very helpful in understanding and potentially reproducing the issue. Thank you!
Hi @kuaashish! Thanks for the follow-up.
By changing the backend from CPU to GPU, I can produce the .tflite file and run it on device, though it is very slow. I would expect the CPU model to work normally, as that is the general recommendation, correct?
Hi @itzjac, could you please confirm whether you still require support for this issue or whether it has been resolved on your end? If not, kindly provide the latest complete error log for further assistance. Thank you!
Hi @kuaashish, the issue is still present; the latest error is the one shown at the beginning of the thread. No progress whatsoever in fixing this issue. Regards
Have I written custom code (as opposed to using a stock example script provided in MediaPipe)
None
OS Platform and Distribution
WSL2
MediaPipe Tasks SDK version
0.10.18
Task name (e.g. Image classification, Gesture recognition etc.)
convert model
Programming Language and version (e.g. C++, Python, Java)
Python
Describe the actual behavior
Runtime error when generating the CPU model
Describe the expected behaviour
The model converts successfully and a .tflite file is generated
Standalone code/steps you may have used to try to get what you need
Using the provided LLM inference example as found in github (text-to-text)
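For reference, a minimal conversion script in the shape of MediaPipe's LLM inference conversion example might look like the sketch below. All paths and the `model_type` value are placeholders, not the reporter's actual values; the relevant knob is `backend`, which this thread reports failing for `"cpu"` while working for `"gpu"`:

```python
def convert_to_tflite(backend: str = "cpu") -> None:
    """Convert an LLM checkpoint to .tflite with MediaPipe's genai converter.

    All paths and model_type below are placeholders; adjust them for
    your own checkpoint and tokenizer.
    """
    # Imported lazily so the sketch can be read without MediaPipe installed.
    from mediapipe.tasks.python.genai import converter

    config = converter.ConversionConfig(
        input_ckpt="/path/to/checkpoint/",        # placeholder checkpoint dir
        ckpt_format="safetensors",                # or "pytorch", per your weights
        model_type="GEMMA_2B",                    # placeholder model type
        backend=backend,                          # "cpu" fails here; "gpu" works
        output_dir="/tmp/intermediate/",
        combine_file_only=False,
        vocab_model_file="/path/to/tokenizer.model",   # placeholder
        output_tflite_file="/tmp/model_cpu.tflite",
    )
    converter.convert_checkpoint(config)
```

With a script of this shape, reproducing the report is just a matter of calling `convert_to_tflite("cpu")` versus `convert_to_tflite("gpu")` on the same checkpoint.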
Other info / Complete Logs
Running the conversion with the GPU backend works and the model loads on device (though it is very slow). The CPU backend stops the process with the runtime error: RuntimeError: INTERNAL: ; RET_CHECK failure (external/odml/odml/infra/genai/inference/utils/xnn_utils/model_ckpt_util.cc:116) tensor
I tried different Ubuntu versions; all produced the same runtime error with the CPU backend and worked fine with the GPU backend.