Problem description

If the user goes to a screen where useLLM is being used but has not downloaded the model before, the hook starts downloading it. As some models can be quite heavy, the user likely isn't going to stick around and wait for the whole download to finish, so they might switch to another app. They might even leave the phone on the table, go do something else, and let the screen turn off.

I'm experiencing some issues with this: if the user comes back to the app to check the status, the download is stuck at 12%, for example. Either the download stops at some point, or the progress simply isn't updated anymore; it might be the latter.

Other times the model has already been downloaded, but isModelReady is false and error is "Model and tokenizer already loaded".
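For reference, the screen consumes the hook roughly like this; the model/tokenizer sources are placeholders, and every option or field name other than isModelReady and error (which are quoted above) is an assumption about the API:

```tsx
import React from 'react';
import { Text, View } from 'react-native';
import { useLLM } from 'react-native-executorch';

// Placeholder sources – any sufficiently large model reproduces the problem.
const MODEL_SOURCE = 'https://example.com/model.pte';
const TOKENIZER_SOURCE = 'https://example.com/tokenizer.bin';

export function ChatScreen() {
  // Option and field names other than isModelReady / error are assumptions.
  const llm = useLLM({
    modelSource: MODEL_SOURCE,
    tokenizerSource: TOKENIZER_SOURCE,
  });

  if (!llm.isModelReady) {
    return (
      <View>
        {/* After backgrounding the app mid-download, this value appears
            frozen, e.g. stuck at 12%. Scale (0–1) is assumed. */}
        <Text>Downloading model… {Math.round(llm.downloadProgress * 100)}%</Text>
        {/* Occasionally shows "Model and tokenizer already loaded" even
            though the model was downloaded earlier and isModelReady is
            still false. */}
        {llm.error ? <Text>{String(llm.error)}</Text> : null}
      </View>
    );
  }

  return <Text>Model ready.</Text>;
}
```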
Proposed solution
To illustrate the current implementation in the hook:
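Roughly, as I understand it, the flow inside the hook looks like the sketch below; the helper names, signatures and caching behaviour are my assumptions, not the library's actual source. The point is that download progress only reaches JS through a callback, and that loading a second time surfaces the "already loaded" error:

```tsx
import { useEffect, useState } from 'react';

// Stand-ins for the native module calls – names, signatures and behaviour are guesses.
async function downloadFile(
  url: string,
  onProgress: (progress: number) => void
): Promise<string> {
  onProgress(1); // pretend the file is already cached locally
  return `/models/${url.split('/').pop()}`;
}
async function loadModelAndTokenizer(
  modelPath: string,
  tokenizerPath: string
): Promise<void> {
  // no-op in this sketch; the real call goes through the native runtime
}

export function useLLMSketch(modelSource: string, tokenizerSource: string) {
  const [isModelReady, setIsModelReady] = useState(false);
  const [downloadProgress, setDownloadProgress] = useState(0);
  const [error, setError] = useState<string | null>(null);

  useEffect(() => {
    let cancelled = false;

    (async () => {
      try {
        // 1. Download model and tokenizer if they are not cached yet. If the
        //    app is backgrounded here, the OS can suspend the transfer and the
        //    progress callback stops firing – consistent with the "stuck at
        //    12%" state described above.
        const modelPath = await downloadFile(modelSource, (p) => {
          if (!cancelled) setDownloadProgress(p);
        });
        const tokenizerPath = await downloadFile(tokenizerSource, () => {});

        // 2. Load both into the native runtime. If this step runs a second
        //    time (remount, retry after coming back to the app) while the
        //    runtime already holds them, it rejects with "Model and tokenizer
        //    already loaded", so error is set while isModelReady stays false.
        await loadModelAndTokenizer(modelPath, tokenizerPath);
        if (!cancelled) setIsModelReady(true);
      } catch (e) {
        if (!cancelled) setError(e instanceof Error ? e.message : String(e));
      }
    })();

    return () => {
      cancelled = true;
    };
  }, [modelSource, tokenizerSource]);

  return { isModelReady, downloadProgress, error };
}
```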
Download issue:
1. Open a screen where useLLM is used
2. Go into another app for a while
3. Come back to the app
The loaded issue is a bit trickier, because it happened after I had already loaded the model and had a few conversations with it. After many restarts, it tried to start the download again but got stuck at 0%. After further restarts, it re-downloaded the whole (previously downloaded) model.
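To tell apart "the download stops" from "the progress just isn't reported anymore", one thing that could help is logging the hook's state every time the app returns to the foreground. Again, downloadProgress is an assumed field name; isModelReady and error are the values mentioned above:

```tsx
import { useEffect } from 'react';
import { AppState } from 'react-native';

type LLMState = {
  isModelReady: boolean;
  downloadProgress: number; // assumed field name
  error: unknown;
};

// Log the hook's state whenever the app comes back to the foreground.
// If downloadProgress keeps growing across foreground events, only the UI
// updates were stalled; if it stays at the same value, the download itself
// was suspended while the app was in the background.
export function useForegroundLLMLogger(llm: LLMState) {
  useEffect(() => {
    const subscription = AppState.addEventListener('change', (state) => {
      if (state === 'active') {
        console.log(
          `[useLLM] progress=${llm.downloadProgress} ready=${llm.isModelReady} error=${String(llm.error ?? '')}`
        );
      }
    });
    return () => subscription.remove();
  }, [llm.downloadProgress, llm.isModelReady, llm.error]);
}
```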
Alternative solutions
No response
Benefits to React Native ExecuTorch
Usability. Users are likely to get on with their lives while a model downloads.
Additional context
No response