
[Request for New Feature] Introduce Local Model Like faster-whisper #1

Open
Frank-Z-Chen opened this issue Apr 27, 2024 · 1 comment

Comments

@Frank-Z-Chen

Hi Dhruvyad,

I love how lightweight uttertype is — it's very handy. Instead of calling the Whisper API, do you plan on introducing a local model like faster-whisper for faster processing?

Best,
Frank

@dhruvyad
Owner

Hey @Frank-Z-Chen,

Thanks for the suggestion. I've added a sample implementation with a local MLX-based Whisper model for macOS. It's ~15 lines of code, and the approach should be similar for any local library you wish to use.

You can copy this and swap in any local Whisper library or transcription service you prefer, then simply change two lines: the transcriber used in main.py and its corresponding import.
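For reference, a local transcriber along these lines might look like the sketch below. This is not uttertype's actual code — the class name is hypothetical, and the `mlx_whisper` package and its `transcribe(path, path_or_hf_repo=...)` call are assumptions about the MLX Whisper library; check the library you actually install for the exact API.

```python
# Hypothetical sketch of a local MLX-based transcriber.
# Assumes the `mlx_whisper` package; names here are illustrative,
# not uttertype's actual implementation.

class LocalMLXWhisperTranscriber:
    """Transcribe audio locally instead of calling the Whisper API."""

    def __init__(self, model_name: str = "mlx-community/whisper-base-mlx"):
        # Model identifier passed to the local backend (assumed HF repo name).
        self.model_name = model_name

    def transcribe(self, audio_path: str) -> str:
        # Lazy import so the dependency is only required when this
        # backend is actually used.
        import mlx_whisper
        result = mlx_whisper.transcribe(
            audio_path, path_or_hf_repo=self.model_name
        )
        return result["text"]
```

The two-line swap in main.py would then amount to replacing the API transcriber's import with this class's import, and constructing `LocalMLXWhisperTranscriber()` where the API transcriber was constructed.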

Let me know if you have any questions.
