
Slower performance on Whisper model #269

Closed · Answered by jjmaldonis
uanandaraja asked this question in General help

Hey all, early last week we improved the underlying systems that run Whisper to support the growing demand from our users. You should have seen a significant drop in errors and in the latency of API requests to Whisper.

We will continue to improve the underlying infrastructure over the coming months, and may introduce additional rate limits if Whisper latency remains higher than desired.

Many of our users rely on Deepgram to serve voice-to-text solutions to their own customer base. When Whisper latency is high (and requests take longer than normal to complete), those customers can see a poorer user experience because the audio files they upload seem to "hang" for …
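
As a practical note for anyone affected: rather than letting uploads appear to hang, the client can cap how long it waits per request and retry with backoff. Below is a minimal sketch in Python using `requests`; the `model=whisper` query parameter, the placeholder API key, and the example audio URL are assumptions for illustration, not details confirmed in this thread.

```python
import time
import requests

# Placeholders -- substitute your own key and audio URL.
DEEPGRAM_API_KEY = "YOUR_API_KEY"
AUDIO_URL = "https://example.com/meeting.wav"

def transcribe_with_retry(max_attempts: int = 3, timeout_s: int = 120) -> dict:
    """POST a pre-recorded transcription request, failing fast on slow
    responses and retrying with exponential backoff instead of hanging."""
    for attempt in range(1, max_attempts + 1):
        try:
            resp = requests.post(
                "https://api.deepgram.com/v1/listen",
                params={"model": "whisper"},  # model name is an assumption; use whatever your plan offers
                headers={"Authorization": f"Token {DEEPGRAM_API_KEY}"},
                json={"url": AUDIO_URL},
                timeout=timeout_s,  # cap how long the client waits per request
            )
            # Retry only on rate limits and transient server errors.
            if resp.status_code not in (429, 500, 502, 503, 504):
                resp.raise_for_status()  # surface other 4xx errors immediately
                return resp.json()
        except requests.Timeout:
            pass  # fall through to the backoff below
        if attempt == max_attempts:
            raise RuntimeError(f"Transcription still failing after {max_attempts} attempts")
        time.sleep(2 ** attempt)  # 2s, 4s, 8s, ...

result = transcribe_with_retry()
print(result)
```

Exponential backoff keeps retries from piling additional load onto an already-slow service, and the explicit timeout means end users see a clear error or retry state instead of an upload that never completes.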
