Higher latency (TTFB) even with self-hosted Deepgram STT model #1038
-
We're experiencing 1.38 s p80 latency (TTFB) with our self-hosted Deepgram setup for speech-to-text via WebSocket streaming. Below are the configurations we're using: interim_results: True. TTFB: the time from when speech starts to when we receive the transcript. Can someone please help here?
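For reference, the TTFB metric described above can be computed per utterance as the gap between speech onset and the first transcript message, then aggregated into a p80. A minimal sketch of that measurement (hypothetical helper names and sample values, not Deepgram SDK code):

```python
def ttfb_seconds(speech_start: float, first_transcript: float) -> float:
    """TTFB for one utterance: speech onset -> first transcript received."""
    return first_transcript - speech_start

def percentile(values, pct):
    """Nearest-rank percentile, e.g. pct=80 for the p80 latency."""
    ordered = sorted(values)
    rank = max(1, round(pct / 100 * len(ordered)))
    return ordered[rank - 1]

# Hypothetical TTFB samples (seconds) collected across streaming sessions.
samples = [0.42, 0.55, 0.61, 0.90, 1.38]
p80 = percentile(samples, 80)
```

Timestamping speech onset on the client (e.g. via a VAD) and the first transcript message on arrival, rather than relying on server logs, keeps the measurement end-to-end and comparable across deployments.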
Replies: 4 comments
-
Thanks for asking your question. Please be sure to reply with as much detail as possible so the community can assist you efficiently.
-
Hey there! It looks like you haven't connected your GitHub account to your Deepgram account. You can do this at https://community.deepgram.com - being verified through this process will allow our team to help you in a much more streamlined fashion.
-
It looks like we're missing some important information to help debug your issue. Would you mind providing us with the following details in a reply?
-
When you say self-hosted Deepgram, do you mean you have a license agreement for running Deepgram on your own infrastructure? If that is the case, you should have a direct contact with an account manager who can arrange for support.