Output stream issue #1
I can confirm this. The audio chunks are not playing as expected.
Interesting. If either @RazaProdigy or @WikiLucas00 can spot where the fix is, I'm happy to merge a PR; I'll try to take a look soon if not.
Yeah, I'm seeing the same thing.
In case it's helpful, this Python code that I wrote does stream audio properly on my system: https://github.com/mdagost/openai-realtime-streamlit
This change helped me fix the issue described above: #5
@tanvithakur94 Still the same issue after increasing the queue max size ☹
@RazaProdigy If you use AirPods or another input device with good noise cancellation, it should work. For better performance, you can mute the input volume after you are done talking.
Yeah, in a noisy environment the interrupt mode is a little flaky. There are some threshold settings you can play with; I don't think these are fully exposed in the API. I don't think the queue size has much to do with this.
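For reference, the server-side voice-activity-detection thresholds mentioned above live in the Realtime API's `session.update` event. A minimal sketch of the payload, assuming the client lets you send a raw session update (whether this particular client forwards these fields is an assumption):

```python
# Hypothetical session.update payload for tuning server VAD in noisy rooms.
# Field names follow the OpenAI Realtime API; whether this client exposes
# a way to send them is an assumption.
session_update = {
    "type": "session.update",
    "session": {
        "turn_detection": {
            "type": "server_vad",
            "threshold": 0.8,            # raise from the 0.5 default so background noise is ignored
            "prefix_padding_ms": 300,    # audio retained before detected speech
            "silence_duration_ms": 700,  # how long silence must last to end a turn
        }
    },
}
```

Raising `threshold` makes the model less likely to treat room noise as an interruption, at the cost of occasionally clipping quiet speech.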
This happens when the mic is muted after input. The experience is similar to when the model picks up its own output, but this issue is NOT related to that.
Hello! I tried this project but noticed some problems in the response stream. It seems the audio chunks are being played as soon as available, not waiting for the current one to finish playing.
Here's a video of the answer I received (using ./examples/streaming_cli.py) when I asked it to tell a story: openai_realtime_client.mp4
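The symptom described above (chunks starting as soon as they arrive, overlapping the chunk still playing) is commonly fixed by routing all chunks through a queue consumed by a single playback worker that blocks until each chunk finishes. A minimal sketch of that pattern, simulating playback with a sleep since the real client would write each chunk to an audio output device (the function and variable names here are illustrative, not from the project):

```python
import queue
import threading
import time

def playback_worker(chunks: "queue.Queue[bytes | None]", played: list) -> None:
    """Consume audio chunks one at a time, blocking until each finishes.

    Playback is simulated with a short sleep; a real implementation would
    do a blocking write of the chunk to the audio device before dequeuing
    the next one, which guarantees chunks never overlap.
    """
    while True:
        chunk = chunks.get()
        if chunk is None:                  # sentinel: end of stream
            break
        time.sleep(len(chunk) / 100_000)   # stand-in for blocking playback
        played.append(chunk)

chunks: "queue.Queue" = queue.Queue()
played: list = []
worker = threading.Thread(target=playback_worker, args=(chunks, played))
worker.start()

# Producer side: chunks arrive from the network faster than they play back.
# The queue preserves arrival order, and the single worker serializes playback.
for i in range(3):
    chunks.put(f"chunk-{i}".encode())
chunks.put(None)
worker.join()
print(played)  # chunks come out in arrival order, one at a time
```

The key design point is that the network callback only enqueues; it never touches the audio device, so a fast producer cannot start a new chunk while the previous one is still playing.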