fix(deps): update dependency openai to v4.83.0 #242
Conversation
openai debug - [puLL-Merge] - openai/[email protected]

Diff

diff --git .release-please-manifest.json .release-please-manifest.json
index b1ab5c7b9..b2ee58e08 100644
--- .release-please-manifest.json
+++ .release-please-manifest.json
@@ -1,3 +1,3 @@
{
- ".": "4.79.4"
+ ".": "4.82.0"
}
diff --git .stats.yml .stats.yml
index 9600edae3..e49b5c56e 100644
--- .stats.yml
+++ .stats.yml
@@ -1,2 +1,2 @@
configured_endpoints: 69
-openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai-b5b0e2c794b012919701c3fd43286af10fa25d33ceb8a881bec2636028f446e0.yml
+openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai-6204952a29973265b9c0d66fc67ffaf53c6a90ae4d75cdacf9d147676f5274c9.yml
diff --git CHANGELOG.md CHANGELOG.md
index 4254a9b8f..7565cb01a 100644
--- CHANGELOG.md
+++ CHANGELOG.md
@@ -1,5 +1,50 @@
# Changelog
+## 4.82.0 (2025-01-31)
+
+Full Changelog: [v4.81.0...v4.82.0](https://github.com/openai/openai-node/compare/v4.81.0...v4.82.0)
+
+### Features
+
+* **api:** add o3-mini ([#1295](https://github.com/openai/openai-node/issues/1295)) ([378e2f7](https://github.com/openai/openai-node/commit/378e2f7af62c570adb4c7644a4d49576b698de41))
+
+
+### Bug Fixes
+
+* **examples/realtime:** remove duplicate `session.update` call ([#1293](https://github.com/openai/openai-node/issues/1293)) ([ad800b4](https://github.com/openai/openai-node/commit/ad800b4f9410c6838994c24a3386ea708717f72b))
+* **types:** correct metadata type + other fixes ([378e2f7](https://github.com/openai/openai-node/commit/378e2f7af62c570adb4c7644a4d49576b698de41))
+
+## 4.81.0 (2025-01-29)
+
+Full Changelog: [v4.80.1...v4.81.0](https://github.com/openai/openai-node/compare/v4.80.1...v4.81.0)
+
+### Features
+
+* **azure:** Realtime API support ([#1287](https://github.com/openai/openai-node/issues/1287)) ([fe090c0](https://github.com/openai/openai-node/commit/fe090c0a57570217eb0b431e2cce40bf61de2b75))
+
+## 4.80.1 (2025-01-24)
+
+Full Changelog: [v4.80.0...v4.80.1](https://github.com/openai/openai-node/compare/v4.80.0...v4.80.1)
+
+### Bug Fixes
+
+* **azure:** include retry count header ([3e0ba40](https://github.com/openai/openai-node/commit/3e0ba409e57ce276fb1f95cd11c801e4ccaad572))
+
+
+### Documentation
+
+* fix typo, "zodFunctionTool" -> "zodFunction" ([#1128](https://github.com/openai/openai-node/issues/1128)) ([b7ab6bb](https://github.com/openai/openai-node/commit/b7ab6bb304973ade94830f37eb646e800226d5ef))
+* **helpers:** fix type annotation ([fc019df](https://github.com/openai/openai-node/commit/fc019df1d9cc276e8f8e689742853a09aa94991a))
+* **readme:** fix realtime errors docs link ([#1286](https://github.com/openai/openai-node/issues/1286)) ([d1d50c8](https://github.com/openai/openai-node/commit/d1d50c897c18cefea964e8057fe1acfd766ae2bf))
+
+## 4.80.0 (2025-01-22)
+
+Full Changelog: [v4.79.4...v4.80.0](https://github.com/openai/openai-node/compare/v4.79.4...v4.80.0)
+
+### Features
+
+* **api:** update enum values, comments, and examples ([#1280](https://github.com/openai/openai-node/issues/1280)) ([d38f2c2](https://github.com/openai/openai-node/commit/d38f2c2648b6990f217c3c7d83ca31f3739641d3))
+
## 4.79.4 (2025-01-21)
Full Changelog: [v4.79.3...v4.79.4](https://github.com/openai/openai-node/compare/v4.79.3...v4.79.4)
diff --git README.md README.md
index 3bd386e99..a1f4bf760 100644
--- README.md
+++ README.md
@@ -157,7 +157,7 @@ A full example can be found [here](https://github.com/openai/openai-node/blob/ma
### Realtime error handling
-When an error is encountered, either on the client side or returned from the server through the [`error` event](https://platform.openai.com/docs/guides/realtime/realtime-api-beta#handling-errors), the `error` event listener will be fired. However, if you haven't registered an `error` event listener then an `unhandled Promise rejection` error will be thrown.
+When an error is encountered, either on the client side or returned from the server through the [`error` event](https://platform.openai.com/docs/guides/realtime-model-capabilities#error-handling), the `error` event listener will be fired. However, if you haven't registered an `error` event listener then an `unhandled Promise rejection` error will be thrown.
It is **highly recommended** that you register an `error` event listener and handle errors appropriately, as typically the underlying connection is still usable.
@@ -499,7 +499,7 @@ const credential = new DefaultAzureCredential();
const scope = 'https://cognitiveservices.azure.com/.default';
const azureADTokenProvider = getBearerTokenProvider(credential, scope);
-const openai = new AzureOpenAI({ azureADTokenProvider });
+const openai = new AzureOpenAI({ azureADTokenProvider, apiVersion: "<The API version, e.g. 2024-10-01-preview>" });
const result = await openai.chat.completions.create({
model: 'gpt-4o',
@@ -509,6 +509,26 @@ const result = await openai.chat.completions.create({
console.log(result.choices[0]!.message?.content);
\`\`\`
+### Realtime API
+This SDK provides real-time streaming capabilities for Azure OpenAI through the `OpenAIRealtimeWS` and `OpenAIRealtimeWebSocket` clients described previously.
+
+To utilize the real-time features, begin by creating a fully configured `AzureOpenAI` client and passing it into either `OpenAIRealtimeWS.azure` or `OpenAIRealtimeWebSocket.azure`. For example:
+
+```ts
+const cred = new DefaultAzureCredential();
+const scope = 'https://cognitiveservices.azure.com/.default';
+const deploymentName = 'gpt-4o-realtime-preview-1001';
+const azureADTokenProvider = getBearerTokenProvider(cred, scope);
+const client = new AzureOpenAI({
+ azureADTokenProvider,
+ apiVersion: '2024-10-01-preview',
+ deployment: deploymentName,
+});
+const rt = await OpenAIRealtimeWS.azure(client);
+```
+
+Once the instance has been created, you can then begin sending requests and receiving streaming responses in real time.
+
### Retries
Certain errors will be automatically retried 2 times by default, with a short exponential backoff.
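The retry behavior the README describes (2 retries by default, with a short exponential backoff) might look roughly like this hypothetical sketch; the function name, signature, and delay schedule are illustrative, not the SDK's actual implementation:

```typescript
// Hypothetical sketch of "retry up to N times with exponential backoff".
// Not the openai SDK's real code; delays here are illustrative.
async function withRetries<T>(
  fn: () => Promise<T>,
  maxRetries = 2,
  baseDelayMs = 500,
): Promise<T> {
  let lastErr: unknown;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastErr = err;
      if (attempt === maxRetries) break;
      // exponential backoff: baseDelayMs, 2*baseDelayMs, 4*baseDelayMs, ...
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** attempt));
    }
  }
  throw lastErr;
}
```

With `maxRetries = 2`, a call is attempted at most three times before the last error is rethrown.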
diff --git api.md api.md
index 33ab95ef6..516188b20 100644
--- api.md
+++ api.md
@@ -5,6 +5,7 @@ Types:
- <code><a href="./src/resources/shared.ts">ErrorObject</a></code>
- <code><a href="./src/resources/shared.ts">FunctionDefinition</a></code>
- <code><a href="./src/resources/shared.ts">FunctionParameters</a></code>
+- <code><a href="./src/resources/shared.ts">Metadata</a></code>
- <code><a href="./src/resources/shared.ts">ResponseFormatJSONObject</a></code>
- <code><a href="./src/resources/shared.ts">ResponseFormatJSONSchema</a></code>
- <code><a href="./src/resources/shared.ts">ResponseFormatText</a></code>
diff --git examples/azure.ts examples/azure/chat.ts
similarity index 91%
rename from examples/azure.ts
rename to examples/azure/chat.ts
index 5fe1718fa..46df820f8 100755
--- examples/azure.ts
+++ examples/azure/chat.ts
@@ -2,6 +2,7 @@
import { AzureOpenAI } from 'openai';
import { getBearerTokenProvider, DefaultAzureCredential } from '@azure/identity';
+import 'dotenv/config';
// Corresponds to your Model deployment within your OpenAI resource, e.g. gpt-4-1106-preview
// Navigate to the Azure OpenAI Studio to deploy a model.
@@ -13,7 +14,7 @@ const azureADTokenProvider = getBearerTokenProvider(credential, scope);
// Make sure to set AZURE_OPENAI_ENDPOINT with the endpoint of your Azure resource.
// You can find it in the Azure Portal.
-const openai = new AzureOpenAI({ azureADTokenProvider });
+const openai = new AzureOpenAI({ azureADTokenProvider, apiVersion: '2024-10-01-preview' });
async function main() {
console.log('Non-streaming:');
diff --git a/examples/azure/realtime/websocket.ts b/examples/azure/realtime/websocket.ts
new file mode 100644
index 000000000..bec74e654
--- /dev/null
+++ examples/azure/realtime/websocket.ts
@@ -0,0 +1,60 @@
+import { OpenAIRealtimeWebSocket } from 'openai/beta/realtime/websocket';
+import { AzureOpenAI } from 'openai';
+import { DefaultAzureCredential, getBearerTokenProvider } from '@azure/identity';
+import 'dotenv/config';
+
+async function main() {
+ const cred = new DefaultAzureCredential();
+ const scope = 'https://cognitiveservices.azure.com/.default';
+ const deploymentName = 'gpt-4o-realtime-preview-1001';
+ const azureADTokenProvider = getBearerTokenProvider(cred, scope);
+ const client = new AzureOpenAI({
+ azureADTokenProvider,
+ apiVersion: '2024-10-01-preview',
+ deployment: deploymentName,
+ });
+ const rt = await OpenAIRealtimeWebSocket.azure(client);
+
+ // access the underlying `ws.WebSocket` instance
+ rt.socket.addEventListener('open', () => {
+ console.log('Connection opened!');
+ rt.send({
+ type: 'session.update',
+ session: {
+ modalities: ['text'],
+ model: 'gpt-4o-realtime-preview',
+ },
+ });
+
+ rt.send({
+ type: 'conversation.item.create',
+ item: {
+ type: 'message',
+ role: 'user',
+ content: [{ type: 'input_text', text: 'Say a couple paragraphs!' }],
+ },
+ });
+
+ rt.send({ type: 'response.create' });
+ });
+
+ rt.on('error', (err) => {
+ // in a real world scenario this should be logged somewhere as you
+ // likely want to continue processing events regardless of any errors
+ throw err;
+ });
+
+ rt.on('session.created', (event) => {
+ console.log('session created!', event.session);
+ console.log();
+ });
+
+ rt.on('response.text.delta', (event) => process.stdout.write(event.delta));
+ rt.on('response.text.done', () => console.log());
+
+ rt.on('response.done', () => rt.close());
+
+ rt.socket.addEventListener('close', () => console.log('\nConnection closed!'));
+}
+
+main();
diff --git a/examples/azure/realtime/ws.ts b/examples/azure/realtime/ws.ts
new file mode 100644
index 000000000..6ab7b742a
--- /dev/null
+++ examples/azure/realtime/ws.ts
@@ -0,0 +1,60 @@
+import { DefaultAzureCredential, getBearerTokenProvider } from '@azure/identity';
+import { OpenAIRealtimeWS } from 'openai/beta/realtime/ws';
+import { AzureOpenAI } from 'openai';
+import 'dotenv/config';
+
+async function main() {
+ const cred = new DefaultAzureCredential();
+ const scope = 'https://cognitiveservices.azure.com/.default';
+ const deploymentName = 'gpt-4o-realtime-preview-1001';
+ const azureADTokenProvider = getBearerTokenProvider(cred, scope);
+ const client = new AzureOpenAI({
+ azureADTokenProvider,
+ apiVersion: '2024-10-01-preview',
+ deployment: deploymentName,
+ });
+ const rt = await OpenAIRealtimeWS.azure(client);
+
+ // access the underlying `ws.WebSocket` instance
+ rt.socket.on('open', () => {
+ console.log('Connection opened!');
+ rt.send({
+ type: 'session.update',
+ session: {
+ modalities: ['text'],
+ model: 'gpt-4o-realtime-preview',
+ },
+ });
+
+ rt.send({
+ type: 'conversation.item.create',
+ item: {
+ type: 'message',
+ role: 'user',
+ content: [{ type: 'input_text', text: 'Say a couple paragraphs!' }],
+ },
+ });
+
+ rt.send({ type: 'response.create' });
+ });
+
+ rt.on('error', (err) => {
+ // in a real world scenario this should be logged somewhere as you
+ // likely want to continue processing events regardless of any errors
+ throw err;
+ });
+
+ rt.on('session.created', (event) => {
+ console.log('session created!', event.session);
+ console.log();
+ });
+
+ rt.on('response.text.delta', (event) => process.stdout.write(event.delta));
+ rt.on('response.text.done', () => console.log());
+
+ rt.on('response.done', () => rt.close());
+
+ rt.socket.on('close', () => console.log('\nConnection closed!'));
+}
+
+main();
diff --git examples/package.json examples/package.json
index b8c34ac45..70ec2c523 100644
--- examples/package.json
+++ examples/package.json
@@ -7,6 +7,7 @@
"private": true,
"dependencies": {
"@azure/identity": "^4.2.0",
+ "dotenv": "^16.4.7",
"express": "^4.18.2",
"next": "^14.1.1",
"openai": "file:..",
diff --git examples/realtime/ws.ts examples/realtime/ws.ts
index 4bbe85e5d..08c6fbcb6 100644
--- examples/realtime/ws.ts
+++ examples/realtime/ws.ts
@@ -6,13 +6,6 @@ async function main() {
// access the underlying `ws.WebSocket` instance
rt.socket.on('open', () => {
console.log('Connection opened!');
- rt.send({
- type: 'session.update',
- session: {
- modalities: ['foo'] as any,
- model: 'gpt-4o-realtime-preview',
- },
- });
rt.send({
type: 'session.update',
session: {
diff --git helpers.md helpers.md
index abf980c82..16bc1f277 100644
--- helpers.md
+++ helpers.md
@@ -49,7 +49,7 @@ if (message?.parsed) {
The `.parse()` method will also automatically parse `function` tool calls if:
-- You use the `zodFunctionTool()` helper method
+- You use the `zodFunction()` helper method
- You mark your tool schema with `"strict": True`
For example:
@@ -226,7 +226,7 @@ on in the documentation page [Message](https://platform.openai.com/docs/api-refe
```ts
.on('textCreated', (content: Text) => ...)
-.on('textDelta', (delta: RunStepDelta, snapshot: Text) => ...)
+.on('textDelta', (delta: TextDelta, snapshot: Text) => ...)
.on('textDone', (content: Text, snapshot: Message) => ...)
```

[Diff truncated: the remaining hunks touch jsr.json, the beta realtime clients (`OpenAIRealtimeError`, `buildRealtimeURL`, `OpenAIRealtimeWebSocket`, `OpenAIRealtimeWS`, and a new `getAzureHeaders` helper), `src/resources/batches.ts`, the beta realtime session types (`SessionUpdateEvent`, `SessionCreateParams`, `SessionCreateResponse`), the beta threads/messages/runs resources, `src/resources/chat/completions` (including `ChatModel` and `ChatCompletionCreateParamsBase`), `src/resources/embeddings.ts`, `src/resources/shared.ts` (new `Metadata` type), and tests under `tests/api-resources/` and `tests/lib/azure.test.ts`.]
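The README guidance quoted in the diff above — register an `error` event listener or an unhandled promise rejection is thrown — can be sketched without the actual SDK. `FakeRealtime` below is a hypothetical stand-in for `OpenAIRealtimeWS`, written only to illustrate the dispatch behavior the docs describe:

```typescript
// Minimal sketch (not the openai SDK): why registering an `error`
// listener matters. `FakeRealtime` is a hypothetical stand-in.
type Handler = (err: Error) => void;

class FakeRealtime {
  private handlers: Record<string, Handler[]> = {};

  on(event: string, fn: Handler) {
    (this.handlers[event] ??= []).push(fn);
  }

  // Mirrors the documented behavior: dispatch to `error` listeners if any
  // are registered; otherwise the error would surface as an unhandled
  // promise rejection (modeled here as the 'unhandled' return value).
  emitError(err: Error): 'handled' | 'unhandled' {
    const fns = this.handlers['error'] ?? [];
    if (fns.length === 0) return 'unhandled';
    for (const fn of fns) fn(err);
    return 'handled';
  }
}

const rt = new FakeRealtime();
const seen: string[] = [];
rt.on('error', (err) => {
  // log and carry on -- the underlying connection is usually still usable
  seen.push(err.message);
});
console.log(rt.emitError(new Error('session.update failed'))); // → handled
```

The new Azure realtime examples in this diff do the opposite (`throw err`) purely for brevity, as their own comments note.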
+
+
+### Documentation
+
+* fix typo, "zodFunctionTool" -> "zodFunction" ([#1128](https://github.com/openai/openai-node/issues/1128)) ([b7ab6bb](https://github.com/openai/openai-node/commit/b7ab6bb304973ade94830f37eb646e800226d5ef))
+* **helpers:** fix type annotation ([fc019df](https://github.com/openai/openai-node/commit/fc019df1d9cc276e8f8e689742853a09aa94991a))
+* **readme:** fix realtime errors docs link ([#1286](https://github.com/openai/openai-node/issues/1286)) ([d1d50c8](https://github.com/openai/openai-node/commit/d1d50c897c18cefea964e8057fe1acfd766ae2bf))
+
+## 4.80.0 (2025-01-22)
+
+Full Changelog: [v4.79.4...v4.80.0](https://github.com/openai/openai-node/compare/v4.79.4...v4.80.0)
+
+### Features
+
+* **api:** update enum values, comments, and examples ([#1280](https://github.com/openai/openai-node/issues/1280)) ([d38f2c2](https://github.com/openai/openai-node/commit/d38f2c2648b6990f217c3c7d83ca31f3739641d3))
+
## 4.79.4 (2025-01-21)
Full Changelog: [v4.79.3...v4.79.4](https://github.com/openai/openai-node/compare/v4.79.3...v4.79.4)
diff --git README.md README.md
index 3bd386e99..a1f4bf760 100644
--- README.md
+++ README.md
@@ -157,7 +157,7 @@ A full example can be found [here](https://github.com/openai/openai-node/blob/ma
### Realtime error handling
-When an error is encountered, either on the client side or returned from the server through the [`error` event](https://platform.openai.com/docs/guides/realtime/realtime-api-beta#handling-errors), the `error` event listener will be fired. However, if you haven't registered an `error` event listener then an `unhandled Promise rejection` error will be thrown.
+When an error is encountered, either on the client side or returned from the server through the [`error` event](https://platform.openai.com/docs/guides/realtime-model-capabilities#error-handling), the `error` event listener will be fired. However, if you haven't registered an `error` event listener then an `unhandled Promise rejection` error will be thrown.
It is **highly recommended** that you register an `error` event listener and handle errors approriately as typically the underlying connection is still usable.
@@ -499,7 +499,7 @@ const credential = new DefaultAzureCredential();
const scope = 'https://cognitiveservices.azure.com/.default';
const azureADTokenProvider = getBearerTokenProvider(credential, scope);
-const openai = new AzureOpenAI({ azureADTokenProvider });
+const openai = new AzureOpenAI({ azureADTokenProvider, apiVersion: "<The API version, e.g. 2024-10-01-preview>" });
const result = await openai.chat.completions.create({
model: 'gpt-4o',
@@ -509,6 +509,26 @@ const result = await openai.chat.completions.create({
console.log(result.choices[0]!.message?.content);
```
+### Realtime API
+This SDK provides real-time streaming capabilities for Azure OpenAI through the `OpenAIRealtimeWS` and `OpenAIRealtimeWebSocket` clients described previously.
+
+To utilize the real-time features, begin by creating a fully configured `AzureOpenAI` client and passing it into either `OpenAIRealtimeWS.azure` or `OpenAIRealtimeWebSocket.azure`. For example:
+
+```ts
+const cred = new DefaultAzureCredential();
+const scope = 'https://cognitiveservices.azure.com/.default';
+const deploymentName = 'gpt-4o-realtime-preview-1001';
+const azureADTokenProvider = getBearerTokenProvider(cred, scope);
+const client = new AzureOpenAI({
+ azureADTokenProvider,
+ apiVersion: '2024-10-01-preview',
+ deployment: deploymentName,
+});
+const rt = await OpenAIRealtimeWS.azure(client);
+```
+
+Once the instance has been created, you can then begin sending requests and receiving streaming responses in real time.
+
### Retries
Certain errors will be automatically retried 2 times by default, with a short exponential backoff.
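The Realtime error-handling note in the README hunk above says that without a registered `error` listener, errors surface as unhandled Promise rejections. This mirrors Node's standard `EventEmitter` convention, which treats the `error` event specially. A minimal sketch of that convention, independent of the SDK (the SDK surfaces the failure asynchronously rather than as a synchronous throw, but the listener requirement is the same):

```typescript
import { EventEmitter } from 'node:events';

// With a listener registered, the emitted error is delivered to the
// callback and execution continues normally.
function emitWithListener(): Error {
  const emitter = new EventEmitter();
  let received: Error | null = null;
  emitter.on('error', (err: Error) => {
    received = err; // handled: the process keeps running
  });
  emitter.emit('error', new Error('boom'));
  return received!;
}

// Without a listener, emitting 'error' throws the error itself.
function emitWithoutListener(): boolean {
  const emitter = new EventEmitter();
  try {
    emitter.emit('error', new Error('boom'));
    return false;
  } catch {
    return true; // unhandled: the emit itself throws
  }
}
```

This is why the README recommends always registering an `error` listener: the underlying connection is typically still usable after an error, but only if the error is caught.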
diff --git api.md api.md
index 33ab95ef6..516188b20 100644
--- api.md
+++ api.md
@@ -5,6 +5,7 @@ Types:
- <code><a href="./src/resources/shared.ts">ErrorObject</a></code>
- <code><a href="./src/resources/shared.ts">FunctionDefinition</a></code>
- <code><a href="./src/resources/shared.ts">FunctionParameters</a></code>
+- <code><a href="./src/resources/shared.ts">Metadata</a></code>
- <code><a href="./src/resources/shared.ts">ResponseFormatJSONObject</a></code>
- <code><a href="./src/resources/shared.ts">ResponseFormatJSONSchema</a></code>
- <code><a href="./src/resources/shared.ts">ResponseFormatText</a></code>
diff --git examples/azure.ts examples/azure/chat.ts
similarity index 91%
rename from examples/azure.ts
rename to examples/azure/chat.ts
index 5fe1718fa..46df820f8 100755
--- examples/azure.ts
+++ examples/azure/chat.ts
@@ -2,6 +2,7 @@
import { AzureOpenAI } from 'openai';
import { getBearerTokenProvider, DefaultAzureCredential } from '@azure/identity';
+import 'dotenv/config';
// Corresponds to your Model deployment within your OpenAI resource, e.g. gpt-4-1106-preview
// Navigate to the Azure OpenAI Studio to deploy a model.
@@ -13,7 +14,7 @@ const azureADTokenProvider = getBearerTokenProvider(credential, scope);
// Make sure to set AZURE_OPENAI_ENDPOINT with the endpoint of your Azure resource.
// You can find it in the Azure Portal.
-const openai = new AzureOpenAI({ azureADTokenProvider });
+const openai = new AzureOpenAI({ azureADTokenProvider, apiVersion: '2024-10-01-preview' });
async function main() {
console.log('Non-streaming:');
diff --git a/examples/azure/realtime/websocket.ts b/examples/azure/realtime/websocket.ts
new file mode 100644
index 000000000..bec74e654
--- /dev/null
+++ examples/azure/realtime/websocket.ts
@@ -0,0 +1,60 @@
+import { OpenAIRealtimeWebSocket } from 'openai/beta/realtime/websocket';
+import { AzureOpenAI } from 'openai';
+import { DefaultAzureCredential, getBearerTokenProvider } from '@azure/identity';
+import 'dotenv/config';
+
+async function main() {
+ const cred = new DefaultAzureCredential();
+ const scope = 'https://cognitiveservices.azure.com/.default';
+ const deploymentName = 'gpt-4o-realtime-preview-1001';
+ const azureADTokenProvider = getBearerTokenProvider(cred, scope);
+ const client = new AzureOpenAI({
+ azureADTokenProvider,
+ apiVersion: '2024-10-01-preview',
+ deployment: deploymentName,
+ });
+ const rt = await OpenAIRealtimeWebSocket.azure(client);
+
+ // access the underlying `ws.WebSocket` instance
+ rt.socket.addEventListener('open', () => {
+ console.log('Connection opened!');
+ rt.send({
+ type: 'session.update',
+ session: {
+ modalities: ['text'],
+ model: 'gpt-4o-realtime-preview',
+ },
+ });
+
+ rt.send({
+ type: 'conversation.item.create',
+ item: {
+ type: 'message',
+ role: 'user',
+ content: [{ type: 'input_text', text: 'Say a couple paragraphs!' }],
+ },
+ });
+
+ rt.send({ type: 'response.create' });
+ });
+
+ rt.on('error', (err) => {
+ // in a real world scenario this should be logged somewhere as you
+ // likely want to continue procesing events regardless of any errors
+ throw err;
+ });
+
+ rt.on('session.created', (event) => {
+ console.log('session created!', event.session);
+ console.log();
+ });
+
+ rt.on('response.text.delta', (event) => process.stdout.write(event.delta));
+ rt.on('response.text.done', () => console.log());
+
+ rt.on('response.done', () => rt.close());
+
+ rt.socket.addEventListener('close', () => console.log('\nConnection closed!'));
+}
+
+main();
diff --git a/examples/azure/realtime/ws.ts b/examples/azure/realtime/ws.ts
new file mode 100644
index 000000000..6ab7b742a
--- /dev/null
+++ examples/azure/realtime/ws.ts
@@ -0,0 +1,60 @@
+import { DefaultAzureCredential, getBearerTokenProvider } from '@azure/identity';
+import { OpenAIRealtimeWS } from 'openai/beta/realtime/ws';
+import { AzureOpenAI } from 'openai';
+import 'dotenv/config';
+
+async function main() {
+ const cred = new DefaultAzureCredential();
+ const scope = 'https://cognitiveservices.azure.com/.default';
+ const deploymentName = 'gpt-4o-realtime-preview-1001';
+ const azureADTokenProvider = getBearerTokenProvider(cred, scope);
+ const client = new AzureOpenAI({
+ azureADTokenProvider,
+ apiVersion: '2024-10-01-preview',
+ deployment: deploymentName,
+ });
+ const rt = await OpenAIRealtimeWS.azure(client);
+
+ // access the underlying `ws.WebSocket` instance
+ rt.socket.on('open', () => {
+ console.log('Connection opened!');
+ rt.send({
+ type: 'session.update',
+ session: {
+ modalities: ['text'],
+ model: 'gpt-4o-realtime-preview',
+ },
+ });
+
+ rt.send({
+ type: 'conversation.item.create',
+ item: {
+ type: 'message',
+ role: 'user',
+ content: [{ type: 'input_text', text: 'Say a couple paragraphs!' }],
+ },
+ });
+
+ rt.send({ type: 'response.create' });
+ });
+
+ rt.on('error', (err) => {
+ // in a real world scenario this should be logged somewhere as you
+ // likely want to continue procesing events regardless of any errors
+ throw err;
+ });
+
+ rt.on('session.created', (event) => {
+ console.log('session created!', event.session);
+ console.log();
+ });
+
+ rt.on('response.text.delta', (event) => process.stdout.write(event.delta));
+ rt.on('response.text.done', () => console.log());
+
+ rt.on('response.done', () => rt.close());
+
+ rt.socket.on('close', () => console.log('\nConnection closed!'));
+}
+
+main();
diff --git examples/package.json examples/package.json
index b8c34ac45..70ec2c523 100644
--- examples/package.json
+++ examples/package.json
@@ -7,6 +7,7 @@
"private": true,
"dependencies": {
"@azure/identity": "^4.2.0",
+ "dotenv": "^16.4.7",
"express": "^4.18.2",
"next": "^14.1.1",
"openai": "file:..",
diff --git examples/realtime/ws.ts examples/realtime/ws.ts
index 4bbe85e5d..08c6fbcb6 100644
--- examples/realtime/ws.ts
+++ examples/realtime/ws.ts
@@ -6,13 +6,6 @@ async function main() {
// access the underlying `ws.WebSocket` instance
rt.socket.on('open', () => {
console.log('Connection opened!');
- rt.send({
- type: 'session.update',
- session: {
- modalities: ['foo'] as any,
- model: 'gpt-4o-realtime-preview',
- },
- });
rt.send({
type: 'session.update',
session: {
diff --git helpers.md helpers.md
index abf980c82..16bc1f277 100644
--- helpers.md
+++ helpers.md
@@ -49,7 +49,7 @@ if (message?.parsed) {
The `.parse()` method will also automatically parse `function` tool calls if:
-- You use the `zodFunctionTool()` helper method
+- You use the `zodFunction()` helper method
- You mark your tool schema with `"strict": True`
For example:
@@ -226,7 +226,7 @@ on in the documentation page [Message](https://platform.openai.com/docs/api-refe
```ts
.on('textCreated', (content: Text) => ...)
-.on('textDelta', (delta: RunStepDelta, snapshot: Text) => ...)
+.on('textDelta', (delta: TextDelta, snapshot: Text) => ...)
.on('textDone', (content: Text, snapshot: Message) => ...)
```

[diff truncated: the remaining hunks cover jsr.json, the realtime client internals (buildRealtimeURL, OpenAIRealtimeWebSocket/OpenAIRealtimeWS, getAzureHeaders), batches, realtime session types, assistants/threads/runs/messages types, chat completion types, embeddings, shared types, and the corresponding test files]

Security Hotspots
force-pushed from 9ef032a to 2a0f0f5
anthropic debug - [puLL-Merge] - openai/[email protected] Diff

diff --git .release-please-manifest.json .release-please-manifest.json
index b1ab5c7b9..6eb0f130e 100644
--- .release-please-manifest.json
+++ .release-please-manifest.json
@@ -1,3 +1,3 @@
{
- ".": "4.79.4"
+ ".": "4.83.0"
}
diff --git .stats.yml .stats.yml
index 9600edae3..df7877dfd 100644
--- .stats.yml
+++ .stats.yml
@@ -1,2 +1,2 @@
configured_endpoints: 69
-openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai-b5b0e2c794b012919701c3fd43286af10fa25d33ceb8a881bec2636028f446e0.yml
+openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai-fc5dbc19505b0035f9e7f88868619f4fb519b048bde011f6154f3132d4be71fb.yml
diff --git CHANGELOG.md CHANGELOG.md
index 4254a9b8f..f61def5e4 100644
--- CHANGELOG.md
+++ CHANGELOG.md
@@ -1,5 +1,64 @@
# Changelog
+## 4.83.0 (2025-02-05)
+
+Full Changelog: [v4.82.0...v4.83.0](https://github.com/openai/openai-node/compare/v4.82.0...v4.83.0)
+
+### Features
+
+* **client:** send `X-Stainless-Timeout` header ([#1299](https://github.com/openai/openai-node/issues/1299)) ([ddfc686](https://github.com/openai/openai-node/commit/ddfc686f43a3420c3adf8dec2e82b4d10a121eb8))
+
+
+### Bug Fixes
+
+* **api/types:** correct audio duration & role types ([#1300](https://github.com/openai/openai-node/issues/1300)) ([a955ac2](https://github.com/openai/openai-node/commit/a955ac2bf5bee663d530d0c82b0005bf3ce6fc47))
+* **azure/audio:** use model param for deployments ([#1297](https://github.com/openai/openai-node/issues/1297)) ([85de382](https://github.com/openai/openai-node/commit/85de382db17cbe5f112650e79d0fc1cc841efbb2))
+
+## 4.82.0 (2025-01-31)
+
+Full Changelog: [v4.81.0...v4.82.0](https://github.com/openai/openai-node/compare/v4.81.0...v4.82.0)
+
+### Features
+
+* **api:** add o3-mini ([#1295](https://github.com/openai/openai-node/issues/1295)) ([378e2f7](https://github.com/openai/openai-node/commit/378e2f7af62c570adb4c7644a4d49576b698de41))
+
+
+### Bug Fixes
+
+* **examples/realtime:** remove duplicate `session.update` call ([#1293](https://github.com/openai/openai-node/issues/1293)) ([ad800b4](https://github.com/openai/openai-node/commit/ad800b4f9410c6838994c24a3386ea708717f72b))
+* **types:** correct metadata type + other fixes ([378e2f7](https://github.com/openai/openai-node/commit/378e2f7af62c570adb4c7644a4d49576b698de41))
+
+## 4.81.0 (2025-01-29)
+
+Full Changelog: [v4.80.1...v4.81.0](https://github.com/openai/openai-node/compare/v4.80.1...v4.81.0)
+
+### Features
+
+* **azure:** Realtime API support ([#1287](https://github.com/openai/openai-node/issues/1287)) ([fe090c0](https://github.com/openai/openai-node/commit/fe090c0a57570217eb0b431e2cce40bf61de2b75))
+
+## 4.80.1 (2025-01-24)
+
+Full Changelog: [v4.80.0...v4.80.1](https://github.com/openai/openai-node/compare/v4.80.0...v4.80.1)
+
+### Bug Fixes
+
+* **azure:** include retry count header ([3e0ba40](https://github.com/openai/openai-node/commit/3e0ba409e57ce276fb1f95cd11c801e4ccaad572))
+
+
+### Documentation
+
+* fix typo, "zodFunctionTool" -> "zodFunction" ([#1128](https://github.com/openai/openai-node/issues/1128)) ([b7ab6bb](https://github.com/openai/openai-node/commit/b7ab6bb304973ade94830f37eb646e800226d5ef))
+* **helpers:** fix type annotation ([fc019df](https://github.com/openai/openai-node/commit/fc019df1d9cc276e8f8e689742853a09aa94991a))
+* **readme:** fix realtime errors docs link ([#1286](https://github.com/openai/openai-node/issues/1286)) ([d1d50c8](https://github.com/openai/openai-node/commit/d1d50c897c18cefea964e8057fe1acfd766ae2bf))
+
+## 4.80.0 (2025-01-22)
+
+Full Changelog: [v4.79.4...v4.80.0](https://github.com/openai/openai-node/compare/v4.79.4...v4.80.0)
+
+### Features
+
+* **api:** update enum values, comments, and examples ([#1280](https://github.com/openai/openai-node/issues/1280)) ([d38f2c2](https://github.com/openai/openai-node/commit/d38f2c2648b6990f217c3c7d83ca31f3739641d3))
+
## 4.79.4 (2025-01-21)
Full Changelog: [v4.79.3...v4.79.4](https://github.com/openai/openai-node/compare/v4.79.3...v4.79.4)
diff --git README.md README.md
index 3bd386e99..a1f4bf760 100644
--- README.md
+++ README.md
@@ -157,7 +157,7 @@ A full example can be found [here](https://github.com/openai/openai-node/blob/ma
### Realtime error handling
-When an error is encountered, either on the client side or returned from the server through the [`error` event](https://platform.openai.com/docs/guides/realtime/realtime-api-beta#handling-errors), the `error` event listener will be fired. However, if you haven't registered an `error` event listener then an `unhandled Promise rejection` error will be thrown.
+When an error is encountered, either on the client side or returned from the server through the [`error` event](https://platform.openai.com/docs/guides/realtime-model-capabilities#error-handling), the `error` event listener will be fired. However, if you haven't registered an `error` event listener then an `unhandled Promise rejection` error will be thrown.
It is **highly recommended** that you register an `error` event listener and handle errors approriately as typically the underlying connection is still usable.
@@ -499,7 +499,7 @@ const credential = new DefaultAzureCredential();
const scope = 'https://cognitiveservices.azure.com/.default';
const azureADTokenProvider = getBearerTokenProvider(credential, scope);
-const openai = new AzureOpenAI({ azureADTokenProvider });
+const openai = new AzureOpenAI({ azureADTokenProvider, apiVersion: "<The API version, e.g. 2024-10-01-preview>" });
const result = await openai.chat.completions.create({
model: 'gpt-4o',
@@ -509,6 +509,26 @@ const result = await openai.chat.completions.create({
console.log(result.choices[0]!.message?.content);
```
+### Realtime API
+This SDK provides real-time streaming capabilities for Azure OpenAI through the `OpenAIRealtimeWS` and `OpenAIRealtimeWebSocket` clients described previously.
+
+To utilize the real-time features, begin by creating a fully configured `AzureOpenAI` client and passing it into either `OpenAIRealtimeWS.azure` or `OpenAIRealtimeWebSocket.azure`. For example:
+
+```ts
+const cred = new DefaultAzureCredential();
+const scope = 'https://cognitiveservices.azure.com/.default';
+const deploymentName = 'gpt-4o-realtime-preview-1001';
+const azureADTokenProvider = getBearerTokenProvider(cred, scope);
+const client = new AzureOpenAI({
+ azureADTokenProvider,
+ apiVersion: '2024-10-01-preview',
+ deployment: deploymentName,
+});
+const rt = await OpenAIRealtimeWS.azure(client);
+```
+
+Once the instance has been created, you can then begin sending requests and receiving streaming responses in real time.
+
### Retries
Certain errors will be automatically retried 2 times by default, with a short exponential backoff.
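The line above notes that certain errors are retried twice by default with a short exponential backoff. The general pattern can be sketched generically as follows — this is an illustration of retry-with-backoff, not the SDK's internal implementation, and all names here are hypothetical:

```typescript
// Retry an async operation up to `maxRetries` additional times,
// doubling the wait between attempts (baseDelayMs, 2x, 4x, ...).
async function withRetries<T>(
  fn: () => Promise<T>,
  maxRetries = 2,
  baseDelayMs = 100,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt === maxRetries) break; // retries exhausted
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** attempt));
    }
  }
  throw lastError;
}
```

A production version would also restrict retries to retryable failures (for example connection errors and 408/429/5xx responses) and add jitter to the delay.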
diff --git api.md api.md
index 33ab95ef6..01854a8e0 100644
--- api.md
+++ api.md
@@ -5,6 +5,7 @@ Types:
- <code><a href="./src/resources/shared.ts">ErrorObject</a></code>
- <code><a href="./src/resources/shared.ts">FunctionDefinition</a></code>
- <code><a href="./src/resources/shared.ts">FunctionParameters</a></code>
+- <code><a href="./src/resources/shared.ts">Metadata</a></code>
- <code><a href="./src/resources/shared.ts">ResponseFormatJSONObject</a></code>
- <code><a href="./src/resources/shared.ts">ResponseFormatJSONSchema</a></code>
- <code><a href="./src/resources/shared.ts">ResponseFormatText</a></code>
@@ -228,6 +229,7 @@ Types:
- <code><a href="./src/resources/beta/realtime/realtime.ts">ConversationItemInputAudioTranscriptionFailedEvent</a></code>
- <code><a href="./src/resources/beta/realtime/realtime.ts">ConversationItemTruncateEvent</a></code>
- <code><a href="./src/resources/beta/realtime/realtime.ts">ConversationItemTruncatedEvent</a></code>
+- <code><a href="./src/resources/beta/realtime/realtime.ts">ConversationItemWithReference</a></code>
- <code><a href="./src/resources/beta/realtime/realtime.ts">ErrorEvent</a></code>
- <code><a href="./src/resources/beta/realtime/realtime.ts">InputAudioBufferAppendEvent</a></code>
- <code><a href="./src/resources/beta/realtime/realtime.ts">InputAudioBufferClearEvent</a></code>
diff --git examples/azure.ts examples/azure/chat.ts
similarity index 91%
rename from examples/azure.ts
rename to examples/azure/chat.ts
index 5fe1718fa..46df820f8 100755
--- examples/azure.ts
+++ examples/azure/chat.ts
@@ -2,6 +2,7 @@
import { AzureOpenAI } from 'openai';
import { getBearerTokenProvider, DefaultAzureCredential } from '@azure/identity';
+import 'dotenv/config';
// Corresponds to your Model deployment within your OpenAI resource, e.g. gpt-4-1106-preview
// Navigate to the Azure OpenAI Studio to deploy a model.
@@ -13,7 +14,7 @@ const azureADTokenProvider = getBearerTokenProvider(credential, scope);
// Make sure to set AZURE_OPENAI_ENDPOINT with the endpoint of your Azure resource.
// You can find it in the Azure Portal.
-const openai = new AzureOpenAI({ azureADTokenProvider });
+const openai = new AzureOpenAI({ azureADTokenProvider, apiVersion: '2024-10-01-preview' });
async function main() {
console.log('Non-streaming:');
diff --git a/examples/azure/realtime/websocket.ts b/examples/azure/realtime/websocket.ts
new file mode 100644
index 000000000..bec74e654
--- /dev/null
+++ examples/azure/realtime/websocket.ts
@@ -0,0 +1,60 @@
+import { OpenAIRealtimeWebSocket } from 'openai/beta/realtime/websocket';
+import { AzureOpenAI } from 'openai';
+import { DefaultAzureCredential, getBearerTokenProvider } from '@azure/identity';
+import 'dotenv/config';
+
+async function main() {
+ const cred = new DefaultAzureCredential();
+ const scope = 'https://cognitiveservices.azure.com/.default';
+ const deploymentName = 'gpt-4o-realtime-preview-1001';
+ const azureADTokenProvider = getBearerTokenProvider(cred, scope);
+ const client = new AzureOpenAI({
+ azureADTokenProvider,
+ apiVersion: '2024-10-01-preview',
+ deployment: deploymentName,
+ });
+ const rt = await OpenAIRealtimeWebSocket.azure(client);
+
+ // access the underlying `ws.WebSocket` instance
+ rt.socket.addEventListener('open', () => {
+ console.log('Connection opened!');
+ rt.send({
+ type: 'session.update',
+ session: {
+ modalities: ['text'],
+ model: 'gpt-4o-realtime-preview',
+ },
+ });
+
+ rt.send({
+ type: 'conversation.item.create',
+ item: {
+ type: 'message',
+ role: 'user',
+ content: [{ type: 'input_text', text: 'Say a couple paragraphs!' }],
+ },
+ });
+
+ rt.send({ type: 'response.create' });
+ });
+
+ rt.on('error', (err) => {
+ // in a real world scenario this should be logged somewhere as you
+ // likely want to continue procesing events regardless of any errors
+ throw err;
+ });
+
+ rt.on('session.created', (event) => {
+ console.log('session created!', event.session);
+ console.log();
+ });
+
+ rt.on('response.text.delta', (event) => process.stdout.write(event.delta));
+ rt.on('response.text.done', () => console.log());
+
+ rt.on('response.done', () => rt.close());
+
+ rt.socket.addEventListener('close', () => console.log('\nConnection closed!'));
+}
+
+main();
diff --git a/examples/azure/realtime/ws.ts b/examples/azure/realtime/ws.ts
new file mode 100644
index 000000000..6ab7b742a
--- /dev/null
+++ examples/azure/realtime/ws.ts
@@ -0,0 +1,60 @@
+import { DefaultAzureCredential, getBearerTokenProvider } from '@azure/identity';
+import { OpenAIRealtimeWS } from 'openai/beta/realtime/ws';
+import { AzureOpenAI } from 'openai';
+import 'dotenv/config';
+
+async function main() {
+ const cred = new DefaultAzureCredential();
+ const scope = 'https://cognitiveservices.azure.com/.default';
+ const deploymentName = 'gpt-4o-realtime-preview-1001';
+ const azureADTokenProvider = getBearerTokenProvider(cred, scope);
+ const client = new AzureOpenAI({
+ azureADTokenProvider,
+ apiVersion: '2024-10-01-preview',
+ deployment: deploymentName,
+ });
+ const rt = await OpenAIRealtimeWS.azure(client);
+
+ // access the underlying `ws.WebSocket` instance
+ rt.socket.on('open', () => {
+ console.log('Connection opened!');
+ rt.send({
+ type: 'session.update',
+ session: {
+ modalities: ['text'],
+ model: 'gpt-4o-realtime-preview',
+ },
+ });
+
+ rt.send({
+ type: 'conversation.item.create',
+ item: {
+ type: 'message',
+ role: 'user',
+ content: [{ type: 'input_text', text: 'Say a couple paragraphs!' }],
+ },
+ });
+
+ rt.send({ type: 'response.create' });
+ });
+
+ rt.on('error', (err) => {
+ // in a real world scenario this should be logged somewhere as you
+ // likely want to continue procesing events regardless of any errors
+ throw err;
+ });
+
+ rt.on('session.created', (event) => {
+ console.log('session created!', event.session);
+ console.log();
+ });
+
+ rt.on('response.text.delta', (event) => process.stdout.write(event.delta));
+ rt.on('response.text.done', () => console.log());
+
+ rt.on('response.done', () => rt.close());
+
+ rt.socket.on('close', () => console.log('\nConnection closed!'));
+}
+
+main();
diff --git examples/package.json examples/package.json
index b8c34ac45..70ec2c523 100644
--- examples/package.json
+++ examples/package.json
@@ -7,6 +7,7 @@
"private": true,
"dependencies": {
"@azure/identity": "^4.2.0",
+ "dotenv": "^16.4.7",
"express": "^4.18.2",
"next": "^14.1.1",
"openai": "file:..",
diff --git examples/realtime/ws.ts examples/realtime/ws.ts
index 4bbe85e5d..08c6fbcb6 100644
--- examples/realtime/ws.ts
+++ examples/realtime/ws.ts
@@ -6,13 +6,6 @@ async function main() {
// access the underlying `ws.WebSocket` instance
rt.socket.on('open', () => {
console.log('Connection opened!');
- rt.send({
- type: 'session.update',
- session: {
- modalities: ['foo'] as any,
- model: 'gpt-4o-realtime-preview',
- },
- });
rt.send({
type: 'session.update',
session: {
diff --git helpers.md helpers.md
index abf980c82..16bc1f277 100644
--- helpers.md
+++ helpers.md
@@ -49,7 +49,7 @@ if (message?.parsed) {
The `.parse()` method will also automatically parse `function` tool calls if:
-- You use the `zodFunctionTool()` helper method
+- You use the `zodFunction()` helper method
- You mark your tool schema with `"strict": True`
For example:
@@ -226,7 +226,7 @@ on in the documentation page [Message](https://platform.openai.com/docs/api-refe
```ts
.on('textCreated', (content: Text) => ...)
-.on('textDelta', (delta: RunStepDelta, snapshot: Text) => ...)
+.on('textDelta', (delta: TextDelta, snapshot: Text) => ...)
.on('textDone', (content: Text, snapshot: Message) => ...)
```

[diff truncated: the remaining hunks cover jsr.json, the realtime client internals (buildRealtimeURL, OpenAIRealtimeWebSocket/OpenAIRealtimeWS, getAzureHeaders), request building (buildHeaders, RequestOptions), audio transcriptions and translations, batches, realtime session types, assistants/threads/runs/messages types, chat completion types, embeddings, shared types, and the corresponding test files]

Security Hotspots

Possible Issues
openai debug - [puLL-Merge] - openai/[email protected] Diffdiff --git .release-please-manifest.json .release-please-manifest.json
index b1ab5c7b9..6eb0f130e 100644
--- .release-please-manifest.json
+++ .release-please-manifest.json
@@ -1,3 +1,3 @@
{
- ".": "4.79.4"
+ ".": "4.83.0"
}
diff --git .stats.yml .stats.yml
index 9600edae3..df7877dfd 100644
--- .stats.yml
+++ .stats.yml
@@ -1,2 +1,2 @@
configured_endpoints: 69
-openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai-b5b0e2c794b012919701c3fd43286af10fa25d33ceb8a881bec2636028f446e0.yml
+openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai-fc5dbc19505b0035f9e7f88868619f4fb519b048bde011f6154f3132d4be71fb.yml
diff --git CHANGELOG.md CHANGELOG.md
index 4254a9b8f..f61def5e4 100644
--- CHANGELOG.md
+++ CHANGELOG.md
@@ -1,5 +1,64 @@
# Changelog
+## 4.83.0 (2025-02-05)
+
+Full Changelog: [v4.82.0...v4.83.0](https://github.com/openai/openai-node/compare/v4.82.0...v4.83.0)
+
+### Features
+
+* **client:** send `X-Stainless-Timeout` header ([#1299](https://github.com/openai/openai-node/issues/1299)) ([ddfc686](https://github.com/openai/openai-node/commit/ddfc686f43a3420c3adf8dec2e82b4d10a121eb8))
+
+
+### Bug Fixes
+
+* **api/types:** correct audio duration & role types ([#1300](https://github.com/openai/openai-node/issues/1300)) ([a955ac2](https://github.com/openai/openai-node/commit/a955ac2bf5bee663d530d0c82b0005bf3ce6fc47))
+* **azure/audio:** use model param for deployments ([#1297](https://github.com/openai/openai-node/issues/1297)) ([85de382](https://github.com/openai/openai-node/commit/85de382db17cbe5f112650e79d0fc1cc841efbb2))
+
+## 4.82.0 (2025-01-31)
+
+Full Changelog: [v4.81.0...v4.82.0](https://github.com/openai/openai-node/compare/v4.81.0...v4.82.0)
+
+### Features
+
+* **api:** add o3-mini ([#1295](https://github.com/openai/openai-node/issues/1295)) ([378e2f7](https://github.com/openai/openai-node/commit/378e2f7af62c570adb4c7644a4d49576b698de41))
+
+
+### Bug Fixes
+
+* **examples/realtime:** remove duplicate `session.update` call ([#1293](https://github.com/openai/openai-node/issues/1293)) ([ad800b4](https://github.com/openai/openai-node/commit/ad800b4f9410c6838994c24a3386ea708717f72b))
+* **types:** correct metadata type + other fixes ([378e2f7](https://github.com/openai/openai-node/commit/378e2f7af62c570adb4c7644a4d49576b698de41))
+
+## 4.81.0 (2025-01-29)
+
+Full Changelog: [v4.80.1...v4.81.0](https://github.com/openai/openai-node/compare/v4.80.1...v4.81.0)
+
+### Features
+
+* **azure:** Realtime API support ([#1287](https://github.com/openai/openai-node/issues/1287)) ([fe090c0](https://github.com/openai/openai-node/commit/fe090c0a57570217eb0b431e2cce40bf61de2b75))
+
+## 4.80.1 (2025-01-24)
+
+Full Changelog: [v4.80.0...v4.80.1](https://github.com/openai/openai-node/compare/v4.80.0...v4.80.1)
+
+### Bug Fixes
+
+* **azure:** include retry count header ([3e0ba40](https://github.com/openai/openai-node/commit/3e0ba409e57ce276fb1f95cd11c801e4ccaad572))
+
+
+### Documentation
+
+* fix typo, "zodFunctionTool" -> "zodFunction" ([#1128](https://github.com/openai/openai-node/issues/1128)) ([b7ab6bb](https://github.com/openai/openai-node/commit/b7ab6bb304973ade94830f37eb646e800226d5ef))
+* **helpers:** fix type annotation ([fc019df](https://github.com/openai/openai-node/commit/fc019df1d9cc276e8f8e689742853a09aa94991a))
+* **readme:** fix realtime errors docs link ([#1286](https://github.com/openai/openai-node/issues/1286)) ([d1d50c8](https://github.com/openai/openai-node/commit/d1d50c897c18cefea964e8057fe1acfd766ae2bf))
+
+## 4.80.0 (2025-01-22)
+
+Full Changelog: [v4.79.4...v4.80.0](https://github.com/openai/openai-node/compare/v4.79.4...v4.80.0)
+
+### Features
+
+* **api:** update enum values, comments, and examples ([#1280](https://github.com/openai/openai-node/issues/1280)) ([d38f2c2](https://github.com/openai/openai-node/commit/d38f2c2648b6990f217c3c7d83ca31f3739641d3))
+
## 4.79.4 (2025-01-21)
Full Changelog: [v4.79.3...v4.79.4](https://github.com/openai/openai-node/compare/v4.79.3...v4.79.4)
diff --git README.md README.md
index 3bd386e99..a1f4bf760 100644
--- README.md
+++ README.md
@@ -157,7 +157,7 @@ A full example can be found [here](https://github.com/openai/openai-node/blob/ma
### Realtime error handling
-When an error is encountered, either on the client side or returned from the server through the [`error` event](https://platform.openai.com/docs/guides/realtime/realtime-api-beta#handling-errors), the `error` event listener will be fired. However, if you haven't registered an `error` event listener then an `unhandled Promise rejection` error will be thrown.
+When an error is encountered, either on the client side or returned from the server through the [`error` event](https://platform.openai.com/docs/guides/realtime-model-capabilities#error-handling), the `error` event listener will be fired. However, if you haven't registered an `error` event listener then an `unhandled Promise rejection` error will be thrown.
It is **highly recommended** that you register an `error` event listener and handle errors approriately as typically the underlying connection is still usable.
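As context for that recommendation: the "throw if no `error` listener is registered" behaviour mirrors Node's standard `EventEmitter` contract, which can be sketched without the SDK (a stand-in only — the real client surfaces the failure as an unhandled promise rejection rather than a synchronous throw):

```typescript
import { EventEmitter } from 'events';

// Stand-in for the realtime client's event surface (sketch only, not the SDK).
// Node's EventEmitter throws when 'error' is emitted with no listener,
// analogous to the unhandled rejection described in the README above.
function emitWithHandler(): string {
  const rt = new EventEmitter();
  let seen = '';
  rt.on('error', (err: Error) => {
    // handled: in the real client the underlying connection typically stays usable
    seen = err.message;
  });
  rt.emit('error', new Error('server rejected session.update'));
  return seen;
}

function emitWithoutHandler(): boolean {
  const rt = new EventEmitter();
  try {
    rt.emit('error', new Error('boom')); // no 'error' listener: this throws
    return false;
  } catch {
    return true;
  }
}

console.log(emitWithHandler()); // "server rejected session.update"
console.log(emitWithoutHandler()); // true
```

The design point is the same in both cases: registering the listener converts a process-level failure into an event you can observe and recover from.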
@@ -499,7 +499,7 @@ const credential = new DefaultAzureCredential();
const scope = 'https://cognitiveservices.azure.com/.default';
const azureADTokenProvider = getBearerTokenProvider(credential, scope);
-const openai = new AzureOpenAI({ azureADTokenProvider });
+const openai = new AzureOpenAI({ azureADTokenProvider, apiVersion: "<The API version, e.g. 2024-10-01-preview>" });
const result = await openai.chat.completions.create({
model: 'gpt-4o',
@@ -509,6 +509,26 @@ const result = await openai.chat.completions.create({
console.log(result.choices[0]!.message?.content);
\`\`\`
+### Realtime API
+This SDK provides real-time streaming capabilities for Azure OpenAI through the `OpenAIRealtimeWS` and `OpenAIRealtimeWebSocket` clients described previously.
+
+To utilize the real-time features, begin by creating a fully configured `AzureOpenAI` client and passing it into either `OpenAIRealtimeWS.azure` or `OpenAIRealtimeWebSocket.azure`. For example:
+
+```ts
+const cred = new DefaultAzureCredential();
+const scope = 'https://cognitiveservices.azure.com/.default';
+const deploymentName = 'gpt-4o-realtime-preview-1001';
+const azureADTokenProvider = getBearerTokenProvider(cred, scope);
+const client = new AzureOpenAI({
+ azureADTokenProvider,
+ apiVersion: '2024-10-01-preview',
+ deployment: deploymentName,
+});
+const rt = await OpenAIRealtimeWS.azure(client);
+```
+
+Once the instance has been created, you can then begin sending requests and receiving streaming responses in real time.
+
### Retries
Certain errors will be automatically retried 2 times by default, with a short exponential backoff.
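That retry note can be illustrated in isolation. The following is a hedged sketch of "retry up to 2 times with exponential backoff" under stated assumptions — it is not the SDK's actual implementation, whose behaviour (delay values, jitter, which errors count as retryable) is more involved:

```typescript
// Hedged sketch of "retry N times with exponential backoff" — not the
// SDK's real retry logic, just the shape of the idea.
async function withRetries<T>(
  fn: () => Promise<T>,
  maxRetries = 2,
  baseDelayMs = 500,
): Promise<T> {
  let attempt = 0;
  for (;;) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= maxRetries) throw err; // retries exhausted
      const delay = baseDelayMs * 2 ** attempt; // 500ms, 1000ms, ...
      await new Promise((resolve) => setTimeout(resolve, delay));
      attempt += 1;
    }
  }
}

// Usage: fails twice, then succeeds on the third (and final) attempt.
async function demo(): Promise<string> {
  let calls = 0;
  return withRetries(
    async () => {
      calls += 1;
      if (calls < 3) throw new Error('transient');
      return `ok after ${calls} calls`;
    },
    2,
    1, // 1ms base delay so the demo runs fast
  );
}

demo().then(console.log); // "ok after 3 calls"
```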
diff --git api.md api.md
index 33ab95ef6..01854a8e0 100644
--- api.md
+++ api.md
@@ -5,6 +5,7 @@ Types:
- <code><a href="./src/resources/shared.ts">ErrorObject</a></code>
- <code><a href="./src/resources/shared.ts">FunctionDefinition</a></code>
- <code><a href="./src/resources/shared.ts">FunctionParameters</a></code>
+- <code><a href="./src/resources/shared.ts">Metadata</a></code>
- <code><a href="./src/resources/shared.ts">ResponseFormatJSONObject</a></code>
- <code><a href="./src/resources/shared.ts">ResponseFormatJSONSchema</a></code>
- <code><a href="./src/resources/shared.ts">ResponseFormatText</a></code>
@@ -228,6 +229,7 @@ Types:
- <code><a href="./src/resources/beta/realtime/realtime.ts">ConversationItemInputAudioTranscriptionFailedEvent</a></code>
- <code><a href="./src/resources/beta/realtime/realtime.ts">ConversationItemTruncateEvent</a></code>
- <code><a href="./src/resources/beta/realtime/realtime.ts">ConversationItemTruncatedEvent</a></code>
+- <code><a href="./src/resources/beta/realtime/realtime.ts">ConversationItemWithReference</a></code>
- <code><a href="./src/resources/beta/realtime/realtime.ts">ErrorEvent</a></code>
- <code><a href="./src/resources/beta/realtime/realtime.ts">InputAudioBufferAppendEvent</a></code>
- <code><a href="./src/resources/beta/realtime/realtime.ts">InputAudioBufferClearEvent</a></code>
diff --git examples/azure.ts examples/azure/chat.ts
similarity index 91%
rename from examples/azure.ts
rename to examples/azure/chat.ts
index 5fe1718fa..46df820f8 100755
--- examples/azure.ts
+++ examples/azure/chat.ts
@@ -2,6 +2,7 @@
import { AzureOpenAI } from 'openai';
import { getBearerTokenProvider, DefaultAzureCredential } from '@azure/identity';
+import 'dotenv/config';
// Corresponds to your Model deployment within your OpenAI resource, e.g. gpt-4-1106-preview
// Navigate to the Azure OpenAI Studio to deploy a model.
@@ -13,7 +14,7 @@ const azureADTokenProvider = getBearerTokenProvider(credential, scope);
// Make sure to set AZURE_OPENAI_ENDPOINT with the endpoint of your Azure resource.
// You can find it in the Azure Portal.
-const openai = new AzureOpenAI({ azureADTokenProvider });
+const openai = new AzureOpenAI({ azureADTokenProvider, apiVersion: '2024-10-01-preview' });
async function main() {
console.log('Non-streaming:');
diff --git a/examples/azure/realtime/websocket.ts b/examples/azure/realtime/websocket.ts
new file mode 100644
index 000000000..bec74e654
--- /dev/null
+++ examples/azure/realtime/websocket.ts
@@ -0,0 +1,60 @@
+import { OpenAIRealtimeWebSocket } from 'openai/beta/realtime/websocket';
+import { AzureOpenAI } from 'openai';
+import { DefaultAzureCredential, getBearerTokenProvider } from '@azure/identity';
+import 'dotenv/config';
+
+async function main() {
+ const cred = new DefaultAzureCredential();
+ const scope = 'https://cognitiveservices.azure.com/.default';
+ const deploymentName = 'gpt-4o-realtime-preview-1001';
+ const azureADTokenProvider = getBearerTokenProvider(cred, scope);
+ const client = new AzureOpenAI({
+ azureADTokenProvider,
+ apiVersion: '2024-10-01-preview',
+ deployment: deploymentName,
+ });
+ const rt = await OpenAIRealtimeWebSocket.azure(client);
+
+ // access the underlying `ws.WebSocket` instance
+ rt.socket.addEventListener('open', () => {
+ console.log('Connection opened!');
+ rt.send({
+ type: 'session.update',
+ session: {
+ modalities: ['text'],
+ model: 'gpt-4o-realtime-preview',
+ },
+ });
+
+ rt.send({
+ type: 'conversation.item.create',
+ item: {
+ type: 'message',
+ role: 'user',
+ content: [{ type: 'input_text', text: 'Say a couple paragraphs!' }],
+ },
+ });
+
+ rt.send({ type: 'response.create' });
+ });
+
+ rt.on('error', (err) => {
+ // in a real world scenario this should be logged somewhere as you
+ // likely want to continue procesing events regardless of any errors
+ throw err;
+ });
+
+ rt.on('session.created', (event) => {
+ console.log('session created!', event.session);
+ console.log();
+ });
+
+ rt.on('response.text.delta', (event) => process.stdout.write(event.delta));
+ rt.on('response.text.done', () => console.log());
+
+ rt.on('response.done', () => rt.close());
+
+ rt.socket.addEventListener('close', () => console.log('\nConnection closed!'));
+}
+
+main();
diff --git a/examples/azure/realtime/ws.ts b/examples/azure/realtime/ws.ts
new file mode 100644
index 000000000..6ab7b742a
--- /dev/null
+++ examples/azure/realtime/ws.ts
@@ -0,0 +1,60 @@
+import { DefaultAzureCredential, getBearerTokenProvider } from '@azure/identity';
+import { OpenAIRealtimeWS } from 'openai/beta/realtime/ws';
+import { AzureOpenAI } from 'openai';
+import 'dotenv/config';
+
+async function main() {
+ const cred = new DefaultAzureCredential();
+ const scope = 'https://cognitiveservices.azure.com/.default';
+ const deploymentName = 'gpt-4o-realtime-preview-1001';
+ const azureADTokenProvider = getBearerTokenProvider(cred, scope);
+ const client = new AzureOpenAI({
+ azureADTokenProvider,
+ apiVersion: '2024-10-01-preview',
+ deployment: deploymentName,
+ });
+ const rt = await OpenAIRealtimeWS.azure(client);
+
+ // access the underlying `ws.WebSocket` instance
+ rt.socket.on('open', () => {
+ console.log('Connection opened!');
+ rt.send({
+ type: 'session.update',
+ session: {
+ modalities: ['text'],
+ model: 'gpt-4o-realtime-preview',
+ },
+ });
+
+ rt.send({
+ type: 'conversation.item.create',
+ item: {
+ type: 'message',
+ role: 'user',
+ content: [{ type: 'input_text', text: 'Say a couple paragraphs!' }],
+ },
+ });
+
+ rt.send({ type: 'response.create' });
+ });
+
+ rt.on('error', (err) => {
+ // in a real world scenario this should be logged somewhere as you
+ // likely want to continue procesing events regardless of any errors
+ throw err;
+ });
+
+ rt.on('session.created', (event) => {
+ console.log('session created!', event.session);
+ console.log();
+ });
+
+ rt.on('response.text.delta', (event) => process.stdout.write(event.delta));
+ rt.on('response.text.done', () => console.log());
+
+ rt.on('response.done', () => rt.close());
+
+ rt.socket.on('close', () => console.log('\nConnection closed!'));
+}
+
+main();
diff --git examples/package.json examples/package.json
index b8c34ac45..70ec2c523 100644
--- examples/package.json
+++ examples/package.json
@@ -7,6 +7,7 @@
"private": true,
"dependencies": {
"@azure/identity": "^4.2.0",
+ "dotenv": "^16.4.7",
"express": "^4.18.2",
"next": "^14.1.1",
"openai": "file:..",
diff --git examples/realtime/ws.ts examples/realtime/ws.ts
index 4bbe85e5d..08c6fbcb6 100644
--- examples/realtime/ws.ts
+++ examples/realtime/ws.ts
@@ -6,13 +6,6 @@ async function main() {
// access the underlying `ws.WebSocket` instance
rt.socket.on('open', () => {
console.log('Connection opened!');
- rt.send({
- type: 'session.update',
- session: {
- modalities: ['foo'] as any,
- model: 'gpt-4o-realtime-preview',
- },
- });
rt.send({
type: 'session.update',
session: {
diff --git helpers.md helpers.md
index abf980c82..16bc1f277 100644
--- helpers.md
+++ helpers.md
@@ -49,7 +49,7 @@ if (message?.parsed) {
The `.parse()` method will also automatically parse `function` tool calls if:
-- You use the `zodFunctionTool()` helper method
+- You use the `zodFunction()` helper method
- You mark your tool schema with `"strict": True`
For example:
@@ -226,7 +226,7 @@ on in the documentation page [Message](https://platform.openai.com/docs/api-refe
```ts
.on('textCreated', (content: Text) => ...)
-.on('textDelta', (delta: RunStepDelta, snapshot: Text) => ...)
+.on('textDelta', (delta: TextDelta, snapshot: Text) => ...)
.on('textDone', (content: Text, snapshot: Message) => ...)
```

[Remainder of the 4.83.0 diff truncated: further changes to jsr.json, src/beta/realtime (websocket, ws, Azure headers), src/resources (audio transcriptions/translations, batches, beta realtime sessions, beta threads/messages/runs, chat completions, embeddings, shared), and the corresponding tests.]
bedrock debug - [puLL-Merge] - openai/[email protected] Diffdiff --git .release-please-manifest.json .release-please-manifest.json
index b1ab5c7b9..6eb0f130e 100644
--- .release-please-manifest.json
+++ .release-please-manifest.json
@@ -1,3 +1,3 @@
{
- ".": "4.79.4"
+ ".": "4.83.0"
}
diff --git .stats.yml .stats.yml
index 9600edae3..df7877dfd 100644
--- .stats.yml
+++ .stats.yml
@@ -1,2 +1,2 @@
configured_endpoints: 69
-openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai-b5b0e2c794b012919701c3fd43286af10fa25d33ceb8a881bec2636028f446e0.yml
+openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai-fc5dbc19505b0035f9e7f88868619f4fb519b048bde011f6154f3132d4be71fb.yml
diff --git CHANGELOG.md CHANGELOG.md
index 4254a9b8f..f61def5e4 100644
--- CHANGELOG.md
+++ CHANGELOG.md
@@ -1,5 +1,64 @@
# Changelog
+## 4.83.0 (2025-02-05)
+
+Full Changelog: [v4.82.0...v4.83.0](https://github.com/openai/openai-node/compare/v4.82.0...v4.83.0)
+
+### Features
+
+* **client:** send `X-Stainless-Timeout` header ([#1299](https://github.com/openai/openai-node/issues/1299)) ([ddfc686](https://github.com/openai/openai-node/commit/ddfc686f43a3420c3adf8dec2e82b4d10a121eb8))
+
+
+### Bug Fixes
+
+* **api/types:** correct audio duration & role types ([#1300](https://github.com/openai/openai-node/issues/1300)) ([a955ac2](https://github.com/openai/openai-node/commit/a955ac2bf5bee663d530d0c82b0005bf3ce6fc47))
+* **azure/audio:** use model param for deployments ([#1297](https://github.com/openai/openai-node/issues/1297)) ([85de382](https://github.com/openai/openai-node/commit/85de382db17cbe5f112650e79d0fc1cc841efbb2))
+
+## 4.82.0 (2025-01-31)
+
+Full Changelog: [v4.81.0...v4.82.0](https://github.com/openai/openai-node/compare/v4.81.0...v4.82.0)
+
+### Features
+
+* **api:** add o3-mini ([#1295](https://github.com/openai/openai-node/issues/1295)) ([378e2f7](https://github.com/openai/openai-node/commit/378e2f7af62c570adb4c7644a4d49576b698de41))
+
+
+### Bug Fixes
+
+* **examples/realtime:** remove duplicate `session.update` call ([#1293](https://github.com/openai/openai-node/issues/1293)) ([ad800b4](https://github.com/openai/openai-node/commit/ad800b4f9410c6838994c24a3386ea708717f72b))
+* **types:** correct metadata type + other fixes ([378e2f7](https://github.com/openai/openai-node/commit/378e2f7af62c570adb4c7644a4d49576b698de41))
+
+## 4.81.0 (2025-01-29)
+
+Full Changelog: [v4.80.1...v4.81.0](https://github.com/openai/openai-node/compare/v4.80.1...v4.81.0)
+
+### Features
+
+* **azure:** Realtime API support ([#1287](https://github.com/openai/openai-node/issues/1287)) ([fe090c0](https://github.com/openai/openai-node/commit/fe090c0a57570217eb0b431e2cce40bf61de2b75))
+
+## 4.80.1 (2025-01-24)
+
+Full Changelog: [v4.80.0...v4.80.1](https://github.com/openai/openai-node/compare/v4.80.0...v4.80.1)
+
+### Bug Fixes
+
+* **azure:** include retry count header ([3e0ba40](https://github.com/openai/openai-node/commit/3e0ba409e57ce276fb1f95cd11c801e4ccaad572))
+
+
+### Documentation
+
+* fix typo, "zodFunctionTool" -> "zodFunction" ([#1128](https://github.com/openai/openai-node/issues/1128)) ([b7ab6bb](https://github.com/openai/openai-node/commit/b7ab6bb304973ade94830f37eb646e800226d5ef))
+* **helpers:** fix type annotation ([fc019df](https://github.com/openai/openai-node/commit/fc019df1d9cc276e8f8e689742853a09aa94991a))
+* **readme:** fix realtime errors docs link ([#1286](https://github.com/openai/openai-node/issues/1286)) ([d1d50c8](https://github.com/openai/openai-node/commit/d1d50c897c18cefea964e8057fe1acfd766ae2bf))
+
+## 4.80.0 (2025-01-22)
+
+Full Changelog: [v4.79.4...v4.80.0](https://github.com/openai/openai-node/compare/v4.79.4...v4.80.0)
+
+### Features
+
+* **api:** update enum values, comments, and examples ([#1280](https://github.com/openai/openai-node/issues/1280)) ([d38f2c2](https://github.com/openai/openai-node/commit/d38f2c2648b6990f217c3c7d83ca31f3739641d3))
+
## 4.79.4 (2025-01-21)
Full Changelog: [v4.79.3...v4.79.4](https://github.com/openai/openai-node/compare/v4.79.3...v4.79.4)
diff --git README.md README.md
index 3bd386e99..a1f4bf760 100644
--- README.md
+++ README.md
@@ -157,7 +157,7 @@ A full example can be found [here](https://github.com/openai/openai-node/blob/ma
### Realtime error handling
-When an error is encountered, either on the client side or returned from the server through the [`error` event](https://platform.openai.com/docs/guides/realtime/realtime-api-beta#handling-errors), the `error` event listener will be fired. However, if you haven't registered an `error` event listener then an `unhandled Promise rejection` error will be thrown.
+When an error is encountered, either on the client side or returned from the server through the [`error` event](https://platform.openai.com/docs/guides/realtime-model-capabilities#error-handling), the `error` event listener will be fired. However, if you haven't registered an `error` event listener then an `unhandled Promise rejection` error will be thrown.
It is **highly recommended** that you register an `error` event listener and handle errors approriately as typically the underlying connection is still usable.
@@ -499,7 +499,7 @@ const credential = new DefaultAzureCredential();
const scope = 'https://cognitiveservices.azure.com/.default';
const azureADTokenProvider = getBearerTokenProvider(credential, scope);
-const openai = new AzureOpenAI({ azureADTokenProvider });
+const openai = new AzureOpenAI({ azureADTokenProvider, apiVersion: "<The API version, e.g. 2024-10-01-preview>" });
const result = await openai.chat.completions.create({
model: 'gpt-4o',
@@ -509,6 +509,26 @@ const result = await openai.chat.completions.create({
console.log(result.choices[0]!.message?.content);
\`\`\`
+### Realtime API
+This SDK provides real-time streaming capabilities for Azure OpenAI through the `OpenAIRealtimeWS` and `OpenAIRealtimeWebSocket` clients described previously.
+
+To utilize the real-time features, begin by creating a fully configured `AzureOpenAI` client and passing it into either `OpenAIRealtimeWS.azure` or `OpenAIRealtimeWebSocket.azure`. For example:
+
+```ts
+const cred = new DefaultAzureCredential();
+const scope = 'https://cognitiveservices.azure.com/.default';
+const deploymentName = 'gpt-4o-realtime-preview-1001';
+const azureADTokenProvider = getBearerTokenProvider(cred, scope);
+const client = new AzureOpenAI({
+ azureADTokenProvider,
+ apiVersion: '2024-10-01-preview',
+ deployment: deploymentName,
+});
+const rt = await OpenAIRealtimeWS.azure(client);
+```
+
+Once the instance has been created, you can then begin sending requests and receiving streaming responses in real time.
+
### Retries
Certain errors will be automatically retried 2 times by default, with a short exponential backoff.
diff --git api.md api.md
index 33ab95ef6..01854a8e0 100644
--- api.md
+++ api.md
@@ -5,6 +5,7 @@ Types:
- <code><a href="./src/resources/shared.ts">ErrorObject</a></code>
- <code><a href="./src/resources/shared.ts">FunctionDefinition</a></code>
- <code><a href="./src/resources/shared.ts">FunctionParameters</a></code>
+- <code><a href="./src/resources/shared.ts">Metadata</a></code>
- <code><a href="./src/resources/shared.ts">ResponseFormatJSONObject</a></code>
- <code><a href="./src/resources/shared.ts">ResponseFormatJSONSchema</a></code>
- <code><a href="./src/resources/shared.ts">ResponseFormatText</a></code>
@@ -228,6 +229,7 @@ Types:
- <code><a href="./src/resources/beta/realtime/realtime.ts">ConversationItemInputAudioTranscriptionFailedEvent</a></code>
- <code><a href="./src/resources/beta/realtime/realtime.ts">ConversationItemTruncateEvent</a></code>
- <code><a href="./src/resources/beta/realtime/realtime.ts">ConversationItemTruncatedEvent</a></code>
+- <code><a href="./src/resources/beta/realtime/realtime.ts">ConversationItemWithReference</a></code>
- <code><a href="./src/resources/beta/realtime/realtime.ts">ErrorEvent</a></code>
- <code><a href="./src/resources/beta/realtime/realtime.ts">InputAudioBufferAppendEvent</a></code>
- <code><a href="./src/resources/beta/realtime/realtime.ts">InputAudioBufferClearEvent</a></code>
diff --git examples/azure.ts examples/azure/chat.ts
similarity index 91%
rename from examples/azure.ts
rename to examples/azure/chat.ts
index 5fe1718fa..46df820f8 100755
--- examples/azure.ts
+++ examples/azure/chat.ts
@@ -2,6 +2,7 @@
import { AzureOpenAI } from 'openai';
import { getBearerTokenProvider, DefaultAzureCredential } from '@azure/identity';
+import 'dotenv/config';
// Corresponds to your Model deployment within your OpenAI resource, e.g. gpt-4-1106-preview
// Navigate to the Azure OpenAI Studio to deploy a model.
@@ -13,7 +14,7 @@ const azureADTokenProvider = getBearerTokenProvider(credential, scope);
// Make sure to set AZURE_OPENAI_ENDPOINT with the endpoint of your Azure resource.
// You can find it in the Azure Portal.
-const openai = new AzureOpenAI({ azureADTokenProvider });
+const openai = new AzureOpenAI({ azureADTokenProvider, apiVersion: '2024-10-01-preview' });
async function main() {
console.log('Non-streaming:');
diff --git a/examples/azure/realtime/websocket.ts b/examples/azure/realtime/websocket.ts
new file mode 100644
index 000000000..bec74e654
--- /dev/null
+++ examples/azure/realtime/websocket.ts
@@ -0,0 +1,60 @@
+import { OpenAIRealtimeWebSocket } from 'openai/beta/realtime/websocket';
+import { AzureOpenAI } from 'openai';
+import { DefaultAzureCredential, getBearerTokenProvider } from '@azure/identity';
+import 'dotenv/config';
+
+async function main() {
+ const cred = new DefaultAzureCredential();
+ const scope = 'https://cognitiveservices.azure.com/.default';
+ const deploymentName = 'gpt-4o-realtime-preview-1001';
+ const azureADTokenProvider = getBearerTokenProvider(cred, scope);
+ const client = new AzureOpenAI({
+ azureADTokenProvider,
+ apiVersion: '2024-10-01-preview',
+ deployment: deploymentName,
+ });
+ const rt = await OpenAIRealtimeWebSocket.azure(client);
+
+ // access the underlying `ws.WebSocket` instance
+ rt.socket.addEventListener('open', () => {
+ console.log('Connection opened!');
+ rt.send({
+ type: 'session.update',
+ session: {
+ modalities: ['text'],
+ model: 'gpt-4o-realtime-preview',
+ },
+ });
+
+ rt.send({
+ type: 'conversation.item.create',
+ item: {
+ type: 'message',
+ role: 'user',
+ content: [{ type: 'input_text', text: 'Say a couple paragraphs!' }],
+ },
+ });
+
+ rt.send({ type: 'response.create' });
+ });
+
+ rt.on('error', (err) => {
+ // in a real world scenario this should be logged somewhere as you
+ // likely want to continue procesing events regardless of any errors
+ throw err;
+ });
+
+ rt.on('session.created', (event) => {
+ console.log('session created!', event.session);
+ console.log();
+ });
+
+ rt.on('response.text.delta', (event) => process.stdout.write(event.delta));
+ rt.on('response.text.done', () => console.log());
+
+ rt.on('response.done', () => rt.close());
+
+ rt.socket.addEventListener('close', () => console.log('\nConnection closed!'));
+}
+
+main();
diff --git a/examples/azure/realtime/ws.ts b/examples/azure/realtime/ws.ts
new file mode 100644
index 000000000..6ab7b742a
--- /dev/null
+++ examples/azure/realtime/ws.ts
@@ -0,0 +1,60 @@
+import { DefaultAzureCredential, getBearerTokenProvider } from '@azure/identity';
+import { OpenAIRealtimeWS } from 'openai/beta/realtime/ws';
+import { AzureOpenAI } from 'openai';
+import 'dotenv/config';
+
+async function main() {
+ const cred = new DefaultAzureCredential();
+ const scope = 'https://cognitiveservices.azure.com/.default';
+ const deploymentName = 'gpt-4o-realtime-preview-1001';
+ const azureADTokenProvider = getBearerTokenProvider(cred, scope);
+ const client = new AzureOpenAI({
+ azureADTokenProvider,
+ apiVersion: '2024-10-01-preview',
+ deployment: deploymentName,
+ });
+ const rt = await OpenAIRealtimeWS.azure(client);
+
+ // access the underlying `ws.WebSocket` instance
+ rt.socket.on('open', () => {
+ console.log('Connection opened!');
+ rt.send({
+ type: 'session.update',
+ session: {
+ modalities: ['text'],
+ model: 'gpt-4o-realtime-preview',
+ },
+ });
+
+ rt.send({
+ type: 'conversation.item.create',
+ item: {
+ type: 'message',
+ role: 'user',
+ content: [{ type: 'input_text', text: 'Say a couple paragraphs!' }],
+ },
+ });
+
+ rt.send({ type: 'response.create' });
+ });
+
+ rt.on('error', (err) => {
+ // in a real world scenario this should be logged somewhere as you
+ // likely want to continue procesing events regardless of any errors
+ throw err;
+ });
+
+ rt.on('session.created', (event) => {
+ console.log('session created!', event.session);
+ console.log();
+ });
+
+ rt.on('response.text.delta', (event) => process.stdout.write(event.delta));
+ rt.on('response.text.done', () => console.log());
+
+ rt.on('response.done', () => rt.close());
+
+ rt.socket.on('close', () => console.log('\nConnection closed!'));
+}
+
+main();
diff --git examples/package.json examples/package.json
index b8c34ac45..70ec2c523 100644
--- examples/package.json
+++ examples/package.json
@@ -7,6 +7,7 @@
"private": true,
"dependencies": {
"@azure/identity": "^4.2.0",
+ "dotenv": "^16.4.7",
"express": "^4.18.2",
"next": "^14.1.1",
"openai": "file:..",
diff --git examples/realtime/ws.ts examples/realtime/ws.ts
index 4bbe85e5d..08c6fbcb6 100644
--- examples/realtime/ws.ts
+++ examples/realtime/ws.ts
@@ -6,13 +6,6 @@ async function main() {
// access the underlying `ws.WebSocket` instance
rt.socket.on('open', () => {
console.log('Connection opened!');
- rt.send({
- type: 'session.update',
- session: {
- modalities: ['foo'] as any,
- model: 'gpt-4o-realtime-preview',
- },
- });
rt.send({
type: 'session.update',
session: {
diff --git helpers.md helpers.md
index abf980c82..16bc1f277 100644
--- helpers.md
+++ helpers.md
@@ -49,7 +49,7 @@ if (message?.parsed) {
The `.parse()` method will also automatically parse `function` tool calls if:
-- You use the `zodFunctionTool()` helper method
+- You use the `zodFunction()` helper method
- You mark your tool schema with `"strict": True`
For example:
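The corrected `zodFunction()` helper attaches a Zod schema to a tool definition so that `.parse()` can validate and type the model's JSON tool-call arguments automatically. As a rough, dependency-free sketch of what that argument-parsing step amounts to (the `get_weather` tool, `ToolCall` shape, and `GetWeatherArgs` type are hypothetical illustrations, not SDK exports):

```typescript
// The API returns tool-call arguments as a raw JSON string; a schema-aware
// helper turns that string into validated, typed data. We mimic that here
// with a hand-written validator instead of a Zod schema.
interface ToolCall {
  name: string;
  arguments: string; // raw JSON string from the model
}

interface GetWeatherArgs {
  city: string;
}

function parseGetWeatherArgs(call: ToolCall): GetWeatherArgs {
  const parsed: unknown = JSON.parse(call.arguments);
  if (
    typeof parsed !== 'object' ||
    parsed === null ||
    typeof (parsed as { city?: unknown }).city !== 'string'
  ) {
    throw new Error(`invalid arguments for ${call.name}`);
  }
  return parsed as GetWeatherArgs;
}

const args = parseGetWeatherArgs({ name: 'get_weather', arguments: '{"city":"Oslo"}' });
console.log(args.city); // prints "Oslo"
```

With `zodFunction()` plus `"strict": True` in the schema, the SDK performs this validation for you and surfaces the result as `parsed_arguments` on the tool call.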
@@ -226,7 +226,7 @@ on in the documentation page [Message](https://platform.openai.com/docs/api-refe
```ts
.on('textCreated', (content: Text) => ...)
-.on('textDelta', (delta: RunStepDelta, snapshot: Text) => ...)
+.on('textDelta', (delta: TextDelta, snapshot: Text) => ...)
.on('textDone', (content: Text, snapshot: Message) => ...)

[Remainder of the diff truncated in this view. It also touches: jsr.json, the realtime client internals in src/beta/realtime/ (OpenAIRealtimeError, buildRealtimeURL, OpenAIRealtimeWebSocket, OpenAIRealtimeWS, a new getAzureHeaders helper), src/lib/azure, src/resources/audio/transcriptions.ts and translations.ts, src/resources/batches.ts, beta realtime sessions (SessionUpdateEvent, SessionCreateParams), beta threads (messages, runs, steps), chat completions (ChatModel, ChatCompletionChunk), src/resources/embeddings.ts, src/resources/shared.ts, and the corresponding tests under tests/api-resources/ and tests/lib/azure.test.ts.]
This PR contains the following updates:
openai 4.79.4 -> 4.83.0
Release Notes
openai/openai-node (openai)
v4.83.0
Compare Source
Full Changelog: v4.82.0...v4.83.0
Features
* send `X-Stainless-Timeout` header (#1299) (ddfc686)
Bug Fixes
v4.82.0
Compare Source
Full Changelog: v4.81.0...v4.82.0
Features
* api: add o3-mini (#1295) (378e2f7)
Bug Fixes
* examples/realtime: remove duplicate `session.update` call (#1293) (ad800b4)
* types: correct metadata type + other fixes (378e2f7)
v4.81.0
Compare Source
Full Changelog: v4.80.1...v4.81.0
Features
v4.80.1
Compare Source
Full Changelog: v4.80.0...v4.80.1
Bug Fixes
Documentation
v4.80.0
Compare Source
Full Changelog: v4.79.4...v4.80.0
Features
Configuration
📅 Schedule: Branch creation - "* 0-12 * * 3" (UTC), Automerge - At any time (no schedule defined).
🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.
♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
🔕 Ignore: Close this PR and you won't be reminded about this update again.
This PR was generated by Mend Renovate. View the repository job log.