Chat completions audio -- content part squish #317

Closed
13 changes: 13 additions & 0 deletions .dotnet/CHANGELOG.md
@@ -1,5 +1,18 @@
# Release History

## 2.2.0-beta.1 (Unreleased)

### Features added

- Chat completions now support audio input and output!
  - To request audio output using the `gpt-4o-audio-preview` model, create a `ChatAudioOptions` instance and set it on `ChatCompletionOptions.AudioOptions`.
  - Audio is always represented as a `ChatMessageContentPart`:
    - User audio input can be created via `ChatMessageContentPart.CreateAudioPart(BinaryData, ChatInputAudioFormat)` and populates the `AudioBytes` and `AudioInputFormat` properties on `ChatMessageContentPart`.
    - Response audio on the items in `Content` of a `ChatCompletion`, or in `ContentUpdate` of a `StreamingChatCompletionUpdate`, populates the `AudioBytes`, `AudioTranscript`, `AudioExpiresAt`, and `AudioCorrelationId` properties.
    - Audio referring to a previous response's output can be created via `ChatMessageContentPart.CreateAudioPart(string)` and populates the `AudioCorrelationId` property.
  - The `AssistantChatMessage(IEnumerable<ChatMessageContentPart>)` and `AssistantChatMessage(ChatCompletion)` constructors automatically infer `AudioCorrelationId`, simplifying conversation history management.
  - For more information, see the configuration sketch below and the full example in the README.
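
A minimal sketch of opting in to audio output; the `Alloy` voice and `Mp3` format shown are one valid pairing among the values defined on `ChatResponseVoice` and `ChatOutputAudioFormat`:

```csharp
ChatCompletionOptions options = new()
{
    AudioOptions = new ChatAudioOptions(ChatResponseVoice.Alloy, ChatOutputAudioFormat.Mp3),
};
```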

## 2.1.0 (2024-12-04)

### Features added
68 changes: 68 additions & 0 deletions .dotnet/README.md
@@ -18,6 +18,7 @@ It is generated from our [OpenAPI specification](https://github.com/openai/opena
- [How to use chat completions with streaming](#how-to-use-chat-completions-with-streaming)
- [How to use chat completions with tools and function calling](#how-to-use-chat-completions-with-tools-and-function-calling)
- [How to use chat completions with structured outputs](#how-to-use-chat-completions-with-structured-outputs)
- [How to use chat completions with audio](#how-to-use-chat-completions-with-audio)
- [How to generate text embeddings](#how-to-generate-text-embeddings)
- [How to generate images](#how-to-generate-images)
- [How to transcribe audio](#how-to-transcribe-audio)
@@ -354,6 +355,73 @@ foreach (JsonElement stepElement in structuredJson.RootElement.GetProperty("step
}
```

## How to use chat completions with audio

Starting with the `gpt-4o-audio-preview` model, chat completions can process audio input and output.

This example demonstrates:
1. Configuring the client with the supported `gpt-4o-audio-preview` model
1. Supplying user audio input on a chat completion request
1. Requesting model audio output from the chat completion operation
1. Retrieving audio output from a `ChatCompletion` instance
1. Using past audio output as `ChatMessage` conversation history

```csharp
// Chat audio input and output are only supported on specific models, beginning with gpt-4o-audio-preview
ChatClient client = new("gpt-4o-audio-preview", Environment.GetEnvironmentVariable("OPENAI_API_KEY"));

// Input audio is provided to a request by adding an audio content part to a user message
string audioFilePath = Path.Combine("Assets", "whats_the_weather_pcm16_24khz_mono.wav");
byte[] audioFileRawBytes = File.ReadAllBytes(audioFilePath);
BinaryData audioData = BinaryData.FromBytes(audioFileRawBytes);
List<ChatMessage> messages =
[
    new UserChatMessage(ChatMessageContentPart.CreateAudioPart(audioData, ChatInputAudioFormat.Wav)),
];

// Output audio is requested by configuring AudioOptions on ChatCompletionOptions
ChatCompletionOptions options = new()
{
    AudioOptions = new(ChatResponseVoice.Alloy, ChatOutputAudioFormat.Mp3),
};

ChatCompletion completion = client.CompleteChat(messages, options);

void PrintAudioContent()
{
    foreach (ChatMessageContentPart contentPart in completion.Content)
    {
        if (contentPart.AudioCorrelationId is not null)
        {
            Console.WriteLine($"Response audio transcript: {contentPart.AudioTranscript}");

            string outputFilePath = $"{contentPart.AudioCorrelationId}.mp3";
            using (FileStream outputFileStream = File.OpenWrite(outputFilePath))
            {
                outputFileStream.Write(contentPart.AudioBytes);
            }
            Console.WriteLine($"Response audio written to file: {outputFilePath}");
            Console.WriteLine($"Valid on followup requests until: {contentPart.AudioExpiresAt}");
        }
    }
}

PrintAudioContent();

// To refer to past audio output, create an assistant message from the earlier ChatCompletion, use the earlier
// response content part, or use ChatMessageContentPart.CreateAudioPart(string) to manually instantiate a part.
messages.Add(new AssistantChatMessage(completion));
messages.Add("Can you say that like a pirate?");

completion = client.CompleteChat(messages, options);

PrintAudioContent();
```
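
When only the correlation ID of a prior response was retained, the audio reference can be reconstructed manually. In this sketch, `priorAudioCorrelationId` is a placeholder for an `AudioCorrelationId` value captured from an earlier completion:

```csharp
// priorAudioCorrelationId is assumed to hold an AudioCorrelationId captured from an earlier response
List<ChatMessageContentPart> referenceParts =
[
    ChatMessageContentPart.CreateAudioPart(priorAudioCorrelationId),
];
messages.Add(new AssistantChatMessage(referenceParts));
```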

Streaming is virtually identical: content parts in `StreamingChatCompletionUpdate.ContentUpdate` deliver incremental chunks of audio via `AudioBytes` and `AudioTranscript`, as in the sketch below.
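
A minimal streaming sketch, reusing `client`, `messages`, and `options` from the example above via `ChatClient.CompleteChatStreaming`; the accumulation buffers are illustrative:

```csharp
using MemoryStream audioStream = new();
StringBuilder transcriptBuilder = new();

foreach (StreamingChatCompletionUpdate update in client.CompleteChatStreaming(messages, options))
{
    foreach (ChatMessageContentPart contentPart in update.ContentUpdate)
    {
        if (contentPart.AudioBytes is not null)
        {
            // Audio arrives as sequential chunks; append them in order
            audioStream.Write(contentPart.AudioBytes);
        }
        if (contentPart.AudioTranscript is not null)
        {
            transcriptBuilder.Append(contentPart.AudioTranscript);
        }
    }
}

Console.WriteLine($"Streamed transcript: {transcriptBuilder}");
File.WriteAllBytes("streamed_response.mp3", audioStream.ToArray());
```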

## How to generate text embeddings

In this example, you want to create a trip-planning website that allows customers to write a prompt describing the kind of hotel that they are looking for and then offers hotel recommendations that closely match this description. To achieve this, it is possible to use text embeddings to measure the relatedness of text strings. In summary, you can get embeddings of the hotel descriptions, store them in a vector database, and use them to build a search index that you can query using the embedding of a given customer's prompt.
71 changes: 68 additions & 3 deletions .dotnet/api/OpenAI.netstandard2.0.cs
@@ -1146,6 +1146,13 @@ public class AssistantChatMessage : ChatMessage, IJsonModel<AssistantChatMessage
protected override ChatMessage PersistableModelCreateCore(BinaryData data, ModelReaderWriterOptions options);
protected override BinaryData PersistableModelWriteCore(ModelReaderWriterOptions options);
}
public class ChatAudioOptions : IJsonModel<ChatAudioOptions>, IPersistableModel<ChatAudioOptions> {
public ChatAudioOptions(ChatResponseVoice responseVoice, ChatOutputAudioFormat outputAudioFormat);
public ChatOutputAudioFormat OutputAudioFormat { get; set; }
public ChatResponseVoice ResponseVoice { get; set; }
public static explicit operator ChatAudioOptions(ClientResult result);
public static implicit operator BinaryContent(ChatAudioOptions chatAudioOptions);
}
public class ChatClient {
protected ChatClient();
protected internal ChatClient(ClientPipeline pipeline, string model, OpenAIClientOptions options);
@@ -1186,6 +1193,7 @@ public class ChatCompletion : IJsonModel<ChatCompletion>, IPersistableModel<Chat
}
public class ChatCompletionOptions : IJsonModel<ChatCompletionOptions>, IPersistableModel<ChatCompletionOptions> {
public bool? AllowParallelToolCalls { get; set; }
public ChatAudioOptions AudioOptions { get; set; }
public string EndUserId { get; set; }
public float? FrequencyPenalty { get; set; }
[Obsolete("This property is obsolete. Please use ToolChoice instead.")]
@@ -1256,6 +1264,20 @@ public class ChatFunctionChoice : IJsonModel<ChatFunctionChoice>, IPersistableMo
public static bool operator !=(ChatImageDetailLevel left, ChatImageDetailLevel right);
public override readonly string ToString();
}
public readonly partial struct ChatInputAudioFormat : IEquatable<ChatInputAudioFormat> {
public ChatInputAudioFormat(string value);
public static ChatInputAudioFormat Mp3 { get; }
public static ChatInputAudioFormat Wav { get; }
public readonly bool Equals(ChatInputAudioFormat other);
[EditorBrowsable(EditorBrowsableState.Never)]
public override readonly bool Equals(object obj);
[EditorBrowsable(EditorBrowsableState.Never)]
public override readonly int GetHashCode();
public static bool operator ==(ChatInputAudioFormat left, ChatInputAudioFormat right);
public static implicit operator ChatInputAudioFormat(string value);
public static bool operator !=(ChatInputAudioFormat left, ChatInputAudioFormat right);
public override readonly string ToString();
}
public class ChatInputTokenUsageDetails : IJsonModel<ChatInputTokenUsageDetails>, IPersistableModel<ChatInputTokenUsageDetails> {
public int AudioTokenCount { get; }
public int CachedTokenCount { get; }
@@ -1292,13 +1314,20 @@ public class ChatMessageContent : ObjectModel.Collection<ChatMessageContentPart>
public ChatMessageContent(string content);
}
public class ChatMessageContentPart : IJsonModel<ChatMessageContentPart>, IPersistableModel<ChatMessageContentPart> {
public BinaryData AudioBytes { get; }
public string AudioCorrelationId { get; }
public DateTimeOffset? AudioExpiresAt { get; }
public ChatInputAudioFormat? AudioInputFormat { get; }
public string AudioTranscript { get; }
public BinaryData ImageBytes { get; }
public string ImageBytesMediaType { get; }
public ChatImageDetailLevel? ImageDetailLevel { get; }
public Uri ImageUri { get; }
public ChatMessageContentPartKind Kind { get; }
public string Refusal { get; }
public string Text { get; }
public static ChatMessageContentPart CreateAudioPart(BinaryData audioBytes, ChatInputAudioFormat audioFormat);
public static ChatMessageContentPart CreateAudioPart(string audioCorrelationId);
public static ChatMessageContentPart CreateImagePart(BinaryData imageBytes, string imageBytesMediaType, ChatImageDetailLevel? imageDetailLevel = null);
public static ChatMessageContentPart CreateImagePart(Uri imageUri, ChatImageDetailLevel? imageDetailLevel = null);
public static ChatMessageContentPart CreateRefusalPart(string refusal);
@@ -1310,7 +1339,8 @@ public class ChatMessageContentPart : IJsonModel<ChatMessageContentPart>, IPersi
public enum ChatMessageContentPartKind {
Text = 0,
Refusal = 1,
Image = 2
Image = 2,
Audio = 3
}
public enum ChatMessageRole {
System = 0,
@@ -1319,6 +1349,23 @@ public enum ChatMessageRole {
Tool = 3,
Function = 4
}
public readonly partial struct ChatOutputAudioFormat : IEquatable<ChatOutputAudioFormat> {
public ChatOutputAudioFormat(string value);
public static ChatOutputAudioFormat Flac { get; }
public static ChatOutputAudioFormat Mp3 { get; }
public static ChatOutputAudioFormat Opus { get; }
public static ChatOutputAudioFormat Pcm16 { get; }
public static ChatOutputAudioFormat Wav { get; }
public readonly bool Equals(ChatOutputAudioFormat other);
[EditorBrowsable(EditorBrowsableState.Never)]
public override readonly bool Equals(object obj);
[EditorBrowsable(EditorBrowsableState.Never)]
public override readonly int GetHashCode();
public static bool operator ==(ChatOutputAudioFormat left, ChatOutputAudioFormat right);
public static implicit operator ChatOutputAudioFormat(string value);
public static bool operator !=(ChatOutputAudioFormat left, ChatOutputAudioFormat right);
public override readonly string ToString();
}
public class ChatOutputTokenUsageDetails : IJsonModel<ChatOutputTokenUsageDetails>, IPersistableModel<ChatOutputTokenUsageDetails> {
public int AudioTokenCount { get; }
public int ReasoningTokenCount { get; }
Expand All @@ -1332,6 +1379,24 @@ public class ChatResponseFormat : IJsonModel<ChatResponseFormat>, IPersistableMo
public static explicit operator ChatResponseFormat(ClientResult result);
public static implicit operator BinaryContent(ChatResponseFormat chatResponseFormat);
}
public readonly partial struct ChatResponseVoice : IEquatable<ChatResponseVoice> {
public ChatResponseVoice(string value);
public static ChatResponseVoice Alloy { get; }
public static ChatResponseVoice Echo { get; }
public static ChatResponseVoice Fable { get; }
public static ChatResponseVoice Nova { get; }
public static ChatResponseVoice Onyx { get; }
public static ChatResponseVoice Shimmer { get; }
public readonly bool Equals(ChatResponseVoice other);
[EditorBrowsable(EditorBrowsableState.Never)]
public override readonly bool Equals(object obj);
[EditorBrowsable(EditorBrowsableState.Never)]
public override readonly int GetHashCode();
public static bool operator ==(ChatResponseVoice left, ChatResponseVoice right);
public static implicit operator ChatResponseVoice(string value);
public static bool operator !=(ChatResponseVoice left, ChatResponseVoice right);
public override readonly string ToString();
}
public class ChatTokenLogProbabilityDetails : IJsonModel<ChatTokenLogProbabilityDetails>, IPersistableModel<ChatTokenLogProbabilityDetails> {
public float LogProbability { get; }
public string Token { get; }
@@ -1401,13 +1466,13 @@ public class FunctionChatMessage : ChatMessage, IJsonModel<FunctionChatMessage>,
protected override BinaryData PersistableModelWriteCore(ModelReaderWriterOptions options);
}
public static class OpenAIChatModelFactory {
public static ChatCompletion ChatCompletion(string id = null, ChatFinishReason finishReason = ChatFinishReason.Stop, ChatMessageContent content = null, string refusal = null, IEnumerable<ChatToolCall> toolCalls = null, ChatMessageRole role = ChatMessageRole.System, ChatFunctionCall functionCall = null, IEnumerable<ChatTokenLogProbabilityDetails> contentTokenLogProbabilities = null, IEnumerable<ChatTokenLogProbabilityDetails> refusalTokenLogProbabilities = null, DateTimeOffset createdAt = default, string model = null, string systemFingerprint = null, ChatTokenUsage usage = null);
public static ChatCompletion ChatCompletion(string id = null, ChatFinishReason finishReason = ChatFinishReason.Stop, ChatMessageContent content = null, string refusal = null, IEnumerable<ChatToolCall> toolCalls = null, ChatMessageRole role = ChatMessageRole.System, ChatFunctionCall functionCall = null, IEnumerable<ChatTokenLogProbabilityDetails> contentTokenLogProbabilities = null, IEnumerable<ChatTokenLogProbabilityDetails> refusalTokenLogProbabilities = null, DateTimeOffset createdAt = default, string model = null, string systemFingerprint = null, ChatTokenUsage usage = null, BinaryData audioBytes = null, string audioCorrelationId = null, string audioTranscript = null, DateTimeOffset? audioExpiresAt = null);
public static ChatInputTokenUsageDetails ChatInputTokenUsageDetails(int audioTokenCount = 0, int cachedTokenCount = 0);
public static ChatOutputTokenUsageDetails ChatOutputTokenUsageDetails(int reasoningTokenCount = 0, int audioTokenCount = 0);
public static ChatTokenLogProbabilityDetails ChatTokenLogProbabilityDetails(string token = null, float logProbability = 0, ReadOnlyMemory<byte>? utf8Bytes = null, IEnumerable<ChatTokenTopLogProbabilityDetails> topLogProbabilities = null);
public static ChatTokenTopLogProbabilityDetails ChatTokenTopLogProbabilityDetails(string token = null, float logProbability = 0, ReadOnlyMemory<byte>? utf8Bytes = null);
public static ChatTokenUsage ChatTokenUsage(int outputTokenCount = 0, int inputTokenCount = 0, int totalTokenCount = 0, ChatOutputTokenUsageDetails outputTokenDetails = null, ChatInputTokenUsageDetails inputTokenDetails = null);
public static StreamingChatCompletionUpdate StreamingChatCompletionUpdate(string completionId = null, ChatMessageContent contentUpdate = null, StreamingChatFunctionCallUpdate functionCallUpdate = null, IEnumerable<StreamingChatToolCallUpdate> toolCallUpdates = null, ChatMessageRole? role = null, string refusalUpdate = null, IEnumerable<ChatTokenLogProbabilityDetails> contentTokenLogProbabilities = null, IEnumerable<ChatTokenLogProbabilityDetails> refusalTokenLogProbabilities = null, ChatFinishReason? finishReason = null, DateTimeOffset createdAt = default, string model = null, string systemFingerprint = null, ChatTokenUsage usage = null);
public static StreamingChatCompletionUpdate StreamingChatCompletionUpdate(string completionId = null, ChatMessageContent contentUpdate = null, StreamingChatFunctionCallUpdate functionCallUpdate = null, IEnumerable<StreamingChatToolCallUpdate> toolCallUpdates = null, ChatMessageRole? role = null, string refusalUpdate = null, IEnumerable<ChatTokenLogProbabilityDetails> contentTokenLogProbabilities = null, IEnumerable<ChatTokenLogProbabilityDetails> refusalTokenLogProbabilities = null, ChatFinishReason? finishReason = null, DateTimeOffset createdAt = default, string model = null, string systemFingerprint = null, ChatTokenUsage usage = null, string audioCorrelationId = null, string audioTranscript = null, BinaryData audioBytes = null, DateTimeOffset? audioExpiresAt = null);
[Obsolete("This class is obsolete. Please use StreamingChatToolCallUpdate instead.")]
public static StreamingChatFunctionCallUpdate StreamingChatFunctionCallUpdate(string functionName = null, BinaryData functionArgumentsUpdate = null);
public static StreamingChatToolCallUpdate StreamingChatToolCallUpdate(int index = 0, string toolCallId = null, ChatToolCallKind kind = ChatToolCallKind.Function, string functionName = null, BinaryData functionArgumentsUpdate = null);
65 changes: 65 additions & 0 deletions .dotnet/examples/Chat/Example09_ChatWithAudio.cs
@@ -0,0 +1,65 @@
using NUnit.Framework;
using OpenAI.Chat;
using System;
using System.Collections.Generic;
using System.IO;

namespace OpenAI.Examples;

public partial class ChatExamples
{
    [Test]
    public void Example09_ChatWithAudio()
    {
        // Chat audio input and output are only supported on specific models, beginning with gpt-4o-audio-preview
        ChatClient client = new("gpt-4o-audio-preview", Environment.GetEnvironmentVariable("OPENAI_API_KEY"));

        // Input audio is provided to a request by adding an audio content part to a user message
        string audioFilePath = Path.Combine("Assets", "whats_the_weather_pcm16_24khz_mono.wav");
        byte[] audioFileRawBytes = File.ReadAllBytes(audioFilePath);
        BinaryData audioData = BinaryData.FromBytes(audioFileRawBytes);
        List<ChatMessage> messages =
        [
            new UserChatMessage(ChatMessageContentPart.CreateAudioPart(audioData, ChatInputAudioFormat.Wav)),
        ];

        // Output audio is requested by configuring AudioOptions on ChatCompletionOptions
        ChatCompletionOptions options = new()
        {
            AudioOptions = new(ChatResponseVoice.Alloy, ChatOutputAudioFormat.Mp3),
        };

        ChatCompletion completion = client.CompleteChat(messages, options);

        void PrintAudioContent()
        {
            foreach (ChatMessageContentPart contentPart in completion.Content)
            {
                if (contentPart.AudioCorrelationId is not null)
                {
                    Console.WriteLine($"Response audio transcript: {contentPart.AudioTranscript}");

                    string outputFilePath = $"{contentPart.AudioCorrelationId}.mp3";
                    using (FileStream outputFileStream = File.OpenWrite(outputFilePath))
                    {
                        outputFileStream.Write(contentPart.AudioBytes);
                    }
                    Console.WriteLine($"Response audio written to file: {outputFilePath}");
                    Console.WriteLine($"Valid on followup requests until: {contentPart.AudioExpiresAt}");
                }
            }
        }

        PrintAudioContent();

        // To refer to past audio output, create an assistant message from the earlier ChatCompletion, use the earlier
        // response content part, or use ChatMessageContentPart.CreateAudioPart(string) to manually instantiate a part.
        messages.Add(new AssistantChatMessage(completion));
        messages.Add("Can you say that like a pirate?");

        completion = client.CompleteChat(messages, options);

        PrintAudioContent();
    }
}