Update to .NET 8.0 #134. Add support for new features #129
marcominerva committed Dec 11, 2023
1 parent da88220 commit 5bfe84c
Showing 45 changed files with 665 additions and 166 deletions.
120 changes: 103 additions & 17 deletions README.md
@@ -33,6 +33,11 @@ builder.Services.AddChatGpt(options =>
options.DefaultEmbeddingModel = "text-embedding-ada-002";
options.MessageLimit = 16; // Default: 10
options.MessageExpiration = TimeSpan.FromMinutes(5); // Default: 1 hour
options.DefaultParameters = new ChatGptParameters
{
MaxTokens = 800,
Temperature = 0.7
};
});
```
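
After registration, **IChatGptClient** can be resolved via dependency injection. A minimal sketch (the `ChatService` class and the question are just illustrative):

```csharp
public class ChatService
{
    private readonly IChatGptClient chatGptClient;

    // IChatGptClient is registered by AddChatGpt and injected here.
    public ChatService(IChatGptClient chatGptClient)
    {
        this.chatGptClient = chatGptClient;
    }

    public async Task<string?> AskAsync(Guid conversationId, string message)
    {
        var response = await chatGptClient.AskAsync(conversationId, message);
        return response.GetContent();
    }
}
```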

@@ -65,7 +70,15 @@ Even if it is not strictly necessary for chat conversation, the library suppor

##### OpenAI

Currently available models are: _gpt-3.5-turbo_, _gpt-3.5-turbo-16k_, _gpt-4_ and _gpt-4-32k_. They have fixed names, available in the [OpenAIChatGptModels.cs file](https://github.com/marcominerva/ChatGptNet/blob/master/src/ChatGptNet/Models/OpenAIChatGptModels.cs).
Currently available models are:
- _gpt-3.5-turbo_
- _gpt-3.5-turbo-16k_
- _gpt-4_
- _gpt-4-32k_
- _gpt-4-1106-preview_
- _gpt-4-vision-preview_

They have fixed names, available in the [OpenAIChatGptModels.cs file](https://github.com/marcominerva/ChatGptNet/blob/master/src/ChatGptNet/Models/OpenAIChatGptModels.cs).

##### Azure OpenAI Service

@@ -136,14 +149,16 @@ The configuration can be automatically read from [IConfiguration](https://learn.
"DefaultEmbeddingModel": "text-embedding-ada-002", // Optional, set it if you want to use embedding
"MessageLimit": 20,
"MessageExpiration": "00:30:00",
"ThrowExceptionOnError": true
"ThrowExceptionOnError": true // Optional, default: true
//"User": "UserName",
//"DefaultParameters": {
// "Temperature": 0.8,
// "TopP": 1,
// "MaxTokens": 500,
// "PresencePenalty": 0,
// "FrequencyPenalty": 0
// "FrequencyPenalty": 0,
// "ResponseFormat": { "Type": "text" }, // Allowed values for Type: text (default) or json_object
// "Seed": 42 // Optional (any integer value)
//}
}
```
@@ -216,6 +231,49 @@ var content = response.GetContent();
> **Note**
If the response has been filtered by the content filtering system, **GetContent** will return *null*. So, you should always check the `response.IsContentFiltered` property before trying to access the actual content.
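
For example, a minimal sketch of this check:

```csharp
var content = response.GetContent();

if (response.IsContentFiltered)
{
    // GetContent() returns null when the completion has been blocked
    // by the content filtering system.
    Console.WriteLine("The response was filtered by the content filtering system.");
}
else
{
    Console.WriteLine(content);
}
```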

#### Using parameters

Using configuration, it is possible to set default parameters for chat completion. However, we can also specify parameters for each request, using the **AskAsync** or **AskStreamAsync** overloads that accept a [ChatGptParameters](https://github.com/marcominerva/ChatGptNet/blob/master/src/ChatGptNet/Models/ChatGptParameters.cs) object:

```csharp
var response = await chatGptClient.AskAsync(conversationId, message, new ChatGptParameters
{
MaxTokens = 150,
Temperature = 0.7
});
```

We don't need to specify all the parameters, only the ones we want to override; the others will be taken from the default configuration.

##### Seed and system fingerprint

ChatGPT is known to be non-deterministic: the same input can produce different outputs. To try to control this behavior, we can use the _Temperature_ and _TopP_ parameters. For example, setting _Temperature_ to values near 0 makes the model more deterministic, while values near 1 make it more creative.
However, this is not always enough to get the same output for the same input. To address this issue, OpenAI introduced the **Seed** parameter. If specified, the model should sample deterministically, so that repeated requests with the same seed and parameters return the same result. Nevertheless, determinism is not guaranteed even in this case, and you should refer to the _SystemFingerprint_ response parameter to monitor changes in the backend. Changes in this value mean that the backend configuration has changed, which might impact determinism.

As always, the _Seed_ property can be specified in the default configuration or in the **AskAsync** or **AskStreamAsync** overloads that accept a [ChatGptParameters](https://github.com/marcominerva/ChatGptNet/blob/master/src/ChatGptNet/Models/ChatGptParameters.cs).

> **Note**
_Seed_ and _SystemFingerprint_ are only supported by the most recent models, such as _gpt-4-1106-preview_.
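
For example, a minimal sketch that requests deterministic sampling (assuming the response exposes the fingerprint through a _SystemFingerprint_ property, as the name suggests):

```csharp
var response = await chatGptClient.AskAsync(conversationId, message, new ChatGptParameters
{
    Seed = 42,
    Temperature = 0
});

// Assumed property name: if this value changes between calls, the backend
// configuration has changed and determinism across those calls is no longer expected.
Console.WriteLine(response.SystemFingerprint);
```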

##### Response format

If you want to force the response to be in JSON format, you can use the _ResponseFormat_ parameter:

```csharp
var response = await chatGptClient.AskAsync(conversationId, message, new ChatGptParameters
{
ResponseFormat = ChatGptResponseFormat.Json,
});
```

In this way, the response will always be valid JSON. Note that you must also instruct the model to produce JSON via a system or user message; if you don't, the model will return an error.
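
For example, a minimal sketch that pairs _ResponseFormat_ with an explicit JSON instruction in the system message (the message text is just illustrative):

```csharp
// Instructs the model to reply in JSON, as required when using ChatGptResponseFormat.Json.
await chatGptClient.SetupAsync(conversationId,
    "You are a helpful assistant. Always answer using a valid JSON object.");

var response = await chatGptClient.AskAsync(conversationId, message, new ChatGptParameters
{
    ResponseFormat = ChatGptResponseFormat.Json
});
```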

As always, the _ResponseFormat_ property can be specified in the default configuration or in the **AskAsync** or **AskStreamAsync** overloads that accept a [ChatGptParameters](https://github.com/marcominerva/ChatGptNet/blob/master/src/ChatGptNet/Models/ChatGptParameters.cs).

> **Note**
_ResponseFormat_ is only supported by the most recent models, such as _gpt-4-1106-preview_.

### Handling a conversation

The **AskAsync** and **AskStreamAsync** (see below) methods provide overloads that require a *conversationId* parameter. If we pass an empty value, a random one is generated and returned.
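
For example, a minimal sketch that reuses the same id across related questions (the questions are just illustrative):

```csharp
// Using an explicit id; alternatively, pass an empty value and use the generated one.
var conversationId = Guid.NewGuid();

var first = await chatGptClient.AskAsync(conversationId, "What is the capital of Italy?");
Console.WriteLine(first.GetContent());

// The follow-up question is answered in the context of the previous exchange.
var followUp = await chatGptClient.AskAsync(conversationId, "How many people live there?");
Console.WriteLine(followUp.GetContent());
```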
@@ -286,7 +344,6 @@ app.MapGet("/api/chat/stream", (Guid? conversationId, string message, IChatGptCl
> **Note**
If the response has been filtered by the content filtering system, the **AsDeltas** method in the _foreach_ will return *null* strings.
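
For example, a minimal null-safe loop over the streaming response (assuming the **AsDeltas** extension method mentioned above):

```csharp
var responseStream = chatGptClient.AskStreamAsync(conversationId, message);

await foreach (var delta in responseStream.AsDeltas())
{
    // Deltas can be null when the content has been filtered: skip them.
    if (delta is null)
    {
        continue;
    }

    Console.Write(delta);
}
```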

The library is 100% compatible also with Blazor WebAssembly applications:

![](https://raw.githubusercontent.com/marcominerva/ChatGptNet/master/assets/ChatGptBlazor.WasmStreaming.gif)
@@ -323,7 +380,7 @@ await chatGptClient.DeleteConversationAsync(conversationId, preserveSetup: false

The _preserveSetup_ argument allows you to decide whether to also keep the _system_ message that has been set with the **SetupAsync** method (default: _false_).

## Function calling
## Tool and Function calling

With function calling, we can describe functions and have the model intelligently choose to output a JSON object containing arguments to call those functions. This is a new way to more reliably connect GPT's capabilities with external tools and APIs.

@@ -382,21 +439,21 @@ var functions = new List<ChatGptFunction>
}
};

var functionParameters = new ChatGptFunctionParameters
var toolParameters = new ChatGptToolParameters
{
FunctionCall = ChatGptFunctionCalls.Auto, // This is the default if functions are present.
FunctionCall = ChatGptToolChoices.Auto, // This is the default if functions are present.
Functions = functions
};

var response = await chatGptClient.AskAsync("What is the weather like in Taggia?", functionParameters);
var response = await chatGptClient.AskAsync("What is the weather like in Taggia?", toolParameters);
```

We can pass an arbitrary number of functions, each one with a name, a description and a JSON schema describing the function parameters, following the [JSON Schema references](https://json-schema.org/understanding-json-schema). Under the hood, functions are injected into the system message in a syntax the model has been trained on. This means functions count against the model's context limit and are billed as input tokens.
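
For example, a minimal sketch of a single function definition (assuming the _Parameters_ property accepts a parsed JSON Schema document):

```csharp
using System.Text.Json;

var getCurrentWeather = new ChatGptFunction
{
    Name = "GetCurrentWeather",
    Description = "Gets the current weather in a given location",
    // Assumption: Parameters holds the JSON Schema describing the function arguments.
    Parameters = JsonDocument.Parse("""
        {
            "type": "object",
            "properties": {
                "location": { "type": "string", "description": "The city, e.g. Taggia" },
                "format": { "type": "string", "enum": ["celsius", "fahrenheit"] }
            },
            "required": ["location", "format"]
        }
        """)
};
```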

The response object returned by the **AskAsync** method provides a method to check whether the model has selected a function call:

```csharp
if (response.IsFunctionCall)
if (response.ContainsFunctionCalls())
{
Console.WriteLine("I have identified a function to call:");

@@ -409,21 +466,50 @@

This code will print something like this:

I have identified a function to call:
GetCurrentWeather
{
"location": "Taggia",
"format": "celsius"
}
```
I have identified a function to call:
GetCurrentWeather
{
"location": "Taggia",
"format": "celsius"
}
```

Note that the API will not actually execute any function calls. It is up to developers to execute function calls using model outputs.
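
For example, a minimal dispatch sketch that routes the selected function to a local implementation (**GetWeatherAsync** is the same illustrative method used below):

```csharp
var functionCall = response.GetFunctionCall()!;

// Routes the function selected by the model to a local implementation.
var functionResponse = functionCall.Name switch
{
    "GetCurrentWeather" => await GetWeatherAsync(functionCall.Arguments),
    _ => throw new NotSupportedException($"Unknown function '{functionCall.Name}'.")
};
```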

After the actual execution, we need to call the **AddFunctionResponseAsync** method on the **ChatGptClient** to add the response to the conversation history, just like a standard message, so that it will be automatically used for chat completion:
After the actual execution, we need to call the **AddToolResponseAsync** method on the **ChatGptClient** to add the response to the conversation history, just like a standard message, so that it will be automatically used for chat completion:

```csharp
// Calls the remote function API.
var functionResponse = await GetWeatherAsync(functionCall.Arguments);
await chatGptClient.AddFunctionResponseAsync(conversationId, functionCall.Name, functionResponse);
await chatGptClient.AddToolResponseAsync(conversationId, functionCall, functionResponse);
```

Newer models like _gpt-4-1106-preview_ support a more general approach to functions: **Tool calling**. When you send a request, you can specify a list of tools the model may call. Currently, only functions are supported, but in future releases other types of tools will be available.

To use Tool calling instead of direct Function calling, you need to set the _ToolChoice_ and _Tools_ properties in the **ChatGptToolParameters** object (instead of _FunctionCall_ and _Functions_, as in the previous example):

```csharp
var toolParameters = new ChatGptToolParameters
{
ToolChoice = ChatGptToolChoices.Auto, // This is the default if functions are present.
Tools = functions.ToTools()
};
```

The **ToTools** extension method is used to convert a list of [ChatGptFunction](https://github.com/marcominerva/ChatGptNet/blob/master/src/ChatGptNet/Models/ChatGptFunction.cs) to a list of tools.

If you use this new approach, you still need to check whether the model has selected a tool call, using the same approach shown before.
Then, after the actual execution of the function, you have to call the **AddToolResponseAsync** method, but in this case you need to specify the tool (not the function) to which the response refers:

```csharp
var tool = response.GetToolCalls()!.First();
var functionCall = response.GetFunctionCall()!;

// Calls the remote function API.
var functionResponse = await GetWeatherAsync(functionCall.Arguments);

await chatGptClient.AddToolResponseAsync(conversationId, tool, functionResponse);
```

Check out the [Function calling sample](https://github.com/marcominerva/ChatGptNet/blob/master/samples/ChatGptFunctionCallingConsole/Application.cs#L18) for a complete implementation of this workflow.
5 changes: 2 additions & 3 deletions samples/ChatGptApi/ChatGptApi.csproj
@@ -1,14 +1,13 @@
<Project Sdk="Microsoft.NET.Sdk.Web">

<PropertyGroup>
<TargetFramework>net7.0</TargetFramework>
<TargetFramework>net8.0</TargetFramework>
<Nullable>enable</Nullable>
<ImplicitUsings>enable</ImplicitUsings>
</PropertyGroup>

<ItemGroup>
<PackageReference Include="Microsoft.AspNetCore.OpenApi" Version="7.0.12" />
<PackageReference Include="MinimalHelpers.OpenApi" Version="1.0.4" />
<PackageReference Include="Microsoft.AspNetCore.OpenApi" Version="8.0.0" />
<PackageReference Include="Swashbuckle.AspNetCore" Version="6.5.0" />
</ItemGroup>

36 changes: 2 additions & 34 deletions samples/ChatGptApi/Program.cs
@@ -2,8 +2,6 @@
using System.Text.Json.Serialization;
using ChatGptNet;
using ChatGptNet.Extensions;
using Microsoft.AspNetCore.Diagnostics;
using MinimalHelpers.OpenApi;

var builder = WebApplication.CreateBuilder(args);

@@ -30,10 +28,7 @@
builder.Services.AddChatGpt(builder.Configuration);

builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen(options =>
{
options.AddMissingSchemas();
});
builder.Services.AddSwaggerGen();

builder.Services.AddProblemDetails(options =>
{
@@ -48,34 +43,7 @@
// Configures the HTTP request pipeline.
app.UseHttpsRedirection();

if (!app.Environment.IsDevelopment())
{
// Error handling
app.UseExceptionHandler(new ExceptionHandlerOptions
{
AllowStatusCode404Response = true,
ExceptionHandler = async (HttpContext context) =>
{
var problemDetailsService = context.RequestServices.GetRequiredService<IProblemDetailsService>();
var exceptionHandlerFeature = context.Features.Get<IExceptionHandlerFeature>();
var error = exceptionHandlerFeature?.Error;

// Writes as JSON problem details
await problemDetailsService.WriteAsync(new()
{
HttpContext = context,
AdditionalMetadata = exceptionHandlerFeature?.Endpoint?.Metadata,
ProblemDetails =
{
Status = context.Response.StatusCode,
Title = error?.GetType().FullName ?? "An error occurred while processing your request",
Detail = error?.Message
}
});
}
});
}

app.UseExceptionHandler();
app.UseStatusCodePages();

app.UseSwagger();
8 changes: 5 additions & 3 deletions samples/ChatGptApi/appsettings.json
@@ -2,7 +2,7 @@
"ChatGPT": {
"Provider": "OpenAI", // Optional. Allowed values: OpenAI (default) or Azure
"ApiKey": "", // Required
//"Organization": "", // Optional, used only by OpenAI
//"Organization": "", // Optional, used only by OpenAI
"ResourceName": "", // Required when using Azure OpenAI Service
"ApiVersion": "2023-12-01-preview", // Optional, used only by Azure OpenAI Service (default: 2023-12-01-preview)
"AuthenticationType": "ApiKey", // Optional, used only by Azure OpenAI Service. Allowed values: ApiKey (default) or ActiveDirectory
@@ -11,14 +11,16 @@
"DefaultEmbeddingModel": "text-embedding-ada-002", // Optional, set it if you want to use embeddings
"MessageLimit": 20,
"MessageExpiration": "00:30:00",
"ThrowExceptionOnError": true
"ThrowExceptionOnError": true, // Optional, default: true
//"User": "UserName",
//"DefaultParameters": {
// "Temperature": 0.8,
// "TopP": 1,
// "MaxTokens": 500,
// "PresencePenalty": 0,
// "FrequencyPenalty": 0
// "FrequencyPenalty": 0,
// "ResponseFormat": { "Type": "text" }, // Allowed values for Type: text (default) or json_object
// "Seed": 42 // Optional (any integer value)
//}
},
"Logging": {
6 changes: 3 additions & 3 deletions samples/ChatGptBlazor.Wasm/ChatGptBlazor.Wasm.csproj
@@ -1,16 +1,16 @@
<Project Sdk="Microsoft.NET.Sdk.BlazorWebAssembly">

<PropertyGroup>
<TargetFramework>net7.0</TargetFramework>
<TargetFramework>net8.0</TargetFramework>
<Nullable>enable</Nullable>
<ImplicitUsings>enable</ImplicitUsings>
<ServiceWorkerAssetsManifest>service-worker-assets.js</ServiceWorkerAssetsManifest>
</PropertyGroup>

<ItemGroup>
<PackageReference Include="Markdig" Version="0.33.0" />
<PackageReference Include="Microsoft.AspNetCore.Components.WebAssembly" Version="7.0.12" />
<PackageReference Include="Microsoft.AspNetCore.Components.WebAssembly.DevServer" Version="7.0.12" PrivateAssets="all" />
<PackageReference Include="Microsoft.AspNetCore.Components.WebAssembly" Version="8.0.0" />
<PackageReference Include="Microsoft.AspNetCore.Components.WebAssembly.DevServer" Version="8.0.0" PrivateAssets="all" />
</ItemGroup>

<ItemGroup>
7 changes: 6 additions & 1 deletion samples/ChatGptConsole/Application.cs
@@ -1,5 +1,6 @@
using ChatGptNet;
using ChatGptNet.Extensions;
using ChatGptNet.Models;

namespace ChatGptConsole;

@@ -39,7 +40,11 @@ public async Task ExecuteAsync()
{
Console.WriteLine("I'm thinking...");

var response = await chatGptClient.AskAsync(conversationId, message);
var response = await chatGptClient.AskAsync(conversationId, message, new ChatGptParameters
{
MaxTokens = 150,
Temperature = 0.7
});

Console.WriteLine(response.GetContent());
Console.WriteLine();
4 changes: 2 additions & 2 deletions samples/ChatGptConsole/ChatGptConsole.csproj
@@ -2,13 +2,13 @@

<PropertyGroup>
<OutputType>Exe</OutputType>
<TargetFramework>net7.0</TargetFramework>
<TargetFramework>net8.0</TargetFramework>
<ImplicitUsings>enable</ImplicitUsings>
<Nullable>enable</Nullable>
</PropertyGroup>

<ItemGroup>
<PackageReference Include="Microsoft.Extensions.Hosting" Version="7.0.1" />
<PackageReference Include="Microsoft.Extensions.Hosting" Version="8.0.0" />
</ItemGroup>

<ItemGroup>