Releases: davidmigloz/langchain_dart
v0.7.2
2024-06-01
What's New?
🔥 ObjectBox Vector Search
We are excited to announce that Langchain.dart now supports ObjectBox as a vector store!
ObjectBox is an embedded database that runs inside your application. With the release of v4.0.0, it now supports storing and querying vectors. Leveraging the HNSW algorithm, ObjectBox provides fast and efficient vector search without keeping all the vectors in-memory, making it the first scalable on-device vector database for Dart/Flutter applications.
Check out the ObjectBoxVectorStore documentation to learn how to use it.
final vectorStore = ObjectBoxVectorStore(
embeddings: OllamaEmbeddings(model: 'jina/jina-embeddings-v2-small-en'),
dimensions: 512,
);
We have also introduced a new example showcasing a fully local Retrieval Augmented Generation (RAG) pipeline with Llama 3, utilizing ObjectBox and Ollama.
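As a sketch of typical usage (assuming a local Ollama server with the embedding model pulled; `addDocuments` and `similaritySearch` follow the common VectorStore interface, and the example documents are made up):

```dart
final vectorStore = ObjectBoxVectorStore(
  embeddings: OllamaEmbeddings(model: 'jina/jina-embeddings-v2-small-en'),
  dimensions: 512,
);

// Index some documents; their embeddings are stored on-device by ObjectBox.
await vectorStore.addDocuments(
  documents: const [
    Document(pageContent: 'The cat sleeps on the sofa'),
    Document(pageContent: 'ObjectBox is an embedded database'),
  ],
);

// Retrieve the most similar documents for a query using HNSW search.
final docs = await vectorStore.similaritySearch(query: 'Where is the cat?');
print(docs.first.pageContent);
```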
✨ Runnable.close
You now have the ability to close any resources associated with a Runnable by invoking the close method. For instance, if you have a chain like:
final chain = promptTemplate
.pipe(model)
.pipe(outputParser);
// ...
chain.close();
Calling close() will propagate the close() call to each Runnable instance within the chain. In this example, it won't affect promptTemplate and outputParser, as they have no associated resources to close, but it will effectively close the HTTP client of the model.
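In practice, cleanup fits naturally in a try/finally block (a usage sketch; promptTemplate, model, and outputParser are assumed to be defined as in the snippet above):

```dart
final chain = promptTemplate.pipe(model).pipe(outputParser);
try {
  final res = await chain.invoke({'topic': 'bears'});
  print(res);
} finally {
  // Propagates close() through the chain, releasing resources
  // such as the model's HTTP client.
  chain.close();
}
```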
Documentation Migration: langchaindart.dev
We have successfully migrated our documentation to a new domain: langchaindart.dev.
🛠️ Bugfixes
- Errors are now correctly propagated to the stream listener when streaming a chain that uses a StringOutputParser.
- The Ollama client now properly handles buffered stream responses, such as when utilizing Cloudflare Tunnels.
anthropic_sdk_dart client
We are working on integrating Anthropic into LangChain.dart. As part of this effort, we have released a new client for the Anthropic API: anthropic_sdk_dart. In the next release, we will add support for tool calling and further integrate it into LangChain.dart.
Changes
New packages:
Packages with other changes:
- langchain - v0.7.2
- langchain_core - v0.3.2
- langchain_community - v0.2.1
- langchain_chroma - v0.2.0+5
- langchain_firebase - v0.1.0+2
- langchain_google - v0.5.1
- langchain_mistralai - v0.2.1
- langchain_ollama - v0.2.2
- langchain_openai - v0.6.2
- langchain_pinecone - v0.1.0+5
- langchain_supabase - v0.1.0+5
- chromadb - v0.2.0+1
- googleai_dart - v0.1.0+1
- mistralai_dart - v0.0.3+2
- ollama_dart - v0.1.1
- openai_dart - v0.3.3
- vertex_ai - v0.1.0+1
langchain
- v0.7.2
- FEAT: Add support for ObjectBoxVectorStore (#438). (81e167a6)
- Check out the ObjectBoxVectorStore documentation
- REFACTOR: Migrate to langchaindart.dev domain (#434). (358f79d6)
langchain_core
- v0.3.2
- FEAT: Add Runnable.close() to close any resources associated with it (#439). (4e08cced)
- FIX: Stream errors are not propagated by StringOutputParser (#440). (496b11cc)
langchain_community
- v0.2.1
- FEAT: Add support for ObjectBoxVectorStore (#438). (81e167a6)
- Check out the ObjectBoxVectorStore documentation
langchain_openai
- v0.6.2
anthropic_sdk_dart
- v0.0.1
ollama_dart
- v0.1.1
openai_dart
- v0.3.3
- FEAT: Support FastChat OpenAI-compatible API (#444). (ddaf1f69)
- FIX: Make vector store name optional (#436). (29a46c7f)
- FIX: Fix deserialization of sealed classes (#435). (7b9cf223)
New Contributors
- @alfredobs97 made their first contribution in #433
📣 Check out the #announcements channel in the LangChain.dart Discord server for more details.
v0.7.1
2024-05-14
What's New?
🔥 VertexAI for Firebase
We are excited to announce 0-day support for Vertex AI for Firebase with the introduction of the new langchain_firebase package.
If you need to call the Vertex AI Gemini API directly from your mobile or web app, you can now use the ChatFirebaseVertexAI class. This class is specifically designed for mobile and web apps, offering enhanced security options against unauthorized clients (via Firebase App Check) and seamless integration with other Firebase services. It supports the latest models (gemini-1.5-pro and gemini-1.5-flash) as well as tool calling.
await Firebase.initializeApp();
final chatModel = ChatFirebaseVertexAI(
defaultOptions: ChatFirebaseVertexAIOptions(
model: 'gemini-1.5-pro-preview-0514',
),
);
Check out the documentation and the sample project (a port of the official firebase_vertexai sample).
⚡️ Google AI for Developers (Upgrade)
ChatGoogleGenerativeAI and GoogleGenerativeAIEmbeddings have been upgraded to use version v1beta of the Gemini API (previously v1), which supports the latest models (gemini-1.5-pro-latest and gemini-1.5-flash-latest).
ChatGoogleGenerativeAI now includes support for tool calling, including parallel tool calling.
Under the hood, we have migrated the client from googleai_dart to the official google_generative_ai package.
✨ OpenAI (Enhancements)
You can already use OpenAI's new GPT-4o model. Additionally, usage statistics are now included when streaming with OpenAI and ChatOpenAI.
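A minimal sketch of reading the usage statistics (assuming `openAiApiKey` is defined elsewhere; per these notes, the same stats are now also populated when using .stream instead of .invoke):

```dart
final model = ChatOpenAI(
  apiKey: openAiApiKey,
  defaultOptions: const ChatOpenAIOptions(model: 'gpt-4o'),
);

final res = await model.invoke(PromptValue.string('Tell me a joke'));
// Token usage statistics; with this release they are also
// included when streaming the model output.
print(res.usage);
```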
🦙 Ollama
The default models for Ollama, ChatOllama, and OllamaEmbeddings have been updated to llama3. ChatOllama now returns a finishReason. OllamaEmbeddings now supports keepAlive.
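A minimal sketch of the updated defaults and options (the keepAlive value and its units are an assumption; check the OllamaEmbeddings API docs):

```dart
final chatModel = ChatOllama(
  defaultOptions: const ChatOllamaOptions(model: 'llama3'), // new default
);

final res = await chatModel.invoke(PromptValue.string('Hello!'));
print(res.finishReason); // ChatOllama now reports the finish reason

final embeddings = OllamaEmbeddings(
  model: 'llama3',
  keepAlive: 5, // assumption: how long the model stays loaded in memory
);
```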
🛠️ openai_dart
The Assistant API has been enhanced to support different content types, and several bug fixes have been implemented.
The batch API now supports completions and embeddings.
🔧 ollama_dart
The client has been aligned with the Ollama v0.1.36 API.
Changes
Packages with breaking changes:
Packages with other changes:
- langchain - v0.7.1
- langchain_core - v0.3.1
- langchain_community - v0.2.0+1
- langchain_firebase - v0.1.0
- langchain_openai - v0.6.1
- langchain_ollama - v0.2.1
- langchain_chroma - v0.2.0+4
- langchain_mistralai - v0.2.0+1
- langchain_pinecone - v0.1.0+4
- langchain_supabase - v0.1.0+4
- openai_dart - v0.3.2
langchain
- v0.7.1
Note: VertexAI for Firebase (ChatFirebaseVertexAI) is available in the new langchain_firebase package.
- DOCS: Add docs for ChatFirebaseVertexAI (#422). (8d0786bc)
- DOCS: Update ChatOllama docs (#417). (9d30b1a1)
langchain_core
- v0.3.1
- FEAT: Add equals to ChatToolChoiceForced (#422). (8d0786bc)
- FIX: Fix finishReason null check (#406). (5e2b0ecc)
langchain_community
- v0.2.0+1
- Update a dependency to the latest release.
langchain_google
- v0.5.0
Note: ChatGoogleGenerativeAI and GoogleGenerativeAIEmbeddings now use version v1beta of the Gemini API (instead of v1), which supports the latest models (gemini-1.5-pro-latest and gemini-1.5-flash-latest). VertexAI for Firebase (ChatFirebaseVertexAI) is available in the new langchain_firebase package.
- FEAT: Add support for tool calling in ChatGoogleGenerativeAI (#419). (df41f38a)
- DOCS: Add Gemini 1.5 Flash to models list (#423). (40f4c9de)
- BREAKING FEAT: Migrate internal client from googleai_dart to google_generative_ai (#407). (fa4b5c37)
langchain_firebase
- v0.1.0
- FEAT: Add support for ChatFirebaseVertexAI (#422). (8d0786bc)
- DOCS: Add Gemini 1.5 Flash to models list (#423). (40f4c9de)
langchain_openai
- v0.6.1
- FEAT: Add GPT-4o to model catalog (#420). (96214307)
- FEAT: Include usage stats when streaming with OpenAI and ChatOpenAI (#406). (5e2b0ecc)
langchain_ollama
- v0.2.1
- FEAT: Handle finish reason in ChatOllama (#416). (a5e1af13)
- FEAT: Add keepAlive option to OllamaEmbeddings (#415). (32e19028)
- FEAT: Update Ollama default model from llama2 to llama3 (#417). (9d30b1a1)
- REFACTOR: Remove deprecated Ollama options (#414). (861a2b74)
openai_dart
- v0.3.2
- FEAT: Add GPT-4o to model catalog (#420). (96214307)
- FEAT: Add support for different content types in Assistants API and other fixes (#412). (97acab45)
- FEAT: Add support for completions and embeddings in batch API in openai_dart (#425). (16fe4c68)
- FEAT: Add incomplete status to RunObject in openai_dart (#424). (71b116e6)
ollama_dart
- v0.1.0
- BREAKING FEAT: Align Ollama client to the Ollama v0.1.36 API (#411). (326212ce)
- FEAT: Update Ollama default model from llama2 to llama3 (#417). (9d30b1a1)
- FEAT: Add support for done reason (#413). (cc5b1b02)
googleai_dart
- v0.1.0
- REFACTOR: Minor changes (#407). ([fa4b5c3](https://github.co...
v0.7.0
2024-05-05
What's New?
This update introduces a standardised interface for tool calling (also known as function calling), allowing models to interact more effectively with external tools.
Previously, our function-calling capability was tightly integrated with the OpenAI provider. The new interface decouples this by providing an abstraction layer over the tool-calling APIs of different vendors. This enhancement makes it easier to switch providers without modifying your existing code.
We have also improved integration with LangChain tools. Now you can seamlessly integrate these tools into your models without the need to convert data formats.
Models can now call multiple tools in a single request, an improvement over the previous limit of one tool per request.
A new output parser, ToolsOutputParser, has been introduced to extract tool calls from the model response:
final calculator = CalculatorTool();
final model = ChatOpenAI(
apiKey: openAiApiKey,
defaultOptions: ChatOpenAIOptions(
model: 'gpt-4-turbo',
tools: [calculator],
),
);
final chain = model.pipe(ToolsOutputParser());
final res = await chain.invoke(
PromptValue.string('Calculate 3 * 12 and 11 + 49'),
);
print(res);
// [ParsedToolCall{
// id: call_p4GmED1My56vV6XZi9ChljJN,
// name: calculator,
// arguments: {
// input: 3 * 12
// },
// }, ParsedToolCall{
// id: call_eLJo7nII9EanFUcxy42WA5Pm,
// name: calculator,
// arguments: {
// input: 11 + 49
// },
// }]
It effectively handles streaming by progressively concatenating chunks and completing partial JSONs into valid ones:
final stream = chain.stream(
PromptValue.string('Calculate 3 * 12 and 11 + 49'),
);
await stream.forEach(print);
// []
// [ParsedToolCall{ id: call_gGXYQDJj9ZG4YmvLhZyLD442, name: calculator, arguments: {}, }]
// [ParsedToolCall{ id: call_gGXYQDJj9ZG4YmvLhZyLD442, name: calculator, arguments: {input: 3 * }, }]
// [ParsedToolCall{ id: call_gGXYQDJj9ZG4YmvLhZyLD442, name: calculator, arguments: {input: 3 * 12}, }]
// [ParsedToolCall{ id: call_gGXYQDJj9ZG4YmvLhZyLD442, name: calculator, arguments: {input: 3 * 12}, }, ParsedToolCall{ id: call_axZ3Q5Ve8ZvLUB9NDXdwuUVh, name: calculator, arguments: {}, }]
// [ParsedToolCall{ id: call_gGXYQDJj9ZG4YmvLhZyLD442, name: calculator, arguments: {input: 3 * 12}, }, ParsedToolCall{ id: call_axZ3Q5Ve8ZvLUB9NDXdwuUVh, name: calculator, arguments: {input: 11 +}, }]
// [ParsedToolCall{ id: call_gGXYQDJj9ZG4YmvLhZyLD442, name: calculator, arguments: {input: 3 * 12}, }, ParsedToolCall{ id: call_axZ3Q5Ve8ZvLUB9NDXdwuUVh, name: calculator, arguments: {input: 11 + 49}, }]
Finally, the OpenAIFunctionsAgent has been renamed to OpenAIToolsAgent and updated to work with the new standardised tool calling interface. We plan to extend this functionality in future updates by introducing a ToolsAgent that is compatible with any vendor that supports tool calling.
Refer to the Tool Calling and ToolsOutputParser documentation for more details.
To migrate from the previous function call paradigm to the new standard tool call interface, see this migration guide. We have also improved the tool abstractions, see here for all the changes.
Changes
Packages with breaking changes:
- langchain - v0.7.0
- langchain_core - v0.3.0
- langchain_community - v0.2.0
- langchain_openai - v0.6.0
- langchain_google - v0.4.0
- langchain_mistralai - v0.2.0
- langchain_ollama - v0.2.0
langchain
- v0.7.0
- BREAKING FEAT: Migrate from function calling to tool calling (#400). (44413b83)
- BREAKING REFACTOR: Improve Tool abstractions (#398). (2a50aec2)
langchain_core
- v0.3.0
- BREAKING FEAT: Migrate from function calling to tool calling (#400). (44413b83)
- BREAKING REFACTOR: Improve Tool abstractions (#398). (2a50aec2)
langchain_community
- v0.2.0
- BREAKING FEAT: Migrate from function calling to tool calling (#400). (44413b83)
- BREAKING REFACTOR: Improve Tool abstractions (#398). (2a50aec2)
langchain_openai
- v0.6.0
- BREAKING FEAT: Migrate from function calling to tool calling (#400). (44413b83)
- BREAKING REFACTOR: Improve Tool abstractions (#398). (2a50aec2)
langchain_google
- v0.4.0
langchain_mistralai
- v0.2.0
langchain_ollama
- v0.2.0
📣 Check out the #announcements channel in the LangChain.dart Discord server for more details.
v0.6.0
2024-04-30
What's New?
This release focuses on enhancing and expanding the capabilities of LangChain Expression Language (LCEL):
🚦 RunnableRouter
RunnableRouter enables the creation of non-deterministic chains, where the output of a previous step determines the next step. This feature allows you to use an LLM to dynamically select the appropriate prompt, chain, LLM, or other components based on some input. A particularly effective technique is combining RunnableRouter with embedding models to route a query to the most relevant (semantically similar) prompt.
final router = Runnable.fromRouter((Map<String, dynamic> input, _) {
final topic = input['topic'] as String;
if (topic.contains('langchain')) {
return langchainChain;
} else if (topic.contains('anthropic')) {
return anthropicChain;
} else {
return generalChain;
}
});
For more details and examples, please refer to the router documentation.
JsonOutputParser
In certain scenarios, it is useful to ask the model to respond in JSON format, which makes the response easier to parse. Many vendors even offer a JSON mode that guarantees valid JSON output. With the new JsonOutputParser, you can now easily parse the output of a runnable as a JSON map. It also supports streaming, returning valid JSON from the incomplete JSON chunks streamed by the model.
final model = ChatOpenAI(
apiKey: openAiApiKey,
defaultOptions: ChatOpenAIOptions(
responseFormat: ChatOpenAIResponseFormat(
type: ChatOpenAIResponseFormatType.jsonObject,
),
),
);
final parser = JsonOutputParser<ChatResult>();
final chain = model.pipe(parser);
final stream = chain.stream(
PromptValue.string(
'Output a list of the countries france, spain and japan and their '
'populations in JSON format. Use a dict with an outer key of '
'"countries" which contains a list of countries. '
'Each country should have the key "name" and "population"',
),
);
await stream.forEach((final chunk) => print('$chunk|'));
// {}|
// {countries: []}|
// {countries: [{name: France}]}|
// {countries: [{name: France, population: 67076000}, {}]}|
// {countries: [{name: France, population: 67076000}, {name: Spain}]}|
// {countries: [{name: France, population: 67076000}, {name: Spain, population: 46723749}]}|
// {countries: [{name: France, population: 67076000}, {name: Spain, population: 46723749}, {name: Japan}]}|
// {countries: [{name: France, population: 67076000}, {name: Spain, population: 46723749}, {name: Japan, population: 126476461}]}|
🗺️ Mapping input values
Mapping the output value of a previous runnable to a new value that aligns with the input requirements of the next runnable is a common task. Previous versions provided the Runnable.mapInput method for custom mapping logic, but it lacked control over the stream of input values when using streaming. With this release, you can now utilize Runnable.mapInputStream to have full control over the input stream.
For example, you may want to output only the last element of the input stream: (full code)
final mapper = Runnable.mapInputStream((Stream<String> inputStream) async* {
yield await inputStream.last;
});
If you need to define separate logic for invoke and stream operations, Runnable.fromFunction has been updated to allow you to specify the invoke logic, the stream logic, or both, providing greater flexibility. This refactoring of Runnable.fromFunction resulted in a minor breaking change; see the migration guide for more information.
In this example, we create a runnable that we can use in our chains to debug the output of the previous step. It prints different information when the chain is invoked vs streamed. (full code)
Runnable<T, RunnableOptions, T> logOutput<T extends Object>(String stepName) {
return Runnable.fromFunction<T, T>(
invoke: (input, options) {
print('Output from step "$stepName":\n$input\n---');
return Future.value(input);
},
stream: (inputStream, options) {
return inputStream.map((input) {
print('Chunk from step "$stepName":\n$input\n---');
return input;
});
},
);
}
final chain = Runnable.getMapFromInput<String>('equation_statement')
.pipe(logOutput('getMapFromInput'))
.pipe(promptTemplate)
.pipe(logOutput('promptTemplate'))
.pipe(ChatOpenAI(apiKey: openaiApiKey))
.pipe(logOutput('chatModel'))
.pipe(StringOutputParser())
.pipe(logOutput('outputParser'));
Non-streaming components
Previously, all LangChain.dart components processed a streaming input item by item. This made sense for some components, such as output parsers, but was problematic for others. For example, you don't want a retriever to retrieve documents for each streamed chunk; instead, you want to wait for the full query to be received before performing the search.
This has been fixed in this release, as from now on the following components will reduce/aggregate the streaming input from the previous step into a single value before processing it:
- PromptTemplate
- ChatPromptTemplate
- LLM
- ChatModel
- Retriever
- Tool
- RunnableFunction
- RunnableRouter
Improved LCEL docs
We have revamped the LangChain Expression Language documentation. It now includes a dedicated section explaining the different primitives available in LCEL. Also, a new page has been added specifically covering streaming.
- Sequence: Chaining runnables
- Map: Formatting inputs & concurrency
- Passthrough: Passing inputs through
- Mapper: Mapping inputs
- Function: Run custom logic
- Binding: Configuring runnables
- Router: Routing inputs
Changes
Packages with breaking changes:
Packages with other changes:
langchain
- v0.6.0+1
- FEAT: Add support for Runnable.mapInputStream (#393). (a2b6bbb5)
- FEAT: Add support for JsonOutputParser (#392). (c6508f0f)
- FEAT: Reduce input stream for PromptTemplate, LLM, ChatModel, Retriever and Tool (#388). (b59bcd40)
- BREAKING FEAT: Support different logic for streaming in RunnableFunction (#394). (8bb2b8ed)
- FIX: Allow async functions in Runnable.mapInput (#396). (e4c35092)
- DOCS: Update LangChain Expression Language documentation (#395). (6ce75e5f)
langchain_core
- v0.2.0+1
- FEAT: Add support for Runnable.mapInputStream (#393). (a2b6bbb5)
- FEAT: Add support for JsonOutputParser (#392). (c6508f0f)
- FEAT: Reduce input stream for PromptTemplate, LLM, ChatModel, Retriever and Tool (#388). (b59bcd40)
- BREAKING FEAT: Support different logic for streaming in RunnableFunction (#394). (8bb2b8ed)
- FIX: Allow async functions in Runnable.mapInput (#396). (e4c35092)
openai_dart
- v0.2.2
v0.5.0
2024-04-10
What's New?
We're excited to announce a major update with a focus on enhancing the project's scalability and improving the developer experience. Here are the key enhancements:
🛠️ Restructured package organization:
LangChain.dart's main package has been divided into multiple packages to simplify usage and contribution to the project.
- langchain_core: Includes only the core abstractions and the LangChain Expression Language as a way to compose them together.
  - Depend on this package to build frameworks on top of LangChain.dart or to interoperate with it.
- langchain: Features higher-level components and use-case specific frameworks crucial to the application's cognitive architecture.
  - Depend on this package to build LLM applications with LangChain.dart.
  - This package exposes langchain_core, so you don't need to depend on it explicitly.
- langchain_community: Houses community-contributed components and third-party integrations not included in the main LangChain.dart API.
  - Depend on this package if you want to use any of the integrations or components it provides.
- Integration-specific packages such as langchain_openai and langchain_google: These enable independent imports of popular third-party integrations without a full dependency on the langchain_community package.
  - Depend on an integration-specific package if you want to use the specific integration.
✨ Enhanced APIs and new .batch API:
The LanguageModelResult class structure (including its child classes LLMResult and ChatResult) has been simplified, with each LanguageModelResult now storing a single output directly.
To generate multiple outputs, use the new .batch API (instead of .invoke or .stream), which batches the invocation of a Runnable on a list of inputs. If the underlying provider supports batching, this method will attempt to batch the calls to the provider. Otherwise, it will concurrently call invoke on each input (you can configure the concurrencyLimit).
Output parsers have been revamped to provide greater flexibility and compatibility with any type of Runnable. StringOutputParser now supports reducing the output of a stream to a single value, which is useful when the next step in the chain expects a single input value instead of a stream.
The deprecated generate and predict APIs have been removed in favour of the LCEL APIs (invoke, stream, and batch).
The internal implementation of the stream API has been optimized, providing clearer error messages in case of issues.
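A sketch of the new .batch API (promptTemplate and chatModel are assumed to be defined as in earlier examples; the inputs are made up):

```dart
final chain = promptTemplate
    .pipe(chatModel)
    .pipe(StringOutputParser());

// Invoke the chain once per input; calls are batched if the provider
// supports it, otherwise they run concurrently up to the concurrencyLimit.
final res = await chain.batch([
  {'topic': 'bears'},
  {'topic': 'cats'},
]);
print(res.length); // one output per input
```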
Google AI streaming and embeddings:
ChatGoogleGenerativeAI (used for interacting with Gemini models) now supports streaming and tuned models.
Support for Google AI embedding models has been added through the GoogleGenerativeAIEmbeddings class, compatible with the latest text-embedding-004 embedding model. Specifying the number of output dimensions is also supported.
Migration guide:
We have compiled a migration guide to assist you in updating your code to the new version. You can find it here. For any questions or assistance, please reply to this discussion or reach out to us on Discord.
Changes
Packages with breaking changes:
- langchain - v0.5.0
- langchain_chroma - v0.2.0
- langchain_community - v0.1.0
- langchain_core - v0.1.0
- langchain_google - v0.3.0
- langchain_mistralai - v0.1.0
- langchain_ollama - v0.1.0
- langchain_openai - v0.5.0
- langchain_pinecone - v0.1.0
- langchain_supabase - v0.1.0
- chromadb - v0.2.0
- openai_dart - v0.2.0
- vertex_ai - v0.1.0
Packages with other changes:
langchain
- v0.5.0
- BREAKING REFACTOR: Introduce langchain_core and langchain_community packages (#328). (5fa520e6)
- BREAKING REFACTOR: Simplify LLMResult and ChatResult classes (#363). (ffe539c1)
- BREAKING REFACTOR: Simplify Output Parsers (#367). (f24b7058)
- BREAKING REFACTOR: Remove deprecated generate and predict APIs (#335). (c55fe50f)
- REFACTOR: Simplify internal .stream implementation (#364). (c83fed22)
- FEAT: Implement .batch support (#370). (d254f929)
- FEAT: Add reduceOutputStream option to StringOutputParser (#368). (7f9a9fae)
- DOCS: Update LCEL docs. (ab3ab573)
- DOCS: Add RAG example using OllamaEmbeddings and ChatOllama (#337). (8bddc6c0)
langchain_community
- v0.1.0
langchain_core
- v0.1.0
- BREAKING REFACTOR: Introduce langchain_core and langchain_community packages (#328). (5fa520e6)
- BREAKING REFACTOR: Simplify LLMResult and ChatResult classes (#363). (ffe539c1)
- BREAKING REFACTOR: Simplify Output Parsers (#367). (f24b7058)
- REFACTOR: Simplify internal .stream implementation (#364). (c83fed22)
- FEAT: Implement .batch support (#370). (d254f929)
- FEAT: Add reduceOutputStream option to StringOutputParser (#368). (7f9a9fae)
langchain_chroma
- v0.2.0
langchain_google
- v0.3.0
- BREAKING REFACTOR: Introduce langchain_core and langchain_community packages (#328). (5fa520e6)
- BREAKING REFACTOR: Simplify LLMResult and ChatResult classes (#363). (ffe539c1)
- BREAKING REFACTOR: Simplify Output Parsers (#367). (f24b7058)
- BREAKING REFACTOR: Remove deprecated generate and predict APIs (#335). (c55fe50f)
- REFACTOR: Simplify internal .stream impleme...
v0.4.2
2024-02-15
What's new?
This release contains some minor improvements:
- Ollama keep_alive: users can now control the duration for which models remain active in memory when using Ollama (which, by the way, just added Windows support).
- Streaming support for googleai_dart: the client has been upgraded to support streaming functionality. However, with the release of Google's official google_generative_ai client, we are evaluating the potential deprecation of googleai_dart in favour of the official client once they reach feature parity.
- Custom instance configuration for OpenAI.
In addition to these updates, we've also started the work to split the langchain package into 3 packages. This refactoring aims to enhance modularity and facilitate community contributions.
- langchain_core: will contain the core abstractions (i.e. language models, document loaders, embedding models, vector stores, retrievers, etc.), as well as the LangChain Expression Language as a way to compose these components together. The community can depend on this package to build frameworks on top of LangChain.dart or to interoperate with it.
- langchain_community: will contain third-party integrations that don't have a dedicated package.
- langchain: will depend on and expose langchain_core, and will contain higher-level and use-case specific chains, agents, and retrieval algorithms that are at the core of the application's cognitive architecture.
Changes
Packages with breaking changes:
- There are no breaking changes in this release.
Packages with other changes:
- chromadb - v0.1.2
- googleai_dart - v0.0.3
- langchain - v0.4.2
- langchain_chroma - v0.1.1
- langchain_google - v0.2.4
- langchain_mistralai - v0.0.3
- langchain_ollama - v0.0.4
- langchain_openai - v0.4.1
- langchain_pinecone - v0.0.7
- langchain_supabase - v0.0.1+1
- mistralai_dart - v0.0.3
- ollama_dart - v0.0.3
- openai_dart - v0.1.7
- vertex_ai - v0.0.10
googleai_dart
- v0.0.3
- FEAT: Add streaming support to googleai_dart client (#299). (2cbd538a)
- FEAT: Update meta and test dependencies (#331). (912370ee)
- DOCS: Update pubspecs. (d23ed89a)
openai_dart
- v0.1.7
- FEAT: Allow to specify OpenAI custom instance (#327). (4744648c)
- FEAT: Update meta and test dependencies (#331). (912370ee)
- DOCS: Update pubspecs. (d23ed89a)
langchain_openai
- v0.4.1
- FEAT: Allow to specify OpenAI custom instance (#327). (4744648c)
- FEAT: Update meta and test dependencies (#331). (912370ee)
- DOCS: Update pubspecs. (d23ed89a)
ollama_dart
- v0.0.3
- FEAT: Add Ollama keep_alive param to control how long models stay loaded (#319). (3b86e227)
- FEAT: Update meta and test dependencies (#331). (912370ee)
- DOCS: Update pubspecs. (d23ed89a)
langchain_ollama
- v0.0.4
- FEAT: Add Ollama keep_alive param to control how long models stay loaded (#319). (3b86e227)
- FEAT: Update meta and test dependencies (#331). (912370ee)
- DOCS: Update pubspecs. (d23ed89a)
chromadb
- v0.1.2
langchain
- v0.4.2
langchain_chroma
- v0.1.1
langchain_google
- v0.2.4
langchain_mistralai
- v0.0.3
langchain_pinecone
- v0.0.7
langchain_supabase
- v0.0.1+1
- DOCS: Update pubspecs. (d23ed89a)
mistralai_dart
- v0.0.3
vertex_ai
- v0.0.10
Contributors
📣 Check out the #announcements channel in the LangChain.dart Discord server for more details.
v0.4.1
2024-01-31
What's new?
(Image: Supabase Vector LangChain.dart)
Supabase Vector integration
Now you can use Supabase Vector to store, query, and index vector embeddings in your Supabase Postgres database.
final vectorStore = Supabase(
embeddings: OpenAIEmbeddings(apiKey: openaiApiKey),
supabaseUrl: 'https://xyzcompany.supabase.co',
supabaseKey: 'eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...',
);
final res = await vectorStore.similaritySearch(
query: 'Where is the cat?',
config: SupabaseSimilaritySearch(
k: 5,
filter: {
'category': {r'$ne': 'person'},
},
),
);
Check out the docs for more info.
Changes
Packages with breaking changes:
- There are no breaking changes in this release.
Packages with other changes:
langchain
- v0.4.1
langchain_supabase
- v0.0.1
New Contributors
- @matteodg made their first contribution in #318
- @luisredondo made their first contribution in #318
📣 Check out the #announcements channel in the LangChain.dart Discord server for more details.
v0.4.0
2024-01-26
What's new?
New OpenAI embedding models
You can now use the new generation of OpenAI embedding models:
- text-embedding-3-small: smaller and highly efficient. It provides a significant upgrade over its predecessor (text-embedding-ada-002) and it is 5x cheaper.
- text-embedding-3-large: larger and more powerful. It creates embeddings with up to 3072 dimensions.

Breaking change: text-embedding-3-small is now the default model of the OpenAIEmbeddings wrapper.
Support for shortening OpenAI embeddings
Using larger embeddings, for example storing them in a vector store for retrieval, generally costs more and consumes more computing, memory and storage than using smaller embeddings. The new embedding models support shortening embeddings (i.e. removing some numbers from the end of the sequence) without the embedding losing its concept-representing properties.
For example, on the MTEB benchmark, a text-embedding-3-large embedding can be shortened to a size of 256 while still outperforming an unshortened text-embedding-ada-002 embedding with a size of 1536.
E.g.:
final embeddings = OpenAIEmbeddings(
apiKey: openaiApiKey,
model: 'text-embedding-3-large',
dimensions: 256,
);
Changes
Packages with breaking changes:
Packages with other changes:
Packages with dependency updates only:
Packages listed below depend on other packages in this workspace that have had changes. Their versions have been incremented to bump the minimum dependency versions of the packages they depend upon in this project.
- langchain_ollama - v0.0.3+2
- langchain_mistralai - v0.0.2+2
- langchain_pinecone - v0.0.6+13
- langchain_chroma - v0.1.0+14
- langchain_google - v0.2.3+2
langchain
- v0.4.0
langchain_openai
- v0.4.0
- BREAKING FEAT: Update OpenAIEmbeddings' default model to text-embedding-3-small (#313). (43463481)
- FEAT: Add support for shortening embeddings in OpenAIEmbeddings (#312). (5f5eb54f)
openai_dart
- v0.1.6
- FEAT: Add gpt-4-0125-preview and gpt-4-turbo-preview in model catalog (#309). (f5a78867)
- FEAT: Add text-embedding-3-small and text-embedding-3-large in model catalog (#310). (fda16024)
- FEAT: Add support for shortening embeddings (#311). (c725db0b)
📣 Check out the #announcements channel in the LangChain.dart Discord server for more details.
v0.3.3
2024-01-20
What's new?
Together AI support
Together AI offers a unified OpenAI-compatible API for a broad range of models running serverless or on your own dedicated instances. It also allows you to fine-tune models on your data or train new models from scratch.
You can now consume chat and embeddings models from Together AI using the ChatOpenAI and OpenAIEmbeddings wrappers.
E.g.:
final chatModel = ChatOpenAI(
apiKey: togetherAiApiKey,
baseUrl: 'https://api.together.xyz/v1',
defaultOptions: const ChatOpenAIOptions(
model: 'NousResearch/Nous-Hermes-2-Yi-34B',
),
);
Anyscale support
Similarly to Together AI, Anyscale also offers an OpenAI-compatible API for a large range of chat and embedding models.
E.g.:
final chatModel = ChatOpenAI(
apiKey: anyscaleApiKey,
baseUrl: 'https://api.endpoints.anyscale.com/v1',
defaultOptions: const ChatOpenAIOptions(
model: 'meta-llama/Llama-2-70b-chat-hf',
),
);
✨ Other fixes and improvements
- The Mistral client is now aligned with the latest spec of their API
- The OpenAI client for the Assistant API now returns the usage data for Run and RunStep
- The VertexAI/ChatVertexAI wrappers now count tokens using the countTokens API instead of tiktoken
Changes
Packages with breaking changes:
- There are no breaking changes in this release.
Packages with other changes:
langchain
- v0.3.3
langchain_openai
- v0.3.3
langchain_google
- v0.2.3+1
langchain_mistralai
- v0.0.2+1
openai_dart
- v0.1.5
mistralai_dart
- v0.0.2+2
vertex_ai
- v0.0.9
Packages with dependency updates only:
Packages listed below depend on other packages in this workspace that have had changes. Their versions have been incremented to bump the minimum dependency versions of the packages they depend upon in this project.
langchain_pinecone
- v0.0.6+12
langchain_ollama
- v0.0.3+1
langchain_chroma
- v0.1.0+13
langchain
- v0.3.3
langchain_openai
- v0.3.3
- FEAT: Support Anyscale in ChatOpenAI and OpenAIEmbeddings wrappers (#305). (7daa3eb0)
- FEAT: Support Together AI in ChatOpenAI wrapper (#297). (28ab56af)
- FEAT: Support Together AI in OpenAIEmbeddings wrapper (#304). (ddc761d6)
langchain_google
- v0.2.3+1
langchain_mistralai
- v0.0.2+1
openai_dart
- v0.1.5
- FEAT: Support Anyscale API in openai_dart client (#303). (e0a3651c)
- FEAT: Support Together AI API (#296). (ca6f23d5)
- FEAT: Support Together AI Embeddings API in openai_dart client (#301). (4a6e1045)
- FEAT: Add usage to Run/RunStep in openai_dart client (#302). (cc6538b5)
vertex_ai
- v0.0.9
mistralai_dart
- v0.0.2+2
📣 Check out the #announcements channel in the LangChain.dart Discord server for more details.
v0.3.2
2024-01-13
What's new?
π OpenRouter support
OpenRouter offers a unified OpenAI-compatible API for a broad range of models. You can also let users pay for their own models via their OAuth PKCE flow.
You can now consume their API using the ChatOpenAI wrapper.
final chatModel = ChatOpenAI(
apiKey: openRouterApiKey,
baseUrl: 'https://openrouter.ai/api/v1',
defaultOptions: const ChatOpenAIOptions(
model: 'mistralai/mistral-small',
),
);
Check out the documentation for more details.
π Fixed long build times when using the tokenizer
LangChain.dart used the tiktoken package to tokenize prompts. That package suffers from a bug that causes long build times and, unfortunately, its maintainer is no longer very active. @orenagiv has released a new package, langchain_tiktoken, that fixes this issue, and we have migrated to it.
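If you tokenize text directly, langchain_tiktoken keeps a tiktoken-style API. The snippet below is a hedged sketch of that usage; check the package documentation for the exact surface, as the function and type names here are assumed from the tiktoken API it mirrors:

```dart
import 'package:langchain_tiktoken/langchain_tiktoken.dart';

void main() {
  // Load the BPE encoding used by recent OpenAI models (assumed API).
  final encoding = getEncoding('cl100k_base');
  // Encode a prompt into token ids and count them.
  final tokens = encoding.encode('Hello, LangChain.dart!');
  print('Token count: ${tokens.length}');
}
```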
✨ Other fixes and improvements
- Added a copyWith method to all options classes (like ChatOpenAIOptions) to make it easier to update options
- ConversationTokenBufferMemory and ConversationSummaryMemory are now properly exported in the langchain package
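For example, copyWith lets you derive a tweaked options object from an existing one without rebuilding it from scratch. A minimal sketch, with illustrative field values:

```dart
const baseOptions = ChatOpenAIOptions(
  model: 'gpt-3.5-turbo',
  temperature: 0,
);
// Override only the temperature; every other field is carried over.
final creativeOptions = baseOptions.copyWith(temperature: 0.9);
```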
Changes
Packages with breaking changes:
- There are no breaking changes in this release.
Packages with other changes:
langchain
- v0.3.2
langchain_openai
- v0.3.2
langchain_google
- v0.2.3
langchain_mistralai
- v0.0.2
langchain_ollama
- v0.0.3
langchain_pinecone
- v0.0.6+11
langchain_chroma
- v0.1.0+12
openai_dart
- v0.1.4
googleai_dart
- v0.0.2+1
mistralai_dart
- v0.0.2+1
vertex_ai
- v0.0.8
Packages with dependency updates only:
Packages listed below depend on other packages in this workspace that have had changes. Their versions have been incremented to bump the minimum dependency versions of the packages they depend upon in this project.
langchain_pinecone
- v0.0.6+11
langchain_chroma
- v0.1.0+12
langchain
- v0.3.2
- REFACTOR(llms): Make all LLM options fields nullable and add copyWith (#284). (57eceb9b)
- FIX(memory): Export ConversationSummaryMemory (#283). (76b01d23)
- FEAT: Update internal dependencies (#291). (69621cc6)
langchain_openai
- v0.3.2
- FEAT(chat-models): Support OpenRouter API in ChatOpenAI wrapper (#292). (c6e7e5be) (docs)
- REFACTOR(llms): Make all LLM options fields nullable and add copyWith (#284). (57eceb9b)
- REFACTOR: Migrate tokenizer to langchain_tiktoken package (#285). (6a3b6466)
- FEAT: Update internal dependencies (#291). (69621cc6)
langchain_google
- v0.2.3
- REFACTOR: Use cl100k_base encoding model when no tokenizer is available (#295). (ca908e80)
- REFACTOR(llms): Make all LLM options fields nullable and add copyWith (#284). (57eceb9b)
- REFACTOR: Migrate tokenizer to langchain_tiktoken package (#285). (6a3b6466)
- FEAT: Update internal dependencies (#291). (69621cc6)
langchain_mistralai
- v0.0.2
- REFACTOR: Use cl100k_base encoding model when no tokenizer is available (#295). (ca908e80)
- REFACTOR(llms): Make all LLM options fields nullable and add copyWith (#284). (57eceb9b)
- REFACTOR: Migrate tokenizer to langchain_tiktoken package (#285). (6a3b6466)
- FEAT: Update internal dependencies (#291). (69621cc6)
langchain_ollama
- v0.0.3
- REFACTOR: Use cl100k_base encoding model when no tokenizer is available (#295). (ca908e80)
- REFACTOR(llms): Make all LLM options fields nullable and add copyWith (#284). (57eceb9b)
- REFACTOR: Migrate tokenizer to langchain_tiktoken package (#285). (6a3b6466)
- FEAT: Update internal dependencies (#291). (69621cc6)
openai_dart
- v0.1.4
- FEAT(openai_dart): Support OpenRouter API (#292). (57699b32)
- FEAT(openai_dart): Remove OpenAI deprecated models (#290). (893b1c51)
googleai_dart
- v0.0.2+1
mistralai_dart
- v0.0.2+1
vertex_ai
- v0.0.8
📣 Check out the #announcements channel in the LangChain.dart Discord server for more details.