v0.6.0
2024-04-30
What's New?
This release focuses on enhancing and expanding the capabilities of LangChain Expression Language (LCEL):
🚦 RunnableRouter
RunnableRouter enables the creation of non-deterministic chains, where the output of a previous step determines the next step. This feature allows you to use an LLM to dynamically select the appropriate prompt, chain, LLM, or other component based on some input. A particularly effective technique is combining RunnableRouter with embedding models to route a query to the most relevant (semantically similar) prompt.
final router = Runnable.fromRouter((Map<String, dynamic> input, _) {
  // Select the chain to run based on the topic of the input.
  final topic = input['topic'] as String;
  if (topic.contains('langchain')) {
    return langchainChain;
  } else if (topic.contains('anthropic')) {
    return anthropicChain;
  } else {
    return generalChain;
  }
});
For more details and examples, please refer to the router documentation.
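As a sketch of the embedding-based routing technique mentioned above (the prompt texts and the cosine helper are illustrative, not part of the library, and we assume the router callback may be async; adapt it to your own setup):

import 'dart:math' show sqrt;

final embeddings = OpenAIEmbeddings(apiKey: openAiApiKey);

const physicsTemplate = 'You are a physics professor. Answer: {query}';
const historyTemplate = 'You are a historian. Answer: {query}';
const templates = [physicsTemplate, historyTemplate];

// Embed each candidate prompt once, up front.
final templateEmbeddings = await embeddings.embedDocuments(
  templates.map((final t) => Document(pageContent: t)).toList(growable: false),
);

// Plain cosine similarity, inlined to keep the sketch self-contained.
double cosine(final List<double> a, final List<double> b) {
  var dot = 0.0, normA = 0.0, normB = 0.0;
  for (var i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (sqrt(normA) * sqrt(normB));
}

final semanticRouter =
    Runnable.fromRouter((Map<String, dynamic> input, _) async {
  // Embed the incoming query and route to the most similar prompt.
  final queryEmbedding = await embeddings.embedQuery(input['query'] as String);
  var best = 0;
  for (var i = 1; i < templateEmbeddings.length; i++) {
    if (cosine(queryEmbedding, templateEmbeddings[i]) >
        cosine(queryEmbedding, templateEmbeddings[best])) {
      best = i;
    }
  }
  return PromptTemplate.fromTemplate(templates[best])
      .pipe(ChatOpenAI(apiKey: openAiApiKey))
      .pipe(StringOutputParser());
});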
🌟 JsonOutputParser
In certain scenarios, it is useful to ask the model to respond in JSON format, which makes the response easier to parse. Many vendors even offer a JSON mode that guarantees valid JSON output. With the new JsonOutputParser you can now easily parse the output of a runnable as a JSON map. It also supports streaming, emitting valid JSON maps built from the incomplete JSON chunks streamed by the model.
final model = ChatOpenAI(
  apiKey: openAiApiKey,
  defaultOptions: ChatOpenAIOptions(
    responseFormat: ChatOpenAIResponseFormat(
      type: ChatOpenAIResponseFormatType.jsonObject,
    ),
  ),
);
final parser = JsonOutputParser<ChatResult>();
final chain = model.pipe(parser);
final stream = chain.stream(
  PromptValue.string(
    'Output a list of the countries france, spain and japan and their '
    'populations in JSON format. Use a dict with an outer key of '
    '"countries" which contains a list of countries. '
    'Each country should have the key "name" and "population"',
  ),
);
await stream.forEach((final chunk) => print('$chunk|'));
// {}|
// {countries: []}|
// {countries: [{name: France}]}|
// {countries: [{name: France, population: 67076000}, {}]}|
// {countries: [{name: France, population: 67076000}, {name: Spain}]}|
// {countries: [{name: France, population: 67076000}, {name: Spain, population: 46723749}]}|
// {countries: [{name: France, population: 67076000}, {name: Spain, population: 46723749}, {name: Japan}]}|
// {countries: [{name: France, population: 67076000}, {name: Spain, population: 46723749}, {name: Japan, population: 126476461}]}|
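When you don't need streaming, the same chain can simply be invoked to get the fully parsed map in one go (the prompt below is an abbreviated variant of the one above):

final res = await chain.invoke(
  PromptValue.string(
    'Output a JSON object with an outer key "countries" containing a list '
    'of the countries france, spain and japan and their populations.',
  ),
);
print(res['countries']); // The complete, parsed list of countries.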
🗺️ Mapping input values
Mapping the output value of a previous runnable to a new value that matches the input requirements of the next runnable is a common task. Previous versions provided the Runnable.mapInput method for custom mapping logic, but it offered no control over the stream of input values when streaming. With this release, you can use the new Runnable.mapInputStream method to take full control over the input stream.
For example, you may want to output only the last element of the input stream: (full code)
final mapper = Runnable.mapInputStream((Stream<String> inputStream) async* {
  yield await inputStream.last;
});
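Combined with the JsonOutputParser chain from the previous section, for instance, this lets you emit only the final, complete map instead of every partial parse (a sketch reusing the model defined above):

final lastOnly = Runnable.mapInputStream(
  (Stream<Map<String, dynamic>> inputStream) async* {
    // Wait for the stream to finish and emit only the final parsed map.
    yield await inputStream.last;
  },
);
final jsonChain = model.pipe(JsonOutputParser<ChatResult>()).pipe(lastOnly);
// jsonChain.stream(...) now emits a single, fully parsed map.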
If you need to define separate logic for invoke and stream operations, Runnable.fromFunction has been updated to let you specify the invoke logic, the stream logic, or both, providing greater flexibility. This refactoring of Runnable.fromFunction resulted in a minor breaking change; see the migration guide for more information.
In this example, we create a runnable that we can use in our chains to debug the output of the previous step. It prints different information when the chain is invoked vs streamed. (full code)
Runnable<T, RunnableOptions, T> logOutput<T extends Object>(String stepName) {
  return Runnable.fromFunction<T, T>(
    invoke: (input, options) {
      // Called once with the full input when the chain is invoked.
      print('Output from step "$stepName":\n$input\n---');
      return Future.value(input);
    },
    stream: (inputStream, options) {
      // Called with the input stream when the chain is streamed.
      return inputStream.map((input) {
        print('Chunk from step "$stepName":\n$input\n---');
        return input;
      });
    },
  );
}
final chain = Runnable.getMapFromInput<String>('equation_statement')
    .pipe(logOutput('getMapFromInput'))
    .pipe(promptTemplate)
    .pipe(logOutput('promptTemplate'))
    .pipe(ChatOpenAI(apiKey: openaiApiKey))
    .pipe(logOutput('chatModel'))
    .pipe(StringOutputParser())
    .pipe(logOutput('outputParser'));
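Invoking the chain exercises the invoke callbacks, while streaming exercises the stream callbacks (the input sentence is just an example):

// Prints one 'Output from step ...' block per step.
await chain.invoke('x raised to the third plus seven equals 12');

// Prints a 'Chunk from step ...' block for every streamed chunk instead.
await chain.stream('x raised to the third plus seven equals 12').drain();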
🙆 Non-streaming components
Previously, all LangChain.dart components processed streaming input item by item. This made sense for some components, such as output parsers, but was problematic for others. For example, you don't want a retriever to run a search for every streamed chunk; instead, you want it to wait for the full query before performing the search.
This has been fixed in this release: from now on, the following components reduce/aggregate the streaming input from the previous step into a single value before processing it (see the sketch after the list):
PromptTemplate
ChatPromptTemplate
LLM
ChatModel
Retriever
Tool
RunnableFunction
RunnableRouter
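For example, here is a sketch of the new retriever behavior (retriever stands for any configured retriever; the two-chunk input stream is contrived):

// A contrived input stream that emits the query in two chunks.
Stream<String> queryChunks() async* {
  yield 'What is ';
  yield 'LCEL?';
}

// The retriever aggregates the chunks into the full query
// 'What is LCEL?' and performs a single search.
final docs = retriever.streamFromInputStream(queryChunks());
await docs.forEach(print);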
📚 Improved LCEL docs
We have revamped the LangChain Expression Language documentation. It now includes a dedicated section explaining the different primitives available in LCEL. Also, a new page has been added specifically covering streaming.
- Sequence: Chaining runnables
- Map: Formatting inputs & concurrency
- Passthrough: Passing inputs through
- Mapper: Mapping inputs
- Function: Run custom logic
- Binding: Configuring runnables
- Router: Routing inputs
Changes
Packages with breaking changes:
- langchain - v0.6.0+1
- langchain_core - v0.2.0+1
Packages with other changes:
- openai_dart - v0.2.2
langchain - v0.6.0+1
- FEAT: Add support for Runnable.mapInputStream (#393). (a2b6bbb5)
- FEAT: Add support for JsonOutputParser (#392). (c6508f0f)
- FEAT: Reduce input stream for PromptTemplate, LLM, ChatModel, Retriever and Tool (#388). (b59bcd40)
- BREAKING FEAT: Support different logic for streaming in RunnableFunction (#394). (8bb2b8ed)
- FIX: Allow async functions in Runnable.mapInput (#396). (e4c35092)
- DOCS: Update LangChain Expression Language documentation (#395). (6ce75e5f)
langchain_core - v0.2.0+1
- FEAT: Add support for Runnable.mapInputStream (#393). (a2b6bbb5)
- FEAT: Add support for JsonOutputParser (#392). (c6508f0f)
- FEAT: Reduce input stream for PromptTemplate, LLM, ChatModel, Retriever and Tool (#388). (b59bcd40)
- BREAKING FEAT: Support different logic for streaming in RunnableFunction (#394). (8bb2b8ed)
- FIX: Allow async functions in Runnable.mapInput (#396). (e4c35092)
openai_dart - v0.2.2
📣 Check out the #announcements channel in the LangChain.dart Discord server for more details.