
init bedrock docs section + added some params to bedrock adapter #121

Open
wants to merge 2 commits into base: latest
29 changes: 29 additions & 0 deletions docs/docs/learn/003-adapters/004-bedrock/001-overview.mdx
@@ -0,0 +1,29 @@
---
sidebar_label: "Overview"
---

import { CodeEditor } from "@site/src/components/CodeEditor/CodeEditor";
import exampleUrlFileAiChatBot from "./examples/aiAssistant-default";

# Bedrock Adapter

`NLUX` offers integration with Amazon Bedrock through the AWS Bedrock Runtime SDK.

## About Bedrock

Amazon Bedrock is a fully managed AWS service that provides access to foundation models from multiple AI providers through a unified API. It handles model invocation, response streaming, and inference configuration, so applications can integrate generative AI without managing model infrastructure. This documentation provides an overview of the `NLUX` Bedrock adapter and how to configure it.
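
For a sense of what the adapter wraps, here is a minimal sketch of calling Bedrock's `Converse` API directly with the AWS SDK. The region and model ID are illustrative placeholders, not values prescribed by this adapter.

```typescript
import {
  BedrockRuntimeClient,
  ConverseCommand,
} from "@aws-sdk/client-bedrock-runtime";

// Placeholder region — use the region where your models are enabled.
const client = new BedrockRuntimeClient({ region: "us-east-1" });

const response = await client.send(
  new ConverseCommand({
    modelId: "anthropic.claude-3-haiku-20240307-v1:0", // placeholder model ID
    messages: [{ role: "user", content: [{ text: "Hello, Bedrock!" }] }],
    // Base inference parameters supported by Converse:
    inferenceConfig: { maxTokens: 512, temperature: 0.7 },
  })
);

// The assistant's reply comes back as content blocks on the output message.
console.log(response.output?.message?.content?.[0]?.text);
```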

## Supported Features

The `NLUX` Bedrock adapter can do the following:

- Offers a way to **send a user prompt to a Bedrock API**.
- **Handles the API responses** and displays them in the chat UI (single responses and streamed text; see the streaming sketch below).

If you have specific requirements that are not covered by the adapter, please submit a feature request
[here](https://github.com/nlkitai/nlux/discussions) and we will consider adding it to the roadmap
if enough users request it.
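
To illustrate the streamed-text path, the snippet below is a minimal sketch of Bedrock's `ConverseStream` API, which is what streaming maps to on the AWS side. This sketches the underlying service call rather than the adapter's internals, and the region and model ID are placeholders.

```typescript
import {
  BedrockRuntimeClient,
  ConverseStreamCommand,
} from "@aws-sdk/client-bedrock-runtime";

const client = new BedrockRuntimeClient({ region: "us-east-1" }); // placeholder region

const response = await client.send(
  new ConverseStreamCommand({
    modelId: "anthropic.claude-3-haiku-20240307-v1:0", // placeholder model ID
    messages: [{ role: "user", content: [{ text: "Tell me a short story." }] }],
  })
);

// Each stream event may carry a text delta; write tokens as they arrive.
for await (const event of response.stream ?? []) {
  const delta = event.contentBlockDelta?.delta?.text;
  if (delta) {
    process.stdout.write(delta);
  }
}
```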

## Example: Bedrock Adapter With Default Settings

Coming Soon.
9 changes: 9 additions & 0 deletions docs/docs/learn/003-adapters/004-bedrock/_category_.json
@@ -0,0 +1,9 @@
{
"label": "Bedrock Adapter",
"collapsible": true,
"collapsed": true,
"link": {
"type": "generated-index",
"slug": "/learn/adapters/bedrock"
}
}
@@ -0,0 +1,32 @@
export default `import {AiChat} from '@nlux/react';
import {useChatAdapter} from '@nlux/langchain-react';
import '@nlux/themes/nova.css';

export default () => {
// LangServe adapter that connects to a demo LangChain Runnable API
const adapter = useChatAdapter({
url: 'https://pynlux.api.nlkit.com/pirate-speak',
dataTransferMode: 'batch'
});

return (
<AiChat
adapter={adapter}
personaOptions={{
assistant: {
name: 'Feather-AI',
avatar: 'https://docs.nlkit.com/nlux/images/personas/feather.png',
tagline: 'Yer AI First Mate!'
},
user: {
name: 'Alex',
avatar: 'https://docs.nlkit.com/nlux/images/personas/alex.png'
}
}}
layoutOptions={{
height: 320,
maxWidth: 600
}}
/>
);
};`;
@@ -0,0 +1,31 @@
export default `import {AiChat} from '@nlux/react';
import {useChatAdapter} from '@nlux/langchain-react';
import '@nlux/themes/nova.css';

export default () => {
// LangServe adapter that connects to a demo LangChain Runnable API
const adapter = useChatAdapter({
url: 'https://pynlux.api.nlkit.com/pirate-speak'
});

return (
<AiChat
adapter={adapter}
personaOptions={{
assistant: {
name: 'Feather',
avatar: 'https://docs.nlkit.com/nlux/images/personas/feather.png',
tagline: 'Yer AI First Mate!'
},
user: {
name: 'Alex',
avatar: 'https://docs.nlkit.com/nlux/images/personas/alex.png'
}
}}
layoutOptions={{
height: 320,
maxWidth: 600
}}
/>
);
};`;
161 changes: 97 additions & 64 deletions packages/js/bedrock/src/bedrock/builder/builder.ts
@@ -1,69 +1,102 @@
import {
  BedrockRuntimeClientConfigType,
  InferenceConfiguration,
} from "@aws-sdk/client-bedrock-runtime";
import {
  ChatAdapterBuilder as CoreChatAdapterBuilder,
  DataTransferMode,
  StandardChatAdapter,
} from "@nlux/core";

export interface ChatAdapterBuilder<AiMsg>
  extends CoreChatAdapterBuilder<AiMsg> {
  /**
   * Create a new Bedrock Inference API adapter.
   * Adapter users don't need to call this method directly. It will be called by nlux when the adapter is
   * expected to be created.
   *
   * @returns {StandardChatAdapter}
   */
  create(): StandardChatAdapter<AiMsg>;

  /**
   * The AWS credentials to use for the Bedrock Inference API.
   * These are passed to the underlying Bedrock runtime client and used to sign requests.
   * If no credentials are provided, the client falls back to the default AWS credential provider chain.
   *
   * @optional
   * @param {BedrockRuntimeClientConfigType["credentials"]} cred
   * @returns {ChatAdapterBuilder}
   */
  withCredintial(
    cred: BedrockRuntimeClientConfigType["credentials"]
  ): ChatAdapterBuilder<AiMsg>;

  /**
   * Instruct the adapter to connect to the API and load data either in streaming mode or in batch mode.
   * The `stream` mode uses protocols such as WebSockets or server-sent events, and nlux will display data
   * as it's being generated by the server. The `batch` mode uses a single request to fetch data, and the
   * response is only displayed once the entire message has loaded.
   *
   * @optional
   * @default 'stream'
   * @returns {ChatAdapterBuilder}
   */
  withDataTransferMode(mode: DataTransferMode): ChatAdapterBuilder<AiMsg>;

  /**
   * Inference parameters to pass to the model. <code>Converse</code> supports a base
   * set of inference parameters. If you need to pass additional parameters that the model
   * supports, use the <code>additionalModelRequestFields</code> request field.
   *
   * @param {InferenceConfiguration} inferenceConfig
   * @returns {ChatAdapterBuilder}
   */
  withInferenceConfig(
    inferenceConfig: InferenceConfiguration
  ): ChatAdapterBuilder<AiMsg>;

  /**
   * The model to use for the Bedrock Inference API.
   *
   * @param {string} model
   * @returns {ChatAdapterBuilder}
   */
  withModel(model: string): ChatAdapterBuilder<AiMsg>;

  /**
   * The region to use for the Bedrock Inference API.
   *
   * @optional
   * @param {string} region
   * @returns {ChatAdapterBuilder}
   */
  withRegion(region: string): ChatAdapterBuilder<AiMsg>;

  /**
   * The endpoint to use for the Bedrock Inference API.
   *
   * @optional
   * @param {string} endpoint
   * @returns {ChatAdapterBuilder}
   */
  withEndpoint(endpoint: string): ChatAdapterBuilder<AiMsg>;

  /**
   * The maximum number of attempts when retrying Bedrock Inference API calls.
   *
   * @optional
   * @param {number} maxAttempts
   * @returns {ChatAdapterBuilder}
   */
  withMaxAttempts(maxAttempts: number): ChatAdapterBuilder<AiMsg>;

  /**
   * Unique service identifier.
   *
   * @optional
   * @param {string} serviceId
   * @returns {ChatAdapterBuilder}
   */
  withServiceId(serviceId: string): ChatAdapterBuilder<AiMsg>;
}
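
For reference, below is a minimal sketch of how this builder's methods might be chained. The `createChatAdapter` factory and the `@nlux/bedrock` package name are assumptions based on other NLUX adapter packages, and the model ID and region are illustrative placeholders, none of which are confirmed by this PR.

```typescript
// Hypothetical usage sketch — `createChatAdapter` and `@nlux/bedrock`
// are assumptions based on other NLUX adapter packages.
import { createChatAdapter } from "@nlux/bedrock";

const adapter = createChatAdapter()
  .withModel("anthropic.claude-3-haiku-20240307-v1:0") // placeholder model ID
  .withRegion("us-east-1") // placeholder region
  .withDataTransferMode("stream")
  .withInferenceConfig({ maxTokens: 512, temperature: 0.7 })
  .withMaxAttempts(3);
```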