Commit
feat!: add backend module (#301)
Ref: #208
Tomas2D authored Feb 11, 2025
1 parent a6e87ff commit 06bf1e4
Showing 254 changed files with 5,717 additions and 13,035 deletions.
1 change: 1 addition & 0 deletions .embedmeignore
@@ -0,0 +1 @@
docs/README.md
61 changes: 36 additions & 25 deletions .env.template
@@ -7,46 +7,56 @@ BEE_FRAMEWORK_LOG_SINGLE_LINE="false"
# BEE_FRAMEWORK_INSTRUMENTATION_ENABLED=true
# BEE_FRAMEWORK_INSTRUMENTATION_IGNORED_KEYS=

# For WatsonX LLM Adapter
# For Watsonx LLM Adapter
# WATSONX_CHAT_MODEL=""
# WATSONX_EMBEDDING_MODEL=""
# WATSONX_API_KEY=""
# WATSONX_PROJECT_ID=""
# WATSONX_REGION="us-south"
# WATSONX_SPACE_ID=""
# WATSONX_VERSION=""
# WATSONX_REGION=""

# For Ollama LLM Adapter
# OLLAMA_HOST="http://0.0.0.0:11434"
# OLLAMA_MODEL="deepseek-r1:8b"
# OLLAMA_CHAT_MODEL=""
# OLLAMA_EMBEDDING_MODEL=""
# OLLAMA_BASE_URL=""

# For OpenAI LLM Adapter
# OPENAI_CHAT_MODEL=""
# OPENAI_EMBEDDING_MODEL=""
# OPENAI_API_ENDPOINT=""
# OPENAI_API_KEY=""
# OPENAI_API_HEADERS=""

# For Azure OpenAI LLM Adapter
# AZURE_OPENAI_API_VERSION=""
# AZURE_OPENAI_API_DEPLOYMENT=""
# AZURE_OPENAI_CHAT_MODEL=""
# AZURE_OPENAI_EMBEDDING_MODEL=""
# AZURE_OPENAI_API_KEY=""
# AZURE_OPENAI_API_ENDPOINT=""
# AZURE_OPENAI_API_RESOURCE=""
# AZURE_OPENAI_API_VERSION=""

# For Groq LLM Adapter
# GROQ_CHAT_MODEL=""
# GROQ_EMBEDDING_MODEL=""
# GROQ_API_HOST=""
# GROQ_API_KEY=""

# For IBM VLLM LLM Adapter
# IBM_VLLM_URL=""
# IBM_VLLM_ROOT_CERT=""
# IBM_VLLM_CERT_CHAIN=""
# IBM_VLLM_PRIVATE_KEY=""

# For IBM RITS LLM Adapter
# IBM_RITS_URL=""
# IBM_RITS_API_KEY=""
# IBM_RITS_MODEL=ibm-granite/granite-3.0-8b-instruct

# LLM Provider, used for some of the example agents
# (watsonx/ollama/openai/groq/ibmvllm/ibmrits)
# LLM_BACKEND="ollama"

# For GCP VertexAI Adapter
# For Google Vertex Adapter
# GOOGLE_VERTEX_CHAT_MODEL=""
# GOOGLE_VERTEX_EMBEDDING_MODEL=""
# GOOGLE_VERTEX_PROJECT=""
# GOOGLE_VERTEX_ENDPOINT=""
# GOOGLE_VERTEX_LOCATION=""
# GOOGLE_APPLICATION_CREDENTIALS=""
# GCP_VERTEXAI_PROJECT=""
# GCP_VERTEXAI_LOCATION=""

# For Amazon Bedrock
# AWS_CHAT_MODEL=""
# AWS_EMBEDDING_MODEL=""
# AWS_ACCESS_KEY_ID=""
# AWS_SECRET_ACCESS_KEY=""
# AWS_REGION=""
# AWS_SESSION_TOKEN=""

# Tools
# CODE_INTERPRETER_URL="http://127.0.0.1:50081"
@@ -63,4 +73,5 @@ BEE_FRAMEWORK_LOG_SINGLE_LINE="false"
# ELASTICSEARCH_API_KEY=""

## Third-party services
# TAVILY_API_KEY=your-api-key-here
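The `LLM_BACKEND` switch above drives provider selection in some of the example agents. A minimal, hypothetical TypeScript sketch of how such a selector might validate the value is shown below; the function and constant names are illustrative only, not the framework's actual API.

```typescript
// Hypothetical sketch (not the framework's API): validating the
// LLM_BACKEND value from the environment. The accepted values mirror
// the comment in .env.template; the fallback matches the template's
// suggested default of "ollama".
const KNOWN_BACKENDS = ["watsonx", "ollama", "openai", "groq", "ibmvllm", "ibmrits"] as const;
type Backend = (typeof KNOWN_BACKENDS)[number];

function resolveBackend(raw: string | undefined): Backend {
  // Unset or unrecognized values fall back to the template's default.
  return (KNOWN_BACKENDS as readonly string[]).includes(raw ?? "")
    ? (raw as Backend)
    : "ollama";
}

console.log(resolveBackend("groq")); // → "groq"
```

An example agent could call `resolveBackend(process.env.LLM_BACKEND)` at startup and fail fast on typos rather than at the first model call.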

2 changes: 1 addition & 1 deletion .github/workflows/examples-tests.yml
@@ -48,7 +48,7 @@ jobs:
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
GOOGLE_API_KEY: ${{ secrets.GOOGLE_SEARCH_API_KEY }}
GOOGLE_CSE_ID: ${{ secrets.GOOGLE_SEARCH_CSE_ID }}
# TODO: enable WatsonX later
# TODO: enable Watsonx later
# WATSONX_API_KEY: ${{ secrets.WATSONX_API_KEY }}
# WATSONX_PROJECT_ID: ${{ secrets.WATSONX_PROJECT_ID }}
# WATSONX_SPACE_ID: ${{ secrets.WATSONX_SPACE_ID }}
4 changes: 0 additions & 4 deletions .gitignore
@@ -1,7 +1,3 @@
scripts/ibm_vllm_generate_protos/dist
scripts/ibm_vllm_generate_protos/dts
scripts/ibm_vllm_generate_protos/types

### Node template
# Logs
logs
21 changes: 1 addition & 20 deletions CONTRIBUTING.md
@@ -59,26 +59,7 @@ yarn install --immutable
yarn prepare
```

5. **Setup environmental variables:** To run E2E Tests, you should set the following variables in your `.env` file in the repository’s root.

```bash
# At least one provider API key or an OLLAMA_HOST must be defined!
OPENAI_API_KEY=""
GROQ_API_KEY=""
WATSONX_API_KEY=""
WATSONX_PROJECT_ID=""
OLLAMA_HOST=""
AZURE_OPENAI_API_VERSION=""
AZURE_OPENAI_DEPLOYMENT=""
AZURE_OPENAI_API_KEY=""
AZURE_OPENAI_API_ENDPOINT=""
GOOGLE_APPLICATION_CREDENTIALS=""
GCP_VERTEXAI_PROJECT=""
GCP_VERTEXAI_LOCATION=""

WATSONX_SPACE_ID="" # optional
WATSONX_DEPLOYMENT_ID="" # optional
```
5. **Set up environment variables:** To run E2E tests, set the required environment variables in your `.env` file.

6. **Follow Conventional Commit Messages:** We use [Conventional Commits](https://www.conventionalcommits.org/en/v1.0.0/#summary) to structure our commit messages. This helps maintain a clean and manageable commit history. Please use the following format:

46 changes: 23 additions & 23 deletions README.md
@@ -14,10 +14,11 @@
<h4 align="center">Open-source framework for building, deploying, and serving powerful multi-agent workflows at scale.</h4>
</p>

🐝 **Bee Agent Framework** is an open-source TypeScript library for building **production-ready multi-agent systems**. Pick from a variety of [🌐 LLM providers](/docs/llms.md#providers-adapters), customize the [📜 prompt templates](/docs/templates.md), create [🤖 agents](/docs/agents.md), equip agents with pre-made [🛠️ tools](/docs/tools.md), and orchestrate [🤖🤝🤖 multi-agent workflows](/docs/workflows.md)! 🪄
🐝 **Bee Agent Framework** is an open-source TypeScript library for building **production-ready multi-agent systems**. Pick from a variety of [🌐 AI Providers](/docs/backend.md), customize the [📜 prompt templates](/docs/templates.md), create [🤖 agents](/docs/agents.md), equip agents with pre-made [🛠️ tools](/docs/tools.md), and orchestrate [🤖🤝🤖 multi-agent workflows](/docs/workflows.md)! 🪄

## Latest updates

- 🚀 **2025-02-07**: Introduced [Backend](/docs/backend.md) module to simplify working with AI services (chat, embedding). See [migration guide](/docs/migration_guide.md).
- 🧠 **2025-01-28**: Added support for [DeepSeek R1](https://api-docs.deepseek.com/news/news250120), check out the [Competitive Analysis Workflow example](https://github.com/i-am-bee/bee-agent-framework/tree/main/examples/workflows/competitive-analysis)
- 🚀 **2025-01-09**:
- Introduced [Workflows](/docs/workflows.md), a way of building multi-agent systems.
@@ -30,7 +31,7 @@ For a full changelog, see the [releases page](https://github.com/i-am-bee/bee-ag
## Why pick Bee?

- ⚔️ **Battle-tested.** Bee Agent Framework is at the core of [BeeAI](https://iambee.ai), a powerful platform for building chat assistants and custom AI-powered apps. BeeAI is in a closed beta, but already used by hundreds of users. And it's [fully open-source](https://github.com/i-am-bee/bee-ui) too!
- 🚀 **Production-grade.** In an actual product, you have to reduce token spend through [memory strategies](/docs/memory.md), store and restore the agent state through [(de)serialization](/docs/serialization.md), generate [structured output](/examples/llms/structured.ts), or execute generated code in a [sandboxed environment](https://github.com/i-am-bee/bee-code-interpreter). Leave all that to Bee and focus on building!
- 🚀 **Production-grade.** In an actual product, you have to reduce token spend through [memory strategies](/docs/memory.md), store and restore the agent state through [(de)serialization](/docs/serialization.md), generate [structured output](/examples/backend/structured.ts), or execute generated code in a [sandboxed environment](https://github.com/i-am-bee/bee-code-interpreter). Leave all that to Bee and focus on building!
- 🤗 **Built for open-source models.** Pick any LLM you want – including small and open-source models. The framework is designed to perform robustly with [Granite](https://www.ibm.com/granite/docs/) and [Llama 3.x](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct). A full agentic workflow can run on your laptop!
- 😢 **Bee cares about the sad path too.** Real-world applications encounter errors and failures. Bee lets you observe the full agent workflow through [events](/docs/emitter.md), collect [telemetry](/docs/instrumentation.md), [log](/docs/logger.md) diagnostic data, and throws clear and well-defined [exceptions](/docs/errors.md). Bees may be insects, but not bugs!
- 🌳 **A part of something greater.** Bee isn't just a framework, but a full ecosystem. Use [Bee UI](https://github.com/i-am-bee/bee-ui) to chat with your agents visually. [Bee Observe](https://github.com/i-am-bee/bee-observe) collects and manages telemetry. [Bee Code Interpreter](https://github.com/i-am-bee/bee-code-interpreter) runs generated code safely in a secure sandbox. The Bee ecosystem also integrates with [Model Context Protocol](https://i-am-bee.github.io/bee-agent-framework/#/tools?id=using-the-mcptool-class), allowing interoperability with the wider agent ecosystem!
@@ -47,7 +48,7 @@ import { UnconstrainedMemory } from "bee-agent-framework/memory/unconstrainedMem
import { OpenMeteoTool } from "bee-agent-framework/tools/weather/openMeteo";
import { WikipediaTool } from "bee-agent-framework/tools/search/wikipedia";
import { AgentWorkflow } from "bee-agent-framework/experimental/workflows/agent";
import { BaseMessage, Role } from "bee-agent-framework/llms/primitives/message";
import { Message, Role } from "bee-agent-framework/llms/primitives/message";
import { GroqChatLLM } from "bee-agent-framework/adapters/groq/chat";

const workflow = new AgentWorkflow();
@@ -77,7 +78,7 @@ workflow.addAgent({
const memory = new UnconstrainedMemory();

await memory.add(
BaseMessage.of({
Message.of({
role: Role.USER,
text: "What is the capital of France and what is the current weather there?",
meta: { createdAt: new Date() },
@@ -119,16 +120,16 @@

```ts
import { BeeAgent } from "bee-agent-framework/agents/bee/agent";
import { OllamaChatLLM } from "bee-agent-framework/adapters/ollama/chat";
import { OllamaChatModel } from "bee-agent-framework/adapters/ollama/backend/chat";
import { TokenMemory } from "bee-agent-framework/memory/tokenMemory";
import { DuckDuckGoSearchTool } from "bee-agent-framework/tools/search/duckDuckGoSearch";
import { OpenMeteoTool } from "bee-agent-framework/tools/weather/openMeteo";

const llm = new OllamaChatLLM(); // default is llama3.1 (8B), it is recommended to use 70B model
const llm = new OllamaChatModel("llama3.1"); // default is llama3.1 (8B), it is recommended to use 70B model

const agent = new BeeAgent({
llm, // for more explore 'bee-agent-framework/adapters'
memory: new TokenMemory({ llm }), // for more explore 'bee-agent-framework/memory'
memory: new TokenMemory(), // for more explore 'bee-agent-framework/memory'
tools: [new DuckDuckGoSearchTool(), new OpenMeteoTool()], // for more explore 'bee-agent-framework/tools'
});

@@ -174,22 +175,21 @@ console.log(`Agent 🤖 : `, response.result.text);

The source directory (`src`) provides numerous modules that one can use.

| Name | Description |
| ------------------------------------------------ | ------------------------------------------------------------------------------------------- |
| [**agents**](/docs/agents.md) | Base classes defining the common interface for agent. |
| [**workflows**](/docs/workflows.md) | Build agentic applications in a declarative way via [workflows](/docs/workflows.md). |
| [**llms**](/docs/llms.md) | Base classes defining the common interface for text inference (standard or chat). |
| [**template**](/docs/templates.md) | Prompt Templating system based on `Mustache` with various improvements. |
| [**memory**](/docs/memory.md) | Various types of memories to use with agent. |
| [**tools**](/docs/tools.md) | Tools that an agent can use. |
| [**cache**](/docs/cache.md) | Preset of different caching approaches that can be used together with tools. |
| [**errors**](/docs/errors.md) | Error classes and helpers to catch errors fast. |
| [**adapters**](/docs/llms.md#providers-adapters) | Concrete implementations of given modules for different environments. |
| [**logger**](/docs/logger.md) | Core component for logging all actions within the framework. |
| [**serializer**](/docs/serialization.md) | Core component for the ability to serialize/deserialize modules into the serialized format. |
| [**version**](/docs/version.md) | Constants representing the framework (e.g., latest version) |
| [**emitter**](/docs/emitter.md) | Bringing visibility to the system by emitting events. |
| **internals** | Modules used by other modules within the framework. |
| Name | Description |
| ---------------------------------------- | ------------------------------------------------------------------------------------------- |
| [**agents**](/docs/agents.md)            | Base classes defining the common interface for agents.                                       |
| [**workflows**](/docs/workflows.md) | Build agentic applications in a declarative way via [workflows](/docs/workflows.md). |
| [**backend**](/docs/backend.md)          | Functionality related to AI models (chat, embedding, image, tool calling, ...).              |
| [**template**](/docs/templates.md) | Prompt Templating system based on `Mustache` with various improvements. |
| [**memory**](/docs/memory.md)            | Various types of memory to use with agents.                                                  |
| [**tools**](/docs/tools.md) | Tools that an agent can use. |
| [**cache**](/docs/cache.md) | Preset of different caching approaches that can be used together with tools. |
| [**errors**](/docs/errors.md) | Error classes and helpers to catch errors fast. |
| [**logger**](/docs/logger.md) | Core component for logging all actions within the framework. |
| [**serializer**](/docs/serialization.md) | Core component for the ability to serialize/deserialize modules into the serialized format. |
| [**version**](/docs/version.md) | Constants representing the framework (e.g., latest version) |
| [**emitter**](/docs/emitter.md) | Bringing visibility to the system by emitting events. |
| **internals** | Modules used by other modules within the framework. |
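The `template` row above describes a Mustache-based prompt-templating system. As an illustration only — not the framework's actual `PromptTemplate` API, which adds schemas, defaults, and escaping — Mustache-style interpolation can be sketched in a few lines of TypeScript:

```typescript
// Illustrative sketch of Mustache-style {{variable}} interpolation,
// in the spirit of the `template` module's description.
function render(template: string, vars: Record<string, string>): string {
  // Replace each {{name}} placeholder with its value (empty string if missing).
  return template.replace(/\{\{\s*(\w+)\s*\}\}/g, (_m, key: string) => vars[key] ?? "");
}

console.log(render("Hello, {{name}}! Today is {{day}}.", { name: "Bee", day: "Friday" }));
// → "Hello, Bee! Today is Friday."
```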

For a more in-depth explanation, see the [overview](/docs/overview.md).

21 changes: 1 addition & 20 deletions docs/CONTRIBUTING.md
@@ -59,26 +59,7 @@ yarn install --immutable
yarn prepare
```

5. **Setup environmental variables:** To run E2E Tests, you should set the following variables in your `.env` file in the repository’s root.

```bash
# At least one provider API key or an OLLAMA_HOST must be defined!
OPENAI_API_KEY=""
GROQ_API_KEY=""
WATSONX_API_KEY=""
WATSONX_PROJECT_ID=""
OLLAMA_HOST=""
AZURE_OPENAI_API_VERSION=""
AZURE_OPENAI_DEPLOYMENT=""
AZURE_OPENAI_API_KEY=""
AZURE_OPENAI_API_ENDPOINT=""
GOOGLE_APPLICATION_CREDENTIALS=""
GCP_VERTEXAI_PROJECT=""
GCP_VERTEXAI_LOCATION=""

WATSONX_SPACE_ID="" # optional
WATSONX_DEPLOYMENT_ID="" # optional
```
5. **Set up environment variables:** To run E2E tests, set the required environment variables in your `.env` file.

6. **Follow Conventional Commit Messages:** We use [Conventional Commits](https://www.conventionalcommits.org/en/v1.0.0/#summary) to structure our commit messages. This helps maintain a clean and manageable commit history. Please use the following format:
