📦 NEW: BaseAI CLI Meeting to memo agent #56

Open · wants to merge 8 commits into main
21 changes: 21 additions & 0 deletions examples/agents/meeting-to-memo/.env.baseai.example
@@ -0,0 +1,21 @@
# !! SERVER SIDE ONLY !!
# Keep all your API keys secret — use only on the server side.

# TODO: ADD: to both your production and local env files.
# Langbase API key for your User or Org account.
# How to get this API key https://langbase.com/docs/api-reference/api-keys
LANGBASE_API_KEY=

# TODO: ADD: LOCAL ONLY. Add only to local env files.
# The following keys are needed for local pipe runs, for the providers you are using.
# For Langbase, please add the key to your LLM keysets.
# Read more: Langbase LLM Keysets https://langbase.com/docs/features/keysets
OPENAI_API_KEY=
ANTHROPIC_API_KEY=
COHERE_API_KEY=
FIREWORKS_API_KEY=
GOOGLE_API_KEY=
GROQ_API_KEY=
MISTRAL_API_KEY=
PERPLEXITY_API_KEY=
TOGETHER_API_KEY=
9 changes: 9 additions & 0 deletions examples/agents/meeting-to-memo/.gitignore
@@ -0,0 +1,9 @@
# baseai
**/.baseai/
node_modules
.env
package-lock.json
pnpm-lock.yaml
# env file
.env
.vscode
53 changes: 53 additions & 0 deletions examples/agents/meeting-to-memo/README.md
@@ -0,0 +1,53 @@
![Meeting to Memo Agent by ⌘ BaseAI][cover]

![License: MIT][mit] [![Fork on ⌘ Langbase][fork]][pipe]

## Build a Meeting to Memo Agent with BaseAI framework — ⌘ Langbase

This AI agent is built using the BaseAI framework. It leverages an agentic pipeline that integrates 30+ LLMs (including OpenAI, Gemini, Mistral, Llama, Gemma, etc.) and can handle any data, with context sizes of up to 10M+ tokens, supported by memory. The framework is compatible with any front-end framework (such as React, Remix, Astro, Next.js), giving you, as a developer, the freedom to tailor your AI application exactly as you envision.
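
For orientation, here is a minimal sketch of how this agent's pipe can be run from Node.js with `@baseai/core`. It mirrors the `index.ts` included in this example; the user message shown is only a placeholder prompt.

```ts
import 'dotenv/config';
import { Pipe } from '@baseai/core';
import pipeMeetingToMemoAgent from './baseai/pipes/meeting-to-memo-agent';

// Instantiate the pipe from its local definition (see the pipe file in this example).
const pipe = new Pipe(pipeMeetingToMemoAgent());

async function run() {
	// Ask the agent to turn the attached meeting notes (the memo-docs memory) into a memo.
	const { completion } = await pipe.run({
		messages: [{ role: 'user', content: 'Summarize the attached meeting notes as a memo.' }]
	});
	console.log(completion);
}

run();
```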

## Features

- Meeting to Memo Agent — Built with [BaseAI framework and agentic Pipe ⌘ ][qs].
- Composable Agents — Build and compose agents with BaseAI (see the memory sketch after this list).
- Add and sync the deployed pipe from Langbase locally with `npx baseai@latest add` ([see the Code button][pipe]).
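
As a concrete illustration of that composability, here is the `memo-docs` memory from this example (reproduced from `baseai/memory/memo-docs/index.ts` below); the pipe then attaches it via `memory: [memoryMemoDocs()]` in `baseai/pipes/meeting-to-memo-agent.ts`.

```ts
import { MemoryI } from '@baseai/core';
import path from 'path';

// A named memory that the meeting-to-memo pipe attaches for document context.
const memoryMemoDocs = (): MemoryI => ({
	name: 'memo-docs',
	description: 'memory documents',
	config: {
		useGitRepo: false,
		dirToTrack: path.posix.join(''),
		extToTrack: ['*']
	}
});

export default memoryMemoDocs;
```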

## Learn more

1. Check the [Learning path to build an agentic AI pipe with ⌘ BaseAI][learn]
2. Read the [source code on GitHub][gh] for this agent example
3. Go through the documentation: [Pipe Quick Start][qs]
4. Learn more about [Memory features in ⌘ BaseAI][memory]
5. Learn more about [Tool calls support in ⌘ BaseAI][toolcalls]


> NOTE:
> This is a BaseAI project; you can deploy BaseAI pipes, memory, and tool calls on Langbase.

---

## Authors

This project was created by [Langbase][lb] team members, with contributions from:

- Muhammad-Ali Danish - Software Engineer, [Langbase][lb] <br>
**_Built by ⌘ [Langbase.com][lb] — Ship hyper-personalized AI assistants with memory!_**


[lb]: https://langbase.com
[pipe]: https://langbase.com/examples/meeting-to-memo-agent
[gh]: https://github.com/LangbaseInc/baseai/tree/main/examples/agents/meeting-to-memo-agent
[cover]:https://raw.githubusercontent.com/LangbaseInc/docs-images/main/baseai/baseai-cover.png
[download]:https://download-directory.github.io/?url=https://github.com/LangbaseInc/baseai/tree/main/examples/agents/meeting-to-memo-agent
[learn]:https://baseai.dev/learn
[memory]:https://baseai.dev/docs/memory/quickstart
[toolcalls]:https://baseai.dev/docs/tools/quickstart
[deploy]:https://baseai.dev/docs/deployment/authentication
[signup]: https://langbase.fyi/io
[qs]:https://baseai.dev/docs/pipe/quickstart
[docs]:https://baseai.dev/docs
[xaa]:https://x.com/MrAhmadAwais
[xab]:https://x.com/AhmadBilalDev
[local]:http://localhost:9000
[mit]: https://img.shields.io/badge/license-MIT-blue.svg?style=for-the-badge&color=%23000000
[fork]: https://img.shields.io/badge/FORK%20ON-%E2%8C%98%20Langbase-000000.svg?style=for-the-badge&logo=%E2%8C%98%20Langbase&logoColor=000000
18 changes: 18 additions & 0 deletions examples/agents/meeting-to-memo/baseai/baseai.config.ts
@@ -0,0 +1,18 @@
import type { BaseAIConfig } from 'baseai';

export const config: BaseAIConfig = {
log: {
isEnabled: false,
logSensitiveData: false,
pipe: true,
'pipe.completion': true,
'pipe.request': true,
'pipe.response': true,
tool: true,
memory: true
},
memory: {
useLocalEmbeddings: false
},
envFilePath: '.env'
};
@@ -0,0 +1,52 @@
Meeting Details:
- Date: August 3, 2024
- Time: 2:00 PM - 4:00 PM
- Duration: 120 minutes
- Format: Video Call
- Type: Technical
- Purpose: Discuss integration of new image models into existing traffic monitoring solution

Participants:
1. Alex Rivera (CTO, TrafficTech Solutions)
2. Priya Patel (Lead Engineer, TrafficTech Solutions)
3. Dr. Yuki Tanaka (AI Research Director, VisualAI Services)
4. Carlos Mendoza (Integration Specialist, VisualAI Services)
5. Lisa Chen (Project Manager, TrafficTech Solutions)

Agenda:
1. Overview of current traffic monitoring system
2. Presentation of new image models by VisualAI Services
3. Discussion of integration challenges and solutions
4. Performance benchmarks and testing protocols
5. Timeline and resource allocation

Discussion Notes:
- Alex presented an overview of TrafficTech's current system, highlighting areas for improvement in vehicle classification and incident detection.
- Dr. Tanaka introduced VisualAI's latest image models, focusing on their enhanced accuracy in low-light conditions and ability to distinguish between vehicle types.
- Priya raised concerns about the computational requirements of the new models and their impact on real-time processing.
- Carlos suggested a phased integration approach, starting with offline testing before moving to real-time implementation.
- The group discussed the need for additional training data specific to TrafficTech's deployment environments.
- Lisa emphasized the importance of maintaining system uptime during the integration process.
- The team debated the trade-offs between model accuracy and processing speed.

Decisions Made:
1. Agreed to proceed with a pilot integration of VisualAI's models, focusing on vehicle classification and incident detection modules.
2. Decided to use a hybrid approach, combining edge computing for basic processing and cloud resources for more complex analyses.
3. Approved the creation of a joint testing team with members from both companies.

Action Items:
1. Priya to provide VisualAI with a sample dataset from TrafficTech's current deployments by August 10.
2. Dr. Tanaka to refine models based on TrafficTech's specific use cases and provide initial benchmarks by August 24.
3. Carlos to develop an integration plan with minimal disruption to existing systems by August 17.
4. Lisa to create a project timeline and resource allocation plan by August 15.
5. Alex to secure cloud resources for expanded processing capabilities by August 20.

Open Issues:
1. Specific performance metrics for acceptable model accuracy vs. processing time need to be defined.
2. Data privacy concerns regarding the use of real traffic data for model training require legal review.
3. Long-term storage solutions for the increased data generated by new models need to be explored.

Next Steps:
- Schedule a follow-up meeting for August 26 to review initial integration results and refine the project plan.
- Set up a shared repository for collaborative development and testing.
- Arrange site visits for VisualAI team to observe current traffic monitoring installations.
14 changes: 14 additions & 0 deletions examples/agents/meeting-to-memo/baseai/memory/memo-docs/index.ts
@@ -0,0 +1,14 @@
import { MemoryI } from '@baseai/core';
import path from 'path';

const memoryMemoDocs = (): MemoryI => ({
name: 'memo-docs',
description: 'memory documents',
config: {
useGitRepo: false,
dirToTrack: path.posix.join(''),
extToTrack: ['*']
}
});

export default memoryMemoDocs;
@@ -0,0 +1,43 @@
import { PipeI } from '@baseai/core';
import memoryMemoDocs from '../memory/memo-docs';

const pipeMeetingToMemoAgent = (): PipeI => ({
// Replace with your API key https://langbase.com/docs/api-reference/api-keys
apiKey: process.env.LANGBASE_API_KEY!,
name: `meeting-to-memo-agent`,
description: `Turn business and technical discussion summaries into a memo with insights, actions, and schedules`,
status: `private`,
model: `openai:gpt-4o-mini`,
stream: true,
json: false,
store: true,
moderate: true,
top_p: 0.75,
max_tokens: 3000,
temperature: 0.41,
presence_penalty: 0.5,
frequency_penalty: 0.5,
stop: [],
tool_choice: 'auto',
parallel_tool_calls: true,
messages: [
{
role: 'system',
content:
"You are an AI memo agent designed to summarize various types of meetings, including business and technical discussions. Your capabilities include:\n\n1. Synthesizing complex information into clear, concise summaries\n2. Identifying key decisions, action items, and open issues\n3. Organizing information in a structured, easy-to-read format\n4. Adapting to different meeting types and contexts\n\nGuidelines:\n1. Maintain a professional and neutral tone\n2. Prioritize clarity and brevity without omitting crucial information\n3. Use bullet points for lists and action items\n4. Include all participants' contributions, ensuring fair representation\n5. Highlight technical terms or jargon that may need explanation\n\nMeeting Input:\nProvided as an attached document in the CONTEXT\n\nPlease generate a memo based on the input provided, following this format:\n\n---\n# Meeting Memo\n\n**Date**: [Extract from input]\n**Time**: [Extract from input]\n**Duration**: [Extract from input]\n**Format**: [Extract from input]\n**Type**: [Extract from input]\n**Purpose**: [Summarize from input]\n\n## Participants\n[List all participants with their roles and affiliations]\n\n## Executive Summary\n[2-3 sentence overview of the meeting's key outcomes]\n\n## Agenda and Key Discussion Points\n- [List main topics with brief summaries]\n\n## Participant Contributions\n- [Name]: [Key points and insights]\n- [Name]: [Key points and insights]\n[Continue for all participants]\n\n## Decisions Made\n1. [Decision 1]\n2. [Decision 2]\n[Continue as needed]\n\n## Action Items\n1. [Action item 1] - Assigned to: [Name], Due: [Date]\n2. [Action item 2] - Assigned to: [Name], Due: [Date]\n[Continue as needed]\n\n## Open Issues\n- [Open issue 1]\n- [Open issue 2]\n[Continue as needed]\n\n## Next Steps\n- [Next step 1]\n- [Next step 2]\n[Continue as needed]\n\n---\n\nAfter generating the memo, please review it to ensure:\n1. All key information is accurately captured\n2. The summary is clear and concise\n3. Technical terms are used correctly (if applicable)\n4. The format is followed consistently\n5. In summary include, actions, insights and scheduling information so that it can be further taken into consideration by scheduling and calendar AI agents."
},
{ name: 'json', role: 'system', content: '' },
{ name: 'safety', role: 'system', content: '' },
{
name: 'opening',
role: 'system',
content: 'Welcome to Langbase. Prompt away!'
},
{ name: 'rag', role: 'system', content: '' }
],
variables: [],
tools: [],
memory: [memoryMemoDocs()],
});

export default pipeMeetingToMemoAgent;
58 changes: 58 additions & 0 deletions examples/agents/meeting-to-memo/index.ts
@@ -0,0 +1,58 @@
import 'dotenv/config';
import { Pipe } from '@baseai/core';
import inquirer from 'inquirer';
import ora from 'ora';
import chalk from 'chalk';
import pipeMeetingToMemoAgent from './baseai/pipes/meeting-to-memo-agent';


const pipe = new Pipe(pipeMeetingToMemoAgent());

async function main() {

const initialSpinner = ora('Checking attached content type...').start();
try {
const { completion: initialReportAgentResponse } = await pipe.run({
messages: [{ role: 'user', content: 'Check if the attached document in CONTEXT can be styled as a memo; if yes, respond with the extracted memo date and a one-line summary.' }],
});
initialSpinner.stop();
console.log(chalk.cyan('Memo Agent response...'));
console.log(initialReportAgentResponse);
} catch (error) {
initialSpinner.stop();
console.error(chalk.red('Error processing initial request:'), error);
}

while (true) {
const { userMsg } = await inquirer.prompt([
{
type: 'input',
name: 'userMsg',
message: chalk.blue('Enter your query (or type "exit" to quit):'),
},
]);

if (userMsg.toLowerCase() === 'exit') {
console.log(chalk.green('Goodbye!'));
break;
}

const spinner = ora('Processing your request...').start();

try {
const { completion: memoAgentResponse } = await pipe.run({
messages: [{ role: 'user', content: userMsg }],
});

spinner.stop();
console.log(chalk.cyan('Agent:'));
console.log(memoAgentResponse);
} catch (error) {
spinner.stop();
console.error(chalk.red('Error processing your request:'), error);
}
}
}

main();
22 changes: 22 additions & 0 deletions examples/agents/meeting-to-memo/package.json
@@ -0,0 +1,22 @@
{
"name": "meeting-to-memo-agent",
"version": "1.0.0",
"main": "index.js",
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1",
"baseai": "baseai"
},
"keywords": [],
"author": "",
"license": "ISC",
"description": "",
"dependencies": {
"@baseai/core": "^0.9.5",
"dotenv": "^16.4.5",
"inquirer": "^12.0.0",
"ora": "^8.1.0"
},
"devDependencies": {
"baseai": "^0.9.5"
}
}