This repository has been archived by the owner on Nov 28, 2024. It is now read-only.

pull from main #11

Merged
6 changes: 5 additions & 1 deletion .gitignore
Expand Up @@ -159,4 +159,8 @@ cython_debug/
# be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
# and can be added to the global gitignore or merged into this file. For a more nuclear
# option (not recommended) you can uncomment the following to ignore the entire idea folder.
#.idea/
.idea/

.conda

outputs
52 changes: 50 additions & 2 deletions README.md
@@ -1,2 +1,50 @@
# llama-design-drive-repo
See https://aierlab.tech/llama-design-drive/ for more information.

# 🌍 LLM Travel Alert

## 📖 Introduction

LLM Travel Alert is an open source project that leverages advanced language models to provide real-time travel advisories and alerts. Building on the MultiverseNote backend, it integrates enhanced tool support and uses Ollama as the service backend to deliver timely, accurate information to travelers worldwide.

By transforming AI chatbot interactions into structured, actionable alerts, LLM Travel Alert aims to enhance the safety and convenience of global travelers. The platform introduces a branch management system where users can discuss general travel topics on the main branch and initiate new branches for specific destinations or issues, allowing for systematic and continuous information development.
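
A minimal sketch of that branch model, assuming a simple in-memory registry; the names here (`BranchRegistry`, `fork`, `merge`) are hypothetical and not part of this PR:

```python
# Illustrative sketch only: BranchRegistry, fork, and merge are hypothetical
# names, not actual project APIs.


class BranchRegistry:
    def __init__(self):
        # Branch 0 is the main branch for general travel discussion.
        self.branches = {0: []}
        self._next_id = 1

    def fork(self, source_id: int) -> int:
        """Start a new branch (e.g., for one destination) from an existing one."""
        branch_id = self._next_id
        self._next_id += 1
        self.branches[branch_id] = list(self.branches[source_id])
        return branch_id

    def merge(self, source_id: int, target_id: int) -> None:
        """Fold one branch into another, e.g., merge branch 145 into 123."""
        self.branches[target_id].extend(self.branches[source_id])
        del self.branches[source_id]


registry = BranchRegistry()
tokyo = registry.fork(0)  # new branch for a specific destination
registry.branches[tokyo].append({"role": "user", "content": "Alerts for Tokyo?"})
registry.merge(tokyo, 0)  # fold the findings back into the main branch
```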

## 🚀 New Features

- **Tool Support**: Integration with various travel information sources and APIs for comprehensive data aggregation; a sketch of a tool definition follows this list.
- **Ollama Backend Integration**: Utilizes Ollama as the service backend for improved performance and scalability.
- **User-Friendly Interface**: An intuitive UI that displays all branches, allowing users to manage and customize their alerts through drag-and-drop functionality.
- **Advanced Branching System**: Identify and merge branches using short numeric identifiers (e.g., 123 or 145), enabling better version control and personalized alert management.
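
The notebook in this PR loads its tool definitions from `tools/tools_register.json`. That file is not shown in this diff, so the entry below is an assumption; it follows the standard OpenAI function-calling schema and matches the `get_current_weather(city)` tool the notebook actually invokes:

```python
import json

# Assumed shape of one entry in tools/tools_register.json (OpenAI
# function-calling schema); the description text is illustrative.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Return the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"},
                },
                "required": ["city"],
            },
        },
    }
]

print(json.dumps(tools, indent=2))
```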

## 💻 Platform and Development Environment

- **Frontend**: Developed in JavaScript, suitable for building web and mobile application interfaces.
- **Backend**: Modified from the MultiverseNote backend, now using Python with Ollama integration (see the connection sketch after this list).
- **Supported Operating Systems**: Windows, macOS, Linux
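
Ollama exposes an OpenAI-compatible HTTP API on port 11434, which is how the notebook in this PR drives it. A minimal connection sketch, assuming a local Ollama server with `llama3.1` pulled (the prompt is illustrative):

```python
from openai import OpenAI

# Ollama's OpenAI-compatible endpoint; Ollama ignores the api_key value,
# but the client library requires one to be set.
client = OpenAI(base_url="http://localhost:11434/v1/", api_key="ollama")

response = client.chat.completions.create(
    model="llama3.1",
    messages=[{"role": "user", "content": "Any travel alerts for Paris?"}],
)
print(response.choices[0].message.content)
```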

## 🤝 How to Join

We welcome developers and contributors interested in AI, travel safety, and turning conversational models into structured alert systems. You can participate by:

1. Checking our [Issues](#) page to understand the current tasks that need assistance.
2. Submitting your contributions via Fork and Pull Request.
3. Joining our community discussions to help improve the project.

For more information, please visit our website at [https://aierlab.tech/llm-travel-alert/](https://aierlab.tech/llm-travel-alert/) or email us at [[email protected]](mailto:[email protected]).

<!-- ## 📚 Table of Contents

- [Getting Started](docs/en/getting_started.md)
- [Installation Guide](docs/en/installation_guide.md)
- [User Manual](docs/en/user_manual.md)
- [Developer Documentation](docs/en/developer_documentation.md)
- [FAQ and Troubleshooting](docs/en/faq_and_troubleshooting.md) -->

## 🛠 Development Plan

- **Phase 1**: Integrate the MultiverseNote backend with Ollama support for real-time data processing.
- **Phase 2**: Implement enhanced tool support for travel data integration and user interface features.
- **Phase 3**: Test and optimize the application across all supported operating systems.
- **Phase 4**: Gather community feedback and iterate on features to improve user experience.

## 📜 Open Source License

This project is released under the Apache License 2.0. See the [LICENSE](LICENSE) file for details.
298 changes: 298 additions & 0 deletions archive/ollama_with_tool_gpt_compatability.ipynb
@@ -0,0 +1,298 @@
{
"cells": [
{
"cell_type": "code",
"execution_count": 65,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Requirement already satisfied: ollama in d:\\workspace\\aierlab_env\\llama_design_drive\\llama-design-drive\\.conda\\lib\\site-packages (0.3.3)\n",
"Requirement already satisfied: openai in d:\\workspace\\aierlab_env\\llama_design_drive\\llama-design-drive\\.conda\\lib\\site-packages (1.50.2)\n",
"Requirement already satisfied: httpx<0.28.0,>=0.27.0 in d:\\workspace\\aierlab_env\\llama_design_drive\\llama-design-drive\\.conda\\lib\\site-packages (from ollama) (0.27.2)\n",
"Requirement already satisfied: anyio<5,>=3.5.0 in d:\\workspace\\aierlab_env\\llama_design_drive\\llama-design-drive\\.conda\\lib\\site-packages (from openai) (4.6.0)\n",
"Requirement already satisfied: distro<2,>=1.7.0 in d:\\workspace\\aierlab_env\\llama_design_drive\\llama-design-drive\\.conda\\lib\\site-packages (from openai) (1.9.0)\n",
"Requirement already satisfied: jiter<1,>=0.4.0 in d:\\workspace\\aierlab_env\\llama_design_drive\\llama-design-drive\\.conda\\lib\\site-packages (from openai) (0.5.0)\n",
"Requirement already satisfied: pydantic<3,>=1.9.0 in d:\\workspace\\aierlab_env\\llama_design_drive\\llama-design-drive\\.conda\\lib\\site-packages (from openai) (2.9.2)\n",
"Requirement already satisfied: sniffio in d:\\workspace\\aierlab_env\\llama_design_drive\\llama-design-drive\\.conda\\lib\\site-packages (from openai) (1.3.1)\n",
"Requirement already satisfied: tqdm>4 in d:\\workspace\\aierlab_env\\llama_design_drive\\llama-design-drive\\.conda\\lib\\site-packages (from openai) (4.66.5)\n",
"Requirement already satisfied: typing-extensions<5,>=4.11 in c:\\users\\disco\\appdata\\roaming\\python\\python310\\site-packages (from openai) (4.11.0)\n",
"Requirement already satisfied: idna>=2.8 in d:\\workspace\\aierlab_env\\llama_design_drive\\llama-design-drive\\.conda\\lib\\site-packages (from anyio<5,>=3.5.0->openai) (3.10)\n",
"Requirement already satisfied: exceptiongroup>=1.0.2 in c:\\users\\disco\\appdata\\roaming\\python\\python310\\site-packages (from anyio<5,>=3.5.0->openai) (1.2.0)\n",
"Requirement already satisfied: certifi in d:\\workspace\\aierlab_env\\llama_design_drive\\llama-design-drive\\.conda\\lib\\site-packages (from httpx<0.28.0,>=0.27.0->ollama) (2024.8.30)\n",
"Requirement already satisfied: httpcore==1.* in d:\\workspace\\aierlab_env\\llama_design_drive\\llama-design-drive\\.conda\\lib\\site-packages (from httpx<0.28.0,>=0.27.0->ollama) (1.0.5)\n",
"Requirement already satisfied: h11<0.15,>=0.13 in d:\\workspace\\aierlab_env\\llama_design_drive\\llama-design-drive\\.conda\\lib\\site-packages (from httpcore==1.*->httpx<0.28.0,>=0.27.0->ollama) (0.14.0)\n",
"Requirement already satisfied: annotated-types>=0.6.0 in d:\\workspace\\aierlab_env\\llama_design_drive\\llama-design-drive\\.conda\\lib\\site-packages (from pydantic<3,>=1.9.0->openai) (0.7.0)\n",
"Requirement already satisfied: pydantic-core==2.23.4 in d:\\workspace\\aierlab_env\\llama_design_drive\\llama-design-drive\\.conda\\lib\\site-packages (from pydantic<3,>=1.9.0->openai) (2.23.4)\n",
"Requirement already satisfied: colorama in c:\\users\\disco\\appdata\\roaming\\python\\python310\\site-packages (from tqdm>4->openai) (0.4.6)\n"
]
}
],
"source": [
"! pip install ollama openai"
]
},
{
"cell_type": "code",
"execution_count": 66,
"metadata": {},
"outputs": [],
"source": [
"import json\n",
"\n",
"messages=[]\n",
"messages.append({'role': 'system', 'content': 'You are a helpful customer support assistant.'})\n",
"\n",
"file_path=\"../tools/tools_register.json\"\n",
"\n",
"# Load the JSON data from the file\n",
"with open(file_path, 'r') as file:\n",
" tools = json.load(file)\n",
" \n",
" \n",
"FUNCTION_MAPPING = dict(\n",
" get_current_weather=lambda city : \"sunny\"\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 67,
"metadata": {},
"outputs": [],
"source": [
"# import ollama\n",
"\n",
"# response = ollama.chat(\n",
"# model='llama3.1',\n",
"# messages=messages,\n",
"# tools=tools,\n",
"# )\n",
"\n",
"# print(response['message']['tool_calls'])\n",
"\n",
"# # ollama not support yet\n",
"# assistant = client.beta.assistants.create(\n",
"# name=\"Testing\",\n",
"# instructions=\"Bot for test\",\n",
"# tools=tools,\n",
"# model=\"llama3.1\"\n",
"# )"
]
},
{
"cell_type": "code",
"execution_count": 95,
"metadata": {},
"outputs": [],
"source": [
"def append_to_message(response, messages):\n",
" role, content = response.choices[0].message.role, response.choices[0].message.content\n",
" messages.append(dict(role=role, content=content))\n",
"\n",
"\n",
"def handle_tool_call(response, messages):\n",
" append_to_message(response, messages)\n",
" \n",
" # Iterate through tool calls to handle each weather check\n",
" for tool_call in response.choices[0].message.tool_calls:\n",
" function = FUNCTION_MAPPING.get(tool_call.function.name)\n",
" arguments = json.loads(tool_call.function.arguments)\n",
" function_output = function(**arguments)\n",
" \n",
" tool_output = {\n",
" \"role\": \"tool\",\n",
" \"tool_call_id\": tool_call.id\n",
" }\n",
" \n",
" \n",
" if tool_call.function.name == \"some_specific_name\":\n",
" # need to engineer the output content\n",
" pass\n",
" elif tool_call.function.name == \"get_current_weather\":\n",
" tool_output[\"content\"] = json.dumps(dict(weather=function_output))\n",
" else:\n",
" tool_output[\"content\"] = json.dumps(function_output)\n",
" \n",
" \n",
" messages.append(tool_output)\n",
"\n",
"def handle_normal_response(response, messages):\n",
" append_to_message(response, messages)\n",
" print(\"Assistant: \" + response.choices[0].message.content)\n",
" "
]
},
{
"cell_type": "code",
"execution_count": 96,
"metadata": {},
"outputs": [],
"source": [
"import openai\n",
"from openai import OpenAI\n",
"client = OpenAI(\n",
" base_url='http://localhost:11434/v1/',\n",
" api_key='ollama',\n",
")\n",
"\n",
"our_api_request_forced_a_tool_call = False"
]
},
{
"cell_type": "code",
"execution_count": 97,
"metadata": {},
"outputs": [],
"source": [
"messages.append({'role': 'user', 'content': 'try it again'})\n"
]
},
{
"cell_type": "code",
"execution_count": 98,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Log: Model made a tool call.\n",
"Log: Model responded directly to the user.\n",
"Assistant: Ha ha, looks like you're stuck in sunny mode!\n",
"\n",
"Seriously though, I can try to simulate different scenarios for you. Let's say... what if we updated the output to {\"weather\": \"cloudy\"}?\n"
]
}
],
"source": [
"\n",
"while(True): \n",
" response = client.chat.completions.create(\n",
" model=\"llama3.1\",\n",
" messages=messages,\n",
" tools=tools,\n",
" )\n",
" \n",
" # Check if the conversation was too long for the context window\n",
" if response.choices[0].finish_reason == \"length\":\n",
" print(\"Error: The conversation was too long for the context window.\")\n",
" # Handle the error as needed, e.g., by truncating the conversation or asking for clarification\n",
" handle_length_error(response)\n",
" break\n",
" \n",
" # Check if the model's output included copyright material (or similar)\n",
" if response.choices[0].finish_reason == \"content_filter\":\n",
" print(\"Error: The content was filtered due to policy violations.\")\n",
" # Handle the error as needed, e.g., by modifying the request or notifying the user\n",
" handle_content_filter_error(response)\n",
" break\n",
" \n",
" # Check if the model has made a tool_call. This is the case either if the \"finish_reason\" is \"tool_calls\" or if the \"finish_reason\" is \"stop\" and our API request had forced a function call\n",
" if (response.choices[0].finish_reason == \"tool_calls\" or \n",
" # This handles the edge case where if we forced the model to call one of our functions, the finish_reason will actually be \"stop\" instead of \"tool_calls\"\n",
" (our_api_request_forced_a_tool_call and response.choices[0].finish_reason == \"stop\")):\n",
" # Handle tool call\n",
" print(\"Log: Model made a tool call.\")\n",
" # Your code to handle tool calls\n",
" handle_tool_call(response, messages)\n",
" continue\n",
" \n",
" # Else finish_reason is \"stop\", in which case the model was just responding directly to the user\n",
" elif response.choices[0].finish_reason == \"stop\":\n",
" # Handle the normal stop case\n",
" print(\"Log: Model responded directly to the user.\")\n",
" # Your code to handle normal responses\n",
" handle_normal_response(response, messages)\n",
" break\n",
" \n",
" # Catch any other case, this is unexpected\n",
" else:\n",
" print(\"Unexpected finish_reason:\", response.choices[0].finish_reason)\n",
" # Handle unexpected cases as needed\n",
" handle_unexpected_case(response)\n",
" break"
]
},
{
"cell_type": "code",
"execution_count": 99,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[{'role': 'system',\n",
" 'content': 'You are a helpful customer support assistant.'},\n",
" {'role': 'user', 'content': 'try it again'},\n",
" ChatCompletionMessage(content='', refusal=None, role='assistant', function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call_9225di2c', function=Function(arguments='{\"city\":\"New York City\"}', name='get_current_weather'), type='function')]),\n",
" {'role': 'user', 'content': 'try it again'},\n",
" ChatCompletionMessage(content='You want me to try the tool call response with another input.\\n\\nSince I don\\'t have any new information, we can use the same data:\\n \\n{\"type\":\"function\",\"name\":null,\"parameters\":{}}\\n\\n\\n\\nSince there\\'s no function call output, please provide the one you\\'d like to test.', refusal=None, role='assistant', function_call=None, tool_calls=None),\n",
" ChatCompletionMessage(content='', refusal=None, role='assistant', function_call=None, tool_calls=None),\n",
" {'role': 'user', 'content': 'try it again'},\n",
" {'role': 'assistant', 'content': ''},\n",
" {'role': 'user', 'content': 'try it again'},\n",
" {'role': 'assistant', 'content': ''},\n",
" {'role': 'tool',\n",
" 'tool_call_id': 'call_ltouumfw',\n",
" 'content': '{\"weather\": \"sunny\"}'},\n",
" {'role': 'assistant',\n",
" 'content': 'You asked me to try using the tool call response. Based on the output {\"weather\": \"sunny\"}, I can provide an answer to your original question.\\n\\nSince you didn\\'t actually ask a question, let\\'s assume you wanted to know the current weather. In that case, based on the output, the current weather is \"sunny\".'},\n",
" {'role': 'assistant', 'content': ''},\n",
" {'role': 'user', 'content': 'try it again'},\n",
" {'role': 'assistant', 'content': ''},\n",
" {'role': 'tool',\n",
" 'tool_call_id': 'call_gptn8zvn',\n",
" 'content': '{\"weather\": \"sunny\"}'},\n",
" {'role': 'assistant',\n",
" 'content': 'You asked me to try using the tool call response again. Based on the output {\"weather\": \"sunny\"}, I can confirm that the current weather is indeed still \"sunny\".\\n\\nIf you\\'d like to know something else, feel free to ask and provide a new output from the tool call!'},\n",
" {'role': 'user', 'content': 'try it again'},\n",
" {'role': 'assistant', 'content': ''},\n",
" {'role': 'tool',\n",
" 'tool_call_id': 'call_8u0lnrcp',\n",
" 'content': '{\"weather\": \"sunny\"}'},\n",
" {'role': 'assistant',\n",
" 'content': 'Ha ha, looks like you\\'re stuck in sunny mode!\\n\\nSeriously though, I can try to simulate different scenarios for you. Let\\'s say... what if we updated the output to {\"weather\": \"cloudy\"}?'}]"
]
},
"execution_count": 99,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"messages"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.14"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
28 changes: 28 additions & 0 deletions backend/README.md
@@ -0,0 +1,28 @@
# Backend setup

```bash
cd backend

pip install -e .            # core package
pip install -e ".[dev]"     # development extras
pip install -e ".[test]"    # test extras

```

# Run all tests
> Currently running on Python 3.10

```bash
cd backend

python3.10 -m pytest -v
```

# Run individual test scripts
## Example: API functional tests

```bash
cd backend/tests/functional

pytest functional_api_test.py
```
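
The contents of `functional_api_test.py` are not included in this PR. As a rough illustration only, a functional API test in this layout might look like the sketch below; the base URL, endpoint path, and response shape are all assumptions:

```python
# Hypothetical sketch, not the actual test file from this PR: the base URL,
# the /api/alerts endpoint, and the JSON content type are assumptions.
import pytest

requests = pytest.importorskip("requests")  # skip cleanly if requests is absent

BASE_URL = "http://localhost:8000"  # assumed local backend address


def test_alert_endpoint_responds():
    response = requests.get(f"{BASE_URL}/api/alerts", timeout=5)
    assert response.status_code == 200
    assert response.headers["Content-Type"].startswith("application/json")
```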
1 change: 1 addition & 0 deletions backend/app/__init__.py
@@ -0,0 +1 @@
from .main import main
Empty file added backend/app/control/__init__.py
Empty file.