Onthefly persona #100

Open
wants to merge 4 commits into base: main
75 changes: 75 additions & 0 deletions enterprise_edition/configuration.mdx
@@ -46,6 +46,81 @@ export API_KEY_HASH_ROUNDS=600000

Increasing this value enhances security but may impact performance. The default value provides a good balance for most use cases.

## Create persona on the fly

Endpoint: POST `/query/answer-with-quote`

Enterprise Edition users can dynamically create custom personas for AI assistants during API interactions. This feature allows for tailored AI behavior without pre-configuration.

### Using Persona Configuration in send-message Request

Include a `persona_config` in your POST request to the `/chat/send-message` endpoint to create a custom persona for that interaction.

### Persona Configuration Options

1. **Basic Information**:
- `name`: A unique identifier for your persona (e.g., "Renewable Energy Expert")
- `description`: Brief explanation of the persona's purpose

2. **Search Settings**:
- `search_type`: Choose from "hybrid", "semantic", or "keyword"
- `num_chunks`: Number of text chunks to retrieve (e.g., 5)
- `llm_relevance_filter`: Set to `true` or `false` to enable/disable AI-powered filtering

3. **Language Model**:
- `llm_model_provider_override`: Optional, specify a different AI model provider
- `llm_model_version_override`: Optional, choose a specific model version

4. **Prompts**: Include a list of prompt configurations, each containing:
- `name`: Identifier for the prompt
- `system_prompt`: Define the overall role and behavior of the AI
- `task_prompt`: Provide specific instructions for handling queries

5. **Knowledge Base**:
- `document_sets`: List of document set IDs the AI can access (e.g., `[{"id": 1}, {"id": 2}]`)

6. **Tools**: List of tools, each containing:
- `name`: Identifier for the tool (e.g., "Search tool")
- `description`: Brief explanation of the tool's function
- `display_name`: User-friendly name for the tool
- `id`: The ID of the tool you wish to enable for this persona
- `in_code_tool_id`: Optional, specify the ID for internal reference

### Usage Example

```json
{
  "persona_config": {
    "name": "Renewable Energy Expert",
    "description": "An expert in renewable energy technologies and trends",
    "search_type": "hybrid",
    "num_chunks": 5,
    "llm_relevance_filter": true,
    "prompts": [
      {
        "name": "Renewable Energy Advisor",
        "system_prompt": "You are an expert in renewable energy technologies.",
        "task_prompt": "Provide up-to-date information on renewable energy developments, focusing on accuracy and citing sources when possible."
      }
    ],
    "document_sets": [{"id": 1}, {"id": 2}],
    "tools": [
      {
        "name": "Search tool",
        "description": "Searches renewable energy databases",
        "display_name": "Renewable Energy Search",
        "in_code_tool_id": 1
      }
    ]
  },
  "chat_session_id": 1,
  "message": "What are the latest trends in solar energy?",
  "prompt_id": 1,
  "retrieval_options": {
    "filters": [],
    "use_all_docs": true
  }
}
```


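Below is a minimal sketch of sending this payload from Python. The base URL, the `Authorization` header, and the existing `chat_session_id`/`prompt_id` values are assumptions for illustration only; adjust them to match your deployment and authentication setup.

```python
import requests

# Assumptions for illustration -- replace with your deployment's URL and credentials.
DANSWER_URL = "http://localhost:3000"
API_KEY = "your-api-key"

payload = {
    "persona_config": {
        "name": "Renewable Energy Expert",
        "description": "An expert in renewable energy technologies and trends",
        "search_type": "hybrid",
        "num_chunks": 5,
        "llm_relevance_filter": True,
        "prompts": [
            {
                "name": "Renewable Energy Advisor",
                "system_prompt": "You are an expert in renewable energy technologies.",
                "task_prompt": "Provide up-to-date information on renewable energy developments.",
            }
        ],
        "document_sets": [{"id": 1}, {"id": 2}],
        "tools": [
            {
                "name": "Search tool",
                "description": "Searches renewable energy databases",
                "display_name": "Renewable Energy Search",
                "in_code_tool_id": 1,
            }
        ],
    },
    "chat_session_id": 1,  # assumes an existing chat session with ID 1
    "message": "What are the latest trends in solar energy?",
    "prompt_id": 1,
    "retrieval_options": {"filters": [], "use_all_docs": True},
}

# Send the message with the on-the-fly persona attached.
response = requests.post(
    f"{DANSWER_URL}/chat/send-message",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
)
response.raise_for_status()
print(response.text)  # the endpoint may stream its answer; this prints the raw body
```
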
If you have any questions, feel free to message us at [email protected] for clarification, or join our [Slack](https://join.slack.com/t/danswer/shared_invite/zt-2lcmqw703-071hBuZBfNEOGUsLa5PXvQ).
85 changes: 85 additions & 0 deletions guides/assistants.mdx
@@ -0,0 +1,85 @@
---
title: 'Creating Assistants'
description: 'A comprehensive guide to creating and configuring AI assistants in Danswer'
icon: 'square-plus'
---

## Introduction

Danswer empowers you to create custom AI assistants tailored to your specific needs. These assistants can be configured to handle a wide range of tasks, from answering HR queries to assisting with technical support. This guide will walk you through the process of creating an effective assistant using Danswer's platform.

![Assistant Configuration Interface](/images/guides/assistants/assistant_setup.png)


## Configuration Options

When creating an assistant in Danswer, you have several configuration options:

1. **Name**: The identifier for your assistant
2. **Description**: A brief overview of the assistant's purpose and capabilities
3. **System Prompt**: Defines the assistant's role and overall behavior
4. **Task Prompt**: Specifies how the assistant should handle user queries
5. **Tools**: Available integrations and capabilities
6. **Starter Messages**: Initial messages to guide user interaction
7. **LLM Provider**: The language model powering your assistant

![Assistant Search Configuration](/images/guides/assistants/assistant_search.png)

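As a quick mental model, the options above can be thought of as a single configuration object. The field names in the sketch below are illustrative only and do not represent an exact API schema; Enterprise Edition users can supply a similar `persona_config` through the API, as described in the Enterprise configuration page.

```python
# Illustrative sketch of the configuration options above -- field names are
# examples, not an exact schema.
assistant_config = {
    "name": "HR Helper",
    "description": "Answers questions about company HR policies and benefits",
    "system_prompt": "You are an HR assistant for Danswer Inc. ...",
    "task_prompt": "Search the HR documents, summarize the relevant policy, and cite the source.",
    "tools": ["Search"],
    "starter_messages": ["What can I help you with today?"],
    "llm_provider": "GPT-4",
}
```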

## Understanding Different Prompt Types

Danswer uses two main types of prompts when configuring an assistant: the System Prompt and the Task Prompt ("Additional Instructions"). Understanding the difference between these is crucial for creating an effective assistant.

### System Prompt
The System Prompt sets the overall context and behavior of your assistant. It defines:
- The assistant's role
- General behavioral guidelines
- Any limitations or restrictions

The System Prompt is like giving your assistant its job description and defining its overall purpose.

Example System Prompt:
```
You are an HR assistant for Danswer Inc. You have access to the company's HR policies, benefits packages, and procedures. Maintain a professional and friendly tone in all interactions. If you're unsure about any information, advise the user to contact the HR department directly. Do not make up information or policies.
```

### Task Prompt
The Task Prompt provides specific instructions on how the assistant should handle and respond to user queries. It defines:
- Steps to follow for each query
- How to process and present information
- When to ask for clarification
- How to handle different types of requests

The Task Prompt is like giving your assistant a specific protocol for handling each interaction.

Example Task Prompt:
```
Follow these steps when responding to a user query:
1. Identify the main HR topic or policy in the user's question.
2. Search the provided HR documents for relevant information.
3. Summarize the applicable policy or procedure in clear, concise language.
4. If multiple policies apply, list them in order of relevance.
5. Provide the source document and section for your information.
6. If the query is unclear, ask the user for clarification before providing an answer.
7. If the query is outside your knowledge base, politely direct the user to contact the HR department.
```
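
Conceptually, the two prompts play different roles when a query is answered: the System Prompt frames who the assistant is, while the Task Prompt governs how each individual query is handled alongside any retrieved documents. The sketch below is only a conceptual illustration of that separation, not Danswer's internal prompt assembly.

```python
# Conceptual sketch only -- not Danswer's internal implementation.
system_prompt = "You are an HR assistant for Danswer Inc. Do not make up information or policies."
task_prompt = "Identify the HR topic, summarize the applicable policy, and cite the source document."

def build_messages(user_query: str, retrieved_docs: list[str]) -> list[dict]:
    """Combine role-level and query-level instructions into one chat request."""
    context = "\n\n".join(retrieved_docs)
    return [
        # The System Prompt: who the assistant is and its guardrails.
        {"role": "system", "content": system_prompt},
        # The Task Prompt: how to handle this specific query, plus the retrieved context.
        {"role": "user", "content": f"{task_prompt}\n\nDocuments:\n{context}\n\nQuestion: {user_query}"},
    ]

messages = build_messages("How many vacation days do new hires get?", ["<retrieved HR policy text>"])
```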



## Testing and Refinement

After creating your assistant:

1. Conduct test runs with various queries
2. Gather feedback from a small group of users
3. Use Danswer's analytics tools to identify areas for improvement
4. Regularly update and refine your assistant based on feedback and changing needs


## Sharing and Permissions

- Determine the appropriate access level for the assistant (e.g., specific departments, entire organization)
- Utilize Danswer's group features to manage access efficiently
- Consider creating multiple versions of an assistant for different user groups if needed

By carefully configuring these elements, you can create a Danswer assistant that effectively serves your organization's needs, providing accurate and helpful responses to user queries.
79 changes: 79 additions & 0 deletions guides/providers.mdx
@@ -0,0 +1,79 @@
---
title: 'LLM Providers'
description: 'A comprehensive guide to selecting and using LLM providers in Danswer'
icon: 'campground'
---

## Danswer's Approach to LLM Providers

Danswer is designed to be model-agnostic, offering you the flexibility to choose the Language Model (LLM) that best suits your needs. This approach ensures that you're not locked into a single provider and can leverage the strengths of different models for various tasks.


## Model Overview

Danswer offers integration with several popular LLM providers, with a focus on models from OpenAI and Anthropic. Here's an overview of key models:

### OpenAI Models

#### GPT-3.5-Turbo
- **Strengths**: High speed, good quality for general tasks
- **Best for**: Quick queries, general information retrieval
- **Knowledge cutoff**: September 2021

#### GPT-4
- **Strengths**: High-quality responses, strong reasoning capabilities
- **Best for**: Complex analysis, creative tasks, code generation
- **Knowledge cutoff**: April 2023
- **Note**: Capable of image analysis


### Anthropic Models

#### Claude-3 Opus
- **Strengths**: Exceptional performance in reasoning and analysis tasks
- **Best for**: Complex problem-solving, detailed explanations
- **Note**: Offers high accuracy but may have slower response times

#### Claude-3 Sonnet
- **Strengths**: Balance of performance and speed
- **Best for**: General-purpose tasks requiring good quality and reasonable speed
- **Note**: Good all-around performer for most use cases

#### Claude 3.5 Sonnet
- **Strengths**: Enhanced capabilities over Claude-3 Sonnet
- **Best for**: Advanced general-purpose tasks with improved performance
- **Note**: Recommended for most use cases due to its superior balance of capabilities

#### Claude-3 Haiku
- **Strengths**: Fast responses, efficient for simpler tasks
- **Best for**: Quick queries, real-time applications
- **Note**: Sacrifices some complexity for speed

For most use cases, Claude 3.5 Sonnet is recommended as it offers an excellent balance of advanced capabilities, performance, and speed, making it suitable for a wide range of applications.


## Custom Providers
Danswer allows you to add custom providers by integrating any model from the [LiteLLM providers list](https://docs.litellm.ai/docs/providers). This flexibility enables you to use specialized or proprietary models that best fit your organization's needs.
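
As a sketch, you can sanity-check that a LiteLLM-supported model responds before wiring it into Danswer. The model name and `api_base` below are assumptions (a locally running Ollama server); consult the LiteLLM documentation for the exact model string and any API keys your provider requires.

```python
# pip install litellm
import litellm

# Assumption: a local Ollama server exposing a "llama3" model.
response = litellm.completion(
    model="ollama/llama3",              # "<provider>/<model>" format from the LiteLLM providers list
    api_base="http://localhost:11434",  # provider endpoint -- adjust for your setup
    messages=[{"role": "user", "content": "Reply with a one-sentence test answer."}],
)
print(response.choices[0].message.content)
```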



## Choosing the Right Model

Consider these factors when selecting a model:

1. **Task Complexity**: More complex tasks benefit from advanced models like GPT-4 or Claude-3 Opus.
2. **Response Speed**: For quick responses, consider faster models like GPT-3.5-Turbo or Claude-3 Haiku.
3. **Cost Considerations**: More advanced models typically have higher usage costs.
4. **Data Privacy**: For strict data policies, consider open-source models like Llama 2 that can be self-hosted.
5. **Specific Strengths**: Some models excel in particular areas (e.g., coding, analysis, creativity).

We recommend testing different models with your typical queries to determine which performs best for your specific use cases.

## Leveraging Model Flexibility

1. **Experiment**: Try different models for the same task to compare results.
2. **Monitor Performance**: Track which models perform best for different types of queries.
3. **Stay Updated**: Regularly check for updates and new model releases.
4. **Custom Integration**: Explore the option of integrating specialized models from the LiteLLM providers list for unique use cases.

Binary file added images/guides/assistants/assistant_search.png
Binary file added images/guides/assistants/assistant_setup.png
8 changes: 7 additions & 1 deletion mint.json
@@ -82,14 +82,20 @@
"oidc_saml"
]
},

{
"group": "Enterprise",
"pages": [
"enterprise_edition/overview",
"enterprise_edition/configuration"
]
},
{
"group": "Guides",
"pages": [
"guides/assistants",
"guides/providers"
]
},
{
"group": "Connectors",
"icon": "plug",
2 changes: 1 addition & 1 deletion more/options.mdx
@@ -27,7 +27,7 @@ If you must delete a connector and related information before the process of wai

3. On the connector's page, the final number in the URL is the connector's ID. In this example, the connector's ID is 2. Here is a zoomed-in view of the URL bar.

![Zoomed in Connector ID](/images/more/zoomConnectorID.png)m
![Zoomed in Connector ID](/images/more/zoomConnectorID.png)


#### Deleting Locally