Merge pull request #204 from raspawar/raspawar/nvidia_notebooks
NVIDIA Marketing Strategy Example Notebook
joaomdmoura authored Jan 3, 2025
2 parents 9295194 + 436b0dd commit fe723ea
Showing 5 changed files with 163 additions and 5 deletions.
3 changes: 2 additions & 1 deletion nvidia_models/intro/README.md
@@ -4,10 +4,11 @@
This is a simple example using the CrewAI framework with an NVIDIA endpoint and langchain-nvidia-ai-endpoints integration.

## Running the Script
This example uses the Azure OpenAI API to call a model.
This example showcases the NVIDIA NIM endpoint integration with CrewAI.

- **Configure Environment**: Set `NVIDIA_API_KEY` to an appropriate API key.
Set `MODEL` to select the appropriate model.
Set `NVIDIA_API_URL` to select the endpoint (API catalog or local NIM endpoint), as in the sketch after this list.
- **Install Dependencies**: Run `make install`.
- **Execute the Script**: Run `python main.py` to see a list of recommended changes to this document.
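
A minimal sketch of that configuration in Python, assuming the variables are set before the script runs (a shell `export` works just as well; the values shown are placeholders):

```python
import os

os.environ["NVIDIA_API_KEY"] = "nvapi-..."          # your NVIDIA API key (placeholder)
os.environ["MODEL"] = "meta/llama-3.1-8b-instruct"  # model to use
# API catalog endpoint, or a local NIM endpoint such as "http://localhost:8000/v1"
os.environ["NVIDIA_API_URL"] = "https://integrate.api.nvidia.com/v1"
```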

3 changes: 2 additions & 1 deletion nvidia_models/intro/main.py
@@ -116,7 +116,8 @@ def set_callbacks(self, callbacks: List[Any]):


model = os.environ.get("MODEL", "meta/llama-3.1-8b-instruct")
llm = ChatNVIDIA(model=model)
api_base = os.environ.get("NVIDIA_API_URL", "https://integrate.api.nvidia.com/v1")
llm = ChatNVIDIA(model=model, base_url=api_base)
default_llm = nvllm(model_str="nvidia_nim/" + model, llm=llm)

os.environ["NVIDIA_NIM_API_KEY"] = os.environ.get("NVIDIA_API_KEY")
2 changes: 1 addition & 1 deletion nvidia_models/marketing_strategy/README.md
@@ -35,7 +35,7 @@ It uses meta/llama-3.1-8b-instruct by default so you should have access to that

***Disclaimer:** This will use gpt-4o unless you change it to use a different model, and by doing so it may incur different costs.*

- **Configure Environment**: Copy `.env.example` and set up the environment variables for [OpenAI](https://platform.openai.com/api-keys) and other tools as needed, like [Serper](serper.dev).
- **Configure Environment**: Copy `.env.example` and set up the environment variables for [NVIDIA](https://build.nvidia.com) and other tools as needed, like [Serper](serper.dev).
- **Install Dependencies**: Run `make install`.
- **Customize**: Modify `src/marketing_posts/main.py` to add custom inputs for your agents and tasks.
- **Customize Further**: Check `src/marketing_posts/config/agents.yaml` to update your agents and `src/marketing_posts/config/tasks.yaml` to update your tasks.
155 changes: 155 additions & 0 deletions nvidia_models/marketing_strategy/marketing_posts.ipynb
@@ -0,0 +1,155 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# NVIDIA NIMs\n",
"\n",
"The `langchain-nvidia-ai-endpoints` package contains LangChain integrations building applications with models on \n",
"NVIDIA NIM inference microservice. NIM supports models across domains like chat, embedding, and re-ranking models \n",
"from the community as well as NVIDIA. These models are optimized by NVIDIA to deliver the best performance on NVIDIA \n",
"accelerated infrastructure and deployed as a NIM, an easy-to-use, prebuilt containers that deploy anywhere using a single \n",
"command on NVIDIA accelerated infrastructure.\n",
"\n",
"NVIDIA hosted deployments of NIMs are available to test on the [NVIDIA API catalog](https://build.nvidia.com/). After testing, \n",
"NIMs can be exported from NVIDIA’s API catalog using the NVIDIA AI Enterprise license and run on-premises or in the cloud, \n",
"giving enterprises ownership and full control of their IP and AI application.\n",
"\n",
"This example goes over how to use LangChain to interact with NVIDIA supported via the `ChatNVIDIA` class to implement Marketing Post CrewAI Agent.\n",
"\n",
"For more information on accessing the chat models through this api, check out the [ChatNVIDIA](https://python.langchain.com/docs/integrations/chat/nvidia_ai_endpoints/) documentation."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet marketing_posts"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup\n",
"\n",
"Import our dependencies and set up our NVIDIA API key from the API catalog, https://build.nvidia.com for the two models we'll use hosted on the catalog (embedding and re-ranking models).\n",
"\n",
"**To get started:**\n",
"\n",
"1. Create a free account with [NVIDIA](https://build.nvidia.com/), which hosts NVIDIA AI Foundation models.\n",
"\n",
"2. Click on your model of choice.\n",
"\n",
"3. Under Input select the Python tab, and click `Get API Key`. Then click `Generate Key`.\n",
"\n",
"4. Copy and save the generated key as NVIDIA_API_KEY. From there, you should have access to the endpoints."
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"# del os.environ['NVIDIA_API_KEY'] ## delete key and reset\n",
"if os.environ.get(\"NVIDIA_API_KEY\", \"\").startswith(\"nvapi-\"):\n",
" print(\"Valid NVIDIA_API_KEY already in environment. Delete to reset\")\n",
"else:\n",
" nvapi_key = getpass.getpass(\"NVAPI Key (starts with nvapi-): \")\n",
" assert nvapi_key.startswith(\n",
" \"nvapi-\"\n",
" ), f\"{nvapi_key[:5]}... is not a valid key\"\n",
" os.environ[\"NVIDIA_API_KEY\"] = nvapi_key"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# set API Endoipoint\n",
"# to call local model set NVIDIA_API_URL to local NIM endpoint\n",
"os.environ[\"NVIDIA_API_URL\"] = \"http://localhost:8000/v1\" # for local NIM container\n",
"# os.environ[\"NVIDIA_API_URL\"] = \"https://integrate.api.nvidia.com/v1\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Setup model using environment variable MODEL as below"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"#set model\n",
"os.environ[\"MODEL\"] = \"meta/llama-2-7b-chat\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Import the run function and kickoff the marketting creawai agent"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"from marketing_posts.main import run"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"run()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
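
For reference, the notebook's flow condensed into a plain script (a sketch under the same assumptions: the `marketing_posts` package from this example is installed, e.g. via `make install`, and an `nvapi-` key is available):

```python
import getpass
import os

# Prompt for the key only if a valid-looking one is not already set.
if not os.environ.get("NVIDIA_API_KEY", "").startswith("nvapi-"):
    os.environ["NVIDIA_API_KEY"] = getpass.getpass("NVAPI Key (starts with nvapi-): ")

# Catalog endpoint by default; point at a local NIM (e.g. http://localhost:8000/v1) if preferred.
os.environ["NVIDIA_API_URL"] = "https://integrate.api.nvidia.com/v1"
os.environ["MODEL"] = "meta/llama-3.1-8b-instruct"

# Import after the environment is set, since crew.py reads MODEL and NVIDIA_API_URL at import time.
from marketing_posts.main import run  # noqa: E402

run()
```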
5 changes: 3 additions & 2 deletions nvidia_models/marketing_strategy/src/marketing_posts/crew.py
@@ -16,10 +16,11 @@
load_dotenv()

model = os.getenv("MODEL", "meta/llama-3.1-8b-instruct")
llm = ChatNVIDIA(model=model)
api_base = os.environ.get("NVIDIA_API_URL", "https://integrate.api.nvidia.com/v1")
llm = ChatNVIDIA(model=model, base_url=api_base)
default_llm = nvllm(model_str="nvidia_nim/" + model, llm=llm)

os.environ["NVIDIA_NIM_API_KEY"] = os.getenv("NVIDIA_API_KEY")
os.environ["NVIDIA_API_KEY"] = os.getenv("NVIDIA_API_KEY")


class MarketStrategy(BaseModel):
