catalog-entry.yaml
name: Hands on Lab focused on LLMs
entries:
  - title: Hands on Lab Workshop with LLM
    label: llm-hol
    short_description: |
      This AMP is used for Cloudera Machine Learning Hands on Labs and focuses on CML's integrations with external components, as well as running a use case entirely in CML.
    long_description: |
      Hands on Lab which demonstrates a number of concepts including web scraping, vector databases, model deployment, model usage, Langchain, application building, and instruction following/tuning. IMPORTANT: Please read the following before proceeding. This AMP includes or otherwise depends on certain third party software packages. Information about such third party software packages are made available in the notice file associated with this AMP. By configuring and launching this AMP, you will cause such third party software packages to be downloaded and installed into your environment, in some instances, from third parties’ websites. For each third party software package, please see the notice file and the applicable websites for more information, including the applicable license terms. If you do not wish to download and install the third party software packages, do not configure, launch or otherwise use this AMP. By configuring, launching or otherwise using the AMP, you acknowledge the foregoing statement and agree that Cloudera is not responsible or liable in any way for the third party software packages.
    image_path: "https://raw.githubusercontent.com/cloudera/CML_llm-hol/main/assets/amp-cover.png"
    tags:
      - CML Labs
      - Exercises
      - LLM
      - Model Deployment
    git_url: "https://github.com/ogakulov/CML_llm-hol"
    is_prototype: true
  - title: Shared LLM Model for Hands on Lab
    label: llm-model-deploy
    short_description: |
      This AMP deploys the Mistral-7B model as a CML API endpoint. Requires a GPU node with a minimum of 4 vCores and 16 GB of memory.
    long_description: |
      This AMP deploys the Mistral-7B model as a CML API endpoint. Requires a GPU node with a minimum of 4 vCores and 16 GB of memory. IMPORTANT: Please read the following before proceeding. This AMP includes or otherwise depends on certain third party software packages. Information about such third party software packages are made available in the notice file associated with this AMP. By configuring and launching this AMP, you will cause such third party software packages to be downloaded and installed into your environment, in some instances, from third parties’ websites. For each third party software package, please see the notice file and the applicable websites for more information, including the applicable license terms. If you do not wish to download and install the third party software packages, do not configure, launch or otherwise use this AMP. By configuring, launching or otherwise using the AMP, you acknowledge the foregoing statement and agree that Cloudera is not responsible or liable in any way for the third party software packages.
    image_path: "https://raw.githubusercontent.com/cloudera/CML_AMP_Deploy-Mistral7B-CML-Native-Model/main/images/catalog-entry.png"
    tags:
      - Mistral 7B
      - LLM
      - CML Labs
      - Model Deployment
      - GPU
    git_url: "https://github.com/cloudera/CML_AMP_Deploy-Mistral7B-CML-Native-Model"
    is_prototype: true
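Both items under `entries` use the same field layout. As a minimal illustration (the field set below is inferred from this file, not from an official Cloudera AMP catalog schema, and `missing_fields` is a hypothetical helper), a short Python check could verify that a parsed entry carries every field used here:

```python
# Fields each entry in this catalog file uses. NOTE: this set is inferred
# from the file above, not taken from an official AMP catalog schema.
REQUIRED = {"title", "label", "short_description", "long_description",
            "image_path", "tags", "git_url", "is_prototype"}

def missing_fields(entry: dict) -> list:
    """Return the required keys the entry lacks, sorted for stable output."""
    return sorted(REQUIRED - entry.keys())
```

After parsing the file with a YAML loader (e.g. PyYAML's `yaml.safe_load`), each item under `entries` becomes a dict that can be passed to `missing_fields`; an empty result means the entry has every field used in this file.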