
Commit 04ef5b5
typo fix
Oscilloscope98 committed May 31, 2024
1 parent 0b0c908 commit 04ef5b5
Showing 1 changed file with 2 additions and 2 deletions.
4 changes: 2 additions & 2 deletions docs/tutorial/ipex_llm.md
```diff
@@ -4,7 +4,7 @@ title: "Local LLM Setup with IPEX-LLM on Intel GPU"
 ---
 
 :::note
-This guide is verified with Open WebUI setup through [Mannual Installation](../getting-started/index.mdx#manual-installation).
+This guide is verified with Open WebUI setup through [Manual Installation](../getting-started/index.mdx#manual-installation).
 :::
 
 # Local LLM Setup with IPEX-LLM on Intel GPU
@@ -13,7 +13,7 @@ This guide is verified with Open WebUI setup through [Mannual Installation](../g
 [**IPEX-LLM**](https://github.com/intel-analytics/ipex-llm) is a PyTorch library for running LLM on Intel CPU and GPU (e.g., local PC with iGPU, discrete GPU such as Arc A-Series, Flex and Max) with very low latency.
 :::
 
-This tutorial demonstrates how to setup Open WebUI with **IPEX-LLM accelerated Ollama backend hosted on Intel GPU**. By following this guide, you will be able to setup Open WebUI even on a low-cost PC (i.e. only with integrated GPU), and achieve a smooth experience.
+This tutorial demonstrates how to setup Open WebUI with **IPEX-LLM accelerated Ollama backend hosted on Intel GPU**. By following this guide, you will be able to setup Open WebUI even on a low-cost PC (i.e. only with integrated GPU) with a smooth experience.
 
 ## Start Ollama Serve on Intel GPU
```
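For context beyond the hunks shown above, the tutorial being edited continues into the "Start Ollama Serve on Intel GPU" section. A minimal sketch of that step on Linux, based on the IPEX-LLM Ollama quickstart, is shown below; the conda environment name `llm-cpp` and the oneAPI install path are assumptions, and exact steps may vary across IPEX-LLM versions:

```bash
# Sketch: starting the IPEX-LLM-accelerated Ollama serve on an Intel GPU.
# Assumes ipex-llm[cpp] is installed in a conda env (the name "llm-cpp" is
# illustrative) and Intel oneAPI is installed at its default path.

conda activate llm-cpp

# init-ollama (shipped with ipex-llm[cpp]) symlinks the Ollama binary into
# the current directory.
init-ollama

export OLLAMA_NUM_GPU=999              # offload all model layers to the Intel GPU
export no_proxy=localhost,127.0.0.1
export ZES_ENABLE_SYSMAN=1             # expose GPU device management to the SYCL runtime
source /opt/intel/oneapi/setvars.sh    # load the oneAPI runtime environment

./ollama serve                         # listens on http://localhost:11434 by default
```

Open WebUI can then be pointed at this backend by setting its Ollama base URL to `http://localhost:11434`.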
