Update README
Summary: As title

Reviewed By: spencerwmeta

Differential Revision: D51927019

fbshipit-source-id: 57009c4f198eeb908338b8c3059744dc3241fe2f
csahana95 authored and facebook-github-bot committed Dec 7, 2023
1 parent 5c3227d commit 3e98f6a
Showing 1 changed file (README.md) with 2 additions and 3 deletions.
@@ -3,7 +3,7 @@
 </p>
 
 <p align="center">
-🤗 <a href="https://huggingface.co/meta-Llama"> Models on Hugging Face</a>&nbsp | <a href="https://ai.facebook.com/blog/purple-llama-open-trust-safety-generative-ai"> Blog</a>&nbsp | <a href="https://ai.facebook.com/llama/purple-llama">Website</a>&nbsp | <a href="https://ai.meta.com/research/publications/purple-llama-cyberseceval-a-benchmark-for-evaluating-the-cybersecurity-risks-of-large-language-models/">CyberSec Eval Paper</a>&nbsp&nbsp | <a href="https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/">Llama Guard Paper</a>&nbsp
+🤗 <a href="https://huggingface.co/meta-Llama"> Models on Hugging Face</a>&nbsp | <a href="https://ai.meta.com/blog/purple-llama-open-trust-safety-generative-ai"> Blog</a>&nbsp | <a href="https://ai.meta.com/llama/purple-llama">Website</a>&nbsp | <a href="https://ai.meta.com/research/publications/purple-llama-cyberseceval-a-benchmark-for-evaluating-the-cybersecurity-risks-of-large-language-models/">CyberSec Eval Paper</a>&nbsp&nbsp | <a href="https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/">Llama Guard Paper</a>&nbsp
 <br>
 
 ---
@@ -72,8 +72,7 @@ accordance with content guidelines appropriate to the application.
 
 ### Llama Guard
 
-To support this, and empower the community, we are releasing
-[Llama Guard](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/),
+To support this, and empower the community, we are releasing Llama Guard,
 an openly-available model that performs competitively on common open benchmarks
 and provides developers with a pretrained model to help defend against
 generating potentially risky outputs.
