diff --git a/examples/search-within-images-with-sam-and-clip/README.md b/examples/search-within-images-with-sam-and-clip/README.md
index 6c24c75a..25888ae3 100644
--- a/examples/search-within-images-with-sam-and-clip/README.md
+++ b/examples/search-within-images-with-sam-and-clip/README.md
@@ -1,6 +1,6 @@
 # 🔍Search engine using SAM & CLIP
 
-Open In Colab [![Medium](https://img.shields.io/badge/Medium-12100E?style=for-the-badge&logo=medium&logoColor=white)](https://blog.lancedb.com/context-aware-chatbot-using-llama-2-lancedb-as-vector-database-4d771d95c755)
+Open In Colab [![Medium](https://img.shields.io/badge/Medium-12100E?style=for-the-badge&logo=medium&logoColor=white)](https://medium.com/etoai/search-within-an-image-331b54e4285e)
 
 ![“A Dog”](https://github.com/kaushal07wick/vectordb-recipes/assets/57106063/3907c1e5-009b-4ffb-8ea2-2eddb58f3346)
 ### 🚀Create a Search Engine within an Image use **SAM**(Segment Anything) and **CLIP** (Constrastive Language Image Pretraining) model.