Add support for video and audio modalities using LanguageBind models (#931). You can now index, embed, and search video and audio files with Marqo, extending search beyond text and images. See the model description here, and index-creation usage for structured indexes here and for unstructured indexes here.
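Below is a minimal sketch of indexing and searching a video with the Python client. The model name, the treatUrlsAndPointersAsMedia flag, and the field names are assumptions for illustration; see the linked model description and index-creation docs for the exact identifiers.

```python
import marqo

mq = marqo.Client(url="http://localhost:8882")

# Unstructured index backed by a LanguageBind model (model name is an
# assumption; check the linked model description for supported names).
mq.create_index(
    "media-index",
    settings_dict={
        "type": "unstructured",
        "model": "LanguageBind/Video_V1.5_FT_Audio_FT_Image",
        "treatUrlsAndPointersAsImages": True,
        "treatUrlsAndPointersAsMedia": True,  # assumed flag for video/audio pointers
    },
)

# Index a video by URL, then search across modalities with a text query.
mq.index("media-index").add_documents(
    [{"_id": "clip-1", "my_video": "https://example.com/clip.mp4"}],
    tensor_fields=["my_video"],
)
results = mq.index("media-index").search("a dog catching a frisbee")
```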
Load OpenCLIP models from HuggingFace Hub (#939). OpenCLIP models can now be loaded directly from the HuggingFace Hub by providing a model name with the hf-hub: prefix. This simplifies model integration and expands your model options. See usage here.
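For example, an hf-hub: model name can be passed through modelProperties when creating an index. The repository name and dimensions below are illustrative assumptions; see the linked usage docs for the authoritative settings.

```python
import marqo

mq = marqo.Client(url="http://localhost:8882")

settings = {
    "type": "unstructured",
    "treatUrlsAndPointersAsImages": True,
    "model": "my-hf-openclip",  # hypothetical name for the custom model
    "modelProperties": {
        # Any OpenCLIP-compatible repo on the HuggingFace Hub; this repo
        # and its 512-dim embeddings are illustrative assumptions.
        "name": "hf-hub:laion/CLIP-ViT-B-32-laion2B-s34B-b79K",
        "type": "open_clip",
        "dimensions": 512,
    },
}
mq.create_index("hf-hub-clip", settings_dict=settings)
```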
Load custom OpenCLIP checkpoints with different image preprocessors (#939). Allow loading a custom OpenCLIP checkpoint with a different image preprocessor by providing imagePreprocessor in the model properties. This offers greater flexibility in model selection and customization. See usage here
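A sketch of pairing a custom checkpoint with a non-default preprocessor follows. The checkpoint URL is a placeholder and the imagePreprocessor value is an assumption; the linked usage docs list the accepted options.

```python
import marqo

mq = marqo.Client(url="http://localhost:8882")

settings = {
    "type": "unstructured",
    "treatUrlsAndPointersAsImages": True,
    "model": "my-custom-openclip",  # hypothetical custom-model name
    "modelProperties": {
        "name": "ViT-B-16-SigLIP",  # OpenCLIP architecture to load
        "type": "open_clip",
        "dimensions": 768,  # assumed embedding size
        "url": "https://example.com/checkpoints/my-finetuned-siglip.pt",  # placeholder
        "imagePreprocessor": "SigLIP",  # assumed preprocessor value
    },
}
mq.create_index("custom-ckpt-index", settings_dict=settings)
```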
Support new multilingual OpenCLIP models (#939). A new family of multilingual OpenCLIP models (visheratin) has been added to Marqo. These state-of-the-art models support 201 languages. Check here for how to load them into Marqo.
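A sketch of loading one of these models follows; the repository name (an NLLB-CLIP variant from visheratin) and dimensions are assumptions, so check the linked docs for the exact model properties.

```python
import marqo

mq = marqo.Client(url="http://localhost:8882")

settings = {
    "type": "unstructured",
    "treatUrlsAndPointersAsImages": True,
    "model": "multilingual-clip",  # hypothetical custom-model name
    "modelProperties": {
        # Assumed visheratin multilingual OpenCLIP repo on the HuggingFace Hub.
        "name": "hf-hub:visheratin/nllb-clip-base-siglip",
        "type": "open_clip",
        "dimensions": 768,  # assumed embedding size
    },
}
mq.create_index("multilingual-index", settings_dict=settings)

# Queries in any supported language embed into the same space, e.g. French:
mq.index("multilingual-index").search("un chien attrapant un frisbee")
```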
Bug fixes and minor changes
Fix tokenizer loading for custom OpenCLIP checkpoints (#939). The correct tokenizer is now applied when custom OpenCLIP model checkpoints are loaded.
Improve error handling for image_pointer fields in structured indexes (#944). Structured indexes now report targeted, per-document errors for non-image content in image_pointer fields, so a single bad document no longer fails the whole batch and users get clearer feedback.
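For context, here is a sketch of a structured index with an image_pointer field. The field names are illustrative, and the per-item error shape is an assumption based on the usual add_documents response format.

```python
import marqo

mq = marqo.Client(url="http://localhost:8882")

mq.create_index(
    "products",
    settings_dict={
        "type": "structured",
        "model": "open_clip/ViT-B-32/laion2b_s34b_b79k",
        "allFields": [
            {"name": "caption", "type": "text"},
            {"name": "image", "type": "image_pointer"},
        ],
        "tensorFields": ["caption", "image"],
    },
)

# A document whose image_pointer field references non-image content now
# fails individually rather than failing the whole batch.
res = mq.index("products").add_documents([
    {"_id": "good", "caption": "red shoe", "image": "https://example.com/shoe.png"},
    {"_id": "bad", "caption": "broken", "image": "https://example.com/notes.txt"},
])
for item in res["items"]:
    print(item["_id"], item.get("error"))
```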
Contributor shout-outs
Shout-outs to our valued 4.5k stargazers!
Thanks for all the discussion and suggestions from our community. We love hearing your thoughts and requests. Join our Slack channel and forum now.