Releases: run-llama/llama_index
2024-04-17 (v0.10.30)
llama-index-core
[0.10.30]
- Add intermediate outputs to QueryPipeline (#12683)
- Fix show progress causing results to be out of order (#12897)
- add OR filter condition support to simple vector store (#12823)
- improved custom agent init (#12824)
- fix pipeline load without docstore (#12808)
- Use async `_aprocess_actions` in `_arun_step_stream` (#12846)
- provide the exception to the StreamChatErrorEvent (#12879)
- fix bug in load and search tool spec (#12902)
llama-index-embeddings-azure-openai
[0.1.7]
- Expose azure_ad_token_provider argument to support token expiration (#12818)
llama-index-embeddings-cohere
[0.1.8]
- Add httpx_async_client option (#12896)
llama-index-embeddings-ipex-llm
[0.1.0]
- add ipex-llm embedding integration (#12740)
llama-index-embeddings-octoai
[0.1.0]
- add octoai embeddings (#12857)
llama-index-llms-azure-openai
[0.1.6]
- Expose azure_ad_token_provider argument to support token expiration (#12818)
llama-index-llms-ipex-llm
[0.1.2]
- add support for loading "low-bit format" model to IpexLLM integration (#12785)
llama-index-llms-mistralai
[0.1.11]
- support `open-mixtral-8x22b` (#12894)
llama-index-packs-agents-lats
[0.1.0]
- added LATS agent pack (#12735)
llama-index-readers-smart-pdf-loader
[0.1.4]
- Use passed in metadata for documents (#12844)
llama-index-readers-web
[0.1.9]
- added Firecrawl Web Loader (#12825)
llama-index-vector-stores-milvus
[0.1.10]
- use batch insertions into Milvus vector store (#12837)
llama-index-vector-stores-vearch
[0.1.0]
- add vearch to vector stores (#10972)
2024-04-13 (v0.10.29)
llama-index-core
[0.10.29]
- BREAKING: Moved `PandasQueryEngine` and `PandasInstructionParser` to `llama-index-experimental` (#12419)
  - new install: `pip install -U llama-index-experimental`
  - new import: `from llama_index.experimental.query_engine import PandasQueryEngine`
- Fixed some core dependencies to make python3.12 work nicely (#12762)
- update async utils `run_jobs()` to include tqdm description (#12812)
- Refactor kvdocstore delete methods (#12681)
llama-index-llms-bedrock
[0.1.6]
- Support for Mistral Large from Bedrock (#12804)
llama-index-llms-openvino
[0.1.0]
- Added OpenVino LLMs (#12639)
llama-index-llms-predibase
[0.1.4]
- Update LlamaIndex-Predibase Integration to latest API (#12736)
- Enable choice of either Predibase-hosted or HuggingFace-hosted fine-tuned adapters in LlamaIndex-Predibase integration (#12789)
llama-index-output-parsers-guardrails
[0.1.3]
- Modernize GuardrailsOutputParser (#12676)
llama-index-packs-agents-coa
[0.1.0]
- Chain-of-Abstraction Agent Pack (#12757)
llama-index-packs-code-hierarchy
[0.1.3]
- Fixed issue with chunking multi-byte characters (#12715)
llama-index-packs-raft-dataset
[0.1.4]
- Fix bug in raft dataset generator - multiple system prompts (#12751)
llama-index-postprocessor-openvino-rerank
[0.1.2]
- Add openvino rerank support (#12688)
llama-index-readers-file
[0.1.18]
- convert to Path in docx reader if input path str (#12807)
- make pip check work for optional pdf packages (#12758)
llama-index-readers-s3
[0.1.7]
- wrong doc id when using default s3 endpoint in S3Reader (#12803)
llama-index-retrievers-bedrock
[0.1.0]
- Add Amazon Bedrock knowledge base integration as retriever (#12737)
llama-index-retrievers-mongodb-atlas-bm25-retriever
[0.1.3]
- Add mongodb atlas bm25 retriever (#12519)
llama-index-storage-chat-store-redis
[0.1.3]
- fix message serialization in redis chat store (#12802)
llama-index-vector-stores-astra-db
[0.1.6]
- Relax dependency version to accept astrapy `1.*` (#12792)
llama-index-vector-stores-couchbase
[0.1.0]
- Add support for Couchbase as a Vector Store (#12680)
llama-index-vector-stores-elasticsearch
[0.1.7]
- Fix elasticsearch hybrid rrf window_size (#12695)
llama-index-vector-stores-milvus
[0.1.8]
- Added support to retrieve metadata fields from milvus (#12626)
llama-index-vector-stores-redis
[0.2.0]
- Modernize redis vector store, use redisvl (#12386)
llama-index-vector-stores-qdrant
[0.2.0]
- refactor: Switch default Qdrant sparse encoder (#12512)
2024-04-09 (v0.10.28)
llama-index-core
[0.10.28]
- Support indented code block fences in markdown node parser (#12393)
- Pass in output parser to guideline evaluator (#12646)
- Added example of query pipeline + memory (#12654)
- Add missing node postprocessor in CondensePlusContextChatEngine async mode (#12663)
- Added `return_direct` option to tools / tool metadata (#12587)
- Add retry for batch eval runner (#12647)
- Thread-safe instrumentation (#12638)
- Coroutine-safe instrumentation Spans (#12589)
- Add in-memory loading for non-default filesystems in PDFReader (#12659)
- Remove redundant tokenizer call in sentence splitter (#12655)
- Add SynthesizeComponent import to shortcut imports (#12655)
- Improved truncation in SimpleSummarize (#12655)
- adding err handling in eval_utils default_parser for correctness (#12624)
- Add async_postprocess_nodes at RankGPT Postprocessor Nodes (#12620)
- Fix MarkdownNodeParser ref_doc_id (#12615)
llama-index-embeddings-openvino
[0.1.5]
- Added initial support for openvino embeddings (#12643)
llama-index-llms-anthropic
[0.1.9]
- add anthropic tool calling (#12591)
llama-index-llms-ipex-llm
[0.1.1]
llama-index-llms-openllm
[0.1.4]
- Proper PrivateAttr usage in OpenLLM (#12655)
llama-index-multi-modal-llms-anthropic
[0.1.4]
- Bumped anthropic dep version (#12655)
llama-index-multi-modal-llms-gemini
[0.1.5]
- bump generativeai dep (#12645)
llama-index-packs-dense-x-retrieval
[0.1.4]
- Add streaming support for DenseXRetrievalPack (#12607)
llama-index-readers-mongodb
[0.1.4]
- Improve efficiency of MongoDB reader (#12664)
llama-index-readers-wikipedia
[0.1.4]
- Added multilingual support for the Wikipedia reader (#12616)
llama-index-storage-index-store-elasticsearch
[0.1.3]
- remove invalid chars from default collection name (#12672)
llama-index-vector-stores-milvus
[0.1.8]
2024-04-04 (v0.10.27)
llama-index-agent-openai
[0.2.2]
- Update imports for message thread typing (#12437)
llama-index-core
[0.10.27]
- Fix for pydantic query engine outputs being blank (#12469)
- Add span_id attribute to Events (instrumentation) (#12417)
- Fix RedisDocstore node retrieval from docs property (#12324)
- Add node-postprocessors to retriever_tool (#12415)
- FLAREInstructQueryEngine : delegating retriever api if the query engine supports it (#12503)
- Make chat message to dict safer (#12526)
- fix check in batch eval runner for multi-kwargs (#12563)
- Fixes agent_react_multimodal_step.py bug with partial args (#12566)
llama-index-embeddings-clip
[0.1.5]
- Added support to load clip model from local file path (#12577)
llama-index-embeddings-cloudflare-workersai
[0.1.0]
- text embedding integration: Cloudflare Workers AI (#12446)
llama-index-embeddings-voyageai
[0.1.4]
- Fix pydantic issue in class definition (#12469)
llama-index-finetuning
[0.1.5]
- Small typo fix in QA generation prompt (#12470)
llama-index-graph-stores-falkordb
[0.1.3]
- Replace redis driver with FalkorDB driver (#12434)
llama-index-llms-anthropic
[0.1.8]
- Add ability to pass custom HTTP headers to Anthropic client (#12558)
llama-index-llms-cohere
[0.1.6]
- Add support for Cohere Command R+ model (#12581)
llama-index-llms-databricks
[0.1.0]
- Integrations with DataBricks LLM API (#12432)
llama-index-llms-watsonx
[0.1.6]
llama-index-postprocessor-rankllm-rerank
[0.1.2]
- Add RankGPT support inside RankLLM (#12475)
llama-index-readers-microsoft-sharepoint
[0.1.7]
- Use recursive strategy by default for SharePoint (#12557)
llama-index-readers-web
[0.1.8]
- Readability web page reader fix playwright async api bug (#12520)
llama-index-vector-stores-kdbai
[0.1.5]
- small `to_list` fix (#12515)
llama-index-vector-stores-neptune
[0.1.0]
- Add support for Neptune Analytics as a Vector Store (#12423)
llama-index-vector-stores-postgres
[0.1.5]
- fix(postgres): numeric metadata filters (#12583)
v0.10.26
v0.10.25
v0.10.24
v0.10.23
v0.10.22
v0.10.20