Releases: run-llama/llama_index
2024-05-28 (v0.10.40)
llama-index-core
[0.10.40]
- Added `PropertyGraphIndex` and other supporting abstractions. See the full guide for more details (#13747)
- Updated `AutoPrevNextNodePostprocessor` to allow passing in response mode and LLM (#13771)
- Fix type handling with return direct (#13776)
- Correct the method name to `_aget_retrieved_ids_and_texts` in retrieval evaluator (#13765)
- Fix: QueryTransformComponent incorrectly calling `self._query_transform` (#13756)
- Implement more filters for `SimpleVectorStoreIndex` (#13365)
llama-index-embeddings-bedrock
[0.2.0]
- Added support for Bedrock Titan Embeddings v2 (#13580)
llama-index-embeddings-oci-genai
[0.1.0]
- add Oracle Cloud Infrastructure (OCI) Generative AI (#13631)
llama-index-embeddings-huggingface
[0.2.1]
- Expose "safe_serialization" parameter from AutoModel (#11939)
llama-index-graph-stores-neo4j
[0.2.0]
- Added
Neo4jPGStore
for property graph support (#13747)
llama-index-indices-managed-dashscope
[0.1.1]
- Added dashscope managed index (#13378)
llama-index-llms-oci-genai
[0.1.0]
- add Oracle Cloud Infrastructure (OCI) Generative AI (#13631)
llama-index-readers-feishu-wiki
[0.1.1]
- fix undefined variable (#13768)
llama-index-packs-secgpt
[0.1.0]
- SecGPT LlamaIndex integration (#13127)
llama-index-vector-stores-hologres
[0.1.0]
- Add Hologres vector db (#13619)
llama-index-vector-stores-milvus
[0.1.16]
- Remove FlagEmbedding as Milvus's dependency (#13767)
- Unify the collection construction regardless of the value of enable_sparse (#13773)
llama-index-vector-stores-opensearch
[0.1.9]
- refactor to put helper methods inside class definition (#13749)
v0.10.39
v0.10.38
v0.10.37
v0.10.36
2024-05-07 (v0.10.35)
llama-index-agent-introspective
[0.1.0]
- Add CRITIC and reflection agent integrations (#13108)
llama-index-core
[0.10.35]
- Fix `from_defaults()` erasing summary memory buffer history (#13325)
- Use existing async event loop instead of `asyncio.run()` in core (#13309)
- Fix async streaming from query engine in condense question chat engine (#13306)
- Handle ValueError in extract_table_summaries in element node parsers (#13318)
- Handle llm properly for QASummaryQueryEngineBuilder and RouterQueryEngine (#13281)
- expand instrumentation payloads (#13302)
- Fix bug in SQL join statement missing schema (#13277)
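The event-loop fix above (#13309) reflects a general asyncio constraint: `asyncio.run()` cannot be called from inside an already-running loop. A minimal stdlib sketch of the failure mode and the fix — the `fetch` coroutine here is a hypothetical stand-in for an async LLM call, not library code:

```python
import asyncio

async def fetch() -> str:
    # Hypothetical stand-in for an async LLM call.
    await asyncio.sleep(0)
    return "ok"

async def caller() -> str:
    # Calling asyncio.run() from inside a running loop raises
    # RuntimeError; awaiting on the existing loop is the fix.
    coro = fetch()
    try:
        asyncio.run(coro)
    except RuntimeError:
        coro.close()  # discard the unawaited coroutine
    return await fetch()

result = asyncio.run(caller())
```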
llama-index-embeddings-jinaai
[0.1.5]
- add encoding_type parameters in JinaEmbedding class (#13172)
- fix encoding type access in JinaEmbeddings (#13315)
llama-index-embeddings-nvidia
[0.1.0]
- add nvidia nim embeddings support (#13177)
llama-index-llms-mistralai
[0.1.12]
- Fix async issue when streaming with Mistral AI (#13292)
llama-index-llms-nvidia
[0.1.0]
- add nvidia nim llm support (#13176)
llama-index-postprocessor-nvidia-rerank
[0.1.0]
- add nvidia nim rerank support (#13178)
llama-index-readers-file
[0.1.21]
- Update MarkdownReader to parse text before first header (#13327)
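The MarkdownReader change (#13327) concerns text that appears before the first header, which a naive header-based splitter would drop. A minimal illustrative sketch of the idea, not the library's actual parser — the preamble is kept as a section with an empty header:

```python
import re

def split_markdown(text: str) -> list[tuple[str, str]]:
    """Split markdown into (header, body) sections; text before the
    first header is kept as a section with an empty header."""
    sections: list[tuple[str, str]] = []
    header = ""
    body: list[str] = []
    for line in text.splitlines():
        if re.match(r"#+\s", line):
            # Flush the preamble (or the previous section) first.
            if header or body:
                sections.append((header, "\n".join(body).strip()))
            header, body = line.lstrip("#").strip(), []
        else:
            body.append(line)
    sections.append((header, "\n".join(body).strip()))
    return sections

doc = "intro text\n# Title\ncontent"
sections = split_markdown(doc)
```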
llama-index-readers-web
[0.1.13]
- feat: Spider Web Loader (#13200)
llama-index-vector-stores-vespa
[0.1.0]
- Add VectorStore integration for Vespa (#13213)
llama-index-vector-stores-vertexaivectorsearch
[0.1.0]
- Add support for Vertex AI Vector Search as Vector Store (#13186)
2024-05-02 (v0.10.34)
llama-index-core
[0.10.34]
- remove error ignoring during chat engine streaming (#13160)
- add structured planning agent (#13149)
- update base class for planner agent (#13228)
- Fix: error when parsing a file with SimpleFileNodeParser and the file's extension is not in FILE_NODE_PARSERS (#13156)
- Add matching `source_node.node_id` verification to node parsers (#13109)
- Retrieval metrics: updating HitRate and MRR for Evaluation@K documents retrieved; also adding RR as a separate metric (#12997)
- Add chat summary memory buffer (#13155)
llama-index-indices-managed-zilliz
[0.1.3]
llama-index-llms-huggingface
[0.1.7]
- Add tool usage support with text-generation-inference integration from Hugging Face (#12471)
llama-index-llms-maritalk
[0.2.0]
- Add streaming for maritalk (#13207)
llama-index-llms-mistral-rs
[0.1.0]
- Integrate mistral.rs LLM (#13105)
llama-index-llms-mymagic
[0.1.7]
- MyMagicAI API update (#13148)
llama-index-llms-nvidia-triton
[0.1.5]
- Streaming Support for Nvidia's Triton Integration (#13135)
llama-index-llms-ollama
[0.1.3]
- added async support to ollama llms (#13150)
llama-index-readers-microsoft-sharepoint
[0.2.2]
- Exclude access control metadata keys from LLMs and embeddings - SharePoint Reader (#13184)
llama-index-readers-web
[0.1.11]
- feat: Browserbase Web Reader (#12877)
llama-index-readers-youtube-metadata
[0.1.0]
- Added YouTube Metadata Reader (#12975)
llama-index-storage-kvstore-redis
[0.1.4]
- fix redis kvstore key that was in bytes (#13201)
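The bytes-key fix (#13201) comes down to decoding what the Redis client returns before comparing against string keys. An illustrative sketch of the idea, not the store's actual code:

```python
# Redis clients typically return keys as bytes; decoding to str keeps
# lookups consistent with the string keys used when storing.
raw_keys = [b"doc:1", b"doc:2"]
keys = [k.decode("utf-8") if isinstance(k, (bytes, bytearray)) else k
        for k in raw_keys]
```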
llama-index-vector-stores-azureaisearch
[0.1.5]
- Respect filter condition for Azure AI Search (#13215)
llama-index-vector-stores-chroma
[0.1.7]
- small bump for new chroma client version (#13158)
llama-index-vector-stores-firestore
[0.1.0]
- Adding Firestore Vector Store (#12048)
llama-index-vector-stores-kdbai
[0.1.5]
- Small fix to returned IDs after `add()` (#12515)
llama-index-vector-stores-milvus
[0.1.11]
- Add hybrid retrieval mode to MilvusVectorStore (#13122)
llama-index-vector-stores-postgres
[0.1.7]
- parameterize queries in pgvector store (#13199)
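Parameterizing queries (#13199) lets the database driver escape user-supplied values instead of interpolating them into SQL strings. An illustrative stdlib sketch using sqlite3 rather than pgvector, with a hypothetical table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE nodes (id TEXT, text TEXT)")
conn.execute("INSERT INTO nodes VALUES (?, ?)", ("n1", "it's a test"))

# The ? placeholder lets the driver handle quoting, so values
# containing quotes (or injection attempts) cannot break the SQL.
user_input = "it's a test"
row = conn.execute(
    "SELECT id FROM nodes WHERE text = ?", (user_input,)
).fetchone()
```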
v0.10.33
v0.10.32
2024-04-23 (v0.10.31)
llama-index-core
[0.10.31]
- fix async streaming response from query engine (#12953)
- enforce uuid in element node parsers (#12951)
- add function calling LLM program (#12980)
- make the PydanticSingleSelector work with async api (#12964)
- fix query pipeline's arun_with_intermediates (#13002)
llama-index-agent-coa
[0.1.0]
- Add COA Agent integration (#13043)
llama-index-agent-lats
[0.1.0]
- Official LATs agent integration (#13031)
llama-index-agent-llm-compiler
[0.1.0]
- Add LLMCompiler Agent Integration (#13044)
llama-index-llms-anthropic
[0.1.10]
- Add the ability to pass custom headers to Anthropic LLM requests (#12819)
llama-index-llms-bedrock
[0.1.7]
- Adding Claude 3 Opus to Bedrock integration (#13033)
llama-index-llms-fireworks
[0.1.5]
- Add new Llama 3 and Mixtral 8x22b model into Llama Index for Fireworks (#12970)
llama-index-llms-openai
[0.1.16]
- Fix AsyncOpenAI "RuntimeError: Event loop is closed bug" when instances of AsyncOpenAI are rapidly created & destroyed (#12946)
- Don't retry on all OpenAI APIStatusError exceptions - just InternalServerError (#12947)
llama-index-llms-watsonx
[0.1.7]
- Updated IBM watsonx foundation models (#12973)
llama-index-packs-code-hierarchy
[0.1.6]
- Return the parent node if the query node is not present (#12983)
- fixed bug when function is defined twice (#12941)
llama-index-program-openai
[0.1.6]
- Adding support for streaming partial instances of Pydantic output class in OpenAIPydanticProgram (#13021)
llama-index-readers-openapi
[0.1.0]
- add reader for openapi files (#12998)
llama-index-readers-slack
[0.1.4]
- Avoid infinite loop when an unhandled exception is raised (#12963)
llama-index-readers-web
[0.1.10]
- Improve whole site reader to remove duplicate links (#12977)
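Removing duplicate links (#12977) amounts to order-preserving deduplication of the crawled URLs. A minimal sketch of the idea, not the reader's actual code:

```python
def dedupe_links(links: list[str]) -> list[str]:
    # dict preserves insertion order, so this keeps the first
    # occurrence of each link and drops later duplicates.
    return list(dict.fromkeys(links))

pages = ["/a", "/b", "/a", "/c", "/b"]
unique = dedupe_links(pages)
```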
llama-index-retrievers-bedrock
[0.1.1]
- Fix Bedrock KB retriever to use query bundle (#12910)
llama-index-vector-stores-awsdocdb
[0.1.0]
- Integrating AWS DocumentDB as a vector storage method (#12217)
llama-index-vector-stores-databricks
[0.1.2]
- Fix databricks vector search metadata (#12999)
llama-index-vector-stores-neo4j
[0.1.4]
- Neo4j metadata filtering support (#12923)
llama-index-vector-stores-pinecone
[0.1.5]
- Fix error querying PineconeVectorStore using sparse query mode (#12967)
llama-index-vector-stores-qdrant
[0.2.5]
- Many fixes for async and checking if collection exists (#12916)
llama-index-vector-stores-weaviate
[0.1.5]
- Adds the index deletion functionality to the WeaviateVectorStore (#12993)