Releases · run-llama/llama_index
v0.10.19
llama-index-cli [0.1.9]
- Removed chroma as a bundled dep to reduce llama-index deps
llama-index-core [0.10.19]
- Introduced retries for rate limits in the OpenAI LLM class (#11867); see the sketch after this list
- Added table comments to SQL table schemas in SQLDatabase (#11774)
- Added LogProb type to the ChatResponse object (#11795)
- Introduced LabelledSimpleDataset (#11805)
- Fixed inserting IndexNode objects that contain unserializable objects (#11836)
- Fixed a stream chat type error when writing the response to history in CondenseQuestionChatEngine (#11856)
- Improved post-processing for the JSON query engine (#11862)
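A minimal sketch of tuning the rate-limit handling, assuming the retry behavior from #11867 is governed by the OpenAI class's existing `max_retries` argument:

```python
from llama_index.llms.openai import OpenAI

# Assumption: rate-limited requests are retried up to `max_retries` times before raising.
llm = OpenAI(model="gpt-3.5-turbo", max_retries=5)
print(llm.complete("Say hello."))
```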
llama-index-embeddings-cohere [0.1.4]
- Fixed async kwarg error (#11822)
llama-index-embeddings-dashscope [0.1.2]
- Fixed pydantic import (#11765)
llama-index-graph-stores-neo4j [0.1.3]
- Properly close connection after verifying connectivity (#11821)
llama-index-llms-cohere [0.1.3]
- Add support for new command-r model (#11852)
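A minimal sketch of selecting the new model, assuming it is chosen purely via the `model` string (the API key below is a placeholder):

```python
from llama_index.llms.cohere import Cohere

llm = Cohere(model="command-r", api_key="YOUR_COHERE_API_KEY")
print(llm.complete("Summarize retrieval-augmented generation in one sentence."))
```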
llama-index-llms-huggingface [0.1.4]
- Fixed streaming decoding with special tokens (#11807)
llama-index-llms-mistralai [0.1.5]
- Added support for latest and open models (#11792)
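A minimal sketch, assuming the new models are selected by Mistral's own API model strings (e.g. `mistral-large-latest`, `open-mixtral-8x7b`) and that the key below is a placeholder:

```python
from llama_index.llms.mistralai import MistralAI

llm = MistralAI(model="open-mixtral-8x7b", api_key="YOUR_MISTRAL_API_KEY")
print(llm.complete("What is a vector index?"))
```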
llama-index-tools-finance [0.1.1]
- Fixed a small bug when passing in the API key for stock news (#11772)
llama-index-vector-stores-chroma [0.1.6]
- Slimmed down chroma deps (#11775)
llama-index-vector-stores-lancedb [0.1.3]
- Fixes for deleting (#11825)
llama-index-vector-stores-postgres [0.1.3]
- Support for nested metadata filters (#11778)
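A minimal sketch of building a nested filter with the core filter types; passing it to a PGVectorStore-backed index (e.g. via `index.as_retriever(filters=filters)`) is assumed to follow the usual filter path:

```python
from llama_index.core.vector_stores.types import (
    FilterCondition,
    FilterOperator,
    MetadataFilter,
    MetadataFilters,
)

# category == "finance" AND (year == 2023 OR year == 2024)
filters = MetadataFilters(
    condition=FilterCondition.AND,
    filters=[
        MetadataFilter(key="category", operator=FilterOperator.EQ, value="finance"),
        MetadataFilters(  # nested group of filters
            condition=FilterCondition.OR,
            filters=[
                MetadataFilter(key="year", operator=FilterOperator.EQ, value=2023),
                MetadataFilter(key="year", operator=FilterOperator.EQ, value=2024),
            ],
        ),
    ],
)
```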
v0.10.18
v0.10.17
[2024-03-06]
New format! Going to try out reporting changes per package.
llama-index-cli [0.1.8]
- Update mappings for upgrade command (#11699)
llama-index-core [0.10.17]
- Added relative_score and dist_based_score modes to QueryFusionRetriever (#11667); see the sketch after this list
- Check for None in the async agent queue (#11669)
- Allow a refine template for BaseSQLTableQueryEngine (#11378)
- Update mappings for llama-packs (#11699)
- Fixed an index error when extracting rel texts in the KG index (#11695)
- Return proper response types from the synthesizer when there are no nodes (#11701)
- Inherit metadata to summaries in DocumentSummaryIndex (#11671)
- Inherit callback manager in SQL query engines (#11662)
- Fixed a bug with agent streaming not being written to chat history (#11675)
- Fixed a small bug with None deltas when streaming a function call with an agent (#11713)
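A minimal sketch of the new fusion modes, assuming a tiny in-memory index so the retriever has something to score (credentials for the default embedding model are still required):

```python
from llama_index.core import Document, VectorStoreIndex
from llama_index.core.retrievers import QueryFusionRetriever

index = VectorStoreIndex.from_documents(
    [Document(text="LlamaIndex is a data framework for LLM applications.")]
)
retriever = QueryFusionRetriever(
    [index.as_retriever()],
    mode="relative_score",  # or "dist_based_score"
    num_queries=1,          # 1 = skip LLM-based query generation
    similarity_top_k=2,
)
nodes = retriever.retrieve("What is LlamaIndex?")
```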
llama-index-multi-modal-llms-anthropic [0.1.2]
- Added support for new multi-modal models haiku and sonnet (#11656)
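A minimal sketch, assuming the AnthropicMultiModal class accepts Anthropic's Claude 3 model strings and ImageDocuments loaded by SimpleDirectoryReader:

```python
from llama_index.core import SimpleDirectoryReader
from llama_index.multi_modal_llms.anthropic import AnthropicMultiModal

image_docs = SimpleDirectoryReader("./images").load_data()  # local images as ImageDocuments
mm_llm = AnthropicMultiModal(model="claude-3-haiku-20240307", max_tokens=300)
print(mm_llm.complete(prompt="Describe the attached image.", image_documents=image_docs))
```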
llama-index-packs-finchat [0.1.0]
- Added a new llama-pack for hierarchical agents + finance chat (#11387)
llama-index-readers-file [0.1.8]
- Added support for checking if NLTK files are already downloaded (#11676)
llama-index-readers-json [0.1.4]
- Use the metadata passed in when creating documents (#11626)
llama-index-vector-stores-astra-db [0.1.4]
- Update wording in warning message (#11702)
llama-index-vector-stores-opensearch [0.1.7]
- Avoid calling nest_asyncio.apply() in code to prevent confusing errors for users (#11707)
llama-index-vector-stores-qdrant [0.1.4]
- Catch RPC errors (#11657)
v0.10.16
v0.10.15
v0.10.14
New Features
- Released llama-index-networks (#11413)
- Jina reranker (#11291)
- Added DuckDuckGo agent search tool (#11386); see the sketch after this list
- helper functions for chatml (#10272)
- added brave search tool for agents (#11468)
- Added Friendli LLM integration (#11384)
- metadata only queries for chromadb (#11328)
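A minimal sketch of wiring the search tool into an agent; the DuckDuckGoSearchToolSpec and OpenAIAgent names are assumptions about the llama-index-tools-duckduckgo and llama-index-agent-openai packages:

```python
from llama_index.agent.openai import OpenAIAgent
from llama_index.tools.duckduckgo import DuckDuckGoSearchToolSpec

# Convert the tool spec into agent tools and let the agent decide when to search.
agent = OpenAIAgent.from_tools(DuckDuckGoSearchToolSpec().to_tool_list(), verbose=True)
print(agent.chat("Search for the latest LlamaIndex release notes."))
```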
Bug Fixes / Nits
- Fixed inheriting llm callback in synthesizers (#11404)
- Catch delete error in milvus (#11315)
- Fixed pinecone kwargs issue (#11422)
- Supabase metadata filtering fix (#11428)
- api base fix in gemini embeddings (#11393)
- fix elasticsearch vector store await (#11438)
- vllm server cuda fix (#11442)
- fix for passing LLM to context chat engine (#11444)
- set input types for cohere embeddings (#11288)
- default value for azure ad token (#10377)
- added back prompt mixin for react agent (#10610)
- fixed system roles for gemini (#11481)
- fixed mean agg pooling returning numpy float values (#11458)
- improved json path parsing for JSONQueryEngine (#9097)
v0.10.13.post1
v0.10.13
New Features
- Added a llama-pack for KodaRetriever, for on-the-fly alpha tuning (#11311)
- Added support for mistral-large (#11398)
- Last token pooling mode for HuggingFace embedding models like SFR-Embedding-Mistral (#11373)
- Added fsspec support to SimpleDirectoryReader (#11303)
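A minimal sketch of the fsspec support, assuming the reader's `fs` keyword accepts any fsspec filesystem (the S3 bucket path is a placeholder and requires `s3fs`):

```python
import fsspec
from llama_index.core import SimpleDirectoryReader

s3 = fsspec.filesystem("s3", anon=True)  # hypothetical public bucket
docs = SimpleDirectoryReader(input_dir="my-bucket/reports", fs=s3).load_data()
print(len(docs))
```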
Bug Fixes / Nits
- Fixed an issue with context window + prompt helper (#11379)
- Moved OpenSearch vector store to BasePydanticVectorStore (#11400)
- Fixed function calling in fireworks LLM (#11363)
- Made cohere embedding types more automatic (#11288)
- Improve function calling in react agent (#11280)
- Fixed MockLLM imports (#11376)
v0.10.12
v0.10.11