All notable changes to this project will be documented in this file.
The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.
This release adds the `min_history_tokens` context window rolling strategy. It can be handy for keeping the last big response in the context (a sketch of the idea follows the changelist below). Additionally, the API now provides token usage info.
- Fix loading a config file passed as a CLI option (commit)
- Remove impossible `Error::NoTokenizer` and update docs (commit)
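To illustrate the strategy, here is a minimal sketch of rolling the context window while honoring a minimum history budget. The `Message` type, its fields, and the exact trimming rule are assumptions for illustration, not jutella's actual API:

```rust
/// A chat message with a precomputed token count (illustrative type,
/// not jutella's actual API).
struct Message {
    text: String,
    tokens: usize,
}

/// Drop the oldest messages while the history exceeds `max_tokens`,
/// but stop before the retained history would fall below
/// `min_history_tokens`. This is what keeps the last big response in
/// the context even when it alone exceeds the budget.
fn roll_context(history: &mut Vec<Message>, max_tokens: usize, min_history_tokens: usize) {
    let mut total: usize = history.iter().map(|m| m.tokens).sum();
    while total > max_tokens {
        let oldest = history[0].tokens;
        if total - oldest < min_history_tokens {
            // Trimming further would discard recent context we want to keep.
            break;
        }
        history.remove(0);
        total -= oldest;
    }
}
```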
This is a bugfix release fixing compilation of the library with `default-features = false`; a Cargo.toml example follows the changelist below.
- Fix compilation of the library with `default-features = false` (commit)
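For library consumers, depending on jutella without the binary-only dependencies uses standard Cargo syntax (the version below is a placeholder):

```toml
[dependencies]
# Opt out of default features to skip the binary-only dependencies.
jutella = { version = "*", default-features = false }
```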
This release introduces several new features and improvements. Key updates are:
- Execution is now async, based on a custom OpenAI API client implementation with proper error handling.
- Added the possibility to discard old messages from the context to keep it below the allowed max token limit.
- Added support for Azure endpoints.
- The binary dependencies were made optional in the library. Use `default-features = false` when depending on the library.
- The CLI can now copy every response to the clipboard via `xclip` on X11 (see the sketch after the changelist below).
- Support Azure endpoints (#4)
- Implement rolling context window (#3)
- cli: Support copying every response to clipboard with `xclip` (commit)
- Replace `openai_api_rust` with custom async OpenAI API client (#2)
- cli: Print `xclip` stderr on invocation failure (commit)
- Make bin dependencies optional for lib (commit)
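As a rough sketch of how the clipboard feature can work (an assumed approach, not jutella's actual implementation), the CLI can pipe each response into `xclip` and capture its stderr so invocation failures can be reported, as the changelist above mentions:

```rust
use std::io::Write;
use std::process::{Command, Stdio};

/// Copy `text` to the X11 clipboard by piping it into `xclip`.
/// Illustrative sketch only; the real invocation may differ.
fn copy_to_clipboard(text: &str) -> std::io::Result<()> {
    let mut child = Command::new("xclip")
        .args(["-selection", "clipboard"])
        .stdin(Stdio::piped())
        .stderr(Stdio::piped())
        .spawn()?;
    // Write the response and close stdin so xclip sees EOF.
    child
        .stdin
        .take()
        .expect("stdin is piped above")
        .write_all(text.as_bytes())?;
    let output = child.wait_with_output()?;
    if !output.status.success() {
        // Surface xclip's stderr on invocation failure.
        eprintln!("xclip: {}", String::from_utf8_lossy(&output.stderr));
    }
    Ok(())
}
```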
The project was renamed to `jutella`.
- Use "mini" model by default
- Improve docs
- Rename `unspoken` -> `jutella`
Improved documentation and README.
- Improve README
- Improve help
- Improve docs
Initial release.
- Add README
- Introduce a config file
- Add command line arguments
- Make `ChatClientConfig` public
- Support setting API key in a config (a hypothetical config sketch follows below)
- Report recoverable errors
- Initial commit
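For context on the two config-related entries above, a config file for a tool like this might look roughly as follows; every key name and value here is a hypothetical placeholder, not jutella's documented format:

```toml
# Hypothetical example -- key names are illustrative assumptions.
api_key = "<your OpenAI API key>"
model = "gpt-4o-mini"
```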