ellmer (development version)

  • chat_openrouter() provides support for OpenRouter models (#212)

  • chat_deepseek() provides support for DeepSeek models (#242)
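
    For example, a minimal sketch (model names are illustrative, and the API keys are assumed to be read from the OPENROUTER_API_KEY and DEEPSEEK_API_KEY environment variables):

    ``` r
    library(ellmer)

    chat_or <- chat_openrouter(model = "anthropic/claude-3.5-sonnet")
    chat_ds <- chat_deepseek(model = "deepseek-chat")
    chat_ds$chat("Summarise the ellmer package in one sentence.")
    ```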

  • options(ellmer_verbosity) is no longer supported; instead, use the standard httr2 verbosity functions, which now support streaming data.
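
    For example, a minimal sketch (assumes an OPENAI_API_KEY is available; httr2::with_verbosity() is the standard httr2 verbosity helper):

    ``` r
    library(ellmer)

    chat <- chat_openai()
    # Log low-level request/response details, including streamed events
    httr2::with_verbosity(chat$chat("Tell me a joke"), verbosity = 2)
    ```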

  • print(Chat) no longer wraps long lines, making it easier to read code and bulleted lists (#246).

  • chat_openai() should be less likely to timeout when not streaming chat results (#213).

  • chat_azure() now has a credentials argument to make authentication against Azure more flexible (#248, @atheriel).

  • chat_databricks() now handles the DATABRICKS_HOST environment variable correctly whether it includes an HTTPS prefix or not (#252, @atheriel).

  • chat_databricks() now respects the SPARK_CONNECT_USER_AGENT environment variable when making requests (#254, @atheriel).

  • A new chat_snowflake() allows chatting with models hosted through Snowflake's Cortex LLM REST API (#258, @atheriel).

  • chat_azure() now picks up credentials from Azure service principals automatically (#263, @atheriel).

  • chat_azure() now reports better error messages when the underlying HTTP requests fail (#269, @atheriel).

ellmer 0.1.0

  • New chat_vllm() to chat with models served by vLLM (#140).

  • The default chat_openai() model is now GPT-4o.

  • New Chat$set_turns() to set turns. Chat$turns() is now Chat$get_turns(). Chat$system_prompt() is replaced with Chat$set_system_prompt() and Chat$get_system_prompt().
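
    A minimal sketch of the renamed accessors (assumes an OPENAI_API_KEY is available):

    ``` r
    library(ellmer)

    chat <- chat_openai(system_prompt = "Be terse.")
    chat$get_system_prompt()
    chat$set_system_prompt("Be thorough.")

    saved <- chat$get_turns()  # previously chat$turns()
    chat$set_turns(saved)      # e.g. to restore a saved conversation
    ```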

  • Async and streaming async chat are now event-driven and use later::later_fd() to wait efficiently on curl socket activity (#157).

  • New chat_bedrock() to chat with AWS Bedrock models (#50).

  • New chat$extract_data() uses the structured data API where available (and tool calling otherwise) to extract data structured according to a known type specification. You can create specs with functions type_boolean(), type_integer(), type_number(), type_string(), type_enum(), type_array(), and type_object() (#31).
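
    For example, a minimal sketch (assumes an OPENAI_API_KEY is available):

    ``` r
    library(ellmer)

    chat <- chat_openai()
    person <- type_object(
      "A person mentioned in the text.",
      name = type_string("The person's name."),
      age = type_integer("The person's age, in years.")
    )
    chat$extract_data("John Smith turned 27 last week.", type = person)
    ```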

  • The general ToolArg() has been replaced by the more specific type_*() functions. ToolDef() has been renamed to tool().

  • content_image_url() will now create inline images when given a data url (#110).

  • Streaming Ollama results works once again (#117).

  • Streaming OpenAI results now capture more results, including logprobs (#115).

  • New interpolate() and prompt_file() make it easier to create prompts that are a mix of static text and dynamic values.
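
    A minimal sketch (the prompt file path is hypothetical):

    ``` r
    library(ellmer)

    name <- "ellmer"
    interpolate("Describe the R package {{name}} in one sentence.")

    # Or keep the template in a separate file and interpolate while reading it:
    # prompt_file("prompts/describe-package.md")
    ```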

  • You can find how many tokens you've used in the current session by calling token_usage().

  • chat_browser() and chat_console() are now live_browser() and live_console().

  • The echo can now be one of three values: "none", "text", or "all". If "all", you'll now see both user and assistant turns, and all content types will be printed, not just text. When running in the global environment, echo defaults to "text", and when running inside a function it defaults to "none".
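
    For example, a minimal sketch (assumes an OPENAI_API_KEY is available):

    ``` r
    library(ellmer)

    # Print both user and assistant turns, including non-text content
    chat <- chat_openai(echo = "all")
    chat$chat("What's 2 + 2?")
    ```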

  • You can now log low-level JSON request/response info by setting options(ellmer_verbosity = 2).

  • chat$register_tool() now takes an object created by tool(). This makes it a little easier to reuse tool definitions (#32).
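
    For example, a minimal sketch of the new interface (assumes an OPENAI_API_KEY is available):

    ``` r
    library(ellmer)

    chat <- chat_openai()
    tool_rnorm <- tool(
      rnorm,
      "Draws numbers from a random normal distribution.",
      n = type_integer("The number of observations. Must be a positive integer."),
      mean = type_number("The mean value of the distribution."),
      sd = type_number("The standard deviation. Must be non-negative.")
    )
    chat$register_tool(tool_rnorm)
    ```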

  • new_chat_openai() is now chat_openai().

  • Claude and Gemini are now supported via chat_claude() and chat_gemini().

  • The Snowflake Cortex Analyst is now supported via chat_cortex() (#56).

  • Databricks is now supported via chat_databricks() (#152).