Last PR included in this release: #1264
- editor: Added support for modifying general prompt metadata, such as `remember_chat_context`, below model settings (#1205)
- editor: Added logging events for share and download button clicks as well as any actions that edit the config (#1217, #1220)
- extensions: Added a conversational model parser to the Hugging Face remote inference extension and added an input model schema to the editor client (#1229, #1230)
- editor: Updated the `model` value in model settings to clear when the model for a prompt is updated; general model groups, such as Hugging Face Tasks, require the `model` field to specify a specific model name (#1245, #1257)
- extensions: Set default model names for the Hugging Face remote inference model parsers for Summarization, Translation, Automatic Speech Recognition and Text-to-Speech tasks (#1246, #1221)
- gradio-notebook: Fixed styles for checkboxes, markdown links, loading spinners and output lists, as well as general cleanup to buttons and input sizing (#1248, #1249, #1250, #1251, #1252, #1231)
- python-sdk: Fixed dependency issue to no longer pin pydantic to 2.4.2 so that `aiconfig-editor` can be compatible with other libraries (#1225)
- [updated] Added new content to Gradio Notebooks documentation, including a 5-minute tutorial video, local model support, a more streamlined content format, and warnings for discovered issues with the Gradio SDK version (#1247, #1234, #1243, #1238)
- vscode: Now utilizes the user's Python interpreter in the VS Code environment when installing dependencies for the AIConfig Editor extension. PR #1151
- vscode: Added a command for opening an AIConfig file directly. PR #1164
- vscode: Added a VS Code command for displaying a Welcome Page on how to use the extension effectively. PR #1194
- Python SDK:
  - AIConfig Format Support: Added support in the AIConfig format for chats starting with an assistant (AI) message by making the initial prompt input empty. PR #1158
  - Dependency Management: Pinned google-generativeai module version to >=0.3.1 in `requirements.txt` files. PR #1171
  - Python Version Requirement: Defined all `pyproject.toml` files to require Python version >= 3.10. PR #1146
- VS Code:
  - Extension Dependencies: Removed the Hugging Face extension from VS Code extension dependencies. PR #1167
  - Editor Component Theming: Fixed color scheming in the AIConfig editor component to match VS Code settings. PR #1168, PR #1176
  - Share Command Fix: Fixed an issue where the Share command was not working for unsigned AWS S3 credentials. PR #1213
  - Notification Issue: Fixed an issue where a notification, “Failed to start aiconfig server,” would show when closing a config with unsaved changes. PR #1201
- Tutorials and Guides:
  - Created a getting-started tutorial for Gradio Notebooks. Documentation
  - Created a cookbook for RAG with model-graded evaluation. PR #1169, PR #1200
Last PR included in this release: #995
- sdk: Updated input attachments with `AttachmentDataWithStringValue` type to distinguish the data representation `kind` (`file_uri` or `base64`) (#929). Please note that this can break existing SDK calls for model parsers that use non-text inputs
- editor: Added telemetry data to log editor usage. Users can opt out of telemetry by setting `allow_usage_data_sharing: False` in the `.aiconfigrc` runtime configuration file (#869, #899, #946)
- editor: Added CLI `rage` command so users can submit bug reports (#870)
- editor: Changed streaming format to be output chunks for the running prompt instead of the entire AIConfig (#896)
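For reference, the telemetry opt-out is a single key in the `.aiconfigrc` runtime configuration file. The snippet below is a minimal sketch, assuming the file uses YAML-style `key: value` syntax as the setting's form suggests:

```yaml
# .aiconfigrc — runtime configuration (sketch; location/other keys not shown)
# Set to False to opt out of editor usage telemetry.
allow_usage_data_sharing: False
```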
- editor: Disabled run button on other prompts if a prompt is currently running (#907)
- editor: Made callback handler props optional and no-op if not included (#941)
- editor: Added `mode` prop to customize UI themes on the client, as well as match the user's dark/light mode system preference (#950, #966)
- editor: Added read-only mode where editing of the AIConfig is disabled (#916, #935, #936, #939, #967, #961, #962)
- eval: Generalized params to take in arbitrary dict instead of list of arguments (#951)
- eval: Created `@metric` decorator to make defining metrics and adding tests easier by only needing to define the evaluation metric implementation inside the inner function (#988)
- python-sdk: Refactored `delete_output` to set the `outputs` attribute of `Prompt` to `None` rather than an empty list (#811)
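The `@metric` decorator pattern (#988) can be sketched roughly as follows. All names below are illustrative, not the actual aiconfig eval API; the point is that only the inner evaluation function needs to be written:

```python
from dataclasses import dataclass
from typing import Callable, Generic, TypeVar

T = TypeVar("T")

@dataclass
class Metric(Generic[T]):
    """Pairs an evaluation function with a name for reporting."""
    name: str
    evaluate: Callable[[str], T]

def metric(fn: Callable[..., Callable[[str], T]]) -> Callable[..., Metric[T]]:
    """Hypothetical decorator: the decorated function returns the inner
    evaluation implementation, and the decorator packages it as a Metric."""
    def make(*args, **kwargs) -> Metric[T]:
        return Metric(name=fn.__name__, evaluate=fn(*args, **kwargs))
    return make

@metric
def contains_substring(substring: str):
    # Only the evaluation implementation is defined inside the inner function.
    def evaluate(output: str) -> bool:
        return substring in output
    return evaluate

has_hello = contains_substring("hello")
print(has_hello.evaluate("hello world"))  # True
```

This mirrors the described ergonomics: the decorator handles the metric-object plumbing so test authors write only the scoring logic.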
- editor: Refactored run prompt server implementation to use `stop_streaming`, `output_chunk`, `aiconfig_chunk`, and `aiconfig` so the server can more explicitly pass data to the client (#914, #911)
- editor: Split `RUN_PROMPT` event on the client into `RUN_PROMPT_START`, `RUN_PROMPT_CANCEL`, `RUN_PROMPT_SUCCESS`, and `RUN_PROMPT_ERROR` (#925, #922, #924)
- editor: Rearranged default model ordering to be more user-friendly (#994)
- editor: Centered the Add Prompt button and fixed styling (#912, #953)
- editor: Fixed an issue where changing the model for a prompt resulted in the model settings being cleared; now they will persist (#964)
- editor: Cleared outputs when first clicking the run button in order to make it clearer that new outputs are created (#969)
- editor: Fixed bug to display array objects in model input settings properly (#902)
- python-sdk: Fixed issue where we were referencing `PIL.Image` as a type instead of a module in the HuggingFace `image_2_text.py` model parser (#970)
- editor: Connected HuggingFace model parser task names to schema input renderers (#900)
- editor: Fixed `float` model settings schema renderer to `number` (#989)
- [new] Added docs page for AIConfig Editor (#876, #947)
- [updated] Renamed “variables” to “parameters” to make it less confusing (#968)
- [updated] Updated Getting Started page with quickstart section, and more detailed instructions for adding API keys (#956, #895)
We built an AIConfig Editor, which is like VS Code + Jupyter notebooks for AIConfig files! You can edit the config prompts, parameters, and settings, and, most importantly, run them to generate outputs. Source control your AIConfig files by clearing outputs and saving. It’s the most convenient way to work with Generative AI models through a local user interface. See the README to learn more about how to use it!
- Add and delete prompts (#682, #665)
- Select prompt model and model settings with easy-to-read descriptions (#707, #760)
- Modify local and global parameters (#673)
- Run prompts with streaming or non-streaming outputs (#806)
- Cancel inference runs mid-execution (#789)
- Modify name and description of AIConfig (#682)
- Render input and outputs as text, image, or audio format (#744, #834)
- View prompt input, output, and model settings in either the regular UI or raw JSON format (#686, #656, #757)
- Copy and clear prompt output results (#656, #791)
- Autosave every 15s, or press (CTRL/CMD) + S or Save button to do it manually (#734, #735)
- Edit on existing AIConfig file or create a new one if not specified (#697)
- Run multiple editor instances simultaneously (#624)
- Error handling for malformed input + settings data, unexpected outputs, and heartbeat status when server has disconnected (#799, #803, #762)
- Specify explicit model names to use for generic HuggingFace model parser tasks (#850)
- sdk: Schematized prompt OutputData format to be of type string, `OutputDataWithStringValue`, or `OutputDataWithToolCallsValue` (#636). Please note that this can break existing SDK calls
- extensions: Created 5 new HuggingFace local transformers: text-to-speech, image-to-text, automatic speech recognition, text summarization, & text translation (#793, #821, #780, #740, #753)
- sdk: Created Anyscale model parser and cookbook to demonstrate how to use it (#730, #746)
- python-sdk: Explicitly set `model` in completion params for several model parsers (#783)
- extensions: Refactored HuggingFace model parsers to use a default model for the pipeline transformer if `model` is not provided (#863, #879)
- python-sdk: Made `get_api_key_from_environment` non-required and able to return nullable, wrapping it around Result-Ok (#772, #787)
- python-sdk: Created `get_parameters` method (#668)
- python-sdk: Added exception handling for `add_output` method (#687)
- sdk: Changed run output type to be `list[Output]` instead of `Output` (#617, #618)
- extensions: Refactored HuggingFace text-to-image model parser response data into a single object (#805)
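The Result-Ok wrapping mentioned for `get_api_key_from_environment` follows the general pattern below. This is a hand-rolled sketch of the idea (errors as values rather than exceptions, with a nullable success value for non-required keys), not aiconfig's actual implementation or signature:

```python
import os
from dataclasses import dataclass
from typing import Optional, Union

@dataclass
class Ok:
    value: Optional[str]  # nullable: None when a non-required key is absent

@dataclass
class Err:
    message: str

def get_api_key_from_environment(
    key_name: str, required: bool = True
) -> Union[Ok, Err]:
    """Return Ok(value) on success; Err only when a required key is missing."""
    value = os.environ.get(key_name)
    if value is None and required:
        return Err(f"Missing required environment variable: {key_name}")
    return Ok(value)
```

Callers then branch on the result type instead of wrapping the call in try/except.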
- extensions: Renamed `python-aiconfig-llama` to `aiconfig-extension-llama` (#607)
- python-sdk: Fixed `get_prompt_template()` issue for non-text prompt inputs (#866)
- python-sdk: Fixed core HuggingFace library issue where response type was not a string (#769)
- python-sdk: Fixed bug by adding `kwargs` to `ParameterizedModelParser` (#882)
- python-sdk: Added automated tests for `add_output()` method (#687)
- python-sdk: Updated `set_parameters()` to work if parameters haven’t been defined already (#670)
- python-sdk: Removed `callback_manager` argument from run method (#886)
- extensions: Removed extra `python` dir from `aiconfig-extension-llama-guard` (#653)
- python-sdk: Removed unused model IDs from OpenAI model parser (#729)
- [new] AIConfig Editor README: https://github.com/lastmile-ai/aiconfig/tree/main/python/src/aiconfig/editor#readme
- [new] Anyscale cookbook: https://github.com/lastmile-ai/aiconfig/tree/main/cookbooks/Anyscale
- [new] Gradio cookbook for HuggingFace extension model parsers: https://github.com/lastmile-ai/aiconfig/tree/main/cookbooks/Gradio
- [updated] AIConfig README: https://github.com/lastmile-ai/aiconfig/blob/main/README.md
- Added support for YAML file format in addition to JSON for improved readability of AIConfigs (#583)
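To illustrate the readability gain, a minimal AIConfig might look like the YAML below. This is a hypothetical example: the field values, prompt, and model name are invented, and the exact schema should be checked against the AIConfig documentation:

```yaml
# Hypothetical minimal AIConfig in YAML (values are illustrative)
name: example_config
schema_version: latest
metadata:
  parameters: {}
prompts:
  - name: greeting
    input: "Tell me a joke about {{topic}}"
    metadata:
      model: gpt-3.5-turbo
```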
- python-sdk: Added optional param to `add_prompt()` method to specify the index at which to add the prompt (#599)
- eval: Added generalized metric builder for creating your own metric evaluation class (#513)
- python-sdk: Supported using default model if no prompt model is provided (#600)
- python-sdk: Refactored `update_model()` method to take in model name and settings as separate arguments (#507)
- python-sdk: Supported additional types in the Gemini model parser. Now includes list of strings, Content string, and Content struct (#532)
- extensions: Added callback handlers to HuggingFace extensions (#597)
- python-sdk: Pinned `google-generativeai` to version 0.3.1 on the Gemini model parser (#534)
- Added explicit output types to the `ExecuteResult.data` schema. Freeform is also still supported (#589)
- Checked for null in system prompt (#541)
- Converted protobuf to dict to fix pydantic BaseModel errors on Gemini (#558)
- Fixed issue where we were overwriting a single prompt output instead of creating a new one in batch execution (#566)
- Unpinned `requests==2.30.0` dependency and used https instead of http in the `load_from_workbook()` method (#582)
- typescript-sdk: Created automated test for the typescript `save()` API (#198)
- OpenAI Prompt Engineering Guide: https://openai-prompt-guide.streamlit.app/
- Chain-of-Verification Demo: https://chain-of-verification.streamlit.app/
- python-sdk: Created model parser extension for Google’s Gemini (#478, cookbook)
- Added attachment field to PromptInput schema to support non-text input data (ex: image, audio) (#473)
- python-sdk: Created batch execution interface using `config.run_batch()` (#469)
- Added model parser for HuggingFace text2Image tasks (#460)
- Updated evaluation metric values to be any arbitrary type, not just floats, & renamed fields for easier understanding (#484, #437)
- Merged `aiconfig-extension-hugging-face-transformers` into `aiconfig-extension-hugging-face`, where all Hugging Face tasks will now be supported (#498, README)
- Fixed caching issue where re-running the same prompt caused nondeterministic behavior (#491)
- typescript-sdk: Pinned OpenAI dependency to 4.11.1 to have a stable API surface (#524)
- typescript-sdk: Removed redundant PromptWithOutputs type (#508)
- Refactored and shortened README (#493)
- Created table of supported models (#501)
- Updated cookbooks with explicit instructions on how to set API keys (#441)
- python-sdk: Evaluation harness for AIConfig. Supports text input/output evaluation with the native AIConfig interface (tutorial) as well as an integration with Promptfoo (tutorial). See the README for details
- python-sdk: Support for PaLM Text as a core model parser
- typescript-sdk: Support for PaLM Text as a core model parser (8902bef)
- python-extension HuggingFace Transformers Extension: Fixed bug where we were not properly appending outputs for multiple return sequences (49da477)
- python-extension HuggingFace Transformers Extension: Fixed a bug that defaulted model to GPT-2. (1c28f7c)
- python-sdk: DALL-E Model Parser (4753f21)
- python-sdk: Updated OpenAI Introspection Wrapper - Now more user-friendly with complete module mocking for easier import replacement, enhancing overall usability. (143c3dd)
- sdk: Updated `add_prompt` to rename the prompt if a different name is passed in (a29d5f87)
- typescript-sdk: Updated Metadata field to be optional (cb5fdc5)
- python-tests: Higher-fidelity test script that performs a complete build for testing (04fc5a5)
- tests: Added a GitHub Actions script for testing main (74e0c15)
- python-sdk: Added linter to python-sdk
- readme: Added README and license within the python extension directory (450012c)
- cookbooks: Updated cookbooks’ compatibility with latest aiconfig releases
- python-extension: Extension for HuggingFace Text Generation with transformers (222bf6e)
- python-extension: Extension for LLama 2.0
- typescript-extension: Extension for LLama 2.0