This repository contains examples and best practices for fine-tuning large language models (LLMs). Whether you're working with open-source models or leveraging OpenAI's APIs, it provides hands-on guides and resources for a range of fine-tuning scenarios.
Fine-tuning large open-source language models presents unique challenges, including infrastructure setup, high computational demands, and the need for scalable training processes. This section is based on a blog series that explores:
- Fundamentals of scaling fine-tuning for large open-source LLMs using Azure Machine Learning (Azure ML).
- Real-world pipeline setups for end-to-end model training, hyperparameter optimization, and testing.
- Deployment of trained models for practical use.
- Advanced techniques for handling multi-billion-parameter models efficiently.
For a detailed explanation, see the Open Source LLM Blog Post Series.
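As a back-of-the-envelope illustration of why multi-billion-parameter models demand careful infrastructure planning (the numbers below are generic estimates, not figures from this repo), full fine-tuning with mixed-precision Adam requires roughly 16 bytes of GPU memory per parameter:

```python
GB = 1024 ** 3

def training_memory_gb(num_params: float, bytes_per_param: float = 16) -> float:
    """Approximate GPU memory (GB) for model weights plus optimizer state.

    16 bytes/param is the common estimate for mixed-precision Adam:
    fp16 weights (2) + fp16 gradients (2) + fp32 master weights (4)
    + Adam momentum (4) + Adam variance (4). Activations and KV caches
    are excluded, so real usage is higher.
    """
    return num_params * bytes_per_param / GB

# A 7B-parameter model already needs ~104 GB for weights + optimizer
# state alone, which is why single-GPU full fine-tuning is infeasible
# without sharding (e.g. ZeRO/FSDP) or parameter-efficient methods.
print(round(training_memory_gb(7e9), 1))  # 104.3
```

This is why the series covers distributed training setups rather than single-machine recipes.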
This section provides examples and best practices for fine-tuning OpenAI models for various scenarios, enabling you to adapt models to your specific use case. It includes:
- Function Calling: Fine-tuning for tasks that require structured outputs or API integrations.
- Python Analytics: Adapting models for advanced analytics and Python-based tasks.
- SQL Generation: Training models to generate SQL queries from natural language inputs.
- Data Preparation: Tools and scripts for preparing datasets for fine-tuning.
Explore the details in the Azure OpenAI Section.
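As a minimal sketch of the data-preparation step (the file name, schema, and training example below are illustrative, not taken from this repo), OpenAI chat fine-tuning expects a JSONL file in which each line is a JSON object with a `messages` array:

```python
import json

# Hypothetical SQL-generation training example; a real dataset would
# contain many such records covering the target schema and query styles.
examples = [
    {
        "messages": [
            {"role": "system",
             "content": "Translate the user's request into a SQL query."},
            {"role": "user",
             "content": "List the names of customers who placed an order in 2024."},
            {"role": "assistant",
             "content": ("SELECT c.name FROM customers c "
                         "JOIN orders o ON o.customer_id = c.id "
                         "WHERE o.order_date >= '2024-01-01' "
                         "AND o.order_date < '2025-01-01';")},
        ]
    },
]

# One JSON object per line -- the JSONL format the fine-tuning API expects.
with open("sql_train.jsonl", "w", encoding="utf-8") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```

The same structure applies to the function-calling and analytics scenarios; only the message contents change.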
This project is licensed under the MIT License. See the LICENSE file for details.
To get started, clone the repository and explore the individual sections for instructions and examples tailored to your use case.
```shell
git clone https://github.com/your-repo/LLM-FINE-TUNING.git
```