Learn How To Observe, Manage, and Scale Agentic AI Apps Using Azure AI Foundry - with this hands-on workshop
Updated Nov 24, 2025 · Jupyter Notebook
[ICML 2025] Official code for the paper "RoSTE: An Efficient Quantization-Aware Supervised Fine-Tuning Approach for Large Language Models"
Code for SFT and RL
Automatic music tagging using foundation models
LoRA fine-tuning pipeline for tool-calling chat LLMs with config-driven datasets, deterministic prompts, and built-in tool-call evaluation.
🎯 Fine-tuning LLMs using LlamaFactory for financial intent understanding | Evaluating open-source models on OpenFinData benchmark | Full implementation with multiple models (Qwen2.5/ChatGLM3/Baichuan2/Llama3)
Supervised Fine Tuning with QLoRA
Fine-tuning various models from the Llama 3.1 family on the Mult-It dataset
🦙 Llama2-FineTuning: Fine-tune LLAMA 2 with Custom Datasets Using LoRA and QLoRA Techniques
🛠 Fine-tune tool-calling chat models for specific domains using supervised learning with LoRA for high-quality conversation management.
A Multimodal AI medical assistant
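Several of the repositories above fine-tune with LoRA or QLoRA, which freeze the pretrained weights and train only a low-rank additive update. A minimal NumPy sketch of the core LoRA idea (all sizes and names here are illustrative, not taken from any repo above): the adapted layer computes `W x + (alpha / r) * B A x`, where only `A` and `B` are trained.

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 8, 8, 2, 4  # illustrative dimensions and rank

W = rng.normal(size=(d_out, d_in))     # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))               # trainable up-projection, zero-initialized

def lora_forward(x):
    # Base path plus low-rank update, scaled by alpha / r.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=d_in)
# With B zero-initialized, the adapter starts as a no-op:
assert np.allclose(lora_forward(x), W @ x)
```

Because `B` starts at zero, training begins exactly at the pretrained model's behavior; QLoRA follows the same scheme but stores `W` in a quantized (e.g. 4-bit) format while keeping `A` and `B` in higher precision.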