Today, we’re excited to announce three major enhancements to model fine-tuning in Azure AI Foundry: Reinforcement Fine-Tuning (RFT) with o4-mini (coming soon), and Supervised Fine-Tuning (SFT) for the GPT-4.1-nano and Llama 4 Scout models (available now). These updates reflect our continued commitment to empowering organizations with the tools to build highly customized, domain-adapted AI systems for real-world impact.
With these new models, we’re unblocking three major avenues of LLM customization: GPT-4.1-nano is a powerful small model, ideal for distillation; o4-mini is the first reasoning model you can fine-tune; and Llama 4 Scout is a best-in-class open-source model.
Reinforcement Fine-Tuning with o4-mini
Reinforcement Fine-Tuning introduces a new level of control for aligning model behavior with complex business logic. By rewarding correct reasoning and penalizing undesirable outputs, RFT improves model decision-making in dynamic or high-stakes environments.
Coming soon for the o4-mini model, RFT unlocks new possibilities for use cases requiring adaptive reasoning, contextual awareness, and domain-specific logic, all while maintaining fast inference performance.
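To make the workflow concrete, here is a minimal sketch of submitting an RFT job. It assumes the OpenAI-style fine-tuning surface exposed through Azure OpenAI; the API version, file id, and grader payload are illustrative assumptions, not a definitive recipe.

```python
# Minimal sketch: submitting a Reinforcement Fine-Tuning (RFT) job for o4-mini.
# Assumes the OpenAI-style fine-tuning API exposed through Azure OpenAI; the
# API version, file id, and grader payload below are illustrative assumptions.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2025-04-01-preview",  # assumption: any RFT-capable preview version
)

# A string-check grader rewards outputs that exactly match the reference label
# stored with each training row (hypothetical field: correct_label).
grader = {
    "type": "string_check",
    "name": "exact_label_match",
    "input": "{{sample.output_text}}",
    "reference": "{{item.correct_label}}",
    "operation": "eq",
}

job = client.fine_tuning.jobs.create(
    model="o4-mini",               # assumption: model id as exposed in your region
    training_file="file-abc123",   # hypothetical id of an uploaded JSONL file
    method={
        "type": "reinforcement",
        "reinforcement": {"grader": grader},
    },
)
print(job.id, job.status)
```

The grader is where your business logic lives: string checks like this one are the simplest option, while model-based graders can score free-form reasoning against a rubric instead.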
Real-world impact: DraftWise
DraftWise, a legal tech startup, used reinforcement fine-tuning (RFT) in Azure AI Foundry Models to enhance the performance of reasoning models tailored for contract generation and review. Faced with the challenge of delivering highly contextual, legally sound suggestions to lawyers, DraftWise fine-tuned Azure OpenAI models using proprietary legal data to improve response accuracy and adapt to nuanced user prompts. This led to a 30% improvement in search result quality, enabling lawyers to draft contracts faster and focus on high-value advisory work.
Reinforcement fine-tuning on reasoning models is a potential game changer for us. It’s helping our models understand the nuance of legal language and respond more intelligently to complex drafting instructions, which promises to make our product significantly more useful to lawyers in real time.
—James Ding, founder and CEO of DraftWise.
When should you use Reinforcement Fine-Tuning?
Reinforcement Fine-Tuning is best suited to use cases where adaptability, iterative learning, and domain-specific behavior are essential. You should consider RFT if your scenario involves:
- Custom Rule Implementation: RFT thrives in environments where decision logic is highly specific to your organization and cannot easily be captured through static prompts or traditional training data. It enables models to learn flexible, evolving rules that reflect real-world complexity.
- Domain-Specific Operational Standards: Ideal for scenarios where internal procedures diverge from industry norms, and where success depends on adhering to those bespoke standards. RFT can effectively encode procedural variations, such as extended timelines or modified compliance thresholds, into the model’s behavior.
- High Decision-Making Complexity: RFT excels in domains with layered logic and variable-rich decision trees. When outcomes depend on navigating numerous subcases or dynamically weighing multiple inputs, RFT helps models generalize across complexity and deliver more consistent, accurate decisions.
Example: Wealth advisory at Contoso Wellness
To showcase the potential of RFT, consider Contoso Wellness, a fictitious wealth advisory firm. Using RFT, the o4-mini model learned to adapt to unique business rules, such as identifying optimal client interactions based on nuanced patterns like the ratio of a client’s net worth to available funds. This enabled Contoso to streamline its onboarding process and make more informed decisions faster.
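The snippet below is illustrative only: it shows, in plain Python, the kind of bespoke decision rule an RFT grader could reward during training. The field names and thresholds are invented for this fictitious scenario.

```python
# Illustrative only: a bespoke onboarding rule an RFT grader could reward for
# the fictitious Contoso scenario. Field names and thresholds are invented.
def preferred_interaction(net_worth: float, available_funds: float) -> str:
    """Pick an onboarding track from the net-worth-to-available-funds ratio."""
    if available_funds <= 0:
        return "manual_review"
    ratio = net_worth / available_funds
    if ratio >= 10:
        return "private_advisor"  # mostly illiquid wealth: white-glove onboarding
    if ratio >= 2:
        return "hybrid_advisory"
    return "self_service"

def reward(model_choice: str, net_worth: float, available_funds: float) -> float:
    """Return 1.0 when the model's recommendation matches the house rule, else 0.0."""
    return 1.0 if model_choice == preferred_interaction(net_worth, available_funds) else 0.0
```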
Supervised Fine-Tuning now available for GPT-4.1-nano
We’re also bringing Supervised Fine-Tuning (SFT) to the GPT-4.1-nano model, a small but powerful foundation model optimized for high-throughput, cost-sensitive workloads. With SFT, you can instill your model with company-specific tone, terminology, workflows, and structured outputs, all tailored to your domain. This model will be available for fine-tuning in the coming days.
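Training data for SFT uses the standard chat-completions JSONL format. Here is a minimal, hand-written sketch; the system prompt and examples are invented placeholders for your own tone, terminology, and workflows.

```python
# Minimal sketch: shaping SFT training data for GPT-4.1-nano in the standard
# chat-completions JSONL format. All content here is an invented placeholder.
import json

examples = [
    {
        "messages": [
            {"role": "system", "content": "You are Contoso's support assistant. Answer concisely, in Contoso house style, without jargon."},
            {"role": "user", "content": "How do I reset my device?"},
            {"role": "assistant", "content": "Hold the power button for 10 seconds until the Contoso logo appears, then release. You're all set!"},
        ]
    },
    # ...hundreds more examples covering your workflows and structured outputs
]

# Write one JSON object per line, as the fine-tuning service expects.
with open("train.jsonl", "w", encoding="utf-8") as f:
    for example in examples:
        f.write(json.dumps(example, ensure_ascii=False) + "\n")
```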
Why fine-tune GPT-4.1-nano?
- Precision at Scale: Tailor the model’s responses while maintaining speed and efficiency.
- Enterprise-Grade Output: Ensure alignment with business processes and tone of voice.
- Lightweight and Deployable: Perfect for scenarios where latency and cost matter, such as customer service bots, on-device processing, or high-volume document parsing.
Compared to larger models, GPT-4.1-nano delivers faster inference and lower compute costs, making it well suited to large-scale workloads like:
- Customer support automation, where models must handle thousands of tickets per hour with consistent tone and accuracy.
- Internal knowledge assistants that follow company style and protocol when summarizing documentation or responding to FAQs.
As a small, fast, yet highly capable model, GPT-4.1-nano is also a great candidate for distillation. You can use models like GPT-4.1 or o4 to generate training data, or capture production traffic with stored completions, and teach GPT-4.1-nano to be just as smart!
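As a rough sketch of that distillation loop, the following uses a larger deployed model to label prompts and then submits a fine-tuning job for GPT-4.1-nano. The deployment names, prompts, and API version are assumptions, not a definitive recipe.

```python
# Rough sketch of distillation: a larger deployed model (assumed deployment
# name "gpt-4.1") labels prompts, and the results become SFT data for
# GPT-4.1-nano. Prompts, names, and the API version are illustrative.
import json
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-10-21",
)

prompts = [
    "Summarize our refund policy in two sentences.",
    "Draft a polite status update for an open support ticket.",
]

# Let the teacher model answer each prompt, then store prompt/answer pairs as SFT rows.
with open("distill.jsonl", "w", encoding="utf-8") as f:
    for prompt in prompts:
        teacher = client.chat.completions.create(
            model="gpt-4.1",  # assumption: your deployment name for the teacher model
            messages=[{"role": "user", "content": prompt}],
        )
        row = {"messages": [
            {"role": "user", "content": prompt},
            {"role": "assistant", "content": teacher.choices[0].message.content},
        ]}
        f.write(json.dumps(row, ensure_ascii=False) + "\n")

# Upload the data and fine-tune the student model on the teacher's outputs.
train_file = client.files.create(file=open("distill.jsonl", "rb"), purpose="fine-tune")
job = client.fine_tuning.jobs.create(model="gpt-4.1-nano", training_file=train_file.id)
print(job.id)
```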

Llama 4 Fine-Tuning now available
We’re also excited to announce support for fine-tuning Meta’s Llama 4 Scout, a cutting-edge model with 17 billion active parameters that offers an industry-leading context window of 10M tokens while fitting on a single H100 GPU for inference. It’s a best-in-class model, more powerful than all previous-generation Llama models.
Llama 4 fine-tuning is available in our managed compute offering, allowing you to fine-tune and run inference using your own GPU quota. Available both in Azure AI Foundry and as Azure Machine Learning components, it gives you access to additional hyperparameters for deeper customization compared to our serverless experience.
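For the managed compute path, a rough sketch with the azure-ai-ml Python SDK might look like the following. The training script, data asset, compute name, curated environment, and the model’s registry id are all assumptions to verify against the model catalog; Foundry also provides prebuilt fine-tuning components that replace the custom script shown here.

```python
# Rough sketch: running a fine-tuning job on your own GPU quota with the
# azure-ai-ml SDK. The registry model id, environment, data asset, compute
# name, and training script are assumptions; check the model catalog.
from azure.ai.ml import MLClient, Input, command
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace>",
)

job = command(
    code="./src",  # your fine-tuning script (e.g. finetune.py) lives here
    command=(
        "python finetune.py "
        "--model_id ${{inputs.model_id}} "
        "--train_data ${{inputs.train_data}}"
    ),
    inputs={
        # Assumed registry path for Llama 4 Scout; confirm the exact id in the catalog.
        "model_id": "azureml://registries/azureml-meta/models/Llama-4-Scout-17B-16E-Instruct/labels/latest",
        # Hypothetical registered training data asset.
        "train_data": Input(type="uri_file", path="azureml:contoso-train-data:1"),
    },
    # Assumed curated fine-tuning environment; substitute your own if needed.
    environment="azureml://registries/azureml/environments/acft-hf-nlp-gpu/labels/latest",
    compute="h100-cluster",  # your managed GPU compute, drawn from your own quota
)

returned_job = ml_client.jobs.create_or_update(job)
print(returned_job.studio_url)
```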
Get started with Azure AI Foundry today
Azure AI Foundry is your foundation for enterprise-grade AI tuning. These fine-tuning enhancements unlock new frontiers in model customization, helping you build intelligent systems that think and respond in ways that reflect your business DNA.
- Use Reinforcement Fine-Tuning with o4-mini to build reasoning engines that learn from experience and evolve over time. Coming soon in Azure AI Foundry, with regional availability in East US2 and Sweden Central.
- Use Supervised Fine-Tuning with GPT-4.1-nano to scale reliable, cost-efficient, and highly customized model behaviors across your organization. Available now in Azure AI Foundry in North Central US and Sweden Central.
- Try Llama 4 Scout fine-tuning to customize a best-in-class open-source model. Available now in the Azure AI Foundry model catalog and Azure Machine Learning.
With Azure AI Foundry, fine-tuning isn’t just about accuracy; it’s about trust, efficiency, and adaptability at every layer of your stack.
Explore further:
We’re just getting started. Stay tuned for more model support, advanced tuning techniques, and tools to help you build AI that’s smarter, safer, and uniquely yours.