Oumi streamlines model fine-tuning and performance iteration by providing multiple training methods and flexible configuration options. This allows you to experiment efficiently while retaining full control over your setup.

How to run training jobs

You can select either Supervised Fine-Tuning, which trains a model on labeled examples, or On-Policy Distillation, which trains a student model using a teacher model for knowledge distillation.

Supervised fine-tuning (SFT)

To start an SFT job, initiate a training run from the Models page.
  1. Click on Train New Model.
  2. In the Builder, select Supervised Fine-Tuning.
  3. Select the base model to fine-tune. Oumi offers a broad range of commonly used models.
  4. Choose your training dataset, and optionally select validation and test datasets. You can use uploaded datasets, synthesized data, or merged datasets.
  5. Select a training method. Oumi supports full fine-tuning (FFT) and parameter-efficient fine-tuning (PEFT), including LoRA.
  6. Adjust advanced hyperparameters (e.g., maximum steps, learning rate) if needed.
  7. Review your configuration, (optionally) save it as a reusable recipe, and launch the training job.
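Under the hood, a run configured through the Builder corresponds to a training recipe. The sketch below shows what such a recipe might look like in YAML; the field names follow the style of Oumi's open-source configs, but the exact schema, model name, dataset name, and hyperparameter values are illustrative assumptions, not a saved recipe from the platform.

```yaml
# Hypothetical SFT recipe sketch -- field names and values are illustrative.
model:
  model_name: "meta-llama/Llama-3.1-8B-Instruct"   # base model to fine-tune (assumed)

data:
  train:
    datasets:
      - dataset_name: "my-uploaded-dataset"        # uploaded, synthesized, or merged data

training:
  trainer_type: "TRL_SFT"    # supervised fine-tuning
  use_peft: true             # parameter-efficient fine-tuning (LoRA); set false for FFT
  max_steps: 1000            # advanced hyperparameters from step 6
  learning_rate: 2.0e-5

peft:
  lora_r: 8                  # LoRA rank (assumed value)
```

Saving your Builder configuration as a reusable recipe captures these same choices so you can relaunch or tweak the run later.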

On-policy distillation

To start an on-policy distillation job, initiate a training run from the Models page.
  1. Click on Train New Model.
  2. In the Builder, select On-Policy Distillation.
  3. Leave Training Method set to On-Policy Distillation.
  4. Choose your Base Model and Teacher Model.
  5. Select your Training Dataset.
  6. Configure advanced settings (e.g., Training Settings, Distillation Settings, Parameter-Efficient Settings) if needed.
See On-Policy Distillation for more information about configuration options and settings.
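As with SFT, a distillation run can be thought of as a recipe that pairs a student (base) model with a teacher model. The sketch below is a hypothetical illustration only: the section and field names (including `teacher_model` and the trainer identifier) are assumptions made for clarity, not the platform's actual schema.

```yaml
# Hypothetical on-policy distillation recipe sketch -- all fields illustrative.
model:
  model_name: "student-base-model"        # student / base model (assumed name)

teacher_model:
  model_name: "larger-teacher-model"      # teacher whose outputs guide the student

data:
  train:
    datasets:
      - dataset_name: "my-training-dataset"

training:
  trainer_type: "ON_POLICY_DISTILLATION"  # hypothetical trainer identifier
  use_peft: true                          # parameter-efficient settings (optional)
  max_steps: 1000                         # from Training Settings
```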

Checking job status

After a training job launches, it appears on the Activity log page with a status of Running. When training completes, you can access your model from the Models page.