Once you’ve trained and evaluated a model in Oumi, the next step is to export it for deployment or downstream inference workflows. Exporting a model allows you to use it outside of the Oumi platform, whether for local testing, cloud-based inference, or integration into production systems. Oumi makes model export straightforward by packaging trained artifacts in a standard, portable format that works with common inference engines and serving frameworks. This enables a smooth transition from experimentation to real-world usage without additional conversion steps.

When to export a model

You’re ready to export your model when:
  • Evaluation results meet your predefined success criteria
  • You’re ready to run inference locally or in the cloud
  • You want to serve the model behind an API
  • You plan to integrate the model into an external application or workflow
If your model does not yet meet quality expectations, you can continue iterating on training or data synthesis before exporting.

What gets exported

When you export a model from Oumi, the following artifacts are typically produced:
  • Model weights: the trained parameters resulting from fine-tuning
  • Tokenizer and configuration files: required for correct input/output handling
  • Model metadata: information about the training run and configuration
These files are packaged in a format compatible with standard inference tools and libraries.
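As a quick sanity check after unpacking an export, you can verify that the directory contains each artifact type listed above. The sketch below is a hypothetical helper, not part of Oumi itself, and the filenames are assumptions based on the common Hugging Face-style layout; adjust them to match your actual export.

```python
import os

# Assumed filenames for each artifact category (Hugging Face-style layout).
# For weights, either the safetensors or the legacy PyTorch format may appear.
EXPECTED_FILES = {
    "weights": ["model.safetensors", "pytorch_model.bin"],
    "config": ["config.json"],
    "tokenizer": ["tokenizer_config.json"],
}


def missing_artifacts(export_dir: str) -> list[str]:
    """Return the artifact categories with no matching file in export_dir."""
    present = set(os.listdir(export_dir))
    return [
        category
        for category, candidates in EXPECTED_FILES.items()
        if not any(name in present for name in candidates)
    ]
```

Calling `missing_artifacts` on a complete export returns an empty list; a non-empty result names the categories that did not survive the download.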

How to export a model

To export a model in Oumi:
  1. Go to the Models page and click on the model name.
  2. On the model’s detail page, click the Export button.
  3. Click Continue Export to download the file to your local computer.
Once exported, the model can be used immediately for inference or deployment.
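A minimal local inference sketch might look like the following, assuming the export is in Hugging Face format and the `transformers` library is installed. The directory path and prompt are placeholders, not values produced by Oumi.

```python
def generate_text(model_dir: str, prompt: str, max_new_tokens: int = 64) -> str:
    # Assumes a Hugging Face-format export directory containing the weights,
    # tokenizer, and config files described above. Requires `transformers`.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_dir)
    model = AutoModelForCausalLM.from_pretrained(model_dir)
    inputs = tokenizer(prompt, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)


if __name__ == "__main__":
    import os

    # "./exported-model" is a placeholder for your unpacked export directory.
    if os.path.isdir("./exported-model"):
        print(generate_text("./exported-model", "Hello, world"))
```

The same directory can be pointed at by most serving frameworks that accept a local Hugging Face-format model path.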