
Run

The Run UI provides a comprehensive view of each model execution (run), showing parameters, metrics, artifacts, and settings. It helps you analyze and optimize runs effectively.

1. Run Overview Tab

  • Parameters – Displays the hyperparameters or configuration settings used for the model run.
  • Metrics – Lists performance metrics (e.g., accuracy, loss) logged during execution.

[Screenshot: Run Overview – Parameters]
[Screenshot: Run Overview – Metrics]
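The split between the two lists above can be sketched with a minimal, hypothetical run record (plain Python, not the platform's actual API): parameters are fixed when the run starts, while metrics accumulate as the run executes.

```python
# Hypothetical sketch of the data behind the Overview tab (not OICM's real API).
run = {
    # Parameters: hyperparameters fixed before execution.
    "params": {"learning_rate": 0.01, "batch_size": 32},
    # Metrics: performance values logged while the run executes.
    "metrics": {},
}

def log_metric(run, name, value):
    """Append a metric value; the Overview tab shows the latest one."""
    run["metrics"].setdefault(name, []).append(value)

# Simulated training loop logging accuracy and loss per epoch.
for acc, loss in [(0.71, 0.92), (0.82, 0.61), (0.88, 0.44)]:
    log_metric(run, "accuracy", acc)
    log_metric(run, "loss", loss)

latest = {name: values[-1] for name, values in run["metrics"].items()}
print(latest)  # {'accuracy': 0.88, 'loss': 0.44}
```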

2. Run Metrics Tab

  • Chart Visualization – View metrics over time or across different runs.
  • Performance Insights – Identify trends, bottlenecks, and potential optimization targets.

[Screenshot: Run Metrics]
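One simple performance insight the chart view supports is spotting where a metric stopped improving. The same check can be sketched in plain Python (the per-step loss values below are hypothetical):

```python
# Hypothetical (step, loss) pairs, as they would appear in the metrics chart.
loss_history = [(0, 0.92), (1, 0.61), (2, 0.44), (3, 0.47), (4, 0.46)]

# Best step: a candidate checkpoint to deploy or resume from.
best_step, best_loss = min(loss_history, key=lambda p: p[1])

# Plateau check: no step after the best one improved on its value.
plateaued = all(v >= best_loss for _, v in loss_history[best_step + 1:])

print(best_step, best_loss, plateaued)  # 2 0.44 True
```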

3. Run Artifacts Tab

  • Artifact Repository – Stores the files generated by the run, including environment descriptors such as python_env.yaml and requirements.txt.
  • Model Files – Includes serialized models (model.pkl, .h5, etc.) and the MLmodel metadata file.

[Screenshot: Run Artifacts]

Important Note: Editing artifact files such as the YAML environment descriptors or requirements.txt by hand can break model compatibility if done improperly. Make changes cautiously to avoid deployment failures or unexpected behavior.
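For orientation, a python_env.yaml typically looks like the fragment below (the values are illustrative, not taken from a real run; the file is generated per run). The note above applies here: changing the Python or dependency versions by hand can make the recorded environment unreproducible.

```yaml
# Illustrative python_env.yaml (example values only)
python: "3.10.12"
build_dependencies:
  - pip
  - setuptools
  - wheel
dependencies:
  - -r requirements.txt
```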

4. Run Settings Tab

  • Auto-Generated Tags – Track user, source name, source type, run name, and model history.
  • Custom Tags – Add tags to categorize runs for easier filtering and organization.
  • Delete & Restore – Remove unwanted runs; restore them later from the Run Overview tab if needed.

[Screenshot: Run Settings – Auto Tags]
[Screenshot: Run Settings – Extra Tags]
[Screenshot: Run Settings – Delete]
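Custom tags pay off when filtering. A minimal sketch of tag-based filtering over a hypothetical list of runs (plain Python, not the platform's query API):

```python
# Hypothetical run list mixing auto-generated ("user") and custom ("stage") tags.
runs = [
    {"name": "run-a", "tags": {"user": "alice", "stage": "baseline"}},
    {"name": "run-b", "tags": {"user": "bob",   "stage": "tuned"}},
    {"name": "run-c", "tags": {"user": "alice", "stage": "tuned"}},
]

def filter_by_tag(runs, key, value):
    """Return the names of runs whose tag `key` equals `value`."""
    return [r["name"] for r in runs if r["tags"].get(key) == value]

print(filter_by_tag(runs, "stage", "tuned"))  # ['run-b', 'run-c']
print(filter_by_tag(runs, "user", "alice"))   # ['run-a', 'run-c']
```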


Next Steps

  • Experiment UI – Learn how runs fit into the larger experiment workflow.
  • Tracking Overview – Explore the fundamentals of OICM’s tracking capabilities.
  • Resource Allocation – Understand how to manage compute resources for optimal run performance.