ASR Benchmark UI

Under the ASR Benchmark tab, you can view and manage all configured ASR benchmarks.


1. Package Configuration

1.1 Overview

On the main ASR Benchmark page, you’ll see all benchmark packages currently defined:

Home Page

1.2 Create a New Benchmark Package

  1. Click Create new ASR package.
  2. Add Dataset – Each package can contain multiple datasets.
  3. Configure each dataset with:
    • Dataset Identifier – A unique label.
    • Source & Model – Point to your audio dataset (e.g., from HuggingFace Datasets).
    • Dataset Configuration – Subset, split (train/test/validation), and the specific audio/transcription columns.
    • Normalizer – (Optional) Applies text normalization before comparing transcripts.

      New Package

      New Dataset

      Tip: For HuggingFace datasets, check the Viewer tab to find valid subset and split names. A sketch of how to verify these values programmatically appears at the end of this section.

      Dataset Configuration

  4. Save the package after adding the desired datasets.

Complete Package
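Before saving a package, it can be useful to confirm that the subset, split, and column names you entered actually exist in the source dataset. The snippet below is a minimal sketch assuming the HuggingFace datasets library; the dataset name, subset, split, and column names are placeholders to replace with your own, and simple_normalizer is only a hypothetical example of the kind of text normalization a Normalizer might apply.

```python
# Minimal sketch: check that the subset, split, and columns entered in the
# dataset configuration actually exist in the source dataset.
# All names below are placeholders; replace them with your own values.
import re

from datasets import load_dataset

dataset_name = "mozilla-foundation/common_voice_11_0"  # Source (placeholder)
subset = "en"                                          # Subset (placeholder)
split = "test"                                         # Split (placeholder)
audio_column = "audio"                                 # Audio column (placeholder)
text_column = "sentence"                               # Transcription column (placeholder)

# Stream a single row so the whole dataset does not have to be downloaded.
ds = load_dataset(dataset_name, subset, split=split, streaming=True)
sample = next(iter(ds))

assert audio_column in sample, f"Missing audio column: {audio_column}"
assert text_column in sample, f"Missing transcription column: {text_column}"

def simple_normalizer(text: str) -> str:
    # Hypothetical normalizer: lowercase and strip punctuation before
    # transcripts are compared.
    return re.sub(r"[^\w\s']", "", text.lower()).strip()

print("raw       :", sample[text_column])
print("normalized:", simple_normalizer(sample[text_column]))
```

If the subset or split name is wrong, load_dataset fails immediately, which is usually faster than discovering the problem after a benchmark run has started.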


2. Running a Benchmark

  1. Select an existing package.
  2. Click New ASR Benchmark Run.
  3. Specify the model, required resources (CPU/GPU), and any additional settings.
  4. Click Run Benchmark.

New Run

Note: You can discover pre-trained ASR models on HuggingFace with the automatic-speech-recognition tag.
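If you want a feel for what an ASR run does with your audio before launching a full benchmark, the transformers pipeline API can run any model carrying that tag on a single file. This is an illustrative sketch, not the tool's internal implementation; the model name and audio path below are placeholders.

```python
# Minimal sketch: run a HuggingFace ASR model on a single audio file.
# The model name and audio path are placeholders; any model with the
# automatic-speech-recognition tag can be substituted.
from transformers import pipeline

asr = pipeline(
    task="automatic-speech-recognition",
    model="openai/whisper-small",  # placeholder model
    device=-1,                     # -1 = CPU; use a GPU index if one is available
)

result = asr("example.wav")        # placeholder audio file
print(result["text"])
```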


3. Checking the Results

After the benchmark finishes, you can view the aggregated scores for each model in the run.

Results
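ASR benchmarks are typically scored with metrics such as word error rate (WER); the exact scores displayed depend on the benchmark configuration. The sketch below, assuming the jiwer library, illustrates how such a score compares reference transcripts with model hypotheses.

```python
# Minimal sketch: compute word error rate (WER) between reference transcripts
# and model hypotheses. This only illustrates the metric; the scores in the UI
# come from the benchmark run itself.
from jiwer import wer

references = [
    "the quick brown fox jumps over the lazy dog",
    "speech recognition benchmarks report error rates",
]
hypotheses = [
    "the quick brown fox jumped over the lazy dog",
    "speech recognition benchmarks report error rates",
]

print(f"WER: {wer(references, hypotheses):.3f}")
```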

3.1 Detailed Results

Click on an individual run to view more in-depth metrics and performance details.

Result Details


Next Steps