# SUPERB Submission Template

Welcome to the [SUPERB Challenge](https://superbbenchmark.org/challenge-slt2022/challenge_overview)! SUPERB is a collection of benchmarking resources for evaluating the capability of a universal shared representation for speech processing. It comes with a benchmark on publicly available datasets and a challenge on a secret, unreleased hidden dataset. For the SUPERB Challenge, a challenging hidden dataset was newly recorded to evaluate the ultimate generalizability across various tasks and data.

You can participate in the challenge by simply submitting your self-supervised (SSL) pretrained models (model definition & pretrained weights), and we benchmark them on the hidden datasets. This repository contains useful tools to let you easily [submit](https://superbbenchmark.org/submit) your models ***privately*** for evaluation to [the challenge hidden-set leaderboard](https://superbbenchmark.org/leaderboard?track=constrained&subset=Hidden+Dev+Set). The workflow is:

1. Generate a submission template
2. Validate the format/interface correctness of your model
3. Upload to Huggingface's Hub (privately)
4. Submit the upload information to [SUPERB website](https://superbbenchmark.org/submit)

#### Note 1.

We accept pre-trained models in PyTorch by default. If you wish to submit upstreams in non-PyTorch frameworks, please email [[email protected]](mailto:[email protected])!

#### Note 2.

If it is not feasible for you to submit the pre-trained model, please email [[email protected]](mailto:[email protected]) so we can see how to help!

## Quickstart

### 1. Add model interfaces

#### forward

Extract features from waveforms.

- **Input:** A list of waveforms in 16000 Hz

    ```python
    SAMPLE_RATE = 16000
    BATCH_SIZE = 8
    EXAMPLE_SEC = 10
    wavs = [torch.randn(SAMPLE_RATE * EXAMPLE_SEC).cuda() for _ in range(BATCH_SIZE)]
    ```

- **Output:** A dictionary with a key "hidden_states" (for compatibility with older versions). The value is **a list** of padded sequences, all in the same shape of **(batch_size, max_sequence_length_of_batch, hidden_size)**, so that weighted-sum can work. You are welcome to perform task-specific or task-independent pre-/post-processing on the upstream's raw hidden states, including upsampling and downsampling. However, all the values must come from **a single upstream model**:

    ```python
    tasks = ["hidden_states", "PR", "SID", "ER", "ASR", "ASV", "SD", "QbE", "ST", "SS", "SE", "secret"]
    for task in tasks:
        # you can do task-specific pre-/post-processing depending on the arg "upstream_feature_selection"
        results = upstream(wavs, upstream_feature_selection=task)
        hidden_states = results["hidden_states"]
        assert isinstance(results, dict)
        assert isinstance(hidden_states, list)

        for state in hidden_states:
            assert isinstance(state, torch.Tensor)
            assert state.dim() == 3, "(batch_size, max_sequence_length_of_batch, hidden_size)"
            assert state.shape == hidden_states[0].shape
    ```
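To make the interface concrete, here is a minimal sketch of an upstream that passes the checks above. The single strided conv layer and the hidden size are placeholders for illustration, not part of the template:

```python
import torch
import torch.nn as nn

class UpstreamExpert(nn.Module):
    """Toy upstream: one 1-D conv with a 160-sample (10 ms) stride."""

    def __init__(self, ckpt=None, hidden_size=768):
        super().__init__()
        self.conv = nn.Conv1d(1, hidden_size, kernel_size=400, stride=160)

    def forward(self, wavs, upstream_feature_selection="hidden_states"):
        # Pad the list of 1-D waveforms into (batch_size, max_wav_len)
        padded = nn.utils.rnn.pad_sequence(wavs, batch_first=True)
        # (batch_size, hidden_size, seq_len) -> (batch_size, seq_len, hidden_size)
        features = self.conv(padded.unsqueeze(1)).transpose(1, 2)
        # Every tensor in the returned list must share the same shape
        return {"hidden_states": [features]}
```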

#### get_downsample_rates

Provide the downsample rate **relative to the 16000 Hz waveforms** for each task's representation. For the standard 10 ms-stride representation, the downsample rate is 160.

```python
SAMPLE_RATE = 16000
MSEC_PER_SEC = 1000
downsample_rate = SAMPLE_RATE * 10 // MSEC_PER_SEC  # 160
```

The downsample rate will be used to:

1. Calculate the valid representation length of each utterance in the output padded representation (see the sketch after this list).
2. Prepare the training materials according to the representation's downsample rate for frame-level tasks, e.g. SD, SE, and SS.
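For instance, the valid number of frames for each utterance can be recovered from its waveform length and the downsample rate. This is only a sketch; the exact rounding used by the benchmark may differ:

```python
import torch

wav_lengths = torch.tensor([160000, 128000, 96000])  # samples per utterance
downsample_rate = 160  # 10 ms stride

# Approximate number of valid frames per utterance in the padded output
valid_lengths = wav_lengths // downsample_rate
print(valid_lengths)  # tensor([1000, 800, 600])
```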

- **Input:** the task key (str)
- **Output:** the downsample rate (int) of the representation for that task

```python
for task in tasks:
    assert isinstance(task, str)
    downsample_rate = upstream.get_downsample_rates(task)
    assert isinstance(downsample_rate, int)
    print("The upstream's representation for {task}"
        f" has the downsample rate of {downsample_rate}.")
```
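Continuing the toy `UpstreamExpert` sketch above, this method could simply return the conv stride (again a placeholder, assuming a single representation shared by all tasks):

```python
import torch.nn as nn

class UpstreamExpert(nn.Module):  # continuing the toy sketch above
    def get_downsample_rates(self, key: str) -> int:
        # The toy conv uses a 160-sample (10 ms) stride for every task,
        # so the same rate is returned regardless of the task key.
        return 160
```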

### 2. Create an account and organization on the Hugging Face Hub

First, create an account on the Hugging Face Hub; you can sign up [here](https://huggingface.co/join) if you haven't already! Next, create a new organization and invite the SUPERB Hidden Set Committee to join. You will upload your model to a repository under this organization, so that members of the organization can access the model even though it is not publicly available.

* [superb-hidden-set](https://huggingface.co/superb-hidden-set)

### 3. Create a template repository on your machine

The next step is to create a template repository on your local machine that contains various files and a CLI to help you validate and submit your pretrained models. The Hugging Face Hub uses [Git Large File Storage (LFS)](https://git-lfs.github.com) to manage large files, so first install it if you don't have it already. For example, on macOS you can run:

```bash
brew install git-lfs
git lfs install
```

Next, run the following commands to create the repository. We recommend creating a Python virtual environment for the project, e.g. with Anaconda:

```bash
# Create and activate a virtual environment
conda create -n superb-submit python=3.8 && conda activate superb-submit
# Install the following libraries
pip install cookiecutter huggingface-hub==0.0.16
# Create the template repository
cookiecutter git+https://huggingface.co/superb/superb-submission
```

This will ask you to specify your Hugging Face Hub username, password, organisation, and the name of the repository:

```
hf_hub_username [<huggingface>]:
hf_hub_password [<password>]:
hf_hub_organisation [superb-submissions]:
repo_name [<my-superb-submissions>]:
```

This will trigger the following steps:

1. Create a private dataset repository on the Hugging Face Hub under `{hf_hub_organisation}/{repo_name}`
2. Clone the repository to your local machine
3. Add various template files, commit them locally to the repository, and push them to the Hub

The resulting repository should have the following structure:

```
my-superb-submission
β”œβ”€β”€ LICENSE
β”œβ”€β”€ README.md               <- The README with submission instructions
β”œβ”€β”€ cli.py                  <- The CLI for validating and submitting models
β”œβ”€β”€ requirements.txt        <- The packages required for the submission
β”œβ”€β”€ expert.py               <- Your model definition
└── model.pt                <- Your model weights
```

### 4. Install the dependencies

The final step is to install the project's dependencies:

```bash
# Navigate to the template repository
cd my-superb-submission
# Install dependencies
python -m pip install -r requirements.txt
```

That's it! You're now all set to start pretraining your speech models - see the instructions below on how to submit them to the Hub.


## Submitting to the leaderboard

To make a submission to the [leaderboard](https://superbbenchmark.org/leaderboard?subset=Hidden+Dev+Set), there are 4 main steps:

1. Modify `expert.py` and change `model.pt` so we can initialize an upstream model following the [challenge policy](https://superbbenchmark.org/challenge-slt2022/upstream) by:

    ```python
    upstream = UpstreamExpert(ckpt="./model.pt")
    ```

    ***Package Dependency:*** Note that following the steps above installs only the `torch` package. If your model needs more packages, modify `requirements.txt` to meet your needs and install them inside the current conda environment. We will install the packages you list in `requirements.txt` before initializing the upstream model.
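    For example, if your upstream depended on fairseq (a hypothetical dependency, shown only for illustration), `requirements.txt` might look like:

    ```
    torch
    fairseq==0.12.2
    ```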

2. Validate that the upstream model's interface meets the requirements in the [challenge policy](https://superbbenchmark.org/challenge-slt2022/upstream). If everything is correct, you should see the following message: "All submission files validated! Now you can make a submission."

    ```
    python cli.py validate
    ```

3. Push the model to the Hub! If there are no errors, you should see the following message: "Upload successful!"

    ```
    python cli.py upload "commit message: my best model"
    ```

4. [Make a submission at the SUPERB website](https://superbbenchmark.org/submit) by uniquely identifying this uploaded model with the following information, which can be shown by:

    ```
    python cli.py info
    ```

    - Organization Name
    - Repository Name
    - Commit Hash (full 40 characters)
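    Alternatively, the full commit hash can be read directly from git inside the repository:

    ```bash
    git rev-parse HEAD  # prints the full 40-character commit hash
    ```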

After you finish the above 4 steps, you will see a new entry on your [SUPERB profile page](https://superbbenchmark.org/profile) (login required) which does not have any benchmark numbers yet. Please wait for us to fine-tune it on the hidden dataset and obtain the benchmark results. The results will be revealed within one week. Please stay tuned!