---
base_model:
- facebook/w2v-bert-2.0
datasets:
- classla/ParlaSpeech-RS
- classla/ParlaSpeech-HR
- classla/Mici_Princ
language:
- sl
- hr
- sr
library_name: transformers
license: cc-by-sa-4.0
metrics:
- accuracy
pipeline_tag: audio-classification
---

# Model Card

This model annotates primary stress in words by classifying each 20 ms audio frame as stressed or unstressed.

## Model Details

### Model Description

- **Developed by:** [Peter Rupnik](https://huggingface.co/5roop), [Nikola Ljubešić](https://huggingface.co/nljubesi), [Ivan Porupski](https://huggingface.co/porupski)
- **Model type:** Audio frame classifier
- **Language(s):** Croatian, Slovenian, Serbian, Chakavian variant of Croatian
- **License:** Creative Commons Attribution-ShareAlike 4.0 (CC BY-SA 4.0)

- **Paper:** Please cite the following paper:

  ```bibtex
  @inproceedings{ljubesic2025identifying,
    title     = {Identifying Primary Stress Across Related Languages and Dialects with Transformer-based Speech Encoder Models},
    author    = {Ljubešić, Nikola and Porupski, Ivan and Rupnik, Peter},
    booktitle = {Proceedings of Interspeech 2025},
    year      = {2025},
    note      = {Accepted at Interspeech 2025}
  }
  ```

### Training data

The model was trained on the training split of the [ParlaStress-HR dataset](http://hdl.handle.net/11356/2038).

### Evaluation results

For evaluation, the test splits of the [ParlaStress-HR dataset](http://hdl.handle.net/11356/2038) were used.

| Test language                   | Accuracy (%) |
|---------------------------------|--------------|
| Croatian                        | 99.1         |
| Serbian                         | 99.3         |
| Chakavian (variant of Croatian) | 88.9         |
| Slovenian                       | 89.0         |
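
For intuition, here is a minimal, hypothetical scoring sketch in which a word counts as correct when the predicted stress interval overlaps the gold one; the exact protocol used in the paper may differ.

```python
# Hypothetical scoring sketch, not necessarily the paper's exact protocol:
# a word is counted as correct when the predicted primary-stress interval
# overlaps the manually annotated (gold) interval.
def overlaps(pred: tuple[float, float] | None, gold: tuple[float, float]) -> bool:
    return pred is not None and pred[0] < gold[1] and gold[0] < pred[1]


def word_accuracy(predicted: list, gold: list) -> float:
    correct = sum(overlaps(p, g) for p, g in zip(predicted, gold))
    return 100 * correct / len(gold)


# Example: two words, one correct prediction -> 50.0
print(word_accuracy([(0.34, 0.40), None], [(0.30, 0.42), (0.10, 0.20)]))
```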

### Direct Use

The model is intended for data-driven analyses of primary stress position. So far, it has been shown to work on 4 datasets in 3 languages.


## Example use

```python
from itertools import pairwise

import numpy as np
import pandas as pd
import torch
from datasets import Audio, Dataset
from transformers import AutoFeatureExtractor, Wav2Vec2BertForAudioFrameClassification

if torch.cuda.is_available():
    device = torch.device("cuda")
else:
    device = torch.device("cpu")

model_name = "classla/Wav2Vec2BertPrimaryStressAudioFrameClassifier"
feature_extractor = AutoFeatureExtractor.from_pretrained(model_name)
model = Wav2Vec2BertForAudioFrameClassification.from_pretrained(model_name).to(device)
# Path to the audio file containing the word to be annotated:
f = "wavs/word.wav"


def frames_to_intervals(frames: list[int]) -> list[tuple[float, float]] | None:
    """Convert per-frame labels (one label per 20 ms frame) into the
    (start_s, end_s) interval predicted as primary stress."""

    results = []
    ndf = pd.DataFrame(
        data={
            "time_s": [0.020 * i for i in range(len(frames))],
            "frames": frames,
        }
    )
    ndf = ndf.dropna()
    indices_of_change = ndf.frames.diff()[ndf.frames.diff() != 0].index.values
    # Include the sequence end so a stressed region reaching the final frame
    # is not dropped:
    indices_of_change = np.append(indices_of_change, len(ndf))
    for si, ei in pairwise(indices_of_change):
        if ndf.loc[si : ei - 1, "frames"].mode()[0] != 0:
            results.append(
                (round(ndf.loc[si, "time_s"], 3), round(ndf.loc[ei - 1, "time_s"], 3))
            )
    if not results:
        return None
    # Post-processing: if multiple stressed regions were found, keep only the longest:
    if len(results) > 1:
        results = sorted(results, key=lambda t: t[1]-t[0], reverse=True)
    return results[0:1]


def evaluator(chunks):
    """Run the model on a batch of audio examples and extract stress intervals."""
    sampling_rate = chunks["audio"][0]["sampling_rate"]
    with torch.no_grad():
        inputs = feature_extractor(
            [i["array"] for i in chunks["audio"]],
            return_tensors="pt",
            sampling_rate=sampling_rate,
        ).to(device)
        logits = model(**inputs).logits
    y_pred_raw = np.array(logits.cpu())
    y_pred = y_pred_raw.argmax(axis=-1)
    primary_stress = [frames_to_intervals(i) for i in y_pred]
    return {
        "y_pred": y_pred,
        "y_pred_logits": y_pred_raw,
        "primary_stress": primary_stress,
    }

# Create a dataset with a single instance and map our evaluator function on it:
ds = Dataset.from_dict({"audio": [f]}).cast_column("audio", Audio(16000, mono=True))
ds = ds.map(evaluator, batched=True, batch_size=1) # Adjust batch size according to your hardware specs
print(ds["y_pred"][0])
# Outputs: [0, 0, 1, 1, 1, 1, 1, ...]
print(ds["y_pred_logits"][0])
# Outputs:
# [[ 0.89419061, -0.77746612],
#  [ 0.44213724, -0.34862748],
#  [-0.08605709,  0.13012762],
# ....
print(ds["primary_stress"][0])
# Outputs: [[0.34, 0.4]]

```
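
The same `evaluator` can be mapped over many clips at once. A short usage sketch (the file paths below are placeholders):

```python
# Hypothetical batch run over several word-level clips (paths are placeholders):
files = ["wavs/word1.wav", "wavs/word2.wav", "wavs/word3.wav"]
ds = Dataset.from_dict({"audio": files}).cast_column("audio", Audio(16000, mono=True))
ds = ds.map(evaluator, batched=True, batch_size=2)
for path, interval in zip(files, ds["primary_stress"]):
    print(path, interval)
```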

## Training Details

### Training Data

The model was trained on 10,443 manually annotated multisyllabic words from [ParlaSpeech-HR](https://huggingface.co/datasets/classla/ParlaSpeech-HR).

### Training Procedure

#### Training Hyperparameters

- Learning rate: 1e-5
- Batch size: 32
- Number of epochs: 20
- Weight decay: 0.01
- Gradient accumulation steps: 1
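
For illustration, here is a minimal fine-tuning sketch with these hyperparameters using the `transformers` `Trainer`. Dataset preparation and frame-level label alignment are omitted, and `train_ds` is a placeholder, so this is a sketch rather than the exact training script:

```python
from transformers import (
    Trainer,
    TrainingArguments,
    Wav2Vec2BertForAudioFrameClassification,
)

# Sketch only: `train_ds` (not shown) must yield input_features plus one
# stressed/unstressed label per 20 ms frame.
model = Wav2Vec2BertForAudioFrameClassification.from_pretrained(
    "facebook/w2v-bert-2.0", num_labels=2
)
args = TrainingArguments(
    output_dir="stress_classifier",
    learning_rate=1e-5,
    per_device_train_batch_size=32,
    num_train_epochs=20,
    weight_decay=0.01,
    gradient_accumulation_steps=1,
)
Trainer(model=model, args=args, train_dataset=train_ds).train()
```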
