pipeline_tag (stringclasses, 48 values) | library_name (stringclasses, 205 values) | text (stringlengths, 0-18.3M) | metadata (stringlengths, 2-1.07B) | id (stringlengths, 5-122) | last_modified (null) | tags (listlengths, 1-1.84k) | sha (null) | created_at (stringlengths, 25)
---|---|---|---|---|---|---|---|---|
fill-mask | transformers | - **Release 1.1** (March 11, 2021)
- **Release 1.0** (January 13, 2021)
# NB-BERT-base
## Description
NB-BERT-base is a general BERT-base model built on the large digital collection at the National Library of Norway.
This model is based on the same architecture as the [BERT Cased multilingual model](https://github.com/google-research/bert/blob/master/multilingual.md), and is trained on a wide variety of Norwegian text (both bokmål and nynorsk) from the last 200 years.
## Intended use & limitations
The 1.1 version of the model is general and should be fine-tuned for any particular downstream use. Some fine-tuning datasets can be found on GitHub; see
* https://github.com/NBAiLab/notram
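For quick masked-language-model inference, the model can be loaded with the `fill-mask` pipeline. The snippet below is a minimal sketch using one of the example sentences from the widget on this card:
```python
from transformers import pipeline

# Load NB-BERT-base as a fill-mask pipeline
fill_mask = pipeline("fill-mask", model="NbAiLab/nb-bert-base")

# Predict the masked token in a Norwegian sentence
for prediction in fill_mask("På biblioteket kan du [MASK] en bok."):
    print(prediction["token_str"], round(prediction["score"], 3))
```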
## Training data
The model is trained on a wide variety of text. The training set is described on
* https://github.com/NBAiLab/notram
## More information
For more information on the model, see
https://github.com/NBAiLab/notram
| {"language": false, "license": "cc-by-4.0", "tags": ["norwegian", "bert"], "pipeline_tag": "fill-mask", "widget": [{"text": "P\u00e5 biblioteket kan du [MASK] en bok."}, {"text": "Dette er et [MASK] eksempel."}, {"text": "Av og til kan en spr\u00e5kmodell gi et [MASK] resultat."}, {"text": "Som ansat f\u00e5r du [MASK] for at bidrage til borgernes adgang til dansk kulturarv, til forskning og til samfundets demokratiske udvikling."}]} | NbAiLab/nb-bert-base | null | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"norwegian",
"fill-mask",
"no",
"license:cc-by-4.0",
"endpoints_compatible",
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
fill-mask | transformers |
- **Release 1.0beta** (April 29, 2021)
# NB-BERT-large (beta)
## Description
NB-BERT-large is a general BERT-large model built on the large digital collection at the National Library of Norway.
This model is trained from scratch on a wide variety of Norwegian text (both bokmål and nynorsk) from the last 200 years using a monolingual Norwegian vocabulary.
## Intended use & limitations
The 1.0 version of the model is general and should be fine-tuned for any particular downstream use. Some fine-tuning datasets can be found on GitHub; see
* https://github.com/NBAiLab/notram
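Since this is a masked language model, it can also be queried directly for `[MASK]` predictions. The snippet below is a minimal sketch using the example sentence from the widget on this card; the top-5 decoding step is only illustrative:
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("NbAiLab/nb-bert-large")
model = AutoModelForMaskedLM.from_pretrained("NbAiLab/nb-bert-large")

# Score candidates for the masked position in a Norwegian sentence
inputs = tokenizer("På biblioteket kan du låne en [MASK].", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Show the top-5 predictions for the [MASK] position
mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
top_ids = logits[0, mask_pos].topk(5).indices[0]
print(tokenizer.convert_ids_to_tokens(top_ids.tolist()))
```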
## Training data
The model is trained on a wide variety of text. The training set is described on
* https://github.com/NBAiLab/notram
## More information
For more information on the model, see
https://github.com/NBAiLab/notram | {"language": false, "license": "cc-by-4.0", "tags": ["norwegian", "bert"], "thumbnail": "nblogo_3.png", "pipeline_tag": "fill-mask", "widget": [{"text": "P\u00e5 biblioteket kan du l\u00e5ne en [MASK]."}]} | NbAiLab/nb-bert-large | null | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"norwegian",
"fill-mask",
"no",
"license:cc-by-4.0",
"endpoints_compatible",
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-generation | transformers |
- **Release ✨v1✨** (January 18th, 2023) *[Full-precision](https://huggingface.co/NbAiLab/nb-gpt-j-6B/tree/v1), [sharded](https://huggingface.co/NbAiLab/nb-gpt-j-6B/tree/v1-sharded), [half-precision](https://huggingface.co/NbAiLab/nb-gpt-j-6B/tree/v1-float16), and [mesh-transformers-jax](https://huggingface.co/NbAiLab/nb-gpt-j-6B/tree/v1-mesh) weights*
<details><summary>All checkpoints</summary>
- **Release v1beta5** (December 18th, 2022) *[Full-precision](https://huggingface.co/NbAiLab/nb-gpt-j-6B/tree/v1beta5), [sharded](https://huggingface.co/NbAiLab/nb-gpt-j-6B/tree/v1beta5-sharded), and [half-precision](https://huggingface.co/NbAiLab/nb-gpt-j-6B/tree/v1beta5-float16) weights*
- **Release v1beta4** (October 28th, 2022) *[Full-precision](https://huggingface.co/NbAiLab/nb-gpt-j-6B/tree/v1beta4), [sharded](https://huggingface.co/NbAiLab/nb-gpt-j-6B/tree/v1beta4-sharded), and [half-precision](https://huggingface.co/NbAiLab/nb-gpt-j-6B/tree/v1beta4-float16) weights*
- **Release v1beta3** (August 8th, 2022) *[Full-precision](https://huggingface.co/NbAiLab/nb-gpt-j-6B/tree/v1beta3), [sharded](https://huggingface.co/NbAiLab/nb-gpt-j-6B/tree/v1beta3-sharded), and [half-precision](https://huggingface.co/NbAiLab/nb-gpt-j-6B/tree/v1beta3-float16) weights*
- **Release v1beta2** (June 18th, 2022) *[Full-precision](https://huggingface.co/NbAiLab/nb-gpt-j-6B/tree/v1beta2), [sharded](https://huggingface.co/NbAiLab/nb-gpt-j-6B/tree/sharded), and [half-precision](https://huggingface.co/NbAiLab/nb-gpt-j-6B/tree/v1beta2-float16) weights*
- **Release v1beta1** (April 28th, 2022) *[Half-precision](https://huggingface.co/NbAiLab/nb-gpt-j-6B/tree/v1beta1-float16) weights*
</details>
# NB-GPT-J-6B
## Demo: https://ai.nb.no/demo/nb-gpt-j-6B/ (Be patient, it runs on CPU 😅)
## Model Description
NB-GPT-J-6B is a Norwegian finetuned version of GPT-J 6B, a transformer model trained using Ben Wang's [Mesh Transformer JAX](https://github.com/kingoflolz/mesh-transformer-jax/). "GPT-J" refers to the class of model, while "6B" represents the number of trainable parameters (6 billion parameters).
<figure>
| Hyperparameter | Value |
|----------------------|------------|
| \\(n_{parameters}\\) | 6053381344 |
| \\(n_{layers}\\) | 28* |
| \\(d_{model}\\) | 4096 |
| \\(d_{ff}\\) | 16384 |
| \\(n_{heads}\\) | 16 |
| \\(d_{head}\\) | 256 |
| \\(n_{ctx}\\) | 2048 |
| \\(n_{vocab}\\) | 50257/50400† (same tokenizer as GPT-2/3) |
| Positional Encoding | [Rotary Position Embedding (RoPE)](https://arxiv.org/abs/2104.09864) |
| RoPE Dimensions | [64](https://github.com/kingoflolz/mesh-transformer-jax/blob/f2aa66e0925de6593dcbb70e72399b97b4130482/mesh_transformer/layers.py#L223) |
<figcaption><p><strong>*</strong> Each layer consists of one feedforward block and one self attention block.</p>
<p><strong>†</strong> Although the embedding matrix has a size of 50400, only 50257 entries are used by the GPT-2 tokenizer.</p></figcaption></figure>
The model consists of 28 layers with a model dimension of 4096, and a feedforward dimension of 16384. The model
dimension is split into 16 heads, each with a dimension of 256. Rotary Position Embedding (RoPE) is applied to 64
dimensions of each head. The model is trained with a tokenization vocabulary of 50257, using the same set of BPEs as
GPT-2/GPT-3.
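The architecture values in the table above can be checked against the model configuration; a small sketch (assuming you have accepted the gated access terms for the repository):
```python
from transformers import AutoConfig

# Inspect the GPT-J configuration of NB-GPT-J-6B
config = AutoConfig.from_pretrained("NbAiLab/nb-gpt-j-6B")
print(config.n_layer, config.n_embd, config.n_head, config.rotary_dim, config.n_positions, config.vocab_size)
```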
## Training data
NB-GPT-J-6B was finetuned on [NCC](https://huggingface.co/datasets/NbAiLab/NCC), the Norwegian Colossal Corpus, plus other Internet sources like Wikipedia, mC4, and OSCAR.
## Training procedure
This model was finetuned for 130 billion tokens over 1,000,000 steps on a TPU v3-8 VM. It was trained as an autoregressive language model, using cross-entropy loss to maximize the likelihood of predicting the next token correctly.
## Intended Use and Limitations
NB-GPT-J-6B learns an inner representation of the Norwegian language that can be used to extract features useful for downstream tasks. However, the model is best at what it was pretrained for, which is generating text from a prompt.
### How to use
This model can be easily loaded using the `AutoModelForCausalLM` functionality:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("NbAiLab/nb-gpt-j-6B")
model = AutoModelForCausalLM.from_pretrained("NbAiLab/nb-gpt-j-6B")
```
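Once loaded, text can be generated with the standard `generate` API. The snippet below continues the code above and is only a sketch; the Norwegian prompt and the sampling parameters are illustrative, not tuned recommendations:
```python
# Generate a continuation of a Norwegian prompt (illustrative settings)
input_ids = tokenizer("Norge er et land som", return_tensors="pt").input_ids
output = model.generate(
    input_ids,
    max_new_tokens=50,
    do_sample=True,
    top_p=0.95,
    temperature=0.8,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```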
### Limitations and Biases
As with the original GPT-J model, the core functionality of NB-GPT-J-6B is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are still many unknowns in this work. When prompting NB-GPT-J-6B, it is important to remember that the statistically most likely next token is often not the token that produces the most "accurate" text. Never depend upon NB-GPT-J-6B to produce factually accurate output.
The original GPT-J was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending upon the use case, GPT-J may produce socially unacceptable text. See [Sections 5 and 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a more detailed analysis of the biases in the Pile. A fine-grained analysis of the bias contained in the corpus used for fine-tuning is still pending.
As with all language models, it is hard to predict in advance how NB-GPT-J-6B will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.
## Evaluation results
We still have to find proper datasets to evaluate the model, so help is welcome!
## Citation and Related Information
### BibTeX entry
To cite this model or the corpus used:
```bibtex
@inproceedings{kummervold2021operationalizing,
title={Operationalizing a National Digital Library: The Case for a Norwegian Transformer Model},
author={Kummervold, Per E and De la Rosa, Javier and Wetjen, Freddy and Brygfjeld, Svein Arne},
booktitle={Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa)},
pages={20--29},
year={2021},
url={https://aclanthology.org/2021.nodalida-main.3/}
}
```
If you use this model, we would love to hear about it! Reach out on Twitter, GitHub, Discord, or shoot us an email.
## Disclaimer
The models published in this repository are intended for a generalist purpose and are available to third parties. These models may contain bias and/or other undesirable distortions. When third parties deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models), or become users of the models themselves, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence. In no event shall the owner of the models (the National Library of Norway) be liable for any results arising from the use made by third parties of these models.
## Acknowledgements
This project would not have been possible without compute generously provided by Google through the
[TPU Research Cloud](https://sites.research.google/trc/), as well as the Cloud TPU team for providing early access to the [Cloud TPU VM](https://cloud.google.com/blog/products/compute/introducing-cloud-tpu-vms) Alpha. Especially to [Stella Biderman](https://www.stellabiderman.com) for her general openness, and [Ben Wang](https://github.com/kingoflolz/mesh-transformer-jax) for the main codebase. | {"language": ["no", "nb", "nn"], "license": "apache-2.0", "tags": ["pytorch", "causal-lm"], "datasets": ["NbAiLab/NCC", "mc4", "oscar"], "pipeline_tag": "text-generation", "extra_gated_prompt": "You agree to not use the model to conduct experiments that cause harm to human subjects.", "extra_gated_fields": {"Company": "text", "Country": "text", "Intended Use": "text"}} | NbAiLab/nb-gpt-j-6B | null | [
"transformers",
"pytorch",
"safetensors",
"gptj",
"text-generation",
"causal-lm",
"no",
"nb",
"nn",
"dataset:NbAiLab/NCC",
"dataset:mc4",
"dataset:oscar",
"arxiv:2104.09864",
"arxiv:2101.00027",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
fill-mask | transformers |
# This is just a Test Model. Do NOT use for anything!
Continued pretraining from nb-roberta-base.
The domain-specific pretraining is done on the 102GB [Scandinavian corpus](https://huggingface.co/datasets/NbAiLab/scandinavian).
## Train for 180k steps with a sequence length of 128:
```bash
./run_mlm_flax_stream.py \
--output_dir="./" \
--model_type="roberta" \
--config_name="./" \
--tokenizer_name="./" \
--model_name_or_path="./" \
--dataset_name="NbAiLab/scandinavian" \
--max_seq_length="128" \
--weight_decay="0.01" \
--per_device_train_batch_size="128" \
--per_device_eval_batch_size="128" \
--learning_rate="6e-5" \
--warmup_steps="5000" \
--overwrite_output_dir \
--cache_dir /mnt/disks/flaxdisk/cache/ \
--num_train_steps="180000" \
--adam_beta1="0.9" \
--adam_beta2="0.98" \
--logging_steps="10000" \
--save_steps="10000" \
--eval_steps="10000" \
--preprocessing_num_workers 96 \
--auth_token True \
--adafactor \
--push_to_hub
```
## Train for 20k steps with a sequence length of 512:
```bash
./run_mlm_flax_stream.py \
--output_dir="./" \
--model_type="roberta" \
--config_name="./" \
--tokenizer_name="./" \
--model_name_or_path="./" \
--dataset_name="NbAiLab/scandinavian" \
--max_seq_length="512" \
--weight_decay="0.01" \
--per_device_train_batch_size="48" \
--per_device_eval_batch_size="48" \
--learning_rate="3e-5" \
--warmup_steps="5000" \
--overwrite_output_dir \
--cache_dir /mnt/disks/flaxdisk/cache/ \
--num_train_steps="20000" \
--adam_beta1="0.9" \
--adam_beta2="0.98" \
--logging_steps="20000" \
--save_steps="10000" \
--eval_steps="10000" \
--preprocessing_num_workers 96 \
--auth_token True \
--adafactor \
--push_to_hub
```
Approximate additional training time: 1 week.
| {"language": false, "license": "cc-by-4.0", "tags": ["norwegian", "roberta"], "pipeline_tag": "fill-mask", "widget": [{"text": "P\u00e5 biblioteket kan du <mask> en bok."}, {"text": "Dette er et <mask> eksempel."}, {"text": "Av og til kan en spr\u00e5kmodell gi et <mask> resultat."}, {"text": "Som ansat f\u00e5r du <mask> for at bidrage til borgernes adgang til dansk kulturarv, til forskning og til samfundets demokratiske udvikling."}]} | NbAiLab/nb-roberta-base-scandinavian | null | [
"transformers",
"pytorch",
"jax",
"tensorboard",
"safetensors",
"roberta",
"fill-mask",
"norwegian",
"no",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text2text-generation | transformers | # 🇳🇴 Norwegian T5 Base Model Trained on the NCC 🇳🇴
This is a Norwegian T5-base model trained on the Norwegian Colossal Corpus (NCC) on a TPU v3-8.
This model is currently training. It will finish in January 2022. Please do not use it yet.
| {"language": false, "license": "cc-by-4.0", "tags": ["seq2seq"], "datasets": ["Norwegian Nynorsk/Bokm\u00e5l"]} | NbAiLab/nb-t5-base-v3 | null | [
"transformers",
"jax",
"tensorboard",
"t5",
"text2text-generation",
"seq2seq",
"no",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
automatic-speech-recognition | transformers |
# Norwegian Wav2Vec2 Model - 1B Bokmål
This model is finetuned on top of the [XLS-R](https://huggingface.co/facebook/wav2vec2-xls-r-1b) feature extractor from Facebook/Meta. The finetuned model achieves the following results on the test set with a 5-gram KenLM; the numbers in parentheses are the results without the language model:
- **WER: 0.0633** (0.0738)
- **CER: 0.0248** (0.0263)
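For quick transcription, the model can be loaded through the automatic-speech-recognition pipeline. The snippet below is a minimal sketch; `audio.mp3` is a placeholder for a local Norwegian recording:
```python
from transformers import pipeline

# Load the finetuned acoustic model as an ASR pipeline
asr = pipeline("automatic-speech-recognition", model="NbAiLab/nb-wav2vec2-1b-bokmaal")

# Transcribe a local audio file (placeholder path)
print(asr("audio.mp3")["text"])
```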
## Model description
This is one of several Wav2Vec-models our team created during the 🤗 hosted [Robust Speech Event](https://discuss.huggingface.co/t/open-to-the-community-robust-speech-recognition-challenge/13614?s=09). This is the complete list of our models and their final scores:
| Model | Final WER | |
|:--------------|:------------|:------------:|
| NbAiLab/nb-wav2vec2-1b-bokmaal (this model) | 6.33 | |
| [NbAiLab/nb-wav2vec2-300m-bokmaal](https://huggingface.co/NbAiLab/nb-wav2vec2-300m-bokmaal) | 7.03 | |
| [NbAiLab/nb-wav2vec2-1b-nynorsk](https://huggingface.co/NbAiLab/nb-wav2vec2-1b-nynorsk) | 11.32 | |
| [NbAiLab/nb-wav2vec2-300m-nynorsk](https://huggingface.co/NbAiLab/nb-wav2vec2-300m-nynorsk) | 12.22 | |
## Dataset
In parallel with the event, the team also converted the [Norwegian Parliamentary Speech Corpus (NPSC)](https://www.nb.no/sprakbanken/en/resource-catalogue/oai-nb-no-sbr-58/) to the [NbAiLab/NPSC](https://huggingface.co/datasets/NbAiLab/NPSC) in 🤗 Dataset format and used that as the main source for training.
## Code
We have released all the code developed during the event so that the Norwegian NLP community can build upon it when developing even better Norwegian ASR models. The finetuning of these models is not very computationally demanding. After following the instructions here, you should be able to train your own automatic speech recognition system in less than a day with an average GPU.
## Team
The following people contributed to building this model: Rolv-Arild Braaten, Per Egil Kummervold, Andre Kåsen, Javier de la Rosa, Per Erik Solberg, and Freddy Wetjen.
## Training procedure
To reproduce these results, we strongly recommend that you follow the [instructions from 🤗](https://github.com/huggingface/transformers/tree/master/examples/research_projects/robust-speech-event#talks) to train a simple Swedish model.
When you have verified that you are able to do this, create a fresh new repo. You can then start by copying the files ```run.sh``` and ```run_speech_recognition_ctc.py``` from our repo. Running these will create all the other necessary files, and should let you reproduce our results. With some tweaks to the hyperparameters, you might even be able to build an even better ASR. Good luck!
### Language Model
As the scores indicate, adding even a simple 5-gram language model will improve the results. 🤗 has provided another [very nice blog post](https://huggingface.co/blog/wav2vec2-with-ngram) explaining how to add a 5-gram language model to improve the ASR model. You can build this from your own corpus, for instance by extracting some suitable text from the [Norwegian Colossal Corpus](https://huggingface.co/datasets/NbAiLab/NCC). You can also skip some of the steps in the guide, and copy the [5-gram model from this repo](https://huggingface.co/NbAiLab/XLSR-300M-bokmaal/tree/main/language_model).
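A condensed sketch of the flow described in that blog post is shown below. It assumes you have installed `pyctcdecode` and already built a local `5gram.arpa` file with KenLM; details such as vocabulary casing depend on the tokenizer:
```python
from pyctcdecode import build_ctcdecoder
from transformers import Wav2Vec2Processor, Wav2Vec2ProcessorWithLM

# Tokenizer and feature extractor of the acoustic model
processor = Wav2Vec2Processor.from_pretrained("NbAiLab/nb-wav2vec2-1b-bokmaal")
vocab_dict = processor.tokenizer.get_vocab()
labels = [token for token, _ in sorted(vocab_dict.items(), key=lambda item: item[1])]

# CTC beam-search decoder backed by the 5-gram KenLM model
decoder = build_ctcdecoder(labels=labels, kenlm_model_path="5gram.arpa")

# Processor that decodes with the language model
processor_with_lm = Wav2Vec2ProcessorWithLM(
    feature_extractor=processor.feature_extractor,
    tokenizer=processor.tokenizer,
    decoder=decoder,
)
processor_with_lm.save_pretrained("nb-wav2vec2-1b-bokmaal-with-lm")
```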
### Parameters
The final model was run using these parameters:
```
--dataset_name="NbAiLab/NPSC"
--model_name_or_path="facebook/wav2vec2-xls-r-1b"
--dataset_config_name="16K_mp3_bokmaal"
--output_dir="./"
--overwrite_output_dir
--num_train_epochs="40"
--per_device_train_batch_size="12"
--per_device_eval_batch_size="12"
--gradient_accumulation_steps="2"
--learning_rate="2e-5"
--warmup_steps="2000"
--length_column_name="input_length"
--evaluation_strategy="steps"
--text_column_name="text"
--save_steps="500"
--eval_steps="500"
--logging_steps="100"
--layerdrop="0.041"
--attention_dropout="0.094"
--activation_dropout="0.055"
--hidden_dropout="0.047"
--save_total_limit="3"
--freeze_feature_encoder
--feat_proj_dropout="0.04"
--mask_time_prob="0.082"
--mask_time_length="10"
--mask_feature_prob="0.25"
--mask_feature_length="64"
--gradient_checkpointing
--min_duration_in_seconds="0.5"
--max_duration_in_seconds="30.0"
--ctc_zero_infinity=True
--use_auth_token
--seed="42"
--fp16
--group_by_length
--do_train --do_eval
--push_to_hub
--preprocessing_num_workers="16"
```
Using these settings, the training might take 3-4 days on an average GPU. You can, however, get a decent model and faster results by tweaking these parameters.
| Parameter| Comment |
|:-------------|:-----|
| per_device_train_batch_size | Adjust this to the maximum of available memory. 16 or 24 might be good settings depending on your system |
|gradient_accumulation_steps |Can be adjusted even further up to increase batch size and speed up training without running into memory issues |
| learning_rate|Can be increased, maybe as high as 1e-4. Speeds up training but might add instability |
| epochs| Can be decreased significantly. This is a huge dataset and you might get a decent result already after a couple of epochs|
## Citation
```bibtex
@inproceedings{de-la-rosa-etal-2023-boosting,
title = "Boosting {N}orwegian Automatic Speech Recognition",
author = "De La Rosa, Javier and
Braaten, Rolv-Arild and
Kummervold, Per and
Wetjen, Freddy",
booktitle = "Proceedings of the 24th Nordic Conference on Computational Linguistics (NoDaLiDa)",
month = may,
year = "2023",
address = "T{\'o}rshavn, Faroe Islands",
publisher = "University of Tartu Library",
url = "https://aclanthology.org/2023.nodalida-1.55",
pages = "555--564",
abstract = "In this paper, we present several baselines for automatic speech recognition (ASR) models for the two official written languages in Norway: Bokm{\aa}l and Nynorsk. We compare the performance of models of varying sizes and pre-training approaches on multiple Norwegian speech datasets. Additionally, we measure the performance of these models against previous state-of-the-art ASR models, as well as on out-of-domain datasets. We improve the state of the art on the Norwegian Parliamentary Speech Corpus (NPSC) from a word error rate (WER) of 17.10{\%} to 7.60{\%}, with models achieving 5.81{\%} for Bokm{\aa}l and 11.54{\%} for Nynorsk. We also discuss the challenges and potential solutions for further improving ASR models for Norwegian.",
}
```
See https://arxiv.org/abs/2307.01672
| {"language": ["nb", false], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "NbAiLab/NPSC", false, "nb", "nb-NO"], "datasets": ["NbAiLab/NPSC"], "model-index": [{"name": "nb-wav2vec2-1b-bokmaal", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "NPSC", "type": "NbAiLab/NPSC", "args": "16K_mp3_bokmaal"}, "metrics": [{"type": "wer", "value": 0.0633, "name": "Test (Bokm\u00e5l) WER"}, {"type": "cer", "value": 0.0248, "name": "Test (Bokm\u00e5l) CER"}]}]}]} | NbAiLab/nb-wav2vec2-1b-bokmaal | null | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"NbAiLab/NPSC",
"no",
"nb",
"nb-NO",
"dataset:NbAiLab/NPSC",
"arxiv:2307.01672",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
automatic-speech-recognition | transformers |
# Norwegian Wav2Vec2 Model - 300M - VoxRex - Bokmål
This model is finetuned on top of the [VoxRex](https://huggingface.co/KBLab/wav2vec2-large-voxrex) feature extractor from the National Library of Sweden. The finetuned model achieves the following results on the test set with a 5-gram KenLM; the numbers in parentheses are the results without the language model:
- **WER: 0.0703** (0.0979)
- **CER: 0.0269** (0.0311)
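Scores like these can be recomputed with the `evaluate` library (which uses `jiwer` for WER). The snippet below is a minimal sketch with placeholder transcripts; in practice the references come from the NPSC test split and the predictions from the model:
```python
import evaluate

# Load the WER and CER metrics
wer = evaluate.load("wer")
cer = evaluate.load("cer")

# Placeholder transcripts for illustration only
references = ["på biblioteket kan du låne en bok"]
predictions = ["på biblioteket kan du låne en bok"]

print("WER:", wer.compute(predictions=predictions, references=references))
print("CER:", cer.compute(predictions=predictions, references=references))
```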
## Model description
This is one of several Wav2Vec-models our team created during the 🤗 hosted [Robust Speech Event](https://discuss.huggingface.co/t/open-to-the-community-robust-speech-recognition-challenge/13614?s=09). This is the complete list of our models and their final scores:
| Model | Final WER | |
|:--------------|:------------|:------------:|
| [NbAiLab/nb-wav2vec2-1b-bokmaal](https://huggingface.co/NbAiLab/nb-wav2vec2-1b-bokmaal) | 6.33 | |
| NbAiLab/nb-wav2vec2-300m-bokmaal (this model) | 7.03 | |
| [NbAiLab/nb-wav2vec2-1b-nynorsk](https://huggingface.co/NbAiLab/nb-wav2vec2-1b-nynorsk) | 11.32 | |
| [NbAiLab/nb-wav2vec2-300m-nynorsk](https://huggingface.co/NbAiLab/nb-wav2vec2-300m-nynorsk) | 12.22 | |
## Dataset
In parallel with the event, the team also converted the [Norwegian Parliamentary Speech Corpus (NPSC)](https://www.nb.no/sprakbanken/en/resource-catalogue/oai-nb-no-sbr-58/) to the [NbAiLab/NPSC](https://huggingface.co/datasets/NbAiLab/NPSC) in 🤗 Dataset format and used that as the main source for training.
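The converted dataset can be loaded directly with 🤗 Datasets. The snippet below is a sketch that streams the Bokmål configuration used for this model:
```python
from datasets import load_dataset

# Stream the Bokmål configuration of NPSC to avoid downloading everything up front
npsc = load_dataset("NbAiLab/NPSC", "16K_mp3_bokmaal", split="train", streaming=True)

# Inspect the first example (audio plus its transcription)
sample = next(iter(npsc))
print(sample["text"])
```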
## Code
We have released all the code developed during the event so that the Norwegian NLP community can build upon it when developing even better Norwegian ASR models. The finetuning of these models is not very computationally demanding. After following the instructions here, you should be able to train your own automatic speech recognition system in less than a day with an average GPU.
## Team
The following people contributed to building this model: Rolv-Arild Braaten, Per Egil Kummervold, Andre Kåsen, Javier de la Rosa, Per Erik Solberg, and Freddy Wetjen.
## Training procedure
To reproduce these results, we strongly recommend that you follow the [instructions from 🤗](https://github.com/huggingface/transformers/tree/master/examples/research_projects/robust-speech-event#talks) to train a simple Swedish model.
When you have verified that you are able to do this, create a fresh new repo. You can then start by copying the files ```run.sh``` and ```run_speech_recognition_ctc.py``` from our repo. Running these will create all the other necessary files, and should let you reproduce our results. With some tweaks to the hyperparameters, you might even be able to build an even better ASR. Good luck!
### Language Model
As the scores indicate, adding even a simple 5-gram language model will improve the results. 🤗 has provided another [very nice blog post](https://huggingface.co/blog/wav2vec2-with-ngram) explaining how to add a 5-gram language model to improve the ASR model. You can build this from your own corpus, for instance by extracting some suitable text from the [Norwegian Colossal Corpus](https://huggingface.co/datasets/NbAiLab/NCC). You can also skip some of the steps in the guide, and copy the [5-gram model from this repo](https://huggingface.co/NbAiLab/XLSR-300M-bokmaal/tree/main/language_model).
### Parameters
The final model was run using these parameters:
```
--dataset_name="NbAiLab/NPSC"
--model_name_or_path="KBLab/wav2vec2-large-voxrex"
--dataset_config_name="16K_mp3_bokmaal"
--output_dir="./"
--overwrite_output_dir
--num_train_epochs="15"
--per_device_train_batch_size="16"
--per_device_eval_batch_size="16"
--gradient_accumulation_steps="2"
--learning_rate="1e-4"
--warmup_steps="2000"
--length_column_name="input_length"
--evaluation_strategy="steps"
--text_column_name="text"
--save_steps="500"
--eval_steps="500"
--logging_steps="100"
--layerdrop="0.041"
--attention_dropout="0.094"
--activation_dropout="0.055"
--hidden_dropout="0.047"
--save_total_limit="3"
--freeze_feature_encoder
--feat_proj_dropout="0.04"
--mask_time_prob="0.082"
--mask_time_length="10"
--mask_feature_prob="0.25"
--mask_feature_length="64"
--gradient_checkpointing
--min_duration_in_seconds="0.5"
--max_duration_in_seconds="30.0"
--use_auth_token
--seed="42"
--fp16
--group_by_length
--do_train --do_eval
--push_to_hub
--preprocessing_num_workers="32"
```
Using these settings, the training might take 3-4 days on an average GPU. You can, however, get a decent model and faster results by tweaking these parameters.
| Parameter| Comment |
|:-------------|:-----|
| per_device_train_batch_size | Adjust this to the maximum of available memory. 16 or 24 might be good settings depending on your system |
|gradient_accumulation_steps |Can be adjusted even further up to increase batch size and speed up training without running into memory issues |
| learning_rate|Can be increased, maybe as high as 1e-4. Speeds up training but might add instability |
| epochs| Can be decreased significantly. This is a huge dataset and you might get a decent result already after a couple of epochs|
## Citation
```bibtex
@inproceedings{de-la-rosa-etal-2023-boosting,
title = "Boosting {N}orwegian Automatic Speech Recognition",
author = "De La Rosa, Javier and
Braaten, Rolv-Arild and
Kummervold, Per and
Wetjen, Freddy",
booktitle = "Proceedings of the 24th Nordic Conference on Computational Linguistics (NoDaLiDa)",
month = may,
year = "2023",
address = "T{\'o}rshavn, Faroe Islands",
publisher = "University of Tartu Library",
url = "https://aclanthology.org/2023.nodalida-1.55",
pages = "555--564",
abstract = "In this paper, we present several baselines for automatic speech recognition (ASR) models for the two official written languages in Norway: Bokm{\aa}l and Nynorsk. We compare the performance of models of varying sizes and pre-training approaches on multiple Norwegian speech datasets. Additionally, we measure the performance of these models against previous state-of-the-art ASR models, as well as on out-of-domain datasets. We improve the state of the art on the Norwegian Parliamentary Speech Corpus (NPSC) from a word error rate (WER) of 17.10{\%} to 7.60{\%}, with models achieving 5.81{\%} for Bokm{\aa}l and 11.54{\%} for Nynorsk. We also discuss the challenges and potential solutions for further improving ASR models for Norwegian.",
}
```
See https://arxiv.org/abs/2307.01672
| {"language": [false, "nb"], "license": "apache-2.0", "tags": ["automatic-speech-recognition"], "datasets": ["NbAiLab/NPSC"], "model-index": [{"name": "nb-wav2vec2-300m-bokmaal", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "NPSC", "type": "NbAiLab/NPSC", "args": "16K_mp3_bokmaal"}, "metrics": [{"type": "wer", "value": 0.0703, "name": "Test (Bokm\u00e5l) WER"}, {"type": "cer", "value": 0.0269, "name": "Test (Bokm\u00e5l) CER"}]}]}]} | NbAiLab/nb-wav2vec2-300m-bokmaal | null | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"no",
"nb",
"dataset:NbAiLab/NPSC",
"arxiv:2307.01672",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
automatic-speech-recognition | transformers |
# Norwegian Wav2Vec2 Model - 300M - VoxRex - Nynorsk
This model is finetuned on top of the [VoxRex](https://huggingface.co/KBLab/wav2vec2-large-voxrex) feature extractor from the National Library of Sweden. The finetuned model achieves the following results on the test set with a 5-gram KenLM; the numbers in parentheses are the results without the language model:
- **WER: 0.1222** (0.1537)
- **CER: 0.0419** (0.0468)
## Model description
This is one of several Wav2Vec-models our team created during the 🤗 hosted [Robust Speech Event](https://discuss.huggingface.co/t/open-to-the-community-robust-speech-recognition-challenge/13614?s=09). This is the complete list of our models and their final scores:
| Model | Final WER | |
|:--------------|:------------|:------------:|
| [NbAiLab/nb-wav2vec2-1b-bokmaal](https://huggingface.co/NbAiLab/nb-wav2vec2-1b-bokmaal) | 6.33 | |
| [NbAiLab/nb-wav2vec2-300m-bokmaal](https://huggingface.co/NbAiLab/nb-wav2vec2-300m-bokmaal) | 7.03 | |
| [NbAiLab/nb-wav2vec2-1b-nynorsk](https://huggingface.co/NbAiLab/nb-wav2vec2-1b-nynorsk) | 11.32 | |
| NbAiLab/nb-wav2vec2-300m-nynorsk (this model) | 12.22 | |
### Dataset
In parallel with the event, the team also converted the [Norwegian Parliamentary Speech Corpus (NPSC)](https://www.nb.no/sprakbanken/en/resource-catalogue/oai-nb-no-sbr-58/) to the [NbAiLab/NPSC](https://huggingface.co/datasets/NbAiLab/NPSC) in 🤗 Dataset format and used that as the main source for training.
## Code
We have released all the code developed during the event so that the Norwegian NLP community can build upon it when developing even better Norwegian ASR models. The finetuning of these models is not very computationally demanding. After following the instructions here, you should be able to train your own automatic speech recognition system in less than a day with an average GPU.
## Team
The following people contributed to building this model: Rolv-Arild Braaten, Per Egil Kummervold, Andre Kåsen, Javier de la Rosa, Per Erik Solberg, and Freddy Wetjen.
## Training procedure
To reproduce these results, we strongly recommend that you follow the [instructions from 🤗](https://github.com/huggingface/transformers/tree/master/examples/research_projects/robust-speech-event#talks) to train a simple Swedish model.
When you have verified that you are able to do this, create a fresh new repo. You can then start by copying the files ```run.sh``` and ```run_speech_recognition_ctc.py``` from our repo. Running these will create all the other necessary files, and should let you reproduce our results. With some tweaks to the hyperparameters, you might even be able to build an even better ASR. Good luck!
### Language Model
As the scores indicate, adding even a simple 5-gram language model will improve the results. 🤗 has provided another [very nice blog post](https://huggingface.co/blog/wav2vec2-with-ngram) explaining how to add a 5-gram language model to improve the ASR model. You can build this from your own corpus, for instance by extracting some suitable text from the [Norwegian Colossal Corpus](https://huggingface.co/datasets/NbAiLab/NCC). You can also skip some of the steps in the guide, and copy the [5-gram model from this repo](https://huggingface.co/NbAiLab/XLSR-300M-bokmaal/tree/main/language_model).
### Parameters
The final model was run using these parameters:
```
--dataset_name="NbAiLab/NPSC"
--model_name_or_path="KBLab/wav2vec2-large-voxrex"
--dataset_config_name="16K_mp3_nynorsk"
--output_dir="./"
--overwrite_output_dir
--num_train_epochs="80"
--per_device_train_batch_size="16"
--per_device_eval_batch_size="16"
--gradient_accumulation_steps="2"
--learning_rate="1e-4"
--warmup_steps="2000"
--length_column_name="input_length"
--evaluation_strategy="steps"
--text_column_name="text"
--save_steps="500"
--eval_steps="500"
--logging_steps="100"
--layerdrop="0.041"
--attention_dropout="0.094"
--activation_dropout="0.055"
--hidden_dropout="0.047"
--save_total_limit="3"
--freeze_feature_encoder
--feat_proj_dropout="0.04"
--mask_time_prob="0.082"
--mask_time_length="10"
--mask_feature_prob="0.25"
--mask_feature_length="64"
--gradient_checkpointing
--min_duration_in_seconds="0.5"
--max_duration_in_seconds="30.0"
--use_auth_token
--seed="42"
--fp16
--group_by_length
--do_train --do_eval
--push_to_hub
--preprocessing_num_workers="32"
```
Using these settings, the training might take 3-4 days on an average GPU. You can, however, get a decent model and faster results by tweaking these parameters.
| Parameter| Comment |
|:-------------|:-----|
| per_device_train_batch_size | Adjust this to the maximum of available memory. 16 or 24 might be good settings depending on your system |
|gradient_accumulation_steps |Can be adjusted even further up to increase batch size and speed up training without running into memory issues |
| learning_rate|Can be increased, maybe as high as 1e-4. Speeds up training but might add instability |
| epochs| Can be decreased significantly. This is a huge dataset and you might get a decent result already after a couple of epochs|
## Citation
```bibtex
@inproceedings{de-la-rosa-etal-2023-boosting,
title = "Boosting {N}orwegian Automatic Speech Recognition",
author = "De La Rosa, Javier and
Braaten, Rolv-Arild and
Kummervold, Per and
Wetjen, Freddy",
booktitle = "Proceedings of the 24th Nordic Conference on Computational Linguistics (NoDaLiDa)",
month = may,
year = "2023",
address = "T{\'o}rshavn, Faroe Islands",
publisher = "University of Tartu Library",
url = "https://aclanthology.org/2023.nodalida-1.55",
pages = "555--564",
abstract = "In this paper, we present several baselines for automatic speech recognition (ASR) models for the two official written languages in Norway: Bokm{\aa}l and Nynorsk. We compare the performance of models of varying sizes and pre-training approaches on multiple Norwegian speech datasets. Additionally, we measure the performance of these models against previous state-of-the-art ASR models, as well as on out-of-domain datasets. We improve the state of the art on the Norwegian Parliamentary Speech Corpus (NPSC) from a word error rate (WER) of 17.10{\%} to 7.60{\%}, with models achieving 5.81{\%} for Bokm{\aa}l and 11.54{\%} for Nynorsk. We also discuss the challenges and potential solutions for further improving ASR models for Norwegian.",
}
```
See https://arxiv.org/abs/2307.01672
| {"language": ["nn"], "license": "apache-2.0", "tags": ["automatic-speech-recognition"], "datasets": ["NbAiLab/NPSC"], "model-index": [{"name": "nb-wav2vec2-300m-nynorsk", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "NPSC", "type": "NbAiLab/NPSC", "args": "16K_mp3_nynorsk"}, "metrics": [{"type": "wer", "value": 0.1222, "name": "Test (Nynorsk) WER"}, {"type": "cer", "value": 0.0419, "name": "Test (Nynorsk) CER"}]}]}]} | NbAiLab/nb-wav2vec2-300m-nynorsk | null | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"nn",
"dataset:NbAiLab/NPSC",
"arxiv:2307.01672",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
fill-mask | transformers |
## Results
|**Model** | **NoRec** | **NorNe-NB**| **NorNe-NN** | **NorDial** | **DaNe** | **Da-Angry-Tweets** |
|:-----------|------------:|------------:|------------:|------------:|------------:|------------:|
|roberta-base (English) | 51.77 | 79.01/79.53| 79.79/83.02 | 67.18| 75.44/78.07 | 55.51 |
|mBERT-cased | 63.91 | 83.72/86.12| 83.05/87.12 | 66.23| 80.00/81.43 | 57.67 |
|nb-bert-base | 75.60 |**91.98**/**92.95** |**90.93**/**94.06**|69.39| 81.95/84.83| 64.18|
|notram-bert-norwegian-cased | 72.47 | 91.77/93.12|89.79/93.70| **78.55**| **83.69**/**86.55**| **64.19** |
|notram-bert-norwegian-uncased | 73.47 | 89.28/91.61 |87.23/90.23 |74.21 | 80.29/82.31| 61.18|
|notram-bert-norwegian-cased-pod | **76.18** | 91.24/92.24| 90.88/93.21| 76.21| 81.82/84.99| 62.16 |
|nb-roberta-base | 68.77 |87.99/89.43 | 85.43/88.66| 76.34| 75.91/77.94| 61.50 |
|nb-roberta-base-scandinavian | 67.88 | 87.73/89.14| 87.39/90.92| 74.81| 76.22/78.66 | 63.37 |
|nb-roberta-base-v2-200k | 46.87 | 85.57/87.04| - | 64.99| - | - |
|test_long_w5 200k| 60.48 | 88.00/90.00 | 83.93/88.45 | 68.41 |75.22/78.50| 57.95 |
|test_long_w5_roberta_tokenizer 200k| 63.51| 86.28/87.77| 84.95/88.61 | 69.86 | 71.31/74.27 | 59.96 |
|test_long_w5_roberta_tokenizer 400k| 59.76 |87.39/89.06 | 85.16/89.01 | 71.46 | 72.39/75.65| 39.73 |
|test_long_w5_dataset 400k| 66.80 | 86.52/88.55 | 82.81/86.76 | 66.94 | 71.47/74.20| 55.25 |
|test_long_w5_dataset 600k| 67.37 | 89.98/90.95 | 84.53/88.37 | 66.84 | 75.14/76.50| 57.47 |
|roberta-jan-128_ncc - 400k - 128| 67.79 | 91.45/92.33 | 86.41/90.19 | 67.20 | 81.00/82.39| 59.65 |
|roberta-jan-128_ncc - 1000k - 128| 68.17 | 89.34/90.74 | 86.89/89.87 | 68.41 | 80.30/82.17| 61.63 | | {"language": false, "license": "cc-by-4.0", "tags": ["norwegian", "bert"], "pipeline_tag": "fill-mask", "widget": [{"text": "P\u00e5 biblioteket kan du [MASK] en bok."}, {"text": "Dette er et [MASK] eksempel."}, {"text": "Av og til kan en spr\u00e5kmodell gi et [MASK] resultat."}, {"text": "Som ansat f\u00e5r du [MASK] for at bidrage til borgernes adgang til dansk kulturarv, til forskning og til samfundets demokratiske udvikling."}]} | NbAiLab/notram-bert-norwegian-cased-080321 | null | [
"transformers",
"pytorch",
"tf",
"safetensors",
"bert",
"norwegian",
"fill-mask",
"no",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
null | transformers | {} | NbAiLab/notram-bert-norwegian-cased-pod-030421 | null | [
"transformers",
"pytorch",
"bert",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
fill-mask | transformers | Just for performing some experiments. Do not use.
| {} | NbAiLab/roberta_NCC_des_128 | null | [
"transformers",
"pytorch",
"jax",
"tensorboard",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
fill-mask | transformers | Just for performing some experiments. Do not use.
| {} | NbAiLab/roberta_NCC_des_128_decayfrom200 | null | [
"transformers",
"pytorch",
"jax",
"tensorboard",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
fill-mask | transformers | Just for performing some experiments. Do not use.
This needed to be restarted at 100k. I am getting memory errors at the end of the epoch. Not really sure why.
Step 2 is therefore on train_2__4. Static learning rate for a while. The first 100k ended at 0.59. This is decent so early. No point in running more epochs here though. Changing the corpus and continuing training.
| {} | NbAiLab/roberta_des_128 | null | [
"transformers",
"jax",
"tensorboard",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
fill-mask | transformers | Just for performing some experiments. Do not use.
Since the loss seemed to start going up, I had to restore this from 9e945cb0636bde60bec30bd7df5db30f80401cc7 (2 step 600k/200). I then restarted with warmup decaying from 1e-4.
That failed. I checked out c94b5bb43b05fc798f9db013d940b05b3b47cd98 instead and restarted step 3 from there.
| {} | NbAiLab/roberta_des_512 | null | [
"transformers",
"pytorch",
"jax",
"tensorboard",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
fill-mask | transformers | Just for performing some experiments. Do not use.
| {} | NbAiLab/roberta_des_512_4e4 | null | [
"transformers",
"jax",
"tensorboard",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
fill-mask | transformers | Just for performing some experiments. Do not use.
| {} | NbAiLab/roberta_des_512_6e4 | null | [
"transformers",
"jax",
"tensorboard",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
fill-mask | transformers | Just for performing some experiments. Do not use.
| {} | NbAiLab/roberta_des_ada_128 | null | [
"transformers",
"jax",
"tensorboard",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
fill-mask | transformers | Just for performing some experiments. Do not use.
| {} | NbAiLab/roberta_des_ada_128_6e4 | null | [
"transformers",
"jax",
"tensorboard",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
fill-mask | transformers | {} | NbAiLab/roberta_jan_128_ncc | null | [
"transformers",
"pytorch",
"jax",
"tensorboard",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
fill-mask | transformers | {"license": "cc-by-sa-4.0"} | NbAiLab/roberta_jan_128_scandinavian | null | [
"transformers",
"jax",
"tensorboard",
"roberta",
"fill-mask",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
fill-mask | transformers | {"license": "cc-by-sa-4.0"} | NbAiLab/roberta_jan_512_ncc | null | [
"transformers",
"jax",
"tensorboard",
"roberta",
"fill-mask",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
fill-mask | transformers | Just for performing some experiments. Do not use. | {} | NbAiLabArchive/test_NCC_OSCAR_16w_noada | null | [
"transformers",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
fill-mask | transformers | Just for performing some experiments. Do not use. | {} | NbAiLabArchive/test_NCC_OSCAR_style | null | [
"transformers",
"jax",
"tensorboard",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
fill-mask | transformers | Just for performing some experiments. Do not use. | {} | NbAiLabArchive/test_NCC_OSCAR_style_98w | null | [
"transformers",
"jax",
"tensorboard",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
fill-mask | transformers | Just for performing some experiments. Do not use. | {} | NbAiLabArchive/test_NCC_small_flax | null | [
"transformers",
"jax",
"tensorboard",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
fill-mask | transformers | Just for performing some experiments. Do not use. | {} | NbAiLabArchive/test_NCC_small_flax_stream | null | [
"transformers",
"jax",
"tensorboard",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
fill-mask | transformers | Just for performing some experiments. Do not use. | {} | NbAiLabArchive/test_NCC_small_flax_stream_100 | null | [
"transformers",
"jax",
"tensorboard",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
fill-mask | transformers | Just for performing some experiments. Do not use. | {} | NbAiLabArchive/test_NCC_small_pytorch | null | [
"transformers",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
fill-mask | transformers | Just for performing some experiments. Do not use. | {} | NbAiLabArchive/test_OSCAR_flax | null | [
"transformers",
"jax",
"tensorboard",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
fill-mask | transformers | Just for performing some experiments. Do not use. | {} | NbAiLabArchive/test_w4 | null | [
"transformers",
"jax",
"tensorboard",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
fill-mask | transformers | Just for performing some experiments. Do not use. | {} | NbAiLabArchive/test_w5 | null | [
"transformers",
"pytorch",
"jax",
"tensorboard",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
fill-mask | transformers | Just for performing some experiments. Do not use. | {} | NbAiLabArchive/test_w5_long | null | [
"transformers",
"pytorch",
"jax",
"tensorboard",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
fill-mask | transformers | Just for performing some experiments. Do not use. | {} | NbAiLabArchive/test_w5_long_dataset | null | [
"transformers",
"pytorch",
"jax",
"tensorboard",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
fill-mask | transformers | Just for performing some experiments. Do not use. | {} | NbAiLabArchive/test_w5_long_roberta_tokenizer | null | [
"transformers",
"pytorch",
"jax",
"tensorboard",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
fill-mask | transformers | Just for performing some experiments. Do not use. | {} | NbAiLabArchive/test_w5_long_roberta_tokenizer_adafactor | null | [
"transformers",
"jax",
"tensorboard",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
fill-mask | transformers | Just for performing some experiments. Do not use. | {} | NbAiLabArchive/test_w6 | null | [
"transformers",
"jax",
"tensorboard",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
fill-mask | transformers | Just for performing some experiments. Do not use. | {} | NbAiLabArchive/test_w7 | null | [
"transformers",
"jax",
"tensorboard",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
fill-mask | transformers | Just for performing some experiments. Do not use. | {} | NbAiLabArchive/test_w8 | null | [
"transformers",
"jax",
"tensorboard",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
null | null |
## Citation
```bibtex
@inproceedings{de-la-rosa-etal-2023-boosting,
title = "Boosting {N}orwegian Automatic Speech Recognition",
author = "De La Rosa, Javier and
Braaten, Rolv-Arild and
Kummervold, Per and
Wetjen, Freddy",
booktitle = "Proceedings of the 24th Nordic Conference on Computational Linguistics (NoDaLiDa)",
month = may,
year = "2023",
address = "T{\'o}rshavn, Faroe Islands",
publisher = "University of Tartu Library",
url = "https://aclanthology.org/2023.nodalida-1.55",
pages = "555--564",
abstract = "In this paper, we present several baselines for automatic speech recognition (ASR) models for the two official written languages in Norway: Bokm{\aa}l and Nynorsk. We compare the performance of models of varying sizes and pre-training approaches on multiple Norwegian speech datasets. Additionally, we measure the performance of these models against previous state-of-the-art ASR models, as well as on out-of-domain datasets. We improve the state of the art on the Norwegian Parliamentary Speech Corpus (NPSC) from a word error rate (WER) of 17.10{\%} to 7.60{\%}, with models achieving 5.81{\%} for Bokm{\aa}l and 11.54{\%} for Nynorsk. We also discuss the challenges and potential solutions for further improving ASR models for Norwegian.",
}
``` | {"license": "apache-2.0"} | NbAiLab/nb-wav2vec2-kenlm | null | [
"license:apache-2.0",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-voxrex-npsc-bokmaal
This model is a fine-tuned version of [KBLab/wav2vec2-large-voxrex](https://huggingface.co/KBLab/wav2vec2-large-voxrex) on the NbAiLab/NPSC (16K_mp3_bokmaal) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1311
- Wer: 0.1038
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8.379967082059723e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 0.1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2127 | 0.32 | 500 | 0.1335 | 0.1047 |
| 0.1976 | 0.64 | 1000 | 0.1309 | 0.1039 |
| 0.1887 | 0.97 | 1500 | 0.1306 | 0.1040 |
| 0.18 | 1.29 | 2000 | 0.1311 | 0.1038 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.4.dev0
- Tokenizers 0.11.0
| {"language": ["nb-NO"], "license": "apache-2.0", "tags": ["generated_from_trainer", "automatic-speech-recognition", "NbAiLab/NPSC", "robust-speech-event", false, "nb-NO", "hf-asr-leaderboard"], "datasets": ["NbAiLab/NPSC"], "model-index": [{"name": "wav2vec2-large-voxrex-npsc-bokmaal", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "NPSC", "type": "NbAiLab/NPSC", "args": "16K_mp3_bokmaal"}, "metrics": [{"type": "wer", "value": 0.07028972259374369, "name": "Test (Bokm\u00e5l) WER"}, {"type": "cer", "value": 0.026870600821650645, "name": "Test (Bokm\u00e5l) CER"}]}]}]} | NbAiLab/wav2vec2-large-voxrex-npsc-bokmaal | null | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:NbAiLab/NPSC",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-voxrex-npsc-nynorsk
This model is a fine-tuned version of [KBLab/wav2vec2-large-voxrex](https://huggingface.co/KBLab/wav2vec2-large-voxrex) on the NBAILAB/NPSC - 16K_MP3_NYNORSK dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4142
- Wer: 0.1576
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 40.0
- mixed_precision_training: Native AMP
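These settings correspond roughly to the following `TrainingArguments`; this is only a sketch with the values copied from the list above and all other arguments left at their defaults:
```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the configuration listed above
training_args = TrainingArguments(
    output_dir="wav2vec2-large-voxrex-npsc-nynorsk",
    learning_rate=7.5e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    gradient_accumulation_steps=2,
    lr_scheduler_type="linear",
    warmup_steps=2000,
    num_train_epochs=40.0,
    fp16=True,  # "Native AMP" mixed-precision training
)
```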
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.086 | 2.17 | 500 | 3.0773 | 1.0 |
| 2.8532 | 4.35 | 1000 | 2.8393 | 1.0 |
| 0.9738 | 6.52 | 1500 | 0.7283 | 0.4890 |
| 0.6763 | 8.7 | 2000 | 0.5340 | 0.3662 |
| 0.5303 | 10.87 | 2500 | 0.4521 | 0.3140 |
| 0.4765 | 13.04 | 3000 | 0.4181 | 0.2853 |
| 0.4219 | 15.22 | 3500 | 0.4156 | 0.2934 |
| 0.3564 | 17.39 | 4000 | 0.3925 | 0.2509 |
| 0.3282 | 19.57 | 4500 | 0.3824 | 0.2420 |
| 0.3118 | 21.74 | 5000 | 0.3636 | 0.2354 |
| 0.2919 | 23.91 | 5500 | 0.3615 | 0.2281 |
| 0.2961 | 26.09 | 6000 | 0.3548 | 0.2255 |
| 0.284 | 28.26 | 6500 | 0.3526 | 0.2209 |
| 0.2566 | 30.43 | 7000 | 0.3526 | 0.2205 |
| 0.2422 | 32.61 | 7500 | 0.3569 | 0.2173 |
| 0.2472 | 34.78 | 8000 | 0.3592 | 0.2166 |
| 0.2337 | 36.96 | 8500 | 0.3625 | 0.2172 |
| 0.2315 | 39.13 | 9000 | 0.3580 | 0.2155 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
| {"language": ["nn-NO"], "license": "apache-2.0", "tags": ["generated_from_trainer", "automatic-speech-recognition", "NbAiLab/NPSC", "robust-speech-event", "no", "nn-NO", "hf-asr-leaderboard"], "datasets": ["NbAiLab/NPSC"], "model-index": [{"name": "wav2vec2-large-voxrex-npsc-nynorsk", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "NPSC", "type": "NbAiLab/NPSC", "args": "16K_mp3_nynorsk"}, "metrics": [{"type": "wer", "value": 0.12220762155059132, "name": "Test (Nynorsk) WER"}, {"type": "cer", "value": 0.04195612578778549, "name": "Test (Nynorsk) CER"}]}]}]} | NbAiLab/wav2vec2-large-voxrex-npsc-nynorsk | null | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"NbAiLab/NPSC",
"robust-speech-event",
"no",
"nn-NO",
"hf-asr-leaderboard",
"dataset:NbAiLab/NPSC",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-voxrex-npsc
This model is a fine-tuned version of [KBLab/wav2vec2-large-voxrex](https://huggingface.co/KBLab/wav2vec2-large-voxrex) on the NBAILAB/NPSC - 16K_MP3 dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 15.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 2.9728 | 0.32 | 500 | 2.9449 | 1.0 |
| 2.5099 | 0.64 | 1000 | 1.8492 | 0.9910 |
| 0.7872 | 0.97 | 1500 | 0.4467 | 0.3774 |
| 0.5993 | 1.29 | 2000 | 0.3181 | 0.2819 |
| 0.5134 | 1.61 | 2500 | 0.2638 | 0.2401 |
| 0.4544 | 1.93 | 3000 | 0.2287 | 0.2091 |
| 0.4085 | 2.26 | 3500 | 0.2153 | 0.1918 |
| 0.3921 | 2.58 | 4000 | 0.2004 | 0.1804 |
| 0.4613 | 2.9 | 4500 | 0.1905 | 0.1732 |
| 0.3402 | 3.22 | 5000 | 0.1778 | 0.1659 |
| 0.3258 | 3.55 | 5500 | 0.1732 | 0.1571 |
| 0.3044 | 3.87 | 6000 | 0.1677 | 0.1497 |
| 0.2914 | 4.19 | 6500 | 0.1597 | 0.1420 |
| 0.278 | 4.51 | 7000 | 0.1574 | 0.1386 |
| 0.2858 | 4.84 | 7500 | 0.1552 | 0.1300 |
| 0.2585 | 5.16 | 8000 | 0.1523 | 0.1276 |
| 0.2827 | 5.48 | 8500 | 0.1448 | 0.1265 |
| 0.3365 | 5.8 | 9000 | 0.1411 | 0.1232 |
| 0.2488 | 6.13 | 9500 | 0.1456 | 0.1195 |
| 0.2406 | 6.45 | 10000 | 0.1414 | 0.1194 |
| 0.2488 | 6.77 | 10500 | 0.1393 | 0.1173 |
| 0.3084 | 7.09 | 11000 | 0.1379 | 0.1164 |
| 0.2365 | 7.41 | 11500 | 0.1387 | 0.1165 |
| 0.2217 | 7.74 | 12000 | 0.1381 | 0.1132 |
| 0.2381 | 8.06 | 12500 | 0.1360 | 0.1126 |
| 0.2329 | 8.38 | 13000 | 0.1357 | 0.1124 |
| 0.2103 | 8.7 | 13500 | 0.1335 | 0.1087 |
| 0.2366 | 9.03 | 14000 | 0.1388 | 0.1105 |
| 0.2289 | 9.35 | 14500 | 0.1383 | 0.1098 |
| 0.2486 | 9.67 | 15000 | 0.1386 | 0.1087 |
| **0.2772** | **9.99** | **15500** | **0.1598** | **0.1093** |
| 0.2728 | 10.32 | 16000 | 0.1814 | 0.1110 |
| 0.3437 | 10.64 | 16500 | 0.2505 | 0.1124 |
| 0.431 | 10.96 | 17000 | 0.2828 | 0.1143 |
| 0.3929 | 11.28 | 17500 | 0.2977 | 0.1149 |
| 0.4396 | 11.61 | 18000 | 0.3198 | 0.1170 |
| 0.59 | 11.93 | 18500 | 0.4158 | 0.1315 |
| 0.7813 | 12.25 | 19000 | 0.6123 | 0.2208 |
| 0.9345 | 12.57 | 19500 | 0.6815 | 0.2885 |
| 0.998 | 12.89 | 20000 | 0.7587 | 0.1991 |
| 1.0493 | 13.22 | 20500 | 0.7583 | 0.1996 |
| 1.438 | 13.54 | 21000 | nan | 1.0 |
| 0.0 | 13.86 | 21500 | nan | 1.0 |
| 0.0 | 14.18 | 22000 | nan | 1.0 |
| 0.0 | 14.51 | 22500 | nan | 1.0 |
| 0.0 | 14.83 | 23000 | nan | 1.0 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.3.dev0
- Tokenizers 0.11.0
| {"license": "cc0-1.0", "tags": ["automatic-speech-recognition", "NbAiLab/NPSC", "generated_from_trainer", "robust-speech-event"], "datasets": ["NbAiLab/NPSC"], "base_model": "KBLab/wav2vec2-large-voxrex", "model-index": [{"name": "wav2vec2-large-voxrex-npsc", "results": []}]} | NbAiLab/wav2vec2-large-voxrex-npsc | null | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"NbAiLab/NPSC",
"generated_from_trainer",
"robust-speech-event",
"dataset:NbAiLab/NPSC",
"base_model:KBLab/wav2vec2-large-voxrex",
"license:cc0-1.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
automatic-speech-recognition | transformers | {"language": ["nb-NO"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "NbAiLab/NPSC", "robust-speech-event", false, "nb-NO", "hf-asr-leaderboard"], "datasets": ["NbAiLab/NPSC"], "model-index": [{"name": "wav2vec2-xls-r-1b-npsc-bokmaal-low-27k", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "NPSC", "type": "NbAiLab/NPSC", "args": "16K_mp3_bokmaal"}, "metrics": [{"type": "wer", "value": 0.06332329423537675, "name": "Test (Bokm\u00e5l) WER"}, {"type": "cer", "value": 0.02480899861950731, "name": "Test (Bokm\u00e5l) CER"}]}]}]} | NbAiLab/wav2vec2-xls-r-1b-npsc-bokmaal-low-27k | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"dataset:NbAiLab/NPSC",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-1b-npsc
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the [NbAiLab/NPSC (16K_mp3_bokmaal)](https://huggingface.co/datasets/NbAiLab/NPSC/viewer/16K_mp3_bokmaal/train) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1598
- WER: 0.0966
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 15.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.8361 | 0.32 | 500 | 0.6304 | 0.4970 |
| 0.5703 | 0.64 | 1000 | 0.3195 | 0.2775 |
| 0.5451 | 0.97 | 1500 | 0.2700 | 0.2246 |
| 0.47 | 1.29 | 2000 | 0.2564 | 0.2329 |
| 0.4063 | 1.61 | 2500 | 0.2459 | 0.2099 |
| 0.374 | 1.93 | 3000 | 0.2175 | 0.1894 |
| 0.3297 | 2.26 | 3500 | 0.2036 | 0.1755 |
| 0.3145 | 2.58 | 4000 | 0.1957 | 0.1757 |
| 0.3989 | 2.9 | 4500 | 0.1923 | 0.1723 |
| 0.271 | 3.22 | 5000 | 0.1889 | 0.1649 |
| 0.2758 | 3.55 | 5500 | 0.1768 | 0.1588 |
| 0.2683 | 3.87 | 6000 | 0.1720 | 0.1534 |
| 0.2341 | 4.19 | 6500 | 0.1689 | 0.1471 |
| 0.2316 | 4.51 | 7000 | 0.1706 | 0.1405 |
| 0.2383 | 4.84 | 7500 | 0.1637 | 0.1426 |
| 0.2148 | 5.16 | 8000 | 0.1584 | 0.1347 |
| 0.2085 | 5.48 | 8500 | 0.1601 | 0.1387 |
| 0.2944 | 5.8 | 9000 | 0.1566 | 0.1294 |
| 0.1944 | 6.13 | 9500 | 0.1494 | 0.1271 |
| 0.1853 | 6.45 | 10000 | 0.1561 | 0.1247 |
| 0.235 | 6.77 | 10500 | 0.1461 | 0.1215 |
| 0.2286 | 7.09 | 11000 | 0.1447 | 0.1167 |
| 0.1781 | 7.41 | 11500 | 0.1502 | 0.1199 |
| 0.1714 | 7.74 | 12000 | 0.1425 | 0.1179 |
| 0.1725 | 8.06 | 12500 | 0.1427 | 0.1173 |
| 0.143 | 8.38 | 13000 | 0.1448 | 0.1142 |
| 0.154 | 8.7 | 13500 | 0.1392 | 0.1104 |
| 0.1447 | 9.03 | 14000 | 0.1404 | 0.1094 |
| 0.1471 | 9.35 | 14500 | 0.1404 | 0.1088 |
| 0.1479 | 9.67 | 15000 | 0.1414 | 0.1133 |
| 0.1607 | 9.99 | 15500 | 0.1458 | 0.1171 |
| 0.166 | 10.32 | 16000 | 0.1652 | 0.1264 |
| 0.188 | 10.64 | 16500 | 0.1713 | 0.1322 |
| 0.1461 | 10.96 | 17000 | 0.1423 | 0.1111 |
| 0.1289 | 11.28 | 17500 | 0.1388 | 0.1097 |
| 0.1273 | 11.61 | 18000 | 0.1438 | 0.1074 |
| 0.1317 | 11.93 | 18500 | 0.1312 | 0.1066 |
| 0.1448 | 12.25 | 19000 | 0.1446 | 0.1042 |
| 0.1424 | 12.57 | 19500 | 0.1386 | 0.1015 |
| 0.1392 | 12.89 | 20000 | 0.1379 | 0.1005 |
| 0.1408 | 13.22 | 20500 | 0.1408 | 0.0992 |
| 0.1239 | 13.54 | 21000 | 0.1338 | 0.0968 |
| 0.1244 | 13.86 | 21500 | 0.1335 | 0.0957 |
| 0.1254 | 14.18 | 22000 | 0.1382 | 0.0950 |
| 0.1597 | 14.51 | 22500 | 0.1544 | 0.0970 |
| 0.1566 | 14.83 | 23000 | 0.1589 | 0.0963 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.3.dev0
- Tokenizers 0.11.0
| {"language": ["nb-NO"], "license": "apache-2.0", "tags": ["generated_from_trainer", "automatic-speech-recognition", "NbAiLab/NPSC", "robust-speech-event", false, "nb-NO", "hf-asr-leaderboard"], "datasets": ["NbAiLab/NPSC"], "model-index": [{"name": "wav2vec2-xls-r-1b-npsc-bokmaal", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "NPSC", "type": "NbAiLab/NPSC", "args": "16K_mp3_bokmaal"}, "metrics": [{"type": "wer", "value": 0.07901700231893541, "name": "Test (Bokm\u00e5l) WER"}, {"type": "cer", "value": 0.029734583252347752, "name": "Test (Bokm\u00e5l) CER"}]}]}]} | NbAiLab/wav2vec2-xls-r-1b-npsc-bokmaal | null | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:NbAiLab/NPSC",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-npsc-bokmaal
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1663
- Wer: 0.0932
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 15.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.0969 | 0.32 | 500 | 0.1773 | 0.1054 |
| 0.0929 | 0.64 | 1000 | 0.1672 | 0.1061 |
| 0.1018 | 0.97 | 1500 | 0.1770 | 0.1067 |
| 0.0871 | 1.29 | 2000 | 0.1832 | 0.1087 |
| 0.0908 | 1.61 | 2500 | 0.1830 | 0.1101 |
| 0.0975 | 1.93 | 3000 | 0.1848 | 0.1100 |
| 0.0936 | 2.26 | 3500 | 0.1853 | 0.1113 |
| 0.1025 | 2.58 | 4000 | 0.1958 | 0.1149 |
| 0.0989 | 2.9 | 4500 | 0.1776 | 0.1123 |
| 0.0946 | 3.22 | 5000 | 0.1825 | 0.1097 |
| 0.0859 | 3.55 | 5500 | 0.1864 | 0.1072 |
| 0.0867 | 3.87 | 6000 | 0.1886 | 0.1081 |
| 0.0783 | 4.19 | 6500 | 0.1883 | 0.1063 |
| 0.0804 | 4.51 | 7000 | 0.1831 | 0.1063 |
| 0.0797 | 4.84 | 7500 | 0.1884 | 0.1058 |
| 0.0705 | 5.16 | 8000 | 0.1802 | 0.1057 |
| 0.0795 | 5.48 | 8500 | 0.1854 | 0.1038 |
| 0.0711 | 5.8 | 9000 | 0.1766 | 0.1032 |
| 0.0973 | 6.13 | 9500 | 0.1663 | 0.1014 |
| 0.087 | 6.45 | 10000 | 0.1664 | 0.1014 |
| 0.0962 | 6.77 | 10500 | 0.1631 | 0.1009 |
| 0.0857 | 7.09 | 11000 | 0.1659 | 0.1002 |
| 0.0882 | 7.41 | 11500 | 0.1668 | 0.1007 |
| 0.0784 | 7.74 | 12000 | 0.1688 | 0.0996 |
| 0.0838 | 8.06 | 12500 | 0.1675 | 0.0984 |
| 0.0863 | 8.38 | 13000 | 0.1639 | 0.0979 |
| 0.0763 | 8.7 | 13500 | 0.1638 | 0.0980 |
| 0.0822 | 9.03 | 14000 | 0.1709 | 0.0972 |
| 0.0769 | 9.35 | 14500 | 0.1700 | 0.0965 |
| 0.0838 | 9.67 | 15000 | 0.1703 | 0.0974 |
| 0.0799 | 9.99 | 15500 | 0.1667 | 0.0957 |
| 0.0712 | 10.32 | 16000 | 0.1754 | 0.0960 |
| 0.0737 | 10.64 | 16500 | 0.1725 | 0.0968 |
| 0.0851 | 10.96 | 17000 | 0.1733 | 0.0958 |
| 0.076 | 11.28 | 17500 | 0.1682 | 0.0954 |
| 0.0712 | 11.61 | 18000 | 0.1713 | 0.0943 |
| 0.0745 | 11.93 | 18500 | 0.1662 | 0.0951 |
| 0.0864 | 12.25 | 19000 | 0.1692 | 0.0947 |
| 0.0937 | 12.57 | 19500 | 0.1624 | 0.0943 |
| 0.0915 | 12.89 | 20000 | 0.1678 | 0.0942 |
| 0.0926 | 13.22 | 20500 | 0.1641 | 0.0945 |
| 0.0912 | 13.54 | 21000 | 0.1665 | 0.0937 |
| 0.0917 | 13.86 | 21500 | 0.1648 | 0.0936 |
| 0.094 | 14.18 | 22000 | 0.1635 | 0.0935 |
| 0.0864 | 14.51 | 22500 | 0.1678 | 0.0934 |
| 0.0899 | 14.83 | 23000 | 0.1663 | 0.0932 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.4.dev0
- Tokenizers 0.11.0
| {"language": ["nb-NO"], "license": "apache-2.0", "tags": ["generated_from_trainer", "automatic-speech-recognition", "NbAiLab/NPSC", "robust-speech-event", false, "nb-NO", "hf-asr-leaderboard"], "datasets": ["NbAiLab/NPSC"], "model-index": [{"name": "wav2vec2-xls-r-300m-npsc-bokmaal", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "NPSC", "type": "NbAiLab/NPSC", "args": "16K_mp3_bokmaal"}, "metrics": [{"type": "wer", "value": 0.07556265455560153, "name": "Test (Bokm\u00e5l) WER"}, {"type": "cer", "value": 0.028191288775481386, "name": "Test (Bokm\u00e5l) CER"}]}]}]} | NbAiLab/wav2vec2-xls-r-300m-npsc-bokmaal | null | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:NbAiLab/NPSC",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
null | null | {} | NbAiLab/wav2vec2-xlsr-1B-NPSC-NN-OH | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xlsr-1B-NPSC-NN
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the NBAILAB/NPSC - 16K_MP3 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4562
- Wer: 0.1531
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 1.6894 | 1.08 | 500 | 1.2423 | 0.8619 |
| 0.7543 | 2.15 | 1000 | 0.5956 | 0.3817 |
| 0.5481 | 3.23 | 1500 | 0.5043 | 0.3246 |
| 0.4661 | 4.3 | 2000 | 0.4813 | 0.2793 |
| 0.3901 | 5.38 | 2500 | 0.4371 | 0.2592 |
| 0.3512 | 6.45 | 3000 | 0.4216 | 0.2458 |
| 0.3016 | 7.53 | 3500 | 0.3814 | 0.2257 |
| 0.278 | 8.6 | 4000 | 0.4151 | 0.2145 |
| 0.2435 | 9.68 | 4500 | 0.4816 | 0.2130 |
| 0.2122 | 10.75 | 5000 | 0.4489 | 0.2137 |
| 0.1949 | 11.83 | 5500 | 0.3978 | 0.2063 |
| 0.1929 | 12.9 | 6000 | 0.3823 | 0.2026 |
| 0.1757 | 13.98 | 6500 | 0.3409 | 0.1965 |
| 0.1771 | 15.05 | 7000 | 0.3844 | 0.1936 |
| 0.1452 | 16.13 | 7500 | 0.3749 | 0.1900 |
| 0.1341 | 17.2 | 8000 | 0.4407 | 0.2026 |
| 0.13 | 18.28 | 8500 | 0.4253 | 0.1883 |
| 0.1183 | 19.35 | 9000 | 0.4311 | 0.1880 |
| 0.118 | 20.43 | 9500 | 0.4431 | 0.1882 |
| 0.1123 | 21.51 | 10000 | 0.4753 | 0.1820 |
| 0.1037 | 22.58 | 10500 | 0.4087 | 0.1834 |
| 0.1066 | 23.66 | 11000 | 0.4151 | 0.1845 |
| 0.0977 | 24.73 | 11500 | 0.4367 | 0.1783 |
| 0.0968 | 25.81 | 12000 | 0.4237 | 0.1756 |
| 0.0835 | 26.88 | 12500 | 0.4729 | 0.1781 |
| 0.0919 | 27.96 | 13000 | 0.4153 | 0.1701 |
| 0.0677 | 29.03 | 13500 | 0.4317 | 0.1693 |
| 0.0726 | 30.11 | 14000 | 0.4380 | 0.1736 |
| 0.066 | 31.18 | 14500 | 0.4384 | 0.1681 |
| 0.0713 | 32.26 | 15000 | 0.4215 | 0.1629 |
| 0.0605 | 33.33 | 15500 | 0.4574 | 0.1714 |
| 0.0632 | 34.41 | 16000 | 0.4343 | 0.1642 |
| 0.0567 | 35.48 | 16500 | 0.4231 | 0.1601 |
| 0.0556 | 36.56 | 17000 | 0.4404 | 0.1667 |
| 0.0426 | 37.63 | 17500 | 0.4459 | 0.1625 |
| 0.0445 | 38.71 | 18000 | 0.4484 | 0.1629 |
| 0.0463 | 39.78 | 18500 | 0.4508 | 0.1596 |
| 0.0448 | 40.86 | 19000 | 0.4395 | 0.1605 |
| 0.0434 | 41.94 | 19500 | 0.4490 | 0.1607 |
| 0.0347 | 43.01 | 20000 | 0.4772 | 0.1582 |
| 0.0332 | 44.09 | 20500 | 0.4729 | 0.1582 |
| 0.037 | 45.16 | 21000 | 0.4559 | 0.1573 |
| 0.0328 | 46.24 | 21500 | 0.4664 | 0.1560 |
| 0.0366 | 47.31 | 22000 | 0.4543 | 0.1543 |
| 0.0377 | 48.39 | 22500 | 0.4507 | 0.1560 |
| 0.0331 | 49.46 | 23000 | 0.4567 | 0.1533 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
| {"language": ["nn-NO"], "license": "apache-2.0", "tags": ["generated_from_trainer", "automatic-speech-recognition", "NbAiLab/NPSC", "robust-speech-event", false, "nn-NO", "hf-asr-leaderboard"], "datasets": ["NbAiLab/NPSC"], "model-index": [{"name": "wav2vec2-xlsr-1B-NPSC-NN", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "NPSC", "type": "NbAiLab/NPSC", "args": "16K_mp3_nynorsk"}, "metrics": [{"type": "wer", "value": 0.13347099680871036, "name": "Test (Nynorsk) WER"}, {"type": "cer", "value": 0.04537322093454329, "name": "Test (Nynorsk) CER"}]}]}]} | NbAiLab/wav2vec2-xlsr-1B-NPSC-NN | null | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:NbAiLab/NPSC",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
null | transformers | {} | NbAiLab/wav2vec2-xlsr-1B-NPSC-OH | null | [
"transformers",
"tensorboard",
"wav2vec2",
"pretraining",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | NbAiLab/wav2vec2-xlsr-1b-npsc | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
automatic-speech-recognition | transformers |
# XLS-R-300M-LM - Norwegian
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the Norwegian [NPSC](https://huggingface.co/datasets/NbAiLab/NPSC) dataset.
### Scores without Language Model
Without using a language model, it achieves the following scores on the NPSC eval set:
- WER: 0.2110
- CER: 0.0622
### Scores with Language Model
A 5-gram KenLM was added to boost the model's performance. The language model was built from a corpus consisting mainly of online newspapers, public reports and Wikipedia data. With the language model, the scores are:
- WER: 0.1540
- CER: 0.0548
## Team
The model was developed by Rolv-Arild Braaten, Per Egil Kummervold, Andre Kåsen, Javier de la Rosa, Per Erik Solberg, and Freddy Wetjen (names in alphabetical order).
## Model description
The current version is based on checkpoint 8500 of [NbAiLab/wav2vec2-xlsr-300M-NPSC-OH](https://huggingface.co/NbAiLab/wav2vec2-xlsr-300M-NPSC-OH).
## Intended uses & limitations
Demo version only. The model will be updated later this week.
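For a quick test, the model can be loaded with the standard `transformers` speech-recognition pipeline. The snippet below is a minimal sketch, not an official recipe: the audio path is a placeholder, the input is assumed to be 16 kHz mono speech, and decoding with the language model may additionally require `pyctcdecode` and `kenlm`.
```python
from transformers import pipeline

# Minimal inference sketch; "audio.wav" is a placeholder for a 16 kHz mono recording.
asr = pipeline("automatic-speech-recognition", model="NbAiLab/wav2vec2-xlsr-300M-NPSC-LM")

print(asr("audio.wav")["text"])
```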
## Training and evaluation data
The model is trained and evaluated on [NPSC](https://huggingface.co/datasets/NbAiLab/NPSC). Unfortunately, there is no Norwegian test data in Common Voice, so the model is currently evaluated only on the NPSC validation set.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 30.0 (interrupted after 8500 steps, approx. 6 epochs)
- mixed_precision_training: Native AMP
| {"language": ["nb-NO"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "generated_from_trainer", false, "nb-NO", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard"], "datasets": ["NbAiLab/NPSC"], "model-index": [{"name": "XLS-R-300M-LM - Norwegian", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "NPSC", "type": "NbAiLab/NPSC"}, "metrics": [{"type": "wer", "value": 15.4, "name": "Eval WER"}, {"type": "cer", "value": 5.48, "name": "Eval CER"}]}]}]} | NbAiLab/wav2vec2-xlsr-300M-NPSC-LM | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"dataset:NbAiLab/NPSC",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xlsr-300M-NPSC-OH
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the NBAILAB/NPSC - 16K_MP3 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1692
- Wer: 0.1663
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 13
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 15.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.1638 | 0.66 | 500 | 3.0686 | 1.0 |
| 2.9311 | 1.31 | 1000 | 2.9208 | 1.0 |
| 2.4175 | 1.97 | 1500 | 1.5009 | 0.9049 |
| 1.4442 | 2.63 | 2000 | 0.4426 | 0.3783 |
| 1.2624 | 3.28 | 2500 | 0.3193 | 0.2998 |
| 1.1889 | 3.94 | 3000 | 0.2867 | 0.2630 |
| 1.1315 | 4.6 | 3500 | 0.2566 | 0.2444 |
| 1.0864 | 5.26 | 4000 | 0.2368 | 0.2294 |
| 1.093 | 5.91 | 4500 | 0.2240 | 0.2151 |
| 1.0368 | 6.57 | 5000 | 0.2117 | 0.2056 |
| 1.0178 | 7.23 | 5500 | 0.2020 | 0.1954 |
| 1.0035 | 7.88 | 6000 | 0.2005 | 0.1924 |
| 0.9759 | 8.54 | 6500 | 0.1971 | 0.1863 |
| 0.9795 | 9.2 | 7000 | 0.1892 | 0.1812 |
| 0.9601 | 9.85 | 7500 | 0.1863 | 0.1795 |
| 0.9673 | 10.51 | 8000 | 0.1809 | 0.1761 |
| 0.9233 | 11.17 | 8500 | 0.1818 | 0.1755 |
| 0.9382 | 11.83 | 9000 | 0.1767 | 0.1741 |
| 0.9242 | 12.48 | 9500 | 0.1743 | 0.1703 |
| 0.9703 | 13.14 | 10000 | 0.1711 | 0.1711 |
| 0.9139 | 13.8 | 10500 | 0.1718 | 0.1672 |
| 0.9073 | 14.45 | 11000 | 0.1700 | 0.1665 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
| {"license": "apache-2.0", "tags": ["automatic-speech-recognition", "NbAiLab/NPSC", "generated_from_trainer"], "model-index": [{"name": "wav2vec2-xlsr-300M-NPSC-OH", "results": []}]} | NbAiLab/wav2vec2-xlsr-300M-NPSC-OH | null | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"NbAiLab/NPSC",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
automatic-speech-recognition | transformers | {"license": "cc"} | NbAiLab/wav2vec2-xlsr-300M-NPSC | null | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"license:cc",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | NbAiLab/wav2vec2-xlsr-300m-64e6 | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
automatic-speech-recognition | transformers | {"license": "cc"} | NbAiLab/wav2vec2-xlsr-300m-norwegian | null | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"license:cc",
"endpoints_compatible",
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
automatic-speech-recognition | transformers | {"license": "cc"} | NbAiLab/wav2vec2-xlsr-300m-norwegian2 | null | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"license:cc",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
automatic-speech-recognition | transformers | {"license": "cc"} | NbAiLab/wav2vec2-xlsr-300m-test | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"license:cc",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xls-npsc-oh
This model is a fine-tuned version of [KBLab/wav2vec2-large-voxrex](https://huggingface.co/KBLab/wav2vec2-large-voxrex) on the NBAILAB/NPSC - 48K_MP3 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2106
- Wer: 0.8586
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.1093 | 2.61 | 1000 | 0.2572 | 0.9293 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.18.1.dev0
- Tokenizers 0.11.0
| {"license": "cc0-1.0", "tags": ["automatic-speech-recognition", "NbAiLab/NPSC", "generated_from_trainer"], "datasets": ["npsc"], "model-index": [{"name": "xls-npsc-oh", "results": []}]} | NbAiLab/xls-npsc-oh | null | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"NbAiLab/NPSC",
"generated_from_trainer",
"dataset:npsc",
"license:cc0-1.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xls-npsc
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the NBAILAB/NPSC - 48K_MP3 dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5006
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.18.1.dev0
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["automatic-speech-recognition", "NbAiLab/NPSC", "generated_from_trainer"], "datasets": ["npsc"], "model-index": [{"name": "xls-npsc", "results": []}]} | NbAiLab/xls-npsc | null | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"NbAiLab/NPSC",
"generated_from_trainer",
"dataset:npsc",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
automatic-speech-recognition | transformers | {"license": "apache-2.0"} | NbAiLab/xls-r-1b-npsc | null | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
automatic-speech-recognition | transformers | {} | NbAiLab/xls-r-300m-sv2 | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text-generation | transformers | # Harry Potter | {"tags": ["conversational"]} | Necrozma/harrypotterbot | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
null | null | {} | NegativeSector/fakeNewsBot | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text-generation | transformers |
not for use...
technical data | {"language": ["ru"], "widget": [{"text": "\u0421\u043c\u0435\u0440\u0442\u0438 \u043d\u0435\u0442, "}]} | Nehc/adpatres | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"ru",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-generation | transformers |
Starting from sberbank-ai/rugpt3small_based_on_gpt2 and fine-tuned on Howard Phillips Lovecraft texts (Russian).
At the moment - only 1 epoch (perplexity is still falling).
Work in progress...
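A minimal generation sketch with the `transformers` pipeline (the prompt matches the card's widget example; the sampling settings are illustrative, not tuned values):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="Nehc/gpt2_lovecraft_ru")

# Prompt from the widget example; max_length/top_p are illustrative choices.
print(generator("Немыслимо, ", max_length=50, do_sample=True, top_p=0.95)[0]["generated_text"])
```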
| {"language": ["ru"], "metrics": [{"loss": 3.3}, {"perplexity": 25.7528}], "widget": [{"text": "\u041d\u0435\u043c\u044b\u0441\u043b\u0438\u043c\u043e, "}]} | Nehc/gpt2_lovecraft_ru | null | [
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"ru",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-generation | transformers |
Starting from sberbank-ai/rugpt3small_based_on_gpt2 and fine-tuned on Bible & preaching texts (Russian).
At the moment - only 1 epoch, 1650 sequence length.
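A minimal generation sketch with the plain `transformers` API (the prompt matches the card's widget example; the sampling settings are illustrative, not tuned values):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Nehc/gpt2_priest_ru")
model = AutoModelForCausalLM.from_pretrained("Nehc/gpt2_priest_ru")

# Prompt from the widget example; max_length/top_k are illustrative choices.
inputs = tokenizer("Бог, это ", return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(**inputs, max_length=60, do_sample=True, top_k=50)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```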
Work in progress... | {"language": ["ru"], "metrics": [{"loss": 3.3}, {"perplexity": 25.7528}], "widget": [{"text": "\u0411\u043e\u0433, \u044d\u0442\u043e "}]} | Nehc/gpt2_priest_ru | null | [
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"ru",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-generation | transformers | # Zhongli DialoGPT Model | {"tags": ["conversational"]} | Nekoism/Zhongli-Beta | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
null | null | {} | Nemo1215/RogerJefferson | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers | {} | Nenma/romanian-bert-fake-news | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Neok/Zhongli-Beta | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {"license": "apache-2.0"} | NeonBohdan/stt-polyglot-de | null | [
"tflite",
"license:apache-2.0",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {"license": "apache-2.0"} | NeonBohdan/stt-polyglot-en | null | [
"tflite",
"license:apache-2.0",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {"license": "apache-2.0"} | NeonBohdan/stt-polyglot-es | null | [
"tflite",
"license:apache-2.0",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {"license": "apache-2.0"} | NeonBohdan/stt-polyglot-fr | null | [
"tflite",
"license:apache-2.0",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {"license": "apache-2.0"} | NeonBohdan/stt-polyglot-it | null | [
"tflite",
"license:apache-2.0",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {"license": "apache-2.0"} | NeonBohdan/stt-polyglot-pl | null | [
"tflite",
"license:apache-2.0",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {"license": "apache-2.0"} | NeonBohdan/tts-tacotron2-ljspeech-pl | null | [
"license:apache-2.0",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Neptd/distilbert-base-uncased-finetuned-ner | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Nerdy/Bot | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Nerdy/ChatBot | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Nerdy/Loqui | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Neskalp/Neskalooo | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
image-classification | transformers |
# sea_mammals
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
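The resulting classifier can also be run locally through the `transformers` image-classification pipeline. A minimal sketch (the image path is a placeholder):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="Neto71/sea_mammals")

# "dolphin.jpg" is a placeholder path to a local image file.
print(classifier("dolphin.jpg"))
```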
## Example Images
#### blue whale

#### dolphin

#### orca whale
 | {"tags": ["image-classification", "pytorch", "huggingpics"], "metrics": ["accuracy"]} | Neto71/sea_mammals | null | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
null | null | {} | Network101/TestNLPModel | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
question-answering | transformers | # BERT-Small CORD-19 fine-tuned on SQuAD 2.0
[bert-small-cord19 model](https://huggingface.co/NeuML/bert-small-cord19) fine-tuned on SQuAD 2.0
## Building the model
```bash
python run_squad.py \
--model_type bert \
--model_name_or_path bert-small-cord19 \
--do_train \
--do_eval \
--do_lower_case \
--version_2_with_negative \
--train_file train-v2.0.json \
--predict_file dev-v2.0.json \
--per_gpu_train_batch_size 8 \
--learning_rate 3e-5 \
--num_train_epochs 3.0 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir bert-small-cord19-squad2 \
--save_steps 0 \
--threads 8 \
--overwrite_cache \
--overwrite_output_dir
```
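## Testing the model
Once built, the model can be loaded with the `question-answering` pipeline. A minimal sketch (the question and context strings are illustrative only):
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="NeuML/bert-small-cord19-squad2",
    tokenizer="NeuML/bert-small-cord19-squad2",
)

# Illustrative question/context pair, not part of the original card.
print(qa({
    "question": "What is the incubation period of the virus?",
    "context": "The incubation period is around 5 days (range: 4-7 days).",
}))
```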
| {} | NeuML/bert-small-cord19-squad2 | null | [
"transformers",
"pytorch",
"jax",
"safetensors",
"bert",
"question-answering",
"endpoints_compatible",
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
fill-mask | transformers | # BERT-Small fine-tuned on CORD-19 dataset
[BERT L6_H-512_A-8 model](https://huggingface.co/google/bert_uncased_L-6_H-512_A-8) fine-tuned on the [CORD-19 dataset](https://www.semanticscholar.org/cord19).
## CORD-19 data subset
The training data for this model is stored as a [Kaggle dataset](https://www.kaggle.com/davidmezzetti/cord19-qa?select=cord19.txt). It is a subset of the full corpus, focusing on high-quality articles with a detected study design.
## Building the model
```bash
python run_language_modeling.py \
--model_type bert \
--model_name_or_path google/bert_uncased_L-6_H-512_A-8 \
--do_train \
--mlm \
--line_by_line \
--block_size 512 \
--train_data_file cord19.txt \
--per_gpu_train_batch_size 4 \
--learning_rate 3e-5 \
--num_train_epochs 3.0 \
--output_dir bert-small-cord19 \
--save_steps 0 \
--overwrite_output_dir
```
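## Testing the model
The resulting model can be used directly for masked-token prediction. A minimal sketch (the example sentence is illustrative only):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="NeuML/bert-small-cord19")

# Standard BERT [MASK] token; the sentence is an illustrative example.
print(fill_mask("Patients were placed in [MASK] for 14 days."))
```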
| {} | NeuML/bert-small-cord19 | null | [
"transformers",
"pytorch",
"jax",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
question-answering | transformers | # BERT-Small fine-tuned on CORD-19 QA dataset
[bert-small-cord19-squad model](https://huggingface.co/NeuML/bert-small-cord19-squad2) fine-tuned on the [CORD-19 QA dataset](https://www.kaggle.com/davidmezzetti/cord19-qa?select=cord19-qa.json).
## CORD-19 QA dataset
The CORD-19 QA dataset is a SQuAD 2.0 formatted list of question, context, answer combinations covering the [CORD-19 dataset](https://www.semanticscholar.org/cord19).
## Building the model
```bash
python run_squad.py \
--model_type bert \
--model_name_or_path bert-small-cord19-squad \
--do_train \
--do_lower_case \
--version_2_with_negative \
--train_file cord19-qa.json \
--per_gpu_train_batch_size 8 \
--learning_rate 5e-5 \
--num_train_epochs 10.0 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir bert-small-cord19qa \
--save_steps 0 \
--threads 8 \
--overwrite_cache \
--overwrite_output_dir
```
## Testing the model
Example usage below:
```python
from transformers import pipeline
qa = pipeline(
"question-answering",
model="NeuML/bert-small-cord19qa",
tokenizer="NeuML/bert-small-cord19qa"
)
qa({
"question": "What is the median incubation period?",
"context": "The incubation period is around 5 days (range: 4-7 days) with a maximum of 12-13 day"
})
qa({
"question": "What is the incubation period range?",
"context": "The incubation period is around 5 days (range: 4-7 days) with a maximum of 12-13 day"
})
qa({
"question": "What type of surfaces does it persist?",
"context": "The virus can survive on surfaces for up to 72 hours such as plastic and stainless steel ."
})
```
```json
{"score": 0.5970273583242793, "start": 32, "end": 38, "answer": "5 days"}
{"score": 0.999555868193891, "start": 39, "end": 56, "answer": "(range: 4-7 days)"}
{"score": 0.9992726505196998, "start": 61, "end": 88, "answer": "plastic and stainless steel"}
```
| {} | NeuML/bert-small-cord19qa | null | [
"transformers",
"pytorch",
"jax",
"bert",
"question-answering",
"endpoints_compatible",
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text2text-generation | transformers |
# Model Trained Using AutoNLP
- Problem type: Summarization
- Model ID: 24135330
- CO2 Emissions (in grams): 155.8470724053265
## Validation Metrics
- Loss: 1.369327425956726
- Rouge1: 52.6656
- Rouge2: 30.5879
- RougeL: 40.1268
- RougeLsum: 47.4438
- Gen Len: 75.4625
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/Neuralearn/autonlp-Summarization-AutoNLP-24135330
``` | {"language": "unk", "tags": "autonlp", "datasets": ["Neuralearn/autonlp-data-Summarization-AutoNLP"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}], "co2_eq_emissions": 155.8470724053265} | Neuralearn/autonlp-Summarization-AutoNLP-24135330 | null | [
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"autonlp",
"unk",
"dataset:Neuralearn/autonlp-data-Summarization-AutoNLP",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
feature-extraction | transformers | {} | Nevena/test-model-1 | null | [
"transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
feature-extraction | transformers | {} | Nevena/test-model | null | [
"transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Nevo067/MariaTest1 | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text2text-generation | transformers |
# Test
Hf T5: -95.86687088012695
MTF T5: -67.8558578491211
| {"tags": ["t5-new-failed"]} | NewT5SharedHeadsSharedKeyValues/t5-efficient-base-sh | null | [
"transformers",
"t5",
"text2text-generation",
"t5-new-failed",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text2text-generation | transformers |
# Test
Hf T5:
MTF T5: -80.44100952148438
| {"tags": ["t5-new-hf-not-loaded"]} | NewT5SharedHeadsSharedKeyValues/t5-efficient-base-skv | null | [
"transformers",
"t5",
"text2text-generation",
"t5-new-hf-not-loaded",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text2text-generation | transformers |
# Test
Hf T5: -110.35000801086426
MTF T5: -57.58127975463867
| {"tags": ["t5-new-failed"]} | NewT5SharedHeadsSharedKeyValues/t5-efficient-large-sh | null | [
"transformers",
"t5",
"text2text-generation",
"t5-new-failed",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text2text-generation | transformers |
# Test
Hf T5:
MTF T5: -59.432472229003906
| {"tags": ["t5-new-hf-not-loaded"]} | NewT5SharedHeadsSharedKeyValues/t5-efficient-large-skv | null | [
"transformers",
"t5",
"text2text-generation",
"t5-new-hf-not-loaded",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text2text-generation | transformers |
# Test
Hf T5: -146.39734268188477
MTF T5: -72.12132263183594
| {"tags": ["t5-new-failed"]} | NewT5SharedHeadsSharedKeyValues/t5-efficient-small-sh | null | [
"transformers",
"t5",
"text2text-generation",
"t5-new-failed",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text2text-generation | transformers |
# Test
Hf T5:
MTF T5: -277.564697265625
| {"tags": ["t5-new-hf-not-loaded"]} | NewT5SharedHeadsSharedKeyValues/t5-efficient-small-shkv | null | [
"transformers",
"t5",
"text2text-generation",
"t5-new-hf-not-loaded",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text2text-generation | transformers |
# Test
Hf T5: -149.6728801727295
MTF T5: -74.4166259765625
| {"tags": ["t5-new-failed"]} | NewT5SharedHeadsSharedKeyValues/t5-efficient-tiny-sh | null | [
"transformers",
"t5",
"text2text-generation",
"t5-new-failed",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text2text-generation | transformers |
# Test
Hf T5:
MTF T5: -138.18275451660156
| {"tags": ["t5-new-hf-not-loaded"]} | NewT5SharedHeadsSharedKeyValues/t5-efficient-tiny-skv | null | [
"transformers",
"t5",
"text2text-generation",
"t5-new-hf-not-loaded",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text2text-generation | transformers |
# Test
Hf T5: -118.6875057220459
MTF T5: -76.85459899902344
| {"tags": ["t5-new-failed"]} | NewT5SharedHeadsSharedKeyValues/t5-efficient-xl-sh | null | [
"transformers",
"t5",
"text2text-generation",
"t5-new-failed",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text2text-generation | transformers |
# Test
Hf T5:
MTF T5: -66.05513000488281
| {"tags": ["t5-new-hf-not-loaded"]} | NewT5SharedHeadsSharedKeyValues/t5-efficient-xl-skv | null | [
"transformers",
"t5",
"text2text-generation",
"t5-new-hf-not-loaded",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |