pipeline_tag (string, 48 classes) | library_name (string, 205 classes) | text (string, 0-18.3M chars) | metadata (string, 2-1.07B chars) | id (string, 5-122 chars) | last_modified (null) | tags (list, 1-1.84k items) | sha (null) | created_at (string, 25 chars)
---|---|---|---|---|---|---|---|---
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentiment_trained
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2671
- F1: 0.7253
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged `TrainingArguments` sketch follows the list):
- learning_rate: 1.2140338797769864e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
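For reference, a hedged sketch of these settings expressed as `TrainingArguments` (the output directory is a placeholder; the Adam betas and epsilon listed above are the library defaults, so they are not set explicitly):
```python
from transformers import TrainingArguments

# Hedged sketch of the listed hyperparameters as TrainingArguments.
training_args = TrainingArguments(
    output_dir="sentiment_trained",  # placeholder
    learning_rate=1.2140338797769864e-05,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=0,
    lr_scheduler_type="linear",
    num_train_epochs=4,
)
```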
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.6647 | 1.0 | 11404 | 0.6424 | 0.7189 |
| 0.6018 | 2.0 | 22808 | 0.7947 | 0.7170 |
| 0.5004 | 3.0 | 34212 | 1.0811 | 0.7200 |
| 0.3761 | 4.0 | 45616 | 1.2671 | 0.7253 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
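Since the card does not yet include a usage example, here is a minimal hedged inference sketch (the example tweet is made up, and the auto-generated `LABEL_*` names may need to be mapped to the tweet_eval negative/neutral/positive classes manually):
```python
from transformers import pipeline

# Hedged sketch: classify a tweet with the fine-tuned checkpoint.
classifier = pipeline("text-classification", model="aXhyra/sentiment_trained")
print(classifier("I love this new phone, the battery lasts forever!"))
```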
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["tweet_eval"], "metrics": ["f1"], "model-index": [{"name": "sentiment_trained", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "tweet_eval", "type": "tweet_eval", "args": "sentiment"}, "metrics": [{"type": "f1", "value": 0.7253452834090693, "name": "F1"}]}]}]} | aXhyra/sentiment_trained | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:tweet_eval",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentiment_trained_1234567
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2854
- F1: 0.7165
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.2140338797769864e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 1234567
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.6603 | 1.0 | 11404 | 0.7020 | 0.6992 |
| 0.5978 | 2.0 | 22808 | 0.8024 | 0.7151 |
| 0.5495 | 3.0 | 34212 | 1.0837 | 0.7139 |
| 0.4026 | 4.0 | 45616 | 1.2854 | 0.7165 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["tweet_eval"], "metrics": ["f1"], "model-index": [{"name": "sentiment_trained_1234567", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "tweet_eval", "type": "tweet_eval", "args": "sentiment"}, "metrics": [{"type": "f1", "value": 0.7165064254565859, "name": "F1"}]}]}]} | aXhyra/sentiment_trained_1234567 | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:tweet_eval",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentiment_trained_31415
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2481
- F1: 0.7188
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.2140338797769864e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 31415
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.651 | 1.0 | 11404 | 0.6669 | 0.7141 |
| 0.6066 | 2.0 | 22808 | 0.8160 | 0.7198 |
| 0.503 | 3.0 | 34212 | 1.0659 | 0.7182 |
| 0.386 | 4.0 | 45616 | 1.2481 | 0.7188 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["tweet_eval"], "metrics": ["f1"], "model-index": [{"name": "sentiment_trained_31415", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "tweet_eval", "type": "tweet_eval", "args": "sentiment"}, "metrics": [{"type": "f1", "value": 0.7188262432133108, "name": "F1"}]}]}]} | aXhyra/sentiment_trained_31415 | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:tweet_eval",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentiment_trained_42
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3194
- F1: 0.7132
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.2140338797769864e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.6405 | 1.0 | 11404 | 0.6631 | 0.7046 |
| 0.5998 | 2.0 | 22808 | 0.8429 | 0.7102 |
| 0.5118 | 3.0 | 34212 | 1.0906 | 0.7155 |
| 0.3745 | 4.0 | 45616 | 1.3194 | 0.7132 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["tweet_eval"], "metrics": ["f1"], "model-index": [{"name": "sentiment_trained_42", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "tweet_eval", "type": "tweet_eval", "args": "sentiment"}, "metrics": [{"type": "f1", "value": 0.7131935389791447, "name": "F1"}]}]}]} | aXhyra/sentiment_trained_42 | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:tweet_eval",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers | {} | aXhyra/test-model | null | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test_emotion_trained_test
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5866
- F1: 0.7015
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.458132814624325e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 51 | 0.7877 | 0.5569 |
| No log | 2.0 | 102 | 0.6188 | 0.6937 |
| No log | 3.0 | 153 | 0.5969 | 0.7068 |
| No log | 4.0 | 204 | 0.5866 | 0.7015 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
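A hedged sketch of how the reported F1 could be recomputed on the tweet_eval `emotion` test split (the macro averaging and the `LABEL_*`-to-index mapping are assumptions, so the exact number may differ):
```python
from datasets import load_dataset, load_metric
from transformers import pipeline

# Hedged sketch: score the fine-tuned checkpoint on the tweet_eval emotion test split.
dataset = load_dataset("tweet_eval", "emotion", split="test")
classifier = pipeline("text-classification", model="aXhyra/test_emotion_trained_test")
# Map auto-generated labels such as "LABEL_2" back to integer class ids.
predictions = [int(out["label"].split("_")[-1]) for out in classifier(dataset["text"])]
f1 = load_metric("f1")
print(f1.compute(predictions=predictions, references=dataset["label"], average="macro"))
```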
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["tweet_eval"], "metrics": ["f1"], "model-index": [{"name": "test_emotion_trained_test", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "tweet_eval", "type": "tweet_eval", "args": "emotion"}, "metrics": [{"type": "f1", "value": 0.7014611518188594, "name": "F1"}]}]}]} | aXhyra/test_emotion_trained_test | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:tweet_eval",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test_hate_trained_test
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1807
- F1: 0.7692
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.257754679724796e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.4362 | 1.0 | 1125 | 0.5282 | 0.7369 |
| 0.3193 | 2.0 | 2250 | 0.6364 | 0.7571 |
| 0.1834 | 3.0 | 3375 | 1.0346 | 0.7625 |
| 0.0776 | 4.0 | 4500 | 1.1807 | 0.7692 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["tweet_eval"], "metrics": ["f1"], "model-index": [{"name": "test_hate_trained_test", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "tweet_eval", "type": "tweet_eval", "args": "hate"}, "metrics": [{"type": "f1", "value": 0.7691585677255204, "name": "F1"}]}]}]} | aXhyra/test_hate_trained_test | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:tweet_eval",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test_irony_trained_test
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7674
- F1: 0.6680
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 9.207906329883037e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 358 | 0.6655 | 0.5924 |
| 0.684 | 2.0 | 716 | 0.6889 | 0.6024 |
| 0.5826 | 3.0 | 1074 | 0.7085 | 0.6488 |
| 0.5826 | 4.0 | 1432 | 0.7674 | 0.6680 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["tweet_eval"], "metrics": ["f1"], "model-index": [{"name": "test_irony_trained_test", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "tweet_eval", "type": "tweet_eval", "args": "irony"}, "metrics": [{"type": "f1", "value": 0.6680395323922843, "name": "F1"}]}]}]} | aXhyra/test_irony_trained_test | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:tweet_eval",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | aXhyra/test_sentiment_trained_test | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | aaaa/aaaa | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-generation | transformers | Please visit the repo for training details. https://github.com/AADeLucia/gpt2-narrative-decoding | {} | aadelucia/GPT2_medium_narrative_finetuned_large | null | [
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-generation | transformers | Please visit the repo for training details. https://github.com/AADeLucia/gpt2-narrative-decoding | {} | aadelucia/GPT2_medium_narrative_finetuned_medium | null | [
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-generation | transformers | Please visit the repo for training details. https://github.com/AADeLucia/gpt2-narrative-decoding | {} | aadelucia/GPT2_small_narrative_finetuned_medium | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | aadeshgupta/distilbert-base-uncased-finetuned-ner | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-generation | transformers |
# Chandler friends DialoGPT Model | {"tags": ["conversational"]} | aadilhassan/Chandlerbot | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-generation | transformers | {} | aadilhassan/DialoGPT-small-chandler | null | [
"transformers",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | aakash123/ejej | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text2text-generation | transformers | {} | aakashD/t5_paraphrase | null | [
"transformers",
"pytorch",
"jax",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
automatic-speech-recognition | transformers |
# NOTE: this is an old model and should not be used anymore! Much better, newer models are available at our organization hub: [Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2](https://huggingface.co/Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2) and [Finnish-NLP/wav2vec2-xlsr-300m-finnish-lm](https://huggingface.co/Finnish-NLP/wav2vec2-xlsr-300m-finnish-lm)
# Wav2Vec2-Large-XLSR-53-Finnish
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Finnish using the [Common Voice](https://huggingface.co/datasets/common_voice), [CSS10 Finnish](https://www.kaggle.com/bryanpark/finnish-single-speaker-speech-dataset) and [Finnish parliament session 2](https://b2share.eudat.eu/records/4df422d631544ce682d6af1d4714b2d4) datasets.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import librosa
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "fi", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("aapot/wav2vec2-large-xlsr-53-finnish")
model = Wav2Vec2ForCTC.from_pretrained("aapot/wav2vec2-large-xlsr-53-finnish")
resampler = lambda sr, y: librosa.resample(y.numpy().squeeze(), sr, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(sampling_rate, speech_array).squeeze()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Finnish test data of Common Voice.
```python
import librosa
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "fi", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("aapot/wav2vec2-large-xlsr-53-finnish")
model = Wav2Vec2ForCTC.from_pretrained("aapot/wav2vec2-large-xlsr-53-finnish")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\'\...\…\–\é]'
resampler = lambda sr, y: librosa.resample(y.numpy().squeeze(), sr, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(sampling_rate, speech_array).squeeze()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 32.378771 %
## Training
The Common Voice `train`, `validation` and `other` splits were used for training, together with the `CSS10 Finnish` and `Finnish parliament session 2` datasets.
The script used for training can be found from [Google Colab](https://colab.research.google.com/drive/1vnEGC9BnNRmVyIHj-0UsVulh_cUYSGWA?usp=sharing) | {"language": "fi", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "metrics": ["wer"], "model-index": [{"name": "XLSR Wav2Vec2 Finnish by Aapo Tanskanen", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice fi", "type": "common_voice", "args": "fi"}, "metrics": [{"type": "wer", "value": 32.378771, "name": "Test WER"}]}]}]} | aapot/wav2vec2-large-xlsr-53-finnish | null | [
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"fi",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
automatic-speech-recognition | transformers |
# Wav2Vec2 XLS-R for Finnish ASR
This acoustic model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) for Finnish ASR. The model has been fine-tuned with 275.6 hours of Finnish transcribed speech data. Wav2Vec2 XLS-R was introduced in
[this paper](https://arxiv.org/abs/2111.09296) and first released at [this page](https://github.com/pytorch/fairseq/tree/main/examples/wav2vec#wav2vec-20).
This repository also includes the Finnish KenLM language model used in the decoding phase together with the acoustic model.
**Note**: this model is exactly the same as the [Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2](https://huggingface.co/Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2) model so this model has just been copied/moved to the `Finnish-NLP` Hugging Face organization.
## Model description
Wav2Vec2 XLS-R is Facebook AI's large-scale multilingual pretrained model for speech. It is pretrained on 436k hours of unlabeled speech, including VoxPopuli, MLS, CommonVoice, BABEL, and VoxLingua107. It uses the wav2vec 2.0 objective, in 128 languages.
You can read more about the pretrained model from [this blog](https://ai.facebook.com/blog/xls-r-self-supervised-speech-processing-for-128-languages) and [this paper](https://arxiv.org/abs/2111.09296).
This model is a fine-tuned version of the pretrained model (the 1 billion parameter variant) for Finnish ASR.
## Intended uses & limitations
You can use this model for the Finnish ASR (speech-to-text) task.
### How to use
Check the [run-finnish-asr-models.ipynb](https://huggingface.co/aapot/wav2vec2-xlsr-1b-finnish-lm-v2/blob/main/run-finnish-asr-models.ipynb) notebook in this repository for a detailed example of how to use this model.
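In addition to the notebook, a minimal hedged sketch of inference through the ASR pipeline (the bundled KenLM decoder is picked up automatically when `pyctcdecode` and `kenlm` are installed; `audio.wav` is a placeholder for your own 16 kHz Finnish audio file):
```python
from transformers import pipeline

# Hedged sketch: transcribe a Finnish audio file with the LM-boosted model.
asr = pipeline("automatic-speech-recognition", model="aapot/wav2vec2-xlsr-1b-finnish-lm-v2")
print(asr("audio.wav")["text"])
```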
### Limitations and bias
This model was fine-tuned with audio samples whose maximum length was 20 seconds, so it most likely works best for fairly short audio clips of similar length. You can still try it on much longer audio and see how it works. If you encounter out-of-memory errors with very long audio files, you can use the audio chunking method introduced in [this blog post](https://huggingface.co/blog/asr-chunking).
The vast majority of the fine-tuning data came from the Finnish Parliament dataset, so this model may not generalize well to very different domains such as everyday spoken Finnish with dialects. In addition, the audio in the datasets is dominated by adult male speakers, so the model may not work as well for the speech of children and women, for example.
The Finnish KenLM language model used in the decoding phase has been trained on text from the audio transcriptions and on a subset of Finnish Wikipedia. The decoder's language model may therefore not generalize to very different language, for example everyday spoken language with dialects (especially because Wikipedia contains mostly formal Finnish). It may be beneficial to train your own KenLM language model for your domain and use it in the decoding.
## Training data
This model was fine-tuned with 275.6 hours of Finnish transcribed speech data from following datasets:
| Dataset | Hours | % of total hours |
|:------------------------------------------------------------------------------------------------------------------------------ |:--------:|:----------------:|
| [Common Voice 7.0 Finnish train + evaluation + other splits](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0) | 9.70 h | 3.52 % |
| [Finnish parliament session 2](https://b2share.eudat.eu/records/4df422d631544ce682d6af1d4714b2d4) | 0.24 h | 0.09 % |
| [VoxPopuli Finnish](https://github.com/facebookresearch/voxpopuli) | 21.97 h | 7.97 % |
| [CSS10 Finnish](https://github.com/kyubyong/css10) | 10.32 h | 3.74 % |
| [Aalto Finnish Parliament ASR Corpus](http://urn.fi/urn:nbn:fi:lb-2021051903) | 228.00 h | 82.73 % |
| [Finnish Broadcast Corpus](http://urn.fi/urn:nbn:fi:lb-2016042502) | 5.37 h | 1.95 % |
Datasets were filtered to include only audio samples with a maximum length of 20 seconds.
## Training procedure
This model was trained during [Robust Speech Challenge Event](https://discuss.huggingface.co/t/open-to-the-community-robust-speech-recognition-challenge/13614) organized by Hugging Face. Training was done on a Tesla V100 GPU, sponsored by OVHcloud.
The training script was provided by Hugging Face and is available [here](https://github.com/huggingface/transformers/blob/main/examples/research_projects/robust-speech-event/run_speech_recognition_ctc_bnb.py); we only modified its data loading for our custom datasets.
For the KenLM language model training, we followed the [blog post tutorial](https://huggingface.co/blog/wav2vec2-with-ngram) provided by Hugging Face. Training data for the 5-gram KenLM were text transcriptions of the audio training data and 100k random samples of cleaned [Finnish Wikipedia](https://huggingface.co/datasets/wikipedia) (August 2021) dataset.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: [8-bit Adam](https://github.com/facebookresearch/bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
The pretrained `facebook/wav2vec2-xls-r-1b` model was initialized with the following hyperparameters (sketched as a `from_pretrained` call after this list):
- attention_dropout: 0.094
- hidden_dropout: 0.047
- feat_proj_dropout: 0.04
- mask_time_prob: 0.082
- layerdrop: 0.041
- activation_dropout: 0.055
- ctc_loss_reduction: "mean"
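A hedged sketch of how these initialization settings could be passed when loading the pretrained checkpoint (the vocabulary size and CTC token handling of the actual training script are omitted):
```python
from transformers import Wav2Vec2ForCTC

# Hedged sketch: apply the listed regularization settings at load time.
model = Wav2Vec2ForCTC.from_pretrained(
    "facebook/wav2vec2-xls-r-1b",
    attention_dropout=0.094,
    hidden_dropout=0.047,
    feat_proj_dropout=0.04,
    mask_time_prob=0.082,
    layerdrop=0.041,
    activation_dropout=0.055,
    ctc_loss_reduction="mean",
)
```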
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.7778 | 0.17 | 500 | 0.2851 | 0.3572 |
| 0.5506 | 0.34 | 1000 | 0.1595 | 0.2130 |
| 0.6569 | 0.5 | 1500 | 0.1458 | 0.2046 |
| 0.5997 | 0.67 | 2000 | 0.1374 | 0.1975 |
| 0.542 | 0.84 | 2500 | 0.1390 | 0.1956 |
| 0.4815 | 1.01 | 3000 | 0.1266 | 0.1813 |
| 0.6982 | 1.17 | 3500 | 0.1441 | 0.1965 |
| 0.4522 | 1.34 | 4000 | 0.1232 | 0.1822 |
| 0.4655 | 1.51 | 4500 | 0.1209 | 0.1702 |
| 0.4069 | 1.68 | 5000 | 0.1149 | 0.1688 |
| 0.4226 | 1.84 | 5500 | 0.1121 | 0.1560 |
| 0.3993 | 2.01 | 6000 | 0.1091 | 0.1557 |
| 0.406 | 2.18 | 6500 | 0.1115 | 0.1553 |
| 0.4098 | 2.35 | 7000 | 0.1144 | 0.1560 |
| 0.3995 | 2.51 | 7500 | 0.1028 | 0.1476 |
| 0.4101 | 2.68 | 8000 | 0.1129 | 0.1511 |
| 0.3636 | 2.85 | 8500 | 0.1025 | 0.1517 |
| 0.3534 | 3.02 | 9000 | 0.1068 | 0.1480 |
| 0.3836 | 3.18 | 9500 | 0.1072 | 0.1459 |
| 0.3531 | 3.35 | 10000 | 0.0928 | 0.1367 |
| 0.3649 | 3.52 | 10500 | 0.1042 | 0.1426 |
| 0.3645 | 3.69 | 11000 | 0.0979 | 0.1433 |
| 0.3685 | 3.85 | 11500 | 0.0947 | 0.1346 |
| 0.3325 | 4.02 | 12000 | 0.0991 | 0.1352 |
| 0.3497 | 4.19 | 12500 | 0.0919 | 0.1358 |
| 0.3303 | 4.36 | 13000 | 0.0888 | 0.1272 |
| 0.3323 | 4.52 | 13500 | 0.0888 | 0.1277 |
| 0.3452 | 4.69 | 14000 | 0.0894 | 0.1279 |
| 0.337 | 4.86 | 14500 | 0.0917 | 0.1289 |
| 0.3114 | 5.03 | 15000 | 0.0942 | 0.1313 |
| 0.3099 | 5.19 | 15500 | 0.0902 | 0.1239 |
| 0.3079 | 5.36 | 16000 | 0.0871 | 0.1256 |
| 0.3293 | 5.53 | 16500 | 0.0861 | 0.1263 |
| 0.3123 | 5.7 | 17000 | 0.0876 | 0.1203 |
| 0.3093 | 5.86 | 17500 | 0.0848 | 0.1226 |
| 0.2903 | 6.03 | 18000 | 0.0914 | 0.1221 |
| 0.297 | 6.2 | 18500 | 0.0841 | 0.1185 |
| 0.2797 | 6.37 | 19000 | 0.0858 | 0.1165 |
| 0.2878 | 6.53 | 19500 | 0.0874 | 0.1161 |
| 0.2974 | 6.7 | 20000 | 0.0835 | 0.1173 |
| 0.3051 | 6.87 | 20500 | 0.0835 | 0.1178 |
| 0.2941 | 7.04 | 21000 | 0.0852 | 0.1155 |
| 0.258 | 7.21 | 21500 | 0.0832 | 0.1132 |
| 0.2778 | 7.37 | 22000 | 0.0829 | 0.1110 |
| 0.2751 | 7.54 | 22500 | 0.0822 | 0.1069 |
| 0.2887 | 7.71 | 23000 | 0.0819 | 0.1103 |
| 0.2509 | 7.88 | 23500 | 0.0787 | 0.1055 |
| 0.2501 | 8.04 | 24000 | 0.0807 | 0.1076 |
| 0.2399 | 8.21 | 24500 | 0.0784 | 0.1052 |
| 0.2539 | 8.38 | 25000 | 0.0772 | 0.1075 |
| 0.248 | 8.55 | 25500 | 0.0772 | 0.1055 |
| 0.2689 | 8.71 | 26000 | 0.0763 | 0.1027 |
| 0.2855 | 8.88 | 26500 | 0.0756 | 0.1035 |
| 0.2421 | 9.05 | 27000 | 0.0771 | 0.0998 |
| 0.2497 | 9.22 | 27500 | 0.0756 | 0.0971 |
| 0.2367 | 9.38 | 28000 | 0.0741 | 0.0974 |
| 0.2473 | 9.55 | 28500 | 0.0739 | 0.0982 |
| 0.2396 | 9.72 | 29000 | 0.0756 | 0.0991 |
| 0.2602 | 9.89 | 29500 | 0.0737 | 0.0975 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
## Evaluation results
Evaluation was done with the [Common Voice 7.0 Finnish test split](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
To evaluate this model, run the `eval.py` script in this repository:
```bash
python3 eval.py --model_id aapot/wav2vec2-xlsr-1b-finnish-lm-v2 --dataset mozilla-foundation/common_voice_7_0 --config fi --split test
```
This model (the first row of the table) achieves the following WER (Word Error Rate) and CER (Character Error Rate) results compared to our other models:
| | WER (with LM) | WER (without LM) | CER (with LM) | CER (without LM) |
|-----------------------------------------|---------------|------------------|---------------|------------------|
|aapot/wav2vec2-xlsr-1b-finnish-lm-v2 |**4.09** |**9.73** |**0.88** |**1.65** |
|aapot/wav2vec2-xlsr-1b-finnish-lm |5.65 |13.11 |1.20 |2.23 |
|aapot/wav2vec2-xlsr-300m-finnish-lm |8.16 |17.92 |1.97 |3.36 |
## Team Members
- Aapo Tanskanen, [Hugging Face profile](https://huggingface.co/aapot), [LinkedIn profile](https://www.linkedin.com/in/aapotanskanen/)
- Rasmus Toivanen, [Hugging Face profile](https://huggingface.co/RASMUS), [LinkedIn profile](https://www.linkedin.com/in/rasmustoivanen/)
Feel free to contact us for more details 🤗 | {"language": "fi", "license": "apache-2.0", "tags": ["automatic-speech-recognition", "fi", "finnish", "generated_from_trainer", "hf-asr-leaderboard", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_7_0"], "metrics": ["wer", "cer"], "model-index": [{"name": "wav2vec2-xlsr-1b-finnish-lm-v2", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 7", "type": "mozilla-foundation/common_voice_7_0", "args": "fi"}, "metrics": [{"type": "wer", "value": 4.09, "name": "Test WER"}, {"type": "cer", "value": 0.88, "name": "Test CER"}]}]}]} | aapot/wav2vec2-xlsr-1b-finnish-lm-v2 | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fi",
"finnish",
"generated_from_trainer",
"hf-asr-leaderboard",
"robust-speech-event",
"dataset:mozilla-foundation/common_voice_7_0",
"arxiv:2111.09296",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
automatic-speech-recognition | transformers |
# Wav2Vec2 XLS-R for Finnish ASR
This acoustic model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) for Finnish ASR. The model has been fine-tuned with 259.57 hours of Finnish transcribed speech data. Wav2Vec2 XLS-R was introduced in
[this paper](https://arxiv.org/abs/2111.09296) and first released at [this page](https://github.com/pytorch/fairseq/tree/main/examples/wav2vec#wav2vec-20).
This repository also includes the Finnish KenLM language model used in the decoding phase together with the acoustic model.
**Note**: this model is exactly the same as the [Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm](https://huggingface.co/Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm) model so this model has just been copied/moved to the `Finnish-NLP` Hugging Face organization.
**Note**: there is a better V2 version of this model, which has been fine-tuned for longer with 16 additional hours of data: [Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2](https://huggingface.co/Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2)
## Model description
Wav2Vec2 XLS-R is Facebook AI's large-scale multilingual pretrained model for speech. It is pretrained on 436k hours of unlabeled speech, including VoxPopuli, MLS, CommonVoice, BABEL, and VoxLingua107. It uses the wav2vec 2.0 objective, in 128 languages.
You can read more about the pretrained model from [this blog](https://ai.facebook.com/blog/xls-r-self-supervised-speech-processing-for-128-languages) and [this paper](https://arxiv.org/abs/2111.09296).
This model is a fine-tuned version of the pretrained model (the 1 billion parameter variant) for Finnish ASR.
## Intended uses & limitations
You can use this model for the Finnish ASR (speech-to-text) task.
### How to use
Check the [run-finnish-asr-models.ipynb](https://huggingface.co/aapot/wav2vec2-xlsr-1b-finnish-lm/blob/main/run-finnish-asr-models.ipynb) notebook in this repository for a detailed example of how to use this model.
### Limitations and bias
This model was fine-tuned with audio samples whose maximum length was 20 seconds, so it most likely works best for fairly short audio clips of similar length. You can still try it on much longer audio and see how it works. If you encounter out-of-memory errors with very long audio files, you can use the audio chunking method introduced in [this blog post](https://huggingface.co/blog/asr-chunking).
The vast majority of the fine-tuning data came from the Finnish Parliament dataset, so this model may not generalize well to very different domains such as everyday spoken Finnish with dialects. In addition, the audio in the datasets is dominated by adult male speakers, so the model may not work as well for the speech of children and women, for example.
The Finnish KenLM language model used in the decoding phase has been trained on text from the audio transcriptions. The decoder's language model may therefore not generalize to very different language, for example everyday spoken language with dialects. It may be beneficial to train your own KenLM language model for your domain and use it in the decoding.
## Training data
This model was fine-tuned with 259.57 hours of Finnish transcribed speech data from following datasets:
| Dataset | Hours | % of total hours |
|:----------------------------------------------------------------------------------------------------------------------------------|:--------:|:----------------:|
| [Common Voice 7.0 Finnish train + evaluation + other splits](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0) | 9.70 h | 3.74 % |
| [Finnish parliament session 2](https://b2share.eudat.eu/records/4df422d631544ce682d6af1d4714b2d4) | 0.24 h | 0.09 % |
| [VoxPopuli Finnish](https://github.com/facebookresearch/voxpopuli) | 5.94 h | 2.29 % |
| [CSS10 Finnish](https://github.com/kyubyong/css10) | 10.32 h | 3.98 % |
| [Aalto Finnish Parliament ASR Corpus](http://urn.fi/urn:nbn:fi:lb-2021051903) | 228.00 h | 87.84 % |
| [Finnish Broadcast Corpus](http://urn.fi/urn:nbn:fi:lb-2016042502) | 5.37 h | 2.07 % |
Datasets were filtered to include only audio samples with a maximum length of 20 seconds.
## Training procedure
This model was trained during [Robust Speech Challenge Event](https://discuss.huggingface.co/t/open-to-the-community-robust-speech-recognition-challenge/13614) organized by Hugging Face. Training was done on a Tesla V100 GPU, sponsored by OVHcloud.
The training script was provided by Hugging Face and is available [here](https://github.com/huggingface/transformers/blob/main/examples/research_projects/robust-speech-event/run_speech_recognition_ctc_bnb.py); we only modified its data loading for our custom datasets.
For the KenLM language model training, we followed the [blog post tutorial](https://huggingface.co/blog/wav2vec2-with-ngram) provided by Hugging Face. Training data for the 5-gram KenLM were text transcriptions of the audio training data.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: [8-bit Adam](https://github.com/facebookresearch/bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
- mixed_precision_training: Native AMP
The pretrained `facebook/wav2vec2-xls-r-1b` model was initialized with the following hyperparameters:
- attention_dropout: 0.094
- hidden_dropout: 0.047
- feat_proj_dropout: 0.04
- mask_time_prob: 0.082
- layerdrop: 0.041
- activation_dropout: 0.055
- ctc_loss_reduction: "mean"
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.968 | 0.18 | 500 | 0.4870 | 0.4720 |
| 0.6557 | 0.36 | 1000 | 0.2450 | 0.2931 |
| 0.647 | 0.54 | 1500 | 0.1818 | 0.2255 |
| 0.5297 | 0.72 | 2000 | 0.1698 | 0.2354 |
| 0.5802 | 0.9 | 2500 | 0.1581 | 0.2355 |
| 0.6351 | 1.07 | 3000 | 0.1689 | 0.2336 |
| 0.4626 | 1.25 | 3500 | 0.1719 | 0.3099 |
| 0.4526 | 1.43 | 4000 | 0.1434 | 0.2069 |
| 0.4692 | 1.61 | 4500 | 0.1645 | 0.2192 |
| 0.4584 | 1.79 | 5000 | 0.1483 | 0.1987 |
| 0.4234 | 1.97 | 5500 | 0.1499 | 0.2178 |
| 0.4243 | 2.15 | 6000 | 0.1345 | 0.2070 |
| 0.4108 | 2.33 | 6500 | 0.1383 | 0.1850 |
| 0.4048 | 2.51 | 7000 | 0.1338 | 0.1811 |
| 0.4085 | 2.69 | 7500 | 0.1290 | 0.1780 |
| 0.4026 | 2.87 | 8000 | 0.1239 | 0.1650 |
| 0.4033 | 3.04 | 8500 | 0.1346 | 0.1657 |
| 0.3986 | 3.22 | 9000 | 0.1310 | 0.1850 |
| 0.3867 | 3.4 | 9500 | 0.1273 | 0.1741 |
| 0.3658 | 3.58 | 10000 | 0.1219 | 0.1672 |
| 0.382 | 3.76 | 10500 | 0.1306 | 0.1698 |
| 0.3847 | 3.94 | 11000 | 0.1230 | 0.1577 |
| 0.3691 | 4.12 | 11500 | 0.1310 | 0.1615 |
| 0.3593 | 4.3 | 12000 | 0.1296 | 0.1622 |
| 0.3619 | 4.48 | 12500 | 0.1285 | 0.1601 |
| 0.3361 | 4.66 | 13000 | 0.1261 | 0.1569 |
| 0.3603 | 4.84 | 13500 | 0.1235 | 0.1533 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
## Evaluation results
Evaluation was done with the [Common Voice 7.0 Finnish test split](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
To evaluate this model, run the `eval.py` script in this repository:
```bash
python3 eval.py --model_id aapot/wav2vec2-xlsr-1b-finnish-lm --dataset mozilla-foundation/common_voice_7_0 --config fi --split test
```
This model (the second row of the table) achieves the following WER (Word Error Rate) and CER (Character Error Rate) results compared to our other models:
| | WER (with LM) | WER (without LM) | CER (with LM) | CER (without LM) |
|-----------------------------------------|---------------|------------------|---------------|------------------|
|aapot/wav2vec2-xlsr-1b-finnish-lm-v2 |**4.09** |**9.73** |**0.88** |**1.65** |
|aapot/wav2vec2-xlsr-1b-finnish-lm |5.65 |13.11 |1.20 |2.23 |
|aapot/wav2vec2-xlsr-300m-finnish-lm |8.16 |17.92 |1.97 |3.36 |
## Team Members
- Aapo Tanskanen, [Hugging Face profile](https://huggingface.co/aapot), [LinkedIn profile](https://www.linkedin.com/in/aapotanskanen/)
- Rasmus Toivanen, [Hugging Face profile](https://huggingface.co/RASMUS), [LinkedIn profile](https://www.linkedin.com/in/rasmustoivanen/)
Feel free to contact us for more details 🤗 | {"language": "fi", "license": "apache-2.0", "tags": ["automatic-speech-recognition", "fi", "finnish", "generated_from_trainer", "hf-asr-leaderboard", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_7_0"], "metrics": ["wer", "cer"], "model-index": [{"name": "wav2vec2-xlsr-1b-finnish-lm", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 7", "type": "mozilla-foundation/common_voice_7_0", "args": "fi"}, "metrics": [{"type": "wer", "value": 5.65, "name": "Test WER"}, {"type": "cer", "value": 1.2, "name": "Test CER"}]}]}]} | aapot/wav2vec2-xlsr-1b-finnish-lm | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fi",
"finnish",
"generated_from_trainer",
"hf-asr-leaderboard",
"robust-speech-event",
"dataset:mozilla-foundation/common_voice_7_0",
"arxiv:2111.09296",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
automatic-speech-recognition | transformers |
# Wav2Vec2 XLS-R for Finnish ASR
This acoustic model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) for Finnish ASR. The model has been fine-tuned with 275.6 hours of Finnish transcribed speech data. Wav2Vec2 XLS-R was introduced in
[this paper](https://arxiv.org/abs/2111.09296) and first released at [this page](https://github.com/pytorch/fairseq/tree/main/examples/wav2vec#wav2vec-20).
**Note**: there is a version with a KenLM language model used in the decoding phase that produces better transcriptions: [Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2](https://huggingface.co/Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2)
## Model description
Wav2Vec2 XLS-R is Facebook AI's large-scale multilingual pretrained model for speech. It is pretrained on 436k hours of unlabeled speech, including VoxPopuli, MLS, CommonVoice, BABEL, and VoxLingua107. It uses the wav2vec 2.0 objective, in 128 languages.
You can read more about the pretrained model from [this blog](https://ai.facebook.com/blog/xls-r-self-supervised-speech-processing-for-128-languages) and [this paper](https://arxiv.org/abs/2111.09296).
This model is a fine-tuned version of the pretrained model (the 1 billion parameter variant) for Finnish ASR.
## Intended uses & limitations
You can use this model for the Finnish ASR (speech-to-text) task.
### How to use
Check the [run-finnish-asr-models.ipynb](https://huggingface.co/aapot/wav2vec2-xlsr-1b-finnish-v2/blob/main/run-finnish-asr-models.ipynb) notebook in this repository for a detailed example of how to use this model.
### Limitations and bias
This model was fine-tuned with audio samples whose maximum length was 20 seconds, so it most likely works best for fairly short audio clips of similar length. You can still try it on much longer audio and see how it works. If you encounter out-of-memory errors with very long audio files, you can use the audio chunking method introduced in [this blog post](https://huggingface.co/blog/asr-chunking).
The vast majority of the fine-tuning data came from the Finnish Parliament dataset, so this model may not generalize well to very different domains such as everyday spoken Finnish with dialects. In addition, the audio in the datasets is dominated by adult male speakers, so the model may not work as well for the speech of children and women, for example.
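As noted above, long recordings can be transcribed with the chunked-inference approach directly through the ASR pipeline; a hedged sketch (the chunk and stride lengths are illustrative, not taken from the card):
```python
from transformers import pipeline

# Hedged sketch: chunked long-form transcription to avoid out-of-memory errors.
asr = pipeline("automatic-speech-recognition", model="aapot/wav2vec2-xlsr-1b-finnish-v2")
print(asr("long_finnish_audio.wav", chunk_length_s=20, stride_length_s=2)["text"])
```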
## Training data
This model was fine-tuned with 275.6 hours of Finnish transcribed speech data from following datasets:
| Dataset | Hours | % of total hours |
|:------------------------------------------------------------------------------------------------------------------------------ |:--------:|:----------------:|
| [Common Voice 7.0 Finnish train + evaluation + other splits](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0) | 9.70 h | 3.52 % |
| [Finnish parliament session 2](https://b2share.eudat.eu/records/4df422d631544ce682d6af1d4714b2d4) | 0.24 h | 0.09 % |
| [VoxPopuli Finnish](https://github.com/facebookresearch/voxpopuli) | 21.97 h | 7.97 % |
| [CSS10 Finnish](https://github.com/kyubyong/css10) | 10.32 h | 3.74 % |
| [Aalto Finnish Parliament ASR Corpus](http://urn.fi/urn:nbn:fi:lb-2021051903) | 228.00 h | 82.73 % |
| [Finnish Broadcast Corpus](http://urn.fi/urn:nbn:fi:lb-2016042502) | 5.37 h | 1.95 % |
Datasets were filtered to include only audio samples with a maximum length of 20 seconds.
## Training procedure
This model was trained during [Robust Speech Challenge Event](https://discuss.huggingface.co/t/open-to-the-community-robust-speech-recognition-challenge/13614) organized by Hugging Face. Training was done on a Tesla V100 GPU, sponsored by OVHcloud.
The training script was provided by Hugging Face and is available [here](https://github.com/huggingface/transformers/blob/main/examples/research_projects/robust-speech-event/run_speech_recognition_ctc_bnb.py); we only modified its data loading for our custom datasets.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: [8-bit Adam](https://github.com/facebookresearch/bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
The pretrained `facebook/wav2vec2-xls-r-1b` model was initialized with the following hyperparameters:
- attention_dropout: 0.094
- hidden_dropout: 0.047
- feat_proj_dropout: 0.04
- mask_time_prob: 0.082
- layerdrop: 0.041
- activation_dropout: 0.055
- ctc_loss_reduction: "mean"
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.7778 | 0.17 | 500 | 0.2851 | 0.3572 |
| 0.5506 | 0.34 | 1000 | 0.1595 | 0.2130 |
| 0.6569 | 0.5 | 1500 | 0.1458 | 0.2046 |
| 0.5997 | 0.67 | 2000 | 0.1374 | 0.1975 |
| 0.542 | 0.84 | 2500 | 0.1390 | 0.1956 |
| 0.4815 | 1.01 | 3000 | 0.1266 | 0.1813 |
| 0.6982 | 1.17 | 3500 | 0.1441 | 0.1965 |
| 0.4522 | 1.34 | 4000 | 0.1232 | 0.1822 |
| 0.4655 | 1.51 | 4500 | 0.1209 | 0.1702 |
| 0.4069 | 1.68 | 5000 | 0.1149 | 0.1688 |
| 0.4226 | 1.84 | 5500 | 0.1121 | 0.1560 |
| 0.3993 | 2.01 | 6000 | 0.1091 | 0.1557 |
| 0.406 | 2.18 | 6500 | 0.1115 | 0.1553 |
| 0.4098 | 2.35 | 7000 | 0.1144 | 0.1560 |
| 0.3995 | 2.51 | 7500 | 0.1028 | 0.1476 |
| 0.4101 | 2.68 | 8000 | 0.1129 | 0.1511 |
| 0.3636 | 2.85 | 8500 | 0.1025 | 0.1517 |
| 0.3534 | 3.02 | 9000 | 0.1068 | 0.1480 |
| 0.3836 | 3.18 | 9500 | 0.1072 | 0.1459 |
| 0.3531 | 3.35 | 10000 | 0.0928 | 0.1367 |
| 0.3649 | 3.52 | 10500 | 0.1042 | 0.1426 |
| 0.3645 | 3.69 | 11000 | 0.0979 | 0.1433 |
| 0.3685 | 3.85 | 11500 | 0.0947 | 0.1346 |
| 0.3325 | 4.02 | 12000 | 0.0991 | 0.1352 |
| 0.3497 | 4.19 | 12500 | 0.0919 | 0.1358 |
| 0.3303 | 4.36 | 13000 | 0.0888 | 0.1272 |
| 0.3323 | 4.52 | 13500 | 0.0888 | 0.1277 |
| 0.3452 | 4.69 | 14000 | 0.0894 | 0.1279 |
| 0.337 | 4.86 | 14500 | 0.0917 | 0.1289 |
| 0.3114 | 5.03 | 15000 | 0.0942 | 0.1313 |
| 0.3099 | 5.19 | 15500 | 0.0902 | 0.1239 |
| 0.3079 | 5.36 | 16000 | 0.0871 | 0.1256 |
| 0.3293 | 5.53 | 16500 | 0.0861 | 0.1263 |
| 0.3123 | 5.7 | 17000 | 0.0876 | 0.1203 |
| 0.3093 | 5.86 | 17500 | 0.0848 | 0.1226 |
| 0.2903 | 6.03 | 18000 | 0.0914 | 0.1221 |
| 0.297 | 6.2 | 18500 | 0.0841 | 0.1185 |
| 0.2797 | 6.37 | 19000 | 0.0858 | 0.1165 |
| 0.2878 | 6.53 | 19500 | 0.0874 | 0.1161 |
| 0.2974 | 6.7 | 20000 | 0.0835 | 0.1173 |
| 0.3051 | 6.87 | 20500 | 0.0835 | 0.1178 |
| 0.2941 | 7.04 | 21000 | 0.0852 | 0.1155 |
| 0.258 | 7.21 | 21500 | 0.0832 | 0.1132 |
| 0.2778 | 7.37 | 22000 | 0.0829 | 0.1110 |
| 0.2751 | 7.54 | 22500 | 0.0822 | 0.1069 |
| 0.2887 | 7.71 | 23000 | 0.0819 | 0.1103 |
| 0.2509 | 7.88 | 23500 | 0.0787 | 0.1055 |
| 0.2501 | 8.04 | 24000 | 0.0807 | 0.1076 |
| 0.2399 | 8.21 | 24500 | 0.0784 | 0.1052 |
| 0.2539 | 8.38 | 25000 | 0.0772 | 0.1075 |
| 0.248 | 8.55 | 25500 | 0.0772 | 0.1055 |
| 0.2689 | 8.71 | 26000 | 0.0763 | 0.1027 |
| 0.2855 | 8.88 | 26500 | 0.0756 | 0.1035 |
| 0.2421 | 9.05 | 27000 | 0.0771 | 0.0998 |
| 0.2497 | 9.22 | 27500 | 0.0756 | 0.0971 |
| 0.2367 | 9.38 | 28000 | 0.0741 | 0.0974 |
| 0.2473 | 9.55 | 28500 | 0.0739 | 0.0982 |
| 0.2396 | 9.72 | 29000 | 0.0756 | 0.0991 |
| 0.2602 | 9.89 | 29500 | 0.0737 | 0.0975 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
## Evaluation results
Evaluation was done with the [Common Voice 7.0 Finnish test split](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
To evaluate this model, run the `eval.py` script in this repository:
```bash
python3 eval.py --model_id aapot/wav2vec2-xlsr-1b-finnish-v2 --dataset mozilla-foundation/common_voice_7_0 --config fi --split test
```
This model (the first row of the table) achieves the following WER (Word Error Rate) and CER (Character Error Rate) results compared to our other models:
| | WER (with LM) | WER (without LM) | CER (with LM) | CER (without LM) |
|-----------------------------------------|---------------|------------------|---------------|------------------|
|aapot/wav2vec2-xlsr-1b-finnish-lm-v2 |**4.09** |**9.73** |**0.88** |**1.65** |
|aapot/wav2vec2-xlsr-1b-finnish-lm |5.65 |13.11 |1.20 |2.23 |
|aapot/wav2vec2-xlsr-300m-finnish-lm |8.16 |17.92 |1.97 |3.36 |
## Team Members
- Aapo Tanskanen, [Hugging Face profile](https://huggingface.co/aapot), [LinkedIn profile](https://www.linkedin.com/in/aapotanskanen/)
- Rasmus Toivanen, [Hugging Face profile](https://huggingface.co/RASMUS), [LinkedIn profile](https://www.linkedin.com/in/rasmustoivanen/)
Feel free to contact us for more details 🤗 | {"language": "fi", "license": "apache-2.0", "tags": ["automatic-speech-recognition", "fi", "finnish", "generated_from_trainer", "hf-asr-leaderboard", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_7_0"], "metrics": ["wer", "cer"], "model-index": [{"name": "wav2vec2-xlsr-1b-finnish-v2", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 7", "type": "mozilla-foundation/common_voice_7_0", "args": "fi"}, "metrics": [{"type": "wer", "value": 9.73, "name": "Test WER"}, {"type": "cer", "value": 1.65, "name": "Test CER"}]}]}]} | aapot/wav2vec2-xlsr-1b-finnish-v2 | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fi",
"finnish",
"generated_from_trainer",
"hf-asr-leaderboard",
"robust-speech-event",
"dataset:mozilla-foundation/common_voice_7_0",
"arxiv:2111.09296",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
automatic-speech-recognition | transformers |
# Wav2Vec2 XLS-R for Finnish ASR
This acoustic model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) for Finnish ASR. The model has been fine-tuned with 259.57 hours of Finnish transcribed speech data. Wav2Vec2 XLS-R was introduced in
[this paper](https://arxiv.org/abs/2111.09296) and first released at [this page](https://github.com/pytorch/fairseq/tree/main/examples/wav2vec#wav2vec-20).
**Note**: there is a version with a KenLM language model used in the decoding phase that produces better transcriptions: [Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm](https://huggingface.co/Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm)
**Note**: there is a better V2 version of this model, which has been fine-tuned for longer with 16 additional hours of data: [Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2](https://huggingface.co/Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2)
## Model description
Wav2Vec2 XLS-R is Facebook AI's large-scale multilingual pretrained model for speech. It is pretrained on 436k hours of unlabeled speech, including VoxPopuli, MLS, CommonVoice, BABEL, and VoxLingua107. It uses the wav2vec 2.0 objective, in 128 languages.
You can read more about the pretrained model from [this blog](https://ai.facebook.com/blog/xls-r-self-supervised-speech-processing-for-128-languages) and [this paper](https://arxiv.org/abs/2111.09296).
This model is a fine-tuned version of the pretrained model (the 1 billion parameter variant) for Finnish ASR.
## Intended uses & limitations
You can use this model for the Finnish ASR (speech-to-text) task.
### How to use
Check the [run-finnish-asr-models.ipynb](https://huggingface.co/aapot/wav2vec2-xlsr-1b-finnish/blob/main/run-finnish-asr-models.ipynb) notebook in this repository for a detailed example of how to use this model.
### Limitations and bias
This model was fine-tuned with audio samples whose maximum length was 20 seconds, so it most likely works best for fairly short audio clips of similar length. You can still try it on much longer audio and see how it works. If you encounter out-of-memory errors with very long audio files, you can use the audio chunking method introduced in [this blog post](https://huggingface.co/blog/asr-chunking).
The vast majority of the fine-tuning data came from the Finnish Parliament dataset, so this model may not generalize well to very different domains such as everyday spoken Finnish with dialects. In addition, the audio in the datasets is dominated by adult male speakers, so the model may not work as well for the speech of children and women, for example.
## Training data
This model was fine-tuned with 259.57 hours of Finnish transcribed speech data from following datasets:
| Dataset | Hours | % of total hours |
|:----------------------------------------------------------------------------------------------------------------------------------|:--------:|:----------------:|
| [Common Voice 7.0 Finnish train + evaluation + other splits](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0) | 9.70 h | 3.74 % |
| [Finnish parliament session 2](https://b2share.eudat.eu/records/4df422d631544ce682d6af1d4714b2d4) | 0.24 h | 0.09 % |
| [VoxPopuli Finnish](https://github.com/facebookresearch/voxpopuli) | 5.94 h | 2.29 % |
| [CSS10 Finnish](https://github.com/kyubyong/css10) | 10.32 h | 3.98 % |
| [Aalto Finnish Parliament ASR Corpus](http://urn.fi/urn:nbn:fi:lb-2021051903) | 228.00 h | 87.84 % |
| [Finnish Broadcast Corpus](http://urn.fi/urn:nbn:fi:lb-2016042502) | 5.37 h | 2.07 % |
Datasets were filtered to include only audio samples with a maximum length of 20 seconds.
## Training procedure
This model was trained during [Robust Speech Challenge Event](https://discuss.huggingface.co/t/open-to-the-community-robust-speech-recognition-challenge/13614) organized by Hugging Face. Training was done on a Tesla V100 GPU, sponsored by OVHcloud.
The training script was provided by Hugging Face and is available [here](https://github.com/huggingface/transformers/blob/main/examples/research_projects/robust-speech-event/run_speech_recognition_ctc_bnb.py); we only modified its data loading for our custom datasets.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: [8-bit Adam](https://github.com/facebookresearch/bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
- mixed_precision_training: Native AMP
The pretrained `facebook/wav2vec2-xls-r-1b` model was initialized with the following hyperparameters:
- attention_dropout: 0.094
- hidden_dropout: 0.047
- feat_proj_dropout: 0.04
- mask_time_prob: 0.082
- layerdrop: 0.041
- activation_dropout: 0.055
- ctc_loss_reduction: "mean"
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.968 | 0.18 | 500 | 0.4870 | 0.4720 |
| 0.6557 | 0.36 | 1000 | 0.2450 | 0.2931 |
| 0.647 | 0.54 | 1500 | 0.1818 | 0.2255 |
| 0.5297 | 0.72 | 2000 | 0.1698 | 0.2354 |
| 0.5802 | 0.9 | 2500 | 0.1581 | 0.2355 |
| 0.6351 | 1.07 | 3000 | 0.1689 | 0.2336 |
| 0.4626 | 1.25 | 3500 | 0.1719 | 0.3099 |
| 0.4526 | 1.43 | 4000 | 0.1434 | 0.2069 |
| 0.4692 | 1.61 | 4500 | 0.1645 | 0.2192 |
| 0.4584 | 1.79 | 5000 | 0.1483 | 0.1987 |
| 0.4234 | 1.97 | 5500 | 0.1499 | 0.2178 |
| 0.4243 | 2.15 | 6000 | 0.1345 | 0.2070 |
| 0.4108 | 2.33 | 6500 | 0.1383 | 0.1850 |
| 0.4048 | 2.51 | 7000 | 0.1338 | 0.1811 |
| 0.4085 | 2.69 | 7500 | 0.1290 | 0.1780 |
| 0.4026 | 2.87 | 8000 | 0.1239 | 0.1650 |
| 0.4033 | 3.04 | 8500 | 0.1346 | 0.1657 |
| 0.3986 | 3.22 | 9000 | 0.1310 | 0.1850 |
| 0.3867 | 3.4 | 9500 | 0.1273 | 0.1741 |
| 0.3658 | 3.58 | 10000 | 0.1219 | 0.1672 |
| 0.382 | 3.76 | 10500 | 0.1306 | 0.1698 |
| 0.3847 | 3.94 | 11000 | 0.1230 | 0.1577 |
| 0.3691 | 4.12 | 11500 | 0.1310 | 0.1615 |
| 0.3593 | 4.3 | 12000 | 0.1296 | 0.1622 |
| 0.3619 | 4.48 | 12500 | 0.1285 | 0.1601 |
| 0.3361 | 4.66 | 13000 | 0.1261 | 0.1569 |
| 0.3603 | 4.84 | 13500 | 0.1235 | 0.1533 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
## Evaluation results
Evaluation was done with the [Common Voice 7.0 Finnish test split](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
To evaluate this model, run the `eval.py` script in this repository:
```bash
python3 eval.py --model_id aapot/wav2vec2-xlsr-1b-finnish --dataset mozilla-foundation/common_voice_7_0 --config fi --split test
```
This model (the second row of the table) achieves the following WER (Word Error Rate) and CER (Character Error Rate) results compared to our other models:
| | WER (with LM) | WER (without LM) | CER (with LM) | CER (without LM) |
|-----------------------------------------|---------------|------------------|---------------|------------------|
|aapot/wav2vec2-xlsr-1b-finnish-lm-v2 |**4.09** |**9.73** |**0.88** |**1.65** |
|aapot/wav2vec2-xlsr-1b-finnish-lm |5.65 |13.11 |1.20 |2.23 |
|aapot/wav2vec2-xlsr-300m-finnish-lm |8.16 |17.92 |1.97 |3.36 |
## Team Members
- Aapo Tanskanen, [Hugging Face profile](https://huggingface.co/aapot), [LinkedIn profile](https://www.linkedin.com/in/aapotanskanen/)
- Rasmus Toivanen, [Hugging Face profile](https://huggingface.co/RASMUS), [LinkedIn profile](https://www.linkedin.com/in/rasmustoivanen/)
Feel free to contact us for more details 🤗 | {"language": "fi", "license": "apache-2.0", "tags": ["automatic-speech-recognition", "fi", "finnish", "generated_from_trainer", "hf-asr-leaderboard", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_7_0"], "metrics": ["wer", "cer"], "model-index": [{"name": "wav2vec2-xlsr-1b-finnish", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 7", "type": "mozilla-foundation/common_voice_7_0", "args": "fi"}, "metrics": [{"type": "wer", "value": 13.11, "name": "Test WER"}, {"type": "cer", "value": 2.23, "name": "Test CER"}]}]}]} | aapot/wav2vec2-xlsr-1b-finnish | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fi",
"finnish",
"generated_from_trainer",
"hf-asr-leaderboard",
"robust-speech-event",
"dataset:mozilla-foundation/common_voice_7_0",
"arxiv:2111.09296",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
automatic-speech-recognition | transformers |
# Wav2Vec2 XLS-R for Finnish ASR
This acoustic model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for Finnish ASR. The model has been fine-tuned with 275.6 hours of Finnish transcribed speech data. Wav2Vec2 XLS-R was introduced in
[this paper](https://arxiv.org/abs/2111.09296) and first released at [this page](https://github.com/pytorch/fairseq/tree/main/examples/wav2vec#wav2vec-20).
This repository also includes the Finnish KenLM language model used in the decoding phase with the acoustic model.
**Note**: this model is exactly the same as the [Finnish-NLP/wav2vec2-xlsr-300m-finnish-lm](https://huggingface.co/Finnish-NLP/wav2vec2-xlsr-300m-finnish-lm) model; it has just been copied/moved to the `Finnish-NLP` Hugging Face organization.
## Model description
Wav2Vec2 XLS-R is Facebook AI's large-scale multilingual pretrained model for speech. It is pretrained with the wav2vec 2.0 objective on 436k hours of unlabeled speech in 128 languages, including VoxPopuli, MLS, CommonVoice, BABEL, and VoxLingua107.
You can read more about the pretrained model from [this blog](https://ai.facebook.com/blog/xls-r-self-supervised-speech-processing-for-128-languages) and [this paper](https://arxiv.org/abs/2111.09296).
This model is a fine-tuned version of the pretrained model (the 300-million-parameter variant) for Finnish ASR.
## Intended uses & limitations
You can use this model for the Finnish ASR (speech-to-text) task.
### How to use
Check the [run-finnish-asr-models.ipynb](https://huggingface.co/aapot/wav2vec2-xlsr-300m-finnish-lm/blob/main/run-finnish-asr-models.ipynb) notebook in this repository for a detailed example of how to use this model.
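A minimal transcription sketch (assuming the `pyctcdecode` and `kenlm` packages are installed so that the bundled KenLM language model is used during decoding; `audio.wav` is a placeholder for a 16 kHz clip):

```python
from transformers import pipeline

# the pipeline loads the decoder with the KenLM language model shipped in this repository
asr = pipeline("automatic-speech-recognition", model="aapot/wav2vec2-xlsr-300m-finnish-lm")
print(asr("audio.wav")["text"])
```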
### Limitations and bias
This model was fine-tuned with audio samples whose maximum length was 20 seconds, so it most likely works best for fairly short audios of similar length. However, you can also try it with much longer audios and see how it works. If you encounter out-of-memory errors with very long audio files, you can use the audio chunking method introduced in [this blog post](https://huggingface.co/blog/asr-chunking).
A vast majority of the data used for fine-tuning was from the Finnish Parliament dataset, so this model may not generalize well to very different domains, such as everyday spoken Finnish with dialects. In addition, the audio in these datasets tends to be dominated by adult male speakers, so this model may not work as well for the speech of children and women, for example.
The Finnish KenLM language model used in the decoding phase has been trained with text data from the audio transcriptions and from a subset of Finnish Wikipedia. Thus, the decoder's language model may not generalize to very different language varieties, for example everyday spoken language with dialects (especially because the Wikipedia data contains mostly formal Finnish). It may be beneficial to train your own KenLM language model for your domain's language and use that in the decoding.
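As a rough sketch of that last step, following the approach of the [n-gram blog post](https://huggingface.co/blog/wav2vec2-with-ngram) (`my_domain_5gram.arpa` is a hypothetical KenLM model you have trained yourself; vocabulary clean-up details are omitted):

```python
from pyctcdecode import build_ctcdecoder
from transformers import AutoProcessor, Wav2Vec2ProcessorWithLM

processor = AutoProcessor.from_pretrained("aapot/wav2vec2-xlsr-300m-finnish-lm")

# order the tokenizer vocabulary by token id so it lines up with the CTC output
vocab = processor.tokenizer.get_vocab()
labels = [token for token, _ in sorted(vocab.items(), key=lambda item: item[1])]

decoder = build_ctcdecoder(labels, kenlm_model_path="my_domain_5gram.arpa")
processor_with_lm = Wav2Vec2ProcessorWithLM(
    feature_extractor=processor.feature_extractor,
    tokenizer=processor.tokenizer,
    decoder=decoder,
)
```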
## Training data
This model was fine-tuned with 275.6 hours of Finnish transcribed speech data from the following datasets:
| Dataset | Hours | % of total hours |
|:------------------------------------------------------------------------------------------------------------------------------ |:--------:|:----------------:|
| [Common Voice 7.0 Finnish train + evaluation + other splits](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0) | 9.70 h | 3.52 % |
| [Finnish parliament session 2](https://b2share.eudat.eu/records/4df422d631544ce682d6af1d4714b2d4) | 0.24 h | 0.09 % |
| [VoxPopuli Finnish](https://github.com/facebookresearch/voxpopuli) | 21.97 h | 7.97 % |
| [CSS10 Finnish](https://github.com/kyubyong/css10) | 10.32 h | 3.74 % |
| [Aalto Finnish Parliament ASR Corpus](http://urn.fi/urn:nbn:fi:lb-2021051903) | 228.00 h | 82.73 % |
| [Finnish Broadcast Corpus](http://urn.fi/urn:nbn:fi:lb-2016042502) | 5.37 h | 1.95 % |
The datasets were filtered to include only audio samples with a maximum length of 20 seconds.
## Training procedure
This model was trained during [Robust Speech Challenge Event](https://discuss.huggingface.co/t/open-to-the-community-robust-speech-recognition-challenge/13614) organized by Hugging Face. Training was done on a Tesla V100 GPU, sponsored by OVHcloud.
The training script was provided by Hugging Face and is available [here](https://github.com/huggingface/transformers/blob/main/examples/research_projects/robust-speech-event/run_speech_recognition_ctc_bnb.py). We only modified its data loading for our custom datasets.
For the KenLM language model training, we followed the [blog post tutorial](https://huggingface.co/blog/wav2vec2-with-ngram) provided by Hugging Face. Training data for the 5-gram KenLM were text transcriptions of the audio training data and 100k random samples of cleaned [Finnish Wikipedia](https://huggingface.co/datasets/wikipedia) (August 2021) dataset.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-04
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: [8-bit Adam](https://github.com/facebookresearch/bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
The pretrained `facebook/wav2vec2-xls-r-300m` model was initialized with the following hyperparameters:
- attention_dropout: 0.094
- hidden_dropout: 0.047
- feat_proj_dropout: 0.04
- mask_time_prob: 0.082
- layerdrop: 0.041
- activation_dropout: 0.055
- ctc_loss_reduction: "mean"
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.973 | 0.17 | 500 | 0.5750 | 0.6844 |
| 0.713 | 0.34 | 1000 | 0.3356 | 0.4518 |
| 0.6563 | 0.5 | 1500 | 0.3007 | 0.4039 |
| 0.642 | 0.67 | 2000 | 0.2619 | 0.3674 |
| 0.6203 | 0.84 | 2500 | 0.2488 | 0.3558 |
| 0.6016 | 1.01 | 3000 | 0.2795 | 0.3835 |
| 0.5423 | 1.17 | 3500 | 0.2652 | 0.3310 |
| 0.5639 | 1.34 | 4000 | 0.2479 | 0.3462 |
| 0.586 | 1.51 | 4500 | 0.2409 | 0.3295 |
| 0.5169 | 1.68 | 5000 | 0.2728 | 0.3352 |
| 0.5176 | 1.84 | 5500 | 0.2254 | 0.3149 |
| 0.4983 | 2.01 | 6000 | 0.2169 | 0.3009 |
| 0.4982 | 2.18 | 6500 | 0.2215 | 0.3079 |
| 0.4898 | 2.35 | 7000 | 0.2174 | 0.3023 |
| 0.4922 | 2.51 | 7500 | 0.2217 | 0.3081 |
| 0.5025 | 2.68 | 8000 | 0.2002 | 0.2710 |
| 0.4745 | 2.85 | 8500 | 0.1935 | 0.2783 |
| 0.4377 | 3.02 | 9000 | 0.1859 | 0.2742 |
| 0.4511 | 3.18 | 9500 | 0.2038 | 0.2786 |
| 0.4411 | 3.35 | 10000 | 0.1863 | 0.2651 |
| 0.4501 | 3.52 | 10500 | 0.1948 | 0.2605 |
| 0.4557 | 3.69 | 11000 | 0.1872 | 0.2695 |
| 0.4493 | 3.85 | 11500 | 0.1888 | 0.2632 |
| 0.4047 | 4.02 | 12000 | 0.1818 | 0.2559 |
| 0.4319 | 4.19 | 12500 | 0.1896 | 0.2648 |
| 0.4162 | 4.36 | 13000 | 0.1953 | 0.2595 |
| 0.4046 | 4.52 | 13500 | 0.1864 | 0.2606 |
| 0.4195 | 4.69 | 14000 | 0.1843 | 0.2467 |
| 0.4146 | 4.86 | 14500 | 0.1686 | 0.2450 |
| 0.378 | 5.03 | 15000 | 0.1731 | 0.2401 |
| 0.3792 | 5.19 | 15500 | 0.1676 | 0.2325 |
| 0.3855 | 5.36 | 16000 | 0.1740 | 0.2326 |
| 0.4029 | 5.53 | 16500 | 0.1674 | 0.2345 |
| 0.386 | 5.7 | 17000 | 0.1735 | 0.2280 |
| 0.3811 | 5.86 | 17500 | 0.1692 | 0.2258 |
| 0.3607 | 6.03 | 18000 | 0.1797 | 0.2279 |
| 0.3604 | 6.2 | 18500 | 0.1651 | 0.2206 |
| 0.3362 | 6.37 | 19000 | 0.1627 | 0.2199 |
| 0.3611 | 6.53 | 19500 | 0.1652 | 0.2172 |
| 0.3671 | 6.7 | 20000 | 0.1564 | 0.2140 |
| 0.3769 | 6.87 | 20500 | 0.1525 | 0.2101 |
| 0.3539 | 7.04 | 21000 | 0.1639 | 0.2096 |
| 0.3225 | 7.21 | 21500 | 0.1611 | 0.2087 |
| 0.3323 | 7.37 | 22000 | 0.1633 | 0.2008 |
| 0.3327 | 7.54 | 22500 | 0.1692 | 0.1975 |
| 0.3456 | 7.71 | 23000 | 0.1555 | 0.1991 |
| 0.3058 | 7.88 | 23500 | 0.1590 | 0.1959 |
| 0.3034 | 8.04 | 24000 | 0.1531 | 0.1973 |
| 0.2925 | 8.21 | 24500 | 0.1583 | 0.1978 |
| 0.2967 | 8.38 | 25000 | 0.1546 | 0.1906 |
| 0.2974 | 8.55 | 25500 | 0.1540 | 0.1869 |
| 0.3131 | 8.71 | 26000 | 0.1534 | 0.1850 |
| 0.3306 | 8.88 | 26500 | 0.1482 | 0.1844 |
| 0.2842 | 9.05 | 27000 | 0.1490 | 0.1854 |
| 0.2879 | 9.22 | 27500 | 0.1463 | 0.1799 |
| 0.27 | 9.38 | 28000 | 0.1454 | 0.1798 |
| 0.2874 | 9.55 | 28500 | 0.1504 | 0.1787 |
| 0.2757 | 9.72 | 29000 | 0.1512 | 0.1784 |
| 0.3017 | 9.89 | 29500 | 0.1484 | 0.1800 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
## Evaluation results
Evaluation was done with the [Common Voice 7.0 Finnish test split](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
To evaluate this model, run the `eval.py` script in this repository:
```bash
python3 eval.py --model_id aapot/wav2vec2-xlsr-300m-finnish-lm --dataset mozilla-foundation/common_voice_7_0 --config fi --split test
```
This model (the third row of the table) achieves the following WER (Word Error Rate) and CER (Character Error Rate) results compared to our other models:
| | WER (with LM) | WER (without LM) | CER (with LM) | CER (without LM) |
|-----------------------------------------|---------------|------------------|---------------|------------------|
|aapot/wav2vec2-xlsr-1b-finnish-lm-v2 |**4.09** |**9.73** |**0.88** |**1.65** |
|aapot/wav2vec2-xlsr-1b-finnish-lm |5.65 |13.11 |1.20 |2.23 |
|aapot/wav2vec2-xlsr-300m-finnish-lm |8.16 |17.92 |1.97 |3.36 |
## Team Members
- Aapo Tanskanen, [Hugging Face profile](https://huggingface.co/aapot), [LinkedIn profile](https://www.linkedin.com/in/aapotanskanen/)
- Rasmus Toivanen, [Hugging Face profile](https://huggingface.co/RASMUS), [LinkedIn profile](https://www.linkedin.com/in/rasmustoivanen/)
Feel free to contact us for more details 🤗 | {"language": "fi", "license": "apache-2.0", "tags": ["automatic-speech-recognition", "fi", "finnish", "generated_from_trainer", "hf-asr-leaderboard", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_7_0"], "metrics": ["wer", "cer"], "model-index": [{"name": "wav2vec2-xlsr-300m-finnish-lm", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 7", "type": "mozilla-foundation/common_voice_7_0", "args": "fi"}, "metrics": [{"type": "wer", "value": 8.16, "name": "Test WER"}, {"type": "cer", "value": 1.97, "name": "Test CER"}]}]}]} | aapot/wav2vec2-xlsr-300m-finnish-lm | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fi",
"finnish",
"generated_from_trainer",
"hf-asr-leaderboard",
"robust-speech-event",
"dataset:mozilla-foundation/common_voice_7_0",
"arxiv:2111.09296",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
automatic-speech-recognition | transformers |
# Wav2Vec2 XLS-R for Finnish ASR
This acoustic model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for Finnish ASR. The model has been fine-tuned with 275.6 hours of Finnish transcribed speech data. Wav2Vec2 XLS-R was introduced in
[this paper](https://arxiv.org/abs/2111.09296) and first released at [this page](https://github.com/pytorch/fairseq/tree/main/examples/wav2vec#wav2vec-20).
**Note**: there is a version with a KenLM language model used in the decoding phase that produces better transcriptions: [Finnish-NLP/wav2vec2-xlsr-300m-finnish-lm](https://huggingface.co/Finnish-NLP/wav2vec2-xlsr-300m-finnish-lm)
## Model description
Wav2Vec2 XLS-R is Facebook AI's large-scale multilingual pretrained model for speech. It is pretrained with the wav2vec 2.0 objective on 436k hours of unlabeled speech in 128 languages, including VoxPopuli, MLS, CommonVoice, BABEL, and VoxLingua107.
You can read more about the pretrained model from [this blog](https://ai.facebook.com/blog/xls-r-self-supervised-speech-processing-for-128-languages) and [this paper](https://arxiv.org/abs/2111.09296).
This model is a fine-tuned version of the pretrained model (the 300-million-parameter variant) for Finnish ASR.
## Intended uses & limitations
You can use this model for the Finnish ASR (speech-to-text) task.
### How to use
Check the [run-finnish-asr-models.ipynb](https://huggingface.co/aapot/wav2vec2-xlsr-300m-finnish/blob/main/run-finnish-asr-models.ipynb) notebook in this repository for a detailed example of how to use this model.
### Limitations and bias
This model was fine-tuned with audio samples whose maximum length was 20 seconds, so it most likely works best for fairly short audios of similar length. However, you can also try it with much longer audios and see how it works (a chunked-inference sketch is shown below). If you encounter out-of-memory errors with very long audio files, you can use the audio chunking method introduced in [this blog post](https://huggingface.co/blog/asr-chunking).
A vast majority of the data used for fine-tuning was from the Finnish Parliament dataset, so this model may not generalize well to very different domains, such as everyday spoken Finnish with dialects. In addition, the audio in these datasets tends to be dominated by adult male speakers, so this model may not work as well for the speech of children and women, for example.
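For long recordings, a rough sketch of chunked inference with the `pipeline` API (the chunk and stride lengths below are illustrative values, not tuned ones):

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="aapot/wav2vec2-xlsr-300m-finnish",
    chunk_length_s=20,       # split long audio into ~20 s chunks
    stride_length_s=(4, 2),  # overlap chunks to avoid cutting words at the boundaries
)
print(asr("long_audio.wav")["text"])
```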
## Training data
This model was fine-tuned with 275.6 hours of Finnish transcribed speech data from the following datasets:
| Dataset | Hours | % of total hours |
|:------------------------------------------------------------------------------------------------------------------------------ |:--------:|:----------------:|
| [Common Voice 7.0 Finnish train + evaluation + other splits](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0) | 9.70 h | 3.52 % |
| [Finnish parliament session 2](https://b2share.eudat.eu/records/4df422d631544ce682d6af1d4714b2d4) | 0.24 h | 0.09 % |
| [VoxPopuli Finnish](https://github.com/facebookresearch/voxpopuli) | 21.97 h | 7.97 % |
| [CSS10 Finnish](https://github.com/kyubyong/css10) | 10.32 h | 3.74 % |
| [Aalto Finnish Parliament ASR Corpus](http://urn.fi/urn:nbn:fi:lb-2021051903) | 228.00 h | 82.73 % |
| [Finnish Broadcast Corpus](http://urn.fi/urn:nbn:fi:lb-2016042502) | 5.37 h | 1.95 % |
The datasets were filtered to include only audio samples with a maximum length of 20 seconds.
## Training procedure
This model was trained during [Robust Speech Challenge Event](https://discuss.huggingface.co/t/open-to-the-community-robust-speech-recognition-challenge/13614) organized by Hugging Face. Training was done on a Tesla V100 GPU, sponsored by OVHcloud.
The training script was provided by Hugging Face and is available [here](https://github.com/huggingface/transformers/blob/main/examples/research_projects/robust-speech-event/run_speech_recognition_ctc_bnb.py). We only modified its data loading for our custom datasets.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-04
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: [8-bit Adam](https://github.com/facebookresearch/bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
The pretrained `facebook/wav2vec2-xls-r-300m` model was initialized with the following hyperparameters:
- attention_dropout: 0.094
- hidden_dropout: 0.047
- feat_proj_dropout: 0.04
- mask_time_prob: 0.082
- layerdrop: 0.041
- activation_dropout: 0.055
- ctc_loss_reduction: "mean"
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.973 | 0.17 | 500 | 0.5750 | 0.6844 |
| 0.713 | 0.34 | 1000 | 0.3356 | 0.4518 |
| 0.6563 | 0.5 | 1500 | 0.3007 | 0.4039 |
| 0.642 | 0.67 | 2000 | 0.2619 | 0.3674 |
| 0.6203 | 0.84 | 2500 | 0.2488 | 0.3558 |
| 0.6016 | 1.01 | 3000 | 0.2795 | 0.3835 |
| 0.5423 | 1.17 | 3500 | 0.2652 | 0.3310 |
| 0.5639 | 1.34 | 4000 | 0.2479 | 0.3462 |
| 0.586 | 1.51 | 4500 | 0.2409 | 0.3295 |
| 0.5169 | 1.68 | 5000 | 0.2728 | 0.3352 |
| 0.5176 | 1.84 | 5500 | 0.2254 | 0.3149 |
| 0.4983 | 2.01 | 6000 | 0.2169 | 0.3009 |
| 0.4982 | 2.18 | 6500 | 0.2215 | 0.3079 |
| 0.4898 | 2.35 | 7000 | 0.2174 | 0.3023 |
| 0.4922 | 2.51 | 7500 | 0.2217 | 0.3081 |
| 0.5025 | 2.68 | 8000 | 0.2002 | 0.2710 |
| 0.4745 | 2.85 | 8500 | 0.1935 | 0.2783 |
| 0.4377 | 3.02 | 9000 | 0.1859 | 0.2742 |
| 0.4511 | 3.18 | 9500 | 0.2038 | 0.2786 |
| 0.4411 | 3.35 | 10000 | 0.1863 | 0.2651 |
| 0.4501 | 3.52 | 10500 | 0.1948 | 0.2605 |
| 0.4557 | 3.69 | 11000 | 0.1872 | 0.2695 |
| 0.4493 | 3.85 | 11500 | 0.1888 | 0.2632 |
| 0.4047 | 4.02 | 12000 | 0.1818 | 0.2559 |
| 0.4319 | 4.19 | 12500 | 0.1896 | 0.2648 |
| 0.4162 | 4.36 | 13000 | 0.1953 | 0.2595 |
| 0.4046 | 4.52 | 13500 | 0.1864 | 0.2606 |
| 0.4195 | 4.69 | 14000 | 0.1843 | 0.2467 |
| 0.4146 | 4.86 | 14500 | 0.1686 | 0.2450 |
| 0.378 | 5.03 | 15000 | 0.1731 | 0.2401 |
| 0.3792 | 5.19 | 15500 | 0.1676 | 0.2325 |
| 0.3855 | 5.36 | 16000 | 0.1740 | 0.2326 |
| 0.4029 | 5.53 | 16500 | 0.1674 | 0.2345 |
| 0.386 | 5.7 | 17000 | 0.1735 | 0.2280 |
| 0.3811 | 5.86 | 17500 | 0.1692 | 0.2258 |
| 0.3607 | 6.03 | 18000 | 0.1797 | 0.2279 |
| 0.3604 | 6.2 | 18500 | 0.1651 | 0.2206 |
| 0.3362 | 6.37 | 19000 | 0.1627 | 0.2199 |
| 0.3611 | 6.53 | 19500 | 0.1652 | 0.2172 |
| 0.3671 | 6.7 | 20000 | 0.1564 | 0.2140 |
| 0.3769 | 6.87 | 20500 | 0.1525 | 0.2101 |
| 0.3539 | 7.04 | 21000 | 0.1639 | 0.2096 |
| 0.3225 | 7.21 | 21500 | 0.1611 | 0.2087 |
| 0.3323 | 7.37 | 22000 | 0.1633 | 0.2008 |
| 0.3327 | 7.54 | 22500 | 0.1692 | 0.1975 |
| 0.3456 | 7.71 | 23000 | 0.1555 | 0.1991 |
| 0.3058 | 7.88 | 23500 | 0.1590 | 0.1959 |
| 0.3034 | 8.04 | 24000 | 0.1531 | 0.1973 |
| 0.2925 | 8.21 | 24500 | 0.1583 | 0.1978 |
| 0.2967 | 8.38 | 25000 | 0.1546 | 0.1906 |
| 0.2974 | 8.55 | 25500 | 0.1540 | 0.1869 |
| 0.3131 | 8.71 | 26000 | 0.1534 | 0.1850 |
| 0.3306 | 8.88 | 26500 | 0.1482 | 0.1844 |
| 0.2842 | 9.05 | 27000 | 0.1490 | 0.1854 |
| 0.2879 | 9.22 | 27500 | 0.1463 | 0.1799 |
| 0.27 | 9.38 | 28000 | 0.1454 | 0.1798 |
| 0.2874 | 9.55 | 28500 | 0.1504 | 0.1787 |
| 0.2757 | 9.72 | 29000 | 0.1512 | 0.1784 |
| 0.3017 | 9.89 | 29500 | 0.1484 | 0.1800 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
## Evaluation results
Evaluation was done with the [Common Voice 7.0 Finnish test split](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
To evaluate this model, run the `eval.py` script in this repository:
```bash
python3 eval.py --model_id aapot/wav2vec2-xlsr-300m-finnish --dataset mozilla-foundation/common_voice_7_0 --config fi --split test
```
This model (the third row of the table) achieves the following WER (Word Error Rate) and CER (Character Error Rate) results compared to our other models:
| | WER (with LM) | WER (without LM) | CER (with LM) | CER (without LM) |
|-----------------------------------------|---------------|------------------|---------------|------------------|
|aapot/wav2vec2-xlsr-1b-finnish-lm-v2 |**4.09** |**9.73** |**0.88** |**1.65** |
|aapot/wav2vec2-xlsr-1b-finnish-lm |5.65 |13.11 |1.20 |2.23 |
|aapot/wav2vec2-xlsr-300m-finnish-lm |8.16 |17.92 |1.97 |3.36 |
## Team Members
- Aapo Tanskanen, [Hugging Face profile](https://huggingface.co/aapot), [LinkedIn profile](https://www.linkedin.com/in/aapotanskanen/)
- Rasmus Toivanen, [Hugging Face profile](https://huggingface.co/RASMUS), [LinkedIn profile](https://www.linkedin.com/in/rasmustoivanen/)
Feel free to contact us for more details 🤗 | {"language": "fi", "license": "apache-2.0", "tags": ["automatic-speech-recognition", "fi", "finnish", "generated_from_trainer", "hf-asr-leaderboard", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_7_0"], "metrics": ["wer", "cer"], "model-index": [{"name": "wav2vec2-xlsr-300m-finnish", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 7", "type": "mozilla-foundation/common_voice_7_0", "args": "fi"}, "metrics": [{"type": "wer", "value": 17.92, "name": "Test WER"}, {"type": "cer", "value": 3.36, "name": "Test CER"}]}]}]} | aapot/wav2vec2-xlsr-300m-finnish | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fi",
"finnish",
"generated_from_trainer",
"hf-asr-leaderboard",
"robust-speech-event",
"dataset:mozilla-foundation/common_voice_7_0",
"arxiv:2111.09296",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | aaraki/marian-finetuned-kde4-en-to-fr-accelerate | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
translation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8559
- Bleu: 52.9456
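As a quick sanity check, a minimal inference sketch (the input sentence is an arbitrary example, not taken from the KDE4 data):

```python
from transformers import pipeline

translator = pipeline("translation", model="aaraki/marian-finetuned-kde4-en-to-fr")
print(translator("This plugin allows you to configure the default settings.")[0]["translation_text"])
```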
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
| {"license": "apache-2.0", "tags": ["translation", "generated_from_trainer"], "datasets": ["kde4"], "metrics": ["bleu"], "model-index": [{"name": "marian-finetuned-kde4-en-to-fr", "results": [{"task": {"type": "text2text-generation", "name": "Sequence-to-sequence Language Modeling"}, "dataset": {"name": "kde4", "type": "kde4", "args": "en-fr"}, "metrics": [{"type": "bleu", "value": 52.94560734092563, "name": "Bleu"}]}]}]} | aaraki/marian-finetuned-kde4-en-to-fr | null | [
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"dataset:kde4",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | aaraki/my-new-shiny-tokenizer | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-classification | transformers | {} | aarnphm/finetune_emotion_distilroberta | null | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | aaronrelph/Test | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
automatic-speech-recognition | transformers | ---
datasets:
- common_voice
language:
- ur
library_name: transformers
license: mit
metrics:
- wer
model-index:
- name: wav2vec2-xls-r-300m-Urdu
  results:
  - task:
      type: automatic-speech-recognition
    dataset:
      name: common_voice
      type: common_voice
      args: ur
    metrics:
    - type: wer
      value: 0.2459
    - type: cer
      value: 0.0691
tags:
- audio
- automatic-speech-recognition
- speech
---
Finetuning of [Facebook's 300M model](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on Common Voice 8.0 Urdu dataset | {} | aasem/wav2vec2-xls-r-300m-Urdu | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-generation | transformers |
# Harry Potter DialoGPT Model | {"tags": ["conversational"]} | aashutosh2102/DialoGPT-smalll-harrypotter | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | ab20211112/distilbert-base-cased-distilled-squad-finetuned-squad | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
question-answering | transformers | {} | ab20211112/distilbert-base-uncased-finetuned-squad | null | [
"transformers",
"pytorch",
"distilbert",
"question-answering",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | ab20211112/finetuned-subsidies | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | ab20211112/gelectra-base-germanquad-finetuned-squad | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | abais/bert-base-uncased-finetuned-swag | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text2text-generation | transformers | {} | abanoub1412/finetuning | null | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | abanoub1412/myTuner | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
multiple-choice | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# c4-aristo-roberta-large
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0332
- Accuracy: 0.7370
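The downstream task is not documented here; purely as an illustration, a minimal sketch of scoring answer options with this checkpoint (the question and choices are made-up examples):

```python
import torch
from transformers import AutoModelForMultipleChoice, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("abarbosa/c4-aristo-roberta-large")
model = AutoModelForMultipleChoice.from_pretrained("abarbosa/c4-aristo-roberta-large")

question = "Which gas do green plants absorb from the atmosphere?"
choices = ["Oxygen", "Carbon dioxide", "Nitrogen", "Helium"]

# encode one (question, choice) pair per option, then add a batch dimension
enc = tokenizer([question] * len(choices), choices, return_tensors="pt", padding=True)
inputs = {k: v.unsqueeze(0) for k, v in enc.items()}  # shape: (1, num_choices, seq_len)

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_choices)

print(choices[logits.argmax(dim=-1).item()])
```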
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.8204 | 1.0 | 140 | 0.7246 | 0.7171 |
| 0.5512 | 2.0 | 280 | 0.7441 | 0.7312 |
| 0.3437 | 3.0 | 420 | 0.8940 | 0.7363 |
| 0.291 | 4.0 | 560 | 1.0332 | 0.7370 |
### Framework versions
- Transformers 4.6.1
- Pytorch 1.10.0.dev20210620+cu113
- Datasets 1.6.2
- Tokenizers 0.10.2
| {"metrics": ["accuracy"]} | abarbosa/c4-aristo-roberta-large | null | [
"transformers",
"pytorch",
"roberta",
"multiple-choice",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-generation | transformers | {} | abbas/gpt2-horror-stories | null | [
"transformers",
"pytorch",
"jax",
"safetensors",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | abby711/FaceRestoration | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | abcdef/trial | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | abcdefg/trial | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
audio-classification | transformers |
**Context**
Many of our best ideas come to us in moments of relaxation, such as taking a shower. However, once we leave the shower, the brilliant idea is often forgotten. What if we could avoid forgetting, and collect those ideas while still in the shower?
**What is the Shower Ideas concept?**
This is an app concept that detects when someone is taking a shower and asks "do you have any ideas?". The person can then speak the idea aloud while showering, and the app will also ask follow-up questions after the shower.
**Abstract about the model**
This model was trained on top of *facebook/wav2vec2-base-960h* (a model pretrained on 960 hours of Librispeech 16 kHz sampled speech audio) in order to classify audio input into `shower` or `no_shower`.
**Dataset**
The SHD-2 dataset is a labeled collection of 2260 audio recordings of shower and no shower sounds.
The dataset consists of 6-second-long recordings organized into 2 classes (with 1130 examples per class).
# Usage
In order to use the model in your Python script just copy the following code:
```python
from transformers import pipeline
audio_input = 'example.wav'  # path to the audio clip to classify
classifier = pipeline("audio-classification", model="abdelhalim/Shower_Sound_Recognition")
labels = classifier(audio_input)
print(labels)
``` | {"tags": ["audio", "audio-classificaiton", "shower detection"], "datasets": ["SHD-2"], "metrics": ["Accuracy"]} | abdelhalim/Shower_Sound_Recognition | null | [
"transformers",
"pytorch",
"wav2vec2",
"audio-classification",
"audio",
"audio-classificaiton",
"shower detection",
"dataset:SHD-2",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3038
- Accuracy: 0.9465
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 318 | 2.8460 | 0.7506 |
| 3.322 | 2.0 | 636 | 1.4301 | 0.8532 |
| 3.322 | 3.0 | 954 | 0.7377 | 0.9152 |
| 1.2296 | 4.0 | 1272 | 0.4784 | 0.9316 |
| 0.449 | 5.0 | 1590 | 0.3730 | 0.9390 |
| 0.449 | 6.0 | 1908 | 0.3367 | 0.9429 |
| 0.2424 | 7.0 | 2226 | 0.3163 | 0.9468 |
| 0.1741 | 8.0 | 2544 | 0.3074 | 0.9452 |
| 0.1741 | 9.0 | 2862 | 0.3054 | 0.9458 |
| 0.1501 | 10.0 | 3180 | 0.3038 | 0.9465 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["clinc_oos"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased-distilled-clinc", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "clinc_oos", "type": "clinc_oos", "args": "plus"}, "metrics": [{"type": "accuracy", "value": 0.9464516129032258, "name": "Accuracy"}]}]}]} | abdelkader/distilbert-base-uncased-distilled-clinc | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7713
- Accuracy: 0.9174
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 318 | 3.2831 | 0.7426 |
| 3.785 | 2.0 | 636 | 1.8739 | 0.8335 |
| 3.785 | 3.0 | 954 | 1.1525 | 0.8926 |
| 1.6894 | 4.0 | 1272 | 0.8569 | 0.91 |
| 0.897 | 5.0 | 1590 | 0.7713 | 0.9174 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["clinc_oos"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased-finetuned-clinc", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "clinc_oos", "type": "clinc_oos", "args": "plus"}, "metrics": [{"type": "accuracy", "value": 0.9174193548387096, "name": "Accuracy"}]}]}]} | abdelkader/distilbert-base-uncased-finetuned-clinc | null | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2162
- Accuracy: 0.9215
- F1: 0.9216
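In the absence of further documentation, a minimal inference sketch (the example sentence is arbitrary; the exact label names come from this checkpoint's config, which follows the `emotion` dataset):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="abdelkader/distilbert-base-uncased-finetuned-emotion")
print(classifier("I can't wait to see you again!"))
# expected output shape: [{'label': ..., 'score': ...}]
```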
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8007 | 1.0 | 250 | 0.3082 | 0.907 | 0.9045 |
| 0.2438 | 2.0 | 500 | 0.2162 | 0.9215 | 0.9216 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["emotion"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.9215, "name": "Accuracy"}, {"type": "f1", "value": 0.9215604730468001, "name": "F1"}]}]}]} | abdelkader/distilbert-base-uncased-finetuned-emotion | null | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-samsum
This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4844
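In the absence of further documentation, a minimal inference sketch (the dialogue is a made-up example in the style of SAMSum):

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="abdelkader/pegasus-samsum")

dialogue = (
    "Anna: Are we still on for lunch tomorrow?\n"
    "Ben: Yes, 12:30 at the usual place.\n"
    "Anna: Perfect, see you there!"
)
print(summarizer(dialogue)[0]["summary_text"])
```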
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.6936 | 0.54 | 500 | 1.4844 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
| {"tags": ["generated_from_trainer"], "datasets": ["samsum"], "model-index": [{"name": "pegasus-samsum", "results": []}]} | abdelkader/pegasus-samsum | null | [
"transformers",
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"dataset:samsum",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | abdelkader/xlm-roberta-base-finetuned-panx-de | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | abdinoor/bert-base-uncased | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
fill-mask | transformers |
# Soraberta: Unsupervised Language Model Pre-training for Wolof
**bert-base-wolof** is a bert-base model pretrained on the Wolof language.
## Soraberta models
| Model name | Number of layers | Attention Heads | Embedding Dimension | Total Parameters |
| :------: | :---: | :---: | :---: | :---: |
| `bert-base` | 6 | 12 | 514 | 56,931,622 |
## Using Soraberta with Hugging Face's Transformers
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='abdouaziiz/bert-base-wolof')
>>> unmasker("kuy yoot du [MASK].")
[{'sequence': '[CLS] kuy yoot du seqet. [SEP]',
'score': 0.09505125880241394,
'token': 13578},
{'sequence': '[CLS] kuy yoot du daw. [SEP]',
'score': 0.08882280439138412,
'token': 679},
{'sequence': '[CLS] kuy yoot du yoot. [SEP]',
'score': 0.057790059596300125,
'token': 5117},
{'sequence': '[CLS] kuy yoot du seqat. [SEP]',
'score': 0.05671025067567825,
'token': 4992},
{'sequence': '[CLS] kuy yoot du yaqu. [SEP]',
'score': 0.0469999685883522,
'token': 1735}]
```
## Training data
The data sources are [Bible OT](http://biblewolof.com/), [WOLOF-ONLINE](http://www.wolof-online.com/), and [ALFFA_PUBLIC](https://github.com/getalp/ALFFA_PUBLIC/tree/master/ASR/WOLOF).
## Contact
Please contact [email protected] for any question, feedback or request. | {"language": "wo", "tags": ["bert", "language-model", "wo", "wolof"]} | abdouaziiz/bert-base-wolof | null | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"language-model",
"wo",
"wolof",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
fill-mask | transformers |
# Soraberta: Unsupervised Language Model Pre-training for Wolof
**Soraberta** is a roberta-base model pretrained on the Wolof language. Roberta was introduced in
## Soraberta models
| Model name | Number of layers | Attention Heads | Embedding Dimension | Total Parameters |
| :------: | :---: | :---: | :---: | :---: |
| `soraberta-base` | 6 | 12 | 514 | 83 M |
## Using Soraberta with Hugging Face's Transformers
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='abdouaziiz/soraberta')
>>> unmasker("juroom naari jullit man nanoo boole jend aw nag walla <mask>.")
[{'sequence': 'juroom naari jullit man nanoo boole jend aw nag walla gileem.',
'score': 0.9783930778503418,
'token': 4621,
'token_str': ' gileem'},
{'sequence': 'juroom naari jullit man nanoo boole jend aw nag walla jend.',
'score': 0.009271537885069847,
'token': 2155,
'token_str': ' jend'},
{'sequence': 'juroom naari jullit man nanoo boole jend aw nag walla aw.',
'score': 0.0027585660573095083,
'token': 704,
'token_str': ' aw'},
{'sequence': 'juroom naari jullit man nanoo boole jend aw nag walla pel.',
'score': 0.001120452769100666,
'token': 1171,
'token_str': ' pel'},
{'sequence': 'juroom naari jullit man nanoo boole jend aw nag walla juum.',
'score': 0.0005133090307936072,
'token': 5820,
'token_str': ' juum'}]
```
## Training data
The data sources are [Bible OT](http://biblewolof.com/) and [WOLOF-ONLINE](http://www.wolof-online.com/).
## Contact
Please contact [email protected] for any question, feedback or request. | {"language": "wo", "tags": ["roberta", "language-model", "wo", "wolof"]} | abdouaziiz/soraberta | null | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"language-model",
"wo",
"wolof",
"arxiv:1907.11692",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
automatic-speech-recognition | transformers | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-wolof-lm
Wolof is a language spoken in Senegal and neighbouring countries. It is not well represented in NLP, and there are few resources for Wolof text and speech; this repository is our contribution towards filling that gap.
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m), combined with a language model, fine-tuned on the largest available Wolof speech dataset from [ALFFA_PUBLIC](https://github.com/besacier/ALFFA_PUBLIC/tree/master/ASR/WOLOF).
It achieves the following results on the evaluation set:
- Loss: 0.367826
- Wer: 0.212565
## Model description
The training data amounts to 16.8 hours of audio, which we have divided into 10,000 audio files for training and 3,339 for testing.
## Training and evaluation data
We evaluate and log the model every 1,500 steps, and save a checkpoint every 33,340 steps.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-4
- train_batch_size: 3
- eval_batch_size: 8
- total_train_batch_size: 64
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10.0
### Training results
| Step | Training Loss | Validation Loss | Wer |
|:-------:|:-------------:|:---------------:|:------:|
| 1500  | 2.854200 | 0.642243 | 0.543964 |
| 3000  | 0.599200 | 0.468138 | 0.429549 |
| 4500  | 0.468300 | 0.433436 | 0.405644 |
| 6000  | 0.427000 | 0.384873 | 0.344150 |
| 7500  | 0.377000 | 0.374003 | 0.323892 |
| 9000  | 0.337000 | 0.363674 | 0.306189 |
| 10500 | 0.302400 | 0.349884 | 0.283908 |
| 12000 | 0.264100 | 0.344104 | 0.277120 |
| 13500 | 0.254000 | 0.341820 | 0.271316 |
| 15000 | 0.208400 | 0.326502 | 0.260695 |
| 16500 | 0.203500 | 0.326209 | 0.250313 |
| 18000 | 0.159800 | 0.323539 | 0.239851 |
| 19500 | 0.158200 | 0.310694 | 0.230028 |
| 21000 | 0.132800 | 0.338318 | 0.229283 |
| 22500 | 0.112800 | 0.336765 | 0.224145 |
| 24000 | 0.103600 | 0.350208 | 0.227073 |
| 25500 | 0.091400 | 0.353609 | 0.221589 |
| 27000 | 0.084400 | 0.367826 | 0.212565 |
## Usage
The model can be used directly as follows:
```python
import re
import warnings

import librosa
import pandas as pd
import torch
from transformers import AutoProcessor, AutoModelForCTC
from datasets import Dataset, DatasetDict, load_metric

wer_metric = load_metric("wer")

wolof = pd.read_csv('Test.csv')  # 'Test.csv' has the columns `file` and `transcription`
wolof = DatasetDict({'test': Dataset.from_pandas(wolof)})
chars_to_ignore_regex = '[\"\?\.\!\-\;\:\(\)\,]'
def remove_special_characters(batch):
batch["transcription"] = re.sub(chars_to_ignore_regex, '', batch["transcription"]).lower() + " "
return batch
wolof = wolof.map(remove_special_characters)
processor = AutoProcessor.from_pretrained("abdouaziiz/wav2vec2-xls-r-300m-wolof-lm")
model = AutoModelForCTC.from_pretrained("abdouaziiz/wav2vec2-xls-r-300m-wolof-lm")
warnings.filterwarnings("ignore")
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = librosa.load(batch["file"], sr = 16000)
batch["speech"] = speech_array.astype('float16')
batch["sampling_rate"] = sampling_rate
batch["target_text"] = batch["transcription"]
return batch
wolof = wolof.map(speech_file_to_array_fn, remove_columns=wolof.column_names["test"], num_proc=1)
def map_to_result(batch):
model.to("cuda")
input_values = processor(
batch["speech"],
sampling_rate=batch["sampling_rate"],
return_tensors="pt"
).input_values.to("cuda")
with torch.no_grad():
logits = model(input_values).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_str"] = processor.batch_decode(pred_ids)[0]
return batch
results = wolof["test"].map(map_to_result)
print("Test WER: {:.3f}".format(wer_metric.compute(predictions=results["pred_str"], references=results["transcription"])))
```
## PS:
The results obtained could be further improved by using:
- Wav2vec2 + a language model.
- A spellchecker built from the text of the data.
- Sentence Edit Distance | {"license": "mit", "tags": ["automatic-speech-recognition", "asr", "pytorch", "wav2vec2", "wolof", "wo"]} | abdouaziiz/wav2vec2-xls-r-300m-wolof-lm | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"asr",
"wolof",
"wo",
"license:mit",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | abdulbaseer/will_lliw_gpt2 | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
token-classification | transformers |
# Arabic NER | {"language": "ar", "tags": ["ner", "ar", "classification"], "datasets": ["wikiann"], "pipeline_tag": "token-classification", "task_ids": ["named-entity-recognition"], "widget": [{"text": "\u0643\u0631\u064a\u0633\u062a\u064a\u0627\u0646\u0648 \u0631\u0648\u0646\u0627\u0644\u062f\u0648 \u064a\u0644\u0639\u0628 \u0645\u0639 \u0646\u0627\u062f\u064a \u064a\u0648\u0641\u0646\u062a\u0648\u0633", "example_title": "Sentence 1"}, {"text": "\u062a\u062e\u0631\u062c \u0623\u062d\u0645\u062f \u0645\u0646 \u0627\u0644\u062c\u0627\u0645\u0639\u0629 \u0627\u0644\u0623\u0645\u0631\u064a\u0643\u064a\u0629 \u0641\u064a \u0627\u0644\u0634\u0627\u0631\u0642\u0629 \u0627\u0644\u0634\u0647\u0631 \u0627\u0644\u0645\u0627\u0636\u064a", "example_title": "Sentence 2"}, {"text": "\u0644\u0627 \u064a\u0632\u0627\u0644 \u062f\u064a\u0628\u0627\u0644\u0627 \u064a\u0644\u0639\u0628 \u0644\u0641\u0631\u064a\u0642 \u064a\u0648\u0641\u0646\u062a\u0648\u0633", "example_title": "Sentence 3"}]} | abdusah/arabert-ner | null | [
"transformers",
"pytorch",
"bert",
"token-classification",
"ner",
"ar",
"classification",
"dataset:wikiann",
"doi:10.57967/hf/0271",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | abelli/bert-base-uncased-finetuned-ner | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | abelsaug/albert-xxl_test | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | abeppu/distilbert-base-uncased-finetuned-cola | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | abhi-gm/distilbert-base-uncased-finetuned-squad | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
fill-mask | transformers |
## Dataset
English Bible Translation Dataset (https://www.kaggle.com/oswinrh/bible)
*NOTE:* It is `roberta-base` fine-tuned with the MLM objective for 1 epoch on the 7 `.csv` files mentioned above, which consist of around 5.5M words.
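A minimal fill-mask sketch for local use (the sentence is an arbitrary example; RoBERTa uses `<mask>` as its mask token):

```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="abhi1nandy2/Bible-roberta-base")
print(unmasker("In the beginning God created the heaven and the <mask>."))
```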
## Citation
If you use this model in your work, please add the following citation -
```
@inproceedings{nandy-etal-2021-cs60075,
title = "cs60075{\_}team2 at {S}em{E}val-2021 Task 1 : Lexical Complexity Prediction using Transformer-based Language Models pre-trained on various text corpora",
author = "Nandy, Abhilash and
Adak, Sayantan and
Halder, Tanurima and
Pokala, Sai Mahesh",
booktitle = "Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021)",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.semeval-1.87",
doi = "10.18653/v1/2021.semeval-1.87",
pages = "678--682",
abstract = "The main contribution of this paper is to fine-tune transformer-based language models pre-trained on several text corpora, some being general (E.g., Wikipedia, BooksCorpus), some being the corpora from which the CompLex Dataset was extracted, and others being from other specific domains such as Finance, Law, etc. We perform ablation studies on selecting the transformer models and how their individual complexity scores are aggregated to get the resulting complexity scores. Our method achieves a best Pearson Correlation of 0.784 in sub-task 1 (single word) and 0.836 in sub-task 2 (multiple word expressions).",
}
```
| {"language": "en", "tags": ["English", "Bible"], "dataset": ["English Bible Translation Dataset", {"Link": "https://www.kaggle.com/oswinrh/bible"}], "inference": false} | abhi1nandy2/Bible-roberta-base | null | [
"transformers",
"pytorch",
"jax",
"roberta",
"fill-mask",
"English",
"Bible",
"en",
"autotrain_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
fill-mask | transformers |
Refer to https://aclanthology.org/2021.semeval-1.87/
## Citation
If you use this model in your work, please add the following citation -
```
@inproceedings{nandy-etal-2021-cs60075,
title = "cs60075{\_}team2 at {S}em{E}val-2021 Task 1 : Lexical Complexity Prediction using Transformer-based Language Models pre-trained on various text corpora",
author = "Nandy, Abhilash and
Adak, Sayantan and
Halder, Tanurima and
Pokala, Sai Mahesh",
booktitle = "Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021)",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.semeval-1.87",
doi = "10.18653/v1/2021.semeval-1.87",
pages = "678--682",
abstract = "The main contribution of this paper is to fine-tune transformer-based language models pre-trained on several text corpora, some being general (E.g., Wikipedia, BooksCorpus), some being the corpora from which the CompLex Dataset was extracted, and others being from other specific domains such as Finance, Law, etc. We perform ablation studies on selecting the transformer models and how their individual complexity scores are aggregated to get the resulting complexity scores. Our method achieves a best Pearson Correlation of 0.784 in sub-task 1 (single word) and 0.836 in sub-task 2 (multiple word expressions).",
}
```
| {"language": ["English"], "tags": ["CRAFT", "roberta"], "datasets": ["CRAFT BioNLP Corpus"]} | abhi1nandy2/Craft-bionlp-roberta-base | null | [
"transformers",
"pytorch",
"jax",
"roberta",
"fill-mask",
"CRAFT",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
fill-mask | transformers |
Refer to https://aclanthology.org/2021.findings-emnlp.392/ for the paper and https://sites.google.com/view/emanualqa/home for the project website
## Citation
Please cite the work if you would like to use it.
```
@inproceedings{nandy-etal-2021-question-answering,
title = "Question Answering over Electronic Devices: A New Benchmark Dataset and a Multi-Task Learning based {QA} Framework",
author = "Nandy, Abhilash and
Sharma, Soumya and
Maddhashiya, Shubham and
Sachdeva, Kapil and
Goyal, Pawan and
Ganguly, NIloy",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2021",
month = nov,
year = "2021",
address = "Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-emnlp.392",
doi = "10.18653/v1/2021.findings-emnlp.392",
pages = "4600--4609",
abstract = "Answering questions asked from instructional corpora such as E-manuals, recipe books, etc., has been far less studied than open-domain factoid context-based question answering. This can be primarily attributed to the absence of standard benchmark datasets. In this paper, we meticulously create a large amount of data connected with E-manuals and develop a suitable algorithm to exploit it. We collect E-Manual Corpus, a huge corpus of 307,957 E-manuals, and pretrain RoBERTa on this large corpus. We create various benchmark QA datasets which include question answer pairs curated by experts based upon two E-manuals, real user questions from Community Question Answering Forum pertaining to E-manuals etc. We introduce EMQAP (E-Manual Question Answering Pipeline) that answers questions pertaining to electronics devices. Built upon the pretrained RoBERTa, it harbors a supervised multi-task learning framework which efficiently performs the dual tasks of identifying the section in the E-manual where the answer can be found and the exact answer span within that section. For E-Manual annotated question-answer pairs, we show an improvement of about 40{\%} in ROUGE-L F1 scores over most competitive baseline. We perform a detailed ablation study and establish the versatility of EMQAP across different circumstances. The code and datasets are shared at https://github.com/abhi1nandy2/EMNLP-2021-Findings, and the corresponding project website is https://sites.google.com/view/emanualqa/home.",
}
``` | {"language": ["English"], "tags": ["EManuals", "customer support", "QA", "bert"]} | abhi1nandy2/EManuals_BERT | null | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"EManuals",
"customer support",
"QA",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
feature-extraction | transformers |
Refer to https://aclanthology.org/2021.findings-emnlp.392/ for the paper and https://sites.google.com/view/emanualqa/home for the project website
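A minimal sketch of using this checkpoint as a feature extractor is given below; the `feature-extraction` pipeline call and the mean pooling are assumptions for illustration and are not part of the original card.
```python
import numpy as np
from transformers import pipeline

# RoBERTa pre-trained on the E-Manuals corpus, used here to embed a sentence
extractor = pipeline("feature-extraction", model="abhi1nandy2/EManuals_RoBERTa")
token_vectors = np.array(extractor("How do I reset the router?")[0])  # (num_tokens, hidden_size)
sentence_embedding = token_vectors.mean(axis=0)  # simple mean pooling
print(sentence_embedding.shape)
```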
## Citation
Please cite the work if you would like to use it.
```
@inproceedings{nandy-etal-2021-question-answering,
title = "Question Answering over Electronic Devices: A New Benchmark Dataset and a Multi-Task Learning based {QA} Framework",
author = "Nandy, Abhilash and
Sharma, Soumya and
Maddhashiya, Shubham and
Sachdeva, Kapil and
Goyal, Pawan and
Ganguly, Niloy",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2021",
month = nov,
year = "2021",
address = "Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-emnlp.392",
doi = "10.18653/v1/2021.findings-emnlp.392",
pages = "4600--4609",
abstract = "Answering questions asked from instructional corpora such as E-manuals, recipe books, etc., has been far less studied than open-domain factoid context-based question answering. This can be primarily attributed to the absence of standard benchmark datasets. In this paper, we meticulously create a large amount of data connected with E-manuals and develop a suitable algorithm to exploit it. We collect E-Manual Corpus, a huge corpus of 307,957 E-manuals, and pretrain RoBERTa on this large corpus. We create various benchmark QA datasets which include question answer pairs curated by experts based upon two E-manuals, real user questions from Community Question Answering Forum pertaining to E-manuals etc. We introduce EMQAP (E-Manual Question Answering Pipeline) that answers questions pertaining to electronics devices. Built upon the pretrained RoBERTa, it harbors a supervised multi-task learning framework which efficiently performs the dual tasks of identifying the section in the E-manual where the answer can be found and the exact answer span within that section. For E-Manual annotated question-answer pairs, we show an improvement of about 40{\%} in ROUGE-L F1 scores over most competitive baseline. We perform a detailed ablation study and establish the versatility of EMQAP across different circumstances. The code and datasets are shared at https://github.com/abhi1nandy2/EMNLP-2021-Findings, and the corresponding project website is https://sites.google.com/view/emanualqa/home.",
}
``` | {"language": ["English"], "tags": ["EManuals", "customer support", "QA", "roberta"]} | abhi1nandy2/EManuals_RoBERTa | null | [
"transformers",
"pytorch",
"roberta",
"feature-extraction",
"EManuals",
"customer support",
"QA",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
fill-mask | transformers |
Refer to https://aclanthology.org/2021.semeval-1.87/
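A minimal masked-LM usage sketch for this checkpoint is given below; the `fill-mask` pipeline call and the example sentence are assumptions for illustration and are not part of the original card.
```python
from transformers import pipeline

# RoBERTa pre-trained on the Europarl corpus; RoBERTa models use <mask> as the mask token
fill_mask = pipeline("fill-mask", model="abhi1nandy2/Europarl-roberta-base")
print(fill_mask("The European Parliament adopted the <mask> yesterday."))
```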
## Citation
If you use this model in your work, please add the following citation -
```
@inproceedings{nandy-etal-2021-cs60075,
title = "cs60075{\_}team2 at {S}em{E}val-2021 Task 1 : Lexical Complexity Prediction using Transformer-based Language Models pre-trained on various text corpora",
author = "Nandy, Abhilash and
Adak, Sayantan and
Halder, Tanurima and
Pokala, Sai Mahesh",
booktitle = "Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021)",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.semeval-1.87",
doi = "10.18653/v1/2021.semeval-1.87",
pages = "678--682",
abstract = "The main contribution of this paper is to fine-tune transformer-based language models pre-trained on several text corpora, some being general (E.g., Wikipedia, BooksCorpus), some being the corpora from which the CompLex Dataset was extracted, and others being from other specific domains such as Finance, Law, etc. We perform ablation studies on selecting the transformer models and how their individual complexity scores are aggregated to get the resulting complexity scores. Our method achieves a best Pearson Correlation of 0.784 in sub-task 1 (single word) and 0.836 in sub-task 2 (multiple word expressions).",
}
```
| {"language": ["English"], "tags": ["Europarl", "roberta"], "datasets": ["Europarl"]} | abhi1nandy2/Europarl-roberta-base | null | [
"transformers",
"pytorch",
"jax",
"roberta",
"fill-mask",
"Europarl",
"dataset:Europarl",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | abhibisht89/neural-search-engine-model | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
token-classification | transformers |
spanbert-large-cased fine-tuned for <b>"Adverse drug reaction"</b> and <b>"Drug"</b> span extraction.
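A minimal inference sketch is given below; the `ner` pipeline call with `aggregation_strategy="simple"` is an assumption for illustration (it requires a recent version of transformers) and is not part of the original card. The example sentence is taken from this card's widget examples.
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

model_id = "abhibisht89/spanbert-large-cased-finetuned-ade_corpus_v2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

# Group sub-word tokens into complete "Adverse drug reaction" / "Drug" spans
ner = pipeline("ner", model=model, tokenizer=tokenizer, aggregation_strategy="simple")
print(ner("Having fever after taking paracetamol."))
```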
<b>Details of spanbert-large-cased:</b>
https://huggingface.co/SpanBERT/spanbert-large-cased
<b>Details of the downstream task (Adverse drug reaction and Drug Extraction) - Dataset</b>
https://huggingface.co/datasets/ade_corpus_v2 | {"language": "en", "tags": ["spanbert"], "datasets": ["ade_corpus_v2"], "widget": [{"text": "Having fever after taking paracetamol.", "example_title": "NER"}, {"text": "Birth defects associated with thalidomide.", "example_title": "NER"}, {"text": "Deafness and kidney failure associated with gentamicin (an antibiotic).", "example_title": "NER"}, {"text": "Bleeding of the intestine associated with aspirin therapy.", "example_title": "NER"}]} | abhibisht89/spanbert-large-cased-finetuned-ade_corpus_v2 | null | [
"transformers",
"pytorch",
"bert",
"token-classification",
"spanbert",
"en",
"dataset:ade_corpus_v2",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | abhiii/qna | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
question-answering | transformers | # Dataset
---
datasets:
- covid_qa_deepset
---
COVID-19 question answering data obtained from [covid_qa_deepset](https://huggingface.co/datasets/covid_qa_deepset).
# Original Repository
The repository for the fine-tuning, inference and evaluation scripts can be found [here](https://github.com/abhijithneilabraham/Covid-QA).
# Model in action
```
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering
tokenizer = AutoTokenizer.from_pretrained("abhijithneilabraham/longformer_covid_qa")
model = AutoModelForQuestionAnswering.from_pretrained("abhijithneilabraham/longformer_covid_qa")
question = "In this way, what do the mRNA-destabilising RBPs constitute ?"
text = """
In this way, mRNA-destabilising RBPs constitute a 'brake' on the immune system, which may ultimately be toggled therapeutically. I anticipate continued efforts in this area will lead to new methods of regaining control over inflammation in autoimmunity, selectively enhancing immunity in immunotherapy, and modulating RNA synthesis and virus replication during infection.
Another mRNA under post-transcriptional regulation by Regnase-1 and Roquin is Furin, which encodes a conserved proprotein convertase crucial in human health and disease. Furin, along with other PCSK family members, is widely implicated in immune regulation, cancer and the entry, maturation or release of a broad array of evolutionarily diverse viruses including human papillomavirus (HPV), influenza (IAV), Ebola (EboV), dengue (DenV) and human immunodeficiency virus (HIV). Here, Braun and Sauter review the roles of furin in these processes, as well as the history and future of furin-targeting therapeutics. 7 They also discuss their recent work revealing how two IFN-cinducible factors exhibit broad-spectrum inhibition of IAV, measles (MV), zika (ZikV) and HIV by suppressing furin activity. 8 Over the coming decade, I expect to see an ever-finer spatiotemporal resolution of host-oriented therapies to achieve safe, effective and broad-spectrum yet costeffective therapies for clinical use.
The increasing abundance of affordable, sensitive, high-throughput genome sequencing technologies has led to a recent boom in metagenomics and the cataloguing of the microbiome of our world. The MinION nanopore sequencer is one of the latest innovations in this space, enabling direct sequencing in a miniature form factor with only minimal sample preparation and a consumer-grade laptop computer. Nakagawa and colleagues here report on their latest experiments using this system, further improving its performance for use in resource-poor contexts for meningitis diagnoses. 9 While direct sequencing of viral genomic RNA is challenging, this system was recently used to directly sequence an RNA virus genome (IAV) for the first time. 10 I anticipate further improvements in the performance of such devices over the coming decade will transform virus surveillance efforts, the importance of which was underscored by the recent EboV and novel coronavirus (nCoV / COVID-19) outbreaks, enabling rapid deployment of antiviral treatments that take resistance-conferring mutations into account.
Decades of basic immunology research have provided a near-complete picture of the main armaments in the human antiviral arsenal. Nevertheless, this focus on mammalian defences and pathologies has sidelined examination of the types and roles of viruses and antiviral defences that exist throughout our biosphere. One case in point is the CRISPR/Cas antiviral immune system of prokaryotes, which is now repurposed as a revolutionary gene-editing biotechnology in plants and animals. 11 Another is the ancient lineage of nucleocytosolic large DNA viruses (NCLDVs), which are emerging human pathogens that possess enormous genomes of up to several megabases in size encoding hundreds of proteins with unique and unknown functions. 12 Moreover, hundreds of human-and avian-infective viruses such as IAV strain H5N1 are known, but recent efforts indicate the true number may be in the millions and many harbour zoonotic potential. 13 It is increasingly clear that host-virus interactions have generated truly vast yet poorly understood and untapped biodiversity. Closing this Special Feature, Watanabe and Kawaoka elaborate on neo-virology, an emerging field engaged in cataloguing and characterising this biodiversity through a global consortium. 14 I predict these efforts will unlock a vast wealth of currently unexplored biodiversity, leading to biotechnologies and treatments that leverage the host-virus interactions developed throughout evolution.
When biomedical innovations fall into the 'Valley of Death', patients who are therefore not reached all too often fall with them. Being entrusted with the resources and expectation to conceive, deliver and communicate dividends to society is both cherished and eagerly pursued at every stage of our careers. Nevertheless, the road to research translation is winding and is built on a foundation of basic research. Supporting industry-academia collaboration and nurturing talent and skills in the Indo-Pacific region are two of the four pillars of the National Innovation and Science Agenda. 2 These frame Australia's Medical Research and Innovation Priorities, which include antimicrobial resistance, global health and health security, drug repurposing and translational research infrastructure, 15 capturing many of the key elements of this CTI Special Feature. Establishing durable international relationships that integrate diverse expertise is essential to delivering these outcomes. To this end, NHMRC has recently taken steps under the International Engagement Strategy 16 to increase cooperation with its counterparts overseas. These include the Japan Agency for Medical Research and Development (AMED), tasked with translating the biomedical research output of that country. Given the reciprocal efforts at accelerating bilateral engagement currently underway, 17 the prospects for new areas of international cooperation and mobility have never been more exciting nor urgent. With the above in mind, all contributions to this CTI Special Feature I have selected from research presented by fellow invitees to the 2018 Awaji International Forum on Infection and Immunity (AIFII) and 2017 Consortium of Biological Sciences (ConBio) conferences in Japan. Both Australia and Japan have strong traditions in immunology and related disciplines, and I predict that the quantity, quality and importance of our bilateral cooperation will accelerate rapidly over the short to medium term. By expanding and cooperatively leveraging our respective research strengths, our efforts may yet solve the many pressing disease, cost and other sustainability issues of our time.
"""
encoding = tokenizer(question, text, return_tensors="pt")
input_ids = encoding["input_ids"]
# default is local attention everywhere
# the forward method will automatically set global attention on question tokens
attention_mask = encoding["attention_mask"]
outputs = model(input_ids, attention_mask=attention_mask)
start_scores, end_scores = outputs.start_logits, outputs.end_logits
all_tokens = tokenizer.convert_ids_to_tokens(input_ids[0].tolist())
answer_tokens = all_tokens[torch.argmax(start_scores) :torch.argmax(end_scores)+1]
answer = tokenizer.decode(tokenizer.convert_tokens_to_ids(answer_tokens))
# output => a 'brake' on the immune system
``` | {} | abhijithneilabraham/longformer_covid_qa | null | [
"transformers",
"pytorch",
"longformer",
"question-answering",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text2text-generation | transformers | {} | abhijithneilabraham/pubmed-summarisation-pegasus | null | [
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
sentence-similarity | sentence-transformers |
# abhijithneilabraham/stsb_multi_mt_distilbert-base-uncased
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('abhijithneilabraham/stsb_multi_mt_distilbert-base-uncased')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('abhijithneilabraham/stsb_multi_mt_distilbert-base-uncased')
model = AutoModel.from_pretrained('abhijithneilabraham/stsb_multi_mt_distilbert-base-uncased')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 360 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 25,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 900,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> | {"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "pipeline_tag": "sentence-similarity"} | abhijithneilabraham/stsb_multi_mt_distilbert-base-uncased | null | [
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"transformers",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | abhikbhatia/NolanBot | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text2text-generation | transformers | {} | abhilash1910/FinancialLongPegasus | null | [
"transformers",
"tf",
"pegasus",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
fill-mask | transformers | ## German NER Albert Model For Token Classification
This is a trained ALBERT model for token classification in German, trained on the [GermEval 2014](https://sites.google.com/site/germeval2014ner/) dataset, and can be used for inference.
## Model Specifications
- MAX_LENGTH=128
- MODEL='albert-base-v1'
- BATCH_SIZE=32
- NUM_EPOCHS=3
- SAVE_STEPS=750
- SEED=1
- SAVE_STEPS = 100
- LOGGING_STEPS = 100
- SEED = 42
### Usage Specifications
This model is trained with the TensorFlow backend and is compatible with the 'ner' pipeline of Hugging Face Transformers.
```python
from transformers import AutoTokenizer,TFAutoModelForTokenClassification
from transformers import pipeline
model=TFAutoModelForTokenClassification.from_pretrained('abhilash1910/albert-german-ner')
tokenizer=AutoTokenizer.from_pretrained('abhilash1910/albert-german-ner')
ner_model = pipeline('ner', model=model, tokenizer=tokenizer)
seq='Berlin ist die Hauptstadt von Deutschland'
ner_model(seq)
```
The TensorFlow version of ALBERT is used for training the model, and the output for the above-mentioned segment is as follows:
```
[{'entity': 'B-PERderiv',
'index': 1,
'score': 0.09580112248659134,
'word': '▁berlin'},
{'entity': 'B-ORGpart',
'index': 2,
'score': 0.08364498615264893,
'word': '▁is'},
{'entity': 'B-LOCderiv',
'index': 3,
'score': 0.07593920826911926,
'word': 't'},
{'entity': 'B-PERderiv',
'index': 4,
'score': 0.09574996680021286,
'word': '▁die'},
{'entity': 'B-LOCderiv',
'index': 5,
'score': 0.07097965478897095,
'word': '▁'},
{'entity': 'B-PERderiv',
'index': 6,
'score': 0.07122448086738586,
'word': 'haupt'},
{'entity': 'B-PERderiv',
'index': 7,
'score': 0.12397754937410355,
'word': 'stadt'},
{'entity': 'I-OTHderiv',
'index': 8,
'score': 0.0818650871515274,
'word': '▁von'},
{'entity': 'I-LOCderiv',
'index': 9,
'score': 0.08271490037441254,
'word': '▁'},
{'entity': 'B-LOCderiv',
'index': 10,
'score': 0.08616268634796143,
'word': 'deutschland'}]
```
## Resources
For all resources, please look into [huggingface](https://huggingface.com).
---
language:
- de
tags:
- Token Classification
license: apache-2.0
datasets:
- germeval_14
---
| {} | abhilash1910/albert-german-ner | null | [
"transformers",
"tf",
"albert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
question-answering | transformers | {"language": ["en"], "license": "apache-2.0", "datasets": ["squad_v2"], "model-index": [{"name": "abhilash1910/albert-squad-v2", "results": [{"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squad_v2", "type": "squad_v2", "config": "squad_v2", "split": "validation"}, "metrics": [{"type": "exact_match", "value": 23.6563, "name": "Exact Match", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZTE5ZTM2YzIwZjBhYjM0ZDUyNzBiMjg1YjZhMGJiMGViMjYzYjQ5ZmI4MGFkYmU4YjY1OTNjYzAwZWRlZjIwNSIsInZlcnNpb24iOjF9.jlvV8WRPSPKJm6UdApoh-dXcAOmLPtF5smsHt39RoO4sFzzbH6elUz5yPF5Lt9Yc2YDIl6c8JDsODqMxmsD0Bg"}, {"type": "f1", "value": 29.3808, "name": "F1", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiN2ZjYWRlYTI1NDkwYzNhMzM5YTg2NjZmODg0NjNkOGM3YjM2NTlkYjVhZWI0MzlmNjNkMTMxODlkNTY3ODBkMiIsInZlcnNpb24iOjF9.CR1MYeU3uqld9bbI8CLupMtote4WEG9fIq9enwhFJfVpChIT9BGKm86zaPmXHg0yBaNHgkMaEt_a-DaIdiEwAg"}]}]}]} | abhilash1910/albert-squad-v2 | null | [
"transformers",
"pytorch",
"safetensors",
"albert",
"question-answering",
"en",
"dataset:squad_v2",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
question-answering | transformers | # DistilBERT--SQuAD-v1
Training is done on the [SQuAD](https://huggingface.co/datasets/squad) dataset. The model can be accessed via [HuggingFace](https://huggingface.co/abhilash1910/distilbert-squadv1):
## Model Specifications
We have used the following parameters:
- Training Batch Size : 512
- Learning Rate : 3e-5
- Training Epochs : 0.75
- Sequence Length : 384
- Stride : 128
## Usage Specifications
```python
from transformers import AutoModelForQuestionAnswering,AutoTokenizer,pipeline
model=AutoModelForQuestionAnswering.from_pretrained('abhilash1910/distilbert-squadv1')
tokenizer=AutoTokenizer.from_pretrained('abhilash1910/distilbert-squadv1')
nlp_QA=pipeline('question-answering',model=model,tokenizer=tokenizer)
QA_inp={
'question': 'What is the fund price of Huggingface in NYSE?',
'context': 'Huggingface Co. has a total fund price of $19.6 million dollars'
}
result=nlp_QA(QA_inp)
result
```
The result is:
```bash
{'score': 0.38547369837760925,
'start': 42,
'end': 55,
'answer': '$19.6 million'}
```
---
language:
- en
license: apache-2.0
datasets:
- squad_v1
---
| {} | abhilash1910/distilbert-squadv1 | null | [
"transformers",
"pytorch",
"distilbert",
"question-answering",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
fill-mask | transformers | # Roberta Masked Language Model Trained On Financial Phrasebank Corpus
This is a Masked Language Model trained with [Roberta](https://huggingface.co/transformers/model_doc/roberta.html) on a Financial Phrasebank Corpus.
The model is built using Huggingface transformers.
The model can be found at: [Financial_Roberta](https://huggingface.co/abhilash1910/financial_roberta)
## Specifications
The corpus for training is taken from the Financial Phrasebank ([Malo et al.](https://www.researchgate.net/publication/251231107_Good_Debt_or_Bad_Debt_Detecting_Semantic_Orientations_in_Economic_Texts)).
## Model Specification
The model chosen for training is [Roberta](https://arxiv.org/abs/1907.11692) with the following specifications:
1. vocab_size=56000
2. max_position_embeddings=514
3. num_attention_heads=12
4. num_hidden_layers=6
5. type_vocab_size=1
This is trained by using RobertaConfig from the transformers package.
The model is trained for 10 epochs with a GPU batch size of 64.
## Usage Specifications
For using this model, we have to first import the AutoTokenizer and AutoModelWithLMHead modules from transformers.
After that, we have to specify the pre-trained model, which in this case is 'abhilash1910/financial_roberta', for both the tokenizer and the model.
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("abhilash1910/financial_roberta")
model = AutoModelWithLMHead.from_pretrained("abhilash1910/financial_roberta")
```
After this, the model will be downloaded; it will take some time to download all the model files.
For testing the model, we have to import the pipeline module from transformers and create a fill-mask pipeline for inference as follows:
```python
from transformers import pipeline
model_mask = pipeline('fill-mask', model='abhilash1910/financial_roberta')
model_mask("The company had a <mask> of 20% in 2020.")
```
Some of the examples are also provided with generic financial statements:
Example 1:
```python
model_mask("The company had a <mask> of 20% in 2020.")
```
Output:
```bash
[{'sequence': '<s>The company had a profit of 20% in 2020.</s>',
'score': 0.023112965747714043,
'token': 421,
'token_str': 'Ġprofit'},
{'sequence': '<s>The company had a loss of 20% in 2020.</s>',
'score': 0.021379893645644188,
'token': 616,
'token_str': 'Ġloss'},
{'sequence': '<s>The company had a year of 20% in 2020.</s>',
'score': 0.0185744296759367,
'token': 443,
'token_str': 'Ġyear'},
{'sequence': '<s>The company had a sales of 20% in 2020.</s>',
'score': 0.018143286928534508,
'token': 428,
'token_str': 'Ġsales'},
{'sequence': '<s>The company had a value of 20% in 2020.</s>',
'score': 0.015319528989493847,
'token': 776,
'token_str': 'Ġvalue'}]
```
Example 2:
```python
model_mask("The <mask> is listed under NYSE")
```
Output:
```bash
[{'sequence': '<s>The company is listed under NYSE</s>',
'score': 0.1566661298274994,
'token': 359,
'token_str': 'Ġcompany'},
{'sequence': '<s>The total is listed under NYSE</s>',
'score': 0.05542507395148277,
'token': 522,
'token_str': 'Ġtotal'},
{'sequence': '<s>The value is listed under NYSE</s>',
'score': 0.04729423299431801,
'token': 776,
'token_str': 'Ġvalue'},
{'sequence': '<s>The order is listed under NYSE</s>',
'score': 0.02533523552119732,
'token': 798,
'token_str': 'Ġorder'},
{'sequence': '<s>The contract is listed under NYSE</s>',
'score': 0.02087237872183323,
'token': 635,
'token_str': 'Ġcontract'}]
```
## Resources
For all resources, please look into the [HuggingFace](https://huggingface.co/) site and the [Repositories](https://github.com/huggingface).
| {"tags": ["finance"]} | abhilash1910/financial_roberta | null | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"roberta",
"fill-mask",
"finance",
"arxiv:1907.11692",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
fill-mask | transformers | # Roberta Trained Model For Masked Language Model On French Corpus :robot:
This is a Masked Language Model trained with [Roberta](https://huggingface.co/transformers/model_doc/roberta.html) on a small French news corpus (Leipzig corpora).
The model is built using Huggingface transformers.
The model can be found at: [French-Roberta](https://huggingface.co/abhilash1910/french-roberta)
## Specifications
The corpus for training is taken from the Leipzig Corpora (French news), and the model is trained on a small subset of the corpus (300K).
## Model Specification
The model chosen for training is [Roberta](https://arxiv.org/abs/1907.11692) with the following specifications:
1. vocab_size=32000
2. max_position_embeddings=514
3. num_attention_heads=12
4. num_hidden_layers=6
5. type_vocab_size=1
This is trained by using RobertaConfig from the transformers package. The total number of training parameters is 68124416.
The model is trained for 100 epochs with a GPU batch size of 64.
More details for building custom models can be found at the [HuggingFace Blog](https://huggingface.co/blog/how-to-train)
## Usage Specifications
For using this model, we have to first import the AutoTokenizer and AutoModelWithLMHead modules from transformers.
After that, we have to specify the pre-trained model, which in this case is 'abhilash1910/french-roberta', for both the tokenizer and the model.
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("abhilash1910/french-roberta")
model = AutoModelWithLMHead.from_pretrained("abhilash1910/french-roberta")
```
After this, the model will be downloaded; it will take some time to download all the model files.
For testing the model, we have to import the pipeline module from transformers and create a fill-mask pipeline for inference as follows:
```python
from transformers import pipeline
model_mask = pipeline('fill-mask', model='abhilash1910/french-roberta')
model_mask("Le tweet <mask>.")
```
Some of the examples are also provided with generic French sentences:
Example 1:
```python
model_mask("À ce jour, <mask> projet a entraîné")
```
Output:
```bash
[{'sequence': '<s>À ce jour, belles projet a entraîné</s>',
'score': 0.18685665726661682,
'token': 6504,
'token_str': 'Ġbelles'},
{'sequence': '<s>À ce jour,- projet a entraîné</s>',
'score': 0.0005200508167035878,
'token': 17,
'token_str': '-'},
{'sequence': '<s>À ce jour, de projet a entraîné</s>',
'score': 0.00045729897101409733,
'token': 268,
'token_str': 'Ġde'},
{'sequence': '<s>À ce jour, du projet a entraîné</s>',
'score': 0.0004307595663703978,
'token': 326,
'token_str': 'Ġdu'},
{'sequence': '<s>À ce jour," projet a entraîné</s>',
'score': 0.0004219160182401538,
'token': 6,
'token_str': '"'}]
```
Example 2:
```python
model_mask("C'est un <mask>")
```
Output:
```bash
[{'sequence': "<s>C'est un belles</s>",
'score': 0.16440927982330322,
'token': 6504,
'token_str': 'Ġbelles'},
{'sequence': "<s>C'est un de</s>",
'score': 0.0005495127406902611,
'token': 268,
'token_str': 'Ġde'},
{'sequence': "<s>C'est un du</s>",
'score': 0.00044988933950662613,
'token': 326,
'token_str': 'Ġdu'},
{'sequence': "<s>C'est un-</s>",
'score': 0.00044542422983795404,
'token': 17,
'token_str': '-'},
{'sequence': "<s>C'est un </s>",
'score': 0.00037563967634923756,
'token': 202,
'token_str': 'ĉ'}]
```
## Resources
For all resources, please look into the [HuggingFace](https://huggingface.co/) site and the [Repositories](https://github.com/huggingface).
---
language:
- fr
tags:
- fill-mask
license: apache-2.0
---
| {} | abhilash1910/french-roberta | null | [
"transformers",
"pytorch",
"jax",
"safetensors",
"roberta",
"fill-mask",
"arxiv:1907.11692",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | abhilash1910/t5-small-finetuned-xsum | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-generation | transformers | {} | abhinema/distillgpt2 | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-generation | transformers | {} | abhinema/gpt-medium | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-generation | transformers | {} | abhinema/gpt | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-generation | transformers | {} | abhinema/testauto | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-generation | transformers | {} | abhiramtirumala/DialoGPT-sarcastic-medium | null | [
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-generation | transformers | This model is a fine-tuned version of Microsoft/DialoGPT-medium trained to create sarcastic responses from the dataset "Sarcasm on Reddit" located [here](https://www.kaggle.com/danofer/sarcasm). | {"pipeline_tag": "conversational"} | abhiramtirumala/DialoGPT-sarcastic | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers |
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 37229289
- CO2 Emissions (in grams): 5.448567309047846
## Validation Metrics
- Loss: 0.07081354409456253
- Accuracy: 0.9867109634551495
- Macro F1: 0.9859067529980614
- Micro F1: 0.9867109634551495
- Weighted F1: 0.9866417220968429
- Macro Precision: 0.9868771404595043
- Micro Precision: 0.9867109634551495
- Weighted Precision: 0.9869289511551576
- Macro Recall: 0.9853173241852486
- Micro Recall: 0.9867109634551495
- Weighted Recall: 0.9867109634551495
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/abhishek/autonlp-bbc-news-classification-37229289
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("abhishek/autonlp-bbc-news-classification-37229289", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("abhishek/autonlp-bbc-news-classification-37229289", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` | {"language": "en", "tags": "autonlp", "datasets": ["abhishek/autonlp-data-bbc-news-classification"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}], "co2_eq_emissions": 5.448567309047846} | abhishek/autonlp-bbc-news-classification-37229289 | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autonlp",
"en",
"dataset:abhishek/autonlp-data-bbc-news-classification",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers |
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 37249301
- CO2 Emissions (in grams): 1.9859980179658823
## Validation Metrics
- Loss: 0.06406362354755402
- Accuracy: 0.9833887043189369
- Macro F1: 0.9832763664701248
- Micro F1: 0.9833887043189369
- Weighted F1: 0.9833288528828136
- Macro Precision: 0.9847257743677181
- Micro Precision: 0.9833887043189369
- Weighted Precision: 0.9835392869652073
- Macro Recall: 0.982101705176067
- Micro Recall: 0.9833887043189369
- Weighted Recall: 0.9833887043189369
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/abhishek/autonlp-bbc-roberta-37249301
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("abhishek/autonlp-bbc-roberta-37249301", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("abhishek/autonlp-bbc-roberta-37249301", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` | {"language": "unk", "tags": "autonlp", "datasets": ["abhishek/autonlp-data-bbc-roberta"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}], "co2_eq_emissions": 1.9859980179658823} | abhishek/autonlp-bbc-roberta-37249301 | null | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"autonlp",
"unk",
"dataset:abhishek/autonlp-data-bbc-roberta",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers |
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 2652021
## Validation Metrics
- Loss: 0.3934604227542877
- Accuracy: 0.8411030860144452
- Precision: 0.8201550387596899
- Recall: 0.8076335877862595
- AUC: 0.8946767157983608
- F1: 0.8138461538461538
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/abhishek/autonlp-ferd1-2652021
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("abhishek/autonlp-ferd1-2652021", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("abhishek/autonlp-ferd1-2652021", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` | {"language": "en", "tags": "autonlp", "datasets": ["abhishek/autonlp-data-ferd1"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}]} | abhishek/autonlp-ferd1-2652021 | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autonlp",
"en",
"dataset:abhishek/autonlp-data-ferd1",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers |
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 2682064
## Validation Metrics
- Loss: 0.4454168379306793
- Accuracy: 0.8188976377952756
- Precision: 0.8442028985507246
- Recall: 0.7103658536585366
- AUC: 0.8699702146791053
- F1: 0.771523178807947
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/abhishek/autonlp-fred2-2682064
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("abhishek/autonlp-fred2-2682064", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("abhishek/autonlp-fred2-2682064", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` | {"language": "en", "tags": "autonlp", "datasets": ["abhishek/autonlp-data-fred2"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}]} | abhishek/autonlp-fred2-2682064 | null | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"autonlp",
"en",
"dataset:abhishek/autonlp-data-fred2",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
automatic-speech-recognition | transformers |
# Model Trained Using AutoNLP
- Problem type: Speech Recognition
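A minimal transcription sketch for this checkpoint is given below; the `automatic-speech-recognition` pipeline call and the audio file path are assumptions for illustration and are not part of the original card.
```python
from transformers import pipeline

# Wav2Vec2 model fine-tuned for Hindi speech recognition via AutoNLP
asr = pipeline("automatic-speech-recognition", model="abhishek/autonlp-hindi-asr")
# "sample_hindi_audio.wav" is a hypothetical local 16 kHz mono recording
print(asr("sample_hindi_audio.wav"))
```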
| {"language": {}, "tags": ["autonlp", "automatic-speech-recognition", "audio"]} | abhishek/autonlp-hindi-asr | null | [
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"autonlp",
"audio",
"endpoints_compatible",
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
question-answering | transformers |
# Model Trained Using AutoNLP
- Problem type: Extractive Question Answering
- CO2 Emissions (in grams): 39.76330395590446
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"question": "Who loves AutoNLP?", "context": "Everyone loves AutoNLP"}' https://api-inference.huggingface.co/models/abhishek/autonlp-hindi-question-answering-23865268
```
Or Python API:
```
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer
model = AutoModelForQuestionAnswering.from_pretrained("abhishek/autonlp-hindi-question-answering-23865268", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("abhishek/autonlp-hindi-question-answering-23865268", use_auth_token=True)
question, text = "Who loves AutoNLP?", "Everyone loves AutoNLP"
inputs = tokenizer(question, text, return_tensors='pt')
start_positions = torch.tensor([1])
end_positions = torch.tensor([3])
outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
loss = outputs.loss
start_scores = outputs.start_logits
end_scores = outputs.end_logits
``` | {"language": "hi", "tags": ["autonlp", "question-answering"], "datasets": ["abhishek/autonlp-data-hindi-question-answering"], "widget": [{"text": "\u00b4\u0938\u0924\u0940\u0936 \u0927\u0935\u0928 \u0905\u0902\u0924\u0930\u093f\u0915\u094d\u0937 \u0915\u0947\u0902\u0926\u094d\u0930\u00b4 \u0915\u093f\u0938 \u0930\u093e\u091c\u094d\u092f \u092e\u0947\u0902 \u0938\u094d\u0925\u093f\u0924 \u0939\u0948?", "context": "\u0938\u0924\u0940\u0936 \u0927\u0935\u0928 \u0905\u0902\u0924\u0930\u093f\u0915\u094d\u0937 \u0915\u0947\u0902\u0926\u094d\u0930, \u092d\u093e\u0930\u0924\u0940\u092f \u0905\u0902\u0924\u0930\u093f\u0915\u094d\u0937 \u0905\u0928\u0941\u0938\u0902\u0927\u093e\u0928 \u0938\u0902\u0917\u0920\u0928 (\u0907\u0938\u0930\u094b) \u0915\u093e \u092a\u094d\u0930\u0915\u094d\u0937\u0947\u092a\u0923 \u0915\u0947\u0902\u0926\u094d\u0930 \u0939\u0948\u0964 \u092f\u0939 \u0906\u0902\u0927\u094d\u0930 \u092a\u094d\u0930\u0926\u0947\u0936 \u0915\u0947 \u0936\u094d\u0930\u0940\u0939\u0930\u0940\u0915\u094b\u091f\u093e \u092e\u0947\u0902 \u0938\u094d\u0925\u093f\u0924 \u0939\u0948, \u0907\u0938\u0947 '\u0936\u094d\u0930\u0940\u0939\u0930\u0940\u0915\u094b\u091f\u093e \u0930\u0947\u0902\u091c' \u092f\u093e '\u0936\u094d\u0930\u0940\u0939\u0930\u0940\u0915\u094b\u091f\u093e \u0932\u093e\u0901\u091a\u093f\u0902\u0917 \u0930\u0947\u0902\u091c' \u0915\u0947 \u0928\u093e\u092e \u0938\u0947 \u092d\u0940 \u091c\u093e\u0928\u093e \u091c\u093e\u0924\u093e \u0939\u0948\u0964 2002 \u092e\u0947\u0902 \u0907\u0938\u0930\u094b \u0915\u0947 \u092a\u0942\u0930\u094d\u0935 \u092a\u094d\u0930\u092c\u0902\u0927\u0915 \u0914\u0930 \u0935\u0948\u091c\u094d\u091e\u093e\u0928\u093f\u0915 \u0938\u0924\u0940\u0936 \u0927\u0935\u0928 \u0915\u0947 \u092e\u0930\u0923\u094b\u092a\u0930\u093e\u0902\u0924 \u0909\u0928\u0915\u0947 \u0938\u092e\u094d\u092e\u093e\u0928 \u092e\u0947\u0902 \u0907\u0938\u0915\u093e \u0928\u093e\u092e \u092c\u0926\u0932\u093e \u0917\u092f\u093e\u0964 \u092a\u094d\u0930\u0915\u094d\u0937\u0947\u092a\u0923 \u092f\u093e\u0928 \u0915\u0940 \u0905\u0938\u0947\u092e\u094d\u200d\u092c\u0932\u0940 \u0915\u0947 \u0932\u093f\u090f \u0926\u0942\u0938\u0930\u093e \u092d\u0935\u0928 \u0915\u0947\u0928\u094d\u200d\u0926\u094d\u0930\u0940\u092f \u092e\u0902\u0924\u094d\u0930\u093f\u092e\u0902\u0921\u0932 \u0928\u0947 12 \u0938\u093f\u0924\u092e\u094d\u200d\u092c\u0930, 2013 \u0915\u094b \u0938\u0924\u0940\u0936 \u0927\u0935\u0928 \u0905\u0902\u0924\u0930\u093f\u0915\u094d\u0937 \u0915\u0947\u0928\u094d\u200d\u0926\u094d\u0930, \u0936\u094d\u0930\u0940\u0939\u0930\u093f\u0915\u094b\u091f\u093e \u092e\u0947\u0902 \u092a\u094d\u0930\u0915\u094d\u0937\u0947\u092a\u0923 \u092f\u093e\u0928 \u0915\u0940 \u0905\u0938\u0947\u092e\u094d\u200d\u092c\u0932\u0940 \u0915\u0947 \u0932\u093f\u090f \u0926\u0942\u0938\u0930\u0947 \u092d\u0935\u0928 \u0915\u0947 \u0928\u093f\u0930\u094d\u092e\u093e\u0923 \u0915\u0940 \u092e\u0902\u091c\u0942\u0930\u0940 \u0926\u0940\u0964 \u0907\u0938 \u092a\u0930 363.95 \u0915\u0930\u094b\u0921\u093c \u0930\u0941\u092a\u092f\u0947 \u0915\u0940 \u0905\u0928\u0941\u092e\u093e\u0928\u093f\u0924 \u0932\u093e\u0917\u0924 \u0906\u090f\u0917\u0940, \u091c\u093f\u0938\u092e\u0947\u0902 \u0938\u093e\u0924 \u0915\u0930\u094b\u0921\u093c \u0930\u0941\u092a\u092f\u0947 \u0915\u093e \u0916\u0930\u094d\u091a \u0935\u093f\u0926\u0947\u0936\u0940 \u092e\u0941\u0926\u094d\u0930\u093e \u092e\u0947\u0902 \u0939\u094b\u0917\u093e\u0964 \u0907\u0938 \u0926\u0942\u0938\u0930\u0940 
\u092c\u093f\u0932\u094d\u0921\u093f\u0902\u0917 \u0915\u0947 \u0909\u092a\u0932\u092c\u094d\u200d\u0927 \u0939\u094b \u091c\u093e\u0928\u0947 \u0938\u0947 \u092a\u0940\u090f\u0938\u090f\u0932\u0935\u0940 \u0914\u0930 \u091c\u0940\u090f\u0938\u090f\u0932\u0935\u0940 \u0915\u0940 \u092a\u094d\u0930\u0915\u094d\u0937\u0947\u092a\u0923 \u092b\u094d\u0930\u0940\u0915\u094d\u0935\u0947\u0902\u0938\u0940 \u092c\u0922\u093c\u0947\u0917\u0940\u0964 \u092f\u0939 \u091c\u0940\u090f\u0938\u090f\u0932\u0935\u0940 \u090f\u092e\u0915\u0947-III \u0915\u0947 \u090f\u0915\u0940\u0915\u0930\u0923 \u0915\u0947 \u0932\u093f\u090f \u0935\u0930\u094d\u0924\u092e\u093e\u0928 \u0935\u094d\u200d\u0939\u0940\u0915\u0932 \u0905\u0938\u0947\u092e\u094d\u200d\u092c\u0932\u0940 \u092c\u093f\u0932\u094d\u0921\u093f\u0902\u0917 \u0915\u094b \u0905\u0924\u093f\u0930\u093f\u0915\u094d\u200d\u0924 \u0938\u0941\u0935\u093f\u0927\u093e \u092e\u0941\u0939\u0948\u092f\u093e \u0915\u0930\u093e\u092f\u0947\u0917\u0940\u0964 \u0924\u0940\u0938\u0930\u0947 \u092a\u094d\u0930\u0915\u094d\u0937\u0947\u092a\u0923 \u092a\u0948\u0921 \u0924\u0925\u093e \u092d\u0935\u093f\u0937\u094d\u200d\u092f \u092e\u0947\u0902 \u0938\u093e\u092e\u093e\u0928\u094d\u200d\u092f \u092f\u093e\u0928 \u092a\u094d\u0930\u0915\u094d\u0937\u0947\u092a\u0923 \u0915\u0947 \u0932\u093f\u090f \u092d\u0940 \u0907\u0938\u0938\u0947 \u0915\u093e\u092b\u0940 \u0938\u0941\u0935\u093f\u0927\u093e \u092e\u093f\u0932\u0947\u0917\u0940\u0964[1]\n\u0932\u093e\u0902\u091a \u092a\u0948\u0921\n\u0909\u092a\u0917\u094d\u0930\u0939 \u092a\u094d\u0930\u0915\u094d\u0937\u0947\u092a\u0923 \u092f\u093e\u0928 \u0932\u0949\u0928\u094d\u091a \u092a\u0948\u0921\n\u0907\u0938 \u0932\u093e\u0902\u091a \u092a\u0948\u0921 \u0938\u0947 \u0909\u092a\u0917\u094d\u0930\u0939 \u092a\u094d\u0930\u0915\u094d\u0937\u0947\u092a\u0923 \u092f\u093e\u0928 \u0914\u0930 \u0938\u0902\u0935\u0930\u094d\u0927\u093f\u0924 \u0909\u092a\u0917\u094d\u0930\u0939 \u092a\u094d\u0930\u0915\u094d\u0937\u0947\u092a\u0923 \u092f\u093e\u0928 \u0915\u094b \u0932\u093e\u0902\u091a \u0915\u093f\u092f\u093e \u0917\u092f\u093e \u0925\u093e\u0964 \u092f\u0939 \u0935\u0930\u094d\u0924\u092e\u093e\u0928 \u092a\u094d\u0930\u0915\u094d\u0937\u0947\u092a\u0923 \u0938\u094d\u0925\u0932 \u0915\u0947 \u0926\u0915\u094d\u0937\u093f\u0923\u0940 \u0938\u093f\u0930\u0947 \u092a\u0930 \u0938\u094d\u0925\u093f\u0924 \u0939\u0948\u0964 \u0907\u0938\u0947 \u0938\u0947\u0935\u093e\u092e\u0941\u0915\u094d\u0924 \u0915\u0930 \u0926\u093f\u092f\u093e \u0917\u092f\u093e \u0939\u0948\u0964 \u0936\u0941\u0930\u0942 \u092e\u0947\u0902 \u0907\u0938\u0947 \u0909\u092a\u0917\u094d\u0930\u0939 \u092a\u094d\u0930\u0915\u094d\u0937\u0947\u092a\u0923 \u092f\u093e\u0928 \u0932\u093e\u0902\u091a \u0915\u0930\u0928\u0947 \u0915\u0947 \u0932\u093f\u090f \u092c\u0928\u093e\u092f\u093e \u0917\u092f\u093e \u0925\u093e\u0964 \u0932\u0947\u0915\u093f\u0928 \u092c\u093e\u0926 \u092e\u0947\u0902 \u0907\u0938\u0947 \u0938\u0902\u0935\u0930\u094d\u0927\u093f\u0924 \u0909\u092a\u0917\u094d\u0930\u0939 \u092a\u094d\u0930\u0915\u094d\u0937\u0947\u092a\u0923 \u092f\u093e\u0928 \u092a\u094d\u0930\u0915\u094d\u0937\u0947\u092a\u0923 \u092a\u0930\u093f\u0938\u0930 \u0915\u0947 \u0930\u0942\u092a \u092e\u0947\u0902 \u0907\u0938\u094d\u0924\u0947\u092e\u093e\u0932 \u0915\u093f\u092f\u093e \u0917\u092f\u093e \u0925\u093e\u0964\n\u092a\u094d\u0930\u0925\u092e \u0932\u093e\u0902\u091a \u092a\u0948\u0921\n\u0926\u094d\u0935\u093f\u0924\u0940\u092f \u0932\u0949\u0928\u094d\u091a 
\u092a\u0948\u0921\n\u0924\u0943\u0924\u0940\u092f \u0932\u093e\u0902\u091a \u092a\u0948\u0921\n\u0938\u0928\u094d\u0926\u0930\u094d\u092d \u0936\u094d\u0930\u0947\u0923\u0940:\u092d\u093e\u0930\u0924\u0940\u092f \u0905\u0902\u0924\u0930\u093f\u0915\u094d\u0937 \u0905\u0928\u0941\u0938\u0902\u0927\u093e\u0928 \u0938\u0902\u0917\u0920\u0928\n\u0936\u094d\u0930\u0947\u0923\u0940:\u092d\u093e\u0930\u0924 \u0915\u0947 \u0930\u0949\u0915\u0947\u091f \u092a\u094d\u0930\u0915\u094d\u0937\u0947\u092a\u0923 \u0938\u094d\u0925\u0932"}], "co2_eq_emissions": 39.76330395590446} | abhishek/autonlp-hindi-question-answering-23865268 | null | [
"transformers",
"pytorch",
"xlm-roberta",
"question-answering",
"autonlp",
"hi",
"dataset:abhishek/autonlp-data-hindi-question-answering",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers |
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 3662644
- CO2 Emissions (in grams): 25.894117734124272
## Validation Metrics
- Loss: 0.20277436077594757
- Accuracy: 0.92604
- Precision: 0.9560674830864092
- Recall: 0.89312
- AUC: 0.9814625504000001
- F1: 0.9235223559581421
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/abhishek/autonlp-imdb-roberta-base-3662644
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("abhishek/autonlp-imdb-roberta-base-3662644", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("abhishek/autonlp-imdb-roberta-base-3662644", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` | {"language": "unk", "tags": "autonlp", "datasets": ["abhishek/autonlp-data-imdb-roberta-base"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}], "co2_eq_emissions": 25.894117734124272} | abhishek/autonlp-imdb-roberta-base-3662644 | null | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"autonlp",
"unk",
"dataset:abhishek/autonlp-data-imdb-roberta-base",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers |
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 71421
## Validation Metrics
- Loss: 0.4114699363708496
- Accuracy: 0.8248248248248248
- Precision: 0.8305439330543933
- Recall: 0.8085539714867617
- AUC: 0.9088033420466026
- F1: 0.8194014447884417
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/abhishek/autonlp-imdb_eval-71421
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("abhishek/autonlp-imdb_eval-71421", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("abhishek/autonlp-imdb_eval-71421", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` | {"language": "en", "tags": "autonlp", "datasets": ["abhishek/autonlp-data-imdb_eval"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}]} | abhishek/autonlp-imdb_eval-71421 | null | [
"transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"autonlp",
"en",
"dataset:abhishek/autonlp-data-imdb_eval",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers |
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 31154
## Validation Metrics
- Loss: 0.19292379915714264
- Accuracy: 0.9395
- Precision: 0.9569557080474111
- Recall: 0.9204
- AUC: 0.9851040399999998
- F1: 0.9383219492302988
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/abhishek/autonlp-imdb_sentiment_classification-31154
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("abhishek/autonlp-imdb_sentiment_classification-31154", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("abhishek/autonlp-imdb_sentiment_classification-31154", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` | {"language": "en", "tags": "autonlp", "widget": [{"text": "I love AutoNLP \ud83e\udd17"}]} | abhishek/autonlp-imdb_sentiment_classification-31154 | null | [
"transformers",
"pytorch",
"jax",
"roberta",
"text-classification",
"autonlp",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers |
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 59362
## Validation Metrics
- Loss: 0.13092292845249176
- Accuracy: 0.9527127414314258
- Precision: 0.9634070704982427
- Recall: 0.9842171959602166
- AUC: 0.9667289746092403
- F1: 0.9737009564152002
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/abhishek/autonlp-japanese-sentiment-59362
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("abhishek/autonlp-japanese-sentiment-59362", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("abhishek/autonlp-japanese-sentiment-59362", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` | {"language": "ja", "tags": "autonlp", "datasets": ["abhishek/autonlp-data-japanese-sentiment"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}]} | abhishek/autonlp-japanese-sentiment-59362 | null | [
"transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"autonlp",
"ja",
"dataset:abhishek/autonlp-data-japanese-sentiment",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers |
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 59363
## Validation Metrics
- Loss: 0.12651239335536957
- Accuracy: 0.9532079853817648
- Precision: 0.9729688278823665
- Recall: 0.9744633462616643
- AUC: 0.9717333684823413
- F1: 0.9737155136027014
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/abhishek/autonlp-japanese-sentiment-59363
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("abhishek/autonlp-japanese-sentiment-59363", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("abhishek/autonlp-japanese-sentiment-59363", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` | {"language": "ja", "tags": "autonlp", "datasets": ["abhishek/autonlp-data-japanese-sentiment"], "widget": [{"text": "\ud83e\udd17AutoNLP\u304c\u5927\u597d\u304d\u3067\u3059"}]} | abhishek/autonlp-japanese-sentiment-59363 | null | [
"transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"autonlp",
"ja",
"dataset:abhishek/autonlp-data-japanese-sentiment",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
token-classification | transformers |
# Model Trained Using AutoNLP
- Problem type: Entity Extraction
- Model ID: 3362554
- CO2 Emissions (in grams): 5.340540212393564
## Validation Metrics
- Loss: 0.14167872071266174
- Accuracy: 0.9587076867229332
- Precision: 0.7351351351351352
- Recall: 0.7923728813559322
- F1: 0.7626816212082591
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/abhishek/autonlp-prodigy-10-3362554
```
Or Python API:
```
from transformers import AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("abhishek/autonlp-prodigy-10-3362554", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("abhishek/autonlp-prodigy-10-3362554", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
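# A minimal sketch mapping per-token logits back to entity tags, assuming the
# usual TokenClassifierOutput and an id2label mapping in the model config:
predictions = outputs.logits.argmax(dim=-1)[0]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
for token, pred in zip(tokens, predictions):
    print(token, model.config.id2label[int(pred)])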
``` | {"language": "en", "tags": "autonlp", "datasets": ["abhishek/autonlp-data-prodigy-10"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}], "co2_eq_emissions": 5.340540212393564} | abhishek/autonlp-prodigy-10-3362554 | null | [
"transformers",
"pytorch",
"bert",
"token-classification",
"autonlp",
"en",
"dataset:abhishek/autonlp-data-prodigy-10",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers |
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 30516963
- CO2 Emissions (in grams): 30.684995819386277
## Validation Metrics
- Loss: 0.08340361714363098
- Accuracy: 0.9688222161294113
- Precision: 0.9102096627164995
- Recall: 0.7692604006163328
- AUC: 0.9859340458715813
- F1: 0.8338204592901879
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/abhishek/autonlp-toxic-new-30516963
```
Or use the Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("abhishek/autonlp-toxic-new-30516963", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("abhishek/autonlp-toxic-new-30516963", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
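# A minimal sketch printing a probability per label, assuming binary
# sequence-classification logits and whatever label names AutoNLP stored
# in the model config's id2label:
probs = outputs.logits.softmax(dim=-1)[0]
for idx, p in enumerate(probs):
    print(model.config.id2label[idx], round(float(p), 4))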
``` | {"language": "en", "tags": "autonlp", "datasets": ["abhishek/autonlp-data-toxic-new"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}], "co2_eq_emissions": 30.684995819386277} | abhishek/autonlp-toxic-new-30516963 | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autonlp",
"en",
"dataset:abhishek/autonlp-data-toxic-new",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
question-answering | transformers |
# muril-large-chaii
This is __one of the models__ we used to secure 5th place in the Hindi and Tamil question answering (chaii) competition organized by Kaggle.
Our full solution can be found here:
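## Usage
A minimal sketch of querying the checkpoint for extractive question answering with the generic `pipeline` API (this is not the authors' competition inference code; the question and context come from this card's widget, and everything else is an assumption):
```
from transformers import pipeline

# Extractive QA with the stock transformers pipeline; the model id is this
# card's checkpoint, the question/context pair is the Hindi widget example.
qa = pipeline("question-answering", model="abhishek/muril-large-chaii")

result = qa(
    question="अभिषेक और उद्भव को कौन सा स्थान मिला?",
    context=(
        "kaggle द्वारा आयोजित chaii प्रतियोगिता में अभिषेक और उद्भव ने पांचवा स्थान हासिल किया "
        "उन्होंने xlm-roberta, muril और rembert जैसे मॉडलों का इस्तेमाल किया."
    ),
)
print(result)  # a dict with "answer", "score", "start" and "end"
```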
| {"language": ["hi", "ta"], "tags": ["question-answering"], "widget": [{"text": "\u0905\u092d\u093f\u0937\u0947\u0915 \u0914\u0930 \u0909\u0926\u094d\u092d\u0935 \u0915\u094b \u0915\u094c\u0928 \u0938\u093e \u0938\u094d\u0925\u093e\u0928 \u092e\u093f\u0932\u093e?", "context": "kaggle \u0926\u094d\u0935\u093e\u0930\u093e \u0906\u092f\u094b\u091c\u093f\u0924 chaii \u092a\u094d\u0930\u0924\u093f\u092f\u094b\u0917\u093f\u0924\u093e \u092e\u0947\u0902 \u0905\u092d\u093f\u0937\u0947\u0915 \u0914\u0930 \u0909\u0926\u094d\u092d\u0935 \u0928\u0947 \u092a\u093e\u0902\u091a\u0935\u093e \u0938\u094d\u0925\u093e\u0928 \u0939\u093e\u0938\u093f\u0932 \u0915\u093f\u092f\u093e \n \u0909\u0928\u094d\u0939\u094b\u0902\u0928\u0947 xlm-roberta, muril \u0914\u0930 rembert \u091c\u0948\u0938\u0947 \u092e\u0949\u0921\u0932\u094b\u0902 \u0915\u093e \u0907\u0938\u094d\u0924\u0947\u092e\u093e\u0932 \u0915\u093f\u092f\u093e."}]} | abhishek/muril-large-chaii | null | [
"transformers",
"pytorch",
"bert",
"question-answering",
"hi",
"ta",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | abhishekjha2468/temp | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |