pipeline_tag (stringclasses, 48 values) | library_name (stringclasses, 205 values) | text (stringlengths, 0-18.3M) | metadata (stringlengths, 2-1.07B) | id (stringlengths, 5-122) | last_modified (null) | tags (listlengths, 1-1.84k) | sha (null) | created_at (stringlengths, 25-25) |
---|---|---|---|---|---|---|---|---|
question-answering | transformers | {} | aodiniz/bert_uncased_L-4_H-768_A-12_squad2 | null | [
"transformers",
"pytorch",
"jax",
"bert",
"question-answering",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
question-answering | transformers | {} | aodiniz/bert_uncased_L-4_H-768_A-12_squad2_covid-qna | null | [
"transformers",
"pytorch",
"jax",
"bert",
"question-answering",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
fill-mask | transformers | {} | aodiniz/bert_uncased_L-6_H-128_A-2_cord19-200616 | null | [
"transformers",
"pytorch",
"jax",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
question-answering | transformers | {} | aodiniz/bert_uncased_L-6_H-128_A-2_cord19-200616_squad2 | null | [
"transformers",
"pytorch",
"jax",
"bert",
"question-answering",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
question-answering | transformers | {} | aodiniz/bert_uncased_L-6_H-128_A-2_cord19-200616_squad2_covid-qna | null | [
"transformers",
"pytorch",
"jax",
"bert",
"question-answering",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
question-answering | transformers | {} | aodiniz/bert_uncased_L-6_H-128_A-2_squad2 | null | [
"transformers",
"pytorch",
"jax",
"bert",
"question-answering",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
question-answering | transformers | {} | aodiniz/bert_uncased_L-6_H-128_A-2_squad2_covid-qna | null | [
"transformers",
"pytorch",
"jax",
"bert",
"question-answering",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | # Building a HuggingFace Transformer NLP Model
## Running this Repo
| {} | aogara/slai_transformer | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-generation | transformers | {} | aorona/dickens | null | [
"transformers",
"pytorch",
"gpt_neo",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-generation | transformers | {} | aoryabinin/aoryabinin_gpt_ai_dungeon_ru | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my-new-model
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the xsum dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
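For reference, a minimal `TrainingArguments` sketch that mirrors the list above (the `output_dir` value and the `fp16` flag standing in for Native AMP are assumptions, not taken from the original run):
```python
from transformers import TrainingArguments

# Hedged sketch: reproduce the hyperparameters listed above on a single device.
training_args = TrainingArguments(
    output_dir="my-new-model",       # assumption
    learning_rate=5e-05,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
    fp16=True,                       # Native AMP
)
```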
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.1
- Datasets 1.15.1
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["xsum"], "model-index": [{"name": "my-new-model", "results": []}]} | aozorahime/my-new-model | null | [
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:xsum",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
automatic-speech-recognition | transformers | {} | apeguero/wav2vec2-large-xls-r-300m-tr-colab-3 | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | apeguero/wav2vec2-large-xls-r-300m-tr-colab2 | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-generation | transformers | {} | aphuongle95/xlnet_effect_partial_new | null | [
"transformers",
"pytorch",
"xlnet",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-generation | transformers |
# Aladdin Bot | {"tags": ["conversational"]} | aplnestrella/Aladdin-Bot | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-to-image | transformers |
## DALL·E mini - Generate images from text
<img style="text-align:center; display:block;" src="https://raw.githubusercontent.com/borisdayma/dalle-mini/main/img/logo.png" width="200">
* [Technical Report](https://wandb.ai/dalle-mini/dalle-mini/reports/DALL-E-mini--Vmlldzo4NjIxODA)
* [Demo](https://huggingface.co/spaces/flax-community/dalle-mini)
### Model Description
This is an attempt to replicate OpenAI's [DALL·E](https://openai.com/blog/dall-e/), a model capable of generating arbitrary images from a text prompt that describes the desired result.

This model's architecture is a simplification of the original, and leverages previous open source efforts and available pre-trained models. Results have lower quality than OpenAI's, but the model can be trained and used on less demanding hardware. Our training was performed on a single TPU v3-8 for a few days.
### Components of the Architecture
The system relies on the Flax/JAX infrastructure, which is ideal for TPU training. TPUs are not required; both Flax and JAX run very efficiently on GPU backends.
The main components of the architecture include:
* An encoder, based on [BART](https://arxiv.org/abs/1910.13461). The encoder transforms a sequence of input text tokens to a sequence of image tokens. The input tokens are extracted from the text prompt by using the model's tokenizer. The image tokens are a fixed-length sequence, and they represent indices in a VQGAN-based pre-trained codebook.
* A decoder, which converts the image tokens to image pixels. As mentioned above, the decoder is based on a [VQGAN model](https://compvis.github.io/taming-transformers/).
The model definition we use for the encoder can be downloaded from our [Github repo](https://github.com/borisdayma/dalle-mini). The encoder is represented by the class `CustomFlaxBartForConditionalGeneration`.
To use the decoder, you need to follow the instructions in our accompanying VQGAN model in the hub, [flax-community/vqgan_f16_16384](https://huggingface.co/flax-community/vqgan_f16_16384).
### How to Use
The easiest way to get familiar with the code and the models is to follow the inference notebook we provide in our [github repo](https://github.com/borisdayma/dalle-mini/blob/main/dev/inference/inference_pipeline.ipynb). For your convenience, you can open it in Google Colaboratory: [](https://colab.research.google.com/github/borisdayma/dalle-mini/blob/main/dev/inference/inference_pipeline.ipynb)
If you just want to test the trained model and see what it comes up with, please visit [our demo](https://huggingface.co/spaces/flax-community/dalle-mini), available in 🤗 Spaces.
### Additional Details
Our [report](https://wandb.ai/dalle-mini/dalle-mini/reports/DALL-E-mini--Vmlldzo4NjIxODA) contains more details about how the model was trained and shows many examples that demonstrate its capabilities.
| {"language": ["en"], "pipeline_tag": "text-to-image", "inference": false} | apol/dalle-mini | null | [
"transformers",
"jax",
"bart",
"text2text-generation",
"text-to-image",
"en",
"arxiv:1910.13461",
"autotrain_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | hello
| {} | apoorvumang/kgt5-test | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text2text-generation | transformers | This is a t5-small model trained from scratch on the WikiKG90Mv2 dataset. Please see https://github.com/apoorvumang/kgt5/ for more details on the method.
This model was trained on the tail entity prediction task, i.e. given a subject entity and a relation, predict the object entity. Input should be provided in the form of "\<entity text\>| \<relation text\>".
We used the raw text titles and descriptions to get entity and relation textual representations. These raw texts were obtained from the OGB dataset itself (dataset/wikikg90m-v2/mapping/entity.csv and relation.csv). The entity representation was set to the title, and the description was used to disambiguate if two entities had the same title. If no disambiguation was possible even then, we used the Wikidata ID (e.g. Q123456).
We trained the model on WikiKG90Mv2 for approximately 1.5 epochs on 4x 1080Ti GPUs. Training one epoch took approximately 5.5 days.
To evaluate the model, we sample 300 times from the decoder for each input (s,r) pair. We then remove predictions which do not map back to a valid entity and rank the remaining predictions by their log probabilities. Filtering was performed subsequently. We achieve 0.22 validation MRR (the full leaderboard is here: https://ogb.stanford.edu/docs/lsc/leaderboards/#wikikg90mv2).
You can try the following code in an IPython notebook to evaluate the pre-trained model. The full procedure of mapping entities to ids, filtering, etc. is not included here for the sake of simplicity, but can be provided on request if needed. Please contact Apoorv ([email protected]) for clarifications/details.
---------
```
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("apoorvumang/kgt5-wikikg90mv2")
model = AutoModelForSeq2SeqLM.from_pretrained("apoorvumang/kgt5-wikikg90mv2")
```
```
import torch
def getScores(ids, scores, pad_token_id):
"""get sequence scores from model.generate output"""
scores = torch.stack(scores, dim=1)
log_probs = torch.log_softmax(scores, dim=2)
# remove start token
ids = ids[:,1:]
# gather needed probs
x = ids.unsqueeze(-1).expand(log_probs.shape)
needed_logits = torch.gather(log_probs, 2, x)
final_logits = needed_logits[:, :, 0]
padded_mask = (ids == pad_token_id)
final_logits[padded_mask] = 0
final_scores = final_logits.sum(dim=-1)
return final_scores.cpu().detach().numpy()
def topkSample(input, model, tokenizer,
num_samples=5,
num_beams=1,
max_output_length=30):
tokenized = tokenizer(input, return_tensors="pt")
out = model.generate(**tokenized,
do_sample=True,
num_return_sequences = num_samples,
num_beams = num_beams,
eos_token_id = tokenizer.eos_token_id,
pad_token_id = tokenizer.pad_token_id,
output_scores = True,
return_dict_in_generate=True,
max_length=max_output_length,)
out_tokens = out.sequences
out_str = tokenizer.batch_decode(out_tokens, skip_special_tokens=True)
out_scores = getScores(out_tokens, out.scores, tokenizer.pad_token_id)
pair_list = [(x[0], x[1]) for x in zip(out_str, out_scores)]
sorted_pair_list = sorted(pair_list, key=lambda x:x[1], reverse=True)
return sorted_pair_list
def greedyPredict(input, model, tokenizer):
input_ids = tokenizer([input], return_tensors="pt").input_ids
out_tokens = model.generate(input_ids)
out_str = tokenizer.batch_decode(out_tokens, skip_special_tokens=True)
return out_str[0]
```
```
# an example from validation set that the model predicts correctly
# you can try your own examples here. what's your noble title?
input = "Sophie Valdemarsdottir| noble title"
out = topkSample(input, model, tokenizer, num_samples=5)
out
```
You can further load the list of entity aliases, filter to only those predictions which are valid entities, and then create a reverse mapping from alias -> integer id to get the final predictions in the required format.
However, loading these aliases in memory as a dictionary requires a lot of RAM, and you need to download the aliases file (made available here: https://storage.googleapis.com/kgt5-wikikg90mv2/ent_alias_list.pickle; relation file: https://storage.googleapis.com/kgt5-wikikg90mv2/rel_alias_list.pickle).
The submitted validation/test results were obtained by sampling 300 times for each input, then applying the above procedure, followed by filtering to known entities. The final MRR can vary slightly because of this sampling (we found that although beam search gives deterministic output, the results are inferior to sampling a large number of times).
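For illustration, here is a minimal sketch of that alias-filtering step. It assumes `ent_alias_list.pickle` holds a plain Python list of alias strings whose list index is the entity's integer id; that layout is an assumption, not something confirmed here.
```
import pickle

# Load entity aliases (assumed layout: a list of alias strings, index == entity integer id).
with open("ent_alias_list.pickle", "rb") as f:
    ent_aliases = pickle.load(f)
alias_to_id = {alias: idx for idx, alias in enumerate(ent_aliases)}

def filter_and_map(sorted_pair_list):
    """Drop sampled predictions that are not valid entity aliases and map the rest to ids."""
    return [(alias_to_id[text], score) for text, score in sorted_pair_list if text in alias_to_id]
```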
```
# download valid.txt. you can also try same url with test.txt. however test does not contain the correct tails
!wget https://storage.googleapis.com/kgt5-wikikg90mv2/valid.txt
```
```
fname = 'valid.txt'
valid_lines = []
f = open(fname)
for line in f:
valid_lines.append(line.rstrip())
f.close()
print(valid_lines[0])
```
```
from tqdm.auto import tqdm
# try unfiltered hits@k. this is approximation since model can sample same seq multiple times
# you should run this on gpu if you want to evaluate on all points with 300 samples each
k = 1
count_at_k = 0
max_predictions = k
max_points = 1000
for line in tqdm(valid_lines[:max_points]):
input, target = line.split('\t')
model_output = topkSample(input, model, tokenizer, num_samples=max_predictions)
prediction_strings = [x[0] for x in model_output]
if target in prediction_strings:
count_at_k += 1
print('Hits at {0} unfiltered: {1}'.format(k, count_at_k/max_points))
``` | {"license": "mit", "widget": [{"text": "Apoorv Umang Saxena| family name", "example_title": "Family name prediction"}, {"text": "Apoorv Saxena| country", "example_title": "Country prediction"}, {"text": "World War 2| followed by", "example_title": "followed by"}]} | apoorvumang/kgt5-wikikg90mv2 | null | [
"transformers",
"pytorch",
"tf",
"t5",
"text2text-generation",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | 1 | {} | app-test-user/test-tensorboard | null | [
"tensorboard",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers | {} | appleternity/bert-base-uncased-finetuned-coda19 | null | [
"transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-classification | transformers | {} | appleternity/scibert-uncased-finetuned-coda19 | null | [
"transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text2text-generation | transformers | {} | aqj213/t5-base-customised-1k-tokens-pisa-state-only-finetuned | null | [
"transformers",
"jax",
"tensorboard",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text2text-generation | transformers | {} | aqj213/t5-base-pisa-state-only-finetuned | null | [
"transformers",
"jax",
"tensorboard",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text2text-generation | transformers | {} | aqj213/t5-small-pisa-state-only-finetuned | null | [
"transformers",
"jax",
"tensorboard",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text2text-generation | transformers | {} | aqj213/t5-v1_1-large-last-1-step-pisa-state-only-finetuned | null | [
"transformers",
"jax",
"tensorboard",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text2text-generation | transformers | {} | aqj213/t5-v1_1-large-pisa-state-only-finetuned | null | [
"transformers",
"jax",
"tensorboard",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-generation | transformers |
# DialoGPT-medium-simpsons
This is a version of [DialoGPT-medium](https://huggingface.co/microsoft/DialoGPT-medium) fine-tuned on The Simpsons scripts. | {"tags": ["conversational"]} | arampacha/DialoGPT-medium-simpsons | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
zero-shot-image-classification | transformers | {} | arampacha/clip-rsicd-v5 | null | [
"transformers",
"pytorch",
"jax",
"clip",
"zero-shot-image-classification",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
automatic-speech-recognition | transformers |
# Wav2Vec2-Large-XLSR-53-Czech
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Czech using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "cs", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("arampacha/wav2vec2-large-xlsr-czech")
model = Wav2Vec2ForCTC.from_pretrained("arampacha/wav2vec2-large-xlsr-czech")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Czech test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "cs", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("arampacha/wav2vec2-large-xlsr-czech")
model = Wav2Vec2ForCTC.from_pretrained("arampacha/wav2vec2-large-xlsr-czech")
model.to("cuda")
chars_to_ignore = [",", "?", ".", "!", "-", ";", ":", '""', "%", "'", '"', "�", '«', '»', '—', '…', '(', ')', '*', '”', '“']
chars_to_ignore_regex = f'[{"".join(chars_to_ignore)}]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
# Note: this model is trained ignoring accents on letters, as below
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().strip()
batch["sentence"] = re.sub(re.compile('[äá]'), 'a', batch['sentence'])
batch["sentence"] = re.sub(re.compile('[öó]'), 'o', batch['sentence'])
batch["sentence"] = re.sub(re.compile('[èé]'), 'e', batch['sentence'])
batch["sentence"] = re.sub(re.compile("[ïí]"), 'i', batch['sentence'])
batch["sentence"] = re.sub(re.compile("[üů]"), 'u', batch['sentence'])
batch['sentence'] = re.sub(' ', ' ', batch['sentence'])
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 24.56
## Training
The Common Voice `train` and `validation` splits were used for training.
The script used for training will be available [here](https://github.com/arampacha/hf-sprint-xlsr) soon. | {"language": "cs", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "metrics": "wer", "dataset": "common_voice", "model-index": [{"name": "Czech XLSR Wav2Vec2 Large 53", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice cs", "type": "common_voice", "args": "cs"}, "metrics": [{"type": "wer", "value": 24.56, "name": "Test WER"}]}]}]} | arampacha/wav2vec2-large-xlsr-czech | null | [
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"cs",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
automatic-speech-recognition | transformers |
# Wav2Vec2-Large-XLSR-53-Ukrainian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Ukrainian using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset and a sample of the [M-AILABS Ukrainian Corpus](https://www.caito.de/2019/01/the-m-ailabs-speech-dataset/).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "uk", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("arampacha/wav2vec2-large-xlsr-ukrainian")
model = Wav2Vec2ForCTC.from_pretrained("arampacha/wav2vec2-large-xlsr-ukrainian")
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = torchaudio.transforms.Resample(sampling_rate, 16_000)(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Ukrainian test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "uk", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("arampacha/wav2vec2-large-xlsr-ukrainian")
model = Wav2Vec2ForCTC.from_pretrained("arampacha/wav2vec2-large-xlsr-ukrainian")
model.to("cuda")
chars_to_ignore = [",", "?", ".", "!", "-", ";", ":", '""', "%", "'", '"', "�", '«', '»', '—', '…', '(', ')', '*', '”', '“']
chars_to_ignore_regex = f'[{"".join(chars_to_ignore)}]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays and normalize characters
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(re.compile("['`]"), '’', batch['sentence'])
batch["sentence"] = re.sub(re.compile(chars_to_ignore_regex), '', batch["sentence"]).lower().strip()
batch["sentence"] = re.sub(re.compile('i'), 'і', batch['sentence'])
batch["sentence"] = re.sub(re.compile('o'), 'о', batch['sentence'])
batch["sentence"] = re.sub(re.compile('a'), 'а', batch['sentence'])
batch["sentence"] = re.sub(re.compile('ы'), 'и', batch['sentence'])
batch["sentence"] = re.sub(re.compile("–"), '', batch['sentence'])
batch['sentence'] = re.sub(' ', ' ', batch['sentence'])
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = torchaudio.transforms.Resample(sampling_rate, 16_000)(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 29.89
## Training
The Common Voice `train` and `validation` splits and the M-AILABS Ukrainian corpus were used for training.
The script used for training will be available [here](https://github.com/arampacha/hf-sprint-xlsr) soon. | {"language": "uk", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "metrics": "wer", "dataset": "common_voice", "model-index": [{"name": "Ukrainian XLSR Wav2Vec2 Large 53", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice uk", "type": "common_voice", "args": "uk"}, "metrics": [{"type": "wer", "value": 29.89, "name": "Test WER"}]}]}]} | arampacha/wav2vec2-large-xlsr-ukrainian | null | [
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"uk",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - HY-AM dataset.
It achieves the following results on the evaluation set:
- Loss: **0.4521**
- Wer: **0.5141**
- Cer: **0.1100**
- Wer+LM: **0.2756**
- Cer+LM: **0.0866**
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08
- lr_scheduler_type: tristage
- lr_scheduler_ratios: [0.1, 0.4, 0.5]
- training_steps: 1400
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|
| 6.1298 | 19.87 | 100 | 3.1204 | 1.0 | 1.0 |
| 2.7269 | 39.87 | 200 | 0.6200 | 0.7592 | 0.1755 |
| 1.4643 | 59.87 | 300 | 0.4796 | 0.5921 | 0.1277 |
| 1.1242 | 79.87 | 400 | 0.4637 | 0.5359 | 0.1145 |
| 0.9592 | 99.87 | 500 | 0.4521 | 0.5141 | 0.1100 |
| 0.8704 | 119.87 | 600 | 0.4736 | 0.4914 | 0.1045 |
| 0.7908 | 139.87 | 700 | 0.5394 | 0.5250 | 0.1124 |
| 0.7049 | 159.87 | 800 | 0.4822 | 0.4754 | 0.0985 |
| 0.6299 | 179.87 | 900 | 0.4890 | 0.4809 | 0.1028 |
| 0.5832 | 199.87 | 1000 | 0.5233 | 0.4813 | 0.1028 |
| 0.5145 | 219.87 | 1100 | 0.5350 | 0.4781 | 0.0994 |
| 0.4604 | 239.87 | 1200 | 0.5223 | 0.4715 | 0.0984 |
| 0.4226 | 259.87 | 1300 | 0.5167 | 0.4625 | 0.0953 |
| 0.3946 | 279.87 | 1400 | 0.5248 | 0.4614 | 0.0950 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
| {"language": ["hy"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "robust-speech-event", "hy", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "wav2vec2-xls-r-1b-hy-cv", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice hy-AM", "type": "mozilla-foundation/common_voice_8_0", "args": "hy-AM"}, "metrics": [{"type": "wer", "value": 0.2755659640905542, "name": "WER LM"}, {"type": "cer", "value": 0.08659585230146687, "name": "CER LM"}]}]}]} | arampacha/wav2vec2-xls-r-1b-hy-cv | null | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"robust-speech-event",
"hy",
"hf-asr-leaderboard",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the /WORKSPACE/DATA/HY/NOIZY_STUDENT_4/ - NA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1693
- Wer: 0.2373
- Cer: 0.0429
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 842
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 1.255 | 7.24 | 500 | 0.2978 | 0.4294 | 0.0758 |
| 1.0058 | 14.49 | 1000 | 0.1883 | 0.2838 | 0.0483 |
| 0.9371 | 21.73 | 1500 | 0.1813 | 0.2627 | 0.0457 |
| 0.8999 | 28.98 | 2000 | 0.1693 | 0.2373 | 0.0429 |
| 0.8814 | 36.23 | 2500 | 0.1760 | 0.2420 | 0.0435 |
| 0.8364 | 43.47 | 3000 | 0.1765 | 0.2416 | 0.0419 |
| 0.8019 | 50.72 | 3500 | 0.1758 | 0.2311 | 0.0398 |
| 0.7665 | 57.96 | 4000 | 0.1745 | 0.2240 | 0.0399 |
| 0.7376 | 65.22 | 4500 | 0.1717 | 0.2190 | 0.0385 |
| 0.716 | 72.46 | 5000 | 0.1700 | 0.2147 | 0.0382 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2
- Datasets 1.18.4.dev0
- Tokenizers 0.11.0
| {"language": ["hy"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "hy", "mozilla-foundation/common_voice_8_0", "robust-speech-event"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-xls-r-1b-hy-cv", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice hy-AM", "type": "mozilla-foundation/common_voice_8_0", "args": "hy-AM"}, "metrics": [{"type": "wer", "value": 10.811865729898516, "name": "WER LM"}, {"type": "cer", "value": 2.2205361659079412, "name": "CER LM"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "hy"}, "metrics": [{"type": "wer", "value": 18.219363037089988, "name": "Test WER"}, {"type": "cer", "value": 7.075988867335752, "name": "Test CER"}]}]}]} | arampacha/wav2vec2-xls-r-1b-hy | null | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"hf-asr-leaderboard",
"hy",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-1b-ka
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the /WORKSPACE/DATA/KA/NOIZY_STUDENT_2/ - KA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1022
- Wer: 0.1527
- Cer: 0.0221
## Model description
More information needed
## Intended uses & limitations
More information needed
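Since the card documents no usage example, here is a minimal hedged inference sketch using the Transformers ASR pipeline; the audio file name is a placeholder, and the input is expected to be 16 kHz mono audio.
```python
from transformers import pipeline

# Hedged usage sketch, not taken from the original card.
asr = pipeline("automatic-speech-recognition", model="arampacha/wav2vec2-xls-r-1b-ka")
print(asr("sample_16khz.wav")["text"])  # "sample_16khz.wav" is a placeholder path
```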
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 1.2839 | 6.45 | 400 | 0.2229 | 0.3609 | 0.0557 |
| 0.9775 | 12.9 | 800 | 0.1271 | 0.2202 | 0.0317 |
| 0.9045 | 19.35 | 1200 | 0.1268 | 0.2030 | 0.0294 |
| 0.8652 | 25.8 | 1600 | 0.1211 | 0.1940 | 0.0287 |
| 0.8505 | 32.26 | 2000 | 0.1192 | 0.1912 | 0.0276 |
| 0.8168 | 38.7 | 2400 | 0.1086 | 0.1763 | 0.0260 |
| 0.7737 | 45.16 | 2800 | 0.1098 | 0.1753 | 0.0256 |
| 0.744 | 51.61 | 3200 | 0.1054 | 0.1646 | 0.0239 |
| 0.7114 | 58.06 | 3600 | 0.1034 | 0.1573 | 0.0228 |
| 0.6773 | 64.51 | 4000 | 0.1022 | 0.1527 | 0.0221 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2
- Datasets 1.18.4.dev0
- Tokenizers 0.11.0
| {"language": ["ka"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "robust-speech-event", "hf-asr-leaderboard"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-xls-r-1b-ka", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice ka", "type": "mozilla-foundation/common_voice_8_0", "args": "ka"}, "metrics": [{"type": "wer", "value": 7.39778066580026, "name": "WER LM"}, {"type": "cer", "value": 1.1882089427096434, "name": "CER LM"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "ka"}, "metrics": [{"type": "wer", "value": 22.61, "name": "Test WER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "ka"}, "metrics": [{"type": "wer", "value": 21.58, "name": "Test WER"}]}]}]} | arampacha/wav2vec2-xls-r-1b-ka | null | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"robust-speech-event",
"hf-asr-leaderboard",
"ka",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - UK dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1747
- Wer: 0.2107
- Cer: 0.0408
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 8000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 1.3719 | 4.35 | 500 | 0.3389 | 0.4236 | 0.0833 |
| 1.1361 | 8.7 | 1000 | 0.2309 | 0.3162 | 0.0630 |
| 1.0517 | 13.04 | 1500 | 0.2166 | 0.3056 | 0.0597 |
| 1.0118 | 17.39 | 2000 | 0.2141 | 0.2784 | 0.0557 |
| 0.9922 | 21.74 | 2500 | 0.2231 | 0.2941 | 0.0594 |
| 0.9929 | 26.09 | 3000 | 0.2171 | 0.2892 | 0.0587 |
| 0.9485 | 30.43 | 3500 | 0.2236 | 0.2956 | 0.0599 |
| 0.9573 | 34.78 | 4000 | 0.2314 | 0.3043 | 0.0616 |
| 0.9195 | 39.13 | 4500 | 0.2169 | 0.2812 | 0.0580 |
| 0.8915 | 43.48 | 5000 | 0.2109 | 0.2780 | 0.0560 |
| 0.8449 | 47.83 | 5500 | 0.2050 | 0.2534 | 0.0514 |
| 0.8028 | 52.17 | 6000 | 0.2032 | 0.2456 | 0.0492 |
| 0.7881 | 56.52 | 6500 | 0.1890 | 0.2380 | 0.0469 |
| 0.7423 | 60.87 | 7000 | 0.1816 | 0.2245 | 0.0442 |
| 0.7248 | 65.22 | 7500 | 0.1789 | 0.2165 | 0.0422 |
| 0.6993 | 69.57 | 8000 | 0.1747 | 0.2107 | 0.0408 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
| {"language": ["uk"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_8_0", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "wav2vec2-xls-r-1b-hy-cv", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice uk", "type": "mozilla-foundation/common_voice_8_0", "args": "uk"}, "metrics": [{"type": "wer", "value": 12.246920571994902, "name": "WER LM"}, {"type": "cer", "value": 2.513653497966816, "name": "CER LM"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "uk"}, "metrics": [{"type": "wer", "value": 46.56, "name": "Test WER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "uk"}, "metrics": [{"type": "wer", "value": 35.98, "name": "Test WER"}]}]}]} | arampacha/wav2vec2-xls-r-1b-uk-cv | null | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"uk",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the /WORKSPACE/DATA/UK/COMPOSED_DATASET/ - NA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1092
- Wer: 0.1752
- Cer: 0.0323
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 12000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|
| 1.7005 | 1.61 | 500 | 0.4082 | 0.5584 | 0.1164 |
| 1.1555 | 3.22 | 1000 | 0.2020 | 0.2953 | 0.0557 |
| 1.0927 | 4.82 | 1500 | 0.1708 | 0.2584 | 0.0480 |
| 1.0707 | 6.43 | 2000 | 0.1563 | 0.2405 | 0.0450 |
| 1.0728 | 8.04 | 2500 | 0.1620 | 0.2442 | 0.0463 |
| 1.0268 | 9.65 | 3000 | 0.1588 | 0.2378 | 0.0458 |
| 1.0328 | 11.25 | 3500 | 0.1466 | 0.2352 | 0.0442 |
| 1.0249 | 12.86 | 4000 | 0.1552 | 0.2341 | 0.0449 |
| 1.016 | 14.47 | 4500 | 0.1602 | 0.2435 | 0.0473 |
| 1.0164 | 16.08 | 5000 | 0.1491 | 0.2337 | 0.0444 |
| 0.9935 | 17.68 | 5500 | 0.1539 | 0.2373 | 0.0458 |
| 0.9626 | 19.29 | 6000 | 0.1458 | 0.2305 | 0.0434 |
| 0.9505 | 20.9 | 6500 | 0.1368 | 0.2157 | 0.0407 |
| 0.9389 | 22.51 | 7000 | 0.1437 | 0.2231 | 0.0426 |
| 0.9129 | 24.12 | 7500 | 0.1313 | 0.2076 | 0.0394 |
| 0.9118 | 25.72 | 8000 | 0.1292 | 0.2040 | 0.0384 |
| 0.8848 | 27.33 | 8500 | 0.1299 | 0.2028 | 0.0384 |
| 0.8667 | 28.94 | 9000 | 0.1228 | 0.1945 | 0.0367 |
| 0.8641 | 30.55 | 9500 | 0.1223 | 0.1939 | 0.0364 |
| 0.8516 | 32.15 | 10000 | 0.1184 | 0.1876 | 0.0349 |
| 0.8379 | 33.76 | 10500 | 0.1137 | 0.1821 | 0.0338 |
| 0.8235 | 35.37 | 11000 | 0.1127 | 0.1779 | 0.0331 |
| 0.8112 | 36.98 | 11500 | 0.1103 | 0.1766 | 0.0327 |
| 0.8069 | 38.59 | 12000 | 0.1092 | 0.1752 | 0.0323 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2
- Datasets 1.18.4.dev0
- Tokenizers 0.11.0
| {"language": ["uk"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_8_0", "robust-speech-event"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-xls-r-1b-hy", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice uk", "type": "mozilla-foundation/common_voice_8_0", "args": "uk"}, "metrics": [{"type": "wer", "value": 10.406342913776015, "name": "WER LM"}, {"type": "cer", "value": 2.0387492208601703, "name": "CER LM"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "uk"}, "metrics": [{"type": "wer", "value": 40.57, "name": "Test WER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "uk"}, "metrics": [{"type": "wer", "value": 28.95, "name": "Test WER"}]}]}]} | arampacha/wav2vec2-xls-r-1b-uk | null | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"uk",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - HY-AM dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5891
- Wer: 0.6569
**Note**: If you aim for best performance, use [this model](https://huggingface.co/arampacha/wav2vec2-xls-r-300m-hy). It is trained using the noisy student procedure and achieves considerably better results.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 1200
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 9.167 | 16.67 | 100 | 3.5599 | 1.0 |
| 3.2645 | 33.33 | 200 | 3.1771 | 1.0 |
| 3.1509 | 50.0 | 300 | 3.1321 | 1.0 |
| 3.0757 | 66.67 | 400 | 2.8594 | 1.0 |
| 2.5274 | 83.33 | 500 | 1.5286 | 0.9797 |
| 1.6826 | 100.0 | 600 | 0.8058 | 0.7974 |
| 1.2868 | 116.67 | 700 | 0.6713 | 0.7279 |
| 1.1262 | 133.33 | 800 | 0.6308 | 0.7034 |
| 1.0408 | 150.0 | 900 | 0.6056 | 0.6745 |
| 0.9617 | 166.67 | 1000 | 0.5891 | 0.6569 |
| 0.9196 | 183.33 | 1100 | 0.5913 | 0.6432 |
| 0.8853 | 200.0 | 1200 | 0.5924 | 0.6347 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
| {"language": ["hy-AM"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "hy"], "datasets": ["common_voice"], "model-index": [{"name": "", "results": []}]} | arampacha/wav2vec2-xls-r-300m-hy-cv | null | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"hy",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the /WORKSPACE/DATA/HY/NOIZY_STUDENT_3/ - NA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2293
- Wer: 0.3333
- Cer: 0.0602
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 842
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 3.1471 | 7.02 | 400 | 3.1599 | 1.0 | 1.0 |
| 1.8691 | 14.04 | 800 | 0.7674 | 0.7361 | 0.1686 |
| 1.3227 | 21.05 | 1200 | 0.3849 | 0.5336 | 0.1007 |
| 1.163 | 28.07 | 1600 | 0.3015 | 0.4559 | 0.0823 |
| 1.0768 | 35.09 | 2000 | 0.2721 | 0.4032 | 0.0728 |
| 1.0224 | 42.11 | 2400 | 0.2586 | 0.3825 | 0.0691 |
| 0.9817 | 49.12 | 2800 | 0.2458 | 0.3653 | 0.0653 |
| 0.941 | 56.14 | 3200 | 0.2306 | 0.3388 | 0.0605 |
| 0.9235 | 63.16 | 3600 | 0.2315 | 0.3380 | 0.0615 |
| 0.9141 | 70.18 | 4000 | 0.2293 | 0.3333 | 0.0602 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2
- Datasets 1.18.4.dev0
- Tokenizers 0.11.0
| {"language": ["hy"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "robust-speech-event", "hy", "hf-asr-leaderboard"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-xls-r-300m-hy", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice hy-AM", "type": "mozilla-foundation/common_voice_8_0", "args": "hy-AM"}, "metrics": [{"type": "wer", "value": 13.192818110850899, "name": "WER LM"}, {"type": "cer", "value": 2.787051087506323, "name": "CER LM"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "hy"}, "metrics": [{"type": "wer", "value": 22.246048764990867, "name": "Test WER"}, {"type": "cer", "value": 7.59406739840239, "name": "Test CER"}]}]}]} | arampacha/wav2vec2-xls-r-300m-hy | null | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"robust-speech-event",
"hy",
"hf-asr-leaderboard",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
automatic-speech-recognition | transformers | {} | arampacha/wav2vec2-xls-r-300m-ka | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
question-answering | transformers | ---
datasets:
- squad
widget:
- text: "Which name is also used to describe the Amazon rainforest in English?"
context: "The Amazon rainforest (Portuguese: Floresta Amazônica or Amazônia; Spanish: Selva Amazónica, Amazonía or usually Amazonia; French: Forêt amazonienne; Dutch: Amazoneregenwoud), also known in English as Amazonia or the Amazon Jungle, is a moist broadleaf forest that covers most of the Amazon basin of South America. This basin encompasses 7,000,000 square kilometres (2,700,000 sq mi), of which 5,500,000 square kilometres (2,100,000 sq mi) are covered by the rainforest. This region includes territory belonging to nine nations. The majority of the forest is contained within Brazil, with 60% of the rainforest, followed by Peru with 13%, Colombia with 10%, and with minor amounts in Venezuela, Ecuador, Bolivia, Guyana, Suriname and French Guiana. States or departments in four nations contain \"Amazonas\" in their names. The Amazon represents over half of the planet's remaining rainforests, and comprises the largest and most biodiverse tract of tropical rainforest in the world, with an estimated 390 billion individual trees divided into 16,000 species."
- text: "How many square kilometers of rainforest is covered in the basin?"
context: "The Amazon rainforest (Portuguese: Floresta Amazônica or Amazônia; Spanish: Selva Amazónica, Amazonía or usually Amazonia; French: Forêt amazonienne; Dutch: Amazoneregenwoud), also known in English as Amazonia or the Amazon Jungle, is a moist broadleaf forest that covers most of the Amazon basin of South America. This basin encompasses 7,000,000 square kilometres (2,700,000 sq mi), of which 5,500,000 square kilometres (2,100,000 sq mi) are covered by the rainforest. This region includes territory belonging to nine nations. The majority of the forest is contained within Brazil, with 60% of the rainforest, followed by Peru with 13%, Colombia with 10%, and with minor amounts in Venezuela, Ecuador, Bolivia, Guyana, Suriname and French Guiana. States or departments in four nations contain \"Amazonas\" in their names. The Amazon represents over half of the planet's remaining rainforests, and comprises the largest and most biodiverse tract of tropical rainforest in the world, with an estimated 390 billion individual trees divided into 16,000 species." | {} | aravind-812/roberta-train-json | null | [
"transformers",
"pytorch",
"jax",
"roberta",
"question-answering",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [google/pegasus-large](https://huggingface.co/google/pegasus-large) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0
- Datasets 1.15.1
- Tokenizers 0.10.3
| {"tags": ["generated_from_trainer"], "model-index": [{"name": "results", "results": []}]} | arawat/pegasus-custom-xsum | null | [
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | arch-raven/ahsg | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | archifarmer/9film | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-generation | transformers |
# HourAI bot based on DialoGPT | {"tags": ["conversational"]} | archmagos/HourAI | null | [
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-generation | transformers |
# Mini-Me | {"tags": ["conversational"]} | ardatasc/miniMe-version1 | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | ardatasc/myself | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | aretw0/mbart-large-cc25-finetuned-en-to-ro-fp16False | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | aretw0/mbart-large-cc25-finetuned-en-to-ro | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | aretw0/opus-mt-en-ro-finetuned-en-to-ro-epoch.175-fp16False-batch16 | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | aretw0/opus-mt-en-ro-finetuned-en-to-ro-epoch.25-fp16False-batch4 | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | aretw0/opus-mt-en-ro-finetuned-en-to-ro-epoch.25-fp16False-batch8 | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | aretw0/opus-mt-en-ro-finetuned-en-to-ro-epoch.5-fp16False-batch8 | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | aretw0/opus-mt-en-ro-finetuned-en-to-ro | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | aretw0/opus-mt-en-ro-finetuned-en-to-ro_fp16False_batch8 | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | aretw0/t5-small-finetuned-en-to-ro-batch8 | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-en-to-ro-dataset_20-input_64
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt16 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4335
- Bleu: 8.6652
- Gen Len: 18.2596
## Model description
More information needed
## Intended uses & limitations
More information needed
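As a minimal illustration (not an official recommendation from the author), the checkpoint can be used for English-to-Romanian translation with the standard `transformers` pipeline; the input sentence below is a placeholder.
```python
from transformers import pipeline

# Sketch only: the model id comes from this card.
translator = pipeline(
    "translation_en_to_ro",
    model="aretw0/t5-small-finetuned-en-to-ro-dataset_20-input_64",
)
print(translator("The weather is nice today.", max_length=64)[0]["translation_text"])
```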
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| 0.6351 | 1.0 | 7629 | 1.4335 | 8.6652 | 18.2596 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["wmt16"], "metrics": ["bleu"], "model-index": [{"name": "t5-small-finetuned-en-to-ro-dataset_20-input_64", "results": [{"task": {"type": "text2text-generation", "name": "Sequence-to-sequence Language Modeling"}, "dataset": {"name": "wmt16", "type": "wmt16", "args": "ro-en"}, "metrics": [{"type": "bleu", "value": 8.6652, "name": "Bleu"}]}]}]} | aretw0/t5-small-finetuned-en-to-ro-dataset_20-input_64 | null | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-en-to-ro-dataset_20
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt16 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4052
- Bleu: 7.3293
- Gen Len: 18.2556
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| 0.6029 | 1.0 | 7629 | 1.4052 | 7.3293 | 18.2556 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["wmt16"], "metrics": ["bleu"], "model-index": [{"name": "t5-small-finetuned-en-to-ro-dataset_20", "results": [{"task": {"type": "text2text-generation", "name": "Sequence-to-sequence Language Modeling"}, "dataset": {"name": "wmt16", "type": "wmt16", "args": "ro-en"}, "metrics": [{"type": "bleu", "value": 7.3293, "name": "Bleu"}]}]}]} | aretw0/t5-small-finetuned-en-to-ro-dataset_20 | null | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-en-to-ro-epoch.04375
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt16 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4137
- Bleu: 7.3292
- Gen Len: 18.2541
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 0.04375
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| 0.6211 | 0.04 | 1669 | 1.4137 | 7.3292 | 18.2541 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["wmt16"], "metrics": ["bleu"], "model-index": [{"name": "t5-small-finetuned-en-to-ro-epoch.04375", "results": [{"task": {"type": "text2text-generation", "name": "Sequence-to-sequence Language Modeling"}, "dataset": {"name": "wmt16", "type": "wmt16", "args": "ro-en"}, "metrics": [{"type": "bleu", "value": 7.3292, "name": "Bleu"}]}]}]} | aretw0/t5-small-finetuned-en-to-ro-epoch.04375 | null | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | aretw0/t5-small-finetuned-en-to-ro-epoch.175 | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | aretw0/t5-small-finetuned-en-to-ro-fp16False-batch16-epoch.021875 | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | aretw0/t5-small-finetuned-en-to-ro-fp16False-batch16-epoch.175 | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | aretw0/t5-small-finetuned-en-to-ro-fp16False-batch8 | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | aretw0/t5-small-finetuned-en-to-ro-fp16False | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | aretw0/t5-small-finetuned-en-to-ro | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | arev/translationtest | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
feature-extraction | transformers | hello
| {} | argv947059/example-based-ner-bert | null | [
"transformers",
"pytorch",
"jax",
"bert",
"feature-extraction",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | ari9dam/tablerow2text-prt-openweb | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-classification | transformers |
# citizenlab/distilbert-base-multilingual-cased-toxicity
This is a multilingual DistilBERT sequence classifier trained on the [JIGSAW Toxic Comment Classification Challenge](https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge) dataset.
## How to use it
```python
from transformers import pipeline
model_path = "citizenlab/distilbert-base-multilingual-cased-toxicity"
toxicity_classifier = pipeline("text-classification", model=model_path, tokenizer=model_path)
toxicity_classifier("this is a lovely message")
> [{'label': 'not_toxic', 'score': 0.9954179525375366}]
toxicity_classifier("you are an idiot and you and your family should go back to your country")
> [{'label': 'toxic', 'score': 0.9948776960372925}]
```
## Evaluation
### Accuracy
```
Accuracy Score = 0.9425
F1 Score (Micro) = 0.9450549450549449
F1 Score (Macro) = 0.8491432341169309
``` | {"language": ["en", "nl", "fr", "pt", "it", "es", "de", "da", "pl", "af"], "datasets": ["jigsaw_toxicity_pred"], "metrics": ["F1 Accuracy"], "pipeline_type": "text-classification", "widget": [{"text": "this is a lovely message", "example_title": "Example 1", "multi_class": false}, {"text": "you are an idiot and you and your family should go back to your country", "example_title": "Example 2", "multi_class": false}]} | citizenlab/distilbert-base-multilingual-cased-toxicity | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"en",
"nl",
"fr",
"pt",
"it",
"es",
"de",
"da",
"pl",
"af",
"dataset:jigsaw_toxicity_pred",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7751
- Accuracy: 0.9113
## Model description
More information needed
## Intended uses & limitations
More information needed
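As a rough usage sketch, the checkpoint can be queried for intent detection over a CLINC-style utterance; the intent label names returned depend on the `id2label` mapping stored with this fine-tune.
```python
from transformers import pipeline

# Sketch only: the model id comes from this card, the utterance is a placeholder.
classifier = pipeline(
    "text-classification",
    model="arianpasquali/distilbert-base-uncased-finetuned-clinc",
)
print(classifier("Please book me a ride to the airport for tomorrow morning."))
```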
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.315 | 1.0 | 318 | 3.3087 | 0.74 |
| 2.6371 | 2.0 | 636 | 1.8833 | 0.8381 |
| 1.5388 | 3.0 | 954 | 1.1547 | 0.8929 |
| 1.0076 | 4.0 | 1272 | 0.8590 | 0.9071 |
| 0.79 | 5.0 | 1590 | 0.7751 | 0.9113 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.7.1
- Datasets 1.16.1
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["clinc_oos"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased-finetuned-clinc", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "clinc_oos", "type": "clinc_oos", "args": "plus"}, "metrics": [{"type": "accuracy", "value": 0.9112903225806451, "name": "Accuracy"}]}]}]} | arianpasquali/distilbert-base-uncased-finetuned-clinc | null | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers |
# citizenlab/twitter-xlm-roberta-base-sentiment-finetunned
This is a multilingual XLM-RoBERTa sequence classifier fine-tuned from the [Cardiff NLP Group](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment) sentiment classification model.
## How to use it
```python
from transformers import pipeline
model_path = "citizenlab/twitter-xlm-roberta-base-sentiment-finetunned"
sentiment_classifier = pipeline("text-classification", model=model_path, tokenizer=model_path)
sentiment_classifier("this is a lovely message")
> [{'label': 'Positive', 'score': 0.9918450713157654}]
sentiment_classifier("you are an idiot and you and your family should go back to your country")
> [{'label': 'Negative', 'score': 0.9849833846092224}]
```
## Evaluation
```
              precision    recall  f1-score   support

    Negative       0.57      0.14      0.23        28
     Neutral       0.78      0.94      0.86       132
    Positive       0.89      0.80      0.85        51

    accuracy                           0.80       211
   macro avg       0.75      0.63      0.64       211
weighted avg       0.78      0.80      0.77       211
```
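The report above can be reproduced roughly as follows; `texts` and `y_true` stand in for a held-out evaluation set, which is not distributed with this model.
```python
from sklearn.metrics import classification_report
from transformers import pipeline

# Sketch only: replace the placeholder texts/labels with a real evaluation split.
sentiment_classifier = pipeline(
    "text-classification",
    model="citizenlab/twitter-xlm-roberta-base-sentiment-finetunned",
)
texts = ["this is a lovely message", "you are an idiot"]
y_true = ["Positive", "Negative"]
y_pred = [out["label"] for out in sentiment_classifier(texts, truncation=True)]
print(classification_report(y_true, y_pred, digits=2))
```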
| {"language": ["en", "nl", "fr", "pt", "it", "es", "de", "da", "pl", "af"], "datasets": ["jigsaw_toxicity_pred"], "metrics": ["F1 Accuracy"], "pipeline_type": "text-classification", "widget": [{"text": "this is a lovely message", "example_title": "Example 1", "multi_class": false}, {"text": "you are an idiot and you and your family should go back to your country", "example_title": "Example 2", "multi_class": false}]} | citizenlab/twitter-xlm-roberta-base-sentiment-finetunned | null | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"en",
"nl",
"fr",
"pt",
"it",
"es",
"de",
"da",
"pl",
"af",
"dataset:jigsaw_toxicity_pred",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | arie5555/distilbert-base-uncased-finetuned-mnli | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-generation | transformers | # Rick DialoGPT Model | {"tags": ["conversational"]} | arifbhrn/DialogGPT-small-Rickk | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
automatic-speech-recognition | transformers | # Wav2Vec2-Large-XLSR-Bengali
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Bengali using a subset of 40,000 utterances from the [Bengali ASR training data set containing ~196K utterances](https://www.openslr.org/53/). WER was tested on ~4,200 utterances held out from training.
When using this model, make sure that your speech input is sampled at 16 kHz.
Training script: train.py
Data preparation notebook: https://colab.research.google.com/drive/1JMlZPU-DrezXjZ2t7sOVqn7CJjZhdK2q?usp=sharing
Inference notebook: https://colab.research.google.com/drive/1uKC2cK9JfUPDTUHbrNdOYqKtNozhxqgZ?usp=sharing
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
processor = Wav2Vec2Processor.from_pretrained("arijitx/wav2vec2-large-xlsr-bengali")
model = Wav2Vec2ForCTC.from_pretrained("arijitx/wav2vec2-large-xlsr-bengali")
# model = model.to("cuda")
resampler = torchaudio.transforms.Resample(TEST_AUDIO_SR, 16_000)  # TEST_AUDIO_SR = sampling rate of your input audio
def speech_file_to_array_fn(batch):
    # load the audio file and resample it to the 16 kHz expected by the model
    speech_array, sampling_rate = torchaudio.load(batch)
    speech = resampler(speech_array).squeeze().numpy()
    return speech
speech_array = speech_file_to_array_fn("test_file.wav")
inputs = processor(speech_array, sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
preds = processor.batch_decode(predicted_ids)[0]
print(preds.replace("[PAD]",""))
```
**Test Result**: WER on ~4,200 utterances: 32.45%
| {"language": "Bengali", "license": "cc-by-sa-4.0", "tags": ["bn", "audio", "automatic-speech-recognition", "speech"], "datasets": ["OpenSLR"], "metrics": ["wer"], "model-index": [{"name": "XLSR Wav2Vec2 Bengali by Arijit", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "OpenSLR", "type": "OpenSLR", "args": "ben"}, "metrics": [{"type": "wer", "value": 32.45, "name": "Test WER"}]}]}]} | arijitx/wav2vec2-large-xlsr-bengali | null | [
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"bn",
"audio",
"speech",
"dataset:OpenSLR",
"license:cc-by-sa-4.0",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
automatic-speech-recognition | transformers | This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the OPENSLR_SLR53 - bengali dataset.
It achieves the following results on the evaluation set:
Without language model:
- WER: 0.21726385291857586
- CER: 0.04725010353701041
With a 5-gram language model trained on 30M sentences randomly chosen from the [AI4Bharat IndicCorp](https://indicnlp.ai4bharat.org/corpora/) dataset:
- WER: 0.15322879016421437
- CER: 0.03413696666806267
Note: 5% of the data (10,935 samples) was held out for evaluation and was not part of training; training used the first 95% of the data and evaluation the last 5%. Training was stopped after 180k steps. Output predictions are available under the files section.
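A minimal inference sketch for the no-language-model setting reported above; the audio path is a placeholder and should point to 16 kHz mono speech.
```python
from transformers import pipeline

# Sketch only: greedy CTC decoding, i.e. the "without language model" numbers above.
asr = pipeline(
    "automatic-speech-recognition",
    model="arijitx/wav2vec2-xls-r-300m-bengali",
)
print(asr("audio.wav")["text"])
```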
### Training hyperparameters
The following hyperparameters were used during training:
- dataset_name="openslr"
- model_name_or_path="facebook/wav2vec2-xls-r-300m"
- dataset_config_name="SLR53"
- output_dir="./wav2vec2-xls-r-300m-bengali"
- overwrite_output_dir
- num_train_epochs="50"
- per_device_train_batch_size="32"
- per_device_eval_batch_size="32"
- gradient_accumulation_steps="1"
- learning_rate="7.5e-5"
- warmup_steps="2000"
- length_column_name="input_length"
- evaluation_strategy="steps"
- text_column_name="sentence"
- chars_to_ignore , ? . ! \- \; \: \" “ % ‘ ” � — ’ … –
- save_steps="2000"
- eval_steps="3000"
- logging_steps="100"
- layerdrop="0.0"
- activation_dropout="0.1"
- save_total_limit="3"
- freeze_feature_encoder
- feat_proj_dropout="0.0"
- mask_time_prob="0.75"
- mask_time_length="10"
- mask_feature_prob="0.25"
- mask_feature_length="64"
- preprocessing_num_workers 32
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
### Notes
- Training and eval code modified from: https://github.com/huggingface/transformers/tree/master/examples/research_projects/robust-speech-event.
- Bengali speech data was not available from the Common Voice or LibriSpeech multilingual datasets, so OpenSLR53 has been used.
- A minimum audio duration of 0.5 s was used to filter the training data, which excluded maybe 10-20 samples.
- OpenSLR53 transcripts are *not* part of the training data for the language model used in evaluation.
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"bn",
"hf-asr-leaderboard",
"openslr_SLR53",
"robust-speech-event",
"dataset:openslr",
"dataset:SLR53",
"dataset:AI4Bharat/IndicCorp",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
fill-mask | transformers | {} | aripo99/dummy_model | null | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | aristotletan/albert-base-v2-finetuned-sst2 | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-finetuned-xsum
This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on the wsj_markets dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8497
- Rouge1: 15.3934
- Rouge2: 7.0378
- Rougel: 13.9522
- Rougelsum: 14.3541
- Gen Len: 20.0
## Model description
More information needed
## Intended uses & limitations
More information needed
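As an illustrative sketch (the input article is a placeholder and the generation settings are assumptions, not the author's):
```python
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="aristotletan/bart-large-finetuned-xsum",
)
article = "Stocks slipped on Monday as investors weighed fresh inflation data and bond yields climbed ..."
# max_length mirrors the Gen Len of 20 reported above
print(summarizer(article, max_length=20, min_length=5)[0]["summary_text"])
```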
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.0964 | 1.0 | 1735 | 0.9365 | 18.703 | 12.7539 | 18.1293 | 18.5397 | 20.0 |
| 0.95 | 2.0 | 3470 | 0.8871 | 19.5223 | 13.0938 | 18.9148 | 18.8363 | 20.0 |
| 0.8687 | 3.0 | 5205 | 0.8587 | 15.0915 | 7.142 | 13.6693 | 14.5975 | 20.0 |
| 0.7989 | 4.0 | 6940 | 0.8569 | 18.243 | 11.4495 | 17.4326 | 17.489 | 20.0 |
| 0.7493 | 5.0 | 8675 | 0.8497 | 15.3934 | 7.0378 | 13.9522 | 14.3541 | 20.0 |
### Framework versions
- Transformers 4.8.2
- Pytorch 1.9.0+cu102
- Datasets 1.10.0
- Tokenizers 0.10.3
| {"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["wsj_markets"], "metrics": ["rouge"], "model_index": [{"name": "bart-large-finetuned-xsum", "results": [{"task": {"name": "Sequence-to-sequence Language Modeling", "type": "text2text-generation"}, "dataset": {"name": "wsj_markets", "type": "wsj_markets", "args": "default"}, "metric": {"name": "Rouge1", "type": "rouge", "value": 15.3934}}]}]} | aristotletan/bart-large-finetuned-xsum | null | [
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"dataset:wsj_markets",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | aristotletan/electra-base-discriminator-finetuned-sst2 | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-sst2
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the scim dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4632
- Accuracy: 0.9111
## Model description
More information needed
## Intended uses & limitations
More information needed
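A minimal sketch with the low-level API; the example sentence is a placeholder and the label names depend on the `id2label` mapping stored with the model.
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "aristotletan/roberta-base-finetuned-sst2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("This is an example document.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(dim=-1).item()
print(model.config.id2label[pred])
```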
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 90 | 2.0273 | 0.6667 |
| No log | 2.0 | 180 | 0.8802 | 0.8556 |
| No log | 3.0 | 270 | 0.5908 | 0.8889 |
| No log | 4.0 | 360 | 0.4632 | 0.9111 |
| No log | 5.0 | 450 | 0.4294 | 0.9111 |
### Framework versions
- Transformers 4.9.1
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
| {"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["scim"], "metrics": ["accuracy"], "model_index": [{"name": "roberta-base-finetuned-sst2", "results": [{"task": {"name": "Text Classification", "type": "text-classification"}, "dataset": {"name": "scim", "type": "scim", "args": "eod"}, "metric": {"name": "Accuracy", "type": "accuracy", "value": 0.9111111111111111}}]}]} | aristotletan/roberta-base-finetuned-sst2 | null | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:scim",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers | {} | aristotletan/sc-distilbert | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-classification | transformers | {} | aristotletan/scim-distillbert | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-classification | transformers | {} | aristotletan/scim-distilroberta | null | [
"transformers",
"pytorch",
"jax",
"roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | aristotletan/t5-base-finetuned-wsj | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | aristotletan/t5-large-finetuned-wsj | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wsj_markets dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1447
- Rouge1: 10.4492
- Rouge2: 3.9563
- Rougel: 9.3368
- Rougelsum: 9.9828
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
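A usage sketch with T5's `summarize:` task prefix; the input text is a placeholder, and `max_length` mirrors the reported Gen Len of 19.
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "aristotletan/t5-small-finetuned-xsum"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "summarize: Shares of major banks rose after the central bank held interest rates steady ..."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_length=19)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```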
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:------:|:---------:|:-------:|
| 2.2742 | 1.0 | 868 | 1.3135 | 9.4644 | 2.618 | 8.4048 | 8.9764 | 19.0 |
| 1.4607 | 2.0 | 1736 | 1.2134 | 9.6327 | 3.8535 | 9.0703 | 9.2466 | 19.0 |
| 1.3579 | 3.0 | 2604 | 1.1684 | 10.1616 | 3.5498 | 9.2294 | 9.4507 | 19.0 |
| 1.3314 | 4.0 | 3472 | 1.1514 | 10.0621 | 3.6907 | 9.1635 | 9.4955 | 19.0 |
| 1.3084 | 5.0 | 4340 | 1.1447 | 10.4492 | 3.9563 | 9.3368 | 9.9828 | 19.0 |
### Framework versions
- Transformers 4.8.2
- Pytorch 1.9.0+cu102
- Datasets 1.10.0
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["wsj_markets"], "metrics": ["rouge"], "model_index": [{"name": "t5-small-finetuned-xsum", "results": [{"task": {"name": "Sequence-to-sequence Language Modeling", "type": "text2text-generation"}, "dataset": {"name": "wsj_markets", "type": "wsj_markets", "args": "default"}, "metric": {"name": "Rouge1", "type": "rouge", "value": 10.4492}}]}]} | aristotletan/t5-small-finetuned-xsum | null | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:wsj_markets",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text2text-generation | transformers |
# Model Trained Using AutoNLP
- Problem type: Summarization
- Model ID: 15892673
## Validation Metrics
- Loss: 1.3661842346191406
- Rouge1: 50.8868
- Rouge2: 26.996
- RougeL: 42.9088
- RougeLsum: 46.6748
- Gen Len: 20.716
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/arjun3816/autonlp-pegas_large_samsum-15892673
``` | {"language": "unk", "tags": "autonlp", "datasets": ["arjun3816/autonlp-data-pegas_large_samsum"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}]} | arjun3816/autonlp-pegas_large_samsum-15892673 | null | [
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"autonlp",
"unk",
"dataset:arjun3816/autonlp-data-pegas_large_samsum",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text2text-generation | transformers |
# Model Trained Using AutoNLP
- Problem type: Summarization
- Model ID: 15492651
## Validation Metrics
- Loss: 1.4060134887695312
- Rouge1: 50.9953
- Rouge2: 35.9204
- RougeL: 43.5673
- RougeLsum: 46.445
- Gen Len: 58.0193
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/arjun3816/autonlp-sam_summarization1-15492651
``` | {"language": "unk", "tags": "autonlp", "datasets": ["arjun3816/autonlp-data-sam_summarization1"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}]} | arjun3816/autonlp-sam_summarization1-15492651 | null | [
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"autonlp",
"unk",
"dataset:arjun3816/autonlp-data-sam_summarization1",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null |
# Noise2Recon
> **Noise2Recon: A Semi-Supervised Framework for Joint MRI Reconstruction and Denoising**\
> Arjun Desai, Batu Ozturkler, Christopher Sandino, Shreyas Vasanawala, Brian Hargreaves, Christopher Ré, John Pauly, Akshay Chaudhari\
> https://arxiv.org/abs/2110.00075
This repository contains the artifacts for the Noise2Recon paper. To use our code
and artifacts in your research, please use the [Meddlr](https://github.com/ad12/meddlr) package.
| {"language": "en", "license": "apache-2.0", "tags": ["mri", "reconstruction", "denoising"]} | arjundd/noise2recon-release | null | [
"mri",
"reconstruction",
"denoising",
"en",
"arxiv:2110.00075",
"license:apache-2.0",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {"license": "apache-2.0"} | arjundd/skm-tea-models | null | [
"license:apache-2.0",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null |
# VORTEX
<div align="center">
<img src="https://drive.google.com/uc?export=view&id=1q0jAm6Kg5ZhRg3h0w0ZbtIgcRF3_-Vgb" alt="Vortex Schematic" width="700px" />
</div>
> **VORTEX: Physics-Driven Data Augmentations for Consistency Training for Robust Accelerated MRI Reconstruction**\
> Arjun Desai, Beliz Gunel, Batu Ozturkler, Harris Beg, Shreyas Vasanawala, Brian Hargreaves, Christopher Ré, John Pauly, Akshay Chaudhari\
> https://arxiv.org/abs/2111.02549
This repository contains the artifacts for the VORTEX paper. To use our code
and artifacts in your research, please use the [Meddlr](https://github.com/ad12/meddlr) package.
| {"language": "en", "license": "apache-2.0", "tags": ["mri", "reconstruction", "artifact correction"]} | arjundd/vortex-release | null | [
"mri",
"reconstruction",
"artifact correction",
"en",
"arxiv:2111.02549",
"license:apache-2.0",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | arjunsanchala/wav2 | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-classification | transformers | {} | arjunth2001/priv_ftc | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
question-answering | transformers | {} | arjunth2001/priv_qna | null | [
"transformers",
"pytorch",
"roberta",
"question-answering",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text2text-generation | transformers | {} | arjunth2001/priv_sum | null | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-multilingual-cased-sentiment-2
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5882
- Accuracy: 0.7614
- F1: 0.7614
## Model description
More information needed
## Intended uses & limitations
More information needed
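A short usage sketch; the review is a placeholder, and the returned label names depend on the label mapping saved with this fine-tune (they are not documented in the card).
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="arjuntheprogrammer/distilbert-base-multilingual-cased-sentiment-2",
)
# Works across the languages of the base multilingual checkpoint.
print(classifier("Ce produit est arrivé cassé, je suis très déçu."))
```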
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00024
- train_batch_size: 16
- eval_batch_size: 16
- seed: 33
- distributed_type: sagemaker_data_parallel
- num_devices: 8
- total_train_batch_size: 128
- total_eval_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.1
- Datasets 1.15.1
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["amazon_reviews_multi"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "distilbert-base-multilingual-cased-sentiment-2", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "amazon_reviews_multi", "type": "amazon_reviews_multi", "args": "en"}, "metrics": [{"type": "accuracy", "value": 0.7614, "name": "Accuracy"}, {"type": "f1", "value": 0.7614, "name": "F1"}]}]}]} | arjuntheprogrammer/distilbert-base-multilingual-cased-sentiment-2 | null | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:amazon_reviews_multi",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | arjunusha/zeena | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | arjunv786/a-fancy-model-name | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | arklemmer/fridayplay | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | arkosark/t5-small-finetuned-xsum | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | arkothiwala/test-pegasus-finetuned-news | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
fill-mask | transformers |
BERTweet-FA: A pre-trained language model for Persian (a.k.a Farsi) Tweets
---
BERTweet-FA is a transformer-based model trained on 20,665,964 Persian tweets. Although it has been trained for only one epoch (322,906 steps), it already captures the meaning of most conversational sentences used in Farsi. Note that the architecture of this model follows the original BERT [[Devlin et al.](https://arxiv.org/abs/1810.04805)].
How to use the Model
---
```python
from transformers import BertForMaskedLM, BertTokenizer, pipeline
model = BertForMaskedLM.from_pretrained('arm-on/BERTweet-FA')
tokenizer = BertTokenizer.from_pretrained('arm-on/BERTweet-FA')
fill_sentence = pipeline('fill-mask', model=model, tokenizer=tokenizer)
fill_sentence('اینجا جمله مورد نظر خود را بنویسید و کلمه موردنظر را [MASK] کنید')  # "Write your sentence here and put [MASK] in place of the target word"
```
The Training Data
---
The first version of the model was trained on the "[Large Scale Colloquial Persian Dataset](https://iasbs.ac.ir/~ansari/lscp/)" containing more than 20 million tweets in Farsi, gathered by Khojasteh et al. and published in 2020.
Evaluation
---
| Training Loss | Epoch | Step |
|:-------------:|:-----:|:-----:|
| 0.0036 | 1.0 | 322906 |
Contributors
---
- Arman Malekzadeh [[Github](https://github.com/arm-on)] | {"language": "fa", "license": "apache-2.0", "tags": ["BERTweet"], "widget": [{"text": "\u0627\u06cc\u0646 \u0628\u0648\u062f [MASK] \u0647\u0627\u06cc \u0645\u0627\u061f"}, {"text": "\u062f\u0627\u062f\u0627\u0686 \u062f\u0627\u0631\u06cc [MASK] \u0645\u06cc\u0632\u0646\u06cc"}, {"text": "\u0628\u0647 \u0639\u0644\u06cc [MASK] \u0645\u06cc\u06af\u0641\u062a\u0646 \u062c\u0627\u062f\u0648\u06af\u0631"}, {"text": "\u0622\u062e\u0647 \u0645\u062d\u0633\u0646 [MASK] \u0647\u0645 \u0634\u062f \u062e\u0648\u0627\u0646\u0646\u062f\u0647\u061f"}, {"text": "\u067e\u0633\u0631 \u0639\u062c\u0628 [MASK] \u0632\u062f"}], "model-index": [{"name": "BERTweet-FA", "results": []}]} | arm-on/BERTweet-FA | null | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"BERTweet",
"fa",
"arxiv:1810.04805",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |