| pipeline_tag | library_name | text | metadata | id | last_modified | tags | sha | created_at |
|---|---|---|---|---|---|---|---|---|
null | null | {} | Sohail/Client_details | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | Solo9x/DialoGPT-medium-harrypotter | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-generation | null |
# My Awesome Model | {"tags": ["conversational"]} | SonMooSans/DialoGPT-small-joshua | null | [
"conversational",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-generation | transformers |
# My Awesome Model | {"tags": ["conversational"]} | SonMooSans/test | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8549
- Matthews Correlation: 0.5332
## Model description
More information needed
## Intended uses & limitations
More information needed
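As a minimal usage sketch (an addition, not from the original card; it assumes the standard `transformers` text-classification pipeline, with label names coming from the checkpoint config):
```python
from transformers import pipeline

# CoLA is a grammatical-acceptability task, so the scores rate acceptability.
classifier = pipeline(
    "text-classification",
    model="SongRb/distilbert-base-uncased-finetuned-cola",
)
print(classifier("The book was written by the author."))
```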
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5213 | 1.0 | 535 | 0.5163 | 0.4183 |
| 0.3479 | 2.0 | 1070 | 0.5351 | 0.5182 |
| 0.231 | 3.0 | 1605 | 0.6271 | 0.5291 |
| 0.166 | 4.0 | 2140 | 0.7531 | 0.5279 |
| 0.1313 | 5.0 | 2675 | 0.8549 | 0.5332 |
### Framework versions
- Transformers 4.10.0.dev0
- Pytorch 1.8.1
- Datasets 1.11.0
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["matthews_correlation"], "model_index": [{"name": "distilbert-base-uncased-finetuned-cola", "results": [{"task": {"name": "Text Classification", "type": "text-classification"}, "dataset": {"name": "glue", "type": "glue", "args": "cola"}, "metric": {"name": "Matthews Correlation", "type": "matthews_correlation", "value": 0.5332198659134496}}]}]} | SongRb/distilbert-base-uncased-finetuned-cola | null | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0746
- Precision: 0.9347
- Recall: 0.9426
- F1: 0.9386
- Accuracy: 0.9851
## Model description
More information needed
## Intended uses & limitations
More information needed
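A minimal usage sketch (an addition, not from the original card), assuming the standard `transformers` token-classification pipeline:
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="SongRb/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",  # merge sub-word tokens into entity spans
)
print(ner("Hugging Face is based in New York City."))
```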
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0832 | 1.0 | 3511 | 0.0701 | 0.9317 | 0.9249 | 0.9283 | 0.9827 |
| 0.0384 | 2.0 | 7022 | 0.0701 | 0.9282 | 0.9410 | 0.9346 | 0.9845 |
| 0.0222 | 3.0 | 10533 | 0.0746 | 0.9347 | 0.9426 | 0.9386 | 0.9851 |
### Framework versions
- Transformers 4.10.0.dev0
- Pytorch 1.8.1
- Datasets 1.11.0
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["conll2003"], "metrics": ["precision", "recall", "f1", "accuracy"], "model_index": [{"name": "distilbert-base-uncased-finetuned-ner", "results": [{"task": {"name": "Token Classification", "type": "token-classification"}, "dataset": {"name": "conll2003", "type": "conll2003", "args": "conll2003"}, "metric": {"name": "Accuracy", "type": "accuracy", "value": 0.9850826886110537}}]}]} | SongRb/distilbert-base-uncased-finetuned-ner | null | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | SongRb/distilgpt2-finetuned-wikitext2 | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | SongRb/distilroberta-base-finetuned-wikitext2 | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
question-answering | transformers |
# DistilBERT with a second step of distillation
## Model description
This model replicates the "DistilBERT (D)" model from Table 2 of the [DistilBERT paper](https://arxiv.org/pdf/1910.01108.pdf). In this approach, a DistilBERT student is fine-tuned on SQuAD v1.1, but with a BERT model (also fine-tuned on SQuAD v1.1) acting as a teacher for a second step of task-specific distillation.
In this version, the following pre-trained models were used:
* Student: `distilbert-base-uncased`
* Teacher: `lewtun/bert-base-uncased-finetuned-squad-v1`
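For illustration, task-specific distillation typically adds a soft-target loss between the student's and teacher's logits. The sketch below is generic, not the exact recipe behind this checkpoint; the temperature and loss weighting are assumptions:
```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # KL divergence between softened distributions; the T**2 factor keeps
    # gradient magnitudes comparable across temperatures.
    soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    return F.kl_div(soft_student, soft_teacher, reduction="batchmean") * temperature ** 2
```
In practice this term is combined with the usual cross-entropy loss on the gold SQuAD answer spans.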
## Training data
This model was trained on the SQuAD v1.1 dataset which can be obtained from the `datasets` library as follows:
```python
from datasets import load_dataset
squad = load_dataset('squad')
```
## Training procedure
## Eval results
| | Exact Match | F1 |
|------------------|-------------|------|
| DistilBERT paper | 79.1 | 86.9 |
| Ours | 78.4 | 86.5 |
The scores were calculated using the `squad` metric from `datasets`.
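A sketch of how such scores can be reproduced with the `squad` metric (the example ID and answers below are hypothetical):
```python
from datasets import load_metric

squad_metric = load_metric("squad")
predictions = [{"id": "q1", "prediction_text": "Denver Broncos"}]
references = [{"id": "q1", "answers": {"text": ["Denver Broncos"], "answer_start": [177]}}]
print(squad_metric.compute(predictions=predictions, references=references))
# {'exact_match': 100.0, 'f1': 100.0}
```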
### BibTeX entry and citation info
```bibtex
@misc{sanh2020distilbert,
title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter},
author={Victor Sanh and Lysandre Debut and Julien Chaumond and Thomas Wolf},
year={2020},
eprint={1910.01108},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | {"language": ["en"], "license": "apache-2.0", "tags": ["question-answering"], "datasets": ["squad"], "metrics": ["squad"], "thumbnail": "https://github.com/karanchahal/distiller/blob/master/distiller.jpg"} | Sonny/distilbert-base-uncased-finetuned-squad-d5716d28 | null | [
"transformers",
"pytorch",
"distilbert",
"fill-mask",
"question-answering",
"en",
"dataset:squad",
"arxiv:1910.01108",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
fill-mask | transformers | {} | Sonny/dummy-model | null | [
"transformers",
"pytorch",
"camembert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
fill-mask | transformers | This is a test model2. | {} | Sonny/dummy-model2 | null | [
"transformers",
"camembert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | Sonny/dummy-model3 | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
fill-mask | transformers | {} | Soonhwan-Kwon/xlm-roberta-xlarge | null | [
"transformers",
"pytorch",
"xlm-roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
fill-mask | transformers | {} | Soonhwan-Kwon/xlm-roberta-xxlarge | null | [
"transformers",
"pytorch",
"xlm-roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text2text-generation | transformers | {} | SophieTr/PPO_training | null | [
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text2text-generation | transformers | This is the model trained so far, saved before the run timed out.
| {} | SophieTr/distil-pegasus-reddit | null | [
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tune-Pegasus-large
This model is a fine-tuned version of [google/pegasus-large](https://huggingface.co/google/pegasus-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 11.0526
## Model description
More information needed
## Intended uses & limitations
More information needed
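A minimal usage sketch (added here; it assumes the checkpoint works with the standard summarization pipeline, as PEGASUS checkpoints generally do):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="SophieTr/fine-tune-Pegasus-large")
text = "Replace this placeholder with the long document you want to summarize."
print(summarizer(text, max_length=64, min_length=8)[0]["summary_text"])
```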
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6.35e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1
- Datasets 1.17.0
- Tokenizers 0.10.3
| {"tags": ["generated_from_trainer"], "model-index": [{"name": "fine-tune-Pegasus-large", "results": []}]} | SophieTr/fine-tune-Pegasus-large | null | [
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text2text-generation | transformers | {} | SophieTr/fine-tune-Pegasus | null | [
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [sshleifer/distill-pegasus-xsum-16-4](https://huggingface.co/sshleifer/distill-pegasus-xsum-16-4) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4473
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 7.2378 | 0.51 | 100 | 7.1853 |
| 7.2309 | 1.01 | 200 | 6.6342 |
| 6.4796 | 1.52 | 300 | 6.3206 |
| 6.2691 | 2.02 | 400 | 6.0184 |
| 5.7382 | 2.53 | 500 | 5.5754 |
| 4.9922 | 3.03 | 600 | 4.5178 |
| 3.6031 | 3.54 | 700 | 2.8579 |
| 2.5203 | 4.04 | 800 | 2.4718 |
| 2.2563 | 4.55 | 900 | 2.4128 |
| 2.1425 | 5.05 | 1000 | 2.3767 |
| 2.004 | 5.56 | 1100 | 2.3982 |
| 2.0437 | 6.06 | 1200 | 2.3787 |
| 1.9407 | 6.57 | 1300 | 2.3952 |
| 1.9194 | 7.07 | 1400 | 2.3964 |
| 1.758 | 7.58 | 1500 | 2.4056 |
| 1.918 | 8.08 | 1600 | 2.4101 |
| 1.9162 | 8.59 | 1700 | 2.4085 |
| 1.8983 | 9.09 | 1800 | 2.4058 |
| 1.6939 | 9.6 | 1900 | 2.4050 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
| {"tags": ["generated_from_trainer"], "model-index": [{"name": "results", "results": []}]} | SophieTr/results | null | [
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | Sora/Haechan | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-generation | transformers |
# Naruto DialoGPT Model | {"tags": ["conversational"]} | Sora4762/DialoGPT-small-naruto | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-generation | transformers |
# Naruto DialoGPT Model 1.1 | {"tags": ["conversational"]} | Sora4762/DialoGPT-small-naruto1.1 | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-ContaminationQAmodel_PubmedBERT
This model is a fine-tuned version of [Sotireas/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-ContaminationQAmodel_PubmedBERT](https://huggingface.co/Sotireas/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-ContaminationQAmodel_PubmedBERT) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0853
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 21 | 3.8118 |
| No log | 2.0 | 42 | 3.5006 |
| No log | 3.0 | 63 | 3.1242 |
| No log | 4.0 | 84 | 2.9528 |
| No log | 5.0 | 105 | 2.9190 |
| No log | 6.0 | 126 | 2.9876 |
| No log | 7.0 | 147 | 3.0574 |
| No log | 8.0 | 168 | 3.0718 |
| No log | 9.0 | 189 | 3.0426 |
| No log | 10.0 | 210 | 3.0853 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| {"license": "mit", "tags": ["generated_from_trainer"], "model-index": [{"name": "BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-ContaminationQAmodel_PubmedBERT", "results": []}]} | Sotireas/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-ContaminationQAmodel_PubmedBERT | null | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | Soumyajit1008/DialoGPT-small-harryPotternew | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-generation | transformers |
# Harry Potter DialoGPT Model | {"tags": ["conversational"]} | Soumyajit1008/DialoGPT-small-harryPotterssen | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | Soundside/Road | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | Soundside/Road_trip | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1573
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2188 | 1.0 | 5533 | 1.1708 |
| 0.9519 | 2.0 | 11066 | 1.1058 |
| 0.7576 | 3.0 | 16599 | 1.1573 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "distilbert-base-uncased-finetuned-squad", "results": []}]} | Sourabh714/distilbert-base-uncased-finetuned-squad | null | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | transformers |
### VAE with PyTorch Lightning
This is inspired by vae-playground. It is an example project where we test the `vae` and `conv_vae` models on multiple datasets,
such as MNIST, CelebA, and Fashion-MNIST.
It also comes with an example Streamlit app, deployed at Hugging Face.
## Model Training
You can train the VAE models by using `train.py` and editing the `config.yaml` file. \
Hyperparameters to change are:
- model_type [vae|conv_vae]
- alpha
- hidden_dim
- dataset [celeba|mnist|fashion-mnist]
There are other configurations that can be changed if required, such as height, width, and channels. The file also contains the PyTorch Lightning configs.
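As a hypothetical sketch of adjusting the documented fields programmatically (the default values shown are assumptions; check `config.yaml` in the repo for the real ones):
```python
import yaml  # pip install pyyaml

with open("config.yaml") as f:
    config = yaml.safe_load(f)

config["model_type"] = "conv_vae"      # "vae" or "conv_vae"
config["dataset"] = "fashion-mnist"    # "celeba", "mnist" or "fashion-mnist"
config["hidden_dim"] = 128             # assumed value
config["alpha"] = 1.0                  # assumed value

with open("config.yaml", "w") as f:
    yaml.safe_dump(config, f)
```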
| {"license": "apache-2.0"} | Souranil/VAE | null | [
"transformers",
"pytorch",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-generation | transformers | {} | SouvikGhosh/DialoGPT-Souvik | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | Souvikcmsa/FiBER | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | # Log FiBER
This model can produce sentence embeddings. | {} | Souvikcmsa/LogFiBER | null | [
"pytorch",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-generation | transformers |
# Gandalf DialoGPT Model | {"tags": ["conversational"]} | SpacyGalaxy/DialoGPT-medium-Gandalf | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | transformers | {} | SpanBERT/spanbert-base-cased | null | [
"transformers",
"pytorch",
"jax",
"bert",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | transformers | {} | SpanBERT/spanbert-large-cased | null | [
"transformers",
"pytorch",
"jax",
"bert",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-classification | transformers |
# Roberta Large STS-B
This model is a fine-tuned RoBERTa model over STS-B.
It was trained with these params:
```
!python /content/transformers/examples/text-classification/run_glue.py \
    --model_type roberta \
    --model_name_or_path roberta-large \
    --task_name STS-B \
    --do_train \
    --do_eval \
    --do_lower_case \
    --data_dir /content/glue_data/STS-B/ \
    --max_seq_length 128 \
    --per_gpu_eval_batch_size=8 \
    --per_gpu_train_batch_size=8 \
    --learning_rate 2e-5 \
    --num_train_epochs 3.0 \
    --output_dir /content/roberta-sts-b
```
## How to run
```python
import toolz
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load the fine-tuned checkpoint (added so the snippet is self-contained; the
# original card assumed `model` and `tokenizer` were already defined, and the
# sequence-classification head is an assumption).
tokenizer = AutoTokenizer.from_pretrained("SparkBeyond/roberta-large-sts-b")
model = AutoModelForSequenceClassification.from_pretrained("SparkBeyond/roberta-large-sts-b").cuda()

batch_size = 6

def roberta_similarity_batches(to_predict):
    batches = toolz.partition(batch_size, to_predict)
    similarity_scores = []
    for batch in batches:
        sentences = [(sentence_similarity["sent1"], sentence_similarity["sent2"]) for sentence_similarity in batch]
        batch_scores = similarity_roberta(model, tokenizer, sentences)
        similarity_scores = similarity_scores + batch_scores[0].cpu().squeeze(1).tolist()
    return similarity_scores

def similarity_roberta(model, tokenizer, sent_pairs):
    batch_token = tokenizer(sent_pairs, padding='max_length', truncation=True, max_length=500)
    res = model(torch.tensor(batch_token['input_ids']).cuda(),
                attention_mask=torch.tensor(batch_token["attention_mask"]).cuda())
    return res

similarity_roberta(model, tokenizer, [('NEW YORK--(BUSINESS WIRE)--Rosen Law Firm, a global investor rights law firm, announces it is investigating potential securities claims on behalf of shareholders of Vale S.A. ( VALE ) resulting from allegations that Vale may have issued materially misleading business information to the investing public',
                                      'EQUITY ALERT: Rosen Law Firm Announces Investigation of Securities Claims Against Vale S.A. – VALE')])
```
| {} | SparkBeyond/roberta-large-sts-b | null | [
"transformers",
"pytorch",
"jax",
"roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-generation | transformers |
# EmmyBot | {"tags": ["conversational"]} | Spectrox/emmybot | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | Spidey8801/NLPTraining | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-generation | transformers |
# DialoGPT Trained on the Speech of a TV Series Character
This is an instance of [microsoft/DialoGPT-medium](https://huggingface.co/microsoft/DialoGPT-medium) trained on a TV series character, Sheldon from [The Big Bang Theory](https://en.wikipedia.org/wiki/The_Big_Bang_Theory). The data comes from [a Kaggle TV series script dataset](https://www.kaggle.com/mitramir5/the-big-bang-theory-series-transcript).
Chat with the model:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
import torch

tokenizer = AutoTokenizer.from_pretrained("spirax/DialoGPT-medium-sheldon")
model = AutoModelWithLMHead.from_pretrained("spirax/DialoGPT-medium-sheldon")

# Let's chat for 4 lines
for step in range(4):
    # encode the new user input, add the eos_token and return a tensor in PyTorch
    new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')

    # append the new user input tokens to the chat history
    bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids

    # generate a response while limiting the total chat history to 1000 tokens
    chat_history_ids = model.generate(
        bot_input_ids, max_length=200,
        pad_token_id=tokenizer.eos_token_id,
        no_repeat_ngram_size=3,
        do_sample=True,
        top_k=100,
        top_p=0.7,
        temperature=0.8
    )

    # pretty print the last output tokens from the bot
    print("SheldorBot: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
``` | {"license": "mit", "tags": ["conversational"], "thumbnail": "https://i.imgur.com/7HAcbbD.gif"} | Spirax/DialoGPT-medium-sheldon | null | [
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"conversational",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
feature-extraction | transformers | {} | Splend1dchan/phoneme-bart-base | null | [
"transformers",
"pytorch",
"bart",
"feature-extraction",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-generation | transformers |
# Engineer DialoGPT Model | {"tags": ["conversational"]} | Spoon/DialoGPT-small-engineer | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | Spoon/DialoGPT-small-engineertwo | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | Sreejith/back-and-forth | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
image-classification | transformers |
# sriram-car-classifier
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
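A minimal inference sketch (an addition; it assumes the standard `transformers` image-classification pipeline, and the image path is a placeholder):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="SriramSridhar78/sriram-car-classifier")
print(classifier("path/to/car.jpg"))  # top predicted car classes with scores
```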
## Example Images
#### AM_General_Hummer_SUV_2000

#### Acura_Integra_Type_R_2001

#### Acura_RL_Sedan_2012

#### Acura_TL_Sedan_2012

#### Acura_TL_Type-S_2008

#### Acura_TSX_Sedan_2012

#### Acura_ZDX_Hatchback_2012

#### Aston_Martin_V8_Vantage_Convertible_2012

#### Aston_Martin_V8_Vantage_Coupe_2012

#### Aston_Martin_Virage_Convertible_2012

#### Aston_Martin_Virage_Coupe_2012

#### Audi_100_Sedan_1994

#### Audi_100_Wagon_1994

#### Audi_A5_Coupe_2012

#### Audi_R8_Coupe_2012

#### Audi_RS_4_Convertible_2008

#### Audi_S4_Sedan_2007

#### Audi_S4_Sedan_2012

#### Audi_S5_Convertible_2012

#### Audi_S5_Coupe_2012

#### Audi_S6_Sedan_2011

#### Audi_TTS_Coupe_2012

#### Audi_TT_Hatchback_2011

#### Audi_TT_RS_Coupe_2012

#### Audi_V8_Sedan_1994

#### BMW_1_Series_Convertible_2012

#### BMW_1_Series_Coupe_2012

#### BMW_3_Series_Sedan_2012

#### BMW_3_Series_Wagon_2012

#### BMW_6_Series_Convertible_2007

#### BMW_ActiveHybrid_5_Sedan_2012

#### BMW_M3_Coupe_2012

#### BMW_M5_Sedan_2010

#### BMW_M6_Convertible_2010

#### BMW_X3_SUV_2012

#### BMW_X5_SUV_2007

#### BMW_X6_SUV_2012

#### BMW_Z4_Convertible_2012

#### Bentley_Arnage_Sedan_2009

#### Bentley_Continental_Flying_Spur_Sedan_2007

#### Bentley_Continental_GT_Coupe_2007

#### Bentley_Continental_GT_Coupe_2012

#### Bentley_Continental_Supersports_Conv._Convertible_2012

#### Bentley_Mulsanne_Sedan_2011

#### Bugatti_Veyron_16.4_Convertible_2009

#### Bugatti_Veyron_16.4_Coupe_2009

#### Buick_Enclave_SUV_2012

#### Buick_Rainier_SUV_2007

#### Buick_Regal_GS_2012

#### Buick_Verano_Sedan_2012

#### Cadillac_CTS-V_Sedan_2012

#### Cadillac_Escalade_EXT_Crew_Cab_2007

#### Cadillac_SRX_SUV_2012

#### Chevrolet_Avalanche_Crew_Cab_2012

#### Chevrolet_Camaro_Convertible_2012

#### Chevrolet_Cobalt_SS_2010

#### Chevrolet_Corvette_Convertible_2012

#### Chevrolet_Corvette_Ron_Fellows_Edition_Z06_2007

#### Chevrolet_Corvette_ZR1_2012

#### Chevrolet_Express_Cargo_Van_2007

#### Chevrolet_Express_Van_2007

#### Chevrolet_HHR_SS_2010

#### Chevrolet_Impala_Sedan_2007

#### Chevrolet_Malibu_Hybrid_Sedan_2010

#### Chevrolet_Malibu_Sedan_2007

#### Chevrolet_Monte_Carlo_Coupe_2007

#### Chevrolet_Silverado_1500_Classic_Extended_Cab_2007

#### Chevrolet_Silverado_1500_Extended_Cab_2012

#### Chevrolet_Silverado_1500_Hybrid_Crew_Cab_2012

#### Chevrolet_Silverado_1500_Regular_Cab_2012

#### Chevrolet_Silverado_2500HD_Regular_Cab_2012

#### Chevrolet_Sonic_Sedan_2012

#### Chevrolet_Tahoe_Hybrid_SUV_2012

#### Chevrolet_TrailBlazer_SS_2009

#### Chevrolet_Traverse_SUV_2012

#### Chrysler_300_SRT-8_2010

#### Chrysler_Aspen_SUV_2009

#### Chrysler_Crossfire_Convertible_2008

#### Chrysler_PT_Cruiser_Convertible_2008

#### Chrysler_Sebring_Convertible_2010

#### Chrysler_Town_and_Country_Minivan_2012

#### Daewoo_Nubira_Wagon_2002

#### Dodge_Caliber_Wagon_2007

#### Dodge_Caliber_Wagon_2012

#### Dodge_Caravan_Minivan_1997

#### Dodge_Challenger_SRT8_2011

#### Dodge_Charger_SRT-8_2009

#### Dodge_Charger_Sedan_2012

#### Dodge_Dakota_Club_Cab_2007

#### Dodge_Dakota_Crew_Cab_2010

#### Dodge_Durango_SUV_2007

#### Dodge_Durango_SUV_2012

#### Dodge_Journey_SUV_2012

#### Dodge_Magnum_Wagon_2008

#### Dodge_Ram_Pickup_3500_Crew_Cab_2010

#### Dodge_Ram_Pickup_3500_Quad_Cab_2009

#### Dodge_Sprinter_Cargo_Van_2009

#### Eagle_Talon_Hatchback_1998

#### FIAT_500_Abarth_2012

#### FIAT_500_Convertible_2012

#### Ferrari_458_Italia_Convertible_2012

#### Ferrari_458_Italia_Coupe_2012

#### Ferrari_California_Convertible_2012

#### Ferrari_FF_Coupe_2012

#### Fisker_Karma_Sedan_2012

#### Ford_E-Series_Wagon_Van_2012

#### Ford_Edge_SUV_2012

#### Ford_Expedition_EL_SUV_2009

#### Ford_F-150_Regular_Cab_2007

#### Ford_F-150_Regular_Cab_2012

#### Ford_F-450_Super_Duty_Crew_Cab_2012

#### Ford_Fiesta_Sedan_2012

#### Ford_Focus_Sedan_2007

#### Ford_Freestar_Minivan_2007

#### Ford_GT_Coupe_2006

#### Ford_Mustang_Convertible_2007

#### Ford_Ranger_SuperCab_2011

#### GMC_Acadia_SUV_2012

#### GMC_Canyon_Extended_Cab_2012

#### GMC_Savana_Van_2012

#### GMC_Terrain_SUV_2012

#### GMC_Yukon_Hybrid_SUV_2012

#### Geo_Metro_Convertible_1993

#### HUMMER_H2_SUT_Crew_Cab_2009

#### HUMMER_H3T_Crew_Cab_2010

#### Honda_Accord_Coupe_2012

#### Honda_Accord_Sedan_2012

#### Honda_Odyssey_Minivan_2007

#### Honda_Odyssey_Minivan_2012

#### Hyundai_Accent_Sedan_2012

#### Hyundai_Azera_Sedan_2012

#### Hyundai_Elantra_Sedan_2007

#### Hyundai_Elantra_Touring_Hatchback_2012

#### Hyundai_Genesis_Sedan_2012

#### Hyundai_Santa_Fe_SUV_2012

#### Hyundai_Sonata_Hybrid_Sedan_2012

#### Hyundai_Sonata_Sedan_2012

#### Hyundai_Tucson_SUV_2012

#### Hyundai_Veloster_Hatchback_2012

#### Hyundai_Veracruz_SUV_2012

#### Infiniti_G_Coupe_IPL_2012

#### Infiniti_QX56_SUV_2011

#### Isuzu_Ascender_SUV_2008

#### Jaguar_XK_XKR_2012

#### Jeep_Compass_SUV_2012

#### Jeep_Grand_Cherokee_SUV_2012

#### Jeep_Liberty_SUV_2012

#### Jeep_Patriot_SUV_2012

#### Jeep_Wrangler_SUV_2012

#### Lamborghini_Aventador_Coupe_2012

#### Lamborghini_Diablo_Coupe_2001

#### Lamborghini_Gallardo_LP_570-4_Superleggera_2012

#### Lamborghini_Reventon_Coupe_2008

#### Land_Rover_LR2_SUV_2012

#### Land_Rover_Range_Rover_SUV_2012

#### Lincoln_Town_Car_Sedan_2011

#### MINI_Cooper_Roadster_Convertible_2012

#### Maybach_Landaulet_Convertible_2012

#### Mazda_Tribute_SUV_2011

#### McLaren_MP4-12C_Coupe_2012

#### Mercedes-Benz_300-Class_Convertible_1993

#### Mercedes-Benz_C-Class_Sedan_2012

#### Mercedes-Benz_E-Class_Sedan_2012

#### Mercedes-Benz_S-Class_Sedan_2012

#### Mercedes-Benz_SL-Class_Coupe_2009

#### Mercedes-Benz_Sprinter_Van_2012

#### Mitsubishi_Lancer_Sedan_2012

#### Nissan_240SX_Coupe_1998

#### Nissan_Juke_Hatchback_2012

#### Nissan_Leaf_Hatchback_2012

#### Nissan_NV_Passenger_Van_2012

#### Plymouth_Neon_Coupe_1999

#### Porsche_Panamera_Sedan_2012

#### Ram_C_V_Cargo_Van_Minivan_2012

#### Rolls-Royce_Ghost_Sedan_2012

#### Rolls-Royce_Phantom_Drophead_Coupe_Convertible_2012

#### Rolls-Royce_Phantom_Sedan_2012

#### Scion_xD_Hatchback_2012

#### Spyker_C8_Convertible_2009

#### Spyker_C8_Coupe_2009

#### Suzuki_Aerio_Sedan_2007

#### Suzuki_Kizashi_Sedan_2012

#### Suzuki_SX4_Hatchback_2012

#### Suzuki_SX4_Sedan_2012

#### Tesla_Model_S_Sedan_2012

#### Toyota_4Runner_SUV_2012

#### Toyota_Camry_Sedan_2012

#### Toyota_Corolla_Sedan_2012

#### Toyota_Sequoia_SUV_2012

#### Volkswagen_Beetle_Hatchback_2012

#### Volkswagen_Golf_Hatchback_1991

#### Volkswagen_Golf_Hatchback_2012

#### Volvo_240_Sedan_1993

#### Volvo_C30_Hatchback_2012

#### Volvo_XC90_SUV_2007

#### smart_fortwo_Convertible_2012
 | {"tags": ["image-classification", "pytorch", "huggingpics"], "metrics": ["accuracy"]} | SriramSridhar78/sriram-car-classifier | null | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | ---
tags:
- conversational
---
# Discord Bot | {} | Sristi/Senti-Bot | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
automatic-speech-recognition | transformers |
# Wav2Vec2-Large-XLSR-Welsh
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the Welsh Common Voice dataset.
The data was augmented using a standard augmentation approach.
When using this model, make sure that your speech input is sampled at 16 kHz.
Test result (WER): 29.4%
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "cy", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("Srulikbdd/Wav2vec2-large-xlsr-welsh")
model = Wav2Vec2ForCTC.from_pretrained("Srulikbdd/Wav2vec2-large-xlsr-welsh")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Welsh test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "cy", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("Srulikbdd/Wav2Vec2-large-xlsr-welsh")
model = Wav2Vec2ForCTC.from_pretrained("Srulikbdd/Wav2Vec2-large-xlsr-welsh")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\u2013\u2014\;\:\"\%\']'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
``` | {"language": "sv", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "model-index": [{"name": "XLSR Wav2Vec2 Welsh by Srulik Ben David", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice cy", "type": "common_voice", "args": "cy"}, "metrics": [{"type": "wer", "value": 29.4, "name": "Test WER"}]}]}]} | Srulikbdd/Wav2Vec2-large-xlsr-welsh | null | [
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"sv",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | Ssadaf/bert-base-uncased-finetuned-copa | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | Stabley/DialoDPT-small-evelynn | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-generation | transformers |
# Evelynn DialoGPT Model | {"tags": ["conversational"]} | Stabley/DialoGPT-small-evelynn | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | StanBienaives/wisenlp | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
fill-mask | transformers | {} | Stancld/roformer_chinese_char_base | null | [
"transformers",
"jax",
"roformer",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | Stargazer9/roberta-base-squad2-finetuned-squad | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | Startlate/my-new-shiny-tokenizer | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | StellarSav2021/DialoGPT-small-harrypotter | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | StellarSav2021/Dialogpt-small-4t3t54wy6y | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
automatic-speech-recognition | transformers | {} | StephennFernandes/XLS-R-300m-marathi | null | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | This is a dummy readme | {} | StephennFernandes/XLS-R-assamese-LM | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | StephennFernandes/XLS-R-assamese | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLS-R-marathi
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - MR dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1200
- num_epochs: 30.0
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
| {"language": ["mr"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "robust-speech-event", "generated_from_trainer", "hf-asr-leaderboard"], "model-index": [{"name": "XLS-R-marathi", "results": []}]} | StephennFernandes/XLS-R-marathi | null | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"generated_from_trainer",
"hf-asr-leaderboard",
"mr",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | StephennFernandes/backup-test | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
feature-extraction | transformers | {} | StephennFernandes/wav2vec2-XLS-R-300m-assamese | null | [
"transformers",
"pytorch",
"wav2vec2",
"feature-extraction",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
automatic-speech-recognition | transformers |
---
tags:
- automatic-speech-recognition
- robust-speech-event
---
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on a private dataset.
It achieves the following results on the evaluation set:
The following hyper-parameters were used during training:
- learning_rate: 3e-4
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 800
- num_epochs: 30
- mixed_precision_training: Native AMP
| {} | StephennFernandes/wav2vec2-XLS-R-300m-konkani | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | Stepp/WorkTime | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-generation | transformers | It's just a dialog bot trained on my tweets. Unfortunately, as tweets aren't very conversational, it comes off pretty random. | {} | SteveC/sdc_bot_15K | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-generation | transformers | {} | SteveC/sdc_bot_medium | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-generation | transformers | {} | SteveC/sdc_bot_small | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-generation | transformers | {} | SteveC/sdc_bot_two_step | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | SteveMama/abc | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | SteveMama/pegasus-samsum | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
fill-mask | transformers |
## Melayu BERT
Melayu BERT is a masked language model based on [BERT](https://arxiv.org/abs/1810.04805). It was trained on the [OSCAR](https://huggingface.co/datasets/oscar) dataset, specifically the `unshuffled_original_ms` subset. The base model was the [English BERT model](https://huggingface.co/bert-base-uncased), fine-tuned on the Malay dataset. The model achieved a perplexity of 9.46 on a 20% validation split. Many of the techniques used are based on a Hugging Face tutorial [notebook](https://github.com/huggingface/notebooks/blob/master/examples/language_modeling.ipynb) written by [Sylvain Gugger](https://github.com/sgugger) and a [fine-tuning tutorial notebook](https://github.com/piegu/fastai-projects/blob/master/finetuning-English-GPT2-any-language-Portuguese-HuggingFace-fastaiv2.ipynb) written by [Pierre Guillou](https://huggingface.co/pierreguillou). The model is available for both PyTorch and TensorFlow use.
## Model
The model was trained for 3 epochs with a learning rate of 2e-3; the training loss per step is shown below.
| Step |Training loss|
|--------|-------------|
|500 | 5.051300 |
|1000 | 3.701700 |
|1500 | 3.288600 |
|2000 | 3.024000 |
|2500 | 2.833500 |
|3000 | 2.741600 |
|3500 | 2.637900 |
|4000 | 2.547900 |
|4500 | 2.451500 |
|5000 | 2.409600 |
|5500 | 2.388300 |
|6000 | 2.351600 |
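For reference, the reported perplexity follows the standard exp(loss) relation (a sketch; the loss value below is back-calculated from the reported perplexity, not taken from the card):
```python
import math

eval_loss = 2.247  # hypothetical validation cross-entropy loss
print(math.exp(eval_loss))  # ≈ 9.46, matching the reported perplexity
```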
## How to Use
### As Masked Language Model
```python
from transformers import pipeline
pretrained_name = "StevenLimcorn/MelayuBERT"
fill_mask = pipeline(
"fill-mask",
model=pretrained_name,
tokenizer=pretrained_name
)
fill_mask("Saya [MASK] makan nasi hari ini.")
```
### Import Tokenizer and Model
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("StevenLimcorn/MelayuBERT")
model = AutoModelForMaskedLM.from_pretrained("StevenLimcorn/MelayuBERT")
```
## Author
Melayu BERT was trained by [Steven Limcorn](https://github.com/stevenlimcorn) and [Wilson Wongso](https://hf.co/w11wo). | {"language": "ms", "license": "mit", "tags": ["melayu-bert"], "datasets": ["oscar"], "widget": [{"text": "Saya [MASK] makan nasi hari ini."}]} | StevenLimcorn/MelayuBERT | null | [
"transformers",
"pytorch",
"tf",
"bert",
"fill-mask",
"melayu-bert",
"ms",
"dataset:oscar",
"arxiv:1810.04805",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers |
## Indo-roberta-indonli
Indo-roberta-indonli is a natural language inference classifier based on the [Indo-roberta](https://huggingface.co/flax-community/indonesian-roberta-base) model. It was trained on the [IndoNLI](https://github.com/ir-nlp-csui/indonli/tree/main/data/indonli) dataset, transfer-learning [Indo-roberta](https://huggingface.co/flax-community/indonesian-roberta-base) into a natural language inference classifier. The model was tested using the validation, test_lay and test_expert splits given in the GitHub repository. The results are shown below.
### Result
| Dataset | Accuracy | F1 | Precision | Recall |
|-------------|----------|---------|-----------|---------|
| Test Lay | 0.74329 | 0.74075 | 0.74283 | 0.74133 |
| Test Expert | 0.6115 | 0.60543 | 0.63924 | 0.61742 |
## Model
The model was trained for 5 epochs, with batch size 16, learning rate 2e-5 and weight decay 0.01. It achieved the metrics shown below.
| Epoch | Training Loss | Validation Loss | Accuracy | F1 | Precision | Recall |
|-------|---------------|-----------------|----------|----------|-----------|----------|
| 1 | 0.942500 | 0.658559 | 0.737369 | 0.735552 | 0.735488 | 0.736679 |
| 2 | 0.649200 | 0.645290 | 0.761493 | 0.759593 | 0.762784 | 0.759642 |
| 3 | 0.437100 | 0.667163 | 0.766045 | 0.763979 | 0.765740 | 0.763792 |
| 4 | 0.282000 | 0.786683 | 0.764679 | 0.761802 | 0.762011 | 0.761684 |
| 5 | 0.193500 | 0.925717 | 0.765134 | 0.763127 | 0.763560 | 0.763489 |
## How to Use
### As NLI Classifier
```python
from transformers import pipeline
pretrained_name = "StevenLimcorn/indonesian-roberta-indonli"
nlp = pipeline(
"zero-shot-classification",
model=pretrained_name,
tokenizer=pretrained_name
)
nlp("Amir Sjarifoeddin Harahap lahir di Kota Medan, Sumatera Utara, 27 April 1907. Ia meninggal di Surakarta, Jawa Tengah, pada 19 Desember 1948 dalam usia 41 tahun. </s></s> Amir Sjarifoeddin Harahap masih hidup.")
```
## Disclaimer
Do consider the biases which come from both the pre-trained RoBERTa model and the `INDONLI` dataset that may be carried over into the results of this model.
## Author
Indonesian RoBERTa Base IndoNLI was trained and evaluated by [Steven Limcorn](https://github.com/stevenlimcorn). All computation and development are done on Google Colaboratory using their free GPU access.
## Reference
The dataset we used is by IndoNLI.
```
@inproceedings{indonli,
title = "IndoNLI: A Natural Language Inference Dataset for Indonesian",
author = "Mahendra, Rahmad and Aji, Alham Fikri and Louvan, Samuel and Rahman, Fahrurrozi and Vania, Clara",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
publisher = "Association for Computational Linguistics",
}
``` | {"language": "id", "license": "mit", "tags": ["roberta"], "datasets": ["indonli"], "widget": [{"text": "Amir Sjarifoeddin Harahap lahir di Kota Medan, Sumatera Utara, 27 April 1907. Ia meninggal di Surakarta, Jawa Tengah, pada 19 Desember 1948 dalam usia 41 tahun. </s></s> Amir Sjarifoeddin Harahap masih hidup."}]} | StevenLimcorn/indo-roberta-indonli | null | [
"transformers",
"pytorch",
"tf",
"roberta",
"text-classification",
"id",
"dataset:indonli",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
token-classification | transformers | {} | StevenLimcorn/indonesian-roberta-base-bapos-tagger | null | [
"transformers",
"pytorch",
"tf",
"roberta",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-classification | transformers |
# Indo RoBERTa Emotion Classifier
Indo RoBERTa Emotion Classifier is an emotion classifier based on the [Indo-roberta](https://huggingface.co/flax-community/indonesian-roberta-base) model. It was trained on the [IndoNLU EmoT](https://huggingface.co/datasets/indonlu) dataset by transfer-learning the [Indo-roberta](https://huggingface.co/flax-community/indonesian-roberta-base) model into an emotion classifier. Based on the [IndoNLU benchmark](https://www.indobenchmark.com/), the model achieves an F1-macro of 72.05%, accuracy of 71.81%, precision of 72.47% and recall of 71.94%.
## Model
The model was trained for 7 epochs with a learning rate of 2e-5, achieving the metrics shown below.
| Epoch | Training Loss | Validation Loss | Accuracy | F1 | Precision | Recall |
|-------|---------------|-----------------|----------|----------|-----------|----------|
| 1 | 1.300700 | 1.005149 | 0.622727 | 0.601846 | 0.640845 | 0.611144 |
| 2 | 0.806300 | 0.841953 | 0.686364 | 0.694096 | 0.701984 | 0.696657 |
| 3 | 0.591900 | 0.796794 | 0.686364 | 0.696573 | 0.707520 | 0.691671 |
| 4 | 0.441200 | 0.782094 | 0.722727 | 0.724359 | 0.725985 | 0.730229 |
| 5 | 0.334700 | 0.809931 | 0.711364 | 0.720550 | 0.718318 | 0.724608 |
| 6 | 0.268400 | 0.812771 | 0.718182 | 0.724192 | 0.721222 | 0.729195 |
| 7 | 0.226000 | 0.828461 | 0.725000 | 0.733625 | 0.731709 | 0.735800 |
## How to Use
### As Text Classifier
```python
from transformers import pipeline
pretrained_name = "StevenLimcorn/indonesian-roberta-base-emotion-classifier"
nlp = pipeline(
"sentiment-analysis",
model=pretrained_name,
tokenizer=pretrained_name
)
nlp("Hal-hal baik akan datang.")
```
## Disclaimer
Do consider the biases which come from both the pre-trained RoBERTa model and the `EmoT` dataset that may be carried over into the results of this model.
## Author
Indonesian RoBERTa Base Emotion Classifier was trained and evaluated by [Steven Limcorn](https://github.com/stevenlimcorn). All computation and development are done on Google Colaboratory using their free GPU access.
If used, please cite
```bibtex
@misc {steven_limcorn_2023,
author = { {Steven Limcorn} },
title = { indonesian-roberta-base-emotion-classifier (Revision e8a9cb9) },
year = 2023,
url = { https://huggingface.co/StevenLimcorn/indonesian-roberta-base-emotion-classifier },
doi = { 10.57967/hf/0681 },
publisher = { Hugging Face }
}
``` | {"language": "id", "license": "mit", "tags": ["roberta"], "datasets": ["indonlu"], "widget": [{"text": "Hal-hal baik akan datang."}]} | StevenLimcorn/indonesian-roberta-base-emotion-classifier | null | [
"transformers",
"pytorch",
"tf",
"safetensors",
"roberta",
"text-classification",
"id",
"dataset:indonlu",
"doi:10.57967/hf/0681",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the COMMON_VOICE - ZH-TW dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1786
- Wer: 0.8594
- Cer: 0.2964
## Model description
More information needed
## Intended uses & limitations
More information needed
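A minimal usage sketch (added; it assumes the standard ASR pipeline and a placeholder 16 kHz audio file):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="StevenLimcorn/wav2vec2-xls-r-300m-zh-TW",
)
print(asr("sample_zh_tw.wav")["text"])  # transcription of the input audio
```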
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|
| 64.6189 | 2.51 | 500 | 63.8077 | 1.0 | 1.0 |
| 8.0561 | 5.03 | 1000 | 6.8014 | 1.0 | 1.0 |
| 6.0427 | 7.54 | 1500 | 6.0745 | 1.0 | 1.0 |
| 5.9357 | 10.05 | 2000 | 5.8682 | 1.0 | 1.0 |
| 5.0489 | 12.56 | 2500 | 4.4032 | 0.9990 | 0.7750 |
| 4.6184 | 15.08 | 3000 | 3.8383 | 0.9983 | 0.6768 |
| 4.365 | 17.59 | 3500 | 3.4633 | 0.9959 | 0.6299 |
| 4.1026 | 20.1 | 4000 | 3.0732 | 0.9902 | 0.5814 |
| 3.8655 | 22.61 | 4500 | 2.7638 | 0.9868 | 0.5465 |
| 3.6991 | 25.13 | 5000 | 2.4759 | 0.9811 | 0.5088 |
| 3.4894 | 27.64 | 5500 | 2.2937 | 0.9746 | 0.4852 |
| 3.3983 | 30.15 | 6000 | 2.1684 | 0.9733 | 0.4674 |
| 3.2736 | 32.66 | 6500 | 2.0372 | 0.9659 | 0.4458 |
| 3.1884 | 35.18 | 7000 | 1.9267 | 0.9648 | 0.4329 |
| 3.1248 | 37.69 | 7500 | 1.8408 | 0.9591 | 0.4217 |
| 3.0381 | 40.2 | 8000 | 1.7531 | 0.9503 | 0.4074 |
| 2.9515 | 42.71 | 8500 | 1.6880 | 0.9459 | 0.3967 |
| 2.8704 | 45.23 | 9000 | 1.6264 | 0.9378 | 0.3884 |
| 2.8128 | 47.74 | 9500 | 1.5621 | 0.9341 | 0.3782 |
| 2.7386 | 50.25 | 10000 | 1.5011 | 0.9243 | 0.3664 |
| 2.6646 | 52.76 | 10500 | 1.4608 | 0.9192 | 0.3575 |
| 2.6072 | 55.28 | 11000 | 1.4251 | 0.9148 | 0.3501 |
| 2.569 | 57.79 | 11500 | 1.3837 | 0.9060 | 0.3462 |
| 2.5091 | 60.3 | 12000 | 1.3589 | 0.9070 | 0.3392 |
| 2.4588 | 62.81 | 12500 | 1.3261 | 0.8966 | 0.3284 |
| 2.4083 | 65.33 | 13000 | 1.3052 | 0.8982 | 0.3265 |
| 2.3787 | 67.84 | 13500 | 1.2997 | 0.8908 | 0.3243 |
| 2.3457 | 70.35 | 14000 | 1.2778 | 0.8898 | 0.3187 |
| 2.3099 | 72.86 | 14500 | 1.2661 | 0.8830 | 0.3172 |
| 2.2559 | 75.38 | 15000 | 1.2475 | 0.8851 | 0.3143 |
| 2.2264 | 77.89 | 15500 | 1.2319 | 0.8739 | 0.3085 |
| 2.196 | 80.4 | 16000 | 1.2218 | 0.8722 | 0.3049 |
| 2.1613 | 82.91 | 16500 | 1.2093 | 0.8719 | 0.3051 |
| 2.1455 | 85.43 | 17000 | 1.2055 | 0.8624 | 0.3005 |
| 2.1193 | 87.94 | 17500 | 1.1975 | 0.8600 | 0.2982 |
| 2.0911 | 90.45 | 18000 | 1.1960 | 0.8648 | 0.3003 |
| 2.0884 | 92.96 | 18500 | 1.1871 | 0.8638 | 0.2971 |
| 2.0766 | 95.48 | 19000 | 1.1814 | 0.8617 | 0.2967 |
| 2.0735 | 97.99 | 19500 | 1.1801 | 0.8621 | 0.2969 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
| {"language": ["zh-TW"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "common_voice", "generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "", "results": []}]} | StevenLimcorn/wav2vec2-xls-r-300m-zh-TW | null | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"common_voice",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-generation | transformers | {} | StevenShoemakerNLP/pitchfork | null | [
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | Stevenn/test | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-generation | transformers |
# Deltarune Spamton DialoGPT Model | {"tags": ["conversational"]} | Stevo/DiagloGPT-medium-spamton | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-cased-finetuned-ner-4
This model is part of a test for creating multilingual biomedical NER systems and is not yet intended for professional use.
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the concatenated CRAFT, BC4CHEMD, and BioNLP09 datasets.
It achieves the following results on the evaluation set:
- Loss: 0.1027
- Precision: 0.9830
- Recall: 0.9832
- F1: 0.9831
- Accuracy: 0.9799
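As a quick sanity check, the F1 above is the harmonic mean of the listed precision and recall:
```latex
F_1 = \frac{2 \cdot 0.9830 \cdot 0.9832}{0.9830 + 0.9832} \approx 0.9831
```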
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
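A minimal sketch of wiring these settings into a `Trainer` for token classification (the dataset variable, label list, and output directory below are assumptions, not taken from this card):
```py
from transformers import (AutoModelForTokenClassification, AutoTokenizer,
                          DataCollatorForTokenClassification, Trainer,
                          TrainingArguments)

# Assumptions: tokenized_datasets holds the concatenated CRAFT+BC4CHEMD+BioNLP09
# data, already tokenized with aligned NER labels, and label_list is its tag set.
tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=len(label_list))

args = TrainingArguments(
    output_dir="mbert-biomedical-ner",  # assumed output path
    learning_rate=3e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    num_train_epochs=4,
    seed=42,
)
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized_datasets["train"],
    eval_dataset=tokenized_datasets["validation"],
    data_collator=DataCollatorForTokenClassification(tokenizer),
    tokenizer=tokenizer,
)
trainer.train()
```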
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0658 | 1.0 | 6128 | 0.0751 | 0.9795 | 0.9795 | 0.9795 | 0.9758 |
| 0.0406 | 2.0 | 12256 | 0.0753 | 0.9827 | 0.9815 | 0.9821 | 0.9786 |
| 0.0182 | 3.0 | 18384 | 0.0934 | 0.9834 | 0.9825 | 0.9829 | 0.9796 |
| 0.011 | 4.0 | 24512 | 0.1027 | 0.9830 | 0.9832 | 0.9831 | 0.9799 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "bert-base-multilingual-cased-finetuned-ner-4", "results": []}]} | StivenLancheros/mBERT-base-Biomedical-NER | null | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT
This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-base-biomedical-clinical-es](https://huggingface.co/PlanTL-GOB-ES/roberta-base-biomedical-clinical-es) on the CRAFT dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1720
- Precision: 0.8253
- Recall: 0.8147
- F1: 0.8200
- Accuracy: 0.9660
## Model description
This model performs Named Entity Recognition for six entity tags: Sequence, Cell, Protein, Gene, Taxon, and Chemical, from the [CRAFT](https://github.com/UCDenver-ccp/CRAFT/releases) (Colorado Richly Annotated Full Text) corpus in English.
Entity tags have been normalized, replacing the original three-letter codes with full names, e.g. B-Protein, I-Chemical.
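A minimal inference sketch for this checkpoint (the example sentence and aggregation strategy are illustrative assumptions):
```py
from transformers import pipeline

# Load the fine-tuned model for token classification; "simple" aggregation
# merges B-/I- word pieces into whole entity spans.
ner = pipeline(
    "token-classification",
    model="StivenLancheros/roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT",
    aggregation_strategy="simple",
)
print(ner("The BRCA1 gene encodes a protein involved in DNA repair."))
```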
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1133 | 1.0 | 1360 | 0.1629 | 0.7985 | 0.7782 | 0.7882 | 0.9610 |
| 0.049 | 2.0 | 2720 | 0.1530 | 0.8165 | 0.8084 | 0.8124 | 0.9651 |
| 0.0306 | 3.0 | 4080 | 0.1603 | 0.8198 | 0.8075 | 0.8136 | 0.9650 |
| 0.0158 | 4.0 | 5440 | 0.1720 | 0.8253 | 0.8147 | 0.8200 | 0.9660 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT", "results": []}]} | StivenLancheros/roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT | null | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | Storm-Breaker/NER-Test | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
fill-mask | transformers | {} | StormZJ/test1 | null | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | Stoyan/Sssdd | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | Strawberrymilkshake/personal_project | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | Stu/bert | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | Stu/model_name | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | asdf | {} | Subfire/testModel | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | Subhashini17/wav2vec2-large-xls-r-300m-ta-colab-copy | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
automatic-speech-recognition | transformers | {} | Subhashini17/wav2vec2-large-xls-r-300m-ta-colab-new | null | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-ta-colab-new1
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.6642
- eval_wer: 0.7611
- eval_runtime: 152.4412
- eval_samples_per_second: 11.683
- eval_steps_per_second: 1.463
- epoch: 10.11
- step: 960
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.13.3
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-large-xls-r-300m-ta-colab-new1", "results": []}]} | Subhashini17/wav2vec2-large-xls-r-300m-ta-colab-new1 | null | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-ta-colab
This model is a fine-tuned version of [akashsivanandan/wav2vec2-large-xls-r-300m-tamil-colab-final](https://huggingface.co/akashsivanandan/wav2vec2-large-xls-r-300m-tamil-colab-final) on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 3
- total_train_batch_size: 6
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 5
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-large-xls-r-300m-ta-colab", "results": []}]} | Subhashini17/wav2vec2-large-xls-r-300m-ta-colab | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | Subhashini17/wav2vec2-large-xls-r-300m-tamil-colab | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | Subhashini17/wav2vec2-large-xls-r-300m-tamilasr-colab | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | Subhashini17/wav2vec2-large-xlsr-300m-tamil-colab | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | Subhrato20/testing-bot-repov2 | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
token-classification | transformers |
<h1>Bengali Named Entity Recognition</h1>
Fine-tuning bert-base-multilingual-cased on the WikiANN dataset to perform NER on the Bengali language.
## Label IDs and their corresponding label names
| Label ID | Label Name |
| -------- | ---------- |
| 0 | O |
| 1 | B-PER |
| 2 | I-PER |
| 3 | B-ORG |
| 4 | I-ORG |
| 5 | B-LOC |
| 6 | I-LOC |
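In code, this mapping corresponds to config entries along these lines (a sketch):
```py
# Sketch: the label mapping above as id2label/label2id config entries.
id2label = {0: "O", 1: "B-PER", 2: "I-PER", 3: "B-ORG",
            4: "I-ORG", 5: "B-LOC", 6: "I-LOC"}
label2id = {label: idx for idx, label in id2label.items()}
```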
<h1>Results</h1>
| Name | Overall F1 | LOC F1 | ORG F1 | PER F1 |
| ---- | -------- | ----- | ---- | ---- |
| Train set | 0.997927 | 0.998246 | 0.996613 | 0.998769 |
| Validation set | 0.970187 | 0.969212 | 0.956831 | 0.982079 |
| Test set | 0.967301 | 0.967120 | 0.963614 | 0.970938 |
<h1>Example</h1>
```py
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline

# Load the fine-tuned Bengali NER model and its tokenizer
tokenizer = AutoTokenizer.from_pretrained("Suchandra/bengali_language_NER")
model = AutoModelForTokenClassification.from_pretrained("Suchandra/bengali_language_NER")

# Build a token-classification pipeline and tag an example sentence
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "মারভিন দি মারসিয়ান"  # "Marvin the Martian"
ner_results = nlp(example)
print(ner_results)
```
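The pipeline returns one dictionary per tagged word piece; passing `aggregation_strategy="simple"` (or `grouped_entities=True` in older transformers versions) would instead merge B-/I- pieces into whole entity spans.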
| {"language": "bn", "datasets": ["wikiann"], "widget": [{"text": "\u09ae\u09be\u09b0\u09ad\u09bf\u09a8 \u09a6\u09bf \u09ae\u09be\u09b0\u09b8\u09bf\u09af\u09bc\u09be\u09a8", "example_title": "Sentence_1"}, {"text": "\u09b2\u09bf\u0993\u09a8\u09be\u09b0\u09cd\u09a6\u09cb \u09a6\u09be \u09ad\u09bf\u099e\u09cd\u099a\u09bf", "example_title": "Sentence_2"}, {"text": "\u09ac\u09b8\u09a8\u09bf\u09af\u09bc\u09be \u0993 \u09b9\u09be\u09b0\u09cd\u099c\u09c7\u0997\u09cb\u09ad\u09bf\u09a8\u09be", "example_title": "Sentence_3"}, {"text": "\u09b8\u09be\u0989\u09a5 \u0987\u09b8\u09cd\u099f \u0987\u0989\u09a8\u09bf\u09ad\u09be\u09b0\u09cd\u09b8\u09bf\u099f\u09bf", "example_title": "Sentence_4"}, {"text": "\u09ae\u09be\u09a8\u09bf\u0995 \u09ac\u09a8\u09cd\u09a6\u09cd\u09af\u09cb\u09aa\u09be\u09a7\u09cd\u09af\u09be\u09af\u09bc \u09b2\u09c7\u0996\u0995", "example_title": "Sentence_5"}]} | Suchandra/bengali_language_NER | null | [
"transformers",
"pytorch",
"safetensors",
"bert",
"token-classification",
"bn",
"dataset:wikiann",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | SugarB/SugarB | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | Suha/mbart50-finetuned-ar-to-ar-accelerate | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | Suhan/indic-bert-v2 | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | Suhanshu/Movie-plot-generator | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | Summerbud/test | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |