pipeline_tag (stringclasses, 48 values) | library_name (stringclasses, 205 values) | text (stringlengths, 0-18.3M) | metadata (stringlengths, 2-1.07B) | id (stringlengths, 5-122) | last_modified (null) | tags (sequencelengths, 1-1.84k) | sha (null) | created_at (stringlengths, 25)
---|---|---|---|---|---|---|---|---|
null | null | {} | HJHGJGHHG/paddle-fnet-large | null | [
"paddlepaddle",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-generation | transformers | Basically, it makes pickup lines.
https://huggingface.co/gpt2
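A minimal usage sketch (an assumption on my part, using the standard `transformers` text-generation pipeline; the prompt and sampling settings are purely illustrative):
```python
from transformers import pipeline

# Hedged sketch: the model id is taken from this card, everything else is illustrative.
generator = pipeline("text-generation", model="HJK/PickupLineGenerator")
print(generator("Hey, are you", max_length=30, num_return_sequences=3, do_sample=True))
```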
| {} | HJK/PickupLineGenerator | null | [
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | null | {} | HOmoikane/test | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-generation | transformers | A model that generates My Little Pony scripts.
Fine tuning data: [Kaggle](https://www.kaggle.com/liury123/my-little-pony-transcript?select=clean_dialog.csv)
API page: [Ainize](https://ainize.ai/fpem123/GPT2-MyLittlePony)
Demo page: [End point](https://master-gpt2-my-little-pony-fpem123.endpoint.ainize.ai/)
### Model information
Base model: gpt-2 large
Epoch: 30
Train runtime: 4943.9641 secs
Loss: 0.0291
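A minimal generation sketch (an assumption, using the standard `transformers` causal-LM API; the prompt and sampling parameters are illustrative, not taken from the original training setup):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("HScomcom/gpt2-MyLittlePony")
model = AutoModelForCausalLM.from_pretrained("HScomcom/gpt2-MyLittlePony")

# A script-style prompt; purely illustrative.
inputs = tokenizer("Twilight Sparkle:", return_tensors="pt")
outputs = model.generate(**inputs, max_length=100, do_sample=True, top_k=50, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```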
### ===Teachable NLP===
Training a GPT-2 model requires writing code and GPU resources, but with Teachable NLP you can easily fine-tune a model and get an API to use it for free.
Teachable NLP: [Teachable NLP](https://ainize.ai/teachable-nlp)
Tutorial: [Tutorial](https://forum.ainetwork.ai/t/teachable-nlp-how-to-use-teachable-nlp/65?utm_source=community&utm_medium=huggingface&utm_campaign=model&utm_content=teachable%20nlp)
| {} | HScomcom/gpt2-MyLittlePony | null | [
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-generation | transformers | ### Model information
Fine tuning data: https://www.kaggle.com/cuddlefish/fairy-tales
License: CC0: Public Domain
Base model: gpt-2 large
Epoch: 30
Train runtime: 17861.6048 secs
Loss: 0.0412
API page: [Ainize](https://ainize.ai/fpem123/GPT2-FairyTales?branch=master)
Demo page: [End-point](https://master-gpt2-fairy-tales-fpem123.endpoint.ainize.ai/)
### ===Teachable NLP=== ###
Training a GPT-2 model requires writing code and GPU resources, but with Teachable NLP you can easily fine-tune a model and get an API to use it for free.
Teachable NLP: [Teachable NLP](https://ainize.ai/teachable-nlp)
Tutorial: [Tutorial](https://forum.ainetwork.ai/t/teachable-nlp-how-to-use-teachable-nlp/65?utm_source=community&utm_medium=huggingface&utm_campaign=model&utm_content=teachable%20nlp)
And my other fairytale model: [showcase](https://forum.ainetwork.ai/t/teachable-nlp-gpt-2-fairy-tales/68) | {} | HScomcom/gpt2-fairytales | null | [
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-generation | transformers | {} | HScomcom/gpt2-friends | null | [
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-generation | transformers | {} | HScomcom/gpt2-game-of-thrones | null | [
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-generation | transformers | ### Model information
Fine tuning data: https://www.kaggle.com/bennijesus/lovecraft-fiction
License: CC0: Public Domain
Base model: gpt-2 large
Epoch: 30
Train runtime: 10307.3488 secs
Loss: 0.0292
API page: [Ainize](https://ainize.ai/fpem123/GPT2-LoveCraft?branch=master)
Demo page: [End-point](https://master-gpt2-love-craft-fpem123.endpoint.ainize.ai/)
### ===Teachable NLP===
Training a GPT-2 model requires writing code and GPU resources, but with Teachable NLP you can easily fine-tune a model and get an API to use it for free.
Teachable NLP: [Teachable NLP](https://ainize.ai/teachable-nlp)
Tutorial: [Tutorial](https://forum.ainetwork.ai/t/teachable-nlp-how-to-use-teachable-nlp/65?utm_source=community&utm_medium=huggingface&utm_campaign=model&utm_content=teachable%20nlp)
And my other lovecraft model: [showcase](https://forum.ainetwork.ai/t/teachable-nlp-gpt-2-lovecraft/71) | {} | HScomcom/gpt2-lovecraft | null | [
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-generation | transformers | {} | HScomcom/gpt2-theoffice | null | [
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | HUNGPHAM/NewModel | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | HUNGPHAM/distilbert-base-uncased-finetuned-squad | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | This is a RainGAN model | {} | HVH/RainGAN | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers | {} | HackMIT/double-agent | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-generation | transformers |
# Harry Potter DialoGPT Model | {"tags": ["conversational"]} | HackyHackyMan/DialoGPT-small-harrypotter | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-generation | transformers |
# My Awesome Model | {"tags": ["conversational"]} | Hadron/DialoGPT-medium-nino | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text2text-generation | transformers | {} | hchang/t5-small-finetuned-xsum | null | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
fill-mask | transformers | {} | HaitaoYang/bert_cn_bi-classification | null | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | HaitaoYang/bert_cn_finetuning | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Hakar/Funny | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Hakun/TestModeel | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Hal9000/12 | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Haley/distilbert-base-uncased-finetuned-cola | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Half-cup-of-tea/bert-base-uncased-finetuned-wikitext2 | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Half-cup-of-tea/distilroberta-base-finetuned-wikitext2 | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-generation | transformers |
# Peter from Your Boyfriend Game.
| {"tags": ["conversational"]} | Hallzy/Peterbot | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-generation | transformers |
# Jake DialoGPT-large-jake
| {"tags": ["conversational"]} | Hamas/DialoGPT-large-jake | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-generation | transformers |
# Jake DialoGPT-large-jake2
| {"tags": ["conversational"]} | Hamas/DialoGPT-large-jake2 | null | [
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-generation | transformers |
# Jake DialoGPT-large-jake
| {"tags": ["conversational"]} | Hamas/DialoGPT-large-jake3 | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-generation | transformers |
# Jake DialoGPT-large-jake
| {"tags": ["conversational"]} | Hamas/DialoGPT-large-jake4 | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-generation | transformers |
# Rick DialoGPT Model | {"tags": ["conversational"]} | Hamhams/DialoGPT-small-rick | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-generation | transformers |
## GPT2-Home
This model is fine-tuned using GPT-2 on amazon home products metadata.
It can generate descriptions for your **home** products by getting a text prompt.
### Model description
[GPT-2](https://openai.com/blog/better-language-models/) is a large [transformer](https://arxiv.org/abs/1706.03762)-based language model with 1.5 billion parameters, trained on a dataset of 8 million web pages. GPT-2 is trained with a simple objective: predict the next word, given all of the previous words within some text. The diversity of the dataset causes this simple goal to contain naturally occurring demonstrations of many tasks across diverse domains. GPT-2 is a direct scale-up of GPT, with more than 10X the parameters and trained on more than 10X the amount of data.
### Live Demo
To test the model with special configurations, please visit the [Demo](https://huggingface.co/spaces/HamidRezaAttar/gpt2-home).
### Blog Post
For more detailed information about project development please refer to my [blog post](https://hamidrezaattar.github.io/blog/markdown/2022/02/17/gpt2-home.html).
### How to use
For the best experience and clean outputs, you can use the Live Demo mentioned above, or the notebook available in my [GitHub](https://github.com/HamidRezaAttar/GPT2-Home).
You can use this model directly with a pipeline for text generation.
```python
>>> from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
>>> tokenizer = AutoTokenizer.from_pretrained("HamidRezaAttar/gpt2-product-description-generator")
>>> model = AutoModelForCausalLM.from_pretrained("HamidRezaAttar/gpt2-product-description-generator")
>>> generator = pipeline('text-generation', model=model, tokenizer=tokenizer)
>>> generated_text = generator("This bed is very comfortable.", max_length=100)  # generation length capped at call time
```
### Citation info
```bibtex
@misc{GPT2-Home,
author = {HamidReza Fatollah Zadeh Attar},
title = {GPT2-Home the English home product description generator},
year = {2021},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/HamidRezaAttar/GPT2-Home}},
}
```
| {"language": "en", "license": "apache-2.0", "tags": ["text-generation"], "widget": [{"text": "Maximize your bedroom space without sacrificing style with the storage bed."}, {"text": "Handcrafted of solid acacia in weathered gray, our round Jozy drop-leaf dining table is a space-saving."}, {"text": "Our plush and luxurious Emmett modular sofa brings custom comfort to your living space."}]} | HamidRezaAttar/gpt2-product-description-generator | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"en",
"arxiv:1706.03762",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | null | {} | Han11/test | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | HanJing/test_model | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Hanaa98/Hana | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Hanchen/roberta-large | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | Model Description | {} | Hanchen/testRepo | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0612
- Precision: 0.9259
- Recall: 0.9369
- F1: 0.9314
- Accuracy: 0.9839
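A minimal inference sketch (an assumption, using the standard `transformers` token-classification pipeline; the example sentence is illustrative):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Hank/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",  # group word pieces into whole entities (recent transformers versions)
)
print(ner("Hugging Face is based in New York City."))
```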
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.243 | 1.0 | 878 | 0.0703 | 0.9134 | 0.9181 | 0.9158 | 0.9806 |
| 0.0515 | 2.0 | 1756 | 0.0609 | 0.9214 | 0.9343 | 0.9278 | 0.9832 |
| 0.0305 | 3.0 | 2634 | 0.0612 | 0.9259 | 0.9369 | 0.9314 | 0.9839 |
### Framework versions
- Transformers 4.9.1
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["conll2003"], "metrics": ["precision", "recall", "f1", "accuracy"], "model_index": [{"name": "distilbert-base-uncased-finetuned-ner", "results": [{"task": {"name": "Token Classification", "type": "token-classification"}, "dataset": {"name": "conll2003", "type": "conll2003", "args": "conll2003"}, "metric": {"name": "Accuracy", "type": "accuracy", "value": 0.9839229828268226}}]}]} | Hank/distilbert-base-uncased-finetuned-ner | null | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | null | {} | Hano/Asher | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-generation | transformers |
# Rick from Rick & Morty DialoGPT Model | {"tags": ["conversational"]} | HansAnonymous/DialoGPT-medium-rick | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-generation | transformers |
# Shrek from Shrek DialoGPT Model | {"tags": ["conversational"]} | HansAnonymous/DialoGPT-small-shrek | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6424
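A short sketch relating the loss above to perplexity (assuming the reported value is the mean per-token cross-entropy in nats, as is standard for the `Trainer`):
```python
import math

eval_loss = 3.6424            # validation loss reported above
perplexity = math.exp(eval_loss)
print(f"perplexity ~ {perplexity:.1f}")  # roughly 38.2
```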
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.7608 | 1.0 | 2334 | 3.6655 |
| 3.6335 | 2.0 | 4668 | 3.6455 |
| 3.6066 | 3.0 | 7002 | 3.6424 |
### Framework versions
- Transformers 4.10.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.0
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"]} | Haotian/distilgpt2-finetuned-wikitext2 | null | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | null | {} | HarjyotSahni/personal | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - UR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9613
- Wer: 0.5376
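A minimal transcription sketch (an assumption, using the `transformers` ASR pipeline; the audio path is a placeholder and should point to a 16 kHz mono Urdu recording):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="HarrisDePerceptron/xls-r-1b-ur")
# "sample_ur.wav" is a placeholder filename.
print(asr("sample_ur.wav"))
```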
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.3118 | 1.96 | 100 | 2.9093 | 0.9982 |
| 2.2071 | 3.92 | 200 | 1.1737 | 0.7779 |
| 1.6098 | 5.88 | 300 | 0.9984 | 0.7015 |
| 1.4333 | 7.84 | 400 | 0.9800 | 0.6705 |
| 1.2859 | 9.8 | 500 | 0.9582 | 0.6487 |
| 1.2073 | 11.76 | 600 | 0.8841 | 0.6077 |
| 1.1417 | 13.73 | 700 | 0.9118 | 0.6343 |
| 1.0988 | 15.69 | 800 | 0.9217 | 0.6196 |
| 1.0279 | 17.65 | 900 | 0.9165 | 0.5867 |
| 0.9765 | 19.61 | 1000 | 0.9306 | 0.5978 |
| 0.9161 | 21.57 | 1100 | 0.9305 | 0.5768 |
| 0.8395 | 23.53 | 1200 | 0.9828 | 0.5819 |
| 0.8306 | 25.49 | 1300 | 0.9397 | 0.5760 |
| 0.7819 | 27.45 | 1400 | 0.9544 | 0.5742 |
| 0.7509 | 29.41 | 1500 | 0.9278 | 0.5690 |
| 0.7218 | 31.37 | 1600 | 0.9003 | 0.5587 |
| 0.6725 | 33.33 | 1700 | 0.9659 | 0.5554 |
| 0.6287 | 35.29 | 1800 | 0.9522 | 0.5561 |
| 0.6077 | 37.25 | 1900 | 0.9154 | 0.5465 |
| 0.5873 | 39.22 | 2000 | 0.9331 | 0.5469 |
| 0.5621 | 41.18 | 2100 | 0.9335 | 0.5491 |
| 0.5168 | 43.14 | 2200 | 0.9632 | 0.5458 |
| 0.5114 | 45.1 | 2300 | 0.9349 | 0.5387 |
| 0.4986 | 47.06 | 2400 | 0.9364 | 0.5380 |
| 0.4761 | 49.02 | 2500 | 0.9584 | 0.5391 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
| {"language": ["ur"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "ur", "robust-speech-event", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8.0", "type": "mozilla-foundation/common_voice_8_0", "args": "ur"}, "metrics": [{"type": "wer", "value": 44.13, "name": "Test WER"}]}]}]} | HarrisDePerceptron/xls-r-1b-ur | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"ur",
"robust-speech-event",
"hf-asr-leaderboard",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - UR dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2924
- Wer: 0.7201
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 200.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 11.2783 | 4.17 | 100 | 4.6409 | 1.0 |
| 3.5578 | 8.33 | 200 | 3.1649 | 1.0 |
| 3.1279 | 12.5 | 300 | 3.0335 | 1.0 |
| 2.9944 | 16.67 | 400 | 2.9526 | 0.9983 |
| 2.9275 | 20.83 | 500 | 2.9291 | 1.0009 |
| 2.8077 | 25.0 | 600 | 2.5633 | 0.9895 |
| 2.4438 | 29.17 | 700 | 1.9045 | 0.9564 |
| 1.9659 | 33.33 | 800 | 1.4114 | 0.7960 |
| 1.7092 | 37.5 | 900 | 1.2584 | 0.7637 |
| 1.517 | 41.67 | 1000 | 1.2040 | 0.7507 |
| 1.3966 | 45.83 | 1100 | 1.1273 | 0.7463 |
| 1.3197 | 50.0 | 1200 | 1.1054 | 0.6957 |
| 1.2476 | 54.17 | 1300 | 1.1035 | 0.7001 |
| 1.1796 | 58.33 | 1400 | 1.0890 | 0.7097 |
| 1.1237 | 62.5 | 1500 | 1.0883 | 0.7167 |
| 1.0777 | 66.67 | 1600 | 1.1067 | 0.7219 |
| 1.0051 | 70.83 | 1700 | 1.1115 | 0.7236 |
| 0.9521 | 75.0 | 1800 | 1.0867 | 0.7132 |
| 0.9147 | 79.17 | 1900 | 1.0852 | 0.7210 |
| 0.8798 | 83.33 | 2000 | 1.1411 | 0.7097 |
| 0.8317 | 87.5 | 2100 | 1.1634 | 0.7018 |
| 0.7946 | 91.67 | 2200 | 1.1621 | 0.7201 |
| 0.7594 | 95.83 | 2300 | 1.1482 | 0.7036 |
| 0.729 | 100.0 | 2400 | 1.1493 | 0.7062 |
| 0.7055 | 104.17 | 2500 | 1.1726 | 0.6931 |
| 0.6622 | 108.33 | 2600 | 1.1938 | 0.7001 |
| 0.6583 | 112.5 | 2700 | 1.1832 | 0.7149 |
| 0.6299 | 116.67 | 2800 | 1.1996 | 0.7175 |
| 0.5903 | 120.83 | 2900 | 1.1986 | 0.7132 |
| 0.5816 | 125.0 | 3000 | 1.1909 | 0.7010 |
| 0.5583 | 129.17 | 3100 | 1.2079 | 0.6870 |
| 0.5392 | 133.33 | 3200 | 1.2109 | 0.7228 |
| 0.5412 | 137.5 | 3300 | 1.2353 | 0.7245 |
| 0.5136 | 141.67 | 3400 | 1.2390 | 0.7254 |
| 0.5007 | 145.83 | 3500 | 1.2273 | 0.7123 |
| 0.4883 | 150.0 | 3600 | 1.2773 | 0.7289 |
| 0.4835 | 154.17 | 3700 | 1.2678 | 0.7289 |
| 0.4568 | 158.33 | 3800 | 1.2592 | 0.7350 |
| 0.4525 | 162.5 | 3900 | 1.2705 | 0.7254 |
| 0.4379 | 166.67 | 4000 | 1.2717 | 0.7306 |
| 0.4198 | 170.83 | 4100 | 1.2618 | 0.7219 |
| 0.4216 | 175.0 | 4200 | 1.2909 | 0.7158 |
| 0.4305 | 179.17 | 4300 | 1.2808 | 0.7167 |
| 0.399 | 183.33 | 4400 | 1.2750 | 0.7193 |
| 0.3937 | 187.5 | 4500 | 1.2719 | 0.7149 |
| 0.3905 | 191.67 | 4600 | 1.2816 | 0.7158 |
| 0.3892 | 195.83 | 4700 | 1.2951 | 0.7210 |
| 0.3932 | 200.0 | 4800 | 1.2924 | 0.7201 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
| {"language": ["ur"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "", "results": []}]} | HarrisDePerceptron/xls-r-300m-ur-cv7 | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"ur",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [DrishtiSharma/wav2vec2-large-xls-r-300m-hi-d3](https://huggingface.co/DrishtiSharma/wav2vec2-large-xls-r-300m-hi-d3) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - UR dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5443
- Wer: 0.7030
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000388
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 750
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 10.7052 | 1.96 | 100 | 3.4683 | 1.0 |
| 3.2395 | 3.92 | 200 | 3.1489 | 1.0 |
| 2.9951 | 5.88 | 300 | 2.9823 | 1.0007 |
| 2.3574 | 7.84 | 400 | 1.2614 | 0.7598 |
| 1.7287 | 9.8 | 500 | 1.1817 | 0.7421 |
| 1.6144 | 11.76 | 600 | 1.1315 | 0.7321 |
| 1.5598 | 13.73 | 700 | 1.2322 | 0.7550 |
| 1.5418 | 15.69 | 800 | 1.2721 | 0.7819 |
| 1.4578 | 17.65 | 900 | 1.1710 | 0.7531 |
| 1.4311 | 19.61 | 1000 | 1.2042 | 0.7491 |
| 1.3483 | 21.57 | 1100 | 1.1702 | 0.7465 |
| 1.3078 | 23.53 | 1200 | 1.1963 | 0.7421 |
| 1.2576 | 25.49 | 1300 | 1.1501 | 0.7280 |
| 1.2173 | 27.45 | 1400 | 1.2526 | 0.7299 |
| 1.2217 | 29.41 | 1500 | 1.2479 | 0.7310 |
| 1.1536 | 31.37 | 1600 | 1.2567 | 0.7432 |
| 1.0939 | 33.33 | 1700 | 1.2801 | 0.7247 |
| 1.0745 | 35.29 | 1800 | 1.2340 | 0.7151 |
| 1.0454 | 37.25 | 1900 | 1.2372 | 0.7151 |
| 1.0101 | 39.22 | 2000 | 1.2461 | 0.7376 |
| 0.9833 | 41.18 | 2100 | 1.2553 | 0.7269 |
| 0.9314 | 43.14 | 2200 | 1.2372 | 0.7015 |
| 0.9147 | 45.1 | 2300 | 1.3035 | 0.7358 |
| 0.8758 | 47.06 | 2400 | 1.2598 | 0.7092 |
| 0.8356 | 49.02 | 2500 | 1.2557 | 0.7144 |
| 0.8105 | 50.98 | 2600 | 1.2619 | 0.7236 |
| 0.7947 | 52.94 | 2700 | 1.3994 | 0.7491 |
| 0.7623 | 54.9 | 2800 | 1.2932 | 0.7133 |
| 0.7282 | 56.86 | 2900 | 1.2799 | 0.7089 |
| 0.7108 | 58.82 | 3000 | 1.3615 | 0.7148 |
| 0.6896 | 60.78 | 3100 | 1.3129 | 0.7041 |
| 0.6496 | 62.75 | 3200 | 1.4050 | 0.6934 |
| 0.6075 | 64.71 | 3300 | 1.3571 | 0.7026 |
| 0.6242 | 66.67 | 3400 | 1.3369 | 0.7063 |
| 0.5865 | 68.63 | 3500 | 1.4368 | 0.7140 |
| 0.5721 | 70.59 | 3600 | 1.4224 | 0.7066 |
| 0.5475 | 72.55 | 3700 | 1.4798 | 0.7118 |
| 0.5086 | 74.51 | 3800 | 1.5107 | 0.7232 |
| 0.4958 | 76.47 | 3900 | 1.4849 | 0.7089 |
| 0.5046 | 78.43 | 4000 | 1.4451 | 0.7114 |
| 0.4694 | 80.39 | 4100 | 1.4674 | 0.7089 |
| 0.4386 | 82.35 | 4200 | 1.5245 | 0.7103 |
| 0.4516 | 84.31 | 4300 | 1.5032 | 0.7103 |
| 0.4113 | 86.27 | 4400 | 1.5246 | 0.7196 |
| 0.3972 | 88.24 | 4500 | 1.5318 | 0.7114 |
| 0.4006 | 90.2 | 4600 | 1.5543 | 0.6982 |
| 0.4014 | 92.16 | 4700 | 1.5442 | 0.7048 |
| 0.3672 | 94.12 | 4800 | 1.5542 | 0.7137 |
| 0.3666 | 96.08 | 4900 | 1.5414 | 0.7018 |
| 0.3574 | 98.04 | 5000 | 1.5465 | 0.7059 |
| 0.3428 | 100.0 | 5100 | 1.5443 | 0.7030 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
| {"language": ["ur"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "", "results": []}]} | HarrisDePerceptron/xls-r-300m-ur-cv8-hi | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"ur",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [HarrisDePerceptron/xls-r-300m-ur](https://huggingface.co/HarrisDePerceptron/xls-r-300m-ur) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - UR dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0517
- WER: 0.5151291512915129
- CER: 0.23689640940982254
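A minimal sketch of how WER/CER figures like those above can be computed (assuming the `datasets` metrics used in the related Vakyansh cards; the strings are illustrative):
```python
from datasets import load_metric

wer_metric = load_metric("wer")
cer_metric = load_metric("cer")

predictions = ["یہ ایک مثال ہے"]   # illustrative model output
references = ["یہ ایک مثال تھی"]   # illustrative ground truth

print("WER:", wer_metric.compute(predictions=predictions, references=references))
print("CER:", cer_metric.compute(predictions=predictions, references=references))
```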
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.2991 | 1.96 | 100 | 0.9769 | 0.6627 |
| 1.3415 | 3.92 | 200 | 0.9701 | 0.6594 |
| 1.2998 | 5.88 | 300 | 0.9678 | 0.6668 |
| 1.2881 | 7.84 | 400 | 0.9650 | 0.6613 |
| 1.2369 | 9.8 | 500 | 0.9392 | 0.6502 |
| 1.2293 | 11.76 | 600 | 0.9536 | 0.6480 |
| 1.1709 | 13.73 | 700 | 0.9265 | 0.6402 |
| 1.1492 | 15.69 | 800 | 0.9636 | 0.6506 |
| 1.1044 | 17.65 | 900 | 0.9305 | 0.6351 |
| 1.0704 | 19.61 | 1000 | 0.9329 | 0.6280 |
| 1.0039 | 21.57 | 1100 | 0.9413 | 0.6295 |
| 0.9756 | 23.53 | 1200 | 0.9718 | 0.6185 |
| 0.9633 | 25.49 | 1300 | 0.9731 | 0.6133 |
| 0.932 | 27.45 | 1400 | 0.9659 | 0.6199 |
| 0.9252 | 29.41 | 1500 | 0.9766 | 0.6196 |
| 0.9172 | 31.37 | 1600 | 1.0052 | 0.6199 |
| 0.8733 | 33.33 | 1700 | 0.9955 | 0.6203 |
| 0.868 | 35.29 | 1800 | 1.0069 | 0.6240 |
| 0.8547 | 37.25 | 1900 | 0.9783 | 0.6258 |
| 0.8451 | 39.22 | 2000 | 0.9845 | 0.6052 |
| 0.8374 | 41.18 | 2100 | 0.9496 | 0.6137 |
| 0.8153 | 43.14 | 2200 | 0.9756 | 0.6122 |
| 0.8134 | 45.1 | 2300 | 0.9712 | 0.6096 |
| 0.8019 | 47.06 | 2400 | 0.9565 | 0.5970 |
| 0.7746 | 49.02 | 2500 | 0.9864 | 0.6096 |
| 0.7664 | 50.98 | 2600 | 0.9988 | 0.6092 |
| 0.7708 | 52.94 | 2700 | 1.0181 | 0.6255 |
| 0.7468 | 54.9 | 2800 | 0.9918 | 0.6148 |
| 0.7241 | 56.86 | 2900 | 1.0150 | 0.6018 |
| 0.7165 | 58.82 | 3000 | 1.0439 | 0.6063 |
| 0.7104 | 60.78 | 3100 | 1.0016 | 0.6037 |
| 0.6954 | 62.75 | 3200 | 1.0117 | 0.5970 |
| 0.6753 | 64.71 | 3300 | 1.0191 | 0.6037 |
| 0.6803 | 66.67 | 3400 | 1.0190 | 0.6033 |
| 0.661 | 68.63 | 3500 | 1.0284 | 0.6007 |
| 0.6597 | 70.59 | 3600 | 1.0060 | 0.5967 |
| 0.6398 | 72.55 | 3700 | 1.0372 | 0.6048 |
| 0.6105 | 74.51 | 3800 | 1.0048 | 0.6044 |
| 0.6164 | 76.47 | 3900 | 1.0398 | 0.6148 |
| 0.6354 | 78.43 | 4000 | 1.0272 | 0.6133 |
| 0.5952 | 80.39 | 4100 | 1.0364 | 0.6081 |
| 0.5814 | 82.35 | 4200 | 1.0418 | 0.6092 |
| 0.6079 | 84.31 | 4300 | 1.0277 | 0.5967 |
| 0.5748 | 86.27 | 4400 | 1.0362 | 0.6041 |
| 0.5624 | 88.24 | 4500 | 1.0427 | 0.6007 |
| 0.5767 | 90.2 | 4600 | 1.0370 | 0.5919 |
| 0.5793 | 92.16 | 4700 | 1.0442 | 0.6011 |
| 0.547 | 94.12 | 4800 | 1.0516 | 0.5982 |
| 0.5513 | 96.08 | 4900 | 1.0461 | 0.5989 |
| 0.5429 | 98.04 | 5000 | 1.0504 | 0.5996 |
| 0.5404 | 100.0 | 5100 | 1.0517 | 0.5967 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
| {"language": ["ur"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "ur", "robust-speech-event", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8.0", "type": "mozilla-foundation/common_voice_8_0", "args": "ur"}, "metrics": [{"type": "wer", "value": 47.38, "name": "Test WER"}]}]}]} | HarrisDePerceptron/xls-r-300m-ur | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"ur",
"robust-speech-event",
"hf-asr-leaderboard",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - UR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8888
- Wer: 0.6642
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 10.1224 | 1.96 | 100 | 3.5429 | 1.0 |
| 3.2411 | 3.92 | 200 | 3.1786 | 1.0 |
| 3.1283 | 5.88 | 300 | 3.0571 | 1.0 |
| 3.0044 | 7.84 | 400 | 2.9560 | 0.9996 |
| 2.9388 | 9.8 | 500 | 2.8977 | 1.0011 |
| 2.86 | 11.76 | 600 | 2.6944 | 0.9952 |
| 2.5538 | 13.73 | 700 | 2.0967 | 0.9435 |
| 2.1214 | 15.69 | 800 | 1.4816 | 0.8428 |
| 1.8136 | 17.65 | 900 | 1.2459 | 0.8048 |
| 1.6795 | 19.61 | 1000 | 1.1232 | 0.7649 |
| 1.5571 | 21.57 | 1100 | 1.0510 | 0.7432 |
| 1.4975 | 23.53 | 1200 | 1.0298 | 0.6963 |
| 1.4485 | 25.49 | 1300 | 0.9775 | 0.7074 |
| 1.3924 | 27.45 | 1400 | 0.9798 | 0.6956 |
| 1.3604 | 29.41 | 1500 | 0.9345 | 0.7092 |
| 1.3224 | 31.37 | 1600 | 0.9535 | 0.6830 |
| 1.2816 | 33.33 | 1700 | 0.9178 | 0.6679 |
| 1.2623 | 35.29 | 1800 | 0.9249 | 0.6679 |
| 1.2421 | 37.25 | 1900 | 0.9124 | 0.6734 |
| 1.2208 | 39.22 | 2000 | 0.8962 | 0.6664 |
| 1.2145 | 41.18 | 2100 | 0.8903 | 0.6734 |
| 1.1888 | 43.14 | 2200 | 0.8883 | 0.6708 |
| 1.1933 | 45.1 | 2300 | 0.8928 | 0.6723 |
| 1.1838 | 47.06 | 2400 | 0.8868 | 0.6679 |
| 1.1634 | 49.02 | 2500 | 0.8886 | 0.6657 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
| {"language": ["ur"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "ur", "robust-speech-event", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8.0", "type": "mozilla-foundation/common_voice_8_0", "args": "ur"}, "metrics": [{"type": "wer", "value": 62.47, "name": "Test WER"}]}]}]} | HarrisDePerceptron/xlsr-large-53-ur | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"ur",
"robust-speech-event",
"hf-asr-leaderboard",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | null | {} | HarryPotter09/hubert-base-tokenizer | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-generation | transformers |
# Harry Potter DialoGPT Model | {"tags": ["conversational"]} | HarryPuttar/HarryPotterDC | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | null | {} | HarryWizard/cuad-large | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Harshal/transformers | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-generation | transformers |
# Jack Sparrow GPT | {"tags": ["conversational"]} | Harshal6927/Jack_Sparrow_GPT | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-generation | transformers |
# Tony Stark GPT
My first AI model, still learning. It was trained on a small dataset, so don't expect much. | {"tags": ["conversational"]} | Harshal6927/Tony_Stark_GPT | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | null | {} | Harshil7652/code_search | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers |
# Model Trained Using AutoNLP
- Problem type: Single Column Regression
- Model ID: 32597818
- CO2 Emissions (in grams): 8.655894631203154
## Validation Metrics
- Loss: 0.5410276651382446
- MSE: 0.5410276651382446
- MAE: 0.5694561004638672
- R2: 0.6830431129198475
- RMSE: 0.735545814037323
- Explained Variance: 0.6834385395050049
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/Harshveer/autonlp-formality_scoring_2-32597818
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Harshveer/autonlp-formality_scoring_2-32597818", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Harshveer/autonlp-formality_scoring_2-32597818", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` | {"language": "en", "tags": "autonlp", "datasets": ["Harshveer/autonlp-data-formality_scoring_2"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}], "co2_eq_emissions": 8.655894631203154} | Harshveer/autonlp-formality_scoring_2-32597818 | null | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"autonlp",
"en",
"dataset:Harshveer/autonlp-data-formality_scoring_2",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
automatic-speech-recognition | transformers |
# hindi_base_wav2vec2 | {"language": ["hi"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "hf-asr-leaderboard", "hi", "model_for_talk", "mozilla-foundation/common_voice_7_0", "robust-speech-event"], "datasets": ["Harveenchadha/indic-voice"], "model-index": [{"name": "Hindi Large", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice", "type": "common_voice", "args": "hi"}, "metrics": [{"type": "wer", "value": 22.62, "name": "Test WER"}, {"type": "cer", "value": 7.42, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice-7.0", "type": "mozilla-foundation/common_voice_7_0", "args": "hi"}, "metrics": [{"type": "wer", "value": 19.47, "name": "Test WER"}, {"type": "cer", "value": 8.05, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice-8.0", "type": "mozilla-foundation/common_voice_8_0", "args": "hi"}, "metrics": [{"type": "wer", "value": 20.87, "name": "Test WER"}, {"type": "cer", "value": 9.47, "name": "Test CER"}]}]}]} | Harveenchadha/hindi_base_wav2vec2 | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"hi",
"model_for_talk",
"mozilla-foundation/common_voice_7_0",
"robust-speech-event",
"dataset:Harveenchadha/indic-voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
automatic-speech-recognition | transformers | {"language": ["hi"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "hf-asr-leaderboard", "hi", "model_for_talk", "mozilla-foundation/common_voice_7_0", "robust-speech-event"], "datasets": ["Harveenchadha/indic-voice"], "model-index": [{"name": "Hindi Large", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice", "type": "common_voice", "args": "hi"}, "metrics": [{"type": "wer", "value": 23.08, "name": "Test WER"}, {"type": "cer", "value": 8.11, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice-7.0", "type": "mozilla-foundation/common_voice_7_0", "args": "hi"}, "metrics": [{"type": "wer", "value": 23.36, "name": "Test WER"}, {"type": "cer", "value": 8.94, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice-8.0", "type": "mozilla-foundation/common_voice_8_0", "args": "hi"}, "metrics": [{"type": "wer", "value": 24.85, "name": "Test WER"}, {"type": "cer", "value": 9.99, "name": "Test CER"}]}]}]} | Harveenchadha/hindi_large_wav2vec2 | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"hi",
"model_for_talk",
"mozilla-foundation/common_voice_7_0",
"robust-speech-event",
"dataset:Harveenchadha/indic-voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
automatic-speech-recognition | transformers | {"language": ["hi"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "hf-asr-leaderboard", "hi", "model_for_talk", "mozilla-foundation/common_voice_7_0", "robust-speech-event"], "datasets": ["Harveenchadha/indic-voice"], "model-index": [{"name": "Hindi Large", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice", "type": "common_voice", "args": "hi"}, "metrics": [{"type": "wer", "value": 19.14, "name": "Test WER"}, {"type": "cer", "value": 5.93, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice-7.0", "type": "mozilla-foundation/common_voice_7_0", "args": "hi"}, "metrics": [{"type": "wer", "value": 17.4, "name": "Test WER"}, {"type": "cer", "value": 7.13, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice-8.0", "type": "mozilla-foundation/common_voice_8_0", "args": "hi"}, "metrics": [{"type": "wer", "value": 18.99, "name": "Test WER"}, {"type": "cer", "value": 8.91, "name": "Test CER"}]}]}]} | Harveenchadha/hindi_model_with_lm_vakyansh | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"hi",
"model_for_talk",
"mozilla-foundation/common_voice_7_0",
"robust-speech-event",
"dataset:Harveenchadha/indic-voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text2text-generation | transformers | **Work in progress** | {} | Harveenchadha/indictrans | null | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | keras |
## Multimodal entailment
Author: Sayak Paul
Date created: 2021/08/08
Last modified: 2021/08/15
Description: Training a multimodal model for predicting entailment.
### What is multimodal entailment?
On social media platforms, to audit and moderate content we may want to find answers to the following questions in near real-time:
Does a given piece of information contradict the other?
Does a given piece of information imply the other?
In NLP, this task is called analyzing textual entailment. However, that's only when the information comes from text content. In practice, it's often the case that the information available comes not just from text content, but from a multimodal combination of text, images, audio, video, etc. Multimodal entailment is simply the extension of textual entailment to a variety of new input modalities. | {"library_name": "keras", "tags": ["nlp"]} | Harveenchadha/model-entailment | null | [
"keras",
"nlp",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
automatic-speech-recognition | transformers | {"language": ["or"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "hf-asr-leaderboard", "model_for_talk", "mozilla-foundation/common_voice_7_0", "or", "robust-speech-event"], "datasets": ["Harveenchadha/indic-voice"], "model-index": [{"name": "Hindi Large", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice", "type": "common_voice", "args": "or"}, "metrics": [{"type": "wer", "value": 54.26, "name": "Test WER"}, {"type": "cer", "value": 11.36, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice-7.0", "type": "mozilla-foundation/common_voice_7_0", "args": "or"}, "metrics": [{"type": "wer", "value": 53.58, "name": "Test WER"}, {"type": "cer", "value": 11.26, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice-8.0", "type": "mozilla-foundation/common_voice_8_0", "args": "or"}, "metrics": [{"type": "wer", "value": 55.26, "name": "Test WER"}, {"type": "cer", "value": 13.01, "name": "Test CER"}]}]}]} | Harveenchadha/odia_large_wav2vec2 | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"model_for_talk",
"mozilla-foundation/common_voice_7_0",
"or",
"robust-speech-event",
"dataset:Harveenchadha/indic-voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
automatic-speech-recognition | transformers | {} | Harveenchadha/vakyansh-wav2vec2-assamese-asm-8 | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
automatic-speech-recognition | transformers | {} | Harveenchadha/vakyansh-wav2vec2-bengali-bnm-200 | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
automatic-speech-recognition | transformers | {} | Harveenchadha/vakyansh-wav2vec2-bhojpuri-bhom-60 | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
automatic-speech-recognition | transformers | {} | Harveenchadha/vakyansh-wav2vec2-dogri-doi-55 | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
automatic-speech-recognition | transformers | {} | Harveenchadha/vakyansh-wav2vec2-gujarati-gnm-100 | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
automatic-speech-recognition | transformers |
## Spaces Demo
Check the spaces demo [here](https://huggingface.co/spaces/Harveenchadha/wav2vec2-vakyansh-hindi/tree/main)
## Pretrained Model
Fine-tuned on Multilingual Pretrained Model [CLSRIL-23](https://arxiv.org/abs/2107.07402). The original fairseq checkpoint is present [here](https://github.com/Open-Speech-EkStep/vakyansh-models). When using this model, make sure that your speech input is sampled at 16kHz.
**Note: The result from this model is without a language model so you may witness a higher WER in some cases.**
## Dataset
This model was trained on 4200 hours of labelled Hindi data. The labelled data is not in the public domain as of now.
## Training Script
Models were trained using experimental platform setup by Vakyansh team at Ekstep. Here is the [training repository](https://github.com/Open-Speech-EkStep/vakyansh-wav2vec2-experimentation).
In case you want to explore training logs on wandb they are [here](https://wandb.ai/harveenchadha/hindi_finetuning_multilingual?workspace=user-harveenchadha).
## [Colab Demo](https://colab.research.google.com/github/harveenchadha/bol/blob/main/demos/hf/hindi/hf_hindi_him_4200_demo.ipynb)
## Usage
The model can be used directly (without a language model) as follows:
```python
import soundfile as sf
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import argparse
def parse_transcription(wav_file):
    # load pretrained model
    processor = Wav2Vec2Processor.from_pretrained("Harveenchadha/vakyansh-wav2vec2-hindi-him-4200")
    model = Wav2Vec2ForCTC.from_pretrained("Harveenchadha/vakyansh-wav2vec2-hindi-him-4200")

    # load audio
    audio_input, sample_rate = sf.read(wav_file)

    # pad input values and return pt tensor
    input_values = processor(audio_input, sampling_rate=sample_rate, return_tensors="pt").input_values

    # INFERENCE
    # retrieve logits & take argmax
    logits = model(input_values).logits
    predicted_ids = torch.argmax(logits, dim=-1)

    # transcribe
    transcription = processor.decode(predicted_ids[0], skip_special_tokens=True)
    print(transcription)
```
## Evaluation
The model can be evaluated as follows on the Hindi test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "hi", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("Harveenchadha/vakyansh-wav2vec2-hindi-him-4200")
model = Wav2Vec2ForCTC.from_pretrained("Harveenchadha/vakyansh-wav2vec2-hindi-him-4200")
model.to("cuda")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]'
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids, skip_special_tokens=True)
    return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 33.17 %
[**Colab Evaluation**](https://colab.research.google.com/github/harveenchadha/bol/blob/main/demos/hf/hindi/hf_vakyansh_hindi_him_4200_evaluation_common_voice.ipynb)
## Credits
Thanks to Ekstep Foundation for making this possible. The vakyansh team will be open sourcing speech models in all the Indic Languages. | {"language": "hi", "license": "mit", "tags": ["audio", "automatic-speech-recognition", "speech"], "metrics": ["wer"], "model-index": [{"name": "Wav2Vec2 Vakyansh Hindi Model by Harveen Chadha", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice hi", "type": "common_voice", "args": "hi"}, "metrics": [{"type": "wer", "value": 33.17, "name": "Test WER"}]}]}]} | Harveenchadha/vakyansh-wav2vec2-hindi-him-4200 | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"hi",
"arxiv:2107.07402",
"license:mit",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
automatic-speech-recognition | transformers | {} | Harveenchadha/vakyansh-wav2vec2-indian-english-enm-700 | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
automatic-speech-recognition | transformers | {} | Harveenchadha/vakyansh-wav2vec2-kannada-knm-560 | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
automatic-speech-recognition | transformers | {} | Harveenchadha/vakyansh-wav2vec2-maithili-maim-50 | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
automatic-speech-recognition | transformers | {} | Harveenchadha/vakyansh-wav2vec2-malayalam-mlm-8 | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
automatic-speech-recognition | transformers | {} | Harveenchadha/vakyansh-wav2vec2-marathi-mrm-100 | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
automatic-speech-recognition | transformers | {} | Harveenchadha/vakyansh-wav2vec2-nepali-nem-130 | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
automatic-speech-recognition | transformers | {} | Harveenchadha/vakyansh-wav2vec2-odia-orm-100 | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
automatic-speech-recognition | transformers |
Fine-tuned on Multilingual Pretrained Model [CLSRIL-23](https://arxiv.org/abs/2107.07402). The original fairseq checkpoint is present [here](https://github.com/Open-Speech-EkStep/vakyansh-models). When using this model, make sure that your speech input is sampled at 16kHz.
**Note: The result from this model is without a language model so you may witness a higher WER in some cases.**
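A minimal sketch for getting audio to the required 16 kHz before inference (assuming `torchaudio`, which the related Vakyansh cards already use; the file path is a placeholder):
```python
import torchaudio

speech_array, sampling_rate = torchaudio.load("sample_pa.wav")  # placeholder path
if sampling_rate != 16_000:
    resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
    speech_array = resampler(speech_array)
```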
| {"language": "pa", "license": "mit", "tags": ["audio", "automatic-speech-recognition", "speech"], "metrics": ["wer"], "model-index": [{"name": "Wav2Vec2 Vakyansh Punjabi Model by Harveen Chadha", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice hi", "type": "common_voice", "args": "pa"}, "metrics": [{"type": "wer", "value": 33.17, "name": "Test WER"}]}]}]} | Harveenchadha/vakyansh-wav2vec2-punjabi-pam-10 | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"pa",
"arxiv:2107.07402",
"license:mit",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
automatic-speech-recognition | transformers | {} | Harveenchadha/vakyansh-wav2vec2-rajasthani-raj-45 | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
automatic-speech-recognition | transformers | {} | Harveenchadha/vakyansh-wav2vec2-sanskrit-sam-60 | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
automatic-speech-recognition | transformers |
## Pretrained Model
Fine-tuned on Multilingual Pretrained Model [CLSRIL-23](https://arxiv.org/abs/2107.07402). The original fairseq checkpoint is present [here](https://github.com/Open-Speech-EkStep/vakyansh-models). When using this model, make sure that your speech input is sampled at 16kHz.
**Note: The result from this model is without a language model so you may witness a higher WER in some cases.**
## Dataset
This model was trained on 4200 hours of Tamil labelled data. The labelled data is not present in the public domain as of now.
## Training Script
Models were trained using experimental platform setup by Vakyansh team at Ekstep. Here is the [training repository](https://github.com/Open-Speech-EkStep/vakyansh-wav2vec2-experimentation).
In case you want to explore the training logs on wandb, they are available [here](https://wandb.ai/harveenchadha/tamil-finetuning-multilingual).
## [Colab Demo](https://github.com/harveenchadha/bol/blob/main/demos/hf/tamil/hf_tamil_tnm_4200_demo.ipynb)
## Usage
The model can be used directly (without a language model) as follows:
```python
import soundfile as sf
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor


def parse_transcription(wav_file):
    # load pretrained model and processor
    processor = Wav2Vec2Processor.from_pretrained("Harveenchadha/vakyansh-wav2vec2-tamil-tam-250")
    model = Wav2Vec2ForCTC.from_pretrained("Harveenchadha/vakyansh-wav2vec2-tamil-tam-250")

    # load audio (expected to be sampled at 16 kHz)
    audio_input, sample_rate = sf.read(wav_file)

    # pad input values and return a PyTorch tensor
    input_values = processor(audio_input, sampling_rate=sample_rate, return_tensors="pt").input_values

    # inference: retrieve logits and take the argmax over the vocabulary
    with torch.no_grad():
        logits = model(input_values).logits
    predicted_ids = torch.argmax(logits, dim=-1)

    # decode the predicted ids into text
    transcription = processor.decode(predicted_ids[0], skip_special_tokens=True)
    print(transcription)


# Example call (path is a placeholder):
# parse_transcription("sample_16khz.wav")
```
## Evaluation
The model can be evaluated as follows on the Tamil test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re

test_dataset = load_dataset("common_voice", "ta", split="test")
wer = load_metric("wer")

processor = Wav2Vec2Processor.from_pretrained("Harveenchadha/vakyansh-wav2vec2-tamil-tam-250")
model = Wav2Vec2ForCTC.from_pretrained("Harveenchadha/vakyansh-wav2vec2-tamil-tam-250")
model.to("cuda")

resampler = torchaudio.transforms.Resample(48_000, 16_000)
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]'

# Preprocessing the datasets.
# We need to read the audio files as arrays and resample them to 16 kHz.
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)

# Run batched inference and collect the predicted transcriptions.
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids, skip_special_tokens=True)
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 53.64 %
[**Colab Evaluation**](https://github.com/harveenchadha/bol/blob/main/demos/hf/tamil/hf_vakyansh_tamil_tnm_4200_evaluation_common_voice.ipynb)
## Credits
Thanks to Ekstep Foundation for making this possible. The vakyansh team will be open sourcing speech models in all the Indic Languages. | {"language": "ta", "license": "mit", "tags": ["audio", "automatic-speech-recognition", "speech"], "metrics": ["wer"], "model-index": [{"name": "Wav2Vec2 Vakyansh Tamil Model by Harveen Chadha", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice ta", "type": "common_voice", "args": "ta"}, "metrics": [{"type": "wer", "value": 53.64, "name": "Test WER"}]}]}]} | Harveenchadha/vakyansh-wav2vec2-tamil-tam-250 | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"ta",
"arxiv:2107.07402",
"license:mit",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
automatic-speech-recognition | transformers | {} | Harveenchadha/vakyansh-wav2vec2-telugu-tem-100 | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
automatic-speech-recognition | transformers | {} | Harveenchadha/vakyansh-wav2vec2-urdu-urm-60 | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | transformers |
Hindi Pretrained model on 4200 hours. [Link](https://arxiv.org/abs/2107.07402) | {"language": "hi", "license": "apache-2.0", "tags": ["hf-asr-leaderboard", "hi", "model_for_talk", "pretrained", "robust-speech-event", "speech"]} | Harveenchadha/vakyansh_hindi_base_pretrained | null | [
"transformers",
"pytorch",
"wav2vec2",
"pretraining",
"hf-asr-leaderboard",
"hi",
"model_for_talk",
"pretrained",
"robust-speech-event",
"speech",
"arxiv:2107.07402",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
feature-extraction | transformers | ## Overview
We present CLSRIL-23 (Cross Lingual Speech Representations on Indic Languages), a self-supervised audio pre-trained model which learns cross-lingual speech representations from raw audio across **23 Indic languages**. It is built on top of wav2vec 2.0, which is trained by solving a contrastive task over masked latent speech representations and jointly learns the quantization of latents shared across all languages.
[Arxiv Link](https://arxiv.org/pdf/2107.07402.pdf)
[Original Repo](https://github.com/Open-Speech-EkStep/vakyansh-models) contains models in fairseq format.
## Languages in the pretraining dataset
| Language | Data (In Hrs) |
|-----------|---------------|
| Assamese | 254.9 |
| Bengali | 331.3 |
| Bodo | 26.9 |
| Dogri | 17.1 |
| English | 819.7 |
| Gujarati | 336.7 |
| Hindi | 4563.7 |
| Kannada | 451.8 |
| Kashmiri | 67.8 |
| Konkani | 36.8 |
| Maithili | 113.8 |
| Malayalam | 297.7 |
| Manipuri | 171.9 |
| Marathi | 458.2 |
| Nepali | 31.6 |
| Odia | 131.4 |
| Punjabi | 486.05 |
| Sanskrit | 58.8 |
| Santali | 6.56 |
| Sindhi | 16 |
| Tamil | 542.6 |
| Telugu | 302.8 |
| Urdu | 259.68 |
## Repo for training:
[Experimentation](https://github.com/Open-Speech-EkStep/vakyansh-wav2vec2-experimentation) platform built on top of fairseq.
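A rough feature-extraction sketch from the transformers side is shown below. It is only an illustration under assumptions: the input file is a hypothetical mono 16 kHz recording, loading the checkpoint with `Wav2Vec2Model` keeps the encoder while dropping any pre-training-only heads, and if the repository does not ship a preprocessor config a default `Wav2Vec2FeatureExtractor` can be constructed instead.

```python
import torch
import soundfile as sf
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

model_id = "Harveenchadha/wav2vec2-pretrained-clsril-23-10k"

# sample_16khz.wav is a placeholder path; any mono 16 kHz recording works.
speech, sampling_rate = sf.read("sample_16khz.wav")

feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(model_id)
model = Wav2Vec2Model.from_pretrained(model_id)

inputs = feature_extractor(speech, sampling_rate=sampling_rate, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Frame-level cross-lingual representations: (batch, time, hidden_size)
print(outputs.last_hidden_state.shape)
```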
| {} | Harveenchadha/wav2vec2-pretrained-clsril-23-10k | null | [
"transformers",
"pytorch",
"wav2vec2",
"feature-extraction",
"arxiv:2107.07402",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | null | {} | HarveyBWest/DialoGPT-small-sheldon | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Hasan/Test | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Hasanmuradbuet/bert-finetuned-mrpc | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
fill-mask | transformers | {} | Hasanmuradbuet/dummy-model | null | [
"transformers",
"tf",
"camembert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | notabota/DialoGPT-Large-Lelouch | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Hassan6678/wav2vec2-base-urdu-demo | null | [
"tensorboard",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Hassene/DialoGPT-medium-harrypotter | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers |
## Table of Contents
- [Model Details](#model-details)
- [How to Get Started With the Model](#how-to-get-started-with-the-model)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [Training](#training)
- [Evaluation](#evaluation)
- [Technical Specifications](#technical-specifications)
- [Citation Information](#citation-information)
## Model Details
**Model Description:**
The model is used for classifying a text as Abusive (Hatespeech and Offensive) or Normal. The model is trained using data from Gab and Twitter and Human Rationales were included as part of the training data to boost the performance. The model also has a rationale predictor head that can predict the rationales given an abusive sentence
- **Developed by:** Binny Mathew, Punyajoy Saha, Seid Muhie Yimam, Chris Biemann, Pawan Goyal, and Animesh Mukherjee
- **Model Type:** Text Classification
- **Language(s):** English
- **License:** Apache-2.0
- **Parent Model:** See the [BERT base uncased model](https://huggingface.co/bert-base-uncased) for more information about the BERT base model.
- **Resources for more information:**
- [Research Paper](https://arxiv.org/abs/2012.10289) Accepted at AAAI 2021.
- [GitHub Repo with datatsets and models](https://github.com/punyajoy/HateXplain)
## How to Get Started with the Model
**Details of usage**
Please use the **Model_Rational_Label** class inside [models.py](models.py) to load the models. The default prediction in this hosted inference API may be wrong due to the use of different class initialisations.
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
### from models.py
from models import *
tokenizer = AutoTokenizer.from_pretrained("Hate-speech-CNERG/bert-base-uncased-hatexplain-rationale-two")
model = Model_Rational_Label.from_pretrained("Hate-speech-CNERG/bert-base-uncased-hatexplain-rationale-two")
inputs = tokenizer("He is a great guy", return_tensors="pt")
prediction_logits, _ = model(input_ids=inputs['input_ids'], attention_mask=inputs['attention_mask'])
```
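Continuing the snippet above, the raw logits can be turned into class probabilities along the following lines (a sketch only: the two-class label order shown here is an assumption and should be checked against the model's `id2label` config):

```python
import torch

# Hypothetical label order; verify against model.config before relying on it.
labels = ["Normal", "Abusive"]

probs = torch.softmax(prediction_logits, dim=-1)[0]
for label, p in zip(labels, probs.tolist()):
    print(f"{label}: {p:.3f}")
```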
## Uses
#### Direct Use
This model can be used for Text Classification
#### Downstream Use
[More information needed]
#### Misuse and Out-of-scope Use
The model should not be used to intentionally create hostile or alienating environments for people. In addition, the model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.
## Risks, Limitations and Biases
**CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.**
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
The model authors also note in their HateXplain paper that they
> *have not considered any external context such as profile bio, user gender, history of posts etc., which might be helpful in the classification task. Also, in this work we have focused on the English language. It does not consider multilingual hate speech into account.*
#### Training Procedure
##### Preprocessing
The authors detail their preprocessing procedure in the [Github repository](https://github.com/hate-alert/HateXplain/tree/master/Preprocess)
## Evaluation
The model authors detail the hidden layer size and attention settings for the HateXplain fine-tuned models in the [associated paper](https://arxiv.org/pdf/2012.10289.pdf).
#### Results
The model authors, both in their paper and in the git repository, provide illustrative outputs of BERT-HateXplain in comparison to BERT and other HateXplain fine-tuned models.
## Citation Information
```bibtex
@article{mathew2020hatexplain,
title={HateXplain: A Benchmark Dataset for Explainable Hate Speech Detection},
author={Mathew, Binny and Saha, Punyajoy and Yimam, Seid Muhie and Biemann, Chris and Goyal, Pawan and Mukherjee, Animesh},
journal={arXiv preprint arXiv:2012.10289},
year={2020}
}
```
| {"language": "en", "license": "apache-2.0", "datasets": ["hatexplain"]} | Hate-speech-CNERG/bert-base-uncased-hatexplain-rationale-two | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"en",
"dataset:hatexplain",
"arxiv:2012.10289",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers | The model is used for classifying a text as **Hatespeech**, **Offensive**, or **Normal**. The model is trained using data from Gab and Twitter and *Human Rationales* were included as part of the training data to boost the performance.
The dataset and models are available here: https://github.com/punyajoy/HateXplain
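A minimal classification sketch using the standard transformers API is shown below (the three-way label set follows the description above; the exact index-to-label mapping is read from the model config rather than assumed):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "Hate-speech-CNERG/bert-base-uncased-hatexplain"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("I hate everyone from that group", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

probs = torch.softmax(logits, dim=-1)[0]
# id2label in the config maps each index to hatespeech / normal / offensive.
for idx, p in enumerate(probs.tolist()):
    print(model.config.id2label[idx], round(p, 3))
```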
**For more details about our paper**
Binny Mathew, Punyajoy Saha, Seid Muhie Yimam, Chris Biemann, Pawan Goyal, and Animesh Mukherjee. "[HateXplain: A Benchmark Dataset for Explainable Hate Speech Detection](https://arxiv.org/abs/2012.10289)". Accepted at AAAI 2021.
***Please cite our paper in any published work that uses any of these resources.***
~~~
@article{mathew2020hatexplain,
title={HateXplain: A Benchmark Dataset for Explainable Hate Speech Detection},
author={Mathew, Binny and Saha, Punyajoy and Yimam, Seid Muhie and Biemann, Chris and Goyal, Pawan and Mukherjee, Animesh},
journal={arXiv preprint arXiv:2012.10289},
year={2020}
}
~~~
| {"language": "en", "license": "apache-2.0", "datasets": ["hatexplain"]} | Hate-speech-CNERG/bert-base-uncased-hatexplain | null | [
"transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"en",
"dataset:hatexplain",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
This model is used for detecting **hatespeech** in the **Arabic language**. The mono in the name refers to the monolingual setting, where the model is trained using only Arabic-language data. It is fine-tuned on the multilingual BERT model.
The model is trained with different learning rates and the best validation score achieved is 0.877609 for a learning rate of 2e-5. Training code can be found at this [url](https://github.com/punyajoy/DE-LIMIT)
### For more details about our paper
Sai Saketh Aluru, Binny Mathew, Punyajoy Saha and Animesh Mukherjee. "[Deep Learning Models for Multilingual Hate Speech Detection](https://arxiv.org/abs/2004.06465)". Accepted at ECML-PKDD 2020.
***Please cite our paper in any published work that uses any of these resources.***
~~~
@article{aluru2020deep,
title={Deep Learning Models for Multilingual Hate Speech Detection},
author={Aluru, Sai Saket and Mathew, Binny and Saha, Punyajoy and Mukherjee, Animesh},
journal={arXiv preprint arXiv:2004.06465},
year={2020}
}
~~~
| {"language": "ar", "license": "apache-2.0"} | Hate-speech-CNERG/dehatebert-mono-arabic | null | [
"transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"ar",
"arxiv:2004.06465",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers | This model is used for detecting **hatespeech** in the **English language**. The mono in the name refers to the monolingual setting, where the model is trained using only English-language data. It is fine-tuned on the multilingual BERT model.
The model is trained with different learning rates and the best validation score achieved is 0.726030 for a learning rate of 2e-5. Training code can be found here https://github.com/punyajoy/DE-LIMIT
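For quick experimentation, a `text-classification` pipeline sketch like the following should work (the semantics of the returned label names are not spelled out in this card, so treat them as an assumption and check the model's `id2label` mapping, which distinguishes hate speech from non-hate speech):

```python
from transformers import pipeline

# Hedged sketch: label names come from the model config, not from this card.
classifier = pipeline(
    "text-classification",
    model="Hate-speech-CNERG/dehatebert-mono-english",
)
print(classifier("You people are the worst and should disappear"))
```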
### For more details about our paper
Sai Saketh Aluru, Binny Mathew, Punyajoy Saha and Animesh Mukherjee. "[Deep Learning Models for Multilingual Hate Speech Detection](https://arxiv.org/abs/2004.06465)". Accepted at ECML-PKDD 2020.
***Please cite our paper in any published work that uses any of these resources.***
~~~
@article{aluru2020deep,
title={Deep Learning Models for Multilingual Hate Speech Detection},
author={Aluru, Sai Saket and Mathew, Binny and Saha, Punyajoy and Mukherjee, Animesh},
journal={arXiv preprint arXiv:2004.06465},
year={2020}
}
~~~
| {"language": "en", "license": "apache-2.0"} | Hate-speech-CNERG/dehatebert-mono-english | null | [
"transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"en",
"arxiv:2004.06465",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
This model is used for detecting **hatespeech** in the **French language**. The mono in the name refers to the monolingual setting, where the model is trained using only French-language data. It is fine-tuned on the multilingual BERT model.
The model is trained with different learning rates and the best validation score achieved is 0.692094 for a learning rate of 3e-5. Training code can be found at this [url](https://github.com/punyajoy/DE-LIMIT)
### For more details about our paper
Sai Saketh Aluru, Binny Mathew, Punyajoy Saha and Animesh Mukherjee. "[Deep Learning Models for Multilingual Hate Speech Detection](https://arxiv.org/abs/2004.06465)". Accepted at ECML-PKDD 2020.
***Please cite our paper in any published work that uses any of these resources.***
~~~
@article{aluru2020deep,
title={Deep Learning Models for Multilingual Hate Speech Detection},
author={Aluru, Sai Saket and Mathew, Binny and Saha, Punyajoy and Mukherjee, Animesh},
journal={arXiv preprint arXiv:2004.06465},
year={2020}
}
~~~
| {"language": "fr", "license": "apache-2.0"} | Hate-speech-CNERG/dehatebert-mono-french | null | [
"transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"fr",
"arxiv:2004.06465",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
This model is used for detecting **hatespeech** in the **German language**. The mono in the name refers to the monolingual setting, where the model is trained using only German-language data. It is fine-tuned on the multilingual BERT model.
The model is trained with different learning rates and the best validation score achieved is 0.649794 for a learning rate of 3e-5. Training code can be found at this [url](https://github.com/punyajoy/DE-LIMIT)
### For more details about our paper
Sai Saketh Aluru, Binny Mathew, Punyajoy Saha and Animesh Mukherjee. "[Deep Learning Models for Multilingual Hate Speech Detection](https://arxiv.org/abs/2004.06465)". Accepted at ECML-PKDD 2020.
***Please cite our paper in any published work that uses any of these resources.***
~~~
@article{aluru2020deep,
title={Deep Learning Models for Multilingual Hate Speech Detection},
author={Aluru, Sai Saket and Mathew, Binny and Saha, Punyajoy and Mukherjee, Animesh},
journal={arXiv preprint arXiv:2004.06465},
year={2020}
}
~~~
| {"language": "de", "license": "apache-2.0"} | Hate-speech-CNERG/dehatebert-mono-german | null | [
"transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"de",
"arxiv:2004.06465",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers | This model is used for detecting **hatespeech** in the **Indonesian language**. The mono in the name refers to the monolingual setting, where the model is trained using only Indonesian-language data. It is fine-tuned on the multilingual BERT model.
The model is trained with different learning rates and the best validation score achieved is 0.844494 for a learning rate of 2e-5. Training code can be found at this [url](https://github.com/punyajoy/DE-LIMIT)
### For more details about our paper
Sai Saketh Aluru, Binny Mathew, Punyajoy Saha and Animesh Mukherjee. "[Deep Learning Models for Multilingual Hate Speech Detection](https://arxiv.org/abs/2004.06465)". Accepted at ECML-PKDD 2020.
***Please cite our paper in any published work that uses any of these resources.***
~~~
@article{aluru2020deep,
title={Deep Learning Models for Multilingual Hate Speech Detection},
author={Aluru, Sai Saket and Mathew, Binny and Saha, Punyajoy and Mukherjee, Animesh},
journal={arXiv preprint arXiv:2004.06465},
year={2020}
}
~~~
| {} | Hate-speech-CNERG/dehatebert-mono-indonesian | null | [
"transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"arxiv:2004.06465",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers | This model is used for detecting **hatespeech** in the **Italian language**. The mono in the name refers to the monolingual setting, where the model is trained using only Italian-language data. It is fine-tuned on the multilingual BERT model.
The model is trained with different learning rates and the best validation score achieved is 0.837288 for a learning rate of 3e-5. Training code can be found at this [url](https://github.com/punyajoy/DE-LIMIT)
### For more details about our paper
Sai Saketh Aluru, Binny Mathew, Punyajoy Saha and Animesh Mukherjee. "[Deep Learning Models for Multilingual Hate Speech Detection](https://arxiv.org/abs/2004.06465)". Accepted at ECML-PKDD 2020.
***Please cite our paper in any published work that uses any of these resources.***
~~~
@article{aluru2020deep,
title={Deep Learning Models for Multilingual Hate Speech Detection},
author={Aluru, Sai Saket and Mathew, Binny and Saha, Punyajoy and Mukherjee, Animesh},
journal={arXiv preprint arXiv:2004.06465},
year={2020}
}
~~~
| {"language": "it", "license": "apache-2.0"} | Hate-speech-CNERG/dehatebert-mono-italian | null | [
"transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"it",
"arxiv:2004.06465",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers | This model is used for detecting **hatespeech** in the **Polish language**. The mono in the name refers to the monolingual setting, where the model is trained using only Polish-language data. It is fine-tuned on the multilingual BERT model.
The model is trained with different learning rates and the best validation score achieved is 0.723254 for a learning rate of 2e-5. Training code can be found at this [url](https://github.com/punyajoy/DE-LIMIT)
### For more details about our paper
Sai Saketh Aluru, Binny Mathew, Punyajoy Saha and Animesh Mukherjee. "[Deep Learning Models for Multilingual Hate Speech Detection](https://arxiv.org/abs/2004.06465)". Accepted at ECML-PKDD 2020.
***Please cite our paper in any published work that uses any of these resources.***
~~~
@article{aluru2020deep,
title={Deep Learning Models for Multilingual Hate Speech Detection},
author={Aluru, Sai Saket and Mathew, Binny and Saha, Punyajoy and Mukherjee, Animesh},
journal={arXiv preprint arXiv:2004.06465},
year={2020}
}
~~~
| {"language": "pl", "license": "apache-2.0"} | Hate-speech-CNERG/dehatebert-mono-polish | null | [
"transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"pl",
"arxiv:2004.06465",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers | This model is used for detecting **hatespeech** in the **Portuguese language**. The mono in the name refers to the monolingual setting, where the model is trained using only Portuguese-language data. It is fine-tuned on the multilingual BERT model.
The model is trained with different learning rates and the best validation score achieved is 0.716119 for a learning rate of 3e-5. Training code can be found at this [url](https://github.com/punyajoy/DE-LIMIT)
### For more details about our paper
Sai Saketh Aluru, Binny Mathew, Punyajoy Saha and Animesh Mukherjee. "[Deep Learning Models for Multilingual Hate Speech Detection](https://arxiv.org/abs/2004.06465)". Accepted at ECML-PKDD 2020.
***Please cite our paper in any published work that uses any of these resources.***
~~~
@article{aluru2020deep,
title={Deep Learning Models for Multilingual Hate Speech Detection},
author={Aluru, Sai Saket and Mathew, Binny and Saha, Punyajoy and Mukherjee, Animesh},
journal={arXiv preprint arXiv:2004.06465},
year={2020}
}
~~~
| {"language": "pt", "license": "apache-2.0"} | Hate-speech-CNERG/dehatebert-mono-portugese | null | [
"transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"pt",
"arxiv:2004.06465",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers | This model is used for detecting **hatespeech** in the **Spanish language**. The mono in the name refers to the monolingual setting, where the model is trained using only Spanish-language data. It is fine-tuned on the multilingual BERT model.
The model is trained with different learning rates and the best validation score achieved is 0.740287 for a learning rate of 3e-5. Training code can be found at this [url](https://github.com/punyajoy/DE-LIMIT)
### For more details about our paper
Sai Saketh Aluru, Binny Mathew, Punyajoy Saha and Animesh Mukherjee. "[Deep Learning Models for Multilingual Hate Speech Detection](https://arxiv.org/abs/2004.06465)". Accepted at ECML-PKDD 2020.
***Please cite our paper in any published work that uses any of these resources.***
~~~
@article{aluru2020deep,
title={Deep Learning Models for Multilingual Hate Speech Detection},
author={Aluru, Sai Saket and Mathew, Binny and Saha, Punyajoy and Mukherjee, Animesh},
journal={arXiv preprint arXiv:2004.06465},
year={2020}
}
~~~
| {"language": "es", "license": "apache-2.0"} | Hate-speech-CNERG/dehatebert-mono-spanish | null | [
"transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"es",
"arxiv:2004.06465",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |