Dataset column schema (as shown by the viewer):

| column | dtype | values / lengths |
|:--------------|:--------------|:-----------------|
| pipeline_tag | stringclasses | 48 values |
| library_name | stringclasses | 205 values |
| text | stringlengths | 0 – 18.3M |
| metadata | stringlengths | 2 – 1.07B |
| id | stringlengths | 5 – 122 |
| last_modified | null | |
| tags | listlengths | 1 – 1.84k |
| sha | null | |
| created_at | stringlengths | 25 – 25 |
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-sst2 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.3651 - Accuracy: 0.9151 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.1902 | 1.0 | 4210 | 0.3102 | 0.9117 | | 0.1293 | 2.0 | 8420 | 0.3672 | 0.9048 | | 0.084 | 3.0 | 12630 | 0.3651 | 0.9151 | | 0.0682 | 4.0 | 16840 | 0.3971 | 0.9037 | | 0.0438 | 5.0 | 21050 | 0.4720 | 0.9117 | ### Framework versions - Transformers 4.9.1 - Pytorch 1.9.0+cu102 - Datasets 1.11.0 - Tokenizers 0.10.3
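The card above stops short of a usage example; a minimal sketch with the `transformers` pipeline API (label names depend on the exported config, so treat the output labels as indicative):

```python
from transformers import pipeline

# Load the fine-tuned SST-2 checkpoint referenced by this card.
classifier = pipeline(
    "text-classification",
    model="avneet/distilbert-base-uncased-finetuned-sst2",
)

# The Trainer often exports generic LABEL_0/LABEL_1 names unless id2label was set.
print(classifier("This movie was surprisingly good!"))
```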
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["accuracy"], "model_index": [{"name": "distilbert-base-uncased-finetuned-sst2", "results": [{"task": {"name": "Text Classification", "type": "text-classification"}, "dataset": {"name": "glue", "type": "glue", "args": "sst2"}, "metric": {"name": "Accuracy", "type": "accuracy", "value": 0.9151376146788991}}]}]}
avneet/distilbert-base-uncased-finetuned-sst2
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
--- tags: - conversational --- # Rick DialoGPT model
{}
avnish100/DialoGPT-small-rick
null
[ "transformers", "pytorch", "gpt2", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
## Model description This chatbot is the graduation project of Andrey Vorozhko, a student at UII (University of Artificial Intelligence). Training finished in March 2022. The chatbot is built on top of the [Kirili4ik/ruDialoGpt3-medium-finetuned-telegram](https://huggingface.co/Kirili4ik/ruDialoGpt3-medium-finetuned-telegram) model. It has since been further fine-tuned on 27,000 jokes (14 epochs, at 2-6 hours per epoch in Colab) and can follow the context of a conversation. The context has to be limited to the last few messages, however, because the more context there is, the slower the model runs, and the context snowballs as the conversation goes on. Inference lives in [spaces](https://huggingface.co/spaces/avorozhko/funbot): you can talk to the bot there. The context is limited to the last 10 messages. The bot does produce jokes, though for now more by accident than by design; still, it can keep a conversation going and even be mildly entertaining. Since this is text generation, the bot will always give different answers to the same phrase. A custom metric was also used to assess the model: the angular distance between the embeddings of y_train and the predictions. That is, we took the model's first embedding layer and ran the predictions and the labels through it to get word vectors; the word vectors were then summed to obtain aggregate vectors for the labels and the predictions. The smaller the angle between them, the better. The calculations use the cosine of this angle, which is convenient since cos 0 = 1: the closer the value is to 1, the better. The distribution of these values per epoch on the VALIDATION set (1,406 jokes) was: ``` {1: tensor(0.9357, device='cuda:0', grad_fn=<DivBackward0>), 2: tensor(0.9390, device='cuda:0', grad_fn=<DivBackward0>), 3: tensor(0.9417, device='cuda:0', grad_fn=<DivBackward0>), 4: tensor(0.9439, device='cuda:0', grad_fn=<DivBackward0>), 5: tensor(0.9470, device='cuda:0', grad_fn=<DivBackward0>), 6: tensor(0.9537, device='cuda:0', grad_fn=<DivBackward0>), 7: tensor(0.9568, device='cuda:0', grad_fn=<DivBackward0>), 8: tensor(0.9592, device='cuda:0', grad_fn=<DivBackward0>), 9: tensor(0.9610, device='cuda:0', grad_fn=<DivBackward0>), 10: tensor(0.9622, device='cuda:0', grad_fn=<DivBackward0>), 11: tensor(0.9628, device='cuda:0', grad_fn=<DivBackward0>), 12: tensor(0.9632, device='cuda:0', grad_fn=<DivBackward0>), 13: tensor(0.9630, device='cuda:0', grad_fn=<DivBackward0>), 14: tensor(0.9634, device='cuda:0', grad_fn=<DivBackward0>), 15: tensor(0.9634, device='cuda:0', grad_fn=<DivBackward0>)} ``` Epoch 14, with a score of 0.9634, was chosen for inference. Beyond that point the model appears to start overfitting. A rough sketch of this metric is given below.
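A rough sketch of the embedding-cosine metric described above, assuming a GPT-2-style model whose `get_input_embeddings()` returns the token embedding layer (illustrative only, not the author's code):

```python
import torch
import torch.nn.functional as F

def embedding_cosine(model, pred_ids: torch.Tensor, label_ids: torch.Tensor) -> torch.Tensor:
    """Cosine between the summed token-embedding vectors of a prediction and its label.

    Values closer to 1.0 mean the generated tokens land nearer the reference
    in embedding space, mirroring the per-epoch numbers quoted in the card.
    """
    emb = model.get_input_embeddings()   # first (token) embedding layer
    pred_vec = emb(pred_ids).sum(dim=0)  # sum the word vectors -> one vector per sequence
    label_vec = emb(label_ids).sum(dim=0)
    return F.cosine_similarity(pred_vec, label_vec, dim=0)
```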
{}
avorozhko/ruDialoGpt3-medium-finetuned-context
null
[ "transformers", "pytorch", "gpt2", "text-generation", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
avsk/distilbert-base-uncased-finetuned-squad
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
awacke1/wav2vec2-base-timit-demo-colab
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
awl/pegasus_cnn
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
{}
aws-ai/pairsupcon-bert-base-uncased
null
[ "transformers", "pytorch", "bert", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
{}
aws-ai/pairsupcon-bert-large-uncased
null
[ "transformers", "pytorch", "bert", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
keras
# [Deep Chimpact](https://www.drivendata.org/competitions/82/competition-wildlife-video-depth-estimation/page/390/) > Depth Estimation for Wildlife Conservation (1st place solution) <div align=center> <img src="https://user-images.githubusercontent.com/36858976/138281204-c3cbcb77-11ca-448b-a693-cb3cfa3c5181.png" width=800> ## Overview Healthy natural ecosystems have wide-ranging benefits from public health to the economy to agriculture. In order to protect the Earth's natural resources, conservationists need to be able to monitor species population sizes and population change. Camera traps are widely used in conservation research to capture images and videos of wildlife without human interference. Using statistical models for distance sampling, the frequency of animal sightings can be combined with the distance of each animal from the camera to estimate a species' full population size. However, getting distances from camera trap footage currently entails an extremely manual, time-intensive process. It takes a researcher more than **10 minutes** on average to label distance for every **1 minute** of video - that’s a lot of time when you have a million videos! This also creates a bottleneck for critical information that conservationists can use to **monitor wildlife populations**. > Your goal in this challenge is to use machine learning to automatically estimate the distance between a camera trap and an animal in a series of camera trap videos. You will be given a series of timestamps indicating when animals are visible in each camera trap video. To complete the challenge, you will predict the distance between the animal and the camera at each point in time. Along the way, keep an eye out for some sneaky leopards hunting at night, baby chimpanzees getting piggy-back rides, and diva elephants that can't get enough of the limelight. By contributing to this challenge, you can help advance cutting-edge methods for keeping these animal populations (and humans) healthy and safe!
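The card explains the task but not how to pull the weights; a hedged sketch using `huggingface_hub.from_pretrained_keras`, assuming the repo holds a Keras SavedModel pushed with the Keras mixin (the video-frame preprocessing pipeline is not documented here):

```python
from huggingface_hub import from_pretrained_keras

# Assumption: the repo was pushed via push_to_hub_keras, so the model can be restored directly.
model = from_pretrained_keras("awsaf49/deep-chimpact")
model.summary()  # input preprocessing for camera-trap frames is up to the caller
```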
{}
awsaf49/deep-chimpact
null
[ "keras", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
# My Awesome Model
{"tags": ["conversational"]}
awvik360/DialoGPT-medium-plemons
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
null
# My Awesome Model
{"tags": ["conversational"]}
awvik360/DialoGPT-medium-plemons2
null
[ "conversational", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
# My Awesome Model
{"tags": ["conversational"]}
awvik360/DialoGPT-small-plemons
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
awww263/OP-Model
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
axantroff/test-neuro
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
ayaanzaveri/mnist
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-indonesian-1.5G-finetuned-sentiment-analysis-smsa This model is a fine-tuned version of [cahya/bert-base-indonesian-1.5G](https://huggingface.co/cahya/bert-base-indonesian-1.5G) on the indonlu dataset. It achieves the following results on the evaluation set: - Loss: 0.3390 - Accuracy: 0.9373 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.2864 | 1.0 | 688 | 0.2154 | 0.9286 | | 0.1648 | 2.0 | 1376 | 0.2238 | 0.9357 | | 0.0759 | 3.0 | 2064 | 0.3351 | 0.9365 | | 0.044 | 4.0 | 2752 | 0.3390 | 0.9373 | | 0.0308 | 5.0 | 3440 | 0.4346 | 0.9365 | | 0.0113 | 6.0 | 4128 | 0.4708 | 0.9365 | | 0.006 | 7.0 | 4816 | 0.5533 | 0.9325 | | 0.0047 | 8.0 | 5504 | 0.5888 | 0.9310 | | 0.0001 | 9.0 | 6192 | 0.5961 | 0.9333 | | 0.0 | 10.0 | 6880 | 0.5992 | 0.9357 | ### Framework versions - Transformers 4.14.1 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
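As with most auto-generated cards, no inference snippet is included; a minimal sketch that loads the checkpoint explicitly (the label mapping comes from the checkpoint's `id2label`, which this card does not list):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "ayameRushia/bert-base-indonesian-1.5G-sentiment-analysis-smsa"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

# The example sentence is the widget text from the card's metadata.
inputs = tokenizer("Saya mengapresiasi usaha anda", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(dim=-1).item()
print(model.config.id2label.get(pred, pred))
```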
{"language": "id", "license": "mit", "tags": ["generated_from_trainer"], "datasets": ["indonlu"], "metrics": ["accuracy"], "widget": [{"text": "Saya mengapresiasi usaha anda"}], "model-index": [{"name": "bert-base-indonesian-1.5G-finetuned-sentiment-analysis-smsa", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "indonlu", "type": "indonlu", "args": "smsa"}, "metrics": [{"type": "accuracy", "value": 0.9373015873015873, "name": "Accuracy"}]}]}]}
ayameRushia/bert-base-indonesian-1.5G-sentiment-analysis-smsa
null
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "id", "dataset:indonlu", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
# Indonesian GPT-2-medium finetuned on Indonesian poems This is the [Indonesian gpt2-medium model](https://huggingface.co/flax-community/gpt2-medium-indonesian) fine-tuned on Indonesian poems. The dataset can be found [here](https://huggingface.co/datasets/id_puisi). All training was done in a Google Colab notebook (to be published soon). The dataset is split into two subsets with the details below: | split | count (examples) | percentage | | ---------- | ---------- | -------------- | | train | 7,358 | 80% | | validation | 1,890 | 20% | ### Evaluation results The model evaluation results after 10 epochs are as follows: | dataset | train/loss | eval/loss | eval perplexity | | ---------- | ---------- | -------------- | ---------- | | [id puisi](https://huggingface.co/datasets/id_puisi) | 3.104 | 3.384 | 29.4884 | The logs can be found on the [wandb page here](https://wandb.ai/ayamerushia/gpt-2_poem/runs/3jsu1orj/overview?workspace=user-ayamerushia)
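The card gives training and evaluation details but no generation snippet; a minimal sketch using the widget prompt from the card's metadata (sampling parameters are illustrative, not the author's settings):

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="ayameRushia/gpt2-medium-fine-tuning-indonesia-poem",
)

out = generator(
    "Wahai rembulan yang tertutup awan hujan",  # widget prompt from the card metadata
    max_length=64,
    do_sample=True,
    top_p=0.95,
)
print(out[0]["generated_text"])
```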
{"language": "id", "widget": [{"text": "Wahai rembulan yang tertutup awan hujan"}]}
ayameRushia/gpt2-medium-fine-tuning-indonesia-poem
null
[ "transformers", "pytorch", "gpt2", "text-generation", "id", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
# Indonesian GPT-2 finetuned on Indonesian poems This is the [Indonesian gpt2-small model](https://huggingface.co/flax-community/gpt2-small-indonesian) fine-tuned on Indonesian poems. The dataset can be found [here](https://huggingface.co/datasets/id_puisi). All training was done in a Google Colab notebook (to be published soon). The dataset is split into two subsets with the details below: | split | count (examples) | percentage | | ---------- | ---------- | -------------- | | train | 7,358 | 80% | | validation | 1,890 | 20% | ### Evaluation results The model evaluation results after 10 epochs are as follows: | dataset | train/loss | eval/loss | eval perplexity | | ---------- | ---------- | -------------- | ---------- | | [id puisi](https://huggingface.co/datasets/id_puisi) | 3.324700 | 3.502665 | 33.20 | The logs can be found on the [wandb page here](https://wandb.ai/ayamerushia/gpt-2_poem/runs/36ymudz9/overview?workspace=user-ayamerushia) or on tensorboard [here](https://huggingface.co/ayameRushia/gpt2-small-indonesia-fine-tuning-poem/tensorboard)
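The reported eval perplexity is simply the exponential of the eval loss; a quick arithmetic check against the tables in this card and the gpt2-medium card above (not taken from the author's notebooks):

```python
import math

print(math.exp(3.502665))  # ~33.20, matching this card's eval perplexity
print(math.exp(3.384))     # ~29.49, matching the gpt2-medium card's 29.4884
```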
{"language": "id", "widget": [{"text": "Wahai rembulan yang tertutup awan hujan"}]}
ayameRushia/gpt2-small-indonesia-fine-tuning-poem
null
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "id", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # indobert-base-uncased-finetuned-indonlu-smsa This model is a fine-tuned version of [indolem/indobert-base-uncased](https://huggingface.co/indolem/indobert-base-uncased) on the indonlu dataset. It achieves the following results on the evaluation set: - Loss: 0.2277 - Accuracy: 0.9302 - F1: 0.9066 - Precision: 0.8992 - Recall: 0.9147 ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1500 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:| | No log | 1.0 | 344 | 0.3831 | 0.8476 | 0.7715 | 0.7817 | 0.7627 | | 0.4167 | 2.0 | 688 | 0.2809 | 0.8905 | 0.8406 | 0.8699 | 0.8185 | | 0.2624 | 3.0 | 1032 | 0.2254 | 0.9230 | 0.8842 | 0.9004 | 0.8714 | | 0.2624 | 4.0 | 1376 | 0.2378 | 0.9238 | 0.8797 | 0.9180 | 0.8594 | | 0.1865 | 5.0 | 1720 | 0.2277 | 0.9302 | 0.9066 | 0.8992 | 0.9147 | | 0.1217 | 6.0 | 2064 | 0.2444 | 0.9262 | 0.8981 | 0.9013 | 0.8957 | | 0.1217 | 7.0 | 2408 | 0.2985 | 0.9286 | 0.8999 | 0.9035 | 0.8971 | | 0.0847 | 8.0 | 2752 | 0.3397 | 0.9278 | 0.8969 | 0.9090 | 0.8871 | | 0.0551 | 9.0 | 3096 | 0.3542 | 0.9270 | 0.8961 | 0.9010 | 0.8924 | | 0.0551 | 10.0 | 3440 | 0.3862 | 0.9222 | 0.8895 | 0.8970 | 0.8846 | ### Framework versions - Transformers 4.14.1 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
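A note on the headline metrics: per-class F1 is the harmonic mean 2·P·R/(P+R), while the aggregate F1 reported here is an average over per-class F1 scores, so it need not equal the harmonic mean of the aggregate precision and recall. A quick illustrative check:

```python
p, r = 0.8992, 0.9147  # aggregate precision and recall from the card
print(2 * p * r / (p + r))  # ~0.9069, close to but not exactly the reported F1 of 0.9066
```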
{"language": "id", "license": "mit", "tags": ["generated_from_trainer"], "datasets": ["indonlu"], "metrics": ["accuracy", "f1", "precision", "recall"], "widget": [{"text": "Entah mengapa saya merasakan ada sesuatu yang janggal di produk ini"}], "model-index": [{"name": "indobert-base-uncased-finetuned-indonlu-smsa", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "indonlu", "type": "indonlu", "args": "smsa"}, "metrics": [{"type": "accuracy", "value": 0.9301587301587302, "name": "Accuracy"}, {"type": "f1", "value": 0.9066105299178986, "name": "F1"}, {"type": "precision", "value": 0.8992078788375845, "name": "Precision"}, {"type": "recall", "value": 0.9147307323234121, "name": "Recall"}]}]}]}
ayameRushia/indobert-base-uncased-finetuned-indonlu-smsa
null
[ "transformers", "pytorch", "safetensors", "bert", "text-classification", "generated_from_trainer", "id", "dataset:indonlu", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-indonesian-1.5G-sentiment-analysis-smsa This model is a fine-tuned version of [cahya/roberta-base-indonesian-1.5G](https://huggingface.co/cahya/roberta-base-indonesian-1.5G) on the indonlu dataset. It achieves the following results on the evaluation set: - Loss: 0.4294 - Accuracy: 0.9262 ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1500 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6461 | 1.0 | 688 | 0.2620 | 0.9087 | | 0.2627 | 2.0 | 1376 | 0.2291 | 0.9151 | | 0.1784 | 3.0 | 2064 | 0.2891 | 0.9167 | | 0.1099 | 4.0 | 2752 | 0.3317 | 0.9230 | | 0.0857 | 5.0 | 3440 | 0.4294 | 0.9262 | | 0.0346 | 6.0 | 4128 | 0.4759 | 0.9246 | | 0.0221 | 7.0 | 4816 | 0.4946 | 0.9206 | | 0.006 | 8.0 | 5504 | 0.5823 | 0.9175 | | 0.0047 | 9.0 | 6192 | 0.5777 | 0.9159 | | 0.004 | 10.0 | 6880 | 0.5800 | 0.9175 | ### How to use this model in Transformers Library ```python from transformers import pipeline pipe = pipeline( "text-classification", model="ayameRushia/roberta-base-indonesian-1.5G-sentiment-analysis-smsa" ) pipe("Terima kasih atas bantuannya ya!") ``` ### Framework versions - Transformers 4.14.1 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
{"language": ["id"], "tags": ["generated_from_trainer"], "datasets": ["indonlp/indonlu"], "metrics": ["accuracy"], "widget": [{"text": "Entah mengapa saya merasakan ada sesuatu yang janggal di produk ini"}], "model-index": [{"name": "roberta-base-indonesian-1.5G-sentiment-analysis-smsa", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "indonlu", "type": "indonlu", "args": "smsa"}, "metrics": [{"type": "accuracy", "value": 0.9261904761904762, "name": "Accuracy"}]}]}]}
ayameRushia/roberta-base-indonesian-1.5G-sentiment-analysis-smsa
null
[ "transformers", "pytorch", "roberta", "text-classification", "generated_from_trainer", "id", "dataset:indonlp/indonlu", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-indonesian-sentiment-analysis-smsa This model is a fine-tuned version of [flax-community/indonesian-roberta-base](https://huggingface.co/flax-community/indonesian-roberta-base) on the indonlu dataset. It achieves the following results on the evaluation set: - Loss: 0.4252 - Accuracy: 0.9349 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.7582 | 1.0 | 688 | 0.3280 | 0.8786 | | 0.3225 | 2.0 | 1376 | 0.2398 | 0.9206 | | 0.2057 | 3.0 | 2064 | 0.2574 | 0.9230 | | 0.1642 | 4.0 | 2752 | 0.2820 | 0.9302 | | 0.1266 | 5.0 | 3440 | 0.3344 | 0.9317 | | 0.0608 | 6.0 | 4128 | 0.3543 | 0.9341 | | 0.058 | 7.0 | 4816 | 0.4252 | 0.9349 | | 0.0315 | 8.0 | 5504 | 0.4736 | 0.9310 | | 0.0166 | 9.0 | 6192 | 0.4649 | 0.9349 | | 0.0143 | 10.0 | 6880 | 0.4648 | 0.9341 | ### Framework versions - Transformers 4.14.1 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
{"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["indonlu"], "metrics": ["accuracy"], "model-index": [{"name": "roberta-base-indonesian-sentiment-analysis-smsa", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "indonlu", "type": "indonlu", "args": "smsa"}, "metrics": [{"type": "accuracy", "value": 0.9349206349206349, "name": "Accuracy"}]}]}]}
ayameRushia/roberta-base-indonesian-sentiment-analysis-smsa
null
[ "transformers", "pytorch", "roberta", "text-classification", "generated_from_trainer", "dataset:indonlu", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-ar This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.4819 - Wer: 0.4244 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 400 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 11.0435 | 0.67 | 400 | 4.3104 | 1.0 | | 3.4451 | 1.34 | 800 | 3.1566 | 1.0 | | 3.1399 | 2.01 | 1200 | 3.0532 | 0.9990 | | 2.8538 | 2.68 | 1600 | 1.6994 | 0.9238 | | 1.7195 | 3.35 | 2000 | 0.8867 | 0.6727 | | 1.326 | 4.02 | 2400 | 0.6603 | 0.5834 | | 1.1561 | 4.69 | 2800 | 0.5809 | 0.5479 | | 1.0764 | 5.36 | 3200 | 0.5943 | 0.5495 | | 1.0144 | 6.03 | 3600 | 0.5344 | 0.5251 | | 0.965 | 6.7 | 4000 | 0.4844 | 0.4936 | | 0.927 | 7.37 | 4400 | 0.5048 | 0.5019 | | 0.8985 | 8.04 | 4800 | 0.5809 | 0.5267 | | 0.8684 | 8.71 | 5200 | 0.4740 | 0.4753 | | 0.8581 | 9.38 | 5600 | 0.4813 | 0.4834 | | 0.8334 | 10.05 | 6000 | 0.4515 | 0.4545 | | 0.8134 | 10.72 | 6400 | 0.4370 | 0.4543 | | 0.8002 | 11.39 | 6800 | 0.4225 | 0.4384 | | 0.7884 | 12.06 | 7200 | 0.4593 | 0.4565 | | 0.7675 | 12.73 | 7600 | 0.4752 | 0.4680 | | 0.7607 | 13.4 | 8000 | 0.4950 | 0.4771 | | 0.7475 | 14.07 | 8400 | 0.4373 | 0.4391 | | 0.7397 | 14.74 | 8800 | 0.4506 | 0.4541 | | 0.7289 | 15.41 | 9200 | 0.4840 | 0.4691 | | 0.722 | 16.08 | 9600 | 0.4701 | 0.4571 | | 0.7067 | 16.75 | 10000 | 0.4561 | 0.4461 | | 0.7033 | 17.42 | 10400 | 0.4384 | 0.4347 | | 0.6915 | 18.09 | 10800 | 0.4424 | 0.4290 | | 0.6854 | 18.76 | 11200 | 0.4635 | 0.4360 | | 0.6813 | 19.43 | 11600 | 0.4280 | 0.4147 | | 0.6776 | 20.1 | 12000 | 0.4610 | 0.4344 | | 0.67 | 20.77 | 12400 | 0.4540 | 0.4367 | | 0.6653 | 21.44 | 12800 | 0.4509 | 0.4234 | | 0.6609 | 22.11 | 13200 | 0.4874 | 0.4444 | | 0.6541 | 22.78 | 13600 | 0.4542 | 0.4230 | | 0.6528 | 23.45 | 14000 | 0.4732 | 0.4373 | | 0.6463 | 24.12 | 14400 | 0.4483 | 0.4188 | | 0.6399 | 24.79 | 14800 | 0.4731 | 0.4341 | | 0.6353 | 25.46 | 15200 | 0.5031 | 0.4412 | | 0.6358 | 26.13 | 15600 | 0.4986 | 0.4397 | | 0.6317 | 26.8 | 16000 | 0.5000 | 0.4360 | | 0.6262 | 27.47 | 16400 | 0.4958 | 0.4318 | | 0.6317 | 28.14 | 16800 | 0.4738 | 0.4234 | | 0.6205 | 28.81 | 17200 | 0.4853 | 0.4262 | | 0.6205 | 29.48 | 17600 | 0.4819 | 0.4244 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.3 - Tokenizers 0.11.0
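The card lists training details only; a minimal transcription sketch with the `transformers` ASR pipeline, assuming 16 kHz mono input as in the XLS-R fine-tuning setup (the audio path is a placeholder):

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="ayameRushia/wav2vec2-large-xls-r-300m-ar",
)

# "sample_arabic_16khz.wav" is a placeholder; any 16 kHz mono recording works.
print(asr("sample_arabic_16khz.wav")["text"])
```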
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-large-xls-r-300m-ar", "results": []}]}
ayameRushia/wav2vec2-large-xls-r-300m-ar
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - EL dataset. It achieves the following results on the evaluation set: - Loss: 0.3218 - Wer: 0.3095 ## Training and evaluation data Evaluation is conducted in Notebook, you can see within the repo "notebook_evaluation_wav2vec2_el.ipynb" Test WER without LM wer = 31.1294 % cer = 7.9509 % Test WER using LM wer = 20.7340 % cer = 6.0466 % How to use eval.py ``` huggingface-cli login #login to huggingface for getting auth token to access the common voice v8 #running with LM !python eval.py --model_id ayameRushia/wav2vec2-large-xls-r-300m-el --dataset mozilla-foundation/common_voice_8_0 --config el --split test # running without LM !python eval.py --model_id ayameRushia/wav2vec2-large-xls-r-300m-el --dataset mozilla-foundation/common_voice_8_0 --config el --split test --greedy ``` ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 400 - num_epochs: 80.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 6.3683 | 8.77 | 500 | 3.1280 | 1.0 | | 1.9915 | 17.54 | 1000 | 0.6600 | 0.6444 | | 0.6565 | 26.32 | 1500 | 0.4208 | 0.4486 | | 0.4484 | 35.09 | 2000 | 0.3885 | 0.4006 | | 0.3573 | 43.86 | 2500 | 0.3548 | 0.3626 | | 0.3063 | 52.63 | 3000 | 0.3375 | 0.3430 | | 0.2751 | 61.4 | 3500 | 0.3359 | 0.3241 | | 0.2511 | 70.18 | 4000 | 0.3222 | 0.3108 | | 0.2361 | 78.95 | 4500 | 0.3205 | 0.3084 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.18.3 - Tokenizers 0.11.0
{"language": ["el"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "generated_from_trainer", "robust-speech-event", "hf-asr-leaderboard", "mozilla-foundation/common_voice_8_0", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "wav2vec2-large-xls-r-300m-el", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "el"}, "metrics": [{"type": "wer", "value": 20.9, "name": "Test WER using LM"}, {"type": "cer", "value": 6.0466, "name": "Test CER using LM"}]}]}]}
ayameRushia/wav2vec2-large-xls-r-300m-el
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "robust-speech-event", "hf-asr-leaderboard", "mozilla-foundation/common_voice_8_0", "el", "dataset:mozilla-foundation/common_voice_8_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-ia This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.1452 - Wer: 0.1253 ## Training Procedure Training was conducted in Google Colab; the training notebook is provided in the repo. ## Training and evaluation data The language model was built from the processed sentence texts in the train + validation splits of the dataset (Common Voice 8.0 for Interlingua). Evaluation is conducted in a notebook; see "notebook_evaluation_wav2vec2_ia.ipynb" in the repo. Test WER without LM wer = 20.1776 % cer = 4.7205 % Test WER using LM wer = 8.6074 % cer = 2.4147 % Evaluation using eval.py ``` huggingface-cli login #login to huggingface for getting auth token to access the common voice v8 #running with LM python eval.py --model_id ayameRushia/wav2vec2-large-xls-r-300m-ia --dataset mozilla-foundation/common_voice_8_0 --config ia --split test # running without LM python eval.py --model_id ayameRushia/wav2vec2-large-xls-r-300m-ia --dataset mozilla-foundation/common_voice_8_0 --config ia --split test --greedy ``` ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 16 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 400 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 7.432 | 1.87 | 400 | 2.9636 | 1.0 | | 2.6922 | 3.74 | 800 | 2.2111 | 0.9977 | | 1.2581 | 5.61 | 1200 | 0.4864 | 0.4028 | | 0.6232 | 7.48 | 1600 | 0.2807 | 0.2413 | | 0.4479 | 9.35 | 2000 | 0.2219 | 0.1885 | | 0.3654 | 11.21 | 2400 | 0.1886 | 0.1606 | | 0.323 | 13.08 | 2800 | 0.1716 | 0.1444 | | 0.2935 | 14.95 | 3200 | 0.1687 | 0.1443 | | 0.2707 | 16.82 | 3600 | 0.1632 | 0.1382 | | 0.2559 | 18.69 | 4000 | 0.1507 | 0.1337 | | 0.2433 | 20.56 | 4400 | 0.1572 | 0.1358 | | 0.2338 | 22.43 | 4800 | 0.1489 | 0.1305 | | 0.2258 | 24.3 | 5200 | 0.1485 | 0.1278 | | 0.2218 | 26.17 | 5600 | 0.1470 | 0.1272 | | 0.2169 | 28.04 | 6000 | 0.1470 | 0.1270 | | 0.2117 | 29.91 | 6400 | 0.1452 | 0.1253 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
{"language": ["ia"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "robust-speech-event", "mozilla-foundation/common_voice_8_0", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "wav2vec2-large-xls-r-300m-ia", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "ia"}, "metrics": [{"type": "wer", "value": 8.6074, "name": "Test WER using LM"}, {"type": "cer", "value": 2.4147, "name": "Test CER using LM"}]}]}]}
ayameRushia/wav2vec2-large-xls-r-300m-ia
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "robust-speech-event", "mozilla-foundation/common_voice_8_0", "ia", "dataset:mozilla-foundation/common_voice_8_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - ID dataset. It achieves the following results on the evaluation set: - Loss: 0.3975 - Wer: 0.2633 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 0.78 | 100 | 4.5645 | 1.0 | | No log | 1.55 | 200 | 2.9016 | 1.0 | | No log | 2.33 | 300 | 2.2666 | 1.0982 | | No log | 3.1 | 400 | 0.6079 | 0.6376 | | 3.2188 | 3.88 | 500 | 0.4985 | 0.5008 | | 3.2188 | 4.65 | 600 | 0.4477 | 0.4469 | | 3.2188 | 5.43 | 700 | 0.3953 | 0.3915 | | 3.2188 | 6.2 | 800 | 0.4319 | 0.3921 | | 3.2188 | 6.98 | 900 | 0.4171 | 0.3698 | | 0.2193 | 7.75 | 1000 | 0.3957 | 0.3600 | | 0.2193 | 8.53 | 1100 | 0.3730 | 0.3493 | | 0.2193 | 9.3 | 1200 | 0.3780 | 0.3348 | | 0.2193 | 10.08 | 1300 | 0.4133 | 0.3568 | | 0.2193 | 10.85 | 1400 | 0.3984 | 0.3193 | | 0.1129 | 11.63 | 1500 | 0.3845 | 0.3174 | | 0.1129 | 12.4 | 1600 | 0.3882 | 0.3162 | | 0.1129 | 13.18 | 1700 | 0.3982 | 0.3008 | | 0.1129 | 13.95 | 1800 | 0.3902 | 0.3198 | | 0.1129 | 14.73 | 1900 | 0.4082 | 0.3237 | | 0.0765 | 15.5 | 2000 | 0.3732 | 0.3126 | | 0.0765 | 16.28 | 2100 | 0.3893 | 0.3001 | | 0.0765 | 17.05 | 2200 | 0.4168 | 0.3083 | | 0.0765 | 17.83 | 2300 | 0.4193 | 0.3044 | | 0.0765 | 18.6 | 2400 | 0.4006 | 0.3013 | | 0.0588 | 19.38 | 2500 | 0.3836 | 0.2892 | | 0.0588 | 20.16 | 2600 | 0.3761 | 0.2903 | | 0.0588 | 20.93 | 2700 | 0.3895 | 0.2930 | | 0.0588 | 21.71 | 2800 | 0.3885 | 0.2791 | | 0.0588 | 22.48 | 2900 | 0.3902 | 0.2891 | | 0.0448 | 23.26 | 3000 | 0.4200 | 0.2849 | | 0.0448 | 24.03 | 3100 | 0.4013 | 0.2799 | | 0.0448 | 24.81 | 3200 | 0.4039 | 0.2731 | | 0.0448 | 25.58 | 3300 | 0.3970 | 0.2647 | | 0.0448 | 26.36 | 3400 | 0.4081 | 0.2690 | | 0.0351 | 27.13 | 3500 | 0.4090 | 0.2674 | | 0.0351 | 27.91 | 3600 | 0.3953 | 0.2663 | | 0.0351 | 28.68 | 3700 | 0.4044 | 0.2650 | | 0.0351 | 29.46 | 3800 | 0.3969 | 0.2646 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.17.1.dev0 - Tokenizers 0.11.0
{"language": ["id"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "robust-speech-event"], "datasets": ["common_voice"], "model-index": [{"name": "XLS-R-300M - Indonesia", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "sv-SE"}, "metrics": [{"type": "wer", "value": 38.098, "name": "Test WER"}, {"type": "cer", "value": 14.261, "name": "Test CER"}]}]}]}
ayameRushia/wav2vec2-large-xls-r-300m-id
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "robust-speech-event", "id", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-mn This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - MN dataset. It achieves the following results on the evaluation set: - Loss: 0.5502 - Wer: 0.4042 ## Training and evaluation data Evaluation is conducted in a notebook; see "notebook_evaluation_wav2vec2_mn.ipynb" in the repo. Test WER without LM wer = 58.2171 % cer = 16.0670 % Test WER using LM wer = 31.3919 % cer = 10.2565 % How to use eval.py ``` huggingface-cli login #login to huggingface for getting auth token to access the common voice v8 #running with LM python eval.py --model_id ayameRushia/wav2vec2-large-xls-r-300m-mn --dataset mozilla-foundation/common_voice_8_0 --config mn --split test # running without LM python eval.py --model_id ayameRushia/wav2vec2-large-xls-r-300m-mn --dataset mozilla-foundation/common_voice_8_0 --config mn --split test --greedy ``` ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 40.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 6.35 | 400 | 0.9380 | 0.7902 | | 3.2674 | 12.7 | 800 | 0.5794 | 0.5309 | | 0.7531 | 19.05 | 1200 | 0.5749 | 0.4815 | | 0.5382 | 25.4 | 1600 | 0.5530 | 0.4447 | | 0.4293 | 31.75 | 2000 | 0.5709 | 0.4237 | | 0.4293 | 38.1 | 2400 | 0.5476 | 0.4059 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.18.3 - Tokenizers 0.11.0
{"language": ["mn"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "robust-speech-event", "mozilla-foundation/common_voice_8_0", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "wav2vec2-large-xls-r-300m-mn", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "mn"}, "metrics": [{"type": "wer", "value": 31.3919, "name": "Test WER using LM"}, {"type": "cer", "value": 10.2565, "name": "Test CER using LM"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "mn"}, "metrics": [{"type": "wer", "value": 65.26, "name": "Test WER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "mn"}, "metrics": [{"type": "wer", "value": 63.09, "name": "Test WER"}]}]}]}
ayameRushia/wav2vec2-large-xls-r-300m-mn
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "robust-speech-event", "mozilla-foundation/common_voice_8_0", "mn", "dataset:mozilla-foundation/common_voice_8_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
automatic-speech-recognition
transformers
# Wav2Vec2-Large-XLSR-53-Indonesia Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Indonesian using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset. When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "id", split="test[:2%]") processor = Wav2Vec2Processor.from_pretrained("ayameRushia/wav2vec2-large-xlsr-indonesia-demo") model = Wav2Vec2ForCTC.from_pretrained("ayameRushia/wav2vec2-large-xlsr-indonesia-demo") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ``` ## Evaluation The model can be evaluated as follows on the Indonesian test data of Common Voice. ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re test_dataset = load_dataset("common_voice", "id", split="test") wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("ayameRushia/wav2vec2-large-xlsr-indonesia-demo") model = Wav2Vec2ForCTC.from_pretrained("ayameRushia/wav2vec2-large-xlsr-indonesia-demo") model.to("cuda") chars_to_ignore_regex = '[\\,\\?\\.\\!\\-\\;\\:\\"\\“]' resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() speech_array, sampling_rate = torchaudio.load(batch["path"]) resampler = torchaudio.transforms.Resample(sampling_rate, 16_000) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Run batched inference and decode the predictions def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` **Test Result**: WER = 20.072720 % ## Training Training was done using the Common Voice dataset.
{"language": "id", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "model-index": [{"name": "XLSR Wav2Vec2 Indonesia by Ayame Rushia", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice id", "type": "common_voice", "args": "id"}, "metrics": [{"type": "wer", "value": "???", "name": "Test WER"}]}]}]}
ayameRushia/wav2vec2-large-xlsr-indo-base
null
[ "transformers", "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "id", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
automatic-speech-recognition
transformers
# Wav2Vec2-Large-XLSR-53-Indonesia Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Indonesian using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset. When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "id", split="test[:2%]") processor = Wav2Vec2Processor.from_pretrained("ayameRushia/wav2vec2-large-xlsr-indonesia-demo") model = Wav2Vec2ForCTC.from_pretrained("ayameRushia/wav2vec2-large-xlsr-indonesia-demo") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ``` ## Evaluation The model can be evaluated as follows on the Indonesian test data of Common Voice. ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re test_dataset = load_dataset("common_voice", "id", split="test") wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("ayameRushia/wav2vec2-large-xlsr-indonesia-demo") model = Wav2Vec2ForCTC.from_pretrained("ayameRushia/wav2vec2-large-xlsr-indonesia-demo") model.to("cuda") chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]' resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() speech_array, sampling_rate = torchaudio.load(batch["path"]) resampler = torchaudio.transforms.Resample(sampling_rate, 16_000) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Run batched inference and decode the predictions def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` **Test Result**: WER = 19.830319 % ## Training Training was done using the Common Voice dataset.
{"language": "id", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "model-index": [{"name": "XLSR Wav2Vec2 Indonesia by Ayame Rushia", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice id", "type": "common_voice", "args": "id"}, "metrics": [{"type": "wer", "value": 19.830319, "name": "Test WER"}]}]}]}
ayameRushia/wav2vec2-large-xlsr-indonesia
null
[ "transformers", "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "id", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
# `false-positives-scancode-bert-base-uncased-L8-1` ## Intended Use This model is intended to be used for Sentence Classification which is used for results analysis in [`scancode-results-analyzer`](https://github.com/nexB/scancode-results-analyzer). `scancode-results-analyzer` helps detect faulty scans in [`scancode-toolkit`](https://github.com/nexB/scancode-results-analyzer) by using statistics and nlp modeling, among other tools, to make Scancode better. #### How to use Refer to the [quickstart](https://github.com/nexB/scancode-results-analyzer#quickstart---local-machine) section in the `scancode-results-analyzer` documentation for installing and getting started. - [Link to Code](https://github.com/nexB/scancode-results-analyzer/blob/master/src/results_analyze/nlp_models.py) Then in the `NLPModelsPredict` class, the function `predict_basic_false_positive` uses this classifier to predict sentences as either valid license tags or false positives. #### Limitations and bias As this model is a fine-tuned version of the [`bert-base-uncased`](https://huggingface.co/bert-base-uncased) model, it has the same biases, but as the task it is fine-tuned for is a very specific field (license tags vs. false positives) without those intended biases, it's safe to assume those don't apply at all here. ## Training and Fine-Tuning Data The BERT model was pretrained on BookCorpus, a dataset consisting of 11,038 unpublished books, and English Wikipedia (excluding lists, tables and headers). This `bert-base-uncased` model was then fine-tuned on Scancode rule texts, specifically trained in the context of sentence classification, where the two classes are - License Tags - False Positives of License Tags ## Training procedure For the fine-tuning procedure and training, refer to the `scancode-results-analyzer` code. - [Link to Code](https://github.com/nexB/scancode-results-analyzer/blob/master/src/results_analyze/nlp_models.py) In the `NLPModelsTrain` class, the function `prepare_input_data_false_positive` prepares the training data. In the `NLPModelsTrain` class, the function `train_basic_false_positive_classifier` fine-tunes this classifier. 1. Model - [BertBaseUncased](https://huggingface.co/bert-base-uncased) (Weights 0.5 GB) 2. Sentence Length - 8 3. Labels - 2 (False Positive/License Tag) 4. After 4-6 Epochs of Fine-Tuning with learning rate 2e-5 (6 secs each on an RTX 2060) Note: The classes aren't balanced. ## Eval results - Accuracy on the training data (90%): 0.99 (+- 0.005) - Accuracy on the validation data (10%): 0.96 (+- 0.015) The misclassified examples have lower confidence scores, so thresholding on confidence almost turns this into a perfect classifier, as the classification task is comparatively easy. Results are stable, in the sense that fine-tuning accuracy is reached easily every run; training for more epochs overfits (training loss keeps decreasing while validation loss increases), although the accuracies remain stable even when overfitting.
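The card defers inference to the scancode-results-analyzer code; outside that repo, a heavily hedged sketch of loading the checkpoint directly, assuming the TF weights carry the 2-label sequence-classification head the description implies (the repo's fill-mask tag leaves this ambiguous):

```python
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

name = "ayansinha/false-positives-scancode-bert-base-uncased-L8-1"
tokenizer = AutoTokenizer.from_pretrained(name)
# Assumption: the exported head is a 2-label classifier (license tag vs. false positive).
model = TFAutoModelForSequenceClassification.from_pretrained(name)

# Sentence length 8 matches the fine-tuning setup described in the card.
inputs = tokenizer("SPDX-License-Identifier: Apache-2.0", return_tensors="tf", max_length=8, truncation=True)
print(model(**inputs).logits)
```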
{"language": "en", "license": "apache-2.0", "tags": ["license", "sentence-classification", "scancode", "license-compliance"], "datasets": ["bookcorpus", "wikipedia", "scancode-rules"], "version": 1.0}
ayansinha/false-positives-scancode-bert-base-uncased-L8-1
null
[ "transformers", "tf", "bert", "fill-mask", "license", "sentence-classification", "scancode", "license-compliance", "en", "dataset:bookcorpus", "dataset:wikipedia", "dataset:scancode-rules", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
# `lic-class-scancode-bert-base-cased-L32-1` ## Intended Use This model is intended to be used for Sentence Classification which is used for results analysis in [`scancode-results-analyzer`](https://github.com/nexB/scancode-results-analyzer). `scancode-results-analyzer` helps detect faulty scans in [`scancode-toolkit`](https://github.com/nexB/scancode-results-analyzer) by using statistics and nlp modeling, among other tools, to make Scancode better. ## How to Use Refer to the [quickstart](https://github.com/nexB/scancode-results-analyzer#quickstart---local-machine) section in the `scancode-results-analyzer` documentation for installing and getting started. - [Link to Code](https://github.com/nexB/scancode-results-analyzer/blob/master/src/results_analyze/nlp_models.py) Then in the `NLPModelsPredict` class, the function `predict_basic_lic_class` uses this classifier to classify sentences into the four license rule classes. ## Limitations and Bias As this model is a fine-tuned version of the [`bert-base-cased`](https://huggingface.co/bert-base-cased) model, it has the same biases, but as the task it is fine-tuned for is a very specific task (license text/notice/tag/reference) without those intended biases, it's safe to assume those don't apply at all here. ## Training and Fine-Tuning Data The BERT model was pretrained on BookCorpus, a dataset consisting of 11,038 unpublished books, and English Wikipedia (excluding lists, tables and headers). This `bert-base-cased` model was then fine-tuned on Scancode Rule texts, specifically trained in the context of sentence classification, where the four classes are - License Text - License Notice - License Tag - License Reference ## Training Procedure For the fine-tuning procedure and training, refer to the `scancode-results-analyzer` code. - [Link to Code](https://github.com/nexB/scancode-results-analyzer/blob/master/src/results_analyze/nlp_models.py) In the `NLPModelsTrain` class, the function `prepare_input_data_false_positive` prepares the training data. In the `NLPModelsTrain` class, the function `train_basic_false_positive_classifier` fine-tunes this classifier. 1. Model - [BertBaseCased](https://huggingface.co/bert-base-cased) (Weights 0.5 GB) 2. Sentence Length - 32 3. Labels - 4 (License Text/Notice/Tag/Reference) 4. After 4 Epochs of Fine-Tuning with learning rate 2e-5 (60 secs each on an RTX 2060) Note: The classes aren't balanced. ## Eval Results - Accuracy on the training data (90%): 0.98 (+- 0.01) - Accuracy on the validation data (10%): 0.84 (+- 0.01) ## Further Work 1. Applying Splitting/Aggregation Strategies 2. Data Augmentation according to Validation Errors 3. Bigger/Better Suited Models
{"language": "en", "license": "apache-2.0", "tags": ["license", "sentence-classification", "scancode", "license-compliance"], "datasets": ["bookcorpus", "wikipedia", "scancode-rules"], "version": 1.0}
ayansinha/lic-class-scancode-bert-base-cased-L32-1
null
[ "transformers", "tf", "bert", "fill-mask", "license", "sentence-classification", "scancode", "license-compliance", "en", "dataset:bookcorpus", "dataset:wikipedia", "dataset:scancode-rules", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
# bert-base-cased trained on TREC 6-class task ## Model description A simple base BERT model trained on the "trec" dataset. ## Intended uses & limitations #### How to use ##### Transformers ```python # Load model and tokenizer from transformers import AutoModelForSequenceClassification, AutoTokenizer model_name = "aychang/bert-base-cased-trec-coarse" model = AutoModelForSequenceClassification.from_pretrained(model_name) tokenizer = AutoTokenizer.from_pretrained(model_name) # Use pipeline from transformers import pipeline nlp = pipeline("sentiment-analysis", model=model_name, tokenizer=model_name) results = nlp(["Where did the queen go?", "Why did the Queen hire 1000 ML Engineers?"]) ``` ##### AdaptNLP ```python from adaptnlp import EasySequenceClassifier model_name = "aychang/bert-base-cased-trec-coarse" texts = ["Where did the queen go?", "Why did the Queen hire 1000 ML Engineers?"] classifier = EasySequenceClassifier() results = classifier.tag_text(text=texts, model_name_or_path=model_name, mini_batch_size=2) ``` #### Limitations and bias This is a minimal language model trained on a benchmark dataset. ## Training data TREC https://huggingface.co/datasets/trec ## Training procedure Preprocessing, hardware used, hyperparameters... #### Hardware One V100 #### Hyperparameters and Training Args ```python from transformers import TrainingArguments training_args = TrainingArguments( output_dir='./models', num_train_epochs=2, per_device_train_batch_size=16, per_device_eval_batch_size=16, warmup_steps=500, weight_decay=0.01, evaluation_strategy="steps", logging_dir='./logs', save_steps=3000 ) ``` ## Eval results ``` {'epoch': 2.0, 'eval_accuracy': 0.974, 'eval_f1': array([0.98181818, 0.94444444, 1. , 0.99236641, 0.96995708, 0.98159509]), 'eval_loss': 0.138086199760437, 'eval_precision': array([0.98540146, 0.98837209, 1. , 0.98484848, 0.94166667, 0.97560976]), 'eval_recall': array([0.97826087, 0.90425532, 1. , 1. , 1. , 0.98765432]), 'eval_runtime': 1.6132, 'eval_samples_per_second': 309.943} ```
{"language": ["en"], "license": "mit", "tags": ["text-classification"], "datasets": ["trec"], "model-index": [{"name": "aychang/bert-base-cased-trec-coarse", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "trec", "type": "trec", "config": "default", "split": "test"}, "metrics": [{"type": "accuracy", "value": 0.974, "name": "Accuracy", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMTUwZTU1ZGU5YTRiMzNhNmQyMjNlY2M5YjAwN2RlMmYxODI2MjFkY2Q3NWFjZDg3Zjg5ZDk1Y2I1MTUxYjFhMCIsInZlcnNpb24iOjF9.GJkxJOFhsO4UaoHpHH1136Qj_fu9UQ9o3DThtT46hvMduswkgobl9iz6ICYQ7IdYKFbh3zRTlsZzjnAlzGqdBA"}, {"type": "precision", "value": 0.9793164100816639, "name": "Precision Macro", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMTMxMjI3NWZhOGZkODJmYzkxYzdhZWIwMTBkZTg4YWZiNjcwNTVmM2RjYmQ3ZmNhZjM2MWQzYTUzNzFlMjQzOCIsInZlcnNpb24iOjF9.n45s1_gW040u5f2y-zfVx_5XU-J97dcuWlmaIZsJsCetcHtrjsbHut2gAcPxErl8UPTXSq1XDg5WWug4FPM8CQ"}, {"type": "precision", "value": 0.974, "name": "Precision Micro", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNTY5ZTZiNmYzZDQzYWZiZDdlNDllZWQ4NTVjZWZlYWJkZDgyNGNhZjAzOTZjZDc0NDUwMTE3ODVlMjFjNTIxZCIsInZlcnNpb24iOjF9.4lR7MgvxxTblEV4LZGbko-ylIeFjcjNM5P21iYH6vkNkjItIfiXmKbL55_Zeab4oGJ5ytWz0rIdlpNnmmV29Cw"}, {"type": "precision", "value": 0.9746805065928548, "name": "Precision Weighted", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZDEzYmZmZDIyNDFmNzJmODQ2ODdhYTUyYzQyZjEzZTdhMjg3MTllOGFkNGRlMDFhYzI4ZGE5OTExNjk1ZTI5OSIsInZlcnNpb24iOjF9.Ti5gL3Tk9hCpriIUhB8ltdKRibSilvRZOxAlLCgAkrhg0dXGE5f4n8almCAjbRJEaPW6H6581PhuUfjgMqceBw"}, {"type": "recall", "value": 0.9783617516169679, "name": "Recall Macro", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNWUwMGUwYmY3MWQwOTcwYjI2Yjc3Yzc1YWQ1YjU2ODY3MzAyMDdkNmM3MmFhZmMxZWFhMTUxNzZlNzViMDA0ZiIsInZlcnNpb24iOjF9.IWhPl9xS5pqEaFHKsBZj6JRtJRpQZQqJhQYW6zmtPi2F3speRsKc0iksfHkmPjm678v-wKUJ4zyGfRs-63HmBg"}, {"type": "recall", "value": 0.974, "name": "Recall Micro", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNjlhMDY0MmI2NzBiMWY5NTcwYjZlYzE5ODg0ODk1ZTBjZDI4YmZiY2RmZWVlZGUxYzk2MDQ4NjRkMTQ4ZTEzZiIsInZlcnNpb24iOjF9.g5p5b0BqyZxb7Hk9DayRndhs5F0r44h8TXMJDaP6IoFdYzlBfEcZv7UkCu6s6laz9-F-hhZHUZii2ljtYasVAA"}, {"type": "recall", "value": 0.974, "name": "Recall Weighted", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYjJjNTE2ZWFjMGYyZGUzOWI3MDRhM2I2MTRjZGNkOWZkZDJhNzQ4OTYwOTQ2NDY5OGNjZTZhOWU2MzlhNTY5YyIsInZlcnNpb24iOjF9.JnRFkZ-v-yRhCf6di7ONcy_8Tv0rNXQir1TVw-cU9fNY1c4vKRmGaKmLGeR7TxpmKzEQtikb6mFwRwhIAhl8AA"}, {"type": "f1", "value": 0.9783635353409951, "name": "F1 Macro", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYjM2NDY3MmUyMmEyZjg5MWZhNjllOGRlNWVkYzgyYmM5ZDBmMDdhYmY5NDAxZmYwMjA0YTkzNTI2MjU0NTRlZiIsInZlcnNpb24iOjF9.HlbHjJa-bpYPjujWODpvfLVMtCnNQMDBCYpLGokfBoXibZGKfIzXcgNdXLdJ-DkmMUriX3wVZtGcRvA2ErUeDw"}, {"type": "f1", "value": 0.974, "name": "F1 Micro", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYjMxNDE4MTBmYzU2MTllMjlhNTcwYWJhMzRkNTE2ZGFiNmQ0ZTEyOWJhMmU2ZDliYTIzNDExYTM5MTAxYjcxNSIsInZlcnNpb24iOjF9.B7G9Gs74MosZPQ16QH2k-zrmlE8KCtIFu3BcrgObYiuqOz1aFURS3IPoOynVFLp1jnJtgQAmQRY_GDumSS-oDg"}, {"type": "f1", "value": 0.97377371266232, "name": "F1 Weighted", "verified": true, "verifyToken": 
"eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZmEyNjRlYmE5M2U1OWY0OGY2YjQyN2E0NmQxNjY0NTY3N2JiZmMwOWQ1ZTMzZDcwNTdjNWYwNTRiNTljNjMxMiIsInZlcnNpb24iOjF9.VryHh8G_ZvoiSm1SZRMw4kheGWuI3rQ6GUVqm2uf-kkaSU20rYMW20-VKCtwayLcrIHJ92to6YvvW7yI0Le5DA"}, {"type": "loss", "value": 0.13812002539634705, "name": "loss", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNjk4MDQ5NGRiNTExYmE3NGU1ZmQ1YjUzMTQ4NzUwNWViYzFiODEzMjc2MDA2MzYyOGNjNjYxYzliNDM4Y2U0ZSIsInZlcnNpb24iOjF9.u68ogPOH6-_pb6ZVulzMVfHIfFlLwBeDp8H4iqgfBadjwj2h-aO0jzc4umWFWtzWespsZvnlDjklbhhgrd1vCQ"}]}]}]}
aychang/bert-base-cased-trec-coarse
null
[ "transformers", "pytorch", "jax", "bert", "text-classification", "en", "dataset:trec", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
question-answering
null
# TorchScript model of bert-large-cased-whole-word-masking-finetuned-squad ## Model description A serialized torchscript model of bert-large-cased-whole-word-masking-finetuned-squad with a config.pbtxt for deployment using NVIDIA Triton Inference Server.
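## Local Usage Sketch (unofficial)

The card covers Triton deployment only; for a quick local sanity check, a TorchScript artifact of this kind can usually be loaded with `torch.jit.load`. This is a minimal sketch under stated assumptions: the file name `model.pt`, and the traced forward taking `(input_ids, attention_mask, token_type_ids)` and returning `(start_logits, end_logits)`, are assumptions about how the model was traced, not facts from this repository.

```python
import torch
from transformers import AutoTokenizer

traced = torch.jit.load("model.pt", map_location="cpu")  # assumed artifact name
traced.eval()

tokenizer = AutoTokenizer.from_pretrained("bert-large-cased-whole-word-masking-finetuned-squad")
question = "Who wrote Hamlet?"
context = "Hamlet is a tragedy written by William Shakespeare."
enc = tokenizer(question, context, return_tensors="pt")

with torch.no_grad():
    start_logits, end_logits = traced(enc["input_ids"], enc["attention_mask"], enc["token_type_ids"])

start, end = int(start_logits.argmax()), int(end_logits.argmax())
print(tokenizer.decode(enc["input_ids"][0][start:end + 1]))
```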
{"language": ["en"], "license": "mit", "tags": ["question-answering", "torchscript", "FastNN"], "datasets": ["squad"]}
aychang/bert-large-cased-whole-word-masking-finetuned-squad
null
[ "question-answering", "torchscript", "FastNN", "en", "dataset:squad", "license:mit", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
# TREC 6-class Task: distilbert-base-cased

## Model description

A simple base distilBERT model trained on the "trec" dataset.

## Intended uses & limitations

#### How to use

##### Transformers

```python
# Load model and tokenizer
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "aychang/distilbert-base-cased-trec-coarse"
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Use pipeline
from transformers import pipeline

nlp = pipeline("text-classification", model=model_name, tokenizer=model_name)

results = nlp(["Where did the queen go?", "Why did the Queen hire 1000 ML Engineers?"])
```

##### AdaptNLP

```python
from adaptnlp import EasySequenceClassifier

model_name = "aychang/distilbert-base-cased-trec-coarse"
texts = ["Where did the queen go?", "Why did the Queen hire 1000 ML Engineers?"]

classifier = EasySequenceClassifier()

results = classifier.tag_text(text=texts, model_name_or_path=model_name, mini_batch_size=2)
```

#### Limitations and bias

This is a minimal language model trained on a benchmark dataset.

## Training data

TREC https://huggingface.co/datasets/trec

## Training procedure

Preprocessing, hardware used, hyperparameters...

#### Hardware

One V100

#### Hyperparameters and Training Args

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir='./models',
    overwrite_output_dir=False,
    num_train_epochs=2,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    warmup_steps=500,
    weight_decay=0.01,
    evaluation_strategy="steps",
    logging_dir='./logs',
    fp16=False,
    eval_steps=500,
    save_steps=300000
)
```

## Eval results

```
{'epoch': 2.0,
 'eval_accuracy': 0.97,
 'eval_f1': array([0.98220641, 0.91620112, 1., 0.97709924, 0.98678414, 0.97560976]),
 'eval_loss': 0.14275787770748138,
 'eval_precision': array([0.96503497, 0.96470588, 1., 0.96969697, 0.98245614, 0.96385542]),
 'eval_recall': array([1., 0.87234043, 1., 0.98461538, 0.99115044, 0.98765432]),
 'eval_runtime': 0.9731,
 'eval_samples_per_second': 513.798}
```
{"language": ["en"], "license": "mit", "tags": ["text-classification"], "datasets": ["trec"], "model-index": [{"name": "aychang/distilbert-base-cased-trec-coarse", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "trec", "type": "trec", "config": "default", "split": "test"}, "metrics": [{"type": "accuracy", "value": 0.97, "name": "Accuracy", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZGNmZTQ1Mjk3YTQ0NTdiZmY2NGM2NDM2Yzc2OTI4NGNiZDg4MmViN2I0ZGZiYWJlMTg1ZDU0MTc2ZTg1NjcwZiIsInZlcnNpb24iOjF9.4x_Ze9S5MbAeIHZ4p1EFmWev8RLkAIYWKqouAzYOxTNqdfFN0HnqULiM19EMP42v658vl_fR3-Ig0xG45DioCA"}, {"type": "precision", "value": 0.9742915631870833, "name": "Precision Macro", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMjA2MWVjMDc3MDYyY2M3NzY4NGNhY2JlNzJjMGQzZDUzZjE3ZWI1MjVmMzc4ODM2ZTQ4YmRhOTVkZDU0MzJiNiIsInZlcnNpb24iOjF9.EfmXJ6w5_7dK6ys03hpADP9h_sWuPAHgxpltUtCkJP4Ys_Gh8Ak4pGS149zt5AdP_zkvsWlXwAvx5BDMEoB2AA"}, {"type": "precision", "value": 0.97, "name": "Precision Micro", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNDVjOGFjM2RkMDMxZTFiMzE1ZDM4OTRjMzkwOWE2NTJmMmUwMDdiZDg5ZjExYmFmZjg2Y2Y5NzcxZWVkODkwZSIsInZlcnNpb24iOjF9.BtO7DqJsUhSXE-_tJZJOPPd421VmZ3KR9-KkrhJkLNenoV2Xd6Pu6i5y6HZQhFB-9WfEhU9cCsIPQ1ioZ7dyDA"}, {"type": "precision", "value": 0.9699546283251607, "name": "Precision Weighted", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMGQ0Mzc2MTE2YjkwNGY1MDEzNWQwYmNlZDMzZjBmNWM0ODExYjM1OTQyZGJkNjI2OTA5MDczZjFmOGM5MmMzMyIsInZlcnNpb24iOjF9.fGi2qNpOjWd1ci3p_E1p80nOqabiKiQqpQIxtk5aWxe_Nzqh3XiOCBF8vswCRvX8qTKdCc2ZEJ4s8dZMeltfCA"}, {"type": "recall", "value": 0.972626762268805, "name": "Recall Macro", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMjQwMWZiYjIyMGVhN2M1ZDE5M2EzZmQ1ODRlYzE0MzJhZmU3ZTM1MmIyNTg5ZjBlMDcyMmQ0NmYzZjFmMmM4NSIsInZlcnNpb24iOjF9.SYDxsRw0xoQuQhei0YBdUbBxG891gqLafVFLdPMCJtQIktqCTrPW0sMKtis7GA-FEbNQVu8lp92znvlryNiFCw"}, {"type": "recall", "value": 0.97, "name": "Recall Micro", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMjQ0MjczYjFhZDdiMjdkMWVlZTAzYWU0ODVhNjkxN2I1N2Y1Y2IyOTNlYWQxM2UxODIyNDZhZDM3MWIwMTgzZCIsInZlcnNpb24iOjF9.C5cfDTz_H4Y7nEO4Eq_XFy92CSbo3IBuL5n8wBKkTuB6hSgctTHOdOJzV8gWyMJ9gRcNqxp_yVU4BEB_I_0KAA"}, {"type": "recall", "value": 0.97, "name": "Recall Weighted", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNDZmYWM3OWExZWI1ZjRiZjczYWQwOWI5NWQzNDNkODcyMjBhMmVkYjY0MGZjYzlhNWQ0Y2MyMjc3OWEyZjY4NCIsInZlcnNpb24iOjF9.65WM5ihNfbKOCNZ6apX7iVAC2Ge_cwz9Xwa5oJHFq3Ci97eBFqK-qtADdB_SFRcSQUoNodaBeIhNfe0hVddxCA"}, {"type": "f1", "value": 0.9729834427867218, "name": "F1 Macro", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYWQyZGZmYjU4NjE4M2YzMTUxOWVkYjU0YTFmYzE3MmQ2NjhmNDY1MGRmNGQ1MWZjYjM1Mzg5Y2RmNTk5YmZiMSIsInZlcnNpb24iOjF9.WIF-fmV0SZ6-lcg3Rz6TjbVl7nLvy_ftDi8PPhDIP1V61jgR1AcjLFeEgeZLxSFMdmU9yqG2DWYubF0luK0jCg"}, {"type": "f1", "value": 0.97, "name": "F1 Micro", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMDM0NDY0YzI2ZTBjYWVmZmVkOTI4ODkzM2RhNWM2ZjkwYTU3N2FjNjA4NjUwYWVjODNhMGEwMzdhYmE2YmIwYyIsInZlcnNpb24iOjF9.sihEhcsOeg8dvpuGgC-KCp1PsRNyguAif2uTBv5ELtRnM5KmMaHzRqpdpdc88Dj_DeuY6Y6qPQJt_dGk2q1rDQ"}, {"type": "f1", "value": 0.9694196751375908, "name": "F1 Weighted", "verified": true, "verifyToken": 
"eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMTQ5ZjdiM2NiNDNkZTY5ZjNjNWUzZmI1MzgwMjhhNDEzMTEzZjFiNDhmZDllYmI0NjIwYjY0ZjcxM2M0ODE3NSIsInZlcnNpb24iOjF9.x4oR_PL0ALHYl-s4S7cPNPm4asSX3s3h30m-TKe7wpyZs0x6jwOqF-Tb1kgd4IMLl23pzsezmh72e_PmBFpRCg"}, {"type": "loss", "value": 0.14272506535053253, "name": "loss", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiODU3NGFiMzIxYWI4NzYxMzUxZGE5ZTZkYTlkN2U5MTI1NzA5NTBiNGM3Y2Q5YmVmZjU0MmU5MjJlZThkZTllMCIsInZlcnNpb24iOjF9.3QeWbECpJ0MHV5gC0_ES6PpwplLsCHPKuToErB1MSG69xNWVyMjKu1-1YEWZOU6dGfwKGh_HvwucY5kC9qwWBQ"}]}]}]}
aychang/distilbert-base-cased-trec-coarse
null
[ "transformers", "pytorch", "distilbert", "text-classification", "en", "dataset:trec", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
question-answering
null
# TorchScript model of distilbert-squad ## Model description A serialized torchscript model of distilbert-squad with a config.pbtxt for deployment using NVIDIA Triton Inference Server.
{"language": ["en"], "license": "mit", "tags": ["question-answering", "torchscript", "FastNN"], "datasets": ["squad"]}
aychang/distilbert-squad
null
[ "question-answering", "torchscript", "FastNN", "en", "dataset:squad", "license:mit", "region:us" ]
null
2022-03-02T23:29:05+00:00
object-detection
null
# TorchScript model of faster-rcnn ## Model description A serialized torchscript model of [faster-rcnn](https://pytorch.org/vision/stable/models.html#faster-r-cnn) with a config.pbtxt for deployment using NVIDIA Triton Inference Server.
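## Local Usage Sketch (unofficial)

As with the other TorchScript artifacts here, the card only mentions Triton; a local smoke test might look like the sketch below. The artifact name, the input format (a list of normalized 3xHxW tensors), and the shape of the output are assumptions based on how torchvision detection models are usually scripted — check the exported model before relying on them.

```python
import torch
from PIL import Image
from torchvision import transforms

model = torch.jit.load("model.pt", map_location="cpu")  # assumed artifact name
model.eval()

img = transforms.ToTensor()(Image.open("example.jpg").convert("RGB"))

with torch.no_grad():
    out = model([img])

# Scripted torchvision detection models typically return a (losses, detections) tuple;
# eager-style exports may return the detections list directly.
detections = out[1] if isinstance(out, tuple) else out
print(detections[0]["boxes"].shape, detections[0]["scores"][:5])
```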
{"language": ["en"], "license": "mit", "tags": ["object-detection", "torchscript", "FastNN"], "datasets": ["coco"]}
aychang/fasterrcnn-resnet50-cpu
null
[ "object-detection", "torchscript", "FastNN", "en", "dataset:coco", "license:mit", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
# IMDB Sentiment Task: roberta-base

## Model description

A simple base roBERTa model trained on the "imdb" dataset.

## Intended uses & limitations

#### How to use

##### Transformers

```python
# Load model and tokenizer
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "aychang/roberta-base-imdb"
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Use pipeline
from transformers import pipeline

nlp = pipeline("sentiment-analysis", model=model_name, tokenizer=model_name)

results = nlp(["I didn't really like it because it was so terrible.", "I love how easy it is to watch and get good results."])
```

##### AdaptNLP

```python
from adaptnlp import EasySequenceClassifier

model_name = "aychang/roberta-base-imdb"
texts = ["I didn't really like it because it was so terrible.", "I love how easy it is to watch and get good results."]

classifier = EasySequenceClassifier()

results = classifier.tag_text(text=texts, model_name_or_path=model_name, mini_batch_size=2)
```

#### Limitations and bias

This is a minimal language model trained on a benchmark dataset.

## Training data

IMDB https://huggingface.co/datasets/imdb

## Training procedure

#### Hardware

One V100

#### Hyperparameters and Training Args

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir='./models',
    overwrite_output_dir=False,
    num_train_epochs=2,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    warmup_steps=500,
    weight_decay=0.01,
    evaluation_strategy="steps",
    logging_dir='./logs',
    fp16=False,
    eval_steps=800,
    save_steps=300000
)
```

## Eval results

```
{'epoch': 2.0,
 'eval_accuracy': 0.94668,
 'eval_f1': array([0.94603457, 0.94731017]),
 'eval_loss': 0.2578844428062439,
 'eval_precision': array([0.95762642, 0.93624502]),
 'eval_recall': array([0.93472, 0.95864]),
 'eval_runtime': 244.7522,
 'eval_samples_per_second': 102.144}
```
{"language": ["en"], "license": "mit", "tags": ["text-classification"], "datasets": ["imdb"]}
aychang/roberta-base-imdb
null
[ "transformers", "pytorch", "jax", "roberta", "text-classification", "en", "dataset:imdb", "license:mit", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
# My Awesome Model
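The card does not include a usage example; since the tags mark this as a conversational DialoGPT-style model, the standard DialoGPT chat loop below is a reasonable starting point (a sketch only — the multi-turn behaviour is assumed from the tags, not documented here).

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "aydin/DialoGPT-medium-michael"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

chat_history_ids = None
for step in range(3):
    # append the user's utterance (terminated by EOS) to the running history
    new_ids = tokenizer.encode(input(">> User: ") + tokenizer.eos_token, return_tensors="pt")
    bot_input_ids = new_ids if chat_history_ids is None else torch.cat([chat_history_ids, new_ids], dim=-1)
    chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)
    print("Bot:", tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True))
```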
{"tags": ["conversational"]}
aydin/DialoGPT-medium-michael
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
question-answering
transformers
{}
aymanm419/araElectra-SQUAD-ARCD
null
[ "transformers", "pytorch", "electra", "question-answering", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
aymen/model_test
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilgpt2-imdb This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the [imdb](https://www.kaggle.com/lakshmi25npathi/imdb-dataset-of-50k-movie-reviews) dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2 - Datasets 1.18.3 - Tokenizers 0.11.0
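No usage snippet is included; since this is a causal LM fine-tuned on IMDB reviews, a plain text-generation pipeline should exercise it (the prompt and sampling settings below are illustrative only, not taken from the training setup):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="aypan17/distilgpt2-imdb")
print(generator("This movie was", max_length=60, do_sample=True, num_return_sequences=2))
```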
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "distilgpt2-imdb", "results": []}]}
aypan17/distilgpt2-imdb
null
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-med-imdb This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2 - Datasets 1.18.3 - Tokenizers 0.11.0
{"tags": ["generated_from_trainer"], "model-index": [{"name": "gpt2-med-imdb", "results": []}]}
aypan17/gpt2-med-imdb
null
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
Training arguments: lr=2e-5, train_batch_size=16, eval_batch_size=16, num_train_epochs=5, weight_decay=0.01
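For reference, these settings map onto a `TrainingArguments` configuration roughly like the sketch below; only the five values listed above come from this card, and the output directory is a placeholder.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./models",  # placeholder, not specified in the card
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=5,
    weight_decay=0.01,
)
```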
{"license": "mit"}
aypan17/roberta-base-imdb
null
[ "transformers", "pytorch", "roberta", "text-classification", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
# RudeRick discord bot
{"tags": ["conversational"]}
ayush19/rick-sanchez
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
ayushkshah/dummy-model
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
ayushkshah/dummy
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
ayushsubedi/v0_drg
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
azazbutt7/image_captioning
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
azhard229/Fastspeech2
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
azhard229/Hifigan
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mbert-finetuned-azerbaijani-ner This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the wikiann dataset. It achieves the following results on the evaluation set: - Loss: 0.1385 - Precision: 0.8899 - Recall: 0.9154 - F1: 0.9025 - Accuracy: 0.9669 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.2928 | 1.0 | 625 | 0.1415 | 0.8584 | 0.8918 | 0.8748 | 0.9595 | | 0.1254 | 2.0 | 1250 | 0.1335 | 0.8875 | 0.9119 | 0.8996 | 0.9637 | | 0.077 | 3.0 | 1875 | 0.1385 | 0.8899 | 0.9154 | 0.9025 | 0.9669 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.6
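The card reports metrics but no inference example; a token-classification pipeline is the usual way to query a WikiANN-style NER model. A minimal sketch (the example sentence is arbitrary, and the PER/ORG/LOC tag set is assumed from the wikiann label scheme):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="azizbarank/mbert-finetuned-azerbaijani-ner",
    aggregation_strategy="simple",
)
print(ner("İlham Əliyev Bakıda Azərbaycan parlamentində çıxış etdi."))
```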
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["wikiann"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "mbert-finetuned-azerbaijani-ner", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "wikiann", "type": "wikiann", "args": "az"}, "metrics": [{"type": "precision", "value": 0.8898541731306236, "name": "Precision"}, {"type": "recall", "value": 0.915416533673795, "name": "Recall"}, {"type": "f1", "value": 0.9024543738200126, "name": "F1"}, {"type": "accuracy", "value": 0.966948310139165, "name": "Accuracy"}]}]}]}
azizbarank/mbert-finetuned-azerbaijani-ner
null
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:wikiann", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
azmatullah31/ai_world
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
azmatullah31/my-awesome-model
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
automatic-speech-recognition
transformers
{"language": "tw", "tags": ["speech", "audio", "automatic-speech-recognition"], "datasets": ["common_voice"]}
azunre/wav2vec2large-xlsr-akan
null
[ "transformers", "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "speech", "audio", "tw", "dataset:common_voice", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-gn-demo This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.7426 - Wer: 0.7256 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_steps: 50 - num_epochs: 60 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 4.0 | 100 | 0.7045 | 0.7409 | | No log | 8.0 | 200 | 0.7200 | 0.75 | | No log | 12.0 | 300 | 0.7400 | 0.7439 | | No log | 16.0 | 400 | 0.7677 | 0.7515 | | 0.0846 | 20.0 | 500 | 0.7765 | 0.7271 | | 0.0846 | 24.0 | 600 | 0.7821 | 0.7287 | | 0.0846 | 28.0 | 700 | 0.7671 | 0.7180 | | 0.0846 | 32.0 | 800 | 0.7594 | 0.7180 | | 0.0846 | 36.0 | 900 | 0.7500 | 0.7165 | | 0.0713 | 40.0 | 1000 | 0.7351 | 0.7287 | | 0.0713 | 44.0 | 1100 | 0.7361 | 0.7241 | | 0.0713 | 48.0 | 1200 | 0.7389 | 0.7378 | | 0.0713 | 52.0 | 1300 | 0.7424 | 0.7210 | | 0.0713 | 56.0 | 1400 | 0.7425 | 0.7256 | | 0.0669 | 60.0 | 1500 | 0.7426 | 0.7256 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.2+cu102 - Datasets 1.18.3 - Tokenizers 0.10.3
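No inference example is included; for a CTC fine-tuned wav2vec2 checkpoint like this one, an ASR pipeline call is usually enough. A minimal sketch — the audio path is a placeholder and 16 kHz mono input is assumed, as with most wav2vec2 models:

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="azuur/wav2vec2-base-gn-demo")
print(asr("sample_16khz.wav"))  # placeholder path to a 16 kHz mono recording
```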
{"language": ["gn"], "license": "apache-2.0", "tags": ["generated_from_trainer", "mozilla-foundation/common_voice_8_0", "robust-speech-event", "hf-asr-leaderboard"], "datasets": ["common_voice", "mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "wav2vec2-base-gn-demo", "results": []}]}
azuur/wav2vec2-base-gn-demo
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "mozilla-foundation/common_voice_8_0", "robust-speech-event", "hf-asr-leaderboard", "gn", "dataset:common_voice", "dataset:mozilla-foundation/common_voice_8_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text2text-generation
transformers
{}
azwierzc/plt5-base-pl-to-sql
null
[ "transformers", "pytorch", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
text2text-generation
transformers
{}
azwierzc/plt5-small-pl-to-sql
null
[ "transformers", "pytorch", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
azwierzc/t5-base-pl-to-sql
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
azzim/gpt2_model_autocode
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
b-mueller/deepslayer
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
# Ragnar Lothbrok DialoGPT Model
{"tags": ["conversational"]}
b0shakk/DialoGPT-small-Ragnar
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
image-classification
transformers
# shirt_identifier Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images #### Big Check shirt ![Big Check shirt](images/Big_Check_shirt.jpg) #### Formal Shirt ![Formal Shirt](images/Formal_Shirt.jpg) #### casual shirt ![casual shirt](images/casual_shirt.jpg) #### denim shirt ![denim shirt](images/denim_shirt.jpg)
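HuggingPics classifiers are standard ViT image-classification models, so a quick check can be done with the image-classification pipeline (the image path below is a placeholder):

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="b25mayank3/shirt_identifier")
print(classifier("my_shirt_photo.jpg"))  # placeholder path to a local image
```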
{"tags": ["image-classification", "pytorch", "huggingpics"], "metrics": ["accuracy"]}
b25mayank3/shirt_identifier
null
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "huggingpics", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
# GPT-Neo 125M finetuned with beer recipes

## Model Description

GPT-Neo 125M is a transformer model based on EleutherAI's replication of the GPT-3 architecture: https://huggingface.co/EleutherAI/gpt-neo-125M.

It generates recipes for brewing beer in a YAML-like format which can be easily used for different purposes.

## Training data

This model was trained on a custom dataset of ~76,800 beer recipes from the internet. It includes recipes for the following styles of beer:

* Strong American Ale
* Pale American Ale
* India Pale Ale (IPA)
* Standard American Beer
* Stout
* English Pale Ale
* IPA
* American Porter and Stout
* Sour Ale
* Irish Beer
* Strong British Ale
* Belgian and French Ale
* German Wheat and Rye Beer
* Czech Lager
* Spice/Herb/Vegetable Beer
* Specialty Beer
* American Ale
* Pilsner
* Belgian Ale
* Strong Belgian Ale
* Bock
* Brown British Beer
* German Wheat Beer
* Fruit Beer
* Amber Malty European Lager
* Pale Malty European Lager
* British Bitter
* Amber and Brown American Beer
* Light Hybrid Beer
* Pale Commonwealth Beer
* American Wild Ale
* European Amber Lager
* Belgian Strong Ale
* International Lager
* Amber Bitter European Lager
* Light Lager
* Scottish and Irish Ale
* European Sour Ale
* Trappist Ale
* Strong European Beer
* Porter
* Historical Beer
* Pale Bitter European Beer
* Amber Hybrid Beer
* Smoke Flavored/Wood-Aged Beer
* Spiced Beer
* Dark European Lager
* Alternative Fermentables Beer
* Mead
* Strong Ale
* Dark British Beer
* Scottish Ale
* Smoked Beer
* English Brown Ale
* Dark Lager
* Cider or Perry
* Wood Beer

### How to use

You can use this model directly with a pipeline for text generation. This example generates a different recipe each time it's run:

```py
>>> from transformers import pipeline
>>> generator = pipeline('text-generation', model='b3ck1/gpt-neo-125M-finetuned-beer-recipes')
>>> output = generator("style: Pilsner\nbatch_size: 20\nefficiency: 75\nboil_size:", do_sample=True, min_length=50, max_length=500)
>>> print(output[0]['generated_text'])
style: Pilsner
batch_size: 20
efficiency: 70
boil_size: 24
boil_time: 60
fermentables:
- name: Pale Ale
  type: Grain
  amount: 6.5
hops:
- name: Saaz
  alpha: 3.5
  use: Boil
  time: 60
  amount: 0.06
- name: Saaz
  alpha: 3.5
  use: Boil
  time: 30
  amount: 0.06
- name: Saaz
  alpha: 3.5
  use: Boil
  time: 10
  amount: 0.06
- name: Saaz
  alpha: 3.5
  use: Boil
  time: 0
  amount: 0.06
yeasts:
- name: Safale - American Ale Yeast US-05
  amount: 0.11
  min_temperature: 12
  max_temperature: 25
primary_temp: null
mash_steps:
- step_temp: 65
  step_time: 60
miscs: []
```

### See this model in action

This model was used to build https://beerai.net.
{"language": ["en"], "license": "apache-2.0", "tags": ["text generation", "pytorch", "causal-lm"], "datasets": ["custom"], "widget": [{"text": "style: Pilsner\nbatch_size: 20\nefficiency: 75\nboil_size:", "example_title": "Pilsener"}, {"text": "style: IPA\nbatch_size: 20\nefficiency: 75\nboil_size:", "example_title": "IPA"}, {"text": "style: Scottish Ale\nbatch_size: 20\nefficiency: 75\nboil_size:", "example_title": "Scottish Ale"}], "inference": {"parameters": {"do_sample": true, "top_k": 10, "top_p": 0.99, "max_length": 500}}}
b3ck1/gpt-neo-125M-finetuned-beer-recipes
null
[ "transformers", "pytorch", "gpt_neo", "text-generation", "text generation", "causal-lm", "en", "dataset:custom", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # This model is a fine-tuned version of [hf-test/xls-r-dummy](https://huggingface.co/hf-test/xls-r-dummy) on the COMMON_VOICE - AB dataset. It achieves the following results on the evaluation set: - Loss: 133.5167 - Wer: 18.9286 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 2.0 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.17.1.dev0 - Tokenizers 0.11.0
{"language": ["ab"], "tags": ["automatic-speech-recognition", "common_voice", "generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "", "results": []}]}
baaastien/xls-r-ab-test
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "common_voice", "generated_from_trainer", "ab", "dataset:common_voice", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
baboubou/test
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-timit_asr-oogway This model is a fine-tuned version of [OthmaneJ/distil-wav2vec2](https://huggingface.co/OthmaneJ/distil-wav2vec2) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "wav2vec2-timit_asr-oogway", "results": []}]}
baby-oogway/wav2vec2-timit_asr-oogway
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
"hello"
{}
bada/test
null
[ "transformers", "pytorch", "jax", "bert", "pretraining", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
{}
bada/test_gpt
null
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
baffo32/T0pp-ptmap
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
baffo32/bart-base-xsum
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
# Genji-python 6B

For example usage, or to easily use the model, you can check our colab notebook:

[Notebook](https://colab.research.google.com/drive/1PnWpx02IEUkY8jhLKd_NewUGEXahAska?usp=sharing)

## Model Description

Genji is a transformer model finetuned on EleutherAI's GPT-J 6B model. This particular model is trained on Python-only code approaching 4GB in size.
The split model has its checkpoints split, which makes it use less system RAM while loading and makes it faster to load.
This model needs more effort to set up, as you need to install git-lfs and pull the repo.

| Hyperparameter | Value |
|-------------------|--------|
| n_parameters | 6,053,381,344 |
| n_layers | 28* |
| d_model | 4,096 |
| d_ff | 16,384 |
| n_heads | 16 |
| d_head | 256 |
| n_ctx | 2,048 |
| n_vocab | 50,400 (same tokenizer as GPT-2/3) |
| position encoding | [Rotary position encodings (RoPE)](https://arxiv.org/abs/2104.09864) |
| RoPE dimensions | [64](https://github.com/kingoflolz/mesh-transformer-jax/blob/f2aa66e0925de6593dcbb70e72399b97b4130482/mesh_transformer/layers.py#L223) |

`*` each layer consists of one feedforward block and one self attention block

The model consists of 28 layers with a model dimension of 4096, and a feedforward dimension of 16384. The model dimension is split into 16 heads, each with a dimension of 256. Rotary position encodings (RoPE) were applied to 64 dimensions of each head. The model is trained with a tokenization vocabulary of 50257, using the same set of BPEs as GPT-2/GPT-3.

## Training data

GPT-J 6B was pretrained on the [Pile](pile.eleuther.ai), a large scale curated dataset created by EleutherAI for the purpose of training this model. After the pre-training, it was finetuned on the Python code that was taken from the Pile.

## Training procedure

Genji-python-6B is trained for 20k steps on around 655 million tokens with a learning rate of 2e-06.

## Intended Use

This model is trained for assistance with writing Python code and for having fun trying weird stuff with it.

### How to use

This model is only usable with our fork, because GPT-J is not merged into the main transformers repo yet. When it's merged, we will make this model easily loadable.

For now, you need to use this fork:
[Fork](https://github.com/finetuneanon/transformers)

To install with pip:
```bash
pip install git+https://github.com/finetuneanon/transformers@gpt-neo-localattention3-rp-b
```

**git-lfs** also needs to be installed; on Ubuntu:
```bash
apt install git-lfs
```

After it's installed, initialize git-lfs:
```bash
git lfs install
```

Then clone this repo:
```bash
git clone https://huggingface.co/NovelAI/genji-python-6B-split
```

Now we can load the model.

We recommend using the model in FP16. That way, it fits in 16GB VRAM cards.
How to use: ```python from transformers import ( AutoTokenizer, AutoModelForCausalLM, GPTNeoForCausalLM, ) model = AutoModelForCausalLM.from_pretrained("genji-python-6B-split/model").half().eval().cuda() tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-2.7B") text = '''def print_customer_name''' tokens = tokenizer(text, return_tensors="pt").input_ids generated_tokens = model.generate(tokens.long().cuda(), use_cache=True, do_sample=True, top_k=50, temperature=0.3, top_p=0.9, repetition_penalty=1.125, min_length=1, max_length=len(tokens[0]) + 400, pad_token_id=tokenizer.eos_token_id) last_tokens = generated_tokens[0][len(tokens[0]):] generated_text = tokenizer.decode(last_tokens) print("Generation:\n" + generated_text) ``` When ran, this code generates: ```python Prompt: def print_customer_name Generation: (self, customer): """Print the name of a customer.""" if not self.is_valid(): return print("Customer: {}".format(customer)) ``` For example usage, you can see our colab notebook as well: [Notebook](https://colab.research.google.com/drive/1PnWpx02IEUkY8jhLKd_NewUGEXahAska?usp=sharing) ## Eval results TBD ## Acknowledgements This project was possible because of the compute provided by the [TPU Research Cloud](https://sites.research.google/trc/) and [EleutherAI](https://eleuther.ai/) for pretraining of the GPT-J 6B. Thanks to everyone who contributed to this project: - [Aero](https://github.com/AeroScripts) - [Finetune](https://github.com/finetuneanon) - [Kurumuz](https://github.com/kurumuz)
{"language": ["en"], "license": "apache-2.0", "tags": ["pytorch", "causal-lm"], "datasets": ["the Pile"]}
baffo32/genji-python-6B-split
null
[ "transformers", "gpt_neo", "text-generation", "pytorch", "causal-lm", "en", "arxiv:2104.09864", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
# GPT-J 6B ## Model Description GPT-J 6B is a transformer model trained using Ben Wang's [Mesh Transformer JAX](https://github.com/kingoflolz/mesh-transformer-jax/). "GPT-J" refers to the class of model, while "6B" represents the number of trainable parameters. <figure> | Hyperparameter | Value | |----------------------|------------| | \\(n_{parameters}\\) | 6053381344 | | \\(n_{layers}\\) | 28&ast; | | \\(d_{model}\\) | 4096 | | \\(d_{ff}\\) | 16384 | | \\(n_{heads}\\) | 16 | | \\(d_{head}\\) | 256 | | \\(n_{ctx}\\) | 2048 | | \\(n_{vocab}\\) | 50257/50400&dagger; (same tokenizer as GPT-2/3) | | Positional Encoding | [Rotary Position Embedding (RoPE)](https://arxiv.org/abs/2104.09864) | | RoPE Dimensions | [64](https://github.com/kingoflolz/mesh-transformer-jax/blob/f2aa66e0925de6593dcbb70e72399b97b4130482/mesh_transformer/layers.py#L223) | <figcaption><p><strong>&ast;</strong> Each layer consists of one feedforward block and one self attention block.</p> <p><strong>&dagger;</strong> Although the embedding matrix has a size of 50400, only 50257 entries are used by the GPT-2 tokenizer.</p></figcaption></figure> The model consists of 28 layers with a model dimension of 4096, and a feedforward dimension of 16384. The model dimension is split into 16 heads, each with a dimension of 256. Rotary Position Embedding (RoPE) is applied to 64 dimensions of each head. The model is trained with a tokenization vocabulary of 50257, using the same set of BPEs as GPT-2/GPT-3. ## Training data GPT-J 6B was trained on [the Pile](https://pile.eleuther.ai), a large-scale curated dataset created by [EleutherAI](https://www.eleuther.ai). ## Training procedure This model was trained for 402 billion tokens over 383,500 steps on TPU v3-256 pod. It was trained as an autoregressive language model, using cross-entropy loss to maximize the likelihood of predicting the next token correctly. ## Intended Use and Limitations GPT-J learns an inner representation of the English language that can be used to extract features useful for downstream tasks. The model is best at what it was pretrained for however, which is generating text from a prompt. ### How to use This model can be easily loaded using the `AutoModelForCausalLM` functionality: ```python from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B") model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B") ``` ### Limitations and Biases The core functionality of GPT-J is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work. When prompting GPT-J it is important to remember that the statistically most likely next token is often not the token that produces the most "accurate" text. Never depend upon GPT-J to produce factually accurate output. GPT-J was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending upon use case GPT-J may produce socially unacceptable text. See [Sections 5 and 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a more detailed analysis of the biases in the Pile. As with all language models, it is hard to predict in advance how GPT-J will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results. 
## Evaluation results <figure> | Model | Public | Training FLOPs | LAMBADA PPL ↓ | LAMBADA Acc ↑ | Winogrande ↑ | Hellaswag ↑ | PIQA ↑ | Dataset Size (GB) | |--------------------------|-------------|----------------|--- |--- |--- |--- |--- |-------------------| | Random Chance | &check; | 0 | ~a lot | ~0% | 50% | 25% | 25% | 0 | | GPT-3 Ada&ddagger; | &cross; | ----- | 9.95 | 51.6% | 52.9% | 43.4% | 70.5% | ----- | | GPT-2 1.5B | &check; | ----- | 10.63 | 51.21% | 59.4% | 50.9% | 70.8% | 40 | | GPT-Neo 1.3B&ddagger; | &check; | 3.0e21 | 7.50 | 57.2% | 55.0% | 48.9% | 71.1% | 825 | | Megatron-2.5B&ast; | &cross; | 2.4e21 | ----- | 61.7% | ----- | ----- | ----- | 174 | | GPT-Neo 2.7B&ddagger; | &check; | 6.8e21 | 5.63 | 62.2% | 56.5% | 55.8% | 73.0% | 825 | | GPT-3 1.3B&ast;&ddagger; | &cross; | 2.4e21 | 5.44 | 63.6% | 58.7% | 54.7% | 75.1% | ~800 | | GPT-3 Babbage&ddagger; | &cross; | ----- | 5.58 | 62.4% | 59.0% | 54.5% | 75.5% | ----- | | Megatron-8.3B&ast; | &cross; | 7.8e21 | ----- | 66.5% | ----- | ----- | ----- | 174 | | GPT-3 2.7B&ast;&ddagger; | &cross; | 4.8e21 | 4.60 | 67.1% | 62.3% | 62.8% | 75.6% | ~800 | | Megatron-11B&dagger; | &check; | 1.0e22 | ----- | ----- | ----- | ----- | ----- | 161 | | **GPT-J 6B&ddagger;** | **&check;** | **1.5e22** | **3.99** | **69.7%** | **65.3%** | **66.1%** | **76.5%** | **825** | | GPT-3 6.7B&ast;&ddagger; | &cross; | 1.2e22 | 4.00 | 70.3% | 64.5% | 67.4% | 78.0% | ~800 | | GPT-3 Curie&ddagger; | &cross; | ----- | 4.00 | 69.3% | 65.6% | 68.5% | 77.9% | ----- | | GPT-3 13B&ast;&ddagger; | &cross; | 2.3e22 | 3.56 | 72.5% | 67.9% | 70.9% | 78.5% | ~800 | | GPT-3 175B&ast;&ddagger; | &cross; | 3.1e23 | 3.00 | 76.2% | 70.2% | 78.9% | 81.0% | ~800 | | GPT-3 Davinci&ddagger; | &cross; | ----- | 3.0 | 75% | 72% | 78% | 80% | ----- | <figcaption><p>Models roughly sorted by performance, or by FLOPs if not available.</p> <p><strong>&ast;</strong> Evaluation numbers reported by their respective authors. All other numbers are provided by running <a href="https://github.com/EleutherAI/lm-evaluation-harness/"><code>lm-evaluation-harness</code></a> either with released weights or with API access. Due to subtle implementation differences as well as different zero shot task framing, these might not be directly comparable. See <a href="https://blog.eleuther.ai/gpt3-model-sizes/">this blog post</a> for more details.</p> <p><strong>†</strong> Megatron-11B provides no comparable metrics, and several implementations using the released weights do not reproduce the generation quality and evaluations. (see <a href="https://github.com/huggingface/transformers/pull/10301">1</a> <a href="https://github.com/pytorch/fairseq/issues/2358">2</a> <a href="https://github.com/pytorch/fairseq/issues/2719">3</a>) Thus, evaluation was not attempted.</p> <p><strong>‡</strong> These models have been trained with data which contains possible test set contamination. 
The OpenAI GPT-3 models failed to deduplicate training data for certain test sets, while the GPT-Neo models as well as this one is trained on the Pile, which has not been deduplicated against any test sets.</p></figcaption></figure> ## Citation and Related Information ### BibTeX entry To cite this model: ```bibtex @misc{gpt-j, author = {Wang, Ben and Komatsuzaki, Aran}, title = {{GPT-J-6B: A 6 Billion Parameter Autoregressive Language Model}}, howpublished = {\url{https://github.com/kingoflolz/mesh-transformer-jax}}, year = 2021, month = May } ``` To cite the codebase that trained this model: ```bibtex @misc{mesh-transformer-jax, author = {Wang, Ben}, title = {{Mesh-Transformer-JAX: Model-Parallel Implementation of Transformer Language Model with JAX}}, howpublished = {\url{https://github.com/kingoflolz/mesh-transformer-jax}}, year = 2021, month = May } ``` If you use this model, we would love to hear about it! Reach out on [GitHub](https://github.com/kingoflolz/mesh-transformer-jax), Discord, or shoot Ben an email. ## Acknowledgements This project would not have been possible without compute generously provided by Google through the [TPU Research Cloud](https://sites.research.google/trc/), as well as the Cloud TPU team for providing early access to the [Cloud TPU VM](https://cloud.google.com/blog/products/compute/introducing-cloud-tpu-vms) Alpha. Thanks to everyone who have helped out one way or another (listed alphabetically): - [James Bradbury](https://twitter.com/jekbradbury) for valuable assistance with debugging JAX issues. - [Stella Biderman](https://www.stellabiderman.com), [Eric Hallahan](https://twitter.com/erichallahan), [Kurumuz](https://github.com/kurumuz/), and [Finetune](https://github.com/finetuneanon/) for converting the model to be compatible with the `transformers` package. - [Leo Gao](https://twitter.com/nabla_theta) for running zero shot evaluations for the baseline models for the table. - [Laurence Golding](https://github.com/researcher2/) for adding some features to the web demo. - [Aran Komatsuzaki](https://twitter.com/arankomatsuzaki) for advice with experiment design and writing the blog posts. - [Janko Prester](https://github.com/jprester/) for creating the web demo frontend.
{"language": ["en"], "license": "apache-2.0", "tags": ["pytorch", "causal-lm"], "datasets": ["The Pile"]}
baffo32/gpt-j-6B-ptmap
null
[ "transformers", "pytorch", "gptj", "text-generation", "causal-lm", "en", "arxiv:2104.09864", "arxiv:2101.00027", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
# GPT-2 Test the whole generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large Pretrained model on English language using a causal language modeling (CLM) objective. It was introduced in [this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) and first released at [this page](https://openai.com/blog/better-language-models/). Disclaimer: The team releasing GPT-2 also wrote a [model card](https://github.com/openai/gpt-2/blob/master/model_card.md) for their model. Content from this model card has been written by the Hugging Face team to complete the information they provided and give specific examples of bias. ## Model description GPT-2 is a transformers model pretrained on a very large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was trained to guess the next word in sentences. More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence, shifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the predictions for the token `i` only uses the inputs from `1` to `i` but not the future tokens. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks. The model is best at what it was pretrained for however, which is generating texts from a prompt. ## Intended uses & limitations You can use the raw model for text generation or fine-tune it to a downstream task. See the [model hub](https://huggingface.co/models?filter=gpt2) to look for fine-tuned versions on a task that interests you. ### How to use You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility: ```python >>> from transformers import pipeline, set_seed >>> generator = pipeline('text-generation', model='gpt2') >>> set_seed(42) >>> generator("Hello, I'm a language model,", max_length=30, num_return_sequences=5) [{'generated_text': "Hello, I'm a language model, a language for thinking, a language for expressing thoughts."}, {'generated_text': "Hello, I'm a language model, a compiler, a compiler library, I just want to know how I build this kind of stuff. I don"}, {'generated_text': "Hello, I'm a language model, and also have more than a few of your own, but I understand that they're going to need some help"}, {'generated_text': "Hello, I'm a language model, a system model. I want to know my language so that it might be more interesting, more user-friendly"}, {'generated_text': 'Hello, I\'m a language model, not a language model"\n\nThe concept of "no-tricks" comes in handy later with new'}] ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import GPT2Tokenizer, GPT2Model tokenizer = GPT2Tokenizer.from_pretrained('gpt2') model = GPT2Model.from_pretrained('gpt2') text = "Replace me by any text you'd like." 
encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import GPT2Tokenizer, TFGPT2Model tokenizer = GPT2Tokenizer.from_pretrained('gpt2') model = TFGPT2Model.from_pretrained('gpt2') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ### Limitations and bias The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of unfiltered content from the internet, which is far from neutral. As the openAI team themselves point out in their [model card](https://github.com/openai/gpt-2/blob/master/model_card.md#out-of-scope-use-cases): > Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases > that require the generated text to be true. > > Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do > not recommend that they be deployed into systems that interact with humans > unless the deployers first carry out a > study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, > and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar > levels of caution around use cases that are sensitive to biases around human attributes. Here's an example of how the model can have biased predictions: ```python >>> from transformers import pipeline, set_seed >>> generator = pipeline('text-generation', model='gpt2') >>> set_seed(42) >>> generator("The White man worked as a", max_length=10, num_return_sequences=5) [{'generated_text': 'The White man worked as a mannequin for'}, {'generated_text': 'The White man worked as a maniser of the'}, {'generated_text': 'The White man worked as a bus conductor by day'}, {'generated_text': 'The White man worked as a plumber at the'}, {'generated_text': 'The White man worked as a journalist. He had'}] >>> set_seed(42) >>> generator("The Black man worked as a", max_length=10, num_return_sequences=5) [{'generated_text': 'The Black man worked as a man at a restaurant'}, {'generated_text': 'The Black man worked as a car salesman in a'}, {'generated_text': 'The Black man worked as a police sergeant at the'}, {'generated_text': 'The Black man worked as a man-eating monster'}, {'generated_text': 'The Black man worked as a slave, and was'}] ``` This bias will also affect all fine-tuned versions of this model. ## Training data The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weights 40GB of texts but has not been publicly released. You can find a list of the top 1,000 domains present in WebText [here](https://github.com/openai/gpt-2/blob/master/domains.txt). ## Training procedure ### Preprocessing The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens. The larger model was trained on 256 cloud TPU v3 cores. The training duration was not disclosed, nor were the exact details of training. 
## Evaluation results The model achieves the following results without any fine-tuning (zero-shot): | Dataset | LAMBADA | LAMBADA | CBT-CN | CBT-NE | WikiText2 | PTB | enwiki8 | text8 | WikiText103 | 1BW | |:--------:|:-------:|:-------:|:------:|:------:|:---------:|:------:|:-------:|:------:|:-----------:|:-----:| | (metric) | (PPL) | (ACC) | (ACC) | (ACC) | (PPL) | (PPL) | (BPB) | (BPC) | (PPL) | (PPL) | | | 35.13 | 45.99 | 87.65 | 83.4 | 29.41 | 65.85 | 1.16 | 1,17 | 37.50 | 75.20 | ### BibTeX entry and citation info ```bibtex @article{radford2019language, title={Language Models are Unsupervised Multitask Learners}, author={Radford, Alec and Wu, Jeff and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya}, year={2019} } ``` <a href="https://huggingface.co/exbert/?model=gpt2"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a>
{"language": "en", "license": "mit", "tags": ["exbert"]}
baffo32/gpt2-ptmap
null
[ "transformers", "pytorch", "tf", "jax", "tflite", "rust", "gpt2", "text-generation", "exbert", "en", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
text2text-generation
transformers
{}
baffo32/pyc2py_alpha
null
[ "transformers", "jax", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
text2text-generation
transformers
# ByT5 - Base ByT5 is a tokenizer-free version of [Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) and generally follows the architecture of [MT5](https://huggingface.co/google/mt5-base). ByT5 was only pre-trained on [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual) excluding any supervised training with an average span-mask of 20 UTF-8 characters. Therefore, this model has to be fine-tuned before it is useable on a downstream task. ByT5 works especially well on noisy text data,*e.g.*, `google/byt5-base` significantly outperforms [mt5-base](https://huggingface.co/google/mt5-base) on [TweetQA](https://arxiv.org/abs/1907.06292). Paper: [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626) Authors: *Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel* ## Example Inference ByT5 works on raw UTF-8 bytes and can be used without a tokenizer: ```python from transformers import T5ForConditionalGeneration import torch model = T5ForConditionalGeneration.from_pretrained('google/byt5-base') input_ids = torch.tensor([list("Life is like a box of chocolates.".encode("utf-8"))]) + 3 # add 3 for special tokens labels = torch.tensor([list("La vie est comme une boîte de chocolat.".encode("utf-8"))]) + 3 # add 3 for special tokens loss = model(input_ids, labels=labels).loss # forward pass ``` For batched inference & training it is however recommended using a tokenizer class for padding: ```python from transformers import T5ForConditionalGeneration, AutoTokenizer model = T5ForConditionalGeneration.from_pretrained('google/byt5-base') tokenizer = AutoTokenizer.from_pretrained('google/byt5-base') model_inputs = tokenizer(["Life is like a box of chocolates.", "Today is Monday."], padding="longest", return_tensors="pt") labels = tokenizer(["La vie est comme une boîte de chocolat.", "Aujourd'hui c'est lundi."], padding="longest", return_tensors="pt").input_ids loss = model(**model_inputs, labels=labels).loss # forward pass ``` ## Abstract Most widely-used pre-trained language models operate on sequences of tokens corresponding to word or subword units. Encoding text as a sequence of tokens requires a tokenizer, which is typically created as an independent artifact from the model. Token-free models that instead operate directly on raw text (bytes or characters) have many benefits: they can process text in any language out of the box, they are more robust to noise, and they minimize technical debt by removing complex and error-prone text preprocessing pipelines. Since byte or character sequences are longer than token sequences, past work on token-free models has often introduced new model architectures designed to amortize the cost of operating directly on raw text. In this paper, we show that a standard Transformer architecture can be used with minimal modifications to process byte sequences. We carefully characterize the trade-offs in terms of parameter count, training FLOPs, and inference speed, and show that byte-level models are competitive with their token-level counterparts. We also demonstrate that byte-level models are significantly more robust to noise and perform better on tasks that are sensitive to spelling and pronunciation. As part of our contribution, we release a new set of pre-trained byte-level Transformer models based on the T5 architecture, as well as all code and data used in our experiments. 
![model image](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/ByT5.png)
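Since the checkpoint is pre-trained only, generated text is not expected to be meaningful before fine-tuning; the snippet below is only a minimal sketch of the byte-level generate-and-decode round trip (the prompt and `max_new_tokens` value are arbitrary choices, not part of the original card):

```python
from transformers import T5ForConditionalGeneration, AutoTokenizer

model = T5ForConditionalGeneration.from_pretrained('google/byt5-base')
tokenizer = AutoTokenizer.from_pretrained('google/byt5-base')

# encode a raw UTF-8 string, generate byte-level ids, then decode them back to text
inputs = tokenizer(["Life is like a box of chocolates."], return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.batch_decode(output_ids, skip_special_tokens=True))
```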
{"language": "multilingual", "license": "apache-2.0", "datasets": ["mc4"]}
baffo32/pyc2py_alpha2
null
[ "transformers", "jax", "t5", "text2text-generation", "multilingual", "dataset:mc4", "arxiv:1907.06292", "arxiv:2105.13626", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
translation
transformers
[Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html)

Pretraining Dataset: [C4](https://huggingface.co/datasets/c4)

Other Community Checkpoints: [here](https://huggingface.co/models?search=t5)

Paper: [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf)

Authors: *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu*

## Abstract

Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code.

![model image](https://camo.githubusercontent.com/623b4dea0b653f2ad3f36c71ebfe749a677ac0a1/68747470733a2f2f6d69726f2e6d656469756d2e636f6d2f6d61782f343030362f312a44304a31674e51663876727255704b657944387750412e706e67)
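The card itself ships no usage snippet; below is a minimal, hedged sketch of T5's text-to-text interface, assuming this checkpoint behaves like the upstream `t5-base` (the translation prefix is one of the task prefixes T5 was trained with; the repo id used here is an assumption about which weights you want to load):

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

model_id = "t5-base"  # assumption: swap in "baffo32/t5-base-ptmap" if loading this mirror directly
tokenizer = T5Tokenizer.from_pretrained(model_id)
model = T5ForConditionalGeneration.from_pretrained(model_id)

# T5 frames every task as text-to-text via a task prefix
inputs = tokenizer("translate English to German: The house is wonderful.", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```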
{"language": ["en", "fr", "ro", "de"], "license": "apache-2.0", "tags": ["summarization", "translation"], "datasets": ["c4"]}
baffo32/t5-base-ptmap
null
[ "transformers", "pytorch", "tf", "jax", "rust", "t5", "text2text-generation", "summarization", "translation", "en", "fr", "ro", "de", "dataset:c4", "arxiv:1910.10683", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
baffo32/test-roberta-base
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
bag03/Bri
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
# Indian Political Tweets LM

## Model description

Note: This model is based on GPT2. If you want a bigger model based on GPT2-medium and finetuned on the same data, please take a look at the [IndianPoliticalTweetsLMMedium](https://huggingface.co/bagdaebhishek/IndianPoliticalTweetsLMMedium) model.

This is a GPT2 language model with an LM head fine-tuned on tweets crawled from handles which belong predominantly to Indian politics. For more information about the crawled data, you can go through this [blog](https://bagdeabhishek.github.io/twitterAnalysis) post.

## Intended uses & limitations

This finetuned model can be used to generate tweets which are related to Indian politics.

#### How to use

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, pipeline

tokenizer = AutoTokenizer.from_pretrained("bagdaebhishek/IndianPoliticalTweetsLM")
model = AutoModelWithLMHead.from_pretrained("bagdaebhishek/IndianPoliticalTweetsLM")
text_generator = pipeline("text-generation", model=model, tokenizer=tokenizer)

init_sentence = "India will always be"
print(text_generator(init_sentence))
```

#### Limitations and bias

1. The tweets used to train the model were not manually labelled, so the generated text may not always be in English. I've cleaned the data to remove non-English tweets, but the model may generate "Hinglish" text and hence no assumptions should be made about the language of the generated text.
2. I've taken enough care to remove tweets from Twitter handles which are not very influential, but since it's not curated by hand there might be some artefacts like "-sent via NamoApp" etc.
3. Like any language model trained on real-world data, this model also exhibits some biases which unfortunately are a part of the political discourse on Twitter. Please keep this in mind while using the output from this model.

## Training data

I used the pre-trained gpt2 model from the Hugging Face transformers repository and fine-tuned it on a custom data set crawled from Twitter. The method used to identify the political handles is mentioned in detail in a [blog](https://bagdeabhishek.github.io/twitterAnalysis) post. I used tweets from both the Pro-BJP and Anti-BJP clusters mentioned in the blog.

## Training procedure

For pre-processing, I removed tweets from handles which are not very influential in their cluster. I removed them by calculating eigenvector centrality on the Twitter graph and pruning handles which have this measure below a certain threshold; the threshold was set manually after experimenting with different values (a sketch of this pruning step is shown after the hardware details below). I then separated the tweets from these handles based on their language and trained the LM with the English tweets from both clusters.

### Hardware

1. GPU: GTX 1080Ti
2. CPU: Ryzen 3900x
3. RAM: 32GB

This model took roughly 36 hours to fine-tune.
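Below is a hypothetical sketch of the centrality-based pruning described in the training procedure, using networkx; the edge list, handle names, and threshold value are placeholders, not the author's actual graph or code:

```python
import networkx as nx

# Placeholder interaction graph between Twitter handles (e.g. follow/retweet edges)
edges = [
    ("handle_a", "handle_b"),
    ("handle_a", "handle_c"),
    ("handle_b", "handle_c"),
    ("handle_c", "handle_d"),
]
graph = nx.Graph(edges)

# Score each handle by eigenvector centrality and keep only the influential ones
centrality = nx.eigenvector_centrality(graph, max_iter=1000)
THRESHOLD = 0.1  # chosen manually after experimentation, per the card
influential_handles = {h for h, score in centrality.items() if score >= THRESHOLD}

print(influential_handles)
```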
{"language": "en", "license": "apache-2.0", "tags": ["India", "politics", "tweets", "BJP", "Congress", "AAP", "pytorch", "gpt2", "lm-head", "text-generation"], "datasets": ["Twitter", "IndianPolitics"], "thumbnail": "https://bagdeabhishek.github.io/twitterAnalysis_files/networkfin.jpg"}
bagdaebhishek/IndianPoliticalTweetsLM
null
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "India", "politics", "tweets", "BJP", "Congress", "AAP", "lm-head", "en", "dataset:Twitter", "dataset:IndianPolitics", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
# Indian Political Tweets LM Medium (Based on GPT2-Medium)

## Model description

This is a GPT2 language model with an LM head fine-tuned on tweets crawled from handles which belong predominantly to Indian politics. For more information about the crawled data, you can go through this [blog](https://bagdeabhishek.github.io/twitterAnalysis) post.

This model is finetuned using GPT2-medium instead of the vanilla GPT2 implementation. It has more parameters but is able to model language slightly better.

## Intended uses & limitations

This finetuned model can be used to generate tweets which are related to Indian politics.

#### How to use

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, pipeline

tokenizer = AutoTokenizer.from_pretrained("bagdaebhishek/IndianPoliticalTweetsLMMedium")
model = AutoModelWithLMHead.from_pretrained("bagdaebhishek/IndianPoliticalTweetsLMMedium")
text_generator = pipeline("text-generation", model=model, tokenizer=tokenizer)

init_sentence = "India will always be"
print(text_generator(init_sentence))
```

#### Limitations and bias

1. The tweets used to train the model were not manually labelled, so the generated text may not always be in English. I've cleaned the data to remove non-English tweets, but the model may generate "Hinglish" text and hence no assumptions should be made about the language of the generated text.
2. I've taken enough care to remove tweets from Twitter handles which are not very influential, but since it's not curated by hand there might be some artefacts like "-sent via NamoApp" etc.
3. Like any language model trained on real-world data, this model also exhibits some biases which unfortunately are a part of the political discourse on Twitter. Please keep this in mind while using the output from this model.

## Training data

I used the pre-trained gpt2-medium model from the Hugging Face transformers repository and fine-tuned it on a custom data set crawled from Twitter. The method used to identify the political handles is mentioned in detail in a [blog](https://bagdeabhishek.github.io/twitterAnalysis) post. I used tweets from both the Pro-BJP and Anti-BJP clusters mentioned in the blog.

## Training procedure

For pre-processing, I removed tweets from handles which are not very influential in their cluster. I removed them by calculating eigenvector centrality on the Twitter graph and pruning handles which have this measure below a certain threshold; the threshold was set manually after experimenting with different values. I then separated the tweets from these handles based on their language and trained the LM with the English tweets from both clusters.

### Hardware

1. GPU: GTX 1080Ti
2. CPU: Ryzen 3900x
3. RAM: 32GB

This model took roughly 36 hours to fine-tune.
{"language": "en", "license": "apache-2.0", "tags": ["India", "politics", "tweets", "BJP", "Congress", "AAP", "pytorch", "gpt2", "lm-head", "text-generation"], "datasets": ["Twitter", "IndianPolitics"], "thumbnail": "https://bagdeabhishek.github.io/twitterAnalysis_files/networkfin.jpg"}
bagdaebhishek/IndianPoliticalTweetsLMMedium
null
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "India", "politics", "tweets", "BJP", "Congress", "AAP", "lm-head", "en", "dataset:Twitter", "dataset:IndianPolitics", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
baib/wav2vec2-large-xlsr-turkish-demo-colab
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
hello
{}
baicuya/bert_cn
null
[ "transformers", "pytorch", "bert", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
{}
baihaisheng/bert_finetuning_test
null
[ "transformers", "pytorch", "jax", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# Sinai Voice Arabic Speech Recognition Model

# نموذج **صوت سيناء** للتعرف على الأصوات العربية الفصحى و تحويلها إلى نصوص

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - AR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2141
- Wer: 0.1808

It achieves the following results on the evaluation set:
- eval_loss = 0.2141
- eval_samples = 10388
- eval_wer = 0.181
- eval_cer = 0.049

#### Evaluation Commands

1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`

```bash
python eval.py --model_id bakrianoo/sinai-voice-ar-stt --dataset mozilla-foundation/common_voice_8_0 --config ar --split test
```

### Inference Without LM

```python
from transformers import (Wav2Vec2Processor, Wav2Vec2ForCTC)
import torchaudio
import torch

def speech_file_to_array_fn(voice_path, resampling_to=16000):
    speech_array, sampling_rate = torchaudio.load(voice_path)
    resampler = torchaudio.transforms.Resample(sampling_rate, resampling_to)
    return resampler(speech_array)[0].numpy(), sampling_rate

# load the model
cp = "bakrianoo/sinai-voice-ar-stt"
processor = Wav2Vec2Processor.from_pretrained(cp)
model = Wav2Vec2ForCTC.from_pretrained(cp)

# recognize the text in a sample sound file
sound_path = './my_voice.mp3'

sample, sr = speech_file_to_array_fn(sound_path)
inputs = processor([sample], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values).logits

predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
```

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 32
- eval_batch_size: 10
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 256
- total_eval_batch_size: 80
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Wer    |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 1.354         | 0.64  | 1000  | 0.4109          | 0.4493 |
| 0.5886        | 1.28  | 2000  | 0.2798          | 0.3099 |
| 0.4977        | 1.92  | 3000  | 0.2387          | 0.2673 |
| 0.4253        | 2.56  | 4000  | 0.2266          | 0.2523 |
| 0.3942        | 3.2   | 5000  | 0.2171          | 0.2437 |
| 0.3619        | 3.84  | 6000  | 0.2076          | 0.2253 |
| 0.3245        | 4.48  | 7000  | 0.2088          | 0.2186 |
| 0.308         | 5.12  | 8000  | 0.2086          | 0.2206 |
| 0.2881        | 5.76  | 9000  | 0.2089          | 0.2105 |
| 0.2557        | 6.4   | 10000 | 0.2015          | 0.2004 |
| 0.248         | 7.04  | 11000 | 0.2044          | 0.1953 |
| 0.2251        | 7.68  | 12000 | 0.2058          | 0.1932 |
| 0.2052        | 8.32  | 13000 | 0.2117          | 0.1878 |
| 0.1976        | 8.96  | 14000 | 0.2104          | 0.1825 |
| 0.1845        | 9.6   | 15000 | 0.2156          | 0.1821 |

### Framework versions

- Transformers 4.16.2
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
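As a complement to the WER/CER numbers above, here is a hedged sketch of scoring a predicted transcription against a reference with the `jiwer` library; the sentences are placeholders rather than Common Voice samples, and `jiwer.cer` assumes a reasonably recent jiwer release:

```python
import jiwer

reference = "اللغة العربية جميلة"   # placeholder reference transcription
hypothesis = "اللغه العربية جميلة"  # placeholder model prediction

# word error rate and character error rate, the two metrics reported in this card
print("WER:", jiwer.wer(reference, hypothesis))
print("CER:", jiwer.cer(reference, hypothesis))
```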
{"language": ["ar"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "hf-asr-leaderboard", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_8_0"], "metrics": ["wer", "cer"], "widget": [{"example_title": "Example 1", "src": "https://huggingface.co/bakrianoo/sinai-voice-ar-stt/raw/main/examples/common_voice_ar_19077324.mp3"}, {"example_title": "Example 2", "src": "https://huggingface.co/bakrianoo/sinai-voice-ar-stt/raw/main/examples/common_voice_ar_19205138.mp3"}, {"example_title": "Example 3", "src": "https://huggingface.co/bakrianoo/sinai-voice-ar-stt/raw/main/examples/common_voice_ar_19331711.mp3"}], "model-index": [{"name": "Sinai Voice Arabic Speech Recognition Model", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice ar", "type": "mozilla-foundation/common_voice_8_0", "args": "ar"}, "metrics": [{"type": "wer", "value": 0.181, "name": "Test WER"}, {"type": "cer", "value": 0.049, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "ar"}, "metrics": [{"type": "wer", "value": 93.03, "name": "Test WER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "ar"}, "metrics": [{"type": "wer", "value": 90.79, "name": "Test WER"}]}]}]}
bakrianoo/sinai-voice-ar-stt
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "hf-asr-leaderboard", "robust-speech-event", "ar", "dataset:mozilla-foundation/common_voice_8_0", "license:apache-2.0", "model-index", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
text2text-generation
transformers
## Arabic T5 Base Model

A customized T5 model for Arabic and English tasks. It could be used as an alternative to the `google/mt5-base` model, as it's much smaller and only targets Arabic and English based tasks.

### About T5

```
T5 is an encoder-decoder model pre-trained on a multi-task mixture of unsupervised and supervised tasks and for which each task is converted into a text-to-text format.

The T5 model was presented in Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer by Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu.
```

[Read More](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html)
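The card does not include a usage snippet; below is a minimal, hedged loading sketch. Like mT5, the checkpoint most likely needs task-specific fine-tuning before it produces useful output, and the prompt shown is an arbitrary placeholder rather than a documented task prefix:

```python
from transformers import T5ForConditionalGeneration, AutoTokenizer

model_id = "bakrianoo/t5-arabic-base"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = T5ForConditionalGeneration.from_pretrained(model_id)

# encode a placeholder Arabic prompt and run a short generation
inputs = tokenizer("مثال على نص عربي قصير", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```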
{"language": "Arabic", "license": "apache-2.0", "datasets": ["mc4"]}
bakrianoo/t5-arabic-base
null
[ "transformers", "pytorch", "t5", "text2text-generation", "dataset:mc4", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
text2text-generation
transformers
## Arabic T5 Large Model

A customized T5 model for Arabic and English tasks. It could be used as an alternative to the `google/mt5-large` model, as it's much smaller and only targets Arabic and English based tasks.

### About T5

```
T5 is an encoder-decoder model pre-trained on a multi-task mixture of unsupervised and supervised tasks and for which each task is converted into a text-to-text format.

The T5 model was presented in Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer by Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu.
```

[Read More](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html)
{"language": "Arabic", "license": "apache-2.0", "datasets": ["mc4"]}
bakrianoo/t5-arabic-large
null
[ "transformers", "pytorch", "t5", "text2text-generation", "dataset:mc4", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
text2text-generation
transformers
## Arabic T5 Small Model

A customized T5 model for Arabic and English tasks. It could be used as an alternative to the `google/mt5-small` model, as it's much smaller and only targets Arabic and English based tasks.

### About T5

```
T5 is an encoder-decoder model pre-trained on a multi-task mixture of unsupervised and supervised tasks and for which each task is converted into a text-to-text format.

The T5 model was presented in Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer by Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu.
```

[Read More](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html)
{"language": "Arabic", "license": "apache-2.0", "datasets": ["mc4"]}
bakrianoo/t5-arabic-small
null
[ "transformers", "pytorch", "t5", "text2text-generation", "dataset:mc4", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
{}
bala1802/model_1_test
null
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
The main card for Saturday’s Manny Pacquiao vs Yordenis Ugas fight gets underway at T-Mobile Arena in Las Vegas at 9 p.m. ET and the main event is expected to start sometime around 11:30 p.m. This is going to air on FOX Sports PPV and YouTube PPV. The card will cost

https://web.sites.google.com/view/ppv-livemanny-pacquiao-vs-yord/home
https://web.sites.google.com/view/freevasyl-manny-pacquiao-vs-yo/home
https://web.sites.google.com/view/ppvlivestreammannypacquiaovsyo/home
https://web.sites.google.com/view/watchtv-manny-pacquiao-vs-yord/home
https://web.sites.google.com/view/heresmannypacquiaovsyordenisug/home
https://web.sites.google.com/view/mannypacquiaovsyordenislive/home
https://web.sites.google.com/view/free-2021-manny-pacquiao-vs-yo/home

LIVE::Watch Full Fight Live Here
LIVE::Watch Full Fight Live Here
https://goodavail.com/boxing/

The most intriguing storyline for this fight is the belt itself that is on the line. Pacquiao won the Super version of the WBA’s welterweight title in July 2019 when he beat Keith Thurman in a split decision. The WBA stripped Pacquiao of the title this past January due to inactivity. The organizing body then promoted Ugas into the Super belt. Ugas won the WBA’s Regular title in September 2020 when he beat Abel Ramos in a split decision. That means Ugas won a title last held by Pacquiao without having to beat Pacquiao.

Pacquiao-Spence would have been a significantly bigger fight for welterweight supremacy, but this is still interesting. Pacquiao was an underdog against Spence, but comes into this fight as a -360 favorite at DraftKings Sportsbook.

How to watch Manny Pacquiao vs. Yordenis Ugas

TV channel: FOX Sports PPV
{}
balalsahabi/fdgdfg
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
{}
balamariannmt/LanguageModel_Trial_2
null
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
token-classification
transformers
# Named Entity Recognition using Transformers

This is a fine-tuned version of BERT using Hugging Face transformers to perform Named Entity Recognition on text data. BERT is a state-of-the-art model with an attention mechanism as its underlying architecture, trained with masked-language-modeling and next-sentence-prediction objectives. It is used for various tasks including question answering systems and text summarization, and it can also perform token classification tasks such as NER with great performance.

# Dataset

**CoNLL-2003**: The shared task of CoNLL-2003 concerns language-independent named entity recognition. We will concentrate on four types of named entities: persons, locations, organizations, and names of miscellaneous entities that do not belong to the previous three groups.

**Link**: https://huggingface.co/datasets/conll2003

# Using this fine-tuned version

From Python, download the whole pipeline and use it instantly using the following code:

```
from transformers import pipeline

# Loading the pipeline from the hub
# The pipeline handles the preprocessing and post-processing steps
model_checkpoint = "balamurugan1603/bert-finetuned-ner"
namedEntityRecogniser = pipeline(
    "token-classification", model=model_checkpoint, aggregation_strategy="simple"
)
```

A reference for using this pipeline to find NER tags can be found in this <a href="https://github.com/balamurugan1603/Named-Entity-Recognition-using-Tranformers/blob/main/named-entity-recognition-using-transfer-learning.ipynb">notebook</a>.
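A possible usage sketch follows; the example sentence is an illustrative assumption, and the printed fields follow the standard transformers pipeline output for `aggregation_strategy="simple"` (exact labels and scores depend on the checkpoint):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="balamurugan1603/bert-finetuned-ner",
    aggregation_strategy="simple",
)

# Run the pipeline on an illustrative sentence and print the aggregated entities
for entity in ner("Sundar Pichai is the CEO of Google, headquartered in Mountain View."):
    print(entity["entity_group"], "->", entity["word"], round(float(entity["score"]), 3))
```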
{}
balamurugan1603/bert-finetuned-ner
null
[ "transformers", "pytorch", "tf", "bert", "token-classification", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
balamurugan1603/mt5-small-finetuned-ta-summarization
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
{}
balawmt/LanguageModel_Trial_1
null
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
# Test Bot DialoGPT Model
{"tags": ["conversational"]}
balta/DialoGPT-small-TestBot
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
balu/absamodel
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
balu/aus
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
balu/auscura
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
balu/fff
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
balu/ffff
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
balu/sample
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
bamin0422/koreanHateSpeechModel_BERT
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00