pipeline_tag (stringclasses, 48 values) | library_name (stringclasses, 205 values) | text (stringlengths, 0–18.3M) | metadata (stringlengths, 2–1.07B) | id (stringlengths, 5–122) | last_modified (null) | tags (listlengths, 1–1.84k) | sha (null) | created_at (stringlengths, 25)
---|---|---|---|---|---|---|---|---
text-classification | transformers |
# Model description
This model is an Arabic sentiment analysis model built on top of the CAMelBERT_msa_sixteenth BERT-based model.
We used the HARD dataset of hotel reviews to fine-tune the model.
The dataset's original five-star rating labels were mapped to three labels:
- POSITIVE: for ratings > 3 stars
- NEUTRAL: for a 3 star rating
- NEGATIVE: for ratings < 3 stars
This first prototype was trained for 3 epochs over 1 hour using Colab with TPU acceleration.
# Examples
Here are some examples in Arabic to test:
- Excellent -> ممتاز (Happy)
- I'm sad -> أنا حزين (Sad)
- Nothing -> لا شيء (Neutral)
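A minimal usage sketch with the `transformers` pipeline (the repo hosts TensorFlow weights per its tags, hence `framework="tf"`; this assumes TensorFlow is installed):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Yah216/Sentiment_Analysis_CAMelBERT_msa_sixteenth_HARD",
    framework="tf",
)
print(classifier("ممتاز"))  # expected: a positive label
```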
# Contact
If you have questions or improvement remarks, feel free to contact me on my LinkedIn profile: https://www.linkedin.com/in/yahya-ghrab/ | {"language": "ar", "widget": [{"text": "\u0645\u0645\u062a\u0627\u0632"}, {"text": "\u0623\u0646\u0627 \u062d\u0632\u064a\u0646"}, {"text": "\u0644\u0627 \u0634\u064a\u0621"}]} | Yah216/Sentiment_Analysis_CAMelBERT_msa_sixteenth_HARD | null | [
"transformers",
"tf",
"bert",
"text-classification",
"ar",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | Yaia/distilbert-base-uncased-finetuned-disaster | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | Yaia/distilbert-base-uncased-finetuned-disaster_full | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2086
- Accuracy: 0.9255
- F1: 0.9257
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
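As a sketch, these settings map onto `transformers.TrainingArguments` roughly as follows (`output_dir` and the per-device reading of the batch sizes are assumptions, not from the card):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-emotion",  # assumed name
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```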
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8249 | 1.0 | 250 | 0.3042 | 0.9085 | 0.9068 |
| 0.2437 | 2.0 | 500 | 0.2086 | 0.9255 | 0.9257 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1
- Datasets 1.17.0
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["emotion"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.9255, "name": "Accuracy"}, {"type": "f1", "value": 0.9257196896784097, "name": "F1"}]}]}]} | Yaia/distilbert-base-uncased-finetuned-emotion | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | YanWang20121865/2021MScAI_ML | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | ONNX version of the message-intent model, intended for use on a GPU machine. | {} | Yanjie/message-intent-onnx | null | [
"onnx",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers | This is the concierge intent model, fine-tuned from the DistilBERT uncased model. | {} | Yanjie/message-intent | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers | This is the concierge preamble model, fine-tuned from the DistilBERT uncased model. | {} | Yanjie/message-preamble | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-generation | transformers | {} | Yankee/TEST21 | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | Yankee/test | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-generation | transformers |
#test | {"tags": ["conversational"]} | Yankee/test1234 | null | [
"transformers",
"pytorch",
"conversational",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
fill-mask | transformers |
Domain-adaptive pretraining of camembert-base using 15 GB of French tweets. | {"language": "fr"} | Yanzhu/bertweetfr-base | null | [
"transformers",
"pytorch",
"camembert",
"fill-mask",
"fr",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
token-classification | transformers | French NER model for tweets. Fine-tuned on the CAP2017 dataset.
label_list = [
    'O',
    'B-person', 'I-person',
    'B-musicartist', 'I-musicartist',
    'B-org', 'I-org',
    'B-geoloc', 'I-geoloc',
    'B-product', 'I-product',
    'B-transportLine', 'I-transportLine',
    'B-media', 'I-media',
    'B-sportsteam', 'I-sportsteam',
    'B-event', 'I-event',
    'B-tvshow', 'I-tvshow',
    'B-movie', 'I-movie',
    'B-facility', 'I-facility',
    'B-other', 'I-other',
]
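A minimal usage sketch with the `transformers` pipeline (the repo id comes from this card's metadata; the example sentence is illustrative):
```python
from transformers import pipeline

# aggregation_strategy="simple" merges word-piece predictions into entity spans.
ner = pipeline("token-classification", model="Yanzhu/bertweetfr_ner", aggregation_strategy="simple")
print(ner("Emmanuel Macron visite Paris demain."))
```
| {} | Yanzhu/bertweetfr_ner | null | [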
"transformers",
"pytorch",
"camembert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | French roBERTa-base model fine-tuned for Offensive Language Identification on COVID-19 tweets. | {} | Yanzhu/bertweetfr_offensiveness | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | YashBit/distilbert-base-uncased-finetuned-cola | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | YashBit/roberta-base-finetuned-sst2 | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | Yashaaaaa/Yahsjaja | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | Yashwanth/test-trainer | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
automatic-speech-recognition | null | # Wav2Vec2-Large-XLSR-Bengali
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Bengali using a subset of 40,000 utterances from the [Bengali ASR training data set containing ~196K utterances](https://www.openslr.org/53/). WER was tested on ~4,200 utterances held out from training.
When using this model, make sure that your speech input is sampled at 16kHz.
The training script can be found at: train.py
Data Prep Notebook : https://colab.research.google.com/drive/1JMlZPU-DrezXjZ2t7sOVqn7CJjZhdK2q?usp=sharing
Inference Notebook : https://colab.research.google.com/drive/1uKC2cK9JfUPDTUHbrNdOYqKtNozhxqgZ?usp=sharing
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
processor = Wav2Vec2Processor.from_pretrained("arijitx/wav2vec2-large-xlsr-bengali")
model = Wav2Vec2ForCTC.from_pretrained("arijitx/wav2vec2-large-xlsr-bengali")
# model = model.to("cuda")
# TEST_AUDIO_SR must be set to your input file's sampling rate before running.
resampler = torchaudio.transforms.Resample(TEST_AUDIO_SR, 16_000)

def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch)
    speech = resampler(speech_array).squeeze().numpy()
    return speech

speech_array = speech_file_to_array_fn("test_file.wav")
inputs = processor(speech_array, sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
preds = processor.batch_decode(predicted_ids)[0]
print(preds.replace("[PAD]",""))
```
**Test Result**: WER on ~4,200 utterances: 32.45% | {"language": "Bengali", "license": "cc-by-sa-4.0", "tags": ["bn", "audio", "automatic-speech-recognition", "speech"], "datasets": ["OpenSLR"], "metrics": ["wer"], "model-index": [{"name": "XLSR Wav2Vec2 Bengali by Arijit", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "OpenSLR", "type": "OpenSLR", "args": "ben"}, "metrics": [{"type": "wer", "value": 32.45, "name": "Test WER"}]}]}]} | YasinShihab/asr-en-bn-test | null | [
"bn",
"audio",
"automatic-speech-recognition",
"speech",
"dataset:OpenSLR",
"license:cc-by-sa-4.0",
"model-index",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | Yasmin007anjinha/O | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | Yasmin018393o03i4/Arcade | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | Yatoro/Yatoro-finetuned-imdb | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | Yatoro/bert-finetuned-ner | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | Yatoro/codeparrot-ds | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | Yatoro/marian-finetuned-kde4-en-to-fr | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | Yatoro/mt5-small-finetuned-amazon-en-es | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | Yax/yaxescobedo | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | Ye/albert_finetuned_ye | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
automatic-speech-recognition | transformers |
# Ukrainian STT model (with Language Model)
🇺🇦 Join Ukrainian Speech Recognition Community - https://t.me/speech_recognition_uk
⭐ See other Ukrainian models - https://github.com/egorsmkv/speech-recognition-uk
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - UK dataset.
It achieves the following results on the evaluation set without the language model:
- Loss: 0.1875
- Wer: 0.2033
- Cer: 0.0384
## Model description
On 100 test examples the model shows the following results:
Without LM:
- WER: 0.1862
- CER: 0.0277
With LM:
- WER: 0.1218
- CER: 0.0190
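A minimal inference sketch with the `transformers` pipeline (the audio file name is a placeholder; LM-boosted decoding assumes `pyctcdecode` and `kenlm` are installed so the bundled decoder is picked up):
```python
from transformers import pipeline

# pipeline() applies the repo's n-gram decoder automatically when available;
# input audio should be 16 kHz mono.
asr = pipeline("automatic-speech-recognition", model="Yehor/wav2vec2-xls-r-1b-uk-with-lm")
print(asr("speech_uk.wav")["text"])  # "speech_uk.wav" is a placeholder file
```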
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 20
- total_train_batch_size: 160 (8 per device × 20 accumulation steps)
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 1.2815 | 7.93 | 500 | 0.3536 | 0.4753 | 0.1009 |
| 1.0869 | 15.86 | 1000 | 0.2317 | 0.3111 | 0.0614 |
| 0.9984 | 23.8 | 1500 | 0.2022 | 0.2676 | 0.0521 |
| 0.975 | 31.74 | 2000 | 0.1948 | 0.2469 | 0.0487 |
| 0.9306 | 39.67 | 2500 | 0.1916 | 0.2377 | 0.0464 |
| 0.8868 | 47.61 | 3000 | 0.1903 | 0.2257 | 0.0439 |
| 0.8424 | 55.55 | 3500 | 0.1786 | 0.2206 | 0.0423 |
| 0.8126 | 63.49 | 4000 | 0.1849 | 0.2160 | 0.0416 |
| 0.7901 | 71.42 | 4500 | 0.1869 | 0.2138 | 0.0413 |
| 0.7671 | 79.36 | 5000 | 0.1855 | 0.2075 | 0.0394 |
| 0.7467 | 87.3 | 5500 | 0.1884 | 0.2049 | 0.0389 |
| 0.731 | 95.24 | 6000 | 0.1877 | 0.2060 | 0.0387 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.18.1.dev0
- Tokenizers 0.11.0
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_7_0` with split `test`
```bash
python eval.py --model_id Yehor/wav2vec2-xls-r-1b-uk-with-lm --dataset mozilla-foundation/common_voice_7_0 --config uk --split test
```
### Eval results on Common Voice 7 "test" (WER):
| Without LM | With LM (run `./eval.py`) |
|---|---|
| 21.52 | 14.62 |
| {"language": ["uk"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_7_0", "robust-speech-event", "uk"], "datasets": ["mozilla-foundation/common_voice_7_0"], "model-index": [{"name": "wav2vec2-xls-r-1b-uk-with-lm", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 7", "type": "mozilla-foundation/common_voice_7_0", "args": "uk"}, "metrics": [{"type": "wer", "value": 14.62, "name": "Test WER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "uk"}, "metrics": [{"type": "wer", "value": 48.72, "name": "Test WER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "uk"}, "metrics": [{"type": "wer", "value": 40.66, "name": "Test WER"}]}]}]} | Yehor/wav2vec2-xls-r-1b-uk-with-lm | null | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_7_0",
"robust-speech-event",
"uk",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
automatic-speech-recognition | transformers |
# Ukrainian STT model (with a Big Language Model built from a News Dataset)
🇺🇦 Join Ukrainian Speech Recognition Community - https://t.me/speech_recognition_uk
⭐ See other Ukrainian models - https://github.com/egorsmkv/speech-recognition-uk
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - UK dataset.
Attribution for the language model's dataset:
- Chaplynskyi, D. et al. (2021) lang-uk Ukrainian Ubercorpus [Data set]. https://lang.org.ua/uk/corpora/#anchor4
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 20
- total_train_batch_size: 160
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 1.2815 | 7.93 | 500 | 0.3536 | 0.4753 | 0.1009 |
| 1.0869 | 15.86 | 1000 | 0.2317 | 0.3111 | 0.0614 |
| 0.9984 | 23.8 | 1500 | 0.2022 | 0.2676 | 0.0521 |
| 0.975 | 31.74 | 2000 | 0.1948 | 0.2469 | 0.0487 |
| 0.9306 | 39.67 | 2500 | 0.1916 | 0.2377 | 0.0464 |
| 0.8868 | 47.61 | 3000 | 0.1903 | 0.2257 | 0.0439 |
| 0.8424 | 55.55 | 3500 | 0.1786 | 0.2206 | 0.0423 |
| 0.8126 | 63.49 | 4000 | 0.1849 | 0.2160 | 0.0416 |
| 0.7901 | 71.42 | 4500 | 0.1869 | 0.2138 | 0.0413 |
| 0.7671 | 79.36 | 5000 | 0.1855 | 0.2075 | 0.0394 |
| 0.7467 | 87.3 | 5500 | 0.1884 | 0.2049 | 0.0389 |
| 0.731 | 95.24 | 6000 | 0.1877 | 0.2060 | 0.0387 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.18.1.dev0
- Tokenizers 0.11.0
| {"language": ["uk"], "license": "cc-by-nc-sa-4.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer", "uk"], "xdatasets": ["mozilla-foundation/common_voice_7_0"]} | Yehor/wav2vec2-xls-r-1b-uk-with-news-lm | null | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"uk",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
automatic-speech-recognition | transformers |
# Ukrainian STT model (with Language Model)
🇺🇦 Join Ukrainian Speech Recognition Community - https://t.me/speech_recognition_uk
⭐ See other Ukrainian models - https://github.com/egorsmkv/speech-recognition-uk
- Have a look at the updated 300m model: https://huggingface.co/Yehor/wav2vec2-xls-r-300m-uk-with-small-lm
- Have a look at a better model with more parameters: https://huggingface.co/Yehor/wav2vec2-xls-r-1b-uk-with-lm
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - UK dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3015
- Wer: 0.3377
- Cer: 0.0708
The above results present evaluation without the language model.
## Model description
On 100 test examples the model shows the following results:
Without LM:
- WER: 0.2647
- CER: 0.0469
With LM:
- WER: 0.1568
- CER: 0.0289
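A decoding sketch with the language model, assuming the repo bundles a `pyctcdecode` decoder and that the checkpoint loads as a CTC model (the audio file name is a placeholder and should be 16 kHz-resampled mono):
```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2ProcessorWithLM

processor = Wav2Vec2ProcessorWithLM.from_pretrained("Yehor/wav2vec2-xls-r-300m-uk-with-lm")
model = Wav2Vec2ForCTC.from_pretrained("Yehor/wav2vec2-xls-r-300m-uk-with-lm")

speech, sr = torchaudio.load("audio_uk.wav")  # placeholder file
speech = torchaudio.functional.resample(speech, sr, 16_000).squeeze()
inputs = processor(speech.numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
# batch_decode on the LM processor runs beam search with the n-gram model.
print(processor.batch_decode(logits.numpy()).text[0])
```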
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 20
- total_train_batch_size: 160
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 3.0255 | 7.93 | 500 | 2.5514 | 0.9921 | 0.9047 |
| 1.3809 | 15.86 | 1000 | 0.4065 | 0.5361 | 0.1201 |
| 1.2355 | 23.8 | 1500 | 0.3474 | 0.4618 | 0.1033 |
| 1.1956 | 31.74 | 2000 | 0.3617 | 0.4580 | 0.1005 |
| 1.1416 | 39.67 | 2500 | 0.3182 | 0.4074 | 0.0891 |
| 1.0996 | 47.61 | 3000 | 0.3166 | 0.3985 | 0.0875 |
| 1.0427 | 55.55 | 3500 | 0.3116 | 0.3835 | 0.0828 |
| 0.9961 | 63.49 | 4000 | 0.3137 | 0.3757 | 0.0807 |
| 0.9575 | 71.42 | 4500 | 0.2992 | 0.3632 | 0.0771 |
| 0.9154 | 79.36 | 5000 | 0.3015 | 0.3502 | 0.0740 |
| 0.8994 | 87.3 | 5500 | 0.3004 | 0.3425 | 0.0723 |
| 0.871 | 95.24 | 6000 | 0.3016 | 0.3394 | 0.0713 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.18.1.dev0
- Tokenizers 0.11.0
| {"language": ["uk"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer", "uk"], "datasets": ["mozilla-foundation/common_voice_7_0"], "model-index": [{"name": "wav2vec2-xls-r-300m-uk-with-lm", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 7", "type": "mozilla-foundation/common_voice_7_0", "args": "uk"}, "metrics": [{"type": "wer", "value": 26.47, "name": "Test WER"}, {"type": "cer", "value": 2.9, "name": "Test CER"}]}]}]} | Yehor/wav2vec2-xls-r-300m-uk-with-lm | null | [
"transformers",
"pytorch",
"wav2vec2",
"pretraining",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"uk",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | Yero/distilroberta-base-finetuned-wikitext2 | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | Yero/model_basic | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | # ProteinLM | {} | Yijia-Xiao/ProteinLM | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | Yita/wav2vec2-base-timit-demo-colab | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
feature-extraction | transformers | {} | YituTech/conv-bert-base | null | [
"transformers",
"pytorch",
"tf",
"convbert",
"feature-extraction",
"endpoints_compatible",
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
feature-extraction | transformers | {} | YituTech/conv-bert-medium-small | null | [
"transformers",
"pytorch",
"tf",
"convbert",
"feature-extraction",
"endpoints_compatible",
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
feature-extraction | transformers | {} | YituTech/conv-bert-small | null | [
"transformers",
"pytorch",
"tf",
"convbert",
"feature-extraction",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | Yixuan/wav2vec2-large-xls-r-300m-cantonese | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
automatic-speech-recognition | transformers | {} | Yixuan/wav2vec2-large-xls-r-300m-turkish-colab | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | Yizumi/distilbert-base-uncased-finetuned-cola | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | YoadTew/Arithmetic | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | Yoiimomo/Beep | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | Yokiiiii/Yolii | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | Yokito7u7/Agua | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | Yongqi/gru_bidaf | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {"license": "mit"} | Yoonseong/GreenWashing | null | [
"license:mit",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | YoooCupid/Cupid | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-generation | transformers | {} | Yoshisaur/kono-chat | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | Yotam/new | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | YoussefSaad/t5-small-finetuned-en-to-ro | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | Ysa/Chat-Pizza | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | Ysa/DialoGPT-small-harrypotter | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {"license": "wtfpl"} | Ysfafandi/A1 | null | [
"license:wtfpl",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | YuCui/distilgpt2-finetuned-wikitext2 | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | Yuber-Lobo/Prueba | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
question-answering | transformers |
# Question Answering model for Hindi and Tamil
This model is part of the ensemble that ranked 4th out of 943 teams in the [Hindi and Tamil Question Answering](https://www.kaggle.com/c/chaii-hindi-and-tamil-question-answering) competition held by Google Research India on Kaggle.
```python
from transformers import AutoTokenizer, AutoModelForQuestionAnswering
tokenizer = AutoTokenizer.from_pretrained("Yuchen/muril-large-cased-hita-qa")
model = AutoModelForQuestionAnswering.from_pretrained("Yuchen/muril-large-cased-hita-qa")
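
# Hedged usage sketch; the question/context strings below are placeholders.
inputs = tokenizer("<question>", "<context>", return_tensors="pt")
outputs = model(**inputs)
start = outputs.start_logits.argmax()
end = outputs.end_logits.argmax()
print(tokenizer.decode(inputs["input_ids"][0][start : end + 1]))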
``` | {"license": "apache-2.0", "thumbnail": "https://huggingface.co/front/thumbnails/google.png"} | Yuchen/muril-large-cased-hita-qa | null | [
"transformers",
"pytorch",
"bert",
"question-answering",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | Yuki9666/Anime | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | Yuqian/Celine_SRL | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | Yuqian/multilingual-ner | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-marc
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9825
- MAE: 0.4956
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | MAE |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1432 | 1.0 | 308 | 1.0559 | 0.5133 |
| 0.9883 | 2.0 | 616 | 0.9825 | 0.4956 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
| {"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["amazon_reviews_multi"], "model-index": [{"name": "xlm-roberta-base-finetuned-marc", "results": []}]} | Yuri/xlm-roberta-base-finetuned-marc | null | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"dataset:amazon_reviews_multi",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | Yuriy/wer | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | Yurun/distilbert-base-uncased-finetuned-squad | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | espnet |
## ESPnet2 DIAR model
### `YushiUeda/test`
This model was trained by Yushi Ueda using the mini_librispeech recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout 4dfa2be4331d3d68f124aa5fd81f63217a7278a4
pip install -e .
cd egs2/mini_librispeech/diar1
./run.sh --skip_data_prep false --skip_train true --download_model YushiUeda/test
```
<!-- Generated by scripts/utils/show_diar_result.sh -->
# RESULTS
## Environments
- date: `Wed Aug 25 23:29:07 EDT 2021`
- python version: `3.7.11 (default, Jul 27 2021, 14:32:16) [GCC 7.5.0]`
- espnet version: `espnet 0.10.2a1`
- pytorch version: `pytorch 1.9.0+cu102`
- Git hash: `19bcd34f9395e01e54a97c4db5ecbcedb429dd92`
- Commit date: `Tue Aug 24 19:50:44 2021 -0400`
## `diar_train_diar_raw_max_epoch20`
### DER
`dev_clean_2_ns2_beta2_500` (row names encode the decision threshold, median-filter window, and scoring collar)
|threshold_median_collar|DER|
|---|---|
|result_th0.3_med1_collar0.0|32.42|
|result_th0.3_med11_collar0.0|32.03|
|result_th0.4_med1_collar0.0|30.96|
|result_th0.4_med11_collar0.0|30.26|
|result_th0.5_med1_collar0.0|30.35|
|result_th0.5_med11_collar0.0|29.37|
|result_th0.6_med1_collar0.0|30.77|
|result_th0.6_med11_collar0.0|29.52|
|result_th0.7_med1_collar0.0|32.60|
|result_th0.7_med11_collar0.0|31.03|
## DIAR config
<details><summary>expand</summary>
```
config: conf/train_diar.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: chunk
output_dir: exp/diar_train_diar_raw_max_epoch20
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 20
patience: 3
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 3
grad_clip: 5
grad_clip_type: 2.0
grad_noise: false
accum_grad: 2
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 16
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/diar_stats_8k/train/speech_shape
- exp/diar_stats_8k/train/spk_labels_shape
valid_shape_file:
- exp/diar_stats_8k/valid/speech_shape
- exp/diar_stats_8k/valid/spk_labels_shape
batch_type: folded
valid_batch_type: null
fold_length:
- 80000
- 800
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 200000
chunk_shift_ratio: 0.5
num_cache_chunks: 64
train_data_path_and_name_and_type:
- - dump/raw/simu/data/train_clean_5_ns2_beta2_500/wav.scp
- speech
- sound
- - dump/raw/simu/data/train_clean_5_ns2_beta2_500/espnet_rttm
- spk_labels
- rttm
valid_data_path_and_name_and_type:
- - dump/raw/simu/data/dev_clean_2_ns2_beta2_500/wav.scp
- speech
- sound
- - dump/raw/simu/data/dev_clean_2_ns2_beta2_500/espnet_rttm
- spk_labels
- rttm
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.01
scheduler: noamlr
scheduler_conf:
warmup_steps: 1000
num_spk: 2
init: xavier_uniform
input_size: null
model_conf:
loss_type: pit
use_preprocessor: true
frontend: default
frontend_conf:
fs: 8k
hop_length: 128
normalize: global_mvn
normalize_conf:
stats_file: exp/diar_stats_8k/train/feats_stats.npz
encoder: transformer
encoder_conf:
input_layer: linear
num_blocks: 2
linear_units: 512
dropout_rate: 0.1
output_size: 256
attention_heads: 4
attention_dropout_rate: 0.0
decoder: linear
decoder_conf: {}
label_aggregator: label_aggregator
label_aggregator_conf: {}
required:
- output_dir
version: 0.10.2a1
distributed: false
```
</details>
| {"license": "cc-by-4.0", "tags": ["espnet", "audio", "diarization"], "datasets": ["mini_librispeech"]} | YushiUeda/test | null | [
"espnet",
"audio",
"diarization",
"dataset:mini_librispeech",
"license:cc-by-4.0",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# IFIS_ZORK_AI_FANTASY
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.8.2
- Pytorch 1.9.0+cu102
- Tokenizers 0.10.3
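A minimal generation sketch (the repo id comes from this card's metadata; the Zork-style prompt is illustrative):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="YusufSahin99/IFIS_ZORK_AI_FANTASY")
prompt = "You are standing in an open field west of a white house."
print(generator(prompt, max_new_tokens=40)[0]["generated_text"])
```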
| {"license": "mit", "tags": ["generated_from_trainer"], "model_index": [{"name": "IFIS_ZORK_AI_FANTASY", "results": [{"task": {"name": "Causal Language Modeling", "type": "text-generation"}}]}]} | YusufSahin99/IFIS_ZORK_AI_FANTASY | null | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# IFIS_ZORK_AI_HORROR
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.8.2
- Pytorch 1.9.0+cu102
- Tokenizers 0.10.3
| {"license": "mit", "tags": ["generated_from_trainer"], "model_index": [{"name": "IFIS_ZORK_AI_HORROR", "results": [{"task": {"name": "Causal Language Modeling", "type": "text-generation"}}]}]} | YusufSahin99/IFIS_ZORK_AI_HORROR | null | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# IFIS_ZORK_AI_MODERN
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.8.2
- Pytorch 1.9.0+cu102
- Tokenizers 0.10.3
| {"license": "mit", "tags": ["generated_from_trainer"], "model_index": [{"name": "IFIS_ZORK_AI_MODERN", "results": [{"task": {"name": "Causal Language Modeling", "type": "text-generation"}}]}]} | YusufSahin99/IFIS_ZORK_AI_MODERN | null | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# IFIS_ZORK_AI_SCIFI
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.8.2
- Pytorch 1.9.0+cu102
- Tokenizers 0.10.3
| {"license": "mit", "tags": ["generated_from_trainer"], "model_index": [{"name": "IFIS_ZORK_AI_SCIFI", "results": [{"task": {"name": "Causal Language Modeling", "type": "text-generation"}}]}]} | YusufSahin99/IFIS_ZORK_AI_SCIFI | null | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | YusufSahin99/IFIS_ZORK_AI_SCIFI2 | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Zork_AI_SciFi
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.8.2
- Pytorch 1.9.0+cu102
- Tokenizers 0.10.3
| {"license": "mit", "tags": ["generated_from_trainer"], "model_index": [{"name": "Zork_AI_SciFi", "results": [{"task": {"name": "Causal Language Modeling", "type": "text-generation"}}]}]} | YusufSahin99/Zork_AI_SciFi | null | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | YusufSahin99/Zork_AI_SciFi2 | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | Yuta565/OkuYuta | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | Yuuryoku/Junko_Enoshima | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | Yuxing/distilbert-base-uncased-finetuned-cola | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | Yuxing/distilbert-base-uncased-finetuned-mnli | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
token-classification | transformers | {} | Yv/bert-finetuned-ner-accelerate | null | [
"transformers",
"pytorch",
"bert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0598
- Precision: 0.9370
- Recall: 0.9509
- F1: 0.9439
- Accuracy: 0.9869
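A minimal usage sketch (the example sentence is illustrative):
```python
from transformers import pipeline

# aggregation_strategy="simple" merges word pieces into whole entity spans.
ner = pipeline("token-classification", model="Yv/bert-finetuned-ner", aggregation_strategy="simple")
print(ner("My name is Wolfgang and I live in Berlin."))
```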
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0871 | 1.0 | 1756 | 0.0633 | 0.9197 | 0.9362 | 0.9279 | 0.9833 |
| 0.0386 | 2.0 | 3512 | 0.0572 | 0.9351 | 0.9483 | 0.9417 | 0.9866 |
| 0.0214 | 3.0 | 5268 | 0.0598 | 0.9370 | 0.9509 | 0.9439 | 0.9869 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["conll2003"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "bert-finetuned-ner", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "conll2003", "type": "conll2003", "args": "conll2003"}, "metrics": [{"type": "precision", "value": 0.9369817578772802, "name": "Precision"}, {"type": "recall", "value": 0.9508582968697409, "name": "Recall"}, {"type": "f1", "value": 0.9438690277313732, "name": "F1"}, {"type": "accuracy", "value": 0.9868575969859305, "name": "Accuracy"}]}]}]} | Yv/bert-finetuned-ner | null | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | Yv/mt5-small-finetuned-amazon-en-es | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | ZAKok97/test | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | ZYW/Xquad | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# en-de-es-model
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.6.1
- Pytorch 1.8.1+cu101
- Datasets 1.7.0
- Tokenizers 0.10.3
| {} | ZYW/en-de-es-model | null | [
"transformers",
"pytorch",
"distilbert",
"question-answering",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# en-de-model
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.6.1
- Pytorch 1.8.1+cu101
- Datasets 1.7.0
- Tokenizers 0.10.3
| {} | ZYW/en-de-model | null | [
"transformers",
"pytorch",
"distilbert",
"question-answering",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# en-de-vi-zh-es-model
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.6.1
- Pytorch 1.8.1+cu101
- Datasets 1.7.0
- Tokenizers 0.10.3
| {} | ZYW/en-de-vi-zh-es-model | null | [
"transformers",
"pytorch",
"distilbert",
"question-answering",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# squad-en-de-es-model
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.6.1
- Pytorch 1.8.1+cu101
- Datasets 1.7.0
- Tokenizers 0.10.3
| {} | ZYW/squad-en-de-es-model | null | [
"transformers",
"pytorch",
"distilbert",
"question-answering",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# squad-en-de-es-vi-zh-model
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.6.1
- Pytorch 1.8.1+cu101
- Datasets 1.7.0
- Tokenizers 0.10.3
| {} | ZYW/squad-en-de-es-vi-zh-model | null | [
"transformers",
"pytorch",
"distilbert",
"question-answering",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# squad-mbart-model
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.6.1
- Pytorch 1.8.1+cu101
- Datasets 1.7.0
- Tokenizers 0.10.3
| {} | ZYW/squad-mbart-model | null | [
"transformers",
"pytorch",
"mbart",
"question-answering",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# squad-mbert-en-de-es-model
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.6.1
- Pytorch 1.8.1+cu101
- Datasets 1.7.0
- Tokenizers 0.10.3
| {} | ZYW/squad-mbert-en-de-es-model | null | [
"transformers",
"pytorch",
"bert",
"question-answering",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# squad-mbert-en-de-es-vi-zh-model
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.6.1
- Pytorch 1.8.1+cu101
- Datasets 1.7.0
- Tokenizers 0.10.3
| {} | ZYW/squad-mbert-en-de-es-vi-zh-model | null | [
"transformers",
"pytorch",
"bert",
"question-answering",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# squad-mbert-model
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.6.1
- Pytorch 1.8.1+cu101
- Datasets 1.7.0
- Tokenizers 0.10.3
| {} | ZYW/squad-mbert-model | null | [
"transformers",
"pytorch",
"bert",
"question-answering",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# squad-mbert-model_2
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.6.1
- Pytorch 1.8.1+cu101
- Datasets 1.7.0
- Tokenizers 0.10.3
| {} | ZYW/squad-mbert-model_2 | null | [
"transformers",
"pytorch",
"bert",
"question-answering",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-squad-trained
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2026
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.988 | 1.0 | 5486 | 1.1790 |
| 0.7793 | 2.0 | 10972 | 1.2026 |
| 0.8068 | 3.0 | 16458 | 1.2026 |
### Framework versions
- Transformers 4.6.1
- Pytorch 1.8.1+cu101
- Datasets 1.6.2
- Tokenizers 0.10.3
| {} | ZYW/test-squad-trained | null | [
"transformers",
"pytorch",
"distilbert",
"question-answering",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8631
- Matthews Correlation: 0.5411
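A minimal usage sketch (without a custom `id2label` mapping, outputs use the generic `LABEL_0`/`LABEL_1` names, which for CoLA correspond to unacceptable/acceptable):
```python
from transformers import pipeline

cola = pipeline("text-classification", model="ZZDDBBCC/distilbert-base-uncased-finetuned-cola")
print(cola("The book was written by John."))
```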
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5249 | 1.0 | 535 | 0.5300 | 0.4152 |
| 0.3489 | 2.0 | 1070 | 0.5238 | 0.4940 |
| 0.2329 | 3.0 | 1605 | 0.6447 | 0.5162 |
| 0.1692 | 4.0 | 2140 | 0.7805 | 0.5332 |
| 0.1256 | 5.0 | 2675 | 0.8631 | 0.5411 |
### Framework versions
- Transformers 4.10.3
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["matthews_correlation"], "model-index": [{"name": "distilbert-base-uncased-finetuned-cola", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "cola"}, "metrics": [{"type": "matthews_correlation", "value": 0.5410897632107913, "name": "Matthews Correlation"}]}]}]} | ZZDDBBCC/distilbert-base-uncased-finetuned-cola | null | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | ZacharyAllwein/DialoGPT-small-rick | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | Zafer/test | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
automatic-speech-recognition | transformers |
# Wav2Vec2-Large-XLSR-53-Arabic-Egyptian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Egyptian Arabic using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "???", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("Zaid/wav2vec2-large-xlsr-53-arabic-egyptian")
model = Wav2Vec2ForCTC.from_pretrained("Zaid/wav2vec2-large-xlsr-53-arabic-egyptian")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Egyptian Arabic test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re

test_dataset = load_dataset("common_voice", "???", split="test")
wer = load_metric("wer")

processor = Wav2Vec2Processor.from_pretrained("Zaid/wav2vec2-large-xlsr-53-arabic-egyptian")
model = Wav2Vec2ForCTC.from_pretrained("Zaid/wav2vec2-large-xlsr-53-arabic-egyptian")
model.to("cuda")

chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)

# Run batched inference and collect the predicted transcriptions
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)

print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: ??? %
## Training
The Common Voice `train` and `validation` splits were used for training.
The script used for training can be found ??? | {"language": "???", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "model-index": [{"name": "XLSR Wav2Vec2 Arabic Egyptian by Zaid", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice ???", "type": "common_voice", "args": "???"}, "metrics": [{"type": "wer", "value": "???", "name": "Test WER"}]}]}]} | arbml/wav2vec2-large-xlsr-53-arabic-egyptian | null | [
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | transformers | {} | arbml/wav2vec2-large-xlsr-dialect-classification | null | [
"transformers",
"pytorch",
"jax",
"wav2vec2",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | Zakky/Real_Esrgan | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-generation | transformers |
# DialoGPT Trained on the Speech of a Game Character
This is an instance of [microsoft/DialoGPT-small](https://huggingface.co/microsoft/DialoGPT-small) trained on a game character, Neku Sakuraba from [The World Ends With You](https://en.wikipedia.org/wiki/The_World_Ends_with_You). The data comes from [a Kaggle game script dataset](https://www.kaggle.com/ruolinzheng/twewy-game-script).
Chat with the model:
```python
import torch
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("Zane/Ricky")
model = AutoModelWithLMHead.from_pretrained("Zane/Ricky")

# Let's chat for 4 lines
for step in range(4):
    # encode the new user input, add the eos_token and return a tensor in Pytorch
    new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
    # print(new_user_input_ids)

    # append the new user input tokens to the chat history
    bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids

    # generate a response while limiting the total chat history to 200 tokens
    chat_history_ids = model.generate(
        bot_input_ids, max_length=200,
        pad_token_id=tokenizer.eos_token_id,
        no_repeat_ngram_size=3,
        do_sample=True,
        top_k=100,
        top_p=0.7,
        temperature=0.8
    )

    # pretty print last output tokens from bot
    print("NekuBot: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
``` | {"license": "mit", "tags": ["conversational"], "thumbnail": "https://huggingface.co/front/thumbnails/dialogpt.png"} | Zane/Ricky | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-generation | transformers |
# DialoGPT Trained on the Speech of a Game Character
This is an instance of [microsoft/DialoGPT-small](https://huggingface.co/microsoft/DialoGPT-small) trained on a game character, Neku Sakuraba from [The World Ends With You](https://en.wikipedia.org/wiki/The_World_Ends_with_You). The data comes from [a Kaggle game script dataset](https://www.kaggle.com/ruolinzheng/twewy-game-script).
Chat with the model:
```python
import torch
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("r3dhummingbird/DialoGPT-small-neku")
model = AutoModelWithLMHead.from_pretrained("r3dhummingbird/DialoGPT-small-neku")

# Let's chat for 4 lines
for step in range(4):
    # encode the new user input, add the eos_token and return a tensor in Pytorch
    new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
    # print(new_user_input_ids)

    # append the new user input tokens to the chat history
    bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids

    # generate a response while limiting the total chat history to 200 tokens
    chat_history_ids = model.generate(
        bot_input_ids, max_length=200,
        pad_token_id=tokenizer.eos_token_id,
        no_repeat_ngram_size=3,
        do_sample=True,
        top_k=100,
        top_p=0.7,
        temperature=0.8
    )

    # pretty print last output tokens from bot
    print("NekuBot: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
``` | {"license": "mit", "tags": ["conversational"], "thumbnail": "https://huggingface.co/front/thumbnails/dialogpt.png"} | Zane/Ricky3 | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
fill-mask | transformers | More information: [github](https://github.com/TanHM-1211/viRoberta-l6-h384-cased)
```python
from underthesea import word_tokenize
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_name = 'Zayt/viRoberta-l6-h384-word-cased'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

# word_tokenize segments Vietnamese compound words; the sentence means
# "Hello, I am no longer a student at Bach Khoa University."
text = word_tokenize("Xin chào, tôi không còn là sinh viên đại học Bách Khoa.", format='text')
output = model(**tokenizer(text, return_tensors='pt'))
output
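# --- Added illustration (not from the original card): a minimal fill-mask sketch.
# It assumes the checkpoint exposes a standard RoBERTa MLM head with the usual
# <mask> token; the word-segmented example sentence below is hypothetical.
from transformers import pipeline

fill_mask = pipeline('fill-mask', model=model_name)
print(fill_mask("Tôi là sinh_viên đại_học <mask> ."))  # "I am a student at <mask> university."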
``` | {} | Zayt/viRoberta-l6-h384-word-cased | null | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |