pipeline_tag (stringclasses, 48 values) | library_name (stringclasses, 205 values) | text (stringlengths 0 to 18.3M) | metadata (stringlengths 2 to 1.07B) | id (stringlengths 5 to 122) | last_modified (null) | tags (listlengths 1 to 1.84k) | sha (null) | created_at (stringlengths 25 to 25)
---|---|---|---|---|---|---|---|---
null | null | {} | aman2304/distilbert-base-uncased-finetuned-mrpc | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | aman2304/distilbert-base-uncased-finetuned-ner | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | aman2304/distilbert-base-uncased-finetuned-qnli | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | aman2304/distilbert-base-uncased-finetuned-rte | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | aman2304/distilbert-base-uncased-finetuned-squad | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | aman2304/distilbert-base-uncased-finetuned-sst2 | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | aman2304/distilgpt2-finetuned-wikitext2 | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
token-classification | spacy | {"language": ["hi"], "tags": ["spacy", "token-classification"]} | amank22/hi_ud_hi_ewt | null | [
"spacy",
"token-classification",
"hi",
"model-index",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
fill-mask | transformers |
# RoBERTa base model for Hindi language
Pretrained model on Hindi language using a masked language modeling (MLM) objective. [A more interactive & comparison demo is available here](https://huggingface.co/spaces/flax-community/roberta-hindi).
> This is part of the
[Flax/Jax Community Week](https://discuss.huggingface.co/t/pretrain-roberta-from-scratch-in-hindi/7091), organized by [Hugging Face](https://huggingface.co/), with TPU usage sponsored by Google.
## Model description
RoBERTa Hindi is a transformers model pretrained on a large corpus of Hindi data (a combination of the **mc4, oscar and indic-nlp** datasets).
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='flax-community/roberta-hindi')
>>> unmasker("हम आपके सुखद <mask> की कामना करते हैं")
[{'score': 0.3310680091381073,
'sequence': 'हम आपके सुखद सफर की कामना करते हैं',
'token': 1349,
'token_str': ' सफर'},
{'score': 0.15317578613758087,
'sequence': 'हम आपके सुखद पल की कामना करते हैं',
'token': 848,
'token_str': ' पल'},
{'score': 0.07826550304889679,
'sequence': 'हम आपके सुखद समय की कामना करते हैं',
'token': 453,
'token_str': ' समय'},
{'score': 0.06304813921451569,
'sequence': 'हम आपके सुखद पहल की कामना करते हैं',
'token': 404,
'token_str': ' पहल'},
{'score': 0.058322224766016006,
'sequence': 'हम आपके सुखद अवसर की कामना करते हैं',
'token': 857,
'token_str': ' अवसर'}]
```
## Training data
The RoBERTa Hindi model was pretrained on the union of the following datasets:
- [OSCAR](https://huggingface.co/datasets/oscar) is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture.
- [mC4](https://huggingface.co/datasets/mc4) is a multilingual colossal, cleaned version of Common Crawl's web crawl corpus.
- [IndicGLUE](https://indicnlp.ai4bharat.org/indic-glue/) is a natural language understanding benchmark.
- [Samanantar](https://indicnlp.ai4bharat.org/samanantar/) is a collection of parallel corpora for Indic languages.
- [Hindi Text Short and Large Summarization Corpus](https://www.kaggle.com/disisbig/hindi-text-short-and-large-summarization-corpus) is a collection of ~180k articles with their headlines and summaries collected from Hindi news websites.
- [Hindi Text Short Summarization Corpus](https://www.kaggle.com/disisbig/hindi-text-short-summarization-corpus) is a collection of ~330k articles with their headlines collected from Hindi news websites.
- [Old Newspapers Hindi](https://www.kaggle.com/crazydiv/oldnewspapershindi) is a cleaned subset of HC Corpora newspapers.
## Training procedure
### Preprocessing
The texts are tokenized using a byte-level version of Byte-Pair Encoding (BPE) and a vocabulary size of 50265. The inputs of
the model take pieces of 512 contiguous tokens that may span over documents. The beginning of a new document is marked
with `<s>` and the end of one by `</s>`.
- We had to clean up the **mC4** and **oscar** datasets by removing all non-Hindi (non-Devanagari) characters from the datasets (a sketch of this kind of filtering follows below).
- We tried to filter out the WikiNER evaluation set of the [IndicGlue](https://indicnlp.ai4bharat.org/indic-glue/) benchmark by [manually labelling](https://github.com/amankhandelia/roberta_hindi/blob/master/wikiner_incorrect_eval_set.csv) examples where the actual labels were not correct and modifying the [downstream evaluation dataset](https://github.com/amankhandelia/roberta_hindi/blob/master/utils.py) accordingly.
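The authors' exact filtering rules are not published here, so the character set below is an assumption; this is only a minimal sketch of the kind of Devanagari-only cleanup described above:
```python
import re

# Hypothetical sketch: keep only characters in the Devanagari Unicode block
# (U+0900-U+097F) plus whitespace; everything else is stripped.
NON_DEVANAGARI = re.compile(r"[^\u0900-\u097F\s]")

def clean_text(text: str) -> str:
    return NON_DEVANAGARI.sub("", text)

print(clean_text("हम आपके सुखद सफर की कामना करते हैं abc 123"))
```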
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `<mask>`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
Contrary to BERT, the masking is done dynamically during pretraining (e.g., it changes at each epoch and is not fixed).
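The same 15% / 80-10-10 scheme is what `DataCollatorForLanguageModeling` in `transformers` implements; because the collator masks at batch-creation time, each epoch sees a different mask. A minimal sketch of the scheme (an illustration, not the exact training script that was used):
```python
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

tokenizer = AutoTokenizer.from_pretrained("flax-community/roberta-hindi")
collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer, mlm=True, mlm_probability=0.15
)

# Each call re-samples the mask, which is what makes the masking dynamic.
batch = collator([tokenizer("हम आपके सुखद सफर की कामना करते हैं")])
print(batch["input_ids"])  # some positions replaced by <mask> or random tokens
print(batch["labels"])     # -100 everywhere except the masked positions
```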
### Pretraining
The model was trained on a Google Cloud Engine TPU v3-8 machine (with 335 GB of RAM, 1000 GB of hard drive, 96 CPU cores). A randomized shuffle of the combined dataset of **mC4**, **oscar** and the other datasets listed above was used to train the model. Training logs are available on [wandb](https://wandb.ai/wandb/hf-flax-roberta-hindi).
## Evaluation Results
RoBERTa Hindi is evaluated on various downstream tasks. The results are summarized below.
| Task | Task Type | IndicBERT | HindiBERTa | Indic Transformers Hindi BERT | RoBERTa Hindi Guj San | RoBERTa Hindi |
|-------------------------|----------------------|-----------|------------|-------------------------------|-----------------------|---------------|
| BBC News Classification | Genre Classification | **76.44** | 66.86 | **77.6** | 64.9 | 73.67 |
| WikiNER | Token Classification | - | 90.68 | **95.09** | 89.61 | **92.76** |
| IITP Product Reviews | Sentiment Analysis | **78.01** | 73.23 | **78.39** | 66.16 | 75.53 |
| IITP Movie Reviews | Sentiment Analysis | 60.97 | 52.26 | **70.65** | 49.35 | **61.29** |
## Team Members
- Aman K ([amankhandelia](https://huggingface.co/amankhandelia))
- Haswanth Aekula ([hassiahk](https://huggingface.co/hassiahk))
- Kartik Godawat ([dk-crazydiv](https://huggingface.co/dk-crazydiv))
- Prateek Agrawal ([prateekagrawal](https://huggingface.co/prateekagrawal))
- Rahul Dev ([mlkorra](https://huggingface.co/mlkorra))
## Credits
Huge thanks to Hugging Face 🤗 & Google Jax/Flax team for such a wonderful community week, especially for providing such massive computing resources. Big thanks to [Suraj Patil](https://huggingface.co/valhalla) & [Patrick von Platen](https://huggingface.co/patrickvonplaten) for mentoring during the whole week.
<img src=https://pbs.twimg.com/media/E443fPjX0AY1BsR.jpg:medium> | {"widget": [{"text": "\u092e\u0941\u091d\u0947 \u0909\u0928\u0938\u0947 \u092c\u093e\u0924 \u0915\u0930\u0928\u093e <mask> \u0905\u091a\u094d\u091b\u093e \u0932\u0917\u093e"}, {"text": "\u0939\u092e \u0906\u092a\u0915\u0947 \u0938\u0941\u0916\u0926 <mask> \u0915\u0940 \u0915\u093e\u092e\u0928\u093e \u0915\u0930\u0924\u0947 \u0939\u0948\u0902"}, {"text": "\u0938\u092d\u0940 \u0905\u091a\u094d\u091b\u0940 \u091a\u0940\u091c\u094b\u0902 \u0915\u093e \u090f\u0915 <mask> \u0939\u094b\u0924\u093e \u0939\u0948"}]} | amankhandelia/panini | null | [
"transformers",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | amankhandelia/roberta-pretraining-hindi | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | amansd12/xlm-roberta-base-finetuned-marc-en | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-classification | transformers |
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 20114061
- CO2 Emissions (in grams): 3.651199395353127
## Validation Metrics
- Loss: 0.5046541690826416
- Accuracy: 0.8036219581211093
- Macro F1: 0.807095210403678
- Micro F1: 0.8036219581211093
- Weighted F1: 0.8039634739225368
- Macro Precision: 0.8076842795233988
- Micro Precision: 0.8036219581211093
- Weighted Precision: 0.8052135235094771
- Macro Recall: 0.8075241470527056
- Micro Recall: 0.8036219581211093
- Weighted Recall: 0.8036219581211093
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/amansolanki/autonlp-Tweet-Sentiment-Extraction-20114061
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained("amansolanki/autonlp-Tweet-Sentiment-Extraction-20114061", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("amansolanki/autonlp-Tweet-Sentiment-Extraction-20114061", use_auth_token=True)

inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)

# The predicted class is the argmax over the logits; id2label maps it to a name.
predicted_id = outputs.logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_id])
``` | {"language": "en", "tags": "autonlp", "datasets": ["amansolanki/autonlp-data-Tweet-Sentiment-Extraction"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}], "co2_eq_emissions": 3.651199395353127} | amansolanki/autonlp-Tweet-Sentiment-Extraction-20114061 | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autonlp",
"en",
"dataset:amansolanki/autonlp-data-Tweet-Sentiment-Extraction",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | amarendrafero/bert-finetuned-ner-accelerate | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | amartini01/distilbert-base-uncased-finetuned-cola | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-classification | transformers | {} | amauboussin/twitter-toxicity-v0 | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
fill-mask | transformers | ⚠️ **Disclaimer** ⚠️
This model is community-contributed, and not supported by Amazon, Inc.
## BORT
[Amazon's BORT](https://www.amazon.science/blog/a-version-of-the-bert-language-model-thats-20-times-as-fast)
BORT is a highly compressed version of [bert-large](https://huggingface.co/bert-large-uncased) that is up to 10 times faster at inference.
The model is an optimal sub-architecture of *bert-large* that was found using neural architecture search.
[Paper](https://arxiv.org/abs/2010.10499)
**Abstract**
We extract an optimal subset of architectural parameters for the BERT architecture from Devlin et al. (2018) by applying recent breakthroughs in algorithms for neural architecture search. This optimal subset, which we refer to as "Bort", is demonstrably smaller, having an effective (that is, not counting the embedding layer) size of 5.5% the original BERT-large architecture, and 16% of the net size. Bort is also able to be pretrained in 288 GPU hours, which is 1.2% of the time required to pretrain the highest-performing BERT parametric architectural variant, RoBERTa-large (Liu et al., 2019), and about 33% of that of the world-record, in GPU hours, required to train BERT-large on the same hardware. It is also 7.9x faster on a CPU, as well as being better performing than other compressed variants of the architecture, and some of the non-compressed variants: it obtains performance improvements of between 0.3% and 31%, absolute, with respect to BERT-large, on multiple public natural language understanding (NLU) benchmarks.
The original model can be found under:
https://github.com/alexa/bort
**IMPORTANT**
BORT requires a specialized fine-tuning algorithm, called [Agora](https://adewynter.github.io/notes/bort_algorithms_and_applications.html), which is not open-sourced yet.
Standard fine-tuning has not been shown to work well in initial experiments, so stay tuned for updates!
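As a minimal sketch, the checkpoint itself can still be loaded with the standard fill-mask pipeline (keeping in mind the caveat above about downstream quality without Agora):
```python
from transformers import pipeline

# Load the community-contributed BORT checkpoint for masked-token prediction.
unmasker = pipeline("fill-mask", model="amazon/bort")

# Use the tokenizer's own mask token rather than hard-coding it.
masked = f"Paris is the {unmasker.tokenizer.mask_token} of France."
print(unmasker(masked))
```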
| {} | amazon/bort | null | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"arxiv:2010.10499",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# encoder_decoder_es
This model is a fine-tuned version of [](https://huggingface.co/) on the cc_news_es_titles dataset.
It achieves the following results on the evaluation set:
- Loss: 7.8773
- Rouge2 Precision: 0.002
- Rouge2 Recall: 0.0116
- Rouge2 Fmeasure: 0.0034
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.003
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 4
- mixed_precision_training: Native AMP
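For illustration, these hyperparameters map roughly onto `transformers` `TrainingArguments` as follows (a sketch; `output_dir` is a placeholder, not taken from the original run):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="encoder_decoder_es",  # placeholder
    learning_rate=3e-3,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=4,
    fp16=True,  # "Native AMP" mixed precision
)
```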
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure |
|:-------------:|:-----:|:-----:|:---------------:|:----------------:|:-------------:|:---------------:|
| 7.8807 | 1.0 | 5784 | 7.8976 | 0.0023 | 0.012 | 0.0038 |
| 7.8771 | 2.0 | 11568 | 7.8873 | 0.0018 | 0.0099 | 0.003 |
| 7.8588 | 3.0 | 17352 | 7.8819 | 0.0015 | 0.0085 | 0.0025 |
| 7.8507 | 4.0 | 23136 | 7.8773 | 0.002 | 0.0116 | 0.0034 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.1
- Datasets 1.15.1
- Tokenizers 0.10.3
| {"tags": ["generated_from_trainer"], "datasets": ["cc_news_es_titles"], "model-index": [{"name": "encoder_decoder_es", "results": []}]} | amazon-sagemaker-community/encoder_decoder_es | null | [
"transformers",
"pytorch",
"encoder-decoder",
"text2text-generation",
"generated_from_trainer",
"dataset:cc_news_es_titles",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-en-ru-emoji-v2
This model is a fine-tuned version of [DeepPavlov/xlm-roberta-large-en-ru](https://huggingface.co/DeepPavlov/xlm-roberta-large-en-ru) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3356
- Accuracy: 0.3102
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 96
- eval_batch_size: 96
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.4 | 200 | 3.0592 | 0.1204 |
| No log | 0.81 | 400 | 2.5356 | 0.2480 |
| 2.6294 | 1.21 | 600 | 2.4570 | 0.2569 |
| 2.6294 | 1.62 | 800 | 2.3332 | 0.2832 |
| 1.9286 | 2.02 | 1000 | 2.3354 | 0.2803 |
| 1.9286 | 2.42 | 1200 | 2.3610 | 0.2881 |
| 1.9286 | 2.83 | 1400 | 2.3004 | 0.2973 |
| 1.7312 | 3.23 | 1600 | 2.3619 | 0.3026 |
| 1.7312 | 3.64 | 1800 | 2.3596 | 0.3032 |
| 1.5816 | 4.04 | 2000 | 2.2972 | 0.3072 |
| 1.5816 | 4.44 | 2200 | 2.3077 | 0.3073 |
| 1.5816 | 4.85 | 2400 | 2.3356 | 0.3102 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.1
- Datasets 1.15.1
- Tokenizers 0.10.3
| {"tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "DeepPavlov/xlm-roberta-large-en-ru", "model-index": [{"name": "xlm-roberta-en-ru-emoji-v2", "results": []}]} | amazon-sagemaker-community/xlm-roberta-en-ru-emoji-v2 | null | [
"transformers",
"pytorch",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:DeepPavlov/xlm-roberta-large-en-ru",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers |
# Passage Reranking Multilingual BERT 🔃 🌍
## Model description
**Input:** Supports over 100 languages. See the [list of supported languages](https://github.com/google-research/bert/blob/master/multilingual.md#list-of-languages) for all available.
**Purpose:** This module takes a search query [1] and a passage [2] and calculates whether the passage matches the query.
It can be used as an improvement for Elasticsearch results and boosts the relevancy by up to 100%.
**Architecture:** On top of BERT there is a densely connected NN which takes the 768-dimensional [CLS] token as input and provides the output ([Arxiv](https://arxiv.org/abs/1901.04085)).
**Output:** A single value between -10 and 10. Better-matching query-passage pairs tend to have a higher score.
## Intended uses & limitations
Both the query [1] and the passage [2] have to fit within 512 tokens.
As you normally want to rerank only the first few dozen search results, keep in mind the inference time of approximately 300 ms/query.
#### How to use
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("amberoad/bert-multilingual-passage-reranking-msmarco")
model = AutoModelForSequenceClassification.from_pretrained("amberoad/bert-multilingual-passage-reranking-msmarco")
```
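To actually score a query-passage pair with the objects loaded above, feed both texts as a sequence pair and read the classification logits; a sketch (which logit index corresponds to "relevant" is an assumption):
```python
import torch

query = "What is a corporation?"
passage = "A company is incorporated in a specific nation, often within the bounds of a smaller subset of that nation."

# Encode query and passage as one sequence pair (must fit within 512 tokens).
inputs = tokenizer(query, passage, truncation=True, max_length=512, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# For a two-class head, the "relevant" class probability is a usable score.
score = torch.softmax(logits, dim=-1)[0, 1].item()
print(score)
```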
This model can be used as a drop-in replacement in the [Nboost Library](https://github.com/koursaros-ai/nboost).
Through this you can directly improve your Elasticsearch results without any coding.
## Training data
This model is trained using the [**Microsoft MS Marco Dataset**](https://microsoft.github.io/msmarco/ "Microsoft MS Marco"). This training dataset contains approximately 400M tuples of a query together with relevant and non-relevant passages. All datasets used for training and evaluation are listed in this [table](https://github.com/microsoft/MSMARCO-Passage-Ranking#data-information-and-formating). The dataset used for training is called *Train Triples Large*, while the evaluation was made on *Top 1000 Dev*. There are 6,900 queries in total in the development dataset, where each query is mapped to the top 1,000 passages retrieved using BM25 from the MS MARCO corpus.
## Training procedure
The training is performed the same way as stated in this [README](https://github.com/nyu-dl/dl4marco-bert "NYU Github"). See their excellent paper on [Arxiv](https://arxiv.org/abs/1901.04085).
We changed the BERT model from an English-only one to the default multilingual uncased BERT model from [Google](https://huggingface.co/bert-base-multilingual-uncased).
Training was done for 400,000 steps, which took 12 hours on a TPU v3-8.
## Eval results
We see nearly the same performance as the English-only model on the English [Bing Queries Dataset](http://www.msmarco.org/). Although the training data is English-only, internal tests on private data showed a far higher accuracy in German than all other available models.
Fine-tuned Models | Dependency | Eval Set | Search Boost<a href='#benchmarks'> | Speed on GPU
----------------------------------------------------------------------------------- | ---------------------------------------------------------------------------- | ------------------------------------------------------------------ | ----------------------------------------------------- | ----------------------------------
**`amberoad/Multilingual-uncased-MSMARCO`** (This Model) | <img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-blue"/> | <a href ='http://www.msmarco.org/'>bing queries</a> | **+61%** <sub><sup>(0.29 vs 0.18)</sup></sub> | ~300 ms/query <a href='#footnotes'>
`nboost/pt-tinybert-msmarco` | <img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-red"/> | <a href ='http://www.msmarco.org/'>bing queries</a> | **+45%** <sub><sup>(0.26 vs 0.18)</sup></sub> | ~50ms/query <a href='#footnotes'>
`nboost/pt-bert-base-uncased-msmarco` | <img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-red"/> | <a href ='http://www.msmarco.org/'>bing queries</a> | **+62%** <sub><sup>(0.29 vs 0.18)</sup></sub> | ~300 ms/query<a href='#footnotes'>
`nboost/pt-bert-large-msmarco` | <img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-red"/> | <a href ='http://www.msmarco.org/'>bing queries</a> | **+77%** <sub><sup>(0.32 vs 0.18)</sup></sub> | -
`nboost/pt-biobert-base-msmarco` | <img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-red"/> | <a href ='https://github.com/naver/biobert-pretrained'>biomed</a> | **+66%** <sub><sup>(0.17 vs 0.10)</sup></sub> | ~300 ms/query<a href='#footnotes'>
This table is taken from [nboost](https://github.com/koursaros-ai/nboost) and extended by the first line.
## Contact Infos

Amberoad is a company focusing on Search and Business Intelligence.
We provide you:
* Advanced Internal Company Search Engines through NLP
* External Search Engines: Find Competitors, Customers, Suppliers
**Get in Contact now to benefit from our Expertise:**
The training and evaluation were performed by [**Philipp Reissel**](https://reissel.eu/) and [**Igli Manaj**](https://github.com/iglimanaj)
[ Linkedin](https://de.linkedin.com/company/amberoad) | <svg xmlns="http://www.w3.org/2000/svg" x="0px" y="0px"
width="32" height="32"
viewBox="0 0 172 172"
style=" fill:#000000;"><g fill="none" fill-rule="nonzero" stroke="none" stroke-width="1" stroke-linecap="butt" stroke-linejoin="miter" stroke-miterlimit="10" stroke-dasharray="" stroke-dashoffset="0" font-family="none" font-weight="none" font-size="none" text-anchor="none" style="mix-blend-mode: normal"><path d="M0,172v-172h172v172z" fill="none"></path><g fill="#e67e22"><path d="M37.625,21.5v86h96.75v-86h-5.375zM48.375,32.25h10.75v10.75h-10.75zM69.875,32.25h10.75v10.75h-10.75zM91.375,32.25h32.25v10.75h-32.25zM48.375,53.75h75.25v43h-75.25zM80.625,112.875v17.61572c-1.61558,0.93921 -2.94506,2.2687 -3.88428,3.88428h-49.86572v10.75h49.86572c1.8612,3.20153 5.28744,5.375 9.25928,5.375c3.97183,0 7.39808,-2.17347 9.25928,-5.375h49.86572v-10.75h-49.86572c-0.93921,-1.61558 -2.2687,-2.94506 -3.88428,-3.88428v-17.61572z"></path></g></g></svg>[Homepage](https://de.linkedin.com/company/amberoad) | [Email]([email protected])
| {"language": ["multilingual", "af", "sq", "ar", "an", "hy", "ast", "az", "ba", "eu", "bar", "be", "bn", "inc", "bs", "br", "bg", "my", "ca", "ceb", "ce", "zh", "cv", "hr", "cs", "da", "nl", "en", "et", "fi", "fr", "gl", "ka", "de", "el", "gu", "ht", "he", "hi", "hu", "is", "io", "id", "ga", "it", "ja", "jv", "kn", "kk", "ky", "ko", "la", "lv", "lt", "roa", "nds", "lm", "mk", "mg", "ms", "ml", "mr", "min", "ne", "new", "nb", "nn", "oc", "fa", "pms", "pl", "pt", "pa", "ro", "ru", "sco", "sr", "hr", "scn", "sk", "sl", "aze", "es", "su", "sw", "sv", "tl", "tg", "ta", "tt", "te", "tr", "uk", "ud", "uz", "vi", "vo", "war", "cy", "fry", "pnb", "yo"], "license": "apache-2.0", "tags": ["msmarco", "multilingual", "passage reranking"], "datasets": ["msmarco"], "metrics": ["MRR"], "thumbnail": "https://amberoad.de/images/logo_text.png", "widget": [{"query": "What is a corporation?", "passage": "A company is incorporated in a specific nation, often within the bounds of a smaller subset of that nation, such as a state or province. The corporation is then governed by the laws of incorporation in that state. A corporation may issue stock, either private or public, or may be classified as a non-stock corporation. If stock is issued, the corporation will usually be governed by its shareholders, either directly or indirectly."}]} | amberoad/bert-multilingual-passage-reranking-msmarco | null | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"msmarco",
"multilingual",
"passage reranking",
"af",
"sq",
"ar",
"an",
"hy",
"ast",
"az",
"ba",
"eu",
"bar",
"be",
"bn",
"inc",
"bs",
"br",
"bg",
"my",
"ca",
"ceb",
"ce",
"zh",
"cv",
"hr",
"cs",
"da",
"nl",
"en",
"et",
"fi",
"fr",
"gl",
"ka",
"de",
"el",
"gu",
"ht",
"he",
"hi",
"hu",
"is",
"io",
"id",
"ga",
"it",
"ja",
"jv",
"kn",
"kk",
"ky",
"ko",
"la",
"lv",
"lt",
"roa",
"nds",
"lm",
"mk",
"mg",
"ms",
"ml",
"mr",
"min",
"ne",
"new",
"nb",
"nn",
"oc",
"fa",
"pms",
"pl",
"pt",
"pa",
"ro",
"ru",
"sco",
"sr",
"scn",
"sk",
"sl",
"aze",
"es",
"su",
"sw",
"sv",
"tl",
"tg",
"ta",
"tt",
"te",
"tr",
"uk",
"ud",
"uz",
"vi",
"vo",
"war",
"cy",
"fry",
"pnb",
"yo",
"dataset:msmarco",
"arxiv:1901.04085",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | amejri/wav2vec2-common_voice-fr-demo | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | amerwafiy/finetuning-sentiment-model-3000-samples | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | ameyamore17/wav2vec2-base-timit-demo-colab | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-generation | transformers | {} | amild01/GPT2-german-chefkoch | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
fill-mask | transformers |
# bert-base-5lang-cased
This is a smaller version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handles only 5 languages (en, fr, es, de and zh) instead of 104.
The model is therefore 30% smaller than the original one (124M parameters instead of 178M) but gives exactly the same representations for the above cited languages.
Starting from `bert-base-5lang-cased` will facilitate the deployment of your model on public cloud platforms while keeping similar results.
For instance, Google Cloud Platform requires that the model size on disk be lower than 500 MB for serverless deployments (Cloud Functions / Cloud ML), which is not the case for the original `bert-base-multilingual-cased`.
For more information about the models' size, memory footprint and loading time, please refer to the table below:
| Model | Num parameters | Size | Memory | Loading time |
| ---------------------------- | -------------- | -------- | -------- | ------------ |
| bert-base-multilingual-cased | 178 million | 714 MB | 1400 MB | 4.2 sec |
| bert-base-5lang-cased | 124 million | 495 MB | 950 MB | 3.6 sec |
These measurements have been computed on a [Google Cloud n1-standard-1 machine (1 vCPU, 3.75 GB)](https://cloud.google.com/compute/docs/machine-types\#n1_machine_type).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("amine/bert-base-5lang-cased")
model = AutoModel.from_pretrained("amine/bert-base-5lang-cased")
```
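As a quick sanity check of the claim that both models give identical representations (a sketch; it prints the first few [CLS] dimensions of each model for visual comparison):
```python
import torch
from transformers import AutoTokenizer, AutoModel

sent = "Smaller multilingual models are easier to deploy."

for name in ["bert-base-multilingual-cased", "amine/bert-base-5lang-cased"]:
    tok = AutoTokenizer.from_pretrained(name)
    mdl = AutoModel.from_pretrained(name)
    with torch.no_grad():
        out = mdl(**tok(sent, return_tensors="pt")).last_hidden_state
    # Token ids differ between the two vocabularies, but the hidden
    # states for the covered languages should match.
    print(name, out[0, 0, :5])
```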
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request. | {"language": ["en", "fr", "es", "de", "zh", "multilingual"], "license": "apache-2.0", "tags": ["pytorch", "bert", "multilingual", "en", "fr", "es", "de", "zh"], "datasets": "wikipedia", "inference": false} | amine/bert-base-5lang-cased | null | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"multilingual",
"en",
"fr",
"es",
"de",
"zh",
"dataset:wikipedia",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | aminedjebbie/SentiBERT | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-classification | transformers | {} | aminezeggaf/EmotionAnalysisAmine | null | [
"transformers",
"tf",
"camembert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | aminezeggaf/emotionsAnalysisAmine | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | amir62/distilbert-base-uncased-finetuned-cola | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | amirharati/wav2vec2-base-timit-demo-colab | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pft-clf-finetuned
This model is a fine-tuned version of [HooshvareLab/bert-fa-zwnj-base](https://huggingface.co/HooshvareLab/bert-fa-zwnj-base) on the "FarsNews1398" dataset. This dataset contains a collection of news gathered from the farsnews website, a news agency in Iran. You can download the dataset from [here](https://www.kaggle.com/amirhossein76/farsnews1398). I used the category, abstract, and paragraphs of each news item for text classification: the "abstract" and "paragraphs" were concatenated together, and the "category" was used as the classification target.
The notebook used for fine-tuning can be found [here](https://colab.research.google.com/drive/1jC2dfKRASxCY-b6bJSPkhEJfQkOA30O0?usp=sharing). I've reported the loss and Matthews correlation on the validation set.
It achieves the following results on the evaluation set:
- Loss: 0.0617
- Matthews Correlation: 0.9830
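A minimal usage sketch (the example sentence is the one used in the model's widget; labels follow the FarsNews1398 categories):
```python
from transformers import pipeline

# Classify a Persian news snippet into its FarsNews1398 category.
clf = pipeline("text-classification", model="amirhossein1376/pft-clf-finetuned")
print(clf("امروز دربی دو تیم پرسپولیس و استقلال در ورزشگاه آزادی تهران برگزار می‌شود."))
```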
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 6
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:-----:|:---------------:|:--------------------:|
| 0.0634 | 1.0 | 20276 | 0.0617 | 0.9830 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
| {"language": "fa", "license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["matthews_correlation"], "widget": [{"text": "\u0627\u0645\u0631\u0648\u0632 \u062f\u0631\u0628\u06cc \u062f\u0648 \u062a\u06cc\u0645 \u067e\u0631\u0633\u067e\u0648\u0644\u06cc\u0633 \u0648 \u0627\u0633\u062a\u0642\u0644\u0627\u0644 \u062f\u0631 \u0648\u0631\u0632\u0634\u06af\u0627\u0647 \u0622\u0632\u0627\u062f\u06cc \u062a\u0647\u0631\u0627\u0646 \u0628\u0631\u06af\u0632\u0627\u0631 \u0645\u06cc\u200c\u0634\u0648\u062f."}, {"text": "\u0648\u0632\u06cc\u0631 \u0627\u0645\u0648\u0631 \u062e\u0627\u0631\u062c\u0647 \u0627\u0631\u062f\u0646 \u062a\u0627\u06a9\u06cc\u062f \u06a9\u0631\u062f \u06a9\u0647 \u0647\u0645\u0647 \u06a9\u0634\u0648\u0631\u0647\u0627\u06cc \u0639\u0631\u0628\u06cc \u062e\u0648\u0627\u0647\u0627\u0646 \u0631\u0648\u0627\u0628\u0637 \u062e\u0648\u0628 \u0628\u0627 \u0627\u06cc\u0631\u0627\u0646 \u0647\u0633\u062a\u0646\u062f.\n\u0628\u0647 \u06af\u0632\u0627\u0631\u0634 \u0627\u06cc\u0633\u0646\u0627 \u0628\u0647 \u0646\u0642\u0644 \u0627\u0632 \u0634\u0628\u06a9\u0647 \u0641\u0631\u0627\u0646\u0633 \u06f2\u06f4\u060c \u0627\u06cc\u0645\u0646 \u0627\u0644\u0635\u0641\u062f\u06cc \u0645\u0639\u0627\u0648\u0646 \u0646\u062e\u0633\u062a\u200c\u0648\u0632\u06cc\u0631 \u0648 \u0648\u0632\u06cc\u0631 \u0627\u0645\u0648\u0631 \u062e\u0627\u0631\u062c\u0647 \u0627\u0631\u062f\u0646 \u067e\u0633 \u0627\u0632 \u06a9\u0646\u0641\u0631\u0627\u0646\u0633 \u0644\u06cc\u0628\u06cc \u062f\u0631 \u067e\u0627\u0631\u06cc\u0633 \u062f\u0631 \u06af\u0641\u062a\u200c\u0648\u06af\u0648\u06cc\u06cc \u0628\u0627 \u0641\u0631\u0627\u0646\u0633 \u06f2\u06f4 \u062a\u0627\u06a9\u06cc\u062f \u06a9\u0631\u062f: \u0645\u0648\u0636\u0639 \u0627\u0631\u062f\u0646 \u0631\u0648\u0634\u0646 \u0627\u0633\u062a\u060c \u0645\u0627 \u062e\u0648\u0627\u0633\u062a\u0627\u0631 \u0631\u0648\u0627\u0628\u0637 \u0645\u0646\u0637\u0642\u0647\u200c\u0627\u06cc \u0645\u0628\u062a\u0646\u06cc \u0628\u0631 \u062d\u0633\u0646 \u0647\u0645\u062c\u0648\u0627\u0631\u06cc \u0648 \u0639\u062f\u0645 \u0645\u062f\u0627\u062e\u0644\u0647 \u062f\u0631 \u0627\u0645\u0648\u0631 \u062f\u0627\u062e\u0644\u06cc \u0647\u0633\u062a\u06cc\u0645. 
\u0628\u0633\u06cc\u0627\u0631\u06cc \u0627\u0632 \u0645\u0633\u0627\u0626\u0644 \u0648 \u0645\u0634\u06a9\u0644\u0627\u062a \u0645\u0646\u0637\u0642\u0647 \u0646\u06cc\u0627\u0632 \u0628\u0647 \u0631\u0633\u06cc\u062f\u06af\u06cc \u0627\u0632 \u0637\u0631\u06cc\u0642 \u06af\u0641\u062a\u200c\u0648\u06af\u0648 \u062f\u0627\u0631\u062f.\n\n\u0627\u0644\u0635\u0641\u062f\u06cc \u0647\u0631\u06af\u0648\u0646\u0647 \u06af\u0641\u062a\u200c\u0648\u06af\u0648\u06cc \u0628\u0627 \u0648\u0627\u0633\u0637\u0647 \u0627\u0631\u062f\u0646 \u0628\u0627 \u0627\u06cc\u0631\u0627\u0646 \u0631\u0627 \u0631\u062f \u06a9\u0631\u062f\u0647 \u0648 \u06af\u0641\u062a: \u0645\u0627 \u0628\u0627 \u0646\u0645\u0627\u06cc\u0646\u062f\u06af\u0627\u0646 \u0647\u06cc\u0686\u200c\u06a9\u0633 \u0635\u062d\u0628\u062a \u0646\u0645\u06cc\u200c\u06a9\u0646\u06cc\u0645 \u0648 \u0632\u0645\u0627\u0646\u06cc \u06a9\u0647 \u0628\u0627 \u0627\u06cc\u0631\u0627\u0646 \u0635\u062d\u0628\u062a \u0645\u06cc\u200c\u06a9\u0646\u06cc\u0645 \u0645\u0633\u062a\u0642\u06cc\u0645\u0627\u064b \u0628\u0627 \u062f\u0648\u0644\u062a \u0627\u06cc\u0646 \u06a9\u0634\u0648\u0631 \u0628\u0648\u062f\u0647 \u0648 \u0627\u0632 \u0637\u0631\u06cc\u0642 \u062a\u0645\u0627\u0633 \u062a\u0644\u0641\u0646\u06cc \u0648\u0632\u06cc\u0631 \u0627\u0645\u0648\u0631 \u062e\u0627\u0631\u062c\u0647 \u062f\u0648 \u06a9\u0634\u0648\u0631.\n\u0648\u06cc \u062a\u0627\u06a9\u06cc\u062f \u06a9\u0631\u062f: \u0647\u0645\u0647 \u062f\u0631 \u0645\u0646\u0637\u0642\u0647 \u0639\u0631\u0628\u06cc \u062e\u0648\u0627\u0633\u062a\u0627\u0631 \u0631\u0648\u0627\u0628\u0637 \u062e\u0648\u0628 \u0628\u0627 \u0627\u06cc\u0631\u0627\u0646 \u0647\u0633\u062a\u0646\u062f\u060c \u0627\u0645\u0627 \u0628\u0631\u0627\u06cc \u062a\u062d\u0642\u0642 \u0627\u06cc\u0646 \u0627\u0645\u0631 \u0628\u0627\u06cc\u062f \u0631\u0648\u0627\u0628\u0637 \u0628\u0631 \u0627\u0633\u0627\u0633 \u0634\u0641\u0627\u0641\u06cc\u062a \u0648 \u0628\u0631 \u0627\u0633\u0627\u0633 \u0627\u0635\u0648\u0644 \u0627\u062d\u062a\u0631\u0627\u0645 \u0628\u0647 \u0647\u0645\u0633\u0627\u06cc\u06af\u06cc \u0648 \u0639\u062f\u0645 \u0645\u062f\u0627\u062e\u0644\u0647 \u062f\u0631 \u0627\u0645\u0648\u0631 \u062f\u0627\u062e\u0644\u06cc \u0628\u0627\u0634\u062f. "}], "model-index": [{"name": "pft-clf-finetuned", "results": []}]} | amirhossein1376/pft-clf-finetuned | null | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"fa",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | transformers | {} | amitesh863/fin_embeds | null | [
"transformers",
"pytorch",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | transformers | {} | amitgajbhiye/mlqe_en_zh | null | [
"transformers",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | amitkumar/NLP_models | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
fill-mask | transformers |
# nepbert
## Model description
Roberta trained from scratch on the Nepali CC-100 dataset with 12 million sentences.
## Intended uses & limitations
#### How to use
```python
from transformers import pipeline
pipe = pipeline(
"fill-mask",
model="amitness/nepbert",
tokenizer="amitness/nepbert"
)
print(pipe(u"तिमीलाई कस्तो <mask>?"))
```
## Training data
The data was taken from the Nepali language subset of the CC-100 dataset.
## Training procedure
The model was trained on Google Colab using `1x Tesla V100`. | {"language": ["ne"], "license": "mit", "tags": ["roberta", "nepali-laguage-model"], "datasets": ["cc100"], "widget": [{"text": "\u0924\u093f\u092e\u0940\u0932\u093e\u0908 \u0915\u0938\u094d\u0924\u094b <mask>?"}]} | amitness/roberta-base-ne | null | [
"transformers",
"pytorch",
"jax",
"safetensors",
"roberta",
"fill-mask",
"nepali-laguage-model",
"ne",
"dataset:cc100",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | amitthombre/distilgpt2 | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | ammoreira/teste | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | amodernmarketer/distilbert-base-uncased-finetuned-cola | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
automatic-speech-recognition | transformers |
# Wav2Vec2-Large-XLSR-53-Kannada
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Kannada using the [OpenSLR SLR79](http://openslr.org/79/) dataset. When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows, assuming you have a dataset with Kannada `sentence` and `path` fields:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
# test_dataset = #TODO: WRITE YOUR CODE TO LOAD THE TEST DATASET. For a sample, see the Colab link in Training Section.
processor = Wav2Vec2Processor.from_pretrained("amoghsgopadi/wav2vec2-large-xlsr-kn")
model = Wav2Vec2ForCTC.from_pretrained("amoghsgopadi/wav2vec2-large-xlsr-kn")
resampler = torchaudio.transforms.Resample(48_000, 16_000)  # The original data was at a 48,000 Hz sampling rate. You can change this according to your input.
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on 10% of the Kannada data on OpenSLR.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
# test_dataset = #TODO: WRITE YOUR CODE TO LOAD THE TEST DATASET. For sample see the Colab link in Training Section.
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("amoghsgopadi/wav2vec2-large-xlsr-kn")
model = Wav2Vec2ForCTC.from_pretrained("amoghsgopadi/wav2vec2-large-xlsr-kn")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\–\…]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"),
attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 27.08 %
## Training
90% of the OpenSLR Kannada dataset was used for training.
The colab notebook used for training can be found [here](https://colab.research.google.com/github/amoghgopadi/wav2vec2-xlsr-kannada/blob/main/Fine_Tune_XLSR_Wav2Vec2_on_Kannada_ASR.ipynb). | {"language": "kn", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["openslr"], "metrics": ["wer"], "model-index": [{"name": "XLSR Wav2Vec2 Large 53 Kannada by Amogh Gopadi", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "OpenSLR kn", "type": "openslr"}, "metrics": [{"type": "wer", "value": 27.08, "name": "Test WER"}]}]}]} | amoghsgopadi/wav2vec2-large-xlsr-kn | null | [
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"kn",
"dataset:openslr",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
fill-mask | transformers |
# roberta-cord19-1M7k

> This model is based on ***RoBERTa*** and was pre-trained on 1.7 million sentences.
The training corpus was papers taken from *Semantic Scholar*'s CORD-19 historical releases. Corpus size is `13k` papers, `~60M` tokens. I used the full-text `"body_text"` of the papers in training (details below).
#### Usage
```python
from transformers import pipeline
from transformers import RobertaTokenizerFast, RobertaForMaskedLM
tokenizer = RobertaTokenizerFast.from_pretrained("amoux/roberta-cord19-1M7k")
model = RobertaForMaskedLM.from_pretrained("amoux/roberta-cord19-1M7k")
fillmask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
text = "Lung infiltrates cause significant morbidity and mortality in immunocompromised patients."
masked_text = text.replace("patients", tokenizer.mask_token)
predictions = fillmask(masked_text, top_k=3)
```
- Predicted tokens
```bash
[{'sequence': '<s>Lung infiltrates cause significant morbidity and mortality in immunocompromised patients.</s>',
'score': 0.6273621320724487,
'token': 660,
'token_str': 'Ġpatients'},
{'sequence': '<s>Lung infiltrates cause significant morbidity and mortality in immunocompromised individuals.</s>',
'score': 0.19800445437431335,
'token': 1868,
'token_str': 'Ġindividuals'},
{'sequence': '<s>Lung infiltrates cause significant morbidity and mortality in immunocompromised animals.</s>',
'score': 0.022069649770855904,
'token': 1471,
'token_str': 'Ġanimals'}]
```
## Dataset
- About
- name: *CORD-19: The Covid-19 Open Research Dataset*
- date: *2020-03-18*
- md5 | sha1: `a36fe181 | 8fbea927`
- text-key: `body_text`
- subsets (*total*: `13,202`):
- *biorxiv_medrxiv*: `803`
- *comm_use_subset*: `9000`
- *pmc_custom_license*: `1426`
- *noncomm_use_subset*: `1973`
- Splits (*ratio: 0.9*)
- sentences used for training: `1,687,124`
- sentences used for evaluation: `187,459`
- Total training steps: `210,890`
- Total evaluation steps: `23,433`
## Parameters
- Data
- block_size: `256`
- Training
- per_device_train_batch_size: `8`
- per_device_eval_batch_size: `8`
- gradient_accumulation_steps: `2`
- learning_rate: `5e-5`
- num_train_epochs: `2`
- fp16: `True`
- fp16_opt_level: `'O1'`
- seed: `42`
- Output
- global_step: `210890`
- training_loss: `3.5964575726682155`
## Evaluation
- Perplexity: `17.469366079957922`
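For reference, perplexity is the exponential of the mean evaluation cross-entropy loss, so the value above corresponds to an eval loss of roughly 2.86:
```python
import math

# Recover the eval loss implied by the reported perplexity, and invert it.
eval_loss = math.log(17.469366079957922)
print(eval_loss)            # ~2.8604
print(math.exp(eval_loss))  # 17.4693..., the reported perplexity
```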
### Citation
> Allen Institute CORD-19 [Historical Releases](https://ai2-semanticscholar-cord-19.s3-us-west-2.amazonaws.com/historical_releases.html)
```
@article{Wang2020CORD19TC,
title={CORD-19: The Covid-19 Open Research Dataset},
author={Lucy Lu Wang and Kyle Lo and Yoganand Chandrasekhar and Russell Reas and Jiangjiang Yang and Darrin Eide and K. Funk and Rodney Michael Kinney and Ziyang Liu and W. Merrill and P. Mooney and D. Murdick and Devvret Rishi and Jerry Sheehan and Zhihong Shen and B. Stilson and A. Wade and K. Wang and Christopher Wilhelm and Boya Xie and D. Raymond and Daniel S. Weld and Oren Etzioni and Sebastian Kohlmeier},
journal={ArXiv},
year={2020}
}
``` | {"language": "en", "thumbnail": "https://github.githubassets.com/images/icons/emoji/unicode/2695.png", "widget": [{"text": "Lung infiltrates cause significant morbidity and mortality in immunocompromised <mask>."}, {"text": "Tuberculosis appears to be an important <mask> in endemic regions especially in the non-HIV, non-hematologic malignancy group."}, {"text": "For vector-transmitted diseases this places huge significance on vector mortality rates as vectors usually don't <mask> an infection and instead remain infectious for life."}, {"text": "The lung lesions were characterized by bronchointerstitial pneumonia with accumulation of neutrophils, macrophages and necrotic debris in <mask> and bronchiolar lumens and peribronchiolar/perivascular infiltration of inflammatory cells."}]} | amoux/roberta-cord19-1M7k | null | [
"transformers",
"pytorch",
"tf",
"jax",
"roberta",
"fill-mask",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
question-answering | transformers | {} | amoux/scibert_nli_squad | null | [
"transformers",
"pytorch",
"jax",
"bert",
"question-answering",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | amraniworking/testing | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | amritchhetrib/InstgramAnalytics | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | amritchhetrib/TwitterAnalytics | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | amritchhetrib/TwitterSentimentalAnalysis | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | amritchhetrib/model_name | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
token-classification | flair |
#### This model is used in the [Speech Interval Timer app](https://medium.com/@amtam0/speech-interval-timer-app-using-transformers-1df8fa3821d5)
7-class NER English model using [Flair TransformerWordEmbeddings - distilroberta-base](https://github.com/flairNLP/flair/).
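A minimal usage sketch with Flair (assuming the tagger loads directly from the Hub by its model id):
```python
from flair.data import Sentence
from flair.models import SequenceTagger

# Load the tagger and tag an interval-timer command.
tagger = SequenceTagger.load("amtam0/timer-ner-en")
sentence = Sentence("19 sets of 3 minutes 21 minutes between sets")
tagger.predict(sentence)
print(sentence.to_tagged_string())
```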
| **tag** | **meaning** |
|---------------------------------|-----------|
| nb_rounds | Number of rounds |
| duration_br_sd | Duration btwn rounds in seconds |
| duration_br_min | Duration btwn rounds in minutes |
| duration_br_hr | Duration btwn rounds in hours |
| duration_wt_sd | workout duration in seconds |
| duration_wt_min | workout duration in minutes |
| duration_wt_hr | workout duration in hours |
---
The dataset was created manually (perfectible). Example sentences:
```
19 sets of 3 minutes 21 minutes between sets
start 7 sets of 32 seconds
create 13 sets of 26 seconds
init 8 series of 3 hours
2 sets of 30 seconds 35 minutes between each cycle
...
``` | {"language": "en", "tags": ["flair", "token-classification", "sequence-tagger-model"], "widget": [{"text": "12 sets of 2 minutes 38 minutes between each set"}]} | amtam0/timer-ner-en | null | [
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"en",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
token-classification | flair | #### This model is used in the [Speech Interval Timer app](https://medium.com/@amtam0/speech-interval-timer-app-using-transformers-1df8fa3821d5)
7-class NER French model using [Flair TransformerWordEmbeddings - camembert-base](https://github.com/flairNLP/flair/).
| **tag** | **meaning** |
|---------------------------------|-----------|
| nb_rounds | Number of rounds |
| duration_br_sd | Duration btwn rounds in seconds |
| duration_br_min | Duration btwn rounds in minutes |
| duration_br_hr | Duration btwn rounds in hours |
| duration_wt_sd | workout duration in seconds |
| duration_wt_min | workout duration in minutes |
| duration_wt_hr | workout duration in hours |
---
Synthetic dataset has been used (perfectible). Sentences example in the widget. | {"language": "fr", "tags": ["flair", "token-classification", "sequence-tagger-model"], "widget": [{"text": "g\u00e9n\u00e8re 27 s\u00e9ries de 54 seconde "}, {"text": " 9 cycles de 17 minute "}, {"text": "initie 17 sets de 44 secondes 297 minutes entre s\u00e9ries"}, {"text": " 13 sets de 88 secondes 225 minutes 49 entre chaque s\u00e9rie"}, {"text": "g\u00e9n\u00e8re 39 s\u00e9ries de 19 minute 21 minute 45 entre s\u00e9ries"}, {"text": "d\u00e9bute 47 sets de 6 heures "}, {"text": "d\u00e9bute 1 cycle de 25 minutes 48 23 minute 32 entre chaque s\u00e9rie"}, {"text": "commence 23 s\u00e9ries de 18 heure et demi 25 minutes 41 entre s\u00e9ries"}, {"text": " 13 cycles de 52 secondes "}, {"text": "cr\u00e9e 31 s\u00e9rie de 60 secondes "}, {"text": " 7 set de 36 secondes 139 minutes 34 entre s\u00e9ries"}, {"text": "commence 37 sets de 51 minute 25 295 minute entre chaque s\u00e9rie"}, {"text": "cr\u00e9e 11 cycles de 72 seconde 169 minute 15 entre chaque s\u00e9rie"}, {"text": "initie 5 s\u00e9rie de 33 minutes 48 "}, {"text": "cr\u00e9e 23 set de 1 minute 46 279 minutes 50 entre chaque s\u00e9rie"}, {"text": "g\u00e9n\u00e8re 41 s\u00e9rie de 35 minutes 55 "}, {"text": "lance 11 cycles de 4 heures "}, {"text": "cr\u00e9e 47 cycle de 28 heure moins quart 243 minutes 45 entre chaque s\u00e9rie"}, {"text": "initie 23 set de 36 secondes "}, {"text": "commence 37 sets de 24 heures et quart "}]} | amtam0/timer-ner-fr | null | [
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"fr",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers | {} | amyma21/sincere_question_classification | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | an0ushka/StockBot | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | an2ten/Andrea | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | anabiha/distilbert-base-uncased-finetuned-ner-updated-data | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | anabiha/distilgpt2-finetuned-wikitext2 | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "wav2vec2-base-timit-demo-colab", "results": []}]} | anan0329/wav2vec2-base-timit-demo-colab | null | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
audio-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-adult-child-cls
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1713
- Accuracy: 0.9460
- F1: 0.9509
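A minimal inference sketch with the `transformers` audio-classification pipeline (`speech.wav` is a placeholder file; 16 kHz mono input is assumed, matching the wav2vec2-base upstream):
```python
from transformers import pipeline

# Classify a speech clip as adult or child speech.
clf = pipeline("audio-classification", model="anantoj/wav2vec2-adult-child-cls")
print(clf("speech.wav"))
```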
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.323 | 1.0 | 96 | 0.2699 | 0.9026 | 0.9085 |
| 0.2003 | 2.0 | 192 | 0.2005 | 0.9234 | 0.9300 |
| 0.1808 | 3.0 | 288 | 0.1780 | 0.9377 | 0.9438 |
| 0.1537 | 4.0 | 384 | 0.1673 | 0.9441 | 0.9488 |
| 0.1135 | 5.0 | 480 | 0.1713 | 0.9460 | 0.9509 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "wav2vec2-adult-child-cls", "results": []}]} | anantoj/wav2vec2-adult-child-cls | null | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
audio-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-adult-child-cls
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1755
- Accuracy: 0.9432
- F1: 0.9472
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.368 | 1.0 | 383 | 0.2560 | 0.9072 | 0.9126 |
| 0.2013 | 2.0 | 766 | 0.1959 | 0.9321 | 0.9362 |
| 0.22 | 3.0 | 1149 | 0.1755 | 0.9432 | 0.9472 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "wav2vec2-xls-r-300m-adult-child-cls", "results": []}]} | anantoj/wav2vec2-large-xlsr-53-adult-child-cls | null | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Wav2Vec2 XLS-R 1B Korean
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the `clean` configuration of the kresnik/zeroth_korean dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0639
- Wer: 0.0449
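A minimal transcription sketch with this checkpoint is shown below, assuming the `clean` configuration of kresnik/zeroth_korean exposes 16 kHz audio under an `audio` column (not verified here); decoding is plain greedy argmax over the CTC logits.
```python
import torch
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("anantoj/wav2vec2-xls-r-1b-korean")
model = Wav2Vec2ForCTC.from_pretrained("anantoj/wav2vec2-xls-r-1b-korean")

# Assumes an "audio" column with 16 kHz arrays in the zeroth_korean test split.
sample = load_dataset("kresnik/zeroth_korean", "clean", split="test")[0]
inputs = processor(sample["audio"]["array"], sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
print(processor.batch_decode(torch.argmax(logits, dim=-1)))  # greedy CTC decode
```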
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 4.603 | 0.72 | 500 | 4.6572 | 0.9985 |
| 2.6314 | 1.44 | 1000 | 2.0424 | 0.9256 |
| 2.2708 | 2.16 | 1500 | 0.9889 | 0.6989 |
| 2.1769 | 2.88 | 2000 | 0.8366 | 0.6312 |
| 2.1142 | 3.6 | 2500 | 0.7555 | 0.5998 |
| 2.0084 | 4.32 | 3000 | 0.7144 | 0.6003 |
| 1.9272 | 5.04 | 3500 | 0.6311 | 0.5461 |
| 1.8687 | 5.75 | 4000 | 0.6252 | 0.5430 |
| 1.8186 | 6.47 | 4500 | 0.5491 | 0.4988 |
| 1.7364 | 7.19 | 5000 | 0.5463 | 0.4959 |
| 1.6809 | 7.91 | 5500 | 0.4724 | 0.4484 |
| 1.641 | 8.63 | 6000 | 0.4679 | 0.4461 |
| 1.572 | 9.35 | 6500 | 0.4387 | 0.4236 |
| 1.5256 | 10.07 | 7000 | 0.3970 | 0.4003 |
| 1.5044 | 10.79 | 7500 | 0.3690 | 0.3893 |
| 1.4563 | 11.51 | 8000 | 0.3752 | 0.3875 |
| 1.394 | 12.23 | 8500 | 0.3386 | 0.3567 |
| 1.3641 | 12.95 | 9000 | 0.3290 | 0.3467 |
| 1.2878 | 13.67 | 9500 | 0.2893 | 0.3135 |
| 1.2602 | 14.39 | 10000 | 0.2723 | 0.3029 |
| 1.2302 | 15.11 | 10500 | 0.2603 | 0.2989 |
| 1.1865 | 15.83 | 11000 | 0.2440 | 0.2794 |
| 1.1491 | 16.55 | 11500 | 0.2500 | 0.2788 |
| 1.093 | 17.27 | 12000 | 0.2279 | 0.2629 |
| 1.0367 | 17.98 | 12500 | 0.2076 | 0.2443 |
| 0.9954 | 18.7 | 13000 | 0.1844 | 0.2259 |
| 0.99 | 19.42 | 13500 | 0.1794 | 0.2179 |
| 0.9385 | 20.14 | 14000 | 0.1765 | 0.2122 |
| 0.8952 | 20.86 | 14500 | 0.1706 | 0.1974 |
| 0.8841 | 21.58 | 15000 | 0.1791 | 0.1969 |
| 0.847 | 22.3 | 15500 | 0.1780 | 0.2060 |
| 0.8669 | 23.02 | 16000 | 0.1608 | 0.1862 |
| 0.8066 | 23.74 | 16500 | 0.1447 | 0.1626 |
| 0.7908 | 24.46 | 17000 | 0.1457 | 0.1655 |
| 0.7459 | 25.18 | 17500 | 0.1350 | 0.1445 |
| 0.7218 | 25.9 | 18000 | 0.1276 | 0.1421 |
| 0.703 | 26.62 | 18500 | 0.1177 | 0.1302 |
| 0.685 | 27.34 | 19000 | 0.1147 | 0.1305 |
| 0.6811 | 28.06 | 19500 | 0.1128 | 0.1244 |
| 0.6444 | 28.78 | 20000 | 0.1120 | 0.1213 |
| 0.6323 | 29.5 | 20500 | 0.1137 | 0.1166 |
| 0.5998 | 30.22 | 21000 | 0.1051 | 0.1107 |
| 0.5706 | 30.93 | 21500 | 0.1035 | 0.1037 |
| 0.5555 | 31.65 | 22000 | 0.1031 | 0.0927 |
| 0.5389 | 32.37 | 22500 | 0.0997 | 0.0900 |
| 0.5201 | 33.09 | 23000 | 0.0920 | 0.0912 |
| 0.5146 | 33.81 | 23500 | 0.0929 | 0.0947 |
| 0.515 | 34.53 | 24000 | 0.1000 | 0.0953 |
| 0.4743 | 35.25 | 24500 | 0.0922 | 0.0892 |
| 0.4707 | 35.97 | 25000 | 0.0852 | 0.0808 |
| 0.4456 | 36.69 | 25500 | 0.0855 | 0.0779 |
| 0.443 | 37.41 | 26000 | 0.0843 | 0.0738 |
| 0.4388 | 38.13 | 26500 | 0.0816 | 0.0699 |
| 0.4162 | 38.85 | 27000 | 0.0752 | 0.0645 |
| 0.3979 | 39.57 | 27500 | 0.0761 | 0.0621 |
| 0.3889 | 40.29 | 28000 | 0.0771 | 0.0625 |
| 0.3923 | 41.01 | 28500 | 0.0755 | 0.0598 |
| 0.3693 | 41.73 | 29000 | 0.0730 | 0.0578 |
| 0.3642 | 42.45 | 29500 | 0.0739 | 0.0598 |
| 0.3532 | 43.17 | 30000 | 0.0712 | 0.0553 |
| 0.3513 | 43.88 | 30500 | 0.0762 | 0.0516 |
| 0.3349 | 44.6 | 31000 | 0.0731 | 0.0504 |
| 0.3305 | 45.32 | 31500 | 0.0725 | 0.0507 |
| 0.3285 | 46.04 | 32000 | 0.0709 | 0.0489 |
| 0.3179 | 46.76 | 32500 | 0.0667 | 0.0467 |
| 0.3158 | 47.48 | 33000 | 0.0653 | 0.0494 |
| 0.3033 | 48.2 | 33500 | 0.0638 | 0.0456 |
| 0.3023 | 48.92 | 34000 | 0.0644 | 0.0464 |
| 0.2975 | 49.64 | 34500 | 0.0643 | 0.0455 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3.dev0
- Tokenizers 0.11.0
| {"language": "ko", "license": "apache-2.0", "tags": ["automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "robust-speech-event"], "datasets": ["kresnik/zeroth_korean"], "model-index": [{"name": "Wav2Vec2 XLS-R 1B Korean", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "ko"}, "metrics": [{"type": "wer", "value": 82.07, "name": "Test WER"}, {"type": "cer", "value": 42.12, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "ko"}, "metrics": [{"type": "wer", "value": 82.09, "name": "Test WER"}]}]}]} | anantoj/wav2vec2-xls-r-1b-korean | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"hf-asr-leaderboard",
"robust-speech-event",
"ko",
"dataset:kresnik/zeroth_korean",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
audio-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-adult-child-cls
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1770
- Accuracy: 0.9404
- F1: 0.9440
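For inference without the pipeline abstraction, a sketch along the following lines should work; the silent one-second `waveform` is a stand-in for real 16 kHz audio, and the label names come from whatever `id2label` mapping the checkpoint was saved with.
```python
import numpy as np
import torch
from transformers import AutoFeatureExtractor, AutoModelForAudioClassification

repo = "anantoj/wav2vec2-xls-r-300m-adult-child-cls"
extractor = AutoFeatureExtractor.from_pretrained(repo)
model = AutoModelForAudioClassification.from_pretrained(repo)

waveform = np.zeros(16_000, dtype=np.float32)  # stand-in: one second of silence
inputs = extractor(waveform, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)
print({model.config.id2label[i]: p.item() for i, p in enumerate(probs[0])})
```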
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.25 | 1.0 | 383 | 0.2516 | 0.9077 | 0.9106 |
| 0.2052 | 2.0 | 766 | 0.2138 | 0.9321 | 0.9353 |
| 0.1901 | 3.0 | 1149 | 0.1770 | 0.9404 | 0.9440 |
| 0.2255 | 4.0 | 1532 | 0.1794 | 0.9404 | 0.9440 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "wav2vec2-xls-r-300m-adult-child-cls", "results": []}]} | anantoj/wav2vec2-xls-r-300m-adult-child-cls | null | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Wav2Vec2 XLS-R 300M Chinese (zh-CN)
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the zh-CN configuration of the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8122
- Wer: 0.8392
- Cer: 0.2059
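Since whole-word WER is a blunt instrument for Mandarin, the card reports CER alongside it. A minimal transcription sketch is below; the audio path is a placeholder and must point to a 16 kHz recording.
```python
from transformers import pipeline

# Minimal transcription sketch; "mandarin_sample.wav" is a placeholder.
asr = pipeline("automatic-speech-recognition", model="anantoj/wav2vec2-xls-r-300m-zh-CN")
print(asr("mandarin_sample.wav")["text"])
```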
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|
| 69.215 | 0.74 | 500 | 74.9751 | 1.0 | 1.0 |
| 8.2109 | 1.48 | 1000 | 7.0617 | 1.0 | 1.0 |
| 6.4277 | 2.22 | 1500 | 6.3811 | 1.0 | 1.0 |
| 6.3513 | 2.95 | 2000 | 6.3061 | 1.0 | 1.0 |
| 6.2522 | 3.69 | 2500 | 6.2147 | 1.0 | 1.0 |
| 5.9757 | 4.43 | 3000 | 5.7906 | 1.1004 | 0.9924 |
| 5.0642 | 5.17 | 3500 | 4.2984 | 1.7729 | 0.8214 |
| 4.6346 | 5.91 | 4000 | 3.7129 | 1.8946 | 0.7728 |
| 4.267 | 6.65 | 4500 | 3.2177 | 1.7526 | 0.6922 |
| 3.9964 | 7.39 | 5000 | 2.8337 | 1.8055 | 0.6546 |
| 3.8035 | 8.12 | 5500 | 2.5726 | 2.1851 | 0.6992 |
| 3.6273 | 8.86 | 6000 | 2.3391 | 2.1029 | 0.6511 |
| 3.5248 | 9.6 | 6500 | 2.1944 | 2.3617 | 0.6859 |
| 3.3683 | 10.34 | 7000 | 1.9827 | 2.1014 | 0.6063 |
| 3.2411 | 11.08 | 7500 | 1.8610 | 1.6160 | 0.5135 |
| 3.1299 | 11.82 | 8000 | 1.7446 | 1.5948 | 0.4946 |
| 3.0574 | 12.56 | 8500 | 1.6454 | 1.1291 | 0.4051 |
| 2.985 | 13.29 | 9000 | 1.5919 | 1.0673 | 0.3893 |
| 2.9573 | 14.03 | 9500 | 1.4903 | 1.0604 | 0.3766 |
| 2.8897 | 14.77 | 10000 | 1.4614 | 1.0059 | 0.3653 |
| 2.8169 | 15.51 | 10500 | 1.3997 | 1.0030 | 0.3550 |
| 2.8155 | 16.25 | 11000 | 1.3444 | 0.9980 | 0.3441 |
| 2.7595 | 16.99 | 11500 | 1.2911 | 0.9703 | 0.3325 |
| 2.7107 | 17.72 | 12000 | 1.2462 | 0.9565 | 0.3227 |
| 2.6358 | 18.46 | 12500 | 1.2466 | 0.9955 | 0.3333 |
| 2.5801 | 19.2 | 13000 | 1.2059 | 1.0010 | 0.3226 |
| 2.5554 | 19.94 | 13500 | 1.1919 | 1.0094 | 0.3223 |
| 2.5314 | 20.68 | 14000 | 1.1703 | 0.9847 | 0.3156 |
| 2.509 | 21.42 | 14500 | 1.1733 | 0.9896 | 0.3177 |
| 2.4391 | 22.16 | 15000 | 1.1811 | 0.9723 | 0.3164 |
| 2.4631 | 22.89 | 15500 | 1.1382 | 0.9698 | 0.3059 |
| 2.4414 | 23.63 | 16000 | 1.0893 | 0.9644 | 0.2972 |
| 2.3771 | 24.37 | 16500 | 1.0930 | 0.9505 | 0.2954 |
| 2.3658 | 25.11 | 17000 | 1.0756 | 0.9609 | 0.2926 |
| 2.3215 | 25.85 | 17500 | 1.0512 | 0.9614 | 0.2890 |
| 2.3327 | 26.59 | 18000 | 1.0627 | 1.1984 | 0.3282 |
| 2.3055 | 27.33 | 18500 | 1.0582 | 0.9520 | 0.2841 |
| 2.299 | 28.06 | 19000 | 1.0356 | 0.9480 | 0.2817 |
| 2.2673 | 28.8 | 19500 | 1.0305 | 0.9367 | 0.2771 |
| 2.2166 | 29.54 | 20000 | 1.0139 | 0.9223 | 0.2702 |
| 2.2378 | 30.28 | 20500 | 1.0095 | 0.9268 | 0.2722 |
| 2.2168 | 31.02 | 21000 | 1.0001 | 0.9085 | 0.2691 |
| 2.1766 | 31.76 | 21500 | 0.9884 | 0.9050 | 0.2640 |
| 2.1715 | 32.5 | 22000 | 0.9730 | 0.9505 | 0.2719 |
| 2.1104 | 33.23 | 22500 | 0.9752 | 0.9362 | 0.2656 |
| 2.1158 | 33.97 | 23000 | 0.9720 | 0.9263 | 0.2624 |
| 2.0718 | 34.71 | 23500 | 0.9573 | 1.0005 | 0.2759 |
| 2.0824 | 35.45 | 24000 | 0.9609 | 0.9525 | 0.2643 |
| 2.0591 | 36.19 | 24500 | 0.9662 | 0.9570 | 0.2667 |
| 2.0768 | 36.93 | 25000 | 0.9528 | 0.9574 | 0.2646 |
| 2.0893 | 37.67 | 25500 | 0.9810 | 0.9169 | 0.2612 |
| 2.0282 | 38.4 | 26000 | 0.9556 | 0.8877 | 0.2528 |
| 1.997 | 39.14 | 26500 | 0.9523 | 0.8723 | 0.2501 |
| 2.0209 | 39.88 | 27000 | 0.9542 | 0.8773 | 0.2503 |
| 1.987 | 40.62 | 27500 | 0.9427 | 0.8867 | 0.2500 |
| 1.9663 | 41.36 | 28000 | 0.9546 | 0.9065 | 0.2546 |
| 1.9945 | 42.1 | 28500 | 0.9431 | 0.9119 | 0.2536 |
| 1.9604 | 42.84 | 29000 | 0.9367 | 0.9030 | 0.2490 |
| 1.933 | 43.57 | 29500 | 0.9071 | 0.8916 | 0.2432 |
| 1.9227 | 44.31 | 30000 | 0.9048 | 0.8882 | 0.2428 |
| 1.8784 | 45.05 | 30500 | 0.9106 | 0.8991 | 0.2437 |
| 1.8844 | 45.79 | 31000 | 0.8996 | 0.8758 | 0.2379 |
| 1.8776 | 46.53 | 31500 | 0.9028 | 0.8798 | 0.2395 |
| 1.8372 | 47.27 | 32000 | 0.9047 | 0.8778 | 0.2379 |
| 1.832 | 48.01 | 32500 | 0.9016 | 0.8941 | 0.2393 |
| 1.8154 | 48.74 | 33000 | 0.8915 | 0.8916 | 0.2372 |
| 1.8072 | 49.48 | 33500 | 0.8781 | 0.8872 | 0.2365 |
| 1.7489 | 50.22 | 34000 | 0.8738 | 0.8956 | 0.2340 |
| 1.7928 | 50.96 | 34500 | 0.8684 | 0.8872 | 0.2323 |
| 1.7748 | 51.7 | 35000 | 0.8723 | 0.8718 | 0.2321 |
| 1.7355 | 52.44 | 35500 | 0.8760 | 0.8842 | 0.2331 |
| 1.7167 | 53.18 | 36000 | 0.8746 | 0.8817 | 0.2324 |
| 1.7479 | 53.91 | 36500 | 0.8762 | 0.8753 | 0.2281 |
| 1.7428 | 54.65 | 37000 | 0.8733 | 0.8699 | 0.2277 |
| 1.7058 | 55.39 | 37500 | 0.8816 | 0.8649 | 0.2263 |
| 1.7045 | 56.13 | 38000 | 0.8733 | 0.8689 | 0.2297 |
| 1.709 | 56.87 | 38500 | 0.8648 | 0.8654 | 0.2232 |
| 1.6799 | 57.61 | 39000 | 0.8717 | 0.8580 | 0.2244 |
| 1.664 | 58.35 | 39500 | 0.8653 | 0.8723 | 0.2259 |
| 1.6488 | 59.08 | 40000 | 0.8637 | 0.8803 | 0.2271 |
| 1.6298 | 59.82 | 40500 | 0.8553 | 0.8768 | 0.2253 |
| 1.6185 | 60.56 | 41000 | 0.8512 | 0.8718 | 0.2240 |
| 1.574 | 61.3 | 41500 | 0.8579 | 0.8773 | 0.2251 |
| 1.6192 | 62.04 | 42000 | 0.8499 | 0.8743 | 0.2242 |
| 1.6275 | 62.78 | 42500 | 0.8419 | 0.8758 | 0.2216 |
| 1.5697 | 63.52 | 43000 | 0.8446 | 0.8699 | 0.2222 |
| 1.5384 | 64.25 | 43500 | 0.8462 | 0.8580 | 0.2200 |
| 1.5115 | 64.99 | 44000 | 0.8467 | 0.8674 | 0.2214 |
| 1.5547 | 65.73 | 44500 | 0.8505 | 0.8669 | 0.2204 |
| 1.5597 | 66.47 | 45000 | 0.8421 | 0.8684 | 0.2192 |
| 1.505 | 67.21 | 45500 | 0.8485 | 0.8619 | 0.2187 |
| 1.5101 | 67.95 | 46000 | 0.8489 | 0.8649 | 0.2204 |
| 1.5199 | 68.69 | 46500 | 0.8407 | 0.8619 | 0.2180 |
| 1.5207 | 69.42 | 47000 | 0.8379 | 0.8496 | 0.2163 |
| 1.478 | 70.16 | 47500 | 0.8357 | 0.8595 | 0.2163 |
| 1.4817 | 70.9 | 48000 | 0.8346 | 0.8496 | 0.2151 |
| 1.4827 | 71.64 | 48500 | 0.8362 | 0.8624 | 0.2169 |
| 1.4513 | 72.38 | 49000 | 0.8355 | 0.8451 | 0.2137 |
| 1.4988 | 73.12 | 49500 | 0.8325 | 0.8624 | 0.2161 |
| 1.4267 | 73.85 | 50000 | 0.8396 | 0.8481 | 0.2157 |
| 1.4421 | 74.59 | 50500 | 0.8355 | 0.8491 | 0.2122 |
| 1.4311 | 75.33 | 51000 | 0.8358 | 0.8476 | 0.2118 |
| 1.4174 | 76.07 | 51500 | 0.8289 | 0.8451 | 0.2101 |
| 1.4349 | 76.81 | 52000 | 0.8372 | 0.8580 | 0.2140 |
| 1.3959 | 77.55 | 52500 | 0.8325 | 0.8436 | 0.2116 |
| 1.4087 | 78.29 | 53000 | 0.8351 | 0.8446 | 0.2105 |
| 1.415 | 79.03 | 53500 | 0.8363 | 0.8476 | 0.2123 |
| 1.4122 | 79.76 | 54000 | 0.8310 | 0.8481 | 0.2112 |
| 1.3969 | 80.5 | 54500 | 0.8239 | 0.8446 | 0.2095 |
| 1.361 | 81.24 | 55000 | 0.8282 | 0.8427 | 0.2091 |
| 1.3611 | 81.98 | 55500 | 0.8282 | 0.8407 | 0.2092 |
| 1.3677 | 82.72 | 56000 | 0.8235 | 0.8436 | 0.2084 |
| 1.3361 | 83.46 | 56500 | 0.8231 | 0.8377 | 0.2069 |
| 1.3779 | 84.19 | 57000 | 0.8206 | 0.8436 | 0.2070 |
| 1.3727 | 84.93 | 57500 | 0.8204 | 0.8392 | 0.2065 |
| 1.3317 | 85.67 | 58000 | 0.8207 | 0.8436 | 0.2065 |
| 1.3332 | 86.41 | 58500 | 0.8186 | 0.8357 | 0.2055 |
| 1.3299 | 87.15 | 59000 | 0.8193 | 0.8417 | 0.2075 |
| 1.3129 | 87.89 | 59500 | 0.8183 | 0.8431 | 0.2065 |
| 1.3352 | 88.63 | 60000 | 0.8151 | 0.8471 | 0.2062 |
| 1.3026 | 89.36 | 60500 | 0.8125 | 0.8486 | 0.2067 |
| 1.3468 | 90.1 | 61000 | 0.8124 | 0.8407 | 0.2058 |
| 1.3028 | 90.84 | 61500 | 0.8122 | 0.8461 | 0.2051 |
| 1.2884 | 91.58 | 62000 | 0.8086 | 0.8427 | 0.2048 |
| 1.3005 | 92.32 | 62500 | 0.8110 | 0.8387 | 0.2055 |
| 1.2996 | 93.06 | 63000 | 0.8126 | 0.8328 | 0.2057 |
| 1.2707 | 93.8 | 63500 | 0.8098 | 0.8402 | 0.2047 |
| 1.3026 | 94.53 | 64000 | 0.8097 | 0.8402 | 0.2050 |
| 1.2546 | 95.27 | 64500 | 0.8111 | 0.8402 | 0.2055 |
| 1.2426 | 96.01 | 65000 | 0.8088 | 0.8372 | 0.2059 |
| 1.2869 | 96.75 | 65500 | 0.8093 | 0.8397 | 0.2048 |
| 1.2782 | 97.49 | 66000 | 0.8099 | 0.8412 | 0.2049 |
| 1.2457 | 98.23 | 66500 | 0.8134 | 0.8412 | 0.2062 |
| 1.2967 | 98.97 | 67000 | 0.8115 | 0.8382 | 0.2055 |
| 1.2817 | 99.7 | 67500 | 0.8128 | 0.8392 | 0.2063 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3.dev0
- Tokenizers 0.11.0
| {"language": ["zh-CN"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "common_voice", "generated_from_trainer", "hf-asr-leaderboard", "robust-speech-event", "sv"], "datasets": ["common_voice"], "model-index": [{"name": "", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "zh-CN"}, "metrics": [{"type": "cer", "value": 66.22, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "zh-CN"}, "metrics": [{"type": "cer", "value": 37.51, "name": "Test CER"}]}]}]} | anantoj/wav2vec2-xls-r-300m-zh-CN | null | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"common_voice",
"generated_from_trainer",
"hf-asr-leaderboard",
"robust-speech-event",
"sv",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | anantvaid4/fine-tuned-bert-ment-ill_clean | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | anas/Coqui-large-Arabic-vocabulary-stt-model | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
automatic-speech-recognition | transformers |
# Wav2Vec2-Large-XLSR-53-Arabic
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Arabic using the [Common Voice Corpus 4](https://commonvoice.mozilla.org/en/datasets) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "ar", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("anas/wav2vec2-large-xlsr-arabic")
model = Wav2Vec2ForCTC.from_pretrained("anas/wav2vec2-large-xlsr-arabic")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Arabic test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "ar", split="test")
processor = Wav2Vec2Processor.from_pretrained("anas/wav2vec2-large-xlsr-arabic")
model = Wav2Vec2ForCTC.from_pretrained("anas/wav2vec2-large-xlsr-arabic")
model.to("cuda")
wer = load_metric("wer")
chars_to_ignore_regex = '[\,\؟\.\!\-\;\\:\'\"\☭\«\»\؛\—\ـ\_\،\“\%\‘\”\�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
batch["sentence"] = re.sub('[a-z]','',batch["sentence"])
batch["sentence"] = re.sub("[إأٱآا]", "ا", batch["sentence"])
noise = re.compile(""" ّ | # Tashdid
َ | # Fatha
ً | # Tanwin Fath
ُ | # Damma
ٌ | # Tanwin Damm
ِ | # Kasra
ٍ | # Tanwin Kasr
ْ | # Sukun
ـ # Tatwil/Kashida
""", re.VERBOSE)
batch["sentence"] = re.sub(noise, '', batch["sentence"])
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run batched inference on the preprocessed audio arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 52.18 %
## Training
The Common Voice Corpus 4 `train` and `validation` datasets were used for training.
The script used for training can be found [here](https://github.com/anashas/Fine-Tuning-of-XLSR-Wav2Vec2-on-Arabic).
Twitter: [here](https://twitter.com/hasnii_anas)
Email: [email protected] | {"language": "ar", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": [{"common_voice": "Common Voice Corpus 4"}], "metrics": ["wer"], "model-index": [{"name": "Hasni XLSR Wav2Vec2 Large 53", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice ar", "type": "common_voice", "args": "ar"}, "metrics": [{"type": "wer", "value": 52.18, "name": "Test WER"}]}]}]} | anas/wav2vec2-large-xlsr-arabic | null | [
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"ar",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | anas-awadalla/bart-base-finetuned-squad | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | anas-awadalla/bert-base-pretrained-on-squad | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-0
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
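A minimal way to exercise the checkpoint is the question-answering pipeline, sketched below; the question/context pair is a toy example, and — going by the checkpoint name, which suggests only 1,024 SQuAD training examples — answers should be expected to be noticeably weaker than a fully fine-tuned model's.
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="anas-awadalla/bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-0",
)
# Toy example; real evaluation uses the SQuAD validation set.
print(qa(question="What does extractive QA select?",
         context="Extractive question answering selects a span from the context."))
```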
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-0", "results": []}]} | anas-awadalla/bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-0 | null | [
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-10
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-10", "results": []}]} | anas-awadalla/bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-10 | null | [
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-2", "results": []}]} | anas-awadalla/bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-2 | null | [
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-4
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-4", "results": []}]} | anas-awadalla/bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-4 | null | [
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-42
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
- Exact Match: 40.91769157994324
- F1: 52.89154394730339
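These figures are the output of the standard SQuAD metric. For reference, the sketch below shows the metric call that produces results of this shape; the single prediction/reference pair is a toy placeholder, whereas real evaluation feeds one entry per validation example.
```python
from datasets import load_metric

squad_metric = load_metric("squad")
predictions = [{"id": "ex-1", "prediction_text": "Denver Broncos"}]
references = [{"id": "ex-1",
               "answers": {"text": ["Denver Broncos"], "answer_start": [177]}}]
print(squad_metric.compute(predictions=predictions, references=references))
# -> {'exact_match': 100.0, 'f1': 100.0}
```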
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-42", "results": []}]} | anas-awadalla/bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-42 | null | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-6
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-6", "results": []}]} | anas-awadalla/bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-6 | null | [
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-8
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-8", "results": []}]} | anas-awadalla/bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-8 | null | [
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-128-finetuned-squad-seed-0
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "bert-base-uncased-few-shot-k-128-finetuned-squad-seed-0", "results": []}]} | anas-awadalla/bert-base-uncased-few-shot-k-128-finetuned-squad-seed-0 | null | [
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-128-finetuned-squad-seed-10
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "bert-base-uncased-few-shot-k-128-finetuned-squad-seed-10", "results": []}]} | anas-awadalla/bert-base-uncased-few-shot-k-128-finetuned-squad-seed-10 | null | [
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-128-finetuned-squad-seed-2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "bert-base-uncased-few-shot-k-128-finetuned-squad-seed-2", "results": []}]} | anas-awadalla/bert-base-uncased-few-shot-k-128-finetuned-squad-seed-2 | null | [
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-128-finetuned-squad-seed-4
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "bert-base-uncased-few-shot-k-128-finetuned-squad-seed-4", "results": []}]} | anas-awadalla/bert-base-uncased-few-shot-k-128-finetuned-squad-seed-4 | null | [
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-128-finetuned-squad-seed-42
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
- Exact Match: 12.93282876064333
- F1: 21.98821604201723
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "bert-base-uncased-few-shot-k-128-finetuned-squad-seed-42", "results": []}]} | anas-awadalla/bert-base-uncased-few-shot-k-128-finetuned-squad-seed-42 | null | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-128-finetuned-squad-seed-6
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "bert-base-uncased-few-shot-k-128-finetuned-squad-seed-6", "results": []}]} | anas-awadalla/bert-base-uncased-few-shot-k-128-finetuned-squad-seed-6 | null | [
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-128-finetuned-squad-seed-8
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "bert-base-uncased-few-shot-k-128-finetuned-squad-seed-8", "results": []}]} | anas-awadalla/bert-base-uncased-few-shot-k-128-finetuned-squad-seed-8 | null | [
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-16-finetuned-squad-seed-0
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "bert-base-uncased-few-shot-k-16-finetuned-squad-seed-0", "results": []}]} | anas-awadalla/bert-base-uncased-few-shot-k-16-finetuned-squad-seed-0 | null | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-16-finetuned-squad-seed-10
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "bert-base-uncased-few-shot-k-16-finetuned-squad-seed-10", "results": []}]} | anas-awadalla/bert-base-uncased-few-shot-k-16-finetuned-squad-seed-10 | null | [
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-16-finetuned-squad-seed-2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "bert-base-uncased-few-shot-k-16-finetuned-squad-seed-2", "results": []}]} | anas-awadalla/bert-base-uncased-few-shot-k-16-finetuned-squad-seed-2 | null | [
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-16-finetuned-squad-seed-4
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "bert-base-uncased-few-shot-k-16-finetuned-squad-seed-4", "results": []}]} | anas-awadalla/bert-base-uncased-few-shot-k-16-finetuned-squad-seed-4 | null | [
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-16-finetuned-squad-seed-42
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
- Exact Match: 3.207190160832545
- F1: 6.680463956037787
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "bert-base-uncased-few-shot-k-16-finetuned-squad-seed-42", "results": []}]} | anas-awadalla/bert-base-uncased-few-shot-k-16-finetuned-squad-seed-42 | null | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-16-finetuned-squad-seed-6
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "bert-base-uncased-few-shot-k-16-finetuned-squad-seed-6", "results": []}]} | anas-awadalla/bert-base-uncased-few-shot-k-16-finetuned-squad-seed-6 | null | [
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-16-finetuned-squad-seed-8
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "bert-base-uncased-few-shot-k-16-finetuned-squad-seed-8", "results": []}]} | anas-awadalla/bert-base-uncased-few-shot-k-16-finetuned-squad-seed-8 | null | [
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-256-finetuned-squad-seed-0
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "bert-base-uncased-few-shot-k-256-finetuned-squad-seed-0", "results": []}]} | anas-awadalla/bert-base-uncased-few-shot-k-256-finetuned-squad-seed-0 | null | [
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-256-finetuned-squad-seed-10
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "bert-base-uncased-few-shot-k-256-finetuned-squad-seed-10", "results": []}]} | anas-awadalla/bert-base-uncased-few-shot-k-256-finetuned-squad-seed-10 | null | [
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-256-finetuned-squad-seed-2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "bert-base-uncased-few-shot-k-256-finetuned-squad-seed-2", "results": []}]} | anas-awadalla/bert-base-uncased-few-shot-k-256-finetuned-squad-seed-2 | null | [
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-256-finetuned-squad-seed-4
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "bert-base-uncased-few-shot-k-256-finetuned-squad-seed-4", "results": []}]} | anas-awadalla/bert-base-uncased-few-shot-k-256-finetuned-squad-seed-4 | null | [
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-256-finetuned-squad-seed-6
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "bert-base-uncased-few-shot-k-256-finetuned-squad-seed-6", "results": []}]} | anas-awadalla/bert-base-uncased-few-shot-k-256-finetuned-squad-seed-6 | null | [
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-256-finetuned-squad-seed-8
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "bert-base-uncased-few-shot-k-256-finetuned-squad-seed-8", "results": []}]} | anas-awadalla/bert-base-uncased-few-shot-k-256-finetuned-squad-seed-8 | null | [
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-32-finetuned-squad-seed-0
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
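The list above maps onto `transformers.TrainingArguments` roughly as follows; this is a sketch for orientation, not the actual training script (the `output_dir` name is an assumption, and the Adam betas/epsilon shown above are the library defaults):
```python
from transformers import TrainingArguments

# Rough equivalent of the hyperparameters listed above (sketch only).
args = TrainingArguments(
    output_dir="bert-base-uncased-few-shot-k-32-finetuned-squad-seed-0",  # assumed
    learning_rate=3e-5,
    per_device_train_batch_size=24,
    per_device_eval_batch_size=24,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    max_steps=200,  # small-k runs train for a fixed step count rather than epochs
)
```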
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "bert-base-uncased-few-shot-k-32-finetuned-squad-seed-0", "results": []}]} | anas-awadalla/bert-base-uncased-few-shot-k-32-finetuned-squad-seed-0 | null | [
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-32-finetuned-squad-seed-10
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "bert-base-uncased-few-shot-k-32-finetuned-squad-seed-10", "results": []}]} | anas-awadalla/bert-base-uncased-few-shot-k-32-finetuned-squad-seed-10 | null | [
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-32-finetuned-squad-seed-2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "bert-base-uncased-few-shot-k-32-finetuned-squad-seed-2", "results": []}]} | anas-awadalla/bert-base-uncased-few-shot-k-32-finetuned-squad-seed-2 | null | [
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-32-finetuned-squad-seed-4
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "bert-base-uncased-few-shot-k-32-finetuned-squad-seed-4", "results": []}]} | anas-awadalla/bert-base-uncased-few-shot-k-32-finetuned-squad-seed-4 | null | [
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-32-finetuned-squad-seed-6
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "bert-base-uncased-few-shot-k-32-finetuned-squad-seed-6", "results": []}]} | anas-awadalla/bert-base-uncased-few-shot-k-32-finetuned-squad-seed-6 | null | [
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-32-finetuned-squad-seed-8
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "bert-base-uncased-few-shot-k-32-finetuned-squad-seed-8", "results": []}]} | anas-awadalla/bert-base-uncased-few-shot-k-32-finetuned-squad-seed-8 | null | [
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-512-finetuned-squad-seed-0
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
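The "k-512" and "seed-0" in the model name presumably refer to the size and sampling seed of the few-shot training subset; the card itself does not say how the subset was drawn. A hypothetical reconstruction with the `datasets` library:
```python
from datasets import load_dataset

# Hypothetical: draw a k-example few-shot split from the SQuAD training set.
k, seed = 512, 0
squad_train = load_dataset("squad", split="train")
few_shot_train = squad_train.shuffle(seed=seed).select(range(k))
print(few_shot_train)  # Dataset with 512 rows
```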
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "bert-base-uncased-few-shot-k-512-finetuned-squad-seed-0", "results": []}]} | anas-awadalla/bert-base-uncased-few-shot-k-512-finetuned-squad-seed-0 | null | [
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-512-finetuned-squad-seed-10
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "bert-base-uncased-few-shot-k-512-finetuned-squad-seed-10", "results": []}]} | anas-awadalla/bert-base-uncased-few-shot-k-512-finetuned-squad-seed-10 | null | [
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-512-finetuned-squad-seed-2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "bert-base-uncased-few-shot-k-512-finetuned-squad-seed-2", "results": []}]} | anas-awadalla/bert-base-uncased-few-shot-k-512-finetuned-squad-seed-2 | null | [
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-512-finetuned-squad-seed-4
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "bert-base-uncased-few-shot-k-512-finetuned-squad-seed-4", "results": []}]} | anas-awadalla/bert-base-uncased-few-shot-k-512-finetuned-squad-seed-4 | null | [
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |