pipeline_tag (string, 48 classes) | library_name (string, 205 classes) | text (string, 0-18.3M chars) | metadata (string, 2-1.07B chars) | id (string, 5-122 chars) | last_modified (null) | tags (list, 1-1.84k items) | sha (null) | created_at (string, 25 chars)
---|---|---|---|---|---|---|---|---|
text-classification | transformers | {} | Mael7307/bert-base-uncased-MRPC | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers | {} | Mael7307/bert-base-uncased-QQP | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers | ```
# Remap predicted label indices in place to match the target label convention
# (0 -> 2, 1 -> 0, 2 -> 1).
for i in range(len(predictions)):
    if predictions[i] == 0:
        predictions[i] = 2
    elif predictions[i] == 1:
        predictions[i] = 0
    elif predictions[i] == 2:
        predictions[i] = 1
``` | {} | Mael7307/bert-base-uncased-mnli | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers | ```
for i in range(len(predictions)):
    if predictions[i] == 0:
        predictions[i] = 2
    elif predictions[i] == 1:
        predictions[i] = 0
    elif predictions[i] == 2:
        predictions[i] = 1
``` | {} | Mael7307/bert-base-uncased-snli | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers | {} | Mael7307/roberta-large-mnli | null | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers | {} | Mael7307/roberta-large-mrpc | null | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers | {} | Mael7307/roberta-large-qqp | null | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers | {} | Mael7307/roberta-large-snli | null | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers | {} | Mael7307/roblclass | null | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers | {} | Mael7307/rtevib | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers | {} | Mael7307/vibert | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-base-finetuned-squad
This model is a fine-tuned version of [microsoft/deberta-base](https://huggingface.co/microsoft/deberta-base) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 0.0001
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.0 | 2 | 5.3843 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
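A minimal usage sketch for this checkpoint (the question and context below are illustrative, and the card reports only a very short training run, so outputs should not be expected to be accurate):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint into a question-answering pipeline
qa = pipeline("question-answering", model="MaggieXM/deberta-base-finetuned-squad")

# Illustrative inputs, not taken from the original card
result = qa(
    question="Which dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of microsoft/deberta-base on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```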
| {"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "deberta-base-finetuned-squad", "results": []}]} | MaggieXM/deberta-base-finetuned-squad | null | [
"transformers",
"pytorch",
"tensorboard",
"deberta",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:mit",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 0.01
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.01 | 56 | 4.8054 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "distilbert-base-uncased-finetuned-squad", "results": []}]} | MaggieXM/distilbert-base-uncased-finetuned-squad | null | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
null | null | {} | Maggieseeku/maggie | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
token-classification | transformers | {"license": "other"} | MagicalCat29/model_save_test2 | null | [
"transformers",
"pytorch",
"bert",
"token-classification",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text-generation | transformers | # Rick Sanchez DialoGPT Model | {"tags": "conversational"} | MagmaCubes1133/DialoGPT-large-rick | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
feature-extraction | transformers | {} | Magolor/deepex-ranking-model | null | [
"transformers",
"pytorch",
"bert",
"feature-extraction",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
feature-extraction | transformers | {} | Magolor/deepstruct | null | [
"transformers",
"pytorch",
"bert",
"feature-extraction",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers | {} | Maha/OGBV-gender-bert-hi-en-hasoc20a-fin | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Maha/OGBV-gender-bert-hi-en-hasoc20a_fin | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers | {} | Maha/OGBV-gender-indicbert-ta-eacl_finals | null | [
"transformers",
"pytorch",
"tensorboard",
"albert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers | {} | Maha/OGBV-gender-indicbert-ta-fire20_fin | null | [
"transformers",
"pytorch",
"tensorboard",
"albert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers | {} | Maha/OGBV-gender-indicbert-ta-hasoc21_codemix | null | [
"transformers",
"pytorch",
"tensorboard",
"albert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers | {} | Maha/OGBV-gender-twtrobertabase-en-davidson | null | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers | {} | Maha/OGBV-gender-twtrobertabase-en-founta_final | null | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers | {} | Maha/OGBV-gender-twtrobertabase-en-trac1 | null | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers | {} | Maha/hi-const21-hibert_final | null | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers | {} | Maha/hin-trac1_fin | null | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers | {} | Maha/hin-trac2 | null | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Mahalakshmi/RobustSpeechChallenge | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
automatic-speech-recognition | transformers |
#xlsr-large-53-tamil | {"language": ["ne"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "robust-speech-event", "hf-asr-leaderboard"], "datasets": ["openslr"], "model-index": [{"name": "wav2vec2-large-xlsr-53-tamil", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "openslr", "type": "openslr", "args": "ne"}, "metrics": [{"type": "wer", "value": 25.02, "name": "Test WER"}]}]}]} | Mahalakshmi/wav2vec2-large-xlsr-53-demo-colab | null | [
"transformers",
"pytorch",
"automatic-speech-recognition",
"robust-speech-event",
"hf-asr-leaderboard",
"ne",
"dataset:openslr",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.9475
- eval_wer: 1.0377
- eval_runtime: 70.5646
- eval_samples_per_second: 25.239
- eval_steps_per_second: 3.16
- epoch: 21.05
- step: 2000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 300
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-xls-r-300m-demo-colab", "results": []}]} | Mahalakshmi/wav2vec2-xls-r-300m-demo-colab | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers | {"license": "afl-3.0"} | Maheshwaranr/bert-analyzer | null | [
"transformers",
"tf",
"distilbert",
"text-classification",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | MahiKavi/wav2vec2-base-timit-demo-colab | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | MahmoodQa/1 | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | testing for nothing
| {} | Mahmoud97/Temp | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
null | null | {} | MahmoudBustami/roberta-large-BoolQ | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Persian-Image-Captioning
This model is a fine-tuned version of [Vision Encoder Decoder](https://huggingface.co/docs/transformers/model_doc/vision-encoder-decoder) on coco-flickr-farsi.
### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
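A hedged inference sketch using the generic VisionEncoderDecoder API, assuming the repository ships a compatible feature extractor and tokenizer; the image path is a placeholder:
```python
from PIL import Image
from transformers import AutoFeatureExtractor, AutoTokenizer, VisionEncoderDecoderModel

model_id = "MahsaShahidi/Persian-Image-Captioning"
model = VisionEncoderDecoderModel.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)
feature_extractor = AutoFeatureExtractor.from_pretrained(model_id)

image = Image.open("photo.jpg").convert("RGB")  # placeholder image path
pixel_values = feature_extractor(images=image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values, max_length=32)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```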
| {"tags": ["generated_from_trainer"]} | MahsaShahidi/Persian-Image-Captioning | null | [
"transformers",
"pytorch",
"vision-encoder-decoder",
"generated_from_trainer",
"endpoints_compatible",
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-generation | transformers | ---
tags:
- conversational
---
# Peter Parker DialoGPT Model | {} | MaiaMaiaMaia/DialoGPT-medium-PeterParkerBot | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers | {} | Majed/internet2 | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | MakarenkoMD/test | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
feature-extraction | transformers | {} | Makesh/adelectra-large | null | [
"transformers",
"pytorch",
"electra",
"feature-extraction",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Malaika/distilbert-base-uncased-finetuned-emotion | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text2text-generation | transformers | {} | Malaina/mt5-large-spider | null | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Malar/BM_MLM_PAGEEXT_211101164710 | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Malar/BM_MLM_PAGEEXT_211101165218 | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Malar/BM_MLM_PAGEEXT_211101173159 | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Malar/BM_MLM_PAGEEXT_211101181015 | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Malar/BM_MLM_XTR_211101153349 | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Malar/BM_MLM_XTR_211101153505 | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Malar/BM_MLM_XTR_211101153614 | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Malar/BM_MLM_XTR_211101153729 | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Malar/BM_MLM_XTR_211101153850 | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Malar/BM_MLM_XTR_211101154137 | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Malar/BM_MLM_XTR_211101154518 | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Malar/BM_MLM_XTR_211101154818 | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Malar/BM_MLM_XTR_211101155245 | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Malar/BM_MLM_XTR_211101155603 | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Malar/BM_MLM_XTR_211101155938 | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Malar/BM_MLM_XTR_211101160104 | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Malar/BM_MLM_XTR_211101160848 | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Malar/BM_MLM_XTR_211101164201 | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Malar/distilbert-base-uncased | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Malar/layoutlmv2-finetuned-funsd-test | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
fill-mask | transformers | This model was trained on a Nyanja dataset using the Longformer architecture. | {} | MalawiUniST/ISO6392.nya.ny | null | [
"transformers",
"pytorch",
"longformer",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
null | null | {} | Malek/wav2vectunisian | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | Watch Malignant online: full movie in Latin American Spanish, HD | {} | Malignant/Malignant | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
token-classification | transformers |
# Ælæctra - Finetuned for Named Entity Recognition on the [DaNE dataset](https://danlp.alexandra.dk/304bd159d5de/datasets/ddt.zip) (Hvingelby et al., 2020) by Malte Højmark-Bertelsen.
**Ælæctra** is a Danish Transformer-based language model created to enhance the variety of Danish NLP resources with a more efficient model compared to previous state-of-the-art (SOTA) models.
Ælæctra was pretrained with the ELECTRA-Small (Clark et al., 2020) pretraining approach by using the Danish Gigaword Corpus (Strømberg-Derczynski et al., 2020) and evaluated on Named Entity Recognition (NER) tasks. Since NER only presents a limited picture of Ælæctra's capabilities, I am very interested in further evaluations. Therefore, if you employ it for any task, feel free to hit me up with your findings!
Ælæctra was, as mentioned, created to enhance the Danish NLP capabilities, and please do note how this GitHub still does not support the Danish characters "*Æ, Ø and Å*", as the title of this repository becomes "*-l-ctra*". How ironic.🙂
Here is an example on how to load the finetuned Ælæctra-cased model for Named Entity Recognition in [PyTorch](https://pytorch.org/) using the [🤗Transformers](https://github.com/huggingface/transformers) library:
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("Maltehb/-l-ctra-danish-electra-small-cased-ner-dane")
model = AutoModelForTokenClassification.from_pretrained("Maltehb/-l-ctra-danish-electra-small-cased-ner-dane")
```
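As a rough follow-up sketch, the loaded model and tokenizer can be wrapped in a token-classification pipeline; the example sentence is the widget text from the model card metadata:
```python
from transformers import pipeline

# Wrap the objects loaded above in a token-classification (NER) pipeline
ner = pipeline("token-classification", model=model, tokenizer=tokenizer)

for entity in ner("Chili Jensen, som bor på Danmarksgade 12, køber chilifrugter fra Netto."):
    print(entity["word"], entity["entity"], round(entity["score"], 3))
```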
### Evaluation of current Danish Language Models
Ælæctra, Danish BERT (DaBERT) and multilingual BERT (mBERT) were evaluated:
| Model | Layers | Hidden Size | Params | AVG NER micro-f1 (DaNE-testset) | Average Inference Time (Sec/Epoch) | Download |
| --- | --- | --- | --- | --- | --- | --- |
| Ælæctra Uncased | 12 | 256 | 13.7M | 78.03 (SD = 1.28) | 10.91 | [Link for model](https://www.dropbox.com/s/cag7prs1nvdchqs/%C3%86l%C3%A6ctra.zip?dl=0) |
| Ælæctra Cased | 12 | 256 | 14.7M | 80.08 (SD = 0.26) | 10.92 | [Link for model](https://www.dropbox.com/s/cag7prs1nvdchqs/%C3%86l%C3%A6ctra.zip?dl=0) |
| DaBERT | 12 | 768 | 110M | 84.89 (SD = 0.64) | 43.03 | [Link for model](https://www.dropbox.com/s/19cjaoqvv2jicq9/danish_bert_uncased_v2.zip?dl=1) |
| mBERT Uncased | 12 | 768 | 167M | 80.44 (SD = 0.82) | 72.10 | [Link for model](https://storage.googleapis.com/bert_models/2018_11_03/multilingual_L-12_H-768_A-12.zip) |
| mBERT Cased | 12 | 768 | 177M | 83.79 (SD = 0.91) | 70.56 | [Link for model](https://storage.googleapis.com/bert_models/2018_11_23/multi_cased_L-12_H-768_A-12.zip) |
On [DaNE](https://danlp.alexandra.dk/304bd159d5de/datasets/ddt.zip) (Hvingelby et al., 2020) without the *MISC-tag*, Ælæctra scores slightly worse than both cased and uncased Multilingual BERT (Devlin et al., 2019) and Danish BERT (Danish BERT, 2019/2020), however, Ælæctra is less than one third the size, and uses significantly fewer computational resources to pretrain and instantiate.
### Pretraining
To pretrain Ælæctra it is recommended to build a Docker Container from the [Dockerfile](https://github.com/MalteHB/Ælæctra/tree/master/notebooks/fine-tuning/). Next, simply follow the [pretraining notebooks](https://github.com/MalteHB/Ælæctra/tree/master/infrastructure/Dockerfile/)
The pretraining was done by utilizing a single NVIDIA Tesla V100 GPU with 16 GiB, endowed by the Danish data company [KMD](https://www.kmd.dk/). The pretraining took approximately 4 days and 9.5 hours for both the cased and uncased model
### Fine-tuning
To fine-tune any Ælæctra model follow the [fine-tuning notebooks](https://github.com/MalteHB/Ælæctra/tree/master/notebooks/fine-tuning/)
### References
Clark, K., Luong, M.-T., Le, Q. V., & Manning, C. D. (2020). ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. ArXiv:2003.10555 [Cs]. http://arxiv.org/abs/2003.10555
Danish BERT. (2020). BotXO. https://github.com/botxo/nordic_bert (Original work published 2019)
Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2019). BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. ArXiv:1810.04805 [Cs]. http://arxiv.org/abs/1810.04805
Hvingelby, R., Pauli, A. B., Barrett, M., Rosted, C., Lidegaard, L. M., & Søgaard, A. (2020). DaNE: A Named Entity Resource for Danish. Proceedings of the 12th Language Resources and Evaluation Conference, 4597–4604. https://www.aclweb.org/anthology/2020.lrec-1.565
Strømberg-Derczynski, L., Baglini, R., Christiansen, M. H., Ciosici, M. R., Dalsgaard, J. A., Fusaroli, R., Henrichsen, P. J., Hvingelby, R., Kirkedal, A., Kjeldsen, A. S., Ladefoged, C., Nielsen, F. Å., Petersen, M. L., Rystrøm, J. H., & Varab, D. (2020). The Danish Gigaword Project. ArXiv:2005.03521 [Cs]. http://arxiv.org/abs/2005.03521
#### Acknowledgements
As the majority of this repository is built upon [the works](https://github.com/google-research/electra) by the team at Google who created ELECTRA, a HUGE thanks to them is in order.
A Giga thanks also goes out to the incredible people who collected The Danish Gigaword Corpus (Strømberg-Derczynski et al., 2020).
Furthermore, I would like to thank my supervisor [Riccardo Fusaroli](https://github.com/fusaroli) for the support with the thesis, and a special thanks goes out to [Kenneth Enevoldsen](https://github.com/KennethEnevoldsen) for his continuous feedback.
Lastly, I would like to thank KMD, my colleagues from KMD, and my peers and co-students from Cognitive Science for encouraging me to keep on working hard and holding my head up high!
#### Contact
For help or further information feel free to connect with the author Malte Højmark-Bertelsen on [[email protected]](mailto:[email protected]?subject=[GitHub]%20ÆlæctraCasedNER) or any of the following platforms:
[<img align="left" alt="MalteHB | Twitter" width="22px" src="https://cdn.jsdelivr.net/npm/simple-icons@v3/icons/twitter.svg" />][twitter]
[<img align="left" alt="MalteHB | LinkedIn" width="22px" src="https://cdn.jsdelivr.net/npm/simple-icons@v3/icons/linkedin.svg" />][linkedin]
[<img align="left" alt="MalteHB | Instagram" width="22px" src="https://cdn.jsdelivr.net/npm/simple-icons@v3/icons/instagram.svg" />][instagram]
<br />
</details>
[twitter]: https://twitter.com/malteH_B
[instagram]: https://www.instagram.com/maltemusen/
[linkedin]: https://www.linkedin.com/in/malte-h%C3%B8jmark-bertelsen-9a618017b/ | {"language": "da", "license": "mit", "tags": ["\u00e6l\u00e6ctra", "pytorch", "danish", "ELECTRA-Small", "replaced token detection"], "datasets": ["DAGW"], "metrics": ["f1"], "widget": [{"text": "Chili Jensen, som bor p\u00e5 Danmarksgade 12, k\u00f8ber chilifrugter fra Netto."}]} | Maltehb/aelaectra-danish-electra-small-cased-ner-dane | null | [
"transformers",
"pytorch",
"tf",
"electra",
"token-classification",
"ælæctra",
"danish",
"ELECTRA-Small",
"replaced token detection",
"da",
"dataset:DAGW",
"arxiv:2003.10555",
"arxiv:1810.04805",
"arxiv:2005.03521",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
null | transformers |
# Ælæctra - A Step Towards More Efficient Danish Natural Language Processing
**Ælæctra** is a Danish Transformer-based language model created to enhance the variety of Danish NLP resources with a more efficient model compared to previous state-of-the-art (SOTA) models. Initially a cased and an uncased model are released. It was created as part of a Cognitive Science bachelor's thesis.
Ælæctra was pretrained with the ELECTRA-Small (Clark et al., 2020) pretraining approach by using the Danish Gigaword Corpus (Strømberg-Derczynski et al., 2020) and evaluated on Named Entity Recognition (NER) tasks. Since NER only presents a limited picture of Ælæctra's capabilities, I am very interested in further evaluations. Therefore, if you employ it for any task, feel free to hit me up with your findings!
Ælæctra was, as mentioned, created to enhance the Danish NLP capabilities, and please do note how this GitHub still does not support the Danish characters "*Æ, Ø and Å*", as the title of this repository becomes "*-l-ctra*". How ironic.🙂
Here is an example on how to load both the cased and the uncased Ælæctra model in [PyTorch](https://pytorch.org/) using the [🤗Transformers](https://github.com/huggingface/transformers) library:
```python
from transformers import AutoTokenizer, AutoModelForPreTraining
tokenizer = AutoTokenizer.from_pretrained("Maltehb/-l-ctra-danish-electra-small-cased")
model = AutoModelForPreTraining.from_pretrained("Maltehb/-l-ctra-danish-electra-small-cased")
```
```python
from transformers import AutoTokenizer, AutoModelForPreTraining
tokenizer = AutoTokenizer.from_pretrained("Maltehb/-l-ctra-danish-electra-small-uncased")
model = AutoModelForPreTraining.from_pretrained("Maltehb/-l-ctra-danish-electra-small-uncased")
```
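As a small, hedged illustration, the discriminator head of the checkpoint loaded above can be queried for per-token replaced-token scores; the Danish example sentence is made up for this sketch:
```python
import torch

# One logit per token; higher values mean the discriminator judges the token as "replaced"
inputs = tokenizer("København er hovedstaden i Danmark.", return_tensors="pt")
with torch.no_grad():
    token_logits = model(**inputs).logits
print(torch.sigmoid(token_logits))
```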
### Evaluation of current Danish Language Models
Ælæctra, Danish BERT (DaBERT) and multilingual BERT (mBERT) were evaluated:
| Model | Layers | Hidden Size | Params | AVG NER micro-f1 (DaNE-testset) | Average Inference Time (Sec/Epoch) | Download |
| --- | --- | --- | --- | --- | --- | --- |
| Ælæctra Uncased | 12 | 256 | 13.7M | 78.03 (SD = 1.28) | 10.91 | [Link for model](https://www.dropbox.com/s/cag7prs1nvdchqs/%C3%86l%C3%A6ctra.zip?dl=0) |
| Ælæctra Cased | 12 | 256 | 14.7M | 80.08 (SD = 0.26) | 10.92 | [Link for model](https://www.dropbox.com/s/cag7prs1nvdchqs/%C3%86l%C3%A6ctra.zip?dl=0) |
| DaBERT | 12 | 768 | 110M | 84.89 (SD = 0.64) | 43.03 | [Link for model](https://www.dropbox.com/s/19cjaoqvv2jicq9/danish_bert_uncased_v2.zip?dl=1) |
| mBERT Uncased | 12 | 768 | 167M | 80.44 (SD = 0.82) | 72.10 | [Link for model](https://storage.googleapis.com/bert_models/2018_11_03/multilingual_L-12_H-768_A-12.zip) |
| mBERT Cased | 12 | 768 | 177M | 83.79 (SD = 0.91) | 70.56 | [Link for model](https://storage.googleapis.com/bert_models/2018_11_23/multi_cased_L-12_H-768_A-12.zip) |
On [DaNE](https://danlp.alexandra.dk/304bd159d5de/datasets/ddt.zip) (Hvingelby et al., 2020), Ælæctra scores slightly worse than both cased and uncased Multilingual BERT (Devlin et al., 2019) and Danish BERT (Danish BERT, 2019/2020), however, Ælæctra is less than one third the size, and uses significantly fewer computational resources to pretrain and instantiate. For a full description of the evaluation and specification of the model read the thesis: 'Ælæctra - A Step Towards More Efficient Danish Natural Language Processing'.
### Pretraining
To pretrain Ælæctra it is recommended to build a Docker Container from the [Dockerfile](https://github.com/MalteHB/-l-ctra/blob/master/infrastructure/Dockerfile). Next, simply follow the [pretraining notebooks](https://github.com/MalteHB/-l-ctra/blob/master/notebooks/pretraining/)
The pretraining was done by utilizing a single NVIDIA Tesla V100 GPU with 16 GiB, endowed by the Danish data company [KMD](https://www.kmd.dk/). The pretraining took approximately 4 days and 9.5 hours for both the cased and uncased model
### Fine-tuning
To fine-tune any Ælæctra model follow the [fine-tuning notebooks](https://github.com/MalteHB/-l-ctra/blob/master/notebooks/fine-tuning/)
### References
Clark, K., Luong, M.-T., Le, Q. V., & Manning, C. D. (2020). ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. ArXiv:2003.10555 [Cs]. http://arxiv.org/abs/2003.10555
Danish BERT. (2020). BotXO. https://github.com/botxo/nordic_bert (Original work published 2019)
Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2019). BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. ArXiv:1810.04805 [Cs]. http://arxiv.org/abs/1810.04805
Hvingelby, R., Pauli, A. B., Barrett, M., Rosted, C., Lidegaard, L. M., & Søgaard, A. (2020). DaNE: A Named Entity Resource for Danish. Proceedings of the 12th Language Resources and Evaluation Conference, 4597–4604. https://www.aclweb.org/anthology/2020.lrec-1.565
Strømberg-Derczynski, L., Baglini, R., Christiansen, M. H., Ciosici, M. R., Dalsgaard, J. A., Fusaroli, R., Henrichsen, P. J., Hvingelby, R., Kirkedal, A., Kjeldsen, A. S., Ladefoged, C., Nielsen, F. Å., Petersen, M. L., Rystrøm, J. H., & Varab, D. (2020). The Danish Gigaword Project. ArXiv:2005.03521 [Cs]. http://arxiv.org/abs/2005.03521
#### Acknowledgements
As the majority of this repository is built upon [the works](https://github.com/google-research/electra) by the team at Google who created ELECTRA, a HUGE thanks to them is in order.
A Giga thanks also goes out to the incredible people who collected The Danish Gigaword Corpus (Strømberg-Derczynski et al., 2020).
Furthermore, I would like to thank my supervisor [Riccardo Fusaroli](https://github.com/fusaroli) for the support with the thesis, and a special thanks goes out to [Kenneth Enevoldsen](https://github.com/KennethEnevoldsen) for his continuous feedback.
Lastly, I would like to thank KMD, my colleagues from KMD, and my peers and co-students from Cognitive Science for encouraging me to keep on working hard and holding my head up high!
#### Contact
For help or further information feel free to connect with the author Malte Højmark-Bertelsen on [[email protected]](mailto:[email protected]?subject=[GitHub]%20Ælæctra) or any of the following platforms:
[<img align="left" alt="MalteHB | Twitter" width="22px" src="https://cdn.jsdelivr.net/npm/simple-icons@v3/icons/twitter.svg" />][twitter]
[<img align="left" alt="MalteHB | LinkedIn" width="22px" src="https://cdn.jsdelivr.net/npm/simple-icons@v3/icons/linkedin.svg" />][linkedin]
[<img align="left" alt="MalteHB | Instagram" width="22px" src="https://cdn.jsdelivr.net/npm/simple-icons@v3/icons/instagram.svg" />][instagram]
<br />
</details>
[twitter]: https://twitter.com/malteH_B
[instagram]: https://www.instagram.com/maltemusen/
[linkedin]: https://www.linkedin.com/in/malte-h%C3%B8jmark-bertelsen-9a618017b/ | {"language": "da", "license": "mit", "tags": ["\u00e6l\u00e6ctra", "pytorch", "danish", "ELECTRA-Small", "replaced token detection"], "datasets": ["DAGW"], "metrics": ["f1"], "co2_eq_emissions": 4009.5} | Maltehb/aelaectra-danish-electra-small-cased | null | [
"transformers",
"pytorch",
"tf",
"electra",
"pretraining",
"ælæctra",
"danish",
"ELECTRA-Small",
"replaced token detection",
"da",
"dataset:DAGW",
"arxiv:2003.10555",
"arxiv:1810.04805",
"arxiv:2005.03521",
"license:mit",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
token-classification | transformers |
# Ælæctra - Finetuned for Named Entity Recognition on the [DaNE dataset](https://danlp.alexandra.dk/304bd159d5de/datasets/ddt.zip) (Hvingelby et al., 2020) by Malte Højmark-Bertelsen.
**Ælæctra** is a Danish Transformer-based language model created to enhance the variety of Danish NLP resources with a more efficient model compared to previous state-of-the-art (SOTA) models.
Ælæctra was pretrained with the ELECTRA-Small (Clark et al., 2020) pretraining approach by using the Danish Gigaword Corpus (Strømberg-Derczynski et al., 2020) and evaluated on Named Entity Recognition (NER) tasks. Since NER only presents a limited picture of Ælæctra's capabilities, I am very interested in further evaluations. Therefore, if you employ it for any task, feel free to hit me up with your findings!
Ælæctra was, as mentioned, created to enhance the Danish NLP capabilities, and please do note how this GitHub still does not support the Danish characters "*Æ, Ø and Å*", as the title of this repository becomes "*-l-ctra*". How ironic.🙂
Here is an example on how to load the finetuned Ælæctra-uncased model for Named Entity Recognition in [PyTorch](https://pytorch.org/) using the [🤗Transformers](https://github.com/huggingface/transformers) library:
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("Maltehb/-l-ctra-danish-electra-small-uncased-ner-dane")
model = AutoModelForTokenClassification.from_pretrained("Maltehb/-l-ctra-danish-electra-small-uncased-ner-dane")
```
### Evaluation of current Danish Language Models
Ælæctra, Danish BERT (DaBERT) and multilingual BERT (mBERT) were evaluated:
| Model | Layers | Hidden Size | Params | AVG NER micro-f1 (DaNE-testset) | Average Inference Time (Sec/Epoch) | Download |
| --- | --- | --- | --- | --- | --- | --- |
| Ælæctra Uncased | 12 | 256 | 13.7M | 78.03 (SD = 1.28) | 10.91 | [Link for model](https://www.dropbox.com/s/cag7prs1nvdchqs/%C3%86l%C3%A6ctra.zip?dl=0) |
| Ælæctra Cased | 12 | 256 | 14.7M | 80.08 (SD = 0.26) | 10.92 | [Link for model](https://www.dropbox.com/s/cag7prs1nvdchqs/%C3%86l%C3%A6ctra.zip?dl=0) |
| DaBERT | 12 | 768 | 110M | 84.89 (SD = 0.64) | 43.03 | [Link for model](https://www.dropbox.com/s/19cjaoqvv2jicq9/danish_bert_uncased_v2.zip?dl=1) |
| mBERT Uncased | 12 | 768 | 167M | 80.44 (SD = 0.82) | 72.10 | [Link for model](https://storage.googleapis.com/bert_models/2018_11_03/multilingual_L-12_H-768_A-12.zip) |
| mBERT Cased | 12 | 768 | 177M | 83.79 (SD = 0.91) | 70.56 | [Link for model](https://storage.googleapis.com/bert_models/2018_11_23/multi_cased_L-12_H-768_A-12.zip) |
On [DaNE](https://danlp.alexandra.dk/304bd159d5de/datasets/ddt.zip) (Hvingelby et al., 2020) without the *MISC-tag*, Ælæctra scores slightly worse than both cased and uncased Multilingual BERT (Devlin et al., 2019) and Danish BERT (Danish BERT, 2019/2020), however, Ælæctra is less than one third the size, and uses significantly fewer computational resources to pretrain and instantiate.
### Pretraining
To pretrain Ælæctra it is recommended to build a Docker Container from the [Dockerfile](https://github.com/MalteHB/Ælæctra/tree/master/notebooks/fine-tuning/). Next, simply follow the [pretraining notebooks](https://github.com/MalteHB/Ælæctra/tree/master/infrastructure/Dockerfile/)
The pretraining was done by utilizing a single NVIDIA Tesla V100 GPU with 16 GiB, endowed by the Danish data company [KMD](https://www.kmd.dk/). The pretraining took approximately 4 days and 9.5 hours for both the cased and uncased model
### Fine-tuning
To fine-tune any Ælæctra model follow the [fine-tuning notebooks](https://github.com/MalteHB/Ælæctra/tree/master/notebooks/fine-tuning/)
### References
Clark, K., Luong, M.-T., Le, Q. V., & Manning, C. D. (2020). ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. ArXiv:2003.10555 [Cs]. http://arxiv.org/abs/2003.10555
Danish BERT. (2020). BotXO. https://github.com/botxo/nordic_bert (Original work published 2019)
Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2019). BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. ArXiv:1810.04805 [Cs]. http://arxiv.org/abs/1810.04805
Hvingelby, R., Pauli, A. B., Barrett, M., Rosted, C., Lidegaard, L. M., & Søgaard, A. (2020). DaNE: A Named Entity Resource for Danish. Proceedings of the 12th Language Resources and Evaluation Conference, 4597–4604. https://www.aclweb.org/anthology/2020.lrec-1.565
Strømberg-Derczynski, L., Baglini, R., Christiansen, M. H., Ciosici, M. R., Dalsgaard, J. A., Fusaroli, R., Henrichsen, P. J., Hvingelby, R., Kirkedal, A., Kjeldsen, A. S., Ladefoged, C., Nielsen, F. Å., Petersen, M. L., Rystrøm, J. H., & Varab, D. (2020). The Danish Gigaword Project. ArXiv:2005.03521 [Cs]. http://arxiv.org/abs/2005.03521
#### Acknowledgements
As the majority of this repository is built upon [the works](https://github.com/google-research/electra) by the team at Google who created ELECTRA, a HUGE thanks to them is in order.
A Giga thanks also goes out to the incredible people who collected The Danish Gigaword Corpus (Strømberg-Derczynski et al., 2020).
Furthermore, I would like to thank my supervisor [Riccardo Fusaroli](https://github.com/fusaroli) for the support with the thesis, and a special thanks goes out to [Kenneth Enevoldsen](https://github.com/KennethEnevoldsen) for his continuous feedback.
Lastly, I would like to thank KMD, my colleagues from KMD, and my peers and co-students from Cognitive Science for encouraging me to keep on working hard and holding my head up high!
#### Contact
For help or further information feel free to connect with the author Malte Højmark-Bertelsen on [[email protected]](mailto:[email protected]?subject=[GitHub]%20ÆlæctraUncasedNER) or any of the following platforms:
[<img align="left" alt="MalteHB | Twitter" width="22px" src="https://cdn.jsdelivr.net/npm/simple-icons@v3/icons/twitter.svg" />][twitter]
[<img align="left" alt="MalteHB | LinkedIn" width="22px" src="https://cdn.jsdelivr.net/npm/simple-icons@v3/icons/linkedin.svg" />][linkedin]
[<img align="left" alt="MalteHB | Instagram" width="22px" src="https://cdn.jsdelivr.net/npm/simple-icons@v3/icons/instagram.svg" />][instagram]
<br />
</details>
[twitter]: https://twitter.com/malteH_B
[instagram]: https://www.instagram.com/maltemusen/
[linkedin]: https://www.linkedin.com/in/malte-h%C3%B8jmark-bertelsen-9a618017b/ | {"language": "da", "license": "mit", "tags": ["\u00e6l\u00e6ctra", "pytorch", "danish", "ELECTRA-Small", "replaced token detection"], "datasets": ["DAGW"], "metrics": ["f1"], "widget": [{"text": "Chili Jensen, som bor p\u00e5 Danmarksgade 12, k\u00f8ber chilifrugter fra Netto."}]} | Maltehb/aelaectra-danish-electra-small-uncased-ner-dane | null | [
"transformers",
"pytorch",
"tf",
"electra",
"token-classification",
"ælæctra",
"danish",
"ELECTRA-Small",
"replaced token detection",
"da",
"dataset:DAGW",
"arxiv:2003.10555",
"arxiv:1810.04805",
"arxiv:2005.03521",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
null | transformers |
# Ælæctra - A Step Towards More Efficient Danish Natural Language Processing
**Ælæctra** is a Danish Transformer-based language model created to enhance the variety of Danish NLP resources with a more efficient model compared to previous state-of-the-art (SOTA) models. Initially a cased and an uncased model are released. It was created as part of a Cognitive Science bachelor's thesis.
Ælæctra was pretrained with the ELECTRA-Small (Clark et al., 2020) pretraining approach by using the Danish Gigaword Corpus (Strømberg-Derczynski et al., 2020) and evaluated on Named Entity Recognition (NER) tasks. Since NER only presents a limited picture of Ælæctra's capabilities, I am very interested in further evaluations. Therefore, if you employ it for any task, feel free to hit me up with your findings!
Ælæctra was, as mentioned, created to enhance the Danish NLP capabilities, and please do note how this GitHub still does not support the Danish characters "*Æ, Ø and Å*", as the title of this repository becomes "*-l-ctra*". How ironic.🙂
Here is an example on how to load both the cased and the uncased Ælæctra model in [PyTorch](https://pytorch.org/) using the [🤗Transformers](https://github.com/huggingface/transformers) library:
```python
from transformers import AutoTokenizer, AutoModelForPreTraining
tokenizer = AutoTokenizer.from_pretrained("Maltehb/-l-ctra-cased")
model = AutoModelForPreTraining.from_pretrained("Maltehb/-l-ctra-cased")
```
```python
from transformers import AutoTokenizer, AutoModelForPreTraining
tokenizer = AutoTokenizer.from_pretrained("Maltehb/-l-ctra-uncased")
model = AutoModelForPreTraining.from_pretrained("Maltehb/-l-ctra-uncased")
```
### Evaluation of current Danish Language Models
Ælæctra, Danish BERT (DaBERT) and multilingual BERT (mBERT) were evaluated:
| Model | Layers | Hidden Size | Params | AVG NER micro-f1 (DaNE-testset) | Average Inference Time (Sec/Epoch) | Download |
| --- | --- | --- | --- | --- | --- | --- |
| Ælæctra Uncased | 12 | 256 | 13.7M | 78.03 (SD = 1.28) | 10.91 | [Link for model](https://www.dropbox.com/s/cag7prs1nvdchqs/%C3%86l%C3%A6ctra.zip?dl=0) |
| Ælæctra Cased | 12 | 256 | 14.7M | 80.08 (SD = 0.26) | 10.92 | [Link for model](https://www.dropbox.com/s/cag7prs1nvdchqs/%C3%86l%C3%A6ctra.zip?dl=0) |
| DaBERT | 12 | 768 | 110M | 84.89 (SD = 0.64) | 43.03 | [Link for model](https://www.dropbox.com/s/19cjaoqvv2jicq9/danish_bert_uncased_v2.zip?dl=1) |
| mBERT Uncased | 12 | 768 | 167M | 80.44 (SD = 0.82) | 72.10 | [Link for model](https://storage.googleapis.com/bert_models/2018_11_03/multilingual_L-12_H-768_A-12.zip) |
| mBERT Cased | 12 | 768 | 177M | 83.79 (SD = 0.91) | 70.56 | [Link for model](https://storage.googleapis.com/bert_models/2018_11_23/multi_cased_L-12_H-768_A-12.zip) |
On [DaNE](https://danlp.alexandra.dk/304bd159d5de/datasets/ddt.zip) (Hvingelby et al., 2020), Ælæctra scores slightly worse than both cased and uncased Multilingual BERT (Devlin et al., 2019) and Danish BERT (Danish BERT, 2019/2020), however, Ælæctra is less than one third the size, and uses significantly fewer computational resources to pretrain and instantiate. For a full description of the evaluation and specification of the model read the thesis: 'Ælæctra - A Step Towards More Efficient Danish Natural Language Processing'.
### Pretraining
To pretrain Ælæctra it is recommended to build a Docker Container from the [Dockerfile](https://github.com/MalteHB/Ælæctra/tree/master/notebooks/fine-tuning/). Next, simply follow the [pretraining notebooks](https://github.com/MalteHB/Ælæctra/tree/master/infrastructure/Dockerfile/)
The pretraining was done by utilizing a single NVIDIA Tesla V100 GPU with 16 GiB, endowed by the Danish data company [KMD](https://www.kmd.dk/). The pretraining took approximately 4 days and 9.5 hours for both the cased and uncased model
### Fine-tuning
To fine-tune any Ælæctra model follow the [fine-tuning notebooks](https://github.com/MalteHB/Ælæctra/tree/master/notebooks/fine-tuning/)
### References
Clark, K., Luong, M.-T., Le, Q. V., & Manning, C. D. (2020). ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. ArXiv:2003.10555 [Cs]. http://arxiv.org/abs/2003.10555
Danish BERT. (2020). BotXO. https://github.com/botxo/nordic_bert (Original work published 2019)
Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2019). BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. ArXiv:1810.04805 [Cs]. http://arxiv.org/abs/1810.04805
Hvingelby, R., Pauli, A. B., Barrett, M., Rosted, C., Lidegaard, L. M., & Søgaard, A. (2020). DaNE: A Named Entity Resource for Danish. Proceedings of the 12th Language Resources and Evaluation Conference, 4597–4604. https://www.aclweb.org/anthology/2020.lrec-1.565
Strømberg-Derczynski, L., Baglini, R., Christiansen, M. H., Ciosici, M. R., Dalsgaard, J. A., Fusaroli, R., Henrichsen, P. J., Hvingelby, R., Kirkedal, A., Kjeldsen, A. S., Ladefoged, C., Nielsen, F. Å., Petersen, M. L., Rystrøm, J. H., & Varab, D. (2020). The Danish Gigaword Project. ArXiv:2005.03521 [Cs]. http://arxiv.org/abs/2005.03521
#### Acknowledgements
As the majority of this repository is built upon [the works](https://github.com/google-research/electra) by the team at Google who created ELECTRA, a HUGE thanks to them is in order.
A Giga thanks also goes out to the incredible people who collected The Danish Gigaword Corpus (Strømberg-Derczynski et al., 2020).
Furthermore, I would like to thank my supervisor [Riccardo Fusaroli](https://github.com/fusaroli) for the support with the thesis, and a special thanks goes out to [Kenneth Enevoldsen](https://github.com/KennethEnevoldsen) for his continuous feedback.
Lastly, I would like to thank KMD, my colleagues from KMD, and my peers and co-students from Cognitive Science for encouraging me to keep on working hard and holding my head up high!
#### Contact
For help or further information feel free to connect with the author Malte Højmark-Bertelsen on [[email protected]](mailto:[email protected]?subject=[GitHub]%20Ælæctra) or any of the following platforms:
[<img align="left" alt="MalteHB | Twitter" width="22px" src="https://cdn.jsdelivr.net/npm/simple-icons@v3/icons/twitter.svg" />][twitter]
[<img align="left" alt="MalteHB | LinkedIn" width="22px" src="https://cdn.jsdelivr.net/npm/simple-icons@v3/icons/linkedin.svg" />][linkedin]
[<img align="left" alt="MalteHB | Instagram" width="22px" src="https://cdn.jsdelivr.net/npm/simple-icons@v3/icons/instagram.svg" />][instagram]
<br />
</details>
[twitter]: https://twitter.com/malteH_B
[instagram]: https://www.instagram.com/maltemusen/
[linkedin]: https://www.linkedin.com/in/malte-h%C3%B8jmark-bertelsen-9a618017b/ | {"language": "da", "license": "mit", "tags": ["\u00e6l\u00e6ctra", "pytorch", "danish", "ELECTRA-Small", "replaced token detection"], "datasets": ["DAGW"], "metrics": ["f1"], "co2_eq_emissions": 4009.5} | Maltehb/aelaectra-danish-electra-small-uncased | null | [
"transformers",
"pytorch",
"electra",
"pretraining",
"ælæctra",
"danish",
"ELECTRA-Small",
"replaced token detection",
"da",
"dataset:DAGW",
"arxiv:2003.10555",
"arxiv:1810.04805",
"arxiv:2005.03521",
"license:mit",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
token-classification | transformers |
# Danish BERT (version 2, uncased) by [Certainly](https://certainly.io/) (previously known as BotXO) finetuned for Named Entity Recognition on the [DaNE dataset](https://danlp.alexandra.dk/304bd159d5de/datasets/ddt.zip) (Hvingelby et al., 2020) by Malte Højmark-Bertelsen.
Humongous amounts of credit needs to go to [Certainly](https://certainly.io/) (previously known as BotXO), for pretraining the Danish BERT. For data and training details see their [GitHub repository](https://github.com/certainlyio/nordic_bert) or [this article](https://www.certainly.io/blog/danish-bert-model/). You can also visit their [organization page](https://huggingface.co/Certainly) on Hugging Face.
It is both available in TensorFlow and Pytorch format.
The original TensorFlow version can be downloaded using [this link](https://www.dropbox.com/s/19cjaoqvv2jicq9/danish_bert_uncased_v2.zip?dl=1).
Here is an example on how to load Danish BERT for token classification in PyTorch using the [🤗Transformers](https://github.com/huggingface/transformers) library:
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("Maltehb/danish-bert-botxo-ner-dane")
model = AutoModelForTokenClassification.from_pretrained("Maltehb/danish-bert-botxo-ner-dane")
```
### References
Danish BERT. (2020). BotXO. https://github.com/botxo/nordic_bert (Original work published 2019)
Hvingelby, R., Pauli, A. B., Barrett, M., Rosted, C., Lidegaard, L. M., & Søgaard, A. (2020). DaNE: A Named Entity Resource for Danish. Proceedings of the 12th Language Resources and Evaluation Conference, 4597–4604. https://www.aclweb.org/anthology/2020.lrec-1.565
#### Contact
For help or further information feel free to connect with the author Malte Højmark-Bertelsen on [[email protected]](mailto:[email protected]?subject=[GitHub]%20DanishBERTUncasedNER) or any of the following platforms:
[<img align="left" alt="MalteHB | Twitter" width="22px" src="https://cdn.jsdelivr.net/npm/simple-icons@v3/icons/twitter.svg" />][twitter]
[<img align="left" alt="MalteHB | LinkedIn" width="22px" src="https://cdn.jsdelivr.net/npm/simple-icons@v3/icons/linkedin.svg" />][linkedin]
[<img align="left" alt="MalteHB | Instagram" width="22px" src="https://cdn.jsdelivr.net/npm/simple-icons@v3/icons/instagram.svg" />][instagram]
<br />
</details>
[twitter]: https://twitter.com/malteH_B
[instagram]: https://www.instagram.com/maltemusen/
[linkedin]: https://www.linkedin.com/in/malte-h%C3%B8jmark-bertelsen-9a618017b/ | {"language": "da", "license": "cc-by-4.0", "tags": ["danish", "bert", "masked-lm", "botxo"], "datasets": ["common_crawl", "wikipedia", "dindebat.dk", "hestenettet.dk", "danish_OpenSubtitles"], "widget": [{"text": "Chili Jensen, som bor p\u00e5 Danmarksgade 12, k\u00f8ber chilifrugter fra Netto."}]} | Maltehb/danish-bert-botxo-ner-dane | null | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"token-classification",
"danish",
"masked-lm",
"botxo",
"da",
"dataset:common_crawl",
"dataset:wikipedia",
"dataset:dindebat.dk",
"dataset:hestenettet.dk",
"dataset:danish_OpenSubtitles",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
fill-mask | transformers |
# Danish BERT (version 2, uncased) by [Certainly](https://certainly.io/) (previously known as BotXO).
All credit goes to [Certainly](https://certainly.io/) (previously known as BotXO), who developed Danish BERT. For data and training details see their [GitHub repository](https://github.com/certainlyio/nordic_bert) or [this article](https://www.certainly.io/blog/danish-bert-model/). You can also visit their [organization page](https://huggingface.co/Certainly) on Hugging Face.
It is both available in TensorFlow and Pytorch format.
The original TensorFlow version can be downloaded using [this link](https://www.dropbox.com/s/19cjaoqvv2jicq9/danish_bert_uncased_v2.zip?dl=1).
Here is an example on how to load Danish BERT in PyTorch using the [🤗Transformers](https://github.com/huggingface/transformers) library:
```python
from transformers import AutoTokenizer, AutoModelForPreTraining
tokenizer = AutoTokenizer.from_pretrained("Maltehb/danish-bert-botxo")
model = AutoModelForPreTraining.from_pretrained("Maltehb/danish-bert-botxo")
```
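A minimal fill-mask sketch, reusing the widget sentence from the card metadata:
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="Maltehb/danish-bert-botxo")

# Widget sentence from the card metadata
for prediction in fill_mask("København er [MASK] i Danmark."):
    print(prediction["token_str"], round(prediction["score"], 3))
```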
| {"language": "da", "license": "cc-by-4.0", "tags": ["danish", "bert", "masked-lm", "Certainly"], "datasets": ["common_crawl", "wikipedia", "dindebat.dk", "hestenettet.dk", "danishOpenSubtitles"], "pipeline_tag": "fill-mask", "widget": [{"text": "K\u00f8benhavn er [MASK] i Danmark."}]} | Maltehb/danish-bert-botxo | null | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"token-classification",
"danish",
"masked-lm",
"Certainly",
"fill-mask",
"da",
"dataset:common_crawl",
"dataset:wikipedia",
"dataset:dindebat.dk",
"dataset:hestenettet.dk",
"dataset:danishOpenSubtitles",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
feature-extraction | transformers | {} | Maltehb/roberta-base-scandinavian | null | [
"transformers",
"pytorch",
"roberta",
"feature-extraction",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text-generation | transformers | hello
| {} | Mamatha/agri-gpt2 | null | [
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
null | null | {} | MandoNYC/love | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text-generation | transformers |
# Mikasa Ackermann DialoGPT Model | {"tags": ["conversational"]} | Mandy/DialoGPT-small-Mikasa | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
null | null | {} | MangoMan12Bam/ricknmorte | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - UR dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8433
- Wer: 0.9852
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 2000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 1.468 | 166.67 | 500 | 3.0262 | 1.0035 |
| 0.0572 | 333.33 | 1000 | 3.5352 | 0.9721 |
| 0.0209 | 500.0 | 1500 | 3.7266 | 0.9834 |
| 0.0092 | 666.67 | 2000 | 3.8433 | 0.9852 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
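A hedged inference sketch for this checkpoint; note that the reported WER is close to 1.0, so this only illustrates the API, and the audio path is a placeholder for a 16 kHz mono recording:
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Maniac/wav2vec2-xls-r-60-urdu")

# "sample.wav" is a placeholder path to a 16 kHz mono audio file
print(asr("sample.wav")["text"])
```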
| {"language": ["ur"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "", "results": []}]} | Maniac/wav2vec2-xls-r-60-urdu | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"ur",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - UR dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5614
- Wer: 0.6765
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.9115 | 20.83 | 500 | 1.5400 | 0.7280 |
| 0.1155 | 41.67 | 1000 | 1.5614 | 0.6765 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0 | {"language": ["ur"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer", "sv", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_7_0"], "model-index": [{"name": "", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8.0", "type": "mozilla-foundation/common_voice_8_0", "args": "ur"}, "metrics": [{"type": "wer", "value": 67.48, "name": "Test WER"}]}]}]} | Maniac/wav2vec2-xls-r-urdu | null | [
"transformers",
"pytorch",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"sv",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"ur",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers | Language Detection Model for Nepali, English, Hindi and Spanish
Model fine-tuned from xlm-roberta-large. | {} | Manishl7/xlm-roberta-large-language-detection | null | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
null | null | {} | Manivarsh/Uni_Model | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Manivarsh/Uni_Model_2 | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Manorama/sentiment_analysis | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text-generation | transformers |
# Harry Potter DialoGPT Model | {"tags": ["conversational"]} | Manthan/DialoGPT-small-harrypotter | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
null | null | {} | Manuel1/test | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | # Streamlit demo excerpt for lowlight enhancement. The imports below are assumed additions;
# the original snippet starts mid-function ("return im"), so a hypothetical enhance() wrapper is used.
import streamlit as st
from PIL import Image

def enhance(image):
    # The enhancement network itself is not part of this excerpt.
    im = image  # placeholder so the surviving "return im" stays valid
    return im

def main():
    st.title("Lowlight Enhancement")
    st.write("This is a simple lowlight enhancement app with great performance and does not require paired images to train.")
    st.write("The model runs at 1000/11 FPS on single GPU/CPU on images with a size of 1200*900*3")
    uploaded_file = st.file_uploader("Lowlight Image")
    if uploaded_file:
        data_lowlight = Image.open(uploaded_file)
        col1, col2 = st.columns(2)
        col1.write("Original (Lowlight)")
        col1.image(data_lowlight, caption="Lowlight Image", use_column_width=True)
        # (the original excerpt is truncated here; the enhanced-image display is not shown)
main()
| {} | Manyman3231/lowlight-enhancement | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-samsum
This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4844
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.6936 | 0.54 | 500 | 1.4844 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
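### Example usage (sketch)
A minimal usage sketch for the fine-tuned checkpoint (the model id is taken from this repository; the dialogue is an invented example in the SAMSum style):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="Mapcar/pegasus-samsum")
dialogue = (
    "Anna: Are we still on for lunch?\n"
    "Tom: Yes, 12:30 at the usual place.\n"
    "Anna: Great, see you there."
)
print(summarizer(dialogue)[0]["summary_text"])
```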
| {"tags": ["generated_from_trainer"], "datasets": ["samsum"], "model-index": [{"name": "pegasus-samsum", "results": []}]} | Mapcar/pegasus-samsum | null | [
"transformers",
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"dataset:samsum",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
token-classification | transformers | {} | Mar-C/BERT-NER | null | [
"transformers",
"pytorch",
"bert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
fill-mask | transformers | {} | Mar-C/TwitterSentiment | null | [
"transformers",
"tf",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text-generation | transformers |
# Harry Potter DialoGPT Model | {"tags": ["conversational"]} | Mara/DialoGPT-medium-harrypotter | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
null | null | {} | Mara/Discord_Chatbot | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text2text-generation | transformers |
# Pegasus XSUM Gigaword
## Model description
Pegasus-XSUM fine-tuned on the Gigaword summarization task. It performs significantly better than pegasus-gigaword, but still does not match the performance reported in the original paper.
## Intended uses & limitations
Produces short summaries with the coherence of the XSUM Model
#### How to use
```python
# A minimal usage sketch (the model id is taken from this repository):
from transformers import pipeline
summarizer = pipeline("summarization", model="Marc/pegasus_xsum_gigaword")
print(summarizer("Your long input document goes here.")[0]["summary_text"])
```
#### Limitations and bias
Still has all the biases of any of the abstractive models, but seems a little less prone to hallucination.
## Training data
Initialized with pegasus-XSUM
## Training procedure
Trained for 11,500 iterations on the Gigaword corpus using the out-of-the-box Hugging Face seq2seq example script with the default parameters.
## Eval results
Evaluated on the Gigaword test set with the Hugging Face example script and default parameters:
run_summarization.py --model_name_or_path pegasus-xsum/checkpoint-11500/ --do_predict --dataset_name gigaword --dataset_config "3.0.0" --source_prefix "summarize: " --output_dir pegasus-xsum --per_device_train_batch_size=8 --per_device_eval_batch_size=8 --overwrite_output_dir --predict_with_generate
| Metric | Score |
| ----------- | ----------- |
| eval_rouge1 | 34.1958 |
| eval_rouge2 | 15.4033 |
| eval_rougeL | 31.4488 |
run_summarization.py --model_name_or_path google/pegasus-gigaword --do_predict --dataset_name gigaword --dataset_config "3.0.0" --source_prefix "summarize: " --output_dir pegasus-xsum --per_device_train_batch_size=8 --per_device_eval_batch_size=8 --overwrite_output_dir --predict_with_generate
| Metric | Score |
| ----------- | ----------- |
| eval_rouge1 | 20.8111 |
| eval_rouge2 | 8.766 |
| eval_rougeL | 18.4431 |
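For a quick spot check of individual predictions outside `run_summarization.py`, ROUGE can also be computed with the `rouge_score` package (an assumption — the script itself relies on the ROUGE metric bundled with the `datasets` library):
```python
from rouge_score import rouge_scorer

# Hypothetical example strings; replace with a real reference/prediction pair.
scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
reference = "reference gigaword headline"
prediction = "headline generated by the model"
print(scorer.score(reference, prediction))  # Score(precision, recall, fmeasure) per ROUGE type
```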
### BibTeX entry and citation info
```bibtex
@inproceedings{...,
year={2020}
}
```
| {"language": ["English"], "tags": [], "datasets": ["XSUM", "Gigaword"], "metrics": ["Rouge"]} | Marc/pegasus_xsum_gigaword | null | [
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"dataset:XSUM",
"dataset:Gigaword",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
question-answering | transformers |
# ixambert-base-cased finetuned for QA
This is a basic implementation of the multilingual model ["ixambert-base-cased"](https://huggingface.co/ixa-ehu/ixambert-base-cased), fine-tuned on SQuAD v1.1 and an experimental Basque version of SQuAD v1.1 (about 1/3 the size of the original), which can answer basic factual questions in English, Spanish and Basque.
## Overview
* **Language model:** ixambert-base-cased
* **Languages:** English, Spanish and Basque
* **Downstream task:** Extractive QA
* **Training data:** SQuAD v1.1 + experimental SQuAD1.1 in Basque
* **Eval data:** SQuAD v1.1 + experimental SQuAD1.1 in Basque
* **Infrastructure:** 1x GeForce RTX 2080
## Outputs
The model outputs the answer to the question, the start and end positions of the answer in the original context, and a score for the probability that the span is the correct answer. For example:
```python
{'score': 0.9667195081710815, 'start': 101, 'end': 105, 'answer': '1820'}
```
## How to use
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "MarcBrun/ixambert-finetuned-squad-eu-en"
# To get predictions
context = "Florence Nightingale, known for being the founder of modern nursing, was born in Florence, Italy, in 1820"
question = "When was Florence Nightingale born?"
qa = pipeline("question-answering", model=model_name, tokenizer=model_name)
pred = qa(question=question,context=context)
# To load the model and tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Hyperparameters
```
batch_size = 8
n_epochs = 3
learning_rate = 2e-5
optimizer = AdamW
lr_schedule = linear
max_seq_len = 384
doc_stride = 128
``` | {"language": ["en", "es", "eu"], "datasets": ["squad"], "widget": [{"text": "When was Florence Nightingale born?", "context": "Florence Nightingale, known for being the founder of modern nursing, was born in Florence, Italy, in 1820.", "example_title": "English"}, {"text": "\u00bfPor qu\u00e9 provincias pasa el Tajo?", "context": "El Tajo es el r\u00edo m\u00e1s largo de la pen\u00ednsula ib\u00e9rica, a la que atraviesa en su parte central, siguiendo un rumbo este-oeste, con una leve inclinaci\u00f3n hacia el suroeste, que se acent\u00faa cuando llega a Portugal, donde recibe el nombre de Tejo.\nNace en los montes Universales, en la sierra de Albarrac\u00edn, sobre la rama occidental del sistema Ib\u00e9rico y, despu\u00e9s de recorrer 1007 km, llega al oc\u00e9ano Atl\u00e1ntico en la ciudad de Lisboa. En su desembocadura forma el estuario del mar de la Paja, en el que vierte un caudal medio de 456 m\u00b3/s. En sus primeros 816 km atraviesa Espa\u00f1a, donde discurre por cuatro comunidades aut\u00f3nomas (Arag\u00f3n, Castilla-La Mancha, Madrid y Extremadura) y un total de seis provincias (Teruel, Guadalajara, Cuenca, Madrid, Toledo y C\u00e1ceres).", "example_title": "Espa\u00f1ol"}, {"text": "Zer beste izenak ditu Tartalo?", "context": "Tartalo euskal mitologiako izaki begibakar artzain erraldoia da. Tartalo izena zenbait euskal hizkeratan herskari-bustidurarekin ahoskatu ohi denez, horrelaxe ere idazten da batzuetan: Ttarttalo. Euskal Herriko zenbait tokitan, Torto edo Anxo ere esaten diote.", "example_title": "Euskara"}]} | MarcBrun/ixambert-finetuned-squad-eu-en | null | [
"transformers",
"pytorch",
"bert",
"question-answering",
"en",
"es",
"eu",
"dataset:squad",
"endpoints_compatible",
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
question-answering | transformers |
# ixambert-base-cased finetuned for QA
This is a basic implementation of the multilingual model ["ixambert-base-cased"](https://huggingface.co/ixa-ehu/ixambert-base-cased), fine-tuned on an experimental Basque version of SQuAD v1.1 (about 1/3 the size of the original), which can answer basic factual questions.
## Overview
* **Language model:** ixambert-base-cased
* **Languages:** English, Spanish and Basque
* **Downstream task:** Extractive QA
* **Training data:** Experimental SQuAD1.1 in Basque
* **Eval data:** Experimental SQuAD1.1 in Basque
* **Infrastructure:** 1x GeForce RTX 2080
## Outputs
The model outputs the answer to the question, the start and end positions of the answer in the original context, and a score for the probability that the span is the correct answer. For example:
```python
{'score': 0.9667195081710815, 'start': 101, 'end': 105, 'answer': '1820'}
```
## How to use
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "MarcBrun/ixambert-finetuned-squad-eu"
# To get predictions
context = "Florence Nightingale, known for being the founder of modern nursing, was born in Florence, Italy, in 1820"
question = "When was Florence Nightingale born?"
qa = pipeline("question-answering", model=model_name, tokenizer=model_name)
pred = qa(question=question,context=context)
# To load the model and tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Hyperparameters
```
batch_size = 8
n_epochs = 3
learning_rate = 2e-5
optimizer = AdamW
lr_schedule = linear
max_seq_len = 384
doc_stride = 128
``` | {"language": ["en", "es", "eu"], "widget": [{"text": "When was Florence Nightingale born?", "context": "Florence Nightingale, known for being the founder of modern nursing, was born in Florence, Italy, in 1820.", "example_title": "English"}, {"text": "\u00bfPor qu\u00e9 provincias pasa el Tajo?", "context": "El Tajo es el r\u00edo m\u00e1s largo de la pen\u00ednsula ib\u00e9rica, a la que atraviesa en su parte central, siguiendo un rumbo este-oeste, con una leve inclinaci\u00f3n hacia el suroeste, que se acent\u00faa cuando llega a Portugal, donde recibe el nombre de Tejo.\nNace en los montes Universales, en la sierra de Albarrac\u00edn, sobre la rama occidental del sistema Ib\u00e9rico y, despu\u00e9s de recorrer 1007 km, llega al oc\u00e9ano Atl\u00e1ntico en la ciudad de Lisboa. En su desembocadura forma el estuario del mar de la Paja, en el que vierte un caudal medio de 456 m\u00b3/s. En sus primeros 816 km atraviesa Espa\u00f1a, donde discurre por cuatro comunidades aut\u00f3nomas (Arag\u00f3n, Castilla-La Mancha, Madrid y Extremadura) y un total de seis provincias (Teruel, Guadalajara, Cuenca, Madrid, Toledo y C\u00e1ceres).", "example_title": "Espa\u00f1ol"}, {"text": "Zer beste izenak ditu Tartalo?", "context": "Tartalo euskal mitologiako izaki begibakar artzain erraldoia da. Tartalo izena zenbait euskal hizkeratan herskari-bustidurarekin ahoskatu ohi denez, horrelaxe ere idazten da batzuetan: Ttarttalo. Euskal Herriko zenbait tokitan, Torto edo Anxo ere esaten diote.", "example_title": "Euskara"}]} | MarcBrun/ixambert-finetuned-squad-eu | null | [
"transformers",
"pytorch",
"bert",
"question-answering",
"en",
"es",
"eu",
"endpoints_compatible",
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
question-answering | transformers |
# ixambert-base-cased finetuned for QA
This is a basic implementation of the multilingual model ["ixambert-base-cased"](https://huggingface.co/ixa-ehu/ixambert-base-cased), fine-tuned on SQuAD v1.1, which can answer basic factual questions in English, Spanish and Basque.
## Overview
* **Language model:** ixambert-base-cased
* **Languages:** English, Spanish and Basque
* **Downstream task:** Extractive QA
* **Training data:** SQuAD v1.1
* **Eval data:** SQuAD v1.1
* **Infrastructure:** 1x GeForce RTX 2080
## Outputs
The model outputs the answer to the question, the start and end positions of the answer in the original context, and a score for the probability that the span is the correct answer. For example:
```python
{'score': 0.9667195081710815, 'start': 101, 'end': 105, 'answer': '1820'}
```
## How to use
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "MarcBrun/ixambert-finetuned-squad"
# To get predictions
context = "Florence Nightingale, known for being the founder of modern nursing, was born in Florence, Italy, in 1820"
question = "When was Florence Nightingale born?"
qa = pipeline("question-answering", model=model_name, tokenizer=model_name)
pred = qa(question=question,context=context)
# To load the model and tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Hyperparameters
```
batch_size = 8
n_epochs = 3
learning_rate = 2e-5
optimizer = AdamW
lr_schedule = linear
max_seq_len = 384
doc_stride = 128
``` | {"language": ["en", "es", "eu"], "datasets": ["squad"], "widget": [{"text": "When was Florence Nightingale born?", "context": "Florence Nightingale, known for being the founder of modern nursing, was born in Florence, Italy, in 1820.", "example_title": "English"}, {"text": "\u00bfPor qu\u00e9 provincias pasa el Tajo?", "context": "El Tajo es el r\u00edo m\u00e1s largo de la pen\u00ednsula ib\u00e9rica, a la que atraviesa en su parte central, siguiendo un rumbo este-oeste, con una leve inclinaci\u00f3n hacia el suroeste, que se acent\u00faa cuando llega a Portugal, donde recibe el nombre de Tejo.\nNace en los montes Universales, en la sierra de Albarrac\u00edn, sobre la rama occidental del sistema Ib\u00e9rico y, despu\u00e9s de recorrer 1007 km, llega al oc\u00e9ano Atl\u00e1ntico en la ciudad de Lisboa. En su desembocadura forma el estuario del mar de la Paja, en el que vierte un caudal medio de 456 m\u00b3/s. En sus primeros 816 km atraviesa Espa\u00f1a, donde discurre por cuatro comunidades aut\u00f3nomas (Arag\u00f3n, Castilla-La Mancha, Madrid y Extremadura) y un total de seis provincias (Teruel, Guadalajara, Cuenca, Madrid, Toledo y C\u00e1ceres).", "example_title": "Espa\u00f1ol"}, {"text": "Zer beste izenak ditu Tartalo?", "context": "Tartalo euskal mitologiako izaki begibakar artzain erraldoia da. Tartalo izena zenbait euskal hizkeratan herskari-bustidurarekin ahoskatu ohi denez, horrelaxe ere idazten da batzuetan: Ttarttalo. Euskal Herriko zenbait tokitan, Torto edo Anxo ere esaten diote.", "example_title": "Euskara"}]} | MarcBrun/ixambert-finetuned-squad | null | [
"transformers",
"pytorch",
"bert",
"question-answering",
"en",
"es",
"eu",
"dataset:squad",
"endpoints_compatible",
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
null | null | {} | MarcoUno77/mio | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
feature-extraction | transformers | {} | Margarita/ArmBERTa | null | [
"transformers",
"pytorch",
"roberta",
"feature-extraction",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Margarita/ArmBERTa1 | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Margooooo/Q | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |