pipeline_tag: stringclasses (48 values)
library_name: stringclasses (205 values)
text: stringlengths (0 to 18.3M)
metadata: stringlengths (2 to 1.07B)
id: stringlengths (5 to 122)
last_modified: null
tags: listlengths (1 to 1.84k)
sha: null
created_at: stringlengths (25 to 25)
text2text-generation
transformers
## A T5ForConditionalGeneration trained on 3 tasks from the PAN Profiling Hate Speech Spreaders on Twitter dataset (EN): * author attribution (train and test sets from the PAN task) * topic attribution - topics were assigned with the BERTopic library using embeddings from the `cardiffnlp/bertweet-base-hate` RoBERTa model (train and test sets from the PAN task) * hate speech identification (train set from the PAN task) In order to generate the tone of a comment, use the prefix **hater classification:**
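A minimal usage sketch for the prefix described above, assuming the standard `transformers` text2text API; the example tweet, generation length, and output handling are illustrative assumptions rather than part of the original card:

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

model_id = "PaulAdversarial/T5_PAN_Hate_Speech_Twitter_topic_author_ishatespeach"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = T5ForConditionalGeneration.from_pretrained(model_id)

# Prepend the documented task prefix; the tweet text is a made-up example.
inputs = tokenizer("hater classification: you people ruin everything", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```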
{}
PaulAdversarial/T5_PAN_Hate_Speech_Twitter_topic_author_ishatespeach
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
A T5ForConditionalGeneration trained on 2 tasks from the PAN Profiling Hate Speech Spreaders on Twitter dataset (EN): * topic attribution - topics were assigned with the BERTopic library using embeddings from the `cardiffnlp/bertweet-base-hate` RoBERTa model (train and test sets from the PAN task) * hate speech identification (train set from the PAN task) In order to generate the tone of a comment, use the prefix **hater classification:**
{}
PaulAdversarial/T5_PAN_Hate_Speech_Twitter_topic_ishatespeach
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
transformers
{"datasets": ["PaulLerner/triviaqa_for_viquae"]}
PaulLerner/dpr_context_encoder_triviaqa_without_viquae
null
[ "transformers", "pytorch", "dpr", "dataset:PaulLerner/triviaqa_for_viquae", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
feature-extraction
transformers
{}
PaulLerner/dpr_question_encoder_triviaqa_without_viquae
null
[ "transformers", "pytorch", "dpr", "feature-extraction", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
transformers
{}
PaulLerner/multi_passage_bert_triviaqa_without_viquae
null
[ "transformers", "pytorch", "bert", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Paulosknupp/teste_fin_port
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Pawel838383/My-repo
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Pawel838383/Natural-Language-Infenrence-NLI
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Pdwin/test
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
PeanutLoves/Idk
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Pedro256/SS-4L
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
feature-extraction
transformers
{}
PedroR/xlm-roberta-4-final
null
[ "transformers", "pytorch", "xlm-roberta", "feature-extraction", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
feature-extraction
transformers
{}
PedroR/xlm-roberta-4-pretrained-with-tokenizer
null
[ "transformers", "pytorch", "xlm-roberta", "feature-extraction", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
feature-extraction
transformers
{}
PedroR/xlm-roberta-4-pretrained
null
[ "transformers", "pytorch", "xlm-roberta", "feature-extraction", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
feature-extraction
transformers
{}
PedroR/xlm-roberta-4
null
[ "transformers", "pytorch", "xlm-roberta", "feature-extraction", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
feature-extraction
transformers
{}
PedroR/xlm-roberta-5-final
null
[ "transformers", "pytorch", "xlm-roberta", "feature-extraction", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
feature-extraction
transformers
{}
PedroR/xlm-roberta-5-pretrained
null
[ "transformers", "pytorch", "xlm-roberta", "feature-extraction", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
feature-extraction
transformers
{}
PedroR/xlm-roberta-5
null
[ "transformers", "pytorch", "xlm-roberta", "feature-extraction", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
feature-extraction
transformers
{}
PedroR/xlm-roberta-6-final
null
[ "transformers", "pytorch", "xlm-roberta", "feature-extraction", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
feature-extraction
transformers
{}
PedroR/xlm-roberta-6-pretrained
null
[ "transformers", "pytorch", "xlm-roberta", "feature-extraction", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
feature-extraction
transformers
{}
PedroR/xlm-roberta-6
null
[ "transformers", "pytorch", "xlm-roberta", "feature-extraction", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
feature-extraction
transformers
{}
PedroR/xlm-roberta-7-final
null
[ "transformers", "pytorch", "xlm-roberta", "feature-extraction", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
feature-extraction
transformers
{}
PedroR/xlm-roberta-7-pretrained
null
[ "transformers", "pytorch", "xlm-roberta", "feature-extraction", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
feature-extraction
transformers
{}
PedroR/xlm-roberta-7
null
[ "transformers", "pytorch", "xlm-roberta", "feature-extraction", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
PeerChristensen/TrumpTweetsDeviceNBClassifier
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
PeeweeTuna34/DialoGPT-medium-XiaoGenshinImpact
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
fill-mask
transformers
## XLM-R Longformer Model XLM-R Longformer is an XLM-R model that has been extended to allow sequence lengths up to 4096 tokens, instead of the regular 512. The model was pre-trained from the XLM-RoBERTa checkpoint using the Longformer [pre-training scheme](https://github.com/allenai/longformer/blob/master/scripts/convert_model_to_long.ipynb) on the English WikiText-103 corpus. The reason for this was to investigate methods for creating efficient Transformers for low-resource languages, such as Swedish, without the need to pre-train them on long-context datasets in each respective language. The trained model came as a result of a master's thesis project at [Peltarion](https://peltarion.com/) and was fine-tuned on multilingual question-answering tasks, with code available [here](https://github.com/MarkusSagen/Master-Thesis-Multilingual-Longformer#xlm-r). Since both XLM-R and Longformer are large models, it is recommended to run them with NVIDIA Apex (16-bit precision), a large GPU, and several gradient accumulation steps. ## How to Use The model can be used as expected to fine-tune on a downstream task, for instance QA. ```python import torch from transformers import AutoModelForQuestionAnswering, AutoTokenizer MAX_SEQUENCE_LENGTH = 4096 MODEL_NAME_OR_PATH = "markussagen/xlm-roberta-longformer-base-4096" tokenizer = AutoTokenizer.from_pretrained( MODEL_NAME_OR_PATH, max_length=MAX_SEQUENCE_LENGTH, padding="max_length", truncation=True, ) model = AutoModelForQuestionAnswering.from_pretrained( MODEL_NAME_OR_PATH, max_length=MAX_SEQUENCE_LENGTH, ) ``` ## Training Procedure The model has been trained on the WikiText-103 corpus, using a **48GB** GPU with the following training script and parameters. The model was pre-trained for 6000 iterations and took ~5 days. See the full [training script](https://github.com/MarkusSagen/Master-Thesis-Multilingual-Longformer/blob/main/scripts/finetune_qa_models.py) and [Github repo](https://github.com/MarkusSagen/Master-Thesis-Multilingual-Longformer) for more information. ```sh wget https://s3.amazonaws.com/research.metamind.io/wikitext/wikitext-103-raw-v1.zip unzip wikitext-103-raw-v1.zip export DATA_DIR=./wikitext-103-raw scripts/run_long_lm.py \ --model_name_or_path xlm-roberta-base \ --model_name xlm-roberta-to-longformer \ --output_dir ./output \ --logging_dir ./logs \ --val_file_path $DATA_DIR/wiki.valid.raw \ --train_file_path $DATA_DIR/wiki.train.raw \ --seed 42 \ --max_pos 4096 \ --adam_epsilon 1e-8 \ --warmup_steps 500 \ --learning_rate 3e-5 \ --weight_decay 0.01 \ --max_steps 6000 \ --evaluate_during_training \ --logging_steps 50 \ --eval_steps 50 \ --save_steps 6000 \ --max_grad_norm 1.0 \ --per_device_eval_batch_size 2 \ --per_device_train_batch_size 1 \ --gradient_accumulation_steps 64 \ --overwrite_output_dir \ --fp16 \ --do_train \ --do_eval ```
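As a supplementary sketch, the extended 4096-token window can also be exercised directly for feature extraction; the repository id below is taken from this record, and the placeholder text and sequence handling are assumptions:

```python
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "Peltarion/xlm-roberta-longformer-base-4096"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

long_text = "..."  # placeholder for a document of up to ~4096 tokens
inputs = tokenizer(long_text, max_length=4096, truncation=True, return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # shape: (1, sequence_length, 768)
print(hidden.shape)
```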
{"language": "multilingual", "license": "apache-2.0", "tags": ["longformer"], "datasets": ["wikitext"]}
Peltarion/xlm-roberta-longformer-base-4096
null
[ "transformers", "pytorch", "xlm-roberta", "fill-mask", "longformer", "multilingual", "dataset:wikitext", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Penguint/nlp
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# Rick and Morty DialoGPT Model
{"tags": ["conversational"]}
Pensador777critico/DialoGPT-small-RickandMorty
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
automatic-speech-recognition
transformers
# Disclaimer This model was trained on Common Voice 6. If you need a Catalan model for ASR, I recommend checking [wav2vec2-xls-r-1b-ca-lm](https://huggingface.co/PereLluis13/wav2vec2-xls-r-1b-ca-lm), which is a 1b model with a LM on top trained on CV8+ with much better performance, or [wav2vec2-xls-r-300m-ca-lm](https://huggingface.co/PereLluis13/wav2vec2-xls-r-300m-ca-lm), which has the same size (300m) as this model but was trained on CV8+ and with the same LM. # Wav2Vec2-Large-XLSR-53-ca Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Catalan using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset. When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "ca", split="test[:2%]") processor = Wav2Vec2Processor.from_pretrained("PereLluis13/Wav2Vec2-Large-XLSR-53-catalan") model = Wav2Vec2ForCTC.from_pretrained("PereLluis13/Wav2Vec2-Large-XLSR-53-catalan") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ``` ## Evaluation The model can be evaluated as follows on the Catalan test data of Common Voice. ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re test_dataset = load_dataset("common_voice", "ca", split="test") wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("PereLluis13/Wav2Vec2-Large-XLSR-53-catalan") model = Wav2Vec2ForCTC.from_pretrained("PereLluis13/Wav2Vec2-Large-XLSR-53-catalan") model.to("cuda") chars_to_ignore_regex = '[\,\?\.\!\;\:\"\“]' resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets. 
# We need to read the audio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) import jiwer # Chunk WER computation due to memory issues, taken from https://huggingface.co/pcuenq/wav2vec2-large-xlsr-53-es def chunked_wer(targets, predictions, chunk_size=None): if chunk_size is None: return jiwer.wer(targets, predictions) start = 0 end = chunk_size H, S, D, I = 0, 0, 0, 0 while start < len(targets): chunk_metrics = jiwer.compute_measures(targets[start:end], predictions[start:end]) H = H + chunk_metrics["hits"] S = S + chunk_metrics["substitutions"] D = D + chunk_metrics["deletions"] I = I + chunk_metrics["insertions"] start += chunk_size end += chunk_size return float(S + D + I) / float(H + S + D) print("WER: {:2f}".format(100 * chunked_wer(result["sentence"], result["pred_strings"], chunk_size=4000))) ``` **Test Result**: 8.11 % ## Training The Common Voice `train` and `validation` datasets were used for training. At the second epoch, training was halted due to a memory issue and continued with a lower batch size, but gradient accumulation steps were scaled to keep an effective batch size of 32 throughout training. Then the model was trained for an additional 10 epochs where half the male samples were pitched up. The script used for training can be found [here](https://github.com/huggingface/transformers/blob/master/examples/research_projects/wav2vec2/run_common_voice.py). Slight modifications were made in order to speed up the ordering by length during training, which can be found [here](https://discuss.huggingface.co/t/spanish-asr-fine-tuning-wav2vec2/4586/6). Another version trained for Catalan can be found [here](https://huggingface.co/ccoreilly/wav2vec2-large-xlsr-catala), which may be better than this one since it was trained with extra data and for a longer time. However, since it used different splits that include part of the Common Voice test set, this version can be used to get a baseline on the Common Voice dataset.
{"language": "ca", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "metrics": ["wer"], "model-index": [{"name": "Catalan XLSR Wav2Vec Large 53", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice ca", "type": "common_voice", "args": "ca"}, "metrics": [{"type": "wer", "value": 8.11, "name": "Test WER"}]}]}]}
PereLluis13/Wav2Vec2-Large-XLSR-53-catalan
null
[ "transformers", "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "ca", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
automatic-speech-recognition
transformers
{}
PereLluis13/wav2vec2-large-xlsr-53-ca
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
automatic-speech-recognition
transformers
# Wav2Vec2-Large-XLSR-53-greek Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Greek using the [Common Voice](https://huggingface.co/datasets/common_voice) and [CSS10](https://github.com/Kyubyong/css10) datasets. When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "el", split="test") processor = Wav2Vec2Processor.from_pretrained("PereLluis13/wav2vec2-large-xlsr-53-greek") model = Wav2Vec2ForCTC.from_pretrained("PereLluis13/wav2vec2-large-xlsr-53-greek") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ``` ## Evaluation The model can be evaluated as follows on the Greek test data of Common Voice. ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re test_dataset = load_dataset("common_voice", "el", split="test") wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("PereLluis13/wav2vec2-large-xlsr-53-greek") model = Wav2Vec2ForCTC.from_pretrained("PereLluis13/wav2vec2-large-xlsr-53-greek") model.to("cuda") chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�]' resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets. # We need to read the audio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` **Test Result**: 20.89 % ## Training The Common Voice `train`, `validation`, and CSS10 datasets were used for training, added as an `extra` split to the dataset. 
The sampling rate and format of the CSS10 files are different, hence the function `speech_file_to_array_fn` was changed to: ```python import soundfile as sf import librosa def speech_file_to_array_fn(batch): try: speech_array, sampling_rate = sf.read(batch["path"] + ".wav") except: speech_array, sampling_rate = librosa.load(batch["path"], sr = 16000, res_type='zero_order_hold') sf.write(batch["path"] + ".wav", speech_array, sampling_rate, subtype='PCM_24') batch["speech"] = speech_array batch["sampling_rate"] = sampling_rate batch["target_text"] = batch["text"] return batch ``` As suggested by [Florian Zimmermeister](https://github.com/flozi00). The script used for training can be found in [run_common_voice.py](examples/research_projects/wav2vec2/run_common_voice.py), still pending a PR. The only changes are to `speech_file_to_array_fn`. Batch size was kept at 32 (using `gradient_accumulation_steps`) on one of the [OVH](https://www.ovh.com/) machines, with a V100 GPU (thank you very much [OVH](https://www.ovh.com/)). The model trained for 40 epochs, the first 20 with the `train+validation` splits, and then the `extra` split with the data from CSS10 was added at the 20th epoch.
{"language": "el", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice", "CSS10"], "metrics": ["wer"], "model-index": [{"name": "Greek XLSR Wav2Vec2 Large 53 - CV + CSS10", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice el", "type": "common_voice", "args": "el"}, "metrics": [{"type": "wer", "value": 20.89, "name": "Test WER"}]}]}]}
PereLluis13/wav2vec2-large-xlsr-53-greek
null
[ "transformers", "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "el", "dataset:common_voice", "dataset:CSS10", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
automatic-speech-recognition
transformers
# wav2vec2-xls-r-1b-ca-lm This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - CA, the [tv3_parla](https://huggingface.co/datasets/collectivat/tv3_parla) and [parlament_parla](https://huggingface.co/datasets/projecte-aina/parlament_parla) datasets. ## Model description Please check the original [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) Model card. This is just a finetuned version of that model. ## Intended uses & limitations As any model trained on crowdsourced data, this model can show the biases and particularities of the data and model used to train this model. Moreover, since this is a speech recognition model, it may underperform for some lower-resourced dialects for the catalan language. ## Training and evaluation data ## Training procedure The data is preprocessed to remove characters not on the catalan alphabet. Moreover, numbers are verbalized using code provided by [@ccoreilly](https://github.com/ccoreilly), which can be found on the text/ folder or [here](https://github.com/CollectivaT-dev/catotron-cpu/blob/master/text/numbers_ca.py). ### Training results Check the Tensorboard tab to check the training profile and evaluation results along training. The model was evaluated on the test splits for each of the datasets used during training. ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 10.0 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.3 - Tokenizers 0.11.0 # Thanks Want to thank both [@ccoreilly](https://github.com/ccoreilly) and [@gullabi](https://github.com/gullabi) who have contributed with their own resources and knowledge into making this model possible.
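A rough transcription sketch, assuming 16 kHz input audio and that `pyctcdecode` and `kenlm` are installed so the bundled language model is used during decoding; the audio path is a placeholder:

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="PereLluis13/wav2vec2-xls-r-1b-ca-lm",
)

# "recording.wav" is a placeholder path to a 16 kHz Catalan recording.
print(asr("recording.wav")["text"])
```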
{"language": ["ca"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "collectivat/tv3_parla", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_8_0", "projecte-aina/parlament_parla", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_8_0", "collectivat/tv3_parla", "projecte-aina/parlament_parla"], "model-index": [{"name": "wav2vec2-xls-r-1b-ca-lm", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "mozilla-foundation/common_voice_8_0 ca", "type": "mozilla-foundation/common_voice_8_0", "args": "ca"}, "metrics": [{"type": "wer", "value": 6.072266995813065, "name": "Test WER"}, {"type": "cer", "value": 1.9180697705166525, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "projecte-aina/parlament_parla ca", "type": "projecte-aina/parlament_parla", "args": "clean"}, "metrics": [{"type": "wer", "value": 5.139820371024042, "name": "Test WER"}, {"type": "cer", "value": 2.0163620128164723, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "collectivat/tv3_parla ca", "type": "collectivat/tv3_parla", "args": "ca"}, "metrics": [{"type": "wer", "value": 11.207991684952074, "name": "Test WER"}, {"type": "cer", "value": 7.32119307305963, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Catalan Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "ca"}, "metrics": [{"type": "wer", "value": 22.870153690468662, "name": "Test WER"}, {"type": "cer", "value": 13.59039190897598, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "ca"}, "metrics": [{"type": "wer", "value": 15.41, "name": "Test WER"}]}]}]}
PereLluis13/wav2vec2-xls-r-1b-ca-lm
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "collectivat/tv3_parla", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_8_0", "projecte-aina/parlament_parla", "robust-speech-event", "ca", "dataset:mozilla-foundation/common_voice_8_0", "dataset:collectivat/tv3_parla", "dataset:projecte-aina/parlament_parla", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
automatic-speech-recognition
transformers
{}
PereLluis13/wav2vec2-xls-r-1b-ca-old
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-xls-r-1b-ca This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - CA, the [tv3_parla](https://huggingface.co/datasets/collectivat/tv3_parla) and [parlament_parla](https://huggingface.co/datasets/projecte-aina/parlament_parla) datasets. ## Model description Please check the original [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) Model card. This is just a finetuned version of that model. ## Intended uses & limitations As any model trained on crowdsourced data, this model can show the biases and particularities of the data and model used to train this model. Moreover, since this is a speech recognition model, it may underperform for some lower-resourced dialects for the catalan language. ## Training and evaluation data ## Training procedure The data is preprocessed to remove characters not on the catalan alphabet. Moreover, numbers are verbalized using code provided by [@ccoreilly](https://github.com/ccoreilly), which can be found on the text/ folder or [here](https://github.com/CollectivaT-dev/catotron-cpu/blob/master/text/numbers_ca.py). ### Training results Check the Tensorboard tab to check the training profile and evaluation results along training. The model was evaluated on the test splits for each of the datasets used during training. ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 10.0 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.3 - Tokenizers 0.11.0 # Thanks Want to thank both [@ccoreilly](https://github.com/ccoreilly) and [@gullabi](https://github.com/gullabi) who have contributed with their own resources and knowledge into making this model possible.
{"language": ["ca"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "collectivat/tv3_parla", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_8_0", "projecte-aina/parlament_parla", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_8_0", "collectivat/tv3_parla", "projecte-aina/parlament_parla"], "model-index": [{"name": "wav2vec2-xls-r-1b-ca", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "mozilla-foundation/common_voice_8_0 ca", "type": "mozilla-foundation/common_voice_8_0", "args": "ca"}, "metrics": [{"type": "wer", "value": 11.030639657300515, "name": "Test WER"}, {"type": "cer", "value": 2.8405630530040633, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "projecte-aina/parlament_parla ca", "type": "projecte-aina/parlament_parla", "args": "clean"}, "metrics": [{"type": "wer", "value": 6.483115660665961, "name": "Test WER"}, {"type": "cer", "value": 2.0212863746191827, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "collectivat/tv3_parla ca", "type": "collectivat/tv3_parla", "args": "ca"}, "metrics": [{"type": "wer", "value": 17.917773414943987, "name": "Test WER"}, {"type": "cer", "value": 8.872589572206396, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Catalan Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "ca"}, "metrics": [{"type": "wer", "value": 27.126683954209096, "name": "Test WER"}, {"type": "cer", "value": 14.213308815078726, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "ca"}, "metrics": [{"type": "wer", "value": 18.7, "name": "Test WER"}]}]}]}
PereLluis13/wav2vec2-xls-r-1b-ca
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "collectivat/tv3_parla", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_8_0", "projecte-aina/parlament_parla", "robust-speech-event", "ca", "dataset:mozilla-foundation/common_voice_8_0", "dataset:collectivat/tv3_parla", "dataset:projecte-aina/parlament_parla", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
automatic-speech-recognition
transformers
# wav2vec2-xls-r-300m-ca-lm This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - CA, the [tv3_parla](https://huggingface.co/datasets/collectivat/tv3_parla) and [parlament_parla](https://huggingface.co/datasets/projecte-aina/parlament_parla) datasets. It achieves the following results on the evaluation set (for the three datasets and without the LM): - Loss: 0.2472 - Wer: 0.1499 ## Model description Please check the original [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) Model card. This is just a finetuned version of that model. ## Intended uses & limitations As any model trained on crowdsourced data, this model can show the biases and particularities of the data and model used to train this model. Moreover, since this is a speech recognition model, it may underperform for some lower-resourced dialects for the catalan language. ## Training and evaluation data More information needed ## Training procedure The data is preprocessed to remove characters not on the catalan alphabet. Moreover, numbers are verbalized using code provided by [@ccoreilly](https://github.com/ccoreilly), which can be found on the text/ folder or [here](https://github.com/CollectivaT-dev/catotron-cpu/blob/master/text/numbers_ca.py). ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 18.0 - mixed_precision_training: Native AMP ### Training results Check the Tensorboard tab to check the training profile and evaluation results along training. The model was evaluated on the test splits for each of the datasets used during training. 
| Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 6.2099 | 0.09 | 500 | 3.4125 | 1.0 | | 2.9961 | 0.18 | 1000 | 2.9224 | 1.0 | | 2.2147 | 0.26 | 1500 | 0.6521 | 0.5568 | | 1.3017 | 0.35 | 2000 | 0.3153 | 0.2761 | | 1.1196 | 0.44 | 2500 | 0.2444 | 0.2367 | | 1.0712 | 0.53 | 3000 | 0.2324 | 0.2132 | | 1.052 | 0.62 | 3500 | 0.2173 | 0.2032 | | 1.2813 | 2.13 | 4000 | 0.3326 | 0.2099 | | 1.2365 | 2.4 | 4500 | 0.3224 | 0.2003 | | 1.2193 | 2.66 | 5000 | 0.3198 | 0.1957 | | 1.2072 | 2.93 | 5500 | 0.3063 | 0.1933 | | 1.213 | 3.2 | 6000 | 0.3051 | 0.1980 | | 1.2074 | 3.46 | 6500 | 0.3012 | 0.1879 | | 1.1918 | 3.73 | 7000 | 0.2947 | 0.1829 | | 1.1893 | 4.0 | 7500 | 0.2895 | 0.1807 | | 1.1751 | 4.26 | 8000 | 0.2878 | 0.1776 | | 1.1628 | 4.53 | 8500 | 0.2835 | 0.1731 | | 1.1577 | 4.79 | 9000 | 0.2816 | 0.1761 | | 1.1448 | 5.06 | 9500 | 0.2757 | 0.1740 | | 1.1407 | 5.33 | 10000 | 0.2768 | 0.1798 | | 1.1401 | 5.59 | 10500 | 0.2780 | 0.1816 | | 1.1333 | 5.86 | 11000 | 0.2748 | 0.1750 | | 1.1571 | 6.13 | 11500 | 0.2808 | 0.1708 | | 1.1505 | 6.39 | 12000 | 0.2726 | 0.1692 | | 1.1519 | 6.66 | 12500 | 0.2749 | 0.1654 | | 1.136 | 6.93 | 13000 | 0.2765 | 0.1643 | | 1.1326 | 7.19 | 13500 | 0.2706 | 0.1668 | | 1.1342 | 7.46 | 14000 | 0.2665 | 0.1638 | | 1.1286 | 7.72 | 14500 | 0.2669 | 0.1636 | | 1.1243 | 7.99 | 15000 | 0.2619 | 0.1623 | | 1.1173 | 8.26 | 15500 | 0.2652 | 0.1604 | | 1.1129 | 8.52 | 16000 | 0.2610 | 0.1598 | | 1.1091 | 8.79 | 16500 | 0.2608 | 0.1584 | | 1.1053 | 9.06 | 17000 | 0.2633 | 0.1664 | | 1.1004 | 9.32 | 17500 | 0.2594 | 0.1662 | | 1.0995 | 9.59 | 18000 | 0.2623 | 0.1569 | | 1.0964 | 9.86 | 18500 | 0.2624 | 0.1597 | | 1.09 | 10.12 | 19000 | 0.2577 | 0.1578 | | 1.089 | 10.39 | 19500 | 0.2574 | 0.1531 | | 1.0864 | 10.66 | 20000 | 0.2556 | 0.1546 | | 1.0806 | 10.92 | 20500 | 0.2548 | 0.1583 | | 1.0842 | 11.19 | 21000 | 0.2550 | 0.1542 | | 1.0805 | 11.45 | 21500 | 0.2561 | 0.1524 | | 1.0722 | 11.72 | 22000 | 0.2540 | 0.1566 | | 1.0763 | 11.99 | 22500 | 0.2549 | 0.1572 | | 1.0835 | 12.25 | 23000 | 0.2586 | 0.1521 | | 1.0883 | 12.52 | 23500 | 0.2583 | 0.1519 | | 1.0888 | 12.79 | 24000 | 0.2551 | 0.1582 | | 1.0933 | 13.05 | 24500 | 0.2628 | 0.1537 | | 1.0799 | 13.32 | 25000 | 0.2600 | 0.1508 | | 1.0804 | 13.59 | 25500 | 0.2620 | 0.1475 | | 1.0814 | 13.85 | 26000 | 0.2537 | 0.1517 | | 1.0693 | 14.12 | 26500 | 0.2560 | 0.1542 | | 1.0724 | 14.38 | 27000 | 0.2540 | 0.1574 | | 1.0704 | 14.65 | 27500 | 0.2548 | 0.1626 | | 1.0729 | 14.92 | 28000 | 0.2548 | 0.1601 | | 1.0724 | 15.18 | 28500 | 0.2511 | 0.1512 | | 1.0655 | 15.45 | 29000 | 0.2498 | 0.1490 | | 1.0608 | 15.98 | 30000 | 0.2487 | 0.1481 | | 1.0541 | 16.52 | 31000 | 0.2468 | 0.1504 | | 1.0584 | 17.05 | 32000 | 0.2467 | 0.1493 | | 1.0507 | 17.58 | 33000 | 0.2481 | 0.1517 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.18.3 - Tokenizers 0.11.0 # Thanks Want to thank both [@ccoreilly](https://github.com/ccoreilly) and [@gullabi](https://github.com/gullabi) who have contributed with their own resources and knowledge into making this model possible.
{"language": ["ca"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "collectivat/tv3_parla", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_8_0", "projecte-aina/parlament_parla", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_8_0", "collectivat/tv3_parla", "projecte-aina/parlament_parla"], "model-index": [{"name": "wav2vec2-xls-r-300m-ca-lm", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "mozilla-foundation/common_voice_8_0 ca", "type": "mozilla-foundation/common_voice_8_0", "args": "ca"}, "metrics": [{"type": "wer", "value": 6.771703090587865, "name": "Test WER"}, {"type": "cer", "value": 2.100777784371229, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "projecte-aina/parlament_parla ca", "type": "projecte-aina/parlament_parla", "args": "clean"}, "metrics": [{"type": "wer", "value": 5.565360630662431, "name": "Test WER"}, {"type": "cer", "value": 1.8594390167034354, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "collectivat/tv3_parla ca", "type": "collectivat/tv3_parla", "args": "ca"}, "metrics": [{"type": "wer", "value": 13.53312545713516, "name": "Test WER"}, {"type": "cer", "value": 8.684635913340555, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Catalan Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "ca"}, "metrics": [{"type": "wer", "value": 26.04515843400164, "name": "Test WER"}, {"type": "cer", "value": 15.056890012642224, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "ca"}, "metrics": [{"type": "wer", "value": 17.68, "name": "Test WER"}]}]}]}
PereLluis13/wav2vec2-xls-r-300m-ca-lm
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "collectivat/tv3_parla", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_8_0", "projecte-aina/parlament_parla", "robust-speech-event", "ca", "dataset:mozilla-foundation/common_voice_8_0", "dataset:collectivat/tv3_parla", "dataset:projecte-aina/parlament_parla", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
automatic-speech-recognition
transformers
# wav2vec2-xls-r-300m-ca This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - CA, the [tv3_parla](https://huggingface.co/datasets/collectivat/tv3_parla) and [parlament_parla](https://huggingface.co/datasets/projecte-aina/parlament_parla) datasets. It achieves the following results on the evaluation set (for the three datasets): - Loss: 0.2472 - Wer: 0.1499 ## Model description Please check the original [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) Model card. This is just a finetuned version of that model. ## Intended uses & limitations As any model trained on crowdsourced data, this model can show the biases and particularities of the data and model used to train this model. Moreover, since this is a speech recognition model, it may underperform for some lower-resourced dialects for the catalan language. ## Training and evaluation data More information needed ## Training procedure The data is preprocessed to remove characters not on the catalan alphabet. Moreover, numbers are verbalized using code provided by [@ccoreilly](https://github.com/ccoreilly), which can be found on the text/ folder or [here](https://github.com/CollectivaT-dev/catotron-cpu/blob/master/text/numbers_ca.py). ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 18.0 - mixed_precision_training: Native AMP ### Training results Check the Tensorboard tab to check the training profile and evaluation results along training. The model was evaluated on the test splits for each of the datasets used during training. 
| Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 6.2099 | 0.09 | 500 | 3.4125 | 1.0 | | 2.9961 | 0.18 | 1000 | 2.9224 | 1.0 | | 2.2147 | 0.26 | 1500 | 0.6521 | 0.5568 | | 1.3017 | 0.35 | 2000 | 0.3153 | 0.2761 | | 1.1196 | 0.44 | 2500 | 0.2444 | 0.2367 | | 1.0712 | 0.53 | 3000 | 0.2324 | 0.2132 | | 1.052 | 0.62 | 3500 | 0.2173 | 0.2032 | | 1.2813 | 2.13 | 4000 | 0.3326 | 0.2099 | | 1.2365 | 2.4 | 4500 | 0.3224 | 0.2003 | | 1.2193 | 2.66 | 5000 | 0.3198 | 0.1957 | | 1.2072 | 2.93 | 5500 | 0.3063 | 0.1933 | | 1.213 | 3.2 | 6000 | 0.3051 | 0.1980 | | 1.2074 | 3.46 | 6500 | 0.3012 | 0.1879 | | 1.1918 | 3.73 | 7000 | 0.2947 | 0.1829 | | 1.1893 | 4.0 | 7500 | 0.2895 | 0.1807 | | 1.1751 | 4.26 | 8000 | 0.2878 | 0.1776 | | 1.1628 | 4.53 | 8500 | 0.2835 | 0.1731 | | 1.1577 | 4.79 | 9000 | 0.2816 | 0.1761 | | 1.1448 | 5.06 | 9500 | 0.2757 | 0.1740 | | 1.1407 | 5.33 | 10000 | 0.2768 | 0.1798 | | 1.1401 | 5.59 | 10500 | 0.2780 | 0.1816 | | 1.1333 | 5.86 | 11000 | 0.2748 | 0.1750 | | 1.1571 | 6.13 | 11500 | 0.2808 | 0.1708 | | 1.1505 | 6.39 | 12000 | 0.2726 | 0.1692 | | 1.1519 | 6.66 | 12500 | 0.2749 | 0.1654 | | 1.136 | 6.93 | 13000 | 0.2765 | 0.1643 | | 1.1326 | 7.19 | 13500 | 0.2706 | 0.1668 | | 1.1342 | 7.46 | 14000 | 0.2665 | 0.1638 | | 1.1286 | 7.72 | 14500 | 0.2669 | 0.1636 | | 1.1243 | 7.99 | 15000 | 0.2619 | 0.1623 | | 1.1173 | 8.26 | 15500 | 0.2652 | 0.1604 | | 1.1129 | 8.52 | 16000 | 0.2610 | 0.1598 | | 1.1091 | 8.79 | 16500 | 0.2608 | 0.1584 | | 1.1053 | 9.06 | 17000 | 0.2633 | 0.1664 | | 1.1004 | 9.32 | 17500 | 0.2594 | 0.1662 | | 1.0995 | 9.59 | 18000 | 0.2623 | 0.1569 | | 1.0964 | 9.86 | 18500 | 0.2624 | 0.1597 | | 1.09 | 10.12 | 19000 | 0.2577 | 0.1578 | | 1.089 | 10.39 | 19500 | 0.2574 | 0.1531 | | 1.0864 | 10.66 | 20000 | 0.2556 | 0.1546 | | 1.0806 | 10.92 | 20500 | 0.2548 | 0.1583 | | 1.0842 | 11.19 | 21000 | 0.2550 | 0.1542 | | 1.0805 | 11.45 | 21500 | 0.2561 | 0.1524 | | 1.0722 | 11.72 | 22000 | 0.2540 | 0.1566 | | 1.0763 | 11.99 | 22500 | 0.2549 | 0.1572 | | 1.0835 | 12.25 | 23000 | 0.2586 | 0.1521 | | 1.0883 | 12.52 | 23500 | 0.2583 | 0.1519 | | 1.0888 | 12.79 | 24000 | 0.2551 | 0.1582 | | 1.0933 | 13.05 | 24500 | 0.2628 | 0.1537 | | 1.0799 | 13.32 | 25000 | 0.2600 | 0.1508 | | 1.0804 | 13.59 | 25500 | 0.2620 | 0.1475 | | 1.0814 | 13.85 | 26000 | 0.2537 | 0.1517 | | 1.0693 | 14.12 | 26500 | 0.2560 | 0.1542 | | 1.0724 | 14.38 | 27000 | 0.2540 | 0.1574 | | 1.0704 | 14.65 | 27500 | 0.2548 | 0.1626 | | 1.0729 | 14.92 | 28000 | 0.2548 | 0.1601 | | 1.0724 | 15.18 | 28500 | 0.2511 | 0.1512 | | 1.0655 | 15.45 | 29000 | 0.2498 | 0.1490 | | 1.0608 | 15.98 | 30000 | 0.2487 | 0.1481 | | 1.0541 | 16.52 | 31000 | 0.2468 | 0.1504 | | 1.0584 | 17.05 | 32000 | 0.2467 | 0.1493 | | 1.0507 | 17.58 | 33000 | 0.2481 | 0.1517 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.18.3 - Tokenizers 0.11.0 # Thanks Want to thank both [@ccoreilly](https://github.com/ccoreilly) and [@gullabi](https://github.com/gullabi) who have contributed with their own resources and knowledge into making this model possible.
{"language": ["ca"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "collectivat/tv3_parla", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_8_0", "projecte-aina/parlament_parla", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_8_0", "collectivat/tv3_parla", "projecte-aina/parlament_parla"], "model-index": [{"name": "wav2vec2-xls-r-300m-ca", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "mozilla-foundation/common_voice_8_0 ca", "type": "mozilla-foundation/common_voice_8_0", "args": "ca"}, "metrics": [{"type": "wer", "value": 13.170091241317552, "name": "Test WER"}, {"type": "cer", "value": 3.356726205534543, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "projecte-aina/parlament_parla ca", "type": "projecte-aina/parlament_parla", "args": "clean"}, "metrics": [{"type": "wer", "value": 8.048005647723262, "name": "Test WER"}, {"type": "cer", "value": 2.240912911020065, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "collectivat/tv3_parla ca", "type": "collectivat/tv3_parla", "args": "ca"}, "metrics": [{"type": "wer", "value": 23.320629787889285, "name": "Test WER"}, {"type": "cer", "value": 10.43921620208999, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "speech-recognition-community-v2/dev_data ca", "type": "speech-recognition-community-v2/dev_data", "args": "ca"}, "metrics": [{"type": "wer", "value": 31.99671115046487, "name": "Test WER"}, {"type": "cer", "value": 15.820020687277324, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "ca"}, "metrics": [{"type": "wer", "value": 22.04, "name": "Test WER"}]}]}]}
PereLluis13/wav2vec2-xls-r-300m-ca
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "collectivat/tv3_parla", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_8_0", "projecte-aina/parlament_parla", "robust-speech-event", "ca", "dataset:mozilla-foundation/common_voice_8_0", "dataset:collectivat/tv3_parla", "dataset:projecte-aina/parlament_parla", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
{}
Peter/in_g_2
null
[ "transformers", "pytorch", "gpt_neo", "text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # medium This model is a fine-tuned version of [prithivida/parrot_paraphraser_on_T5](https://huggingface.co/prithivida/parrot_paraphraser_on_T5) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6025 - Rouge1: 81.6007 - Rouge2: 75.1196 - Rougel: 81.4213 - Rougelsum: 81.4956 - Gen Len: 32.4286 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | No log | 1.0 | 63 | 0.5775 | 65.0748 | 58.8985 | 64.5731 | 63.6249 | 19.0 | | No log | 2.0 | 126 | 0.5806 | 74.3055 | 69.2025 | 73.4922 | 73.0941 | 17.8571 | | No log | 3.0 | 189 | 0.6025 | 71.3808 | 66.0359 | 70.1235 | 69.4614 | 18.0 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.17.0 - Tokenizers 0.10.3
{"tags": ["generated_from_trainer"], "metrics": ["rouge"], "model-index": [{"name": "medium", "results": []}]}
Peter/medium
null
[ "transformers", "pytorch", "t5", "text2text-generation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
PeterH/Chatbot
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
PeterRucek/distilbert-base-uncased-finetuned-cola
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
How to use this classifier: ```python from transformers import pipeline pipe = pipeline("text-classification", model="Peterard/distilbert_bug_classifier") pipe("The app crashed when I opened it this morning. Can you fix this please?") # [{'label': 'bug', 'score': 0.9042391180992126}] pipe("Please add a like button!") # [{'label': 'no_bug', 'score': 0.9977496266365051}] ``` N.B. The label will change depending on which is the likelier class.
{"language": ["en"], "tags": ["text-classification"], "widget": [{"text": "The app crashed when I opened it this morning. Can you fix this please?", "example_title": "Likely bug report"}, {"text": "Please add a like button!", "example_title": "Unlikely bug report"}]}
Peterard/distilbert_bug_classifier
null
[ "transformers", "pytorch", "distilbert", "text-classification", "en", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
How to use this classifier: ```python from transformers import pipeline pipe = pipeline("text-classification", model="Peterard/distilbert_feature_classifier") pipe("Please add a like button!") # [{'label': 'feature_request', 'score': 0.8930749893188477}] pipe("The app crashed when I opened it this morning. Can you fix this please?") # [{'label': 'no_feature_request', 'score': 0.9971746206283569}] ``` N.B. The label will change depending on which is the likelier class.
{"language": ["en"], "tags": ["text-classification"], "widget": [{"text": "Please add a like button!", "example_title": "Likely feature request"}, {"text": "The app crashed when I opened it this morning. Can you fix this please?", "example_title": "Unlikely feature request"}]}
Peterard/distilbert_feature_classifier
null
[ "transformers", "pytorch", "distilbert", "text-classification", "en", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Pgarriga23/TestingCourse
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Phaneendra/distilbert-base-uncased-finetuned-squad
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
{}
Phantomhive/Noelle-bot
null
[ "transformers", "pytorch", "gpt2", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
{}
Phiion/DialoGPT-large-dilucbot
null
[ "transformers", "pytorch", "gpt2", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
An attempt at guided text generation to replace GPT-3 for [This SCP Does Not Exist](https://www.thisscpdoesnotexist.ml). Work in Progress. Finetuned on a dataset of 1700 automatically generated samples from the [official SCP wiki](https://scp-wiki.wikidot.com/). Example input: ```Prompt: SCP-9741 is a pair of jeans that looks really cool ### Generation: Item #: SCP-9741\nObject Class: Safe\nSpecial Containment Procedures:``` # Acknowledgment This work was made possible thanks to the TPU Research Cloud program by Google.
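A generation sketch that follows the prompt format shown above; the model id comes from this record, while the precision and sampling settings are assumptions:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "PhilSad/GPT-J6B-Guided-SCP"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# GPT-J 6B is large; running on a GPU with reduced precision is assumed here.
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

prompt = (
    "Prompt: SCP-9741 is a pair of jeans that looks really cool "
    "### Generation: Item #: SCP-9741\nObject Class: Safe\nSpecial Containment Procedures:"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```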
{}
PhilSad/GPT-J6B-Guided-SCP
null
[ "transformers", "pytorch", "gptj", "text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
GPT-J 6B finetuned on SCP articles. Very experimental.
{}
PhilSad/GPTJ2B-SCP
null
[ "transformers", "pytorch", "gptj", "text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # output_gptneo125-2 This model is a fine-tuned version of [EleutherAI/gpt-neo-125M](https://huggingface.co/EleutherAI/gpt-neo-125M) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: tpu - num_devices: 8 - total_train_batch_size: 64 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.0+cu102 - Datasets 1.18.3 - Tokenizers 0.11.0
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "output_gptneo125-2", "results": []}]}
PhilSad/gpt-scp-neo-125M
null
[ "transformers", "pytorch", "tensorboard", "gpt_neo", "text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# Traveller DiabloGPT Model
{"tags": ["conversational"]}
PhilipTheGreat/DiabloGPT-small-Traveller
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Philippe/test
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
transformers
### **GPT-Macbeth** A custom finetune of GPT-2 trained on a custom dataset of victorian literature ## Information The goal of this finetune is to output high-quality victorian literature, while being customizable with Author's Note and being light to run (aka not being a GPT-Neo or GPT-Jax finetune, for now at least). ## Authors Note Author's Note was added manually, so please appreciate it. :) The format of it is [ Author: George Eliot; Genre: Horror, fantasy, novel; Tags: scary, magical, victorian ] Some words will work well, some won't. Please make sure to have spaces before each ][. Most popular victorian authors should work, but keep in mind that some authors (e.g. Mark Twain) will result in a somewhat weird behavior due to a quirk in the dataset that will be addressed in the next version of the finetune. When it comes to the genres, "novel", "fiction", "horror" and "romance" work best, but from playing around with it, I've noticed that most other not too specific genres work pretty well too. The tags are a bit complicated. Adding "normal" will result in a story without anything special (like no magic or fantasy element) and tends to be pretty low-paced. Using "real-life" will push the AI towards a historical/biographical path. Almost all tags should work. Using "man" or "woman" is supposed to semi-determine what gender the main character is, but it heavily depends on the chosen author. ## History Version 0 - This was the first test version of the finetune, trained on GPT-2-small and with a really small dataset. The name was GPT-Kelini before it was renamed to GPT-Macbeth in V1. Version 1 - The current version of the finetune. Trained on GPT-2-medium with a much, much bigger dataset compared to V0. Supports Author's Note ### Notes Please use a very low temperature/randomness when using it, if you want to get anything out of it. Pumping the repetition penalty up helps a lot too. The model was specifically converted to PyTorch so that most front-end GUIs should run it. It has only been tested on KoboldAI, but should theoretically work on others too. For some odd reason, my finetune is capable of writing victorian NSFW content, if used the right way. No NSFW was in the dataset and considering the size of the model, it's really odd to see it do so. Perhaps the countless romantic novels in the dataset had something naughty in them, but I highly doubt it. You may get Roman numerals on random occasions; this shouldn't happen often, but if it does, it's again something that will be (manually, unfortunately) addressed in the next version of the finetune. If you are wondering why I renamed my finetune to Macbeth, there are a few reasons: First, it sounds much better and smoother than Kelini, second, it's a play by Shakespeare that closely matches the writing style of some of the authors in my dataset, and third, the most important reason, it was mentioned in Hamilton, so yes, my love for Hamilton is bleeding everywhere and yes, the next version of the dataset will try to have a Hamilton easter egg featuring the Author's Note. ### Credits I want to thank HuggingFace for their tokenizer and everything they've done to make everything easier. Then OpenAI for making GPT-2. I also want to thank the most active people on the AIM Discord server in the community-projects channel. Thanks to Bran for finding a way to convert checkpoints to a PyTorch model, thanks to Mr. 
Seeker and Aedial for helping me clean the dataset, and to *finetune* from the NovelAI team for perhaps making my finetune's output much better quality by telling me about the magic of the <\|endoftext\|> token. P.S. If you happen to use it in something commercial or in an online demo or in any other way that is not for personal use, a credit will be greatly appreciated (and if you do something exciting with it, make sure to let me know, I'd be more than happy to see it being used by someone!).
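If you want to try the finetune outside KoboldAI, below is a minimal, unofficial sketch using the Hugging Face transformers library; it assumes the checkpoint loads as a standard GPT-2 causal language model under this repository id (Philipuss/GPT-Macbeth), and the sampling values are illustrative rather than tuned recommendations.

```python
# Minimal sketch (not an official example): generate with the low temperature /
# higher repetition penalty settings recommended in the Notes section, using an
# Author's Note prefix in the format described above. Parameter values here are
# illustrative assumptions, not tuned recommendations.
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Philipuss/GPT-Macbeth")
model = AutoModelForCausalLM.from_pretrained("Philipuss/GPT-Macbeth")

prompt = (
    "[ Author: George Eliot; Genre: Horror, fantasy, novel; Tags: scary, magical, victorian ]\n"
    "The fog rolled over the moor as the carriage came to a halt."
)

inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(
    **inputs,
    max_length=200,
    do_sample=True,
    temperature=0.4,          # very low temperature, as suggested in the Notes
    repetition_penalty=1.3,   # pumped-up repetition penalty, as suggested in the Notes
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```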
{}
Philipuss/GPT-Macbeth
null
[ "transformers", "pytorch", "tensorboard", "gpt2", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
PhoneSimp/DialoGPT-medium-harrypotter
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
fill-mask
transformers
{}
Photons/dummy_model
null
[ "transformers", "tf", "camembert", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Photons/dummy_tokenizer
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Phuoc/asr
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
automatic-speech-recognition
transformers
{}
Phuoc/asr_model
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Pierre-Sylvain/distilbert-base-uncased-finetuned-emotion
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
This is Brain Piano --- inference: parameters: temperature: 0.7 ---
{}
Pikachu/BrainPiano
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Pikachuqs/Adri1
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Pineapple/DialoGPT-medium-Rick-Samchez
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
PinoCorgi/DialoGPT-small-Shrek
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# Shrek DialoGPT Model
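Below is a minimal chat sketch following the generic DialoGPT usage pattern (not an official snippet from the author); it assumes the checkpoint loads as a standard GPT-2 causal language model under this repository id.

```python
# Minimal sketch of the generic DialoGPT multi-turn chat loop.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("PinoCorgi/DialoGPT-small-Shrek1")
model = AutoModelForCausalLM.from_pretrained("PinoCorgi/DialoGPT-small-Shrek1")

chat_history_ids = None
for step in range(5):
    # Encode the user message and append the end-of-sequence token.
    new_input_ids = tokenizer.encode(input(">> User: ") + tokenizer.eos_token, return_tensors="pt")
    # Append the new message to the running conversation history.
    bot_input_ids = (
        torch.cat([chat_history_ids, new_input_ids], dim=-1)
        if chat_history_ids is not None
        else new_input_ids
    )
    # Generate a response, keeping the full history for the next turn.
    chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)
    print("Bot:", tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True))
```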
{"tags": ["conversational"]}
PinoCorgi/DialoGPT-small-Shrek1
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Pip/dubsky
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# Harry Potter DialoGPT Model
{"tags": ["conversational"]}
Piumi/DialogGPT-small-harrypotter
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Piumi/DialogGPT-small-harrypotter2
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Pixie/Fadas
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Plaban81/Imdb-sentiment-analysis
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
fill-mask
transformers
# RoBERTa base trained with Spanish Legal Domain Corpora ## Table of contents <details> <summary>Click to expand</summary> - [Overview](#overview) - [Model description](#model-description) - [Intended uses and limitations](#intended-uses-and-limitations) - [How to use](#how-to-use) - [Limitations and bias](#limitations-and-bias) - [Training](#training) - [Training data](#training-data) - [Training procedure](#training-procedure) - [Evaluation](#evaluation) - [Additional information](#additional-information) - [Author](#author) - [Contact information](#contact-information) - [Copyright](#copyright) - [Licensing information](#licensing-information) - [Funding](#funding) - [Citation Information](#citation-information) - [Disclaimer](#disclaimer) </details> ## Overview - **Architecture:** roberta-base - **Language:** Spanish - **Task:** fill-mask - **Data:** Legal ## Model description The **RoBERTalex** is a transformer-based masked language model for the Spanish language. It is based on the [RoBERTa](https://arxiv.org/abs/1907.11692) base model and has been pre-trained using a large [Spanish Legal Domain Corpora](https://zenodo.org/record/5495529), with a total of 8.9GB of text. ## Intended uses and limitations The **RoBERTalex** model is ready-to-use only for masked language modeling to perform the Fill Mask task (try the inference API or read the next section). However, it is intended to be fine-tuned on non-generative downstream tasks such as Question Answering, Text Classification, or Named Entity Recognition. You can use the raw model for fill mask or fine-tune it to a downstream task. ## How to use Here is how to use this model: ```python >>> from transformers import pipeline >>> from pprint import pprint >>> unmasker = pipeline('fill-mask', model='PlanTL-GOB-ES/RoBERTalex') >>> pprint(unmasker("La ley fue <mask> finalmente.")) [{'score': 0.21217258274555206, 'sequence': ' La ley fue modificada finalmente.', 'token': 5781, 'token_str': ' modificada'}, {'score': 0.20414969325065613, 'sequence': ' La ley fue derogada finalmente.', 'token': 15951, 'token_str': ' derogada'}, {'score': 0.19272951781749725, 'sequence': ' La ley fue aprobada finalmente.', 'token': 5534, 'token_str': ' aprobada'}, {'score': 0.061143241822719574, 'sequence': ' La ley fue revisada finalmente.', 'token': 14192, 'token_str': ' revisada'}, {'score': 0.041809432208538055, 'sequence': ' La ley fue aplicada finalmente.', 'token': 12208, 'token_str': ' aplicada'}] ``` Here is how to use this model to get the features of a given text in PyTorch: ```python >>> from transformers import RobertaTokenizer, RobertaModel >>> tokenizer = RobertaTokenizer.from_pretrained('PlanTL-GOB-ES/RoBERTalex') >>> model = RobertaModel.from_pretrained('PlanTL-GOB-ES/RoBERTalex') >>> text = "Gracias a los datos legales se ha podido desarrollar este modelo del lenguaje." >>> encoded_input = tokenizer(text, return_tensors='pt') >>> output = model(**encoded_input) >>> print(output.last_hidden_state.shape) torch.Size([1, 16, 768]) ``` ## Limitations and bias At the time of submission, no measures have been taken to estimate the bias embedded in the model. However, we are well aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated. 
## Training ### Training data The [Spanish Legal Domain Corpora](https://zenodo.org/record/5495529) comprise multiple digital resources, with a total of 8.9GB of textual data. Part of it has been obtained from [previous work](https://aclanthology.org/2020.lt4gov-1.6/). To obtain a high-quality training corpus, the corpus has been preprocessed with a pipeline of operations, including, among others, sentence splitting, language detection, filtering of bad-formed sentences, and deduplication of repetitive contents. During the process, document boundaries are kept. ### Training procedure The training corpus has been tokenized using a byte version of Byte-Pair Encoding (BPE) used in the original [RoBERTa](https://arxiv.org/abs/1907.11692) model with a vocabulary size of 50,262 tokens. The **RoBERTalex** pre-training consists of a masked language model training that follows the approach employed for the RoBERTa base model. The model was trained until convergence with 2 computing nodes, each one with 4 NVIDIA V100 GPUs of 16GB VRAM. ## Evaluation Due to the lack of domain-specific evaluation data, the model was evaluated on general domain tasks, where it obtains reasonable performance. We fine-tuned the model on the following tasks: | Dataset | Metric | **RoBERTalex** | |--------------|----------|------------| | UD-POS | F1 | 0.9871 | | CoNLL-NERC | F1 | 0.8323 | | CAPITEL-POS | F1 | 0.9788 | | CAPITEL-NERC | F1 | 0.8394 | | STS | Combined | 0.7374 | | MLDoc | Accuracy | 0.9417 | | PAWS-X | F1 | 0.7304 | | XNLI | Accuracy | 0.7337 | ## Additional information ### Author Text Mining Unit (TeMU) at the Barcelona Supercomputing Center ([email protected]) ### Contact information For further information, send an email to <[email protected]> ### Copyright Copyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022) ### Licensing information [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0) ### Funding This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL. ### Citation information ``` @misc{gutierrezfandino2021legal, title={Spanish Legalese Language Model and Corpora}, author={Asier Gutiérrez-Fandiño and Jordi Armengol-Estapé and Aitor Gonzalez-Agirre and Marta Villegas}, year={2021}, eprint={2110.12201}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ### Disclaimer The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions. When third parties deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence. In no event shall the owner of the models (SEDIA – State Secretariat for Digitalization and Artificial Intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models. Los modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables. 
Cuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial. En ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos.
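As noted in the intended-uses section, the model is meant to be fine-tuned on downstream tasks such as text classification; the following is a minimal sketch of how such a setup could start, where the label count and the example sentence are placeholders rather than part of the original card.

```python
# Sketch only: attaching a sequence-classification head to RoBERTalex as a
# starting point for fine-tuning. The number of labels and the example text
# are placeholders to be replaced with your own task and data.
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("PlanTL-GOB-ES/RoBERTalex")
model = AutoModelForSequenceClassification.from_pretrained(
    "PlanTL-GOB-ES/RoBERTalex",
    num_labels=2,  # placeholder: set to the number of classes in your task
)

# Tokenize labelled legal texts and train with transformers.Trainer or a plain
# PyTorch loop; only the encoding and forward pass are shown here.
batch = tokenizer(
    ["La parte demandada interpuso recurso de apelación."],
    padding=True,
    truncation=True,
    return_tensors="pt",
)
outputs = model(**batch)
print(outputs.logits.shape)  # (1, num_labels)
```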
{"language": ["es"], "license": "apache-2.0", "tags": ["legal", "spanish"], "datasets": ["legal_ES", "temu_legal"], "metrics": ["ppl"], "widget": [{"text": "La ley fue <mask> finalmente."}, {"text": "El Tribunal <mask> desestim\u00f3 el recurso de amparo."}, {"text": "Hay base legal dentro del marco <mask> actual."}]}
PlanTL-GOB-ES/RoBERTalex
null
[ "transformers", "pytorch", "roberta", "fill-mask", "legal", "spanish", "es", "dataset:legal_ES", "dataset:temu_legal", "arxiv:1907.11692", "arxiv:2110.12201", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# GPT2-base (gpt2-base-bne) trained with data from the National Library of Spain (BNE) ## Table of Contents <details> <summary>Click to expand</summary> - [Overview](#overview) - [Model description](#model-description) - [Intended uses and limitations](#intended-uses-and-limitations) - [How to Use](#how-to-use) - [Limitations and bias](#limitations-and-bias) - [Training](#training) - [Training data](#training-data) - [Training procedure](#training-procedure) - [Additional information](#additional-information) - [Author](#author) - [Contact information](#contact-information) - [Copyright](#copyright) - [Licensing information](#licensing-information) - [Funding](#funding) - [Citation Information](#citation-information) - [Disclaimer](#disclaimer) </details> ## Overview - **Architecture:** gpt2-base - **Language:** Spanish - **Task:** text-generation - **Data:** BNE ## Model description **GPT2-base-bne** is a transformer-based model for the Spanish language. It is based on the [GPT-2](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) model and has been pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text processed for this work, compiled from the web crawlings performed by the [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) from 2009 to 2019. ## Intended uses and limitations You can use the raw model for text generation or fine-tune it to a downstream task. ## How to Use Here is how to use this model: You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility: ```python >>> from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline, set_seed >>> tokenizer = AutoTokenizer.from_pretrained("PlanTL-GOB-ES/gpt2-base-bne") >>> model = AutoModelForCausalLM.from_pretrained("PlanTL-GOB-ES/gpt2-base-bne") >>> generator = pipeline('text-generation', tokenizer=tokenizer, model=model) >>> set_seed(42) >>> generator("La Biblioteca Nacional de España es una entidad pública y sus fines son", num_return_sequences=5) [{'generated_text': 'La Biblioteca Nacional de España es una entidad pública y sus fines son difundir la cultura y el arte hispánico, así como potenciar las publicaciones de la Biblioteca y colecciones de la Biblioteca Nacional de España para su difusión e inquisición. '}, {'generated_text': 'La Biblioteca Nacional de España es una entidad pública y sus fines son diversos. '}, {'generated_text': 'La Biblioteca Nacional de España es una entidad pública y sus fines son la publicación, difusión y producción de obras de arte español, y su patrimonio intelectual es el que tiene la distinción de Patrimonio de la Humanidad. '}, {'generated_text': 'La Biblioteca Nacional de España es una entidad pública y sus fines son los de colaborar en el mantenimiento de los servicios bibliotecarios y mejorar la calidad de la información de titularidad institucional y en su difusión, acceso y salvaguarda para la sociedad. '}, {'generated_text': 'La Biblioteca Nacional de España es una entidad pública y sus fines son la conservación, enseñanza y difusión del patrimonio bibliográfico en su lengua específica y/o escrita. 
'}] ``` Here is how to use this model to get the features of a given text in PyTorch: ```python >>> from transformers import AutoTokenizer, GPT2Model >>> tokenizer = AutoTokenizer.from_pretrained("PlanTL-GOB-ES/gpt2-base-bne") >>> model = GPT2Model.from_pretrained("PlanTL-GOB-ES/gpt2-base-bne") >>> text = "La Biblioteca Nacional de España es una entidad pública y sus fines son" >>> encoded_input = tokenizer(text, return_tensors='pt') >>> output = model(**encoded_input) >>> print(output.last_hidden_state.shape) torch.Size([1, 14, 768]) ``` ## Limitations and bias At the time of submission, no measures have been taken to estimate the bias and toxicity embedded in the model. However, we are well aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated. Nevertheless, here's an example of how the model can have biased predictions: ```python >>> from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline, set_seed >>> tokenizer = AutoTokenizer.from_pretrained("PlanTL-GOB-ES/gpt2-base-bne") >>> model = AutoModelForCausalLM.from_pretrained("PlanTL-GOB-ES/gpt2-base-bne") >>> generator = pipeline('text-generation', tokenizer=tokenizer, model=model) >>> set_seed(42) >>> generator("El hombre se dedica a", num_return_sequences=5) [{'generated_text': 'El hombre se dedica a comprar armas a sus amigos, pero les cuenta la historia de las ventajas de ser "buenos y regulares en la vida" e ir "bien" por los pueblos. '}, {'generated_text': 'El hombre se dedica a la venta de todo tipo de juguetes durante todo el año y los vende a través de Internet con la intención de alcanzar una mayor rentabilidad. '}, {'generated_text': 'El hombre se dedica a la venta ambulante en plena Plaza Mayor. '}, {'generated_text': 'El hombre se dedica a los toros y él se dedica a los servicios religiosos. '}, {'generated_text': 'El hombre se dedica a la caza y a la tala de pinos. '}] >>> set_seed(42) >>> generator("La mujer se dedica a", num_return_sequences=5) [{'generated_text': 'La mujer se dedica a comprar vestidos de sus padres, como su madre, y siempre le enseña el último que ha hecho en poco menos de un año para ver si le da tiempo. '}, {'generated_text': 'La mujer se dedica a la venta ambulante y su pareja vende su cuerpo desde que tenía uso del automóvil. '}, {'generated_text': 'La mujer se dedica a la venta ambulante en plena ola de frío. '}, {'generated_text': 'La mujer se dedica a limpiar los suelos y paredes en pueblos con mucha humedad. '}, {'generated_text': 'La mujer se dedica a la prostitución en varios locales de alterne clandestinos en Barcelona. '}] ``` ## Training ### Training Data The [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) crawls all .es domains once a year. The training corpus consists of 59TB of WARC files from these crawls, carried out from 2009 to 2019. To obtain a high-quality training corpus, the corpus has been preprocessed with a pipeline of operations, including among others, sentence splitting, language detection, filtering of bad-formed sentences, and deduplication of repetitive contents. During the process, document boundaries are kept. This resulted in 2TB of Spanish clean corpus. Further global deduplication among the corpus is applied, resulting in 570GB of text. 
Some of the statistics of the corpus: | Corpora | Number of documents | Number of tokens | Size (GB) | |---------|---------------------|------------------|-----------| | BNE | 201,080,084 | 135,733,450,668 | 570GB | ### Training Procedure The pretraining objective used for this architecture is next token prediction. The configuration of the **GPT2-base-bne** model is as follows: - gpt2-base: 12-layer, 768-hidden, 12-heads, 117M parameters. The training corpus has been tokenized using a byte version of Byte-Pair Encoding (BPE) used in the original [GPT-2](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) model with a vocabulary size of 50,262 tokens. The GPT2-base-bne pre-training consists of an autoregressive language model training that follows the approach of the GPT-2. The training lasted a total of 3 days with 16 computing nodes each one with 4 NVIDIA V100 GPUs of 16GB VRAM. ## Additional information ### Author Text Mining Unit (TeMU) at the Barcelona Supercomputing Center ([email protected]) ### Contact information For further information, send an email to <[email protected]> ### Copyright Copyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022) ### Licensing information This work is licensed under a [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0) ### Funding This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL. ### Citation information If you use this model, please cite our [paper](http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6405): ``` @article{, abstract = {We want to thank the National Library of Spain for such a large effort on the data gathering and the Future of Computing Center, a Barcelona Supercomputing Center and IBM initiative (2020). This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.}, author = {Asier Gutiérrez Fandiño and Jordi Armengol Estapé and Marc Pàmies and Joan Llop Palao and Joaquin Silveira Ocampo and Casimiro Pio Carrino and Carme Armentano Oller and Carlos Rodriguez Penagos and Aitor Gonzalez Agirre and Marta Villegas}, doi = {10.26342/2022-68-3}, issn = {1135-5948}, journal = {Procesamiento del Lenguaje Natural}, keywords = {Artificial intelligence,Benchmarking,Data processing.,MarIA,Natural language processing,Spanish language modelling,Spanish language resources,Tractament del llenguatge natural (Informàtica),Àrees temàtiques de la UPC::Informàtica::Intel·ligència artificial::Llenguatge natural}, publisher = {Sociedad Española para el Procesamiento del Lenguaje Natural}, title = {MarIA: Spanish Language Models}, volume = {68}, url = {https://upcommons.upc.edu/handle/2117/367156#.YyMTB4X9A-0.mendeley}, year = {2022}, } ``` ### Disclaimer <details> <summary>Click to expand</summary> The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions. 
When third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence. In no event shall the owner of the models (SEDIA – State Secretariat for Digitalization and Artificial Intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models. Los modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables. Cuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial. En ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos. </details>
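As a small illustration of the byte-level BPE tokenization described in the training procedure, the following sketch (not an official snippet) loads the tokenizer and inspects it; the example sentence is an arbitrary choice.

```python
# Quick, unofficial check of the byte-level BPE tokenizer described above;
# it should report the 50,262-token vocabulary mentioned in the card.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("PlanTL-GOB-ES/gpt2-base-bne")
print(tokenizer.vocab_size)
print(tokenizer.tokenize("La Biblioteca Nacional de España custodia el patrimonio bibliográfico."))
```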
{"language": ["es"], "license": "apache-2.0", "tags": ["national library of spain", "spanish", "bne", "gpt2-base-bne"], "datasets": ["bne"], "widget": [{"text": "El modelo del lenguaje GPT es capaz de"}, {"text": "La Biblioteca Nacional de Espa\u00f1a es una entidad p\u00fablica y sus fines son"}]}
PlanTL-GOB-ES/gpt2-base-bne
null
[ "transformers", "pytorch", "gpt2", "text-generation", "national library of spain", "spanish", "bne", "gpt2-base-bne", "es", "dataset:bne", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# GPT2-large trained with data from the National Library of Spain (BNE) ## Table of Contents <details> <summary>Click to expand</summary> - [Overview](#overview) - [Model description](#model-description) - [Intended uses and limitations](#intended-use) - [How to use](#how-to-use) - [Limitations and bias](#limitations-and-bias) - [Training](#training) - [Training data](#training-data) - [Training procedure](#training-procedure) - [Additional Information](#additional-information) - [Author](#author) - [Contact information](#contact-information) - [Copyright](#copyright) - [Licensing information](#licensing-information) - [Funding](#funding) - [Disclaimer](#disclaimer) </details> ## Overview - **Architecture:** gpt2-large - **Language:** Spanish - **Task:** text-generation - **Data:** BNE ## Model description **GPT2-large-bne** is a transformer-based model for the Spanish language. It is based on the [GPT-2](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) model and has been pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text processed for this work, compiled from the web crawlings performed by the [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) from 2009 to 2019. ## Intended uses and limitations You can use the raw model for text generation or fine-tune it to a downstream task. ## How to use Here is how to use this model: You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility: ```python >>> from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline, set_seed >>> tokenizer = AutoTokenizer.from_pretrained("PlanTL-GOB-ES/gpt2-large-bne") >>> model = AutoModelForCausalLM.from_pretrained("PlanTL-GOB-ES/gpt2-large-bne") >>> generator = pipeline('text-generation', tokenizer=tokenizer, model=model) >>> set_seed(42) >>> generator("La Biblioteca Nacional de España es una entidad pública y sus fines son", num_return_sequences=5) [{'generated_text': 'La Biblioteca Nacional de España es una entidad pública y sus fines son servir como herramienta básica en la difusión de la cultura. '}, {'generated_text': 'La Biblioteca Nacional de España es una entidad pública y sus fines son el desarrollo de la educación, la cultura y el conocimiento, promoviendo actividades a través de Internet con la información que recibe del acceso a los fondos que en ella se almacenan. '}, {'generated_text': 'La Biblioteca Nacional de España es una entidad pública y sus fines son la publicación y difusión cultural. '}, {'generated_text': 'La Biblioteca Nacional de España es una entidad pública y sus fines son preservar y difundir los fondos y colecciones de la Biblioteca Nacional, así como servir de punto de encuentro para toda la comunidad científica, la academia y para la sociedad civil. 
'}, {'generated_text': 'La Biblioteca Nacional de España es una entidad pública y sus fines son la conservación, estudio y difusión del Patrimonio Bibliográfico en cualquiera de sus formas así como la formación y perfeccionamiento de los especialistas e investigadores en el campo de la información y de las bibliotecas.'}] ``` Here is how to use this model to get the features of a given text in PyTorch: ```python >>> from transformers import AutoTokenizer, GPT2Model >>> tokenizer = AutoTokenizer.from_pretrained("PlanTL-GOB-ES/gpt2-large-bne") >>> model = GPT2Model.from_pretrained("PlanTL-GOB-ES/gpt2-large-bne") >>> text = "La Biblioteca Nacional de España es una entidad pública y sus fines son" >>> encoded_input = tokenizer(text, return_tensors='pt') >>> output = model(**encoded_input) >>> print(output.last_hidden_state.shape) torch.Size([1, 14, 1280]) ``` ## Limitations and bias At the time of submission, no measures have been taken to estimate the bias and toxicity embedded in the model. However, we are well aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated. Nevertheless, here's an example of how the model can have biased predictions: ```python >>> from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline, set_seed >>> tokenizer = AutoTokenizer.from_pretrained("PlanTL-GOB-ES/gpt2-large-bne") >>> model = AutoModelForCausalLM.from_pretrained("PlanTL-GOB-ES/gpt2-large-bne") >>> generator = pipeline('text-generation', tokenizer=tokenizer, model=model) >>> set_seed(42) >>> generator("El hombre se dedica a", num_return_sequences=5) [{'generated_text': 'El hombre se dedica a comprar móviles a sus padres, pero les paga por ellos y luego les devuelve la pasta a ella. '}, {'generated_text': 'El hombre se dedica a la venta ambulante ilegal en la zona de la Alameda, con puestos del rastro callejero o de supermercados a los que luego roba. '}, {'generated_text': 'El hombre se dedica a la venta ambulante en el Paseo de Melilla. '}, {'generated_text': 'El hombre se dedica a los tatuajes y los dibujos en el cuerpo con su apariencia física y no da a basto en las tareas domésticas. '}, {'generated_text': 'El hombre se dedica a la caza indiscriminada de animales. '}] >>> set_seed(42) >>> generator("La mujer se dedica a", num_return_sequences=5) [{'generated_text': 'La mujer se dedica a comprar móviles a sus padres, pero les paga por ellos y luego no paga la factura." '}, {'generated_text': 'La mujer se dedica a la venta ambulante y su pareja vende cupones en el mercadillo navideño. '}, {'generated_text': 'La mujer se dedica a la venta al por mayor de perfumes, cosmética, complementos, y otros bienes de consumo. '}, {'generated_text': 'La mujer se dedica a los servicios sexuales y se aprovecha de los servicios religiosos. '}, {'generated_text': 'La mujer se dedica a la prostitución y tiene dos hijas del matrimonio y la propia familia de la víctima. '}] ``` ## Training ### Training data The [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) crawls all .es domains once a year. The training corpus consists of 59TB of WARC files from these crawls, carried out from 2009 to 2019. 
To obtain a high-quality training corpus, the corpus has been preprocessed with a pipeline of operations, including among others, sentence splitting, language detection, filtering of bad-formed sentences, and deduplication of repetitive contents. During the process, document boundaries are kept. This resulted in 2TB of Spanish clean corpus. Further global deduplication among the corpus is applied, resulting in 570GB of text. Some of the statistics of the corpus: | Corpora | Number of documents | Number of tokens | Size (GB) | |---------|---------------------|------------------|-----------| | BNE | 201,080,084 | 135,733,450,668 | 570GB | ### Training procedure The pretraining objective used for this architecture is next token prediction. The configuration of the **GPT2-large-bne** model is as follows: - gpt2-large: 36-layer, 1280-hidden, 20-heads, 774M parameters. The training corpus has been tokenized using a byte version of Byte-Pair Encoding (BPE) used in the original [GPT-2](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) model with a vocabulary size of 50,262 tokens. The GPT2-large-bne pre-training consists of an autoregressive language model training that follows the approach of the GPT-2. The training lasted a total of 10 days with 32 computing nodes each one with 4 NVIDIA V100 GPUs of 16GB VRAM. ## Additional information ### Author Text Mining Unit (TeMU) at the Barcelona Supercomputing Center ([email protected]) ### Contact information For further information, send an email to <[email protected]> ### Copyright Copyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022) ### Licensing information This work is licensed under a [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0) ### Funding This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL. ### Citation information If you use this model, please cite our [paper](http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6405): ``` @article{, abstract = {We want to thank the National Library of Spain for such a large effort on the data gathering and the Future of Computing Center, a Barcelona Supercomputing Center and IBM initiative (2020). 
This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.}, author = {Asier Gutiérrez Fandiño and Jordi Armengol Estapé and Marc Pàmies and Joan Llop Palao and Joaquin Silveira Ocampo and Casimiro Pio Carrino and Carme Armentano Oller and Carlos Rodriguez Penagos and Aitor Gonzalez Agirre and Marta Villegas}, doi = {10.26342/2022-68-3}, issn = {1135-5948}, journal = {Procesamiento del Lenguaje Natural}, keywords = {Artificial intelligence,Benchmarking,Data processing.,MarIA,Natural language processing,Spanish language modelling,Spanish language resources,Tractament del llenguatge natural (Informàtica),Àrees temàtiques de la UPC::Informàtica::Intel·ligència artificial::Llenguatge natural}, publisher = {Sociedad Española para el Procesamiento del Lenguaje Natural}, title = {MarIA: Spanish Language Models}, volume = {68}, url = {https://upcommons.upc.edu/handle/2117/367156#.YyMTB4X9A-0.mendeley}, year = {2022}, } ``` ### Disclaimer <details> <summary>Click to expand</summary> The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions. When third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence. In no event shall the owner of the models (SEDIA – State Secretariat for Digitalization and Artificial Intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models. Los modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables. Cuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial. En ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos. </details>
{"language": ["es"], "license": "apache-2.0", "tags": ["national library of spain", "spanish", "bne", "gpt2-large-bne"], "datasets": ["bne"], "widget": [{"text": "El modelo del lenguaje GPT es capaz de"}, {"text": "La Biblioteca Nacional de Espa\u00f1a es una entidad p\u00fablica y sus fines son"}]}
PlanTL-GOB-ES/gpt2-large-bne
null
[ "transformers", "pytorch", "gpt2", "text-generation", "national library of spain", "spanish", "bne", "gpt2-large-bne", "es", "dataset:bne", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
fill-mask
transformers
# Biomedical-clinical language model for Spanish ## Table of contents <details> <summary>Click to expand</summary> - [Model description](#model-description) - [Intended uses and limitations](#intended-use) - [How to use](#how-to-use) - [Limitations and bias](#limitations-and-bias) - [Training](#training) - [Evaluation](#evaluation) - [Additional information](#additional-information) - [Author](#author) - [Contact information](#contact-information) - [Copyright](#copyright) - [Licensing information](#licensing-information) - [Funding](#funding) - [Citation information](#citation-information) - [Disclaimer](#disclaimer) </details> ## Model description Biomedical pretrained language model for Spanish. This model is a [RoBERTa-based](https://github.com/pytorch/fairseq/tree/master/examples/roberta) model trained on a **biomedical-clinical** corpus in Spanish collected from several sources. ## Intended uses and limitations The model is ready-to-use only for masked language modelling to perform the Fill Mask task (try the inference API or read the next section). However, it is intended to be fine-tuned on downstream tasks such as Named Entity Recognition or Text Classification. ## How to use ```python from transformers import AutoTokenizer, AutoModelForMaskedLM tokenizer = AutoTokenizer.from_pretrained("BSC-TeMU/roberta-base-biomedical-es") model = AutoModelForMaskedLM.from_pretrained("BSC-TeMU/roberta-base-biomedical-es") from transformers import pipeline unmasker = pipeline('fill-mask', model="BSC-TeMU/roberta-base-biomedical-es") unmasker("El único antecedente personal a reseñar era la <mask> arterial.") ``` ``` # Output [ { "sequence": " El único antecedente personal a reseñar era la hipertensión arterial.", "score": 0.9855039715766907, "token": 3529, "token_str": " hipertensión" }, { "sequence": " El único antecedente personal a reseñar era la diabetes arterial.", "score": 0.0039140828885138035, "token": 1945, "token_str": " diabetes" }, { "sequence": " El único antecedente personal a reseñar era la hipotensión arterial.", "score": 0.002484665485098958, "token": 11483, "token_str": " hipotensión" }, { "sequence": " El único antecedente personal a reseñar era la Hipertensión arterial.", "score": 0.0023484621196985245, "token": 12238, "token_str": " Hipertensión" }, { "sequence": " El único antecedente personal a reseñar era la presión arterial.", "score": 0.0008009297889657319, "token": 2267, "token_str": " presión" } ] ``` ## Limitations and bias At the time of submission, no measures have been taken to estimate the bias embedded in the model. However, we are well aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated. ## Training The training corpus has been tokenized using a byte version of [Byte-Pair Encoding (BPE)](https://github.com/openai/gpt-2) used in the original [RoBERTA](https://github.com/pytorch/fairseq/tree/master/examples/roberta) model with a vocabulary size of 52,000 tokens. The pretraining consists of a masked language model training at the subword level following the approach employed for the RoBERTa base model with the same hyperparameters as in the original work. The training lasted a total of 48 hours with 16 NVIDIA V100 GPUs of 16GB DDRAM, using Adam optimizer with a peak learning rate of 0.0005 and an effective batch size of 2,048 sentences. 
The training corpus is composed of several biomedical corpora in Spanish, collected from publicly available corpora and crawlers, and a real-world clinical corpus collected from more than 278K clinical documents and notes. To obtain a high-quality training corpus while retaining the idiosyncrasies of the clinical language, a cleaning pipeline has been applied only to the biomedical corpora, keeping the clinical corpus uncleaned. Essentially, the cleaning operations used are: - data parsing in different formats - sentence splitting - language detection - filtering of ill-formed sentences - deduplication of repetitive contents - keep the original document boundaries Then, the biomedical corpora are concatenated and further global deduplication among the biomedical corpora have been applied. Eventually, the clinical corpus is concatenated to the cleaned biomedical corpus resulting in a medium-size biomedical-clinical corpus for Spanish composed of more than 1B tokens. The table below shows some basic statistics of the individual cleaned corpora: | Name | No. tokens | Description | |-----------------------------------------------------------------------------------------|-------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | [Medical crawler](https://zenodo.org/record/4561970) | 745,705,946 | Crawler of more than 3,000 URLs belonging to Spanish biomedical and health domains. | | Clinical cases misc. | 102,855,267 | A miscellany of medical content, essentially clinical cases. Note that a clinical case report is a scientific publication where medical practitioners share patient cases and it is different from a clinical note or document. | | Clinical notes/documents | 91,250,080 | Collection of more than 278K clinical documents, including discharge reports, clinical course notes and X-ray reports, for a total of 91M tokens. | | [Scielo](https://github.com/PlanTL-SANIDAD/SciELO-Spain-Crawler) | 60,007,289 | Publications written in Spanish crawled from the Spanish SciELO server in 2017. | | [BARR2_background](https://temu.bsc.es/BARR2/downloads/background_set.raw_text.tar.bz2) | 24,516,442 | Biomedical Abbreviation Recognition and Resolution (BARR2) containing Spanish clinical case study sections from a variety of clinical disciplines. | | Wikipedia_life_sciences | 13,890,501 | Wikipedia articles crawled 04/01/2021 with the [Wikipedia API python library](https://pypi.org/project/Wikipedia-API/) starting from the "Ciencias\_de\_la\_vida" category up to a maximum of 5 subcategories. Multiple links to the same articles are then discarded to avoid repeating content. | | Patents | 13,463,387 | Google Patent in Medical Domain for Spain (Spanish). The accepted codes (Medical Domain) for Json files of patents are: "A61B", "A61C","A61F", "A61H", "A61K", "A61L","A61M", "A61B", "A61P". | | [EMEA](http://opus.nlpl.eu/download.php?f=EMEA/v3/moses/en-es.txt.zip) | 5,377,448 | Spanish-side documents extracted from parallel corpora made out of PDF documents from the European Medicines Agency. | | [mespen_Medline](https://zenodo.org/record/3562536#.YTt1fH2xXbR) | 4,166,077 | Spanish-side articles extracted from a collection of Spanish-English parallel corpus consisting of biomedical scientific literature. The collection of parallel resources are aggregated from the MedlinePlus source. 
| | PubMed | 1,858,966 | Open-access articles from the PubMed repository crawled in 2017. | ## Evaluation The model has been evaluated on the Named Entity Recognition (NER) using the following datasets: - [PharmaCoNER](https://zenodo.org/record/4270158): is a track on chemical and drug mention recognition from Spanish medical texts (for more info see: https://temu.bsc.es/pharmaconer/). - [CANTEMIST](https://zenodo.org/record/3978041#.YTt5qH2xXbQ): is a shared task specifically focusing on named entity recognition of tumor morphology, in Spanish (for more info see: https://zenodo.org/record/3978041#.YTt5qH2xXbQ). - ICTUSnet: consists of 1,006 hospital discharge reports of patients admitted for stroke from 18 different Spanish hospitals. It contains more than 79,000 annotations for 51 different kinds of variables. The evaluation results are compared against the [mBERT](https://huggingface.co/bert-base-multilingual-cased) and [BETO](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) models: | F1 - Precision - Recall | roberta-base-biomedical-clinical-es | mBERT | BETO | |---------------------------|----------------------------|-------------------------------|-------------------------| | PharmaCoNER | **90.04** - **88.92** - **91.18** | 87.46 - 86.50 - 88.46 | 88.18 - 87.12 - 89.28 | | CANTEMIST | **83.34** - **81.48** - **85.30** | 82.61 - 81.12 - 84.15 | 82.42 - 80.91 - 84.00 | | ICTUSnet | **88.08** - **84.92** - **91.50** | 86.75 - 83.53 - 90.23 | 85.95 - 83.10 - 89.02 | ## Additional information ### Author Text Mining Unit (TeMU) at the Barcelona Supercomputing Center ([email protected]) ### Contact information For further information, send an email to <[email protected]> ### Copyright Copyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022) ### Licensing information [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0) ### Funding This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL. ### Citation information If you use our models, please cite our latest preprint: ```bibtex @misc{carrino2021biomedical, title={Biomedical and Clinical Language Models for Spanish: On the Benefits of Domain-Specific Pretraining in a Mid-Resource Scenario}, author={Casimiro Pio Carrino and Jordi Armengol-Estapé and Asier Gutiérrez-Fandiño and Joan Llop-Palao and Marc Pàmies and Aitor Gonzalez-Agirre and Marta Villegas}, year={2021}, eprint={2109.03570}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` If you use our Medical Crawler corpus, please cite the preprint: ```bibtex @misc{carrino2021spanish, title={Spanish Biomedical Crawled Corpus: A Large, Diverse Dataset for Spanish Biomedical Language Models}, author={Casimiro Pio Carrino and Jordi Armengol-Estapé and Ona de Gibert Bonet and Asier Gutiérrez-Fandiño and Aitor Gonzalez-Agirre and Martin Krallinger and Marta Villegas}, year={2021}, eprint={2109.07765}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ### Disclaimer <details> <summary>Click to expand</summary> The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions. 
When third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence. In no event shall the owner of the models (SEDIA – State Secretariat for Digitalization and Artificial Intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models. Los modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables. Cuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial. En ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos. </details>
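Since the model is intended to be fine-tuned on tasks such as NER (it is evaluated on PharmaCoNER, CANTEMIST and ICTUSnet above), here is a minimal sketch of attaching a token-classification head as a fine-tuning starting point; the label list is a placeholder and the repository id used is the one this card is published under.

```python
# Sketch only: preparing the clinical checkpoint for NER fine-tuning.
# The label list below is a placeholder; replace it with the tag set of your
# own dataset (e.g. the PharmaCoNER annotations mentioned above).
from transformers import AutoTokenizer, AutoModelForTokenClassification

labels = ["O", "B-ENT", "I-ENT"]  # placeholder tag set
tokenizer = AutoTokenizer.from_pretrained("PlanTL-GOB-ES/roberta-base-biomedical-clinical-es")
model = AutoModelForTokenClassification.from_pretrained(
    "PlanTL-GOB-ES/roberta-base-biomedical-clinical-es",
    num_labels=len(labels),
    id2label=dict(enumerate(labels)),
    label2id={label: i for i, label in enumerate(labels)},
)

encoded = tokenizer("El paciente presenta hipertensión arterial.", return_tensors="pt")
print(model(**encoded).logits.shape)  # (1, sequence_length, num_labels)
```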
{"language": ["es"], "license": "apache-2.0", "tags": ["biomedical", "clinical", "spanish"], "metrics": ["ppl"], "widget": [{"text": "El \u00fanico antecedente personal a rese\u00f1ar era la <mask> arterial."}, {"text": "Las radiolog\u00edas \u00f3seas de cuerpo entero no detectan alteraciones <mask>, ni alteraciones vertebrales."}, {"text": "En el <mask> toraco-abd\u00f3mino-p\u00e9lvico no se encontraron hallazgos patol\u00f3gicos de inter\u00e9s."}]}
PlanTL-GOB-ES/roberta-base-biomedical-clinical-es
null
[ "transformers", "pytorch", "roberta", "fill-mask", "biomedical", "clinical", "spanish", "es", "arxiv:2109.03570", "arxiv:2109.07765", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
fill-mask
transformers
# Biomedical language model for Spanish ## Table of contents <details> <summary>Click to expand</summary> - [Model description](#model-description) - [Intended uses and limitations](#intended-use) - [How to use](#how-to-use) - [Limitations and bias](#limitations-and-bias) - [Training](#training) - [Tokenization and model pretraining](#Tokenization-pretraining) - [Training corpora and preprocessing](#training-corpora-preprocessing) - [Evaluation](#evaluation) - [Additional information](#additional-information) - [Author](#author) - [Contact information](#contact-information) - [Copyright](#copyright) - [Licensing information](#licensing-information) - [Funding](#funding) - [Disclaimer](#disclaimer) </details> ## Model description Biomedical pretrained language model for Spanish. For more details about the corpus, the pretraining and the evaluation, check the official [repository](https://github.com/PlanTL-SANIDAD/lm-biomedical-clinical-es) and read our [preprint](https://arxiv.org/abs/2109.03570). ## Intended uses and limitations The model is ready-to-use only for masked language modelling to perform the Fill Mask task (try the inference API or read the next section). However, it is intended to be fine-tuned on downstream tasks such as Named Entity Recognition or Text Classification. ## How to use ```python from transformers import AutoTokenizer, AutoModelForMaskedLM tokenizer = AutoTokenizer.from_pretrained("BSC-TeMU/roberta-base-biomedical-es") model = AutoModelForMaskedLM.from_pretrained("BSC-TeMU/roberta-base-biomedical-es") from transformers import pipeline unmasker = pipeline('fill-mask', model="BSC-TeMU/roberta-base-biomedical-es") unmasker("El único antecedente personal a reseñar era la <mask> arterial.") ``` ``` # Output [ { "sequence": " El único antecedente personal a reseñar era la hipertensión arterial.", "score": 0.9855039715766907, "token": 3529, "token_str": " hipertensión" }, { "sequence": " El único antecedente personal a reseñar era la diabetes arterial.", "score": 0.0039140828885138035, "token": 1945, "token_str": " diabetes" }, { "sequence": " El único antecedente personal a reseñar era la hipotensión arterial.", "score": 0.002484665485098958, "token": 11483, "token_str": " hipotensión" }, { "sequence": " El único antecedente personal a reseñar era la Hipertensión arterial.", "score": 0.0023484621196985245, "token": 12238, "token_str": " Hipertensión" }, { "sequence": " El único antecedente personal a reseñar era la presión arterial.", "score": 0.0008009297889657319, "token": 2267, "token_str": " presión" } ] ``` ## Training ### Tokenization and model pretraining This model is a [RoBERTa-based](https://github.com/pytorch/fairseq/tree/master/examples/roberta) model trained on a **biomedical** corpus in Spanish collected from several sources (see next section). The training corpus has been tokenized using a byte version of [Byte-Pair Encoding (BPE)](https://github.com/openai/gpt-2) used in the original [RoBERTA](https://github.com/pytorch/fairseq/tree/master/examples/roberta) model with a vocabulary size of 52,000 tokens. The pretraining consists of a masked language model training at the subword level following the approach employed for the RoBERTa base model with the same hyperparameters as in the original work. The training lasted a total of 48 hours with 16 NVIDIA V100 GPUs of 16GB DDRAM, using Adam optimizer with a peak learning rate of 0.0005 and an effective batch size of 2,048 sentences. 
### Training corpora and preprocessing The training corpus is composed of several biomedical corpora in Spanish, collected from publicly available corpora and crawlers. To obtain a high-quality training corpus, a cleaning pipeline with the following operations has been applied: - data parsing in different formats - sentence splitting - language detection - filtering of ill-formed sentences - deduplication of repetitive contents - keep the original document boundaries Finally, the corpora are concatenated and further global deduplication among the corpora have been applied. The result is a medium-size biomedical corpus for Spanish composed of about 963M tokens. The table below shows some basic statistics of the individual cleaned corpora: | Name | No. tokens | Description | |-----------------------------------------------------------------------------------------|-------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | [Medical crawler](https://zenodo.org/record/4561970) | 745,705,946 | Crawler of more than 3,000 URLs belonging to Spanish biomedical and health domains. | | Clinical cases misc. | 102,855,267 | A miscellany of medical content, essentially clinical cases. Note that a clinical case report is a scientific publication where medical practitioners share patient cases and it is different from a clinical note or document. | | [Scielo](https://github.com/PlanTL-SANIDAD/SciELO-Spain-Crawler) | 60,007,289 | Publications written in Spanish crawled from the Spanish SciELO server in 2017. | | [BARR2_background](https://temu.bsc.es/BARR2/downloads/background_set.raw_text.tar.bz2) | 24,516,442 | Biomedical Abbreviation Recognition and Resolution (BARR2) containing Spanish clinical case study sections from a variety of clinical disciplines. | | Wikipedia_life_sciences | 13,890,501 | Wikipedia articles crawled 04/01/2021 with the [Wikipedia API python library](https://pypi.org/project/Wikipedia-API/) starting from the "Ciencias\_de\_la\_vida" category up to a maximum of 5 subcategories. Multiple links to the same articles are then discarded to avoid repeating content. | | Patents | 13,463,387 | Google Patent in Medical Domain for Spain (Spanish). The accepted codes (Medical Domain) for Json files of patents are: "A61B", "A61C","A61F", "A61H", "A61K", "A61L","A61M", "A61B", "A61P". | | [EMEA](http://opus.nlpl.eu/download.php?f=EMEA/v3/moses/en-es.txt.zip) | 5,377,448 | Spanish-side documents extracted from parallel corpora made out of PDF documents from the European Medicines Agency. | | [mespen_Medline](https://zenodo.org/record/3562536#.YTt1fH2xXbR) | 4,166,077 | Spanish-side articles extracted from a collection of Spanish-English parallel corpus consisting of biomedical scientific literature. The collection of parallel resources are aggregated from the MedlinePlus source. | | PubMed | 1,858,966 | Open-access articles from the PubMed repository crawled in 2017. | ## Evaluation The model has been evaluated on the Named Entity Recognition (NER) using the following datasets: - [PharmaCoNER](https://zenodo.org/record/4270158): is a track on chemical and drug mention recognition from Spanish medical texts (for more info see: https://temu.bsc.es/pharmaconer/). 
- [CANTEMIST](https://zenodo.org/record/3978041#.YTt5qH2xXbQ): is a shared task specifically focusing on named entity recognition of tumor morphology, in Spanish (for more info see: https://zenodo.org/record/3978041#.YTt5qH2xXbQ). - ICTUSnet: consists of 1,006 hospital discharge reports of patients admitted for stroke from 18 different Spanish hospitals. It contains more than 79,000 annotations for 51 different kinds of variables. The evaluation results are compared against the [mBERT](https://huggingface.co/bert-base-multilingual-cased) and [BETO](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) models: | F1 - Precision - Recall | roberta-base-biomedical-es | mBERT | BETO | |---------------------------|----------------------------|-------------------------------|-------------------------| | PharmaCoNER | **89.48** - **87.85** - **91.18** | 87.46 - 86.50 - 88.46 | 88.18 - 87.12 - 89.28 | | CANTEMIST | **83.87** - **81.70** - **86.17** | 82.61 - 81.12 - 84.15 | 82.42 - 80.91 - 84.00 | | ICTUSnet | **88.12** - **85.56** - **90.83** | 86.75 - 83.53 - 90.23 | 85.95 - 83.10 - 89.02 | ## Additional information ### Author Text Mining Unit (TeMU) at the Barcelona Supercomputing Center ([email protected]) ### Contact information For further information, send an email to <[email protected]> ### Copyright Copyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022) ### Licensing information [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0) ### Funding This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL. ## Citation information If you use our models, please cite our latest preprint: ```bibtex @misc{carrino2021biomedical, title={Biomedical and Clinical Language Models for Spanish: On the Benefits of Domain-Specific Pretraining in a Mid-Resource Scenario}, author={Casimiro Pio Carrino and Jordi Armengol-Estapé and Asier Gutiérrez-Fandiño and Joan Llop-Palao and Marc Pàmies and Aitor Gonzalez-Agirre and Marta Villegas}, year={2021}, eprint={2109.03570}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` If you use our Medical Crawler corpus, please cite the preprint: ```bibtex @misc{carrino2021spanish, title={Spanish Biomedical Crawled Corpus: A Large, Diverse Dataset for Spanish Biomedical Language Models}, author={Casimiro Pio Carrino and Jordi Armengol-Estapé and Ona de Gibert Bonet and Asier Gutiérrez-Fandiño and Aitor Gonzalez-Agirre and Martin Krallinger and Marta Villegas}, year={2021}, eprint={2109.07765}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ### Disclaimer <details> <summary>Click to expand</summary> The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions. When third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence. 
In no event shall the owner of the models (SEDIA – State Secretariat for Digitalization and Artificial Intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models. Los modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables. Cuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial. En ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos. </details>
{"language": ["es"], "license": "apache-2.0", "tags": ["biomedical", "spanish"], "metrics": ["ppl"], "widget": [{"text": "El \u00fanico antecedente personal a rese\u00f1ar era la <mask> arterial."}, {"text": "Las radiolog\u00edas \u00f3seas de cuerpo entero no detectan alteraciones <mask>, ni alteraciones vertebrales."}, {"text": "En el <mask> toraco-abd\u00f3mino-p\u00e9lvico no se encontraron hallazgos patol\u00f3gicos de inter\u00e9s."}]}
PlanTL-GOB-ES/roberta-base-biomedical-es
null
[ "transformers", "pytorch", "roberta", "fill-mask", "biomedical", "spanish", "es", "arxiv:2109.03570", "arxiv:2109.07765", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
token-classification
transformers
# Spanish RoBERTa-base trained on BNE finetuned for CAPITEL Named Entity Recognition (NER) dataset. ## Table of contents <details> <summary>Click to expand</summary> - [Model description](#model-description) - [Intended uses and limitations](#intended-use) - [How to use](#how-to-use) - [Limitations and bias](#limitations-and-bias) - [Training](#training) - [Training data](#training-data) - [Training procedure](#training-procedure) - [Evaluation](#evaluation) - [Variable and metrics](#variable-and-metrics) - [Evaluation results](#evaluation-results) - [Additional information](#additional-information) - [Author](#author) - [Contact information](#contact-information) - [Copyright](#copyright) - [Licensing information](#licensing-information) - [Funding](#funding) - [Citing information](#citing-information) - [Disclaimer](#disclaimer) </details> ## Model description The **roberta-base-bne-capitel-ner-plus** is a Named Entity Recognition (NER) model for the Spanish language fine-tuned from the [roberta-base-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne) model, a [RoBERTa](https://arxiv.org/abs/1907.11692) base model pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text, processed for this work, compiled from the web crawlings performed by the [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) from 2009 to 2019. This model is a more robust version of the [roberta-base-bne-capitel-ner](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne-capitel-ner) model that better recognizes lowercased Named Entities (NE). ## Intended uses and limitations The **roberta-base-bne-capitel-ner-plus** model can be used to recognize Named Entities (NE). The model is limited by its training dataset and may not generalize well for all use cases. ## How to use ```python from transformers import pipeline from pprint import pprint nlp = pipeline("ner", model="PlanTL-GOB-ES/roberta-base-bne-capitel-ner-plus") example = "Me llamo francisco javier y vivo en madrid." ner_results = nlp(example) pprint(ner_results) ``` ## Limitations and bias At the time of submission, no measures have been taken to estimate the bias embedded in the model. However, we are well aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated. ## Training The dataset used for training and evaluation is the one from the [CAPITEL competition at IberLEF 2020](https://sites.google.com/view/capitel2020) (sub-task 1). We lowercased and uppercased the dataset, and added these additional sentences to the training set. ### Training procedure The model was trained with a batch size of 16 and a learning rate of 5e-5 for 5 epochs. We then selected the best checkpoint using the downstream task metric in the corresponding development set and then evaluated it on the test set. ## Evaluation ### Variable and metrics This model was finetuned maximizing F1 score. 
## Evaluation results We evaluated the **roberta-base-bne-capitel-ner-plus** on the CAPITEL-NERC test set against standard multilingual and monolingual baselines: | Model | CAPITEL-NERC (F1) | | ------------|:----| | roberta-large-bne-capitel-ner | **90.51** | | roberta-base-bne-capitel-ner | 89.60| | roberta-base-bne-capitel-ner-plus | 89.60| | BETO | 87.72 | | mBERT | 88.10 | | BERTIN | 88.56 | | ELECTRA | 80.35 | For more details, check the fine-tuning and evaluation scripts in the official [GitHub repository](https://github.com/PlanTL-GOB-ES/lm-spanish). ## Additional information ### Author Text Mining Unit (TeMU) at the Barcelona Supercomputing Center ([email protected]) ### Contact information For further information, send an email to <[email protected]> ### Copyright Copyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022) ### Licensing information [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0) ### Funding This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL. ### Citing information If you use this model, please cite our [paper](http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6405): ``` @article{, abstract = {We want to thank the National Library of Spain for such a large effort on the data gathering and the Future of Computing Center, a Barcelona Supercomputing Center and IBM initiative (2020). This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.}, author = {Asier Gutiérrez Fandiño and Jordi Armengol Estapé and Marc Pàmies and Joan Llop Palao and Joaquin Silveira Ocampo and Casimiro Pio Carrino and Carme Armentano Oller and Carlos Rodriguez Penagos and Aitor Gonzalez Agirre and Marta Villegas}, doi = {10.26342/2022-68-3}, issn = {1135-5948}, journal = {Procesamiento del Lenguaje Natural}, keywords = {Artificial intelligence,Benchmarking,Data processing.,MarIA,Natural language processing,Spanish language modelling,Spanish language resources,Tractament del llenguatge natural (Informàtica),Àrees temàtiques de la UPC::Informàtica::Intel·ligència artificial::Llenguatge natural}, publisher = {Sociedad Española para el Procesamiento del Lenguaje Natural}, title = {MarIA: Spanish Language Models}, volume = {68}, url = {https://upcommons.upc.edu/handle/2117/367156#.YyMTB4X9A-0.mendeley}, year = {2022}, } ``` ### Disclaimer The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions. When third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence. In no event shall the owner of the models (SEDIA – State Secretariat for digitalization and artificial intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models. Los modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. 
Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables. Cuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial. En ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos.
{"language": ["es"], "license": "apache-2.0", "tags": ["national library of spain", "spanish", "bne", "capitel", "ner"], "datasets": ["bne", "capitel"], "metrics": ["f1"], "inference": {"parameters": {"aggregation_strategy": "first"}}, "widget": ["Me llamo francisco javier y vivo en madrid.", "Mi hermano ram\u00f3n y su mejor amigo luis trabajan en el bsc."], "model-index": [{"name": "roberta-base-bne-capiter-ner-plus", "results": [{"task": {"type": "token-classification"}, "dataset": {"name": "CAPITEL-NERC", "type": "ner"}, "metrics": [{"type": "f1", "value": 0.896, "name": "F1"}]}]}]}
PlanTL-GOB-ES/roberta-base-bne-capitel-ner-plus
null
[ "transformers", "pytorch", "roberta", "token-classification", "national library of spain", "spanish", "bne", "capitel", "ner", "es", "dataset:bne", "dataset:capitel", "arxiv:1907.11692", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
token-classification
transformers
# Spanish RoBERTa-base trained on BNE finetuned for CAPITEL Named Entity Recognition (NER) dataset. ## Table of contents <details> <summary>Click to expand</summary> - [Model description](#model-description) - [Intended uses and limitations](#intended-use) - [How to use](#how-to-use) - [Limitations and bias](#limitations-and-bias) - [Training](#training) - [Training data](#training-data) - [Training procedure](#training-procedure) - [Evaluation](#evaluation) - [Variable and metrics](#variable-and-metrics) - [Evaluation results](#evaluation-results) - [Additional information](#additional-information) - [Author](#author) - [Contact information](#contact-information) - [Copyright](#copyright) - [Licensing information](#licensing-information) - [Funding](#funding) - [Citing information](#citing-information) - [Disclaimer](#disclaimer) </details> ## Model description The **roberta-base-bne-capitel-ner** is a Named Entity Recognition (NER) model for the Spanish language fine-tuned from the [roberta-base-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne) model, a [RoBERTa](https://arxiv.org/abs/1907.11692) base model pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text, processed for this work, compiled from the web crawlings performed by the [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) from 2009 to 2019. ## Intended uses and limitations The **roberta-base-bne-capitel-ner** model can be used to recognize Named Entities (NE). The model is limited by its training dataset and may not generalize well for all use cases. ## How to use ```python from transformers import pipeline from pprint import pprint nlp = pipeline("ner", model="PlanTL-GOB-ES/roberta-base-bne-capitel-ner") example = "Me llamo Francisco Javier y vivo en Madrid." ner_results = nlp(example) pprint(ner_results) ``` ## Limitations and bias At the time of submission, no measures have been taken to estimate the bias embedded in the model. However, we are well aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated. ## Training The dataset used for training and evaluation is the one from the [CAPITEL competition at IberLEF 2020](https://sites.google.com/view/capitel2020) (sub-task 1). ### Training procedure The model was trained with a batch size of 16 and a learning rate of 5e-5 for 5 epochs. We then selected the best checkpoint using the downstream task metric in the corresponding development set and then evaluated it on the test set. ## Evaluation ### Variable and metrics This model was finetuned maximizing F1 score. ## Evaluation results We evaluated the **roberta-base-bne-capitel-ner** on the CAPITEL-NERC test set against standard multilingual and monolingual baselines: | Model | CAPITEL-NERC (F1) | | ------------|:----| | roberta-large-bne-capitel-ner | **90.51** | | roberta-base-bne-capitel-ner | 89.60| | BETO | 87.72 | | mBERT | 88.10 | | BERTIN | 88.56 | | ELECTRA | 80.35 | For more details, check the fine-tuning and evaluation scripts in the official [GitHub repository](https://github.com/PlanTL-GOB-ES/lm-spanish). 
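For readers who want to reproduce a comparable setup without the official scripts, below is a minimal, illustrative sketch of the fine-tuning procedure described above (batch size 16, learning rate 5e-5, 5 epochs) with the 🤗 `Trainer`. The data file paths, the label list and the column names (`tokens`, `ner_tags` as integer ids) are placeholders for a local conversion of the CAPITEL sub-task 1 data; they are not artifacts shipped with this model, and the official repository remains the reference implementation.

```python
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForTokenClassification,
                          DataCollatorForTokenClassification, TrainingArguments, Trainer)

# Placeholder files: CAPITEL sub-task 1 converted locally to a dataset with
# "tokens" (list of words) and "ner_tags" (list of integer label ids) columns.
dataset = load_dataset("json", data_files={"train": "capitel_ner_train.json",
                                           "validation": "capitel_ner_dev.json"})
# Assumed BIO tag set for illustration only; use the labels of your converted data.
label_list = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC", "B-OTH", "I-OTH"]

tokenizer = AutoTokenizer.from_pretrained("PlanTL-GOB-ES/roberta-base-bne", add_prefix_space=True)
model = AutoModelForTokenClassification.from_pretrained("PlanTL-GOB-ES/roberta-base-bne",
                                                        num_labels=len(label_list))

def tokenize_and_align_labels(examples):
    # Tokenize pre-split words and keep one label per first sub-token; ignore (-100) the rest.
    tokenized = tokenizer(examples["tokens"], truncation=True, is_split_into_words=True)
    all_labels = []
    for i, tags in enumerate(examples["ner_tags"]):
        word_ids = tokenized.word_ids(batch_index=i)
        previous_word, labels = None, []
        for word_id in word_ids:
            if word_id is None or word_id == previous_word:
                labels.append(-100)
            else:
                labels.append(tags[word_id])
            previous_word = word_id
        all_labels.append(labels)
    tokenized["labels"] = all_labels
    return tokenized

tokenized_dataset = dataset.map(tokenize_and_align_labels, batched=True)

args = TrainingArguments(output_dir="roberta-base-bne-capitel-ner",
                         per_device_train_batch_size=16, learning_rate=5e-5,
                         num_train_epochs=5)
trainer = Trainer(model=model, args=args,
                  train_dataset=tokenized_dataset["train"],
                  eval_dataset=tokenized_dataset["validation"],
                  data_collator=DataCollatorForTokenClassification(tokenizer),
                  tokenizer=tokenizer)
trainer.train()
```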
## Additional information ### Author Text Mining Unit (TeMU) at the Barcelona Supercomputing Center ([email protected]) ### Contact information For further information, send an email to <[email protected]> ### Copyright Copyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022) ### Licensing information [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0) ### Funding This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL. ### Citing information If you use this model, please cite our [paper](http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6405): ``` @article{, abstract = {We want to thank the National Library of Spain for such a large effort on the data gathering and the Future of Computing Center, a Barcelona Supercomputing Center and IBM initiative (2020). This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.}, author = {Asier Gutiérrez Fandiño and Jordi Armengol Estapé and Marc Pàmies and Joan Llop Palao and Joaquin Silveira Ocampo and Casimiro Pio Carrino and Carme Armentano Oller and Carlos Rodriguez Penagos and Aitor Gonzalez Agirre and Marta Villegas}, doi = {10.26342/2022-68-3}, issn = {1135-5948}, journal = {Procesamiento del Lenguaje Natural}, keywords = {Artificial intelligence,Benchmarking,Data processing.,MarIA,Natural language processing,Spanish language modelling,Spanish language resources,Tractament del llenguatge natural (Informàtica),Àrees temàtiques de la UPC::Informàtica::Intel·ligència artificial::Llenguatge natural}, publisher = {Sociedad Española para el Procesamiento del Lenguaje Natural}, title = {MarIA: Spanish Language Models}, volume = {68}, url = {https://upcommons.upc.edu/handle/2117/367156#.YyMTB4X9A-0.mendeley}, year = {2022}, } ``` ### Disclaimer The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions. When third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence. In no event shall the owner of the models (SEDIA – State Secretariat for digitalization and artificial intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models. Los modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables. Cuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial. 
En ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos.
{"language": ["es"], "license": "apache-2.0", "tags": ["national library of spain", "spanish", "bne", "capitel", "ner"], "datasets": ["bne", "capitel"], "metrics": ["f1"], "inference": {"parameters": {"aggregation_strategy": "first"}}, "widget": ["Me llamo Francisco Javier y vivo en Madrid.", "Mi hermano Ram\u00f3n y su mejor amigo Luis trabajan en el BSC."], "model-index": [{"name": "roberta-base-bne-capiter-ner", "results": [{"task": {"type": "token-classification"}, "dataset": {"name": "CAPITEL-NERC", "type": "ner"}, "metrics": [{"type": "f1", "value": 0.896, "name": "F1"}]}]}]}
PlanTL-GOB-ES/roberta-base-bne-capitel-ner
null
[ "transformers", "pytorch", "roberta", "token-classification", "national library of spain", "spanish", "bne", "capitel", "ner", "es", "dataset:bne", "dataset:capitel", "arxiv:1907.11692", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
token-classification
transformers
# Spanish RoBERTa-base trained on BNE finetuned for CAPITEL Part of Speech (POS) dataset ## Table of contents <details> <summary>Click to expand</summary> - [Model description](#model-description) - [Intended uses and limitations](#intended-use) - [How to use](#how-to-use) - [Limitations and bias](#limitations-and-bias) - [Training](#training) - [Training data](#training-data) - [Training procedure](#training-procedure) - [Evaluation](#evaluation) - [Variable and metrics](#variable-and-metrics) - [Evaluation results](#evaluation-results) - [Additional information](#additional-information) - [Author](#author) - [Contact information](#contact-information) - [Copyright](#copyright) - [Licensing information](#licensing-information) - [Funding](#funding) - [Citing information](#citing-information) - [Disclaimer](#disclaimer) </details> ## Model description The **roberta-base-bne-capitel-pos** is a Part-of-Speech tagging (POS) model for the Spanish language fine-tuned from the [roberta-base-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne) model, a [RoBERTa](https://arxiv.org/abs/1907.11692) base model pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text, processed for this work, compiled from the web crawlings performed by the [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) from 2009 to 2019. ## Intended uses and limitations The **roberta-base-bne-capitel-pos** model can be used for Part-of-Speech tagging (POS) of a text. The model is limited by its training dataset and may not generalize well for all use cases. ## How to use Here is how to use this model: ```python from transformers import pipeline from pprint import pprint nlp = pipeline("token-classification", model="PlanTL-GOB-ES/roberta-base-bne-capitel-pos") example = "El alcalde de Vigo, Abel Caballero, ha comenzado a colocar las luces de Navidad en agosto." pos_results = nlp(example) pprint(pos_results) ``` ## Limitations and bias At the time of submission, no measures have been taken to estimate the bias embedded in the model. However, we are well aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated. ## Training The dataset used is the one from the [CAPITEL competition at IberLEF 2020](https://sites.google.com/view/capitel2020) (sub-task 2). ### Training procedure The model was trained with a batch size of 32 and a learning rate of 5e-5 for 5 epochs. We then selected the best checkpoint using the downstream task metric in the corresponding development set and then evaluated it on the test set. ## Evaluation ### Variable and metrics This model was finetuned maximizing F1 score. ## Evaluation results We evaluated the **roberta-base-bne-capitel-pos** on the CAPITEL-POS test set against standard multilingual and monolingual baselines: | Model | CAPITEL-POS (F1) | | ------------|:----| | roberta-large-bne-capitel-pos | **98.56** | | roberta-base-bne-capitel-pos | 98.46 | | BETO | 98.36 | | mBERT | 98.39 | | BERTIN | 98.47 | | ELECTRA | 98.16 | For more details, check the fine-tuning and evaluation scripts in the official [GitHub repository](https://github.com/PlanTL-GOB-ES/lm-spanish). 
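As a complementary usage note to the example above: the hosted inference widget for this model is configured with `aggregation_strategy="first"`, which groups sub-tokens back into words. Purely as an illustrative variant (not part of the official scripts), the same setting can be passed to the pipeline to obtain one tag per word instead of per sub-token:

```python
from transformers import pipeline

# aggregation_strategy="first" keeps, for each word, the tag predicted for its first sub-token.
nlp = pipeline("token-classification",
               model="PlanTL-GOB-ES/roberta-base-bne-capitel-pos",
               aggregation_strategy="first")
for word in nlp("El alcalde de Vigo, Abel Caballero, ha comenzado a colocar las luces de Navidad en agosto."):
    print(word["word"], word["entity_group"], round(word["score"], 3))
```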
## Additional information ### Author Text Mining Unit (TeMU) at the Barcelona Supercomputing Center ([email protected]) ### Contact information For further information, send an email to <[email protected]> ### Copyright Copyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022) ### Licensing information [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0) ### Funding This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL. ### Citing information If you use this model, please cite our [paper](http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6405): ``` @article{, abstract = {We want to thank the National Library of Spain for such a large effort on the data gathering and the Future of Computing Center, a Barcelona Supercomputing Center and IBM initiative (2020). This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.}, author = {Asier Gutiérrez Fandiño and Jordi Armengol Estapé and Marc Pàmies and Joan Llop Palao and Joaquin Silveira Ocampo and Casimiro Pio Carrino and Carme Armentano Oller and Carlos Rodriguez Penagos and Aitor Gonzalez Agirre and Marta Villegas}, doi = {10.26342/2022-68-3}, issn = {1135-5948}, journal = {Procesamiento del Lenguaje Natural}, keywords = {Artificial intelligence,Benchmarking,Data processing.,MarIA,Natural language processing,Spanish language modelling,Spanish language resources,Tractament del llenguatge natural (Informàtica),Àrees temàtiques de la UPC::Informàtica::Intel·ligència artificial::Llenguatge natural}, publisher = {Sociedad Española para el Procesamiento del Lenguaje Natural}, title = {MarIA: Spanish Language Models}, volume = {68}, url = {https://upcommons.upc.edu/handle/2117/367156#.YyMTB4X9A-0.mendeley}, year = {2022}, } ``` ### Disclaimer The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions. When third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence. In no event shall the owner of the models (SEDIA – State Secretariat for digitalization and artificial intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models. Los modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables. Cuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial. 
En ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos.
{"language": ["es"], "license": "apache-2.0", "tags": ["national library of spain", "spanish", "bne", "capitel", "pos"], "datasets": ["bne", "capitel"], "metrics": ["f1"], "inference": {"parameters": {"aggregation_strategy": "first"}}, "widget": [{"text": "Festival de San Sebasti\u00e1n: Johnny Depp recibir\u00e1 el premio Donostia en pleno rifirrafe judicial con Amber Heard"}, {"text": "El alcalde de Vigo, Abel Caballero, ha comenzado a colocar las luces de Navidad en agosto."}, {"text": "Gracias a los datos de la BNE, se ha podido lograr este modelo del lenguaje."}], "model-index": [{"name": "roberta-base-bne-capiter-pos", "results": [{"task": {"type": "token-classification"}, "dataset": {"name": "CAPITEL-POS", "type": "pos"}, "metrics": [{"type": "f1", "value": 0.9846, "name": "F1"}]}]}]}
PlanTL-GOB-ES/roberta-base-bne-capitel-pos
null
[ "transformers", "pytorch", "roberta", "token-classification", "national library of spain", "spanish", "bne", "capitel", "pos", "es", "dataset:bne", "dataset:capitel", "arxiv:1907.11692", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
question-answering
transformers
# Spanish RoBERTa-base trained on BNE finetuned for Spanish Question Answering Corpus (SQAC) dataset. ## Table of contents <details> <summary>Click to expand</summary> - [Model description](#model-description) - [Intended uses and limitations](#intended-use) - [How to use](#how-to-use) - [Limitations and bias](#limitations-and-bias) - [Training](#training) - [Training data](#training-data) - [Training procedure](#training-procedure) - [Evaluation](#evaluation) - [Variable and metrics](#variable-and-metrics) - [Evaluation results](#evaluation-results) - [Additional information](#additional-information) - [Author](#author) - [Contact information](#contact-information) - [Copyright](#copyright) - [Licensing information](#licensing-information) - [Funding](#funding) - [Citing information](#citing-information) - [Disclaimer](#disclaimer) </details> ## Model description The **roberta-base-bne-sqac** is a Question Answering (QA) model for the Spanish language fine-tuned from the [roberta-base-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne) model, a [RoBERTa](https://arxiv.org/abs/1907.11692) base model pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text, processed for this work, compiled from the web crawlings performed by the [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) from 2009 to 2019. ## Intended uses and limitations The **roberta-base-bne-sqac** model can be used for extractive question answering. The model is limited by its training dataset and may not generalize well for all use cases. ## How to use ```python from transformers import pipeline nlp = pipeline("question-answering", model="PlanTL-GOB-ES/roberta-base-bne-sqac") text = "¿Dónde vivo?" context = "Me llamo Wolfgang y vivo en Berlin" qa_results = nlp(text, context) print(qa_results) ``` ## Limitations and bias At the time of submission, no measures have been taken to estimate the bias embedded in the model. However, we are well aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated. ## Training ### Training data We used the QA dataset in Spanish called [SQAC corpus](https://huggingface.co/datasets/PlanTL-GOB-ES/SQAC) for training and evaluation. ### Training procedure The model was trained with a batch size of 16 and a learning rate of 5e-5 for 5 epochs. We then selected the best checkpoint using the downstream task metric in the corresponding development set and then evaluated it on the test set. ## Evaluation results We evaluated the **roberta-base-bne-sqac** on the SQAC test set against standard multilingual and monolingual baselines: | Model | SQAC (F1) | | ------------|:----| | roberta-large-bne-sqac | **82.02** | | roberta-base-bne-sqac | 79.23| | BETO | 79.23 | | mBERT | 75.62 | | BERTIN | 76.78 | | ELECTRA | 73.83 | For more details, check the fine-tuning and evaluation scripts in the official [GitHub repository](https://github.com/PlanTL-GOB-ES/lm-spanish). 
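Beyond those scripts, the following is a minimal, illustrative sketch of running the model over the SQAC test split. It assumes the dataset exposes a `test` split with SQuAD-style `question`, `context` and `answers` fields; check the SQAC dataset card before relying on these names.

```python
from datasets import load_dataset
from transformers import pipeline

# Illustrative only: split name and field names assume a SQuAD-style layout of SQAC.
sqac = load_dataset("PlanTL-GOB-ES/SQAC", split="test")
qa = pipeline("question-answering", model="PlanTL-GOB-ES/roberta-base-bne-sqac")

example = sqac[0]
prediction = qa(question=example["question"], context=example["context"])
print(prediction["answer"], "| gold:", example["answers"]["text"])
```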
## Additional information ### Author Text Mining Unit (TeMU) at the Barcelona Supercomputing Center ([email protected]) ### Contact information For further information, send an email to <[email protected]> ### Copyright Copyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022) ### Licensing information [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0) ### Funding This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL. ### Citing information If you use this model, please cite our [paper](http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6405): ``` @article{, abstract = {We want to thank the National Library of Spain for such a large effort on the data gathering and the Future of Computing Center, a Barcelona Supercomputing Center and IBM initiative (2020). This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.}, author = {Asier Gutiérrez Fandiño and Jordi Armengol Estapé and Marc Pàmies and Joan Llop Palao and Joaquin Silveira Ocampo and Casimiro Pio Carrino and Carme Armentano Oller and Carlos Rodriguez Penagos and Aitor Gonzalez Agirre and Marta Villegas}, doi = {10.26342/2022-68-3}, issn = {1135-5948}, journal = {Procesamiento del Lenguaje Natural}, keywords = {Artificial intelligence,Benchmarking,Data processing.,MarIA,Natural language processing,Spanish language modelling,Spanish language resources,Tractament del llenguatge natural (Informàtica),Àrees temàtiques de la UPC::Informàtica::Intel·ligència artificial::Llenguatge natural}, publisher = {Sociedad Española para el Procesamiento del Lenguaje Natural}, title = {MarIA: Spanish Language Models}, volume = {68}, url = {https://upcommons.upc.edu/handle/2117/367156#.YyMTB4X9A-0.mendeley}, year = {2022}, } ``` ### Disclaimer The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions. When third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence. In no event shall the owner of the models (SEDIA – State Secretariat for digitalization and artificial intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models. Los modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables. Cuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial. 
En ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos.
{"language": ["es"], "license": "apache-2.0", "tags": ["national library of spain", "spanish", "bne", "qa", "question answering"], "datasets": ["PlanTL-GOB-ES/SQAC"], "metrics": ["f1", "exact match"], "model-index": [{"name": "roberta-base-bne-sqac", "results": [{"task": {"type": "question-answering"}, "dataset": {"name": "SQAC", "type": "PlanTL-GOB-ES/SQAC"}, "metrics": [{"type": "f1", "value": 0.7923, "name": "F1"}]}]}]}
PlanTL-GOB-ES/roberta-base-bne-sqac
null
[ "transformers", "pytorch", "roberta", "question-answering", "national library of spain", "spanish", "bne", "qa", "question answering", "es", "dataset:PlanTL-GOB-ES/SQAC", "arxiv:1907.11692", "license:apache-2.0", "model-index", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
fill-mask
transformers
# RoBERTa base trained with data from the National Library of Spain (BNE) ## Table of Contents <details> <summary>Click to expand</summary> - [Overview](#overview) - [Model description](#model-description) - [Intended uses and limitations](#intended-uses-and-limitations) - [How to use](#how-to-use) - [Limitations and bias](#limitations-and-bias) - [Training](#training) - [Training data](#training-data) - [Training procedure](#training-procedure) - [Evaluation](#evaluation) - [Additional information](#additional-information) - [Author](#author) - [Contact information](#contact-information) - [Copyright](#copyright) - [Licensing information](#licensing-information) - [Funding](#funding) - [Citation Information](#citation-information) - [Disclaimer](#disclaimer) </details> ## Overview - **Architecture:** roberta-base - **Language:** Spanish - **Task:** fill-mask - **Data:** BNE ## Model description The **roberta-base-bne** is a transformer-based masked language model for the Spanish language. It is based on the [RoBERTa](https://arxiv.org/abs/1907.11692) base model and has been pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text processed for this work, compiled from the web crawlings performed by the [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) from 2009 to 2019. ## Intended uses and limitations The **roberta-base-bne** model is ready-to-use only for masked language modeling to perform the Fill Mask task (try the inference API or read the next section). However, it is intended to be fine-tuned on non-generative downstream tasks such as Question Answering, Text Classification, or Named Entity Recognition. You can use the raw model for fill mask or fine-tune it to a downstream task. ## How to use Here is how to use this model: ```python >>> from transformers import pipeline >>> from pprint import pprint >>> unmasker = pipeline('fill-mask', model='PlanTL-GOB-ES/roberta-base-bne') >>> pprint(unmasker("Gracias a los datos de la BNE se ha podido <mask> este modelo del lenguaje.")) [{'score': 0.08422081917524338, 'token': 3832, 'token_str': ' desarrollar', 'sequence': 'Gracias a los datos de la BNE se ha podido desarrollar este modelo del lenguaje.'}, {'score': 0.06348305940628052, 'token': 3078, 'token_str': ' crear', 'sequence': 'Gracias a los datos de la BNE se ha podido crear este modelo del lenguaje.'}, {'score': 0.06148449331521988, 'token': 2171, 'token_str': ' realizar', 'sequence': 'Gracias a los datos de la BNE se ha podido realizar este modelo del lenguaje.'}, {'score': 0.056218471378088, 'token': 10880, 'token_str': ' elaborar', 'sequence': 'Gracias a los datos de la BNE se ha podido elaborar este modelo del lenguaje.'}, {'score': 0.05133328214287758, 'token': 31915, 'token_str': ' validar', 'sequence': 'Gracias a los datos de la BNE se ha podido validar este modelo del lenguaje.'}] ``` Here is how to use this model to get the features of a given text in PyTorch: ```python >>> from transformers import RobertaTokenizer, RobertaModel >>> tokenizer = RobertaTokenizer.from_pretrained('PlanTL-GOB-ES/roberta-base-bne') >>> model = RobertaModel.from_pretrained('PlanTL-GOB-ES/roberta-base-bne') >>> text = "Gracias a los datos de la BNE se ha podido desarrollar este modelo del lenguaje." 
>>> encoded_input = tokenizer(text, return_tensors='pt') >>> output = model(**encoded_input) >>> print(output.last_hidden_state.shape) torch.Size([1, 19, 768]) ``` ## Limitations and bias At the time of submission, no measures have been taken to estimate the bias and toxicity embedded in the model. However, we are well aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated. Nevertheless, here's an example of how the model can have biased predictions: ```python >>> from transformers import pipeline, set_seed >>> from pprint import pprint >>> unmasker = pipeline('fill-mask', model='PlanTL-GOB-ES/roberta-base-bne') >>> set_seed(42) >>> pprint(unmasker("Antonio está pensando en <mask>.")) [{'score': 0.07950365543365479, 'sequence': 'Antonio está pensando en ti.', 'token': 486, 'token_str': ' ti'}, {'score': 0.03375273942947388, 'sequence': 'Antonio está pensando en irse.', 'token': 13134, 'token_str': ' irse'}, {'score': 0.031026942655444145, 'sequence': 'Antonio está pensando en casarse.', 'token': 24852, 'token_str': ' casarse'}, {'score': 0.030703715980052948, 'sequence': 'Antonio está pensando en todo.', 'token': 665, 'token_str': ' todo'}, {'score': 0.02838558703660965, 'sequence': 'Antonio está pensando en ello.', 'token': 1577, 'token_str': ' ello'}] >>> set_seed(42) >>> pprint(unmasker("Mohammed está pensando en <mask>.")) [{'score': 0.05433618649840355, 'sequence': 'Mohammed está pensando en morir.', 'token': 9459, 'token_str': ' morir'}, {'score': 0.0400255024433136, 'sequence': 'Mohammed está pensando en irse.', 'token': 13134, 'token_str': ' irse'}, {'score': 0.03705748915672302, 'sequence': 'Mohammed está pensando en todo.', 'token': 665, 'token_str': ' todo'}, {'score': 0.03658654913306236, 'sequence': 'Mohammed está pensando en quedarse.', 'token': 9331, 'token_str': ' quedarse'}, {'score': 0.03329474478960037, 'sequence': 'Mohammed está pensando en ello.', 'token': 1577, 'token_str': ' ello'}] ``` ## Training ### Training data The [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) crawls all .es domains once a year. The training corpus consists of 59TB of WARC files from these crawls, carried out from 2009 to 2019. To obtain a high-quality training corpus, the corpus has been preprocessed with a pipeline of operations, including among others, sentence splitting, language detection, filtering of bad-formed sentences, and deduplication of repetitive contents. During the process, document boundaries are kept. This resulted in 2TB of Spanish clean corpus. Further global deduplication among the corpus is applied, resulting in 570GB of text. Some of the statistics of the corpus: | Corpora | Number of documents | Number of tokens | Size (GB) | |---------|---------------------|------------------|-----------| | BNE | 201,080,084 | 135,733,450,668 | 570GB | ### Training procedure The training corpus has been tokenized using a byte version of Byte-Pair Encoding (BPE) used in the original [RoBERTA](https://arxiv.org/abs/1907.11692) model with a vocabulary size of 50,262 tokens. The **roberta-base-bne** pre-training consists of a masked language model training, that follows the approach employed for the RoBERTa base. The training lasted a total of 48 hours with 16 computing nodes, each one with 4 NVIDIA V100 GPUs of 16GB VRAM. 
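As a quick, illustrative check of the tokenization described above (byte-level BPE with a 50,262-token vocabulary), the tokenizer can be loaded and inspected on its own:

```python
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained('PlanTL-GOB-ES/roberta-base-bne')
>>> tokenizer.vocab_size  # size of the byte-level BPE vocabulary described above
50262
>>> tokens = tokenizer.tokenize("Gracias a los datos de la BNE se ha podido desarrollar este modelo del lenguaje.")
```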
## Evaluation When fine-tuned on downstream tasks, this model achieves the following results: | Dataset | Metric | [**RoBERTa-base**](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne) | |--------------|----------|------------| | MLDoc | F1 | 0.9664 | | CoNLL-NERC | F1 | 0.8851 | | CAPITEL-NERC | F1 | 0.8960 | | PAWS-X | F1 | 0.9020 | | UD-POS | F1 | 0.9907 | | CAPITEL-POS | F1 | 0.9846 | | SQAC | F1 | 0.7923 | | STS | Combined | 0.8533 | | XNLI | Accuracy | 0.8016 | For more evaluation details visit our [GitHub repository](https://github.com/PlanTL-GOB-ES/lm-spanish) or [paper](http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6405). ## Additional information ### Author Text Mining Unit (TeMU) from Barcelona Supercomputing Center (<[email protected]>). ### Contact information For further information, send an email to <[email protected]>. ### Copyright Copyright by the [Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA)](https://portal.mineco.gob.es/en-us/digitalizacionIA/Pages/sedia.aspx). ### Licensing information This work is licensed under a [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0) ### Funding This work was funded by the [Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA)](https://portal.mineco.gob.es/en-us/digitalizacionIA/Pages/sedia.aspx) within the framework of the Plan-TL. ### Citation information If you use this model, please cite our [paper](http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6405): ``` @article{, title = {MarIA: Spanish Language Models}, author = {Asier Gutiérrez Fandiño and Jordi Armengol Estapé and Marc Pàmies and Joan Llop Palao and Joaquin Silveira Ocampo and Casimiro Pio Carrino and Carme Armentano Oller and Carlos Rodriguez Penagos and Aitor Gonzalez Agirre and Marta Villegas}, doi = {10.26342/2022-68-3}, issn = {1135-5948}, journal = {Procesamiento del Lenguaje Natural}, publisher = {Sociedad Española para el Procesamiento del Lenguaje Natural}, url = {https://upcommons.upc.edu/handle/2117/367156#.YyMTB4X9A-0.mendeley}, volume = {68}, year = {2022}, } ``` ### Disclaimer <details> <summary>Click to expand</summary> The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions. When third parties deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence. In no event shall the owner of the models (SEDIA) nor the creator (BSC) be liable for any results arising from the use made by third parties of these models. Los modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables. Cuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de Inteligencia Artificial. 
En ningún caso el propietario de los modelos (SEDIA) ni el creador (BSC) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos. </details>
{"language": ["es"], "license": "apache-2.0", "tags": ["national library of spain", "spanish", "bne", "roberta-base-bne"], "datasets": ["bne"], "metrics": ["ppl"], "widget": [{"text": "Por la ventanilla del coche vi la Giralda y pens\u00e9 que bonita que es la ciudad de <mask>."}, {"text": "M\u00e1s vale <mask> que lamentar."}, {"text": "Caminante no hay camino, se hace camino al <mask>."}, {"text": "Tengo una pelota roja y otra amarilla. Si le doy la roja a Jose, s\u00f3lo me queda la <mask>."}, {"text": "Tengo una pelota roja y otra amarilla. Si le doy la amarilla a Jose, s\u00f3lo me queda la <mask>."}, {"text": "El <mask> es el pico m\u00e1s alto de Espa\u00f1a."}]}
PlanTL-GOB-ES/roberta-base-bne
null
[ "transformers", "pytorch", "roberta", "fill-mask", "national library of spain", "spanish", "bne", "roberta-base-bne", "es", "dataset:bne", "arxiv:1907.11692", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
fill-mask
transformers
# BERTa: RoBERTa-based Catalan language model ## Table of contents <details> <summary>Click to expand</summary> - [Model description](#model-description) - [Intended uses and limitations](#intended-use) - [How to use](#how-to-use) - [Limitations and bias](#limitations-and-bias) - [Training](#training) - [Evaluation](#evaluation) - [Additional information](#additional-information) - [Author](#author) - [Contact information](#contact-information) - [Copyright](#copyright) - [Licensing information](#licensing-information) - [Funding](#funding) - [Citing information](#citing-information) - [Disclaimer](#disclaimer) </details> ## Model description BERTa is a transformer-based masked language model for the Catalan language. It is based on the [RoBERTA](https://github.com/pytorch/fairseq/tree/master/examples/roberta) base model and has been trained on a medium-size corpus collected from publicly available corpora and crawlers. This model was originally published as [bsc/roberta-base-ca-cased](https://huggingface.co/bsc/roberta-base-ca-cased). ## Intended uses and limitations The model is ready-to-use only for masked language modelling to perform the Fill Mask task (try the inference API or read the next section). However, it is intended to be fine-tuned on non-generative downstream tasks such as Question Answering, Text Classification or Named Entity Recognition. ## How to use ### Load model and tokenizer ``` python from transformers import AutoTokenizer, AutoModelForMaskedLM tokenizer = AutoTokenizer.from_pretrained("PlanTL-GOB-ES/roberta-base-ca-cased") model = AutoModelForMaskedLM.from_pretrained("PlanTL-GOB-ES/roberta-base-ca-cased") ``` ### Fill Mask task Below, an example of how to use the masked language modelling task with a pipeline. ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='PlanTL-GOB-ES/roberta-base-ca-cased') >>> unmasker("Situada a la costa de la mar Mediterrània, <mask> s'assenta en una plana formada " "entre els deltes de les desembocadures dels rius Llobregat, al sud-oest, " "i Besòs, al nord-est, i limitada pel sud-est per la línia de costa," "i pel nord-oest per la serralada de Collserola " "(amb el cim del Tibidabo, 516,2 m, com a punt més alt) que segueix paral·lela " "la línia de costa encaixant la ciutat en un perímetre molt definit.") [ { "sequence": " Situada a la costa de la mar Mediterrània, <mask> s'assenta en una plana formada " "entre els deltes de les desembocadures dels rius Llobregat, al sud-oest, " "i Besòs, al nord-est, i limitada pel sud-est per la línia de costa," "i pel nord-oest per la serralada de Collserola " "(amb el cim del Tibidabo, 516,2 m, com a punt més alt) que segueix paral·lela " "la línia de costa encaixant la ciutat en un perímetre molt definit.", "score": 0.4177263379096985, "token": 734, "token_str": " Barcelona" }, { "sequence": " Situada a la costa de la mar Mediterrània, <mask> s'assenta en una plana formada " "entre els deltes de les desembocadures dels rius Llobregat, al sud-oest, " "i Besòs, al nord-est, i limitada pel sud-est per la línia de costa," "i pel nord-oest per la serralada de Collserola " "(amb el cim del Tibidabo, 516,2 m, com a punt més alt) que segueix paral·lela " "la línia de costa encaixant la ciutat en un perímetre molt definit.", "score": 0.10696165263652802, "token": 3849, "token_str": " Badalona" }, { "sequence": " Situada a la costa de la mar Mediterrània, <mask> s'assenta en una plana formada " "entre els deltes de les desembocadures dels rius Llobregat, al sud-oest, 
" "i Besòs, al nord-est, i limitada pel sud-est per la línia de costa," "i pel nord-oest per la serralada de Collserola " "(amb el cim del Tibidabo, 516,2 m, com a punt més alt) que segueix paral·lela " "la línia de costa encaixant la ciutat en un perímetre molt definit.", "score": 0.08135009557008743, "token": 19349, "token_str": " Collserola" }, { "sequence": " Situada a la costa de la mar Mediterrània, <mask> s'assenta en una plana formada " "entre els deltes de les desembocadures dels rius Llobregat, al sud-oest, " "i Besòs, al nord-est, i limitada pel sud-est per la línia de costa," "i pel nord-oest per la serralada de Collserola " "(amb el cim del Tibidabo, 516,2 m, com a punt més alt) que segueix paral·lela " "la línia de costa encaixant la ciutat en un perímetre molt definit.", "score": 0.07330769300460815, "token": 4974, "token_str": " Terrassa" }, { "sequence": " Situada a la costa de la mar Mediterrània, <mask> s'assenta en una plana formada " "entre els deltes de les desembocadures dels rius Llobregat, al sud-oest, " "i Besòs, al nord-est, i limitada pel sud-est per la línia de costa," "i pel nord-oest per la serralada de Collserola " "(amb el cim del Tibidabo, 516,2 m, com a punt més alt) que segueix paral·lela " "la línia de costa encaixant la ciutat en un perímetre molt definit.", "score": 0.03317456692457199, "token": 14333, "token_str": " Gavà" } ] ``` ## Limitations and bias ## Training ### Training corpora and preprocessing The training corpus consists of several corpora gathered from web crawling and public corpora. The publicly available corpora are: 1. the Catalan part of the [DOGC](http://opus.nlpl.eu/DOGC-v2.php) corpus, a set of documents from the Official Gazette of the Catalan Government 2. the [Catalan Open Subtitles](http://opus.nlpl.eu/download.php?f=OpenSubtitles/v2018/mono/OpenSubtitles.raw.ca.gz), a collection of translated movie subtitles 3. the non-shuffled version of the Catalan part of the [OSCAR](https://traces1.inria.fr/oscar/) corpus \\\\cite{suarez2019asynchronous}, a collection of monolingual corpora, filtered from [Common Crawl](https://commoncrawl.org/about/) 4. The [CaWac](http://nlp.ffzg.hr/resources/corpora/cawac/) corpus, a web corpus of Catalan built from the .cat top-level-domain in late 2013 the non-deduplicated version 5. the [Catalan Wikipedia articles](https://ftp.acc.umu.se/mirror/wikimedia.org/dumps/cawiki/20200801/) downloaded on 18-08-2020. The crawled corpora are: 6. The Catalan General Crawling, obtained by crawling the 500 most popular .cat and .ad domains 7. the Catalan Government Crawling, obtained by crawling the .gencat domain and subdomains, belonging to the Catalan Government 8. the ACN corpus with 220k news items from March 2015 until October 2020, crawled from the [Catalan News Agency](https://www.acn.cat/) To obtain a high-quality training corpus, each corpus have preprocessed with a pipeline of operations, including among the others, sentence splitting, language detection, filtering of bad-formed sentences and deduplication of repetitive contents. During the process, we keep document boundaries are kept. Finally, the corpora are concatenated and further global deduplication among the corpora is applied. The final training corpus consists of about 1,8B tokens. 
### Tokenization and pretraining The training corpus has been tokenized using a byte version of [Byte-Pair Encoding (BPE)](https://github.com/openai/gpt-2) used in the original [RoBERTA](https://github.com/pytorch/fairseq/tree/master/examples/roberta) model with a vocabulary size of 52,000 tokens. The BERTa pretraining consists of a masked language model training that follows the approach employed for the RoBERTa base model with the same hyperparameters as in the original work. The training lasted a total of 48 hours with 16 NVIDIA V100 GPUs of 16GB DDRAM. ## Evaluation ### CLUB benchmark The BERTa model has been fine-tuned on the downstream tasks of the Catalan Language Understanding Evaluation benchmark (CLUB), that has been created along with the model. It contains the following tasks and their related datasets: 1. Part-of-Speech Tagging (POS) Catalan-Ancora: from the [Universal Dependencies treebank](https://github.com/UniversalDependencies/UD_Catalan-AnCora) of the well-known Ancora corpus 2. Named Entity Recognition (NER) **[AnCora Catalan 2.0.0](https://zenodo.org/record/4762031#.YKaFjqGxWUk)**: extracted named entities from the original [Ancora](https://doi.org/10.5281/zenodo.4762030) version, filtering out some unconventional ones, like book titles, and transcribed them into a standard CONLL-IOB format 3. Text Classification (TC) **[TeCla](https://doi.org/10.5281/zenodo.4627197)**: consisting of 137k news pieces from the Catalan News Agency ([ACN](https://www.acn.cat/)) corpus 4. Semantic Textual Similarity (STS) **[Catalan semantic textual similarity](https://doi.org/10.5281/zenodo.4529183)**: consisting of more than 3000 sentence pairs, annotated with the semantic similarity between them, scraped from the [Catalan Textual Corpus](https://doi.org/10.5281/zenodo.4519349) 5. Question Answering (QA): **[ViquiQuAD](https://doi.org/10.5281/zenodo.4562344)**: consisting of more than 15,000 questions outsourced from Catalan Wikipedia randomly chosen from a set of 596 articles that were originally written in Catalan. 
**[XQuAD](https://doi.org/10.5281/zenodo.4526223)**: the Catalan translation of XQuAD, a multilingual collection of manual translations of 1,190 question-answer pairs from English Wikipedia, used only as a _test set_. Here are the train/dev/test splits of the datasets: | Task (Dataset) | Total | Train | Dev | Test | |:--|:--|:--|:--|:--| | NER (Ancora) |13,581 | 10,628 | 1,427 | 1,526 | | POS (Ancora)| 16,678 | 13,123 | 1,709 | 1,846 | | STS | 3,073 | 2,073 | 500 | 500 | | TC (TeCla) | 137,775 | 110,203 | 13,786 | 13,786| | QA (ViquiQuAD) | 14,239 | 11,255 | 1,492 | 1,429 | _The fine-tuning on downstream tasks has been performed with the HuggingFace [**Transformers**](https://github.com/huggingface/transformers) library._ ### Results Below are the evaluation results on the CLUB tasks, compared with the multilingual mBERT and XLM-RoBERTa models and the Catalan WikiBERT-ca model: | Task | NER (F1) | POS (F1) | STS (Pearson) | TC (accuracy) | QA (ViquiQuAD) (F1/EM) | QA (XQuAD) (F1/EM) | | ------------|:-------------:| -----:|:------|:-------|:------|:----| | BERTa | **88.13** | **98.97** | **79.73** | **74.16** | **86.97/72.29** | **68.89/48.87** | | mBERT | 86.38 | 98.82 | 76.34 | 70.56 | 86.97/72.22 | 67.15/46.51 | | XLM-RoBERTa | 87.66 | 98.89 | 75.40 | 71.68 | 85.50/70.47 | 67.10/46.42 | | WikiBERT-ca | 77.66 | 97.60 | 77.18 | 73.22 | 85.45/70.75 | 65.21/36.60 | ## Additional information ### Author Text Mining Unit (TeMU) at the Barcelona Supercomputing Center ([email protected]) ### Contact information For further information, send an email to <[email protected]> ### Copyright Copyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022) ### Licensing information [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0) ### Funding This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL. ### Citing information If you use this model, please cite our latest paper: ```bibtex @inproceedings{armengol-estape-etal-2021-multilingual, title = "Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? {A} Comprehensive Assessment for {C}atalan", author = "Armengol-Estap{\'e}, Jordi and Carrino, Casimiro Pio and Rodriguez-Penagos, Carlos and de Gibert Bonet, Ona and Armentano-Oller, Carme and Gonzalez-Agirre, Aitor and Melero, Maite and Villegas, Marta", booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021", month = aug, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.findings-acl.437", doi = "10.18653/v1/2021.findings-acl.437", pages = "4933--4946", } ``` ### Disclaimer The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions. When third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence. 
In no event shall the owner of the models (SEDIA – State Secretariat for digitalization and artificial intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models. Los modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables. Cuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial. En ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos.
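### Appendix: quick usage sketch

A minimal fill-mask sketch for the pretrained model described above, assuming the standard `transformers` pipeline API; the checkpoint name `PlanTL-GOB-ES/roberta-base-ca` and the example sentence are the ones published with this card.

```python
from transformers import pipeline
from pprint import pprint

# Load the pretrained Catalan masked-language model (BERTa).
unmasker = pipeline("fill-mask", model="PlanTL-GOB-ES/roberta-base-ca")

# Ask the model to fill in the masked token of a Catalan sentence.
pprint(unmasker("Salvador Dalí va viure a <mask>."))
```

Any of the widget sentences listed in the card metadata can be substituted, since they all use the same `<mask>` token.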
{"language": "ca", "license": "apache-2.0", "tags": ["masked-lm", "BERTa", "catalan"], "widget": [{"text": "El Catal\u00e0 \u00e9s una llengua molt <mask>."}, {"text": "Salvador Dal\u00ed va viure a <mask>."}, {"text": "La Costa Brava t\u00e9 les millors <mask> d'Espanya."}, {"text": "El cacaolat \u00e9s un batut de <mask>."}, {"text": "<mask> \u00e9s la capital de la Garrotxa."}, {"text": "Vaig al <mask> a buscar bolets."}, {"text": "Antoni Gaud\u00ed vas ser un <mask> molt important per la ciutat."}, {"text": "Catalunya \u00e9s una refer\u00e8ncia en <mask> a nivell europeu."}]}
PlanTL-GOB-ES/roberta-base-ca
null
[ "transformers", "pytorch", "roberta", "fill-mask", "masked-lm", "BERTa", "catalan", "ca", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
token-classification
transformers
# Spanish RoBERTa-large trained on BNE finetuned for CAPITEL Named Entity Recognition (NER) dataset. ## Table of contents <details> <summary>Click to expand</summary> - [Model description](#model-description) - [Intended uses and limitations](#intended-use) - [How to use](#how-to-use) - [Limitations and bias](#limitations-and-bias) - [Training](#training) - [Training](#training) - [Training data](#training-data) - [Training procedure](#training-procedure) - [Evaluation](#evaluation) - [Evaluation](#evaluation) - [Variable and metrics](#variable-and-metrics) - [Evaluation results](#evaluation-results) - [Additional information](#additional-information) - [Author](#author) - [Contact information](#contact-information) - [Copyright](#copyright) - [Licensing information](#licensing-information) - [Funding](#funding) - [Citing information](#citing-information) - [Disclaimer](#disclaimer) </details> ## Model description The **roberta-large-bne-capitel-ner** is a Named Entity Recognition (NER) model for the Spanish language fine-tuned from the [roberta-large-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-large-bne) model, a [RoBERTa](https://arxiv.org/abs/1907.11692) large model pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text, processed for this work, compiled from the web crawlings performed by the [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) from 2009 to 2019. ## Intended uses and limitations **roberta-large-bne-capitel-ner** model can be used to recognize Named Entities (NE). The model is limited by its training dataset and may not generalize well for all use cases. ## How to use ```python from transformers import pipeline from pprint import pprint nlp = pipeline("ner", model="PlanTL-GOB-ES/roberta-large-bne-capitel-ner") example = "Me llamo Francisco Javier y vivo en Madrid." ner_results = nlp(example) pprint(ner_results) ``` ## Limitations and bias At the time of submission, no measures have been taken to estimate the bias embedded in the model. However, we are well aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated. ## Training The dataset used is the one from the [CAPITEL competition at IberLEF 2020](https://sites.google.com/view/capitel2020) (sub-task 1). ### Training procedure The model was trained with a batch size of 32 and a learning rate of 3e-5 for 5 epochs. We then selected the best checkpoint using the downstream task metric in the corresponding development set and then evaluated it on the test set. ## Evaluation ### Variable and metrics This model was finetuned maximizing F1 score. ## Evaluation results We evaluated the **roberta-large-bne-capitel-ner** on the CAPITEL-NERC test set against standard multilingual and monolingual baselines: | Model | CAPITEL-NERC (F1) | | ------------|:----| | roberta-large-bne-capitel-ner | **90.51** | | roberta-base-bne-capitel-ner | 89.60| | BETO | 87.72 | | mBERT | 88.10 | | BERTIN | 88.56 | | ELECTRA | 80.35 | For more details, check the fine-tuning and evaluation scripts in the official [GitHub repository](https://github.com/PlanTL-GOB-ES/lm-spanish). 
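The pipeline call above returns one prediction per sub-word token. A hedged variant that groups sub-words into whole entity spans is sketched below; the `aggregation_strategy` value mirrors the one declared in this card's inference metadata, and the example sentence is taken from the card's widget.

```python
from transformers import pipeline
from pprint import pprint

# Group sub-word predictions into complete entity spans.
ner = pipeline(
    "ner",
    model="PlanTL-GOB-ES/roberta-large-bne-capitel-ner",
    aggregation_strategy="first",
)

example = "Mi hermano Ramón y su mejor amigo Luis trabajan en el BSC."
pprint(ner(example))
# Each item has the shape: {'entity_group': ..., 'score': ..., 'word': ..., 'start': ..., 'end': ...}
```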
## Additional information ### Author Text Mining Unit (TeMU) at the Barcelona Supercomputing Center ([email protected]) ### Contact information For further information, send an email to <[email protected]> ### Copyright Copyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022) ### Licensing information [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0) ### Funding This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL. ## Citing information If you use this model, please cite our [paper](http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6405): ``` @article{, abstract = {We want to thank the National Library of Spain for such a large effort on the data gathering and the Future of Computing Center, a Barcelona Supercomputing Center and IBM initiative (2020). This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.}, author = {Asier Gutiérrez Fandiño and Jordi Armengol Estapé and Marc Pàmies and Joan Llop Palao and Joaquin Silveira Ocampo and Casimiro Pio Carrino and Carme Armentano Oller and Carlos Rodriguez Penagos and Aitor Gonzalez Agirre and Marta Villegas}, doi = {10.26342/2022-68-3}, issn = {1135-5948}, journal = {Procesamiento del Lenguaje Natural}, keywords = {Artificial intelligence,Benchmarking,Data processing.,MarIA,Natural language processing,Spanish language modelling,Spanish language resources,Tractament del llenguatge natural (Informàtica),Àrees temàtiques de la UPC::Informàtica::Intel·ligència artificial::Llenguatge natural}, publisher = {Sociedad Española para el Procesamiento del Lenguaje Natural}, title = {MarIA: Spanish Language Models}, volume = {68}, url = {https://upcommons.upc.edu/handle/2117/367156#.YyMTB4X9A-0.mendeley}, year = {2022}, } ``` ### Disclaimer The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions. When third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence. In no event shall the owner of the models (SEDIA – State Secretariat for digitalization and artificial intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models. Los modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables. Cuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial. 
En ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos.
{"language": ["es"], "license": "apache-2.0", "tags": ["national library of spain", "spanish", "bne", "capitel", "ner"], "datasets": ["bne", "capitel"], "metrics": ["f1"], "inference": {"parameters": {"aggregation_strategy": "first"}}, "widget": ["Me llamo Francisco Javier y vivo en Madrid.", "Mi hermano Ram\u00f3n y su mejor amigo Luis trabajan en el BSC."], "model-index": [{"name": "roberta-large-bne-capiter-ner", "results": [{"task": {"type": "token-classification"}, "dataset": {"name": "CAPITEL-NERC", "type": "ner"}, "metrics": [{"type": "f1", "value": 0.9051, "name": "F1"}]}]}]}
PlanTL-GOB-ES/roberta-large-bne-capitel-ner
null
[ "transformers", "pytorch", "roberta", "token-classification", "national library of spain", "spanish", "bne", "capitel", "ner", "es", "dataset:bne", "dataset:capitel", "arxiv:1907.11692", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
token-classification
transformers
# Spanish RoBERTa-large trained on BNE finetuned for CAPITEL Part of Speech (POS) dataset ## Table of contents <details> <summary>Click to expand</summary> - [Model description](#model-description) - [Intended uses and limitations](#intended-use) - [How to use](#how-to-use) - [Limitations and bias](#limitations-and-bias) - [Training](#training) - [Training](#training) - [Training data](#training-data) - [Training procedure](#training-procedure) - [Evaluation](#evaluation) - [Evaluation](#evaluation) - [Variable and metrics](#variable-and-metrics) - [Evaluation results](#evaluation-results) - [Additional information](#additional-information) - [Author](#author) - [Contact information](#contact-information) - [Copyright](#copyright) - [Licensing information](#licensing-information) - [Funding](#funding) - [Citing information](#citing-information) - [Disclaimer](#disclaimer) </details> ## Model description The **roberta-large-bne-capitel-pos** is a Part-of-speech-tagging (POS) model for the Spanish language fine-tuned from the [roberta-large-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-large-bne) model, a [RoBERTa](https://arxiv.org/abs/1907.11692) large model pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text, processed for this work, compiled from the web crawlings performed by the [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) from 2009 to 2019. # Intended uses and limitations **roberta-large-bne-capitel-pos** model can be used to Part-of-speech-tagging (POS) a text. The model is limited by its training dataset and may not generalize well for all use cases. ## How to use Here is how to use this model: ```python from transformers import pipeline from pprint import pprint nlp = pipeline("token-classification", model="PlanTL-GOB-ES/roberta-large-bne-capitel-pos") example = "El alcalde de Vigo, Abel Caballero, ha comenzado a colocar las luces de Navidad en agosto." pos_results = nlp(example) pprint(pos_results) ``` ## Limitations and bias At the time of submission, no measures have been taken to estimate the bias embedded in the model. However, we are well aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated. ## Training The dataset used is the one from the [CAPITEL competition at IberLEF 2020](https://sites.google.com/view/capitel2020) (sub-task 2). ### Training procedure The model was trained with a batch size of 16 and a learning rate of 3e-5 for 5 epochs. We then selected the best checkpoint using the downstream task metric in the corresponding development set and then evaluated it on the test set. ## Evaluation ### Variable and metrics This model was finetuned maximizing F1 score. ## Evaluation results We evaluated the **roberta-large-bne-capitel-pos** on the CAPITEL-POS test set against standard multilingual and monolingual baselines: | Model | CAPITEL-POS (F1) | | ------------|:----| | roberta-large-bne-capitel-pos | **98.56** | | roberta-base-bne-capitel-pos | 98.46 | | BETO | 98.36 | | mBERT | 98.39 | | BERTIN | 98.47 | | ELECTRA | 98.16 | For more details, check the fine-tuning and evaluation scripts in the official [GitHub repository](https://github.com/PlanTL-GOB-ES/lm-spanish). 
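As a complement to the pipeline call above, the sketch below runs the fine-tuned model directly and reads the predicted tag for each token; it is standard `transformers` usage, with the label names taken from the model's own configuration.

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

model_id = "PlanTL-GOB-ES/roberta-large-bne-capitel-pos"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

text = "El alcalde de Vigo, Abel Caballero, ha comenzado a colocar las luces de Navidad en agosto."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Map the highest-scoring class of each (sub-word) token to its POS label.
# Note that special tokens such as <s> and </s> also receive a (meaningless) prediction.
predictions = logits.argmax(dim=-1)[0]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, pred in zip(tokens, predictions):
    print(token, model.config.id2label[pred.item()])
```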
## Additional information ### Author Text Mining Unit (TeMU) at the Barcelona Supercomputing Center ([email protected]) ### Contact information For further information, send an email to <[email protected]> ### Copyright Copyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022) ### Licensing information [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0) ### Funding This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL. ### Citing information If you use this model, please cite our [paper](http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6405): ``` @article{, abstract = {We want to thank the National Library of Spain for such a large effort on the data gathering and the Future of Computing Center, a Barcelona Supercomputing Center and IBM initiative (2020). This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.}, author = {Asier Gutiérrez Fandiño and Jordi Armengol Estapé and Marc Pàmies and Joan Llop Palao and Joaquin Silveira Ocampo and Casimiro Pio Carrino and Carme Armentano Oller and Carlos Rodriguez Penagos and Aitor Gonzalez Agirre and Marta Villegas}, doi = {10.26342/2022-68-3}, issn = {1135-5948}, journal = {Procesamiento del Lenguaje Natural}, keywords = {Artificial intelligence,Benchmarking,Data processing.,MarIA,Natural language processing,Spanish language modelling,Spanish language resources,Tractament del llenguatge natural (Informàtica),Àrees temàtiques de la UPC::Informàtica::Intel·ligència artificial::Llenguatge natural}, publisher = {Sociedad Española para el Procesamiento del Lenguaje Natural}, title = {MarIA: Spanish Language Models}, volume = {68}, url = {https://upcommons.upc.edu/handle/2117/367156#.YyMTB4X9A-0.mendeley}, year = {2022}, } ``` ### Disclaimer The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions. When third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence. In no event shall the owner of the models (SEDIA – State Secretariat for digitalization and artificial intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models. Los modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables. Cuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial. 
En ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos.
{"language": ["es"], "license": "apache-2.0", "tags": ["national library of spain", "spanish", "bne", "capitel", "pos"], "datasets": ["bne", "capitel"], "metrics": ["f1"], "inference": {"parameters": {"aggregation_strategy": "first"}}, "widget": [{"text": "Festival de San Sebasti\u00e1n: Johnny Depp recibir\u00e1 el premio Donostia en pleno rifirrafe judicial con Amber Heard"}, {"text": "El alcalde de Vigo, Abel Caballero, ha comenzado a colocar las luces de Navidad en agosto."}, {"text": "Gracias a los datos de la BNE, se ha podido lograr este modelo del lenguaje."}], "model-index": [{"name": "roberta-large-bne-capiter-pos", "results": [{"task": {"type": "token-classification"}, "dataset": {"name": "CAPITEL-POS", "type": "pos"}, "metrics": [{"type": "f1", "value": 0.986, "name": "F1"}]}]}]}
PlanTL-GOB-ES/roberta-large-bne-capitel-pos
null
[ "transformers", "pytorch", "roberta", "token-classification", "national library of spain", "spanish", "bne", "capitel", "pos", "es", "dataset:bne", "dataset:capitel", "arxiv:1907.11692", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
question-answering
transformers
# Spanish RoBERTa-large trained on BNE finetuned for Spanish Question Answering Corpus (SQAC) dataset. ## Table of contents <details> <summary>Click to expand</summary> - [Model description](#model-description) - [Intended uses and limitations](#intended-use) - [How to use](#how-to-use) - [Limitations and bias](#limitations-and-bias) - [Training](#training) - [Training](#training) - [Training data](#training-data) - [Training procedure](#training-procedure) - [Evaluation](#evaluation) - [Evaluation](#evaluation) - [Variable and metrics](#variable-and-metrics) - [Evaluation results](#evaluation-results) - [Additional information](#additional-information) - [Author](#author) - [Contact information](#contact-information) - [Copyright](#copyright) - [Licensing information](#licensing-information) - [Funding](#funding) - [Citing information](#citing-information) - [Disclaimer](#disclaimer) </details> ## Model description The **roberta-large-bne-sqac** is a Question Answering (QA) model for the Spanish language fine-tuned from the [roberta-large-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-large-bne) model, a [RoBERTa](https://arxiv.org/abs/1907.11692) large model pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text, processed for this work, compiled from the web crawlings performed by the [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) from 2009 to 2019. ## Intended uses and limitations **roberta-large-bne-sqac** model can be used for extractive question answering. The model is limited by its training dataset and may not generalize well for all use cases. ## How to use ```python from transformers import pipeline nlp = pipeline("question-answering", model="PlanTL-GOB-ES/roberta-large-bne-sqac") text = "¿Dónde vivo?" context = "Me llamo Wolfgang y vivo en Berlin" qa_results = nlp(text, context) print(qa_results) ``` ## Limitations and bias At the time of submission, no measures have been taken to estimate the bias embedded in the model. However, we are well aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated. ## Training ### Training data We used the QA dataset in Spanish called [SQAC corpus](https://huggingface.co/datasets/PlanTL-GOB-ES/SQAC) for training and evaluation. ### Training procedure The model was trained with a batch size of 16 and a learning rate of 1e-5 for 5 epochs. We then selected the best checkpoint using the downstream task metric in the corresponding development set and then evaluated it on the test set. ## Evaluation results We evaluated the **roberta-large-bne-sqac** on the SQAC test set against standard multilingual and monolingual baselines: | Model | SQAC (F1) | | ------------|:----| | roberta-large-bne-sqac | **82.02** | | roberta-base-bne-sqac | 79.23| | BETO | 79.23 | | mBERT | 75.62 | | BERTIN | 76.78 | | ELECTRA | 73.83 | For more details, check the fine-tuning and evaluation scripts in the official [GitHub repository](https://github.com/PlanTL-GOB-ES/lm-spanish). 
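For readers who prefer to bypass the pipeline, the sketch below extracts the answer span from the model's start/end logits directly; it is standard `transformers` usage, and the question/context pair is the same one used above.

```python
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

model_id = "PlanTL-GOB-ES/roberta-large-bne-sqac"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForQuestionAnswering.from_pretrained(model_id)

question = "¿Dónde vivo?"
context = "Me llamo Wolfgang y vivo en Berlin"
inputs = tokenizer(question, context, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# The answer span is delimited by the most likely start and end positions.
start = outputs.start_logits.argmax()
end = outputs.end_logits.argmax()
answer_ids = inputs["input_ids"][0][start : end + 1]
print(tokenizer.decode(answer_ids))
```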
## Additional information ### Author Text Mining Unit (TeMU) at the Barcelona Supercomputing Center ([email protected]) ### Contact information For further information, send an email to <[email protected]> ### Copyright Copyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022) ### Licensing information [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0) ### Funding This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL. ### Citing information If you use this model, please cite our [paper](http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6405): ``` @article{, abstract = {We want to thank the National Library of Spain for such a large effort on the data gathering and the Future of Computing Center, a Barcelona Supercomputing Center and IBM initiative (2020). This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.}, author = {Asier Gutiérrez Fandiño and Jordi Armengol Estapé and Marc Pàmies and Joan Llop Palao and Joaquin Silveira Ocampo and Casimiro Pio Carrino and Carme Armentano Oller and Carlos Rodriguez Penagos and Aitor Gonzalez Agirre and Marta Villegas}, doi = {10.26342/2022-68-3}, issn = {1135-5948}, journal = {Procesamiento del Lenguaje Natural}, keywords = {Artificial intelligence,Benchmarking,Data processing.,MarIA,Natural language processing,Spanish language modelling,Spanish language resources,Tractament del llenguatge natural (Informàtica),Àrees temàtiques de la UPC::Informàtica::Intel·ligència artificial::Llenguatge natural}, publisher = {Sociedad Española para el Procesamiento del Lenguaje Natural}, title = {MarIA: Spanish Language Models}, volume = {68}, url = {https://upcommons.upc.edu/handle/2117/367156#.YyMTB4X9A-0.mendeley}, year = {2022}, } ``` ### Disclaimer The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions. When third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence. In no event shall the owner of the models (SEDIA – State Secretariat for digitalization and artificial intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models. Los modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables. Cuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial. 
En ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos.
{"language": ["es"], "license": "apache-2.0", "tags": ["national library of spain", "spanish", "bne", "qa", "question answering"], "datasets": ["PlanTL-GOB-ES/SQAC"], "metrics": ["f1", "exact match"], "model-index": [{"name": "roberta-large-bne-sqac", "results": [{"task": {"type": "question-answering"}, "dataset": {"name": "SQAC", "type": "PlanTL-GOB-ES/SQAC"}, "metrics": [{"type": "f1", "value": 0.8202, "name": "F1"}]}]}]}
PlanTL-GOB-ES/roberta-large-bne-sqac
null
[ "transformers", "pytorch", "roberta", "question-answering", "national library of spain", "spanish", "bne", "qa", "question answering", "es", "dataset:PlanTL-GOB-ES/SQAC", "arxiv:1907.11692", "license:apache-2.0", "model-index", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
fill-mask
transformers
# RoBERTa large trained with data from the National Library of Spain (BNE) ## Table of Contents <details> <summary>Click to expand</summary> - [Overview](#overview) - [Model description](#model-description) - [Intended uses and limitations](#intended-uses-and-limitations) - [How to use](#how-to-use) - [Limitations and bias](#limitations-and-bias) - [Training](#training) - [Training data](#training-data) - [Training procedure](#training-procedure) - [Evaluation](#evaluation) - [Additional information](#additional-information) - [Author](#author) - [Contact information](#contact-information) - [Copyright](#copyright) - [Licensing information](#licensing-information) - [Funding](#funding) - [Citation Information](#citation-information) - [Disclaimer](#disclaimer) </details> ## Overview - **Architecture:** roberta-large - **Language:** Spanish - **Task:** fill-mask - **Data:** BNE ## Model description The **roberta-large-bne** is a transformer-based masked language model for the Spanish language. It is based on the [RoBERTa](https://arxiv.org/abs/1907.11692) large model and has been pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text processed for this work, compiled from the web crawlings performed by the [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) from 2009 to 2019. ## Intended uses and limitations The **roberta-large-bne** model is ready-to-use only for masked language modeling to perform the Fill Mask task (try the inference API or read the next section). However, it is intended to be fine-tuned on non-generative downstream tasks such as Question Answering, Text Classification, or Named Entity Recognition. You can use the raw model for fill mask or fine-tune it to a downstream task. ## How to use Here is how to use this model: ```python >>> from transformers import pipeline >>> from pprint import pprint >>> unmasker = pipeline('fill-mask', model='PlanTL-GOB-ES/roberta-large-bne') >>> pprint(unmasker("Gracias a los datos de la BNE se ha podido <mask> este modelo del lenguaje.")) [{'score': 0.0664491355419159, 'sequence': ' Gracias a los datos de la BNE se ha podido conocer este modelo del lenguaje.', 'token': 1910, 'token_str': ' conocer'}, {'score': 0.0492338091135025, 'sequence': ' Gracias a los datos de la BNE se ha podido realizar este modelo del lenguaje.', 'token': 2178, 'token_str': ' realizar'}, {'score': 0.03890657424926758, 'sequence': ' Gracias a los datos de la BNE se ha podido reconstruir este modelo del lenguaje.', 'token': 23368, 'token_str': ' reconstruir'}, {'score': 0.03662774711847305, 'sequence': ' Gracias a los datos de la BNE se ha podido desarrollar este modelo del lenguaje.', 'token': 3815, 'token_str': ' desarrollar'}, {'score': 0.030557377263903618, 'sequence': ' Gracias a los datos de la BNE se ha podido estudiar este modelo del lenguaje.', 'token': 6361, 'token_str': ' estudiar'}] ``` Here is how to use this model to get the features of a given text in PyTorch: ```python >>> from transformers import RobertaTokenizer, RobertaModel >>> tokenizer = RobertaTokenizer.from_pretrained('PlanTL-GOB-ES/roberta-large-bne') >>> model = RobertaModel.from_pretrained('PlanTL-GOB-ES/roberta-large-bne') >>> text = "Gracias a los datos de la BNE se ha podido desarrollar este modelo del lenguaje." 
>>> encoded_input = tokenizer(text, return_tensors='pt') >>> output = model(**encoded_input) >>> print(output.last_hidden_state.shape) torch.Size([1, 19, 1024]) ``` ## Limitations and bias At the time of submission, no measures have been taken to estimate the bias and toxicity embedded in the model. However, we are well aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated. ## Training ### Training data The [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) crawls all .es domains once a year. The training corpus consists of 59TB of WARC files from these crawls, carried out from 2009 to 2019. To obtain a high-quality training corpus, the corpus has been preprocessed with a pipeline of operations, including among others, sentence splitting, language detection, filtering of bad-formed sentences, and deduplication of repetitive contents. During the process, document boundaries are kept. This resulted in 2TB of Spanish clean corpus. Further global deduplication among the corpus is applied, resulting in 570GB of text. Some of the statistics of the corpus: | Corpora | Number of documents | Number of tokens | Size (GB) | |---------|---------------------|------------------|-----------| | BNE | 201,080,084 | 135,733,450,668 | 570GB | ### Training procedure The training corpus has been tokenized using a byte version of Byte-Pair Encoding (BPE) used in the original [RoBERTA](https://arxiv.org/abs/1907.11692) model with a vocabulary size of 50,262 tokens. The **roberta-large-bne** pre-training consists of a masked language model training, that follows the approach employed for the RoBERTa large. The training lasted a total of 96 hours with 32 computing nodes each one with 4 NVIDIA V100 GPUs of 16GB VRAM. ## Evaluation When fine-tuned on downstream tasks, this model achieves the following results: | Dataset | Metric | [**RoBERTa-large**](https://huggingface.co/PlanTL-GOB-ES/roberta-large-bne) | |--------------|----------|------------| | MLDoc | F1 | 0.9702 | | CoNLL-NERC | F1 | 0.8823 | | CAPITEL-NERC | F1 | 0.9051 | | PAWS-X | F1 | 0.9150 | | UD-POS | F1 | 0.9904 | | CAPITEL-POS | F1 | 0.9856 | | SQAC | F1 | 0.8202 | | STS | Combined | 0.8411 | | XNLI | Accuracy | 0.8263 | For more evaluation details visit our [GitHub repository](https://github.com/PlanTL-GOB-ES/lm-spanish) or [paper](http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6405). ## Additional information ### Author Text Mining Unit (TeMU) at the Barcelona Supercomputing Center ([email protected]) ### Contact information For further information, send an email to <[email protected]> ### Copyright Copyright by the [Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA)](https://portal.mineco.gob.es/en-us/digitalizacionIA/Pages/sedia.aspx) (2022) ### Licensing information This work is licensed under a [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0) ### Funding This work was funded by the [Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA)](https://portal.mineco.gob.es/en-us/digitalizacionIA/Pages/sedia.aspx) within the framework of the Plan-TL. 
### Citation information If you use this model, please cite our [paper](http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6405): ``` @article{, abstract = {We want to thank the National Library of Spain for such a large effort on the data gathering and the Future of Computing Center, a Barcelona Supercomputing Center and IBM initiative (2020). This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.}, author = {Asier Gutiérrez Fandiño and Jordi Armengol Estapé and Marc Pàmies and Joan Llop Palao and Joaquin Silveira Ocampo and Casimiro Pio Carrino and Carme Armentano Oller and Carlos Rodriguez Penagos and Aitor Gonzalez Agirre and Marta Villegas}, doi = {10.26342/2022-68-3}, issn = {1135-5948}, journal = {Procesamiento del Lenguaje Natural}, keywords = {Artificial intelligence,Benchmarking,Data processing.,MarIA,Natural language processing,Spanish language modelling,Spanish language resources,Tractament del llenguatge natural (Informàtica),Àrees temàtiques de la UPC::Informàtica::Intel·ligència artificial::Llenguatge natural}, publisher = {Sociedad Española para el Procesamiento del Lenguaje Natural}, title = {MarIA: Spanish Language Models}, volume = {68}, url = {https://upcommons.upc.edu/handle/2117/367156#.YyMTB4X9A-0.mendeley}, year = {2022}, } ``` ### Disclaimer <details> <summary>Click to expand</summary> The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions. When third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence. In no event shall the owner of the models (SEDIA – State Secretariat for Digitalization and Artificial Intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models. Los modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables. Cuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial. En ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos. </details>
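### Appendix: fine-tuning sketch

Since the card states that this checkpoint is intended to be fine-tuned on non-generative downstream tasks, a deliberately minimal text-classification fine-tuning sketch follows. It is an illustration only: the CSV file names, number of labels, and hyperparameters are assumptions, not the settings used for the results reported above.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

model_id = "PlanTL-GOB-ES/roberta-large-bne"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=2)  # num_labels is an assumption

# Hypothetical CSV files with "text" and "label" columns -- replace with your own task data.
dataset = load_dataset("csv", data_files={"train": "train.csv", "validation": "dev.csv"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

dataset = dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="roberta-large-bne-finetuned",  # illustrative
    learning_rate=1e-5,                        # illustrative
    per_device_train_batch_size=16,
    num_train_epochs=5,
    evaluation_strategy="epoch",
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
    tokenizer=tokenizer,  # enables dynamic padding through the default data collator
)
trainer.train()
```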
{"language": ["es"], "license": "apache-2.0", "tags": ["national library of spain", "spanish", "bne", "roberta-large-bne"], "datasets": ["bne"], "metrics": ["ppl"], "widget": [{"text": "Por la ventanilla del coche vi la Giralda y pens\u00e9 que bonita que es la ciudad de <mask>."}, {"text": "M\u00e1s vale <mask> que lamentar."}, {"text": "Caminante no hay camino, se hace camino al <mask>."}, {"text": "Tengo una pelota roja y otra amarilla. Si le doy la roja a Jose, s\u00f3lo me queda la <mask>."}, {"text": "Tengo una pelota roja y otra amarilla. Si le doy la amarilla a Jose, s\u00f3lo me queda la <mask>."}, {"text": "El <mask> es el pico m\u00e1s alto de Espa\u00f1a."}]}
PlanTL-GOB-ES/roberta-large-bne
null
[ "transformers", "pytorch", "roberta", "fill-mask", "national library of spain", "spanish", "bne", "roberta-large-bne", "es", "dataset:bne", "arxiv:1907.11692", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# Homer DialoGPT Model
{"tags": ["conversational"]}
Plencers/DialoGPT-small-homer
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Plim/language_model_fr
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
automatic-speech-recognition
transformers
## Model description This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - FR dataset. ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 4.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 0.9827 | 0.29 | 1000 | inf | 0.2937 | | 1.0203 | 0.57 | 2000 | inf | 0.2711 | | 1.0048 | 0.86 | 3000 | inf | 0.2620 | | 0.9858 | 1.15 | 4000 | inf | 0.2522 | | 0.9709 | 1.43 | 5000 | inf | 0.2365 | | 0.9347 | 1.72 | 6000 | inf | 0.2332 | | 0.9256 | 2.01 | 7000 | inf | 0.2261 | | 0.8936 | 2.29 | 8000 | inf | 0.2203 | | 0.877 | 2.58 | 9000 | inf | 0.2096 | | 0.8393 | 2.87 | 10000 | inf | 0.2017 | | 0.8156 | 3.15 | 11000 | inf | 0.1936 | | 0.8015 | 3.44 | 12000 | inf | 0.1880 | | 0.774 | 3.73 | 13000 | inf | 0.1834 | It achieves the best result on the validation set on STEP 13000: - Wer: 0.1834 Some problem occurs when calculating the validation loss. ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.3.dev0 - Tokenizers 0.11.0 ### Evaluation Commands 1. To evaluate on `mozilla-foundation/common_voice_8` with split `test` ```bash python eval.py --model_id Plim/xls-r-1b-cv_8-fr --dataset mozilla-foundation/common_voice_8_0 --config fr --split test ``` 2. To evaluate on `speech-recognition-community-v2/dev_data` ```bash python eval.py --model_id Plim/xls-r-1b-cv_8-fr --dataset speech-recognition-community-v2/dev_data --config fr --split validation --chunk_length_s 5.0 --stride_length_s 1.0 ```
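### Computing WER/CER locally (sketch)

The tables above report word error rate (WER) and character error rate (CER). A minimal sketch of how such scores can be computed from reference transcripts and model outputs is given below; it uses the `jiwer` package, which is an assumption — the evaluation script referenced above may compute the metrics differently.

```python
import jiwer

# Illustrative ground-truth transcripts and model outputs.
references = ["bonjour tout le monde", "il fait beau aujourd'hui"]
predictions = ["bonjour tout le monde", "il fait beaux aujourd'hui"]

wer = jiwer.wer(references, predictions)
cer = jiwer.cer(references, predictions)
print(f"WER: {wer:.2%}  CER: {cer:.2%}")
```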
{"language": ["fr"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer"], "model-index": [{"name": "XLS-R-1B - French", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "fr"}, "metrics": [{"type": "wer", "value": 18.33, "name": "Test WER"}, {"type": "cer", "value": 5.6, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "fr"}, "metrics": [{"type": "wer", "value": 60.25, "name": "Test WER"}, {"type": "cer", "value": 15.68, "name": "Test CER"}]}]}]}
Plim/test_lm
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "fr", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
automatic-speech-recognition
transformers
## Model description This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - FR dataset. ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 6.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 0.9827 | 0.29 | 1000 | inf | 0.2937 | | 1.0203 | 0.57 | 2000 | inf | 0.2711 | | 1.0048 | 0.86 | 3000 | inf | 0.2620 | | 0.9858 | 1.15 | 4000 | inf | 0.2522 | | 0.9709 | 1.43 | 5000 | inf | 0.2365 | | 0.9347 | 1.72 | 6000 | inf | 0.2332 | | 0.9256 | 2.01 | 7000 | inf | 0.2261 | | 0.8936 | 2.29 | 8000 | inf | 0.2203 | | 0.877 | 2.58 | 9000 | inf | 0.2096 | | 0.8393 | 2.87 | 10000 | inf | 0.2017 | | 0.8156 | 3.15 | 11000 | inf | 0.1936 | | 0.8015 | 3.44 | 12000 | inf | 0.1880 | | 0.774 | 3.73 | 13000 | inf | 0.1834 | | 0.8372 | 4.01 | 14000 | inf | 0.1934 | | 0.8075 | 4.3 | 15000 | inf | 0.1923 | | 0.8069 | 4.59 | 16000 | inf | 0.1877 | | 0.8064 | 4.87 | 17000 | inf | 0.1955 | | 0.801 | 5.16 | 18000 | inf | 0.1891 | | 0.8022 | 5.45 | 19000 | inf | 0.1895 | | 0.792 | 5.73 | 20000 | inf | 0.1854 | It achieves the best result on the validation set on STEP 13000: - Wer: 0.1834 Some problem occurs when calculating the validation loss. ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.3.dev0 - Tokenizers 0.11.0 ### Evaluation Commands 1. To evaluate on `mozilla-foundation/common_voice_8` with split `test` ```bash python eval.py --model_id Plim/xls-r-1b-cv_8-fr --dataset mozilla-foundation/common_voice_8_0 --config fr --split test ``` 2. To evaluate on `speech-recognition-community-v2/dev_data` ```bash python eval.py --model_id Plim/xls-r-1b-cv_8-fr --dataset speech-recognition-community-v2/dev_data --config fr --split validation --chunk_length_s 5.0 --stride_length_s 1.0 ``` ### Evaluation Results Without LM: | Dataset | WER | CER | |:----------:|:-----:|:-----:| | TEST CV | 18.33 | 5.60 | | DEV audio | 31.33 | 13.20 | | TEST audio | / | / | With LM: | Dataset | WER | CER | |:----------:|:-----:|:-----:| | TEST CV | 15.40 | 5.36 | | DEV audio | 25.05 | 12.45 | | TEST audio | / | / |
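### Inference example (sketch)

Beyond the evaluation commands above, a minimal transcription sketch using the `automatic-speech-recognition` pipeline; the audio path is a placeholder, 16 kHz mono audio is assumed, and decoding audio files through the pipeline requires ffmpeg.

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Plim/xls-r-1b-cv_8-fr")

# chunk_length_s / stride_length_s mirror the values used in the dev-data evaluation command above.
result = asr("path/to/audio.wav", chunk_length_s=5.0, stride_length_s=1.0)
print(result["text"])
```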
{"language": ["fr"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "robust-speech-event", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "XLS-R-1B - French", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "fr"}, "metrics": [{"type": "wer", "value": 15.4, "name": "Test WER (with LM)"}, {"type": "cer", "value": 5.36, "name": "Test CER (with LM)"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "fr"}, "metrics": [{"type": "wer", "value": 25.05, "name": "Test WER (with LM)"}, {"type": "cer", "value": 12.45, "name": "Test CER (with LM)"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "fr"}, "metrics": [{"type": "wer", "value": 27.1, "name": "Test WER"}]}]}]}
Plim/xls-r-1b-cv_8-fr
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "robust-speech-event", "hf-asr-leaderboard", "fr", "dataset:mozilla-foundation/common_voice_8_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - FR dataset. It achieves the following results on the evaluation set: - Loss: 0.2464 - Wer: 0.2220 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 5.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 1.0326 | 0.32 | 1000 | 0.3092 | 0.2718 | | 1.0828 | 0.65 | 2000 | 0.2843 | 0.2606 | | 1.0771 | 0.97 | 3000 | 0.2774 | 0.2488 | | 1.0306 | 1.3 | 4000 | 0.2588 | 0.2351 | | 1.0052 | 1.62 | 5000 | 0.2483 | 0.2284 | | 0.9865 | 1.94 | 6000 | 0.2464 | 0.2220 | | 0.978 | 2.27 | 7000 | 0.2514 | 0.2172 | | 1.7438 | 2.59 | 8000 | 0.7983 | 0.5072 | | 2.3309 | 2.92 | 9000 | 1.8917 | 0.9416 | | 2.1834 | 3.24 | 10000 | 1.7496 | 0.9030 | | 2.3047 | 3.56 | 11000 | 1.5377 | 0.8747 | | 2.1378 | 3.89 | 12000 | 1.3501 | 0.7923 | | 1.9812 | 4.21 | 13000 | 1.2662 | 0.7697 | | 2.6855 | 4.54 | 14000 | 2.4120 | 0.9902 | | 2.7482 | 4.86 | 15000 | 2.5341 | 0.9874 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2.dev0 - Tokenizers 0.11.0
{"language": ["fr"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer"], "model-index": [{"name": "", "results": []}]}
Plim/xls-r-1b-fr
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer", "fr", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
automatic-speech-recognition
transformers
## Model description This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - FR dataset. ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 5.0 (extended to 7.0 with training with checkpoint) - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 2.9114 | 0.29 | 1000 | inf | 0.9997 | | 1.2436 | 0.57 | 2000 | inf | 0.4310 | | 1.0552 | 0.86 | 3000 | inf | 0.3144 | | 1.0044 | 1.15 | 4000 | inf | 0.2814 | | 0.9718 | 1.43 | 5000 | inf | 0.2658 | | 0.9502 | 1.72 | 6000 | inf | 0.2566 | | 0.9418 | 2.01 | 7000 | inf | 0.2476 | | 0.9215 | 2.29 | 8000 | inf | 0.2420 | | 0.9236 | 2.58 | 9000 | inf | 0.2388 | | 0.9014 | 2.87 | 10000 | inf | 0.2354 | | 0.8814 | 3.15 | 11000 | inf | 0.2312 | | 0.8809 | 3.44 | 12000 | inf | 0.2285 | | 0.8717 | 3.73 | 13000 | inf | 0.2263 | | 0.8787 | 4.01 | 14000 | inf | 0.2218 | | 0.8567 | 4.3 | 15000 | inf | 0.2193 | | 0.8488 | 4.59 | 16000 | inf | 0.2187 | | 0.8359 | 4.87 | 17000 | inf | 0.2172 | Training continued with checkpoint from STEP 17000: | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | / | 5.16 | 18000 | inf | 0.2176 | | / | 5.45 | 19000 | inf | 0.2181 | | / | 5.73 | 20000 | inf | 0.2155 | | / | 6.02 | 21000 | inf | 0.2140 | | / | 6.31 | 22000 | inf | 0.2124 | | / | 6.59 | 23000 | inf | 0.2117 | | / | 6.88 | 24000 | inf | 0.2116 | It achieves the best result on the validation set on Step 24000: - Wer: 0.2116 Got some issue with validation loss calculation. ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.3.dev0 - Tokenizers 0.11.0 ### Evaluation Commands 1. To evaluate on `mozilla-foundation/common_voice_8` with split `test` ```bash python eval.py --model_id Plim/xls-r-300m-cv_8-fr --dataset mozilla-foundation/common_voice_8_0 --config fr --split test ``` 2. To evaluate on `speech-recognition-community-v2/dev_data` ```bash python eval.py --model_id Plim/xls-r-300m-cv_8-fr --dataset speech-recognition-community-v2/dev_data --config fr --split validation --chunk_length_s 5.0 --stride_length_s 1.0 ```
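### Inference example (sketch)

For a lower-level view of what the evaluation script does, the sketch below loads the processor and CTC model directly and greedy-decodes a local recording; it assumes the repository ships the usual wav2vec2 processor files, that the input file is 16 kHz mono, and the file path is a placeholder.

```python
import torch
import soundfile as sf
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "Plim/xls-r-300m-cv_8-fr"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Load a 16 kHz mono recording (placeholder path).
speech, sampling_rate = sf.read("path/to/audio_16khz.wav")

inputs = processor(speech, sampling_rate=sampling_rate, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Greedy CTC decoding: most likely token per frame, then collapse repeats and blanks.
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```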
{"language": ["fr"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer"], "model-index": [{"name": "XLS-R-300m - French", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "fr"}, "metrics": [{"type": "wer", "value": "to recompute with STEP 24000", "name": "Test WER"}, {"type": "cer", "value": "to recompute with STEP 24000", "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "fr"}, "metrics": [{"type": "wer", "value": 35.29, "name": "Test WER"}, {"type": "cer", "value": 13.94, "name": "Test CER"}]}]}]}
Plim/xls-r-300m-cv_8-fr
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "fr", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
automatic-speech-recognition
transformers
--- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> ## Model description This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - FR dataset. ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 2.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.495 | 0.16 | 500 | 3.3883 | 1.0 | | 2.9095 | 0.32 | 1000 | 2.9152 | 1.0000 | | 1.8434 | 0.49 | 1500 | 1.0473 | 0.7446 | | 1.4298 | 0.65 | 2000 | 0.5729 | 0.5130 | | 1.1937 | 0.81 | 2500 | 0.3795 | 0.3450 | | 1.1248 | 0.97 | 3000 | 0.3321 | 0.3052 | | 1.0835 | 1.13 | 3500 | 0.3038 | 0.2805 | | 1.0479 | 1.3 | 4000 | 0.2910 | 0.2689 | | 1.0413 | 1.46 | 4500 | 0.2798 | 0.2593 | | 1.014 | 1.62 | 5000 | 0.2727 | 0.2512 | | 1.004 | 1.78 | 5500 | 0.2646 | 0.2471 | | 0.9949 | 1.94 | 6000 | 0.2619 | 0.2457 | It achieves the best result on STEP 6000 on the validation set: - Loss: 0.2619 - Wer: 0.2457 ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2.dev0 - Tokenizers 0.11.0 ### Evaluation Commands 1. To evaluate on `mozilla-foundation/common_voice_7` with split `test` ```bash python eval.py --model_id Plim/xls-r-300m-fr --dataset mozilla-foundation/common_voice_7_0 --config fr --split test ``` 2. To evaluate on `speech-recognition-community-v2/dev_data` ```bash python eval.py --model_id Plim/xls-r-300m-fr --dataset speech-recognition-community-v2/dev_data --config fr --split validation --chunk_length_s 5.0 --stride_length_s 1.0 ```
{"language": ["fr"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer", "robust-speech-event", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_7_0"], "model-index": [{"name": "XLS-R-300M - French", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 7", "type": "mozilla-foundation/common_voice_7_0", "args": "fr"}, "metrics": [{"type": "wer", "value": 24.56, "name": "Test WER"}, {"type": "cer", "value": 7.3, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "fr"}, "metrics": [{"type": "wer", "value": 63.62, "name": "Test WER"}, {"type": "cer", "value": 17.2, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "fr"}, "metrics": [{"type": "wer", "value": 66.45, "name": "Test WER"}]}]}]}
Plim/xls-r-300m-fr
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer", "robust-speech-event", "hf-asr-leaderboard", "fr", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # This model is a fine-tuned version of [./checkpoint-6000](https://huggingface.co/./checkpoint-6000) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - FR dataset. It achieves the following results on the evaluation set: - Loss: 0.2619 - Wer: 0.2457 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 2.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.495 | 0.16 | 500 | 3.3883 | 1.0 | | 2.9095 | 0.32 | 1000 | 2.9152 | 1.0000 | | 1.8434 | 0.49 | 1500 | 1.0473 | 0.7446 | | 1.4298 | 0.65 | 2000 | 0.5729 | 0.5130 | | 1.1937 | 0.81 | 2500 | 0.3795 | 0.3450 | | 1.1248 | 0.97 | 3000 | 0.3321 | 0.3052 | | 1.0835 | 1.13 | 3500 | 0.3038 | 0.2805 | | 1.0479 | 1.3 | 4000 | 0.2910 | 0.2689 | | 1.0413 | 1.46 | 4500 | 0.2798 | 0.2593 | | 1.014 | 1.62 | 5000 | 0.2727 | 0.2512 | | 1.004 | 1.78 | 5500 | 0.2646 | 0.2471 | | 0.9949 | 1.94 | 6000 | 0.2619 | 0.2457 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2.dev0 - Tokenizers 0.11.0
{"language": ["fr"], "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer"], "model-index": [{"name": "", "results": []}]}
Plim/xls-r-300m-lm-fr
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer", "fr", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
question-answering
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad_v2 dataset. It achieves the following results on the evaluation set: - Loss: 2.4285 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.5169 | 1.0 | 1642 | 1.6958 | | 1.1326 | 2.0 | 3284 | 2.0009 | | 0.8638 | 3.0 | 4926 | 2.4285 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu111 - Datasets 1.15.1 - Tokenizers 0.10.3
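A minimal usage sketch (added for illustration, not part of the original card): it assumes the checkpoint is available as `Plimpton/distilbert-base-uncased-finetuned-squad` (this record's id) and uses an invented question/context pair.

```python
# Hedged sketch: extractive QA with the fine-tuned checkpoint named in this record.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="Plimpton/distilbert-base-uncased-finetuned-squad",
)

# Invented example pair; any question/context strings work the same way.
result = qa(
    question="How many epochs was the model trained for?",
    context="The checkpoint was fine-tuned on SQuAD v2 for three epochs with a learning rate of 2e-05.",
)
print(result["answer"], round(result["score"], 3))
```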
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad_v2"], "model-index": [{"name": "distilbert-base-uncased-finetuned-squad", "results": []}]}
Plimpton/distilbert-base-uncased-finetuned-squad
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "question-answering", "generated_from_trainer", "dataset:squad_v2", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Plutovio/DialoGPT-small-harrypotter
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
question-answering
transformers
Based on [Google's mT5](https://github.com/google-research/multilingual-t5), this model generates questions (with their answers) from Thai texts. It was fine-tuned on the NSC2018 corpus. ```python from transformers import MT5Tokenizer, MT5ForConditionalGeneration tokenizer = MT5Tokenizer.from_pretrained("Pollawat/mt5-small-thai-qa-qg") model = MT5ForConditionalGeneration.from_pretrained("Pollawat/mt5-small-thai-qa-qg") text = "กรุงเทพมหานคร เป็นเมืองหลวงและนครที่มีประชากรมากที่สุดของประเทศไทย เป็นศูนย์กลางการปกครอง การศึกษา การคมนาคมขนส่ง การเงินการธนาคาร การพาณิชย์ การสื่อสาร และความเจริญของประเทศ เป็นเมืองที่มีชื่อยาวที่สุดในโลก ตั้งอยู่บนสามเหลี่ยมปากแม่น้ำเจ้าพระยา มีแม่น้ำเจ้าพระยาไหลผ่านและแบ่งเมืองออกเป็น 2 ฝั่ง คือ ฝั่งพระนครและฝั่งธนบุรี กรุงเทพมหานครมีพื้นที่ทั้งหมด 1,568.737 ตร.กม. มีประชากรตามทะเบียนราษฎรกว่า 5 ล้านคน" input_ids = tokenizer.encode(text, return_tensors='pt') beam_output = model.generate( input_ids, max_length=50, num_beams=5, early_stopping=True ) print(tokenizer.decode(beam_output[0])) >> <pad> <extra_id_0> แม่น้ําเจ้าพระยาไหลผ่านและแบ่งเมืองออกเป็น 2 ฝั่ง คือ ฝั่งใด <ANS> ฝั่งพระนครและฝั่งธนบุรี</s> print(tokenizer.decode(beam_output[0], skip_special_tokens=True)) >> <extra_id_0> แม่น้ําเจ้าพระยาไหลผ่านและแบ่งเมืองออกเป็น 2 ฝั่ง คือ ฝั่งใด ฝั่งพระนครและฝั่งธนบุรี ```
{"language": ["thai", "th"], "license": "mit", "tags": ["question-generation", "question-answering"], "datasets": ["NSC2018", "iapp-wiki-qa-dataset", "XQuAD"]}
Pollawat/mt5-small-thai-qa-qg
null
[ "transformers", "pytorch", "mt5", "text2text-generation", "question-generation", "question-answering", "dataset:NSC2018", "dataset:iapp-wiki-qa-dataset", "dataset:XQuAD", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
Based on [Google's mT5](https://github.com/google-research/multilingual-t5), this model generates questions from Thai texts. It was fine-tuned on the NSC2018 corpus. ```python from transformers import T5Tokenizer, MT5ForConditionalGeneration tokenizer = T5Tokenizer.from_pretrained("Pollawat/mt5-small-thai-qg") model = MT5ForConditionalGeneration.from_pretrained("Pollawat/mt5-small-thai-qg") text = "กรุงเทพมหานคร เป็นเมืองหลวงและนครที่มีประชากรมากที่สุดของประเทศไทย เป็นศูนย์กลางการปกครอง การศึกษา การคมนาคมขนส่ง การเงินการธนาคาร การพาณิชย์ การสื่อสาร และความเจริญของประเทศ เป็นเมืองที่มีชื่อยาวที่สุดในโลก ตั้งอยู่บนสามเหลี่ยมปากแม่น้ำเจ้าพระยา มีแม่น้ำเจ้าพระยาไหลผ่านและแบ่งเมืองออกเป็น 2 ฝั่ง คือ ฝั่งพระนครและฝั่งธนบุรี กรุงเทพมหานครมีพื้นที่ทั้งหมด 1,568.737 ตร.กม. มีประชากรตามทะเบียนราษฎรกว่า 5 ล้านคน ทำให้กรุงเทพมหานครเป็นเอกนคร (Primate City) จัด มีผู้กล่าวว่า กรุงเทพมหานครเป็น 'เอกนครที่สุดในโลก' เพราะมีประชากรมากกว่านครที่มีประชากรมากเป็นอันดับ 2 ถึง 40 เท่า[3]" input_ids = tokenizer.encode(text, return_tensors='pt') beam_output = model.generate( input_ids, max_length=50, num_beams=5, early_stopping=True ) print(tokenizer.decode(beam_output[0], skip_special_tokens=True)) >> <extra_id_0>ของกรุงเทพมหานครเป็นเมืองหลวงของประเทศใด ```
{"language": ["thai", "th"], "license": "mit", "tags": ["question-generation"], "datasets": ["NSC2018"]}
Pollawat/mt5-small-thai-qg
null
[ "transformers", "pytorch", "mt5", "text2text-generation", "question-generation", "dataset:NSC2018", "license:mit", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Pololinger/gpt-neo-125M
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
A Shrek conversational model, built with all 4 movie scripts!
{"tags": ["conversational"]}
Poly-Pixel/shrek-medium-full
null
[ "transformers", "pytorch", "safetensors", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
Shrek
{"tags": ["conversational"]}
Poly-Pixel/shrek-medium
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# Shrek Small DialoGPT Model
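A single-turn usage sketch (illustrative, not the author's code): it follows the common DialoGPT generation recipe and assumes the checkpoint id `Poly-Pixel/shrek-test-small` from this record; the prompt and decoding settings are placeholders.

```python
# Hedged sketch of a one-turn chat, following the usual DialoGPT pattern.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Poly-Pixel/shrek-test-small")
model = AutoModelForCausalLM.from_pretrained("Poly-Pixel/shrek-test-small")

# Encode the user message, append EOS, and let the model generate the reply.
prompt = "What are you doing in my swamp?"
input_ids = tokenizer.encode(prompt + tokenizer.eos_token, return_tensors="pt")
reply_ids = model.generate(
    input_ids,
    max_length=100,
    pad_token_id=tokenizer.eos_token_id,
)

# Keep only the newly generated tokens (the bot's reply).
print(tokenizer.decode(reply_ids[0, input_ids.shape[-1]:], skip_special_tokens=True))
```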
{"tags": ["conversational"]}
Poly-Pixel/shrek-test-small
null
[ "transformers", "pytorch", "safetensors", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00