modelId (string, 4 to 81 chars) | tags (sequence) | pipeline_tag (string, 17 classes) | config (dict) | downloads (int64, 0 to 59.7M) | first_commit (timestamp[ns, tz=UTC]) | card (string, 51 to 438k chars) |
---|---|---|---|---|---|---|
Chungu424/repo | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2020-05-21T16:16:47Z | ---
language: ko
---
# 📈 Financial Korean ELECTRA model
Pretrained ELECTRA Language Model for Korean (`finance-koelectra-base-generator`)
> ELECTRA is a new method for self-supervised language representation learning. It can be used to
> pre-train transformer networks using relatively little compute. ELECTRA models are trained to
> distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to
> the discriminator of a GAN.
More details about ELECTRA can be found in the [ICLR paper](https://openreview.net/forum?id=r1xMH1BtvB)
or in the [official ELECTRA repository](https://github.com/google-research/electra) on GitHub.
## Stats
The current version of the model is trained on financial news data from Naver News.
The final training corpus has a size of 25GB and 2.3B tokens.
This model was trained as a cased model on a TITAN RTX for 500k steps.
## Usage
```python
from transformers import pipeline
fill_mask = pipeline(
"fill-mask",
model="krevas/finance-koelectra-base-generator",
tokenizer="krevas/finance-koelectra-base-generator"
)
print(fill_mask(f"내일 해당 종목이 대폭 {fill_mask.tokenizer.mask_token}할 것이다."))
```
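The pipeline returns a list of candidate fillings. A minimal sketch of picking the top candidate, assuming the output format of recent `transformers` versions (each candidate is a dict with `score` and `token_str`):
```python
predictions = fill_mask(f"내일 해당 종목이 대폭 {fill_mask.tokenizer.mask_token}할 것이다.")
best = max(predictions, key=lambda p: p["score"])  # highest-scoring replacement for the mask
print(best["token_str"], best["score"])
```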
# Huggingface model hub
All models are available on the [Huggingface model hub](https://huggingface.co/krevas).
|
Chungu424/repodata | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2020-05-22T03:15:12Z | ---
language: ko
---
# 📈 Financial Korean ELECTRA model
Pretrained ELECTRA Language Model for Korean (`finance-koelectra-small-discriminator`)
> ELECTRA is a new method for self-supervised language representation learning. It can be used to
> pre-train transformer networks using relatively little compute. ELECTRA models are trained to
> distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to
> the discriminator of a GAN.
More details about ELECTRA can be found in the [ICLR paper](https://openreview.net/forum?id=r1xMH1BtvB)
or in the [official ELECTRA repository](https://github.com/google-research/electra) on GitHub.
## Stats
The current version of the model is trained on financial news data from Naver News.
The final training corpus has a size of 25GB and 2.3B tokens.
This model was trained as a cased model on a TITAN RTX for 500k steps.
## Usage
```python
from transformers import ElectraForPreTraining, ElectraTokenizer
import torch
discriminator = ElectraForPreTraining.from_pretrained("krevas/finance-koelectra-small-discriminator")
tokenizer = ElectraTokenizer.from_pretrained("krevas/finance-koelectra-small-discriminator")
sentence = "내일 해당 종목이 대폭 상승할 것이다"
fake_sentence = "내일 해당 종목이 맛있게 상승할 것이다"
fake_tokens = tokenizer.tokenize(fake_sentence)
fake_inputs = tokenizer.encode(fake_sentence, return_tensors="pt")
discriminator_outputs = discriminator(fake_inputs)
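# A positive logit means the discriminator flags that token as replaced ("fake"); round logits to 0/1 per token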
predictions = torch.round((torch.sign(discriminator_outputs[0]) + 1) / 2)
[print("%7s" % token, end="") for token in fake_tokens]
[print("%7s" % int(prediction), end="") for prediction in predictions.tolist()[1:-1]]
print("fake token : %s" % fake_tokens[predictions.tolist()[1:-1].index(1)])
```
# Huggingface model hub
All models are available on the [Huggingface model hub](https://huggingface.co/krevas).
|
Chuu/Chumar | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2020-05-22T03:16:25Z | ---
language: ko
---
# 📈 Financial Korean ELECTRA model
Pretrained ELECTRA Language Model for Korean (`finance-koelectra-small-generator`)
> ELECTRA is a new method for self-supervised language representation learning. It can be used to
> pre-train transformer networks using relatively little compute. ELECTRA models are trained to
> distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to
> the discriminator of a GAN.
More details about ELECTRA can be found in the [ICLR paper](https://openreview.net/forum?id=r1xMH1BtvB)
or in the [official ELECTRA repository](https://github.com/google-research/electra) on GitHub.
## Stats
The current version of the model is trained on financial news data from Naver News.
The final training corpus has a size of 25GB and 2.3B tokens.
This model was trained as a cased model on a TITAN RTX for 500k steps.
## Usage
```python
from transformers import pipeline
fill_mask = pipeline(
"fill-mask",
model="krevas/finance-koelectra-small-generator",
tokenizer="krevas/finance-koelectra-small-generator"
)
print(fill_mask(f"내일 해당 종목이 대폭 {fill_mask.tokenizer.mask_token}할 것이다."))
```
# Huggingface model hub
All models are available on the [Huggingface model hub](https://huggingface.co/krevas).
|
Cinnamon/electra-small-japanese-discriminator | [
"pytorch",
"electra",
"pretraining",
"ja",
"transformers",
"license:apache-2.0"
] | null | {
"architectures": [
"ElectraForPreTraining"
],
"model_type": "electra",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 419 | 2021-08-28T14:10:34Z | ---
tags:
- conversational
---
# Phoenix DialoGPT model |
Cinnamon/electra-small-japanese-generator | [
"pytorch",
"electra",
"fill-mask",
"ja",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"ElectraForMaskedLM"
],
"model_type": "electra",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 19 | 2022-01-26T08:33:53Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-turkish-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-turkish-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3942
- Wer: 0.3149
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
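The hyperparameters above map roughly onto a Hugging Face `TrainingArguments` configuration like the sketch below; the output directory and anything not listed above are placeholders or library defaults, not values taken from the original run.
```python
from transformers import TrainingArguments

# Sketch only: reproduces the listed hyperparameters; other arguments are defaults/placeholders.
training_args = TrainingArguments(
    output_dir="wav2vec2-large-xls-r-300m-turkish-colab",  # placeholder output path
    learning_rate=3e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,   # total train batch size: 32
    warmup_steps=500,
    num_train_epochs=30,
    lr_scheduler_type="linear",
    fp16=True,                       # "Native AMP" mixed precision
)
```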
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.9921 | 3.67 | 400 | 0.7820 | 0.7857 |
| 0.4496 | 7.34 | 800 | 0.4630 | 0.4977 |
| 0.2057 | 11.01 | 1200 | 0.4293 | 0.4627 |
| 0.1328 | 14.68 | 1600 | 0.4464 | 0.4068 |
| 0.1009 | 18.35 | 2000 | 0.4461 | 0.3742 |
| 0.0794 | 22.02 | 2400 | 0.4328 | 0.3467 |
| 0.0628 | 25.69 | 2800 | 0.4036 | 0.3263 |
| 0.0497 | 29.36 | 3200 | 0.3942 | 0.3149 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
CleveGreen/JobClassifier | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 31 | 2021-09-07T14:09:07Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# sts-GBERT-bi-encoder
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sts-GBERT-bi-encoder')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sts-GBERT-bi-encoder')
model = AutoModel.from_pretrained('sts-GBERT-bi-encoder')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sts-GBERT-bi-encoder)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 859 with parameters:
```
{'batch_size': 10, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"callback": null,
"epochs": 4,
"evaluation_steps": 0,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 344,
"weight_decay": 0.01
}
```
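Taken together, the DataLoader and `fit()` parameters above correspond roughly to a call like the sketch below; the training pairs shown are placeholders, since the actual STS data (859 batches of size 10) is not part of this card, and the evaluator is omitted.
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Placeholder training pairs; replace with your own (sentence1, sentence2, similarity) examples.
train_examples = [InputExample(texts=["Ein Beispielsatz.", "Noch ein Satz."], label=0.8)]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=10)

model = SentenceTransformer("sts-GBERT-bi-encoder")
train_loss = losses.CosineSimilarityLoss(model)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=4,
    warmup_steps=344,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
    max_grad_norm=1,
    scheduler="WarmupLinear",
)
```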
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
CodeDanCode/SP-KyleBot | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 15 | 2021-08-23T13:08:49Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
model_index:
- name: name
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: mrpc
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# name
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
CodeNinja1126/bert-q-encoder | [
"pytorch"
] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | 2021-02-09T17:22:46Z | # I-BERT base model
This model, `ibert-roberta-base`, is an integer-only quantized version of [RoBERTa](https://arxiv.org/abs/1907.11692), and was introduced in [this paper](https://arxiv.org/abs/2101.01321).
I-BERT stores all parameters with INT8 representation, and carries out the entire inference using integer-only arithmetic.
In particular, I-BERT replaces all floating point operations in the Transformer architectures (e.g., MatMul, GELU, Softmax, and LayerNorm) with closely approximating integer operations.
This can result in up to 4x inference speedup compared to the floating point counterpart when tested on an Nvidia T4 GPU.
The best model parameters found via quantization-aware finetuning can then be exported (e.g., to TensorRT) for integer-only deployment of the model.
## Finetuning Procedure
Finetuning of I-BERT consists of 3 stages: (1) Full-precision finetuning from the pretrained model on a down-stream task, (2) model quantization, and (3) integer-only finetuning (i.e., quantization-aware training) of the quantized model.
### Full-precision finetuning
Full-precision finetuning of I-BERT is similar to RoBERTa finetuning.
For instance, you can run the following command to finetune on the [MRPC](https://www.microsoft.com/en-us/download/details.aspx?id=52398) text classification task.
```
python examples/text-classification/run_glue.py \
--model_name_or_path kssteven/ibert-roberta-base \
--task_name MRPC \
--do_eval \
--do_train \
--evaluation_strategy epoch \
--max_seq_length 128 \
--per_device_train_batch_size 32 \
--save_steps 115 \
--learning_rate 2e-5 \
--num_train_epochs 10 \
--output_dir $OUTPUT_DIR
```
### Model Quantization
Once you are done with full-precision finetuning, open up `config.json` in your checkpoint directory and set the `quant_mode` attribute to `true`, as in the example below.
```
{
"_name_or_path": "kssteven/ibert-roberta-base",
"architectures": [
"IBertForSequenceClassification"
],
"attention_probs_dropout_prob": 0.1,
"bos_token_id": 0,
"eos_token_id": 2,
"finetuning_task": "mrpc",
"force_dequant": "none",
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"layer_norm_eps": 1e-05,
"max_position_embeddings": 514,
"model_type": "ibert",
"num_attention_heads": 12,
"num_hidden_layers": 12,
"pad_token_id": 1,
"position_embedding_type": "absolute",
"quant_mode": true,
"tokenizer_class": "RobertaTokenizer",
"transformers_version": "4.4.0.dev0",
"type_vocab_size": 1,
"vocab_size": 50265
}
```
Then, your model will automatically run in integer-only mode when you load the checkpoint.
Also, make sure to delete `optimizer.pt`, `scheduler.pt` and `trainer_state.json` in the same directory.
Otherwise, HF will not reset the optimizer, scheduler, or trainer state for the following integer-only finetuning.
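The two manual steps above (flipping the flag in `config.json` and removing the stale trainer state) can be scripted; here is a minimal sketch, assuming `$OUTPUT_DIR` from the previous command points at your finetuned checkpoint directory.
```python
import json
import os

ckpt_dir = os.environ.get("OUTPUT_DIR", "path/to/checkpoint")  # placeholder path

# Enable integer-only (quantized) mode in the checkpoint's config.json
cfg_path = os.path.join(ckpt_dir, "config.json")
with open(cfg_path) as f:
    cfg = json.load(f)
cfg["quant_mode"] = True
with open(cfg_path, "w") as f:
    json.dump(cfg, f, indent=2)

# Remove stale optimizer/scheduler/trainer state so integer-only finetuning starts fresh
for name in ("optimizer.pt", "scheduler.pt", "trainer_state.json"):
    path = os.path.join(ckpt_dir, name)
    if os.path.exists(path):
        os.remove(path)
```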
### Integer-only finetuning (Quantization-aware training)
Finally, you will be able to run integer-only finetuning simply by loading the checkpoint file you modified.
Note that the only difference in the example command below is `model_name_or_path`.
```
python examples/text-classification/run_glue.py \
--model_name_or_path $CHECKPOINT_DIR \
--task_name MRPC \
--do_eval \
--do_train \
--evaluation_strategy epoch \
--max_seq_length 128 \
--per_device_train_batch_size 32 \
--save_steps 115 \
--learning_rate 1e-6 \
--num_train_epochs 10 \
--output_dir $OUTPUT_DIR
```
## Citation info
If you use I-BERT, please cite [our paper](https://arxiv.org/abs/2101.01321).
```
@article{kim2021bert,
title={I-BERT: Integer-only BERT Quantization},
author={Kim, Sehoon and Gholami, Amir and Yao, Zhewei and Mahoney, Michael W and Keutzer, Kurt},
journal={arXiv preprint arXiv:2101.01321},
year={2021}
}
```
|
CodeNinja1126/test-model | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 24 | 2021-02-14T05:33:59Z | # I-BERT large model
This model, `ibert-roberta-large`, is an integer-only quantized version of [RoBERTa](https://arxiv.org/abs/1907.11692), and was introduced in [this paper](https://arxiv.org/abs/2101.01321).
I-BERT stores all parameters with INT8 representation, and carries out the entire inference using integer-only arithmetic.
In particular, I-BERT replaces all floating point operations in the Transformer architectures (e.g., MatMul, GELU, Softmax, and LayerNorm) with closely approximating integer operations.
This can result in up to 4x inference speedup compared to the floating point counterpart when tested on an Nvidia T4 GPU.
The best model parameters found via quantization-aware finetuning can then be exported (e.g., to TensorRT) for integer-only deployment of the model.
## Finetuning Procedure
Finetuning of I-BERT consists of 3 stages: (1) Full-precision finetuning from the pretrained model on a down-stream task, (2) model quantization, and (3) integer-only finetuning (i.e., quantization-aware training) of the quantized model.
### Full-precision finetuning
Full-precision finetuning of I-BERT is similar to RoBERTa finetuning.
For instance, you can run the following command to finetune on the [MRPC](https://www.microsoft.com/en-us/download/details.aspx?id=52398) text classification task.
```
python examples/text-classification/run_glue.py \
--model_name_or_path kssteven/ibert-roberta-large \
--task_name MRPC \
--do_eval \
--do_train \
--evaluation_strategy epoch \
--max_seq_length 128 \
--per_device_train_batch_size 32 \
--save_steps 115 \
--learning_rate 2e-5 \
--num_train_epochs 10 \
--output_dir $OUTPUT_DIR
```
### Model Quantization
Once you are done with full-precision finetuning, open up `config.json` in your checkpoint directory and set the `quant_mode` attribute to `true`, as in the example below.
```
{
"_name_or_path": "kssteven/ibert-roberta-large",
"architectures": [
"IBertForSequenceClassification"
],
"attention_probs_dropout_prob": 0.1,
"bos_token_id": 0,
"eos_token_id": 2,
"finetuning_task": "mrpc",
"force_dequant": "none",
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"layer_norm_eps": 1e-05,
"max_position_embeddings": 514,
"model_type": "ibert",
"num_attention_heads": 12,
"num_hidden_layers": 12,
"pad_token_id": 1,
"position_embedding_type": "absolute",
"quant_mode": true,
"tokenizer_class": "RobertaTokenizer",
"transformers_version": "4.4.0.dev0",
"type_vocab_size": 1,
"vocab_size": 50265
}
```
Then, your model will automatically run in integer-only mode when you load the checkpoint.
Also, make sure to delete `optimizer.pt`, `scheduler.pt` and `trainer_state.json` in the same directory.
Otherwise, HF will not reset the optimizer, scheduler, or trainer state for the following integer-only finetuning.
### Integer-only finetuning (Quantization-aware training)
Finally, you will be able to run integer-only finetuning simply by loading the checkpoint file you modified.
Note that the only difference in the example command below is `model_name_or_path`.
```
python examples/text-classification/run_glue.py \
--model_name_or_path $CHECKPOINT_DIR \
--task_name MRPC \
--do_eval \
--do_train \
--evaluation_strategy epoch \
--max_seq_length 128 \
--per_device_train_batch_size 32 \
--save_steps 115 \
--learning_rate 1e-6 \
--num_train_epochs 10 \
--output_dir $OUTPUT_DIR
```
## Citation info
If you use I-BERT, please cite [our paper](https://arxiv.org/abs/2101.01321).
```
@article{kim2021bert,
title={I-BERT: Integer-only BERT Quantization},
author={Kim, Sehoon and Gholami, Amir and Yao, Zhewei and Mahoney, Michael W and Keutzer, Kurt},
journal={arXiv preprint arXiv:2101.01321},
year={2021}
}
```
|
CoderBoy432/DialoGPT-small-harrypotter | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 11 | 2021-07-21T14:49:24Z | ---
language:
- en
tags:
- text generation
- pytorch
- the Pile
- causal-lm
license: apache-2.0
datasets:
- the Pile
---
# GPT-Neo 2.7B (By EleutherAI)
## Model Description
GPT-Neo 2.7B is a transformer model designed using EleutherAI's replication of the GPT-3 architecture. GPT-Neo refers to the class of models, while 2.7B represents the number of parameters of this particular pre-trained model.
## Training data
GPT-Neo 2.7B was trained on the Pile, a large scale curated dataset created by EleutherAI for the purpose of training this model.
## Training procedure
This model was trained for 420 billion tokens over 400,000 steps. It was trained as a masked autoregressive language model, using cross-entropy loss.
## Intended Use and Limitations
Through its pretraining objective, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks. The model is, however, best at what it was pretrained for, which is generating text from a prompt.
### How to use
You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run:
```py
>>> from transformers import pipeline
>>> generator = pipeline('text-generation', model='EleutherAI/gpt-neo-2.7B')
>>> generator("EleutherAI has", do_sample=True, min_length=50)
[{'generated_text': 'EleutherAI has made a commitment to create new software packages for each of its major clients and has'}]
```
### Limitations and Biases
GPT-Neo was trained as an autoregressive language model. This means that its core functionality is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work.
GPT-Neo was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending on your usecase GPT-Neo may produce socially unacceptable text. See Sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile.
As with all language models, it is hard to predict in advance how GPT-Neo will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.
## Eval results
All evaluations were done using our [evaluation harness](https://github.com/EleutherAI/lm-evaluation-harness). Some results for GPT-2 and GPT-3 are inconsistent with the values reported in the respective papers. We are currently looking into why, and would greatly appreciate feedback and further testing of our eval harness. If you would like to contribute evaluations you have done, please reach out on our [Discord](https://discord.gg/vtRgjbM).
### Linguistic Reasoning
| Model and Size | Pile BPB | Pile PPL | Wikitext PPL | Lambada PPL | Lambada Acc | Winogrande | Hellaswag |
| ---------------- | ---------- | ---------- | ------------- | ----------- | ----------- | ---------- | ----------- |
| GPT-Neo 1.3B | 0.7527 | 6.159 | 13.10 | 7.498 | 57.23% | 55.01% | 38.66% |
| GPT-2 1.5B | 1.0468 | ----- | 17.48 | 10.634 | 51.21% | 59.40% | 40.03% |
| **GPT-Neo 2.7B** | **0.7165** | **5.646** | **11.39** | **5.626** | **62.22%** | **56.50%** | **42.73%** |
| GPT-3 Ada | 0.9631 | ----- | ----- | 9.954 | 51.60% | 52.90% | 35.93% |
### Physical and Scientific Reasoning
| Model and Size | MathQA | PubMedQA | Piqa |
| ---------------- | ---------- | ---------- | ----------- |
| GPT-Neo 1.3B | 24.05% | 54.40% | 71.11% |
| GPT-2 1.5B | 23.64% | 58.33% | 70.78% |
| **GPT-Neo 2.7B** | **24.72%** | **57.54%** | **72.14%** |
| GPT-3 Ada | 24.29% | 52.80% | 68.88% |
### Down-Stream Applications
TBD
### BibTeX entry and citation info
To cite this model, use
```bibtex
@article{gao2020pile,
title={The Pile: An 800GB Dataset of Diverse Text for Language Modeling},
author={Gao, Leo and Biderman, Stella and Black, Sid and Golding, Laurence and Hoppe, Travis and Foster, Charles and Phang, Jason and He, Horace and Thite, Anish and Nabeshima, Noa and others},
journal={arXiv preprint arXiv:2101.00027},
year={2020}
}
```
To cite the codebase that this model was trained with, use
```bibtex
@software{gpt-neo,
author = {Black, Sid and Gao, Leo and Wang, Phil and Leahy, Connor and Biderman, Stella},
title = {{GPT-Neo}: Large Scale Autoregressive Language Modeling with Mesh-Tensorflow},
url = {http://github.com/eleutherai/gpt-neo},
version = {1.0},
year = {2021},
}
``` |
CoderEFE/DialoGPT-marxbot | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational",
"has_space"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 11 | 2020-01-08T18:52:40Z | ### Model
**[`albert-xlarge-v2`](https://huggingface.co/albert-xlarge-v2)** fine-tuned on **[`SQuAD V2`](https://rajpurkar.github.io/SQuAD-explorer/)** using **[`run_squad.py`](https://github.com/huggingface/transformers/blob/master/examples/question-answering/run_squad.py)**
### Training Parameters
Trained on 4 NVIDIA GeForce RTX 2080 Ti 11Gb
```bash
BASE_MODEL=albert-xlarge-v2
python run_squad.py \
--version_2_with_negative \
--model_type albert \
--model_name_or_path $BASE_MODEL \
--output_dir $OUTPUT_MODEL \
--do_eval \
--do_lower_case \
--train_file $SQUAD_DIR/train-v2.0.json \
--predict_file $SQUAD_DIR/dev-v2.0.json \
--per_gpu_train_batch_size 3 \
--per_gpu_eval_batch_size 64 \
--learning_rate 3e-5 \
--num_train_epochs 3.0 \
--max_seq_length 384 \
--doc_stride 128 \
--save_steps 2000 \
--threads 24 \
--warmup_steps 814 \
--gradient_accumulation_steps 4 \
--fp16 \
--do_train
```
### Evaluation
Evaluation on the dev set. I did not sweep for best threshold.
| | val |
|-------------------|-------------------|
| exact | 84.41842836688285 |
| f1 | 87.4628460501696 |
| total | 11873.0 |
| HasAns_exact | 80.68488529014844 |
| HasAns_f1 | 86.78245127423482 |
| HasAns_total | 5928.0 |
| NoAns_exact | 88.1412952060555 |
| NoAns_f1 | 88.1412952060555 |
| NoAns_total | 5945.0 |
| best_exact | 84.41842836688285 |
| best_exact_thresh | 0.0 |
| best_f1 | 87.46284605016956 |
| best_f1_thresh | 0.0 |
### Usage
See [huggingface documentation](https://huggingface.co/transformers/model_doc/albert.html#albertforquestionanswering). Training on `SQuAD V2` allows the model to score if a paragraph contains an answer:
```python
start_scores, end_scores = model(input_ids)
span_scores = start_scores.softmax(dim=1).log()[:,:,None] + end_scores.softmax(dim=1).log()[:,None,:]
ignore_score = span_scores[:,0,0] #no answer scores
```
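A more self-contained sketch of the same idea is shown below. The checkpoint path and the question/context strings are placeholders (this card does not name the published hub id), and recent `transformers` versions expose the scores as `start_logits`/`end_logits` on the model output.
```python
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

MODEL_DIR = "path/to/albert-xlarge-v2-squad2-checkpoint"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(MODEL_DIR)
model = AutoModelForQuestionAnswering.from_pretrained(MODEL_DIR)

question = "Who wrote Hamlet?"
context = "Hamlet is a tragedy written by William Shakespeare."
inputs = tokenizer(question, context, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)
start_scores, end_scores = outputs.start_logits, outputs.end_logits

# Joint log-probability of each candidate (start, end) span
span_scores = start_scores.softmax(dim=1).log()[:, :, None] + end_scores.softmax(dim=1).log()[:, None, :]
ignore_score = span_scores[:, 0, 0]  # "no answer" score at the [CLS] position
```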
|
CoderEFE/DialoGPT-medium-marx | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | 2020-04-06T13:57:56Z | ### Model
**[`monologg/biobert_v1.1_pubmed`](https://huggingface.co/monologg/biobert_v1.1_pubmed)** fine-tuned on **[`SQuAD V2`](https://rajpurkar.github.io/SQuAD-explorer/)** using **[`run_squad.py`](https://github.com/huggingface/transformers/blob/master/examples/question-answering/run_squad.py)**
This model is cased.
### Training Parameters
Trained on 4 NVIDIA GeForce RTX 2080 Ti 11Gb
```bash
BASE_MODEL=monologg/biobert_v1.1_pubmed
python run_squad.py \
--version_2_with_negative \
--model_type albert \
--model_name_or_path $BASE_MODEL \
--output_dir $OUTPUT_MODEL \
--do_eval \
--do_lower_case \
--train_file $SQUAD_DIR/train-v2.0.json \
--predict_file $SQUAD_DIR/dev-v2.0.json \
--per_gpu_train_batch_size 18 \
--per_gpu_eval_batch_size 64 \
--learning_rate 3e-5 \
--num_train_epochs 3.0 \
--max_seq_length 384 \
--doc_stride 128 \
--save_steps 2000 \
--threads 24 \
--warmup_steps 550 \
--gradient_accumulation_steps 1 \
--fp16 \
--logging_steps 50 \
--do_train
```
### Evaluation
Evaluation on the dev set. I did not sweep for best threshold.
| | val |
|-------------------|-------------------|
| exact | 75.97068980038743 |
| f1 | 79.37043950121722 |
| total | 11873.0 |
| HasAns_exact | 74.13967611336032 |
| HasAns_f1 | 80.94892513460755 |
| HasAns_total | 5928.0 |
| NoAns_exact | 77.79646761984861 |
| NoAns_f1 | 77.79646761984861 |
| NoAns_total | 5945.0 |
| best_exact | 75.97068980038743 |
| best_exact_thresh | 0.0 |
| best_f1 | 79.37043950121729 |
| best_f1_thresh | 0.0 |
### Usage
See [huggingface documentation](https://huggingface.co/transformers/model_doc/bert.html#bertforquestionanswering). Training on `SQuAD V2` allows the model to score if a paragraph contains an answer:
```python
start_scores, end_scores = model(input_ids)
span_scores = start_scores.softmax(dim=1).log()[:,:,None] + end_scores.softmax(dim=1).log()[:,None,:]
ignore_score = span_scores[:,0,0] #no answer scores
```
|
Venkatakrishnan-Ramesh/Text_gen | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2021-03-01T21:51:38Z | ---
language:
- en
thumbnail:
widget:
- text: "topic climate source washington post title "
---
# GPT2-medium-topic-news
## Model description
GPT2-medium fine-tuned on a fairly large news corpus, conditioned on topic, source, and title.
## Intended uses & limitations
#### How to use
To generate a news article conditioned on a topic, source, title, or any subset of these, prompt the model with:
```python
f"topic {topic} source"
f"topic {topic} source {source} title"
f"topic {topic} source {source} title {title} body"
```
Try the following values for `topic`: `climate`, `weather`, `vaccination`.
Zero-shot generation works pretty well as long as `topic` is a single word and not too specific.
```python
from transformers import AutoTokenizer, AutoModelWithLMHead  # AutoModelForCausalLM also works in newer versions

device = "cuda:0"
tokenizer = AutoTokenizer.from_pretrained("ktrapeznikov/gpt2-medium-topic-small-set")
model = AutoModelWithLMHead.from_pretrained("ktrapeznikov/gpt2-medium-topic-small-set")
model.to(device)
topic = "climate"
prompt = tokenizer(f"topic {topics} source straitstimes title", return_tensors="pt")
out = model.generate(prompt["input_ids"].to(device), do_sample=True,max_length=500, early_stopping=True, top_p=.9)
print(tokenizer.decode(out[0].cpu(), skip_special_tokens=True))
``` |
CoffeeAddict93/gpt1-call-of-the-wild | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | 2020-11-05T00:05:58Z | ---
language:
- en
thumbnail:
widget:
- text: "topic: climate article:"
---
# GPT2-medium-topic-news
## Model description
GPT2-medium fine-tuned on a large news corpus, conditioned on a topic.
## Intended uses & limitations
#### How to use
To generate a news article conditioned on a topic, prompt the model with:
`topic: climate article:`
The following tags were used during training:
`arts law international science business politics disaster world conflict football sport sports artanddesign environment music film lifeandstyle business health commentisfree books technology media education politics travel stage uk society us money culture religion science news tv fashion uk australia cities global childrens sustainable global voluntary housing law local healthcare theguardian`
Zero-shot generation works pretty well as long as `topic` is a single word and not too specific.
```python
from transformers import AutoTokenizer, AutoModelWithLMHead  # AutoModelForCausalLM also works in newer versions

device = "cuda:0"
tokenizer = AutoTokenizer.from_pretrained("ktrapeznikov/gpt2-medium-topic-news")
model = AutoModelWithLMHead.from_pretrained("ktrapeznikov/gpt2-medium-topic-news")
model.to(device)
topic = "climate"
prompt = tokenizer(f"topic: {topic} article:", return_tensors="pt")
out = model.generate(prompt["input_ids"].to(device), do_sample=True,max_length=500, early_stopping=True, top_p=.9)
print(tokenizer.decode(list(out.cpu()[0])))
```
## Training data
## Training procedure
|
CoffeeAddict93/gpt1-modest-proposal | [
"pytorch",
"openai-gpt",
"text-generation",
"transformers",
"has_space"
] | text-generation | {
"architectures": [
"OpenAIGPTLMHeadModel"
],
"model_type": "openai-gpt",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 11 | 2021-02-09T15:33:26Z | ---
language:
- en
thumbnail:
widget:
- text: "topic climate source"
---
# GPT2-medium-topic-news
## Model description
GPT2-medium fine-tuned on a small news corpus, conditioned on topic, source, and title.
## Intended uses & limitations
#### How to use
To generate a news article conditioned on a topic, source, title, or any subset of these, prompt the model with:
```python
f"topic {topic} source"
f"topic {topic} source {source} title"
f"topic {topic} source {source} title {title} body"
```
Try the following values for `topic`: `climate`, `weather`, `vaccination`.
Zero-shot generation works pretty well as long as `topic` is a single word and not too specific.
```python
from transformers import AutoTokenizer, AutoModelWithLMHead  # AutoModelForCausalLM also works in newer versions

device = "cuda:0"
tokenizer = AutoTokenizer.from_pretrained("ktrapeznikov/gpt2-medium-topic-small-set")
model = AutoModelWithLMHead.from_pretrained("ktrapeznikov/gpt2-medium-topic-small-set")
model.to(device)
topic = "climate"
prompt = tokenizer(f"topic {topics} source straitstimes title", return_tensors="pt")
out = model.generate(prompt["input_ids"].to(device), do_sample=True,max_length=500, early_stopping=True, top_p=.9)
print(tokenizer.decode(out[0].cpu(), skip_special_tokens=True))
```
## Sample Output
>[topic] military [source] straitstimes [title] Trump signs bill on military aid to Israel [body] WASHINGTON (AFP) - US President Donald Trump signed into law Thursday (April 24) legislation to provide more than US$15 billion (S$20.43 billion) in military aid to Israel, a move the Obama administration had resisted for political reasons. The White House did not immediately respond to a request for comment on the Israel measure, which Trump had sought unsuccessfully to block during the Obama pres ...
>[topic] military [source] straitstimes [title] Hong Kong's leaders to discuss new travel restrictions as lockdown looms [body] HONG KONG (REUTERS) - Hong Kong authorities said they would hold a meeting of the Legislative Council on Monday (July 21) to discuss new travel restrictions on Hong Kong residents, as the city reported a record daily increase in coronavirus cases. The authorities said they would consider the proposal after meeting government chiefs and reviewing other measures. The co ...
>[topic] military [source] straitstimes [title] Trump signs Bill that gives US troops wider latitude to conduct operations abroad [body] WASHINGTON (AFP) - US President Donald Trump on Thursday (July 23) signed a controversial law that gives US troops more leeway to conduct operations abroad, as he seeks to shore up the embattled government's defences against the coronavirus pandemic and stave off a potentially devastating election defeat. Trump's signature Bill, named after his late father's l ...
>[topic] military [source] straitstimes [title] China's Foreign Ministry responds to Japan's statement on South China Sea: 'No one should assume the role of mediator' [body] BEIJING (AFP) - The Ministry of Foreign Affairs on Tuesday (Oct 18) told Japan to stop taking sides in the South China Sea issue and not interfere in the bilateral relationship, as Japan said it would do "nothing". Foreign Ministry spokesman Zhao Lijian told reporters in Beijing that the Chinese government's position on the ...
>[topic] military [source] straitstimes [title] US warns North Korea on potential nuclear strike [body] WASHINGTON - The United States warned North Korea last Friday that an attack by the North could be a "provocation" that would have "a devastating effect" on its security, as it took aim at Pyongyang over its continued efforts to develop weapons of mass destruction. US Secretary of State Mike Pompeo was speaking at the conclusion of a White House news conference when a reporter asked him how t ...
>[topic] military [source] straitstimes [title] China calls Hong Kong to halt 'illegal and illegal military acts' [body] WASHINGTON • Chinese Foreign Ministry spokeswoman Hua Chunying said yesterday that Hong Kong must stop 'illegal and illegal military acts' before Beijing can recognise the city as its own. In her annual State Councillor's speech, Ms Hua made the case for Hong Kong to resume Hong Kong's status as a semi-autonomous city, and vowed to use its "great power position to actively an ...
## Training data
## Training procedure
|
CoffeeAddict93/gpt2-call-of-the-wild | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | null | ### Model
**[`allenai/scibert_scivocab_uncased`](https://huggingface.co/allenai/scibert_scivocab_uncased)** fine-tuned on **[`SQuAD V2`](https://rajpurkar.github.io/SQuAD-explorer/)** using **[`run_squad.py`](https://github.com/huggingface/transformers/blob/master/examples/question-answering/run_squad.py)**
### Training Parameters
Trained on 4 NVIDIA GeForce RTX 2080 Ti 11Gb
```bash
BASE_MODEL=allenai/scibert_scivocab_uncased
python run_squad.py \
--version_2_with_negative \
--model_type albert \
--model_name_or_path $BASE_MODEL \
--output_dir $OUTPUT_MODEL \
--do_eval \
--do_lower_case \
--train_file $SQUAD_DIR/train-v2.0.json \
--predict_file $SQUAD_DIR/dev-v2.0.json \
--per_gpu_train_batch_size 18 \
--per_gpu_eval_batch_size 64 \
--learning_rate 3e-5 \
--num_train_epochs 3.0 \
--max_seq_length 384 \
--doc_stride 128 \
--save_steps 2000 \
--threads 24 \
--warmup_steps 550 \
--gradient_accumulation_steps 1 \
--fp16 \
--logging_steps 50 \
--do_train
```
### Evaluation
Evaluation on the dev set. I did not sweep for best threshold.
| | val |
|-------------------|-------------------|
| exact | 75.07790785816559 |
| f1 | 78.47735207283013 |
| total | 11873.0 |
| HasAns_exact | 70.76585695006747 |
| HasAns_f1 | 77.57449412292718 |
| HasAns_total | 5928.0 |
| NoAns_exact | 79.37762825904122 |
| NoAns_f1 | 79.37762825904122 |
| NoAns_total | 5945.0 |
| best_exact | 75.08633032931863 |
| best_exact_thresh | 0.0 |
| best_f1 | 78.48577454398324 |
| best_f1_thresh | 0.0 |
### Usage
See [huggingface documentation](https://huggingface.co/transformers/model_doc/bert.html#bertforquestionanswering). Training on `SQuAD V2` allows the model to score if a paragraph contains an answer:
```python
start_scores, end_scores = model(input_ids)
span_scores = start_scores.softmax(dim=1).log()[:,:,None] + end_scores.softmax(dim=1).log()[:,None,:]
ignore_score = span_scores[:,0,0] #no answer scores
```
|
CohleM/mbert-nepali-tokenizer | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language: te
---
# telugu_bertu
## Model description
This model is a BERT MLM model trained on Telugu. Please use it from the terminal as the web interface has encoding issues.
PS: If you find my model useful, I would appreciate a note from you, as it would encourage me to keep improving it and to add new models. Please also cite my model if you use it in your pipeline.
## Intended uses & limitations
#### How to use
```python
from transformers import AutoModelWithLMHead, AutoTokenizer, pipeline
tokenizer = AutoTokenizer.from_pretrained("kuppuluri/telugu_bertu",
clean_text=False,
handle_chinese_chars=False,
strip_accents=False,
wordpieces_prefix='##')
model = AutoModelWithLMHead.from_pretrained("kuppuluri/telugu_bertu")
fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
results = fill_mask("మక్దూంపల్లి పేరుతో చాలా [MASK] ఉన్నాయి.")
```
|
Coldestadam/Breakout_Mentors_SpongeBob_Model | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 10 | null | # Named Entity Recognition Model for Telugu
#### How to use
Use the script below from your Python terminal, as the web inference interface has a few encoding issues with Telugu.
PS: If you find my model useful, I would appreciate a note from you as it would encourage me to continue improving it and also add new models.
```python
from simpletransformers.ner import NERModel
model = NERModel('bert',
'kuppuluri/telugu_bertu_ner',
labels=[
'B-PERSON', 'I-ORG', 'B-ORG', 'I-LOC', 'B-MISC',
'I-MISC', 'I-PERSON', 'B-LOC', 'O'
],
use_cuda=False,
args={"use_multiprocessing": False})
text = "విరాట్ కోహ్లీ కూడా అదే నిర్లక్ష్యాన్ని ప్రదర్శించి కేవలం ఒక పరుగుకే రనౌటై పెవిలియన్ చేరాడు ."
results = model.predict([text])
```
## Training data
Training data is from https://github.com/anikethjr/NER_Telugu
## Eval results
On the test set my results were:
- eval_loss = 0.0004407190410447974
- f1_score = 0.999519076627124
- precision = 0.9994389677005691
- recall = 0.9995991983967936
|
ComCom/gpt2-large | [
"pytorch",
"gpt2",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"GPT2Model"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1 | null | # Part of Speech tagging Model for Telugu
#### How to use
Use the script below from your Python terminal, as the web inference interface has a few encoding issues with Telugu.
PS: If you find my model useful, I would appreciate a note from you as it would encourage me to continue improving it and also add new models.
```python
from simpletransformers.ner import NERModel
model = NERModel('bert',
'kuppuluri/telugu_bertu_pos',
args={"use_multiprocessing": False},
labels=[
'QC', 'JJ', 'NN', 'QF', 'RDP', 'O',
'NNO', 'PRP', 'RP', 'VM', 'WQ',
'PSP', 'UT', 'CC', 'INTF', 'SYMP',
'NNP', 'INJ', 'SYM', 'CL', 'QO',
'DEM', 'RB', 'NST', ],
use_cuda=False)
text = "విరాట్ కోహ్లీ కూడా అదే నిర్లక్ష్యాన్ని ప్రదర్శించి కేవలం ఒక పరుగుకే రనౌటై పెవిలియన్ చేరాడు ."
results = model.predict([text])
```
## Training data
Training data is from https://github.com/anikethjr/NER_Telugu
## Eval results
On the test set my results were:
- eval_loss = 0.0036797842364565416
- f1_score = 0.9983795127912227
- precision = 0.9984325602401637
- recall = 0.9983264709788816
|
ComCom/gpt2-medium | [
"pytorch",
"gpt2",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"GPT2Model"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | 2020-10-30T02:36:27Z | # Telugu Question-Answering model trained on Tydiqa dataset from Google
#### How to use
Use the script below from your Python terminal, as the web inference interface has a few encoding issues with Telugu.
```python
from transformers import pipeline, AutoModelForQuestionAnswering, AutoTokenizer

# The card's tokenizer uses this checkpoint; the model weights are assumed to live at the same id.
model_name = "kuppuluri/telugu_bertu_tydiqa"
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name,
                                          clean_text=False,
                                          handle_chinese_chars=False,
                                          strip_accents=False,
                                          wordpieces_prefix='##')
nlp = pipeline('question-answering', model=model, tokenizer=tokenizer)
# Placeholders: replace with your own Telugu question and context strings.
question = "YOUR TELUGU QUESTION HERE"
context = "YOUR TELUGU CONTEXT PARAGRAPH HERE"
result = nlp({'question': question, 'context': context})
```
## Training data
I used Tydiqa Telugu data from Google https://github.com/google-research-datasets/tydiqa
PS: If you find my model useful, I would appreciate a note from you as it would encourage me to continue improving it and also add new models.
|
ComCom/gpt2 | [
"pytorch",
"gpt2",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"GPT2Model"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9304777594728171
- name: Recall
type: recall
value: 0.9505217098619994
- name: F1
type: f1
value: 0.9403929403929404
- name: Accuracy
type: accuracy
value: 0.9861070230176017
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0611
- Precision: 0.9305
- Recall: 0.9505
- F1: 0.9404
- Accuracy: 0.9861
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
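These values map directly onto `TrainingArguments`; a minimal sketch (model and data loading omitted; the output directory name is only illustrative):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bert-finetuned-ner",  # illustrative
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```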
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0869 | 1.0 | 1756 | 0.0680 | 0.9174 | 0.9342 | 0.9257 | 0.9827 |
| 0.0334 | 2.0 | 3512 | 0.0620 | 0.9305 | 0.9470 | 0.9387 | 0.9853 |
| 0.0233 | 3.0 | 5268 | 0.0611 | 0.9305 | 0.9505 | 0.9404 | 0.9861 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
ComCom-Dev/gpt2-bible-test | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.923
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3073
- Accuracy: 0.923
## Model description
More information needed
## Intended uses & limitations
More information needed
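Pending further documentation, a minimal inference sketch (the model path below is a placeholder for wherever this fine-tuned checkpoint is saved or hosted):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="./distilbert-base-uncased-finetuned-imdb")  # placeholder path
print(classifier("This movie was a pleasant surprise from start to finish."))
```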
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2744 | 1.0 | 1563 | 0.2049 | 0.921 |
| 0.1572 | 2.0 | 3126 | 0.2308 | 0.923 |
| 0.0917 | 3.0 | 4689 | 0.3073 | 0.923 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
Cometasonmi451/Mine | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-sst-2-english-finetuned-imdb
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.93032
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-sst-2-english-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2165
- Accuracy: 0.9303
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2749 | 1.0 | 3125 | 0.2165 | 0.9303 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
Connor/DialoGPT-small-rick | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | This model can predict which categories a specific competitive problem falls into |
Contrastive-Tension/BERT-Base-CT-STSb | [
"pytorch",
"tf",
"jax",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | 2021-09-03T21:02:06Z | ---
tags:
- conversational
---
# Rick DialoGPT Model |
Contrastive-Tension/BERT-Base-CT | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 16 | 2021-12-21T04:38:46Z | https://huggingface.co/blog/fine-tune-wav2vec2-english
Use the processor from https://huggingface.co/facebook/wav2vec2-base |
Contrastive-Tension/BERT-Distil-CT-STSb | [
"pytorch",
"tf",
"distilbert",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"DistilBertModel"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1 | 2021-09-04T13:34:45Z | # kwang2049/TSDAE-askubuntu
This is a model from the paper ["TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning"](https://arxiv.org/abs/2104.06979). This model was only trained with the TSDAE objective on AskUbuntu in an unsupervised manner. Training procedure of this model:
1. Initialized with [bert-base-uncased](https://huggingface.co/bert-base-uncased);
2. Unsupervised training on AskUbuntu with the TSDAE objective;
The pooling method is CLS-pooling.
## Usage
To use this model, a convenient way is through [SentenceTransformers](https://github.com/UKPLab/sentence-transformers). Please install it via:
```bash
pip install sentence-transformers
```
And then load the model and use it to encode sentences:
```python
from sentence_transformers import SentenceTransformer, models
dataset = 'askubuntu'
model_name_or_path = f'kwang2049/TSDAE-{dataset}'
model = SentenceTransformer(model_name_or_path)
model[1] = models.Pooling(model[0].get_word_embedding_dimension(), pooling_mode='cls') # Note this model uses CLS-pooling
sentence_embeddings = model.encode(['This is the first sentence.', 'This is the second one.'])
```
## Evaluation
To evaluate the model against the datasets used in the paper, please install our evaluation toolkit [USEB](https://github.com/UKPLab/useb):
```bash
pip install useb # Or git clone and pip install .
python -m useb.downloading all # Download both training and evaluation data
```
And then do the evaluation:
```python
from sentence_transformers import SentenceTransformer, models
import torch
from useb import run_on
dataset = 'askubuntu'
model_name_or_path = f'kwang2049/TSDAE-{dataset}'
model = SentenceTransformer(model_name_or_path)
model[1] = models.Pooling(model[0].get_word_embedding_dimension(), pooling_mode='cls') # Note this model uses CLS-pooling
@torch.no_grad()
def semb_fn(sentences) -> torch.Tensor:
return torch.Tensor(model.encode(sentences, show_progress_bar=False))
result = run_on(
dataset,
semb_fn=semb_fn,
eval_type='test',
data_eval_path='data-eval'
)
```
## Training
Please refer to [the page of TSDAE training](https://github.com/UKPLab/sentence-transformers/tree/master/examples/unsupervised_learning/TSDAE) in SentenceTransformers.
## Cite & Authors
If you use the code for evaluation, feel free to cite our publication [TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning](https://arxiv.org/abs/2104.06979):
```bibtex
@article{wang-2021-TSDAE,
title = "TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning",
author = "Wang, Kexin and Reimers, Nils and Gurevych, Iryna",
journal= "arXiv preprint arXiv:2104.06979",
month = "4",
year = "2021",
url = "https://arxiv.org/abs/2104.06979",
}
``` |
Contrastive-Tension/BERT-Distil-CT | [
"pytorch",
"tf",
"distilbert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"DistilBertForMaskedLM"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | 2021-10-25T13:20:32Z | # kwang2049/TSDAE-askubuntu2nli_stsb
This is a model from the paper ["TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning"](https://arxiv.org/abs/2104.06979). This model adapts the knowledge from the NLI and STSb data to the specific domain AskUbuntu. Training procedure of this model:
1. Initialized with [bert-base-uncased](https://huggingface.co/bert-base-uncased);
2. Unsupervised training on AskUbuntu with the TSDAE objective;
3. Supervised training on the NLI data with cross-entropy loss;
4. Supervised training on the STSb data with MSE loss.
The pooling method is CLS-pooling.
## Usage
To use this model, a convenient way is through [SentenceTransformers](https://github.com/UKPLab/sentence-transformers). Please install it via:
```bash
pip install sentence-transformers
```
And then load the model and use it to encode sentences:
```python
from sentence_transformers import SentenceTransformer, models
dataset = 'askubuntu'
model_name_or_path = f'kwang2049/TSDAE-{dataset}2nli_stsb'
model = SentenceTransformer(model_name_or_path)
model[1] = models.Pooling(model[0].get_word_embedding_dimension(), pooling_mode='cls') # Note this model uses CLS-pooling
sentence_embeddings = model.encode(['This is the first sentence.', 'This is the second one.'])
```
## Evaluation
To evaluate the model against the datasets used in the paper, please install our evaluation toolkit [USEB](https://github.com/UKPLab/useb):
```bash
pip install useb # Or git clone and pip install .
python -m useb.downloading all # Download both training and evaluation data
```
And then do the evaluation:
```python
from sentence_transformers import SentenceTransformer, models
import torch
from useb import run_on
dataset = 'askubuntu'
model_name_or_path = f'kwang2049/TSDAE-{dataset}2nli_stsb'
model = SentenceTransformer(model_name_or_path)
model[1] = models.Pooling(model[0].get_word_embedding_dimension(), pooling_mode='cls') # Note this model uses CLS-pooling
@torch.no_grad()
def semb_fn(sentences) -> torch.Tensor:
return torch.Tensor(model.encode(sentences, show_progress_bar=False))
result = run_on(
dataset,
semb_fn=semb_fn,
eval_type='test',
data_eval_path='data-eval'
)
```
## Training
Please refer to [the page of TSDAE training](https://github.com/UKPLab/sentence-transformers/tree/master/examples/unsupervised_learning/TSDAE) in SentenceTransformers.
## Cite & Authors
If you use the code for evaluation, feel free to cite our publication [TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning](https://arxiv.org/abs/2104.06979):
```bibtex
@article{wang-2021-TSDAE,
title = "TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning",
author = "Wang, Kexin and Reimers, Nils and Gurevych, Iryna",
journal= "arXiv preprint arXiv:2104.06979",
month = "4",
year = "2021",
url = "https://arxiv.org/abs/2104.06979",
}
``` |
Contrastive-Tension/BERT-Distil-NLI-CT | [
"pytorch",
"tf",
"distilbert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"DistilBertForMaskedLM"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | 2021-09-04T13:35:48Z | # kwang2049/TSDAE-cqadupstack2nli_stsb
This is a model from the paper ["TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning"](https://arxiv.org/abs/2104.06979). This model was only trained with the TSDAE objective on cqadupstack in an unsupervised manner. Training procedure of this model:
1. Initialized with [bert-base-uncased](https://huggingface.co/bert-base-uncased);
2. Unsupervised training on cqadupstack with the TSDAE objective;
The pooling method is CLS-pooling.
## Usage
To use this model, a convenient way is through [SentenceTransformers](https://github.com/UKPLab/sentence-transformers). Please install it via:
```bash
pip install sentence-transformers
```
And then load the model and use it to encode sentences:
```python
from sentence_transformers import SentenceTransformer, models
dataset = 'cqadupstack'
model_name_or_path = f'kwang2049/TSDAE-{dataset}'
model = SentenceTransformer(model_name_or_path)
model[1] = models.Pooling(model[0].get_word_embedding_dimension(), pooling_mode='cls') # Note this model uses CLS-pooling
sentence_embeddings = model.encode(['This is the first sentence.', 'This is the second one.'])
```
## Evaluation
To evaluate the model against the datasets used in the paper, please install our evaluation toolkit [USEB](https://github.com/UKPLab/useb):
```bash
pip install useb # Or git clone and pip install .
python -m useb.downloading all # Download both training and evaluation data
```
And then do the evaluation:
```python
from sentence_transformers import SentenceTransformer, models
import torch
from useb import run_on
dataset = 'cqadupstack'
model_name_or_path = f'kwang2049/TSDAE-{dataset}'
model = SentenceTransformer(model_name_or_path)
model[1] = models.Pooling(model[0].get_word_embedding_dimension(), pooling_mode='cls') # Note this model uses CLS-pooling
@torch.no_grad()
def semb_fn(sentences) -> torch.Tensor:
return torch.Tensor(model.encode(sentences, show_progress_bar=False))
result = run_on(
dataset,
semb_fn=semb_fn,
eval_type='test',
data_eval_path='data-eval'
)
```
## Training
Please refer to [the page of TSDAE training](https://github.com/UKPLab/sentence-transformers/tree/master/examples/unsupervised_learning/TSDAE) in SentenceTransformers.
## Cite & Authors
If you use the code for evaluation, feel free to cite our publication [TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning](https://arxiv.org/abs/2104.06979):
```bibtex
@article{wang-2021-TSDAE,
title = "TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning",
author = "Wang, Kexin and Reimers, Nils and Gurevych, Iryna",
journal= "arXiv preprint arXiv:2104.06979",
month = "4",
year = "2021",
url = "https://arxiv.org/abs/2104.06979",
}
``` |
Contrastive-Tension/BERT-Large-CT-STSb | [
"pytorch",
"tf",
"jax",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | 2021-10-25T13:28:34Z | # kwang2049/TSDAE-cqadupstack2nli_stsb
This is a model from the paper ["TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning"](https://arxiv.org/abs/2104.06979). This model adapts the knowledge from the NLI and STSb data to the specific domain cqadupstack. Training procedure of this model:
1. Initialized with [bert-base-uncased](https://huggingface.co/bert-base-uncased);
2. Unsupervised training on cqadupstack with the TSDAE objective;
3. Supervised training on the NLI data with cross-entropy loss;
4. Supervised training on the STSb data with MSE loss.
The pooling method is CLS-pooling.
## Usage
To use this model, a convenient way is through [SentenceTransformers](https://github.com/UKPLab/sentence-transformers). Please install it via:
```bash
pip install sentence-transformers
```
And then load the model and use it to encode sentences:
```python
from sentence_transformers import SentenceTransformer, models
dataset = 'cqadupstack'
model_name_or_path = f'kwang2049/TSDAE-{dataset}2nli_stsb'
model = SentenceTransformer(model_name_or_path)
model[1] = models.Pooling(model[0].get_word_embedding_dimension(), pooling_mode='cls') # Note this model uses CLS-pooling
sentence_embeddings = model.encode(['This is the first sentence.', 'This is the second one.'])
```
## Evaluation
To evaluate the model against the datasets used in the paper, please install our evaluation toolkit [USEB](https://github.com/UKPLab/useb):
```bash
pip install useb # Or git clone and pip install .
python -m useb.downloading all # Download both training and evaluation data
```
And then do the evaluation:
```python
from sentence_transformers import SentenceTransformer, models
import torch
from useb import run_on
dataset = 'cqadupstack'
model_name_or_path = f'kwang2049/TSDAE-{dataset}2nli_stsb'
model = SentenceTransformer(model_name_or_path)
model[1] = models.Pooling(model[0].get_word_embedding_dimension(), pooling_mode='cls') # Note this model uses CLS-pooling
@torch.no_grad()
def semb_fn(sentences) -> torch.Tensor:
return torch.Tensor(model.encode(sentences, show_progress_bar=False))
result = run_on(
dataset,
semb_fn=semb_fn,
eval_type='test',
data_eval_path='data-eval'
)
```
## Training
Please refer to [the page of TSDAE training](https://github.com/UKPLab/sentence-transformers/tree/master/examples/unsupervised_learning/TSDAE) in SentenceTransformers.
## Cite & Authors
If you use the code for evaluation, feel free to cite our publication [TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning](https://arxiv.org/abs/2104.06979):
```bibtex
@article{wang-2021-TSDAE,
title = "TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning",
author = "Wang, Kexin and Reimers, Nils and Gurevych, Iryna",
journal= "arXiv preprint arXiv:2104.06979",
month = "4",
year = "2021",
url = "https://arxiv.org/abs/2104.06979",
}
``` |
Contrastive-Tension/BERT-Large-CT | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | 2021-09-04T13:37:35Z | # kwang2049/TSDAE-scidocs
This is a model from the paper ["TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning"](https://arxiv.org/abs/2104.06979). This model was only trained with the TSDAE objective on scidocs in an unsupervised manner. Training procedure of this model:
1. Initialized with [bert-base-uncased](https://huggingface.co/bert-base-uncased);
2. Unsupervised training on scidocs with the TSDAE objective;
The pooling method is CLS-pooling.
## Usage
To use this model, a convenient way is through [SentenceTransformers](https://github.com/UKPLab/sentence-transformers). Please install it via:
```bash
pip install sentence-transformers
```
And then load the model and use it to encode sentences:
```python
from sentence_transformers import SentenceTransformer, models
dataset = 'scidocs'
model_name_or_path = f'kwang2049/TSDAE-{dataset}'
model = SentenceTransformer(model_name_or_path)
model[1] = models.Pooling(model[0].get_word_embedding_dimension(), pooling_mode='cls') # Note this model uses CLS-pooling
sentence_embeddings = model.encode(['This is the first sentence.', 'This is the second one.'])
```
## Evaluation
To evaluate the model against the datasets used in the paper, please install our evaluation toolkit [USEB](https://github.com/UKPLab/useb):
```bash
pip install useb # Or git clone and pip install .
python -m useb.downloading all # Download both training and evaluation data
```
And then do the evaluation:
```python
from sentence_transformers import SentenceTransformer, models
import torch
from useb import run_on
dataset = 'scidocs'
model_name_or_path = f'kwang2049/TSDAE-{dataset}'
model = SentenceTransformer(model_name_or_path)
model[1] = models.Pooling(model[0].get_word_embedding_dimension(), pooling_mode='cls') # Note this model uses CLS-pooling
@torch.no_grad()
def semb_fn(sentences) -> torch.Tensor:
return torch.Tensor(model.encode(sentences, show_progress_bar=False))
result = run_on(
dataset,
semb_fn=semb_fn,
eval_type='test',
data_eval_path='data-eval'
)
```
## Training
Please refer to [the page of TSDAE training](https://github.com/UKPLab/sentence-transformers/tree/master/examples/unsupervised_learning/TSDAE) in SentenceTransformers.
## Cite & Authors
If you use the code for evaluation, feel free to cite our publication [TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning](https://arxiv.org/abs/2104.06979):
```bibtex
@article{wang-2021-TSDAE,
title = "TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning",
author = "Wang, Kexin and Reimers, Nils and Gurevych, Iryna",
journal= "arXiv preprint arXiv:2104.06979",
month = "4",
year = "2021",
url = "https://arxiv.org/abs/2104.06979",
}
``` |
Contrastive-Tension/BERT-Large-NLI-CT | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 15 | null | # kwang2049/TSDAE-scidocs2nli_stsb
This is a model from the paper ["TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning"](https://arxiv.org/abs/2104.06979). This model adapts the knowledge from the NLI and STSb data to the specific domain scidocs. Training procedure of this model:
1. Initialized with [bert-base-uncased](https://huggingface.co/bert-base-uncased);
2. Unsupervised training on scidocs with the TSDAE objective;
3. Supervised training on the NLI data with cross-entropy loss;
4. Supervised training on the STSb data with MSE loss.
The pooling method is CLS-pooling.
## Usage
To use this model, a convenient way is through [SentenceTransformers](https://github.com/UKPLab/sentence-transformers). Please install it via:
```bash
pip install sentence-transformers
```
And then load the model and use it to encode sentences:
```python
from sentence_transformers import SentenceTransformer, models
dataset = 'scidocs'
model_name_or_path = f'kwang2049/TSDAE-{dataset}2nli_stsb'
model = SentenceTransformer(model_name_or_path)
model[1] = models.Pooling(model[0].get_word_embedding_dimension(), pooling_mode='cls') # Note this model uses CLS-pooling
sentence_embeddings = model.encode(['This is the first sentence.', 'This is the second one.'])
```
## Evaluation
To evaluate the model against the datasets used in the paper, please install our evaluation toolkit [USEB](https://github.com/UKPLab/useb):
```bash
pip install useb # Or git clone and pip install .
python -m useb.downloading all # Download both training and evaluation data
```
And then do the evaluation:
```python
from sentence_transformers import SentenceTransformer, models
import torch
from useb import run_on
dataset = 'scidocs'
model_name_or_path = f'kwang2049/TSDAE-{dataset}2nli_stsb'
model = SentenceTransformer(model_name_or_path)
model[1] = models.Pooling(model[0].get_word_embedding_dimension(), pooling_mode='cls') # Note this model uses CLS-pooling
@torch.no_grad()
def semb_fn(sentences) -> torch.Tensor:
return torch.Tensor(model.encode(sentences, show_progress_bar=False))
result = run_on(
dataset,
semb_fn=semb_fn,
eval_type='test',
data_eval_path='data-eval'
)
```
## Training
Please refer to [the page of TSDAE training](https://github.com/UKPLab/sentence-transformers/tree/master/examples/unsupervised_learning/TSDAE) in SentenceTransformers.
## Cite & Authors
If you use the code for evaluation, feel free to cite our publication [TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning](https://arxiv.org/abs/2104.06979):
```bibtex
@article{wang-2021-TSDAE,
title = "TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning",
author = "Wang, Kexin and Reimers, Nils and Gurevych, Iryna",
journal= "arXiv preprint arXiv:2104.06979",
month = "4",
year = "2021",
url = "https://arxiv.org/abs/2104.06979",
}
``` |
Contrastive-Tension/RoBerta-Large-CT-STSb | [
"pytorch",
"tf",
"jax",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | # kwang2049/TSDAE-twitterpara
This is a model from the paper ["TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning"](https://arxiv.org/abs/2104.06979). This model was only trained with the TSDAE objective on twitterpara in an unsupervised manner. Training procedure of this model:
1. Initialized with [bert-base-uncased](https://huggingface.co/bert-base-uncased);
2. Unsupervised training on twitterpara with the TSDAE objective;
The pooling method is CLS-pooling.
## Usage
To use this model, a convenient way is through [SentenceTransformers](https://github.com/UKPLab/sentence-transformers). Please install it via:
```bash
pip install sentence-transformers
```
And then load the model and use it to encode sentences:
```python
from sentence_transformers import SentenceTransformer, models
dataset = 'twitterpara'
model_name_or_path = f'kwang2049/TSDAE-{dataset}'
model = SentenceTransformer(model_name_or_path)
model[1] = models.Pooling(model[0].get_word_embedding_dimension(), pooling_mode='cls') # Note this model uses CLS-pooling
sentence_embeddings = model.encode(['This is the first sentence.', 'This is the second one.'])
```
## Evaluation
To evaluate the model against the datasets used in the paper, please install our evaluation toolkit [USEB](https://github.com/UKPLab/useb):
```bash
pip install useb # Or git clone and pip install .
python -m useb.downloading all # Download both training and evaluation data
```
And then do the evaluation:
```python
from sentence_transformers import SentenceTransformer, models
import torch
from useb import run_on
dataset = 'twitterpara'
model_name_or_path = f'kwang2049/TSDAE-{dataset}'
model = SentenceTransformer(model_name_or_path)
model[1] = models.Pooling(model[0].get_word_embedding_dimension(), pooling_mode='cls') # Note this model uses CLS-pooling
@torch.no_grad()
def semb_fn(sentences) -> torch.Tensor:
return torch.Tensor(model.encode(sentences, show_progress_bar=False))
result = run_on(
dataset,
semb_fn=semb_fn,
eval_type='test',
data_eval_path='data-eval'
)
```
## Training
Please refer to [the page of TSDAE training](https://github.com/UKPLab/sentence-transformers/tree/master/examples/unsupervised_learning/TSDAE) in SentenceTransformers.
## Cite & Authors
If you use the code for evaluation, feel free to cite our publication [TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning](https://arxiv.org/abs/2104.06979):
```bibtex
@article{wang-2021-TSDAE,
title = "TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning",
author = "Wang, Kexin and Reimers, Nils and Gurevych, Iryna",
journal= "arXiv preprint arXiv:2104.06979",
month = "4",
year = "2021",
url = "https://arxiv.org/abs/2104.06979",
}
``` |
Cooker/cicero-similis | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | # kwang2049/TSDAE-twitterpara2nli_stsb
This is a model from the paper ["TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning"](https://arxiv.org/abs/2104.06979). This model adapts the knowledge from the NLI and STSb data to the specific domain twitterpara. Training procedure of this model:
1. Initialized with [bert-base-uncased](https://huggingface.co/bert-base-uncased);
2. Unsupervised training on twitterpara with the TSDAE objective;
3. Supervised training on the NLI data with cross-entropy loss;
4. Supervised training on the STSb data with MSE loss.
The pooling method is CLS-pooling.
## Usage
To use this model, a convenient way is through [SentenceTransformers](https://github.com/UKPLab/sentence-transformers). Please install it via:
```bash
pip install sentence-transformers
```
And then load the model and use it to encode sentences:
```python
from sentence_transformers import SentenceTransformer, models
dataset = 'twitterpara'
model_name_or_path = f'kwang2049/TSDAE-{dataset}2nli_stsb'
model = SentenceTransformer(model_name_or_path)
model[1] = models.Pooling(model[0].get_word_embedding_dimension(), pooling_mode='cls') # Note this model uses CLS-pooling
sentence_embeddings = model.encode(['This is the first sentence.', 'This is the second one.'])
```
## Evaluation
To evaluate the model against the datasets used in the paper, please install our evaluation toolkit [USEB](https://github.com/UKPLab/useb):
```bash
pip install useb # Or git clone and pip install .
python -m useb.downloading all # Download both training and evaluation data
```
And then do the evaluation:
```python
from sentence_transformers import SentenceTransformer, models
import torch
from useb import run_on
dataset = 'twitterpara'
model_name_or_path = f'kwang2049/TSDAE-{dataset}2nli_stsb'
model = SentenceTransformer(model_name_or_path)
model[1] = models.Pooling(model[0].get_word_embedding_dimension(), pooling_mode='cls') # Note this model uses CLS-pooling
@torch.no_grad()
def semb_fn(sentences) -> torch.Tensor:
return torch.Tensor(model.encode(sentences, show_progress_bar=False))
result = run_on(
dataset,
semb_fn=semb_fn,
eval_type='test',
data_eval_path='data-eval'
)
```
## Training
Please refer to [the page of TSDAE training](https://github.com/UKPLab/sentence-transformers/tree/master/examples/unsupervised_learning/TSDAE) in SentenceTransformers.
## Cite & Authors
If you use the code for evaluation, feel free to cite our publication [TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning](https://arxiv.org/abs/2104.06979):
```bibtex
@article{wang-2021-TSDAE,
title = "TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning",
author = "Wang, Kexin and Reimers, Nils and Gurevych, Iryna",
journal= "arXiv preprint arXiv:2104.06979",
month = "4",
year = "2021",
url = "https://arxiv.org/abs/2104.06979",
}
``` |
Coolhand/Sentiment | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language: ko
---
# Albert base model for Korean
* 70GB Korean text dataset and 42000 lower-cased subwords are used
* Check the model performance and other language models for Korean in [github](https://github.com/kiyoungkim1/LM-kor)
```python
from transformers import BertTokenizerFast, AlbertModel
tokenizer_albert = BertTokenizerFast.from_pretrained("kykim/albert-kor-base")
model_albert = AlbertModel.from_pretrained("kykim/albert-kor-base")
``` |
CopymySkill/DialoGPT-medium-atakan | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
language: ko
---
# Bert base model for Korean
* 70GB Korean text dataset and 42000 lower-cased subwords are used
* Check the model performance and other language models for Korean in [github](https://github.com/kiyoungkim1/LM-kor)
```python
from transformers import BertTokenizerFast, BertModel
tokenizer_bert = BertTokenizerFast.from_pretrained("kykim/bert-kor-base")
model_bert = BertModel.from_pretrained("kykim/bert-kor-base")
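
# Illustrative encoding sketch (the input sentence is just an example, not from the card)
inputs = tokenizer_bert("한국어 문장입니다.", return_tensors="pt")
outputs = model_bert(**inputs)
print(outputs.last_hidden_state.shape)  # (1, sequence_length, hidden_size)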
``` |
Corvus/DialoGPT-medium-CaptainPrice-Extended | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
language: ko
---
# Bert base model for Korean
* 70GB Korean text dataset and 42000 lower-cased subwords are used
* Check the model performance and other language models for Korean in [github](https://github.com/kiyoungkim1/LM-kor)
```python
# only for pytorch in transformers
from transformers import BertTokenizerFast, EncoderDecoderModel
tokenizer = BertTokenizerFast.from_pretrained("kykim/bertshared-kor-base")
model = EncoderDecoderModel.from_pretrained("kykim/bertshared-kor-base")
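
# Illustrative generation sketch (not from the original card): bertshared is an encoder-decoder,
# so make sure the decoder start / pad token ids are set before calling generate()
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.pad_token_id = tokenizer.pad_token_id
inputs = tokenizer("요약할 한국어 문장입니다.", return_tensors="pt")
output_ids = model.generate(inputs.input_ids, max_length=64, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))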
``` |
Corvus/DialoGPT-medium-CaptainPrice | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
language: ko
---
# Electra base model for Korean
* 70GB Korean text dataset and 42000 lower-cased subwords are used
* Check the model performance and other language models for Korean in [github](https://github.com/kiyoungkim1/LM-kor)
```python
from transformers import ElectraTokenizerFast, ElectraModel
tokenizer_electra = ElectraTokenizerFast.from_pretrained("kykim/electra-kor-base")
model = ElectraModel.from_pretrained("kykim/electra-kor-base")
``` |
CouchCat/ma_mlc_v7_distil | [
"pytorch",
"distilbert",
"text-classification",
"en",
"transformers",
"multi-label",
"license:mit"
] | text-classification | {
"architectures": [
"DistilBertForSequenceClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 29 | null | ---
language: ko
---
# Funnel-transformer base model for Korean
* 70GB Korean text dataset and 42000 lower-cased subwords are used
* Check the model performance and other language models for Korean in [github](https://github.com/kiyoungkim1/LM-kor)
```python
from transformers import FunnelTokenizer, FunnelModel
tokenizer = FunnelTokenizer.from_pretrained("kykim/funnel-kor-base")
model = FunnelModel.from_pretrained("kykim/funnel-kor-base")
``` |
CouchCat/ma_ner_v6_distil | [
"pytorch",
"distilbert",
"token-classification",
"en",
"transformers",
"ner",
"license:mit",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"DistilBertForTokenClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | null | ---
language: ko
tags:
- text-generation
---
# GPT-3-style small model for Korean (based on GPT-2)
* 70GB Korean text dataset and 42000 lower-cased subwords are used
* Check the model performance and other language models for Korean in [github](https://github.com/kiyoungkim1/LM-kor)
```python
from transformers import BertTokenizerFast, GPT2LMHeadModel
tokenizer_gpt3 = BertTokenizerFast.from_pretrained("kykim/gpt3-kor-small_based_on_gpt2")
input_ids = tokenizer_gpt3.encode("text to tokenize")[1:] # remove cls token
model_gpt3 = GPT2LMHeadModel.from_pretrained("kykim/gpt3-kor-small_based_on_gpt2")
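
# Illustrative generation sketch (sampling settings are example values, not from the card)
gen_input_ids = tokenizer_gpt3.encode("오늘 날씨가", return_tensors="pt")[:, 1:]  # drop cls token as above
gen_output = model_gpt3.generate(gen_input_ids, max_length=32, do_sample=True, top_p=0.95)
print(tokenizer_gpt3.decode(gen_output[0]))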
``` |
Coverage/sakurajimamai | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4718
## Model description
More information needed
## Intended uses & limitations
More information needed
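Only a language-modelling loss is reported above, so this checkpoint is presumably a masked-language model; under that assumption, a minimal fill-mask sketch (the model path is a placeholder):

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="./distilbert-base-uncased-finetuned-imdb")  # placeholder path
print(fill_mask("This is a great [MASK]."))
```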
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.707 | 1.0 | 157 | 2.4883 |
| 2.572 | 2.0 | 314 | 2.4240 |
| 2.5377 | 3.0 | 471 | 2.4355 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Coyotl/DialoGPT-test3-arthurmorgan | [
"conversational"
] | conversational | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2021-03-22T08:12:31Z | ---
language: "ja"
widget:
- text: "吾輩をは猫である。を書いた作家は,夏目漱 <extra_id_0>"
- text: "吾輩をは猫である。名前えはまだない。"
- text: "translate japanese to english: 赤い花. => red flower. 青い花. => <extra_id_0>"
license: "mit"
---
Google's mt5-base fine-tuned on Japanese text to solve the error detection and correction task.
# Japanese error correction (日本語誤り訂正)
- "吾輩をは猫である。名前えはまだない。"→"吾輩は猫である。名前はまだない。"
- "-small" has been trained on 20,000 text pairs only.
- dataset: [link](http://nlp.ist.i.kyoto-u.ac.jp/?%E6%97%A5%E6%9C%AC%E8%AA%9EWikipedia%E5%85%A5%E5%8A%9B%E8%AA%A4%E3%82%8A%E3%83%87%E3%83%BC%E3%82%BF%E3%82%BB%E3%83%83%E3%83%88) *used only first 20,000 text pairs.
- prefix: "correction: " (note: trained on this single task only.)
- Please treat this as a casual text-to-text demo rather than a polished system.
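A minimal inference sketch using the single-task prefix above (the checkpoint id is a placeholder for wherever this model is hosted):

```python
from transformers import MT5ForConditionalGeneration, T5Tokenizer

model_id = "path-or-id-of-this-checkpoint"  # placeholder
tokenizer = T5Tokenizer.from_pretrained(model_id)
model = MT5ForConditionalGeneration.from_pretrained(model_id)

text = "correction: 吾輩をは猫である。名前えはまだない。"  # prefix + input sentence from the example above
inputs = tokenizer(text, return_tensors="pt")
output_ids = model.generate(**inputs, max_length=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```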
## Reference examples
- "東北大学でMASKが研究をしています。"→"東北大学でMASKの研究をしています。" ジム・キャリーを主語とした唯一のガ格が消され、ジム・キャリーは研究対象となった。易読化のために用いられる主語と動詞を近づける記法は誤り扱い?
- "東北大学でマスクが研究をしています。"→"東北大学でマスクの研究をしています。"
- "東北大学でイーロン・マスクが研究をしています。"→"東北大学でイーロン・マスクが研究をしています。"
- "東北大学で「イーロン・マスク」が研究をしています。"→"東北大学で「イーロン・マスク」の研究をしています。" 単語の意味も考慮されている?
- "東北大学でイマスクが研究をしています。"→"東北大学でイマスクの研究をしています。"
- "東北大学でクが研究をしています。"→"東北大学でコンピューターが研究をしています。" それはちょっと待って。
## Reference examples: exploring with extra_id (replace <> with half-width characters)
- "東北大学で <extra_id_0> の研究をしています。"→"東北大学で化学の研究をしています。"
- "東北大学で <extra_id_0> が研究をしています。"→"東北大学で工学が研究をしています。" 工学さん。
- "吾輩は <extra_id_0> である。"→"吾輩は吾輩である。"
- "答えは猫です。吾輩は <extra_id_0> である。"→"答えは猫です。吾輩は猫である。"
- "答えは猫です。吾輩の <extra_id_0> である。"→"答えは猫です。吾輩の心は猫である。"
- "私は猫です。私は <extra_id_0>"→"私は猫です。私は猫です。"
- "私は猫です。N/A <extra_id_0>"→"猫です。"
- "あなたは女性で猫です。彼は犬です。彼女は <extra_id_0>"→"あなたは女性で猫です。彼は犬です。彼女は猫です。"
- "あなたは女性で猫です。彼は犬です。彼は <extra_id_0>"→"あなたは女性で猫です。彼は犬です。"
- "あなたは女性で猫です。彼は犬です。彼は男性で <extra_id_0>"→"あなたは女性で猫です。彼は犬です。彼は男性で猫です。"
- "あなたは女性で猫です。彼は犬です。ライオンは <extra_id_0>"→"あなたは女性で猫です。彼は犬です。ライオンは猫です。"
- "あなたがは女性で猫です。彼はが犬です。ライオンが <extra_id_0>"→"あなたが女性で猫です。彼は犬です。ライオンが犬です。"
- "Aは11、Bは9。Aは <extra_id_0> 。Bは <extra_id_1> 。"→"Aは11、Bは9。Aは11。Bは9。"
- "彼の名前はallenです。彼のnameは <extra_id_0>"→"彼の名前はallenです。彼の名前は英語です。"
- "translate japanease to english: 赤い花. => red flower. 青い花. => <extra_id_0>"→"赤い花. => red flower. 青い花. => blue flower" タスク比依存翻訳可能性の片鱗.japaneseをjapaneaseと間違えたことは秘密だ・・・と言うか間違えても動くのか
## Prompting references
Chain of Thought Prompting Elicits Reasoning in Large Language Models
https://arxiv.org/abs/2201.11903
**check in progress**
## License
- The MIT license |
Craak/GJ0001 | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language: "ja"
widget:
- text: "請求項 <extra_id_0>"
license: "mit"
tags:
- Summarization
- japanese
---
Google's mt5-base fine-tuned on Japanese text to summarize patent claims in a limited pharmaceutical domain.
# Japanese patent claim summarization (limited to the pharmaceutical domain)
- """【請求項1】
ヒトCD38(配列番号1)及びカニクイザルCD38(配列番号2)に特異的に結合する単離された抗体であって、
a)以下を含む重鎖可変領域:
i)配列番号3を含む第1のCDR;
ii)配列番号4を含む第2のCDR;
iii)配列番号5を含む第3のCDR;及び
b)以下を含む軽鎖可変領域:
i)配列番号6を含む第1のCDR;
ii)配列番号7を含む第2のCDR;
iii)配列番号8を含む第3のCDR;
を含む、抗体。(請求項2~19省略)【請求項20】
前記自己免疫疾患が、関節リウマチ、全身性エリテマトーデス、炎症性腸疾患、潰瘍性大腸炎及び移植片対宿主病からなる群から選択される、請求項19記載の方法。
"""
- →"本発明は、ヒトCD38タンパク質(配列番号0)及びカニクイザルCD38(配列番号2)に特異的に結合する抗体に関する。本発明はまた、ヒトCD38タンパク質(配列番号0)及びカニクイザルCD38(配列番号2)に特異的に結合する抗体を、それを必要とする患者に投与することを含む、自己免疫疾患の治療方法に関する。"
- "-small" has been trained on 20,000 text pairs only.
- dataset: *
- prefix: "patent claim summarization: " (note: trained on this single task only.)
- Take this as a rough sense of what you get when a summarization model is built from 20,000 in-domain texts.
- Note: the hosted inference API only shows part of the summary. When using the model, we recommend running the "Use in Transformers" code in your own environment.
# Notes and references
- https://huggingface.co/blog/how-to-generate
- Preprocessing was not optimal; this will be fixed.
- Add prefixes so that the output can be shifted toward broader or narrower concepts on demand.
- Add prefixes so that summaries can follow an arbitrary theme.
- Even without extra prefixes, summaries can follow an arbitrary theme to some extent, e.g. by exploiting the structure of the claims or by correcting the generation with a model that judges whether the output follows the theme.
**check in progress**
## License
- The MIT license |
Craftified/Bob | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language: mr
tags:
- albert
license: cc-by-4.0
datasets:
- L3CubeMahaSent
widget:
- text: "I like you. </s></s> I love you."
---
## MarathiSentiment
MarathiSentiment is an IndicBERT (ai4bharat/indic-bert) model fine-tuned on L3CubeMahaSent - a Marathi tweet-based sentiment analysis dataset.
[dataset link](https://github.com/l3cube-pune/MarathiNLP)
More details on the dataset, models, and baseline results can be found in our [paper](http://arxiv.org/abs/2103.11408)
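A minimal classification sketch (the model id is a placeholder for wherever this checkpoint is hosted, and the input string is just a stand-in for a Marathi tweet):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="path-or-id-of-this-checkpoint")  # placeholder
print(classifier("Replace this with a Marathi tweet."))  # returns a label and a confidence score
```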
```
@inproceedings{kulkarni2021l3cubemahasent,
title={L3CubeMahaSent: A Marathi Tweet-based Sentiment Analysis Dataset},
author={Kulkarni, Atharva and Mandhane, Meet and Likhitkar, Manali and Kshirsagar, Gayatri and Joshi, Raviraj},
booktitle={Proceedings of the Eleventh Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis},
pages={213--220},
year={2021}
}
``` |
Craig/mGqFiPhu | [
"sentence-transformers",
"feature-extraction",
"sentence-similarity",
"transformers",
"license:apache-2.0"
] | feature-extraction | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language: mr
tags:
- albert
license: cc-by-4.0
datasets:
- HASOC 2021
widget:
- text: "I like you. </s></s> I love you."
---
## hate-bert-hasoc-marathi
hate-bert-hasoc-marathi is a binary hate speech model fine-tuned on the Marathi HASOC Hate Speech Dataset 2021.
The label mappings are 0 -> None, 1 -> Hate.
More details on the dataset, models, and baseline results can be found in our [paper](https://arxiv.org/abs/2110.12200)
A new version of Marathi Hate Speech Detection models can be found here: <br>
binary: https://huggingface.co/l3cube-pune/mahahate-bert <br>
multi label: https://huggingface.co/l3cube-pune/mahahate-multi-roberta <br>
```
@article{velankar2021hate,
title={Hate and Offensive Speech Detection in Hindi and Marathi},
author={Velankar, Abhishek and Patil, Hrushikesh and Gore, Amol and Salunke, Shubham and Joshi, Raviraj},
journal={arXiv preprint arXiv:2110.12200},
year={2021}
}
``` |
Craig/paraphrase-MiniLM-L6-v2 | [
"pytorch",
"bert",
"arxiv:1908.10084",
"sentence-transformers",
"feature-extraction",
"sentence-similarity",
"transformers",
"license:apache-2.0"
] | feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1,026 | null | ---
language: hi
tags:
- roberta
license: cc-by-4.0
datasets:
- HASOC 2021
widget:
- text: "I like you. </s></s> I love you."
---
## hate-roberta-hasoc-hindi
hate-roberta-hasoc-hindi is a multi-class hate speech model fine-tuned on the Hindi HASOC Hate Speech Dataset 2021.
The label mappings are 0 -> None, 1 -> Offensive, 2 -> Hate, 3 -> Profane.
More details on the dataset, models, and baseline results can be found in our [paper](https://arxiv.org/abs/2110.12200)
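A minimal sketch that applies the label mapping above to a prediction (the repository id is a placeholder; the explicit `id2label` dictionary is used in case the checkpoint does not ship one):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Placeholder id -- replace with the actual repository path of this checkpoint
model_id = "<this-repo-id>"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Label mapping as documented above
id2label = {0: "None", 1: "Offensive", 2: "Hate", 3: "Profane"}

inputs = tokenizer("यह एक उदाहरण वाक्य है", return_tensors="pt")  # Hindi example sentence
with torch.no_grad():
    logits = model(**inputs).logits
print(id2label[int(logits.argmax(dim=-1))])
```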
```
@article{velankar2021hate,
title={Hate and Offensive Speech Detection in Hindi and Marathi},
author={Velankar, Abhishek and Patil, Hrushikesh and Gore, Amol and Salunke, Shubham and Joshi, Raviraj},
journal={arXiv preprint arXiv:2110.12200},
year={2021}
}
``` |
Crasher222/kaggle-comp-test | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:Crasher222/autonlp-data-kaggle-test",
"transformers",
"autonlp",
"co2_eq_emissions"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 29 | 2021-10-20T17:10:14Z | ---
language: hi
tags:
- roberta
license: cc-by-4.0
datasets:
- HASOC 2021
widget:
- text: "I like you. </s></s> I love you."
---
## hate-roberta-hasoc-hindi
hate-roberta-hasoc-hindi is a binary hate speech model fine-tuned on the Hindi HASOC Hate Speech Dataset 2021.
The label mappings are 0 -> None, 1 -> Hate.
More details on the dataset, models, and baseline results can be found in our [paper](https://arxiv.org/abs/2110.12200)
```
@article{velankar2021hate,
title={Hate and Offensive Speech Detection in Hindi and Marathi},
author={Velankar, Abhishek and Patil, Hrushikesh and Gore, Amol and Salunke, Shubham and Joshi, Raviraj},
journal={arXiv preprint arXiv:2110.12200},
year={2021}
}
``` |
CrayonShinchan/bart_fine_tune_test | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: cc-by-4.0
language: mr
datasets:
- L3Cube-MahaCorpus
---
## MahaAlBERT
MahaAlBERT is a Marathi AlBERT model trained on L3Cube-MahaCorpus and other publicly available Marathi monolingual datasets.
[dataset link](https://github.com/l3cube-pune/MarathiNLP)
More details on the dataset, models, and baseline results can be found in our [paper](https://arxiv.org/abs/2202.01159)
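A minimal fill-mask sketch (the repository id is a placeholder for this model's actual path):
```python
from transformers import pipeline

# Placeholder id -- replace with the actual repository path of this checkpoint
fill_mask = pipeline("fill-mask", model="<this-repo-id>")

# Marathi sentence with the tokenizer's mask token where a word should be predicted
print(fill_mask(f"मी शाळेत {fill_mask.tokenizer.mask_token} आहे."))
```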
```
@InProceedings{joshi:2022:WILDRE6,
author = {Joshi, Raviraj},
title = {L3Cube-MahaCorpus and MahaBERT: Marathi Monolingual Corpus, Marathi BERT Language Models, and Resources},
booktitle = {Proceedings of The WILDRE-6 Workshop within the 13th Language Resources and Evaluation Conference},
month = {June},
year = {2022},
address = {Marseille, France},
publisher = {European Language Resources Association},
pages = {97--101}
}
```
|
Cthyllax/DialoGPT-medium-PaladinDanse | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 10 | null | Base model: [microsoft/DialoGPT-large](https://huggingface.co/microsoft/DialoGPT-large)
Fine-tuned for dialogue response generation on the [Persuasion For Good Dataset](https://gitlab.com/ucdavisnlp/persuasionforgood) (Wang et al., 2019)
Three additional special tokens were added during the fine-tuning process (see the usage sketch after this list):
- `<|pad|>` padding token
- `<|user|>` speaker control token to prompt user responses
- `<|system|>` speaker control token to prompt system responses
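A minimal generation sketch using these control tokens (the repository id is a placeholder, and the decoding settings are illustrative rather than the authors' recommended configuration):
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Placeholder id -- replace with the actual repository path of this checkpoint
model_id = "<this-repo-id>"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Prompt the system (persuader) side of the dialogue with the speaker control tokens
prompt = "<|user|>Hi, what is this charity about?<|system|>"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    output_ids = model.generate(
        input_ids,
        max_new_tokens=60,
        do_sample=True,
        top_p=0.9,
        pad_token_id=tokenizer.pad_token_id,
    )
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```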
The following dialogues were excluded:
- Those with donation amounts outside of the task range of [$0, $2].
- Those where a donation of 0 was made at the end of the task but a non-zero amount was pledged in the dialogue.
- Those with more than 800 words.
Stats:
- Training set: 519 dialogues
- Validation set: 58 dialogues
- ~20 utterances per dialogue |
Cyrell/Cyrell | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- espnet
- audio
- text-to-speech
language: ko
datasets:
- novelspeech
license: cc-by-4.0
---
## ESPnet2 TTS model
### `lakahaga/novel_reading_tts`
This model was trained by lakahaga using novelspeech recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout 9827dfe37f69e8e55f902dc4e340de5108596311
pip install -e .
cd egs2/novelspeech/tts1
./run.sh --skip_data_prep false --skip_train true --download_model lakahaga/novel_reading_tts
```
## TTS config
<details><summary>expand</summary>
```
config: conf/tuning/train_conformer_fastspeech2.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/tts_train_conformer_fastspeech2_raw_phn_tacotron_none
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: 4
dist_rank: 0
local_rank: 0
dist_master_addr: localhost
dist_master_port: 34177
dist_launcher: null
multiprocessing_distributed: true
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 1000
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- loss
- min
- - train
- loss
- min
keep_nbest_models: 5
nbest_averaging_interval: 0
grad_clip: 1.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 10
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: 1000
batch_size: 20
valid_batch_size: null
batch_bins: 25600000
valid_batch_bins: null
train_shape_file:
- exp/tts_train_raw_phn_tacotron_none/decode_use_teacher_forcingtrue_train.loss.best/stats//train/text_shape.phn
- exp/tts_train_raw_phn_tacotron_none/decode_use_teacher_forcingtrue_train.loss.best/stats//train/speech_shape
valid_shape_file:
- exp/tts_train_raw_phn_tacotron_none/decode_use_teacher_forcingtrue_train.loss.best/stats//valid/text_shape.phn
- exp/tts_train_raw_phn_tacotron_none/decode_use_teacher_forcingtrue_train.loss.best/stats//valid/speech_shape
batch_type: numel
valid_batch_type: null
fold_length:
- 150
- 204800
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/tr_no_dev/text
- text
- text
- - exp/tts_train_raw_phn_tacotron_none/decode_use_teacher_forcingtrue_train.loss.best/tr_no_dev/durations
- durations
- text_int
- - dump/raw/tr_no_dev/wav.scp
- speech
- sound
- - exp/tts_train_raw_phn_tacotron_none/decode_use_teacher_forcingtrue_train.loss.best/stats//train/collect_feats/pitch.scp
- pitch
- npy
- - exp/tts_train_raw_phn_tacotron_none/decode_use_teacher_forcingtrue_train.loss.best/stats//train/collect_feats/energy.scp
- energy
- npy
- - dump/raw/tr_no_dev/utt2sid
- sids
- text_int
valid_data_path_and_name_and_type:
- - dump/raw/dev/text
- text
- text
- - exp/tts_train_raw_phn_tacotron_none/decode_use_teacher_forcingtrue_train.loss.best/dev/durations
- durations
- text_int
- - dump/raw/dev/wav.scp
- speech
- sound
- - exp/tts_train_raw_phn_tacotron_none/decode_use_teacher_forcingtrue_train.loss.best/stats//valid/collect_feats/pitch.scp
- pitch
- npy
- - exp/tts_train_raw_phn_tacotron_none/decode_use_teacher_forcingtrue_train.loss.best/stats//valid/collect_feats/energy.scp
- energy
- npy
- - dump/raw/dev/utt2sid
- sids
- text_int
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 1.0
scheduler: noamlr
scheduler_conf:
model_size: 384
warmup_steps: 4000
token_list:
- <blank>
- <unk>
- '='
- _
- A
- Y
- N
- O
- E
- U
- L
- G
- S
- D
- M
- J
- H
- B
- ZERO
- TWO
- C
- .
- Q
- ','
- P
- T
- SEVEN
- X
- W
- THREE
- ONE
- NINE
- K
- EIGHT
- '@'
- '!'
- Z
- '?'
- F
- SIX
- FOUR
- '#'
- $
- +
- '%'
- FIVE
- '~'
- AND
- '*'
- '...'
- ''
- ^
- <sos/eos>
odim: null
model_conf: {}
use_preprocessor: true
token_type: phn
bpemodel: null
non_linguistic_symbols: null
cleaner: tacotron
g2p: null
feats_extract: fbank
feats_extract_conf:
n_fft: 1024
hop_length: 256
win_length: null
fs: 22050
fmin: 80
fmax: 7600
n_mels: 80
normalize: global_mvn
normalize_conf:
stats_file: exp/tts_train_raw_phn_tacotron_none/decode_use_teacher_forcingtrue_train.loss.best/stats//train/feats_stats.npz
tts: fastspeech2
tts_conf:
adim: 384
aheads: 2
elayers: 4
eunits: 1536
dlayers: 4
dunits: 1536
positionwise_layer_type: conv1d
positionwise_conv_kernel_size: 3
duration_predictor_layers: 2
duration_predictor_chans: 256
duration_predictor_kernel_size: 3
postnet_layers: 5
postnet_filts: 5
postnet_chans: 256
use_masking: true
encoder_normalize_before: true
decoder_normalize_before: true
reduction_factor: 1
encoder_type: conformer
decoder_type: conformer
conformer_pos_enc_layer_type: rel_pos
conformer_self_attn_layer_type: rel_selfattn
conformer_activation_type: swish
use_macaron_style_in_conformer: true
use_cnn_in_conformer: true
conformer_enc_kernel_size: 7
conformer_dec_kernel_size: 31
init_type: xavier_uniform
transformer_enc_dropout_rate: 0.2
transformer_enc_positional_dropout_rate: 0.2
transformer_enc_attn_dropout_rate: 0.2
transformer_dec_dropout_rate: 0.2
transformer_dec_positional_dropout_rate: 0.2
transformer_dec_attn_dropout_rate: 0.2
pitch_predictor_layers: 5
pitch_predictor_chans: 256
pitch_predictor_kernel_size: 5
pitch_predictor_dropout: 0.5
pitch_embed_kernel_size: 1
pitch_embed_dropout: 0.0
stop_gradient_from_pitch_predictor: true
energy_predictor_layers: 2
energy_predictor_chans: 256
energy_predictor_kernel_size: 3
energy_predictor_dropout: 0.5
energy_embed_kernel_size: 1
energy_embed_dropout: 0.0
stop_gradient_from_energy_predictor: false
pitch_extract: dio
pitch_extract_conf:
fs: 22050
n_fft: 1024
hop_length: 256
f0max: 400
f0min: 80
reduction_factor: 1
pitch_normalize: global_mvn
pitch_normalize_conf:
stats_file: exp/tts_train_raw_phn_tacotron_none/decode_use_teacher_forcingtrue_train.loss.best/stats//train/pitch_stats.npz
energy_extract: energy
energy_extract_conf:
fs: 22050
n_fft: 1024
hop_length: 256
win_length: null
reduction_factor: 1
energy_normalize: global_mvn
energy_normalize_conf:
stats_file: exp/tts_train_raw_phn_tacotron_none/decode_use_teacher_forcingtrue_train.loss.best/stats//train/energy_stats.npz
required:
- output_dir
- token_list
version: 0.10.5a1
distributed: true
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
Darkecho789/email-gen | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | # Supervised Continous Bag of words model trained with Uruguayan news from Twitter
Model trained with Facebook's fasttext library. |
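The sketch below assumes the exported fasttext binary is available locally as `model.bin` (a hypothetical filename; the actual artifact name is not stated here):
```python
import fasttext

# Hypothetical path to the supervised fasttext binary shipped with this repository
model = fasttext.load_model("model.bin")

# Predict the top-2 labels for a Spanish-language news headline
labels, probs = model.predict("El gobierno anunció nuevas medidas económicas", k=2)
print(labels, probs)
``` |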
DataikuNLP/paraphrase-albert-small-v2 | [
"pytorch",
"albert",
"arxiv:1908.10084",
"sentence-transformers",
"feature-extraction",
"sentence-similarity",
"transformers",
"license:apache-2.0"
] | sentence-similarity | {
"architectures": [
"AlbertModel"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 628 | null | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: market_positivity_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# market_positivity_model
This model is a fine-tuned version of [hfl/chinese-roberta-wwm-ext](https://huggingface.co/hfl/chinese-roberta-wwm-ext) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5776
- Train Sparse Categorical Accuracy: 0.7278
- Validation Loss: 0.6460
- Validation Sparse Categorical Accuracy: 0.6859
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 5e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Sparse Categorical Accuracy | Validation Loss | Validation Sparse Categorical Accuracy | Epoch |
|:----------:|:---------------------------------:|:---------------:|:--------------------------------------:|:-----:|
| 0.7207 | 0.6394 | 0.6930 | 0.6811 | 0 |
| 0.6253 | 0.7033 | 0.6549 | 0.6872 | 1 |
| 0.5776 | 0.7278 | 0.6460 | 0.6859 | 2 |
### Framework versions
- Transformers 4.16.2
- TensorFlow 2.8.0
- Datasets 1.18.3
- Tokenizers 0.11.0
|
Davlan/bert-base-multilingual-cased-finetuned-yoruba | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 21 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9210439378923027
- name: Recall
type: recall
value: 0.9356751314464705
- name: F1
type: f1
value: 0.9283018867924528
- name: Accuracy
type: accuracy
value: 0.983176322938345
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0611
- Precision: 0.9210
- Recall: 0.9357
- F1: 0.9283
- Accuracy: 0.9832
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2341 | 1.0 | 878 | 0.0734 | 0.9118 | 0.9206 | 0.9162 | 0.9799 |
| 0.0546 | 2.0 | 1756 | 0.0591 | 0.9210 | 0.9350 | 0.9279 | 0.9829 |
| 0.0297 | 3.0 | 2634 | 0.0611 | 0.9210 | 0.9357 | 0.9283 | 0.9832 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
Davlan/bert-base-multilingual-cased-masakhaner | [
"pytorch",
"tf",
"bert",
"token-classification",
"arxiv:2103.11811",
"transformers",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 88 | null | ---
language: en
datasets:
- libri_light
- common_voice
- switchboard
- fisher
tags:
- speech
- automatic-speech-recognition
- CTC
- Attention
- wav2vec2
license: apache-2.0
---
# Wav2Vec2-Large-Robust - Finetuned on Librispeech (960 hours)
## Note: the model has not been fully initialized. If you want to use it without further fine-tuning, run a forward pass first to recalculate the normalized weights of the positional convolutional layer:
```ipython
with torch.no_grad():
model(torch.randn((1,300_000)))
```
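For context, a fuller version of that warm-up pass (the repository id is a placeholder for wherever this checkpoint is hosted):
```python
import torch
from transformers import Wav2Vec2ForCTC

# Placeholder id -- replace with the actual repository path of this checkpoint
model = Wav2Vec2ForCTC.from_pretrained("<this-repo-id>")

# A single dummy forward pass recomputes the weight-normalized positional convolution weights
with torch.no_grad():
    model(torch.randn((1, 300_000)))
```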
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/)
The base model was pretrained on 16 kHz sampled speech audio.
Speech datasets from multiple domains were used to pretrain the model:
- [Libri-Light](https://github.com/facebookresearch/libri-light): open-source audio books from the LibriVox project; clean, read-out audio data
- [CommonVoice](https://huggingface.co/datasets/common_voice): crowd-source collected audio data; read-out text snippets
- [Switchboard](https://catalog.ldc.upenn.edu/LDC97S62): telephone speech corpus; noisy telephone data
- [Fisher](https://catalog.ldc.upenn.edu/LDC2004T19): conversational telephone speech; noisy telephone data
When using the model, make sure that your speech input is also sampled at 16 kHz.
Check out [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more information.
[Paper Robust Wav2Vec2](https://arxiv.org/abs/2104.01027)
Authors: Wei-Ning Hsu, Anuroop Sriram, Alexei Baevski, Tatiana Likhomanenko, Qiantong Xu, Vineel Pratap, Jacob Kahn, Ann Lee, Ronan Collobert, Gabriel Synnaeve, Michael Auli
**Abstract**
Self-supervised learning of speech representations has been a very active research area but most work is focused on a single domain such as read audio books for which there exist large quantities of labeled and unlabeled data. In this paper, we explore more general setups where the domain of the unlabeled data for pre-training data differs from the domain of the labeled data for fine-tuning, which in turn may differ from the test data domain. Our experiments show that using target domain data during pre-training leads to large performance improvements across a variety of setups. On a large-scale competitive setup, we show that pre-training on unlabeled in-domain data reduces the gap between models trained on in-domain and out-of-domain labeled data by 66%-73%. This has obvious practical implications since it is much easier to obtain unlabeled target domain data than labeled data. Moreover, we find that pre-training on multiple domains improves generalization performance on domains not seen during training. Code and models will be made available at this https URL.
The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20.
# Usage
See [this notebook](https://colab.research.google.com/drive/1FjTsqbYKphl9kL-eILgUc-bl4zVThL8F?usp=sharing) for more information on how to fine-tune the model. |
Dayout/test | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2021-09-06T10:53:00Z | ---
tags:
- autonlp
- evaluation
- benchmark
---
# Model Card for MetNet
|
DecafNosebleed/scarabot-model | [
"gpt2",
"text-generation",
"transformers"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | 2021-08-22T18:42:16Z | ---
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
metrics:
- accuracy
model_index:
- name: roberta-base-bne-finetuned-amazon_reviews_multi-finetuned-amazon_reviews_multi
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: amazon_reviews_multi
type: amazon_reviews_multi
args: es
metric:
name: Accuracy
type: accuracy
value: 0.9285
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-bne-finetuned-amazon_reviews_multi-finetuned-amazon_reviews_multi
This model was trained from scratch on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3595
- Accuracy: 0.9285
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.103 | 1.0 | 1250 | 0.2864 | 0.928 |
| 0.0407 | 2.0 | 2500 | 0.3595 | 0.9285 |
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
Declan/Breitbart_model_v1 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | 2021-08-22T15:14:32Z | ---
license: cc-by-4.0
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
metrics:
- accuracy
model_index:
- name: roberta-base-bne-finetuned-amazon_reviews_multi
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: amazon_reviews_multi
type: amazon_reviews_multi
args: es
metric:
name: Accuracy
type: accuracy
value: 0.93075
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-bne-finetuned-amazon_reviews_multi
This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2306
- Accuracy: 0.9307
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1978 | 1.0 | 1250 | 0.1750 | 0.9325 |
| 0.0951 | 2.0 | 2500 | 0.2306 | 0.9307 |
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
Declan/CNN_model_v1 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
library_name: superb
benchmark: superb
task: asr
datasets:
- superb
tags:
- automatic-speech-recognition
- osanseviero/hubert_base
widget:
- example_title: Librispeech sample 1
src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
---
# Fine-tuned s3prl model for ASR |
Declan/NPR_model_v4 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | **This model is provided with no guarantees whatsoever; use at your own risk.**
This is a GPT-Neo 2.7B model fine-tuned for 20k steps on GitHub data scraped by an EleutherAI member (filtered for Python only). A better code model is coming soon™ (hopefully, maybe); this model was created mostly as a test of infrastructure code. |
Declan/Politico_model_v1 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
language: pt
datasets:
- common_voice
- mls
- cetuc
- lapsbm
- voxforge
- tedx
- sid
metrics:
- wer
tags:
- audio
- speech
- wav2vec2
- pt
- portuguese-speech-corpus
- automatic-speech-recognition
- speech
- PyTorch
license: apache-2.0
---
# cetuc100-xlsr: Wav2vec 2.0 with CETUC Dataset
This is a demonstration of a fine-tuned Wav2vec model for Brazilian Portuguese using the [CETUC](http://www02.smt.ufrj.br/~igor.quintanilha/alcaim.tar.gz) dataset. This dataset contains approximately 145 hours of Brazilian Portuguese speech distributed among 50 male and 50 female speakers, each pronouncing approximately 1,000 phonetically balanced sentences selected from the [CETEN-Folha](https://www.linguateca.pt/cetenfolha/) corpus.
In this notebook the model is tested against other available Brazilian Portuguese datasets.
| Dataset | Train | Valid | Test |
|--------------------------------|-------:|------:|------:|
| CETUC | 94h | -- | 5.4h |
| Common Voice | | -- | 9.5h |
| LaPS BM | | -- | 0.1h |
| MLS | | -- | 3.7h |
| Multilingual TEDx (Portuguese) | | -- | 1.8h |
| SID | | -- | 1.0h |
| VoxForge | | -- | 0.1h |
| Total | | -- | 21.6h |
#### Summary
| | CETUC | CV | LaPS | MLS | SID | TEDx | VF | AVG |
|----------------------|---------------|----------------|----------------|----------------|----------------|----------------|----------------|----------------|
| cetuc\_100 (demonstration below)| 0.446 | 0.856 | 0.089 | 0.967 | 1.172 | 0.929 | 0.902 | 0.765 |
| cetuc\_100 + 4-gram (demonstration below)|0.339 | 0.734 | 0.076 | 0.961 | 1.188 | 1.227 | 0.801 | 0.760 |
## Demonstration
```python
MODEL_NAME = "lgris/cetuc100-xlsr"
```
### Imports and dependencies
```python
%%capture
!pip install torch==1.8.2+cu111 torchvision==0.9.2+cu111 torchaudio===0.8.2 -f https://download.pytorch.org/whl/lts/1.8/torch_lts.html
!pip install datasets
!pip install jiwer
!pip install transformers
!pip install soundfile
!pip install pyctcdecode
!pip install https://github.com/kpu/kenlm/archive/master.zip
```
```python
import jiwer
import torchaudio
from datasets import load_dataset, load_metric
from transformers import (
Wav2Vec2ForCTC,
Wav2Vec2Processor,
)
from pyctcdecode import build_ctcdecoder
import torch
import re
import sys
```
### Helpers
```python
chars_to_ignore_regex = '[\,\?\.\!\;\:\"]' # noqa: W605
def map_to_array(batch):
speech, _ = torchaudio.load(batch["path"])
batch["speech"] = speech.squeeze(0).numpy()
batch["sampling_rate"] = 16_000
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'")
batch["target"] = batch["sentence"]
return batch
```
```python
def calc_metrics(truths, hypos):
wers = []
mers = []
wils = []
for t, h in zip(truths, hypos):
try:
wers.append(jiwer.wer(t, h))
mers.append(jiwer.mer(t, h))
wils.append(jiwer.wil(t, h))
except: # Empty string?
pass
wer = sum(wers)/len(wers)
mer = sum(mers)/len(mers)
wil = sum(wils)/len(wils)
return wer, mer, wil
```
```python
def load_data(dataset):
data_files = {'test': f'{dataset}/test.csv'}
dataset = load_dataset('csv', data_files=data_files)["test"]
return dataset.map(map_to_array)
```
### Model
```python
class STT:
def __init__(self,
model_name,
device='cuda' if torch.cuda.is_available() else 'cpu',
lm=None):
self.model_name = model_name
self.model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device)
self.processor = Wav2Vec2Processor.from_pretrained(model_name)
self.vocab_dict = self.processor.tokenizer.get_vocab()
self.sorted_dict = {
k.lower(): v for k, v in sorted(self.vocab_dict.items(),
key=lambda item: item[1])
}
self.device = device
self.lm = lm
if self.lm:
self.lm_decoder = build_ctcdecoder(
list(self.sorted_dict.keys()),
self.lm
)
def batch_predict(self, batch):
features = self.processor(batch["speech"],
sampling_rate=batch["sampling_rate"][0],
padding=True,
return_tensors="pt")
input_values = features.input_values.to(self.device)
attention_mask = features.attention_mask.to(self.device)
with torch.no_grad():
logits = self.model(input_values, attention_mask=attention_mask).logits
if self.lm:
logits = logits.cpu().numpy()
batch["predicted"] = []
for sample_logits in logits:
batch["predicted"].append(self.lm_decoder.decode(sample_logits))
else:
pred_ids = torch.argmax(logits, dim=-1)
batch["predicted"] = self.processor.batch_decode(pred_ids)
return batch
```
### Download datasets
```python
%%capture
!gdown --id 1HFECzIizf-bmkQRLiQD0QVqcGtOG5upI
!mkdir bp_dataset
!unzip bp_dataset -d bp_dataset/
```
### Tests
```python
stt = STT(MODEL_NAME)
```
#### CETUC
```python
ds = load_data('cetuc_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CETUC WER:", wer)
```
CETUC WER: 0.44677581829220825
#### Common Voice
```python
ds = load_data('commonvoice_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CV WER:", wer)
```
CV WER: 0.8561919899139065
#### LaPS
```python
ds = load_data('lapsbm_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Laps WER:", wer)
```
Laps WER: 0.08955808080808081
#### MLS
```python
ds = load_data('mls_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("MLS WER:", wer)
```
MLS WER: 0.9670008790979718
#### SID
```python
ds = load_data('sid_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Sid WER:", wer)
```
Sid WER: 1.1723738343632861
#### TEDx
```python
ds = load_data('tedx_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("TEDx WER:", wer)
```
TEDx WER: 0.929976436317539
#### VoxForge
```python
ds = load_data('voxforge_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("VoxForge WER:", wer)
```
VoxForge WER: 0.9020183982683985
### Tests with LM
```python
# !find -type f -name "*.wav" -delete
!rm -rf ~/.cache
!gdown --id 1GJIKseP5ZkTbllQVgOL98R4yYAcIySFP # trained with wikipedia
stt = STT(MODEL_NAME, lm='pt-BR-wiki.word.4-gram.arpa')
# !gdown --id 1dLFldy7eguPtyJj5OAlI4Emnx0BpFywg # trained with bp
# stt = STT(MODEL_NAME, lm='pt-BR.word.4-gram.arpa')
```
#### CETUC
```python
ds = load_data('cetuc_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CETUC WER:", wer)
```
CETUC WER: 0.3396346663354827
#### Common Voice
```python
ds = load_data('commonvoice_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CV WER:", wer)
```
CV WER: 0.7341013242719512
#### LaPS
```python
ds = load_data('lapsbm_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Laps WER:", wer)
```
Laps WER: 0.07612373737373737
#### MLS
```python
ds = load_data('mls_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("MLS WER:", wer)
```
MLS WER: 0.960908940243212
#### SID
```python
ds = load_data('sid_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Sid WER:", wer)
```
Sid WER: 1.188118540533579
#### TEDx
```python
ds = load_data('tedx_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("TEDx WER:", wer)
```
TEDx WER: 1.2271077178339618
#### VoxForge
```python
ds = load_data('voxforge_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("VoxForge WER:", wer)
```
VoxForge WER: 0.800196158008658
|
Declan/Politico_model_v2 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
language: pt
datasets:
- common_voice
- mls
- cetuc
- lapsbm
- voxforge
- tedx
- sid
metrics:
- wer
tags:
- audio
- speech
- wav2vec2
- pt
- portuguese-speech-corpus
- automatic-speech-recognition
- speech
- PyTorch
license: apache-2.0
---
# commonvoice10-xlsr: Wav2vec 2.0 with Common Voice Dataset
This is a demonstration of a fine-tuned Wav2vec model for Brazilian Portuguese using the [Common Voice 7.0](https://commonvoice.mozilla.org/pt) dataset.
In this notebook the model is tested against other available Brazilian Portuguese datasets.
| Dataset | Train | Valid | Test |
|--------------------------------|-------:|------:|------:|
| CETUC | | -- | 5.4h |
| Common Voice | 37.8h | -- | 9.5h |
| LaPS BM | | -- | 0.1h |
| MLS | | -- | 3.7h |
| Multilingual TEDx (Portuguese) | | -- | 1.8h |
| SID | | -- | 1.0h |
| VoxForge | | -- | 0.1h |
| Total | | -- | 21.6h |
#### Summary
| | CETUC | CV | LaPS | MLS | SID | TEDx | VF | AVG |
|----------------------|---------------|----------------|----------------|----------------|----------------|----------------|----------------|----------------|
| commonvoice10 (demonstration below) | 0.133 | 0.189 | 0.165 | 0.189 | 0.247 | 0.474 | 0.251 | 0.235 |
| commonvoice10 + 4-gram (demonstration below) | 0.060 | 0.117 | 0.088 | 0.136 | 0.181 | 0.394 | 0.227 | 0.171 |
## Demonstration
```python
MODEL_NAME = "lgris/commonvoice10-xlsr"
```
### Imports and dependencies
```python
%%capture
!pip install torch==1.8.2+cu111 torchvision==0.9.2+cu111 torchaudio===0.8.2 -f https://download.pytorch.org/whl/lts/1.8/torch_lts.html
!pip install datasets
!pip install jiwer
!pip install transformers
!pip install soundfile
!pip install pyctcdecode
!pip install https://github.com/kpu/kenlm/archive/master.zip
```
```python
import jiwer
import torchaudio
from datasets import load_dataset, load_metric
from transformers import (
Wav2Vec2ForCTC,
Wav2Vec2Processor,
)
from pyctcdecode import build_ctcdecoder
import torch
import re
import sys
```
### Helpers
```python
chars_to_ignore_regex = '[\,\?\.\!\;\:\"]' # noqa: W605
def map_to_array(batch):
speech, _ = torchaudio.load(batch["path"])
batch["speech"] = speech.squeeze(0).numpy()
batch["sampling_rate"] = 16_000
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'")
batch["target"] = batch["sentence"]
return batch
```
```python
def calc_metrics(truths, hypos):
wers = []
mers = []
wils = []
for t, h in zip(truths, hypos):
try:
wers.append(jiwer.wer(t, h))
mers.append(jiwer.mer(t, h))
wils.append(jiwer.wil(t, h))
except: # Empty string?
pass
wer = sum(wers)/len(wers)
mer = sum(mers)/len(mers)
wil = sum(wils)/len(wils)
return wer, mer, wil
```
```python
def load_data(dataset):
data_files = {'test': f'{dataset}/test.csv'}
dataset = load_dataset('csv', data_files=data_files)["test"]
return dataset.map(map_to_array)
```
### Model
```python
class STT:
def __init__(self,
model_name,
device='cuda' if torch.cuda.is_available() else 'cpu',
lm=None):
self.model_name = model_name
self.model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device)
self.processor = Wav2Vec2Processor.from_pretrained(model_name)
self.vocab_dict = self.processor.tokenizer.get_vocab()
self.sorted_dict = {
k.lower(): v for k, v in sorted(self.vocab_dict.items(),
key=lambda item: item[1])
}
self.device = device
self.lm = lm
if self.lm:
self.lm_decoder = build_ctcdecoder(
list(self.sorted_dict.keys()),
self.lm
)
def batch_predict(self, batch):
features = self.processor(batch["speech"],
sampling_rate=batch["sampling_rate"][0],
padding=True,
return_tensors="pt")
input_values = features.input_values.to(self.device)
attention_mask = features.attention_mask.to(self.device)
with torch.no_grad():
logits = self.model(input_values, attention_mask=attention_mask).logits
if self.lm:
logits = logits.cpu().numpy()
batch["predicted"] = []
for sample_logits in logits:
batch["predicted"].append(self.lm_decoder.decode(sample_logits))
else:
pred_ids = torch.argmax(logits, dim=-1)
batch["predicted"] = self.processor.batch_decode(pred_ids)
return batch
```
### Download datasets
```python
%%capture
!gdown --id 1HFECzIizf-bmkQRLiQD0QVqcGtOG5upI
!mkdir bp_dataset
!unzip bp_dataset -d bp_dataset/
```
### Tests
```python
stt = STT(MODEL_NAME)
```
#### CETUC
```python
ds = load_data('cetuc_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CETUC WER:", wer)
```
CETUC WER: 0.13291846056190185
#### Common Voice
```python
ds = load_data('commonvoice_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CV WER:", wer)
```
CV WER: 0.18909733896486755
#### LaPS
```python
ds = load_data('lapsbm_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Laps WER:", wer)
```
Laps WER: 0.1655429292929293
#### MLS
```python
ds = load_data('mls_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("MLS WER:", wer)
```
MLS WER: 0.1894711228284466
#### SID
```python
ds = load_data('sid_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Sid WER:", wer)
```
Sid WER: 0.2471983709551264
#### TEDx
```python
ds = load_data('tedx_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("TEDx WER:", wer)
```
TEDx WER: 0.4739658565194102
#### VoxForge
```python
ds = load_data('voxforge_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("VoxForge WER:", wer)
```
VoxForge WER: 0.2510294913419914
### Tests with LM
```python
# !find -type f -name "*.wav" -delete
!rm -rf ~/.cache
!gdown --id 1GJIKseP5ZkTbllQVgOL98R4yYAcIySFP # trained with wikipedia
stt = STT(MODEL_NAME, lm='pt-BR-wiki.word.4-gram.arpa')
# !gdown --id 1dLFldy7eguPtyJj5OAlI4Emnx0BpFywg # trained with bp
# stt = STT(MODEL_NAME, lm='pt-BR.word.4-gram.arpa')
```
#### CETUC
```python
ds = load_data('cetuc_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CETUC WER:", wer)
```
CETUC WER: 0.060609303416680915
#### Common Voice
```python
ds = load_data('commonvoice_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CV WER:", wer)
```
CV WER: 0.11758415681158373
#### LaPS
```python
ds = load_data('lapsbm_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Laps WER:", wer)
```
Laps WER: 0.08815340909090909
#### MLS
```python
ds = load_data('mls_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("MLS WER:", wer)
```
MLS WER: 0.1359966791836458
#### SID
```python
ds = load_data('sid_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Sid WER:", wer)
```
Sid WER: 0.1818429601530829
#### TEDx
```python
ds = load_data('tedx_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("TEDx WER:", wer)
```
TEDx WER: 0.39469326522731385
#### VoxForge
```python
ds = load_data('voxforge_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("VoxForge WER:", wer)
```
VoxForge WER: 0.22779897186147183
|
Declan/Reuters_model_v4 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | 2021-11-26T17:06:25Z | ---
language: pt
datasets:
- common_voice
- mls
- cetuc
- lapsbm
- voxforge
- tedx
- sid
metrics:
- wer
tags:
- audio
- speech
- wav2vec2
- pt
- portuguese-speech-corpus
- automatic-speech-recognition
- speech
- PyTorch
- hf-asr-leaderboard
model-index:
- name: bp500-base10k_voxpopuli
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice
type: common_voice
args: pt
metrics:
- name: Test WER
type: wer
value: 24.9
license: apache-2.0
---
# bp500-base10k_voxpopuli: Wav2vec 2.0 with Brazilian Portuguese (BP) Dataset
This is a demonstration of a fine-tuned Wav2vec model for Brazilian Portuguese using the following datasets:
- [CETUC](http://www02.smt.ufrj.br/~igor.quintanilha/alcaim.tar.gz): contains approximately 145 hours of Brazilian Portuguese speech distributed among 50 male and 50 female speakers, each pronouncing approximately 1,000 phonetically balanced sentences selected from the [CETEN-Folha](https://www.linguateca.pt/cetenfolha/) corpus.
- [Common Voice 7.0](https://commonvoice.mozilla.org/pt): is a project proposed by Mozilla Foundation with the goal to create a wide open dataset in different languages. In this project, volunteers donate and validate speech using the [oficial site](https://commonvoice.mozilla.org/pt).
- [Lapsbm](https://github.com/falabrasil/gitlab-resources): "Falabrasil - UFPA" is a dataset used by the Fala Brasil group to benchmark ASR systems in Brazilian Portuguese. Contains 35 speakers (10 females), each one pronouncing 20 unique sentences, totalling 700 utterances in Brazilian Portuguese. The audios were recorded in 22.05 kHz without environment control.
- [Multilingual Librispeech (MLS)](https://arxiv.org/abs/2012.03411): a massive dataset available in many languages. The MLS is based on audiobook recordings in public domain like [LibriVox](https://librivox.org/). The dataset contains a total of 6k hours of transcribed data in many languages. The set in Portuguese [used in this work](http://www.openslr.org/94/) (mostly Brazilian variant) has approximately 284 hours of speech, obtained from 55 audiobooks read by 62 speakers.
- [Multilingual TEDx](http://www.openslr.org/100): a collection of audio recordings from TEDx talks in 8 source languages. The Portuguese set (mostly Brazilian Portuguese variant) contains 164 hours of transcribed speech.
- [Sidney](https://igormq.github.io/datasets/) (SID): contains 5,777 utterances recorded by 72 speakers (20 women) from 17 to 59 years old with fields such as place of birth, age, gender, education, and occupation;
- [VoxForge](http://www.voxforge.org/): is a project with the goal to build open datasets for acoustic models. The corpus contains approximately 100 speakers and 4,130 utterances of Brazilian Portuguese, with sample rates varying from 16kHz to 44.1kHz.
These datasets were combined to build a larger Brazilian Portuguese dataset. All data was used for training except the Common Voice dev/test sets, which were used for validation/test respectively. We also made test sets for all the gathered datasets.
| Dataset | Train | Valid | Test |
|--------------------------------|-------:|------:|------:|
| CETUC | 94.0h | -- | 5.4h |
| Common Voice | 37.8h | 8.9h | 9.5h |
| LaPS BM | 0.8h | -- | 0.1h |
| MLS | 161.0h | -- | 3.7h |
| Multilingual TEDx (Portuguese) | 148.9h | -- | 1.8h |
| SID | 7.2h | -- | 1.0h |
| VoxForge | 3.9h | -- | 0.1h |
| Total | 453.6h | 8.9h | 21.6h |
The original model was fine-tuned using [fairseq](https://github.com/pytorch/fairseq). This notebook uses a converted version of the original one. The link to the original fairseq model is available [here](https://drive.google.com/file/d/19kkENi8uvczmw9OLSdqnjvKqBE53cl_W/view?usp=sharing).
#### Summary
| | CETUC | CV | LaPS | MLS | SID | TEDx | VF | AVG |
|----------------------|---------------|----------------|----------------|----------------|----------------|----------------|----------------|----------------|
| bp\_500-base10k_voxpopuli (demonstration below) | 0.120 | 0.249 | 0.039 | 0.227 | 0.169 | 0.349 | 0.116 | 0.181 |
| bp\_500-base10k_voxpopuli + 4-gram (demonstration below) | 0.074 | 0.174 | 0.032 | 0.182 | 0.181 | 0.349 | 0.111 | 0.157 |
#### Transcription examples
| Text | Transcription |
|------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------|
|suco de uva e água misturam bem|suco **deúva** e água **misturão** bem|
|culpa do dinheiro|**cupa** do dinheiro|
|eu amo shooters call of duty é o meu favorito|eu **omo** **shúters cofedete** é meu favorito|
|você pode explicar por que isso acontece|você pode explicar *por* que isso **ontece**|
|no futuro você desejará ter começado a investir hoje|no futuro você desejará **a** ter começado a investir hoje|
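For quick reference, here is a minimal single-utterance inference sketch with greedy decoding. The audio path is a placeholder; the demonstration below covers batch evaluation on the gathered test sets.
```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model = Wav2Vec2ForCTC.from_pretrained("lgris/bp500-base10k_voxpopuli")
processor = Wav2Vec2Processor.from_pretrained("lgris/bp500-base10k_voxpopuli")

speech, sr = torchaudio.load("example.wav")  # placeholder path to a mono audio file
if sr != 16_000:
    speech = torchaudio.transforms.Resample(orig_freq=sr, new_freq=16_000)(speech)

inputs = processor(speech.squeeze(0).numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```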
## Demonstration
```python
MODEL_NAME = "lgris/bp500-base10k_voxpopuli"
```
### Imports and dependencies
```python
%%capture
!pip install torch==1.8.2+cu111 torchvision==0.9.2+cu111 torchaudio===0.8.2 -f https://download.pytorch.org/whl/lts/1.8/torch_lts.html
!pip install datasets
!pip install jiwer
!pip install transformers
!pip install soundfile
!pip install pyctcdecode
!pip install https://github.com/kpu/kenlm/archive/master.zip
```
```python
import jiwer
import torchaudio
from datasets import load_dataset, load_metric
from transformers import (
Wav2Vec2ForCTC,
Wav2Vec2Processor,
)
from pyctcdecode import build_ctcdecoder
import torch
import re
import sys
```
### Helpers
```python
chars_to_ignore_regex = '[\,\?\.\!\;\:\"]' # noqa: W605
def map_to_array(batch):
speech, _ = torchaudio.load(batch["path"])
batch["speech"] = speech.squeeze(0).numpy()
batch["sampling_rate"] = 16_000
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'")
batch["target"] = batch["sentence"]
return batch
```
```python
def calc_metrics(truths, hypos):
wers = []
mers = []
wils = []
for t, h in zip(truths, hypos):
try:
wers.append(jiwer.wer(t, h))
mers.append(jiwer.mer(t, h))
wils.append(jiwer.wil(t, h))
        except Exception:  # skip pairs that jiwer cannot score (e.g. empty strings)
pass
wer = sum(wers)/len(wers)
mer = sum(mers)/len(mers)
wil = sum(wils)/len(wils)
return wer, mer, wil
```
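Note that `calc_metrics` averages per-utterance scores, so short and long utterances weigh equally. If a corpus-level WER (weighted by utterance length) is preferred, jiwer can score the full lists in one call — a minimal sketch:
```python
def calc_corpus_wer(truths, hypos):
    # jiwer accepts lists of references/hypotheses and pools errors over the whole corpus
    pairs = [(t, h) for t, h in zip(truths, hypos) if t.strip()]
    refs, hyps = zip(*pairs)
    return jiwer.wer(list(refs), list(hyps))
```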
```python
def load_data(dataset):
data_files = {'test': f'{dataset}/test.csv'}
dataset = load_dataset('csv', data_files=data_files)["test"]
return dataset.map(map_to_array)
```
### Model
```python
class STT:
def __init__(self,
model_name,
device='cuda' if torch.cuda.is_available() else 'cpu',
lm=None):
self.model_name = model_name
self.model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device)
self.processor = Wav2Vec2Processor.from_pretrained(model_name)
self.vocab_dict = self.processor.tokenizer.get_vocab()
self.sorted_dict = {
k.lower(): v for k, v in sorted(self.vocab_dict.items(),
key=lambda item: item[1])
}
self.device = device
self.lm = lm
if self.lm:
self.lm_decoder = build_ctcdecoder(
list(self.sorted_dict.keys()),
self.lm
)
def batch_predict(self, batch):
features = self.processor(batch["speech"],
sampling_rate=batch["sampling_rate"][0],
padding=True,
return_tensors="pt")
input_values = features.input_values.to(self.device)
with torch.no_grad():
logits = self.model(input_values).logits
if self.lm:
logits = logits.cpu().numpy()
batch["predicted"] = []
for sample_logits in logits:
batch["predicted"].append(self.lm_decoder.decode(sample_logits))
else:
pred_ids = torch.argmax(logits, dim=-1)
batch["predicted"] = self.processor.batch_decode(pred_ids)
return batch
```
### Download datasets
```python
%%capture
!gdown --id 1HFECzIizf-bmkQRLiQD0QVqcGtOG5upI
!mkdir bp_dataset
!unzip bp_dataset -d bp_dataset/
```
```python
%cd bp_dataset
```
/content/bp_dataset
### Tests
```python
stt = STT(MODEL_NAME)
```
#### CETUC
```python
ds = load_data('cetuc_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CETUC WER:", wer)
```
CETUC WER: 0.12096759949218888
#### Common Voice
```python
ds = load_data('commonvoice_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CV WER:", wer)
```
CV WER: 0.24977003159495725
#### LaPS
```python
ds = load_data('lapsbm_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Laps WER:", wer)
```
Laps WER: 0.039769570707070705
#### MLS
```python
ds = load_data('mls_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("MLS WER:", wer)
```
MLS WER: 0.2269637077788063
#### SID
```python
ds = load_data('sid_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Sid WER:", wer)
```
Sid WER: 0.1691680138494731
#### TEDx
```python
ds = load_data('tedx_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("TEDx WER:", wer)
```
TEDx WER: 0.34908555859018014
#### VoxForge
```python
ds = load_data('voxforge_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("VoxForge WER:", wer)
```
VoxForge WER: 0.11649350649350651
### Tests with LM
```python
!rm -rf ~/.cache
!gdown --id 1GJIKseP5ZkTbllQVgOL98R4yYAcIySFP # trained with wikipedia
stt = STT(MODEL_NAME, lm='pt-BR-wiki.word.4-gram.arpa')
# !gdown --id 1dLFldy7eguPtyJj5OAlI4Emnx0BpFywg # trained with bp
# stt = STT(MODEL_NAME, lm='pt-BR.word.4-gram.arpa')
```
### Cetuc
```python
ds = load_data('cetuc_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CETUC WER:", wer)
```
CETUC WER: 0.07499558425787961
#### Common Voice
```python
ds = load_data('commonvoice_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CV WER:", wer)
```
CV WER: 0.17442648452610307
#### LaPS
```python
ds = load_data('lapsbm_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Laps WER:", wer)
```
Laps WER: 0.032774621212121206
#### MLS
```python
ds = load_data('mls_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("MLS WER:", wer)
```
MLS WER: 0.18213620321569274
#### SID
```python
ds = load_data('sid_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Sid WER:", wer)
```
Sid WER: 0.18102544972868206
#### TEDx
```python
ds = load_data('tedx_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("TEDx WER:", wer)
```
TEDx WER: 0.3491402028105601
#### VoxForge
```python
ds = load_data('voxforge_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("VoxForge WER:", wer)
```
VoxForge WER: 0.11189529220779222
|
Declan/Reuters_model_v5 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | 2021-11-26T17:06:03Z | ---
language: pt
datasets:
- common_voice
- mls
- cetuc
- lapsbm
- voxforge
- tedx
- sid
metrics:
- wer
tags:
- audio
- speech
- wav2vec2
- pt
- portuguese-speech-corpus
- automatic-speech-recognition
- speech
- PyTorch
- hf-asr-leaderboard
model-index:
- name: bp400-xlsr
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice
type: common_voice
args: pt
metrics:
- name: Test WER
type: wer
value: 13.6
license: apache-2.0
---
# bp500-xlsr: Wav2vec 2.0 with Brazilian Portuguese (BP) Dataset
This is a demonstration of a fine-tuned Wav2vec 2.0 model for Brazilian Portuguese, trained on the following datasets:
- [CETUC](http://www02.smt.ufrj.br/~igor.quintanilha/alcaim.tar.gz): contains approximately 145 hours of Brazilian Portuguese speech distributed among 50 male and 50 female speakers, each pronouncing approximately 1,000 phonetically balanced sentences selected from the [CETEN-Folha](https://www.linguateca.pt/cetenfolha/) corpus;
- [Common Voice 7.0](https://commonvoice.mozilla.org/pt): a project proposed by the Mozilla Foundation with the goal of creating a wide, open dataset in different languages. In this project, volunteers donate and validate speech using the [official site](https://commonvoice.mozilla.org/pt);
- [Lapsbm](https://github.com/falabrasil/gitlab-resources): "Falabrasil - UFPA" is a dataset used by the Fala Brasil group to benchmark ASR systems in Brazilian Portuguese. It contains 35 speakers (10 female), each pronouncing 20 unique sentences, totalling 700 utterances in Brazilian Portuguese. The audio was recorded at 22.05 kHz without environmental control;
- [Multilingual Librispeech (MLS)](https://arxiv.org/abs/2012.03411): a massive dataset available in many languages. MLS is based on public-domain audiobook recordings such as those from [LibriVox](https://librivox.org/) and contains a total of 6k hours of transcribed data across languages. The Portuguese set [used in this work](http://www.openslr.org/94/) (mostly the Brazilian variant) has approximately 284 hours of speech, obtained from 55 audiobooks read by 62 speakers;
- [VoxForge](http://www.voxforge.org/): a project with the goal of building open datasets for acoustic models. The corpus contains approximately 100 speakers and 4,130 utterances of Brazilian Portuguese, with sample rates varying from 16 kHz to 44.1 kHz.
These datasets were combined to build a larger Brazilian Portuguese dataset. All data was used for training except the Common Voice dev/test sets, which were used for validation and test respectively. We also created test sets for all the gathered datasets.
| Dataset | Train | Valid | Test |
|--------------------------------|-------:|------:|------:|
| CETUC | 93.9h | -- | 5.4h |
| Common Voice | 37.6h | 8.9h | 9.5h |
| LaPS BM | 0.8h | -- | 0.1h |
| MLS | 161.0h | -- | 3.7h |
| Multilingual TEDx (Portuguese) | 144.2h | -- | 1.8h |
| SID | 5.0h | -- | 1.0h |
| VoxForge | 2.8h | -- | 0.1h |
| Total | 437.2h | 8.9h | 21.6h |
The original model was fine-tuned using [fairseq](https://github.com/pytorch/fairseq). This notebook uses a converted version of the original one. The link to the original fairseq model is available [here](https://drive.google.com/file/d/1J8aR1ltDLQFe-dVrGuyxoRm2uyJjCWgf/view?usp=sharing).
#### Summary
| | CETUC | CV | LaPS | MLS | SID | TEDx | VF | AVG |
|----------------------|---------------|----------------|----------------|----------------|----------------|----------------|----------------|----------------|
| bp\_500 (demonstration below) | 0.051 | 0.136 | 0.032 | 0.118 | 0.095 | 0.248 | 0.082 | 0.108 |
| bp\_500 + 4-gram (demonstration below) | 0.032 | 0.097 | 0.022 | 0.114 | 0.125 | 0.246 | 0.065 | 0.100 |
#### Transcription examples
| Text | Transcription |
|------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------|
|não há um departamento de mediadores independente das federações e das agremiações|não há um **dearamento** de mediadores independente das federações e das **agrebiações**|
|mas que bodega|**masque** bodega|
|a cortina abriu o show começou|a cortina abriu o **chô** começou|
|por sorte havia uma passadeira|**busote avinhoa** **passadeiro**|
|estou maravilhada está tudo pronto|**stou** estou maravilhada está tudo pronto|
## Demonstration
```python
MODEL_NAME = "lgris/bp500-xlsr"
```
### Imports and dependencies
```python
%%capture
!pip install torch==1.8.2+cu111 torchvision==0.9.2+cu111 torchaudio===0.8.2 -f https://download.pytorch.org/whl/lts/1.8/torch_lts.html
!pip install datasets
!pip install jiwer
!pip install transformers
!pip install soundfile
!pip install pyctcdecode
!pip install https://github.com/kpu/kenlm/archive/master.zip
```
```python
import jiwer
import torchaudio
from datasets import load_dataset, load_metric
from transformers import (
Wav2Vec2ForCTC,
Wav2Vec2Processor,
)
from pyctcdecode import build_ctcdecoder
import torch
import re
import sys
```
### Helpers
```python
chars_to_ignore_regex = '[\,\?\.\!\;\:\"]' # noqa: W605
def map_to_array(batch):
speech, _ = torchaudio.load(batch["path"])
batch["speech"] = speech.squeeze(0).numpy()
batch["sampling_rate"] = 16_000
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'")
batch["target"] = batch["sentence"]
return batch
```
```python
def calc_metrics(truths, hypos):
wers = []
mers = []
wils = []
for t, h in zip(truths, hypos):
try:
wers.append(jiwer.wer(t, h))
mers.append(jiwer.mer(t, h))
wils.append(jiwer.wil(t, h))
        except Exception:  # skip pairs that jiwer cannot score (e.g. empty strings)
pass
wer = sum(wers)/len(wers)
mer = sum(mers)/len(mers)
wil = sum(wils)/len(wils)
return wer, mer, wil
```
```python
def load_data(dataset):
data_files = {'test': f'{dataset}/test.csv'}
dataset = load_dataset('csv', data_files=data_files)["test"]
return dataset.map(map_to_array)
```
### Model
```python
class STT:
def __init__(self,
model_name,
device='cuda' if torch.cuda.is_available() else 'cpu',
lm=None):
self.model_name = model_name
self.model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device)
self.processor = Wav2Vec2Processor.from_pretrained(model_name)
self.vocab_dict = self.processor.tokenizer.get_vocab()
self.sorted_dict = {
k.lower(): v for k, v in sorted(self.vocab_dict.items(),
key=lambda item: item[1])
}
self.device = device
self.lm = lm
if self.lm:
self.lm_decoder = build_ctcdecoder(
list(self.sorted_dict.keys()),
self.lm
)
def batch_predict(self, batch):
features = self.processor(batch["speech"],
sampling_rate=batch["sampling_rate"][0],
padding=True,
return_tensors="pt")
input_values = features.input_values.to(self.device)
attention_mask = features.attention_mask.to(self.device)
with torch.no_grad():
logits = self.model(input_values, attention_mask=attention_mask).logits
if self.lm:
logits = logits.cpu().numpy()
batch["predicted"] = []
for sample_logits in logits:
batch["predicted"].append(self.lm_decoder.decode(sample_logits))
else:
pred_ids = torch.argmax(logits, dim=-1)
batch["predicted"] = self.processor.batch_decode(pred_ids)
return batch
```
### Download datasets
```python
%%capture
!gdown --id 1HFECzIizf-bmkQRLiQD0QVqcGtOG5upI
!mkdir bp_dataset
!unzip bp_dataset -d bp_dataset/
```
```python
%cd bp_dataset
```
/content/bp_dataset
### Tests
```python
stt = STT(MODEL_NAME)
```
#### CETUC
```python
ds = load_data('cetuc_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CETUC WER:", wer)
```
CETUC WER: 0.05159097808687998
#### Common Voice
```python
ds = load_data('commonvoice_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CV WER:", wer)
```
CV WER: 0.13659981509705973
#### LaPS
```python
ds = load_data('lapsbm_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Laps WER:", wer)
```
Laps WER: 0.03196969696969697
#### MLS
```python
ds = load_data('mls_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("MLS WER:", wer)
```
MLS WER: 0.1178481066463896
#### SID
```python
ds = load_data('sid_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Sid WER:", wer)
```
Sid WER: 0.09544588416964224
#### TEDx
```python
ds = load_data('tedx_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("TEDx WER:", wer)
```
TEDx WER: 0.24868046340420813
#### VoxForge
```python
ds = load_data('voxforge_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("VoxForge WER:", wer)
```
VoxForge WER: 0.08246076839826841
### Tests with LM
```python
!rm -rf ~/.cache
!gdown --id 1GJIKseP5ZkTbllQVgOL98R4yYAcIySFP # trained with wikipedia
stt = STT(MODEL_NAME, lm='pt-BR-wiki.word.4-gram.arpa')
# !gdown --id 1dLFldy7eguPtyJj5OAlI4Emnx0BpFywg # trained with bp
# stt = STT(MODEL_NAME, lm='pt-BR.word.4-gram.arpa')
```
### Cetuc
```python
ds = load_data('cetuc_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CETUC WER:", wer)
```
CETUC WER: 0.03222801788375573
#### Common Voice
```python
ds = load_data('commonvoice_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CV WER:", wer)
```
CV WER: 0.09713866021093655
#### LaPS
```python
ds = load_data('lapsbm_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Laps WER:", wer)
```
Laps WER: 0.022310606060606065
#### MLS
```python
ds = load_data('mls_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("MLS WER:", wer)
```
MLS WER: 0.11408590958696524
#### SID
```python
ds = load_data('sid_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Sid WER:", wer)
```
Sid WER: 0.12502797252979136
#### TEDx
```python
ds = load_data('tedx_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("TEDx WER:", wer)
```
TEDx WER: 0.24603179403904793
#### VoxForge
```python
ds = load_data('voxforge_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("VoxForge WER:", wer)
```
VoxForge WER: 0.06542207792207791
|
Declan/WallStreetJournal_model_v3 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | 2021-12-29T20:26:44Z | ---
language: pt
tags:
- speech
license: apache-2.0
---
# DistilXLSR-53 for BP
[DistilXLSR-53 for BP: DistilHuBERT applied to Wav2vec XLSR-53 for Brazilian Portuguese](https://github.com/s3prl/s3prl/tree/master/s3prl/upstream/distiller)
The base model was pretrained on 16 kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16 kHz.
**Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data. Check out [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for a more detailed explanation of how to fine-tune the model.
Paper: [DistilHuBERT: Speech Representation Learning by Layer-wise Distillation of Hidden-unit BERT](https://arxiv.org/abs/2110.01900)
Authors: Heng-Jui Chang, Shu-wen Yang, Hung-yi Lee
**Note 2**: The XLSR-53 model was distilled using [Brazilian Portuguese datasets](https://huggingface.co/lgris/bp400-xlsr) for test purposes. The dataset is quite small for such a task (the performance might not be as good as in the [original work](https://arxiv.org/abs/2110.01900)).
**Abstract**
Self-supervised speech representation learning methods like wav2vec 2.0 and Hidden-unit BERT (HuBERT) leverage unlabeled speech data for pre-training and offer good representations for numerous speech processing tasks. Despite the success of these methods, they require large memory and high pre-training costs, making them inaccessible for researchers in academia and small companies. Therefore, this paper introduces DistilHuBERT, a novel multi-task learning framework to distill hidden representations from a HuBERT model directly. This method reduces HuBERT's size by 75% and makes it 73% faster while retaining most performance in ten different tasks. Moreover, DistilHuBERT required little training time and data, opening the possibilities of pre-training personal and on-device SSL models for speech.
# Usage
See [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more information on how to fine-tune the model.
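As a rough illustration, the sketch below extracts frame-level representations from the distilled encoder. It assumes the checkpoint can be loaded with the standard `Wav2Vec2Model`/`Wav2Vec2FeatureExtractor` classes; the checkpoint name and audio path are placeholders.
```python
import torch
import torchaudio
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

checkpoint = "lgris/distilxlsr_bp"  # placeholder: replace with this repository's id
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(checkpoint)
model = Wav2Vec2Model.from_pretrained(checkpoint)

speech, sr = torchaudio.load("example.wav")  # placeholder path
speech = torchaudio.transforms.Resample(orig_freq=sr, new_freq=16_000)(speech)

inputs = feature_extractor(speech.squeeze(0).numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    hidden_states = model(inputs.input_values).last_hidden_state  # (batch, frames, hidden_size)
print(hidden_states.shape)
```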
|
Declan/WallStreetJournal_model_v5 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | null | ---
language:
- pt
license: apache-2.0
tags:
- generated_from_trainer
- hf-asr-leaderboard
- pt
- robust-speech-event
datasets:
- common_voice
model-index:
- name: sew-tiny-portuguese-cv
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 6
type: common_voice
args: pt
metrics:
- name: Test WER
type: wer
value: 30.02
- name: Test CER
type: cer
value: 10.34
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: sv
metrics:
- name: Test WER
type: wer
value: 56.46
- name: Test CER
type: cer
value: 22.94
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: pt
metrics:
- name: Test WER
type: wer
value: 57.17
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: pt
metrics:
- name: Test WER
type: wer
value: 61.3
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sew-tiny-portuguese-cv
This model is a fine-tuned version of [lgris/sew-tiny-pt](https://huggingface.co/lgris/sew-tiny-pt) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5110
- Wer: 0.2842
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the sketch after this list for an approximate `TrainingArguments` equivalent):
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 40000
- mixed_precision_training: Native AMP
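A sketch of how these values might map onto `transformers.TrainingArguments`; argument names follow the standard `Trainer` API, the `output_dir` is a placeholder, and model/data preparation is omitted:
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="sew-tiny-portuguese-cv",  # placeholder
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,  # effective train batch size of 32
    warmup_steps=1000,
    max_steps=40000,
    lr_scheduler_type="linear",
    seed=42,
    fp16=True,  # mixed precision (native AMP)
)
```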
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:-----:|:---------------:|:------:|
| No log | 4.92 | 1000 | 0.8468 | 0.6494 |
| 3.4638 | 9.85 | 2000 | 0.4978 | 0.3815 |
| 3.4638 | 14.78 | 3000 | 0.4734 | 0.3417 |
| 0.9904 | 19.7 | 4000 | 0.4577 | 0.3344 |
| 0.9904 | 24.63 | 5000 | 0.4376 | 0.3170 |
| 0.8849 | 29.55 | 6000 | 0.4225 | 0.3118 |
| 0.8849 | 34.48 | 7000 | 0.4354 | 0.3080 |
| 0.819 | 39.41 | 8000 | 0.4434 | 0.3004 |
| 0.819 | 44.33 | 9000 | 0.4710 | 0.3132 |
| 0.7706 | 49.26 | 10000 | 0.4497 | 0.3064 |
| 0.7706 | 54.19 | 11000 | 0.4598 | 0.3100 |
| 0.7264 | 59.11 | 12000 | 0.4271 | 0.3013 |
| 0.7264 | 64.04 | 13000 | 0.4333 | 0.2959 |
| 0.6909 | 68.96 | 14000 | 0.4554 | 0.3019 |
| 0.6909 | 73.89 | 15000 | 0.4444 | 0.2888 |
| 0.6614 | 78.81 | 16000 | 0.4734 | 0.3081 |
| 0.6614 | 83.74 | 17000 | 0.4820 | 0.3058 |
| 0.6379 | 88.67 | 18000 | 0.4416 | 0.2950 |
| 0.6379 | 93.59 | 19000 | 0.4614 | 0.2974 |
| 0.6055 | 98.52 | 20000 | 0.4812 | 0.3018 |
| 0.6055 | 103.45 | 21000 | 0.4700 | 0.3018 |
| 0.5823 | 108.37 | 22000 | 0.4726 | 0.2999 |
| 0.5823 | 113.3 | 23000 | 0.4979 | 0.2887 |
| 0.5597 | 118.23 | 24000 | 0.4813 | 0.2980 |
| 0.5597 | 123.15 | 25000 | 0.4968 | 0.2972 |
| 0.542 | 128.08 | 26000 | 0.5331 | 0.3059 |
| 0.542 | 133.0 | 27000 | 0.5046 | 0.2978 |
| 0.5185 | 137.93 | 28000 | 0.4882 | 0.2922 |
| 0.5185 | 142.85 | 29000 | 0.4945 | 0.2938 |
| 0.499 | 147.78 | 30000 | 0.4971 | 0.2913 |
| 0.499 | 152.71 | 31000 | 0.4948 | 0.2873 |
| 0.4811 | 157.63 | 32000 | 0.4924 | 0.2918 |
| 0.4811 | 162.56 | 33000 | 0.5128 | 0.2911 |
| 0.4679 | 167.49 | 34000 | 0.5098 | 0.2892 |
| 0.4679 | 172.41 | 35000 | 0.4966 | 0.2863 |
| 0.456 | 177.34 | 36000 | 0.5033 | 0.2839 |
| 0.456 | 182.27 | 37000 | 0.5114 | 0.2875 |
| 0.4453 | 187.19 | 38000 | 0.5154 | 0.2859 |
| 0.4453 | 192.12 | 39000 | 0.5102 | 0.2847 |
| 0.4366 | 197.04 | 40000 | 0.5110 | 0.2842 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
Declan/test_model | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language: pt
tags:
- speech
license: apache-2.0
---
# SEW-tiny-pt
This is a pretrained version of [SEW tiny by ASAPP Research](https://github.com/asappresearch/sew) trained on Brazilian Portuguese audio.
The base model was pretrained on 16 kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16 kHz. Note that this model should be fine-tuned on a downstream task, such as Automatic Speech Recognition, Speaker Identification, Intent Classification, or Emotion Recognition.
Paper: [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870)
Authors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi
**Abstract**
This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes.
The original model can be found under https://github.com/asappresearch/sew#model-checkpoints .
# Usage
See [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more information on how to fine-tune the model. Note that the class `Wav2Vec2ForCTC` has to be replaced by `SEWForCTC`.
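A minimal sketch of that substitution. Building the tokenizer/processor from labeled text and the training loop are left to the linked blog; the vocabulary size below is a placeholder.
```python
from transformers import SEWForCTC

# The CTC head is newly initialized here and must be fine-tuned on labeled data.
model = SEWForCTC.from_pretrained(
    "lgris/sew-tiny-pt",
    vocab_size=40,  # placeholder: set to the size of your CTC vocabulary
)
```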
|
Declan/test_push | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language:
- pt
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- robust-speech-event
- pt
- hf-asr-leaderboard
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-pt-cv
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 6
type: common_voice
args: pt
metrics:
- name: Test WER
type: wer
value: 24.29
- name: Test CER
type: cer
value: 7.51
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: sv
metrics:
- name: Test WER
type: wer
value: 55.72
- name: Test CER
type: cer
value: 21.82
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: pt
metrics:
- name: Test WER
type: wer
value: 47.88
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: pt
metrics:
- name: Test WER
type: wer
value: 50.78
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-pt-cv
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3418
- Wer: 0.3581
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 10.9035 | 0.2 | 100 | 4.2750 | 1.0 |
| 3.3275 | 0.41 | 200 | 3.0334 | 1.0 |
| 3.0016 | 0.61 | 300 | 2.9494 | 1.0 |
| 2.1874 | 0.82 | 400 | 1.4355 | 0.8721 |
| 1.09 | 1.02 | 500 | 0.9987 | 0.7165 |
| 0.8251 | 1.22 | 600 | 0.7886 | 0.6406 |
| 0.6927 | 1.43 | 700 | 0.6753 | 0.5801 |
| 0.6143 | 1.63 | 800 | 0.6300 | 0.5509 |
| 0.5451 | 1.84 | 900 | 0.5586 | 0.5156 |
| 0.5003 | 2.04 | 1000 | 0.5493 | 0.5027 |
| 0.3712 | 2.24 | 1100 | 0.5271 | 0.4872 |
| 0.3486 | 2.45 | 1200 | 0.4953 | 0.4817 |
| 0.3498 | 2.65 | 1300 | 0.4619 | 0.4538 |
| 0.3112 | 2.86 | 1400 | 0.4570 | 0.4387 |
| 0.3013 | 3.06 | 1500 | 0.4437 | 0.4147 |
| 0.2136 | 3.27 | 1600 | 0.4176 | 0.4124 |
| 0.2131 | 3.47 | 1700 | 0.4281 | 0.4194 |
| 0.2099 | 3.67 | 1800 | 0.3864 | 0.3949 |
| 0.1925 | 3.88 | 1900 | 0.3926 | 0.3913 |
| 0.1709 | 4.08 | 2000 | 0.3764 | 0.3804 |
| 0.1406 | 4.29 | 2100 | 0.3787 | 0.3742 |
| 0.1342 | 4.49 | 2200 | 0.3645 | 0.3693 |
| 0.1305 | 4.69 | 2300 | 0.3463 | 0.3625 |
| 0.1298 | 4.9 | 2400 | 0.3418 | 0.3581 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
DeepBasak/Slack | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
- pt
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: wav2vec2-large-xlsr-coraa-portuguese-cv7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-coraa-portuguese-cv7
This model is a fine-tuned version of [Edresson/wav2vec2-large-xlsr-coraa-portuguese](https://huggingface.co/Edresson/wav2vec2-large-xlsr-coraa-portuguese) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1777
- Wer: 0.1339
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 5000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.4779 | 0.13 | 100 | 0.2620 | 0.2020 |
| 0.4505 | 0.26 | 200 | 0.2339 | 0.1998 |
| 0.4285 | 0.39 | 300 | 0.2507 | 0.2109 |
| 0.4148 | 0.52 | 400 | 0.2311 | 0.2101 |
| 0.4072 | 0.65 | 500 | 0.2278 | 0.1899 |
| 0.388 | 0.78 | 600 | 0.2193 | 0.1898 |
| 0.3952 | 0.91 | 700 | 0.2108 | 0.1901 |
| 0.3851 | 1.04 | 800 | 0.2121 | 0.1788 |
| 0.3496 | 1.17 | 900 | 0.2154 | 0.1776 |
| 0.3063 | 1.3 | 1000 | 0.2095 | 0.1730 |
| 0.3376 | 1.43 | 1100 | 0.2129 | 0.1801 |
| 0.3273 | 1.56 | 1200 | 0.2132 | 0.1776 |
| 0.3347 | 1.69 | 1300 | 0.2054 | 0.1698 |
| 0.323 | 1.82 | 1400 | 0.1986 | 0.1724 |
| 0.3079 | 1.95 | 1500 | 0.2005 | 0.1701 |
| 0.3029 | 2.08 | 1600 | 0.2159 | 0.1644 |
| 0.2694 | 2.21 | 1700 | 0.1992 | 0.1678 |
| 0.2733 | 2.34 | 1800 | 0.2032 | 0.1657 |
| 0.269 | 2.47 | 1900 | 0.2056 | 0.1592 |
| 0.2869 | 2.6 | 2000 | 0.2058 | 0.1616 |
| 0.2813 | 2.73 | 2100 | 0.1868 | 0.1584 |
| 0.2616 | 2.86 | 2200 | 0.1841 | 0.1550 |
| 0.2809 | 2.99 | 2300 | 0.1902 | 0.1577 |
| 0.2598 | 3.12 | 2400 | 0.1910 | 0.1514 |
| 0.24 | 3.25 | 2500 | 0.1971 | 0.1555 |
| 0.2481 | 3.38 | 2600 | 0.1853 | 0.1537 |
| 0.2437 | 3.51 | 2700 | 0.1897 | 0.1496 |
| 0.2384 | 3.64 | 2800 | 0.1842 | 0.1495 |
| 0.2405 | 3.77 | 2900 | 0.1884 | 0.1500 |
| 0.2372 | 3.9 | 3000 | 0.1950 | 0.1548 |
| 0.229 | 4.03 | 3100 | 0.1928 | 0.1477 |
| 0.2047 | 4.16 | 3200 | 0.1891 | 0.1472 |
| 0.2102 | 4.29 | 3300 | 0.1930 | 0.1473 |
| 0.199 | 4.42 | 3400 | 0.1914 | 0.1456 |
| 0.2121 | 4.55 | 3500 | 0.1840 | 0.1437 |
| 0.211 | 4.67 | 3600 | 0.1843 | 0.1403 |
| 0.2072 | 4.8 | 3700 | 0.1836 | 0.1428 |
| 0.2224 | 4.93 | 3800 | 0.1747 | 0.1412 |
| 0.1974 | 5.06 | 3900 | 0.1813 | 0.1416 |
| 0.1895 | 5.19 | 4000 | 0.1869 | 0.1406 |
| 0.1763 | 5.32 | 4100 | 0.1830 | 0.1394 |
| 0.2001 | 5.45 | 4200 | 0.1775 | 0.1394 |
| 0.1909 | 5.58 | 4300 | 0.1806 | 0.1373 |
| 0.1812 | 5.71 | 4400 | 0.1784 | 0.1359 |
| 0.1737 | 5.84 | 4500 | 0.1778 | 0.1353 |
| 0.1915 | 5.97 | 4600 | 0.1777 | 0.1349 |
| 0.1921 | 6.1 | 4700 | 0.1784 | 0.1359 |
| 0.1805 | 6.23 | 4800 | 0.1757 | 0.1348 |
| 0.1742 | 6.36 | 4900 | 0.1771 | 0.1341 |
| 0.1709 | 6.49 | 5000 | 0.1777 | 0.1339 |
### Framework versions
- Transformers 4.16.1
- Pytorch 1.10.0+cu111
- Datasets 1.18.2
- Tokenizers 0.11.0
|
DeepChem/SmilesTokenizer_PubChem_1M | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 227 | null | ---
language:
- gn
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- gn
- robust-speech-event
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: wav2vec2-xls-r-300m-gn-cv8-4
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8.0
type: mozilla-foundation/common_voice_8_0
args: gn
metrics:
- name: Test WER
type: wer
value: 68.45
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-gn-cv8-4
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5805
- Wer: 0.7545
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 13000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:-----:|:---------------:|:------:|
| 9.2216 | 16.65 | 300 | 3.2771 | 1.0 |
| 3.1804 | 33.32 | 600 | 2.2869 | 1.0 |
| 1.5856 | 49.97 | 900 | 0.9573 | 0.8772 |
| 1.0299 | 66.65 | 1200 | 0.9044 | 0.8082 |
| 0.8916 | 83.32 | 1500 | 0.9478 | 0.8056 |
| 0.8451 | 99.97 | 1800 | 0.8814 | 0.8107 |
| 0.7649 | 116.65 | 2100 | 0.9897 | 0.7826 |
| 0.7185 | 133.32 | 2400 | 0.9988 | 0.7621 |
| 0.6595 | 149.97 | 2700 | 1.0607 | 0.7749 |
| 0.6211 | 166.65 | 3000 | 1.1826 | 0.7877 |
| 0.59 | 183.32 | 3300 | 1.1060 | 0.7826 |
| 0.5383 | 199.97 | 3600 | 1.1826 | 0.7852 |
| 0.5205 | 216.65 | 3900 | 1.2148 | 0.8261 |
| 0.4786 | 233.32 | 4200 | 1.2710 | 0.7928 |
| 0.4482 | 249.97 | 4500 | 1.1943 | 0.7980 |
| 0.4149 | 266.65 | 4800 | 1.2449 | 0.8031 |
| 0.3904 | 283.32 | 5100 | 1.3100 | 0.7928 |
| 0.3619 | 299.97 | 5400 | 1.3125 | 0.7596 |
| 0.3496 | 316.65 | 5700 | 1.3699 | 0.7877 |
| 0.3277 | 333.32 | 6000 | 1.4344 | 0.8031 |
| 0.2958 | 349.97 | 6300 | 1.4093 | 0.7980 |
| 0.2883 | 366.65 | 6600 | 1.3296 | 0.7570 |
| 0.2598 | 383.32 | 6900 | 1.4026 | 0.7980 |
| 0.2564 | 399.97 | 7200 | 1.4847 | 0.8031 |
| 0.2408 | 416.65 | 7500 | 1.4896 | 0.8107 |
| 0.2266 | 433.32 | 7800 | 1.4232 | 0.7698 |
| 0.224 | 449.97 | 8100 | 1.5560 | 0.7903 |
| 0.2038 | 466.65 | 8400 | 1.5355 | 0.7724 |
| 0.1948 | 483.32 | 8700 | 1.4624 | 0.7621 |
| 0.1995 | 499.97 | 9000 | 1.5808 | 0.7724 |
| 0.1864 | 516.65 | 9300 | 1.5653 | 0.7698 |
| 0.18 | 533.32 | 9600 | 1.4868 | 0.7494 |
| 0.1689 | 549.97 | 9900 | 1.5379 | 0.7749 |
| 0.1624 | 566.65 | 10200 | 1.5936 | 0.7749 |
| 0.1537 | 583.32 | 10500 | 1.6436 | 0.7801 |
| 0.1455 | 599.97 | 10800 | 1.6401 | 0.7673 |
| 0.1437 | 616.65 | 11100 | 1.6069 | 0.7673 |
| 0.1452 | 633.32 | 11400 | 1.6041 | 0.7519 |
| 0.139 | 649.97 | 11700 | 1.5758 | 0.7545 |
| 0.1299 | 666.65 | 12000 | 1.5559 | 0.7545 |
| 0.127 | 683.32 | 12300 | 1.5776 | 0.7596 |
| 0.1264 | 699.97 | 12600 | 1.5790 | 0.7519 |
| 0.1209 | 716.65 | 12900 | 1.5805 | 0.7545 |
### Framework versions
- Transformers 4.16.1
- Pytorch 1.10.0+cu111
- Datasets 1.18.2
- Tokenizers 0.11.0
|
DeepESP/gpt2-spanish-medium | [
"pytorch",
"tf",
"jax",
"gpt2",
"text-generation",
"es",
"dataset:ebooks",
"transformers",
"GPT-2",
"Spanish",
"ebooks",
"nlg",
"license:mit"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 340 | null | ---
language:
- gn
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- gn
- robust-speech-event
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: wav2vec2-xls-r-300m-gn-cv8
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: pt
metrics:
- name: Test WER
type: wer
value: 69.05
- name: Test CER
type: cer
value: 14.7
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8.0
type: mozilla-foundation/common_voice_8_0
args: gn
metrics:
- name: Test WER
type: wer
value: 69.05
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-gn-cv8
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9392
- Wer: 0.7033
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 20.0601 | 5.54 | 100 | 5.1622 | 1.0 |
| 3.7052 | 11.11 | 200 | 3.2869 | 1.0 |
| 3.3275 | 16.65 | 300 | 3.2162 | 1.0 |
| 3.2984 | 22.22 | 400 | 3.1638 | 1.0 |
| 3.1111 | 27.76 | 500 | 2.5541 | 1.0 |
| 2.238 | 33.32 | 600 | 1.2198 | 0.9616 |
| 1.5284 | 38.86 | 700 | 0.9571 | 0.8593 |
| 1.2735 | 44.43 | 800 | 0.8719 | 0.8363 |
| 1.1269 | 49.97 | 900 | 0.8334 | 0.7954 |
| 1.0427 | 55.54 | 1000 | 0.7700 | 0.7749 |
| 1.0152 | 61.11 | 1100 | 0.7747 | 0.7877 |
| 0.943 | 66.65 | 1200 | 0.7151 | 0.7442 |
| 0.9132 | 72.22 | 1300 | 0.7224 | 0.7289 |
| 0.8397 | 77.76 | 1400 | 0.7354 | 0.7059 |
| 0.8577 | 83.32 | 1500 | 0.7285 | 0.7263 |
| 0.7931 | 88.86 | 1600 | 0.7863 | 0.7084 |
| 0.7995 | 94.43 | 1700 | 0.7562 | 0.6880 |
| 0.799 | 99.97 | 1800 | 0.7905 | 0.7059 |
| 0.7373 | 105.54 | 1900 | 0.7791 | 0.7161 |
| 0.749 | 111.11 | 2000 | 0.8125 | 0.7161 |
| 0.6925 | 116.65 | 2100 | 0.7722 | 0.6905 |
| 0.7034 | 122.22 | 2200 | 0.8989 | 0.7136 |
| 0.6745 | 127.76 | 2300 | 0.8270 | 0.6982 |
| 0.6837 | 133.32 | 2400 | 0.8569 | 0.7161 |
| 0.6689 | 138.86 | 2500 | 0.8339 | 0.6982 |
| 0.6471 | 144.43 | 2600 | 0.8441 | 0.7110 |
| 0.615 | 149.97 | 2700 | 0.9038 | 0.7212 |
| 0.6477 | 155.54 | 2800 | 0.9089 | 0.7059 |
| 0.6047 | 161.11 | 2900 | 0.9149 | 0.7059 |
| 0.5613 | 166.65 | 3000 | 0.8582 | 0.7263 |
| 0.6017 | 172.22 | 3100 | 0.8787 | 0.7084 |
| 0.5546 | 177.76 | 3200 | 0.8753 | 0.6957 |
| 0.5747 | 183.32 | 3300 | 0.9167 | 0.7212 |
| 0.5535 | 188.86 | 3400 | 0.8448 | 0.6905 |
| 0.5331 | 194.43 | 3500 | 0.8644 | 0.7161 |
| 0.5428 | 199.97 | 3600 | 0.8730 | 0.7033 |
| 0.5219 | 205.54 | 3700 | 0.9047 | 0.6982 |
| 0.5158 | 211.11 | 3800 | 0.8706 | 0.7033 |
| 0.5107 | 216.65 | 3900 | 0.9139 | 0.7084 |
| 0.4903 | 222.22 | 4000 | 0.9456 | 0.7315 |
| 0.4772 | 227.76 | 4100 | 0.9475 | 0.7161 |
| 0.4713 | 233.32 | 4200 | 0.9237 | 0.7059 |
| 0.4743 | 238.86 | 4300 | 0.9305 | 0.6957 |
| 0.4705 | 244.43 | 4400 | 0.9561 | 0.7110 |
| 0.4908 | 249.97 | 4500 | 0.9389 | 0.7084 |
| 0.4717 | 255.54 | 4600 | 0.9234 | 0.6982 |
| 0.4462 | 261.11 | 4700 | 0.9323 | 0.6957 |
| 0.4556 | 266.65 | 4800 | 0.9432 | 0.7033 |
| 0.4691 | 272.22 | 4900 | 0.9389 | 0.7059 |
| 0.4601 | 277.76 | 5000 | 0.9392 | 0.7033 |
### Framework versions
- Transformers 4.16.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.1
- Tokenizers 0.11.0
|
DeltaHub/lora_t5-base_mrpc | [
"pytorch",
"transformers"
] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | 2021-01-08T11:49:52Z | ---
language:
- multilingual
- pt
- en
tags:
- xlm-roberta-base
- semantic role labeling
- finetuned
license: apache-2.0
datasets:
- PropBank.Br
- CoNLL-2012
metrics:
- F1 Measure
---
# XLM-R base fine-tuned on English and Portuguese semantic role labeling
## Model description
This model is [`xlm-roberta-base`](https://huggingface.co/xlm-roberta-base) fine-tuned first on the English CoNLL-formatted OntoNotes v5.0 semantic role labeling data and then on the PropBank.Br data. It is part of a project that resulted in the following models:
* [liaad/srl-pt_bertimbau-base](https://huggingface.co/liaad/srl-pt_bertimbau-base)
* [liaad/srl-pt_bertimbau-large](https://huggingface.co/liaad/srl-pt_bertimbau-large)
* [liaad/srl-pt_xlmr-base](https://huggingface.co/liaad/srl-pt_xlmr-base)
* [liaad/srl-pt_xlmr-large](https://huggingface.co/liaad/srl-pt_xlmr-large)
* [liaad/srl-pt_mbert-base](https://huggingface.co/liaad/srl-pt_mbert-base)
* [liaad/srl-en_xlmr-base](https://huggingface.co/liaad/srl-en_xlmr-base)
* [liaad/srl-en_xlmr-large](https://huggingface.co/liaad/srl-en_xlmr-large)
* [liaad/srl-en_mbert-base](https://huggingface.co/liaad/srl-en_mbert-base)
* [liaad/srl-enpt_xlmr-base](https://huggingface.co/liaad/srl-enpt_xlmr-base)
* [liaad/srl-enpt_xlmr-large](https://huggingface.co/liaad/srl-enpt_xlmr-large)
* [liaad/srl-enpt_mbert-base](https://huggingface.co/liaad/srl-enpt_mbert-base)
* [liaad/ud_srl-pt_bertimbau-large](https://huggingface.co/liaad/ud_srl-pt_bertimbau-large)
* [liaad/ud_srl-pt_xlmr-large](https://huggingface.co/liaad/ud_srl-pt_xlmr-large)
* [liaad/ud_srl-enpt_xlmr-large](https://huggingface.co/liaad/ud_srl-enpt_xlmr-large)
For more information, please see the accompanying article (See BibTeX entry and citation info below) and the [project's github](https://github.com/asofiaoliveira/srl_bert_pt).
## Intended uses & limitations
#### How to use
To use the transformers portion of this model:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("liaad/srl-enpt_xlmr-base")
model = AutoModel.from_pretrained("liaad/srl-enpt_xlmr-base")
```
To use the full SRL model (transformers portion + a decoding layer), refer to the [project's github](https://github.com/asofiaoliveira/srl_bert_pt).
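For illustration only, a hedged sketch of running the transformers portion above on a sentence to obtain token-level contextual embeddings (the example sentence is arbitrary; the SRL decoding layer lives in the project's repository and is not included here):
```python
import torch

sentence = "A Maria comprou um carro novo."  # arbitrary example sentence
inputs = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, num_tokens, hidden_size)
```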
#### Limitations and bias
- This model does not include a TensorFlow version, because the "type_vocab_size" in this model was changed (from 1 to 2) and, therefore, it cannot be easily converted to TensorFlow.
- The English data was preprocessed to match the Portuguese data, so there are some differences in role attributions and some roles were removed from the data.
## Training procedure
The model was first fine-tuned on the CoNLL-2012 dataset, preprocessed to match the Portuguese PropBank.Br data; it was then fine-tuned on the PropBank.Br dataset using 10-fold cross-validation. The resulting models were tested on the folds as well as on a smaller opinion dataset, "Buscapé". For more information, please see the accompanying article (See BibTeX entry and citation info below) and the [project's github](https://github.com/asofiaoliveira/srl_bert_pt).
## Eval results
| Model Name | F<sub>1</sub> CV PropBank.Br (in domain) | F<sub>1</sub> Buscapé (out of domain) |
| --------------- | ------ | ----- |
| `srl-pt_bertimbau-base` | 76.30 | 73.33 |
| `srl-pt_bertimbau-large` | 77.42 | 74.85 |
| `srl-pt_xlmr-base` | 75.22 | 72.82 |
| `srl-pt_xlmr-large` | 77.59 | 73.84 |
| `srl-pt_mbert-base` | 72.76 | 66.89 |
| `srl-en_xlmr-base` | 66.59 | 65.24 |
| `srl-en_xlmr-large` | 67.60 | 64.94 |
| `srl-en_mbert-base` | 63.07 | 58.56 |
| `srl-enpt_xlmr-base` | 76.50 | 73.74 |
| `srl-enpt_xlmr-large` | **78.22** | 74.55 |
| `srl-enpt_mbert-base` | 74.88 | 69.19 |
| `ud_srl-pt_bertimbau-large` | 77.53 | 74.49 |
| `ud_srl-pt_xlmr-large` | 77.69 | 74.91 |
| `ud_srl-enpt_xlmr-large` | 77.97 | **75.05** |
### BibTeX entry and citation info
```bibtex
@misc{oliveira2021transformers,
title={Transformers and Transfer Learning for Improving Portuguese Semantic Role Labeling},
author={Sofia Oliveira and Daniel Loureiro and Alípio Jorge},
year={2021},
eprint={2101.01213},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
DemangeJeremy/4-sentiments-with-flaubert | [
"pytorch",
"flaubert",
"text-classification",
"fr",
"transformers",
"sentiments",
"french",
"flaubert-large"
] | text-classification | {
"architectures": [
"FlaubertForSequenceClassification"
],
"model_type": "flaubert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 226 | null | ---
language:
- multilingual
- pt
- en
tags:
- xlm-roberta-large
- semantic role labeling
- finetuned
license: apache-2.0
datasets:
- PropBank.Br
- CoNLL-2012
metrics:
- F1 Measure
---
# XLM-R large fine-tuned on English and Portuguese semantic role labeling
## Model description
This model is [`xlm-roberta-large`](https://huggingface.co/xlm-roberta-large) fine-tuned first on the English CoNLL-formatted OntoNotes v5.0 semantic role labeling data and then on the PropBank.Br data. It is part of a project that resulted in the following models:
* [liaad/srl-pt_bertimbau-base](https://huggingface.co/liaad/srl-pt_bertimbau-base)
* [liaad/srl-pt_bertimbau-large](https://huggingface.co/liaad/srl-pt_bertimbau-large)
* [liaad/srl-pt_xlmr-base](https://huggingface.co/liaad/srl-pt_xlmr-base)
* [liaad/srl-pt_xlmr-large](https://huggingface.co/liaad/srl-pt_xlmr-large)
* [liaad/srl-pt_mbert-base](https://huggingface.co/liaad/srl-pt_mbert-base)
* [liaad/srl-en_xlmr-base](https://huggingface.co/liaad/srl-en_xlmr-base)
* [liaad/srl-en_xlmr-large](https://huggingface.co/liaad/srl-en_xlmr-large)
* [liaad/srl-en_mbert-base](https://huggingface.co/liaad/srl-en_mbert-base)
* [liaad/srl-enpt_xlmr-base](https://huggingface.co/liaad/srl-enpt_xlmr-base)
* [liaad/srl-enpt_xlmr-large](https://huggingface.co/liaad/srl-enpt_xlmr-large)
* [liaad/srl-enpt_mbert-base](https://huggingface.co/liaad/srl-enpt_mbert-base)
* [liaad/ud_srl-pt_bertimbau-large](https://huggingface.co/liaad/ud_srl-pt_bertimbau-large)
* [liaad/ud_srl-pt_xlmr-large](https://huggingface.co/liaad/ud_srl-pt_xlmr-large)
* [liaad/ud_srl-enpt_xlmr-large](https://huggingface.co/liaad/ud_srl-enpt_xlmr-large)
For more information, please see the accompanying article (See BibTeX entry and citation info below) and the [project's github](https://github.com/asofiaoliveira/srl_bert_pt).
## Intended uses & limitations
#### How to use
To use the transformers portion of this model:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("liaad/srl-enpt_xlmr-large")
model = AutoModel.from_pretrained("liaad/srl-enpt_xlmr-large")
```
To use the full SRL model (transformers portion + a decoding layer), refer to the [project's github](https://github.com/asofiaoliveira/srl_bert_pt).
#### Limitations and bias
- This model does not include a TensorFlow version, because the "type_vocab_size" in this model was changed (from 1 to 2) and, therefore, it cannot be easily converted to TensorFlow.
- The English data was preprocessed to match the Portuguese data, so there are some differences in role attributions and some roles were removed from the data.
## Training procedure
The model was first fine-tuned on the CoNLL-2012 dataset, preprocessed to match the Portuguese PropBank.Br data; it was then fine-tuned on the PropBank.Br dataset using 10-fold cross-validation. The resulting models were tested on the folds as well as on a smaller opinion dataset, "Buscapé". For more information, please see the accompanying article (See BibTeX entry and citation info below) and the [project's github](https://github.com/asofiaoliveira/srl_bert_pt).
## Eval results
| Model Name | F<sub>1</sub> CV PropBank.Br (in domain) | F<sub>1</sub> Buscapé (out of domain) |
| --------------- | ------ | ----- |
| `srl-pt_bertimbau-base` | 76.30 | 73.33 |
| `srl-pt_bertimbau-large` | 77.42 | 74.85 |
| `srl-pt_xlmr-base` | 75.22 | 72.82 |
| `srl-pt_xlmr-large` | 77.59 | 73.84 |
| `srl-pt_mbert-base` | 72.76 | 66.89 |
| `srl-en_xlmr-base` | 66.59 | 65.24 |
| `srl-en_xlmr-large` | 67.60 | 64.94 |
| `srl-en_mbert-base` | 63.07 | 58.56 |
| `srl-enpt_xlmr-base` | 76.50 | 73.74 |
| `srl-enpt_xlmr-large` | **78.22** | 74.55 |
| `srl-enpt_mbert-base` | 74.88 | 69.19 |
| `ud_srl-pt_bertimbau-large` | 77.53 | 74.49 |
| `ud_srl-pt_xlmr-large` | 77.69 | 74.91 |
| `ud_srl-enpt_xlmr-large` | 77.97 | **75.05** |
### BibTeX entry and citation info
```bibtex
@misc{oliveira2021transformers,
title={Transformers and Transfer Learning for Improving Portuguese Semantic Role Labeling},
author={Sofia Oliveira and Daniel Loureiro and Alípio Jorge},
year={2021},
eprint={2101.01213},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
Deniskin/essays_small_2000 | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language:
- multilingual
- pt
tags:
- bert-base-multilingual-cased
- semantic role labeling
- finetuned
license: apache-2.0
datasets:
- PropBank.Br
metrics:
- F1 Measure
---
# mBERT fine-tuned on Portuguese semantic role labeling
## Model description
This model is [`bert-base-multilingual-cased`](https://huggingface.co/bert-base-multilingual-cased) fine-tuned on Portuguese semantic role labeling data. It is part of a project that produced the following models:
* [liaad/srl-pt_bertimbau-base](https://huggingface.co/liaad/srl-pt_bertimbau-base)
* [liaad/srl-pt_bertimbau-large](https://huggingface.co/liaad/srl-pt_bertimbau-large)
* [liaad/srl-pt_xlmr-base](https://huggingface.co/liaad/srl-pt_xlmr-base)
* [liaad/srl-pt_xlmr-large](https://huggingface.co/liaad/srl-pt_xlmr-large)
* [liaad/srl-pt_mbert-base](https://huggingface.co/liaad/srl-pt_mbert-base)
* [liaad/srl-en_xlmr-base](https://huggingface.co/liaad/srl-en_xlmr-base)
* [liaad/srl-en_xlmr-large](https://huggingface.co/liaad/srl-en_xlmr-large)
* [liaad/srl-en_mbert-base](https://huggingface.co/liaad/srl-en_mbert-base)
* [liaad/srl-enpt_xlmr-base](https://huggingface.co/liaad/srl-enpt_xlmr-base)
* [liaad/srl-enpt_xlmr-large](https://huggingface.co/liaad/srl-enpt_xlmr-large)
* [liaad/srl-enpt_mbert-base](https://huggingface.co/liaad/srl-enpt_mbert-base)
* [liaad/ud_srl-pt_bertimbau-large](https://huggingface.co/liaad/ud_srl-pt_bertimbau-large)
* [liaad/ud_srl-pt_xlmr-large](https://huggingface.co/liaad/ud_srl-pt_xlmr-large)
* [liaad/ud_srl-enpt_xlmr-large](https://huggingface.co/liaad/ud_srl-enpt_xlmr-large)
For more information, please see the accompanying article (See BibTeX entry and citation info below) and the [project's github](https://github.com/asofiaoliveira/srl_bert_pt).
## Intended uses & limitations
#### How to use
To use the transformers portion of this model:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("liaad/srl-pt_mbert-base")
model = AutoModel.from_pretrained("liaad/srl-pt_mbert-base")
```
To use the full SRL model (transformers portion + a decoding layer), refer to the [project's github](https://github.com/asofiaoliveira/srl_bert_pt).
## Training procedure
The model was trained on the PropBank.Br datasets, using 10-fold Cross-Validation. The 10 resulting models were tested on the folds as well as on a smaller opinion dataset "Buscapé". For more information, please see the accompanying article (See BibTeX entry and citation info below) and the [project's github](https://github.com/asofiaoliveira/srl_bert_pt).
## Eval results
| Model Name | F<sub>1</sub> CV PropBank.Br (in domain) | F<sub>1</sub> Buscapé (out of domain) |
| --------------- | ------ | ----- |
| `srl-pt_bertimbau-base` | 76.30 | 73.33 |
| `srl-pt_bertimbau-large` | 77.42 | 74.85 |
| `srl-pt_xlmr-base` | 75.22 | 72.82 |
| `srl-pt_xlmr-large` | 77.59 | 73.84 |
| `srl-pt_mbert-base` | 72.76 | 66.89 |
| `srl-en_xlmr-base` | 66.59 | 65.24 |
| `srl-en_xlmr-large` | 67.60 | 64.94 |
| `srl-en_mbert-base` | 63.07 | 58.56 |
| `srl-enpt_xlmr-base` | 76.50 | 73.74 |
| `srl-enpt_xlmr-large` | **78.22** | 74.55 |
| `srl-enpt_mbert-base` | 74.88 | 69.19 |
| `ud_srl-pt_bertimbau-large` | 77.53 | 74.49 |
| `ud_srl-pt_xlmr-large` | 77.69 | 74.91 |
| `ud_srl-enpt_xlmr-large` | 77.97 | **75.05** |
### BibTeX entry and citation info
```bibtex
@misc{oliveira2021transformers,
title={Transformers and Transfer Learning for Improving Portuguese Semantic Role Labeling},
author={Sofia Oliveira and Daniel Loureiro and Alípio Jorge},
year={2021},
eprint={2101.01213},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
Deniskin/essays_small_2000i | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language:
- multilingual
- pt
tags:
- xlm-roberta-base
- semantic role labeling
- finetuned
license: apache-2.0
datasets:
- PropBank.Br
metrics:
- F1 Measure
---
# XLM-R base fine-tuned on Portuguese semantic role labeling
## Model description
This model is [`xlm-roberta-base`](https://huggingface.co/xlm-roberta-base) fine-tuned on Portuguese semantic role labeling data. It is part of a project that produced the following models:
* [liaad/srl-pt_bertimbau-base](https://huggingface.co/liaad/srl-pt_bertimbau-base)
* [liaad/srl-pt_bertimbau-large](https://huggingface.co/liaad/srl-pt_bertimbau-large)
* [liaad/srl-pt_xlmr-base](https://huggingface.co/liaad/srl-pt_xlmr-base)
* [liaad/srl-pt_xlmr-large](https://huggingface.co/liaad/srl-pt_xlmr-large)
* [liaad/srl-pt_mbert-base](https://huggingface.co/liaad/srl-pt_mbert-base)
* [liaad/srl-en_xlmr-base](https://huggingface.co/liaad/srl-en_xlmr-base)
* [liaad/srl-en_xlmr-large](https://huggingface.co/liaad/srl-en_xlmr-large)
* [liaad/srl-en_mbert-base](https://huggingface.co/liaad/srl-en_mbert-base)
* [liaad/srl-enpt_xlmr-base](https://huggingface.co/liaad/srl-enpt_xlmr-base)
* [liaad/srl-enpt_xlmr-large](https://huggingface.co/liaad/srl-enpt_xlmr-large)
* [liaad/srl-enpt_mbert-base](https://huggingface.co/liaad/srl-enpt_mbert-base)
* [liaad/ud_srl-pt_bertimbau-large](https://huggingface.co/liaad/ud_srl-pt_bertimbau-large)
* [liaad/ud_srl-pt_xlmr-large](https://huggingface.co/liaad/ud_srl-pt_xlmr-large)
* [liaad/ud_srl-enpt_xlmr-large](https://huggingface.co/liaad/ud_srl-enpt_xlmr-large)
For more information, please see the accompanying article (See BibTeX entry and citation info below) and the [project's github](https://github.com/asofiaoliveira/srl_bert_pt).
## Intended uses & limitations
#### How to use
To use the transformers portion of this model:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("liaad/srl-pt_xlmr-base")
model = AutoModel.from_pretrained("liaad/srl-pt_xlmr-base")
```
To use the full SRL model (transformers portion + a decoding layer), refer to the [project's github](https://github.com/asofiaoliveira/srl_bert_pt).
#### Limitations and bias
- This model does not include a TensorFlow version. This is because the "type_vocab_size" in this model was changed (from 1 to 2) and, therefore, it cannot be easily converted to TensorFlow.
## Training procedure
The model was trained on the PropBank.Br datasets, using 10-fold Cross-Validation. The 10 resulting models were tested on the folds as well as on a smaller opinion dataset "Buscapé". For more information, please see the accompanying article (See BibTeX entry and citation info below) and the [project's github](https://github.com/asofiaoliveira/srl_bert_pt).
## Eval results
| Model Name | F<sub>1</sub> CV PropBank.Br (in domain) | F<sub>1</sub> Buscapé (out of domain) |
| --------------- | ------ | ----- |
| `srl-pt_bertimbau-base` | 76.30 | 73.33 |
| `srl-pt_bertimbau-large` | 77.42 | 74.85 |
| `srl-pt_xlmr-base` | 75.22 | 72.82 |
| `srl-pt_xlmr-large` | 77.59 | 73.84 |
| `srl-pt_mbert-base` | 72.76 | 66.89 |
| `srl-en_xlmr-base` | 66.59 | 65.24 |
| `srl-en_xlmr-large` | 67.60 | 64.94 |
| `srl-en_mbert-base` | 63.07 | 58.56 |
| `srl-enpt_xlmr-base` | 76.50 | 73.74 |
| `srl-enpt_xlmr-large` | **78.22** | 74.55 |
| `srl-enpt_mbert-base` | 74.88 | 69.19 |
| `ud_srl-pt_bertimbau-large` | 77.53 | 74.49 |
| `ud_srl-pt_xlmr-large` | 77.69 | 74.91 |
| `ud_srl-enpt_xlmr-large` | 77.97 | **75.05** |
### BibTeX entry and citation info
```bibtex
@misc{oliveira2021transformers,
title={Transformers and Transfer Learning for Improving Portuguese Semantic Role Labeling},
author={Sofia Oliveira and Daniel Loureiro and Alípio Jorge},
year={2021},
eprint={2101.01213},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
Denny29/DialoGPT-medium-asunayuuki | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | null | ---
language:
- multilingual
- pt
- en
tags:
- xlm-roberta-large
- semantic role labeling
- finetuned
- dependency parsing
license: apache-2.0
datasets:
- PropBank.Br
- CoNLL-2012
- Universal Dependencies
metrics: f1
---
# XLM-R large fine-tuned in Portuguese Universal Dependencies and English and Portuguese semantic role labeling
## Model description
This model is [`xlm-roberta-large`](https://huggingface.co/xlm-roberta-large) fine-tuned first on the Universal Dependencies Portuguese dataset, then on the CoNLL-formatted OntoNotes v5.0 data, and finally on the PropBank.Br data. It is part of a project that produced the following models:
* [liaad/srl-pt_bertimbau-base](https://huggingface.co/liaad/srl-pt_bertimbau-base)
* [liaad/srl-pt_bertimbau-large](https://huggingface.co/liaad/srl-pt_bertimbau-large)
* [liaad/srl-pt_xlmr-base](https://huggingface.co/liaad/srl-pt_xlmr-base)
* [liaad/srl-pt_xlmr-large](https://huggingface.co/liaad/srl-pt_xlmr-large)
* [liaad/srl-pt_mbert-base](https://huggingface.co/liaad/srl-pt_mbert-base)
* [liaad/srl-en_xlmr-base](https://huggingface.co/liaad/srl-en_xlmr-base)
* [liaad/srl-en_xlmr-large](https://huggingface.co/liaad/srl-en_xlmr-large)
* [liaad/srl-en_mbert-base](https://huggingface.co/liaad/srl-en_mbert-base)
* [liaad/srl-enpt_xlmr-base](https://huggingface.co/liaad/srl-enpt_xlmr-base)
* [liaad/srl-enpt_xlmr-large](https://huggingface.co/liaad/srl-enpt_xlmr-large)
* [liaad/srl-enpt_mbert-base](https://huggingface.co/liaad/srl-enpt_mbert-base)
* [liaad/ud_srl-pt_bertimbau-large](https://huggingface.co/liaad/ud_srl-pt_bertimbau-large)
* [liaad/ud_srl-pt_xlmr-large](https://huggingface.co/liaad/ud_srl-pt_xlmr-large)
* [liaad/ud_srl-enpt_xlmr-large](https://huggingface.co/liaad/ud_srl-enpt_xlmr-large)
For more information, please see the accompanying article (See BibTeX entry and citation info below) and the [project's github](https://github.com/asofiaoliveira/srl_bert_pt).
## Intended uses & limitations
#### How to use
To use the transformers portion of this model:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("liaad/ud_srl-enpt_xlmr-large")
model = AutoModel.from_pretrained("liaad/ud_srl-enpt_xlmr-large")
```
To use the full SRL model (transformers portion + a decoding layer), refer to the [project's github](https://github.com/asofiaoliveira/srl_bert_pt).
#### Limitations and bias
- This model does not include a TensorFlow version. This is because the "type_vocab_size" in this model was changed (from 1 to 2) and, therefore, it cannot be easily converted to TensorFlow.
- The model was trained only for 10 epochs in the Universal Dependencies dataset.
- The model was trained only for 5 epochs in the CoNLL formatted OntoNotes v5.0.
- The English data was preprocessed to match the Portuguese data, so there are some differences in role attributions and some roles were removed from the data.
## Training procedure
The model was trained on the Universal Dependencies Portuguese dataset; then on the CoNLL formatted OntoNotes v5.0; then on Portuguese semantic role labeling data (PropBank.Br) using 10-fold Cross-Validation. The 10 resulting models were tested on the folds as well as on a smaller opinion dataset "Buscapé". For more information, please see the accompanying article (See BibTeX entry and citation info below) and the [project's github](https://github.com/asofiaoliveira/srl_bert_pt).
## Eval results
| Model Name | F<sub>1</sub> CV PropBank.Br (in domain) | F<sub>1</sub> Buscapé (out of domain) |
| --------------- | ------ | ----- |
| `srl-pt_bertimbau-base` | 76.30 | 73.33 |
| `srl-pt_bertimbau-large` | 77.42 | 74.85 |
| `srl-pt_xlmr-base` | 75.22 | 72.82 |
| `srl-pt_xlmr-large` | 77.59 | 73.84 |
| `srl-pt_mbert-base` | 72.76 | 66.89 |
| `srl-en_xlmr-base` | 66.59 | 65.24 |
| `srl-en_xlmr-large` | 67.60 | 64.94 |
| `srl-en_mbert-base` | 63.07 | 58.56 |
| `srl-enpt_xlmr-base` | 76.50 | 73.74 |
| `srl-enpt_xlmr-large` | **78.22** | 74.55 |
| `srl-enpt_mbert-base` | 74.88 | 69.19 |
| `ud_srl-pt_bertimbau-large` | 77.53 | 74.49 |
| `ud_srl-pt_xlmr-large` | 77.69 | 74.91 |
| `ud_srl-enpt_xlmr-large` | 77.97 | **75.05** |
### BibTeX entry and citation info
```bibtex
@misc{oliveira2021transformers,
title={Transformers and Transfer Learning for Improving Portuguese Semantic Role Labeling},
author={Sofia Oliveira and Daniel Loureiro and Alípio Jorge},
year={2021},
eprint={2101.01213},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
DeskDown/MarianMixFT_en-id | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | 2021-07-08T01:29:15Z | ---
language: zh
widget:
- text: "我喜欢下雨。"
- text: "我讨厌他。"
---
# liam168/c2-roberta-base-finetuned-dianping-chinese
## Model description
A binary sentiment classification model trained on a Chinese conversational emotion corpus; the two classes are positive and negative.
## Overview
- **Language model**: BertForSequenceClassification
- **Model size**: 410M
- **Language**: Chinese
## Example
```python
>>> from transformers import AutoModelForSequenceClassification , AutoTokenizer, pipeline
>>> model_name = "liam168/c2-roberta-base-finetuned-dianping-chinese"
>>> class_num = 2
>>> ts_texts = ["我喜欢下雨。", "我讨厌他."]
>>> model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=class_num)
>>> tokenizer = AutoTokenizer.from_pretrained(model_name)
>>> classifier = pipeline('sentiment-analysis', model=model, tokenizer=tokenizer)
>>> classifier(ts_texts[0])
>>> classifier(ts_texts[1])
[{'label': 'positive', 'score': 0.9973447918891907}]
[{'label': 'negative', 'score': 0.9972558617591858}]
```
|
DeskDown/MarianMixFT_en-ja | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | 2021-07-07T02:44:45Z | ---
language: zh
tags:
- exbert
license: apache-2.0
widget:
- text: "女人做得越纯粹,皮肤和身材就越好"
- text: "我喜欢篮球"
---
# liam168/c4-zh-distilbert-base-uncased
## Model description
A text classification model trained on data from four categories: 女性 (Female), 体育 (Sports), 文学 (Literature) and 校园 (Campus).
## Overview
- **Language model**: DistilBERT
- **Model size**: 280M
- **Language**: Chinese
## Example
```python
>>> from transformers import DistilBertForSequenceClassification , AutoTokenizer, pipeline
>>> model_name = "liam168/c4-zh-distilbert-base-uncased"
>>> class_num = 4
>>> ts_texts = ["女人做得越纯粹,皮肤和身材就越好", "我喜欢篮球"]
>>> model = DistilBertForSequenceClassification.from_pretrained(model_name, num_labels=class_num)
>>> tokenizer = AutoTokenizer.from_pretrained(model_name)
>>> classifier = pipeline('sentiment-analysis', model=model, tokenizer=tokenizer)
>>> classifier(ts_texts[0])
>>> classifier(ts_texts[1])
[{'label': 'Female', 'score': 0.9137857556343079}]
[{'label': 'Sports', 'score': 0.8206522464752197}]
```
|
DeskDown/MarianMixFT_en-vi | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | 2021-07-07T00:58:55Z | ---
language: zh
widget:
- text: "晓日千红"
- text: "长街躞蹀"
---
# gen-gpt2-medium-chinese
# Overview
- **Language model**: GPT2-Medium
- **Model size**: 68M
- **Language**: Chinese
# Example
```python
from transformers import TFGPT2LMHeadModel,AutoTokenizer
from transformers import TextGenerationPipeline
mode_name = 'liam168/gen-gpt2-medium-chinese'
tokenizer = AutoTokenizer.from_pretrained(mode_name)
model = TFGPT2LMHeadModel.from_pretrained(mode_name)
text_generator = TextGenerationPipeline(model, tokenizer)
print(text_generator("晓日千红", max_length=64, do_sample=True))
print(text_generator("加餐小语", max_length=50, do_sample=False))
```
Output:
```text
[{'generated_text': '晓日千红 独 远 客 。 孤 夜 云 云 梦 到 冷 。 著 剩 笑 、 人 远 。 灯 啼 鸦 最 回 吟 。 望 , 枕 付 孤 灯 、 客 。 对 梅 残 照 偏 相 思 , 玉 弦 语 。 翠 台 新 妆 、 沉 、 登 临 水 。 空'}]
[{'generated_text': '加餐小语 有 有 骨 , 有 人 诗 成 自 远 诗 。 死 了 自 喜 乐 , 独 撑 天 下 诗 事 小 诗 柴 。 桃 花 谁 知 何 处 何 处 高 吟 诗 从 今 死 火 , 此 事'}]
```
|
DivyanshuSheth/T5-Seq2Seq-Final | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-02-23T08:55:46Z | ---
tags:
- espnet
- audio
- audio-to-audio
language: en
datasets:
- wsj0_2mix
license: cc-by-4.0
---
## ESPnet2 ENH model
### `lichenda/wsj0_2mix_skim_noncausal`
This model was trained by LiChenda using the wsj0_2mix recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout ac3c10cfe4faf82c0bb30f8b32d9e8692363e0a9
pip install -e .
cd egs2/wsj0_2mix/enh1
./run.sh --skip_data_prep false --skip_train true --download_model lichenda/wsj0_2mix_skim_noncausal
```
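For users who prefer Python over the shell recipe, a minimal inference sketch using the `espnet_model_zoo` downloader might look as follows. This is an assumption about the usual ESPnet2 enhancement workflow and is not part of the official recipe; the mixture file name is a placeholder.
```python
# Hypothetical Python inference sketch (not part of the official recipe above).
import soundfile
from espnet_model_zoo.downloader import ModelDownloader
from espnet2.bin.enh_inference import SeparateSpeech

# Download the packed model and build the separation interface
d = ModelDownloader()
separate_speech = SeparateSpeech(**d.download_and_unpack("lichenda/wsj0_2mix_skim_noncausal"))

# "mixture.wav" is assumed to be an 8 kHz two-speaker mixture
mixture, fs = soundfile.read("mixture.wav")
separated = separate_speech(mixture[None, :], fs=fs)  # one waveform per speaker
for i, wav in enumerate(separated):
    soundfile.write(f"speaker{i + 1}.wav", wav.squeeze(), fs)
```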
<!-- Generated by ./scripts/utils/show_enh_score.sh -->
# RESULTS
## Environments
- date: `Wed Feb 23 16:42:06 CST 2022`
- python version: `3.7.11 (default, Jul 27 2021, 14:32:16) [GCC 7.5.0]`
- espnet version: `espnet 0.10.7a1`
- pytorch version: `pytorch 1.8.1`
- Git hash: `ac3c10cfe4faf82c0bb30f8b32d9e8692363e0a9`
- Commit date: `Fri Feb 11 16:22:52 2022 +0800`
## Evaluation metrics
config: conf/tuning/train_enh_skim_tasnet_noncausal.yaml
|dataset|STOI|SAR|SDR|SIR|
|---|---|---|---|---|
|enhanced_cv_min_8k|0.96|19.17|18.70|29.56|
|enhanced_tt_min_8k|0.97|18.96|18.45|29.31|
## ENH config
<details><summary>expand</summary>
```
config: conf/tuning/train_enh_skim_tasnet_noncausal.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: chunk
output_dir: exp/enh_train_enh_skim_tasnet_noncausal_raw
ngpu: 1
seed: 0
num_workers: 4
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 150
patience: 20
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- si_snr
- max
- - valid
- loss
- min
keep_nbest_models: 1
nbest_averaging_interval: 0
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 8
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/enh_stats_8k/train/speech_mix_shape
- exp/enh_stats_8k/train/speech_ref1_shape
- exp/enh_stats_8k/train/speech_ref2_shape
valid_shape_file:
- exp/enh_stats_8k/valid/speech_mix_shape
- exp/enh_stats_8k/valid/speech_ref1_shape
- exp/enh_stats_8k/valid/speech_ref2_shape
batch_type: folded
valid_batch_type: null
fold_length:
- 80000
- 80000
- 80000
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 16000
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/tr_min_8k/wav.scp
- speech_mix
- sound
- - dump/raw/tr_min_8k/spk1.scp
- speech_ref1
- sound
- - dump/raw/tr_min_8k/spk2.scp
- speech_ref2
- sound
valid_data_path_and_name_and_type:
- - dump/raw/cv_min_8k/wav.scp
- speech_mix
- sound
- - dump/raw/cv_min_8k/spk1.scp
- speech_ref1
- sound
- - dump/raw/cv_min_8k/spk2.scp
- speech_ref2
- sound
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.001
eps: 1.0e-08
weight_decay: 0
scheduler: reducelronplateau
scheduler_conf:
mode: min
factor: 0.7
patience: 1
init: xavier_uniform
model_conf:
stft_consistency: false
loss_type: mask_mse
mask_type: null
criterions:
- name: si_snr
conf:
eps: 1.0e-07
wrapper: pit
wrapper_conf:
weight: 1.0
independent_perm: true
use_preprocessor: false
encoder: conv
encoder_conf:
channel: 64
kernel_size: 2
stride: 1
separator: skim
separator_conf:
causal: false
num_spk: 2
layer: 6
nonlinear: relu
unit: 128
segment_size: 250
dropout: 0.1
mem_type: hc
seg_overlap: true
decoder: conv
decoder_conf:
channel: 64
kernel_size: 2
stride: 1
required:
- output_dir
version: 0.10.7a1
distributed: false
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{ESPnet-SE,
author = {Chenda Li and Jing Shi and Wangyou Zhang and Aswin Shanmugam Subramanian and Xuankai Chang and
Naoyuki Kamo and Moto Hira and Tomoki Hayashi and Christoph B{\"{o}}ddeker and Zhuo Chen and Shinji Watanabe},
title = {ESPnet-SE: End-To-End Speech Enhancement and Separation Toolkit Designed for {ASR} Integration},
booktitle = {{IEEE} Spoken Language Technology Workshop, {SLT} 2021, Shenzhen, China, January 19-22, 2021},
pages = {785--792},
publisher = {{IEEE}},
year = {2021},
url = {https://doi.org/10.1109/SLT48900.2021.9383615},
doi = {10.1109/SLT48900.2021.9383615},
timestamp = {Mon, 12 Apr 2021 17:08:59 +0200},
biburl = {https://dblp.org/rec/conf/slt/Li0ZSCKHHBC021.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
Citing SkiM:
```bibtex
@article{li2022skim,
title={SkiM: Skipping Memory LSTM for Low-Latency Real-Time Continuous Speech Separation},
author={Li, Chenda and Yang, Lei and Wang, Weiqin and Qian, Yanmin},
journal={arXiv preprint arXiv:2201.10800},
year={2022}
}
```
|
Dizoid/Lll | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2021-11-15T16:48:35Z | ---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP 🤗"
datasets:
- lidiia/autonlp-data-trans_class_arg
co2_eq_emissions: 0.9756221672668951
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 32957902
- CO2 Emissions (in grams): 0.9756221672668951
## Validation Metrics
- Loss: 0.2765039801597595
- Accuracy: 0.8939828080229226
- Precision: 0.7757009345794392
- Recall: 0.8645833333333334
- AUC: 0.9552659749670619
- F1: 0.8177339901477833
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/lidiia/autonlp-trans_class_arg-32957902
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("lidiia/autonlp-trans_class_arg-32957902", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("lidiia/autonlp-trans_class_arg-32957902", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
Waynehillsdev/Waynehills-STT-doogie-server | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0"
] | automatic-speech-recognition | {
"architectures": [
"Wav2Vec2ForCTC"
],
"model_type": "wav2vec2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 61 | 2022-02-27T15:21:32Z | ---
language:
- en
- el
- multilingual
tags:
- text-classification
- fact-or-opinion
- transformers
widget:
- text: "Ξεχωρίζει η καθηλωτική ερμηνεία του πρωταγωνιστή."
- text: "Η Ελλάδα είναι χώρα της Ευρώπης."
- text: "Tolkien was an English writer"
- text: "Tolkien is my favorite writer."
pipeline_tag: text-classification
license: apache-2.0
---
# Fact vs. opinion binary classifier, trained on a mixed EN-EL annotated corpus.
### By the Hellenic Army Academy (SSE) and the Technical University of Crete (TUC)
This is an XLM-Roberta-base model with a binary classification head. Given a sentence, it can classify it either as a fact or an opinion based on its content.
You can use this model in any of the XLM-R supported languages for the same task, taking advantage of its 0-shot learning capabilities. However, the model was trained only using English and Greek sentences.
Legend of HuggingFace API labels:
* Label 0: Opinion/Subjective sentence
* Label 1: Fact/Objective sentence
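Following the label legend above, a minimal usage sketch with the `text-classification` pipeline is given below; the repository id is not stated in this card, so `lighteternal/fact-or-opinion-xlmr-el` is used as an assumed placeholder.
```python
from transformers import pipeline

# Assumed repository id; replace it with the actual model path if different
classifier = pipeline("text-classification", model="lighteternal/fact-or-opinion-xlmr-el")

print(classifier("Tolkien was an English writer"))   # expected: Label 1 (fact/objective)
print(classifier("Tolkien is my favorite writer."))  # expected: Label 0 (opinion/subjective)
```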
## Dataset training info
The original dataset (available here: https://github.com/1024er/cbert_aug/tree/crayon/datasets/subj) contained aprox. 9000 annotated sentences (classified as subjective or objective). It was translated to Greek using Google Translate. The Greek version was then concatenated with the original English one to create the mixed EN-EL dataset.
The model was trained for 5 epochs, using batch size = 8. Detailed metrics and hyperparameters available on the "Metrics" tab.
## Evaluation Results on test set
| accuracy | precision | recall | f1 |
| ----------- | ----------- | ----------- | ----------- |
|0.952 | 0.945 | 0.960 | 0.952 |
## Acknowledgement
The research work was supported by the Hellenic Foundation for Research and Innovation (HFRI) under the HFRI PhD Fellowship grant (Fellowship Number:50, 2nd call)
|
Waynehillsdev/Waynehills_summary_tensorflow | [
"tf",
"t5",
"text2text-generation",
"transformers",
"generated_from_keras_callback",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"T5ForConditionalGeneration"
],
"model_type": "t5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | 2021-01-14T14:09:21Z |
---
language:
- el
tags:
- pytorch
- causal-lm
widget:
- text: "Το αγαπημένο μου μέρος είναι"
license: apache-2.0
---
# Greek (el) GPT2 model - small
<img src="https://huggingface.co/lighteternal/gpt2-finetuned-greek-small/raw/main/GPT2el.png" width="600"/>
#### A new version (recommended) trained on 5x more data is available at: https://huggingface.co/lighteternal/gpt2-finetuned-greek
### By the Hellenic Army Academy (SSE) and the Technical University of Crete (TUC)
* language: el
* licence: apache-2.0
* dataset: ~5GB of Greek corpora
* model: GPT2 (12-layer, 768-hidden, 12-heads, 117M parameters. OpenAI GPT-2 English model, finetuned for the Greek language)
* pre-processing: tokenization + BPE segmentation
### Model description
A text generation (autoregressive) model, using Huggingface transformers and fastai, based on the English GPT-2 (small).
Finetuned with gradual layer unfreezing. This is a more efficient and sustainable alternative compared to training from scratch, especially for low-resource languages.
Based on the work of Thomas Dehaene (ML6) for the creation of a Dutch GPT2: https://colab.research.google.com/drive/1Y31tjMkB8TqKKFlZ5OJ9fcMp3p8suvs4?usp=sharing
### How to use
```
from transformers import pipeline
model = "lighteternal/gpt2-finetuned-greek-small"
generator = pipeline(
'text-generation',
device=0,
model=f'{model}',
tokenizer=f'{model}')
text = "Μια φορά κι έναν καιρό"
print("\n".join([x.get("generated_text") for x in generator(
text,
max_length=len(text.split(" "))+15,
do_sample=True,
top_k=50,
repetition_penalty = 1.2,
add_special_tokens=False,
num_return_sequences=5,
temperature=0.95,
top_p=0.95)]))
```
## Training data
We used a small (~5GB) sample from a consolidated Greek corpus based on CC100, Wikimatrix, Tatoeba, Books, SETIMES and GlobalVoices. A bigger corpus is expected to provide better results (TODO).
### Acknowledgement
The research work was supported by the Hellenic Foundation for Research and Innovation (HFRI) under the HFRI PhD Fellowship grant (Fellowship Number:50, 2nd call)
Based on the work of Thomas Dehaene (ML6): https://blog.ml6.eu/dutch-gpt2-autoregressive-language-modelling-on-a-budget-cff3942dd020
|
Waynehillsdev/wav2vec2-base-timit-demo-colab | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0"
] | automatic-speech-recognition | {
"architectures": [
"Wav2Vec2ForCTC"
],
"model_type": "wav2vec2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | 2021-01-29T15:18:09Z |
---
language:
- el
tags:
- pytorch
- causal-lm
widget:
- text: "Το αγαπημένο μου μέρος είναι"
license: apache-2.0
---
# Greek (el) GPT2 model
<img src="https://huggingface.co/lighteternal/gpt2-finetuned-greek-small/raw/main/GPT2el.png" width="600"/>
### By the Hellenic Army Academy (SSE) and the Technical University of Crete (TUC)
* language: el
* licence: apache-2.0
* dataset: ~23.4 GB of Greek corpora
* model: GPT2 (12-layer, 768-hidden, 12-heads, 117M parameters. OpenAI GPT-2 English model, finetuned for the Greek language)
* pre-processing: tokenization + BPE segmentation
* metrics: perplexity
### Model description
A text generation (autoregressive) model, using Huggingface transformers and fastai based on the English GPT-2.
Finetuned with gradual layer unfreezing. This is a more efficient and sustainable alternative compared to training from scratch, especially for low-resource languages.
Based on the work of Thomas Dehaene (ML6) for the creation of a Dutch GPT2: https://colab.research.google.com/drive/1Y31tjMkB8TqKKFlZ5OJ9fcMp3p8suvs4?usp=sharing
### How to use
```
from transformers import pipeline
model = "lighteternal/gpt2-finetuned-greek"
generator = pipeline(
'text-generation',
device=0,
model=f'{model}',
tokenizer=f'{model}')
text = "Μια φορά κι έναν καιρό"
print("\n".join([x.get("generated_text") for x in generator(
text,
max_length=len(text.split(" "))+15,
do_sample=True,
top_k=50,
repetition_penalty = 1.2,
add_special_tokens=False,
num_return_sequences=5,
temperature=0.95,
top_p=0.95)]))
```
## Training data
We used a 23.4GB sample from a consolidated Greek corpus from CC100, Wikimatrix, Tatoeba, Books, SETIMES and GlobalVoices containing long sequences.
This is a better version of our GPT-2 small model (https://huggingface.co/lighteternal/gpt2-finetuned-greek-small)
## Metrics
| Metric | Value |
| ----------- | ----------- |
| Train Loss | 3.67 |
| Validation Loss | 3.83 |
| Perplexity | 39.12 |
### Acknowledgement
The research work was supported by the Hellenic Foundation for Research and Innovation (HFRI) under the HFRI PhD Fellowship grant (Fellowship Number:50, 2nd call)
Based on the work of Thomas Dehaene (ML6): https://blog.ml6.eu/dutch-gpt2-autoregressive-language-modelling-on-a-budget-cff3942dd020
|
Waynehillsdev/waynehills_sentimental_kor | [
"pytorch",
"electra",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"ElectraForSequenceClassification"
],
"model_type": "electra",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 33 | 2021-09-21T13:18:51Z | ---
language:
- el
- en
tags:
- xlm-roberta-base
datasets:
- multi_nli
- snli
- allnli_greek
metrics:
- accuracy
pipeline_tag: zero-shot-classification
widget:
- text: "Η Facebook κυκλοφόρησε τα πρώτα «έξυπνα» γυαλιά επαυξημένης πραγματικότητας."
candidate_labels: "τεχνολογία, πολιτική, αθλητισμός"
multi_class: false
license: apache-2.0
---
# Cross-Encoder for Greek Natural Language Inference (Textual Entailment) & Zero-Shot Classification
## By the Hellenic Army Academy (SSE) and the Technical University of Crete (TUC)
This model was trained using [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class.
## Training Data
The model was trained on the combined Greek+English version of the AllNLI dataset (sum of [SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/)). The Greek part was created using the EN2EL NMT model available [here](https://huggingface.co/lighteternal/SSE-TUC-mt-en-el-cased).
The model can be used in two ways:
* NLI/Textual Entailment: For a given sentence pair, it will output three scores corresponding to the labels: contradiction, entailment, neutral.
* Zero-shot classification through the Huggingface pipeline: Given a sentence and a set of labels/topics, it will output the likelihood of the sentence belonging to each of the topics. Under the hood, the logit for entailment between the sentence and each label is taken as the logit for the candidate label being valid.
## Performance
Evaluation on classification accuracy (entailment, contradiction, neutral) on mixed (Greek+English) AllNLI-dev set:
| Metric | Value |
| --- | --- |
| Accuracy | 0.8409 |
## To use the model for NLI/Textual Entailment
#### Usage with sentence_transformers
Pre-trained models can be used like this:
```python
from sentence_transformers import CrossEncoder
model = CrossEncoder('lighteternal/nli-xlm-r-greek')
scores = model.predict([('Δύο άνθρωποι συναντιούνται στο δρόμο', 'Ο δρόμος έχει κόσμο'),
('Ένα μαύρο αυτοκίνητο ξεκινάει στη μέση του πλήθους.', 'Ένας άντρας οδηγάει σε ένα μοναχικό δρόμο'),
('Δυο γυναίκες μιλάνε στο κινητό', 'Το τραπέζι ήταν πράσινο')])
#Convert scores to labels
label_mapping = ['contradiction', 'entailment', 'neutral']
labels = [label_mapping[score_max] for score_max in scores.argmax(axis=1)]
print(scores, labels)
# Οutputs
#[[-3.1526504 2.9981945 -0.3108107]
# [ 5.0549307 -2.757949 -1.6220676]
# [-0.5124733 -2.2671669 3.1630592]] ['entailment', 'contradiction', 'neutral']
```
#### Usage with Transformers AutoModel
You can use the model also directly with Transformers library (without SentenceTransformers library):
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
model = AutoModelForSequenceClassification.from_pretrained('lighteternal/nli-xlm-r-greek')
tokenizer = AutoTokenizer.from_pretrained('lighteternal/nli-xlm-r-greek')
features = tokenizer(['Δύο άνθρωποι συναντιούνται στο δρόμο', 'Ο δρόμος έχει κόσμο'],
['Ένα μαύρο αυτοκίνητο ξεκινάει στη μέση του πλήθους.', 'Ένας άντρας οδηγάει σε ένα μοναχικό δρόμο.'],
padding=True, truncation=True, return_tensors="pt")
model.eval()
with torch.no_grad():
scores = model(**features).logits
label_mapping = ['contradiction', 'entailment', 'neutral']
labels = [label_mapping[score_max] for score_max in scores.argmax(dim=1)]
print(labels)
```
## To use the model for Zero-Shot Classification
This model can also be used for zero-shot-classification:
```python
from transformers import pipeline
classifier = pipeline("zero-shot-classification", model='lighteternal/nli-xlm-r-greek')
sent = "Το Facebook κυκλοφόρησε τα πρώτα «έξυπνα» γυαλιά επαυξημένης πραγματικότητας"
candidate_labels = ["πολιτική", "τεχνολογία", "αθλητισμός"]
res = classifier(sent, candidate_labels)
print(res)
#outputs:
#{'sequence': 'Το Facebook κυκλοφόρησε τα πρώτα «έξυπνα» γυαλιά επαυξημένης πραγματικότητας', 'labels': ['τεχνολογία', 'αθλητισμός', 'πολιτική'], 'scores': [0.8380699157714844, 0.09086982160806656, 0.07106029987335205]}
```
### Acknowledgement
The research work was supported by the Hellenic Foundation for Research and Innovation (HFRI) under the HFRI PhD Fellowship grant (Fellowship Number:50, 2nd call)
### Citation info
Citation for the Greek model TBA.
Based on the work [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084)
Kudos to @nreimers (Nils Reimers) for his support on Github .
|
Doohae/p_encoder | [
"pytorch"
] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | 2021-09-20T17:37:43Z | ---
language:
- en
- el
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
widget:
- source_sentence: "Το κινητό έπεσε και έσπασε."
sentences: [
"H πτώση κατέστρεψε τη συσκευή.",
"Το αυτοκίνητο έσπασε στα δυο.",
"Ο υπουργός έπεσε και έσπασε το πόδι του."
]
pipeline_tag: sentence-similarity
license: apache-2.0
---
# Semantic Textual Similarity for the Greek language using Transformers and Transfer Learning
### By the Hellenic Army Academy (SSE) and the Technical University of Crete (TUC)
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
We follow a Teacher-Student transfer learning approach described [here](https://www.sbert.net/examples/training/multilingual/README.html) to train an XLM-Roberta-base model on STS using parallel EN-EL sentence pairs.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('{MODEL_NAME}')
sentences1 = ['Το κινητό έπεσε και έσπασε.',
'Το κινητό έπεσε και έσπασε.',
'Το κινητό έπεσε και έσπασε.']
sentences2 = ["H πτώση κατέστρεψε τη συσκευή.",
"Το αυτοκίνητο έσπασε στα δυο.",
"Ο υπουργός έπεσε και έσπασε το πόδι του."]
embeddings1 = model.encode(sentences1, convert_to_tensor=True)
embeddings2 = model.encode(sentences2, convert_to_tensor=True)
#Compute cosine-similarities (clone repo for util functions)
from sentence_transformers import util
cosine_scores = util.pytorch_cos_sim(embeddings1, embeddings2)
#Output the pairs with their score
for i in range(len(sentences1)):
print("{} {} Score: {:.4f}".format(sentences1[i], sentences2[i], cosine_scores[i][i]))
#Outputs:
#Το κινητό έπεσε και έσπασε. H πτώση κατέστρεψε τη συσκευή. Score: 0.6741
#Το κινητό έπεσε και έσπασε. Το αυτοκίνητο έσπασε στα δυο. Score: 0.5067
#Το κινητό έπεσε και έσπασε. Ο υπουργός έπεσε και έσπασε το πόδι του. Score: 0.4548
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
#### Similarity Evaluation on STS.en-el.txt (translated manually for evaluation purposes)
We measure the semantic textual similarity (STS) between sentence pairs in different languages:
| cosine_pearson | cosine_spearman | euclidean_pearson | euclidean_spearman | manhattan_pearson | manhattan_spearman | dot_pearson | dot_spearman |
| ----------- | ----------- | ----------- | ----------- | ----------- | ----------- | ----------- | ----------- |
| 0.834474802920369 | 0.845687403828107 | 0.815895882192263 | 0.81084300966291 | 0.816333562677654 | 0.813879742416394 | 0.7945167996031 | 0.802604238383742 |
#### Translation
We measure the translation accuracy. Given a list with source sentences, for example, 1000 English sentences. And a list with matching target (translated) sentences, for example, 1000 Greek sentences. For each sentence pair, we check if their embeddings are the closest using cosine similarity. I.e., for each src_sentences[i] we check if trg_sentences[i] has the highest similarity out of all target sentences. If this is the case, we have a hit, otherwise an error. This evaluator reports accuracy (higher = better).
| src2trg | trg2src |
| ----------- | ----------- |
| 0.981 | 0.9775 |
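The snippet below is an illustrative sketch of this retrieval-style accuracy computation, reusing `model.encode` and `util.pytorch_cos_sim` from the examples above; the sentence lists are placeholders and `'{MODEL_NAME}'` follows the card's own placeholder convention.
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('{MODEL_NAME}')

# src_sentences[i] and trg_sentences[i] are assumed to be parallel EN-EL pairs
src_sentences = ["The phone fell and broke.", "The minister fell and broke his leg."]
trg_sentences = ["Το κινητό έπεσε και έσπασε.", "Ο υπουργός έπεσε και έσπασε το πόδι του."]

src_emb = model.encode(src_sentences, convert_to_tensor=True)
trg_emb = model.encode(trg_sentences, convert_to_tensor=True)

# For each source sentence, check whether its own translation is the nearest target (src2trg)
nearest = util.pytorch_cos_sim(src_emb, trg_emb).argmax(dim=1)
accuracy = sum(int(nearest[i] == i) for i in range(len(src_sentences))) / len(src_sentences)
print(f"src2trg accuracy: {accuracy:.2f}")
```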
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 135121 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MSELoss.MSELoss`
Parameters of the fit()-Method:
```
{
"callback": null,
"epochs": 4,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.SequentialEvaluator.SequentialEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"correct_bias": false,
"eps": 1e-06,
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
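Putting these pieces together, a hedged sketch of the teacher-student (knowledge distillation) training setup with `ParallelSentencesDataset` and `MSELoss` is shown below. The teacher checkpoint and the parallel-data path are illustrative assumptions; only the loss, batch size and fit() parameters above come from this card.
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, losses
from sentence_transformers.datasets import ParallelSentencesDataset

# Assumed English teacher and XLM-R student; the exact checkpoints are not stated in this card
teacher = SentenceTransformer("paraphrase-distilroberta-base-v2")
student = SentenceTransformer("xlm-roberta-base")

# Tab-separated file with one English/Greek sentence pair per line (assumed path)
train_data = ParallelSentencesDataset(student_model=student, teacher_model=teacher)
train_data.load_data("parallel-sentences-EN-EL.tsv.gz")

train_dataloader = DataLoader(train_data, shuffle=True, batch_size=16)
train_loss = losses.MSELoss(model=student)

# epochs, warmup_steps, weight_decay and optimizer settings mirror the fit() parameters above
student.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=4,
    warmup_steps=10000,
    weight_decay=0.01,
    optimizer_params={"lr": 2e-05, "eps": 1e-06},
)
```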
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 400, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Acknowledgement
The research work was supported by the Hellenic Foundation for Research and Innovation (HFRI) under the HFRI PhD Fellowship grant (Fellowship Number:50, 2nd call)
## Citing & Authors
Citation info for Greek model: TBD
Based on the transfer learning approach of [Making Monolingual Sentence Embeddings Multilingual using Knowledge Distillation](https://arxiv.org/abs/2004.09813)
|
Doohae/roberta | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | {
"architectures": [
"RobertaForQuestionAnswering"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | 2021-05-25T17:07:31Z | ---
pipeline_tag: feature-extraction
tags:
- sentence-transformers
---
## Testing Sentence Transformer
This RoBERTa model was trained from scratch with a masked language modelling task on a collection of medical reports. |
DoyyingFace/bert-asian-hate-tweets-asonam-unclean | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 30 | 2021-08-27T12:30:13Z | ---
tags:
- conversational
---
# C3PO DialoGPT Model |
DoyyingFace/bert-asian-hate-tweets-concat-clean-with-unclean-valid | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 25 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-hateful-memes-expanded
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-hateful-memes-expanded
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on texts from the following datasets:
- [Hateful Memes](https://hatefulmemeschallenge.com/), `train`, `dev_seen` and `dev_unseen`
- [HarMeme](https://github.com/di-dimitrov/harmeme), `train`, `val` and `test`
- [MultiOFF](https://github.com/bharathichezhiyan/Multimodal-Meme-Classification-Identifying-Offensive-Content-in-Image-and-Text), `Training`, `Validation` and `Testing`
It achieves the following results on the evaluation set:
- Loss: 3.7600
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.11.0
- Pytorch 1.8.1+cu102
- Datasets 1.8.0
- Tokenizers 0.10.2
|
albert-base-v1 | [
"pytorch",
"tf",
"safetensors",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"transformers",
"exbert",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"AlbertForMaskedLM"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 38,156 | 2022-01-07T15:43:19Z | ---
language:
- fr
license: mit
pipeline_tag: sentence-similarity
widget:
- source_sentence: "Bonsoir"
sentences:
- "Salut !"
- "Hello"
- "Bonsoir!"
- "Bonsouar!"
- "Bonsouar !"
- "De rien"
- "LUL LUL"
example_title: "Coucou"
- source_sentence: "elle s'en sort bien"
sentences:
- "elle a raison"
- "elle a tellement raison"
- "Elle a pas tort"
- "C'est bien ce qu'elle dit là"
- "Hello"
example_title: "Raison or not"
- source_sentence: "et la question énergétique n'est pas politique ?"
sentences:
- "C'est le nucléaire militaire qui a entaché le nucléaire pour l'énergie."
- "La fusion nucléaire c'est pas pour maintenant malheureusement"
- "le pro nucléaire redevient acceptable à gauche j'ai l'impression"
- "La mer à Nantes?"
- "c'est bien un olivier pour l'upr"
- "Moi je vois juste sa lavallière"
example_title: "Nucléaire"
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
- twitch
- convbert
---
## ConvBERT-based representation model for Twitch messages
This is a [sentence-transformers](https://www.SBERT.net) model: it maps a text sequence to a 256-dimensional numerical vector and can be used for clustering or semantic search tasks.
The main goal of this experiment at Lincoln was to apply NLP techniques from scratch to a corpus of messages from a Twitch chat. These messages are written in French, but on an internet platform, with the vocabulary that implies (typos, community slang, abbreviations, anglicisms, emotes, ...).
After training a `ConvBert` model and then an `MLM` model (see the Models section), we trained a _sentence-transformers_ model with the unsupervised [SimCSE](https://www.sbert.net/examples/unsupervised_learning/SimCSE/README.html) learning framework.
The goal is to specialise the _CLS_ representation of each sequence so that it yields a numerical vector that is consistent across the whole corpus. _SimCSE_ builds artificial supervised positive and negative examples using dropout, which turns the problem back into a standard supervised task.
_We do not guarantee the long-term stability of this model. It was built as part of a proof of concept._
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('lincoln/2021twitchfr-conv-bert-small-mlm-simcse')
embeddings = model.encode(sentences)
print(embeddings)
```
## Semantic Textual Similarity
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('lincoln/2021twitchfr-conv-bert-small-mlm-simcse')

# Two lists of sentences
sentences1 = ['zackFCZack',
'Team bons petits plats',
'sa commence a quelle heure de base popcorn ?',
'BibleThump']
sentences2 = ['zack titulaire',
'salade de pates c une dinguerie',
'ça commence à être long la',
'NotLikeThis']
# Compute embedding for both lists
embeddings1 = model.encode(sentences1, convert_to_tensor=True)
embeddings2 = model.encode(sentences2, convert_to_tensor=True)
# Compute cosine similarities
cosine_scores = util.cos_sim(embeddings1, embeddings2)
# Output the pairs with their score
for i in range(len(sentences1)):
print("Score: {:.4f} | \"{}\" -vs- \"{}\" ".format(cosine_scores[i][i], sentences1[i], sentences2[i]))
# Score: 0.5783 | "zackFCZack" -vs- "zack titulaire"
# Score: 0.2881 | "Team bons petits plats" -vs- "salade de pates c une dinguerie"
# Score: 0.4529 | "sa commence a quelle heure de base popcorn ?" -vs- "ça commence à être long la"
# Score: 0.5805 | "BibleThump" -vs- "NotLikeThis"
```
## Training
* 500,000 sampled Twitch messages (see the data description in the base models' cards)
* Batch size: 24
* Epochs: 24
* Loss: MultipleNegativesRankingLoss
_Note:_
* _ConvBert was trained with a maximum length of 128 tokens but is used with 512 in this model, which is not an issue._
* _The training loss is not yet available, so we have little visibility on performance._
The full training code is available in the public GitHub repository [lincoln/twitchatds](https://github.com/Lincoln-France/twitchatds).
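As an illustration, an unsupervised SimCSE-style run with `sentence-transformers` can be sketched as below. The module construction follows the architecture described at the end of this card, and the batch size, epoch count and loss come from the list above; the example messages are placeholders, and this is not the exact training script (which lives in the repository linked above).
```python
from sentence_transformers import SentenceTransformer, InputExample, losses, models
from torch.utils.data import DataLoader

# Wrap the MLM checkpoint (see the Models section below) with CLS pooling,
# matching the architecture described at the end of this card.
word_embedding_model = models.Transformer(
    "lincoln/2021twitchfr-conv-bert-small-mlm", max_seq_length=512
)
pooling_model = models.Pooling(
    word_embedding_model.get_word_embedding_dimension(), pooling_mode="cls"
)
model = SentenceTransformer(modules=[word_embedding_model, pooling_model])

# Unsupervised SimCSE: each message is paired with itself; dropout makes the
# two encodings differ, which provides the positive pair for the loss.
messages = ["zackFCZack", "Team bons petits plats", "ça commence à être long la"]
train_examples = [InputExample(texts=[m, m]) for m in messages]

train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=24)
train_loss = losses.MultipleNegativesRankingLoss(model)

model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=24)
```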
## Application
We used a modified [BERTopic](https://maartengr.github.io/BERTopic/)-style approach to cluster the messages of a stream while taking the temporal dimension into account, i.e. the number of seconds elapsed since the start of the stream.

Overall, the approach gives satisfactory results for identifying recurring "similar" messages. It is, however, strongly influenced by the punctuation and structure of a message, which is largely explained by the limited training of the models and the small data volume.
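A heavily simplified version of this idea (embedding the messages, appending a scaled elapsed-time feature and clustering the result) could look like the sketch below; the time weighting and the clustering algorithm are illustrative choices, not the exact pipeline used here.
```python
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.cluster import AgglomerativeClustering

model = SentenceTransformer("lincoln/2021twitchfr-conv-bert-small-mlm-simcse")

messages = ["oh le circus !! <3", "jure le circus !", "salade de pates c une dinguerie"]
seconds_since_start = np.array([5010.0, 5015.0, 5290.0])

# Embed the messages and append a scaled elapsed-time feature.
embeddings = model.encode(messages)                       # shape: (n_messages, 256)
time_feature = (seconds_since_start / seconds_since_start.max()).reshape(-1, 1)
features = np.hstack([embeddings, 5.0 * time_feature])    # 5.0: illustrative weight

labels = AgglomerativeClustering(n_clusters=2).fit_predict(features)
print(labels)
```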
### Clustering of the "Backseat" show
Between 7:30 pm and 8:00 pm:

🎞️ video: [youtu.be/EcjvlE9aTls](https://youtu.be/EcjvlE9aTls)
### Example grouping for the "PopCorn" show
```txt
-------------------- LABEL 106 --------------------
circus (0.88)/sulli (0.23)/connu (0.19)/jure (0.12)/aime (0.11)
silouhette moyenne: 0.04
-------------------- LABEL 106 --------------------
2021-03-30 20:10:22 0.01: les gosse c est des animaux
2021-03-30 20:12:11 -0.03: oue c connu
2021-03-30 20:14:15 0.03: oh le circus !! <3
2021-03-30 20:14:19 0.12: le circus l'anciennnee
2021-03-30 20:14:22 0.06: jure le circus !
2021-03-30 20:14:27 -0.03: le sulli
2021-03-30 20:14:31 0.09: le circus??? j'aime po
2021-03-30 20:14:34 0.11: le Circus, hors de prix !
2021-03-30 20:14:35 -0.09: le Paddock a Rignac en Aveyron
2021-03-30 20:14:39 0.11: le circus ><
2021-03-30 20:14:39 0.04: le Titty Twister de Besançon
-------------------- LABEL 17 --------------------
pates (0.12)/riz (0.09)/pâtes (0.09)/salade (0.07)/emission (0.07)
silouhette moyenne: -0.05
-------------------- LABEL 17 --------------------
2021-03-30 20:11:18 -0.03: Des nanimaux trop beaux !
2021-03-30 20:13:11 -0.01: episode des simpsons ça...
2021-03-30 20:13:41 -0.01: des le debut d'emission ca tue mdrrrrr
2021-03-30 20:13:50 0.03: des "lasagnes"
2021-03-30 20:14:37 -0.18: poubelle la vie
2021-03-30 20:15:13 0.03: Une omelette
2021-03-30 20:15:35 -0.19: salade de bite
2021-03-30 20:15:36 -0.00: hahaha ce gastronome
2021-03-30 20:15:43 -0.08: salade de pates c une dinguerie
2021-03-30 20:17:00 -0.11: Une bonne femme !
2021-03-30 20:17:06 -0.05: bouffe des graines
2021-03-30 20:17:08 -0.06: des pokeball ?
2021-03-30 20:17:11 -0.12: le choux fleur cru
2021-03-30 20:17:15 0.05: des pockeball ?
2021-03-30 20:17:27 -0.00: du chou fleur crue
2021-03-30 20:17:36 -0.09: un râgout de Meynia !!!!
2021-03-30 20:17:43 -0.07: une line up Sa rd o ch Zack Ponce my dream
2021-03-30 20:17:59 -0.10: Pâtes/10
2021-03-30 20:18:09 -0.05: Team bons petits plats
2021-03-30 20:18:13 -0.10: pate level
2021-03-30 20:18:19 -0.03: que des trucs très basiques
2021-03-30 20:18:24 0.03: des pates et du jambon c'est de la cuisine?
2021-03-30 20:18:30 0.05: Des pates et du riz ouai
2021-03-30 20:18:37 -0.02: des gnocchis à la poele c'est cuisiner ?
2021-03-30 20:18:50 -0.03: Pâtes à pizzas, pulled pork, carbonade flamande, etc..
2021-03-30 20:19:01 -0.11: Des pâtes ou du riz ça compte ?
2021-03-30 20:19:22 -0.21: le noob
2021-03-30 20:19:47 -0.02: Une bonne escalope de milanaise les gars
2021-03-30 20:20:05 -0.04: faites des gratins et des quiches
-------------------- LABEL 67 --------------------
1 1 (0.25)/1 (0.19)/ (0.0)/ (0.0)/ (0.0)
silouhette moyenne: 0.96
-------------------- LABEL 67 --------------------
2021-03-30 20:24:17 0.94: +1
2021-03-30 20:24:37 0.97: +1
2021-03-30 20:24:37 0.97: +1
2021-03-30 20:24:38 0.97: +1
2021-03-30 20:24:39 0.97: +1
2021-03-30 20:24:43 0.97: +1
2021-03-30 20:24:44 0.97: +1
2021-03-30 20:24:47 0.97: +1
2021-03-30 20:24:49 0.97: +1
2021-03-30 20:25:00 0.97: +1
2021-03-30 20:25:21 0.95: +1
2021-03-30 20:25:25 0.95: +1
2021-03-30 20:25:28 0.94: +1
2021-03-30 20:25:30 0.94: +1
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: ConvBertModel
(1): Pooling({'word_embedding_dimension': 256, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Models:
* [2021twitchfr-conv-bert-small](https://huggingface.co/lincoln/2021twitchfr-conv-bert-small)
* [2021twitchfr-conv-bert-small-mlm](https://huggingface.co/lincoln/2021twitchfr-conv-bert-small-mlm)
* [2021twitchfr-conv-bert-small-mlm-simcse](https://huggingface.co/lincoln/2021twitchfr-conv-bert-small-mlm-simcse) |
albert-base-v2 | [
"pytorch",
"tf",
"jax",
"rust",
"safetensors",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"AlbertForMaskedLM"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4,785,283 | 2022-01-07T14:25:51Z | ---
language:
- fr
license: mit
pipeline_tag: "fill-mask"
widget:
- text: <mask> tt le monde !
- text: cc<mask> va?
- text: <mask> la Fronce !
tags:
- fill-mask
- convbert
- twitch
---
## Masked-language model on French Twitch data
The main goal of this experiment at Lincoln was to apply NLP techniques from scratch to a corpus of messages from a Twitch chat. These messages are written in French, but on an internet platform, with the vocabulary that implies (typos, community slang, abbreviations, anglicisms, emotes, ...).
Our constraints are those of a company that has neither a very large volume of data nor unlimited computing power.
We had to build a new tokenizer to better match our corpus, rather than reuse an existing French tokenizer.
Since our corpus is small compared with the data usually required to train a BERT model, we opted for a so-called "small" model. The literature shows that a corpus of a few gigabytes can still give good results, so we went ahead with our corpus.
The limited computing power was worked around with a training architecture based on a dual generator / discriminator model.
This allowed us to train a ConvBERT language model on our data, as well as a masked-language model, within a few hours on a single V100 GPU.
_We do not guarantee the long-term stability of this model. It was built as part of a proof of concept._
## Data
| Streamer | Number of messages | Notable categories in 2021 |
| --------------------------------------------- | --------------- | ---------------------------------- |
| Ponce | 2 604 935 | Chatting/Mario Kart/FIFA |
| Domingo | 1 209 703 | Chatting/talk-shows/FM2O21 |
| Mistermv | 1 205 882 | Isaac/Special events/TFT |
| Zerator | 900 894 | New World/WOW/Valorant |
| Blitzstream | 821 585 | Chess |
| Squeezie | 602 148 | Chatting / Minecraft |
| Antoinedaniellive | 548 497 | Geoguessr |
| Jeanmassietaccropolis/jeanmassiet | 301 387 | Talk-shows/chatting/special events |
| Samueletienne | 215 956 | chatting |
Over the period from 2021-03-12 to 2021-07-22, these nine streamers account for a total of 9,410,987 messages. The messages come from the IRC channel and therefore have not been moderated.
The masking model's training data contains 899,652 training instances and 99,962 test instances. The data was built by concatenating messages over a 10-second window, a short window that groups messages that are very close in time (a minimal sketch of this windowing is given after the list below).
* 512 tokens max
* Mask probability: 15%
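A minimal sketch of this windowing, assuming a list of timestamped messages (the example messages are illustrative; the exact preprocessing lives in the repository linked below):
```python
from datetime import datetime

# (timestamp, message) pairs, assumed sorted by time. Illustrative data.
messages = [
    (datetime(2021, 3, 30, 20, 14, 15), "oh le circus !! <3"),
    (datetime(2021, 3, 30, 20, 14, 19), "le circus l'anciennnee"),
    (datetime(2021, 3, 30, 20, 14, 31), "le circus??? j'aime po"),
]

windows, current, window_start = [], [], None
for ts, text in messages:
    # start a new 10-second window when the previous one has expired
    if window_start is None or (ts - window_start).total_seconds() > 10:
        if current:
            windows.append(" ".join(current))
        current, window_start = [], ts
    current.append(text)
if current:
    windows.append(" ".join(current))

print(windows)  # each element is one training instance
```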
## Application
See the public GitHub repository [lincoln/twitchatds](https://github.com/Lincoln-France/twitchatds) for implementation details and results.
## Remarks
* One-off experiment
* The training metrics are available in the _Training metrics_ tab
* For better stability, the data should be more heterogeneous and larger, and the model should be trained for more than 24 hours.
* The `<mask>` token probably works better without a leading space, because `lstrip=False` for this special token.
## Usage
```python
from transformers import AutoTokenizer, ConvBertForMaskedLM
from transformers import pipeline
model_name = 'lincoln/2021twitchfr-conv-bert-small-mlm'
tokenizer_name = 'lincoln/2021twitchfr-conv-bert-small'
loaded_tokenizer = AutoTokenizer.from_pretrained(tokenizer_name)
loaded_model = ConvBertForMaskedLM.from_pretrained(model_name)
nlp = pipeline('fill-mask', model=loaded_model, tokenizer=loaded_tokenizer)
nlp('<mask> les gens !')
```
## Models:
* [2021twitchfr-conv-bert-small](https://huggingface.co/lincoln/2021twitchfr-conv-bert-small)
* [2021twitchfr-conv-bert-small-mlm](https://huggingface.co/lincoln/2021twitchfr-conv-bert-small-mlm)
* [2021twitchfr-conv-bert-small-mlm-simcse](https://huggingface.co/lincoln/2021twitchfr-conv-bert-small-mlm-simcse)
|
albert-large-v1 | [
"pytorch",
"tf",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"AlbertForMaskedLM"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 687 | 2022-01-07T10:44:43Z | ---
language:
- fr
license: mit
pipeline_tag: "feature-extraction"
widget:
- text: LUL +1 xD La Fronce !
tags:
- feature-extraction
- convbert
- twitch
---
## Language model on French Twitch data
The main goal of this experiment at Lincoln was to apply NLP techniques from scratch to a corpus of messages from a Twitch chat. These messages are written in French, but on an internet platform, with the vocabulary that implies (typos, community slang, abbreviations, anglicisms, emotes, ...).
Our constraints are those of a company that has neither a very large volume of data nor unlimited computing power.
We had to build a new tokenizer to better match our corpus, rather than reuse an existing French tokenizer.
Since our corpus is small compared with the data usually required to train a BERT model, we opted for a so-called "small" model. The literature shows that a corpus of a few gigabytes can still give good results, so we went ahead with our corpus.
The limited computing power was worked around with a training architecture based on a dual generator / discriminator model.
This allowed us to train a ConvBERT language model on our data, as well as a masked-language model, within a few hours on a single V100 GPU.
_We do not guarantee the long-term stability of this model. It was built as part of a proof of concept._
## Data
| Streamer | Number of messages | Notable categories in 2021 |
| --------------------------------------------- | --------------- | ---------------------------------- |
| Ponce | 2 604 935 | Chatting/Mario Kart/FIFA |
| Domingo | 1 209 703 | Chatting/talk-shows/FM2O21 |
| Mistermv | 1 205 882 | Isaac/Special events/TFT |
| Zerator | 900 894 | New World/WOW/Valorant |
| Blitzstream | 821 585 | Chess |
| Squeezie | 602 148 | Chatting / Minecraft |
| Antoinedaniellive | 548 497 | Geoguessr |
| Jeanmassietaccropolis/jeanmassiet | 301 387 | Talk-shows/chatting/special events |
| Samueletienne | 215 956 | chatting |
Over the period from 2021-03-12 to 2021-07-22, these nine streamers account for a total of 9,410,987 messages. The messages come from the IRC channel and therefore have not been moderated.
The training data follows the ELECTRA training format, which requires the data to be organised into paragraphs split into sentences. We grouped the messages into 60-second windows, each acting as a paragraph, under the following conditions (a hedged sketch of this step is given below):
* Length greater than 170 characters (about 50 tokens on average), so as not to create mostly empty, uninformative instances: such instances require padding and slow down training.
* 128 tokens maximum (default)
If the maximum length is reached, a second instance is created. In total, this yields 554,974 training instances.
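A minimal sketch of the filtering and splitting step, under the assumptions above (the window texts are illustrative and this is not the exact preprocessing, which lives in the repository linked below):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("lincoln/2021twitchfr-conv-bert-small")

# Each string is a 60-second window of concatenated chat messages (illustrative).
windows = [
    "court",  # too short: dropped
    "une fenêtre de soixante secondes suffisamment longue pour dépasser les "
    "cent soixante-dix caractères requis et donc être conservée comme instance "
    "d'entraînement pour le modèle de langue ConvBERT entraîné sur Twitch",
]

instances = []
for text in windows:
    if len(text) <= 170:          # drop mostly-empty windows
        continue
    ids = tokenizer(text, add_special_tokens=False)["input_ids"]
    # split into chunks of at most 128 tokens: a second instance is created
    # whenever the maximum length is reached
    for i in range(0, len(ids), 128):
        instances.append(tokenizer.decode(ids[i:i + 128]))

print(len(instances))
```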
## Application
See the public GitHub repository [lincoln/twitchatds](https://github.com/Lincoln-France/twitchatds) for implementation details and results.
## Remarks
* One-off experiment
* The training metrics are available in the _Training metrics_ tab
* For better stability, the data should be more heterogeneous and larger, and the model should be trained for more than 24 hours.
## Usage
```python
from transformers import AutoTokenizer, ConvBertModel
from transformers import FeatureExtractionPipeline
model_name = 'lincoln/2021twitchfr-conv-bert-small'
loaded_tokenizer = AutoTokenizer.from_pretrained(model_name)
loaded_model = ConvBertModel.from_pretrained(model_name)
nlp = FeatureExtractionPipeline(model=loaded_model, tokenizer=loaded_tokenizer)
nlp("<3 <3 les modos")
```
## Models:
* [2021twitchfr-conv-bert-small](https://huggingface.co/lincoln/2021twitchfr-conv-bert-small)
* [2021twitchfr-conv-bert-small-mlm](https://huggingface.co/lincoln/2021twitchfr-conv-bert-small-mlm)
* [2021twitchfr-conv-bert-small-mlm-simcse](https://huggingface.co/lincoln/2021twitchfr-conv-bert-small-mlm-simcse)
|
albert-large-v2 | [
"pytorch",
"tf",
"safetensors",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"AlbertForMaskedLM"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 26,792 | 2021-10-07T12:43:38Z | ---
language:
- fr
license: mit
pipeline_tag: "text2text-generation"
datasets:
- squadFR
- fquad
- piaf
metrics:
- bleu
- rouge
widget:
- text: "La science des données est un domaine interdisciplinaire qui utilise des méthodes, des processus, des algorithmes et des systèmes scientifiques pour extraire des connaissances et des idées de nombreuses données structurelles et non structurées.\
Elle est souvent associée aux <hl>données massives et à l'analyse des données<hl>."
tags:
- seq2seq
- barthez
---
# Question generation from a context
The model is fine-tuned from [moussaKam/barthez](https://huggingface.co/moussaKam/barthez) to generate questions from a paragraph and a span of tokens. The token span represents the answer on which the question is based.
Input: _Les projecteurs peuvent être utilisées pour \<hl\>illuminer\<hl\> des terrains de jeu extérieurs_
Output: _À quoi servent les projecteurs sur les terrains de jeu extérieurs?_
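As a usage illustration, generation from a highlighted context could be sketched as below; `model_id` is a placeholder for this repository's identifier (the sketch is not runnable until it is filled in), and the generation parameters are illustrative.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "..."  # placeholder: the identifier of this model repository
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# The answer span is wrapped with the special <hl> token.
context = (
    "Les projecteurs peuvent être utilisées pour <hl>illuminer<hl> "
    "des terrains de jeu extérieurs"
)

inputs = tokenizer(context, return_tensors="pt")
output_ids = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```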
## Training data
The training set is the concatenation of the SquadFR, [fquad](https://huggingface.co/datasets/fquad) and [piaf](https://huggingface.co/datasets/piaf) datasets. The input is the context, in which the answers are wrapped with the special **\<hl\>** token (a hedged preprocessing sketch is given after the list below).
Volume (number of context/answer/question triplets):
* train: 98,211
* test: 12,277
* valid: 12,776
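A minimal sketch of this preprocessing on a single SQuAD-style record; the field names follow the usual SQuAD layout and are assumptions here, not the exact pipeline:
```python
def build_example(record):
    """Wrap the answer span with <hl> and pair it with the target question."""
    context = record["context"]
    answer = record["answers"]["text"][0]
    start = record["answers"]["answer_start"][0]
    end = start + len(answer)
    highlighted = context[:start] + "<hl>" + answer + "<hl>" + context[end:]
    return {"input_text": highlighted, "target_text": record["question"]}

record = {
    "context": "Les projecteurs peuvent être utilisées pour illuminer des terrains de jeu extérieurs",
    "question": "À quoi servent les projecteurs sur les terrains de jeu extérieurs?",
    "answers": {"text": ["illuminer"], "answer_start": [44]},
}
print(build_example(record)["input_text"])
```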
## Training
Training was carried out on a Tesla V100 GPU.
* Batch size: 20
* Weight decay: 0.01
* Learning rate: 3e-5 (linear decay)
* Less than 24 hours of training
* Default parameters of the [TrainingArguments](https://huggingface.co/transformers/main_classes/trainer.html#trainingarguments) class
* Total steps: 56,000
<img src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAj0AAAGOCAYAAAB8J7JHAAAAAXNSR0IArs4c6QAAAARnQU1BAACxjwv8YQUAAAAJcEhZcwAADsMAAA7DAcdvqGQAAEKXSURBVHhe7d1/sB11fcf/zHSmihVDKagFpqQDrVUcQptaf9Q2acf6ox1M1NKpVkpGWrRaJ3FqW6f/JPX7HR0pNSkWKUMRqEz6bRSj8kMRh8QREWyQYtFCgw0Uwy8hKYIKWrvf+9y77+Rz9+45d2/u+bHnfJ6Pmc/cuz/Ont09e3Zf57Of3V1WSJIkZcDQI0mSsmDokSRJWTD0SJKkLBh6JElSFgw9kiQpC4YeSZKUBUOPJEnKgqFHkiRlwdAjSZKyYOiRJElZMPRIkqQsGHokSVIWDD2SJCkLhh5JkpQFQ48kScqCoUeSJGXB0CNJkrJg6JGm0I9+9KPioosuKu64446qz9Lt3Lmz2Lx5c/l31B5//PHi0ksvLd+fMizDnn4bXZiHQZmmZdF0MPRo4nEQXrZs2VgOxl31gx/8oFwnf//3f1/1aS/CTd041/OqVauKFStWFGvWrCnLsAx7+qlNmzYVe/furboO4f2nJSiwvRh61CWGHk08Q898Swk9HIx5bV2EoVGv50996lPl/Hz5y1+u+gzPKENPr212mkIPy2HoUZcYejTxFht6brvttuLAgQNVVzPG2bVrV9U1H7/QGd70S30hgwwNzGeTYYSehfSalzrWWdtxwUFzoflpOz3G6zduPfTwWX3rW9+quuZjO2I7WGh7Ckwvtpm2oYdxHn744aqrGdM8nG2xrcVMP11GqWsMPZp47GTbhJ44mEc57bTT5u2cN2zYUBx11FFzxjvrrLOqobMHTU6zpMPpXggHxXXr1s15Hd1xsGQ+6Ee7lbqY7xiXv+vXr58zLQ6U6YG3KfSsXr26LHX04z1QX0dR0Gs9s87ScZvWK/0ZL10HrOcdO3ZUYzRjuWL8KPH+vAfvlQ6r1yrEPKefW9M6CBF6tm3bNudzfte73lWNMatpO2Be6oEq1nm6Xtme0tdFieWK0EM5+uijDw4///zzy+Eptpd0e+X/9PPp9XlS4jPvh2nVp1//zJgOy5jOS0yb/+ufiTROhh5NvDiwpTv7utj5b9mypQwHHJxWrlxZHrgiLNAvxgn0S3fyy5cvLw9a8RoOvE1Bpe7EE08s3y/mkb/0IwQEDhwcOOsYLw1e/M98xPsyf3RzsAyHG3oQ66quaT3X1yvDYr2mGIcDIuOzziiMR780rNUxPQ6a8b5RkK5TpsE8MF66LAyjH+OyvhivHshSEXpOOeWU4pZbbikeffTRg++fHrzjc495T7enFOuWZUznM94/lqmO9z/99NOLjRs3ltNlHPr9xE/8xJx55/2ZRqx7SgSqGI+/vD4tzBPjMO1+4vvANJkOJabPdALrO5Yxvivx/oxr6FGXGHo08dgB13fEdRFWUrFTj5ATB/A4kDVh+EK1E3VxcIoDQWA6af+m8WLZ0oMJ3emBHfHaGG8UoYf1RHd9vcZ4zFOgu/7e9en1EqEjFQGn/to4KId4j/r66oVwwfgf+chHqj6zTj311OK4444rnnzyyarPfLE9pfMUAaP+2aM+bmAenvvc5855zfbt28vx+RvYpqk9qyPgNfVHbCfpZ9PL2rVry/eoY/oMC7G99FpGQ4+6xNCjiRcHtqYDCJoORiENQxEo+LW+devWxl/CUTvBr/BPfvKTVd/+4pQZO/+0MI10vggRzE96gGbeOMiEfstK/3jtKEJP23lBvRsRmppen2JdMV5qEPPYJEJPvQ3NO97xjrL/N7/5zarP7LT5DHlNFMZJQ3GvdY5e88V0OH1Zx/isC8S2ynjpNkVh+216z1gX6efw3e9+t7j++uvnFLYdsK2n4SbU1z3dTeEI6TxLXWDo0cRb6MDWb3j9oETQIWiwE+c1HEDSgxgHakIMQYThbXbqTJ/pxXvVSxqueO84RcJ7EZbSX+1Rw8GwOvrHAW3coad+wEznLdXr9SnWb31+mOemU4G95rGtCC919enG58DnxXKxjcQ46XL2WudIp5fi/Zu2KcaP/vFerOd4j7TUa3rYxtiWIuCHPXv2lNNJy/79+8th/N/0mdW3D7p5zyaMZ+hRlxh6NPHqB6S6+FWchpdA/16nAjhQRM1OE6YbB4B+pwuipqeNdFniVER62qDXskatSRykFhN66rVL9YNaqL93dPdar+k0690hnV4vTaGn1zwyL+k0Yx7bitqaXjU99957b9nddAqJ7YVx0uXstc6RzmeqTeiJbbrNaSq2DYI023Ld//3f/xU//OEP55TQa95Z7rRmh+Xtt4yGHnWJoUcTLw5s/Q6eHKTqv+Djdf0OHHEQTYNHHcObDughptEUDpowr/wi50BSP1AxHxxw6r/Yo+Yhao2aQg/zGLVIIdZBOv8Rtuqa1nPTvMTr0+Wtv0eoT69JU+iJdVr/7Fhn6QE55rmtCD1NbXq4QWJgnPry0F3v3ys4gHGjPVmqTehB0zZdR+ChRoztqKl2sB8+V8J6+jr+r9cYsbyGHk0KQ48mXhzYmto3xA43DpJcLcV9VWizw847DRUcgJgGbXUYh79xwADvw0GG1zKcEpdg9wtF4CDBeLQBidcynaZTNBxEmDfGbwpkcXCNabGMdKc1D02hJ9ZTug44cFLSA3XUIjBeug7j9fwNEXDq67V+EGSc9D1CfXpNYvnqeA/eKz6P+CzSsBXz3BafL+U5z3lOeYn4VVddVZx55pnlNHisR4hAcNlll5XvzWfBdsJ46XL2Cz2MTwiNdRzbUNvQE8vG+DEf/GUbjnlgm2Cc9LOMstB6J+AQINlGmW58H+iXbu+8l6FHk8LQo4lH7UYcXJpK4GBINzttDjgcENJfsRwE4ooVdtZR4xLjsKOnOw5ujMf4Cx08AqGK9+e1FP5vCgK8T8x7On8pwkZMi/mpT4fTFL/+679efOITn6j6zOJ1Mf+8nnXHeqiHq1gXMR/RjwNsfXljvca8pOErMLwpwMU89MNBk/dtwnuly5MGHsS20RbvQ/n6179enHHGGeV0qeW55JJLqjFm8bmwLcS2wrqKzy1dTuavaX2A8RnGa9L1wPs3BYWm/kyD92ZbZT5im41ppdOvl6bPoy6dfmzv9c+L6fRaxl7LIo2LoUeSJGXB0CNJkrJg6JEkSVkw9EiSpCwYeiRJUhYMPZIkKQuGHkmSlAVDT4Mbb7yxOPLII4uXvvSlFovFYrFkUZ71rGfNuaHpNDL0NLj11lvLu61ec801FovFYrFkUV74wheWd9+eZoaeBv/xH/9R3oZekqRccBd3HjcyzQw9DQw9kqTcGHoyZeiRJOXG0JMpQ48kKTeGnkwZeiRp8b773e9aOlwWYujJlKFHktr54Q9/WOzbt6+48847i2984xuWDpf//M//LB544IHqk5vP0JMpQ48ktfPII48Ud911V/Hoo48WP/jBD8oQZOleeeqpp8rPivDz+OOPV5/eXIaeTBl6JKmde+65p7j
//vurLnXd3r17e35ehp5MGXokqR1qeQ4cOFB1qeseeuihMvg0MfRkytAjSe0YeiYLoYfauSaGnkwZeiSpnUkLPRz0//Vf/7Xqyo+hR/MMOvRceunMip5Z02vXVj0kaUpMWuj56Ec/WrzgBS+ouvJj6NE8gw49O3fOhp7Vq6sekjQlDD2TxdCjeQw9ktQO9+eZ9NBDw97NmzeXZSc77JpLL7304HCeQp4ub31Y1xl6NI+hR5LamfTQs2PHjuKoo44qNmzYUGzatKlYvnx5+X/g/5UrV5bDKGvXrj0YjFbP7NTrw7rO0KN5Bh16brttNvSsWFH1kKQp0XR666UvHW15zWuqN26hHnpOPPHEMrCE22Z22Mtmdtj8Bf831f6g37CuMvRonkGHHhB6KJI0TZpCz5FHHtrnjaIcbuhhvtOAE6i92bJlS/k/tTkrZn6xbt26tXG80047rXFYVxl6NI+hR5LaaQo9t9xSFDfdNLryjW9Ub9xCGnqiVqc+/wSdqP1hGAGIU1ec+iLkxPgxjPGZzpo1a+ZNq2sMPZrH0CNJ7Uz61VuEFdr1pAg39X5gOeunw0IMixqirjL0aJ5hhp4J2jdI0oImPfScddZZc2pv6Ca8RDdXZ8X/XOXFqS76oWlYU1jqEkOP5hlG6OHKLULPhLV5k6S+Ji30XHnllXNCD/POFVrU+FA4VZW2z4lTV1HSK7to0xP961d9dZWhR/MYeiSpnUkLPbkz9GgeQ48ktWPomSyGHs1j6JGkdgw9k8XQo3mGEXo41UvoaWj0L0kTy9AzWQw9E4DLA2ldHw3GFoMW9dxifDGvG0boIewYeiRNG0PPZCH0cFxsYujpCC4LpHD/g8WGHlreRwv7tgw9ktSOoWeyGHomCM84WUx4iTtlUlNk6JGkwTP0TBZDzwRZTOjhQ+UGU/ztQujhXlbMwgQ8hFeSWjP0TBZDzwRZTOihhiduB75Q6Ln++uuLZz/72QfLT/3UT5U3mhokrtpiFriKS5KmhaFnshh6Jkjb0BOntcJCoeeJJ54o7rzzzoPl2muvLcPPIBl6JE0jQ89crIvf/d3fLfbv31/16Y3xPvGJT1Rdo2HomSBtQw+Bh2eg8MRbCv/zOv5v8/j/YZzeMvRImkaGnrnuv//+8nizb9++qk9vr3zlK4v3ve99VddoEHq8ZH1CtA09XOlF7U4UQhCv4/9eCTc1jNDD2zLrM/lLkqaGoWcuQ0+3TUToIexs3ry5WL9+fbkx8T8l7Nq1q3j6059e3HvvvVWfuRY6vVU3jNADZmERsyFJnTeJoYdjCjX/HBc4E7B169ZqyOxT1utPSmf8tdVVKPxwXrduXflaCveQY3hYSuhJp818bdy4sRoyi2NZ3HeOv+kDTvsNSxl6JgA1N9TW1Ev493//9+J1r3td8fDDD1d95orXt2XokaR2GkPPrbcWxVe+MroyMw9t0cSBYMBxAQQWrvSNbsICgShF4IkQQTBJQxFtSLnwJdbB4YYeXk/QIXTxf8wXYQbMN+8TASvGQYwbZzLSYXWGHs1j6JGkdhpDz5FHHtrhjaK85jXVGy8sDTAhvfiF4EBoSQME3U3tQRmHMw2ElQhChxt6CF0ElxT9qLUBIYb3iflK8d4Mm/c5NDD0aJ5hhR6uguf72WK7lKSJ0Bh6XvSi0ZZFhB7u0E9AiAtdKJyiol9gnKhhIRDRHVhWXkOtS5x14P8Y/3BDD6+vn5Eg6DCtWL8ENrqZX5p4RH/+8lqGMW/psDpDj+YZVuhheyb09Kh1lKSJM2lteggHcQ+3XqhhiRBE4EnHp5Yo2veENCQNI/SkqOlh/hg3DWroNywYejSPoUeS2pm00ENooaakH5aHsEHY4W+6fASKCDggaDDOUkNPtDVKT18xzfoprxDvm44fmsJSMPRoHkOPJLXDDV0nKfQwr9TMxGkgClcG12t/aFBMcKjX6kQ7G17HVV9MK21wfLihB7xXnLriyi2mE22FmD7zybDLLrusvMorTrvFMOaHwrLFsDpDj+Yx9EhSO5MWegIhh1ofCv/Xa0yoeSFMNDVgJvgQihjO6+iOq6W4w/+5555bPP7442V3P9u3by9uvvnmqmsW02Ke6u/N+9TnOdZ7v2F1hh7NM6zQM7MNl6GHv5I0DSY19OTK0KN5DD2S1I6hp7cbbrih+PjHP95Y2tQEDYOhR/MYeiSpHUNPb29961t7lgcffLAaa7QMPZrH0CNJ7Rh6JouhR/MMK/TQCJ/QU7sYQJImlqFnshh6NM+wQg+N+wk9XMUlSdNgz549xbe//e2qS133rW99q+fDuQ09mTL0SFI7tE35r//6r+J///d/qz7qqqeeeqo8vu3fv7/qM5ehJ1OGHklqh/vSsM/8xje+UdYgWLpZOKXFZ8QdtH/4wx9Wn95chp5MDSv0cO8rQk+PR6JI0kT60Y9+VDz22GPFAw880Fi4S/GgStP0R1ma5mmUpWme2paFLpM39GRqWKEHhB6KJEldYujJlKFHkpQbQ0+mDD2SpNwYejI1zNCzfPls6PG2FpKkLjH0ZGqYoccnrUuSusjQkylDjyQpN4aeTBl6JEm5MfRkytAjScqNoSdTwww9PmldktRFhp5MGXokSbkx9GTK0CNJyo2hJ1OGHklSbgw9mRpm6NmxYzb0rF1b9ZAkqQMMPZkaZujhqi1CD1dxSZLUFYaeTBl6JEm5MfRkytAjScqNoSdTwww9e/fOhp4VK6oekiR1gKEnU8MMPSD0UCRJ6gpDT6YMPZKk3Bh6MmXokSTlxtCTqWGHnuXLZ0PPgQNVD0mSxszQ0xEHZtLBrl27is2bN5eljSuvvLI4++yzi9e97nXFhRdeWNx9993VkIUNO/T4pHVJUtcYejpi9UxKWL58ebFy5cqZsLDwLBOMTj755GL9+vXFxo0bi9NOO6183Y033liN0Z+hR5KUG0NPR1DTg50zKaFN6Ln22mur/2bdeuutxdOe9rTibW97W9WnP0OPJCk3hp6OaRt6mpx66qllzU8bhh5JUm4MPR1zuKHnuuuuK193ySWXVH3muu+++4pLL730YHnf+95XHHPMMdXQwfNJ65KkrjH0dMzhhp4Xv/jFfRtA7969u1i7du3B8pu/+ZvFT/7kT1ZDB8/QI0nqGkNPxxxO6DnllFNaX/EVhn16y9AjSeoaQ0/HLCb0PPjgg8WP//iPF6961auqPu0ZeiRJuTH0dEyv0EObnL/7u78rvvOd75TdXK11wgknFG95y1vK7sUadujZsWM29KxdW/WQJGnMDD0dsWnTpjLs1Eu44YYbyu577rmn7OZ0Vn1cypo1a8rhCxl26JnJbjPzM3sVlyRJXWDo6Yi9e/eWtTz1Eh577LHipptuKp588smyu2ncKG0YeiRJuTH0ZMrQI0nKjaEnU8MOPXv3zoaeFSuqHpIkjZmhJ1PDDj0g9FAkSeoCQ0+mDD2SpNwYejJl6JEk5cbQk6lRhJ7ly2dDT/UAeUmSxsrQk6lRhB6ftC5J6hJDT6YMPZKk3Bh6MmXokSTlxtCTKUOPJCk3hp5MjSL0+KR1SVKXGH
oyZeiRJOXG0JMpQ48kKTeGnkwZeiRJuTH0ZGoUoWfHjtnQs3Zt1UOSpDEy9GRqFKGHq7YIPVzFJUnSuBl6MmXokSTlxtCTKUOPJCk3hp5MjSL07N07G3pWrKh6SJI0RoaeTI0i9IDQQ5EkadwMPZky9EiScmPoyZShR5KUG0NPpkYVepYvnw09Bw5UPSRJGhNDT6ZGFXp80rokqSsMPZky9EiScmPoyZShR5KUG0NPpgw9kqTcGHoyNarQ45PWJUldYejJlKFHkpQbQ0+mDD2SpNwYejJl6JEk5cbQk6lRhZ4dO2ZDz9q1VQ9JksbE0JOpUYUertoi9HAVlyRJ42ToyZShR5KUG0NPpgw9kqTcGHqmAAHm+9//ftXVzqhCz969s6FnxYqqhyRJY2Lo6YhLL720WLNmTXHUUUfNhIR2s3zHHXcUL3nJS8rxjzzyyGLz5s3VkIWNKvSAxWm5SJIkDY2hpyM2bdpUli1btrQOPS960YuKM844o9i3b19x8803l8Hnoosuqob2Z+iRJOXG0NMxO3fubBV6br/99nK82267repTFO985zuLVatWVV39GXokSbkx9HRM29Czffv24ogjjqi6ZsVrqflZyChDz/Lls6HnwIGqhyRJY2Do6Zi2oeeDH/xgceqpp1Zds+K1u3fvrvoc8oUvfKF4/vOff7CcdNJJZfuhUfBJ65KkLjD0dEzb0EPbn16h59/+7d+qPofs37+/+NKXvnSwbNu2rTj22GOrocNl6JEkdYGhp2Pahp4dO3b0PL317W9/u+rT2yhPbxl6JEldYOjpmLah57777ivH47L1wCXrr3jFK6qu/gw9kqTcGHo6Yu/evcWuXbuKrVu3lmGG/ymBU1LHH398GXbCS1/60vKSdV7L8Gc+85nFJZdcUg3tb5ShxyetS5K6wNDTEdyccPXq1fNK+OpXv1r82q/9WvHggw9WfWZvTviyl72sDDs0TO7qzQkNPZKkLjD0ZMrQI0nKjaFnSGibc6DDN6Yx9EiScmPoGQBOTXEJeVi5cmXZLof74BB+umiUoWfHjtnQs3Zt1UOSpDEw9AzA2pmjOZeQg7/Lly8va3kIQgzrolGGHnIfoSdpoiRJ0sgZegaABsdRo7Nhw4birLPOKv8n+Jx44onl/11j6JEk5cbQMwCEHJ6QDkIOp7vAw0ANPYYeSVI3GHoGgPvkcEqLdjy054kGzJ7emjWzesrQs2JF1UOSpDEw9AwIQad+xRY1PQSiLhpl6AGhhyJJ0rgYeoaA4MPdlLsaeGDokSTlxtAzAJzGisbLiEvWKXFVV9eMOvQsXz4beqrmTpIkjZyhZwC4estL1vtj9RB6jjqKmrCqpyRJI2ToGYB+l6wTgLpo1KEHXL1F8Fm3ruohSdIIGXoGIC5ZJ+SsWLHCS9Z7oIlTnObq6Fk/SdIUM/QMgJestzezSjzNJUkaC0PPANWfs+Ul683iNFfS9luSpKEz9AwYwYfL1aO2p6vGGXrS01y1nChJ0tAYegaENj08VT0uVaesX7++Gto94ww94KkdhB7u0uxpLknSKBh6BiAuU48GzKDGh/Y9XM3VReMOPZhZPWXw6egqkiRNGUPPAHD1Vhp4AsHntNNOq7q6pQuh57bbZkMPxdNckqRhM/QMAFdoNYUeGjJT29NFXQg9SE9zSZI0TIaeASDwcH8eQk7gqi1qeTy9tbA4zUXwscZHkjQshp4BIdxEA+Zo0Mydmrt6FVeXQg9ZkXs4Enwo3LG5o1f6S5ImmKFngKjdoVEzJa316aIuhZ7Aqa64lJ2bF27dWg2QJGkADD1DRENmanu6qIuhB9TwcBPrqPWhHbinvCRJg2DoGSJDz+Ej6KSnvDZurAZIknSYDD1DZOhZGppDxdVdlDVrvJGhJOnwGXqGyNAzGDSPirY+nO7qeHMpSVJHGXqWgFDDc7Z6la1bt+YTeqiC2bx59rzUEDD5uLSdRs5DehtJ0hQz9CxBXKLer2QTeuI8FOeghoTgw5PZeRtKw/0gJUnqydCTqaHU9ETL4yGnkbSdT4ef6SqpLfYfHIiGWFsswdCTqaG06SHskERG8EwJ3ira+QyxcknSsNA4j5tx8QWOXzFRuEMpQUgaMENPpobWkHlEtT1IGzhv2VL1lNRdfGmpnuWHURpyKDQF4LE96R1Kd+yoXigNhqEnU0MLPeykYoc1gl9q1ISP8O0kHS4CD1/UCDn8QKKRHvuM9MvLHUoJQDEeNUE+l0YDYujJ1NBCD2KHReObERjx20laLEJN1O7whW1z34n0HDZhyefSaAAMPZkaaugZcfVL+nb+IJQ6hn0AN9jiS8p9JxazT2Dc9Lk0BKfLLhvJfkXTydCTqaGGHoy4+iUuZeevpI5YSuBJ8csmfS4Nv3BoG+SVXlokQ0+H3H777cUll1xSXHTRRcX1119f9e3tO9/5zsx3fmdx3nnnFdu2bSvuuuuuasjChh56qL6OHdQIql94ixG+naQ2CCZ8KTlNNYjaGU55pe19KNT+cOrL2h+1YOjpiB07dpQhZP3MTuKcc86Z+S4vK7Zv314NbfaqV72qePWrX138+Z//eXHmmWeWryEEtTH00IMRV7/E23kJu7RE/HLYtavqOExp4GnThmcxmD+u9EprfyjW/GgBhp6OWDNzpN7Mjbkq/L9q1aqqa77HHnts5ju+rLjjjjuqPkXx2te+tpxOGyMJPSOufuGHXrR7dN8nHQa+pxFWKNSibNy4+NBCIOH1wwg8dVz9FbU/tP+R+jD0dMCTTz45831dVtb2hL0zOx/6XX311VWfuR599NFyeFobRM1Pp0IPYuc3ouqXETwNQ5o+/GJIww6lXotC2xxOI/X6AUO4oXaIceI1yT5tqJj/eM8R/MDS5DL0dMDdd989811dVuzZs6fqM4t+tO/pZcuWLcUb3/jG4mMf+1jx3ve+t3j5y19e3HDDDdXQuR566KHi05/+9MHy4Q9/uDjmmGOqoUM04uqX9O1Gtb+VJhZfGGqY0/vncJ44ggNBJr1hYBQCEL8smm4yGGUENyidI67y8k6l6sPQ0wG0wyHg1NEvPeVVd8UVVxTHH3/8zP7ntOLII48szj777DLcNLnpppvKUBSFU2dHsaMbhah+YUfJTnbI2NfyduyPJfXQL+w04VdENJxrKlydxWkmwsc4fnHwnsyHX3z1YejpgDiV1VTTc/nll1ddc9028wuM4buqxoYPP/xw8eY3v7l4wxveUHYvZGSnt5BWv7CTHUGNzwifhiFNnvhlQCGoLOY7yfeZ8SnDbq+zWPHFH8E+RpPJ0NMRBJhrrrmm6iqKffv2lf0+//nPV33mogao3n4naozaXME10tADfkGml5rSOHKItT7xo4+MNYLKJWmysO/gCzJtp4KiDeGIrhjV5DH0dMQZZ5wx51QW/3MaKjz44IPl6awnnnii7P7Upz41891eVuzevbvsRlzq/u1vf7vq09vIQ0+IU10UqqGH+IsszVjs41m9bOu2c1TW+M7xpaD2ddp+EfDlZtn8taMeDD0dcdVVVxXHHXdcGX7injvplVk0UKbfP
ffcU/Upil/5lV8pVq5cOfPjZkOxbt26mX3Y8uIDH/hANbS/sYUeUCXO+X92TpQk7A0Sb1O/ACUK+8SZVVbe0V7KSjT4ndaH1cWvHc9tq4Ghp0O+9rWvFRdccEFx/vnnF9ddd13Vd9YjjzxSBqPvfe97VZ9Zn/3sZ4tzzz23uPjii+fU+ixkrKEnpLU+NHIeUq1PNEHg7dgf1i9EocLJ8KMsRE0IZVprQqK9EvsUqcbQk6lOhB6QRtLqGM5DDfGUV2Dfz74xfWvDj6ZeXH01zW1eCHPxy8Zz2aox9GSqM6EH7KSohkmrYEYUfmD4URb4nsUl6tMeBiLc0bBZShh6MtWp0BM6GH54+yi0AaL5URS+N9N6hkBTKE4nc4532tGgj2Ul5EkJQ0+mOhl6QlP44fx8v1vgD1A9/CxUmDWuwF/q8xmloYpanhH9iBi7+BJ7a3YlDD2Z6nToCU3hJ1LGCAIQx4a0sO9kdqKkl8SnhVohZk/qjGjcy1WTueAeRCzzYh9Cyn6FXzAUDo5U6/qLZmoYejI1EaEnReLgPH09AMV152Nso0AgoulAehU+xR+Y6gzO1bJREn5ywY+m+DL22j/QnwetRi3YQoVfNIagiWboydTEhZ4UO+6410ha4jzTGDfoqJxidjjO2OZHY0cqZ4PkdE9uej2ElC8m4SX2HWlhPVGNS+H1Tb9ookQI4ofXUoMQ88Q0mBbTZF8WDQqjsI9L35txGL9rjwPpMENPpiY69AR2EhGA6jVAsVPgPNMYdgixj/TiEY0d3wM2xhyfPh7Po+EXSCBQpDU71CC3qSlmf8P0+oUgSlwBQSDhvfoVapni8xlEqb9vTL9eRtQ+sosMPZmaitBTxy9aqlmadkjs5NgB8KtoBNUvcfEIxR9hGpvYEPlRMILtvpOiQTMhIE7zUajJYZ9xuCIEsc/hh1e/INS2RO1SNBxk/tISOxPem+5478VceZGWhdpH8j7UPrHuaEqQBif2pxGuokSNV4e3NUNPpqYy9KT40lELxK+4ph1CesnVkH7x8IMw3ko6bEvZRr1fzaEvYhT2BwSGYeGzikCyUGEfxbiDCAlpEKpPPy292kdGDRDhhf/TgHi4hZ0f06qHI96H7TotIwpKhp5MTX3oqeMXEtX7/Cpq+nJS+HLGzXj4UiwxDPEdjryV45kFHQa2U34tE8jT9hvpNhq/sBcKQxzg4nVDCvYTgWVnHfBlJATokAhAsZ00FWqfCI71AEV3GrAo7F8Zv6m5wWIK23m6rfN+A2LoyVR2oaeOLxFfUr6g/aqGF6r+XUA0KeDsWs7HnYlGeh3GL1I2CHa+7NTZwde3vSicNlnKQYSDWu788vXHNk2IiVNr7LgGsc4iIKXBiEKIYt+bln7bOK8ZEENPprIPPU3SXy9NX8LDDEBRuUQlkjqMHX/a8DP97OuFqv+oFVwoCLG9ME4acHpdIk0AZ4OharD+65b3oB/D4qDR70BBWGIcD/iaJLGdp/vj+ndhCQw9mTL0tNTr/Hecq+7XmK8qD/x/u4rf/oldxeplu4ov/r+H+jcdjDw+jQmnlXq1YeCzj1+jlKZxKG3v9RIlphs79X7BSdJAGHoyZeg5DL0C0ADKY0evKP71mWuKncvWFFuWbSw2/uwni/e+ZW9ZOeCxcMj4XCOwUDsSvzL7ISTxKzRqXGqf55yShiYCzqBOHUhaNENPpgw9SxQHRk41cCBLC8EoDnJJ2f3M1cWuZauLPcevLh5ftbp48ukLh6cDy44qdixbV1z43M3F+1+9q/j0e28r9vxjVVMUp0uicLql6d4gac2TB9u5WD+xvvnclpIwTadS5xl6MmXoGT0qB5I8c7BQEfBnb9hbfPH/qYLUTHA6sHJ18f2nDb5G6WCJG6jF6bn6JaTTHo4IKCx7rA8vr5OyYOjJlKFnPOKWIQQdKhY409EX4WNmpPvesKG476TVxcPPOPFgjRFl87JNB8u6ZTuK9x69pfjkaZuKf1s3E5w2NNQ8He6puWjDVC9LqVEieKRBKy3slGptowaGeYvLwVkfC53KkjQ1DD2ZMvSMz6COsUwnbVbSlGeo0KFCg/ww5+wLB/6YAMGofglpv8v4u1Da3jitKayl7XeofpOUDUNPpgw904ljOGdquOq5HoI41h/2MZ4XEpLqJdox1WuU2oQmxkmDVlpYgHrbqKZpHG5h+rbBkbJj6MmUoScPEYIiMxB8qNyZeNRUtQktvcKapCwZejJl6MkPFSdR0TEVwUeSFsnQkylDT544CxXBh6vbJSknhp5MGXryRS1PBB8aOUtSLgw9mTL05I1mLdHQmQucbNMrKQeGnkwZekQb3wg+ca/CXiVuxbPQLXNoX8xw9inxminfv0iaIIaeTBl6BGp4uF1NnO5abIlQtNCzNhnOqTQDkKRxMvRkytCjQPCpX9FdL3ErnoVumUPNEcO5DU68ph6qIgDNu2GiJA2ZoSdThh4NQoSihZ46wXDuF1QPQJxWW/BRHJI0IIaeTBl6NC5NAYhTZAsFJ0laKkNPpgw96gIun08fl0HD516nvAhFPAyegBTjx6O1uOdQNJqmIbWnzSQ1MfRkytCjriCgpHeL5pQXp8wQQSceir7YEg2tCURMh0BkjZKUL0NPpgw96hqCTnrKq+lB6jSQpnYoanKiTVE0mmY4DanrD1utFx/DIeXJ0JMpQ4+6ivY+EVr4Sy3Q4TZ2JhDxWgJRPLQ9go+P4ZDyY+jJlKFHXUZNTpziGjQfwyHly9CTKUOPckbtT9QmGXykfBh6OuTd7353sWLFiuK4444rzjjjjKpvf+95z3uK5z3veTM772Vl2UyLzRYMPcpd+hgOnz8m5cHQ0xFvf/vbi1NOOaXYvXt3sWfPnmLNmjXFOeecUw1t9id/8ifF8ccfX1x55ZVl986dOw090iIYfKS8GHo64thjjy0+9KEPVV1FccUVV5Q1N4SgJvfee+/Mznp5cdVVV1V9FsfQI80i6MRVYwQfgpCk6WTo6YCHH364DDg333xz1WcW/bZt21Z1zfWJT3yiHH777bcXF1xwQVl6BaQmhh7pkDT48GwwH40hTSdDTwfceuutZYB59NFHqz6z6HfeeedVXXOdf/755XDa/6xfv75405veNLOzPqr4wAc+UI0x10033VS8/OUvP1hWrVpVji9pFsGH+/wQfChc0u7pLmm6GHo6oF/o2cJNSxrQn+EXXnhh1Yed9May3/79+6s+hzz00EPFpz/96YPlwx/+cHHMMcdUQyUFvnKEHoqnu6TpYujpgH6nt7Zv3151zUV/hu/bt6/qUxR33XVX2a/NaS5Pb0m9EXTS0108wkLS5DP0dASnqf7xH/+x6irKBsr9Agz9GZ42ZI7an//+7/+u+vRm6JH649RW+kywdes83SVNOkNPR3BqikvWaXtzxx13FC972cvmXLL+la98pTj11FOL+++/v+pTlMO5tJ1L1a+99tri5JNPLk4//fRqaH+GHqmd9EaG1PoQfuLhpW0QlBjX02TS+Bl6OoQbDZ5wwgllGKnfnPCWW24pXvCCF8w5nQXGY3xe1/Ye
PTD0SO3xZPb0Yahpiae4E2wIQ/xPMKJ/r/FpJL2Y4CRpMAw9mTL0SItH+OHZXZz26hWC6oVaIh50euKJzcMp3hhRGg1DT6YMPdLSEVQ4/bVhw2ywIQzxRHf69XpgKv25QqwenKgdkjRchp5MGXqkbqD2KNoMeVNEabgMPZky9EjdEfcGWrHC01zSMBl6MmXokbqF02MEH06VSRoOQ0+mDD1St3BJe7Tv8fJ2aTgMPZky9EjdQyNoQg9Xc0kaPENPpgw9UvfQnicubScASRosQ0+mDD1SN3FJO6GHuz9zZZekwTH0ZMrQI3VXPPOLuzdLGhxDT6YMPVJ3cZor7t3DHaDrGM4jLHjkRVouu2y2fxQbREtzGXoyZeiRuo2wE6e5CDOEGmp+6KZ/2+IND6VDDD2ZMvRI3Rf37qmXeJ4X9/ShwXMUTovRnxKPuKBfzqgV42o4AiOP+khrxLwRZH4MPZky9EjdR0NmAszatbOhhkbObRs3x31/uMtzzqJheL9CGDIA5cHQkylDjzT94vL3nNv2EBZZB9R4ccqQbkJkvRaN/pp+hp5MGXqk6RdXgfFsr1wRcFgHvdo2pTVB3iJg+hl6MmXokaYfB3oO5tRq5IrTewsFmgiHnObSdDP0ZMrQI00/2qlELUaObVYIOiw7Db/7Yby4RQA1P5pehp5MGXqkPETblRwvXV9MTVe0/fGGkNPN0JMpQ4+Uh7Qhb25i2ds0UqYmLBp+N90QUtPB0JMpQ4+Uh5wvXV9sLVfcEJJ15SXs08nQkylDj5SPaK+S29VJcffqxSx3BKVBXMLO+3ITRI6x8agQGktzCo1gxd8o69cfGoeSU9sigjnraevW2WVnXcQ6GtRnEQw9mTL0SPnI8dL1to2Y6+IS9jZPuY/Hg2zceCi8xIF6EIXpDhs1WiwHd62O9407WLNsg7x7NeEmAiDhL33PfsXQszgzq0x1hh4pH3HahnvW5GIpl+tHSOzVDoog0Cbc0EaI96dw4KYwXwQrAhV/o8SNEyk8XiSmQfgYxqk2jvsEj3R+FyosM8u+GPVA1VR4ZArriPXN8rMuYh0NmqEnU4YeKR8cNOMAkwsOniwvfxeLg23TJez1sEOoYfrUoEV4GdTdr5lWzAPvudTpxikkao/qD60lDKeNtxmXcMayNd29eqHww/bGqap6MCTcMD2my/QHta4Ww9CTKUOPlJd4AGnbRr2TLg7Uh7u8EZriNE8aFAg7aUgYFsJDfG6Uhd6T8eP0UbSL6fVUfqZLWFtMbQrvH1e4Uerhh/cf17pqy9CTKUOPlJc4ZcLfHMSB93BPkXAATw/wFIJUWvMzKnG6jUKYCSwboYN+bU8hEeaWWsPSFH7qNUjjWlcLMfRkytAj5YUDEAcjDo7TjjDAsi62EXMdtURdOYATNNLTXfVTR1Ei2MQpt8MNfW3Uw0+8fxfDTjD0ZMrQI+UnDprDPBB2QRpWlooan66ghiYNGXyetJGJgDMuhB9qo7ocdoKhJ1OGHik/HCA5WHapjcUwRHsc/k4bQhif3zgaAU8DQ0+mDD1SfjhYEgYIP9OMGh6WM5dG22rP0JMpQ4+Un2jrQoPTabbURsyaXoaeTBl6pDxFm5BJaH9xOAbViFnTydCTKUOPlKe4dH0a27tgkI2YNX0MPZky9Eh5ilAwrZeuT3MjZi2doSdThh4pX4QCSpcuxx4UGzGrH0NPh9x+++3FJZdcUlx00UXF9ddfX/VtZ+fOncUuHqzSkqFHytc0X7puI2b1Y+jpiB0zP0sIIevXry/OOeecmS/tsmL79u3V0P54LeNT2jL0SPniZnbsLnjK9jTV9tiIWQsx9HTEmjVris08qa3C/6tWraq6ejsws8c6auanzdqZn26GHkltEA7SRxpMy43ubMSshRh6OuDJJ58sAws1NmHvzF6JfldffXXVp9lZZ51VbNq0qSyGHkltEXTSJ3gnv7kmlo2YtRBDTwfcfffdZWDZs2dP1WcW/Wjf0wvteFay15qxUOjZv39/8aUvfelg2bZtW3HsscdWQyXlKi5hp3BF11LbwhCmaF7YqwyzVslGzFqIoacDCC9NgYV+6SmvFKe1VqxYUb4WC4WeL3zhC8Xzn//8g+Wkk04qT4tJEruRuGkhu4WtW6sBLRBiGJ/2QdGIeKEyrFBiI2YtxNDTAf1qej7ykY9UXXNtmPl5Rgme3pK0FDRo5knZ7EYo1PqsWTMbZvjtVS+9Qg7hiRqXphKn02hHNGg2YlYbhp4OiDY911xzTdWnKPbt21f2u/baa6s+c62e2YMwvKlE7U8/hh5JTaiFiUbObQohh7DE5e9taliiRmnQ7W5sxKw2DD0dcfrpp8/8epr5+VSpX7316KOPFp/5zGeK733ve1WfuazpkTQo1Prw24lCmCCg1EvbkFPHNNlVUUs0yMvlmadhhClNF0NPR1x11VXFcccdV5xxxhnFmWeeWQaY9D49N9xwQ9nvnnvuqfrMZeiRNCni5ojUEA0KNTxM00bM6sfQ0yFf+9rXigsuuKA4//zzi+uuu67qO+uBBx4oLrvssuKJJ56o+szFJe5tTmsFQ4+kcYn2N5RBXc1lI2a1YejJlKFH0jjF6SgaSy9VhCgbMWshhp5MGXokjRPteaLB9FKfAWYjZrVl6MmUoUfSuBF2CCtcwr7YRs3c6LB+6byNmLUQQ0+mDD2SuiAaIC8UWGiyuHHj7Okwxq8XLoUf5t2eNR0MPZky9EjqgvQS9nojZLqpzaEmqB5yuNEh92fl1JaNl9WWoSdThh5JXRF3guZUFTgm8X8acqjJIeQQkgZ5fx/lxdCTKUOPpK6gpiYaNddrdQhE3ntHg2LoyZShR1KXxCXsFGp1tmyxRkeDZ+jJlKFHUtcQdBZxj1Vp0Qw9mTL0SJJyY+jJlKFHkpQbQ0+mDD2SpNwYejJl6JEk5cbQkylDjyQpN4aeTBl6JEm5MfRkytAjScqNoSdThh5JUm4MPZky9EiScmPoyZShR5KUG0NPpgw9kqTcGHoy9bnPfa74sR/7seLnfu7nWpcTTjih+Omf/unGYZZDhXXkelq4uJ7aFdbRz/zMzzQOs8yWn/3Zn3VbalH8zv1cccQRRxTvfe97qyPhdDL0NNi/f3/x0Y9+tLjllltal3/4h38oTj755MZhlkPlT//0T4t169Y1DrMcKr/xG79R/NVf/VXjMMuh8uxnP7vYtm1b4zDLbLn66quLpz/96Y3DLIfKBz7wgWLVqlWNw3Ip//Iv/1Ls3bu3OhJOJ0PPgOzatas49dRTqy71wo7lj//4j6su9fKGN7yhuPjii6su9UIN69e+9rWqS02+9a1vFc94xjOqLvVy5ZVXlj82NN0MPQNi6GnH0NOOoacdQ8/CDD3tGHryYOgZEENPO4aedgw97Rh6FmboacfQkwdDz4AQev75n/+56lIvrCPX08JcT+2wjgw9/RF63JYWRuhxPU0/Q48kScqCoUeSJGXB0CNJkrJg6JEkSVkw9AzA9u3bi3POOac4/fTTi7/8y7+s+k4PblbFrck3b95cliYHDhwo/uIv/qJYs2Z
NsWHDhuKOO+6ohhzyT//0T8X69euLN77xjY3T4TW8tt80eB2v7zWNcfriF79YvOtd7yrn/7zzzivuuuuuasghX/nKV4qzzz67eM1rXlO8+93vrvrOxXL99m//dnHmmWeW21bdl7/85fImj/22N6bBTSDZLpumMS4PPPBA8f73v79cR/3WU3ynWIZenzPrj/XI+mS91jGNM844o3j961/fOI3/+Z//Ofh5ve1tb2vc3rrgtttuK+d/586dVZ9D6N/v+5B+p/h+8j2ti2nw3eQ7Wsc0WD9Mg/XFeusK5r2p1PE94fvS6/sQ21u/71TbafTbZjV+hp4luuaaa4ply5aVG/lVV11VfinY0U6T1atXFyeeeGKxcuXKclnrHn300eK0004rl5sdM+vilFNOKfbt21eNMbtz4rUXXXRRccUVVxTHHXfcnB0D0+A1S5nGODHPK1asKD74wQ+WV/IxX0cffXRx4403VmMUxWc+85niyCOPLIexrbCs9W2Fgw/LvWPHjuL8888vl5dxA9vbM5/5zIPTaNrefv/3f7/vNMaJ9cQybtmypZy/d7zjHfO2KeaVfsw747AsLFMq1h3jsi5Yr6zfENOIbYVppNsKB3/uvss00u3t/vvvr8boDr53y5cvLzZt2lT1mcU88x1g+VhOljddRr47sdwsI8vK95TvWjicabDemsLTODBfhDH+piXFPPM9iW2FZeR7FJq2N16TOpxp1LdZdYOhZ4n+6I/+qPiDP/iDqqsobr755nLj50swbdjpsWx17DCPOeaYqmtW7CgD3dyjJ7BzYGcboaZpGjzWoz4NXhfq0xgn1k3dq171qvIXcvizP/uz4rWvfW3VNYtljm2FWhDWbxqUWGfsbAPbWzoNajjS7Y3Lk+vTYPtMp9ElDz/8cHHssceWB9zAvFLrEFgWlollA8ta31ZYJ6zfwDTS7Y1psK1EqOHA1DSN+gFz3AiHa9euLX94pKGHbZ7lSb8PLC/fkcCy8B1Kscx81xDTuOCCC8puNE2jaZtl/XUB80fo6SXCCPvlwPeB71FgW2nah6fbW30arJN+06hvs+oOQ88S/dIv/dK8A15UA0+bXqEnThGk2Bm96EUvKv9/7LHHytfV1xP9uDcG2k6jLp1G13B6Kj14s62wTCmWOQ7Wn/rUp+YtY6zz+HXeNI1XvOIVB7e3ftN45JFHqj7dwrxdfvnl5f/MI93pr2jQj2UD66tpW2HdgHXF+E3bWzxButf29uIXv7jqGj9OK1PDSq1KPfSwzbM8qfic+a6A707T9hbbCtPgmVwEz1CfBuuj3zTGjXljfpjvPXv2VH0PafqcGTe2ldje6tsKr4ntrde2stA06BfTUHfMP4poUfiV2vSF4Y6604bl5ItcR9Vv+qsHl156afGc5zyn/J82Abyu/quHfh/60IfK/5umwY6lPo26dBpdwrriicVpDQbbSvzKDixzVIPzi7v+yzxqbqK9SdM03vKWtxzc3pqmEZ9bl27ix2f7zne+s3yy81lnnXXwIYfMI/Naf+ghyxQ1Eqyv+rbCOmHdoNf2xjQuvPDC8v+m7Y1pPPe5z626xo+gw/co/k9DD9s8y5iqbyt8d+qBhWWOUzdM4+d//ufL/0N9GqyPpm02pjFuLB+nldnnMt+cerv22murobN3Nm8KPbGtxPbWtA+P7a1pGun21msa6Tar7jD0LNHTnva0xi/Mq1/96qpresTBs47TODSSTNG+gl+RuOmmm8rXfe973yu7A/1o1IqmabBDq0+jLp1GV/DLnLYTNGpMsa2kO2SwzLGtsBz1mgbWGctI42U0TeM973lP32nE5xbT6AI+WxrYvvCFLyz+8A//8GDIYR6Z1+9+97tld2CZ4nNmWevbCuuEdYOYRn17S6fRtL0xjdjexo3TWgSdUA89LAfLmIpthe8KWJZ66GGZWXYwjZe85CXl/6FpGk3bbExj3KgRfOKJJ8r/d+/eXe57CT6BbaUp9NS3laZ9eLq91afRtL3Vp5Fub+oOQ88SnXTSSY1fmLe+9a1V1/SIg2cdV89whUyKX5G/8Au/UP5/3333la+7/fbby+5Av/gV2TQNdtj1adSl0+gKfgU3HRTYVtI2GGCZY1v56Ec/WjZ+TrHOWMZ777237G6aBjUf/aYRn1tMo2t+8Rd/8eDBgXlkXr/61a+W3YFlYtnAsta3FdYJ6wYxjfr2xjT6bW9MI7a3cSIAHnXUUcVll11WNoqnEKI5Vcr/YDlYxlRsK3xXwLLUQw/LzLKDadRrtpqm0bTNxjS6Jrb1O++8s+xmW2kKPfVtpWkfnm5v9Wk0bW/1aaTbrLrD0LNEfBnqVZhUe9Z3NtMgdih1LGva+BEc+NPGs7wuvcyTX2X0+/znP192t50Grwv1aXQB89zrs2db4VRU6nnPe97B8WP9pqd2OL1BY9Onnnqq7GYab3/728v/wa9cqvfr04iDFtg+02l0DQdzLvMF88i8nnvuuWU3IvDGQYVlZb2lWK9xYIppxKkhsE7TbaXXNNLtbVy4RJ2anbRw9Rbte/gfLAfLk34f+H7RL7As9dNQfMdiW4lppAfrpmn022a7JvYJcXqO+YzTUIHvQ31badqHp9tbfRp8B/tNo77NqjsMPUtEGwF2JPEl+5u/+ZvyC9CFK4oGLQ6odeyk6R+hJsaLK4rAgS3dAdPINz3ADGIa48a89WvLVd9WWNb6tkIbi/SAUg9Rbba3+jR4+n/aPU7MR8w7+Nzr88v/zHOgO217whVYxx9//MFthemxTqK9DnjN7/3e71Vds939trevf/3r5TTS7a1LCDv1S9ZZHr4DgW0lbTgfVx3FgZdlpZtlD22mwSlI1g+apjFO8fmBBuxvetObyh8BcWozrlDje4Je20q/7W0Q01B3GHoGgB3Fs571rHKjT3ek04KdLctVL6kPf/jDZT9+/cR9ZFJUAf/O7/xOccIJJ5TVwumBO/CaftOInU2/aYxLhLR6iV+DgStBOHVBuwOG17eV66+/vtxZcjqDHS3bVh3T6Le9xTRYPzRmbZrGuMRnzLwxj/zfNH/0Y95jPJYpFQdf1iPrs+lqIqbBATDWRX1bSbfZuH9SVzWFnvg+8F3gO9G0jCwT3yWWkWVlmVP1afAd5buaYhqsn17TGCfmh1qYmDe2h/oVU7Gt8H3he7PQ9sa49e/UYqfRtM2qGww9A8LlktzHIb3x17Tg1AAH9Xqp49JX+tevmkmxk6XdQL2RaeC1/abB63h9v2mMQ7pe6qWO9dlvW+GUFfffaboENwxiGuPCPPVbP4HxWIZoqFrHsrMO0tOBdQttb2222S6gZqVpOdt8H+I7lV6anopp1ANTaqFpjEu6LfXbDmJb6fd9YFi/71TbafTbZjV+hh5JkpQFQ48kScqCoUeSJGXB0CNJkrJg6JEkSVkw9EiSpCwYeiRJUhYMPdKE4r4kPIupfn8SHngaz2gaFu6LUr/x4jjxLKQ3v/nN5Twxb5LUxNAjTai4U3Y9fHDQp/8wjeI92vr+979fzst5551XztdiQ0/T3Y4lTSdDjzShOFCvXLmyfB
hleqDPLfQwLzyKgIdALjbwwNAj5cPQI00oDtRxwOZZXaEeSJoO6mm/GP/KK688+EywX/3VXy2H/fVf/3X5HCGeA/b+97+/7Id4Dc8kInjx/+tf//p5j3PgPXg2FsP5u2PHjmrIofnfsGFDOZz/m3C6jodgMg6Fmq144GU8yysd1oQnrsd8UHgmF5iH9PWUwGsYj368duvWrdWQQ8vPQyh5bhXPY6ov/5YtW+a851lnnVUNkTQuhh5pQkVoIBRQ28NBGocbeggQ/M8zmF75yleW3Zwyos3Q5z73ueKII44odu/ePec1v/zLv1w+WJFuAsdv/dZvlcPB9AlE0eYoXhPdETjq81ZHWGA6LGd0R2hBvHc/vE8auNKnhDetH8ZlncZ4/KU7phHLwvvyf8xDPIiS8Rmevk/6v6TxMPRIE4oDddSO8H8EgTggh7ahh4cphne+853F0UcfPechljxh+qKLLir/j9d8/OMfL7tx7bXXlv2uuuqqspv/06CB9H35e+KJJ5b/91OfDuGHfswDInD00zQvodf6oaYmxThr164t/4/l/9jHPlZ2Y9u2bWW//fv3Hww9MY+SusHQI00oDsIcnEEQIEBQ2xMH5NDroB796uODWp56kKCb/ojXcIBPnXzyyQdrhxjeVOJ90/nvpWneQM1POv8LhR7WCzU1nG5at25d+ZrQtH4Ytz7flJjfpuXnKdz0i9qwOG1HGOX0nDU90vgZeqQJVQ8N0QaFGg0OtoHaifpBfVCh59Zbby27EQf9yy+/vOzm/34H+vr8N4nwFKfEQv1U00KhJzBuhJGYt6bQQ6iK04VNmpafmjL6ffOb36z6zGJcPgMCV305JI2WoUeaUE2hgdoeAgAH38B4aSiIIDGI0PO3f/u3ZTe4V84znvGM4qtf/WrZTXAgYPTSJvSAZUpDSbx3BAi624aeELVioI1QfT7p12+aTctPDdcpp5xSdc3H+LxO0vgYeqQJ1RQaOJBzcKWECDmcYiG0cMBPg0QcwFNtQw8Nd+lPoTuGg5oUamQ4nUR/CleZxYG/beiJmquNGzeW06DGpB6CFgoo1IDFPDA/zFeEJtZZnPZiODhdSGhjftPX1dcZ89+0/AxPX8twpidpvAw90oTiwNp0CoYDcxoKQACJ/hzseR2vB3/jYB3iYJ1K+6WvueKKK8r/achcR3jgveK908bEvea/CfMc04j5Dk3zX8f7xutpoBxXggXWT4yTSl/H//E63pOQc9dddxUXX3xxce655xaf/exny2FgvPS1LGf9PSWNnqFHkhYpQo+kyeK3VpIWydAjTSa/tZK0SIQeiqTJYuiRJElZMPRIkqQsGHokSVIWDD2SJCkLhh5JkpQFQ48kScqCoUeSJGXB0CNJkrJg6JEkSVkw9EiSpCwYeiRJUhYMPZIkKQuGHkmSlAVDjyRJyoKhR5IkZcHQI0mSsmDokSRJWTD0SJKkLBh6JElSFgw9kiQpC4YeSZKUBUOPJEnKgqFHkiRlwdAjSZKyYOiRJElZMPRIkqQsGHokSVIWDD2SJCkDRfH/A6MbJSagde/sAAAAAElFTkSuQmCC">
The loss curve shows "jumps" because training was resumed twice; each restart changes the learning rate, which explains the shape of the curve.
## Results
The generated questions are evaluated with the BLEU and ROUGE metrics. These are only approximate metrics for text generation.
<img src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAApcAAAGFCAYAAAComticAAAAAXNSR0IArs4c6QAAAARnQU1BAACxjwv8YQUAAAAJcEhZcwAADsMAAA7DAcdvqGQAAF4vSURBVHhe7d0JmBTVvffxvO/z3PfebGq84d7EJSEBlwQl4hIWNe67EEVJVNAQNSioUaLBNQIuERQxgqgoiiBCQAERQRYVFFllEUQEhn3f91XU/9u/U1VQ01M93TA91U3398NzHqZOVVdXdfdM/fqcqlPfMQAAACBLCJcAAADIGsIlAAAAsoZwCQAAgKwhXAIAACBrCJcAAADIGsIlAAAAsoZwCQAAgKwhXAIAACBrCJcAAADIGsIlAAAAsoZwCQAAgKwhXAIAACBrCJcAAADIGsIlAAAAsoZwCQAAgKwhXKKgjRo1yvr37+9PZUebNm1cyWf9+vU7KLazIqZPn25dunSp9P3M9euoz7CeX/8f7AppXwCkRrhEbL7zne+kLJV18P7Tn/5ktWrV8qf2T6pQcfbZZ7uSr5o1a2ZVqlRJu52aF34Pvve977nX6kD3OXl94RJep6ZTrUvLaX46kydPdsvVqFHDrStqm7Mlefsri54j6nkUxLQNhRIuC2VfAKRGuERsdFAJgkBUqQwVCZepQkVlbm9FrV271m33M88849ekpvci/H7cd9991qRJE/vhD39od911l7+UJ1i2PMnrSy6B4HMQRctpfjqtWrWyqlWr+lOVK9XnINtS7XuhhUvtJ+ESKGyES8SmvFBxoDZt2mQzZsxwRT8nq4xwmcqOHTts5syZKbcl2bRp02zhwoX+VGbSPcf+BJEgDCZTXXJwS7VsWCbLSHmfg0zDZXnPle4zkUzvQ3nCn4ONGzfahAkTbP369W46it7T8ePH28qVK/2a1Hbt2rX3vco0XC5btsydEqDPQiqrVq2ySZMm2fz58/2a7ArWP2fOHNu5c6dfm1omn0cAhYNwidiUFyoCderUsYsuusif2keBSo9/8803/RpzrXOqC5fkFrvkcBkcqJMlH9jD6wxKcICMCjavvPKKHX300aWWT94WPYce17t3b6tevfre5e644w5/ifLp+ct7Dq07PE8lCEVRovZDWrRo4bqbw1ItG5bJMqLtSrVcqoAVCN6/5BLI5DMRbGf4fSjvdQrmN2rUqNR6dV5rmF43nY4QXqZp06ZlvkCoPljff/3Xf+2dDj8uKBLss/4Pb4O2fejQoW6ZgELfNddcs3cZlQsvvLDUNmjfw/PDJRO33nprqcdUq1bN3nvvPX+uR/XaJ5XDDz/cTWv7w/sCoHBl9tcEyAIdVNKFjyeeeMItt3nzZr/Gc9NNN9nPf/5zf8ps8ODBbjkdbEtKSlwJDryaFzjQcBksp/rgoBgcEINwEojalptvvrnMtmhdepwO/joY64Cv8KHlkoNKMoWGdM8RbKPqkrc5SvJ+qDWuV69e7vEtW7b0az3Jy0bJZBnR+lMtl/w+JAv2KXiu8D5m+pkIHlu7dm3r27evrVmzptzXKVjnX/7yl1LrPeKII2z58uX+UuZOKRg4cKB7X8Ov5d///nd/CU+wvhtvvNE+/fTTvfsQ7HswHWyT/lf9sccea+3atXMtot27d3ety8mvY/369V2dLnTSfulzVbduXbvgggv8Jfa9huFyxRVXuMemE2yj/tc+qhVXX0SSXwsto/VdffXVNm7cOPflMHiuYB8BFC7CJWKjg0qqEhxsli5d6qZ18Ar7j//4D7v33nv9Ke8gquVmzZrl15j7WXXhg+SBhkuJ2g4JwkkgaltEB//wtgTP8dZbb/k15gKA6u68806/JtpVV12V0XPsz8Fb+6Blw0V1UY9N3ucoUesLSnidmk61rqj3IUrU9mT6mQi2M9NRBLRs8mkC6kpXfdTnI0zzTzvtNH/Ko8fVrFnTn9on1b4H7+kll1zi13j0BUD1QagbOXKkm37sscfcdGDQoEGuXv9H0fm1J598ctpufHXhK0Qmb/urr77q1h9+LTSt31l9vsP25/MJ4OBFuERsdFDRgV0HlqgSUDfeMccc40+ZdevWzT02fG7cL3/5SzvllFP8qX1Up3mBOMJlqm259NJLS22L1hWeDiSvL0qmzxHsX/j1TCV43uD1f+edd6x169Z2zjnn2O9+9zvbsmWLv2Rm25i8vuQS0PalWleqgJUsantSvUbJnwk9TlfGZ0rbo9c52WGHHeZaj8PUUnj77bfbxRdfvHcbv//97/tzPVrf9ddf70/tk2rf9dqpXq2VYUF98Nrq90QXY2k9yeV//ud/ypweIKr7wQ9+YFOnTvVrvJCaXGTevHnu+W677TY3HQi+IN1www1+jbeP5557rj+1T/I2AyhMhEvERgeV5EAQpUePHm7ZESNGuGl16SWHBs3XQTOZ6jQvEEe4zHRbNB21/8nri5Lpc+zPwTvV8959991l1pHJNmayjGjdqZbTVeDh/Ukl6rn0uExeo0y3M5BqvcnrUTf3z372M/f6KQjq9QtOewjLdDsDqd7T5Ho9XudhBtuVXJKfs0+fPu7xyedtdujQwdUH5cwzz3T1qbZDgucIaLmofSxvHQAKR/q/4kCW6KASPgCVR11qOsdt7ty57nEvvfSSP8ej87xStVKFL0ZJFS6Tu+uCc/PCNB11gEw+kKbaFnXFhrdF64ra/+T1Rcn0Ofbn4J3qeXWhS/K+Z7KNmSwjQQCKcuWVV7r56UQ9V6rXKPkzkel2BvRaaLuSqeWyefPm7uclS5a45Tp37uymA7pYS/Vhya9tQHXJy0qq9zS5/o033nDTyWExis751bIvvPCCX7PPnj17yhRZvHixe4zCc5iuXld98FqIpqP2MdW+ACgshEvERgeVTA/q6mLTwfvxxx93jwt30YoGClf9xIkT/RpzP6tO8wKpwqW6EMMUPlQfpmldGZssOZxEbYvo/LTwtlQkXGb6HPtz8E71vLp4SusIX2SUyTZmsow0aNDArT98AYjoQhh1IUd1QSeLeq6o1yjqM5Hpdgb0+F/84hf+lCc457Jr165uWkP+aFpjhYb99re/dfVhmo4KXkGLvYb3CUv1nibXBwPLpxp9IFivhknS71byWKaZUFDXhUVhzz33nHve4LUQTRMugeJFuERsdFDRQV0HnagSNmXKFLf8SSed5M7BTBa+Q4taX1SCgKh5geRwKQpkWk7PqQOi1q+fVRcWXI2r8+o0PzggJoeTqG05/fTTy2yL1hEVapLXl0omz7E/B+/gebVdKmqR0rTO29Nrtn37dn/Jsssml0yXCWgbdWFI+/bt3bbqf02rXlcWpxM8V1imn4mox5ZHj9f2a4gsXQGuUq9evTKtpLroSut9+eWX3edKF+Bce+217vFhwfqS6cprzdOV2+HXLNV7GlX/1FNPuToFTHXN66IlvbbarmC5448/3m1b8Bzhko72S+tXS7++fATPl/xaqC5qfan2BUBhIVwiNsFBPapEHYhUf9ZZZ9nbb7/t15Q2ZswYF7B+/OMfu6KfVRemcKmLU5Lp4Hj
IIYe459BBUs+vn8PUvah6dT1rXnBA1M/Jy3755Zeu6/R///d/U25L1HNI1PpS0XaX9xzaRq0rk4N38LzhovMGg2FswqKWDUrw3kXNC5dk2hddaKOwof8VzjTkUiZSrTOTz0Sqx6aiZbWPKhq+SBfAKFwmh2B9joLTK4LHBO9HWDAvigKhxsvUMsHjUr2nqeq1HQp7P/rRj/beBlQtqsFywbqjSiY0pqu+fOiiKI27GvW+aV1R+5hqmwEUFsIlAAAAsoZwCQAAgKwhXAIAACBrch4uNa6dTmDX1Ys6VykTOtfpjDPOcBce6MrFVOcvAQAAIF45D5fBid86wTvTcKnbqenEeQ1loissFTLDw2AAAAAgN/KmWzzTcDl9+nS3XPhWgBp2I3koDAAAAMTvoAuXGmbju9/9rj/lCR6bPCgzAAAA4nXQhcuOHTu6wZbDgseGB0oOjB071urWrbu3nHDCCS6chusoFAqFQinkonF9dTclIA4FFS6nTp3q1+yzevVqGzJkyN7y7LPPul+ycB2FQqFQKIVc1LDy2muv+UdGoHIddOGyvG7xlStX+jWp6U4qusMJAADFQncqS3W3MyDbDrpwqXCo5cK3XtPV5ple0EO4BAAUG8Il4pTzcKlQGRSFxuDnwMSJE61GjRqlLtapU6eOG4po4cKF7pxK3es306GICJcAgGJDuEScch4u1eqosS6TS2DSpEl24okn2ooVK/wabxD1evXquVBZrVq1/RpEnXAJACg2hEvEKW+6xeNCuAQAFJtch8vt27dTKqns3r3bf5XzB+ESAIACl4tw+c0337hT2ubMmWOzZs2iVGLRaYLr16/3X/ncI1wCAFDgchEuN2zY4I65Cj1fffWV7dmzh1IJZdu2be7UwZKSEv+Vzz3CJQAABS4X4XLx4sXcOS8mu3btci2Y6ibPB4RLAAAKXC7C5dy5c/Oqq7bQ6fQDtRbnA8IlAAAFjnBZ+AiXOUS4BAAUG8JlesnjbEfR8Ihr1qzxp/IL4TKHCJcAgGJDuExPwTI8znaUY4891vr06eNPVa5BgwbZAw88YJdddlna7RLCZQ4RLgEAxYZwmV6+hUvdIKZVq1ZuuzK5PbbC5caNG/2p3CJcAgBQ4AiX6QXhUncBfPnll+3JJ5+0YcOG+XM9UeFy+PDh1qlTJ+vSpYtNnjzZr/UoIIa72vXz/txVUAiXBwHCJQCg2BAu0wvCZY0aNdz/Z511lgt16p4OJIfLxo0bW9WqVa1p06bWqFEjt7yCZkDTyeEyk6AYRrg8CBAuAQDFJl/C5ZVdxsZarn5hnP/M6SnEKViGw2GHDh3s9NNP96dKh0u1VCaHPrVKHnHEEf4U4bJoEC4BAMUmX8LlqY+NtJ/f+25sZX/DpUJceJsnTJjg6ubPn++mw+FSLZWtW7feWxQsVcJBUD8TLosA4RIAUGzyJVzOWLbJpizeEFuZs2qL/8zpKcRVr17dn/Jo+xXsgnMpw+Gyfv361rZt270lCJcqAcJlkSBcAgCKDedcpheEuKFDh/o1Zr1793Z1wRA/4XB577332kknneR+TkXnY2odgSeeeIJwWYgIlwCAYkO4TE8hThfynHnmmfbhhx/undZwQIFwuFy6dKkLfc8884xNmzbN1UnXrl39n8yuv/5615Kpe6xrfZdffnnG4VLLB0WPCU9HIVzmEOESAFBsCJfpKbQpTCoMVqtWzQ455BB3XmWYwmX46vFly5ZZw4YNXa5QAFRp1qyZP9dcd7qmVV+3bt29QTET2o5gneFCuMxDhEsAQLEhXBY+wmUOES4BAMWGcFn4CJc5RLgEABQbwmX+qVmzpv3yl7+MLNu2bfOXyhzhMocIlwCAYkO4zD+7du2ynTt3RpYDQbjMIcIlAKDYEC4LH+EyhwiXAIBiQ7gsfITLHCJcAgCKDeGy8BEuc4hwCQAoNoTLwke4zCHCJQCg2BAuCx/hMocIlwCAYkO4TK+8WysGdFtI3ZUnHxEuc4hwCQAoNoTL9ILbP5YnfG/xyqZbTwa3fNT9zjt27OjPiUa4zCHCJQCg2BAu08u3cHnGGWfYwIEDbfPmzTZ69Gj7/e9/X+Ze52GEyxwiXAIAig3hMr0gXC5evNhatWrlgtzgwYP9uZ6ocDlkyBC7+eab3fK9evXyaz1t2rQp1dWun1V3ILp06WKHHXaYP1UW4TKHCJcAgGJDuEwvCJePPfaYC2kPP/yw65J+5513/CXKhsvGjRtb1apVrWnTpnu7sTt37uzPTYSsxHRyuFTdgXjqqaesXr16/lRZhMscIlwCAIpNLsJlSUlJ2XD53G/jLa9c5D9xegp+NWrUKNX62LVrVzv99NP9qdLhUi2JyUFRrZJHH320P5W9cLl27Vr3uPK65AmXOUS4BAAUm7wJl08dY9b6kPjKfobL//f//p+753fg888/d6Fu/vz5bjocLtVS2bp1671FwVIlHB6zFS71mPLOtxTCZQ4RLgEAxSZvwuW6ErM1s+MrGxf7T5yegl/16tX9KY+2X8Fu8uTJbjocLuvXr29t27bdW4JwqRLIRrhUa2q6YCkKlxs2bPCncotwCQBAgcubcJnHguA3dOhQv8asd+/eri4IbeFwee+997rpFStWuOkoOh9T6wg88sgj+xUua9eunVGwFMJlDhEuAQDFhnCZnsKlLujRmJIaLD2Y1pXjgXC4XLp0qQuKLVq0cMtqyCBdXd6kSRM3X66//nrXkrl8+XLr3r27HXPMMRmHywYNGrhgqXWHSyqEyxwiXAIAig3hMj0FN4VJhcFq1arZIYccUqbVUF3U4ZZN3a2nYcOGLlcoNKo0a9bMn2uuO13Tqq9bt+7e58iElosqqRAuc4hwCQAoNrkIlwfbUEQHO8JlDhEuAQDFhnBZ+AiXOUS4BAAUG8Jl/qlZs6b9+Mc/jizbtm3zl8oc4TKHCJcAgGJDuCx8hMscIlwCAIoN4bLwES5ziHAJACg2hMvCR7jMIcIlAKDYEC4LH+EyhwiXAIBiQ7gsfITLHCJcAgCKDeGy8BEuc4hwCQAoNoTL9NLdXlEGDRpkCxcu9KfyC+EyhwiXAIBiQ7hML5NbM4bvLV7ZtC3BLSX1vNdee62NGzfOn1uWwuXGjRv9qdzKi3B5zz33WNWqVe2II44ocx/PKLrv50UXXeTu+6n7fN50003+nPQIlwCAYkO4TC/fwmXXrl39n/ZtW3kZiXAZ0qJFCxcQdXN33eReL174pu9RqlSp4gLm8uXLbfDgwS7VazoThEsAQLEhXKYXBLh+/frZ9ddfb5deemmZbBEVLrVMw4YNXfDTY8M0L9zVrp8zzSvJ9FjlnVQIlyEKip07d/anzHr16uVePIXNKFEvrt6o8l7wMMIlAKDYEC7TC8Jl0GDVunVr93M4oySHS80/6aST7JlnnrFWrVrtfWxA08nhUnX7a968eXbNNdfY6aef7teURbj0rVmzxr3IEyZM8Gs8quvdu7c/VVbt2rWtY8eO7me9mGr5vOOOO9x0OoRLAECxyZdweW6/c2MtN7x3g//M6QXBr0ePHn6N2QsvvGBHH3
207dy5002Hw2VUw9YDDzxQqk4/VyRcBs+hUwe7d+/u10YjXPqmTJniXrTkD5/qOnTo4E+VtWDBAmvcuLGdcMIJ9v3vf9/at2/vzynrk08+sdNOO21vOfHEE+2www7z5wIAUPjyJVye3fdsO+G1E2Ir+xsu/+M//mNvkJRp06a5TKLgJuFwqQtsFP6Sy3/+53+6+aLHViRcBtQIp3XXrVvXrymLcOkL3rSocNmpUyd/qjS9cGeccYY7V3PgwIH29NNPu2ZsvehR1q5da8OHD99bunXr5rriAQAoFvkSLtfuWGurt6+OrWzYmfnQPAp+1atX96c82n5lkuBUvXC4rF+/vrVt23ZvUTd6UALZCpcSPHbp0qV+TWmES59eBL1QUd3iAwYM8KdKU6BUOFSXeuC+++7L+M2iWxwAUGw45zK9ILwNHTrUrzF3ip7qgvEjw+Hy3nvvddMrVqxw01GSu7Pvv//+jPNKMl0spMfOnj3brymNcBmi8yXVmhjQ1d+HHnqoLV682K8p7ZVXXnFDFoWp1bK8FzyMcAkAKDaEy/QULtUTeuaZZ9qHH364d1oX6gTC4VItiMoe6knVsps3b3YZpkmTJm6+6KpzZRSNbqOsU6tWrYzDpcKr1quidfzyl79025MK4TKkZcuWLmBqYNCZM2davXr1rHnz5v5c7zyD4447zr0xwbTemOeff961Xo4ePdouv/zycl/wMMIlAKDYEC7TC8Kkgly1atXcWNrJ40oqo4wcOdKfMlu2bJkbhki54n/+53/snHPOsb/+9a/+XHPd6RpeUblF50sGz5GJW265xU499VT3WOUgrae8RjTCZRJ1ax911FHuzUl+Iz/99FOX9MPNzq+99ppddtll7o3XByDdCx5GuAQAFBvCZeEjXOYQ4RIAUGwIl/ln27Zt5Zb9RbjMIcIlAKDYEC7zT82aNd0YmlGFcHmQIVwCAIoN4bLwES5ziHAJACg2hMvCR7jMIcIlAKDYEC4LH+EyhwiXAIBiQ7gsfITLHCJcAgCKDeGy8BEuc4hwCQAoNoTLwke4zCHCJQCg2BAu0wtus1ieXr16uf3KR4TLHCJcAgCKDeEyvUxuzRi+t3icdOtI3QZS25gK4TKHCJcAgGJDuEwvX8Pl008/bWeeeSbhMp8RLgEAxYZwmV4QLvv162fXX3+9XXrppWW6yaPCpZZp2LChNWrUyD02TPPCgVA/p+t6D5s5c6YLlR9++CHhMp8RLgEAxYZwmV4QLhXiFABbt27tfu7cubO/RNlwqfmNGze27t2720svvWTnnHNOqfCYHAj1s+oy1aBBA3vggQfcz8nrSka4zCHCJQCg2ORLuJx7+hk2u9bJsZVF1zX2nzm9IPjpop1A165d3b2+d+zY4abD4VIh8kc/+pH7OfDYY4+VCo8VCZddunSxU089de9zEy7zGOESAFBschEuFXaiwuWs446PrexvuDz88MP9KU9JSYkLdeqelnC4vO666+zee+915f7777eHHnrIHn74YTvssMPcfDnQcDlv3jz7yU9+YoMGDfJrCJd5jXAJACg2+RIuv9mxw77Zvj228u3Onf4zp6fgdsQRR/hTnoULF7pQN2PGDDcdDpf169e3Zs2a2ejRo0uVcAA80HCpVlF1saubPih6XPBzFMJlDhEuAQDFJl/CZT4Lgt/w4cP9GrOePXu6uk2bNrnpcLhUi2XVqlVt0aJFbjrKSSed5LrWAy1btsw4XCYXPS74OQrhMocIlwCAYkO4TE/hUq2CZ511lvs5mO7QoYO/ROlwuXTpUhf4WrRo4ZZVAFU3dpMmTdx8ufXWW6127druPM727du7rvRMwmUUPU7PkwrhMocIlwCAYkO4TC8IkyNHjnRXaR9yyCEuDIZdcsklrus7sGzZMjcMkXKFzpFUV/ngwYP9uZ6g1VFd6MFzHAg9jnCZpwiXAIBiQ7gsfITLHCJcAgCKTS7C5cE2zmXcFAT1+kSVb7/91l8qc4TLHCJcAgCKDeEy/9SsWdNq1KgRWbZv3+4vlTnCZQ4RLgEAxYZwWfgIlzlEuAQAFBvCZeEjXOYQ4RIAUGwIl4VP4XLDhg3+VG4RLgEAKHC5CJcrVqxwd7hB5duyZYvNmjXLdu/e7dfkFuESAIACl4twuXXrVhd4FixYYGvXrrV169ZRKqEsX77cZs+eXe6dguJGuAQAoMDlIlzKtm3bbOXKlTZv3rzIMn/+/FhK1HNno0Q9VyYlal0HWhYvXuxOP/j666/9Vz33CJcAABS4XIVLFCfCJQAABY5wiTgRLgEAKHCES8SJcAkAQIEjXCJOhEsAAAoc4RJxIlwCAFDgCJeIE+ESAIACR7hEnAiXAAAUOMIl4kS4BACgwBEuESfCJQAABY5wiTgRLgEAKHCES8SJcAkAQIEjXCJOhEsAAAoc4RJxIlwCAFDgCJeIE+ESAIACR7hEnAiXAAAUOMIl4kS4BACgwBEuESfCJQAABY5wiTgRLgEAKHCES8SJcAkAQIEjXCJOeREu77nnHqtataodccQR1qhRI7+2fPfdd58dd9xx9p3vfMeVNm3a+HPKR7gEABQbwiXilPNw2aJFC6tRo4ZNnjzZSkpK7Oyzz7ZmzZr5c6Np/pFHHmn9+/d306NGjSJcAgCQAuESccp5uKxSpYp17tzZnzLr1auXa4lU2Iyies0fPHiwX7N/CJcAgGJDuEScchou16xZ44LihAkT/BqP6nr37u1PldavXz83f/r06a7VU93oqssU4RIAUGwIl4hTTsPllClTXFBcv369X+NRXYcOHfyp0p555hk3X+dnNm3a1Bo3bmyHH354ym7xMWPGWK1atfaWX/3qV3bYYYf5cwEAKHyES8Qpp+Fy2rRpKcNlp06d/KnSVK/5L7zwgl9jdtddd7m6DRs2+DX7rFu3zj744IO9pXv37q4rHgCAYkG4RJxyGi43btzoQmFUt/iAAQP8qdJUr/nLly/3a8zmzJnj6lKdpxlGtzgAoNgQLhGnnF/QoyvFu3Xr5k+Zu1Dn0EMPtcWLF/s1pale88MX9ARd5UuWLPFrUiNcAgCKDeESccp5uGzZsqULmOPGjbOZM2davXr1rHnz5v5cs/Hjx1u1atVs2bJlfo25+RqySEMQDR061KpXr27169f355aPcAkAKDaES8Qp5+FSNCD6UUcd5UJf8iDq6uo+7bTTbOXKlX6NR8tpeT0u0zEuhXAJACg2hEvEKS/CZZwIlwCAYkO4RJwIlwAAFDjCJeJEuAQAoMARLhEnwiUAAAWOcIk4ES4BAChwhEvEKSvhUnfG0VXdGhoo3xEuAQDFhnCJOFU4XDZr1swNYK7AFoRLjV3ZtWtX93O+IVwCAIoN4RJxqlC4fPnll+3aa6+1vn372sMPP2yjR4929WPHjnWDoecjwiUAoNgQLhGnCoVLDWTevXt39/ODDz64N1xu3brVfvCDH7if8w3hEgBQbAiXiFOFwmXDhg1d66XoLjtBuHzvvffcLRvzEeESAFBsCJeIU4XCpW67eOmll9qUKVOsVatWLlwOHDjQrrnmGjedjwiXA
IBiQ7hEnCp8Qc+NN97oLug577zz7KSTTnI/V61a1Z+bfwiXAIBiQ7hEnCocLmXEiBHu6vAOHTrY4MGD/dr8RLgEABQbwiXiVKFwefbZZ7uu8YMJ4RIAUGwIl4hThcJlixYtCJcAAOQ5wiXiVKFw+cUXX1iNGjWsW7dutmDBAr82vxEuAQDFhnCJOFUoXKrVUhfwRBV1mecjwiUAoNgQLhGnCoVL3e6xvJKPCJcAgGJDuEScKhQuD0aESwBAsSFcIk4VDpczZ8503eO6W49uB6k79eRrq6UQLgEAxYZwiThVKFx+8skn7vzKY4891ho3bmzNmjWzU045xdUFt4XMN4RLAECxIVwiThUKl3feeWfkhTtqyaxZs6Y/lV8IlwCAYkO4RJwqFC4VLFN1gav1Mh8RLgEAxYZwiTjRcgkAQIEjXCJOnHMJAECBI1wiThXuu1bAbNCggWvBDEqfPn38ufmHcAkAKDaES8QpP0+MrESESwBAsSFcIk4VCpcDBw5051cmU11UfT4gXAIAig3hEnGqULi8++677amnnvKn9unXr5879zIfES4BAMWGcIk4VShcMhQRAAD5j3CJOFUoAd56663WokULf2qfjh07MhQRAAB5gnCJOFUoXE6YMMGqVKlit99+u2vBnDZtmj3wwAN21FFHMRQRAAB5gnCJOFW471qtlAqY6gYPSqNGjfy5+YdwCQAoNoRLxCkrJ0bu2LHDZs6caTNmzLBNmzb5tfmJcAkAKDaES8Qpq1fdLF261GbPnu1P5SfCJQCg2BAuEacDCpe6SrxLly7+lKdu3bp7u8WrV69uQ4YM8efkF8IlAKDYEC4RpwMKlz/84Q9tzZo1/pTZ4MGD7Re/+IV169bNdY+ff/751rRpU39ufiFcAgCKDeEScdrvcDlnzhzXOhlWv359u/nmm/0ps1deecWOO+44fyq/EC4BAMWGcIk47Xe4nD59uguXq1atctMlJSVuunfv3m5aNCxRcgDNF4RLAECxIVwiTvudAHVl+BFHHGE9evRw0506dbJatWq5nwMKl9z+EQCA/EC4RJwOqHmxTZs2rmVSF/bo/z59+vhzPJofdeeefEC4BAAUG8Il4nTAfdfDhw93IXLZsmV+zT6q79evnz+VXwiXAIBiQ7hEnPLzxMhKRLgEABQbwiXiRLgEAKDAES4RJ8IlAAAFjnCJOBEuAQAocIRLxIlwCQBAgSNcIk6ESwAAChzhEnHKi3B5zz33WNWqVd3g7I0aNfJr01u4cKEbZ3N/7gZEuAQAFBvCJeKU83CpwdZr1KhhkydPdreS1MDszZo18+eWT0E0GNA9U4RLAECxIVwiTjkPl1WqVLHOnTv7U2a9evVyYVFhszzdu3e3c845h3AJAEAahEvEKafhcs2aNS4YTpgwwa/xqK53797+VFkrV660n/zkJzZlyhTCJQAAaRAuEaechkuFQwXD9evX+zUe1XXo0MGfKuvPf/6z3Xbbbe7ndOHy448/thNPPHFvOfbYY+2www7z5wIAUPgIl4hTTsPltGnTUobLTp06+VOl6Z7lRx55pG3fvt1NpwuXWvdHH320t/Ts2dN1xQMAUCwIl4hTTsPlxo0bXTCM6hYfMGCAP1WaLv5RoAwXLa//R40a5S+VGt3iAIBiQ7hEnHIaLkVhsVu3bv6U2eDBg+3QQw+1xYsX+zWlJQdLFcIlgGQ7vvra5q/ZZnNXb/VrgOzatOMrW75xh81ZtcWmLt5on5SstaGfr7Q3Jy+17mMXWucPSuxf78+15z4ssRdGz7OXP55vr36y0HqOW2RvTFxsfT9dYv2nLLNBny23d2essGEzV9rIWats1Ow19vHctbZ0ww7/mSqOcIk45TxctmzZ0gXMcePG2cyZM61evXrWvHlzf665+p///Oe2bNkyv6a0IFxminAJHPx0QNfBXAdyHcTbD5ttLftOs+tenmDndhhtv354mP383nf3loue+cgeHjTTHcDXbd3trwWBXXu+sfXbdtvi9dtt1orNNmnhevtw9mobPH259Zm0xIUihaTH3p1l9/efYXf0nmp/7j7JrnlpvPv/9sT0fYn6RxPztZyW750IT+8kQpPWM3HBevti+WZbtG67ex49Xy6E91Pbk24/tV/avz90HW+XPPux/e7JD+3kR0fYcQ+9V+rzVVlFoTRbCJeIU87Dpdx333121FFHudCXPIi6LvpR4NQV4lEULjU2ZqYIl8DBYduuPTZ23jrrkjjA/qXnZLs0cXDXgT3qILy/5eynRtnf+n3mWo/mrSmcls0N23fbwnXb7LMlG+3juWtci1iPcYus0wcl9sjgL9w+3/Tap9boxXF2/tOj7dTHRka+PnGUq54f60JbXEX7HLUdFSn6ElP78ffdF5oGnT+JfN6KFIXzbCFcIk55ES7jRLhEPti++2tbs2WXLVi7zWYs22QT5q9zXWHvfb7SBk5dZv+etMReG7vQXvzIa0lp996X1vadL+z+ATPsb30/sxa9ptiNr01yLXVRB6VMyh8T5bY3prpWP3XTjZ6zxmav2mI7v/ra38r4qEXp00UbrNuYBfbXPlPtvETwiTqYB+WUR0da/c5j3Gug1+TZxGuk1+yjxD58uWKza50KUxe5wtZTw2dHhoxaj4yw5onXVK2geny+0f6oRVH7pxY27a9a1rT/eh30eiTv0/6Wmm2GW712H9iFHT+yq18YZ026TXSvyT1vTrc278y0p0fMsWdGpi8dE8tpeT1Oj7/+lYlufVqv1v+btsMjnz+uoucP76e2L7yf2n51Yb8+fpENSPwujvhilY1P/H5OX7rJnWaxOvF7u233Hv+dOXgQLhEnwiXylro+1W2lP+wHS3lm5Fz7x9ufuwDYrOdka5wIfw2e+8SFpTr/fN9OaF26uzZfy4mJoKHQcnOPT631oJn20sfzbciMFfbZ0o1Z6VbWgVqBttVb012LZNQ2qFz0r4/dQV+hT93gKzZl7xw0ddWqRa9Jtwllujm1/2rh0zlwUe9zZRWds6cvEfryoJa90xMhKLxd6Yo+X3rMZZ3GuM+e1qPw/WTiC4S+qCiAa5/0ZUYhWq+nvujkkoLa5h1fuQCt4KZtUre1vnjpfFmFan0B0/sf/D3Q/5pWvebrnEctr8fp8VqP1qf1HoxBsDIQLhEnwiVySgcEtZgpPKjbrumrk1wXU9SBs5DKsQ8Oda1lZ7b/0C5OBCi1pqmlSGFO53kpUD048HN3DluH4bPdhQE6H0ytKf0+XeK6y9Siota45ICSaVGQ0brUMnp3v89cK6jOKYva3uRyeSK8RLWGpisK2lHrU9FzKwx1TYQgBb+4W1DVlawAptbAGknnbOa6/Pofw+ycDqPs2pfG213/nuZasnVhiAL/lMUbbFkWL/xAYSJcIk6ES8RCF17owK2uPIUYdUtFHUTDRd1XCiNRIaWyiw7i6i7Tyfw630/dZgp9umhEwU/78dDbn7tuNJ38327oly4EqiXslU8WuNCmA78C3ORFG1x3swLAxu1f+a9Iflu1eadrGVKIVRehgq6C/wUdP3JBJ+r92p/y28ff
dy2DCs0fz11rW3bmX+vS58s2uW5eBbnHh8xyLYpqldZ7//fEZ0CfBV3YokCsz4hCqT4zwakK+l/Tqg8+Q1pej9Pjg8+Q1us+Q4nn0Wfo+cTrrSuIx81b584HzXXLIgoD4RJxIlwiLZ0Lp/OQdLD8fSLsqatSF0Som/ektiPs+ApcOamT4XUgVmDThRu6alNdr1t30ZV1MFDwCbo0dQ6puiSXRHRpTlvidWmqO5artYH4ES4RJ8IlytDYar0mLLJbXp+8361UunpS3b11n/jAdeNp+I4ru4y1G16Z6FppdMHG+7NWu3OkAADxIFwiToRLuCtpNXCvunijrtLVBQIaw05dpDq/S+PD6apJdfOqFUpDxgAA8hfhEnEiXBahb7/1zidTN7SGo6n2wJBSYfK0x0a64WB09whdsQ0AOLgRLhEnwmUR0LAcGkPx+VHz3Dh/yePMafgSXXCgq0/prgaAwkO4RJwIlwVELZK6kGLw9BXuylOd5xg1uLLOo/zTqxPdkC9qwfxGDwQAFCzCJeJEuDxIffX1N+4qXA2KrOFRGj4/NvLim+oPDHWDYWsoGQ2PoyFxCJMAUFwIl4gT4fIgo2FfdLcUjROYHCRVdL9g3T9YXdwaFBoAAMIl4kS4PEjojioagPmYB4fuDZK6u4vuDa2wqTua6KpvAACSES4RJ8JlHtPA1LqrjQYsDwKlur41LJAGGgcAIBOES8SJcJlndDrkmJK17pZy4SGCdN5k74mLbdtuxpQEAOwfwiXiRLjMExqMXONOqqs7CJQ1Hh7mLsT5ciXDAwEADhzhEnEiXOaQrtrW+JPNek62avfva6XU7RJ1ZfdOzqEEAGQB4RJxIlzm0K2vTy7TSjl39VZ/LgAA2UG4RJwIlzl03EPv2a8TobL/lGV+DQAA2Ue4RJwIlzmiMSjVYnl5pzF+DQAAlYNwiTgRLnPklU8WuHD58KCZfg0AAJWDcIk4ES5zRIOfK1y+PW25XwMAQOUgXCJOhMscqfPPD1y4XLJ+u18DADjobV9vtnGx2aqZZovHmy36xGzJBLNlk81WfObVr/nSbF2J2YaFZpuWmW1ZabZtrdmOjWa7tpp9tcNfWfYQLhEnwmUOrNy80wXLWo+M8GsAHLCdm8zWzvUO5F8MMpvUzWx0O7N3/2bW93qz1y4zG/p3b54O4Dhwu7d5QUjBaPlUs4VjzJZOSvxR+9yr27TUe421XL7atcVs62ovAK6ZnQh8073wt+Ajsznvmc0caPZZb7PJ3c3GP282pqPZh4+bDbvfbNDtZv3+ZNbrKrNXLzF7vq7Zv040a/8Ls9aHZL989JS/0RVHuEScCJc5MGTGChcuNb4lgHJsWGQ27wOzsZ3Mhj9oNqCZ2etXmr14htnTx0cfkNOV504ze+cOs2m9vEBULHZu9gKVWs/mj/ZC1KeveuFp5MNeGNfr2+c6sx71zV4626zzqd7r/M8jo1/LTIoe+2Q1s2dqeOvTe9ftAi/09/y9F9R6X2P27yZmbzY163+z2cBbE0Eu8R4Nvsv7YqBgNyKxje+39YLeiH949VpGy+uxWo/W+fJ5Zi/UM+t0slnHX3nB7/GfRG9bvpcxz/hvXsURLhEnwmUOPDL4Cxcudd9wFCB1a21bkziQLzFbO8ds5QyzJRPNFnxsNmeY2ReJP/Cf9fFaRia84B1ARj3hHTz3HjD/YtZXB8yrQwfM0xMH51MSB8xfJw7Wv0wcMH8afUDan/JU9cSB/nyzfjd4AWNiV7O5I7yuu90xnbKhLsBlU8xm9Eu8Dv/0tuXFM6O3N6ooOKj1SIGl9x+91qUPHvHWFRSFEe1n1OP1Gug5J77k7ffBQi2Iy6d5n6lPX/H2c8g9Zm/d5AUt7W+nWpXXqnawFn1e9JooeCqAKojq90u/Z3rdFFQVWPV7qN9HfXb02o591vudnfGm18Kp7m61eq5P/B1XS2gldGVnE+EScSJc5sAVXT5x4XLSwvV+DcrQH2q1Ki0a64WdWe944WNKDy8AffIvs9Htzd5vY/befWaD7zQbcIsXEhQwejQw635pqGUkUaeDhrq09raMJEKIWkZ0QHYtI4mDiFpGXBhR0GvltXAlBz0dtCsj6OVjUauTuv7e+EPidbrbe91nDvC6Qw+kzPvQe//0nuk9Stf6qOd/5SKzt2/zWtj0/s8e6p2/pla4/bVnp9f9qdYvvZdRz9nuZ17r3bjO3jlwcVM3v87Jmz/KbNobZh8/7b322iaFoANtsX0s8XdPj9X7qd+Nfzf2XtfhD3ndr/qiM/V17/QBtWzqC9HqWV5Xd0VeB+2PgrDOL9T6FIgXj/Oeo+R9L6h9Odj70qXgpi9eU3t6QU6BX13TarnW+//Rk97vp35WvZZRF7ZaYbUevbfq4lboU5e3PiMKfuoKL3KES8SJcBmzPd986271WO2BIbZrzzd+bZHRCexLP/UCo4KGAqKCocKGuiwr0gWXD8W1jFT1DuRqOdLB/OVzvQP66w29g7palxRuFWx1cFfYUYgY38XrqnQHzESIU5DSQVjnE6o7UwdMdRVvXeV1c1aU1qUWGD2fDtzaJr0P2u6ofaus8uxJ3pcAhfopr3n7G9f5kQojeu31/AfTlwSF4C61E1+grkh8WWpu9sGjiUD8nBdI9bnRa6iQqs8Kih7hEnEiXMZs8qINrtWyQefEAf1gopZEtSKWjEzdiqgWhVRF4Urdj1EHyVRFrYJqJdTBs8+1Zm/+2eztFmbvtjQb9oB3MI16rmyUXAS9fKR9Uyuh9l/vs1rQ1IqpoHygRef1qaVs9hCvJSvfqGVNrZZqKdT2qrWw61le96nCnLpS/1XT+3x2ONZruX7i6OwF00f/x1u/LhjROYj68qGWu8/f8n4H1y/wNxTIHOEScSJcxuylj+e7cNnmnTw/t0thSWFKIe6lc6IPggdadDBWt7K6mdXtrDCn7i11f6s7S+crAgCyhnCJOBEuY3br65NduHznszwbPF2BTuctqZtW3bhRobBLndKtiOqKC7ciqjVKLSyTXvbO3VJLy5fvelf76vwtXeACAIgd4RJxIlzG7KS2I1y4XLYhx1cW6uR6DcWibuZnf1M2SLY5zBsy5L17vW7wHRv8BwIADjaES8SJcBkjBUoFy9gHT1erpM6V1JA3amFMde7jKxd6F9fMHe4NpwMAKAiES8SJcBkjdYUrXN7yeiUOnq7he3TxhUKirn5NFSQ1LMlrl5sbX1HjLwIAChbhEnEiXMao9aCZLlx2zdbg6bqqVVdsa6BftTqmuguFrj7V3Tbe+at3PqRu1wYAKBqES8SJcBmj+p3HuHCp4YgOyDd7Ejsw2LtVWtvDo4OkxlfUrds0fImGC9I4d98W6XiaAACHcIk4ES5jogHTf3GfN3j6nq+/9WszpICoO8hoCJ9wkHzmBG9A7tHtvGGDNDg5AABJCJeIE+EyJhMXrHetlr9/LsPB03XLNN36rOvvSgfKDsd5rZKrv/AXBACgfIRLxIlwGZPnR89z4bLt4HJCobqvda9d3f/60Sr7AqXu/KHbI+pew3RxAwD2E+EScSJcxuT
mHp+6cDl4+gq/JmT9fLP323r3og4CZdsfefeh1nmTuvUigHKt27HOFmxaYNNWT7PRS0fbO/PesddnvW4vfPaCjVw00jbu2ugviWK3fc9227J7i23YucHW7FhjK7ettKVbltqizYusZGOJzV4/275Y94VNXzPdpqyeYhNXTrSPl35sIxaNsHfnv2tvzX3L3pj1hr3y+Svu89VxckdrN7GdtRnXxu4fc7/9bdTfrMX7Leym4TfZjcNutL+M+IvdOvJWu/2D2+3OD++0v43+m7X6qJU9MOYB+8fYf7jHPTb+MXti4hP25KQn3fqenfKsTV6VvZFFCJeIE+EyJsHg6Ss2hYKi7lOtq7yDQKmibnDd13j7On8hoLht2LXBPl/7uT09+WlrPba13TXqLnfAvuqdq+z8N8+33/b6rZ3w2gkZld+//Xt7dPyjNnTBUFu9fbX/DBAFLQUsBSsFKoWp9xa+54JUjy96uBD11KdPuSCkYKTw9Odhf3YB6pYRt9ht799mf/3wry5Yab5CVhCcHp/wuAtfevwzU56xzlM7u/W9NOMlF9C0foW1vrP7uud7u+RtF+L0/Ppi8OGSD23MsjE2bvk4F/Q0PXj+YPv37H+7xz837TkXzB765CH3/NqeG4beYFe8fYVd+NaFdnqf0yM/D/leus3o5r87FUe4RJwIlzFYtG67C5a1H3/fr/E9U8MLlB1/5Y1LuXaOPwP5buvurbZ2x1pbtnWZzds4z7VyTF091SasmGCjloyyYQuH2dvz3nYHy55f9LSXZ7zsDoAdPu3gDrQ66Lb6uJVrxVCLhg7STYY2sWvfvdYaDW5kVw660hoMbGCX9r/UHRzP63eendX3LDujzxlW5406dmqvUyMPRvtbzuxzpjUc1NBtw8NjH3bb2G9OPxcsZq2b5VoDK5tajfTaKUzoYKowou3R/u/PfipA6PXSa9j0vaalyjXvXhP5mAvevMAFoT6z+7gWq4OdWmfV+jZj7Qwbu3ysDZk/xHp/2dtenP6itZ/U3h4c86BrPVPwUtA+u+/Zka9LoZdTXj/F/R7pM6Pfq3P7net+z/T5qT+wvvv9a/ROI/dZ0u+lPkMKrArP+rzo91ctjQrLnad1ti7TulRKUcjPFsIl4kS4jMGAqctcuGzeK/SHYsMiL1iqKxxl6CA5f9N8m7JqimuxeH/x++5A2X9uf3ewfPXzV+3Fz160f035lztoth3X1nUxqbtJLSg3D7+5TMDYn6KDikLXZQMuc8FOoS5bge5gLGohbDyksWs1VDiOOhBmUtRipfdJLY86mEc9V7hoGbWMKXCqlUthXa1Z41eMd4FeLW3q3szErq93uccpQCvMRz2fwoYCv7rTFa5zTV9iFBYVvvU7oOCvoKj3QJ91vY4K4fp8Ru3P/pR6veu511utfQqfzUY0c62Aag1Uq2BlhCi1YiqgqVVT+6T3WcFNrZ4KcdpHBTr9Tivc6bOg906tpn//6O9ueT1eraBq/VSrpz4f+nKkYKZWWH1G1Cqr97+YES4RJ8JlDB4c+LkLly9/HBo8XYOfK1z2beJXFD618k1fO9217CkkqjVPwVAteDonSWHuYGlJqf1Gbfvdv3/nWr4uH3i5Xf3O1S586eCnA1/LUS3tvo/vcwc/HZh1DtXznz3vuvB6zeplb85503Xr6Ryuj5Z+5Lr6FKR1jtfMdTPdQVEtaQoWOjiu2LbCnRumg6TClM4Zy8bBctX2Va6VS8FFob3T1E6udUvvh1q26vSuE7n/2SxqKWr+fnN3rpm6RvVlQvsdBwUQhVYFqdN6nVZm29T1HvXlozKLWq71hSZ5WzIpao1TQNQ69AVL4UxfvHT+nr6Q6fdO3cyfrvrU5myY41qNd+zhnO5iQLhEnAiXMbjk2Y9duJyyODR4+ls3eeFSd8zJc+t3rncXSKilUCFJYUmhSeFJIUrBQKEq6kAZlKgDYXlFB0l1UQUtKHd8cIdrqXjwkwfdOXMKImoF6zq9q2uxULfmwJKB7lw6nY+lLsFJKycdcPlszWf25fovbeGmhS7Y6TXY9tU2/xUpLgqxCnsKJGo97j6ze2QrVCZFAVutS3qNFWzyjc7tfG3ma661TC15UZ/NOIu6bxUW9cVFX1rUiqigqM+8vpyoJVafU84fRTqES8SJcFnJUg6eHtzzO8/Os1Q3nIKZWnN0bpbORYo66B1I0bp0HpO6t9Ttpe4steTp5H09p7o58zFwAGrdc1cX79p3dbFa4hW6dc5t8tXFyV9W9qdoHWqtVus0kC2ES8SJcFnJxs5b51otr+wy1q9JUKBUsFTAzCG1SGmoC7WCqPvs4v4XR4ZClevevc61WKorWxeo6Nw3XbCiC1fUza0LWXRemA6wOtjqwKsLXhRWAQC5RbhEnPImXK5fv95Wrsy81WrhwoU2ffp0fypzcYfLzh+UuHD56LuhiwPUFa5w2f9mv6LyqWtXQVIhUF3LOmk/KkSq6HwtXTmsiwfy4aIGAEDFEC4Rp7wIl/fcc4995zvfcaVRo0Z+bbQ2bdpY7dq19y5fvXp1u+OOO/y56cUdLv/cfZILl0NmhAZP10U8CpdTe/oV2aPuOV0tqasndbWluqHLu8pZV0Pf+/G97upYDT5d7FdUAkAhIlwiTjkPly1atLAaNWrY5MmTraSkxM4++2xr1qyZP7cshctOnTrZzJkzbc2aNdalSxcXMlWfibjD5a8fHubC5crNO/2ahHY/88LlxiV+xf7T1cUaQFjnLeocxnTDumigaV0UoCuBdZ6jWjCL9QIVACg2hEvEKefhskqVKta5c2d/yqxXr14uLCpsZqpatWp2xRVX+FPlizNczl+zzQXLOv/8wK9JWPm5Fyz/daJfkRm1KOoqaLVGRg2ZEpSL3rrIXb2tIX7Ura2u8DgGwgYA5C/CJeKU03CplkcFyQkTJvg1HtX17t3bn0pPYVFd65mIM1y+OXmpC5e3vREaPH3cc164HHS7X5Hazj07bfjC4W4gYw1JEg6RuvJa40NqeCAtM2s950YC8s327bYn8bdl98KFtnPmTNs+6VP7ev16fy6K3Tdbt9qedevsq2XLbNe8ebbzi1m2Y+pU2zZ+vG0dNco2v/eebRr4tm3s29fW9+hh67q+ZGs6d7Y1HZ+x1U89ZaueeMJWPfqYrWzdxlY89JAtv/c+W37P321Zy5a29PY7bGnzFrbkL81s8Z9vtEU3/MkV/ay6Jbc2Tyxzuy278y5b9re7E4+911Y88KCt+MfDtrJtW1v12OO2ql079zx6Pn12s4VwiTjlNFxOmTLFBUldzBOmug4dOvhT5Wvfvr07BzN5HYGPPvrIfvWrX+0tauU87LDD/LmV677+M1y4fGXMAr8m4Y0/eOFyel+/ojQNP6KxBHWXkORAqQG7Nb6jzo38NvEPhefbnTvtm23b7OtNm9wBcM+qVfbV8uW2e/Fi2z1/vu2aO9c7GM6Y4Q6IOvhsnzjxwMqkSbarpMS+3pzZHW7i9O2uXbZ70SK3jZsHD7Z1r7xia/71rK185BFb3qqVLb3tNlvctKktbPQHm3fJpTb3d2
fZ7FNOtVnHHZ+ylJx/gTugb+j1RuI1/MJ/JgQUwL9assR2Jr6Ab5882baO/sg2DxlqG/u9aeu7v2Zrn+uSCD7tXRDS67jkllts0fU32KIm1x9Q0fundSz7653uPdV6Vz3+T1v99NPuuda93M3Wv/66e/5N77xjW0aMcNukELjlgw9s06BBtqF3b7fcmk6dXDBbcf8Dbn2Lb7rZFl3X2OZfXt9KzjnX5vy2duRnIt+Lgm22EC4Rp5yGy2nTpqUMlzqvMp1hw4a5ZQcnDj6pbNiwwcaOHbu3qEVUXfFxuLDjRy5cTluy0av4NhEIH/+pFy637hv0WMP1aFgfjSt58usnlwqUGh5IA5frLioo39ebN9tXK1a4ALYj8dlSMFEA2zF9ugsTu+bMcS0VCmpqtVBwU4D7euNG15rxTSLYBRTytL49q1e75d06E4Fu+6eTbdsnn9iW99+3ze8OsY39+9uGN3rb+ldftbUvvOACkFoeVrZpa8vvu99rzWiRCEI33pQ4oDaxBVddbfMvu9wFnbln/s7mnPZb+7LmbyIPLHGWL39zks278EJ30FdwUMuJAoXChYKGQke27Fm50r0nW0aMdEFvzTP/cq0/at2Zf9llNvvU0yK3MdtF+6xWJT2/Qove74OdPssK5Xp99TlVMNdrvPb5523VP//pXme1nrnglXit555+RuRrU8hl9km1bE6dulZy9tk276KLbcHvr7CFf/ij+yyodVGtj2qJVKukWihXP/mUC6+u9TLmot+9bCFcIk45DZcbE38IFQ6jusUHDBjgT0XTBTxarl+/fn5NZuLqFt+2a48LlqUGT1+W+EOhYNn5VDepO57oYpxwmFTRFdy6C0ehDwOk1imFO4W3nbO+dMFt66jRXmjr2y8R2Lr7rSXtbMU//pEIPX/zWksSAWjBFVfYvAsutLn1Ts+LcJbN4g5+p53mDoBzzzjTHQRLzjvfHQgVCHQwXNDwqsQB8Q8uJIRbg/arJB6rQKmQFbUdUUUtQPPrN4heXwal5OxzItcbVbTfC/94jTvY6yCv8L6+Z08X6DcnvlhuGzfefYlQ8Ffrrlp7y7Pz889d0FI3pD4/Uc85/9LL3JcCdYnqy4g+k/u+oMxydWpBLvMFJfHcyV9QsuGbLVu8sJjYBhfG//1vFxRXPvKoLburpQtE+kzosxK1P/tT9EVHrXxq7dNnQ1+I1Aqo1kC1Cq559lnXSqjWQrUa6gvW9sTf7shW8QzKtsSXfbVA6gvMpsTf+w19+ngtpC++6J5LraTqKtb7pS88rrVaLZLX3+C1eCb+HrjWzsRy+juhvxdq5dT69GVBwUx/V/TFSK2y+ntTzAiXiFPOL+jRleLdunXzp8y1Qh566KG2OPHHO5UDDZYSV7j8eO4aFy4bPh8aPH1MRy9cvvs3Nxm+wlv3ce48rbO7328+U5eta81bsMAdrHWQ0EGmVBdVxDdw7yB4uQsXOohFHdzyragFTS07CnYKHQuubOgddP/8Z9f6o4O7zpla+XBr1yq0+umOtrbL864Ld0OvXrbxzbdcy5FCwdaPP/ZaUj+bbrtmz3aBQS14rtV0R+7v7ayucXWRbxs7zh3odYDXgV2trgqxamWNeo0OpMypW88F5CXNmrnWIX0+FOa2fvSRC3AKa3FQ2FAAUSjROXAK2lHbW9Eyu9bJZb8snH9BxJeFP7rPlz5nJWedHbmudEWnBpSce55bhz6n2i99PnX+3rrE31l9JtW97E6JSHwOv1qxMi8+f6h8hEvEKefhsmXLli5gjhs3zg0vVK9ePWvevLk/11xX9pFHHmlLly5106+//vreYDlq1KhSJRNxhctnRs514fKx8ODpPa/wwuUXg2zuhrkuVF7S/xJ3C7l8oxYTHYDUkqAANb/B7yMPZhUtaglzrSVqDWucCG433ey1ljzwoDv/St1ROiiqVcOdd/XBB267dKGGLtjQhRscHOOjLxbuS0WpUw68Fr1MTznIZ9pWBWy1kurCDLW2KvTpdAaFQH1BUihUONQXJYVFhWWFR4XIqM94RYpalfX7oRZctdbp90JBUb+X+n1Q663OkdTvAVAewiXilPNwKffdd58dddRRLvQlD6I+NXEA0y/FqsTBSjRfY2FGlUzEFS6vf2WiC5fvfe7fdeibPWaPVvHC5Y6N1nV6VxcuHxv/mDc/h9R6odCmVjd1PanlI+pAp6KDnbqi1cqz4MorEwffJomD3q2u26pUF1Xi4Le3i+ojv4sq8doHXVRAMXAXaCUCa8oLtBKhXOE8fIHWjs8+c78nuuodyBbCJeKUF+EyTnGFy2Dw9PXbdnsVi8Z6wfKF092k7tWtcDlm2Rg3Xdl0IFPI0zlrm94eZKvbP+mu1pxTu05kiFTXtc5tUuuhzvPaMWWK6xIHABx8CJeIE+GyEsxZtcUFy3rtQoOnj27nhcthD9iGXRvsxNdOdLdl/Oqbr/wFKkZdw2oB0YUOOu9R57LpPMd0F1CoK0/n1K148EE3ppu6BOM65w0AEA/CJeJEuKwEfSYtceHy9t5T/ZqE7pd64XLOMOs/t79rtdRYlgdC57up63n531u588F0zldUcAwXXSCglkiFyHUvvewGC9a5cQCAwke4RJwIl5Xgnjenu3D56icLvYo9u8zaHm7W5lCz3dvsjg/ucOHy7ZLMftF1vpaGX9HVyeWNATjvwovcOG0apmT9az1s64cfuossAADFjXCJOBEuK8G5HUa7cPnZUn/w9PmjvFbLl89z3eDqDle3uLrHU9Gg37piVV3WySFS50nqqlGdC6lubFogAQDlIVwiToTLLAsGTz/2waH7Bk9/v60XLhP/f7T0I9dq2XhIY2+eT+dMbhn5vjtXssyYgsf/yo2Dp2F5NPSLu9MPAAAZIlwiToTLLPtw9moXLq9+YZxfk/DyeV64nD/a2o5r68LlyzNedkPy6CIa3QmjVJhMFN2lRQNYbxo4kKF7AAAVQrhEnAiXWdZh+GwXLv85xB88ffc271xLnXO5Z5ed2+9cFy5LNpbYwmuvKxUo511yqa164gnX1Q0AQLYQLhEnwmWWXffyBBcuh830B0+fO9xrtex+qbtXuIKlAqYGUVag1B0+dPGNpgEAqAyES8SJcJlF33z7rTvXUuFy7+Dpwx/0wuXo9tZlWhcXLp+Y+IQ7f1LhUvekBgCgMhEuESfCZRbNWrHZBcsz2n/o1yS8eIYXLhePt0aDG7lwOX7FeHdvYoVL3acZAIDKRLhEnAiXWdRrwiIXLv/axx88fcdG73zLR6vY6m0rXLDUMETbpk5xwVIBEwCAyka4RJwIl1nUsu80Fy5fG+sPnj7rHa/V8vWG1nd2Xxcu7x59t61s09aFy7XPP+8tBwBAJSJcIk6Eyyw668lRLlzOWLbJqxhyjxcuP/mXNX+/uQuX78552+ac5t1lZ89K/6IfAAAqEeEScSJcZsmG7btdsNQFPbqwx+lS24XLnUsmWq2etdxdedYMH+KC5cJrrvWWAQCgkhEuESfCZZaM+GKVC5eNXvTHqNT5lmq1fPyn9sHi912r5Q3v3WDL7rzLh
csNb/T2lgMAoJIRLhEnwmWWtHvvSxcu2w390qv4/C0vXPa51v4x9h8uXPb49EX7ssYJNuvXNezrzZu95QAAqGSES8SJcJklf+g63oVLtWA679zhwuW3E16wM/uc6cJlSc8XXavlkltu9ZYBACAGhEvEiXCZBZGDpz97kguX0+e87YLlRW9dZIuuv8GFy81DhnrLAAAQA8Il4kS4zAJdHa5geWYwePrW1V6XeLuf2bNTnnXhstPwNjbr+F/Z7Fon27e7dnnLAQAQA8Il4kS4zAKNa6lweee/p3kV097wwuWbTe3KQVe6cPnZ0w+7Vsvl997nLQMAQEwIl4gT4TIL7ug91YXLnuMWeRUDb3XhcvWEzi5Y1nmjjs279FIXLreN9a8mBwAgJoRLxIlwmQWnt/vAhcuZy/0rwJ+q7sLlG5O9LvF2b9ziguXceqebBWNgAgAQE8Il4kS4rKAyg6evn+91iScC5l9G/MWFywkP3OrC5ap27fxHAQAQH8Il4kS4rKDhM1e6cHnNS+O9ismvuXC5c0Azq9mjpv0mUebUrefC5c4vvvCWAQAgRoRLxIlwWUEfz11jVz0/1p4eMcerePPPLlwOH+0NnN722YYuWJacf4E3HwCAmBEuESfCZba1+5kLl/d/2NKFyzG3NHLhcu0LL/gLAAAQL8Il4kS4zKbVs1yw/LZTLXeFeK1uJ9iXtWq5cLln5Up/IQAA4kW4RJwIl9k0sasLl1MG/Mm1Wv6jzTkuWC66rrG/AAAA8SNcIk6Ey2z6dyJEJsLl08NudeHyoz9e6MLlhj59/AUAAIgf4RJxIlxmi4Yh8s+3vLz/pXb68yfYrF/92r6scYJ9vdkf/xIAgBwgXCJOhMtsWTHdBctlL9R2rZYP3emda7m0eQt/AQAAcoNwiTgRLrNlbCcXLnv0b+RdJX5RXRcuN7/3nr8AAAC5QbhEnAiX2dLrahcumw5oYOf9q4YLlrNrnWzf7tnjLwAAQG4QLhEnwmU26HzLx39qWx/5kbsrz2M3euFyxf0P+AsAAJA7hEvEiXCZDUsnuVbLId3qui7xSbV/48LltnH+LSEBAMghwiXiRLjMho87uHB5T9+L7arHvFbLufVO91o0AQDIMcIl4kS4zIYeDeybRLis8/qp9uw1v3bhcnX7J/2ZAADkFuEScSJcVtQ3e8werWIT2v/UfvPqCTb1JK/lcuesL/0FAADILcIl4kS4rKhFn7gu8Xbd69qfH/CC5fzLLvNnAgCQe4RLxIlwWVHzPjTrdr5d1KuOdW/wKxcu13Xt6s8EACD3CJeIE+EyC+ZtnGenvnSCff7r423W8b+yPStX+nMAAMg9wiXiRLjMgm4zutmdLb0LeRY1buzXAgCQHwiXiBPhMguaDG1i/c/3usQ39u3r1wIAkB8Il4gT4bKCNuzaYGd1PsG+SATLL2ucYF9v3uzPAQAgPxAuESfCZQX1n9vf/nGL1yW+9Lbb/FoAAPIH4RJxIlxW0NbdW23qeWe4cLll+HC/FgCA/EG4RJzyJlyuWbPGli1b5k9lZvr06bZjxw5/KjPZDpc7E+tTsJx98il+DQAA+YVwiTjlRbi877777Dvf+Y4rjRo18mtTmzlzpp122mlu+e9///vWpk0bf0562Q6X63v2dOFyxUMP+TUAAOQXwiXilPNwedddd1mNGjVs8uTJVlJSYmeffbY1a9bMnxutTp06LoQuX77cJkyYYD/84Q+ta4YDl2c7XMquOXNsV8k8fwoAgPxCuEScch4ujzjiCOvcubM/ZdarVy/XIqmwGUXhUPOnTZvm15jdcccddsopmXVLV0a4BAAgnxEuEaechkudZ6mgqNbHMNX169fPnypN9d/97nf9Kc+oUaPcY1ZmcGccwiUAoNgQLhGnnIbLKVOmuFC4fv16v8ajumeeecafKq1jx45Ws2ZNf8oThMupU6f6Nfto3jHHHLO3HHnkkfZ//+//LVVX0fLTn/7Ujj766Mh5FO/1UYmaR/GKXp+qVatGzqPwGcqk6PX55S9/GTmPwmdIjTKPPPKIf2QEKldOw6W6tlOFy06dOvlTpSl0pgqXn332mV+zz8aNG23ixIl7y/vvv2/PP/98qbqKFgXWHj16RM6jTLT777/fzjvvvMh5FK/ovOEBAwZEzqNMtNtuu82uuOKKyHkUr/yf//N/bOTIkZHzKBPtT3/6k91www2R84qh9O3b1xYuXOgfGYHKldNwqeCnUBjVLa4DbZSBAwem7BZfu3atXxMvtRakOkcU5i62ymQUgGL2ox/9yObPn+9PIVm7du3SXuhX7BQuk7+oYx+NSnLvvff6UwAqU84v6NGV4skX9Bx66KG2ePFiv6a0pUuXuiCZfEHP+eef70/Fj3BZPsJleoTL8hEu0yNclo9wCcQn5+GyZcuWLmDq/MtgKKLmzZv7c80++eQTdwGOQmWgbt26LqysWLFi71BEr7zyij83foTL8hEu0yNclo9wmR7hsnyESyA+OQ+X8tprr7mhhE444YQyJxyrhfLcc8+1VatW+TXm/oAqrOiArHlvvPGGPyc3+vTpQ7gsh14fFaSm14dwmRqfofT0+hAuU+MzBMQnL8IlAAAACgPhEgAAAFlDuAQAAEDWEC4BAACQNYTLCtLVh/Xr17fbb7/dxo8f79ce/HQB1fDhw+2pp56yNm3a+LVl3XPPPXbJJZfYTTfdZJMmTfJr99HtOq+//nq79NJLI9ezadMmN2KARgm49dZbbebMmf6cfXTBli7gatiwYbnbErf+/fu7/b7qqquse/futm7dOn/OPum2PZP979mzpzVt2tSuvfbayHXoMXfeeadbh/6PWkcu6PXRNqnoc6LPUzKNdduqVatyt137rH2vyP6nW0eu6bOj7YratnTbno39z2QduaBtjSphce1/unUA2IdwWQEKDQqWgwcPdn9sfvCDH9iQIUP8uQc37U/16tXdGKIaVzSK9l8l2H8NCfXee+/5c83V67G625IGv9eQU/rDHFCw0CgBWocGwtc6tIyGmAqoTuvQcEYaA1XzVZdr2gYdiLp16+YC9OWXX261atWy7du3+0uk3/YD2f8jjjii1Dp0dbAek7yO5cuX+0vkjl6fu+++2733urOWRnZ44okn/Lnetp900knlbns29j/dOvJBkyZN3OulEhbH/i9btsw9RnXhdag+1/R6aHuSSyDY9nT7r33Wvkftv5ZNt//hdeh11OsZXgeA0giXBygITuHWugYNGtjNN9/sTxUG/bHVfibT/v/4xz/2pzzaf4WJgIJ3+/bt/SlvzFKta+XKlW5aoSNqHeE/2gpfyevQH/lwAMsF3UY0bM2aNW7f1MoYSLftmey/DnLhdXTp0sWtIzh46mCXvA59KcjHA5/GGdS2B6K2PTjIB5L3X19U9nf/tY7w7WST15FrGoqtTp06keEyk/3X/oaF93/Hjh32ve99r8z+q07zRC3HUetQfa4F4TKVVNu+P/uvZcvbf73Wes2T16H3BkA0wuUBUldm8l2B9Efq5JNP9qcKQ6pwqRAZ1coS7L9aVPQ4PT5MdYMG
DXI/B93BYVpH7dq13c+bN29OuY63337bn8of1apV2zuYfybbfiD7rxD7X//1X67LWVKt47TTTvOn8kfr1q3dF45Aum0v7zXMdP+DdSQLryOX1Dqmm0TojmTaj/C+ZLr/2t+w8P7rsan2P1ivnjNqHcmvay4E26ZtjWpJTbXt+7P/Wra8/ddrnWodeo8AlFX2NwYZ0Xl2N954oz/lUStClSpV/KnCkOqP8zXXXFOmlTa8/zpnSY9LPiCoRUCtb6KurKh1/OQnP3E/l7eOF154wZ/KD2rJ0LYGg+lnsu0Huv/HHnvs3lumRq1DB0YFlnygbVHRebn64jFy5Eh/TvS269zVYNtT7b/qMt3/YB3JwuvIpeuuu86djyoKM+FAl+n+a3/DwvufSbhKFdDC25Ir2oaf/exnduKJJ7pt1vTQoUP9uam3fX/2X8uWt/96rVOtQ+8RgLLK/sYgIxdffLHr5gvTH73//M//9KcKQ6o/ztr/5G6z8P7r4iY9Luh6CqhVLjjv7qKLLopch1rmZNy4cWnXkQ+C1yh8gMpk2w90/9WFWt46tB3BOnJN26Kiz4tCwquvvurPid52nbObbv9Vl+n+B+tIFl5HrqiV+/jjj/enyobLTPc//LmT8P5nEq5SBbTwtuTK2LFj/Z+8fVHLt043CaTa9v3Zfy1b3v7rtU61Dr1HAMoq+xuDjNxyyy2u9S5MrVfqGi0kqf44a/91BXRYeP/VzafHTZ8+3U0HDj/8cHv99dfdz7rSOmodwQFX95NPtQ618OWDrVu3ljlPUDLZ9gPdf7VslrcObUs4tOQL3R9cF33t3LnTTUdtu1qJ0u2/6jLd/2AdycLryJWf//zn7vcrKAozKkHoyXT/o4JRsP+pfn9VFzxPqoAWhKt8ou0K70+qbd+f/dey5e2/XutU69B7BKCssr8xyIj++FStWtW2bdvm15i1aNEiL/8gV0SqP87a/+OOO86f8ug0gWD/d+/e7U6CVzdnYOHChaX+qKdaR/i8PC0ftY7kC2pyIei2HDZsmF9TWrptz3T/dTV6IHg/wutIvrBAXaXhdeSLYP+nTp3qpjPZ9uT912kH+7v/Wj587//kdeSKfldSlUAm+6/9DQvvf/B5idr/4PdQzxe1jvB25IsOHTq4vyuBVNu+P/uvZcvbf73WqdYBIBq/HQcouIJQ40CKgoYOcvl2LmBFBX+ck+mK5yOPPHLvgS9q/3Xg+8Mf/uBPedM6XzAwbdo0t+5gHV988YVbh65ED2h8x/Affq0jHBxyRdusbdfBLpV0234g+68xQ8tbR/B+hdeRK+Ft0AVe2hd9IQuGa8pk27Ox/+nWkS8UZpIDXTb2X+eHJ69DdQGdB6vH6LESrCN8fmwuaDuCbRJ9OfnjH//ozt8NBNuebv+1z4Hk/dey6fZfr3l4HXo99d4AiEa4rAD9QdMfoZo1a9ohhxzirtwsFApC2rfkEv5jH+y/zoE67LDDIvdff4QVwjWeoYJl8gHr+eefd+vQQVVdpnreMHWvX3bZZS6U6PEKX/lwEr22N/y6BCW8/Zlse6b7f9RRR7lTDqLWEbxXWofGWk1eR65om/QFJHit1EobjBQQCO9/1LYHX1oqsv+ZrCMfaPtVwuLafz1Gj021jlwIQp5+f/Q3Rj+fddZZZQbjz3T/te8Huv/hdeh1jFoHgH0IlxWkFpkJEya4b9WFRH/YU5WwTPa/pKTEjQcaPoUgTMPraL3JV8WG6Q+5zj1LvrghV5Jfk3BJlm7bs7H/emy6deRC+HVZu3atX1tauv3XPmvfK7L/mawj14LXKVlc+59uHXHTublffvnl3telvO2KY/8zWQcAD+ESAAAAWUO4BAAAQNYQLgEAAJA1hEsAAABkDeESAAAAWUO4BAAAQNYQLgEAAJA1hEsgDwVj+82fP9+v8QT1lUnrTx7MO5d0v/UmTZq4barsfQcAVBzhEshDukPImWeeaVdffbVf41G40t1IKlMcz5EpDaStbdFtNrVd+xsuFUiT77YCAKhchEsgDykQKRgpWA0bNsyvLb5wqW2pUqWKtWjRYr+DpRAuASB+hEsgDwXh8s4777Tf/va3fm3Z4BcVnsJ1wfL9+/ffe3/m008/3c1r27atu+e57v3+xBNPuDoJHqN7x//mN79xPzds2LDMbfFefvlld199za9bt6517drVn7Nv+/X/Mccc435OpVWrVu5+zf/7v/9b6nn0WK07KKnWoe294oor3P3ttZzu+yzJj1cJ6DHaZtVpH9q3b+/P2bf/Tz31lLuX9CGHHFJm/1u3bm2/+93v9q63vP0DgGJDuATyUBDOli9fbt/97nftueeec/VB8AkEAS4sXBcsr2n9rHuUX3jhhW5aXc26J/yIESPcc0yePLnUY0499VQbOXKkm9Y6L7jgAjdfGjVqZFdeeaUNHTrUTStY6jHBdLD91157rS1YsCDlvecVLLWcnmP8+PH2hz/8odTzBM9dHs3v2bOnbd682U3rMQHNS359unXr5rZV4Vn0/9FHH713Otj/YLuCbdA+BzR/9OjRtmXLFjfdvXt39z8AgHAJ5KUgnAU/V61a1Xbt2rU3+ASiwlO4Llh+woQJblruuOMOO/zww23Hjh1+jbnWu6DlMXjMW2+95aZFoVF1gwcPtvXr17ufu3Tp4s/1hJ9X/2uZdevWuekoGzZscMv07t3brzHXOhg8jwTBrjyar3C5atUqv2afqNenXr16ZequueYa10oswf6/+eabblq0jarTNivw62ctF4RLAMA+hEsgDyn8hEOVuq4ffvjhvcEnEBWewnXJy0vyuiXqMQpSYdWrV3etnRMnTnTzo0qwjqjnSKaWUj1GYTVMQVfPI9qWdOvRc9WpU8etS8E5aD2V8H4FwtsbLsHzRO1/EKiD1t3bb7/dvR56X/72t7/ZmDFjXD0AgHAJ5KXkcKZu8e9973sZhcuTTz55b11FwuWUKVPctAThqkePHm54JP2sbuFUop4jWbCecKuq/Pd//7d7HskkXAaGDBlif/rTn9w6x40b5+qiXh+dZ6rzTVOJ2n9to+qSh4bq06eP3Xzzze41V8syAIBwCeSlqHCmC1VuvfVWF3ICOtdPF+oEPv30U7dcclAMi1p3OIQFj3n66afdtGisSYXbqVOnumk9x1133eV+DgvCV9RzRNHFRUErpSgUhp8nXbicN2+e/9M+Wj7oatdV5s2aNXM/B5o2bRq5zuA0gaj91zYGFwolB0zR8nPmzPGnAKC4ES6BPBQVzgYMGOBCTDgszpo1y44//nh3kY6ClLqU9bhshEtdwKJ6FU0H80Uh8Pvf/767Ylrdwpqn5fVYiXqOKIMGDXLnf2rZiy66qMzzpAuXmt+gQQP3mGAbzjjjDH+ud7GO1qlAGV6vroKvVauW/eUvf3H1eu2C+cH+n3XWWZH7r/l6Hk3//e9/t4svvtjOP/98Nw8AQLgE8pKCSxBmwqLqdVGJWup0gY2uBg8voyCUvHzUOlI9plevXu7n8HmMAbX0qVtYwxglLxNeXzrTp093267W0eHDh/u1nqjtT6arzIP
n05igwVXjAdU9+eSTZdajsK4hiFSvYZUCQbhUS6Tq9djwWKOifdXj1KKp5bmwBwD2IVwCQEgQLgEAB4a/oAAQQrgEgIrhLygAhChcqgAADgzhEgAAAFlDuAQAAEDWEC4BAACQNYRLAAAAZA3hEgAAAFlDuAQAAEDWEC4BAACQNYRLAAAAZA3hEgAAAFlDuAQAAEDWEC4BAACQNYRLAAAAZA3hEgAAAFlDuAQAAEDWEC4BAACQNYRLAAAAZA3hEgAAAFlDuAQAAEDWEC4BAACQNYRLAAAAZA3hEgAAAFli9v8BHM980numNw4AAAAASUVORK5CYII=">
<img src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAArMAAAGJCAYAAACZ7rtNAAAAAXNSR0IArs4c6QAAAARnQU1BAACxjwv8YQUAAAAJcEhZcwAADsMAAA7DAcdvqGQAAFhNSURBVHhe7d0JuNXUvf5x732ePm2tA7Xyby/iLS2I9aJUxRG1Uue5Tljn4lCqiHVGbK2ibVUUoYJoqVBAEAoKiMiMAjIjg+gBmZEZmQdBHH//864kh7BP9jkBzj575+T78VkPOyvZ2Ul2jnn3ykqynwEAAAAJRZgFAABAYhFmAQAAkFiEWQAAACQWYRYAAACJRZgFAABAYhFmAQAAkFiEWQAAACQWYRYAAACJRZgFAABAYhFmAQAAkFiEWaTS22+/bcOGDfOHKkarVq1cKWR9+/ZNxHLm07Zt26xr1645306F8D0UwjJUhNGjR7v10L8A0ocwi7zbb7/9spZcHZzOPvtsu/TSS/2h+Mo6aGp5CzkYaPkOO+wwa9SokSvZaFz4O6hVq5ZddtllNn78eH+K0u6991479thjrVq1aq7oteoyabtpntm+V42L2oa9e/e2M844w2rXrm3f/e537Sc/+YmdfPLJdvfdd9uSJUv8qTzhZc8s5e1PCxYscNPVqVPHbYdcfp+5nn8gyftsXOXtVwCqNsIs8i44oEaVXB2c9iXMZjtoBstciAYOHOiWe9KkSX5NdkHYDdancePGVrNmTff+yZMn+1N5Vq5caQ0aNCj5Dvv16+eKXqtO4zRNoLzQEcwnLJhXvXr1rEOHDjZ06FB74403rF27dtawYUO3rGHBPKJKeftTixYtXHivDJUZZrNt86jtnURatzjfL4CqiTCLvMvFAXXjxo02bdo0mzdvnn3++ed+7S65CLPZbN682T788ENX9Lo8mveKFSv8oXjK+wxtXy13HEGYDevUqZN7/x133OHXeIL59urVy6/ZRXWZ32152y9z+oULF7q6zOUJfPnll3bffff5Q57MeeyJqHUPaD/S/qT9SvtXeWbOnGlFRUX+UGn6nMxts3btWn+oNI3bk30j2MZlbfPwttq0aZNNnTrVPvvsMzccZc2aNW6abH9X+6q8v9tMUesEIH0Is8i7OOHjlFNOsfPPP98f2kVhQe9//fXX/ZpdASsoOjWt1rywzDAbvCdTuD4IBZklOKDqdeZ6qPUwc3rVhQUBSuFPrYLBdOVtk0B5n6F5Z44va97B8mTS+8L16ltao0aNMn8UaJym0bRSVrCSzGW74oor7LjjjrPt27f7NeUrb/2y0fsyS7Cc2n+0H4XHZX6GhlWvfVHdFKKmCdO21PjHH3/cDjnkkJL5vvTSS/4UHk2jVulgvIq6V2RSvaZVCeYXtW+oBPRa06v1PRinZR8yZIg/xS7XXnttyTQqWqZwWI/az4KS7fsO03KE3xP1dxvsm+F11LzL268AVG2EWeSdDkI6OJXl6aefdtNt2bLFr/Hcdttt9tOf/tQfMhs5cmTJ/DZs2OAOtjpQK1CFT3fvbZgNHzTDRYLPDQwaNMjV6fPVF1MlCA0aFwgO0AoLOnirD2iTJk3cdCNGjPCnihbnM7R8wXpkLnOUYHnCdOGY3t+2bVu/xmzChAmu7vnnn/drStM4TaNpRZ8bLEcUjQtvw1/84hduffZE5jzi0jIF6x5sIxXtN9p/tBzan7RfBdtT+1sgqKtfv35JIM22nqLP0T5Y1ncnmm/nzp1dK7U+u2fPnm6azO9IdZrf1VdfbRMnTnTLqjCrZdC48DoFVK/P1GfMnj3bhdigv3BYsG76V9tD+6n213PPPdefYtffR7hcddVVbn4fffSRP1W0uH+3wfdzwQUXuK4z+lsJPitYRwDpQ5hF3ukglK0Eli9f7oZ1sAv7zne+Yw8//LB7vXPnTqtbt64LE2E63Zv53r0Js1LWQTPzMzR/1c2ZM8evMfdadeHP1sFZdeoHGtApZdXdc889fk20uJ+Rbf2iBIFB71E577zz7Mc//rGdc8457vRv4OWXX3bzVB/ZbDRO02haKS90aFx4Gx544IFZLyQLlzDNI1spT7DuYcG2034UplZ07W/a7ySYTmEzjuB7L++7ixJ8lk77BzSsv4fMrgraPhqXuZ1E9SeeeKI/5FG3DdUHITIImnfeeacbDgQ/pLKt75/+9Cf73ve+Z+PGjfNrou3J322wzRYvXuzXeMpaRwBVX7yjG5BDwQFLB6LMEqZQdcQRR/hD5lqr9N4gZAT9K++66y43HKYr7MMhpTLC7M9//nN3AVQm1WlcQMsVHg6oPrzMUeJ+Rrb1ixJ8blD0vurVq5da52D7q4Usm+DCM00rZW0/0bjwNjzooINK9YkVTRcuYcE89BmZpTzBOodpWPtPposuush9lvY7Cbbx3Llz3XB5NN9s351aM8PUMt68eXPXIhksoz5L/VcDGj7rrLP8oV203hoXtf6qb9asmT/kyZw++J61fpnl9NNPd2cRMnXr1s29J9zCrFCcWaS8v9ubb77ZH/K22eGHH+4P7VLWOgKo+gizyLvgQFme7t27u2mDU+86xRkOA2Ud0IKDf6AywmzmcCBznkE4yZStPizuZ2QOlyXzc3UqV6FF7w+3BCpIqe6FF17wa0rTOE0ThK5g+0X1yVS/Wo0Lr4/6Zep0czZR65U5jz0Rtc01v6jvIfjsYF/Yk20smmfUcmbO59Zbb3XDaqHW/W/1eUE3lPB+qOGo+QXbPDxtIOo9mdNrvH4YBdsmqoSNGjXKvb9Hjx5+jadPnz6uPig/+9nPXH1Zy5c5/6jPk7LmAaDqi/9/XiBHdBCKOghH0WnU3//+9zZ//nz3vn/961/+GLOlS5e6ugceeMCv2UWnq8OhKDPMPvvss+69madoFZhVHyjroKn68HooiEWdLlYA17hAtgN0tvowzSdb6174M7Rc4fUoS9Tn6h6zer/6QAZ01btu2VXWKXGN0zTBFfKLFi1y83nuuefccFjQB1c/WgJ6v05Vhy80CotaLw3H3Z8yRa279hvtP5m0bPos7XeyJ9tY9DnlfXfLli1z88zsbqILwFQf3g+zrfee7LOSOf1rr73mhsPdYLLR96RWfPVxz/TVV1+VKlLe3224e0PU9yNlrSOAqo8wi7yLOqBmo1OOOvX497//3b1v69at/hiPgoD634XpYhhNq9tLBTLDbNC3MzgdLjNmzLD999/f1QeCMBZ1KyrVh9ejadOm7gKWsClTprjpNC6Q7QCdrT5M89H8NN9A1GfsSdDK9rna9ppH+MlpuiBMdXqoQSbVaVz4ojH1j1TYiZr/Qw895KZXy15A97VVXTiYh0Wtl4bj7k+ZotY9uC2Z9qMwfbfhMLon21j0OZo+qs9s8N0F+1vLli3dcOCkk05y9eHwpuGo9Q6CXpx9VjKDoW6VpWF1q4gS9KPWhVunnXZaqdu3xRH37zbq+5HMZQaQLoRZ5F1wQM1WwqZPn+6m1xOm1Ic206xZs9x4taapn6Fush/VepkZZoMr1oNl6dixo5166qnuterC1J9RfRfV11DjgwNo8N5
AEAJ0gNcFUCpaFtVpXCDbATpbfVjwGZpvWZ8RtR7ZZPvcDz74wM3jN7/5jV/jCVoJ9RkKTCrB50XdQioIwApsCioDBgwomT7q9mvBOC2Trs4fPHiwe19woZJKmIb1nmylLNnWPWgt1f6k/Ur7lz5H+1tA885clrLoc/QePfhBF1Gp6LXmEf7u1BquaV955RW33hdeeKFdd911brpweAvWO5O6iWhceJ8NRL0nKhiqJV11wXem70BnM7RcwXRaTn2G5pdZwvOKEvfvNtv3E7XMANKDMIu8Cw5QUUUHwkyqP/PMM+3NN9/0a3anU50KBQcccIC7WEQH2XBfT8kMs6IDYfA+fYYOqvp8vQ7TRS1qRVRLlcYFB9Co5dWV3Aqzhx56qCt6nXl1t96X+RmSrT5TnM+IWo9syvpctc5qXLg1UV599VUXRNSlQEWvVZeNlk/9cPUDQq3fupdsly5d/LGl6UEBOg2tz9apZ91dQVfhq4U+8yb/wfJHlaj9KSyYLpP2H+1H2p+0f2g/yez6sCfbWILl0X6mgKowph9Qmd+dTu+rj+yPfvQj9whfvUf7nN4fDm/B/KLoB0Z4nw1EvSdq3hLez3SPVy2LuhOsXr3ajdd7spXMeUWJ83cbzC9TtmUGkA6EWQAAACQWYRYAAACJRZgFAABAYiU2zLZo0cIuv/xyd2W7+prFoT5Z6qenPne6cjZb/zIAAAAkQ2LDrDr7BxdCxA2zumBEF6boynXd8kehNnzbFwAAACRL4rsZxA2zwa1fws9X122DMm/9AgAAgORITZjV7W++//3v+0Oe4L1qqQUAAEDypCbM6kbt9evX94c8wXvDNygP6NGauudjUI4++mgXhsN1FAqFQqFU5XLQQQfZiy++6B8ZgcJEmC1+rx5bmunTTz91T7kJygsvvOD+qMN1FAqFQqFU5aKGnG7duvlHRqAwpSbMltXNIHiCTVk+/vhj99QhAADS4le/+lXWpy0ChSI1YVZhVNOFH0GpuyHEvQCMMAsASBvCLJIgsWFWITYoCqnB68CUKVOsXr16u13cdcopp7hbcy1ZssT1idUzwOPemoswCwBIG8IskiCxYVatqrrXbGYJTJ061Y455hhbtWqVX+M9NKFhw4YuxNauXXuPHppAmAUApA1hFkmQ+G4GlYUwCwBIm3yH2e3bt1MqoXzxxRf+Fk8mwmxMhFkAQNrkI8x+8803rovgvHnzbM6cOZRKKuqCuXHjRv9bSBbCbEyEWQBA2uQjzCpQ6Zi7YcMG+/LLL+2rr76i5Lhs27bNdctcuHCh/y0kC2E2JsIsACBt8hFmly5dypM582DHjh2uhVb/Jg1hNibCLAAgbfIRZufPn+9aZVH55s6da5s3b/aHkoMwGxNhFgCQNoTZdCHMVnGEWQBA2hBm04UwW8URZgEAaUOYTbZevXrZQw89ZBdddNFu9+LPhjBbxRFmAQBpQ5hNNgVYPSAqeFpqeRRmt2zZ4g8lB2E2JsIsACBtCLPlU1BUYNRTRrt06bLb00UHDRpkbdq0sd69e5dap8ynkAbzCZs1a5Z17NjRFc1f4zOnGT58uLVv395NM23aNL92d4RZOIRZAEDaEGbLp6CoFtB69eqVtITKtdde6wJkkyZNrFatWlajRg0bPHiwGyeZ4TIzcGraQw45xL1X89Aj+sPzF73WNDfccIPdcsstVqdOHRdsMxFm4RBmAQBpUyhh9oqOEyq1XP3yRP+Ty6egqCD74osv+jXmWmgVHidPnuzXmF111VUulAbKC7OaVu8JTJo0yY4++uiSMDtjxgzbf//93fsC+lyF5sx7xe5JmKXPbBVGmAUApE2hhNkT/jbSfvrw25VW9jTMHnjggf6Q56abbrKTTz7ZH/K0bt3aateu7Q+VH2br1q3r3hNWvXr1kjDbuXNn11L7+OOP71YOPfRQ1yUhjDALhzALAEibQgmzH67YbNOXbqy0Mm/NVv+Ty6egqFAZdumll1qzZs38Ic/AgQN3C73lhVlN269fP3/Ic/zxx5eEWfXFPfvss+2JJ55wRfVB0bzCCLNwCLMAgLShz2z5osLsww8/7FpRP//8c7/GrGXLlnb66af7Q+b6woZD59NPP71b4NS0zZs394fMVq1a5cYHYVYXl2l4zJgxbrgshFk4hFkAQNoQZssXFWYVCnW6X/1elyxZYl27dnXh9dlnn/Wn8LoiKJiuXLnSzeOSSy7ZLXB26tTJvUfvnTdvnvsMlSDMisZrONw3t2/fvv4rb9mConmHh6MQZqs4wiwAIG0Is+VTMMwMs6L6hg0b2gEHHOAuEGvXrp0/xqPbaDVt2tSFzFNPPdVNn9l6qqBas2ZN9/6zzjrL3dFAITesRYsWri+u3qvSuHFjf4z3/qA+XAizKUWYBQCkDWE2fxYtWuS/8ixYsMAF0ZEjR/o1FY8wW8URZgEAaUOYzR+1nqqVtU+fPq4/rfrgnnjiif7Y3CDMVnGEWQBA2hBm80dhVg9AUL/bK6+8cre+srlCmK3iCLMAgLQhzKYLYbaKI8wCANKGMJsuhNkqjjALAEgbwmy6EGarOMIsACBtCLPpQpit4gizAIC0yUeY1S2oCLP5QZit4gizAIC0IcymC2G2iiPMAgDShjCbfMOGDXOP0R0+fLhfkx1htoojzAIA0oYwm2x6YtgPfvAD97AFvf75z39uI0aM8MeWRpit4gizAIC04QKwZOvZs2fJtly5cqXVq1fPGjZs6IajEGarOMIsACBtCLPl05O6GjVq5J7QVbt2bdcCKmoBPf300+3AAw+0Y489ttQTvPSesGA+YS1atLCaNWva4Ycf7t4ffE5gxYoV7ulgyiea7uabb/bHRNNnVKtWzR8qjTBbxRFmAQBpUzBhdvWHZiunV15ZN8//4PIFIVTB9d1333V1CoWHHnqoexTtkiVLrGvXrlarVi3XdzUQhN6A5hOu07R6j967ePFiu/7660uF2XPPPdcef/xxKyoqstmzZ9sTTzzhwm02eu+ll17qD5VGmK3iCLMAgLQpmDD73BFmjx9UeaXL+f4Hly8Iofo38OCDD9ohhxxiO3bs8GvMmjdvvtsp/vLCrMKx3hNYtWqVGx+EWX0vGlZ9YOjQoa5OXQoy6X01atSwyZMn+zWlEWarOMIsACBtCibMdmpUuWUPw2xm94ALL7zQbr/9dn/I069fPzvggAP8ofLDrLon6D1hxx9/fEmYfeGFF+y4446zE044wU4++WQXlM844wyrW7fubsFadCcDzXvatGl+TTTCbBVHmAUApA19ZssXFWbVveC8887zhzydOnVy3QYCmWF24MCBu9UplLZu3dof8lSvXr0kzGbOL5u+ffu6+erf8hBmqzjCLAAgbQiz5YsKswqcCpCLFi3ya8xuu+02u+iii/wh7wIwBdKAxofDrALxVVdd5Q+ZTZw40Y466qiSMDty5Eg3vfrUZhMsR5wgK4TZKo4wCwBIG8Js+aLC7Pbt210XgOACLgXTzOCpfrV16tSxu+++212U9eSTT+4WZgcPHuz63Woeer+CrLoRBGFWzj//fHe7Ld
UNGDDA/asuBxJ0W9CyZZZsCLNVHGEWAJA2hNnyKTSGA2ZA66DAqv6zanXVdJmCIKwW2qj5zJo1yzp27OjKRx995C7geu211/yxHt1L9qabbnKtvnq/tp8E84sq2RBmqzjCLAAgbQiz+aMwGt4OGlZLa3kXce0LhdktW7b4Q8lBmI2JMAsASBvCbP4E4VUXgulhDEGXhVwizFZxhFkAQNoQZvNL96lVqFV3g/A9a3OFMFvFEWYBAGlDmE0XwmwVR5gFAKQNYTZdCLNVHGEWAJA2hNl0IcxWcYRZAEDaEGbThTBbxRFmAQBpQ5hNF8JsFUeYBQCkDWE2XQizVRxhFgCQNoTZZNPTxcp64lcmwmwVR5gFAKQNYTbZ9ibM8jjbSqZnHuuJGHpWcePGjf3a7PSFnn/++XbQQQdZvXr13LOS4yLMAgDShjBbPj3UQEWGDBmyW3gcNGiQtWnTxnr37l1qnYL3BMLzCehhCR07drS+ffu64ahphg8fbu3bt3fTafowwmyBa9asmQukekbxggUL3BfWtGlTf2y06tWruy915cqVbgfTY+LifsmEWQBA2hBmy6dwqQxy+umnuwY2vZZrr73W5YwmTZqUNLwNHjzYjRONC9N8wnWa9pBDDnHvVYPdhRdeWCqc6rXec/LJJ7txmn7gwIH+WMJswVMw7dChgz9k1rNnT/eFKtxGydxJJNgJ4iDMAgDSplDC7CPjHqn0Epfyhc74tm3b1q8x69Kli8sXkydP9mvMrrjiChdsA5n5IzOnaNqrrrrKHzKbOHGi/fCHPywJpzNmzLD999/funbt6oZFLbSnnXaaP0SYLWhr164ttZOI6nr16uUPlaZfLsHONm/ePNeye/fdd7vh8hBmAQBpUyhhtlGfRnZ0t6Mrrdw89Gb/k8unEPqd73zHlixZ4teY3XrrrXb00Uf7Qx6FyiOPPNIfKj/M1q1b11q3bu0PeX70ox+VhNNu3bq5XKL3hYvmoVAqhNkCNn36dPdlZe7sqlPflGwWL15sN9xwg9vBfvCDH5TaScLGjx9vJ554Ykk55phjrFq1av5YAACqvkIJsyM+GWFvLXyr0sqElRP8Ty6fAqRCY1hUiAyCZiD8WjLHH3jggdavXz9/yHP88ceXzFf/nnXWWa7o88JF85Ko5SgLYbYSzZw5M2uYVRN7lE2bNrn+LOprO2DAAHv++efL/JLXrVvnOlUHpXPnzq5rAwAAaUGf2fJFhdl77rnHnf0Ne/rpp61Bgwb+kLkGsqKiIn/IXGNcOMwqszRv3twfMlu1apUbH+QWXRT2/e9/3zXwZUOYLWAKpvpCo7oZ9O/f3x/anQKswqi6KARatmy5245TFroZAADSJh9hVt0Akx5mR4wY4fJF0J9VXRAuuOACe+ihh9ywXH755RZ0fdSZY80jnEmeffZZd/GX5rFo0SK75ppr7IwzzigJp7ofrM4yazjo4qDtpmuIAoTZAqdfPGotDejuBAcffLAtXbrUr9mdOmPrSsIwfcHacYK+JWUhzAIA0oYwW76oMCs6U6yMoUCqf3VBV7hfrTKIcskRRxzhxqulNRxmpUWLFlazZk07/PDD7dFHH7UTTjhht+6UupuT7nKg96nVV/8ee+yx/lgvzKouswTdEDIRZivZfffd5wKtru5TM33Dhg3tzjvv9Meaa7VVR2vdhisY1hf40ksvudbZMWPG2CWXXBK5A0YhzAIA0oYwu2+0Hsof4RCbScEyfNY4G7XeKseo8S6TQm22uzntCcJsHqibgH6xKGRmPjTh/ffft+OOO871MQnoyr+LL77Y3UKjdu3a7r60cVplhTALAEgbwmz+KOSqi4Ea33r06GG33367u4BdXS1zhTBbxRFmAQBpwwVg+aMwe+WVV7ozyPpX3RIyrxWqaITZKo4wCwBIG8JsuhBmqzjCLAAgbQiz6UKYreIIswCAtMlXmF2/fr0/hMryzTffuKxDmK3CCLMAgLTJR5hdvXp1mVf/Izd039o5c+bYV1995dckB2E2JsIsACBt8hFmP/vsMxeqFi5caJ9++imlEsqyZcvcNte/SUSYjYkwCwBIm3yEWdmxY4cLWWqhVdE9Viu6BPPe0xI1r8oquVqW5cuXu5ZZdTVIIsJsTIRZAEDa5CvMAnuCMBsTYRYAkDaEWSQBYTYmwiwAIG0Is0gCwmxMhFkAQNoQZpEEhNmYCLMAgLQhzCIJCLMxEWYBAGlDmEUSEGZjIswCANKGMIskIMzGRJgFAKQNYRZJQJiNiTALAEgbwiySgDAbE2EWAJA2hFkkAWE2JsIsACBtCLNIAsJsTIRZAEDaEGaRBITZmAizAIC0IcwiCQizMRFmAQBpQ5hFEhBmYyLMAgDShjCLJCDMxkSYBQCkDWEWSUCYjYkwCwBIG8IskoAwGxNhFgCQNoRZJAFhNibCLAAgbQizSALCbEyEWQBA2hBmkQSE2ZgIswCAtCHMIgkIszERZgEAaUOYRRIQZmMizAIA0oYwiyQgzMZEmAUApA1hFklAmI2JMAsASBvCLJKAMBsTYRYAkDaEWSQBYTYmwiwAIG0Is0gCwmxMhFkAQNoQZpEEhNmYCLMAgLQhzCIJCLMxEWYBAGlDmEUSEGZjIswCANKGMIskIMzGRJgFAKQNYRZJQJiNiTALAEgbwiySgDAbE2EWAJA2hFkkAWE2JsIsACBtCLNIAsJsTIRZAEDaEGaRBITZmAizAIC0IcwiCQizMRFmAQBpQ5hFEhBmYyLMAgDShjCLJCDMxkSYBQCkDWEWSUCYjYkwCwBIG8IskiDRYfbBBx+0WrVqWY0aNaxx48Z+bdlatmxpRx55pO23336utGrVyh9TNsIsACBtCLNIgsSG2WbNmlm9evVs2rRptmDBAmvUqJE1bdrUHxtN4w877DDr16+fGx49ejRhFgCALAizSILEhtnq1atbhw4d/CGznj17upZWhdsoqtf4QYMG+TV7hjALAEgbwiySIJFhdu3atS6YTp482a/xqK5Xr17+0O769u3rxs+aNcu16qpbguriIswCANKGMIskSGSYnT59ugumGzZs8Gs8qmvTpo0/tLt27dq58epf26RJE7vhhhvskEMOydrNYNy4cXbccceVlKOOOsqqVavmjwUAoOojzCIJEhlmZ86cmTXMtm/f3h/aneo1/uWXX/ZrzO69915Xt3HjRr9ml/Xr19s777xTUrp27eq6NgAAkBaEWSRBIsPspk2bXAiN6mbQv39/f2h3qtf4lStX+jVm8+bNc3XZ+tmG0c0AAJA2hFkkQWIvANOdDDp37uwPmbuw6+CDD7alS5f6NbtTvcaHLwALuh4sW7bMr8mOMAsASBvCLJIgsWH2vvvuc4F24sSJVlRUZA0bNrQ777zTH2s2adIkq127tq1YscKvMTdet/DSLbmGDBliderUsUsvvdQfWzbCLAAgbQizSILEhlnRAxBq1qzpQmbmQxPUdeDEE0+01atX+zUeTafp9b6495gVwiwAIG0Is0iCRIfZykSYBQCkDWEWSUCYjYkwCwBIG8IskoAwGxNhFgCQNoRZJ
AFhNibCLAAgbQizSALCbEyEWQBA2hBmkQR5DbN68pbuOqBbZRU6wiwAIG0Is0iCvIXZpk2bugcWKCAGYVb3ju3UqZN7XWgIswCAtCHMIgnyEmZfeeUVu+6666xPnz722GOP2ZgxY1z9hAkT3MMPChFhFgCQNoRZJEFewqweXNC1a1f3+s9//nNJmN22bZsdcMAB7nWhIcwCANKGMIskyEuYvfLKK13rrOgpXkGYHTp0qHsEbSEizAIA0oYwiyTIS5jVY2Qvuugimz59urVo0cKF2QEDBti1117rhgsRYRYAkDaEWSRB3i4Au/XWW90FYGeffbYde+yx7nWtWrX8sYWHMAsASBvCLJIgb2FWRowY4e5e0KZNGxs0aJBfW5gIswCAtCHMIgnyEmYbNWrkuhokCWEWAJA2hFkkQV7CbLNmzQizAAAUOMIskiAvYXb27NlWr14969y5sy1evNivLWyEWQBA2hBmkQR5CbNqldUFX1FFXRAKEWEWAJA2hFkkQV7CrB5fW1YpRIRZAEDaEGaRBHkJs0lEmAUApA1hFkmQtzBbVFTkuhvoaWB6vK2eBFaorbJCmAUApA1hFkmQlzA7fvx41z+2bt26dsMNN1jTpk2tQYMGri54zG2hIcwCANKGMIskyEuYveeeeyIv9FJLbf369f2hwkKYBQCkDWEWSZCXMKsgm61LgVpnCxFhFgCQNoRZJAEtszERZgEAaUOYRRLQZzYmwiwAIG0Is0iCvJ3TV6C97LLLXAttUHr37u2PLTyEWQBA2hBmkQSF2UG1ABFmAQBpQ5hFEuQlzA4YMMD1j82kuqj6QkCYBQCkDWEWSZCXMPvAAw/Yc8895w/t0rdvX9d3thARZgEAaUOYRRLkJcxyay4AAAofYRZJkJfkeMcdd1izZs38oV3atm3LrbkAACgQhFkkQV7C7OTJk6169erWvHlz10I7c+ZM+9Of/mQ1a9bk1lwAABQIwiySIG/n9NUKq0CrbgVBady4sT+28BBmAQBpQ5hFEuS1g+qOHTusqKjIPvzwQ9u8ebNfW5gIswCAtCHMIgkK4mqr5cuX29y5c/2hwkSYBQCkDWEWSVCpYVZ3MejYsaM/5Dn11FNLuhnUqVPHBg8e7I8pLIRZAEDaEGaRBJUaZg888EBbu3atP2Q2aNAg+9nPfmadO3d23Q3OOecca9KkiT+2sBBmAQBpQ5hFElRamJ03b55rfQ279NJL7fbbb/eHzLp06WJHHnmkP1RYCLMAgLQhzCIJKi3Mzpo1y4XZNWvWuOEFCxa44V69erlh0W26MgNvoSDMAkA67fzqG9vw2Re2dMN2m71yi01ZvMHenfupDZq10npNWWqvvLfI/jFqvv3t7TnWst+H1rzXDLul61T7badJduVLE+yyF8fbRS+8Z+e3G2tntRljZz472ho+846d/PdR1uCvI+2XTwy3eo8NsyMfHWo/ffjt3Yrq/q94XP1Ww+24J0fYiX8baac89Y6d0fpda/TcaDvn+TF2/j/es4vbj3Ofo8/rN32Fv+T7jjCLJKi05Kg7F9SoUcO6d+/uhtu3b2/HHXecex1QmOVxtgBybeP2L+zjVVts7Ly19p+py6z9Owvskf4f2m3d3rfLOoy3E4oDg4KEgsI9/5lpnccttvc/2ei/O702bf/Slm/cYR+v3mrTirfHe/PX2YjZa1yoe33acus5+RO3rV58d4E9P2KeC3ePvvmRPfT6LLu7OOD9/tVpdlOXKS7kXf/KZPf61m5TXf2dPae7ae7rM9NN/0hxKNR7n3hrtpvPM0M/tjbD59qzw+bak4Nm25/6f2T39/3A7nptut3e/X27sfNka/zPiXZJcajT96awp+B3THEIzAyIVb28NGah/43tO8IskqBSm0FbtWrlWl51IZj+7d27tz/Go/FRTwYrBIRZFILtX3xta7futMXrPrMPV2y2yYvW26R9KApz73z8qQ0vWm2DP1xlb32w0rXq9H1/mb02Zam9OvET6zJ+sf3rvUXuAKnQ127kfNcSpfCiIDN1yQabt2arfVq8XPmybedXtqI4ZBWt3LLbNpm4cL09URx8FHiufnminV4ccKIO/ntS1ML25wEfue3zUfF3kBTrtu20BZ9ucyF01JxP7Y3py61LcfBsN3KeC4cPvzGreDvNsN/9e4oLhRf84z0XCNUaGLUd0lbUeqrW1PPajnX7koK4AviDxcG71VtF1rY4vL9c/DfSY9In1n/GCve3of0v/PdWWWXV5h3+t77vCLNIgko/pz98+HAXWlesKH0aRPV9+/b1hwoLYRYBBUqdctQBQ6FSLXwzl23ap2CpgPiXNz+y+/t8YE1fnWY3vDLZnTI8+/kxdspTo+zox4dFHmALsei0qE6f6qB/TadJrtVNB3y1rik47U1RK9/jA4tcK6nC1hUdJ7hTrHsbtP7vL8Pc+6/71yQ3T7X6KbQr0Ie/l/EL1lmnsYusWXFo+dWz2YOwTu0qNOuHgIL9vvj8y69dC+jqLZ/bkvXF+9fqrfbB8k3u1HZ42cJFoempwXOsRXEg/UOPaa7lU6eedTr6qL+UPnW9t0X7ofZHtXz+pnj/1PbT6XSFOrWoqjVVLanPDPk48nusiPLCqPnuO+le/EOrT/GProHFP8D0Y2xM8Q8zbaMPiv8Wtc207bQN1Qq/o3ibYu8QZpEEhdlBtQARZpPns51f2Zrig9mitZ+5MKCDvlpL1Gqi1hO1NCokqVVFYeuO4hBwY+cprtVF/c8UJE975h3Xp61QwmTdPw9xAU4tZmo5UwuaAmM+i4LcvgTLiihBgFafRC2TArROVStA63tWyFTgW7h2m/sxsrfUAvze/HXW4Z0FrkvCScWfGbU8hViOfWKE66t5ecfxdnOXKe6Uvn5AqUVR4fC1yUvtzZkrXautQqFanfVjTS26hMH0IswiCQizMRFm80/h9JP122360o0ulOrCC/XNUxjVBRdqJVJr4PF/zW2oUqDUxRgKMmqt02de2mGcC5ZaBp1+VGtV0A9Qy6ZWKwXmoB+gltn1Axzi9QPU6Xu1DOr0vloHdfpfp4PnrtnqTp+rpS4pwqf81bKp9dEp+ZdGL4xsaYtX5ru+mIXUtSGwftsXbpn0PerHUEX00dSPJ/XbVbcItYKqH6j2L7XYK0Sr24T6i6rfqLoItB421/0tqM/qoFmrXOBWN5RlG7bblh3J2XdQeAizSALCbEyE2Yqlfp/qv6eLakbOWRPqvzd/txCjVtJTn37HBciog35Z5RePDnWtqmqNUj9HnXrVxSZqkVKo/PvgOe4KZIWk3lOXuYtYRs/1T1Uu3+SCpMKzWnc3FwcCXdEMAGlCmEUSEGZjIszuGbVWDflotQuMaq30+u+N2uf+e+oDqFYq9ZtUy5RapHRxkk4jj1uwzrUGqp8cAGDfEWaRBITZmAizZdMpzX+PX+KuhtYVv5khVH0adfo/aCVVv0adig9aSXXKPbOVVPdx1Kl2XcihU9cAgMpFmEUSEGZjIszuoquD1TVAV4ArlOp0fji4/qzlYLuwOLCqX6j6S2p6AKiqtn+13TZ+
vtFWfbbKPtnyic3dMNdmrZ1lU1dPtYkrJ9q4FePs3WXv2shPRtrQJUPt7UVv25sL37Q35r9hfeb2sdfmvGbdZ3e3Lh91sX99+C97+YOXXdFr1Wlczzk9rffc3vb6vNdtwIIB9tbCt2zw4sE2fMlwG7V0lI1eNtp9jj5v5baV/pLtO8IskiDRYfbBBx+0WrVquYcxNG7c2K8t35IlS9x9bvfkaWNpDrNzVm1xF5boIia1rIaDa1B0hbTCra6E1oVaQJIs2bzEJq2a5IJCh5kdrOV7Le2hsQ/ZsCXDXDjB3nEhb+dGW/3Z6t1C3vtr3rfpn053r2evn+3qF2xa4KZZvnW5m37tjrXuvVu/2OrmE7bty222fsd6F9oWb15sczbMsZmfzrTJqybbmOVjXMBT2NP32WNOD+v8YWfrOLPjXhXtDzcOudEav9XYLh1wqZ33xnn2q//8yk557RQ7utvRBVle+fAVf0vtO8IskiCxYVYPV6hXr55NmzbNPRpXD2Jo2rSpP7ZsCr7BAxziSkuY3fr5V+4iKN2ySldO6zGKmcFVXQbUD1YXaE1YuJ4LoxJux1c7XGBQcFi3Y50LEiu2rXDBYuGmhS5oKHDMWjfLBRAFkTnr57ggoVBR6LZ8scU+3vCxjV0+1rVutZ3W1u4bfZ9dP/h6O+M/Z0SGgcxy8msn2+3Db7cXpr/gWtjWbPcey11V6Hv8dPunLhh+uO5Dm7J6ir2z9B0XCHt93MuFQYW6p6Y8ZX8e92e7d/S99vsRvy8V8s7sc2ZBh7xclQY9GthpvU+zs/qeZRf1u8iuGHiFXfv2tfa7ob+z24bfZn8Y8Qe7a9Rd9sd3/2j3j7nfWoxtYY+Me8T+MuEv1mpiK/v75L/bM1Oesefef87aTW/nhegZHdzrNu+3sdZTW7tpnpz0pD0+4XH3HTz83sP24JgH3b589zt3W7NRzdzn6PPUYltRCLNIgsSG2erVq1uHDh38IbOePXu6cKpwW5auXbvar3/9a8KsT7c3Uh9V3ZNTzwzPDK4qun+o7gKg+3WqDysqngLi8E+Gu9OHClydZnVyoeuvk/7qWgl1sNJB6reDfmuXDLjEHTRP6nlS5IE1H+WM3mfYxf0vthsG32B3jLzDHWh18NVBWS1jAxcOdC1mOu26N0WhSttDYUoHc81fB/E7R91ptw671ZoMbRJZFCailjezKIhpegWM9jPal7TKKUxou5/Y88RS71EQVoD456x/2viV411oLhTbvtjmfoyotVKnoPvO6+uWU9+JwpS22WUDLrPTe59ear2SUPR9aJ875/Vz3H531VtX2fVvX+/WS/ufwrb2Ee0rWufnpz1vL8580W0DBfNuRd3cfhmctu8/v/+u0/bFf4cK8tpf9b2qtVc/4orWF7nWY7Ucq9VYPwDTgDCLJEhkmF27dq0LopMnT/ZrPKrr1auXP1Ta6tWr7Sc/+YlNnz49tWFWt5zS7a900/R6Ea2uKroZv55T/8a05e6m6VWVDvhqiVQrpFog1fo449MZ7uA14pMRexQs1XIXddCl5L+o1Uwth2q1UiuYTsEOWjTIBRT1cYxLLdQKPwq8ClBRn3X+G+fb/aPvd8EpCMSVUdSK13hQYzu779mRy1VeUThs1KeRW69rBl3jQmHzd5pbi/dauNZAhcF/fvBPe3X2q9Zvfj93Gl/9M6eviQ55O7/O//1/UTEIs0iCRIZZhVEF0Q0bNvg1HtW1adPGHyrtlltusbvuusu9Li/Mvvfee3bMMceUlLp161q1atX8scmzfOMO+2PvGaWCq27wrttc6U4CurXVZ18UXn9Xnf5W4Jy3cZ4LmzqIqi+jDqo6uOogq1NxT0x8wh18dRDWwfi6t6+zKwde6Q7QOsirFeqEnidEHswruigcnNrrVNd6pJCgFiQFHS3Lb978jVsuhQa1Jt085GYXjnUK8k/j/uRakhROdPGHTvHqQhFdOKKQrT6GCt7qClBILUMKMVoufT86Da9l1kUrCnVaH51WVZi8Zdgt7vS0Ws/0o0AtaA+MecC1ounUqVrSFJ70nmenPutClE7tK7AphOr7ViujWnoVqNR6pu2iz1XXh0WbF7nuD/qRoh8rubRp5yZ30Y1ab9UCrNActS/ko2hZ1NqslnK1Hj86/lG3HfWdKMirf7C6XqhrAVAWwiySIJFhdubMmVnDbPv27f2h3fXt29cOO+ww277du5CgvDCreY8dO7akvPrqq65rQ9Js+OwL90z7cIB9+I1Z9p+py/b5GfJ7Qqdgl21d5lpxdLXtkMVD7D9z/+NaPRVadLBVuFEouPzNy11rZ67DgVpTdar43NfPdS2sV791tTv4xw6W6wozWCJ/Plr3kWvNz2w53ZMSnArvWtTVnQrXflfqVHhxkM88Fa4fOmodzbxYCtgXhFkkQSLD7KZNm1wQjepm0L9/f39od7pYTAE2XDS9/h09erQ/VXZJ62agOwroAq3/+4vXleDnjwx2j1NdtXmHP0XuqMVHB1+dllfrY1SQ3JOiFk61bKpFUy2ZatlTi55a8tSHUhem6OCvg74O9jrI6wKWD9Z+4JZlyeYlrrVuw+cb7LMvq263CQCoaIRZJEEiw6wonHbu3NkfMhs0aJAdfPDBtnTpUr9md5lBVqUqhtmvvv7WPeNfF20FLbG3d3/fPTo2F9QyqdOWal3Vlc1RYVRFFyupBVStn2r51MU76r+oU7S6j6LCry68UAhVAFX4JHgCQH4RZpEEiQ2z9913nwu0EydOtKKiImvYsKHdeeed/lhz9T/96U9txYoVfs3ugjAbV6GH2W+/NfdI19NCT9+66qUJNnPZJn+KfafT6TrFHlzhfUqv6FvwqIuAugyoC8GElRNy3ncRAJAbhFkkQWLDrLRs2dJq1qzpQmbmQxN0kZgCru5gEEVhVvemjauQw+w7H39q57cbWxJiz//He65uX3xb/N/8jfPd1du6l+Gv+/w6Mrjq3opNRzR1tzPShT9cUAIAVQdhFkmQ6DBbmQoxzKrVVa2vQYhVq2z/GStcK+2e+ubbb9zFK7r/olpVFVIzg6uu0L956M2uS4Eu4NIFXQCAqoswiyQgzMZUSGFW9369rdv7JSFW/WO7T9yzR25+8fUXNm3NNHfltC6oiroBvwKtgq0Crp4KpMALAEgPwiySgDAbUyGF2Uvaj3Mh9hePDrU2w+fa9i++9seUbenWpfaP6f+wm4bcVCq4quh+qLrnp+4KoHu6AgDSjTCLJCDMxlQoYXbr51+5IHvUX4a6e8jGoacc6RngmeFVN/HXvVT18AE9+hIAgDDCLJKAMBtToYTZEbPXuDDb+J8T/Zrs9FQmPU0pHGB1Oyw9PUtPSAIAoCyEWSQBYTamQgmzTwya7cJs2xHZuwHo8a+6SCt4gtYvu//ShVjuNAAA2BOEWSQBYTamQgmzF73wnguzExau92t20SNV1SdWdx1QiK3fvb49Mu4RW7Et+l67AACUhTCLJCDMxlQIYfazL76yn7UcbLUfGWw7v9p1ZwE9i/2fH/yz5CEGx3Q7xu4ffb97jCsAAHuLMIskIMz
GVAhhNrO/7M6vd1rXoq52Ru8zSvrENhvVzD3sAACAfUWYRRIQZmMqhDD7pN9f9tnhs93ts8JP5dLjZYvWF/lTAgCw7wizSALCbEyFEGYv9u8ve/1bTUtC7PWDr7epq6f6UwAAUHEIs0gCwmxM+Q6z4f6yx/do4O5UMHb5WH8sAAAVjzCLJCDMxpTvMDtyjtdf9qJ/dnctspe/ebk/BgCA3CDMIgkIszHlO8z+7e05Lsze3O8pF2b/Oumv/hgAAHKDMIskIMzGlO8we4nfX/aaN29xYXbI4iH+GAAAcoMwiyQgzMaUzzC7q7/s23ZSz5NcmNWjagEAyCXCLJKAMBtTPsPsqDmf+v1le7kge0G/C/wxAADkDmEWSUCYjSmfYfbvg73+srf0a+PC7J/H/dkfAwBA7hBmkQSE2ZjyGWYv7eD1l73xrTtcmO0/v78/BgCA3CHMIgkIszHlK8yG7y/bsNdpLswu3brUHwsAQO4QZpEEhNmY8hVm3/nY6y978ctvuCB7Wu/T/DEAAOQWYRZJQJiNKV9h9im/v+xt/du7MPvAmAf8MQAA5BZhFklAmI0pX2H2sg7jvYu/3r7XhdleH/fyxwAAkFuEWSQBYTamfITZoL+sSqM+jVyYnbdxnj8WAIDcIswiCQizMeUjzI6eu9brL/vSQBdk9cAEAAAqC2EWSUCYjSkfYfbpIR+7MNu0fycXZu8adZc/BgCA3CPMIgkIszHlI8xe9qLXX7bpkIddmP33R//2xwAAkHuEWSQBYTamyg6z4f6yF/e/xIXZWWtn+WMBAMg9wiySgDAbU2WH2THzvP6yl3Qc7oJsgx4N7Otvv/bHAgCQe4RZJAFhNqbKDrPP+P1l7xzQ1YXZ24bf5o8BACTKlzvMdm4127HRbNunZltWmm1aarZhkdm6eWZrisxWzTJbMc1s2WSzT8abLRm392XTMv+D9x1hFklAmI2pssPsb/z+sncNe8yF2Zc+eMkfAyCnPltn9ukcL1DoNaoOhUoFSoVJBUmFSAVIfdcLRhX/j36Q2Yevm8141WzKv8wmvGA29lmzUU+YDXvEbNC9ZgPuMOv7O7Ne15q9+huzLuebdfqV2Ysnmf3jGLM2dc2ePtzs8YPyV95r46/wviPMIgkIszFVZpjd+dU3Jf1lrxx4lQuzU1ZP8ccC2CNffGa2cYnZ8qnFf8hvm03rVhxQnjMb0sLs9VvMul9q1vEUs2drRwcDBZQ3bvPCzaoP/JmmlIKgWhQVAtWCqABYNMALf5OKf3Ar+I34ixf6+t3uBb5uF3thr/3xXtD7+0+it3NVL3//Hy/ktv6Ztx3aHlW8b9X3tkvHk81ebuhtp1fO9gJy14v2vnzY1//C9h1hFklAmI2pMsPsWL+/7MUvjrL63evbL7v/0nZ+vdMfi0T7fIvZtjXF4eoTs7Ufm62cabZ0ktmiMWZzhxQHg/5mM18ze79LcTjoaPbe82bv/t1s+KNmgx8we/MuL1j95wazHld6B67O55r969dm/zzD7KVTzV480eyFY83aHW32/C/MnqtTfACtZfbUYWZ/K96How60e1qeqlE8/3rFn3m6Fwb73GT21h/NRrUyG9/ObHpxYJzzlnfKc/VHZptXeKFyXyhIqTVtxXSzhe94LWhTX/EClFrNBtzphafwQf3fF0Yvf1RR0FCoCL//X41KT6dtqHHvPGk2b1jxcm3yFzABtm8o3u/mei2R2temdDIb/bS3/QbeXRzum5j1vMrbbgpXClsKX5nbIMlF37PCpH6kqDVVAfLfF3itrNp/1Oqq1lcFcm0XtcpqH5vQ3vtBo+CufU+tuArz2pZq3VXA1/6pVl/tq2oFrgIIs0gCwmxMlRlmnxnq9Ze9a8BrrlX2hsHFwQUVa+e24oC1vDhofWi2+D2z2QO9FrtxxUFs9FNmIx/3AuTQh70QOegeL0j2/0NxmLy1+IB3sxcoX7vGC5UKdC78FIdKBcr2x3lBUiGyogIkZe/KX/+fF8oU0BTW9L0qmCiQfDLBCyBlUShZPNb7UaHvOOoz9ANCAUg/QhRq8kE/hqZ3934ADW3p/ehRQNP++NwR0cu9p+Xpmt5+rRD4ylnF+/1lxX8HN3rrPvjB4oD/1+K/obZe6Pugl/eDRttOYU8hWkFPP+iQGIRZJAFhNqbKDLOXd/T6y9474m8uzLadVnxwwC5ffW722driELLYC6MKJGoh++gNL5BO7OC1Nuk0cr/fFwfOxl7rZYcG2U8lV1ZRi6aWoe3/ecvz8mnFy3aOWbdLvOXsUxwMtMxv3e0t/8jHvHVRyJ78srd+s/7jhe/5w70grvXXKfSVM7xWUPX3XL/AO7WuwL51tdf38/PNXuuott++0sUsCia6eEUtpVoOtVR90NtrLVWgefdvxevwkNdiqsCjYBW1TeIWtSxXdmtwNksnev0SX708+rS5WjWD1t09LcEPIrWuKzg+89OK+UGkeekUtn6EKXjqR5srxfuX+oa+/2/v9LRCsVob1aVi/ULvTEIVaWXEniPMIgkIszFVVphVf9naj3j9Za9/+wYXZscuH+uPTTC1hG5dVRx+5nuhS4FD/RcVzKZ29sKaCz/FAe7NZl6oU1BQ3zH1Z1T4e+Z/ow/Se1MUDtoc6c1bLXa9r/M+d9ifvIO7gooCkoLx5H96y6iwNLOnt8wf9fMC5dzBXqhc+K63TsumeAFb66kgqdCtdUfVpkCvfaX39RW7n0aV4AeRgn2HE7xwrx9E6h4wsLnXgqxgqv1SgVRhFNhLhFkkAWE2psoKs+/N9/vLtn/X9ZVVn9ntX233xyaEbjEzravZ2/d5B9qoA/K+FrVW6WCuFiy1uva4wmupU1cAdQ3QAV3hQuFToVktaWqxVKAGKssXxX+7ahHfvt5rIVdrsfpLq+VcfabVgqx+02pZ1z6qH0R6rfqSH0TrctfCDJSDMIskIMzGVFlhtvWwuS7M/nHA665V9qq3rvLHFCgdkNVSqT56CpVRwVNFLaG6EEmnThVwdTpV/U3V/1Sniof/2WzMM8UB9EWvBVQtn/NHeAd4Hdh1ylyBAABQaQizSALCbEyVFWav6DjBhdkHRz7nwuxTU57yxxQAtSqpT6IujlKfRV0MkhlaVafuAeqLpyvO1bcSAJBIhFkkAWE2psoIs+H+srcMvdWF2eFLhvtjK5luLaOWUbWWqgVVraqZwbXVwd5FTOprqguTPp1t9u23/gwAAElHmEUSEGZjqoww+978dX5/2bHWoEcDF2Y379zsj80h9etTX73x//BuOaX7k2YGVxXd9FtX3ev+mrq4JEn31wQA7DHCLJKAMBtTZYTZZ/3+svcMGOiC7MX9L/bHVKCvvzBb/r53s3TdM1X3x2xVrXRwffJQs05nehdxzejht7p+488EAJAGhFkkAWE2psoIs1e95PWXfeSd9i7MPjbhMX9MBVg02ru/5BM/LB1cFWb15CPdD1T3CNVthhR6AQCpRphFEhBmY8p1mA33l71j5F0uzA5cONAfu5d0SyDdHUDP/g6HVz0NSU9C0lOQdHN0bogOAIhAmEUSEGZjyn
WYHbfA6y974Qvv2Uk9T3JhdsW2Ff7YPaSb9ut+q+GnBv21uvdkKd3UHwCAGAizSALCbEy5DrPPDff6y97/5lAXZH/d59f+mJi+2mk28zXvUZi7tcIe4z3JSncnAABgDxBmkQSE2ZhyHWavfnmiC7OPje7kwmyLsS38MeXQwwSGP2rWutauAKs+sK819u44wK2yAAB7iTCLJEh8mN2wYYOtXr3aHyrfkiVLbNasWf5QfLkMs0F/WXcng3fvd2G2z9w+/tgIuqvA3CFmPa707vUahNjWPzMb+ZjZpmX+hAAA7D3CLJIg0WH2wQcftP3228+Vxo0b+7XRWrVqZSeffHLJ9HXq1LG7777bH1u+XIbZ8X5/2Qv+8Z7rXqAwu2DTAn9sBF28FQRYFT1GdlYZ4RcAgL1AmEUSJDbMNmvWzOrVq2fTpk2zBQsWWKNGjaxp06b+2NIUZtu3b29FRUW2du1a69ixowu1qo8jl2H2+RHzXJh9aMA7Lsie1vs0f0yEndu8APvX/2f21h/NVn/kjwAAoGIRZpEEiQ2z1atXtw4dOvhDZj179nThVOE2rtq1a9vll1/uD5Utl2G28T+9/rJ/HdPNhdk/vlscUrMpGuCF2Vd/41cAAJAbhFkkQSLDrFpWFVwnT57s13hU16tXL3+ofAqn6qoQR67CbLi/7ENj/uTCbPfZ3f2xEfrd7oVZPcELAIAcIswiCRIZZqdPn+6Cqy7+ClNdmzZt/KGytW7d2vWhzZxHYOzYsXbUUUeVFLXiVqtWzR9bcSYsXO+C7Pn/eM8u6HeBC7NF64v8sRm++drsqcO8MLt1lV8JAEBuEGaRBIkMszNnzswaZtUvtjzDhg1z0w4aNMivKW3jxo02YcKEkqIWX3VtqGht/f6yDw8Y74Jsgx4N7BvdrSDK4ve8IPtyGX1qAQCoIIRZJEEiw+ymTZtcGI3qZtC/f39/KJou+NJ0ffv29WviyVU3g6C/7NNje7kw23RE9ovYbGhLL8yOfsqvAAAgdwizSILEXgCmOxl07tzZHzLXynrwwQfb0qVL/ZrS9jbISi7CbLi/7KPjnnBhttOsMvrCtjvaC7OrPvArAADIHcIskiCxYfa+++5zgXbixInudlsNGza0O++80x9rrmvAYYcdZsuXL3fDPXr0KAmyo0eP3q3EkYswO9HvL3te27F2+ZuXuzA7bU2WuzF8OtsLsm2O9CsAAMgtwiySILFhVlq2bGk1a9Z0ITPzoQkzZsxwf4Rr1qxxwxqve9FGlThyEWbbjfT6y7Z8c7ILsr/s/kv7+tuv/bEZxj7nhdm37/crAADILcIskiDRYbYy5SLMXtNpkguzbcb1d2H25iE3+2Mi/OvXXphd+I5fAQBAbhFmkQSE2ZgqOsyG+8v+bWJrF2ZfmP6CPzbD9g1ekP1b8efr9lwAAFQCwiySgDAbU0WH2SmLN7gge27bsXbt29e6MDtuxTh/bIb3/+2F2b6/8ysAAMg9wiySgDAbU0WH2b7vL3Nh9k8DZlj97vVd2f7Vdn9shp5Xe2F2Vh+/AgCA3CPMIgkIszHlos/sji+/tqELx7pW2WsGXePXZvhyh9mTh5o98UOznVv9SgAAco8wiyQgzMaUizArHWZ2cGG29dTWfk2GOW95rbLdLvYrAACoHIRZJAFhNqZchdkmQ5u4MDtq6Si/JkP/P3hhdtJLfgUAAJWDMIskIMzGlIswq3vKNujRwIXZzTs3+7Uh335r9tRhXpjdlP3JZgAA5AJhFklAmI0pF2F2xqczXJC9bMBlfk2GT8Z7QfalU/0KAAAqD2EWSUCYjSkXYfaVD19xYbbVxFZ+TYbhf/bC7DtP+hUAAFQewiySgDAbUy7C7B0j73Bh9u1Fb/s1Gdod7YXZFdP9CgAAKg9hFklAmI2posPsN99+Yyf1PMmF2bU71vq1IWs/9oJs61p+BQAAlYswiyQgzMZU0WF2zvo5Lsie98Z5fk2GcW29MDvoHr8CAIDKRZhFEhBmY6roMPvanNdcmH1k3CN+TYZXzvbC7PzhfgUAAJWLMIskIMzGVNFhdufXO23iyon24boP/ZqQ7RvMWh1s9rfiz/v6C78SAIDKRZhFEhBmY8rFBWBZTe/mtcr+50a/AgCAykeYRRIQZmOq1DDb67demP2gl18BAEDlI8wiCQizMVVamP1yh9mTh3rdDHZu9SsBAKh8hFkkAWE2pkoLsx8P8lpl/32hXwEAQH4QZpEEhNmYKi3MvtnMC7MT2vsVAADkB2EWSUCYjalSwuy333oPSVCY3bTUrwQAID8Is0gCwmxMlRJml07yguyLJ/oVAADkD2EWSUCYjalSwuyIv3hhduTjfgUAAPlDmEUSEGZjqpQw2+5oL8wum+JXAACQP4RZJAFhNqach9l1870gqz6z6jsLAECeEWaRBITZmHIeZsf/wwuzA5v7FQAA5BdhFklAmI0p52G2y3lemJ07xK8AACC/CLNIAsJsTDkNs9s3eE/8+lvx/L/+wq8EACC/CLNIAsJsTDkNszN6eK2yva/zKwAAyD/CLJKAMBtTTsOsQqzC7IxX/QoAAPKPMIskIMzGlLMwq24F6l6gbgbqbgAAQIEgzCIJCLMx5SzM6oIvtcp2PtevAACgMBBmkQSE2ZhyFmYH3u2F2fHt/AoAAAoDYRZJQJiNKSdhVg9H0EMSFGbXzfMrAQAoDIRZJAFhNqachNnlU70gq8fYAgBQYAizSALCbEw5CbMjH/fC7PBH/QoAAAoHYRZJQJiNKSdh9sUTvTC7dKJfAQBA4SDMIgkIszFVeJjdtNQLsuozq76zAAAUGMIskoAwG1OFh9mJL3phdsCdfgUAAIWFMIskIMzGVOFh9ssdZvOHm63+0K8AAKCwEGaRBITZmHLSZxYAgAJGmEUSEGZjIswCANKGMIskIMzGRJgFAKQNYRZJQJiNiTALAEgbwiySgDAbE2EWAJA2hFkkAWE2JsIsACBtCLNIAsJsTIRZAEDaEGaRBIkPs2vXrrUVK1b4Q/HMmjXLduzY4Q/FQ5gFAKQNYRZJkOgw27JlS9tvv/1cady4sV+bXVFRkZ144olu+h/84AfWqlUrf0z5CLMAgLQhzCIJEhtm7733XqtXr55NmzbNFixYYI0aNbKmTZv6Y6OdcsopLvSuXLnSJk+ebAceeKB16tTJH1s2wiwAIG0Is0iCxIbZGjVqWIcOHfwhs549e7oWV4XbKAqjGj9z5ky/xuzuu++2Bg0a+ENlI8wCANKGMIskSGSYVT9ZBVO1roaprm/fvv7Q7lT//e9/3x/yjB492r1n9erVfk12hFkAQNoQZpEEiQyz06dPdyF0w4YNfo1Hde3atfOHdte2bVurX7++P+QJwuyMGTP8ml007ogjjigphx12mP33f//3bnX7Wv7nf/7HDj/88MhxFG/7qESNo3hF26dWrVqR4yjsQ3GKts/Pf/7zyHEU9iE1Aj355JP+kREoTIkMs+oqkC3Mtm/f3h/anUJutjD7wQcf+DW7bNq0yaZMmVJSRo0aZS+99NJudftaFJC7d+8eOY4yxR555
BE7++yzI8dRvKJ+3/37948cR5lid911l11++eWR4yhe+a//+i8bOXJk5DjKFPvd735nN998c+S4NJQ+ffrYkiVL/CMjUJgSGWYVNBVCo7oZ6MAeZcCAAVm7Gaxbt86vqVxqDcnWxxfmLs6Lc5eKNPvhD39oixYt8oeQ6Zlnnin3wtC0U5jNbBjALrprzsMPP+wPAShEib0ATHcyyLwA7OCDD7alS5f6Nbtbvny5C66ZF4Cdc845/lDlI8yWjTBbPsJs2Qiz5SPMlo0wCxS+xIbZ++67zwVa9Z8Nbs115513+mPNxo8f7y7YUogNnHrqqS4crVq1quTWXF26dPHHVj7CbNkIs+UjzJaNMFs+wmzZCLNA4UtsmJVu3bq5W2sdffTRpTqoqwX2rLPOsjVr1vg15v6HrXCkAKBxr732mj8mP3r37k2YLYO2jwqy0/YhzGbHPlQ+bR/CbHbsQ0DhS3SYBQAAQLoRZgEAAJBYhFkAAAAkFmEWAAAAiUWYzRNdHXvppZda8+bNbdKkSX5t8umCu+HDh9tzzz1nrVq18mtLe/DBB+3CCy+02267zaZOnerX7qLHD99000120UUXRc5n8+bN7o4WuovFHXfcYUVFRf6YXXSBny74u/LKK8tclsrWr18/t95XXXWVde3a1davX++P2aW8ZY+z/q+++qo1adLErrvuush56D333HOPm4f+jZpHPmj7aJlUtJ9of8qke023aNGizGXXOmvd92X9y5tHvmnf0XJFLVt5y14R6x9nHvmgZY0qYZW1/uXNA8C+I8zmgUKKguygQYPc/9wOOOAAGzx4sD822bQ+derUcffw1X19o2j9VYL11y3Shg4d6o81V6/36mluetiFbsGmA0FAQUZ3sdA89OALzUPT6JZrAdVpHrq9l+5BrPGqyzctgw58nTt3doH9kksuseOOO862b9/uT1H+su/N+teoUWO3eejqdb0ncx4rV670p8gfbZ8HHnjAffd6cp/uPPL000/7Y71lP/bYY8tc9opY//LmUQhuvPFGt71Uwipj/VesWOHeo7rwPFSfb9oeWp7MEgiWvbz11zpr3aPWX9OWt/7heWg7anuG5wGgYhBmK1kQ1MKtkZdddpndfvvt/lDVoP+5az0zaf0PPfRQf8ij9Vd4CSjot27d2h/y7hmsea1evdoNK+REzSN8kFDYy5yHDirhwJcPeixy2Nq1a926qRU1UN6yx1l/HVTD8+jYsaObR3Cw1sE1cx76EVKIB1rd51PLHoha9iBUBDLXXz+M9nT9NY/w47Ez55FvujXhKaecEhlm46y/1jcsvP47duyw/fffv9T6q07jRC3jUfNQfb4FYTabbMu+J+uvactaf21rbfPMeei7AVCxCLOVTKeGM586pv8pHn/88f5Q1ZAtzCq0RrUiBeuvFiO9T+8PU93AgQPd6+D0epjmcfLJJ7vXW7ZsyTqPN9980x8qHLVr1y55eEecZd+b9Vdo/t73vudO4Uu2eZx44on+UOF4/PHH3Q+cQHnLXtY2jLv+wTwyheeRT2r900Nh9MRDrUd4XeKuv9Y3LLz+em+29Q/mq8+Mmkfmds2HYNm0rFEtxdmWfU/WX9OWtf7a1tnmoe8IQMUp/ZeGnFI/yVtvvdUf8qiVpHr16v5Q1ZDtYHDttdeWaoUOr7/6nOl9mQcgtXiodVF0ajBqHj/5yU/c67Lm8fLLL/tDhUEtNVrW4OEZcZZ9b9e/bt26JY+AjpqHDsQKSIVAy6KiftX6oTNy5Eh/TPSyq+9xsOzZ1l91cdc/mEem8Dzy6frrr3f9iUXhKRwg466/1jcsvP5xwly2QBhelnzRMvzv//6vHXPMMW6ZNTxkyBB/bPZl35P117Rlrb+2dbZ56DsCUHFK/6Uhpy644AJ32jRM/5P97ne/6w9VDdkOBlr/zNOQ4fXXxXB6X3AqL6BWx6Df5Pnnnx85D7U8ysSJE8udRyEItlH4gBhn2fd2/XVKuqx5aDmCeeSblkVF+4tCyb///W9/TPSyq891eeuvurjrH8wjU3ge+aJW/F/84hf+UOkwG3f9w/udhNc/TpjLFgjDy5IvEyZM8F9566KWfXXfCWRb9j1Zf01b1vprW2ebh74jABWn9F8acuoPf/iDa50MU+ucTjVXJdkOBlp/XaEfFl5/nTbV+2bNmuWGA4cccoj16NHDvdadAKLmERzgly9fnnUeasEsBNu2bSvVz1PiLPverr9absuah5YlHJIKxTPPPOMuEvz888/dcNSyqxWsvPVXXdz1D+aRKTyPfPnpT3/q/r6CovCkEoSsuOsfFcSC9c/296u64HOyBcIgzBUSLVd4fbIt+56sv6Yta/21rbPNQ98RgIpT+i8NOaX/2dWqVcs+++wzv8asWbNmBXkA2BfZDgZa/yOPPNIf8qjbRbD+X3zxhbtoQqeNA0uWLNntIJJtHuF+lZo+ah6ZF2DlQ3AaeNiwYX7N7spb9rjrr7slBILvIzyPzAtRdOo5PI9CEaz/jBkz3HCcZc9cf3Xj2NP11/RB9w/JnEe+6G8lWwnEWX+tb1h4/YP9JWr9g79DfV7UPMLLUSjatGnj/r8SyLbse7L+mras9de2zjYPABWLv6pKFlzhqvuwioKNDqqF1pdzXwUHg0y6Iv+www4rOdBGrb8OtNdcc40/5A2rv2dg5syZbt7BPGbPnu3moTslBHR/1fCBRvMIB5V80TJr2XVwzaa8Zd+b9dc9e8uaR/B9heeRL+Fl0AWBWhf9AAxuXxZn2Sti/cubR6FQeMoMkBWx/urfnzkP1QXUj1nv0XslmEe4f3M+aDmCZRL9GPrtb3/r+l8HgmUvb/21zoHM9de05a2/tnl4Htqe+m4AVCzCbB7of6D6n179+vXtoIMOclcWVxUKXlq3zBI+uATrrz5s1apVi1x//U9foV/3E1WQzTxAvvTSS24eOojrFLQ+N0zdFS6++GIXgvR+hb1CuOhCyxveLkEJL3+cZY+7/jVr1nRdOKLmEXxXmofudZw5j3zRMukHT7Ct1Aod3MkiEF7/qGUPfiTty/rHmUch0PKrhFXW+us9em+2eeRDECr196P/x+j1mWeeWerhG3HXX+u+t+sfnoe2Y9Q8AOw7wmyeqMVp8uTJrtWgKtGBJFsJi7P+CxYscPfjDXfJCNPtpjTfzKu2w3TgUN/BzIth8iVzm4RLpvKWvSLWX+8tbx75EN4u69at82t3V976a5217vuy/nHmkW/BdspUWetf3jwqm/pWf/zxxyXbpazlqoz1jzMPAPuGMAsAAIDEIswCAAAgsQizAAAASCzCLAAAABKLMAsAAIDEIswCAAAgsQizAAAASCzCLFCFBPfWXLRokV/jCepzSfPPvHl/PrVv395uvPFGt0y5XncAQP4QZoEqRE8gOuOMM+zqq6/2azwKc3raUS5VxmfEpRvna1n02GAt156GWQXgzKc5AQAKE2EWqEIUwBTEFOSGDRvm16YvzGpZqlevbs2aNdvjICuEWQBIDsIsUIUEYfaee+6xk046ya8tHTSjwlq4Lpi+X79+Jc+3P+20
09y4J554wurWrWs1atSwp59+2tVJ8J6+ffvaL3/5S/f6yiuvLPWYz1deecXq16/vxp966qnWqVMnf8yu5de/RxxxhHudTYsWLdzz7n/84x/v9jl6r+YdlGzz0PJefvnlVq1aNTednpsvme9XCeg9WmbVaR1at27tj9m1/s8995x7Fv9BBx1Uav0ff/xx+9WvflUy37LWDwAQD2EWqEKCMLhy5Ur7/ve/by+++KKrD4JWIAiMYeG6YHoN63VRUZGdd955blin7pcsWWIjRoxwnzFt2rTd3nPCCSfYyJEj3bDmee6557rx0rhxY7viiitsyJAhblhBVu8JhoPlv+6662zx4sXuc6IoyGo6fcakSZPsmmuu2e1zgs8ui8a/+uqrtmXLFjes9wQ0LnP7dO7c2S2rwrro38MPP7xkOFj/YLmCZdA6BzR+zJgxtnXrVjfctWtX9y8AYO8RZoEqJAiDwetatWrZzp07S4JWICqsheuC6SdPnuyG5e6777ZDDjnEduzY4deYa50MWlaD97zxxhtuWBRSVTdo0CDbsGGDe92xY0d/rCf8ufpX06xfv94NR9m4caObplevXn6NudbP4HMkCJJl0XiF2TVr1vg1u0Rtn4YNG5aqu/baa10ruATr//rrr7th0TKqTsusHxh6remCMAsA2HeEWaAKUdgKhzh1BXjsscdKglYgKqyF6zKnl8x5S9R7FNzC6tSp41pzp0yZ4sZHlWAeUZ+RSS3Beo/CcZiCtT5HtCzlzUefdcopp7h5KagHrcMSXq9AeHnDJficqPUPAnzQet28eXO3PfS93H///TZu3DhXDwDYe4RZoArJDIPqZrD//vvHCrPHH398Sd2+hNnp06e7YQnCXPfu3d3twvRap9mzifqMTMF8wq3G8qMf/ch9jsQJs4HBgwfb7373OzfPiRMnurqo7aN+wuovnE3U+msZVZd5q7TevXvb7bff7ra5Ws4BAHuPMAtUIVFhUBc23XHHHS5UBdRXUxd2Bd5//303XWYwDYuadzj0Be95/vnn3bDoXq8K0zNmzHDD+ox7773XvQ4Lwl7UZ0TRxWhBK6wohIY/p7wwu3DhQv/VLpo+6LqguyA0bdrUvQ40adIkcp5Bt4uo9dcyBheWZQZa0fTz5s3zhwAAe4MwC1QhUWGwf//+LjSFw+mcOXPsF7/4hbuoS8FNp+j1vooIs7rgSfUqGg7Gi0LnD37wA3dFv06za5ym13sl6jOiDBw40PXf1bTnn39+qc8pL8xq/GWXXebeEyzD6aef7o/1Lu7SPBVgw/PVXRqOO+44+/3vf+/qte2C8cH6n3nmmZHrr/H6HA0/9NBDdsEFF9g555zjxgEA9h5hFqhCFJSC8BQWVa+LkNQSqQuydLeC8DQKXpnTR80j23t69uzpXof7oQbUkqnT7LqtV+Y04fmVZ9asWW7Z1fo7fPhwv9YTtfyZdBeE4PN0T97grgYB1T377LOl5qMfB7oll+p1m7FAEGbV0qp6vTd8r1/Ruup9arHV9FwIBgD7jjALABUgCLMAgMrF/3kBoAIQZgEgP/g/LwBUAIVZFQBA5SLMAgAAILEIswAAAEgswiwAAAASizALAACAxCLMAgAAILEIswAAAEgswiwAAAASizALAACAxCLMAgAAILEIswAAAEgswiwAAAASizALAACAxCLMAgAAILEIswAAAEgswiwAAAASizALAACAxCLMAgAAILEIswAAAEgswiwAAAASizALAACAxCLMAgAAILEIswAAAEgswiwAAAASizALAACAxCLMAgAAILEIswAAAEgswiwAAAASizALAACAxCLMAgAAILEIswAAAEgos/8PAKXZKAZp7rQAAAAASUVORK5CYII=">
## Tokenizer
The starting tokenizer is [BarthezTokenizer](https://huggingface.co/transformers/model_doc/barthez.html), to which the special tokens \<sep\> and \<hl\> were added.
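As an illustration, registering these two extra tokens on top of the base tokenizer could look like the sketch below. The base checkpoint name `moussaKam/barthez` is an assumption; the card only states that the tokenizer starts from BarthezTokenizer.
```python
from transformers import AutoTokenizer

# Assumed base checkpoint for BARThez; adjust to the checkpoint actually used.
tokenizer = AutoTokenizer.from_pretrained("moussaKam/barthez")

# Add the two special tokens: <hl> highlights the answer span in the input, <sep> acts as a separator.
tokenizer.add_special_tokens({"additional_special_tokens": ["<sep>", "<hl>"]})

# When fine-tuning, the model embeddings must be resized to account for the new tokens:
# model.resize_token_embeddings(len(tokenizer))
```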
## Usage
_The model is a POC; we do not guarantee its performance_
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from transformers import Text2TextGenerationPipeline
model_name = 'lincoln/barthez-squadFR-fquad-piaf-question-generation'
loaded_model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
loaded_tokenizer = AutoTokenizer.from_pretrained(model_name)
nlp = Text2TextGenerationPipeline(model=loaded_model, tokenizer=loaded_tokenizer)
nlp("Les projecteurs peuvent être utilisées pour <hl>illuminer<hl> des terrains de jeu extérieurs")
# >>> [{'generated_text': 'À quoi servent les projecteurs sur les terrains de jeu extérieurs?'}]
```
```py
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from transformers import Text2TextGenerationPipeline
model_name = 'lincoln/barthez-squadFR-fquad-piaf-question-generation'
loaded_model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
loaded_tokenizer = AutoTokenizer.from_pretrained(model_name)
text = "Les Etats signataires de la convention sur la diversité biologique des Nations unies doivent parvenir, lors de la COP15, qui s’ouvre <hl>lundi<hl>, à un nouvel accord mondial pour enrayer la destruction du vivant au cours de la prochaine décennie."
inputs = loaded_tokenizer(text, return_tensors='pt')
# Beam search: explore 16 beams and return all 16 candidate questions.
out = loaded_model.generate(
input_ids=inputs.input_ids,
attention_mask=inputs.attention_mask,
num_beams=16,
num_return_sequences=16,
length_penalty=10
)
questions = []
for question in out:
questions.append(loaded_tokenizer.decode(question, skip_special_tokens=True))
for q in questions:
print(q)
# Quand se tient la conférence des Nations Unies sur la diversité biologique?
# Quand a lieu la conférence des Nations Unies sur la diversité biologique?
# Quand se tient la conférence sur la diversité biologique des Nations unies?
# Quand se tient la conférence de la diversité biologique des Nations unies?
# Quand a lieu la conférence sur la diversité biologique des Nations unies?
# Quand a lieu la conférence de la diversité biologique des Nations unies?
# Quand se tient la conférence des Nations unies sur la diversité biologique?
# Quand a lieu la conférence des Nations unies sur la diversité biologique?
# Quand se tient la conférence sur la diversité biologique des Nations Unies?
# Quand se tient la conférence des Nations Unies sur la diversité biologique?
# Quand se tient la conférence de la diversité biologique des Nations Unies?
# Quand la COP15 a-t-elle lieu?
# Quand la COP15 a-t-elle lieu?
# Quand se tient la conférence sur la diversité biologique?
# Quand s'ouvre la COP15,?
# Quand s'ouvre la COP15?
```
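Note that beam search with `num_return_sequences=16` produces many near-duplicate questions, as the output above shows. A minimal way to keep only distinct generations (a sketch, not part of the original card) is:
```python
# Keep the first occurrence of each distinct question, preserving beam order.
unique_questions = list(dict.fromkeys(questions))
```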
## Citation
Model based on:
paper: https://arxiv.org/abs/2010.12321 \
github: https://github.com/moussaKam/BARThez
```
@article{eddine2020barthez,
title={BARThez: a Skilled Pretrained French Sequence-to-Sequence Model},
author={Eddine, Moussa Kamal and Tixier, Antoine J-P and Vazirgiannis, Michalis},
journal={arXiv preprint arXiv:2010.12321},
year={2020}
}
```
|
albert-xlarge-v1 | [
"pytorch",
"tf",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"AlbertForMaskedLM"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 341 | 2021-10-11T13:01:58Z | ---
language:
- fr
license: mit
datasets:
- squadFR
- fquad
- piaf
tags:
- camembert
- answer extraction
---
# Answer extraction
This model is _fine-tuned_ from [camembert-base](https://huggingface.co/camembert-base) for the token classification task.
The goal is to identify the spans of tokens that are likely to be the subject of a question.
## Training data
The training set is the concatenation of the SquadFR, [fquad](https://huggingface.co/datasets/fquad) and [piaf](https://huggingface.co/datasets/piaf) datasets.
The answers of each context were labeled with the "ANS" tag (a labeling sketch follows the counts below).
Volume (number of contexts):
* train: 24 652
* test: 1 370
* valid: 1 370
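As an illustration, the token-level labels could be built from the character-level answer spans of these datasets as in the sketch below. This is an assumption about the preprocessing (the card does not give the exact script): every token overlapping an annotated answer gets the "ANS" label (index 1), everything else "O" (index 0), and special tokens are ignored by the loss.
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("camembert-base")

def label_context(context, answer_spans):
    """answer_spans: list of (start_char, end_char) pairs from SquadFR/fquad/piaf annotations."""
    enc = tokenizer(context, return_offsets_mapping=True, truncation=True)
    labels = []
    for start, end in enc.offset_mapping:
        if start == end:
            labels.append(-100)  # special token, ignored by the loss
        elif any(s < end and start < e for (s, e) in answer_spans):
            labels.append(1)     # "ANS"
        else:
            labels.append(0)     # "O"
    return enc, labels
```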
## Training
Training was run on a Tesla K80 card.
* Batch size: 16
* Weight decay: 0.01
* Learning rate: 2e-5 (linear decay)
* Default parameters of the [TrainingArguments](https://huggingface.co/transformers/main_classes/trainer.html#trainingarguments) class
* Total steps: 1 000
The model appears to overfit beyond that point:

## Limitations
The model does not perform well and its predictions must be post-processed to be coherent. The classification task is not straightforward, since the model has to identify groups of tokens _knowing_ that a question could be asked about them.

## Usage
_The model is a POC; we do not guarantee its performance_
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
import numpy as np
model_name = "lincoln/camembert-squadFR-fquad-piaf-answer-extraction"
loaded_tokenizer = AutoTokenizer.from_pretrained(model_name)
loaded_model = AutoModelForTokenClassification.from_pretrained(model_name)
text = "La science des données est un domaine interdisciplinaire qui utilise des méthodes, des processus,\
des algorithmes et des systèmes scientifiques pour extraire des connaissances et des idées de nombreuses données structurelles et non structurées.\
Elle est souvent associée aux données massives et à l'analyse des données."
inputs = loaded_tokenizer(text, return_tensors="pt", return_offsets_mapping=True)
outputs = loaded_model(inputs.input_ids).logits
probs = 1 / (1 + np.exp(-outputs.detach().numpy()))  # sigmoid over the logits
# Smooth the "ANS" scores with a 2-token moving average to reduce isolated spikes.
probs[:, :, 1][0] = np.convolve(probs[:, :, 1][0], np.ones(2), 'same') / 2
sentences = loaded_tokenizer.tokenize(text, add_special_tokens=False)
prob_answer_tokens = probs[:, 1:-1, 1].flatten().tolist()
offset_start_mapping = inputs.offset_mapping[:, 1:-1, 0].flatten().tolist()
offset_end_mapping = inputs.offset_mapping[:, 1:-1, 1].flatten().tolist()
threshold = 0.4
entities = []
for ix, (token, prob_ans, offset_start, offset_end) in enumerate(zip(sentences, prob_answer_tokens, offset_start_mapping, offset_end_mapping)):
entities.append({
'entity': 'ANS' if prob_ans > threshold else 'O',
'score': prob_ans,
'index': ix,
'word': token,
'start': offset_start,
'end': offset_end
})
for p in entities:
print(p)
# {'entity': 'O', 'score': 0.3118681311607361, 'index': 0, 'word': '▁La', 'start': 0, 'end': 2}
# {'entity': 'O', 'score': 0.37866950035095215, 'index': 1, 'word': '▁science', 'start': 3, 'end': 10}
# {'entity': 'ANS', 'score': 0.45018652081489563, 'index': 2, 'word': '▁des', 'start': 11, 'end': 14}
# {'entity': 'ANS', 'score': 0.4615934491157532, 'index': 3, 'word': '▁données', 'start': 15, 'end': 22}
# {'entity': 'O', 'score': 0.35033443570137024, 'index': 4, 'word': '▁est', 'start': 23, 'end': 26}
# {'entity': 'O', 'score': 0.24779987335205078, 'index': 5, 'word': '▁un', 'start': 27, 'end': 29}
# {'entity': 'O', 'score': 0.27084410190582275, 'index': 6, 'word': '▁domaine', 'start': 30, 'end': 37}
# {'entity': 'O', 'score': 0.3259460926055908, 'index': 7, 'word': '▁in', 'start': 38, 'end': 40}
# {'entity': 'O', 'score': 0.371802419424057, 'index': 8, 'word': 'terdisciplinaire', 'start': 40, 'end': 56}
# {'entity': 'O', 'score': 0.3140853941440582, 'index': 9, 'word': '▁qui', 'start': 57, 'end': 60}
# {'entity': 'O', 'score': 0.2629334330558777, 'index': 10, 'word': '▁utilise', 'start': 61, 'end': 68}
# {'entity': 'O', 'score': 0.2968383729457855, 'index': 11, 'word': '▁des', 'start': 69, 'end': 72}
# {'entity': 'O', 'score': 0.33898216485977173, 'index': 12, 'word': '▁méthodes', 'start': 73, 'end': 81}
# {'entity': 'O', 'score': 0.3776060938835144, 'index': 13, 'word': ',', 'start': 81, 'end': 82}
# {'entity': 'O', 'score': 0.3710060119628906, 'index': 14, 'word': '▁des', 'start': 83, 'end': 86}
# {'entity': 'O', 'score': 0.35908180475234985, 'index': 15, 'word': '▁processus', 'start': 87, 'end': 96}
# {'entity': 'O', 'score': 0.3890596628189087, 'index': 16, 'word': ',', 'start': 96, 'end': 97}
# {'entity': 'O', 'score': 0.38341325521469116, 'index': 17, 'word': '▁des', 'start': 101, 'end': 104}
# {'entity': 'O', 'score': 0.3743852376937866, 'index': 18, 'word': '▁', 'start': 105, 'end': 106}
# {'entity': 'O', 'score': 0.3943936228752136, 'index': 19, 'word': 'algorithme', 'start': 105, 'end': 115}
# {'entity': 'O', 'score': 0.39456743001937866, 'index': 20, 'word': 's', 'start': 115, 'end': 116}
# {'entity': 'O', 'score': 0.3846966624259949, 'index': 21, 'word': '▁et', 'start': 117, 'end': 119}
# {'entity': 'O', 'score': 0.367380827665329, 'index': 22, 'word': '▁des', 'start': 120, 'end': 123}
# {'entity': 'O', 'score': 0.3652925491333008, 'index': 23, 'word': '▁systèmes', 'start': 124, 'end': 132}
# {'entity': 'O', 'score': 0.3975735306739807, 'index': 24, 'word': '▁scientifiques', 'start': 133, 'end': 146}
# {'entity': 'O', 'score': 0.36417365074157715, 'index': 25, 'word': '▁pour', 'start': 147, 'end': 151}
# {'entity': 'O', 'score': 0.32438698410987854, 'index': 26, 'word': '▁extraire', 'start': 152, 'end': 160}
# {'entity': 'O', 'score': 0.3416857123374939, 'index': 27, 'word': '▁des', 'start': 161, 'end': 164}
# {'entity': 'O', 'score': 0.3674810230731964, 'index': 28, 'word': '▁connaissances', 'start': 165, 'end': 178}
# {'entity': 'O', 'score': 0.38362061977386475, 'index': 29, 'word': '▁et', 'start': 179, 'end': 181}
# {'entity': 'O', 'score': 0.364640474319458, 'index': 30, 'word': '▁des', 'start': 182, 'end': 185}
# {'entity': 'O', 'score': 0.36050117015838623, 'index': 31, 'word': '▁idées', 'start': 186, 'end': 191}
# {'entity': 'O', 'score': 0.3768993020057678, 'index': 32, 'word': '▁de', 'start': 192, 'end': 194}
# {'entity': 'O', 'score': 0.39184248447418213, 'index': 33, 'word': '▁nombreuses', 'start': 195, 'end': 205}
# {'entity': 'ANS', 'score': 0.4091200828552246, 'index': 34, 'word': '▁données', 'start': 206, 'end': 213}
# {'entity': 'ANS', 'score': 0.41234123706817627, 'index': 35, 'word': '▁structurelle', 'start': 214, 'end': 226}
# {'entity': 'ANS', 'score': 0.40243157744407654, 'index': 36, 'word': 's', 'start': 226, 'end': 227}
# {'entity': 'ANS', 'score': 0.4007353186607361, 'index': 37, 'word': '▁et', 'start': 228, 'end': 230}
# {'entity': 'ANS', 'score': 0.40597623586654663, 'index': 38, 'word': '▁non', 'start': 231, 'end': 234}
# {'entity': 'ANS', 'score': 0.40272021293640137, 'index': 39, 'word': '▁structurée', 'start': 235, 'end': 245}
# {'entity': 'O', 'score': 0.392631471157074, 'index': 40, 'word': 's', 'start': 245, 'end': 246}
# {'entity': 'O', 'score': 0.34266412258148193, 'index': 41, 'word': '.', 'start': 246, 'end': 247}
# {'entity': 'O', 'score': 0.26178646087646484, 'index': 42, 'word': '▁Elle', 'start': 255, 'end': 259}
# {'entity': 'O', 'score': 0.2265639454126358, 'index': 43, 'word': '▁est', 'start': 260, 'end': 263}
# {'entity': 'O', 'score': 0.22844195365905762, 'index': 44, 'word': '▁souvent', 'start': 264, 'end': 271}
# {'entity': 'O', 'score': 0.2475772500038147, 'index': 45, 'word': '▁associée', 'start': 272, 'end': 280}
# {'entity': 'O', 'score': 0.3002186715602875, 'index': 46, 'word': '▁aux', 'start': 281, 'end': 284}
# {'entity': 'O', 'score': 0.3875720798969269, 'index': 47, 'word': '▁données', 'start': 285, 'end': 292}
# {'entity': 'ANS', 'score': 0.445063054561615, 'index': 48, 'word': '▁massive', 'start': 293, 'end': 300}
# {'entity': 'ANS', 'score': 0.4419114589691162, 'index': 49, 'word': 's', 'start': 300, 'end': 301}
# {'entity': 'ANS', 'score': 0.4240635633468628, 'index': 50, 'word': '▁et', 'start': 302, 'end': 304}
# {'entity': 'O', 'score': 0.3900952935218811, 'index': 51, 'word': '▁à', 'start': 305, 'end': 306}
# {'entity': 'O', 'score': 0.3784807324409485, 'index': 52, 'word': '▁l', 'start': 307, 'end': 308}
# {'entity': 'O', 'score': 0.3459452986717224, 'index': 53, 'word': "'", 'start': 308, 'end': 309}
# {'entity': 'O', 'score': 0.37636008858680725, 'index': 54, 'word': 'analyse', 'start': 309, 'end': 316}
# {'entity': 'ANS', 'score': 0.4475618302822113, 'index': 55, 'word': '▁des', 'start': 317, 'end': 320}
# {'entity': 'ANS', 'score': 0.43845775723457336, 'index': 56, 'word': '▁données', 'start': 321, 'end': 328}
# {'entity': 'O', 'score': 0.3761221170425415, 'index': 57, 'word': '.', 'start': 328, 'end': 329}
```
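Because the raw token-level predictions need post-processing to be usable (see the limitations above), a minimal sketch that merges consecutive `ANS` tokens from the `entities` list above into character-level answer spans could look like this; the grouping logic is our own illustration, not part of the released model.
```python
# Hypothetical post-processing: merge consecutive 'ANS' tokens into character spans over `text`.
spans = []
current = None
for ent in entities:
    if ent["entity"] == "ANS":
        if current is None:
            current = {"start": ent["start"], "end": ent["end"], "scores": [ent["score"]]}
        else:
            current["end"] = ent["end"]
            current["scores"].append(ent["score"])
    elif current is not None:
        spans.append(current)
        current = None
if current is not None:
    spans.append(current)

for span in spans:
    mean_score = sum(span["scores"]) / len(span["scores"])
    print(text[span["start"]:span["end"]], round(mean_score, 3))
# e.g. "des données" together with its average token probability
```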
|
albert-xlarge-v2 | [
"pytorch",
"tf",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"AlbertForMaskedLM"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2,973 | 2021-04-29T13:16:16Z | ---
language:
- fr
license: mit
datasets:
- MLSUM
pipeline_tag: "text-classification"
widget:
- text: La bourse de paris en forte baisse après que des canards ont envahit le parlement.
tags:
- text-classification
- flaubert
---
# Press article classification with FlauBERT
This model is based on [`flaubert/flaubert_base_cased`](https://huggingface.co/flaubert/flaubert_base_cased) and was fine-tuned on press articles from the MLSUM dataset.
In their paper, the reciTAL and Sorbonne teams suggested, as future work, building a topic detection model for press articles.
Topics were extracted from the article URLs, and we grouped them to remove those with too little volume and those that appeared redundant.
We ended up using the following list of topics, with these groupings:
* __Economie__: economie, argent, emploi, entreprises, economie-francaise, immobilier, crise-financiere, evasion-fiscale, economie-mondiale, m-voiture, smart-cities, automobile, logement, flottes-d-entreprise, import, crise-de-l-euro, guide-des-impots, le-club-de-l-economie, telephonie-mobile
* __Opinion__: idees, les-decodeurs, tribunes
* __Politique__: politique, election-presidentielle-2012, election-presidentielle-2017, elections-americaines, municipales, referendum-sur-le-brexit, elections-legislatives-2017, elections-regionales, donald-trump, elections-regionales-2015, europeennes-2014, elections-cantonales-2011, primaire-parti-socialiste, gouvernement-philippe, elections-departementales-2015, chroniques-de-la-presidence-trump, primaire-de-la-gauche, la-republique-en-marche, elections-americaines-mi-mandat-2018, elections, elections-italiennes, elections-senatoriales
* __Societe__: societe, sante, attaques-a-paris, immigration-et-diversite, religions, medecine, francaises-francais, mobilite
* __Culture__: televisions-radio, musiques, festival, arts, scenes, festival-de-cannes, mode, bande-dessinee, architecture, vins, photo, m-mode, fashion-week, les-recettes-du-monde, tele-zapping, critique-litteraire, festival-d-avignon, m-gastronomie-le-lieu, les-enfants-akira, gastronomie, culture, livres, cinema, actualite-medias, blog, m-gastronomie
* __Sport__: sport, football, jeux-olympiques, ligue-1, tennis, coupe-du-monde, mondial-2018, rugby, euro-2016, jeux-olympiques-rio-2016, cyclisme, ligue-des-champions, basket, roland-garros, athletisme, tour-de-france, euro2012, jeux-olympiques-pyeongchang-2018, coupe-du-monde-rugby, formule-1, voile, top-14, ski, handball, sports-mecaniques, sports-de-combat, blog-du-tour-de-france, sport-et-societe, sports-de-glisse, tournoi-des-6-nations
* __Environement__: planete, climat, biodiversite, pollution, energies, cop21
* __Technologie__: pixels, technologies, sciences, cosmos, la-france-connectee, trajectoires-digitales
* __Education__: campus, education, bac-lycee, enseignement-superieur, ecole-primaire-et-secondaire, o21, orientation-scolaire, brevet-college
* __Justice__: police-justice, panama-papers, affaire-penelope-fillon, documents-wikileaks, enquetes, paradise-papers
Topics with fewer than 100 articles were discarded.
We also set aside articles referring to geographic topics, which led to a separate classification model.
After cleaning, the MLSUM corpus was reduced to 293,995 articles. An article body contains 694 tokens on average.
We trained the model on 20% of the cleaned corpus. On average, each class contains ~4K articles.
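For illustration only, the filtering and sampling described above could be reproduced with a sketch like the one below; the file name and the `topic` column name are assumptions about how the cleaned MLSUM dump is stored, not part of the released pipeline.
```python
import pandas as pd

# Hypothetical sketch of the cleaning and sampling steps described above.
df = pd.read_csv("mlsum_fr_with_topics.csv")  # assumed file and layout

# Drop topics with fewer than 100 articles.
counts = df["topic"].value_counts()
df = df[df["topic"].isin(counts[counts >= 100].index)]

# Keep a 20% training sample of the cleaned corpus, stratified by topic.
train_df = df.groupby("topic", group_keys=False).sample(frac=0.2, random_state=42)
print(train_df["topic"].value_counts())  # roughly ~4K articles per class
```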
## Training
We benchmarked several models by training them on different parts of the articles (title, summary, body, and title+summary) and with training samples of various sizes.

The models were trained on the Azure cloud with Tesla V100 GPUs.
## Model
The model shared on HF is the one that takes an article body as input. It was trained on 20% of the cleaned dataset.
## Results

*Rows correspond to predicted labels and columns to the true topics. Percentages are computed per column.*
_We do not guarantee these results over the long term. This model was built as a proof of concept._
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from transformers import TextClassificationPipeline
model_name = 'lincoln/flaubert-mlsum-topic-classification'
loaded_tokenizer = AutoTokenizer.from_pretrained(model_name)
loaded_model = AutoModelForSequenceClassification.from_pretrained(model_name)
nlp = TextClassificationPipeline(model=loaded_model, tokenizer=loaded_tokenizer)
nlp("Le Bayern Munich prend la grenadine.", truncation=True)
```
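To inspect the full probability distribution over topics rather than only the top label, the pipeline can also return every class score; note that the `return_all_scores` flag has been superseded by `top_k` in recent transformers versions.
```python
# Return the score of every topic instead of only the most likely one.
all_scores = nlp("Le Bayern Munich prend la grenadine.", truncation=True, return_all_scores=True)
for pred in sorted(all_scores[0], key=lambda p: p["score"], reverse=True):
    print(pred["label"], round(pred["score"], 3))
```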
## Citation
```bibtex
@article{scialom2020mlsum,
title={MLSUM: The Multilingual Summarization Corpus},
author={Thomas Scialom and Paul-Alexis Dray and Sylvain Lamprier and Benjamin Piwowarski and Jacopo Staiano},
year={2020},
eprint={2004.14900},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
albert-xxlarge-v1 | [
"pytorch",
"tf",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"AlbertForMaskedLM"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7,091 | 2021-04-30T13:58:22Z | ---
language:
- fr
license: mit
datasets:
- MLSUM
pipeline_tag: "summarization"
widget:
- text: « La veille de l’ouverture, je vais faire venir un coach pour les salariés qui reprendront le travail. Cela va me coûter 300 euros, mais après des mois d’oisiveté obligatoire, la reprise n’est pas simple. Certains sont au chômage partiel depuis mars 2020 », raconte Alain Fontaine, propriétaire du restaurant Le Mesturet, dans le quartier de la Bourse, à Paris. Cette date d’ouverture, désormais, il la connaît. Emmanuel Macron a, en effet, donné le feu vert pour un premier accueil des clients en terrasse, mercredi 19 mai. M. Fontaine imagine même faire venir un orchestre ce jour-là pour fêter l’événement. Il lui reste toutefois à construire sa terrasse. Il pensait que les ouvriers passeraient samedi 1er mai pour l’installer, mais, finalement, le rendez-vous a été décalé. Pour l’instant, le tas de bois est entreposé dans la salle de restaurant qui n’a plus accueilli de convives depuis le 29 octobre 2020, quand le couperet de la fermeture administrative est tombé.M. Fontaine, président de l’Association française des maîtres restaurateurs, ne manquera pas de concurrents prêts à profiter de ce premier temps de réouverture des bars et restaurants. Même si le couvre-feu limite le service à 21 heures. D’autant que la Mairie de Paris vient d’annoncer le renouvellement des terrasses éphémères installées en 2020 et leur gratuité jusqu’à la fin de l’été.
tags:
- summarization
- mbart
- bart
---
# Automatic summarization of press articles
This model is based on [`facebook/mbart-large-50`](https://huggingface.co/facebook/mbart-large-50) and was fine-tuned on press articles from the MLSUM dataset. We assumed that article leads (chapeaux) make good reference summaries.
## Training
We tested two model architectures (T5 and BART) with input texts of 512 or 1024 tokens. We ultimately selected the BART model with 512-token inputs.
It was trained for 2 epochs (~700K articles) on a Tesla V100 (32 hours of training).
## Results

We compared our model (`mbart-large-512-full` in the chart) against two references:
* MBERT, which corresponds to the performance of the model trained by the team behind the MLSUM corpus
* Barthez, another model trained on press articles from the OrangeSum dataset
Our model's novelty score (see the MLSUM paper) is not yet on par with these two references, let alone with human writing; nevertheless, the generated summaries are generally of good quality.
## Usage
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from transformers import SummarizationPipeline
model_name = 'lincoln/mbart-mlsum-automatic-summarization'
loaded_tokenizer = AutoTokenizer.from_pretrained(model_name)
loaded_model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
nlp = SummarizationPipeline(model=loaded_model, tokenizer=loaded_tokenizer)
nlp("""
« La veille de l’ouverture, je vais faire venir un coach pour les salariés qui reprendront le travail.
Cela va me coûter 300 euros, mais après des mois d’oisiveté obligatoire, la reprise n’est pas simple.
Certains sont au chômage partiel depuis mars 2020 », raconte Alain Fontaine, propriétaire du restaurant Le Mesturet,
dans le quartier de la Bourse, à Paris. Cette date d’ouverture, désormais, il la connaît. Emmanuel Macron a, en effet,
donné le feu vert pour un premier accueil des clients en terrasse, mercredi 19 mai. M. Fontaine imagine même faire venir un orchestre ce jour-là pour fêter l’événement.
Il lui reste toutefois à construire sa terrasse. Il pensait que les ouvriers passeraient samedi 1er mai pour l’installer, mais, finalement, le rendez-vous a été décalé.
Pour l’instant, le tas de bois est entreposé dans la salle de restaurant qui n’a plus accueilli de convives depuis le 29 octobre 2020,
quand le couperet de la fermeture administrative est tombé.M. Fontaine, président de l’Association française des maîtres restaurateurs,
ne manquera pas de concurrents prêts à profiter de ce premier temps de réouverture des bars et restaurants. Même si le couvre-feu limite le service à 21 heures.
D’autant que la Mairie de Paris vient d’annoncer le renouvellement des terrasses éphémères installées en 2020 et leur gratuité jusqu’à la fin de l’été.
""")
```
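Since the model was fine-tuned with 512-token inputs, it can help to truncate long articles explicitly and to control the generated length; the values below are illustrative assumptions rather than the authors' reference settings.
```python
# Illustrative generation settings (assumed values, not the reference configuration).
summary = nlp(
    article_text,       # any French press article (placeholder variable)
    truncation=True,    # inputs longer than the 512-token training length are cut
    max_length=128,     # assumed upper bound on summary length
    min_length=20,      # assumed lower bound
    num_beams=4,        # assumed beam-search width
)
print(summary[0]["summary_text"])
```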
## Citation
```bibtex
@article{scialom2020mlsum,
title={MLSUM: The Multilingual Summarization Corpus},
author={Thomas Scialom and Paul-Alexis Dray and Sylvain Lamprier and Benjamin Piwowarski and Jacopo Staiano},
year={2020},
eprint={2004.14900},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
bert-base-german-cased | [
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"de",
"transformers",
"exbert",
"license:mit",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 175,983 | 2021-09-17T00:43:15Z | ---
language:
- en
license: apache-2.0
tags:
- summarization
- azureml
- azure
- codecarbon
- bart
datasets:
- samsum
metrics:
- rouge
model-index:
- name: bart-large-samsum
results:
- task:
name: Abstractive Text Summarization
type: abstractive-text-summarization
dataset:
name: "SAMSum Corpus: A Human-annotated Dialogue Dataset for Abstractive Summarization"
type: samsum
metrics:
- name: Validation ROUGE-1
type: rouge-1
value: 55.0234
- name: Validation ROUGE-2
type: rouge-2
value: 29.6005
- name: Validation ROUGE-L
type: rouge-L
value: 44.914
- name: Validation ROUGE-Lsum
type: rouge-Lsum
value: 50.464
- name: Test ROUGE-1
type: rouge-1
value: 53.4345
- name: Test ROUGE-2
type: rouge-2
value: 28.7445
- name: Test ROUGE-L
type: rouge-L
value: 44.1848
- name: Test ROUGE-Lsum
type: rouge-Lsum
value: 49.1874
widget:
- text: |
Henry: Hey, is Nate coming over to watch the movie tonight?
Kevin: Yea, he said he'll be arriving a bit later at around 7 since he gets off of work at 6. Have you taken out the garbage yet?
Henry: Oh I forgot. I'll do that once I'm finished with my assignment for my math class.
Kevin: Yea, you should take it out as soon as possible. And also, Nate is bringing his girlfriend.
Henry: Nice, I'm really looking forward to seeing them again.
---
## `bart-large-samsum`
This model was trained using Microsoft's [`Azure Machine Learning Service`](https://azure.microsoft.com/en-us/services/machine-learning). It was fine-tuned on the [`samsum`](https://huggingface.co/datasets/samsum) corpus starting from the [`facebook/bart-large`](https://huggingface.co/facebook/bart-large) checkpoint.
## Usage (Inference)
```python
from transformers import pipeline
summarizer = pipeline("summarization", model="linydub/bart-large-samsum")
input_text = '''
Henry: Hey, is Nate coming over to watch the movie tonight?
Kevin: Yea, he said he'll be arriving a bit later at around 7 since he gets off of work at 6. Have you taken out the garbage yet?
Henry: Oh I forgot. I'll do that once I'm finished with my assignment for my math class.
Kevin: Yea, you should take it out as soon as possible. And also, Nate is bringing his girlfriend.
Henry: Nice, I'm really looking forward to seeing them again.
'''
summarizer(input_text)
```
## Fine-tune on AzureML
[](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2Flinydub%2Fazureml-greenai-txtsum%2Fmain%2F.cloud%2Ftemplate-hub%2Flinydub%2Farm-bart-large-samsum.json) [](http://armviz.io/#/?load=https://raw.githubusercontent.com/linydub/azureml-greenai-txtsum/main/.cloud/template-hub/linydub/arm-bart-large-samsum.json)
More information about the fine-tuning process (including samples and benchmarks):
**[Preview]** https://github.com/linydub/azureml-greenai-txtsum
## Resource Usage
These results were retrieved from [`Azure Monitor Metrics`](https://docs.microsoft.com/en-us/azure/azure-monitor/essentials/data-platform-metrics). All experiments were run on AzureML low-priority compute clusters.
| Key | Value |
| --- | ----- |
| Region | US West 2 |
| AzureML Compute SKU | STANDARD_ND40RS_V2 |
| Compute SKU GPU Device | 8 x NVIDIA V100 32GB (NVLink) |
| Compute Node Count | 1 |
| Run Duration | 6m 48s |
| Compute Cost (Dedicated/LowPriority) | $2.50 / $0.50 USD |
| Average CPU Utilization | 47.9% |
| Average GPU Utilization | 69.8% |
| Average GPU Memory Usage | 25.71 GB |
| Total GPU Energy Usage | 370.84 kJ |
*Compute cost ($) is estimated from the run duration, number of compute nodes utilized, and SKU's price per hour. Updated SKU pricing can be found [here](https://azure.microsoft.com/en-us/pricing/details/machine-learning).
### Carbon Emissions
These results were obtained using [`CodeCarbon`](https://github.com/mlco2/codecarbon). The carbon emissions are estimated from training runtime only (excl. setup and evaluation runtimes).
| Key | Value |
| --- | ----- |
| timestamp | 2021-09-16T23:54:25 |
| duration | 263.2430217266083 |
| emissions | 0.029715544634717518 |
| energy_consumed | 0.09985062041235725 |
| country_name | USA |
| region | Washington |
| cloud_provider | azure |
| cloud_region | westus2 |
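For reference, CodeCarbon measurements of this kind are usually collected by wrapping the training run in an `EmissionsTracker`; the sketch below shows the general pattern, not the exact instrumentation used for this run.
```python
from codecarbon import EmissionsTracker

# Minimal sketch: track emissions of the training run only (setup/evaluation excluded),
# mirroring the scope described above. `trainer` is assumed to be a Hugging Face Trainer.
tracker = EmissionsTracker(project_name="bart-large-samsum")  # project name is an assumption
tracker.start()
try:
    trainer.train()
finally:
    emissions = tracker.stop()  # estimated emissions in kg CO2eq
    print(f"Estimated emissions: {emissions:.4f} kg CO2eq")
```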
## Hyperparameters
- max_source_length: 512
- max_target_length: 90
- fp16: True
- seed: 1
- per_device_train_batch_size: 16
- per_device_eval_batch_size: 16
- gradient_accumulation_steps: 1
- learning_rate: 5e-5
- num_train_epochs: 3.0
- weight_decay: 0.1
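These hyperparameters map roughly onto a `Seq2SeqTrainingArguments` configuration like the sketch below; the output directory and anything not listed above are assumptions (`max_source_length`/`max_target_length` are handled by the preprocessing script rather than the training arguments).
```python
from transformers import Seq2SeqTrainingArguments

# Hypothetical reconstruction of the run configuration from the values listed above.
args = Seq2SeqTrainingArguments(
    output_dir="./bart-large-samsum",  # assumed
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=1,
    learning_rate=5e-5,
    num_train_epochs=3.0,
    weight_decay=0.1,
    fp16=True,
    seed=1,
    predict_with_generate=True,        # assumed; typical for summarization fine-tuning
)
```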
## Results
| ROUGE | Score |
| ----- | ----- |
| eval_rouge1 | 55.0234 |
| eval_rouge2 | 29.6005 |
| eval_rougeL | 44.914 |
| eval_rougeLsum | 50.464 |
| predict_rouge1 | 53.4345 |
| predict_rouge2 | 28.7445 |
| predict_rougeL | 44.1848 |
| predict_rougeLsum | 49.1874 |
| Metric | Value |
| ------ | ----- |
| epoch | 3.0 |
| eval_gen_len | 30.6027 |
| eval_loss | 1.4327096939086914 |
| eval_runtime | 22.9127 |
| eval_samples | 818 |
| eval_samples_per_second | 35.701 |
| eval_steps_per_second | 0.306 |
| predict_gen_len | 30.4835 |
| predict_loss | 1.4501988887786865 |
| predict_runtime | 26.0269 |
| predict_samples | 819 |
| predict_samples_per_second | 31.467 |
| predict_steps_per_second | 0.269 |
| train_loss | 1.2014821151207233 |
| train_runtime | 263.3678 |
| train_samples | 14732 |
| train_samples_per_second | 167.811 |
| train_steps_per_second | 1.321 |
| total_steps | 348 |
| total_flops | 4.26008990669865e+16 |
|
distilbert-base-cased-distilled-squad | [
"pytorch",
"tf",
"rust",
"safetensors",
"openvino",
"distilbert",
"question-answering",
"en",
"dataset:squad",
"arxiv:1910.01108",
"arxiv:1910.09700",
"transformers",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"has_space"
] | question-answering | {
"architectures": [
"DistilBertForQuestionAnswering"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 257,745 | 2021-12-15T17:06:46Z | # CLIN-X-EN: a pre-trained language model for the English clinical domain
Details on the model, the pre-training corpus and the downstream task performance are given in the paper: "CLIN-X: pre-trained language models and a study on cross-task transfer for concept extraction in the clinical domain" by Lukas Lange, Heike Adel, Jannik Strötgen and Dietrich Klakow.
The paper can be found [here](https://arxiv.org/abs/2112.08754).
In case of questions, please contact the authors as listed on the paper.
Please cite the above paper when reporting, reproducing or extending the results.
@misc{lange-etal-2021-clin-x,
author = {Lukas Lange and
Heike Adel and
Jannik Str{\"{o}}tgen and
Dietrich Klakow},
title = {CLIN-X: pre-trained language models and a study on cross-task transfer for concept extraction in the clinical domain},
year={2021},
eprint={2112.08754},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2112.08754}
}
## Training details
The model is based on the multilingual XLM-R transformer `(xlm-roberta-large)`, which was trained on 100 languages and showed superior performance in many different tasks across languages and can even outperform monolingual models in certain settings (Conneau et al. 2020).
We train the CLIN-X model on clinical PubMed abstracts (850MB) filtered following Haynes et al. (2005). PubMed is used with the courtesy of the U.S. National Library of Medicine.
We initialize CLIN-X using the pre-trained XLM-R weights and train with masked language modeling (MLM) on the English clinical corpus for 3 epochs, which roughly corresponds to 32k steps. This allows researchers and practitioners to address
the English clinical domain with an out-of-the-box tailored model.
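A minimal sketch of this domain-adaptive MLM step is given below; the corpus file, batch size and other unlisted values are assumptions rather than the exact setup used for CLIN-X.
```python
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

# Hypothetical sketch of continued MLM pre-training from XLM-R on clinical text.
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-large")
model = AutoModelForMaskedLM.from_pretrained("xlm-roberta-large")

corpus = load_dataset("text", data_files={"train": "clinical_pubmed_abstracts.txt"})  # assumed file
tokenized = corpus.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"],
)

collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)
args = TrainingArguments(
    output_dir="./clin-x-en-mlm",     # assumed
    num_train_epochs=3,               # 3 epochs as described above (~32k steps)
    per_device_train_batch_size=8,    # assumption
)
Trainer(model=model, args=args, data_collator=collator,
        train_dataset=tokenized["train"]).train()
```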
## Results for English concept extraction
We apply CLIN-X-EN to five different English sequence labeling tasks from i2b2 in a standard sequence labeling architecture similar to Devlin et al. 2019 and compare to BERT and ClinicalBERT. In addition, we perform experiments with an improved architecture `(+ OurArchitecture)` as described in the paper linked above. The code for our model architecture can be found [here](https://github.com/boschresearch/clin_x).
| | i2b2 2006 | i2b2 2010 | i2b2 2012 (Concept) | i2b2 2012 (Time) | i2b2 2014 |
|-------------------------------|-----------|-----------|---------------------|------------------|-----------|
| BERT | 94.80 | 82.25 | 76.51 | 75.28 | 94.86 |
| ClinicalBERT | 94.8 | 87.8 | 78.9 | 76.6 | 93.0 |
| CLIN-X (EN) | 96.25 | 88.10 | 79.58 | 77.70 | 96.73 |
| CLIN-X (EN) + OurArchitecture | **98.49** | **89.23** | **80.62** | **78.50** | **97.60** |
## Purpose of the project
This software is a research prototype, solely developed for and published as part of the publication cited above. It will neither be maintained nor monitored in any way.
## License
The CLIN-X models are open-sourced under the CC-BY 4.0 license.
See the [LICENSE](LICENSE) file for details. |
distilbert-base-german-cased | [
"pytorch",
"safetensors",
"distilbert",
"fill-mask",
"de",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"DistilBertForMaskedLM"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 43,667 | 2021-12-15T17:07:21Z | # Spanish XLM-R (from NLNDE-MEDDOPROF)
This Spanish language model was created for the MEDDOPROF shared task as part of the **NLNDE** team submission and outperformed all other participants in both sequence labeling tasks.
Details on the model, the pre-training corpus and the downstream task performance are given in the paper: "Boosting Transformers for Job Expression Extraction and Classification in a Low-Resource Setting" by Lukas Lange, Heike Adel and Jannik Strötgen.
The paper can be found [here](http://ceur-ws.org/Vol-2943/meddoprof_paper1.pdf).
In case of questions, please contact the authors as listed on the paper.
Please cite the above paper when reporting, reproducing or extending the results.
@inproceedings{lange-etal-2021-meddoprof,
author = {Lukas Lange and
Heike Adel and
Jannik Str{\"{o}}tgen},
title = {Boosting Transformers for Job Expression Extraction and Classification in a Low-Resource Setting},
year={2021},
booktitle= {{Proceedings of The Iberian Languages Evaluation Forum (IberLEF 2021)}},
series = {{CEUR} Workshop Proceedings},
url = {http://ceur-ws.org/Vol-2943/meddoprof_paper1.pdf},
}
## Training details
We use XLM-R (`xlm-roberta-large`, Conneau et al. 2020) as the main component of our models. XLM-R is a pretrained multilingual transformer model for 100 languages, including Spanish. It shows superior performance in different tasks across languages, and can even outperform
monolingual models in certain settings. It was pretrained on a large-scale corpus,
and Spanish documents made up only 2% of this data.
Thus, we explore further pretraining of this model and tune it towards Spanish documents by continuing pretraining on a medium-size Spanish corpus of general-domain documents. For this, we use the [Spanish corpus](https://github.com/josecannete/spanish-corpora) used to train the BETO model.
We use masked language modeling for this pretraining and train for three epochs over the corpus, which roughly corresponds to 685k steps with a batch size of 4.
## Performance
This model was trained in the context of the Meddoprof shared tasks and outperformed all other participants in both sequence labeling tasks. Our results (F1) in comparison with the standard XLM-R and the second-best system of the shared task are given in the Table.
More information on the shared task and other participants is given in this paper [here](http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6393/3813).
The code for our NER models can be found [here](https://github.com/boschresearch/nlnde-meddoprof).
| | Meddoprof Task 1 (NER) | Meddoprof Task 2 (CLASS) |
|---------------------------------|------------------------|--------------------------|
| Second-best System | 80.0 | 76.4 |
| XLM-R (our baseline) | 79.2 | 77.6 |
| Our Spanish XLM-R (best System) | **83.2** | **79.1** |
## Purpose of the project
This software is a research prototype, solely developed for and published as part of the publication cited above. It will neither be maintained nor monitored in any way.
## License
The CLIN-X models are open-sourced under the CC-BY 4.0 license.
See the [LICENSE](LICENSE) file for details. |
distilbert-base-multilingual-cased | [
"pytorch",
"tf",
"onnx",
"safetensors",
"distilbert",
"fill-mask",
"multilingual",
"af",
"sq",
"ar",
"an",
"hy",
"ast",
"az",
"ba",
"eu",
"bar",
"be",
"bn",
"inc",
"bs",
"br",
"bg",
"my",
"ca",
"ceb",
"ce",
"zh",
"cv",
"hr",
"cs",
"da",
"nl",
"en",
"et",
"fi",
"fr",
"gl",
"ka",
"de",
"el",
"gu",
"ht",
"he",
"hi",
"hu",
"is",
"io",
"id",
"ga",
"it",
"ja",
"jv",
"kn",
"kk",
"ky",
"ko",
"la",
"lv",
"lt",
"roa",
"nds",
"lm",
"mk",
"mg",
"ms",
"ml",
"mr",
"mn",
"min",
"ne",
"new",
"nb",
"nn",
"oc",
"fa",
"pms",
"pl",
"pt",
"pa",
"ro",
"ru",
"sco",
"sr",
"scn",
"sk",
"sl",
"aze",
"es",
"su",
"sw",
"sv",
"tl",
"tg",
"th",
"ta",
"tt",
"te",
"tr",
"uk",
"ud",
"uz",
"vi",
"vo",
"war",
"cy",
"fry",
"pnb",
"yo",
"dataset:wikipedia",
"arxiv:1910.01108",
"arxiv:1910.09700",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"DistilBertForMaskedLM"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8,339,633 | 2022-02-23T09:18:00Z | ---
license: mit
---
## long-covid-classification
We fine-tuned bert-base-cased using a [manually curated dataset](https://huggingface.co/llangnickel/long-covid-classification-data) to train a Sequence Classification model able to distinguish between long COVID and non-long COVID-related documents.
## Hyperparameters used
|Parameter|Value|
|---|---|
|Learning rate|3e-5|
|Batch size|16|
|Number of epochs|4|
|Sequence Length|512|
## Metrics
|Precision [%]|Recall [%]|F1-score [%]|
|---|---|---|
|91.18|91.18|91.18|
## How to load the model
```
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("llangnickel/long-covid-classification", use_auth_token=True)
label_dict = {0: "nonLongCOVID", 1: "longCOVID"}
model = AutoModelForSequenceClassification.from_pretrained("llangnickel/long-covid-classification", use_auth_token=True, num_labels=len(label_dict))
```
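Once loaded, a prediction can be mapped back to its label with a short snippet like the one below; the abstract text is purely illustrative.
```
import torch

# Classify a document and map the predicted class id to its label.
text = "Persistent fatigue and dyspnea several months after SARS-CoV-2 infection ..."  # illustrative input
inputs = tokenizer(text, truncation=True, max_length=512, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_id = logits.argmax(dim=-1).item()
print(label_dict[predicted_id])  # "longCOVID" or "nonLongCOVID"
```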
## Citation
@article{10.1093/database/baac048,
author = {Langnickel, Lisa and Darms, Johannes and Heldt, Katharina and Ducks, Denise and Fluck, Juliane},
title = "{Continuous development of the semantic search engine preVIEW: from COVID-19 to long COVID}",
journal = {Database},
volume = {2022},
year = {2022},
month = {07},
issn = {1758-0463},
doi = {10.1093/database/baac048},
url = {https://doi.org/10.1093/database/baac048},
note = {baac048},
eprint = {https://academic.oup.com/database/article-pdf/doi/10.1093/database/baac048/44371817/baac048.pdf},
} |
ASCCCCCCCC/distilbert-base-uncased-finetuned-clinc | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0"
] | text-classification | {
"architectures": [
"DistilBertForSequenceClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 35 | null | ---
language:
- en
tags:
- argumentation
license: apache-2.0
metrics:
- perplexity
---
# Generate the conclusion of an argument
This model is a version of [`gpt-neo-2.7B`](https://huggingface.co/EleutherAI/gpt-neo-2.7B), where some parameters (only the bias parameters, not weights) have been finetuned on the task of generating the conclusion of an argument given its premises. It was trained as part of a University of Melbourne [research project](https://github.com/Hunt-Laboratory/language-model-optimization) evaluating how large language models can best be optimized to perform argumentative reasoning tasks.
Code used for optimization and evaluation can be found in the project [GitHub repository](https://github.com/Hunt-Laboratory/language-model-optimization). A paper reporting on model evaluation is currently under review.
# Prompt Template
```
Consider the facts:
* [premise 1]
* [premise 2]
...
* [premise n]
We must conclude that: [generated conclusion]
```
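As a hedged illustration of the template above, the snippet below builds the prompt from a list of premises and generates a conclusion; the model identifier is a placeholder (the card does not state it) and the generation settings are assumptions.
```python
from transformers import pipeline

# Placeholder id: substitute the identifier of this fine-tuned GPT-Neo checkpoint.
generator = pipeline("text-generation", model="<this-finetuned-gpt-neo-checkpoint>")

premises = [
    "All mammals are warm-blooded.",
    "Whales are mammals.",
]
prompt = "Consider the facts:\n" + "\n".join(f"* {p}" for p in premises) + "\nWe must conclude that:"

out = generator(prompt, max_new_tokens=30, do_sample=False)  # assumed generation settings
print(out[0]["generated_text"][len(prompt):].strip())
```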
# Dataset
The parameters were finetuned using argument maps scraped from the crowdsourced argument-mapping platform [Kialo](https://kialo.com/).
# Limitations and Biases
The model is a finetuned version of [`gpt-neo-2.7B`](https://huggingface.co/EleutherAI/gpt-neo-2.7B), so likely has many of the same limitations and biases. Additionally, note that while the goal of the model is to produce coherent and valid reasoning, many generated model outputs will be illogical or nonsensical and should not be relied upon.
# Acknowledgements
This research was funded by the Australian Department of Defence and the Office of National Intelligence under the AI for Decision Making Program, delivered in partnership with the Defence Science Institute in Victoria, Australia. |