pipeline_tag | library_name | text | metadata | id | last_modified | tags | sha | created_at
---|---|---|---|---|---|---|---|---|
null | null | {} | PolyakovMaxim/GPTCHAT | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text-generation | transformers | This model generates the time-shift texts of the Norbit Company and also completes arbitrary phrases in the same way as the base GPT model. | {} | PolyakovMaxim/ModelGptTS | null | [
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
null | null | {} | PolyakovMaxim/T | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Pooya448/distilgpt2-finetuned-wikitext2 | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Pornphat/distilbert-base-uncased-finetuned-squad | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Prabhudayala/opus-mt-en-ro-finetuned-en-to-ro | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab-1
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3857
- Wer: 0.3874
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the configuration sketch after this list):
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
- mixed_precision_training: Native AMP
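For illustration only, the hyperparameters above correspond roughly to the following Hugging Face `TrainingArguments`; this is a hedged sketch (the authors' actual training script is not part of this card), and the `output_dir` value is a placeholder.
```python
from transformers import TrainingArguments

# Sketch of the hyperparameters listed above; Adam betas/epsilon are the library defaults.
training_args = TrainingArguments(
    output_dir="wav2vec2-base-timit-demo-colab-1",  # placeholder, not the authors' path
    learning_rate=1e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=1000,
    num_train_epochs=10,
    fp16=True,  # "Native AMP" mixed-precision training
)
```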
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.4285 | 2.01 | 500 | 1.4732 | 0.9905 |
| 0.7457 | 4.02 | 1000 | 0.5278 | 0.4960 |
| 0.3463 | 6.02 | 1500 | 0.4245 | 0.4155 |
| 0.2034 | 8.03 | 2000 | 0.3857 | 0.3874 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "wav2vec2-base-timit-demo-colab-1", "results": []}]} | Prasadi/wav2vec2-base-timit-demo-colab-1 | null | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
null | null | {} | Prasadi/wav2vec2-base-timit-demo-colab | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-marc-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9575
- Mae: 0.5488
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1253 | 1.0 | 235 | 0.9960 | 0.5366 |
| 0.9708 | 2.0 | 470 | 0.9575 | 0.5488 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
| {"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["amazon_reviews_multi"], "model-index": [{"name": "xlm-roberta-base-finetuned-marc-en", "results": []}]} | Pratibha/xlm-roberta-base-finetuned-marc-en | null | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"dataset:amazon_reviews_multi",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
null | null | {} | Pratik/wav2vec2-base-gujrati-openslr | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
fill-mask | transformers | {} | Preeyank/roberta-base-education-domain | null | [
"transformers",
"pytorch",
"jax",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
question-answering | transformers |
# ALBERT-base for QA
## Overview
**Language model:** albert-base </br>
**Language:** English </br>
**Downstream-task:** Extractive QA </br>
**Training data:** SQuAD 2.0 </br>
**Eval data:** SQuAD 2.0 </br>
**Code:** <TBD> </br>
## Env Information
`transformers` version: 4.9.1 </br>
Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic </br>
Python version: 3.7.11 </br>
PyTorch version (GPU?): 1.9.0+cu102 (False)</br>
Tensorflow version (GPU?): 2.5.0 (False)</br>
## Hyperparameters
```
max_seq_len=386
doc_stride=128
n_best_size=20
max_answer_length=30
min_null_score=7.0
batch_size=32
n_epochs=3
base_LM_model = "albert-base-v2"
learning_rate=3e-5
adam_epsilon=1e-5
adam_beta1=0.95
adam_beta2=0.999
warmup_steps=300
weight_decay=0.01
optimizer=AdamW
lr_scheduler="polynomial"
```
## Performance
```
"exact": 78.253
"f1": 81.523
"total": 11873
"HasAns_exact": 73.616
"HasAns_f1": 80.165
"HasAns_total": 5928
"NoAns_exact": 82.876
"NoAns_f1": 82.876
"NoAns_total": 5945
```
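The numbers above follow the official SQuAD 2.0 metric. As a hedged sketch (not the evaluation script used here), they can be reproduced with the `evaluate` library once predictions are collected in the standard SQuAD v2 format:
```python
import evaluate

# Sketch only: the prediction/reference entries below are dummy placeholders.
squad_v2 = evaluate.load("squad_v2")
predictions = [{"id": "q1", "prediction_text": "San Francisco", "no_answer_probability": 0.0}]
references = [{"id": "q1", "answers": {"text": ["San Francisco"], "answer_start": [25]}}]
print(squad_v2.compute(predictions=predictions, references=references))
# The result dict contains keys such as "exact", "f1", "HasAns_exact" and "NoAns_exact".
```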
## Usage
### In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "PremalMatalia/albert-base-best-squad2"
# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
'question': 'Which name is also used to describe the Amazon rainforest in English?',
'context': 'The Amazon rainforest (Portuguese: Floresta Amazônica or Amazônia; Spanish: Selva Amazónica, Amazonía or usually Amazonia; French: Forêt amazonienne; Dutch: Amazoneregenwoud), also known in English as Amazonia or the Amazon Jungle, is a moist broadleaf forest that covers most of the Amazon basin of South America. This basin encompasses 7,000,000 square kilometres (2,700,000 sq mi), of which 5,500,000 square kilometres (2,100,000 sq mi) are covered by the rainforest. This region includes territory belonging to nine nations. The majority of the forest is contained within Brazil, with 60% of the rainforest, followed by Peru with 13%, Colombia with 10%, and with minor amounts in Venezuela, Ecuador, Bolivia, Guyana, Suriname and French Guiana. States or departments in four nations contain "Amazonas" in their names. The Amazon represents over half of the planet\'s remaining rainforests, and comprises the largest and most biodiverse tract of tropical rainforest in the world, with an estimated 390 billion individual trees divided into 16,000 species.'
}
res = nlp(QA_input)
print(res)
# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Authors
Premal Matalia | {"datasets": ["squad_v2"]} | PremalMatalia/albert-base-best-squad2 | null | [
"transformers",
"pytorch",
"albert",
"question-answering",
"dataset:squad_v2",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
question-answering | transformers |
# ELECTRA-base for QA
## Overview
**Language model:** electra-base </br>
**Language:** English </br>
**Downstream-task:** Extractive QA </br>
**Training data:** SQuAD 2.0 </br>
**Eval data:** SQuAD 2.0 </br>
**Code:** <TBD> </br>
## Env Information
`transformers` version: 4.9.1 </br>
Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic </br>
Python version: 3.7.11 </br>
PyTorch version (GPU?): 1.9.0+cu102 (False)</br>
Tensorflow version (GPU?): 2.5.0 (False)</br>
## Hyperparameters
```
max_seq_len=386
doc_stride=128
n_best_size=20
max_answer_length=30
min_null_score=7.0
batch_size=8
n_epochs=2
base_LM_model = "google/electra-base-discriminator"
learning_rate=1.5e-5
adam_epsilon=1e-5
adam_beta1=0.95
adam_beta2=0.999
warmup_steps=100
weight_decay=0.01
optimizer=AdamW
lr_scheduler="polynomial"
```
##### A special threshold value CLS_threshold=-3 is used to more accurately identify unanswerable questions [logic will be available in the GitHub repo, TBD].
## Performance
```
"exact": 79.331256
"f1": 83.232347
"total": 11873
"HasAns_exact": 76.501350
"HasAns_f1": 84.314719
"HasAns_total": 5928
"NoAns_exact": 82.153070
"NoAns_f1": 82.153070
"NoAns_total": 5945
```
## Usage
### In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "PremalMatalia/electra-base-best-squad2"
# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
'question': 'Which name is also used to describe the Amazon rainforest in English?',
'context': 'The Amazon rainforest (Portuguese: Floresta Amazônica or Amazônia; Spanish: Selva Amazónica, Amazonía or usually Amazonia; French: Forêt amazonienne; Dutch: Amazoneregenwoud), also known in English as Amazonia or the Amazon Jungle, is a moist broadleaf forest that covers most of the Amazon basin of South America. This basin encompasses 7,000,000 square kilometres (2,700,000 sq mi), of which 5,500,000 square kilometres (2,100,000 sq mi) are covered by the rainforest. This region includes territory belonging to nine nations. The majority of the forest is contained within Brazil, with 60% of the rainforest, followed by Peru with 13%, Colombia with 10%, and with minor amounts in Venezuela, Ecuador, Bolivia, Guyana, Suriname and French Guiana. States or departments in four nations contain "Amazonas" in their names. The Amazon represents over half of the planet\'s remaining rainforests, and comprises the largest and most biodiverse tract of tropical rainforest in the world, with an estimated 390 billion individual trees divided into 16,000 species.'
}
res = nlp(QA_input)
print(res)
# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Authors
Premal Matalia | {"datasets": ["squad_v2"]} | PremalMatalia/electra-base-best-squad2 | null | [
"transformers",
"pytorch",
"electra",
"question-answering",
"dataset:squad_v2",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
question-answering | transformers |
# RoBERTa-base for QA
## Overview
**Language model:** 'roberta-base' </br>
**Language:** English </br>
**Downstream-task:** Extractive QA </br>
**Training data:** SQuAD 2.0 </br>
**Eval data:** SQuAD 2.0 </br>
**Code:** <TBD> </br>
## Env Information
`transformers` version: 4.9.1 </br>
Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic </br>
Python version: 3.7.11 </br>
PyTorch version (GPU?): 1.9.0+cu102 (False)</br>
Tensorflow version (GPU?): 2.5.0 (False)</br>
## Hyperparameters
```
max_seq_len=386
doc_stride=128
n_best_size=20
max_answer_length=30
min_null_score=7.0
batch_size=8
n_epochs=6
base_LM_model = "roberta-base"
learning_rate=1.5e-5
adam_epsilon=1e-5
adam_beta1=0.95
adam_beta2=0.999
warmup_steps=100
weight_decay=0.01
optimizer=AdamW
lr_scheduler="polynomial"
```
##### A special threshold value CLS_threshold=-3 is used to more accurately identify unanswerable questions [logic will be available in the GitHub repo, TBD].
## Performance
```
"exact": 81.192622
"f1": 83.95408
"total": 11873
"HasAns_exact": 74.190283
"HasAns_f1": 79.721119
"HasAns_total": 5928
"NoAns_exact": 88.174937
"NoAns_f1": 88.174937
"NoAns_total": 5945
```
## Usage
### In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "PremalMatalia/roberta-base-best-squad2"
# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
'question': 'Which name is also used to describe the Amazon rainforest in English?',
'context': 'The Amazon rainforest (Portuguese: Floresta Amazônica or Amazônia; Spanish: Selva Amazónica, Amazonía or usually Amazonia; French: Forêt amazonienne; Dutch: Amazoneregenwoud), also known in English as Amazonia or the Amazon Jungle, is a moist broadleaf forest that covers most of the Amazon basin of South America. This basin encompasses 7,000,000 square kilometres (2,700,000 sq mi), of which 5,500,000 square kilometres (2,100,000 sq mi) are covered by the rainforest. This region includes territory belonging to nine nations. The majority of the forest is contained within Brazil, with 60% of the rainforest, followed by Peru with 13%, Colombia with 10%, and with minor amounts in Venezuela, Ecuador, Bolivia, Guyana, Suriname and French Guiana. States or departments in four nations contain "Amazonas" in their names. The Amazon represents over half of the planet\'s remaining rainforests, and comprises the largest and most biodiverse tract of tropical rainforest in the world, with an estimated 390 billion individual trees divided into 16,000 species.'
}
res = nlp(QA_input)
print(res)
# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Authors
Premal Matalia | {"datasets": ["squad_v2"]} | PremalMatalia/roberta-base-best-squad2 | null | [
"transformers",
"pytorch",
"roberta",
"question-answering",
"dataset:squad_v2",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
null | null | {} | Prim9000/trial_tacotron2 | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | https://github.com/Prim9000/Thai_TTS | {} | Prim9000/try | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
question-answering | transformers |
# BART-Squad2
## Model description
BART for extractive (span-based) question answering, trained on Squad 2.0.
F1 score of 87.4.
## Intended uses & limitations
Unfortunately, the Huggingface auto-inference API won't run this model, so if you're attempting to try it through the input box above and it complains, don't be discouraged!
#### How to use
Here's a quick way to get question answering running locally:
```python
from transformers import AutoTokenizer, AutoModelForQuestionAnswering
tokenizer = AutoTokenizer.from_pretrained("Primer/bart-squad2")
model = AutoModelForQuestionAnswering.from_pretrained("Primer/bart-squad2")
model.to('cuda'); model.eval()
def answer(question, text):
    seq = '<s>' + question + ' </s> </s> ' + text + ' </s>'
    tokens = tokenizer.encode_plus(seq, return_tensors='pt', padding='max_length', max_length=1024)
    input_ids = tokens['input_ids'].to('cuda')
    attention_mask = tokens['attention_mask'].to('cuda')
    start, end, _ = model(input_ids, attention_mask=attention_mask)
    start_idx = int(start.argmax().int())
    end_idx = int(end.argmax().int())
    print(tokenizer.decode(input_ids[0, start_idx:end_idx]).strip())
    # ^^ it will be an empty string if the model decided "unanswerable"
>>> question = "Where does Tom live?"
>>> context = "Tom is an engineer in San Francisco."
>>> answer(question, context)
San Francisco
```
(Just drop the `.to('cuda')` stuff if running on CPU).
#### Limitations and bias
Unknown, no further evaluation has been performed. In a technical sense one big limitation is that it's 1.6G 😬
## Training procedure
`run_squad.py` with:
|param|value|
|---|---|
|batch size|8|
|max_seq_length|1024|
|learning rate|1e-5|
|epochs|2|
Modified to freeze shared parameters and encoder embeddings.
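As an illustration of that note, freezing the shared parameters and encoder embeddings of a BART model could look like the sketch below; this is an assumption about what the modification might look like, not the authors' actual patch to `run_squad.py`.
```python
from transformers import BartForQuestionAnswering

# Hedged sketch: disable gradients for the shared token embeddings and the
# encoder embedding layers before fine-tuning.
model = BartForQuestionAnswering.from_pretrained("facebook/bart-large")  # base checkpoint assumed
for module in (model.model.shared,
               model.model.encoder.embed_tokens,
               model.model.encoder.embed_positions):
    for param in module.parameters():
        param.requires_grad = False
```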
| {"language": "en"} | primer-ai/bart-squad2 | null | [
"transformers",
"pytorch",
"bart",
"question-answering",
"en",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
null | null | {} | Priscila/latentbert | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Priscila/teste | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [hf-test/xls-r-dummy](https://huggingface.co/hf-test/xls-r-dummy) on the COMMON_VOICE - HI dataset.
It achieves the following results on the evaluation set:
- Loss: 248.1278
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 2.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
| {"language": ["hi"], "tags": ["automatic-speech-recognition", "common_voice", "generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "", "results": []}]} | Priyajay/xls-r-ab-test | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"common_voice",
"generated_from_trainer",
"hi",
"dataset:common_voice",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the COMMON_VOICE - HI dataset.
It achieves the following results on the evaluation set:
- Loss: 26.7866
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 2.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
| {"language": ["hi"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "common_voice", "generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "", "results": []}]} | Priyajay/xls-r-kn-test | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"common_voice",
"generated_from_trainer",
"hi",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
null | null | {} | Pro/Ddddd | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-bne-finetuned-amazon_reviews_multi
This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3011
- Accuracy: 0.9185
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2427 | 1.0 | 125 | 0.2109 | 0.919 |
| 0.0986 | 2.0 | 250 | 0.3011 | 0.9185 |
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
| {"license": "cc-by-4.0", "tags": ["generated_from_trainer"], "datasets": ["amazon_reviews_multi"], "metrics": ["accuracy"], "model_index": [{"name": "roberta-base-bne-finetuned-amazon_reviews_multi", "results": [{"task": {"name": "Text Classification", "type": "text-classification"}, "dataset": {"name": "amazon_reviews_multi", "type": "amazon_reviews_multi", "args": "es"}, "metric": {"name": "Accuracy", "type": "accuracy", "value": 0.9185}}]}]} | Proggleb/roberta-base-bne-finetuned-amazon_reviews_multi | null | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:amazon_reviews_multi",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
null | null |
# ***LegalNLP*** - Natural Language Processing Methods for the Brazilian Legal Language ⚖️
### The library of Natural Language Processing for Brazilian legal language, *LegalNLP*, was born in a partnership between Brazilian researchers and the legal tech [Tikal Tech](https://www.tikal.tech) based in São Paulo, Brazil. Besides containing pre-trained language models for the Brazilian legal language, ***LegalNLP*** provides functions that can facilitate the manipulation of legal texts in Portuguese and demonstration/tutorials to help people in their own work.
You can access our paper by clicking [**here**](https://arxiv.org/abs/2110.15709).
If you use our library in your academic work, please cite us in the following way
    @article{polo2021legalnlp,
      title={LegalNLP--Natural Language Processing methods for the Brazilian Legal Language},
      author={Polo, Felipe Maia and Mendon{\c{c}}a, Gabriel Caiaffa Floriano and Parreira, Kau{\^e} Capellato J and Gianvechio, Lucka and Cordeiro, Peterson and Ferreira, Jonathan Batista and de Lima, Leticia Maria Paz and Maia, Ant{\^o}nio Carlos do Amaral and Vicente, Renato},
      journal={arXiv preprint arXiv:2110.15709},
      year={2021}
    }
--------------
## Summary
0. [Accessing the Language Models](#0)
1. [ Introduction / Installing package](#1)
2. [ Language Models (Details / How to use)](#2)
1. [ Word2Vec/Doc2Vec ](#2.1)
3. [ Demonstrations / Tutorials](#3)
4. [ References](#4)
--------------
<a name="0"></a>
## 0\. Accessing the Language Models
All our models can be found [here](https://drive.google.com/drive/folders/1tCccOXPLSEAEUQtcWXvED3YaNJi3p7la?usp=sharing).
Please contact *[email protected]* if you have any problem accessing the language models.
--------------
<a name="1"></a>
## 1\. Introduction / Installing package
*LegalNLP* is promising given the scarcity of Natural Language Processing resources focused on the Brazilian legal language. It is worth mentioning that our library was made for Python, one of the most well-known programming languages for machine learning.
You first need to install the `huggingface_hub` library by running the following command in a terminal:
```sh
$ pip install huggingface_hub
```
Import `hf_hub_download`:
```python
from huggingface_hub import hf_hub_download
```
Then you can download our Word2Vec(SG)/Doc2Vec(DBOW) and Word2Vec(CBOW)/Doc2Vec(DM) models with the following commands:
```python
w2v_sg_d2v_dbow = hf_hub_download(repo_id = "Projeto/LegalNLP", filename = "w2v_d2v_dbow_size_100_window_15_epochs_20")
w2v_cbow_d2v_dm = hf_hub_download(repo_id = "Projeto/LegalNLP", filename = "w2v_d2v_dm_size_100_window_15_epochs_20")
```
--------------
<a name="2"></a>
## 2\. Language Models
<a name="2.1"></a>
### 2.1\. Word2Vec/Doc2Vec
Our first models for generating vector representations of tokens and texts (embeddings) are variations of the Word2Vec [1, 2] and Doc2Vec [3] methods. In short, the Word2Vec methods generate embeddings for tokens that somehow capture the meaning of the various textual elements, based on the contexts in which these elements appear. Doc2Vec methods are extensions/modifications of Word2Vec for generating whole-text representations.
Remember to at least make all letters lowercase. Please check our paper or [Gensim page](https://radimrehurek.com/gensim_3.8.3/models/doc2vec.html) for more details. Preferably use Gensim version 3.8.3.
Below we have a summary table with some important information about the trained models:
| Filenames | Doc2Vec | Word2Vec | Size | Windows
|:-------------------:|:--------------:|:--------------:|:--------------:|:--------------:|
| ```w2v_d2v_dm*``` | Distributed Memory (DM) | Continuous Bag-of-Words (CBOW) | 100, 200, 300 | 15
| ```w2v_d2v_dbow*``` | Distributed Bag-of-Words (DBOW) | Skip-Gram (SG) | 100, 200, 300 | 15
Here we make available both models with size 100 and window 15.
#### Using *Word2Vec*
Installing Gensim
```python
!pip install gensim=='3.8.3'
```
Loading W2V:
```python
from gensim.models import KeyedVectors
#Loading a W2V model
w2v=KeyedVectors.load(w2v_cbow_d2v_dm)
w2v=w2v.wv
```
Viewing the first 10 entries of 'juiz' vector
```python
w2v['juiz'][:10]
```
    array([ 6.570131 , -1.262787 , 5.156106 , -8.943866 , -5.884408 ,
           -7.717058 , 1.8819941 , -8.02803 , -0.66901577, 6.7223144 ],
          dtype=float32)
Viewing closest tokens to 'juiz'
```python
w2v.most_similar('juiz')
```
    [('juíza', 0.8210258483886719),
     ('juiza', 0.7306275367736816),
     ('juíz', 0.691645085811615),
     ('juízo', 0.6605231165885925),
     ('magistrado', 0.6213295459747314),
     ('mmª_juíza', 0.5510469675064087),
     ('juizo', 0.5494943261146545),
     ('desembargador', 0.5313084721565247),
     ('mmjuiz', 0.5277603268623352),
     ('fabíola_melo_feijão_juíza', 0.5043971538543701)]
#### Using *Doc2Vec*
Installing Gensim
```python
!pip install gensim=='3.8.3'
```
Loading D2V
```python
from gensim.models import Doc2Vec
#Loading a D2V model
d2v=Doc2Vec.load(w2v_cbow_d2v_dm)
```
Inferring vector for a text
```python
txt='direito do consumidor origem : bangu regional xxix juizado especial civel ação : [processo] - - recte : fundo de investimento em direitos creditórios'
tokens=txt.split()
txt_vec=d2v.infer_vector(tokens, epochs=20)
txt_vec[:10]
```
    array([ 0.02626514, -0.3876521 , -0.24873355, -0.0318402 , 0.3343679 ,
           -0.21307918, 0.07193747, 0.02030687, 0.407305 , 0.20065512],
          dtype=float32)
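A natural follow-up (not part of the original tutorial) is to compare two documents through the cosine similarity of their inferred vectors; a minimal sketch, assuming the `d2v` model loaded above and two made-up example texts:
```python
import numpy as np

vec_a = d2v.infer_vector('direito do consumidor juizado especial civel'.split(), epochs=20)
vec_b = d2v.infer_vector('fundo de investimento em direitos creditórios'.split(), epochs=20)

# Cosine similarity between the two inferred document vectors
cosine = float(np.dot(vec_a, vec_b) / (np.linalg.norm(vec_a) * np.linalg.norm(vec_b)))
print(cosine)
```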
--------------
<a name="3"></a>
## 3\. Demonstrations
For a better understanding of the application of these models, below are the links to notebooks where we apply them to a legal dataset using various classification models such as Logistic Regression and CatBoost:
- **BERT notebook**: [Open in Colab](https://colab.research.google.com/github/felipemaiapolo/legalnlp/blob/main/demo/BERT/BERT_TUTORIAL.ipynb)
- **Word2Vec notebook**: [Open in Colab](https://colab.research.google.com/github/felipemaiapolo/legalnlp/blob/main/demo/Word2Vec/Word2Vec_TUTORIAL.ipynb)
- **Doc2Vec notebook**: [Open in Colab](https://colab.research.google.com/github/felipemaiapolo/legalnlp/blob/main/demo/Doc2Vec/Doc2Vec_TUTORIAL.ipynb)
--------------
<a name="4"></a>
## 4\. References
[1] Mikolov, T., Sutskever, I., Chen, K., Corrado, G. S., and Dean, J. (2013b).
Distributed representations of words and phrases and their compositionality.
In Advances in neural information processing systems, pages 3111–3119.
[2] Mikolov, T., Chen, K., Corrado, G., and Dean, J. (2013a). Efficient estimation of
word representations in vector space. arXiv preprint arXiv:1301.3781.
[3] Le, Q. and Mikolov, T. (2014). Distributed representations of sentences and
documents. In International conference on machine learning, pages 1188–1196.
PMLR.
[4] Bojanowski, P., Grave, E., Joulin, A., and Mikolov, T. (2017). Enriching
word vectors with subword information. Transactions of the Association for
Computational Linguistics, 5:135–146.
[5] Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. (2018). Bert: Pre-training
of deep bidirectional transformers for language understanding. arXiv preprint
arXiv:1810.04805.
[6] Souza, F., Nogueira, R., and Lotufo, R. (2020). BERTimbau: pretrained BERT
models for Brazilian Portuguese. In 9th Brazilian Conference on Intelligent
Systems, BRACIS, Rio Grande do Sul, Brazil, October 20-23
| {"language": "pt-br", "license": "mit", "tags": ["LegalNLP", "NLP", "legal field", "python", "word2vec", "doc2vec"]} | Projeto/LegalNLP | null | [
"LegalNLP",
"NLP",
"legal field",
"python",
"word2vec",
"doc2vec",
"arxiv:2110.15709",
"license:mit",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
# Prompsit/paraphrase-bert-en
This model allows you to evaluate whether one phrase is a paraphrase of another.
We have fine-tuned this model from the pretrained "bert-base-uncased".
The model was built under project TSI-100905-2019-4, co-financed by the Ministry of Economic Affairs and Digital Transformation of the Government of Spain.
# How to use it
The model answers the following question: Is "phrase B" a paraphrase of "phrase A"?
Please note that we're considering phrases instead of sentences. Therefore, we must take into account that the model doesn't expect to find punctuation marks or long pieces of text.
Resulting probabilities correspond to classes:
* 0: Not a paraphrase
* 1: It's a paraphrase
So, considering the phrase "may be addressed" and a candidate paraphrase like "could be included", you can use the model like this:
```
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("Prompsit/paraphrase-bert-en")
model = AutoModelForSequenceClassification.from_pretrained("Prompsit/paraphrase-bert-en")
input = tokenizer('may be addressed','could be included',return_tensors='pt')
logits = model(**input).logits
soft = torch.nn.Softmax(dim=1)
print(soft(logits))
```
Code output is:
```
tensor([[0.1592, 0.8408]], grad_fn=<SoftmaxBackward>)
```
As the probability of 1 (=It's a paraphrase) is 0.84 and the probability of 0 (=It is not a paraphrase) is 0.15, we can conclude, for our previous example, that "could be included" is a paraphrase of "may be addressed".
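For convenience, the snippet above can be wrapped in a small helper. This is a hedged sketch that reuses the `torch`, `tokenizer` and `model` objects already loaded; it is not part of the original card.
```
def paraphrase_probability(phrase_a, phrase_b):
    # Probability that phrase_b is a paraphrase of phrase_a (class 1)
    encoded = tokenizer(phrase_a, phrase_b, return_tensors='pt')
    probabilities = torch.nn.Softmax(dim=1)(model(**encoded).logits)
    return probabilities[0, 1].item()

print(paraphrase_probability('may be addressed', 'could be included'))  # ~0.84
```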
# Evaluation results
We used a test dataset of 16,500 human-tagged phrase pairs.
Metrics obtained are:
```
metrics={
'test_loss': 0.5660144090652466,
'test_accuracy': 0.8170742794799527,
'test_precision': 0.7043977055449331,
'test_recall': 0.5978578383641675,
'test_f1': 0.6467696629213483,
'test_matthews_correlation': 0.5276716223607356,
'test_runtime': 19.3345,
'test_samples_per_second': 568.88,
'test_steps_per_second': 17.792
}
``` | {"language": "en", "tags": ["transformers"], "pipeline_tag": "text-classification", "inference": false} | Prompsit/paraphrase-bert-en | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"en",
"autotrain_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
# Prompsit/paraphrase-bert-pt
This model allows you to evaluate whether one phrase is a paraphrase of another.
We have fine-tuned this model from the pretrained "neuralmind/bert-base-portuguese-cased".
The model was built under project TSI-100905-2019-4, co-financed by the Ministry of Economic Affairs and Digital Transformation of the Government of Spain.
# How to use it
The model answers the following question: Is "phrase B" a paraphrase of "phrase A"?
Please note that we're considering phrases instead of sentences. Therefore, we must take into account that the model doesn't expect to find punctuation marks or long pieces of text.
Resulting probabilities correspond to classes:
* 0: Not a paraphrase
* 1: It's a paraphrase
So, considering the phrase "logo após o homicídio" and a candidate paraphrase like "pouco depois do assassinato", you can use the model like this:
```
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("Prompsit/paraphrase-bert-pt")
model = AutoModelForSequenceClassification.from_pretrained("Prompsit/paraphrase-bert-pt")
input = tokenizer('logo após o homicídio','pouco depois do assassinato',return_tensors='pt')
logits = model(**input).logits
soft = torch.nn.Softmax(dim=1)
print(soft(logits))
```
Code output is:
```
tensor([[0.2137, 0.7863]], grad_fn=<SoftmaxBackward>)
```
As the probability of 1 (=It's a paraphrase) is 0.7863 and the probability of 0 (=It is not a paraphrase) is 0.2137, we can conclude, for our previous example, that "pouco depois do assassinato" is a paraphrase of "logo após o homicídio".
# Evaluation results
We used a test dataset of 16,500 human-tagged phrase pairs.
Metrics obtained are:
```
metrics={
'test_loss': 0.6074697375297546,
'test_accuracy': 0.7809,
'test_precision': 0.7157638466220329,
'test_recall': 0.40551724137931033,
'test_f1': 0.5177195685670262,
'test_matthews_correlation': 0.41603913834665324,
'test_runtime': 16.4585,
'test_samples_per_second': 607.587,
'test_steps_per_second': 19.017
}
``` | {"language": "pt", "tags": ["transformers"], "pipeline_tag": "text-classification", "inference": false} | Prompsit/paraphrase-bert-pt | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"pt",
"autotrain_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
# Prompsit/paraphrase-roberta-es
This model allows you to evaluate whether one phrase is a paraphrase of another.
We have fine-tuned this model from the pretrained "PlanTL-GOB-ES/roberta-base-bne".
The model was built under project TSI-100905-2019-4, co-financed by the Ministry of Economic Affairs and Digital Transformation of the Government of Spain.
# How to use it
The model answers the following question: Is "phrase B" a paraphrase of "phrase A"?
Please note that we're considering phrases instead of sentences. Therefore, we must take into account that the model doesn't expect to find punctuation marks or long pieces of text.
Resulting probabilities correspond to classes:
* 0: Not a paraphrase
* 1: It's a paraphrase
So, considering the phrase "se buscarán acuerdos" and a candidate paraphrase like "se deberá obtener el acuerdo", you can use the model like this:
```
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("Prompsit/paraphrase-roberta-es")
model = AutoModelForSequenceClassification.from_pretrained("Prompsit/paraphrase-roberta-es")
input = tokenizer('se buscarán acuerdos','se deberá obtener el acuerdo',return_tensors='pt')
logits = model(**input).logits
soft = torch.nn.Softmax(dim=1)
print(soft(logits))
```
Code output is:
```
tensor([[0.2266, 0.7734]], grad_fn=<SoftmaxBackward>)
```
As the probability of 1 (=It's a paraphrase) is 0.77 and the probability of 0 (=It is not a paraphrase) is 0.22, we can conclude, for our previous example, that "se deberá obtener el acuerdo" is a paraphrase of "se buscarán acuerdos".
# Evaluation results
We used a test dataset of 16,500 human-tagged phrase pairs.
Metrics obtained are:
```
metrics={
'test_loss': 0.4869941473007202,
'test_accuracy': 0.8003636363636364,
'test_precision': 0.6692456479690522,
'test_recall': 0.5896889646357052,
'test_f1': 0.6269535673839184,
'test_matthews_correlation': 0.49324489316659575,
'test_runtime': 27.1537,
'test_samples_per_second': 607.652,
'test_steps_per_second': 19.003
}
``` | {"language": "es", "tags": ["transformers"], "pipeline_tag": "text-classification", "inference": false} | Prompsit/paraphrase-roberta-es | null | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"es",
"autotrain_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
FinBERT is a pre-trained NLP model to analyze sentiment of financial text. It is built by further training the BERT language model in the finance domain, using a large financial corpus and thereby fine-tuning it for financial sentiment classification. [Financial PhraseBank](https://www.researchgate.net/publication/251231107_Good_Debt_or_Bad_Debt_Detecting_Semantic_Orientations_in_Economic_Texts) by Malo et al. (2014) is used for fine-tuning. For more details, please see the paper [FinBERT: Financial Sentiment Analysis with Pre-trained Language Models](https://arxiv.org/abs/1908.10063) and our related [blog post](https://medium.com/prosus-ai-tech-blog/finbert-financial-sentiment-analysis-with-bert-b277a3607101) on Medium.
The model will give softmax outputs for three labels: positive, negative or neutral.
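A minimal usage sketch (not part of the original card), assuming the standard `transformers` pipeline API:
```python
from transformers import pipeline

# Returns the top label ("positive", "negative" or "neutral") with its softmax score.
classifier = pipeline("text-classification", model="ProsusAI/finbert")
print(classifier("Stocks rallied and the British pound gained."))
```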
---
About Prosus
Prosus is a global consumer internet group and one of the largest technology investors in the world. Operating and investing globally in markets with long-term growth potential, Prosus builds leading consumer internet companies that empower people and enrich communities. For more information, please visit www.prosus.com.
Contact information
Please contact Dogu Araci dogu.araci[at]prosus[dot]com and Zulkuf Genc zulkuf.genc[at]prosus[dot]com about any FinBERT related issues and questions.
| {"language": "en", "tags": ["financial-sentiment-analysis", "sentiment-analysis"], "widget": [{"text": "Stocks rallied and the British pound gained."}]} | ProsusAI/finbert | null | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"financial-sentiment-analysis",
"sentiment-analysis",
"en",
"arxiv:1908.10063",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers | {} | PubChimps/dl-bert | null | [
"transformers",
"pytorch",
"jax",
"roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers | {} | PubChimps/dlfBERT | null | [
"transformers",
"pytorch",
"jax",
"roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Pumpkinpie25/DialoGPT-small-Rick | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text-generation | transformers |
# Shrek DialoGPT Model | {"tags": ["conversational"]} | Pupihed/DialoGPT-small-shrek | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
null | null | {} | PurpleJacketGuy/DialoGPT-small-jarvis | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text-generation | transformers |
# Jarvis DialoGPT Model | {"tags": ["conversational"]} | PurpleJacketGuy/My_Jarvis | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-generation | transformers |
# Jarvis DialoGPT Model | {"tags": ["conversational"]} | PurpleJacketGuy/My_Jarvis_2 | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
null | null | {} | Purplegohtic13/Ella | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | PutaDaVi/Elizabeth | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
fill-mask | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-dutch-cased-finetuned-gv
This model is a fine-tuned version of [GroNLP/bert-base-dutch-cased](https://huggingface.co/GroNLP/bert-base-dutch-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7837
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.4741 | 1.0 | 2603 | 1.8404 |
| 1.2384 | 2.0 | 5206 | 1.8457 |
| 1.2121 | 3.0 | 7809 | 1.7837 |
### Framework versions
- Transformers 4.9.0
- Pytorch 1.9.0+cu102
- Datasets 1.10.2
- Tokenizers 0.10.3
| {"tags": ["generated_from_trainer"], "model_index": [{"name": "bert-base-dutch-cased-finetuned-gv", "results": [{"task": {"name": "Masked Language Modeling", "type": "fill-mask"}}]}]} | Pyjay/bert-base-dutch-cased-finetuned-gv | null | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-medium-dutch-finetuned-text-generation
This model is a fine-tuned version of [GroNLP/gpt2-medium-dutch-embeddings](https://huggingface.co/GroNLP/gpt2-medium-dutch-embeddings) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9268
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 394 | 4.0144 |
| 3.3633 | 2.0 | 788 | 3.9379 |
| 2.7108 | 3.0 | 1182 | 3.9268 |
### Framework versions
- Transformers 4.9.0
- Pytorch 1.9.0+cu102
- Datasets 1.10.2
- Tokenizers 0.10.3
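A hedged usage sketch (not part of the original card), assuming the standard `transformers` text-generation pipeline; the Dutch prompt is made up for illustration:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="Pyjay/gpt2-medium-dutch-finetuned-text-generation")
print(generator("Er was eens", max_length=40, num_return_sequences=1))
```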
| {"tags": ["generated_from_trainer"], "model_index": [{"name": "gpt2-medium-dutch-finetuned-text-generation", "results": [{"task": {"name": "Causal Language Modeling", "type": "text-generation"}}]}]} | Pyjay/gpt2-medium-dutch-finetuned-text-generation | null | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
sentence-similarity | sentence-transformers |
# Pyjay/sentence-transformers-multilingual-snli-v2-500k
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('Pyjay/sentence-transformers-multilingual-snli-v2-500k')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('Pyjay/sentence-transformers-multilingual-snli-v2-500k')
model = AutoModel.from_pretrained('Pyjay/sentence-transformers-multilingual-snli-v2-500k')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
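As a follow-up sketch (not part of the original card), the two embeddings computed above can be compared with cosine similarity:
```python
import torch.nn.functional as F

# Semantic similarity between the two example sentences embedded above
similarity = F.cosine_similarity(sentence_embeddings[0], sentence_embeddings[1], dim=0)
print(similarity.item())
```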
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=Pyjay/sentence-transformers-multilingual-snli-v2-500k)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 15604 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.SoftmaxLoss.SoftmaxLoss`
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 180 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"callback": null,
"epochs": 4,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 72,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> | {"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "pipeline_tag": "sentence-similarity"} | Pyjay/sentence-transformers-multilingual-snli-v2-500k | null | [
"sentence-transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
feature-extraction | transformers | {} | Pyke/1 | null | [
"transformers",
"pytorch",
"bart",
"feature-extraction",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text2text-generation | transformers | {} | Pyke/DS-config-1 | null | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text2text-generation | transformers | {} | Pyke/DS-config-12 | null | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text2text-generation | transformers | {} | Pyke/DS-config-14 | null | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Pyke/DS-config-15 | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
feature-extraction | transformers | {} | Pyke/DS-config-16 | null | [
"transformers",
"pytorch",
"bart",
"feature-extraction",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text2text-generation | transformers | {} | Pyke/DS-config-18 | null | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text2text-generation | transformers | {} | Pyke/DS-config-19 | null | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text2text-generation | transformers | {} | Pyke/DS-config-2 | null | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text2text-generation | transformers | {} | Pyke/DS-config-20 | null | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text2text-generation | transformers | {} | Pyke/DS-config-21 | null | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text2text-generation | transformers | {} | Pyke/DS-config-22 | null | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text2text-generation | transformers | {} | Pyke/DS-config-23 | null | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text2text-generation | transformers | {} | Pyke/DS-config-3 | null | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text2text-generation | transformers | {} | Pyke/DS-config-4 | null | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text2text-generation | transformers | {} | Pyke/DS-config-5 | null | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text2text-generation | transformers | {} | Pyke/DS-config-6 | null | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text2text-generation | transformers | {} | Pyke/DS-config-7 | null | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text2text-generation | transformers | {} | Pyke/DS-config-8 | null | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text2text-generation | transformers | {} | Pyke/DS-config-9 | null | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text2text-generation | transformers | {} | Pyke/bart-finetuned-on-patent-Deepspeed-DS-04 | null | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text2text-generation | transformers | {} | Pyke/bart-finetuned-on-patent-Deepspeed-DS-1 | null | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text2text-generation | transformers | {} | Pyke/bart-finetuned-on-patent-Deepspeed-DS-2 | null | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text2text-generation | transformers | {} | Pyke/bart-finetuned-on-patent-Deepspeed-DS-4 | null | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text2text-generation | transformers | {} | Pyke/bart-finetuned-on-patent-Deepspeed-Test-Formal-01 | null | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text2text-generation | transformers | {} | Pyke/bart-finetuned-on-patent-Deepspeed-Test-Formal-02 | null | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text2text-generation | transformers | {} | Pyke/bart-finetuned-on-patent-Deepspeed-Test-Formal-04 | null | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text2text-generation | transformers | {} | Pyke/bart-finetuned-on-patent-Deepspeed-Test-Formal-05 | null | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text2text-generation | transformers | {} | Pyke/bart-finetuned-on-patent-Deepspeed-Test-Formal-1 | null | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text2text-generation | transformers | {} | Pyke/bart-finetuned-on-patent-Deepspeed-Test-Formal-2 | null | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text2text-generation | transformers | {} | Pyke/bart-finetuned-on-patent-Deepspeed-Test-Formal-4 | null | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text2text-generation | transformers | {} | Pyke/bart-finetuned-on-patent-Deepspeed-Test001 | null | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text2text-generation | transformers | {} | Pyke/bart-finetuned-on-patent-Deepspeed-Test002 | null | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
feature-extraction | transformers | {} | Pyke/bart-finetuned-on-patent-Deepspeed-Test003 | null | [
"transformers",
"pytorch",
"bart",
"feature-extraction",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text2text-generation | transformers | {} | Pyke/bart-finetuned-on-patent-Deepspeed-Test004 | null | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text2text-generation | transformers | {} | Pyke/bart-finetuned-on-patent-Deepspeed-Test005 | null | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Pyke/bart-finetuned-on-patent-Deepspeed-Test01 | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | transformers | {} | Pyke/bart-finetuned-on-patent-Deepspeed-Test1 | null | [
"transformers",
"pytorch",
"bart",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text2text-generation | transformers | {} | Pyke/bart-finetuned-on-patent-Deepspeed-Test10 | null | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text2text-generation | transformers | {} | Pyke/bart-finetuned-on-patent-Deepspeed-Test11 | null | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text2text-generation | transformers | {} | Pyke/bart-finetuned-on-patent-Deepspeed-Test12 | null | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text2text-generation | transformers | {} | Pyke/bart-finetuned-on-patent-Deepspeed-Test13 | null | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text2text-generation | transformers | {} | Pyke/bart-finetuned-on-patent-Deepspeed-Test14 | null | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text2text-generation | transformers | {} | Pyke/bart-finetuned-on-patent-Deepspeed-Test15 | null | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Pyke/bart-finetuned-on-patent-Deepspeed-Test16 | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Pyke/bart-finetuned-on-patent-Deepspeed-Test17 | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text2text-generation | transformers | {} | Pyke/bart-finetuned-on-patent-Deepspeed-Test18 | null | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Pyke/bart-finetuned-on-patent-Deepspeed-Test19 | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text2text-generation | transformers | {} | Pyke/bart-finetuned-on-patent-Deepspeed-Test2 | null | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text2text-generation | transformers | {} | Pyke/bart-finetuned-on-patent-Deepspeed-Test20 | null | [
"transformers",
"bart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Pyke/bart-finetuned-on-patent-Deepspeed-Test21 | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Pyke/bart-finetuned-on-patent-Deepspeed-Test22 | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text2text-generation | transformers | {} | Pyke/bart-finetuned-on-patent-Deepspeed-Test23 | null | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text2text-generation | transformers | {} | Pyke/bart-finetuned-on-patent-Deepspeed-Test25 | null | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
feature-extraction | transformers | {} | Pyke/bart-finetuned-on-patent-Deepspeed-Test26 | null | [
"transformers",
"pytorch",
"bart",
"feature-extraction",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
feature-extraction | transformers | {} | Pyke/bart-finetuned-on-patent-Deepspeed-Test27 | null | [
"transformers",
"pytorch",
"bart",
"feature-extraction",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
feature-extraction | transformers | {} | Pyke/bart-finetuned-on-patent-Deepspeed-Test28 | null | [
"transformers",
"pytorch",
"bert",
"feature-extraction",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text2text-generation | transformers | {} | Pyke/bart-finetuned-on-patent-Deepspeed-Test29 | null | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | transformers | {} | Pyke/bart-finetuned-on-patent-Deepspeed-Test3 | null | [
"transformers",
"pytorch",
"bart",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text2text-generation | transformers | {} | Pyke/bart-finetuned-on-patent-Deepspeed-Test30 | null | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |