modelId | lastModified | tags | pipeline_tag | files | publishedBy | downloads_last_month | library | modelCard
---|---|---|---|---|---|---|---|---
piEsposito/braquad-bert-qna | 2021-05-20T02:42:18.000Z | [
"pytorch",
"jax",
"bert",
"question-answering",
"pt-br",
"transformers",
"license:apache-2.0",
"pipeline_tag:question-answering"
]
| question-answering | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| piEsposito | 32 | transformers | ---
language:
- pt-br
tags:
- question-answering
license: apache-2.0
pipeline_tag: question-answering
metrics:
- em
- f1
---
# BraQuAD BERT
## Model description
This is a question-answering model trained on BraQuAD 2.0, a version of SQuAD 2.0 translated into PT-BR using the Google Cloud Translation API.
### Context
Edith Ranzini (São Paulo,[1] 1946) é uma engenheira brasileira formada pela USP, professora doutora da Pontifícia Universidade Católica de São Paulo[2] e professora sênior da Escola Politécnica da Universidade de São Paulo (Poli).[3] Ela compôs a equipe responsável pela criação do primeiro computador brasileiro, o Patinho Feio,[1] em 1972, e participou do grupo de instituidores da Fundação para o Desenvolvimento Tecnológico da Engenharia, sendo a única mulher do mesmo.[4][2] Atua nas áreas de inteligência artificial, engenharia de computação, redes neurais e sistemas gráficos.
Na sua época de prestar o vestibular, inscreveu-se para física na USP e para engenharia na Poli-USP,[3] sendo aprovada nesta última em 1965, ingressando como uma das 12 mulheres do total de 360 calouros.
Em 1969, formou-se como engenheira de eletricidade, permanecendo na universidade para fazer sua pós-graduação. Nessa época entrou para o Laboratório de Sistemas Digitais (LSD),atual Departamento de Engenharia de Computação e Sistemas Digitais, criado pelo professor Antônio Hélio Guerra Vieira.[3] Em 1970, deu início ao seu mestrado em Engenharia de Sistemas pela USP, concluindo o mesmo em 1975.[2] Nesse período, permaneceu no LSD e fez parte do grupo responsável pelo desenvolvimento do primeiro computador brasileiro, o Patinho Feio (1971-1972) e do G10 (1973-1975), primeiro computador brasileiro de médio porte, feito para o Grupo de trabalho Especial (GTE), posteriormente Digibras.
### Examples:
1. Alem do Patinho feio qual outro projeto edith trabalhou? Answer: G10
2. Quantas mulheres entraram na Poli em 1965? Answer: 12
3. Qual grande projeto edith trabalhou? Answer: do primeiro computador brasileiro
4. Qual o primeiro computador brasileiro? Answer: Patinho Feio
## Expected results
As an example, the context and questions above show what you can ask, as well as the expected responses. These QA pairs were not part of the training dataset.
#### How to use
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer
import torch
mname = "piEsposito/braquad-bert-qna"
model = AutoModelForQuestionAnswering.from_pretrained(mname)
tokenizer = AutoTokenizer.from_pretrained(mname)
context = """Edith Ranzini (São Paulo,[1] 1946) é uma engenheira brasileira formada pela USP, professora doutora da Pontifícia Universidade Católica de São Paulo[2] e professora sênior da Escola Politécnica da Universidade de São Paulo (Poli).[3] Ela compôs a equipe responsável pela criação do primeiro computador brasileiro, o Patinho Feio,[1] em 1972, e participou do grupo de instituidores da Fundação para o Desenvolvimento Tecnológico da Engenharia, sendo a única mulher do mesmo.[4][2] Atua nas áreas de inteligência artificial, engenharia de computação, redes neurais e sistemas gráficos.
Na sua época de prestar o vestibular, inscreveu-se para física na USP e para engenharia na Poli-USP,[3] sendo aprovada nesta última em 1965, ingressando como uma das 12 mulheres do total de 360 calouros.[5]
Em 1969, formou-se como engenheira de eletricidade,[2][3] permanecendo na universidade para fazer sua pós-graduação. Nessa época entrou para o Laboratório de Sistemas Digitais (LSD),atual Departamento de Engenharia de Computação e Sistemas Digitais, criado pelo professor Antônio Hélio Guerra Vieira.[3] Em 1970, deu início ao seu mestrado em Engenharia de Sistemas pela USP, concluindo o mesmo em 1975.[2] Nesse período, permaneceu no LSD e fez parte do grupo responsável pelo desenvolvimento do primeiro computador brasileiro, o Patinho Feio (1971-1972) e do G10 (1973-1975), primeiro computador brasileiro de médio porte, feito para o Grupo de trabalho Especial (GTE), posteriormente Digibras."""
# you can try this for all the examples above.
question = 'Qual grande projeto edith trabalhou?'
string = f"[CLS] {question} [SEP] {context} [SEP]"
as_tensor = torch.tensor(tokenizer.encode(string)).unsqueeze(0)
# tuple output; return_dict=False keeps this unpacking working on recent transformers versions
starts, ends = model(as_tensor.long(), return_dict=False)
s, e = torch.argmax(starts[0]), torch.argmax(ends[0])
print(tokenizer.decode(tokenizer.encode(string)[s:e+1])) # 'do primeiro computador brasileiro'
```
#### Limitations and bias
- The model is trained on a dataset translated with the Google Cloud Translation API. Because of this, some labels are not identical to the answers, and the performance cannot reach the level of hand-curated English models. Even so, it is good progress towards QA in PT-BR.
## Training data
[BraQuAD dataset](https://github.com/piEsposito/br-quad-2.0).
## Training procedure
## Eval results
EM | F1
-------|---------
0.62 | 0.69
### BibTeX entry and citation info
```bibtex
@inproceedings{...,
year={2020},
title={BraQuAD - Dataset para Question Answering em PT-BR},
author={Esposito, Wladimir and Esposito, Piero and Tamais, Ana},
}
```
|
pierreguillou/bert-base-cased-squad-v1.1-portuguese | 2021-05-20T02:43:14.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"question-answering",
"pt",
"dataset:brWaC",
"dataset:squad",
"transformers",
"license:mit",
"bert-base"
]
| question-answering | [
".:",
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| pierreguillou | 2,634 | transformers | ---
language: pt
license: mit
tags:
- question-answering
- bert
- bert-base
- pytorch
datasets:
- brWaC
- squad
metrics:
- squad
widget:
- text: "Quando começou a pandemia de Covid-19 no mundo?"
context: "A pandemia de COVID-19, também conhecida como pandemia de coronavírus, é uma pandemia em curso de COVID-19, uma doença respiratória aguda causada pelo coronavírus da síndrome respiratória aguda grave 2 (SARS-CoV-2). A doença foi identificada pela primeira vez em Wuhan, na província de Hubei, República Popular da China, em 1 de dezembro de 2019, mas o primeiro caso foi reportado em 31 de dezembro do mesmo ano."
- text: "Onde foi descoberta a Covid-19?"
context: "A pandemia de COVID-19, também conhecida como pandemia de coronavírus, é uma pandemia em curso de COVID-19, uma doença respiratória aguda causada pelo coronavírus da síndrome respiratória aguda grave 2 (SARS-CoV-2). A doença foi identificada pela primeira vez em Wuhan, na província de Hubei, República Popular da China, em 1 de dezembro de 2019, mas o primeiro caso foi reportado em 31 de dezembro do mesmo ano."
---
# Portuguese BERT base cased QA (Question Answering), finetuned on SQUAD v1.1

## Introduction
The model was trained on the SQuAD v1.1 dataset in Portuguese from the [Deep Learning Brasil group](http://www.deeplearningbrasil.com.br/) on Google Colab.
The language model used is the [BERTimbau Base](https://huggingface.co/neuralmind/bert-base-portuguese-cased) (aka "bert-base-portuguese-cased") from [Neuralmind.ai](https://neuralmind.ai/): BERTimbau Base is a pretrained BERT model for Brazilian Portuguese that achieves state-of-the-art performances on three downstream NLP tasks: Named Entity Recognition, Sentence Textual Similarity and Recognizing Textual Entailment. It is available in two sizes: Base and Large.
## Information on the method used
All the information is in the blog post: [NLP | Modelo de Question Answering em qualquer idioma baseado no BERT base (estudo de caso em português)](https://medium.com/@pierre_guillou/nlp-modelo-de-question-answering-em-qualquer-idioma-baseado-no-bert-base-estudo-de-caso-em-12093d385e78)
## Notebooks in Google Colab & GitHub
- Google Colab: [colab_question_answering_BERT_base_cased_squad_v11_pt.ipynb](https://colab.research.google.com/drive/18ueLdi_V321Gz37x4gHq8mb4XZSGWfZx?usp=sharing)
- GitHub: [colab_question_answering_BERT_base_cased_squad_v11_pt.ipynb](https://github.com/piegu/language-models/blob/master/colab_question_answering_BERT_base_cased_squad_v11_pt.ipynb)
## Performance
The results obtained are the following:
```
f1 = 82.50
exact match = 70.49
```
## How to use the model... with Pipeline
```python
import transformers
from transformers import pipeline
# source: https://pt.wikipedia.org/wiki/Pandemia_de_COVID-19
context = r"""
A pandemia de COVID-19, também conhecida como pandemia de coronavírus, é uma pandemia em curso de COVID-19,
uma doença respiratória aguda causada pelo coronavírus da síndrome respiratória aguda grave 2 (SARS-CoV-2).
A doença foi identificada pela primeira vez em Wuhan, na província de Hubei, República Popular da China,
em 1 de dezembro de 2019, mas o primeiro caso foi reportado em 31 de dezembro do mesmo ano.
Acredita-se que o vírus tenha uma origem zoonótica, porque os primeiros casos confirmados
tinham principalmente ligações ao Mercado Atacadista de Frutos do Mar de Huanan, que também vendia animais vivos.
Em 11 de março de 2020, a Organização Mundial da Saúde declarou o surto uma pandemia. Até 8 de fevereiro de 2021,
pelo menos 105 743 102 casos da doença foram confirmados em pelo menos 191 países e territórios,
com cerca de 2 308 943 mortes e 58 851 440 pessoas curadas.
"""
model_name = 'pierreguillou/bert-base-cased-squad-v1.1-portuguese'
nlp = pipeline("question-answering", model=model_name)
question = "Quando começou a pandemia de Covid-19 no mundo?"
result = nlp(question=question, context=context)
print(f"Answer: '{result['answer']}', score: {round(result['score'], 4)}, start: {result['start']}, end: {result['end']}")
# Answer: '1 de dezembro de 2019', score: 0.713, start: 328, end: 349
```
## How to use the model... with the Auto classes
```python
from transformers import AutoTokenizer, AutoModelForQuestionAnswering
tokenizer = AutoTokenizer.from_pretrained("pierreguillou/bert-base-cased-squad-v1.1-portuguese")
model = AutoModelForQuestionAnswering.from_pretrained("pierreguillou/bert-base-cased-squad-v1.1-portuguese")
```
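With the objects loaded above, here is a minimal inference sketch (not part of the original card; it assumes a recent transformers version whose model output exposes `start_logits`/`end_logits`, and reuses the question and context from the widget examples of this card):
```python
import torch

question = "Quando começou a pandemia de Covid-19 no mundo?"
context = "A pandemia de COVID-19, também conhecida como pandemia de coronavírus, é uma pandemia em curso de COVID-19, uma doença respiratória aguda causada pelo coronavírus da síndrome respiratória aguda grave 2 (SARS-CoV-2). A doença foi identificada pela primeira vez em Wuhan, na província de Hubei, República Popular da China, em 1 de dezembro de 2019, mas o primeiro caso foi reportado em 31 de dezembro do mesmo ano."

# encode the (question, context) pair and pick the most likely answer span
inputs = tokenizer(question, context, return_tensors="pt", truncation=True)
with torch.no_grad():
    outputs = model(**inputs)
start = int(outputs.start_logits.argmax())
end = int(outputs.end_logits.argmax())
print(tokenizer.decode(inputs["input_ids"][0][start:end + 1]))  # expected to be close to '1 de dezembro de 2019'
```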
Or just clone the model repo:
```bash
git lfs install
git clone https://huggingface.co/pierreguillou/bert-base-cased-squad-v1.1-portuguese

# if you want to clone without large files – just their pointers
# prepend your git clone with the following env var:
GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/pierreguillou/bert-base-cased-squad-v1.1-portuguese
```
## Limitations and bias
The training data used for this model come from the Portuguese SQuAD dataset. It could contain a lot of unfiltered content, which is far from neutral and carries biases.
## Author
Portuguese BERT base cased QA (Question Answering), finetuned on SQUAD v1.1 was trained and evaluated by [Pierre GUILLOU](https://www.linkedin.com/in/pierreguillou/) thanks to the Open Source code, platforms and advices of many organizations ([link to the list](https://medium.com/@pierre_guillou/nlp-modelo-de-question-answering-em-qualquer-idioma-baseado-no-bert-base-estudo-de-caso-em-12093d385e78#c572)). In particular: [Hugging Face](https://huggingface.co/), [Neuralmind.ai](https://neuralmind.ai/), [Deep Learning Brasil group](http://www.deeplearningbrasil.com.br/), [Google Colab](https://colab.research.google.com/) and [AI Lab](https://ailab.unb.br/).
## Citation
If you use our work, please cite:
```bibtex
@inproceedings{pierreguillou2021bertbasecasedsquadv11portuguese,
title={Portuguese BERT base cased QA (Question Answering), finetuned on SQUAD v1.1},
author={Pierre Guillou},
year={2021}
}
``` |
pierreguillou/bert-large-cased-squad-v1.1-portuguese | 2021-06-18T21:34:50.000Z | [
"pytorch",
"tf",
"bert",
"question-answering",
"pt",
"dataset:brWaC",
"dataset:squad",
"transformers",
"license:mit",
"bert-large"
]
| question-answering | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
]
| pierreguillou | 0 | transformers | |
pierreguillou/gpt2-small-portuguese | 2021-05-23T10:59:56.000Z | [
"pytorch",
"tf",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"pt",
"dataset:wikipedia",
"transformers",
"license:mit",
"text-generation"
]
| text-generation | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.json"
]
| pierreguillou | 1,133 | transformers | ---
language: pt
widget:
- text: "Quem era Jim Henson? Jim Henson era um"
- text: "Em um achado chocante, o cientista descobriu um"
- text: "Barack Hussein Obama II, nascido em 4 de agosto de 1961, é"
- text: "Corrida por vacina contra Covid-19 já tem"
license: mit
datasets:
- wikipedia
---
# GPorTuguese-2: a Language Model for Portuguese text generation (and more NLP tasks...)
## Introduction
GPorTuguese-2 (Portuguese GPT-2 small) is a state-of-the-art language model for Portuguese based on the GPT-2 small model.
It was trained on Portuguese Wikipedia using **Transfer Learning and Fine-tuning techniques** in just over a day, on one GPU NVIDIA V100 32GB and with a little more than 1GB of training data.
It is a proof-of-concept that it is possible to get a state-of-the-art language model in any language with low resources.
It was fine-tuned from the [English pre-trained GPT-2 small](https://huggingface.co/gpt2) using the Hugging Face libraries (Transformers and Tokenizers) wrapped into the [fastai v2](https://dev.fast.ai/) Deep Learning framework. All the fine-tuning fastai v2 techniques were used.
It is now available on Hugging Face. For further information or requests, please go to "[Faster than training from scratch — Fine-tuning the English GPT-2 in any language with Hugging Face and fastai v2 (practical case with Portuguese)](https://medium.com/@pierre_guillou/faster-than-training-from-scratch-fine-tuning-the-english-gpt-2-in-any-language-with-hugging-f2ec05c98787)".
## Model
| Model | #params | Model file (pt/tf) | Arch. | Training /Validation data (text) |
|-------------------------|---------|--------------------|-------------|------------------------------------------|
| `gpt2-small-portuguese` | 124M | 487M / 475M | GPT-2 small | Portuguese Wikipedia (1.28 GB / 0.32 GB) |
## Evaluation results
In a little more than a day (we only used one NVIDIA V100 32GB GPU; with Distributed Data Parallel (DDP) training on just 2 GPUs, this time could have been divided by three, down to about 10 hours), we got a loss of 3.17, an **accuracy of 37.99%** and a **perplexity of 23.76** (see the validation results table below).
| after ... epochs | loss | accuracy (%) | perplexity | time by epoch | cumulative time |
|------------------|------|--------------|------------|---------------|-----------------|
| 0 | 9.95 | 9.90 | 20950.94 | 00:00:00 | 00:00:00 |
| 1 | 3.64 | 32.52 | 38.12 | 5:48:31 | 5:48:31 |
| 2 | 3.30 | 36.29 | 27.16 | 5:38:18 | 11:26:49 |
| 3 | 3.21 | 37.46 | 24.71 | 6:20:51 | 17:47:40 |
| 4 | 3.19 | 37.74 | 24.21 | 6:06:29 | 23:54:09 |
| 5 | 3.17 | 37.99 | 23.76 | 6:16:22 | 30:10:31 |
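For reference, the perplexity column is simply the exponential of the (cross-entropy) loss column; the small discrepancies come from the loss being rounded to two decimals in the table:

```latex
\text{perplexity} = e^{\text{loss}}, \qquad e^{3.17} \approx 23.8
```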
## GPT-2
*Note: information copied/pasted from [Model: gpt2 >> GPT-2](https://huggingface.co/gpt2#gpt-2)*
Pretrained model on English language using a causal language modeling (CLM) objective. It was introduced in this [paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) and first released at this [page](https://openai.com/blog/better-language-models/) (February 14, 2019).
Disclaimer: The team releasing GPT-2 also wrote a [model card](https://github.com/openai/gpt-2/blob/master/model_card.md) for their model. Content from this model card has been written by the Hugging Face team to complete the information they provided and give specific examples of bias.
## Model description
*Note: information copied/pasted from [Model: gpt2 >> Model description](https://huggingface.co/gpt2#model-description)*
GPT-2 is a transformers model pretrained on a very large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was trained to guess the next word in sentences.
More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence, shifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the predictions for the token `i` only uses the inputs from `1` to `i` but not the future tokens.
This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks. The model is best at what it was pretrained for however, which is generating texts from a prompt.
## How to use GPorTuguese-2 with HuggingFace (PyTorch)
The following code uses PyTorch. To use TensorFlow, see the corresponding section below.
### Load GPorTuguese-2 and its sub-word tokenizer (Byte-level BPE)
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
import torch
tokenizer = AutoTokenizer.from_pretrained("pierreguillou/gpt2-small-portuguese")
model = AutoModelWithLMHead.from_pretrained("pierreguillou/gpt2-small-portuguese")
# Get sequence length max of 1024
tokenizer.model_max_length=1024
model.eval() # disable dropout (or leave in train mode to finetune)
```
### Generate one word
```python
# input sequence
text = "Quem era Jim Henson? Jim Henson era um"
inputs = tokenizer(text, return_tensors="pt")
# model output
outputs = model(**inputs, labels=inputs["input_ids"])
loss, logits = outputs[:2]
predicted_index = torch.argmax(logits[0, -1, :]).item()
predicted_text = tokenizer.decode([predicted_index])
# results
print('input text:', text)
print('predicted text:', predicted_text)
# input text: Quem era Jim Henson? Jim Henson era um
# predicted text: homem
```
### Generate one full sequence
```python
# input sequence
text = "Quem era Jim Henson? Jim Henson era um"
inputs = tokenizer(text, return_tensors="pt")
# model output using Top-k sampling text generation method
sample_outputs = model.generate(inputs.input_ids,
pad_token_id=50256,
do_sample=True,
max_length=50, # put the token number you want
top_k=40,
num_return_sequences=1)
# generated sequence
for i, sample_output in enumerate(sample_outputs):
print(">> Generated text {}\n\n{}".format(i+1, tokenizer.decode(sample_output.tolist())))
# >> Generated text
# Quem era Jim Henson? Jim Henson era um executivo de televisão e diretor de um grande estúdio de cinema mudo chamado Selig,
# depois que o diretor de cinema mudo Georges Seuray dirigiu vários filmes para a Columbia e o estúdio.
```
## How to use GPorTuguese-2 with HuggingFace (TensorFlow)
The following code uses TensorFlow. To use PyTorch, see the corresponding section above.
### Load GPorTuguese-2 and its sub-word tokenizer (Byte-level BPE)
```python
from transformers import AutoTokenizer, TFAutoModelWithLMHead
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("pierreguillou/gpt2-small-portuguese")
model = TFAutoModelWithLMHead.from_pretrained("pierreguillou/gpt2-small-portuguese")
# Get sequence length max of 1024
tokenizer.model_max_length=1024
# note: unlike the PyTorch version above, Keras/TF models have no model.eval();
# dropout is already disabled at inference time unless training=True is passed
```
### Generate one full sequence
```python
# input sequence
text = "Quem era Jim Henson? Jim Henson era um"
inputs = tokenizer.encode(text, return_tensors="tf")
# model output using Top-k sampling text generation method
outputs = model.generate(inputs, eos_token_id=50256, pad_token_id=50256,
do_sample=True,
max_length=40,
top_k=40)
print(tokenizer.decode(outputs[0]))
# >> Generated text
# Quem era Jim Henson? Jim Henson era um amigo familiar da família. Ele foi contratado pelo seu pai
# para trabalhar como aprendiz no escritório de um escritório de impressão, e então começou a ganhar dinheiro
```
## Limitations and bias
The training data used for this model come from Portuguese Wikipedia. We know it contains a lot of unfiltered content from the internet, which is far from neutral. As the openAI team themselves point out in their model card:
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true. Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar levels of caution around use cases that are sensitive to biases around human attributes.
## Author
Portuguese GPT-2 small was trained and evaluated by [Pierre GUILLOU](https://www.linkedin.com/in/pierreguillou/) thanks to the computing power of the GPU (GPU NVIDIA V100 32 Go) of the [AI Lab](https://www.linkedin.com/company/ailab-unb/) (University of Brasilia) to which I am attached as an Associate Researcher in NLP and the participation of its directors in the definition of NLP strategy, Professors Fabricio Ataides Braz and Nilton Correia da Silva.
## Citation
If you use our work, please cite:
```bibtex
@inproceedings{pierre2020gpt2smallportuguese,
title={GPorTuguese-2 (Portuguese GPT-2 small): a Language Model for Portuguese text generation (and more NLP tasks...)},
author={Pierre Guillou},
year={2020}
}
```
|
pierrerappolt/disease-extraction | 2021-06-03T21:35:50.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers"
]
| token-classification | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json",
"vocab.txt"
]
| pierrerappolt | 292 | transformers | `microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext` trained on https://raw.githubusercontent.com/facebookresearch/Clinical-Trial-Parser/master/data/ner/medical_ner.tsv
Any token labeled as `chronic_disease` or `cancer` in that dataset was given label 1; all other tokens were given label 0.
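A minimal usage sketch (not part of the original card; it assumes the checkpoint loads as a standard `transformers` token-classification model and a reasonably recent transformers version that supports `aggregation_strategy`; the example sentence is made up):
```python
from transformers import pipeline

# label 1 marks chronic_disease / cancer tokens, label 0 everything else (per the description above)
ner = pipeline("token-classification", model="pierrerappolt/disease-extraction", aggregation_strategy="simple")
print(ner("Patients with stage II breast cancer and chronic kidney disease were excluded."))
```
|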
pierric/test-EsperBERTo-small | 2021-05-20T19:29:19.000Z | [
"pytorch",
"jax",
"roberta",
"masked-lm",
"esperanto",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"tokenizer_config.json",
"vocab.json"
]
| pierric | 17 | transformers | ---
language: esperanto
thumbnail: https://huggingface.co/blog/assets/EsperBERTo-thumbnail-v2.png
---
## EsperBERTo: RoBERTa-like Language model trained on Esperanto
**Companion model to blog post https://huggingface.co/blog/how-to-train** 🔥
### Training Details
- current checkpoint: 566000
- machine name: `galinette`
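### Example usage
A minimal fill-mask sketch (not from the original card; the Esperanto sentence is made up, and `<mask>` is the RoBERTa-style mask token used by this tokenizer):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="pierric/test-EsperBERTo-small")
# "The sun shines in the <mask>."
print(fill_mask("La suno brilas en la <mask>."))
```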
|
pin/analytical | 2021-05-20T02:44:25.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"da",
"transformers",
"danish",
"sentiment",
"analytical",
"license:cc-by-4.0"
]
| text-classification | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| pin | 38 | transformers | ---
language: da
tags:
- danish
- bert
- sentiment
- analytical
license: cc-by-4.0
widget:
- text: "Jeg synes, det er en elendig film"
---
# Danish BERT fine-tuned for Detecting 'Analytical'
This model detects if a Danish text is 'subjective' or 'objective'.
It is trained and tested on Tweets and texts transcribed from the European Parliament annotated by [Alexandra Institute](https://github.com/alexandrainst). The model is trained with the [`senda`](https://github.com/ebanalyse/senda) package.
Here is an example of how to load the model in PyTorch using the [🤗Transformers](https://github.com/huggingface/transformers) library:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
tokenizer = AutoTokenizer.from_pretrained("pin/analytical")
model = AutoModelForSequenceClassification.from_pretrained("pin/analytical")
# create 'senda' sentiment analysis pipeline
analytical_pipeline = pipeline('sentiment-analysis', model=model, tokenizer=tokenizer)
text = "Jeg synes, det er en elendig film"
# in English: 'I think, it is a terrible movie'
analytical_pipeline(text)
```
## Performance
The `senda` model achieves an accuracy of 0.89 and a macro-averaged F1-score of 0.78 on a small test data set that [Alexandra Institute](https://github.com/alexandrainst/danlp/blob/master/docs/docs/datasets.md#twitter-sentiment) provides. The model can most certainly be improved, and we encourage all NLP-enthusiasts to give it their best shot - you can use the [`senda`](https://github.com/ebanalyse/senda) package to do this.
#### Contact
Feel free to contact author Lars Kjeldgaard on [[email protected]](mailto:[email protected]).
|
pin/senda | 2021-05-20T02:45:22.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"da",
"transformers",
"danish",
"sentiment",
"polarity",
"license:cc-by-4.0"
]
| text-classification | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| pin | 618 | transformers | ---
language: da
tags:
- danish
- bert
- sentiment
- polarity
license: cc-by-4.0
widget:
- text: "Sikke en dejlig dag det er i dag"
---
# Danish BERT fine-tuned for Sentiment Analysis with `senda`
This model detects polarity ('positive', 'neutral', 'negative') of Danish texts.
It is trained and tested on Tweets annotated by [Alexandra Institute](https://github.com/alexandrainst). The model is trained with the [`senda`](https://github.com/ebanalyse/senda) package.
Here is an example of how to load the model in PyTorch using the [🤗Transformers](https://github.com/huggingface/transformers) library:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
tokenizer = AutoTokenizer.from_pretrained("pin/senda")
model = AutoModelForSequenceClassification.from_pretrained("pin/senda")
# create 'senda' sentiment analysis pipeline
senda_pipeline = pipeline('sentiment-analysis', model=model, tokenizer=tokenizer)
text = "Sikke en dejlig dag det er i dag"
# in English: 'what a lovely day'
senda_pipeline(text)
```
## Performance
The `senda` model achieves an accuracy of 0.77 and a macro-averaged F1-score of 0.73 on a small test data set that [Alexandra Institute](https://github.com/alexandrainst/danlp/blob/master/docs/docs/datasets.md#twitter-sentiment) provides. The model can most certainly be improved, and we encourage all NLP-enthusiasts to give it their best shot - you can use the [`senda`](https://github.com/ebanalyse/senda) package to do this.
#### Contact
Feel free to contact author Lars Kjeldgaard on [[email protected]](mailto:[email protected]).
|
pinkpoofyllama/test | 2021-01-26T05:12:05.000Z | []
| [
".gitattributes"
]
| pinkpoofyllama | 0 | |||
pino/SQUAD | 2021-04-17T09:30:06.000Z | []
| [
".gitattributes"
]
| pino | 0 | |||
pino/gpt2-esp-chat | 2021-04-24T02:50:37.000Z | []
| [
".gitattributes"
]
| pino | 0 | |||
pino/gpt2espchat | 2021-04-24T02:58:19.000Z | []
| [
".gitattributes"
]
| pino | 0 | |||
pino/gptchatbot | 2021-04-17T10:28:49.000Z | []
| [
".gitattributes"
]
| pino | 0 | |||
pino/squadqa | 2021-04-17T10:10:37.000Z | []
| [
".gitattributes"
]
| pino | 0 | |||
placebokkk/first_hf_model | 2021-04-06T06:37:30.000Z | []
| [
".gitattributes"
]
| placebokkk | 0 | |||
plguillou/t5-base-fr-sum-cnndm | 2021-02-03T18:01:38.000Z | [
"pytorch",
"t5",
"seq2seq",
"fr",
"dataset:cnn_dailymail",
"transformers",
"summarization",
"text2text-generation"
]
| summarization | [
".gitattributes",
"README.md",
"added_tokens.json",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json"
]
| plguillou | 250 | transformers | ---
language: fr
tags:
- pytorch
- t5
- seq2seq
- summarization
datasets: cnn_dailymail
---
# French T5 Abstractive Text Summarization
Version 1.0 (I will keep improving the model's performance.)
## Model description
This model is a T5 Transformers model (JDBN/t5-base-fr-qg-fquad) that was fine-tuned in French for abstractive text summarization.
## How to use
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("plguillou/t5-base-fr-sum-cnndm")
model = T5ForConditionalGeneration.from_pretrained("plguillou/t5-base-fr-sum-cnndm")
```
To summarize an ARTICLE, just prefix the input string like this: "summarize: ARTICLE" (see the sketch below).
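For example, a minimal end-to-end sketch (the generation parameters are illustrative choices, not the author's, and the placeholder stands for your own French text):
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("plguillou/t5-base-fr-sum-cnndm")
model = T5ForConditionalGeneration.from_pretrained("plguillou/t5-base-fr-sum-cnndm")

article = "..."  # your French article here
inputs = tokenizer("summarize: " + article, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(inputs.input_ids, num_beams=4, max_length=150, early_stopping=True)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```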
## Training data
The base model I used is JDBN/t5-base-fr-qg-fquad (it can perform question generation, question answering and answer extraction).
I used the "t5-base" model from the transformers library to translate the CNN / Daily Mail summarization dataset into French.
|
pliik/zero-shot-test | 2020-11-13T18:52:35.000Z | []
| [
".gitattributes"
]
| pliik | 0 | |||
pmthangk09/bert-base-uncased-esnli | 2021-05-20T02:46:17.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| pmthangk09 | 23 | transformers | |
pmthangk09/bert-base-uncased-glue-cola | 2021-05-20T02:47:36.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| pmthangk09 | 7 | transformers | |
pmthangk09/bert-base-uncased-glue-rte | 2021-03-13T23:58:38.000Z | []
| [
".gitattributes"
]
| pmthangk09 | 0 | |||
pmthangk09/bert-base-uncased-glue-sst2 | 2021-05-20T02:48:36.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| pmthangk09 | 20 | transformers | |
pmthangk09/bert-base-uncased-sst | 2021-05-20T02:49:40.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| pmthangk09 | 14 | transformers | |
pmthangk09/bert-base-uncased-superglue-multirc | 2021-05-20T02:50:34.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| pmthangk09 | 16 | transformers | |
poipii/yelp_sentiment_distilbert-base-uncased_tuned | 2021-01-14T02:37:35.000Z | [
"tf",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"README.md",
"config.json",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
]
| poipii | 33 | transformers | ---
language: en
tags:
- sentiment
- distilbert-
pipeline_tag: text-classification
---
|
ponmari/Question-Answering | 2020-07-21T07:56:56.000Z | [
"pytorch",
"longformer",
"question-answering",
"transformers"
]
| question-answering | [
".gitattributes",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
]
| ponmari | 100 | transformers | |
ponmari/QuestionAnsweingBert | 2021-05-20T02:51:30.000Z | [
"pytorch",
"jax",
"bert",
"question-answering",
"transformers"
]
| question-answering | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| ponmari | 31 | transformers | |
ponteineptique/latin-classical-small | 2020-04-24T16:05:14.000Z | [
"pytorch",
"xlm",
"transformers"
]
| [
".gitattributes",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| ponteineptique | 15 | transformers | ||
pop/update | 2021-03-13T17:17:43.000Z | []
| [
".gitattributes",
"README.md",
"gdgdgdgd"
]
| pop | 0 | |||
popcornell/FasNetTAC-paper | 2021-02-23T15:34:52.000Z | [
"pytorch",
"dataset:TACDataset",
"dataset:sep_noisy",
"asteroid",
"audio",
"FasNet-TAC",
"audio-source-separation",
"multichannel",
"beamforming",
"license:cc-by-sa-3.0"
]
| audio-source-separation | [
".gitattributes",
"README.md",
"pytorch_model.bin"
]
| popcornell | 0 | asteroid | ---
tags:
- asteroid
- audio
- FasNet-TAC
- audio-source-separation
- multichannel
- beamforming
datasets:
- TACDataset
- sep_noisy
license: cc-by-sa-3.0
inference: false
---
## Asteroid model `Samuele Cornell/FasNetTAC_TACDataset_separatenoisy`
Imported from [Zenodo](https://zenodo.org/record/4557489)
### Description:
This model was trained by popcornell using the TAC/TAC recipe in Asteroid. It was trained on the separate_noisy task of the TACDataset dataset.
### Training config:
```yaml
data:
dev_json: ./data/validation.json
sample_rate: 16000
segment: None
test_json: ./data/test.json
train_json: ./data/train.json
net:
chunk_size: 50
context_ms: 16
enc_dim: 64
feature_dim: 64
hidden_dim: 128
hop_size: 25
n_layers: 4
n_src: 2
window_ms: 4
optim:
lr: 0.001
weight_decay: 1e-06
training:
accumulate_batches: 1
batch_size: 8
early_stop: True
epochs: 200
gradient_clipping: 5
half_lr: True
num_workers: 8
patience: 30
save_top_k: 10
```
### Results:
```yaml
si_sdr: 10.871864315894744
si_sdr_imp: 11.322284052560262
```
### License notice:
This work "FasNetTAC_TACDataset_separatenoisy" is a derivative of LibriSpeech ASR corpus by Vassil Panayotov, used under CC BY 4.0; of End-to-end Microphone Permutation and Number Invariant Multi-channel Speech Separation by Yi Luo, Zhuo Chen, Nima Mesgarani, Takuya Yoshioka, used under CC BY 4.0. "FasNetTAC_TACDataset_separatenoisy" is licensed under Attribution-ShareAlike 3.0 Unported by popcornell.
|
pradhyra/AWSBlogBert | 2021-05-20T19:30:09.000Z | [
"pytorch",
"jax",
"roberta",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"training_args.bin",
"vocab.json"
]
| pradhyra | 25 | transformers | This model is pre-trained on blog articles from AWS Blogs.
## Pre-training corpora
The input text contains around 3000 blog articles on [AWS Blogs website](https://aws.amazon.com/blogs/) technical subject matter including AWS products, tools and tutorials.
## Pre-training details
I picked a Roberta architecture for masked language modeling (6-layer, 768-hidden, 12-heads, 82M parameters) and its corresponding ByteLevelBPE tokenization strategy. I then followed HuggingFace's Transformers [blog post](https://huggingface.co/blog/how-to-train) to train the model.
I chose the following training set-up: 28k training steps with batches of 64 sequences of length 512 and an initial learning rate of 5e-5. The model achieved a training loss of 3.6 on the MLM task over 10 epochs.
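Since this is a masked-language model, here is a minimal fill-mask sketch (not part of the original card; the sentence and masked word are made up, and `<mask>` is the RoBERTa-style mask token):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="pradhyra/AWSBlogBert")
print(fill_mask("You can store objects in Amazon <mask> buckets."))
```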
|
prajjwal1/albert-base-v1-mnli | 2020-05-18T18:16:29.000Z | [
"pytorch",
"albert",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"config.json",
"eval_results_mnli-mm.txt",
"eval_results_mnli.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json",
"training_args.bin"
]
| prajjwal1 | 33 | transformers | |
prajjwal1/albert-base-v2-mnli | 2020-06-25T12:22:36.000Z | [
"pytorch",
"albert",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"cls_embeddings_mnli.pth",
"config.json",
"eval_results_mnli-mm.txt",
"eval_results_mnli.txt",
"hans_predictions.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json",
"training_args.bin"
]
| prajjwal1 | 103 | transformers | |
prajjwal1/albert_new | 2021-05-26T19:57:27.000Z | [
"pytorch",
"albert",
"multiple-choice",
"transformers"
]
| [
".gitattributes",
"all_results.json",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json",
"train_results.json",
"training_args.bin"
]
| prajjwal1 | 42 | transformers | ||
prajjwal1/bert-medium-mnli | 2021-05-20T02:52:25.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"arxiv:1908.08962",
"transformers"
]
| text-classification | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| prajjwal1 | 326 | transformers | The following model is a Pytorch pre-trained model obtained from converting Tensorflow checkpoint found in the [official Google BERT repository](https://github.com/google-research/bert). These BERT variants were introduced in the paper [Well-Read Students Learn Better: On the Importance of Pre-training Compact Models](https://arxiv.org/abs/1908.08962). These models are trained on MNLI.
```
MNLI: 75.86%
MNLI-mm: 77.03%
```
These models were trained for 4 epochs.
[@prajjwal_1](https://twitter.com/prajjwal_1)
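A minimal NLI inference sketch (not part of the original card; the premise/hypothesis pair is made up, a recent transformers version is assumed, and the mapping from logit index to entailment/neutral/contradiction is not documented here, so check `model.config.id2label` before relying on it):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "prajjwal1/bert-medium-mnli"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."
inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # one probability per MNLI class; see model.config.id2label for the names
```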
|
prajjwal1/bert-medium | 2020-08-13T06:49:03.000Z | [
"pytorch",
"arxiv:1908.08962",
"transformers"
]
| [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"vocab.txt"
]
| prajjwal1 | 2,905 | transformers | The following model is a Pytorch pre-trained model obtained from converting Tensorflow checkpoint found in the [official Google BERT repository](https://github.com/google-research/bert). These BERT variants were introduced in the paper [Well-Read Students Learn Better: On the Importance of Pre-training Compact Models](https://arxiv.org/abs/1908.08962). These models are supposed to be trained on a downstream task.
You can check out:
- `prajjwal1/bert-tiny` (L=2, H=128)
- `prajjwal1/bert-mini` (L=4, H=256)
- `prajjwal1/bert-small` (L=4, H=512)
- `prajjwal1/bert-medium` (L=8, H=512)
[@prajjwal_1](https://twitter.com/prajjwal_1)
|
|
prajjwal1/bert-mini-mnli | 2021-05-20T02:52:49.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"arxiv:1908.08962",
"transformers"
]
| text-classification | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| prajjwal1 | 16 | transformers | The following model is a Pytorch pre-trained model obtained from converting Tensorflow checkpoint found in the [official Google BERT repository](https://github.com/google-research/bert). These BERT variants were introduced in the paper [Well-Read Students Learn Better: On the Importance of Pre-training Compact Models](https://arxiv.org/abs/1908.08962). These models are trained on MNLI.
```
MNLI: 68.04%
MNLI-mm: 69.17%
```
These models were trained for 4 epochs.
[@prajjwal_1](https://twitter.com/prajjwal_1)
|
prajjwal1/bert-mini | 2020-08-13T06:49:43.000Z | [
"pytorch",
"arxiv:1908.08962",
"transformers"
]
| [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"vocab.txt"
]
| prajjwal1 | 13,141 | transformers | The following model is a Pytorch pre-trained model obtained from converting Tensorflow checkpoint found in the [official Google BERT repository](https://github.com/google-research/bert). These BERT variants were introduced in the paper [Well-Read Students Learn Better: On the Importance of Pre-training Compact Models](https://arxiv.org/abs/1908.08962). These models are supposed to be trained on a downstream task.
You can check out:
- `prajjwal1/bert-tiny` (L=2, H=128)
- `prajjwal1/bert-mini` (L=4, H=256)
- `prajjwal1/bert-small` (L=4, H=512)
- `prajjwal1/bert-medium` (L=8, H=512)
[@prajjwal_1](https://twitter.com/prajjwal_1)
|
|
prajjwal1/bert-small-mnli | 2021-05-20T02:53:13.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"arxiv:1908.08962",
"transformers"
]
| text-classification | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| prajjwal1 | 96 | transformers | The following model is a Pytorch pre-trained model obtained from converting Tensorflow checkpoint found in the [official Google BERT repository](https://github.com/google-research/bert). These BERT variants were introduced in the paper [Well-Read Students Learn Better: On the Importance of Pre-training Compact Models](https://arxiv.org/abs/1908.08962). These models are trained on MNLI.
```
MNLI: 72.1%
MNLI-mm: 73.76%
```
These models were trained for 4 epochs.
[@prajjwal_1](https://twitter.com/prajjwal_1)
|
prajjwal1/bert-small | 2020-08-13T06:47:27.000Z | [
"pytorch",
"arxiv:1908.08962",
"transformers"
]
| [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"vocab.txt"
]
| prajjwal1 | 3,689 | transformers | The following model is a Pytorch pre-trained model obtained from converting Tensorflow checkpoint found in the [official Google BERT repository](https://github.com/google-research/bert). These BERT variants were introduced in the paper [Well-Read Students Learn Better: On the Importance of Pre-training Compact Models](https://arxiv.org/abs/1908.08962). These models are supposed to be trained on a downstream task.
You can check out:
- `prajjwal1/bert-tiny` (L=2, H=128)
- `prajjwal1/bert-mini` (L=4, H=256)
- `prajjwal1/bert-small` (L=4, H=512)
- `prajjwal1/bert-medium` (L=8, H=512)
[@prajjwal_1](https://twitter.com/prajjwal_1)
|
|
prajjwal1/bert-tiny-mnli | 2021-05-20T02:53:35.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"arxiv:1908.08962",
"transformers"
]
| text-classification | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"hans_predictions.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| prajjwal1 | 46 | transformers | The following model is a Pytorch pre-trained model obtained from converting Tensorflow checkpoint found in the [official Google BERT repository](https://github.com/google-research/bert). These BERT variants were introduced in the paper [Well-Read Students Learn Better: On the Importance of Pre-training Compact Models](https://arxiv.org/abs/1908.08962). These models are trained on MNLI.
```
MNLI: 60%
MNLI-mm: 61.61%
```
These models were trained for 4 epochs.
[@prajjwal_1](https://twitter.com/prajjwal_1)
|
prajjwal1/bert-tiny | 2020-08-13T06:48:35.000Z | [
"pytorch",
"arxiv:1908.08962",
"transformers"
]
| [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"vocab.txt"
]
| prajjwal1 | 34,177 | transformers | The following model is a Pytorch pre-trained model obtained from converting Tensorflow checkpoint found in the [official Google BERT repository](https://github.com/google-research/bert). These BERT variants were introduced in the paper [Well-Read Students Learn Better: On the Importance of Pre-training Compact Models](https://arxiv.org/abs/1908.08962). These models are supposed to be trained on a downstream task.
You can check out:
- `prajjwal1/bert-tiny` (L=2, H=128)
- `prajjwal1/bert-mini` (L=4, H=256)
- `prajjwal1/bert-small` (L=4, H=512)
- `prajjwal1/bert-medium` (L=8, H=512)
[@prajjwal_1](https://twitter.com/prajjwal_1)
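For example, a minimal sketch of loading `prajjwal1/bert-tiny` for a downstream sequence-classification task (the classification head is freshly initialized, so the model has to be fine-tuned before it is useful; `num_labels=2` is just an illustrative choice):
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("prajjwal1/bert-tiny")
model = AutoModelForSequenceClassification.from_pretrained("prajjwal1/bert-tiny", num_labels=2)

# tokenize a small batch and run a forward pass; plug into your own fine-tuning loop or Trainer
batch = tokenizer(["a tiny BERT", "ready for fine-tuning"], padding=True, return_tensors="pt")
print(model(**batch).logits.shape)  # torch.Size([2, 2])
```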
|
|
prajjwal1/bert_small | 2020-08-13T05:45:17.000Z | [
"pytorch",
"transformers"
]
| [
".gitattributes",
"config.json",
"pytorch_model.bin",
"vocab.txt"
]
| prajjwal1 | 16 | transformers | ||
prajjwal1/ctrl_discovery_1 | 2021-03-05T03:08:03.000Z | [
"pytorch",
"ctrl",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"added_tokens.json",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
]
| prajjwal1 | 36 | transformers | |
prajjwal1/ctrl_discovery_10 | 2021-05-16T16:56:14.000Z | [
"pytorch",
"ctrl",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"added_tokens.json",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| prajjwal1 | 80 | transformers | |
prajjwal1/ctrl_discovery_11 | 2021-05-16T17:09:21.000Z | [
"pytorch",
"ctrl",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"added_tokens.json",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| prajjwal1 | 74 | transformers | |
prajjwal1/ctrl_discovery_12 | 2021-05-26T18:53:23.000Z | [
"pytorch",
"ctrl",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"added_tokens.json",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| prajjwal1 | 77 | transformers | |
prajjwal1/ctrl_discovery_13 | 2021-06-03T22:20:53.000Z | [
"pytorch",
"ctrl",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"added_tokens.json",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| prajjwal1 | 56 | transformers | |
prajjwal1/ctrl_discovery_14 | 2021-06-06T21:46:59.000Z | [
"pytorch",
"ctrl",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"added_tokens.json",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| prajjwal1 | 60 | transformers | |
prajjwal1/ctrl_discovery_2 | 2021-03-05T16:07:16.000Z | [
"pytorch",
"ctrl",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"added_tokens.json",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"trainer_state.json",
"vocab.json"
]
| prajjwal1 | 24 | transformers | |
prajjwal1/ctrl_discovery_3 | 2021-03-06T16:07:23.000Z | [
"pytorch",
"ctrl",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"added_tokens.json",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| prajjwal1 | 32 | transformers | |
prajjwal1/ctrl_discovery_4 | 2021-03-19T20:28:51.000Z | [
"pytorch",
"ctrl",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"added_tokens.json",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| prajjwal1 | 86 | transformers | |
prajjwal1/ctrl_discovery_5 | 2021-03-23T02:54:01.000Z | [
"pytorch",
"ctrl",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"added_tokens.json",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| prajjwal1 | 78 | transformers | |
prajjwal1/ctrl_discovery_6 | 2021-04-11T04:41:23.000Z | [
"pytorch",
"ctrl",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"added_tokens.json",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| prajjwal1 | 79 | transformers | |
prajjwal1/ctrl_discovery_7 | 2021-04-25T18:47:46.000Z | [
"pytorch",
"ctrl",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"added_tokens.json",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| prajjwal1 | 151 | transformers | |
prajjwal1/ctrl_discovery_8 | 2021-04-25T21:01:29.000Z | [
"pytorch",
"ctrl",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"added_tokens.json",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| prajjwal1 | 83 | transformers | |
prajjwal1/ctrl_discovery_9 | 2021-05-16T16:34:38.000Z | [
"pytorch",
"ctrl",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"added_tokens.json",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| prajjwal1 | 103 | transformers | |
prajjwal1/ctrl_discovery_flipped_1 | 2021-03-03T16:03:04.000Z | [
"pytorch",
"ctrl",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"added_tokens.json",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
]
| prajjwal1 | 58 | transformers | |
prajjwal1/ctrl_discovery_flipped_2 | 2021-03-07T17:49:29.000Z | [
"pytorch",
"ctrl",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"added_tokens.json",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| prajjwal1 | 48 | transformers | |
prajjwal1/ctrl_discovery_flipped_3 | 2021-03-30T18:44:22.000Z | [
"pytorch",
"ctrl",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"added_tokens.json",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
]
| prajjwal1 | 52 | transformers | |
prajjwal1/ctrl_discovery_flipped_4 | 2021-03-30T19:14:49.000Z | [
"pytorch",
"ctrl",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"added_tokens.json",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
]
| prajjwal1 | 54 | transformers | |
prajjwal1/ctrl_discovery_flipped_5 | 2021-04-11T18:28:47.000Z | [
"pytorch",
"ctrl",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"added_tokens.json",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| prajjwal1 | 44 | transformers | |
prajjwal1/ctrl_discovery_flipped_6 | 2021-06-06T19:32:48.000Z | [
"pytorch",
"ctrl",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"added_tokens.json",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| prajjwal1 | 33 | transformers | |
prajjwal1/roberta-base-mnli | 2021-05-20T19:31:02.000Z | [
"pytorch",
"jax",
"roberta",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| prajjwal1 | 57 | transformers | Roberta-base trained on MNLI.
| Task | Accuracy |
|---------|----------|
| MNLI | 86.32 |
| MNLI-mm | 86.43 |
You can also check out:
- `prajjwal1/roberta-base-mnli`
- `prajjwal1/roberta-large-mnli`
- `prajjwal1/albert-base-v2-mnli`
- `prajjwal1/albert-base-v1-mnli`
- `prajjwal1/albert-large-v2-mnli`
[@prajjwal_1](https://twitter.com/prajjwal_1)
|
prajjwal1/roberta-large-mnli | 2020-08-13T07:10:40.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| prajjwal1 | 41 | transformers | Roberta-large trained on MNLI.
----------------------
| Task | Accuracy |
|---------|----------|
| MNLI | 90.15 |
| MNLI-mm | 90.02 |
You can also check out:
- `prajjwal1/roberta-base-mnli`
- `prajjwal1/roberta-large-mnli`
- `prajjwal1/albert-base-v2-mnli`
- `prajjwal1/albert-base-v1-mnli`
- `prajjwal1/albert-large-v2-mnli`
[@prajjwal_1](https://twitter.com/prajjwal_1)
|
prajjwal1/roberta_hellaswag | 2021-05-28T22:28:13.000Z | [
"pytorch",
"roberta",
"multiple-choice",
"dataset:hellaswag",
"transformers",
"commonsense-reasoning",
"sentence-completion"
]
| [
".gitattributes",
"README.md",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json",
"train_results.json",
"trainer_state.json",
"vocab.json"
]
| prajjwal1 | 15 | transformers | ---
tags:
- pytorch
- commonsense-reasoning
- sentence-completion
datasets:
- hellaswag
---
`RoBERTa` trained on the HellaSwag dataset (`MultipleChoiceModel`). HellaSwag is a multiple-choice sentence-completion benchmark.
It achieves around 74.99% accuracy.
[@prajjwal_1](https://twitter.com/prajjwal_1/)
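A minimal multiple-choice inference sketch (not part of the original card; the context and candidate endings are made up, and it assumes the checkpoint loads with the multiple-choice head mentioned above on a recent transformers version):
```python
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

name = "prajjwal1/roberta_hellaswag"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForMultipleChoice.from_pretrained(name)

context = "A man is sitting at a piano."
endings = ["He starts to play a song.", "He jumps into a pool.", "He reads a newspaper.", "He bakes a cake."]

# encode each (context, ending) pair, then add a batch dimension: [1, num_choices, seq_len]
enc = tokenizer([context] * len(endings), endings, padding=True, return_tensors="pt")
inputs = {k: v.unsqueeze(0) for k, v in enc.items()}
with torch.no_grad():
    logits = model(**inputs).logits  # shape [1, num_choices]
print(int(logits.argmax(dim=-1)))   # index of the most plausible ending
```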
|
|
prajjwal1/roberta_new | 2021-05-28T21:47:53.000Z | [
"pytorch",
"roberta",
"multiple-choice",
"transformers"
]
| [
".gitattributes",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json",
"train_results.json",
"vocab.json"
]
| prajjwal1 | 15 | transformers | ||
prajwalcr/poetry-anger_gpt2 | 2021-05-29T17:48:12.000Z | [
"pytorch",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"added_tokens.json",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| prajwalcr | 145 | transformers | |
prajwalcr/poetry-anticipation_gpt2 | 2021-05-29T17:57:38.000Z | [
"pytorch",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"added_tokens.json",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| prajwalcr | 118 | transformers | |
prajwalcr/poetry-disgust_gpt2 | 2021-05-29T18:47:21.000Z | [
"pytorch",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"added_tokens.json",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| prajwalcr | 75 | transformers | |
prajwalcr/poetry-fear_gpt2 | 2021-05-29T19:35:20.000Z | [
"pytorch",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"added_tokens.json",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| prajwalcr | 110 | transformers | |
prajwalcr/poetry-joy_gpt2 | 2021-05-29T12:40:42.000Z | [
"pytorch",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"added_tokens.json",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| prajwalcr | 36 | transformers | |
prajwalcr/poetry-sadness_gpt2 | 2021-05-30T04:42:56.000Z | []
| [
".gitattributes"
]
| prajwalcr | 0 | |||
prajwalcr/poetry_gpt2 | 2021-05-29T08:37:08.000Z | [
"pytorch",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"added_tokens.json",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| prajwalcr | 130 | transformers | |
pranavpsv/genre-story-generator-v2 | 2021-05-23T11:01:02.000Z | [
"pytorch",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"added_tokens.json",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
]
| pranavpsv | 62 | transformers | |
pranavpsv/gpt2-genre-story-generator | 2021-05-23T11:02:06.000Z | [
"pytorch",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"README.md",
"added_tokens.json",
"config.json",
"eval_results_lm.txt",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
]
| pranavpsv | 1,062 | transformers |
# GPT2 Genre Based Story Generator
## Model description
GPT2 fine-tuned on genre-based story generation.
## Intended uses
Used to generate stories based on a user-provided genre and starting prompt.
## How to use
#### Supported Genres
superhero, action, drama, horror, thriller, sci_fi
#### Input text format
\<BOS> \<genre> Some optional text...
**Example**: \<BOS> \<sci_fi> After discovering time travel,
```python
# Example of usage
from transformers import pipeline
story_gen = pipeline("text-generation", "pranavpsv/gpt2-genre-story-generator")
print(story_gen("<BOS> <superhero> Batman"))
```
## Training data
Initialized with the pre-trained weights of the "gpt2" checkpoint and fine-tuned on stories of various genres.
|
pranavpsv/gpt2-story-gen | 2021-05-23T11:03:13.000Z | [
"pytorch",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
]
| pranavpsv | 51 | transformers | |
prashantsaans/model_name | 2021-06-14T16:42:24.000Z | []
| [
".gitattributes"
]
| prashantsaans | 0 | |||
premnaraindas/test | 2020-11-18T01:16:50.000Z | []
| [
".gitattributes"
]
| premnaraindas | 0 | |||
pricopgabriela/ronec-token-classification | 2021-04-23T12:54:50.000Z | []
| [
".gitattributes"
]
| pricopgabriela | 0 | |||
princeton-nlp/sup-simcse-bert-base-uncased | 2021-05-20T02:54:31.000Z | [
"pytorch",
"jax",
"bert",
"transformers"
]
| [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| princeton-nlp | 4,047 | transformers | ||
princeton-nlp/sup-simcse-bert-large-uncased | 2021-05-20T02:56:23.000Z | [
"pytorch",
"jax",
"bert",
"transformers"
]
| [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| princeton-nlp | 116 | transformers | ||
princeton-nlp/sup-simcse-roberta-base | 2021-05-20T19:33:45.000Z | [
"pytorch",
"jax",
"roberta",
"transformers"
]
| [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| princeton-nlp | 2,043 | transformers | ||
princeton-nlp/sup-simcse-roberta-large | 2021-05-20T19:36:20.000Z | [
"pytorch",
"jax",
"roberta",
"transformers"
]
| [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| princeton-nlp | 1,773 | transformers | ||
princeton-nlp/unsup-simcse-bert-base-uncased | 2021-05-20T02:57:45.000Z | [
"pytorch",
"jax",
"bert",
"transformers"
]
| [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| princeton-nlp | 862 | transformers | ||
princeton-nlp/unsup-simcse-bert-large-uncased | 2021-05-20T02:59:52.000Z | [
"pytorch",
"jax",
"bert",
"transformers"
]
| [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| princeton-nlp | 95 | transformers | ||
princeton-nlp/unsup-simcse-roberta-base | 2021-06-16T12:12:10.000Z | [
"pytorch",
"jax",
"roberta",
"transformers"
]
| [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| princeton-nlp | 160 | transformers | ||
princeton-nlp/unsup-simcse-roberta-large | 2021-06-16T12:15:47.000Z | [
"pytorch",
"jax",
"roberta",
"transformers"
]
| [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| princeton-nlp | 72 | transformers | ||
prithivida/active_to_passive_styletransfer | 2021-06-18T06:48:25.000Z | [
"pytorch",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer.json",
"tokenizer_config.json",
"training_args.bin"
]
| prithivida | 0 | transformers | |
prithivida/grammar_error_correcter | 2021-06-08T05:12:54.000Z | [
"pytorch",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer.json",
"tokenizer_config.json",
"training_args.bin",
"images/GLogo.png",
"images/dummy.txt"
]
| prithivida | 5,084 | transformers | **This model is part of the Gramformer library.** Please refer to https://github.com/PrithivirajDamodaran/Gramformer/
# Gramformer
Human-written and machine-generated text often suffer from grammatical and/or typographical errors: spelling, punctuation, grammar, or word-choice mistakes. Gramformer is a library that exposes 3 separate interfaces to a family of algorithms to **detect, highlight and correct** grammar errors. To make sure the recommended corrections and highlights are of high quality, it comes with a quality estimator. You can use Gramformer in one or more of the areas mentioned under the "Use cases" section below, or in any other use case as you see fit. Gramformer stands on the shoulders of giants; it combines some of the best research in grammar correction. *Note: It works at the **sentence level** and has been trained on sentences of up to 128 tokens, so it is not (yet) suitable for long prose or paragraphs (stay tuned for upcoming releases).*
## Use cases for Gramformer
**Area 1: Post-processing machine-generated text**
Machine language generation is becoming mainstream, and so will post-processing of machine-generated text.
- Conditioned text generation output (Text2Text generation).
- NMT: machine-translated output.
- ASR or STT: speech-to-text output.
- HTR: handwritten text recognition output.
- Paraphrase generation output.
- Controlled text generation output (text generation with PPLM) **[TBD]**.
- Free-form text generation output (text generation) **[TBD]**.
**Area 2: Human-In-The-Loop (HITL) text**
<ul>
<li>Most supervised NLU systems (chatbots and conversational systems) need humans/experts to enter or edit text that must be grammatically correct; otherwise, the quality of the HITL data can degrade the model over time.</li>
</ul>
**Area 3: Assisted writing for humans**
<ul>
<li>Integrating into custom text editors of your apps (a poor man's Grammarly, if you will).</li>
</ul>
**Area 4: Custom platform integration**
As of today, grammatical safety nets for authoring social content (posts or comments) or text in messaging platforms are minimal (word-level correction) or non-existent. The onus is on the author to install tools like Grammarly to proofread.
- Messaging and social platforms can highlight/correct grammatical errors automatically without altering the meaning or intent.
## Installation
```python
pip install git+https://github.com/PrithivirajDamodaran/[email protected]
```
## Quick Start
### Correcter - [Available now]
```python
from gramformer import Gramformer
import torch
def set_seed(seed):
    torch.manual_seed(seed)
    if torch.cuda.is_available():
        torch.cuda.manual_seed_all(seed)

set_seed(1212)

gf = Gramformer(models = 2, use_gpu=False) # 0=detector, 1=highlighter, 2=corrector, 3=all

influent_sentences = [
    "Matt like fish",
    "the collection of letters was original used by the ancient Romans",
    "We enjoys horror movies",
    "Anna and Mike is going skiing",
    "I walk to the store and I bought milk",
    "We all eat the fish and then made dessert",
    "I will eat fish for dinner and drank milk",
    "what be the reason for everyone leave the company",
]

for influent_sentence in influent_sentences:
    corrected_sentence = gf.correct(influent_sentence)
    print("[Input] ", influent_sentence)
    print("[Correction] ", corrected_sentence[0])
    print("-" * 100)
```
```text
[Input] Matt like fish
[Correction] Matt likes fish
----------------------------------------------------------------------------------------------------
[Input] the collection of letters was original used by the ancient Romans
[Correction] The collection of letters was originally used by the ancient Romans.
----------------------------------------------------------------------------------------------------
[Input] We enjoys horror movies
[Correction] We enjoy horror movies
----------------------------------------------------------------------------------------------------
[Input] Anna and Mike is going skiing
[Correction] Anna and Mike are going skiing
----------------------------------------------------------------------------------------------------
[Input] I walk to the store and I bought milk
[Correction] I walked to the store and bought milk.
----------------------------------------------------------------------------------------------------
[Input] We all eat the fish and then made dessert
[Correction] We all ate the fish and then made dessert
----------------------------------------------------------------------------------------------------
[Input] I will eat fish for dinner and drank milk
[Correction] I'll eat fish for dinner and drink milk.
----------------------------------------------------------------------------------------------------
[Input] what be the reason for everyone leave the company
[Correction] what can be the reason for everyone to leave the company.
----------------------------------------------------------------------------------------------------
```
|
prithivida/informal_to_formal_styletransfer | 2021-06-19T08:30:19.000Z | [
"pytorch",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer.json",
"tokenizer_config.json",
"training_args.bin"
]
| prithivida | 492 | transformers | ## This model belongs to the Styleformer project
[Please refer to github page](https://github.com/PrithivirajDamodaran/Styleformer)
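As a rough usage sketch (assuming the Styleformer API described on its GitHub page; the `style=0` selector for the casual-to-formal direction is taken from that README and should be treated as an assumption here):
```python
# Hedged sketch; install first (assumption about the package source):
#   pip install git+https://github.com/PrithivirajDamodaran/Styleformer.git
from styleformer import Styleformer

sf = Styleformer(style=0)  # assumption: style=0 selects casual/informal -> formal transfer
print(sf.transfer("i am gonna head out now, will talk to u later"))
```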
|
prithivida/parrot_adequacy_on_BART | 2021-05-07T09:05:17.000Z | [
"pytorch",
"bart",
"transformers"
]
| [
".gitattributes",
"README.md",
"config.json",
"merges.txt",
"pytorch_model.bin",
"tokenizer.json",
"tokenizer_config.json",
"vocab.json"
]
| prithivida | 6,270 | transformers | # Parrot
THIS IS AN ANCILLARY MODEL FOR PARROT PARAPHRASER
## 1. What is Parrot?
Parrot is a paraphrase-based utterance augmentation framework purpose-built to accelerate the training of NLU models. A paraphrase framework is more than just a paraphrasing model. Please refer to the [github page](https://github.com/PrithivirajDamodaran/Parrot) or the model card for prithivida/parrot_paraphraser_on_T5.
|
|
prithivida/parrot_fluency_on_BERT | 2021-05-20T03:01:25.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| prithivida | 5,870 | transformers | # Parrot
THIS IS AN ANCILLARY MODEL FOR PARROT PARAPHRASER
## 1. What is Parrot?
Parrot is a paraphrase-based utterance augmentation framework purpose-built to accelerate the training of NLU models. A paraphrase framework is more than just a paraphrasing model. Please refer to the [github page](https://github.com/PrithivirajDamodaran/Parrot) or the model card for prithivida/parrot_paraphraser_on_T5.
|
prithivida/parrot_paraphraser_on_T5 | 2021-05-18T07:53:27.000Z | [
"pytorch",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json",
"training_args.bin"
]
| prithivida | 7,745 | transformers | # Parrot
## 1. What is Parrot?
Parrot is a paraphrase-based utterance augmentation framework purpose-built to accelerate the training of NLU models. A paraphrase framework is more than just a paraphrasing model. For more details on the library and its usage, please refer to the [github page](https://github.com/PrithivirajDamodaran/Parrot)
### Installation
```python
pip install git+https://github.com/PrithivirajDamodaran/Parrot_Paraphraser.git
```
### Quickstart
```python
from parrot import Parrot
import torch
import warnings
warnings.filterwarnings("ignore")
'''
uncomment to get reproducible paraphrase generations
def random_state(seed):
    torch.manual_seed(seed)
    if torch.cuda.is_available():
        torch.cuda.manual_seed_all(seed)

random_state(1234)
'''

# Init models (make sure you init ONLY once if you integrate this into your code)
parrot = Parrot(model_tag="prithivida/parrot_paraphraser_on_T5", use_gpu=False)

phrases = [
    "Can you recommed some upscale restaurants in Newyork?",
    "What are the famous places we should not miss in Russia?",
]

for phrase in phrases:
    print("-" * 100)
    print("Input_phrase: ", phrase)
    print("-" * 100)
    para_phrases = parrot.augment(input_phrase=phrase)
    for para_phrase in para_phrases:
        print(para_phrase)
```
```
----------------------------------------------------------------------
Input_phrase: Can you recommed some upscale restaurants in Newyork?
----------------------------------------------------------------------
list some excellent restaurants to visit in new york city?
what upscale restaurants do you recommend in new york?
i want to try some upscale restaurants in new york?
recommend some upscale restaurants in newyork?
can you recommend some high end restaurants in newyork?
can you recommend some upscale restaurants in new york?
can you recommend some upscale restaurants in newyork?
----------------------------------------------------------------------
Input_phrase: What are the famous places we should not miss in Russia
----------------------------------------------------------------------
what should we not miss when visiting russia?
recommend some of the best places to visit in russia?
list some of the best places to visit in russia?
can you list the top places to visit in russia?
show the places that we should not miss in russia?
list some famous places which we should not miss in russia?
```
### Knobs
```python
para_phrases = parrot.augment(
    input_phrase=phrase,
    diversity_ranker="levenshtein",
    do_diverse=False,
    max_return_phrases=10,
    max_length=32,
    adequacy_threshold=0.99,
    fluency_threshold=0.90,
)
```
## 2. Why Parrot?
**Huggingface** lists [12 paraphrase models,](https://huggingface.co/models?pipeline_tag=text2text-generation&search=paraphrase) **RapidAPI** lists 7 freemium and commercial paraphrasers like [QuillBot](https://rapidapi.com/search/paraphrase?section=apis&page=1), Rasa has discussed an experimental paraphraser for augmenting text data [here](https://forum.rasa.com/t/paraphrasing-for-nlu-data-augmentation-experimental/27744), Sentence-transformers offers a [paraphrase mining utility](https://www.sbert.net/examples/applications/paraphrase-mining/README.html) and [NLPAug](https://github.com/makcedward/nlpaug) offers word-level augmentation with a [PPDB](http://paraphrase.org/#/download) (a multi-million paraphrase database). While these attempts at paraphrasing are great, there are still some gaps, and paraphrasing is NOT yet a mainstream option for text augmentation when building NLU models. Parrot is a humble attempt to fill some of these gaps.
**What is a good paraphrase?** Almost all conditioned text generation models are validated on two factors: (1) whether the generated text conveys the same meaning as the original context (Adequacy), and (2) whether the text is fluent / grammatically correct English (Fluency). For instance, Neural Machine Translation outputs are tested for Adequacy and Fluency. But [a good paraphrase](https://www.aclweb.org/anthology/D10-1090.pdf) should be adequate and fluent while being as different as possible in surface lexical form. With respect to this definition, the **3 key metrics** that measure the quality of paraphrases are:
- **Adequacy** (Is the meaning preserved adequately?)
- **Fluency** (Is the paraphrase fluent English?)
- **Diversity (Lexical / Phrasal / Syntactical)** (How much has the paraphrase changed the original sentence?)
*Parrot offers knobs to control Adequacy, Fluency and Diversity as per your needs.*
**What makes a paraphraser a good augmentor?** For training an NLU model, we don't just need a lot of utterances; we need utterances with intents and slots/entities annotated. A typical flow would be:
- Given an **input utterance + input annotations**, a good augmentor spits out N **output paraphrases** while preserving the intent and slots.
- The output paraphrases are then converted into annotated data using the input annotations from step 1.
- The annotated data created from the output paraphrases then forms the training dataset for your NLU model.
But in general, being generative models, paraphrasers don't guarantee that slots/entities are preserved. So the ability to generate high-quality paraphrases in a constrained fashion, without trading off intents and slots for lexical dissimilarity, is what makes a paraphraser a good augmentor. *More on this in section 3 below.*
## 3. Scope
In the space of conversational engines, knowledge bots are the ones **we ask questions**, like *"when was the Berlin wall torn down?"*; transactional bots are the ones **we give commands** to, like *"Turn on the music please"*; and voice assistants are the ones that can both answer questions and act on our commands. Parrot mainly focuses on augmenting text typed into or spoken to conversational interfaces for building robust NLU models. (*People usually neither type nor speak long paragraphs to conversational interfaces; hence the pre-trained model is trained on text samples with a maximum length of 32.*)
*While Parrot predominantly aims to be a text augmentor for building good NLU models, it can also be used as a pure-play paraphraser.*
|
pritoms/supernative | 2021-01-29T00:58:33.000Z | []
| [
".gitattributes"
]
| pritoms | 0 | |||
priyank/Generate_instructions_t5 | 2021-05-13T14:28:11.000Z | [
"pytorch",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json"
]
| priyank | 30 | transformers |
```
import torch
from transformers import T5ForConditionalGeneration,T5Tokenizer
def set_seed(seed):
    torch.manual_seed(seed)
    if torch.cuda.is_available():
        torch.cuda.manual_seed_all(seed)
set_seed(42)
model = T5ForConditionalGeneration.from_pretrained("priyank/Generate_instructions_t5")
tokenizer = T5Tokenizer.from_pretrained("priyank/Generate_instructions_t5")
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)
sentence = "ask user to provide his date of birth"
text = "paraphrase: " + sentence + " </s>"
max_len = 256
encoding = tokenizer.encode_plus(text,pad_to_max_length=True, return_tensors="pt")
input_ids, attention_masks = encoding["input_ids"].to(device), encoding["attention_mask"].to(device)
# sampling settings used below: top_k=120, top_p=0.98, num_return_sequences=10
beam_outputs = model.generate(
    input_ids=input_ids,
    attention_mask=attention_masks,
    do_sample=True,
    max_length=256,
    top_k=120,
    top_p=0.98,
    early_stopping=True,
    num_return_sequences=10
)
print("\nApprentice Query ::")
print(sentence)
print("\nAuto Generated Instruction ::")

final_outputs = []
for beam_output in beam_outputs:
    sent = tokenizer.decode(beam_output, skip_special_tokens=True, clean_up_tokenization_spaces=True)
    if sent.lower() != sentence.lower() and sent not in final_outputs:
        final_outputs.append(sent)

for i, final_output in enumerate(final_outputs):
    print("{}: {}".format(i, final_output))

# Example output (from the original model card):
# Apprentice Query ::
# if balance is greater than $100, then tell the user he needs more balance
#
# Auto Generated Instruction ::
# 0: IF (assert(user.balance > $100)) THEN (say you need more balance)
```
Reference: https://github.com/ramsrigouthamg/Paraphrase-any-question-with-T5-Text-To-Text-Transfer-Transformer- |
progg/shopping-list-ner | 2021-03-01T09:52:13.000Z | [
"pytorch",
"distilbert",
"token-classification",
"transformers"
]
| token-classification | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| progg | 11 | transformers | |
projectaligned/gpt2-xl-reddit-writingprompts-behavior-cloning-full | 2021-04-06T05:34:03.000Z | []
| [
".gitattributes",
"README.md",
"checkpoint-14500/config.json",
"checkpoint-14500/latest",
"checkpoint-14500/merges.txt",
"checkpoint-14500/pytorch_model.bin",
"checkpoint-14500/special_tokens_map.json",
"checkpoint-14500/tokenizer_config.json",
"checkpoint-14500/trainer_state.json",
"checkpoint-14500/training_args.bin",
"checkpoint-14500/vocab.json",
"checkpoint-14500/global_step14500/mp_rank_00_model_states.pt",
"checkpoint-14500/global_step14500/zero_pp_rank_0_mp_rank_00optim_states.pt",
"checkpoint-14500/global_step14500/zero_pp_rank_1_mp_rank_00optim_states.pt",
"checkpoint-14500/global_step14500/zero_pp_rank_2_mp_rank_00optim_states.pt",
"checkpoint-14500/global_step14500/zero_pp_rank_3_mp_rank_00optim_states.pt",
"checkpoint-24000/config.json",
"checkpoint-24000/latest",
"checkpoint-24000/merges.txt",
"checkpoint-24000/pytorch_model.bin",
"checkpoint-24000/special_tokens_map.json",
"checkpoint-24000/tokenizer_config.json",
"checkpoint-24000/trainer_state.json",
"checkpoint-24000/training_args.bin",
"checkpoint-24000/vocab.json",
"checkpoint-24000/global_step24000/mp_rank_00_model_states.pt",
"checkpoint-24000/global_step24000/zero_pp_rank_0_mp_rank_00optim_states.pt",
"checkpoint-24000/global_step24000/zero_pp_rank_1_mp_rank_00optim_states.pt",
"checkpoint-24000/global_step24000/zero_pp_rank_2_mp_rank_00optim_states.pt",
"checkpoint-24000/global_step24000/zero_pp_rank_3_mp_rank_00optim_states.pt",
"checkpoint-500/config.json",
"checkpoint-500/latest",
"checkpoint-500/merges.txt",
"checkpoint-500/pytorch_model.bin",
"checkpoint-500/special_tokens_map.json",
"checkpoint-500/tokenizer_config.json",
"checkpoint-500/trainer_state.json",
"checkpoint-500/training_args.bin",
"checkpoint-500/vocab.json",
"checkpoint-500/global_step500/mp_rank_00_model_states.pt",
"checkpoint-500/global_step500/zero_pp_rank_0_mp_rank_00optim_states.pt",
"checkpoint-500/global_step500/zero_pp_rank_1_mp_rank_00optim_states.pt",
"checkpoint-500/global_step500/zero_pp_rank_2_mp_rank_00optim_states.pt",
"checkpoint-500/global_step500/zero_pp_rank_3_mp_rank_00optim_states.pt"
]
| projectaligned | 0 | This model was trained using prompt_responses_full.csv, which you can read more about [here](https://huggingface.co/datasets/projectaligned/reddit_writingprompts_full).
All other training parameters and settings are accessible via the config.json and trainer_state.json files of the individual checkpoints. |
||
projectaligned/gpt2-xl-reddit-writingprompts-behavior-cloning | 2021-05-23T11:41:20.000Z | [
"pytorch",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"README.md",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| projectaligned | 114 | transformers | _deprecated_
This model is fine-tuned on data from https://www.reddit.com/r/WritingPrompts/
- The model is based on gpt2-xl
- The prompt responses to the top 1000 prompts (by upvote) were used to fine-tune the model.
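A minimal generation sketch (the `[WP]`-style prompt prefix and the sampling settings below are illustrative assumptions based on r/WritingPrompts conventions, not documented behaviour of this checkpoint):
```python
# Hedged sketch: the underlying gpt2-xl model is large, so expect significant memory use.
from transformers import pipeline

story_gen = pipeline(
    "text-generation",
    model="projectaligned/gpt2-xl-reddit-writingprompts-behavior-cloning",
)
prompt = "[WP] You wake up one morning and gravity has stopped working."
print(story_gen(prompt, max_length=120, do_sample=True, top_p=0.95)[0]["generated_text"])
```
|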
projectaligned/gpt2-xl-reddit-writingprompts-reward-model-full | 2021-05-23T12:08:09.000Z | []
| [
".gitattributes"
]
| projectaligned | 53 | |||
proycon/bert-lemma-cased-cgn_elex-nld | 2021-05-20T03:02:48.000Z | [
"pytorch",
"jax",
"bert",
"token-classification",
"transformers"
]
| token-classification | [
".gitattributes",
"config.json",
"eval_results.txt",
"flax_model.msgpack",
"labels.txt",
"model.ot",
"pytorch_model.bin",
"special_tokens_map.json",
"test_results.txt",
"tokenizer_config.json",
"vocab.txt"
]
| proycon | 21 | transformers |