modelId (string, 4–112 chars) | lastModified (string, 24 chars) | tags (list) | pipeline_tag (21 classes) | files (list) | publishedBy (string, 2–37 chars) | downloads_last_month (int32, 0–9.44M) | library (15 classes) | modelCard (string, 0–100k chars)
---|---|---|---|---|---|---|---|---
readerbench/RoBERT-small | 2021-05-20T04:10:36.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"ro",
"transformers"
]
| [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
]
| readerbench | 197 | transformers | Model card for RoBERT-small
---
language:
- ro
---
# RoBERT-small
## Pretrained BERT model for Romanian
Pretrained model on the Romanian language using masked language modeling (MLM) and next sentence prediction (NSP) objectives.
It was introduced in this [paper](https://www.aclweb.org/anthology/2020.coling-main.581/). Three BERT models were released: **RoBERT-small**, RoBERT-base and RoBERT-large, all of them uncased.
| Model | Parameters | L (layers) | H (hidden size) | A (attention heads) | MLM accuracy | NSP accuracy |
|----------------|:---------:|:------:|:------:|:------:|:--------------:|:--------------:|
| *RoBERT-small* | *19M* | *12* | *256* | *8* | *0.5363* | *0.9687* |
| RoBERT-base | 114M | 12 | 768 | 12 | 0.6511 | 0.9802 |
| RoBERT-large | 341M | 24 | 1024 | 24 | 0.6929 | 0.9843 |
All models are available:
* [RoBERT-small](https://huggingface.co/readerbench/RoBERT-small)
* [RoBERT-base](https://huggingface.co/readerbench/RoBERT-base)
* [RoBERT-large](https://huggingface.co/readerbench/RoBERT-large)
#### How to use
```python
# tensorflow
from transformers import AutoTokenizer, TFAutoModel
tokenizer = AutoTokenizer.from_pretrained("readerbench/RoBERT-small")
model = TFAutoModel.from_pretrained("readerbench/RoBERT-small")
inputs = tokenizer("exemplu de propoziție", return_tensors="tf")
outputs = model(inputs)
# pytorch
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("readerbench/RoBERT-small")
model = AutoModel.from_pretrained("readerbench/RoBERT-small")
inputs = tokenizer("exemplu de propoziție", return_tensors="pt")
outputs = model(**inputs)
```
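Since the model was pretrained with an MLM objective, it can also be queried through the fill-mask pipeline. The following sketch is not part of the original card; the example sentence is made up and it assumes the MLM head is shipped with the checkpoint:

```python
from transformers import pipeline

# hypothetical fill-mask example (assumes the MLM head is available in the checkpoint)
fill_mask = pipeline("fill-mask", model="readerbench/RoBERT-small")
print(fill_mask("București este [MASK] României."))
```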
## Training data
The model is trained on the following compilation of corpora. Note that we present the statistics after the cleaning process.
| Corpus | Words | Sentences | Size (GB)|
|-----------|:---------:|:---------:|:--------:|
| Oscar | 1.78B | 87M | 10.8 |
| RoTex | 240M | 14M | 1.5 |
| RoWiki | 50M | 2M | 0.3 |
| **Total** | **2.07B** | **103M** | **12.6** |
## Downstream performance
### Sentiment analysis
We report the macro-averaged F1 score (in %).
| Model | Dev | Test |
|------------------|:--------:|:--------:|
| multilingual-BERT| 68.96 | 69.57 |
| XLM-R-base | 71.26 | 71.71 |
| BERT-base-ro | 70.49 | 71.02 |
| *RoBERT-small* | *66.32* | *66.37* |
| RoBERT-base | 70.89 | 71.61 |
| RoBERT-large | **72.48**| **72.11**|
### Moldavian vs. Romanian Dialect and Cross-dialect Topic identification
We report results on [VarDial 2019](https://sites.google.com/view/vardial2019/campaign) Moldavian vs. Romanian Cross-dialect Topic identification Challenge, as Macro-averaged F1 score (in %).
| Model | Dialect Classification | MD to RO | RO to MD |
|-------------------|:----------------------:|:--------:|:--------:|
| 2-CNN + SVM | 93.40 | 65.09 | 75.21 |
| Char+Word SVM | 96.20 | 69.08 | 81.93 |
| BiGRU | 93.30 | **70.10**| 80.30 |
| multilingual-BERT | 95.34 | 68.76 | 78.24 |
| XLM-R-base | 96.28 | 69.93 | 82.28 |
| BERT-base-ro | 96.20 | 69.93 | 78.79 |
| *RoBERT-small* | *95.67* | *69.01* | *80.40* |
| RoBERT-base | 97.39 | 68.30 | 81.09 |
| RoBERT-large | **97.78** | 69.91 | **83.65**|
### Diacritics Restoration
The challenge can be found [here](https://diacritics-challenge.speed.pub.ro/). We report results on the official test set as accuracy (in %).
| Model | word level | char level |
|-----------------------------|:----------:|:----------:|
| BiLSTM | 99.42 | - |
| CharCNN | 98.40 | 99.65 |
| CharCNN + multilingual-BERT | 99.72 | 99.94 |
| CharCNN + XLM-R-base | 99.76 | **99.95** |
| CharCNN + BERT-base-ro | **99.79** | **99.95** |
| *CharCNN + RoBERT-small* | *99.73* | *99.94* |
| CharCNN + RoBERT-base | 99.78 | **99.95** |
| CharCNN + RoBERT-large | 99.76 | **99.95** |
### BibTeX entry and citation info
```bibtex
@inproceedings{masala2020robert,
title={RoBERT--A Romanian BERT Model},
author={Masala, Mihai and Ruseti, Stefan and Dascalu, Mihai},
booktitle={Proceedings of the 28th International Conference on Computational Linguistics},
pages={6626--6637},
year={2020}
}
```
|
|
redewiedergabe/bert-base-historical-german-rw-cased | 2021-05-20T04:11:23.000Z | [
"pytorch",
"jax",
"bert",
"masked-lm",
"de",
"arxiv:1508.01991",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"README.md",
"added_tokens.json",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| redewiedergabe | 58 | transformers | ---
language: de
---
# Model description
## Dataset
Trained on fictional and non-fictional German texts written between 1840 and 1920:
* Narrative texts from Digitale Bibliothek (https://textgrid.de/digitale-bibliothek)
* Fairy tales and sagas from Grimm Korpus (https://www1.ids-mannheim.de/kl/projekte/korpora/archiv/gri.html)
* Newspaper and magazine articles from Mannheimer Korpus Historischer Zeitungen und Zeitschriften (https://repos.ids-mannheim.de/mkhz-beschreibung.html)
* Magazine articles from the journal „Die Grenzboten“ (http://www.deutschestextarchiv.de/doku/textquellen#grenzboten)
* Fictional and non-fictional texts from Projekt Gutenberg (https://www.projekt-gutenberg.org)
## Hardware used
1 Tesla P4 GPU
## Hyperparameters
| Parameter | Value |
|-------------------------------|----------|
| Epochs | 3 |
| Gradient_accumulation_steps | 1 |
| Train_batch_size | 32 |
| Learning_rate | 0.00003 |
| Max_seq_len | 128 |
## Evaluation results: Automatic tagging of four forms of speech/thought/writing representation in historical fictional and non-fictional German texts
The language model was used to tag direct, indirect, reported, and free indirect speech/thought/writing representation in fictional and non-fictional German texts. The tagger is available and described in detail at https://github.com/redewiedergabe/tagger.
The tagging model was trained using the SequenceTagger class of the Flair framework ([Akbik et al., 2019](https://www.aclweb.org/anthology/N19-4010)), which implements a BiLSTM-CRF architecture on top of a language embedding (as proposed by [Huang et al. (2015)](https://arxiv.org/abs/1508.01991)); see the sketch after the hyperparameter table below.
Tagger hyperparameters
| Parameter | Value |
|-------------------------------|------------|
| Hidden_size | 256 |
| Learning_rate | 0.1 |
| Mini_batch_size | 8 |
| Max_epochs | 150 |
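A minimal sketch of how such a tagger could be assembled with Flair and the hyperparameters above. This is not the original training script: the corpus path, column layout and the `stwr` tag type are placeholders, and the exact Flair method names vary slightly between versions:

```python
from flair.datasets import ColumnCorpus
from flair.embeddings import TransformerWordEmbeddings
from flair.models import SequenceTagger
from flair.trainers import ModelTrainer

# hypothetical column-formatted corpus: token in column 0, STWR label in column 1
corpus = ColumnCorpus("data/", {0: "text", 1: "stwr"},
                      train_file="train.txt", dev_file="dev.txt", test_file="test.txt")
tag_dictionary = corpus.make_tag_dictionary(tag_type="stwr")

# the historical-German BERT serves as the language embedding under the BiLSTM-CRF
embeddings = TransformerWordEmbeddings("redewiedergabe/bert-base-historical-german-rw-cased")
tagger = SequenceTagger(hidden_size=256,
                        embeddings=embeddings,
                        tag_dictionary=tag_dictionary,
                        tag_type="stwr",
                        use_crf=True)

trainer = ModelTrainer(tagger, corpus)
trainer.train("models/stwr-tagger",
              learning_rate=0.1,
              mini_batch_size=8,
              max_epochs=150)
```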
Results are reported below in comparison to a custom-trained Flair embedding stacked onto a custom-trained fastText model. Both models were trained on the same dataset.
| Type | BERT F1 | BERT Precision | BERT Recall | FastText+Flair F1 | FastText+Flair Precision | FastText+Flair Recall | Test data |
|----------------|----------|-----------|----------|------|-----------|--------|--------|
| Direct | 0.80 | 0.86 | 0.74 | 0.84 | 0.90 | 0.79 |historical German, fictional & non-fictional|
| Indirect | **0.76** | **0.79** | **0.73** | 0.73 | 0.78 | 0.68 |historical German, fictional & non-fictional|
| Reported | **0.58** | **0.69** | **0.51** | 0.56 | 0.68 | 0.48 |historical German, fictional & non-fictional|
| Free indirect | **0.57** | **0.80** | **0.44** | 0.47 | 0.78 | 0.34 |modern German, fictional|
## Intended use
Historical German texts (1840 to 1920).
(It also showed good performance on modern German fictional texts.)
|
redrussianarmy/gpt2-turkish-cased | 2021-05-23T12:12:42.000Z | [
"pytorch",
"tf",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"tr",
"transformers",
"turkish",
"gpt2-tr",
"gpt2-turkish",
"text-generation"
]
| text-generation | [
".gitattributes",
".gitignore",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"metadata.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
]
| redrussianarmy | 394 | transformers | ---
language: "tr"
tags:
- turkish
- tr
- gpt2-tr
- gpt2-turkish
---
# 🇹🇷 Turkish GPT-2 Model
In this repository I release a GPT-2 model that was trained on various Turkish texts.
The model is meant to be an entry point for fine-tuning on other texts.
## Training corpora
I used a Turkish corpus taken from the OSCAR corpus.
Using the Hugging Face Tokenizers library, I created a 52K byte-level BPE vocab based on the training corpus; a sketch of this step follows the training logs link below.
After creating the vocab, I trained the Turkish GPT-2 on two 2080 Ti GPUs over the complete training corpus (five epochs).
Logs during training:
https://tensorboard.dev/experiment/3AWKv8bBTaqcqZP5frtGkw/#scalars
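A minimal sketch of the vocab-creation step with the Tokenizers library. It is not the original script; the corpus file name, output directory, and special tokens are placeholders:

```python
from tokenizers import ByteLevelBPETokenizer

# hypothetical path to the cleaned Turkish OSCAR dump
tokenizer = ByteLevelBPETokenizer()
tokenizer.train(files=["tr_oscar.txt"], vocab_size=52000, min_frequency=2,
                special_tokens=["<s>", "<pad>", "</s>", "<unk>", "<mask>"])
tokenizer.save_model("gpt2-turkish-tokenizer")  # writes vocab.json and merges.txt
```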
## Model weights
Both PyTorch and Tensorflow compatible weights are available.
| Model | Downloads
| --------------------------------- | ---------------------------------------------------------------------------------------------------------------
| `redrussianarmy/gpt2-turkish-cased` | [`config.json`](https://huggingface.co/redrussianarmy/gpt2-turkish-cased/resolve/main/config.json) • [`merges.txt`](https://huggingface.co/redrussianarmy/gpt2-turkish-cased/resolve/main/merges.txt) • [`pytorch_model.bin`](https://huggingface.co/redrussianarmy/gpt2-turkish-cased/resolve/main/pytorch_model.bin) • [`special_tokens_map.json`](https://huggingface.co/redrussianarmy/gpt2-turkish-cased/resolve/main/special_tokens_map.json) • [`tf_model.h5`](https://huggingface.co/redrussianarmy/gpt2-turkish-cased/resolve/main/tf_model.h5) • [`tokenizer_config.json`](https://huggingface.co/redrussianarmy/gpt2-turkish-cased/resolve/main/tokenizer_config.json) • [`training_args.bin`](https://huggingface.co/redrussianarmy/gpt2-turkish-cased/resolve/main/training_args.bin) • [`vocab.json`](https://huggingface.co/redrussianarmy/gpt2-turkish-cased/resolve/main/vocab.json)
## Using the model
The model itself can be used in this way:
``` python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("redrussianarmy/gpt2-turkish-cased")
model = AutoModelWithLMHead.from_pretrained("redrussianarmy/gpt2-turkish-cased")
```
Here's an example that shows how to use the great Transformers Pipelines for generating text:
``` python
from transformers import pipeline
pipe = pipeline('text-generation', model="redrussianarmy/gpt2-turkish-cased",
tokenizer="redrussianarmy/gpt2-turkish-cased", config={'max_length':800})
text = pipe("Akşamüstü yolda ilerlerken, ")[0]["generated_text"]
print(text)
```
### How to clone the model repo?
```
git lfs install
git clone https://huggingface.co/redrussianarmy/gpt2-turkish-cased
```
## Contact (Bugs, Feedback, Contribution and more)
For questions about the GPT2-Turkish model, just open an issue [here](https://github.com/redrussianarmy/gpt2-turkish/issues) 🤗 |
remi/bertabs-finetuned-cnndm-extractive-abstractive-summarization | 2021-05-20T04:14:02.000Z | [
"pytorch",
"jax",
"bert",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin"
]
| remi | 68 | transformers | |
remi/bertabs-finetuned-extractive-abstractive-summarization | 2021-05-20T04:15:22.000Z | [
"pytorch",
"jax",
"bert",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin"
]
| remi | 195 | transformers | |
remi/bertabs-finetuned-xsum-extractive-abstractive-summarization | 2021-05-20T04:17:40.000Z | [
"pytorch",
"bert",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"pytorch_model.bin"
]
| remi | 65 | transformers | |
remotejob/tweetsDISTILGPT2fi_v1 | 2021-06-15T21:59:21.000Z | [
"pytorch",
"rust",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"README.md",
"config.json",
"merges.txt",
"pytorch_model.bin",
"rust_model.ot",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json",
"vocab.json"
]
| remotejob | 71 | transformers | hello
|
remotejob/tweetsGPT2fi_v1 | 2021-06-12T16:40:33.000Z | [
"pytorch",
"rust",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"config.json",
"merges.txt",
"pytorch_model.bin",
"rust_model.ot",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json",
"vocab.json"
]
| remotejob | 153 | transformers | |
replydotai/albert-xxlarge-v1-finetuned-squad2 | 2020-04-24T16:05:36.000Z | [
"pytorch",
"albert",
"question-answering",
"transformers"
]
| question-answering | [
".gitattributes",
"added_tokens.json",
"config.json",
"nbest_predictions_.json",
"null_odds_.json",
"predictions_.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json",
"training_args.bin"
]
| replydotai | 47 | transformers | |
researchaccount/continue_mlm | 2021-05-20T04:18:46.000Z | [
"pytorch",
"jax",
"bert",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"trainer_state.json",
"training_args.bin",
"vocab.txt"
]
| researchaccount | 7 | transformers | |
researchaccount/sa_cmlbert | 2021-05-22T23:55:52.000Z | []
| [
".gitattributes",
"train_0/best_model/config.json",
"train_0/best_model/pytorch_model.bin",
"train_0/best_model/special_tokens_map.json",
"train_0/best_model/tokenizer.json",
"train_0/best_model/tokenizer_config.json",
"train_0/best_model/training_args.bin",
"train_0/best_model/vocab.txt",
"train_1/best_model/config.json",
"train_1/best_model/pytorch_model.bin",
"train_1/best_model/special_tokens_map.json",
"train_1/best_model/tokenizer.json",
"train_1/best_model/tokenizer_config.json",
"train_1/best_model/training_args.bin",
"train_1/best_model/vocab.txt",
"train_2/best_model/config.json",
"train_2/best_model/pytorch_model.bin",
"train_2/best_model/special_tokens_map.json",
"train_2/best_model/tokenizer.json",
"train_2/best_model/tokenizer_config.json",
"train_2/best_model/training_args.bin",
"train_2/best_model/vocab.txt",
"train_3/best_model/config.json",
"train_3/best_model/pytorch_model.bin",
"train_3/best_model/special_tokens_map.json",
"train_3/best_model/tokenizer.json",
"train_3/best_model/tokenizer_config.json",
"train_3/best_model/training_args.bin",
"train_3/best_model/vocab.txt",
"train_4/best_model/config.json",
"train_4/best_model/pytorch_model.bin",
"train_4/best_model/special_tokens_map.json",
"train_4/best_model/tokenizer.json",
"train_4/best_model/tokenizer_config.json",
"train_4/best_model/training_args.bin",
"train_4/best_model/vocab.txt"
]
| researchaccount | 0 | |||
researchaccount/sa_cmlbert2 | 2021-05-23T01:12:34.000Z | []
| [
".gitattributes",
"train_0/best_model/config.json",
"train_0/best_model/pytorch_model.bin",
"train_0/best_model/special_tokens_map.json",
"train_0/best_model/tokenizer.json",
"train_0/best_model/tokenizer_config.json",
"train_0/best_model/training_args.bin",
"train_0/best_model/vocab.txt",
"train_1/best_model/config.json",
"train_1/best_model/pytorch_model.bin",
"train_1/best_model/special_tokens_map.json",
"train_1/best_model/tokenizer.json",
"train_1/best_model/tokenizer_config.json",
"train_1/best_model/training_args.bin",
"train_1/best_model/vocab.txt",
"train_2/best_model/config.json",
"train_2/best_model/pytorch_model.bin",
"train_2/best_model/special_tokens_map.json",
"train_2/best_model/tokenizer.json",
"train_2/best_model/tokenizer_config.json",
"train_2/best_model/training_args.bin",
"train_2/best_model/vocab.txt",
"train_3/best_model/config.json",
"train_3/best_model/pytorch_model.bin",
"train_3/best_model/special_tokens_map.json",
"train_3/best_model/tokenizer.json",
"train_3/best_model/tokenizer_config.json",
"train_3/best_model/training_args.bin",
"train_3/best_model/vocab.txt",
"train_4/best_model/config.json",
"train_4/best_model/pytorch_model.bin",
"train_4/best_model/special_tokens_map.json",
"train_4/best_model/tokenizer.json",
"train_4/best_model/tokenizer_config.json",
"train_4/best_model/training_args.bin",
"train_4/best_model/vocab.txt"
]
| researchaccount | 0 | |||
researchaccount/sa_cnnbert | 2021-05-30T00:46:12.000Z | []
| [
".gitattributes",
"train_0/best_model/config.json",
"train_0/best_model/pytorch_model.bin",
"train_0/best_model/special_tokens_map.json",
"train_0/best_model/tokenizer.json",
"train_0/best_model/tokenizer_config.json",
"train_0/best_model/training_args.bin",
"train_0/best_model/vocab.txt",
"train_1/best_model/config.json",
"train_1/best_model/pytorch_model.bin",
"train_1/best_model/special_tokens_map.json",
"train_1/best_model/tokenizer.json",
"train_1/best_model/tokenizer_config.json",
"train_1/best_model/training_args.bin",
"train_1/best_model/vocab.txt",
"train_2/best_model/config.json",
"train_2/best_model/pytorch_model.bin",
"train_2/best_model/special_tokens_map.json",
"train_2/best_model/tokenizer.json",
"train_2/best_model/tokenizer_config.json",
"train_2/best_model/training_args.bin",
"train_2/best_model/vocab.txt",
"train_3/best_model/config.json",
"train_3/best_model/pytorch_model.bin",
"train_3/best_model/special_tokens_map.json",
"train_3/best_model/tokenizer.json",
"train_3/best_model/tokenizer_config.json",
"train_3/best_model/training_args.bin",
"train_3/best_model/vocab.txt",
"train_4/best_model/config.json",
"train_4/best_model/pytorch_model.bin",
"train_4/best_model/special_tokens_map.json",
"train_4/best_model/tokenizer.json",
"train_4/best_model/tokenizer_config.json",
"train_4/best_model/training_args.bin",
"train_4/best_model/vocab.txt"
]
| researchaccount | 0 | |||
researchaccount/sa_no_AOA | 2021-04-28T23:25:27.000Z | []
| [
".gitattributes",
"train_0/best_model/config.json",
"train_0/best_model/pytorch_model.bin",
"train_0/best_model/special_tokens_map.json",
"train_0/best_model/tokenizer_config.json",
"train_0/best_model/training_args.bin",
"train_0/best_model/vocab.txt",
"train_1/best_model/config.json",
"train_1/best_model/pytorch_model.bin",
"train_1/best_model/special_tokens_map.json",
"train_1/best_model/tokenizer_config.json",
"train_1/best_model/training_args.bin",
"train_1/best_model/vocab.txt",
"train_2/best_model/config.json",
"train_2/best_model/pytorch_model.bin",
"train_2/best_model/special_tokens_map.json",
"train_2/best_model/tokenizer_config.json",
"train_2/best_model/training_args.bin",
"train_2/best_model/vocab.txt",
"train_3/best_model/config.json",
"train_3/best_model/pytorch_model.bin",
"train_3/best_model/special_tokens_map.json",
"train_3/best_model/tokenizer_config.json",
"train_3/best_model/training_args.bin",
"train_3/best_model/vocab.txt",
"train_4/best_model/config.json",
"train_4/best_model/pytorch_model.bin",
"train_4/best_model/special_tokens_map.json",
"train_4/best_model/tokenizer_config.json",
"train_4/best_model/training_args.bin",
"train_4/best_model/vocab.txt"
]
| researchaccount | 0 | |||
researchaccount/sa_no_aoa_in_neutral | 2021-05-29T23:50:48.000Z | []
| [
".gitattributes",
"train_0/best_model/config.json",
"train_0/best_model/pytorch_model.bin",
"train_0/best_model/special_tokens_map.json",
"train_0/best_model/tokenizer.json",
"train_0/best_model/tokenizer_config.json",
"train_0/best_model/training_args.bin",
"train_0/best_model/vocab.txt",
"train_1/best_model/config.json",
"train_1/best_model/pytorch_model.bin",
"train_1/best_model/special_tokens_map.json",
"train_1/best_model/tokenizer.json",
"train_1/best_model/tokenizer_config.json",
"train_1/best_model/training_args.bin",
"train_1/best_model/vocab.txt",
"train_2/best_model/config.json",
"train_2/best_model/pytorch_model.bin",
"train_2/best_model/special_tokens_map.json",
"train_2/best_model/tokenizer.json",
"train_2/best_model/tokenizer_config.json",
"train_2/best_model/training_args.bin",
"train_2/best_model/vocab.txt",
"train_3/best_model/config.json",
"train_3/best_model/pytorch_model.bin",
"train_3/best_model/special_tokens_map.json",
"train_3/best_model/tokenizer.json",
"train_3/best_model/tokenizer_config.json",
"train_3/best_model/training_args.bin",
"train_3/best_model/vocab.txt",
"train_4/best_model/config.json",
"train_4/best_model/pytorch_model.bin",
"train_4/best_model/special_tokens_map.json",
"train_4/best_model/tokenizer.json",
"train_4/best_model/tokenizer_config.json",
"train_4/best_model/training_args.bin",
"train_4/best_model/vocab.txt"
]
| researchaccount | 0 | |||
researchaccount/sa_no_emoji_aug | 2021-04-29T01:23:47.000Z | []
| [
".gitattributes",
"train_0/best_model/config.json",
"train_0/best_model/pytorch_model.bin",
"train_0/best_model/special_tokens_map.json",
"train_0/best_model/tokenizer_config.json",
"train_0/best_model/training_args.bin",
"train_0/best_model/vocab.txt",
"train_1/best_model/config.json",
"train_1/best_model/pytorch_model.bin",
"train_1/best_model/special_tokens_map.json",
"train_1/best_model/tokenizer_config.json",
"train_1/best_model/training_args.bin",
"train_1/best_model/vocab.txt",
"train_2/best_model/config.json",
"train_2/best_model/pytorch_model.bin",
"train_2/best_model/special_tokens_map.json",
"train_2/best_model/tokenizer_config.json",
"train_2/best_model/training_args.bin",
"train_2/best_model/vocab.txt",
"train_3/best_model/config.json",
"train_3/best_model/pytorch_model.bin",
"train_3/best_model/special_tokens_map.json",
"train_3/best_model/tokenizer_config.json",
"train_3/best_model/training_args.bin",
"train_3/best_model/vocab.txt",
"train_4/best_model/config.json",
"train_4/best_model/pytorch_model.bin",
"train_4/best_model/special_tokens_map.json",
"train_4/best_model/tokenizer_config.json",
"train_4/best_model/training_args.bin",
"train_4/best_model/vocab.txt"
]
| researchaccount | 0 | |||
researchaccount/sa_no_emojies_no_AOA | 2021-05-10T23:43:53.000Z | []
| [
".gitattributes",
"train_0/best_model/config.json",
"train_0/best_model/pytorch_model.bin",
"train_0/best_model/special_tokens_map.json",
"train_0/best_model/tokenizer_config.json",
"train_0/best_model/training_args.bin",
"train_0/best_model/vocab.txt",
"train_1/best_model/config.json",
"train_1/best_model/pytorch_model.bin",
"train_1/best_model/special_tokens_map.json",
"train_1/best_model/tokenizer_config.json",
"train_1/best_model/training_args.bin",
"train_1/best_model/vocab.txt",
"train_2/best_model/config.json",
"train_2/best_model/pytorch_model.bin",
"train_2/best_model/special_tokens_map.json",
"train_2/best_model/tokenizer_config.json",
"train_2/best_model/training_args.bin",
"train_2/best_model/vocab.txt",
"train_3/best_model/config.json",
"train_3/best_model/pytorch_model.bin",
"train_3/best_model/special_tokens_map.json",
"train_3/best_model/tokenizer_config.json",
"train_3/best_model/training_args.bin",
"train_3/best_model/vocab.txt",
"train_4/best_model/config.json",
"train_4/best_model/pytorch_model.bin",
"train_4/best_model/special_tokens_map.json",
"train_4/best_model/tokenizer_config.json",
"train_4/best_model/training_args.bin",
"train_4/best_model/vocab.txt"
]
| researchaccount | 0 | |||
researchaccount/sa_sarcasm | 2021-05-03T00:10:12.000Z | []
| [
".gitattributes",
"train_0/best_model/config.json",
"train_0/best_model/pytorch_model.bin",
"train_0/best_model/special_tokens_map.json",
"train_0/best_model/tokenizer_config.json",
"train_0/best_model/training_args.bin",
"train_0/best_model/vocab.txt",
"train_1/best_model/config.json",
"train_1/best_model/pytorch_model.bin",
"train_1/best_model/special_tokens_map.json",
"train_1/best_model/tokenizer_config.json",
"train_1/best_model/training_args.bin",
"train_1/best_model/vocab.txt",
"train_2/best_model/config.json",
"train_2/best_model/pytorch_model.bin",
"train_2/best_model/special_tokens_map.json",
"train_2/best_model/tokenizer_config.json",
"train_2/best_model/training_args.bin",
"train_2/best_model/vocab.txt",
"train_3/best_model/config.json",
"train_3/best_model/pytorch_model.bin",
"train_3/best_model/special_tokens_map.json",
"train_3/best_model/tokenizer_config.json",
"train_3/best_model/training_args.bin",
"train_3/best_model/vocab.txt",
"train_4/best_model/config.json",
"train_4/best_model/pytorch_model.bin",
"train_4/best_model/special_tokens_map.json",
"train_4/best_model/tokenizer_config.json",
"train_4/best_model/training_args.bin",
"train_4/best_model/vocab.txt"
]
| researchaccount | 0 | |||
researchaccount/sa_sub1 | 2021-05-20T04:20:12.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"en",
"transformers"
]
| text-classification | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| researchaccount | 10 | transformers | ---
language: en
widget:
- text: "USER USER USER USER لاحول ولاقوه الا بالله 💔 💔 💔 💔 HASH TAG متي يصدر قرار العشرين ! ! ! ! ! !"
---
Sub 1 |
researchaccount/sa_sub11 | 2021-04-21T18:58:26.000Z | []
| [
".gitattributes",
"train_0/best_model/config.json",
"train_0/best_model/pytorch_model.bin",
"train_0/best_model/special_tokens_map.json",
"train_0/best_model/tokenizer_config.json",
"train_0/best_model/training_args.bin",
"train_0/best_model/vocab.txt",
"train_1/best_model/config.json",
"train_1/best_model/pytorch_model.bin",
"train_1/best_model/special_tokens_map.json",
"train_1/best_model/tokenizer_config.json",
"train_1/best_model/training_args.bin",
"train_1/best_model/vocab.txt",
"train_2/best_model/config.json",
"train_2/best_model/pytorch_model.bin",
"train_2/best_model/special_tokens_map.json",
"train_2/best_model/tokenizer_config.json",
"train_2/best_model/training_args.bin",
"train_2/best_model/vocab.txt",
"train_3/best_model/config.json",
"train_3/best_model/pytorch_model.bin",
"train_3/best_model/special_tokens_map.json",
"train_3/best_model/tokenizer_config.json",
"train_3/best_model/training_args.bin",
"train_3/best_model/vocab.txt",
"train_4/best_model/config.json",
"train_4/best_model/pytorch_model.bin",
"train_4/best_model/special_tokens_map.json",
"train_4/best_model/tokenizer_config.json",
"train_4/best_model/training_args.bin",
"train_4/best_model/vocab.txt"
]
| researchaccount | 0 | |||
researchaccount/sa_sub12 | 2021-04-21T22:26:57.000Z | []
| [
".gitattributes",
"train_0/best_model/config.json",
"train_0/best_model/pytorch_model.bin",
"train_0/best_model/special_tokens_map.json",
"train_0/best_model/tokenizer_config.json",
"train_0/best_model/training_args.bin",
"train_0/best_model/vocab.txt",
"train_1/best_model/config.json",
"train_1/best_model/pytorch_model.bin",
"train_1/best_model/special_tokens_map.json",
"train_1/best_model/tokenizer_config.json",
"train_1/best_model/training_args.bin",
"train_1/best_model/vocab.txt",
"train_2/best_model/config.json",
"train_2/best_model/pytorch_model.bin",
"train_2/best_model/special_tokens_map.json",
"train_2/best_model/tokenizer_config.json",
"train_2/best_model/training_args.bin",
"train_2/best_model/vocab.txt",
"train_3/best_model/config.json",
"train_3/best_model/pytorch_model.bin",
"train_3/best_model/special_tokens_map.json",
"train_3/best_model/tokenizer_config.json",
"train_3/best_model/training_args.bin",
"train_3/best_model/vocab.txt",
"train_4/best_model/config.json",
"train_4/best_model/pytorch_model.bin",
"train_4/best_model/special_tokens_map.json",
"train_4/best_model/tokenizer_config.json",
"train_4/best_model/training_args.bin",
"train_4/best_model/vocab.txt",
"train_5/best_model/config.json",
"train_5/best_model/pytorch_model.bin",
"train_5/best_model/special_tokens_map.json",
"train_5/best_model/tokenizer_config.json",
"train_5/best_model/training_args.bin",
"train_5/best_model/vocab.txt",
"train_6/best_model/config.json",
"train_6/best_model/pytorch_model.bin",
"train_6/best_model/special_tokens_map.json",
"train_6/best_model/tokenizer_config.json",
"train_6/best_model/training_args.bin",
"train_6/best_model/vocab.txt",
"train_7/best_model/config.json",
"train_7/best_model/pytorch_model.bin",
"train_7/best_model/special_tokens_map.json",
"train_7/best_model/tokenizer_config.json",
"train_7/best_model/training_args.bin",
"train_7/best_model/vocab.txt",
"train_8/best_model/config.json",
"train_8/best_model/pytorch_model.bin",
"train_8/best_model/special_tokens_map.json",
"train_8/best_model/tokenizer_config.json",
"train_8/best_model/training_args.bin",
"train_8/best_model/vocab.txt",
"train_9/best_model/config.json",
"train_9/best_model/pytorch_model.bin",
"train_9/best_model/special_tokens_map.json",
"train_9/best_model/tokenizer_config.json",
"train_9/best_model/training_args.bin",
"train_9/best_model/vocab.txt"
]
| researchaccount | 0 | |||
researchaccount/sa_sub2 | 2021-05-20T04:21:46.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"en",
"transformers"
]
| text-classification | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"scheduler.pt",
"special_tokens_map.json",
"tokenizer_config.json",
"trainer_state.json",
"training_args.bin",
"vocab.txt"
]
| researchaccount | 6 | transformers | ---
language: en
widget:
- text: "USER USER USER USER لاحول ولاقوه الا بالله 💔 💔 💔 💔 HASH TAG متي يصدر قرار العشرين ! ! ! ! ! !"
---
Sub 2 |
researchaccount/sa_sub3 | 2021-05-20T04:23:04.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"en",
"transformers"
]
| text-classification | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| researchaccount | 17 | transformers | ---
language: en
widget:
- text: "USER USER USER USER لاحول ولاقوه الا بالله 💔 💔 💔 💔 HASH TAG متي يصدر قرار العشرين ! ! ! ! ! !"
---
Sub 3 |
researchaccount/sa_sub4 | 2021-05-20T04:24:52.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"en",
"transformers"
]
| text-classification | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| researchaccount | 6 | transformers | ---
language: en
widget:
- text: "USER USER USER USER لاحول ولاقوه الا بالله 💔 💔 💔 💔 HASH TAG متي يصدر قرار العشرين ! ! ! ! ! !"
---
Sub 4 |
researchaccount/sa_sub5 | 2021-05-20T04:26:03.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| researchaccount | 7 | transformers | |
researchaccount/sa_sub6 | 2021-04-13T01:20:45.000Z | []
| [
".gitattributes",
"results.txt",
"train_0/best_model/config.json",
"train_0/best_model/pytorch_model.bin",
"train_0/best_model/special_tokens_map.json",
"train_0/best_model/tokenizer_config.json",
"train_0/best_model/training_args.bin",
"train_0/best_model/vocab.txt",
"train_1/best_model/config.json",
"train_1/best_model/pytorch_model.bin",
"train_1/best_model/special_tokens_map.json",
"train_1/best_model/tokenizer_config.json",
"train_1/best_model/training_args.bin",
"train_1/best_model/vocab.txt",
"train_2/best_model/config.json",
"train_2/best_model/pytorch_model.bin",
"train_2/best_model/special_tokens_map.json",
"train_2/best_model/tokenizer_config.json",
"train_2/best_model/training_args.bin",
"train_2/best_model/vocab.txt",
"train_3/best_model/config.json",
"train_3/best_model/pytorch_model.bin",
"train_3/best_model/special_tokens_map.json",
"train_3/best_model/tokenizer_config.json",
"train_3/best_model/training_args.bin",
"train_3/best_model/vocab.txt",
"train_4/best_model/config.json",
"train_4/best_model/pytorch_model.bin",
"train_4/best_model/special_tokens_map.json",
"train_4/best_model/tokenizer_config.json",
"train_4/best_model/training_args.bin",
"train_4/best_model/vocab.txt"
]
| researchaccount | 0 | |||
researchaccount/sa_sub7 | 2021-04-13T02:30:26.000Z | []
| [
".gitattributes",
"train_0/best_model/config.json",
"train_0/best_model/pytorch_model.bin",
"train_0/best_model/special_tokens_map.json",
"train_0/best_model/tokenizer_config.json",
"train_0/best_model/training_args.bin",
"train_0/best_model/vocab.txt",
"train_1/best_model/config.json",
"train_1/best_model/pytorch_model.bin",
"train_1/best_model/special_tokens_map.json",
"train_1/best_model/tokenizer_config.json",
"train_1/best_model/training_args.bin",
"train_1/best_model/vocab.txt",
"train_2/best_model/config.json",
"train_2/best_model/pytorch_model.bin",
"train_2/best_model/special_tokens_map.json",
"train_2/best_model/tokenizer_config.json",
"train_2/best_model/training_args.bin",
"train_2/best_model/vocab.txt",
"train_3/best_model/config.json",
"train_3/best_model/pytorch_model.bin",
"train_3/best_model/special_tokens_map.json",
"train_3/best_model/tokenizer_config.json",
"train_3/best_model/training_args.bin",
"train_3/best_model/vocab.txt",
"train_4/best_model/config.json",
"train_4/best_model/pytorch_model.bin",
"train_4/best_model/special_tokens_map.json",
"train_4/best_model/tokenizer_config.json",
"train_4/best_model/training_args.bin",
"train_4/best_model/vocab.txt"
]
| researchaccount | 0 | |||
researchaccount/sa_sub8 | 2021-04-15T20:31:53.000Z | []
| [
".gitattributes",
"train_0/best_model/config.json",
"train_0/best_model/pytorch_model.bin",
"train_0/best_model/special_tokens_map.json",
"train_0/best_model/tokenizer_config.json",
"train_0/best_model/training_args.bin",
"train_0/best_model/vocab.txt",
"train_1/best_model/config.json",
"train_1/best_model/pytorch_model.bin",
"train_1/best_model/special_tokens_map.json",
"train_1/best_model/tokenizer_config.json",
"train_1/best_model/training_args.bin",
"train_1/best_model/vocab.txt",
"train_2/best_model/config.json",
"train_2/best_model/pytorch_model.bin",
"train_2/best_model/special_tokens_map.json",
"train_2/best_model/tokenizer_config.json",
"train_2/best_model/training_args.bin",
"train_2/best_model/vocab.txt",
"train_3/best_model/config.json",
"train_3/best_model/pytorch_model.bin",
"train_3/best_model/special_tokens_map.json",
"train_3/best_model/tokenizer_config.json",
"train_3/best_model/training_args.bin",
"train_3/best_model/vocab.txt",
"train_4/best_model/config.json",
"train_4/best_model/pytorch_model.bin",
"train_4/best_model/special_tokens_map.json",
"train_4/best_model/tokenizer_config.json",
"train_4/best_model/training_args.bin",
"train_4/best_model/vocab.txt"
]
| researchaccount | 0 | |||
researchaccount/sa_trial5_1 | 2021-04-28T22:31:36.000Z | []
| [
".gitattributes",
"train_0/best_model/config.json",
"train_0/best_model/pytorch_model.bin",
"train_0/best_model/special_tokens_map.json",
"train_0/best_model/tokenizer_config.json",
"train_0/best_model/training_args.bin",
"train_0/best_model/vocab.txt",
"train_1/best_model/config.json",
"train_1/best_model/pytorch_model.bin",
"train_1/best_model/special_tokens_map.json",
"train_1/best_model/tokenizer_config.json",
"train_1/best_model/training_args.bin",
"train_1/best_model/vocab.txt",
"train_2/best_model/config.json",
"train_2/best_model/pytorch_model.bin",
"train_2/best_model/special_tokens_map.json",
"train_2/best_model/tokenizer_config.json",
"train_2/best_model/training_args.bin",
"train_2/best_model/vocab.txt",
"train_3/best_model/config.json",
"train_3/best_model/pytorch_model.bin",
"train_3/best_model/special_tokens_map.json",
"train_3/best_model/tokenizer_config.json",
"train_3/best_model/training_args.bin",
"train_3/best_model/vocab.txt",
"train_4/best_model/config.json",
"train_4/best_model/pytorch_model.bin",
"train_4/best_model/special_tokens_map.json",
"train_4/best_model/tokenizer_config.json",
"train_4/best_model/training_args.bin",
"train_4/best_model/vocab.txt"
]
| researchaccount | 0 | |||
researchaccount/sar_trial10 | 2021-05-02T07:30:04.000Z | []
| [
".gitattributes",
"train_0/best_model/config.json",
"train_0/best_model/pytorch_model.bin",
"train_0/best_model/special_tokens_map.json",
"train_0/best_model/tokenizer_config.json",
"train_0/best_model/training_args.bin",
"train_0/best_model/vocab.txt",
"train_1/best_model/config.json",
"train_1/best_model/pytorch_model.bin",
"train_1/best_model/special_tokens_map.json",
"train_1/best_model/tokenizer_config.json",
"train_1/best_model/training_args.bin",
"train_1/best_model/vocab.txt",
"train_2/best_model/config.json",
"train_2/best_model/pytorch_model.bin",
"train_2/best_model/special_tokens_map.json",
"train_2/best_model/tokenizer_config.json",
"train_2/best_model/training_args.bin",
"train_2/best_model/vocab.txt",
"train_3/best_model/config.json",
"train_3/best_model/pytorch_model.bin",
"train_3/best_model/special_tokens_map.json",
"train_3/best_model/tokenizer_config.json",
"train_3/best_model/training_args.bin",
"train_3/best_model/vocab.txt",
"train_4/best_model/config.json",
"train_4/best_model/pytorch_model.bin",
"train_4/best_model/special_tokens_map.json",
"train_4/best_model/tokenizer_config.json",
"train_4/best_model/training_args.bin",
"train_4/best_model/vocab.txt"
]
| researchaccount | 0 | |||
rewardsignal/behavior_cloning | 2021-06-03T15:41:19.000Z | [
"pytorch",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"README.md",
"config.json",
"latest",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"trainer_state.json",
"training_args.bin",
"vocab.json"
]
| rewardsignal | 14 | transformers | This model was trained using prompt_responses_full.csv which you can read more about [here](https://huggingface.co/datasets/rewardsignal/reddit_writing_prompts).
All other training parameters and settings are accessible via the config.json and trainer_state.json files of the individual checkpoints |
rewardsignal/reddit_reward_model | 2021-06-04T01:35:01.000Z | [
"pytorch",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"README.md",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| rewardsignal | 25 | transformers | This model was trained using comparisons_train.csv which you can read more about [here](https://huggingface.co/datasets/projectaligned/reddit_writingprompts_full).
All other training parameters and settings are accessible via the config.json and trainer_state.json files of the individual checkpoints |
rexarski/yes | 2021-04-11T13:13:58.000Z | []
| [
".gitattributes"
]
| rexarski | 0 | |||
ricardocalleja/new_model | 2021-06-14T22:25:39.000Z | []
| [
".gitattributes"
]
| ricardocalleja | 0 | |||
riccardode/multilingual-sentiment-IMDB | 2021-05-25T15:30:06.000Z | []
| [
".gitattributes"
]
| riccardode | 0 | |||
riccardode/tmp6wdd6hgk | 2021-05-17T14:16:56.000Z | []
| [
".gitattributes"
]
| riccardode | 0 | |||
riccardode/tmpdzcq6s_0 | 2021-05-25T20:40:56.000Z | [
"tf",
"xlm-roberta",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"config.json",
"tf_model.h5"
]
| riccardode | 42 | transformers | |
riccardode/tmplji2qjpc | 2021-05-17T14:17:45.000Z | []
| [
".gitattributes"
]
| riccardode | 0 | |||
riccardode/tmppm9oseam | 2021-05-17T14:18:32.000Z | [
"tf",
"xlm-roberta",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"config.json",
"tf_model.h5"
]
| riccardode | 42 | transformers | |
ricklon/GUINAN_MEDIUM_MODEL | 2021-06-04T18:31:01.000Z | []
| [
".gitattributes"
]
| ricklon | 0 | |||
rigo-ramos/biobert | 2020-12-22T21:08:26.000Z | []
| [
".gitattributes"
]
| rigo-ramos | 0 | |||
rinna/japanese-gpt2-medium | 2021-06-07T02:12:05.000Z | [
"pytorch",
"tf",
"jax",
"lm-head",
"ja",
"dataset:cc100",
"transformers",
"japanese",
"gpt2",
"text-generation",
"lm",
"nlp",
"license:mit"
]
| text-generation | [
".gitattributes",
".gitignore",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"rinna.png",
"special_tokens_map.json",
"spiece.model",
"tf_model.h5",
"tokenizer_config.json"
]
| rinna | 5,343 | transformers | ---
language: ja
thumbnail: https://github.com/rinnakk/japanese-gpt2/blob/master/rinna.png
tags:
- ja
- japanese
- gpt2
- text-generation
- lm
- nlp
license: mit
datasets:
- cc100
---
# japanese-gpt2-medium

This repository provides a medium-sized Japanese GPT-2 model. The model is provided by [rinna](https://corp.rinna.co.jp/).
# How to use the model
*NOTE:* Use `T5Tokenizer` to instantiate the tokenizer.
~~~~
from transformers import T5Tokenizer, AutoModelForCausalLM
tokenizer = T5Tokenizer.from_pretrained("rinna/japanese-gpt2-medium")
tokenizer.do_lower_case = True  # due to a bug in the tokenizer config loading
model = AutoModelForCausalLM.from_pretrained("rinna/japanese-gpt2-medium")
~~~~
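A short generation sketch building on the snippet above; the prompt and sampling settings are just examples and are not part of the original card.
~~~~
# hypothetical prompt and sampling settings
input_ids = tokenizer.encode("こんにちは、", return_tensors="pt")
output_ids = model.generate(input_ids, max_length=50, do_sample=True, top_k=50, top_p=0.95)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
~~~~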
# Model architecture
A 24-layer, 1024-hidden-size transformer-based language model.
# Training
The model was trained on [Japanese CC-100](http://data.statmt.org/cc-100/ja.txt.xz) to optimize a traditional language modelling objective on 8 V100 GPUs for around 30 days. It reaches around 18 perplexity on a chosen validation set from the same data.
# Tokenization
The model uses a [sentencepiece](https://github.com/google/sentencepiece)-based tokenizer; the vocabulary was trained on Japanese Wikipedia using the official sentencepiece training script.
# License
[The MIT license](https://opensource.org/licenses/MIT)
|
riomus/test | 2021-03-09T11:31:25.000Z | []
| [
".gitattributes"
]
| riomus | 0 | |||
riteshpatil732/mobileappdevelopment | 2021-05-05T13:20:36.000Z | []
| [
".gitattributes"
]
| riteshpatil732 | 0 | |||
riteshsinha/distilgpt2-fine-tuned-001 | 2021-05-23T12:16:18.000Z | [
"pytorch",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| riteshsinha | 11 | transformers | |
rith87/distilbert-base-uncased-trained | 2021-02-28T23:45:21.000Z | []
| [
".gitattributes"
]
| rith87 | 0 | |||
rjbownes/BBC-GQA-eval | 2021-05-20T04:27:08.000Z | [
"pytorch",
"jax",
"bert",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| rjbownes | 16 | transformers | |
rjbownes/BBC-GQA | 2020-09-11T14:20:45.000Z | [
"pytorch",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json"
]
| rjbownes | 11 | transformers | |
rjbownes/Magic-The-Generating | 2021-05-23T12:17:20.000Z | [
"pytorch",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"README.md",
"added_tokens.json",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| rjbownes | 42 | transformers | ---
widget:
- text: "Even the Dwarves"
- text: "The secrets of"
---
# Model name
Magic The Generating
## Model description
This is a fine-tuned GPT-2 model trained on a corpus of all available English-language Magic: The Gathering card flavour texts.
## Intended uses & limitations
This is intended only for use in generating new, novel, and sometimes surprising, MtG-like flavour texts.
#### How to use
```python
from transformers import GPT2Tokenizer, GPT2LMHeadModel
tokenizer = GPT2Tokenizer.from_pretrained("rjbownes/Magic-The-Generating")
model = GPT2LMHeadModel.from_pretrained("rjbownes/Magic-The-Generating")
```
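A short generation sketch continuing from the snippet above; the sampling settings are illustrative and not from the original card, and the prompt is one of the widget examples.

```python
# hypothetical sampling settings; the prompt is taken from the widget examples above
input_ids = tokenizer.encode("Even the Dwarves", return_tensors="pt")
output_ids = model.generate(input_ids, max_length=60, do_sample=True, top_k=50, top_p=0.95,
                            pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```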
#### Limitations and bias
The training corpus was surprisingly small (only ~29,000 cards); I had suspected there were more. This might mean there is a real limit to the number of entirely original strings this will generate.
This is also based only on the 117M-parameter GPT-2; retraining with the medium, large or XL models is a fairly obvious upgrade. Despite this, the outputs I tested were very convincing!
## Training data
The data consisted of 29222 MtG card flavour texts. The model was based on the "gpt2" pretrained transformer: https://huggingface.co/gpt2.
## Training procedure
Only English-language MtG flavour texts were scraped from the [Scryfall](https://scryfall.com/) API. Empty strings and any non-UTF-8 encoded tokens were removed, leaving 29222 entries.
Training was done on Google Colab with a T4 instance: 4 epochs, the AdamW optimizer with default parameters, and a batch size of 32. Sequences were capped at 98 tokens (the length of the longest string), and an attention mask was added so that padding tokens are ignored during training; a sketch of that tokenization setup follows.
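A sketch of the padding and attention-mask setup described above, not the author's actual training script; `flavour_texts` is a placeholder list of strings:

```python
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default

flavour_texts = ["Every branch a crossroads, every vine a swift steed."]  # placeholder
enc = tokenizer(flavour_texts, padding="max_length", truncation=True,
                max_length=98, return_tensors="pt")
# enc["attention_mask"] is 0 on padding positions, so they are ignored during training
```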
## Eval results
Average Training Loss: 0.44866578806635815.
Validation loss: 0.5606984243444775.
Sample model outputs:
1. "Every branch a crossroads, every vine a swift steed."
—Gwendlyn Di Corci
2. "The secrets of this world will tell their masters where to strike if need be."
—Noyan Dar, Tazeem roilmage
3. "The secrets of nature are expensive. You'd be better off just to have more freedom."
4. "Even the Dwarves knew to leave some stones unturned."
5. "The wise always keep an ear open to the whispers of power."
### BibTeX entry and citation info
```bibtex
@article{BownesLM,
title={Fine Tuning GPT-2 for Magic the Gathering flavour text generation.},
author={Richard J. Bownes},
journal={Medium},
year={2020}
}
```
|
rjbownes/lovelace-evaluator | 2021-05-20T04:28:03.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| rjbownes | 12 | transformers | |
rjbownes/lovelace-generator | 2020-09-23T21:11:03.000Z | [
"pytorch",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"added_tokens.json",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json"
]
| rjbownes | 12 | transformers | |
rmaheshh/zeroshot | 2021-01-11T16:10:34.000Z | []
| [
".gitattributes"
]
| rmaheshh | 0 | |||
rmontero/capas | 2021-02-10T16:02:26.000Z | []
| [
".gitattributes",
"capas.py"
]
| rmontero | 0 | |||
rmxkyz/zh_tf2 | 2021-03-12T01:33:57.000Z | []
| [
".gitattributes",
"config.json",
"model.ckpt.data-00000-of-00001",
"model.ckpt.index",
"vocab.txt"
]
| rmxkyz | 5 | |||
rndlr96/EnBERT_BCE | 2021-05-20T04:29:02.000Z | [
"pytorch",
"bert",
"transformers"
]
| [
".gitattributes",
"config.json",
"pytorch_model.bin"
]
| rndlr96 | 10 | transformers | ||
rndlr96/EnBERT_Nfocal | 2020-11-17T09:08:06.000Z | []
| [
".gitattributes"
]
| rndlr96 | 0 | |||
rndlr96/Focalbest | 2021-05-20T04:29:24.000Z | [
"pytorch",
"bert",
"transformers"
]
| [
".gitattributes",
"config.json",
"pytorch_model.bin"
]
| rndlr96 | 12 | transformers | ||
rndlr96/Nfocal_label_v2 | 2021-05-20T04:29:46.000Z | [
"pytorch",
"bert",
"transformers"
]
| [
".gitattributes",
"config.json",
"pytorch_model.bin"
]
| rndlr96 | 9 | transformers | ||
rndlr96/Nfocal_label_v2_512 | 2021-05-20T04:30:09.000Z | [
"pytorch",
"bert",
"transformers"
]
| [
".gitattributes",
"config.json",
"pytorch_model.bin"
]
| rndlr96 | 13 | transformers | ||
rndlr96/bce_cls_5e_512 | 2021-05-20T04:30:32.000Z | [
"pytorch",
"bert",
"transformers"
]
| [
".gitattributes",
"config.json",
"pytorch_model.bin"
]
| rndlr96 | 10 | transformers | ||
rndlr96/cls_256 | 2021-05-20T04:30:54.000Z | [
"pytorch",
"bert",
"transformers"
]
| [
".gitattributes",
"config.json",
"pytorch_model.bin"
]
| rndlr96 | 10 | transformers | ||
rndlr96/kobert_cls_ipc | 2021-05-20T04:31:13.000Z | [
"pytorch",
"bert",
"transformers"
]
| [
".gitattributes",
"config.json",
"pytorch_model.bin"
]
| rndlr96 | 13 | transformers | ||
rndlr96/kobert_label_ipc | 2021-05-20T04:31:33.000Z | [
"pytorch",
"bert",
"transformers"
]
| [
".gitattributes",
"config.json",
"pytorch_model.bin"
]
| rndlr96 | 13 | transformers | ||
rndlr96/label256 | 2021-05-20T04:31:56.000Z | [
"pytorch",
"bert",
"transformers"
]
| [
".gitattributes",
"config.json",
"pytorch_model.bin"
]
| rndlr96 | 11 | transformers | ||
rndlr96/patent_classification | 2020-11-17T09:04:21.000Z | []
| [
".gitattributes"
]
| rndlr96 | 0 | |||
robot-test/dummy-tokenizer-fast-with-model-config | 2021-05-31T15:40:58.000Z | [
"albert",
"transformers"
]
| [
".gitattributes",
"config.json",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json"
]
| robot-test | 1,291 | transformers | ||
robot-test/dummy-tokenizer-fast | 2021-05-24T15:02:29.000Z | []
| [
".gitattributes",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json"
]
| robot-test | 0 | |||
robot-test/dummy-tokenizer-no-model-config | 2021-05-19T17:56:06.000Z | []
| [
".gitattributes"
]
| robot-test | 0 | |||
rohanrajpal/bert-base-codemixed-uncased-sentiment | 2021-05-20T04:32:54.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"hi",
"en",
"dataset:SAIL 2017",
"transformers",
"codemix"
]
| text-classification | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| rohanrajpal | 164 | transformers | ---
language:
- hi
- en
tags:
- hi
- en
- codemix
datasets:
- SAIL 2017
---
# Model name
## Model description
I took a bert-base-multilingual-cased model from Hugging Face and fine-tuned it on the SAIL 2017 dataset.
## Intended uses & limitations
#### How to use
```python
# You can include sample code which will be formatted
#Coming soon!
```
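Until the author adds the snippet, here is a minimal sketch of the intended usage, following the pattern in the author's related code-mix cards; the example sentence and the label mapping are assumptions taken from those related cards:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("rohanrajpal/bert-base-codemixed-uncased-sentiment")
model = AutoModelForSequenceClassification.from_pretrained("rohanrajpal/bert-base-codemixed-uncased-sentiment")

text = "yeh movie kaafi achhi thi"  # hypothetical Hinglish example
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1))  # 0 = negative, 1 = neutral, 2 = positive (per the related cards)
```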
#### Limitations and bias
Provide examples of latent issues and potential remediations.
## Training data
I trained on the SAIL 2017 dataset ([link](http://amitavadas.com/SAIL/Data/SAIL_2017.zip)), starting from this [pretrained model](https://huggingface.co/bert-base-multilingual-cased).
## Training procedure
No preprocessing.
## Eval results
### BibTeX entry and citation info
```bibtex
@inproceedings{khanuja-etal-2020-gluecos,
title = "{GLUEC}o{S}: An Evaluation Benchmark for Code-Switched {NLP}",
author = "Khanuja, Simran and
Dandapat, Sandipan and
Srinivasan, Anirudh and
Sitaram, Sunayana and
Choudhury, Monojit",
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.acl-main.329",
pages = "3575--3585"
}
```
|
rohanrajpal/bert-base-en-es-codemix-cased | 2021-05-19T00:26:38.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"es",
"en",
"dataset:SAIL 2017",
"transformers",
"codemix",
"license:apache-2.0"
]
| text-classification | [
".gitattributes",
"README.md",
"config.json",
"dataset-metadata.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"test_predictions.txt",
"tf_model.h5",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| rohanrajpal | 112 | transformers | ---
language:
- es
- en
tags:
- es
- en
- codemix
license: "apache-2.0"
datasets:
- SAIL 2017
metrics:
- fscore
- accuracy
- precision
- recall
---
# BERT codemixed base model for spanglish (cased)
This model was built using [lingualytics](https://github.com/lingualytics/py-lingualytics), an open-source library that supports code-mixed analytics.
## Model description
Input for the model: Any codemixed spanglish text
Output for the model: Sentiment. (0 - Negative, 1 - Neutral, 2 - Positive)
I took a bert-base-multilingual-cased model from Hugging Face and fine-tuned it on the [CS-EN-ES-CORPUS](http://www.grupolys.org/software/CS-CORPORA/cs-en-es-corpus-wassa2015.txt) dataset.
Performance of this model on the dataset
| metric | score |
|------------|----------|
| acc | 0.718615 |
| f1 | 0.71759 |
| acc_and_f1 | 0.718103 |
| precision | 0.719302 |
| recall | 0.718615 |
## Intended uses & limitations
Make sure to preprocess your data using [these methods](https://github.com/microsoft/GLUECoS/blob/master/Data/Preprocess_Scripts/preprocess_sent_en_es.py) before using this model.
#### How to use
Here is how to use this model to get the features of a given text in *PyTorch*:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained('rohanrajpal/bert-base-en-es-codemix-cased')
model = AutoModelForSequenceClassification.from_pretrained('rohanrajpal/bert-base-en-es-codemix-cased')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in *TensorFlow*:
```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('rohanrajpal/bert-base-en-es-codemix-cased')
model = TFBertModel.from_pretrained('rohanrajpal/bert-base-en-es-codemix-cased')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
#### Limitations and bias
Since I don't know Spanish, I can't verify the quality of the annotations or the dataset itself. This is a very simple transfer learning approach and I'm open to discussions to improve upon this.
## Training data
I fine-tuned the [bert-base-multilingual-cased model](https://huggingface.co/bert-base-multilingual-cased) on the dataset.
## Training procedure
Followed the preprocessing techniques described [here](https://github.com/microsoft/GLUECoS/blob/master/Data/Preprocess_Scripts/preprocess_sent_en_es.py).
## Eval results
### BibTeX entry and citation info
```bibtex
@inproceedings{khanuja-etal-2020-gluecos,
title = "{GLUEC}o{S}: An Evaluation Benchmark for Code-Switched {NLP}",
author = "Khanuja, Simran and
Dandapat, Sandipan and
Srinivasan, Anirudh and
Sitaram, Sunayana and
Choudhury, Monojit",
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.acl-main.329",
pages = "3575--3585"
}
```
|
rohanrajpal/bert-base-en-hi-codemix-cased | 2021-05-19T00:31:33.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"hi",
"en",
"dataset:SAIL 2017",
"transformers",
"es",
"codemix",
"license:apache-2.0"
]
| text-classification | [
".gitattributes",
"README.md",
"config.json",
"dataset-metadata.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| rohanrajpal | 90 | transformers | ---
language:
- hi
- en
tags:
- es
- en
- codemix
license: "apache-2.0"
datasets:
- SAIL 2017
metrics:
- fscore
- accuracy
- precision
- recall
---
# BERT codemixed base model for Hinglish (cased)
This model was built using [lingualytics](https://github.com/lingualytics/py-lingualytics), an open-source library that supports code-mixed analytics.
## Model description
Input for the model: Any codemixed Hinglish text
Output for the model: Sentiment. (0 - Negative, 1 - Neutral, 2 - Positive)
I took a bert-base-multilingual-cased model from Hugging Face and fine-tuned it on the [SAIL 2017](http://www.dasdipankar.com/SAILCodeMixed.html) dataset.
## Eval results
Performance of this model on the dataset
| metric | score |
|------------|----------|
| acc | 0.55873 |
| f1 | 0.558369 |
| acc_and_f1 | 0.558549 |
| precision | 0.558075 |
| recall | 0.55873 |
#### How to use
Here is how to use this model to get the features of a given text in *PyTorch*:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained('rohanrajpal/bert-base-en-hi-codemix-cased')
model = AutoModelForSequenceClassification.from_pretrained('rohanrajpal/bert-base-en-hi-codemix-cased')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in *TensorFlow*:
```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('rohanrajpal/bert-base-en-hi-codemix-cased')
model = TFBertModel.from_pretrained('rohanrajpal/bert-base-en-hi-codemix-cased')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
#### Preprocessing
Followed standard preprocessing techniques:
- removed digits
- removed punctuation
- removed stopwords
- removed excess whitespace
Here's the snippet
```python
from pathlib import Path

import pandas as pd
from lingualytics.preprocessing import remove_punctuation, remove_stopwords
from lingualytics.stopwords import hi_stopwords, en_stopwords
from texthero.preprocessing import remove_digits, remove_whitespace

root = Path('<path-to-data>')

# Clean each split in place: drop digits, punctuation, Hindi/English stopwords and excess whitespace.
for file in ('test', 'train', 'validation'):
    tochange = root / f'{file}.txt'
    df = pd.read_csv(tochange, header=None, sep='\t', names=['text', 'label'])
    df['text'] = df['text'].pipe(remove_digits) \
                           .pipe(remove_punctuation) \
                           .pipe(remove_stopwords, stopwords=en_stopwords.union(hi_stopwords)) \
                           .pipe(remove_whitespace)
    df.to_csv(tochange, index=None, header=None, sep='\t')
```
## Training data
The dataset and its annotations are of limited quality, but it was the best dataset I could find. I am working on procuring my own dataset and will try to come up with a better model!
## Training procedure
I fine-tuned the [bert-base-multilingual-cased model](https://huggingface.co/bert-base-multilingual-cased) on this dataset.
|
rohanrajpal/bert-base-multilingual-codemixed-cased-sentiment | 2021-05-19T00:35:16.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"hi",
"en",
"dataset:SAIL 2017",
"transformers",
"codemix",
"license:apache-2.0"
]
| text-classification | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
]
| rohanrajpal | 144 | transformers | ---
language:
- hi
- en
tags:
- hi
- en
- codemix
license: "apache-2.0"
datasets:
- SAIL 2017
metrics:
- fscore
- accuracy
---
# BERT codemixed base model for Hinglish (cased)
## Model description
Input for the model: Any codemixed Hinglish text
Output for the model: Sentiment (0 - Negative, 1 - Neutral, 2 - Positive)
I took the bert-base-multilingual-cased model from Hugging Face and fine-tuned it on the [SAIL 2017](http://www.dasdipankar.com/SAILCodeMixed.html) dataset.
Performance of this model on the SAIL 2017 dataset
| metric | score |
|------------|----------|
| acc | 0.588889 |
| f1 | 0.582678 |
| acc_and_f1 | 0.585783 |
| precision | 0.586516 |
| recall | 0.588889 |
## Intended uses & limitations
#### How to use
Here is how to use this model to classify a given text in *PyTorch*:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("rohanrajpal/bert-base-multilingual-codemixed-cased-sentiment")
model = AutoModelForSequenceClassification.from_pretrained("rohanrajpal/bert-base-multilingual-codemixed-cased-sentiment")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in *TensorFlow*:
```python
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("rohanrajpal/bert-base-multilingual-codemixed-cased-sentiment")
model = TFAutoModelForSequenceClassification.from_pretrained("rohanrajpal/bert-base-multilingual-codemixed-cased-sentiment")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
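For quick experiments, the checkpoint can also be wrapped in a `pipeline`. A hedged sketch follows: the emitted label strings depend on this checkpoint's config (often `LABEL_0`/`LABEL_1`/`LABEL_2`), and mapping them onto Negative/Neutral/Positive is an assumption based on the description above.
```python
from transformers import pipeline

# Sentiment pipeline over the fine-tuned checkpoint.
# Assumption: LABEL_0 / LABEL_1 / LABEL_2 correspond to Negative / Neutral / Positive.
classifier = pipeline(
    "sentiment-analysis",
    model="rohanrajpal/bert-base-multilingual-codemixed-cased-sentiment",
)
print(classifier("yeh movie bahut achhi thi"))  # hypothetical Hinglish example input
```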
#### Limitations and bias
Coming soon!
## Training data
I trained on the [SAIL 2017 dataset](http://amitavadas.com/SAIL/Data/SAIL_2017.zip), starting from this [pretrained model](https://huggingface.co/bert-base-multilingual-cased).
## Training procedure
No preprocessing.
## Eval results
### BibTeX entry and citation info
```bibtex
@inproceedings{khanuja-etal-2020-gluecos,
title = "{GLUEC}o{S}: An Evaluation Benchmark for Code-Switched {NLP}",
author = "Khanuja, Simran and
Dandapat, Sandipan and
Srinivasan, Anirudh and
Sitaram, Sunayana and
Choudhury, Monojit",
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.acl-main.329",
pages = "3575--3585"
}
```
|
rohitkatlaa/Longformer-pcra-trained | 2021-03-19T17:17:12.000Z | []
| [
".gitattributes"
]
| rohitkatlaa | 0 | |||
ronalddt/test | 2021-06-14T11:39:02.000Z | []
| [
".gitattributes"
]
| ronalddt | 0 | |||
royeis/T5-Factual-Classifier-V1 | 2021-04-05T11:50:20.000Z | [
"pytorch",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"added_tokens.json",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json"
]
| royeis | 13 | transformers | |
royeis/T5-FlowNLG-Planner | 2021-01-25T09:50:56.000Z | [
"pytorch",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"config.json",
"pytorch_model.bin"
]
| royeis | 15 | transformers | |
royeis/T5-FlowNLG-Realizer | 2021-01-23T08:49:14.000Z | [
"pytorch",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"config.json",
"pytorch_model.bin"
]
| royeis | 13 | transformers | |
rpowalski/layoutlm-base-qa | 2021-06-17T09:44:07.000Z | [
"pytorch"
]
| [
".gitattributes",
"all_tags.json",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json",
"vocab.txt"
]
| rpowalski | 105 | |||
rrsartpl/Test1 | 2021-03-12T07:24:02.000Z | []
| [
".gitattributes"
]
| rrsartpl | 0 | |||
rsvp-AI-ca/bert-uncased-base-50k | 2020-12-13T03:01:46.000Z | [
"pytorch",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"vocab.txt"
]
| rsvp-AI-ca | 12 | transformers | |
rsvp-AI-ca/segabert-large | 2021-05-20T04:34:27.000Z | [
"pytorch",
"bert",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"vocab.txt"
]
| rsvp-AI-ca | 12 | transformers | |
rsvp-AI-ca/segabert-uncased-base-50k | 2020-12-13T03:04:25.000Z | [
"pytorch",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"vocab.txt"
]
| rsvp-AI-ca | 11 | transformers | |
rsvp-AI-ca/segatransformer-xl-base | 2020-12-13T03:29:09.000Z | []
| [
".gitattributes",
"model.pt"
]
| rsvp-AI-ca | 0 | |||
rsvp-AI-ca/segatransformer-xl-large | 2020-12-13T04:09:18.000Z | []
| [
".gitattributes",
"model.pt"
]
| rsvp-AI-ca | 0 | |||
rsvp-AI-ca/sentence-segabert-large | 2020-12-13T00:00:15.000Z | [
"transformers"
]
| [
".gitattributes",
"config.json",
"modules.json",
"0_Transformer/added_tokens.json",
"0_Transformer/config.json",
"0_Transformer/pytorch_model.bin",
"0_Transformer/sentence_bert_config.json",
"0_Transformer/special_tokens_map.json",
"0_Transformer/tokenizer_config.json",
"0_Transformer/vocab.txt",
"1_Pooling/config.json"
]
| rsvp-AI-ca | 10 | transformers | ||
rsvp-ai/bertserini-bert-base-cmrc | 2021-05-19T00:38:49.000Z | [
"pytorch",
"jax",
"bert",
"question-answering",
"transformers"
]
| question-answering | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"nbest_predictions_.json",
"predictions_.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| rsvp-ai | 21 | transformers | |
rsvp-ai/bertserini-bert-base-squad | 2021-05-19T00:40:30.000Z | [
"pytorch",
"jax",
"bert",
"question-answering",
"transformers"
]
| question-answering | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| rsvp-ai | 463 | transformers | |
rsvp-ai/bertserini-bert-large-squad | 2021-05-19T00:44:05.000Z | [
"pytorch",
"jax",
"bert",
"question-answering",
"transformers"
]
| question-answering | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| rsvp-ai | 1,483 | transformers | |
rsvp-ai/bertserini-roberta-base | 2021-05-20T19:56:06.000Z | [
"pytorch",
"jax",
"roberta",
"question-answering",
"transformers"
]
| question-answering | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
]
| rsvp-ai | 20 | transformers | |
rsvp-ai/segabert-large | 2020-12-12T23:32:29.000Z | []
| [
".gitattributes"
]
| rsvp-ai | 0 | |||
rti-international/rota | 2021-05-20T19:57:32.000Z | [
"pytorch",
"jax",
"roberta",
"text-classification",
"en",
"transformers"
]
| text-classification | [
".gitattributes",
".gitignore",
"LICENSE",
"README.md",
"code_map.json",
"config.backup.json",
"config.json",
"flax_model.msgpack",
"merges.txt",
"model_args.json",
"onnx-convert-requirements.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json",
".github/workflows/onnx-release.yml",
"utils/convert_onnx.py",
"utils/update_labels.py"
]
| rti-international | 20 | transformers | ---
language:
- en
widget:
- text: theft 3
- text: forgery
- text: unlawful possession short-barreled shotgun
- text: criminal trespass 2nd degree
- text: eluding a police vehicle
- text: upcs synthetic narcotic
---
# ROTA
## Rapid Offense Text Autocoder
[](https://huggingface.co/rti-international/rota)
[](https://github.com/RTIInternational/rota)
[](https://doi.org/10.5281/zenodo.4770492)
Criminal justice research often requires conversion of free-text offense descriptions into overall charge categories to aid analysis. For example, the free-text offense of "eluding a police vehicle" would be coded to a charge category of "Obstruction - Law Enforcement". Since free-text offense descriptions aren't standardized and often need to be categorized in large volumes, this can result in a manual and time-intensive process for researchers. ROTA is a machine learning model for converting offense text into offense codes.
Currently ROTA predicts the *Charge Category* of a given offense text. A *charge category* is one of the headings for offense codes in the [2009 NCRP Codebook: Appendix F](https://www.icpsr.umich.edu/web/NACJD/studies/30799/datadocumentation#).
The model was trained on [publicly available data](https://web.archive.org/web/20201021001250/https://www.icpsr.umich.edu/web/pages/NACJD/guides/ncrp.html) from a crosswalk containing offenses from all 50 states combined with three additional hand-labeled offense text datasets.
<details>
<summary>Charge Category Example</summary>
<img src="https://i.ibb.co/xLsrzmV/charge-category-example.png" width="500">
</details>
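As a hedged illustration of inference (assuming this checkpoint loads through the standard `transformers` text-classification pipeline; the exact label strings come from the model's config), predicting a charge category for a free-text offense could look like:
```python
from transformers import pipeline

# Load ROTA as a plain text-classification pipeline.
# The predicted label is expected to be an NCRP charge category,
# e.g. "OBSTRUCTION - LAW ENFORCEMENT" for the offense text below.
classifier = pipeline("text-classification", model="rti-international/rota")
print(classifier("eluding a police vehicle"))
```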
### Data Preprocessing
The input text is standardized through a series of preprocessing steps. The text is first passed through a sequence of 500+ case-insensitive regular expressions that identify common misspellings and abbreviations and expand the text into fuller, corrected English. Some data-specific prefixes and suffixes are then removed from the text -- e.g. some states included a statute as part of the text. Finally, punctuation (excluding dollar signs) is removed from the input, multiple spaces between words are removed, and the text is lowercased.
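The regular expressions themselves are not reproduced here, but the final normalization steps can be sketched as follows. This is a minimal illustration of the description above; `expand_abbreviations` is a hypothetical stand-in for the 500+ case-insensitive substitutions.
```python
import re
import string

def normalize_offense_text(text: str, expand_abbreviations=lambda t: t) -> str:
    """Approximate the final ROTA text-normalization steps described above."""
    text = expand_abbreviations(text)  # hypothetical stand-in for the regex expansions
    strip_chars = string.punctuation.replace("$", "")  # punctuation to remove, keeping dollar signs
    text = text.translate(str.maketrans("", "", strip_chars))
    text = re.sub(r"\s+", " ", text).strip()  # collapse runs of whitespace
    return text.lower()

print(normalize_offense_text("THEFT, 3RD DEGREE ($500)"))  # -> "theft 3rd degree $500"
```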
## Cross-Validation Performance
This model was evaluated using 3-fold cross validation. Except where noted, numbers presented below are the mean value across the 3 folds.
The model in this repository is trained on all available data. Because of this, you can typically expect production performance to be (unknowably) better than the numbers presented below.
### Overall Metrics
| Metric | Value |
| -------- | ----- |
| Accuracy | 0.934 |
| MCC | 0.931 |
| Metric | precision | recall | f1-score |
| --------- | --------- | ------ | -------- |
| macro avg | 0.811 | 0.786 | 0.794 |
*Note*: These are the average of the values *per fold*, so *macro avg* is the average of the macro average of all categories per fold.
### Per-Category Metrics
| Category | precision | recall | f1-score | support |
| ------------------------------------------------------ | --------- | ------ | -------- | ------- |
| AGGRAVATED ASSAULT | 0.954 | 0.954 | 0.954 | 4085 |
| ARMED ROBBERY | 0.961 | 0.955 | 0.958 | 1021 |
| ARSON | 0.946 | 0.954 | 0.95 | 344 |
| ASSAULTING PUBLIC OFFICER | 0.914 | 0.905 | 0.909 | 588 |
| AUTO THEFT | 0.962 | 0.962 | 0.962 | 1660 |
| BLACKMAIL/EXTORTION/INTIMIDATION | 0.872 | 0.871 | 0.872 | 627 |
| BRIBERY AND CONFLICT OF INTEREST | 0.784 | 0.796 | 0.79 | 216 |
| BURGLARY | 0.979 | 0.981 | 0.98 | 2214 |
| CHILD ABUSE | 0.805 | 0.78 | 0.792 | 139 |
| COCAINE OR CRACK VIOLATION OFFENSE UNSPECIFIED | 0.827 | 0.815 | 0.821 | 47 |
| COMMERCIALIZED VICE | 0.818 | 0.788 | 0.802 | 666 |
| CONTEMPT OF COURT | 0.982 | 0.987 | 0.984 | 2952 |
| CONTRIBUTING TO DELINQUENCY OF A MINOR | 0.544 | 0.333 | 0.392 | 50 |
| CONTROLLED SUBSTANCE - OFFENSE UNSPECIFIED | 0.864 | 0.791 | 0.826 | 280 |
| COUNTERFEITING (FEDERAL ONLY) | 0 | 0 | 0 | 2 |
| DESTRUCTION OF PROPERTY | 0.97 | 0.968 | 0.969 | 2560 |
| DRIVING UNDER INFLUENCE - DRUGS | 0.567 | 0.603 | 0.581 | 34 |
| DRIVING UNDER THE INFLUENCE | 0.951 | 0.946 | 0.949 | 2195 |
| DRIVING WHILE INTOXICATED | 0.986 | 0.981 | 0.984 | 2391 |
| DRUG OFFENSES - VIOLATION/DRUG UNSPECIFIED | 0.903 | 0.911 | 0.907 | 3100 |
| DRUNKENNESS/VAGRANCY/DISORDERLY CONDUCT | 0.856 | 0.861 | 0.858 | 380 |
| EMBEZZLEMENT | 0.865 | 0.759 | 0.809 | 100 |
| EMBEZZLEMENT (FEDERAL ONLY) | 0 | 0 | 0 | 1 |
| ESCAPE FROM CUSTODY | 0.988 | 0.991 | 0.989 | 4035 |
| FAMILY RELATED OFFENSES | 0.739 | 0.773 | 0.755 | 442 |
| FELONY - UNSPECIFIED | 0.692 | 0.735 | 0.712 | 122 |
| FLIGHT TO AVOID PROSECUTION | 0.46 | 0.407 | 0.425 | 38 |
| FORCIBLE SODOMY | 0.82 | 0.8 | 0.809 | 76 |
| FORGERY (FEDERAL ONLY) | 0 | 0 | 0 | 2 |
| FORGERY/FRAUD | 0.911 | 0.928 | 0.919 | 4687 |
| FRAUD (FEDERAL ONLY) | 0 | 0 | 0 | 2 |
| GRAND LARCENY - THEFT OVER $200 | 0.957 | 0.973 | 0.965 | 2412 |
| HABITUAL OFFENDER | 0.742 | 0.627 | 0.679 | 53 |
| HEROIN VIOLATION - OFFENSE UNSPECIFIED | 0.879 | 0.811 | 0.843 | 24 |
| HIT AND RUN DRIVING | 0.922 | 0.94 | 0.931 | 303 |
| HIT/RUN DRIVING - PROPERTY DAMAGE | 0.929 | 0.918 | 0.923 | 362 |
| IMMIGRATION VIOLATIONS | 0.84 | 0.609 | 0.697 | 19 |
| INVASION OF PRIVACY | 0.927 | 0.923 | 0.925 | 1235 |
| JUVENILE OFFENSES | 0.928 | 0.866 | 0.895 | 144 |
| KIDNAPPING | 0.937 | 0.93 | 0.933 | 553 |
| LARCENY/THEFT - VALUE UNKNOWN | 0.955 | 0.945 | 0.95 | 3175 |
| LEWD ACT WITH CHILDREN | 0.775 | 0.85 | 0.811 | 596 |
| LIQUOR LAW VIOLATIONS | 0.741 | 0.768 | 0.755 | 214 |
| MANSLAUGHTER - NON-VEHICULAR | 0.626 | 0.802 | 0.701 | 139 |
| MANSLAUGHTER - VEHICULAR | 0.79 | 0.853 | 0.819 | 117 |
| MARIJUANA/HASHISH VIOLATION - OFFENSE UNSPECIFIED | 0.741 | 0.662 | 0.699 | 62 |
| MISDEMEANOR UNSPECIFIED | 0.63 | 0.243 | 0.347 | 57 |
| MORALS/DECENCY - OFFENSE | 0.774 | 0.764 | 0.769 | 412 |
| MURDER | 0.965 | 0.915 | 0.939 | 621 |
| OBSTRUCTION - LAW ENFORCEMENT | 0.939 | 0.947 | 0.943 | 4220 |
| OFFENSES AGAINST COURTS, LEGISLATURES, AND COMMISSIONS | 0.881 | 0.895 | 0.888 | 1965 |
| PAROLE VIOLATION | 0.97 | 0.953 | 0.962 | 946 |
| PETTY LARCENY - THEFT UNDER $200 | 0.965 | 0.761 | 0.85 | 139 |
| POSSESSION/USE - COCAINE OR CRACK | 0.893 | 0.928 | 0.908 | 68 |
| POSSESSION/USE - DRUG UNSPECIFIED | 0.624 | 0.535 | 0.572 | 189 |
| POSSESSION/USE - HEROIN | 0.884 | 0.852 | 0.866 | 25 |
| POSSESSION/USE - MARIJUANA/HASHISH | 0.977 | 0.97 | 0.973 | 556 |
| POSSESSION/USE - OTHER CONTROLLED SUBSTANCES | 0.975 | 0.965 | 0.97 | 3271 |
| PROBATION VIOLATION | 0.963 | 0.953 | 0.958 | 1158 |
| PROPERTY OFFENSES - OTHER | 0.901 | 0.87 | 0.885 | 446 |
| PUBLIC ORDER OFFENSES - OTHER | 0.7 | 0.721 | 0.71 | 1871 |
| RACKETEERING/EXTORTION (FEDERAL ONLY) | 0 | 0 | 0 | 2 |
| RAPE - FORCE | 0.842 | 0.873 | 0.857 | 641 |
| RAPE - STATUTORY - NO FORCE | 0.707 | 0.55 | 0.611 | 140 |
| REGULATORY OFFENSES (FEDERAL ONLY) | 0.847 | 0.567 | 0.674 | 70 |
| RIOTING | 0.784 | 0.605 | 0.68 | 119 |
| SEXUAL ASSAULT - OTHER | 0.836 | 0.836 | 0.836 | 971 |
| SIMPLE ASSAULT | 0.976 | 0.967 | 0.972 | 4577 |
| STOLEN PROPERTY - RECEIVING | 0.959 | 0.957 | 0.958 | 1193 |
| STOLEN PROPERTY - TRAFFICKING | 0.902 | 0.888 | 0.895 | 491 |
| TAX LAW (FEDERAL ONLY) | 0.373 | 0.233 | 0.286 | 30 |
| TRAFFIC OFFENSES - MINOR | 0.974 | 0.977 | 0.976 | 8699 |
| TRAFFICKING - COCAINE OR CRACK | 0.896 | 0.951 | 0.922 | 185 |
| TRAFFICKING - DRUG UNSPECIFIED | 0.709 | 0.795 | 0.749 | 516 |
| TRAFFICKING - HEROIN | 0.871 | 0.92 | 0.894 | 54 |
| TRAFFICKING - OTHER CONTROLLED SUBSTANCES | 0.963 | 0.954 | 0.959 | 2832 |
| TRAFFICKING MARIJUANA/HASHISH | 0.921 | 0.943 | 0.932 | 255 |
| TRESPASSING | 0.974 | 0.98 | 0.977 | 1916 |
| UNARMED ROBBERY | 0.941 | 0.939 | 0.94 | 377 |
| UNAUTHORIZED USE OF VEHICLE | 0.94 | 0.908 | 0.924 | 304 |
| UNSPECIFIED HOMICIDE | 0.61 | 0.554 | 0.577 | 60 |
| VIOLENT OFFENSES - OTHER | 0.827 | 0.817 | 0.822 | 606 |
| VOLUNTARY/NONNEGLIGENT MANSLAUGHTER | 0.619 | 0.513 | 0.542 | 54 |
| WEAPON OFFENSE | 0.943 | 0.949 | 0.946 | 2466 |
*Note: `support` is the average number of observations predicted on per fold, so the total number of observations per class is roughly 3x `support`.*
### Using Confidence Scores
If we interpret the classification probability as a confidence score, we can use it to filter out predictions that the model isn't as confident about. We applied this process in 3-fold cross validation. The numbers presented below indicate how much of the prediction data is retained given a confidence score cutoff of `p`. We present the overall accuracy and MCC metrics as if the model was only evaluated on this subset of confident predictions.
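As a hedged sketch of how such a cutoff could be applied (assuming softmax probabilities from the classification head are used as the confidence score):
```python
import torch

def filter_by_confidence(logits: torch.Tensor, cutoff: float = 0.95):
    """Keep only predictions whose top-class probability meets the cutoff."""
    probs = torch.softmax(logits, dim=-1)
    confidence, predicted = probs.max(dim=-1)
    keep = confidence >= cutoff
    # Return retained predictions, their confidence scores, and the fraction retained.
    return predicted[keep], confidence[keep], keep.float().mean().item()

# Illustration with dummy logits for three offense texts (class count is arbitrary here).
preds, scores, retained = filter_by_confidence(torch.randn(3, 10) * 5, cutoff=0.95)
```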
| | cutoff | percent retained | mcc | acc |
| --- | ------ | ---------------- | ----- | ----- |
| 0 | 0.85 | 0.952 | 0.96 | 0.961 |
| 1 | 0.9 | 0.943 | 0.964 | 0.965 |
| 2 | 0.95 | 0.928 | 0.97 | 0.971 |
| 3 | 0.975 | 0.912 | 0.975 | 0.976 |
| 4 | 0.99 | 0.886 | 0.982 | 0.983 |
| 5 | 0.999 | 0.733 | 0.995 | 0.996 | |
ruishan-lin/investopedia-QnA | 2021-01-09T00:22:09.000Z | [
"pytorch",
"distilbert",
"question-answering",
"transformers"
]
| question-answering | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| ruishan-lin | 9 | transformers | ---hello
|
rupert/ucc_roberta | 2021-05-11T15:51:14.000Z | []
| [
".gitattributes"
]
| rupert | 0 | |||
russab0/bert-qa | 2021-04-27T15:17:59.000Z | []
| [
".gitattributes"
]
| russab0 | 0 | |||
russab0/distilbert-qa | 2021-04-27T16:27:50.000Z | [
"pytorch",
"distilbert",
"multiple-choice",
"english",
"dataset:race",
"transformers",
"license:mit"
]
| [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| russab0 | 124 | transformers | ---
language: "english"
license: "mit"
datasets:
- race
metrics:
- accuracy
---
# MCQ with Distilbert |
|
rvutukuru4226/model_name | 2020-11-15T20:01:48.000Z | []
| [
".gitattributes"
]
| rvutukuru4226 | 0 | |||
rwimmer56/electra_large_squad2 | 2021-05-04T12:12:31.000Z | []
| [
".gitattributes"
]
| rwimmer56 | 0 | |||
rwnchen/model | 2021-03-17T21:22:41.000Z | []
| [
".gitattributes"
]
| rwnchen | 0 | |||
rynelfa9sa/rolemodel | 2021-02-09T17:39:57.000Z | []
| [
".gitattributes"
]
| rynelfa9sa | 0 | |||
rywerth/Rupi-or-Not-Rupi | 2021-05-23T12:18:29.000Z | [
"pytorch",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".DS_Store",
".gitattributes",
"README.md",
"added_tokens.json",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| rywerth | 7 | transformers | hello
|