modelId | lastModified | tags | pipeline_tag | files | publishedBy | downloads_last_month | library | modelCard
---|---|---|---|---|---|---|---|---
kornosk/bert-election2020-twitter-stance-trump-KE-MLM | 2021-05-24T04:31:17.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"en",
"transformers",
"twitter",
"stance-detection",
"election2020",
"license:gpl-3.0"
]
| text-classification | [
".gitattributes",
"README.md",
"added_tokens.json",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| kornosk | 27 | transformers | ---
language: "en"
tags:
- twitter
- stance-detection
- election2020
license: "gpl-3.0"
---
# Pre-trained BERT on Twitter US Election 2020 for Stance Detection towards Donald Trump (KE-MLM)
Pre-trained weights for the **KE-MLM model** in [Knowledge Enhanced Masked Language Model for Stance Detection](https://www.aclweb.org/anthology/2021.naacl-main.376), NAACL 2021.
# Training Data
This model is pre-trained on over 5 million English tweets about the 2020 US Presidential Election, then fine-tuned using our [stance-labeled data](https://github.com/GU-DataLab/stance-detection-KE-MLM) for stance detection towards Donald Trump.
# Training Objective
This model is initialized with BERT-base and trained with a normal MLM objective, with a classification layer fine-tuned for stance detection towards Donald Trump.
# Usage
This pre-trained language model is fine-tuned to the stance detection task specifically for Donald Trump.
Please see the [official repository](https://github.com/GU-DataLab/stance-detection-KE-MLM) for more detail.
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
import numpy as np
# choose GPU if available
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# select model path here
pretrained_LM_path = "kornosk/bert-election2020-twitter-stance-trump-KE-MLM"
# load model
tokenizer = AutoTokenizer.from_pretrained(pretrained_LM_path)
model = AutoModelForSequenceClassification.from_pretrained(pretrained_LM_path)
id2label = {
0: "AGAINST",
1: "FAVOR",
2: "NONE"
}
##### Prediction Neutral #####
sentence = "Hello World."
inputs = tokenizer(sentence.lower(), return_tensors="pt")
outputs = model(**inputs)
predicted_probability = torch.softmax(outputs[0], dim=1)[0].tolist()
print("Sentence:", sentence)
print("Prediction:", id2label[np.argmax(predicted_probability)])
print("Against:", predicted_probability[0])
print("Favor:", predicted_probability[1])
print("Neutral:", predicted_probability[2])
##### Prediction Favor #####
sentence = "Go Go Trump!!!"
inputs = tokenizer(sentence.lower(), return_tensors="pt")
outputs = model(**inputs)
predicted_probability = torch.softmax(outputs[0], dim=1)[0].tolist()
print("Sentence:", sentence)
print("Prediction:", id2label[np.argmax(predicted_probability)])
print("Against:", predicted_probability[0])
print("Favor:", predicted_probability[1])
print("Neutral:", predicted_probability[2])
##### Prediction Against #####
sentence = "Trump is the worst."
inputs = tokenizer(sentence.lower(), return_tensors="pt")
outputs = model(**inputs)
predicted_probability = torch.softmax(outputs[0], dim=1)[0].tolist()
print("Sentence:", sentence)
print("Prediction:", id2label[np.argmax(predicted_probability)])
print("Against:", predicted_probability[0])
print("Favor:", predicted_probability[1])
print("Neutral:", predicted_probability[2])
# please consider citing our paper if you feel this is useful :)
```
# Reference
- [Knowledge Enhanced Masked Language Model for Stance Detection](https://www.aclweb.org/anthology/2021.naacl-main.376), NAACL 2021.
# Citation
```bibtex
@inproceedings{kawintiranon2021knowledge,
title={Knowledge Enhanced Masked Language Model for Stance Detection},
author={Kawintiranon, Kornraphop and Singh, Lisa},
booktitle={Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies},
year={2021},
publisher={Association for Computational Linguistics},
url={https://www.aclweb.org/anthology/2021.naacl-main.376}
}
``` |
kornosk/bert-election2020-twitter-stance-trump | 2021-05-24T04:30:19.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"en",
"transformers",
"twitter",
"stance-detection",
"election2020",
"license:gpl-3.0"
]
| text-classification | [
".gitattributes",
"README.md",
"added_tokens.json",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| kornosk | 21 | transformers | ---
language: "en"
tags:
- twitter
- stance-detection
- election2020
license: "gpl-3.0"
---
# Pre-trained BERT on Twitter US Election 2020 for Stance Detection towards Donald Trump (f-BERT)
Pre-trained weights for **f-BERT** in [Knowledge Enhanced Masked Language Model for Stance Detection](https://www.aclweb.org/anthology/2021.naacl-main.376), NAACL 2021.
# Training Data
This model is pre-trained on over 5 million English tweets about the 2020 US Presidential Election, then fine-tuned using our [stance-labeled data](https://github.com/GU-DataLab/stance-detection-KE-MLM) for stance detection towards Donald Trump.
# Training Objective
This model is initialized with BERT-base and trained with a normal MLM objective, with a classification layer fine-tuned for stance detection towards Donald Trump.
# Usage
This pre-trained language model is fine-tuned to the stance detection task specifically for Donald Trump.
Please see the [official repository](https://github.com/GU-DataLab/stance-detection-KE-MLM) for more detail.
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
import numpy as np
# choose GPU if available
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# select model path here
pretrained_LM_path = "kornosk/bert-election2020-twitter-stance-trump"
# load model
tokenizer = AutoTokenizer.from_pretrained(pretrained_LM_path)
model = AutoModelForSequenceClassification.from_pretrained(pretrained_LM_path)
id2label = {
0: "AGAINST",
1: "FAVOR",
2: "NONE"
}
##### Prediction Neutral #####
sentence = "Hello World."
inputs = tokenizer(sentence.lower(), return_tensors="pt")
outputs = model(**inputs)
predicted_probability = torch.softmax(outputs[0], dim=1)[0].tolist()
print("Sentence:", sentence)
print("Prediction:", id2label[np.argmax(predicted_probability)])
print("Against:", predicted_probability[0])
print("Favor:", predicted_probability[1])
print("Neutral:", predicted_probability[2])
##### Prediction Favor #####
sentence = "Go Go Trump!!!"
inputs = tokenizer(sentence.lower(), return_tensors="pt")
outputs = model(**inputs)
predicted_probability = torch.softmax(outputs[0], dim=1)[0].tolist()
print("Sentence:", sentence)
print("Prediction:", id2label[np.argmax(predicted_probability)])
print("Against:", predicted_probability[0])
print("Favor:", predicted_probability[1])
print("Neutral:", predicted_probability[2])
##### Prediction Against #####
sentence = "Trump is the worst."
inputs = tokenizer(sentence.lower(), return_tensors="pt")
outputs = model(**inputs)
predicted_probability = torch.softmax(outputs[0], dim=1)[0].tolist()
print("Sentence:", sentence)
print("Prediction:", id2label[np.argmax(predicted_probability)])
print("Against:", predicted_probability[0])
print("Favor:", predicted_probability[1])
print("Neutral:", predicted_probability[2])
# please consider citing our paper if you feel this is useful :)
```
# Reference
- [Knowledge Enhanced Masked Language Model for Stance Detection](https://www.aclweb.org/anthology/2021.naacl-main.376), NAACL 2021.
# Citation
```bibtex
@inproceedings{kawintiranon2021knowledge,
title={Knowledge Enhanced Masked Language Model for Stance Detection},
author={Kawintiranon, Kornraphop and Singh, Lisa},
booktitle={Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies},
year={2021},
publisher={Association for Computational Linguistics},
url={https://www.aclweb.org/anthology/2021.naacl-main.376}
}
``` |
kornosk/bert-political-election2020-twitter-mlm | 2021-05-24T04:26:14.000Z | [
"pytorch",
"jax",
"bert",
"masked-lm",
"en",
"transformers",
"twitter",
"masked-token-prediction",
"election2020",
"license:gpl-3.0",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"README.md",
"added_tokens.json",
"config.json",
"eval_results_lm.txt",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| kornosk | 80 | transformers | ---
language: "en"
tags:
- twitter
- masked-token-prediction
- election2020
license: "gpl-3.0"
---
# Pre-trained BERT on Twitter US Political Election 2020
Pre-trained weights for [Knowledge Enhanced Masked Language Model for Stance Detection](https://www.aclweb.org/anthology/2021.naacl-main.376), NAACL 2021.
The model is initialized with the weights of BERT-base (uncased), i.e. `bert-base-uncased`.
# Training Data
This model is pre-trained on over 5 million English tweets about the 2020 US Presidential Election.
# Training Objective
This model is initialized with BERT-base and trained with normal MLM objective.
# Usage
This pre-trained language model **can be fine-tuned to any downstream task (e.g., classification)**.
Please see the [official repository](https://github.com/GU-DataLab/stance-detection-KE-MLM) for more detail.
```python
from transformers import BertTokenizer, BertForMaskedLM, pipeline
import torch
# choose GPU if available
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# select model path here
pretrained_LM_path = "kornosk/bert-political-election2020-twitter-mlm"
# load model
tokenizer = BertTokenizer.from_pretrained(pretrained_LM_path)
model = BertForMaskedLM.from_pretrained(pretrained_LM_path)
# fill mask
example = "Trump is the [MASK] of USA"
fill_mask = pipeline('fill-mask', model=model, tokenizer=tokenizer)
outputs = fill_mask(example)
print(outputs)
# see embeddings
inputs = tokenizer(example, return_tensors="pt")
outputs = model(**inputs)
print(outputs)
# OR you can use this model to train on your downstream task!
# please consider citing our paper if you feel this is useful :)
```
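As a rough illustration of the downstream use mentioned above, the sketch below loads this checkpoint with a freshly initialized classification head; the label count and the use of `Trainer` are assumptions for illustration, not part of the original card.
```python
# a minimal fine-tuning setup sketch (num_labels is a hypothetical choice;
# the classification head is newly initialized and must be trained)
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("kornosk/bert-political-election2020-twitter-mlm")
model = AutoModelForSequenceClassification.from_pretrained(
    "kornosk/bert-political-election2020-twitter-mlm",
    num_labels=3,  # e.g. AGAINST / FAVOR / NONE for a stance-style task
)
# fine-tune `model` on your labeled data, e.g. with transformers.Trainer
```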
# Reference
- [Knowledge Enhanced Masked Language Model for Stance Detection](https://www.aclweb.org/anthology/2021.naacl-main.376), NAACL 2021.
# Citation
```bibtex
@inproceedings{kawintiranon2021knowledge,
title={Knowledge Enhanced Masked Language Model for Stance Detection},
author={Kawintiranon, Kornraphop and Singh, Lisa},
booktitle={Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies},
year={2021},
publisher={Association for Computational Linguistics},
url={https://www.aclweb.org/anthology/2021.naacl-main.376}
}
``` |
kosuke-kitahara/wav2vec2-large-xlsr-53-phoneme | 2021-03-29T03:32:30.000Z | []
| [
".gitattributes"
]
| kosuke-kitahara | 0 | |||
kouohhashi/roberta_ja | 2021-05-20T17:36:27.000Z | [
"pytorch",
"jax",
"roberta",
"transformers"
]
| [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| kouohhashi | 10 | transformers | hello
|
|
krevas/finance-electra-small-discriminator | 2020-07-09T05:46:38.000Z | [
"pytorch",
"electra",
"pretraining",
"transformers"
]
| [
".gitattributes",
"config.json",
"pytorch_model.bin",
"tokenizer_config.json",
"vocab.txt"
]
| krevas | 13 | transformers | ||
krevas/finance-electra-small-generator | 2020-07-09T05:47:53.000Z | [
"pytorch",
"electra",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"tokenizer_config.json",
"vocab.txt"
]
| krevas | 19 | transformers | |
krevas/finance-koelectra-base-discriminator | 2020-12-11T21:48:27.000Z | [
"pytorch",
"electra",
"pretraining",
"ko",
"transformers"
]
| [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"tokenizer_config.json",
"vocab.txt"
]
| krevas | 27 | transformers | ---
language: ko
---
# Financial Korean ELECTRA model
Pretrained ELECTRA Language Model for Korean (`finance-koelectra-base-discriminator`)
> ELECTRA is a new method for self-supervised language representation learning. It can be used to
> pre-train transformer networks using relatively little compute. ELECTRA models are trained to
> distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to
> the discriminator of a GAN.
More details about ELECTRA can be found in the [ICLR paper](https://openreview.net/forum?id=r1xMH1BtvB)
or in the [official ELECTRA repository](https://github.com/google-research/electra) on GitHub.
## Stats
The current version of the model is trained on financial news data from Naver News.
The final training corpus has a size of 25GB and 2.3B tokens.
This model was trained as a cased model on a TITAN RTX for 500k steps.
## Usage
```python
from transformers import ElectraForPreTraining, ElectraTokenizer
import torch
discriminator = ElectraForPreTraining.from_pretrained("krevas/finance-koelectra-base-discriminator")
tokenizer = ElectraTokenizer.from_pretrained("krevas/finance-koelectra-base-discriminator")
sentence = "내일 해당 종목이 대폭 상승할 것이다"
fake_sentence = "내일 해당 종목이 맛있게 상승할 것이다"
fake_tokens = tokenizer.tokenize(fake_sentence)
fake_inputs = tokenizer.encode(fake_sentence, return_tensors="pt")
discriminator_outputs = discriminator(fake_inputs)
predictions = torch.round((torch.sign(discriminator_outputs[0]) + 1) / 2)
[print("%7s" % token, end="") for token in fake_tokens]
[print("%7s" % int(prediction), end="") for prediction in predictions.tolist()[1:-1]]
print("fake token : %s" % fake_tokens[predictions.tolist()[1:-1].index(1)])
```
# Huggingface model hub
All models are available on the [Huggingface model hub](https://huggingface.co/krevas).
|
|
krevas/finance-koelectra-base-generator | 2020-12-11T21:48:30.000Z | [
"pytorch",
"electra",
"masked-lm",
"ko",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"tokenizer_config.json",
"vocab.txt"
]
| krevas | 22 | transformers | ---
language: ko
---
# Financial Korean ELECTRA model
Pretrained ELECTRA Language Model for Korean (`finance-koelectra-base-generator`)
> ELECTRA is a new method for self-supervised language representation learning. It can be used to
> pre-train transformer networks using relatively little compute. ELECTRA models are trained to
> distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to
> the discriminator of a GAN.
More details about ELECTRA can be found in the [ICLR paper](https://openreview.net/forum?id=r1xMH1BtvB)
or in the [official ELECTRA repository](https://github.com/google-research/electra) on GitHub.
## Stats
The current version of the model is trained on financial news data from Naver News.
The final training corpus has a size of 25GB and 2.3B tokens.
This model was trained as a cased model on a TITAN RTX for 500k steps.
## Usage
```python
from transformers import pipeline
fill_mask = pipeline(
"fill-mask",
model="krevas/finance-koelectra-base-generator",
tokenizer="krevas/finance-koelectra-base-generator"
)
print(fill_mask(f"내일 해당 종목이 대폭 {fill_mask.tokenizer.mask_token}할 것이다."))
```
# Huggingface model hub
All models are available on the [Huggingface model hub](https://huggingface.co/krevas).
|
krevas/finance-koelectra-small-discriminator | 2020-12-11T21:48:34.000Z | [
"pytorch",
"electra",
"pretraining",
"ko",
"transformers"
]
| [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"tokenizer_config.json",
"vocab.txt"
]
| krevas | 40 | transformers | ---
language: ko
---
# Financial Korean ELECTRA model
Pretrained ELECTRA Language Model for Korean (`finance-koelectra-small-discriminator`)
> ELECTRA is a new method for self-supervised language representation learning. It can be used to
> pre-train transformer networks using relatively little compute. ELECTRA models are trained to
> distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to
> the discriminator of a GAN.
More details about ELECTRA can be found in the [ICLR paper](https://openreview.net/forum?id=r1xMH1BtvB)
or in the [official ELECTRA repository](https://github.com/google-research/electra) on GitHub.
## Stats
The current version of the model is trained on financial news data from Naver News.
The final training corpus has a size of 25GB and 2.3B tokens.
This model was trained as a cased model on a TITAN RTX for 500k steps.
## Usage
```python
from transformers import ElectraForPreTraining, ElectraTokenizer
import torch
discriminator = ElectraForPreTraining.from_pretrained("krevas/finance-koelectra-small-discriminator")
tokenizer = ElectraTokenizer.from_pretrained("krevas/finance-koelectra-small-discriminator")
sentence = "내일 해당 종목이 대폭 상승할 것이다"
fake_sentence = "내일 해당 종목이 맛있게 상승할 것이다"
fake_tokens = tokenizer.tokenize(fake_sentence)
fake_inputs = tokenizer.encode(fake_sentence, return_tensors="pt")
discriminator_outputs = discriminator(fake_inputs)
predictions = torch.round((torch.sign(discriminator_outputs[0]) + 1) / 2)
[print("%7s" % token, end="") for token in fake_tokens]
[print("%7s" % int(prediction), end="") for prediction in predictions.tolist()[1:-1]]
print("fake token : %s" % fake_tokens[predictions.tolist()[1:-1].index(1)])
```
# Huggingface model hub
All models are available on the [Huggingface model hub](https://huggingface.co/krevas).
|
|
krevas/finance-koelectra-small-generator | 2020-12-11T21:48:37.000Z | [
"pytorch",
"electra",
"masked-lm",
"ko",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"tokenizer_config.json",
"vocab.txt"
]
| krevas | 27 | transformers | ---
language: ko
---
# Financial Korean ELECTRA model
Pretrained ELECTRA Language Model for Korean (`finance-koelectra-small-generator`)
> ELECTRA is a new method for self-supervised language representation learning. It can be used to
> pre-train transformer networks using relatively little compute. ELECTRA models are trained to
> distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to
> the discriminator of a GAN.
More details about ELECTRA can be found in the [ICLR paper](https://openreview.net/forum?id=r1xMH1BtvB)
or in the [official ELECTRA repository](https://github.com/google-research/electra) on GitHub.
## Stats
The current version of the model is trained on financial news data from Naver News.
The final training corpus has a size of 25GB and 2.3B tokens.
This model was trained as a cased model on a TITAN RTX for 500k steps.
## Usage
```python
from transformers import pipeline
fill_mask = pipeline(
"fill-mask",
model="krevas/finance-koelectra-small-generator",
tokenizer="krevas/finance-koelectra-small-generator"
)
print(fill_mask(f"내일 해당 종목이 대폭 {fill_mask.tokenizer.mask_token}할 것이다."))
```
# Huggingface model hub
All models are available on the [Huggingface model hub](https://huggingface.co/krevas).
|
krishnakatyal/my_model_emotion | 2021-05-14T11:38:33.000Z | []
| [
".gitattributes"
]
| krishnakatyal | 0 | |||
krupine/telectra-discriminator | 2021-01-22T08:41:00.000Z | [
"pytorch",
"electra",
"pretraining",
"transformers"
]
| [
".gitattributes",
"config.json",
"pytorch_model.bin",
"tokenizer_config.json",
"vocab.txt"
]
| krupine | 6 | transformers | ||
kssteven/ibert-roberta-base | 2021-05-10T05:31:46.000Z | [
"pytorch",
"ibert",
"masked-lm",
"arxiv:1907.11692",
"arxiv:2101.01321",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"README.md",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json",
"vocab.json"
]
| kssteven | 543 | transformers | # I-BERT base model
This model, `ibert-roberta-base`, is an integer-only quantized version of [RoBERTa](https://arxiv.org/abs/1907.11692), and was introduced in [this paper](https://arxiv.org/abs/2101.01321).
I-BERT stores all parameters with INT8 representation, and carries out the entire inference using integer-only arithmetic.
In particular, I-BERT replaces all floating point operations in the Transformer architectures (e.g., MatMul, GELU, Softmax, and LayerNorm) with closely approximating integer operations.
This can result in up to 4x inference speedup compared to its floating point counterpart when tested on an Nvidia T4 GPU.
The best model parameters searched via quantization-aware finetuning can then be exported (e.g., to TensorRT) for integer-only deployment of the model.
## Finetuning Procedure
Finetuning of I-BERT consists of 3 stages: (1) Full-precision finetuning from the pretrained model on a down-stream task, (2) model quantization, and (3) integer-only finetuning (i.e., quantization-aware training) of the quantized model.
### Full-precision finetuning
Full-precision finetuning of I-BERT is similar to RoBERTa finetuning.
For instance, you can run the following command to finetune on the [MRPC](https://www.microsoft.com/en-us/download/details.aspx?id=52398) text classification task.
```
python examples/text-classification/run_glue.py \
--model_name_or_path kssteven/ibert-roberta-base \
--task_name MRPC \
--do_eval \
--do_train \
--evaluation_strategy epoch \
--max_seq_length 128 \
--per_device_train_batch_size 32 \
--save_steps 115 \
--learning_rate 2e-5 \
--num_train_epochs 10 \
--output_dir $OUTPUT_DIR
```
### Model Quantization
Once you are done with full-precision finetuning, open up `config.json` in your checkpoint directory and set the `quantize` attribute as `true`.
```
{
"_name_or_path": "kssteven/ibert-roberta-base",
"architectures": [
"IBertForSequenceClassification"
],
"attention_probs_dropout_prob": 0.1,
"bos_token_id": 0,
"eos_token_id": 2,
"finetuning_task": "mrpc",
"force_dequant": "none",
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"layer_norm_eps": 1e-05,
"max_position_embeddings": 514,
"model_type": "ibert",
"num_attention_heads": 12,
"num_hidden_layers": 12,
"pad_token_id": 1,
"position_embedding_type": "absolute",
"quant_mode": true,
"tokenizer_class": "RobertaTokenizer",
"transformers_version": "4.4.0.dev0",
"type_vocab_size": 1,
"vocab_size": 50265
}
```
Then, your model will automatically run as the integer-only mode when you load the checkpoint.
Also, make sure to delete `optimizer.pt`, `scheduler.pt` and `trainer_state.json` in the same directory.
Otherwise, HF will not reset the optimizer, scheduler, or trainer state for the following integer-only finetuning.
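The two steps above (flipping the quantization flag and clearing the stale trainer state) can be scripted. The sketch below is one possible way to do it, assuming a local checkpoint directory and using the `quant_mode` key shown in the example config.
```python
# a minimal sketch (the checkpoint path is an assumption): switch the config
# to integer-only mode and delete stale trainer state before the next run
import json
import os

ckpt_dir = "path/to/full_precision_checkpoint"  # hypothetical directory

config_path = os.path.join(ckpt_dir, "config.json")
with open(config_path) as f:
    config = json.load(f)
config["quant_mode"] = True  # key as shown in the config above
with open(config_path, "w") as f:
    json.dump(config, f, indent=2)

for name in ("optimizer.pt", "scheduler.pt", "trainer_state.json"):
    path = os.path.join(ckpt_dir, name)
    if os.path.exists(path):
        os.remove(path)
```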
### Integer-only finetuning (Quantization-aware training)
Finally, you will be able to run integer-only finetuning simply by loading the checkpoint file you modified.
Note that the only difference in the example command below is `model_name_or_path`.
```
python examples/text-classification/run_glue.py \
--model_name_or_path $CHECKPOINT_DIR \
--task_name MRPC \
--do_eval \
--do_train \
--evaluation_strategy epoch \
--max_seq_length 128 \
--per_device_train_batch_size 32 \
--save_steps 115 \
--learning_rate 1e-6 \
--num_train_epochs 10 \
--output_dir $OUTPUT_DIR
```
## Citation info
If you use I-BERT, please cite [our paper](https://arxiv.org/abs/2101.01321).
```
@article{kim2021bert,
title={I-BERT: Integer-only BERT Quantization},
author={Kim, Sehoon and Gholami, Amir and Yao, Zhewei and Mahoney, Michael W and Keutzer, Kurt},
journal={arXiv preprint arXiv:2101.01321},
year={2021}
}
```
|
kssteven/ibert-roberta-large-mnli | 2021-05-10T05:35:32.000Z | [
"pytorch",
"ibert",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json",
"vocab.json"
]
| kssteven | 95 | transformers | |
kssteven/ibert-roberta-large | 2021-05-10T05:34:01.000Z | [
"pytorch",
"ibert",
"masked-lm",
"arxiv:1907.11692",
"arxiv:2101.01321",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"README.md",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json",
"vocab.json"
]
| kssteven | 161 | transformers | # I-BERT large model
This model, `ibert-roberta-large`, is an integer-only quantized version of [RoBERTa](https://arxiv.org/abs/1907.11692), and was introduced in [this paper](https://arxiv.org/abs/2101.01321).
I-BERT stores all parameters with INT8 representation, and carries out the entire inference using integer-only arithmetic.
In particular, I-BERT replaces all floating point operations in the Transformer architectures (e.g., MatMul, GELU, Softmax, and LayerNorm) with closely approximating integer operations.
This can result in up to 4x inference speedup compared to its floating point counterpart when tested on an Nvidia T4 GPU.
The best model parameters searched via quantization-aware finetuning can then be exported (e.g., to TensorRT) for integer-only deployment of the model.
## Finetuning Procedure
Finetuning of I-BERT consists of 3 stages: (1) Full-precision finetuning from the pretrained model on a down-stream task, (2) model quantization, and (3) integer-only finetuning (i.e., quantization-aware training) of the quantized model.
### Full-precision finetuning
Full-precision finetuning of I-BERT is similar to RoBERTa finetuning.
For instance, you can run the following command to finetune on the [MRPC](https://www.microsoft.com/en-us/download/details.aspx?id=52398) text classification task.
```
python examples/text-classification/run_glue.py \
--model_name_or_path kssteven/ibert-roberta-large \
--task_name MRPC \
--do_eval \
--do_train \
--evaluation_strategy epoch \
--max_seq_length 128 \
--per_device_train_batch_size 32 \
--save_steps 115 \
--learning_rate 2e-5 \
--num_train_epochs 10 \
--output_dir $OUTPUT_DIR
```
### Model Quantization
Once you are done with full-precision finetuning, open up `config.json` in your checkpoint directory and set the `quantize` attribute as `true`.
```
{
"_name_or_path": "kssteven/ibert-roberta-large",
"architectures": [
"IBertForSequenceClassification"
],
"attention_probs_dropout_prob": 0.1,
"bos_token_id": 0,
"eos_token_id": 2,
"finetuning_task": "mrpc",
"force_dequant": "none",
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"layer_norm_eps": 1e-05,
"max_position_embeddings": 514,
"model_type": "ibert",
"num_attention_heads": 12,
"num_hidden_layers": 12,
"pad_token_id": 1,
"position_embedding_type": "absolute",
"quant_mode": true,
"tokenizer_class": "RobertaTokenizer",
"transformers_version": "4.4.0.dev0",
"type_vocab_size": 1,
"vocab_size": 50265
}
```
Then, your model will automatically run as the integer-only mode when you load the checkpoint.
Also, make sure to delete `optimizer.pt`, `scheduler.pt` and `trainer_state.json` in the same directory.
Otherwise, HF will not reset the optimizer, scheduler, or trainer state for the following integer-only finetuning.
### Integer-only finetuning (Quantization-aware training)
Finally, you will be able to run integer-only finetuning simply by loading the checkpoint file you modified.
Note that the only difference in the example command below is `model_name_or_path`.
```
python examples/text-classification/run_glue.py \
--model_name_or_path $CHECKPOINT_DIR \
--task_name MRPC \
--do_eval \
--do_train \
--evaluation_strategy epoch \
--max_seq_length 128 \
--per_device_train_batch_size 32 \
--save_steps 115 \
--learning_rate 1e-6 \
--num_train_epochs 10 \
--output_dir $OUTPUT_DIR
```
## Citation info
If you use I-BERT, please cite [our paper](https://arxiv.org/abs/2101.01321).
```
@article{kim2021bert,
title={I-BERT: Integer-only BERT Quantization},
author={Kim, Sehoon and Gholami, Amir and Yao, Zhewei and Mahoney, Michael W and Keutzer, Kurt},
journal={arXiv preprint arXiv:2101.01321},
year={2021}
}
```
|
ktalley524/Class_Eval_Results | 2021-03-22T14:19:27.000Z | []
| [
".gitattributes",
"README.md"
]
| ktalley524 | 0 | I love this class |
||
ktrapeznikov/albert-xlarge-v2-squad-v2 | 2020-12-11T21:48:41.000Z | [
"pytorch",
"albert",
"question-answering",
"transformers"
]
| question-answering | [
".gitattributes",
"README.md",
"added_tokens.json",
"config.json",
"eval.csv",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json",
"training_args.bin"
]
| ktrapeznikov | 2,587 | transformers | ### Model
**[`albert-xlarge-v2`](https://huggingface.co/albert-xlarge-v2)** fine-tuned on **[`SQuAD V2`](https://rajpurkar.github.io/SQuAD-explorer/)** using **[`run_squad.py`](https://github.com/huggingface/transformers/blob/master/examples/question-answering/run_squad.py)**
### Training Parameters
Trained on 4 NVIDIA GeForce RTX 2080 Ti 11Gb
```bash
BASE_MODEL=albert-xlarge-v2
python run_squad.py \
--version_2_with_negative \
--model_type albert \
--model_name_or_path $BASE_MODEL \
--output_dir $OUTPUT_MODEL \
--do_eval \
--do_lower_case \
--train_file $SQUAD_DIR/train-v2.0.json \
--predict_file $SQUAD_DIR/dev-v2.0.json \
--per_gpu_train_batch_size 3 \
--per_gpu_eval_batch_size 64 \
--learning_rate 3e-5 \
--num_train_epochs 3.0 \
--max_seq_length 384 \
--doc_stride 128 \
--save_steps 2000 \
--threads 24 \
--warmup_steps 814 \
--gradient_accumulation_steps 4 \
--fp16 \
--do_train
```
### Evaluation
Evaluation on the dev set. I did not sweep for best threshold.
| | val |
|-------------------|-------------------|
| exact | 84.41842836688285 |
| f1 | 87.4628460501696 |
| total | 11873.0 |
| HasAns_exact | 80.68488529014844 |
| HasAns_f1 | 86.78245127423482 |
| HasAns_total | 5928.0 |
| NoAns_exact | 88.1412952060555 |
| NoAns_f1 | 88.1412952060555 |
| NoAns_total | 5945.0 |
| best_exact | 84.41842836688285 |
| best_exact_thresh | 0.0 |
| best_f1 | 87.46284605016956 |
| best_f1_thresh | 0.0 |
### Usage
See [huggingface documentation](https://huggingface.co/transformers/model_doc/albert.html#albertforquestionanswering). Training on `SQuAD V2` allows the model to score whether a paragraph contains an answer:
```python
start_scores, end_scores = model(input_ids)
span_scores = start_scores.softmax(dim=1).log()[:,:,None] + end_scores.softmax(dim=1).log()[:,None,:]
ignore_score = span_scores[:,0,0] #no answer scores
```
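For context, here is a hedged end-to-end sketch of the snippet above; the question/context pair is invented, and recent `transformers` versions expose the logits as `start_logits`/`end_logits` on the model output.
```python
# a rough usage sketch; the question and context are made-up examples
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

model_id = "ktrapeznikov/albert-xlarge-v2-squad-v2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForQuestionAnswering.from_pretrained(model_id)

inputs = tokenizer(
    "Who wrote Hamlet?",
    "Hamlet is a tragedy written by William Shakespeare.",
    return_tensors="pt",
)
with torch.no_grad():
    outputs = model(**inputs)

start_scores, end_scores = outputs.start_logits, outputs.end_logits
span_scores = start_scores.softmax(dim=1).log()[:, :, None] + end_scores.softmax(dim=1).log()[:, None, :]
ignore_score = span_scores[:, 0, 0]  # log-score of the "no answer" ([CLS], [CLS]) span
```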
|
ktrapeznikov/biobert_v1.1_pubmed_squad_v2 | 2021-05-19T21:10:03.000Z | [
"pytorch",
"jax",
"tfsavedmodel",
"bert",
"question-answering",
"transformers"
]
| question-answering | [
".gitattributes",
"README.md",
"config.json",
"eval.csv",
"flax_model.msgpack",
"pytorch_model.bin",
"saved_model.tar.gz",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| ktrapeznikov | 1,207 | transformers | ### Model
**[`monologg/biobert_v1.1_pubmed`](https://huggingface.co/monologg/biobert_v1.1_pubmed)** fine-tuned on **[`SQuAD V2`](https://rajpurkar.github.io/SQuAD-explorer/)** using **[`run_squad.py`](https://github.com/huggingface/transformers/blob/master/examples/question-answering/run_squad.py)**
This model is cased.
### Training Parameters
Trained on 4 NVIDIA GeForce RTX 2080 Ti 11Gb
```bash
BASE_MODEL=monologg/biobert_v1.1_pubmed
python run_squad.py \
--version_2_with_negative \
--model_type bert \
--model_name_or_path $BASE_MODEL \
--output_dir $OUTPUT_MODEL \
--do_eval \
--do_lower_case \
--train_file $SQUAD_DIR/train-v2.0.json \
--predict_file $SQUAD_DIR/dev-v2.0.json \
--per_gpu_train_batch_size 18 \
--per_gpu_eval_batch_size 64 \
--learning_rate 3e-5 \
--num_train_epochs 3.0 \
--max_seq_length 384 \
--doc_stride 128 \
--save_steps 2000 \
--threads 24 \
--warmup_steps 550 \
--gradient_accumulation_steps 1 \
--fp16 \
--logging_steps 50 \
--do_train
```
### Evaluation
Evaluation on the dev set. I did not sweep for best threshold.
| | val |
|-------------------|-------------------|
| exact | 75.97068980038743 |
| f1 | 79.37043950121722 |
| total | 11873.0 |
| HasAns_exact | 74.13967611336032 |
| HasAns_f1 | 80.94892513460755 |
| HasAns_total | 5928.0 |
| NoAns_exact | 77.79646761984861 |
| NoAns_f1 | 77.79646761984861 |
| NoAns_total | 5945.0 |
| best_exact | 75.97068980038743 |
| best_exact_thresh | 0.0 |
| best_f1 | 79.37043950121729 |
| best_f1_thresh | 0.0 |
### Usage
See [huggingface documentation](https://huggingface.co/transformers/model_doc/bert.html#bertforquestionanswering). Training on `SQuAD V2` allows the model to score whether a paragraph contains an answer:
```python
start_scores, end_scores = model(input_ids)
span_scores = start_scores.softmax(dim=1).log()[:,:,None] + end_scores.softmax(dim=1).log()[:,None,:]
ignore_score = span_scores[:,0,0] #no answer scores
```
|
ktrapeznikov/gpt2-medium-topic-news-v2 | 2021-05-23T06:14:58.000Z | [
"pytorch",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"en",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| ktrapeznikov | 26 | transformers | ---
language:
- en
thumbnail:
widget:
- text: "topic climate source washington post title "
---
# GPT2-medium-topic-news
## Model description
GPT2-medium fine-tuned on a largish news corpus, conditioned on topic, source, and title
## Intended uses & limitations
#### How to use
To generate news article text conditioned on a topic, source, title, or some subset of these, prompt the model with:
```python
f"topic {topic} source"
f"topic {topic} source {source} title"
f"topic {topic} source {source} title {title} body"
```
Try the following tags for `topic: climate, weather, vaccination`.
Zero shot generation works pretty well as long as `topic` is a single word and not too specific.
```python
device = "cuda:0"
tokenizer = AutoTokenizer.from_pretrained("ktrapeznikov/gpt2-medium-topic-small-set")
model = AutoModelWithLMHead.from_pretrained("ktrapeznikov/gpt2-medium-topic-small-set")
model.to(device)
topic = "climate"
prompt = tokenizer(f"topic {topics} source straitstimes title", return_tensors="pt")
out = model.generate(prompt["input_ids"].to(device), do_sample=True,max_length=500, early_stopping=True, top_p=.9)
print(tokenizer.decode(out[0].cpu(), skip_special_tokens=True))
``` |
ktrapeznikov/gpt2-medium-topic-news | 2021-05-23T06:18:56.000Z | [
"pytorch",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"en",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| ktrapeznikov | 108 | transformers | ---
language:
- en
thumbnail:
widget:
- text: "topic: climate article:"
---
# GPT2-medium-topic-news
## Model description
GPT2-medium fine-tuned on a large news corpus, conditioned on a topic
## Intended uses & limitations
#### How to use
To generate news article text conditioned on a topic, prompt the model with:
`topic: climate article:`
The following tags were used during training:
`arts law international science business politics disaster world conflict football sport sports artanddesign environment music film lifeandstyle business health commentisfree books technology media education politics travel stage uk society us money culture religion science news tv fashion uk australia cities global childrens sustainable global voluntary housing law local healthcare theguardian`
Zero shot generation works pretty well as long as `topic` is a single word and not too specific.
```python
device = "cuda:0"
tokenizer = AutoTokenizer.from_pretrained("ktrapeznikov/gpt2-medium-topic-news")
model = AutoModelWithLMHead.from_pretrained("ktrapeznikov/gpt2-medium-topic-news")
model.to(device)
topic = "climate"
prompt = tokenizer(f"topic: {topic} article:", return_tensors="pt")
out = model.generate(prompt["input_ids"].to(device), do_sample=True,max_length=500, early_stopping=True, top_p=.9)
print(tokenizer.decode(list(out.cpu()[0])))
```
## Training data
## Training procedure
|
ktrapeznikov/gpt2-medium-topic-small-set | 2021-05-23T06:21:38.000Z | [
"pytorch",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"en",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| ktrapeznikov | 36 | transformers | ---
language:
- en
thumbnail:
widget:
- text: "topic climate source"
---
# GPT2-medium-topic-news
## Model description
GPT2-medium fine-tuned on a small news corpus, conditioned on topic, source, and title
## Intended uses & limitations
#### How to use
To generate news article text conditioned on a topic, source, title, or some subset of these, prompt the model with:
```python
f"topic {topic} source"
f"topic {topic} source {source} title"
f"topic {topic} source {source} title {title} body"
```
Try the following tags for `topic: climate, weather, vaccination`.
Zero shot generation works pretty well as long as `topic` is a single word and not too specific.
```python
device = "cuda:0"
tokenizer = AutoTokenizer.from_pretrained("ktrapeznikov/gpt2-medium-topic-small-set")
model = AutoModelWithLMHead.from_pretrained("ktrapeznikov/gpt2-medium-topic-small-set")
model.to(device)
topic = "climate"
prompt = tokenizer(f"topic {topics} source straitstimes title", return_tensors="pt")
out = model.generate(prompt["input_ids"].to(device), do_sample=True,max_length=500, early_stopping=True, top_p=.9)
print(tokenizer.decode(out[0].cpu(), skip_special_tokens=True))
```
## Sample Output
>[topic] military [source] straitstimes [title] Trump signs bill on military aid to Israel [body] WASHINGTON (AFP) - US President Donald Trump signed into law Thursday (April 24) legislation to provide more than US$15 billion (S$20.43 billion) in military aid to Israel, a move the Obama administration had resisted for political reasons. The White House did not immediately respond to a request for comment on the Israel measure, which Trump had sought unsuccessfully to block during the Obama pres ...
>[topic] military [source] straitstimes [title] Hong Kong's leaders to discuss new travel restrictions as lockdown looms [body] HONG KONG (REUTERS) - Hong Kong authorities said they would hold a meeting of the Legislative Council on Monday (July 21) to discuss new travel restrictions on Hong Kong residents, as the city reported a record daily increase in coronavirus cases. The authorities said they would consider the proposal after meeting government chiefs and reviewing other measures. The co ...
>[topic] military [source] straitstimes [title] Trump signs Bill that gives US troops wider latitude to conduct operations abroad [body] WASHINGTON (AFP) - US President Donald Trump on Thursday (July 23) signed a controversial law that gives US troops more leeway to conduct operations abroad, as he seeks to shore up the embattled government's defences against the coronavirus pandemic and stave off a potentially devastating election defeat. Trump's signature Bill, named after his late father's l ...
>[topic] military [source] straitstimes [title] China's Foreign Ministry responds to Japan's statement on South China Sea: 'No one should assume the role of mediator' [body] BEIJING (AFP) - The Ministry of Foreign Affairs on Tuesday (Oct 18) told Japan to stop taking sides in the South China Sea issue and not interfere in the bilateral relationship, as Japan said it would do "nothing". Foreign Ministry spokesman Zhao Lijian told reporters in Beijing that the Chinese government's position on the ...
>[topic] military [source] straitstimes [title] US warns North Korea on potential nuclear strike [body] WASHINGTON - The United States warned North Korea last Friday that an attack by the North could be a "provocation" that would have "a devastating effect" on its security, as it took aim at Pyongyang over its continued efforts to develop weapons of mass destruction. US Secretary of State Mike Pompeo was speaking at the conclusion of a White House news conference when a reporter asked him how t ...
>[topic] military [source] straitstimes [title] China calls Hong Kong to halt 'illegal and illegal military acts' [body] WASHINGTON • Chinese Foreign Ministry spokeswoman Hua Chunying said yesterday that Hong Kong must stop 'illegal and illegal military acts' before Beijing can recognise the city as its own. In her annual State Councillor's speech, Ms Hua made the case for Hong Kong to resume Hong Kong's status as a semi-autonomous city, and vowed to use its "great power position to actively an ...
## Training data
## Training procedure
|
ktrapeznikov/scibert_scivocab_uncased_squad_v2 | 2021-05-19T21:11:07.000Z | [
"pytorch",
"jax",
"tfsavedmodel",
"bert",
"question-answering",
"transformers"
]
| question-answering | [
".gitattributes",
"README.md",
"config.json",
"eval.csv",
"flax_model.msgpack",
"pytorch_model.bin",
"saved_model.tar.gz",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| ktrapeznikov | 715 | transformers | ### Model
**[`allenai/scibert_scivocab_uncased`](https://huggingface.co/allenai/scibert_scivocab_uncased)** fine-tuned on **[`SQuAD V2`](https://rajpurkar.github.io/SQuAD-explorer/)** using **[`run_squad.py`](https://github.com/huggingface/transformers/blob/master/examples/question-answering/run_squad.py)**
### Training Parameters
Trained on 4 NVIDIA GeForce RTX 2080 Ti 11Gb
```bash
BASE_MODEL=allenai/scibert_scivocab_uncased
python run_squad.py \
--version_2_with_negative \
--model_type bert \
--model_name_or_path $BASE_MODEL \
--output_dir $OUTPUT_MODEL \
--do_eval \
--do_lower_case \
--train_file $SQUAD_DIR/train-v2.0.json \
--predict_file $SQUAD_DIR/dev-v2.0.json \
--per_gpu_train_batch_size 18 \
--per_gpu_eval_batch_size 64 \
--learning_rate 3e-5 \
--num_train_epochs 3.0 \
--max_seq_length 384 \
--doc_stride 128 \
--save_steps 2000 \
--threads 24 \
--warmup_steps 550 \
--gradient_accumulation_steps 1 \
--fp16 \
--logging_steps 50 \
--do_train
```
### Evaluation
Evaluation on the dev set. I did not sweep for best threshold.
| | val |
|-------------------|-------------------|
| exact | 75.07790785816559 |
| f1 | 78.47735207283013 |
| total | 11873.0 |
| HasAns_exact | 70.76585695006747 |
| HasAns_f1 | 77.57449412292718 |
| HasAns_total | 5928.0 |
| NoAns_exact | 79.37762825904122 |
| NoAns_f1 | 79.37762825904122 |
| NoAns_total | 5945.0 |
| best_exact | 75.08633032931863 |
| best_exact_thresh | 0.0 |
| best_f1 | 78.48577454398324 |
| best_f1_thresh | 0.0 |
### Usage
See [huggingface documentation](https://huggingface.co/transformers/model_doc/bert.html#bertforquestionanswering). Training on `SQuAD V2` allows the model to score whether a paragraph contains an answer:
```python
start_scores, end_scores = model(input_ids)
span_scores = start_scores.softmax(dim=1).log()[:,:,None] + end_scores.softmax(dim=1).log()[:,None,:]
ignore_score = span_scores[:,0,0] #no answer scores
```
|
kuaiboard/default | 2021-02-18T00:35:44.000Z | []
| [
".gitattributes"
]
| kuaiboard | 0 | |||
kuisailab/albert-base-arabic | 2021-04-25T22:56:25.000Z | [
"pytorch",
"tf",
"albert",
"ar",
"dataset:oscar",
"dataset:wikipedia",
"transformers",
"masked-lm",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tf_model.h5",
"tokenizer_config.json"
]
| kuisailab | 354 | transformers | ---
language: ar
datasets:
- oscar
- wikipedia
tags:
- ar
- masked-lm
---
# Arabic-ALBERT Base
Arabic edition of ALBERT Base pretrained language model
_If you use any of these models in your work, please cite this work as:_
```
@software{ali_safaya_2020_4718724,
author = {Ali Safaya},
title = {Arabic-ALBERT},
month = aug,
year = 2020,
publisher = {Zenodo},
version = {1.0.0},
doi = {10.5281/zenodo.4718724},
url = {https://doi.org/10.5281/zenodo.4718724}
}
```
## Pretraining data
The models were pretrained on ~4.4 Billion words:
- Arabic version of [OSCAR](https://oscar-corpus.com/) (unshuffled version of the corpus) - filtered from [Common Crawl](http://commoncrawl.org/)
- Recent dump of Arabic [Wikipedia](https://dumps.wikimedia.org/backup-index.html)
__Notes on training data:__
- Our final version of the corpus contains some non-Arabic words inline, which we did not remove from sentences since that would affect some tasks like NER.
- Although non-Arabic characters were lower-cased as a preprocessing step, Arabic characters have no upper or lower case, so there are no separate cased and uncased versions of the model.
- The corpus and vocabulary set are not restricted to Modern Standard Arabic, they contain some dialectical Arabic too.
## Pretraining details
- These models were trained using Google ALBERT's github [repository](https://github.com/google-research/albert) on a single TPU v3-8 provided for free from [TFRC](https://www.tensorflow.org/tfrc).
- Our pretraining procedure follows the training settings of BERT with some changes: trained for 7M training steps with a batch size of 64, instead of 125K steps with a batch size of 4096.
## Models
| | albert-base | albert-large | albert-xlarge |
|:---:|:---:|:---:|:---:|
| Hidden Layers | 12 | 24 | 24 |
| Attention heads | 12 | 16 | 32 |
| Hidden size | 768 | 1024 | 2048 |
## Results
For further details on the models' performance or any other queries, please refer to [Arabic-ALBERT](https://github.com/KUIS-AI-Lab/Arabic-ALBERT/)
## How to use
You can use these models by installing `torch` or `tensorflow` and the Hugging Face `transformers` library, and then initializing the model like this:
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
# loading the tokenizer
base_tokenizer = AutoTokenizer.from_pretrained("kuisailab/albert-base-arabic")
# loading the model
base_model = AutoModelForMaskedLM.from_pretrained("kuisailab/albert-base-arabic")
```
## Acknowledgement
Thanks to Google for providing a free TPU for the training process, and to Hugging Face for hosting these models on their servers.
|
kuisailab/albert-large-arabic | 2021-04-25T22:57:35.000Z | [
"pytorch",
"tf",
"albert",
"ar",
"dataset:oscar",
"dataset:wikipedia",
"transformers",
"masked-lm",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tf_model.h5",
"tokenizer_config.json"
]
| kuisailab | 103 | transformers | ---
language: ar
datasets:
- oscar
- wikipedia
tags:
- ar
- masked-lm
---
# Arabic-ALBERT Large
Arabic edition of ALBERT Large pretrained language model
_If you use any of these models in your work, please cite this work as:_
```
@software{ali_safaya_2020_4718724,
author = {Ali Safaya},
title = {Arabic-ALBERT},
month = aug,
year = 2020,
publisher = {Zenodo},
version = {1.0.0},
doi = {10.5281/zenodo.4718724},
url = {https://doi.org/10.5281/zenodo.4718724}
}
```
## Pretraining data
The models were pretrained on ~4.4 Billion words:
- Arabic version of [OSCAR](https://oscar-corpus.com/) (unshuffled version of the corpus) - filtered from [Common Crawl](http://commoncrawl.org/)
- Recent dump of Arabic [Wikipedia](https://dumps.wikimedia.org/backup-index.html)
__Notes on training data:__
- Our final version of the corpus contains some non-Arabic words inline, which we did not remove from sentences since that would affect some tasks like NER.
- Although non-Arabic characters were lower-cased as a preprocessing step, Arabic characters have no upper or lower case, so there are no separate cased and uncased versions of the model.
- The corpus and vocabulary set are not restricted to Modern Standard Arabic, they contain some dialectical Arabic too.
## Pretraining details
- These models were trained using Google ALBERT's github [repository](https://github.com/google-research/albert) on a single TPU v3-8 provided for free from [TFRC](https://www.tensorflow.org/tfrc).
- Our pretraining procedure follows the training settings of BERT with some changes: trained for 7M training steps with a batch size of 64, instead of 125K steps with a batch size of 4096.
## Models
| | albert-base | albert-large | albert-xlarge |
|:---:|:---:|:---:|:---:|
| Hidden Layers | 12 | 24 | 24 |
| Attention heads | 12 | 16 | 32 |
| Hidden size | 768 | 1024 | 2048 |
## Results
For further details on the models' performance or any other queries, please refer to [Arabic-ALBERT](https://github.com/KUIS-AI-Lab/Arabic-ALBERT/)
## How to use
You can use these models by installing `torch` or `tensorflow` and the Hugging Face `transformers` library, and then initializing the model like this:
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
# loading the tokenizer
tokenizer = AutoTokenizer.from_pretrained("kuisailab/albert-large-arabic")
# loading the model
model = AutoModelForMaskedLM.from_pretrained("kuisailab/albert-large-arabic")
```
## Acknowledgement
Thanks to Google for providing a free TPU for the training process, and to Hugging Face for hosting these models on their servers.
|
kuisailab/albert-xlarge-arabic | 2021-04-25T22:58:13.000Z | [
"pytorch",
"tf",
"albert",
"ar",
"dataset:oscar",
"dataset:wikipedia",
"transformers",
"masked-lm",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tf_model.h5",
"tokenizer_config.json"
]
| kuisailab | 129 | transformers | ---
language: ar
datasets:
- oscar
- wikipedia
tags:
- ar
- masked-lm
---
# Arabic-ALBERT Xlarge
Arabic edition of ALBERT Xlarge pretrained language model
_If you use any of these models in your work, please cite this work as:_
```
@software{ali_safaya_2020_4718724,
author = {Ali Safaya},
title = {Arabic-ALBERT},
month = aug,
year = 2020,
publisher = {Zenodo},
version = {1.0.0},
doi = {10.5281/zenodo.4718724},
url = {https://doi.org/10.5281/zenodo.4718724}
}
```
## Pretraining data
The models were pretrained on ~4.4 Billion words:
- Arabic version of [OSCAR](https://oscar-corpus.com/) (unshuffled version of the corpus) - filtered from [Common Crawl](http://commoncrawl.org/)
- Recent dump of Arabic [Wikipedia](https://dumps.wikimedia.org/backup-index.html)
__Notes on training data:__
- Our final version of the corpus contains some non-Arabic words inline, which we did not remove from sentences since that would affect some tasks like NER.
- Although non-Arabic characters were lower-cased as a preprocessing step, Arabic characters have no upper or lower case, so there are no separate cased and uncased versions of the model.
- The corpus and vocabulary set are not restricted to Modern Standard Arabic, they contain some dialectical Arabic too.
## Pretraining details
- These models were trained using Google ALBERT's github [repository](https://github.com/google-research/albert) on a single TPU v3-8 provided for free from [TFRC](https://www.tensorflow.org/tfrc).
- Our pretraining procedure follows the training settings of BERT with some changes: trained for 7M training steps with a batch size of 64, instead of 125K steps with a batch size of 4096.
## Models
| | albert-base | albert-large | albert-xlarge |
|:---:|:---:|:---:|:---:|
| Hidden Layers | 12 | 24 | 24 |
| Attention heads | 12 | 16 | 32 |
| Hidden size | 768 | 1024 | 2048 |
## Results
For further details on the models' performance or any other queries, please refer to [Arabic-ALBERT](https://github.com/KUIS-AI-Lab/Arabic-ALBERT/)
## How to use
You can use these models by installing `torch` or `tensorflow` and the Hugging Face `transformers` library, and then initializing the model like this:
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
# loading the tokenizer
tokenizer = AutoTokenizer.from_pretrained("kuisailab/albert-xlarge-arabic")
# loading the model
model = AutoModelForMaskedLM.from_pretrained("kuisailab/albert-xlarge-arabic")
```
## Acknowledgement
Thanks to Google for providing a free TPU for the training process, and to Hugging Face for hosting these models on their servers.
|
kuppuluri/telugu_bertu | 2021-05-19T21:12:30.000Z | [
"pytorch",
"jax",
"bert",
"masked-lm",
"te",
"transformers",
"fill-mask"
]
| fill-mask | [
".DS_Store",
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"log_history.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt",
"model_cards/README.md"
]
| kuppuluri | 18 | transformers | ---
language: te
tags:
- masked-lm
- fill-mask
---
# MyModelName
## Model description
You can embed local or remote images using ``
## Intended uses & limitations
#### How to use
```python
# You can include sample code which will be formatted
```
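A hedged fill-mask sketch for this checkpoint, based only on the repository's tags (`bert`, `masked-lm`, `te`); the Telugu example sentence is invented for illustration.
```python
# a hedged sketch, not from the original card: query the checkpoint through
# the fill-mask pipeline (the example sentence is made up)
from transformers import AutoTokenizer, AutoModelForMaskedLM, pipeline

tokenizer = AutoTokenizer.from_pretrained("kuppuluri/telugu_bertu")
model = AutoModelForMaskedLM.from_pretrained("kuppuluri/telugu_bertu")

fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
print(fill_mask(f"హైదరాబాద్ {tokenizer.mask_token} రాజధాని ."))
```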
#### Limitations and bias
Provide examples of latent issues and potential remediations.
## Training data
Describe the data you used to train the model.
If you initialized it with pre-trained weights, add a link to the pre-trained model card or repository with description of the pre-training data.
## Training procedure
Preprocessing, hardware used, hyperparameters...
## Eval results
### BibTeX entry and citation info
```bibtex
@inproceedings{...,
year={2020}
}
```
|
kuppuluri/telugu_bertu_ner | 2021-05-19T21:13:30.000Z | [
"pytorch",
"jax",
"bert",
"token-classification",
"transformers"
]
| token-classification | [
".DS_Store",
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"model_args.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| kuppuluri | 14 | transformers | # Named Entity Recognition Model for Telugu
#### How to use
```python
from simpletransformers.ner import NERModel
model = NERModel('bert',
'kuppuluri/telugu_bertu_ner',
labels=[
'B-PERSON', 'I-ORG', 'B-ORG', 'I-LOC', 'B-MISC',
'I-MISC', 'I-PERSON', 'B-LOC', 'O'
],
use_cuda=False,
args={"use_multiprocessing": False})
text = "เฐตเฐฟเฐฐเฐพเฐเฑ เฐเฑเฐนเฑเฐฒเฑ เฐเฑเฐกเฐพ เฐ
เฐฆเฑ เฐจเฐฟเฐฐเฑเฐฒเฐเฑเฐทเฑเฐฏเฐพเฐจเฑเฐจเฐฟ เฐชเฑเฐฐเฐฆเฐฐเฑเฐถเฐฟเฐเฐเฐฟ เฐเฑเฐตเฐฒเฐ เฐเฐ เฐชเฐฐเฑเฐเฑเฐเฑ เฐฐเฐจเฑเฐเฑ เฐชเฑเฐตเฐฟเฐฒเฐฟเฐฏเฐจเฑ เฐเฑเฐฐเฐพเฐกเฑ ."
results = model.predict([text])
```
## Training data
Training data is from https://github.com/anikethjr/NER_Telugu
## Eval results
On the test set my results were
eval_loss = 0.0004407190410447974
f1_score = 0.999519076627124
precision = 0.9994389677005691
recall = 0.9995991983967936
|
kuppuluri/telugu_bertu_pos | 2021-05-19T21:14:40.000Z | [
"pytorch",
"jax",
"bert",
"token-classification",
"transformers"
]
| token-classification | [
".DS_Store",
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"model_args.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| kuppuluri | 14 | transformers | # Part of Speech tagging Model for Telugu
#### How to use
```python
from simpletransformers.ner import NERModel
model = NERModel('bert',
'kuppuluri/telugu_bertu_pos',
args={"use_multiprocessing": False},
labels=[
'QC', 'JJ', 'NN', 'QF', 'RDP', 'O',
'NNO', 'PRP', 'RP', 'VM', 'WQ',
'PSP', 'UT', 'CC', 'INTF', 'SYMP',
'NNP', 'INJ', 'SYM', 'CL', 'QO',
'DEM', 'RB', 'NST', ],
use_cuda=False)
text = "เฐตเฐฟเฐฐเฐพเฐเฑ เฐเฑเฐนเฑเฐฒเฑ เฐเฑเฐกเฐพ เฐ
เฐฆเฑ เฐจเฐฟเฐฐเฑเฐฒเฐเฑเฐทเฑเฐฏเฐพเฐจเฑเฐจเฐฟ เฐชเฑเฐฐเฐฆเฐฐเฑเฐถเฐฟเฐเฐเฐฟ เฐเฑเฐตเฐฒเฐ เฐเฐ เฐชเฐฐเฑเฐเฑเฐเฑ เฐฐเฐจเฑเฐเฑ เฐชเฑเฐตเฐฟเฐฒเฐฟเฐฏเฐจเฑ เฐเฑเฐฐเฐพเฐกเฑ ."
results = model.predict([text])
```
## Training data
Training data is from https://github.com/anikethjr/NER_Telugu
## Eval results
On the test set my results were
eval_loss = 0.0036797842364565416
f1_score = 0.9983795127912227
precision = 0.9984325602401637
recall = 0.9983264709788816
|
kuppuluri/telugu_bertu_tydiqa | 2021-05-19T21:15:58.000Z | [
"pytorch",
"jax",
"bert",
"question-answering",
"transformers"
]
| question-answering | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"nbest_predictions_.json",
"predictions_.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| kuppuluri | 29 | transformers | # Telugu Question-Answering model trained on Tydiqa dataset from Google
#### How to use
```python
from transformers import pipeline, AutoModelForQuestionAnswering, AutoTokenizer
model_name = "kuppuluri/telugu_bertu_tydiqa"
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained("kuppuluri/telugu_bertu_tydiqa",
clean_text=False,
handle_chinese_chars=False,
strip_accents=False,
wordpieces_prefix='##')
nlp = pipeline('question-answering', model=model, tokenizer=tokenizer)
# `question` and `context` below are placeholders; pass your own Telugu strings.
question = "<Telugu question>"
context = "<Telugu passage that contains the answer>"
result = nlp({'question': question, 'context': context})
```
## Training data
I used Tydiqa Telugu data from Google https://github.com/google-research-datasets/tydiqa
|
kuzgunlar/electra-turkish-ner | 2020-07-31T08:55:28.000Z | [
"pytorch",
"electra",
"token-classification",
"transformers"
]
| token-classification | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| kuzgunlar | 33 | transformers | |
kuzgunlar/electra-turkish-qa | 2020-07-31T09:15:54.000Z | [
"pytorch",
"electra",
"question-answering",
"transformers"
]
| question-answering | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| kuzgunlar | 21 | transformers | |
kuzgunlar/electra-turkish-sentiment-analysis | 2020-08-16T13:05:57.000Z | [
"pytorch",
"electra",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"config.json",
"model_args.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| kuzgunlar | 49 | transformers | |
kykim/albert-kor-base | 2021-01-22T00:27:49.000Z | [
"pytorch",
"tf",
"albert",
"masked-lm",
"ko",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
]
| kykim | 165 | transformers | ---
language: ko
---
# Albert base model for Korean
* 70GB Korean text dataset and 42000 lower-cased subwords are used
* Check the model performance and other language models for Korean in [github](https://github.com/kiyoungkim1/LM-kor)
```python
from transformers import BertTokenizerFast, AlbertModel
tokenizer_albert = BertTokenizerFast.from_pretrained("kykim/albert-kor-base")
model_albert = AlbertModel.from_pretrained("kykim/albert-kor-base")
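# Quick check (illustrative): tokenize a sentence and run a forward pass
inputs = tokenizer_albert("한국어 문장을 입력하세요.", return_tensors="pt")
outputs = model_albert(**inputs)   # outputs.last_hidden_state: [1, seq_len, hidden_size]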
``` |
kykim/bert-kor-base | 2021-05-19T21:17:13.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"ko",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
]
| kykim | 4,960 | transformers | ---
language: ko
---
# Bert base model for Korean
* 70GB Korean text dataset and 42000 lower-cased subwords are used
* Check the model performance and other language models for Korean in [github](https://github.com/kiyoungkim1/LM-kor)
```python
from transformers import BertTokenizerFast, BertModel
tokenizer_bert = BertTokenizerFast.from_pretrained("kykim/bert-kor-base")
model_bert = BertModel.from_pretrained("kykim/bert-kor-base")
``` |
kykim/bertshared-kor-base | 2021-02-23T11:49:50.000Z | [
"pytorch",
"encoder-decoder",
"seq2seq",
"ko",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"tokenizer_config.json",
"vocab.txt"
]
| kykim | 127 | transformers | ---
language: ko
---
# BERT-shared (encoder-decoder) base model for Korean
* 70GB Korean text dataset and 42000 lower-cased subwords are used
* Check the model performance and other language models for Korean in [github](https://github.com/kiyoungkim1/LM-kor)
```python
# only for pytorch in transformers
from transformers import BertTokenizerFast, EncoderDecoderModel
tokenizer = BertTokenizerFast.from_pretrained("kykim/bertshared-kor-base")
model = EncoderDecoderModel.from_pretrained("kykim/bertshared-kor-base")
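# Generation sketch -- the decoder start token and generation settings below are
# illustrative assumptions, not values published by the author.
model.config.decoder_start_token_id = tokenizer.cls_token_id
inputs = tokenizer("요약할 문장을 입력하세요.", return_tensors="pt")
generated = model.generate(inputs.input_ids, max_length=64)
print(tokenizer.decode(generated[0], skip_special_tokens=True))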
``` |
kykim/electra-kor-base | 2021-01-22T00:28:50.000Z | [
"pytorch",
"tf",
"electra",
"pretraining",
"ko",
"transformers"
]
| [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
]
| kykim | 1,483 | transformers | ---
language: ko
---
# Electra base model for Korean
* 70GB Korean text dataset and 42000 lower-cased subwords are used
* Check the model performance and other language models for Korean in [github](https://github.com/kiyoungkim1/LM-kor)
```python
from transformers import ElectraTokenizerFast, ElectraModel
tokenizer_electra = ElectraTokenizerFast.from_pretrained("kykim/electra-kor-base")
model = ElectraModel.from_pretrained("kykim/electra-kor-base")
``` |
|
kykim/funnel-kor-base | 2021-01-22T01:56:37.000Z | [
"pytorch",
"tf",
"funnel",
"ko",
"transformers"
]
| [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
]
| kykim | 63 | transformers | ---
language: ko
---
# Funnel-transformer base model for Korean
* 70GB Korean text dataset and 42000 lower-cased subwords are used
* Check the model performance and other language models for Korean in [github](https://github.com/kiyoungkim1/LM-kor)
```python
from transformers import FunnelTokenizer, FunnelModel
tokenizer = FunnelTokenizer.from_pretrained("kykim/funnel-kor-base")
model = FunnelModel.from_pretrained("kykim/funnel-kor-base")
``` |
|
kykim/gpt3-kor-small_based_on_gpt2 | 2021-05-23T06:24:05.000Z | [
"pytorch",
"tf",
"jax",
"gpt2",
"ko",
"transformers"
]
| [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
]
| kykim | 670 | transformers | ---
language: ko
---
# GPT-3-style small model for Korean (based on GPT-2)
* 70GB Korean text dataset and 42000 lower-cased subwords are used
* Check the model performance and other language models for Korean in [github](https://github.com/kiyoungkim1/LM-kor)
```python
from transformers import BertTokenizerFast, GPT2LMHeadModel
tokenizer_gpt3 = BertTokenizerFast.from_pretrained("kykim/gpt3-kor-small_based_on_gpt2")
input_ids = tokenizer_gpt3.encode("text to tokenize")[1:] # remove cls token
model_gpt3 = GPT2LMHeadModel.from_pretrained("kykim/gpt3-kor-small_based_on_gpt2")
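# Generation sketch (illustrative settings)
import torch
output_ids = model_gpt3.generate(torch.tensor([input_ids]),
                                 max_length=64,
                                 do_sample=True,
                                 top_p=0.95)
print(tokenizer_gpt3.decode(output_ids[0]))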
``` |
|
kykim/t5-kor-small | 2021-01-29T05:33:08.000Z | [
"pytorch",
"tf",
"t5",
"transformers"
]
| [
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
]
| kykim | 13 | transformers | ||
kz/mt5base-finetuned-ECC-japanese-small | 2021-04-08T00:48:32.000Z | [
"pytorch",
"mt5",
"seq2seq",
"ja",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json"
]
| kz | 47 | transformers | ---
language: "ja"
widget:
- text: "ๅพ่ผฉใใฏ็ซใงใใใๅๅใใฏใพใ ใชใใ"
---
Google's mt5-base fine-tuned in Japanese to solve error detection and correction.
# Japanese error correction
- "ๅพ่ผฉใใฏ็ซใงใใใๅๅใใฏใพใ ใชใใ"โ"ๅพ่ผฉใฏ็ซใงใใใๅๅใฏใพใ ใชใใ"
- "-small" has been trained on 20,000 text pairs.
- dataset: [link](http://nlp.ist.i.kyoto-u.ac.jp/?%E6%97%A5%E6%9C%AC%E8%AA%9EWikipedia%E5%85%A5%E5%8A%9B%E8%AA%A4%E3%82%8A%E3%83%87%E3%83%BC%E3%82%BF%E3%82%BB%E3%83%83%E3%83%88)
- prefix: "correction: "ใ๏ผnotice: single task trained.)
## Notes
- "ๆฑๅๅคงๅญฆใงMASKใ็ ็ฉถใใใฆใใพใใ"โ"ๆฑๅๅคงๅญฆใงMASKใฎ็ ็ฉถใใใฆใใพใใ"ใใธใ ใปใญใฃใชใผใไธป่ชใจใใๅฏไธใฎใฌๆ ผใๆถใใใใธใ ใปใญใฃใชใผใฏ็ ็ฉถๅฏพ่ฑกใจใชใฃใใๆ่ชญๅใฎใใใซ็จใใใใไธป่ชใจๅ่ฉใ่ฟใฅใใ่จๆณใฏ่ชคใๆฑใ๏ผ
- "ๆฑๅๅคงๅญฆใงใในใฏใ็ ็ฉถใใใฆใใพใใ"โ"ๆฑๅๅคงๅญฆใงใในใฏใฎ็ ็ฉถใใใฆใใพใใ"ใ"ๆฑๅๅคงๅญฆใงใคใผใญใณใปใในใฏใ็ ็ฉถใใใฆใใพใใ"โ"ๆฑๅๅคงๅญฆใงใคใผใญใณใปใในใฏใ็ ็ฉถใใใฆใใพใใ"ใ"ๆฑๅๅคงๅญฆใงใใคใผใญใณใปใในใฏใใ็ ็ฉถใใใฆใใพใใ"โ"ๆฑๅๅคงๅญฆใงใใคใผใญใณใปใในใฏใใฎ็ ็ฉถใใใฆใใพใใ"ใๅ่ชใฎๆๅณใ่ๆ
ฎใใใฆใใ๏ผ
- "ๆฑๅๅคงๅญฆใงใคใในใฏใ็ ็ฉถใใใฆใใพใใ"โ"ๆฑๅๅคงๅญฆใงใคใในใฏใฎ็ ็ฉถใใใฆใใพใใ"
- "ๆฑๅๅคงๅญฆใงใฏใ็ ็ฉถใใใฆใใพใใ"โ"ๆฑๅๅคงๅญฆใงใณใณใใฅใผใฟใผใ็ ็ฉถใใใฆใใพใใ" ใใใฏใกใใฃใจๅพ
ใฃใฆใ
- "ๆฑๅๅคงๅญฆใง ๏ผextra_id_0๏ผ ใฎ็ ็ฉถใใใฆใใพใใ"โ"ๆฑๅๅคงๅญฆใงๅๅญฆใฎ็ ็ฉถใใใฆใใพใใ"
- "ๆฑๅๅคงๅญฆใง ๏ผextra_id_0๏ผ ใ็ ็ฉถใใใฆใใพใใ"โ"ๆฑๅๅคงๅญฆใงๅทฅๅญฆใ็ ็ฉถใใใฆใใพใใ"ใๅทฅๅญฆใใใ
- "ๅพ่ผฉใฏ ๏ผextra_id_0๏ผ ใงใใใ"โ"ๅพ่ผฉใฏๅพ่ผฉใงใใใ"ใ"็ญใใฏ็ซใงใใๅพ่ผฉใฏ ๏ผextra_id_0๏ผ ใงใใใ"โ"็ญใใฏ็ซใงใใๅพ่ผฉใฏ็ซใงใใใ"ใ"็ญใใฏ็ซใงใใๅพ่ผฉใฎ ๏ผextra_id_0๏ผ ใงใใใ"โ"็ญใใฏ็ซใงใใๅพ่ผฉใฎๅฟใฏ็ซใงใใใ"
- "Aใฏ11ใBใฏ9ใAใฏ ๏ผextra_id_0๏ผ ใBใฏ ๏ผextra_id_1๏ผ ใ"โ"Aใฏ11ใBใฏ9ใAใฏ11ใBใฏ9ใ"
**check in progress**
|
kz/mt5base-finetuned-patentsum-japanese-small | 2021-04-23T22:06:32.000Z | [
"pytorch",
"mt5",
"seq2seq",
"ja",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json"
]
| kz | 120 | transformers | ---
language: "ja"
widget:
- text: "่ซๆฑ้
<extra_id_0>"
---
Google's mt5-base fine-tuned in Japanese to summarize patent claims in a limited pharmaceutical domain.
# Japanese patent claim summarization (limited to the pharmaceutical domain)
- """ใ่ซๆฑ้
๏ผใ
ใใ๏ผฃ๏ผค๏ผ๏ผ๏ผ้
ๅ็ชๅท๏ผ๏ผๅใณใซใใฏใคใถใซ๏ผฃ๏ผค๏ผ๏ผ๏ผ้
ๅ็ชๅท๏ผ๏ผใซ็น็ฐ็ใซ็ตๅใใๅ้ขใใใๆไฝใงใใฃใฆใ
๏ฝ๏ผไปฅไธใๅซใ้้ๅฏๅค้ ๅ๏ผ
๏ฝ๏ผ้
ๅ็ชๅท๏ผใๅซใ็ฌฌ๏ผใฎ๏ผฃ๏ผค๏ผฒ๏ผ
๏ฝ๏ฝ๏ผ้
ๅ็ชๅท๏ผใๅซใ็ฌฌ๏ผใฎ๏ผฃ๏ผค๏ผฒ๏ผ
๏ฝ๏ฝ๏ฝ๏ผ้
ๅ็ชๅท๏ผใๅซใ็ฌฌ๏ผใฎ๏ผฃ๏ผค๏ผฒ๏ผๅใณ
๏ฝ๏ผไปฅไธใๅซใ่ปฝ้ๅฏๅค้ ๅ๏ผ
๏ฝ๏ผ้
ๅ็ชๅท๏ผใๅซใ็ฌฌ๏ผใฎ๏ผฃ๏ผค๏ผฒ๏ผ
๏ฝ๏ฝ๏ผ้
ๅ็ชๅท๏ผใๅซใ็ฌฌ๏ผใฎ๏ผฃ๏ผค๏ผฒ๏ผ
๏ฝ๏ฝ๏ฝ๏ผ้
ๅ็ชๅท๏ผใๅซใ็ฌฌ๏ผใฎ๏ผฃ๏ผค๏ผฒ๏ผ
ใๅซใใๆไฝใ(่ซๆฑ้
๏ผ๏ฝ๏ผ๏ผ็็ฅ)ใ่ซๆฑ้
๏ผ๏ผใ
ๅ่จ่ชๅทฑๅ
็ซ็พๆฃใใ้ข็ฏใชใฆใใใๅ
จ่บซๆงใจใชใใใใผใในใ็็ๆง่
ธ็พๆฃใๆฝฐ็ๆงๅคง่
ธ็ๅใณ็งปๆค็ๅฏพๅฎฟไธป็
ใใใชใ็พคใใ้ธๆใใใใ่ซๆฑ้
๏ผ๏ผ่จ่ผใฎๆนๆณใ
"""
- โ"ๆฌ็บๆใฏใใใCD38ใฟใณใใฏ่ณช(้
ๅ็ชๅท0)ๅใณใซใใฏใคใถใซCD38(้
ๅ็ชๅท2)ใซ็น็ฐ็ใซ็ตๅใใๆไฝใซ้ขใใใๆฌ็บๆใฏใพใใใใCD38ใฟใณใใฏ่ณช(้
ๅ็ชๅท0)ๅใณใซใใฏใคใถใซCD38(้
ๅ็ชๅท2)ใซ็น็ฐ็ใซ็ตๅใใๆไฝใใใใใๅฟ
่ฆใจใใๆฃ่
ใซๆไธใใใใจใๅซใใ่ชๅทฑๅ
็ซ็พๆฃใฎๆฒป็ๆนๆณใซ้ขใใใ"
- "-small" has been trained on 20,000 text pairs only.
- dataset: ๏ผ
- prefix: "patent claim summarization: "ใ๏ผnotice: single task trained.)
# References
- https://huggingface.co/blog/how-to-generate
- Preprocessing was not optimal; it will be revised.
- Would like to add a prefix for converting to broader or narrower concepts on demand.
- Would like to add a prefix for summarizing along an arbitrary theme.
- Adding a prefix makes it possible, to some extent, to summarize along an arbitrary theme. The structure of the claims can be exploited, or the generated text can be corrected with a model that judges whether it follows the given theme, etc.
**check in progress**
|
l3cube-pune/MarathiSentiment | 2021-05-18T07:35:10.000Z | [
"pytorch",
"tf",
"albert",
"text-classification",
"mr",
"dataset:L3CubeMahaSent",
"arxiv:2103.11408",
"transformers",
"license:cc-by-4.0"
]
| text-classification | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tf_model.h5",
"tokenizer_config.json"
]
| l3cube-pune | 99 | transformers | ---
language: mr
tags:
- albert
license: cc-by-4.0
datasets:
- L3CubeMahaSent
widget:
- text: "I like you. </s></s> I love you."
---
## MarathiSentiment
MarathiSentiment is an IndicBERT (ai4bharat/indic-bert) model fine-tuned on L3CubeMahaSent, a Marathi tweet-based sentiment analysis dataset.
[Dataset link](https://github.com/l3cube-pune/MarathiNLP)
More details on the dataset, models, and baseline results can be found in our [paper](http://arxiv.org/abs/2103.11408).
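A minimal usage sketch with the standard `transformers` text-classification pipeline (the example sentence is an arbitrary Marathi input, not taken from the dataset):

```python
from transformers import pipeline

classifier = pipeline("text-classification",
                      model="l3cube-pune/MarathiSentiment",
                      tokenizer="l3cube-pune/MarathiSentiment")
print(classifier("मला हा चित्रपट खूप आवडला"))
```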
```
@inproceedings{kulkarni2021l3cubemahasent,
title={L3CubeMahaSent: A Marathi Tweet-based Sentiment Analysis Dataset},
author={Kulkarni, Atharva and Mandhane, Meet and Likhitkar, Manali and Kshirsagar, Gayatri and Joshi, Raviraj},
booktitle={Proceedings of the Eleventh Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis},
pages={213--220},
year={2021}
}
``` |
lab01/future | 2021-03-15T14:09:39.000Z | []
| [
".gitattributes"
]
| lab01 | 0 | |||
labbli/wetlab | 2021-02-23T11:09:54.000Z | []
| [
".gitattributes"
]
| labbli | 0 | |||
laboro-ai/distilbert-base-japanese-finetuned-ddqa | 2020-12-18T03:10:13.000Z | [
"pytorch",
"distilbert",
"question-answering",
"ja",
"transformers",
"license:cc-by-nc-4.0"
]
| question-answering | [
".gitattributes",
".gitignore",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json"
]
| laboro-ai | 273 | transformers | ---
language: ja
tags:
- distilbert
license: cc-by-nc-4.0
---
|
laboro-ai/distilbert-base-japanese-finetuned-livedoor | 2020-12-18T03:09:54.000Z | [
"pytorch",
"distilbert",
"text-classification",
"ja",
"transformers",
"license:cc-by-nc-4.0"
]
| text-classification | [
".gitattributes",
".gitignore",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json"
]
| laboro-ai | 69 | transformers | ---
language: ja
tags:
- distilbert
license: cc-by-nc-4.0
---
|
laboro-ai/distilbert-base-japanese | 2020-12-18T03:09:19.000Z | [
"pytorch",
"distilbert",
"ja",
"transformers",
"license:cc-by-nc-4.0"
]
| [
".gitattributes",
".gitignore",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json"
]
| laboro-ai | 556 | transformers | ---
language: ja
tags:
- distilbert
license: cc-by-nc-4.0
---
|
|
laifuchicago/farm2tran | 2020-09-16T01:15:14.000Z | [
"pytorch",
"xlm-roberta",
"question-answering",
"transformers"
]
| question-answering | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"sentencepiece.bpe.model",
"special_tokens_map.json",
"tokenizer_config.json",
"xlm-roberta-large/special_tokens_map.json"
]
| laifuchicago | 12 | transformers | |
laksh001/gpt2_manage_new | 2021-05-25T17:46:17.000Z | []
| [
".gitattributes"
]
| laksh001 | 0 | |||
laksh001/gpt2_manage_new1 | 2021-05-25T17:46:53.000Z | []
| [
".gitattributes"
]
| laksh001 | 0 | |||
laksh001/gpt2_manage_new2 | 2021-05-25T17:49:39.000Z | []
| [
".gitattributes"
]
| laksh001 | 0 | |||
lakshayt/Full-Humor-Models | 2020-12-09T03:42:04.000Z | []
| [
".gitattributes"
]
| lakshayt | 0 | |||
lakshayt/roberta-valid | 2020-12-08T01:35:19.000Z | []
| [
".gitattributes"
]
| lakshayt | 0 | |||
lalalala1/lalala | 2021-04-08T14:56:43.000Z | []
| [
".gitattributes"
]
| lalalala1 | 0 | |||
lalopey/benn_eifert | 2021-05-23T06:25:18.000Z | [
"pytorch",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"config.json",
"eval_results_lm.txt",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
]
| lalopey | 14 | transformers | |
lalopey/pearkes | 2021-05-23T06:26:23.000Z | [
"pytorch",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"config.json",
"eval_results_lm.txt",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
]
| lalopey | 14 | transformers | |
lalopey/saeed | 2021-05-23T06:27:31.000Z | [
"pytorch",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"config.json",
"eval_results_lm.txt",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
]
| lalopey | 22 | transformers | |
lamhieu/distilbert-base-multilingual-cased-vietnamese-topicifier | 2021-04-29T18:01:33.000Z | [
"pytorch",
"distilbert",
"text-classification",
"vi",
"transformers",
"vietnamese",
"topicifier",
"multilingual",
"tiny",
"license:mit",
"pipeline_tag:text-classification"
]
| text-classification | [
".DS_Store",
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| lamhieu | 32 | transformers | ---
language:
- vi
tags:
- vietnamese
- topicifier
- multilingual
- tiny
license:
- mit
pipeline_tag: text-classification
widget:
- text: "ฤam mรช cแปงa tรดi lร nhiแบฟp แบฃnh"
---
# distilbert-base-multilingual-cased-vietnamese-topicifier
## About
Fine-tuned from `distilbert-base-multilingual-cased` on a tiny dataset of Vietnamese topics.
## Usage
Try entering a message to predict what topic is being discussed. For example:
```
# Photography
Đam mê của tôi là nhiếp ảnh
# World War I
Bạn đã từng nghe về cuộc đại thế chiến ?
```
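A minimal sketch for trying the model programmatically (standard `transformers` pipeline usage; nothing here is specific to the author's setup):

```python
from transformers import pipeline

topicifier = pipeline("text-classification",
                      model="lamhieu/distilbert-base-multilingual-cased-vietnamese-topicifier")
print(topicifier("Đam mê của tôi là nhiếp ảnh"))
```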
## Other
The model was fine-tuned on a tiny dataset, so don't use it in a production setting. |
lammi/audit_lm | 2021-03-17T00:40:10.000Z | []
| [
".gitattributes"
]
| lammi | 0 | |||
lancelvlu/auto-classification-demo | 2021-06-03T02:37:41.000Z | []
| [
".gitattributes"
]
| lancelvlu | 0 | |||
lannelin/bert-imdb-1hidden | 2021-05-19T21:17:56.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"en",
"dataset:imdb",
"transformers"
]
| text-classification | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| lannelin | 13 | transformers | ---
language:
- en
datasets:
- imdb
metrics:
- accuracy
---
# bert-imdb-1hidden
## Model description
A `bert-base-uncased` model was restricted to 1 hidden layer and
fine-tuned for sequence classification on the
imdb dataset loaded using the `datasets` library.
## Intended uses & limitations
#### How to use
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
pretrained = "lannelin/bert-imdb-1hidden"
tokenizer = AutoTokenizer.from_pretrained(pretrained)
model = AutoModelForSequenceClassification.from_pretrained(pretrained)
LABELS = ["negative", "positive"]
def get_sentiment(text: str):
inputs = tokenizer.encode_plus(text, return_tensors='pt')
output = model(**inputs)[0].squeeze()
return LABELS[(output.argmax())]
print(get_sentiment("What a terrible film!"))
```
#### Limitations and bias
No special consideration given to limitations and bias.
Any bias held by the imdb dataset may be reflected in the model's output.
## Training data
Initialised with [bert-base-uncased](https://huggingface.co/bert-base-uncased)
Fine tuned on [imdb](https://huggingface.co/datasets/imdb)
## Training procedure
The model was fine-tuned for 1 epoch with a batch size of 64,
a learning rate of 5e-5, and a maximum sequence length of 512.
## Eval results
Accuracy on imdb test set: 0.87132 |
lanwuwei/BERTOverflow_stackoverflow_github | 2021-05-19T00:15:32.000Z | [
"pytorch",
"jax",
"bert",
"transformers"
]
| [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| lanwuwei | 106 | transformers |
# BERTOverflow
## Model description
We pre-trained a BERT-base model on 152 million sentences from StackOverflow's 10-year archive. More details of this model can be found in our ACL 2020 paper: [Code and Named Entity Recognition in StackOverflow](https://www.aclweb.org/anthology/2020.acl-main.443/).
#### How to use
```python
from transformers import *
import torch
tokenizer = AutoTokenizer.from_pretrained("lanwuwei/BERTOverflow_stackoverflow_github")
model = AutoModelForTokenClassification.from_pretrained("lanwuwei/BERTOverflow_stackoverflow_github")
```
### BibTeX entry and citation info
```bibtex
@inproceedings{tabassum2020code,
title={Code and Named Entity Recognition in StackOverflow},
author={Tabassum, Jeniya and Maddela, Mounica and Xu, Wei and Ritter, Alan},
booktitle={Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL)},
url={https://www.aclweb.org/anthology/2020.acl-main.443/},
year={2020}
}
```
|
|
lanwuwei/GigaBERT-v3-Arabic-and-English | 2021-05-19T00:17:42.000Z | [
"pytorch",
"jax",
"bert",
"en",
"ar",
"dataset:gigaword",
"dataset:oscar",
"dataset:wikipedia",
"transformers"
]
| [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| lanwuwei | 3,161 | transformers | ---
language:
- en
- ar
datasets:
- gigaword
- oscar
- wikipedia
---
## GigaBERT-v3
GigaBERT-v3 is a customized bilingual BERT for English and Arabic. It was pre-trained in a large-scale corpus (Gigaword+Oscar+Wikipedia) with ~10B tokens, showing state-of-the-art zero-shot transfer performance from English to Arabic on information extraction (IE) tasks. More details can be found in the following paper:
@inproceedings{lan2020gigabert,
author = {Lan, Wuwei and Chen, Yang and Xu, Wei and Ritter, Alan},
title = {An Empirical Study of Pre-trained Transformers for Arabic Information Extraction},
booktitle = {Proceedings of The 2020 Conference on Empirical Methods on Natural Language Processing (EMNLP)},
year = {2020}
}
## Usage
```
from transformers import *
tokenizer = BertTokenizer.from_pretrained("lanwuwei/GigaBERT-v3-Arabic-and-English", do_lower_case=True)
model = BertForTokenClassification.from_pretrained("lanwuwei/GigaBERT-v3-Arabic-and-English")
```
More code examples can be found [here](https://github.com/lanwuwei/GigaBERT).
|
|
lanwuwei/GigaBERT-v4-Arabic-and-English | 2021-05-19T21:19:13.000Z | [
"pytorch",
"jax",
"bert",
"transformers"
]
| [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| lanwuwei | 236 | transformers | ## GigaBERT-v4
GigaBERT-v4 is a continued pre-training of [GigaBERT-v3](https://huggingface.co/lanwuwei/GigaBERT-v3-Arabic-and-English) on code-switched data, showing improved zero-shot transfer performance from English to Arabic on information extraction (IE) tasks. More details can be found in the following paper:
@inproceedings{lan2020gigabert,
author = {Lan, Wuwei and Chen, Yang and Xu, Wei and Ritter, Alan},
title = {GigaBERT: Zero-shot Transfer Learning from English to Arabic},
booktitle = {Proceedings of The 2020 Conference on Empirical Methods on Natural Language Processing (EMNLP)},
year = {2020}
}
## Download
```
from transformers import *
tokenizer = BertTokenizer.from_pretrained("lanwuwei/GigaBERT-v4-Arabic-and-English", do_lower_case=True)
model = BertForTokenClassification.from_pretrained("lanwuwei/GigaBERT-v4-Arabic-and-English")
```
Here is a downloadable link: [GigaBERT-v4](https://drive.google.com/drive/u/1/folders/1uFGzMuTOD7iNsmKQYp_zVuvsJwOaIdar).
|
|
laqwerta/test_hf | 2021-03-12T10:24:31.000Z | []
| [
".gitattributes"
]
| laqwerta | 0 | |||
larskjeldgaard/senda | 2021-05-19T21:20:48.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"da",
"transformers",
"danish",
"sentiment",
"polarity",
"license:cc-by-4.0"
]
| text-classification | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| larskjeldgaard | 15 | transformers | ---
language: da
tags:
- danish
- bert
- sentiment
- polarity
license: cc-by-4.0
widget:
- text: "Sikke en dejlig dag det er i dag"
---
# Danish BERT fine-tuned for Sentiment Analysis (Polarity)
This model detects the polarity ('positive', 'neutral', 'negative') of Danish texts.
It is trained and tested on tweets annotated by [Alexandra Institute](https://github.com/alexandrainst).
Here is an example on how to load the model in PyTorch using the [๐คTransformers](https://github.com/huggingface/transformers) library:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
tokenizer = AutoTokenizer.from_pretrained("larskjeldgaard/senda")
model = AutoModelForSequenceClassification.from_pretrained("larskjeldgaard/senda")
# create 'senda' sentiment analysis pipeline
senda_pipeline = pipeline('sentiment-analysis', model=model, tokenizer=tokenizer)
senda_pipeline("Sikke en dejlig dag det er i dag")
```
|
laugustyniak/roberta-polish-web-embedding-v1 | 2021-05-20T17:37:19.000Z | [
"pytorch",
"jax",
"roberta",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"training_args.bin"
]
| laugustyniak | 28 | transformers | |
laxya007/gpt2_BE_ISI_NE_BI_INR | 2021-05-23T06:42:28.000Z | [
"pytorch",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"vocab.json"
]
| laxya007 | 9 | transformers | |
laxya007/gpt2_BSA_Leg_ipr_OE | 2021-06-10T16:10:11.000Z | [
"pytorch",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"config.json",
"merges.txt",
"pytorch_model.bin",
"vocab.json"
]
| laxya007 | 22 | transformers | |
laxya007/gpt2_BSA_Leg_ipr_OE_OS | 2021-06-18T08:40:06.000Z | [
"pytorch",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"config.json",
"merges.txt",
"pytorch_model.bin",
"vocab.json"
]
| laxya007 | 0 | transformers | |
laxya007/gpt2_TS_DM_AS_CC_TM | 2021-05-23T07:14:50.000Z | [
"pytorch",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"vocab.json"
]
| laxya007 | 13 | transformers | |
laxya007/gpt2_manage | 2021-05-23T07:42:33.000Z | [
"pytorch",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"vocab.json"
]
| laxya007 | 12 | transformers | |
laxya007/gpt2_tech | 2021-05-23T08:18:57.000Z | [
"pytorch",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"vocab.json"
]
| laxya007 | 53 | transformers | |
laxya007/gpt2_till10 | 2021-05-23T08:21:38.000Z | [
"pytorch",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"vocab.json"
]
| laxya007 | 16 | transformers | |
leduytan93/Fine-Tune-XLSR-Wav2Vec2-Speech2Text-Vietnamese | 2021-05-30T07:25:06.000Z | [
"pytorch",
"wav2vec2",
"vi",
"transformers",
"language-modeling",
"audio",
"automatic-speech-recognition",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0"
]
| automatic-speech-recognition | [
".gitattributes",
"README copy.md",
"README.md",
"added_tokens.json",
"config.json",
"optimizer.pt",
"preprocessor_config.json",
"pytorch_model.bin",
"scheduler.pt",
"special_tokens_map.json",
"tokenizer_config.json",
"trainer_state.json",
"training_args.bin",
"vocab.json"
]
| leduytan93 | 415 | transformers | ---
language: vi
datasets:
- common_voice
- FOSD: https://data.mendeley.com/datasets/k9sxg2twv4/4
metrics:
- wer
tags:
- language-modeling
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: MT5 Fix Asr Vietnamese by Ontocord
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice vi
type: common_voice
args: vi
metrics:
- name: Test WER
type: wer
value: 25.207182
---
|
lee1jun/wav2vec2-base-100h-finetuned | 2021-03-08T09:00:58.000Z | [
"pytorch",
"wav2vec2",
"transformers"
]
| [
".gitattributes",
"config.json",
"optimizer.pt",
"preprocessor_config.json",
"pytorch_model.bin",
"scheduler.pt",
"trainer_state.json",
"training_args.bin"
]
| lee1jun | 6 | transformers | ||
leemeng/core-term-ner-v1 | 2021-05-19T21:21:42.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"token-classification",
"transformers"
]
| token-classification | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
]
| leemeng | 15 | transformers | |
leemii18/robustqa-baseline-02 | 2021-05-05T17:47:41.000Z | [
"pytorch",
"distilbert",
"question-answering",
"transformers"
]
| question-answering | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| leemii18 | 11 | transformers | |
lemon234071/ct5-base | 2020-12-16T09:27:36.000Z | [
"pytorch",
"mt5",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json",
"training_args.bin"
]
| lemon234071 | 9 | transformers | |
lemon234071/ct5-small | 2020-12-16T08:54:54.000Z | [
"pytorch",
"mt5",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json",
"training_args.bin"
]
| lemon234071 | 8 | transformers | |
lenvl01/pegasus-reddit_tifu | 2021-01-20T12:59:06.000Z | []
| [
".gitattributes"
]
| lenvl01 | 0 | |||
leonweber/PEDL | 2021-06-16T09:19:35.000Z | [
"pytorch",
"bert",
"transformers"
]
| [
".gitattributes",
"config.json",
"pytorch_model.bin",
"training_args.bin",
"vocab.txt"
]
| leonweber | 17 | transformers | ||
leslie/bert_cn_finetuning | 2021-05-19T21:21:59.000Z | [
"bert",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"config.json",
"eval_results.txt",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin"
]
| leslie | 13 | transformers | |
leslie/bert_finetuning_test | 2021-05-19T21:22:40.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"config.json",
"eval_results.txt",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| leslie | 22 | transformers | |
lesniewski/test | 2021-04-16T17:43:04.000Z | []
| [
".gitattributes"
]
| lesniewski | 0 | |||
lewish/MyModel | 2021-05-29T21:38:59.000Z | []
| [
".gitattributes"
]
| lewish | 0 | |||
lewtun/bert-base-uncased-finetuned-boolq | 2021-05-19T21:23:35.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| lewtun | 35 | transformers | |
lewtun/bert-base-uncased-finetuned-clinc | 2021-05-19T21:24:32.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| lewtun | 12 | transformers | |
lewtun/bert-base-uncased-finetuned-squad-v1 | 2021-05-19T21:25:25.000Z | [
"pytorch",
"jax",
"bert",
"question-answering",
"transformers"
]
| question-answering | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| lewtun | 31 | transformers | |
lewtun/bert-large-uncased-wwm-finetuned-boolq | 2021-05-19T21:27:34.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| lewtun | 23 | transformers | |
lewtun/distilbert-base-uncased-distilled-squad-v1 | 2021-01-29T12:43:49.000Z | [
"pytorch",
"distilbert",
"question-answering",
"en",
"dataset:squad",
"arxiv:1910.01108",
"transformers",
"license:apache-2.0"
]
| question-answering | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| lewtun | 36 | transformers | ---
language:
- en
thumbnail: https://github.com/karanchahal/distiller/blob/master/distiller.jpg
tags:
- question-answering
license: apache-2.0
datasets:
- squad
metrics:
- squad
---
# DistilBERT with a second step of distillation
## Model description
This model replicates the "DistilBERT (D)" model from Table 2 of the [DistilBERT paper](https://arxiv.org/pdf/1910.01108.pdf). In this approach, a DistilBERT student is fine-tuned on SQuAD v1.1, but with a BERT model (also fine-tuned on SQuAD v1.1) acting as a teacher for a second step of task-specific distillation.
In this version, the following pre-trained models were used:
* Student: `distilbert-base-uncased`
* Teacher: `lewtun/bert-base-uncased-finetuned-squad-v1`
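The final student can be used for extractive QA with the standard pipeline; a quick sketch (not part of the original training code):

```python
from transformers import pipeline

qa = pipeline("question-answering",
              model="lewtun/distilbert-base-uncased-distilled-squad-v1")
print(qa(question="What is the capital of France?",
         context="Paris is the capital and most populous city of France."))
```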
## Training data
This model was trained on the SQuAD v1.1 dataset which can be obtained from the `datasets` library as follows:
```python
from datasets import load_dataset
squad = load_dataset('squad')
```
## Training procedure
## Eval results
| | Exact Match | F1 |
|------------------|-------------|------|
| DistilBERT paper | 79.1 | 86.9 |
| Ours | 78.4 | 86.5 |
The scores were calculated using the `squad` metric from `datasets`.
### BibTeX entry and citation info
```bibtex
@misc{sanh2020distilbert,
title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter},
author={Victor Sanh and Lysandre Debut and Julien Chaumond and Thomas Wolf},
year={2020},
eprint={1910.01108},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
lewtun/distilbert-base-uncased-finetuned-squad-v1 | 2021-01-31T11:55:20.000Z | [
"pytorch",
"distilbert",
"question-answering",
"transformers"
]
| question-answering | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| lewtun | 12 | transformers | |
lg/fexp_1 | 2021-05-20T23:37:11.000Z | [
"pytorch",
"gpt_neo",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"README.md",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| lg | 22 | transformers | # This model is probably not what you're looking for. |
lg/fexp_2 | 2021-05-01T17:56:11.000Z | [
"pytorch",
"gpt_neo",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| lg | 13 | transformers | |
lg/fexp_3 | 2021-05-01T06:03:40.000Z | [
"pytorch",
"gpt_neo",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| lg | 14 | transformers | |
lg/fexp_4 | 2021-05-01T17:25:46.000Z | [
"pytorch",
"gpt_neo",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| lg | 12 | transformers | |
lg/fexp_5 | 2021-05-01T23:26:00.000Z | [
"pytorch",
"gpt_neo",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| lg | 11 | transformers | |
lg/fexp_7 | 2021-05-03T05:27:39.000Z | [
"pytorch",
"gpt_neo",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| lg | 9 | transformers |