modelId | lastModified | tags | pipeline_tag | files | publishedBy | downloads_last_month | library | modelCard
---|---|---|---|---|---|---|---|---|
thunlp/neuba-roberta | 2021-06-11T06:55:10.000Z | [
"pytorch",
"roberta",
"masked-lm",
"transformers",
"fill-mask"
] | fill-mask | [
".gitattributes",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
] | thunlp | 0 | transformers | |
thusken/mt5-small-nl-summarization-wiki-lingua | 2021-06-09T07:06:10.000Z | [
"pytorch",
"mt5",
"seq2seq",
"transformers",
"text2text-generation"
] | text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer.json",
"tokenizer_config.json"
] | thusken | 6 | transformers | Update: the model is not working in its current state; I am trying to find out what is causing this.
This is a fine-tuned version of Google's mt5-small model: https://huggingface.co/google/mt5-small. The model was fine-tuned for summarization.
Fine-tuning was done for 1 epoch on the WikiLingua dataset: https://huggingface.co/datasets/wiki_lingua.
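Once the issue above is resolved, a minimal usage sketch (not part of the original card; it assumes the standard `transformers` seq2seq API, and the generation settings are placeholders):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "thusken/mt5-small-nl-summarization-wiki-lingua"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Summarize a Dutch article (truncated to the model's maximum input length).
article = "..."  # placeholder for Dutch input text
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, max_length=84, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```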
|
tianxing1994/test | 2020-12-06T10:18:16.000Z | [] | [
".gitattributes"
] | tianxing1994 | 0 | |||
tiedeman/opus-mt-en-he | 2021-03-04T17:50:20.000Z | [
"pytorch",
"rust",
"marian",
"seq2seq",
"en",
"he",
"transformers",
"translation",
"license:apache-2.0",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"metadata.json",
"pytorch_model.bin",
"rust_model.ot",
"source.spm",
"special_tokens_map.json",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | tiedeman | 44 | transformers | ---
language:
- en
- he
tags:
- translation
license: apache-2.0
---
### en-he
* source group: English
* target group: Hebrew
* OPUS readme: [eng-heb](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-heb/README.md)
* model: transformer
* source language(s): eng
* target language(s): heb
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-10-04.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-heb/opus-2020-10-04.zip)
* test set translations: [opus-2020-10-04.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-heb/opus-2020-10-04.test.txt)
* test set scores: [opus-2020-10-04.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-heb/opus-2020-10-04.eval.txt)
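A minimal translation sketch (not from the original card; it assumes the standard MarianMT classes in `transformers`):
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "tiedeman/opus-mt-en-he"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Translate a small batch of English sentences into Hebrew.
src_text = ["Thank you very much."]
batch = tokenizer(src_text, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print([tokenizer.decode(t, skip_special_tokens=True) for t in generated])
```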
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng.heb | 37.9 | 0.602 |
### System Info:
- hf_name: en-he
- source_languages: eng
- target_languages: heb
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-heb/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'he']
- src_constituents: ('English', {'eng'})
- tgt_constituents: ('Hebrew', {'heb'})
- src_multilingual: False
- tgt_multilingual: False
- long_pair: eng-heb
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-heb/opus-2020-10-04.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-heb/opus-2020-10-04.test.txt
- src_alpha3: eng
- tgt_alpha3: heb
- chrF2_score: 0.602
- bleu: 37.9
- brevity_penalty: 1.0
- ref_len: 60359.0
- src_name: English
- tgt_name: Hebrew
- train_date: 2020-10-04 00:00:00
- src_alpha2: en
- tgt_alpha2: he
- prefer_old: False
- short_pair: en-he
- helsinki_git_sha: 61fd6908b37d9a7b21cc3e27c1ae1fccedc97561
- transformers_git_sha: d99ed7ad618037ae878f0758157ed0764bd7f935
- port_machine: LM0-400-22516.local
- port_time: 2020-10-15-16:31 |
tiedeman/opus-mt-he-en | 2021-03-04T17:46:12.000Z | [
"pytorch",
"rust",
"marian",
"seq2seq",
"he",
"en",
"transformers",
"translation",
"license:apache-2.0",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"metadata.json",
"pytorch_model.bin",
"rust_model.ot",
"source.spm",
"special_tokens_map.json",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | tiedeman | 1,513 | transformers | ---
language:
- he
- en
tags:
- translation
license: apache-2.0
---
### he-en
* source group: Hebrew
* target group: English
* OPUS readme: [heb-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/heb-eng/README.md)
* model: transformer
* source language(s): heb
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-10-04.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/heb-eng/opus-2020-10-04.zip)
* test set translations: [opus-2020-10-04.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/heb-eng/opus-2020-10-04.test.txt)
* test set scores: [opus-2020-10-04.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/heb-eng/opus-2020-10-04.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.heb.eng | 52.0 | 0.670 |
### System Info:
- hf_name: he-en
- source_languages: heb
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/heb-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['he', 'en']
- src_constituents: ('Hebrew', {'heb'})
- tgt_constituents: ('English', {'eng'})
- src_multilingual: False
- tgt_multilingual: False
- long_pair: heb-eng
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/heb-eng/opus-2020-10-04.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/heb-eng/opus-2020-10-04.test.txt
- src_alpha3: heb
- tgt_alpha3: eng
- chrF2_score: 0.67
- bleu: 52.0
- brevity_penalty: 0.9690000000000001
- ref_len: 73560.0
- src_name: Hebrew
- tgt_name: English
- train_date: 2020-10-04 00:00:00
- src_alpha2: he
- tgt_alpha2: en
- prefer_old: False
- short_pair: he-en
- helsinki_git_sha: 61fd6908b37d9a7b21cc3e27c1ae1fccedc97561
- transformers_git_sha: d99ed7ad618037ae878f0758157ed0764bd7f935
- port_machine: LM0-400-22516.local
- port_time: 2020-10-15-16:31 |
tiedeman/opus-mt-he-es | 2020-12-11T08:24:24.000Z | [] | [
".gitattributes"
] | tiedeman | 0 | |||
tillfurger/twitter-sent | 2021-05-20T07:50:40.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
] | tillfurger | 28 | transformers | |
tillfurger/twitter-sentiment | 2020-12-04T23:53:18.000Z | [] | [
".gitattributes"
] | tillfurger | 0 | |||
timhu/model1 | 2021-02-08T23:27:00.000Z | [] | [
".gitattributes"
] | timhu | 0 | |||
timm/eca_nfnet_l0 | 2021-03-18T18:43:26.000Z | [
"pytorch",
"dataset:imagenet",
"arxiv:2102.06171",
"arxiv:1910.03151",
"arxiv:1903.10520",
"arxiv:1906.02659",
"arxiv:2010.15052",
"arxiv:1909.13719",
"image-classification",
"timm",
"normalization-free",
"efficient-channel-attention",
"license:apache-2.0"
] | image-classification | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin"
] | timm | 17 | timm | ---
tags:
- image-classification
- timm
- normalization-free
- efficient-channel-attention
license: apache-2.0
datasets:
- imagenet
inference: false
---
# ECA-NFNet-L0
This model, pretrained on [ImageNet](http://www.image-net.org/), is a variant of the [NFNet (Normalization Free)](https://arxiv.org/abs/2102.06171) model family.
## Model description
This model variant was slimmed down from the original F0 variant in the paper for improved runtime characteristics (throughput, memory use) in PyTorch, on a GPU accelerator. It utilizes [Efficient Channel Attention (ECA)](https://arxiv.org/abs/1910.03151) instead of Squeeze-Excitation. It also features SiLU activations instead of the usual GELU.
Like other models in the NF family, this model contains no normalization layers (batch, group, etc). The models make use of [Weight Standardized](https://arxiv.org/abs/1903.10520) convolutions with additional scaling values in lieu of normalization layers.
## Intended uses & limitations
You can use the raw model to classify images along the 1,000 ImageNet labels, but you can also change its head
to fine-tune it on a downstream task (another classification task with different labels, image segmentation or
object detection, to name a few).
### How to use
You can use this model with the usual factory method in [`timm`](https://github.com/rwightman/pytorch-image-models):
```python
import PIL
import timm
import torch
model = timm.create_model("hf_hub:timm/eca_nfnet_l0", pretrained=True)
model.eval()  # inference mode
config = model.default_cfg
img_size = config["test_input_size"][-1] if "test_input_size" in config else config["input_size"][-1]
transform = timm.data.transforms_factory.transforms_imagenet_eval(
img_size=img_size,
interpolation=config["interpolation"],
mean=config["mean"],
std=config["std"],
crop_pct=config["crop_pct"],
)
img = PIL.Image.open(path_to_an_image)
img = img.convert("RGB")
input_tensor = transform(img)
input_tensor = input_tensor.unsqueeze(0)
# ^ batch size = 1
with torch.no_grad():
output = model(input_tensor)
probs = output.squeeze(0).softmax(dim=0)
```
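For fine-tuning on a downstream classification task, the head can be replaced at creation time; a minimal sketch (not from the original card, with a hypothetical 10-class task):
```python
import timm

# Load the pretrained backbone weights but attach a fresh, randomly initialized 10-class head.
model = timm.create_model("hf_hub:timm/eca_nfnet_l0", pretrained=True, num_classes=10)
```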
### Limitations and bias
The training images in the dataset are usually photos clearly representing one of the 1,000 labels. The model will
probably not generalize well on drawings or images containing multiple objects with different labels.
The training images in the dataset come mostly from the US (45.4%) and Great Britain (7.6%). As such the model or
models created by fine-tuning this model will work better on images picturing scenes from these countries (see
[this paper](https://arxiv.org/abs/1906.02659) for examples).
More generally, [recent research](https://arxiv.org/abs/2010.15052) has shown that even models trained in an
unsupervised fashion on ImageNet (i.e. without using the labels) will pick up racial and gender bias represented in
the training images.
## Training data
This model was pretrained on [ImageNet](http://www.image-net.org/), a dataset consisting of 14 million hand-annotated images with 1,000 categories.
## Training procedure
For stability during training it is highly recommended to train all NFNet variants with gradient clipping enabled. This model was trained with an Adaptive Gradient Clipping (AGC) factor of 0.015 as described in [the paper](https://arxiv.org/abs/2102.06171). Similar to the paper, a cosine learning rate decay was employed using SGD w/ nesterov. Moderate to heavy augmentation ([RandAugment](https://arxiv.org/abs/1909.13719)) and regularization (dropout, stochastic depth) is recommended for training.
### Preprocessing
The images are resized using bicubic interpolation to 288x288 and normalized with the usual ImageNet statistics.
## Evaluation results
This model has a top1-accuracy of 82.6% and a top-5 accuracy of 96.5% on the ImageNet evaluation set.
### BibTeX entry and citation info
NFNet model architecture:
```bibtex
@article{brock2021high,
author={Andrew Brock and Soham De and Samuel L. Smith and Karen Simonyan},
title={High-Performance Large-Scale Image Recognition Without Normalization},
journal={arXiv preprint arXiv:2102.06171},
year={2021}
}
```
L0 model variant & pretraining:
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/rwightman/pytorch-image-models}}
}
```
|
timm/vit_huge_patch14_224_in21k | 2021-03-18T10:58:13.000Z | [
"pytorch",
"dataset:imagenet_21k",
"image-classification",
"timm",
"vision-transformer",
"license:apache-2.0"
] | image-classification | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin"
] | timm | 13 | timm | ---
tags:
- image-classification
- timm
- vision-transformer
license: apache-2.0
datasets:
- imagenet_21k
inference: false
---
# ViT-H/14 (ImageNet-21k)
...
|
timo/timo-BART-german | 2020-10-28T19:09:26.000Z | [
"pytorch",
"fsmt",
"seq2seq",
"transformers",
"text2text-generation"
] | text2text-generation | [
".gitattributes",
"bpecodes",
"config.json",
"dict.source.txt",
"dict.target.txt",
"dict.txt",
"merges.txt",
"model.pt",
"pytorch_model.bin",
"tokenizer_config.json",
"vocab-src.json",
"vocab-tgt.json"
] | timo | 47 | transformers | |
timpal0l/sbert-swe | 2020-11-26T12:50:41.000Z | [] | [
".gitattributes"
] | timpal0l | 0 | |||
titusbensigar/model_name | 2021-06-11T18:01:07.000Z | [] | [
".gitattributes"
] | titusbensigar | 0 | |||
tjarmain/deberta | 2021-02-22T20:07:56.000Z | [] | [
".gitattributes"
] | tjarmain | 0 | |||
tk3879110/bert_cn_finetuning | 2021-05-20T07:51:31.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"config.json",
"eval_results_sst-2.txt",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
] | tk3879110 | 30 | transformers | |
tk3879110/bert_finetuning_test | 2021-05-20T07:52:28.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"config.json",
"eval_results_mrpc.txt",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
] | tk3879110 | 16 | transformers | |
tkwoo/electra-small-discriminator | 2020-06-04T08:01:53.000Z | [
"pytorch",
"electra",
"pretraining",
"transformers"
] | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"tokenizer_config.json",
"vocab.txt"
] | tkwoo | 32 | transformers | ||
tkwoo/electra-small-generator | 2020-06-04T08:02:16.000Z | [
"pytorch",
"electra",
"masked-lm",
"transformers",
"fill-mask"
] | fill-mask | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"tokenizer_config.json",
"vocab.txt"
] | tkwoo | 29 | transformers | |
tlemberger/sd-ner | 2021-05-20T22:31:05.000Z | [
"pytorch",
"jax",
"roberta",
"token-classification",
"english",
"dataset:EMBO/sd-panels",
"transformers",
"token classification"
] | token-classification | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"training_args.bin"
] | tlemberger | 14 | transformers | ---
language:
- english
thumbnail:
tags:
- token classification
license:
datasets:
- EMBO/sd-panels
metrics:
-
---
# sd-ner
## Model description
This model is a [RoBERTa base model](https://huggingface.co/roberta-base) that was further trained with a masked language modeling task on a compendium of English scientific texts from the life sciences ([BioLang dataset](https://huggingface.co/datasets/EMBO/biolang)), and then fine-tuned for token classification on the SourceData [sd-panels](https://huggingface.co/datasets/EMBO/sd-panels) dataset to perform Named Entity Recognition of bioentities.
## Intended uses & limitations
#### How to use
The intended use of this model is Named Entity Recognition of biological entities used in SourceData annotations (https://sourcedata.embo.org), including small molecules, gene products (genes and proteins), subcellular components, cell lines and cell types, organs and tissues, and species, as well as experimental methods.
To have a quick check of the model:
```python
from transformers import pipeline, RobertaTokenizerFast, RobertaForTokenClassification
example = """<s> F. Western blot of input and eluates of Upf1 domains purification in a Nmd4-HA strain. The band with the # might corresponds to a dimer of Upf1-CH, bands marked with a star correspond to residual signal with the anti-HA antibodies (Nmd4). Fragments in the eluate have a smaller size because the protein A part of the tag was removed by digestion with the TEV protease. G6PDH served as a loading control in the input samples </s>"""
tokenizer = RobertaTokenizerFast.from_pretrained('roberta-base', max_len=512)
model = RobertaForTokenClassification.from_pretrained('EMBO/sd-ner')
ner = pipeline('ner', model, tokenizer=tokenizer)
res = ner(example)
for r in res:
print(r['word'], r['entity'])
```
#### Limitations and bias
The model must be used with the `roberta-base` tokenizer.
## Training data
The model was trained for token classification using the [EMBO/sd-panels dataset](https://huggingface.co/datasets/EMBO/biolang), which includes manually annotated examples.
## Training procedure
The training was run on an NVIDIA DGX Station with 4x Tesla V100 GPUs.
Training code is available at https://github.com/source-data/soda-roberta
- Command: `python -m tokcl.train /data/json/sd_panels NER --num_train_epochs=3.5`
- Tokenizer vocab size: 50265
- Training data: EMBO/biolang MLM
- Training with 31410 examples.
- Evaluating on 8861 examples.
- Training on 15 features: O, I-SMALL_MOLECULE, B-SMALL_MOLECULE, I-GENEPROD, B-GENEPROD, I-SUBCELLULAR, B-SUBCELLULAR, I-CELL, B-CELL, I-TISSUE, B-TISSUE, I-ORGANISM, B-ORGANISM, I-EXP_ASSAY, B-EXP_ASSAY
- Epochs: 3.5
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `learning_rate`: 0.0001
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
## Eval results
On test set with `sklearn.metrics`:
```
precision recall f1-score support
CELL 0.77 0.81 0.79 3477
EXP_ASSAY 0.71 0.70 0.71 7049
GENEPROD 0.86 0.90 0.88 16140
ORGANISM 0.80 0.82 0.81 2759
SMALL_MOLECULE 0.78 0.82 0.80 4446
SUBCELLULAR 0.71 0.75 0.73 2125
TISSUE 0.70 0.75 0.73 1971
micro avg 0.79 0.82 0.81 37967
macro avg 0.76 0.79 0.78 37967
weighted avg 0.79 0.82 0.81 37967
```
|
tli8hf/robertabase-crf-conll2012 | 2021-05-20T22:31:59.000Z | [
"pytorch",
"roberta",
"transformers"
] | [
".gitattributes",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
] | tli8hf | 19 | transformers | ||
tli8hf/robertabase-structured-tuning-srl-conll2012 | 2021-05-20T22:32:29.000Z | [
"pytorch",
"roberta",
"transformers"
] | [
".gitattributes",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
] | tli8hf | 22 | transformers | ||
tli8hf/robertabase_snli | 2020-11-04T05:42:29.000Z | [
"pytorch",
"transformerfornli",
"transformers"
] | [
".gitattributes",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
] | tli8hf | 34 | transformers | ||
tli8hf/unqover-bert-base-uncased-newsqa | 2021-05-20T07:53:24.000Z | [
"pytorch",
"jax",
"bert",
"question-answering",
"transformers"
] | question-answering | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
] | tli8hf | 17 | transformers | |
tli8hf/unqover-bert-base-uncased-squad | 2021-05-20T07:54:17.000Z | [
"pytorch",
"jax",
"bert",
"question-answering",
"transformers"
] | question-answering | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"nbest_predictions_.json",
"predictions_.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
] | tli8hf | 12 | transformers | |
tli8hf/unqover-bert-large-uncased-newsqa | 2021-05-20T07:56:02.000Z | [
"pytorch",
"jax",
"bert",
"question-answering",
"transformers"
] | question-answering | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
] | tli8hf | 15 | transformers | |
tli8hf/unqover-bert-large-uncased-squad | 2021-05-20T07:58:54.000Z | [
"pytorch",
"jax",
"bert",
"transformers"
] | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
] | tli8hf | 10 | transformers | ||
tli8hf/unqover-distilbert-base-uncased-newsqa | 2020-10-19T22:41:55.000Z | [
"pytorch",
"distilbert",
"question-answering",
"transformers"
] | question-answering | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
] | tli8hf | 10 | transformers | |
tli8hf/unqover-distilbert-base-uncased-squad | 2020-10-19T23:39:01.000Z | [
"pytorch",
"distilbert",
"question-answering",
"transformers"
] | question-answering | [
".gitattributes",
"config.json",
"nbest_predictions_.json",
"predictions_.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
] | tli8hf | 10 | transformers | |
tli8hf/unqover-roberta-base-newsqa | 2021-05-20T22:33:16.000Z | [
"pytorch",
"jax",
"roberta",
"question-answering",
"transformers"
] | question-answering | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
] | tli8hf | 9 | transformers | |
tli8hf/unqover-roberta-base-squad | 2021-05-20T22:34:19.000Z | [
"pytorch",
"jax",
"roberta",
"question-answering",
"transformers"
] | question-answering | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
] | tli8hf | 9 | transformers | |
tli8hf/unqover-roberta-large-newsqa | 2021-05-20T22:36:39.000Z | [
"pytorch",
"jax",
"roberta",
"question-answering",
"transformers"
] | question-answering | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
] | tli8hf | 146 | transformers | |
tli8hf/unqover-roberta-large-squad | 2021-05-20T22:39:19.000Z | [
"pytorch",
"jax",
"roberta",
"question-answering",
"transformers"
] | question-answering | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
] | tli8hf | 8 | transformers | |
tlkh/distilroberta-base-ag_news | 2020-12-22T09:27:57.000Z | [] | [
".gitattributes"
] | tlkh | 0 | |||
tlkh/distilroberta-base-glue-cola | 2020-12-22T11:05:24.000Z | [] | [
".gitattributes"
] | tlkh | 0 | |||
tlkh/distilroberta-base-yelp_polarity | 2020-12-22T09:41:16.000Z | [] | [
".gitattributes"
] | tlkh | 0 | |||
tmck/test | 2021-03-21T10:34:29.000Z | [] | [
".gitattributes"
] | tmck | 0 | |||
tmhtc/test | 2021-04-24T18:27:34.000Z | [] | [
".gitattributes"
] | tmhtc | 0 | |||
tmills/clinical_tempeval | 2021-05-20T22:40:33.000Z | [
"pytorch",
"roberta",
"transformers"
] | [
".gitattributes",
"added_tokens.json",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
] | tmills | 11 | transformers | ||
tmills/roberta_sfda_sharpseed | 2021-05-20T22:41:21.000Z | [
"pytorch",
"jax",
"roberta",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"added_tokens.json",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
] | tmills | 27 | transformers | |
toast22a/race_natural_number_oqpl_mc | 2021-05-23T13:11:24.000Z | [
"pytorch",
"tf",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
] | text-generation | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.json"
] | toast22a | 18 | transformers | |
toast22a/squad_natural_question_oqpl | 2021-05-23T13:12:28.000Z | [
"pytorch",
"tf",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
] | text-generation | [
".gitattributes",
"added_tokens.json",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.json"
] | toast22a | 38 | transformers | |
toastynews/electra-hongkongese-base-discriminator | 2020-07-07T17:55:51.000Z | [
"pytorch",
"tf",
"electra",
"pretraining",
"yue",
"transformers",
"license:apache-2.0"
] | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"tf_model.h5",
"vocab.txt"
] | toastynews | 17 | transformers | ---
language: yue
license: apache-2.0
metrics:
- DRCD
- openrice-senti
- lihkg-cat
- wordshk-sem
---
# ELECTRA Hongkongese Base
## Model description
ELECTRA trained exclusively with data from Hong Kong. A significant amount of Hongkongese/Cantonese/Yue is included in the training data.
## Intended uses & limitations
This model is an alternative to Chinese models. It may offer better performance for tasks catering to the language usage of Hong Kongers. Yue Wikipedia, which is much smaller than Chinese Wikipedia, is used, so this model will lack the breadth of knowledge of other Chinese models.
#### How to use
This is the base model trained from the official repo. Further finetuning will be needed for use on downstream tasks. Other model sizes are also available.
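For example, the discriminator weights can be loaded through `transformers` for fine-tuning on a classification task (a sketch, not from the original card; it assumes the repo's `vocab.txt` is picked up by the ELECTRA tokenizer class):
```python
from transformers import ElectraTokenizer, ElectraForSequenceClassification

model_name = "toastynews/electra-hongkongese-base-discriminator"
tokenizer = ElectraTokenizer.from_pretrained(model_name)
# The classification head is randomly initialized; fine-tune on labelled data before use.
model = ElectraForSequenceClassification.from_pretrained(model_name, num_labels=2)

inputs = tokenizer("今日天氣真係好好", return_tensors="pt")
outputs = model(**inputs)
print(outputs.logits.shape)  # torch.Size([1, 2]) – untrained logits until fine-tuned
```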
#### Limitations and bias
The training data consists of mostly news articles and blogs. There is probably a bias towards formal language usage.
## Training data
The following is the list of data sources; the total is about 507M characters.
| Data | % |
| ------------------------------------------------- | --: |
| News Articles / Blogs | 58% |
| Yue Wikipedia / EVCHK | 18% |
| Restaurant Reviews | 12% |
| Forum Threads | 12% |
| Online Fiction | 1% |
The following is the distribution of different languages within the corpus.
| Language | % |
| ------------------------------------------------- | --: |
| Standard Chinese | 62% |
| Hongkongese | 30% |
| English | 8% |
## Training procedure
Model was trained on a single TPUv3 from the official repo with the default parameters.
| Parameter | Value |
| ------------------------------------------------ | ----: |
| Batch Size | 256 |
| Max Sequence Size | 512 |
| Vocab Size | 30000 |
*Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC)*
## Eval results
Average evaluation task results over 10 runs. Comparison using the original repo model and code. Chinese models are available from [Joint Laboratory of HIT and iFLYTEK Research (HFL)](https://huggingface.co/hfl)
| Model | DRCD (EM/F1) | openrice-senti | lihkg-cat | wordshk-sem |
|:-----------:|:------------:|:--------------:|:---------:|:-----------:|
| Chinese | 86.6 / 91.7 | 79.1 | 67.4 | 88.1 |
| Hongkongese | 83.0 / 89.6 | 81.5 | 70.0 | 90.1 |
|
|
toastynews/electra-hongkongese-base-generator | 2020-07-07T04:20:58.000Z | [
"pytorch",
"tf",
"electra",
"masked-lm",
"transformers",
"fill-mask"
] | fill-mask | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"tf_model.h5",
"vocab.txt"
] | toastynews | 15 | transformers | |
toastynews/electra-hongkongese-large-discriminator | 2020-07-07T17:56:12.000Z | [
"pytorch",
"tf",
"electra",
"pretraining",
"yue",
"transformers",
"license:apache-2.0"
] | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"tf_model.h5",
"vocab.txt"
] | toastynews | 15 | transformers | ---
language: yue
license: apache-2.0
metrics:
- DRCD
- openrice-senti
- lihkg-cat
- wordshk-sem
---
# ELECTRA Hongkongese Large
## Model description
ELECTRA trained exclusively with data from Hong Kong. A significant amount of Hongkongese/Cantonese/Yue is included in the training data.
## Intended uses & limitations
This model is an alternative to Chinese models. It may offer better performance for tasks catering to the language usage of Hong Kongers. Yue Wikipedia, which is much smaller than Chinese Wikipedia, is used, so this model will lack the breadth of knowledge of other Chinese models.
#### How to use
This is the large model trained from the official repo. Further finetuning will be needed for use on downstream tasks. Other model sizes are also available.
#### Limitations and bias
The training data consists of mostly news articles and blogs. There is probably a bias towards formal language usage.
## Training data
The following is the list of data sources; the total is about 507M characters.
| Data | % |
| ------------------------------------------------- | --: |
| News Articles / Blogs | 58% |
| Yue Wikipedia / EVCHK | 18% |
| Restaurant Reviews | 12% |
| Forum Threads | 12% |
| Online Fiction | 1% |
The following is the distribution of different languages within the corpus.
| Language | % |
| ------------------------------------------------- | --: |
| Standard Chinese | 62% |
| Hongkongese | 30% |
| English | 8% |
## Training procedure
Model was trained on a single TPUv3 from the official repo with the default parameters.
| Parameter | Value |
| ------------------------------------------------ | ----: |
| Batch Size | 96 |
| Max Sequence Size | 512 |
| Mask Prob | 0.25 |
| Learning Rate | 2e-4 |
| Vocab Size | 30000 |
*Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC)*
## Eval results
Average evaluation task results over 10 runs. Comparison using the original repo model and code. Chinese models are available from [Joint Laboratory of HIT and iFLYTEK Research (HFL)](https://huggingface.co/hfl)
| Model | DRCD (EM/F1) | openrice-senti | lihkg-cat | wordshk-sem |
|:-----------:|:------------:|:--------------:|:---------:|:-----------:|
| Chinese | 88.8 / 93.6 | 79.8 | 70.4 | 90.4 |
| Hongkongese | 84.7 / 90.9 | 79.7 | 69.9 | 91.5 |
|
|
toastynews/electra-hongkongese-large-generator | 2020-07-07T04:45:30.000Z | [
"pytorch",
"tf",
"electra",
"masked-lm",
"transformers",
"fill-mask"
] | fill-mask | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"tf_model.h5",
"vocab.txt"
] | toastynews | 12 | transformers | |
toastynews/electra-hongkongese-small-discriminator | 2020-07-07T17:55:30.000Z | [
"pytorch",
"tf",
"electra",
"pretraining",
"yue",
"transformers",
"license:apache-2.0"
] | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"tf_model.h5",
"vocab.txt"
] | toastynews | 18 | transformers | ---
language: yue
license: apache-2.0
metrics:
- DRCD
- openrice-senti
- lihkg-cat
- wordshk-sem
---
# ELECTRA Hongkongese Small
## Model description
ELECTRA trained exclusively with data from Hong Kong. A significant amount of Hongkongese/Cantonese/Yue is included in the training data.
## Intended uses & limitations
This model is an alternative to Chinese models. It may offer better performance for tasks catering to the language usage of Hong Kongers. Yue Wikipedia, which is much smaller than Chinese Wikipedia, is used, so this model will lack the breadth of knowledge of other Chinese models.
#### How to use
This is the small model trained from the official repo. Further finetuning will be needed for use on downstream tasks. Other model sizes are also available.
#### Limitations and bias
The training data consists of mostly news articles and blogs. There is probably a bias towards formal language usage.
## Training data
The following is the list of data sources; the total is about 507M characters.
| Data | % |
| ------------------------------------------------- | --: |
| News Articles / Blogs | 58% |
| Yue Wikipedia / EVCHK | 18% |
| Restaurant Reviews | 12% |
| Forum Threads | 12% |
| Online Fiction | 1% |
The following is the distribution of different languages within the corpus.
| Language | % |
| ------------------------------------------------- | --: |
| Standard Chinese | 62% |
| Hongkongese | 30% |
| English | 8% |
## Training procedure
Model was trained on a single TPUv3 from the official repo with the default parameters.
| Parameter | Value |
| ------------------------------------------------ | ----: |
| Batch Size | 384 |
| Max Sequence Size | 512 |
| Generator Hidden Size | 1.0 |
| Vocab Size | 30000 |
*Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC)*
## Eval results
Average evaluation task results over 10 runs. Comparison using the original repo model and code. Chinese models are available from [Joint Laboratory of HIT and iFLYTEK Research (HFL)](https://huggingface.co/hfl)
| Model | DRCD (EM/F1) | openrice-senti | lihkg-cat | wordshk-sem |
|:-----------:|:------------:|:--------------:|:---------:|:-----------:|
| Chinese | 78.5 / 85.6 | 77.9 | 63.7 | 79.2 |
| Hongkongese | 76.7 / 84.4 | 79.0 | 62.6 | 80.0 |
|
|
toastynews/electra-hongkongese-small-generator | 2020-07-07T04:13:10.000Z | [
"pytorch",
"tf",
"electra",
"masked-lm",
"transformers",
"fill-mask"
] | fill-mask | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"tf_model.h5",
"vocab.txt"
] | toastynews | 15 | transformers | |
toastynews/xlnet-hongkongese-base | 2020-07-07T17:52:07.000Z | [
"pytorch",
"tf",
"xlnet",
"lm-head",
"causal-lm",
"yue",
"transformers",
"license:apache-2.0",
"text-generation"
] | text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model",
"tf_model.h5"
] | toastynews | 27 | transformers | ---
language: yue
license: apache-2.0
metrics:
- DRCD
- openrice-senti
- lihkg-cat
- wordshk-sem
---
# XLNet Hongkongese Base
## Model description
XLNet trained exclusively with data from Hong Kong. A significant amount of Hongkongese/Cantonese/Yue is included in the training data.
## Intended uses & limitations
This model is an alternative to Chinese models. It may offer better performance for tasks catering to the language usage of Hong Kongers. Yue Wikipedia, which is much smaller than Chinese Wikipedia, is used, so this model will lack the breadth of knowledge of other Chinese models.
#### How to use
This is the base model trained from the official repo. Further finetuning will be needed for use on downstream tasks. It can also be used to generate text.
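A text-generation sketch (not from the original card; it assumes the standard XLNet causal-LM classes in `transformers` and an arbitrary Cantonese prompt):
```python
from transformers import XLNetTokenizer, XLNetLMHeadModel, pipeline

model_name = "toastynews/xlnet-hongkongese-base"
tokenizer = XLNetTokenizer.from_pretrained(model_name)
model = XLNetLMHeadModel.from_pretrained(model_name)

generator = pipeline("text-generation", model=model, tokenizer=tokenizer)
# As noted below, a longer prompt (context) generally yields better continuations with XLNet.
print(generator("香港今日天氣幾好,", max_length=60, do_sample=True)[0]["generated_text"])
```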
#### Limitations and bias
The training data consists of mostly news articles and blogs. There is probably a bias towards formal language usage.
For text generation, as with other XLNet models, a longer context will help generate better text. The overall result is not as good as GPT-2.
## Training data
The following is the list of data sources; the total is about 507M characters.
| Data | % |
| ------------------------------------------------- | --: |
| News Articles / Blogs | 58% |
| Yue Wikipedia / EVCHK | 18% |
| Restaurant Reviews | 12% |
| Forum Threads | 12% |
| Online Fiction | 1% |
The following is the distribution of different languages within the corpus.
| Language | % |
| ------------------------------------------------- | --: |
| Standard Chinese | 62% |
| Hongkongese | 30% |
| English | 8% |
## Training procedure
Model was trained on a single TPUv3 from the official repo with the default parameters.
| Parameter | Value |
| ------------------------------------------------ | ----: |
| Batch Size | 32 |
| Max Sequence Size | 512 |
| Vocab Size | 32000 |
*Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC)*
## Eval results
Average evaluation task results over 10 runs. Comparison using the original repo model and code. Chinese models are available from [Joint Laboratory of HIT and iFLYTEK Research (HFL)](https://huggingface.co/hfl)
| Model | DRCD (EM/F1) | openrice-senti | lihkg-cat | wordshk-sem |
|:-----------:|:------------:|:--------------:|:---------:|:-----------:|
| Chinese | 82.8 / 91.8 | 79.8 | 70.7 | 72.0 / 78.9*|
| Hongkongese | 76.1 / 76.1 | 81.4 | 69.5 | 66.7 / 87.3*|
\* With the default of 3 epochs, 6 of 10 Chinese fine-tuned models have an accuracy of 66.7 (the always-negative baseline). All Hongkongese fine-tuned models have an accuracy of 66.7. The \* values are the accuracy after 24 epochs. |
tobiaslee/bert-6L-768H | 2021-05-20T08:00:41.000Z | [
"pytorch",
"jax",
"bert",
"transformers"
] | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"vocab.txt"
] | tobiaslee | 8 | transformers | ||
tomato/electra-Question-answer | 2021-06-03T18:52:15.000Z | [
"pytorch",
"electra",
"question-answering",
"transformers"
] | question-answering | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json",
"vocab.txt"
] | tomato | 41 | transformers | |
tomato/sentiment_analysis | 2021-06-03T18:55:58.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json",
"vocab.txt"
] | tomato | 75 | transformers | |
tommy19970714/translation-japanese | 2021-04-28T03:59:58.000Z | [
"pytorch",
"marian",
"seq2seq",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | tommy19970714 | 280 | transformers | ---
tags:
- translation
---
### japanese translation
* source languages: ja
* target languages: en
* model: transformer-align
* pre-processing: normalization + SentencePiece
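A minimal sketch for running the model through the `transformers` translation pipeline (not part of the original card; it assumes the standard MarianMT classes):
```python
from transformers import MarianMTModel, MarianTokenizer, pipeline

model_name = "tommy19970714/translation-japanese"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Translate Japanese into English.
translator = pipeline("translation", model=model, tokenizer=tokenizer)
print(translator("猫はかわいいです。")[0]["translation_text"])
```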
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.ja.en | 41.7 | 0.589 |
|
tommy19970714/wav2vec2-base-960h | 2021-02-26T09:35:06.000Z | [
"pytorch",
"wav2vec2",
"en",
"dataset:librispeech_asr",
"arxiv:2006.11477",
"transformers",
"audio",
"automatic-speech-recognition",
"license:apache-2.0"
] | automatic-speech-recognition | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
] | tommy19970714 | 18 | transformers | ---
language: en
datasets:
- librispeech_asr
tags:
- audio
- automatic-speech-recognition
license: apache-2.0
widget:
- label: Librispeech sample 1
src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- label: Librispeech sample 2
src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
---
# Wav2Vec2-Base-960h
This repository is a reimplementation of [Facebook's official wav2vec 2.0 model](https://huggingface.co/facebook/wav2vec2-base-960h).
There is no official description of how to convert the wav2vec [pretrained model](https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20) into a pytorch_model.bin file,
so we rebuilt pytorch_model.bin from the pretrained model ourselves.
Here is the conversion method.
```bash
pip install transformers[sentencepiece]
pip install fairseq -U
git clone https://github.com/huggingface/transformers.git
cp transformers/src/transformers/models/wav2vec2/convert_wav2vec2_original_pytorch_checkpoint_to_pytorch.py .
wget https://dl.fbaipublicfiles.com/fairseq/wav2vec/wav2vec_small_960h.pt -O ./wav2vec_small_960h.pt
mkdir dict
wget https://dl.fbaipublicfiles.com/fairseq/wav2vec/dict.ltr.txt
mkdir outputs
python convert_wav2vec2_original_pytorch_checkpoint_to_pytorch.py --pytorch_dump_folder_path ./outputs --checkpoint_path ./wav2vec_small_960h.pt --dict_path ./dict
```
# Usage
To transcribe audio files the model can be used as a standalone acoustic model as follows:
```python
from transformers import Wav2Vec2Tokenizer, Wav2Vec2ForCTC
from datasets import load_dataset
import soundfile as sf
import torch
# load model and tokenizer
tokenizer = Wav2Vec2Tokenizer.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")
# define function to read in sound file
def map_to_array(batch):
speech, _ = sf.read(batch["file"])
batch["speech"] = speech
return batch
# load dummy dataset and read soundfiles
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
ds = ds.map(map_to_array)
# tokenize
input_values = tokenizer(ds["speech"][:2], return_tensors="pt", padding="longest").input_values  # Batch size 2 (first two samples)
# retrieve logits
logits = model(input_values).logits
# take argmax and decode
predicted_ids = torch.argmax(logits, dim=-1)
transcription = tokenizer.batch_decode(predicted_ids)
```
## Evaluation
This code snippet shows how to evaluate **facebook/wav2vec2-base-960h** on LibriSpeech's "clean" and "other" test data.
```python
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Tokenizer
import soundfile as sf
import torch
from jiwer import wer
librispeech_eval = load_dataset("librispeech_asr", "clean", split="test")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h").to("cuda")
tokenizer = Wav2Vec2Tokenizer.from_pretrained("facebook/wav2vec2-base-960h")
def map_to_array(batch):
speech, _ = sf.read(batch["file"])
batch["speech"] = speech
return batch
librispeech_eval = librispeech_eval.map(map_to_array)
def map_to_pred(batch):
input_values = tokenizer(batch["speech"], return_tensors="pt", padding="longest").input_values
with torch.no_grad():
logits = model(input_values.to("cuda")).logits
predicted_ids = torch.argmax(logits, dim=-1)
transcription = tokenizer.batch_decode(predicted_ids)
batch["transcription"] = transcription
return batch
result = librispeech_eval.map(map_to_pred, batched=True, batch_size=1, remove_columns=["speech"])
print("WER:", wer(result["text"], result["transcription"]))
```
*Result (WER)*:
| "clean" | "other" |
|---|---|
| 3.4 | 8.6 |
# Reference
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/)
[Facebook's huggingface Wav2Vec2](https://huggingface.co/facebook/wav2vec2-base-960h)
[Paper](https://arxiv.org/abs/2006.11477)
|
tommy19970714/wav2vec2-large-xlsr-ja | 2021-03-28T10:31:51.000Z | [] | [
".gitattributes"
] | tommy19970714 | 0 | |||
tomrisris/mvgl-model | 2020-12-31T19:55:47.000Z | [] | [
".gitattributes"
] | tomrisris | 0 | |||
tonitt97/AIS-Classification | 2021-04-23T18:51:57.000Z | [] | [
".gitattributes"
] | tonitt97 | 0 | |||
tosh99/finance-sentiment | 2021-06-06T13:25:13.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
] | tosh99 | 202 | transformers | |
tr3cks/2LabelsSentimentAnalysisSpanish | 2021-05-20T08:01:29.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
] | tr3cks | 149 | transformers | |
tr3cks/3LabelsSentimentAnalysisSpanish | 2021-05-20T08:02:41.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
] | tr3cks | 84 | transformers | |
tr3cks/SentimentAnalysis_BETO | 2021-05-20T08:03:50.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
] | tr3cks | 64 | transformers | |
tr3cks/bert-ner-es | 2021-05-20T08:04:44.000Z | [
"pytorch",
"jax",
"bert",
"token-classification",
"transformers"
] | token-classification | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
] | tr3cks | 14 | transformers | |
trangdieu/distilroberta-retrained-6-epochs | 2021-05-30T03:57:07.000Z | [
"pytorch",
"roberta",
"masked-lm",
"transformers",
"fill-mask"
] | fill-mask | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"training_args.bin"
] | trangdieu | 147 | transformers | |
trangdieu/roberta-base-retrained-6-epochs | 2021-06-02T17:56:07.000Z | [
"pytorch",
"roberta",
"masked-lm",
"transformers",
"fill-mask"
] | fill-mask | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"training_args.bin"
] | trangdieu | 24 | transformers | |
trangdieu/roberta-large-retrained-2-epochs | 2021-06-12T19:45:22.000Z | [
"pytorch",
"roberta",
"masked-lm",
"transformers",
"fill-mask"
] | fill-mask | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"training_args.bin"
] | trangdieu | 630 | transformers | |
treeven88/gpt-neo-2.7B | 2021-05-01T14:24:10.000Z | [] | [
".gitattributes"
] | treeven88 | 0 | |||
trisongz/biobert_large_cased | 2020-04-29T21:35:30.000Z | [
"pytorch",
"transformers"
] | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"vocab.txt"
] | trisongz | 11 | transformers | ||
trituenhantaoio/bert-base-vietnamese-diacritics-uncased | 2021-05-20T08:05:47.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"transformers"
] | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | trituenhantaoio | 22 | transformers | ## Usage
```python
from transformers import BertForSequenceClassification
from transformers import BertTokenizer
model = BertForSequenceClassification.from_pretrained("trituenhantaoio/bert-base-vietnamese-diacritics-uncased")
tokenizer = BertTokenizer.from_pretrained("trituenhantaoio/bert-base-vietnamese-diacritics-uncased")
```
### References
```
@article{ttnt2020bertdiacritics,
title={Vietnamese BERT Diacritics: Pretrained on News and Wiki},
author={trituenhantao.io},
year = {2020},
publisher = {Hugging Face},
journal = {Hugging Face repository}
}
```
[trituenhantao.io](https://trituenhantao.io) |
|
trituenhantaoio/bert-base-vietnamese-uncased | 2021-05-20T08:06:49.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"transformers"
] | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | trituenhantaoio | 110 | transformers | ## Usage
```python
from transformers import BertForSequenceClassification
from transformers import BertTokenizer
model = BertForSequenceClassification.from_pretrained("trituenhantaoio/bert-base-vietnamese-uncased")
tokenizer = BertTokenizer.from_pretrained("trituenhantaoio/bert-base-vietnamese-uncased")
```
### References
```
@article{ttnt2020bert,
title={Vietnamese BERT: Pretrained on News and Wiki},
author={trituenhantao.io},
year = {2020},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/trituenhantaoio/vn-bert-base-uncased}},
}
```
[trituenhantao.io](https://trituenhantao.io) |
|
tromedlov/t5-small-cnn | 2020-07-26T23:14:58.000Z | [
"pytorch",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
] | text2text-generation | [
".DS_Store",
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json"
] | tromedlov | 52 | transformers | |
trtd56/autonlp-wrime_joy_only-117396 | 2021-05-20T08:07:48.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"ja",
"dataset:trtd56/autonlp-data-wrime_joy_only",
"transformers",
"autonlp"
] | text-classification | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"sample_input.pkl",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
] | trtd56 | 20 | transformers | ---
tags: autonlp
language: ja
widget:
- text: "I love AutoNLP 🤗"
datasets:
- trtd56/autonlp-data-wrime_joy_only
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 117396
## Validation Metrics
- Loss: 0.4094310998916626
- Accuracy: 0.8201678240740741
- Precision: 0.6750303520841765
- Recall: 0.7912713472485768
- AUC: 0.8927167943538512
- F1: 0.728543350076436
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/trtd56/autonlp-wrime_joy_only-117396
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("trtd56/autonlp-wrime_joy_only-117396", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("trtd56/autonlp-wrime_joy_only-117396", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
trueto/medalbert-base-chinese | 2021-03-26T05:29:51.000Z | [
"pytorch",
"albert",
"transformers"
] | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"vocab.txt"
] | trueto | 8 | transformers | # [medbert](https://github.com/trueto/medbert)
This project open-sources the models from the master's thesis "Exploration and Research on the Application of the BERT Model in Chinese Clinical Natural Language Processing".
## Evaluation benchmarks
Four evaluation datasets were built: a Chinese electronic medical record named entity recognition dataset (CEMRNER), a Chinese medical text named entity recognition dataset (CMTNER),
a Chinese medical question-question matching dataset (CMedQQ), and a Chinese clinical text classification dataset (CCTC).
| **Dataset** | **Train** | **Dev** | **Test** | **Task type** | **Corpus source** |
| ---- | ---- | ---- |---- |---- |:----:|
| CEMRNER | 965 | 138 | 276 | Named entity recognition | Yidu Cloud |
| CMTNER | 14000 | 2000 | 4000 | Named entity recognition | CHIP2020 |
| CMedQQ | 14000 | 2000 | 4000 | Sentence-pair matching | Ping An Medical |
| CCTC | 26837 | 3834 | 7669 | Sentence classification | CHIP2019 |
## Released models
The MedBERT and MedAlbert models were obtained by pre-training BERT and Albert on about 650 million characters of Chinese clinical natural language text.
## Performance
Performance of each model under the same experimental environment, with identical training parameters and scripts:
| **Model** | **CEMRNER** | **CMTNER** | **CMedQQ** | **CCTC** |
| :---- | :----: | :----: | :----: | :----: |
| [BERT](https://huggingface.co/bert-base-chinese) | 81.17% | 65.67% | 87.77% | 81.62% |
| [MC-BERT](https://github.com/alibaba-research/ChineseBLUE) | 80.93% | 66.15% | 89.04% | 80.65% |
| [PCL-BERT](https://code.ihub.org.cn/projects/1775) | 81.58% | 67.02% | 88.81% | 80.27% |
| MedBERT | 82.29% | 66.49% | 88.32% | **81.77%** |
|MedBERT-wwm| **82.60%** | 67.11% | 88.02% | 81.72% |
|MedBERT-kd | 82.58% | **67.27%** | **89.34%** | 80.73% |
|- | - | - | - | - |
| [Albert](https://huggingface.co/voidful/albert_chinese_base) | 79.98% | 62.42% | 86.81% | 79.83% |
| MedAlbert | 81.03% | 63.81% | 87.56% | 80.05% |
|MedAlbert-wwm| **81.28%** | **64.12%** | **87.71%** | **80.46%** |
## Citation
```
杨飞洪,王序文,李姣.BERT模型在中文临床自然语言处理中的应用探索与研究[EB/OL].https://github.com/trueto/medbert, 2021-03.
``` |
|
trueto/medalbert-base-wwm-chinese | 2021-03-26T05:33:51.000Z | [
"pytorch",
"albert",
"transformers"
] | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"vocab.txt"
] | trueto | 8 | transformers | # [medbert](https://github.com/trueto/medbert)
This project open-sources the models from the master's thesis "Exploration and Research on the Application of the BERT Model in Chinese Clinical Natural Language Processing".
## Evaluation benchmarks
Four evaluation datasets were built: a Chinese electronic medical record named entity recognition dataset (CEMRNER), a Chinese medical text named entity recognition dataset (CMTNER),
a Chinese medical question-question matching dataset (CMedQQ), and a Chinese clinical text classification dataset (CCTC).
| **Dataset** | **Train** | **Dev** | **Test** | **Task type** | **Corpus source** |
| ---- | ---- | ---- |---- |---- |:----:|
| CEMRNER | 965 | 138 | 276 | Named entity recognition | Yidu Cloud |
| CMTNER | 14000 | 2000 | 4000 | Named entity recognition | CHIP2020 |
| CMedQQ | 14000 | 2000 | 4000 | Sentence-pair matching | Ping An Medical |
| CCTC | 26837 | 3834 | 7669 | Sentence classification | CHIP2019 |
## Released models
The MedBERT and MedAlbert models were obtained by pre-training BERT and Albert on about 650 million characters of Chinese clinical natural language text.
## Performance
Performance of each model under the same experimental environment, with identical training parameters and scripts:
| **Model** | **CEMRNER** | **CMTNER** | **CMedQQ** | **CCTC** |
| :---- | :----: | :----: | :----: | :----: |
| [BERT](https://huggingface.co/bert-base-chinese) | 81.17% | 65.67% | 87.77% | 81.62% |
| [MC-BERT](https://github.com/alibaba-research/ChineseBLUE) | 80.93% | 66.15% | 89.04% | 80.65% |
| [PCL-BERT](https://code.ihub.org.cn/projects/1775) | 81.58% | 67.02% | 88.81% | 80.27% |
| MedBERT | 82.29% | 66.49% | 88.32% | **81.77%** |
|MedBERT-wwm| **82.60%** | 67.11% | 88.02% | 81.72% |
|MedBERT-kd | 82.58% | **67.27%** | **89.34%** | 80.73% |
|- | - | - | - | - |
| [Albert](https://huggingface.co/voidful/albert_chinese_base) | 79.98% | 62.42% | 86.81% | 79.83% |
| MedAlbert | 81.03% | 63.81% | 87.56% | 80.05% |
|MedAlbert-wwm| **81.28%** | **64.12%** | **87.71%** | **80.46%** |
## Citation
```
杨飞洪,王序文,李姣.BERT模型在中文临床自然语言处理中的应用探索与研究[EB/OL].https://github.com/trueto/medbert, 2021-03.
``` |
|
trueto/medbert-base-chinese | 2021-05-20T08:08:47.000Z | [
"pytorch",
"jax",
"bert",
"pretraining",
"transformers"
] | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"vocab.txt"
] | trueto | 209 | transformers | # [medbert](https://github.com/trueto/medbert)
This project open-sources the models from the master's thesis "Exploration and Research on the Application of the BERT Model in Chinese Clinical Natural Language Processing".
## Evaluation benchmarks
Four evaluation datasets were built: a Chinese electronic medical record named entity recognition dataset (CEMRNER), a Chinese medical text named entity recognition dataset (CMTNER),
a Chinese medical question-question matching dataset (CMedQQ), and a Chinese clinical text classification dataset (CCTC).
| **Dataset** | **Train** | **Dev** | **Test** | **Task type** | **Corpus source** |
| ---- | ---- | ---- |---- |---- |:----:|
| CEMRNER | 965 | 138 | 276 | Named entity recognition | Yidu Cloud |
| CMTNER | 14000 | 2000 | 4000 | Named entity recognition | CHIP2020 |
| CMedQQ | 14000 | 2000 | 4000 | Sentence-pair matching | Ping An Medical |
| CCTC | 26837 | 3834 | 7669 | Sentence classification | CHIP2019 |
## Released models
The MedBERT and MedAlbert models were obtained by pre-training BERT and Albert on about 650 million characters of Chinese clinical natural language text.
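The checkpoints can be loaded with the standard BERT classes in `transformers`; a minimal sketch for the base model (not part of the original card, with an arbitrary clinical example sentence):
```python
from transformers import BertTokenizer, BertModel

model_name = "trueto/medbert-base-chinese"
tokenizer = BertTokenizer.from_pretrained(model_name)
model = BertModel.from_pretrained(model_name)

# Encode a short Chinese clinical sentence and inspect the contextual embeddings.
inputs = tokenizer("患者主诉头痛三天。", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)
```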
## Performance
Performance of each model under the same experimental environment, with identical training parameters and scripts:
| **Model** | **CEMRNER** | **CMTNER** | **CMedQQ** | **CCTC** |
| :---- | :----: | :----: | :----: | :----: |
| [BERT](https://huggingface.co/bert-base-chinese) | 81.17% | 65.67% | 87.77% | 81.62% |
| [MC-BERT](https://github.com/alibaba-research/ChineseBLUE) | 80.93% | 66.15% | 89.04% | 80.65% |
| [PCL-BERT](https://code.ihub.org.cn/projects/1775) | 81.58% | 67.02% | 88.81% | 80.27% |
| MedBERT | 82.29% | 66.49% | 88.32% | **81.77%** |
|MedBERT-wwm| **82.60%** | 67.11% | 88.02% | 81.72% |
|MedBERT-kd | 82.58% | **67.27%** | **89.34%** | 80.73% |
|- | - | - | - | - |
| [Albert](https://huggingface.co/voidful/albert_chinese_base) | 79.98% | 62.42% | 86.81% | 79.83% |
| MedAlbert | 81.03% | 63.81% | 87.56% | 80.05% |
|MedAlbert-wwm| **81.28%** | **64.12%** | **87.71%** | **80.46%** |
## Citation
```
杨飞洪,王序文,李姣.BERT模型在中文临床自然语言处理中的应用探索与研究[EB/OL].https://github.com/trueto/medbert, 2021-03.
``` |
|
trueto/medbert-base-wwm-chinese | 2021-05-20T08:09:44.000Z | [
"pytorch",
"jax",
"bert",
"pretraining",
"transformers"
] | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"vocab.txt"
] | trueto | 1,288 | transformers | # [medbert](https://github.com/trueto/medbert)
This project open-sources the models from the master's thesis "Exploration and Research on the Application of the BERT Model in Chinese Clinical Natural Language Processing".
## Evaluation benchmarks
Four evaluation datasets were built: a Chinese electronic medical record named entity recognition dataset (CEMRNER), a Chinese medical text named entity recognition dataset (CMTNER),
a Chinese medical question-question matching dataset (CMedQQ), and a Chinese clinical text classification dataset (CCTC).
| **Dataset** | **Train** | **Dev** | **Test** | **Task type** | **Corpus source** |
| ---- | ---- | ---- |---- |---- |:----:|
| CEMRNER | 965 | 138 | 276 | Named entity recognition | Yidu Cloud |
| CMTNER | 14000 | 2000 | 4000 | Named entity recognition | CHIP2020 |
| CMedQQ | 14000 | 2000 | 4000 | Sentence-pair matching | Ping An Medical |
| CCTC | 26837 | 3834 | 7669 | Sentence classification | CHIP2019 |
## Released models
The MedBERT and MedAlbert models were obtained by pre-training BERT and Albert on about 650 million characters of Chinese clinical natural language text.
## Performance
Performance of each model under the same experimental environment, with identical training parameters and scripts:
| **Model** | **CEMRNER** | **CMTNER** | **CMedQQ** | **CCTC** |
| :---- | :----: | :----: | :----: | :----: |
| [BERT](https://huggingface.co/bert-base-chinese) | 81.17% | 65.67% | 87.77% | 81.62% |
| [MC-BERT](https://github.com/alibaba-research/ChineseBLUE) | 80.93% | 66.15% | 89.04% | 80.65% |
| [PCL-BERT](https://code.ihub.org.cn/projects/1775) | 81.58% | 67.02% | 88.81% | 80.27% |
| MedBERT | 82.29% | 66.49% | 88.32% | **81.77%** |
|MedBERT-wwm| **82.60%** | 67.11% | 88.02% | 81.72% |
|MedBERT-kd | 82.58% | **67.27%** | **89.34%** | 80.73% |
|- | - | - | - | - |
| [Albert](https://huggingface.co/voidful/albert_chinese_base) | 79.98% | 62.42% | 86.81% | 79.83% |
| MedAlbert | 81.03% | 63.81% | 87.56% | 80.05% |
|MedAlbert-wwm| **81.28%** | **64.12%** | **87.71%** | **80.46%** |
## Citation
```
杨飞洪,王序文,李姣.BERT模型在中文临床自然语言处理中的应用探索与研究[EB/OL].https://github.com/trueto/medbert, 2021-03.
``` |
|
trueto/medbert-kd-chinese | 2021-05-20T08:10:57.000Z | [
"pytorch",
"jax",
"bert",
"pretraining",
"transformers"
] | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"vocab.txt"
] | trueto | 153 | transformers | # [medbert](https://github.com/trueto/medbert)
This project open-sources the models from the master's thesis "Exploration and Research on the Application of the BERT Model in Chinese Clinical Natural Language Processing".
## Evaluation benchmarks
Four evaluation datasets were built: a Chinese electronic medical record named entity recognition dataset (CEMRNER), a Chinese medical text named entity recognition dataset (CMTNER),
a Chinese medical question-question matching dataset (CMedQQ), and a Chinese clinical text classification dataset (CCTC).
| **Dataset** | **Train** | **Dev** | **Test** | **Task type** | **Corpus source** |
| ---- | ---- | ---- |---- |---- |:----:|
| CEMRNER | 965 | 138 | 276 | Named entity recognition | Yidu Cloud |
| CMTNER | 14000 | 2000 | 4000 | Named entity recognition | CHIP2020 |
| CMedQQ | 14000 | 2000 | 4000 | Sentence-pair matching | Ping An Medical |
| CCTC | 26837 | 3834 | 7669 | Sentence classification | CHIP2019 |
## Released models
The MedBERT and MedAlbert models were obtained by pre-training BERT and Albert on about 650 million characters of Chinese clinical natural language text.
## Performance
Performance of each model under the same experimental environment, with identical training parameters and scripts:
| **Model** | **CEMRNER** | **CMTNER** | **CMedQQ** | **CCTC** |
| :---- | :----: | :----: | :----: | :----: |
| [BERT](https://huggingface.co/bert-base-chinese) | 81.17% | 65.67% | 87.77% | 81.62% |
| [MC-BERT](https://github.com/alibaba-research/ChineseBLUE) | 80.93% | 66.15% | 89.04% | 80.65% |
| [PCL-BERT](https://code.ihub.org.cn/projects/1775) | 81.58% | 67.02% | 88.81% | 80.27% |
| MedBERT | 82.29% | 66.49% | 88.32% | **81.77%** |
|MedBERT-wwm| **82.60%** | 67.11% | 88.02% | 81.72% |
|MedBERT-kd | 82.58% | **67.27%** | **89.34%** | 80.73% |
|- | - | - | - | - |
| [Albert](https://huggingface.co/voidful/albert_chinese_base) | 79.98% | 62.42% | 86.81% | 79.83% |
| MedAlbert | 81.03% | 63.81% | 87.56% | 80.05% |
|MedAlbert-wwm| **81.28%** | **64.12%** | **87.71%** | **80.46%** |
## Citation
```
杨飞洪,王序文,李姣.BERT模型在中文临床自然语言处理中的应用探索与研究[EB/OL].https://github.com/trueto/medbert, 2021-03.
``` |
|
truongphan/mt5-vi-question-generation | 2021-01-11T14:53:03.000Z | [] | [
".gitattributes"
] | truongphan | 0 | |||
ts1829/obama_gpt2 | 2021-05-23T13:13:35.000Z | [
"pytorch",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
] | text-generation | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
] | ts1829 | 23 | transformers | |
ts1829/trump_gpt2 | 2021-05-23T13:14:40.000Z | [
"pytorch",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
] | text-generation | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
] | ts1829 | 28 | transformers | |
tsuberim/kjhgfds | 2021-02-22T16:26:33.000Z | [] | [
".gitattributes"
] | tsuberim | 0 | |||
ttumyche/bluebert | 2020-09-21T04:57:19.000Z | [
"pytorch",
"transformers"
] | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"vocab.txt"
] | ttumyche | 11 | transformers | ||
tugstugi/bert-base-mongolian-cased | 2021-05-20T08:12:07.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"mn",
"arxiv:1810.04805",
"transformers",
"mongolian",
"cased",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"added_tokens.json",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tf_model.h5",
"tokenizer_config.json"
] | tugstugi | 45 | transformers | ---
language: "mn"
tags:
- bert
- mongolian
- cased
---
# BERT-BASE-MONGOLIAN-CASED
[Link to Official Mongolian-BERT repo](https://github.com/tugstugi/mongolian-bert)
## Model description
This repository contains pre-trained Mongolian [BERT](https://arxiv.org/abs/1810.04805) models trained by [tugstugi](https://github.com/tugstugi), [enod](https://github.com/enod) and [sharavsambuu](https://github.com/sharavsambuu).
Special thanks to [nabar](https://github.com/nabar) who provided 5x TPUs.
This repository is based on the following open source projects: [google-research/bert](https://github.com/google-research/bert/),
[huggingface/pytorch-pretrained-BERT](https://github.com/huggingface/pytorch-pretrained-BERT) and [yoheikikuta/bert-japanese](https://github.com/yoheikikuta/bert-japanese).
#### How to use
```python
from transformers import pipeline, AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained('tugstugi/bert-base-mongolian-cased', use_fast=False)
model = AutoModelForMaskedLM.from_pretrained('tugstugi/bert-base-mongolian-cased')
## declare task ##
pipe = pipeline(task="fill-mask", model=model, tokenizer=tokenizer)
## example ##
input_ = '[MASK] хот Монгол улсын нийслэл.'
output_ = pipe(input_)
for i in range(len(output_)):
print(output_[i])
## output ##
# {'sequence': 'Улаанбаатар хот Монгол улсын нийслэл.', 'score': 0.826970100402832, 'token': 281, 'token_str': 'Улаанбаатар'}
# {'sequence': 'Нийслэл хот Монгол улсын нийслэл.', 'score': 0.06551621109247208, 'token': 4059, 'token_str': 'Нийслэл'}
# {'sequence': 'Эрдэнэт хот Монгол улсын нийслэл.', 'score': 0.0264141745865345, 'token': 2229, 'token_str': 'Эрдэнэт'}
# {'sequence': 'Дархан хот Монгол улсын нийслэл.', 'score': 0.017083868384361267, 'token': 1646, 'token_str': 'Дархан'}
# {'sequence': 'УБ хот Монгол улсын нийслэл.', 'score': 0.010854342952370644, 'token': 7389, 'token_str': 'УБ'}
```
## Training data
Mongolian Wikipedia and the 700 million word Mongolian news data set [[Pretraining Procedure](https://github.com/tugstugi/mongolian-bert#pre-training)]
### BibTeX entry and citation info
```bibtex
@misc{mongolian-bert,
author = {Tuguldur, Erdene-Ochir and Gunchinish, Sharavsambuu and Bataa, Enkhbold},
title = {BERT Pretrained Models on Mongolian Datasets},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/tugstugi/mongolian-bert/}}
}
```
|
tugstugi/bert-base-mongolian-uncased | 2021-05-20T08:13:09.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"mn",
"arxiv:1810.04805",
"transformers",
"mongolian",
"uncased",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"added_tokens.json",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tf_model.h5",
"tokenizer_config.json"
] | tugstugi | 150 | transformers | ---
language: "mn"
tags:
- bert
- mongolian
- uncased
---
# BERT-BASE-MONGOLIAN-UNCASED
[Link to Official Mongolian-BERT repo](https://github.com/tugstugi/mongolian-bert)
## Model description
This repository contains pre-trained Mongolian [BERT](https://arxiv.org/abs/1810.04805) models trained by [tugstugi](https://github.com/tugstugi), [enod](https://github.com/enod) and [sharavsambuu](https://github.com/sharavsambuu).
Special thanks to [nabar](https://github.com/nabar) who provided 5x TPUs.
This repository is based on the following open source projects: [google-research/bert](https://github.com/google-research/bert/),
[huggingface/pytorch-pretrained-BERT](https://github.com/huggingface/pytorch-pretrained-BERT) and [yoheikikuta/bert-japanese](https://github.com/yoheikikuta/bert-japanese).
#### How to use
```python
from transformers import pipeline, AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained('tugstugi/bert-base-mongolian-uncased', use_fast=False)
model = AutoModelForMaskedLM.from_pretrained('tugstugi/bert-base-mongolian-uncased')
## declare task ##
pipe = pipeline(task="fill-mask", model=model, tokenizer=tokenizer)
## example ##
input_ = 'Миний [MASK] хоол идэх нь тун чухал.'
output_ = pipe(input_)
for i in range(len(output_)):
print(output_[i])
## output ##
#{'sequence': 'миний хувьд хоол идэх нь тун чухал.', 'score': 0.7889143824577332, 'token': 126, 'token_str': 'хувьд'}
#{'sequence': 'миний бодлоор хоол идэх нь тун чухал.', 'score': 0.18616807460784912, 'token': 6106, 'token_str': 'бодлоор'}
#{'sequence': 'миний зүгээс хоол идэх нь тун чухал.', 'score': 0.004825591575354338, 'token': 761, 'token_str': 'зүгээс'}
#{'sequence': 'миний биед хоол идэх нь тун чухал.', 'score': 0.0015743684489279985, 'token': 3010, 'token_str': 'биед'}
#{'sequence': 'миний тухайд хоол идэх нь тун чухал.', 'score': 0.0014919431414455175, 'token': 1712, 'token_str': 'тухайд'}
```
## Training data
Mongolian Wikipedia and the 700 million word Mongolian news data set [[Pretraining Procedure](https://github.com/tugstugi/mongolian-bert#pre-training)]
### BibTeX entry and citation info
```bibtex
@misc{mongolian-bert,
author = {Tuguldur, Erdene-Ochir and Gunchinish, Sharavsambuu and Bataa, Enkhbold},
title = {BERT Pretrained Models on Mongolian Datasets},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/tugstugi/mongolian-bert/}}
}
```
|
tugstugi/bert-large-mongolian-cased | 2021-05-20T08:16:24.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"mn",
"arxiv:1810.04805",
"transformers",
"mongolian",
"cased",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"added_tokens.json",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tf_model.h5",
"tokenizer_config.json"
] | tugstugi | 15 | transformers | ---
language: "mn"
tags:
- bert
- mongolian
- cased
---
# BERT-LARGE-MONGOLIAN-CASED
[Link to Official Mongolian-BERT repo](https://github.com/tugstugi/mongolian-bert)
## Model description
This repository contains pre-trained Mongolian [BERT](https://arxiv.org/abs/1810.04805) models trained by [tugstugi](https://github.com/tugstugi), [enod](https://github.com/enod) and [sharavsambuu](https://github.com/sharavsambuu).
Special thanks to [nabar](https://github.com/nabar) who provided 5x TPUs.
This repository is based on the following open source projects: [google-research/bert](https://github.com/google-research/bert/),
[huggingface/pytorch-pretrained-BERT](https://github.com/huggingface/pytorch-pretrained-BERT) and [yoheikikuta/bert-japanese](https://github.com/yoheikikuta/bert-japanese).
#### How to use
```python
from transformers import pipeline, AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained('tugstugi/bert-large-mongolian-cased', use_fast=False)
model = AutoModelForMaskedLM.from_pretrained('tugstugi/bert-large-mongolian-cased')
## declare task ##
pipe = pipeline(task="fill-mask", model=model, tokenizer=tokenizer)
## example ##
input_ = 'Монгол улсын [MASK] Улаанбаатар хотоос ярьж байна.'
output_ = pipe(input_)
for i in range(len(output_)):
print(output_[i])
## output ##
# {'sequence': 'Монгол улсын нийслэл Улаанбаатар хотоос ярьж байна.', 'score': 0.9779232740402222, 'token': 1176, 'token_str': 'нийслэл'}
# {'sequence': 'Монгол улсын Нийслэл Улаанбаатар хотоос ярьж байна.', 'score': 0.015034765936434269, 'token': 4059, 'token_str': 'Нийслэл'}
# {'sequence': 'Монгол улсын Ерөнхийлөгч Улаанбаатар хотоос ярьж байна.', 'score': 0.0021413620561361313, 'token': 325, 'token_str': 'Ерөнхийлөгч'}
# {'sequence': 'Монгол улсын ерөнхийлөгч Улаанбаатар хотоос ярьж байна.', 'score': 0.0008035294013097882, 'token': 1215, 'token_str': 'ерөнхийлөгч'}
# {'sequence': 'Монгол улсын нийслэлийн Улаанбаатар хотоос ярьж байна.', 'score': 0.0006434018723666668, 'token': 356, 'token_str': 'нийслэлийн'}
```
## Training data
Mongolian Wikipedia and the 700 million word Mongolian news data set [[Pretraining Procedure](https://github.com/tugstugi/mongolian-bert#pre-training)]
### BibTeX entry and citation info
```bibtex
@misc{mongolian-bert,
author = {Tuguldur, Erdene-Ochir and Gunchinish, Sharavsambuu and Bataa, Enkhbold},
title = {BERT Pretrained Models on Mongolian Datasets},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/tugstugi/mongolian-bert/}}
}
```
|
tugstugi/bert-large-mongolian-uncased | 2021-05-20T08:19:28.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"mn",
"arxiv:1810.04805",
"transformers",
"mongolian",
"uncased",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"added_tokens.json",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tf_model.h5",
"tokenizer_config.json"
] | tugstugi | 17 | transformers | ---
language: "mn"
tags:
- bert
- mongolian
- uncased
---
# BERT-LARGE-MONGOLIAN-UNCASED
[Link to Official Mongolian-BERT repo](https://github.com/tugstugi/mongolian-bert)
## Model description
This repository contains pre-trained Mongolian [BERT](https://arxiv.org/abs/1810.04805) models trained by [tugstugi](https://github.com/tugstugi), [enod](https://github.com/enod) and [sharavsambuu](https://github.com/sharavsambuu).
Special thanks to [nabar](https://github.com/nabar) who provided 5x TPUs.
This repository is based on the following open source projects: [google-research/bert](https://github.com/google-research/bert/),
[huggingface/pytorch-pretrained-BERT](https://github.com/huggingface/pytorch-pretrained-BERT) and [yoheikikuta/bert-japanese](https://github.com/yoheikikuta/bert-japanese).
#### How to use
```python
from transformers import pipeline, AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained('tugstugi/bert-large-mongolian-uncased', use_fast=False)
model = AutoModelForMaskedLM.from_pretrained('tugstugi/bert-large-mongolian-uncased')
## declare task ##
pipe = pipeline(task="fill-mask", model=model, tokenizer=tokenizer)
## example ##
input_ = 'Монгол улсын [MASK] Улаанбаатар хотоос ярьж байна.'
output_ = pipe(input_)
for i in range(len(output_)):
print(output_[i])
## output ##
# {'sequence': 'монгол улсын нийслэл улаанбаатар хотоос ярьж байна.', 'score': 0.7867621183395386, 'token': 849, 'token_str': 'нийслэл'}
# {'sequence': 'монгол улсын ерөнхийлөгч улаанбаатар хотоос ярьж байна.', 'score': 0.14303277432918549, 'token': 244, 'token_str': 'ерөнхийлөгч'}
# {'sequence': 'монгол улсын ерөнхийлөгчийг улаанбаатар хотоос ярьж байна.', 'score': 0.011642335914075375, 'token': 8373, 'token_str': 'ерөнхийлөгчийг'}
# {'sequence': 'монгол улсын иргэд улаанбаатар хотоос ярьж байна.', 'score': 0.006592822726815939, 'token': 247, 'token_str': 'иргэд'}
# {'sequence': 'монгол улсын нийслэлийг улаанбаатар хотоос ярьж байна.', 'score': 0.006165097933262587, 'token': 15501, 'token_str': 'нийслэлийг'}
```
## Training data
Mongolian Wikipedia and the 700 million word Mongolian news data set [[Pretraining Procedure](https://github.com/tugstugi/mongolian-bert#pre-training)]
### BibTeX entry and citation info
```bibtex
@misc{mongolian-bert,
author = {Tuguldur, Erdene-Ochir and Gunchinish, Sharavsambuu and Bataa, Enkhbold},
title = {BERT Pretrained Models on Mongolian Datasets},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/tugstugi/mongolian-bert/}}
}
```
|
tugstugi/wav2vec2-large-xlsr-53-kalmyk | 2021-03-17T20:16:29.000Z | [
"pytorch",
"wav2vec2",
"xal",
"transformers",
"speech",
"audio",
"automatic-speech-recognition",
"license:apache-2.0"
] | automatic-speech-recognition | [
".gitattributes",
"README.md",
"config.json",
"preprocessor_config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
] | tugstugi | 9 | transformers | ---
language: xal
tags:
- speech
- audio
- automatic-speech-recognition
license: apache-2.0
---
## Info
Wav2Vec2 XLSR-53 fine-tuned on the Kalmyk Bible.
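## Usage
A minimal transcription sketch (not part of the original card); it assumes a local mono recording named `example.wav`, which is a placeholder file name.
```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("tugstugi/wav2vec2-large-xlsr-53-kalmyk")
model = Wav2Vec2ForCTC.from_pretrained("tugstugi/wav2vec2-large-xlsr-53-kalmyk")

# "example.wav" is a placeholder for any Kalmyk speech recording.
speech, sampling_rate = torchaudio.load("example.wav")
if sampling_rate != 16_000:
    speech = torchaudio.transforms.Resample(sampling_rate, 16_000)(speech)

inputs = processor(speech.squeeze().numpy(), sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))
```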
|
tugstugi/wav2vec2-large-xlsr-53-mongolian | 2021-03-22T07:19:25.000Z | [
"pytorch",
"wav2vec2",
"mn",
"dataset:common_voice",
"transformers",
"audio",
"automatic-speech-recognition",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0"
] | automatic-speech-recognition | [
".gitattributes",
".gitignore",
"README.md",
"config.json",
"preprocessor_config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
] | tugstugi | 17 | transformers | ---
language: mn
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Mongolian by Tugstugi
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice mn
type: common_voice
args: mn
metrics:
- name: Test WER
type: wer
value: 42.80
---
# Wav2Vec2-Large-XLSR-53-Mongolian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Mongolian using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "mn", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("tugstugi/wav2vec2-large-xlsr-53-mongolian")
model = Wav2Vec2ForCTC.from_pretrained("tugstugi/wav2vec2-large-xlsr-53-mongolian")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Mongolian test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "mn", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("tugstugi/wav2vec2-large-xlsr-53-mongolian")
model = Wav2Vec2ForCTC.from_pretrained("tugstugi/wav2vec2-large-xlsr-53-mongolian")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 42.80 %
## Training
The Common Voice `train` and `validation` datasets were used for training.
The script used for training can be found ??? |
tuner007/pegasus_paraphrase | 2021-03-22T21:11:33.000Z | [
"pytorch",
"pegasus",
"seq2seq",
"en",
"transformers",
"license:apache-2.0",
"paraphrasing",
"text2text-generation"
] | text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json"
] | tuner007 | 44,979 | transformers | ---
language: en
license: apache-2.0
tags:
- pegasus
- paraphrasing
- seq2seq
---
## Model description
[PEGASUS](https://github.com/google-research/pegasus) fine-tuned for paraphrasing
## Model in Action 🚀
```
import torch
from transformers import PegasusForConditionalGeneration, PegasusTokenizer
model_name = 'tuner007/pegasus_paraphrase'
torch_device = 'cuda' if torch.cuda.is_available() else 'cpu'
tokenizer = PegasusTokenizer.from_pretrained(model_name)
model = PegasusForConditionalGeneration.from_pretrained(model_name).to(torch_device)
def get_response(input_text,num_return_sequences,num_beams):
batch = tokenizer([input_text],truncation=True,padding='longest',max_length=60, return_tensors="pt").to(torch_device)
translated = model.generate(**batch,max_length=60,num_beams=num_beams, num_return_sequences=num_return_sequences, temperature=1.5)
tgt_text = tokenizer.batch_decode(translated, skip_special_tokens=True)
return tgt_text
```
#### Example:
```
num_beams = 10
num_return_sequences = 10
context = "The ultimate test of your knowledge is your capacity to convey it to another."
get_response(context,num_return_sequences,num_beams)
# output:
['The test of your knowledge is your ability to convey it.',
'The ability to convey your knowledge is the ultimate test of your knowledge.',
'The ability to convey your knowledge is the most important test of your knowledge.',
'Your capacity to convey your knowledge is the ultimate test of it.',
'The test of your knowledge is your ability to communicate it.',
'Your capacity to convey your knowledge is the ultimate test of your knowledge.',
'Your capacity to convey your knowledge to another is the ultimate test of your knowledge.',
'Your capacity to convey your knowledge is the most important test of your knowledge.',
'The test of your knowledge is how well you can convey it.',
'Your capacity to convey your knowledge is the ultimate test.']
```
> Created by [Arpit Rajauria](https://twitter.com/arpit_rajauria)
[](https://twitter.com/arpit_rajauria)
|
tuner007/pegasus_qa | 2020-12-11T22:02:48.000Z | [
"pytorch",
"pegasus",
"seq2seq",
"transformers",
"text2text-generation"
] | text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json"
] | tuner007 | 23 | transformers | # Pegasus for question-answering
Pegasus model fine-tuned for QA using a text-to-text approach
## Model in Action 🚀
```
import torch
from transformers import PegasusForConditionalGeneration, PegasusTokenizer
model_name = 'tuner007/pegasus_qa'
torch_device = 'cuda' if torch.cuda.is_available() else 'cpu'
tokenizer = PegasusTokenizer.from_pretrained(model_name)
model = PegasusForConditionalGeneration.from_pretrained(model_name).to(torch_device)
def get_answer(question, context):
input_text = "question: %s text: %s" % (question,context)
batch = tokenizer.prepare_seq2seq_batch([input_text], truncation=True, padding='longest', return_tensors="pt").to(torch_device)
translated = model.generate(**batch)
tgt_text = tokenizer.batch_decode(translated, skip_special_tokens=True)
return tgt_text[0]
```
#### Example:
```
context = "PG&E stated it scheduled the blackouts in response to forecasts for high winds amid dry conditions. The aim is to reduce the risk of wildfires. Nearly 800 thousand customers were scheduled to be affected by the shutoffs which were expected to last through at least midday tomorrow."
question = "How many customers were affected by the shutoffs?"
get_answer(question, context)
# output: '800 thousand'
```
> Created by Arpit Rajauria
[](https://twitter.com/arpit_rajauria)
|
tuner007/t5_abs_qa | 2020-12-11T22:02:51.000Z | [
"pytorch",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
] | text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json",
"training_args.bin"
] | tuner007 | 638 | transformers | # T5 for abstractive question-answering
This is a T5-base model fine-tuned for abstractive QA using a text-to-text approach.
## Model training
This model was trained on a Colab TPU with 35GB RAM for 2 epochs.
## Model in Action 🚀
```
import torch
from transformers import AutoModelWithLMHead, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("tuner007/t5_abs_qa")
model = AutoModelWithLMHead.from_pretrained("tuner007/t5_abs_qa")
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)
def get_answer(question, context):
input_text = "context: %s <question for context: %s </s>" % (context,question)
features = tokenizer([input_text], return_tensors='pt')
out = model.generate(input_ids=features['input_ids'].to(device), attention_mask=features['attention_mask'].to(device))
return tokenizer.decode(out[0])
```
#### Example 1: Answer available
```
context = "In Norse mythology, Valhalla is a majestic, enormous hall located in Asgard, ruled over by the god Odin."
question = "What is Valhalla?"
get_answer(question, context)
# output: 'It is a hall of worship ruled by Odin.'
```
#### Example 2: Answer not available
```
context = "In Norse mythology, Valhalla is a majestic, enormous hall located in Asgard, ruled over by the god Odin."
question = "What is Asgard?"
get_answer(question, context)
# output: 'No answer available in context.'
```
> Created by Arpit Rajauria
[](https://twitter.com/arpit_rajauria)
|
turtlesoupy/forward-dictionary-model-v1 | 2021-05-23T13:15:50.000Z | [
"pytorch",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
] | text-generation | [
".gitattributes",
"added_tokens.json",
"config.json",
"flax_model.msgpack",
"merges.txt",
"optimizer.pt",
"pytorch_model.bin",
"scheduler.pt",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
] | turtlesoupy | 26 | transformers | |
turtlesoupy/forward-dictionary-model | 2021-05-23T13:16:34.000Z | [
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
] | text-generation | [
".gitattributes",
"added_tokens.json",
"config.json",
"merges.txt",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
] | turtlesoupy | 13 | transformers | |
turtlesoupy/inverse-dictionary-model-v1 | 2021-05-23T13:17:21.000Z | [
"pytorch",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
] | text-generation | [
".gitattributes",
"added_tokens.json",
"config.json",
"flax_model.msgpack",
"merges.txt",
"optimizer.pt",
"pytorch_model.bin",
"scheduler.pt",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
] | turtlesoupy | 22 | transformers | |
tuscan-chicken-wrap/semeval2020_task11_si | 2021-05-20T22:43:34.000Z | [
"pytorch",
"jax",
"roberta",
"masked-lm",
"transformers",
"fill-mask"
] | fill-mask | [
".gitattributes",
"added_tokens.json",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"test_predictions.txt",
"test_results.txt",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
] | tuscan-chicken-wrap | 7 | transformers | |
twdooley/breitbot | 2021-05-23T13:18:29.000Z | [
"pytorch",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
] | text-generation | [
".gitattributes",
"README.md",
"added_tokens.json",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
] | twdooley | 18 | transformers | # BreitBot
## Timothy W. Dooley
### GitHub
The GitHub repository for the project can be found [here](https://github.com/twdooley/election_news).
### Model
This model was trained on about 16,000 headlines from Breitbart.com spanning March 2019 to 11 November 2020. The purpose of this project was to better understand how strongly polarized news crafts a narrative through Natural Language Processing. The BreitBot model was specifically created to understand the 'clickbaity' nature of a Breitbart headline. Many of the results are 'reasonable' within the scope of Breitbart's production. I will leave it to the user to make further interpretation. The full project noted that over 70% of Breitbart's articles from month to month have a negative sentiment score. Subjectively, I believe this is shown through the headlines generated.
### Training
BreitBot is finetuned from GPT-2 on about 16,000 headlines. The maximum length allowed in the tokenizer was the length of the longest headline (~50 tokens). A huge credit goes to Richard Bownes, PhD, whose article ["Fine Tuning GPT-2 for Magic the Gathering Flavour Text Generation"](https://medium.com/swlh/fine-tuning-gpt-2-for-magic-the-gathering-flavour-text-generation-3bafd0f9bb93) provided incredible direction and help in training this model. It was trained using a GPU on Google Colab. |
twmkn9/albert-base-v2-squad2 | 2020-12-11T22:02:54.000Z | [
"pytorch",
"albert",
"question-answering",
"transformers"
] | question-answering | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json",
"training_args.bin"
] | twmkn9 | 1,083 | transformers | This model is [ALBERT base v2](https://huggingface.co/albert-base-v2) trained on SQuAD v2 as:
```
export SQUAD_DIR=../../squad2
python3 run_squad.py \
    --model_type albert \
    --model_name_or_path albert-base-v2 \
    --do_train \
    --do_eval \
    --overwrite_cache \
    --do_lower_case \
    --version_2_with_negative \
    --save_steps 100000 \
    --train_file $SQUAD_DIR/train-v2.0.json \
    --predict_file $SQUAD_DIR/dev-v2.0.json \
    --per_gpu_train_batch_size 8 \
    --num_train_epochs 3 \
    --learning_rate 3e-5 \
    --max_seq_length 384 \
    --doc_stride 128 \
    --output_dir ./tmp/albert_fine/
```
Performance on a dev subset is close to the original paper:
```
Results:
{
'exact': 78.71010200723923,
'f1': 81.89228117126069,
'total': 6078,
'HasAns_exact': 75.39518900343643,
'HasAns_f1': 82.04167868004215,
'HasAns_total': 2910,
'NoAns_exact': 81.7550505050505,
'NoAns_f1': 81.7550505050505,
'NoAns_total': 3168,
'best_exact': 78.72655478775913,
'best_exact_thresh': 0.0,
'best_f1': 81.90873395178066,
'best_f1_thresh': 0.0
}
```
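For inference, the fine-tuned checkpoint can be used through the question-answering pipeline. The snippet below is a usage sketch rather than part of the original card; the question and context strings are illustrative.
```
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="twmkn9/albert-base-v2-squad2",
    tokenizer="twmkn9/albert-base-v2-squad2",
)

context = "The Apollo program was carried out by NASA and landed the first humans on the Moon in 1969."
result = qa(question="Who carried out the Apollo program?", context=context)
print(result["answer"], result["score"])
```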
We are hopeful this might save you time, energy, and compute. Cheers! |
twmkn9/bert-base-uncased-squad2 | 2021-05-20T08:21:23.000Z | [
"pytorch",
"jax",
"tfsavedmodel",
"bert",
"question-answering",
"transformers"
] | question-answering | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"saved_model.tar.gz",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
] | twmkn9 | 368 | transformers | This model is [BERT base uncased](https://huggingface.co/bert-base-uncased) trained on SQuAD v2 as:
```
export SQUAD_DIR=../../squad2
python3 run_squad.py \
    --model_type bert \
    --model_name_or_path bert-base-uncased \
    --do_train \
    --do_eval \
    --overwrite_cache \
    --do_lower_case \
    --version_2_with_negative \
    --save_steps 100000 \
    --train_file $SQUAD_DIR/train-v2.0.json \
    --predict_file $SQUAD_DIR/dev-v2.0.json \
    --per_gpu_train_batch_size 8 \
    --num_train_epochs 3 \
    --learning_rate 3e-5 \
    --max_seq_length 384 \
    --doc_stride 128 \
    --output_dir ./tmp/bert_fine_tuned/
```
Performance on a dev subset is close to the original paper:
```
Results:
{
'exact': 72.35932872655479,
'f1': 75.75355132564763,
'total': 6078,
'HasAns_exact': 74.29553264604812,
'HasAns_f1': 81.38490892002987,
'HasAns_total': 2910,
'NoAns_exact': 70.58080808080808,
'NoAns_f1': 70.58080808080808,
'NoAns_total': 3168,
'best_exact': 72.35932872655479,
'best_exact_thresh': 0.0,
'best_f1': 75.75355132564766,
'best_f1_thresh': 0.0
}
```
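A usage sketch (not part of the original card) that calls the model classes directly instead of the pipeline; the question and context are illustrative.
```
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("twmkn9/bert-base-uncased-squad2")
model = AutoModelForQuestionAnswering.from_pretrained("twmkn9/bert-base-uncased-squad2")

question = "Where is the Eiffel Tower located?"
context = "The Eiffel Tower is a wrought-iron lattice tower located in Paris, France."
inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Decode the span between the most likely start and end token positions.
start = torch.argmax(outputs.start_logits)
end = torch.argmax(outputs.end_logits) + 1
print(tokenizer.decode(inputs["input_ids"][0][start:end]))
```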
We are hopeful this might save you time, energy, and compute. Cheers! |
twmkn9/distilbert-base-uncased-squad2 | 2020-12-11T22:03:01.000Z | [
"pytorch",
"distilbert",
"question-answering",
"transformers"
] | question-answering | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
] | twmkn9 | 1,347 | transformers | This model is [Distilbert base uncased](https://huggingface.co/distilbert-base-uncased) trained on SQuAD v2 as:
```
export SQUAD_DIR=../../squad2
python3 run_squad.py \
    --model_type distilbert \
    --model_name_or_path distilbert-base-uncased \
    --do_train \
    --do_eval \
    --overwrite_cache \
    --do_lower_case \
    --version_2_with_negative \
    --save_steps 100000 \
    --train_file $SQUAD_DIR/train-v2.0.json \
    --predict_file $SQUAD_DIR/dev-v2.0.json \
    --per_gpu_train_batch_size 8 \
    --num_train_epochs 3 \
    --learning_rate 3e-5 \
    --max_seq_length 384 \
    --doc_stride 128 \
    --output_dir ./tmp/distilbert_fine_tuned/
```
Performance on a dev subset is close to the original paper:
```
Results:
{
'exact': 64.88976637051661,
'f1': 68.1776176526635,
'total': 6078,
'HasAns_exact': 69.7594501718213,
'HasAns_f1': 76.62665295288285,
'HasAns_total': 2910,
'NoAns_exact': 60.416666666666664,
'NoAns_f1': 60.416666666666664,
'NoAns_total': 3168,
'best_exact': 64.88976637051661,
'best_exact_thresh': 0.0,
'best_f1': 68.17761765266337,
'best_f1_thresh': 0.0
}
```
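Since the model was trained with SQuAD v2's unanswerable questions, the question-answering pipeline can be asked to return an empty answer when no span fits. The snippet below is a sketch, not part of the original card, and the inputs are illustrative.
```
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="twmkn9/distilbert-base-uncased-squad2",
    tokenizer="twmkn9/distilbert-base-uncased-squad2",
)

context = "DistilBERT is a smaller, faster distillation of BERT released by Hugging Face."
# handle_impossible_answer lets the pipeline return an empty string for unanswerable questions.
print(qa(question="Who released DistilBERT?", context=context, handle_impossible_answer=True))
print(qa(question="When was the Great Wall built?", context=context, handle_impossible_answer=True))
```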
We are hopeful this might save you time, energy, and compute. Cheers! |
twmkn9/distilroberta-base-squad2 | 2021-05-20T22:45:57.000Z | [
"pytorch",
"jax",
"roberta",
"question-answering",
"transformers"
] | question-answering | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
] | twmkn9 | 81 | transformers | This model is [Distilroberta base](https://huggingface.co/distilroberta-base) trained on SQuAD v2 as:
```
export SQUAD_DIR=../../squad2
python3 run_squad.py \
    --model_type roberta \
    --model_name_or_path distilroberta-base \
    --do_train \
    --do_eval \
    --overwrite_cache \
    --do_lower_case \
    --version_2_with_negative \
    --save_steps 100000 \
    --train_file $SQUAD_DIR/train-v2.0.json \
    --predict_file $SQUAD_DIR/dev-v2.0.json \
    --per_gpu_train_batch_size 8 \
    --num_train_epochs 3 \
    --learning_rate 3e-5 \
    --max_seq_length 384 \
    --doc_stride 128 \
    --output_dir ./tmp/distilroberta_fine_tuned/
```
Performance on a dev subset is close to the original paper:
```
Results:
{
'exact': 70.9279368213228,
'f1': 74.60439802429168,
'total': 6078,
'HasAns_exact': 67.62886597938144,
'HasAns_f1': 75.30774267754136,
'HasAns_total': 2910,
'NoAns_exact': 73.95833333333333,
'NoAns_f1': 73.95833333333333,
'NoAns_total': 3168,
'best_exact': 70.94438960184272,
'best_exact_thresh': 0.0,
'best_f1': 74.62085080481161,
'best_f1_thresh': 0.0
}
```
We are hopeful this might save you time, energy, and compute. Cheers! |
tyoc213/wav2vec2-large-xlsr-nahuatl | 2021-04-07T02:59:04.000Z | [
"pytorch",
"wav2vec2",
"nah specifically ncj",
"dataset:created a new dataset based on https://www.openslr.org/92/",
"transformers",
"audio",
"automatic-speech-recognition",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0"
] | automatic-speech-recognition | [
".gitattributes",
"README.md",
"config.json",
"less60wer.ipynb",
"preprocessor_config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
] | tyoc213 | 7 | transformers |
---
language: nah specifically ncj
datasets:
- created a new dataset based on https://www.openslr.org/92/
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Nahuatl XLSR Wav2Vec 53
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
metrics:
- name: Test WER
type: wer
value: 69.11
---
# Wav2Vec2-Large-XLSR-53-ncj/nah
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Nahuatl, specifically the variety from the North of Puebla (ncj), using a derivative of [SLR92](https://www.openslr.org/92/) and some samples of the `es` and `de` datasets from [Common Voice](https://huggingface.co/datasets/common_voice).
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "{lang_id}", split="test[:2%]") # TODO: publish nahuatl_slr92_by_sentence
processor = Wav2Vec2Processor.from_pretrained("tyoc213/wav2vec2-large-xlsr-nahuatl")
model = Wav2Vec2ForCTC.from_pretrained("tyoc213/wav2vec2-large-xlsr-nahuatl")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Nahuatl (North of Puebla, ncj) test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "{lang_id}", split="test") # TODO: publish nahuatl_slr92_by_sentence
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("tyoc213/wav2vec2-large-xlsr-nahuatl")
model = Wav2Vec2ForCTC.from_pretrained("tyoc213/wav2vec2-large-xlsr-nahuatl")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\"\“\%\‘\”\�\(\)\-]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 50.95 %
## Training
A derivative of [SLR92](https://www.openslr.org/92/), to be published soon, and some samples of the `es` and `de` datasets from [Common Voice](https://huggingface.co/datasets/common_voice).
The script used for training can be found in [less60wer.ipynb](./less60wer.ipynb).
|