modelId | lastModified | tags | pipeline_tag | files | publishedBy | downloads_last_month | library | modelCard
---|---|---|---|---|---|---|---|---
jack-oh/korbert_morp_korquad | 2021-05-25T08:06:32.000Z | [
"pytorch",
"transformers"
]
| [
".gitattributes",
"config.json",
"pytorch_model.bin",
"tokenization_morp.py",
"tokenizer_config.json",
"vocab.txt"
]
| jack-oh | 7 | transformers | ||
jacob-valdez/blenderbot-small-tflite | 2021-04-25T00:47:29.000Z | [
"tflite",
"en",
"Android",
"blenderbot",
"license:apache-2.0"
]
| [
".gitattributes",
"README.md",
"blenderbot.tflite"
]
| jacob-valdez | 0 | ---
language: "en"
#thumbnail: "url to a thumbnail used in social sharing"
tags:
- Android
- tflite
- blenderbot
license: "apache-2.0"
#datasets:
#metrics:
---
# Model Card
`blenderbot-small-tflite` is a TFLite version of `blenderbot-small-90M` that I converted for my UTA CSE3310 class. See the repo at [https://github.com/kmosoti/DesparadosAEYE](https://github.com/kmosoti/DesparadosAEYE) and the conversion process [here](https://drive.google.com/file/d/1F93nMsDIm1TWhn70FcLtcaKQUynHq9wS/view?usp=sharing).
You have to right-pad the user and model input token IDs to shape [32,], then indicate the true lengths with the 3rd and 4th parameters. A hedged invocation sketch follows the interpreter details below.
```python
display(interpreter.get_input_details())
display(interpreter.get_output_details())
```
```json
[{'dtype': numpy.int32,
'index': 0,
'name': 'input_tokens',
'quantization': (0.0, 0),
'quantization_parameters': {'quantized_dimension': 0,
'scales': array([], dtype=float32),
'zero_points': array([], dtype=int32)},
'shape': array([32], dtype=int32),
'shape_signature': array([32], dtype=int32),
'sparsity_parameters': {}},
{'dtype': numpy.int32,
'index': 1,
'name': 'decoder_input_tokens',
'quantization': (0.0, 0),
'quantization_parameters': {'quantized_dimension': 0,
'scales': array([], dtype=float32),
'zero_points': array([], dtype=int32)},
'shape': array([32], dtype=int32),
'shape_signature': array([32], dtype=int32),
'sparsity_parameters': {}},
{'dtype': numpy.int32,
'index': 2,
'name': 'input_len',
'quantization': (0.0, 0),
'quantization_parameters': {'quantized_dimension': 0,
'scales': array([], dtype=float32),
'zero_points': array([], dtype=int32)},
'shape': array([], dtype=int32),
'shape_signature': array([], dtype=int32),
'sparsity_parameters': {}},
{'dtype': numpy.int32,
'index': 3,
'name': 'decoder_input_len',
'quantization': (0.0, 0),
'quantization_parameters': {'quantized_dimension': 0,
'scales': array([], dtype=float32),
'zero_points': array([], dtype=int32)},
'shape': array([], dtype=int32),
'shape_signature': array([], dtype=int32),
'sparsity_parameters': {}}]
[{'dtype': numpy.int32,
'index': 3113,
'name': 'Identity',
'quantization': (0.0, 0),
'quantization_parameters': {'quantized_dimension': 0,
'scales': array([], dtype=float32),
'zero_points': array([], dtype=int32)},
'shape': array([1], dtype=int32),
'shape_signature': array([1], dtype=int32),
'sparsity_parameters': {}}]
``` |
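A minimal end-to-end invocation sketch is given below. The pad value of 0, the placeholder token IDs, and the reliance on the tensor order reported above are assumptions; real token IDs would come from the original `blenderbot-small-90M` tokenizer.
```python
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="blenderbot.tflite")
interpreter.allocate_tensors()
inputs = interpreter.get_input_details()
outputs = interpreter.get_output_details()

def right_pad(ids, length=32, pad_id=0):
    """Right-pad a list of token IDs to a fixed length (pad_id=0 is an assumption)."""
    return np.array(ids + [pad_id] * (length - len(ids)), dtype=np.int32)

user_ids = [10, 20, 30]   # placeholder user-input token IDs
model_ids = [1]           # placeholder decoder-input token IDs

interpreter.set_tensor(inputs[0]["index"], right_pad(user_ids))                      # input_tokens, shape [32]
interpreter.set_tensor(inputs[1]["index"], right_pad(model_ids))                     # decoder_input_tokens, shape [32]
interpreter.set_tensor(inputs[2]["index"], np.array(len(user_ids), dtype=np.int32))  # input_len, scalar
interpreter.set_tensor(inputs[3]["index"], np.array(len(model_ids), dtype=np.int32)) # decoder_input_len, scalar

interpreter.invoke()
print(interpreter.get_tensor(outputs[0]["index"]))  # shape [1] int32 output tensor
```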
||
jacobshein/bert-danish-uncased-by-botxo | 2021-01-27T14:18:15.000Z | []
| [
".gitattributes"
]
| jacobshein | 0 | |||
jaehyeong/koelectra-base-v3-finetuned-generalized-sentiment-analysis | 2020-12-04T13:59:51.000Z | [
"pytorch",
"electra",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"tokenizer_config.json",
"vocab.txt"
]
| jaehyeong | 10 | transformers | |
jaimin/wav2vec2-base-gujarati-demo | 2021-03-31T07:37:36.000Z | [
"pytorch",
"wav2vec2",
"Guj",
"dataset:google",
"transformers",
"audio",
"automatic-speech-recognition",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0"
]
| automatic-speech-recognition | [
".gitattributes",
"README.md",
"config.json",
"preprocessor_config.json",
"pytorch_model.bin",
"scheduler.pt",
"special_tokens_map.json",
"template.README.md",
"tokenizer_config.json",
"trainer_state.json",
"training_args.bin",
"vocab.json"
]
| jaimin | 7 | transformers | ---
language:
-
-
thumbnail:
tags:
-
-
-
license:
datasets:
-
-
metrics:
-
-
---
# MyModelName
## Model description
You can embed local or remote images using ``
## Intended uses & limitations
#### How to use
```python
# You can include sample code which will be formatted
```
#### Limitations and bias
Provide examples of latent issues and potential remediations.
## Training data
Describe the data you used to train the model.
If you initialized it with pre-trained weights, add a link to the pre-trained model card or repository with description of the pre-training data.
## Training procedure
Preprocessing, hardware used, hyperparameters...
## Eval results
### BibTeX entry and citation info
```bibtex
@inproceedings{...,
year={2020}
}
```
|
jakelever/coronabert | 2021-05-19T20:34:36.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"en",
"dataset:cord19",
"dataset:pubmed",
"transformers",
"coronavirus",
"covid",
"bionlp",
"license:mit"
]
| text-classification | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tf_model.preproc",
"tokenizer_config.json",
"vocab.txt"
]
| jakelever | 250 | transformers | ---
language: en
thumbnail: https://coronacentral.ai/logo-with-name.png?1
tags:
- coronavirus
- covid
- bionlp
datasets:
- cord19
- pubmed
license: mit
widget:
- text: "Pre-existing T-cell immunity to SARS-CoV-2 in unexposed healthy controls in Ecuador, as detected with a COVID-19 Interferon-Gamma Release Assay."
- text: "Lifestyle and mental health disruptions during COVID-19."
- text: "More than 50 Long-term effects of COVID-19: a systematic review and meta-analysis"
---
# CoronaCentral BERT Model for Topic / Article Type Classification
This is the topic / article type multi-label classification for the [CoronaCentral website](https://coronacentral.ai). This forms part of the pipeline for downloading and processing coronavirus literature described in the [corona-ml repo](https://github.com/jakelever/corona-ml) with available [step-by-step descriptions](https://github.com/jakelever/corona-ml/blob/master/stepByStep.md). The method is described in the [preprint](https://doi.org/10.1101/2020.12.21.423860) and detailed performance results can be found in the [machine learning details](https://github.com/jakelever/corona-ml/blob/master/machineLearningDetails.md) document.
This model was derived by fine-tuning the [microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract) model on this coronavirus sequence (document) classification task.
## Usage
Below are two Google Colab notebooks with example usage of this sequence classification model using HuggingFace transformers and KTrain.
- [HuggingFace example on Google Colab](https://colab.research.google.com/drive/1cBNgKd4o6FNWwjKXXQQsC_SaX1kOXDa4?usp=sharing)
- [KTrain example on Google Colab](https://colab.research.google.com/drive/1h7oJa2NDjnBEoox0D5vwXrxiCHj3B1kU?usp=sharing)
## Training Data
The model is trained on ~3200 manually-curated articles sampled at various stages during the coronavirus pandemic. The code for training is available in the [category\_prediction](https://github.com/jakelever/corona-ml/tree/master/category_prediction) directory of the main Github Repo. The data is available in the [annotated_documents.json.gz](https://github.com/jakelever/corona-ml/blob/master/category_prediction/annotated_documents.json.gz) file.
## Inputs and Outputs
The model takes in a tokenized title and abstract, combined into a single string and separated by a newline. The outputs are topics and article types, broadly called categories in the pipeline code; they are listed below. A few additional categories are handled by hand-coded rules described in the [step-by-step descriptions](https://github.com/jakelever/corona-ml/blob/master/stepByStep.md).
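For quick reference, a minimal inline sketch with the `transformers` library is shown below; the sigmoid threshold of 0.5 and the placeholder abstract are assumptions, and the Colab notebooks above remain the canonical examples.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "jakelever/coronabert"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

title = "Lifestyle and mental health disruptions during COVID-19."
abstract = "Placeholder abstract text."  # the real abstract would go here
text = title + "\n" + abstract           # title and abstract separated by a newline

inputs = tokenizer(text, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits[0]

# Multi-label output: apply a sigmoid and keep labels above a threshold (0.5 is an assumption)
probs = torch.sigmoid(logits)
print([model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5])
```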
### List of Article Types
- Comment/Editorial
- Meta-analysis
- News
- Review
### List of Topics
- Clinical Reports
- Communication
- Contact Tracing
- Diagnostics
- Drug Targets
- Education
- Effect on Medical Specialties
- Forecasting & Modelling
- Health Policy
- Healthcare Workers
- Imaging
- Immunology
- Inequality
- Infection Reports
- Long Haul
- Medical Devices
- Misinformation
- Model Systems & Tools
- Molecular Biology
- Non-human
- Non-medical
- Pediatrics
- Prevalence
- Prevention
- Psychology
- Recommendations
- Risk Factors
- Surveillance
- Therapeutics
- Transmission
- Vaccines
|
jaketae/bert-cola | 2021-02-03T03:57:55.000Z | []
| [
".gitattributes"
]
| jaketae | 0 | |||
jakobwes/xlm_roberta_squad_v1.1 | 2021-05-09T14:08:37.000Z | [
"pytorch",
"xlm-roberta",
"question-answering",
"transformers"
]
| question-answering | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"sentencepiece.bpe.model",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json"
]
| jakobwes | 15 | transformers | XLM-RoBERTa base (`xlm-roberta-base`) fine-tuned on SQuAD v1.1.
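A hedged usage sketch with the `transformers` question-answering pipeline (the question and context are made-up illustrations):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="jakobwes/xlm_roberta_squad_v1.1")

result = qa(
    question="What was the model finetuned on?",
    context="This XLM-RoBERTa base model was finetuned on SQuAD v1.1 for extractive question answering.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```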
**Training-specifications:**
- training_epochs: 3.0
- max_seq_length: 384
- batch_size: 16
- dataset_name: squad
- doc_stride 128
**Train-results:**
```
{
"epoch": 3.0,
"init_mem_cpu_alloc_delta": 991453184,
"init_mem_cpu_peaked_delta": 0,
"init_mem_gpu_alloc_delta": 1109893120,
"init_mem_gpu_peaked_delta": 0,
"train_mem_cpu_alloc_delta": 14753792,
"train_mem_cpu_peaked_delta": 0,
"train_mem_gpu_alloc_delta": 3330195456,
"train_mem_gpu_peaked_delta": 8287144960,
"train_runtime": 11376.3034,
"train_samples": 89597,
"train_samples_per_second": 1.477
}
```
**Eval-results:**
```
{
"epoch": 3.0,
"eval_samples": 10918,
"exact_match": 82.06244087038789,
"f1": 89.09539709124654
}
```
|
jamesmark/mark | 2021-04-09T05:32:14.000Z | []
| [
".gitattributes"
]
| jamesmark | 0 | |||
jannesg/bertsson | 2021-05-19T20:36:10.000Z | [
"pytorch",
"jax",
"bert",
"sv",
"transformers"
]
| [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"model.ckpt-1000000.data-00000-of-00001",
"model.ckpt-1000000.index",
"model.ckpt-1000000.meta",
"model.ckpt-1110000.data-00000-of-00001",
"model.ckpt-1110000.index",
"model.ckpt-1110000.meta",
"model.ckpt-990000.data-00000-of-00001",
"model.ckpt-990000.index",
"model.ckpt-990000.meta",
"model.ckpt-992500.data-00000-of-00001",
"model.ckpt-992500.index",
"model.ckpt-992500.meta",
"model.ckpt-995000.data-00000-of-00001",
"model.ckpt-995000.index",
"model.ckpt-995000.meta",
"model.ckpt-997500.data-00000-of-00001",
"model.ckpt-997500.index",
"model.ckpt-997500.meta",
"pytorch_model.bin",
"vocab.txt"
]
| jannesg | 74 | transformers | ---
language: sv
---
# BERTSSON Models
The models are trained on:
- Government Text
- Swedish Literature
- Swedish News
Corpus size: Roughly 6B tokens.
The following models are currently available:
- **bertsson** - A BERT base model trained with the same hyperparameters as first published by Google.
All models are cased and trained with whole word masking.
Stay tuned for evaluations.
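Since the checkpoint is a standard BERT base model, it should load with the `transformers` library; the fill-mask usage and the Swedish prompt below are assumptions, not documented behaviour.
```python
from transformers import pipeline

# Assumption: standard BERT fill-mask usage with the [MASK] token
unmasker = pipeline("fill-mask", model="jannesg/bertsson")
print(unmasker("Stockholm är Sveriges [MASK]."))
```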
|
|
jannesg/takalane_afr_roberta | 2021-05-20T16:58:24.000Z | [
"pytorch",
"jax",
"roberta",
"masked-lm",
"af",
"transformers",
"fill-mask",
"license:mit"
]
| fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"training_args.bin",
"vocab.json"
]
| jannesg | 16 | transformers | ---
language:
- af
thumbnail: https://pbs.twimg.com/media/EVjR6BsWoAAFaq5.jpg
tags:
- af
- fill-mask
- pytorch
- roberta
- masked-lm
license: MIT
---
# Takalani Sesame - Salie - Afrikaans 🇿🇦
<img src="https://pbs.twimg.com/media/EVjR6BsWoAAFaq5.jpg" width="600"/>
## Model description
Takalani Sesame (named after the South African version of Sesame Street) is a project that aims to promote the use of South African languages in NLP, and in particular look at techniques for low-resource languages to equalise performance with larger languages around the world.
## Intended uses & limitations
#### How to use
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("jannesg/takalane_afr_roberta")
model = AutoModelWithLMHead.from_pretrained("jannesg/takalane_afr_roberta")
```
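A short, illustrative fill-mask sketch (the Afrikaans prompt and the pipeline usage are assumptions):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="jannesg/takalane_afr_roberta")
print(fill_mask("Ek hou van <mask>."))  # "I like <mask>." -- RoBERTa-style mask token
```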
#### Limitations and bias
Updates will be added continuously to improve performance.
## Training data
Data collected from [https://wortschatz.uni-leipzig.de/en](https://wortschatz.uni-leipzig.de/en) <br/>
**Sentences:** 2.8M
## Training procedure
No preprocessing. Standard Huggingface hyperparameters.
## Author
Jannes Germishuys [website](http://jannesgg.github.io)
|
jannesg/takalane_nbl_roberta | 2021-05-20T16:59:09.000Z | [
"pytorch",
"jax",
"roberta",
"masked-lm",
"nr",
"transformers",
"fill-mask",
"license:mit"
]
| fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"training_args.bin",
"vocab.json"
]
| jannesg | 14 | transformers | ---
language:
- nr
thumbnail: https://pbs.twimg.com/media/EVjR6BsWoAAFaq5.jpg
tags:
- nr
- fill-mask
- pytorch
- roberta
- masked-lm
license: MIT
---
# Takalani Sesame - Ndebele 🇿🇦
<img src="https://pbs.twimg.com/media/EVjR6BsWoAAFaq5.jpg" width="600"/>
## Model description
Takalani Sesame (named after the South African version of Sesame Street) is a project that aims to promote the use of South African languages in NLP, and in particular look at techniques for low-resource languages to equalise performance with larger languages around the world.
## Intended uses & limitations
#### How to use
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("jannesg/takalane_nbl_roberta")
model = AutoModelWithLMHead.from_pretrained("jannesg/takalane_nbl_roberta")
```
#### Limitations and bias
Updates will be added continuously to improve performance. This is a very low-resource language, so results may be poor at first.
## Training data
Data collected from [https://wortschatz.uni-leipzig.de/en](https://wortschatz.uni-leipzig.de/en) <br/>
**Sentences:** 318M
## Training procedure
No preprocessing. Standard Huggingface hyperparameters.
## Author
Jannes Germishuys [website](http://jannesgg.github.io)
|
jannesg/takalane_nso_roberta | 2021-05-20T17:00:02.000Z | [
"pytorch",
"jax",
"roberta",
"masked-lm",
"nso",
"transformers",
"fill-mask",
"license:mit"
]
| fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"training_args.bin",
"vocab.json"
]
| jannesg | 15 | transformers | ---
language:
- nso
thumbnail: https://pbs.twimg.com/media/EVjR6BsWoAAFaq5.jpg
tags:
- nso
- fill-mask
- pytorch
- roberta
- masked-lm
license: MIT
---
# Takalani Sesame - Northern Sotho 🇿🇦
<img src="https://pbs.twimg.com/media/EVjR6BsWoAAFaq5.jpg" width="600"/>
## Model description
Takalani Sesame (named after the South African version of Sesame Street) is a project that aims to promote the use of South African languages in NLP, and in particular look at techniques for low-resource languages to equalise performance with larger languages around the world.
## Intended uses & limitations
#### How to use
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("jannesg/takalane_nso_roberta")
model = AutoModelWithLMHead.from_pretrained("jannesg/takalane_nso_roberta")
```
#### Limitations and bias
Updates will be added continuously to improve performance.
## Training data
Data collected from [https://wortschatz.uni-leipzig.de/en](https://wortschatz.uni-leipzig.de/en) <br/>
**Sentences:** 4746
## Training procedure
No preprocessing. Standard Huggingface hyperparameters.
## Author
Jannes Germishuys [website](http://jannesgg.github.io)
|
jannesg/takalane_sot_roberta | 2021-05-20T17:00:50.000Z | [
"pytorch",
"jax",
"roberta",
"masked-lm",
"sot",
"transformers",
"fill-mask",
"license:mit"
]
| fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"training_args.bin",
"vocab.json"
]
| jannesg | 16 | transformers | ---
language:
- sot
thumbnail: https://pbs.twimg.com/media/EVjR6BsWoAAFaq5.jpg
tags:
- sot
- fill-mask
- pytorch
- roberta
- masked-lm
license: MIT
---
# Takalani Sesame - Southern Sotho 🇿🇦
<img src="https://pbs.twimg.com/media/EVjR6BsWoAAFaq5.jpg" width="600"/>
## Model description
Takalani Sesame (named after the South African version of Sesame Street) is a project that aims to promote the use of South African languages in NLP, and in particular look at techniques for low-resource languages to equalise performance with larger languages around the world.
## Intended uses & limitations
#### How to use
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("jannesg/takalane_sot_roberta")
model = AutoModelWithLMHead.from_pretrained("jannesg/takalane_sot_roberta")
```
#### Limitations and bias
Updates will be added continuously to improve performance.
## Training data
Data collected from [https://wortschatz.uni-leipzig.de/en](https://wortschatz.uni-leipzig.de/en) <br/>
**Sentences:** 20000
## Training procedure
No preprocessing. Standard Huggingface hyperparameters.
## Author
Jannes Germishuys [website](http://jannesgg.github.io)
|
jannesg/takalane_ssw_roberta | 2021-05-20T17:01:40.000Z | [
"pytorch",
"jax",
"roberta",
"masked-lm",
"tn",
"transformers",
"fill-mask",
"license:mit"
]
| fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"training_args.bin",
"vocab.json"
]
| jannesg | 17 | transformers | ---
language:
- tn
thumbnail: https://pbs.twimg.com/media/EVjR6BsWoAAFaq5.jpg
tags:
- tn
- fill-mask
- pytorch
- roberta
- masked-lm
license: MIT
---
# Takalani Sesame - Tswana 🇿🇦
<img src="https://pbs.twimg.com/media/EVjR6BsWoAAFaq5.jpg" width="600"/>
## Model description
Takalani Sesame (named after the South African version of Sesame Street) is a project that aims to promote the use of South African languages in NLP, and in particular look at techniques for low-resource languages to equalise performance with larger languages around the world.
## Intended uses & limitations
#### How to use
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("jannesg/takalane_ssw_roberta")
model = AutoModelWithLMHead.from_pretrained("jannesg/takalane_ssw_roberta")
```
#### Limitations and bias
Updates will be added continuously to improve performance.
## Training data
Data collected from [https://wortschatz.uni-leipzig.de/en](https://wortschatz.uni-leipzig.de/en) <br/>
**Sentences:** 380
## Training procedure
No preprocessing. Standard Huggingface hyperparameters.
## Author
Jannes Germishuys [website](http://jannesgg.github.io)
|
jannesg/takalane_tsn_roberta | 2021-05-20T17:02:28.000Z | [
"pytorch",
"jax",
"roberta",
"masked-lm",
"tn",
"transformers",
"fill-mask",
"license:mit"
]
| fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"training_args.bin",
"vocab.json"
]
| jannesg | 15 | transformers | ---
language:
- tn
thumbnail: https://pbs.twimg.com/media/EVjR6BsWoAAFaq5.jpg
tags:
- tn
- fill-mask
- pytorch
- roberta
- masked-lm
license: MIT
---
# Takalani Sesame - Tswana 🇿🇦
<img src="https://pbs.twimg.com/media/EVjR6BsWoAAFaq5.jpg" width="600"/>
## Model description
Takalani Sesame (named after the South African version of Sesame Street) is a project that aims to promote the use of South African languages in NLP, and in particular look at techniques for low-resource languages to equalise performance with larger languages around the world.
## Intended uses & limitations
#### How to use
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("jannesg/takalane_tsn_roberta")
model = AutoModelWithLMHead.from_pretrained("jannesg/takalane_tsn_roberta")
```
#### Limitations and bias
Updates will be added continuously to improve performance.
## Training data
Data collected from [https://wortschatz.uni-leipzig.de/en](https://wortschatz.uni-leipzig.de/en) <br/>
**Sentences:** 10000
## Training procedure
No preprocessing. Standard Huggingface hyperparameters.
## Author
Jannes Germishuys [website](http://jannesgg.github.io)
|
jannesg/takalane_tso_roberta | 2021-05-20T17:03:37.000Z | [
"pytorch",
"jax",
"roberta",
"masked-lm",
"ts",
"transformers",
"fill-mask",
"license:mit"
]
| fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"training_args.bin",
"vocab.json"
]
| jannesg | 19 | transformers | ---
language:
- ts
thumbnail: https://pbs.twimg.com/media/EVjR6BsWoAAFaq5.jpg
tags:
- ts
- fill-mask
- pytorch
- roberta
- masked-lm
license: MIT
---
# Takalani Sesame - Tsonga 🇿🇦
<img src="https://pbs.twimg.com/media/EVjR6BsWoAAFaq5.jpg" width="600"/>
## Model description
Takalani Sesame (named after the South African version of Sesame Street) is a project that aims to promote the use of South African languages in NLP, and in particular look at techniques for low-resource languages to equalise performance with larger languages around the world.
## Intended uses & limitations
#### How to use
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("jannesg/takalane_tso_roberta")
model = AutoModelWithLMHead.from_pretrained("jannesg/takalane_tso_roberta")
```
#### Limitations and bias
Updates will be added continuously to improve performance.
## Training data
Data collected from [https://wortschatz.uni-leipzig.de/en](https://wortschatz.uni-leipzig.de/en) <br/>
**Sentences:** 20000
## Training procedure
No preprocessing. Standard Huggingface hyperparameters.
## Author
Jannes Germishuys [website](http://jannesgg.github.io)
|
jannesg/takalane_ven_roberta | 2021-05-20T17:04:26.000Z | [
"pytorch",
"jax",
"roberta",
"masked-lm",
"ven",
"transformers",
"fill-mask",
"license:mit"
]
| fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"training_args.bin",
"vocab.json"
]
| jannesg | 20 | transformers | ---
language:
- ven
thumbnail: https://pbs.twimg.com/media/EVjR6BsWoAAFaq5.jpg
tags:
- ven
- fill-mask
- pytorch
- roberta
- masked-lm
license: MIT
---
# Takalani Sesame - Venda 🇿🇦
<img src="https://pbs.twimg.com/media/EVjR6BsWoAAFaq5.jpg" width="600"/>
## Model description
Takalani Sesame (named after the South African version of Sesame Street) is a project that aims to promote the use of South African languages in NLP, and in particular look at techniques for low-resource languages to equalise performance with larger languages around the world.
## Intended uses & limitations
#### How to use
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("jannesg/takalane_ven_roberta")
model = AutoModelWithLMHead.from_pretrained("jannesg/takalane_ven_roberta")
```
#### Limitations and bias
Updates will be added continuously to improve performance.
## Training data
Data collected from [https://wortschatz.uni-leipzig.de/en](https://wortschatz.uni-leipzig.de/en) <br/>
**Sentences:** 9279
## Training procedure
No preprocessing. Standard Huggingface hyperparameters.
## Author
Jannes Germishuys [website](http://jannesgg.github.io)
|
jannesg/takalane_xho_roberta | 2021-05-20T17:05:15.000Z | [
"pytorch",
"jax",
"roberta",
"masked-lm",
"xho",
"transformers",
"fill-mask",
"license:mit"
]
| fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"training_args.bin",
"vocab.json"
]
| jannesg | 20 | transformers | ---
language:
- xho
thumbnail: https://pbs.twimg.com/media/EVjR6BsWoAAFaq5.jpg
tags:
- xho
- fill-mask
- pytorch
- roberta
- masked-lm
license: MIT
---
# Takalani Sesame - Xhosa 🇿🇦
<img src="https://pbs.twimg.com/media/EVjR6BsWoAAFaq5.jpg" width="600"/>
## Model description
Takalani Sesame (named after the South African version of Sesame Street) is a project that aims to promote the use of South African languages in NLP, and in particular look at techniques for low-resource languages to equalise performance with larger languages around the world.
## Intended uses & limitations
#### How to use
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("jannesg/takalane_xho_roberta")
model = AutoModelWithLMHead.from_pretrained("jannesg/takalane_xho_roberta")
```
#### Limitations and bias
Updates will be added continuously to improve performance.
## Training data
Data collected from [https://wortschatz.uni-leipzig.de/en](https://wortschatz.uni-leipzig.de/en) <br/>
**Sentences:** 100000
## Training procedure
No preprocessing. Standard Huggingface hyperparameters.
## Author
Jannes Germishuys [website](http://jannesgg.github.io)
|
jannesg/takalane_zul_roberta | 2021-05-20T17:06:46.000Z | [
"pytorch",
"jax",
"roberta",
"masked-lm",
"zul",
"transformers",
"fill-mask",
"license:mit"
]
| fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"training_args.bin",
"vocab.json"
]
| jannesg | 16 | transformers | ---
language:
- zul
thumbnail: https://pbs.twimg.com/media/EVjR6BsWoAAFaq5.jpg
tags:
- zul
- fill-mask
- pytorch
- roberta
- masked-lm
license: MIT
---
# Takalani Sesame - Zulu 🇿🇦
<img src="https://pbs.twimg.com/media/EVjR6BsWoAAFaq5.jpg" width="600"/>
## Model description
Takalani Sesame (named after the South African version of Sesame Street) is a project that aims to promote the use of South African languages in NLP, and in particular look at techniques for low-resource languages to equalise performance with larger languages around the world.
## Intended uses & limitations
#### How to use
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("jannesg/takalane_zul_roberta")
model = AutoModelWithLMHead.from_pretrained("jannesg/takalane_zul_roberta")
```
#### Limitations and bias
Updates will be added continuously to improve performance.
## Training data
Data collected from [https://wortschatz.uni-leipzig.de/en](https://wortschatz.uni-leipzig.de/en) <br/>
**Sentences:** 410000
## Training procedure
No preprocessing. Standard Huggingface hyperparameters.
## Author
Jannes Germishuys [website](http://jannesgg.github.io)
|
jaron-maene/gpt2-large-nl2bash | 2021-05-23T05:38:26.000Z | [
"pytorch",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| jaron-maene | 16 | transformers | |
jaron-maene/gpt2-medium-nl2bash | 2021-05-23T05:42:13.000Z | [
"pytorch",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| jaron-maene | 12 | transformers | |
jason9693/SoongsilBERT-beep-base | 2021-05-20T17:07:42.000Z | [
"pytorch",
"jax",
"roberta",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
]
| jason9693 | 76 | transformers | # Finetuning
## Result
### Base Model
| | Size | **NSMC**<br/>(acc) | **Naver NER**<br/>(F1) | **PAWS**<br/>(acc) | **KorNLI**<br/>(acc) | **KorSTS**<br/>(spearman) | **Question Pair**<br/>(acc) | **KorQuaD (Dev)**<br/>(EM/F1) | **Korean-Hate-Speech (Dev)**<br/>(F1) |
| :-------------------- | :---: | :----------------: | :--------------------: | :----------------: | :------------------: | :-----------------------: | :-------------------------: | :---------------------------: | :-----------------------------------: |
| KoBERT | 351M | 89.59 | 87.92 | 81.25 | 79.62 | 81.59 | 94.85 | 51.75 / 79.15 | 66.21 |
| XLM-Roberta-Base | 1.03G | 89.03 | 86.65 | 82.80 | 80.23 | 78.45 | 93.80 | 64.70 / 88.94 | 64.06 |
| HanBERT | 614M | 90.06 | 87.70 | 82.95 | 80.32 | 82.73 | 94.72 | 78.74 / 92.02 | 68.32 |
| KoELECTRA-Base-v3 | 431M | 90.63 | 88.11 | 84.45 | 82.24 | 85.53 | 95.25 | 84.83 / 93.45 | 67.61 |
| Soongsil-BERT | 370M | **91.2** | - | - | - | 76 | 94 | - | **69** |
### Small Model
| | Size | **NSMC**<br/>(acc) | **Naver NER**<br/>(F1) | **PAWS**<br/>(acc) | **KorNLI**<br/>(acc) | **KorSTS**<br/>(spearman) | **Question Pair**<br/>(acc) | **KorQuaD (Dev)**<br/>(EM/F1) | **Korean-Hate-Speech (Dev)**<br/>(F1) |
| :--------------------- | :--: | :----------------: | :--------------------: | :----------------: | :------------------: | :-----------------------: | :-------------------------: | :---------------------------: | :-----------------------------------: |
| DistilKoBERT | 108M | 88.60 | 84.65 | 60.50 | 72.00 | 72.59 | 92.48 | 54.40 / 77.97 | 60.72 |
| KoELECTRA-Small-v3 | 54M | 89.36 | 85.40 | 77.45 | 78.60 | 80.79 | 94.85 | 82.11 / 91.13 | 63.07 |
| Soongsil-BERT | 213M | **90.7** | 84 | 69.1 | 76 | - | 92 | - | **66** |
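For reference, a hedged classification sketch (the Korean example sentence is illustrative; label names come from the model config):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="jason9693/SoongsilBERT-beep-base")
print(classifier("이 영화 정말 재미있었어요."))  # illustrative input; labels come from the model config
```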
## Reference
- [Transformers Examples](https://github.com/huggingface/transformers/blob/master/examples/README.md)
- [NSMC](https://github.com/e9t/nsmc)
- [Naver NER Dataset](https://github.com/naver/nlp-challenge)
- [PAWS](https://github.com/google-research-datasets/paws)
- [KorNLI/KorSTS](https://github.com/kakaobrain/KorNLUDatasets)
- [Question Pair](https://github.com/songys/Question_pair)
- [KorQuad](https://korquad.github.io/category/1.0_KOR.html)
- [Korean Hate Speech](https://github.com/kocohub/korean-hate-speech)
- [KoELECTRA](https://github.com/monologg/KoELECTRA)
- [KoBERT](https://github.com/SKTBrain/KoBERT)
- [HanBERT](https://github.com/tbai2019/HanBert-54k-N)
- [HanBert Transformers](https://github.com/monologg/HanBert-Transformers)
|
jason9693/SoongsilBERT-notice-base | 2021-05-20T14:04:16.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"README.md",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
]
| jason9693 | 74 | transformers | # Finetuning
## Result
### Base Model
| | Size | **NSMC**<br/>(acc) | **Naver NER**<br/>(F1) | **PAWS**<br/>(acc) | **KorNLI**<br/>(acc) | **KorSTS**<br/>(spearman) | **Question Pair**<br/>(acc) | **KorQuaD (Dev)**<br/>(EM/F1) | **Korean-Hate-Speech (Dev)**<br/>(F1) |
| :-------------------- | :---: | :----------------: | :--------------------: | :----------------: | :------------------: | :-----------------------: | :-------------------------: | :---------------------------: | :-----------------------------------: |
| KoBERT | 351M | 89.59 | 87.92 | 81.25 | 79.62 | 81.59 | 94.85 | 51.75 / 79.15 | 66.21 |
| XLM-Roberta-Base | 1.03G | 89.03 | 86.65 | 82.80 | 80.23 | 78.45 | 93.80 | 64.70 / 88.94 | 64.06 |
| HanBERT | 614M | 90.06 | 87.70 | 82.95 | 80.32 | 82.73 | 94.72 | 78.74 / 92.02 | 68.32 |
| KoELECTRA-Base-v3 | 431M | 90.63 | 88.11 | 84.45 | 82.24 | 85.53 | 95.25 | 84.83 / 93.45 | 67.61 |
| Soongsil-BERT | 370M | **91.2** | - | - | - | 76 | 94 | - | **69** |
### Small Model
| | Size | **NSMC**<br/>(acc) | **Naver NER**<br/>(F1) | **PAWS**<br/>(acc) | **KorNLI**<br/>(acc) | **KorSTS**<br/>(spearman) | **Question Pair**<br/>(acc) | **KorQuaD (Dev)**<br/>(EM/F1) | **Korean-Hate-Speech (Dev)**<br/>(F1) |
| :--------------------- | :--: | :----------------: | :--------------------: | :----------------: | :------------------: | :-----------------------: | :-------------------------: | :---------------------------: | :-----------------------------------: |
| DistilKoBERT | 108M | 88.60 | 84.65 | 60.50 | 72.00 | 72.59 | 92.48 | 54.40 / 77.97 | 60.72 |
| KoELECTRA-Small-v3 | 54M | 89.36 | 85.40 | 77.45 | 78.60 | 80.79 | 94.85 | 82.11 / 91.13 | 63.07 |
| Soongsil-BERT | 213M | **90.7** | 84 | 69.1 | 76 | - | 92 | - | **66** |
## Reference
- [Transformers Examples](https://github.com/huggingface/transformers/blob/master/examples/README.md)
- [NSMC](https://github.com/e9t/nsmc)
- [Naver NER Dataset](https://github.com/naver/nlp-challenge)
- [PAWS](https://github.com/google-research-datasets/paws)
- [KorNLI/KorSTS](https://github.com/kakaobrain/KorNLUDatasets)
- [Question Pair](https://github.com/songys/Question_pair)
- [KorQuad](https://korquad.github.io/category/1.0_KOR.html)
- [Korean Hate Speech](https://github.com/kocohub/korean-hate-speech)
- [KoELECTRA](https://github.com/monologg/KoELECTRA)
- [KoBERT](https://github.com/SKTBrain/KoBERT)
- [HanBERT](https://github.com/tbai2019/HanBert-54k-N)
- [HanBert Transformers](https://github.com/monologg/HanBert-Transformers)
|
jason9693/SoongsilBERT-nsmc-base | 2021-05-20T17:08:31.000Z | [
"pytorch",
"jax",
"roberta",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
]
| jason9693 | 69 | transformers | # Finetuning
## Result
### Base Model
| | Size | **NSMC**<br/>(acc) | **Naver NER**<br/>(F1) | **PAWS**<br/>(acc) | **KorNLI**<br/>(acc) | **KorSTS**<br/>(spearman) | **Question Pair**<br/>(acc) | **KorQuaD (Dev)**<br/>(EM/F1) | **Korean-Hate-Speech (Dev)**<br/>(F1) |
| :-------------------- | :---: | :----------------: | :--------------------: | :----------------: | :------------------: | :-----------------------: | :-------------------------: | :---------------------------: | :-----------------------------------: |
| KoBERT | 351M | 89.59 | 87.92 | 81.25 | 79.62 | 81.59 | 94.85 | 51.75 / 79.15 | 66.21 |
| XLM-Roberta-Base | 1.03G | 89.03 | 86.65 | 82.80 | 80.23 | 78.45 | 93.80 | 64.70 / 88.94 | 64.06 |
| HanBERT | 614M | 90.06 | 87.70 | 82.95 | 80.32 | 82.73 | 94.72 | 78.74 / 92.02 | 68.32 |
| KoELECTRA-Base-v3 | 431M | 90.63 | 88.11 | 84.45 | 82.24 | 85.53 | 95.25 | 84.83 / 93.45 | 67.61 |
| Soongsil-BERT | 370M | **91.2** | - | - | - | 76 | 94 | - | **69** |
### Small Model
| | Size | **NSMC**<br/>(acc) | **Naver NER**<br/>(F1) | **PAWS**<br/>(acc) | **KorNLI**<br/>(acc) | **KorSTS**<br/>(spearman) | **Question Pair**<br/>(acc) | **KorQuaD (Dev)**<br/>(EM/F1) | **Korean-Hate-Speech (Dev)**<br/>(F1) |
| :--------------------- | :--: | :----------------: | :--------------------: | :----------------: | :------------------: | :-----------------------: | :-------------------------: | :---------------------------: | :-----------------------------------: |
| DistilKoBERT | 108M | 88.60 | 84.65 | 60.50 | 72.00 | 72.59 | 92.48 | 54.40 / 77.97 | 60.72 |
| KoELECTRA-Small-v3 | 54M | 89.36 | 85.40 | 77.45 | 78.60 | 80.79 | 94.85 | 82.11 / 91.13 | 63.07 |
| Soongsil-BERT | 213M | **90.7** | 84 | 69.1 | 76 | - | 92 | - | **66** |
## Reference
- [Transformers Examples](https://github.com/huggingface/transformers/blob/master/examples/README.md)
- [NSMC](https://github.com/e9t/nsmc)
- [Naver NER Dataset](https://github.com/naver/nlp-challenge)
- [PAWS](https://github.com/google-research-datasets/paws)
- [KorNLI/KorSTS](https://github.com/kakaobrain/KorNLUDatasets)
- [Question Pair](https://github.com/songys/Question_pair)
- [KorQuad](https://korquad.github.io/category/1.0_KOR.html)
- [Korean Hate Speech](https://github.com/kocohub/korean-hate-speech)
- [KoELECTRA](https://github.com/monologg/KoELECTRA)
- [KoBERT](https://github.com/SKTBrain/KoBERT)
- [HanBERT](https://github.com/tbai2019/HanBert-54k-N)
- [HanBert Transformers](https://github.com/monologg/HanBert-Transformers)
|
jason9693/soongsil-roberta-base | 2021-05-20T17:09:28.000Z | [
"pytorch",
"jax",
"roberta",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".DS_Store",
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| jason9693 | 186 | transformers | |
jason9693/soongsil-roberta-small | 2021-05-20T17:11:38.000Z | [
"pytorch",
"jax",
"roberta",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".DS_Store",
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
]
| jason9693 | 29 | transformers | |
jasonwu/ToD-BERT-jnt | 2021-05-19T20:38:18.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"added_tokens.json",
"config.json",
"eval_results.txt",
"flax_model.msgpack",
"optimizer.pt",
"pytorch_model.bin",
"scheduler.pt",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| jasonwu | 21 | transformers | |
jazzisfuture/new_summary_t5_small | 2021-04-20T12:18:02.000Z | [
"pytorch",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"config.json",
"optimizer.pt",
"pytorch_model.bin",
"scheduler.pt",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json",
"trainer_state.json",
"training_args.bin"
]
| jazzisfuture | 50 | transformers | |
jcblaise/bert-tagalog-base-cased-WWM | 2021-05-19T20:39:12.000Z | [
"pytorch",
"jax",
"bert",
"masked-lm",
"tl",
"transformers",
"tagalog",
"filipino",
"license:gpl-3.0",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"README.md",
"bert_model.ckpt.data-00000-of-00001",
"bert_model.ckpt.index",
"bert_model.ckpt.meta",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tokenizer_config.json",
"vocab.txt"
]
| jcblaise | 28 | transformers | ---
language: tl
tags:
- bert
- tagalog
- filipino
license: gpl-3.0
inference: false
---
# BERT Tagalog Base Cased (Whole Word Masking)
Tagalog version of BERT trained on a large preprocessed text corpus scraped and sourced from the internet. This model is part of a larger research project. We open-source the model to allow greater usage within the Filipino NLP community. This particular version uses whole word masking.
## Usage
The model can be loaded and used in both PyTorch and TensorFlow through the HuggingFace Transformers package.
```python
from transformers import TFAutoModel, AutoModel, AutoTokenizer
# TensorFlow
model = TFAutoModel.from_pretrained('jcblaise/bert-tagalog-base-cased-WWM', from_pt=True)
tokenizer = AutoTokenizer.from_pretrained('jcblaise/bert-tagalog-base-cased-WWM', do_lower_case=False)
# PyTorch
model = AutoModel.from_pretrained('jcblaise/bert-tagalog-base-cased-WWM')
tokenizer = AutoTokenizer.from_pretrained('jcblaise/bert-tagalog-base-cased-WWM', do_lower_case=False)
```
Finetuning scripts and other utilities we use for our projects can be found in our centralized repository at https://github.com/jcblaisecruz02/Filipino-Text-Benchmarks
## Citations
All model details and training setups can be found in our papers. If you use our model or find it useful in your projects, please cite our work:
```
@inproceedings{localization2020cruz,
title={{Localization of Fake News Detection via Multitask Transfer Learning}},
author={Cruz, Jan Christian Blaise and Tan, Julianne Agatha and Cheng, Charibeth},
booktitle={Proceedings of The 12th Language Resources and Evaluation Conference},
pages={2589--2597},
year={2020},
url={https://www.aclweb.org/anthology/2020.lrec-1.315}
}
@article{cruz2020establishing,
title={Establishing Baselines for Text Classification in Low-Resource Languages},
author={Cruz, Jan Christian Blaise and Cheng, Charibeth},
journal={arXiv preprint arXiv:2005.02068},
year={2020}
}
@article{cruz2019evaluating,
title={Evaluating Language Model Finetuning Techniques for Low-resource Languages},
author={Cruz, Jan Christian Blaise and Cheng, Charibeth},
journal={arXiv preprint arXiv:1907.00409},
year={2019}
}
```
## Data and Other Resources
Data used to train this model as well as other benchmark datasets in Filipino can be found in my website at https://blaisecruz.com
## Contact
If you have questions, concerns, or if you just want to chat about NLP and low-resource languages in general, you may reach me through my work email at [email protected]
|
jcblaise/bert-tagalog-base-cased | 2021-05-19T20:40:23.000Z | [
"pytorch",
"jax",
"bert",
"masked-lm",
"tl",
"transformers",
"tagalog",
"filipino",
"license:gpl-3.0",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"README.md",
"bert_model.ckpt.data-00000-of-00001",
"bert_model.ckpt.index",
"bert_model.ckpt.meta",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tokenizer_config.json",
"vocab.txt"
]
| jcblaise | 97 | transformers | ---
language: tl
tags:
- bert
- tagalog
- filipino
license: gpl-3.0
inference: false
---
# BERT Tagalog Base Cased
Tagalog version of BERT trained on a large preprocessed text corpus scraped and sourced from the internet. This model is part of a larger research project. We open-source the model to allow greater usage within the Filipino NLP community.
## Usage
The model can be loaded and used in both PyTorch and TensorFlow through the HuggingFace Transformers package.
```python
from transformers import TFAutoModel, AutoModel, AutoTokenizer
# TensorFlow
model = TFAutoModel.from_pretrained('jcblaise/bert-tagalog-base-cased', from_pt=True)
tokenizer = AutoTokenizer.from_pretrained('jcblaise/bert-tagalog-base-cased', do_lower_case=False)
# PyTorch
model = AutoModel.from_pretrained('jcblaise/bert-tagalog-base-cased')
tokenizer = AutoTokenizer.from_pretrained('jcblaise/bert-tagalog-base-cased', do_lower_case=False)
```
Finetuning scripts and other utilities we use for our projects can be found in our centralized repository at https://github.com/jcblaisecruz02/Filipino-Text-Benchmarks
## Citations
All model details and training setups can be found in our papers. If you use our model or find it useful in your projects, please cite our work:
```
@inproceedings{localization2020cruz,
title={{Localization of Fake News Detection via Multitask Transfer Learning}},
author={Cruz, Jan Christian Blaise and Tan, Julianne Agatha and Cheng, Charibeth},
booktitle={Proceedings of The 12th Language Resources and Evaluation Conference},
pages={2589--2597},
year={2020},
url={https://www.aclweb.org/anthology/2020.lrec-1.315}
}
@article{cruz2020establishing,
title={Establishing Baselines for Text Classification in Low-Resource Languages},
author={Cruz, Jan Christian Blaise and Cheng, Charibeth},
journal={arXiv preprint arXiv:2005.02068},
year={2020}
}
@article{cruz2019evaluating,
title={Evaluating Language Model Finetuning Techniques for Low-resource Languages},
author={Cruz, Jan Christian Blaise and Cheng, Charibeth},
journal={arXiv preprint arXiv:1907.00409},
year={2019}
}
```
## Data and Other Resources
Data used to train this model as well as other benchmark datasets in Filipino can be found in my website at https://blaisecruz.com
## Contact
If you have questions, concerns, or if you just want to chat about NLP and low-resource languages in general, you may reach me through my work email at [email protected]
|
jcblaise/bert-tagalog-base-uncased-WWM | 2021-05-19T20:44:17.000Z | [
"pytorch",
"jax",
"bert",
"masked-lm",
"tl",
"transformers",
"tagalog",
"filipino",
"license:gpl-3.0",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"README.md",
"bert_model.ckpt.data-00000-of-00001",
"bert_model.ckpt.index",
"bert_model.ckpt.meta",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tokenizer_config.json",
"vocab.txt"
]
| jcblaise | 37 | transformers | ---
language: tl
tags:
- bert
- tagalog
- filipino
license: gpl-3.0
inference: false
---
# BERT Tagalog Base Uncased (Whole Word Masking)
Tagalog version of BERT trained on a large preprocessed text corpus scraped and sourced from the internet. This model is part of a larger research project. We open-source the model to allow greater usage within the Filipino NLP community. This particular version uses whole word masking.
## Usage
The model can be loaded and used in both PyTorch and TensorFlow through the HuggingFace Transformers package.
```python
from transformers import TFAutoModel, AutoModel, AutoTokenizer
# TensorFlow
model = TFAutoModel.from_pretrained('jcblaise/bert-tagalog-base-uncased-WWM', from_pt=True)
tokenizer = AutoTokenizer.from_pretrained('jcblaise/bert-tagalog-base-uncased-WWM', do_lower_case=True)
# PyTorch
model = AutoModel.from_pretrained('jcblaise/bert-tagalog-base-uncased-WWM')
tokenizer = AutoTokenizer.from_pretrained('jcblaise/bert-tagalog-base-uncased-WWM', do_lower_case=True)
```
Finetuning scripts and other utilities we use for our projects can be found in our centralized repository at https://github.com/jcblaisecruz02/Filipino-Text-Benchmarks
## Citations
All model details and training setups can be found in our papers. If you use our model or find it useful in your projects, please cite our work:
```
@inproceedings{localization2020cruz,
title={{Localization of Fake News Detection via Multitask Transfer Learning}},
author={Cruz, Jan Christian Blaise and Tan, Julianne Agatha and Cheng, Charibeth},
booktitle={Proceedings of The 12th Language Resources and Evaluation Conference},
pages={2589--2597},
year={2020},
url={https://www.aclweb.org/anthology/2020.lrec-1.315}
}
@article{cruz2020establishing,
title={Establishing Baselines for Text Classification in Low-Resource Languages},
author={Cruz, Jan Christian Blaise and Cheng, Charibeth},
journal={arXiv preprint arXiv:2005.02068},
year={2020}
}
@article{cruz2019evaluating,
title={Evaluating Language Model Finetuning Techniques for Low-resource Languages},
author={Cruz, Jan Christian Blaise and Cheng, Charibeth},
journal={arXiv preprint arXiv:1907.00409},
year={2019}
}
```
## Data and Other Resources
Data used to train this model as well as other benchmark datasets in Filipino can be found in my website at https://blaisecruz.com
## Contact
If you have questions, concerns, or if you just want to chat about NLP and low-resource languages in general, you may reach me through my work email at [email protected]
|
jcblaise/bert-tagalog-base-uncased | 2021-05-19T20:45:20.000Z | [
"pytorch",
"jax",
"bert",
"masked-lm",
"tl",
"transformers",
"tagalog",
"filipino",
"license:gpl-3.0",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"README.md",
"bert_model.ckpt.data-00000-of-00001",
"bert_model.ckpt.index",
"bert_model.ckpt.meta",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tokenizer_config.json",
"vocab.txt"
]
| jcblaise | 111 | transformers | ---
language: tl
tags:
- bert
- tagalog
- filipino
license: gpl-3.0
inference: false
---
# BERT Tagalog Base Uncased
Tagalog version of BERT trained on a large preprocessed text corpus scraped and sourced from the internet. This model is part of a larger research project. We open-source the model to allow greater usage within the Filipino NLP community.
## Usage
The model can be loaded and used in both PyTorch and TensorFlow through the HuggingFace Transformers package.
```python
from transformers import TFAutoModel, AutoModel, AutoTokenizer
# TensorFlow
model = TFAutoModel.from_pretrained('jcblaise/bert-tagalog-base-uncased', from_pt=True)
tokenizer = AutoTokenizer.from_pretrained('jcblaise/bert-tagalog-base-uncased', do_lower_case=True)
# PyTorch
model = AutoModel.from_pretrained('jcblaise/bert-tagalog-base-uncased')
tokenizer = AutoTokenizer.from_pretrained('jcblaise/bert-tagalog-base-uncased', do_lower_case=True)
```
Finetuning scripts and other utilities we use for our projects can be found in our centralized repository at https://github.com/jcblaisecruz02/Filipino-Text-Benchmarks
## Citations
All model details and training setups can be found in our papers. If you use our model or find it useful in your projects, please cite our work:
```
@inproceedings{localization2020cruz,
title={{Localization of Fake News Detection via Multitask Transfer Learning}},
author={Cruz, Jan Christian Blaise and Tan, Julianne Agatha and Cheng, Charibeth},
booktitle={Proceedings of The 12th Language Resources and Evaluation Conference},
pages={2589--2597},
year={2020},
url={https://www.aclweb.org/anthology/2020.lrec-1.315}
}
@article{cruz2020establishing,
title={Establishing Baselines for Text Classification in Low-Resource Languages},
author={Cruz, Jan Christian Blaise and Cheng, Charibeth},
journal={arXiv preprint arXiv:2005.02068},
year={2020}
}
@article{cruz2019evaluating,
title={Evaluating Language Model Finetuning Techniques for Low-resource Languages},
author={Cruz, Jan Christian Blaise and Cheng, Charibeth},
journal={arXiv preprint arXiv:1907.00409},
year={2019}
}
```
## Data and Other Resources
Data used to train this model as well as other benchmark datasets in Filipino can be found in my website at https://blaisecruz.com
## Contact
If you have questions, concerns, or if you just want to chat about NLP and low-resource languages in general, you may reach me through my work email at [email protected]
|
jcblaise/distilbert-tagalog-base-cased | 2021-05-19T20:46:16.000Z | [
"pytorch",
"jax",
"distilbert",
"tl",
"transformers",
"bert",
"tagalog",
"filipino",
"license:gpl-3.0"
]
| [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tokenizer_config.json",
"vocab.txt"
]
| jcblaise | 128 | transformers | ---
language: tl
tags:
- distilbert
- bert
- tagalog
- filipino
license: gpl-3.0
inference: false
---
# DistilBERT Tagalog Base Cased
Tagalog version of DistilBERT, distilled from [`bert-tagalog-base-cased`](https://huggingface.co/jcblaise/bert-tagalog-base-cased). This model is part of a larger research project. We open-source the model to allow greater usage within the Filipino NLP community.
## Usage
The model can be loaded and used in both PyTorch and TensorFlow through the HuggingFace Transformers package.
```python
from transformers import TFAutoModel, AutoModel, AutoTokenizer
# TensorFlow
model = TFAutoModel.from_pretrained('jcblaise/distilbert-tagalog-base-cased', from_pt=True)
tokenizer = AutoTokenizer.from_pretrained('jcblaise/distilbert-tagalog-base-cased', do_lower_case=False)
# PyTorch
model = AutoModel.from_pretrained('jcblaise/distilbert-tagalog-base-cased')
tokenizer = AutoTokenizer.from_pretrained('jcblaise/distilbert-tagalog-base-cased', do_lower_case=False)
```
Finetuning scripts and other utilities we use for our projects can be found in our centralized repository at https://github.com/jcblaisecruz02/Filipino-Text-Benchmarks
## Citations
All model details and training setups can be found in our papers. If you use our model or find it useful in your projects, please cite our work:
```
@inproceedings{localization2020cruz,
title={{Localization of Fake News Detection via Multitask Transfer Learning}},
author={Cruz, Jan Christian Blaise and Tan, Julianne Agatha and Cheng, Charibeth},
booktitle={Proceedings of The 12th Language Resources and Evaluation Conference},
pages={2589--2597},
year={2020},
url={https://www.aclweb.org/anthology/2020.lrec-1.315}
}
@article{cruz2020establishing,
title={Establishing Baselines for Text Classification in Low-Resource Languages},
author={Cruz, Jan Christian Blaise and Cheng, Charibeth},
journal={arXiv preprint arXiv:2005.02068},
year={2020}
}
@article{cruz2019evaluating,
title={Evaluating Language Model Finetuning Techniques for Low-resource Languages},
author={Cruz, Jan Christian Blaise and Cheng, Charibeth},
journal={arXiv preprint arXiv:1907.00409},
year={2019}
}
```
## Data and Other Resources
Data used to train this model as well as other benchmark datasets in Filipino can be found in my website at https://blaisecruz.com
## Contact
If you have questions, concerns, or if you just want to chat about NLP and low-resource languages in general, you may reach me through my work email at [email protected]
|
|
jcblaise/electra-tagalog-base-cased-discriminator | 2020-12-11T21:47:12.000Z | [
"pytorch",
"electra",
"pretraining",
"tl",
"transformers",
"tagalog",
"filipino",
"license:gpl-3.0"
]
| [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"tokenizer_config.json",
"vocab.txt"
]
| jcblaise | 21 | transformers | ---
language: tl
tags:
- electra
- tagalog
- filipino
license: gpl-3.0
inference: false
---
# ELECTRA Tagalog Base Cased Discriminator
Tagalog ELECTRA model pretrained with a large corpus scraped from the internet. This model is part of a larger research project. We open-source the model to allow greater usage within the Filipino NLP community.
This is the discriminator model, which is the main Transformer used for finetuning to downstream tasks. For generation, mask-filling, and retraining, refer to the Generator models.
## Usage
The model can be loaded and used in both PyTorch and TensorFlow through the HuggingFace Transformers package.
```python
from transformers import TFAutoModel, AutoModel, AutoTokenizer
# TensorFlow
model = TFAutoModel.from_pretrained('jcblaise/electra-tagalog-base-cased-discriminator', from_pt=True)
tokenizer = AutoTokenizer.from_pretrained('jcblaise/electra-tagalog-base-cased-discriminator', do_lower_case=False)
# PyTorch
model = AutoModel.from_pretrained('jcblaise/electra-tagalog-base-cased-discriminator')
tokenizer = AutoTokenizer.from_pretrained('jcblaise/electra-tagalog-base-cased-discriminator', do_lower_case=False)
```
Finetuning scripts and other utilities we use for our projects can be found in our centralized repository at https://github.com/jcblaisecruz02/Filipino-Text-Benchmarks
## Citations
All model details and training setups can be found in our papers. If you use our model or find it useful in your projects, please cite our work:
```
@article{cruz2020investigating,
title={Investigating the True Performance of Transformers in Low-Resource Languages: A Case Study in Automatic Corpus Creation},
author={Jan Christian Blaise Cruz and Jose Kristian Resabal and James Lin and Dan John Velasco and Charibeth Cheng},
journal={arXiv preprint arXiv:2010.11574},
year={2020}
}
```
## Data and Other Resources
Data used to train this model as well as other benchmark datasets in Filipino can be found in my website at https://blaisecruz.com
## Contact
If you have questions, concerns, or if you just want to chat about NLP and low-resource languages in general, you may reach me through my work email at [email protected]
|
|
jcblaise/electra-tagalog-base-cased-generator | 2020-12-11T21:47:15.000Z | [
"pytorch",
"electra",
"masked-lm",
"tl",
"transformers",
"tagalog",
"filipino",
"license:gpl-3.0",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"tokenizer_config.json",
"vocab.txt"
]
| jcblaise | 40 | transformers | ---
language: tl
tags:
- electra
- tagalog
- filipino
license: gpl-3.0
inference: false
---
# ELECTRA Tagalog Base Cased Generator
Tagalog ELECTRA model pretrained with a large corpus scraped from the internet. This model is part of a larger research project. We open-source the model to allow greater usage within the Filipino NLP community.
This is the generator model used to sample synthetic text and pretrain the discriminator. Only use this model for retraining and mask-filling. For the actual model for downstream tasks, please refer to the discriminator models.
## Usage
The model can be loaded and used in both PyTorch and TensorFlow through the HuggingFace Transformers package.
```python
from transformers import TFAutoModel, AutoModel, AutoTokenizer
# TensorFlow
model = TFAutoModel.from_pretrained('jcblaise/electra-tagalog-base-cased-generator', from_pt=True)
tokenizer = AutoTokenizer.from_pretrained('jcblaise/electra-tagalog-base-cased-generator', do_lower_case=False)
# PyTorch
model = AutoModel.from_pretrained('jcblaise/electra-tagalog-base-cased-generator')
tokenizer = AutoTokenizer.from_pretrained('jcblaise/electra-tagalog-base-cased-generator', do_lower_case=False)
```
Finetuning scripts and other utilities we use for our projects can be found in our centralized repository at https://github.com/jcblaisecruz02/Filipino-Text-Benchmarks
## Citations
All model details and training setups can be found in our papers. If you use our model or find it useful in your projects, please cite our work:
```
@article{cruz2020investigating,
title={Investigating the True Performance of Transformers in Low-Resource Languages: A Case Study in Automatic Corpus Creation},
author={Jan Christian Blaise Cruz and Jose Kristian Resabal and James Lin and Dan John Velasco and Charibeth Cheng},
journal={arXiv preprint arXiv:2010.11574},
year={2020}
}
```
## Data and Other Resources
Data used to train this model as well as other benchmark datasets in Filipino can be found in my website at https://blaisecruz.com
## Contact
If you have questions, concerns, or if you just want to chat about NLP and low-resource languages in general, you may reach me through my work email at [email protected]
|
jcblaise/electra-tagalog-base-uncased-discriminator | 2020-12-11T21:47:18.000Z | [
"pytorch",
"electra",
"pretraining",
"tl",
"transformers",
"tagalog",
"filipino",
"license:gpl-3.0"
]
| [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"tokenizer_config.json",
"vocab.txt"
]
| jcblaise | 23 | transformers | ---
language: tl
tags:
- electra
- tagalog
- filipino
license: gpl-3.0
inference: false
---
# ELECTRA Tagalog Base Uncased Discriminator
Tagalog ELECTRA model pretrained with a large corpus scraped from the internet. This model is part of a larger research project. We open-source the model to allow greater usage within the Filipino NLP community.
This is the discriminator model, which is the main Transformer used for finetuning to downstream tasks. For generation, mask-filling, and retraining, refer to the Generator models.
## Usage
The model can be loaded and used in both PyTorch and TensorFlow through the HuggingFace Transformers package.
```python
from transformers import TFAutoModel, AutoModel, AutoTokenizer
# TensorFlow
model = TFAutoModel.from_pretrained('jcblaise/electra-tagalog-base-uncased-discriminator', from_pt=True)
tokenizer = AutoTokenizer.from_pretrained('jcblaise/electra-tagalog-base-uncased-discriminator', do_lower_case=False)
# PyTorch
model = AutoModel.from_pretrained('jcblaise/electra-tagalog-base-uncased-discriminator')
tokenizer = AutoTokenizer.from_pretrained('jcblaise/electra-tagalog-base-uncased-discriminator', do_lower_case=False)
```
Finetuning scripts and other utilities we use for our projects can be found in our centralized repository at https://github.com/jcblaisecruz02/Filipino-Text-Benchmarks
## Citations
All model details and training setups can be found in our papers. If you use our model or find it useful in your projects, please cite our work:
```
@article{cruz2020investigating,
title={Investigating the True Performance of Transformers in Low-Resource Languages: A Case Study in Automatic Corpus Creation},
author={Jan Christian Blaise Cruz and Jose Kristian Resabal and James Lin and Dan John Velasco and Charibeth Cheng},
journal={arXiv preprint arXiv:2010.11574},
year={2020}
}
```
## Data and Other Resources
Data used to train this model, as well as other benchmark datasets in Filipino, can be found on my website at https://blaisecruz.com
## Contact
If you have questions, concerns, or if you just want to chat about NLP and low-resource languages in general, you may reach me through my work email at [email protected]
|
|
jcblaise/electra-tagalog-base-uncased-generator | 2020-12-11T21:47:21.000Z | [
"pytorch",
"electra",
"masked-lm",
"tl",
"transformers",
"tagalog",
"filipino",
"license:gpl-3.0",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"tokenizer_config.json",
"vocab.txt"
]
| jcblaise | 38 | transformers | ---
language: tl
tags:
- electra
- tagalog
- filipino
license: gpl-3.0
inference: false
---
# ELECTRA Tagalog Base Uncased Generator
Tagalog ELECTRA model pretrained with a large corpus scraped from the internet. This model is part of a larger research project. We open-source the model to allow greater usage within the Filipino NLP community.
This is the generator model used to sample synthetic text and pretrain the discriminator. Only use this model for retraining and mask-filling. For the actual model for downstream tasks, please refer to the discriminator models.
## Usage
The model can be loaded and used in both PyTorch and TensorFlow through the HuggingFace Transformers package.
```python
from transformers import TFAutoModel, AutoModel, AutoTokenizer
# TensorFlow
model = TFAutoModel.from_pretrained('jcblaise/electra-tagalog-base-uncased-generator', from_pt=True)
tokenizer = AutoTokenizer.from_pretrained('jcblaise/electra-tagalog-base-uncased-generator', do_lower_case=False)
# PyTorch
model = AutoModel.from_pretrained('jcblaise/electra-tagalog-base-uncased-generator')
tokenizer = AutoTokenizer.from_pretrained('jcblaise/electra-tagalog-base-uncased-generator', do_lower_case=False)
```
Finetuning scripts and other utilities we use for our projects can be found in our centralized repository at https://github.com/jcblaisecruz02/Filipino-Text-Benchmarks
## Citations
All model details and training setups can be found in our papers. If you use our model or find it useful in your projects, please cite our work:
```
@article{cruz2020investigating,
title={Investigating the True Performance of Transformers in Low-Resource Languages: A Case Study in Automatic Corpus Creation},
author={Jan Christian Blaise Cruz and Jose Kristian Resabal and James Lin and Dan John Velasco and Charibeth Cheng},
journal={arXiv preprint arXiv:2010.11574},
year={2020}
}
```
## Data and Other Resources
Data used to train this model, as well as other benchmark datasets in Filipino, can be found on my website at https://blaisecruz.com
## Contact
If you have questions, concerns, or if you just want to chat about NLP and low-resource languages in general, you may reach me through my work email at [email protected]
|
jcblaise/electra-tagalog-small-cased-discriminator | 2020-12-11T21:47:25.000Z | [
"pytorch",
"electra",
"pretraining",
"tl",
"transformers",
"tagalog",
"filipino",
"license:gpl-3.0"
]
| [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"tokenizer_config.json",
"vocab.txt"
]
| jcblaise | 21 | transformers | ---
language: tl
tags:
- electra
- tagalog
- filipino
license: gpl-3.0
inference: false
---
# ELECTRA Tagalog Small Cased Discriminator
Tagalog ELECTRA model pretrained with a large corpus scraped from the internet. This model is part of a larger research project. We open-source the model to allow greater usage within the Filipino NLP community.
This is the discriminator model, which is the main Transformer used for finetuning to downstream tasks. For generation, mask-filling, and retraining, refer to the Generator models.
## Usage
The model can be loaded and used in both PyTorch and TensorFlow through the HuggingFace Transformers package.
```python
from transformers import TFAutoModel, AutoModel, AutoTokenizer
# TensorFlow
model = TFAutoModel.from_pretrained('jcblaise/electra-tagalog-small-cased-discriminator', from_pt=True)
tokenizer = AutoTokenizer.from_pretrained('jcblaise/electra-tagalog-small-cased-discriminator', do_lower_case=False)
# PyTorch
model = AutoModel.from_pretrained('jcblaise/electra-tagalog-small-cased-discriminator')
tokenizer = AutoTokenizer.from_pretrained('jcblaise/electra-tagalog-small-cased-discriminator', do_lower_case=False)
```
Finetuning scripts and other utilities we use for our projects can be found in our centralized repository at https://github.com/jcblaisecruz02/Filipino-Text-Benchmarks
## Citations
All model details and training setups can be found in our papers. If you use our model or find it useful in your projects, please cite our work:
```
@article{cruz2020investigating,
title={Investigating the True Performance of Transformers in Low-Resource Languages: A Case Study in Automatic Corpus Creation},
author={Jan Christian Blaise Cruz and Jose Kristian Resabal and James Lin and Dan John Velasco and Charibeth Cheng},
journal={arXiv preprint arXiv:2010.11574},
year={2020}
}
```
## Data and Other Resources
Data used to train this model, as well as other benchmark datasets in Filipino, can be found on my website at https://blaisecruz.com
## Contact
If you have questions, concerns, or if you just want to chat about NLP and low-resource languages in general, you may reach me through my work email at [email protected]
|
|
jcblaise/electra-tagalog-small-cased-generator | 2020-12-11T21:47:28.000Z | [
"pytorch",
"electra",
"masked-lm",
"tl",
"transformers",
"tagalog",
"filipino",
"license:gpl-3.0",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"tokenizer_config.json",
"vocab.txt"
]
| jcblaise | 27 | transformers | ---
language: tl
tags:
- electra
- tagalog
- filipino
license: gpl-3.0
inference: false
---
# ELECTRA Tagalog Small Cased Generator
Tagalog ELECTRA model pretrained with a large corpus scraped from the internet. This model is part of a larger research project. We open-source the model to allow greater usage within the Filipino NLP community.
This is the generator model used to sample synthetic text and pretrain the discriminator. Only use this model for retraining and mask-filling. For the actual model for downstream tasks, please refer to the discriminator models.
## Usage
The model can be loaded and used in both PyTorch and TensorFlow through the HuggingFace Transformers package.
```python
from transformers import TFAutoModel, AutoModel, AutoTokenizer
# TensorFlow
model = TFAutoModel.from_pretrained('jcblaise/electra-tagalog-small-cased-generator', from_pt=True)
tokenizer = AutoTokenizer.from_pretrained('jcblaise/electra-tagalog-small-cased-generator', do_lower_case=False)
# PyTorch
model = AutoModel.from_pretrained('jcblaise/electra-tagalog-small-cased-generator')
tokenizer = AutoTokenizer.from_pretrained('jcblaise/electra-tagalog-small-cased-generator', do_lower_case=False)
```
Finetuning scripts and other utilities we use for our projects can be found in our centralized repository at https://github.com/jcblaisecruz02/Filipino-Text-Benchmarks
## Citations
All model details and training setups can be found in our papers. If you use our model or find it useful in your projects, please cite our work:
```
@article{cruz2020investigating,
title={Investigating the True Performance of Transformers in Low-Resource Languages: A Case Study in Automatic Corpus Creation},
author={Jan Christian Blaise Cruz and Jose Kristian Resabal and James Lin and Dan John Velasco and Charibeth Cheng},
journal={arXiv preprint arXiv:2010.11574},
year={2020}
}
```
## Data and Other Resources
Data used to train this model, as well as other benchmark datasets in Filipino, can be found on my website at https://blaisecruz.com
## Contact
If you have questions, concerns, or if you just want to chat about NLP and low-resource languages in general, you may reach me through my work email at [email protected]
|
jcblaise/electra-tagalog-small-uncased-discriminator-newsphnli | 2020-12-08T10:24:28.000Z | [
"pytorch",
"tf",
"electra",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
]
| jcblaise | 18 | transformers | |
jcblaise/electra-tagalog-small-uncased-discriminator | 2020-12-11T21:47:31.000Z | [
"pytorch",
"electra",
"pretraining",
"tl",
"transformers",
"tagalog",
"filipino",
"license:gpl-3.0"
]
| [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"tokenizer_config.json",
"vocab.txt"
]
| jcblaise | 22 | transformers | ---
language: tl
tags:
- electra
- tagalog
- filipino
license: gpl-3.0
inference: false
---
# ELECTRA Tagalog Small Uncased Discriminator
Tagalog ELECTRA model pretrained with a large corpus scraped from the internet. This model is part of a larger research project. We open-source the model to allow greater usage within the Filipino NLP community.
This is the discriminator model, which is the main Transformer used for finetuning to downstream tasks. For generation, mask-filling, and retraining, refer to the Generator models.
## Usage
The model can be loaded and used in both PyTorch and TensorFlow through the HuggingFace Transformers package.
```python
from transformers import TFAutoModel, AutoModel, AutoTokenizer
# TensorFlow
model = TFAutoModel.from_pretrained('jcblaise/electra-tagalog-small-uncased-discriminator', from_pt=True)
tokenizer = AutoTokenizer.from_pretrained('jcblaise/electra-tagalog-small-uncased-discriminator', do_lower_case=False)
# PyTorch
model = AutoModel.from_pretrained('jcblaise/electra-tagalog-small-uncased-discriminator')
tokenizer = AutoTokenizer.from_pretrained('jcblaise/electra-tagalog-small-uncased-discriminator', do_lower_case=False)
```
Finetuning scripts and other utilities we use for our projects can be found in our centralized repository at https://github.com/jcblaisecruz02/Filipino-Text-Benchmarks
## Citations
All model details and training setups can be found in our papers. If you use our model or find it useful in your projects, please cite our work:
```
@article{cruz2020investigating,
title={Investigating the True Performance of Transformers in Low-Resource Languages: A Case Study in Automatic Corpus Creation},
author={Jan Christian Blaise Cruz and Jose Kristian Resabal and James Lin and Dan John Velasco and Charibeth Cheng},
journal={arXiv preprint arXiv:2010.11574},
year={2020}
}
```
## Data and Other Resources
Data used to train this model, as well as other benchmark datasets in Filipino, can be found on my website at https://blaisecruz.com
## Contact
If you have questions, concerns, or if you just want to chat about NLP and low-resource languages in general, you may reach me through my work email at [email protected]
|
|
jcblaise/electra-tagalog-small-uncased-generator | 2020-12-11T21:47:34.000Z | [
"pytorch",
"electra",
"masked-lm",
"tl",
"transformers",
"tagalog",
"filipino",
"license:gpl-3.0",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"tokenizer_config.json",
"vocab.txt"
]
| jcblaise | 21 | transformers | ---
language: tl
tags:
- electra
- tagalog
- filipino
license: gpl-3.0
inference: false
---
# ELECTRA Tagalog Small Uncased Generator
Tagalog ELECTRA model pretrained with a large corpus scraped from the internet. This model is part of a larger research project. We open-source the model to allow greater usage within the Filipino NLP community.
This is the generator model used to sample synthetic text and pretrain the discriminator. Only use this model for retraining and mask-filling. For the actual model for downstream tasks, please refer to the discriminator models.
## Usage
The model can be loaded and used in both PyTorch and TensorFlow through the HuggingFace Transformers package.
```python
from transformers import TFAutoModel, AutoModel, AutoTokenizer
# TensorFlow
model = TFAutoModel.from_pretrained('jcblaise/electra-tagalog-small-uncased-generator', from_pt=True)
tokenizer = AutoTokenizer.from_pretrained('jcblaise/electra-tagalog-small-uncased-generator', do_lower_case=False)
# PyTorch
model = AutoModel.from_pretrained('jcblaise/electra-tagalog-small-uncased-generator')
tokenizer = AutoTokenizer.from_pretrained('jcblaise/electra-tagalog-small-uncased-generator', do_lower_case=False)
```
Finetuning scripts and other utilities we use for our projects can be found in our centralized repository at https://github.com/jcblaisecruz02/Filipino-Text-Benchmarks
## Citations
All model details and training setups can be found in our papers. If you use our model or find it useful in your projects, please cite our work:
```
@article{cruz2020investigating,
title={Investigating the True Performance of Transformers in Low-Resource Languages: A Case Study in Automatic Corpus Creation},
author={Jan Christian Blaise Cruz and Jose Kristian Resabal and James Lin and Dan John Velasco and Charibeth Cheng},
journal={arXiv preprint arXiv:2010.11574},
year={2020}
}
```
## Data and Other Resources
Data used to train this model, as well as other benchmark datasets in Filipino, can be found on my website at https://blaisecruz.com
## Contact
If you have questions, concerns, or if you just want to chat about NLP and low-resource languages in general, you may reach me through my work email at [email protected]
|
jcblaise/gpt2-tagalog | 2021-05-23T05:44:21.000Z | [
"pytorch",
"tf",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"tl",
"transformers",
"tagalog",
"filipino",
"license:gpl-3.0",
"text-generation"
]
| text-generation | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.json"
]
| jcblaise | 12 | transformers | ---
language: tl
tags:
- gpt2
- tagalog
- filipino
license: gpl-3.0
inference: false
---
# GPT-2 Tagalog
This is a prototype GPT-2 model of the smallest variant, trained using a combination of WikiText-TL-39 and the NewsPH-Raw datasets. The checkpoint provided can be used for text generation as-is, but should be finetuned for more specific tasks or generation topics.
## Usage
Weights are provided in both PyTorch and TensorFlow and can be used with ease via the HuggingFace Transformers library:
```python
from transformers import GPT2Tokenizer, GPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('jcblaise/gpt2-tagalog')
model = GPT2Model.from_pretrained('jcblaise/gpt2-tagalog')
s = "Palitan ito ng iyong nais na pangungusap."
s_in = tokenizer(s, return_tensors='pt')
out = model(**s_in)
```
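Since the checkpoint is meant for text generation, a language-modeling head is needed. A minimal sketch using `GPT2LMHeadModel` (the prompt reuses the sentence above; the sampling settings are illustrative only):
```python
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained('jcblaise/gpt2-tagalog')
model = GPT2LMHeadModel.from_pretrained('jcblaise/gpt2-tagalog')

prompt = "Palitan ito ng iyong nais na pangungusap."
inputs = tokenizer(prompt, return_tensors='pt')

# Sample a short continuation; GPT-2 has no pad token, so reuse EOS for padding
outputs = model.generate(
    **inputs,
    max_length=60,
    do_sample=True,
    top_k=50,
    top_p=0.95,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```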
## Limitations and Bias
The model was trained with two language modeling datasets for Tagalog:
* **WikiText-TL-39**, which is sourced from a dump of Tagalog Wikipedia.
* **NewsPH**, which is a dump of news articles from all available mainstream news outlets in the Philippines.
Due to the source of the training data, generated sentences out-of-the-box may sound and read like actual news articles, possessing the common tone and style of these works. While they may *look* like news articles, they are *not* news articles, and should not be read, understood, published, or shared as such. Language models do not inherently distinguish factual statements from non-factual ones, and as such, we discourage use of the model in systems and use cases where the generated output is required to be true.
As this model is currently a prototype, bias was not thoroughly studied. Models inherit biases that are present in the data they are trained with. Things such as the frequency with which a gender is associated with an occupation can induce biases in the model that will remain undetected unless thoroughly tested. As with the original GPT-2 model, we recommend that this model not be deployed or used in systems that interact with humans unless a thorough study of potential biases is carried out.
We release this model with the intent that it may aid in the advancement of Filipino NLP, and that researchers and engineers who are interested in applying their work to the language may have a baseline model to use. For future work, in addition to the study of inherent bias, we mainly look into improving the quality of our models. As this is a prototype, a large-scale corpus was not used to train it. We plan to train larger GPT-2 models with larger corpora in the future.
## Citations
This model is part of a much larger work-in-progress, and as such, does not have a citeable paper at the moment. We will update this repository once a paper has been released.
For the datasets used to train the model, please cite the following papers:
```bibtex
@article{cruz2020investigating,
title={Investigating the True Performance of Transformers in Low-Resource Languages: A Case Study in Automatic Corpus Creation},
author={Jan Christian Blaise Cruz and Jose Kristian Resabal and James Lin and Dan John Velasco and Charibeth Cheng},
journal={arXiv preprint arXiv:2010.11574},
year={2020}
}
@article{cruz2019evaluating,
title={Evaluating Language Model Finetuning Techniques for Low-resource Languages},
author={Cruz, Jan Christian Blaise and Cheng, Charibeth},
journal={arXiv preprint arXiv:1907.00409},
year={2019}
}
```
## Data and Other Resources
Data used to train this model, as well as other benchmark datasets in Filipino, can be found on my website at https://blaisecruz.com
## Contact
If you have questions, concerns, or if you just want to chat about NLP and low-resource languages in general, you may reach me through my work email at [email protected] |
jcblaise/roberta-tagalog-base | 2021-02-18T06:13:56.000Z | []
| [
".gitattributes",
"README.md"
]
| jcblaise | 0 | RoBERTa Tagalog Base
|
||
jcblaise/roberta-tagalog-small | 2021-05-20T17:12:24.000Z | [
"pytorch",
"tf",
"jax",
"roberta",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"tf_model.h5",
"vocab.json"
]
| jcblaise | 12 | transformers | |
jcoelho/robertapp | 2021-06-06T14:09:15.000Z | []
| [
".gitattributes"
]
| jcoelho | 0 | |||
jcpwfloi/gpt2-story-generation | 2021-05-23T05:48:11.000Z | [
"pytorch",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"added_tokens.json",
"config.json",
"eval_results.txt",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
]
| jcpwfloi | 19 | transformers | |
jeew/xlm-roberta-ckpt-95000 | 2020-07-14T06:50:56.000Z | [
"pytorch",
"xlm-roberta",
"question-answering",
"transformers"
]
| question-answering | [
".gitattributes",
".zip",
"config.json",
"optimizer.pt",
"pytorch_model.bin",
"scheduler.pt",
"sentencepiece.bpe.model",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin"
]
| jeew | 18 | transformers | |
jeniya/BERTOverflow | 2021-05-19T20:47:17.000Z | [
"pytorch",
"jax",
"bert",
"transformers"
]
| [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| jeniya | 556 | transformers |
# BERTOverflow
## Model description
We pre-trained a BERT-base model on 152 million sentences from StackOverflow's 10-year archive. More details of this model can be found in our ACL 2020 paper: [Code and Named Entity Recognition in StackOverflow](https://www.aclweb.org/anthology/2020.acl-main.443/). We would like to thank [Wuwei Lan](https://lanwuwei.github.io/) for helping us train this model.
#### How to use
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
import torch
tokenizer = AutoTokenizer.from_pretrained("jeniya/BERTOverflow")
model = AutoModelForTokenClassification.from_pretrained("jeniya/BERTOverflow")
```
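The card does not say whether this checkpoint ships a fine-tuned tagging head, so the labels printed below may be untrained placeholders until you fine-tune; still, a minimal token-classification sketch (continuing from the snippet above, with an illustrative sentence) looks like this:
```python
import torch

# Illustrative sentence; sub-tokens are mapped to labels from the model config
sentence = "How do I sort a list in Python using the sorted() function?"
inputs = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_ids = logits.argmax(dim=-1)[0]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, pred_id in zip(tokens, predicted_ids):
    print(token, model.config.id2label[pred_id.item()])
```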
### BibTeX entry and citation info
```bibtex
@inproceedings{tabassum2020code,
title={Code and Named Entity Recognition in StackOverflow},
author={Tabassum, Jeniya and Maddela, Mounica and Xu, Wei and Ritter, Alan },
booktitle = {Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL)},
url={https://www.aclweb.org/anthology/2020.acl-main.443/},
year = {2020},
}
``` |
|
jeniya/BERTOverflow_stackoverflow_github | 2021-05-19T20:48:44.000Z | [
"pytorch",
"jax",
"bert",
"transformers"
]
| [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| jeniya | 46 | transformers |
# BERTOverflow
## Model description
We pre-trained a BERT-base model on 152 million sentences from StackOverflow's 10-year archive. More details of this model can be found in our ACL 2020 paper: [Code and Named Entity Recognition in StackOverflow](https://www.aclweb.org/anthology/2020.acl-main.443/). We would like to thank [Wuwei Lan](https://lanwuwei.github.io/) for helping us train this model.
#### How to use
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
import torch
tokenizer = AutoTokenizer.from_pretrained("jeniya/BERTOverflow")
model = AutoModelForTokenClassification.from_pretrained("jeniya/BERTOverflow")
```
### BibTeX entry and citation info
```bibtex
@inproceedings{tabassum2020code,
title={Code and Named Entity Recognition in StackOverflow},
author={Tabassum, Jeniya and Maddela, Mounica and Xu, Wei and Ritter, Alan },
booktitle = {Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL)},
url={https://www.aclweb.org/anthology/2020.acl-main.443/},
year = {2020},
}
``` |
|
jeremy-vianai/test | 2021-04-24T00:53:44.000Z | []
| [
".gitattributes"
]
| jeremy-vianai | 0 | |||
jeyco89/JustTalking | 2021-03-05T03:27:31.000Z | []
| [
".gitattributes"
]
| jeyco89 | 0 | |||
ji-xin/bert_base-MNLI-two_stage | 2020-07-08T14:51:18.000Z | [
"pytorch",
"transformers"
]
| [
".gitattributes",
"added_tokens.json",
"config.json",
"eval_results.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| ji-xin | 9 | transformers | ||
ji-xin/bert_base-MRPC-two_stage | 2020-07-07T20:05:34.000Z | [
"pytorch",
"transformers"
]
| [
".gitattributes",
"added_tokens.json",
"config.json",
"eval_results.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| ji-xin | 13 | transformers | ||
ji-xin/bert_base-QNLI-two_stage | 2020-07-08T14:53:19.000Z | [
"pytorch",
"transformers"
]
| [
".gitattributes",
"added_tokens.json",
"config.json",
"eval_results.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| ji-xin | 17 | transformers | ||
ji-xin/bert_base-QQP-two_stage | 2020-07-08T14:53:42.000Z | [
"pytorch",
"transformers"
]
| [
".gitattributes",
"added_tokens.json",
"config.json",
"eval_results.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| ji-xin | 16 | transformers | ||
ji-xin/bert_base-RTE-two_stage | 2020-07-08T14:54:15.000Z | [
"pytorch",
"transformers"
]
| [
".gitattributes",
"added_tokens.json",
"config.json",
"eval_results.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| ji-xin | 14 | transformers | ||
ji-xin/bert_base-SST2-two_stage | 2020-07-08T14:54:44.000Z | [
"pytorch",
"transformers"
]
| [
".gitattributes",
"added_tokens.json",
"config.json",
"eval_results.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| ji-xin | 14 | transformers | ||
ji-xin/bert_large-MRPC-two_stage | 2020-07-08T15:02:27.000Z | [
"pytorch",
"transformers"
]
| [
".gitattributes",
"added_tokens.json",
"config.json",
"eval_results.txt",
"layer_example_counter",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| ji-xin | 17 | transformers | ||
ji-xin/bert_large-SST2-two_stage | 2020-07-08T15:00:26.000Z | [
"pytorch",
"transformers"
]
| [
".gitattributes",
"added_tokens.json",
"config.json",
"eval_results.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| ji-xin | 10 | transformers | ||
ji-xin/roberta_base-MNLI-two_stage | 2020-07-08T15:05:22.000Z | [
"pytorch",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"added_tokens.json",
"config.json",
"eval_results.txt",
"layer_example_counter",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
]
| ji-xin | 34 | transformers | |
ji-xin/roberta_base-MRPC-two_stage | 2021-05-20T17:13:04.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"config.json",
"eval_results.txt",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
]
| ji-xin | 27 | transformers | |
ji-xin/roberta_base-QNLI-two_stage | 2020-07-08T15:06:38.000Z | [
"pytorch",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"added_tokens.json",
"config.json",
"eval_results.txt",
"layer_example_counter",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
]
| ji-xin | 10 | transformers | |
ji-xin/roberta_base-QQP-two_stage | 2020-07-08T15:07:16.000Z | [
"pytorch",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"added_tokens.json",
"config.json",
"eval_results.txt",
"layer_example_counter",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
]
| ji-xin | 17 | transformers | |
ji-xin/roberta_base-RTE-two_stage | 2020-07-08T15:08:42.000Z | [
"pytorch",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"added_tokens.json",
"config.json",
"eval_results.txt",
"layer_example_counter",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
]
| ji-xin | 21 | transformers | |
ji-xin/roberta_base-SST2-two_stage | 2020-07-08T15:09:27.000Z | [
"pytorch",
"transformers"
]
| [
".gitattributes",
"added_tokens.json",
"config.json",
"eval_results.txt",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
]
| ji-xin | 17 | transformers | ||
ji-xin/roberta_large-MRPC-two_stage | 2020-07-08T15:03:50.000Z | [
"pytorch",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"added_tokens.json",
"config.json",
"eval_results.txt",
"layer_example_counter",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
]
| ji-xin | 14 | transformers | |
ji-xin/roberta_large-SST2-two_stage | 2020-07-07T20:25:04.000Z | [
"pytorch",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"added_tokens.json",
"config.json",
"eval_results.txt",
"layer_example_counter",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
]
| ji-xin | 9 | transformers | |
jieun/senti_model | 2021-04-05T01:50:50.000Z | [
"tf"
]
| [
".gitattributes",
"config.json",
"tf_model.h5",
"tokenizer_78b3253a26.model",
"tokenizer_config.json",
"vocab.txt"
]
| jieun | 5 | |||
jieun/tempBERT | 2021-03-15T09:53:28.000Z | [
"pytorch",
"tf"
]
| [
".gitattributes",
"config.json",
"pytorch_model.bin",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
]
| jieun | 6 | |||
jieun/tfbertforsc | 2021-03-26T03:18:33.000Z | [
"tf"
]
| [
".gitattributes",
"config.json",
"tf_model.h5",
"tokenizer_78b3253a26.model",
"tokenizer_config.json",
"vocab.txt"
]
| jieun | 4 | |||
jieun/tmpBERT_v2 | 2021-03-25T01:53:56.000Z | [
"tf"
]
| [
".gitattributes",
"config.json",
"tf_model.h5",
"tokenizer_78b3253a26.model",
"tokenizer_config.json",
"vocab.txt"
]
| jieun | 6 | |||
jihopark/GPT2-Article-Large2 | 2021-05-23T05:51:18.000Z | [
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"a.taraa",
"a.tarab",
"a.tarac",
"a.tarad",
"a.tarae",
"a.taraf",
"a.tarag",
"a.tarah",
"a.tarai",
"a.taraj",
"a.tarak",
"a.taral",
"a.taram",
"a.taran",
"a.tarao",
"a.tarap",
"a.taraq",
"a.tarar",
"a.taras",
"a.tarat",
"a.tarau",
"a.tarav",
"a.taraw",
"a.tarax",
"a.taray",
"a.taraz",
"a.tarba",
"a.tarbb",
"a.tarbc",
"a.tarbd",
"a.tarbe",
"a.tarbf",
"a.tarbg",
"a.tarbh",
"a.tarbi",
"a.tarbj",
"a.tarbk",
"a.tarbl",
"a.tarbm",
"a.tarbn",
"a.tarbo",
"a.tarbp",
"a.tarbq",
"a.tarbr",
"a.tarbs",
"a.tarbt",
"a.tarbu",
"a.tarbv",
"a.tarbw",
"a.tarbx",
"a.tarby",
"a.tarbz",
"a.tarca",
"a.tarcb",
"a.tarcc",
"a.tarcd",
"a.tarce",
"a.tarcf",
"a.tarcg",
"a.tarch",
"config.json",
"merges.txt",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
]
| jihopark | 8 | transformers | |
jihopark/KoCulture-Large | 2021-05-23T05:52:01.000Z | [
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"a.taraa",
"a.tarab",
"a.tarac",
"a.tarad",
"a.tarae",
"a.taraf",
"a.tarag",
"a.tarah",
"a.tarai",
"a.taraj",
"a.tarak",
"a.taral",
"a.taram",
"a.taran",
"a.tarao",
"a.tarap",
"a.taraq",
"a.tarar",
"a.taras",
"a.tarat",
"a.tarau",
"a.tarav",
"a.taraw",
"a.tarax",
"a.taray",
"a.taraz",
"a.tarba",
"a.tarbb",
"a.tarbc",
"a.tarbd",
"a.tarbe",
"a.tarbf",
"a.tarbg",
"a.tarbh",
"a.tarbi",
"a.tarbj",
"a.tarbk",
"a.tarbl",
"a.tarbm",
"a.tarbn",
"a.tarbo",
"a.tarbp",
"a.tarbq",
"a.tarbr",
"a.tarbs",
"a.tarbt",
"a.tarbu",
"a.tarbv",
"a.tarbw",
"a.tarbx",
"a.tarby",
"a.tarbz",
"a.tarca",
"a.tarcb",
"a.tarcc",
"a.tarcd",
"a.tarce",
"a.tarcf",
"a.tarcg",
"a.tarch",
"config.json",
"merges.txt",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
]
| jihopark | 7 | transformers | |
jihopark/article_large | 2021-05-23T05:52:44.000Z | [
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"a.taraa",
"a.tarab",
"a.tarac",
"a.tarad",
"a.tarae",
"a.taraf",
"a.tarag",
"a.tarah",
"a.tarai",
"a.taraj",
"a.tarak",
"a.taral",
"a.taram",
"a.taran",
"a.tarao",
"a.tarap",
"a.taraq",
"a.tarar",
"a.taras",
"a.tarat",
"a.tarau",
"a.tarav",
"a.taraw",
"a.tarax",
"a.taray",
"a.taraz",
"a.tarba",
"a.tarbb",
"a.tarbc",
"a.tarbd",
"a.tarbe",
"a.tarbf",
"a.tarbg",
"a.tarbh",
"a.tarbi",
"a.tarbj",
"a.tarbk",
"a.tarbl",
"a.tarbm",
"a.tarbn",
"a.tarbo",
"a.tarbp",
"a.tarbq",
"a.tarbr",
"a.tarbs",
"a.tarbt",
"a.tarbu",
"a.tarbv",
"a.tarbw",
"a.tarbx",
"a.tarby",
"a.tarbz",
"a.tarca",
"a.tarcb",
"a.tarcc",
"a.tarcd",
"a.tarce",
"a.tarcf",
"a.tarcg",
"a.tarch",
"config.json",
"merges.txt",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
]
| jihopark | 8 | transformers | |
jihopark/colloquial-large | 2021-05-23T05:53:27.000Z | [
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"a.taraa",
"a.tarab",
"a.tarac",
"a.tarad",
"a.tarae",
"a.taraf",
"a.tarag",
"a.tarah",
"a.tarai",
"a.taraj",
"a.tarak",
"a.taral",
"a.taram",
"a.taran",
"a.tarao",
"a.tarap",
"a.taraq",
"a.tarar",
"a.taras",
"a.tarat",
"a.tarau",
"a.tarav",
"a.taraw",
"a.tarax",
"a.taray",
"a.taraz",
"a.tarba",
"a.tarbb",
"a.tarbc",
"a.tarbd",
"a.tarbe",
"a.tarbf",
"a.tarbg",
"a.tarbh",
"a.tarbi",
"a.tarbj",
"a.tarbk",
"a.tarbl",
"a.tarbm",
"a.tarbn",
"a.tarbo",
"a.tarbp",
"a.tarbq",
"a.tarbr",
"a.tarbs",
"a.tarbt",
"a.tarbu",
"a.tarbv",
"a.tarbw",
"a.tarbx",
"a.tarby",
"a.tarbz",
"a.tarca",
"a.tarcb",
"a.tarcc",
"a.tarcd",
"a.tarce",
"a.tarcf",
"a.tarcg",
"a.tarch",
"config.json",
"merges.txt",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
]
| jihopark | 7 | transformers | |
jihopark/colloquial | 2021-05-23T05:54:19.000Z | [
"pytorch",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
]
| jihopark | 61 | transformers | |
jihopark/colloquialV2 | 2021-05-23T05:55:26.000Z | [
"pytorch",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
]
| jihopark | 7 | transformers | |
jihopark/dialog | 2021-01-26T08:11:50.000Z | []
| [
".gitattributes"
]
| jihopark | 0 | |||
jihopark/wiki_large | 2021-05-23T05:56:40.000Z | [
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"a.taraa",
"a.tarab",
"a.tarac",
"a.tarad",
"a.tarae",
"a.taraf",
"a.tarag",
"a.tarah",
"a.tarai",
"a.taraj",
"a.tarak",
"a.taral",
"a.taram",
"a.taran",
"a.tarao",
"a.tarap",
"a.taraq",
"a.tarar",
"a.taras",
"a.tarat",
"a.tarau",
"a.tarav",
"a.taraw",
"a.tarax",
"a.taray",
"a.taraz",
"a.tarba",
"a.tarbb",
"a.tarbc",
"a.tarbd",
"a.tarbe",
"a.tarbf",
"a.tarbg",
"a.tarbh",
"a.tarbi",
"a.tarbj",
"a.tarbk",
"a.tarbl",
"a.tarbm",
"a.tarbn",
"a.tarbo",
"a.tarbp",
"a.tarbq",
"a.tarbr",
"a.tarbs",
"a.tarbt",
"a.tarbu",
"a.tarbv",
"a.tarbw",
"a.tarbx",
"a.tarby",
"a.tarbz",
"a.tarca",
"a.tarcb",
"a.tarcc",
"a.tarcd",
"a.tarce",
"a.tarcf",
"a.tarcg",
"a.tarch",
"config.json",
"merges.txt",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
]
| jihopark | 7 | transformers | |
jikehukai/test_learn | 2021-03-13T08:24:36.000Z | []
| [
".gitattributes",
"README.md"
]
| jikehukai | 0 | |||
jimregan/BERTreach | 2021-05-20T17:13:40.000Z | [
"pytorch",
"jax",
"roberta",
"masked-lm",
"ga",
"transformers",
"irish",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"training_args.bin",
"vocab.json"
]
| jimregan | 14 | transformers | ---
language: ga
tags:
- irish
---
## BERTreach
([beirtreach](https://www.teanglann.ie/en/fgb/beirtreach) means 'oyster bed')
**Model size:** 84M
**Training data:**
* [PARSEME 1.2](https://gitlab.com/parseme/parseme_corpus_ga/-/blob/master/README.md)
* Newscrawl 300k portion of the [Leipzig Corpora](https://wortschatz.uni-leipzig.de/en/download/irish)
* Private news corpus crawled with [Corpus Crawler](https://github.com/google/corpuscrawler)
(2,125,804 sentences, 47,419,062 tokens, as reckoned by wc)
```
from transformers import pipeline
fill_mask = pipeline("fill-mask", model="jimregan/BERTreach", tokenizer="jimregan/BERTreach")
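# Illustrative call; the masked Irish sentence is an assumption, not from the card
print(fill_mask("Tá an aimsir go <mask> inniu."))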
```
|
jimregan/wav2vec2-large-xlsr-irish-basic | 2021-03-27T08:26:49.000Z | [
"pytorch",
"wav2vec2",
"ga",
"dataset:common_voice",
"transformers",
"audio",
"automatic-speech-recognition",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2"
]
| automatic-speech-recognition | [
".gitattributes",
"README.md",
"config.json",
"preprocessor_config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
]
| jimregan | 35 | transformers | ---
language: ga
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2
model-index:
- name: XLSR Wav2Vec2 Irish by Jim O'Regan
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice ga-IE
type: common_voice
args: ga-IE
metrics:
- name: Test WER
type: wer
value: 47.4
---
# Wav2Vec2-Large-XLSR-Irish
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53)
on the [Irish Common Voice dataset](https://huggingface.co/datasets/common_voice).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "ga-IE", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("jimregan/wav2vec2-large-xlsr-irish-basic")
model = Wav2Vec2ForCTC.from_pretrained("jimregan/wav2vec2-large-xlsr-irish-basic")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Irish test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "ga-IE", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("jimregan/wav2vec2-large-xlsr-irish-basic")
model = Wav2Vec2ForCTC.from_pretrained("jimregan/wav2vec2-large-xlsr-irish-basic")
model.to("cuda")
# So, tolower() for Irish is a bit complicated: tAthair -> t-athair
# toupper() is non-deterministic :)
def is_upper_vowel(letter):
if letter in ['A', 'E', 'I', 'O', 'U', 'Á', 'É', 'Í', 'Ó', 'Ú']:
return True
else:
return False
def irish_lower(word):
if len(word) > 1 and word[0] in ['n', 't'] and is_upper_vowel(word[1]):
return word[0] + '-' + word[1:].lower()
else:
return word.lower()
def irish_lower_sentence(sentence):
return " ".join([irish_lower(w) for w in sentence.split(" ")])
chars_to_ignore_regex = '[,\?\.\!\;\:\"\“\%\‘\”\(\)\*]'
def remove_special_characters(sentence):
tmp = re.sub('’ ', ' ', sentence)
tmp = re.sub("’$", '', tmp)
tmp = re.sub('’', '\'', tmp)
tmp = re.sub(chars_to_ignore_regex, '', tmp)
sentence = irish_lower_sentence(tmp) + ' '
return sentence
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = remove_special_characters(batch["sentence"])
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 43.7 %
## Training
The Common Voice `train` and `validation` datasets were used for training.
The script used for training can be found [here](https://github.com/jimregan/wav2vec2-sprint/blob/main/irish/fine-tune-xlsr-wav2vec2-on-irish-asr-with-transformers.ipynb)
|
jimregan/wav2vec2-large-xlsr-latvian-cv | 2021-03-22T10:35:47.000Z | [
"pytorch",
"wav2vec2",
"lv",
"dataset:common_voice",
"transformers",
"audio",
"automatic-speech-recognition",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2"
]
| automatic-speech-recognition | [
".gitattributes",
"README.md",
"config.json",
"preprocessor_config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
]
| jimregan | 10 | transformers | ---
language: lv
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2
model-index:
- name: jimregan/wav2vec2-large-xlsr-latvian-cv
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice lv
type: common_voice
args: lv
metrics:
- name: Test WER
type: wer
value: 29.95
---
# Wav2Vec2-Large-XLSR-Latvian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53)
on the [Latvian Common Voice dataset](https://huggingface.co/datasets/common_voice).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "lv", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("jimregan/wav2vec2-large-xlsr-latvian-cv")
model = Wav2Vec2ForCTC.from_pretrained("jimregan/wav2vec2-large-xlsr-latvian-cv")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Latvian test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "lv", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("jimregan/wav2vec2-large-xlsr-latvian-cv")
model = Wav2Vec2ForCTC.from_pretrained("jimregan/wav2vec2-large-xlsr-latvian-cv")
model.to("cuda")
chars_to_ignore_regex = '[,\?\.\!\;\:\"\“\%\‘\”\(\)\*\…\—\–\']'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 29.95 %
|
jimregan/wav2vec2-large-xlsr-slovakian | 2021-03-31T22:11:08.000Z | [
"pytorch",
"wav2vec2",
"transformers"
]
| [
".gitattributes",
"config.json",
"preprocessor_config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
]
| jimregan | 14 | transformers | ||
jimregan/wav2vec2-large-xlsr-upper-sorbian-mixed | 2021-03-29T18:12:57.000Z | [
"pytorch",
"wav2vec2",
"hsb",
"dataset:common_voice",
"transformers",
"audio",
"automatic-speech-recognition",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0"
]
| automatic-speech-recognition | [
".gitattributes",
"README.md",
"config.json",
"preprocessor_config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
]
| jimregan | 10 | transformers | ---
language: hsb
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Upper Sorbian mixed by Jim O'Regan
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice hsb
type: common_voice
args: hsb
metrics:
- name: Test WER
type: wer
value: 43.48
---
# Wav2Vec2-Large-XLSR-Upper-Sorbian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53)
on the [Upper Sorbian Common Voice dataset](https://huggingface.co/datasets/common_voice), with an
extra 28 minutes of audio from an online [Sorbian course](https://sprachkurs.sorbischlernen.de/).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "hsb", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("jimregan/wav2vec2-large-xlsr-upper-sorbian-mixed")
model = Wav2Vec2ForCTC.from_pretrained("jimregan/wav2vec2-large-xlsr-upper-sorbian-mixed")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Upper Sorbian test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "hsb", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("jimregan/wav2vec2-large-xlsr-upper-sorbian-mixed")
model = Wav2Vec2ForCTC.from_pretrained("jimregan/wav2vec2-large-xlsr-upper-sorbian-mixed")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�„«»–]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = remove_special_characters(batch["sentence"])
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 48.2 %
## Training
The Common Voice `train` and `validation` datasets were used for training, with the vocabulary from the English A1 lesson from an online [Sorbian course](https://sprachkurs.sorbischlernen.de/)
The script used for training can be found [here](https://github.com/jimregan/wav2vec2-sprint/blob/main/upper_sorbian/fine-tune-xlsr-wav2vec2-on-upper-sorbian-asr-with-transformers.ipynb)
The script used for cleaning the transcripts of the vocabulary data is [here](https://github.com/jimregan/wav2vec2-sprint/blob/main/upper_sorbian/sprachkurs.ipynb) |
jitrrrronic/thehum | 2021-04-10T13:43:07.000Z | []
| [
".gitattributes"
]
| jitrrrronic | 0 | |||
jiwoo/Pinobot01 | 2021-04-15T08:51:57.000Z | []
| [
".gitattributes",
"README.md"
]
| jiwoo | 0 | |||
jiyingz/gpt2-clinicalnotes-mimic-iii | 2021-02-28T15:48:21.000Z | []
| [
".gitattributes"
]
| jiyingz | 0 | |||
jkeruotis/LitBERTa-uncased | 2021-05-20T17:15:42.000Z | [
"pytorch",
"jax",
"roberta",
"masked-lm",
"lt",
"transformers",
"exbert",
"license:mit",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"train_results.txt",
"trainer_state.json",
"training_args.bin",
"vocab.json"
]
| jkeruotis | 38 | transformers | ---
language: lt
tags:
- exbert
license: mit
---
# LitBERTa uncased model
Not the best model because of limited resources (trained on ~4.7 GB of data on an RTX 2070 8 GB for ~10 days), but it covers the special Lithuanian symbols `ąčęėįšųūž`. A 128K vocabulary was chosen because the language has a lot of word forms.
## How to use
```python
from transformers import pipeline
unmasker = pipeline('fill-mask', model='jkeruotis/LitBERTa-uncased')
unmasker('lietuvių kalba yra viena iš <mask> kalbų pasaulyje.')
[{'sequence': 'lietuvių kalba yra viena iš populiariausių kalbų pasaulyje.',
'score': 0.13887910544872284,
'token': 9404,
'token_str': ' populiariausių'},
{'sequence': 'lietuvių kalba yra viena iš pirmaujančių kalbų pasaulyje.',
'score': 0.13532795011997223,
'token': 27431,
'token_str': ' pirmaujančių'},
{'sequence': 'lietuvių kalba yra viena iš seniausių kalbų pasaulyje.',
'score': 0.1184583529829979,
'token': 14775,
'token_str': ' seniausių'},
{'sequence': 'lietuvių kalba yra viena iš geriausių kalbų pasaulyje.',
'score': 0.09306756407022476,
'token': 5617,
'token_str': ' geriausių'},
{'sequence': 'lietuvių kalba yra viena iš nedaugelio kalbų pasaulyje.',
'score': 0.08187634497880936,
'token': 28150,
 'token_str': ' nedaugelio'}]
```
|
jkgrad/longformer-base-stsb | 2021-02-04T07:57:06.000Z | [
"pytorch",
"longformer",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| jkgrad | 42 | transformers | |
jkgrad/spanbert-base-cased-coref | 2021-05-19T20:49:46.000Z | [
"pytorch",
"jax",
"bert",
"transformers"
]
| [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin"
]
| jkgrad | 9 | transformers | ||
jkgrad/xlnet-base-cased-qqp | 2021-02-05T07:32:36.000Z | [
"pytorch",
"xlnet",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json"
]
| jkgrad | 8 | transformers | |
jkgrad/xlnet-base-cased-squad-quoref | 2021-01-28T06:54:08.000Z | [
"pytorch",
"xlnet",
"question-answering",
"arxiv:1906.08237",
"transformers"
]
| question-answering | [
".gitattributes",
"README.md",
"config.json",
"eval_results.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json"
]
| jkgrad | 28 | transformers | # XLNet Fine-tuned on SQuAD / Quoref Dataset
[XLNet](https://arxiv.org/abs/1906.08237) jointly developed by Google and CMU and fine-tuned on [SQuAD / SQuAD 2.0](https://rajpurkar.github.io/SQuAD-explorer/) and [Quoref](https://leaderboard.allenai.org/quoref) for question answering down-stream task.
## Evaluation Result on Quoref
```
{
"exact_match": 73.65591397848462,
"f1": 77.9981532789881
}
```
## Results Comparison on Quoref
| Metric | XLNet Base Line | Model FT on SQuAD |
| ------ | --------- | --------- |
| **EM** | **61.88** | **73.66** (+11.78) |
| **F1** | **70.51** | **78.00** (+7.49)|
## How to Use
```
from transformers import XLNetForQuestionAnswering, XLNetTokenizerFast
model = XLNetForQuestionAnswering.from_pretrained('jkgrad/xlnet-base-cased-squad-quoref')
tokenizer = XLNetTokenizerFast.from_pretrained('jkgrad/xlnet-base-cased-squad-quoref')
``` |
jkgrad/xlnet-base-squadv2 | 2021-01-17T11:52:34.000Z | [
"pytorch",
"xlnet",
"question-answering",
"arxiv:1906.08237",
"transformers"
]
| question-answering | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json",
"training_args.bin"
]
| jkgrad | 70 | transformers | # XLNet Fine-tuned on SQuAD 2.0 Dataset
[XLNet](https://arxiv.org/abs/1906.08237) jointly developed by Google and CMU and fine-tuned on [SQuAD 2.0](https://rajpurkar.github.io/SQuAD-explorer/) for question answering down-stream task.
## Training Results (Metrics)
```
{
"HasAns_exact": 74.7132253711201
"HasAns_f1": 82.11971607032643
"HasAns_total": 5928
"NoAns_exact": 73.38940285954584
"NoAns_f1": 73.38940285954584
"NoAns_total": 5945
"best_exact": 75.67590331003116
"best_exact_thresh": -19.554906845092773
"best_f1": 79.16215426779269
"best_f1_thresh": -19.554906845092773
"epoch": 4.0
"exact": 74.05036637749515
"f1": 77.74830934598614
"total": 11873
}
```
## Results Comparison
| Metric | Paper | Model |
| ------ | --------- | --------- |
| **EM** | **78.46** | **75.68** (-2.78) |
| **F1** | **81.33** | **79.16** (-2.17)|
Better fine-tuned models coming soon.
## How to Use
```
from transformers import XLNetForQuestionAnswering, XLNetTokenizerFast
model = XLNetForQuestionAnswering.from_pretrained('jkgrad/xlnet-base-squadv2')
tokenizer = XLNetTokenizerFast.from_pretrained('jkgrad/xlnet-base-squadv2')
``` |
jkulhanek/augpt-bigdata | 2021-05-23T05:57:14.000Z | [
"pytorch",
"gpt2",
"transformers"
]
| [
".gitattributes",
"added_tokens.json",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
]
| jkulhanek | 135 | transformers | ||
jkulhanek/augpt-mw-20 | 2021-05-23T05:57:45.000Z | [
"pytorch",
"gpt2",
"transformers"
]
| [
".gitattributes",
"added_tokens.json",
"config.json",
"database.zip",
"lexicalizer.zip",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"system_actions.txt",
"tokenizer_config.json",
"training_args.bin",
"user_intents.txt",
"vocab.json"
]
| jkulhanek | 13 | transformers | ||
jkulhanek/augpt-mw-21 | 2021-05-23T05:58:15.000Z | [
"pytorch",
"gpt2",
"transformers"
]
| [
".gitattributes",
"added_tokens.json",
"config.json",
"database.zip",
"lexicalizer.zip",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"system_actions.txt",
"tokenizer_config.json",
"training_args.bin",
"user_intents.txt",
"vocab.json"
]
| jkulhanek | 24 | transformers | ||
jky594176/BART1_GRU | 2021-05-30T12:59:07.000Z | [
"pytorch",
"bart",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"config.json",
"pytorch_model.bin"
]
| jky594176 | 16 | transformers |