modelId (string, 4–112 chars) | lastModified (string, 24 chars) | tags (list) | pipeline_tag (string, 21 classes) | files (list) | publishedBy (string, 2–37 chars) | downloads_last_month (int32, 0–9.44M) | library (string, 15 classes) | modelCard (string, 0–100k chars)
---|---|---|---|---|---|---|---|---|
s4sarath/malayalam | 2021-03-19T15:17:41.000Z | []
| [
".gitattributes"
]
| s4sarath | 0 | |||
saafpk/saafpk | 2021-05-07T05:49:27.000Z | []
| [
".gitattributes"
]
| saafpk | 0 | |||
saburbutt/albert_xxlarge_tweetqa | 2021-04-13T22:33:28.000Z | [
"pytorch",
"albert",
"question-answering",
"transformers"
]
| question-answering | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json"
]
| saburbutt | 205 | transformers | |
saburbutt/albert_xxlarge_tweetqa_v2 | 2021-04-13T22:36:46.000Z | [
"pytorch",
"albert",
"question-answering",
"transformers"
]
| question-answering | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json"
]
| saburbutt | 58 | transformers | |
saburbutt/roberta_base_tweetqa_model | 2021-05-20T19:58:30.000Z | [
"pytorch",
"jax",
"roberta",
"question-answering",
"transformers"
]
| question-answering | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| saburbutt | 11 | transformers | |
saburbutt/roberta_base_tweetqaa_model | 2020-11-12T20:13:14.000Z | []
| [
".gitattributes"
]
| saburbutt | 0 | |||
saburbutt/roberta_large_tweetqa | 2021-05-20T20:01:21.000Z | [
"pytorch",
"jax",
"roberta",
"question-answering",
"transformers"
]
| question-answering | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| saburbutt | 21 | transformers | |
saburbutt/testing | 2020-12-09T17:11:22.000Z | [
"pytorch",
"albert",
"question-answering",
"transformers"
]
| question-answering | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json"
]
| saburbutt | 10 | transformers | |
saburbutt/xlmroberta_large_tweetqa | 2020-11-16T01:21:38.000Z | [
"pytorch",
"xlm-roberta",
"question-answering",
"transformers"
]
| question-answering | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"sentencepiece.bpe.model",
"special_tokens_map.json",
"tokenizer_config.json"
]
| saburbutt | 11 | transformers | |
saburbutt/xlnet_large_tweetqa | 2021-04-13T22:34:59.000Z | [
"pytorch",
"xlnet",
"question-answering",
"transformers"
]
| question-answering | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json"
]
| saburbutt | 23 | transformers | |
sachaarbonel/bert-italian-cased-finetuned-pos | 2021-05-19T00:47:05.000Z | [
"pytorch",
"jax",
"bert",
"token-classification",
"it",
"dataset:xtreme",
"transformers"
]
| token-classification | [
".gitattributes",
"README.md",
"config.json",
"eval_results.txt",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"test_predictions.txt",
"test_results.txt",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| sachaarbonel | 371 | transformers | ---
language: it
datasets:
- xtreme
---
# Italian-Bert (Italian Bert) + POS 🎃🏷
This model is a version of [Bert Base Italian](https://huggingface.co/dbmdz/bert-base-italian-cased) fine-tuned on [xtreme udpos Italian](https://huggingface.co/nlp/viewer/?dataset=xtreme&config=udpos.Italian) for the **POS** downstream task.
## Details of the downstream task (POS) - Dataset
- [Dataset: xtreme udpos Italian](https://huggingface.co/nlp/viewer/?dataset=xtreme&config=udpos.Italian) 📚
| Dataset | # Examples |
| ---------------------- | ----- |
| Train | 716 K |
| Dev | 85 K |
- Fine-tuned using the [NER preprocessing script provided by @stefan-it](https://raw.githubusercontent.com/stefan-it/fine-tuned-berts-seq/master/scripts/preprocess.py)
- Labels covered:
```
ADJ
ADP
ADV
AUX
CCONJ
DET
INTJ
NOUN
NUM
PART
PRON
PROPN
PUNCT
SCONJ
SYM
VERB
X
```
## Metrics on evaluation set 🧾
| Metric | # score |
| :------------------------------------------------------------------------------------: | :-------: |
| F1 | **97.25** |
| Precision | **97.15** |
| Recall | **97.36** |
## Model in action 🔨
Example of usage
```python
from transformers import pipeline
nlp_pos = pipeline(
    "ner",
    model="sachaarbonel/bert-italian-cased-finetuned-pos",
    tokenizer=(
        'sachaarbonel/bert-italian-cased-finetuned-pos',  # the original card pointed to the Spanish checkpoint here, most likely a copy-paste slip
        {"use_fast": False}
    ))
text = "Roma è la Capitale d'Italia."  # double quotes so the apostrophe does not end the string
nlp_pos(text)
'''
Output:
--------
[{'entity': 'PROPN', 'index': 1, 'score': 0.9995346665382385, 'word': 'roma'},
{'entity': 'AUX', 'index': 2, 'score': 0.9966597557067871, 'word': 'e'},
{'entity': 'DET', 'index': 3, 'score': 0.9994786977767944, 'word': 'la'},
{'entity': 'NOUN',
'index': 4,
'score': 0.9995198249816895,
'word': 'capitale'},
{'entity': 'ADP', 'index': 5, 'score': 0.9990894198417664, 'word': 'd'},
{'entity': 'PART', 'index': 6, 'score': 0.57159024477005, 'word': "'"},
{'entity': 'PROPN',
'index': 7,
'score': 0.9994804263114929,
'word': 'italia'},
{'entity': 'PUNCT', 'index': 8, 'score': 0.9772886633872986, 'word': '.'}]
'''
```
Yeah! Not too bad 🎉
> Created by [Sacha Arbonel/@sachaarbonel](https://twitter.com/sachaarbonel) | [LinkedIn](https://www.linkedin.com/in/sacha-arbonel)
> Made with <span style="color: #e25555;">♥</span> in Paris
|
sackoh/bert-base-multilingual-cased-nsmc | 2021-05-19T00:50:32.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| sackoh | 433 | transformers | |
sadakmed/distiluse-base-multilingual-cased-v1 | 2021-06-15T14:14:22.000Z | [
"pytorch",
"multilingual",
"DistilBert",
"Universal Sentence Encoder",
"sentence-embeddings",
"sentence-transformers",
"sentence-similarity",
"license:apache-2.0"
]
| sentence-similarity | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| sadakmed | 152 | sentence-transformers | ---
language: multilingual
tags:
- DistilBert
- Universal Sentence Encoder
- sentence-embeddings
- sentence-transformers
- sentence-similarity
license: apache-2.0
---
Knowledge-distilled version of the multilingual Universal Sentence Encoder. Supports 15 languages, including Arabic, Chinese, Dutch, English, French, German, Italian, Korean, Polish, Portuguese, Russian, Spanish, and Turkish.
This model was exported from 'distiluse-base-multilingual-cased-v1' in `sentence-transformers` so that it can be used directly with `transformers`, as sketched below.
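A minimal usage sketch (not part of the original card; it assumes the checkpoint loads with the generic `AutoModel`/`AutoTokenizer` classes, and note that the full sentence-transformers pipeline applies its own pooling and dense layers, so the mean-pooled vectors below only approximate its sentence embeddings):
```python
# Hedged sketch: load the checkpoint with plain transformers and mean-pool token embeddings.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("sadakmed/distiluse-base-multilingual-cased-v1")
model = AutoModel.from_pretrained("sadakmed/distiluse-base-multilingual-cased-v1")

sentences = ["This is an example sentence.", "Ceci est une phrase d'exemple."]
encoded = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    output = model(**encoded)

# Mean pooling over valid (non-padding) tokens
mask = encoded["attention_mask"].unsqueeze(-1).float()
embeddings = (output.last_hidden_state * mask).sum(1) / mask.sum(1)
print(embeddings.shape)
```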
|
sadakmed/distiluse-base-multilingual-cased-v2 | 2021-06-14T13:34:51.000Z | [
"pytorch",
"distilbert",
"transformers"
]
| [
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| sadakmed | 6 | transformers | ||
sadakmed/dpr-passage_encoder-spanish | 2021-05-20T04:37:11.000Z | [
"pytorch",
"bert",
"es",
"transformers",
"dpr"
]
| [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| sadakmed | 41 | transformers | ---
language: es
tags:
- dpr
---
This is a DPR passage encoder model, fine-tuned with `dpr-question_encoder-spanish` on Spanish question answering data; a loading sketch follows below. |
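A minimal loading sketch (not from the original card; it assumes the weights load as a plain BERT encoder via the `Auto*` classes and, as DPR does, uses the [CLS] vector as the passage embedding):
```python
# Hedged sketch: encode Spanish passages with this checkpoint.
import torch
from transformers import AutoTokenizer, AutoModel

name = "sadakmed/dpr-passage_encoder-spanish"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)

passages = ["Madrid es la capital de España.", "El Amazonas es el río más caudaloso del mundo."]
inputs = tokenizer(passages, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

passage_embeddings = outputs.last_hidden_state[:, 0]  # one [CLS] vector per passage
print(passage_embeddings.shape)
```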
|
sadana1og/qa-model-t1 | 2021-01-15T05:00:14.000Z | []
| [
".gitattributes"
]
| sadana1og | 0 | |||
sadana1og/tha-bert-qa | 2021-01-14T08:21:34.000Z | []
| [
".gitattributes"
]
| sadana1og | 0 | |||
safsaf/poemAR | 2021-05-23T12:19:44.000Z | [
"pytorch",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"config.json",
"eval_results.txt",
"flax_model.msgpack",
"merges.txt",
"model_args.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
]
| safsaf | 38 | transformers | |
sagar/pretrained-FinBERT | 2021-01-04T04:34:18.000Z | [
"pytorch",
"transformers"
]
| [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"tokenizer.json",
"vocab.txt"
]
| sagar | 23 | transformers | FinBERT pretrained model to be used for downstream tasks |
|
sagorsarker/bangla-bert-base | 2021-06-03T06:39:42.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"bn",
"dataset:common_crawl",
"dataset:wikipedia",
"dataset:oscar",
"arxiv:1810.04805",
"arxiv:2012.14353",
"arxiv:2104.08613",
"transformers",
"bengali",
"bengali-lm",
"bangla",
"license:mit",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tf_model.h5",
"vocab.txt"
]
| sagorsarker | 2,534 | transformers | ---
language: bn
tags:
- bert
- bengali
- bengali-lm
- bangla
license: mit
datasets:
- common_crawl
- wikipedia
- oscar
---
# Bangla BERT Base
It has been a long journey, but here is our **Bangla-Bert**! It is now available on the Hugging Face model hub.
[Bangla-Bert-Base](https://github.com/sagorbrur/bangla-bert) is a pretrained Bengali language model trained with masked language modeling, as described in [BERT](https://arxiv.org/abs/1810.04805) and its GitHub [repository](https://github.com/google-research/bert).
## Pretrain Corpus Details
The corpus was downloaded from two main sources:
* Bengali commoncrawl corpus downloaded from [OSCAR](https://oscar-corpus.com/)
* [Bengali Wikipedia Dump Dataset](https://dumps.wikimedia.org/bnwiki/latest/)
After downloading these corpora, we preprocessed them into the BERT format: one sentence per line, with an extra blank line between documents.
```
sentence 1
sentence 2
sentence 1
sentence 2
```
## Building Vocab
We used the [BNLP](https://github.com/sagorbrur/bnlp) package to train a Bengali SentencePiece model with a vocabulary size of 102025, and then converted the output vocab file to the BERT format.
Our final vocab file is available at [https://github.com/sagorbrur/bangla-bert](https://github.com/sagorbrur/bangla-bert) and also on the [Hugging Face](https://huggingface.co/sagorsarker/bangla-bert-base) model hub; a minimal SentencePiece training sketch follows below.
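For illustration, a rough sketch of this vocabulary-training step; the card used the BNLP package, which wraps the `sentencepiece` library, and the corpus path below is a placeholder:
```py
# Hedged sketch: train a SentencePiece model as described above.
# "bn_corpus.txt" is a placeholder for the preprocessed one-sentence-per-line corpus.
import sentencepiece as spm

spm.SentencePieceTrainer.Train(
    "--input=bn_corpus.txt --model_prefix=bangla_sp --vocab_size=102025"
)
# The resulting vocabulary would then be converted to the WordPiece-style
# vocab.txt format that BERT expects, as described above.
```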
## Training Details
* Bangla-Bert was trained with code provided in Google BERT's github repository (https://github.com/google-research/bert)
* Currently released model follows bert-base-uncased model architecture (12-layer, 768-hidden, 12-heads, 110M parameters)
* Total Training Steps: 1 Million
* The model was trained on a single Google Cloud TPU
## Evaluation Results
### LM Evaluation Results
After training for 1 million steps, the evaluation results are:
```
global_step = 1000000
loss = 2.2406516
masked_lm_accuracy = 0.60641736
masked_lm_loss = 2.201459
next_sentence_accuracy = 0.98625
next_sentence_loss = 0.040997364
perplexity = numpy.exp(2.2406516) = 9.393331287442784
Loss for final step: 2.426227
```
### Downstream Task Evaluation Results
- Evaluation on Bengali Classification Benchmark Datasets
Huge thanks to [Nick Doiron](https://twitter.com/mapmeld) for providing the evaluation results for the classification task.
He used [Bengali Classification Benchmark](https://github.com/rezacsedu/Classification_Benchmarks_Benglai_NLP) datasets for the classification task.
Compared to Nick's [Bengali Electra](https://huggingface.co/monsoon-nlp/bangla-electra) and multilingual BERT, Bangla BERT Base achieves state-of-the-art results.
Here is the [evaluation script](https://github.com/sagorbrur/bangla-bert/blob/master/notebook/bangla-bert-evaluation-classification-task.ipynb).
| Model | Sentiment Analysis | Hate Speech Task | News Topic Task | Average |
| ----- | -------------------| ---------------- | --------------- | ------- |
| mBERT | 68.15 | 52.32 | 72.27 | 64.25 |
| Bengali Electra | 69.19 | 44.84 | 82.33 | 65.45 |
| Bangla BERT Base | 70.37 | 71.83 | 89.19 | 77.13 |
- Evaluation on [Wikiann](https://huggingface.co/datasets/wikiann) Datasets
We evaluated `Bangla-BERT-Base` on the [Wikiann](https://huggingface.co/datasets/wikiann) Bengali NER dataset along with three other benchmark models (mBERT, XLM-R, Indic-BERT). </br>
After training each model for 5 epochs, `Bangla-BERT-Base` placed third, with `mBERT` first and `XLM-R` second.
| Base Pre-trained Model | F1 Score | Accuracy |
| ----- | -------------------| ---------------- |
| [mBERT-uncased](https://huggingface.co/bert-base-multilingual-uncased) | 97.11 | 97.68 |
| [XLM-R](https://huggingface.co/xlm-roberta-base) | 96.22 | 97.03 |
| [Indic-BERT](https://huggingface.co/ai4bharat/indic-bert)| 92.66 | 94.74 |
| Bangla-BERT-Base | 95.57 | 97.49 |
All four models were trained with the [transformers-token-classification](https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/token_classification.ipynb) notebook.
You can find all models' evaluation results [here](https://github.com/sagorbrur/bangla-bert/tree/master/evaluations/wikiann).
You can also check the papers below, which used this model on their datasets:
* [arXiv:2012.14353](https://arxiv.org/abs/2012.14353)
* [arxiv:2104.08613](https://arxiv.org/abs/2104.08613)
**NB: If you use this model for any NLP task please share evaluation results with us. We will add it here.**
## Limitations and Biases
## How to Use
**Bangla BERT Tokenizer**
```py
from transformers import AutoTokenizer, AutoModel
bnbert_tokenizer = AutoTokenizer.from_pretrained("sagorsarker/bangla-bert-base")
text = "আমি বাংলায় গান গাই।"
bnbert_tokenizer.tokenize(text)
# ['আমি', 'বাংলা', '##য', 'গান', 'গাই', '।']
```
**MASK Generation**
You can use this model directly with a pipeline for masked language modeling:
```py
from transformers import BertForMaskedLM, BertTokenizer, pipeline
model = BertForMaskedLM.from_pretrained("sagorsarker/bangla-bert-base")
tokenizer = BertTokenizer.from_pretrained("sagorsarker/bangla-bert-base")
nlp = pipeline('fill-mask', model=model, tokenizer=tokenizer)
for pred in nlp(f"আমি বাংলায় {nlp.tokenizer.mask_token} গাই।"):
    print(pred)
# {'sequence': '[CLS] আমি বাংলায গান গাই । [SEP]', 'score': 0.13404667377471924, 'token': 2552, 'token_str': 'গান'}
```
## Author
[Sagor Sarker](https://github.com/sagorbrur)
## Acknowledgements
* Thanks to Google [TensorFlow Research Cloud (TFRC)](https://www.tensorflow.org/tfrc) for providing the free TPU credits - thank you!
* Thanks to all the people around us who are always helping us build something for Bengali.
## Reference
* https://github.com/google-research/bert
## Citation
If you find this model helpful, please cite.
```
@misc{Sagor_2020,
title = {BanglaBERT: Bengali Mask Language Model for Bengali Language Understanding},
author = {Sagor Sarker},
year = {2020},
url = {https://github.com/sagorbrur/bangla-bert}
}
```
|
sagorsarker/codeswitch-hineng-lid-lince | 2021-05-19T01:00:45.000Z | [
"pytorch",
"jax",
"bert",
"token-classification",
"hi",
"en",
"dataset:lince",
"transformers",
"license:mit",
"codeswitching",
"hindi-english",
"language-identification"
]
| token-classification | [
".gitattributes",
"README.md",
"config.json",
"eval_results.txt",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"test_predictions.txt",
"test_results.txt",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| sagorsarker | 466 | transformers | ---
language:
- hi
- en
datasets:
- lince
license: mit
tags:
- codeswitching
- hindi-english
- language-identification
---
# codeswitch-hineng-lid-lince
This is a pretrained model for **language identification** of `hindi-english` code-mixed data, trained on data from [LinCE](https://ritual.uh.edu/lince/home).
This model was trained for the repository below:
[https://github.com/sagorbrur/codeswitch](https://github.com/sagorbrur/codeswitch)
To install codeswitch:
```
pip install codeswitch
```
## Identify Language
* **Method-1**
```py
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline
tokenizer = AutoTokenizer.from_pretrained("sagorsarker/codeswitch-hineng-lid-lince")
model = AutoModelForTokenClassification.from_pretrained("sagorsarker/codeswitch-hineng-lid-lince")
lid_model = pipeline('ner', model=model, tokenizer=tokenizer)
lid_model("put any hindi english code-mixed sentence")
```
* **Method-2**
```py
from codeswitch.codeswitch import LanguageIdentification
lid = LanguageIdentification('hin-eng')
text = "" # your code-mixed sentence
result = lid.identify(text)
print(result)
```
|
sagorsarker/codeswitch-hineng-ner-lince | 2021-05-19T01:03:28.000Z | [
"pytorch",
"jax",
"bert",
"token-classification",
"hi",
"en",
"dataset:lince",
"transformers",
"license:mit",
"codeswitching",
"hindi-english",
"ner"
]
| token-classification | [
".gitattributes",
"README.md",
"config.json",
"eval_results.txt",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"test_predictions.txt",
"test_results.txt",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| sagorsarker | 64 | transformers | ---
language:
- hi
- en
datasets:
- lince
license: mit
tags:
- codeswitching
- hindi-english
- ner
---
# codeswitch-hineng-ner-lince
This is a pretrained model for **Named Entity Recognition** of `hindi-english` code-mixed data, trained on data from [LinCE](https://ritual.uh.edu/lince/home).
This model was trained for the repository below:
[https://github.com/sagorbrur/codeswitch](https://github.com/sagorbrur/codeswitch)
To install codeswitch:
```
pip install codeswitch
```
## Named Entity Recognition of Code-Mixed Data
* **Method-1**
```py
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline
tokenizer = AutoTokenizer.from_pretrained("sagorsarker/codeswitch-hineng-ner-lince")
model = AutoModelForTokenClassification.from_pretrained("sagorsarker/codeswitch-hineng-ner-lince")
ner_model = pipeline('ner', model=model, tokenizer=tokenizer)
ner_model("put any hindi english code-mixed sentence")
```
* **Method-2**
```py
from codeswitch.codeswitch import NER
ner = NER('hin-eng')
text = "" # your mixed sentence
result = ner.tag(text)
print(result)
```
|
sagorsarker/codeswitch-hineng-pos-lince | 2021-05-19T01:06:07.000Z | [
"pytorch",
"jax",
"bert",
"token-classification",
"hi",
"en",
"dataset:lince",
"transformers",
"license:mit",
"codeswitching",
"hindi-english",
"pos"
]
| token-classification | [
".gitattributes",
"README.md",
"config.json",
"eval_results.txt",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"test_predictions.txt",
"test_results.txt",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| sagorsarker | 32 | transformers | ---
language:
- hi
- en
datasets:
- lince
license: mit
tags:
- codeswitching
- hindi-english
- pos
---
# codeswitch-hineng-pos-lince
This is a pretrained model for **Part-of-Speech Tagging** of `hindi-english` code-mixed data, trained on data from [LinCE](https://ritual.uh.edu/lince/home).
This model was trained for the repository below:
[https://github.com/sagorbrur/codeswitch](https://github.com/sagorbrur/codeswitch)
To install codeswitch:
```
pip install codeswitch
```
## Part-of-Speech Tagging of Hindi-English Mixed Data
* **Method-1**
```py
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline
tokenizer = AutoTokenizer.from_pretrained("sagorsarker/codeswitch-hineng-pos-lince")
model = AutoModelForTokenClassification.from_pretrained("sagorsarker/codeswitch-hineng-pos-lince")
pos_model = pipeline('ner', model=model, tokenizer=tokenizer)
pos_model("put any hindi english code-mixed sentence")
```
* **Method-2**
```py
from codeswitch.codeswitch import POS
pos = POS('hin-eng')
text = "" # your mixed sentence
result = pos.tag(text)
print(result)
```
|
sagorsarker/codeswitch-nepeng-lid-lince | 2021-05-19T01:11:01.000Z | [
"pytorch",
"jax",
"bert",
"token-classification",
"ne",
"en",
"dataset:lince",
"transformers",
"license:mit",
"codeswitching",
"nepali-english",
"language-identification"
]
| token-classification | [
".gitattributes",
"README.md",
"config.json",
"eval_results.txt",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"test_predictions.txt",
"test_results.txt",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| sagorsarker | 14 | transformers | ---
language:
- ne
- en
datasets:
- lince
license: mit
tags:
- codeswitching
- nepali-english
- language-identification
---
# codeswitch-nepeng-lid-lince
This is a pretrained model for **language identification** of `nepali-english` code-mixed data, trained on data from [LinCE](https://ritual.uh.edu/lince/home).
This model was trained for the repository below:
[https://github.com/sagorbrur/codeswitch](https://github.com/sagorbrur/codeswitch)
To install codeswitch:
```
pip install codeswitch
```
## Identify Language
* **Method-1**
```py
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline
tokenizer = AutoTokenizer.from_pretrained("sagorsarker/codeswitch-nepeng-lid-lince")
model = AutoModelForTokenClassification.from_pretrained("sagorsarker/codeswitch-nepeng-lid-lince")
lid_model = pipeline('ner', model=model, tokenizer=tokenizer)
lid_model("put any nepali english code-mixed sentence")
```
* **Method-2**
```py
from codeswitch.codeswitch import LanguageIdentification
lid = LanguageIdentification('nep-eng')
text = "" # your code-mixed sentence
result = lid.identify(text)
print(result)
```
|
sagorsarker/codeswitch-spaeng-lid-lince | 2021-06-11T04:12:00.000Z | [
"pytorch",
"jax",
"bert",
"token-classification",
"es",
"en",
"dataset:lince",
"transformers",
"license:mit",
"codeswitching",
"spanish-english",
"language-identification"
]
| token-classification | [
".gitattributes",
"README.md",
"config.json",
"eval_results.txt",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"test_predictions.txt",
"test_results.txt",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| sagorsarker | 1,275 | transformers | ---
language:
- es
- en
datasets:
- lince
license: mit
tags:
- codeswitching
- spanish-english
- language-identification
---
# codeswitch-spaeng-lid-lince
This is a pretrained model for **language identification** of `spanish-english` code-mixed data, trained on data from [LinCE](https://ritual.uh.edu/lince/home).
This model was trained for the repository below:
[https://github.com/sagorbrur/codeswitch](https://github.com/sagorbrur/codeswitch)
To install codeswitch:
```
pip install codeswitch
```
## Identify Language
* **Method-1**
```py
from codeswitch.codeswitch import LanguageIdentification
lid = LanguageIdentification('spa-eng')
text = "" # your code-mixed sentence
result = lid.identify(text)
print(result)
```
* **Method-2**
```py
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline
tokenizer = AutoTokenizer.from_pretrained("sagorsarker/codeswitch-spaeng-lid-lince")
model = AutoModelForTokenClassification.from_pretrained("sagorsarker/codeswitch-spaeng-lid-lince")
lid_model = pipeline('ner', model=model, tokenizer=tokenizer)
lid_model("put any spanish english code-mixed sentence")
```
|
sagorsarker/codeswitch-spaeng-ner-lince | 2021-05-19T01:16:32.000Z | [
"pytorch",
"jax",
"bert",
"token-classification",
"es",
"en",
"dataset:lince",
"transformers",
"license:mit",
"codeswitching",
"spanish-english",
"ner"
]
| token-classification | [
".gitattributes",
"README.md",
"config.json",
"eval_results.txt",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"test_predictions.txt",
"test_results.txt",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| sagorsarker | 286 | transformers | ---
language:
- es
- en
datasets:
- lince
license: mit
tags:
- codeswitching
- spanish-english
- ner
---
# codeswitch-spaeng-ner-lince
This is a pretrained model for **Named Entity Recognition** of `spanish-english` code-mixed data, trained on data from [LinCE](https://ritual.uh.edu/lince/home).
This model was trained for the repository below:
[https://github.com/sagorbrur/codeswitch](https://github.com/sagorbrur/codeswitch)
To install codeswitch:
```
pip install codeswitch
```
## Named Entity Recognition of Spanish-English Mixed Data
* **Method-1**
```py
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline
tokenizer = AutoTokenizer.from_pretrained("sagorsarker/codeswitch-spaeng-ner-lince")
model = AutoModelForTokenClassification.from_pretrained("sagorsarker/codeswitch-spaeng-ner-lince")
ner_model = pipeline('ner', model=model, tokenizer=tokenizer)
ner_model("put any spanish english code-mixed sentence")
```
* **Method-2**
```py
from codeswitch.codeswitch import NER
ner = NER('spa-eng')
text = "" # your mixed sentence
result = ner.tag(text)
print(result)
```
|
sagorsarker/codeswitch-spaeng-pos-lince | 2021-05-19T01:19:43.000Z | [
"pytorch",
"jax",
"bert",
"token-classification",
"es",
"en",
"dataset:lince",
"transformers",
"license:mit",
"codeswitching",
"spanish-english",
"pos"
]
| token-classification | [
".gitattributes",
"README.md",
"config.json",
"eval_results.txt",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"test_predictions.txt",
"test_results.txt",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| sagorsarker | 62 | transformers | ---
language:
- es
- en
datasets:
- lince
license: mit
tags:
- codeswitching
- spanish-english
- pos
---
# codeswitch-spaeng-pos-lince
This is a pretrained model for **Part-of-Speech Tagging** of `spanish-english` code-mixed data, trained on data from [LinCE](https://ritual.uh.edu/lince/home).
This model was trained for the repository below:
[https://github.com/sagorbrur/codeswitch](https://github.com/sagorbrur/codeswitch)
To install codeswitch:
```
pip install codeswitch
```
## Part-of-Speech Tagging of Spanish-English Mixed Data
* **Method-1**
```py
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline
tokenizer = AutoTokenizer.from_pretrained("sagorsarker/codeswitch-spaeng-pos-lince")
model = AutoModelForTokenClassification.from_pretrained("sagorsarker/codeswitch-spaeng-pos-lince")
pos_model = pipeline('ner', model=model, tokenizer=tokenizer)
pos_model("put any spanish english code-mixed sentence")
```
* **Method-2**
```py
from codeswitch.codeswitch import POS
pos = POS('spa-eng')
text = "" # your mixed sentence
result = pos.tag(text)
print(result)
```
|
sagorsarker/codeswitch-spaeng-sentiment-analysis-lince | 2021-05-19T01:22:56.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"es",
"en",
"dataset:lince",
"transformers",
"license:mit",
"codeswitching",
"spanish-english",
"sentiment-analysis"
]
| text-classification | [
".gitattributes",
"README.md",
"config.json",
"eval_results_sst-2.txt",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| sagorsarker | 869 | transformers | ---
language:
- es
- en
datasets:
- lince
license: mit
tags:
- codeswitching
- spanish-english
- sentiment-analysis
---
# codeswitch-spaeng-sentiment-analysis-lince
This is a pretrained model for **Sentiment Analysis** of `spanish-english` code-mixed data, trained on data from [LinCE](https://ritual.uh.edu/lince/home).
This model was trained for the repository below:
[https://github.com/sagorbrur/codeswitch](https://github.com/sagorbrur/codeswitch)
To install codeswitch:
```
pip install codeswitch
```
## Sentiment Analysis of Spanish-English Code-Mixed Data
* **Method-1**
```py
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
tokenizer = AutoTokenizer.from_pretrained("sagorsarker/codeswitch-spaeng-sentiment-analysis-lince")
model = AutoModelForSequenceClassification.from_pretrained("sagorsarker/codeswitch-spaeng-sentiment-analysis-lince")
nlp = pipeline('sentiment-analysis', model=model, tokenizer=tokenizer)
sentence = "El perro le ladraba a La Gatita .. .. lol #teamlagatita en las playas de Key Biscayne este Memorial day"
nlp(sentence)
```
* **Method-2**
```py
from codeswitch.codeswitch import SentimentAnalysis
sa = SentimentAnalysis('spa-eng')
sentence = "El perro le ladraba a La Gatita .. .. lol #teamlagatita en las playas de Key Biscayne este Memorial day"
result = sa.analyze(sentence)
print(result)
```
|
sagorsarker/mbert-bengali-ner | 2021-06-03T16:59:35.000Z | [
"pytorch",
"bert",
"token-classification",
"bn",
"dataset:wikiann",
"dataset:xtreme",
"transformers",
"bengali-ner",
"bengali",
"bangla",
"NER",
"license:mit"
]
| token-classification | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json",
"trainer_state.json",
"training_args.bin",
"vocab.txt"
]
| sagorsarker | 65 | transformers | ---
language: bn
tags:
- bengali-ner
- bengali
- bangla
- NER
license: mit
datasets:
- wikiann
- xtreme
---
# Multilingual BERT Bengali Named Entity Recognition
`mBERT-Bengali-NER` is a transformer-based Bengali NER model built with the [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased) model and the [Wikiann](https://huggingface.co/datasets/wikiann) dataset.
## How to Use
```py
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("sagorsarker/mbert-bengali-ner")
model = AutoModelForTokenClassification.from_pretrained("sagorsarker/mbert-bengali-ner")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "আমি জাহিদ এবং আমি ঢাকায় বাস করি।"
ner_results = nlp(example)
print(ner_results)
```
## Label and ID Mapping
| Label ID | Label |
| -------- | ----- |
| 0 | O |
| 1 | B-PER |
| 2 | I-PER |
| 3 | B-ORG |
| 4 | I-ORG |
| 5 | B-LOC |
| 6 | I-LOC |
## Training Details
- mBERT-Bengali-NER was trained on the [Wikiann](https://huggingface.co/datasets/wikiann) dataset
- mBERT-Bengali-NER was trained with the [transformers-token-classification](https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/token_classification.ipynb) script
- mBERT-Bengali-NER was trained for a total of 5 epochs
- Trained on a Kaggle GPU
## Evaluation Results
|Model | F1 | Precision | Recall | Accuracy | Loss |
| ---- | --- | --------- | ----- | -------- | --- |
|mBert-Bengali-NER | 0.97105 | 0.96769| 0.97443 | 0.97682 | 0.12511 |
|
sagorsarker/mbert-bengali-tydiqa-qa | 2021-06-04T12:28:03.000Z | [
"pytorch",
"bert",
"question-answering",
"bn",
"dataset:tydiqa",
"transformers",
"mbert",
"bengali",
"bangla",
"qa",
"license:mit"
]
| question-answering | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| sagorsarker | 90 | transformers | ---
language: bn
tags:
- mbert
- bengali
- question-answering
- bangla
- qa
license: mit
datasets:
- tydiqa
---
# mBERT Bengali Question Answering
`mBERT-Bengali-Tydiqa-QA` is a question answering model obtained by fine-tuning the [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased) model on the [tydiqa](https://github.com/google-research-datasets/tydiqa) Bengali dataset.
## Usage
You can use [bntransformer](https://github.com/sagorbrur/bntransformer)
### Installation
`pip install bntransformer`
### Generate Answer
```py
from bntransformer import BanglaQA
bnqa = BanglaQA()
# you can pass a custom model path or another Bengali Hugging Face model path
# by default it uses "sagorsarker/mbert-bengali-tydiqa-qa"
context = "সূর্য সেন ১৮৯৪ সালের ২২ মার্চ চট্টগ্রামের রাউজান থানার নোয়াপাড়ায় অর্থনৈতিক ভাবে অস্বচ্ছল পরিবারে জন্মগ্রহণ করেন। তাঁর পিতার নাম রাজমনি সেন এবং মাতার নাম শশী বালা সেন। রাজমনি সেনের দুই ছেলে আর চার মেয়ে। সূর্য সেন তাঁদের পরিবারের চতুর্থ সন্তান। দুই ছেলের নাম সূর্য ও কমল। চার মেয়ের নাম বরদাসুন্দরী, সাবিত্রী, ভানুমতী ও প্রমিলা। শৈশবে পিতা মাতাকে হারানো সূর্য সেন কাকা গৌরমনি সেনের কাছে মানুষ হয়েছেন। সূর্য সেন ছেলেবেলা থেকেই খুব মনোযোগী ভাল ছাত্র ছিলেন এবং ধর্মভাবাপন্ন গম্ভীর প্রকৃতির ছিলেন।"
question = "মাস্টারদা সূর্যকুমার সেনের বাবার নাম কী ছিল ?"
answers = bnqa.find_answer(context, question)
print(answers)
```
or
### Transformers QA Pipeline
```py
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "sagorsarker/mbert-bengali-tydiqa-qa"
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
nlp = pipeline('question-answering', model=model, tokenizer=tokenizer)  # use the model and tokenizer loaded above
qa_input = {
'question': 'মাস্টারদা সূর্যকুমার সেনের বাবার নাম কী ছিল ?',
'context': 'সূর্য সেন ১৮৯৪ সালের ২২ মার্চ চট্টগ্রামের রাউজান থানার নোয়াপাড়ায় অর্থনৈতিক ভাবে অস্বচ্ছল পরিবারে জন্মগ্রহণ করেন। তাঁর পিতার নাম রাজমনি সেন এবং মাতার নাম শশী বালা সেন। রাজমনি সেনের দুই ছেলে আর চার মেয়ে। সূর্য সেন তাঁদের পরিবারের চতুর্থ সন্তান। দুই ছেলের নাম সূর্য ও কমল। চার মেয়ের নাম বরদাসুন্দরী, সাবিত্রী, ভানুমতী ও প্রমিলা। শৈশবে পিতা মাতাকে হারানো সূর্য সেন কাকা গৌরমনি সেনের কাছে মানুষ হয়েছেন। সূর্য সেন ছেলেবেলা থেকেই খুব মনোযোগী ভাল ছাত্র ছিলেন এবং ধর্মভাবাপন্ন গম্ভীর প্রকৃতির ছিলেন।'
}
result = nlp(qa_input)
print(result)
```
## Training Details
- The `mBERT-Bengali-Tydiqa-QA` model was built using [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased)
- The `mBERT-Bengali-Tydiqa-QA` model was trained on the [tydiqa](https://github.com/google-research-datasets/tydiqa) Bengali dataset
- The Tydiqa Bengali data contains **2390 train** examples and **113 validation** examples
- The `mBERT-Bengali-Tydiqa-QA` model was trained on a [Kaggle](https://www.kaggle.com/) GPU
- The `mBERT-Bengali-Tydiqa-QA` model was trained for a total of 5 epochs
- `mBERT-Bengali-Tydiqa-QA` was trained using the [transformers question-answering example](https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/question_answering.ipynb) notebook with all default settings except for the pre-trained model and dataset
## Evaluation Results
Here are the evaluation results from training:
```
Exact Match: 57.52212389380531
F1 Score: 68.66183963529096
```
## Authors
- Sagor Sarker
- [Github](https://github.com/sagorbrur)
- [LinkedIn](https://www.linkedin.com/in/sagor-sarker/) |
saharnaz/clinic | 2021-02-10T06:34:04.000Z | []
| [
".gitattributes"
]
| saharnaz | 0 | |||
saibo/legal-longformer-base-4096 | 2020-12-28T12:57:09.000Z | [
"pytorch",
"tf",
"longformer",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.json"
]
| saibo | 40 | transformers | |
saibo/legal-roberta-base | 2021-05-20T20:04:13.000Z | [
"pytorch",
"tf",
"jax",
"roberta",
"masked-lm",
"en",
"transformers",
"legal",
"license:apache-2.0",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.json"
]
| saibo | 545 | transformers | ---
language:
- en
tags:
- legal
license: apache-2.0
metrics:
- precision
- recall
---
# LEGAL-ROBERTA
We introduce LEGAL-ROBERTA, which is a domain-specific language representation model fine-tuned on large-scale legal corpora (4.6 GB).
## Demo
'This \<mask\> Agreement is between General Motors and John Murray .'
| Model | top1 | top2 | top3 | top4 | top5 |
| ------------ | ---- | --- | --- | --- | -------- |
| Bert | new | current | proposed | marketing | joint |
| legalBert | settlement | letter | dealer | master | supplemental |
| legalRoberta | License | Settlement | Contract | license | Trust |
> LegalRoberta captures the case
'The applicant submitted that her husband was subjected to treatment amounting to \<mask\> whilst in the custody of Adana Security Directorate'
| Model | top1 | top2 | top3 | top4 | top5 |
| ------------ | ---- | --- | --- | --- | -------- |
| Bert | torture | rape | abuse | death | violence |
| legalBert | torture | detention | arrest | rape | death |
| legalRoberta | torture | abuse | insanity | cruelty | confinement |
'Establishing a system for the identification and registration of \<mask\> animals and regarding the labeling of beef and beef products .':
| Model | top1 | top2 | top3 | top4 | top5 |
| ------------ | ---- | --- | --- | --- | -------- |
| Bert | farm | livestock | draft | domestic | wild |
| legalBert | live | beef | farm | pet | dairy |
| legalRoberta | domestic | all | beef | wild | registered |
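The mask-filling demos above can be reproduced with a standard fill-mask pipeline; a minimal sketch (not part of the original card):
```python
# Hedged sketch: reproduce the <mask> demos with a fill-mask pipeline.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="saibo/legal-roberta-base")

for pred in fill_mask("This <mask> Agreement is between General Motors and John Murray ."):
    print(pred["token_str"], round(pred["score"], 4))
```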
## Load Pretrained Model
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("saibo/legal-roberta-base")
model = AutoModel.from_pretrained("saibo/legal-roberta-base")
```
## Training data
The training data consists of 3 origins:
1. Patent Litigations (https://www.kaggle.com/uspto/patent-litigations): This dataset covers over 74k cases across 52 years and over 5 million relevant documents. 5 different files detail the litigating parties, their attorneys, results, locations, and dates.
1. *1.57GB*
2. abbrev:PL
3. *clean 1.1GB*
2. Caselaw Access Project (CAP) (https://case.law/): Covering 360 years of United States case law, the Caselaw Access Project (CAP) API and bulk data services include 40 million pages of U.S. court decisions and almost 6.5 million individual cases.
1. *raw 5.6 GB*
2. abbrev:CAP
3. *clean 2.8GB*
3. Google Patents Public Data (https://www.kaggle.com/bigquery/patents): The Google Patents Public Data contains a collection of publicly accessible, connected database tables for empirical analysis of the international patent system.
1. *BigQuery (https://www.kaggle.com/sohier/beyond-queries-exploring-the-bigquery-api)*
2. abbrev:GPPD(1.1GB,patents-public-data.uspto_oce_litigation.documents)
3. *clean 1GB*
## Training procedure
We start from a pretrained ROBERTA-BASE model and fine-tune it on the legal corpus.
Fine-tuning configuration:
- lr = 5e-5(with lr decay, ends at 4.95e-8)
- num_epoch = 3
- Total steps = 446500
- Total_flos = 2.7365e18
Loss starts at 1.850 and ends at 0.880.
The perplexity after fine-tuning on the legal corpus is 2.2735.
Device: 2× GeForce GTX TITAN X (compute capability 5.2)
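For reference, a minimal sketch of MLM fine-tuning with the configuration above using the 🤗 `Trainer` (the corpus path and preprocessing are placeholders, not the authors' actual training script):
```python
# Hedged sketch: fine-tune roberta-base on a legal corpus with masked language modeling,
# using the stated learning rate (5e-5) and 3 epochs. "legal_corpus.txt" is a placeholder.
from datasets import load_dataset
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForMaskedLM.from_pretrained("roberta-base")

raw = load_dataset("text", data_files={"train": "legal_corpus.txt"})
tokenized = raw.map(lambda x: tokenizer(x["text"], truncation=True, max_length=512),
                    batched=True, remove_columns=["text"])

collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)
args = TrainingArguments(output_dir="legal-roberta-base",
                         learning_rate=5e-5,
                         num_train_epochs=3,
                         per_device_train_batch_size=8)

trainer = Trainer(model=model, args=args,
                  train_dataset=tokenized["train"], data_collator=collator)
trainer.train()
```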
## Eval results
We benchmarked the model on two downstream tasks: Multi-Label Classification for Legal Text and Catchphrase Retrieval with Legal Case Description.
1. LMTC, Legal Multi-Label Text Classification
Dataset:
- Labels shape: 4271
- Frequent labels: 739
- Few labels: 3369
- Zero labels: 163
Hyperparameters:
- lr: 1e-05
- batch_size: 4
- max_sequence_size: 512
- max_label_size: 15
- few_threshold: 50
- epochs: 10
- dropout:0.1
- early stop:yes
- patience: 3
| model | Precision | Recall | F1 | R@10 | P@10 | RP@10 | NDCG@10 |
| --------------- | --------- | ------ | ----- | ----- | ----- | ----- | ------- |
| LegalBert | **0.866** | 0.439 | 0.582 | 0.749 | 0.368 | 0.749 | 0.753 |
| LegalRoberta | 0.859 | **0.457** | **0.596** | **0.750** | **0.369** |**0.750** | **0.754** |
| Roberta | 0.858 | 0.440 | 0.582 | 0.743 | 0.365 | 0.743 | 0.746 |
Training time per epoch (including validation):
| model(exp_name) | time |
| --------------- | --- |
| Bert | 1h40min |
| Roberta | 2h20min |
## Limitations
In the masked language model widget, the tokens have the prefix **Ġ**. This seems odd, but I haven't yet been able to fix it.
I know that in the case of the BPE tokenizer (ROBERTA's tokenizer), the symbol Ġ marks the beginning of a new word (it encodes the preceding space), and the majority of tokens in the vocabs of pre-trained tokenizers start with Ġ.
For example
```python
import transformers
tokenizer = transformers.RobertaTokenizer.from_pretrained('roberta-base')
print(tokenizer.tokenize('I love salad'))
```
Outputs:
```
['I', 'Ġlove', 'Ġsalad']
```
So I think this is not fundamentally linked to the model itself.
## BibTeX entry and citation info
|
saichandrapandraju/t5_base_tabqgen | 2021-06-18T14:59:46.000Z | [
"pytorch",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"added_tokens.json",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer.json",
"tokenizer_config.json"
]
| saichandrapandraju | 0 | transformers | |
sajjad/edge | 2021-03-27T19:27:57.000Z | []
| [
".gitattributes"
]
| sajjad | 0 | |||
sakares/wav2vec2-large-xlsr-thai-demo | 2021-03-22T07:15:18.000Z | [
"pytorch",
"wav2vec2",
"th",
"dataset:common_voice",
"transformers",
"audio",
"automatic-speech-recognition",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0"
]
| automatic-speech-recognition | [
".gitattributes",
"README.md",
"config.json",
"optimizer.pt",
"preprocessor_config.json",
"pytorch_model.bin",
"scheduler.pt",
"special_tokens_map.json",
"tokenizer_config.json",
"trainer_state.json",
"training_args.bin",
"vocab.json"
]
| sakares | 159 | transformers | ---
language: th
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Large Thai by Sakares
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice th
type: common_voice
args: th
metrics:
- name: Test WER
type: wer
value: 44.46
---
# Wav2Vec2-Large-XLSR-53-Thai
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Thai using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
from pythainlp.tokenize import word_tokenize
test_dataset = load_dataset("common_voice", "th", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("sakares/wav2vec2-large-xlsr-thai-demo")
model = Wav2Vec2ForCTC.from_pretrained("sakares/wav2vec2-large-xlsr-thai-demo")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
## For Thai NLP Library, please feel free to check https://pythainlp.github.io/docs/2.2/api/tokenize.html
def th_tokenize(batch):
    batch["sentence"] = " ".join(word_tokenize(batch["sentence"], engine="newmm"))
    return batch
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn).map(th_tokenize)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
Usage script [here](https://colab.research.google.com/drive/1w0VywsBtjrO2pHHPmiPugYI9yeF8nUKg?usp=sharing)
## Evaluation
The model can be evaluated as follows on the Thai test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
from pythainlp.tokenize import word_tokenize
import re
test_dataset = load_dataset("common_voice", "th", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("sakares/wav2vec2-large-xlsr-thai-demo")
model = Wav2Vec2ForCTC.from_pretrained("sakares/wav2vec2-large-xlsr-thai-demo")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
## For Thai NLP Library, please feel free to check https://pythainlp.github.io/docs/2.2/api/tokenize.html
def th_tokenize(batch):
    batch["sentence"] = " ".join(word_tokenize(batch["sentence"], engine="newmm"))
    return batch
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn).map(th_tokenize)
# Run batched inference and decode the predictions
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 44.46 %
Evaluate script [here](https://colab.research.google.com/drive/1WZGtHKWXBztRsuXHIdebf6uoAsp7rTnK?usp=sharing)
## Training
The Common Voice `train`, `validation` datasets were used for training.
The script used for training can be found [here](https://colab.research.google.com/drive/18oUbeZgBGSkz16zC_WOa154QZOdmvjyt?usp=sharing)
|
salesken/clariq_gpt2 | 2021-05-23T12:22:04.000Z | [
"pytorch",
"jax",
"salesken",
"gpt2",
"lm-head",
"causal-lm",
"license:apache-2.0",
"text-generation"
]
| text-generation | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"train_results.json",
"training_args.bin",
"vocab.json"
]
| salesken | 471 |
---
tags:
- salesken
- gpt2
- lm-head
- causal-lm
license: apache-2.0
inference: false
---
The ClariQ challenge [3] was organized as part of the Search-oriented Conversational AI (SCAI) EMNLP workshop in 2020. The main aim of conversational systems is to return an appropriate answer in response to user requests. However, some user requests might be ambiguous. In Information Retrieval (IR) settings such a situation is handled mainly through diversification of the search result page. It is, however, much more challenging in dialogue settings. Hence, we aim to study the following situation for dialogue settings:<br />
A user asks an ambiguous question (an ambiguous question being one to which more than one possible answer can be returned); instead of trying to answer it directly, the system should ask a good clarifying question.
__Query: Serve your models directly from Hugging Face infrastructure and run large scale NLP models in milliseconds with just a few lines of code__
***Top 5 clarifications generated:*** <br />
- are you looking for a suitable cloud platform to run your models on (Score: 0.3862) <br />
- are you looking for a quick test or a more complex model (Score: 0.3364) <br />
- how would you like your nlp model to be used (Score: 0.3249) <br />
- are you looking for a suitable ldl to use as a server or a client (Score: 0.3182) <br />
- how would you like to consume the nlp model (Score: 0.2842) <br />
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("salesken/clariq_gpt2")
model = AutoModelWithLMHead.from_pretrained("salesken/clariq_gpt2")
input_query="Serve your models directly from Hugging Face infrastructure and run large scale NLP models in milliseconds with just a few lines of code"
query= input_query + " ~~ "
input_ids = tokenizer.encode(query.lower(), return_tensors='pt')
sample_outputs = model.generate(input_ids,
do_sample=True,
num_beams=1,
max_length=128,
temperature=0.9,
top_k = 40,
num_return_sequences=10)
clarifications_gen = []
for i in range(len(sample_outputs)):
    r = tokenizer.decode(sample_outputs[i], skip_special_tokens=True).split('||')[0]
    r = r.split(' ~~ ~~')[1]
    if r not in clarifications_gen:
        clarifications_gen.append(r)
print(clarifications_gen)
# to select the top n results:
from sentence_transformers import SentenceTransformer, util
import torch
embedder = SentenceTransformer('paraphrase-distilroberta-base-v1')
corpus = clarifications_gen
corpus_embeddings = embedder.encode(corpus, convert_to_tensor=True)
query = input_query.lower()
query_embedding = embedder.encode(query, convert_to_tensor=True)
cos_scores = util.pytorch_cos_sim(query_embedding, corpus_embeddings)[0]
top_results = torch.topk(cos_scores, k=5)
print("Top clarifications generated :")
for score, idx in zip(top_results[0], top_results[1]):
    print(corpus[idx], "(Score: {:.4f})".format(score))
``` |
|
salesken/content_generation_from_phrases | 2021-05-23T12:23:54.000Z | [
"pytorch",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"salesken",
"license:apache-2.0",
"text-generation"
]
| text-generation | [
".DS_Store",
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
]
| salesken | 538 | transformers |
---
tags: salesken
license: apache-2.0
inference: false
---
We attempted to build an entailment-encouraging text generation model that generates content, given a short phrase.
Some of the generated sentences below, for the phrase "data science beginner", really got us excited about the potential applications:
<b> ['Where can I find a list of questions, tutorials, and resources for getting a data scientist job?
'Do you know of any research articles about how to improve your skills as a Data Science/Data Management Programmer? ',
'What are the pros and cons to having a Data Science/Data Mining Masters? '] .</b>
Utility of the model? Automate your conversational AI training data creation process by feeding some meaningful phrases to the model to generate entailment-encouraging sentences; select the most diverse sentences, generate semantic variations for these using our paraphrase generation model (https://huggingface.co/salesken/paraphrase_generation), and rank the generated sentences for diversity using our NLG ranker model (https://huggingface.co/salesken/paraphrase_diversity_ranker).
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
import pprint
import torch
if torch.cuda.is_available():
    device = torch.device("cuda")
else:
    device = "cpu"
tokenizer = AutoTokenizer.from_pretrained("salesken/content_generation_from_phrases")
model = AutoModelWithLMHead.from_pretrained("salesken/content_generation_from_phrases").to(device)
input_query=["data science beginner"]
query = "<|startoftext|> " + input_query[0] + " ~~"
input_ids = tokenizer.encode(query.lower(), return_tensors='pt').to(device)
sample_outputs = model.generate(input_ids,
do_sample=True,
num_beams=1,
max_length=256,
temperature=0.9,
top_k = 30,
num_return_sequences=100)
content = []
for i in range(len(sample_outputs)):
    r = tokenizer.decode(sample_outputs[i], skip_special_tokens=True).split('||')[0]
    r = r.split(' ~~ ')[1]
    if r not in content:
        content.append(r)
pprint.pprint(content)
```
You may use our ranker model to rank the generated content to encourage diversity.
https://huggingface.co/salesken/paraphrase_diversity_ranker
```
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
import pandas as pd
import numpy as np
rank_tokenizer = AutoTokenizer.from_pretrained("salesken/paraphrase_diversity_ranker")
rank_model = AutoModelForSequenceClassification.from_pretrained("salesken/paraphrase_diversity_ranker")
content_pairs=list(pd.MultiIndex.from_product([input_query, content]))
features = rank_tokenizer(content_pairs, padding=True, truncation=True, return_tensors="pt")
rank_model.eval()
with torch.no_grad():
    scores = rank_model(**features).logits
label_mapping = ['surface_level_variation', 'semantic_variation']
labels = [label_mapping[score_max] for score_max in scores.argmax(dim=1)]
generated_content= np.array(content)[scores[:,1].sort(descending=True).indices].tolist()
```
|
salesken/grammar_correction | 2021-05-23T12:26:50.000Z | [
"pytorch",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"salesken",
"license:apache-2.0",
"text-generation"
]
| text-generation | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
]
| salesken | 153 | transformers | ---
tags: salesken
license: apache-2.0
inference: false
---
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, AutoModelForCausalLM
import torch
if torch.cuda.is_available():
    device = torch.device("cuda")
else:
    device = "cpu"
tokenizer = AutoTokenizer.from_pretrained("salesken/grammar_correction")
model = AutoModelForCausalLM.from_pretrained("salesken/grammar_correction").to(device)
input_query="what be the reason for everyone leave the company"
query= "<|startoftext|> " + input_query + " ~~~"
input_ids = tokenizer.encode(query.lower(), return_tensors='pt').to(device)
sample_outputs = model.generate(input_ids,
do_sample=True,
num_beams=1,
max_length=128,
temperature=0.9,
top_p= 0.7,
top_k = 5,
num_return_sequences=3)
corrected_sentences = []
for i in range(len(sample_outputs)):
    r = tokenizer.decode(sample_outputs[i], skip_special_tokens=True).split('||')[0]
    r = r.split('~~~')[1]
    if r not in corrected_sentences:
        corrected_sentences.append(r)
print(corrected_sentences)
```
|
salesken/natural_rephrase | 2021-05-23T12:30:24.000Z | [
"pytorch",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"license:apache-2.0",
"text-generation"
]
| text-generation | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
]
| salesken | 115 | transformers | ---
license: apache-2.0
inference: false
widget:
- text: "Hey Siri, Send message to mom to say thank you for the delicious dinner yesterday"
---
NLG model trained on the rephrase generation dataset published by Facebook.
Paper: https://research.fb.com/wp-content/uploads/2020/12/Sound-Natural-Content-Rephrasing-in-Dialog-Systems.pdf
Paper abstract:
"We introduce a new task of rephrasing for a more natural virtual assistant. Currently, virtual assistants work in the paradigm of intent-slot tagging and the slot values are directly passed as-is to the execution engine. However, this setup fails in some scenarios such as messaging when the query given by the user needs to be changed before repeating it or sending it to another user. For example, for queries like ‘ask my wife if she can pick up the kids’ or ‘remind me to take my pills’, we need to rephrase the content to ‘can you pick up the kids’ and
‘take your pills’. In this paper, we study the problem of rephrasing with messaging as a use case and release a dataset of 3000 pairs of original query and rephrased query."
Training data:
http://dl.fbaipublicfiles.com/rephrasing/rephrasing_dataset.tar.gz
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("salesken/natural_rephrase")
model = AutoModelWithLMHead.from_pretrained("salesken/natural_rephrase")
Input_query="Hey Siri, Send message to mom to say thank you for the delicious dinner yesterday"
query= Input_query + " ~~ "
input_ids = tokenizer.encode(query.lower(), return_tensors='pt')
sample_outputs = model.generate(input_ids,
do_sample=True,
num_beams=1,
max_length=len(Input_query),
temperature=0.2,
top_k = 10,
num_return_sequences=1)
for i in range(len(sample_outputs)):
    result = tokenizer.decode(sample_outputs[i], skip_special_tokens=True).split('||')[0].split('~~')[1]
    print(result)
```
|
salesken/paraphrase_diversity_ranker | 2021-05-20T20:05:19.000Z | [
"pytorch",
"jax",
"roberta",
"text-classification",
"transformers",
"salesken",
"license:apache-2.0"
]
| text-classification | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"model_args.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
]
| salesken | 525 | transformers | ---
tags: salesken
license: apache-2.0
inference: false
---
We have trained a model to evaluate whether a paraphrase is a semantic variation of the input query or just a surface-level variation. Data augmentation with surface-level variations does not add much value to NLP model training. If the approach to paraphrase generation is "overgenerate and rank", it is important to have a robust model for scoring and ranking paraphrases. NLG metrics like BLEU, BLEURT, GLEU, and METEOR have not proved very effective at scoring paraphrases.
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
import pandas as pd
import numpy as np
tokenizer = AutoTokenizer.from_pretrained("salesken/paraphrase_diversity_ranker")
model = AutoModelForSequenceClassification.from_pretrained("salesken/paraphrase_diversity_ranker")
input_query = ["tough challenges make you stronger."]
paraphrases = [
"tough problems make you stronger",
"tough problems will make you stronger",
"tough challenges make you stronger",
"tough challenges will make you a stronger person",
"tough challenges will make you stronger",
"tough tasks make you stronger",
"the tough task makes you stronger",
"tough stuff makes you stronger",
"if tough times make you stronger",
"the tough part makes you stronger",
"tough issues strengthens you",
"tough shit makes you stronger",
"tough tasks force you to be stronger",
"tough challenge is making you stronger",
"tough problems make you have more strength"]
para_pairs=list(pd.MultiIndex.from_product([input_query, paraphrases]))
features = tokenizer(para_pairs, padding=True, truncation=True, return_tensors="pt")
model.eval()
with torch.no_grad():
scores = model(**features).logits
label_mapping = ['surface_level_variation', 'semantic_variation']
labels = [label_mapping[score_max] for score_max in scores.argmax(dim=1)]
sorted_diverse_paraphrases= np.array(para_pairs)[scores[:,1].sort(descending=True).indices].tolist()
print(sorted_diverse_paraphrases)
# to identify the type of paraphrase (surface-level variation or semantic variation)
print("Paraphrase type detection=====", list(zip(para_pairs, labels)))
```
============================================================================
For more robust results, first filter out the paraphrases that are not semantically similar, using a model trained on NLI and STS tasks, and then apply the ranker.
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from sentence_transformers import SentenceTransformer, util
import torch
import pandas as pd
import numpy as np
tokenizer = AutoTokenizer.from_pretrained("salesken/paraphrase_diversity_ranker")
model = AutoModelForSequenceClassification.from_pretrained("salesken/paraphrase_diversity_ranker")
embedder = SentenceTransformer('stsb-bert-large')
input_query = ["tough challenges make you stronger."]
paraphrases = [
"tough problems make you stronger",
"tough problems will make you stronger",
"tough challenges make you stronger",
"tough challenges will make you a stronger person",
"tough challenges will make you stronger",
"tough tasks make you stronger",
"the tough task makes you stronger",
"tough stuff makes you stronger",
"tough people make you stronger",
"if tough times make you stronger",
"the tough part makes you stronger",
"tough issues strengthens you",
"tough shit makes you stronger",
"tough tasks force you to be stronger",
"tough challenge is making you stronger",
"tough problems make you have more strength"]
corpus_embeddings = embedder.encode(paraphrases, convert_to_tensor=True)
query_embedding = embedder.encode(input_query, convert_to_tensor=True)
cos_scores = util.pytorch_cos_sim(query_embedding, corpus_embeddings)[0]
para_set=np.array(paraphrases)
a=cos_scores.sort(descending=True)
para= para_set[a.indices[a.values>=0.7].cpu()].tolist()
para_pairs=list(pd.MultiIndex.from_product([input_query, para]))
features = tokenizer(para_pairs, padding=True, truncation=True, return_tensors="pt")
model.eval()
with torch.no_grad():
scores = model(**features).logits
label_mapping = ['surface_level_variation', 'semantic_variation']
labels = [label_mapping[score_max] for score_max in scores.argmax(dim=1)]
sorted_diverse_paraphrases= np.array(para)[scores[:,1].sort(descending=True).indices].tolist()
print("Paraphrases sorted by diversity:=======",sorted_diverse_paraphrases)
# to identify the type of paraphrase (surface-level variation or semantic variation)
print("Paraphrase type detection=====", list(zip(para_pairs, labels)))
``` |
salesken/paraphrase_generation | 2021-05-23T12:33:04.000Z | [
"pytorch",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"en",
"transformers",
"license:apache-2.0",
"salesken",
"text-generation"
]
| text-generation | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
]
| salesken | 286 | transformers | ---
language: en
thumbnail: https://salesken.ai/assets/images/logo.png
license: apache-2.0
inference: false
widget:
- text: "every moment is a fresh beginning"
tags: salesken
---
Use this model to generate variations to augment the training data used for NLU systems.
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
import torch
if torch.cuda.is_available():
device = torch.device("cuda")
else :
device = "cpu"
tokenizer = AutoTokenizer.from_pretrained("salesken/paraphrase_generation")
model = AutoModelWithLMHead.from_pretrained("salesken/paraphrase_generation").to(device)
input_query="every moment is a fresh beginning"
query= input_query + " ~~ "
input_ids = tokenizer.encode(query.lower(), return_tensors='pt').to(device)
sample_outputs = model.generate(input_ids,
do_sample=True,
num_beams=1,
max_length=128,
temperature=0.9,
top_p= 0.99,
top_k = 30,
num_return_sequences=40)
paraphrases = []
for i in range(len(sample_outputs)):
r = tokenizer.decode(sample_outputs[i], skip_special_tokens=True).split('||')[0]
r = r.split(' ~~ ')[1]
if r not in paraphrases:
paraphrases.append(r)
print(paraphrases)
```
To evaluate whether a paraphrase is a semantic variation of the input query or just a surface-level variation, and to rank the generated paraphrases, use the following model:
https://huggingface.co/salesken/paraphrase_diversity_ranker
|
salesken/query_wellformedness_score | 2021-05-20T20:07:29.000Z | [
"pytorch",
"jax",
"roberta",
"text-classification",
"dataset:google_wellformed_query",
"transformers",
"salesken",
"license:apache-2.0"
]
| text-classification | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"model_args.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
]
| salesken | 39 | transformers | ---
tags: salesken
license: apache-2.0
inference: true
datasets: google_wellformed_query
widget:
- text: "what was the reason for everyone for leave the company"
---
This model evaluates the wellformedness (non-fragment, grammatically correct) score of a sentence. The model is case-sensitive and penalises incorrect case and grammar as well.
For example, the sentences `['She is presenting a paper tomorrow', 'she is presenting a paper tomorrow', 'She present paper today']` receive scores of approximately `[[0.8917], [0.4270], [0.0134]]`.
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
tokenizer = AutoTokenizer.from_pretrained("salesken/query_wellformedness_score")
model = AutoModelForSequenceClassification.from_pretrained("salesken/query_wellformedness_score")
sentences = [' what was the reason for everyone to leave the company ',
' What was the reason behind everyone leaving the company ',
' why was everybody leaving the company ',
' what was the reason to everyone leave the company ',
' what be the reason for everyone to leave the company ',
' what was the reasons for everyone to leave the company ',
' what were the reasons for everyone to leave the company ']
features = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
model.eval()
with torch.no_grad():
scores = model(**features).logits
print(scores)
```
|
salesken/text_generate | 2021-05-23T12:38:21.000Z | [
"pytorch",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"salesken",
"text-generation"
]
| text-generation | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"tokenizer.json",
"vocab.json"
]
| salesken | 64 | transformers | ---
tags: salesken
widget:
- text: "Which name is also used to describe the Amazon rainforest in English? "
---
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
import torch
if torch.cuda.is_available():
device = torch.device("cuda")
else :
device = "cpu"
tokenizer = AutoTokenizer.from_pretrained("salesken/text_generate")
model = AutoModelWithLMHead.from_pretrained("salesken/text_generate").to(device)
input_query="tough challenges make you stronger. "
input_ids = tokenizer.encode(input_query.lower(), return_tensors='pt').to(device)
sample_outputs = model.generate(input_ids,
do_sample=True,
num_beams=1,
max_length=1024,
temperature=0.99,
top_k = 10,
num_return_sequences=1)
for i in range(len(sample_outputs)):
print(tokenizer.decode(sample_outputs[i], skip_special_tokens=True))
``` |
salilpn/wav2vec2-large-xlsr-53-marathi | 2021-04-10T15:13:54.000Z | []
| [
".gitattributes"
]
| salilpn | 0 | |||
salti/AraElectra-base-finetuned-ARCD | 2021-01-29T20:39:31.000Z | [
"pytorch",
"electra",
"question-answering",
"ar",
"dataset:arcd",
"transformers"
]
| question-answering | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| salti | 19 | transformers | ---
language:
- ar
datasets:
- arcd
widget:
- text: "أين يعيش محمد ؟"
context: "اسمي محمد وأنا أعيش في سوريا"
- text: "ما العدد الذري للهيدروجين ؟"
context: "الهيدروجين هو عنصر كيميائي عدده الذري 1 ، وهو غاز عديم الرائحة واللون وهو سريع الاشتعال"
- text: "ما خواص الهيدروجين ؟"
context: "الهيدروجين هو عنصر كيميائي عدده الذري 1 ، وهو غاز عديم الرائحة واللون وهو سريع الاشتعال"
---
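The widget examples above can be reproduced locally with the standard `question-answering` pipeline. This is a minimal sketch, not part of the original card:
```python
from transformers import pipeline

# Load the fine-tuned AraElectra QA model published above
qa = pipeline("question-answering", model="salti/AraElectra-base-finetuned-ARCD")

result = qa(
    question="أين يعيش محمد ؟",
    context="اسمي محمد وأنا أعيش في سوريا",
)
print(result)  # dict with 'answer', 'score', 'start', 'end'
```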
|
salti/bert-base-multilingual-cased-finetuned-squad | 2021-05-19T01:26:36.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"question-answering",
"multilingual",
"dataset:squad",
"dataset:arcd",
"dataset:xquad",
"transformers"
]
| question-answering | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| salti | 548 | transformers | ---
language:
- multilingual
datasets:
- squad
- arcd
- xquad
---
# Multilingual BERT fine-tuned on SQuADv1.1
[**WandB run link**](https://wandb.ai/salti/mBERT_QA/runs/wkqzhrp2)
**GPU**: Tesla P100-PCIE-16GB
## Training Arguments
```python
max_seq_length = 512
doc_stride = 256
max_answer_length = 64
batch_size = 16
gradient_accumulation_steps = 2
learning_rate = 5e-5
weight_decay = 3e-7
num_train_epochs = 3
warmup_ratio = 0.1
fp16 = True
fp16_opt_level = "O1"
seed = 0
```
## Results
| EM | F1 |
| :----: | :----: |
| 81.731 | 89.009 |
## Zero-shot performance
### on ARCD
| EM | F1 |
| :----: | :----: |
| 20.655 | 48.051 |
### on XQuAD
| Language | EM | F1 |
| :--------: | :----: | :----: |
| Arabic | 42.185 | 57.803 |
| English | 73.529 | 85.01 |
| German | 55.882 | 72.555 |
| Greek | 45.21 | 62.207 |
| Spanish | 58.067 | 76.406 |
| Hindi | 40.588 | 55.29 |
| Russian | 55.126 | 71.617 |
| Thai | 26.891 | 39.965 |
| Turkish | 34.874 | 51.138 |
| Vietnamese | 47.983 | 68.125 |
| Chinese | 47.395 | 58.928 |
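A minimal usage sketch (not part of the original card), assuming the standard `question-answering` pipeline and the model id published above:
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="salti/bert-base-multilingual-cased-finetuned-squad",
)

# Hypothetical English example; the model also covers the zero-shot languages listed above.
result = qa(
    question="Where does Sarah live?",
    context="My name is Sarah and I live in London.",
)
print(result["answer"])  # expected: "London"
```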
|
salti/xlm-roberta-large-arabic_qa | 2020-08-16T06:11:43.000Z | [
"pytorch",
"xlm-roberta",
"question-answering",
"transformers"
]
| question-answering | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"sentencepiece.bpe.model",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin"
]
| salti | 98 | transformers | |
sambelanz/cruella | 2021-06-03T16:05:46.000Z | []
| [
".gitattributes",
"README.md"
]
| sambelanz | 0 | |||
sambelanz/dvdtrimble | 2021-05-08T05:38:28.000Z | []
| [
".gitattributes",
"README.md"
]
| sambelanz | 0 | |||
sambelanz/ggfilm | 2021-04-27T09:06:04.000Z | []
| [
".gitattributes",
"README.md"
]
| sambelanz | 0 | |||
sambelanz/hu-hd | 2021-03-14T15:58:01.000Z | []
| [
".gitattributes",
"README.md"
]
| sambelanz | 0 | https://www.artandhealing.org/community/profile/fontos-vagy-nekem-teljes-film/
https://www.artandhealing.org/community/profile/raya-es-az-utolso-sarkany-hu/
https://www.artandhealing.org/community/profile/cherry-teljes-film-magyarul/
https://www.artandhealing.org/community/profile/tom-es-jerry-teljes-film-hu/
https://www.artandhealing.org/community/profile/willy-csodaorszaga-teljes-film/
|
||
sambelanz/jklstreaming | 2021-05-09T00:53:43.000Z | []
| [
".gitattributes",
"README.md"
]
| sambelanz | 0 | |||
sambelanz/nlfilms | 2021-04-24T23:10:25.000Z | []
| [
".gitattributes",
"readme"
]
| sambelanz | 0 | |||
sambelanz/sffilm | 2021-05-02T05:39:14.000Z | []
| [
".gitattributes",
"README.md"
]
| sambelanz | 0 | |||
sambelanz/ss | 2021-05-19T17:10:24.000Z | []
| [
".gitattributes",
"README.md"
]
| sambelanz | 0 | |||
sambhuhug/First | 2021-04-11T08:18:06.000Z | []
| [
".gitattributes"
]
| sambhuhug | 0 | |||
sammy786/wav2vec2-large-xlsr-mongolian | 2021-04-02T11:36:53.000Z | [
"pytorch",
"wav2vec2",
"mn",
"dataset:common_voice",
"transformers",
"audio",
"automatic-speech-recognition",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0"
]
| automatic-speech-recognition | [
".gitattributes",
"README.md",
"config.json",
"preprocessor_config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| sammy786 | 11 | transformers | ---
language: mn
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Mongolian by Salim Shaikh
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice mn
type: common_voice
args: mn
metrics:
- name: Test WER
type: wer
value: 38.14
---
# Wav2Vec2-Large-XLSR-53-Mongolian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Mongolian using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torchaudio
from datasets import load_dataset, load_metric
from transformers import (
Wav2Vec2ForCTC,
Wav2Vec2Processor,
)
import torch
import re
import sys
model_name = "sammy786/wav2vec2-large-xlsr-mongolian"
device = "cuda"
chars_to_ignore_regex = '[\\,\\?\\.\\!\\-\\;\\:\\"\\“\\%\\‘\\”\\�\\)\\(\\*)]'
model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device)
processor = Wav2Vec2Processor.from_pretrained(model_name)
ds = load_dataset("common_voice", "mn", split="test", data_dir="./cv-corpus-6.1-2020-12-11")
resampler = torchaudio.transforms.Resample(orig_freq=48_000, new_freq=16_000)
def map_to_array(batch):
speech, _ = torchaudio.load(batch["path"])
batch["speech"] = resampler.forward(speech.squeeze(0)).numpy()
batch["sampling_rate"] = resampler.new_freq
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() + " "
return batch
ds = ds.map(map_to_array)
def map_to_pred(batch):
features = processor(batch["speech"], sampling_rate=batch["sampling_rate"][0], padding=True, return_tensors="pt")
input_values = features.input_values.to(device)
attention_mask = features.attention_mask.to(device)
with torch.no_grad():
logits = model(input_values, attention_mask=attention_mask).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["predicted"] = processor.batch_decode(pred_ids)
batch["target"] = batch["sentence"]
return batch
result = ds.map(map_to_pred, batched=True, batch_size=1, remove_columns=list(ds.features.keys()))
wer = load_metric("wer")
print(wer.compute(predictions=result["predicted"], references=result["target"]))
```
**Test Result**: 38.14 %
|
sampathkethineedi/bert-topic-sentiment | 2021-05-20T04:37:38.000Z | [
"pytorch",
"bert",
"transformers"
]
| [
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| sampathkethineedi | 19 | transformers | ||
sampathkethineedi/industry-classification-api | 2021-05-19T01:29:31.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"en",
"transformers",
"industry tags",
"buisiness description",
"multi-label",
"classification",
"inference"
]
| text-classification | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| sampathkethineedi | 38 | transformers | ---
language: "en"
thumbnail: "https://huggingface.co/sampathkethineedi"
widget:
- text: "3rd Rock Multimedia Limited is an India-based event management company. The Company conducts film promotions, international events, corporate events and cultural events. The Company's entertainment properties include 3rd Rock Fashion Fiesta and 3rd Rock Calendar. The Company's association with various events in Mumbai includes Bryan Adam's Live in Concert, Michael Learns to Rock (MLTR) Eternity Concert, 3rd Rock's Calendar Launch 2011-2012, Airtel I Phone 4 Launch and ISPL Cricket Tournament 2012."
- text: "Stellar Capital Services Limited is an India-based non-banking financial company. The Company is mainly engaged in the business of providing loans and advances and investing in shares, both quoted and unquoted. The Company's segments are trading in share and securities, and advancing of loans. The trading in share and securities segment includes trading in quoted equity shares, mutual funds, bonds, futures and options, and currency. The Company's financial services include inter corporate deposits, financial consultancy, retail initial public offering (IPO) funding, loan against property, management consultancy, personal loans and unsecured loans."
- text: "Chemcrux Enterprises Ltd is a manufacturer of intermediates for bulk drugs, and dyes and pigments. The Company's products include 2 Chloro Benzoic Acid; 3 Chloro Benzoic Acid; 4 Chloro Benzoic Acid; 4 Nitro Benzoic Acid; 2,4 Dichloro Benzoic Acid; 4 Chloro 3 Nitro Benzoic Acid; 2 Chloro 5 Nitro Benzoic Acid; Meta Nitro Benzoic Acid; Lassamide, and Meta Chloro Per Benzoic Acid. The Company also offers various products on custom requirements, including Aceturic Acid; Meta Chloro Benzoyl Chloride; 3-Nitro-4-Methoxy Benzoic Acid; 2 Amino 5 Sulfonamide Benzoic Acid; 3,4 Dichloro Benzoic Acid; 5-Nitro Salycylic Acid, and 4-Chloro Benzoic Acid -3-Sulfonamide. The Company's plant has a capacity of 120 metric tons per month. The Company exports to Europe, Japan, the Middle East and East Africa. It is engaged in development and execution of various processes, such as High Pressure Oxidation, Nitration and Chloro Sulfonation."
tags:
- bert
- pytorch
- text-classification
- industry tags
- buisiness description
- multi-label
- classification
- inference
license: "mit"
---
# industry-classification-api
## Model description
BERT Model to classify a business description into one of **62 industry tags**.
Trained on 7000 samples of Business Descriptions and associated labels of companies in India.
## How to use
PyTorch only
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
tokenizer = AutoTokenizer.from_pretrained("sampathkethineedi/industry-classification-api")
model = AutoModelForSequenceClassification.from_pretrained("sampathkethineedi/industry-classification-api")
industry_tags = pipeline('sentiment-analysis', model=model, tokenizer=tokenizer)
industry_tags("Stellar Capital Services Limited is an India-based non-banking financial company ... loan against property, management consultancy, personal loans and unsecured loans.")
'''Output'''
[{'label': 'Consumer Finance', 'score': 0.9841355681419373}]
```
## Limitations and bias
Training data is only for Indian companies
|
sampathkethineedi/industry-classification | 2020-07-16T15:27:38.000Z | [
"pytorch",
"tf",
"distilbert",
"text-classification",
"en",
"transformers",
"tensorflow",
"industry",
"buisiness",
"description",
"multi-class",
"classification"
]
| text-classification | [
".gitattributes",
"README.md",
"config.json",
"config_new.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
]
| sampathkethineedi | 160 | transformers | ---
language: "en"
thumbnail: "https://huggingface.co/sampathkethineedi"
tags:
- distilbert
- pytorch
- tensorflow
- text-classification
- industry
- buisiness
- description
- multi-class
- classification
license: "mit"
inference: false
---
# industry-classification
## Model description
DistilBERT Model to classify a business description into one of **62 industry tags**.
Trained on 7000 samples of Business Descriptions and associated labels of companies in India.
## How to use
PyTorch and TF models available
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
tokenizer = AutoTokenizer.from_pretrained("sampathkethineedi/industry-classification")
model = AutoModelForSequenceClassification.from_pretrained("sampathkethineedi/industry-classification")
industry_tags = pipeline('sentiment-analysis', model=model, tokenizer=tokenizer)
industry_tags("Stellar Capital Services Limited is an India-based non-banking financial company ... loan against property, management consultancy, personal loans and unsecured loans.")
'''Output'''
[{'label': 'Consumer Finance', 'score': 0.9841355681419373}]
```
## Limitations and bias
Training data is only for Indian companies
|
samrawal/bert-base-uncased_clinical-ner | 2021-05-19T01:32:48.000Z | [
"pytorch",
"jax",
"bert",
"token-classification",
"transformers"
]
| token-classification | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"optimizer.pt",
"pytorch_model.bin",
"scheduler.pt",
"test_results.p",
"tokenizer_config.json",
"trainer_state.json",
"training_args.bin",
"vocab.txt"
]
| samrawal | 401 | transformers | A Named Entity Recognition model for clinical entities (`problem`, `treatment`, `test`)
The model has been trained on the [i2b2 (now n2c2) dataset](https://n2c2.dbmi.hms.harvard.edu) for the 2010 - Relations task. Please visit the n2c2 site to request access to the dataset. |
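A minimal usage sketch (not part of the original card), assuming the standard token-classification pipeline; the exact entity label strings come from the model's config:
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

tokenizer = AutoTokenizer.from_pretrained("samrawal/bert-base-uncased_clinical-ner")
model = AutoModelForTokenClassification.from_pretrained("samrawal/bert-base-uncased_clinical-ner")

ner = pipeline("ner", model=model, tokenizer=tokenizer)

# Hypothetical clinical sentence; tokens should be tagged as problem / treatment / test spans.
print(ner("The patient was given aspirin for chest pain and an ECG was ordered."))
```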
samrawal/bert-large-uncased_med-ner | 2021-05-20T04:38:49.000Z | [
"pytorch",
"jax",
"bert",
"token-classification",
"transformers"
]
| token-classification | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| samrawal | 81 | transformers | A Named Entity Recognition model for medication entities (`medication name`, `dosage`, `duration`, `frequency`, `reason`).
The model has been trained on the i2b2 (now n2c2) dataset for the 2009 - Medication task. Please visit the n2c2 site to request access to the dataset. |
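A minimal usage sketch (not part of the original card), assuming the standard token-classification pipeline; the exact entity label strings come from the model's config:
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

tokenizer = AutoTokenizer.from_pretrained("samrawal/bert-large-uncased_med-ner")
model = AutoModelForTokenClassification.from_pretrained("samrawal/bert-large-uncased_med-ner")

ner = pipeline("ner", model=model, tokenizer=tokenizer)

# Hypothetical prescription sentence; tokens should be tagged with medication name, dosage, frequency, duration, reason.
print(ner("Take ibuprofen 200 mg twice a day for five days for lower back pain."))
```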
samyou/demo | 2021-01-14T14:37:05.000Z | []
| [
".gitattributes"
]
| samyou | 0 | |||
sanayAI/12 | 2021-05-20T04:39:45.000Z | [
"bert",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"vocab.txt"
]
| sanayAI | 14 | transformers | |
sanayAI/bert-base-sanay-uncased | 2021-05-20T04:39:49.000Z | [
"bert",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"eval_results_lm.txt",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| sanayAI | 14 | transformers | |
sanayAI/bert-sanay-2 | 2021-05-20T04:39:54.000Z | [
"bert",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"eval_results_lm.txt",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| sanayAI | 22 | transformers | |
sanayAI/model_fa | 2021-05-20T04:40:46.000Z | [
"tf",
"bert",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"tf_model.h5",
"vocab.txt"
]
| sanayAI | 13 | transformers | |
sanayAI/output | 2021-05-20T04:41:38.000Z | [
"pytorch",
"jax",
"bert",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"eval_results_lm.txt",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| sanayAI | 22 | transformers | |
sanayAI/parsbert-base-sanay-uncased | 2021-05-20T04:43:22.000Z | [
"pytorch",
"jax",
"bert",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"eval_results_lm.txt",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| sanayAI | 19 | transformers | |
sanayAI/pretrain_model | 2021-05-20T04:43:55.000Z | [
"bert",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"eval_results_lm.txt",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| sanayAI | 20 | transformers | |
sanayAI/sanay-bert | 2021-05-20T04:44:48.000Z | [
"pytorch",
"jax",
"bert",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"eval_results_lm.txt",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| sanayAI | 25 | transformers | |
sanayAI/sanayBERT | 2021-05-20T04:45:22.000Z | [
"bert",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"eval_results.txt",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| sanayAI | 16 | transformers | |
sanayAI/sanayBERT_model | 2021-05-20T04:45:27.000Z | [
"bert",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"eval_results.txt",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| sanayAI | 36 | transformers | |
sanayAI/sanayBERT_model_V1 | 2021-05-20T04:46:35.000Z | [
"pytorch",
"jax",
"bert",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"eval_results.txt",
"flax_model.msgpack",
"model_args.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| sanayAI | 18 | transformers | |
sangrimlee/bert-base-multilingual-cased-korquad | 2021-05-20T04:47:57.000Z | [
"pytorch",
"jax",
"bert",
"question-answering",
"transformers"
]
| question-answering | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| sangrimlee | 134 | transformers | |
sangrimlee/bert-base-multilingual-cased-nsmc | 2021-06-02T18:46:18.000Z | [
"pytorch",
"bert",
"text-classification",
"ko",
"transformers"
]
| text-classification | [
".gitattributes",
"README.md",
"all_results.json",
"config.json",
"eval_results.json",
"predict_results.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json",
"train_results.json",
"trainer_state.json",
"training_args.bin",
"vocab.txt"
]
| sangrimlee | 48 | transformers | ---
language: ko
---
# BERT multilingual basecased finetuned with NSMC
This model is a fine-tuned checkpoint of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased), trained on [NSMC (Naver Sentiment Movie Corpus)](https://github.com/e9t/nsmc).
## Usage
You can use this model directly with a pipeline for sentiment-analysis:
```python
>>> from transformers import pipeline
>>> classifier = pipeline(
"sentiment-analysis", model="sangrimlee/bert-base-multilingual-cased-nsmc"
)
>>> classifier("흠...포스터보고 초딩영화줄....오버연기조차 가볍지 않구나.")
[{'label': 'negative', 'score': 0.9642567038536072}]
>>> classifier("액션이 없는데도 재미 있는 몇안되는 영화")
[{'label': 'positive', 'score': 0.9970554113388062}]
```
|
sangrimlee/mt5-small-ans-ext | 2021-03-03T12:14:59.000Z | [
"pytorch",
"mt5",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"added_tokens.json",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json"
]
| sangrimlee | 10 | transformers | |
sangrimlee/mt5-small-e2e-qg | 2021-03-10T04:21:12.000Z | [
"pytorch",
"mt5",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"added_tokens.json",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json"
]
| sangrimlee | 16 | transformers | |
sangrimlee/mt5-small-multitask | 2021-03-30T00:50:38.000Z | [
"pytorch",
"mt5",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"added_tokens.json",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json"
]
| sangrimlee | 9 | transformers | |
sangrimlee/mt5-small-qg-hl | 2021-03-03T01:32:58.000Z | [
"pytorch",
"mt5",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"added_tokens.json",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json"
]
| sangrimlee | 16 | transformers | |
santhoshkolloju/ans_gen | 2020-07-07T10:57:52.000Z | [
"pytorch",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"added_tokens.json",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json"
]
| santhoshkolloju | 32 | transformers | |
santhoshkolloju/ans_gen2 | 2020-07-07T11:19:42.000Z | [
"pytorch",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"added_tokens.json",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json"
]
| santhoshkolloju | 13 | transformers | |
santhoshkolloju/ques_gen | 2020-07-07T10:36:21.000Z | [
"pytorch",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"added_tokens.json",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json"
]
| santhoshkolloju | 21 | transformers | |
santhoshkolloju/t5_qg_model_with_answer2 | 2020-07-02T08:55:13.000Z | [
"pytorch",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json"
]
| santhoshkolloju | 21 | transformers | |
santhoshkolloju/t5_qg_multi2 | 2020-07-05T11:13:54.000Z | [
"pytorch",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"config.json",
"pytorch_model.bin"
]
| santhoshkolloju | 9 | transformers | |
santhoshkolloju/t5_qg_multi3 | 2020-07-06T14:45:44.000Z | [
"pytorch",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"config.json",
"pytorch_model.bin"
]
| santhoshkolloju | 9 | transformers | |
sap-ai-research/BERT-Large-Contrastive-Self-Supervised-ACL2020 | 2021-05-20T04:50:14.000Z | [
"pytorch",
"jax",
"bert",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"added_tokens.json",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| sap-ai-research | 23 | transformers | |
sapthrishi007/xlm-roberta-large-mnli-xnli | 2020-12-09T17:38:49.000Z | []
| [
".gitattributes"
]
| sapthrishi007 | 0 | |||
sarahgritzka/bert-for-patents | 2021-06-01T09:00:43.000Z | []
| [
".gitattributes"
]
| sarahgritzka | 0 | |||
sarahlintang/Documents | 2020-08-07T16:32:36.000Z | [
"transformers"
]
| [
".DS_Store",
".gitattributes",
"bert_model.ckpt.meta",
"config.json"
]
| sarahlintang | 12 | transformers | ||
sarahlintang/IndoBERT | 2021-05-20T04:51:45.000Z | [
"pytorch",
"jax",
"bert",
"id",
"dataset:oscar",
"transformers"
]
| [
".gitattributes",
"README.md",
"bert_model.ckpt.data-00000-of-00001",
"bert_model.ckpt.index",
"bert_model.ckpt.meta",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"vocab.txt"
]
| sarahlintang | 238 | transformers | ---
language: id
datasets:
- oscar
---
# IndoBERT (Indonesian BERT Model)
## Model description
IndoBERT is a pre-trained language model based on the BERT architecture for the Indonesian language.
This is the base-uncased version, which uses the bert-base configuration.
## Intended uses & limitations
#### How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("sarahlintang/IndoBERT")
model = AutoModel.from_pretrained("sarahlintang/IndoBERT")
tokenizer.encode("hai aku mau makan.")
[2, 8078, 1785, 2318, 1946, 18, 4]
```
## Training data
This model was pre-trained on 16 GB of raw text (~2 B words) from the OSCAR corpus (https://oscar-corpus.com/).
It follows the bert-base configuration and has a 32,000-token vocabulary.
## Training procedure
The model was trained using Google’s original TensorFlow code on an eight-core Google Cloud TPU v2.
We used a Google Cloud Storage bucket for persistent storage of training data and models.
## Eval results
We evaluated this model on three Indonesian NLP downstream tasks:
- extractive summarization
- sentiment analysis
- part-of-speech tagging
The model outperformed multilingual BERT on all three downstream tasks.
|
|
sarahmul/SnoBERT | 2020-12-10T04:20:15.000Z | []
| [
".gitattributes"
]
| sarahmul | 0 | |||
sarahmul/SnoSBERT | 2020-12-10T04:49:35.000Z | []
| [
".gitattributes"
]
| sarahmul | 0 | |||
sarim/myModel | 2021-03-20T12:53:37.000Z | [
"pytorch",
"distilbert",
"transformers"
]
| [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| sarim | 10 | transformers | first commit |
|
sarnikowski/convbert-medium-small-da-cased | 2021-03-18T22:27:12.000Z | [
"pytorch",
"tf",
"convbert",
"pretraining",
"da",
"arxiv:2008.02496",
"transformers",
"license:cc-by-4.0"
]
| [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
]
| sarnikowski | 485 | transformers | ---
language: da
license: cc-by-4.0
---
# Danish ConvBERT medium small (cased)
[ConvBERT](https://arxiv.org/abs/2008.02496) model pretrained on a custom Danish corpus (~17.5gb).
For details regarding data sources and training procedure, along with benchmarks on downstream tasks, go to: https://github.com/sarnikowski/danish_transformers
## Usage
```python
from transformers import ConvBertTokenizer, ConvBertModel
tokenizer = ConvBertTokenizer.from_pretrained("sarnikowski/convbert-medium-small-da-cased")
model = ConvBertModel.from_pretrained("sarnikowski/convbert-medium-small-da-cased")
```
## Questions?
If you have any questions feel free to open an issue on the [danish_transformers](https://github.com/sarnikowski/danish_transformers) repository, or send an email to [email protected]
|
|
sarnikowski/convbert-small-da-cased | 2021-03-01T22:15:15.000Z | [
"pytorch",
"tf",
"convbert",
"pretraining",
"da",
"arxiv:2008.02496",
"transformers",
"license:cc-by-4.0"
]
| [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
]
| sarnikowski | 14 | transformers | ---
language: da
license: cc-by-4.0
---
# Danish ConvBERT small (cased)
[ConvBERT](https://arxiv.org/abs/2008.02496) model pretrained on a custom Danish corpus (~17.5gb).
For details regarding data sources and training procedure, along with benchmarks on downstream tasks, go to: https://github.com/sarnikowski/danish_transformers
## Usage
```python
from transformers import ConvBertTokenizer, ConvBertModel
tokenizer = ConvBertTokenizer.from_pretrained("sarnikowski/convbert-small-da-cased")
model = ConvBertModel.from_pretrained("sarnikowski/convbert-small-da-cased")
```
## Questions?
If you have any questions feel free to open an issue on the [danish_transformers](https://github.com/sarnikowski/danish_transformers) repository, or send an email to [email protected]
|
|
sarnikowski/electra-small-discriminator-da-256-cased | 2020-12-11T22:01:11.000Z | [
"pytorch",
"tf",
"electra",
"pretraining",
"da",
"arxiv:2003.10555",
"transformers",
"license:cc-by-4.0"
]
| [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
]
| sarnikowski | 15 | transformers | ---
language: da
license: cc-by-4.0
---
# Danish ELECTRA small (cased)
An [ELECTRA](https://arxiv.org/abs/2003.10555) model pretrained on a custom Danish corpus (~17.5gb).
For details regarding data sources and training procedure, along with benchmarks on downstream tasks, go to: https://github.com/sarnikowski/danish_transformers/tree/main/electra
## Usage
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("sarnikowski/electra-small-discriminator-da-256-cased")
model = AutoModel.from_pretrained("sarnikowski/electra-small-discriminator-da-256-cased")
```
## Questions?
If you have any questions feel free to open an issue on the [danish_transformers](https://github.com/sarnikowski/danish_transformers) repository, or send an email to [email protected]
|
|
sarnikowski/electra-small-generator-da-256-cased | 2021-01-23T19:38:37.000Z | [
"pytorch",
"tf",
"electra",
"masked-lm",
"da",
"arxiv:2003.10555",
"transformers",
"license:cc-by-4.0",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
]
| sarnikowski | 8 | transformers | ---
language: da
license: cc-by-4.0
---
# Danish ELECTRA small (cased)
An [ELECTRA](https://arxiv.org/abs/2003.10555) model pretrained on a custom Danish corpus (~17.5gb).
For details regarding data sources and training procedure, along with benchmarks on downstream tasks, go to: https://github.com/sarnikowski/danish_transformers/tree/main/electra
## Usage
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("sarnikowski/electra-small-generator-da-256-cased")
model = AutoModel.from_pretrained("sarnikowski/electra-small-generator-da-256-cased")
```
## Questions?
If you have any questions feel free to open an issue in the [danish_transformers](https://github.com/sarnikowski/danish_transformers) repository, or send an email to [email protected]
|
sarrouti/t5-cord19 | 2020-12-07T21:36:49.000Z | []
| [
".gitattributes"
]
| sarrouti | 0 |