modelId (string, 4–112 chars) | lastModified (string, 24 chars) | tags (list) | pipeline_tag (string, 21 classes) | files (list) | publishedBy (string, 2–37 chars) | downloads_last_month (int32, 0–9.44M) | library (string, 15 classes) | modelCard (string, 0–100k chars)
---|---|---|---|---|---|---|---|---
huseinzol05/electra-small-generator-bahasa-cased
|
2020-12-11T21:44:30.000Z
|
[
"pytorch",
"electra",
"masked-lm",
"ms",
"transformers",
"fill-mask"
] |
fill-mask
|
[
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
] |
huseinzol05
| 34 |
transformers
|
---
language: ms
---
# Bahasa ELECTRA Model
Pretrained ELECTRA small language model for Malay and Indonesian.
## Pretraining Corpus
`electra-small-generator-bahasa-cased` model was pretrained on ~1.8 billion words. We trained on both standard and social media language structures, and below is the list of data we trained on:
1. [dumping wikipedia](https://github.com/huseinzol05/Malaya-Dataset#wikipedia-1).
2. [local instagram](https://github.com/huseinzol05/Malaya-Dataset#instagram).
3. [local twitter](https://github.com/huseinzol05/Malaya-Dataset#twitter-1).
4. [local news](https://github.com/huseinzol05/Malaya-Dataset#public-news).
5. [local parliament text](https://github.com/huseinzol05/Malaya-Dataset#parliament).
6. [local singlish/manglish text](https://github.com/huseinzol05/Malaya-Dataset#singlish-text).
7. [IIUM Confession](https://github.com/huseinzol05/Malaya-Dataset#iium-confession).
8. [Wattpad](https://github.com/huseinzol05/Malaya-Dataset#wattpad).
9. [Academia PDF](https://github.com/huseinzol05/Malaya-Dataset#academia-pdf).
Preprocessing steps can be reproduced from here: [Malaya/pretrained-model/preprocess](https://github.com/huseinzol05/Malaya/tree/master/pretrained-model/preprocess).
## Pretraining details
- This model was trained using Google ELECTRA's GitHub [repository](https://github.com/google-research/electra) on a single Tesla V100 with 32GB VRAM.
- All steps can be reproduced from here: [Malaya/pretrained-model/electra](https://github.com/huseinzol05/Malaya/tree/master/pretrained-model/electra).
## Load Pretrained Model
You can use this model by installing `torch` or `tensorflow` and the Hugging Face `transformers` library, then loading it directly like this:
```python
from transformers import ElectraTokenizer, ElectraModel
model = ElectraModel.from_pretrained('huseinzol05/electra-small-generator-bahasa-cased')
tokenizer = ElectraTokenizer.from_pretrained(
'huseinzol05/electra-small-generator-bahasa-cased',
do_lower_case = False,
)
```
## Example using AutoModelWithLMHead
```python
from transformers import ElectraTokenizer, AutoModelWithLMHead, pipeline
model = AutoModelWithLMHead.from_pretrained('huseinzol05/electra-small-generator-bahasa-cased')
tokenizer = ElectraTokenizer.from_pretrained(
'huseinzol05/electra-small-generator-bahasa-cased',
do_lower_case = False,
)
fill_mask = pipeline('fill-mask', model = model, tokenizer = tokenizer)
print(fill_mask('makan ayam dengan [MASK]'))
```
Output is,
```text
[{'sequence': '[CLS] makan ayam dengan ayam [SEP]',
'score': 0.08424834907054901,
'token': 3255},
{'sequence': '[CLS] makan ayam dengan rendang [SEP]',
'score': 0.064150370657444,
'token': 6288},
{'sequence': '[CLS] makan ayam dengan nasi [SEP]',
'score': 0.033446669578552246,
'token': 2533},
{'sequence': '[CLS] makan ayam dengan kucing [SEP]',
'score': 0.02803465723991394,
'token': 3577},
{'sequence': '[CLS] makan ayam dengan telur [SEP]',
'score': 0.026627106592059135,
'token': 6350}]
```
## Results
For further details on model performance, check out the accuracy page from Malaya, https://malaya.readthedocs.io/en/latest/Accuracy.html, where we compare against traditional models.
## Acknowledgement
Thanks to [Im Big](https://www.facebook.com/imbigofficial/), [LigBlou](https://www.facebook.com/ligblou), [Mesolitica](https://mesolitica.com/) and [KeyReply](https://www.keyreply.com/) for sponsoring AWS, Google and GPU clouds to train ELECTRA for Bahasa.
|
huseinzol05/gpt2-117M-bahasa-cased
|
2021-05-23T05:28:53.000Z
|
[
"pytorch",
"jax",
"gpt2",
"ms",
"transformers"
] |
[
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
] |
huseinzol05
| 40 |
transformers
|
---
language: ms
---
# Bahasa GPT2 Model
Pretrained GPT2 117M model for Malay.
## Pretraining Corpus
`gpt2-117M-bahasa-cased` model was pretrained on ~0.9 billion words. We trained on standard language structure only, and below is the list of data we trained on:
1. [dumping wikipedia](https://github.com/huseinzol05/Malaya-Dataset#wikipedia-1).
2. [local news](https://github.com/huseinzol05/Malaya-Dataset#public-news).
3. [local parliament text](https://github.com/huseinzol05/Malaya-Dataset#parliament).
4. [local singlish/manglish text](https://github.com/huseinzol05/Malaya-Dataset#singlish-text).
5. [IIUM Confession](https://github.com/huseinzol05/Malaya-Dataset#iium-confession).
6. [Wattpad](https://github.com/huseinzol05/Malaya-Dataset#wattpad).
7. [Academia PDF](https://github.com/huseinzol05/Malaya-Dataset#academia-pdf).
8. [Common-Crawl](https://github.com/huseinzol05/malaya-dataset#common-crawl).
Preprocessing steps can be reproduced from here: [Malaya/pretrained-model/preprocess](https://github.com/huseinzol05/Malaya/tree/master/pretrained-model/preprocess).
## Pretraining details
- This model was trained using OpenAI GPT-2's GitHub [repository](https://github.com/openai/gpt-2) on a v3-8 TPU.
- All steps can be reproduced from here: [Malaya/pretrained-model/gpt2](https://github.com/huseinzol05/Malaya/tree/master/pretrained-model/gpt2).
## Load Pretrained Model
You can use this model by installing `torch` or `tensorflow` and the Hugging Face `transformers` library, then loading it directly like this:
```python
from transformers import GPT2Tokenizer, GPT2Model
model = GPT2Model.from_pretrained('huseinzol05/gpt2-117M-bahasa-cased')
tokenizer = GPT2Tokenizer.from_pretrained(
'huseinzol05/gpt2-117M-bahasa-cased',
)
```
## Example using GPT2LMHeadModel
```python
from transformers import GPT2Tokenizer, GPT2LMHeadModel
tokenizer = GPT2Tokenizer.from_pretrained('huseinzol05/gpt2-117M-bahasa-cased')
model = GPT2LMHeadModel.from_pretrained(
'huseinzol05/gpt2-117M-bahasa-cased', pad_token_id = tokenizer.eos_token_id
)
input_ids = tokenizer.encode(
'penat bak hang, macam ni aku takmau kerja dah', return_tensors = 'pt'
)
sample_outputs = model.generate(
input_ids,
do_sample = True,
max_length = 50,
top_k = 50,
top_p = 0.95,
num_return_sequences = 3,
)
print('Output:\n' + 100 * '-')
for i, sample_output in enumerate(sample_outputs):
print(
'{}: {}'.format(
i, tokenizer.decode(sample_output, skip_special_tokens = True)
)
)
```
Output is,
```text
Output:
----------------------------------------------------------------------------------------------------
0: penat bak hang, macam ni aku takmau kerja dah jadi aku pernah beritahu orang.
Ini bukan aku rasa cam nak ajak teman kan ni.
Tengok ni aku dah ada adik-adik & anak yang tinggal dan kerja2 yang kat sekolah.
1: penat bak hang, macam ni aku takmau kerja dah.
Takleh takleh nak ambik air.
Tgk jugak aku kat rumah ni.
Pastu aku nak bagi aku.
So aku dah takde masalah pulak.
Balik aku pun
2: penat bak hang, macam ni aku takmau kerja dah macam tu.
Tapi semua tu aku ingat cakap, ada cara hidup ni yang kita kena bayar.. pastu kita tak mampu bayar.. kan!!
Takpelah, aku nak cakap, masa yang
```
|
|
huseinzol05/gpt2-345M-bahasa-cased
|
2021-05-23T05:31:06.000Z
|
[
"pytorch",
"jax",
"gpt2",
"ms",
"transformers"
] |
[
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
] |
huseinzol05
| 26 |
transformers
|
---
language: ms
---
# Bahasa GPT2 Model
Pretrained GPT2 345M model for Malay.
## Pretraining Corpus
`gpt2-345M-bahasa-cased` model was pretrained on ~0.9 billion words. We trained on standard language structure only, and below is the list of data we trained on:
1. [dumping wikipedia](https://github.com/huseinzol05/Malaya-Dataset#wikipedia-1).
2. [local news](https://github.com/huseinzol05/Malaya-Dataset#public-news).
3. [local parliament text](https://github.com/huseinzol05/Malaya-Dataset#parliament).
4. [local singlish/manglish text](https://github.com/huseinzol05/Malaya-Dataset#singlish-text).
5. [IIUM Confession](https://github.com/huseinzol05/Malaya-Dataset#iium-confession).
6. [Wattpad](https://github.com/huseinzol05/Malaya-Dataset#wattpad).
7. [Academia PDF](https://github.com/huseinzol05/Malaya-Dataset#academia-pdf).
8. [Common-Crawl](https://github.com/huseinzol05/malaya-dataset#common-crawl).
Preprocessing steps can be reproduced from here: [Malaya/pretrained-model/preprocess](https://github.com/huseinzol05/Malaya/tree/master/pretrained-model/preprocess).
## Pretraining details
- This model was trained using OpenAI GPT-2's GitHub [repository](https://github.com/openai/gpt-2) on a v3-8 TPU.
- All steps can be reproduced from here: [Malaya/pretrained-model/gpt2](https://github.com/huseinzol05/Malaya/tree/master/pretrained-model/gpt2).
## Load Pretrained Model
You can use this model by installing `torch` or `tensorflow` and the Hugging Face `transformers` library, then loading it directly like this:
```python
from transformers import GPT2Tokenizer, GPT2Model
model = GPT2Model.from_pretrained('huseinzol05/gpt2-345M-bahasa-cased')
tokenizer = GPT2Tokenizer.from_pretrained(
'huseinzol05/gpt2-345M-bahasa-cased',
)
```
## Example using GPT2LMHeadModel
```python
from transformers import GPT2Tokenizer, GPT2LMHeadModel
tokenizer = GPT2Tokenizer.from_pretrained('huseinzol05/gpt2-345M-bahasa-cased')
model = GPT2LMHeadModel.from_pretrained(
'huseinzol05/gpt2-345M-bahasa-cased', pad_token_id = tokenizer.eos_token_id
)
input_ids = tokenizer.encode(
'penat bak hang, macam ni aku takmau kerja dah', return_tensors = 'pt'
)
sample_outputs = model.generate(
input_ids,
do_sample = True,
max_length = 50,
top_k = 50,
top_p = 0.95,
num_return_sequences = 3,
)
print('Output:\n' + 100 * '-')
for i, sample_output in enumerate(sample_outputs):
print(
'{}: {}'.format(
i, tokenizer.decode(sample_output, skip_special_tokens = True)
)
)
```
Output is,
```text
Output:
----------------------------------------------------------------------------------------------------
0: penat bak hang, macam ni aku takmau kerja dah dekat 2,3 jam.
Aku harap aku dapat berjimat banyak.
Ini pun masa kerja, bila dah kerja jadi satu.
Aku buat kerja ni la.
Aku memang kalau ada
1: penat bak hang, macam ni aku takmau kerja dah.
Tapi nak buat macam mana kan, aku tolong bentang tugas.
Dan, memang sangat-sangat tak mahu buat kerja sekarang ni.
Aku pun suka sangat kerja di luar bandar
2: penat bak hang, macam ni aku takmau kerja dah pun.
Takpa nak buat kerja-kerja sampingan, baru boleh dapat hadiah pulak.
Ni la tempat paling best bila duduk di restoran yang ada pekena kopi.
Cumanya
```
|
|
huseinzol05/t5-base-bahasa-cased
|
2020-12-11T21:44:41.000Z
|
[
"pytorch",
"t5",
"ms",
"transformers"
] |
[
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json"
] |
huseinzol05
| 41 |
transformers
|
---
language: ms
---
# Bahasa T5 Model
Pretrained T5 base language model for Malay and Indonesian.
## Pretraining Corpus
`t5-base-bahasa-cased` model was pretrained on multiple tasks. Below is the list of tasks we trained on:
1. [Unsupervised](https://github.com/google-research/text-to-text-transfer-transformer/blob/master/t5/data/preprocessors.py#L1875) on [local Wikipedia](https://github.com/huseinzol05/Malaya-Dataset#wikipedia-1).
2. [Unsupervised](https://github.com/google-research/text-to-text-transfer-transformer/blob/master/t5/data/preprocessors.py#L1875) on [local news](https://github.com/huseinzol05/Malaya-Dataset#public-news).
3. [Unsupervised](https://github.com/google-research/text-to-text-transfer-transformer/blob/master/t5/data/preprocessors.py#L1875) on [local parliament text](https://github.com/huseinzol05/Malaya-Dataset#parliament).
4. [Unsupervised](https://github.com/google-research/text-to-text-transfer-transformer/blob/master/t5/data/preprocessors.py#L1875) on [IIUM Confession](https://github.com/huseinzol05/Malaya-Dataset#iium-confession).
5. [Unsupervised](https://github.com/google-research/text-to-text-transfer-transformer/blob/master/t5/data/preprocessors.py#L1875) on [Wattpad](https://github.com/huseinzol05/Malaya-Dataset#wattpad).
6. [Unsupervised](https://github.com/google-research/text-to-text-transfer-transformer/blob/master/t5/data/preprocessors.py#L1875) on [Academia PDF](https://github.com/huseinzol05/Malaya-Dataset#academia-pdf).
7. [Next sentence prediction](https://github.com/google-research/text-to-text-transfer-transformer/blob/master/t5/data/preprocessors.py#L1129) on [local Wikipedia](https://github.com/huseinzol05/Malaya-Dataset#wikipedia-1).
8. [Next sentence prediction](https://github.com/google-research/text-to-text-transfer-transformer/blob/master/t5/data/preprocessors.py#L1129) on [local news](https://github.com/huseinzol05/Malaya-Dataset#public-news).
9. [Next sentence prediction](https://github.com/google-research/text-to-text-transfer-transformer/blob/master/t5/data/preprocessors.py#L1129) on [local parliament text](https://github.com/huseinzol05/Malaya-Dataset#parliament).
10. [Next sentence prediction](https://github.com/google-research/text-to-text-transfer-transformer/blob/master/t5/data/preprocessors.py#L1129) on [IIUM Confession](https://github.com/huseinzol05/Malaya-Dataset#iium-confession).
11. [Next sentence prediction](https://github.com/google-research/text-to-text-transfer-transformer/blob/master/t5/data/preprocessors.py#L1129) on [Wattpad](https://github.com/huseinzol05/Malaya-Dataset#wattpad).
12. [Next sentence prediction](https://github.com/google-research/text-to-text-transfer-transformer/blob/master/t5/data/preprocessors.py#L1129) on [Academia PDF](https://github.com/huseinzol05/Malaya-Dataset#academia-pdf).
13. [Bahasa SNLI](https://github.com/huseinzol05/Malaya-Dataset#snli).
14. [Bahasa Question Quora](https://github.com/huseinzol05/Malaya-Dataset#quora).
15. [Bahasa Natural Questions](https://github.com/huseinzol05/Malaya-Dataset#natural-questions).
16. [News title summarization](https://github.com/huseinzol05/Malaya-Dataset#crawled-news).
17. [Stemming to original wikipedia](https://github.com/huseinzol05/Malaya/blob/master/pretrained-model/t5/generate-stemming.ipynb).
18. [Synonym to original wikipedia](https://github.com/huseinzol05/Malaya/blob/master/pretrained-model/t5/generate-synonym.ipynb).
Preprocessing steps can be reproduced from here: [Malaya/pretrained-model/preprocess](https://github.com/huseinzol05/Malaya/tree/master/pretrained-model/preprocess).
## Pretraining details
- This model was trained using Google T5's GitHub [repository](https://github.com/google-research/text-to-text-transfer-transformer) on a v3-8 TPU.
- All steps can be reproduced from here: [Malaya/pretrained-model/t5](https://github.com/huseinzol05/Malaya/tree/master/pretrained-model/t5).
## Load Pretrained Model
You can use this model by installing `torch` or `tensorflow` and the Hugging Face `transformers` library, then loading it directly like this:
```python
from transformers import T5Tokenizer, T5Model
model = T5Model.from_pretrained('huseinzol05/t5-base-bahasa-cased')
tokenizer = T5Tokenizer.from_pretrained('huseinzol05/t5-base-bahasa-cased')
```
## Example using T5ForConditionalGeneration
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained('huseinzol05/t5-base-bahasa-cased')
model = T5ForConditionalGeneration.from_pretrained('huseinzol05/t5-base-bahasa-cased')
input_ids = tokenizer.encode('soalan: siapakah perdana menteri malaysia?', return_tensors = 'pt')
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
Output is,
```
'Mahathir Mohamad'
```
## Results
For further details on model performance, check out the accuracy page from Malaya, https://malaya.readthedocs.io/en/latest/Accuracy.html, where we compare against traditional models.
## Acknowledgement
Thanks to [Im Big](https://www.facebook.com/imbigofficial/), [LigBlou](https://www.facebook.com/ligblou), [Mesolitica](https://mesolitica.com/) and [KeyReply](https://www.keyreply.com/) for sponsoring AWS, Google and GPU clouds to train T5 for Bahasa.
|
|
huseinzol05/t5-base-bahasa-summarization-cased
|
2020-12-11T21:44:44.000Z
|
[
"pytorch",
"t5",
"seq2seq",
"ms",
"transformers",
"text2text-generation"
] |
text2text-generation
|
[
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json"
] |
huseinzol05
| 22 |
transformers
|
---
language: ms
---
# Bahasa T5 Summarization Model
Finetuned T5 base summarization model for Malay and Indonesian.
## Finetuning Corpus
`t5-base-bahasa-summarization-cased` model was finetuned on multiple summarization datasets. Below is the list of datasets we finetuned on:
1. [Translated CNN News](https://github.com/huseinzol05/Malay-Dataset#cnn-news)
2. [Translated Gigawords](https://github.com/huseinzol05/Malay-Dataset#gigawords)
3. [Translated Multinews](https://github.com/huseinzol05/Malay-Dataset#multinews)
## Finetuning details
- This model was trained using Malaya T5's GitHub [repository](https://github.com/huseinzol05/Malaya/tree/master/pretrained-model/t5) on a v3-8 TPU using the Base size.
- All steps can be reproduced from here: [Malaya/session/summarization](https://github.com/huseinzol05/Malaya/tree/master/session/summarization).
## Load Finetuned Model
You can use this model by installing `torch` or `tensorflow` and the Hugging Face `transformers` library, then loading it directly like this:
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained('huseinzol05/t5-base-bahasa-summarization-cased')
model = T5ForConditionalGeneration.from_pretrained('huseinzol05/t5-base-bahasa-summarization-cased')
```
## Example using T5ForConditionalGeneration
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained('huseinzol05/t5-base-bahasa-summarization-cased')
model = T5ForConditionalGeneration.from_pretrained('huseinzol05/t5-base-bahasa-summarization-cased')
# https://www.hmetro.com.my/mutakhir/2020/05/580438/peletakan-jawatan-tun-m-ditolak-bukan-lagi-isu
# original title, Peletakan jawatan Tun M ditolak, bukan lagi isu
string = 'PELETAKAN jawatan Tun Dr Mahathir Mohamad sebagai Pengerusi Parti Pribumi Bersatu Malaysia (Bersatu) ditolak di dalam mesyuarat khas Majlis Pimpinan Tertinggi (MPT) pada 24 Februari lalu. Justeru, tidak timbul soal peletakan jawatan itu sah atau tidak kerana ia sudah pun diputuskan pada peringkat parti yang dipersetujui semua termasuk Presiden, Tan Sri Muhyiddin Yassin. Bekas Setiausaha Agung Bersatu Datuk Marzuki Yahya berkata, pada mesyuarat itu MPT sebulat suara menolak peletakan jawatan Dr Mahathir. "Jadi ini agak berlawanan dengan keputusan yang kita sudah buat. Saya tak faham bagaimana Jabatan Pendaftar Pertubuhan Malaysia (JPPM) kata peletakan jawatan itu sah sedangkan kita sudah buat keputusan di dalam mesyuarat, bukan seorang dua yang buat keputusan. "Semua keputusan mesti dibuat melalui parti. Walau apa juga perbincangan dibuat di luar daripada keputusan mesyuarat, ini bukan keputusan parti. "Apa locus standy yang ada pada Setiausaha Kerja untuk membawa perkara ini kepada JPPM. Seharusnya ia dibawa kepada Setiausaha Agung sebagai pentadbir kepada parti," katanya kepada Harian Metro. Beliau mengulas laporan media tempatan hari ini mengenai pengesahan JPPM bahawa Dr Mahathir tidak lagi menjadi Pengerusi Bersatu berikutan peletakan jawatannya di tengah-tengah pergolakan politik pada akhir Februari adalah sah. Laporan itu juga menyatakan, kedudukan Muhyiddin Yassin memangku jawatan itu juga sah. Menurutnya, memang betul Dr Mahathir menghantar surat peletakan jawatan, tetapi ditolak oleh MPT. "Fasal yang disebut itu terpakai sekiranya berhenti atau diberhentikan, tetapi ini mesyuarat sudah menolak," katanya. Marzuki turut mempersoal kenyataan media yang dibuat beberapa pimpinan parti itu hari ini yang menyatakan sokongan kepada Perikatan Nasional. "Kenyataan media bukanlah keputusan rasmi. Walaupun kita buat 1,000 kenyataan sekali pun ia tetap tidak merubah keputusan yang sudah dibuat di dalam mesyuarat. Kita catat di dalam minit apa yang berlaku di dalam mesyuarat," katanya.'
# https://huggingface.co/blog/how-to-generate
# generate summary
input_ids = tokenizer.encode(f'ringkasan: {string}', return_tensors = 'pt')
outputs = model.generate(
input_ids,
do_sample = True,
temperature = 0.8,
top_k = 50,
top_p = 0.95,
max_length = 300,
num_return_sequences = 3,
)
for i, sample_output in enumerate(outputs):
print(
'{}: {}'.format(
i, tokenizer.decode(sample_output, skip_special_tokens = True)
)
)
# generate news title
input_ids = tokenizer.encode(f'tajuk: {string}', return_tensors = 'pt')
outputs = model.generate(
input_ids,
do_sample = True,
temperature = 0.8,
top_k = 50,
top_p = 0.95,
max_length = 300,
num_return_sequences = 3,
)
for i, sample_output in enumerate(outputs):
print(
'{}: {}'.format(
i, tokenizer.decode(sample_output, skip_special_tokens = True)
)
)
```
Output is,
```
0: "Ini agak berlawanan dengan keputusan yang kita sudah buat," kata Marzuki Yahya. Kenyataan media adalah keputusan rasmi. Marzuki: Kenyataan media tidak mengubah keputusan mesyuarat
1: MPT sebulat suara menolak peletakan jawatan Dr M di mesyuarat 24 Februari. Tidak ada persoalan peletakan jawatan itu sah atau tidak, tetapi ia adalah keputusan parti yang dipersetujui semua. Bekas Setiausaha Agung Bersatu mengatakan keputusan itu perlu dibuat melalui parti. Bekas setiausaha agung itu mengatakan kenyataan media tidak lagi menyokong Perikatan Nasional
2: Kenyataan media menunjukkan sokongan kepada Perikatan Nasional. Marzuki: Kedudukan Dr M sebagai Pengerusi Bersatu juga sah. Beliau berkata pengumuman itu harus diserahkan kepada setiausaha Agung
0: 'Kalah Tun M, Muhyiddin tetap sah'
1: Boleh letak jawatan PM di MPT
2: 'Ketegangan Dr M sudah tolak, tak timbul isu peletakan jawatan'
```
## Result
We found that the original TensorFlow implementation gives better results; check it at https://malaya.readthedocs.io/en/latest/Abstractive.html#generate-ringkasan
## Acknowledgement
Thanks to [Im Big](https://www.facebook.com/imbigofficial/), [LigBlou](https://www.facebook.com/ligblou), [Mesolitica](https://mesolitica.com/) and [KeyReply](https://www.keyreply.com/) for sponsoring AWS, Google and GPU clouds to train T5 for Bahasa.
|
huseinzol05/t5-large-bahasa-cased
|
2020-10-01T12:42:38.000Z
|
[
"pytorch",
"t5",
"transformers"
] |
[
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json"
] |
huseinzol05
| 22 |
transformers
| ||
huseinzol05/t5-small-bahasa-cased
|
2020-12-11T21:44:48.000Z
|
[
"pytorch",
"t5",
"ms",
"transformers"
] |
[
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json"
] |
huseinzol05
| 20 |
transformers
|
---
language: ms
---
# Bahasa T5 Model
Pretrained T5 small language model for Malay and Indonesian.
## Pretraining Corpus
`t5-small-bahasa-cased` model was pretrained on multiple tasks. Below is the list of tasks we trained on:
1. [Unsupervised](https://github.com/google-research/text-to-text-transfer-transformer/blob/master/t5/data/preprocessors.py#L1875) on [local Wikipedia](https://github.com/huseinzol05/Malaya-Dataset#wikipedia-1).
2. [Unsupervised](https://github.com/google-research/text-to-text-transfer-transformer/blob/master/t5/data/preprocessors.py#L1875) on [local news](https://github.com/huseinzol05/Malaya-Dataset#public-news).
3. [Unsupervised](https://github.com/google-research/text-to-text-transfer-transformer/blob/master/t5/data/preprocessors.py#L1875) on [local parliament text](https://github.com/huseinzol05/Malaya-Dataset#parliament).
4. [Unsupervised](https://github.com/google-research/text-to-text-transfer-transformer/blob/master/t5/data/preprocessors.py#L1875) on [IIUM Confession](https://github.com/huseinzol05/Malaya-Dataset#iium-confession).
5. [Unsupervised](https://github.com/google-research/text-to-text-transfer-transformer/blob/master/t5/data/preprocessors.py#L1875) on [Wattpad](https://github.com/huseinzol05/Malaya-Dataset#wattpad).
6. [Unsupervised](https://github.com/google-research/text-to-text-transfer-transformer/blob/master/t5/data/preprocessors.py#L1875) on [Academia PDF](https://github.com/huseinzol05/Malaya-Dataset#academia-pdf).
7. [Next sentence prediction](https://github.com/google-research/text-to-text-transfer-transformer/blob/master/t5/data/preprocessors.py#L1129) on [local Wikipedia](https://github.com/huseinzol05/Malaya-Dataset#wikipedia-1).
8. [Next sentence prediction](https://github.com/google-research/text-to-text-transfer-transformer/blob/master/t5/data/preprocessors.py#L1129) on [local news](https://github.com/huseinzol05/Malaya-Dataset#public-news).
9. [Next sentence prediction](https://github.com/google-research/text-to-text-transfer-transformer/blob/master/t5/data/preprocessors.py#L1129) on [local parliament text](https://github.com/huseinzol05/Malaya-Dataset#parliament).
10. [Next sentence prediction](https://github.com/google-research/text-to-text-transfer-transformer/blob/master/t5/data/preprocessors.py#L1129) on [IIUM Confession](https://github.com/huseinzol05/Malaya-Dataset#iium-confession).
11. [Next sentence prediction](https://github.com/google-research/text-to-text-transfer-transformer/blob/master/t5/data/preprocessors.py#L1129) on [Wattpad](https://github.com/huseinzol05/Malaya-Dataset#wattpad).
12. [Next sentence prediction](https://github.com/google-research/text-to-text-transfer-transformer/blob/master/t5/data/preprocessors.py#L1129) on [Academia PDF](https://github.com/huseinzol05/Malaya-Dataset#academia-pdf).
13. [Bahasa SNLI](https://github.com/huseinzol05/Malaya-Dataset#snli).
14. [Bahasa Question Quora](https://github.com/huseinzol05/Malaya-Dataset#quora).
15. [Bahasa Natural Questions](https://github.com/huseinzol05/Malaya-Dataset#natural-questions).
16. [News title summarization](https://github.com/huseinzol05/Malaya-Dataset#crawled-news).
17. [Stemming to original wikipedia](https://github.com/huseinzol05/Malaya/blob/master/pretrained-model/t5/generate-stemming.ipynb).
18. [Synonym to original wikipedia](https://github.com/huseinzol05/Malaya/blob/master/pretrained-model/t5/generate-synonym.ipynb).
Preprocessing steps can be reproduced from here: [Malaya/pretrained-model/preprocess](https://github.com/huseinzol05/Malaya/tree/master/pretrained-model/preprocess).
## Pretraining details
- This model was trained using Google T5's GitHub [repository](https://github.com/google-research/text-to-text-transfer-transformer) on a v3-8 TPU.
- All steps can be reproduced from here: [Malaya/pretrained-model/t5](https://github.com/huseinzol05/Malaya/tree/master/pretrained-model/t5).
## Load Pretrained Model
You can use this model by installing `torch` or `tensorflow` and the Hugging Face `transformers` library, then loading it directly like this:
```python
from transformers import T5Tokenizer, T5Model
model = T5Model.from_pretrained('huseinzol05/t5-small-bahasa-cased')
tokenizer = T5Tokenizer.from_pretrained('huseinzol05/t5-small-bahasa-cased')
```
## Example using T5ForConditionalGeneration
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained('huseinzol05/t5-small-bahasa-cased')
model = T5ForConditionalGeneration.from_pretrained('huseinzol05/t5-small-bahasa-cased')
input_ids = tokenizer.encode('soalan: siapakah perdana menteri malaysia?', return_tensors = 'pt')
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
Output is,
```
'Mahathir Mohamad'
```
## Results
For further details on model performance, check out the accuracy page from Malaya, https://malaya.readthedocs.io/en/latest/Accuracy.html, where we compare against traditional models.
## Acknowledgement
Thanks to [Im Big](https://www.facebook.com/imbigofficial/), [LigBlou](https://www.facebook.com/ligblou), [Mesolitica](https://mesolitica.com/) and [KeyReply](https://www.keyreply.com/) for sponsoring AWS, Google and GPU clouds to train T5 for Bahasa.
|
|
huseinzol05/t5-small-bahasa-summarization-cased
|
2020-12-11T21:44:51.000Z
|
[
"pytorch",
"t5",
"seq2seq",
"ms",
"transformers",
"text2text-generation"
] |
text2text-generation
|
[
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json"
] |
huseinzol05
| 16 |
transformers
|
---
language: ms
---
# Bahasa T5 Summarization Model
Finetuned T5 small summarization model for Malay and Indonesian.
## Finetuning Corpus
`t5-small-bahasa-summarization-cased` model was finetuned on multiple summarization datasets. Below is the list of datasets we finetuned on:
1. [Translated CNN News](https://github.com/huseinzol05/Malay-Dataset#cnn-news)
2. [Translated Gigawords](https://github.com/huseinzol05/Malay-Dataset#gigawords)
3. [Translated Multinews](https://github.com/huseinzol05/Malay-Dataset#multinews)
## Finetuning details
- This model was trained using Malaya T5's GitHub [repository](https://github.com/huseinzol05/Malaya/tree/master/pretrained-model/t5) on a v3-8 TPU using the Small size.
- All steps can be reproduced from here: [Malaya/session/summarization](https://github.com/huseinzol05/Malaya/tree/master/session/summarization).
## Load Finetuned Model
You can use this model by installing `torch` or `tensorflow` and the Hugging Face `transformers` library, then loading it directly like this:
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained('huseinzol05/t5-small-bahasa-summarization-cased')
model = T5ForConditionalGeneration.from_pretrained('huseinzol05/t5-small-bahasa-summarization-cased')
```
## Example using T5ForConditionalGeneration
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained('huseinzol05/t5-small-bahasa-summarization-cased')
model = T5ForConditionalGeneration.from_pretrained('huseinzol05/t5-small-bahasa-summarization-cased')
# https://www.hmetro.com.my/mutakhir/2020/05/580438/peletakan-jawatan-tun-m-ditolak-bukan-lagi-isu
# original title, Peletakan jawatan Tun M ditolak, bukan lagi isu
string = 'PELETAKAN jawatan Tun Dr Mahathir Mohamad sebagai Pengerusi Parti Pribumi Bersatu Malaysia (Bersatu) ditolak di dalam mesyuarat khas Majlis Pimpinan Tertinggi (MPT) pada 24 Februari lalu. Justeru, tidak timbul soal peletakan jawatan itu sah atau tidak kerana ia sudah pun diputuskan pada peringkat parti yang dipersetujui semua termasuk Presiden, Tan Sri Muhyiddin Yassin. Bekas Setiausaha Agung Bersatu Datuk Marzuki Yahya berkata, pada mesyuarat itu MPT sebulat suara menolak peletakan jawatan Dr Mahathir. "Jadi ini agak berlawanan dengan keputusan yang kita sudah buat. Saya tak faham bagaimana Jabatan Pendaftar Pertubuhan Malaysia (JPPM) kata peletakan jawatan itu sah sedangkan kita sudah buat keputusan di dalam mesyuarat, bukan seorang dua yang buat keputusan. "Semua keputusan mesti dibuat melalui parti. Walau apa juga perbincangan dibuat di luar daripada keputusan mesyuarat, ini bukan keputusan parti. "Apa locus standy yang ada pada Setiausaha Kerja untuk membawa perkara ini kepada JPPM. Seharusnya ia dibawa kepada Setiausaha Agung sebagai pentadbir kepada parti," katanya kepada Harian Metro. Beliau mengulas laporan media tempatan hari ini mengenai pengesahan JPPM bahawa Dr Mahathir tidak lagi menjadi Pengerusi Bersatu berikutan peletakan jawatannya di tengah-tengah pergolakan politik pada akhir Februari adalah sah. Laporan itu juga menyatakan, kedudukan Muhyiddin Yassin memangku jawatan itu juga sah. Menurutnya, memang betul Dr Mahathir menghantar surat peletakan jawatan, tetapi ditolak oleh MPT. "Fasal yang disebut itu terpakai sekiranya berhenti atau diberhentikan, tetapi ini mesyuarat sudah menolak," katanya. Marzuki turut mempersoal kenyataan media yang dibuat beberapa pimpinan parti itu hari ini yang menyatakan sokongan kepada Perikatan Nasional. "Kenyataan media bukanlah keputusan rasmi. Walaupun kita buat 1,000 kenyataan sekali pun ia tetap tidak merubah keputusan yang sudah dibuat di dalam mesyuarat. Kita catat di dalam minit apa yang berlaku di dalam mesyuarat," katanya.'
# https://huggingface.co/blog/how-to-generate
# generate summary
input_ids = tokenizer.encode(f'ringkasan: {string}', return_tensors = 'pt')
outputs = model.generate(
input_ids,
do_sample = True,
temperature = 0.8,
top_k = 50,
top_p = 0.95,
max_length = 300,
num_return_sequences = 3,
)
for i, sample_output in enumerate(outputs):
print(
'{}: {}'.format(
i, tokenizer.decode(sample_output, skip_special_tokens = True)
)
)
# generate news title
input_ids = tokenizer.encode(f'tajuk: {string}', return_tensors = 'pt')
outputs = model.generate(
input_ids,
do_sample = True,
temperature = 0.8,
top_k = 50,
top_p = 0.95,
max_length = 300,
num_return_sequences = 3,
)
for i, sample_output in enumerate(outputs):
print(
'{}: {}'.format(
i, tokenizer.decode(sample_output, skip_special_tokens = True)
)
)
```
Output is,
```
0: Pengerusi Bersatu Bersatu menafikan peletakan jawatan dalam mesyuarat khas Majlis Pimpinan Tertinggi. Tidak timbul isu peletakan jawatan itu sah atau tidak kerana ia sudah diputuskan di peringkat parti. Kenyataan media yang dibuat oleh pemimpin parti hari ini menyokong Perikatan Nasional
1: Tiada keputusan kerana ia sudah diputuskan pada peringkat parti, Marzuki berkata. Pejabat rasmi parti menolak peletakan jawatan Dr M, dengan mengatakan ia adalah keputusan. Kedudukan Muhyiddin memangku jawatan itu juga sah, katanya
2: Tiada peletakan jawatan Dr Mahathir dalam mesyuarat khas MPT pada 24 Februari. Ketua parti menolak peletakan jawatan itu. Tidak timbul isu peletakan jawatan itu sah atau tidak, katanya
0: Tiada peletakan jawatan Tun M dalam mesyuarat khas
1: ‘Tidak timbul peletakan jawatan Tun M’
2: Tidak timbul isu peletakan jawatan Tun M di mesyuarat khas
```
## Result
We found that the original TensorFlow implementation gives better results; check it at https://malaya.readthedocs.io/en/latest/Abstractive.html#generate-ringkasan
## Acknowledgement
Thanks to [Im Big](https://www.facebook.com/imbigofficial/), [LigBlou](https://www.facebook.com/ligblou), [Mesolitica](https://mesolitica.com/) and [KeyReply](https://www.keyreply.com/) for sponsoring AWS, Google and GPU clouds to train T5 for Bahasa.
|
huseinzol05/tiny-bert-bahasa-cased
|
2021-05-19T20:14:59.000Z
|
[
"pytorch",
"jax",
"bert",
"masked-lm",
"ms",
"transformers",
"fill-mask"
] |
fill-mask
|
[
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json"
] |
huseinzol05
| 52 |
transformers
|
---
language: ms
---
# Bahasa Tiny-BERT Model
General Distilled Tiny BERT language model for Malay and Indonesian.
## Pretraining Corpus
`tiny-bert-bahasa-cased` model was distilled on ~1.8 billion words. We distilled on both standard and social media language structures, and below is the list of data we distilled on:
1. [dumping wikipedia](https://github.com/huseinzol05/Malaya-Dataset#wikipedia-1).
2. [local instagram](https://github.com/huseinzol05/Malaya-Dataset#instagram).
3. [local twitter](https://github.com/huseinzol05/Malaya-Dataset#twitter-1).
4. [local news](https://github.com/huseinzol05/Malaya-Dataset#public-news).
5. [local parliament text](https://github.com/huseinzol05/Malaya-Dataset#parliament).
6. [local singlish/manglish text](https://github.com/huseinzol05/Malaya-Dataset#singlish-text).
7. [IIUM Confession](https://github.com/huseinzol05/Malaya-Dataset#iium-confession).
8. [Wattpad](https://github.com/huseinzol05/Malaya-Dataset#wattpad).
9. [Academia PDF](https://github.com/huseinzol05/Malaya-Dataset#academia-pdf).
Preprocessing steps can be reproduced from here: [Malaya/pretrained-model/preprocess](https://github.com/huseinzol05/Malaya/tree/master/pretrained-model/preprocess).
## Distilling details
- This model was distilled using huawei-noah's Tiny-BERT GitHub [repository](https://github.com/huawei-noah/Pretrained-Language-Model/tree/master/TinyBERT) on 3 Titan V100 32GB VRAM GPUs.
- All steps can be reproduced from here: [Malaya/pretrained-model/tiny-bert](https://github.com/huseinzol05/Malaya/tree/master/pretrained-model/tiny-bert).
## Load Distilled Model
You can use this model by installing `torch` or `tensorflow` and the Hugging Face `transformers` library, then loading it directly like this:
```python
from transformers import AlbertTokenizer, BertModel
model = BertModel.from_pretrained('huseinzol05/tiny-bert-bahasa-cased')
tokenizer = AlbertTokenizer.from_pretrained(
'huseinzol05/tiny-bert-bahasa-cased',
unk_token = '[UNK]',
pad_token = '[PAD]',
do_lower_case = False,
)
```
We used [google/sentencepiece](https://github.com/google/sentencepiece) to train the tokenizer, so to use it you need to load it with `AlbertTokenizer`.
## Example using AutoModelWithLMHead
```python
from transformers import AlbertTokenizer, AutoModelWithLMHead, pipeline
model = AutoModelWithLMHead.from_pretrained('huseinzol05/tiny-bert-bahasa-cased')
tokenizer = AlbertTokenizer.from_pretrained(
'huseinzol05/tiny-bert-bahasa-cased',
unk_token = '[UNK]',
pad_token = '[PAD]',
do_lower_case = False,
)
fill_mask = pipeline('fill-mask', model = model, tokenizer = tokenizer)
print(fill_mask('makan ayam dengan [MASK]'))
```
Output is,
```text
[{'sequence': '[CLS] makan ayam dengan berbual[SEP]',
'score': 0.00015769545279908925,
'token': 17859},
{'sequence': '[CLS] makan ayam dengan kembar[SEP]',
'score': 0.0001448775001335889,
'token': 8289},
{'sequence': '[CLS] makan ayam dengan memaklumkan[SEP]',
'score': 0.00013484008377417922,
'token': 6881},
{'sequence': '[CLS] makan ayam dengan Senarai[SEP]',
'score': 0.00013061291247140616,
'token': 11698},
{'sequence': '[CLS] makan ayam dengan Tiga[SEP]',
'score': 0.00012453157978598028,
'token': 4232}]
```
## Results
For further details on model performance, check out the accuracy page from Malaya, https://malaya.readthedocs.io/en/latest/Accuracy.html, where we compare against traditional models.
## Acknowledgement
Thanks to [Im Big](https://www.facebook.com/imbigofficial/), [LigBlou](https://www.facebook.com/ligblou), [Mesolitica](https://mesolitica.com/) and [KeyReply](https://www.keyreply.com/) for sponsoring AWS, Google and GPU clouds to train BERT for Bahasa.
|
huseinzol05/xlnet-base-bahasa-cased
|
2020-12-11T21:44:58.000Z
|
[
"pytorch",
"xlnet",
"lm-head",
"causal-lm",
"ms",
"transformers",
"text-generation"
] |
text-generation
|
[
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json"
] |
huseinzol05
| 143 |
transformers
|
---
language: ms
---
# Bahasa XLNet Model
Pretrained XLNet base language model for Malay and Indonesian.
## Pretraining Corpus
`xlnet-base-bahasa-cased` model was pretrained on ~1.8 billion words. We trained on both standard and social media language structures, and below is the list of data we trained on:
1. [dumping wikipedia](https://github.com/huseinzol05/Malaya-Dataset#wikipedia-1).
2. [local instagram](https://github.com/huseinzol05/Malaya-Dataset#instagram).
3. [local twitter](https://github.com/huseinzol05/Malaya-Dataset#twitter-1).
4. [local news](https://github.com/huseinzol05/Malaya-Dataset#public-news).
5. [local parliament text](https://github.com/huseinzol05/Malaya-Dataset#parliament).
6. [local singlish/manglish text](https://github.com/huseinzol05/Malaya-Dataset#singlish-text).
7. [IIUM Confession](https://github.com/huseinzol05/Malaya-Dataset#iium-confession).
8. [Wattpad](https://github.com/huseinzol05/Malaya-Dataset#wattpad).
9. [Academia PDF](https://github.com/huseinzol05/Malaya-Dataset#academia-pdf).
Preprocessing steps can be reproduced from here: [Malaya/pretrained-model/preprocess](https://github.com/huseinzol05/Malaya/tree/master/pretrained-model/preprocess).
## Pretraining details
- This model was trained using zihangdai's XLNet GitHub [repository](https://github.com/zihangdai/xlnet) on 3 Titan V100 32GB VRAM GPUs.
- All steps can be reproduced from here: [Malaya/pretrained-model/xlnet](https://github.com/huseinzol05/Malaya/tree/master/pretrained-model/xlnet).
## Load Pretrained Model
You can use this model by installing `torch` or `tensorflow` and the Hugging Face `transformers` library, then loading it directly like this:
```python
from transformers import XLNetTokenizer, XLNetModel
model = XLNetModel.from_pretrained('huseinzol05/xlnet-base-bahasa-cased')
tokenizer = XLNetTokenizer.from_pretrained(
'huseinzol05/xlnet-base-bahasa-cased', do_lower_case = False
)
```
## Example using AutoModelWithLMHead
```python
from transformers import XLNetTokenizer, AutoModelWithLMHead, pipeline
model = AutoModelWithLMHead.from_pretrained('huseinzol05/xlnet-base-bahasa-cased')
tokenizer = XLNetTokenizer.from_pretrained(
'huseinzol05/xlnet-base-bahasa-cased', do_lower_case = False
)
fill_mask = pipeline('fill-mask', model = model, tokenizer = tokenizer)
print(fill_mask('makan ayam dengan <mask>'))
```
## Results
For further details on model performance, check out the accuracy page from Malaya, https://malaya.readthedocs.io/en/latest/Accuracy.html, where we compare against traditional models.
## Acknowledgement
Thanks to [Im Big](https://www.facebook.com/imbigofficial/), [LigBlou](https://www.facebook.com/ligblou), [Mesolitica](https://mesolitica.com/) and [KeyReply](https://www.keyreply.com/) for sponsoring AWS, Google and GPU clouds to train XLNet for Bahasa.
|
huseinzol05/xlnet-base-bahasa-standard-cased
|
2020-10-02T16:43:02.000Z
|
[
"pytorch",
"xlnet",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
] |
text-generation
|
[
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json"
] |
huseinzol05
| 12 |
transformers
| |
huseinzol05/xlnet-large-bahasa-standard-cased
|
2020-10-20T13:02:08.000Z
|
[
"pytorch",
"xlnet",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
] |
text-generation
|
[
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json"
] |
huseinzol05
| 15 |
transformers
| |
hushgeekart/testingModel
|
2021-06-18T04:31:18.000Z
|
[] |
[
".gitattributes",
"README.md"
] |
hushgeekart
| 0 | |||
huwendeng/distilroberta_b
|
2021-05-20T16:41:42.000Z
|
[
"pytorch",
"jax",
"roberta",
"text-classification",
"transformers"
] |
text-classification
|
[
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
] |
huwendeng
| 11 |
transformers
| |
hxiaom/test
|
2021-02-17T09:24:56.000Z
|
[] |
[
".gitattributes"
] |
hxiaom
| 0 | |||
hyunbo/model_name
|
2021-05-03T03:17:51.000Z
|
[] |
[
".gitattributes"
] |
hyunbo
| 0 | |||
hyunwoongko/asian-bart-ecjk
|
2021-04-01T07:36:52.000Z
|
[
"pytorch",
"mbart",
"seq2seq",
"transformers",
"text2text-generation"
] |
text2text-generation
|
[
".DS_Store",
".gitattributes",
"config.json",
"pytorch_model.bin",
"sentencepiece.bpe.model",
"tokenizer.json"
] |
hyunwoongko
| 154 |
transformers
| |
hyunwoongko/asian-bart-en
|
2021-04-01T07:47:06.000Z
|
[
"pytorch",
"mbart",
"seq2seq",
"transformers",
"text2text-generation"
] |
text2text-generation
|
[
".DS_Store",
".gitattributes",
"config.json",
"pytorch_model.bin",
"sentencepiece.bpe.model",
"tokenizer.json"
] |
hyunwoongko
| 8 |
transformers
| |
hyunwoongko/asian-bart-ja
|
2021-04-01T08:10:05.000Z
|
[
"pytorch",
"mbart",
"seq2seq",
"transformers",
"text2text-generation"
] |
text2text-generation
|
[
".gitattributes",
"config.json",
"pytorch_model.bin",
"sentencepiece.bpe.model",
"tokenizer.json"
] |
hyunwoongko
| 6 |
transformers
| |
hyunwoongko/asian-bart-ko
|
2021-04-01T08:17:28.000Z
|
[
"pytorch",
"mbart",
"seq2seq",
"transformers",
"text2text-generation"
] |
text2text-generation
|
[
".gitattributes",
"config.json",
"pytorch_model.bin",
"sentencepiece.bpe.model",
"tokenizer.json"
] |
hyunwoongko
| 13 |
transformers
| |
hyunwoongko/asian-bart-zh
|
2021-04-01T08:23:40.000Z
|
[
"pytorch",
"mbart",
"seq2seq",
"transformers",
"text2text-generation"
] |
text2text-generation
|
[
".gitattributes",
"config.json",
"pytorch_model.bin",
"sentencepiece.bpe.model",
"tokenizer.json"
] |
hyunwoongko
| 112 |
transformers
| |
hyunwoongko/blenderbot-9B
|
2021-06-17T01:26:34.000Z
|
[
"pytorch",
"blenderbot",
"seq2seq",
"en",
"dataset:blended_skill_talk",
"arxiv:1907.06616",
"transformers",
"convAI",
"conversational",
"facebook",
"license:apache-2.0",
"text2text-generation"
] |
conversational
|
[
".gitattributes",
"README.md",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
] |
hyunwoongko
| 9 |
transformers
|
---
language:
- en
thumbnail:
tags:
- convAI
- conversational
- facebook
license: apache-2.0
datasets:
- blended_skill_talk
metrics:
- perplexity
---
## Model description
+ Paper: [Recipes for building an open-domain chatbot](https://arxiv.org/abs/1907.06616)
+ [Original PARLAI Code](https://parl.ai/projects/recipes/)
### Abstract
Building open-domain chatbots is a challenging area for machine learning research. While prior work has shown that scaling neural models in the number of parameters and the size of the data they are trained on gives improved results, we show that other ingredients are important for a high-performing chatbot. Good conversation requires a number of skills that an expert conversationalist blends in a seamless way: providing engaging talking points and listening to their partners, both asking and answering questions, and displaying knowledge, empathy and personality appropriately, depending on the situation. We show that large scale models can learn these skills when given appropriate training data and choice of generation strategy. We build variants of these recipes with 90M, 2.7B and 9.4B parameter neural models, and make our models and code publicly available. Human evaluations show our best models are superior to existing approaches in multi-turn dialogue in terms of engagingness and humanness measurements. We then discuss the limitations of this work by analyzing failure cases of our models.
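A minimal generation sketch, assuming the checkpoint loads through the standard `AutoTokenizer`/`AutoModelForSeq2SeqLM` classes (the 9.4B-parameter weights are large, so loading needs substantial memory; the example utterance is illustrative only):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained('hyunwoongko/blenderbot-9B')
model = AutoModelForSeq2SeqLM.from_pretrained('hyunwoongko/blenderbot-9B')

utterance = 'My friends are cool but they eat too many carbs.'
inputs = tokenizer([utterance], return_tensors='pt')

# generate a single-turn reply to the utterance
reply_ids = model.generate(**inputs, max_length=60)
print(tokenizer.batch_decode(reply_ids, skip_special_tokens=True))
```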
|
hyunwoongko/brainbert-base-ko-kornli
|
2021-05-20T16:42:42.000Z
|
[
"pytorch",
"jax",
"roberta",
"text-classification",
"transformers"
] |
text-classification
|
[
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"tokenizer.json",
"vocab.json"
] |
hyunwoongko
| 116 |
transformers
| |
hyunwoongko/ctrlsum-arxiv
|
2021-03-21T15:56:59.000Z
|
[
"pytorch",
"bart",
"seq2seq",
"transformers",
"text2text-generation"
] |
text2text-generation
|
[
".gitattributes",
"added_tokens.json",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json",
"vocab.json"
] |
hyunwoongko
| 51 |
transformers
| |
hyunwoongko/ctrlsum-bigpatent
|
2021-03-21T15:56:21.000Z
|
[
"pytorch",
"bart",
"seq2seq",
"transformers",
"text2text-generation"
] |
text2text-generation
|
[
".gitattributes",
"added_tokens.json",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json",
"vocab.json"
] |
hyunwoongko
| 97 |
transformers
| |
hyunwoongko/ctrlsum-cnndm
|
2021-03-21T15:55:50.000Z
|
[
"pytorch",
"bart",
"seq2seq",
"transformers",
"text2text-generation"
] |
text2text-generation
|
[
".gitattributes",
"added_tokens.json",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json",
"vocab.json"
] |
hyunwoongko
| 586 |
transformers
| |
hyunwoongko/jaberta-base-ja-xnli
|
2021-05-20T16:43:51.000Z
|
[
"pytorch",
"jax",
"roberta",
"text-classification",
"transformers"
] |
text-classification
|
[
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tokenizer_config.json",
"vocab.origin",
"vocab.txt"
] |
hyunwoongko
| 20 |
transformers
| |
hyunwoongko/kobart
|
2020-12-22T17:07:59.000Z
|
[
"pytorch",
"bart",
"transformers"
] |
[
".gitattributes",
"added_tokens.json",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json",
"vocab.json"
] |
hyunwoongko
| 82,496 |
transformers
| ||
hyunwoongko/reddit-3B
|
2021-06-17T01:14:31.000Z
|
[] |
[
".gitattributes"
] |
hyunwoongko
| 0 | |||
hyunwoongko/reddit-9B
|
2021-06-17T01:14:42.000Z
|
[] |
[
".gitattributes"
] |
hyunwoongko
| 0 | |||
hyunwoongko/roberta-base-en-mnli
|
2021-05-20T16:44:48.000Z
|
[
"pytorch",
"jax",
"roberta",
"text-classification",
"transformers"
] |
text-classification
|
[
".gitattributes",
"config.json",
"dict.txt",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"tokenizer.json",
"vocab.json"
] |
hyunwoongko
| 68 |
transformers
| |
hyunwoongko/zhberta-base-zh-xnli
|
2021-05-20T16:45:43.000Z
|
[
"pytorch",
"jax",
"roberta",
"text-classification",
"transformers"
] |
text-classification
|
[
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tokenizer.json",
"tokenizer.origin",
"tokenizer_config.json",
"vocab.origin",
"vocab.txt"
] |
hyunwoongko
| 8 |
transformers
| |
hzanghi/test
|
2020-12-18T21:23:58.000Z
|
[] |
[
".gitattributes"
] |
hzanghi
| 0 | |||
iAmmarTahir/domain-adapted-negation
|
2021-05-20T16:46:43.000Z
|
[
"pytorch",
"jax",
"roberta",
"text-classification",
"transformers"
] |
text-classification
|
[
".gitattributes",
"added_tokens.json",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
] |
iAmmarTahir
| 33 |
transformers
| |
iAmmarTahir/domain-adapted-timex
|
2021-05-20T16:48:01.000Z
|
[
"pytorch",
"jax",
"roberta",
"token-classification",
"transformers"
] |
token-classification
|
[
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
] |
iAmmarTahir
| 9 |
transformers
| |
iamapythongeek/testing1234
|
2021-04-11T21:28:56.000Z
|
[] |
[
".gitattributes"
] |
iamapythongeek
| 0 | |||
ian/BlueBERT-Finetuned-MIMIC-III-ICD-9-Classification
|
2021-04-20T04:08:01.000Z
|
[] |
[
".gitattributes"
] |
ian
| 0 | |||
iarfmoose/bert-base-cased-qa-evaluator
|
2021-05-19T20:15:52.000Z
|
[
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"transformers"
] |
text-classification
|
[
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] |
iarfmoose
| 2,278 |
transformers
|
# BERT-base-cased-qa-evaluator
This model takes a question-answer pair as input and outputs a value representing its prediction about whether the input was a valid question-answer pair or not. The model is a pretrained [BERT-base-cased](https://huggingface.co/bert-base-cased) with a sequence classification head.
## Intended uses
The QA evaluator was originally designed to be used with the [t5-base-question-generator](https://huggingface.co/iarfmoose/t5-base-question-generator) for evaluating the quality of generated questions.
The input for the QA evaluator follows the format for `BertForSequenceClassification`, but using the question and answer as the two sequences. Inputs should take the following format:
```
[CLS] <question> [SEP] <answer> [SEP]
```
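A minimal sketch of scoring a pair this way, assuming the tokenizer's default sentence-pair encoding (question first, answer second) reproduces the format above; which logit corresponds to the "valid pair" class is an assumption and should be checked against the model config:
```python
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained('iarfmoose/bert-base-cased-qa-evaluator')
model = BertForSequenceClassification.from_pretrained('iarfmoose/bert-base-cased-qa-evaluator')

question = 'What is the capital of France?'
answer = 'Paris'

# Encoding the two texts as a pair yields [CLS] question [SEP] answer [SEP]
inputs = tokenizer(question, answer, return_tensors='pt')
with torch.no_grad():
    logits = model(**inputs).logits

# higher logit for the "valid pair" class (assumed index) indicates a good QA pair
print(logits)
```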
## Limitations and bias
The model is trained to evaluate if a question and answer are semantically related, but cannot determine whether an answer is actually true/correct or not.
## Training data
The training data was made up of question-answer pairs from the following datasets:
- [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/)
- [RACE](http://www.cs.cmu.edu/~glai1/data/race/)
- [CoQA](https://stanfordnlp.github.io/coqa/)
- [MSMARCO](https://microsoft.github.io/msmarco/)
## Training procedure
The question and answer were concatenated 50% of the time. The other 50% of the time, a corruption operation was performed (either swapping the answer for an unrelated answer, or copying part of the question into the answer). The model was then trained to predict whether the input sequence represented one of the original QA pairs or a corrupted input.
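A rough sketch of that corruption scheme (the exact sampling details are not given in the card; the helper below is illustrative only):
```python
import random

def make_example(question, answer, all_answers):
    """Return (question, answer, label): label 1 = original pair, 0 = corrupted."""
    if random.random() < 0.5:
        return question, answer, 1
    if random.random() < 0.5:
        # corruption 1: swap in an unrelated answer
        corrupted = random.choice([a for a in all_answers if a != answer])
    else:
        # corruption 2: copy part of the question into the answer
        words = question.split()
        corrupted = ' '.join(words[: max(1, len(words) // 2)])
    return question, corrupted, 0
```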
|
iarfmoose/roberta-base-bulgarian-pos
|
2021-05-20T16:49:07.000Z
|
[
"pytorch",
"tf",
"jax",
"roberta",
"token-classification",
"bg",
"arxiv:1907.11692",
"transformers"
] |
token-classification
|
[
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.json"
] |
iarfmoose
| 36 |
transformers
|
---
language: bg
---
# RoBERTa-base-bulgarian-POS
The RoBERTa model was originally introduced in [this paper](https://arxiv.org/abs/1907.11692). This model is a version of [RoBERTa-base-Bulgarian](https://huggingface.co/iarfmoose/roberta-base-bulgarian) fine-tuned for part-of-speech tagging.
## Intended uses
The model can be used to predict part-of-speech tags in Bulgarian text. Since the tokenizer uses byte-pair encoding, each word in the text may be split into more than one token. When predicting POS-tags, the last token from each word can be used. Using the last token was found to slightly outperform predictions based on the first token.
An example of this can be found [here](https://github.com/iarfmoose/bulgarian-nlp/blob/master/models/postagger.py).
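A minimal sketch of this last-sub-token strategy, assuming a fast tokenizer is available for the checkpoint and using the standard `AutoTokenizer`/`AutoModelForTokenClassification` API (the Bulgarian sentence is just an illustration):
```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained('iarfmoose/roberta-base-bulgarian-pos')
model = AutoModelForTokenClassification.from_pretrained('iarfmoose/roberta-base-bulgarian-pos')

words = 'Аз обичам да чета книги'.split()
# is_split_into_words=True lets us map sub-tokens back to the original words
inputs = tokenizer(words, is_split_into_words=True, return_tensors='pt')
with torch.no_grad():
    logits = model(**inputs).logits
predictions = logits.argmax(dim=-1)[0]

# keep the prediction of the LAST sub-token of each word
word_ids = inputs.word_ids()
tags = {}
for idx, word_id in enumerate(word_ids):
    if word_id is not None:
        tags[word_id] = model.config.id2label[predictions[idx].item()]

for i, word in enumerate(words):
    print(word, tags[i])
```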
## Limitations and bias
The pretraining data is unfiltered text from the internet and may contain all sorts of biases.
## Training data
In addition to the pretraining data used in [RoBERTa-base-Bulgarian](https://huggingface.co/iarfmoose/roberta-base-bulgarian), the model was trained on the UPOS tags from [UD_Bulgarian-BTB](https://github.com/UniversalDependencies/UD_Bulgarian-BTB).
## Training procedure
The model was trained for 5 epochs over the training set. The loss was calculated based on label predictions for the last POS-tag for each word. The model achieves 97% on the test set.
|
iarfmoose/roberta-base-bulgarian
|
2021-05-20T16:50:24.000Z
|
[
"pytorch",
"tf",
"jax",
"roberta",
"masked-lm",
"bg",
"arxiv:1907.11692",
"transformers",
"fill-mask"
] |
fill-mask
|
[
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
] |
iarfmoose
| 53 |
transformers
|
---
language: bg
---
# RoBERTa-base-bulgarian
The RoBERTa model was originally introduced in [this paper](https://arxiv.org/abs/1907.11692). This is a version of [RoBERTa-base](https://huggingface.co/roberta-base) pretrained on Bulgarian text.
## Intended uses
This model can be used for cloze tasks (masked language modeling) or finetuned on other tasks in Bulgarian.
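For example, a minimal fill-mask sketch (the Bulgarian prompt is just an illustration):
```python
from transformers import pipeline

fill_mask = pipeline('fill-mask', model='iarfmoose/roberta-base-bulgarian')
# RoBERTa-style models use <mask> as the mask token
print(fill_mask('Аз обичам да чета <mask>.'))
```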
## Limitations and bias
The training data is unfiltered text from the internet and may contain all sorts of biases.
## Training data
This model was trained on the following data:
- [bg_dedup from OSCAR](https://oscar-corpus.com/)
- [Newscrawl 1 million sentences 2017 from Leipzig Corpora Collection](https://wortschatz.uni-leipzig.de/en/download/bulgarian)
- [Wikipedia 1 million sentences 2016 from Leipzig Corpora Collection](https://wortschatz.uni-leipzig.de/en/download/bulgarian)
## Training procedure
The model was pretrained using a masked language-modeling objective with dynamic masking as described [here](https://huggingface.co/roberta-base#preprocessing).
It was trained for 200k steps. The batch size was limited to 8 due to GPU memory limitations.
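As a rough illustration only (not the actual pretraining script), dynamic masking can be reproduced with the `transformers` data collator, which re-samples the masked positions every time a batch is built:
```python
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

tokenizer = AutoTokenizer.from_pretrained('iarfmoose/roberta-base-bulgarian')
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)

encoding = tokenizer('Аз обичам да чета книги.', return_tensors='pt')
# each call masks a fresh random 15% of tokens, so the same sentence
# receives different masks on different epochs
batch = collator([{'input_ids': encoding['input_ids'][0]}])
print(batch['input_ids'])
print(batch['labels'])
```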
|
iarfmoose/roberta-small-bulgarian-ner
|
2021-05-20T16:51:18.000Z
|
[
"pytorch",
"tf",
"jax",
"roberta",
"token-classification",
"transformers"
] |
token-classification
|
[
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.json"
] |
iarfmoose
| 14 |
transformers
| |
iarfmoose/roberta-small-bulgarian-pos
|
2021-05-20T16:52:10.000Z
|
[
"pytorch",
"tf",
"jax",
"roberta",
"token-classification",
"bg",
"arxiv:1907.11692",
"transformers"
] |
token-classification
|
[
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.json"
] |
iarfmoose
| 14 |
transformers
|
---
language: bg
---
# RoBERTa-small-bulgarian-POS
The RoBERTa model was originally introduced in [this paper](https://arxiv.org/abs/1907.11692). This model is a version of [RoBERTa-small-Bulgarian](https://huggingface.co/iarfmoose/roberta-small-bulgarian) fine-tuned for part-of-speech tagging.
## Intended uses
The model can be used to predict part-of-speech tags in Bulgarian text. Since the tokenizer uses byte-pair encoding, each word in the text may be split into more than one token. When predicting POS-tags, the last token from each word can be used. Using the last token was found to slightly outperform predictions based on the first token.
An example of this can be found [here](https://github.com/iarfmoose/bulgarian-nlp/blob/master/models/postagger.py).
## Limitations and bias
The pretraining data is unfiltered text from the internet and may contain all sorts of biases.
## Training data
In addition to the pretraining data used in [RoBERTa-base-Bulgarian](https://huggingface.co/iarfmoose/roberta-base-bulgarian), the model was trained on the UPOS tags from [UD_Bulgarian-BTB](https://github.com/UniversalDependencies/UD_Bulgarian-BTB).
## Training procedure
The model was trained for 5 epochs over the training set. The loss was calculated based on label predictions for the last POS-tag for each word. The model achieves 98% on the test set.
|
iarfmoose/roberta-small-bulgarian
|
2021-05-20T16:54:01.000Z
|
[
"pytorch",
"tf",
"jax",
"roberta",
"masked-lm",
"bg",
"arxiv:1907.11692",
"transformers",
"fill-mask"
] |
fill-mask
|
[
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
] |
iarfmoose
| 31 |
transformers
|
---
language: bg
---
# RoBERTa-small-bulgarian
The RoBERTa model was originally introduced in [this paper](https://arxiv.org/abs/1907.11692). This is a smaller version of [RoBERTa-base-bulgarian](https://huggingface.co/iarfmoose/roberta-base-bulgarian) with only 6 hidden layers, but similar performance.
## Intended uses
This model can be used for cloze tasks (masked language modeling) or finetuned on other tasks in Bulgarian.
## Limitations and bias
The training data is unfiltered text from the internet and may contain all sorts of biases.
## Training data
This model was trained on the following data:
- [bg_dedup from OSCAR](https://oscar-corpus.com/)
- [Newscrawl 1 million sentences 2017 from Leipzig Corpora Collection](https://wortschatz.uni-leipzig.de/en/download/bulgarian)
- [Wikipedia 1 million sentences 2016 from Leipzig Corpora Collection](https://wortschatz.uni-leipzig.de/en/download/bulgarian)
## Training procedure
The model was pretrained using a masked language-modeling objective with dynamic masking, as described [here](https://huggingface.co/roberta-base#preprocessing).
It was trained for 160k steps. The batch size was limited to 8 due to GPU memory limitations.
|
iarfmoose/t5-base-question-generator
|
2020-12-11T21:45:23.000Z
|
[
"pytorch",
"tf",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
] |
text2text-generation
|
[
".gitattributes",
"README.md",
"added_tokens.json",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tf_model.h5",
"tokenizer_config.json"
] |
iarfmoose
| 2,767 |
transformers
|
# t5-base-question-generator
## Model description
This model is a sequence-to-sequence question generator which takes an answer and context as an input, and generates a question as an output. It is based on a pretrained `t5-base` model.
## Intended uses & limitations
The model is trained to generate reading comprehension-style questions with answers extracted from a text. The model performs best with full sentence answers, but can also be used with single word or short phrase answers.
#### How to use
The model takes concatenated answers and context as an input sequence, and will generate a full question sentence as an output sequence. The max sequence length is 512 tokens. Inputs should be organised into the following format:
```
answer_token <answer-phrase> context_token <context-from-text>
```
The input sequence can then be encoded and passed as the `input_ids` argument in the model's `generate()` method.
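A minimal sketch of that flow (the `<answer>`/`<context>` marker spellings and the example text are assumptions, not taken from this card; check the checkpoint's added tokens for the exact strings):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("iarfmoose/t5-base-question-generator")
model = AutoModelForSeq2SeqLM.from_pretrained("iarfmoose/t5-base-question-generator")
# Assumed marker spellings for answer_token / context_token -- verify against the tokenizer.
text = "<answer> the Eiffel Tower <context> The Eiffel Tower in Paris was completed in 1889."
input_ids = tokenizer(text, return_tensors="pt", truncation=True, max_length=512).input_ids
output_ids = model.generate(input_ids, max_length=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```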
For best results, a large number of questions can be generated, and then filtered using [iarfmoose/bert-base-cased-qa-evaluator](https://huggingface.co/iarfmoose/bert-base-cased-qa-evaluator).
For examples, please see https://github.com/iarfmoose/question_generator.
#### Limitations and bias
The model is limited to generating questions in the same style as those found in [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/), [CoQA](https://stanfordnlp.github.io/coqa/), and [MSMARCO](https://microsoft.github.io/msmarco/). The generated questions can potentially be leading or reflect biases that are present in the context. If the context is too short or completely absent, or if the context and answer do not match, the generated question is likely to be incoherent.
## Training data
The model was fine-tuned on a dataset made up of several well-known QA datasets ([SQuAD](https://rajpurkar.github.io/SQuAD-explorer/), [CoQA](https://stanfordnlp.github.io/coqa/), and [MSMARCO](https://microsoft.github.io/msmarco/)). The datasets were restructured by concatenating the answer and context fields into the previously-mentioned format. The question field from the datasets was used as the target during training. The full training set was roughly 200,000 examples.
## Training procedure
The model was trained for 20 epochs over the training set with a learning rate of 1e-3. The batch size was only 4 due to GPU memory limitations when training on Google Colab.
|
iarfmoose/wav2vec2-large-xlsr-frisian
|
2021-03-29T18:05:16.000Z
|
[
"pytorch",
"wav2vec2",
"fy-NL",
"dataset:common_voice",
"transformers",
"audio",
"automatic-speech-recognition",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0"
] |
automatic-speech-recognition
|
[
".gitattributes",
"README.md",
"config.json",
"preprocessor_config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
] |
iarfmoose
| 8 |
transformers
|
---
language: fy-NL
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Frisian by Adam Montgomerie
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice fy-NL
type: common_voice
args: fy-NL
metrics:
- name: Test WER
type: wer
value: 21.72
---
# Wav2Vec2-Large-XLSR-53-Frisian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Frisian using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "fy-NL", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("iarfmoose/wav2vec2-large-xlsr-frisian")
model = Wav2Vec2ForCTC.from_pretrained("iarfmoose/wav2vec2-large-xlsr-frisian")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Frisian test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "fy-NL", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("iarfmoose/wav2vec2-large-xlsr-frisian")
model = Wav2Vec2ForCTC.from_pretrained("iarfmoose/wav2vec2-large-xlsr-frisian")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\–\—\¬\⅛]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 21.72 %
## Training
The Common Voice `train` and `validation` datasets were used for training.
The script used for training can be found [here](https://github.com/AMontgomerie/wav2vec2-xlsr/blob/main/Frisian/XLSR_Frisian.ipynb).
A notebook of the evaluation script can be found [here](https://github.com/AMontgomerie/wav2vec2-xlsr/blob/main/Frisian/wav2vec2_fyNL_eval.ipynb).
|
iarfmoose/wav2vec2-large-xlsr-kyrgyz
|
2021-03-29T18:06:48.000Z
|
[
"pytorch",
"wav2vec2",
"ky",
"dataset:common_voice",
"transformers",
"audio",
"automatic-speech-recognition",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0"
] |
automatic-speech-recognition
|
[
".gitattributes",
"README.md",
"config.json",
"preprocessor_config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
] |
iarfmoose
| 22 |
transformers
|
---
language: ky
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Kyrgyz by Adam Montgomerie
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice ky
type: common_voice
args: ky
metrics:
- name: Test WER
type: wer
value: 34.71
---
# Wav2Vec2-Large-XLSR-53-Kyrgyz
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Kyrgyz using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "ky", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("iarfmoose/wav2vec2-large-xlsr-kyrgyz")
model = Wav2Vec2ForCTC.from_pretrained("iarfmoose/wav2vec2-large-xlsr-kyrgyz")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Kyrgyz test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "ky", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("iarfmoose/wav2vec2-large-xlsr-kyrgyz")
model = Wav2Vec2ForCTC.from_pretrained("iarfmoose/wav2vec2-large-xlsr-kyrgyz")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\–\—\¬\⅛]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 34.71 %
## Training
The Common Voice `train` and `validation` datasets were used for training.
The script used for training can be found [here](https://github.com/AMontgomerie/wav2vec2-xlsr/blob/main/Kyrgyz/XLSR_Kyrgyz.ipynb).
A notebook of the evaluation script can be found [here](https://github.com/AMontgomerie/wav2vec2-xlsr/blob/main/Kyrgyz/wav2vec2_ky_eval.ipynb).
|
iarfmoose/wav2vec2-large-xlsr-sorbian
|
2021-03-29T20:33:01.000Z
|
[
"pytorch",
"wav2vec2",
"hsb",
"dataset:common_voice",
"transformers",
"audio",
"automatic-speech-recognition",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0"
] |
automatic-speech-recognition
|
[
".gitattributes",
"README.md",
"config.json",
"preprocessor_config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
] |
iarfmoose
| 27 |
transformers
|
---
language: hsb
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Sorbian by Adam Montgomerie
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice hsb
type: common_voice
args: hsb
metrics:
- name: Test WER
type: wer
value: 41.74
---
# Wav2Vec2-Large-XLSR-53-Sorbian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Sorbian using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "hsb", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("iarfmoose/wav2vec2-large-xlsr-sorbian")
model = Wav2Vec2ForCTC.from_pretrained("iarfmoose/wav2vec2-large-xlsr-sorbian")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Sorbian test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "hsb", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("iarfmoose/wav2vec2-large-xlsr-sorbian")
model = Wav2Vec2ForCTC.from_pretrained("iarfmoose/wav2vec2-large-xlsr-sorbian")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\–\—\¬\⅛]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 41.74 %
## Training
The Common Voice `train` and `validation` datasets were used for training.
The script used for training can be found [here](https://github.com/AMontgomerie/wav2vec2-xlsr/blob/main/Sorbian/XLSR_Sorbian.ipynb).
A notebook of the evaluation script can be found [here](https://github.com/AMontgomerie/wav2vec2-xlsr/blob/main/Sorbian/wav2vec2_hsb_eval.ipynb).
|
ibahadiraltun/berturk-social
|
2021-05-20T16:54:49.000Z
|
[
"pytorch",
"jax",
"roberta",
"masked-lm",
"transformers",
"fill-mask"
] |
fill-mask
|
[
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"training_args.bin",
"vocab.json"
] |
ibahadiraltun
| 17 |
transformers
| |
iby/bert-turkish-cased
|
2021-03-23T12:29:09.000Z
|
[] |
[
".gitattributes",
"README.md"
] |
iby
| 0 | |||
idb-ita/gilberto-uncased-from-camembert
|
2020-04-24T16:01:20.000Z
|
[
"pytorch",
"camembert",
"masked-lm",
"transformers",
"fill-mask"
] |
fill-mask
|
[
".gitattributes",
"config.json",
"pytorch_model.bin",
"sentencepiece.bpe.model"
] |
idb-ita
| 2,612 |
transformers
| |
idjotherwise/autonlp-reading_prediction-172506
|
2021-05-20T16:57:07.000Z
|
[
"pytorch",
"jax",
"roberta",
"text-classification",
"en",
"dataset:idjotherwise/autonlp-data-reading_prediction",
"transformers",
"autonlp"
] |
text-classification
|
[
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"sample_input.pkl",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
] |
idjotherwise
| 89 |
transformers
|
---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- idjotherwise/autonlp-data-reading_prediction
---
# Model Trained Using AutoNLP
- Problem type: Single Column Regression
- Model ID: 172506
## Validation Metrics
- Loss: 0.03257797285914421
- MSE: 0.03257797285914421
- MAE: 0.14246532320976257
- R2: 0.9693824457290849
- RMSE: 0.18049369752407074
- Explained Variance: 0.9699198007583618
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/idjotherwise/autonlp-reading_prediction-172506
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("idjotherwise/autonlp-reading_prediction-172506")
tokenizer = AutoTokenizer.from_pretrained("idjotherwise/autonlp-reading_prediction-172506")
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
```
|
idobegaming/idobegaming
|
2021-01-24T21:27:07.000Z
|
[] |
[
".gitattributes",
"README.md"
] |
idobegaming
| 0 |
a
|
||
ielab/TILDE
|
2021-06-15T01:36:26.000Z
|
[
"pytorch",
"bert",
"lm-head",
"masked-lm",
"transformers",
"fill-mask"
] |
fill-mask
|
[
".gitattributes",
"config.json",
"pytorch_model.bin"
] |
ielab
| 54 |
transformers
| |
ietz/comment-linking-distilbert-base-german-cased
|
2020-10-22T17:41:09.000Z
|
[
"pytorch",
"distilbert",
"text-classification",
"transformers"
] |
text-classification
|
[
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
] |
ietz
| 24 |
transformers
| |
igunathilake/bert-it
|
2021-03-27T11:16:29.000Z
|
[] |
[
".gitattributes",
"README.md"
] |
igunathilake
| 0 | |||
ihiratanveer/COVID-QA-Longformer
|
2021-02-05T18:56:05.000Z
|
[] |
[
".gitattributes"
] |
ihiratanveer
| 0 | |||
illuin/camembert-base-fquad
|
2020-12-11T21:45:27.000Z
|
[
"pytorch",
"camembert",
"question-answering",
"fr",
"dataset:fquad",
"transformers",
"license:gpl-3.0"
] |
question-answering
|
[
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"sentencepiece.bpe.model",
"special_tokens_map.json",
"tokenizer_config.json"
] |
illuin
| 2,790 |
transformers
|
---
language: fr
tags:
- question-answering
- camembert
license: gpl-3.0
datasets:
- fquad
---
# camembert-base-fquad
## Description
A native French Question Answering model: [CamemBERT-base](https://camembert-model.fr/) fine-tuned on [FQuAD](https://fquad.illuin.tech/).
## Evaluation results
On the development set.
```shell
{"f1": 88.1, "exact_match": 78.1}
```
On the test set.
```shell
{"f1": 88.3, "exact_match": 78.0}
```
## Usage
```python
from transformers import pipeline
nlp = pipeline('question-answering', model='illuin/camembert-base-fquad', tokenizer='illuin/camembert-base-fquad')
nlp({
'question': "Qui est Claude Monet?",
'context': "Claude Monet, né le 14 novembre 1840 à Paris et mort le 5 décembre 1926 à Giverny, est un peintre français et l’un des fondateurs de l'impressionnisme."
})
```
## Citation
If you use our work, please cite:
```bibtex
@article{dHoffschmidt2020FQuADFQ,
title={FQuAD: French Question Answering Dataset},
author={Martin d'Hoffschmidt and Maxime Vidal and Wacim Belblidia and Tom Brendl'e and Quentin Heinrich},
journal={ArXiv},
year={2020},
volume={abs/2002.06071}
}
```
|
ilya68rus/test
|
2020-12-17T11:00:23.000Z
|
[] |
[
".gitattributes"
] |
ilya68rus
| 0 | |||
imaguru/aaaa
|
2021-01-30T06:19:35.000Z
|
[] |
[
".gitattributes"
] |
imaguru
| 0 | |||
imvladikon/bert-large-cased-finetuned-conll03-english
|
2021-05-19T20:18:21.000Z
|
[
"pytorch",
"jax",
"bert",
"token-classification",
"transformers"
] |
token-classification
|
[
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
] |
imvladikon
| 23 |
transformers
|
```python
{
'epoch': 2.0,
'eval_accuracy': 0.9878289280037675,
'eval_f1': 0.9524406066842648,
'eval_loss': 0.06057225540280342,
'eval_mem_cpu_alloc_delta': 2711552,
'eval_mem_cpu_peaked_delta': 2113536,
'eval_mem_gpu_alloc_delta': 0,
'eval_mem_gpu_peaked_delta': 126590464,
'eval_precision': 0.9499330655957162,
'eval_recall': 0.9549614211376278,
'eval_runtime': 20.9379,
'eval_samples_per_second': 155.221
}
```
|
imvladikon/wav2vec2-large-xlsr-53-hebrew
|
2021-04-04T18:14:53.000Z
|
[
"pytorch",
"wav2vec2",
"he",
"dataset:common_voice",
"transformers",
"audio",
"automatic-speech-recognition",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0"
] |
automatic-speech-recognition
|
[
".gitattributes",
"README.md",
"added_tokens.json",
"config.json",
"preprocessor_config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
] |
imvladikon
| 7,559 |
transformers
|
---
language: he
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Hebrew XLSR Wav2Vec2 Large 53
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name:
type:
args: he
metrics:
- name: Test WER
type: wer
value:
---
# wav2vec2-large-xlsr-53-hebrew
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on several downloaded YouTube samples.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "he", split="test[:2%]") # there is no common dataset for Hebrew, please, paste your data
processor = Wav2Vec2Processor.from_pretrained("imvladikon/wav2vec2-large-xlsr-53-hebrew")
model = Wav2Vec2ForCTC.from_pretrained("imvladikon/wav2vec2-large-xlsr-53-hebrew")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on some Hebrew test data.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "he", split="test") # there is no common dataset for Hebrew, please, paste your data
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("imvladikon/wav2vec2-large-xlsr-53-hebrew")
model = Wav2Vec2ForCTC.from_pretrained("imvladikon/wav2vec2-large-xlsr-53-hebrew").to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run inference over the test set and collect the predicted transcriptions
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**:
# Example Predictions
|
imxly/sentence_roberta_wwm_ext
|
2021-05-19T20:20:32.000Z
|
[
"pytorch",
"jax",
"bert",
"transformers"
] |
[
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"vocab.txt"
] |
imxly
| 13,600 |
transformers
| ||
imxly/sentence_rtb3
|
2021-05-19T20:21:34.000Z
|
[
"pytorch",
"jax",
"bert",
"transformers"
] |
[
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"vocab.txt"
] |
imxly
| 31 |
transformers
| ||
imxly/t5-pegasus-small
|
2021-03-29T05:28:17.000Z
|
[
"pytorch",
"mt5",
"seq2seq",
"transformers",
"text2text-generation"
] |
text2text-generation
|
[
".gitattributes",
"config.json",
"pytorch_model.bin",
"vocab.txt"
] |
imxly
| 97 |
transformers
| |
imxly/t5-pegasus
|
2021-03-29T03:34:09.000Z
|
[
"pytorch",
"mt5",
"seq2seq",
"transformers",
"text2text-generation"
] |
text2text-generation
|
[
".gitattributes",
"config.json",
"pytorch_model.bin",
"vocab.txt"
] |
imxly
| 242 |
transformers
| |
indigo-ai/BERTino
|
2020-10-21T15:28:39.000Z
|
[
"pytorch",
"tf",
"distilbert",
"masked-lm",
"it",
"transformers",
"DISTILbert",
"Italian",
"license:mit",
"fill-mask"
] |
fill-mask
|
[
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] |
indigo-ai
| 1,490 |
transformers
|
---
language: it
tags:
- DISTILbert
- Italian
license: mit
widget:
- text: "Vado al [MASK] a fare la spesa"
- text: "Vado al parco a guardare le [MASK]"
- text: "Il cielo è [MASK] di stelle."
---
# BERTino: an Italian DistilBERT model
This repository hosts BERTino, an Italian DistilBERT model pre-trained by
[indigo.ai](https://indigo.ai/en/)
on a large general-domain Italian corpus. BERTino is task-agnostic and can be
fine-tuned for every downstream task.
### Corpus
The pre-training corpus that we used is the union of the
[Paisa](https://www.corpusitaliano.it/) and
[ItWaC](https://corpora.dipintra.it/public/run.cgi/corp_info?corpname=itwac_full)
corpora. The final corpus contains 14 million sentences, for a total of 12 GB
of text.
### Downstream Results
To validate the pre-training that we conducted, we evaluated BERTino on the
[Italian ParTUT](https://universaldependencies.org/treebanks/it_partut/index.html),
[Italian ISDT](https://universaldependencies.org/treebanks/it_isdt/index.html),
[Italian WikiNER](https://figshare.com/articles/Learning_multilingual_named_entity_recognition_from_Wikipedia/5462500)
and multi-class sentence classification tasks. For comparison, we report results
obtained by the [teacher model](https://huggingface.co/dbmdz/bert-base-italian-xxl-uncased)
fine-tuned on the same tasks for the same number of epochs.
**Italian ISDT:**
| Model | F1 score | Fine-tuning time | Evaluation time |
|--------------|----------|------------------|-----------------|
| BERTino | 0,9801 | 9m, 4s | 3s |
| Teacher | 0,983 | 16m, 28s | 5s |
**Italian ParTUT:**
| Model | F1 score | Fine-tuning time | Evaluation time |
|--------------|----------|------------------|-----------------|
| BERTino | 0,9268 | 1m, 18s | 1s |
| Teacher | 0,9688 | 2m, 18s | 1s |
**Italian WikiNER:**
| Model | F1 score | Fine-tuning time | Evaluation time |
|--------------|----------|------------------|-----------------|
| BERTino | 0,9038 | 35m, 35s | 3m, 1s |
| Teacher | 0,9178 | 67m, 8s | 5m, 16s |
**Multi-class sentence classification:**
| Model | F1 score | Fine-tuning time | Evaluation time |
|--------------|----------|------------------|-----------------|
| BERTino | 0,7788 | 4m, 40s | 6s |
| Teacher | 0,7986 | 8m, 52s | 9s |
|
indobenchmark/indobert-base-p1
|
2021-05-19T20:22:23.000Z
|
[
"pytorch",
"tf",
"jax",
"bert",
"id",
"dataset:Indo4B",
"arxiv:2009.05387",
"transformers",
"indobert",
"indobenchmark",
"indonlu",
"license:mit"
] |
[
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] |
indobenchmark
| 4,400 |
transformers
|
---
language: id
tags:
- indobert
- indobenchmark
- indonlu
license: mit
inference: false
datasets:
- Indo4B
---
# IndoBERT Base Model (phase1 - uncased)
[IndoBERT](https://arxiv.org/abs/2009.05387) is a state-of-the-art language model for Indonesian based on the BERT model. The pretrained model is trained using a masked language modeling (MLM) objective and next sentence prediction (NSP) objective.
## All Pre-trained Models
| Model | #params | Arch. | Training data |
|--------------------------------|--------------------------------|-------|-----------------------------------|
| `indobenchmark/indobert-base-p1` | 124.5M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-base-p2` | 124.5M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-large-p1` | 335.2M | Large | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-large-p2` | 335.2M | Large | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-base-p1` | 11.7M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-base-p2` | 11.7M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-large-p1` | 17.7M | Large | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-large-p2` | 17.7M | Large | Indo4B (23.43 GB of text) |
## How to use
### Load model and tokenizer
```python
from transformers import BertTokenizer, AutoModel
tokenizer = BertTokenizer.from_pretrained("indobenchmark/indobert-base-p1")
model = AutoModel.from_pretrained("indobenchmark/indobert-base-p1")
```
### Extract contextual representation
```python
import torch
x = torch.LongTensor(tokenizer.encode('aku adalah anak [MASK]')).view(1,-1)
print(x, model(x)[0].sum())
```
## Authors
<b>IndoBERT</b> was trained and evaluated by Bryan Wilie\*, Karissa Vincentio\*, Genta Indra Winata\*, Samuel Cahyawijaya\*, Xiaohong Li, Zhi Yuan Lim, Sidik Soleman, Rahmad Mahendra, Pascale Fung, Syafri Bahar, Ayu Purwarianti.
## Citation
If you use our work, please cite:
```bibtex
@inproceedings{wilie2020indonlu,
title={IndoNLU: Benchmark and Resources for Evaluating Indonesian Natural Language Understanding},
author={Bryan Wilie and Karissa Vincentio and Genta Indra Winata and Samuel Cahyawijaya and X. Li and Zhi Yuan Lim and S. Soleman and R. Mahendra and Pascale Fung and Syafri Bahar and A. Purwarianti},
booktitle={Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing},
year={2020}
}
```
|
|
indobenchmark/indobert-base-p2
|
2021-05-19T20:24:07.000Z
|
[
"pytorch",
"tf",
"jax",
"bert",
"id",
"dataset:Indo4B",
"arxiv:2009.05387",
"transformers",
"indobert",
"indobenchmark",
"indonlu",
"license:mit"
] |
[
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] |
indobenchmark
| 1,813 |
transformers
|
---
language: id
tags:
- indobert
- indobenchmark
- indonlu
license: mit
inference: false
datasets:
- Indo4B
---
# IndoBERT Base Model (phase2 - uncased)
[IndoBERT](https://arxiv.org/abs/2009.05387) is a state-of-the-art language model for Indonesian based on the BERT model. The pretrained model is trained using a masked language modeling (MLM) objective and next sentence prediction (NSP) objective.
## All Pre-trained Models
| Model | #params | Arch. | Training data |
|--------------------------------|--------------------------------|-------|-----------------------------------|
| `indobenchmark/indobert-base-p1` | 124.5M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-base-p2` | 124.5M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-large-p1` | 335.2M | Large | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-large-p2` | 335.2M | Large | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-base-p1` | 11.7M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-base-p2` | 11.7M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-large-p1` | 17.7M | Large | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-large-p2` | 17.7M | Large | Indo4B (23.43 GB of text) |
## How to use
### Load model and tokenizer
```python
from transformers import BertTokenizer, AutoModel
tokenizer = BertTokenizer.from_pretrained("indobenchmark/indobert-base-p2")
model = AutoModel.from_pretrained("indobenchmark/indobert-base-p2")
```
### Extract contextual representation
```python
import torch
x = torch.LongTensor(tokenizer.encode('aku adalah anak [MASK]')).view(1,-1)
print(x, model(x)[0].sum())
```
## Authors
<b>IndoBERT</b> was trained and evaluated by Bryan Wilie\*, Karissa Vincentio\*, Genta Indra Winata\*, Samuel Cahyawijaya\*, Xiaohong Li, Zhi Yuan Lim, Sidik Soleman, Rahmad Mahendra, Pascale Fung, Syafri Bahar, Ayu Purwarianti.
## Citation
If you use our work, please cite:
```bibtex
@inproceedings{wilie2020indonlu,
title={IndoNLU: Benchmark and Resources for Evaluating Indonesian Natural Language Understanding},
author={Bryan Wilie and Karissa Vincentio and Genta Indra Winata and Samuel Cahyawijaya and X. Li and Zhi Yuan Lim and S. Soleman and R. Mahendra and Pascale Fung and Syafri Bahar and A. Purwarianti},
booktitle={Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing},
year={2020}
}
```
|
|
indobenchmark/indobert-large-p1
|
2021-05-19T20:26:01.000Z
|
[
"pytorch",
"tf",
"jax",
"bert",
"id",
"dataset:Indo4B",
"arxiv:2009.05387",
"transformers",
"indobert",
"indobenchmark",
"indonlu",
"license:mit"
] |
[
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] |
indobenchmark
| 232 |
transformers
|
---
language: id
tags:
- indobert
- indobenchmark
- indonlu
license: mit
inference: false
datasets:
- Indo4B
---
# IndoBERT Large Model (phase1 - uncased)
[IndoBERT](https://arxiv.org/abs/2009.05387) is a state-of-the-art language model for Indonesian based on the BERT model. The pretrained model is trained using a masked language modeling (MLM) objective and next sentence prediction (NSP) objective.
## All Pre-trained Models
| Model | #params | Arch. | Training data |
|--------------------------------|--------------------------------|-------|-----------------------------------|
| `indobenchmark/indobert-base-p1` | 124.5M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-base-p2` | 124.5M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-large-p1` | 335.2M | Large | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-large-p2` | 335.2M | Large | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-base-p1` | 11.7M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-base-p2` | 11.7M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-large-p1` | 17.7M | Large | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-large-p2` | 17.7M | Large | Indo4B (23.43 GB of text) |
## How to use
### Load model and tokenizer
```python
from transformers import BertTokenizer, AutoModel
tokenizer = BertTokenizer.from_pretrained("indobenchmark/indobert-large-p1")
model = AutoModel.from_pretrained("indobenchmark/indobert-large-p1")
```
### Extract contextual representation
```python
import torch
x = torch.LongTensor(tokenizer.encode('aku adalah anak [MASK]')).view(1,-1)
print(x, model(x)[0].sum())
```
## Authors
<b>IndoBERT</b> was trained and evaluated by Bryan Wilie\*, Karissa Vincentio\*, Genta Indra Winata\*, Samuel Cahyawijaya\*, Xiaohong Li, Zhi Yuan Lim, Sidik Soleman, Rahmad Mahendra, Pascale Fung, Syafri Bahar, Ayu Purwarianti.
## Citation
If you use our work, please cite:
```bibtex
@inproceedings{wilie2020indonlu,
title={IndoNLU: Benchmark and Resources for Evaluating Indonesian Natural Language Understanding},
author={Bryan Wilie and Karissa Vincentio and Genta Indra Winata and Samuel Cahyawijaya and X. Li and Zhi Yuan Lim and S. Soleman and R. Mahendra and Pascale Fung and Syafri Bahar and A. Purwarianti},
booktitle={Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing},
year={2020}
}
```
|
|
indobenchmark/indobert-large-p2
|
2021-05-19T20:28:22.000Z
|
[
"pytorch",
"tf",
"jax",
"bert",
"id",
"dataset:Indo4B",
"arxiv:2009.05387",
"transformers",
"indobert",
"indobenchmark",
"indonlu",
"license:mit"
] |
[
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] |
indobenchmark
| 153 |
transformers
|
---
language: id
tags:
- indobert
- indobenchmark
- indonlu
license: mit
inference: false
datasets:
- Indo4B
---
# IndoBERT Large Model (phase2 - uncased)
[IndoBERT](https://arxiv.org/abs/2009.05387) is a state-of-the-art language model for Indonesian based on the BERT model. The pretrained model is trained using a masked language modeling (MLM) objective and next sentence prediction (NSP) objective.
## All Pre-trained Models
| Model | #params | Arch. | Training data |
|--------------------------------|--------------------------------|-------|-----------------------------------|
| `indobenchmark/indobert-base-p1` | 124.5M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-base-p2` | 124.5M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-large-p1` | 335.2M | Large | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-large-p2` | 335.2M | Large | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-base-p1` | 11.7M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-base-p2` | 11.7M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-large-p1` | 17.7M | Large | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-large-p2` | 17.7M | Large | Indo4B (23.43 GB of text) |
## How to use
### Load model and tokenizer
```python
from transformers import BertTokenizer, AutoModel
tokenizer = BertTokenizer.from_pretrained("indobenchmark/indobert-large-p2")
model = AutoModel.from_pretrained("indobenchmark/indobert-large-p2")
```
### Extract contextual representation
```python
import torch
x = torch.LongTensor(tokenizer.encode('aku adalah anak [MASK]')).view(1,-1)
print(x, model(x)[0].sum())
```
## Authors
<b>IndoBERT</b> was trained and evaluated by Bryan Wilie\*, Karissa Vincentio\*, Genta Indra Winata\*, Samuel Cahyawijaya\*, Xiaohong Li, Zhi Yuan Lim, Sidik Soleman, Rahmad Mahendra, Pascale Fung, Syafri Bahar, Ayu Purwarianti.
## Citation
If you use our work, please cite:
```bibtex
@inproceedings{wilie2020indonlu,
title={IndoNLU: Benchmark and Resources for Evaluating Indonesian Natural Language Understanding},
author={Bryan Wilie and Karissa Vincentio and Genta Indra Winata and Samuel Cahyawijaya and X. Li and Zhi Yuan Lim and S. Soleman and R. Mahendra and Pascale Fung and Syafri Bahar and A. Purwarianti},
booktitle={Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing},
year={2020}
}
```
|
|
indobenchmark/indobert-lite-base-p1
|
2020-12-11T21:45:50.000Z
|
[
"pytorch",
"tf",
"albert",
"id",
"dataset:Indo4B",
"arxiv:2009.05387",
"transformers",
"indobert",
"indobenchmark",
"indonlu",
"license:mit"
] |
[
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] |
indobenchmark
| 600 |
transformers
|
---
language: id
tags:
- indobert
- indobenchmark
- indonlu
license: mit
inference: false
datasets:
- Indo4B
---
# IndoBERT-Lite Base Model (phase1 - uncased)
[IndoBERT](https://arxiv.org/abs/2009.05387) is a state-of-the-art language model for Indonesian based on the BERT model. The pretrained model is trained using a masked language modeling (MLM) objective and next sentence prediction (NSP) objective.
## All Pre-trained Models
| Model | #params | Arch. | Training data |
|--------------------------------|--------------------------------|-------|-----------------------------------|
| `indobenchmark/indobert-base-p1` | 124.5M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-base-p2` | 124.5M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-large-p1` | 335.2M | Large | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-large-p2` | 335.2M | Large | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-base-p1` | 11.7M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-base-p2` | 11.7M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-large-p1` | 17.7M | Large | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-large-p2` | 17.7M | Large | Indo4B (23.43 GB of text) |
## How to use
### Load model and tokenizer
```python
from transformers import BertTokenizer, AutoModel
tokenizer = BertTokenizer.from_pretrained("indobenchmark/indobert-lite-base-p1")
model = AutoModel.from_pretrained("indobenchmark/indobert-lite-base-p1")
```
### Extract contextual representation
```python
import torch
x = torch.LongTensor(tokenizer.encode('aku adalah anak [MASK]')).view(1,-1)
print(x, model(x)[0].sum())
```
## Authors
<b>IndoBERT</b> was trained and evaluated by Bryan Wilie\*, Karissa Vincentio\*, Genta Indra Winata\*, Samuel Cahyawijaya\*, Xiaohong Li, Zhi Yuan Lim, Sidik Soleman, Rahmad Mahendra, Pascale Fung, Syafri Bahar, Ayu Purwarianti.
## Citation
If you use our work, please cite:
```bibtex
@inproceedings{wilie2020indonlu,
title={IndoNLU: Benchmark and Resources for Evaluating Indonesian Natural Language Understanding},
author={Bryan Wilie and Karissa Vincentio and Genta Indra Winata and Samuel Cahyawijaya and X. Li and Zhi Yuan Lim and S. Soleman and R. Mahendra and Pascale Fung and Syafri Bahar and A. Purwarianti},
booktitle={Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing},
year={2020}
}
```
|
|
indobenchmark/indobert-lite-base-p2
|
2020-12-11T21:45:53.000Z
|
[
"pytorch",
"tf",
"albert",
"id",
"dataset:Indo4B",
"arxiv:2009.05387",
"transformers",
"indobert",
"indobenchmark",
"indonlu",
"license:mit"
] |
[
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] |
indobenchmark
| 127 |
transformers
|
---
language: id
tags:
- indobert
- indobenchmark
- indonlu
license: mit
inference: false
datasets:
- Indo4B
---
# IndoBERT-Lite Base Model (phase2 - uncased)
[IndoBERT](https://arxiv.org/abs/2009.05387) is a state-of-the-art language model for Indonesian based on the BERT model. The pretrained model is trained using a masked language modeling (MLM) objective and next sentence prediction (NSP) objective.
## All Pre-trained Models
| Model | #params | Arch. | Training data |
|--------------------------------|--------------------------------|-------|-----------------------------------|
| `indobenchmark/indobert-base-p1` | 124.5M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-base-p2` | 124.5M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-large-p1` | 335.2M | Large | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-large-p2` | 335.2M | Large | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-base-p1` | 11.7M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-base-p2` | 11.7M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-large-p1` | 17.7M | Large | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-large-p2` | 17.7M | Large | Indo4B (23.43 GB of text) |
## How to use
### Load model and tokenizer
```python
from transformers import BertTokenizer, AutoModel
tokenizer = BertTokenizer.from_pretrained("indobenchmark/indobert-lite-base-p2")
model = AutoModel.from_pretrained("indobenchmark/indobert-lite-base-p2")
```
### Extract contextual representation
```python
import torch
x = torch.LongTensor(tokenizer.encode('aku adalah anak [MASK]')).view(1,-1)
print(x, model(x)[0].sum())
```
## Authors
<b>IndoBERT</b> was trained and evaluated by Bryan Wilie\*, Karissa Vincentio\*, Genta Indra Winata\*, Samuel Cahyawijaya\*, Xiaohong Li, Zhi Yuan Lim, Sidik Soleman, Rahmad Mahendra, Pascale Fung, Syafri Bahar, Ayu Purwarianti.
## Citation
If you use our work, please cite:
```bibtex
@inproceedings{wilie2020indonlu,
title={IndoNLU: Benchmark and Resources for Evaluating Indonesian Natural Language Understanding},
author={Bryan Wilie and Karissa Vincentio and Genta Indra Winata and Samuel Cahyawijaya and X. Li and Zhi Yuan Lim and S. Soleman and R. Mahendra and Pascale Fung and Syafri Bahar and A. Purwarianti},
booktitle={Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing},
year={2020}
}
```
|
|
indobenchmark/indobert-lite-large-p1
|
2020-12-11T21:45:56.000Z
|
[
"pytorch",
"tf",
"albert",
"id",
"dataset:Indo4B",
"arxiv:2009.05387",
"transformers",
"indobert",
"indobenchmark",
"indonlu",
"license:mit"
] |
[
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] |
indobenchmark
| 130 |
transformers
|
---
language: id
tags:
- indobert
- indobenchmark
- indonlu
license: mit
inference: false
datasets:
- Indo4B
---
# IndoBERT-Lite Large Model (phase1 - uncased)
[IndoBERT](https://arxiv.org/abs/2009.05387) is a state-of-the-art language model for Indonesian based on the BERT model. The pretrained model is trained using a masked language modeling (MLM) objective and next sentence prediction (NSP) objective.
## All Pre-trained Models
| Model | #params | Arch. | Training data |
|--------------------------------|--------------------------------|-------|-----------------------------------|
| `indobenchmark/indobert-base-p1` | 124.5M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-base-p2` | 124.5M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-large-p1` | 335.2M | Large | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-large-p2` | 335.2M | Large | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-base-p1` | 11.7M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-base-p2` | 11.7M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-large-p1` | 17.7M | Large | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-large-p2` | 17.7M | Large | Indo4B (23.43 GB of text) |
## How to use
### Load model and tokenizer
```python
from transformers import BertTokenizer, AutoModel
tokenizer = BertTokenizer.from_pretrained("indobenchmark/indobert-lite-large-p1")
model = AutoModel.from_pretrained("indobenchmark/indobert-lite-large-p1")
```
### Extract contextual representation
```python
import torch
x = torch.LongTensor(tokenizer.encode('aku adalah anak [MASK]')).view(1,-1)
print(x, model(x)[0].sum())
```
## Authors
<b>IndoBERT</b> was trained and evaluated by Bryan Wilie\*, Karissa Vincentio\*, Genta Indra Winata\*, Samuel Cahyawijaya\*, Xiaohong Li, Zhi Yuan Lim, Sidik Soleman, Rahmad Mahendra, Pascale Fung, Syafri Bahar, Ayu Purwarianti.
## Citation
If you use our work, please cite:
```bibtex
@inproceedings{wilie2020indonlu,
title={IndoNLU: Benchmark and Resources for Evaluating Indonesian Natural Language Understanding},
author={Bryan Wilie and Karissa Vincentio and Genta Indra Winata and Samuel Cahyawijaya and X. Li and Zhi Yuan Lim and S. Soleman and R. Mahendra and Pascale Fung and Syafri Bahar and A. Purwarianti},
booktitle={Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing},
year={2020}
}
```
|
|
indobenchmark/indobert-lite-large-p2
|
2020-12-11T21:45:59.000Z
|
[
"pytorch",
"tf",
"albert",
"id",
"dataset:Indo4B",
"arxiv:2009.05387",
"transformers",
"indobert",
"indobenchmark",
"indonlu",
"license:mit"
] |
[
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] |
indobenchmark
| 135 |
transformers
|
---
language: id
tags:
- indobert
- indobenchmark
- indonlu
license: mit
inference: false
datasets:
- Indo4B
---
# IndoBERT-Lite Large Model (phase2 - uncased)
[IndoBERT](https://arxiv.org/abs/2009.05387) is a state-of-the-art language model for Indonesian based on the BERT model. The pretrained model is trained using a masked language modeling (MLM) objective and next sentence prediction (NSP) objective.
## All Pre-trained Models
| Model | #params | Arch. | Training data |
|--------------------------------|--------------------------------|-------|-----------------------------------|
| `indobenchmark/indobert-base-p1` | 124.5M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-base-p2` | 124.5M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-large-p1` | 335.2M | Large | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-large-p2` | 335.2M | Large | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-base-p1` | 11.7M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-base-p2` | 11.7M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-large-p1` | 17.7M | Large | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-large-p2` | 17.7M | Large | Indo4B (23.43 GB of text) |
## How to use
### Load model and tokenizer
```python
from transformers import BertTokenizer, AutoModel
tokenizer = BertTokenizer.from_pretrained("indobenchmark/indobert-lite-large-p2")
model = AutoModel.from_pretrained("indobenchmark/indobert-lite-large-p2")
```
### Extract contextual representation
```python
import torch
x = torch.LongTensor(tokenizer.encode('aku adalah anak [MASK]')).view(1,-1)
print(x, model(x)[0].sum())
```
## Authors
<b>IndoBERT</b> was trained and evaluated by Bryan Wilie\*, Karissa Vincentio\*, Genta Indra Winata\*, Samuel Cahyawijaya\*, Xiaohong Li, Zhi Yuan Lim, Sidik Soleman, Rahmad Mahendra, Pascale Fung, Syafri Bahar, Ayu Purwarianti.
## Citation
If you use our work, please cite:
```bibtex
@inproceedings{wilie2020indonlu,
title={IndoNLU: Benchmark and Resources for Evaluating Indonesian Natural Language Understanding},
author={Bryan Wilie and Karissa Vincentio and Genta Indra Winata and Samuel Cahyawijaya and X. Li and Zhi Yuan Lim and S. Soleman and R. Mahendra and Pascale Fung and Syafri Bahar and A. Purwarianti},
booktitle={Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing},
year={2020}
}
```
|
|
indolem/indobert-base-uncased
|
2021-05-19T20:30:10.000Z
|
[
"pytorch",
"jax",
"bert",
"masked-lm",
"id",
"dataset:220M words (IndoWiki, IndoWC, News)",
"arxiv:2011.00677",
"transformers",
"indobert",
"indolem",
"license:mit",
"fill-mask"
] |
fill-mask
|
[
".gitattributes",
"README.md",
"added_tokens.json",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
] |
indolem
| 5,226 |
transformers
|
---
language: id
tags:
- indobert
- indolem
license: mit
inference: false
datasets:
- 220M words (IndoWiki, IndoWC, News)
---
## About
[IndoBERT](https://arxiv.org/pdf/2011.00677.pdf) is the Indonesian version of the BERT model. We trained the model on over 220M words, aggregated from three main sources:
* Indonesian Wikipedia (74M words)
* news articles from Kompas, Tempo (Tala et al., 2003), and Liputan6 (55M words in total)
* an Indonesian Web Corpus (Medved and Suchomel, 2017) (90M words).
We trained the model for 2.4M steps (180 epochs) with the final perplexity over the development set being <b>3.97</b> (similar to English BERT-base).
This <b>IndoBERT</b> was used to examine IndoLEM - an Indonesian benchmark that comprises seven tasks for the Indonesian language, spanning morpho-syntax, semantics, and discourse.
| Task | Metric | Bi-LSTM | mBERT | MalayBERT | IndoBERT |
| ---- | ---- | ---- | ---- | ---- | ---- |
| POS Tagging | Acc | 95.4 | <b>96.8</b> | <b>96.8</b> | <b>96.8</b> |
| NER UGM | F1| 70.9 | 71.6 | 73.2 | <b>74.9</b> |
| NER UI | F1 | 82.2 | 82.2 | 87.4 | <b>90.1</b> |
| Dep. Parsing (UD-Indo-GSD) | UAS/LAS | 85.25/80.35 | 86.85/81.78 | 86.99/81.87 | <b>87.12</b>/<b>82.32</b> |
| Dep. Parsing (UD-Indo-PUD) | UAS/LAS | 84.04/79.01 | <b>90.58</b>/<b>85.44</b> | 88.91/83.56 | 89.23/83.95 |
| Sentiment Analysis | F1 | 71.62 | 76.58 | 82.02 | <b>84.13</b> |
| Summarization | R1/R2/RL | 67.96/61.65/67.24 | 68.40/61.66/67.67 | 68.44/61.38/67.71 | <b>69.93</b>/<b>62.86</b>/<b>69.21</b> |
| Next Tweet Prediction | Acc | 73.6 | 92.4 | 93.1 | <b>93.7</b> |
| Tweet Ordering | Spearman corr. | 0.45 | 0.53 | 0.51 | <b>0.59</b> |
The paper was published at the 28th COLING (2020). Please refer to https://indolem.github.io for more details about the benchmarks.
## How to use
### Load model and tokenizer (tested with transformers==3.5.1)
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("indolem/indobert-base-uncased")
model = AutoModel.from_pretrained("indolem/indobert-base-uncased")
```
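A masked-token prediction sketch, assuming the uploaded checkpoint still contains the pretrained MLM head (the example sentence and top-k choice below are illustrative, not from the original card):
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("indolem/indobert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("indolem/indobert-base-uncased")
model.eval()

# Illustrative Indonesian sentence with a single masked token
text = "ibu pergi ke [MASK] untuk membeli sayur"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs)[0]

# Locate the [MASK] position and list the top-5 candidate tokens
mask_positions = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
top5 = logits[0, mask_positions[0]].topk(5).indices
print(tokenizer.convert_ids_to_tokens(top5.tolist()))
```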
## Citation
If you use our work, please cite:
```bibtex
@inproceedings{koto2020indolem,
title={IndoLEM and IndoBERT: A Benchmark Dataset and Pre-trained Language Model for Indonesian NLP},
author={Fajri Koto and Afshin Rahimi and Jey Han Lau and Timothy Baldwin},
booktitle={Proceedings of the 28th COLING},
year={2020}
}
```
|
indonesian-nlp/wav2vec2-large-xlsr-indonesian-baseline
|
2021-03-29T14:25:17.000Z
|
[
"pytorch",
"wav2vec2",
"id",
"dataset:common_voice",
"transformers",
"audio",
"automatic-speech-recognition",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0"
] |
automatic-speech-recognition
|
[
".gitattributes",
"README.md",
"config.json",
"preprocessor_config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
] |
indonesian-nlp
| 24 |
transformers
|
---
language: id
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Indonesian Baseline by indonesian-nlp
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice id
type: common_voice
args: id
metrics:
- name: Test WER
type: wer
value: 25.55
---
# Wav2Vec2-Large-XLSR-Indonesian
This is the baseline for Wav2Vec2-Large-XLSR-Indonesian, a fine-tuned
[facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53)
model on the [Indonesian Common Voice dataset](https://huggingface.co/datasets/common_voice).
It was trained using the default hyperparameters for 2x30 epochs.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "id", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("indonesian-nlp/wav2vec2-large-xlsr-indonesian-baseline")
model = Wav2Vec2ForCTC.from_pretrained("indonesian-nlp/wav2vec2-large-xlsr-indonesian-baseline")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset[:2]["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset[:2]["sentence"])
```
## Evaluation
The model can be evaluated as follows on the Indonesian test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "id", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("indonesian-nlp/wav2vec2-large-xlsr-indonesian-baseline")
model = Wav2Vec2ForCTC.from_pretrained("indonesian-nlp/wav2vec2-large-xlsr-indonesian-baseline")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\'\”\�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 25.55 %
## Training
The Common Voice `train`, `validation`, and ... datasets were used for training as well as ... and ... # TODO
The script used for training can be found [here](https://github.com/indonesian-nlp/indonesian-speech-recognition)
(will be available soon)
|
indonesian-nlp/wav2vec2-large-xlsr-indonesian
|
2021-04-13T15:48:53.000Z
|
[
"pytorch",
"wav2vec2",
"id",
"dataset:common_voice",
"transformers",
"audio",
"automatic-speech-recognition",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0"
] |
automatic-speech-recognition
|
[
".gitattributes",
"README.md",
"config.json",
"preprocessor_config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
] |
indonesian-nlp
| 283 |
transformers
|
---
language: id
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Indonesian by Indonesian NLP
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice id
type: common_voice
args: id
metrics:
- name: Test WER
type: wer
value: 14.29
---
# Wav2Vec2-Large-XLSR-Indonesian
This is the model for Wav2Vec2-Large-XLSR-Indonesian, a fine-tuned
[facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53)
model on the [Indonesian Common Voice dataset](https://huggingface.co/datasets/common_voice).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "id", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("indonesian-nlp/wav2vec2-large-xlsr-indonesian")
model = Wav2Vec2ForCTC.from_pretrained("indonesian-nlp/wav2vec2-large-xlsr-indonesian")
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset[:2]["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset[:2]["sentence"])
```
## Evaluation
The model can be evaluated as follows on the Indonesian test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "id", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("indonesian-nlp/wav2vec2-large-xlsr-indonesian")
model = Wav2Vec2ForCTC.from_pretrained("indonesian-nlp/wav2vec2-large-xlsr-indonesian")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\'\”\�]'
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 14.29 %
## Training
The Common Voice `train`, `validation`, and [synthetic voice datasets](https://cloud.uncool.ai/index.php/s/Kg4C6f5NJGN9ZdR) were used for training.
The script used for training can be found [here](https://github.com/indonesian-nlp/wav2vec2-indonesian)
|
infinex/translation_model
|
2021-05-04T09:06:20.000Z
|
[
"pytorch",
"marian",
"seq2seq",
"transformers",
"text2text-generation"
] |
text2text-generation
|
[
".gitattributes",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] |
infinex
| 36 |
transformers
| |
infinitejoy/Wav2Vec2-Large-XLSR-53-Assamese
|
2021-03-29T13:01:40.000Z
|
[
"pytorch",
"wav2vec2",
"as",
"dataset:common_voice",
"transformers",
"audio",
"automatic-speech-recognition",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0"
] |
automatic-speech-recognition
|
[
".gitattributes",
"README.md",
"config.json",
"preprocessor_config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
] |
infinitejoy
| 9 |
transformers
|
---
language: as
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Joydeep Bhattacharjee XLSR Wav2Vec2 Large 53 Assamese
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice as
type: common_voice
args: as
metrics:
- name: Test WER
type: wer
value: 69.63
---
# Wav2Vec2-Large-XLSR-53-Assamese
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Assamese using the [Common Voice](https://huggingface.co/datasets/common_voice).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "as", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("infinitejoy/Wav2Vec2-Large-XLSR-53-Assamese")
model = Wav2Vec2ForCTC.from_pretrained("infinitejoy/Wav2Vec2-Large-XLSR-53-Assamese")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Assamese test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "as", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("infinitejoy/Wav2Vec2-Large-XLSR-53-Assamese")
model = Wav2Vec2ForCTC.from_pretrained("infinitejoy/Wav2Vec2-Large-XLSR-53-Assamese")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\।]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub('’ ',' ',batch["sentence"])
batch["sentence"] = re.sub(' ‘',' ',batch["sentence"])
batch["sentence"] = re.sub('’|‘','\'',batch["sentence"])
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 69.63 %
## Training
The Common Voice `train` and `validation` datasets were used for training.
|
infinitejoy/Wav2Vec2-Large-XLSR-53-Odia
|
2021-03-30T13:10:24.000Z
|
[
"pytorch",
"wav2vec2",
"or",
"dataset:common_voice",
"transformers",
"audio",
"automatic-speech-recognition",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0"
] |
automatic-speech-recognition
|
[
".gitattributes",
"README.md",
"config.json",
"preprocessor_config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
] |
infinitejoy
| 7 |
transformers
|
---
language: or
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Joydeep Bhattacharjee XLSR Wav2Vec2 Large 53 Odia
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice or
type: common_voice
args: or
metrics:
- name: Test WER
type: wer
value: 55.07
---
# Wav2Vec2-Large-XLSR-53-Odia
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Odia using the [Common Voice](https://huggingface.co/datasets/common_voice).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "or", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("infinitejoy/Wav2Vec2-Large-XLSR-53-Odia")
model = Wav2Vec2ForCTC.from_pretrained("infinitejoy/Wav2Vec2-Large-XLSR-53-Odia")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Odia test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "or", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("infinitejoy/Wav2Vec2-Large-XLSR-53-Odia")
model = Wav2Vec2ForCTC.from_pretrained("infinitejoy/Wav2Vec2-Large-XLSR-53-Odia")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\।\–]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub('’ ',' ',batch["sentence"])
batch["sentence"] = re.sub(' ‘',' ',batch["sentence"])
batch["sentence"] = re.sub('’|‘','\'',batch["sentence"])
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 55.07 %
## Training
The Common Voice `train` and `validation` datasets were used for training.
|
infinitejoy/Wav2Vec2-Large-XLSR-53-Tamil
|
2021-04-01T14:15:14.000Z
|
[
"pytorch",
"wav2vec2",
"ta",
"dataset:common_voice",
"transformers",
"audio",
"automatic-speech-recognition",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0"
] |
automatic-speech-recognition
|
[
".gitattributes",
"README.md",
"config.json",
"preprocessor_config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
] |
infinitejoy
| 6 |
transformers
|
---
language: ta
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Joydeep Bhattacharjee XLSR Wav2Vec2 Large 53 Tamil
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice ta
type: common_voice
args: ta
metrics:
- name: Test WER
type: wer
value: 71.29
---
# Wav2Vec2-Large-XLSR-53-Tamil
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Tamil using the [Common Voice](https://huggingface.co/datasets/common_voice).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "ta", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("infinitejoy/Wav2Vec2-Large-XLSR-53-Tamil")
model = Wav2Vec2ForCTC.from_pretrained("infinitejoy/Wav2Vec2-Large-XLSR-53-Tamil")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Tamil test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "ta", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("infinitejoy/Wav2Vec2-Large-XLSR-53-Tamil")
model = Wav2Vec2ForCTC.from_pretrained("infinitejoy/Wav2Vec2-Large-XLSR-53-Tamil")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\’\–\(\)]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub('’ ',' ',batch["sentence"])
batch["sentence"] = re.sub(' ‘',' ',batch["sentence"])
batch["sentence"] = re.sub('’|‘','\'',batch["sentence"])
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 71.29 %
## Training
The Common Voice `train` and `validation` datasets were used for training.
|
infominer/bertweet-ner
|
2021-05-29T11:14:44.000Z
|
[
"pytorch",
"roberta",
"token-classification",
"transformers"
] |
token-classification
|
[
".gitattributes",
"added_tokens.json",
"bpe.codes",
"config.json",
"eval_results.txt",
"model_args.json",
"optimizer.pt",
"pytorch_model.bin",
"scheduler.pt",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
] |
infominer
| 1,439 |
transformers
| |
ing3/csmt
|
2021-03-08T22:26:57.000Z
|
[] |
[
".gitattributes"
] |
ing3
| 0 | |||
inspectorsolaris/gpt2_french
|
2021-05-24T09:02:18.000Z
|
[
"pytorch",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
] |
text-generation
|
[
".gitattributes",
"README.md",
"config.json",
"eval_results.txt",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
] |
inspectorsolaris
| 122 |
transformers
|
hello
|
inspectorsolaris/gpt2_french_pre_trained
|
2021-05-27T04:31:28.000Z
|
[
"pytorch",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
] |
text-generation
|
[
".gitattributes",
"config.json",
"eval_results.txt",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
] |
inspectorsolaris
| 121 |
transformers
| |
inspectorsolaris/microsoft_gpt_2_alex_o
|
2021-05-15T11:06:27.000Z
|
[] |
[
".gitattributes"
] |
inspectorsolaris
| 0 | |||
insub/vectorizing_BART
|
2021-03-31T08:57:35.000Z
|
[] |
[
".gitattributes",
"README.md"
] |
insub
| 0 |
hello
|
||
ipuneetrathore/bert-base-cased-finetuned-finBERT
|
2021-05-19T20:30:58.000Z
|
[
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] |
text-classification
|
[
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
] |
ipuneetrathore
| 14,129 |
transformers
|
## FinBERT
Code for importing and using this model is available [here](https://github.com/ipuneetrathore/BERT_models)
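As a rough usage sketch (not taken from the linked repository), the checkpoint can be loaded with the standard sequence-classification classes; the example sentence is illustrative and the label names depend on the fine-tuning setup:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline

tokenizer = AutoTokenizer.from_pretrained("ipuneetrathore/bert-base-cased-finetuned-finBERT")
model = AutoModelForSequenceClassification.from_pretrained("ipuneetrathore/bert-base-cased-finetuned-finBERT")

# Text-classification pipeline over a financial sentence (illustrative example)
classifier = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer)
print(classifier("The company reported better-than-expected quarterly earnings."))
```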
|
irvingpop/dreambank
|
2021-05-23T05:34:04.000Z
|
[
"pytorch",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
] |
text-generation
|
[
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
] |
irvingpop
| 20 |
transformers
| |
ishan/bert-base-uncased-mnli
|
2021-05-19T20:32:21.000Z
|
[
"pytorch",
"jax",
"bert",
"text-classification",
"en",
"dataset:MNLI",
"arxiv:1810.04805",
"transformers"
] |
text-classification
|
[
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
] |
ishan
| 115 |
transformers
|
---
language: en
thumbnail:
tags:
- pytorch
- text-classification
datasets:
- MNLI
---
# bert-base-uncased finetuned on MNLI
## Model Details and Training Data
We used the pretrained model from [bert-base-uncased](https://huggingface.co/bert-base-uncased) and finetuned it on [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) dataset.
The training parameters were kept the same as [Devlin et al., 2019](https://arxiv.org/abs/1810.04805) (learning rate = 2e-5, training epochs = 3, max_sequence_len = 128 and batch_size = 32).
## Evaluation Results
The evaluation results are mentioned in the table below.
| Test Corpus | Accuracy |
|:---:|:---------:|
| Matched | 0.8456 |
| Mismatched | 0.8484 |
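A minimal inference sketch for premise–hypothesis pairs (the example pair is illustrative; read the label order from the checkpoint's `id2label` mapping rather than assuming it):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("ishan/bert-base-uncased-mnli")
model = AutoModelForSequenceClassification.from_pretrained("ishan/bert-base-uncased-mnli")
model.eval()

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."

# Encode the pair the same way MNLI fine-tuning does (max_sequence_len = 128)
inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True, max_length=128)
with torch.no_grad():
    logits = model(**inputs)[0]

probs = torch.softmax(logits, dim=-1)[0]
# Map class indices to names via the checkpoint's own config
print({model.config.id2label[i]: round(p.item(), 3) for i, p in enumerate(probs)})
```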
|
ishan/distilbert-base-uncased-mnli
|
2020-08-21T10:23:40.000Z
|
[
"pytorch",
"distilbert",
"text-classification",
"en",
"dataset:MNLI",
"arxiv:1810.04805",
"transformers"
] |
text-classification
|
[
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
] |
ishan
| 73 |
transformers
|
---
language: en
thumbnail:
tags:
- pytorch
- text-classification
datasets:
- MNLI
---
# distilbert-base-uncased finetuned on MNLI
## Model Details and Training Data
We used the pretrained model from [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) and finetuned it on [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) dataset.
The training parameters were kept the same as [Devlin et al., 2019](https://arxiv.org/abs/1810.04805) (learning rate = 2e-5, training epochs = 3, max_sequence_len = 128 and batch_size = 32).
## Evaluation Results
The evaluation results are mentioned in the table below.
| Test Corpus | Accuracy |
|:---:|:---------:|
| Matched | 0.8223 |
| Mismatched | 0.8216 |
|
itaifarhi/Itai
|
2021-03-26T10:45:32.000Z
|
[] |
[
".gitattributes"
] |
itaifarhi
| 0 | |||
itspacchu/pacchuslave
|
2021-04-12T16:01:54.000Z
|
[] |
[
".gitattributes",
"README.md"
] |
itspacchu
| 0 | |||
itsunoda/wolfbbsRoBERTa-large
|
2021-01-11T12:34:29.000Z
|
[
"pytorch",
"tf",
"camembert",
"masked-lm",
"transformers",
"fill-mask"
] |
fill-mask
|
[
".gitattributes",
"config.json",
"pytorch_model.bin",
"sentencepiece.bpe.model",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json"
] |
itsunoda
| 43 |
transformers
| |
itsunoda/wolfbbsRoBERTa-small
|
2021-01-11T12:07:14.000Z
|
[
"pytorch",
"tf",
"camembert",
"masked-lm",
"transformers",
"fill-mask"
] |
fill-mask
|
[
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"sentencepiece.bpe.model",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json"
] |
itsunoda
| 9 |
transformers
|
Note that model type is Camembert.
|
iuliaturc/bert_uncased_L-2_H-128_A-2
|
2021-05-19T20:32:57.000Z
|
[
"pytorch",
"jax",
"bert",
"arxiv:1908.08962",
"transformers",
"license:apache-2.0"
] |
[
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"vocab.txt"
] |
iuliaturc
| 14 |
transformers
|
---
thumbnail: https://huggingface.co/front/thumbnails/google.png
license: apache-2.0
---
BERT Miniatures
===
This is the set of 24 BERT models referenced in [Well-Read Students Learn Better: On the Importance of Pre-training Compact Models](https://arxiv.org/abs/1908.08962) (English only, uncased, trained with WordPiece masking).
We have shown that the standard BERT recipe (including model architecture and training objective) is effective on a wide range of model sizes, beyond BERT-Base and BERT-Large. The smaller BERT models are intended for environments with restricted computational resources. They can be fine-tuned in the same manner as the original BERT models. However, they are most effective in the context of knowledge distillation, where the fine-tuning labels are produced by a larger and more accurate teacher.
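As a rough illustration of that distillation setup (not the exact recipe used for these models), a student can be trained on a blend of hard-label cross-entropy and the teacher's temperature-softened predictions; the temperature and mixing weight below are arbitrary:
```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, temperature=2.0, alpha=0.5):
    """Standard soft-label distillation: cross-entropy on hard labels plus
    KL divergence to the teacher's temperature-softened distribution."""
    hard = F.cross_entropy(student_logits, labels)
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)
    return alpha * hard + (1.0 - alpha) * soft

# Toy example with random logits for a 3-class task
student_logits = torch.randn(4, 3, requires_grad=True)
teacher_logits = torch.randn(4, 3)
labels = torch.tensor([0, 2, 1, 0])
print(distillation_loss(student_logits, teacher_logits, labels))
```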
Our goal is to enable research in institutions with fewer computational resources and encourage the community to seek directions of innovation alternative to increasing model capacity.
You can download the 24 BERT miniatures either from the [official BERT Github page](https://github.com/google-research/bert/), or via HuggingFace from the links below:
| |H=128|H=256|H=512|H=768|
|---|:---:|:---:|:---:|:---:|
| **L=2** |[**2/128 (BERT-Tiny)**][2_128]|[2/256][2_256]|[2/512][2_512]|[2/768][2_768]|
| **L=4** |[4/128][4_128]|[**4/256 (BERT-Mini)**][4_256]|[**4/512 (BERT-Small)**][4_512]|[4/768][4_768]|
| **L=6** |[6/128][6_128]|[6/256][6_256]|[6/512][6_512]|[6/768][6_768]|
| **L=8** |[8/128][8_128]|[8/256][8_256]|[**8/512 (BERT-Medium)**][8_512]|[8/768][8_768]|
| **L=10** |[10/128][10_128]|[10/256][10_256]|[10/512][10_512]|[10/768][10_768]|
| **L=12** |[12/128][12_128]|[12/256][12_256]|[12/512][12_512]|[**12/768 (BERT-Base)**][12_768]|
Note that the BERT-Base model in this release is included for completeness only; it was re-trained under the same regime as the original model.
Here are the corresponding GLUE scores on the test set:
|Model|Score|CoLA|SST-2|MRPC|STS-B|QQP|MNLI-m|MNLI-mm|QNLI(v2)|RTE|WNLI|AX|
|---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|BERT-Tiny|64.2|0.0|83.2|81.1/71.1|74.3/73.6|62.2/83.4|70.2|70.3|81.5|57.2|62.3|21.0|
|BERT-Mini|65.8|0.0|85.9|81.1/71.8|75.4/73.3|66.4/86.2|74.8|74.3|84.1|57.9|62.3|26.1|
|BERT-Small|71.2|27.8|89.7|83.4/76.2|78.8/77.0|68.1/87.0|77.6|77.0|86.4|61.8|62.3|28.6|
|BERT-Medium|73.5|38.0|89.6|86.6/81.6|80.4/78.4|69.6/87.9|80.0|79.1|87.7|62.2|62.3|30.5|
For each task, we selected the best fine-tuning hyperparameters from the lists below, and trained for 4 epochs:
- batch sizes: 8, 16, 32, 64, 128
- learning rates: 3e-4, 1e-4, 5e-5, 3e-5
If you use these models, please cite the following paper:
```
@article{turc2019,
title={Well-Read Students Learn Better: On the Importance of Pre-training Compact Models},
author={Turc, Iulia and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina},
journal={arXiv preprint arXiv:1908.08962v2 },
year={2019}
}
```
[2_128]: https://huggingface.co/google/bert_uncased_L-2_H-128_A-2
[2_256]: https://huggingface.co/google/bert_uncased_L-2_H-256_A-4
[2_512]: https://huggingface.co/google/bert_uncased_L-2_H-512_A-8
[2_768]: https://huggingface.co/google/bert_uncased_L-2_H-768_A-12
[4_128]: https://huggingface.co/google/bert_uncased_L-4_H-128_A-2
[4_256]: https://huggingface.co/google/bert_uncased_L-4_H-256_A-4
[4_512]: https://huggingface.co/google/bert_uncased_L-4_H-512_A-8
[4_768]: https://huggingface.co/google/bert_uncased_L-4_H-768_A-12
[6_128]: https://huggingface.co/google/bert_uncased_L-6_H-128_A-2
[6_256]: https://huggingface.co/google/bert_uncased_L-6_H-256_A-4
[6_512]: https://huggingface.co/google/bert_uncased_L-6_H-512_A-8
[6_768]: https://huggingface.co/google/bert_uncased_L-6_H-768_A-12
[8_128]: https://huggingface.co/google/bert_uncased_L-8_H-128_A-2
[8_256]: https://huggingface.co/google/bert_uncased_L-8_H-256_A-4
[8_512]: https://huggingface.co/google/bert_uncased_L-8_H-512_A-8
[8_768]: https://huggingface.co/google/bert_uncased_L-8_H-768_A-12
[10_128]: https://huggingface.co/google/bert_uncased_L-10_H-128_A-2
[10_256]: https://huggingface.co/google/bert_uncased_L-10_H-256_A-4
[10_512]: https://huggingface.co/google/bert_uncased_L-10_H-512_A-8
[10_768]: https://huggingface.co/google/bert_uncased_L-10_H-768_A-12
[12_128]: https://huggingface.co/google/bert_uncased_L-12_H-128_A-2
[12_256]: https://huggingface.co/google/bert_uncased_L-12_H-256_A-4
[12_512]: https://huggingface.co/google/bert_uncased_L-12_H-512_A-8
[12_768]: https://huggingface.co/google/bert_uncased_L-12_H-768_A-12
|
|
ivangtorre/wav2vec2-large-xlsr-53-basque
|
2021-04-08T11:21:02.000Z
|
[
"wav2vec2",
"eu",
"dataset:common_voice",
"transformers",
"audio",
"automatic-speech-recognition",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0"
] |
automatic-speech-recognition
|
[
".gitattributes",
"README.md",
"config.json",
"preprocessor_config.json",
"special_tokens_map.json",
"tokenizer_config.json",
"trainer_state.json",
"training_args.bin",
"vocab.json"
] |
ivangtorre
| 12 |
transformers
|
---
language: eu
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Basque Ivan G Torre
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice eu
type: common_voice
args: eu
metrics:
- name: Test WER
type: wer
value: 16.99
---
# Wav2Vec2-Large-XLSR-53-euskera
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) in Euskera using the [Common Voice](https://huggingface.co/datasets/common_voice).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "eu", split="test[:2%]").
processor = Wav2Vec2Processor.from_pretrained("ivangtorre/wav2vec2-large-xlsr-53-basque")
model = Wav2Vec2ForCTC.from_pretrained("ivangtorre/wav2vec2-large-xlsr-53-basque")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Basque test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "eu", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("ivangtorre/wav2vec2-large-xlsr-53-basque")
model = Wav2Vec2ForCTC.from_pretrained("ivangtorre/wav2vec2-large-xlsr-53-basque")
model.to("cuda")
chars_to_ignore_regex = '[",", "?", ".", "!", "-", ";", ":", '""', "%", "'", '"', "�"]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 16.99 %
## Training
The Common Voice `train` and `validation` datasets were used for training.
The script used for training can be found on [GitHub](https://github.com/ivangtorre/wav2vec2-xlsr-fine-tuning-week-Basque)
|
ixa-ehu/berteus-base-cased
|
2021-05-19T20:33:41.000Z
|
[
"pytorch",
"jax",
"bert",
"eu",
"arxiv:2004.00033",
"transformers"
] |
[
".gitattributes",
"LICENSE",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tokenizer_config.json",
"vocab.txt"
] |
ixa-ehu
| 55,710 |
transformers
|
---
language: eu
---
# BERTeus base cased
This is the Basque language pretrained model presented in [Give your Text Representation Models some Love: the Case for Basque](https://arxiv.org/pdf/2004.00033.pdf). This model has been trained on a Basque corpus comprising Basque crawled news articles from online newspapers and the Basque Wikipedia. The training corpus contains 224.6 million tokens, of which 35 million come from the Wikipedia.
BERTeus has been tested on four downstream tasks for Basque: part-of-speech (POS) tagging, named entity recognition (NER), sentiment analysis and topic classification, improving the state of the art for all of them. A summary of the results is shown below:
| Downstream task | BERTeus | mBERT | Previous SOTA |
| --------------- | ------- | ------| ------------- |
| Topic Classification | **76.77** | 68.42 | 63.00 |
| Sentiment | **78.10** | 71.02 | 74.02 |
| POS | **97.76** | 96.37 | 96.10 |
| NER | **87.06** | 81.52 | 76.72 |
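Loading the model follows the standard `transformers` pattern; the sketch below extracts contextual embeddings, and the example sentence is illustrative:
```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("ixa-ehu/berteus-base-cased")
model = AutoModel.from_pretrained("ixa-ehu/berteus-base-cased")

# Contextual embeddings for a Basque sentence (illustrative example)
inputs = tokenizer("Kaixo, zer moduz zaude?", return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs)[0]
print(hidden_states.shape)  # (batch, sequence_length, hidden_size)
```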
If using this model, please cite the following paper:
```
@inproceedings{agerri2020give,
title={Give your Text Representation Models some Love: the Case for Basque},
author={Rodrigo Agerri and I{\~n}aki San Vicente and Jon Ander Campos and Ander Barrena and Xabier Saralegi and Aitor Soroa and Eneko Agirre},
booktitle={Proceedings of the 12th International Conference on Language Resources and Evaluation},
year={2020}
}
```
|
|
ixa-ehu/ixambert-base-cased
|
2021-02-02T15:09:00.000Z
|
[
"pytorch",
"en",
"es",
"eu",
"transformers"
] |
[
".gitattributes",
"LICENSE",
"README.md",
"config.json",
"pytorch_model.bin",
"tokenizer_config.json",
"vocab.txt"
] |
ixa-ehu
| 1,108 |
transformers
|
---
language:
- en
- es
- eu
---
# IXAmBERT base cased
This is a multilingual language model pretrained for English, Spanish and Basque. The training corpus is composed of the English, Spanish and Basque Wikipedias, together with Basque crawled news articles from online newspapers. The model has been successfully used to transfer knowledge from English to Basque in a conversational QA system, as reported in the paper [Conversational Question Answering in Low Resource Scenarios: A Dataset and Case Study for Basque](http://www.lrec-conf.org/proceedings/lrec2020/pdf/2020.lrec-1.55.pdf). In the paper, IXAmBERT performed better than mBERT when transferring knowledge from English to Basque, as shown in the following table:
| Model | Zero-shot | Transfer learning |
|--------------------|-----------|-------------------|
| Baseline | 28.7 | 28.7 |
| mBERT | 31.5 | 37.4 |
| IXAmBERT | 38.9 | **41.2** |
| mBERT + history | 33.3 | 28.7 |
| IXAmBERT + history | **40.7** | 40.0 |
This table shows the results on a Basque CQA dataset. *Zero-shot* means that the model is fine-tuned using QuAC, an English CQA dataset. In the *Transfer Learning* setting, the model is first fine-tuned on QuAC and then on a Basque CQA dataset.
If using this model, please cite the following paper:
```
@inproceedings{otegi2020conversational,
title={Conversational Question Answering in Low Resource Scenarios: A Dataset and Case Study for Basque},
author={Otegi, Arantxa and Agirre, Aitor and Campos, Jon Ander and Soroa, Aitor and Agirre, Eneko},
booktitle={Proceedings of The 12th Language Resources and Evaluation Conference},
pages={436--442},
year={2020}
}
```
|
|
j-hartmann/emotion-english-distilroberta-base
|
2021-06-17T09:40:51.000Z
|
[
"pytorch",
"roberta",
"text-classification",
"en",
"transformers",
"sentiment",
"emotion",
"twitter"
] |
text-classification
|
[
".gitattributes",
"README.md",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
] |
j-hartmann
| 171 |
transformers
|
---
language: "en"
tags:
- sentiment
- emotion
- twitter
widget:
- text: "Oh wow. I didn't know that."
- text: "This movie always makes me cry.."
---
## Description
With this model, you can classify emotions in English text data. The model was trained on diverse datasets and predicts 7 emotions:
1) anger
2) disgust
3) fear
4) joy
5) neutral
6) sadness
7) surprise
The model is a fine-tuned checkpoint of DistilRoBERTa-base.
## Application
a) Run the emotion model with 3 lines of code on a single text example using Hugging Face's pipeline command on Google Colab (a minimal local sketch of the same call follows these links):
[](https://colab.research.google.com/github/j-hartmann/emotion-english-distilroberta-base/blob/main/simple_emotion_pipeline.ipynb)
b) Run the emotion model on multiple examples and full datasets (e.g., .csv files) on Google Colab:
[](https://colab.research.google.com/github/j-hartmann/emotion-english-distilroberta-base/blob/main/emotion_prediction_example.ipynb)
## Contact
Please reach out to [email protected] if you have any questions or feedback.
Thanks to Samuel Domdey and chrsiebert for their support in making this model available.
|