| Column | Type / stats |
|---|---|
| pipeline_tag | stringclasses (48 values) |
| library_name | stringclasses (205 values) |
| text | stringlengths (0 – 18.3M) |
| metadata | stringlengths (2 – 1.07B) |
| id | stringlengths (5 – 122) |
| last_modified | null |
| tags | listlengths (1 – 1.84k) |
| sha | null |
| created_at | stringlengths (25) |
question-answering
|
transformers
|
## MobileBERT fine-tuned on SQuAD v2
[MobileBERT](https://arxiv.org/abs/2004.02984) is a thin version of BERT_LARGE, equipped with bottleneck structures and a carefully designed balance
between self-attention and feed-forward networks.
This model was fine-tuned from the HuggingFace checkpoint `google/mobilebert-uncased` on [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer).
## Details
| Dataset | Split | # samples |
| -------- | ----- | --------- |
| SQuAD2.0 | train | 130k |
| SQuAD2.0 | eval | 12.3k |
### Fine-tuning
- Python: `3.7.5`
- Machine specs:
`CPU: Intel(R) Core(TM) i7-6800K CPU @ 3.40GHz`
`Memory: 32 GiB`
`GPUs: 2 GeForce GTX 1070, each with 8GiB memory`
`GPU driver: 418.87.01, CUDA: 10.1`
- script:
```shell
# after installing https://github.com/huggingface/transformers
cd examples/question-answering
mkdir -p data
wget -O data/train-v2.0.json https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v2.0.json
wget -O data/dev-v2.0.json https://rajpurkar.github.io/SQuAD-explorer/dataset/dev-v2.0.json
export SQUAD_DIR=`pwd`/data
python run_squad.py \
--model_type mobilebert \
--model_name_or_path google/mobilebert-uncased \
--do_train \
--do_eval \
--do_lower_case \
--version_2_with_negative \
--train_file $SQUAD_DIR/train-v2.0.json \
--predict_file $SQUAD_DIR/dev-v2.0.json \
--per_gpu_train_batch_size 16 \
--per_gpu_eval_batch_size 16 \
--learning_rate 4e-5 \
--num_train_epochs 5.0 \
--max_seq_length 320 \
--doc_stride 128 \
--warmup_steps 1400 \
--save_steps 2000 \
--output_dir $SQUAD_DIR/mobilebert-uncased-warmup-squad_v2 2>&1 | tee train-mobilebert-warmup-squad_v2.log
```
It took about 3.5 hours to finish.
### Results
**Model size**: `95M`
| Metric | # Value | # Original ([Table 5](https://arxiv.org/pdf/2004.02984.pdf))|
| ------ | --------- | --------- |
| **EM** | **75.2** | **76.2** |
| **F1** | **78.8** | **79.2** |
Note that the above results didn't involve any hyperparameter search.
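For reference, the EM/F1 numbers above are the standard SQuAD v2 metrics. A minimal sketch (a single hypothetical prediction/reference pair, using the separate `evaluate` library) of computing them:
```python
# Sketch only: scoring SQuAD v2-style predictions; the id/answer pair is a made-up example.
from evaluate import load  # pip install evaluate

squad_v2_metric = load("squad_v2")

predictions = [{
    "id": "example-0",
    "prediction_text": "February 7, 2016",
    "no_answer_probability": 0.0,  # required by the SQuAD v2 metric (unanswerable questions)
}]
references = [{
    "id": "example-0",
    "answers": {"text": ["February 7, 2016"], "answer_start": [23]},
}]

print(squad_v2_metric.compute(predictions=predictions, references=references))
```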
## Example Usage
```python
from transformers import pipeline
qa_pipeline = pipeline(
"question-answering",
model="csarron/mobilebert-uncased-squad-v2",
tokenizer="csarron/mobilebert-uncased-squad-v2"
)
predictions = qa_pipeline({
'context': "The game was played on February 7, 2016 at Levi's Stadium in the San Francisco Bay Area at Santa Clara, California.",
'question': "What day was the game played on?"
})
print(predictions)
# output:
# {'score': 0.71434086561203, 'start': 23, 'end': 39, 'answer': 'February 7, 2016'}
```
> Created by [Qingqing Cao](https://awk.ai/) | [GitHub](https://github.com/csarron) | [Twitter](https://twitter.com/sysnlp)
> Made with ❤️ in New York.
|
{"language": "en", "license": "mit", "tags": ["question-answering", "mobilebert"], "datasets": ["squad_v2"], "metrics": ["squad_v2"], "widget": [{"text": "Which name is also used to describe the Amazon rainforest in English?", "context": "The Amazon rainforest (Portuguese: Floresta Amaz\u00f4nica or Amaz\u00f4nia; Spanish: Selva Amaz\u00f3nica, Amazon\u00eda or usually Amazonia; French: For\u00eat amazonienne; Dutch: Amazoneregenwoud), also known in English as Amazonia or the Amazon Jungle, is a moist broadleaf forest that covers most of the Amazon basin of South America. This basin encompasses 7,000,000 square kilometres (2,700,000 sq mi), of which 5,500,000 square kilometres (2,100,000 sq mi) are covered by the rainforest. This region includes territory belonging to nine nations. The majority of the forest is contained within Brazil, with 60% of the rainforest, followed by Peru with 13%, Colombia with 10%, and with minor amounts in Venezuela, Ecuador, Bolivia, Guyana, Suriname and French Guiana. States or departments in four nations contain \"Amazonas\" in their names. The Amazon represents over half of the planet's remaining rainforests, and comprises the largest and most biodiverse tract of tropical rainforest in the world, with an estimated 390 billion individual trees divided into 16,000 species."}, {"text": "How many square kilometers of rainforest is covered in the basin?", "context": "The Amazon rainforest (Portuguese: Floresta Amaz\u00f4nica or Amaz\u00f4nia; Spanish: Selva Amaz\u00f3nica, Amazon\u00eda or usually Amazonia; French: For\u00eat amazonienne; Dutch: Amazoneregenwoud), also known in English as Amazonia or the Amazon Jungle, is a moist broadleaf forest that covers most of the Amazon basin of South America. This basin encompasses 7,000,000 square kilometres (2,700,000 sq mi), of which 5,500,000 square kilometres (2,100,000 sq mi) are covered by the rainforest. This region includes territory belonging to nine nations. The majority of the forest is contained within Brazil, with 60% of the rainforest, followed by Peru with 13%, Colombia with 10%, and with minor amounts in Venezuela, Ecuador, Bolivia, Guyana, Suriname and French Guiana. States or departments in four nations contain \"Amazonas\" in their names. The Amazon represents over half of the planet's remaining rainforests, and comprises the largest and most biodiverse tract of tropical rainforest in the world, with an estimated 390 billion individual trees divided into 16,000 species."}]}
|
csarron/mobilebert-uncased-squad-v2
| null |
[
"transformers",
"pytorch",
"onnx",
"safetensors",
"mobilebert",
"question-answering",
"en",
"dataset:squad_v2",
"arxiv:2004.02984",
"license:mit",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
question-answering
|
transformers
|
## RoBERTa-base fine-tuned on SQuAD v1
This model was fine-tuned from the HuggingFace [RoBERTa](https://arxiv.org/abs/1907.11692) base checkpoint on [SQuAD1.1](https://rajpurkar.github.io/SQuAD-explorer).
This model is case-sensitive: it makes a difference between english and English.
## Details
| Dataset | Split | # samples |
| -------- | ----- | --------- |
| SQuAD1.1 | train | 96.8K |
| SQuAD1.1 | eval | 11.8k |
### Fine-tuning
- Python: `3.7.5`
- Machine specs:
`CPU: Intel(R) Core(TM) i7-6800K CPU @ 3.40GHz`
`Memory: 32 GiB`
`GPUs: 2 GeForce GTX 1070, each with 8GiB memory`
`GPU driver: 418.87.01, CUDA: 10.1`
- script:
```shell
# after installing https://github.com/huggingface/transformers
cd examples/question-answering
mkdir -p data
wget -O data/train-v1.1.json https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v1.1.json
wget -O data/dev-v1.1.json https://rajpurkar.github.io/SQuAD-explorer/dataset/dev-v1.1.json
python run_energy_squad.py \
--model_type roberta \
--model_name_or_path roberta-base \
--do_train \
--do_eval \
--train_file train-v1.1.json \
--predict_file dev-v1.1.json \
--per_gpu_train_batch_size 12 \
--per_gpu_eval_batch_size 16 \
--learning_rate 3e-5 \
--num_train_epochs 2.0 \
--max_seq_length 320 \
--doc_stride 128 \
--data_dir data \
--output_dir data/roberta-base-squad-v1 2>&1 | tee train-roberta-base-squad-v1.log
```
It took about 2 hours to finish.
### Results
**Model size**: `477M`
| Metric | # Value |
| ------ | --------- |
| **EM** | **83.0** |
| **F1** | **90.4** |
Note that the above results didn't involve any hyperparameter search.
## Example Usage
```python
from transformers import pipeline
qa_pipeline = pipeline(
"question-answering",
model="csarron/roberta-base-squad-v1",
tokenizer="csarron/roberta-base-squad-v1"
)
predictions = qa_pipeline({
'context': "The game was played on February 7, 2016 at Levi's Stadium in the San Francisco Bay Area at Santa Clara, California.",
'question': "What day was the game played on?"
})
print(predictions)
# output:
# {'score': 0.8625259399414062, 'start': 23, 'end': 39, 'answer': 'February 7, 2016'}
```
> Created by [Qingqing Cao](https://awk.ai/) | [GitHub](https://github.com/csarron) | [Twitter](https://twitter.com/sysnlp)
> Made with ❤️ in New York.
|
{"language": "en", "license": "mit", "tags": ["question-answering", "roberta", "roberta-base"], "datasets": ["squad"], "metrics": ["squad"], "widget": [{"text": "Which name is also used to describe the Amazon rainforest in English?", "context": "The Amazon rainforest (Portuguese: Floresta Amaz\u00f4nica or Amaz\u00f4nia; Spanish: Selva Amaz\u00f3nica, Amazon\u00eda or usually Amazonia; French: For\u00eat amazonienne; Dutch: Amazoneregenwoud), also known in English as Amazonia or the Amazon Jungle, is a moist broadleaf forest that covers most of the Amazon basin of South America. This basin encompasses 7,000,000 square kilometres (2,700,000 sq mi), of which 5,500,000 square kilometres (2,100,000 sq mi) are covered by the rainforest. This region includes territory belonging to nine nations. The majority of the forest is contained within Brazil, with 60% of the rainforest, followed by Peru with 13%, Colombia with 10%, and with minor amounts in Venezuela, Ecuador, Bolivia, Guyana, Suriname and French Guiana. States or departments in four nations contain \"Amazonas\" in their names. The Amazon represents over half of the planet's remaining rainforests, and comprises the largest and most biodiverse tract of tropical rainforest in the world, with an estimated 390 billion individual trees divided into 16,000 species."}, {"text": "How many square kilometers of rainforest is covered in the basin?", "context": "The Amazon rainforest (Portuguese: Floresta Amaz\u00f4nica or Amaz\u00f4nia; Spanish: Selva Amaz\u00f3nica, Amazon\u00eda or usually Amazonia; French: For\u00eat amazonienne; Dutch: Amazoneregenwoud), also known in English as Amazonia or the Amazon Jungle, is a moist broadleaf forest that covers most of the Amazon basin of South America. This basin encompasses 7,000,000 square kilometres (2,700,000 sq mi), of which 5,500,000 square kilometres (2,100,000 sq mi) are covered by the rainforest. This region includes territory belonging to nine nations. The majority of the forest is contained within Brazil, with 60% of the rainforest, followed by Peru with 13%, Colombia with 10%, and with minor amounts in Venezuela, Ecuador, Bolivia, Guyana, Suriname and French Guiana. States or departments in four nations contain \"Amazonas\" in their names. The Amazon represents over half of the planet's remaining rainforests, and comprises the largest and most biodiverse tract of tropical rainforest in the world, with an estimated 390 billion individual trees divided into 16,000 species."}]}
|
csarron/roberta-base-squad-v1
| null |
[
"transformers",
"pytorch",
"jax",
"safetensors",
"roberta",
"question-answering",
"roberta-base",
"en",
"dataset:squad",
"arxiv:1907.11692",
"license:mit",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
question-answering
|
transformers
|
{}
|
csarron/roberta-large-squad-v1
| null |
[
"transformers",
"pytorch",
"jax",
"safetensors",
"roberta",
"question-answering",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text-classification
|
transformers
|
{}
|
csatapathy/interview-ratings-bert
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text2text-generation
|
transformers
|
{}
|
csbongga/Machi-QAG-01
| null |
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text2text-generation
|
transformers
|
{}
|
csbongga/Machi-QAG-02
| null |
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
csbongga/test-ner-2
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2175
- Accuracy: 0.923
- F1: 0.9233
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a minimal `TrainingArguments` sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
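A minimal sketch of how these values map onto `TrainingArguments` (model and dataset loading omitted; the output directory name is a placeholder, and the Adam betas/epsilon shown in the list are the library defaults):
```python
# Sketch only: mirrors the hyperparameters listed above.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-emotion",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```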
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8352 | 1.0 | 250 | 0.3079 | 0.91 | 0.9086 |
| 0.247 | 2.0 | 500 | 0.2175 | 0.923 | 0.9233 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["emotion"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.923, "name": "Accuracy"}, {"type": "f1", "value": 0.9232542847906783, "name": "F1"}]}]}]}
|
cscottp27/distilbert-base-uncased-finetuned-emotion
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null |
transformers
|
# BanglaBERT
This repository contains the pretrained discriminator checkpoint of the model **BanglaBERT**. This is an [ELECTRA](https://openreview.net/pdf?id=r1xMH1BtvB) discriminator model pretrained with the Replaced Token Detection (RTD) objective. Finetuned models using this checkpoint achieve state-of-the-art results on many of the NLP tasks in Bengali.
For finetuning on different downstream tasks such as `Sentiment classification`, `Named Entity Recognition`, `Natural Language Inference` etc., refer to the scripts in the official GitHub [repository](https://github.com/csebuetnlp/banglabert).
**Note**: This model was pretrained using a specific normalization pipeline available [here](https://github.com/csebuetnlp/normalizer). All finetuning scripts in the official GitHub repository use this normalization by default. If you need to adapt the pretrained model for a different task, make sure the text units are normalized using this pipeline before tokenizing to get the best results. A basic example is given below:
## Using this model as a discriminator in `transformers` (tested on 4.11.0.dev0)
```python
from transformers import AutoModelForPreTraining, AutoTokenizer
from normalizer import normalize # pip install git+https://github.com/csebuetnlp/normalizer
import torch
model = AutoModelForPreTraining.from_pretrained("csebuetnlp/banglabert")
tokenizer = AutoTokenizer.from_pretrained("csebuetnlp/banglabert")
original_sentence = "আমি কৃতজ্ঞ কারণ আপনি আমার জন্য অনেক কিছু করেছেন।"
fake_sentence = "আমি হতাশ কারণ আপনি আমার জন্য অনেক কিছু করেছেন।"
fake_sentence = normalize(fake_sentence) # this normalization step is required before tokenizing the text
fake_tokens = tokenizer.tokenize(fake_sentence)
fake_inputs = tokenizer.encode(fake_sentence, return_tensors="pt")
discriminator_outputs = model(fake_inputs).logits
predictions = torch.round((torch.sign(discriminator_outputs) + 1) / 2)
[print("%7s" % token, end="") for token in fake_tokens]
print("\n" + "-" * 50)
[print("%7s" % int(prediction), end="") for prediction in predictions.squeeze().tolist()[1:-1]]
print("\n" + "-" * 50)
```
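For downstream use outside the official scripts, a minimal sketch (not the official finetuning recipe; the label count is a hypothetical placeholder, and the classification head is freshly initialized) of loading the checkpoint as a classifier with the same normalization:
```python
# Sketch only: see the official repository for the actual finetuning scripts and hyperparameters.
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from normalizer import normalize  # pip install git+https://github.com/csebuetnlp/normalizer

model = AutoModelForSequenceClassification.from_pretrained("csebuetnlp/banglabert", num_labels=2)
tokenizer = AutoTokenizer.from_pretrained("csebuetnlp/banglabert")

text = normalize("আমি কৃতজ্ঞ কারণ আপনি আমার জন্য অনেক কিছু করেছেন।")  # always normalize before tokenizing
inputs = tokenizer(text, return_tensors="pt")
logits = model(**inputs).logits
print(logits)
```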
## Benchmarks
* Zero-shot cross-lingual transfer-learning
| Model | Params | SC (macro-F1) | NLI (accuracy) | NER (micro-F1) | QA (EM/F1) | BangLUE score |
|----------------|-----------|-----------|-----------|-----------|-----------|-----------|
|[mBERT](https://huggingface.co/bert-base-multilingual-cased) | 180M | 27.05 | 62.22 | 39.27 | 59.01/64.18 | 50.35 |
|[XLM-R (base)](https://huggingface.co/xlm-roberta-base) | 270M | 42.03 | 72.18 | 45.37 | 55.03/61.83 | 55.29 |
|[XLM-R (large)](https://huggingface.co/xlm-roberta-large) | 550M | 49.49 | 78.13 | 56.48 | 71.13/77.70 | 66.59 |
|[BanglishBERT](https://huggingface.co/csebuetnlp/banglishbert) | 110M | 48.39 | 75.26 | 55.56 | 72.87/78.63 | 66.14 |
* Supervised fine-tuning
| Model | Params | SC (macro-F1) | NLI (accuracy) | NER (micro-F1) | QA (EM/F1) | BangLUE score |
|----------------|-----------|-----------|-----------|-----------|-----------|-----------|
|[mBERT](https://huggingface.co/bert-base-multilingual-cased) | 180M | 67.59 | 75.13 | 68.97 | 67.12/72.64 | 70.29 |
|[XLM-R (base)](https://huggingface.co/xlm-roberta-base) | 270M | 69.54 | 78.46 | 73.32 | 68.09/74.27 | 72.82 |
|[XLM-R (large)](https://huggingface.co/xlm-roberta-large) | 550M | 70.97 | 82.40 | 78.39 | 73.15/79.06 | 76.79 |
|[sahajBERT](https://huggingface.co/neuropark/sahajBERT) | 18M | 71.12 | 76.92 | 70.94 | 65.48/70.69 | 71.03 |
|[BanglishBERT](https://huggingface.co/csebuetnlp/banglishbert) | 110M | 70.61 | 80.95 | 76.28 | 72.43/78.40 | 75.73 |
|[BanglaBERT](https://huggingface.co/csebuetnlp/banglabert) | 110M | 72.89 | 82.80 | 77.78 | 72.63/79.34 | **77.09** |
The benchmarking datasets are as follows:
* **SC:** **[Sentiment Classification](https://aclanthology.org/2021.findings-emnlp.278)**
* **NER:** **[Named Entity Recognition](https://multiconer.github.io/competition)**
* **NLI:** **[Natural Language Inference](https://github.com/csebuetnlp/banglabert/#datasets)**
* **QA:** **[Question Answering](https://github.com/csebuetnlp/banglabert/#datasets)**
## Citation
If you use this model, please cite the following paper:
```
@inproceedings{bhattacharjee-etal-2022-banglabert,
title = "{B}angla{BERT}: Language Model Pretraining and Benchmarks for Low-Resource Language Understanding Evaluation in {B}angla",
author = "Bhattacharjee, Abhik and
Hasan, Tahmid and
Ahmad, Wasi and
Mubasshir, Kazi Samin and
Islam, Md Saiful and
Iqbal, Anindya and
Rahman, M. Sohel and
Shahriyar, Rifat",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2022",
month = jul,
year = "2022",
address = "Seattle, United States",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.findings-naacl.98",
pages = "1318--1327",
abstract = "In this work, we introduce BanglaBERT, a BERT-based Natural Language Understanding (NLU) model pretrained in Bangla, a widely spoken yet low-resource language in the NLP literature. To pretrain BanglaBERT, we collect 27.5 GB of Bangla pretraining data (dubbed {`}Bangla2B+{'}) by crawling 110 popular Bangla sites. We introduce two downstream task datasets on natural language inference and question answering and benchmark on four diverse NLU tasks covering text classification, sequence labeling, and span prediction. In the process, we bring them under the first-ever Bangla Language Understanding Benchmark (BLUB). BanglaBERT achieves state-of-the-art results outperforming multilingual and monolingual models. We are making the models, datasets, and a leaderboard publicly available at \url{https://github.com/csebuetnlp/banglabert} to advance Bangla NLP.",
}
```
If you use the normalization module, please cite the following paper:
```
@inproceedings{hasan-etal-2020-low,
title = "Not Low-Resource Anymore: Aligner Ensembling, Batch Filtering, and New Datasets for {B}engali-{E}nglish Machine Translation",
author = "Hasan, Tahmid and
Bhattacharjee, Abhik and
Samin, Kazi and
Hasan, Masum and
Basak, Madhusudan and
Rahman, M. Sohel and
Shahriyar, Rifat",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.emnlp-main.207",
doi = "10.18653/v1/2020.emnlp-main.207",
pages = "2612--2623",
abstract = "Despite being the seventh most widely spoken language in the world, Bengali has received much less attention in machine translation literature due to being low in resources. Most publicly available parallel corpora for Bengali are not large enough; and have rather poor quality, mostly because of incorrect sentence alignments resulting from erroneous sentence segmentation, and also because of a high volume of noise present in them. In this work, we build a customized sentence segmenter for Bengali and propose two novel methods for parallel corpus creation on low-resource setups: aligner ensembling and batch filtering. With the segmenter and the two methods combined, we compile a high-quality Bengali-English parallel corpus comprising of 2.75 million sentence pairs, more than 2 million of which were not available before. Training on neural models, we achieve an improvement of more than 9 BLEU score over previous approaches to Bengali-English machine translation. We also evaluate on a new test set of 1000 pairs made with extensive quality control. We release the segmenter, parallel corpus, and the evaluation set, thus elevating Bengali from its low-resource status. To the best of our knowledge, this is the first ever large scale study on Bengali-English machine translation. We believe our study will pave the way for future research on Bengali-English machine translation as well as other low-resource languages. Our data and code are available at https://github.com/csebuetnlp/banglanmt.",
}
```
|
{"language": ["bn"], "licenses": ["cc-by-nc-sa-4.0"]}
|
csebuetnlp/banglabert
| null |
[
"transformers",
"pytorch",
"electra",
"pretraining",
"bn",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
summarization
|
transformers
|
# mT5-m2o-english-CrossSum
This repository contains the many-to-one (m2o) mT5 checkpoint finetuned on all cross-lingual pairs of the [CrossSum](https://huggingface.co/datasets/csebuetnlp/CrossSum) dataset, where the target summary was in **english**, i.e. this model tries to **summarize text written in any language in English.** For finetuning details and scripts, see the [paper](https://arxiv.org/abs/2112.08804) and the [official repository](https://github.com/csebuetnlp/CrossSum).
## Using this model in `transformers` (tested on 4.11.0.dev0)
```python
import re
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
WHITESPACE_HANDLER = lambda k: re.sub('\s+', ' ', re.sub('\n+', ' ', k.strip()))
article_text = """Videos that say approved vaccines are dangerous and cause autism, cancer or infertility are among those that will be taken down, the company said. The policy includes the termination of accounts of anti-vaccine influencers. Tech giants have been criticised for not doing more to counter false health information on their sites. In July, US President Joe Biden said social media platforms were largely responsible for people's scepticism in getting vaccinated by spreading misinformation, and appealed for them to address the issue. YouTube, which is owned by Google, said 130,000 videos were removed from its platform since last year, when it implemented a ban on content spreading misinformation about Covid vaccines. In a blog post, the company said it had seen false claims about Covid jabs "spill over into misinformation about vaccines in general". The new policy covers long-approved vaccines, such as those against measles or hepatitis B. "We're expanding our medical misinformation policies on YouTube with new guidelines on currently administered vaccines that are approved and confirmed to be safe and effective by local health authorities and the WHO," the post said, referring to the World Health Organization."""
model_name = "csebuetnlp/mT5_m2o_english_crossSum"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
input_ids = tokenizer(
[WHITESPACE_HANDLER(article_text)],
return_tensors="pt",
padding="max_length",
truncation=True,
max_length=512
)["input_ids"]
output_ids = model.generate(
input_ids=input_ids,
max_length=84,
no_repeat_ngram_size=2,
num_beams=4
)[0]
summary = tokenizer.decode(
output_ids,
skip_special_tokens=True,
clean_up_tokenization_spaces=False
)
print(summary)
```
## Citation
If you use this model, please cite the following paper:
```
@article{hasan2021crosssum,
author = {Tahmid Hasan and Abhik Bhattacharjee and Wasi Uddin Ahmad and Yuan-Fang Li and Yong-bin Kang and Rifat Shahriyar},
title = {CrossSum: Beyond English-Centric Cross-Lingual Abstractive Text Summarization for 1500+ Language Pairs},
journal = {CoRR},
volume = {abs/2112.08804},
year = {2021},
url = {https://arxiv.org/abs/2112.08804},
eprinttype = {arXiv},
eprint = {2112.08804}
}
```
|
{"language": ["am", "ar", "az", "bn", "my", "zh", "en", "fr", "gu", "ha", "hi", "ig", "id", "ja", "rn", "ko", "ky", "mr", "ne", "om", "ps", "fa", "pcm", "pt", "pa", "ru", "gd", "sr", "si", "so", "es", "sw", "ta", "te", "th", "ti", "tr", "uk", "ur", "uz", "vi", "cy", "yo"], "tags": ["summarization", "mT5"], "licenses": ["cc-by-nc-sa-4.0"], "widget": [{"text": "Videos that say approved vaccines are dangerous and cause autism, cancer or infertility are among those that will be taken down, the company said. The policy includes the termination of accounts of anti-vaccine influencers. Tech giants have been criticised for not doing more to counter false health information on their sites. In July, US President Joe Biden said social media platforms were largely responsible for people's scepticism in getting vaccinated by spreading misinformation, and appealed for them to address the issue. YouTube, which is owned by Google, said 130,000 videos were removed from its platform since last year, when it implemented a ban on content spreading misinformation about Covid vaccines. In a blog post, the company said it had seen false claims about Covid jabs \"spill over into misinformation about vaccines in general\". The new policy covers long-approved vaccines, such as those against measles or hepatitis B. \"We're expanding our medical misinformation policies on YouTube with new guidelines on currently administered vaccines that are approved and confirmed to be safe and effective by local health authorities and the WHO,\" the post said, referring to the World Health Organization."}]}
|
csebuetnlp/mT5_m2o_english_crossSum
| null |
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"summarization",
"mT5",
"am",
"ar",
"az",
"bn",
"my",
"zh",
"en",
"fr",
"gu",
"ha",
"hi",
"ig",
"id",
"ja",
"rn",
"ko",
"ky",
"mr",
"ne",
"om",
"ps",
"fa",
"pcm",
"pt",
"pa",
"ru",
"gd",
"sr",
"si",
"so",
"es",
"sw",
"ta",
"te",
"th",
"ti",
"tr",
"uk",
"ur",
"uz",
"vi",
"cy",
"yo",
"arxiv:2112.08804",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
summarization
|
transformers
|
# mT5-multilingual-XLSum
This repository contains the mT5 checkpoint finetuned on the 45 languages of [XL-Sum](https://huggingface.co/datasets/csebuetnlp/xlsum) dataset. For finetuning details and scripts,
see the [paper](https://aclanthology.org/2021.findings-acl.413/) and the [official repository](https://github.com/csebuetnlp/xl-sum).
## Using this model in `transformers` (tested on 4.11.0.dev0)
```python
import re
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
WHITESPACE_HANDLER = lambda k: re.sub('\s+', ' ', re.sub('\n+', ' ', k.strip()))
article_text = """Videos that say approved vaccines are dangerous and cause autism, cancer or infertility are among those that will be taken down, the company said. The policy includes the termination of accounts of anti-vaccine influencers. Tech giants have been criticised for not doing more to counter false health information on their sites. In July, US President Joe Biden said social media platforms were largely responsible for people's scepticism in getting vaccinated by spreading misinformation, and appealed for them to address the issue. YouTube, which is owned by Google, said 130,000 videos were removed from its platform since last year, when it implemented a ban on content spreading misinformation about Covid vaccines. In a blog post, the company said it had seen false claims about Covid jabs "spill over into misinformation about vaccines in general". The new policy covers long-approved vaccines, such as those against measles or hepatitis B. "We're expanding our medical misinformation policies on YouTube with new guidelines on currently administered vaccines that are approved and confirmed to be safe and effective by local health authorities and the WHO," the post said, referring to the World Health Organization."""
model_name = "csebuetnlp/mT5_multilingual_XLSum"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
input_ids = tokenizer(
[WHITESPACE_HANDLER(article_text)],
return_tensors="pt",
padding="max_length",
truncation=True,
max_length=512
)["input_ids"]
output_ids = model.generate(
input_ids=input_ids,
max_length=84,
no_repeat_ngram_size=2,
num_beams=4
)[0]
summary = tokenizer.decode(
output_ids,
skip_special_tokens=True,
clean_up_tokenization_spaces=False
)
print(summary)
```
## Benchmarks
Scores on the XL-Sum test sets are as follows (a rough ROUGE-scoring sketch follows the table):
Language | ROUGE-1 / ROUGE-2 / ROUGE-L
---------|----------------------------
Amharic | 20.0485 / 7.4111 / 18.0753
Arabic | 34.9107 / 14.7937 / 29.1623
Azerbaijani | 21.4227 / 9.5214 / 19.3331
Bengali | 29.5653 / 12.1095 / 25.1315
Burmese | 15.9626 / 5.1477 / 14.1819
Chinese (Simplified) | 39.4071 / 17.7913 / 33.406
Chinese (Traditional) | 37.1866 / 17.1432 / 31.6184
English | 37.601 / 15.1536 / 29.8817
French | 35.3398 / 16.1739 / 28.2041
Gujarati | 21.9619 / 7.7417 / 19.86
Hausa | 39.4375 / 17.6786 / 31.6667
Hindi | 38.5882 / 16.8802 / 32.0132
Igbo | 31.6148 / 10.1605 / 24.5309
Indonesian | 37.0049 / 17.0181 / 30.7561
Japanese | 48.1544 / 23.8482 / 37.3636
Kirundi | 31.9907 / 14.3685 / 25.8305
Korean | 23.6745 / 11.4478 / 22.3619
Kyrgyz | 18.3751 / 7.9608 / 16.5033
Marathi | 22.0141 / 9.5439 / 19.9208
Nepali | 26.6547 / 10.2479 / 24.2847
Oromo | 18.7025 / 6.1694 / 16.1862
Pashto | 38.4743 / 15.5475 / 31.9065
Persian | 36.9425 / 16.1934 / 30.0701
Pidgin | 37.9574 / 15.1234 / 29.872
Portuguese | 37.1676 / 15.9022 / 28.5586
Punjabi | 30.6973 / 12.2058 / 25.515
Russian | 32.2164 / 13.6386 / 26.1689
Scottish Gaelic | 29.0231 / 10.9893 / 22.8814
Serbian (Cyrillic) | 23.7841 / 7.9816 / 20.1379
Serbian (Latin) | 21.6443 / 6.6573 / 18.2336
Sinhala | 27.2901 / 13.3815 / 23.4699
Somali | 31.5563 / 11.5818 / 24.2232
Spanish | 31.5071 / 11.8767 / 24.0746
Swahili | 37.6673 / 17.8534 / 30.9146
Tamil | 24.3326 / 11.0553 / 22.0741
Telugu | 19.8571 / 7.0337 / 17.6101
Thai | 37.3951 / 17.275 / 28.8796
Tigrinya | 25.321 / 8.0157 / 21.1729
Turkish | 32.9304 / 15.5709 / 29.2622
Ukrainian | 23.9908 / 10.1431 / 20.9199
Urdu | 39.5579 / 18.3733 / 32.8442
Uzbek | 16.8281 / 6.3406 / 15.4055
Vietnamese | 32.8826 / 16.2247 / 26.0844
Welsh | 32.6599 / 11.596 / 26.1164
Yoruba | 31.6595 / 11.6599 / 25.0898
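For reference, a rough sketch of generic ROUGE scoring with the `evaluate` library (the strings are hypothetical; the official scores above were produced with the language-aware evaluation setup in the XL-Sum repository, not this generic scorer):
```python
# Sketch only: pip install evaluate rouge_score
import evaluate

rouge = evaluate.load("rouge")
scores = rouge.compute(
    predictions=["YouTube is removing videos spreading vaccine misinformation."],   # hypothetical output
    references=["YouTube says it will remove videos that spread misinformation about all vaccines."],
)
print(scores)  # rouge1 / rouge2 / rougeL / rougeLsum
```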
## Citation
If you use this model, please cite the following paper:
```
@inproceedings{hasan-etal-2021-xl,
title = "{XL}-Sum: Large-Scale Multilingual Abstractive Summarization for 44 Languages",
author = "Hasan, Tahmid and
Bhattacharjee, Abhik and
Islam, Md. Saiful and
Mubasshir, Kazi and
Li, Yuan-Fang and
Kang, Yong-Bin and
Rahman, M. Sohel and
Shahriyar, Rifat",
booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-acl.413",
pages = "4693--4703",
}
```
|
{"language": ["am", "ar", "az", "bn", "my", "zh", "en", "fr", "gu", "ha", "hi", "ig", "id", "ja", "rn", "ko", "ky", "mr", "ne", "om", "ps", "fa", "pcm", "pt", "pa", "ru", "gd", "sr", "si", "so", "es", "sw", "ta", "te", "th", "ti", "tr", "uk", "ur", "uz", "vi", "cy", "yo"], "tags": ["summarization", "mT5"], "datasets": ["csebuetnlp/xlsum"], "licenses": ["cc-by-nc-sa-4.0"], "widget": [{"text": "Videos that say approved vaccines are dangerous and cause autism, cancer or infertility are among those that will be taken down, the company said. The policy includes the termination of accounts of anti-vaccine influencers. Tech giants have been criticised for not doing more to counter false health information on their sites. In July, US President Joe Biden said social media platforms were largely responsible for people's scepticism in getting vaccinated by spreading misinformation, and appealed for them to address the issue. YouTube, which is owned by Google, said 130,000 videos were removed from its platform since last year, when it implemented a ban on content spreading misinformation about Covid vaccines. In a blog post, the company said it had seen false claims about Covid jabs \"spill over into misinformation about vaccines in general\". The new policy covers long-approved vaccines, such as those against measles or hepatitis B. \"We're expanding our medical misinformation policies on YouTube with new guidelines on currently administered vaccines that are approved and confirmed to be safe and effective by local health authorities and the WHO,\" the post said, referring to the World Health Organization."}], "model-index": [{"name": "csebuetnlp/mT5_multilingual_XLSum", "results": [{"task": {"type": "summarization", "name": "Summarization"}, "dataset": {"name": "xsum", "type": "xsum", "config": "default", "split": "test"}, "metrics": [{"type": "rouge", "value": 36.5002, "name": "ROUGE-1", "verified": true}, {"type": "rouge", "value": 13.934, "name": "ROUGE-2", "verified": true}, {"type": "rouge", "value": 28.9876, "name": "ROUGE-L", "verified": true}, {"type": "rouge", "value": 28.9958, "name": "ROUGE-LSUM", "verified": true}, {"type": "loss", "value": 2.0674800872802734, "name": "loss", "verified": true}, {"type": "gen_len", "value": 26.9733, "name": "gen_len", "verified": true}]}]}]}
|
csebuetnlp/mT5_multilingual_XLSum
| null |
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"summarization",
"mT5",
"am",
"ar",
"az",
"bn",
"my",
"zh",
"en",
"fr",
"gu",
"ha",
"hi",
"ig",
"id",
"ja",
"rn",
"ko",
"ky",
"mr",
"ne",
"om",
"ps",
"fa",
"pcm",
"pt",
"pa",
"ru",
"gd",
"sr",
"si",
"so",
"es",
"sw",
"ta",
"te",
"th",
"ti",
"tr",
"uk",
"ur",
"uz",
"vi",
"cy",
"yo",
"dataset:csebuetnlp/xlsum",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
fill-mask
|
transformers
|
# FrALBERT Base Cased
Pretrained model on French language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1909.11942) and first released in
[this repository](https://github.com/google-research/albert).
This model, unlike other ALBERT models, is cased: it does make a difference between french and French.
## Model description
FrALBERT is a transformers model pretrained on 16 GB of French Wikipedia in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Sentence Ordering Prediction (SOP): FrALBERT uses a pretraining loss based on predicting the ordering of two consecutive segments of text.
This way, the model learns an inner representation of the French language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the FrALBERT model as inputs.
FrALBERT is particular in that it shares its layers across its Transformer encoder; therefore, all layers have the same weights. Using repeating layers results in a small memory footprint; however, the computational cost remains similar to that of a BERT-like architecture with the same number of hidden layers, as it has to iterate through the same number of (repeating) layers.
This is the second version of the base model.
This model has the following configuration (a minimal `AlbertConfig` sketch follows the list):
- 12 repeating layers
- 128 embedding dimension
- 768 hidden dimension
- 12 attention heads
- 11M parameters
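A minimal sketch of the corresponding `AlbertConfig` (the vocabulary size of 32,000 is taken from the preprocessing section below; all other settings are left at their defaults):
```python
# Sketch only: mirrors the configuration listed above.
from transformers import AlbertConfig

config = AlbertConfig(
    num_hidden_layers=12,    # 12 repeating layers
    embedding_size=128,      # embedding dimension
    hidden_size=768,         # hidden dimension
    num_attention_heads=12,
    vocab_size=32000,
)
```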
## Intended uses & limitations
You can use the raw model for either masked language modeling or sentence ordering prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=fralbert-base-cased) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at models like GPT-2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='cservan/fralbert-base-cased')
>>> unmasker("Paris est la capitale de la [MASK] .")
[
{
"sequence": "paris est la capitale de la france.",
"score": 0.6231236457824707,
"token": 3043,
"token_str": "france"
},
{
"sequence": "paris est la capitale de la region.",
"score": 0.2993471622467041,
"token": 10531,
"token_str": "region"
},
{
"sequence": "paris est la capitale de la societe.",
"score": 0.02028230018913746,
"token": 24622,
"token_str": "societe"
},
{
"sequence": "paris est la capitale de la bretagne.",
"score": 0.012089950032532215,
"token": 24987,
"token_str": "bretagne"
},
{
"sequence": "paris est la capitale de la chine.",
"score": 0.010002839379012585,
"token": 14860,
"token_str": "chine"
}
]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import AlbertTokenizer, AlbertModel
tokenizer = AlbertTokenizer.from_pretrained('cservan/fralbert-base-cased')
model = AlbertModel.from_pretrained("cservan/fralbert-base-cased")
text = "Remplacez-moi par le texte en français que vous souhaitez."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import AlbertTokenizer, TFAlbertModel
tokenizer = AlbertTokenizer.from_pretrained('cservan/fralbert-base-cased')
model = TFAlbertModel.from_pretrained("cservan/fralbert-base-cased")
text = "Remplacez-moi par le texte en français que vous souhaitez."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Training data
The FrALBERT model was pretrained on 4 GB of [French Wikipedia](https://fr.wikipedia.org/wiki/French_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 32,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
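A minimal sketch (the two French sentences are hypothetical) showing how the tokenizer produces this sentence-pair format:
```python
# Sketch only: encode a sentence pair and inspect the special-token layout.
from transformers import AlbertTokenizer

tokenizer = AlbertTokenizer.from_pretrained("cservan/fralbert-base-cased")
encoded = tokenizer("Paris est une ville.", "Elle se situe en France.")
print(tokenizer.decode(encoded["input_ids"]))
# The decoded string shows the [CLS] Sentence A [SEP] Sentence B [SEP] layout described above.
```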
### Training
The FrALBERT procedure follows the BERT setup.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
## Evaluation results
When fine-tuned on downstream tasks, the FrALBERT models achieve the following results:
Slot-filling:
| | FrALBERT-base | FrALBERT-base-cased
|----------------|---------------|--------------------
| MEDIA | 81.76 (0.59) | 85.09 (0.14)
|
### BibTeX entry and citation info
```bibtex
@inproceedings{cattan2021fralbert,
author = {Oralie Cattan and
Christophe Servan and
Sophie Rosset},
booktitle = {Recent Advances in Natural Language Processing, RANLP 2021},
title = {{On the Usability of Transformers-based models for a French Question-Answering task}},
year = {2021},
address = {Online},
month = sep,
}
```
Link to the paper: [PDF](https://hal.archives-ouvertes.fr/hal-03336060)
|
{"language": "fr", "license": "apache-2.0", "datasets": ["wikipedia"]}
|
cservan/fralbert-base-cased
| null |
[
"transformers",
"pytorch",
"albert",
"fill-mask",
"fr",
"dataset:wikipedia",
"arxiv:1909.11942",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null | null |
{}
|
csfraley/thalweg_test
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
csibilevente14/distilbert-base-uncased-finetuned-emotion
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-1b-bemba-fds
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the [BembaSpeech](https://github.com/csikasote/BembaSpeech) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2898
- Wer: 0.3435
## Model description
More information needed
## Intended uses & limitations
More information needed
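Pending fuller documentation, a minimal inference sketch (the audio file path below is a hypothetical placeholder; input audio should be sampled at 16 kHz):
```python
# Sketch only: transcribe a 16 kHz Bemba recording with this checkpoint.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="csikasote/wav2vec2-large-xls-r-1b-bemba-fds")
print(asr("bemba_sample_16khz.wav"))  # hypothetical file path
```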
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.7986 | 0.34 | 500 | 0.4549 | 0.7292 |
| 0.5358 | 0.67 | 1000 | 0.3325 | 0.4491 |
| 0.4559 | 1.01 | 1500 | 0.3090 | 0.3954 |
| 0.3983 | 1.35 | 2000 | 0.3067 | 0.4105 |
| 0.4067 | 1.68 | 2500 | 0.2838 | 0.3678 |
| 0.3722 | 2.02 | 3000 | 0.2824 | 0.3762 |
| 0.3286 | 2.36 | 3500 | 0.2810 | 0.3670 |
| 0.3239 | 2.69 | 4000 | 0.2643 | 0.3501 |
| 0.3187 | 3.03 | 4500 | 0.2838 | 0.3754 |
| 0.2801 | 3.36 | 5000 | 0.2815 | 0.3507 |
| 0.2806 | 3.7 | 5500 | 0.2725 | 0.3486 |
| 0.2714 | 4.04 | 6000 | 0.2898 | 0.3435 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
{"license": "apache-2.0", "tags": ["generated_from_trainer", "bem", "robust-speech-event"], "model-index": [{"name": "wav2vec2-large-xls-r-1b-bemba-fds", "results": []}]}
|
csikasote/wav2vec2-large-xls-r-1b-bemba-fds
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"bem",
"robust-speech-event",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-bemba-fds
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the [BembaSpeech](https://github.com/csikasote/BembaSpeech) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3594
- Wer: 0.3838
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.9961 | 0.67 | 500 | 0.5157 | 0.7133 |
| 0.5903 | 1.34 | 1000 | 0.3663 | 0.4989 |
| 0.4804 | 2.02 | 1500 | 0.3547 | 0.4653 |
| 0.4146 | 2.69 | 2000 | 0.3274 | 0.4345 |
| 0.3792 | 3.36 | 2500 | 0.3586 | 0.4640 |
| 0.3509 | 4.03 | 3000 | 0.3360 | 0.4316 |
| 0.3114 | 4.7 | 3500 | 0.3382 | 0.4303 |
| 0.2935 | 5.38 | 4000 | 0.3263 | 0.4091 |
| 0.2723 | 6.05 | 4500 | 0.3348 | 0.4175 |
| 0.2502 | 6.72 | 5000 | 0.3317 | 0.4147 |
| 0.2334 | 7.39 | 5500 | 0.3542 | 0.4030 |
| 0.2287 | 8.06 | 6000 | 0.3594 | 0.4067 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer", "bem", "robust-speech-event"], "model-index": [{"name": "wav2vec2-large-xls-r-300m-bemba-fds", "results": []}]}
|
csikasote/wav2vec2-large-xls-r-300m-bemba-fds
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"bem",
"robust-speech-event",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-Large-XLSR-53-Bemba
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the Bemba language of Zambia using the [BembaSpeech](https://csikasote.github.io/BembaSpeech) dataset. When using this model, make sure that your speech input is sampled at 16 kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("csv", data_files={"test": "/content/test.csv"}, delimiter="\t")["test"] # Adapt the path to test.csv
processor = Wav2Vec2Processor.from_pretrained("csikasote/wav2vec2-large-xlsr-bemba")
model = Wav2Vec2ForCTC.from_pretrained("csikasote/wav2vec2-large-xlsr-bemba")
model.to("cuda")  # the inference call below sends inputs to the GPU
# BembaSpeech is sampled at 16 kHz, so you do not need to resample
#resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = speech_array.squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Bemba test data of BembaSpeech.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("csv", data_files={"test": "/content/test.csv"}, delimiter="\t")["test"]
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("csikasote/wav2vec2-large-xlsr-bemba")
model = Wav2Vec2ForCTC.from_pretrained("csikasote/wav2vec2-large-xlsr-bemba")
model.to("cuda")
chars_to_ignore_regex = '[\,\_\?\.\!\;\:\"\“]'
#resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = speech_array.squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run inference on the test set in batches and decode the predictions
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 42.17 %
## Training
The BembaSpeech `train`, `dev` and `test` datasets were used for training, development and evaluation respectively. The script used for evaluating the model on the test dataset can be found [here](https://colab.research.google.com/drive/1aplFHfaXE68HGDwBYV2KqUWPasrk7bXv?usp=sharing).
|
{"language": "bem", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["BembaSpeech"], "metrics": ["wer"], "model-index": [{"name": "XLSR Wav2Vec2 Bemba by Claytone Sikasote", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "BembaSpeech bem", "type": "bembaspeech", "args": "bem"}, "metrics": [{"type": "wer", "value": 42.17, "name": "Test WER"}]}]}]}
|
csikasote/wav2vec2-large-xlsr-bemba
| null |
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"bem",
"dataset:BembaSpeech",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null | null |
{}
|
csj88/Model6
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
translation
|
transformers
|
### marianmt-th-zh_cn
* source languages: th
* target languages: zh_cn
* dataset:
* model: transformer-align
* pre-processing: normalization + SentencePiece
* test set translations:
* test set scores:
## Training
Training scripts from [LalitaDeelert/NLP-ZH_TH-Project](https://github.com/LalitaDeelert/NLP-ZH_TH-Project). Experiments tracked at [cstorm125/marianmt-th-zh_cn](https://wandb.ai/cstorm125/marianmt-th-zh_cn).
```
export WANDB_PROJECT=marianmt-th-zh_cn
python train_model.py --input_fname ../data/v1/Train.csv \
--output_dir ../models/marianmt-th-zh_cn \
--source_lang th --target_lang zh \
--metric_tokenize zh --fp16
```
## Usage
```
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("cstorm125/marianmt-th-zh_cn")
model = AutoModelForSeq2SeqLM.from_pretrained("cstorm125/marianmt-th-zh_cn").cpu()
src_text = [
'ฉันรักคุณ',
'ฉันอยากกินข้าว',
]
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
print([tokenizer.decode(t, skip_special_tokens=True) for t in translated])
> ['我爱你', '我想吃饭。']
```
## Requirements
```
transformers==4.6.0
torch==1.8.0
```
|
{"tags": ["translation", "torch==1.8.0"], "widget": [{"text": "Inference Unavailable"}]}
|
cstorm125/marianmt-th-zh_cn
| null |
[
"transformers",
"pytorch",
"marian",
"text2text-generation",
"translation",
"torch==1.8.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
translation
|
transformers
|
### marianmt-zh_cn-th
* source languages: zh_cn
* target languages: th
* dataset:
* model: transformer-align
* pre-processing: normalization + SentencePiece
* test set translations:
* test set scores:
## Training
Training scripts from [LalitaDeelert/NLP-ZH_TH-Project](https://github.com/LalitaDeelert/NLP-ZH_TH-Project). Experiments tracked at [cstorm125/marianmt-zh_cn-th](https://wandb.ai/cstorm125/marianmt-zh_cn-th).
```
export WANDB_PROJECT=marianmt-zh_cn-th
python train_model.py --input_fname ../data/v1/Train.csv \
--output_dir ../models/marianmt-zh_cn-th \
--source_lang zh --target_lang th \
--metric_tokenize th_syllable --fp16
```
## Usage
```
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("cstorm125/marianmt-zh_cn-th")
model = AutoModelForSeq2SeqLM.from_pretrained("cstorm125/marianmt-zh_cn-th").cpu()
src_text = [
'我爱你',
'我想吃米饭',
]
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
print([tokenizer.decode(t, skip_special_tokens=True) for t in translated])
> ['ผมรักคุณนะ', 'ฉันอยากกินข้าว']
```
## Requirements
```
transformers==4.6.0
torch==1.8.0
```
|
{"tags": ["translation", "torch==1.8.0"], "widget": [{"text": "Inference Unavailable"}]}
|
cstorm125/marianmt-zh_cn-th
| null |
[
"transformers",
"pytorch",
"marian",
"text2text-generation",
"translation",
"torch==1.8.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
question-answering
|
transformers
|
# wangchan-deberta_v1-base-wiki-20210520-news-spm-finetune-qa
Finetuning `airesearch/wangchan-deberta_v1-base-wiki-20210520-news-spm` on the training sets of `iapp_wiki_qa_squad`, `thaiqa_squad`, and `nsc_qa` (examples with cosine similarity above 0.8 to any validation or test example were removed; contexts of the latter two are trimmed to around 300 `newmm` words). Benchmarks shared on [wandb](https://wandb.ai/cstorm125/wangchanberta-qa) using validation and test sets of `iapp_wiki_qa_squad`.
Trained with [thai2transformers](https://github.com/vistec-AI/thai2transformers/blob/dev/scripts/downstream/train_question_answering_lm_finetuning.py).
Run with:
```
export MODEL_NAME=wangchan-deberta_v1-base-wiki-20210520-news-spm
CUDA_LAUNCH_BLOCKING=1 python train_question_answering_lm_finetuning.py \
--model_name $MODEL_NAME \
--dataset_name chimera_qa \
--revision mlm@ckp-41100 \
--output_dir $MODEL_NAME-finetune-chimera_qa-model \
--log_dir $MODEL_NAME-finetune-chimera_qa-log \
--model_max_length 400 \
--pad_on_right \
--fp16 \
--use_auth_token
```
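A minimal inference sketch, assuming the finetuned checkpoint loads with the standard `question-answering` pipeline (the question is taken from this card's widget example; the context is a truncated excerpt of it):
```python
# Sketch only: assumes the checkpoint is compatible with AutoModelForQuestionAnswering.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="cstorm125/wangchan-deberta_v1-base-wiki-20210520-news-spm-finetune-qa",
)
print(qa(
    question="สวนกุหลาบเป็นโรงเรียนอะไร",
    context="โรงเรียนสวนกุหลาบวิทยาลัย (Suankularb Wittayalai School) เป็นโรงเรียนชายล้วน",
))
```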
|
{"widget": [{"text": "\u0e2a\u0e27\u0e19\u0e01\u0e38\u0e2b\u0e25\u0e32\u0e1a\u0e40\u0e1b\u0e47\u0e19\u0e42\u0e23\u0e07\u0e40\u0e23\u0e35\u0e22\u0e19\u0e2d\u0e30\u0e44\u0e23", "context": "\u0e42\u0e23\u0e07\u0e40\u0e23\u0e35\u0e22\u0e19\u0e2a\u0e27\u0e19\u0e01\u0e38\u0e2b\u0e25\u0e32\u0e1a\u0e27\u0e34\u0e17\u0e22\u0e32\u0e25\u0e31\u0e22 (Suankularb Wittayalai School) (\u0e2d\u0e31\u0e01\u0e29\u0e23\u0e22\u0e48\u0e2d : \u0e2a.\u0e01. / S.K.) \u0e40\u0e1b\u0e47\u0e19\u0e42\u0e23\u0e07\u0e40\u0e23\u0e35\u0e22\u0e19\u0e0a\u0e32\u0e22\u0e25\u0e49\u0e27\u0e19 \u0e23\u0e30\u0e14\u0e31\u0e1a\u0e0a\u0e31\u0e49\u0e19\u0e21\u0e31\u0e18\u0e22\u0e21\u0e28\u0e36\u0e01\u0e29\u0e32\u0e02\u0e19\u0e32\u0e14\u0e43\u0e2b\u0e0d\u0e48\u0e1e\u0e34\u0e40\u0e28\u0e29 \u0e2a\u0e31\u0e07\u0e01\u0e31\u0e14\u0e2a\u0e33\u0e19\u0e31\u0e01\u0e07\u0e32\u0e19\u0e40\u0e02\u0e15\u0e1e\u0e37\u0e49\u0e19\u0e17\u0e35\u0e48\u0e01\u0e32\u0e23\u0e28\u0e36\u0e01\u0e29\u0e32\u0e21\u0e31\u0e18\u0e22\u0e21\u0e28\u0e36\u0e01\u0e29\u0e32\u0e40\u0e02\u0e15 1 \u0e2a\u0e33\u0e19\u0e31\u0e01\u0e07\u0e32\u0e19\u0e04\u0e13\u0e30\u0e01\u0e23\u0e23\u0e21\u0e01\u0e32\u0e23\u0e01\u0e32\u0e23\u0e28\u0e36\u0e01\u0e29\u0e32\u0e02\u0e31\u0e49\u0e19\u0e1e\u0e37\u0e49\u0e19\u0e10\u0e32\u0e19 (\u0e0a\u0e37\u0e48\u0e2d\u0e40\u0e14\u0e34\u0e21: \u0e01\u0e23\u0e21\u0e2a\u0e32\u0e21\u0e31\u0e0d\u0e28\u0e36\u0e01\u0e29\u0e32) \u0e01\u0e23\u0e30\u0e17\u0e23\u0e27\u0e07\u0e28\u0e36\u0e01\u0e29\u0e32\u0e18\u0e34\u0e01\u0e32\u0e23 \u0e01\u0e48\u0e2d\u0e15\u0e31\u0e49\u0e07\u0e42\u0e14\u0e22 \u0e1e\u0e23\u0e30\u0e1a\u0e32\u0e17\u0e2a\u0e21\u0e40\u0e14\u0e47\u0e08\u0e1e\u0e23\u0e30\u0e08\u0e38\u0e25\u0e08\u0e2d\u0e21\u0e40\u0e01\u0e25\u0e49\u0e32\u0e40\u0e08\u0e49\u0e32\u0e2d\u0e22\u0e39\u0e48\u0e2b\u0e31\u0e27 \u0e44\u0e14\u0e49\u0e23\u0e31\u0e1a\u0e01\u0e32\u0e23\u0e2a\u0e16\u0e32\u0e1b\u0e19\u0e32\u0e02\u0e36\u0e49\u0e19\u0e43\u0e19\u0e27\u0e31\u0e19\u0e17\u0e35\u0e48 8 \u0e21\u0e35\u0e19\u0e32\u0e04\u0e21 \u0e1e.\u0e28. 2424 (\u0e02\u0e13\u0e30\u0e19\u0e31\u0e49\u0e19\u0e19\u0e31\u0e1a\u0e27\u0e31\u0e19\u0e17\u0e35\u0e48 1 \u0e40\u0e21\u0e29\u0e32\u0e22\u0e19 \u0e40\u0e1b\u0e47\u0e19\u0e27\u0e31\u0e19\u0e02\u0e36\u0e49\u0e19\u0e1b\u0e35\u0e43\u0e2b\u0e21\u0e48 \u0e40\u0e21\u0e37\u0e48\u0e2d\u0e19\u0e31\u0e1a\u0e2d\u0e22\u0e48\u0e32\u0e07\u0e2a\u0e32\u0e01\u0e25\u0e16\u0e37\u0e2d\u0e40\u0e1b\u0e47\u0e19 \u0e1e.\u0e28. 2425) \u0e42\u0e14\u0e22\u0e40\u0e1b\u0e47\u0e19\u0e42\u0e23\u0e07\u0e40\u0e23\u0e35\u0e22\u0e19\u0e23\u0e31\u0e10\u0e1a\u0e32\u0e25\u0e41\u0e2b\u0e48\u0e07\u0e41\u0e23\u0e01\u0e02\u0e2d\u0e07\u0e1b\u0e23\u0e30\u0e40\u0e17\u0e28\u0e44\u0e17\u0e22"}]}
|
cstorm125/wangchan-deberta_v1-base-wiki-20210520-news-spm-finetune-qa
| null |
[
"transformers",
"pytorch",
"deberta",
"question-answering",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
question-answering
|
transformers
|
# wangchanberta-base-att-spm-uncased-finetune-qa
Fine-tuned `airesearch/wangchanberta-base-att-spm-uncased` on the training sets of `iapp_wiki_qa_squad`, `thaiqa_squad`, and `nsc_qa` (examples with cosine similarity above 0.8 to any validation or test example were removed; contexts of the latter two datasets were trimmed to about 300 `newmm` words). Benchmarks on the validation and test sets of `iapp_wiki_qa_squad` are shared on [wandb](https://wandb.ai/cstorm125/wangchanberta-qa).
Trained with [thai2transformers](https://github.com/vistec-AI/thai2transformers/blob/dev/scripts/downstream/train_question_answering_lm_finetuning.py).
Run with:
```
export MODEL_NAME=airesearch/wangchanberta-base-att-spm-uncased
python train_question_answering_lm_finetuning.py \
--model_name $MODEL_NAME \
--dataset_name chimera_qa \
--output_dir $MODEL_NAME-finetune-chimera_qa-model \
--log_dir $MODEL_NAME-finetune-chimera_qa-log \
--lowercase \
--pad_on_right \
--fp16
```
|
{"widget": [{"text": "\u0e2a\u0e27\u0e19\u0e01\u0e38\u0e2b\u0e25\u0e32\u0e1a\u0e40\u0e1b\u0e47\u0e19\u0e42\u0e23\u0e07\u0e40\u0e23\u0e35\u0e22\u0e19\u0e2d\u0e30\u0e44\u0e23", "context": "\u0e42\u0e23\u0e07\u0e40\u0e23\u0e35\u0e22\u0e19\u0e2a\u0e27\u0e19\u0e01\u0e38\u0e2b\u0e25\u0e32\u0e1a\u0e27\u0e34\u0e17\u0e22\u0e32\u0e25\u0e31\u0e22 (Suankularb Wittayalai School) (\u0e2d\u0e31\u0e01\u0e29\u0e23\u0e22\u0e48\u0e2d : \u0e2a.\u0e01. / S.K.) \u0e40\u0e1b\u0e47\u0e19\u0e42\u0e23\u0e07\u0e40\u0e23\u0e35\u0e22\u0e19\u0e0a\u0e32\u0e22\u0e25\u0e49\u0e27\u0e19 \u0e23\u0e30\u0e14\u0e31\u0e1a\u0e0a\u0e31\u0e49\u0e19\u0e21\u0e31\u0e18\u0e22\u0e21\u0e28\u0e36\u0e01\u0e29\u0e32\u0e02\u0e19\u0e32\u0e14\u0e43\u0e2b\u0e0d\u0e48\u0e1e\u0e34\u0e40\u0e28\u0e29 \u0e2a\u0e31\u0e07\u0e01\u0e31\u0e14\u0e2a\u0e33\u0e19\u0e31\u0e01\u0e07\u0e32\u0e19\u0e40\u0e02\u0e15\u0e1e\u0e37\u0e49\u0e19\u0e17\u0e35\u0e48\u0e01\u0e32\u0e23\u0e28\u0e36\u0e01\u0e29\u0e32\u0e21\u0e31\u0e18\u0e22\u0e21\u0e28\u0e36\u0e01\u0e29\u0e32\u0e40\u0e02\u0e15 1 \u0e2a\u0e33\u0e19\u0e31\u0e01\u0e07\u0e32\u0e19\u0e04\u0e13\u0e30\u0e01\u0e23\u0e23\u0e21\u0e01\u0e32\u0e23\u0e01\u0e32\u0e23\u0e28\u0e36\u0e01\u0e29\u0e32\u0e02\u0e31\u0e49\u0e19\u0e1e\u0e37\u0e49\u0e19\u0e10\u0e32\u0e19 (\u0e0a\u0e37\u0e48\u0e2d\u0e40\u0e14\u0e34\u0e21: \u0e01\u0e23\u0e21\u0e2a\u0e32\u0e21\u0e31\u0e0d\u0e28\u0e36\u0e01\u0e29\u0e32) \u0e01\u0e23\u0e30\u0e17\u0e23\u0e27\u0e07\u0e28\u0e36\u0e01\u0e29\u0e32\u0e18\u0e34\u0e01\u0e32\u0e23 \u0e01\u0e48\u0e2d\u0e15\u0e31\u0e49\u0e07\u0e42\u0e14\u0e22 \u0e1e\u0e23\u0e30\u0e1a\u0e32\u0e17\u0e2a\u0e21\u0e40\u0e14\u0e47\u0e08\u0e1e\u0e23\u0e30\u0e08\u0e38\u0e25\u0e08\u0e2d\u0e21\u0e40\u0e01\u0e25\u0e49\u0e32\u0e40\u0e08\u0e49\u0e32\u0e2d\u0e22\u0e39\u0e48\u0e2b\u0e31\u0e27 \u0e44\u0e14\u0e49\u0e23\u0e31\u0e1a\u0e01\u0e32\u0e23\u0e2a\u0e16\u0e32\u0e1b\u0e19\u0e32\u0e02\u0e36\u0e49\u0e19\u0e43\u0e19\u0e27\u0e31\u0e19\u0e17\u0e35\u0e48 8 \u0e21\u0e35\u0e19\u0e32\u0e04\u0e21 \u0e1e.\u0e28. 2424 (\u0e02\u0e13\u0e30\u0e19\u0e31\u0e49\u0e19\u0e19\u0e31\u0e1a\u0e27\u0e31\u0e19\u0e17\u0e35\u0e48 1 \u0e40\u0e21\u0e29\u0e32\u0e22\u0e19 \u0e40\u0e1b\u0e47\u0e19\u0e27\u0e31\u0e19\u0e02\u0e36\u0e49\u0e19\u0e1b\u0e35\u0e43\u0e2b\u0e21\u0e48 \u0e40\u0e21\u0e37\u0e48\u0e2d\u0e19\u0e31\u0e1a\u0e2d\u0e22\u0e48\u0e32\u0e07\u0e2a\u0e32\u0e01\u0e25\u0e16\u0e37\u0e2d\u0e40\u0e1b\u0e47\u0e19 \u0e1e.\u0e28. 2425) \u0e42\u0e14\u0e22\u0e40\u0e1b\u0e47\u0e19\u0e42\u0e23\u0e07\u0e40\u0e23\u0e35\u0e22\u0e19\u0e23\u0e31\u0e10\u0e1a\u0e32\u0e25\u0e41\u0e2b\u0e48\u0e07\u0e41\u0e23\u0e01\u0e02\u0e2d\u0e07\u0e1b\u0e23\u0e30\u0e40\u0e17\u0e28\u0e44\u0e17\u0e22"}]}
|
cstorm125/wangchanberta-base-att-spm-uncased-finetune-qa
| null |
[
"transformers",
"pytorch",
"camembert",
"question-answering",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
question-answering
|
transformers
|
# wangchanberta-base-wiki-20210520-news-spm-finetune-qa
Fine-tuned `airesearchth/wangchanberta-base-wiki-20210520-news-spm` on the training sets of `iapp_wiki_qa_squad`, `thaiqa_squad`, and `nsc_qa` (examples with cosine similarity above 0.8 to any validation or test example were removed; contexts of the latter two datasets were trimmed to about 300 `newmm` words). Benchmarks on the validation and test sets of `iapp_wiki_qa_squad` are shared on [wandb](https://wandb.ai/cstorm125/wangchanberta-qa).
Trained with [thai2transformers](https://github.com/vistec-AI/thai2transformers/blob/dev/scripts/downstream/train_question_answering_lm_finetuning.py).
Run with:
```
export MODEL_NAME=airesearchth/wangchanberta-base-wiki-20210520-news-spm
CUDA_LAUNCH_BLOCKING=1 python train_question_answering_lm_finetuning.py \
--model_name $MODEL_NAME \
--dataset_name chimera_qa \
--output_dir $MODEL_NAME-finetune-chimera_qa-model \
--log_dir $MODEL_NAME-finetune-chimera_qa-log \
--model_max_length 400 \
--pad_on_right \
--fp16
```
|
{"widget": [{"text": "\u0e2a\u0e27\u0e19\u0e01\u0e38\u0e2b\u0e25\u0e32\u0e1a\u0e40\u0e1b\u0e47\u0e19\u0e42\u0e23\u0e07\u0e40\u0e23\u0e35\u0e22\u0e19\u0e2d\u0e30\u0e44\u0e23", "context": "\u0e42\u0e23\u0e07\u0e40\u0e23\u0e35\u0e22\u0e19\u0e2a\u0e27\u0e19\u0e01\u0e38\u0e2b\u0e25\u0e32\u0e1a\u0e27\u0e34\u0e17\u0e22\u0e32\u0e25\u0e31\u0e22 (Suankularb Wittayalai School) (\u0e2d\u0e31\u0e01\u0e29\u0e23\u0e22\u0e48\u0e2d : \u0e2a.\u0e01. / S.K.) \u0e40\u0e1b\u0e47\u0e19\u0e42\u0e23\u0e07\u0e40\u0e23\u0e35\u0e22\u0e19\u0e0a\u0e32\u0e22\u0e25\u0e49\u0e27\u0e19 \u0e23\u0e30\u0e14\u0e31\u0e1a\u0e0a\u0e31\u0e49\u0e19\u0e21\u0e31\u0e18\u0e22\u0e21\u0e28\u0e36\u0e01\u0e29\u0e32\u0e02\u0e19\u0e32\u0e14\u0e43\u0e2b\u0e0d\u0e48\u0e1e\u0e34\u0e40\u0e28\u0e29 \u0e2a\u0e31\u0e07\u0e01\u0e31\u0e14\u0e2a\u0e33\u0e19\u0e31\u0e01\u0e07\u0e32\u0e19\u0e40\u0e02\u0e15\u0e1e\u0e37\u0e49\u0e19\u0e17\u0e35\u0e48\u0e01\u0e32\u0e23\u0e28\u0e36\u0e01\u0e29\u0e32\u0e21\u0e31\u0e18\u0e22\u0e21\u0e28\u0e36\u0e01\u0e29\u0e32\u0e40\u0e02\u0e15 1 \u0e2a\u0e33\u0e19\u0e31\u0e01\u0e07\u0e32\u0e19\u0e04\u0e13\u0e30\u0e01\u0e23\u0e23\u0e21\u0e01\u0e32\u0e23\u0e01\u0e32\u0e23\u0e28\u0e36\u0e01\u0e29\u0e32\u0e02\u0e31\u0e49\u0e19\u0e1e\u0e37\u0e49\u0e19\u0e10\u0e32\u0e19 (\u0e0a\u0e37\u0e48\u0e2d\u0e40\u0e14\u0e34\u0e21: \u0e01\u0e23\u0e21\u0e2a\u0e32\u0e21\u0e31\u0e0d\u0e28\u0e36\u0e01\u0e29\u0e32) \u0e01\u0e23\u0e30\u0e17\u0e23\u0e27\u0e07\u0e28\u0e36\u0e01\u0e29\u0e32\u0e18\u0e34\u0e01\u0e32\u0e23 \u0e01\u0e48\u0e2d\u0e15\u0e31\u0e49\u0e07\u0e42\u0e14\u0e22 \u0e1e\u0e23\u0e30\u0e1a\u0e32\u0e17\u0e2a\u0e21\u0e40\u0e14\u0e47\u0e08\u0e1e\u0e23\u0e30\u0e08\u0e38\u0e25\u0e08\u0e2d\u0e21\u0e40\u0e01\u0e25\u0e49\u0e32\u0e40\u0e08\u0e49\u0e32\u0e2d\u0e22\u0e39\u0e48\u0e2b\u0e31\u0e27 \u0e44\u0e14\u0e49\u0e23\u0e31\u0e1a\u0e01\u0e32\u0e23\u0e2a\u0e16\u0e32\u0e1b\u0e19\u0e32\u0e02\u0e36\u0e49\u0e19\u0e43\u0e19\u0e27\u0e31\u0e19\u0e17\u0e35\u0e48 8 \u0e21\u0e35\u0e19\u0e32\u0e04\u0e21 \u0e1e.\u0e28. 2424 (\u0e02\u0e13\u0e30\u0e19\u0e31\u0e49\u0e19\u0e19\u0e31\u0e1a\u0e27\u0e31\u0e19\u0e17\u0e35\u0e48 1 \u0e40\u0e21\u0e29\u0e32\u0e22\u0e19 \u0e40\u0e1b\u0e47\u0e19\u0e27\u0e31\u0e19\u0e02\u0e36\u0e49\u0e19\u0e1b\u0e35\u0e43\u0e2b\u0e21\u0e48 \u0e40\u0e21\u0e37\u0e48\u0e2d\u0e19\u0e31\u0e1a\u0e2d\u0e22\u0e48\u0e32\u0e07\u0e2a\u0e32\u0e01\u0e25\u0e16\u0e37\u0e2d\u0e40\u0e1b\u0e47\u0e19 \u0e1e.\u0e28. 2425) \u0e42\u0e14\u0e22\u0e40\u0e1b\u0e47\u0e19\u0e42\u0e23\u0e07\u0e40\u0e23\u0e35\u0e22\u0e19\u0e23\u0e31\u0e10\u0e1a\u0e32\u0e25\u0e41\u0e2b\u0e48\u0e07\u0e41\u0e23\u0e01\u0e02\u0e2d\u0e07\u0e1b\u0e23\u0e30\u0e40\u0e17\u0e28\u0e44\u0e17\u0e22"}]}
|
cstorm125/wangchanberta-base-wiki-20210520-news-spm-finetune-qa
| null |
[
"transformers",
"pytorch",
"camembert",
"question-answering",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
question-answering
|
transformers
|
# wangchanberta-base-wiki-20210520-news-spm_span-mask-finetune-qa
Fine-tuned `airesearch/wangchanberta-base-wiki-20210520-news-spm_span-mask` on the training sets of `iapp_wiki_qa_squad`, `thaiqa_squad`, and `nsc_qa` (examples with cosine similarity above 0.8 to any validation or test example were removed; contexts of the latter two datasets were trimmed to about 300 `newmm` words). Benchmarks on the validation and test sets of `iapp_wiki_qa_squad` are shared on [wandb](https://wandb.ai/cstorm125/wangchanberta-qa).
Trained with [thai2transformers](https://github.com/vistec-AI/thai2transformers/blob/dev/scripts/downstream/train_question_answering_lm_finetuning.py).
Run with:
```
export MODEL_NAME=airesearch/wangchanberta-base-wiki-20210520-news-spm_span-mask
CUDA_LAUNCH_BLOCKING=1 python train_question_answering_lm_finetuning.py \
--model_name $MODEL_NAME \
--dataset_name chimera_qa \
--output_dir $MODEL_NAME-finetune-chimera_qa-model \
--log_dir $MODEL_NAME-finetune-chimera_qa-log \
--model_max_length 400 \
--pad_on_right \
--fp16 \
--use_auth_token
```
|
{"widget": [{"text": "\u0e2a\u0e27\u0e19\u0e01\u0e38\u0e2b\u0e25\u0e32\u0e1a\u0e40\u0e1b\u0e47\u0e19\u0e42\u0e23\u0e07\u0e40\u0e23\u0e35\u0e22\u0e19\u0e2d\u0e30\u0e44\u0e23", "context": "\u0e42\u0e23\u0e07\u0e40\u0e23\u0e35\u0e22\u0e19\u0e2a\u0e27\u0e19\u0e01\u0e38\u0e2b\u0e25\u0e32\u0e1a\u0e27\u0e34\u0e17\u0e22\u0e32\u0e25\u0e31\u0e22 (Suankularb Wittayalai School) (\u0e2d\u0e31\u0e01\u0e29\u0e23\u0e22\u0e48\u0e2d : \u0e2a.\u0e01. / S.K.) \u0e40\u0e1b\u0e47\u0e19\u0e42\u0e23\u0e07\u0e40\u0e23\u0e35\u0e22\u0e19\u0e0a\u0e32\u0e22\u0e25\u0e49\u0e27\u0e19 \u0e23\u0e30\u0e14\u0e31\u0e1a\u0e0a\u0e31\u0e49\u0e19\u0e21\u0e31\u0e18\u0e22\u0e21\u0e28\u0e36\u0e01\u0e29\u0e32\u0e02\u0e19\u0e32\u0e14\u0e43\u0e2b\u0e0d\u0e48\u0e1e\u0e34\u0e40\u0e28\u0e29 \u0e2a\u0e31\u0e07\u0e01\u0e31\u0e14\u0e2a\u0e33\u0e19\u0e31\u0e01\u0e07\u0e32\u0e19\u0e40\u0e02\u0e15\u0e1e\u0e37\u0e49\u0e19\u0e17\u0e35\u0e48\u0e01\u0e32\u0e23\u0e28\u0e36\u0e01\u0e29\u0e32\u0e21\u0e31\u0e18\u0e22\u0e21\u0e28\u0e36\u0e01\u0e29\u0e32\u0e40\u0e02\u0e15 1 \u0e2a\u0e33\u0e19\u0e31\u0e01\u0e07\u0e32\u0e19\u0e04\u0e13\u0e30\u0e01\u0e23\u0e23\u0e21\u0e01\u0e32\u0e23\u0e01\u0e32\u0e23\u0e28\u0e36\u0e01\u0e29\u0e32\u0e02\u0e31\u0e49\u0e19\u0e1e\u0e37\u0e49\u0e19\u0e10\u0e32\u0e19 (\u0e0a\u0e37\u0e48\u0e2d\u0e40\u0e14\u0e34\u0e21: \u0e01\u0e23\u0e21\u0e2a\u0e32\u0e21\u0e31\u0e0d\u0e28\u0e36\u0e01\u0e29\u0e32) \u0e01\u0e23\u0e30\u0e17\u0e23\u0e27\u0e07\u0e28\u0e36\u0e01\u0e29\u0e32\u0e18\u0e34\u0e01\u0e32\u0e23 \u0e01\u0e48\u0e2d\u0e15\u0e31\u0e49\u0e07\u0e42\u0e14\u0e22 \u0e1e\u0e23\u0e30\u0e1a\u0e32\u0e17\u0e2a\u0e21\u0e40\u0e14\u0e47\u0e08\u0e1e\u0e23\u0e30\u0e08\u0e38\u0e25\u0e08\u0e2d\u0e21\u0e40\u0e01\u0e25\u0e49\u0e32\u0e40\u0e08\u0e49\u0e32\u0e2d\u0e22\u0e39\u0e48\u0e2b\u0e31\u0e27 \u0e44\u0e14\u0e49\u0e23\u0e31\u0e1a\u0e01\u0e32\u0e23\u0e2a\u0e16\u0e32\u0e1b\u0e19\u0e32\u0e02\u0e36\u0e49\u0e19\u0e43\u0e19\u0e27\u0e31\u0e19\u0e17\u0e35\u0e48 8 \u0e21\u0e35\u0e19\u0e32\u0e04\u0e21 \u0e1e.\u0e28. 2424 (\u0e02\u0e13\u0e30\u0e19\u0e31\u0e49\u0e19\u0e19\u0e31\u0e1a\u0e27\u0e31\u0e19\u0e17\u0e35\u0e48 1 \u0e40\u0e21\u0e29\u0e32\u0e22\u0e19 \u0e40\u0e1b\u0e47\u0e19\u0e27\u0e31\u0e19\u0e02\u0e36\u0e49\u0e19\u0e1b\u0e35\u0e43\u0e2b\u0e21\u0e48 \u0e40\u0e21\u0e37\u0e48\u0e2d\u0e19\u0e31\u0e1a\u0e2d\u0e22\u0e48\u0e32\u0e07\u0e2a\u0e32\u0e01\u0e25\u0e16\u0e37\u0e2d\u0e40\u0e1b\u0e47\u0e19 \u0e1e.\u0e28. 2425) \u0e42\u0e14\u0e22\u0e40\u0e1b\u0e47\u0e19\u0e42\u0e23\u0e07\u0e40\u0e23\u0e35\u0e22\u0e19\u0e23\u0e31\u0e10\u0e1a\u0e32\u0e25\u0e41\u0e2b\u0e48\u0e07\u0e41\u0e23\u0e01\u0e02\u0e2d\u0e07\u0e1b\u0e23\u0e30\u0e40\u0e17\u0e28\u0e44\u0e17\u0e22"}]}
|
cstorm125/wangchanberta-base-wiki-20210520-news-spm_span-mask-finetune-qa
| null |
[
"transformers",
"pytorch",
"camembert",
"question-answering",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null | null |
{}
|
cstrathe435/CiViL_Test
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
cstrathe435/OBJCONT
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
cstrathe435/obshousetest
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
cstrathe435/test12
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
csukuangfj/conformer_ctc
| null |
[
"tensorboard",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
csukuangfj/cudnn
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null |
k2
|
# Introduction
This repo contains a pre-trained model built with
<https://github.com/k2-fsa/icefall/pull/219>.
It is trained on the [AIShell](https://www.openslr.org/33/) dataset
using the modified transducer loss from [optimized_transducer](https://github.com/csukuangfj/optimized_transducer).
It also uses [aidatatang_200zh](http://www.openslr.org/62/) as extra training data.
## How to clone this repo
```
sudo apt-get install git-lfs
git clone https://huggingface.co/csukuangfj/icefall-aishell-transducer-stateless-modified-2-2022-03-01
cd icefall-aishell-transducer-stateless-modified-2-2022-03-01
git lfs pull
```
**Caution**: You have to run `git lfs pull`. Otherwise, you will be SAD later.
The model in this repo is trained using the commit `TODO`.
You can use
```
git clone https://github.com/k2-fsa/icefall
cd icefall
git checkout TODO
```
to download `icefall`.
You can find the model information by visiting <https://github.com/k2-fsa/icefall/blob/TODO/egs/aishell/ASR/transducer_stateless_modified-2/train.py#L232>.
In short, the encoder is a Conformer model with 8 heads, 12 encoder layers, 512-dim attention, 2048-dim feedforward;
the decoder contains a 512-dim embedding layer and a Conv1d with kernel size 2.
The decoder architecture is modified from
[Rnn-Transducer with Stateless Prediction Network](https://ieeexplore.ieee.org/document/9054419).
A Conv1d layer is placed right after the input embedding layer.
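If the description above is hard to picture, here is a minimal PyTorch sketch of such a stateless decoder (an embedding followed by a causal Conv1d over the previous labels). It is only an illustration under the stated dimensions, not the actual icefall code, and the class and argument names are invented for this example.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StatelessDecoder(nn.Module):
    """Illustrative stateless transducer decoder: an embedding layer followed by
    a causal Conv1d over the last `context_size` labels; there is no recurrent state."""

    def __init__(self, vocab_size: int, embed_dim: int = 512, context_size: int = 2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.conv = nn.Conv1d(embed_dim, embed_dim, kernel_size=context_size)

    def forward(self, labels: torch.Tensor) -> torch.Tensor:
        # labels: (batch, U) previously emitted token IDs
        x = self.embedding(labels).permute(0, 2, 1)      # (batch, embed_dim, U)
        x = F.pad(x, (self.conv.kernel_size[0] - 1, 0))  # left-pad => causal
        x = torch.relu(self.conv(x))                     # (batch, embed_dim, U)
        return x.permute(0, 2, 1)                        # (batch, U, embed_dim)

# Toy usage with a hypothetical 4000-symbol vocabulary.
decoder = StatelessDecoder(vocab_size=4000)
labels = torch.randint(0, 4000, (2, 5))
print(decoder(labels).shape)  # torch.Size([2, 5, 512])
```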
-----
## Description
This repo provides a pre-trained transducer Conformer model for the AIShell dataset
using [icefall][icefall]. There are no RNNs in the decoder. The decoder is stateless
and contains only an embedding layer and a Conv1d.
The commands for training are:
```bash
cd egs/aishell/ASR
./prepare.sh --stop-stage 6
./prepare_aidatatang_200zh.sh
export CUDA_VISIBLE_DEVICES="0,1,2"
./transducer_stateless_modified-2/train.py \
--world-size 3 \
--num-epochs 90 \
--start-epoch 0 \
--exp-dir transducer_stateless_modified-2/exp-2 \
--max-duration 250 \
--lr-factor 2.0 \
--context-size 2 \
--modified-transducer-prob 0.25 \
--datatang-prob 0.2
```
The tensorboard training log can be found at
<https://tensorboard.dev/experiment/oG72ZlWaSGua6fXkcGRRjA/>
The commands for decoding are
```bash
# greedy search
for epoch in 89; do
for avg in 38; do
./transducer_stateless_modified-2/decode.py \
--epoch $epoch \
--avg $avg \
--exp-dir transducer_stateless_modified-2/exp-2 \
--max-duration 100 \
--context-size 2 \
--decoding-method greedy_search \
--max-sym-per-frame 1
done
done
# modified beam search
for epoch in 89; do
for avg in 38; do
./transducer_stateless_modified-2/decode.py \
--epoch $epoch \
--avg $avg \
--exp-dir transducer_stateless_modified-2/exp-2 \
--max-duration 100 \
--context-size 2 \
--decoding-method modified_beam_search \
--beam-size 4
done
done
```
You can find the decoding log for the above command in this
repo (in the folder [log][log]).
The WER for the test dataset is
| | test |comment |
|------------------------|------|----------------------------------------------------------------|
| greedy search | 4.94 |--epoch 89, --avg 38, --max-duration 100, --max-sym-per-frame 1 |
| modified beam search | 4.68 |--epoch 89, --avg 38, --max-duration 100 --beam-size 4 |
# File description
- [log][log], this directory contains the decoding log and decoding results
- [test_wavs][test_wavs], this directory contains wave files for testing the pre-trained model
- [data][data], this directory contains files generated by [prepare.sh][prepare]
- [exp][exp], this directory contains only one file: `pretrained.pt`
`exp/pretrained.pt` is generated by the following command:
```bash
epoch=89
avg=38
./transducer_stateless_modified-2/export.py \
--exp-dir ./transducer_stateless_modified-2/exp-2 \
--lang-dir ./data/lang_char \
--epoch $epoch \
--avg $avg
```
**HINT**: To use `pretrained.pt` to compute the WER for the `test` dataset,
just do the following:
```bash
cp icefall-aishell-transducer-stateless-modified-2-2022-03-01/exp/pretrained.pt \
/path/to/icefall/egs/aishell/ASR/transducer_stateless_modified-2/exp/epoch-999.pt
```
and pass `--epoch 999 --avg 1` to `transducer_stateless_modified-2/decode.py`.
[icefall]: https://github.com/k2-fsa/icefall
[prepare]: https://github.com/k2-fsa/icefall/blob/master/egs/aishell/ASR/prepare.sh
[exp]: https://huggingface.co/csukuangfj/icefall-aishell-transducer-stateless-modified-2-2022-03-01/tree/main/exp
[data]: https://huggingface.co/csukuangfj/icefall-aishell-transducer-stateless-modified-2-2022-03-01/tree/main/data
[test_wavs]: https://huggingface.co/csukuangfj/icefall-aishell-transducer-stateless-modified-2-2022-03-01/tree/main/test_wavs
[log]: https://huggingface.co/csukuangfj/icefall-aishell-transducer-stateless-modified-2-2022-03-01/tree/main/log
|
{"language": "en", "license": "apache-2.0", "tags": ["icefall", "k2", "transducer", "aishell", "ASR", "stateless transducer", "PyTorch"], "datasets": ["aishell", "aidatatang_200zh"], "metrics": ["WER"]}
|
csukuangfj/icefall-aishell-transducer-stateless-modified-2-2022-03-01
| null |
[
"k2",
"icefall",
"transducer",
"aishell",
"ASR",
"stateless transducer",
"PyTorch",
"en",
"dataset:aishell",
"dataset:aidatatang_200zh",
"license:apache-2.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null |
k2
|
# Introduction
This repo contains a pre-trained model built with
<https://github.com/k2-fsa/icefall/pull/219>.
It is trained on the [AIShell](https://www.openslr.org/33/) dataset
using the modified transducer loss from [optimized_transducer](https://github.com/csukuangfj/optimized_transducer).
## How to clone this repo
```
sudo apt-get install git-lfs
git clone https://huggingface.co/csukuangfj/icefall-aishell-transducer-stateless-modified-2022-03-01
cd icefall-aishell-transducer-stateless-modified-2022-03-01
git lfs pull
```
**Caution**: You have to run `git lfs pull`. Otherwise, you will be SAD later.
The model in this repo is trained using the commit `TODO`.
You can use
```
git clone https://github.com/k2-fsa/icefall
cd icefall
git checkout TODO
```
to download `icefall`.
You can find the model information by visiting <https://github.com/k2-fsa/icefall/blob/TODO/egs/aishell/ASR/transducer_stateless_modified/train.py#L232>.
In short, the encoder is a Conformer model with 8 heads, 12 encoder layers, 512-dim attention, 2048-dim feedforward;
the decoder contains a 512-dim embedding layer and a Conv1d with kernel size 2.
The decoder architecture is modified from
[Rnn-Transducer with Stateless Prediction Network](https://ieeexplore.ieee.org/document/9054419).
A Conv1d layer is placed right after the input embedding layer.
-----
## Description
This repo provides a pre-trained transducer Conformer model for the AIShell dataset
using [icefall][icefall]. There are no RNNs in the decoder. The decoder is stateless
and contains only an embedding layer and a Conv1d.
The commands for training are:
```bash
cd egs/aishell/ASR
./prepare.sh --stop-stage 6
export CUDA_VISIBLE_DEVICES="0,1,2"
./transducer_stateless_modified/train.py \
--world-size 3 \
--num-epochs 90 \
--start-epoch 0 \
--exp-dir transducer_stateless_modified/exp-4 \
--max-duration 250 \
--lr-factor 2.0 \
--context-size 2 \
--modified-transducer-prob 0.25
```
The tensorboard training log can be found at
<https://tensorboard.dev/experiment/C27M8YxRQCa1t2XglTqlWg>
The commands for decoding are
```bash
# greedy search
for epoch in 64; do
for avg in 33; do
./transducer_stateless_modified/decode.py \
--epoch $epoch \
--avg $avg \
--exp-dir transducer_stateless_modified/exp-4 \
--max-duration 100 \
--context-size 2 \
--decoding-method greedy_search \
--max-sym-per-frame 1
done
done
# modified beam search
for epoch in 64; do
for avg in 33; do
./transducer_stateless_modified/decode.py \
--epoch $epoch \
--avg $avg \
--exp-dir transducer_stateless_modified/exp-4 \
--max-duration 100 \
--context-size 2 \
--decoding-method modified_beam_search \
--beam-size 4
done
done
```
You can find the decoding log for the above command in this
repo (in the folder [log][log]).
The WER for the test dataset is
| | test |comment |
|------------------------|------|----------------------------------------------------------------|
| greedy search | 5.22 |--epoch 64, --avg 33, --max-duration 100, --max-sym-per-frame 1 |
| modified beam search | 5.02 |--epoch 64, --avg 33, --max-duration 100 --beam-size 4 |
# File description
- [log][log], this directory contains the decoding log and decoding results
- [test_wavs][test_wavs], this directory contains wave files for testing the pre-trained model
- [data][data], this directory contains files generated by [prepare.sh][prepare]
- [exp][exp], this directory contains only one file: `pretrained.pt`
`exp/pretrained.pt` is generated by the following command:
```bash
epoch=64
avg=33
./transducer_stateless_modified/export.py \
--exp-dir ./transducer_stateless_modified/exp-4 \
--lang-dir ./data/lang_char \
--epoch $epoch \
--avg $avg
```
**HINT**: To use `pretrained.pt` to compute the WER for the `test` dataset,
just do the following:
```bash
cp icefall-aishell-transducer-stateless-modified-2022-03-01/exp/pretrained.pt \
/path/to/icefall/egs/aishell/ASR/transducer_stateless_modified/exp/epoch-999.pt
```
and pass `--epoch 999 --avg 1` to `transducer_stateless_modified/decode.py`.
[icefall]: https://github.com/k2-fsa/icefall
[prepare]: https://github.com/k2-fsa/icefall/blob/master/egs/aishell/ASR/prepare.sh
[exp]: https://huggingface.co/csukuangfj/icefall-aishell-transducer-stateless-modified-2022-03-01/tree/main/exp
[data]: https://huggingface.co/csukuangfj/icefall-aishell-transducer-stateless-modified-2022-03-01/tree/main/data
[test_wavs]: https://huggingface.co/csukuangfj/icefall-aishell-transducer-stateless-modified-2022-03-01/tree/main/test_wavs
[log]: https://huggingface.co/csukuangfj/icefall-aishell-transducer-stateless-modified-2022-03-01/tree/main/log
|
{"language": "en", "license": "apache-2.0", "tags": ["icefall", "k2", "transducer", "aishell", "ASR", "stateless transducer", "PyTorch"], "datasets": ["aishell"], "metrics": ["WER"]}
|
csukuangfj/icefall-aishell-transducer-stateless-modified-2022-03-01
| null |
[
"k2",
"icefall",
"transducer",
"aishell",
"ASR",
"stateless transducer",
"PyTorch",
"en",
"dataset:aishell",
"license:apache-2.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null | null |
# Introduction
This repo contains a pre-trained model built with
<https://github.com/k2-fsa/icefall/pull/213>.
It is trained on the train-clean-100 subset of the LibriSpeech dataset.
It also uses the `S` subset of GigaSpeech as extra training data.
## How to clone this repo
```
sudo apt-get install git-lfs
git clone https://huggingface.co/csukuangfj/icefall-asr-librispeech-100h-transducer-stateless-multi-datasets-bpe-500-2022-02-21
cd icefall-asr-librispeech-100h-transducer-stateless-multi-datasets-bpe-500-2022-02-21
git lfs pull
```
**Caution**: You have to run `git lfs pull`. Otherwise, you will be SAD later.
The model in this repo is trained using the commit `2332ba312d7ce72f08c7bac1e3312f7e3dd722dc`.
You can use
```
git clone https://github.com/k2-fsa/icefall
cd icefall
git checkout 2332ba312d7ce72f08c7bac1e3312f7e3dd722dc
```
to download `icefall`.
You can find the model information by visiting <https://github.com/k2-fsa/icefall/blob/2332ba312d7ce72f08c7bac1e3312f7e3dd722dc/egs/librispeech/ASR/transducer_stateless_multi_datasets/train.py#L198>.
In short, the encoder is a Conformer model with 8 heads, 12 encoder layers, 512-dim attention, 2048-dim feedforward;
the decoder contains a 1024-dim embedding layer and a Conv1d with kernel size 2.
The decoder architecture is modified from
[Rnn-Transducer with Stateless Prediction Network](https://ieeexplore.ieee.org/document/9054419).
A Conv1d layer is placed right after the input embedding layer.
-----
## Description
This repo provides a pre-trained transducer Conformer model for the LibriSpeech dataset
using [icefall][icefall]. There are no RNNs in the decoder. The decoder is stateless
and contains only an embedding layer and a Conv1d.
The commands for training are:
```
cd egs/librispeech/ASR/
./prepare.sh
./prepare_giga_speech.sh
export CUDA_VISIBLE_DEVICES="0,1"
./transducer_stateless_multi_datasets/train.py \
--world-size 2 \
--num-epochs 60 \
--start-epoch 0 \
--exp-dir transducer_stateless_multi_datasets/exp-100-2 \
--full-libri 0 \
--max-duration 300 \
--lr-factor 1 \
--bpe-model data/lang_bpe_500/bpe.model \
--modified-transducer-prob 0.25 \
--giga-prob 0.2
```
The tensorboard training log can be found at
<https://tensorboard.dev/experiment/qUEKzMnrTZmOz1EXPda9RA/>
The command for decoding is:
```
epoch=57
avg=17
## greedy search
for epoch in 57; do
for avg in 17; do
for sym in 1 2 3; do
./transducer_stateless_multi_datasets/decode.py \
--epoch $epoch \
--avg $avg \
--exp-dir transducer_stateless_multi_datasets/exp-100-2 \
--bpe-model ./data/lang_bpe_500/bpe.model \
--max-duration 100 \
--context-size 2 \
--max-sym-per-frame $sym
done
done
done
## modified beam search
epoch=57
avg=17
./transducer_stateless_multi_datasets/decode.py \
--epoch $epoch \
--avg $avg \
--exp-dir transducer_stateless_multi_datasets/exp-100-2 \
--bpe-model ./data/lang_bpe_500/bpe.model \
--max-duration 100 \
--context-size 2 \
--decoding-method modified_beam_search \
--beam-size 4
```
You can find the decoding log for the above command in this
repo (in the folder `log`).
The WERs for the test datasets are
| | test-clean | test-other | comment |
|-------------------------------------|------------|------------|------------------------------------------|
| greedy search (max sym per frame 1) | 6.34 | 16.7 | --epoch 57, --avg 17, --max-duration 100 |
| greedy search (max sym per frame 2) | 6.34 | 16.7 | --epoch 57, --avg 17, --max-duration 100 |
| greedy search (max sym per frame 3) | 6.34 | 16.7 | --epoch 57, --avg 17, --max-duration 100 |
| modified beam search (beam size 4) | 6.31 | 16.3 | --epoch 57, --avg 17, --max-duration 100 |
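The three greedy-search rows above differ only in `--max-sym-per-frame`, which caps how many non-blank symbols greedy search may emit for a single encoder frame. The sketch below is a rough, self-contained illustration of that loop; the decoder and joiner here are random stand-ins, not the real icefall networks, and the function is not the actual icefall implementation.
```python
import torch

def greedy_search(encoder_out, decoder, joiner, blank_id=0, max_sym_per_frame=1):
    """Illustrative transducer greedy search for one utterance.

    encoder_out: (T, encoder_dim) tensor of encoder frames.
    decoder, joiner: callables standing in for the prediction and joint networks.
    """
    hyp = [blank_id, blank_id]  # context of size 2, initialized with blanks
    for t in range(encoder_out.size(0)):
        emitted = 0
        while emitted < max_sym_per_frame:
            dec_out = decoder(torch.tensor(hyp[-2:]))  # use the last 2 labels
            logits = joiner(encoder_out[t], dec_out)   # (vocab_size,)
            y = int(logits.argmax())
            if y == blank_id:
                break        # blank: advance to the next frame
            hyp.append(y)    # non-blank: emit and try again on the same frame
            emitted += 1
    return hyp[2:]

# Toy usage with random stand-ins for the networks.
torch.manual_seed(0)
vocab_size, T = 10, 6
encoder_out = torch.randn(T, 8)
decoder = lambda ys: torch.randn(4)                   # fake prediction-network output
joiner = lambda frame, dec: torch.randn(vocab_size)   # fake joint-network logits
print(greedy_search(encoder_out, decoder, joiner, max_sym_per_frame=1))
```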
# File description
- [log][log], this directory contains the decoding log and decoding results
- [test_wavs][test_wavs], this directory contains wave files for testing the pre-trained model
- [data][data], this directory contains files generated by [prepare.sh][prepare]
- [exp][exp], this directory contains only one file: `pretrained.pt`
`exp/pretrained.pt` is generated by the following command:
```bash
./transducer_stateless_multi_datasets/export.py \
--epoch 57 \
--avg 17 \
--bpe-model data/lang_bpe_500/bpe.model \
  --exp-dir transducer_stateless_multi_datasets/exp-100-2
```
**HINT**: To use `pretrained.pt` to compute the WER for test-clean and test-other,
just do the following:
```
cp icefall-asr-librispeech-100h-transducer-stateless-multi-datasets-bpe-500-2022-02-21/exp/pretrained.pt \
/path/to/icefall/egs/librispeech/ASR/transducer_stateless_multi_datasets/exp/epoch-999.pt
```
and pass `--epoch 999 --avg 1` to `transducer_stateless_multi_datasets/decode.py`.
[icefall]: https://github.com/k2-fsa/icefall
[prepare]: https://github.com/k2-fsa/icefall/blob/master/egs/librispeech/ASR/prepare.sh
[exp]: https://huggingface.co/csukuangfj/icefall-asr-librispeech-100h-transducer-stateless-multi-datasets-bpe-500-2022-02-21/tree/main/exp
[data]: https://huggingface.co/csukuangfj/icefall-asr-librispeech-100h-transducer-stateless-multi-datasets-bpe-500-2022-02-21/tree/main/data
[test_wavs]: https://huggingface.co/csukuangfj/icefall-asr-librispeech-100h-transducer-stateless-multi-datasets-bpe-500-2022-02-21/tree/main/test_wavs
[log]: https://huggingface.co/csukuangfj/icefall-asr-librispeech-100h-transducer-stateless-multi-datasets-bpe-500-2022-02-21/tree/main/log
|
{}
|
csukuangfj/icefall-asr-librispeech-100h-transducer-stateless-multi-datasets-bpe-500-2022-02-21
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null | null |
# Introduction
## How to clone this repo
```
sudo apt-get install git-lfs
git clone https://huggingface.co/csukuangfj/icefall-asr-librispeech-conformer-ctc-jit-bpe-500-2021-11-09
cd icefall-asr-librispeech-conformer-ctc-jit-bpe-500-2021-11-09
git lfs pull
```
**Caution**: You have to run `git lfs pull`. Otherwise, you will be SAD later.
-----
## Description
This repo provides a pre-trained Conformer CTC model for the LibriSpeech dataset
using [icefall][icefall].
The commands for training are:
```
cd egs/librispeech/ASR/conformer_ctc
./prepare.sh
export CUDA_VISIBLE_DEVICES="0,1,2,3"
./conformer_ctc/train.py \
--exp-dir conformer_ctc/exp_500_att0.8 \
--lang-dir data/lang_bpe_500 \
--att-rate 0.8 \
--full-libri 1 \
--max-duration 200 \
--concatenate-cuts 0 \
--world-size 4 \
--bucketing-sampler 1 \
--start-epoch 0 \
--num-epochs 90
```
The command for decoding is:
```
./conformer_ctc/decode.py \
--exp-dir conformer_ctc/exp_500_att0.8 \
--lang-dir data/lang_bpe_500 \
--max-duration 30 \
--concatenate-cuts 0 \
--bucketing-sampler 1 \
--num-paths 1000 \
--epoch 77 \
--avg 55 \
--method attention-decoder \
--nbest-scale 0.5
```
You can find the decoding log for the above command in this
repo: [log/log-decode-2021-11-09-17-38-28](log/log-decode-2021-11-09-17-38-28).
The best WERs for the LibriSpeech test sets are:
| | test-clean | test-other |
|-----|------------|------------|
| WER | 2.42 | 5.73 |
Scale values used in n-gram LM rescoring and attention rescoring for the best WERs are:
| ngram_lm_scale | attention_scale |
|----------------|-----------------|
| 2.0 | 2.0 |
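For context, `ngram_lm_scale` and `attention_scale` weight the 4-gram LM score and the attention-decoder score against the lattice (acoustic + HLG) score when ranking the n-best paths; schematically, and only as a sketch of the scoring rule rather than the exact icefall expression:

$$\text{total score} \approx \text{lattice score} + \texttt{ngram\_lm\_scale}\cdot(\text{4-gram LM score}) + \texttt{attention\_scale}\cdot(\text{attention-decoder score})$$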
# File description
- [log][log], this directory contains the decoding log
- [test_wavs][test_wavs], this directory contains wave files for testing the pre-trained model
- [data][data], this directory contains files generated by [prepare.sh][prepare]
Note: For the `data/lm` directory, we provide only `G_4_gram.pt`. If you need other files
in this directory, please run [prepare.sh][prepare].
- [exp][exp], this directory contains two files: `pretrained.pt` and `cpu_jit.pt`.
`exp/pretrained.pt` is generated by the following command:
```
./conformer_ctc/export.py \
--epoch 77 \
--avg 55 \
--jit 0 \
--lang-dir data/lang_bpe_500 \
--exp-dir conformer_ctc/exp_500_att0.8
```
**HINT**: To use `pretrained.pt` to compute the WER for test-clean and test-other,
just do the following:
```
cp icefall-asr-librispeech-conformer-ctc-jit-bpe-500-2021-11-09/exp/pretrained.pt \
/path/to/icefall/egs/librispeech/ASR/conformer_ctc/exp/epoch-999.pt
```
and pass `--epoch 999 --avg 1` to `conformer_ctc/decode.py`.
`exp/cpu_jit.pt` is generated by the following command:
```
./conformer_ctc/export.py \
--epoch 77 \
--avg 55 \
--jit 1 \
--lang-dir data/lang_bpe_500 \
--exp-dir conformer_ctc/exp_500_att0.8
```
# Deploy your model in C++ using k2
To deploy your model in C++ using k2 without depending on Python, do the following:
```
# Note: It requires torch >= 1.8.0
git clone https://github.com/k2-fsa/k2
cd k2
git checkout v2.0-pre
mkdir build_release
cd build_release
cmake -DCMAKE_BUILD_TYPE=Release ..
make -j ctc_decode hlg_decode ngram_lm_rescore attention_rescore
```
## CTC decoding
```
cd k2/build_release
./bin/ctc_decode \
--use_gpu true \
--nn_model ./icefall-asr-librispeech-conformer-ctc-jit-bpe-500-2021-11-09/exp/cpu_jit.pt \
--bpe_model ./icefall-asr-librispeech-conformer-ctc-jit-bpe-500-2021-11-09/data/lang_bpe_500/bpe.model \
./icefall-asr-librispeech-conformer-ctc-jit-bpe-500-2021-11-09/test_wavs/1089-134686-0001.wav \
./icefall-asr-librispeech-conformer-ctc-jit-bpe-500-2021-11-09/test_wavs/1221-135766-0001.wav \
./icefall-asr-librispeech-conformer-ctc-jit-bpe-500-2021-11-09/test_wavs/1221-135766-0002.wav
```
## HLG decoding
```
./bin/hlg_decode \
--use_gpu true \
--nn_model ./icefall-asr-librispeech-conformer-ctc-jit-bpe-500-2021-11-09/exp/cpu_jit.pt \
--hlg ./icefall-asr-librispeech-conformer-ctc-jit-bpe-500-2021-11-09/data/lang_bpe_500/HLG.pt \
--word_table ./icefall-asr-librispeech-conformer-ctc-jit-bpe-500-2021-11-09/data/lang_bpe_500/words.txt \
./icefall-asr-librispeech-conformer-ctc-jit-bpe-500-2021-11-09/test_wavs/1089-134686-0001.wav \
./icefall-asr-librispeech-conformer-ctc-jit-bpe-500-2021-11-09/test_wavs/1221-135766-0001.wav \
./icefall-asr-librispeech-conformer-ctc-jit-bpe-500-2021-11-09/test_wavs/1221-135766-0002.wav
```
## HLG decoding + n-gram LM rescoring
**NOTE**: A V100 GPU with 16 GB of RAM is known NOT to work because of OOM.
A V100 GPU with 32 GB of RAM is known to work.
```
./bin/ngram_lm_rescore \
--use_gpu true \
--nn_model ./icefall-asr-librispeech-conformer-ctc-jit-bpe-500-2021-11-09/exp/cpu_jit.pt \
--hlg ./icefall-asr-librispeech-conformer-ctc-jit-bpe-500-2021-11-09/data/lang_bpe_500/HLG.pt \
--g ./icefall-asr-librispeech-conformer-ctc-jit-bpe-500-2021-11-09/data/lm/G_4_gram.pt \
--ngram_lm_scale 1.0 \
--word_table ./icefall-asr-librispeech-conformer-ctc-jit-bpe-500-2021-11-09/data/lang_bpe_500/words.txt \
./icefall-asr-librispeech-conformer-ctc-jit-bpe-500-2021-11-09/test_wavs/1089-134686-0001.wav \
./icefall-asr-librispeech-conformer-ctc-jit-bpe-500-2021-11-09/test_wavs/1221-135766-0001.wav \
./icefall-asr-librispeech-conformer-ctc-jit-bpe-500-2021-11-09/test_wavs/1221-135766-0002.wav
```
## HLG decoding + n-gram LM rescoring + attention decoder rescoring
**NOTE**: A V100 GPU with 16 GB of RAM is known NOT to work because of OOM.
A V100 GPU with 32 GB of RAM is known to work.
```
./bin/attention_rescore \
--use_gpu true \
--nn_model ./icefall-asr-librispeech-conformer-ctc-jit-bpe-500-2021-11-09/exp/cpu_jit.pt \
--hlg ./icefall-asr-librispeech-conformer-ctc-jit-bpe-500-2021-11-09/data/lang_bpe_500/HLG.pt \
--g ./icefall-asr-librispeech-conformer-ctc-jit-bpe-500-2021-11-09/data/lm/G_4_gram.pt \
--ngram_lm_scale 2.0 \
--attention_scale 2.0 \
--num_paths 100 \
--nbest_scale 0.5 \
--word_table ./icefall-asr-librispeech-conformer-ctc-jit-bpe-500-2021-11-09/data/lang_bpe_500/words.txt \
--sos_id 1 \
--eos_id 1 \
./icefall-asr-librispeech-conformer-ctc-jit-bpe-500-2021-11-09/test_wavs/1089-134686-0001.wav \
./icefall-asr-librispeech-conformer-ctc-jit-bpe-500-2021-11-09/test_wavs/1221-135766-0001.wav \
./icefall-asr-librispeech-conformer-ctc-jit-bpe-500-2021-11-09/test_wavs/1221-135766-0002.wav
```
[prepare]: https://github.com/k2-fsa/icefall/blob/master/egs/librispeech/ASR/prepare.sh
[exp]: https://huggingface.co/csukuangfj/icefall-asr-librispeech-conformer-ctc-jit-bpe-500-2021-11-09/tree/main/exp
[data]: https://huggingface.co/csukuangfj/icefall-asr-librispeech-conformer-ctc-jit-bpe-500-2021-11-09/tree/main/data
[test_wavs]: https://huggingface.co/csukuangfj/icefall-asr-librispeech-conformer-ctc-jit-bpe-500-2021-11-09/tree/main/test_wavs
[log]: https://huggingface.co/csukuangfj/icefall-asr-librispeech-conformer-ctc-jit-bpe-500-2021-11-09/tree/main/log
[icefall]: https://github.com/k2-fsa/icefall
|
{}
|
csukuangfj/icefall-asr-librispeech-conformer-ctc-jit-bpe-500-2021-11-09
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null | null |
# Introduction
## How to clone this repo
```
sudo apt-get install git-lfs
git clone https://huggingface.co/csukuangfj/icefall-asr-librispeech-transducer-bpe-500-2021-12-17
cd icefall-asr-librispeech-transducer-bpe-500-2021-12-17
git lfs pull
```
**Caution**: You have to run `git lfs pull`. Otherwise, you will be SAD later.
The model in this repo is trained using the commit `cb04c8a7509425ab45fae888b0ca71bbbd23f0de`.
You can use
```
git clone https://github.com/k2-fsa/icefall
cd icefall
git checkout cb04c8a7509425ab45fae888b0ca71bbbd23f0de
```
to download `icefall`.
You can find the model information by visiting <https://github.com/k2-fsa/icefall/blob/cb04c8a7509425ab45fae888b0ca71bbbd23f0de/egs/librispeech/ASR/transducer/train.py#L196>
In short, the encoder is a Conformer model with 8 heads, 12 encoder layers, 512-dim attention, 2048-dim feedforward;
the decoder contains a 1024-dim embedding layer, plus a 4-layer LSTM with hidden size 512.
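A minimal PyTorch sketch may help picture this prediction network (a 1024-dim embedding feeding a 4-layer LSTM with hidden size 512). It is an illustration under the stated dimensions only, not the actual icefall implementation; the class and argument names are invented for the example.
```python
import torch
import torch.nn as nn

class LstmDecoder(nn.Module):
    """Illustrative RNN-T prediction network: embedding -> multi-layer LSTM."""

    def __init__(self, vocab_size: int, embed_dim: int = 1024,
                 hidden_size: int = 512, num_layers: int = 4):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_size,
                            num_layers=num_layers, batch_first=True)

    def forward(self, labels: torch.Tensor, state=None):
        # labels: (batch, U) previously emitted token IDs
        x = self.embedding(labels)        # (batch, U, embed_dim)
        out, state = self.lstm(x, state)  # (batch, U, hidden_size)
        return out, state

# Toy usage with the 500-symbol BPE vocabulary used by this model.
decoder = LstmDecoder(vocab_size=500)
out, _ = decoder(torch.randint(0, 500, (2, 7)))
print(out.shape)  # torch.Size([2, 7, 512])
```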
-----
## Description
This repo provides a pre-trained RNN-T Conformer model for the LibriSpeech dataset
using [icefall][icefall].
The commands for training are:
```
cd egs/librispeech/ASR/
./prepare.sh
export CUDA_VISIBLE_DEVICES="0,1,2,3"
./transducer/train.py \
--world-size 4 \
--num-epochs 30 \
--start-epoch 0 \
--exp-dir transducer/exp-lr-2.5-full \
--full-libri 1 \
--max-duration 250 \
--lr-factor 2.5
```
The command for decoding is:
```
epoch=26
avg=12
./transducer/decode.py \
--epoch $epoch \
--avg $avg \
--exp-dir transducer/exp-lr-2.5-full \
--bpe-model ./data/lang_bpe_500/bpe.model \
--max-duration 100
```
You can find the decoding log for the above command in this
repo: [log/log-decode-epoch-26-avg-12-2021-12-17-09-33-04](log/log-decode-epoch-26-avg-12-2021-12-17-09-33-04).
The best WER using greedy search is:
| | test-clean | test-other |
|-----|------------|------------|
| WER | 3.16 | 7.71 |
# File description
- [log][log], this directory contains the decoding log and decoding results
- [test_wavs][test_wavs], this directory contains wave files for testing the pre-trained model
- [data][data], this directory contains files generated by [prepare.sh][prepare]
- [exp][exp], this directory contains only one file: `pretrained.pt`
`exp/pretrained.pt` is generated by the following command:
```
./transducer/export.py \
--epoch 26 \
--avg 12 \
--bpe-model data/lang_bpe_500/bpe.model \
--exp-dir transducer/exp-lr-2.5-full
```
**HINT**: To use `pretrained.pt` to compute the WER for test-clean and test-other,
just do the following:
```
cp icefall-asr-librispeech-transducer-bpe-500-2021-12-17/exp/pretrained.pt \
/path/to/icefall/egs/librispeech/ASR/transducer/exp/epoch-999.pt
```
and pass `--epoch 999 --avg 1` to `transducer/decode.py`.
[icefall]: https://github.com/k2-fsa/icefall
[prepare]: https://github.com/k2-fsa/icefall/blob/master/egs/librispeech/ASR/prepare.sh
[exp]: https://huggingface.co/csukuangfj/icefall-asr-librispeech-transducer-bpe-500-2021-12-17/tree/main/exp
[data]: https://huggingface.co/csukuangfj/icefall-asr-librispeech-transducer-bpe-500-2021-12-17/tree/main/data
[test_wavs]: https://huggingface.co/csukuangfj/icefall-asr-librispeech-transducer-bpe-500-2021-12-17/tree/main/test_wavs
[log]: https://huggingface.co/csukuangfj/icefall-asr-librispeech-transducer-bpe-500-2021-12-17/tree/main/log
|
{}
|
csukuangfj/icefall-asr-librispeech-transducer-bpe-500-2021-12-17
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null | null |
# Introduction
## How to clone this repo
```
sudo apt-get install git-lfs
git clone https://huggingface.co/csukuangfj/icefall-asr-librispeech-transducer-bpe-500-2021-12-23
cd icefall-asr-librispeech-transducer-bpe-500-2021-12-23
git lfs pull
```
**Caution**: You have to run `git lfs pull`. Otherwise, you will be SAD later.
The model in this repo is trained using the commit `5b6699a8354b70b23b252b371c612a35ed186ec2`.
You can use
```
git clone https://github.com/k2-fsa/icefall
cd icefall
git checkout 5b6699a8354b70b23b252b371c612a35ed186ec2
```
to download `icefall`.
You can find the model information by visiting <https://github.com/k2-fsa/icefall/blob/5b6699a8354b70b23b252b371c612a35ed186ec2/egs/librispeech/ASR/transducer/train.py#L191>
In short, the encoder is a Conformer model with 8 heads, 12 encoder layers, 512-dim attention, 2048-dim feedforward;
the decoder contains a 1024-dim embedding layer, plus a 2-layer LSTM with hidden size 512.
-----
## Description
This repo provides a pre-trained RNN-T Conformer model for the LibriSpeech dataset
using [icefall][icefall].
The commands for training are:
```
cd egs/librispeech/ASR/
./prepare.sh
export CUDA_VISIBLE_DEVICES="0,1,2,3"
./transducer/train.py \
--world-size 4 \
--num-epochs 35 \
--start-epoch 0 \
--exp-dir transducer/exp-lr-2.5-full \
--full-libri 1 \
--max-duration 180 \
--lr-factor 2.5
```
The command for decoding is:
```
epoch=34
avg=11
./transducer/decode.py \
--epoch $epoch \
--avg $avg \
--exp-dir transducer/exp-lr-2.5-full \
--bpe-model ./data/lang_bpe_500/bpe.model \
--max-duration 100
```
You can find the decoding log for the above command in the `log` folder
of this repo.
The best WER using greedy search is:
| | test-clean | test-other |
|-----|------------|------------|
| WER | 3.07 | 7.51 |
# File description
- [log][log], this directory contains the decoding log and decoding results
- [test_wavs][test_wavs], this directory contains wave files for testing the pre-trained model
- [data][data], this directory contains files generated by [prepare.sh][prepare]
- [exp][exp], this directory contains only one file: `pretrained.pt`
`exp/pretrained.pt` is generated by the following command:
```
./transducer/export.py \
--epoch 34 \
--avg 11 \
--bpe-model data/lang_bpe_500/bpe.model \
--exp-dir transducer/exp-lr-2.5-full
```
**HINT**: To use `pretrained.pt` to compute the WER for test-clean and test-other,
just do the following:
```
cp icefall-asr-librispeech-transducer-bpe-500-2021-12-23/exp/pretrained.pt \
/path/to/icefall/egs/librispeech/ASR/transducer/exp/epoch-999.pt
```
and pass `--epoch 999 --avg 1` to `transducer/decode.py`.
[icefall]: https://github.com/k2-fsa/icefall
[prepare]: https://github.com/k2-fsa/icefall/blob/master/egs/librispeech/ASR/prepare.sh
[exp]: https://huggingface.co/csukuangfj/icefall-asr-librispeech-transducer-bpe-500-2021-12-23/tree/main/exp
[data]: https://huggingface.co/csukuangfj/icefall-asr-librispeech-transducer-bpe-500-2021-12-23/tree/main/data
[test_wavs]: https://huggingface.co/csukuangfj/icefall-asr-librispeech-transducer-bpe-500-2021-12-23/tree/main/test_wavs
[log]: https://huggingface.co/csukuangfj/icefall-asr-librispeech-transducer-bpe-500-2021-12-23/tree/main/log
|
{}
|
csukuangfj/icefall-asr-librispeech-transducer-bpe-500-2021-12-23
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null | null |
# Introduction
## How to clone this repo
```
sudo apt-get install git-lfs
git clone https://huggingface.co/csukuangfj/icefall-asr-librispeech-transducer-stateless-bpe-500-2021-12-22
cd icefall-asr-librispeech-transducer-stateless-bpe-500-2021-12-22
git lfs pull
```
**Caution**: You have to run `git lfs pull`. Otherwise, you will be SAD later.
The model in this repo is trained using the commit `fb6a57e9e01dd8aae2af2a6b4568daad8bc8ab32`.
You can use
```
git clone https://github.com/k2-fsa/icefall
cd icefall
git checkout fb6a57e9e01dd8aae2af2a6b4568daad8bc8ab32
```
to download `icefall`.
You can find the model information by visiting <https://github.com/k2-fsa/icefall/blob/fb6a57e9e01dd8aae2af2a6b4568daad8bc8ab32/egs/librispeech/ASR/transducer_stateless/train.py#L195>.
In short, the encoder is a Conformer model with 8 heads, 12 encoder layers, 512-dim attention, 2048-dim feedforward;
the decoder contains a 1024-dim embedding layer and a Conv1d with kernel size 2.
The decoder architecture is modified from
[Rnn-Transducer with Stateless Prediction Network](https://ieeexplore.ieee.org/document/9054419).
A Conv1d layer is placed right after the input embedding layer.
-----
## Description
This repo provides a pre-trained transducer Conformer model for the LibriSpeech dataset
using [icefall][icefall]. There are no RNNs in the decoder. The decoder is stateless
and contains only an embedding layer and a Conv1d.
The commands for training are:
```
cd egs/librispeech/ASR/
./prepare.sh
export CUDA_VISIBLE_DEVICES="0,1,2,3"
./transducer_stateless/train.py \
--world-size 4 \
--num-epochs 30 \
--start-epoch 0 \
--exp-dir transducer_stateless/exp-full \
--full-libri 1 \
--max-duration 250 \
--lr-factor 3
```
The tensorboard training log can be found at
<https://tensorboard.dev/experiment/PsJ3LgkEQfOmzedAlYfVeg/#scalars&_smoothingWeight=0>
The command for decoding is:
```
epoch=20
avg=10
## greedy search
./transducer_stateless/decode.py \
--epoch $epoch \
--avg $avg \
--exp-dir transducer_stateless/exp-full \
--bpe-model ./data/lang_bpe_500/bpe.model \
--max-duration 100
## beam search
./transducer_stateless/decode.py \
--epoch $epoch \
--avg $avg \
--exp-dir transducer_stateless/exp-full \
--bpe-model ./data/lang_bpe_500/bpe.model \
--max-duration 100 \
--decoding-method beam_search \
--beam-size 4
```
You can find the decoding log for the above command in this
repo (in the folder `log`).
The WERs for the test datasets are
| | test-clean | test-other | comment |
|---------------------------|------------|------------|------------------------------------------|
| greedy search | 2.99 | 7.52 | --epoch 20, --avg 10, --max-duration 100 |
| beam search (beam size 2) | 2.95 | 7.43 | |
| beam search (beam size 3) | 2.94 | 7.37 | |
| beam search (beam size 4) | 2.92 | 7.37 | |
| beam search (beam size 5) | 2.93 | 7.38 | |
| beam search (beam size 8) | 2.92 | 7.38 | |
# File description
- [log][log], this directory contains the decoding log and decoding results
- [test_wavs][test_wavs], this directory contains wave files for testing the pre-trained model
- [data][data], this directory contains files generated by [prepare.sh][prepare]
- [exp][exp], this directory contains only one file: `pretrained.pt`
`exp/pretrained.pt` is generated by the following command:
```
./transducer_stateless/export.py \
--epoch 20 \
--avg 10 \
--bpe-model data/lang_bpe_500/bpe.model \
--exp-dir transducer_stateless/exp-full
```
**HINT**: To use `pretrained.pt` to compute the WER for test-clean and test-other,
just do the following:
```
cp icefall-asr-librispeech-transducer-stateless-bpe-500-2021-12-22/exp/pretrained.pt \
/path/to/icefall/egs/librispeech/ASR/transducer_stateless/exp/epoch-999.pt
```
and pass `--epoch 999 --avg 1` to `transducer_stateless/decode.py`.
[icefall]: https://github.com/k2-fsa/icefall
[prepare]: https://github.com/k2-fsa/icefall/blob/master/egs/librispeech/ASR/prepare.sh
[exp]: https://huggingface.co/csukuangfj/icefall-asr-librispeech-transducer-stateless-bpe-500-2021-12-22/tree/main/exp
[data]: https://huggingface.co/csukuangfj/icefall-asr-librispeech-transducer-stateless-bpe-500-2021-12-22/tree/main/data
[test_wavs]: https://huggingface.co/csukuangfj/icefall-asr-librispeech-transducer-stateless-bpe-500-2021-12-22/tree/main/test_wavs
[log]: https://huggingface.co/csukuangfj/icefall-asr-librispeech-transducer-stateless-bpe-500-2021-12-22/tree/main/log
|
{}
|
csukuangfj/icefall-asr-librispeech-transducer-stateless-bpe-500-2021-12-22
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null | null |
# Introduction
## How to clone this repo
```
sudo apt-get install git-lfs
git clone https://huggingface.co/csukuangfj/icefall-asr-librispeech-transducer-stateless-bpe-500-2021-12-27
cd icefall-asr-librispeech-transducer-stateless-bpe-500-2021-12-27
git lfs pull
```
**Caution**: You have to run `git lfs pull`. Otherwise, you will be SAD later.
The model in this repo is trained using the commit `14c93add507982306f5a478cd144e0e32e0f970d`.
You can use
```
git clone https://github.com/k2-fsa/icefall
cd icefall
git checkout 14c93add507982306f5a478cd144e0e32e0f970d
```
to download `icefall`.
You can find the model information by visiting <https://github.com/k2-fsa/icefall/blob/14c93add507982306f5a478cd144e0e32e0f970d/egs/librispeech/ASR/transducer_stateless/train.py#L198>.
In short, the encoder is a Conformer model with 8 heads, 12 encoder layers, 512-dim attention, 2048-dim feedforward;
the decoder contains a 1024-dim embedding layer and a Conv1d with kernel size 2.
The decoder architecture is modified from
[Rnn-Transducer with Stateless Prediction Network](https://ieeexplore.ieee.org/document/9054419).
A Conv1d layer is placed right after the input embedding layer.
-----
## Description
This repo provides a pre-trained transducer Conformer model for the LibriSpeech dataset
using [icefall][icefall]. There are no RNNs in the decoder. The decoder is stateless
and contains only an embedding layer and a Conv1d.
The commands for training are:
```
cd egs/librispeech/ASR/
./prepare.sh
export CUDA_VISIBLE_DEVICES="0,1,2,3"
./transducer_stateless/train.py \
--world-size 4 \
--num-epochs 30 \
--start-epoch 0 \
--exp-dir transducer_stateless/exp-full \
--full-libri 1 \
--max-duration 250 \
--lr-factor 3
```
The tensorboard training log can be found at
<https://tensorboard.dev/experiment/Mjx7MeTgR3Oyr1yBCwjozw/>
The command for decoding is:
```
epoch=29
avg=13
## greedy search
./transducer_stateless/decode.py \
--epoch $epoch \
--avg $avg \
--exp-dir transducer_stateless/exp-full \
--bpe-model ./data/lang_bpe_500/bpe.model \
--max-duration 100
## beam search
./transducer_stateless/decode.py \
--epoch $epoch \
--avg $avg \
--exp-dir transducer_stateless/exp-full \
--bpe-model ./data/lang_bpe_500/bpe.model \
--max-duration 100 \
--decoding-method beam_search \
--beam-size 4
```
You can find the decoding log for the above command in this
repo (in the folder `log`).
The WERs for the test datasets are
| | test-clean | test-other | comment |
|---------------------------|------------|------------|------------------------------------------|
| greedy search | 2.85 | 7.30 | --epoch 29, --avg 13, --max-duration 100 |
| beam search (beam size 4) | 2.83 | 7.19 | |
# File description
- [log][log], this directory contains the decoding log and decoding results
- [test_wavs][test_wavs], this directory contains wave files for testing the pre-trained model
- [data][data], this directory contains files generated by [prepare.sh][prepare]
- [exp][exp], this directory contains only one file: `pretrained.pt`
`exp/pretrained.pt` is generated by the following command:
```
./transducer_stateless/export.py \
--epoch 29 \
--avg 13 \
--bpe-model data/lang_bpe_500/bpe.model \
--exp-dir transducer_stateless/exp-full
```
**HINT**: To use `pretrained.pt` to compute the WER for test-clean and test-other,
just do the following:
```
cp icefall-asr-librispeech-transducer-stateless-bpe-500-2021-12-27/exp/pretrained.pt \
/path/to/icefall/egs/librispeech/ASR/transducer_stateless/exp/epoch-999.pt
```
and pass `--epoch 999 --avg 1` to `transducer_stateless/decode.py`.
[icefall]: https://github.com/k2-fsa/icefall
[prepare]: https://github.com/k2-fsa/icefall/blob/master/egs/librispeech/ASR/prepare.sh
[exp]: https://huggingface.co/csukuangfj/icefall-asr-librispeech-transducer-stateless-bpe-500-2021-12-27/tree/main/exp
[data]: https://huggingface.co/csukuangfj/icefall-asr-librispeech-transducer-stateless-bpe-500-2021-12-27/tree/main/data
[test_wavs]: https://huggingface.co/csukuangfj/icefall-asr-librispeech-transducer-stateless-bpe-500-2021-12-27/tree/main/test_wavs
[log]: https://huggingface.co/csukuangfj/icefall-asr-librispeech-transducer-stateless-bpe-500-2021-12-27/tree/main/log
|
{}
|
csukuangfj/icefall-asr-librispeech-transducer-stateless-bpe-500-2021-12-27
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null | null |
# Introduction
## How to clone this repo
```
sudo apt-get install git-lfs
git clone https://huggingface.co/csukuangfj/icefall-asr-librispeech-transducer-stateless-bpe-500-2022-01-10
cd icefall-asr-librispeech-transducer-stateless-bpe-500-2022-01-10
git lfs pull
```
**Caution**: You have to run `git lfs pull`. Otherwise, you will be SAD later.
The model in this repo is trained using the commit `4c1b3665ee6efb935f4dd93a80ff0e154b13efb6`.
You can use
```
git clone https://github.com/k2-fsa/icefall
cd icefall
git checkout 4c1b3665ee6efb935f4dd93a80ff0e154b13efb6
```
to download `icefall`.
You can find the model information by visiting <https://github.com/k2-fsa/icefall/blob/273e5fb2f3ac2620bafdffe2689b8b3ee10173d3/egs/librispeech/ASR/transducer_stateless/train.py#L198>.
In short, the encoder is a Conformer model with 8 heads, 12 encoder layers, 512-dim attention, 2048-dim feedforward;
the decoder contains a 1024-dim embedding layer and a Conv1d with kernel size 2.
The decoder architecture is modified from
[Rnn-Transducer with Stateless Prediction Network](https://ieeexplore.ieee.org/document/9054419).
A Conv1d layer is placed right after the input embedding layer.
-----
## Description
This repo provides a pre-trained transducer Conformer model for the LibriSpeech dataset
using [icefall][icefall]. There are no RNNs in the decoder. The decoder is stateless
and contains only an embedding layer and a Conv1d.
The commands for training are:
```
cd egs/librispeech/ASR/
./prepare.sh
export CUDA_VISIBLE_DEVICES="0,1,2,3"
./transducer_stateless/train.py \
--world-size 4 \
--num-epochs 76 \
--start-epoch 0 \
--exp-dir transducer_stateless/exp-full \
--full-libri 1 \
--max-duration 250 \
--lr-factor 3
```
The tensorboard training log can be found at
<https://tensorboard.dev/experiment/qGdqzHnxS0WJ695OXfZDzA/>
The command for decoding is:
```
epoch=71
avg=15
## greedy search
./transducer_stateless/decode.py \
--epoch $epoch \
--avg $avg \
--exp-dir transducer_stateless/exp-full \
--bpe-model ./data/lang_bpe_500/bpe.model \
--max-duration 100
## beam search
./transducer_stateless/decode.py \
--epoch $epoch \
--avg $avg \
--exp-dir transducer_stateless/exp-full \
--bpe-model ./data/lang_bpe_500/bpe.model \
--max-duration 100 \
--decoding-method beam_search \
--beam-size 4
```
You can find the decoding log for the above command in this
repo (in the folder `log`).
The WERs for the test datasets are
| | test-clean | test-other | comment |
|---------------------------|------------|------------|------------------------------------------|
| greedy search | 2.69 | 6.81 | --epoch 71, --avg 15, --max-duration 100 |
| beam search (beam size 4) | 2.68 | 6.72 | --epoch 71, --avg 15, --max-duration 100 |
# File description
- [log][log], this directory contains the decoding log and decoding results
- [test_wavs][test_wavs], this directory contains wave files for testing the pre-trained model
- [data][data], this directory contains files generated by [prepare.sh][prepare]
- [exp][exp], this directory contains only one file: `pretrained.pt`
`exp/pretrained.pt` is generated by the following command:
```
./transducer_stateless/export.py \
--epoch 71 \
--avg 15 \
--bpe-model data/lang_bpe_500/bpe.model \
--exp-dir transducer_stateless/exp-full
```
**HINT**: To use `pretrained.pt` to compute the WER for test-clean and test-other,
just do the following:
```
cp icefall-asr-librispeech-transducer-stateless-bpe-500-2022-01-10/exp/pretrained.pt \
/path/to/icefall/egs/librispeech/ASR/transducer_stateless/exp/epoch-999.pt
```
and pass `--epoch 999 --avg 1` to `transducer_stateless/decode.py`.
[icefall]: https://github.com/k2-fsa/icefall
[prepare]: https://github.com/k2-fsa/icefall/blob/master/egs/librispeech/ASR/prepare.sh
[exp]: https://huggingface.co/csukuangfj/icefall-asr-librispeech-transducer-stateless-bpe-500-2022-01-10/tree/main/exp
[data]: https://huggingface.co/csukuangfj/icefall-asr-librispeech-transducer-stateless-bpe-500-2022-01-10/tree/main/data
[test_wavs]: https://huggingface.co/csukuangfj/icefall-asr-librispeech-transducer-stateless-bpe-500-2022-01-10/tree/main/test_wavs
[log]: https://huggingface.co/csukuangfj/icefall-asr-librispeech-transducer-stateless-bpe-500-2022-01-10/tree/main/log
|
{}
|
csukuangfj/icefall-asr-librispeech-transducer-stateless-bpe-500-2022-01-10
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null | null |
# Introduction
## How to clone this repo
```
sudo apt-get install git-lfs
git clone https://huggingface.co/csukuangfj/icefall-asr-librispeech-transducer-stateless-bpe-500-2022-02-07
cd icefall-asr-librispeech-transducer-stateless-bpe-500-2022-02-07
git lfs pull
```
**Caution**: You have to run `git lfs pull`. Otherwise, you will be SAD later.
The model in this repo is trained using the commit `a8150021e01d34ecbd6198fe03a57eacf47a16f2`.
You can use
```
git clone https://github.com/k2-fsa/icefall
cd icefall
git checkout a8150021e01d34ecbd6198fe03a57eacf47a16f2
```
to download `icefall`.
You can find the model information by visiting <https://github.com/k2-fsa/icefall/blob/a8150021e01d34ecbd6198fe03a57eacf47a16f2/egs/librispeech/ASR/transducer_stateless/train.py#L198>.
In short, the encoder is a Conformer model with 8 heads, 12 encoder layers, 512-dim attention, 2048-dim feedforward;
the decoder contains a 1024-dim embedding layer and a Conv1d with kernel size 2.
The decoder architecture is modified from
[Rnn-Transducer with Stateless Prediction Network](https://ieeexplore.ieee.org/document/9054419).
A Conv1d layer is placed right after the input embedding layer.
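A minimal PyTorch sketch of this stateless decoder idea is shown below; the dimensions follow the description above (1024-dim embedding, Conv1d with kernel size 2), but this is an illustration only, not the icefall implementation:
```python
import torch
import torch.nn as nn

class StatelessDecoder(nn.Module):
    """Illustrative stateless transducer decoder: embedding + Conv1d, no RNN state."""
    def __init__(self, vocab_size: int, embed_dim: int = 1024, context_size: int = 2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        # A Conv1d over the last `context_size` tokens replaces the recurrent prediction network.
        self.conv = nn.Conv1d(embed_dim, embed_dim, kernel_size=context_size)

    def forward(self, y: torch.Tensor) -> torch.Tensor:
        # y: (batch, num_tokens) of previously emitted token ids
        emb = self.embedding(y).permute(0, 2, 1)                          # (batch, embed_dim, num_tokens)
        emb = nn.functional.pad(emb, (self.conv.kernel_size[0] - 1, 0))   # causal left padding
        return self.conv(emb).permute(0, 2, 1)                            # (batch, num_tokens, embed_dim)

decoder = StatelessDecoder(vocab_size=500)
print(decoder(torch.randint(0, 500, (8, 10))).shape)  # torch.Size([8, 10, 1024])
```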
-----
## Description
This repo provides a pre-trained transducer Conformer model for the LibriSpeech dataset
using [icefall][icefall]. There are no RNNs in the decoder. The decoder is stateless
and contains only an embedding layer and a Conv1d.
The commands for training are:
```
cd egs/librispeech/ASR/
./prepare.sh
export CUDA_VISIBLE_DEVICES="0,1,2,3"
./transducer_stateless/train.py \
--world-size 4 \
--num-epochs 76 \
--start-epoch 0 \
--exp-dir transducer_stateless/exp-full \
--full-libri 1 \
--max-duration 300 \
--lr-factor 5 \
--bpe-model data/lang_bpe_500/bpe.model \
--modified-transducer-prob 0.25
```
The tensorboard training log can be found at
<https://tensorboard.dev/experiment/qgvWkbF2R46FYA6ZMNmOjA/>
The command for decoding is:
```
epoch=63
avg=19
## greedy search
for sym in 1 2 3; do
./transducer_stateless/decode.py \
--epoch $epoch \
--avg $avg \
--exp-dir transducer_stateless/exp-full \
--bpe-model ./data/lang_bpe_500/bpe.model \
--max-duration 100 \
--max-sym-per-frame $sym
done
## modified beam search
./transducer_stateless/decode.py \
--epoch $epoch \
--avg $avg \
--exp-dir transducer_stateless/exp-full \
--bpe-model ./data/lang_bpe_500/bpe.model \
--max-duration 100 \
--context-size 2 \
--decoding-method modified_beam_search \
--beam-size 4
```
You can find the decoding log for the above command in this
repo (in the folder `log`).
The WERs for the test datasets are
| | test-clean | test-other | comment |
|-------------------------------------|------------|------------|------------------------------------------|
| greedy search (max sym per frame 1) | 2.67 | 6.67 | --epoch 63, --avg 19, --max-duration 100 |
| greedy search (max sym per frame 2) | 2.67 | 6.67 | --epoch 63, --avg 19, --max-duration 100 |
| greedy search (max sym per frame 3) | 2.67 | 6.67 | --epoch 63, --avg 19, --max-duration 100 |
| modified beam search (beam size 4) | 2.67 | 6.57 | --epoch 63, --avg 19, --max-duration 100 |
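For readers unfamiliar with `--max-sym-per-frame`: greedy search emits at most that many non-blank symbols per acoustic frame before advancing. The following is a toy, self-contained sketch of that loop with a random stand-in for the joiner; it is purely illustrative and not icefall's `decode.py`:
```python
import torch

VOCAB_SIZE, BLANK_ID = 500, 0

def joiner(enc_frame: torch.Tensor, context: list) -> torch.Tensor:
    """Stand-in joiner: returns random log-probs over the vocabulary."""
    return torch.log_softmax(torch.randn(VOCAB_SIZE), dim=-1)

def greedy_search(encoder_out: torch.Tensor, max_sym_per_frame: int = 1) -> list:
    hyp = [BLANK_ID, BLANK_ID]              # context for a stateless decoder with context size 2
    for enc_frame in encoder_out:           # one frame of encoder output at a time
        for _ in range(max_sym_per_frame):  # emit at most this many symbols per frame
            log_probs = joiner(enc_frame, hyp[-2:])
            token = int(log_probs.argmax())
            if token == BLANK_ID:           # blank means "advance to the next frame"
                break
            hyp.append(token)
    return hyp[2:]

print(len(greedy_search(torch.randn(50, 512), max_sym_per_frame=3)))
```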
# File description
- [log][log], this directory contains the decoding log and decoding results
- [test_wavs][test_wavs], this directory contains wave files for testing the pre-trained model
- [data][data], this directory contains files generated by [prepare.sh][prepare]
- [exp][exp], this directory contains only one file: `pretrained.pt`
`exp/pretrained.pt` is generated by the following command:
```
./transducer_stateless/export.py \
--epoch 63 \
--avg 19 \
--bpe-model data/lang_bpe_500/bpe.model \
--exp-dir transducer_stateless/exp-full
```
**HINT**: To use `pretrained.pt` to compute the WER for test-clean and test-other,
just do the following:
```
cp icefall-asr-librispeech-transducer-stateless-bpe-500-2022-02-07/exp/pretrained.pt \
/path/to/icefall/egs/librispeech/ASR/transducer_stateless/exp/epoch-999.pt
```
and pass `--epoch 999 --avg 1` to `transducer_stateless/decode.py`.
[icefall]: https://github.com/k2-fsa/icefall
[prepare]: https://github.com/k2-fsa/icefall/blob/master/egs/librispeech/ASR/prepare.sh
[exp]: https://huggingface.co/csukuangfj/icefall-asr-librispeech-transducer-stateless-bpe-500-2022-02-07/tree/main/exp
[data]: https://huggingface.co/csukuangfj/icefall-asr-librispeech-transducer-stateless-bpe-500-2022-02-07/tree/main/data
[test_wavs]: https://huggingface.co/csukuangfj/icefall-asr-librispeech-transducer-stateless-bpe-500-2022-02-07/tree/main/test_wavs
[log]: https://huggingface.co/csukuangfj/icefall-asr-librispeech-transducer-stateless-bpe-500-2022-02-07/tree/main/log
[icefall]: https://github.com/k2-fsa/icefall
|
{}
|
csukuangfj/icefall-asr-librispeech-transducer-stateless-bpe-500-2022-02-07
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null |
k2
|
# Introduction
This repo contains a pre-trained model using
<https://github.com/k2-fsa/icefall/pull/213>.
It is trained on the full LibriSpeech dataset.
Also, it uses the `L` subset from [GigaSpeech](https://github.com/SpeechColab/GigaSpeech)
as extra training data.
## How to clone this repo
```
sudo apt-get install git-lfs
git clone https://huggingface.co/csukuangfj/icefall-asr-librispeech-transducer-stateless-multi-datasets-bpe-500-2022-03-01
cd icefall-asr-librispeech-transducer-stateless-multi-datasets-bpe-500-2022-03-01
git lfs pull
```
**Caution**: You have to run `git lfs pull`. Otherwise, you will be SAD later.
The model in this repo is trained using the commit `2332ba312d7ce72f08c7bac1e3312f7e3dd722dc`.
You can use
```
git clone https://github.com/k2-fsa/icefall
cd icefall
git checkout 2332ba312d7ce72f08c7bac1e3312f7e3dd722dc
```
to download `icefall`.
You can find the model information by visiting
<https://github.com/k2-fsa/icefall/blob/2332ba312d7ce72f08c7bac1e3312f7e3dd722dc/egs/librispeech/ASR/transducer_stateless_multi_datasets/train.py#L218>
In short, the encoder is a Conformer model with 8 heads, 12 encoder layers, 512-dim attention, 2048-dim feedforward;
the decoder contains a 1024-dim embedding layer and a Conv1d with kernel size 2.
The decoder architecture is modified from
[Rnn-Transducer with Stateless Prediction Network](https://ieeexplore.ieee.org/document/9054419).
A Conv1d layer is placed right after the input embedding layer.
-----
## Description
This repo provides a pre-trained transducer Conformer model for the LibriSpeech dataset
using [icefall][icefall]. There are no RNNs in the decoder. The decoder is stateless
and contains only an embedding layer and a Conv1d.
The commands for training are:
```
cd egs/librispeech/ASR/
./prepare.sh
./prepare_giga_speech.sh
export CUDA_VISIBLE_DEVICES="0,1,2,3"
./transducer_stateless_multi_datasets/train.py \
--world-size 4 \
--num-epochs 40 \
--start-epoch 0 \
--exp-dir transducer_stateless_multi_datasets/exp-full-2 \
--full-libri 1 \
--max-duration 300 \
--lr-factor 5 \
--bpe-model data/lang_bpe_500/bpe.model \
--modified-transducer-prob 0.25 \
--giga-prob 0.2
```
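The `--giga-prob 0.2` option controls how often a training batch is drawn from GigaSpeech instead of LibriSpeech. A minimal illustration of such probabilistic mixing is sketched below; the function name and the toy lists standing in for the two samplers are hypothetical, and this is not icefall's actual sampler:
```python
import random

def mixed_batches(libri_batches, giga_batches, giga_prob=0.2, seed=0):
    """Yield batches, picking a GigaSpeech batch with probability `giga_prob`."""
    rng = random.Random(seed)
    libri, giga = iter(libri_batches), iter(giga_batches)
    while True:
        source = giga if rng.random() < giga_prob else libri
        try:
            yield next(source)
        except StopIteration:  # stop as soon as either source is exhausted
            return

# Toy usage with lists standing in for the two dataset samplers.
for batch in mixed_batches(["libri"] * 8, ["giga"] * 2):
    print(batch)
```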
The tensorboard training log can be found at
<https://tensorboard.dev/experiment/xmo5oCgrRVelH9dCeOkYBg/>
The command for decoding is:
```bash
epoch=39
avg=15
sym=1
# greedy search
./transducer_stateless_multi_datasets/decode.py \
--epoch $epoch \
--avg $avg \
--exp-dir transducer_stateless_multi_datasets/exp-full-2 \
--bpe-model ./data/lang_bpe_500/bpe.model \
--max-duration 100 \
--context-size 2 \
--max-sym-per-frame $sym
# modified beam search
./transducer_stateless_multi_datasets/decode.py \
--epoch $epoch \
--avg $avg \
--exp-dir transducer_stateless_multi_datasets/exp-full-2 \
--bpe-model ./data/lang_bpe_500/bpe.model \
--max-duration 100 \
--context-size 2 \
--decoding-method modified_beam_search \
--beam-size 4
```
You can find the decoding log for the above command in this
repo (in the folder `log`).
The WERs for the test datasets are
| | test-clean | test-other | comment |
|-------------------------------------|------------|------------|------------------------------------------|
| greedy search (max sym per frame 1) | 2.64 | 6.55 | --epoch 39, --avg 15, --max-duration 100 |
| modified beam search (beam size 4) | 2.61 | 6.46 | --epoch 39, --avg 15, --max-duration 100 |
# File description
- [log][log], this directory contains the decoding log and decoding results
- [test_wavs][test_wavs], this directory contains wave files for testing the pre-trained model
- [data][data], this directory contains files generated by [prepare.sh][prepare]
- [exp][exp], this directory contains only one file: `pretrained.pt`
`exp/pretrained.pt` is generated by the following command:
```bash
./transducer_stateless_multi_datasets/export.py \
--epoch 39 \
--avg 15 \
--bpe-model data/lang_bpe_500/bpe.model \
--exp-dir transducer_stateless_multi_datasets/exp-full-2
```
**HINT**: To use `pretrained.pt` to compute the WER for test-clean and test-other,
just do the following:
```
cp icefall-asr-librispeech-transducer-stateless-multi-datasets-bpe-500-2022-03-01/exp/pretrained.pt \
/path/to/icefall/egs/librispeech/ASR/transducer_stateless_multi_datasets/exp/epoch-999.pt
```
and pass `--epoch 999 --avg 1` to `transducer_stateless_multi_datasets/decode.py`.
[icefall]: https://github.com/k2-fsa/icefall
[prepare]: https://github.com/k2-fsa/icefall/blob/master/egs/librispeech/ASR/prepare.sh
[exp]: https://huggingface.co/csukuangfj/icefall-asr-librispeech-transducer-stateless-multi-datasets-bpe-500-2022-03-01/tree/main/exp
[data]: https://huggingface.co/csukuangfj/icefall-asr-librispeech-transducer-stateless-multi-datasets-bpe-500-2022-03-01/tree/main/data
[test_wavs]: https://huggingface.co/csukuangfj/icefall-asr-librispeech-transducer-stateless-multi-datasets-bpe-500-2022-03-01/tree/main/test_wavs
[log]: https://huggingface.co/csukuangfj/icefall-asr-librispeech-transducer-stateless-multi-datasets-bpe-500-2022-03-01/tree/main/log
[icefall]: https://github.com/k2-fsa/icefall
|
{"language": "en", "license": "apache-2.0", "tags": ["icefall", "k2", "transducer", "librispeech", "ASR", "stateless transducer", "PyTorch", "RNN-T", "speech recognition"], "datasets": ["librispeech"], "metrics": ["WER"]}
|
csukuangfj/icefall-asr-librispeech-transducer-stateless-multi-datasets-bpe-500-2022-03-01
| null |
[
"k2",
"icefall",
"transducer",
"librispeech",
"ASR",
"stateless transducer",
"PyTorch",
"RNN-T",
"speech recognition",
"en",
"dataset:librispeech",
"license:apache-2.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null | null |
## Pre-trained TDNN models for the yesno dataset with icefall.
Refer to <https://github.com/k2-fsa/icefall/tree/master/egs/yesno/ASR>
for more information about this pre-trained model.
You can find usage instructions there.
## Sound files for testing the pre-trained model
The folder `test_waves` contains test sound files. They
are downloaded from <https://www.openslr.org/1/>.
There are 60 files in the dataset; 30 are used for training.
The remaining 30 files, contained in `test_waves`, are kept for testing.
The code for splitting the dataset can be found at
<https://github.com/lhotse-speech/lhotse/blob/master/lhotse/recipes/yesno.py#L138>
```python
wave_files = list(corpus_dir.glob("*.wav"))
assert len(wave_files) == 60
wave_files.sort()
train_set = wave_files[::2]
test_set = wave_files[1::2]
assert len(train_set) == 30
assert len(test_set) == 30
```
|
{}
|
csukuangfj/icefall_asr_yesno_tdnn
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null | null |
See
https://colab.research.google.com/drive/14MozS-9jWD3XQ0o-dZ-meqnblgHs70P2?usp=sharing
|
{}
|
csukuangfj/test-data-for-optimized-transducer
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null | null |
{}
|
csukuangfj/test_hugging_face
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
# Introduction
This repo contains the benchmark results for <https://github.com/csukuangfj/transducer-loss-benchmarking>
## Usage
First, install `git-lfs`.
Second, use the following command to clone this repo:
```bash
git lfs install
git clone https://huggingface.co/csukuangfj/transducer-loss-benchmarking
```
**Caution**: You have to run `git lfs install` first. Otherwise, you will be **SAD** later.
Third, install the PyTorch TensorBoard profiler plugin and launch TensorBoard:
```
pip install torch-tb-profiler
cd transducer-loss-benchmarking
tensorboard --logdir ./log/torchaudio-30 --port 6006
tensorboard --logdir ./log/optimized_transducer-30 --port 6007
```
Fourth, open your browser and go to
- <http://localhost:6006/#pytorch_profiler>
- <http://localhost:6007/#pytorch_profiler>
You will see the following images:


|
{}
|
csukuangfj/transducer-loss-benchmarking
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-Large-XLSR-53-Cantonese
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Cantonese using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "zh-HK", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("ctl/wav2vec2-large-xlsr-cantonese")
model = Wav2Vec2ForCTC.from_pretrained("ctl/wav2vec2-large-xlsr-cantonese")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Chinese (Hong Kong) test data of Common Voice.
```python
!pip install jiwer
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
import argparse
lang_id = "zh-HK"
model_id = "ctl/wav2vec2-large-xlsr-cantonese"
chars_to_ignore_regex = '[\,\?\.\!\-\;\:"\“\%\‘\”\�\.\⋯\!\-\:\–\。\》\,\)\,\?\;\~\~\…\︰\,\(\」\‧\《\﹔\、\—\/\,\「\﹖\·\']'
test_dataset = load_dataset("common_voice", f"{lang_id}", split="test")
cer = load_metric("cer")
processor = Wav2Vec2Processor.from_pretrained(f"{model_id}")
model = Wav2Vec2ForCTC.from_pretrained(f"{model_id}")
model.to("cuda")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=16)
print("CER: {:2f}".format(100 * cer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 15.51 %
## Training
The Common Voice `train` and `validation` splits were used for training.
The script used for training will be posted [here](https://github.com/chutaklee/CantoASR).
|
{"language": ["yue"], "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "metrics": ["cer"], "language_bcp47": ["zh-HK"], "model-index": [{"name": "wav2vec2-large-xlsr-cantonese", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice zh-HK", "type": "common_voice", "args": "zh-HK"}, "metrics": [{"type": "cer", "value": 15.36, "name": "Test CER"}]}]}]}
|
ctl/wav2vec2-large-xlsr-cantonese
| null |
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"yue",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null | null |
{}
|
ctrlshiftw/test001
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
cumtowndiscord/DialoGPT-cumtown2
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text-generation
|
transformers
|
# My Awesome Model
|
{"tags": ["conversational"]}
|
cumtowndiscord/DialoGPT-small-joshua
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null | null |
{}
|
cumtowndiscord/cumtowndiscord
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
token-classification
|
transformers
|
Fine-tuning the LayoutLMv2 model on a Vietnamese bill dataset.
```python
from transformers import LayoutLMv2ForTokenClassification

# Label set used for fine-tuning on the bill dataset.
labels = ['price',
          'storename',
          'total_cost',
          'phone',
          'address',
          'unitprice',
          'item',
          'subitem',
          'other',
          'time',
          'unit',
          'total refunds',
          'total_qty',
          'seller',
          'total_received']

model = LayoutLMv2ForTokenClassification.from_pretrained('cuongngm/layoutlm-bill', num_labels=len(labels))
```
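For inference, the fine-tuned weights are typically paired with a `LayoutLMv2Processor`. The sketch below is a hedged example: the base processor checkpoint (`microsoft/layoutlmv2-base-uncased`) and the image path are assumptions not stated in this repo, and the default processor needs `pytesseract` and `detectron2` installed for OCR:
```python
import torch
from PIL import Image
from transformers import LayoutLMv2Processor, LayoutLMv2ForTokenClassification

# The base processor checkpoint is an assumption; the repo does not state which one was used.
processor = LayoutLMv2Processor.from_pretrained("microsoft/layoutlmv2-base-uncased")
model = LayoutLMv2ForTokenClassification.from_pretrained("cuongngm/layoutlm-bill", num_labels=len(labels))

image = Image.open("bill.jpg").convert("RGB")   # hypothetical input image
encoding = processor(image, return_tensors="pt")
with torch.no_grad():
    logits = model(**encoding).logits            # (1, seq_len, num_labels)
predictions = logits.argmax(-1).squeeze().tolist()
# `labels` is the list defined in the snippet above.
print([labels[p] for p in predictions][:20])
```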
|
{}
|
cuongngm/layoutlm-bill
| null |
[
"transformers",
"pytorch",
"layoutlmv2",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text2text-generation
|
transformers
|
{}
|
cuongtran/BARTTextSummarization
| null |
[
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text2text-generation
|
transformers
|
{}
|
cuongtran/RobertaTextSummarization
| null |
[
"transformers",
"pytorch",
"encoder-decoder",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text-generation
|
transformers
|
# Harry Potter DialoGPT Model
|
{"tags": ["conversational"]}
|
cutiebunny639/DialoGPT-small-harry
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-classification
|
transformers
|
**Disclaimer**: *This model is still under testing and may change in the future, we will try to keep backwards compatibility. For any questions reach us at [email protected]*
# MediaWatch News Topics (Greek)
Fine-tuned model for multi-label text classification (SequenceClassification), based on [roberta-el-news](https://huggingface.co/cvcio/roberta-el-news), using [Hugging Face's](https://huggingface.co/) [Transformers](https://github.com/huggingface/transformers) library. This model classifies news in real time into up to 33 topics, including: *AFFAIRS*, *AGRICULTURE*, *ARTS_AND_CULTURE*, *BREAKING_NEWS*, *BUSINESS*, *COVID*, *ECONOMY*, *EDUCATION*, *ELECTIONS*, *ENTERTAINMENT*, *ENVIRONMENT*, *FOOD*, *HEALTH*, *INTERNATIONAL*, *LAW_AND_ORDER*, *MILITARY*, *NON_PAPER*, *OPINION*, *POLITICS*, *REFUGEE*, *REGIONAL*, *RELIGION*, *SCIENCE*, *SOCIAL_MEDIA*, *SOCIETY*, *SPORTS*, *TECH*, *TOURISM*, *TRANSPORT*, *TRAVEL*, *WEATHER*, *CRIME*, *JUSTICE*.
## How to use
You can use this model directly with a pipeline for text-classification:
```python
from transformers import pipeline
pipe = pipeline(
task="text-classification",
model="cvcio/mediawatch-el-topics",
tokenizer="cvcio/roberta-el-news" # or cvcio/mediawatch-el-topics
)
topics = pipe(
"Η βιασύνη αρκετών χωρών να άρουν τους περιορισμούς κατά του κορονοϊού, "+
"αν όχι να κηρύξουν το τέλος της πανδημίας, με το σκεπτικό ότι έφτασε "+
"πλέον η ώρα να συμβιώσουμε με την Covid-19, έχει κάνει μερικούς πιο "+
"επιφυλακτικούς επιστήμονες να προειδοποιούν ότι πρόκειται μάλλον "+
"για «ενδημική αυταπάτη» και ότι είναι πρόωρη τέτοια υπερβολική "+
"χαλάρωση. Καθώς τα κρούσματα της Covid-19, μετά το αιφνιδιαστικό "+
"μαζικό κύμα της παραλλαγής Όμικρον, εμφανίζουν τάση υποχώρησης σε "+
"Ευρώπη και Βόρεια Αμερική, όπου περισσεύει η κόπωση μεταξύ των "+
"πολιτών μετά από δύο χρόνια πανδημίας, ειδικοί και μη αδημονούν να "+
"«ξεμπερδέψουν» με τον κορονοϊό.",
padding=True,
truncation=True,
max_length=512,
return_all_scores=True
)
print(topics)
# outputs
[
[
{'label': 'AFFAIRS', 'score': 0.0018806682201102376},
{'label': 'AGRICULTURE', 'score': 0.00014653144171461463},
{'label': 'ARTS_AND_CULTURE', 'score': 0.0012948638759553432},
{'label': 'BREAKING_NEWS', 'score': 0.0001729220530251041},
{'label': 'BUSINESS', 'score': 0.0028276608791202307},
{'label': 'COVID', 'score': 0.4407998025417328},
{'label': 'ECONOMY', 'score': 0.039826102554798126},
{'label': 'EDUCATION', 'score': 0.0019098613411188126},
{'label': 'ELECTIONS', 'score': 0.0003333651984576136},
{'label': 'ENTERTAINMENT', 'score': 0.004249618388712406},
{'label': 'ENVIRONMENT', 'score': 0.0015828514005988836},
{'label': 'FOOD', 'score': 0.0018390495097264647},
{'label': 'HEALTH', 'score': 0.1204477995634079},
{'label': 'INTERNATIONAL', 'score': 0.25892165303230286},
{'label': 'LAW_AND_ORDER', 'score': 0.07646272331476212},
{'label': 'MILITARY', 'score': 0.00033025629818439484},
{'label': 'NON_PAPER', 'score': 0.011991199105978012},
{'label': 'OPINION', 'score': 0.16166265308856964},
{'label': 'POLITICS', 'score': 0.0008890336030162871},
{'label': 'REFUGEE', 'score': 0.0011504743015393615},
{'label': 'REGIONAL', 'score': 0.0008734092116355896},
{'label': 'RELIGION', 'score': 0.0009001944563351572},
{'label': 'SCIENCE', 'score': 0.05075162276625633},
{'label': 'SOCIAL_MEDIA', 'score': 0.00039615994319319725},
{'label': 'SOCIETY', 'score': 0.0043518817983567715},
{'label': 'SPORTS', 'score': 0.002416545059531927},
{'label': 'TECH', 'score': 0.0007818648009561002},
{'label': 'TOURISM', 'score': 0.011870541609823704},
{'label': 'TRANSPORT', 'score': 0.0009422845905646682},
{'label': 'TRAVEL', 'score': 0.03004464879631996},
{'label': 'WEATHER', 'score': 0.00040286066359840333},
{'label': 'CRIME', 'score': 0.0005416403291746974},
{'label': 'JUSTICE', 'score': 0.000990519649349153}
]
]
```
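Since this is a multi-label classifier, the per-label scores are usually thresholded rather than argmax-ed. For example (0.3 is an arbitrary threshold, not a value recommended by the authors):
```python
# Keep only topics whose score exceeds the threshold.
predicted_topics = [t["label"] for t in topics[0] if t["score"] >= 0.3]
print(predicted_topics)  # ['COVID'] for the article above
```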
## Labels
All labels, except *NON_PAPER*, were retrieved from the source articles during the data collection step, without any preprocessing, assuming that journalists and newsrooms assign correct tags to the articles. We disregarded all articles with more than 6 tags to reduce bias and tag manipulation.
| label | roc_auc | samples |
|-------:|--------:|--------:|
| AFFAIRS | 0.9872 | 6,314 |
| AGRICULTURE | 0.9799 | 1,254 |
| ARTS_AND_CULTURE | 0.9838 | 15,968 |
| BREAKING_NEWS | 0.9675 | 827 |
| BUSINESS | 0.9811 | 6,507 |
| COVID | 0.9620 | 50,000 |
| CRIME | 0.9885 | 34,421 |
| ECONOMY | 0.9765 | 45,474 |
| EDUCATION | 0.9865 | 10,111 |
| ELECTIONS | 0.9940 | 7,571 |
| ENTERTAINMENT | 0.9925 | 23,323 |
| ENVIRONMENT | 0.9847 | 23,060 |
| FOOD | 0.9934 | 3,712 |
| HEALTH | 0.9723 | 16,852 |
| INTERNATIONAL | 0.9624 | 50,000 |
| JUSTICE | 0.9862 | 4,860 |
| LAW_AND_ORDER | 0.9177 | 50,000 |
| MILITARY | 0.9838 | 6,536 |
| NON_PAPER | 0.9595 | 4,589 |
| OPINION | 0.9624 | 6,296 |
| POLITICS | 0.9773 | 50,000 |
| REFUGEE | 0.9949 | 4,536 |
| REGIONAL | 0.9520 | 50,000 |
| RELIGION | 0.9922 | 11,533 |
| SCIENCE | 0.9837 | 1,998 |
| SOCIAL_MEDIA | 0.991 | 6,212 |
| SOCIETY | 0.9439 | 50,000 |
| SPORTS | 0.9939 | 31,396 |
| TECH | 0.9923 | 8,225 |
| TOURISM | 0.9900 | 8,081 |
| TRANSPORT | 0.9879 | 3,211 |
| TRAVEL | 0.9832 | 4,638 |
| WEATHER | 0.9950 | 19,931 |
| loss | 0.0533 | - |
| roc_auc | 0.9855 | - |
## Training
The model was trained using an NVIDIA A10 GPU for 15 epochs (approx. 59K steps, 8 hours of training) with a batch size of 128. The optimizer used is Adam with a learning rate of 1e-5 and weight decay of 0.01. We used micro-averaged ROC AUC (roc_auc_micro) to evaluate the results.
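For reference, micro-averaged ROC AUC can be computed with scikit-learn as below; this is a generic illustration with made-up multi-hot labels, not the authors' evaluation script:
```python
import numpy as np
from sklearn.metrics import roc_auc_score

y_true = np.array([[1, 0, 1], [0, 1, 0]])                 # multi-hot topic labels
y_score = np.array([[0.9, 0.2, 0.7], [0.1, 0.8, 0.3]])    # sigmoid outputs per label
print(roc_auc_score(y_true, y_score, average="micro"))
```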
### Framework versions
- Transformers 4.13.0
- Pytorch 1.9.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
## Authors
Dimitris Papaevagelou - [@andefined](https://github.com/andefined)
## About Us
[Civic Information Office](https://cvcio.org/) is a Non Profit Organization based in Athens, Greece focusing on creating technology and research products for the public interest.
|
{"language": "el", "license": "gpl-3.0", "tags": ["roberta", "Greek", "news", "transformers", "text-classification"], "pipeline_tag": "text-classification", "widget": [{"text": "\u03a0\u03b1\u03c1\u2019 \u03bf\u03bb\u03af\u03b3\u03bf\u03bd \u00ab\u03b8\u03b5\u03c1\u03bc\u03cc\u00bb \u03b5\u03c0\u03b5\u03b9\u03c3\u03cc\u03b4\u03b9\u03bf \u03c4\u03bf\u03c5\u03c1\u03ba\u03b9\u03ba\u03bf\u03cd \u03c0\u03bf\u03bb\u03b5\u03bc\u03b9\u03ba\u03bf\u03cd \u03c0\u03bb\u03bf\u03af\u03bf\u03c5 \u03bc\u03b5 \u03b5\u03bb\u03bb\u03b7\u03bd\u03b9\u03ba\u03cc \u03c9\u03ba\u03b5\u03b1\u03bd\u03bf\u03b3\u03c1\u03b1\u03c6\u03b9\u03ba\u03cc \u03c3\u03c4\u03b7\u03bd \u03c0\u03b5\u03c1\u03b9\u03bf\u03c7\u03ae \u03bc\u03b5\u03c4\u03b1\u03be\u03cd \u03a1\u03cc\u03b4\u03bf\u03c5 \u03ba\u03b1\u03b9 \u039a\u03b1\u03c3\u03c4\u03b5\u03bb\u03cc\u03c1\u03b9\u03b6\u03bf\u03c5, \u03c3\u03c4\u03bf \u03b4\u03b9\u03ac\u03c3\u03c4\u03b7\u03bc\u03b1 20-23 \u03a3\u03b5\u03c0\u03c4\u03b5\u03bc\u03b2\u03c1\u03af\u03bf\u03c5, \u03b1\u03c0\u03bf\u03ba\u03ac\u03bb\u03c5\u03c8\u03b5 \u03c4\u03bf \u039f\u03a1\u0395\u039d. \u03a3\u03cd\u03bc\u03c6\u03c9\u03bd\u03b1 \u03bc\u03b5 \u03c0\u03bb\u03b7\u03c1\u03bf\u03c6\u03bf\u03c1\u03af\u03b5\u03c2 \u03c0\u03bf\u03c5 \u03bc\u03b5\u03c4\u03ad\u03b4\u03c9\u03c3\u03b5 \u03c4\u03bf \u03ba\u03b5\u03bd\u03c4\u03c1\u03b9\u03ba\u03cc \u03b4\u03b5\u03bb\u03c4\u03af\u03bf \u03b5\u03b9\u03b4\u03ae\u03c3\u03b5\u03c9\u03bd, \u03cc\u03c4\u03b1\u03bd \u03c4\u03bf \u03b5\u03bb\u03bb\u03b7\u03bd\u03b9\u03ba\u03cc \u03b5\u03c1\u03b5\u03c5\u03bd\u03b7\u03c4\u03b9\u03ba\u03cc \u00ab \u0391\u0399\u0393\u0391\u0399\u039f \u00bb \u03c0\u03bf\u03c5 \u03b1\u03bd\u03ae\u03ba\u03b5\u03b9 \u03c3\u03c4\u03bf \u0395\u03bb\u03bb\u03b7\u03bd\u03b9\u03ba\u03cc \u039a\u03ad\u03bd\u03c4\u03c1\u03bf \u0398\u03b1\u03bb\u03b1\u03c3\u03c3\u03af\u03c9\u03bd \u0395\u03c1\u03b5\u03c5\u03bd\u03ce\u03bd \u03b2\u03b3\u03ae\u03ba\u03b5 \u03ad\u03be\u03c9 \u03b1\u03c0\u03cc \u03c4\u03b1 6 \u03bd.\u03bc, \u03c3\u03b5 \u03b4\u03b9\u03b5\u03b8\u03bd\u03ae \u03cd\u03b4\u03b1\u03c4\u03b1, \u03c4\u03bf \u03c0\u03c1\u03bf\u03c3\u03ad\u03b3\u03b3\u03b9\u03c3\u03b5 \u03c4\u03bf\u03c5\u03c1\u03ba\u03b9\u03ba\u03cc \u03c0\u03bf\u03bb\u03b5\u03bc\u03b9\u03ba\u03cc \u03c0\u03bb\u03bf\u03af\u03bf, \u03bf \u03ba\u03c5\u03b2\u03b5\u03c1\u03bd\u03ae\u03c4\u03b7\u03c2 \u03c4\u03bf\u03c5 \u03bf\u03c0\u03bf\u03af\u03bf\u03c5 \u03b6\u03ae\u03c4\u03b7\u03c3\u03b5 \u03b4\u03cd\u03bf \u03c6\u03bf\u03c1\u03ad\u03c2 \u03bc\u03ad\u03c3\u03c9 \u03b1\u03c3\u03c5\u03c1\u03bc\u03ac\u03c4\u03bf\u03c5 \u03bd\u03b1 \u03b5\u03bd\u03b7\u03bc\u03b5\u03c1\u03c9\u03b8\u03b5\u03af \u03b3\u03b9\u03b1 \u03c4\u03b1 \u03c3\u03c4\u03bf\u03b9\u03c7\u03b5\u03af\u03b1 \u03c4\u03bf\u03c5 \u03c0\u03bb\u03bf\u03af\u03bf\u03c5, \u03b1\u03bb\u03bb\u03ac \u03ba\u03b1\u03b9 \u03b3\u03b9\u03b1 \u03c4\u03b7\u03bd \u03b1\u03c0\u03bf\u03c3\u03c4\u03bf\u03bb\u03ae \u03c4\u03bf\u03c5. 
\u039f \u03c0\u03bb\u03bf\u03af\u03b1\u03c1\u03c7\u03bf\u03c2 \u03c4\u03bf\u03c5 \u03b5\u03bb\u03bb\u03b7\u03bd\u03b9\u03ba\u03bf\u03cd \u03b5\u03c1\u03b5\u03c5\u03bd\u03b7\u03c4\u03b9\u03ba\u03bf\u03cd \u03b4\u03b5\u03bd \u03b1\u03c0\u03ac\u03bd\u03c4\u03b7\u03c3\u03b5 \u03ba\u03b1\u03b9 \u03c4\u03b5\u03bb\u03b9\u03ba\u03ac \u03c4\u03bf \u03c4\u03bf\u03c5\u03c1\u03ba\u03b9\u03ba\u03cc \u03c0\u03bf\u03bb\u03b5\u03bc\u03b9\u03ba\u03cc \u03b1\u03c0\u03bf\u03bc\u03b1\u03ba\u03c1\u03cd\u03bd\u03b8\u03b7\u03ba\u03b5.", "example_title": "Topic AFFAIRS"}, {"text": "\u0397 \u03ba\u03c5\u03b2\u03b5\u03c1\u03bd\u03b7\u03c4\u03b9\u03ba\u03ae \u03b1\u03bd\u03b9\u03ba\u03b1\u03bd\u03cc\u03c4\u03b7\u03c4\u03b1 \u03bf\u03b4\u03b7\u03b3\u03b5\u03af \u03c4\u03b7\u03bd \u03c7\u03ce\u03c1\u03b1 \u03c3\u03c4\u03bf \u03c7\u03ac\u03bf\u03c2. \u0397 \u03ba\u03c5\u03b2\u03b5\u03c1\u03bd\u03b7\u03c3\u03b7 \u039c\u03b7\u03c4\u03c3\u03bf\u03c4\u03b1\u03ba\u03b7 \u03b1\u03b4\u03c5\u03bd\u03b1\u03c4\u03b5\u03af \u03bd\u03b1 \u03b4\u03b9\u03b1\u03c7\u03b5\u03b9\u03c1\u03b9\u03c3\u03c4\u03b5\u03af \u03c4\u03b7\u03bd \u03c0\u03b1\u03bd\u03b4\u03b7\u03bc\u03af\u03b1. \u0394\u03b5\u03bd \u03bc\u03c0\u03bf\u03c1\u03b5\u03b9 \u03bf\u03cd\u03c4\u03b5 \u03bd\u03b1 \u03c0\u03b5\u03af\u03c3\u03b5\u03b9 \u03c4\u03bf\u03bd \u03ba\u03cc\u03c3\u03bc\u03bf \u03bd\u03b1 \u03b5\u03bc\u03b2\u03bf\u03bb\u03b9\u03b1\u03c3\u03c4\u03b5\u03af, \u03c0\u03bf\u03c5 \u03ae\u03c4\u03b1\u03bd \u03c4\u03bf \u03c0\u03b9\u03bf \u03b1\u03c0\u03bb\u03bf \u03c0\u03c1\u03ac\u03b3\u03bc\u03b1. \u03a3\u03b7\u03bc\u03b5\u03c1\u03b1 \u03bb\u03bf\u03b9\u03c0\u03cc\u03bd \u03c6\u03c4\u03ac\u03c3\u03b1\u03bc\u03b5 \u03c3\u03c4\u03bf \u03c3\u03b7\u03bc\u03b5\u03af\u03bf \u03bd\u03b1 \u03bc\u03b9\u03bb\u03ac\u03bc\u03b5 \u03b3\u03b9\u03b1 \u03b5\u03c0\u03b1\u03bd\u03b1\u03c6\u03bf\u03c1\u03ac \u03c4\u03b7\u03c2 \u03c7\u03c1\u03ae\u03c3\u03b7\u03c2 \u03bc\u03ac\u03c3\u03ba\u03b1\u03c2 \u03c3\u03b5 \u03b5\u03be\u03c9\u03c4\u03b5\u03c1\u03b9\u03ba\u03bf\u03cd\u03c2 \u03c7\u03ce\u03c1\u03bf\u03c5\u03c2 \u03b1\u03ba\u03cc\u03bc\u03b7 \u03ba\u03b1\u03b9 \u03cc\u03c0\u03bf\u03c5 \u03b4\u03b5\u03bd \u03c5\u03c0\u03ac\u03c1\u03c7\u03b5\u03b9 \u03c3\u03c5\u03b3\u03c7\u03c1\u03c9\u03c4\u03b9\u03c3\u03bc\u03cc\u03c2. 
\u03a3\u03c4\u03b9\u03c2 \u03c3\u03c5\u03b6\u03b7\u03c4\u03ae\u03c3\u03b5\u03b9\u03c2 \u03c4\u03c9\u03bd \u03b5\u03b9\u03b4\u03b9\u03ba\u03ce\u03bd \u03b8\u03b1 \u03b2\u03c1\u03b5\u03b8\u03b5\u03af \u03b5\u03c0\u03af\u03c3\u03b7\u03c2 \u03c4\u03bf \u03b5\u03bd\u03b4\u03b5\u03c7\u03cc\u03bc\u03b5\u03bd\u03bf \u03b3\u03b9\u03b1 \u03c4\u03bf\u03c0\u03b9\u03ba\u03ac lockdown \u03c3\u03b5 \u03c0\u03b5\u03c1\u03b9\u03bf\u03c7\u03ad\u03c2 \u03bc\u03b5 \u03b2\u03b1\u03c1\u03cd \u03b9\u03b9\u03ba\u03cc \u03c6\u03bf\u03c1\u03c4\u03af\u03bf \u03b3\u03b9\u03b1 \u03bd\u03b1 \u03bc\u03b7\u03bd \u03be\u03b5\u03c6\u03cd\u03b3\u03b5\u03b9 \u03b7 \u03ba\u03b1\u03c4\u03ac\u03c3\u03c4\u03b1\u03c3\u03b7, \u03b5\u03bd\u03ce \u03b8\u03b1 \u03c7\u03c1\u03b5\u03b9\u03ac\u03b6\u03b5\u03c4\u03b1\u03b9 \u03ba\u03ac\u03c0\u03bf\u03b9\u03bf\u03c2 \u03b3\u03b9\u03b1 \u03c4\u03b9\u03c2 \u03bc\u03b5\u03c4\u03b1\u03ba\u03b9\u03bd\u03ae\u03c3\u03b5\u03b9\u03c2 \u03c4\u03bf\u03c5 \u03b5\u03af\u03c4\u03b5 \u03c0\u03b9\u03c3\u03c4\u03bf\u03c0\u03bf\u03b9\u03b7\u03c4\u03b9\u03ba\u03cc \u03b5\u03bc\u03b2\u03bf\u03bb\u03b9\u03b1\u03c3\u03bc\u03bf\u03cd \u03ae \u03bd\u03cc\u03c3\u03b7\u03c3\u03b7\u03c2 \u03ba\u03b1\u03b9 \u03bf\u03b9 \u03b1\u03bd\u03b5\u03bc\u03b2\u03bf\u03bb\u03af\u03b1\u03c3\u03c4\u03bf\u03b9 rapid \u03ae \u03bc\u03bf\u03c1\u03b9\u03b1\u03ba\u03cc \u03c4\u03b5\u03c3\u03c4.", "example_title": "Topic COVID"}, {"text": "\u0397 \u00ab\u03c9\u03c1\u03b1\u03af\u03b1 \u0395\u03bb\u03ad\u03bd\u03b7\u00bb \u03b5\u03c0\u03ad\u03c3\u03c4\u03c1\u03b5\u03c8\u03b5 \u03c3\u03c4\u03b7\u03bd \u03c4\u03b7\u03bb\u03b5\u03cc\u03c1\u03b1\u03c3\u03b7, \u03bc\u03ad\u03c3\u03b1 \u03b1\u03c0\u03cc \u03c4\u03b7 \u03c3\u03c5\u03c7\u03bd\u03cc\u03c4\u03b7\u03c4\u03b1 \u03c4\u03bf\u03c5 MEGA \u03ba\u03b1\u03b9 \u03ac\u03c6\u03b7\u03c3\u03b5 \u03c4\u03b9\u03c2 \u03ba\u03b1\u03bb\u03cd\u03c4\u03b5\u03c1\u03b5\u03c2 \u03b5\u03bd\u03c4\u03c5\u03c0\u03ce\u03c3\u03b5\u03b9\u03c2. \u03a4\u03bf \u03c0\u03bb\u03b1\u03c4\u03cc \u03b1\u03c0\u03cc \u03c4\u03bf \u03bf\u03c0\u03bf\u03af\u03bf \u03b5\u03bc\u03c6\u03b1\u03bd\u03af\u03b6\u03b5\u03c4\u03b1\u03b9 \u03b7 \u0395\u03bb\u03ad\u03bd\u03b7 \u039c\u03b5\u03bd\u03b5\u03b3\u03ac\u03ba\u03b7 \u03ad\u03c7\u03b5\u03b9 \u03c6\u03c4\u03b9\u03b1\u03c7\u03c4\u03b5\u03af \u03b1\u03c0\u03cc \u03c4\u03b7\u03bd \u03b1\u03c1\u03c7\u03ae \u03b3\u03b9\u03b1 \u03c4\u03b7\u03bd \u03b5\u03ba\u03c0\u03bf\u03bc\u03c0\u03ae \u03c4\u03b7\u03c2. \u03a3\u03ae\u03bc\u03b5\u03c1\u03b1, \u03c3\u03c4\u03bf \u03ba\u03bb\u03b5\u03af\u03c3\u03b9\u03bc\u03bf \u03c4\u03b7\u03c2 \u03b5\u03ba\u03c0\u03bf\u03bc\u03c0\u03ae\u03c2 \u03b7 \u0395\u03bb\u03ad\u03bd\u03b7 \u03c0\u03ad\u03c1\u03b1\u03c3\u03b5 \u03b1\u03bd\u03ac\u03bc\u03b5\u03c3\u03b1 \u03b1\u03c0\u03cc \u03c4\u03b9\u03c2 \u03ba\u03ac\u03bc\u03b5\u03c1\u03b5\u03c2 \u03b3\u03b9\u03b1 \u03bd\u03b1 \u03bc\u03c0\u03b5\u03b9 \u03c3\u03c4\u03bf \u03ba\u03b1\u03bc\u03b1\u03c1\u03af\u03bd\u03b9 \u03c4\u03b7\u03c2 \u00ab\u039c\u03b7\u03bd \u03c4\u03c1\u03bf\u03bc\u03bf\u03ba\u03c1\u03b1\u03c4\u03b5\u03af\u03c3\u03c4\u03b5, \u03b5\u03af\u03bc\u03b1\u03b9 \u03b7 \u0395\u03bb\u03ad\u03bd\u03b7 \u039c\u03b5\u03bd\u03b5\u03b3\u03ac\u03ba\u03b7, \u03c4\u03b1 \u03ba\u03ac\u03bd\u03c9 \u03b1\u03c5\u03c4\u03ac. 
\u039c\u03b5 \u03c3\u03c5\u03b3\u03c7\u03c9\u03c1\u03b5\u03af\u03c4\u03b1\u03b9, \u03ad\u03c7\u03c9 \u03c8\u03c5\u03c7\u03bf\u03bb\u03bf\u03b3\u03b9\u03ba\u03ac \u03b1\u03bd \u03b4\u03b5\u03bd \u03b5\u03af\u03bc\u03b1\u03b9 \u03b5\u03bb\u03b5\u03cd\u03b8\u03b5\u03c1\u03b7\u00bb \u03b5\u03af\u03c0\u03b5 \u03b1\u03c1\u03c7\u03b9\u03ba\u03ac \u03b7 \u03c0\u03b1\u03c1\u03bf\u03c5\u03c3\u03b9\u03ac\u03c3\u03c4\u03c1\u03b9\u03b1 \u03c3\u03c4\u03bf\u03c5\u03c2 \u03c3\u03c5\u03bd\u03b5\u03c1\u03b3\u03ac\u03c4\u03b5\u03c2 \u03c4\u03b7\u03c2 \u03ba\u03b1\u03b9 \u03c0\u03c1\u03cc\u03c3\u03b8\u03b5\u03c3\u03b5 \u03c3\u03c4\u03b7 \u03c3\u03c5\u03bd\u03ad\u03c7\u03b5\u03b9\u03b1: \u00ab\u0397 \u0395\u03bb\u03ad\u03bd\u03b7 \u03bf\u03bb\u03bf\u03ba\u03bb\u03ae\u03c1\u03c9\u03c3\u03b5. \u039c\u03c0\u03bf\u03c1\u03b5\u03af\u03c4\u03b5 \u03bd\u03b1 \u03c3\u03c5\u03bd\u03b5\u03c7\u03af\u03c3\u03b5\u03c4\u03b5 \u03bc\u03b5 \u03c4\u03bf \u03c5\u03c0\u03cc\u03bb\u03bf\u03b9\u03c0\u03bf \u03c0\u03c1\u03cc\u03b3\u03c1\u03b1\u03bc\u03bc\u03b1 \u03c4\u03bf\u03c5 Mega. \u0395\u03b3\u03ce \u03b1\u03bd\u03bf\u03af\u03b3\u03c9 \u03c4\u03bf \u03ba\u03b1\u03bc\u03b1\u03c1\u03af\u03bd\u03b9, \u03b1\u03bd \u03bc\u03b5 \u03b1\u03c6\u03ae\u03c3\u03bf\u03c5\u03bd. \u039c\u03c0\u03b1\u03af\u03bd\u03c9 \u03ba\u03b1\u03bc\u03b1\u03c1\u03af\u03bd\u03b9\u00bb. \u0394\u03b5\u03af\u03c4\u03b5 \u03c4\u03bf \u03b1\u03c0\u03cc\u03c3\u03c0\u03b1\u03c3\u03bc\u03b1!", "example_title": "Topic ENTERTAINMENT"}, {"text": "\u0388\u03bd\u03b1 \u03b5\u03be\u03b1\u03b9\u03c1\u03b5\u03c4\u03b9\u03ba\u03ac \u03b5\u03bd\u03b4\u03b9\u03b1\u03c6\u03ad\u03c1\u03bf\u03bd \u00ab\u03ba\u03bf\u03c5\u03c4\u03c3\u03bf\u03bc\u03c0\u03bf\u03bb\u03b9\u03cc\u00bb \u03b5\u03bd\u03c4\u03cc\u03c0\u03b9\u03c3\u03b1\u03bd \u03bf\u03b9 \u03ba\u03b5\u03c1\u03b1\u03af\u03b5\u03c2 \u03c4\u03b7\u03c2 \u03c3\u03c4\u03ae\u03bb\u03b7\u03c2 \u03c0\u03ad\u03c1\u03b9\u03be \u03c4\u03bf\u03c5 \u039c\u03b5\u03b3\u03ac\u03c1\u03bf\u03c5 \u039c\u03b1\u03be\u03af\u03bc\u03bf\u03c5 : \u03c4\u03bf \u03ba\u03b1\u03c4\u03ac \u03c0\u03cc\u03c3\u03bf\u03bd, \u03b4\u03b7\u03bb\u03b1\u03b4\u03ae, \u03bf \u00ab\u03b5\u03be \u03b1\u03c0\u03bf\u03c1\u03c1\u03ae\u03c4\u03c9\u03bd\u00bb \u03c4\u03bf\u03c5 \u039a\u03c5\u03c1\u03b9\u03ac\u03ba\u03bf\u03c5 \u039c\u03b7\u03c4\u03c3\u03bf\u03c4\u03ac\u03ba\u03b7 , \u0393\u03b9\u03ce\u03c1\u03b3\u03bf\u03c2 \u0393\u03b5\u03c1\u03b1\u03c0\u03b5\u03c4\u03c1\u03af\u03c4\u03b7\u03c2 \u03bc\u03b5\u03c4\u03ad\u03c7\u03b5\u03b9 \u03c3\u03c4\u03b7 \u03b4\u03b9\u03b1\u03c7\u03b5\u03af\u03c1\u03b9\u03c3\u03b7 \u03c4\u03b7\u03c2 \u03c0\u03b1\u03bd\u03b4\u03b7\u03bc\u03af\u03b1\u03c2 \u03ba\u03b1\u03b9 \u03c3\u03c4\u03b7\u03bd \u03b4\u03b9\u03b1\u03b4\u03b9\u03ba\u03b1\u03c3\u03af\u03b1 \u03bb\u03ae\u03c8\u03b7\u03c2 \u03b1\u03c0\u03bf\u03c6\u03ac\u03c3\u03b5\u03c9\u03bd. 
\u03a4\u03bf \u03b5\u03bd \u03bb\u03cc\u03b3\u03c9 \u00ab\u03ba\u03bf\u03c5\u03c4\u03c3\u03bf\u03bc\u03c0\u03bf\u03bb\u03b9\u03cc\u00bb \u03c0\u03c5\u03c1\u03bf\u03b4\u03cc\u03c4\u03b7\u03c3\u03b5 \u03c4\u03bf \u03b3\u03b5\u03b3\u03bf\u03bd\u03cc\u03c2 \u03cc\u03c4\u03b9 \u03c3\u03b5 \u03c3\u03b1\u03b2\u03b2\u03b1\u03c4\u03b9\u03ac\u03c4\u03b9\u03ba\u03b7 \u03b5\u03c6\u03b7\u03bc\u03b5\u03c1\u03af\u03b4\u03b1 \u03b4\u03b7\u03bc\u03bf\u03c3\u03b9\u03b5\u03cd\u03b8\u03b7\u03ba\u03b1\u03bd \u03c0\u03c1\u03bf\u03c7\u03b8\u03ad\u03c2 \u03b4\u03b7\u03bb\u03ce\u03c3\u03b5\u03b9\u03c2 \u03c4\u03bf\u03c5 \u03c5\u03c0\u03bf\u03c5\u03c1\u03b3\u03bf\u03cd \u0395\u03c0\u03b9\u03ba\u03c1\u03b1\u03c4\u03b5\u03af\u03b1\u03c2 \u03bc\u03b5 \u03c4\u03b9\u03c2 \u03bf\u03c0\u03bf\u03af\u03b5\u03c2 \u03b1\u03c0\u03ad\u03ba\u03bb\u03b5\u03b9\u03b5 \u03ba\u03ac\u03b8\u03b5 \u03c3\u03b5\u03bd\u03ac\u03c1\u03b9\u03bf \u03bd\u03ad\u03c9\u03bd \u03bf\u03c1\u03b9\u03b6\u03cc\u03bd\u03c4\u03b9\u03c9\u03bd \u03bc\u03ad\u03c4\u03c1\u03c9\u03bd \u03ba\u03b1\u03b9 \u03c4\u03b7\u03bd \u03af\u03b4\u03b9\u03b1 \u03ce\u03c1\u03b1, \u03c4\u03bf \u039c\u03b1\u03be\u03af\u03bc\u03bf\u03c5 \u03b1\u03bd\u03ae\u03b3\u03b3\u03b5\u03bb\u03bb\u03b5\u2026 \u03ba\u03b1\u03c1\u03b1\u03bd\u03c4\u03af\u03bd\u03b1 \u03c3\u03c4\u03b7 \u039c\u03cd\u03ba\u03bf\u03bd\u03bf. \u00ab\u0395\u03af\u03bd\u03b1\u03b9 \u03b1\u03c5\u03c4\u03bf\u03bd\u03cc\u03b7\u03c4\u03bf \u03cc\u03c4\u03b9 \u03b7 \u03ba\u03bf\u03b9\u03bd\u03c9\u03bd\u03af\u03b1 \u03ba\u03b1\u03b9 \u03b7 \u03bf\u03b9\u03ba\u03bf\u03bd\u03bf\u03bc\u03af\u03b1 \u03b4\u03b5\u03bd \u03b1\u03bd\u03c4\u03ad\u03c7\u03bf\u03c5\u03bd \u03bf\u03c1\u03b9\u03b6\u03cc\u03bd\u03c4\u03b9\u03bf\u03c5\u03c2 \u03c0\u03b5\u03c1\u03b9\u03bf\u03c1\u03b9\u03c3\u03bc\u03bf\u03cd\u03c2\u00bb, \u03ad\u03bb\u03b5\u03b3\u03b5 \u03c7\u03b1\u03c1\u03b1\u03ba\u03c4\u03b7\u03c1\u03b9\u03c3\u03c4\u03b9\u03ba\u03ac \u03bf \u0393\u03b5\u03c1\u03b1\u03c0\u03b5\u03c4\u03c1\u03af\u03c4\u03b7\u03c2, \u03c4\u03b7\u03bd \u03ce\u03c1\u03b1 \u03c0\u03bf\u03c5 \u03b7 \u03ba\u03c5\u03b2\u03ad\u03c1\u03bd\u03b7\u03c3\u03b7 \u03b1\u03bd\u03b1\u03ba\u03bf\u03af\u03bd\u03c9\u03bd\u03b5\u2026 \u03b1\u03c5\u03c4\u03bf\u03cd\u03c2 \u03c4\u03bf\u03c5\u03c2 \u03bf\u03c1\u03b9\u03b6\u03cc\u03bd\u03c4\u03b9\u03bf\u03c5\u03c2 \u03c0\u03b5\u03c1\u03b9\u03bf\u03c1\u03b9\u03c3\u03bc\u03bf\u03cd\u03c2. 
\u03a9\u03c2 \u03b5\u03ba \u03c4\u03bf\u03cd\u03c4\u03c9\u03bd, \u03b4\u03cd\u03bf \u03c4\u03b9\u03bd\u03ac \u03bc\u03c0\u03bf\u03c1\u03b5\u03af \u03bd\u03b1 \u03c3\u03c5\u03bc\u03b2\u03b1\u03af\u03bd\u03bf\u03c5\u03bd: \u03b5\u03af\u03c4\u03b5 \u03bf \u03c5\u03c0\u03bf\u03c5\u03c1\u03b3\u03cc\u03c2 \u0395\u03c0\u03b9\u03ba\u03c1\u03b1\u03c4\u03b5\u03af\u03b1\u03c2 \u03b4\u03b5\u03bd \u03bc\u03b5\u03c4\u03ad\u03c7\u03b5\u03b9 \u03c0\u03bb\u03ad\u03bf\u03bd \u03c3\u03c4\u03b7 \u03bb\u03ae\u03c8\u03b7 \u03c4\u03c9\u03bd \u03b1\u03c0\u03bf\u03c6\u03ac\u03c3\u03b5\u03c9\u03bd, \u03b5\u03af\u03c4\u03b5 \u03b7 \u03b1\u03c0\u03cc\u03c6\u03b1\u03c3\u03b7 \u03b3\u03b9\u03b1 \u03bf\u03c1\u03b9\u03b6\u03cc\u03bd\u03c4\u03b9\u03b1 \u03bc\u03ad\u03c4\u03c1\u03b1 \u03b5\u03bb\u03ae\u03c6\u03b8\u03b7 \u03c5\u03c0\u03cc \u03c4\u03bf \u03ba\u03c1\u03ac\u03c4\u03bf\u03c2 \u03c0\u03b1\u03bd\u03b9\u03ba\u03bf\u03cd \u03c4\u03bf \u03c0\u03c1\u03c9\u03af \u03c4\u03bf\u03c5 \u03a3\u03b1\u03b2\u03b2\u03ac\u03c4\u03bf\u03c5, \u03cc\u03c4\u03b1\u03bd \u03ad\u03c6\u03c4\u03b1\u03c3\u03b5 \u03c3\u03c4\u03bf \u039c\u03b1\u03be\u03af\u03bc\u03bf\u03c5 \u03b7 \u03c4\u03b5\u03bb\u03b5\u03c5\u03c4\u03b1\u03af\u03b1 \u00ab\u03c6\u03bf\u03c5\u03c1\u03bd\u03b9\u03ac\u00bb \u03c4\u03c9\u03bd \u03b5\u03c0\u03b9\u03b4\u03b7\u03bc\u03b9\u03bf\u03bb\u03bf\u03b3\u03b9\u03ba\u03ce\u03bd \u03b4\u03b5\u03b4\u03bf\u03bc\u03ad\u03bd\u03c9\u03bd \u03b3\u03b9\u03b1 \u03c4\u03bf \u03bd\u03b7\u03c3\u03af \u03c4\u03c9\u03bd \u03b1\u03bd\u03ad\u03bc\u03c9\u03bd\u2026", "example_title": "Topic NON_PAPER"}, {"text": "\u0395\u03af\u03bd\u03b1\u03b9 \u03be\u03b5\u03ba\u03ac\u03b8\u03b1\u03c1\u03bf \u03cc\u03c4\u03b9 \u03bc\u03b5\u03c4\u03ac \u03c4\u03bf \u03c0\u03bb\u03ae\u03b3\u03bc\u03b1 \u03c0\u03bf\u03c5 \u03b4\u03ad\u03c7\u03b8\u03b7\u03ba\u03b5 \u03b7 \u03ba\u03c5\u03b2\u03ad\u03c1\u03bd\u03b7\u03c3\u03ae \u03c4\u03bf\u03c5 \u03b1\u03c0\u03cc \u03c4\u03b9\u03c2 \u03b1\u03b4\u03c5\u03bd\u03b1\u03bc\u03af\u03b5\u03c2 \u03c3\u03c4\u03b7\u03bd \u03b1\u03bd\u03c4\u03b9\u03bc\u03b5\u03c4\u03ce\u03c0\u03b9\u03c3\u03b7 \u03c4\u03c9\u03bd \u03ba\u03b1\u03c4\u03b1\u03c3\u03c4\u03c1\u03bf\u03c6\u03b9\u03ba\u03ce\u03bd \u03c0\u03c5\u03c1\u03ba\u03b1\u03b3\u03b9\u03ce\u03bd \u03c4\u03bf \u03bc\u03b5\u03b3\u03ac\u03bb\u03bf \u03c3\u03c4\u03bf\u03af\u03c7\u03b7\u03bc\u03b1 \u03b3\u03b9\u03b1 \u03c4\u03bf\u03bd \u039a\u03c5\u03c1\u03b9\u03ac\u03ba\u03bf \u039c\u03b7\u03c4\u03c3\u03bf\u03c4\u03ac\u03ba\u03b7 \u03b5\u03af\u03bd\u03b1\u03b9 \u03bd\u03b1 \u03c0\u03c1\u03bf\u03c7\u03c9\u03c1\u03ae\u03c3\u03b5\u03b9 \u03c3\u03c5\u03bd\u03c4\u03b5\u03c4\u03b1\u03b3\u03bc\u03ad\u03bd\u03b1 \u03ba\u03b1\u03b9 \u03c7\u03c9\u03c1\u03af\u03c2 \u03c0\u03b1\u03c1\u03b1\u03c4\u03c1\u03ac\u03b3\u03bf\u03c5\u03b4\u03b1 \u03bf \u03c3\u03c7\u03b5\u03b4\u03b9\u03b1\u03c3\u03bc\u03cc\u03c2 \u03b3\u03b9\u03b1 \u03c4\u03b7\u03bd \u03b1\u03c0\u03bf\u03ba\u03b1\u03c4\u03ac\u03c3\u03c4\u03b1\u03c3\u03b7 \u03c4\u03c9\u03bd \u03b6\u03b7\u03bc\u03b9\u03ce\u03bd. \u039f \u03a0\u03c1\u03c9\u03b8\u03c5\u03c0\u03bf\u03c5\u03c1\u03b3\u03cc\u03c2 \u03ad\u03c7\u03b5\u03b9 \u03ae\u03b4\u03b7 \u03c6\u03c4\u03b9\u03ac\u03be\u03b5\u03b9 \u03bc\u03b9\u03b1 \u03bf\u03bc\u03ac\u03b4\u03b1 \u03ba\u03c1\u03bf\u03cd\u03c3\u03b7\u03c2 \u03c4\u03b7\u03bd \u03bf\u03c0\u03bf\u03af\u03b1 \u03b1\u03c0\u03bf\u03c4\u03b5\u03bb\u03bf\u03cd\u03bd 9 \u03c5\u03c0\u03bf\u03c5\u03c1\u03b3\u03bf\u03af. 
\u03a4\u03b1 \u03bc\u03ad\u03bb\u03b7 \u03c0\u03bf\u03c5 \u03b1\u03c0\u03b1\u03c1\u03c4\u03af\u03b6\u03bf\u03c5\u03bd \u03c4\u03b7\u03bd \u03bf\u03bc\u03ac\u03b4\u03b1 \u03ba\u03c1\u03bf\u03cd\u03c3\u03b7\u03c2 \u03ba\u03b1\u03b9 \u03c4\u03b1 \u03bf\u03c0\u03bf\u03af\u03b1 \u03b2\u03c1\u03af\u03c3\u03ba\u03bf\u03bd\u03c4\u03b1\u03b9 \u03c3\u03b5 \u03c3\u03c5\u03bd\u03b5\u03c7\u03ae, \u03ba\u03b1\u03b8\u03b7\u03bc\u03b5\u03c1\u03b9\u03bd\u03ae \u03b5\u03c0\u03b1\u03c6\u03ae \u03bc\u03b5 \u03c4\u03bf\u03bd \u039a\u03c5\u03c1\u03b9\u03ac\u03ba\u03bf \u039c\u03b7\u03c4\u03c3\u03bf\u03c4\u03ac\u03ba\u03b7 \u03b5\u03af\u03bd\u03b1\u03b9, \u03cc\u03c0\u03c9\u03c2 \u03bc\u03b1\u03c2 \u03c0\u03bb\u03b7\u03c1\u03bf\u03c6\u03bf\u03c1\u03b5\u03af \u03b7 \u03c3\u03c4\u03ae\u03bb\u03b7 \u00ab\u0398\u03b5\u03c9\u03c1\u03b5\u03af\u03bf\u00bb \u03c4\u03b7\u03c2 \u00ab\u039a\u03b1\u03b8\u03b7\u03bc\u03b5\u03c1\u03b9\u03bd\u03ae\u03c2\u00bb \u03b5\u03af\u03bd\u03b1\u03b9 \u03bf\u03b9: \u0393. \u0393\u03b5\u03c1\u03b1\u03c0\u03b5\u03c4\u03c1\u03af\u03c4\u03b7\u03c2, \u0391. \u03a3\u03ba\u03ad\u03c1\u03c4\u03c3\u03bf\u03c2, \u03a7\u03c1. \u03a4\u03c1\u03b9\u03b1\u03bd\u03c4\u03cc\u03c0\u03bf\u03c5\u03bb\u03bf\u03c2, \u039a. \u039a\u03b1\u03c1\u03b1\u03bc\u03b1\u03bd\u03bb\u03ae\u03c2, \u039a. \u03a3\u03ba\u03c1\u03ad\u03ba\u03b1\u03c2, \u03a3\u03c4. \u03a0\u03ad\u03c4\u03c3\u03b1\u03c2, \u03a3\u03c0. \u039b\u03b9\u03b2\u03b1\u03bd\u03cc\u03c2 \u03ba\u03b1\u03b9 \u03c6\u03c5\u03c3\u03b9\u03ba\u03ac \u03bf\u03b9 \u03a7\u03c1. \u03a3\u03c4\u03b1\u03b9\u03ba\u03bf\u03cd\u03c1\u03b1\u03c2 \u03ba\u03b1\u03b9 \u0398. \u03a3\u03ba\u03c5\u03bb\u03b1\u03ba\u03ac\u03ba\u03b7\u03c2.", "example_title": "Topic OPINION"}]}
|
cvcio/mediawatch-el-topics
| null |
[
"transformers",
"pytorch",
"safetensors",
"roberta",
"text-classification",
"Greek",
"news",
"el",
"doi:10.57967/hf/0711",
"license:gpl-3.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
fill-mask
|
transformers
|
# RoBERTa Greek base model
Pretrained model on the Greek language with the Masked Language Modeling (MLM) objective using [Hugging Face's](https://huggingface.co/) [Transformers](https://github.com/huggingface/transformers) library. This model is *NOT* case-sensitive and all Greek diacritics are retained.
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
# example url
# https://www.news247.gr/politiki/misologa-maximoy-gia-tin-ekthesi-tsiodra-lytra-gia-ti-thnitotita-ektos-meth.9462425.html
# not present in train/eval set
from transformers import pipeline
pipe = pipeline('fill-mask', model='cvcio/roberta-el-news')
pipe(
'Η κυβέρνηση μουδιασμένη από τη <mask> της έκθεσης Τσιόδρα-Λύτρα, '
'επιχειρεί χωρίς να απαντά ουσιαστικά να ρίξει ευθύνες στον ΣΥΡΙΖΑ, που κυβερνούσε πριν... 2 χρόνια.'
)
# outputs
[
{
'sequence': 'Η κυβέρνηση μουδιασμένη από τη δημοσιοποίηση της έκθεσης Τσιόδρα-Λύτρα, επιχειρεί χωρίς να απαντά ουσιαστικά να ρίξει ευθύνες στον ΣΥΡΙΖΑ, που κυβερνούσε πριν... 2 χρόνια.',
'score': 0.5881184339523315, 'token': 20235, 'token_str': ' δημοσιοποίηση'
},
{
'sequence': 'Η κυβέρνηση μουδιασμένη από τη δημοσίευση της έκθεσης Τσιόδρα-Λύτρα, επιχειρεί χωρίς να απαντά ουσιαστικά να ρίξει ευθύνες στον ΣΥΡΙΖΑ, που κυβερνούσε πριν... 2 χρόνια.',
'score': 0.05952141433954239, 'token': 9696, 'token_str': ' δημοσίευση'
},
{
'sequence': 'Η κυβέρνηση μουδιασμένη από τη διαχείριση της έκθεσης Τσιόδρα-Λύτρα, επιχειρεί χωρίς να απαντά ουσιαστικά να ρίξει ευθύνες στον ΣΥΡΙΖΑ, που κυβερνούσε πριν... 2 χρόνια.',
'score': 0.029887061566114426, 'token': 4315, 'token_str': ' διαχείριση'
},
{
'sequence': 'Η κυβέρνηση μουδιασμένη από τη διαρροή της έκθεσης Τσιόδρα-Λύτρα, επιχειρεί χωρίς να απαντά ουσιαστικά να ρίξει ευθύνες στον ΣΥΡΙΖΑ, που κυβερνούσε πριν... 2 χρόνια.',
'score': 0.022848669439554214, 'token': 24940, 'token_str': ' διαρροή'
},
{
'sequence': 'Η κυβέρνηση μουδιασμένη από τη ματαίωση της έκθεσης Τσιόδρα-Λύτρα, επιχειρεί χωρίς να απαντά ουσιαστικά να ρίξει ευθύνες στον ΣΥΡΙΖΑ, που κυβερνούσε πριν... 2 χρόνια.',
'score': 0.01729060709476471, 'token': 46913, 'token_str': ' ματαίωση'
}
]
```
## Training data
The model was pretrained on 8 million unique news articles (approx. 160M sentences, 33GB of text), collected with [MediaWatch](https://mediawatch.io/) from October 2016 up to December 2021.
## Preprocessing
The texts are tokenized using a byte-level version of Byte-Pair Encoding (BPE) with a vocabulary size of 50,265. During preprocessing, we only unescaped HTML entities to the corresponding Unicode characters (e.g. `&` => `&`).
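That unescaping step can be done with Python's standard library; the sketch below illustrates the idea and is not the exact preprocessing script used:
```python
import html

raw = "Οικονομία & κοινωνία <στην> Ευρώπη"
print(html.unescape(raw))  # Οικονομία & κοινωνία <στην> Ευρώπη
```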
## Pretraining
The model was pretrained using an NVIDIA A10 GPU for 3 epochs (approx. 760K steps, 182 hours) with a batch size of 14 (x2 gradient accumulation steps = 28) and a sequence length of 512 tokens. The optimizer used is Adam with a learning rate of 5e-5 and linear decay of the learning rate.
### Training results
| epochs | steps | train/train_loss | train/loss | eval/loss |
|-------:|--------:|-----------------:|------------:|----------:|
| 3 | 765,414 | 0.3960 | 1.2356 | 0.9028 |
### Evaluation results
The model was fine-tuned on the NER task using the [elNER](https://github.com/nmpartzio/elner) dataset and achieved the following results:
| task | epochs | lr | batch | dataset | precision | recall | f1 | accuracy |
|-----:|-------:|-----:|------:|--------:|----------:|-------:|-------:|---------:|
| ner | 5 | 1e-5 | 16/16 | elNER4 | 0.8954 | 0.9280 | 0.9114 | 0.9872 |
| ner | 5 | 1e-4 | 16/16 | elNER18 | 0.9069 | 0.9268 | 0.9168 | 0.9823 |
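The NER fine-tuning code itself is not included in this repo. A minimal sketch of attaching a token-classification head to this checkpoint with Transformers is shown below; the label count is a placeholder, since elNER4/elNER18 define their own tag sets:
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("cvcio/roberta-el-news")
# num_labels=9 is a placeholder, not the actual elNER tag count.
model = AutoModelForTokenClassification.from_pretrained("cvcio/roberta-el-news", num_labels=9)
```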
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-5
- train_batch_size: 14
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 28
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.13.0
- Pytorch 1.9.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
## Authors
Dimitris Papaevagelou - [@andefined](https://github.com/andefined)
## About Us
[Civic Information Office](https://cvcio.org/) is a Non Profit Organization based in Athens, Greece focusing on creating technology and research products for the public interest.
|
{"language": "el", "license": "gpl-3.0", "tags": ["generated_from_trainer", "roberta", "Greek", "news", "transformers"], "widget": [{"text": "\u0397 \u03ba\u03c5\u03b2\u03ad\u03c1\u03bd\u03b7\u03c3\u03b7 \u03bc\u03bf\u03c5\u03b4\u03b9\u03b1\u03c3\u03bc\u03ad\u03bd\u03b7 \u03b1\u03c0\u03cc \u03c4\u03b7 <mask> \u03c4\u03b7\u03c2 \u03ad\u03ba\u03b8\u03b5\u03c3\u03b7\u03c2 \u03a4\u03c3\u03b9\u03cc\u03b4\u03c1\u03b1-\u039b\u03cd\u03c4\u03c1\u03b1, \u03b5\u03c0\u03b9\u03c7\u03b5\u03b9\u03c1\u03b5\u03af \u03c7\u03c9\u03c1\u03af\u03c2 \u03bd\u03b1 \u03b1\u03c0\u03b1\u03bd\u03c4\u03ac \u03bf\u03c5\u03c3\u03b9\u03b1\u03c3\u03c4\u03b9\u03ba\u03ac \u03bd\u03b1 \u03c1\u03af\u03be\u03b5\u03b9 \u03b5\u03c5\u03b8\u03cd\u03bd\u03b5\u03c2 \u03c3\u03c4\u03bf\u03bd \u03a3\u03a5\u03a1\u0399\u0396\u0391, \u03c0\u03bf\u03c5 \u03ba\u03c5\u03b2\u03b5\u03c1\u03bd\u03bf\u03cd\u03c3\u03b5 \u03c0\u03c1\u03b9\u03bd... 2 \u03c7\u03c1\u03cc\u03bd\u03b9\u03b1."}], "model-index": [{"name": "roberta-el-news", "results": []}]}
|
cvcio/roberta-el-news
| null |
[
"transformers",
"pytorch",
"safetensors",
"roberta",
"fill-mask",
"generated_from_trainer",
"Greek",
"news",
"el",
"doi:10.57967/hf/0712",
"license:gpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
fill-mask
|
transformers
|
# Greek RoBERTa Uncased (v1)
Pretrained model on the Greek language using a masked language modeling (MLM) objective with [Hugging Face's](https://huggingface.co/) [Transformers](https://github.com/huggingface/transformers) library. This model is *NOT* case-sensitive and has no Greek diacritics (uncased, no accents).
### Training data
This model was pretrained on almost 18M unique tweets, all Greek, collected between 2008-2021, from almost 450K distinct users.
### Preprocessing
The texts are tokenized using a byte-level version of Byte-Pair Encoding (BPE) with a vocabulary size of 50,256. For the tokenizer, we split strings containing any numbers (e.g. EU2019 ==> EU 2019). The tweet normalization logic is described in the example listed below.
```python
import unicodedata
from transformers import pipeline
def normalize_tweet(tweet, do_lower = True, do_strip_accents = True, do_split_word_numbers = False, user_fill = '', url_fill = ''):
# your tweet pre-processing logic goes here
# example...
# remove extra spaces, escape HTML, replace non-standard punctuation
# replace any @user with blank
# replace any link with blank
# explode hashtags to strings (ex. #EU2019 ==> EU 2019)
# remove all emojis
# if do_split_word_numbers:
# splited strings containing any numbers
# standardize punctuation
# remove unicode symbols
if do_lower:
tweet = tweet.lower()
if do_strip_accents:
tweet = strip_accents(tweet)
return tweet.strip()
def strip_accents(s):
return ''.join(c for c in unicodedata.normalize('NFD', s)
if unicodedata.category(c) != 'Mn')
nlp = pipeline('fill-mask', model = 'cvcio/roberta-el-uncased-twitter-v1')
print(
nlp(
normalize_tweet(
'<mask>: Μεγάλη υποχώρηση του ιικού φορτίου σε Αττική και Θεσσαλονίκη'
)
)
)
```
### Pretraining
The model was pretrained on a T4 GPU for 1.2M steps with a batch size of 96 and a sequence length of 96. The optimizer used is Adam with a learning rate of 1e-5, gradient accumulation steps of 8, learning rate warmup for 50000 steps and linear decay of the learning rate after.
### Authors
Dimitris Papaevagelou - [@andefined](https://github.com/andefined)
### About Us
[Civic Information Office](https://cvcio.org/) is a Non Profit Organization based in Athens, Greece focusing on creating technology and research products for the public interest.
|
{"language": "el", "tags": ["roberta", "twitter", "Greek"], "widget": [{"text": "<mask>: \u03bc\u03b5\u03b3\u03b1\u03bb\u03b7 \u03c5\u03c0\u03bf\u03c7\u03c9\u03c1\u03b7\u03c3\u03b7 \u03c4\u03bf\u03c5 \u03b9\u03b9\u03ba\u03bf\u03c5 \u03c6\u03bf\u03c1\u03c4\u03b9\u03bf\u03c5 \u03c3\u03b5 \u03b1\u03c4\u03c4\u03b9\u03ba\u03b7 \u03ba\u03b1\u03b9 \u03b8\u03b5\u03c3\u03c3\u03b1\u03bb\u03bf\u03bd\u03b9\u03ba\u03b7"}]}
|
cvcio/roberta-el-uncased-twitter-v1
| null |
[
"transformers",
"pytorch",
"safetensors",
"roberta",
"fill-mask",
"twitter",
"Greek",
"el",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-classification
|
spacy
|
{"language": ["multilingual"], "tags": ["spacy", "text-classification"], "model-index": [{"name": "xx_cat_pateexx_md", "results": []}]}
|
cverluise/xx_cat_pateexx_md
| null |
[
"spacy",
"text-classification",
"multilingual",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
cwenner/distilbert-base-uncased-finetuned-cola
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
cwh/distilgpt2-finetuned-wikitext2
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
cwh/gpt2-large-finetuned-wikitext2
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text-generation
|
transformers
|
{}
|
cwh/gpt2-medium-finetuned-wikitext2
| null |
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
cwhao98/test
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
cwijayasundara/pegasus-custom
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
fill-mask
|
transformers
|
{}
|
cwitcate/mymodel1001
| null |
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
token-classification
|
transformers
|
## Hello World
|
{}
|
cwtpc/wangchanberta-ner-8989
| null |
[
"transformers",
"pytorch",
"camembert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null | null |
{}
|
cx6319/maihou
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
cxue34/hf_1
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
cyan/distilbert-base-uncased-finetuned-mnli
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
cyatreya/distilbert-base-uncased-finetuned-cola
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null |
transformers
|
## Cyclone Chinese NER
This model provides a simplified Chinese NER model based on the pretrained BERT model (specifically, BERT + CRF).
Currently, it only supports 8 general types of entities ("address", "company", "government", "name", "organization", "position", "scene", "time").
### Usage
```python
from transformers import BertConfig

# Note: CNerTokenizer and BertCrfForNer are custom classes from the authors' code,
# not part of the transformers library; num_labels is left undefined in the original snippet.
config = BertConfig.from_pretrained("bert-base-chinese", num_labels=num_labels)

model_path = "cyclone/cyclone-ner"
tokenizer = CNerTokenizer.from_pretrained(model_path, do_lower_case=True)
model = BertCrfForNer.from_pretrained(model_path, config=config)
```
|
{}
|
cyclone/cyclone-ner
| null |
[
"transformers",
"pytorch",
"bert",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
feature-extraction
|
transformers
|
## Cyclone SIMCSE RoBERTa WWM Ext Chinese
This model provides simplified Chinese sentence embeddings based on [Simple Contrastive Learning (SimCSE)](https://arxiv.org/abs/2104.08821).
The pretrained model (Chinese RoBERTa WWM Ext) is used for token encoding.
### Usage
Please use [SentenceTransformer](https://github.com/UKPLab/sentence-transformers) to load the model.
```python
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer('cyclone/simcse-chinese-roberta-wwm-ext')
```
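A short usage example computing sentence similarity with the resulting embeddings (the example sentences are arbitrary):
```python
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer('cyclone/simcse-chinese-roberta-wwm-ext')
embeddings = encoder.encode(['今天天气很好', '今天天气不错'], convert_to_tensor=True)
print(util.pytorch_cos_sim(embeddings[0], embeddings[1]))  # cosine similarity of the two sentences
```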
|
{}
|
cyclone/simcse-chinese-roberta-wwm-ext
| null |
[
"transformers",
"pytorch",
"bert",
"feature-extraction",
"arxiv:2104.08821",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null | null |
{}
|
cyko/patent_summarization
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null |
transformers
|
{}
|
cyl/adapter_t5-3b_mnli
| null |
[
"transformers",
"pytorch",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null |
transformers
|
{}
|
cyl/adapter_t5-3b_qqp
| null |
[
"transformers",
"pytorch",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null |
transformers
|
{}
|
cyl/adapter_t5-3b_rte
| null |
[
"transformers",
"pytorch",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null |
transformers
|
{}
|
cyl/adapter_t5-3b_sst2
| null |
[
"transformers",
"pytorch",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null |
transformers
|
{}
|
cyl/adapter_t5-3b_stsb
| null |
[
"transformers",
"pytorch",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null |
transformers
|
{}
|
cyl/bitfit_t5-3b_cola
| null |
[
"transformers",
"pytorch",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
cyl/mnli
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
cyl/mrpc
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
cyl/qnli
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
cyl/qqp
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
fill-mask
|
transformers
|
{}
|
cylee/bert-base-cased-tutorial
| null |
[
"transformers",
"tf",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
cylee/bert-finetuned-mrpc
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
cylee/code-search-net-tokenizer-tutorial
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
fill-mask
|
transformers
|
{}
|
cylee/dummy-model
| null |
[
"transformers",
"tf",
"camembert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
fill-mask
|
transformers
|
{}
|
cylee/hugging_hub_tutorial
| null |
[
"transformers",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
fill-mask
|
transformers
|
# About
This is a sample repo.
|
{}
|
cylee/tutorial
| null |
[
"transformers",
"tf",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null | null |
{}
|
cyou/bert-base-jp1
| null |
[
"pytorch",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
cyprste291274/DialoGPT-small-sheldon
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
cys/text-similarity-faq
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
cytochrome/gpt2-wikitext2
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
cyy/soft_prompt
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
d0a0l0l0/dfirstmodel
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
fill-mask
|
transformers
|
# Description:
This is a small pre-trained model for the Sinhala language, trained with Masked Language Modeling (MLM) on the OSCAR Sinhala dataset.
# How to Use:
The model can be used directly with a pipeline for masked language modeling:
```python
>>> from transformers import AutoTokenizer, AutoModelForMaskedLM, pipeline
>>> tokenizer = AutoTokenizer.from_pretrained("d42kw01f/Sinhala-RoBERTa")
>>> model = AutoModelForMaskedLM.from_pretrained("d42kw01f/Sinhala-RoBERTa")
>>> fill_mask = pipeline('fill-mask', model=model, tokenizer=tokenizer)
>>> fill_mask("මම ගෙදර <mask>.")
[{'score': 0.1822454035282135,
'sequence': 'මම ගෙදර ආව.',
'token': 701,
'token_str': ' ආව'},
{'score': 0.10513380169868469,
'sequence': 'මම ගෙදර ය.',
'token': 310,
'token_str': ' ය'},
{'score': 0.06417194753885269,
'sequence': 'මම ගෙදර එක.',
'token': 328,
'token_str': ' එක'},
{'score': 0.05026362091302872,
'sequence': 'මම ගෙදර ඇත.',
'token': 330,
'token_str': ' ඇත'},
{'score': 0.029960114508867264,
'sequence': 'මම ගෙදර යනව.',
'token': 834,
'token_str': ' යනව'}]
```
|
{}
|
d42kw01f/Sinhala-RoBERTa
| null |
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
fill-mask
|
transformers
|
# Description:
This is a small pre-trained model for the Tamil language, trained with Masked Language Modeling (MLM) on the OSCAR Tamil dataset.
# How to Use:
The model can be used directly with a pipeline for masked language modeling:
```python
>>> from transformers import AutoTokenizer, AutoModelForMaskedLM, pipeline
>>> tokenizer = AutoTokenizer.from_pretrained("d42kw01f/Tamil-RoBERTa")
>>> model = AutoModelForMaskedLM.from_pretrained("d42kw01f/Tamil-RoBERTa")
>>> fill_mask = pipeline('fill-mask', model=model, tokenizer=tokenizer)
>>> fill_mask("நான் வீட்டு <mask>.")
```
|
{}
|
d42kw01f/Tamil-RoBERTa
| null |
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-classification
|
transformers
|
## About the Model
An English sequence classification model, trained on the MBAD dataset to detect bias and fairness in sentences (news articles). This model was built on top of the distilbert-base-uncased model and trained for 30 epochs with a batch size of 16, a learning rate of 5e-5, and a maximum sequence length of 512.
- Dataset: MBAD data
- Carbon emission: 0.319355 kg
| Train Accuracy | Validation Accuracy | Train loss | Test loss |
|---------------:| -------------------:| ----------:|----------:|
| 76.97 | 62.00 | 0.45 | 0.96 |
## Usage
The easiest way is to use the Hugging Face Inference API; the second option is the pipeline object offered by the transformers library.
```python
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("d4data/bias-detection-model")
model = TFAutoModelForSequenceClassification.from_pretrained("d4data/bias-detection-model")
classifier = pipeline('text-classification', model=model, tokenizer=tokenizer)  # pass device=0 (or 1) to pipeline(...) to run on a GPU
classifier("The irony, of course, is that the exhibit that invites people to throw trash at vacuuming Ivanka Trump lookalike reflects every stereotype feminists claim to stand against, oversexualizing Ivanka’s body and ignoring her hard work.")
```
## Author
This model is part of the research topic "Bias and Fairness in AI" conducted by Deepak John Reji and Shaina Raza. If you use this work (code, model or dataset), please star the repository:
> Bias & Fairness in AI, (2022), GitHub repository, <https://github.com/dreji18/Fairness-in-AI>
|
{"language": ["en"], "tags": ["Text Classification"], "co2_eq_emissions": 0.319355, "widget": [{"text": "Nevertheless, Trump and other Republicans have tarred the protests as havens for terrorists intent on destroying property.", "example_title": "Biased example 1"}, {"text": "Billie Eilish issues apology for mouthing an anti-Asian derogatory term in a resurfaced video.", "example_title": "Biased example 2"}, {"text": "Christians should make clear that the perpetuation of objectionable vaccines and the lack of alternatives is a kind of coercion.", "example_title": "Biased example 3"}, {"text": "There have been a protest by a group of people", "example_title": "Non-Biased example 1"}, {"text": "While emphasizing he\u2019s not singling out either party, Cohen warned about the danger of normalizing white supremacist ideology.", "example_title": "Non-Biased example 2"}]}
|
d4data/bias-detection-model
| null |
[
"transformers",
"tf",
"distilbert",
"text-classification",
"Text Classification",
"en",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
token-classification
|
spacy
|
## About the Model
This model is trained on the MBAD dataset to recognize biased words/phrases in a sentence. It was built on top of roberta-base offered by spaCy transformers.
This model is associated with https://huggingface.co/d4data/bias-detection-model
| Feature | Description |
| --- | --- |
| **Name** | `Bias Recognizer Model` |
| **Version** | `1.0` |
| **spaCy** | `>=3.2.1,<3.3.0` |
| **Default Pipeline** | `transformer`, `ner` |
| **Components** | `transformer`, `ner` |
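The card does not include a usage snippet; a minimal sketch, assuming the pipeline has first been installed as a Python package (for example from the wheel published under the model's "Files and versions" tab) so that spaCy can load it by name:
```python
import spacy

# Assumes the en_pipeline package has already been installed from this repo.
nlp = spacy.load("en_pipeline")

doc = nlp("Billie Eilish issues apology for mouthing an anti-Asian "
          "derogatory term in a resurfaced video.")
for ent in doc.ents:
    print(ent.text, ent.label_)  # biased spans and their labels
```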
## Author
This model is part of the research topic "Bias and Fairness in AI" conducted by Deepak John Reji and Shaina Raza. If you use this work (code, model or dataset), please star the repository:
> Bias & Fairness in AI, (2022), GitHub repository, <https://github.com/dreji18/Fairness-in-AI>
|
{"language": ["en"], "tags": ["spacy", "token-classification"], "widget": [{"text": "Billie Eilish issues apology for mouthing an anti-Asian derogatory term in a resurfaced video.", "example_title": "Biased example 1"}, {"text": "Christians should make clear that the perpetuation of objectionable vaccines and the lack of alternatives is a kind of coercion.", "example_title": "Biased example 2"}, {"text": "But, whether this switch constitutes a true win for the racist right or not, it\u2019s clear that MAGA conservatives are highly attuned to how decisions are made in the White House and which positions they want to control.", "example_title": "Biased example 3"}, {"text": "The fact that the abortion rate among American blacks is far higher than the rate for whites is routinely chronicled and mourned.", "example_title": "Biased example 4"}]}
|
d4data/en_pipeline
| null |
[
"spacy",
"token-classification",
"en",
"model-index",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-classification
|
transformers
|
## About the Model
An environmental due diligence classification model, trained on a customized environmental dataset to detect contamination and remediation activities (both prevailing and planned) as part of the site assessment process. This model can identify the source of contamination, the extent of contamination, the types of contaminants present at the site, and the flow of contaminants and their interaction with groundwater, surface water, and other surrounding water bodies.
This model was built on top of the distilbert-base-uncased model and trained for 10 epochs with a batch size of 16, a learning rate of 5e-5, and a maximum sequence length of 512.
- Dataset: open-source news data + custom data
- Carbon emission: 0.1069 kg
## Usage
The easiest way is to load the model through the pipeline object offered by the transformers library.
```python
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("d4data/environmental-due-diligence-model")
model = TFAutoModelForSequenceClassification.from_pretrained("d4data/environmental-due-diligence-model")
classifier = pipeline('text-classification', model=model, tokenizer=tokenizer)  # pass device=0 (or 1) to pipeline(...) to run on a GPU
classifier("At the every month post-injection monitoring event, TCE, carbon tetrachloride, and chloroform concentrations were above CBSGs in three of the wells")
```
## Author
This model is part of the research topic "Environmental Due Diligence" conducted by Deepak John Reji and Afreen Aman. If you use this work (code, model or dataset), please cite as:
> Environmental Due Diligence, (2020), https://www.sciencedirect.com/science/article/pii/S2665963822001117
## You can support me here :)
<a href="https://www.buymeacoffee.com/deepakjohnreji" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a>
|
{"language": ["en"], "tags": ["Text Classification"], "co2_eq_emissions": 0.1069, "widget": [{"text": "At the every month post-injection monitoring event, TCE, carbon tetrachloride, and chloroform concentrations were above CBSGs in three of the wells", "example_title": "Remediation Standards"}, {"text": "TRPH exceedances were observed in the subsurface soils immediately above the water table and there are no TRPH exceedances in surface soils.", "example_title": "Extent of Contamination"}, {"text": "weathered shale was encountered below the surface area with fluvial deposits. Sediments in the coastal plain region are found above and below the bedrock with sandstones and shales that form the basement rock", "example_title": "Geology"}]}
|
d4data/environmental-due-diligence-model
| null |
[
"transformers",
"tf",
"distilbert",
"text-classification",
"Text Classification",
"en",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|