pipeline_tag (string) | library_name (string) | text (string, 0–18.3M) | metadata (string, 2–1.07B) | id (string, 5–122) | last_modified (null) | tags (list, 1–1.84k) | sha (null) | created_at (string, 25)
---|---|---|---|---|---|---|---|---
feature-extraction
|
transformers
|
## Model description
This is the question encoder for the Polish DPR question answering model. The full model consists of two encoders.
Please read the [context encoder documentation](https://huggingface.co/enelpol/czywiesz-context) for details of the model.
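At retrieval time, DPR scores each passage by the dot product between the question embedding produced by this encoder and the passage embedding produced by the context encoder. A minimal sketch of that scoring step, with toy vectors standing in for real encoder outputs:

```python
def dpr_score(question_emb, context_emb):
    """DPR relevance: inner product of question and context embeddings."""
    return sum(q * c for q, c in zip(question_emb, context_emb))

def rank_contexts(question_emb, context_embs):
    """Indices of contexts sorted by descending relevance to the question."""
    scores = [dpr_score(question_emb, c) for c in context_embs]
    return sorted(range(len(context_embs)), key=lambda i: -scores[i])

# Toy vectors stand in for the pooled outputs of the two encoders.
question = [1.0, 0.0, 2.0]
contexts = [[0.0, 1.0, 0.0], [1.0, 0.0, 1.0], [2.0, 0.0, 2.0]]
print(rank_contexts(question, contexts))  # -> [2, 1, 0]
```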
|
{"language": "pl", "datasets": ["enelpol/czywiesz"], "task_categories": ["question_answering"], "task_ids": ["open-domain-qa"], "multilinguality": ["monolingual"], "size_categories": ["1k<n<10K"]}
|
enelpol/czywiesz-question
| null |
[
"transformers",
"pytorch",
"bert",
"feature-extraction",
"pl",
"dataset:enelpol/czywiesz",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
token-classification
|
transformers
|
{}
|
enelpol/poleval2021-task1
| null |
[
"transformers",
"pytorch",
"bert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text-classification
|
transformers
|
{}
|
enelpol/poleval2021-task2
| null |
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text2text-generation
|
transformers
|
Trained with prefix `ocr: `.
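Since the model was trained with that prefix, inference inputs should carry it too. A hedged sketch (the helper name is illustrative, not part of the model's API):

```python
def build_ocr_input(text, prefix="ocr: "):
    """Prepend the training prefix so inference inputs match the training format."""
    return prefix + text

print(build_ocr_input("Dzien dobry"))  # -> ocr: Dzien dobry

# With transformers (not run here), the prefixed text would be passed as:
# from transformers import pipeline
# correct = pipeline("text2text-generation", model="enelpol/poleval2021-task3")
# correct(build_ocr_input(noisy_ocr_line))
```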
|
{}
|
enelpol/poleval2021-task3
| null |
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-classification
|
transformers
|
{}
|
enod/esg-bert
| null |
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
fill-mask
|
transformers
|
{}
|
enriqueyanh/bert1
| null |
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
fill-mask
|
transformers
|
{}
|
enriqueyanh/bert_cn
| null |
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text-generation
|
transformers
|
{}
|
ensamblador/gpt2-derecha-with-bos-eos-48heads
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text-generation
|
transformers
|
{}
|
ensamblador/gpt2-derecha-with-bos-eos-8heads
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text-generation
|
transformers
|
{}
|
ensamblador/gpt2-es-48heads
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text-generation
|
transformers
|
{}
|
ensamblador/gpt2-es-8heads
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text-generation
|
transformers
|
{}
|
ensamblador/gpt2-twitter-politico
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text-generation
|
transformers
|
{}
|
ensamblador/gpt2_espanol_8hx512pos
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text-generation
|
transformers
|
{}
|
ensamblador/model_es_custom
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
entelecheia/eKonBERT
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null |
transformers
|
{}
|
entelecheia/ekonbert-base
| null |
[
"transformers",
"pytorch",
"jax",
"bert",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null |
transformers
|
{}
|
entelecheia/ekonelectra-base-discriminator
| null |
[
"transformers",
"pytorch",
"electra",
"pretraining",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
fill-mask
|
transformers
|
{}
|
entelecheia/ekonelectra-base-generator
| null |
[
"transformers",
"pytorch",
"electra",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null |
transformers
|
{}
|
entelecheia/ekonelectra-small-discriminator
| null |
[
"transformers",
"pytorch",
"electra",
"pretraining",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
fill-mask
|
transformers
|
{}
|
entelecheia/ekonelectra-small-generator
| null |
[
"transformers",
"pytorch",
"electra",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
enyakong/tek
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
enzomarcus/enzo
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
enzomarcus/enzooo
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
feature-extraction
|
transformers
|
{}
|
eooitom/phobertlong4096
| null |
[
"transformers",
"pytorch",
"roberta",
"feature-extraction",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text-generation
|
transformers
|
This model is fine-tuned on the Bhagvad Gita and generates text based on prompts.
Usage example:
```
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("epsil/bhagvad_gita")
model = AutoModelForCausalLM.from_pretrained("epsil/bhagvad_gita")
```
Input
```
from transformers import pipeline

generator = pipeline('text-generation', model=model, tokenizer=tokenizer)
result = generator('Krishna show me the right path')[0]['generated_text']
print(result)
```
Output
```
Krishna show me the right path, and I also to remember the lessons, and to remember them right.
Sama! in His Day, and by Thy own Eternal Grace.
A man like that who shall come to us
```
> Created by [Saurabh Mishra](https://www.linkedin.com/in/saurabh-mishra-12b5a1216/)
> Made with <span style="color: #e25555;">♥</span> in India
|
{}
|
epsil/bhagvad_gita
| null |
[
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
feature-extraction
|
transformers
|
{}
|
epwalsh/bert-xsmall-dummy
| null |
[
"transformers",
"pytorch",
"jax",
"bert",
"feature-extraction",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
equ1/mnist_interface
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
er/17731000248
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
fill-mask
|
transformers
|
{}
|
erasedwalt/rubert-base-vet
| null |
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
feature-extraction
|
transformers
|
{}
|
erayyildiz/electra-turkish-cased
| null |
[
"transformers",
"pytorch",
"electra",
"feature-extraction",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
erdody/distilbert-base-uncased-finetuned-squad
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
erensezener/norwegian-t5-base
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
erensezener/t5-base-it
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text2text-generation
|
transformers
|
# Persian-t5-formality-transfer
This is a formality style transfer model for Persian that converts colloquial text into formal text. It is based on [the monolingual T5 model for Persian](https://huggingface.co/Ahmad/parsT5-base) and the [Persian T5 paraphraser](https://huggingface.co/erfan226/persian-t5-paraphraser).
Note: This model is still in development, so its outputs may not be very good yet. You can, however, experiment with different decoder settings to get better results. For more info, see this [link](https://huggingface.co/blog/how-to-generate).
## Usage
```python
# pip install transformers
from transformers import T5ForConditionalGeneration, AutoTokenizer, pipeline

model_path = 'erfan226/persian-t5-formality-transfer'
model = T5ForConditionalGeneration.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)
pipe = pipeline(task='text2text-generation', model=model, tokenizer=tokenizer)

def paraphrase(text):
    for _ in range(3):
        out = pipe(text, encoder_no_repeat_ngram_size=4, do_sample=True, num_beams=5, max_length=128)[0]['generated_text']
        print("Paraphrase:", out)
text = "من با دوستام میرم بازی"
print("Original:", text)
paraphrase(text)
# Original: من با دوستام میرم بازی
# Paraphrase: دوست دارم با دوستانم بازی کنم.
# Paraphrase: من با دوستانم میرم...
# Paraphrase: من با دوستام بازی می کنم.
```
## Training data
TBD
|
{"language": "fa", "tags": ["Style transfer", "Formality style transfer"], "widget": [{"text": "\u0645\u0646 \u0628\u0627 \u062f\u0648\u0633\u062a\u0627\u0645 \u0645\u06cc\u0631\u0645 \u0628\u0627\u0632\u06cc."}, {"text": "\u0645\u0646 \u0628\u0647 \u062e\u0648\u0646\u0647 \u062f\u0648\u0633\u062a\u0645 \u0631\u0641\u062a\u0645."}]}
|
erfan226/persian-t5-formality-transfer
| null |
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"Style transfer",
"Formality style transfer",
"fa",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text2text-generation
|
transformers
|
# Persian-t5-paraphraser
This is a paraphrasing model for the Persian language. It is based on [the monolingual T5 model for Persian](https://huggingface.co/Ahmad/parsT5-base).
## Usage
```python
# pip install transformers
from transformers import T5ForConditionalGeneration, AutoTokenizer, pipeline

model_path = 'erfan226/persian-t5-paraphraser'
model = T5ForConditionalGeneration.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)
pipe = pipeline(task='text2text-generation', model=model, tokenizer=tokenizer)

def paraphrase(text):
    for _ in range(5):
        out = pipe(text, encoder_no_repeat_ngram_size=5, do_sample=True, num_beams=5, max_length=128)[0]['generated_text']
        print("Paraphrase:", out)
text = "این یک مقالهٔ خرد آلمان است. میتوانید با گسترش آن به ویکیپدیا کمک کنید."
print("Original:", text)
paraphrase(text)
# Original: این یک مقالهٔ خرد آلمان است. میتوانید با گسترش آن به ویکیپدیا کمک کنید.
# Paraphrase: این یک مقالهٔ کوچک است.
# Paraphrase: این یک مقالهٔ کوچک است.
# Paraphrase: شما می توانید با گسترش این مقاله، به کسب و کار خود کمک کنید.
# Paraphrase: می توانید با گسترش این مقالهٔ خرد آلمان کمک کنید.
# Paraphrase: شما می توانید با گسترش این مقالهٔ خرد، به گسترش آن کمک کنید.
```
## Training data
This model was trained on the Persian subset of the [Tapaco dataset](https://huggingface.co/datasets/tapaco). It should be noted that this model was trained on a very small dataset and therefore the performance might not be as expected, for now.
|
{"language": "fa", "tags": ["paraphrasing"], "datasets": ["tapaco"], "widget": [{"text": "\u0627\u06cc\u0646 \u06cc\u06a9 \u0645\u0642\u0627\u0644\u0647\u0654 \u062e\u0631\u062f \u0622\u0644\u0645\u0627\u0646 \u0627\u0633\u062a. \u0645\u06cc\u200c\u062a\u0648\u0627\u0646\u06cc\u062f \u0628\u0627 \u06af\u0633\u062a\u0631\u0634 \u0622\u0646 \u0628\u0647 \u0648\u06cc\u06a9\u06cc\u200c\u067e\u062f\u06cc\u0627 \u06a9\u0645\u06a9 \u06a9\u0646\u06cc\u062f."}, {"text": "\u0628\u0631\u0627\u06cc \u062e\u0631\u06cc\u062f \u06cc\u06a9 \u06a9\u062a\u0627\u0628 \u0628\u0627\u06cc\u062f \u0627\u0632 \u0641\u0631\u0648\u0634\u06af\u0627\u0647 \u0627\u06cc\u0646\u062a\u0631\u0646\u062a\u06cc \u0627\u0633\u062a\u0641\u0627\u062f\u0647 \u06a9\u0646\u06cc\u062f."}]}
|
erfan226/persian-t5-paraphraser
| null |
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"paraphrasing",
"fa",
"dataset:tapaco",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null | null |
{}
|
erga/bert-base-cased-finetuned-inf8460kaggle
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
eric14a/xlm-roberta-base-finetuned-panx-de
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
question-answering
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-squad
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 4.0178
## Model description
Base model weights were frozen; only the last layer (qa outputs) was fine-tuned.
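The freezing step described above can be sketched as follows; the helper and its reliance on the `qa_outputs` parameter name are illustrative assumptions, not the card's actual training code:

```python
def freeze_all_but(model, trainable_substrings=("qa_outputs",)):
    """Freeze every parameter whose name matches none of the given substrings."""
    for name, param in model.named_parameters():
        param.requires_grad = any(s in name for s in trainable_substrings)
    return model

# With a real checkpoint this would look like:
#   model = freeze_all_but(AutoModelForQuestionAnswering.from_pretrained("bert-base-uncased"))
# so that only the qa_outputs parameters receive gradient updates.
```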
## Training and evaluation data
Achieved EM: 8.013245033112582, F1: 15.9706088498649
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 4.3602 | 1.0 | 5533 | 4.3460 |
| 4.0995 | 2.0 | 11066 | 4.0787 |
| 4.0302 | 3.0 | 16599 | 4.0178 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "bert-base-uncased-finetuned-squad", "results": []}]}
|
ericRosello/bert-base-uncased-finetuned-squad-frozen-v1
| null |
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
question-answering
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-squad
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4571
## Model description
Most base model weights were frozen; only the last layer (qa outputs) and the last 3 encoder layers were fine-tuned.
## Training and evaluation data
Achieved EM: 76.77388836329234, F1: 85.41893520501723
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 1.2944 | 1.0 | 44262 | 1.3432 |
| 1.0152 | 2.0 | 88524 | 1.3450 |
| 1.0062 | 3.0 | 132786 | 1.4571 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "bert-base-uncased-finetuned-squad", "results": []}]}
|
ericRosello/bert-base-uncased-finetuned-squad-frozen-v2
| null |
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null | null |
{}
|
ericRosello/bert-base-uncased-finetuned-squad
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
feature-extraction
|
transformers
|
{}
|
ericRosello/bert-frozen-v1
| null |
[
"transformers",
"pytorch",
"bert",
"feature-extraction",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
question-answering
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3629
## Model description
Base model weights were frozen; only the last layer (qa outputs) was fine-tuned.
## Training and evaluation data
Achieved EM: 4.7776726584673606, F1: 11.440882287905591
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 4.679 | 1.0 | 5533 | 4.6713 |
| 4.4171 | 2.0 | 11066 | 4.4218 |
| 4.3464 | 3.0 | 16599 | 4.3629 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "distilbert-base-uncased-finetuned-squad", "results": []}]}
|
ericRosello/distilbert-base-uncased-finetuned-squad-frozen-v1
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
question-answering
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2104
## Model description
Most base model weights were frozen; only the last layer (qa outputs) and the last 3 encoder layers were fine-tuned.
## Training and evaluation data
Achieved EM: 73.519394512772, F1: 82.71779517079237
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.3937 | 1.0 | 5533 | 1.2915 |
| 1.1522 | 2.0 | 11066 | 1.2227 |
| 1.0055 | 3.0 | 16599 | 1.2104 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "distilbert-base-uncased-finetuned-squad", "results": []}]}
|
ericRosello/distilbert-base-uncased-finetuned-squad-frozen-v2
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null | null |
{}
|
ericRosello/distilbert-base-uncased-finetuned-squad
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
feature-extraction
|
transformers
|
{}
|
ericRosello/distilbert-frozen-v1
| null |
[
"transformers",
"pytorch",
"distilbert",
"feature-extraction",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
feature-extraction
|
transformers
|
{}
|
ericRosello/distilbert-frozen-v2
| null |
[
"transformers",
"pytorch",
"distilbert",
"feature-extraction",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
ericRosello/qa_outputs.bias-frozen-v2.5
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
question-answering
|
transformers
|
{}
|
ericRosello/results
| null |
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
feature-extraction
|
transformers
|
{}
|
ericRosello/trial
| null |
[
"transformers",
"pytorch",
"distilbert",
"feature-extraction",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
fill-mask
|
transformers
|
{}
|
erica/kc_900
| null |
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
fill-mask
|
transformers
|
{}
|
erica/kcbase400
| null |
[
"transformers",
"pytorch",
"jax",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
fill-mask
|
transformers
|
{}
|
erica/kob400
| null |
[
"transformers",
"pytorch",
"jax",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
fill-mask
|
transformers
|
{}
|
erica/kob900
| null |
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
fill-mask
|
transformers
|
{}
|
erica/krm_fin
| null |
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text-classification
|
transformers
|
{}
|
erica/krm_sa2
| null |
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text-classification
|
transformers
|
{}
|
erica/krm_sa3
| null |
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
fill-mask
|
transformers
|
{}
|
ericchchiu/dummy-model
| null |
[
"transformers",
"pytorch",
"camembert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
ericdoug/reoberta_qq
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text-generation
|
transformers
|
# Harry Potter DialoGPT Model
|
{"tags": ["conversational"]}
|
ericklasco/DialoGPT-small-erickHarryPotter
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-generation
|
transformers
|
# Rick
|
{"tags": ["conversational"]}
|
ericzhou/DialoGPT-Medium-Rick
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-generation
|
transformers
|
# rick
|
{"tags": ["conversational"]}
|
ericzhou/DialoGPT-Medium-Rick_v2
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-generation
|
transformers
|
# elon
|
{"tags": ["conversational"]}
|
ericzhou/DialoGPT-medium-elon
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-generation
|
transformers
|
{"tags": ["conversational"]}
|
ericzhou/tsundere_v1
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
erikedwards4/roberta
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text-generation
|
transformers
|
# GPT2 Keyword Based Lecture Generator
## Model description
GPT2 fine-tuned on the TED Talks Dataset (published under the Creative Commons BY-NC-ND license).
## Intended uses
Used to generate spoken-word lectures.
### How to use
Input text:
`<BOS> title <|SEP|> Some keywords <|SEP|>`
Keyword format: `"Main Topic"."Subtopic1","Subtopic2","Subtopic3"`
Code Example:
```
import torch

# title, keywords, tokenizer and model are defined elsewhere
prompt = "<BOS>" + title + "<|SEP|>" + keywords + "<|SEP|>"
generated = torch.tensor(tokenizer.encode(prompt)).unsqueeze(0)
model.eval()
```
|
{}
|
erikinfo/gpt2TEDlectures
| null |
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null | null |
{}
|
ernieho/model_name
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text-classification
|
transformers
|
# Classifying Text into DB07 Codes
This model is [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) fine-tuned to classify Danish descriptions of activities into [Dansk Branchekode DB07](https://www.dst.dk/en/Statistik/dokumentation/nomenklaturer/dansk-branchekode-db07) codes.
## Data
Approximately 2.5 million business names and descriptions of activities from Norwegian and Danish businesses were used to fine-tune the model. The Norwegian descriptions were translated into Danish and the Norwegian SN 2007 codes were translated into Danish DB07 codes.
Activity descriptions and business names were concatenated but separated by the separator token `</s>`. Thus, the model was trained on input texts in the format `f"{description_of_activity}</s>{business_name}"`.
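That input format can be reproduced with a small helper (the function name is illustrative, not part of the model's API):

```python
def build_db07_input(description_of_activity, business_name, sep="</s>"):
    """Join activity description and business name with the separator token the model was trained on."""
    return f"{description_of_activity}{sep}{business_name}"

print(build_db07_input("We sell clothes", "Clothing ApS"))
# -> We sell clothes</s>Clothing ApS
```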
## Quick Start
```python
from transformers import pipeline, AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("erst/xlm-roberta-base-finetuned-db07")
model = AutoModelForSequenceClassification.from_pretrained("erst/xlm-roberta-base-finetuned-db07")
pl = pipeline(
    "sentiment-analysis",
    model=model,
    tokenizer=tokenizer,
    return_all_scores=False,
)
pl("Vi sælger sko")
pl("We sell clothes</s>Clothing ApS")
```
## License
This model is released under the MIT License.
|
{}
|
erst/xlm-roberta-base-finetuned-db07
| null |
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-classification
|
transformers
|
# Classifying Text into NACE Codes
This model is [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) fine-tuned to classify descriptions of activities into [NACE Rev. 2](https://ec.europa.eu/eurostat/web/nace-rev2) codes.
## Data
The data used to fine-tune the model consist of 2.5 million descriptions of activities from Norwegian and Danish businesses. To improve the model's multilingual performance, random samples of the Norwegian and Danish descriptions were machine translated into the following languages:
- English
- German
- Spanish
- French
- Finnish
- Polish
## Quick Start
```python
from transformers import pipeline, AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("erst/xlm-roberta-base-finetuned-nace")
model = AutoModelForSequenceClassification.from_pretrained("erst/xlm-roberta-base-finetuned-nace")
pl = pipeline(
    "sentiment-analysis",
    model=model,
    tokenizer=tokenizer,
    return_all_scores=False,
)
pl("The purpose of our company is to build houses")
```
## License
This model is released under the MIT License.
|
{}
|
erst/xlm-roberta-base-finetuned-nace
| null |
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null | null |
{}
|
ervis/aaaa
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
ervis/test
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-cocktails_recipe-base
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "t5-base", "model-index": [{"name": "t5-cocktails_recipe-base", "results": []}]}
|
erwanlc/t5-cocktails_recipe-base
| null |
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:t5-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-cocktails_recipe-small
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "t5-base", "model-index": [{"name": "t5-cocktails_recipe-small", "results": []}]}
|
erwanlc/t5-cocktails_recipe-small
| null |
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:t5-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-coktails_recipe-base
This model is a fine-tuned version of [google/t5-v1_1-base](https://huggingface.co/google/t5-v1_1-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "google/t5-v1_1-base", "model-index": [{"name": "t5-coktails_recipe-base", "results": []}]}
|
erwanlc/t5-coktails_recipe-base
| null |
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/t5-v1_1-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-coktails_recipe-small
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "t5-coktails_recipe-small", "results": []}]}
|
erwanlc/t5-coktails_recipe-small
| null |
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null | null |
{}
|
eshaaftab900/distilbert-base-uncased-finetuned-squad
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
eshaoliu/dayumodel
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
esnaultloi/streamlite
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text-classification
|
transformers
|
{}
|
espejelomar/BETO_Clasificar_Tweets_Mexicano
| null |
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
fill-mask
|
transformers
|
{}
|
espejelomar/beto-base-cased
| null |
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
image-classification
|
fastai
|
## Pet breeds classification model
Finetuned model on the Oxford-IIIT Pet Dataset. It was introduced in
[this paper](https://www.robots.ox.ac.uk/~vgg/publications/2012/parkhi12a/) and first released on
[this webpage](https://www.robots.ox.ac.uk/~vgg/data/pets/).
The pretrained model was trained on the ImageNet dataset, a dataset that has 100,000+ images across 200 different classes. It was introduced in [this paper](https://image-net.org/static_files/papers/imagenet_cvpr09.pdf) and is available [on this webpage](https://image-net.org/download.php).
Disclaimer: The model was fine-tuned after [Chapter 5](https://github.com/fastai/fastbook/blob/master/05_pet_breeds.ipynb) of [Deep Learning for Coders with Fastai and Pytorch: AI Applications Without a PhD (2020)](https://github.com/fastai/fastbook) written by Jeremy Howard and Sylvain Gugger.
## Model description
The model was finetuned using the `cnn_learner` method of the fastai library with a ResNet-34 backbone pretrained on the ImageNet dataset. The fastai library uses PyTorch for the underlying operations. `cnn_learner` automatically gets a pretrained model from a given architecture with a custom head that is suitable for the target data.
ResNet-34 is a 34-layer convolutional neural network. It takes residuals from each layer and uses them in the subsequent connected layers. Advantages of a resnet architecture ([Neurohive, 2019](https://neurohive.io/en/popular-networks/resnet/)):
- They are easy to optimize, while "plain" networks (that simply stack layers) show higher training error as depth increases.
- They can easily gain accuracy from greatly increased depth, producing results that are better than previous networks.
Please refer to the original paper '[Deep Residual Learning for Image Recognition](https://arxiv.org/pdf/1512.03385.pdf)' written by Kaiming He, Xiangyu Zhang, Shaoqing Ren and Jian Sun.
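The skip connection described above can be sketched in a few lines of PyTorch. This is a minimal illustration of the idea, not the actual torchvision/fastai implementation; `BasicBlock` here is a simplified stand-in that keeps the channel count fixed:

```python
import torch
import torch.nn as nn

class BasicBlock(nn.Module):
    """Simplified residual block: output = ReLU(F(x) + x)."""

    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = torch.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        # The residual (skip) connection: add the input back before the final ReLU
        return torch.relu(out + x)
```

Because each block only has to learn a residual on top of the identity mapping, stacking many such blocks stays easy to optimize.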
Specifically, the model was obtained:
```python
learn = cnn_learner(dls, resnet34, metrics=error_rate)
learn.fine_tune(2)
```
## How to use
Download the model this way:
```python
from huggingface_hub import hf_hub_download
from fastai.learner import load_learner
model = load_learner(
hf_hub_download('espejelomar/fastai-pet-breeds-classification', filename="model.pkl")
)
```
Then you can use your downloaded fastai model in any way you want. For example, if the input is a PIL Image, with the following code you can obtain the resulting outputs for each class:
```python
import numpy as np

_, _, preds = model.predict(np.array(inputs))
```
## Training data
The Resnet34 model was pretrained on [ImageNet](https://image-net.org/static_files/papers/imagenet_cvpr09.pdf), a dataset that has 100,000+ images across 200 different classes, and fine-tuned on [The Oxford-IIIT Pet Dataset](https://www.robots.ox.ac.uk/~vgg/data/pets/).
## Preprocessing
For more detailed information on the preprocessing procedure, refer to the [Chapter 5](https://github.com/fastai/fastbook/blob/master/05_pet_breeds.ipynb) of [Deep Learning for Coders with Fastai and Pytorch: AI Applications Without a PhD (2020)](https://github.com/fastai/fastbook).
Two main strategies are followed when presizing the images:
- Resize images to relatively "large" dimensions—that is, dimensions significantly larger than the target training dimensions.
- Compose all of the common augmentation operations (including a resize to the final target size) into one, and perform the combined operation on the GPU only once at the end of processing, rather than performing the operations individually and interpolating multiple times.
"The first step, the resize, creates images large enough that they have spare margin to allow further augmentation transforms on their inner regions without creating empty zones. This transformation works by resizing to a square, using a large crop size. On the training set, the crop area is chosen randomly, and the size of the crop is selected to cover the entire width or height of the image, whichever is smaller.
In the second step, the GPU is used for all data augmentation, and all of the potentially destructive operations are done together, with a single interpolation at the end." ([Howard and Gugger, 2020](https://github.com/fastai/fastbook))
Specifically, the following code is used for preprocessing:
```python
#hide_input
#id interpolations
#caption A comparison of fastai's data augmentation strategy (left) and the traditional approach (right).
dblock1 = DataBlock(blocks=(ImageBlock(), CategoryBlock()),
get_y=parent_label,
item_tfms=Resize(460))
# Place an image in the 'images/grizzly.jpg' subfolder where this notebook is located before running this
dls1 = dblock1.dataloaders([(Path.cwd()/'images'/'grizzly.jpg')]*100, bs=8)
dls1.train.get_idxs = lambda: Inf.ones
x,y = dls1.valid.one_batch()
_,axs = subplots(1, 2)
x1 = TensorImage(x.clone())
x1 = x1.affine_coord(sz=224)
x1 = x1.rotate(draw=30, p=1.)
x1 = x1.zoom(draw=1.2, p=1.)
x1 = x1.warp(draw_x=-0.2, draw_y=0.2, p=1.)
tfms = setup_aug_tfms([Rotate(draw=30, p=1, size=224), Zoom(draw=1.2, p=1., size=224),
Warp(draw_x=-0.2, draw_y=0.2, p=1., size=224)])
x = Pipeline(tfms)(x)
#x.affine_coord(coord_tfm=coord_tfm, sz=size, mode=mode, pad_mode=pad_mode)
TensorImage(x[0]).show(ctx=axs[0])
TensorImage(x1[0]).show(ctx=axs[1]);
```
### BibTeX entry and citation info
```bibtex
@book{howard2020deep,
author = {Howard, J. and Gugger, S.},
title = {Deep Learning for Coders with Fastai and Pytorch: AI Applications Without a PhD},
isbn = {9781492045526},
year = {2020},
url = {https://books.google.no/books?id=xd6LxgEACAAJ},
publisher = {O'Reilly Media, Incorporated},
}
```
|
{"library_name": "fastai", "tags": ["image-classification", "fastai"], "datasets": ["Oxford-IIIT Pet Dataset", "ImageNet"]}
|
espejelomar/fastai-pet-breeds-classification
| null |
[
"fastai",
"image-classification",
"arxiv:1512.03385",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
audio-to-audio
|
espnet
|
## Example ESPnet2 ENH model
### `Chenda_Li/wsj0_2mix_enh_train_enh_conv_tasnet_raw_valid.si_snr.ave`
♻️ Imported from https://zenodo.org/record/4498562/
This model was trained by Chenda Li using wsj0_2mix/enh1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "speech-enhancement", "audio-to-audio"], "datasets": ["wsj0_2mix"]}
|
espnet/Chenda_Li_wsj0_2mix_enh_train_enh_conv_tasnet_raw_valid.si_snr.ave
| null |
[
"espnet",
"audio",
"speech-enhancement",
"audio-to-audio",
"en",
"dataset:wsj0_2mix",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
audio-to-audio
|
espnet
|
## Example ESPnet2 ENH model
### `Chenda_Li/wsj0_2mix_enh_train_enh_rnn_tf_raw_valid.si_snr.ave`
♻️ Imported from https://zenodo.org/record/4498554/
This model was trained by Chenda Li using wsj0_2mix/enh1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "speech-enhancement", "audio-to-audio"], "datasets": ["wsj0_2mix"]}
|
espnet/Chenda_Li_wsj0_2mix_enh_train_enh_rnn_tf_raw_valid.si_snr.ave
| null |
[
"espnet",
"audio",
"speech-enhancement",
"audio-to-audio",
"en",
"dataset:wsj0_2mix",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
automatic-speech-recognition
|
espnet
|
## ESPnet2 ASR model
### `Dan_Berrebbi_aishell4_asr`
This model was trained by dan_berrebbi using aishell4 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout da1a26652f7d5a019cc24ad1e0e6e844f2b57e1b
pip install -e .
cd egs2/aishell4/asr1
./run.sh --skip_data_prep false --skip_train true --download_model Dan_Berrebbi_aishell4_asr
```
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Tue Sep 21 09:36:01 EDT 2021`
- python version: `3.7.11 (default, Jul 27 2021, 14:32:16) [GCC 7.5.0]`
- espnet version: `espnet 0.10.3a1`
- pytorch version: `pytorch 1.9.0`
- Git hash: `7887faeabbc2299922267928e190ed89cb032a36`
- Commit date: `Mon Sep 20 16:25:02 2021 -0400`
## asr_fine_tune5_100ep
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_rnn_lm_lm_nuit_valid.loss.ave_asr_model_valid.acc.ave/dev|599|601|6.8|92.7|0.5|0.0|93.2|93.2|
|decode_transformer_lm_lm_nuit_valid.loss.ave_asr_model_valid.acc.ave/dev|599|601|6.8|92.8|0.3|0.0|93.2|93.2|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_rnn_lm_lm_nuit_valid.loss.ave_asr_model_valid.acc.ave/dev|599|15936|66.9|25.6|7.5|9.8|42.9|93.2|
|decode_transformer_lm_lm_nuit_valid.loss.ave_asr_model_valid.acc.ave/dev|599|15936|64.7|27.6|7.7|11.0|46.3|93.2|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
## ASR config
<details><summary>expand</summary>
```
config: conf/tuning/train_asr_conformer5.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_fine_tune5_100ep
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 100
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 10
grad_clip: 3
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 10000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_zh_char/train/speech_shape
- exp/asr_stats_raw_zh_char/train/text_shape.char
valid_shape_file:
- exp/asr_stats_raw_zh_char/valid/speech_shape
- exp/asr_stats_raw_zh_char/valid/text_shape.char
batch_type: numel
valid_batch_type: null
fold_length:
- 51200
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train_nodev/wav.scp
- speech
- sound
- - dump/raw/train_nodev/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/dev/wav.scp
- speech
- sound
- - dump/raw/dev/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 4.0
scheduler: noamlr
scheduler_conf:
model_size: 256
warmup_steps: 25000
token_list:
- <blank>
- <unk>
- ,
- 的
- 是
- 个
- 这
- 一
- 。
- 就
- 儿
- 嗯
- 们
- 呃
- 我
- 有
- <sil>
- 那
- 说
- 不
- 些
- 也
- 他
- 你
- 要
- 后
- 以
- 咱
- 在
- 啊
- 了
- 然
- 家
- 都
- 来
- 还
- 可
- 子
- 下
- 上
- 时
- 比
- 话
- 孩
- 呢
- 去
- 人
- 好
- 对
- 能
- 么
- 吧
- 学
- 多
- 到
- 看
- 为
- 进
- 把
- 大
- 做
- 生
- 种
- 品
- 给
- 没
- 行
- 现
- 小
- 会
- 作
- 较
- 方
- 块
- 业
- 让
- 点
- 定
- 因
- 什
- 长
- 面
- 如
- 安
- 客
- 问
- 过
- 车
- 出
- 啦
- 边
- 候
- 主
- 所
- 题
- 买
- 销
- 天
- 意
- 自
- 全
- 动
- 工
- '&'
- 老
- 或
- 者
- 年
- 着
- 实
- 活
- 理
- 包
- 样
- 再
- 区
- 用
- 呀
- 零
- 员
- 发
- 先
- 部
- 放
- 门
- 情
- 像
- 分
- 售
- 很
- 开
- 己
- 十
- 括
- 跟
- 事
- 需
- 更
- 其
- 装
- 市
- 成
- 里
- 物
- 别
- 间
- 第
- 次
- 中
- 提
- 超
- 顾
- 保
- 感
- 加
- 量
- 二
- 和
- 各
- 嘛
- 新
- 每
- 完
- 力
- 消
- 得
- 店
- 本
- 通
- 习
- 觉
- 道
- 心
- 校
- 菜
- 交
- 哪
- 产
- 于
- 位
- 电
- 想
- 三
- 况
- 度
- 期
- 应
- 但
- 教
- 体
- 常
- 师
- 它
- 高
- 前
- 之
- 西
- 特
- 商
- 果
- 场
- 重
- 防
- 管
- 起
- 地
- 该
- 东
- 少
- 打
- 费
- 当
- 带
- 服
- 口
- 购
- 知
- 回
- 同
- 钱
- 外
- 户
- 注
- 促
- 价
- 解
- <#>
- 水
- 百
- 今
- 太
- 最
- 报
- 怎
- 才
- 等
- 及
- 关
- <->
- 肯
- 火
- 机
- 流
- 制
- 送
- 手
- 确
- 法
- 写
- 玩
- 传
- 路
- 班
- 查
- 招
- 卖
- 几
- 正
- 合
- 够
- 五
- 引
- 容
- 只
- 男
- 日
- 四
- 宣
- 反
- 两
- 清
- 处
- 周
- 单
- 首
- 课
- 衣
- 便
- 身
- 气
- 针
- 奶
- 六
- 经
- 接
- 女
- 育
- 鲜
- 赠
- 试
- 停
- 晚
- 类
- 故
- 入
- 性
- 增
- 食
- 满
- 格
- 基
- 备
- 洗
- 培
- 质
- 美
- 明
- 整
- 化
- 公
- 案
- 哎
- 吸
- 原
- 易
- 幺
- 总
- 尽
- 优
- 而
- 建
- 责
- 啥
- 干
- 月
- 使
- 找
- 季
- 望
- 器
- 目
- 识
- 低
- 听
- 烟
- 相
- 早
- 检
- 护
- 摆
- 住
- 直
- 从
- 务
- 希
- 导
- 内
- 八
- 持
- 近
- 配
- 叫
- 见
- 设
- 吗
- 非
- 调
- 程
- 拿
- 训
- <%>
- 结
- 标
- 挺
- 花
- <$>
- 受
- 式
- 求
- 平
- 换
- 具
- 愿
- 货
- 牌
- 专
- 轻
- 推
- 妈
- 司
- 辆
- 存
- 名
- 且
- 欢
- 喜
- 吃
- 数
- 段
- 议
- 控
- 往
- 礼
- 决
- 走
- 养
- 免
- 惠
- 园
- 档
- 谁
- 真
- 快
- 置
- 幼
- 乐
- 证
- 向
- 厂
- 简
- 声
- 视
- 划
- 绩
- 适
- 集
- 搞
- 办
- 规
- 灾
- 造
- 准
- 必
- 任
- 险
- 响
- 毕
- 群
- 鞋
- 九
- 嘞
- 信
- 库
- 计
- 认
- 奖
- 表
- 无
- 影
- 头
- 卡
- 告
- 考
- 抽
- 竟
- 选
- 帮
- 何
- 修
- 酒
- 尤
- 线
- 穿
- 讲
- 光
- 留
- 讨
- 随
- 请
- 卫
- 系
- 队
- 失
- 双
- 庭
- 强
- 微
- 折
- 色
- 半
- 否
- 立
- 差
- 沟
- 冬
- 批
- 害
- 已
- 危
- 白
- 爆
- 节
- 参
- 逛
- 搭
- 风
- 朋
- 友
- 环
- 验
- 评
- 严
- 般
- 效
- 舞
- 饭
- 境
- 负
- 又
- 底
- 术
- 刚
- 件
- 罚
- 助
- 态
- 状
- 室
- 房
- 游
- 息
- 领
- 难
- 警
- 按
- 级
- 错
- 利
- 与
- 餐
- 陪
- 蹈
- 论
- 记
- 许
- 马
- 算
- 楼
- 型
- 排
- 广
- 值
- 油
- 糕
- 楚
- 步
- 至
- 拉
- 紧
- 灯
- 升
- 七
- 共
- 努
- 除
- 展
- 形
- 元
- 网
- 宜
- 营
- 兴
- 互
- 蛋
- 燃
- 冷
- 条
- 思
- 巡
- 净
- 须
- 遇
- 落
- 禁
- 科
- 款
- 哦
- 止
- 采
- 材
- 介
- 套
- 围
- 维
- 旦
- 切
- 显
- 汇
- 损
- 速
- 越
- 模
- 假
- 精
- 稍
- 书
- 绍
- 父
- 积
- 策
- 示
- 骑
- 改
- 跑
- 运
- 变
- 洁
- 仓
- 鱼
- <space>
- 绝
- 诶
- 伤
- 细
- 职
- 离
- 慢
- 素
- 料
- 睡
- 趣
- 爱
- 母
- 眼
- 味
- 列
- 督
- 张
- 率
- 被
- 域
- 语
- 坏
- 资
- 红
- 减
- 励
- 择
- 预
- 层
- 陈
- 根
- 休
- 毒
- 球
- 爸
- 登
- 足
- 取
- 指
- 柜
- 限
- 降
- 概
- 院
- 供
- 支
- 额
- 源
- 始
- 盘
- 饮
- 项
- 液
- 童
- 爷
- 号
- 抓
- 台
- 转
- 观
- 金
- 照
- 滑
- 岁
- 致
- 文
- 她
- 弄
- 站
- 酸
- 音
- 胎
- 投
- 疏
- 乱
- 临
- 允
- 狗
- 疫
- 询
- 、
- 象
- 占
- 坐
- 倒
- 争
- 午
- 亲
- 读
- 演
- 退
- 惯
- 贵
- 达
- 监
- 志
- 绿
- 醒
- 急
- 驾
- 违
- 诉
- 片
- 空
- 势
- 极
- 豆
- 独
- 钟
- 代
- 瓶
- 纸
- 并
- 企
- 映
- 统
- 属
- 省
- 夜
- 障
- 谈
- 避
- 由
- 终
- 频
- 掉
- 估
- 激
- 仅
- 布
- 谢
- 灭
- 忙
- 码
- 伙
- 缺
- 叶
- 功
- 析
- 赖
- 架
- 范
- 签
- D
- 待
- 神
- 龄
- 画
- 券
- 居
- 杜
- 堵
- 您
- 勤
- 扫
- 技
- 财
- 隐
- 患
- 例
- 乘
- 摩
- 戏
- 鼓
- 份
- 杂
- 散
- 热
- 铺
- 据
- 肤
- 怕
- 依
- 拖
- 充
- 智
- 偷
- 远
- 挂
- 盗
- 附
- 梯
- 冰
- 联
- 借
- 蹭
- 异
- 蔬
- 绑
- 堂
- 将
- 厨
- 帽
- 破
- 戴
- 皮
- 粉
- 氛
- 仪
- 国
- 益
- 闯
- 惩
- 逃
- 刻
- 突
- 申
- 略
- 顿
- 毛
- 召
- 海
- 黄
- 青
- 士
- 移
- 喝
- 板
- 练
- 歌
- 千
- 床
- 享
- 磨
- 构
- 收
- 万
- 摸
- 圈
- 亮
- 刹
- 逆
- 驶
- 赶
- 松
- 呐
- 压
- 拥
- 辅
- 协
- 托
- 断
- 轮
- 善
- 哈
- 捆
- 座
- 病
- 健
- 牛
- 草
- 释
- 似
- 土
- 补
- 俩
- 堆
- 即
- 密
- 背
- 言
- 街
- 尚
- 窗
- C
- 艺
- 纠
- 纷
- 忽
- 句
- 另
- 施
- 政
- 温
- 某
- 翻
- 章
- 守
- 熟
- 民
- 续
- 良
- 挤
- 础
- 字
- 瓜
- 乎
- 竞
- 距
- 际
- 暖
- 凭
- 董
- 碗
- 短
- 渠
- 康
- 藏
- 香
- 虽
- 露
- 厉
- 忘
- 误
- 冒
- 窃
- 络
- 淡
- 腐
- 颜
- 播
- 默
- 锻
- 炼
- 宝
- 组
- 淘
- 则
- 逻
- 垃
- 圾
- 复
- 贴
- 靠
- 潜
- 察
- 晨
- 碰
- 剩
- 峰
- 深
- 偏
- 虑
- 念
- 初
- 闹
- 幸
- 跳
- 米
- 旧
- 蛤
- 虾
- 汽
- 苦
- 螃
- 蟹
- 冲
- 固
- 隔
- 懂
- 卷
- 镜
- 罩
- 暴
- 闭
- 野
- 玻
- 璃
- 义
- B
- 煤
- 富
- 踩
- 途
- 闲
- 紫
- 北
- 欲
- 曲
- 榜
- 垒
- 伴
- 累
- 判
- 搜
- 困
- 租
- 键
- 肥
- 社
- 弯
- 角
- 纪
- 律
- 详
- 右
- 刮
- 继
- 撤
- 输
- 普
- 未
- 稳
- 摔
- 访
- 扩
- 扣
- 末
- 票
- 承
- 担
- 丢
- 涉
- 欠
- 创
- 获
- 摊
- 疑
- 蓝
- 答
- 霜
- 录
- 齐
- 烦
- 治
- 粗
- 叛
- 污
- 址
- 若
- 染
- 含
- 药
- 雨
- 此
- 陌
- 研
- 催
- 拨
- 页
- 磕
- 呆
- 脸
- 墙
- 夫
- A
- 棉
- 袜
- 填
- 死
- 懒
- 植
- 扇
- 捡
- 遍
- 操
- 摄
- 箱
- ?
- 繁
- 城
- 咯
- 左
- 拐
- 悉
- 犯
- 宽
- 伞
- 余
- 糊
- 巧
- 透
- 贪
- 顺
- 局
- 妇
- 私
- 浪
- 岗
- 棋
- 序
- 辛
- V
- 握
- 擦
- 扔
- 斤
- 付
- 剐
- 锁
- 麻
- 敢
- 桶
- 佩
- 坠
- 封
- 替
- 塞
- 斗
- 攀
- 爽
- 沉
- 混
- 滋
- 刺
- 潮
- 皿
- 端
- 刷
- 刀
- 巾
- 烫
- 木
- 漏
- 迅
- 织
- 救
- 吹
- 仔
- 称
- 返
- 景
- 聚
- 阶
- 秀
- 涨
- P
- 颈
- 肩
- 泥
- I
- 侣
- 尔
- 伍
- 甚
- 皂
- 蒙
- 世
- 界
- 嘻
- 辈
- Q
- 审
- 尾
- 浇
- 遛
- 馨
- 措
- 邻
- 撒
- 挥
- 遵
- 予
- 击
- 鉴
- 殊
- 哇
- 载
- 添
- 盈
- 盯
- 惊
- 喷
- 荷
- 怠
- 抢
- 喂
- 饱
- 谅
- 团
- 龙
- 冻
- 图
- 掺
- 扑
- 刊
- 葱
- 薄
- 萝
- 卜
- 麦
- 苹
- 触
- 飞
- 艳
- 畅
- 鸡
- 权
- 趟
- 连
- 哭
- 旁
- 漂
- 焊
- 敞
- 叉
- 钢
- 氧
- 溺
- 聊
- 巢
- 衡
- 淀
- 劣
- 虫
- 符
- 均
- 辨
- 菌
- 彻
- 烂
- 厅
- 皱
- 妥
- 拾
- 插
- 携
- 竹
- 碍
- 湿
- 灵
- 忌
- 旅
- 勿
- 宿
- 迷
- 探
- 春
- 劵
- 星
- 耐
- 裤
- 颖
- 韩
- 艾
- 灸
- 邀
- 婚
- 乳
- 芽
- 挑
- 摘
- 阿
- 姨
- 伊
- 慕
- 纯
- 貌
- 嘴
- 偶
- 睛
- 献
- 坚
- 账
- 典
- 唱
- L
- E
- 贡
- 寒
- 唧
- Y
- 尝
- 抹
- 汰
- 腾
- 哼
- 仿
- 英
- 舒
- 扰
- 拒
- 剪
- 夏
- 宠
- 咬
- 派
- 委
- 婉
- 执
- 呗
- 悄
- 搬
- 雪
- 盐
- 暂
- 奸
- 耍
- 僻
- 却
- 署
- 寻
- 串
- 援
- 亏
- 烈
- 印
- 捎
- 幅
- 绘
- 锈
- 闸
- 罪
- 嫌
- 俗
- 歹
- 劳
- 兜
- 喽
- 谓
- 鹤
- 舍
- 克
- 徇
- 倍
- 敏
- 丝
- 纺
- 拭
- 融
- 蔫
- 掂
- 测
- T
- 众
- 卸
- 暗
- 赔
- 偿
- 举
- 劲
- 篮
- 储
- 乙
- 炔
- 软
- 侵
- 诱
- 浊
- 蚀
- 秽
- 炸
- 泽
- 闻
- 鼻
- 甜
- 澈
- 脏
- 官
- 凝
- 芳
- 灰
- 卵
- 农
- 烧
- 肉
- 桌
- 椅
- 垫
- 硬
- 叠
- 瓷
- 碎
- 柄
- 屉
- 拳
- 撞
- 铝
- 歇
- 遗
- 炮
- 掌
- 妨
- 静
- 浸
- 涂
- 凉
- 炫
- 耀
- 姓
- 究
- 奏
- 缆
- 脚
- 酿
- 抄
- 慌
- 戚
- 燥
- 毯
- 挽
- 诺
- 济
- 旺
- 抖
- 郊
- 疗
- 巴
- 痧
- 脊
- 膜
- 晒
- 润
- 掏
- 笔
- 鞭
- 博
- 捧
- 函
- 胡
- 锅
- 雾
- 疯
- 狂
- 趋
- 膏
- 妆
- 尘
- 袋
- 贝
- 俺
- 耽
- 怀
- 恐
- 赋
- 脑
- 焉
- 愣
- 呵
- 噼
- 啪
- 虚
- 河
- 归
- 绊
- 械
- 扬
- 筒
- 靴
- 束
- 彩
- 荐
- 沙
- 迎
- 荡
- 凌
- 昂
- 碑
- 蹦
- 扉
- 泼
- 丰
- 滴
- 沾
- 亭
- 粘
- 奇
- 饼
- 牙
- 娃
- 杯
- 踢
- 嘿
- 抛
- 枯
- 剔
- 苗
- 纹
- 永
- 津
- 唉
- 趁
- 屡
- 逮
- 戒
- 肃
- 仁
- 肇
- 醉
- 糟
- 馈
- 横
- 扭
- 盔
- 侧
- 鲁
- 莽
- 飙
- 稿
- 逐
- 谋
- 京
- 苏
- 宁
- 驻
- 咨
- 旷
- 拓
- 杆
- 秤
- 叮
- 嘱
- 咋
- 炊
- 怪
- 婆
- 阎
- 王
- 饿
- 鬼
- 惨
- 渡
- 坎
- 囤
- 甲
- 蛙
- 鲤
- 桂
- 石
- 玉
- 溪
- 华
- 窝
- 截
- 秩
- 嗨
- 芹
- 梨
- 蕉
- S
- 煲
- 汤
- 鲫
- 揽
- 挡
- 柚
- 瑞
- 匹
- '2'
- 踹
- 吵
- 凶
- 矩
- 迟
- 脾
- 纳
- 朵
- 墨
- 袖
- 链
- 钩
- 笼
- 熄
- 盆
- 殴
- 欺
- 诈
- 厕
- 娱
- 爬
- 威
- 胁
- 阅
- 赌
- 拢
- 症
- 伪
- 脂
- 堪
- 盛
- 蚊
- 蝇
- 煎
- 晰
- 柔
- 涩
- 汁
- 腹
- 胃
- 痉
- 挛
- 颗
- 粒
- 匀
- 败
- 历
- 佳
- 乏
- 寄
- 残
- 杀
- 剂
- 疾
- 衍
- 溅
- 倘
- 褶
- 席
- 启
- 遮
- 槽
- 递
- 橱
- 迹
- 镁
- 泄
- 阀
- 柴
- 阻
- 恋
- 盲
- 浓
- 捂
- 腰
- 姿
- 缝
- 肿
- 焦
- 骗
- 伺
- 嘘
- 掩
- 褥
- 帘
- 籍
- 锥
- 锋
- 尖
- 锐
- 祸
- 秒
- 李
- 伸
- 浏
- 览
- 航
- 讯
- 谨
- 慎
- 匪
- 劫
- 医
- 族
- 忧
- 孤
- 拜
- 窄
- 唯
- 搁
- 朝
- 尺
- 盟
- 波
- 隆
- 词
- 村
- 娶
- 媳
- 县
- 聘
- 醇
- 泡
- 坨
- 淋
- 延
- 柱
- 肾
- 蒸
- 槛
- 赚
- 凡
- 恩
- 厚
- 赞
- 茎
- 蒜
- 苔
- 甘
- 菠
- 涮
- 霾
- 仍
- 云
- 追
- 丽
- 盖
- 欧
- 莱
- 雅
- 婴
- 孕
- 敲
- 约
- 惰
- 谱
- 射
- 惑
- 睹
- 奉
- 诚
- 惶
- 卓
- 勉
- 聪
- 疼
- 弃
- 奴
- 隶
- 嚷
- 眠
- 躺
- 乒
- 乓
- 琴
- 挖
- 掘
- 阵
- 浆
- 索
- 呼
- 古
- 弥
- 熔
- 抱
- 怨
- 猫
- 笑
- 挣
- 黑
- 猛
- 令
- 核
- 磊
- 橙
- 吨
- 吊
- 蘸
- 氮
- 罐
- 战
- 懈
- 渐
- 胜
- 命
- 抬
- 缘
- 睦
- 扮
- 珠
- 颁
- 蔼
- 凳
- 饰
- 缤
- 晶
- 抵
- 遥
- 腿
- 拍
- 妻
- 羽
- 绒
- 梳
- 袄
- 述
- 跆
- 屈
- 脱
- 朗
- 劝
- 胆
- 腔
- 圆
- 亚
- 宴
- 编
- 肢
- 壶
- 暑
- 怒
- 描
- 绕
- 悦
- 忆
- 嗓
- 胖
- 疙
- 瘩
- 哒
- 碴
- 棱
- 炒
- 井
- 漫
- 烘
- 焙
- 涤
- 船
- 纱
- 君
- 茉
- 莉
- 钙
- 瞩
- <_>
- 塌
- 嗷
- 屁
- 股
- 绪
- 勇
- 奋
- 荣
- 诲
- 卑
- 挫
- 昧
- 疲
- 惫
- 册
- 呈
- 僵
- 熬
- 敬
- 呦
- <sos/eos>
init: null
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: true
model_conf:
ctc_weight: 0.3
lsm_weight: 0.1
length_normalized_loss: false
use_preprocessor: true
token_type: char
bpemodel: null
non_linguistic_symbols: /ocean/projects/cis210027p/berrebbi/espnet/egs2/aishell4/asr1/data/nlsyms.txt
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: default
frontend_conf:
n_fft: 512
win_length: 400
hop_length: 160
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 30
num_freq_mask: 2
apply_time_mask: true
time_mask_width_range:
- 0
- 40
num_time_mask: 2
normalize: global_mvn
normalize_conf:
stats_file: exp/asr_stats_raw_zh_char/train/feats_stats.npz
preencoder: null
preencoder_conf: {}
encoder: conformer
encoder_conf:
input_layer: conv2d
num_blocks: 12
linear_units: 2048
dropout_rate: 0.1
output_size: 256
attention_heads: 4
attention_dropout_rate: 0.0
pos_enc_layer_type: rel_pos
selfattention_layer_type: rel_selfattn
activation_type: swish
macaron_style: true
use_cnn_module: true
cnn_module_kernel: 15
postencoder: null
postencoder_conf: {}
decoder: transformer
decoder_conf:
input_layer: embed
num_blocks: 6
linear_units: 2048
dropout_rate: 0.1
required:
- output_dir
- token_list
version: 0.10.3a1
distributed: false
```
</details>
## LM config
<details><summary>expand</summary>
```
config: conf/train_lm_transformer.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/lm_nuit
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 15
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- loss
- min
keep_nbest_models: 10
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 2000000
valid_batch_bins: null
train_shape_file:
- exp/lm_stats_zh_char/train/text_shape.char
valid_shape_file:
- exp/lm_stats_zh_char/valid/text_shape.char
batch_type: numel
valid_batch_type: null
fold_length:
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/lm_train.txt
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/dev/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.005
scheduler: warmuplr
scheduler_conf:
warmup_steps: 25000
token_list:
- <blank>
- <unk>
- ,
- 的
- 是
- 个
- 这
- 一
- 。
- 就
- 儿
- 嗯
- 们
- 呃
- 我
- 有
- <sil>
- 那
- 说
- 不
- 些
- 也
- 他
- 你
- 要
- 后
- 以
- 咱
- 在
- 啊
- 了
- 然
- 家
- 都
- 来
- 还
- 可
- 子
- 下
- 上
- 时
- 比
- 话
- 孩
- 呢
- 去
- 人
- 好
- 对
- 能
- 么
- 吧
- 学
- 多
- 到
- 看
- 为
- 进
- 把
- 大
- 做
- 生
- 种
- 品
- 给
- 没
- 行
- 现
- 小
- 会
- 作
- 较
- 方
- 块
- 业
- 让
- 点
- 定
- 因
- 什
- 长
- 面
- 如
- 安
- 客
- 问
- 过
- 车
- 出
- 啦
- 边
- 候
- 主
- 所
- 题
- 买
- 销
- 天
- 意
- 自
- 全
- 动
- 工
- '&'
- 老
- 或
- 者
- 年
- 着
- 实
- 活
- 理
- 包
- 样
- 再
- 区
- 用
- 呀
- 零
- 员
- 发
- 先
- 部
- 放
- 门
- 情
- 像
- 分
- 售
- 很
- 开
- 己
- 十
- 括
- 跟
- 事
- 需
- 更
- 其
- 装
- 市
- 成
- 里
- 物
- 别
- 间
- 第
- 次
- 中
- 提
- 超
- 顾
- 保
- 感
- 加
- 量
- 二
- 和
- 各
- 嘛
- 新
- 每
- 完
- 力
- 消
- 得
- 店
- 本
- 通
- 习
- 觉
- 道
- 心
- 校
- 菜
- 交
- 哪
- 产
- 于
- 位
- 电
- 想
- 三
- 况
- 度
- 期
- 应
- 但
- 教
- 体
- 常
- 师
- 它
- 高
- 前
- 之
- 西
- 特
- 商
- 果
- 场
- 重
- 防
- 管
- 起
- 地
- 该
- 东
- 少
- 打
- 费
- 当
- 带
- 服
- 口
- 购
- 知
- 回
- 同
- 钱
- 外
- 户
- 注
- 促
- 价
- 解
- <#>
- 水
- 百
- 今
- 太
- 最
- 报
- 怎
- 才
- 等
- 及
- 关
- <->
- 肯
- 火
- 机
- 流
- 制
- 送
- 手
- 确
- 法
- 写
- 玩
- 传
- 路
- 班
- 查
- 招
- 卖
- 几
- 正
- 合
- 够
- 五
- 引
- 容
- 只
- 男
- 日
- 四
- 宣
- 反
- 两
- 清
- 处
- 周
- 单
- 首
- 课
- 衣
- 便
- 身
- 气
- 针
- 奶
- 六
- 经
- 接
- 女
- 育
- 鲜
- 赠
- 试
- 停
- 晚
- 类
- 故
- 入
- 性
- 增
- 食
- 满
- 格
- 基
- 备
- 洗
- 培
- 质
- 美
- 明
- 整
- 化
- 公
- 案
- 哎
- 吸
- 原
- 易
- 幺
- 总
- 尽
- 优
- 而
- 建
- 责
- 啥
- 干
- 月
- 使
- 找
- 季
- 望
- 器
- 目
- 识
- 低
- 听
- 烟
- 相
- 早
- 检
- 护
- 摆
- 住
- 直
- 从
- 务
- 希
- 导
- 内
- 八
- 持
- 近
- 配
- 叫
- 见
- 设
- 吗
- 非
- 调
- 程
- 拿
- 训
- <%>
- 结
- 标
- 挺
- 花
- <$>
- 受
- 式
- 求
- 平
- 换
- 具
- 愿
- 货
- 牌
- 专
- 轻
- 推
- 妈
- 司
- 辆
- 存
- 名
- 且
- 欢
- 喜
- 吃
- 数
- 段
- 议
- 控
- 往
- 礼
- 决
- 走
- 养
- 免
- 惠
- 园
- 档
- 谁
- 真
- 快
- 置
- 幼
- 乐
- 证
- 向
- 厂
- 简
- 声
- 视
- 划
- 绩
- 适
- 集
- 搞
- 办
- 规
- 灾
- 造
- 准
- 必
- 任
- 险
- 响
- 毕
- 群
- 鞋
- 九
- 嘞
- 信
- 库
- 计
- 认
- 奖
- 表
- 无
- 影
- 头
- 卡
- 告
- 考
- 抽
- 竟
- 选
- 帮
- 何
- 修
- 酒
- 尤
- 线
- 穿
- 讲
- 光
- 留
- 讨
- 随
- 请
- 卫
- 系
- 队
- 失
- 双
- 庭
- 强
- 微
- 折
- 色
- 半
- 否
- 立
- 差
- 沟
- 冬
- 批
- 害
- 已
- 危
- 白
- 爆
- 节
- 参
- 逛
- 搭
- 风
- 朋
- 友
- 环
- 验
- 评
- 严
- 般
- 效
- 舞
- 饭
- 境
- 负
- 又
- 底
- 术
- 刚
- 件
- 罚
- 助
- 态
- 状
- 室
- 房
- 游
- 息
- 领
- 难
- 警
- 按
- 级
- 错
- 利
- 与
- 餐
- 陪
- 蹈
- 论
- 记
- 许
- 马
- 算
- 楼
- 型
- 排
- 广
- 值
- 油
- 糕
- 楚
- 步
- 至
- 拉
- 紧
- 灯
- 升
- 七
- 共
- 努
- 除
- 展
- 形
- 元
- 网
- 宜
- 营
- 兴
- 互
- 蛋
- 燃
- 冷
- 条
- 思
- 巡
- 净
- 须
- 遇
- 落
- 禁
- 科
- 款
- 哦
- 止
- 采
- 材
- 介
- 套
- 围
- 维
- 旦
- 切
- 显
- 汇
- 损
- 速
- 越
- 模
- 假
- 精
- 稍
- 书
- 绍
- 父
- 积
- 策
- 示
- 骑
- 改
- 跑
- 运
- 变
- 洁
- 仓
- 鱼
- <space>
- 绝
- 诶
- 伤
- 细
- 职
- 离
- 慢
- 素
- 料
- 睡
- 趣
- 爱
- 母
- 眼
- 味
- 列
- 督
- 张
- 率
- 被
- 域
- 语
- 坏
- 资
- 红
- 减
- 励
- 择
- 预
- 层
- 陈
- 根
- 休
- 毒
- 球
- 爸
- 登
- 足
- 取
- 指
- 柜
- 限
- 降
- 概
- 院
- 供
- 支
- 额
- 源
- 始
- 盘
- 饮
- 项
- 液
- 童
- 爷
- 号
- 抓
- 台
- 转
- 观
- 金
- 照
- 滑
- 岁
- 致
- 文
- 她
- 弄
- 站
- 酸
- 音
- 胎
- 投
- 疏
- 乱
- 临
- 允
- 狗
- 疫
- 询
- 、
- 象
- 占
- 坐
- 倒
- 争
- 午
- 亲
- 读
- 演
- 退
- 惯
- 贵
- 达
- 监
- 志
- 绿
- 醒
- 急
- 驾
- 违
- 诉
- 片
- 空
- 势
- 极
- 豆
- 独
- 钟
- 代
- 瓶
- 纸
- 并
- 企
- 映
- 统
- 属
- 省
- 夜
- 障
- 谈
- 避
- 由
- 终
- 频
- 掉
- 估
- 激
- 仅
- 布
- 谢
- 灭
- 忙
- 码
- 伙
- 缺
- 叶
- 功
- 析
- 赖
- 架
- 范
- 签
- D
- 待
- 神
- 龄
- 画
- 券
- 居
- 杜
- 堵
- 您
- 勤
- 扫
- 技
- 财
- 隐
- 患
- 例
- 乘
- 摩
- 戏
- 鼓
- 份
- 杂
- 散
- 热
- 铺
- 据
- 肤
- 怕
- 依
- 拖
- 充
- 智
- 偷
- 远
- 挂
- 盗
- 附
- 梯
- 冰
- 联
- 借
- 蹭
- 异
- 蔬
- 绑
- 堂
- 将
- 厨
- 帽
- 破
- 戴
- 皮
- 粉
- 氛
- 仪
- 国
- 益
- 闯
- 惩
- 逃
- 刻
- 突
- 申
- 略
- 顿
- 毛
- 召
- 海
- 黄
- 青
- 士
- 移
- 喝
- 板
- 练
- 歌
- 千
- 床
- 享
- 磨
- 构
- 收
- 万
- 摸
- 圈
- 亮
- 刹
- 逆
- 驶
- 赶
- 松
- 呐
- 压
- 拥
- 辅
- 协
- 托
- 断
- 轮
- 善
- 哈
- 捆
- 座
- 病
- 健
- 牛
- 草
- 释
- 似
- 土
- 补
- 俩
- 堆
- 即
- 密
- 背
- 言
- 街
- 尚
- 窗
- C
- 艺
- 纠
- 纷
- 忽
- 句
- 另
- 施
- 政
- 温
- 某
- 翻
- 章
- 守
- 熟
- 民
- 续
- 良
- 挤
- 础
- 字
- 瓜
- 乎
- 竞
- 距
- 际
- 暖
- 凭
- 董
- 碗
- 短
- 渠
- 康
- 藏
- 香
- 虽
- 露
- 厉
- 忘
- 误
- 冒
- 窃
- 络
- 淡
- 腐
- 颜
- 播
- 默
- 锻
- 炼
- 宝
- 组
- 淘
- 则
- 逻
- 垃
- 圾
- 复
- 贴
- 靠
- 潜
- 察
- 晨
- 碰
- 剩
- 峰
- 深
- 偏
- 虑
- 念
- 初
- 闹
- 幸
- 跳
- 米
- 旧
- 蛤
- 虾
- 汽
- 苦
- 螃
- 蟹
- 冲
- 固
- 隔
- 懂
- 卷
- 镜
- 罩
- 暴
- 闭
- 野
- 玻
- 璃
- 义
- B
- 煤
- 富
- 踩
- 途
- 闲
- 紫
- 北
- 欲
- 曲
- 榜
- 垒
- 伴
- 累
- 判
- 搜
- 困
- 租
- 键
- 肥
- 社
- 弯
- 角
- 纪
- 律
- 详
- 右
- 刮
- 继
- 撤
- 输
- 普
- 未
- 稳
- 摔
- 访
- 扩
- 扣
- 末
- 票
- 承
- 担
- 丢
- 涉
- 欠
- 创
- 获
- 摊
- 疑
- 蓝
- 答
- 霜
- 录
- 齐
- 烦
- 治
- 粗
- 叛
- 污
- 址
- 若
- 染
- 含
- 药
- 雨
- 此
- 陌
- 研
- 催
- 拨
- 页
- 磕
- 呆
- 脸
- 墙
- 夫
- A
- 棉
- 袜
- 填
- 死
- 懒
- 植
- 扇
- 捡
- 遍
- 操
- 摄
- 箱
- ?
- 繁
- 城
- 咯
- 左
- 拐
- 悉
- 犯
- 宽
- 伞
- 余
- 糊
- 巧
- 透
- 贪
- 顺
- 局
- 妇
- 私
- 浪
- 岗
- 棋
- 序
- 辛
- V
- 握
- 擦
- 扔
- 斤
- 付
- 剐
- 锁
- 麻
- 敢
- 桶
- 佩
- 坠
- 封
- 替
- 塞
- 斗
- 攀
- 爽
- 沉
- 混
- 滋
- 刺
- 潮
- 皿
- 端
- 刷
- 刀
- 巾
- 烫
- 木
- 漏
- 迅
- 织
- 救
- 吹
- 仔
- 称
- 返
- 景
- 聚
- 阶
- 秀
- 涨
- P
- 颈
- 肩
- 泥
- I
- 侣
- 尔
- 伍
- 甚
- 皂
- 蒙
- 世
- 界
- 嘻
- 辈
- Q
- 审
- 尾
- 浇
- 遛
- 馨
- 措
- 邻
- 撒
- 挥
- 遵
- 予
- 击
- 鉴
- 殊
- 哇
- 载
- 添
- 盈
- 盯
- 惊
- 喷
- 荷
- 怠
- 抢
- 喂
- 饱
- 谅
- 团
- 龙
- 冻
- 图
- 掺
- 扑
- 刊
- 葱
- 薄
- 萝
- 卜
- 麦
- 苹
- 触
- 飞
- 艳
- 畅
- 鸡
- 权
- 趟
- 连
- 哭
- 旁
- 漂
- 焊
- 敞
- 叉
- 钢
- 氧
- 溺
- 聊
- 巢
- 衡
- 淀
- 劣
- 虫
- 符
- 均
- 辨
- 菌
- 彻
- 烂
- 厅
- 皱
- 妥
- 拾
- 插
- 携
- 竹
- 碍
- 湿
- 灵
- 忌
- 旅
- 勿
- 宿
- 迷
- 探
- 春
- 劵
- 星
- 耐
- 裤
- 颖
- 韩
- 艾
- 灸
- 邀
- 婚
- 乳
- 芽
- 挑
- 摘
- 阿
- 姨
- 伊
- 慕
- 纯
- 貌
- 嘴
- 偶
- 睛
- 献
- 坚
- 账
- 典
- 唱
- L
- E
- 贡
- 寒
- 唧
- Y
- 尝
- 抹
- 汰
- 腾
- 哼
- 仿
- 英
- 舒
- 扰
- 拒
- 剪
- 夏
- 宠
- 咬
- 派
- 委
- 婉
- 执
- 呗
- 悄
- 搬
- 雪
- 盐
- 暂
- 奸
- 耍
- 僻
- 却
- 署
- 寻
- 串
- 援
- 亏
- 烈
- 印
- 捎
- 幅
- 绘
- 锈
- 闸
- 罪
- 嫌
- 俗
- 歹
- 劳
- 兜
- 喽
- 谓
- 鹤
- 舍
- 克
- 徇
- 倍
- 敏
- 丝
- 纺
- 拭
- 融
- 蔫
- 掂
- 测
- T
- 众
- 卸
- 暗
- 赔
- 偿
- 举
- 劲
- 篮
- 储
- 乙
- 炔
- 软
- 侵
- 诱
- 浊
- 蚀
- 秽
- 炸
- 泽
- 闻
- 鼻
- 甜
- 澈
- 脏
- 官
- 凝
- 芳
- 灰
- 卵
- 农
- 烧
- 肉
- 桌
- 椅
- 垫
- 硬
- 叠
- 瓷
- 碎
- 柄
- 屉
- 拳
- 撞
- 铝
- 歇
- 遗
- 炮
- 掌
- 妨
- 静
- 浸
- 涂
- 凉
- 炫
- 耀
- 姓
- 究
- 奏
- 缆
- 脚
- 酿
- 抄
- 慌
- 戚
- 燥
- 毯
- 挽
- 诺
- 济
- 旺
- 抖
- 郊
- 疗
- 巴
- 痧
- 脊
- 膜
- 晒
- 润
- 掏
- 笔
- 鞭
- 博
- 捧
- 函
- 胡
- 锅
- 雾
- 疯
- 狂
- 趋
- 膏
- 妆
- 尘
- 袋
- 贝
- 俺
- 耽
- 怀
- 恐
- 赋
- 脑
- 焉
- 愣
- 呵
- 噼
- 啪
- 虚
- 河
- 归
- 绊
- 械
- 扬
- 筒
- 靴
- 束
- 彩
- 荐
- 沙
- 迎
- 荡
- 凌
- 昂
- 碑
- 蹦
- 扉
- 泼
- 丰
- 滴
- 沾
- 亭
- 粘
- 奇
- 饼
- 牙
- 娃
- 杯
- 踢
- 嘿
- 抛
- 枯
- 剔
- 苗
- 纹
- 永
- 津
- 唉
- 趁
- 屡
- 逮
- 戒
- 肃
- 仁
- 肇
- 醉
- 糟
- 馈
- 横
- 扭
- 盔
- 侧
- 鲁
- 莽
- 飙
- 稿
- 逐
- 谋
- 京
- 苏
- 宁
- 驻
- 咨
- 旷
- 拓
- 杆
- 秤
- 叮
- 嘱
- 咋
- 炊
- 怪
- 婆
- 阎
- 王
- 饿
- 鬼
- 惨
- 渡
- 坎
- 囤
- 甲
- 蛙
- 鲤
- 桂
- 石
- 玉
- 溪
- 华
- 窝
- 截
- 秩
- 嗨
- 芹
- 梨
- 蕉
- S
- 煲
- 汤
- 鲫
- 揽
- 挡
- 柚
- 瑞
- 匹
- '2'
- 踹
- 吵
- 凶
- 矩
- 迟
- 脾
- 纳
- 朵
- 墨
- 袖
- 链
- 钩
- 笼
- 熄
- 盆
- 殴
- 欺
- 诈
- 厕
- 娱
- 爬
- 威
- 胁
- 阅
- 赌
- 拢
- 症
- 伪
- 脂
- 堪
- 盛
- 蚊
- 蝇
- 煎
- 晰
- 柔
- 涩
- 汁
- 腹
- 胃
- 痉
- 挛
- 颗
- 粒
- 匀
- 败
- 历
- 佳
- 乏
- 寄
- 残
- 杀
- 剂
- 疾
- 衍
- 溅
- 倘
- 褶
- 席
- 启
- 遮
- 槽
- 递
- 橱
- 迹
- 镁
- 泄
- 阀
- 柴
- 阻
- 恋
- 盲
- 浓
- 捂
- 腰
- 姿
- 缝
- 肿
- 焦
- 骗
- 伺
- 嘘
- 掩
- 褥
- 帘
- 籍
- 锥
- 锋
- 尖
- 锐
- 祸
- 秒
- 李
- 伸
- 浏
- 览
- 航
- 讯
- 谨
- 慎
- 匪
- 劫
- 医
- 族
- 忧
- 孤
- 拜
- 窄
- 唯
- 搁
- 朝
- 尺
- 盟
- 波
- 隆
- 词
- 村
- 娶
- 媳
- 县
- 聘
- 醇
- 泡
- 坨
- 淋
- 延
- 柱
- 肾
- 蒸
- 槛
- 赚
- 凡
- 恩
- 厚
- 赞
- 茎
- 蒜
- 苔
- 甘
- 菠
- 涮
- 霾
- 仍
- 云
- 追
- 丽
- 盖
- 欧
- 莱
- 雅
- 婴
- 孕
- 敲
- 约
- 惰
- 谱
- 射
- 惑
- 睹
- 奉
- 诚
- 惶
- 卓
- 勉
- 聪
- 疼
- 弃
- 奴
- 隶
- 嚷
- 眠
- 躺
- 乒
- 乓
- 琴
- 挖
- 掘
- 阵
- 浆
- 索
- 呼
- 古
- 弥
- 熔
- 抱
- 怨
- 猫
- 笑
- 挣
- 黑
- 猛
- 令
- 核
- 磊
- 橙
- 吨
- 吊
- 蘸
- 氮
- 罐
- 战
- 懈
- 渐
- 胜
- 命
- 抬
- 缘
- 睦
- 扮
- 珠
- 颁
- 蔼
- 凳
- 饰
- 缤
- 晶
- 抵
- 遥
- 腿
- 拍
- 妻
- 羽
- 绒
- 梳
- 袄
- 述
- 跆
- 屈
- 脱
- 朗
- 劝
- 胆
- 腔
- 圆
- 亚
- 宴
- 编
- 肢
- 壶
- 暑
- 怒
- 描
- 绕
- 悦
- 忆
- 嗓
- 胖
- 疙
- 瘩
- 哒
- 碴
- 棱
- 炒
- 井
- 漫
- 烘
- 焙
- 涤
- 船
- 纱
- 君
- 茉
- 莉
- 钙
- 瞩
- <_>
- 塌
- 嗷
- 屁
- 股
- 绪
- 勇
- 奋
- 荣
- 诲
- 卑
- 挫
- 昧
- 疲
- 惫
- 册
- 呈
- 僵
- 熬
- 敬
- 呦
- <sos/eos>
init: null
model_conf:
ignore_id: 0
use_preprocessor: true
token_type: char
bpemodel: null
non_linguistic_symbols: /ocean/projects/cis210027p/berrebbi/espnet/egs2/aishell4/asr1/data/nlsyms.txt
cleaner: null
g2p: null
lm: transformer
lm_conf:
pos_enc: null
embed_unit: 128
att_unit: 512
head: 8
unit: 2048
layer: 16
dropout_rate: 0.1
required:
- output_dir
- token_list
version: 0.10.3a1
distributed: false
```
</details>
|
{"language": "zh", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["aishell4"]}
|
espnet/Dan_Berrebbi_aishell4_asr
| null |
[
"espnet",
"audio",
"automatic-speech-recognition",
"zh",
"dataset:aishell4",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
automatic-speech-recognition
|
espnet
|
## Example ESPnet2 ASR model
### `Emiru_Tsunoo/aishell_asr_train_asr_streaming_transformer_raw_zh_char_sp_valid.acc.ave`
♻️ Imported from https://zenodo.org/record/4604023/
This model was trained by Emiru Tsunoo using aishell/asr1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
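Until the official snippet lands, inference with ESPnet2 ASR models typically follows the generic `Speech2Text` pattern below. This is a hedged sketch, not the author's verified example: it assumes `espnet` and `espnet_model_zoo` are installed and that this repository's tag is accepted by `from_pretrained`; the wav path is a placeholder.

```python
import soundfile
from espnet2.bin.asr_inference import Speech2Text

# Hypothetical usage; the model tag matches this repository's name
speech2text = Speech2Text.from_pretrained(
    "espnet/Emiru_Tsunoo_aishell_asr_train_asr_streaming_transformer_raw_zh_char_sp_valid.acc.ave"
)

# 16 kHz mono audio, matching AISHELL-1 recordings
speech, rate = soundfile.read("example.wav")
nbests = speech2text(speech)

# Each n-best entry is (text, tokens, token_ids, hypothesis)
text, tokens, token_ids, hypothesis = nbests[0]
print(text)
```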
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "zh", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["aishell"]}
|
espnet/Emiru_Tsunoo_aishell_asr_train_asr_streaming_transformer_raw_zh_char_sp_valid.acc.ave
| null |
[
"espnet",
"audio",
"automatic-speech-recognition",
"zh",
"dataset:aishell",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
automatic-speech-recognition
|
espnet
|
## Example ESPnet2 ASR model
### `Hoon_Chung/jsut_asr_train_asr_conformer8_raw_char_sp_valid.acc.ave`
♻️ Imported from https://zenodo.org/record/4292742/
This model was trained by Hoon Chung using jsut/asr1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["jsut"]}
|
espnet/Hoon_Chung_jsut_asr_train_asr_conformer8_raw_char_sp_valid.acc.ave
| null |
[
"espnet",
"audio",
"automatic-speech-recognition",
"ja",
"dataset:jsut",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
automatic-speech-recognition
|
espnet
|
## Example ESPnet2 ASR model
### `Hoon_Chung/zeroth_korean_asr_train_asr_transformer5_raw_bpe_valid.acc.ave`
♻️ Imported from https://zenodo.org/record/4014588/
This model was trained by Hoon Chung using zeroth_korean/asr1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "kr", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["zeroth_korean"]}
|
espnet/Hoon_Chung_zeroth_korean_asr_train_asr_transformer5_raw_bpe_valid.acc.ave
| null |
[
"espnet",
"audio",
"automatic-speech-recognition",
"kr",
"dataset:zeroth_korean",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
automatic-speech-recognition
|
espnet
|
## ESPnet2 ASR pretrained model
### `espnet/Karthik_DSTC2_asr_train_asr_Hubert_transformer`
This model was trained by Karthik using DSTC2/asr1 recipe in [espnet](https://github.com/espnet/espnet/).

### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["sinhala"]}
|
espnet/Karthik_DSTC2_asr_train_asr_Hubert_transformer
| null |
[
"espnet",
"tensorboard",
"audio",
"automatic-speech-recognition",
"en",
"dataset:sinhala",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
automatic-speech-recognition
|
espnet
|
## ESPnet2 ASR pretrained model
### `espnet/Karthik_DSTC2_asr_train_asr_transformer`
This model was trained by Karthik using DSTC2/asr1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["sinhala"]}
|
espnet/Karthik_DSTC2_asr_train_asr_transformer
| null |
[
"espnet",
"tensorboard",
"audio",
"automatic-speech-recognition",
"en",
"dataset:sinhala",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
automatic-speech-recognition
|
espnet
|
## ESPnet2 ASR pretrained model
### `espnet/Karthik_sinhala_asr_train_asr_transformer`
This model was trained by Karthik using sinhala/asr1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["sinhala"]}
|
espnet/Karthik_sinhala_asr_train_asr_transformer
| null |
[
"espnet",
"audio",
"automatic-speech-recognition",
"en",
"dataset:sinhala",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
automatic-speech-recognition
|
espnet
|
## Example ESPnet2 ASR model
### `Shinji_Watanabe/laborotv_asr_train_asr_conformer2_latest33_raw_char_sp_valid.acc.ave`
♻️ Imported from https://zenodo.org/record/4304245/
This model was trained by Shinji Watanabe using laborotv/asr1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["laborotv"]}
|
espnet/Shinji_Watanabe_laborotv_asr_train_asr_conformer2_latest33_raw_char_sp_valid.acc.ave
| null |
[
"espnet",
"audio",
"automatic-speech-recognition",
"ja",
"dataset:laborotv",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
automatic-speech-recognition
|
espnet
|
## Example ESPnet2 ASR model
### `Shinji_Watanabe/librispeech_asr_train_asr_transformer_e18_raw_bpe_sp_valid.acc.best`
♻️ Imported from https://zenodo.org/record/4030677/
This model was trained by Shinji Watanabe using librispeech/asr1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["librispeech"]}
|
espnet/Shinji_Watanabe_librispeech_asr_train_asr_transformer_e18_raw_bpe_sp_valid.acc.best
| null |
[
"espnet",
"audio",
"automatic-speech-recognition",
"en",
"dataset:librispeech",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
automatic-speech-recognition
|
espnet
|
## ESPnet2 ASR pretrained model
### `Shinji Watanabe/open_li52_asr_train_asr_raw_bpe7000_valid.acc.ave`
♻️ Imported from https://zenodo.org/record/4630406/
This model was trained by Shinji Watanabe using gigaspeech/asr1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["gigaspeech"]}
|
espnet/Shinji_Watanabe_open_li52_asr_train_asr_raw_bpe7000_valid.acc.ave
| null |
[
"espnet",
"audio",
"automatic-speech-recognition",
"en",
"dataset:gigaspeech",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
automatic-speech-recognition
|
espnet
|
## Example ESPnet2 ASR model
### `Shinji_Watanabe/spgispeech_asr_train_asr_conformer6_n_fft512_hop_length256_raw_en_bpe5000_valid.acc.ave`
♻️ Imported from https://zenodo.org/record/4585546/
This model was trained by Shinji Watanabe using spgispeech/asr1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["spgispeech"]}
|
espnet/Shinji_Watanabe_spgispeech_asr_train_asr_conformer6_n_fft512_hop_lengt-truncated-f1ac86
| null |
[
"espnet",
"audio",
"automatic-speech-recognition",
"en",
"dataset:spgispeech",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
automatic-speech-recognition
|
espnet
|
## Example ESPnet2 ASR model
### `Shinji_Watanabe/spgispeech_asr_train_asr_conformer6_n_fft512_hop_length256_raw_en_unnorm_bpe5000_valid.acc.ave`
♻️ Imported from https://zenodo.org/record/4585558/
This model was trained by Shinji Watanabe using spgispeech/asr1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en_unnorm", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["spgispeech"]}
|
espnet/Shinji_Watanabe_spgispeech_asr_train_asr_conformer6_n_fft512_hop_lengt-truncated-a013d0
| null |
[
"espnet",
"audio",
"automatic-speech-recognition",
"dataset:spgispeech",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
audio-to-audio
|
espnet
|
## ESPnet2 ENH model
### `espnet/Wangyou_Zhang_chime4_enh_train_enh_beamformer_mvdr_raw`
This model was trained by Wangyou Zhang using chime4 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
pip install -e .
cd egs2/chime4/enh1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/Wangyou_Zhang_chime4_enh_train_enh_beamformer_mvdr_raw
```
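For Python-side inference rather than the full recipe, the generic ESPnet-SE pattern with `ModelDownloader` and `SeparateSpeech` should apply. This is a sketch under assumptions (espnet and `espnet_model_zoo` installed, the downloader accepting this repository's tag); the wav path is a placeholder, and the 6-channel input layout follows the CHiME-4 track this model was trained on.

```python
import soundfile
from espnet_model_zoo.downloader import ModelDownloader
from espnet2.bin.enh_inference import SeparateSpeech

# Downloads and unpacks the model, yielding train_config/model_file paths
d = ModelDownloader()
separate_speech = SeparateSpeech(
    **d.download_and_unpack("espnet/Wangyou_Zhang_chime4_enh_train_enh_beamformer_mvdr_raw")
)

# 6-channel CHiME-4 mixture: shape (num_samples, num_channels) at 16 kHz
mixwav, rate = soundfile.read("mixture_6ch.wav")

# Add a batch dimension; returns one enhanced waveform per output speaker
enhanced = separate_speech(mixwav[None, ...], fs=rate)
soundfile.write("enhanced.wav", enhanced[0].squeeze(), rate)
```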
## ENH config
<details><summary>expand</summary>
```
config: conf/tuning/train_enh_beamformer_mvdr.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/enh_train_enh_beamformer_mvdr_raw
ngpu: 1
seed: 0
num_workers: 4
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: 2
dist_rank: 0
local_rank: 0
dist_master_addr: localhost
dist_master_port: 35841
dist_launcher: null
multiprocessing_distributed: true
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 70
patience: 4
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- si_snr
- max
- - valid
- loss
- min
keep_nbest_models: 1
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
unused_parameters: false
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
pretrain_path: null
init_param: []
freeze_param: []
num_iters_per_epoch: null
batch_size: 8
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/enh_stats_16k/train/speech_mix_shape
- exp/enh_stats_16k/train/speech_ref1_shape
- exp/enh_stats_16k/train/noise_ref1_shape
valid_shape_file:
- exp/enh_stats_16k/valid/speech_mix_shape
- exp/enh_stats_16k/valid/speech_ref1_shape
- exp/enh_stats_16k/valid/noise_ref1_shape
batch_type: folded
valid_batch_type: null
fold_length:
- 80000
- 80000
- 80000
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/tr05_simu_isolated_6ch_track/wav.scp
- speech_mix
- sound
- - dump/raw/tr05_simu_isolated_6ch_track/spk1.scp
- speech_ref1
- sound
- - dump/raw/tr05_simu_isolated_6ch_track/noise1.scp
- noise_ref1
- sound
valid_data_path_and_name_and_type:
- - dump/raw/dt05_simu_isolated_6ch_track/wav.scp
- speech_mix
- sound
- - dump/raw/dt05_simu_isolated_6ch_track/spk1.scp
- speech_ref1
- sound
- - dump/raw/dt05_simu_isolated_6ch_track/noise1.scp
- noise_ref1
- sound
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.001
eps: 1.0e-08
weight_decay: 0
scheduler: reducelronplateau
scheduler_conf:
mode: min
factor: 0.5
patience: 1
init: xavier_uniform
model_conf:
loss_type: mask_mse
mask_type: PSM^2
use_preprocessor: false
encoder: stft
encoder_conf:
n_fft: 512
hop_length: 128
separator: wpe_beamformer
separator_conf:
num_spk: 1
loss_type: mask_mse
use_wpe: false
wnet_type: blstmp
wlayers: 3
wunits: 300
wprojs: 320
wdropout_rate: 0.0
taps: 5
delay: 3
use_dnn_mask_for_wpe: true
use_beamformer: true
bnet_type: blstmp
blayers: 3
bunits: 512
bprojs: 512
badim: 320
ref_channel: 3
use_noise_mask: true
beamformer_type: mvdr_souden
bdropout_rate: 0.0
decoder: stft
decoder_conf:
n_fft: 512
hop_length: 128
required:
- output_dir
version: 0.9.7
distributed: true
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{li2021espnetse,
title={{ESPnet-SE}: End-to-End Speech Enhancement and Separation Toolkit Designed for {ASR} Integration},
author={Li, Chenda and Shi, Jing and Zhang, Wangyou and Subramanian, Aswin Shanmugam and Chang, Xuankai and Kamo, Naoyuki and Hira, Moto and Hayashi, Tomoki and Boeddeker, Christoph and Chen, Zhuo and Watanabe, Shinji},
booktitle={Proc. IEEE Spoken Language Technology Workshop (SLT)},
pages={785--792},
year={2021},
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@inproceedings{li2021espnetse,
title={{ESPnet-SE}: End-to-End Speech Enhancement and Separation Toolkit Designed for {ASR} Integration},
author={Li, Chenda and Shi, Jing and Zhang, Wangyou and Subramanian, Aswin Shanmugam and Chang, Xuankai and Kamo, Naoyuki and Hira, Moto and Hayashi, Tomoki and Boeddeker, Christoph and Chen, Zhuo and Watanabe, Shinji},
year={2020},
eprint={2011.03706},
archivePrefix={arXiv},
primaryClass={eess.AS}
}
```
|
{"license": "cc-by-4.0", "tags": ["espnet", "audio", "audio-to-audio"], "datasets": ["chime4"]}
|
espnet/Wangyou_Zhang_chime4_enh_train_enh_beamformer_mvdr_raw
| null |
[
"espnet",
"audio",
"audio-to-audio",
"dataset:chime4",
"arxiv:1804.00015",
"arxiv:2011.03706",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
automatic-speech-recognition
|
espnet
|
## ESPnet2 ASR model
### `espnet/YushiUeda_iemocap_sentiment_asr_train_asr_conformer`
This model was trained by Yushi Ueda using iemocap recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout dfa2868243a897c2a6c34b7407eaea5e4b5508a5
pip install -e .
cd egs2/iemocap/asr1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/YushiUeda_iemocap_sentiment_asr_train_asr_conformer
```
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Thu Feb 17 11:25:22 EST 2022`
- python version: `3.7.11 (default, Jul 27 2021, 14:32:16) [GCC 7.5.0]`
- espnet version: `espnet 0.10.7a1`
- pytorch version: `pytorch 1.9.0+cu102`
- Git hash: `f6cde1c419c814a14ccd40abe557a780508cbcdf`
- Commit date: `Fri Feb 11 12:25:33 2022 -0500`
## Using a Conformer-based encoder and a Transformer-based decoder with spectral augmentation, predicting the transcript along with sentiment
- ASR config: [conf/tuning/train_asr_conformer.yaml](conf/tuning/train_asr_conformer.yaml)
- token_type: word
- Sentiment labels: Positive, Neutral, Negative
|dataset|Snt|Sentiment Classification Macro F1 (%)| Weighted F1 (%)| Micro F1 (%)|
|---|---|---|---|---|
|decode_asr_model_valid.acc.ave_10best/valid|754|53.9|65.7|66.4|
|decode_asr_model_valid.acc.ave_10best/test|1650|50.3|54.5|55.7|
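The table reports three aggregations of per-class F1. As a quick reference, the following sketch computes macro, weighted, and micro F1 from scratch on made-up labels (not the actual IEMOCAP predictions):

```python
# Toy illustration of the three F1 aggregations in the table above:
# macro (unweighted class mean), weighted (support-weighted class mean),
# and micro (equal to accuracy for single-label classification).
from collections import Counter

def f1_scores(y_true, y_pred, labels):
    per_class, support = {}, Counter(y_true)
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        per_class[c] = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    macro = sum(per_class.values()) / len(labels)
    weighted = sum(per_class[c] * support[c] for c in labels) / len(y_true)
    # Micro F1 == accuracy when every sample has exactly one label.
    micro = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    return macro, weighted, micro

labels = ["Positive", "Neutral", "Negative"]
y_true = ["Positive", "Negative", "Negative", "Neutral", "Negative"]
y_pred = ["Positive", "Negative", "Neutral", "Neutral", "Negative"]
macro, weighted, micro = f1_scores(y_true, y_pred, labels)
```

Macro F1 treats the three sentiment classes equally, which is why it is lower than the weighted and micro scores when minority classes are harder.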
## ASR config
<details><summary>expand</summary>
```
config: conf/tuning/train_asr_conformer.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_asr_conformer_raw_en_word
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 200
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 10
nbest_averaging_interval: 0
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 64
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_en_word/train/speech_shape
- exp/asr_stats_raw_en_word/train/text_shape.word
valid_shape_file:
- exp/asr_stats_raw_en_word/valid/speech_shape
- exp/asr_stats_raw_en_word/valid/text_shape.word
batch_type: folded
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train/wav.scp
- speech
- sound
- - dump/raw/train/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/valid/wav.scp
- speech
- sound
- - dump/raw/valid/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.0005
scheduler: warmuplr
scheduler_conf:
warmup_steps: 5000
token_list:
- <blank>
- <unk>
- i
- you
- Negative
- to
- it
- '''s'
- the
- '''t'
- that
- and
- Neutral
- Positive
- a
- know
- what
- of
- like
- we
- don
- just
- is
- do
- this
- '''m'
- me
- have
- can
- in
- for
- 'no'
- so
- not
- '''re'
- my
- but
- mean
- be
- going
- all
- was
- they
- well
- want
- yeah
- right
- get
- 'on'
- there
- he
- oh
- here
- go
- out
- with
- your
- if
- okay
- are
- she
- at
- '''ll'
- '''ve'
- got
- think
- about
- up
- see
- then
- why
- how
- time
- really
- one
- now
- or
- as
- back
- look
- her
- him
- been
- because
- 'yes'
- would
- didn
- little
- did
- good
- some
- them
- something
- need
- maybe
- never
- um
- come
- take
- god
- had
- could
- will
- uh
- am
- people
- thing
- when
- very
- let
- much
- sorry
- from
- again
- long
- give
- anything
- too
- make
- fish
- years
- where
- isn
- three
- said
- things
- nothing
- help
- work
- tell
- guess
- over
- 'off'
- business
- even
- sir
- any
- his
- around
- were
- way
- who
- new
- kind
- '''d'
- our
- everything
- more
- came
- an
- should
- down
- understand
- only
- great
- else
- man
- line
- us
- ask
- last
- doing
- say
- waiting
- other
- lot
- job
- feel
- yourself
- point
- thought
- day
- whole
- away
- coming
- better
- marry
- always
- these
- still
- wrong
- two
- sure
- care
- phone
- probably
- remember
- annie
- life
- year
- believe
- gonna
- supposed
- went
- first
- talk
- listen
- alright
- before
- thinking
- after
- stuff
- happy
- ever
- turn
- thank
- home
- fine
- into
- than
- call
- money
- stay
- actually
- every
- hope
- love
- huh
- married
- wait
- somewhere
- has
- being
- father
- larry
- hell
- wanted
- trying
- getting
- guys
- name
- saying
- bag
- hear
- girl
- hey
- flashlight
- beach
- put
- leave
- dollars
- mind
- augie
- does
- won
- fifty
- excited
- hate
- four
- done
- through
- their
- keep
- car
- lost
- doesn
- happen
- wouldn
- school
- big
- calm
- night
- '''cause'
- id
- another
- though
- myself
- nobody
- somebody
- best
- might
- same
- form
- mom
- nice
- matter
- spot
- stop
- told
- by
- shut
- enough
- five
- joe
- hard
- find
- course
- chris
- drunk
- snap
- luggage
- rather
- standing
- someone
- laugh
- took
- those
- please
- live
- six
- ridiculous
- minute
- looking
- bring
- show
- start
- brought
- days
- must
- pretty
- sort
- talking
- sand
- child
- working
- send
- next
- hundred
- whatever
- many
- moon
- moment
- champagne
- s
- problem
- end
- real
- dear
- happened
- person
- place
- fill
- awesome
- house
- such
- cool
- c
- haven
- knew
- die
- finally
- glasses
- stupid
- least
- dad
- supervisor
- totally
- each
- try
- waited
- idea
- u
- party
- asked
- anymore
- sick
- evening
- license
- kid
- wow
- flight
- felt
- pay
- since
- single
- miss
- without
- different
- mmhmm
- free
- sometimes
- yet
- couldn
- view
- hour
- knows
- drive
- themselves
- swim
- ah
- brandy
- fact
- ma
- '''am'
- already
- part
- sit
- thanks
- comes
- check
- everyone
- started
- kiss
- weren
- hotel
- own
- beast
- bad
- above
- run
- worst
- grunions
- darling
- seem
- baby
- turned
- gone
- shouldn
- exactly
- reason
- full
- both
- crazy
- pack
- bit
- swimming
- liquor
- seemed
- serious
- cause
- peter
- burden
- gosh
- forgot
- happens
- alone
- pass
- letters
- heard
- manager
- hours
- baggage
- card
- number
- argue
- seen
- walk
- forget
- kids
- family
- blanket
- honey
- open
- quite
- gotta
- forms
- mother
- old
- needs
- times
- airline
- which
- once
- service
- week
- together
- twenty
- stand
- made
- fun
- dead
- sake
- men
- kate
- today
- plane
- most
- carla
- driving
- deal
- information
- wanna
- definitely
- while
- yea
- certificate
- particular
- lots
- calling
- fortune
- write
- entire
- found
- trouble
- use
- forever
- woman
- enjoy
- room
- damn
- war
- meaning
- longer
- jacket
- ticket
- twice
- sent
- wonder
- small
- amanda
- cannot
- able
- half
- ha
- saw
- bus
- ago
- hmm
- hi
- kidding
- giving
- gave
- move
- women
- ahead
- york
- guy
- suppose
- company
- incredible
- either
- minutes
- tonight
- shoes
- utterly
- wasn
- filled
- gets
- amazing
- beautiful
- hello
- birth
- prove
- choice
- friend
- expect
- says
- blue
- anywhere
- died
- weird
- umm
- blood
- d
- face
- body
- alive
- diagram
- goes
- read
- far
- race
- wind
- fly
- interested
- california
- coast
- news
- past
- charles
- floor
- idiotic
- indeed
- absolutely
- softball
- answer
- somehow
- having
- campus
- completely
- file
- everybody
- given
- fair
- front
- telling
- tried
- sign
- helping
- dollar
- used
- takes
- hair
- behind
- head
- also
- question
- pull
- brother
- nonsense
- kill
- pocket
- cold
- mine
- watching
- shall
- divorce
- driver
- m
- makes
- cried
- security
- suitcase
- seems
- control
- set
- letter
- realized
- paper
- weeks
- address
- sweet
- lose
- huge
- death
- ones
- living
- glad
- bed
- until
- thinks
- wedding
- pieces
- parents
- ready
- almost
- forgive
- kissed
- silver
- during
- forty
- lives
- grow
- arrive
- eyes
- putting
- quiet
- poor
- presents
- sting
- tired
- row
- anyhow
- window
- v
- thousand
- watch
- ashamed
- figure
- vacation
- application
- left
- certainly
- calls
- months
- student
- close
- helpful
- called
- welcome
- major
- match
- morning
- fit
- reach
- door
- wife
- faith
- noticed
- several
- killed
- accident
- rat
- flop
- hands
- ear
- dancing
- hairs
- bugging
- dinner
- bills
- worked
- bored
- conversation
- tunis
- overbearing
- grand
- nine
- amusing
- vile
- tempered
- obviously
- tomorrow
- taken
- eight
- venice
- worth
- boy
- realize
- midnight
- evil
- sixteen
- gotten
- paying
- bottle
- smart
- cindy
- excuse
- along
- seven
- children
- figured
- jobs
- joke
- charge
- memorial
- sitting
- hardly
- young
- story
- feels
- pronouncing
- insane
- forgotten
- fast
- inspire
- grub
- tough
- arguing
- air
- toss
- instance
- raining
- pair
- dry
- socks
- selfish
- included
- yours
- mystery
- mindedness
- urgency
- pure
- urge
- insulting
- ideas
- herself
- period
- missed
- backwards
- dance
- worms
- pop
- except
- perfect
- blow
- funny
- listening
- sadistic
- bully
- cruel
- 'true'
- second
- acting
- lucky
- handle
- loved
- hit
- shaking
- destroyed
- changed
- book
- eleven
- animals
- ice
- cream
- brings
- frustrating
- otherwise
- onto
- pregnant
- operator
- baltimore
- san
- diego
- contract
- brown
- friends
- pictures
- internet
- piece
- high
- anyone
- tickets
- inconvenience
- gift
- usually
- green
- city
- couple
- chuck
- growing
- pick
- throw
- yay
- walking
- grave
- considerate
- inspired
- looked
- mistake
- believes
- avoid
- sucker
- rock
- strangers
- missing
- hide
- geez
- imagination
- overseas
- command
- earth
- monument
- difference
- zipped
- kansas
- reservations
- ahh
- formed
- barefoot
- shower
- running
- garage
- knickerbocker
- locker
- wasting
- roses
- peaches
- rosy
- mention
- shh
- behave
- exquisitely
- beautifully
- rolling
- biting
- scratching
- panthers
- suddenly
- ought
- dreadfully
- pity
- eye
- world
- making
- bark
- roll
- hoops
- insufferable
- weak
- upstairs
- insist
- boorish
- conceited
- impossible
- torment
- brute
- perfectly
- wicked
- crawling
- top
- wish
- wants
- bank
- plan
- soon
- plenty
- bags
- congratulations
- play
- carry
- ignore
- sudden
- refrigerator
- loot
- fight
- lights
- swallows
- goose
- bumps
- keeps
- fighting
- massive
- celebration
- sex
- human
- ours
- light
- minded
- social
- needed
- anyway
- words
- problems
- claim
- reimburse
- checked
- airport
- meet
- e
- responsibility
- grunion
- knees
- thousands
- important
- shows
- goddamn
- strong
- law
- sara
- brent
- passport
- aren
- month
- romantic
- leaving
- random
- applied
- interesting
- regular
- taking
- harder
- hurt
- movie
- freaking
- record
- airlines
- responsible
- honestly
- grew
- proud
- hang
- mrs
- fellow
- terrible
- contradict
- infuriate
- throws
- afraid
- suffer
- bloody
- settled
- thrash
- may
- son
- faithful
- moments
- act
- sleep
- detroit
- planning
- yard
- particularly
- natural
- phenomenon
- highlight
- flopping
- laying
- eggs
- mating
- orgy
- magic
- unexplainable
- instincts
- seaweed
- instinctual
- firecracker
- spent
- clasped
- intimate
- special
- wishes
- seriously
- refreshments
- ooh
- pinpoint
- marge
- dishes
- fat
- ring
- later
- shivers
- spine
- sillier
- poise
- trumpets
- squeakers
- sockets
- allure
- contrary
- violently
- glass
- temperamental
- fiend
- loathe
- adder
- riotous
- mentioned
- intemperate
- tots
- downstairs
- mad
- loose
- lived
- yelling
- happening
- promise
- known
- exciting
- finish
- college
- atlanta
- searching
- fired
- drinking
- jesus
- lock
- plans
- hole
- santa
- kitchen
- invite
- believing
- ann
- landing
- eats
- panties
- sore
- throat
- unmistakable
- capistrano
- lemmings
- cliffs
- invitation
- map
- heaven
- carpet
- poodle
- suicide
- pact
- turns
- court
- dies
- mustn
- vampire
- identification
- places
- danger
- hand
- middle
- situation
- option
- willing
- paid
- horrible
- pain
- anybody
- paperwork
- difficult
- dream
- sakes
- matters
- toes
- become
- habit
- hold
- survive
- break
- babe
- shit
- contact
- land
- water
- transfer
- backersen
- desk
- wallet
- stolen
- credit
- cards
- clearly
- appreciate
- complicated
- uhuh
- bucks
- win
- theatre
- resume
- riding
- helps
- less
- planes
- means
- future
- ran
- red
- wrote
- loans
- spend
- dreaming
- proof
- shooting
- crack
- cracked
- dares
- invited
- breaks
- embarrassed
- wondering
- aw
- style
- granted
- embarrassing
- mixed
- su
- spawning
- stubbed
- toe
- bodies
- expectantly
- meant
- beginning
- traumatized
- freda
- sooner
- applies
- philosophers
- rots
- trivial
- torture
- stiff
- venom
- fangs
- wake
- bended
- voice
- build
- unbelievable
- hiring
- resumes
- eventually
- aggressive
- awhile
- especially
- further
- mass
- pointless
- claus
- neither
- mmm
- cannes
- figures
- burnt
- debate
- exception
- busy
- safe
- possible
- spring
- starting
- buy
- rest
- office
- complaint
- accepted
- ten
- area
- seats
- foam
- vibrations
- drives
- popped
- slightly
- exaggerated
- scientific
- proposed
- bathroom
- awful
- scene
- adders
- afford
- packet
- forward
- customer
- brand
- yellow
- fifteen
- brian
- asking
- percent
- girlfriend
- acceptance
- patient
- patience
- dishonest
- cheese
- restaurant
- t
- sixty
- direct
- holiday
- inn
- refund
- hmmm
- receiving
- sim
- browns
- unacceptable
- northwest
- dorky
- putt
- change
- filling
- z
- x
- simple
- mail
- request
- raise
- town
- hadn
- played
- pennies
- visa
- visit
- loves
- list
- environment
- frustrated
- ride
- imagine
- flew
- nash
- replace
- paris
- personal
- issue
- flights
- track
- angry
- headstone
- cemetery
- cancer
- poetry
- palm
- l
- dropped
- bunch
- p
- chair
- broke
- o
- allow
- nights
- talent
- ignoring
- center
- lovely
- sneaking
- whose
- es
- naturally
- stays
- wide
- bought
- arm
- exact
- curtsy
- wiggle
- superficial
- paint
- naked
- vendome
- rouser
- younger
- jealous
- fascinating
- duty
- photographer
- studio
- cad
- restraint
- ill
- knee
- applying
- questions
- picture
- fake
- apartment
- cash
- drink
- upset
- sending
- flying
- speak
- details
- wherever
- unfortunate
- education
- leaves
- basically
- hospital
- messed
- sounds
- pinch
- malibu
- drop
- team
- professional
- till
- ambiguous
- seeing
- ugh
- wet
- heading
- release
- fire
- inside
- pr
- includes
- rub
- ludicrous
- wriggle
- flippancy
- acid
- sweetness
- curling
- dressing
- gown
- broach
- enjoyable
- original
- '''em'
- early
- ok
- daughter
- age
- steps
- rejected
- starts
- competitive
- hired
- worse
- itself
- nowhere
- unfortunately
- process
- fault
- decision
- package
- easy
- transferred
- straight
- suckers
- none
- returning
- throwing
- cork
- softest
- breathe
- road
- catch
- threw
- canal
- comb
- towels
- sacred
- savor
- delight
- needn
- late
- web
- website
- rough
- daddy
- talked
- feeling
- talented
- interview
- food
- looks
- misplaced
- theft
- likely
- stuck
- tags
- cult
- everywhere
- menu
- choose
- press
- lady
- bill
- department
- online
- immediately
- miles
- notice
- vote
- heavens
- yell
- anna
- tables
- hasn
- stole
- losing
- unfair
- positive
- boston
- celebrate
- system
- turning
- newspapers
- pays
- dare
- jokes
- swine
- demand
- building
- finished
- staying
- cheap
- anyways
- okey
- lobster
- wonderful
- harvard
- engineering
- summer
- lawyer
- mr
- lax
- delta
- funeral
- report
- property
- whoever
- corporate
- miso
- soup
- holy
- olivia
- camera
- power
- sold
- testing
- greens
- explain
- agreement
- undecided
- access
- babies
- street
- vegas
- slot
- honeymoon
- husband
- penny
- slots
- wheel
- cat
- citizenship
- england
- fan
- spending
- craig
- services
- monster
- baloney
- saving
- necessarily
- carousel
- cameras
- airplane
- sentimental
- value
- incredibly
- shopping
- jet
- clothes
- apologize
- allowed
- amount
- candy
- redlands
- sprinklers
- whenever
- brain
- park
- holding
- memorized
- surgery
- audience
- joy
- scholarships
- commuting
- h
- ruined
- mm
- bet
- neighborhood
- sticking
- woo
- teach
- class
- confused
- clock
- foolish
- ocean
- distinctly
- whispered
- wishing
- white
- elliott
- strange
- quest
- ultimate
- truth
- shan
- word
- disagreeable
- wench
- birthday
- national
- thin
- rent
- colors
- citizen
- account
- '''til'
- hire
- short
- fuse
- america
- audition
- sponge
- language
- arriving
- reimbursement
- computer
- cover
- ass
- dealing
- quick
- freaks
- pitch
- hitting
- housing
- force
- scholarship
- dirty
- depends
- helicopter
- wild
- sport
- games
- streets
- although
- mi
- trust
- cracker
- curtsey
- bicker
- irons
- besides
- splendid
- born
- weekends
- letting
- tear
- apart
- touch
- flipped
- hot
- outside
- flowers
- candles
- approve
- surprised
- lead
- ends
- worthless
- apparently
- worker
- annoy
- belongings
- disappeared
- under
- case
- checking
- admit
- risk
- agreed
- yesterday
- country
- financial
- aid
- within
- automated
- systems
- specific
- rate
- star
- aisle
- afternoon
- maui
- machine
- waste
- available
- confirmed
- thinkin
- liked
- kicked
- intermittently
- burned
- desire
- fade
- passion
- laughable
- cunning
- mirrors
- painted
- wooden
- snake
- suspicious
- nosey
- silly
- wonders
- order
- standard
- site
- sense
- dangerous
- cute
- whether
- considering
- opinion
- f
- few
- guarantee
- possessions
- claims
- sue
- easier
- cared
- expected
- trip
- europe
- its
- circles
- large
- store
- macy
- rotary
- instead
- showed
- hundreds
- planned
- someplace
- sensitive
- popping
- opened
- backrub
- fantasy
- damned
- sheet
- cut
- purchase
- amy
- quit
- clapping
- onstage
- eighteen
- auditioning
- rejection
- prepared
- thirty
- master
- kelly
- natalie
- pants
- isabella
- verizon
- goodbye
- fucking
- challenge
- slept
- created
- checkbook
- argument
- uhh
- perhaps
- loath
- complete
- sad
- priorities
- between
- moving
- song
- temporary
- pulling
- smith
- receptionist
- extra
- lodging
- eh
- la
- cost
- boss
- peanuts
- doctor
- production
- downtown
- april
- contracts
- incompetent
- realtor
- fix
- payphone
- verify
- electrical
- outage
- symptoms
- nature
- pilot
- hook
- realizes
- bother
- trade
- event
- meadow
- faint
- blues
- bananas
- overnight
- station
- attention
- purchasing
- terms
- taser
- excellent
- counsel
- sorority
- golfing
- library
- dork
- taco
- branch
- separate
- sacrifices
- mothers
- kicking
- videotape
- stream
- sitters
- moved
- computers
- machines
- bride
- cruise
- likes
- tabs
- plays
- giant
- renamed
- brenda
- lumber
- janet
- state
- quarters
- costs
- escort
- reliable
- board
- posting
- trail
- following
- fantastic
- mighty
- recommending
- generally
- outline
- affords
- save
- carpool
- frustration
- refuse
- anger
- fourth
- lines
- fourteen
- mileage
- candid
- packed
- replaced
- expensive
- lawsuit
- cruising
- bruising
- president
- mistakenly
- behalf
- listed
- liable
- held
- sean
- badge
- employee
- impression
- cemeteries
- urban
- oasis
- wandering
- hers
- pathetic
- ground
- stones
- tumors
- heather
- built
- prospect
- garden
- section
- parties
- feet
- poems
- curly
- tree
- crown
- john
- dunn
- begin
- wheelchair
- reciting
- envelope
- grants
- mold
- minds
- mess
- rapper
- ho
- masters
- teacher
- dash
- popular
- seasoning
- messing
- ruin
- woke
- darkest
- beating
- bush
- porch
- fresh
- rooms
- sweetest
- pets
- cheeked
- brooch
- however
- jones
- voices
- berating
- christmas
- shame
- bunker
- guard
- spread
- companies
- shipping
- shock
- group
- dual
- unattached
- engagement
- sock
- dude
- lucked
- blush
- beige
- loaded
- craziest
- offered
- spoke
- english
- accent
- illegal
- jail
- caught
- hardcore
- tropical
- bahamas
- tahiti
- wealthy
- royalty
- removed
- attitude
- extremely
- hostile
- cutting
- sentence
- jumping
- produce
- field
- shake
- across
- soaked
- dying
- georgia
- educated
- boarding
- attendance
- seat
- offer
- publicize
- abuse
- insinuating
- smug
- mouth
- tossing
- hanky
- black
- wheels
- easily
- overhead
- compartment
- data
- collecting
- lip
- coffee
- smoking
- cigarettes
- union
- differently
- numb
- sickness
- boom
- mortality
- affecting
- slow
- books
- per
- diem
- victorian
- houses
- west
- sider
- commute
- practice
- neon
- softballs
- glow
- co
- ed
- nationally
- ranked
- ping
- pong
- denigrate
- rookie
- donuts
- recently
- pitcher
- hitter
- mostly
- shortstop
- ex
- trojans
- sports
- nicer
- monica
- player
- type
- helipad
- fell
- literally
- doubt
- cares
- mustache
- papers
- crying
- floorboards
- sorted
- everyday
- seas
- bringing
- sacrifice
- guilty
- opening
- return
- jumped
- distinctively
- direction
- tiny
- action
- passed
- cheeks
- darn
- urgh
- restrain
- self
- centered
- registration
- lunch
- documents
- identifications
- deadline
- carries
- official
- documentation
- government
- wireless
- crucial
- pulls
- kinda
- girly
- radiant
- ya
- shine
- invitations
- response
- mcdonald
- level
- member
- pavement
- indicators
- prejudice
- against
- applications
- hating
- physically
- amateur
- crawl
- dumber
- cases
- etiquette
- bug
- opinions
- magically
- irresponsible
- carrousel
- contents
- main
- liability
- provides
- shops
- reimbursed
- investigate
- provide
- uncommon
- johnny
- conscious
- stories
- africa
- image
- hurts
- goout
- gradual
- impact
- subside
- heals
- parts
- football
- recognizable
- accomplished
- prestige
- load
- worrying
- decide
- tour
- friendly
- ivy
- walls
- collegiate
- g
- choices
- math
- prestigious
- departments
- orientation
- graduate
- shiloh
- valued
- customers
- previous
- purchases
- scheduling
- highly
- discounted
- uses
- corporation
- hotels
- rated
- aisles
- switch
- fortunately
- allows
- spare
- shuttle
- appropriate
- traveling
- deals
- shuttles
- sleeps
- gee
- futile
- moralists
- unbearable
- flippant
- shibboleths
- rush
- madly
- piazza
- iron
- dri
- counter
- applica
- lonely
- disappear
- video
- definitive
- magazine
- boyfriend
- stage
- golly
- concert
- crew
- freak
- guaranteed
- nervous
- hah
- persistence
- factors
- types
- male
- female
- consideration
- cooking
- reconsidering
- uhm
- retirement
- foot
- persistent
- table
- skewed
- painting
- outer
- employment
- unlucky
- planet
- normal
- peoples
- reading
- difficulties
- loading
- mishap
- cart
- shipped
- tracking
- reim
- tight
- error
- continue
- 'false'
- compensate
- policy
- gifts
- nobodies
- tag
- originally
- shoe
- core
- memories
- kathy
- lasted
- gary
- closed
- surreal
- troops
- loving
- los
- angeles
- schools
- kinds
- secrets
- explore
- rip
- nuts
- champions
- leaning
- towards
- communications
- broad
- confined
- ropes
- recording
- depending
- leads
- bypass
- zero
- pleasant
- ebay
- bye
- steve
- hint
- asks
- tone
- pretend
- protection
- rid
- submit
- print
- regarding
- grievance
- sites
- protected
- processed
- careful
- secure
- unreliable
- trash
- kept
- spotting
- certain
- specifically
- pushing
- headed
- ears
- watched
- sends
- ceaseless
- wear
- often
- pleasure
- sonya
- promoted
- nurses
- mommy
- va
- videotaped
- cousin
- postpone
- performance
- swear
- cast
- spotlight
- microphone
- tripped
- surprise
- scored
- points
- members
- loser
- marrying
- weddings
- carats
- lousy
- chaperone
- drowsy
- deserve
- cry
- tears
- happiness
- marriage
- commercials
- refection
- financially
- studied
- passing
- russel
- crowe
- pooling
- funds
- owe
- learning
- role
- auditions
- denny
- tip
- teaching
- oof
- france
- steal
- keys
- laughing
- rosenkrantz
- thingy
- bopper
- limit
- whoa
- ways
- suffered
- disease
- handsome
- gifted
- parent
- ripped
- uveny
- tricia
- chemo
- baseball
- benny
- nat
- nation
- bread
- eat
- beer
- dorm
- sometime
- mattresses
- reserved
- grauman
- scale
- whooooo
- acti
- film
- art
- academy
- films
- fuck
- ethiopia
- cuddle
- profanity
- provider
- satellites
- average
- compensating
- unbeknownst
- satellite
- exaggerate
- advising
- addressed
- fax
- dumb
- fritz
- incoming
- million
- grown
- fella
- shootin
- travel
- sat
- instinct
- goosebumps
- arms
- danced
- intimately
- spart
- strumpets
- bristling
- diamonds
- taste
- portion
- side
- stairs
- condescending
- copy
- proceed
- remove
- missy
- behaving
- sweetie
- deploy
- specialist
- increase
- triple
- promotion
- retire
- quiets
- faster
- career
- lame
- drew
- barrymore
- nasty
- mouse
- cheesy
- jane
- tarzan
- engaged
- esmeralda
- hitched
- spontaneous
- character
- conga
- dim
- pulled
- chucky
- sarah
- guiding
- graduated
- apply
- colleges
- energy
- busing
- clerk
- excuses
- qualified
- chang
- investment
- banking
- deloitte
- touche
- temp
- degrading
- smarter
- astronaut
- biomedical
- internship
- plus
- breaking
- evicting
- typing
- shoot
- degree
- science
- club
- joking
- doomed
- maryland
- cooperate
- emergency
- pounds
- urn
- deduction
- sherlock
- holmes
- vessel
- burst
- caption
- therefore
- placed
- firing
- lobby
- fastest
- ibm
- misplace
- count
- hanging
- explanation
- follow
- footsteps
- overboard
- paralyzed
- coma
- fucked
- studying
- countries
- goal
- met
- greatest
- hopefully
- mmmm
- cinema
- chapter
- professionals
- sipping
- martinis
- sushi
- vat
- assistance
- starve
- south
- central
- firm
- police
- officer
- viacom
- digits
- speaking
- network
- charging
- connect
- outages
- hurricane
- katrina
- chose
- maam
- proven
- failing
- receive
- cuts
- using
- flip
- writing
- ms
- fall
- older
- game
- orange
- pink
- goodies
- battling
- sees
- flat
- stronger
- acted
- deserves
- hats
- shore
- pokes
- nah
- paul
- boats
- dammit
- enjoys
- bound
- harm
- pleasured
- lure
- devil
- rile
- topic
- initialed
- lets
- correctly
- spelled
- signed
- shitty
- timing
- susie
- tours
- emotionally
- bullshit
- enlist
- lie
- traditional
- church
- cabins
- flowery
- naturey
- midsummer
- excitement
- hoping
- attacked
- bears
- trim
- cooler
- dog
- tanish
- contrast
- cake
- buffet
- fried
- chicken
- mashed
- potatoes
- happier
- thrilled
- ecstatic
- rushed
- pressure
- interviews
- favors
- bite
- excessive
- unemployed
- cab
- gas
- possibly
- extreme
- trained
- presentable
- quote
- buck
- chugging
- engine
- realm
- minimum
- wage
- fry
- flipper
- bottom
- clear
- affect
- cle
- dressed
- shave
- legs
- presentation
- eighty
- success
- position
- training
- mcdonalds
- tv
- rainbow
- colored
- crap
- safely
- destination
- percoes
- equivalent
- amends
- courtesy
- inconveniencing
- near
- communicate
- conditions
- frequently
- current
- expecting
- pissed
- honor
- grandmother
- condition
- inevitable
- peace
- general
- mace
- present
- knife
- puny
- underwater
- basket
- weaving
- lying
- decided
- works
- worried
- occasion
- cruisers
- vibe
- greek
- lessons
- suck
- celebrating
- crush
- throughout
- test
- waters
- movies
- vermont
- cruiser
- abused
- frat
- boys
- dorms
- dell
- requests
- fixed
- dealt
- worries
- refunded
- situa
- relevant
- ordered
- orders
- others
- incorrectly
- tomatoes
- del
- cents
- attached
- cuz
- hoped
- opportunity
- rushing
- goods
- skipped
- breath
- kleenex
- alaska
- bearing
- hated
- holes
- calf
- witch
- whore
- <sos/eos>
init: null
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: true
joint_net_conf: null
model_conf:
ctc_weight: 0.5
ignore_id: -1
lsm_weight: 0.0
length_normalized_loss: false
report_cer: true
report_wer: true
sym_space: <space>
sym_blank: <blank>
extract_feats_in_collect_stats: true
use_preprocessor: true
token_type: word
bpemodel: null
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: default
frontend_conf:
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 30
num_freq_mask: 2
apply_time_mask: true
time_mask_width_range:
- 0
- 40
num_time_mask: 2
normalize: utterance_mvn
normalize_conf: {}
preencoder: null
preencoder_conf: {}
encoder: conformer
encoder_conf:
output_size: 512
attention_heads: 4
linear_units: 2048
num_blocks: 12
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.1
input_layer: conv2d
normalize_before: true
macaron_style: true
pos_enc_layer_type: rel_pos
selfattention_layer_type: rel_selfattn
activation_type: swish
use_cnn_module: true
cnn_module_kernel: 31
postencoder: null
postencoder_conf: {}
decoder: transformer
decoder_conf:
attention_heads: 4
linear_units: 2048
num_blocks: 6
dropout_rate: 0.1
positional_dropout_rate: 0.1
self_attention_dropout_rate: 0.1
src_attention_dropout_rate: 0.1
required:
- output_dir
- token_list
version: 0.10.7a1
distributed: false
```
</details>
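Two optimization entries in the config above are worth unpacking: `ctc_weight: 0.5` interpolates the CTC and attention-decoder losses into the standard hybrid objective, and `scheduler: warmuplr` with `warmup_steps: 5000` is a Noam-style schedule (linear ramp, then inverse-square-root decay). A minimal sketch with toy numbers, assuming the usual formulations:

```python
# Hybrid CTC/attention objective implied by `ctc_weight: 0.5` above:
# total loss interpolates CTC and attention cross-entropy losses.
def hybrid_loss(loss_ctc, loss_att, ctc_weight=0.5):
    return ctc_weight * loss_ctc + (1.0 - ctc_weight) * loss_att

total = hybrid_loss(loss_ctc=2.0, loss_att=1.0, ctc_weight=0.5)

# Noam-style `warmuplr` schedule: linear warmup for `warmup_steps`,
# then inverse-sqrt decay; the peak equals base_lr at step == warmup_steps.
def warmup_lr(step, base_lr=0.0005, warmup_steps=5000):
    return base_lr * warmup_steps ** 0.5 * min(
        step ** -0.5, step * warmup_steps ** -1.5)
```

With `lr: 0.0005` and `warmup_steps: 5000` from the config, the learning rate climbs to 0.0005 over the first 5000 steps and decays afterwards.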
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["iemocap"]}
|
espnet/YushiUeda_iemocap_sentiment_asr_train_asr_conformer
| null |
[
"espnet",
"audio",
"automatic-speech-recognition",
"en",
"dataset:iemocap",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
automatic-speech-recognition
|
espnet
|
## ESPnet2 ASR model
### `espnet/YushiUeda_iemocap_sentiment_asr_train_asr_conformer_hubert`
This model was trained by Yushi Ueda using iemocap recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout dfa2868243a897c2a6c34b7407eaea5e4b5508a5
pip install -e .
cd egs2/iemocap/asr1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/YushiUeda_iemocap_sentiment_asr_train_asr_conformer_hubert
```
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Sat Feb 12 23:11:32 EST 2022`
- python version: `3.7.11 (default, Jul 27 2021, 14:32:16) [GCC 7.5.0]`
- espnet version: `espnet 0.10.7a1`
- pytorch version: `pytorch 1.9.0+cu102`
- Git hash: `f6cde1c419c814a14ccd40abe557a780508cbcdf`
- Commit date: `Fri Feb 11 12:25:33 2022 -0500`
## Conformer-based encoder and Transformer-based decoder with self-supervised learning features (HuBERT), spectral augmentation, and joint transcript and sentiment prediction
- ASR config: [conf/tuning/train_asr_conformer_hubert.yaml](conf/tuning/train_asr_conformer_hubert.yaml)
- token_type: word
- Sentiment Labels: Positive, Neutral, Negative
|dataset|Snt|Sentiment Classification Macro F1 (%)| Weighted F1 (%)| Micro F1 (%)|
|---|---|---|---|---|
|decode_asr_model_valid.acc.ave_10best/valid|754|66.5|76.4|75.7|
|decode_asr_model_valid.acc.ave_10best/test|1650|62.0|65.5|65.8|
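The macro, weighted, and micro F1 scores reported above can be computed from per-utterance sentiment predictions. A minimal pure-Python sketch with the three labels from this recipe (the reference/hypothesis lists below are illustrative, not real decode output):

```python
from collections import Counter

def f1_scores(refs, hyps, labels):
    """Compute per-class F1, then macro/weighted/micro averages."""
    per_class, support = {}, Counter(refs)
    tp_total = sum(r == h for r, h in zip(refs, hyps))
    for lab in labels:
        tp = sum(r == h == lab for r, h in zip(refs, hyps))
        fp = sum(h == lab and r != lab for r, h in zip(refs, hyps))
        fn = sum(r == lab and h != lab for r, h in zip(refs, hyps))
        denom = 2 * tp + fp + fn
        per_class[lab] = 2 * tp / denom if denom else 0.0
    macro = sum(per_class.values()) / len(labels)
    weighted = sum(per_class[l] * support[l] for l in labels) / len(refs)
    micro = tp_total / len(refs)  # single-label case: micro F1 == accuracy
    return macro, weighted, micro

refs = ["Positive", "Negative", "Neutral", "Negative"]
hyps = ["Positive", "Negative", "Negative", "Negative"]
macro, weighted, micro = f1_scores(refs, hyps, ["Positive", "Neutral", "Negative"])
```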
## ASR config
<details><summary>expand</summary>
```
config: conf/tuning/train_asr_conformer_hubert.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_asr_conformer_hubert_sentiment
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 50
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 10
nbest_averaging_interval: 0
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param:
- frontend.upstream
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_en_word/train/speech_shape
- exp/asr_stats_raw_en_word/train/text_shape.word
valid_shape_file:
- exp/asr_stats_raw_en_word/valid/speech_shape
- exp/asr_stats_raw_en_word/valid/text_shape.word
batch_type: folded
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train/wav.scp
- speech
- sound
- - dump/raw/train/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/valid/wav.scp
- speech
- sound
- - dump/raw/valid/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.0002
scheduler: warmuplr
scheduler_conf:
warmup_steps: 25000
token_list:
- <blank>
- <unk>
- i
- you
- Negative
- to
- it
- '''s'
- the
- '''t'
- that
- and
- Neutral
- Positive
- a
- know
- what
- of
- like
- we
- don
- just
- is
- do
- this
- '''m'
- me
- have
- can
- in
- for
- 'no'
- so
- not
- '''re'
- my
- but
- mean
- be
- going
- all
- was
- they
- well
- want
- yeah
- right
- get
- 'on'
- there
- he
- oh
- here
- go
- out
- with
- your
- if
- okay
- are
- she
- at
- '''ll'
- '''ve'
- got
- think
- about
- up
- see
- then
- why
- how
- time
- really
- one
- now
- or
- as
- back
- look
- her
- him
- been
- because
- 'yes'
- would
- didn
- little
- did
- good
- some
- them
- something
- need
- maybe
- never
- um
- come
- take
- god
- had
- could
- will
- uh
- am
- people
- thing
- when
- very
- let
- much
- sorry
- from
- again
- long
- give
- anything
- too
- make
- fish
- years
- where
- isn
- three
- said
- things
- nothing
- help
- work
- tell
- guess
- over
- 'off'
- business
- even
- sir
- any
- his
- around
- were
- way
- who
- new
- kind
- '''d'
- our
- everything
- more
- came
- an
- should
- down
- understand
- only
- great
- else
- man
- line
- us
- ask
- last
- doing
- say
- waiting
- other
- lot
- job
- feel
- yourself
- point
- thought
- day
- whole
- away
- coming
- better
- marry
- always
- these
- still
- wrong
- two
- sure
- care
- phone
- probably
- remember
- annie
- life
- year
- believe
- gonna
- supposed
- went
- first
- talk
- listen
- alright
- before
- thinking
- after
- stuff
- happy
- ever
- turn
- thank
- home
- fine
- into
- than
- call
- money
- stay
- actually
- every
- hope
- love
- huh
- married
- wait
- somewhere
- has
- being
- father
- larry
- hell
- wanted
- trying
- getting
- guys
- name
- saying
- bag
- hear
- girl
- hey
- flashlight
- beach
- put
- leave
- dollars
- mind
- augie
- does
- won
- fifty
- excited
- hate
- four
- done
- through
- their
- keep
- car
- lost
- doesn
- happen
- wouldn
- school
- big
- calm
- night
- '''cause'
- id
- another
- though
- myself
- nobody
- somebody
- best
- might
- same
- form
- mom
- nice
- matter
- spot
- stop
- told
- by
- shut
- enough
- five
- joe
- hard
- find
- course
- chris
- drunk
- snap
- luggage
- rather
- standing
- someone
- laugh
- took
- those
- please
- live
- six
- ridiculous
- minute
- looking
- bring
- show
- start
- brought
- days
- must
- pretty
- sort
- talking
- sand
- child
- working
- send
- next
- hundred
- whatever
- many
- moon
- moment
- champagne
- s
- problem
- end
- real
- dear
- happened
- person
- place
- fill
- awesome
- house
- such
- cool
- c
- haven
- knew
- die
- finally
- glasses
- stupid
- least
- dad
- supervisor
- totally
- each
- try
- waited
- idea
- u
- party
- asked
- anymore
- sick
- evening
- license
- kid
- wow
- flight
- felt
- pay
- since
- single
- miss
- without
- different
- mmhmm
- free
- sometimes
- yet
- couldn
- view
- hour
- knows
- drive
- themselves
- swim
- ah
- brandy
- fact
- ma
- '''am'
- already
- part
- sit
- thanks
- comes
- check
- everyone
- started
- kiss
- weren
- hotel
- own
- beast
- bad
- above
- run
- worst
- grunions
- darling
- seem
- baby
- turned
- gone
- shouldn
- exactly
- reason
- full
- both
- crazy
- pack
- bit
- swimming
- liquor
- seemed
- serious
- cause
- peter
- burden
- gosh
- forgot
- happens
- alone
- pass
- letters
- heard
- manager
- hours
- baggage
- card
- number
- argue
- seen
- walk
- forget
- kids
- family
- blanket
- honey
- open
- quite
- gotta
- forms
- mother
- old
- needs
- times
- airline
- which
- once
- service
- week
- together
- twenty
- stand
- made
- fun
- dead
- sake
- men
- kate
- today
- plane
- most
- carla
- driving
- deal
- information
- wanna
- definitely
- while
- yea
- certificate
- particular
- lots
- calling
- fortune
- write
- entire
- found
- trouble
- use
- forever
- woman
- enjoy
- room
- damn
- war
- meaning
- longer
- jacket
- ticket
- twice
- sent
- wonder
- small
- amanda
- cannot
- able
- half
- ha
- saw
- bus
- ago
- hmm
- hi
- kidding
- giving
- gave
- move
- women
- ahead
- york
- guy
- suppose
- company
- incredible
- either
- minutes
- tonight
- shoes
- utterly
- wasn
- filled
- gets
- amazing
- beautiful
- hello
- birth
- prove
- choice
- friend
- expect
- says
- blue
- anywhere
- died
- weird
- umm
- blood
- d
- face
- body
- alive
- diagram
- goes
- read
- far
- race
- wind
- fly
- interested
- california
- coast
- news
- past
- charles
- floor
- idiotic
- indeed
- absolutely
- softball
- answer
- somehow
- having
- campus
- completely
- file
- everybody
- given
- fair
- front
- telling
- tried
- sign
- helping
- dollar
- used
- takes
- hair
- behind
- head
- also
- question
- pull
- brother
- nonsense
- kill
- pocket
- cold
- mine
- watching
- shall
- divorce
- driver
- m
- makes
- cried
- security
- suitcase
- seems
- control
- set
- letter
- realized
- paper
- weeks
- address
- sweet
- lose
- huge
- death
- ones
- living
- glad
- bed
- until
- thinks
- wedding
- pieces
- parents
- ready
- almost
- forgive
- kissed
- silver
- during
- forty
- lives
- grow
- arrive
- eyes
- putting
- quiet
- poor
- presents
- sting
- tired
- row
- anyhow
- window
- v
- thousand
- watch
- ashamed
- figure
- vacation
- application
- left
- certainly
- calls
- months
- student
- close
- helpful
- called
- welcome
- major
- match
- morning
- fit
- reach
- door
- wife
- faith
- noticed
- several
- killed
- accident
- rat
- flop
- hands
- ear
- dancing
- hairs
- bugging
- dinner
- bills
- worked
- bored
- conversation
- tunis
- overbearing
- grand
- nine
- amusing
- vile
- tempered
- obviously
- tomorrow
- taken
- eight
- venice
- worth
- boy
- realize
- midnight
- evil
- sixteen
- gotten
- paying
- bottle
- smart
- cindy
- excuse
- along
- seven
- children
- figured
- jobs
- joke
- charge
- memorial
- sitting
- hardly
- young
- story
- feels
- pronouncing
- insane
- forgotten
- fast
- inspire
- grub
- tough
- arguing
- air
- toss
- instance
- raining
- pair
- dry
- socks
- selfish
- included
- yours
- mystery
- mindedness
- urgency
- pure
- urge
- insulting
- ideas
- herself
- period
- missed
- backwards
- dance
- worms
- pop
- except
- perfect
- blow
- funny
- listening
- sadistic
- bully
- cruel
- 'true'
- second
- acting
- lucky
- handle
- loved
- hit
- shaking
- destroyed
- changed
- book
- eleven
- animals
- ice
- cream
- brings
- frustrating
- otherwise
- onto
- pregnant
- operator
- baltimore
- san
- diego
- contract
- brown
- friends
- pictures
- internet
- piece
- high
- anyone
- tickets
- inconvenience
- gift
- usually
- green
- city
- couple
- chuck
- growing
- pick
- throw
- yay
- walking
- grave
- considerate
- inspired
- looked
- mistake
- believes
- avoid
- sucker
- rock
- strangers
- missing
- hide
- geez
- imagination
- overseas
- command
- earth
- monument
- difference
- zipped
- kansas
- reservations
- ahh
- formed
- barefoot
- shower
- running
- garage
- knickerbocker
- locker
- wasting
- roses
- peaches
- rosy
- mention
- shh
- behave
- exquisitely
- beautifully
- rolling
- biting
- scratching
- panthers
- suddenly
- ought
- dreadfully
- pity
- eye
- world
- making
- bark
- roll
- hoops
- insufferable
- weak
- upstairs
- insist
- boorish
- conceited
- impossible
- torment
- brute
- perfectly
- wicked
- crawling
- top
- wish
- wants
- bank
- plan
- soon
- plenty
- bags
- congratulations
- play
- carry
- ignore
- sudden
- refrigerator
- loot
- fight
- lights
- swallows
- goose
- bumps
- keeps
- fighting
- massive
- celebration
- sex
- human
- ours
- light
- minded
- social
- needed
- anyway
- words
- problems
- claim
- reimburse
- checked
- airport
- meet
- e
- responsibility
- grunion
- knees
- thousands
- important
- shows
- goddamn
- strong
- law
- sara
- brent
- passport
- aren
- month
- romantic
- leaving
- random
- applied
- interesting
- regular
- taking
- harder
- hurt
- movie
- freaking
- record
- airlines
- responsible
- honestly
- grew
- proud
- hang
- mrs
- fellow
- terrible
- contradict
- infuriate
- throws
- afraid
- suffer
- bloody
- settled
- thrash
- may
- son
- faithful
- moments
- act
- sleep
- detroit
- planning
- yard
- particularly
- natural
- phenomenon
- highlight
- flopping
- laying
- eggs
- mating
- orgy
- magic
- unexplainable
- instincts
- seaweed
- instinctual
- firecracker
- spent
- clasped
- intimate
- special
- wishes
- seriously
- refreshments
- ooh
- pinpoint
- marge
- dishes
- fat
- ring
- later
- shivers
- spine
- sillier
- poise
- trumpets
- squeakers
- sockets
- allure
- contrary
- violently
- glass
- temperamental
- fiend
- loathe
- adder
- riotous
- mentioned
- intemperate
- tots
- downstairs
- mad
- loose
- lived
- yelling
- happening
- promise
- known
- exciting
- finish
- college
- atlanta
- searching
- fired
- drinking
- jesus
- lock
- plans
- hole
- santa
- kitchen
- invite
- believing
- ann
- landing
- eats
- panties
- sore
- throat
- unmistakable
- capistrano
- lemmings
- cliffs
- invitation
- map
- heaven
- carpet
- poodle
- suicide
- pact
- turns
- court
- dies
- mustn
- vampire
- identification
- places
- danger
- hand
- middle
- situation
- option
- willing
- paid
- horrible
- pain
- anybody
- paperwork
- difficult
- dream
- sakes
- matters
- toes
- become
- habit
- hold
- survive
- break
- babe
- shit
- contact
- land
- water
- transfer
- backersen
- desk
- wallet
- stolen
- credit
- cards
- clearly
- appreciate
- complicated
- uhuh
- bucks
- win
- theatre
- resume
- riding
- helps
- less
- planes
- means
- future
- ran
- red
- wrote
- loans
- spend
- dreaming
- proof
- shooting
- crack
- cracked
- dares
- invited
- breaks
- embarrassed
- wondering
- aw
- style
- granted
- embarrassing
- mixed
- su
- spawning
- stubbed
- toe
- bodies
- expectantly
- meant
- beginning
- traumatized
- freda
- sooner
- applies
- philosophers
- rots
- trivial
- torture
- stiff
- venom
- fangs
- wake
- bended
- voice
- build
- unbelievable
- hiring
- resumes
- eventually
- aggressive
- awhile
- especially
- further
- mass
- pointless
- claus
- neither
- mmm
- cannes
- figures
- burnt
- debate
- exception
- busy
- safe
- possible
- spring
- starting
- buy
- rest
- office
- complaint
- accepted
- ten
- area
- seats
- foam
- vibrations
- drives
- popped
- slightly
- exaggerated
- scientific
- proposed
- bathroom
- awful
- scene
- adders
- afford
- packet
- forward
- customer
- brand
- yellow
- fifteen
- brian
- asking
- percent
- girlfriend
- acceptance
- patient
- patience
- dishonest
- cheese
- restaurant
- t
- sixty
- direct
- holiday
- inn
- refund
- hmmm
- receiving
- sim
- browns
- unacceptable
- northwest
- dorky
- putt
- change
- filling
- z
- x
- simple
- mail
- request
- raise
- town
- hadn
- played
- pennies
- visa
- visit
- loves
- list
- environment
- frustrated
- ride
- imagine
- flew
- nash
- replace
- paris
- personal
- issue
- flights
- track
- angry
- headstone
- cemetery
- cancer
- poetry
- palm
- l
- dropped
- bunch
- p
- chair
- broke
- o
- allow
- nights
- talent
- ignoring
- center
- lovely
- sneaking
- whose
- es
- naturally
- stays
- wide
- bought
- arm
- exact
- curtsy
- wiggle
- superficial
- paint
- naked
- vendome
- rouser
- younger
- jealous
- fascinating
- duty
- photographer
- studio
- cad
- restraint
- ill
- knee
- applying
- questions
- picture
- fake
- apartment
- cash
- drink
- upset
- sending
- flying
- speak
- details
- wherever
- unfortunate
- education
- leaves
- basically
- hospital
- messed
- sounds
- pinch
- malibu
- drop
- team
- professional
- till
- ambiguous
- seeing
- ugh
- wet
- heading
- release
- fire
- inside
- pr
- includes
- rub
- ludicrous
- wriggle
- flippancy
- acid
- sweetness
- curling
- dressing
- gown
- broach
- enjoyable
- original
- '''em'
- early
- ok
- daughter
- age
- steps
- rejected
- starts
- competitive
- hired
- worse
- itself
- nowhere
- unfortunately
- process
- fault
- decision
- package
- easy
- transferred
- straight
- suckers
- none
- returning
- throwing
- cork
- softest
- breathe
- road
- catch
- threw
- canal
- comb
- towels
- sacred
- savor
- delight
- needn
- late
- web
- website
- rough
- daddy
- talked
- feeling
- talented
- interview
- food
- looks
- misplaced
- theft
- likely
- stuck
- tags
- cult
- everywhere
- menu
- choose
- press
- lady
- bill
- department
- online
- immediately
- miles
- notice
- vote
- heavens
- yell
- anna
- tables
- hasn
- stole
- losing
- unfair
- positive
- boston
- celebrate
- system
- turning
- newspapers
- pays
- dare
- jokes
- swine
- demand
- building
- finished
- staying
- cheap
- anyways
- okey
- lobster
- wonderful
- harvard
- engineering
- summer
- lawyer
- mr
- lax
- delta
- funeral
- report
- property
- whoever
- corporate
- miso
- soup
- holy
- olivia
- camera
- power
- sold
- testing
- greens
- explain
- agreement
- undecided
- access
- babies
- street
- vegas
- slot
- honeymoon
- husband
- penny
- slots
- wheel
- cat
- citizenship
- england
- fan
- spending
- craig
- services
- monster
- baloney
- saving
- necessarily
- carousel
- cameras
- airplane
- sentimental
- value
- incredibly
- shopping
- jet
- clothes
- apologize
- allowed
- amount
- candy
- redlands
- sprinklers
- whenever
- brain
- park
- holding
- memorized
- surgery
- audience
- joy
- scholarships
- commuting
- h
- ruined
- mm
- bet
- neighborhood
- sticking
- woo
- teach
- class
- confused
- clock
- foolish
- ocean
- distinctly
- whispered
- wishing
- white
- elliott
- strange
- quest
- ultimate
- truth
- shan
- word
- disagreeable
- wench
- birthday
- national
- thin
- rent
- colors
- citizen
- account
- '''til'
- hire
- short
- fuse
- america
- audition
- sponge
- language
- arriving
- reimbursement
- computer
- cover
- ass
- dealing
- quick
- freaks
- pitch
- hitting
- housing
- force
- scholarship
- dirty
- depends
- helicopter
- wild
- sport
- games
- streets
- although
- mi
- trust
- cracker
- curtsey
- bicker
- irons
- besides
- splendid
- born
- weekends
- letting
- tear
- apart
- touch
- flipped
- hot
- outside
- flowers
- candles
- approve
- surprised
- lead
- ends
- worthless
- apparently
- worker
- annoy
- belongings
- disappeared
- under
- case
- checking
- admit
- risk
- agreed
- yesterday
- country
- financial
- aid
- within
- automated
- systems
- specific
- rate
- star
- aisle
- afternoon
- maui
- machine
- waste
- available
- confirmed
- thinkin
- liked
- kicked
- intermittently
- burned
- desire
- fade
- passion
- laughable
- cunning
- mirrors
- painted
- wooden
- snake
- suspicious
- nosey
- silly
- wonders
- order
- standard
- site
- sense
- dangerous
- cute
- whether
- considering
- opinion
- f
- few
- guarantee
- possessions
- claims
- sue
- easier
- cared
- expected
- trip
- europe
- its
- circles
- large
- store
- macy
- rotary
- instead
- showed
- hundreds
- planned
- someplace
- sensitive
- popping
- opened
- backrub
- fantasy
- damned
- sheet
- cut
- purchase
- amy
- quit
- clapping
- onstage
- eighteen
- auditioning
- rejection
- prepared
- thirty
- master
- kelly
- natalie
- pants
- isabella
- verizon
- goodbye
- fucking
- challenge
- slept
- created
- checkbook
- argument
- uhh
- perhaps
- loath
- complete
- sad
- priorities
- between
- moving
- song
- temporary
- pulling
- smith
- receptionist
- extra
- lodging
- eh
- la
- cost
- boss
- peanuts
- doctor
- production
- downtown
- april
- contracts
- incompetent
- realtor
- fix
- payphone
- verify
- electrical
- outage
- symptoms
- nature
- pilot
- hook
- realizes
- bother
- trade
- event
- meadow
- faint
- blues
- bananas
- overnight
- station
- attention
- purchasing
- terms
- taser
- excellent
- counsel
- sorority
- golfing
- library
- dork
- taco
- branch
- separate
- sacrifices
- mothers
- kicking
- videotape
- stream
- sitters
- moved
- computers
- machines
- bride
- cruise
- likes
- tabs
- plays
- giant
- renamed
- brenda
- lumber
- janet
- state
- quarters
- costs
- escort
- reliable
- board
- posting
- trail
- following
- fantastic
- mighty
- recommending
- generally
- outline
- affords
- save
- carpool
- frustration
- refuse
- anger
- fourth
- lines
- fourteen
- mileage
- candid
- packed
- replaced
- expensive
- lawsuit
- cruising
- bruising
- president
- mistakenly
- behalf
- listed
- liable
- held
- sean
- badge
- employee
- impression
- cemeteries
- urban
- oasis
- wandering
- hers
- pathetic
- ground
- stones
- tumors
- heather
- built
- prospect
- garden
- section
- parties
- feet
- poems
- curly
- tree
- crown
- john
- dunn
- begin
- wheelchair
- reciting
- envelope
- grants
- mold
- minds
- mess
- rapper
- ho
- masters
- teacher
- dash
- popular
- seasoning
- messing
- ruin
- woke
- darkest
- beating
- bush
- porch
- fresh
- rooms
- sweetest
- pets
- cheeked
- brooch
- however
- jones
- voices
- berating
- christmas
- shame
- bunker
- guard
- spread
- companies
- shipping
- shock
- group
- dual
- unattached
- engagement
- sock
- dude
- lucked
- blush
- beige
- loaded
- craziest
- offered
- spoke
- english
- accent
- illegal
- jail
- caught
- hardcore
- tropical
- bahamas
- tahiti
- wealthy
- royalty
- removed
- attitude
- extremely
- hostile
- cutting
- sentence
- jumping
- produce
- field
- shake
- across
- soaked
- dying
- georgia
- educated
- boarding
- attendance
- seat
- offer
- publicize
- abuse
- insinuating
- smug
- mouth
- tossing
- hanky
- black
- wheels
- easily
- overhead
- compartment
- data
- collecting
- lip
- coffee
- smoking
- cigarettes
- union
- differently
- numb
- sickness
- boom
- mortality
- affecting
- slow
- books
- per
- diem
- victorian
- houses
- west
- sider
- commute
- practice
- neon
- softballs
- glow
- co
- ed
- nationally
- ranked
- ping
- pong
- denigrate
- rookie
- donuts
- recently
- pitcher
- hitter
- mostly
- shortstop
- ex
- trojans
- sports
- nicer
- monica
- player
- type
- helipad
- fell
- literally
- doubt
- cares
- mustache
- papers
- crying
- floorboards
- sorted
- everyday
- seas
- bringing
- sacrifice
- guilty
- opening
- return
- jumped
- distinctively
- direction
- tiny
- action
- passed
- cheeks
- darn
- urgh
- restrain
- self
- centered
- registration
- lunch
- documents
- identifications
- deadline
- carries
- official
- documentation
- government
- wireless
- crucial
- pulls
- kinda
- girly
- radiant
- ya
- shine
- invitations
- response
- mcdonald
- level
- member
- pavement
- indicators
- prejudice
- against
- applications
- hating
- physically
- amateur
- crawl
- dumber
- cases
- etiquette
- bug
- opinions
- magically
- irresponsible
- carrousel
- contents
- main
- liability
- provides
- shops
- reimbursed
- investigate
- provide
- uncommon
- johnny
- conscious
- stories
- africa
- image
- hurts
- goout
- gradual
- impact
- subside
- heals
- parts
- football
- recognizable
- accomplished
- prestige
- load
- worrying
- decide
- tour
- friendly
- ivy
- walls
- collegiate
- g
- choices
- math
- prestigious
- departments
- orientation
- graduate
- shiloh
- valued
- customers
- previous
- purchases
- scheduling
- highly
- discounted
- uses
- corporation
- hotels
- rated
- aisles
- switch
- fortunately
- allows
- spare
- shuttle
- appropriate
- traveling
- deals
- shuttles
- sleeps
- gee
- futile
- moralists
- unbearable
- flippant
- shibboleths
- rush
- madly
- piazza
- iron
- dri
- counter
- applica
- lonely
- disappear
- video
- definitive
- magazine
- boyfriend
- stage
- golly
- concert
- crew
- freak
- guaranteed
- nervous
- hah
- persistence
- factors
- types
- male
- female
- consideration
- cooking
- reconsidering
- uhm
- retirement
- foot
- persistent
- table
- skewed
- painting
- outer
- employment
- unlucky
- planet
- normal
- peoples
- reading
- difficulties
- loading
- mishap
- cart
- shipped
- tracking
- reim
- tight
- error
- continue
- 'false'
- compensate
- policy
- gifts
- nobodies
- tag
- originally
- shoe
- core
- memories
- kathy
- lasted
- gary
- closed
- surreal
- troops
- loving
- los
- angeles
- schools
- kinds
- secrets
- explore
- rip
- nuts
- champions
- leaning
- towards
- communications
- broad
- confined
- ropes
- recording
- depending
- leads
- bypass
- zero
- pleasant
- ebay
- bye
- steve
- hint
- asks
- tone
- pretend
- protection
- rid
- submit
- print
- regarding
- grievance
- sites
- protected
- processed
- careful
- secure
- unreliable
- trash
- kept
- spotting
- certain
- specifically
- pushing
- headed
- ears
- watched
- sends
- ceaseless
- wear
- often
- pleasure
- sonya
- promoted
- nurses
- mommy
- va
- videotaped
- cousin
- postpone
- performance
- swear
- cast
- spotlight
- microphone
- tripped
- surprise
- scored
- points
- members
- loser
- marrying
- weddings
- carats
- lousy
- chaperone
- drowsy
- deserve
- cry
- tears
- happiness
- marriage
- commercials
- refection
- financially
- studied
- passing
- russel
- crowe
- pooling
- funds
- owe
- learning
- role
- auditions
- denny
- tip
- teaching
- oof
- france
- steal
- keys
- laughing
- rosenkrantz
- thingy
- bopper
- limit
- whoa
- ways
- suffered
- disease
- handsome
- gifted
- parent
- ripped
- uveny
- tricia
- chemo
- baseball
- benny
- nat
- nation
- bread
- eat
- beer
- dorm
- sometime
- mattresses
- reserved
- grauman
- scale
- whooooo
- acti
- film
- art
- academy
- films
- fuck
- ethiopia
- cuddle
- profanity
- provider
- satellites
- average
- compensating
- unbeknownst
- satellite
- exaggerate
- advising
- addressed
- fax
- dumb
- fritz
- incoming
- million
- grown
- fella
- shootin
- travel
- sat
- instinct
- goosebumps
- arms
- danced
- intimately
- spart
- strumpets
- bristling
- diamonds
- taste
- portion
- side
- stairs
- condescending
- copy
- proceed
- remove
- missy
- behaving
- sweetie
- deploy
- specialist
- increase
- triple
- promotion
- retire
- quiets
- faster
- career
- lame
- drew
- barrymore
- nasty
- mouse
- cheesy
- jane
- tarzan
- engaged
- esmeralda
- hitched
- spontaneous
- character
- conga
- dim
- pulled
- chucky
- sarah
- guiding
- graduated
- apply
- colleges
- energy
- busing
- clerk
- excuses
- qualified
- chang
- investment
- banking
- deloitte
- touche
- temp
- degrading
- smarter
- astronaut
- biomedical
- internship
- plus
- breaking
- evicting
- typing
- shoot
- degree
- science
- club
- joking
- doomed
- maryland
- cooperate
- emergency
- pounds
- urn
- deduction
- sherlock
- holmes
- vessel
- burst
- caption
- therefore
- placed
- firing
- lobby
- fastest
- ibm
- misplace
- count
- hanging
- explanation
- follow
- footsteps
- overboard
- paralyzed
- coma
- fucked
- studying
- countries
- goal
- met
- greatest
- hopefully
- mmmm
- cinema
- chapter
- professionals
- sipping
- martinis
- sushi
- vat
- assistance
- starve
- south
- central
- firm
- police
- officer
- viacom
- digits
- speaking
- network
- charging
- connect
- outages
- hurricane
- katrina
- chose
- maam
- proven
- failing
- receive
- cuts
- using
- flip
- writing
- ms
- fall
- older
- game
- orange
- pink
- goodies
- battling
- sees
- flat
- stronger
- acted
- deserves
- hats
- shore
- pokes
- nah
- paul
- boats
- dammit
- enjoys
- bound
- harm
- pleasured
- lure
- devil
- rile
- topic
- initialed
- lets
- correctly
- spelled
- signed
- shitty
- timing
- susie
- tours
- emotionally
- bullshit
- enlist
- lie
- traditional
- church
- cabins
- flowery
- naturey
- midsummer
- excitement
- hoping
- attacked
- bears
- trim
- cooler
- dog
- tanish
- contrast
- cake
- buffet
- fried
- chicken
- mashed
- potatoes
- happier
- thrilled
- ecstatic
- rushed
- pressure
- interviews
- favors
- bite
- excessive
- unemployed
- cab
- gas
- possibly
- extreme
- trained
- presentable
- quote
- buck
- chugging
- engine
- realm
- minimum
- wage
- fry
- flipper
- bottom
- clear
- affect
- cle
- dressed
- shave
- legs
- presentation
- eighty
- success
- position
- training
- mcdonalds
- tv
- rainbow
- colored
- crap
- safely
- destination
- percoes
- equivalent
- amends
- courtesy
- inconveniencing
- near
- communicate
- conditions
- frequently
- current
- expecting
- pissed
- honor
- grandmother
- condition
- inevitable
- peace
- general
- mace
- present
- knife
- puny
- underwater
- basket
- weaving
- lying
- decided
- works
- worried
- occasion
- cruisers
- vibe
- greek
- lessons
- suck
- celebrating
- crush
- throughout
- test
- waters
- movies
- vermont
- cruiser
- abused
- frat
- boys
- dorms
- dell
- requests
- fixed
- dealt
- worries
- refunded
- situa
- relevant
- ordered
- orders
- others
- incorrectly
- tomatoes
- del
- cents
- attached
- cuz
- hoped
- opportunity
- rushing
- goods
- skipped
- breath
- kleenex
- alaska
- bearing
- hated
- holes
- calf
- witch
- whore
- <sos/eos>
init: null
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: true
joint_net_conf: null
model_conf:
ctc_weight: 0.3
lsm_weight: 0.1
length_normalized_loss: false
extract_feats_in_collect_stats: false
use_preprocessor: true
token_type: word
bpemodel: null
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: s3prl
frontend_conf:
frontend_conf:
upstream: hubert_large_ll60k
download_dir: ./hub
multilayer_feature: true
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 30
num_freq_mask: 2
apply_time_mask: true
time_mask_width_range:
- 0
- 40
num_time_mask: 2
normalize: utterance_mvn
normalize_conf: {}
preencoder: linear
preencoder_conf:
input_size: 1024
output_size: 80
encoder: conformer
encoder_conf:
output_size: 512
attention_heads: 8
linear_units: 2048
num_blocks: 12
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.1
input_layer: conv2d
normalize_before: true
macaron_style: true
pos_enc_layer_type: rel_pos
selfattention_layer_type: rel_selfattn
activation_type: swish
use_cnn_module: true
cnn_module_kernel: 31
postencoder: null
postencoder_conf: {}
decoder: transformer
decoder_conf:
attention_heads: 8
linear_units: 2048
num_blocks: 6
dropout_rate: 0.1
positional_dropout_rate: 0.1
self_attention_dropout_rate: 0.1
src_attention_dropout_rate: 0.1
required:
- output_dir
- token_list
version: 0.10.7a1
distributed: false
```
</details>
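The `specaug` section of the config above applies up to 2 frequency masks (width 0–30 bins) and 2 time masks (width 0–40 frames). A minimal sketch of that masking step on a T x F feature matrix (pure Python, no time warping; the seeded RNG and all-ones features are illustrative):

```python
import random

def apply_masks(feats, freq_width=30, n_freq=2, time_width=40, n_time=2, rng=None):
    """Zero out random frequency bands and time spans of a T x F feature matrix,
    mirroring the freq_mask/time_mask widths in the config above."""
    rng = rng or random.Random(0)
    T, F = len(feats), len(feats[0])
    out = [row[:] for row in feats]
    for _ in range(n_freq):
        w = rng.randint(0, min(freq_width, F))
        f0 = rng.randint(0, F - w)
        for t in range(T):
            for f in range(f0, f0 + w):
                out[t][f] = 0.0  # mask one frequency band
    for _ in range(n_time):
        w = rng.randint(0, min(time_width, T))
        t0 = rng.randint(0, T - w)
        for t in range(t0, t0 + w):
            out[t] = [0.0] * F  # mask one span of frames
    return out

feats = [[1.0] * 80 for _ in range(100)]
masked = apply_masks(feats)
```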
### Citing ESPnet
```bibtex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["iemocap"]}
|
espnet/YushiUeda_iemocap_sentiment_asr_train_asr_conformer_hubert
| null |
[
"espnet",
"audio",
"automatic-speech-recognition",
"en",
"dataset:iemocap",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null |
espnet
|
## ESPnet2 DIAR model
### `espnet/YushiUeda_mini_librispeech_diar_train_diar_raw_valid.acc.best`
This model was trained by Yushi Ueda using the mini_librispeech recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout 650472b45a67612eaac09c7fbd61dc25f8ff2405
pip install -e .
cd egs2/mini_librispeech/diar1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/YushiUeda_mini_librispeech_diar_train_diar_raw_valid.acc.best
```
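The recipe consumes speaker labels in RTTM format (see the `spk_labels`/`rttm` entries in the config below). A minimal parser sketch for standard `SPEAKER` lines; the two lines of RTTM content here are illustrative, not taken from the corpus:

```python
def parse_rttm(lines):
    """Collect (start, end) segments per speaker from RTTM SPEAKER lines.
    Field order: type file chan onset duration ortho stype speaker conf [slat]."""
    segments = {}
    for line in lines:
        fields = line.split()
        if not fields or fields[0] != "SPEAKER":
            continue  # skip non-speaker records and blank lines
        onset, dur, spk = float(fields[3]), float(fields[4]), fields[7]
        segments.setdefault(spk, []).append((onset, onset + dur))
    return segments

rttm = [
    "SPEAKER rec1 1 0.00 2.50 <NA> <NA> spk_a <NA> <NA>",
    "SPEAKER rec1 1 2.00 1.00 <NA> <NA> spk_b <NA> <NA>",
]
segs = parse_rttm(rttm)
```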
<!-- Generated by scripts/utils/show_diar_result.sh -->
# RESULTS
## Environments
- date: `Tue Jan 4 16:43:34 EST 2022`
- python version: `3.7.11 (default, Jul 27 2021, 14:32:16) [GCC 7.5.0]`
- espnet version: `espnet 0.10.5a1`
- pytorch version: `pytorch 1.9.0+cu102`
- Git hash: `0b2a6786b6f627f47defaee22911b3c2dc04af2a`
- Commit date: `Thu Dec 23 12:22:49 2021 -0500`
## diar_train_diar_raw
### DER
dev_clean_2_ns2_beta2_500
|threshold_median_collar|DER|
|---|---|
|result_th0.3_med11_collar0.0|32.28|
|result_th0.3_med1_collar0.0|32.64|
|result_th0.4_med11_collar0.0|30.43|
|result_th0.4_med1_collar0.0|31.15|
|result_th0.5_med11_collar0.0|29.45|
|result_th0.5_med1_collar0.0|30.53|
|result_th0.6_med11_collar0.0|29.52|
|result_th0.6_med1_collar0.0|30.95|
|result_th0.7_med11_collar0.0|30.92|
|result_th0.7_med1_collar0.0|32.69|
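Each row name above encodes a post-processing setting: a binarization threshold on the frame-wise speaker activity probabilities and a median-filter window length (e.g. `th0.5_med11` means threshold 0.5, window 11; `med1` means no smoothing). A minimal sketch of that post-processing for one speaker track (the probability sequence is illustrative):

```python
def postprocess(probs, threshold=0.5, median_win=11):
    """Binarize frame-wise activity probabilities, then smooth with a
    median filter of odd window size (edge frames kept as-is)."""
    binary = [1 if p > threshold else 0 for p in probs]
    if median_win <= 1:
        return binary
    half = median_win // 2
    out = binary[:]
    for i in range(half, len(binary) - half):
        window = sorted(binary[i - half:i + half + 1])
        out[i] = window[half]  # median of the window
    return out

probs = [0.9] * 20 + [0.2] + [0.9] * 20  # one spurious single-frame dip
smoothed = postprocess(probs, threshold=0.5, median_win=11)
```

With the median filter the single-frame dip is removed; without it (`median_win=1`) the dip survives as a false speaker change.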
## DIAR config
<details><summary>expand</summary>
```
config: conf/train_diar.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: chunk
output_dir: exp/diar_train_diar_raw
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: 4
dist_rank: 0
local_rank: 0
dist_master_addr: localhost
dist_master_port: 33757
dist_launcher: null
multiprocessing_distributed: true
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 100
patience: 3
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 3
nbest_averaging_interval: 0
grad_clip: 5
grad_clip_type: 2.0
grad_noise: false
accum_grad: 2
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 16
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/diar_stats_8k/train/speech_shape
- exp/diar_stats_8k/train/spk_labels_shape
valid_shape_file:
- exp/diar_stats_8k/valid/speech_shape
- exp/diar_stats_8k/valid/spk_labels_shape
batch_type: folded
valid_batch_type: null
fold_length:
- 80000
- 800
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 200000
chunk_shift_ratio: 0.5
num_cache_chunks: 64
train_data_path_and_name_and_type:
- - dump/raw/simu/data/train_clean_5_ns2_beta2_500/wav.scp
- speech
- sound
- - dump/raw/simu/data/train_clean_5_ns2_beta2_500/espnet_rttm
- spk_labels
- rttm
valid_data_path_and_name_and_type:
- - dump/raw/simu/data/dev_clean_2_ns2_beta2_500/wav.scp
- speech
- sound
- - dump/raw/simu/data/dev_clean_2_ns2_beta2_500/espnet_rttm
- spk_labels
- rttm
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.01
scheduler: noamlr
scheduler_conf:
warmup_steps: 1000
num_spk: 2
init: xavier_uniform
input_size: null
model_conf:
attractor_weight: 1.0
use_preprocessor: true
frontend: default
frontend_conf:
fs: 8k
hop_length: 128
specaug: null
specaug_conf: {}
normalize: global_mvn
normalize_conf:
stats_file: exp/diar_stats_8k/train/feats_stats.npz
encoder: transformer
encoder_conf:
input_layer: linear
num_blocks: 2
linear_units: 512
dropout_rate: 0.1
output_size: 256
attention_heads: 4
attention_dropout_rate: 0.0
decoder: linear
decoder_conf: {}
label_aggregator: label_aggregator
label_aggregator_conf: {}
attractor: null
attractor_conf: {}
required:
- output_dir
version: 0.10.5a1
distributed: true
```
</details>
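For orientation, the frontend and chunk-iterator settings in the config above imply the following time resolutions. This is simple arithmetic on the listed values; the exact STFT frame count also depends on the window length, which is not shown in this config, so the frame count below is approximate:

```python
fs = 8000              # frontend_conf: fs: 8k
hop_length = 128       # frontend_conf: hop_length: 128
chunk_length = 200000  # chunk iterator: samples per training chunk

frame_shift_ms = 1000 * hop_length / fs
chunk_seconds = chunk_length / fs
frames_per_chunk = chunk_length // hop_length  # ignores window-length edge effects

print(frame_shift_ms)    # 16.0 -> 16 ms between feature frames
print(chunk_seconds)     # 25.0 -> 25 s of audio per chunk
print(frames_per_chunk)  # 1562 -> approximate frames per chunk
```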
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "noinfo", "license": "cc-by-4.0", "tags": ["espnet", "audio", "diarization"], "datasets": ["mini_librispeech"]}
|
espnet/YushiUeda_mini_librispeech_diar_train_diar_raw_valid.acc.best
| null |
[
"espnet",
"audio",
"diarization",
"dataset:mini_librispeech",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
automatic-speech-recognition
|
espnet
|
## ESPnet2 ASR pretrained model
### `Yushi Ueda/ksponspeech_asr_train_asr_conformer8_n_fft512_hop_length256_raw_kr_bpe2309_valid.acc.best`
♻️ Imported from https://zenodo.org/record/5154341/
This model was trained by Yushi Ueda using ksponspeech/asr1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
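While the demo above remains a placeholder, the usual ESPnet2 inference pattern goes through `espnet_model_zoo`. The sketch below follows that standard usage; the package availability, the network download, and this exact model string working with `ModelDownloader` are assumptions, not verified against this model:

```python
MODEL = ("Yushi Ueda/ksponspeech_asr_train_asr_conformer8_"
         "n_fft512_hop_length256_raw_kr_bpe2309_valid.acc.best")

def build_speech2text(model_name=MODEL):
    """Download the pretrained model and build an inference object.
    Imports are deferred because espnet_model_zoo / espnet2 (and
    network access) are assumptions of this sketch."""
    from espnet_model_zoo.downloader import ModelDownloader
    from espnet2.bin.asr_inference import Speech2Text
    d = ModelDownloader()
    return Speech2Text(**d.download_and_unpack(model_name))

# Usage (requires espnet installed and network access):
#   speech2text = build_speech2text()
#   nbests = speech2text(speech_array)  # mono waveform as a numpy array
#   text, tokens, token_ids, hyp = nbests[0]
print(MODEL)
```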
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "kr", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["ksponspeech"]}
|
espnet/Yushi_Ueda_ksponspeech_asr_train_asr_conformer8_n_fft512_hop_length256-truncated-eb42e5
| null |
[
"espnet",
"audio",
"automatic-speech-recognition",
"kr",
"dataset:ksponspeech",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null |
espnet
|
## ESPnet2 DIAR pretrained model
### `Yushi Ueda/mini_librispeech_diar_train_diar_raw_max_epoch20_valid.acc.best`
♻️ Imported from https://zenodo.org/record/5264020/
This model was trained by Yushi Ueda using mini_librispeech/diar1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "speaker-diarization"], "datasets": ["mini_librispeech"]}
|
espnet/Yushi_Ueda_mini_librispeech_diar_train_diar_raw_max_epoch20_valid.acc.best
| null |
[
"espnet",
"audio",
"speaker-diarization",
"en",
"dataset:mini_librispeech",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|