pipeline_tag | library_name | text | metadata | id | last_modified | tags | sha | created_at
---|---|---|---|---|---|---|---|---
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/libritts_gst+xvector_trasnformer`
♻️ Imported from https://zenodo.org/record/4409702/
This model was trained by kan-bayashi using the libritts/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
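Until the official demo is published, inference with this model might look like the sketch below. Everything here is an assumption rather than the documented recipe: the function name `synthesize`, the use of `soundfile` for output, and the expectation that the caller supplies a speaker x-vector `spembs` (this is a multi-speaker GST+x-vector model, so some speaker conditioning is required).

```python
# Hypothetical sketch only -- the official demo is still "coming soon".
# Assumes `espnet`, `espnet_model_zoo`, and `soundfile` are installed.
def synthesize(text, spembs, out_path="out.wav"):
    """Synthesize `text` conditioned on a speaker x-vector `spembs`."""
    # Heavy dependencies are imported lazily so the sketch stays
    # importable without them.
    import soundfile as sf
    from espnet2.bin.tts_inference import Text2Speech

    tts = Text2Speech.from_pretrained(
        "espnet/kan-bayashi_libritts_gst_xvector_trasnformer"
    )
    # `spembs` is a speaker embedding (e.g. a Kaldi-style x-vector);
    # for GST, reference speech may likewise be passed via `speech=`.
    wav = tts(text, spembs=spembs)["wav"]
    sf.write(out_path, wav.numpy(), tts.fs)
    return out_path
```

In practice the x-vector would be extracted from a few seconds of reference audio of the target speaker, not constructed by hand.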
### Citing ESPnet
```bibtex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["libritts"]}
|
espnet/kan-bayashi_libritts_gst_xvector_trasnformer
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:libritts",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/libritts_tts_train_gst+xvector_conformer_fastspeech2_transformer_teacher_raw_phn_tacotron_g2p_en_no_space_train.loss`
♻️ Imported from https://zenodo.org/record/4418774/
This model was trained by kan-bayashi using the libritts/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```bibtex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["libritts"]}
|
espnet/kan-bayashi_libritts_tts_train_gst_xvector_conformer_fastspeech2_trans-truncated-c3209b
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:libritts",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/libritts_tts_train_gst+xvector_trasnformer_raw_phn_tacotron_g2p_en_no_space_train.loss.ave`
♻️ Imported from https://zenodo.org/record/4409702/
This model was trained by kan-bayashi using the libritts/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```bibtex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["libritts"]}
|
espnet/kan-bayashi_libritts_tts_train_gst_xvector_trasnformer_raw_phn_tacotro-truncated-250027
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:libritts",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/libritts_tts_train_xvector_conformer_fastspeech2_transformer_teacher_raw_phn_tacotron_g2p_en_no_space_train.loss`
♻️ Imported from https://zenodo.org/record/4418754/
This model was trained by kan-bayashi using the libritts/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```bibtex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["libritts"]}
|
espnet/kan-bayashi_libritts_tts_train_xvector_conformer_fastspeech2_transform-truncated-42b443
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:libritts",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/libritts_tts_train_xvector_trasnformer_raw_phn_tacotron_g2p_en_no_space_train.loss.ave`
♻️ Imported from https://zenodo.org/record/4409704/
This model was trained by kan-bayashi using the libritts/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```bibtex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["libritts"]}
|
espnet/kan-bayashi_libritts_tts_train_xvector_trasnformer_raw_phn_tacotron_g2-truncated-e5fb13
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:libritts",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-to-speech
|
espnet
|
## ESPnet2 TTS pretrained model
### `kan-bayashi/libritts_tts_train_xvector_vits_raw_phn_tacotron_g2p_en_no_space_train.total_count.ave`
♻️ Imported from https://zenodo.org/record/5521416/
This model was trained by kan-bayashi using the libritts/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```bibtex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["libritts"]}
|
espnet/kan-bayashi_libritts_tts_train_xvector_vits_raw_phn_tacotron_g2p_en_no-truncated-09d645
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:libritts",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/libritts_xvector_conformer_fastspeech2`
♻️ Imported from https://zenodo.org/record/4418754/
This model was trained by kan-bayashi using the libritts/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```bibtex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["libritts"]}
|
espnet/kan-bayashi_libritts_xvector_conformer_fastspeech2
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:libritts",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/libritts_xvector_trasnformer`
♻️ Imported from https://zenodo.org/record/4409704/
This model was trained by kan-bayashi using the libritts/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```bibtex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["libritts"]}
|
espnet/kan-bayashi_libritts_xvector_trasnformer
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:libritts",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-to-speech
|
espnet
|
## ESPnet2 TTS pretrained model
### `kan-bayashi/libritts_xvector_vits`
♻️ Imported from https://zenodo.org/record/5521416/
This model was trained by kan-bayashi using the libritts/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
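Pending the official demo, a minimal sketch of how inference with this VITS model could look is given below. The function name, the `soundfile` dependency, and the shape of the supplied x-vector are assumptions; what is specific to VITS is that it is end-to-end, so no separate vocoder has to be attached.

```python
# Hypothetical sketch only -- the official demo is still "coming soon".
def synthesize_vits(text, spembs, out_path="out.wav"):
    # Imported lazily so the sketch is readable without ESPnet installed.
    import soundfile as sf
    from espnet2.bin.tts_inference import Text2Speech

    # VITS generates the waveform end-to-end, so no separate vocoder
    # needs to be attached via `vocoder_tag`.
    tts = Text2Speech.from_pretrained("espnet/kan-bayashi_libritts_xvector_vits")
    wav = tts(text, spembs=spembs)["wav"]  # caller supplies the x-vector
    sf.write(out_path, wav.numpy(), tts.fs)
    return out_path
```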
### Citing ESPnet
```bibtex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["libritts"]}
|
espnet/kan-bayashi_libritts_xvector_vits
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:libritts",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/ljspeech_conformer_fastspeech2`
♻️ Imported from https://zenodo.org/record/4036268/
This model was trained by kan-bayashi using the ljspeech/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```bibtex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["ljspeech"]}
|
espnet/kan-bayashi_ljspeech_conformer_fastspeech2
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:ljspeech",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/ljspeech_fastspeech`
♻️ Imported from https://zenodo.org/record/3986231/
This model was trained by kan-bayashi using the ljspeech/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```bibtex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["ljspeech"]}
|
espnet/kan-bayashi_ljspeech_fastspeech
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:ljspeech",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/ljspeech_fastspeech2`
♻️ Imported from https://zenodo.org/record/4036272/
This model was trained by kan-bayashi using the ljspeech/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
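While the demo is pending, inference with this FastSpeech2 model might be sketched as follows. The helper name and the particular vocoder tag are assumptions (any compatible neural vocoder tag could be substituted); the point is that FastSpeech2 predicts mel-spectrograms, so a vocoder is needed to render audio.

```python
# Hypothetical sketch only -- the official demo is still "coming soon".
def synthesize(text, out_path="out.wav"):
    # Imported lazily so the sketch is readable without ESPnet installed.
    import soundfile as sf
    from espnet2.bin.tts_inference import Text2Speech

    # FastSpeech2 predicts mel-spectrograms; a neural vocoder is
    # attached to turn them into a waveform (the tag below is an
    # assumption and may be swapped for any compatible vocoder).
    tts = Text2Speech.from_pretrained(
        model_tag="espnet/kan-bayashi_ljspeech_fastspeech2",
        vocoder_tag="parallel_wavegan/ljspeech_parallel_wavegan.v1",
    )
    wav = tts(text)["wav"]
    sf.write(out_path, wav.numpy(), tts.fs)
    return out_path
```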
### Citing ESPnet
```bibtex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["ljspeech"]}
|
espnet/kan-bayashi_ljspeech_fastspeech2
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:ljspeech",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-to-speech
|
espnet
|
## ESPnet2 TTS pretrained model
### `kan-bayashi/ljspeech_joint_finetune_conformer_fastspeech2_hifigan`
♻️ Imported from https://zenodo.org/record/5498896/
This model was trained by kan-bayashi using the ljspeech/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
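As the demo is not yet published, the following hedged sketch shows how this jointly finetuned model could be used. The helper name and `soundfile` dependency are assumptions; the distinguishing detail is that the HiFi-GAN vocoder was finetuned together with the acoustic model and ships inside it.

```python
# Hypothetical sketch only -- the official demo is still "coming soon".
def synthesize(text, out_path="out.wav"):
    # Imported lazily so the sketch is readable without ESPnet installed.
    import soundfile as sf
    from espnet2.bin.tts_inference import Text2Speech

    # The HiFi-GAN vocoder was finetuned jointly with the acoustic
    # model, so no separate `vocoder_tag` should be needed here.
    tts = Text2Speech.from_pretrained(
        "espnet/kan-bayashi_ljspeech_joint_finetune_conformer_fastspeech2_hifigan"
    )
    wav = tts(text)["wav"]
    sf.write(out_path, wav.numpy(), tts.fs)
    return out_path
```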
### Citing ESPnet
```bibtex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["ljspeech"]}
|
espnet/kan-bayashi_ljspeech_joint_finetune_conformer_fastspeech2_hifigan
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:ljspeech",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-to-speech
|
espnet
|
## ESPnet2 TTS pretrained model
### `kan-bayashi/ljspeech_joint_train_conformer_fastspeech2_hifigan`
♻️ Imported from https://zenodo.org/record/5498487/
This model was trained by kan-bayashi using the ljspeech/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```bibtex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["ljspeech"]}
|
espnet/kan-bayashi_ljspeech_joint_train_conformer_fastspeech2_hifigan
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:ljspeech",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/ljspeech_tacotron2`
♻️ Imported from https://zenodo.org/record/3989498/
This model was trained by kan-bayashi using the ljspeech/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
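Until the official demo arrives, single-speaker Tacotron2 inference might look like this minimal sketch. The helper name and `soundfile` output are assumptions, and the exact vocoding behaviour when no vocoder is attached can vary with the installed ESPnet version.

```python
# Hypothetical sketch only -- the official demo is still "coming soon".
def synthesize(text, out_path="out.wav"):
    # Imported lazily so the sketch is readable without ESPnet installed.
    import soundfile as sf
    from espnet2.bin.tts_inference import Text2Speech

    # Tacotron2 outputs mel-spectrograms; a `vocoder_tag` can be passed
    # to from_pretrained() for neural vocoding (otherwise behaviour
    # depends on the installed ESPnet version).
    tts = Text2Speech.from_pretrained("espnet/kan-bayashi_ljspeech_tacotron2")
    wav = tts(text)["wav"]
    sf.write(out_path, wav.numpy(), tts.fs)
    return out_path
```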
### Citing ESPnet
```bibtex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["ljspeech"]}
|
espnet/kan-bayashi_ljspeech_tacotron2
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:ljspeech",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/ljspeech_transformer`
♻️ Imported from https://zenodo.org/record/4039194/
This model was trained by kan-bayashi using ljspeech/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```bibtex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{ESPnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["ljspeech"]}
|
espnet/kan-bayashi_ljspeech_transformer
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:ljspeech",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-to-speech
|
espnet
|
## ESPnet2 TTS pretrained model
### `kan-bayashi/ljspeech_tts_finetune_joint_conformer_fastspeech2_hifigan_raw_phn_tacotron_g2p_en_no_space_train.total_count.ave`
♻️ Imported from https://zenodo.org/record/5498896/
This model was trained by kan-bayashi using ljspeech/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```bibtex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{ESPnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["ljspeech"]}
|
espnet/kan-bayashi_ljspeech_tts_finetune_joint_conformer_fastspeech2_hifigan_-truncated-737899
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:ljspeech",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/ljspeech_tts_train_conformer_fastspeech2_raw_phn_tacotron_g2p_en_no_space_train.loss.ave`
♻️ Imported from https://zenodo.org/record/4036268/
This model was trained by kan-bayashi using ljspeech/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```bibtex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{ESPnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["ljspeech"]}
|
espnet/kan-bayashi_ljspeech_tts_train_conformer_fastspeech2_raw_phn_tacotron_-truncated-ec9e34
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:ljspeech",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/ljspeech_tts_train_fastspeech2_raw_phn_tacotron_g2p_en_no_space_train.loss.ave`
♻️ Imported from https://zenodo.org/record/4036272/
This model was trained by kan-bayashi using ljspeech/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```bibtex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{ESPnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["ljspeech"]}
|
espnet/kan-bayashi_ljspeech_tts_train_fastspeech2_raw_phn_tacotron_g2p_en_no_space_train.loss.ave
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:ljspeech",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/ljspeech_tts_train_fastspeech_raw_phn_tacotron_g2p_en_no_space_train.loss.best`
♻️ Imported from https://zenodo.org/record/3986231/
This model was trained by kan-bayashi using ljspeech/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```bibtex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{ESPnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["ljspeech"]}
|
espnet/kan-bayashi_ljspeech_tts_train_fastspeech_raw_phn_tacotron_g2p_en_no_space_train.loss.best
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:ljspeech",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-to-speech
|
espnet
|
## ESPnet2 TTS pretrained model
### `kan-bayashi/ljspeech_tts_train_joint_conformer_fastspeech2_hifigan_raw_phn_tacotron_g2p_en_no_space_train.total_count.ave`
♻️ Imported from https://zenodo.org/record/5498487/
This model was trained by kan-bayashi using ljspeech/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```bibtex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{ESPnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["ljspeech"]}
|
espnet/kan-bayashi_ljspeech_tts_train_joint_conformer_fastspeech2_hifigan_raw-truncated-af8fe0
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:ljspeech",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/ljspeech_tts_train_tacotron2_raw_phn_tacotron_g2p_en_no_space_train.loss.best`
♻️ Imported from https://zenodo.org/record/3989498/
This model was trained by kan-bayashi using ljspeech/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```bibtex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{ESPnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["ljspeech"]}
|
espnet/kan-bayashi_ljspeech_tts_train_tacotron2_raw_phn_tacotron_g2p_en_no_space_train.loss.best
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:ljspeech",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/ljspeech_tts_train_transformer_raw_phn_tacotron_g2p_en_no_space_train.loss.ave`
♻️ Imported from https://zenodo.org/record/4039194/
This model was trained by kan-bayashi using ljspeech/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```bibtex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{ESPnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["ljspeech"]}
|
espnet/kan-bayashi_ljspeech_tts_train_transformer_raw_phn_tacotron_g2p_en_no_space_train.loss.ave
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:ljspeech",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-to-speech
|
espnet
|
## ESPnet2 TTS pretrained model
### `kan-bayashi/ljspeech_tts_train_vits_raw_phn_tacotron_g2p_en_no_space_train.total_count.ave`
♻️ Imported from https://zenodo.org/record/5443814/
This model was trained by kan-bayashi using ljspeech/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```bibtex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{ESPnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["ljspeech"]}
|
espnet/kan-bayashi_ljspeech_tts_train_vits_raw_phn_tacotron_g2p_en_no_space_train.total_count.ave
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:ljspeech",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-to-speech
|
espnet
|
## ESPnet2 TTS pretrained model
### `kan-bayashi/ljspeech_vits`
♻️ Imported from https://zenodo.org/record/5443814/
This model was trained by kan-bayashi using ljspeech/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```bibtex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{ESPnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["ljspeech"]}
|
espnet/kan-bayashi_ljspeech_vits
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:ljspeech",
"arxiv:1804.00015",
"license:cc-by-4.0",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-to-speech
|
espnet
|
## ESPnet2 TTS pretrained model
### `kan-bayashi/tsukuyomi_full_band_vits_prosody`
♻️ Imported from https://zenodo.org/record/5521446/
This model was trained by kan-bayashi using tsukuyomi/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{ESPnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["tsukuyomi"]}
|
espnet/kan-bayashi_tsukuyomi_full_band_vits_prosody
| null |
[
"espnet",
"audio",
"text-to-speech",
"ja",
"dataset:tsukuyomi",
"arxiv:1804.00015",
"license:cc-by-4.0",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-to-speech
|
espnet
|
## ESPnet2 TTS pretrained model
### `kan-bayashi/tsukuyomi_tts_finetune_full_band_jsut_vits_raw_phn_jaconv_pyopenjtalk_prosody_latest`
♻️ Imported from https://zenodo.org/record/5521446/
This model was trained by kan-bayashi using tsukuyomi/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```bibtex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{ESPnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["tsukuyomi"]}
|
espnet/kan-bayashi_tsukuyomi_tts_finetune_full_band_jsut_vits_raw_phn_jaconv_pyopenjtalk_prosody_latest
| null |
[
"espnet",
"audio",
"text-to-speech",
"ja",
"dataset:tsukuyomi",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-to-speech
|
espnet
|
## ESPnet2 TTS pretrained model
### `kan-bayashi/vctk_full_band_multi_spk_vits`
♻️ Imported from https://zenodo.org/record/5521431/
This model was trained by kan-bayashi using vctk/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```bibtex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{ESPnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["vctk"]}
|
espnet/kan-bayashi_vctk_full_band_multi_spk_vits
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:vctk",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/vctk_gst_conformer_fastspeech2`
♻️ Imported from https://zenodo.org/record/4036264/
This model was trained by kan-bayashi using vctk/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```bibtex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{ESPnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["vctk"]}
|
espnet/kan-bayashi_vctk_gst_conformer_fastspeech2
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:vctk",
"arxiv:1804.00015",
"license:cc-by-4.0",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/vctk_gst_fastspeech`
♻️ Imported from https://zenodo.org/record/3986241/
This model was trained by kan-bayashi using vctk/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{ESPnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["vctk"]}
|
espnet/kan-bayashi_vctk_gst_fastspeech
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:vctk",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/vctk_gst_fastspeech2`
♻️ Imported from https://zenodo.org/record/4036266/
This model was trained by kan-bayashi using vctk/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["vctk"]}
|
espnet/kan-bayashi_vctk_gst_fastspeech2
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:vctk",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/vctk_gst_tacotron2`
♻️ Imported from https://zenodo.org/record/3986237/
This model was trained by kan-bayashi using vctk/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["vctk"]}
|
espnet/kan-bayashi_vctk_gst_tacotron2
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:vctk",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/vctk_gst_transformer`
♻️ Imported from https://zenodo.org/record/4037456/
This model was trained by kan-bayashi using vctk/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["vctk"]}
|
espnet/kan-bayashi_vctk_gst_transformer
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:vctk",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/vctk_gst+xvector_conformer_fastspeech2`
♻️ Imported from https://zenodo.org/record/4394608/
This model was trained by kan-bayashi using vctk/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["vctk"]}
|
espnet/kan-bayashi_vctk_gst_xvector_conformer_fastspeech2
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:vctk",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/vctk_gst+xvector_tacotron2`
♻️ Imported from https://zenodo.org/record/4394598/
This model was trained by kan-bayashi using vctk/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["vctk"]}
|
espnet/kan-bayashi_vctk_gst_xvector_tacotron2
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:vctk",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/vctk_gst+xvector_transformer`
♻️ Imported from https://zenodo.org/record/4393277/
This model was trained by kan-bayashi using vctk/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["vctk"]}
|
espnet/kan-bayashi_vctk_gst_xvector_transformer
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:vctk",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-to-speech
|
espnet
|
## ESPnet2 TTS pretrained model
### `kan-bayashi/vctk_multi_spk_vits`
♻️ Imported from https://zenodo.org/record/5500759/
This model was trained by kan-bayashi using vctk/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
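Pending the official demo, multi-speaker VITS inference typically follows the generic ESPnet2 `Text2Speech` pattern sketched below. This is an assumption, not this card's verified recipe: the Hub id and the particular speaker id passed via `sids` are illustrative.

```python
# Sketch only: multi-speaker VITS inference via ESPnet2, assuming `espnet`,
# `espnet_model_zoo`, `numpy`, and `soundfile` are installed.
import numpy as np
import soundfile as sf
from espnet2.bin.tts_inference import Text2Speech

# Hypothetical Hub id matching this card.
tts = Text2Speech.from_pretrained("espnet/kan-bayashi_vctk_multi_spk_vits")

# A multi-speaker VITS model selects a VCTK speaker via an integer
# speaker id (`sids`); 4 here is an arbitrary example.
out = tts("Hello from ESPnet2.", sids=np.array(4))

# Write the synthesized waveform at the model's sampling rate.
sf.write("out.wav", out["wav"].numpy(), tts.fs)
```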
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["vctk"]}
|
espnet/kan-bayashi_vctk_multi_spk_vits
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:vctk",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-to-speech
|
espnet
|
## ESPnet2 TTS pretrained model
### `kan-bayashi/vctk_tts_train_full_band_multi_spk_vits_raw_phn_tacotron_g2p_en_no_space_train.total_count.ave`
♻️ Imported from https://zenodo.org/record/5521431/
This model was trained by kan-bayashi using vctk/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["vctk"]}
|
espnet/kan-bayashi_vctk_tts_train_full_band_multi_spk_vits_raw_phn_tacotron_g-truncated-50b003
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:vctk",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/vctk_tts_train_gst_conformer_fastspeech2_raw_phn_tacotron_g2p_en_no_space_train.loss.ave`
♻️ Imported from https://zenodo.org/record/4036264/
This model was trained by kan-bayashi using vctk/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["vctk"]}
|
espnet/kan-bayashi_vctk_tts_train_gst_conformer_fastspeech2_raw_phn_tacotron_-truncated-69081b
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:vctk",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/vctk_tts_train_gst_fastspeech2_raw_phn_tacotron_g2p_en_no_space_train.loss.ave`
♻️ Imported from https://zenodo.org/record/4036266/
This model was trained by kan-bayashi using vctk/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["vctk"]}
|
espnet/kan-bayashi_vctk_tts_train_gst_fastspeech2_raw_phn_tacotron_g2p_en_no_space_train.loss.ave
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:vctk",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/vctk_tts_train_gst_fastspeech_raw_phn_tacotron_g2p_en_no_space_train.loss.best`
♻️ Imported from https://zenodo.org/record/3986241/
This model was trained by kan-bayashi using vctk/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["vctk"]}
|
espnet/kan-bayashi_vctk_tts_train_gst_fastspeech_raw_phn_tacotron_g2p_en_no_space_train.loss.best
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:vctk",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/vctk_tts_train_gst_tacotron2_raw_phn_tacotron_g2p_en_no_space_train.loss.best`
♻️ Imported from https://zenodo.org/record/3986237/
This model was trained by kan-bayashi using vctk/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["vctk"]}
|
espnet/kan-bayashi_vctk_tts_train_gst_tacotron2_raw_phn_tacotron_g2p_en_no_space_train.loss.best
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:vctk",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/vctk_tts_train_gst_transformer_raw_phn_tacotron_g2p_en_no_space_train.loss.ave`
♻️ Imported from https://zenodo.org/record/4037456/
This model was trained by kan-bayashi using vctk/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["vctk"]}
|
espnet/kan-bayashi_vctk_tts_train_gst_transformer_raw_phn_tacotron_g2p_en_no_space_train.loss.ave
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:vctk",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/vctk_tts_train_gst+xvector_conformer_fastspeech2_transformer_teacher_raw_phn_tacotron_g2p_en_no_space_train.loss.ave`
♻️ Imported from https://zenodo.org/record/4394608/
This model was trained by kan-bayashi using vctk/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["vctk"]}
|
espnet/kan-bayashi_vctk_tts_train_gst_xvector_conformer_fastspeech2_transform-truncated-e051a9
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:vctk",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/vctk_tts_train_gst+xvector_tacotron2_raw_phn_tacotron_g2p_en_no_space_train.loss.ave`
♻️ Imported from https://zenodo.org/record/4394598/
This model was trained by kan-bayashi using vctk/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["vctk"]}
|
espnet/kan-bayashi_vctk_tts_train_gst_xvector_tacotron2_raw_phn_tacotron_g2p_en_no_space_train.loss.ave
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:vctk",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-to-speech
|
espnet
|
## ESPnet2 TTS pretrained model
### `kan-bayashi/vctk_tts_train_multi_spk_vits_raw_phn_tacotron_g2p_en_no_space_train.total_count.ave`
♻️ Imported from https://zenodo.org/record/5500759/
This model was trained by kan-bayashi using vctk/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["vctk"]}
|
espnet/kan-bayashi_vctk_tts_train_multi_spk_vits_raw_phn_tacotron_g2p_en_no_space_train.total_count.ave
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:vctk",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/vctk_tts_train_xvector_conformer_fastspeech2_transformer_teacher_raw_phn_tacotron_g2p_en_no_space_train.loss.ave`
♻️ Imported from https://zenodo.org/record/4394602/
This model was trained by kan-bayashi using vctk/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["vctk"]}
|
espnet/kan-bayashi_vctk_tts_train_xvector_conformer_fastspeech2_transformer_t-truncated-69a657
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:vctk",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/vctk_tts_train_xvector_tacotron2_raw_phn_tacotron_g2p_en_no_space_train.loss.ave`
♻️ Imported from https://zenodo.org/record/4394600/
This model was trained by kan-bayashi using vctk/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["vctk"]}
|
espnet/kan-bayashi_vctk_tts_train_xvector_tacotron2_raw_phn_tacotron_g2p_en_no_space_train.loss.ave
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:vctk",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/vctk_tts_train_xvector_transformer_raw_phn_tacotron_g2p_en_no_space_train.loss.ave`
♻️ Imported from https://zenodo.org/record/4393279/
This model was trained by kan-bayashi using vctk/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["vctk"]}
|
espnet/kan-bayashi_vctk_tts_train_xvector_transformer_raw_phn_tacotron_g2p_en_no_space_train.loss.ave
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:vctk",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/vctk_xvector_conformer_fastspeech2`
♻️ Imported from https://zenodo.org/record/4394602/
This model was trained by kan-bayashi using vctk/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["vctk"]}
|
espnet/kan-bayashi_vctk_xvector_conformer_fastspeech2
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:vctk",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/vctk_xvector_tacotron2`
♻️ Imported from https://zenodo.org/record/4394600/
This model was trained by kan-bayashi using vctk/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["vctk"]}
|
espnet/kan-bayashi_vctk_xvector_tacotron2
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:vctk",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/vctk_xvector_transformer`
♻️ Imported from https://zenodo.org/record/4393279/
This model was trained by kan-bayashi using vctk/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["vctk"]}
|
espnet/kan-bayashi_vctk_xvector_transformer
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:vctk",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-to-speech
|
espnet
|
# ESPnet2 TTS pretrained model
## `kan-bayashi/jsut_tts_train_conformer_fastspeech2_raw_phn_jaconv_pyopenjtalk_train.loss.ave`
♻️ Imported from <https://zenodo.org/record/4017026>
This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Training config
See full config in [`config.yaml`](./config.yaml)
```yaml
config: conf/tuning/train_conformer_fastspeech2.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/tts_train_conformer_fastspeech2_raw_phn_jaconv_pyopenjtalk
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["ljspeech"], "widget": [{"text": "Hello, how are you doing?"}]}
|
espnet/kan_bayashi_jsut_tts_train_conformer_fastspeech2_raw_phn_jaconv_pyopenjtalk_train.loss.ave
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:ljspeech",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
automatic-speech-recognition
|
espnet
|
## ESPnet2 ASR model
### `espnet/pengcheng_guo_wenetspeech_asr_train_asr_raw_zh_char`
This model was trained by Pengcheng Guo using wenetspeech recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout 5c21f63e45e0961a5d817017c282b0cafd68a3aa
pip install -e .
cd egs2/wenetspeech/asr1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/pengcheng_guo_wenetspeech_asr_train_asr_raw_zh_char
```
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Wed Oct 6 15:11:20 CST 2021`
- python version: `3.8.11 (default, Aug 3 2021, 15:09:35) [GCC 7.5.0]`
- espnet version: `espnet 0.10.2a1`
- pytorch version: `pytorch 1.9.0`
- Git hash: ``
- Commit date: ``
## asr_train_asr_conformer_raw_zh_char
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_rnn_asr_model_valid.acc.ave_10best/aishell_test|7176|7176|67.1|32.9|0.0|0.1|33.0|32.9|
|decode_asr_rnn_asr_model_valid.acc.ave_10best/dev|13825|16684|32.1|54.1|13.8|0.1|68.0|64.2|
|decode_asr_rnn_asr_model_valid.acc.ave_10best/test_meeting|8370|8599|13.4|84.6|2.0|0.1|86.7|86.8|
|decode_asr_rnn_asr_model_valid.acc.ave_10best/test_net|24774|25995|46.2|50.4|3.4|1.1|54.9|52.5|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_rnn_asr_model_valid.acc.ave_10best/aishell_test|7176|104765|96.3|3.6|0.1|0.2|3.9|32.9|
|decode_asr_rnn_asr_model_valid.acc.ave_10best/dev|13825|333357|90.7|3.4|5.9|0.4|9.7|64.2|
|decode_asr_rnn_asr_model_valid.acc.ave_10best/test_meeting|8370|220614|84.6|5.0|10.4|0.5|15.9|86.8|
|decode_asr_rnn_asr_model_valid.acc.ave_10best/test_net|24774|416968|91.8|5.3|2.9|0.6|8.8|52.5|
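The CER rows above are character-level edit-distance rates: substitutions, deletions, and insertions counted against the reference length. A minimal self-contained sketch of how such a rate is computed (the example strings below are hypothetical, not drawn from the evaluation sets):

```python
def edit_distance(ref, hyp):
    # Classic dynamic-programming Levenshtein distance over characters.
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, d[0] = d[0], i
        for j, h in enumerate(hyp, 1):
            # prev holds d[i-1][j-1]; d[j] still holds d[i-1][j] here.
            prev, d[j] = d[j], min(d[j] + 1,        # deletion
                                   d[j - 1] + 1,    # insertion
                                   prev + (r != h)) # substitution (or match)
    return d[-1]

def cer(ref, hyp):
    # (S + D + I) / reference length, as in the tables above.
    return edit_distance(ref, hyp) / len(ref)

# Hypothetical reference/hypothesis pair with one substituted character.
print(round(cer("甚至出现交易几乎停滞的情况", "甚至出现交易几乎停止的情况"), 3))  # → 0.077
```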
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
## ASR config
<details><summary>expand</summary>
```
config: conf/train_asr_conformer.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_asr_conformer_raw_zh_char
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: 8
dist_rank: 0
local_rank: 0
dist_master_addr: localhost
dist_master_port: 44205
dist_launcher: null
multiprocessing_distributed: true
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 30
patience: null
val_scheduler_criterion:
- valid
- acc
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 10
grad_clip: 5
grad_clip_type: 2.0
grad_noise: false
accum_grad: 4
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 30000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_zh_char/train/speech_shape
- exp/asr_stats_raw_zh_char/train/text_shape.char
valid_shape_file:
- exp/asr_stats_raw_zh_char/valid/speech_shape
- exp/asr_stats_raw_zh_char/valid/text_shape.char
batch_type: numel
valid_batch_type: null
fold_length:
- 51200
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train_l/wav.scp
- speech
- sound
- - dump/raw/train_l/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/dev/wav.scp
- speech
- sound
- - dump/raw/dev/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.0015
scheduler: warmuplr
scheduler_conf:
warmup_steps: 30000
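# The `warmuplr` scheduler above is espnet2's Noam-style WarmupLR; as a
# sketch (assuming the behavior of espnet2.schedulers.warmup_lr.WarmupLR),
# the effective learning rate at a given step is:
#   lr(step) = lr * warmup_steps**0.5 * min(step**-0.5, step * warmup_steps**-1.5)
# i.e. linear warmup that peaks at the base lr (0.0015 here) at step 30000,
# followed by inverse-square-root decay.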
token_list:
- <blank>
- <unk>
- 的
- 我
- 是
- 你
- 了
- 一
- 不
- 这
- 个
- 有
- 就
- 们
- 在
- 他
- 人
- 么
- 来
- 说
- 那
- 要
- 好
- 啊
- 大
- 到
- 上
- 也
- 没
- 都
- 去
- 能
- 子
- 会
- 为
- 得
- 时
- 还
- 可
- 以
- 什
- 家
- 后
- 看
- 呢
- 对
- 事
- 天
- 下
- 过
- 想
- 多
- 小
- 出
- 自
- 儿
- 生
- 给
- 里
- 现
- 着
- 然
- 吧
- 样
- 道
- 吗
- 心
- 跟
- 中
- 很
- 点
- 年
- 和
- 地
- 怎
- 知
- 十
- 老
- 当
- 把
- 话
- 别
- 所
- 之
- 情
- 实
- 开
- 面
- 回
- 行
- 国
- 做
- 己
- 经
- 如
- 真
- 起
- 候
- 些
- 让
- 发
- 她
- 觉
- 但
- 成
- 定
- 意
- 二
- 长
- 最
- 方
- 三
- 前
- 因
- 用
- 呀
- 种
- 只
- 走
- 其
- 问
- 再
- 果
- 而
- 分
- 两
- 打
- 学
- 间
- 您
- 本
- 于
- 明
- 手
- 公
- 听
- 比
- 作
- 女
- 太
- 今
- 从
- 关
- 妈
- 同
- 法
- 动
- 已
- 见
- 才
- 孩
- 感
- 吃
- 常
- 次
- 它
- 进
- 先
- 找
- 身
- 全
- 理
- 又
- 力
- 正
- 主
- 应
- 高
- 被
- 钱
- 快
- 等
- 头
- 重
- 车
- 谢
- 日
- 东
- 放
- 无
- 工
- 咱
- 哪
- 五
- 者
- 像
- 西
- 该
- 干
- 相
- 信
- 机
- 百
- 特
- 业
- 活
- 师
- 边
- 爱
- 友
- 新
- 外
- 位
- 更
- 直
- 几
- 第
- 非
- 四
- 题
- 接
- 少
- 哥
- 死
- 完
- 刚
- 电
- 气
- 安
- 爸
- 白
- 告
- 美
- 解
- 叫
- 月
- 带
- 欢
- 谁
- 体
- 喜
- 部
- 场
- 姐
- 军
- 万
- 结
- 合
- 难
- 八
- 每
- 目
- 亲
- 朋
- 认
- 总
- 加
- 通
- 办
- 马
- 件
- 受
- 任
- 请
- 住
- 王
- 思
- 门
- 名
- 平
- 系
- 文
- 帮
- 路
- 变
- 记
- 水
- 九
- 算
- 将
- 口
- 男
- 度
- 报
- 六
- 张
- 管
- 够
- 性
- 表
- 提
- 何
- 讲
- 期
- 拿
- 保
- 嘛
- 司
- 原
- 始
- 此
- 诉
- 处
- 清
- 内
- 产
- 金
- 晚
- 早
- 交
- 离
- 眼
- 队
- 七
- 入
- 山
- 代
- 市
- 海
- 物
- 零
- 望
- 世
- 婚
- 命
- 越
- 收
- 向
- 花
- 房
- 错
- 节
- 父
- 反
- 战
- 买
- 量
- 或
- 员
- 号
- 千
- 怕
- 底
- 且
- 品
- 民
- 化
- 爷
- 并
- 与
- 服
- 需
- 资
- 求
- 教
- 娘
- 医
- 数
- 院
- 书
- 利
- 往
- 确
- 各
- 单
- 风
- 送
- 必
- 条
- 包
- 准
- 光
- 整
- 病
- 弟
- 嗯
- 计
- 照
- 强
- 务
- 影
- 城
- 夫
- 俩
- 决
- 声
- 连
- 乐
- 息
- 远
- 北
- 至
- 饭
- 留
- 宝
- 神
- 近
- 考
- 备
- 案
- 界
- 容
- 况
- 母
- 较
- 持
- 证
- 选
- 制
- 程
- 喝
- 害
- 字
- 失
- 立
- 台
- 玩
- 查
- 块
- 便
- 挺
- 段
- 周
- 由
- 句
- 紧
- 李
- 据
- 杀
- 南
- 商
- 识
- 网
- 式
- 愿
- 传
- 流
- 消
- 伤
- 根
- 演
- 希
- 故
- 坐
- 建
- 注
- 许
- 调
- 共
- 空
- 半
- 却
- 酒
- 联
- 微
- 言
- 肯
- 赶
- 跑
- 笑
- 区
- 岁
- 红
- 达
- 官
- 轻
- 易
- 火
- 线
- 拉
- 首
- 导
- 团
- 慢
- 指
- 写
- 深
- 论
- 片
- 改
- 啥
- 满
- 步
- 音
- 功
- 聊
- 客
- 未
- 格
- 基
- 睡
- 观
- 份
- 视
- 色
- 价
- 政
- 转
- 终
- 复
- 啦
- 呃
- 阿
- 倒
- 义
- 警
- 林
- 使
- 科
- 运
- 苦
- 待
- 费
- 随
- 救
- 试
- 班
- 敢
- 精
- 及
- 术
- 造
- 续
- 养
- 展
- 答
- 绝
- 众
- 站
- 妹
- 差
- 谈
- 卖
- 播
- 创
- 领
- 象
- 志
- 投
- 习
- 兄
- 元
- 皇
- 专
- 态
- 急
- 局
- 兴
- 楚
- 飞
- 护
- 装
- 热
- 奶
- 取
- 设
- 游
- 读
- 福
- 药
- 担
- 历
- 忙
- 规
- 掉
- 刘
- 切
- 断
- 尽
- 社
- 久
- 支
- 板
- 星
- 姑
- 曾
- 突
- 除
- 华
- 责
- 排
- 京
- 值
- 士
- 统
- 换
- 德
- 衣
- 组
- 示
- 脸
- 刻
- 黑
- 遇
- 虽
- 顾
- 戏
- 怪
- 懂
- 叔
- 夜
- 陈
- 亮
- 江
- 兵
- 负
- 布
- 青
- 落
- 推
- 假
- 类
- 令
- 技
- 英
- 质
- 黄
- 治
- 形
- 助
- 球
- 歌
- 参
- 广
- 继
- 简
- 画
- 奇
- 陪
- 阳
- 险
- 须
- 念
- 迎
- 幸
- 抓
- 破
- 另
- 争
- 竟
- 户
- 律
- 择
- 究
- 龙
- 足
- 店
- 脑
- 斯
- 党
- 权
- 约
- 疑
- 议
- 严
- 密
- 克
- 存
- 穿
- 承
- 校
- 击
- 际
- 标
- 云
- 营
- 察
- 超
- 食
- 集
- 级
- 礼
- 静
- 背
- 武
- 初
- 拍
- 梦
- 验
- 响
- 角
- 石
- 股
- 追
- 怀
- 婆
- 适
- 独
- 忘
- 血
- 醒
- 具
- 罪
- 享
- 毛
- 香
- 状
- 配
- 靠
- 语
- 仅
- 低
- 细
- 米
- 既
- 钟
- 极
- 停
- 味
- 则
- 油
- 器
- 楼
- 菜
- 研
- 互
- 压
- 贵
- 村
- 属
- 派
- 乎
- 坏
- 控
- 显
- 图
- 双
- 职
- 永
- 哈
- 鬼
- 依
- 料
- 按
- 府
- 坚
- 某
- 甚
- 居
- 练
- 顺
- 模
- 即
- 州
- 引
- 乱
- 速
- 庭
- 朝
- 室
- 似
- 付
- 划
- 尔
- 境
- 犯
- 烦
- 环
- 伙
- 巴
- 春
- 古
- 妇
- 势
- 款
- 增
- 财
- 河
- 守
- 虑
- 汉
- 枪
- 妻
- 爹
- 弄
- 委
- 企
- 冲
- 置
- 麻
- 育
- 项
- 防
- 胡
- 杨
- 致
- 辈
- 括
- 毕
- 卫
- 修
- 史
- 型
- 牌
- 嘴
- 苏
- 群
- 举
- 痛
- 座
- 概
- 搞
- 围
- 土
- 毒
- 唱
- 冷
- 累
- 玉
- 获
- 误
- 跳
- 脚
- 雨
- 剧
- 休
- 皮
- 止
- 济
- 肉
- 丽
- 借
- 铁
- 牛
- 哭
- 招
- 闹
- 银
- 优
- 温
- 狗
- 退
- 洗
- 拜
- 否
- 票
- 偷
- 抱
- 博
- 般
- 效
- 套
- 维
- 普
- 康
- 富
- 宫
- 索
- 罗
- 堂
- 智
- 省
- 介
- 孙
- 灵
- 评
- 藏
- 称
- 课
- 货
- 姨
- 艺
- 骗
- 雪
- 赛
- 景
- 昨
- 健
- 鱼
- 激
- 危
- 熟
- 圈
- 闻
- 监
- 替
- 君
- 恋
- 良
- 掌
- 草
- 松
- 供
- 努
- 例
- 短
- 帝
- 姓
- 率
- 族
- 亿
- 赵
- 蛋
- 判
- 预
- 频
- 卡
- 架
- 纪
- 弃
- 秀
- 兰
- 层
- 检
- 伴
- 抗
- 讨
- 源
- 夏
- 咋
- 惊
- 录
- 善
- 补
- 刀
- 充
- 升
- 章
- 午
- 若
- 私
- 吴
- 素
- 旅
- 临
- 挑
- 唐
- 露
- 树
- 斗
- 舞
- 左
- 叶
- 副
- 晓
- 厂
- 弹
- 印
- 秘
- 屋
- 田
- 木
- 困
- 园
- 封
- 逃
- 批
- 馆
- 疼
- 败
- 陆
- 敌
- 散
- 采
- 翻
- 缺
- 胜
- 免
- 销
- 鸡
- 降
- 波
- 测
- 限
- 释
- 忍
- 归
- 床
- 餐
- 茶
- 码
- 宁
- 乡
- 辛
- 彩
- 亚
- 浪
- 漂
- 庆
- 训
- 范
- 烧
- 词
- 吵
- 媳
- 探
- 余
- 恐
- 积
- 农
- 遍
- 舒
- 顶
- 构
- 呼
- 丝
- 执
- 雅
- 惯
- 右
- 脱
- 恩
- 野
- 折
- 趣
- 笔
- 谓
- 盘
- 贝
- 宣
- 绍
- 嘉
- 宋
- 抢
- 嫌
- 尊
- 碰
- 绪
- 丢
- 厉
- 沙
- 轮
- 施
- 织
- 托
- 县
- 策
- 杯
- 逼
- 傻
- 束
- 街
- 疗
- 益
- 骨
- 迷
- 姻
- 恶
- 默
- 寻
- 搜
- 哦
- 材
- 吸
- 劳
- 勇
- 占
- 暴
- 船
- 徐
- 虎
- 融
- 异
- 审
- 攻
- 雷
- 稳
- 呗
- 输
- 睛
- 臣
- 端
- 威
- 秋
- 欧
- 冰
- 韩
- 减
- <space>
- 操
- 混
- 汽
- 暗
- 隐
- 嫂
- 沉
- 烟
- 顿
- 凭
- 洋
- 嫁
- 购
- 粉
- 遗
- 杂
- 协
- 尝
- 键
- 亡
- 秦
- 纸
- 拥
- 革
- 猫
- 伯
- 祝
- 签
- 傅
- 牙
- 湖
- 莫
- 杰
- 旁
- 港
- 劲
- 宗
- 偏
- 触
- 唯
- 吓
- 辆
- 沈
- 列
- 梅
- 祖
- 舍
- 尤
- 赚
- 疫
- 腾
- 拼
- 奖
- 刺
- 齐
- 诚
- 媒
- 戴
- 账
- 炸
- 骂
- 避
- 麦
- 爆
- 域
- 烈
- 暖
- 季
- 猜
- 佳
- 净
- 腿
- 磨
- 曲
- 虚
- 阵
- 荣
- 访
- 核
- 鲜
- 阶
- 镇
- 灯
- 估
- 剩
- 硬
- 租
- 敬
- 损
- 惜
- 挂
- 董
- 巨
- 忆
- 登
- 丈
- 帅
- 童
- 耳
- 央
- 软
- 移
- 略
- 额
- 厅
- 挥
- 透
- 络
- 弱
- 珍
- 恨
- 巧
- 丁
- 谋
- 孤
- 豆
- 诗
- 冒
- 狼
- 渐
- 峰
- 售
- 凡
- 聚
- 洞
- 抽
- 劝
- 闭
- 摆
- 冬
- 凶
- 魔
- 灭
- 雄
- 挣
- 搬
- 龄
- 朱
- 编
- 航
- 席
- 驾
- 授
- 鼓
- 握
- 隔
- 猪
- 仙
- 颜
- 镜
- 胖
- 赢
- 仇
- 晨
- 欺
- 刑
- 谷
- 旦
- 亏
- 盖
- 症
- 喊
- 蓝
- 讯
- 殿
- 梁
- 躲
- 旧
- 针
- 箱
- 丰
- 洲
- 鞋
- 征
- 蒙
- 伟
- 袋
- 庄
- 患
- 怨
- 佛
- 稍
- 朵
- 纳
- 吉
- 川
- 典
- 迹
- 瑞
- 废
- 搭
- 涨
- 汤
- 启
- 桌
- 摸
- 赔
- 宜
- 纯
- 贴
- 聪
- 熊
- 延
- 瓶
- 版
- 缘
- 距
- 甜
- 析
- 盛
- 孕
- 彻
- 桥
- 尚
- 染
- 撞
- 途
- 沟
- 疯
- 敏
- 瞧
- 漫
- 胆
- 诺
- 刷
- 饿
- 仍
- 喂
- 辞
- 迟
- 淡
- 郑
- 歉
- 扰
- 宾
- 圆
- 赞
- 肚
- 慧
- 泪
- 吹
- 拖
- 遭
- 穷
- 罚
- 悔
- 绿
- 忽
- 唉
- 毫
- 绩
- 暂
- 射
- 岛
- 拾
- 珠
- 欠
- 忠
- 陷
- 阴
- 尼
- 悲
- 糊
- 撤
- 徒
- 剑
- 币
- 娜
- 违
- 泡
- 仗
- 粮
- 培
- 趟
- 菲
- 拒
- 棒
- 脾
- 赏
- 窗
- 宇
- 闲
- 附
- 踏
- 彼
- 涉
- 锁
- 撒
- 魂
- 羊
- 述
- 屈
- 库
- 滚
- 凉
- 颗
- 寒
- 呐
- 墙
- 娃
- 序
- 迪
- 丹
- 扬
- 瞎
- 递
- 凤
- 碗
- 屁
- 锅
- 奔
- 幅
- 债
- 糖
- 奋
- 汇
- 圣
- 订
- 偶
- 残
- 宽
- 狂
- 鼠
- 狠
- 幕
- 固
- 竞
- 蜜
- 吐
- 摄
- 骑
- 篇
- 毁
- 尾
- 摇
- 奥
- 厚
- 妖
- 禁
- 逐
- 均
- 尸
- 冠
- 阅
- 辑
- 捕
- 载
- 郭
- 俺
- 诊
- 欲
- 扎
- 鸟
- 柔
- 迫
- 豪
- 踪
- 扔
- 碎
- 末
- 娶
- 扫
- 朕
- 励
- 乔
- 闺
- 档
- 厨
- 倍
- 湾
- 郎
- 幼
- 纷
- 奴
- 阻
- 饮
- 怒
- 妙
- 琴
- 曹
- 脏
- 牵
- 瓜
- 滴
- 炮
- 缓
- 含
- 献
- 柜
- 仔
- 艾
- 潜
- 赌
- 震
- 础
- 添
- 兔
- 焦
- 躺
- 森
- 肥
- 洪
- 孝
- 偿
- 悉
- 撑
- 甘
- 桃
- 苹
- 魏
- 鲁
- 池
- 狱
- 厌
- 纠
- 朗
- 贷
- 铺
- 殊
- 坦
- 爬
- 擦
- 酸
- 钢
- 咖
- 瞒
- 蛮
- 谅
- 耐
- 申
- 夸
- 欣
- 诶
- 驶
- 屏
- 烂
- 凌
- 甲
- 胎
- 仪
- 貌
- 番
- 涂
- 抬
- 舅
- 扯
- 鹿
- 摩
- 诸
- 秒
- 泽
- 埋
- 蒋
- 隆
- 赖
- 奸
- 咬
- 恢
- 宿
- 乖
- 邀
- 抵
- 臭
- 闪
- 莉
- 熬
- 链
- 盯
- 侦
- 灾
- 堆
- 灰
- 卷
- 盾
- 障
- 截
- 恰
- 佩
- 戒
- 莲
- 裁
- 芬
- 戚
- 匪
- 滑
- 趁
- 询
- 绑
- 辣
- 挖
- 俗
- 祸
- 符
- 扣
- 插
- 仁
- 壁
- 腰
- 斤
- 燕
- 筑
- 柱
- 夺
- 援
- 映
- 壮
- 杜
- 摔
- 润
- 恭
- 乌
- 慰
- 啡
- 著
- 井
- 跌
- 牢
- 荐
- 拔
- 惹
- 侯
- 玲
- 炎
- 胸
- 旗
- 牲
- 喽
- 涛
- 衡
- 矛
- 伍
- 贤
- 惨
- 糟
- 慌
- 伏
- 醉
- 仓
- 拆
- 乘
- 疾
- 鼻
- 潮
- 予
- 奉
- 伦
- 劫
- 伊
- 怜
- 孟
- 肺
- 忧
- 倾
- 矩
- 荒
- 奏
- 塔
- 塞
- 迅
- 轨
- 瞬
- 丫
- 狐
- 叛
- 繁
- 眠
- 孔
- 谱
- 悄
- 泰
- 姜
- 侵
- 妃
- 冯
- 柳
- 洛
- 岸
- 凯
- 陛
- 幺
- 仿
- 氏
- 窝
- 曼
- 挡
- 浩
- 盟
- 轩
- 牺
- 贫
- 绕
- 谎
- 措
- 扶
- 梯
- 炼
- 勤
- 霸
- 横
- 罢
- 呆
- 税
- 桂
- 哎
- 慕
- 植
- 允
- 荡
- 洁
- 肖
- 耗
- 贼
- 艰
- 贺
- 幻
- 饱
- 胃
- 袭
- 廷
- 泥
- 丧
- 缩
- 砸
- 姥
- 拦
- 扮
- 糕
- 肤
- 猴
- 脆
- 炒
- 耀
- 盗
- 邓
- 扩
- 纵
- 振
- 敲
- 鹏
- 姆
- 湿
- 丑
- 召
- 苗
- 伸
- 惑
- 碍
- 萨
- 瘦
- 闯
- 迁
- 坑
- 弯
- 卑
- 尖
- 遥
- 侠
- 犹
- 押
- 冤
- 钻
- 汗
- 闷
- 邻
- 淘
- 抛
- 妆
- 贾
- 侧
- 傲
- 描
- 耍
- 猛
- 薇
- 裤
- 憾
- 督
- 贸
- 墨
- 勒
- 薄
- 嘞
- 渡
- 紫
- 悟
- 锦
- 溜
- 逆
- 惠
- 辉
- 贪
- 圾
- 垃
- 券
- 燃
- 虫
- 悠
- 伪
- 尿
- 懒
- 俊
- 寄
- 歇
- 盒
- 潘
- 储
- 愈
- 脉
- 粗
- 返
- 昌
- 泉
- 蔡
- 愧
- 赤
- 岳
- 婷
- 猎
- 饼
- 肩
- 勾
- 巡
- 竹
- 催
- 陌
- 踩
- 促
- 扭
- 堵
- 酷
- 芳
- 逛
- 陵
- 耽
- 凑
- 寿
- 缝
- 剪
- 郁
- 宅
- 抚
- 筹
- 沿
- 烤
- 奈
- 挨
- 晋
- 崩
- 浮
- 阁
- 彭
- 裂
- 崇
- 眉
- 桑
- 辩
- 漏
- 稀
- 液
- 汪
- 袁
- 掩
- 浑
- 坡
- 晕
- 缠
- 仰
- 挤
- 睁
- 羽
- 岗
- 捡
- 墓
- 综
- 矿
- 妥
- 厕
- 辱
- 惧
- 逗
- 帽
- 寸
- 搁
- 跨
- 渴
- 饰
- 璃
- 琳
- 爽
- 愤
- 饶
- 卧
- 誓
- 滋
- 鉴
- 腐
- 鸭
- 蛇
- 妮
- 莱
- 哟
- 钥
- 甄
- 肠
- 畅
- 慎
- 悬
- 逻
- 胁
- 辰
- 呈
- 棋
- 寨
- 萌
- 覆
- 姚
- 津
- 笨
- 轰
- 乏
- 匙
- 摊
- 陶
- 恼
- 昏
- 抑
- 姿
- 愁
- 誉
- 椅
- 羞
- 澡
- 踢
- 晶
- 萧
- 箭
- 罩
- 宠
- 羡
- 亦
- 祥
- 串
- 昆
- 煮
- 疏
- 纹
- 泄
- 痕
- 喷
- 册
- 跃
- 卢
- 岩
- 跪
- 兽
- 桶
- 飘
- 漠
- 堪
- 哄
- 寂
- 崔
- 腹
- 癌
- 拳
- 驻
- 霍
- 拨
- 诞
- 捐
- 御
- 榜
- 唤
- 荷
- 径
- 署
- 锋
- 玛
- 匆
- 恒
- 吕
- 邮
- 圳
- 黎
- 掏
- 莎
- 寞
- 佐
- 诈
- 牧
- 盐
- 叹
- 尬
- 匹
- 狸
- 膀
- 谨
- 尘
- 驱
- 乳
- 晒
- 宴
- 辜
- 哲
- 铜
- 薪
- 盆
- 割
- 忌
- 旋
- 翼
- 哀
- 咨
- 遵
- 夹
- 侣
- 译
- 胞
- 浅
- 邦
- 俄
- 弗
- 豫
- 甭
- 乃
- 扛
- 杭
- 瓦
- 槽
- 污
- 尴
- 琢
- 枝
- 详
- 柴
- 佑
- 盼
- 抖
- 惩
- 捷
- 葬
- 贡
- 艳
- 塑
- 茫
- 叨
- 浓
- 拐
- 捉
- 憋
- 稿
- 苍
- 葛
- 扑
- 娱
- 赋
- 杆
- 绘
- 聆
- 肌
- 婴
- 摘
- 岂
- 呵
- 冻
- 泳
- 揭
- 坤
- 盈
- 毅
- 撕
- 娇
- 唠
- 宏
- 吊
- 籍
- 楠
- 肃
- 抹
- 玄
- 湘
- 迈
- 酱
- 骄
- 咐
- 扇
- 幽
- 疲
- 邪
- 吞
- 趋
- 尺
- 玻
- 溃
- 诱
- 翠
- 兼
- 辅
- 岭
- 栏
- 柏
- 址
- 寺
- 逢
- 琪
- 慈
- 愣
- 契
- 渠
- 齿
- 薛
- 拟
- 填
- 坛
- 抄
- 痴
- 绳
- 役
- 擅
- 晃
- 斌
- 愉
- 届
- 悦
- 旨
- 砍
- 弥
- 挽
- 肝
- 鸣
- 庙
- 烫
- 聘
- 皆
- 婶
- 舌
- 枉
- 赫
- 蓉
- 瞅
- 阔
- 俱
- 循
- 鸿
- 彪
- 伺
- 堡
- 谦
- 剂
- 洒
- 赴
- 妨
- 磊
- 嘱
- 蝶
- 兆
- 豹
- 绣
- 篮
- 锻
- 陕
- 霉
- 涵
- 疆
- 丸
- 蠢
- 铃
- 浙
- 庞
- 萝
- 泛
- 芝
- 煤
- 甩
- 氛
- 页
- 逸
- 袖
- 携
- 躁
- 夕
- 匠
- 蹈
- 坊
- 雾
- 蹲
- 颠
- 脂
- 塌
- 棵
- 鹰
- 澳
- 哇
- 筋
- 纽
- 脖
- 棉
- 渣
- 寡
- 践
- 侄
- 披
- 魅
- 虹
- 肿
- 胶
- 霞
- 罐
- 晴
- 拓
- 卿
- 耻
- 砖
- 宪
- 歪
- 兜
- 衰
- 捧
- 歹
- 雕
- 穆
- 栋
- 瑶
- 毙
- 衷
- 膜
- 囊
- 莹
- 垫
- 吻
- 嘟
- 舰
- 虾
- 壳
- 穴
- 勉
- 裙
- 旺
- 柯
- 磕
- 贩
- 腻
- 蹦
- 卜
- 茹
- 驴
- 臂
- 删
- 菌
- 妾
- 蜂
- 祭
- 菊
- 咸
- 淑
- 笼
- 涯
- 碧
- 宙
- 骚
- 皓
- 赐
- 晰
- 腔
- 龟
- 泼
- 鹅
- 啪
- 巾
- 炉
- 沾
- 醋
- 澜
- 朴
- 棍
- 伞
- 雀
- 赠
- 妞
- 淋
- 刮
- 汁
- 椒
- 埃
- 嚷
- 盲
- 窃
- 辽
- 贱
- 滩
- 昭
- 贯
- 珊
- 涌
- 辨
- 捞
- 仲
- 拘
- 碑
- 侍
- 剿
- 搅
- 狮
- 藤
- 旭
- 翅
- 滨
- 禀
- 遮
- 瑟
- 斩
- 攒
- 犬
- 挫
- 僧
- 吩
- 渊
- 蒂
- 萍
- 庸
- 蓄
- 鼎
- 咪
- 姬
- 溪
- 郡
- 镖
- 怡
- 杉
- 畏
- 瓷
- 枚
- 煎
- 劣
- 饺
- 妄
- 卓
- 蔽
- 蒸
- 垂
- 嘲
- 慨
- 谊
- 蹭
- 逮
- 锐
- 钉
- 舟
- 沃
- 凝
- 翔
- 颈
- 靖
- 灌
- 膊
- 崖
- 娟
- 胳
- 铭
- 灿
- 亭
- 粒
- 卸
- 咕
- 坎
- 攀
- 婿
- 奢
- 茂
- 趴
- 耿
- 捏
- 怖
- 浴
- 婉
- 煌
- 霖
- 揍
- 昂
- 驰
- 壶
- 械
- 卦
- 粥
- 尹
- 瘾
- 雇
- 翰
- 肆
- 寇
- 曦
- 厢
- 杠
- 屠
- 芒
- 谣
- 沫
- 掘
- 酬
- 讼
- 乾
- 玫
- 瑰
- 逊
- 惦
- 儒
- 肾
- 粹
- 愚
- 渔
- 暑
- 伐
- 潇
- 喘
- 敦
- 翁
- 斥
- 帖
- 纱
- 梳
- 缴
- 茅
- 谭
- 氧
- 遣
- 履
- 刹
- 枕
- 婢
- 徽
- 轿
- 寓
- 咽
- 叉
- 嗓
- 捣
- 裹
- 览
- 拯
- 疚
- 蜀
- 丛
- 框
- 斑
- 宵
- 郝
- 蛙
- 熙
- 祁
- 哑
- 葱
- 唇
- 韦
- 媛
- 魄
- 锤
- 绵
- 炫
- 吨
- 稻
- 碌
- 刊
- 漆
- 搏
- 讶
- 痒
- 枫
- 妒
- 冥
- 郊
- 爵
- 逝
- 栽
- 叠
- 蚁
- 裕
- 帕
- 剥
- 谐
- 巫
- 颇
- 娥
- 廊
- 蕾
- 丘
- 丞
- 葡
- 坠
- 鸦
- 糗
- 虐
- 唬
- 屎
- 顽
- 巷
- 硅
- 罕
- 殖
- 嘿
- 韵
- 歧
- 垮
- 淮
- 馈
- 昊
- 宰
- 钦
- 霜
- 兑
- 萄
- 塘
- 胀
- 樱
- 枯
- 咳
- 窑
- 募
- 缸
- 昧
- 仑
- 恕
- 氓
- 叮
- 吼
- 坟
- 轴
- 贞
- 赎
- 帆
- 嫩
- 蚂
- 僵
- 颖
- 噜
- 咒
- 琐
- 勃
- 芯
- 绸
- 哼
- 仨
- 挪
- 狡
- 禅
- 粘
- 雯
- 扒
- 恳
- 蔬
- 匈
- 钓
- 桐
- 菇
- 哒
- 稚
- 膏
- 纲
- 狄
- 硕
- 廉
- 衙
- 艘
- 廖
- 腊
- 蟹
- 邱
- 缉
- 曝
- 桩
- 啤
- 嫉
- 棚
- 矮
- 汰
- 衍
- 拽
- 削
- 彤
- 斜
- 揉
- 樊
- 馨
- 钩
- 浦
- 肢
- 敷
- 喻
- 鞭
- 瞪
- 耕
- 掐
- 屡
- 榴
- 勋
- 泊
- 竭
- 鹤
- 溢
- 淳
- 倩
- 驳
- 抠
- 捅
- 筒
- 窄
- 鄙
- 嗦
- 袍
- 劈
- 炖
- 裸
- 贬
- 敞
- 嘎
- 淹
- 耶
- 秩
- 舱
- 厦
- 叙
- 孽
- 筷
- 浇
- 饥
- 噩
- 蚊
- 兮
- 皱
- 侃
- 辟
- 弊
- 袜
- 吾
- 俘
- 芸
- 夷
- 芦
- 囚
- 倡
- 琦
- 哨
- 巢
- 烛
- 帐
- 燥
- 讽
- 俞
- 馅
- 柿
- 墅
- 妍
- 瘤
- 沦
- 衬
- 瑜
- 蒜
- 蛛
- 窟
- 勿
- 沛
- 磁
- 狭
- 栈
- 懵
- 酿
- 戈
- 邵
- 龚
- 衫
- 勺
- 哗
- 叽
- 畜
- 爪
- 惫
- 颁
- 浸
- 摧
- 勘
- 惕
- 蔓
- 馒
- 挠
- 陀
- 豁
- 帘
- 淀
- 藩
- 蜡
- 凳
- 蘑
- 琼
- 棺
- 蝴
- 骆
- 掰
- 枣
- 遂
- 飙
- 咧
- 掀
- 梨
- 杏
- 嗑
- 棠
- 绽
- 捆
- 舆
- 肇
- 葩
- 呦
- 膝
- 鹊
- 揣
- 瓣
- 靓
- 卵
- 鲍
- 炭
- 戳
- 颤
- 禄
- 菩
- 崛
- 驸
- 佣
- 眨
- 聂
- 乙
- 嘻
- 拧
- 喵
- 佟
- 靳
- 阎
- 拢
- 厘
- 凰
- 疤
- 螺
- 淇
- 涩
- 拎
- 嗨
- 魁
- 薯
- 歼
- 沪
- 筛
- 谍
- 揪
- 刁
- 秃
- 谜
- 撇
- 肪
- 绊
- 逞
- 滥
- 寝
- 麟
- 奕
- 侮
- 喉
- 柄
- 荆
- 撼
- 窦
- 姗
- 乞
- 艇
- 竖
- 剖
- 嗽
- 捂
- 腕
- 鸽
- 刃
- 弓
- 辙
- 粤
- 泣
- 梗
- 茄
- 茜
- 驼
- 冈
- 倔
- 啃
- 蹄
- 唧
- 祈
- 腺
- 焰
- 睿
- 崽
- A
- 苛
- 窍
- 凿
- 倭
- 骤
- 槛
- 碳
- 诏
- 芽
- 浆
- 隶
- 搂
- 睦
- 彬
- 岔
- 诀
- 嚼
- 掺
- 殷
- 吁
- 啰
- 侈
- 亩
- 纤
- 倦
- 揽
- 媚
- 潭
- 莽
- 赃
- 睹
- 脊
- 逍
- 淼
- 沸
- 峡
- 仆
- 眷
- 屯
- 璐
- 雁
- 澄
- 渗
- 咔
- 啸
- 怂
- 娄
- 惶
- 恍
- 锡
- 秉
- 猾
- 挟
- 舔
- 弦
- 阱
- 俭
- 嚣
- 搓
- 懈
- 诡
- 隙
- 苟
- 倘
- 瘫
- 扁
- 鑫
- 撩
- 蓬
- 铲
- 峥
- 巅
- 葫
- 膳
- 狙
- 晏
- 祠
- 峻
- 尉
- 毯
- 沧
- 熏
- 咯
- 株
- 沐
- 奎
- 锣
- 霄
- 彦
- 叭
- 臻
- 昔
- 灶
- 傍
- 腥
- 屑
- 禾
- 彰
- 冉
- 矫
- 滞
- 瘩
- 匀
- 椎
- 槐
- 岚
- 跷
- 剔
- 倪
- 盏
- 泌
- 灸
- 隧
- 函
- 壤
- 剃
- 蹊
- 葵
- 拌
- 琅
- 炳
- 跋
- 瑾
- 哩
- 蔷
- 鳌
- 莺
- 诵
- 疙
- 吱
- 蓓
- 绎
- 匿
- 铮
- 怼
- 踹
- 嗅
- 焚
- 躯
- 蝇
- 橘
- 祟
- 辖
- 砂
- 韧
- 粪
- 诬
- 擒
- 黏
- 衔
- 溺
- 蜘
- 篷
- 贿
- 闫
- 焕
- 邢
- 兹
- 窖
- 旬
- 铸
- 咚
- 惭
- 佬
- 裴
- 裳
- 犀
- 弘
- 莓
- 钏
- 鄂
- 陋
- 伽
- 鞠
- 氪
- 垒
- 窜
- 橙
- 讳
- 甥
- 淫
- 拱
- 袱
- 坨
- 暧
- 渺
- 蕉
- 晗
- 茬
- 盔
- 妓
- 蚕
- 僻
- 朽
- 呛
- 挚
- 擎
- 绅
- 喇
- 鳄
- 巩
- 蜗
- 遛
- 俯
- 汹
- 猩
- 奠
- 钙
- 悍
- 躬
- 菱
- 翘
- 琉
- 虏
- 凄
- 稼
- 炕
- 皂
- 漱
- 斋
- 撂
- 敛
- 阮
- 芭
- 阀
- 缚
- 懦
- 亨
- 螃
- 侥
- 膨
- 筝
- 惟
- 黛
- 眯
- 茨
- 怠
- 辐
- 捎
- 殴
- 桓
- 瞄
- 冀
- 雍
- 霾
- 酵
- 檬
- 哺
- 裔
- 兢
- 麒
- 烹
- 绒
- 丐
- 娅
- 钞
- 垄
- 笛
- 赣
- 蕊
- 暮
- 噪
- 沮
- 肋
- 庇
- 橡
- 摁
- 痘
- 棘
- 拂
- 绷
- 刨
- 晾
- 蹬
- 鸥
- 璇
- 掠
- 瘟
- 俐
- 糙
- 骏
- 牡
- 撵
- 嘘
- 沥
- 庶
- 赁
- 喧
- 涡
- 瞳
- 迭
- 肘
- 颂
- 珑
- 觅
- 埔
- G
- 跤
- 朔
- 詹
- 梭
- 暇
- 惺
- 甸
- 怯
- 聋
- 赦
- 屉
- 闸
- 坝
- 吟
- 凸
- 拴
- 堤
- 矣
- 斧
- 呸
- 啼
- 韬
- 钧
- 坞
- 纺
- 氢
- 嵩
- 镯
- 髓
- 檐
- 涕
- 剁
- 稽
- 烨
- 钮
- 闽
- 仕
- 驯
- 吭
- 漓
- 眸
- 鞅
- 枢
- 煞
- 昕
- 畔
- 疹
- 矶
- 呱
- 熄
- 吏
- 泻
- 拙
- 蛤
- 禽
- 甫
- 厮
- 乍
- 蝉
- 撬
- 嘀
- 衅
- 鲨
- 萱
- 霹
- 旷
- 辫
- 坷
- 眶
- 蟆
- 呜
- 猬
- 嬷
- 萎
- 靶
- 雳
- 煲
- 溯
- 蚀
- 狈
- 滤
- 恙
- 瑛
- 栓
- 嫣
- 碟
- 祷
- 驿
- 犊
- 灼
- 哆
- 宛
- 榨
- 寥
- 翟
- 栗
- 滔
- 馋
- 杖
- 茉
- 饲
- 庐
- 隋
- 旱
- 崎
- 颅
- 焉
- 墩
- 篱
- 晟
- 扳
- 咎
- 竿
- 僚
- 溶
- 俏
- 霆
- 堕
- 冕
- 叩
- 绰
- 洽
- 襄
- 蛊
- 缅
- 侨
- 伶
- 蕴
- 酥
- 坂
- 拇
- 庚
- 卒
- 诛
- 禧
- 瓢
- 锯
- 扉
- 饷
- 诅
- 烘
- 浏
- 痰
- 榆
- 窥
- 鲸
- 捋
- 戎
- 笋
- 璋
- 诫
- 珈
- 癫
- 囤
- 厥
- 癖
- 翩
- 芹
- 匣
- 噬
- 栖
- 蝎
- 锄
- 玺
- 疮
- 缕
- 猥
- 槿
- 蔑
- 汝
- 珂
- 撮
- 坪
- 蒲
- 倚
- 嗷
- 撰
- 荧
- 芙
- 豚
- 筱
- 敖
- 孵
- 猝
- D
- 弈
- 徊
- 辗
- 赘
- 徘
- 烙
- 娲
- 嚎
- 迢
- 绥
- 羁
- 屌
- 铅
- 澎
- S
- 嬛
- 晦
- 煽
- 逾
- 饵
- 虞
- 筐
- 哧
- 抒
- 醇
- 祀
- 瑕
- 岐
- 潼
- 惚
- C
- 苑
- 靡
- 菠
- 赡
- 惰
- 梓
- 铛
- 澈
- 莞
- 呕
- 驭
- 邝
- 砰
- 轼
- 窒
- 慷
- 绞
- 絮
- 虔
- 惮
- 柬
- 嗡
- 拣
- 羲
- 蹋
- 隘
- 帜
- 卤
- 雌
- 唾
- 邹
- 俑
- 碾
- 婪
- 咏
- 粟
- 崭
- 钝
- 彝
- 陡
- 谛
- 秤
- 磅
- 淌
- 炊
- 鲤
- 羹
- 殉
- 曰
- 萤
- 阐
- 鬟
- 拭
- T
- 沁
- 滇
- 梧
- 烁
- 瞻
- 淤
- 凹
- 撸
- 棕
- 腌
- 缪
- 祺
- 痊
- 忑
- 柠
- 矜
- 忐
- 讹
- 瀚
- 尧
- 昼
- 芊
- 憨
- 鳞
- 匮
- 鸳
- 鸯
- 湃
- 屿
- 馍
- 沽
- 栾
- 蝠
- 窘
- 绛
- 巍
- 悯
- 焊
- 谴
- 浊
- 娴
- 畴
- 湛
- 螂
- 韭
- 哮
- 拷
- 攥
- 凛
- 颓
- 恺
- 蝙
- 襟
- 粑
- 洼
- 笃
- 渝
- 骁
- 殃
- 酌
- 乒
- 臊
- 疵
- 诧
- 谬
- 锈
- 袄
- 膛
- 瘸
- 嫖
- 梢
- 沼
- 棱
- 嚓
- 耸
- 喳
- 舵
- 橱
- 涮
- 檀
- 瞩
- 腑
- 岑
- 痪
- 墟
- 蔚
- 捍
- 徙
- 棣
- 猖
- 掷
- 恬
- 嫦
- 噔
- 饪
- 掂
- 恤
- 叱
- 芷
- 弩
- 楷
- 镶
- 茧
- 诠
- 咙
- 匡
- 擂
- 亵
- 杞
- 乓
- 渤
- 藉
- 憔
- 渭
- 禹
- 睐
- 趾
- 抉
- 悴
- 忒
- 茸
- 纬
- 懊
- 浚
- 溅
- 遏
- 琛
- 靴
- 戮
- 翎
- 谕
- 濒
- 锵
- 嬉
- 籽
- 殆
- 叼
- 苔
- 灏
- 嗖
- 俪
- 亢
- 冶
- 嗜
- 磋
- 汀
- 讪
- 萃
- 菁
- 镑
- 紊
- 脯
- 缆
- 哉
- 赂
- 婊
- B
- 蕃
- 迄
- 蜓
- 舜
- 嚏
- 昱
- 黔
- 犟
- 汐
- 昵
- 嗣
- 唆
- 蛾
- 黯
- 绯
- 瀑
- 憬
- 狩
- 掖
- 崴
- 褪
- 髦
- 酝
- 弧
- 咄
- 吝
- 馄
- 娩
- 窿
- 蜻
- 袒
- 玮
- 阙
- 篡
- 邯
- 朦
- 邑
- 喃
- 粽
- 捶
- 嫔
- 钗
- 穗
- 骼
- 胭
- 寐
- 噎
- M
- 碱
- 荤
- 笙
- 矢
- 芥
- 廓
- 扼
- 厄
- 毋
- 糯
- 惋
- 纶
- 碜
- 胧
- 懿
- 偃
- 沏
- 痹
- 慑
- 鹦
- 娠
- 铐
- 绢
- 傀
- 孜
- 饨
- 儡
- 孰
- 焱
- 峭
- 伎
- 幌
- 椰
- 譬
- 藕
- 坍
- 铝
- 鞍
- 蘸
- 貂
- 猿
- 炙
- 琊
- 峙
- 硝
- 幂
- 钰
- 眩
- 亥
- 簇
- 鹉
- 睫
- 斟
- 簧
- 颐
- 薰
- 癞
- 祛
- 燎
- 缎
- 簸
- 咣
- 绚
- 簿
- 邋
- 嵌
- 肮
- 稷
- 辍
- 闵
- 枸
- 撅
- 曙
- 苇
- K
- 悼
- 汶
- 匕
- 皖
- 腮
- 琶
- 汲
- 鼹
- 礁
- 颊
- 怔
- 汕
- 喀
- 砌
- 釜
- 畸
- 鹃
- 峨
- 奄
- 骡
- 斐
- 芈
- 莘
- 蟑
- 荔
- 缇
- 犒
- 宓
- 汾
- 沌
- 宦
- 憧
- 咤
- 吆
- 攘
- 漩
- 梵
- 阂
- 吒
- 芜
- 缔
- 秧
- 翊
- 晌
- 剐
- 蜕
- 芋
- 彷
- 牟
- 诲
- 臀
- 徨
- Q
- 杵
- 荫
- 榄
- 蹿
- 豌
- 迂
- 琵
- 拗
- 帷
- 楞
- 嘶
- 橄
- 胺
- 圭
- 砚
- 藻
- 凋
- 啄
- 褒
- 嗝
- 殡
- 嫡
- 恃
- 濡
- 缜
- 孺
- 泸
- 妊
- 衩
- 驹
- 榻
- 腆
- 鹂
- 箍
- 璧
- 熔
- 悚
- 遢
- 弛
- 诋
- 羚
- 鹭
- 嘚
- 骸
- 瘪
- 铠
- 瞿
- 屹
- 邸
- 痨
- 辘
- 浒
- 忏
- 钊
- 潦
- 怅
- 肴
- 蚯
- 胚
- 茵
- 蚓
- 戬
- 瘀
- 翡
- 恪
- 卉
- 蝌
- 雏
- 祯
- 谏
- 蚪
- 钵
- 馊
- 嗒
- 犁
- 寅
- V
- 锥
- 娼
- 晖
- 啬
- 纣
- 淆
- 丙
- 夯
- 竣
- 褚
- 褥
- 轧
- 氨
- 褂
- 钳
- 轲
- 竺
- 疡
- 淞
- 胤
- 摹
- 鳅
- 珀
- 偕
- 匾
- 觑
- 扈
- 傣
- 绫
- 枷
- 阑
- 柚
- 烊
- 怦
- 腼
- 珺
- 缀
- 裘
- 碉
- 峪
- 俸
- 羯
- 姊
- 疟
- 砺
- 盎
- 嘣
- 釉
- 溥
- 熠
- 垢
- 摞
- 哽
- 槟
- 囧
- 胰
- 遁
- 痞
- 熹
- 忡
- 稠
- 顷
- 瑚
- 卯
- 渎
- 炅
- 褶
- 烽
- 瞑
- 嘈
- 硫
- 壹
- 悖
- 酪
- 跺
- 阜
- 帛
- 漪
- 蝗
- 迦
- 蟒
- 咀
- 谤
- 睬
- 辕
- 绮
- 搀
- 裆
- 鳖
- 囡
- 羔
- 痣
- 滕
- 佘
- 樟
- 韶
- 霓
- 劾
- 赈
- 唏
- 闰
- 脐
- 沓
- 瓮
- 篓
- 笠
- 暄
- 涅
- 诽
- 洱
- 栅
- 蚱
- 囔
- 攸
- 酣
- 阪
- 榕
- 骇
- 婧
- 陨
- 憎
- 沂
- 磷
- 壕
- 醺
- 惬
- 璀
- 璨
- 喋
- P
- 炽
- 瘁
- 羿
- 褐
- 簪
- 冽
- 驮
- 芮
- 辄
- 咆
- 渍
- 觐
- 炷
- 蛰
- 驷
- 帚
- 蜷
- O
- X
- 邂
- 逅
- 缭
- 秽
- 琰
- 龌
- 龊
- 俨
- 涟
- 噼
- 掇
- 哔
- 炬
- 佯
- 粱
- 霁
- 鱿
- 夭
- 擀
- 陇
- 瞥
- 壑
- 盹
- 馁
- 蚌
- 焖
- 蛟
- 囱
- 蚝
- 抿
- 脓
- 蒿
- 飓
- 渲
- 宸
- 酗
- 荻
- 缥
- 弑
- 偎
- 宕
- 耘
- 瞌
- 瘴
- 溉
- 涝
- 咿
- 垛
- 垦
- 缈
- 苞
- 惆
- 汛
- 鹑
- 町
- 抡
- 慵
- 浣
- 耙
- 砥
- 噱
- 孬
- 札
- 弼
- 酋
- 镳
- 萦
- 泾
- 挞
- 钾
- 讷
- 圃
- 舶
- 穹
- 戾
- 汴
- 锂
- 昀
- 镀
- 眺
- 捺
- 猕
- 阚
- 骋
- 悸
- 蜚
- 咩
- 讥
- 篆
- 鸠
- 哐
- 锚
- 幢
- 翱
- 螳
- 徇
- 踞
- 蔗
- 蔼
- 漉
- 衲
- N
- 漳
- 枭
- 漾
- 歆
- 烬
- 曳
- 岌
- 孚
- 戛
- 呲
- 箫
- 娓
- 桨
- 涓
- 獭
- 芃
- 摒
- 戍
- 踝
- 轱
- 沱
- 锢
- 堰
- 抨
- 昙
- 鹌
- 蔻
- 迸
- 泯
- 龈
- 痔
- 骛
- 淄
- 泵
- 烯
- 蔫
- F
- 胥
- 忱
- 纫
- 搪
- 茎
- 暨
- 泞
- 踵
- 璞
- 佗
- 荃
- 鬓
- 蚣
- 罔
- 臆
- 贻
- 橇
- 麓
- 槌
- 琥
- I
- 纥
- 薅
- 樵
- 苓
- 熨
- 钨
- 骞
- 诣
- 涤
- 踊
- 醛
- 碴
- 蹴
- 缤
- 赊
- 岖
- 戊
- 禺
- 坯
- 戟
- 楂
- 隅
- 酶
- 邃
- 蛀
- 皎
- 炯
- 垣
- 锹
- 镰
- 夙
- 甬
- 叵
- 茁
- 珞
- 妲
- 涸
- 兀
- 嘤
- 谙
- 噗
- 榔
- 稣
- 剽
- 奚
- 啕
- 袅
- 讧
- 钠
- 怄
- 晤
- 肛
- 氰
- 迥
- 唰
- 诩
- 籁
- 砒
- 谩
- 诟
- 斓
- 泷
- 幡
- 爻
- 痫
- 眈
- 漕
- 惘
- 挎
- 噶
- 喱
- 氯
- U
- 跆
- 嗤
- 锏
- 睽
- 缮
- 蟋
- 蠕
- 扪
- 狞
- 飒
- 吮
- 弋
- 奘
- 蟠
- 梆
- 拈
- 帧
- 蟀
- 胯
- 掳
- 蝈
- 帼
- 瞰
- 嵇
- 阉
- 篝
- 笆
- 亘
- L
- 喔
- 愕
- 谚
- 轶
- 岱
- 丕
- 婕
- 羌
- 毡
- 呻
- 鼾
- 蜥
- 偌
- 庵
- 敝
- 蛐
- 麝
- 鞘
- 拮
- 涣
- 葆
- 雹
- 踌
- 蜈
- 馥
- 跻
- 狰
- 桀
- 毗
- 皿
- 缨
- 磐
- 啾
- 牒
- 缰
- 躇
- 踮
- 糠
- 嗲
- 刽
- 咫
- 殇
- 瀛
- 胱
- 炀
- 虱
- 砾
- 獒
- 涎
- 袤
- 鄱
- 瓯
- 锭
- 塾
- 蹉
- 珏
- 豺
- 锌
- 蜿
- 牦
- 瓒
- 莆
- 蜴
- 氮
- 跎
- 咛
- 骜
- 郸
- 搐
- 堑
- 涞
- 寰
- 跛
- 鸵
- 毂
- 妩
- 铤
- 薏
- 烩
- 遐
- 煦
- 仃
- 髅
- 酮
- 榷
- 腋
- 珩
- 臃
- 愫
- 蜒
- 荼
- 侬
- 淬
- 婵
- 偻
- 焯
- 骊
- 恻
- 濮
- 泱
- 庖
- 惴
- 鲫
- 硌
- 肓
- 芪
- 礴
- 磺
- 腱
- 冢
- 谪
- 骷
- 哏
- 腩
- 蓦
- 焙
- 桢
- 阖
- 睾
- 疱
- 郴
- 铿
- 铡
- 祉
- 跄
- 桦
- 椭
- 拄
- 皙
- 膈
- 裱
- 髋
- 伢
- 罹
- 鳍
- 赝
- 嬴
- 痤
- 藿
- 镐
- 铎
- 瘠
- 簌
- 杳
- 铢
- 阡
- 忤
- 舀
- 悻
- 媲
- 茗
- 湍
- 舫
- 瘙
- 瞟
- 擞
- 荀
- 刍
- J
- 潍
- 莴
- 斛
- 郦
- 栩
- 绾
- 蕙
- 黜
- 湄
- 藓
- 躏
- 锱
- 捻
- 佼
- 砝
- E
- 罡
- 忻
- 鹜
- 滟
- 傥
- 蛳
- W
- 铀
- 魇
- 觎
- 蹂
- 佞
- 诃
- 灞
- 镣
- 痱
- 侏
- 峦
- 榛
- 饽
- 龋
- 嗔
- 芍
- 椿
- 璎
- 渥
- 蟾
- 骰
- 吠
- 挛
- 倜
- 鳝
- 糜
- 噢
- 黝
- 藐
- 绡
- 掣
- 鳗
- 璜
- 犷
- 痉
- 膺
- 罄
- 阄
- 纨
- 纭
- 彗
- 嵘
- 埠
- 潢
- 桔
- 耷
- 逵
- 诓
- 怵
- 蚤
- 苯
- 邈
- 谑
- 颌
- 珐
- 踱
- 髻
- 倏
- 啷
- 篑
- 冗
- 蹶
- 荥
- 涧
- 镂
- 踉
- 呷
- 衢
- 荟
- 箴
- 桧
- 恿
- 坳
- 瑙
- 珅
- 莅
- 膘
- 宥
- 氟
- 秆
- 诙
- 蹑
- 茴
- 翳
- 渚
- H
- 唁
- 诿
- 窈
- 窕
- 膻
- 荨
- 蛔
- 筵
- 钛
- 獾
- 琏
- 箩
- 栀
- 隼
- 煸
- 罂
- 蛎
- 咂
- 谗
- 颦
- 佝
- 苣
- 搡
- 仄
- 垠
- 濂
- 泗
- 亟
- 蔺
- 蛆
- 霏
- 榈
- 裟
- 瑁
- 酚
- 蝼
- 怆
- 犄
- 沣
- 揖
- 斡
- 刎
- 鲟
- 峒
- 瞭
- 晁
- 袈
- 蓟
- 镁
- 骥
- 掸
- 玳
- 娑
- 馀
- 跚
- 槃
- 缄
- 猢
- 粕
- 隍
- 佃
- 獗
- 唢
- 菏
- 酰
- 腚
- 笈
- 哙
- 孢
- 飕
- 嘹
- 茱
- 蹒
- 殓
- 柩
- 谀
- 姣
- 戌
- 柑
- 粼
- 淅
- 啧
- 盅
- 鼬
- 啜
- 绉
- 咻
- 锲
- 铆
- Y
- 螨
- 茯
- 憩
- 臼
- 谄
- 讴
- 濠
- 雎
- 噻
- 淦
- 懋
- 尕
- 氦
- 褛
- 颉
- 喆
- 铬
- 褴
- 燮
- 銮
- 侗
- 蹙
- 煜
- 邺
- 锃
- 麋
- 矗
- 娆
- 匐
- 噌
- 潸
- 碘
- 浔
- 檄
- 皈
- 铂
- 遨
- 炜
- 曜
- 饴
- 舷
- 胫
- 叟
- 祎
- 沅
- 潺
- 楣
- 埂
- 瞠
- 幔
- 稞
- 抻
- 匝
- 幄
- 殒
- 瑭
- 袂
- 囫
- 瓴
- 攫
- 鲈
- 箔
- 哝
- 馗
- 蜍
- 痧
- 脘
- 姘
- 苒
- 缢
- 觞
- 蛹
- 饬
- 胄
- 筏
- 鸾
- 儆
- 痿
- 矬
- 酊
- 纾
- 铖
- 荏
- 掬
- 膑
- 贮
- 觊
- 囵
- 泓
- 搔
- 汞
- 蚩
- 婀
- 谧
- 恣
- 霎
- 饕
- 赅
- 鲶
- 梏
- 獠
- 俶
- 龛
- 桅
- 鹄
- 旌
- 鲲
- 姒
- 蠡
- 繇
- 祜
- 诨
- 汩
- 觥
- 孀
- R
- 谥
- 蕨
- 祐
- 榭
- 皑
- 纂
- 獐
- 覃
- 痂
- 孑
- 砧
- 圩
- 桎
- 啵
- 葚
- 嗫
- 浃
- 荠
- 阈
- 遴
- 枇
- 狒
- 秸
- 筠
- 硒
- 卞
- 玷
- 杈
- 狲
- 忿
- 俎
- 拚
- 颍
- 睢
- 颧
- 滦
- 霭
- 雉
- 毽
- 蓑
- 歙
- 鳃
- 鹬
- 墉
- 楔
- 舐
- 绔
- 弭
- 馏
- 挝
- 奂
- 嘭
- 忪
- 箕
- 诌
- 谒
- 颚
- 滂
- 醍
- 洵
- 鹫
- 虢
- 苋
- 玥
- 臾
- 蹩
- Z
- 杷
- 痍
- 酉
- 疸
- 鄢
- 垩
- 烷
- 湮
- 钎
- 樽
- 旮
- 葭
- 邬
- 缱
- 糍
- 亳
- 咦
- 苷
- 伉
- 隽
- 伫
- 聒
- 匍
- 飚
- 桠
- 睑
- 脍
- 焘
- 谶
- 赳
- 萸
- 讣
- 疽
- 臧
- 巽
- 毓
- 鸢
- 纰
- 啐
- 噙
- 舛
- 敕
- 醐
- 痢
- 嚯
- 婺
- 勖
- 岷
- 溧
- 骅
- 犸
- 麾
- 嗟
- 诘
- 懑
- 貔
- 貅
- 啉
- 崂
- 鸩
- 镭
- 绻
- 逑
- 煨
- 褓
- 姝
- 藜
- 溟
- 儋
- 谡
- 欸
- 郢
- 荚
- 疝
- 遽
- 陂
- 饯
- 孪
- 巳
- 荞
- 泔
- 岿
- 谆
- 镍
- 洙
- 佻
- 盂
- 睨
- 铄
- 餮
- 酯
- 癣
- 浜
- 酩
- 焗
- 挲
- 鬃
- 鲠
- 仞
- 诰
- 谔
- 胛
- 萼
- 涿
- 莠
- 珲
- 旯
- 蜢
- 黍
- 肽
- 涪
- 髡
- 氙
- 陉
- 鬶
- 侩
- 糅
- 氤
- 芾
- 砷
- 鳕
- 钣
- 锒
- 闱
- 铵
- 镊
- 玑
- 砀
- 癜
- 颔
- 楹
- 螈
- 醚
- 琮
- 铩
- 笄
- 瓤
- 裨
- 潋
- 悌
- 聿
- 祢
- 郜
- 汨
- 棂
- 氲
- 嶙
- 聩
- 菅
- 腧
- 妯
- 龇
- 谲
- 耄
- 耋
- 囿
- 黢
- 揄
- 鲇
- 仝
- 個
- 忖
- 峋
- 揶
- 迩
- 诳
- 踽
- 骐
- 趸
- 颞
- 撺
- 辇
- 猷
- 铉
- 羸
- 徜
- 徉
- 襁
- 镌
- 孱
- 钒
- 铣
- 呤
- 遑
- 俾
- 皋
- 笕
- 笺
- 趔
- 趄
- 辋
- 鄞
- 殚
- 岫
- 跬
- 嘌
- 苻
- 绶
- 郅
- 瑄
- 萋
- 蘼
- 湎
- 砣
- 钜
- 捭
- 喹
- 恹
- 娌
- 螯
- 锰
- 祚
- 阆
- 矾
- 厩
- 龅
- 炝
- 黠
- 妁
- 濑
- 鞑
- 柒
- 滁
- 淖
- 鸬
- 鬣
- 晔
- 恸
- 赓
- 侉
- 溏
- 還
- 珮
- 鸨
- 嚅
- 笤
- 靥
- 啮
- 滓
- 俚
- 唳
- 苜
- 蓿
- 鹚
- 耦
- 莜
- 麸
- 粳
- 綦
- 盱
- 噤
- 遒
- 玟
- 魍
- 魉
- 旖
- 栉
- 锷
- 醴
- 泮
- 恁
- 甾
- 琬
- 丶
- 擤
- 桉
- 踟
- 誊
- 谟
- 澧
- 玖
- 畿
- 顼
- 兖
- 贰
- 茏
- 愎
- 豇
- 旎
- 蹰
- 蜃
- 屐
- 芡
- 鎏
- 癸
- 卅
- 枥
- 陟
- 琨
- 粝
- 掮
- 妪
- 姹
- 鏖
- 捯
- 钴
- 竽
- 恽
- 佰
- 胗
- 崧
- 磴
- 绺
- 鳏
- 槁
- 啖
- 矍
- 徕
- 忾
- 烃
- 喏
- 囹
- 圄
- 砭
- 邕
- 犍
- 鸮
- 剜
- 琚
- 瘢
- 魑
- 眦
- 锉
- 柘
- 痦
- 苕
- 牯
- 湟
- 厝
- 濛
- 赭
- 馐
- 蜇
- 嶂
- 贲
- 靼
- 臬
- 陲
- 潞
- 芩
- 腓
- 锨
- 寮
- 於
- 洇
- 愠
- 疖
- 鹧
- 鸪
- 茕
- 戕
- 壬
- 庾
- 莒
- 鹈
- 鹕
- 蠹
- 勐
- 疥
- 辎
- 耒
- 嗬
- 沔
- 睥
- 邙
- 篾
- 揩
- 肱
- 胍
- 磬
- 菟
- 豢
- 垓
- 唑
- 剌
- 阗
- 汜
- 佤
- 璟
- 麽
- 鬻
- 怏
- 蕤
- 茭
- 睚
- 淙
- 牍
- 榫
- 濯
- 稹
- 媾
- 悱
- 骶
- 蛭
- 鞣
- 椁
- 槊
- 擢
- 滢
- 佚
- 菡
- 沭
- 扦
- 镆
- 闾
- 缛
- 窠
- 疣
- 骠
- 俅
- 喙
- 蹼
- 硼
- 黩
- 腴
- 醮
- 邛
- 漯
- 豉
- 昶
- 刿
- 凇
- 鲅
- 舸
- 邳
- 俟
- 铰
- 翌
- 鳟
- 葳
- 寤
- 碣
- 秭
- 揠
- 熵
- 燧
- 靛
- 嵊
- 窨
- 鹗
- 芎
- 颢
- 佶
- 骢
- 圜
- 岘
- 燊
- 壅
- 畲
- 萘
- 煊
- 粲
- 倌
- 嗳
- 橹
- 椽
- 夔
- 鲑
- 赧
- 殄
- 沆
- 瀣
- 廪
- 舢
- 狍
- 挈
- 鹳
- 蚜
- 彧
- 羟
- 盥
- 镛
- 痈
- 蜊
- 皲
- 篦
- 喑
- 鲢
- 邡
- 蕲
- 僳
- 秣
- 蛉
- 讫
- 祗
- 鹩
- 撷
- 狎
- 郓
- 镕
- 榉
- 鲷
- 娣
- 淝
- 桷
- 镉
- 郫
- 髌
- 醪
- 僭
- 伧
- 嵬
- 苁
- 鹘
- 徭
- 歃
- 阕
- 鸱
- 貉
- 闳
- 坻
- 缙
- 媪
- 莨
- 菪
- 绦
- 恫
- 崆
- 喟
- 葺
- 逶
- 迤
- 骈
- 馔
- 苎
- 溘
- 垭
- 樯
- 诤
- 魃
- 搽
- 绀
- 蚴
- 澶
- 蒺
- 罘
- 眙
- 怍
- 來
- 荪
- 贶
- 亓
- 唻
- 畈
- 谌
- 芨
- 鲀
- 窸
- 窣
- 荜
- 楫
- 衮
- 趵
- 勰
- 髯
- 椴
- 缶
- 荸
- 秫
- 菖
- 甙
- 翦
- 椟
- 峤
- 掼
- 謇
- 洄
- 鄯
- 妗
- 浐
- 颀
- 箸
- 畦
- 痼
- 橛
- 鲛
- 蝾
- 愍
- 蒹
- 嘁
- 韪
- 劭
- 垅
- 暹
- 僮
- 稗
- 筚
- 煅
- 嬅
- 蜉
- 骝
- 碚
- 冼
- 吶
- 洹
- 郧
- 炴
- 绌
- 泠
- 呓
- 簋
- 溴
- 篁
- 仟
- 锟
- 羧
- 鹞
- 嘬
- 渌
- 笸
- 霰
- 稔
- 钡
- 齁
- 胪
- 衾
- 尻
- 洮
- 蘅
- 鲳
- 殂
- 腭
- 涔
- 蝣
- 孳
- 澍
- 钼
- 蒡
- 枳
- 渑
- 茼
- 馕
- 埙
- 珣
- 菘
- 邰
- 樾
- 铱
- 鳐
- 唔
- 篙
- 箜
- 篌
- 耆
- 啫
- 枞
- 杼
- 嵋
- 舂
- 娉
- 铨
- 崃
- 笳
- 邗
- 逡
- 僖
- 泫
- 疴
- 捱
- 醅
- 堇
- 肄
- 荇
- 虬
- 谯
- 酞
- 桡
- 艮
- 膦
- 艹
- 啻
- 滏
- 茆
- 圪
- 磡
- 麼
- 闼
- 郯
- 仡
- 氐
- 贽
- 俦
- 蓖
- 跹
- 帏
- 氅
- 趿
- 暝
- 缟
- 棹
- 滹
- 毖
- 蝰
- 虻
- 缫
- 诮
- 闩
- ○
- 潴
- 樨
- 瘘
- 襦
- 妤
- 郾
- 衿
- 鸷
- 旰
- 镢
- 傈
- 倨
- 笏
- 蒽
- 醌
- 驽
- 浠
- 涠
- 蓁
- 柞
- 钺
- 蜮
- 诂
- 徵
- 锆
- 椋
- 叻
- 廿
- 藁
- 乜
- 摈
- 這
- 茌
- 辊
- 岬
- 郇
- 杓
- 轳
- 酎
- 蟥
- 時
- 镒
- 蚬
- 澹
- 赟
- 後
- 怿
- 箐
- 囍
- 揆
- 蹁
- 鬄
- 苫
- 蕖
- 卺
- 辔
- 偈
- 俳
- 吲
- 哚
- 瘆
- 蕞
- 笞
- 氩
- 嫘
- 墁
- 帔
- 褡
- 裢
- 乩
- 褊
- 颏
- 喒
- 錾
- 皌
- 戗
- 唪
- 啭
- 伥
- 茔
- 斫
- 齉
- 仵
- 赉
- 吡
- 啶
- 蹇
- 螅
- 汊
- 湓
- 凫
- 珙
- 腈
- 洌
- Ω
- 憷
- 跶
- 抔
- 濞
- 崤
- 殍
- 浥
- 铳
- 酽
- 馑
- 髂
- 隗
- 韫
- 晷
- 诒
- 埭
- 鹪
- 蕻
- 昃
- 瓠
- 萁
- 癔
- 怩
- 疳
- 跖
- 疔
- 簟
- 汆
- 疠
- 卟
- 墒
- 穰
- 铍
- 珥
- 钤
- 隻
- 樓
- 墎
- 鳜
- 沒
- 岀
- 杪
- 単
- 鲧
- 呋
- 彀
- 祇
- 豸
- 胴
- 唷
- 丨
- 燚
- 麴
- 觇
- 缑
- 橐
- 蚡
- 朊
- 俣
- 垡
- <sos/eos>
init: null
input_size: null
ctc_conf:
ignore_nan_grad: true
model_conf:
ctc_weight: 0.3
lsm_weight: 0.1
length_normalized_loss: false
use_preprocessor: true
use_preprocessor_valid: false
token_type: char
bpemodel: null
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_utt_prefix: null
rir_apply_prob: 1.0
noise_scp: null
noise_utt_prefix: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: default
frontend_conf:
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 30
num_freq_mask: 2
apply_time_mask: true
time_mask_width_range:
- 0
- 40
num_time_mask: 2
normalize: global_mvn
normalize_conf:
stats_file: exp/asr_stats_raw_zh_char/train/feats_stats.npz
preencoder: null
preencoder_conf: {}
encoder: conformer
encoder_conf:
output_size: 512
attention_heads: 8
linear_units: 2048
num_blocks: 12
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.0
input_layer: conv2d
normalize_before: true
rel_pos_type: latest
pos_enc_layer_type: rel_pos
selfattention_layer_type: rel_selfattn
activation_type: swish
macaron_style: true
use_cnn_module: true
cnn_module_kernel: 15
postencoder: null
postencoder_conf: {}
decoder: transformer
decoder_conf:
attention_heads: 8
linear_units: 2048
num_blocks: 6
dropout_rate: 0.1
positional_dropout_rate: 0.1
self_attention_dropout_rate: 0.0
src_attention_dropout_rate: 0.0
required:
- output_dir
- token_list
version: 0.10.2a1
distributed: true
```
</details>
## LM config
<details><summary>expand</summary>
```
NONE
```
</details>
|
{"language": "zh", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["wenetspeech"]}
|
espnet/pengcheng_guo_wenetspeech_asr_train_asr_raw_zh_char
| null |
[
"espnet",
"audio",
"automatic-speech-recognition",
"zh",
"dataset:wenetspeech",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null |
espnet
|
## ESPnet2 ASR model
### `espnet/roshansh_how2_asr_raw_ft_sum_valid.acc`
This model was trained by roshansh-cmu using how2 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout e6f42a9783a5d9eba0687c19417f933e890722d7
pip install -e .
cd egs2/how2/sum1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/roshansh_how2_asr_raw_ft_sum_valid.acc
```
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Mon Feb 7 15:24:21 EST 2022`
- python version: `3.8.12 (default, Oct 12 2021, 13:49:34) [GCC 7.5.0]`
- espnet version: `espnet 0.10.6a1`
- pytorch version: `pytorch 1.10.1`
- Git hash: `04561cdf3b6c3bc1d51edb04c93b953759ef551d`
- Commit date: `Mon Feb 7 09:06:12 2022 -0500`
## asr_raw_ft_sum
|dataset|Snt|Wrd|ROUGE-1|ROUGE-2|ROUGE-L|METEOR|BERTScore|
|---|---|---|---|---|---|---|---|
|decode_sum_asr_model_valid.acc.best/dev5_test_sum|2127|69795|60.72|44.7|56.1|29.36|91.53|
## ASR config
<details><summary>expand</summary>
```
config: conf/train_asr_conformer_vid_lf.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_raw_ft_sum
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: 8
dist_rank: 0
local_rank: 0
dist_master_addr: localhost
dist_master_port: 45875
dist_launcher: null
multiprocessing_distributed: true
unused_parameters: true
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 100
patience: 10
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 10
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 10
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: 5000
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param:
- exp/asr_raw_utt_conformer/valid.acc.ave_10best.pth:::ctc
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 60000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_vid_sum/train/speech_shape
- exp/asr_stats_raw_vid_sum/train/text_shape.bpe
valid_shape_file:
- exp/asr_stats_raw_vid_sum/valid/speech_shape
- exp/asr_stats_raw_vid_sum/valid/text_shape.bpe
batch_type: length
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/tr_2000h_sum_trim/wav.scp
- speech
- sound
- - dump/raw/tr_2000h_sum_trim/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/cv05_sum_trim/wav.scp
- speech
- sound
- - dump/raw/cv05_sum_trim/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.001
scheduler: reducelronplateau
scheduler_conf:
mode: min
factor: 0.5
patience: 1
token_list:
- <blank>
- <unk>
- '[hes]'
- S
- ▁THE
- ▁TO
- ''''
- ▁AND
- ▁YOU
- ▁A
- ▁IT
- T
- ▁THAT
- ▁OF
- ▁I
- ▁IS
- RE
- ▁IN
- ING
- ▁WE
- M
- ▁GOING
- ▁SO
- ▁THIS
- ▁YOUR
- ▁ON
- E
- D
- ▁BE
- ▁CAN
- N
- Y
- O
- ER
- ▁HAVE
- ▁JUST
- ▁FOR
- ▁WITH
- ▁DO
- ED
- ▁ARE
- ▁WANT
- ▁UP
- R
- LL
- P
- ▁
- L
- B
- ▁IF
- C
- ▁ONE
- ▁S
- ▁OR
- A
- ▁GO
- ▁LIKE
- ▁NOW
- ▁HERE
- VE
- LE
- U
- ▁GET
- ▁WHAT
- ▁OUT
- IN
- W
- ▁C
- ▁LITTLE
- ▁THERE
- LY
- ▁AS
- ▁MAKE
- I
- ▁THEY
- ▁MY
- K
- ▁THEN
- ▁BUT
- AL
- G
- ▁ALL
- OR
- ▁BACK
- ▁NOT
- ▁ABOUT
- ▁RIGHT
- ▁OUR
- EN
- ▁SOME
- ▁DOWN
- F
- ▁WHEN
- CH
- ▁F
- ▁HOW
- AR
- ▁WILL
- ▁RE
- CK
- ▁G
- ES
- CE
- ▁TAKE
- ▁AT
- ▁FROM
- ▁WAY
- TER
- ▁SEE
- RA
- ▁USE
- ▁REALLY
- RI
- TH
- ▁TWO
- ▁ME
- ▁VERY
- ▁E
- ▁B
- AT
- ▁THEM
- ▁DON
- ▁AN
- ▁BECAUSE
- ▁MORE
- RO
- H
- 'ON'
- LI
- ▁PUT
- ▁ST
- IL
- ▁BIT
- ▁START
- ▁NEED
- ▁INTO
- UR
- ▁TIME
- ▁OVER
- ▁W
- ▁DE
- ▁LOOK
- ▁THESE
- ▁LET
- ▁GOOD
- ▁ALSO
- AN
- ▁OFF
- ▁HE
- ▁KIND
- ▁SIDE
- ▁CO
- ▁SURE
- ▁AGAIN
- ▁MA
- ▁KNOW
- IT
- ▁WOULD
- IC
- ▁OTHER
- LA
- ▁P
- ▁WHICH
- '-'
- IR
- ▁LA
- ▁HAND
- EL
- ▁LOT
- ▁WHERE
- ▁THREE
- ▁PA
- ION
- LO
- ▁KEEP
- ▁SHOW
- ▁THING
- ▁FIRST
- TE
- ENT
- ATE
- ▁COME
- AD
- ▁GOT
- NG
- ▁NICE
- ▁T
- ET
- ▁MO
- ▁ANY
- ▁ACTUALLY
- ▁DIFFERENT
- ▁SE
- GE
- ▁WORK
- ▁THROUGH
- ▁O
- KE
- V
- ▁AROUND
- ▁BA
- PE
- ▁HI
- ▁BY
- SH
- ATION
- ▁SU
- ▁CA
- ▁D
- ▁LO
- ▁HAS
- ▁LI
- ▁PLAY
- Z
- ▁ADD
- ▁RO
- ▁TA
- AS
- ▁FOUR
- ▁CON
- ▁THOSE
- MP
- NE
- ▁SP
- UT
- ▁GIVE
- ▁WELL
- ▁BALL
- TING
- RY
- X
- ▁HO
- INE
- IVE
- ▁NEXT
- ▁PO
- ▁STEP
- ▁EVEN
- TION
- ▁MI
- MENT
- ▁CUT
- ▁BO
- ▁LINE
- ▁MUCH
- ▁THINGS
- ▁TALK
- UN
- ▁PART
- ▁WAS
- ▁FA
- ▁SOMETHING
- PP
- ANCE
- ND
- DI
- ▁RA
- AGE
- ▁SAME
- ▁EXPERT
- ▁DOING
- ▁LEFT
- IST
- ▁DI
- ▁NO
- RU
- ME
- TA
- UL
- TI
- ▁VILLAGE
- DE
- ERS
- ▁PEOPLE
- ▁TURN
- VER
- ▁FL
- ▁LEG
- ▁ONCE
- ▁COLOR
- ▁PULL
- ▁USING
- VI
- ▁WATER
- ▁SHE
- ▁TOP
- ▁OKAY
- ▁ANOTHER
- ▁THEIR
- ▁SAY
- URE
- ▁HA
- ▁IMPORTANT
- ▁PIECE
- ▁FOOT
- ▁TRA
- ▁SC
- ▁BODY
- ▁SET
- ▁POINT
- ▁HELP
- ▁TODAY
- ▁BRING
- ▁V
- ▁END
- MA
- ▁CH
- ▁MOST
- ▁K
- ▁AHEAD
- ▁HER
- OL
- ▁SA
- AM
- IES
- ▁THINK
- ▁NAME
- ▁TRY
- ▁MOVE
- ONE
- ▁LE
- ▁TOO
- TO
- UM
- ▁PLACE
- ▁COULD
- ▁FIND
- ▁FIVE
- ▁ALWAYS
- ID
- TY
- NT
- ▁FEEL
- ▁HEAD
- ▁THAN
- NA
- ▁EX
- ▁EYE
- ITY
- CI
- OP
- ▁SHOULD
- ▁MIGHT
- ▁HOLD
- ▁CAR
- AND
- ▁GREAT
- ▁RI
- ▁BU
- ▁HIGH
- ▁OPEN
- ▁BEFORE
- US
- ▁FRONT
- ▁LONG
- ▁TOGETHER
- NI
- ▁HAIR
- ▁LIGHT
- ▁TEN
- ▁HIT
- EST
- OUS
- ▁PRETTY
- ▁TYPE
- IP
- CO
- ▁FINGER
- ▁JO
- ▁UN
- ▁PRO
- ▁STRAIGHT
- ▁BEHALF
- ▁TI
- ▁SIX
- ▁CLEAN
- ▁DIS
- ▁DA
- ▁POSITION
- IGHT
- ACT
- ▁CHA
- ▁PE
- GG
- AP
- ▁MEAN
- ▁COMP
- FI
- ▁KNEE
- ▁CALLED
- ▁HANDS
- ▁PRE
- ▁FORWARD
- ▁AREA
- ANT
- ▁TE
- ▁WA
- ▁AFTER
- ▁SMALL
- ▁THROW
- ▁EVERY
- ▁SHOULDER
- NC
- PER
- ▁MAYBE
- ▁ABLE
- ▁BASICALLY
- ▁AM
- ▁READY
- ▁BOTTOM
- IE
- ▁HALF
- FF
- ▁BIG
- ▁EACH
- ▁PUSH
- ▁EIGHT
- ▁NEW
- ▁DONE
- ▁MAY
- ▁GETTING
- HO
- ▁HIS
- ▁HARD
- ▁CLOSE
- ALLY
- ▁SECOND
- ▁FEET
- ICAL
- ▁JA
- ▁PAINT
- ▁LEARN
- ▁SOUND
- HE
- ▁ROLL
- ▁ONLY
- ▁DOESN
- WA
- ▁DRAW
- ▁VI
- ▁DID
- ▁SHA
- ▁CENTER
- CU
- ▁CLIP
- ▁PI
- ▁CARD
- ▁INSIDE
- ▁PERSON
- ▁STILL
- ▁MAKING
- 'NO'
- ▁EVERYTHING
- .
- ▁FUN
- ARD
- ▁REMEMBER
- ▁AWAY
- ATED
- COM
- ▁SEVEN
- ▁BEEN
- ▁MANY
- ABLE
- ▁DAY
- ▁SIT
- IZE
- ▁REAL
- ▁HIP
- ▁BASIC
- ▁KICK
- ▁TU
- ATING
- ▁STICK
- ▁FLAT
- ▁WHO
- END
- HA
- ▁EXP
- ▁PICK
- ▁MIX
- ▁TRI
- ▁BI
- ▁WHOLE
- ▁STRETCH
- ▁BOTH
- ▁PROBABLY
- CA
- ▁HIM
- ▁STRING
- ▁EDGE
- ▁BASE
- ▁COMING
- UGH
- ▁LIFT
- ▁STA
- ▁WORKING
- ▁MU
- ▁QUICK
- ▁SOMETIMES
- ▁HAPPEN
- ▁YOURSELF
- ▁TALKING
- ▁DR
- ▁TELL
- ▁ANYTHING
- ▁BRA
- ▁LOOKING
- ▁SLOW
- ▁NE
- ▁STAND
- NER
- ▁COMES
- ▁GOES
- ISE
- BE
- ▁USED
- ▁UNDER
- ▁BETWEEN
- ▁HU
- ▁CREATE
- ▁NA
- ▁USUALLY
- ▁ARM
- ▁DRY
- ▁RUN
- LING
- ▁BRUSH
- ▁COVER
- ▁HEAR
- ▁DOES
- ▁STAY
- ▁EN
- ▁FOLD
- ▁CHANGE
- ▁LAST
- ▁EASY
- ▁US
- ▁PER
- ▁FACE
- ▁EAR
- ▁TIGHT
- ▁FE
- ▁PIN
- ▁MAN
- ▁BETTER
- ▁CALL
- ▁PRI
- ▁BEST
- ▁KI
- ▁COUPLE
- ▁WHILE
- ▁SHAPE
- ▁GAME
- IV
- ▁SHOT
- ▁PAPER
- ▁OWN
- ▁ALRIGHT
- ▁HAD
- TIC
- ▁BREATH
- ▁TOOL
- '2'
- ▁ENOUGH
- ▁COURSE
- ▁SKIN
- ▁SPIN
- ▁VA
- ▁ARMS
- ▁TEA
- ▁BREAK
- ▁DOG
- ▁1
- QUE
- ▁DROP
- ▁NUMBER
- IG
- ▁RED
- ▁NOTE
- ▁WEIGHT
- WARD
- ▁PLAYING
- ▁FINISH
- ▁MINUTE
- ▁R
- ▁PRESS
- ▁EITHER
- ▁CHE
- ▁PU
- BER
- ▁FEW
- ▁SIZE
- ▁MADE
- ▁LEAVE
- ▁GA
- ▁ALREADY
- ▁GUY
- ▁FAR
- ▁HOME
- ▁BAR
- UP
- ▁GRAB
- ▁MARK
- ▁WHITE
- ▁PROPER
- ▁CAUSE
- ▁OK
- ▁ART
- HI
- ▁SORT
- ▁EXERCISE
- ▁LOWER
- PORT
- ▁PLANT
- ▁BOARD
- ▁CASE
- ▁YEAR
- CENT
- ▁DU
- ▁CHECK
- ▁WHATEVER
- ▁OIL
- ▁IDEA
- ▁SIMPLE
- ▁PRACTICE
- ▁FAST
- '0'
- ▁CONTROL
- ▁J
- ▁KEY
- ▁MIDDLE
- ▁FULL
- ▁GLASS
- ▁OUTSIDE
- ▁LOW
- ▁REST
- ▁STUFF
- ▁ACT
- ▁UNTIL
- ▁BLACK
- ▁POP
- ▁CLICK
- ▁HOLE
- ▁Z
- ▁COUNT
- ▁POT
- ▁ALLOW
- ▁HAVING
- ▁TRYING
- ▁MUSCLE
- ▁GU
- ▁BOX
- ▁NOTICE
- ▁EXAMPLE
- UND
- ▁ALONG
- FUL
- ISH
- ▁STORE
- ▁LU
- ▁FLOOR
- ▁MOVING
- ▁LARGE
- ▁STOP
- ▁PH
- ▁WALK
- '5'
- ▁QU
- ▁TECHNIQUE
- ▁SOFT
- ▁GROUND
- ▁JUMP
- ▁JU
- ▁FILL
- ▁WHY
- ▁BUY
- ▁GREEN
- ▁WALL
- ▁HEEL
- NESS
- ▁LEVEL
- ▁UNDERNEATH
- ▁PATTERN
- ▁BEHIND
- ▁OLD
- ▁TIP
- ▁COMPLETE
- ▁WON
- ▁TEACH
- ▁FIT
- ▁NECK
- ▁REMOVE
- ▁TRICK
- ▁MOVEMENT
- ▁TOWARDS
- ▁PARTICULAR
- ▁CHI
- ▁EFFECT
- J
- ▁FREE
- ▁ACROSS
- ▁BEND
- ▁SAFE
- ▁SLIDE
- ▁PROBLEM
- ▁BLOCK
- ▁PAN
- ▁NATURAL
- ▁TOUCH
- ▁CHILD
- LINE
- ▁CROSS
- ▁REASON
- '4'
- ▁POWER
- ▁APPLY
- ▁FOLLOW
- ▁DESIGN
- ▁SPACE
- ▁ORDER
- ▁WOOD
- ▁RID
- '3'
- ▁COOK
- ▁BEGIN
- ▁WATCH
- ▁STYLE
- QUA
- ▁PRODUCT
- ▁TAKING
- ▁PUTTING
- ▁EXHALE
- ▁THOUGH
- ▁DEEP
- IAN
- ▁REACH
- ▁FOOD
- ▁ALMOST
- ▁COOL
- ▁SECTION
- ▁SAID
- ▁ANGLE
- ▁MUSIC
- ▁RELAX
- ▁CORNER
- ▁DARK
- ▁CHORD
- ▁ESPECIALLY
- ▁SCALE
- ▁WARM
- ▁WITHOUT
- ▁WHEEL
- ▁SEGMENT
- ▁TABLE
- ▁BOOK
- ▁PASS
- ▁ELBOW
- ▁ROUND
- ▁INHALE
- ▁SMOOTH
- ▁ROOM
- /
- ▁NINE
- ▁SHORT
- ▁MEASURE
- ▁LESS
- ▁TWIST
- ▁BALANCE
- ▁PROCESS
- ▁SWITCH
- ▁GENERAL
- ▁CLAY
- ▁CERTAIN
- ▁NEVER
- ▁BLUE
- ▁CUP
- ▁HOUSE
- ▁EXTRA
- ▁MOTION
- ▁PRESSURE
- ▁FIRE
- ▁SIMPLY
- ▁DOUBLE
- ▁TWENTY
- ▁CATCH
- ▁BECOME
- ▁BUILD
- ▁SPEED
- ▁TRANS
- ▁DRUM
- ▁CHEST
- ▁PICTURE
- ▁LENGTH
- ▁CONTINUE
- ▁COMFORTABLE
- ▁FISH
- ▁PHOTO
- ▁LOOSE
- ▁SKI
- ▁LIFE
- ▁DEGREE
- ▁OPTION
- ▁WORD
- ▁SHARP
- ▁SHOOT
- ▁FOUND
- ▁STRONG
- ▁QUITE
- ▁THIRD
- ▁GLUE
- ▁MIND
- ▁DEFINITELY
- ▁EASIER
- GRAPH
- ▁HOOK
- ▁CLEAR
- ▁POSE
- ▁BUTTON
- ▁CHOOSE
- ▁THICK
- ▁SYSTEM
- ▁PERFECT
- ▁BEAUTIFUL
- ▁SPOT
- ▁GROW
- ▁SIGN
- ▁ELSE
- ▁CONNECT
- ▁SELECT
- ▁PUNCH
- ▁DIRECTION
- ▁WRAP
- ▁RELEASE
- QUI
- SIDE
- ▁CAREFUL
- ▁VIDEO
- ▁INSTEAD
- ▁CIRCLE
- ▁WIRE
- ▁NOSE
- ▁AMOUNT
- ▁FOCUS
- ▁NORMAL
- ▁MAJOR
- ▁WHETHER
- ▁SURFACE
- ▁THUMB
- ▁DRIVE
- ▁SCREW
- ▁POSSIBLE
- ▁OBVIOUSLY
- ▁COMMON
- ▁REGULAR
- ▁ADJUST
- ▁WIDE
- ▁BLADE
- ▁FRET
- ▁RECOMMEND
- ▁BOWL
- BOARD
- ▁IMAGE
- ▁DEPENDING
- ▁PROTECT
- ▁CLOTH
- ▁HEALTH
- ▁WRIST
- ▁CLUB
- ▁DRINK
- ▁SINCE
- ▁FRIEND
- '00'
- ▁RUNNING
- ▁ITSELF
- ▁RECORD
- ▁SWING
- ▁DIRECT
- ▁MATERIAL
- ▁YO
- ▁LEAST
- ▁EXACTLY
- ▁BEGINNING
- ▁SLIGHTLY
- ▁TREAT
- ▁CAMERA
- ▁QUARTER
- ▁WINDOW
- '8'
- ▁SOMEBODY
- ▁BURN
- ▁DEMONSTRATE
- ▁DIFFERENCE
- ▁COMPUTER
- IBLE
- ▁SHOE
- ▁PERFORM
- ▁SQUARE
- ▁CONSIDER
- ▁DRILL
- ▁TEXT
- ▁FILE
- ▁RUB
- ▁FABRIC
- ▁HUNDRED
- ▁GRIP
- ▁CHARACTER
- ▁SPECIFIC
- ▁KNOT
- ▁CURL
- ▁STITCH
- ▁BLEND
- ▁FRAME
- ▁THIRTY
- '1'
- ▁HORSE
- ▁ATTACH
- ▁GROUP
- ▁STROKE
- ▁GUITAR
- ▁APART
- ▁MACHINE
- ▁CLASS
- ▁COMB
- ▁ROOT
- ▁HELLO
- ▁ENERGY
- ▁ATTACK
- ▁CORRECT
- ▁EXTEND
- ▁MINOR
- ▁PROFESSIONAL
- ▁MONEY
- ▁STRIP
- ▁FLAVOR
- ▁EVERYBODY
- ▁RULE
- ▁DIFFICULT
- ▁PROJECT
- ▁DISCUSS
- ▁FIGURE
- ▁HOWEVER
- ▁FINAL
- ▁STRENGTH
- ▁ENTIRE
- ▁FIELD
- ▁CONTACT
- ▁SUPPORT
- ▁PALM
- ▁SERIES
- ▁ENJOY
- '6'
- ▁WORLD
- ▁DECIDE
- ▁SPEAK
- ▁SEVERAL
- ▁WRITE
- ▁PROGRAM
- ABILITY
- ▁KNIFE
- ▁PLASTIC
- ▁ORGAN
- '7'
- ▁UNDERSTAND
- ▁FIFTEEN
- ▁FLEX
- ▁INFORMATION
- ▁TWELVE
- ▁DETAIL
- ▁STRIKE
- ▁ACTUAL
- ▁SPRAY
- ▁LOCAL
- ▁MOUTH
- ▁NIGHT
- ▁VEHICLE
- ▁OPPOSITE
- ▁SCHOOL
- '9'
- ▁QUESTION
- ▁SPECIAL
- ▁BIGGER
- ▁DEVELOP
- ▁PEPPER
- ▁PREFER
- Q
- '%'
- ']'
- '['
- '&'
- ','
- _
- '#'
- '='
- '@'
- +
- '*'
- $
- '~'
- <sos/eos>
init: null
input_size: null
ctc_conf:
ignore_nan_grad: true
model_conf:
ctc_weight: 0.0
lsm_weight: 0.15
length_normalized_loss: false
use_preprocessor: true
token_type: bpe
bpemodel: data/en_token_list/bpe_unigram1000/bpe.model
non_linguistic_symbols: data/nlsyms
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: default
frontend_conf:
n_fft: 512
hop_length: 256
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 30
num_freq_mask: 2
apply_time_mask: true
time_mask_width_range:
- 0
- 40
num_time_mask: 2
normalize: global_mvn
normalize_conf:
stats_file: exp/asr_stats_raw_vid_sum/train/feats_stats.npz
preencoder: null
preencoder_conf: {}
encoder: conformer
encoder_conf:
output_size: 512
attention_heads: 8
linear_units: 2048
num_blocks: 12
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.1
input_layer: conv2d
normalize_before: true
macaron_style: true
pos_enc_layer_type: abs_pos
selfattention_layer_type: lf_selfattn
activation_type: swish
use_cnn_module: true
cnn_module_kernel: 31
attention_windows:
- 40
- 40
- 40
- 40
- 40
- 40
- 40
- 40
- 40
- 40
- 40
- 40
attention_dilation:
- 1
- 1
- 1
- 1
- 1
- 1
- 1
- 1
- 1
- 1
- 1
- 1
attention_mode: tvm
decoder: transformer
decoder_conf:
attention_heads: 4
linear_units: 512
num_blocks: 6
dropout_rate: 0.15
positional_dropout_rate: 0.15
self_attention_dropout_rate: 0.15
src_attention_dropout_rate: 0.15
required:
- output_dir
- token_list
version: 0.10.0
distributed: true
```
</details>
Please cite the following paper if you use this recipe:
```BibTex
@misc{sharma2022speech,
title={Speech Summarization using Restricted Self-Attention},
author={Roshan Sharma and Shruti Palaskar and Alan W Black and Florian Metze},
year={2022},
eprint={2110.06263},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-summarization"], "datasets": ["how2"]}
|
espnet/roshansh_how2_asr_raw_ft_sum_valid.acc
| null |
[
"espnet",
"audio",
"automatic-speech-summarization",
"en",
"dataset:how2",
"arxiv:2110.06263",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
automatic-speech-recognition
|
espnet
|
# ESPnet2 ASR pretrained model
## `Shinji Watanabe/librispeech_asr_train_asr_transformer_e18_raw_bpe_sp_valid.acc.best, fs=16k, lang=en`
♻️ Imported from <https://zenodo.org/record/3966501#.YOAOUZozZH5>
This model was trained by Shinji Watanabe using librispeech recipe in [espnet](https://github.com/espnet/espnet/).
### Python API
```text
See https://github.com/espnet/espnet_model_zoo
```
### Evaluate in the recipe
```python
# coming soon
```
### Results
```bash
# RESULTS
## Environments
- date: `Tue Jul 21 07:58:39 EDT 2020`
- python version: `3.7.3 (default, Mar 27 2019, 22:11:17) [GCC 7.3.0]`
- espnet version: `espnet 0.8.0`
- pytorch version: `pytorch 1.4.0`
- Git hash: `75db853dd26a40d3d4dd979b2ff2457fbbb0cd69`
- Commit date: `Mon Jul 20 10:49:12 2020 -0400`
## asr_train_asr_transformer_e18_raw_bpe_sp
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_dev_clean_decode_asr_beam_size20_lm_train_lm_adam_bpe_valid.loss.best_asr_model_valid.acc.best|2703|54402|97.9|1.8|0.2|0.2|2.3|28.2|
|decode_dev_clean_decode_asr_beam_size5_lm_train_lm_adam_bpe_valid.loss.best_asr_model_valid.acc.best|2703|54402|97.9|1.9|0.2|0.3|2.4|29.5|
|decode_dev_other_decode_asr_beam_size20_lm_train_lm_adam_bpe_valid.loss.best_asr_model_valid.acc.best|2864|50948|94.6|4.7|0.7|0.7|6.0|46.6|
|decode_dev_other_decode_asr_beam_size5_lm_train_lm_adam_bpe_valid.loss.best_asr_model_valid.acc.best|2864|50948|94.4|5.0|0.5|0.8|6.3|47.5|
|decode_test_clean_decode_asr_beam_size20_lm_train_lm_adam_bpe_valid.loss.best_asr_model_valid.acc.best|2620|52576|97.7|2.0|0.3|0.3|2.6|30.4|
|decode_test_clean_decode_asr_beam_size5_lm_train_lm_adam_bpe_valid.loss.best_asr_model_valid.acc.best|2620|52576|97.7|2.0|0.2|0.3|2.6|30.1|
|decode_test_other_decode_asr_beam_size20_lm_train_lm_adam_bpe_valid.loss.best_asr_model_valid.acc.best|2939|52343|94.5|4.8|0.7|0.7|6.2|49.7|
|decode_test_other_decode_asr_beam_size5_lm_train_lm_adam_bpe_valid.loss.best_asr_model_valid.acc.best|2939|52343|94.3|5.1|0.6|0.8|6.5|50.3|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_dev_clean_decode_asr_beam_size20_lm_train_lm_adam_bpe_valid.loss.best_asr_model_valid.acc.best|2703|288456|99.3|0.3|0.3|0.2|0.9|28.2|
|decode_dev_clean_decode_asr_beam_size5_lm_train_lm_adam_bpe_valid.loss.best_asr_model_valid.acc.best|2703|288456|99.3|0.4|0.3|0.2|0.9|29.5|
|decode_dev_other_decode_asr_beam_size20_lm_train_lm_adam_bpe_valid.loss.best_asr_model_valid.acc.best|2864|265951|97.7|1.2|1.1|0.6|2.9|46.6|
|decode_dev_other_decode_asr_beam_size5_lm_train_lm_adam_bpe_valid.loss.best_asr_model_valid.acc.best|2864|265951|97.7|1.3|1.0|0.8|3.0|47.5|
|decode_test_clean_decode_asr_beam_size20_lm_train_lm_adam_bpe_valid.loss.best_asr_model_valid.acc.best|2620|281530|99.3|0.3|0.4|0.3|1.0|30.4|
|decode_test_clean_decode_asr_beam_size5_lm_train_lm_adam_bpe_valid.loss.best_asr_model_valid.acc.best|2620|281530|99.4|0.3|0.3|0.3|0.9|30.1|
|decode_test_other_decode_asr_beam_size20_lm_train_lm_adam_bpe_valid.loss.best_asr_model_valid.acc.best|2939|272758|97.8|1.1|1.1|0.7|2.9|49.7|
|decode_test_other_decode_asr_beam_size5_lm_train_lm_adam_bpe_valid.loss.best_asr_model_valid.acc.best|2939|272758|97.9|1.2|0.9|0.8|2.9|50.3|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_dev_clean_decode_asr_beam_size20_lm_train_lm_adam_bpe_valid.loss.best_asr_model_valid.acc.best|2703|69307|97.2|1.8|1.0|0.4|3.2|28.2|
|decode_dev_clean_decode_asr_beam_size5_lm_train_lm_adam_bpe_valid.loss.best_asr_model_valid.acc.best|2703|69307|97.2|1.9|1.0|0.5|3.3|29.5|
|decode_dev_other_decode_asr_beam_size20_lm_train_lm_adam_bpe_valid.loss.best_asr_model_valid.acc.best|2864|64239|93.3|4.4|2.2|1.2|7.9|46.6|
|decode_dev_other_decode_asr_beam_size5_lm_train_lm_adam_bpe_valid.loss.best_asr_model_valid.acc.best|2864|64239|93.2|4.9|1.9|1.5|8.3|47.5|
|decode_test_clean_decode_asr_beam_size20_lm_train_lm_adam_bpe_valid.loss.best_asr_model_valid.acc.best|2620|66712|97.0|1.9|1.1|0.4|3.3|30.4|
|decode_test_clean_decode_asr_beam_size5_lm_train_lm_adam_bpe_valid.loss.best_asr_model_valid.acc.best|2620|66712|97.1|1.9|1.0|0.5|3.3|30.1|
|decode_test_other_decode_asr_beam_size20_lm_train_lm_adam_bpe_valid.loss.best_asr_model_valid.acc.best|2939|66329|93.1|4.5|2.4|1.0|7.9|49.7|
|decode_test_other_decode_asr_beam_size5_lm_train_lm_adam_bpe_valid.loss.best_asr_model_valid.acc.best|2939|66329|93.1|4.8|2.1|1.4|8.3|50.3|
```
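The WER/CER/TER tables above follow the standard Levenshtein-based scoring convention: the error rate is (Sub + Del + Ins) over the reference length, and S.Err is the fraction of sentences containing at least one error. A minimal sketch of that arithmetic (the counts below are illustrative, not taken from the tables):

```python
def error_rate(sub: int, dele: int, ins: int, ref_len: int) -> float:
    """Word/character/token error rate in percent: (S + D + I) / N * 100."""
    return 100.0 * (sub + dele + ins) / ref_len

# Hypothetical counts for a dev set with 54402 reference words:
# 980 substitutions, 109 deletions, 109 insertions.
wer = error_rate(sub=980, dele=109, ins=109, ref_len=54402)
print(f"WER = {wer:.1f}%")  # WER = 2.2%
```

The same function covers CER and TER; only the unit of the reference length changes (characters or BPE tokens instead of words).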
### Training config
See full config in [`config.yaml`](./exp/asr_train_asr_transformer_e18_raw_bpe_sp/config.yaml)
```yaml
config: conf/tuning/train_asr_transformer_e18.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_asr_transformer_e18_raw_bpe_sp
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: 4
dist_rank: 3
local_rank: 3
dist_master_addr: localhost
dist_master_port: 33643
dist_launcher: null
multiprocessing_distributed: true
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["librispeech"], "inference": false}
|
espnet/shinji-watanabe-librispeech_asr_train_asr_transformer_e18_raw_bpe_sp_valid.acc.best
| null |
[
"espnet",
"audio",
"automatic-speech-recognition",
"en",
"dataset:librispeech",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
automatic-speech-recognition
|
espnet
|
## ESPnet2 SLU pretrained model
### `siddhana/fsc_asr_train_asr_hubert_transformer_adam_specaug_raw_en_word_valid.acc.ave_5best`
♻️ Imported from https://zenodo.org/record/5590204
This model was trained by siddhana using fsc/asr1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["fsc"]}
|
espnet/siddhana_fsc_asr_train_asr_hubert_transformer_adam_specaug_raw_en_word_valid.acc.ave_5best
| null |
[
"espnet",
"audio",
"automatic-speech-recognition",
"en",
"dataset:fsc",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
automatic-speech-recognition
|
espnet
|
## ESPnet2 ASR pretrained model
### `siddhana/fsc_challenge_asr_train_asr_hubert_transformer_adam_specaug_raw_en_word_valid.acc.ave_5best`
♻️ Imported from https://zenodo.org/record/5656007
This model was trained by siddhana using fsc_challenge/asr1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["fsc_challenge"]}
|
espnet/siddhana_fsc_challenge_asr_train_asr_hubert_transformer_adam_specaug_r-truncated-36174d
| null |
[
"espnet",
"audio",
"automatic-speech-recognition",
"en",
"dataset:fsc_challenge",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
automatic-speech-recognition
|
espnet
|
## ESPnet2 ASR pretrained model
### `siddhana/fsc_unseen_asr_train_asr_hubert_transformer_adam_specaug_finetune_raw_en_word_valid.acc.ave_5best`
♻️ Imported from https://zenodo.org/record/5655832
This model was trained by siddhana using fsc_unseen/asr1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["fsc_unseen"]}
|
espnet/siddhana_fsc_unseen_asr_train_asr_hubert_transformer_adam_specaug_fine-truncated-ef9dab
| null |
[
"espnet",
"audio",
"automatic-speech-recognition",
"en",
"dataset:fsc_unseen",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
automatic-speech-recognition
|
espnet
|
## ESPnet2 ASR model
### `espnet/siddhana_slue_asr_train_asr_conformer_raw_en_word_valid.acc.ave_10best`
This model was trained by Siddhant using slue-voxceleb recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout 17758ad804fd7c4b6f88ef5601f475a241dc4605
pip install -e .
cd egs2/slue-voxceleb/asr1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/siddhana_slue_asr_train_asr_conformer_raw_en_word_valid.acc.ave_10best
```
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Tue Dec 28 12:28:28 EST 2021`
- python version: `3.9.5 (default, Jun 4 2021, 12:28:51) [GCC 7.5.0]`
- espnet version: `espnet 0.10.3a2`
- pytorch version: `pytorch 1.8.1+cu102`
- Git hash: `6bf3c2a4f138d35331634d2e879bbc5c32a5266e`
- Commit date: `Mon Dec 22 15:41:32 EST 2021`
## Conformer-based encoder and Transformer-based decoder with spectral augmentation, jointly predicting the transcript and the intent
- ASR config: [conf/tuning/train_asr_conformer.yaml](conf/tuning/train_asr_conformer.yaml)
- token_type: word
|dataset|Snt|Intent Classification Accuracy (%)|Intent Classification Macro F1 (%)|
|---|---|---|---|
|inference_asr_model_valid.acc.ave_10best/devel|955|80.2|29.7|
### Detailed Classification Report
|dataset|Label|Snt|Prec|Recall|F1|
|---|---|---|---|---|---|
|inference_asr_model_valid.acc.ave_10best/devel|Neutral|784|85|93|89|
|inference_asr_model_valid.acc.ave_10best/devel|Positive|167|40|24|30|
|inference_asr_model_valid.acc.ave_10best/devel|Negative|3|0|0|0|
|inference_asr_model_valid.acc.ave_10best/devel|Mixed|1|0|0|0|
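Macro F1 is the unweighted mean of the per-class F1 scores, which is why it (29.7%) sits far below the accuracy (80.2%) here: the rare Negative and Mixed classes score 0 and drag the average down. A quick check against the report above:

```python
def macro_f1(per_class_f1):
    """Macro-averaged F1: unweighted mean of per-class F1 scores."""
    return sum(per_class_f1) / len(per_class_f1)

# Per-class F1 from the report (percent): Neutral 89, Positive 30, Negative 0, Mixed 0.
print(macro_f1([89, 30, 0, 0]))  # 29.75, matching the reported 29.7 up to rounding
```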
## ASR config
<details><summary>expand</summary>
```
config: conf/tuning/train_asr_conformer.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_asr_conformer_raw_en_word
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 50
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 10
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_en_word/train/speech_shape
- exp/asr_stats_raw_en_word/train/text_shape.word
valid_shape_file:
- exp/asr_stats_raw_en_word/valid/speech_shape
- exp/asr_stats_raw_en_word/valid/text_shape.word
batch_type: folded
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train/wav.scp
- speech
- sound
- - dump/raw/train/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/devel/wav.scp
- speech
- sound
- - dump/raw/devel/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.0002
scheduler: warmuplr
scheduler_conf:
warmup_steps: 25000
token_list:
- <blank>
- <unk>
- ▁i
- s
- ▁and
- ''''
- ▁the
- ▁a
- ▁to
- ▁it
- Neutral
- ▁you
- ▁that
- ▁of
- t
- ing
- ▁in
- ▁was
- ed
- ▁uh
- ▁know
- e
- m
- ▁he
- y
- er
- ▁so
- ▁we
- re
- a
- o
- d
- ▁um
- i
- ▁s
- c
- ▁like
- n
- ▁is
- ▁be
- ▁f
- ▁but
- ▁c
- Positive
- en
- l
- ve
- ▁just
- ▁m
- st
- ▁they
- le
- an
- ▁on
- ▁p
- u
- ▁my
- ar
- p
- ▁this
- ▁for
- ▁b
- ▁think
- in
- ▁with
- g
- or
- ▁h
- r
- ly
- w
- ▁me
- ▁d
- ▁e
- ▁have
- ▁she
- it
- ▁t
- ▁what
- b
- ▁st
- al
- es
- ▁there
- ▁really
- ic
- ▁g
- ▁as
- ▁w
- ▁l
- ▁do
- ll
- v
- ▁all
- at
- 'on'
- as
- ▁about
- h
- ▁not
- ▁re
- ▁o
- ▁at
- k
- ▁don
- ▁had
- ▁when
- ou
- ent
- is
- ra
- ▁who
- ri
- ▁go
- se
- f
- ▁out
- ▁get
- ▁an
- ▁people
- nd
- ▁kind
- ▁very
- ce
- ▁because
- ▁are
- ion
- ▁some
- et
- ▁can
- ge
- ▁or
- me
- ▁up
- ▁n
- ▁if
- ▁no
- ▁one
- ▁were
- ct
- ▁mean
- ad
- ▁time
- ▁ch
- ▁then
- ro
- ▁ex
- ▁mo
- ▁her
- ▁every
- ▁would
- ▁co
- ▁work
- ir
- ▁sh
- ay
- ▁se
- ol
- ver
- ▁su
- ▁got
- ▁k
- th
- ▁love
- ▁from
- ld
- ation
- ▁him
- ▁said
- ▁how
- ▁well
- ▁lot
- ▁show
- ch
- ard
- ie
- ▁pro
- ▁de
- ▁gonna
- ▁bo
- ▁say
- ▁see
- ▁li
- one
- ▁his
- ther
- ▁been
- ur
- ▁any
- ▁great
- ▁
- ▁yeah
- pe
- ▁which
- ▁come
- ▁them
- ot
- ▁play
- ab
- ite
- ▁way
- ally
- id
- gh
- ▁r
- ▁sc
- our
- x
- mp
- ers
- ong
- ate
- ▁your
- ss
- ast
- ▁did
- ▁sort
- ▁am
- am
- and
- ▁make
- ant
- ▁thing
- ▁ha
- ▁te
- ▁has
- ess
- ▁v
- ▁something
- ▁back
- ▁where
- ▁things
- red
- ▁al
- ut
- el
- ight
- ment
- un
- ive
- ▁th
- ▁le
- il
- ▁j
- op
- ▁more
- ▁ro
- ill
- ▁fi
- ies
- ▁much
- ck
- ▁ne
- ▁wh
- ▁always
- ▁act
- ine
- pp
- z
- ▁now
- ▁con
- thing
- ▁us
- body
- ▁want
- ▁other
- ort
- ice
- ▁doing
- ▁sa
- ▁feel
- ow
- ▁int
- ne
- ▁these
- ▁could
- ▁good
- ▁cause
- Negative
- ▁actually
- ▁wr
- ▁little
- ain
- ▁being
- ▁look
- ▁into
- ere
- ul
- ▁our
- ▁guy
- ▁first
- ud
- ▁by
- ▁fun
- ▁qu
- ▁didn
- us
- ity
- ▁jo
- od
- ▁u
- ▁part
- ▁off
- ▁pre
- ▁right
- ▁film
- ▁start
- ok
- ▁two
- ving
- ▁never
- pt
- um
- te
- ▁movie
- ▁going
- ff
- nder
- ke
- ▁ag
- ▁en
- ▁try
- ful
- im
- ays
- ▁life
- ▁different
- ach
- are
- ▁di
- ist
- ▁oh
- au
- ▁po
- nt
- ▁com
- all
- ▁lo
- om
- ▁real
- ▁y
- ame
- ▁went
- ry
- ber
- ▁even
- ci
- ▁ho
- ▁years
- ▁their
- ▁happen
- ure
- self
- per
- ▁pl
- ▁those
- ble
- 'no'
- ▁day
- ▁take
- ▁does
- ien
- ▁br
- be
- wn
- ▁thought
- ▁fe
- ght
- ▁tr
- ▁story
- ty
- ▁down
- ous
- ish
- ▁wom
- ▁wanna
- ▁put
- ▁through
- ide
- ▁ab
- ▁new
- ▁also
- ▁big
- ▁call
- ▁around
- ▁character
- ▁read
- iz
- ▁came
- act
- ily
- ath
- ag
- ree
- ▁per
- ▁will
- ▁mu
- ▁talk
- ▁over
- ▁friend
- atch
- ▁bl
- ade
- ▁world
- ▁many
- ▁sp
- sic
- ▁cl
- ▁bit
- ▁man
- ace
- ▁person
- ft
- ip
- ▁than
- ▁wanted
- ▁may
- ven
- ick
- ious
- ▁mar
- ▁before
- ▁rel
- j
- ting
- ▁set
- sh
- ep
- ▁un
- ue
- ▁aw
- ▁find
- ▁kid
- tain
- ▁such
- ter
- ▁end
- ▁tw
- ind
- aking
- ▁after
- ▁fam
- ars
- ig
- ore
- ▁bec
- ak
- art
- reat
- ust
- rou
- ack
- ▁ye
- ould
- ime
- itt
- ▁gu
- qu
- ose
- fe
- ▁wor
- lf
- alk
- ▁charact
- ▁mov
- out
- ich
- ▁happ
- ▁thou
- ith
- <mixed>
- rom
- ake
- ▁diff
- ▁char
- na
- round
- ory
- ink
- ually
- ▁gon
- ▁pe
- right
- ody
- ah
- rie
- riend
- now
- so
- ause
- ▁fil
- ▁pers
- fore
- very
- ▁differe
- rough
- q
- ▁fir
- anna
- ways
- ':'
- '&'
- fter
- <sos/eos>
transcript_token_list: null
init: null
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: true
model_conf:
ctc_weight: 0.3
lsm_weight: 0.1
length_normalized_loss: false
extract_feats_in_collect_stats: false
use_preprocessor: true
token_type: word
bpemodel: null
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: default
frontend_conf:
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 30
num_freq_mask: 2
apply_time_mask: true
time_mask_width_range:
- 0
- 40
num_time_mask: 2
normalize: utterance_mvn
normalize_conf: {}
preencoder: null
preencoder_conf: {}
encoder: conformer
encoder_conf:
output_size: 512
attention_heads: 8
linear_units: 2048
num_blocks: 12
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.1
input_layer: conv2d
normalize_before: true
macaron_style: true
pos_enc_layer_type: rel_pos
selfattention_layer_type: rel_selfattn
activation_type: swish
use_cnn_module: true
cnn_module_kernel: 31
postencoder: null
postencoder_conf: {}
decoder: transformer
decoder_conf:
attention_heads: 8
linear_units: 2048
num_blocks: 6
dropout_rate: 0.1
positional_dropout_rate: 0.1
self_attention_dropout_rate: 0.1
src_attention_dropout_rate: 0.1
postdecoder: null
postdecoder_conf: {}
required:
- output_dir
- token_list
version: 0.10.3a2
distributed: false
```
</details>
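The config above uses `scheduler: warmuplr` with `warmup_steps: 25000` and a peak `lr` of 0.0002. This is the Noam-style schedule: the learning rate ramps up linearly over the warmup and then decays as the inverse square root of the step. A sketch of that curve (modelled on ESPnet2's WarmupLR; exact implementation details may differ):

```python
def warmup_lr(step: int, base_lr: float = 0.0002, warmup_steps: int = 25000) -> float:
    """Noam-style warmup: lr * w**0.5 * min(step**-0.5, step * w**-1.5).
    Requires step >= 1; peaks at base_lr when step == warmup_steps."""
    return base_lr * warmup_steps**0.5 * min(step**-0.5, step * warmup_steps**-1.5)

print(warmup_lr(25000))  # 0.0002 — the peak, reached exactly at warmup_steps
```

Before the peak the rate grows linearly with the step; afterwards it decays as `step**-0.5`, so longer warmups trade a slower start for a lower, more stable peak.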
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["slue-voxceleb"]}
|
espnet/siddhana_slue_asr_train_asr_conformer_raw_en_word_valid.acc.ave_10best
| null |
[
"espnet",
"audio",
"automatic-speech-recognition",
"en",
"dataset:slue-voxceleb",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
automatic-speech-recognition
|
espnet
|
## ESPnet2 SLU (Entity Classification) pretrained model
### `siddhana/slurp_entity_asr_train_asr_conformer_raw_en_word_valid.acc.ave_10best`
♻️ Imported from https://zenodo.org/record/5590204
This model was trained by siddhana using fsc/asr1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["fsc"]}
|
espnet/siddhana_slurp_entity_asr_train_asr_conformer_raw_en_word_valid.acc.ave_10best
| null |
[
"espnet",
"audio",
"automatic-speech-recognition",
"en",
"dataset:fsc",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
automatic-speech-recognition
|
espnet
|
## ESPnet2 SLU pretrained model
### `siddhana/slurp_new_asr_train_asr_conformer_raw_en_word_valid.acc.ave_10best`
♻️ Imported from https://zenodo.org/record/5590384
This model was trained by siddhana using slurp/asr1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["slurp"]}
|
espnet/siddhana_slurp_new_asr_train_asr_conformer_raw_en_word_valid.acc.ave_10best
| null |
[
"espnet",
"audio",
"automatic-speech-recognition",
"en",
"dataset:slurp",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
automatic-speech-recognition
|
espnet
|
## ESPnet2 ASR model
### `espnet/simpleoier_librispeech_asr_train_asr_conformer7_hubert_ll60k_large_raw_en_bpe5000_sp`
This model was trained by simpleoier using librispeech recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout b0ff60946ada6753af79423a2e6063984bec2926
pip install -e .
cd egs2/librispeech/asr1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/simpleoier_librispeech_asr_train_asr_conformer7_hubert_ll60k_large_raw_en_bpe5000_sp
```
## ASR config
<details><summary>expand</summary>
```
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["librispeech"]}
|
espnet/simpleoier_librispeech_asr_train_asr_conformer7_hubert_ll60k_large_raw_en_bpe5000_sp
| null |
[
"espnet",
"audio",
"automatic-speech-recognition",
"en",
"dataset:librispeech",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
automatic-speech-recognition
|
espnet
|
## ESPnet2 ASR model
### `espnet/simpleoier_librispeech_asr_train_asr_conformer7_wav2vec2_960hr_large_raw_en_bpe5000_sp`
This model was trained by simpleoier using librispeech recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout b0ff60946ada6753af79423a2e6063984bec2926
pip install -e .
cd egs2/librispeech/asr1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/simpleoier_librispeech_asr_train_asr_conformer7_wav2vec2_960hr_large_raw_en_bpe5000_sp
```
## ASR config
<details><summary>expand</summary>
```
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["librispeech"]}
|
espnet/simpleoier_librispeech_asr_train_asr_conformer7_wav2vec2_960hr_large_raw_en_bpe5000_sp
| null |
[
"espnet",
"audio",
"automatic-speech-recognition",
"en",
"dataset:librispeech",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
automatic-speech-recognition
|
espnet
|
## ESPnet2 ASR model
### `espnet/simpleoier_librispeech_asr_train_asr_conformer7_wavlm_large_raw_en_bpe5000_sp`
This model was trained by simpleoier using librispeech recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout b0ff60946ada6753af79423a2e6063984bec2926
pip install -e .
cd egs2/librispeech/asr1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/simpleoier_librispeech_asr_train_asr_conformer7_wavlm_large_raw_en_bpe5000_sp
```
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Tue Jan 4 20:52:48 EST 2022`
- python version: `3.7.11 (default, Jul 27 2021, 14:32:16) [GCC 7.5.0]`
- espnet version: `espnet 0.10.5a1`
- pytorch version: `pytorch 1.8.1`
- Git hash: ``
- Commit date: ``
## asr_train_asr_conformer7_wavlm_large_raw_en_bpe5000_sp
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_valid.loss.ave_asr_model_valid.acc.ave/dev_clean|2703|54402|98.4|1.4|0.1|0.2|1.7|23.1|
|decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_valid.loss.ave_asr_model_valid.acc.ave/dev_other|2864|50948|96.7|3.0|0.3|0.3|3.6|35.5|
|decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_valid.loss.ave_asr_model_valid.acc.ave/test_clean|2620|52576|98.4|1.5|0.1|0.2|1.8|23.7|
|decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_valid.loss.ave_asr_model_valid.acc.ave/test_other|2939|52343|96.7|3.0|0.3|0.4|3.7|37.9|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_valid.loss.ave_asr_model_valid.acc.ave/dev_clean|2703|288456|99.7|0.2|0.2|0.2|0.5|23.1|
|decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_valid.loss.ave_asr_model_valid.acc.ave/dev_other|2864|265951|98.9|0.6|0.4|0.4|1.5|35.5|
|decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_valid.loss.ave_asr_model_valid.acc.ave/test_clean|2620|281530|99.6|0.2|0.2|0.2|0.6|23.7|
|decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_valid.loss.ave_asr_model_valid.acc.ave/test_other|2939|272758|99.1|0.5|0.4|0.4|1.3|37.9|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_valid.loss.ave_asr_model_valid.acc.ave/dev_clean|2703|68010|98.2|1.4|0.4|0.3|2.1|23.1|
|decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_valid.loss.ave_asr_model_valid.acc.ave/dev_other|2864|63110|96.0|3.1|0.9|0.9|4.9|35.5|
|decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_valid.loss.ave_asr_model_valid.acc.ave/test_clean|2620|65818|98.1|1.4|0.5|0.4|2.3|23.7|
|decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_valid.loss.ave_asr_model_valid.acc.ave/test_other|2939|65101|96.1|2.9|1.0|0.8|4.7|37.9|
## ASR config
<details><summary>expand</summary>
```
config: conf/tuning/train_asr_conformer7_wavlm_large.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_asr_conformer7_wavlm_large_raw_en_bpe5000_sp
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
num_targets: 1
dist_backend: nccl
dist_init_method: env://
dist_world_size: 2
dist_rank: 0
local_rank: 0
dist_master_addr: localhost
dist_master_port: 45342
dist_launcher: null
multiprocessing_distributed: true
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 35
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 10
nbest_averaging_interval: 0
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 3
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param:
- frontend.upstream
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 40000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_en_bpe5000_sp/train/speech_shape
- exp/asr_stats_raw_en_bpe5000_sp/train/text_shape.bpe
valid_shape_file:
- exp/asr_stats_raw_en_bpe5000_sp/valid/speech_shape
- exp/asr_stats_raw_en_bpe5000_sp/valid/text_shape.bpe
batch_type: numel
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train_960_sp/wav.scp
- speech
- kaldi_ark
- - dump/raw/train_960_sp/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/dev/wav.scp
- speech
- kaldi_ark
- - dump/raw/dev/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.0025
scheduler: warmuplr
scheduler_conf:
warmup_steps: 40000
token_list:
- <blank>
- <unk>
- ▁THE
- S
- ▁AND
- ▁OF
- ▁TO
- ▁A
- ▁IN
- ▁I
- ▁HE
- ▁THAT
- ▁WAS
- ED
- ▁IT
- ''''
- ▁HIS
- ING
- ▁YOU
- ▁WITH
- ▁FOR
- ▁HAD
- T
- ▁AS
- ▁HER
- ▁IS
- ▁BE
- ▁BUT
- ▁NOT
- ▁SHE
- D
- ▁AT
- ▁ON
- LY
- ▁HIM
- ▁THEY
- ▁ALL
- ▁HAVE
- ▁BY
- ▁SO
- ▁THIS
- ▁MY
- ▁WHICH
- ▁ME
- ▁SAID
- ▁FROM
- ▁ONE
- Y
- E
- ▁WERE
- ▁WE
- ▁NO
- N
- ▁THERE
- ▁OR
- ER
- ▁AN
- ▁WHEN
- ▁ARE
- ▁THEIR
- ▁WOULD
- ▁IF
- ▁WHAT
- ▁THEM
- ▁WHO
- ▁OUT
- M
- ▁DO
- ▁WILL
- ▁UP
- ▁BEEN
- P
- R
- ▁MAN
- ▁THEN
- ▁COULD
- ▁MORE
- C
- ▁INTO
- ▁NOW
- ▁VERY
- ▁YOUR
- ▁SOME
- ▁LITTLE
- ES
- ▁TIME
- RE
- ▁CAN
- ▁LIKE
- LL
- ▁ABOUT
- ▁HAS
- ▁THAN
- ▁DID
- ▁UPON
- ▁OVER
- IN
- ▁ANY
- ▁WELL
- ▁ONLY
- B
- ▁SEE
- ▁GOOD
- ▁OTHER
- ▁TWO
- L
- ▁KNOW
- ▁GO
- ▁DOWN
- ▁BEFORE
- A
- AL
- ▁OUR
- ▁OLD
- ▁SHOULD
- ▁MADE
- ▁AFTER
- ▁GREAT
- ▁DAY
- ▁MUST
- ▁COME
- ▁HOW
- ▁SUCH
- ▁CAME
- LE
- ▁WHERE
- ▁US
- ▁NEVER
- ▁THESE
- ▁MUCH
- ▁DE
- ▁MISTER
- ▁WAY
- G
- ▁S
- ▁MAY
- ATION
- ▁LONG
- OR
- ▁AM
- ▁FIRST
- ▁BACK
- ▁OWN
- ▁RE
- ▁AGAIN
- ▁SAY
- ▁MEN
- ▁WENT
- ▁HIMSELF
- ▁HERE
- NESS
- ▁THINK
- V
- IC
- ▁EVEN
- ▁THOUGHT
- ▁HAND
- ▁JUST
- ▁O
- ▁UN
- VE
- ION
- ▁ITS
- 'ON'
- ▁MAKE
- ▁MIGHT
- ▁TOO
- K
- ▁AWAY
- ▁LIFE
- TH
- ▁WITHOUT
- ST
- ▁THROUGH
- ▁MOST
- ▁TAKE
- ▁DON
- ▁EVERY
- F
- O
- ▁SHALL
- ▁THOSE
- ▁EYES
- AR
- ▁STILL
- ▁LAST
- ▁HOUSE
- ▁HEAD
- ABLE
- ▁NOTHING
- ▁NIGHT
- ITY
- ▁LET
- ▁MANY
- ▁OFF
- ▁BEING
- ▁FOUND
- ▁WHILE
- EN
- ▁SAW
- ▁GET
- ▁PEOPLE
- ▁FACE
- ▁YOUNG
- CH
- ▁UNDER
- ▁ONCE
- ▁TELL
- AN
- ▁THREE
- ▁PLACE
- ▁ROOM
- ▁YET
- ▁SAME
- IL
- US
- U
- ▁FATHER
- ▁RIGHT
- EL
- ▁THOUGH
- ▁ANOTHER
- LI
- RI
- ▁HEART
- IT
- ▁PUT
- ▁TOOK
- ▁GIVE
- ▁EVER
- ▁E
- ▁PART
- ▁WORK
- ERS
- ▁LOOK
- ▁NEW
- ▁KING
- ▁MISSUS
- ▁SIR
- ▁LOVE
- ▁MIND
- ▁LOOKED
- W
- RY
- ▁ASKED
- ▁LEFT
- ET
- ▁LIGHT
- CK
- ▁DOOR
- ▁MOMENT
- RO
- ▁WORLD
- ▁THINGS
- ▁HOME
- UL
- ▁THING
- LA
- ▁WHY
- ▁MOTHER
- ▁ALWAYS
- ▁FAR
- FUL
- ▁WATER
- CE
- IVE
- UR
- ▁HEARD
- ▁SOMETHING
- ▁SEEMED
- I
- LO
- ▁BECAUSE
- OL
- ▁END
- ▁TOLD
- ▁CON
- ▁YES
- ▁GOING
- ▁GOT
- RA
- IR
- ▁WOMAN
- ▁GOD
- EST
- TED
- ▁FIND
- ▁KNEW
- ▁SOON
- ▁EACH
- ▁SIDE
- H
- TON
- MENT
- ▁OH
- NE
- Z
- LING
- ▁AGAINST
- TER
- ▁NAME
- ▁MISS
- ▁QUITE
- ▁WANT
- ▁YEARS
- ▁FEW
- ▁BETTER
- ENT
- ▁HALF
- ▁DONE
- ▁ALSO
- ▁BEGAN
- ▁HAVING
- ▁ENOUGH
- IS
- ▁LADY
- ▁WHOLE
- LESS
- ▁BOTH
- ▁SEEN
- ▁SET
- ▁WHITE
- ▁COURSE
- IES
- ▁VOICE
- ▁CALLED
- ▁D
- ▁EX
- ATE
- ▁TURNED
- ▁GAVE
- ▁C
- ▁POOR
- MAN
- UT
- NA
- ▁DEAR
- ISH
- ▁GIRL
- ▁MORNING
- ▁BETWEEN
- LED
- ▁NOR
- IA
- ▁AMONG
- MA
- ▁
- ▁SMALL
- ▁REST
- ▁WHOM
- ▁FELT
- ▁HANDS
- ▁MYSELF
- ▁HIGH
- ▁M
- ▁HOWEVER
- ▁HERSELF
- ▁P
- CO
- ▁STOOD
- ID
- ▁KIND
- ▁HUNDRED
- AS
- ▁ROUND
- ▁ALMOST
- TY
- ▁SINCE
- ▁G
- AM
- ▁LA
- SE
- ▁BOY
- ▁MA
- ▁PERHAPS
- ▁WORDS
- ATED
- ▁HO
- X
- ▁MO
- ▁SAT
- ▁REPLIED
- ▁FOUR
- ▁ANYTHING
- ▁TILL
- ▁UNTIL
- ▁BLACK
- TION
- ▁CRIED
- RU
- TE
- ▁FACT
- ▁HELP
- ▁NEXT
- ▁LOOKING
- ▁DOES
- ▁FRIEND
- ▁LAY
- ANCE
- ▁POWER
- ▁BROUGHT
- VER
- ▁FIRE
- ▁KEEP
- PO
- FF
- ▁COUNTRY
- ▁SEA
- ▁WORD
- ▁CAR
- ▁DAYS
- ▁TOGETHER
- ▁IMP
- ▁REASON
- KE
- ▁INDEED
- TING
- ▁MATTER
- ▁FULL
- ▁TEN
- TIC
- ▁LAND
- ▁RATHER
- ▁AIR
- ▁HOPE
- ▁DA
- ▁OPEN
- ▁FEET
- ▁EN
- ▁FIVE
- ▁POINT
- ▁CO
- OM
- ▁LARGE
- ▁B
- ▁CL
- ME
- ▁GONE
- ▁CHILD
- INE
- GG
- ▁BEST
- ▁DIS
- UM
- ▁HARD
- ▁LORD
- OUS
- ▁WIFE
- ▁SURE
- ▁FORM
- DE
- ▁DEATH
- ANT
- ▁NATURE
- ▁BA
- ▁CARE
- ▁BELIEVE
- PP
- ▁NEAR
- ▁RO
- ▁RED
- ▁WAR
- IE
- ▁SPEAK
- ▁FEAR
- ▁CASE
- ▁TAKEN
- ▁ALONG
- ▁CANNOT
- ▁HEAR
- ▁THEMSELVES
- CI
- ▁PRESENT
- AD
- ▁MASTER
- ▁SON
- ▁THUS
- ▁LI
- ▁LESS
- ▁SUN
- ▁TRUE
- IM
- IOUS
- ▁THOUSAND
- ▁MONEY
- ▁W
- ▁BEHIND
- ▁CHILDREN
- ▁DOCTOR
- AC
- ▁TWENTY
- ▁WISH
- ▁SOUND
- ▁WHOSE
- ▁LEAVE
- ▁ANSWERED
- ▁THOU
- ▁DUR
- ▁HA
- ▁CERTAIN
- ▁PO
- ▁PASSED
- GE
- TO
- ▁ARM
- ▁LO
- ▁STATE
- ▁ALONE
- TA
- ▁SHOW
- ▁NEED
- ▁LIVE
- ND
- ▁DEAD
- ENCE
- ▁STRONG
- ▁PRE
- ▁TI
- ▁GROUND
- SH
- TI
- ▁SHORT
- IAN
- UN
- ▁PRO
- ▁HORSE
- MI
- ▁PRINCE
- ARD
- ▁FELL
- ▁ORDER
- ▁CALL
- AT
- ▁GIVEN
- ▁DARK
- ▁THEREFORE
- ▁CLOSE
- ▁BODY
- ▁OTHERS
- ▁SENT
- ▁SECOND
- ▁OFTEN
- ▁CA
- ▁MANNER
- MO
- NI
- ▁BRING
- ▁QUESTION
- ▁HOUR
- ▁BO
- AGE
- ▁ST
- ▁TURN
- ▁TABLE
- ▁GENERAL
- ▁EARTH
- ▁BED
- ▁REALLY
- ▁SIX
- 'NO'
- IST
- ▁BECOME
- ▁USE
- ▁READ
- ▁SE
- ▁VI
- ▁COMING
- ▁EVERYTHING
- ▁EM
- ▁ABOVE
- ▁EVENING
- ▁BEAUTIFUL
- ▁FEEL
- ▁RAN
- ▁LEAST
- ▁LAW
- ▁ALREADY
- ▁MEAN
- ▁ROSE
- WARD
- ▁ITSELF
- ▁SOUL
- ▁SUDDENLY
- ▁AROUND
- RED
- ▁ANSWER
- ICAL
- ▁RA
- ▁WIND
- ▁FINE
- ▁WON
- ▁WHETHER
- ▁KNOWN
- BER
- NG
- ▁TA
- ▁CAPTAIN
- ▁EYE
- ▁PERSON
- ▁WOMEN
- ▁SORT
- ▁ASK
- ▁BROTHER
- ▁USED
- ▁HELD
- ▁BIG
- ▁RETURNED
- ▁STRANGE
- ▁BU
- ▁PER
- ▁FREE
- ▁EITHER
- ▁WITHIN
- ▁DOUBT
- ▁YEAR
- ▁CLEAR
- ▁SIGHT
- ▁GRA
- ▁LOST
- ▁KEPT
- ▁F
- PE
- ▁BAR
- ▁TOWN
- ▁SLEEP
- ARY
- ▁HAIR
- ▁FRIENDS
- ▁DREAM
- ▁FELLOW
- PER
- ▁DEEP
- QUE
- ▁BECAME
- ▁REAL
- ▁PAST
- ▁MAKING
- RING
- ▁COMP
- ▁ACT
- ▁BAD
- HO
- STER
- ▁YE
- ▁MEANS
- ▁RUN
- MEN
- ▁DAUGHTER
- ▁SENSE
- ▁CITY
- ▁SOMETIMES
- ▁TOWARDS
- ▁ROAD
- ▁SP
- ▁LU
- ▁READY
- ▁FOOT
- ▁COLD
- ▁SA
- ▁LETTER
- ▁ELSE
- ▁MAR
- ▁STA
- BE
- ▁TRUTH
- ▁LE
- BO
- ▁BUSINESS
- CHE
- ▁JOHN
- ▁SUBJECT
- ▁COURT
- ▁IDEA
- ILY
- ▁RIVER
- ATING
- ▁FAMILY
- HE
- ▁DIDN
- ▁GLAD
- ▁SEVERAL
- IAL
- ▁UNDERSTAND
- ▁SC
- ▁POSSIBLE
- ▁DIFFERENT
- ▁RETURN
- ▁ARMS
- ▁LOW
- ▁HOLD
- ▁TALK
- ▁RU
- ▁WINDOW
- ▁INTEREST
- ▁SISTER
- SON
- ▁SH
- ▁BLOOD
- ▁SAYS
- ▁CAP
- ▁DI
- ▁HUMAN
- ▁CAUSE
- NCE
- ▁THANK
- ▁LATE
- GO
- ▁CUT
- ▁ACROSS
- ▁STORY
- NT
- ▁COUNT
- ▁ABLE
- DY
- LEY
- ▁NUMBER
- ▁STAND
- ▁CHURCH
- ▁THY
- ▁SUPPOSE
- LES
- BLE
- OP
- ▁EFFECT
- BY
- ▁K
- ▁NA
- ▁SPOKE
- ▁MET
- ▁GREEN
- ▁HUSBAND
- ▁RESPECT
- ▁PA
- ▁FOLLOWED
- ▁REMEMBER
- ▁LONGER
- ▁AGE
- ▁TAKING
- ▁LINE
- ▁SEEM
- ▁HAPPY
- LAND
- EM
- ▁STAY
- ▁PLAY
- ▁COMMON
- ▁GA
- ▁BOOK
- ▁TIMES
- ▁OBJECT
- ▁SEVEN
- QUI
- DO
- UND
- ▁FL
- ▁PRETTY
- ▁FAIR
- WAY
- ▁WOOD
- ▁REACHED
- ▁APPEARED
- ▁SWEET
- ▁FALL
- BA
- ▁PASS
- ▁SIGN
- ▁TREE
- IONS
- ▁GARDEN
- ▁ILL
- ▁ART
- ▁REMAIN
- ▁OPENED
- ▁BRIGHT
- ▁STREET
- ▁TROUBLE
- ▁PAIN
- ▁CONTINUED
- ▁SCHOOL
- OUR
- ▁CARRIED
- ▁SAYING
- HA
- ▁CHANGE
- ▁FOLLOW
- ▁GOLD
- ▁SW
- ▁FEELING
- ▁COMMAND
- ▁BEAR
- ▁CERTAINLY
- ▁BLUE
- ▁NE
- CA
- ▁WILD
- ▁ACCOUNT
- ▁OUGHT
- UD
- ▁T
- ▁BREATH
- ▁WANTED
- ▁RI
- ▁HEAVEN
- ▁PURPOSE
- ▁CHARACTER
- ▁RICH
- ▁PE
- ▁DRESS
- OS
- FA
- ▁TH
- ▁ENGLISH
- ▁CHANCE
- ▁SHIP
- ▁VIEW
- ▁TOWARD
- AK
- ▁JOY
- ▁JA
- ▁HAR
- ▁NEITHER
- ▁FORCE
- ▁UNCLE
- DER
- ▁PLAN
- ▁PRINCESS
- DI
- ▁CHIEF
- ▁HAT
- ▁LIVED
- ▁AB
- ▁VISIT
- ▁MOR
- TEN
- ▁WALL
- UC
- ▁MINE
- ▁PLEASURE
- ▁SMILE
- ▁FRONT
- ▁HU
- ▁DEAL
- OW
- ▁FURTHER
- GED
- ▁TRIED
- DA
- VA
- ▁NONE
- ▁ENTERED
- ▁QUEEN
- ▁PAY
- ▁EL
- ▁EXCEPT
- ▁SHA
- ▁FORWARD
- ▁EIGHT
- ▁ADDED
- ▁PUBLIC
- ▁EIGHTEEN
- ▁STAR
- ▁HAPPENED
- ▁LED
- ▁WALKED
- ▁ALTHOUGH
- ▁LATER
- ▁SPIRIT
- ▁WALK
- ▁BIT
- ▁MEET
- LIN
- ▁FI
- LT
- ▁MOUTH
- ▁WAIT
- ▁HOURS
- ▁LIVING
- ▁YOURSELF
- ▁FAST
- ▁CHA
- ▁HALL
- ▁BEYOND
- ▁BOAT
- ▁SECRET
- ENS
- ▁CHAIR
- RN
- ▁RECEIVED
- ▁CAT
- RESS
- ▁DESIRE
- ▁GENTLEMAN
- UGH
- ▁LAID
- EVER
- ▁OCCASION
- ▁WONDER
- ▁GU
- ▁PARTY
- DEN
- ▁FISH
- ▁SEND
- ▁NEARLY
- ▁TRY
- CON
- ▁SEEMS
- RS
- ▁BELL
- ▁BRA
- ▁SILENCE
- IG
- ▁GUARD
- ▁DIE
- ▁DOING
- ▁TU
- ▁COR
- ▁EARLY
- ▁BANK
- ▁FIGURE
- IF
- ▁ENGLAND
- ▁MARY
- ▁AFRAID
- LER
- ▁FO
- ▁WATCH
- ▁FA
- ▁VA
- ▁GRE
- ▁AUNT
- PED
- ▁SERVICE
- ▁JE
- ▁PEN
- ▁MINUTES
- ▁PAN
- ▁TREES
- NED
- ▁GLASS
- ▁TONE
- ▁PLEASE
- ▁FORTH
- ▁CROSS
- ▁EXCLAIMED
- ▁DREW
- ▁EAT
- ▁AH
- ▁GRAVE
- ▁CUR
- PA
- URE
- CENT
- ▁MILES
- ▁SOFT
- ▁AGO
- ▁POSITION
- ▁WARM
- ▁LENGTH
- ▁NECESSARY
- ▁THINKING
- ▁PICTURE
- ▁PI
- SHIP
- IBLE
- ▁HEAVY
- ▁ATTENTION
- ▁DOG
- ABLY
- ▁STANDING
- ▁NATURAL
- ▁APPEAR
- OV
- ▁CAUGHT
- VO
- ISM
- ▁SPRING
- ▁EXPERIENCE
- ▁PAT
- OT
- ▁STOPPED
- ▁REGARD
- ▁HARDLY
- ▁SELF
- ▁STRENGTH
- ▁GREW
- ▁KNIGHT
- ▁OPINION
- ▁WIDE
- ▁INSTEAD
- ▁SOUTH
- ▁TRANS
- ▁CORNER
- ▁LEARN
- ▁ISLAND
- ▁MI
- ▁THIRD
- ▁STE
- ▁STRAIGHT
- ▁TEA
- ▁BOUND
- ▁SEEING
- ▁JU
- ▁DINNER
- ▁BEAUTY
- ▁PEACE
- AH
- ▁REP
- ▁SILENT
- ▁CRE
- ALLY
- RIC
- ▁STEP
- ▁VER
- ▁JO
- GER
- ▁SITTING
- ▁THIRTY
- ▁SAVE
- ENED
- ▁GLANCE
- ▁REACH
- ▁ACTION
- ▁SAL
- ▁SAD
- ▁STONE
- ITIES
- ▁FRENCH
- ▁STRUCK
- ▁PAPER
- ▁WHATEVER
- ▁SUB
- ▁DISTANCE
- ▁WRONG
- ▁KNOWLEDGE
- ▁SAFE
- ▁SNOW
- ▁MUSIC
- ▁FIFTY
- RON
- ▁ATTEMPT
- ▁GOVERNMENT
- TU
- ▁CROWD
- ▁BESIDES
- ▁LOVED
- ▁BOX
- ▁DIRECTION
- ▁TRAIN
- ▁NORTH
- ▁THICK
- ▁GETTING
- AV
- ▁FLOOR
- ▁COMPANY
- ▁BLOW
- ▁PLAIN
- TRO
- ▁BESIDE
- ▁ROCK
- ▁IMMEDIATELY
- FI
- ▁SHADOW
- ▁SIT
- ORS
- ILE
- ▁DRINK
- ▁SPOT
- ▁DANGER
- ▁AL
- ▁SAINT
- ▁SLOWLY
- ▁PALACE
- IER
- ▁RESULT
- ▁PETER
- ▁FOREST
- ▁BELONG
- ▁SU
- ▁PAR
- RIS
- ▁TEARS
- ▁APPEARANCE
- ▁GATE
- BU
- ITION
- ▁QUICKLY
- ▁QUIET
- ▁LONDON
- ▁START
- ▁BROWN
- TRA
- KIN
- ▁CONSIDER
- ▁BATTLE
- ▁ANNE
- ▁PIECE
- ▁DIED
- ▁SUCCESS
- ▁LIPS
- ▁FILLED
- ▁FORGET
- ▁POST
- IFIED
- ▁MARGARET
- ▁FOOD
- HAM
- ▁PLEASANT
- ▁FE
- ▁EXPRESSION
- ▁POCKET
- ▁FRESH
- ▁WEAR
- TRI
- ▁BROKEN
- ▁LAUGHED
- GING
- ▁FOLLOWING
- WN
- IP
- ▁TOUCH
- ▁YOUTH
- ATIVE
- ▁LEG
- ▁WEEK
- ▁REMAINED
- ▁EASY
- NER
- RK
- ▁ENTER
- ▁FIGHT
- ▁PLACED
- ▁TRAVEL
- ▁SIMPLE
- ▁GIRLS
- ▁WAITING
- ▁STOP
- ▁WAVE
- AU
- ▁WISE
- ▁CAMP
- TURE
- UB
- ▁VE
- ▁OFFICE
- ▁GRAND
- ▁FIT
- ▁JUDGE
- UP
- MENTS
- ▁QUICK
- HI
- ▁FLO
- RIES
- VAL
- ▁COMFORT
- ▁PARTICULAR
- ▁STARTED
- ▁SUIT
- ▁NI
- ▁PALE
- ▁IMPOSSIBLE
- ▁HOT
- ▁CONVERSATION
- ▁SCENE
- ▁BOYS
- ▁WIN
- ▁BRE
- ▁SOCIETY
- ▁OUTSIDE
- ▁WRITE
- ▁EFFORT
- ▁TALKING
- ▁FORTUNE
- ▁NINE
- ▁WA
- ▁SINGLE
- ▁RULE
- ▁PORT
- ▁WINTER
- ▁CAST
- ▁CRA
- ▁HAPPEN
- ▁CRO
- ▁SHUT
- NING
- ▁GUN
- ▁NOBLE
- ▁BEGIN
- ▁PATH
- ▁SKY
- ▁WONDERFUL
- ▁SUDDEN
- ▁ARMY
- ▁CHE
- ▁WORTH
- ▁MOUNTAIN
- ▁MIN
- AG
- ▁FLU
- ▁GRACE
- ▁CHAPTER
- ▁BELOW
- ▁RING
- ▁TURNING
- ▁IRON
- ▁TOP
- ▁AFTERNOON
- ORY
- ▁EVIL
- ▁TRUST
- ▁BOW
- ▁TRI
- ▁SAIL
- ▁CONTENT
- ▁HORSES
- ITE
- ▁SILVER
- AP
- ▁LAD
- ▁RUNNING
- ▁HILL
- ▁BEGINNING
- ▁MAD
- ▁HABIT
- GRA
- ▁CLOTHES
- ▁MORROW
- ▁CRY
- ▁FASHION
- ▁PRESENCE
- ▁Z
- FE
- ▁ARRIVED
- ▁QUARTER
- ▁PERFECT
- ▁WO
- ▁TRA
- ▁USUAL
- ▁NECK
- ▁MARRIED
- ▁SEAT
- ▁WI
- ▁GAR
- ▁SAND
- ▁SHORE
- ▁GIVING
- NY
- ▁PROBABLY
- ▁MINUTE
- ▁EXPECT
- ▁DU
- ▁SHOT
- ▁INSTANT
- ▁DEGREE
- ▁COLOR
- ▁WEST
- RT
- ▁MARCH
- ▁BIRD
- ▁SHOWED
- ▁GREATER
- ▁SERIOUS
- ▁CARRY
- ▁COVERED
- ▁FORMER
- ▁LOUD
- ▁MOVED
- ▁MASS
- ▁SEEK
- ▁CHO
- GEN
- ▁ROMAN
- IB
- ▁MOON
- ▁BOARD
- ▁STREAM
- ▁EASILY
- ▁WISHED
- ▁SEARCH
- ▁COULDN
- ▁MONTHS
- ▁SICK
- LIE
- ▁DUTY
- ▁TWELVE
- ▁FAINT
- ▁STRANGER
- ▁SURPRISE
- ▁KILL
- ▁LEAVING
- ▁JOURNEY
- ▁SCARCELY
- ▁RAISED
- ▁SPEAKING
- ▁TERRIBLE
- ▁TOM
- ▁FIELD
- ▁GAME
- ▁QUA
- ▁PROMISE
- ▁LIE
- ▁CONDITION
- ▁TRO
- ▁PERSONAL
- ▁TALL
- ▁STICK
- ▁THREW
- ▁MARRY
- ▁VAN
- ▁BURN
- ▁ACCORDING
- ▁RISE
- ▁ATTACK
- ▁SWORD
- ▁GUESS
- ▁THOUGHTS
- ▁THIN
- ▁THROW
- ▁CALM
- SIDE
- ▁VILLAGE
- ▁DEN
- ▁ANXIOUS
- ▁MER
- GI
- ▁EXPECTED
- ▁BALL
- ▁ESPECIALLY
- ▁CHARGE
- ▁MEASURE
- ISE
- ▁NICE
- ▁TRYING
- ▁ALLOW
- ▁SHARP
- ▁BREAD
- ▁HONOUR
- ▁HONOR
- ▁ENTIRELY
- ▁BILL
- ▁BRI
- ▁WRITTEN
- ▁AR
- ▁BROKE
- ▁KILLED
- ▁MARK
- ▁VEN
- ▁LADIES
- ▁LEARNED
- ▁FLOWERS
- PLE
- ▁FORTY
- ▁OFFER
- ▁HAPPINESS
- ▁PRAY
- ▁CLASS
- ▁FER
- ▁PRINCIPLE
- GU
- ▁BOOKS
- ▁SHAPE
- ▁SUMMER
- ▁JACK
- ▁DRAW
- ▁GOLDEN
- ▁DECIDED
- ▁LEAD
- ▁UNLESS
- ▁HARM
- ▁LISTEN
- HER
- ▁SHOOK
- ▁INFLUENCE
- ▁PERFECTLY
- ▁MARRIAGE
- ▁BROAD
- ▁ESCAPE
- ▁STATES
- ▁MIDDLE
- ▁PLANT
- ▁MIL
- ▁MOVEMENT
- ▁NOISE
- ▁ENEMY
- ▁HISTORY
- ▁BREAK
- ROUS
- ▁UNDERSTOOD
- ▁LATTER
- FER
- ▁COMES
- ▁MERELY
- ▁SIMPLY
- WI
- ▁IMAGINE
- ▁LOWER
- ▁CONDUCT
- ▁BORN
- WA
- ▁YARD
- ▁KA
- ▁CLOSED
- ▁NOTE
- GA
- ▁STRA
- RAN
- ▁EXIST
- EV
- ▁SPEECH
- ▁BITTER
- JO
- ▁MAKES
- ▁GRASS
- ▁REPLY
- ▁CHANGED
- ▁MON
- ▁LYING
- ▁DANCE
- ▁FINALLY
- ▁AMERICAN
- ▁ENJOY
- ▁CONTAIN
- ▁MEANT
- USE
- ▁OBSERVED
- THER
- ▁LAUGH
- ▁AFTERWARDS
- ▁BEAT
- ▁RACE
- ▁EQUAL
- ▁RAIN
- PS
- ▁STEPS
- ▁BENEATH
- ▁TAIL
- ▁TASTE
- IO
- EY
- ▁CHAR
- ▁GE
- GN
- TIN
- ▁GROW
- ▁TE
- IANS
- ▁MOVE
- ▁REPEATED
- ▁DRIVE
- TUR
- ▁SI
- CLOCK
- ▁BRAVE
- ▁MADAME
- ▁LOT
- ▁CASTLE
- ▁HI
- AND
- ▁FUTURE
- ▁RELATION
- ▁SORRY
- ▁HEALTH
- ▁DICK
- ▁R
- ▁BUILDING
- ▁EDGE
- ▁BLESS
- ▁SPITE
- WE
- ▁MIS
- ▁PRISONER
- ▁ALLOWED
- ▁PH
- ▁CATCH
- MER
- ETH
- ▁COAT
- ▁COMPLETE
- ▁WOULDN
- ▁CREATURE
- ▁YELLOW
- ▁IMPORTANT
- ▁ADD
- ▁PASSING
- ▁DARKNESS
- ▁CARRIAGE
- ▁MILL
- ▁FIFTEEN
- NCY
- ▁HUNG
- ▁OB
- ▁PLEASED
- ▁SPREAD
- ▁CURIOUS
- ▁WORSE
- ▁CIRCUMSTANCES
- ▁GI
- LAR
- ▁CAL
- ▁HY
- ▁MERE
- ▁JANE
- ▁EAST
- BI
- ▁CUP
- ▁BLIND
- ▁PASSION
- ▁DISCOVERED
- ▁NOTICE
- ▁REPORT
- ▁SPACE
- ▁PRESENTLY
- ▁SORROW
- ▁PACK
- ▁DIN
- CY
- ▁DRY
- ▁ANCIENT
- ▁DRESSED
- ▁COVER
- ▁VO
- ▁EXISTENCE
- ▁EXACTLY
- ▁BEAST
- ▁PROPER
- ▁DROPPED
- ▁CLEAN
- ▁COLOUR
- ▁HOST
- ▁CHAMBER
- ▁FAITH
- LET
- ▁DETERMINED
- ▁PRIEST
- ▁STORM
- ▁SKIN
- ▁DARE
- ▁PERSONS
- ▁PICK
- ▁NARROW
- ▁SUPPORT
- ▁PRIVATE
- ▁SMILED
- ▁COUSIN
- ▁DRAWING
- ▁ATTEND
- ▁COOK
- ▁PREVENT
- ▁VARIOUS
- ▁BLA
- ▁FIXED
- ▁WEAK
- THE
- ▁HOLE
- ▁BOTTOM
- ▁NOBODY
- ADE
- ▁LEGS
- ITCH
- ▁INDIVIDUAL
- ▁EARS
- LIKE
- ▁ADVANTAGE
- ▁FRANCE
- ▁BON
- ▁WINE
- ▁LIVES
- OD
- ▁WALLS
- ▁TIRED
- ▁SHOP
- ▁ANIMAL
- ▁CRU
- ▁WROTE
- ▁ROYAL
- ▁CONSIDERED
- ▁MORAL
- ▁COMPANION
- ▁LOSE
- ▁ISN
- ▁BAG
- ▁LAKE
- ▁INTER
- ▁COM
- ▁LETTERS
- ▁LUCK
- ▁EAR
- ▁GERMAN
- ▁PET
- ▁SAKE
- ▁DROP
- ▁PAID
- ▁BREAKFAST
- ▁LABOR
- ▁DESERT
- ▁DECLARED
- ▁HUM
- ▁STUDY
- ▁INSTANCE
- ONE
- ▁SOMEWHAT
- ▁CLOTH
- ▁SPECIAL
- ▁COLONEL
- ▁SONG
- ▁MAIN
- ▁VALUE
- ▁PROUD
- ▁EXPRESS
- ▁NATION
- ▁HANDSOME
- ▁CONFESS
- ▁PU
- ▁PASSAGE
- ▁PERIOD
- ▁CUSTOM
- ▁HURT
- ▁SHOULDER
- ▁CHRIST
- ZA
- ▁RECEIVE
- ▁DIFFICULT
- ▁DEPEND
- ▁MEETING
- ▁CHI
- ▁GEN
- LIGHT
- ▁BELIEVED
- ▁SOCIAL
- ▁DIFFICULTY
- ▁GREATEST
- ▁DRAWN
- ▁GRANT
- ▁BIRDS
- ▁ANGRY
- ▁HEAT
- UFF
- ▁DUE
- ▁PLACES
- ▁SIN
- ▁COURAGE
- ▁EVIDENTLY
- ▁GENTLE
- ▁CRUEL
- ▁GEORGE
- ▁GRI
- ▁SERVANT
- ▁U
- ▁PURE
- OOK
- ▁KNOWS
- ▁KNOWING
- LF
- ▁WRITING
- ▁REMEMBERED
- ▁CU
- ▁HOLDING
- ▁TENDER
- ▁QUI
- ▁BURST
- ▁SURELY
- IGN
- ▁VALLEY
- ▁FU
- ▁BUTTER
- ▁SPOKEN
- ▁STORE
- ▁DISC
- ▁CHRISTIAN
- ▁PARIS
- ▁HENRY
- ▁FINISHED
- ▁PROVE
- ▁FOOL
- ▁SOLDIERS
- ▁LANGUAGE
- ▁INSIDE
- ▁BAN
- ▁FALLEN
- ROW
- ▁MAL
- ▁BABY
- ▁SITUATION
- ▁WATCHED
- ANS
- ▁RUIN
- ▁GENTLEMEN
- ▁FRO
- ▁FANCY
- ▁ACCEPT
- ▁SEASON
- ▁OURSELVES
- ▁SAN
- ▁SPEED
- IZED
- ▁COOL
- ▁SERVE
- ▁VESSEL
- ▁WILLIAM
- ▁OBLIGED
- ▁GROUP
- FORM
- ▁GOES
- UOUS
- ▁LEAVES
- ▁PECULIAR
- ▁NEWS
- ▁VAIN
- ▁EVERYBODY
- ▁PIN
- UG
- ▁FORGOTTEN
- ▁FRA
- GAN
- ▁CAREFULLY
- ▁FLASH
- UCH
- ▁FUR
- ▁MURDER
- ▁DELIGHT
- ▁WAITED
- ▁RENDER
- ▁PROPERTY
- ▁NOTICED
- ▁ROLL
- ▁KNOCK
- ▁EARNEST
- KI
- ▁HONEST
- ▁PROMISED
- ▁BAL
- AW
- ▁WALKING
- ANG
- ▁SQUARE
- ▁QUIETLY
- ▁CLOUD
- WOOD
- ▁FORMED
- ▁HIGHER
- ▁BUILT
- ▁FATE
- ▁TEACH
- MY
- ▁FALSE
- ▁YORK
- ▁DUST
- ▁CLIMB
- ▁FOND
- ▁GROWN
- ▁DESCEND
- ▁RAG
- ▁FRUIT
- ▁GENERALLY
- ▁OFFERED
- ▁ER
- ▁NURSE
- POSE
- ▁SPENT
- ▁JOIN
- ▁STATION
- ▁MEANING
- ▁SMOKE
- HOOD
- ▁ROUGH
- JU
- ▁LIKELY
- ▁SURFACE
- ▁KE
- ▁MONTH
- ▁POSSESSION
- ▁TONGUE
- ▁DUKE
- ▁NOSE
- ▁LAUGHING
- ▁WEATHER
- ▁WHISPERED
- ▁SYSTEM
- ▁LAWS
- DDLE
- ▁TOUCHED
- ▁TRADE
- LD
- ▁SURPRISED
- RIN
- ▁ARCH
- ▁WEALTH
- FOR
- ▁TEMPER
- ▁FRANK
- ▁GAL
- ▁BARE
- ▁OPPORTUNITY
- ▁CLAIM
- ▁ANIMALS
- ▁REV
- ▁COST
- ▁WASH
- ZE
- ▁CORN
- ▁OPPOSITE
- ▁POLICE
- ▁IDEAS
- LON
- ▁KEY
- ▁READING
- ▁COLLECT
- CHED
- ▁H
- ▁CROWN
- ▁TAR
- ▁SWIFT
- ▁SHOULDERS
- ▁ICE
- ▁GRAY
- ▁SHARE
- ▁PREPARED
- ▁GRO
- ▁UND
- ▁TER
- ▁EMPTY
- CING
- ▁SMILING
- ▁AVOID
- ▁DIFFERENCE
- ▁EXPLAIN
- ▁POUR
- ▁ATTRACT
- ▁OPENING
- ▁WHEEL
- ▁MATERIAL
- ▁BREAST
- ▁SUFFERING
- ▁DISTINCT
- ▁BOOT
- ▁ROW
- ▁FINGERS
- HAN
- ▁ALTOGETHER
- ▁FAT
- ▁PAPA
- ▁BRAIN
- ▁ASLEEP
- ▁GREY
- ▁SUM
- ▁GAS
- ▁WINDOWS
- ▁ALIVE
- ▁PROCEED
- ▁FLOWER
- ▁LEAP
- ▁PUR
- ▁PIECES
- ▁ALTER
- ▁MEMORY
- IENT
- ▁FILL
- ▁CLO
- ▁THROWN
- ▁KINGDOM
- ▁RODE
- IUS
- ▁MAID
- ▁DIM
- ▁BAND
- ▁VIRTUE
- ▁DISH
- ▁GUEST
- ▁LOSS
- ▁CAUSED
- ▁MOTION
- ▁POT
- ▁MILLION
- ▁FAULT
- ▁LOVELY
- ▁HERO
- PPING
- ▁UNITED
- ▁SPI
- SOME
- BRA
- ▁MOUNTAINS
- ▁NU
- ▁SATISFIED
- ▁DOLLARS
- ▁LOVER
- ▁CONCEAL
- ▁VAST
- ▁PULL
- ▁HATH
- ▁RUSH
- ▁J
- ▁DESPAIR
- EX
- ▁HEIGHT
- ▁CE
- ▁BENT
- ▁PITY
- ▁RISING
- ATH
- ▁PRIDE
- ▁HURRY
- KA
- ▁SETTLED
- ▁JUSTICE
- ▁LIFTED
- PEN
- ▁SOLDIER
- ▁FINDING
- ▁REMARK
- ▁REGULAR
- ▁STRUGGLE
- ▁MACHINE
- ▁SING
- ▁HURRIED
- ▁SUFFICIENT
- ▁REPRESENT
- ▁DOUBLE
- ▁ALARM
- ▁SUPPER
- ▁DREADFUL
- ▁FORE
- ATOR
- ▁STOCK
- ▁TIN
- ▁EXAMPLE
- ▁ROOF
- ▁FLOW
- ▁SUPPOSED
- ▁PRESERV
- ▁L
- ▁LISTENED
- OC
- ▁STO
- ▁SECURE
- ▁FRIGHTENED
- ▁DISTURB
- ▁EMOTION
- ▁SERVANTS
- ▁YO
- ▁BUY
- ▁FORCED
- ▁KITCHEN
- ▁TERROR
- ▁STAIRS
- ▁SIXTY
- KER
- ▁ORDINARY
- ▁DIRECTLY
- ▁HEADS
- ▁METHOD
- ▁FORGIVE
- ▁AWFUL
- ▁REFLECT
- ▁GREATLY
- ▁TALKED
- ▁RIDE
- STONE
- ▁FAVOUR
- ▁WELCOME
- ▁SEIZED
- OU
- ▁CONTROL
- ▁ORDERED
- ▁ANGEL
- ▁USUALLY
- ▁POET
- ▁BOLD
- LINE
- ▁ADVENTURE
- ▁WATCHING
- ▁FOLK
- ▁MISTRESS
- IZE
- ▁GROWING
- ▁CAVE
- ▁EVIDENCE
- ▁FINGER
- ▁SEVENTEEN
- ▁MOVING
- EOUS
- ▁DOESN
- ▁COW
- ▁TYPE
- ▁BOIL
- ▁TALE
- ▁DELIVER
- ▁FARM
- ▁MONSIEUR
- ▁GATHERED
- ▁FEELINGS
- ▁RATE
- ▁REMARKED
- ▁PUTTING
- ▁MAT
- ▁CONTRARY
- ▁CRIME
- ▁PLA
- ▁COL
- ▁NEARER
- TES
- ▁CIVIL
- ▁SHAME
- ▁LOOSE
- ▁DISCOVER
- ▁FLAT
- ▁TWICE
- ▁FAIL
- VIS
- ▁UNC
- EA
- ▁EUROPE
- ▁PATIENT
- ▁UNTO
- ▁SUFFER
- ▁PAIR
- ▁TREASURE
- OSE
- ▁EAGER
- ▁FLY
- ▁N
- ▁VAL
- ▁DAN
- ▁SALT
- ▁BORE
- BBE
- ▁ARTHUR
- ▁AFFAIRS
- ▁SLOW
- ▁CONSIST
- ▁DEVIL
- LAN
- ▁AFFECTION
- ▁ENGAGED
- ▁KISS
- ▁YA
- ▁OFFICER
- IFICATION
- ▁LAMP
- ▁PARTS
- HEN
- ▁MILK
- ▁PROCESS
- ▁GIFT
- ▁PULLED
- ▁HID
- ▁RAY
- ▁EXCELLENT
- ▁IMPRESSION
- ▁AUTHORITY
- ▁PROVED
- ▁TELLING
- TTE
- ▁TOWER
- ▁CONSEQUENCE
- ▁FAVOR
- ▁FLEW
- ▁CHARLES
- ISTS
- ▁ADDRESS
- ▁FAMILIAR
- ▁LIMIT
- ▁CONFIDENCE
- ▁RARE
- ▁WEEKS
- ▁WOODS
- ▁INTENTION
- ▁DIRECT
- ▁PERFORM
- ▁SOLEMN
- ▁DISTANT
- ▁IMAGE
- ▁PRESIDENT
- ▁FIRM
- ▁INDIAN
- ▁RANK
- ▁LIKED
- ▁AGREE
- ▁HOUSES
- ▁WIL
- ▁MATTERS
- ▁PRISON
- ▁MODE
- ▁MAJOR
- ▁WORKING
- ▁SLIP
- ▁WEIGHT
- ▁AWARE
- ▁BUSY
- ▁LOOKS
- ▁WOUND
- ▁THOR
- ▁BATH
- ▁EXERCISE
- ▁SIMILAR
- ▁WORE
- ▁AMOUNT
- ▁QUESTIONS
- ▁VIOLENT
- ▁EXCUSE
- ▁ASIDE
- ▁TUR
- ▁DULL
- OF
- ▁EMPEROR
- ▁NEVERTHELESS
- ▁SHOUT
- ▁EXPLAINED
- ▁SIZE
- ▁ACCOMPLISH
- FORD
- CAN
- ▁MISTAKE
- ▁INSTANTLY
- ▁SMOOTH
- ▁STRIKE
- ▁BOB
- ISED
- ▁HORROR
- ▁SCIENCE
- ▁PROTEST
- ▁MANAGE
- ▁OBEY
- ▁NECESSITY
- ▁SPLENDID
- ▁PRESS
- ▁INTERESTING
- ▁RELIGION
- ▁UNKNOWN
- ▁FIERCE
- ▁DISAPPEARED
- ▁HOLY
- ▁HATE
- ▁PLAYED
- ▁LIN
- ▁NATURALLY
- ▁DROVE
- ▁LOUIS
- TIES
- ▁BRAND
- INESS
- RIE
- ▁SHOOT
- ▁CONSENT
- ▁SEATED
- ▁LINES
- GUE
- ▁AGREED
- ▁CIRCLE
- ▁STIR
- ▁STREETS
- ▁TASK
- ▁RID
- ▁PRODUCED
- ▁ACCIDENT
- ▁WITNESS
- ▁LIBERTY
- ▁DETAIL
- ▁MINISTER
- ▁POWERFUL
- ▁SAVAGE
- ▁SIXTEEN
- ▁PRETEND
- ▁COAST
- ▁SQU
- ▁UTTER
- ▁NAMED
- ▁CLEVER
- ▁ADMIT
- ▁COUPLE
- ▁WICKED
- ▁MESSAGE
- ▁TEMPLE
- ▁STONES
- ▁YESTERDAY
- ▁HILLS
- DAY
- ▁SLIGHT
- ▁DIAMOND
- ▁POSSIBLY
- ▁AFFAIR
- ▁ORIGINAL
- ▁HEARING
- ▁WORTHY
- ▁SELL
- NEY
- ICK
- ▁COTTAGE
- ▁SACRIFICE
- ▁PROGRESS
- ▁SHOCK
- ▁DESIGN
- ▁SOUGHT
- ▁PIT
- ▁SUNDAY
- ▁OTHERWISE
- ▁CABIN
- ▁PRAYER
- ▁DWELL
- ▁GAIN
- ▁BRIDGE
- ▁PARTICULARLY
- ▁YIELD
- ▁TREAT
- RIGHT
- ▁OAK
- ▁ROPE
- WIN
- ▁ORDERS
- ▁SUSPECT
- ▁EDWARD
- AB
- ▁ELEVEN
- ▁TEETH
- ▁OCCURRED
- DDING
- ▁AMERICA
- ▁FALLING
- ▁LION
- ▁DEPART
- ▁KEEPING
- ▁DEMAND
- ▁PAUSED
- ▁CEASED
- INA
- ▁FUN
- ▁CHEER
- ▁PARDON
- ▁NATIVE
- LUS
- LOW
- ▁DOGS
- ▁REQUIRED
- ILITY
- ▁ELECT
- ▁ENTERTAIN
- ITUDE
- ▁HUGE
- ▁CARRYING
- ▁BLU
- ▁INSIST
- ▁SATISFACTION
- ▁HUNT
- ▁COUNTENANCE
- ▁UPPER
- ▁MAIDEN
- ▁FAILED
- ▁JAMES
- ▁FOREIGN
- ▁GATHER
- ▁TEST
- BOARD
- ▁TERMS
- ▁SILK
- ▁BEG
- ▁BROTHERS
- ▁PAGE
- ▁KNEES
- ▁SHOWN
- ▁PROFESSOR
- ▁MIGHTY
- ▁DEFI
- ▁CHARM
- ▁REQUIRE
- ▁LOG
- MORE
- ▁PROOF
- ▁POSSESSED
- ▁SOFTLY
- ▁UNFORTUNATE
- ▁PRICE
- ▁SEVERE
- ▁SINGING
- ▁STAGE
- ▁FREEDOM
- ▁SHOUTED
- ▁FARTHER
- ▁MAJESTY
- ▁PREVIOUS
- ▁GUIDE
- ▁MATCH
- ▁CHEST
- ▁INTENDED
- ▁BI
- ▁EXCITEMENT
- ▁OFFICERS
- ▁SUR
- ▁SHAKE
- ▁SENTIMENT
- ▁GENTLY
- ▁SUCCEEDED
- ▁MENTION
- ▁LOCK
- ▁ACQUAINTANCE
- ▁IMAGINATION
- ▁PHYSICAL
- ▁LEADING
- ▁SLAVE
- ▁CART
- ▁POINTED
- ▁STEAM
- ▁SHADE
- ▁PIPE
- ▁BASE
- ▁INVENT
- ▁ALAS
- ▁WORKED
- ▁REGRET
- ▁BUR
- ▁FAITHFUL
- ▁MENTIONED
- ▁RECORD
- ▁COMPLAIN
- ▁SUPERIOR
- ▁BAY
- ▁PAL
- EMENT
- UE
- ▁SEVENTY
- ▁HOTEL
- ▁SHEEP
- ▁MEAL
- ▁ADVICE
- ▁HIDDEN
- ▁DEMANDED
- ▁CONSCIOUS
- ▁BROW
- ▁POSSESS
- ▁FOURTH
- ▁EVENTS
- ▁FRI
- ▁PRAISE
- ▁ADVANCED
- ▁RESOLVED
- ▁STUFF
- ▁CHEERFUL
- ▁BIRTH
- ▁GRIEF
- ▁AFFORD
- ▁FAIRY
- ▁WAKE
- ▁SIDES
- ▁SUBSTANCE
- ▁ARTICLE
- ▁LEVEL
- ▁MIST
- ▁JOINED
- ▁PRACTICAL
- ▁CLEARLY
- ▁TRACE
- ▁AWAKE
- ▁OBSERVE
- ▁BASKET
- ▁LACK
- VILLE
- ▁SPIRITS
- ▁EXCITED
- ▁ABANDON
- ▁SHINING
- ▁FULLY
- ▁CALLING
- ▁CONSIDERABLE
- ▁SPRANG
- ▁MILE
- ▁DOZEN
- ▁PEA
- ▁DANGEROUS
- ▁WIT
- ▁JEW
- ▁POUNDS
- ▁FOX
- ▁INFORMATION
- ▁LIES
- ▁DECK
- NNY
- ▁PAUL
- ▁STARS
- ▁ANGER
- ▁SETTLE
- ▁WILLING
- ▁ADAM
- ▁FACES
- ▁SMITH
- ▁IMPORTANCE
- ▁STRAIN
- WAR
- ▁SAM
- ▁FEATHER
- ▁SERVED
- ▁AUTHOR
- ▁PERCEIVED
- ▁FLAME
- ▁DIVINE
- ▁TRAIL
- ▁ANYBODY
- ▁SIGH
- ▁DELICATE
- KY
- ▁FOLD
- ▁HAVEN
- ▁DESIRED
- ▁CURIOSITY
- ▁PRACTICE
- ▁CONSIDERATION
- ▁ABSOLUTELY
- ▁CITIZEN
- ▁BOTTLE
- ▁INTERESTED
- ▁MEAT
- ▁OCCUPIED
- ▁CHOOSE
- ▁THROAT
- ETTE
- ▁CANDLE
- ▁DAWN
- ▁PROTECT
- ▁SENTENCE
- IED
- ▁ROCKS
- ▁PORTION
- ▁APPARENTLY
- ▁PRESENTED
- ▁TIGHT
- ▁ACTUALLY
- ▁DYING
- ▁HAM
- ▁DAILY
- ▁SUFFERED
- ▁POLITICAL
- ▁BODIES
- ▁MODERN
- ▁COMPLETELY
- ▁SOONER
- TAN
- ▁PROP
- ▁ADVANCE
- ▁REFUSED
- ▁FARMER
- ▁POLITE
- ▁THUNDER
- ▁BRIEF
- ▁ELSIE
- ▁SAILOR
- ▁SUGGESTED
- ▁PLATE
- ▁AID
- ▁FLESH
- ▁WEEP
- ▁BUCK
- ▁ANTI
- ▁OCEAN
- ▁SPEND
- WELL
- ▁ODD
- ▁GOVERNOR
- ▁ENTRANCE
- ▁SUSPICION
- ▁STEPPED
- ▁RAPIDLY
- ▁CHECK
- ▁HIDE
- ▁FLIGHT
- ▁CLUB
- ▁ENTIRE
- ▁INDIANS
- ASH
- ▁CAPITAL
- ▁MAMMA
- HAR
- ▁CORRECT
- ▁CRACK
- ▁SENSATION
- ▁WORST
- ▁PACE
- ▁MIDST
- ▁AUGUST
- ▁PROPORTION
- ▁INNOCENT
- LINESS
- ▁REGARDED
- ▁DRIVEN
- ORD
- ▁HASTE
- ▁EDUCATION
- ▁EMPLOY
- ▁TRULY
- ▁INSTRUMENT
- ▁MAG
- ▁FRAME
- ▁FOOLISH
- ▁TAUGHT
- ▁HANG
- ▁ARGUMENT
- ▁NINETEEN
- ▁ELDER
- ▁NAY
- ▁NEEDED
- ▁NEIGHBOR
- ▁INSTRUCT
- ▁PAPERS
- ▁REWARD
- ▁EQUALLY
- ▁FIELDS
- ▁DIG
- HIN
- ▁CONDITIONS
- JA
- ▁SPAR
- ▁REQUEST
- ▁WORN
- ▁REMARKABLE
- ▁LOAD
- ▁WORSHIP
- ▁PARK
- ▁KI
- ▁INTERRUPTED
- ▁SKILL
- ▁TERM
- LAC
- ▁CRITIC
- ▁DISTRESS
- ▁BELIEF
- ▁STERN
- IGHT
- ▁TRACK
- ▁HUNTING
- ▁JEWEL
- ▁GRADUALLY
- ▁GLOW
- ▁RUSHED
- ▁MENTAL
- ▁VISITOR
- ▁PICKED
- ▁BEHOLD
- ▁EXPRESSED
- ▁RUB
- ▁SKI
- ARTAGNAN
- ▁MOREOVER
- ▁OPERATION
- ▁CAREFUL
- ▁KEEN
- ▁ASSERT
- ▁WANDER
- ▁ENEMIES
- ▁MYSTERIOUS
- ▁DEPTH
- ▁PREFER
- ▁CROSSED
- ▁CHARMING
- ▁DREAD
- ▁FLOUR
- ▁ROBIN
- ▁TRE
- ▁RELIEF
- ▁INQUIRED
- ▁APPLE
- ▁HENCE
- ▁WINGS
- ▁CHOICE
- ▁JUD
- OO
- ▁SPECIES
- ▁DELIGHTED
- IUM
- ▁RAPID
- ▁APPEAL
- ▁FAMOUS
- ▁USEFUL
- ▁HELEN
- ▁NEWSPAPER
- ▁PLENTY
- ▁BEARING
- ▁NERVOUS
- ▁PARA
- ▁URGE
- ▁ROAR
- ▁WOUNDED
- ▁CHAIN
- ▁PRODUCE
- ▁REFLECTION
- ▁MERCHANT
- ▁QUARREL
- ▁GLORY
- ▁BEGUN
- ▁BARON
- CUS
- ▁QUEER
- ▁MIX
- ▁GAZE
- ▁WHISPER
- ▁BURIED
- ▁DIV
- ▁CARD
- ▁FREQUENTLY
- ▁TIP
- ▁KNEE
- ▁REGION
- ▁ROOT
- ▁LEST
- ▁JEALOUS
- CTOR
- ▁SAVED
- ▁ASKING
- ▁TRIP
- QUA
- ▁UNION
- HY
- ▁COMPANIONS
- ▁SHIPS
- ▁HALE
- ▁APPROACHED
- ▁HARRY
- ▁DRUNK
- ▁ARRIVAL
- ▁SLEPT
- ▁FURNISH
- HEAD
- ▁PIG
- ▁ABSENCE
- ▁PHIL
- ▁HEAP
- ▁SHOES
- ▁CONSCIOUSNESS
- ▁KINDLY
- ▁EVIDENT
- ▁SCAR
- ▁DETERMIN
- ▁GRASP
- ▁STEAL
- ▁OWE
- ▁KNIFE
- ▁PRECIOUS
- ▁ELEMENT
- ▁PROCEEDED
- ▁FEVER
- ▁LEADER
- ▁RISK
- ▁EASE
- ▁GRIM
- ▁MOUNT
- ▁MEANWHILE
- ▁CENTURY
- OON
- ▁JUDGMENT
- ▁AROSE
- ▁VISION
- ▁SPARE
- ▁EXTREME
- ▁CONSTANT
- ▁OBSERVATION
- ▁THRUST
- ▁DELAY
- ▁CENT
- ▁INCLUD
- ▁LIFT
- ▁ADMIRE
- ▁ISSUE
- ▁FRIENDSHIP
- ▁LESSON
- ▁PRINCIPAL
- ▁MOURN
- ▁ACCEPTED
- ▁BURNING
- ▁CAPABLE
- ▁EXTRAORDINARY
- ▁SANG
- ▁REMOVED
- ▁HOPED
- ▁HORN
- ▁ALICE
- ▁MUD
- ▁APARTMENT
- ▁FIGHTING
- ▁BLAME
- ▁TREMBLING
- ▁SOMEBODY
- ▁ANYONE
- ▁BRIDE
- ▁READER
- ▁ROB
- ▁EVERYWHERE
- ▁LABOUR
- ▁RECALL
- ▁BULL
- ▁HIT
- ▁COUNCIL
- ▁POPULAR
- ▁CHAP
- ▁TRIAL
- ▁DUN
- ▁WISHES
- ▁BRILLIANT
- ▁ASSURED
- ▁FORGOT
- ▁CONTINUE
- ▁ACKNOWLEDG
- ▁RETREAT
- ▁INCREASED
- ▁CONTEMPT
- ▁GRANDFATHER
- ▁SYMPATHY
- ▁GHOST
- ▁STRETCHED
- ▁CREATURES
- ▁CAB
- ▁HIND
- ▁PLAYING
- ▁MISERABLE
- ▁MEMBERS
- ▁KINDNESS
- ▁HIGHEST
- ▁PRIM
- ▁KISSED
- ▁DESERVE
- ▁HUT
- ▁BEGGED
- ▁EIGHTY
- ▁CLOSELY
- ▁WONDERED
- ▁MILITARY
- ▁REMIND
- ▁ACCORDINGLY
- ▁LARGER
- ▁MAINTAIN
- ▁ENGINE
- ▁MOTIVE
- ▁DESTROY
- ▁STRIP
- ▁HANS
- ▁AHEAD
- ▁INFINITE
- ▁PROMPT
- ▁INFORMED
- TTLE
- ▁PEER
- ▁PRESSED
- ▁TRAP
- ▁SOMEWHERE
- ▁BOUGHT
- ▁VISIBLE
- ▁ASHAMED
- ▁TEAR
- ▁NEIGHBOUR
- ▁CONSTITUTION
- ▁INTELLIGENCE
- ▁PROFESSION
- ▁HUNGRY
- RIDGE
- ▁SMELL
- ▁STORIES
- ▁LISTENING
- ▁APPROACH
- ▁STRING
- ▁EXPLANATION
- ▁IMMENSE
- ▁RELIGIOUS
- ▁THROUGHOUT
- ▁HOLLOW
- ▁AWAIT
- ▁FLYING
- ▁SCREAM
- ▁ACTIVE
- ▁RUM
- ▁PRODUCT
- ▁UNHAPPY
- ▁VAGUE
- ARIES
- ▁ELIZABETH
- ▁STUPID
- ▁DIGNITY
- ▁ISABEL
- GAR
- ▁BRO
- ▁PITCH
- ▁COMRADE
- ▁STIFF
- ▁RECKON
- ▁SOLD
- ▁SPARK
- ▁STRO
- ▁CRYING
- ▁MAGIC
- ▁REPEAT
- PORT
- ▁MARKED
- ▁COMFORTABLE
- ▁PROJECT
- ▁BECOMING
- ▁PARENTS
- ▁SHELTER
- ▁STOLE
- ▁HINT
- ▁NEST
- ▁TRICK
- ▁THOROUGHLY
- ▁HOSPITAL
- ▁WEAPON
- ▁ROME
- ▁STYLE
- ▁ADMITTED
- ▁SAFETY
- FIELD
- ▁UNDERSTANDING
- ▁TREMBLE
- ▁PRINT
- ▁SLAVES
- ▁WEARY
- ▁ARTIST
- ▁CREDIT
- BURG
- ▁CONCLUSION
- ▁SELDOM
- ▁UNUSUAL
- ▁CLOUDS
- ▁UNABLE
- ▁GAY
- ▁HANGING
- ▁SCR
- ▁BOWED
- ▁DAVID
- ▁VOL
- ▁PUSHED
- ▁ESCAPED
- MOND
- ▁WARN
- ▁BETRAY
- ▁EGGS
- ▁PLAINLY
- ▁EXHIBIT
- ▁DISPLAY
- ▁MEMBER
- ▁GRIN
- ▁PROSPECT
- ▁BRUSH
- ▁BID
- ▁SUCCESSFUL
- ▁EXTENT
- ▁PERSUADE
- ▁MID
- ▁MOOD
- ▁ARRANGED
- ▁UNIVERSAL
- ▁JIM
- ▁SIGNAL
- ▁WHILST
- ▁PHILIP
- ▁WOLF
- RATE
- ▁EAGERLY
- ▁BILLY
- ▁RETURNING
- ▁CONSCIENCE
- ▁FORTUNATE
- ▁FEMALE
- ▁GLEAM
- ▁HASTILY
- ▁PROVIDED
- ▁OBTAIN
- ▁INSTINCT
- ▁CONCERNED
- ▁CONCERNING
- ▁SOMEHOW
- ▁PINK
- ▁RAGE
- ▁ACCUSTOMED
- ▁UNCONSCIOUS
- ▁ADVISE
- ▁BRANCHES
- ▁TINY
- ▁REFUSE
- ▁BISHOP
- ▁SUPPLY
- ▁PEASANT
- ▁LAWYER
- ▁WASTE
- ▁CONNECTION
- ▁DEVELOP
- ▁CORRESPOND
- ▁PLUM
- ▁NODDED
- ▁SLIPPED
- ▁EU
- ▁CONSTANTLY
- CUM
- MMED
- ▁FAIRLY
- HOUSE
- ▁KIT
- ▁RANG
- ▁FEATURES
- ▁PAUSE
- ▁PAINFUL
- ▁JOE
- ▁WHENCE
- ▁LAUGHTER
- ▁COACH
- ▁CHRISTMAS
- ▁EATING
- ▁WHOLLY
- ▁APART
- ▁SUPER
- ▁REVOLUTION
- ▁LONELY
- ▁CHEEKS
- ▁THRONE
- ▁CREW
- ▁ATTAIN
- ▁ESTABLISHED
- TIME
- ▁DASH
- ▁FRIENDLY
- ▁OPERA
- ▁EARL
- ▁EXHAUST
- ▁CLIFF
- ▁REVEAL
- ▁ADOPT
- ▁CENTRE
- ▁MERRY
- ▁SYLVIA
- ▁IDEAL
- ▁MISFORTUNE
- ▁FEAST
- ▁ARAB
- ▁NUT
- ▁FETCH
- ▁FOUGHT
- ▁PILE
- ▁SETTING
- ▁SOURCE
- ▁PERSIST
- ▁MERCY
- ▁BARK
- ▁LUC
- ▁DEEPLY
- ▁COMPARE
- ▁ATTITUDE
- ▁ENDURE
- ▁DELIGHTFUL
- ▁BEARD
- ▁PATIENCE
- ▁LOCAL
- ▁UTTERED
- ▁VICTORY
- ▁TREATED
- ▁SEPARATE
- ▁WAG
- ▁DRAGG
- ▁TITLE
- ▁TROOPS
- ▁TRIUMPH
- ▁REAR
- ▁GAINED
- ▁SINK
- ▁DEFEND
- ▁TIED
- ▁FLED
- ▁DARED
- ▁INCREASE
- ▁POND
- ▁CONQUER
- ▁FOREHEAD
- ▁FAN
- ▁ANXIETY
- ▁ENCOUNTER
- ▁SEX
- ▁HALT
- ▁SANK
- ▁CHEEK
- ▁HUMBLE
- ▁WRITER
- ▁EMPLOYED
- ▁DISTINGUISHED
- ▁RAISE
- ▁WHIP
- ▁GIANT
- ▁RANGE
- ▁OBTAINED
- ▁FLAG
- ▁MAC
- ▁JUMPED
- ▁DISCOVERY
- ▁NATIONAL
- ▁COMMISSION
- ▁POSITIVE
- ▁LOVING
- ▁EXACT
- ▁MURMURED
- ▁GAZED
- ▁REFER
- ▁COLLEGE
- ▁ENCOURAGE
- ▁NOVEL
- ▁CLOCK
- ▁MORTAL
- ▁ROLLED
- ▁RAT
- IZING
- ▁GUILTY
- ▁VICTOR
- WORTH
- ▁PRA
- ▁APPROACHING
- ▁RELATIVE
- ▁ESTATE
- ▁UGLY
- ▁METAL
- ▁ROBERT
- ▁TENT
- ▁ADMIRATION
- ▁FOURTEEN
- ▁BARBAR
- ▁WITCH
- ELLA
- ▁CAKE
- ▁SHONE
- ▁MANAGED
- ▁VOLUME
- ▁GREEK
- ▁DANCING
- ▁WRETCHED
- ▁CONDEMN
- ▁MAGNIFICENT
- ▁CONSULT
- J
- ▁ORGAN
- ▁FLEET
- ▁ARRANGEMENT
- ▁INCIDENT
- ▁MISERY
- ▁ARROW
- ▁STROKE
- ▁ASSIST
- ▁BUILD
- ▁SUCCEED
- ▁DESPERATE
- ▁WIDOW
- UDE
- ▁MARKET
- ▁WISDOM
- ▁PRECISE
- ▁CURRENT
- ▁SPOIL
- ▁BADE
- ▁WOODEN
- ▁RESIST
- ▁OBVIOUS
- ▁SENSIBLE
- FALL
- ▁ADDRESSED
- ▁GIL
- ▁COUNSEL
- ▁PURCHASE
- ▁SELECT
- ▁USELESS
- ▁STARED
- ▁ARREST
- ▁POISON
- ▁FIN
- ▁SWALLOW
- ▁BLOCK
- ▁SLID
- ▁NINETY
- ▁SPORT
- ▁PROVIDE
- ▁ANNA
- ▁LAMB
- ▁INTERVAL
- ▁JUMP
- ▁DESCRIBED
- ▁STRIKING
- ▁PROVISION
- ▁PROPOSED
- ▁MELANCHOLY
- ▁WARRIOR
- ▁SUGGEST
- ▁DEPARTURE
- ▁BURDEN
- ▁LIMB
- ▁TROUBLED
- ▁MEADOW
- ▁SACRED
- ▁SOLID
- ▁TRU
- ▁LUCY
- ▁RECOVER
- ▁ENERGY
- ▁POWDER
- ▁RESUMED
- ▁INTENSE
- ▁BRITISH
- ▁STRAW
- ▁AGREEABLE
- ▁EVERYONE
- ▁CONCERN
- ▁VOYAGE
- ▁SOUTHERN
- ▁BOSOM
- ▁UTTERLY
- ▁FEED
- ▁ESSENTIAL
- ▁CONFINE
- ▁HOUSEHOLD
- ▁EXTREMELY
- ▁WONDERING
- ▁LIST
- ▁PINE
- PHA
- ▁EXPERIMENT
- ▁JOSEPH
- ▁MYSTERY
- ▁RESTORE
- ▁BLUSH
- FOLD
- ▁CHOSEN
- ▁INTELLECT
- ▁CURTAIN
- OLOGY
- ▁MOUNTED
- ▁LAP
- ▁EPI
- ▁PUNISH
- ▁WEDDING
- ▁RECOGNIZED
- ▁DRIFT
- ▁PREPARATION
- ▁RESOLUTION
- ▁OPPRESS
- ▁FIX
- ▁VICTIM
- OGRAPH
- ▁SUMMON
- ▁JULIA
- ▁FLOOD
- ▁WAL
- ULATION
- ▁SLIGHTLY
- ▁LODGE
- ▁WIRE
- ▁CONFUSION
- ▁UNEXPECTED
- ▁CONCEIVE
- ▁PRIZE
- ▁JESUS
- ▁ADDITION
- ▁RUDE
- ▁FATAL
- ▁CARELESS
- ▁PATCH
- ▁KO
- ▁CATHERINE
- ▁PARLIAMENT
- ▁PROFOUND
- ▁ALOUD
- ▁RELIEVE
- ▁PUSH
- ABILITY
- ▁ACCOMPANIED
- ▁SOVEREIGN
- ▁SINGULAR
- ▁ECHO
- ▁COMPOSED
- ▁SHAKING
- ATORY
- ▁ASSISTANCE
- ▁TEACHER
- ▁HORRIBLE
- ▁STRICT
- ▁VERSE
- ▁PUNISHMENT
- ▁GOWN
- ▁MISTAKEN
- ▁VARI
- ▁SWEPT
- ▁GESTURE
- ▁BUSH
- ▁STEEL
- ▁AFFECTED
- ▁DIRECTED
- ▁SURROUNDED
- ▁ABSURD
- ▁SUGAR
- ▁SCRAP
- ▁IMMEDIATE
- ▁SADDLE
- ▁TY
- ▁ARISE
- ▁SIGHED
- ▁EXCHANGE
- ▁IMPATIENT
- ▁SNAP
- ▁EMBRACE
- ▁DISEASE
- ▁PROFIT
- ▁RIDING
- ▁RECOVERED
- ▁GOVERN
- ▁STRETCH
- ▁CONVINCED
- ▁LEANING
- ▁DOMESTIC
- ▁COMPLEX
- ▁MANIFEST
- ▁INDULGE
- ▁GENIUS
- ▁AGENT
- ▁VEIL
- ▁DESCRIPTION
- ▁INCLINED
- ▁DECEIVE
- ▁DARLING
- ▁REIGN
- HU
- ▁ENORMOUS
- ▁RESTRAIN
- ▁DUTIES
- BURY
- TTERED
- ▁POLE
- ▁ENABLE
- ▁EXCEPTION
- ▁INTIMATE
- ▁COUNTESS
- ▁TRIBE
- ▁HANDKERCHIEF
- ▁MIDNIGHT
- ▁PROBLEM
- ▁TRAMP
- ▁OIL
- CAST
- ▁CRUSH
- ▁DISCUSS
- ▁RAM
- ▁TROT
- ▁UNRE
- ▁WHIRL
- ▁LOCKED
- ▁HORIZON
- ▁OFFICIAL
- ▁SCHEME
- ▁DROWN
- ▁PIERRE
- ▁PERMITTED
- ▁CONNECTED
- ▁ASSURE
- ▁COCK
- ▁UTMOST
- ▁DEVOTED
- ▁RELI
- ▁SUFFICIENTLY
- ▁INTELLECTUAL
- ▁CARPET
- ▁OBJECTION
- ▁AFTERWARD
- ▁REALITY
- ▁NEGRO
- ▁RETAIN
- ▁ASCEND
- ▁CEASE
- ▁KATE
- ▁MARVEL
- KO
- ▁BOND
- MOST
- ▁COAL
- GATE
- ▁IGNORANT
- ▁BREAKING
- ▁TWIN
- ▁ASTONISHMENT
- ▁COFFEE
- ▁JAR
- ▁CITIES
- ▁ORIGIN
- ▁EXECUT
- ▁FINAL
- ▁INHABITANTS
- ▁STABLE
- ▁CHIN
- ▁PARTIES
- ▁PLUNGE
- ▁GENEROUS
- ▁DESCRIBE
- ▁ANNOUNCED
- ▁MERIT
- ▁REVERE
- ▁ERE
- ACIOUS
- ZI
- ▁DISAPPOINT
- ▁SUGGESTION
- ▁DOUBTLESS
- ▁TRUNK
- ▁STAMP
- ▁JOB
- ▁APPOINTED
- ▁DIVIDED
- ▁ACQUAINTED
- CHI
- ▁ABSOLUTE
- ▁FEARFUL
- ▁PRIVILEGE
- ▁CRAFT
- ▁STEEP
- ▁HUNTER
- ▁FORBID
- ▁MODEST
- ▁ENDEAVOUR
- ▁SWEEP
- ▁BEHELD
- ▁ABSORB
- ▁CONSTRUCT
- ▁EMPIRE
- ▁EXPEDITION
- ▁ERECT
- ▁OFFEND
- ▁INTEND
- ▁PERMIT
- ▁DESTROYED
- ▁CONTRACT
- ▁THIRST
- ▁WAGON
- ▁EVA
- ▁GLOOM
- ▁ATMOSPHERE
- ▁RESERVE
- ▁VOTE
- ▁GER
- ▁NONSENSE
- ▁PREVAIL
- ▁QUALITY
- ▁CLASP
- ▁CONCLUDED
- ▁RAP
- ▁KATY
- ▁ETERNAL
- ▁MUTTERED
- ▁NEGLECT
- ▁SQUIRE
- ▁CREEP
- LOCK
- ▁ELECTRIC
- ▁HAY
- ▁EXPENSE
- ▁SCORN
- ▁RETIRED
- ▁STOUT
- ▁MURMUR
- ▁SHARPLY
- ▁DISTRICT
- ▁LEAF
- ▁FAILURE
- WICK
- ▁JEAN
- ▁NUMEROUS
- ▁INFANT
- ▁REALIZED
- ▁TRAVELLER
- ▁HUNGER
- ▁JUNE
- ▁MUN
- ▁RECOMMEND
- ▁CREP
- ZZLE
- ▁RICHARD
- WORK
- ▁MONTE
- ▁PREACH
- ▁PALM
- AVI
- ▁ANYWHERE
- ▁DISPOSITION
- ▁MIRROR
- ▁VENTURE
- ▁POUND
- ▁CIGAR
- ▁INVITED
- ▁BENCH
- ▁PROTECTION
- ▁BENEFIT
- ▁THOMAS
- ▁CLERK
- ▁REPROACH
- ▁UNIFORM
- ▁GENERATION
- ▁SEAL
- ▁COMPASS
- ▁WARNING
- ▁EXTENDED
- ▁DIFFICULTIES
- ▁MAYBE
- ▁GROAN
- ▁AFFECT
- ▁COMB
- ▁EARN
- ▁WESTERN
- ▁IDLE
- ▁SCORE
- ▁TAP
- ▁ASTONISHED
- ▁INTRODUCED
- ▁LEISURE
- ▁LIEUTENANT
- ▁VIOLENCE
- ▁FIRMLY
- ▁MONSTER
- ▁UR
- ▁PROPERLY
- ▁TWIST
- ▁PIRATE
- ▁ROBBER
- ▁BATTER
- ▁WEPT
- ▁LEANED
- ▁FOG
- ▁ORNAMENT
- ▁ANDREW
- ▁BUSHES
- ▁REPUBLIC
- ▁CONFIDENT
- ▁LEAN
- ▁DART
- ▁STOOP
- ▁CURL
- ▁COUNTER
- ▁NORTHERN
- ▁PEARL
- ▁NEAREST
- ▁FRANCIS
- ▁WANDERING
- ▁FREQUENT
- ▁STARTLED
- ▁STATEMENT
- ▁OCCUR
- ▁BLOOM
- ▁NERVE
- ▁INSPECT
- ▁INDUCE
- ▁FLATTER
- ▁DATE
- ▁AMBITION
- ▁SLOPE
- ▁MALE
- ▁MADAM
- ▁MONK
- ▁RENT
- ▁CONFIRM
- ▁INVESTIGAT
- ▁RABBIT
- ▁REGIMENT
- ▁SUBMIT
- ▁SPELL
- ▁FURIOUS
- ▁RAIL
- ▁BESTOW
- ▁RALPH
- ▁SCATTERED
- ▁COMPELLED
- ▁THREAD
- ▁CHILL
- ▁DENY
- ▁PRONOUNC
- ▁MANKIND
- ▁CATTLE
- ▁EXECUTION
- ▁REBEL
- ▁SUPREME
- ▁VALUABLE
- ▁LIKEWISE
- ▁CONVEY
- ▁TIDE
- ▁GLOOMY
- ▁COIN
- ▁ACTUAL
- ▁TAX
- ▁PROVINCE
- ▁GRATEFUL
- ▁SPIRITUAL
- ▁VANISHED
- ▁DIANA
- ▁HAUNT
- ▁DRAGON
- ▁CRAWL
- ▁CHINA
- ▁GRATITUDE
- ▁NEAT
- ▁FINISH
- ▁INTENT
- ▁FRIGHT
- ▁EMBARRASS
- ▁THIRTEEN
- ▁RUTH
- ▁SLIGHTEST
- ▁DEVELOPMENT
- ▁INTERVIEW
- ▁SPECTACLE
- ▁BROOK
- VIE
- ▁WEAKNESS
- ▁AUDIENCE
- ▁CONSEQUENTLY
- ▁ABROAD
- ▁ASPECT
- ▁PAINTED
- ▁RELEASE
- ▁INSULT
- ▁SOOTH
- ▁DISAPPOINTMENT
- ▁EMERG
- ▁BRIG
- ▁ESTEEM
- ▁INVITATION
- ▁PASSENGER
- ▁PUBLISH
- ▁PIANO
- ▁IRISH
- ▁DESK
- ▁BEATEN
- ▁FIFTH
- ▁IMPULSE
- ▁SWEAR
- ▁EATEN
- ▁PURPLE
- ▁COMMITTED
- ▁COUNTRIES
- ▁PERCEIVE
- ISON
- ▁CELEBRAT
- ▁GRANDMOTHER
- ▁SHUDDER
- ▁SUNSHINE
- ▁SPANISH
- ▁HITHERTO
- ▁MARILLA
- ▁SNAKE
- ▁MOCK
- ▁INTERFERE
- ▁WALTER
- ▁AMID
- ▁MARBLE
- ▁MISSION
- TERIOR
- ▁DRIVING
- ▁FURNITURE
- ▁STEADY
- ▁CIRCUMSTANCE
- ▁INTERPRET
- ▁ENCHANT
- ▁ERROR
- ▁CONVICTION
- ▁HELPLESS
- ▁MEDICINE
- ▁QUALITIES
- ▁ITALIAN
- ▁HASTENED
- ▁OCCASIONALLY
- ▁PURSUED
- ▁HESITATED
- ▁INDEPENDENT
- ▁OLIVER
- ▁LINGER
- UX
- ▁EXAMINED
- ▁REPENT
- ▁PHYSICIAN
- ▁CHASE
- ▁BELOVED
- ▁ATTACHED
- ▁FLORENCE
- ▁HONEY
- ▁MOUSE
- ▁CRIES
- ▁BAKE
- ▁POEM
- ▁DESTRUCTION
- ▁FULFIL
- ▁MESSENGER
- ▁TRISTRAM
- ▁FANCIED
- ▁EXCESS
- ▁CURSE
- ▁CHU
- ▁QUANTITY
- ▁THORNTON
- ▁CREATED
- ▁CONTINUALLY
- ▁LIGHTNING
- ▁BORNE
- ▁TOTAL
- ▁DISPOSED
- ▁RIFLE
- ▁POLLY
- ▁GOAT
- ▁BACKWARD
- ▁VIRGINIA
- ▁KICK
- ▁PERIL
- ▁QUO
- ▁GLORIOUS
- ▁MULTITUDE
- ▁LEATHER
- ▁ABSENT
- ▁DEMON
- ▁DEBT
- ▁TORTURE
- ▁ACCORD
- ▁MATE
- ▁CATHOLIC
- ▁PILL
- ▁LIBRARY
- ▁PURSUIT
- ▁SHIRT
- ▁DEAREST
- ▁COLLAR
- ▁BEACH
- ▁ROBE
- ▁DECLARE
- ▁BRANCH
- ▁TEMPT
- ▁STEADILY
- ▁DISGUST
- ▁SILLY
- ▁ARRIVE
- ▁DRANK
- ▁LEVI
- ▁COMMUNICAT
- ▁RACHEL
- ▁WASHINGTON
- ▁RESIGN
- ▁MEANTIME
- ▁LACE
- ▁ENGAGEMENT
- ▁QUIVER
- ▁SEPARATED
- ▁DISCUSSION
- ▁VENTURED
- ▁SURROUNDING
- ▁POLISH
- ▁NAIL
- ▁SWELL
- ▁JOKE
- ▁LINCOLN
- ▁STUDENT
- ▁GLITTER
- ▁RUSSIAN
- ▁READILY
- ▁CHRIS
- ▁POVERTY
- ▁DISGRACE
- ▁CHEESE
- ▁HEAVILY
- ▁SCALE
- ▁STAFF
- ▁ENTREAT
- ▁FAREWELL
- ▁LUNCH
- ▁PEEP
- ▁MULE
- ▁SOMEONE
- ▁DISAPPEAR
- ▁DECISION
- ▁PISTOL
- ▁PUN
- ▁SPUR
- ▁ASSUMED
- ▁EXTEND
- ▁ENTHUSIASM
- ▁DEFINITE
- ▁UNDERTAKE
- ▁COMMITTEE
- ▁SIMON
- ▁FENCE
- ▁APPLIED
- ▁RELATED
- ▁VICE
- ▁UNPLEASANT
- ▁PROBABLE
- ▁PROCURE
- ▁FROWN
- ▁CLOAK
- ▁HUMANITY
- ▁FAMILIES
- ▁PHILOSOPHER
- ▁DWARF
- ▁OVERCOME
- ▁DEFEAT
- ▁FASTENED
- ▁MARSH
- ▁CLASSES
- ▁TOMB
- ▁GRACIOUS
- ▁REMOTE
- ▁CELL
- ▁SHRIEK
- ▁RESCUE
- ▁POOL
- ▁ORGANIZ
- ▁CHOSE
- ▁CUTTING
- ▁COWARD
- ▁BORDER
- ▁DIRTY
- ▁MONKEY
- ▁HOOK
- ▁CHUCK
- ▁EMILY
- ▁JEST
- ▁PLAC
- ▁WEIGH
- ▁ASSOCIATE
- ▁GLIMPSE
- ▁STUCK
- ▁BOLT
- ▁MURDERER
- ▁PONY
- ▁DISTINGUISH
- ▁INSTITUTION
- ▁CUNNING
- ▁COMPLIMENT
- ▁APPETITE
- ▁REPUTATION
- ▁FEEBLE
- ▁KIN
- ▁SERIES
- ▁GRACEFUL
- ▁PLATFORM
- ▁BREEZE
- ▁PHRASE
- ▁CLAY
- MONT
- ▁RATTL
- ▁OPPOSITION
- ▁LANE
- ▁BOAST
- ▁GROWTH
- ▁INCLINATION
- ▁BEHAVE
- ▁SUSAN
- ▁DISTINCTION
- ▁DISLIKE
- ▁NICHOLAS
- ▁SATISFY
- ▁DRAMA
- ▁ELBOW
- ▁GAZING
- ▁CONSUM
- ▁SPIN
- ▁OATH
- ▁CHANNEL
- ▁CHARACTERISTIC
- ▁SPEAR
- ▁SLAIN
- ▁SAUCE
- ▁FROG
- ▁CONCEPTION
- ▁TIMID
- ▁ZEAL
- ▁APPARENT
- SHIRE
- ▁CENTER
- ▁VARIETY
- ▁DUSK
- ▁APT
- ▁COLUMN
- ▁REVENGE
- ▁RIVAL
- ▁IMITAT
- ▁PASSIONATE
- ▁SELFISH
- ▁NORMAN
- ▁REPAIR
- ▁THRILL
- ▁TREATMENT
- ▁ROSA
- ▁MARTIN
- ▁INDIFFERENT
- ▁THITHER
- ▁GALLANT
- ▁PEPPER
- ▁RECOLLECT
- ▁VINE
- ▁SCARCE
- ▁SHIELD
- ▁MINGLED
- CLOSE
- ▁HARSH
- ▁BRICK
- ▁HUMOR
- ▁MISCHIEF
- ▁TREMENDOUS
- ▁FUNCTION
- ▁SMART
- ▁SULTAN
- ▁DISMISS
- ▁THREATENED
- ▁CHEAP
- ▁FLOCK
- ▁ENDEAVOR
- ▁WHISK
- ▁ITALY
- ▁WAIST
- ▁FLUTTER
- ▁SMOKING
- ▁MONARCH
- ▁AFRICA
- ▁ACCUSE
- ▁HERBERT
- ▁REFRESH
- ▁REJOICE
- ▁PILLOW
- ▁EXPECTATION
- ▁POETRY
- ▁HOPELESS
- ▁PERISH
- ▁PHILOSOPHY
- ▁WHISTLE
- ▁BERNARD
- ▁LAMENT
- ▁IMPROVE
- ▁SUP
- ▁PERPLEX
- ▁FOUNTAIN
- ▁LEAGUE
- ▁DESPISE
- ▁IGNORANCE
- ▁REFERENCE
- ▁DUCK
- ▁GROVE
- ▁PURSE
- ▁PARTNER
- ▁PROPHET
- ▁SHIVER
- ▁NEIGHBOURHOOD
- ▁REPRESENTATIVE
- SAIL
- ▁WIP
- ▁ACQUIRED
- ▁CHIMNEY
- ▁DOCTRINE
- ▁MAXIM
- ▁ANGLE
- ▁MAJORITY
- ▁AUTUMN
- ▁CONFUSED
- ▁CRISTO
- ▁ACHIEVE
- ▁DISGUISE
- ▁REDUCED
- ▁EARLIER
- ▁THEATRE
- ▁DECIDE
- MINATED
- OLOGICAL
- ▁OCCUPATION
- ▁VIGOROUS
- ▁CONTINENT
- ▁DECLINE
- ▁COMMUNITY
- ▁MOTIONLESS
- ▁HATRED
- ▁COMMUNICATION
- ▁BOWL
- ▁COMMENT
- ▁APPROVE
- ▁CEREMONY
- ▁CRIMINAL
- ▁SCIENTIFIC
- ▁DUCHESS
- ▁VIVID
- ▁SHIFT
- ▁AVAIL
- ▁DAMP
- ▁JOHNSON
- ▁SLENDER
- ▁CONTRAST
- ▁AMUSEMENT
- ▁PLOT
- ▁LYN
- ▁ASSOCIATION
- ▁SNATCH
- ▁UNCERTAIN
- ▁PRESSURE
- ▁PERCH
- ▁APPLY
- ▁PLANET
- ▁NOTWITHSTANDING
- ▁SWUNG
- ▁STIRRED
- ▁ATTENDANT
- ▁ENJOYMENT
- ▁WORRY
- ▁ALBERT
- ▁NAKED
- ▁TALENT
- ▁MARIAN
- ▁REFORM
- ▁DELIBERATE
- ▁INTELLIGENT
- ▁SENSITIVE
- ▁YONDER
- ▁PUPIL
- ▁FRIGHTFUL
- ▁DOUBTFUL
- ▁STANDARD
- ▁MAGISTRATE
- ▁SHEPHERD
- ▁STOMACH
- ▁DEPOSIT
- ▁RENEW
- ▁HEDGE
- ▁FRANCS
- ▁POSSIBILITY
- ▁RESEMBLE
- ▁FATIGUE
- ▁PORTRAIT
- ▁FAVORITE
- ▁CREAM
- ▁BURG
- ▁SECRETARY
- ▁DIVERS
- ▁ACTIVITY
- ▁SPECULAT
- ▁HUMOUR
- ▁FITTED
- ▁EXTERNAL
- ▁CETERA
- ▁WRAPPED
- ▁WHIT
- ▁FRED
- ▁EXAMINATION
- ▁LODGING
- ▁OWING
- ▁JAW
- ▁CROW
- ▁BALANCE
- ▁PUFF
- ▁TENDERNESS
- ▁PORTHOS
- ▁ANCHOR
- ▁INTERRUPT
- ▁NECESSARILY
- ▁PERPETUAL
- ▁AGONY
- ▁POPE
- ▁SCHOLAR
- ▁SCOTLAND
- ▁SUPPRESS
- ▁WRATH
- ▁WRECK
- ▁EXCEED
- ▁PERFECTION
- ▁INDIA
- ▁TRADITION
- ▁SECTION
- ▁EASTERN
- ▁DOORWAY
- ▁WIVES
- ▁CONVENTION
- ▁ANNOUNC
- ▁EGYPT
- ▁CONTRADICT
- ▁SCRATCH
- ▁CENTRAL
- ▁GLOVE
- ▁WAX
- ▁PREPARE
- ▁ACCOMPANY
- ▁INCREASING
- ▁LIBERAL
- ▁RAISING
- ▁ORANGE
- ▁SHOE
- ▁ATTRIBUTE
- ▁LITERATURE
- ▁PUZZLED
- ▁WITHDRAW
- ▁WHITHER
- ▁HAWK
- ▁MOONLIGHT
- ▁EXAMINE
- ▁HAPPILY
- ▁PRECEDE
- ▁DETECTIVE
- ▁INCHES
- ▁SOLITARY
- ▁DUTCH
- ▁NAPOLEON
- ▁UNEASY
- ▁CARDINAL
- ▁BLEW
- ▁FOWL
- ▁DECORAT
- ▁CHILDHOOD
- ▁TORMENT
- ▁LOSING
- ▁PERMISSION
- ▁BLANK
- ▁UPSTAIRS
- ▁CAPACITY
- ▁TRIFLE
- ▁FOLLY
- ▁RECOGNIZE
- ▁REMOVE
- ▁VENGEANCE
- ▁ENTERPRISE
- ▁BEDROOM
- ▁ANYHOW
- ▁INQUIRY
- ▁ASHES
- ▁DRAG
- ▁HUSH
- ▁AWKWARD
- ▁SATURDAY
- ▁GENUINE
- ▁SURVIV
- ▁SKIRT
- ▁AFFECTIONATE
- ▁TANG
- ▁MUTUAL
- ▁DISPUTE
- ▁EAGLE
- ▁INCOME
- ▁BIND
- ▁FAME
- ▁IMPROVEMENT
- ROVING
- ▁DIFFER
- ▁AWOKE
- ▁SLEEVE
- ▁SOLITUDE
- ▁FAVOURITE
- JI
- ▁DETECT
- ▁COMPREHEND
- ▁PREPARING
- ▁SERPENT
- ▁SUMMIT
- ▁KNOT
- ▁KNIT
- ▁COPY
- ▁STOPPING
- ▁FADED
- ▁HIDEOUS
- ▁JULIE
- STEAD
- ▁SHINE
- ▁CONFLICT
- ▁PROPOSITION
- ▁REFUGE
- ▁GALLERY
- ▁BUNDLE
- ▁AXE
- ▁SLAVERY
- ▁MASK
- ▁ALYOSHA
- ▁LADDER
- ▁DEPARTMENT
- ▁DISCHARGE
- ▁DEPRESS
- ▁GALLOP
- ▁SCARLET
- ▁KITTY
- ▁RECEIVING
- ▁SURRENDER
- ▁SUSTAIN
- ▁TWILIGHT
- ▁CONGRESS
- ▁IRELAND
- ▁FUNNY
- ▁LEND
- ▁CONSTITUTE
- ▁FUNERAL
- ▁CRYSTAL
- ▁SPAIN
- ▁EXCEEDINGLY
- ▁DAMN
- ▁COMMUN
- ▁CIVILIZATION
- ▁PREJUDICE
- ▁PORCH
- ▁ASSISTANT
- ▁INDUSTRY
- ▁TUMBLE
- ▁DEFENCE
- ▁HITHER
- ▁SMOT
- ▁COLONI
- ▁AMAZEMENT
- ▁MARGUERITE
- ▁MIRACLE
- ▁INHERIT
- ▁BEGGAR
- ▁ENVELOPE
- ▁INDIGNATION
- ▁NATASHA
- ▁PROPOSAL
- ▁FRAGMENT
- ▁ROUSED
- ▁ROAST
- ENCIES
- ▁COMMENCED
- ▁RESOURCE
- ▁POPULATION
- ▁QUOTH
- ▁PURSUE
- ▁EDUCAT
- ▁AFFLICT
- ▁CONTACT
- ▁CRIMSON
- ▁DIVISION
- ▁DISORDER
- ▁COPPER
- ▁SOLICIT
- ▁MODERATE
- ▁DRUM
- ▁SWIM
- ▁SALUTE
- ▁ASSUME
- ▁MUSCLE
- ▁OVERWHELM
- ▁SHAKESPEARE
- ▁STRUGGLING
- ▁TRANQUIL
- ▁CHICKEN
- ▁TREAD
- ▁CLAW
- ▁BIBLE
- ▁RIDGE
- ▁THREAT
- ▁VELVET
- ▁EXPOSED
- ▁IDIOT
- ▁BARREL
- ▁PENNY
- ▁TEMPTATION
- ▁DANGLARS
- ▁CENTURIES
- ▁DISTRIBUT
- ▁REJECT
- ▁RETORTED
- ▁CONCENTRAT
- ▁CORDIAL
- ▁MOTOR
- ▁CANNON
- KEEP
- ▁WRETCH
- ▁ASSURANCE
- ▁THIEF
- ▁SURVEY
- ▁VITAL
- ▁RAILWAY
- ▁JACKSON
- ▁CRASH
- ▁GROWL
- ▁COMBAT
- ▁RECOLLECTION
- ▁SECURITY
- ▁JACOB
- ▁CLUTCH
- ▁BLANKET
- ▁NANCY
- ▁CELLAR
- ▁CONVENIENT
- ▁INDIGNANT
- ▁COARSE
- ▁WORM
- ▁SCREEN
- ▁TRANSPORT
- ▁BULLET
- ▁APPRECIATE
- ▁DEVOTION
- ▁INVISIBLE
- ▁DRIED
- ▁MIXTURE
- ▁CANDID
- ▁PERFORMANCE
- ▁RIPE
- ▁EXQUISITE
- ▁BARGAIN
- ▁TOBACCO
- ▁LOYAL
- ▁MOULD
- ▁ATTENTIVE
- ▁DOROTHY
- ▁BRUTE
- ▁ESTABLISHMENT
- ▁ABILITY
- ▁INHABIT
- ▁OBSCURE
- ▁BORROW
- ▁ESSENCE
- ▁DISMAY
- ▁FLEE
- ▁BLADE
- ▁PLUCK
- ▁COFFIN
- ▁SUNSET
- ▁STEPHEN
- ▁ECONOMIC
- ▁HOLIDAY
- ▁MECHANICAL
- ▁COTTON
- ▁AWAKENED
- ▁SEIZE
- ▁RIDICULOUS
- ▁SANCHO
- ▁HESITATION
- ▁CORPSE
- ▁SAVING
- HOLD
- FOOT
- ▁ELDEST
- ▁DESPITE
- ▁EDITH
- ▁CHERISH
- ▁RESISTANCE
- ▁WILSON
- ▁ARGUE
- ▁INQUIRE
- ▁APPREHENSION
- ▁AVENUE
- ▁DRAKE
- ▁PROPOSE
- HURST
- ▁INFERIOR
- ▁STAIRCASE
- ▁WHEREFORE
- ▁CARLYLE
- ▁COUCH
- ▁ROUTE
- ▁POLITICS
- ▁TOMORROW
- ▁THRONG
- ▁NAUGHT
- ▁SUNLIGHT
- ▁INDIFFERENCE
- ▁OBEDIENCE
- ▁RECEPTION
- ▁VEGETABLE
- ▁IMPERFECT
- ▁RESIDENCE
- ▁TURKEY
- ▁VIOLET
- ▁SARAH
- ▁ALTAR
- ▁GRIEVE
- ▁JERK
- ▁ENSU
- ▁MAGICIAN
- ▁BLOSSOM
- ▁LANTERN
- ▁RESOLUTE
- ▁THOUGHTFULLY
- ▁FORTNIGHT
- ▁TRUMPET
- ▁VALJEAN
- ▁UNWILLING
- ▁LECTURE
- ▁WHEREUPON
- ▁HOLLAND
- ▁CHANGING
- ▁CREEK
- ▁SLICE
- ▁NORMAL
- ▁ANNIE
- ▁ACCENT
- ▁FREDERICK
- ▁DISAGREEABLE
- ▁RUBBED
- ▁DUMB
- ▁ESTABLISH
- ▁IMPORT
- ▁AFFIRM
- ▁MATTHEW
- ▁BRISK
- ▁CONVERT
- ▁BENDING
- ▁IVAN
- ▁MADEMOISELLE
- ▁MICHAEL
- ▁EASIER
- ▁JONES
- ▁FACING
- ▁EXCELLENCY
- ▁LITERARY
- ▁GOSSIP
- ▁DEVOUR
- ▁STAGGER
- ▁PENCIL
- ▁AVERAGE
- ▁HAMMER
- ▁TRIUMPHANT
- ▁PREFERRED
- ▁APPLICATION
- ▁OCCUPY
- ▁AUTHORITIES
- BURN
- ▁ASCERTAIN
- ▁CORRIDOR
- ▁DELICIOUS
- ▁PRACTISE
- ▁UNIVERSE
- ▁SHILLING
- ▁CONTEST
- ▁ASHORE
- ▁COMMIT
- ▁ADMINISTRATION
- ▁STUDIED
- ▁RIGID
- ▁ADORN
- ▁ELSEWHERE
- ▁INNOCENCE
- ▁JOURNAL
- ▁LANDSCAPE
- ▁TELEGRAPH
- ▁ANGRILY
- ▁CAMPAIGN
- ▁UNJUST
- ▁CHALLENGE
- ▁TORRENT
- ▁RELATE
- ▁ASSEMBLED
- ▁IMPRESSED
- ▁CANOE
- ▁CONCLUD
- ▁QUIXOTE
- ▁SATISFACTORY
- ▁NIECE
- ▁DEAF
- ▁RAFT
- ▁JIMMY
- ▁GLID
- ▁REGULAT
- ▁CHATTER
- ▁GLACIER
- ▁ENVY
- ▁STATUE
- ▁BOSTON
- ▁RICHMOND
- ▁DENIED
- ▁FANNY
- ▁SOLOMON
- ▁VULGAR
- ▁STALK
- ▁REPLACE
- ▁SPOON
- ▁BASIN
- ▁FEATURE
- ▁CONVICT
- ▁ARCHITECT
- ▁ADMIRAL
- ▁RIBBON
- ▁PERMANENT
- ▁APRIL
- ▁JOLLY
- ▁NEIGHBORHOOD
- ▁IMPART
- BOROUGH
- CAMP
- ▁HORRID
- ▁IMMORTAL
- ▁PRUDENCE
- ▁SPANIARD
- ▁SUPPOSING
- ▁TELEPHONE
- ▁TEMPERATURE
- ▁PENETRATE
- ▁OYSTER
- ▁APPOINTMENT
- ▁EGYPTIAN
- ▁DWELT
- ▁NEPHEW
- ▁RAILROAD
- ▁SEPTEMBER
- ▁DEVICE
- ▁WHEAT
- ▁GILBERT
- ▁ELEGANT
- ▁ADVERTISE
- ▁RATIONAL
- ▁TURTLE
- ▁BROOD
- ▁ASSEMBLY
- ▁CULTIVATE
- ▁EDITOR
- ▁SPECIMEN
- ▁UNDOUBTEDLY
- ▁WHALE
- ▁DROPPING
- ▁BALLOON
- ▁MEDICAL
- COMB
- ▁COMPOSITION
- ▁FOOTSTEPS
- ▁LAUNCELOT
- ▁DISCOURSE
- ▁ERRAND
- ▁CONVERSE
- ▁ADVANCING
- ▁DOWNSTAIRS
- ▁TUMULT
- ▁CORRUPT
- ▁SUFFICE
- ▁ANGUISH
- ▁SHAGGY
- ▁RETIRE
- ▁TIMBER
- ▁BLAZE
- ▁ABSTRACT
- ▁EMBROIDER
- ▁PHOTOGRAPH
- ▁PROSPERITY
- ▁TERRIBLY
- ▁TERRITORY
- ▁THRESHOLD
- ▁PAVEMENT
- ▁INJURED
- ▁LIMP
- ▁AGITATION
- ▁RASCAL
- ▁PRESUME
- ▁OBSERVING
- ▁OBSTACLE
- ▁SIMPLICITY
- ▁SLUMBER
- ▁SUPPLIED
- ▁COMBINATION
- ▁DRAIN
- ▁WILDERNESS
- ▁BELIEVING
- ▁VILLAIN
- ▁RECKLESS
- ▁INJURY
- ▁CLAPP
- ▁FRIDAY
- ▁HERCULES
- ▁KENNEDY
- ▁SYMPTOM
- ▁SLEDGE
- ▁CEILING
- ▁LEMON
- ▁PLAGUE
- ▁MONDAY
- ▁CANVAS
- ▁IMPATIENCE
- ▁UNCOMFORTABLE
- ▁ACCESS
- ▁FROZEN
- ▁SENATOR
- ▁FRANZ
- ▁SWIMMING
- ▁BARRIER
- ▁ADJUST
- ▁COMPARISON
- ▁PROCLAIM
- ▁WRINKL
- ▁OVERLOOK
- ▁MITYA
- ▁GUILT
- ▁PERCEPTION
- ▁PRECAUTION
- ▁SPECTATOR
- ▁SURPRISING
- ▁DISTRACT
- ▁DISDAIN
- ▁BONNET
- ▁MAGNET
- ▁PROFESS
- ▁CONFOUND
- ▁NARRATIVE
- ▁STRUCTURE
- ▁SKETCH
- ▁ULTIMATE
- ▁GLOBE
- ▁INSECT
- FICIENCY
- ▁ORCHARD
- ▁AMIABLE
- ▁DESCENT
- ▁INDEPENDENCE
- ▁MANUFACTURE
- ▁SPRINKLE
- ▁NIGHTINGALE
- ▁CUSHION
- ▁EMINENT
- ▁SCOTT
- ▁ARRAY
- ▁COSETTE
- ▁WAVING
- ▁EXTRACT
- ▁IRREGULAR
- ▁PERSECUT
- ▁DERIVED
- ▁WITHDREW
- ▁CAUTION
- ▁SUSPICIOUS
- ▁MEMORIES
- ▁NOWHERE
- ▁SUBTLE
- ▁THOROUGH
- Q
- ▁APPROPRIATE
- ▁SLAUGHTER
- ▁YOURSELVES
- ▁THUMB
- ▁TWAS
- ▁ABODE
- ▁BIDDING
- ▁CONSPICUOUS
- ▁REBECCA
- ▁SERGEANT
- ▁APRON
- ▁ANTICIPATE
- ▁DISCIPLINE
- ▁GLANCING
- ▁PILGRIM
- ▁SULLEN
- ▁CONTRIBUTE
- ▁PRAIRIE
- ▁CARVED
- ▁COMMERCE
- ▁EXCLAMATION
- ▁MUSCULAR
- ▁NOVEMBER
- ▁PHENOMENA
- ▁SYMBOL
- ▁UMBRELLA
- ▁DIMINISH
- ▁PARLOUR
- ▁THREATENING
- ▁STUMP
- ▁EXTENSIVE
- ▁PLEASING
- ▁REMEMBRANCE
- ▁COMBINED
- ▁SHERIFF
- ▁SHAFT
- ▁LAURA
- ▁INTERCOURSE
- ▁STRICKEN
- ▁SUPPLIES
- ▁LANDLORD
- ▁SHRINK
- ▁PRICK
- ▁CAESAR
- ▁DRUG
- ▁BEWILDERED
- ▁NAUTILUS
- ▁BRUTAL
- ▁COMMERCIAL
- ▁MAGGIE
- ▁SPHERE
- ▁VIRGIN
- ▁BRETHREN
- ▁DESTINY
- ▁POLICY
- ▁TERRIFIED
- ▁HOUSEKEEPER
- ▁CRAZY
- ▁ARDENT
- ▁DISCERN
- ▁WRAP
- ▁MARQUIS
- ▁RUSSIA
- MOUTH
- ▁BRITAIN
- ▁HARBOUR
- ▁CONCERT
- ▁DONKEY
- ▁DAMAGE
- ▁SLIM
- ABOUT
- ▁LUXURY
- ▁MONSTROUS
- ▁TENDENCY
- ▁PARADISE
- ▁CULTURE
- ▁JULIUS
- ▁RAOUL
- ▁REMEDY
- ▁DECAY
- ▁SCOLD
- ▁SPLIT
- ▁ASSAULT
- ▁DECEMBER
- ▁MOSCOW
- ▁EXPLORE
- ▁TROUSERS
- ▁WRIST
- PIECE
- ▁MUSKET
- ▁VALENTINE
- ▁TYRANT
- ▁ABRAHAM
- ▁MEDIUM
- ▁ARTIFICIAL
- ▁FACULTY
- ▁OBLIGATION
- ▁RESEMBLANCE
- ▁INQUIRIES
- ▁DETAIN
- ▁SWARM
- ▁PLEDGE
- ▁ADMIRABLE
- ▁DEFECT
- ▁SUPERINTEND
- ▁PATRIOT
- ▁CLUNG
- ▁DISMAL
- ▁RECIT
- ▁IGNOR
- ▁AMELIA
- ▁JUSTIFY
- ▁ELEPHANT
- ▁ESTIMATE
- ▁KNELT
- ▁SERVING
- ▁WHIM
- ▁SHRILL
- ▁STUDIO
- ▁TEXT
- ▁ALEXANDER
- ▁WROUGHT
- ▁ABUNDANT
- ▁SITUATED
- ▁REGAIN
- ▁FIERY
- ▁SNEER
- ▁SWEAT
- ▁GLARE
- ▁NIGH
- ▁ESCORT
- ▁INEVITABLE
- ▁PSMITH
- ▁RELUCTANT
- ▁PRECEDING
- ▁RESORT
- ▁OUTRAGE
- ▁AMBASSADOR
- ▁CONSOLATION
- ▁RECOGNITION
- ▁REMORSE
- ▁BEHALF
- ▁FORMIDABLE
- ▁GRAVITY
- ▁DIVIDE
- ▁CONFRONT
- ▁GIGANTIC
- ▁OCTOBER
- ▁FLANK
- ▁SLEW
- ▁CLARA
- ▁FILM
- ▁BULK
- ▁POMP
- ▁ELEANOR
- ▁EMPHASIS
- ▁JAPANESE
- ▁CAVALRY
- ▁EXCLUSIVE
- ▁PERFUME
- ▁BRONZE
- ▁FEDERAL
- ▁LIQUID
- ▁RUBBING
- ▁OVEN
- DOLPH
- ▁CONVULS
- ▁DEPRIVED
- ▁RESPONSIBILITY
- ▁SIGNIFICANT
- ▁WAISTCOAT
- ▁CLUSTER
- ▁MARTHA
- ▁REVERSE
- ▁ATTORNEY
- ▁DROOP
- ▁SKILFUL
- ▁HABITUAL
- ▁PUMP
- ▁INTERVEN
- ▁OWL
- ▁CONJECTURE
- ▁FANTASTIC
- ▁RESPONSIBLE
- ▁DESTINED
- ▁DOCUMENT
- ▁THEREUPON
- ▁GODDESS
- ▁PACIFIC
- ▁WARRANT
- ▁COSTUME
- ▁BRIDLE
- ▁CALIFORNIA
- ▁DEMOCRATIC
- ▁EUSTACE
- ▁SQUIRREL
- ▁UNCOMMON
- ▁MARVELLOUS
- ▁PLOUGH
- ▁TRAGEDY
- ▁VAULT
- ▁HESITATE
- ▁REFRAIN
- ▁ADMIRING
- ▁CORPORAL
- ▁ENTITLED
- ▁SHREWD
- ▁SQUEEZ
- ▁ACCURATE
- ▁TEMPEST
- ▁MONUMENT
- ▁SIEGE
- ▁CHINESE
- ▁RAVEN
- ▁LOUNG
- ▁ASSASSIN
- ▁INFLICT
- ▁AGITATED
- ▁DESIRABLE
- ▁EARLIEST
- ▁LAUNCH
- ▁PILOT
- ▁PULSE
- ▁MUTE
- LEIGH
- ▁LIQUOR
- ▁SCARECROW
- ▁SKULL
- ▁DESOLATE
- ▁SUBLIME
- ▁SERENE
- ▁RECESS
- ▁WAKING
- ▁CHARLOTTE
- ▁CIRCULAR
- ▁INJUSTICE
- ▁PINOCCHIO
- ▁PRISCILLA
- ▁THYSELF
- ▁OCCURRENCE
- ▁CASUAL
- ▁FRANTIC
- ▁LEGEND
- ▁FERTIL
- ▁BACKGROUND
- ▁DELICACY
- ▁ESTRALLA
- ▁MANUSCRIPT
- ▁RESPONSE
- ▁UNIVERSITY
- ▁WOLVES
- ▁SCANDAL
- ▁STUMBLE
- ▁HOARSE
- ▁BODILY
- ▁CONVENT
- ▁EXAMINING
- ▁INCAPABLE
- ▁PERCEIVING
- ▁PHILADELPHIA
- ▁SUBSEQUENT
- ▁THIEVES
- ▁ACCUMULAT
- ▁DAMSEL
- ▁SCOTCH
- ▁UNDERNEATH
- ▁NOBILITY
- ▁SMASH
- ▁REVOLT
- ▁ENGAGE
- ▁CATHEDRAL
- ▁CHAMPION
- ▁DESPATCH
- ▁ETERNITY
- ▁JANUARY
- ▁PLEADED
- ▁PROBABILITY
- ▁JIMMIE
- ▁PARALLEL
- ▁FISHERMAN
- ▁JERRY
- ▁SWORE
- ▁DRAUGHT
- ▁OPPONENT
- ▁PRIMITIVE
- ▁SIGNIFICANCE
- ▁SUBSTANTIAL
- ▁AMAZED
- ▁DUNBAR
- ▁COMMEND
- ▁CONTEMPLATE
- ▁TESTIMONY
- ▁IMPERIAL
- ▁ADAPT
- ▁JUICE
- ▁CALAMIT
- CULAR
- ▁CHATEAU
- ▁PHOENIX
- ▁PRUDENT
- ▁SOLUTION
- ▁VILLEFORT
- ▁REACTION
- ▁RELAX
- ▁YU
- ▁PROHIBIT
- ▁DISTRUST
- ▁PLUNDER
- ▁WELFARE
- ▁NAVIGAT
- ▁PARLOR
- ▁LAZY
- ▁DETACH
- OMETER
- ▁PRIV
- ▁DISCOURAGE
- ▁OBSTINATE
- ▁REJOICING
- ▁SERMON
- ▁VEHICLE
- ▁FANCIES
- ▁ENLIGHTEN
- ▁ACUTE
- ▁ILLUSION
- ▁ANTHEA
- ▁MARTIAN
- ▁EXCITE
- ▁GENEROSITY
- OLOGIST
- ▁AMAZING
- ▁UNWORTHY
- ▁INTERNAL
- ▁INCENSE
- ▁VIBRAT
- ▁ADHERE
- ROACH
- ▁FEBRUARY
- ▁MEXICAN
- ▁POTATOES
- ▁INCESSANT
- ▁INTERPOSED
- ▁PARCEL
- ▁VEXED
- ▁PROMOTE
- MIDST
- ▁ARISTOCRAT
- ▁CYRIL
- ▁EMBARK
- ▁ABUNDANCE
- ▁LITERALLY
- ▁SURGEON
- ▁TERRACE
- ▁ATLANTIC
- ▁MARTYR
- ▁SPECK
- ▁SENATE
- ▁LOAF
- ▁ADMINISTER
- ▁APPREHEND
- ▁SUBDUED
- ▁TEMPORARY
- ▁DOMINION
- ▁ELABORATE
- ▁DIGNIFIED
- ▁ELIZA
- ▁SPLASH
- ▁CONSEIL
- ▁DEXTER
- ▁UNSEEN
- ▁TRAGIC
- VOCATION
- ▁GRATIFY
- ▁BACHELOR
- ▁DEFENSE
- ▁EXCURSION
- ▁FACULTIES
- ▁PROPRIETOR
- ▁SYMPATHETIC
- ▁UNNECESSARY
- ▁RADIANT
- ▁VACANT
- ▁OUNCE
- ▁SCREW
- ▁PHENOMENON
- ▁PROMINENT
- ▁WORRIED
- ▁STUDIES
- ▁CLIMATE
- ▁KEITH
- ▁ARAMIS
- ▁BLISS
- ▁CONTINUAL
- ▁SURPASS
- ▁HEBREW
- ▁IDENTITY
- ▁PROVOKE
- ▁TEMPERAMENT
- ▁CHARIOT
- ▁HARBOR
- ▁NINTH
- ▁PRIOR
- ▁DESIROUS
- ▁JERUSALEM
- ▁UNDERTAKING
- ▁EDISON
- ▁MIRTH
- ▁SCOUT
- ▁APPARATUS
- ▁ILLUSTRATION
- ▁INTELLIGIBLE
- ▁INVARIABLY
- ▁PIERCED
- ▁REVIEW
- ▁FLICKER
- ▁HAZARD
- ▁REVELATION
- ▁DIXON
- ▁EXCITING
- ▁GOSPEL
- ▁CONSTANCE
- ▁OVERTAKE
- ▁GUINEA
- ▁ALADDIN
- ▁CHICAGO
- ▁TULLIVER
- ▁HAMILTON
- ▁GARRISON
- ▁DISCIPLE
- ▁INTENSITY
- ▁TRAITOR
- ▁CHANCELLOR
- ▁PROVERB
- ▁DAGGER
- ▁FORESEE
- ▁CONFIDE
- ▁GLIMMER
- ▁CHAUVELIN
- ▁ILLUSTRATE
- ▁VOLUNTEER
- ▁JUNGLE
- ▁STREAK
- ▁SUNRISE
- ▁DISSOLV
- ▁QUEST
- ▁AWHILE
- ▁FELICITY
- ▁LEGISLATURE
- ▁LEONORA
- ▁MAGAZINE
- ▁PITIFUL
- ▁COLONY
- ▁SHAWL
- ▁ARRIVING
- ▁FUNDAMENTAL
- ▁CARPENTER
- ▁OVERFLOW
- ▁EXPAND
- ▁HARVEST
- ▁FEMININE
- ▁INNUMERABLE
- ▁SCRAMBLE
- ▁TWENTIETH
- ▁TRIFLING
- ▁GHASTL
- ▁CONQUEST
- ▁DANIEL
- ▁FACILIT
- ▁FORSAKE
- ▁BEHAVIOUR
- ▁GORGEOUS
- ▁PRODUCING
- ▁HAPPIER
- ▁PROMISING
- ▁RAINBOW
- ▁INSTINCTIVELY
- ▁DECREE
- ▁EYEBROWS
- ▁IRRESISTIBLE
- ▁PHARAOH
- ▁SCROOGE
- ▁UNNATURAL
- ▁CRUMBS
- ▁REFINED
- ▁DREARY
- ▁TRENCH
- ▁CONVINCE
- ▁FRINGE
- ▁EXTREMITY
- ▁INTIMACY
- ▁SCOUNDREL
- ▁SUFFRAGE
- ▁UNEASINESS
- ▁BARRICADE
- ▁CIRCULAT
- ▁SAMUEL
- ▁BRUCE
- ▁DARCY
- <sos/eos>
init: null
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: true
model_conf:
ctc_weight: 0.3
lsm_weight: 0.1
length_normalized_loss: false
extract_feats_in_collect_stats: false
use_preprocessor: true
token_type: bpe
bpemodel: data/en_token_list/bpe_unigram5000/bpe.model
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: s3prl
frontend_conf:
frontend_conf:
upstream: wavlm_large
download_dir: ./hub
multilayer_feature: true
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 30
num_freq_mask: 2
apply_time_mask: true
time_mask_width_range:
- 0
- 40
num_time_mask: 2
normalize: utterance_mvn
normalize_conf: {}
preencoder: linear
preencoder_conf:
input_size: 1024
output_size: 80
encoder: conformer
encoder_conf:
output_size: 512
attention_heads: 8
linear_units: 2048
num_blocks: 12
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.1
input_layer: conv2d
normalize_before: true
macaron_style: true
pos_enc_layer_type: rel_pos
selfattention_layer_type: rel_selfattn
activation_type: swish
use_cnn_module: true
cnn_module_kernel: 31
postencoder: null
postencoder_conf: {}
decoder: transformer
decoder_conf:
attention_heads: 8
linear_units: 2048
num_blocks: 6
dropout_rate: 0.1
positional_dropout_rate: 0.1
self_attention_dropout_rate: 0.1
src_attention_dropout_rate: 0.1
required:
- output_dir
- token_list
version: 0.10.5a1
distributed: true
```
</details>
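In the config above, `ctc_weight: 0.3` sets the interpolation between the CTC loss and the attention-decoder loss during training. A minimal sketch of that combination (the function and variable names are illustrative, not ESPnet's internals):

```python
# Hybrid CTC/attention training loss, as configured by ctc_weight.
# loss = w * loss_ctc + (1 - w) * loss_att
def hybrid_loss(loss_ctc: float, loss_att: float, ctc_weight: float = 0.3) -> float:
    """Interpolate CTC and attention losses (illustrative sketch only)."""
    return ctc_weight * loss_ctc + (1.0 - ctc_weight) * loss_att

# With ctc_weight = 0.3 the attention branch dominates the gradient signal,
# while CTC still regularizes the encoder toward monotonic alignments.
combined = hybrid_loss(loss_ctc=4.0, loss_att=1.0)
print(combined)
```

The same weight is reused at decode time in ESPnet's joint CTC/attention beam search, though the decoding-side weight can be set independently.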
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["librispeech"]}
|
espnet/simpleoier_librispeech_asr_train_asr_conformer7_wavlm_large_raw_en_bpe5000_sp
| null |
[
"espnet",
"audio",
"automatic-speech-recognition",
"en",
"dataset:librispeech",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
automatic-speech-recognition
|
espnet
|
## ESPnet2 ASR pretrained model
### `su_openslr36`
♻️ Imported from https://zenodo.org/record/5090135/
This model was trained using the su_openslr36/asr1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "su", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["su_openslr36"]}
|
espnet/su_openslr36
| null |
[
"espnet",
"audio",
"automatic-speech-recognition",
"su",
"dataset:su_openslr36",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
automatic-speech-recognition
|
espnet
|
## ESPnet2 ASR model
### `espnet/sujay_catslu_map`
This model was trained by Sujay S Kumar using the catslu recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout e31965d55993766461f0964216a0bb9aea3cfb7a
pip install -e .
cd egs2/catslu/asr1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/sujay_catslu_map
```
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Sun Oct 3 12:53:16 EDT 2021`
- python version: `3.9.5 (default, Jun 4 2021, 12:28:51) [GCC 7.5.0]`
- espnet version: `espnet 0.10.3a3`
- pytorch version: `pytorch 1.8.1+cu102`
- Git hash: `b41391336042a4876e30d9fe5c66afb4e4be404c`
- Commit date: `Wed Sep 22 10:02:03 2021 -0400`
## asr_train_asr_smaller_aishell_xlsr_raw_zh_word
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|inference_asr_model_valid.acc.ave_5best/test|1577|11441|46.1|30.1|23.7|2.5|56.4|81.3|
|inference_asr_model_valid.acc.ave_5best/valid|921|6438|49.4|29.2|21.4|2.7|53.4|79.2|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|inference_asr_model_valid.acc.ave_5best/test|1577|45924|74.4|13.0|12.5|3.2|28.8|81.3|
|inference_asr_model_valid.acc.ave_5best/valid|921|26110|77.0|11.9|11.1|2.7|25.7|79.2|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
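The WER/CER figures above count word- or character-level substitutions (Sub), deletions (Del), and insertions (Ins) against the reference, divided by the reference length. A minimal edit-distance-based computation (an illustrative sketch, not ESPnet's `sclite`-style scoring script):

```python
def edit_distance(ref, hyp):
    """Minimum substitutions + deletions + insertions to turn hyp into ref."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        cur = [i]
        for j, h in enumerate(hyp, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (r != h)))  # substitution or match
        prev = cur
    return prev[-1]

def wer(ref_words, hyp_words):
    """Word error rate: edit distance over reference word count."""
    return edit_distance(ref_words, hyp_words) / len(ref_words)

# One substitution against a four-word reference -> WER 0.25
print(wer("go to the park".split(), "go to a park".split()))  # 0.25
```

CER is computed the same way with character sequences instead of word lists, which is why the CER rows above use much larger token counts than the WER rows.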
## ASR config
<details><summary>expand</summary>
```
config: conf/train_asr_smaller_aishell_xlsr.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp_train_asr_smaller_aishell_xlsr/asr_train_asr_smaller_aishell_xlsr_raw_zh_word
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 100
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - train
- loss
- min
- - valid
- loss
- min
- - train
- acc
- max
- - valid
- acc
- max
keep_nbest_models: 5
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param:
- frontend.upstream
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp_train_asr_smaller_aishell_xlsr/asr_stats_raw_zh_word/train/speech_shape
- exp_train_asr_smaller_aishell_xlsr/asr_stats_raw_zh_word/train/text_shape.word
valid_shape_file:
- exp_train_asr_smaller_aishell_xlsr/asr_stats_raw_zh_word/valid/speech_shape
- exp_train_asr_smaller_aishell_xlsr/asr_stats_raw_zh_word/valid/text_shape.word
batch_type: folded
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train/wav.scp
- speech
- sound
- - dump/raw/train/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/valid/wav.scp
- speech
- sound
- - dump/raw/valid/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.0001
scheduler: warmuplr
scheduler_conf:
warmup_steps: 2500
token_list:
- <blank>
- <unk>
- 航
- 导
- inform_操作_none
- inform_终点名称_none
- 去
- none_none_none
- 我
- 到
- inform_poi名称_none
- unknown
- 要
- 市
- side
- 一
- 个
- 路
- 区
- 第
- 大
- 县
- 你
- inform_序列号_none
- 小
- 城
- 站
- 家
- 南
- 中
- 山
- 州
- 好
- 镇
- 场
- 的
- 院
- 西
- 店
- 东
- 车
- 阳
- 学
- 北
- 园
- dialect
- 安
- 新
- 海
- 回
- 公
- 医
- 二
- 不
- 三
- 广
- 天
- 村
- 有
- 闭
- 开
- 酒
- 下
- 江
- 消
- 人
- 帮
- 金
- 是
- 取
- 花
- 近
- 政
- 民
- 口
- 十
- 里
- 河
- 府
- 请
- 关
- 国
- 了
- 华
- 那
- 高
- robot
- 出
- 平
- 湖
- 在
- 省
- 定
- 号
- 门
- 想
- 街
- 四
- 道
- 水
- 龙
- 京
- 啊
- 地
- 行
- 么
- 五
- 都
- 桥
- 上
- 给
- 明
- 业
- 哪
- 附
- 八
- 宁
- 心
- 长
- 馆
- 百
- 这
- 汽
- 机
- 工
- 庄
- 方
- 商
- 司
- 石
- 确
- 兴
- 火
- 走
- 乡
- 万
- 通
- 加
- 银
- 青
- 发
- 校
- 速
- 交
- 退
- 德
- 际
- 电
- 楼
- 宾
- 找
- 苑
- 和
- 嗯
- 油
- 林
- 乐
- 景
- 打
- 达
- 来
- 七
- 川
- inform_请求类型_none
- 最
- noise
- 兰
- 湾
- 台
- 所
- 保
- 什
- 福
- 建
- 说
- 就
- 沙
- 页
- 宝
- 子
- 厂
- 科
- 尔
- 光
- inform_页码_none
- 六
- 费
- 环
- 成
- 昌
- 吗
- 汉
- 白
- 黄
- 限
- 局
- 泉
- 怎
- 云
- 武
- 源
- 吃
- 前
- 点
- 收
- 物
- 滨
- 溪
- 马
- 贵
- 务
- 世
- 岛
- 没
- 生
- 常
- 理
- 会
- 们
- 重
- 浦
- 名
- 合
- 运
- 顺
- 美
- 儿
- 头
- 乌
- 设
- 厦
- 化
- 郑
- 时
- inform_poi目标_none
- 现
- 农
- 港
- 泰
- 停
- 宜
- 昆
- 九
- 对
- 管
- 看
- 界
- 张
- 庆
- 文
- 博
- 嘉
- 零
- 苏
- 能
- 面
- 客
- 红
- 搜
- 远
- 古
- 津
- 始
- 王
- 呃
- 用
- 瑞
- 后
- 雅
- 带
- 流
- 木
- 之
- 汇
- 夏
- 他
- 还
- 清
- 临
- 服
- 渡
- 日
- 幺
- 济
- 田
- 锦
- 吉
- 呀
- 利
- 神
- 饭
- 香
- 太
- 双
- 永
- 图
- 洲
- 集
- 特
- 吧
- request_位置_none
- 技
- 把
- 寺
- 爱
- 丰
- 春
- 盛
- 罗
- 队
- 也
- 亚
- 线
- 玉
- 哦
- 贸
- 果
- 连
- 正
- 结
- 与
- 米
- 鲁
- 警
- 信
- 捷
- 样
- 温
- 岭
- 丽
- 育
- 凤
- 位
- 听
- 动
- 可
- 原
- 年
- 经
- 纪
- 齐
- 索
- inform_对象_none
- 义
- 多
- 叫
- 况
- 气
- 老
- 派
- 池
- 曲
- 营
- 返
- 置
- 品
- 程
- 同
- 辉
- 批
- 音
- 康
- 威
- 幼
- 斯
- 库
- 拉
- 星
- 团
- 风
- 岗
- 话
- 放
- 泽
- 晋
- 部
- 知
- 外
- 塔
- 沈
- 奇
- 卫
- 月
- 庭
- 眼
- 总
- 梅
- 房
- 千
- 哈
- 自
- 字
- 呢
- 豪
- 直
- 盘
- 屯
- 超
- 祥
- 佳
- 恒
- 过
- 以
- 两
- 蓝
- 修
- 入
- 松
- 铁
- 职
- 珠
- 凯
- 快
- 丹
- 体
- 书
- 游
- 转
- 莱
- 寨
- 克
- 当
- 李
- 钱
- s
- 货
- 惠
- 格
- 岳
- 淮
- 束
- 社
- 莞
- 森
- 堵
- 内
- 蒙
- 分
- 柏
- 富
- 碧
- 凰
- 陵
- 桐
- 边
- 坡
- 胶
- 得
- 力
- 滚
- 喀
- 旗
- 料
- 歌
- 块
- 滩
- 查
- 虹
- 续
- 为
- 驾
- 许
- 峰
- 问
- 真
- 视
- 选
- 接
- 语
- 洪
- 众
- 全
- 徽
- 鄂
- 实
- 未
- 杭
- 尚
- 胜
- 塘
- 产
- 鱼
- 叉
- 岸
- 洛
- 随
- 哎
- 配
- 丁
- 继
- 迪
- 牛
- 坪
- 无
- 深
- 圳
- 韩
- 法
- 灵
- 迁
- 间
- 逼
- 步
- 咸
- 期
- 菜
- 紫
- 邢
- 赣
- 横
- 播
- 鼎
- 进
- 止
- 铜
- 便
- 鸡
- 巴
- 仁
- 财
- 佛
- 桂
- 官
- 英
- 绵
- 奥
- 矿
- 波
- 治
- 元
- 首
- 钟
- 计
- 飞
- 坊
- 阿
- 代
- 周
- 朝
- 固
- 错
- 向
- 潭
- 隆
- 装
- 纳
- 伊
- 将
- 军
- 师
- 途
- 影
- 怀
- 择
- 药
- 术
- 手
- 于
- 离
- 族
- 莲
- 布
- 呼
- 峡
- 迈
- 委
- 叮
- 咚
- 阴
- 宏
- 郡
- 健
- 本
- 洋
- 再
- 支
- 划
- 郊
- 绿
- 妈
- 旅
- 堰
- 肥
- 玛
- 左
- 网
- inform_途经点名称_none
- 拜
- 材
- inform_终点修饰_none
- 辽
- 煤
- 谢
- 则
- 土
- 草
- 埠
- 伦
- 堂
- 卡
- 肉
- 底
- 灯
- 树
- 寻
- 掉
- 展
- 庙
- 赵
- 余
- 见
- 望
- 故
- 事
- 相
- 杨
- inform_终点目标_none
- 馨
- 税
- 属
- 资
- 井
- 艺
- 越
- 微
- 包
- 阜
- 记
- 窗
- 维
- 甲
- 鑫
- 休
- 啥
- 锡
- 渝
- 岩
- 彩
- 少
- 处
- 往
- 从
- 封
- 联
- 觉
- 验
- 容
- 萨
- 普
- 弄
- 干
- 强
- 鲜
- 柳
- 衡
- 规
- request_路况_none
- 靖
- 沃
- 板
- 防
- 约
- 球
- 居
- 至
- 坝
- 翠
- 持
- 具
- 烟
- 榆
- 枫
- 照
- 意
- 目
- t
- 凌
- 邦
- 报
- 码
- 轻
- 欣
- 复
- 买
- 玻
- 璃
- 住
- 恩
- 女
- 嘴
- 级
- 振
- 邵
- 浴
- 茂
- 黔
- 您
- 比
- 显
- 渭
- 钢
- 妇
- 易
- 党
- 版
- 介
- 姐
- 才
- 览
- k
- 崇
- 桃
- 厅
- 虎
- 皮
- 仪
- 赤
- 寓
- 洞
- 绍
- 饰
- 很
- 病
- 度
- 胡
- 像
- 邮
- 又
- 充
- 贤
- 御
- 然
- 潍
- 基
- 启
- 聊
- 驶
- inform_路线偏好_none
- 澄
- 几
- 等
- 塑
- 监
- 办
- 沧
- 亭
- 观
- 螺
- 领
- 秀
- 咋
- 坨
- 奎
- 优
- 半
- 贡
- 唐
- 写
- 今
- 慢
- 傻
- 反
- 次
- 甘
- 肃
- 它
- 泗
- 贺
- 拍
- 咱
- 留
- ktv
- 察
- 顶
- 啦
- 别
- 润
- 谷
- 仙
- 慧
- 朱
- 靠
- 座
- 锅
- 麦
- 雁
- 羊
- 共
- 邓
- 荣
- 食
- 陕
- 邑
- 右
- 铺
- 梁
- 宣
- 幸
- 哥
- 士
- 员
- 招
- 番
- 徐
- 检
- 巷
- 私
- 堡
- 跟
- 器
- 峪
- 立
- 氏
- 教
- 圣
- 购
- 印
- 黑
- 完
- 条
- 唉
- 燕
- 屿
- 闸
- 茶
- 任
- 种
- 蛋
- 荆
- 岔
- inform_value_none
- 黎
- 奉
- 准
- 熟
- 薛
- 朔
- 范
- 械
- 菲
- 雪
- 腾
- 备
- 琼
- 尹
- 垣
- 吴
- 示
- 嫖
- 宫
- 冲
- 毛
- 绘
- 菏
- 嘞
- 浙
- 遵
- 各
- 饶
- 嗷
- 简
- 施
- 俱
- 岚
- 豆
- 栋
- 险
- 岘
- 滇
- 叶
- 卓
- 荔
- 刘
- 滕
- 系
- 统
- e
- 做
- 巡
- 坐
- 研
- 究
- 盐
- 冀
- 象
- 斗
- 娄
- 先
- 陆
- deny_操作_none
- 户
- 额
- 价
- 更
- 拆
- 溧
- 量
- 帝
- 断
- 态
- 智
- 蜀
- 庐
- 舟
- 摄
- 泡
- 洗
- 历
- 咖
- 啡
- 湘
- 甸
- 泾
- 卖
- 朗
- 芜
- 棠
- 凉
- 嵩
- 焦
- 让
- 夫
- 吐
- 童
- 薇
- 旺
- 浩
- 息
- 裕
- 禄
- 睡
- 狮
- 质
- 樱
- 递
- 鸣
- 句
- 韶
- 色
- 典
- 厉
- 测
- 应
- 尉
- 汤
- 己
- 宸
- 漳
- 证
- 沟
- 巩
- 扬
- 笨
- 旁
- 湟
- 主
- 浪
- 殡
- request_前方路况_none
- 竹
- 列
- 季
- 唱
- 冠
- 泥
- 懂
- 秋
- 君
- 祁
- 声
- 拥
- 曹
- 嘛
- 静
- 嗨
- 起
- 刚
- 墨
- 宿
- 络
- 襄
- 葫
- 芦
- 漫
- 峨
- 需
- 眉
- 瓦
- 如
- 根
- 域
- 式
- 何
- 鞍
- 饺
- 票
- 冶
- 喷
- 映
- 组
- 昭
- 延
- 萌
- 角
- 解
- 玲
- 蟹
- 晃
- 瀑
- 纽
- 逸
- 些
- 猪
- 蹄
- 亲
- 野
- 蒋
- 喂
- 荷
- 窝
- 锁
- 试
- 桑
- 沥
- 非
- 制
- 督
- 贝
- 址
- 识
- 侬
- 烧
- 翡
- 堤
- 伟
- 驼
- 昊
- 牌
- 陶
- 室
- 轩
- 鹰
- 钉
- 空
- 着
- 蛳
- 已
- 砖
- 姓
- 顿
- 麓
- 亿
- 售
- 功
- 淄
- 澳
- 斜
- 击
- 活
- 缴
- 输
- 雍
- 鄄
- 降
- 革
- 恢
- 卸
- 承
- 箬
- 澧
- 栈
- 疗
- 传
- 媒
- 血
- 战
- 舞
- 姨
- 婆
- 辆
- 蚌
- 鹅
- 剧
- 湛
- 亳
- b
- 敦
- 煌
- 迎
- 味
- 数
- 妞
- 嫂
- 厚
- hi
- 邹
- 摁
- 榄
- 梨
- 亮
- 纺
- 婚
- 培
- 训
- inform_起点名称_none
- 护
- 霍
- 升
- 考
- m
- 呗
- 摩
- 送
- 段
- 悦
- 餐
- 早
- 议
- 互
- 助
- 抚
- 慈
- 按
- 调
- 杰
- 份
- 兵
- 粥
- 邻
- 墅
- 鬃
- 泳
- 朋
- 良
- 缘
- 鼓
- 赛
- 枝
- 藏
- 鸿
- 冷
- 匀
- 征
- 欢
- 闯
- 汝
- 讲
- 肤
- 响
- 浮
- 录
- 冰
- 圆
- 算
- 思
- 储
- 蓄
- 苗
- 聚
- 湿
- 肇
- 阆
- 拿
- 沣
- 渔
- 铝
- 植
- 托
- 盟
- 宇
- 但
- 渠
- 告
- 丘
- 拓
- 陇
- 鹤
- 操
- 珙
- deny_poi名称_none
- 询
- 攀
- 寿
- 副
- 或
- 假
- 焰
- 夜
- 妓
- 而
- 漆
- 濮
- 胥
- 密
- 志
- 苹
- 彭
- 陪
- 添
- 满
- 章
- 骨
- 栖
- 呦
- 善
- 乖
- 姑
- 爷
- 鸟
- 璧
- 专
- 洧
- 依
- 仔
- 晨
- 沂
- 券
- 晓
- 压
- 涨
- 闻
- 男
- 诊
- 融
- 怡
- 蓬
- 廊
- 殖
- 益
- 必
- 靓
- 蒲
- beyond
- i
- love
- you
- 旋
- 尖
- 驿
- 貂
- 蝉
- 足
- 迹
- 翰
- 杏
- 牡
- 帅
- 雨
- 呈
- 迷
- 哟
- 召
- 娼
- 辛
- 顾
- 殷
- 闵
- 潮
- 脑
- 彗
- 枣
- 杆
- 洁
- 画
- 片
- 认
- 灰
- 鞋
- 宠
- 劫
- 潘
- 烤
- 破
- 隶
- 搞
- 忠
- 仕
- 郴
- 梧
- 酌
- 涵
- 醍
- 候
- 俩
- 馈
- 磨
- 骤
- 翔
- 莘
- 希
- 娅
- 剑
- 权
- 壹
- 冕
- 蛟
- 拨
- 诶
- 盖
- 楠
- 只
- 编
- 虾
- 尽
- 尧
- 晚
- 珍
- 因
- 捆
- 绑
- 端
- 盱
- 眙
- 贩
- 卷
- 养
- 陂
- 晟
- 巧
- 椿
- 毕
- 沭
- 供
- 秒
- 眠
- 状
- 璟
- 受
- 伤
- 萍
- 奔
- 效
- 禽
- 玫
- 瑰
- request_剩余距离_none
- 序
- 鹃
- 齿
- 厕
- 厨
- 忻
- 埔
- 茅
- 芳
- 雕
- 刻
- 蜜
- 筝
- g
- 橄
- 畜
- 牧
- 仑
- 臣
- 溆
- 纱
- 卉
- 群
- 痛
- 疼
- 仟
- 赶
- 紧
- 闫
- 嘶
- 潼
- 烽
- 勾
- 驰
- 麻
- 烦
- 遍
- 樟
- 浜
- 极
- 酷
- 晶
- 穿
- 芽
- 害
- 钓
- 棍
- 核
- 橙
- 琴
- 滋
- 柯
- 箐
- 株
- 陌
- 坤
- 炳
- 槐
- 协
- 湄
- 滏
- 旦
- 策
- 虞
- 陈
- 情
- 潞
- 藁
- 豹
- 若
- 垃
- 圾
- 舰
- 造
- 珥
- 董
- 泼
- 乾
- 瑶
- 龚
- 撤
- 钛
- 责
- 吶
- 喜
- 隔
- 碗
- 倒
- 椰
- 冬
- 伯
- 乳
- 隐
- 尼
- 境
- 圩
- 卧
- 抱
- 使
- 玩
- 饮
- 峤
- 炉
- 终
- 霸
- 晴
- 糕
- 疫
- 弥
- 萧
- 围
- 邬
- 贞
- 逊
- 祠
- 泛
- 逯
- 侯
- 距
- 织
- 谋
- 嵋
- 楚
- 瑜
- 妹
- 误
- 念
- 镜
- 粮
- 涮
- 值
- 鹿
- 捞
- 沅
- 移
- 涉
- 模
- 饿
- 佩
- 汀
- 朐
- 魔
- 细
- 者
- 暖
- 汕
- 谛
- 棣
- 敖
- 此
- 背
- 鲅
- 圈
- 逻
- 绕
- 锋
- 班
- 珲
- 汾
- 著
- 参
- 且
- 摇
- 宕
- 缅
- 柔
- 脂
- 肪
- 变
- 谱
- 积
- 礼
- 凡
- 落
- 羽
- 歇
- 仰
- 聋
- 雷
- 磊
- 繁
- 吭
- 皇
- 晖
- 粤
- 腊
- 习
- 题
- 绅
- 畔
- 啤
- 弋
- 匹
- 订
- 单
- ok
- 灶
- 描
- 婺
- 沿
- 莉
- 弘
- 茵
- 换
- 屏
- 瞎
- 较
- 岁
- 湫
- 塞
- 疏
- 勒
- 涟
- 巫
- 违
- 戈
- 吾
- 脏
- 葛
- 轮
- 胎
- 霞
- 鹭
- 废
- 稍
- 谨
- 慎
- 淡
- 注
- 每
- 既
- 删
- 喝
- 付
- 诸
- 暨
- 戴
- 綦
- 伍
- 诚
- 坦
- 兜
- 残
- 韵
- 喽
- 廖
- 麒
- 麟
- n
- 感
- 籍
- 难
- 死
- 笑
- 哭
- 孩
- 频
- 舍
- 溶
- 垸
- 淀
- 奸
- 改
- 藤
- 狭
- 隧
- 翁
- 陀
- 扎
- 肯
- 揭
- 壁
- 件
- 刷
- 牙
- 节
- 恋
- 淹
- 桦
- 幢
- 棉
- 俺
- 屎
- 彬
- 牟
- 亩
- 傣
- 裴
- 翼
- 辰
- 剪
- 挡
- 凹
- 投
- 碣
- 妆
- 荡
- 驻
- 颍
- 狐
- 享
- 恐
- 汶
- 寅
- 仍
- 睿
- 搁
- 尊
- 泊
- 仲
- 午
- 枞
- 仓
- 卞
- 瀚
- 佰
- 暮
- 拐
- 崔
- 榭
- 棵
- 孕
- 潜
- 俏
- 葡
- 萄
- 采
- 摘
- 癜
- 屑
- 芙
- 蓉
- 咏
- 忙
- 漂
- 父
- 母
- 差
- 彻
- 魏
- 绥
- 闲
- 遥
- 棕
- 榈
- 壶
- 疆
- 苍
- 磁
- 辅
- 泸
- 淅
- a
- 呐
- 燃
- 沱
- 禺
- 宛
- 友
- 俊
- 筑
- 贾
- 宋
- 梯
- 吨
- inform_poi修饰_none
- 础
- 碑
- request_剩余路程_none
- 创
- 孙
- 枢
- 翟
- 浑
- 糖
- 舜
- 橱
- 柜
- 浠
- 莒
- 乔
- 幕
- 磅
- 嘿
- 曼
- 昔
- 衣
- 铭
- 浏
- 喆
- 垦
- 墓
- 戍
- <sos/eos>
init: null
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: true
model_conf:
ctc_weight: 0.3
lsm_weight: 0.1
length_normalized_loss: false
extract_feats_in_collect_stats: false
use_preprocessor: true
token_type: word
bpemodel: null
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: s3prl
frontend_conf:
frontend_conf:
upstream: wav2vec2_xlsr
download_dir: ./hub
multilayer_feature: true
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 30
num_freq_mask: 2
apply_time_mask: true
time_mask_width_range:
- 0
- 40
num_time_mask: 2
normalize: utterance_mvn
normalize_conf: {}
preencoder: linear
preencoder_conf:
input_size: 1024
output_size: 80
encoder: conformer
encoder_conf:
output_size: 256
attention_heads: 4
linear_units: 2048
num_blocks: 6
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.0
input_layer: conv2d
normalize_before: true
macaron_style: true
pos_enc_layer_type: rel_pos
selfattention_layer_type: rel_selfattn
activation_type: swish
use_cnn_module: true
cnn_module_kernel: 15
postencoder: null
postencoder_conf: {}
decoder: transformer
decoder_conf:
attention_heads: 4
linear_units: 2048
num_blocks: 4
dropout_rate: 0.1
positional_dropout_rate: 0.1
self_attention_dropout_rate: 0.0
src_attention_dropout_rate: 0.0
required:
- output_dir
- token_list
version: 0.10.3a3
distributed: false
```
</details>
## LM config
<details><summary>expand</summary>
```
NONE
```
</details>
|
{"language": "zh", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["catslu"]}
|
espnet/sujay_catslu_map
| null |
[
"espnet",
"audio",
"automatic-speech-recognition",
"zh",
"dataset:catslu",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
automatic-speech-recognition
|
espnet
|
## ESPnet2 ASR pretrained model
### `https://zenodo.org/record/5845307/files/asr_conformer_ar_valid.acc.ave.zip?download=1`
♻️ Imported from https://zenodo.org/record/5845307/files/asr_conformer_ar_valid.acc.ave.zip?download=1
This model was trained by vectominist using seame/asr1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": ["en", "zh", "multilingual"], "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["seame"]}
|
espnet/vectominist_seame_asr_conformer_bpe5626
| null |
[
"espnet",
"audio",
"automatic-speech-recognition",
"en",
"zh",
"multilingual",
"dataset:seame",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
automatic-speech-recognition
|
espnet
|
# ESPnet2 ASR pretrained model
## `Xuankai Chang/xuankai_chang_librispeech_asr_train_asr_conformer7_hubert_960hr_large_raw_en_bpe5000_sp_26epoch, fs=16k, lang=en`
This model was trained by Takashi Maekaku using librispeech recipe in [espnet](https://github.com/espnet/espnet/).
### Python API
```text
See https://github.com/espnet/espnet_model_zoo
```
### Evaluate in the recipe
```python
# coming soon
```
### Results
```bash
# RESULTS
## Environments
- date: `Fri Aug 6 11:44:39 JST 2021`
- python version: `3.7.9 (default, Apr 23 2021, 13:48:31) [GCC 5.5.0 20171010]`
- espnet version: `espnet 0.9.9`
- pytorch version: `pytorch 1.7.0`
- Git hash: `0f7558a716ab830d0c29da8785840124f358d47b`
- Commit date: `Tue Jun 8 15:33:49 2021 -0400`
## asr_train_asr_conformer7_hubert_960hr_large_raw_en_bpe5000_sp
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.acc.best/dev_clean|2703|54402|98.5|1.3|0.2|0.2|1.7|22.1|
|decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.acc.best/dev_other|2864|50948|96.8|2.8|0.4|0.3|3.4|33.7|
|decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.acc.best/test_clean|2620|52576|98.4|1.4|0.2|0.2|1.8|22.1|
|decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.acc.best/test_other|2939|52343|96.8|2.8|0.4|0.4|3.6|36.0|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.acc.best/dev_clean|2703|288456|99.6|0.2|0.2|0.2|0.6|22.1|
|decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.acc.best/dev_other|2864|265951|98.8|0.6|0.6|0.3|1.5|33.7|
|decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.acc.best/test_clean|2620|281530|99.6|0.2|0.2|0.2|0.6|22.1|
|decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.acc.best/test_other|2939|272758|98.9|0.5|0.5|0.4|1.4|36.0|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.acc.best/dev_clean|2703|68010|98.2|1.3|0.5|0.4|2.2|22.1|
|decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.acc.best/dev_other|2864|63110|96.0|2.8|1.2|0.6|4.6|33.7|
|decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.acc.best/test_clean|2620|65818|98.1|1.3|0.6|0.4|2.3|22.1|
|decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.acc.best/test_other|2939|65101|96.0|2.7|1.3|0.6|4.6|36.0|
```
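The Corr/Sub/Del/Ins/Err columns above follow the usual sclite scoring convention, where Corr + Sub + Del = 100% and Err = Sub + Del + Ins. A quick sanity check of two rows from the WER table (with a small tolerance for rounding):

```python
# Sanity-check sclite-style error columns (all values are percentages).
# Convention: Corr + Sub + Del = 100 and Err = Sub + Del + Ins.

def check_row(corr, sub, dele, ins, err, tol=0.1):
    """Return True if a Corr/Sub/Del/Ins/Err row is internally consistent."""
    return (abs(corr + sub + dele - 100.0) <= tol
            and abs(sub + dele + ins - err) <= tol)

# dev_clean WER row for this model: 98.5 / 1.3 / 0.2 / 0.2 / 1.7
assert check_row(98.5, 1.3, 0.2, 0.2, 1.7)
# test_other WER row: 96.8 / 2.8 / 0.4 / 0.4 / 3.6
assert check_row(96.8, 2.8, 0.4, 0.4, 3.6)
print("rows consistent")
```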
### Training config
See full config in [`config.yaml`](./exp/asr_train_asr_conformer7_hubert_960hr_large_raw_en_bpe5000_sp/config.yaml)
```yaml
config: conf/tuning/train_asr_conformer7_hubert_960hr_large.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_asr_conformer7_hubert_960hr_large_raw_en_bpe5000_sp
ngpu: 3
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: 4
dist_rank: 3
local_rank: 3
dist_master_addr: localhost
dist_master_port: 33643
dist_launcher: null
multiprocessing_distributed: true
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["librispeech"], "inference": false}
|
espnet/xuankai_chang_librispeech_asr_train_asr_conformer7_hubert_960hr_large_raw_en_bpe5000_sp_26epoch
| null |
[
"espnet",
"audio",
"automatic-speech-recognition",
"en",
"dataset:librispeech",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
automatic-speech-recognition
|
espnet
|
# ESPnet2 ASR pretrained model
## `Xuankai Chang/xuankai_chang_librispeech_asr_train_asr_conformer7_wav2vec2_960hr_large_raw_en_bpe5000_sp_25epoch, fs=16k, lang=en`
This model was trained by Takashi Maekaku using librispeech recipe in [espnet](https://github.com/espnet/espnet/).
### Python API
```text
See https://github.com/espnet/espnet_model_zoo
```
### Evaluate in the recipe
```python
# coming soon
```
### Results
```bash
# RESULTS
## Environments
- date: `Sat Jul 3 23:10:19 JST 2021`
- python version: `3.7.9 (default, Apr 23 2021, 13:48:31) [GCC 5.5.0 20171010]`
- espnet version: `espnet 0.9.9`
- pytorch version: `pytorch 1.7.0`
- Git hash: `0f7558a716ab830d0c29da8785840124f358d47b`
- Commit date: `Tue Jun 8 15:33:49 2021 -0400`
## asr_train_asr_conformer7_wav2vec2_960hr_large_raw_en_bpe5000_sp
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.acc.best/dev_clean|2703|54402|98.3|1.6|0.2|0.2|1.9|24.9|
|decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.acc.best/dev_other|2864|50948|95.1|4.3|0.6|0.4|5.4|42.8|
|decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.acc.best/test_clean|2620|52576|98.1|1.7|0.2|0.2|2.2|26.8|
|decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.acc.best/test_other|2939|52343|95.3|4.1|0.6|0.5|5.2|45.8|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.acc.best/dev_clean|2703|288456|99.5|0.2|0.2|0.2|0.6|24.9|
|decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.acc.best/dev_other|2864|265951|98.1|1.0|0.9|0.5|2.4|42.8|
|decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.acc.best/test_clean|2620|281530|99.5|0.2|0.3|0.2|0.7|26.8|
|decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.acc.best/test_other|2939|272758|98.3|0.8|0.9|0.5|2.3|45.8|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.acc.best/dev_clean|2703|68010|97.8|1.6|0.6|0.4|2.6|24.9|
|decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.acc.best/dev_other|2864|63110|94.1|4.3|1.6|1.1|7.0|42.8|
|decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.acc.best/test_clean|2620|65818|97.6|1.6|0.8|0.4|2.8|26.8|
|decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.acc.best/test_other|2939|65101|94.3|4.0|1.8|1.0|6.7|45.8|
```
### Training config
See full config in [`config.yaml`](./exp/asr_train_asr_conformer7_hubert_960hr_large_raw_en_bpe5000_sp/config.yaml)
```yaml
config: conf/tuning/train_asr_conformer7_hubert_960hr_large.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_asr_conformer7_hubert_960hr_large_raw_en_bpe5000_sp
ngpu: 3
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: 4
dist_rank: 3
local_rank: 3
dist_master_addr: localhost
dist_master_port: 33643
dist_launcher: null
multiprocessing_distributed: true
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["librispeech"], "inference": false}
|
espnet/xuankai_chang_librispeech_asr_train_asr_conformer7_wav2vec2_960hr_larg-truncated-5b94d9
| null |
[
"espnet",
"audio",
"automatic-speech-recognition",
"en",
"dataset:librispeech",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
audio-to-audio
|
espnet
|
# ESPnet2 ENH pretrained model
## `neillu23/dns_ins20_enh_train_enh_blstm_tf_raw_valid.loss.best, fs=16k, lang=en`
♻️ Imported from <https://zenodo.org/record/4923697#.YOAOIpozZH4>.
This model was trained by neillu23 using dns_ins20 recipe in [espnet](https://github.com/espnet/espnet/).
### Python API
```text
See https://github.com/espnet/espnet_model_zoo
```
### Evaluate in the recipe
```python
# coming soon
```
### Results
```bash
# RESULTS
## Environments
- date: `Wed Jun 9 09:49:34 CST 2021`
- python version: `3.8.10 (default, May 19 2021, 18:05:58) [GCC 7.3.0]`
- espnet version: `espnet 0.9.9`
- pytorch version: `pytorch 1.4.0`
- Git hash: `c1dfefb98bf59f654e0907b9681668eaca8ddfcc`
- Commit date: `Tue Jun 8 17:23:26 2021 +0800`
## enh_train_enh_blstm_tf_raw
config: ./conf/tuning/train_enh_blstm_tf.yaml
|dataset|STOI|SAR|SDR|SIR|
|---|---|---|---|---|
|enhanced_cv_synthetic|0.98|23.87|23.87|0.00|
|enhanced_tt_synthetic_no_reverb|0.96|15.94|15.94|0.00|
|enhanced_tt_synthetic_with_reverb|0.84|11.86|11.86|0.00|
```
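For context, SDR is a log-ratio of reference-signal energy to residual-error energy, in dB. The numbers above are presumably computed with a BSS-eval-style toolkit; the plain, alignment-assumed definition below is only a sketch of the metric's scale, not the recipe's exact scorer:

```python
import math

def sdr(reference, estimate):
    """Signal-to-distortion ratio in dB: 10*log10(||s||^2 / ||s - s_hat||^2)."""
    num = sum(s * s for s in reference)
    den = sum((s - e) ** 2 for s, e in zip(reference, estimate))
    return 10.0 * math.log10(num / den)

ref = [0.0, 1.0, 0.5, -0.5]
noisy = [0.1, 0.9, 0.6, -0.4]  # small distortion -> high SDR
print(round(sdr(ref, noisy), 1))
```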
### Training config
See full config in [`config.yaml`](./exp/enh_train_enh_blstm_tf_raw/config.yaml)
```yaml
config: ./conf/tuning/train_enh_blstm_tf.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/enh_train_enh_blstm_tf_raw
ngpu: 1
seed: 0
num_workers: 4
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: 2
dist_rank: 0
local_rank: 0
dist_master_addr: localhost
dist_master_port: 45398
dist_launcher: null
multiprocessing_distributed: true
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "audio-source-separation", "audio-to-audio"], "datasets": ["dns_ins20"], "inference": false}
|
espnet/yen-ju-lu-dns_ins20_enh_train_enh_blstm_tf_raw_valid.loss.best
| null |
[
"espnet",
"audio",
"audio-source-separation",
"audio-to-audio",
"en",
"dataset:dns_ins20",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null | null |
{}
|
estanislao/estarlin
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text-generation
| null |
# Bot Edan
|
{"tags": ["conversational"]}
|
estehpanas/pascalbot
| null |
[
"conversational",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
question-answering
|
transformers
|
# camembert-base-squadFR-fquad-piaf
## Description
French question-answering model: base [CamemBERT](https://camembert-model.fr/) fine-tuned on a combination of three French Q&A datasets:
1. [PIAFv1.1](https://www.data.gouv.fr/en/datasets/piaf-le-dataset-francophone-de-questions-reponses/)
2. [FQuADv1.0](https://fquad.illuin.tech/)
3. [SQuAD-FR (SQuAD automatically translated to French)](https://github.com/Alikabbadj/French-SQuAD)
## Training hyperparameters
```shell
python run_squad.py \
--model_type camembert \
--model_name_or_path camembert-base \
--do_train --do_eval \
--train_file data/SQuAD+fquad+piaf.json \
--predict_file data/fquad_valid.json \
--per_gpu_train_batch_size 12 \
--learning_rate 3e-5 \
--num_train_epochs 4 \
--max_seq_length 384 \
--doc_stride 128 \
--save_steps 10000
```
## Evaluation results
### FQuAD v1.0 Evaluation
```shell
{"f1": 79.81, "exact_match": 55.14}
```
### SQuAD-FR Evaluation
```shell
{"f1": 80.61, "exact_match": 59.54}
```
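The `f1` and `exact_match` figures above are the standard SQuAD metrics. A minimal sketch of both follows; it omits the official script's answer normalization (e.g. article and punctuation stripping), so it only illustrates the definitions:

```python
from collections import Counter

def exact_match(pred, gold):
    """1.0 if the prediction equals the gold answer (case-insensitive), else 0.0."""
    return float(pred.strip().lower() == gold.strip().lower())

def token_f1(pred, gold):
    """Token-level F1: harmonic mean of precision and recall over shared tokens."""
    p_toks, g_toks = pred.lower().split(), gold.lower().split()
    common = Counter(p_toks) & Counter(g_toks)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(p_toks)
    recall = overlap / len(g_toks)
    return 2 * precision * recall / (precision + recall)

print(exact_match("un peintre français", "un peintre français"))        # 1.0
print(round(token_f1("le peintre Claude Monet", "Claude Monet"), 2))    # 0.67
```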
## Usage
```python
from transformers import pipeline
nlp = pipeline('question-answering', model='etalab-ia/camembert-base-squadFR-fquad-piaf', tokenizer='etalab-ia/camembert-base-squadFR-fquad-piaf')
nlp({
'question': "Qui est Claude Monet?",
'context': "Claude Monet, né le 14 novembre 1840 à Paris et mort le 5 décembre 1926 à Giverny, est un peintre français et l’un des fondateurs de l'impressionnisme."
})
```
## Acknowledgments
This work was performed using HPC resources from GENCI–IDRIS (Grant 2020-AD011011224).
## Citations
### PIAF
```
@inproceedings{KeraronLBAMSSS20,
author = {Rachel Keraron and
Guillaume Lancrenon and
Mathilde Bras and
Fr{\'{e}}d{\'{e}}ric Allary and
Gilles Moyse and
Thomas Scialom and
Edmundo{-}Pavel Soriano{-}Morales and
Jacopo Staiano},
title = {Project {PIAF:} Building a Native French Question-Answering Dataset},
booktitle = {{LREC}},
pages = {5481--5490},
publisher = {European Language Resources Association},
year = {2020}
}
```
### FQuAD
```
@article{dHoffschmidt2020FQuADFQ,
title={FQuAD: French Question Answering Dataset},
author={Martin d'Hoffschmidt and Maxime Vidal and Wacim Belblidia and Tom Brendl'e and Quentin Heinrich},
journal={ArXiv},
year={2020},
volume={abs/2002.06071}
}
```
### SQuAD-FR
```
@MISC{kabbadj2018,
author = "Kabbadj, Ali",
title = "Something new in French Text Mining and Information Extraction (Universal Chatbot): Largest Q&A French training dataset (110 000+) ",
editor = "linkedin.com",
month = "November",
year = "2018",
url = "\url{https://www.linkedin.com/pulse/something-new-french-text-mining-information-chatbot-largest-kabbadj/}",
note = "[Online; posted 11-November-2018]",
}
```
### CamemBERT
HF model card : [https://huggingface.co/camembert-base](https://huggingface.co/camembert-base)
```
@inproceedings{martin2020camembert,
title={CamemBERT: a Tasty French Language Model},
author={Martin, Louis and Muller, Benjamin and Su{\'a}rez, Pedro Javier Ortiz and Dupont, Yoann and Romary, Laurent and de la Clergerie, {\'E}ric Villemonte and Seddah, Djam{\'e} and Sagot, Beno{\^\i}t},
booktitle={Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics},
year={2020}
}
```
|
{"language": "fr", "datasets": ["piaf", "FQuAD", "SQuAD-FR"], "widget": [{"text": "Comment s'appelle le portail open data du gouvernement ?", "context": "Etalab est une administration publique fran\u00e7aise qui fait notamment office de Chief Data Officer de l'\u00c9tat et coordonne la conception et la mise en \u0153uvre de sa strat\u00e9gie dans le domaine de la donn\u00e9e (ouverture et partage des donn\u00e9es publiques ou open data, exploitation des donn\u00e9es et intelligence artificielle...). Ainsi, Etalab d\u00e9veloppe et maintient le portail des donn\u00e9es ouvertes du gouvernement fran\u00e7ais data.gouv.fr. Etalab promeut \u00e9galement une plus grande ouverture l'administration sur la soci\u00e9t\u00e9 (gouvernement ouvert) : transparence de l'action publique, innovation ouverte, participation citoyenne... elle promeut l\u2019innovation, l\u2019exp\u00e9rimentation, les m\u00e9thodes de travail ouvertes, agiles et it\u00e9ratives, ainsi que les synergies avec la soci\u00e9t\u00e9 civile pour d\u00e9cloisonner l\u2019administration et favoriser l\u2019adoption des meilleures pratiques professionnelles dans le domaine du num\u00e9rique. \u00c0 ce titre elle \u00e9tudie notamment l\u2019opportunit\u00e9 de recourir \u00e0 des technologies en voie de maturation issues du monde de la recherche. Cette entit\u00e9 charg\u00e9e de l'innovation au sein de l'administration doit contribuer \u00e0 l'am\u00e9lioration du service public gr\u00e2ce au num\u00e9rique. Elle est rattach\u00e9e \u00e0 la Direction interminist\u00e9rielle du num\u00e9rique, dont les missions et l\u2019organisation ont \u00e9t\u00e9 fix\u00e9es par le d\u00e9cret du 30 octobre 2019.\u2009 Dirig\u00e9 par Laure Lucchesi depuis 2016, elle rassemble une \u00e9quipe pluridisciplinaire d'une trentaine de personnes."}]}
|
AgentPublic/camembert-base-squadFR-fquad-piaf
| null |
[
"transformers",
"pytorch",
"tf",
"safetensors",
"camembert",
"question-answering",
"fr",
"dataset:piaf",
"dataset:FQuAD",
"dataset:SQuAD-FR",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null |
transformers
|
# dpr-ctx_encoder-fr_qa-camembert
## Description
French [DPR model](https://arxiv.org/abs/2004.04906) using [CamemBERT](https://arxiv.org/abs/1911.03894) as its base, fine-tuned on a combination of three French Q&A datasets.
## Data
### French Q&A
We use a combination of three French Q&A datasets:
1. [PIAFv1.1](https://www.data.gouv.fr/en/datasets/piaf-le-dataset-francophone-de-questions-reponses/)
2. [FQuADv1.0](https://fquad.illuin.tech/)
3. [SQuAD-FR (SQuAD automatically translated to French)](https://github.com/Alikabbadj/French-SQuAD)
### Training
We use 90,562 random questions for `train` and 22,391 for `dev`; no question in `train` appears in `dev`. For each question we have a single `positive_context` (the paragraph where the answer to the question is found) and around 30 `hard_negtive_contexts`. Hard negative contexts are found by querying an Elasticsearch instance (via BM25 retrieval) and keeping the top-k candidates **that do not contain the answer**.
The files are over [here](https://drive.google.com/file/d/1W5Jm3sqqWlsWsx2sFpA39Ewn33PaLQ7U/view?usp=sharing).
### Evaluation
We use FQuADv1.0 and French-SQuAD evaluation sets.
## Training Script
We use the official [Facebook DPR implementation](https://github.com/facebookresearch/DPR) with a slight modification: by default the code works with RoBERTa models, so we changed a single line to make it work with CamemBERT. This modification can be found [over here](https://github.com/psorianom/DPR).
### Hyperparameters
```shell
python -m torch.distributed.launch --nproc_per_node=8 train_dense_encoder.py \
--max_grad_norm 2.0 \
--encoder_model_type fairseq_roberta \
--pretrained_file data/camembert-base \
--seed 12345 \
--sequence_length 256 \
--warmup_steps 1237 \
--batch_size 16 \
--do_lower_case \
--train_file ./data/DPR_FR_train.json \
--dev_file ./data/DPR_FR_dev.json \
--output_dir ./output/ \
--learning_rate 2e-05 \
--num_train_epochs 35 \
--dev_batch_size 16 \
--val_av_rank_start_epoch 30 \
--pretrained_model_cfg ./data/camembert-base/
```
###
## Evaluation results
We obtain the following evaluation by using FQuAD and SQuAD-FR evaluation (or validation) sets. To obtain these results, we use [haystack's evaluation script](https://github.com/deepset-ai/haystack/blob/db4151bbc026f27c6d709fefef1088cd3f1e18b9/tutorials/Tutorial5_Evaluation.py) (**we report Retrieval results only**).
### DPR
#### FQuAD v1.0 Evaluation
```shell
For 2764 out of 3184 questions (86.81%), the answer was in the top-20 candidate passages selected by the retriever.
Retriever Recall: 0.87
Retriever Mean Avg Precision: 0.57
```
#### SQuAD-FR Evaluation
```shell
For 8945 out of 10018 questions (89.29%), the answer was in the top-20 candidate passages selected by the retriever.
Retriever Recall: 0.89
Retriever Mean Avg Precision: 0.63
```
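The recall figures quoted above are top-20 hit rates — the fraction of questions whose answer appears among the 20 retrieved passages — rounded to two decimals. A quick check against the reported counts:

```python
# Recall@20 as reported above: answered-in-top-20 count / total questions.
def recall_at_k(hits, total):
    return hits / total

# FQuAD v1.0: 2764 of 3184 questions
assert round(recall_at_k(2764, 3184), 2) == 0.87
# SQuAD-FR: 8945 of 10018 questions
assert round(recall_at_k(8945, 10018), 2) == 0.89
print("recall figures reproduced")
```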
### BM25
For reference, BM25 obtains the results shown below. As in the original paper, on SQuAD-like datasets DPR is consistently outperformed by BM25.
#### FQuAD v1.0 Evaluation
```shell
For 2966 out of 3184 questions (93.15%), the answer was in the top-20 candidate passages selected by the retriever.
Retriever Recall: 0.93
Retriever Mean Avg Precision: 0.74
```
#### SQuAD-FR Evaluation
```shell
For 9353 out of 10018 questions (93.36%), the answer was in the top-20 candidate passages selected by the retriever.
Retriever Recall: 0.93
Retriever Mean Avg Precision: 0.77
```
## Usage
The results reported here are obtained with the `haystack` library. To obtain similar embeddings using only the HF `transformers` library, you can do the following:
```python
from transformers import AutoTokenizer, AutoModel
query = "Salut, mon chien est-il mignon ?"
tokenizer = AutoTokenizer.from_pretrained("etalab-ia/dpr-ctx_encoder-fr_qa-camembert", do_lower_case=True)
input_ids = tokenizer(query, return_tensors='pt')["input_ids"]
model = AutoModel.from_pretrained("etalab-ia/dpr-ctx_encoder-fr_qa-camembert", return_dict=True)
embeddings = model.forward(input_ids).pooler_output
print(embeddings)
```
And with `haystack`, we use it as a retriever:
```python
retriever = DensePassageRetriever(
document_store=document_store,
query_embedding_model="etalab-ia/dpr-question_encoder-fr_qa-camembert",
passage_embedding_model="etalab-ia/dpr-ctx_encoder-fr_qa-camembert",
model_version=dpr_model_tag,
infer_tokenizer_classes=True,
)
```
## Acknowledgments
This work was performed using HPC resources from GENCI–IDRIS (Grant 2020-AD011011224).
## Citations
### Datasets
#### PIAF
```
@inproceedings{KeraronLBAMSSS20,
author = {Rachel Keraron and
Guillaume Lancrenon and
Mathilde Bras and
Fr{\'{e}}d{\'{e}}ric Allary and
Gilles Moyse and
Thomas Scialom and
Edmundo{-}Pavel Soriano{-}Morales and
Jacopo Staiano},
title = {Project {PIAF:} Building a Native French Question-Answering Dataset},
booktitle = {{LREC}},
pages = {5481--5490},
publisher = {European Language Resources Association},
year = {2020}
}
```
#### FQuAD
```
@article{dHoffschmidt2020FQuADFQ,
title={FQuAD: French Question Answering Dataset},
author={Martin d'Hoffschmidt and Maxime Vidal and Wacim Belblidia and Tom Brendl'e and Quentin Heinrich},
journal={ArXiv},
year={2020},
volume={abs/2002.06071}
}
```
#### SQuAD-FR
```
@MISC{kabbadj2018,
author = "Kabbadj, Ali",
title = "Something new in French Text Mining and Information Extraction (Universal Chatbot): Largest Q&A French training dataset (110 000+) ",
editor = "linkedin.com",
month = "November",
year = "2018",
url = "\url{https://www.linkedin.com/pulse/something-new-french-text-mining-information-chatbot-largest-kabbadj/}",
note = "[Online; posted 11-November-2018]",
}
```
### Models
#### CamemBERT
HF model card : [https://huggingface.co/camembert-base](https://huggingface.co/camembert-base)
```
@inproceedings{martin2020camembert,
title={CamemBERT: a Tasty French Language Model},
author={Martin, Louis and Muller, Benjamin and Su{\'a}rez, Pedro Javier Ortiz and Dupont, Yoann and Romary, Laurent and de la Clergerie, {\'E}ric Villemonte and Seddah, Djam{\'e} and Sagot, Beno{\^\i}t},
booktitle={Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics},
year={2020}
}
```
#### DPR
```
@misc{karpukhin2020dense,
title={Dense Passage Retrieval for Open-Domain Question Answering},
author={Vladimir Karpukhin and Barlas Oğuz and Sewon Min and Patrick Lewis and Ledell Wu and Sergey Edunov and Danqi Chen and Wen-tau Yih},
year={2020},
eprint={2004.04906},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "fr", "datasets": ["piaf", "FQuAD", "SQuAD-FR"]}
|
AgentPublic/dpr-ctx_encoder-fr_qa-camembert
| null |
[
"transformers",
"pytorch",
"camembert",
"fr",
"dataset:piaf",
"dataset:FQuAD",
"dataset:SQuAD-FR",
"arxiv:2004.04906",
"arxiv:1911.03894",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
feature-extraction
|
transformers
|
# dpr-question_encoder-fr_qa-camembert
## Description
French [DPR model](https://arxiv.org/abs/2004.04906) using [CamemBERT](https://arxiv.org/abs/1911.03894) as its base, fine-tuned on a combination of three French Q&A datasets.
## Data
### French Q&A
We use a combination of three French Q&A datasets:
1. [PIAFv1.1](https://www.data.gouv.fr/en/datasets/piaf-le-dataset-francophone-de-questions-reponses/)
2. [FQuADv1.0](https://fquad.illuin.tech/)
3. [SQuAD-FR (SQuAD automatically translated to French)](https://github.com/Alikabbadj/French-SQuAD)
### Training
We use 90,562 random questions for `train` and 22,391 for `dev`; no question in `train` appears in `dev`. For each question we have a single `positive_context` (the paragraph where the answer to the question is found) and around 30 `hard_negtive_contexts`. Hard negative contexts are found by querying an Elasticsearch instance (via BM25 retrieval) and keeping the top-k candidates **that do not contain the answer**.
The files are over [here](https://drive.google.com/file/d/1W5Jm3sqqWlsWsx2sFpA39Ewn33PaLQ7U/view?usp=sharing).
### Evaluation
We use FQuADv1.0 and French-SQuAD evaluation sets.
## Training Script
We use the official [Facebook DPR implementation](https://github.com/facebookresearch/DPR) with a slight modification: by default the code works with RoBERTa models, so we changed a single line to make it work with CamemBERT. This modification can be found [over here](https://github.com/psorianom/DPR).
### Hyperparameters
```shell
python -m torch.distributed.launch --nproc_per_node=8 train_dense_encoder.py \
--max_grad_norm 2.0 --encoder_model_type hf_bert --pretrained_file data/bert-base-multilingual-uncased \
--seed 12345 --sequence_length 256 --warmup_steps 1237 --batch_size 16 --do_lower_case \
--train_file DPR_FR_train.json \
--dev_file ./data/100_hard_neg_ctxs/DPR_FR_dev.json \
--output_dir ./output/bert --learning_rate 2e-05 --num_train_epochs 35 \
--dev_batch_size 16 --val_av_rank_start_epoch 25 \
--pretrained_model_cfg ./data/bert-base-multilingual-uncased
```
###
## Evaluation results
We obtain the following evaluation by using FQuAD and SQuAD-FR evaluation (or validation) sets. To obtain these results, we use [haystack's evaluation script](https://github.com/deepset-ai/haystack/blob/db4151bbc026f27c6d709fefef1088cd3f1e18b9/tutorials/Tutorial5_Evaluation.py) (**we report Retrieval results only**).
### DPR
#### FQuAD v1.0 Evaluation
```shell
For 2764 out of 3184 questions (86.81%), the answer was in the top-20 candidate passages selected by the retriever.
Retriever Recall: 0.87
Retriever Mean Avg Precision: 0.57
```
#### SQuAD-FR Evaluation
```shell
For 8945 out of 10018 questions (89.29%), the answer was in the top-20 candidate passages selected by the retriever.
Retriever Recall: 0.89
Retriever Mean Avg Precision: 0.63
```
### BM25
For reference, BM25 obtains the results shown below. As in the original paper, on SQuAD-like datasets DPR is consistently outperformed by BM25.
#### FQuAD v1.0 Evaluation
```shell
For 2966 out of 3184 questions (93.15%), the answer was in the top-20 candidate passages selected by the retriever.
Retriever Recall: 0.93
Retriever Mean Avg Precision: 0.74
```
#### SQuAD-FR Evaluation
```shell
For 9353 out of 10018 questions (93.36%), the answer was in the top-20 candidate passages selected by the retriever.
Retriever Recall: 0.93
Retriever Mean Avg Precision: 0.77
```
## Usage
The results reported here are obtained with the `haystack` library. To obtain similar embeddings using only the HF `transformers` library, you can do the following:
```python
from transformers import AutoTokenizer, AutoModel
query = "Salut, mon chien est-il mignon ?"
tokenizer = AutoTokenizer.from_pretrained("etalab-ia/dpr-question_encoder-fr_qa-camembert", do_lower_case=True)
input_ids = tokenizer(query, return_tensors='pt')["input_ids"]
model = AutoModel.from_pretrained("etalab-ia/dpr-question_encoder-fr_qa-camembert", return_dict=True)
embeddings = model.forward(input_ids).pooler_output
print(embeddings)
```
And with `haystack`, we use it as a retriever:
```python
retriever = DensePassageRetriever(
document_store=document_store,
query_embedding_model="etalab-ia/dpr-question_encoder-fr_qa-camembert",
passage_embedding_model="etalab-ia/dpr-ctx_encoder-fr_qa-camembert",
model_version=dpr_model_tag,
infer_tokenizer_classes=True,
)
```
## Acknowledgments
This work was performed using HPC resources from GENCI–IDRIS (Grant 2020-AD011011224).
## Citations
### Datasets
#### PIAF
```
@inproceedings{KeraronLBAMSSS20,
author = {Rachel Keraron and
Guillaume Lancrenon and
Mathilde Bras and
Fr{\'{e}}d{\'{e}}ric Allary and
Gilles Moyse and
Thomas Scialom and
Edmundo{-}Pavel Soriano{-}Morales and
Jacopo Staiano},
title = {Project {PIAF:} Building a Native French Question-Answering Dataset},
booktitle = {{LREC}},
pages = {5481--5490},
publisher = {European Language Resources Association},
year = {2020}
}
```
#### FQuAD
```
@article{dHoffschmidt2020FQuADFQ,
title={FQuAD: French Question Answering Dataset},
author={Martin d'Hoffschmidt and Maxime Vidal and Wacim Belblidia and Tom Brendl'e and Quentin Heinrich},
journal={ArXiv},
year={2020},
volume={abs/2002.06071}
}
```
#### SQuAD-FR
```
@MISC{kabbadj2018,
author = "Kabbadj, Ali",
title = "Something new in French Text Mining and Information Extraction (Universal Chatbot): Largest Q&A French training dataset (110 000+) ",
editor = "linkedin.com",
month = "November",
year = "2018",
url = "\url{https://www.linkedin.com/pulse/something-new-french-text-mining-information-chatbot-largest-kabbadj/}",
note = "[Online; posted 11-November-2018]",
}
```
### Models
#### CamemBERT
HF model card : [https://huggingface.co/camembert-base](https://huggingface.co/camembert-base)
```
@inproceedings{martin2020camembert,
title={CamemBERT: a Tasty French Language Model},
author={Martin, Louis and Muller, Benjamin and Su{\'a}rez, Pedro Javier Ortiz and Dupont, Yoann and Romary, Laurent and de la Clergerie, {\'E}ric Villemonte and Seddah, Djam{\'e} and Sagot, Beno{\^\i}t},
booktitle={Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics},
year={2020}
}
```
#### DPR
```
@misc{karpukhin2020dense,
title={Dense Passage Retrieval for Open-Domain Question Answering},
author={Vladimir Karpukhin and Barlas Oğuz and Sewon Min and Patrick Lewis and Ledell Wu and Sergey Edunov and Danqi Chen and Wen-tau Yih},
year={2020},
eprint={2004.04906},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "fr", "datasets": ["piaf", "FQuAD", "SQuAD-FR"]}
|
AgentPublic/dpr-question_encoder-fr_qa-camembert
| null |
[
"transformers",
"pytorch",
"camembert",
"feature-extraction",
"fr",
"dataset:piaf",
"dataset:FQuAD",
"dataset:SQuAD-FR",
"arxiv:2004.04906",
"arxiv:1911.03894",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null | null |
{}
|
AgentPublic/ranker_question-text-fr_base_camembert
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
ethan21814/CLMOpenAIGPT
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
ethanbmehta/test_repo
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text-classification
|
transformers
|
# Guwen CLS
A Classical Chinese Text Classifier.
See also:
<a href="https://github.com/ethan-yt/guwen-models">
<img align="center" width="400" src="https://github-readme-stats.vercel.app/api/pin/?username=ethan-yt&repo=guwen-models&bg_color=30,e96443,904e95&title_color=fff&text_color=fff&icon_color=fff&show_owner=true" />
</a>
<a href="https://github.com/ethan-yt/cclue/">
<img align="center" width="400" src="https://github-readme-stats.vercel.app/api/pin/?username=ethan-yt&repo=cclue&bg_color=30,e96443,904e95&title_color=fff&text_color=fff&icon_color=fff&show_owner=true" />
</a>
<a href="https://github.com/ethan-yt/guwenbert/">
<img align="center" width="400" src="https://github-readme-stats.vercel.app/api/pin/?username=ethan-yt&repo=guwenbert&bg_color=30,e96443,904e95&title_color=fff&text_color=fff&icon_color=fff&show_owner=true" />
</a>
|
{"language": ["zh"], "license": "apache-2.0", "tags": ["chinese", "classical chinese", "literary chinese", "ancient chinese", "bert", "pytorch", "text classificatio"], "thumbnail": "https://user-images.githubusercontent.com/9592150/97142000-cad08e00-179a-11eb-88df-aff9221482d8.png", "pipeline_tag": "text-classification", "widget": [{"text": "\u5b50\u66f0\uff1a\u201c\u5f1f\u5b50\u5165\u5219\u5b5d\uff0c\u51fa\u5219\u608c\uff0c\u8c28\u800c\u4fe1\uff0c\u6cdb\u7231\u4f17\uff0c\u800c\u4eb2\u4ec1\u3002\u884c\u6709\u9980\u529b\uff0c\u5219\u4ee5\u5b66\u6587\u3002\u201d"}]}
|
ethanyt/guwen-cls
| null |
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"chinese",
"classical chinese",
"literary chinese",
"ancient chinese",
"bert",
"text classificatio",
"zh",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
token-classification
|
transformers
|
# Guwen NER
A Classical Chinese Named Entity Recognizer.
Note: there are some problems with decoding using the default token classification model. For the best results, use the CRF model; for CRF-related code, please refer to
[Guwen Models](https://github.com/ethan-yt/guwen-models).
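The CRF decoding mentioned above can be illustrated with a minimal Viterbi decoder over per-token label scores plus a label-transition matrix. This is a self-contained sketch of the idea, not the actual Guwen Models implementation; the labels and scores below are made up:

```python
def viterbi_decode(emissions, transitions):
    """Find the highest-scoring label sequence.

    emissions: per-token score lists, shape (seq_len, num_labels)
    transitions: transitions[i][j] = score of moving from label i to label j
    """
    num_labels = len(emissions[0])
    # scores[j] = best score of any path ending in label j at the current token
    scores = list(emissions[0])
    backpointers = []
    for emission in emissions[1:]:
        new_scores, pointers = [], []
        for j in range(num_labels):
            best_prev = max(range(num_labels),
                            key=lambda i: scores[i] + transitions[i][j])
            new_scores.append(scores[best_prev] + transitions[best_prev][j] + emission[j])
            pointers.append(best_prev)
        scores = new_scores
        backpointers.append(pointers)
    # Backtrack from the best final label
    best = max(range(num_labels), key=lambda j: scores[j])
    path = [best]
    for pointers in reversed(backpointers):
        best = pointers[best]
        path.append(best)
    return list(reversed(path))

# Hypothetical labels: 0 = O, 1 = B-NAME, 2 = I-NAME; forbid O -> I transitions
NEG = -1e4
transitions = [[0.0, 0.0, NEG],   # from O
               [0.0, 0.0, 0.0],   # from B-NAME
               [0.0, 0.0, 0.0]]   # from I-NAME
emissions = [[0.1, 2.0, 0.0],     # looks like B-NAME
             [0.2, 0.0, 1.5],     # looks like I-NAME
             [3.0, 0.0, 0.5]]     # looks like O
print(viterbi_decode(emissions, transitions))  # [1, 2, 0]
```

Unlike greedy per-token argmax, the transition scores let the decoder rule out invalid label sequences such as an `I-` tag directly after `O`.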
See also:
<a href="https://github.com/ethan-yt/guwen-models">
<img align="center" width="400" src="https://github-readme-stats.vercel.app/api/pin/?username=ethan-yt&repo=guwen-models&bg_color=30,e96443,904e95&title_color=fff&text_color=fff&icon_color=fff&show_owner=true" />
</a>
<a href="https://github.com/ethan-yt/cclue/">
<img align="center" width="400" src="https://github-readme-stats.vercel.app/api/pin/?username=ethan-yt&repo=cclue&bg_color=30,e96443,904e95&title_color=fff&text_color=fff&icon_color=fff&show_owner=true" />
</a>
<a href="https://github.com/ethan-yt/guwenbert/">
<img align="center" width="400" src="https://github-readme-stats.vercel.app/api/pin/?username=ethan-yt&repo=guwenbert&bg_color=30,e96443,904e95&title_color=fff&text_color=fff&icon_color=fff&show_owner=true" />
</a>
|
{"language": ["zh"], "license": "apache-2.0", "tags": ["chinese", "classical chinese", "literary chinese", "ancient chinese", "bert", "pytorch"], "thumbnail": "https://user-images.githubusercontent.com/9592150/97142000-cad08e00-179a-11eb-88df-aff9221482d8.png", "pipeline_tag": "token-classification", "widget": [{"text": "\u53ca\u79e6\u59cb\u7687\uff0c\u706d\u5148\u4ee3\u5178\u7c4d\uff0c\u711a\u4e66\u5751\u5112\uff0c\u5929\u4e0b\u5b66\u58eb\u9003\u96be\u89e3\u6563\uff0c\u6211\u5148\u4eba\u7528\u85cf\u5176\u5bb6\u4e66\u4e8e\u5c4b\u58c1\u3002\u6c49\u5ba4\u9f99\u5174\uff0c\u5f00\u8bbe\u5b66\u6821\uff0c\u65c1\u6c42\u5112\u96c5\uff0c\u4ee5\u9610\u5927\u7337\u3002\u6d4e\u5357\u4f0f\u751f\uff0c\u5e74\u8fc7\u4e5d\u5341\uff0c\u5931\u5176\u672c\u7ecf\uff0c\u53e3\u4ee5\u4f20\u6388\uff0c\u88c1\u4e8c\u5341\u9980\u7bc7\uff0c\u4ee5\u5176\u4e0a\u53e4\u4e4b\u4e66\uff0c\u8c13\u4e4b\u5c1a\u4e66\u3002\u767e\u7bc7\u4e4b\u4e49\uff0c\u4e16\u83ab\u5f97\u95fb\u3002"}]}
|
ethanyt/guwen-ner
| null |
[
"transformers",
"pytorch",
"jax",
"roberta",
"token-classification",
"chinese",
"classical chinese",
"literary chinese",
"ancient chinese",
"bert",
"zh",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
token-classification
|
transformers
|
# Guwen Punc
A Classical Chinese Punctuation Marker.
See also:
<a href="https://github.com/ethan-yt/guwen-models">
<img align="center" width="400" src="https://github-readme-stats.vercel.app/api/pin/?username=ethan-yt&repo=guwen-models&bg_color=30,e96443,904e95&title_color=fff&text_color=fff&icon_color=fff&show_owner=true" />
</a>
<a href="https://github.com/ethan-yt/cclue/">
<img align="center" width="400" src="https://github-readme-stats.vercel.app/api/pin/?username=ethan-yt&repo=cclue&bg_color=30,e96443,904e95&title_color=fff&text_color=fff&icon_color=fff&show_owner=true" />
</a>
<a href="https://github.com/ethan-yt/guwenbert/">
<img align="center" width="400" src="https://github-readme-stats.vercel.app/api/pin/?username=ethan-yt&repo=guwenbert&bg_color=30,e96443,904e95&title_color=fff&text_color=fff&icon_color=fff&show_owner=true" />
</a>
|
{"language": ["zh"], "license": "apache-2.0", "tags": ["chinese", "classical chinese", "literary chinese", "ancient chinese", "bert", "pytorch", "punctuation marker"], "thumbnail": "https://user-images.githubusercontent.com/9592150/97142000-cad08e00-179a-11eb-88df-aff9221482d8.png", "pipeline_tag": "token-classification", "widget": [{"text": "\u53ca\u79e6\u59cb\u7687\u706d\u5148\u4ee3\u5178\u7c4d\u711a\u4e66\u5751\u5112\u5929\u4e0b\u5b66\u58eb\u9003\u96be\u89e3\u6563\u6211\u5148\u4eba\u7528\u85cf\u5176\u5bb6\u4e66\u4e8e\u5c4b\u58c1\u6c49\u5ba4\u9f99\u5174\u5f00\u8bbe\u5b66\u6821\u65c1\u6c42\u5112\u96c5\u4ee5\u9610\u5927\u7337\u6d4e\u5357\u4f0f\u751f\u5e74\u8fc7\u4e5d\u5341\u5931\u5176\u672c\u7ecf\u53e3\u4ee5\u4f20\u6388\u88c1\u4e8c\u5341\u9980\u7bc7\u4ee5\u5176\u4e0a\u53e4\u4e4b\u4e66\u8c13\u4e4b\u5c1a\u4e66\u767e\u7bc7\u4e4b\u4e49\u4e16\u83ab\u5f97\u95fb"}]}
|
ethanyt/guwen-punc
| null |
[
"transformers",
"pytorch",
"roberta",
"token-classification",
"chinese",
"classical chinese",
"literary chinese",
"ancient chinese",
"bert",
"punctuation marker",
"zh",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
token-classification
|
transformers
|
# Guwen Quote
A Classical Chinese Quotation Detector.
Note: there are some problems with decoding using the default token classification model. For the best results, use the CRF model; for CRF-related code, please refer to
[Guwen Models](https://github.com/ethan-yt/guwen-models).
See also:
<a href="https://github.com/ethan-yt/guwen-models">
<img align="center" width="400" src="https://github-readme-stats.vercel.app/api/pin/?username=ethan-yt&repo=guwen-models&bg_color=30,e96443,904e95&title_color=fff&text_color=fff&icon_color=fff&show_owner=true" />
</a>
<a href="https://github.com/ethan-yt/cclue/">
<img align="center" width="400" src="https://github-readme-stats.vercel.app/api/pin/?username=ethan-yt&repo=cclue&bg_color=30,e96443,904e95&title_color=fff&text_color=fff&icon_color=fff&show_owner=true" />
</a>
<a href="https://github.com/ethan-yt/guwenbert/">
<img align="center" width="400" src="https://github-readme-stats.vercel.app/api/pin/?username=ethan-yt&repo=guwenbert&bg_color=30,e96443,904e95&title_color=fff&text_color=fff&icon_color=fff&show_owner=true" />
</a>
|
{"language": ["zh"], "license": "apache-2.0", "tags": ["chinese", "classical chinese", "literary chinese", "ancient chinese", "bert", "pytorch", "quotation detection"], "thumbnail": "https://user-images.githubusercontent.com/9592150/97142000-cad08e00-179a-11eb-88df-aff9221482d8.png", "pipeline_tag": "token-classification", "widget": [{"text": "\u5b50\u66f0\u5b66\u800c\u65f6\u4e60\u4e4b\u4e0d\u4ea6\u8bf4\u4e4e\u6709\u670b\u81ea\u8fdc\u65b9\u6765\u4e0d\u4ea6\u4e50\u4e4e\u4eba\u4e0d\u77e5\u800c\u4e0d\u6120\u4e0d\u4ea6\u541b\u5b50\u4e4e\u6709\u5b50\u66f0\u5176\u4e3a\u4eba\u4e5f\u5b5d\u5f1f\u800c\u597d\u72af\u4e0a\u8005\u9c9c\u77e3\u4e0d\u597d\u72af\u4e0a\u800c\u597d\u4f5c\u4e71\u8005\u672a\u4e4b\u6709\u4e5f\u541b\u5b50\u52a1\u672c\u672c\u7acb\u800c\u9053\u751f\u5b5d\u5f1f\u4e5f\u8005\u5176\u4e3a\u4ec1\u4e4b\u672c\u4e0e\u5b50\u66f0\u5de7\u8a00\u4ee4\u8272\u9c9c\u77e3\u4ec1\u66fe\u5b50\u66f0\u543e\u65e5\u4e09\u7701\u543e\u8eab\u4e3a\u4eba\u8c0b\u800c\u4e0d\u5fe0\u4e4e\u4e0e\u670b\u53cb\u4ea4\u800c\u4e0d\u4fe1\u4e4e\u4f20\u4e0d\u4e60\u4e4e\u5b50\u66f0\u9053\u5343\u4e58\u4e4b\u56fd\u656c\u4e8b\u800c\u4fe1\u8282\u7528\u800c\u7231\u4eba\u4f7f\u6c11\u4ee5\u65f6"}]}
|
ethanyt/guwen-quote
| null |
[
"transformers",
"pytorch",
"roberta",
"token-classification",
"chinese",
"classical chinese",
"literary chinese",
"ancient chinese",
"bert",
"quotation detection",
"zh",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
token-classification
|
transformers
|
# Guwen Seg
A Classical Chinese Sentence Segmenter.
See also:
<a href="https://github.com/ethan-yt/guwen-models">
<img align="center" width="400" src="https://github-readme-stats.vercel.app/api/pin/?username=ethan-yt&repo=guwen-models&bg_color=30,e96443,904e95&title_color=fff&text_color=fff&icon_color=fff&show_owner=true" />
</a>
<a href="https://github.com/ethan-yt/cclue/">
<img align="center" width="400" src="https://github-readme-stats.vercel.app/api/pin/?username=ethan-yt&repo=cclue&bg_color=30,e96443,904e95&title_color=fff&text_color=fff&icon_color=fff&show_owner=true" />
</a>
<a href="https://github.com/ethan-yt/guwenbert/">
<img align="center" width="400" src="https://github-readme-stats.vercel.app/api/pin/?username=ethan-yt&repo=guwenbert&bg_color=30,e96443,904e95&title_color=fff&text_color=fff&icon_color=fff&show_owner=true" />
</a>
|
{"language": ["zh"], "license": "apache-2.0", "tags": ["chinese", "classical chinese", "literary chinese", "ancient chinese", "bert", "pytorch", "sentence segmentation"], "thumbnail": "https://user-images.githubusercontent.com/9592150/97142000-cad08e00-179a-11eb-88df-aff9221482d8.png", "pipeline_tag": "token-classification", "widget": [{"text": "\u53ca\u79e6\u59cb\u7687\u706d\u5148\u4ee3\u5178\u7c4d\u711a\u4e66\u5751\u5112\u5929\u4e0b\u5b66\u58eb\u9003\u96be\u89e3\u6563\u6211\u5148\u4eba\u7528\u85cf\u5176\u5bb6\u4e66\u4e8e\u5c4b\u58c1\u6c49\u5ba4\u9f99\u5174\u5f00\u8bbe\u5b66\u6821\u65c1\u6c42\u5112\u96c5\u4ee5\u9610\u5927\u7337\u6d4e\u5357\u4f0f\u751f\u5e74\u8fc7\u4e5d\u5341\u5931\u5176\u672c\u7ecf\u53e3\u4ee5\u4f20\u6388\u88c1\u4e8c\u5341\u9980\u7bc7\u4ee5\u5176\u4e0a\u53e4\u4e4b\u4e66\u8c13\u4e4b\u5c1a\u4e66\u767e\u7bc7\u4e4b\u4e49\u4e16\u83ab\u5f97\u95fb"}]}
|
ethanyt/guwen-seg
| null |
[
"transformers",
"pytorch",
"roberta",
"token-classification",
"chinese",
"classical chinese",
"literary chinese",
"ancient chinese",
"bert",
"sentence segmentation",
"zh",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-classification
|
transformers
|
# Guwen Sent
A Classical Chinese Poem Sentiment Classifier.
See also:
<a href="https://github.com/ethan-yt/guwen-models">
<img align="center" width="400" src="https://github-readme-stats.vercel.app/api/pin/?username=ethan-yt&repo=guwen-models&bg_color=30,e96443,904e95&title_color=fff&text_color=fff&icon_color=fff&show_owner=true" />
</a>
<a href="https://github.com/ethan-yt/cclue/">
<img align="center" width="400" src="https://github-readme-stats.vercel.app/api/pin/?username=ethan-yt&repo=cclue&bg_color=30,e96443,904e95&title_color=fff&text_color=fff&icon_color=fff&show_owner=true" />
</a>
<a href="https://github.com/ethan-yt/guwenbert/">
<img align="center" width="400" src="https://github-readme-stats.vercel.app/api/pin/?username=ethan-yt&repo=guwenbert&bg_color=30,e96443,904e95&title_color=fff&text_color=fff&icon_color=fff&show_owner=true" />
</a>
|
{"language": ["zh"], "license": "apache-2.0", "tags": ["chinese", "classical chinese", "literary chinese", "ancient chinese", "bert", "pytorch", "sentiment classificatio"], "thumbnail": "https://user-images.githubusercontent.com/9592150/97142000-cad08e00-179a-11eb-88df-aff9221482d8.png", "pipeline_tag": "text-classification", "widget": [{"text": "\u6eda\u6eda\u957f\u6c5f\u4e1c\u901d\u6c34\uff0c\u6d6a\u82b1\u6dd8\u5c3d\u82f1\u96c4"}, {"text": "\u5bfb\u5bfb\u89c5\u89c5\uff0c\u51b7\u51b7\u6e05\u6e05\uff0c\u51c4\u51c4\u60e8\u60e8\u621a\u621a"}, {"text": "\u6267\u624b\u76f8\u770b\u6cea\u773c\uff0c\u7adf\u65e0\u8bed\u51dd\u564e\uff0c\u5ff5\u53bb\u53bb\uff0c\u5343\u91cc\u70df\u6ce2\uff0c\u66ae\u972d\u6c89\u6c89\u695a\u5929\u9614\u3002"}, {"text": "\u5ffd\u5982\u4e00\u591c\u6625\u98ce\u6765\uff0c\u5e72\u6811\u4e07\u6811\u68a8\u82b1\u5f00"}]}
|
ethanyt/guwen-sent
| null |
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"chinese",
"classical chinese",
"literary chinese",
"ancient chinese",
"bert",
"sentiment classificatio",
"zh",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
fill-mask
|
transformers
|
# GuwenBERT
## Model description

This is a RoBERTa model pre-trained on Classical Chinese. You can fine-tune GuwenBERT for downstream tasks, such as sentence breaking, punctuation, named entity recognition, and so on.
For more information about RoBERTa, take a look at RoBERTa's official repo.
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("ethanyt/guwenbert-base")
model = AutoModel.from_pretrained("ethanyt/guwenbert-base")
```
## Training data
The training data is the Daizhige dataset (殆知阁古代文献), which contains 15,694 books in Classical Chinese, covering Buddhism, Confucianism, Medicine, History, Zi, Yi, Yizang, Shizang, Taoism, and Jizang.
76% of the books are punctuated.
The total number of characters is 1.7B (1,743,337,673).
All traditional characters are converted to simplified characters.
The vocabulary is constructed from this dataset and its size is 23,292.
## Training procedure
The models are initialized with `hfl/chinese-roberta-wwm-ext` and then pre-trained with a 2-step strategy.
In the first step, the model learns MLM with only word embeddings updated during training, until convergence. In the second step, all parameters are updated during training.
The models are trained on 4 V100 GPUs for 120K steps (20K for step#1, 100K for step#2) with a batch size of 2,048 and a sequence length of 512. The optimizer used is Adam with a learning rate of 2e-4, adam-betas of (0.9,0.98), adam-eps of 1e-6, a weight decay of 0.01, learning rate warmup for 5K steps, and linear decay of learning rate after.
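The learning-rate schedule described above (linear warmup for 5K steps, then linear decay over the remaining steps) can be sketched as follows. This is a simplified illustration of the schedule shape, not the actual training code:

```python
PEAK_LR = 2e-4        # peak learning rate from the procedure above
WARMUP_STEPS = 5_000
TOTAL_STEPS = 120_000

def learning_rate(step):
    """Linear warmup to PEAK_LR, then linear decay to 0 at TOTAL_STEPS."""
    if step < WARMUP_STEPS:
        return PEAK_LR * step / WARMUP_STEPS
    return PEAK_LR * (TOTAL_STEPS - step) / (TOTAL_STEPS - WARMUP_STEPS)

print(learning_rate(0))        # 0.0
print(learning_rate(5_000))    # peak, 2e-4
print(learning_rate(120_000))  # 0.0
```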
## Eval results
### "Gulian Cup" Ancient Books Named Entity Recognition Evaluation
Second place in the competition. Detailed test results:
| NE Type | Precision | Recall | F1 |
|:----------:|:-----------:|:------:|:-----:|
| Book Name | 77.50 | 73.73 | 75.57 |
| Other Name | 85.85 | 89.32 | 87.55 |
| Micro Avg. | 83.88 | 85.39 | 84.63 |
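The micro average in the table pools true-positive/false-positive/false-negative counts across entity types before computing precision, recall, and F1 once. A sketch with made-up counts (the real evaluation counts are not published here):

```python
def prf(tp, fp, fn):
    """Precision, recall, and F1 from raw counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical per-type (tp, fp, fn) counts -- not the real evaluation data
counts = {"Book Name": (90, 25, 30), "Other Name": (400, 65, 50)}

# Micro average: sum the counts across types, then compute P/R/F1 once
tp = sum(c[0] for c in counts.values())
fp = sum(c[1] for c in counts.values())
fn = sum(c[2] for c in counts.values())
print(prf(tp, fp, fn))
```

This is why the micro average is dominated by the more frequent entity type, rather than being a simple mean of the per-type scores.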
## About Us
We are from [Datahammer](https://datahammer.net), Beijing Institute of Technology.
For cooperation inquiries, please contact us by email: ethanyt [at] qq.com
> Created with ❤️ by Tan Yan [](https://github.com/Ethan-yt) and Zewen Chi [](https://github.com/CZWin32768)
|
{"language": ["zh"], "license": "apache-2.0", "tags": ["chinese", "classical chinese", "literary chinese", "ancient chinese", "bert", "pytorch"], "thumbnail": "https://user-images.githubusercontent.com/9592150/97142000-cad08e00-179a-11eb-88df-aff9221482d8.png", "pipeline_tag": "fill-mask", "mask_token": "[MASK]", "widget": [{"text": "[MASK]\u592a\u5143\u4e2d\uff0c\u6b66\u9675\u4eba\u6355\u9c7c\u4e3a\u4e1a\u3002"}, {"text": "\u95ee\u5f81\u592b\u4ee5\u524d\u8def\uff0c\u6068\u6668\u5149\u4e4b[MASK]\u5fae\u3002"}, {"text": "\u6d54\u9633\u6c5f\u5934\u591c\u9001\u5ba2\uff0c\u67ab\u53f6[MASK]\u82b1\u79cb\u745f\u745f\u3002"}]}
|
ethanyt/guwenbert-base
| null |
[
"transformers",
"pytorch",
"jax",
"roberta",
"fill-mask",
"chinese",
"classical chinese",
"literary chinese",
"ancient chinese",
"bert",
"zh",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
fill-mask
|
transformers
|
# GuwenBERT
## Model description

This is a RoBERTa model pre-trained on Classical Chinese. You can fine-tune GuwenBERT for downstream tasks, such as sentence breaking, punctuation, named entity recognition, and so on.
For more information about RoBERTa, take a look at RoBERTa's official repo.
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("ethanyt/guwenbert-large")
model = AutoModel.from_pretrained("ethanyt/guwenbert-large")
```
## Training data
The training data is the Daizhige dataset (殆知阁古代文献), which contains 15,694 books in Classical Chinese, covering Buddhism, Confucianism, Medicine, History, Zi, Yi, Yizang, Shizang, Taoism, and Jizang.
76% of the books are punctuated.
The total number of characters is 1.7B (1,743,337,673).
All traditional characters are converted to simplified characters.
The vocabulary is constructed from this dataset and its size is 23,292.
## Training procedure
The models are initialized with `hfl/chinese-roberta-wwm-ext-large` and then pre-trained with a 2-step strategy.
In the first step, the model learns MLM with only word embeddings updated during training, until convergence. In the second step, all parameters are updated during training.
The models are trained on 4 V100 GPUs for 120K steps (20K for step#1, 100K for step#2) with a batch size of 2,048 and a sequence length of 512. The optimizer used is Adam with a learning rate of 1e-4, adam-betas of (0.9,0.98), adam-eps of 1e-6, a weight decay of 0.01, learning rate warmup for 5K steps, and linear decay of learning rate after.
## Eval results
### "Gulian Cup" Ancient Books Named Entity Recognition Evaluation
Second place in the competition. Detailed test results:
| NE Type | Precision | Recall | F1 |
|:----------:|:-----------:|:------:|:-----:|
| Book Name | 77.50 | 73.73 | 75.57 |
| Other Name | 85.85 | 89.32 | 87.55 |
| Micro Avg. | 83.88 | 85.39 | 84.63 |
## About Us
We are from [Datahammer](https://datahammer.net), Beijing Institute of Technology.
For cooperation inquiries, please contact us by email: ethanyt [at] qq.com
> Created with ❤️ by Tan Yan [](https://github.com/Ethan-yt) and Zewen Chi [](https://github.com/CZWin32768)
|
{"language": ["zh"], "license": "apache-2.0", "tags": ["chinese", "classical chinese", "literary chinese", "ancient chinese", "bert", "pytorch"], "thumbnail": "https://user-images.githubusercontent.com/9592150/97142000-cad08e00-179a-11eb-88df-aff9221482d8.png", "pipeline_tag": "fill-mask", "mask_token": "[MASK]", "widget": [{"text": "[MASK]\u592a\u5143\u4e2d\uff0c\u6b66\u9675\u4eba\u6355\u9c7c\u4e3a\u4e1a\u3002"}, {"text": "\u95ee\u5f81\u592b\u4ee5\u524d\u8def\uff0c\u6068\u6668\u5149\u4e4b[MASK]\u5fae\u3002"}, {"text": "\u6d54\u9633\u6c5f\u5934\u591c\u9001\u5ba2\uff0c\u67ab\u53f6[MASK]\u82b1\u79cb\u745f\u745f\u3002"}]}
|
ethanyt/guwenbert-large
| null |
[
"transformers",
"pytorch",
"jax",
"roberta",
"fill-mask",
"chinese",
"classical chinese",
"literary chinese",
"ancient chinese",
"bert",
"zh",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null | null |
{}
|
ethman/test
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text-generation
|
transformers
|
# ai-msgbot GPT2-L + daily dialogues
_NOTE: this model card is a WIP_
GPT2-L (774M parameters) fine-tuned on the Wizard of Wikipedia dataset for 40k steps with 34/36 layers frozen using `aitextgen`. This model was then further fine-tuned on the [Daily Dialogues](http://yanran.li/dailydialog) dataset for an additional 40k steps, this time with **35** of 36 layers frozen.
Designed for use with [ai-msgbot](https://github.com/pszemraj/ai-msgbot) to create an open-ended chatbot (of course, if other use cases arise, have at it).
## conversation data
The dataset was tokenized and fed to the model as a conversation between two speakers, whose names are below. This is relevant for writing prompts and filtering/extracting text from responses.
`script_speaker_name` = `person alpha`
`script_responder_name` = `person beta`
## examples
- the default inference API examples should work _okay_
- an ideal test would be explicitly adding `person beta` to the **end** of the prompt text. The model is forced to respond to the entered chat prompt instead of adding to the entered prompt and then responding to that (which may cut off the response text due to the Inference API limits).
### Example prompt:
```
do you like to eat beans?
person beta:
```
### Resulting output
```
do you like to eat beans?
person beta:
no, i don't like
```
## citations
```
@inproceedings{dinan2019wizard,
author={Emily Dinan and Stephen Roller and Kurt Shuster and Angela Fan and Michael Auli and Jason Weston},
title={{W}izard of {W}ikipedia: Knowledge-powered Conversational Agents},
booktitle = {Proceedings of the International Conference on Learning Representations (ICLR)},
year={2019},
}
@inproceedings{li-etal-2017-dailydialog,
title = "{D}aily{D}ialog: A Manually Labelled Multi-turn Dialogue Dataset",
author = "Li, Yanran and
Su, Hui and
Shen, Xiaoyu and
Li, Wenjie and
Cao, Ziqiang and
Niu, Shuzi",
booktitle = "Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
month = nov,
year = "2017",
address = "Taipei, Taiwan",
publisher = "Asian Federation of Natural Language Processing",
url = "https://aclanthology.org/I17-1099",
pages = "986--995",
abstract = "We develop a high-quality multi-turn dialog dataset, \textbf{DailyDialog}, which is intriguing in several aspects. The language is human-written and less noisy. The dialogues in the dataset reflect our daily communication way and cover various topics about our daily life. We also manually label the developed dataset with communication intention and emotion information. Then, we evaluate existing approaches on DailyDialog dataset and hope it benefit the research field of dialog systems. The dataset is available on \url{http://yanran.li/dailydialog}",
}
```
|
{}
|
ethzanalytics/ai-msgbot-gpt2-L-dialogue
| null |
[
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-generation
|
transformers
|
# ai-msgbot GPT2-L
_NOTE: model card is WIP_
GPT2-L (774M parameters) trained on [the Wizard of Wikipedia dataset](https://parl.ai/projects/wizard_of_wikipedia/) for 40k steps with 34/36 layers frozen using `aitextgen`.
Designed for use with [ai-msgbot](https://github.com/pszemraj/ai-msgbot) to create an open-ended chatbot (of course, if other use cases arise, have at it).
## conversation data
The dataset was tokenized and fed to the model as a conversation between two speakers, whose names are below. This is relevant for writing prompts and filtering/extracting text from responses.
`script_speaker_name` = `person alpha`
`script_responder_name` = `person beta`
## examples
- the default inference API examples should work _okay_
- an ideal test would be explicitly adding `person beta` to the **end** of the prompt text. The model is forced to respond to the entered chat prompt instead of adding to the entered prompt and then responding to that (which may cut off the response text due to the Inference API limits).
## citations
```
@inproceedings{dinan2019wizard,
author={Emily Dinan and Stephen Roller and Kurt Shuster and Angela Fan and Michael Auli and Jason Weston},
title={{W}izard of {W}ikipedia: Knowledge-powered Conversational Agents},
booktitle = {Proceedings of the International Conference on Learning Representations (ICLR)},
year={2019},
}
@inproceedings{li-etal-2017-dailydialog,
title = "{D}aily{D}ialog: A Manually Labelled Multi-turn Dialogue Dataset",
author = "Li, Yanran and
Su, Hui and
Shen, Xiaoyu and
Li, Wenjie and
Cao, Ziqiang and
Niu, Shuzi",
booktitle = "Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
month = nov,
year = "2017",
address = "Taipei, Taiwan",
publisher = "Asian Federation of Natural Language Processing",
url = "https://aclanthology.org/I17-1099",
pages = "986--995",
abstract = "We develop a high-quality multi-turn dialog dataset, \textbf{DailyDialog}, which is intriguing in several aspects. The language is human-written and less noisy. The dialogues in the dataset reflect our daily communication way and cover various topics about our daily life. We also manually label the developed dataset with communication intention and emotion information. Then, we evaluate existing approaches on DailyDialog dataset and hope it benefit the research field of dialog systems. The dataset is available on \url{http://yanran.li/dailydialog}",
}
```
|
{}
|
ethzanalytics/ai-msgbot-gpt2-L
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-generation
|
transformers
|
# ai-msgbot GPT-2 M Conversational
A GPT-2 M 355M parameter model for use with [ai-msgbot](https://github.com/pszemraj/ai-msgbot) to create a chatbot-like tool.
This model was fine-tuned on a parsed version of [the Wizard of Wikipedia dataset](https://parl.ai/projects/wizard_of_wikipedia/) for 10,000 steps. 20/24 layers were frozen for the fine-tuning process.
## conversation data
The dataset was tokenized and fed to the model as a conversation between two speakers, whose names are below. This is relevant for writing prompts and filtering/extracting text from responses.
`script_speaker_name` = `person alpha`
`script_responder_name` = `person beta`
## usage
### in ai-msgbot
```
python ai_single_response.py --model GPT2_conversational_355M_WoW10k --prompt "hi! what are your hobbies?"
... generating...
finished!
'i like to read.'
```
### examples with Inference API
The model training (and the ai-msgbot scripts) "force" GPT-2 to generate text in a chat-like structure. If you want non-garbage outputs, the speaker labels need to be specified manually:
```
person alpha:
hi! what are your hobbies?
```
The model will then respond, ideally beginning with `person beta:` followed by the response text.
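The prompt convention above, and the extraction of the reply from the raw generation, can be sketched like this. This is a minimal illustration of the two-speaker format, not the actual ai-msgbot code:

```python
SPEAKER = "person alpha"
RESPONDER = "person beta"

def build_prompt(message):
    """Wrap a user message in the two-speaker format the model was trained on."""
    return f"{SPEAKER}:\n{message}\n{RESPONDER}:\n"

def extract_reply(generated, prompt):
    """Keep only the responder's first utterance from the raw generation."""
    reply = generated[len(prompt):]
    # Stop at the next speaker turn, if the model kept generating past it
    reply = reply.split(f"{SPEAKER}:")[0]
    return reply.strip()

prompt = build_prompt("hi! what are your hobbies?")
# Pretend this string came back from the model
generated = prompt + "i like to read.\nperson alpha:\nnice!"
print(extract_reply(generated, prompt))  # i like to read.
```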
---
- the default inference API examples should work _okay_
- an ideal test would be explicitly adding `person beta` to the **end** of the prompt text. The model is forced to respond to the entered chat prompt instead of adding to the entered prompt and then responding to that (which may cut off the response text due to the Inference API limits).
## citations
```
@inproceedings{dinan2019wizard,
author={Emily Dinan and Stephen Roller and Kurt Shuster and Angela Fan and Michael Auli and Jason Weston},
title={{W}izard of {W}ikipedia: Knowledge-powered Conversational Agents},
booktitle = {Proceedings of the International Conference on Learning Representations (ICLR)},
year={2019},
}
@inproceedings{li-etal-2017-dailydialog,
title = "{D}aily{D}ialog: A Manually Labelled Multi-turn Dialogue Dataset",
author = "Li, Yanran and
Su, Hui and
Shen, Xiaoyu and
Li, Wenjie and
Cao, Ziqiang and
Niu, Shuzi",
booktitle = "Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
month = nov,
year = "2017",
address = "Taipei, Taiwan",
publisher = "Asian Federation of Natural Language Processing",
url = "https://aclanthology.org/I17-1099",
pages = "986--995",
abstract = "We develop a high-quality multi-turn dialog dataset, \textbf{DailyDialog}, which is intriguing in several aspects. The language is human-written and less noisy. The dialogues in the dataset reflect our daily communication way and cover various topics about our daily life. We also manually label the developed dataset with communication intention and emotion information. Then, we evaluate existing approaches on DailyDialog dataset and hope it benefit the research field of dialog systems. The dataset is available on \url{http://yanran.li/dailydialog}",
}
```
|
{}
|
ethzanalytics/ai-msgbot-gpt2-M
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-generation
|
transformers
|
# ai-msgbot: GPT2-XL-dialogue
GPT2-XL (~1.5 B parameters) trained on [the Wizard of Wikipedia dataset](https://parl.ai/projects/wizard_of_wikipedia/) for 40k steps with **33**/36 layers frozen using `aitextgen`. The resulting model was then **further fine-tuned** on the [Daily Dialogues](http://yanran.li/dailydialog) dataset for 40k steps, with **34**/36 layers frozen.
Designed for use with [ai-msgbot](https://github.com/pszemraj/ai-msgbot) to create an open-ended chatbot (of course, if other use cases arise, have at it).
## conversation data
The dataset was tokenized and fed to the model as a conversation between two speakers, whose names are below. This is relevant for writing prompts and filtering/extracting text from responses.
`script_speaker_name` = `person alpha`
`script_responder_name` = `person beta`
## examples
- the default inference API examples should work _okay_
- an ideal test would be explicitly adding `person beta` to the **end** of the prompt text, so the model is forced to respond to the entered prompt instead of extending it.
## citations
```
@inproceedings{dinan2019wizard,
author={Emily Dinan and Stephen Roller and Kurt Shuster and Angela Fan and Michael Auli and Jason Weston},
title={{W}izard of {W}ikipedia: Knowledge-powered Conversational Agents},
booktitle = {Proceedings of the International Conference on Learning Representations (ICLR)},
year={2019},
}
@inproceedings{li-etal-2017-dailydialog,
title = "{D}aily{D}ialog: A Manually Labelled Multi-turn Dialogue Dataset",
author = "Li, Yanran and
Su, Hui and
Shen, Xiaoyu and
Li, Wenjie and
Cao, Ziqiang and
Niu, Shuzi",
booktitle = "Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
month = nov,
year = "2017",
address = "Taipei, Taiwan",
publisher = "Asian Federation of Natural Language Processing",
url = "https://aclanthology.org/I17-1099",
pages = "986--995",
abstract = "We develop a high-quality multi-turn dialog dataset, \textbf{DailyDialog}, which is intriguing in several aspects. The language is human-written and less noisy. The dialogues in the dataset reflect our daily communication way and cover various topics about our daily life. We also manually label the developed dataset with communication intention and emotion information. Then, we evaluate existing approaches on DailyDialog dataset and hope it benefit the research field of dialog systems. The dataset is available on \url{http://yanran.li/dailydialog}",
}
```
|
{"language": ["en"], "license": "mit", "tags": ["text-generation", "gpt2", "gpt"], "datasets": ["natural_questions"], "widget": [{"text": "Do you like my new haircut?\nperson beta:\n\n", "example_title": "haircut"}, {"text": "I love to learn new things.. are you willing to teach me something?\nperson beta:\n\n", "example_title": "teaching"}, {"text": "What's your favorite animal? Mine is the dog? \nperson beta:\n\n", "example_title": "favorite"}, {"text": "how much does it cost?\nperson beta:\n\n", "example_title": "money"}], "inference": {"parameters": {"min_length": 2, "max_length": 64, "length_penalty": 0.6, "no_repeat_ngram_size": 3, "do_sample": true, "top_p": 0.85, "top_k": 10, "repetition_penalty": 2.1}}}
|
ethzanalytics/ai-msgbot-gpt2-XL-dialogue
| null |
[
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"gpt",
"en",
"dataset:natural_questions",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-generation
|
transformers
|
# ai-msgbot GPT2-XL
_NOTE: model card is WIP_
GPT2-XL (~1.5 B parameters) trained on [the Wizard of Wikipedia dataset](https://parl.ai/projects/wizard_of_wikipedia/) for 40k steps with **33**/36 layers frozen using `aitextgen`.
Designed for use with [ai-msgbot](https://github.com/pszemraj/ai-msgbot) to create an open-ended chatbot (of course, if other use cases arise, have at it).
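The layer-freezing setup can be sketched as follows. This is a minimal illustration only; the card states 33 of 36 layers were frozen, but freezing the *earliest* blocks (the usual convention) is an assumption:

```python
# Illustrative sketch of the freezing scheme mentioned above: 33 of 36
# transformer blocks frozen, so only the top 3 blocks are fine-tuned.
# Which specific layers were frozen is an assumption, not from the card.
NUM_LAYERS, NUM_FROZEN = 36, 33

def is_trainable(layer_idx: int) -> bool:
    """Only layers above the frozen boundary receive gradient updates."""
    return layer_idx >= NUM_FROZEN

trainable = [i for i in range(NUM_LAYERS) if is_trainable(i)]
print(trainable)  # -> [33, 34, 35]
```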
## conversation data
The dataset was tokenized and fed to the model as a conversation between two speakers, whose names are below. This is relevant for writing prompts and filtering/extracting text from responses.
`script_speaker_name` = `person alpha`
`script_responder_name` = `person beta`
## examples
- the default inference API examples should work _okay_
- an ideal test explicitly appends `person beta` to the prompt text, forcing the model to respond as that speaker, instead of merely continuing the entered prompt.
### Example prompt:
```
do you like to eat beans?
person beta:
```
### Resulting output
```
do you like to eat beans?person beta:
yes, i like fried beans.
person alpha:
i wonder when the first beans were cultivated and how they were processed.
person beta:
nitrogenic bacteria (in
```
_Note: the Inference API cuts off generation due to length; if run elsewhere, you would see what comes after "(in"._
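The prompt convention above can be sketched in plain Python: append the responder tag to force a `person beta` reply, then cut the continuation at the next speaker tag. The helper names here are hypothetical, not code from the repo:

```python
# Hypothetical helpers mirroring the ai-msgbot prompt convention described
# above; the speaker names come from the card, the functions are a sketch.
SPEAKER = "person alpha"
RESPONDER = "person beta"

def build_prompt(user_message: str) -> str:
    """Append the responder tag so the model answers as `person beta`."""
    return f"{user_message}\n{RESPONDER}:\n"

def extract_response(generated: str, prompt: str) -> str:
    """Keep only the first `person beta` turn from the raw continuation."""
    continuation = generated[len(prompt):]
    # The model keeps writing the dialogue; cut at the next speaker tag.
    return continuation.split(f"{SPEAKER}:")[0].strip()

prompt = build_prompt("do you like to eat beans?")
raw = prompt + "yes, i like fried beans.\nperson alpha:\ni wonder when..."
print(extract_response(raw, prompt))  # -> "yes, i like fried beans."
```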
## citations
```
@inproceedings{dinan2019wizard,
author={Emily Dinan and Stephen Roller and Kurt Shuster and Angela Fan and Michael Auli and Jason Weston},
title={{W}izard of {W}ikipedia: Knowledge-powered Conversational Agents},
booktitle = {Proceedings of the International Conference on Learning Representations (ICLR)},
year={2019},
}
@inproceedings{li-etal-2017-dailydialog,
title = "{D}aily{D}ialog: A Manually Labelled Multi-turn Dialogue Dataset",
author = "Li, Yanran and
Su, Hui and
Shen, Xiaoyu and
Li, Wenjie and
Cao, Ziqiang and
Niu, Shuzi",
booktitle = "Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
month = nov,
year = "2017",
address = "Taipei, Taiwan",
publisher = "Asian Federation of Natural Language Processing",
url = "https://aclanthology.org/I17-1099",
pages = "986--995",
abstract = "We develop a high-quality multi-turn dialog dataset, \textbf{DailyDialog}, which is intriguing in several aspects. The language is human-written and less noisy. The dialogues in the dataset reflect our daily communication way and cover various topics about our daily life. We also manually label the developed dataset with communication intention and emotion information. Then, we evaluate existing approaches on DailyDialog dataset and hope it benefit the research field of dialog systems. The dataset is available on \url{http://yanran.li/dailydialog}",
}
```
|
{"language": ["en"], "license": "mit", "tags": ["text-generation", "gpt2", "gpt"], "datasets": ["natural questions"], "widget": [{"text": "Do you like my new haircut?\nperson beta:\n\n", "example_title": "haircut"}, {"text": "I love to learn new things.. are you willing to teach me something?\nperson beta:\n\n", "example_title": "teaching"}, {"text": "What's your favorite animal? Mine is the dog? \nperson beta:\n\n", "example_title": "favorite"}, {"text": "how much does it cost?\nperson beta:\n\n", "example_title": "money"}], "inference": {"parameters": {"min_length": 2, "max_length": 64, "length_penalty": 0.6, "no_repeat_ngram_size": 3, "do_sample": true, "top_p": 0.85, "top_k": 10, "repetition_penalty": 2.1}}}
|
ethzanalytics/ai-msgbot-gpt2-XL
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"gpt",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-generation
|
transformers
|
# distilgpt2-tiny-conversational
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on a parsed version of the Wizard of Wikipedia dataset, using the persona alpha/beta framework designed for use with [ai-msgbot](https://github.com/pszemraj/ai-msgbot).
It achieves the following results on the evaluation set:
- Loss: 2.2461
## Model description
- a basic dialogue model that can be used as a chatbot.
- check out a [simple demo here](https://huggingface.co/spaces/ethzanalytics/dialogue-demo)
## Intended uses & limitations
- usage is designed for integrating with this repo: [ai-msgbot](https://github.com/pszemraj/ai-msgbot)
- the key detail is that the model generates whole conversations between two entities, `person alpha` and `person beta`. These entity names function as custom `<bos>` tokens, used to detect where one response ends and the next begins.
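Since the model emits whole conversations delimited by these speaker tags, the output can be split back into turns. The parsing below is an illustrative sketch under that convention, not code from ai-msgbot itself:

```python
import re

# Split a generated conversation into (speaker, utterance) pairs, using the
# `person alpha` / `person beta` tags as delimiters. Sketch only; the exact
# parsing in ai-msgbot may differ.
TURN_RE = re.compile(r"person (alpha|beta):\n?")

def split_turns(dialogue: str):
    pieces = TURN_RE.split(dialogue)
    # pieces = [text-before-first-tag, speaker, utterance, speaker, ...]
    return [
        (f"person {speaker}", utterance.strip())
        for speaker, utterance in zip(pieces[1::2], pieces[2::2])
    ]

convo = "person alpha:\nhi, how are you?\nperson beta:\ndoing well, thanks!"
print(split_turns(convo))
```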
## Training and evaluation data
- the [Wizard of Wikipedia](https://parl.ai/projects/wizard_of_wikipedia/) dataset, parsed from ParlAI
## Training procedure
- DeepSpeed + the Hugging Face Trainer; an example notebook is in [ai-msgbot](https://github.com/pszemraj/ai-msgbot)
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 30
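The hyperparameters above can be restated as a plain dict for reproduction (a sketch, not the exact Trainer config used for the run). Note that the reported total train batch size is consistent with the per-device batch size times the gradient-accumulation steps:

```python
# Hyperparameters from the card, restated as a plain dict (sketch only).
hparams = {
    "learning_rate": 2e-05,
    "train_batch_size": 32,
    "eval_batch_size": 32,
    "seed": 42,
    "gradient_accumulation_steps": 4,
    "lr_scheduler_type": "cosine",
    "warmup_ratio": 0.05,
    "num_epochs": 30,
}

# Effective batch size = per-device batch * gradient-accumulation steps
# (times the number of processes). Here 32 * 4 matches the card's 128.
total_train_batch_size = (
    hparams["train_batch_size"] * hparams["gradient_accumulation_steps"]
)
print(total_train_batch_size)  # -> 128
```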
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| No log | 1.0 | 418 | 2.7793 |
| 2.9952 | 2.0 | 836 | 2.6914 |
| 2.7684 | 3.0 | 1254 | 2.6348 |
| 2.685 | 4.0 | 1672 | 2.5938 |
| 2.6243 | 5.0 | 2090 | 2.5625 |
| 2.5816 | 6.0 | 2508 | 2.5332 |
| 2.5816 | 7.0 | 2926 | 2.5098 |
| 2.545 | 8.0 | 3344 | 2.4902 |
| 2.5083 | 9.0 | 3762 | 2.4707 |
| 2.4793 | 10.0 | 4180 | 2.4551 |
| 2.4531 | 11.0 | 4598 | 2.4395 |
| 2.4269 | 12.0 | 5016 | 2.4238 |
| 2.4269 | 13.0 | 5434 | 2.4102 |
| 2.4051 | 14.0 | 5852 | 2.3945 |
| 2.3777 | 15.0 | 6270 | 2.3848 |
| 2.3603 | 16.0 | 6688 | 2.3711 |
| 2.3394 | 17.0 | 7106 | 2.3613 |
| 2.3206 | 18.0 | 7524 | 2.3516 |
| 2.3206 | 19.0 | 7942 | 2.3398 |
| 2.3026 | 20.0 | 8360 | 2.3301 |
| 2.2823 | 21.0 | 8778 | 2.3203 |
| 2.2669 | 22.0 | 9196 | 2.3105 |
| 2.2493 | 23.0 | 9614 | 2.3027 |
| 2.2334 | 24.0 | 10032 | 2.2930 |
| 2.2334 | 25.0 | 10450 | 2.2852 |
| 2.2194 | 26.0 | 10868 | 2.2754 |
| 2.2014 | 27.0 | 11286 | 2.2695 |
| 2.1868 | 28.0 | 11704 | 2.2598 |
| 2.171 | 29.0 | 12122 | 2.2539 |
| 2.1597 | 30.0 | 12540 | 2.2461 |
### Framework versions
- Transformers 4.16.1
- Pytorch 1.10.0+cu111
- Tokenizers 0.11.0
|
{"license": "apache-2.0", "tags": ["text-generation", "chatbot", "dialogue", "distilgpt2", "gpt2", "ai-msgbot"], "widget": [{"text": "I know you're tired, but can we go for another walk this evening?\nperson beta:\n\n", "example_title": "walk"}, {"text": "Have you done anything exciting lately?\nperson beta:\n\n", "example_title": "activities"}, {"text": "hey - do you have a favorite grocery store around here?\nperson beta:\n\n", "example_title": "grocery"}, {"text": "Can you take me for dinner somewhere nice this time?\nperson beta:\n\n", "example_title": "dinner"}, {"text": "What's your favorite form of social media?\nperson beta:\n\n", "example_title": "social media"}, {"text": "Hi, how are you?\nperson beta:\n\n", "example_title": "greeting"}, {"text": "I am the best; my sister is the worst. What am I?\nperson beta:\n\n", "example_title": "sister"}, {"text": "What do you call an alligator who's just had surgery to remove his left arm?\nperson beta:\n\n", "example_title": "alligator"}, {"text": "A man walks into a bar and asks for a drink. The bartender asks for $10, and he pays him $1. What did he pay him with?\nperson beta:\n\n", "example_title": "dollar"}, {"text": "What did I say was in the mailbox when it was actually in the cabinet?\nperson beta:\n\n", "example_title": "mailbox"}, {"text": "My friend says that she knows every language, but she doesn't speak any of them.. what's wrong with her?\nperson beta:\n\n", "example_title": "language"}], "inference": {"parameters": {"min_length": 2, "max_length": 64, "length_penalty": 0.7, "no_repeat_ngram_size": 2, "do_sample": true, "top_p": 0.95, "top_k": 20, "temperature": 0.3, "repetition_penalty": 3.5}}}
|
ethzanalytics/distilgpt2-tiny-conversational
| null |
[
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"chatbot",
"dialogue",
"distilgpt2",
"ai-msgbot",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null | null |
{}
|
ethzhou/joobboob
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
ethzhou/joobjoob
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text-generation
| null |
{"tags": ["conversational"]}
|
ethzhou/jooby
| null |
[
"conversational",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text-generation
| null |
{"tags": ["conversational"]}
|
ethzhou/joobyChat
| null |
[
"conversational",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text-generation
|
transformers
|
#blabla
|
{"tags": ["conversational"]}
|
ethzhou/newJooby
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null | null |
{}
|
eubinecto/wisdomifier-ver-4
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|