Dataset columns (each record below lists these fields, one per line, in this order):

| column | dtype | values / lengths |
|---|---|---|
| pipeline_tag | stringclasses | 48 values |
| library_name | stringclasses | 205 values |
| text | stringlengths | 0 to 18.3M |
| metadata | stringlengths | 2 to 1.07B |
| id | stringlengths | 5 to 122 |
| last_modified | null | |
| tags | listlengths | 1 to 1.84k |
| sha | null | |
| created_at | stringlengths | 25 to 25 |
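A minimal sketch for iterating a dump with this schema using the Hugging Face `datasets` library; the repo id `org/model-cards-dump` is a placeholder, not this dataset's actual name:

```python
# Hypothetical usage sketch: stream rows of this dump with `datasets`.
# "org/model-cards-dump" is a placeholder repo id.
from datasets import load_dataset

ds = load_dataset("org/model-cards-dump", split="train", streaming=True)
for row in ds:
    # `text` holds the model-card markdown; the other fields are metadata.
    print(row["id"], row["pipeline_tag"], len(row["text"]))
    break
```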
automatic-speech-recognition
espnet
## ESPnet2 ASR model ### `akreal/espnet2_swbd_da_hubert_conformer` This model was trained by Pavel Denisov using swbd_da recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```bash cd espnet git checkout 08c6efbc6299c972301236625f9abafe087c9f9c pip install -e . cd egs2/swbd_da/asr1 ./run.sh --skip_data_prep false --skip_train true --download_model espnet/akreal_swbd_da_hubert_conformer ``` <!-- Generated by scripts/utils/show_asr_result.sh --> # RESULTS ## Environments - date: `Thu Jan 20 19:31:21 CET 2022` - python version: `3.8.12 (default, Aug 30 2021, 00:00:00) [GCC 11.2.1 20210728 (Red Hat 11.2.1-1)]` - espnet version: `espnet 0.10.6a1` - pytorch version: `pytorch 1.10.1+cu113` - Git hash: `08c6efbc6299c972301236625f9abafe087c9f9c` - Commit date: `Tue Jan 4 13:40:33 2022 +0100` ## asr_train_asr_raw_en_word_sp ### WER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_asr_model_valid.loss.ave/test_context3|2379|2379|66.3|33.7|0.0|0.0|33.7|33.7| |decode_asr_asr_model_valid.loss.ave/valid_context3|8116|8116|69.5|30.5|0.0|0.0|30.5|30.5| ### CER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_asr_model_valid.loss.ave/test_context3|2379|19440|76.1|17.7|6.2|8.1|32.0|33.7| |decode_asr_asr_model_valid.loss.ave/valid_context3|8116|66353|79.5|16.1|4.4|8.0|28.5|30.5| ### TER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| ## ASR config <details><summary>expand</summary> ``` config: conf/tuning/train_asr_conformer_hubert_context3.yaml print_config: false log_level: INFO dry_run: false iterator_type: sequence output_dir: exp/asr_train_asr_conformer_hubert_context3_raw_en_word_sp ngpu: 1 seed: 0 num_workers: 1 num_att_plot: 3 dist_backend: nccl dist_init_method: env:// dist_world_size: null dist_rank: null local_rank: 0 dist_master_addr: null dist_master_port: null dist_launcher: null multiprocessing_distributed: false unused_parameters: false sharded_ddp: false cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true collect_stats: false write_collected_feats: false max_epoch: 35 patience: null val_scheduler_criterion: - valid - loss early_stopping_criterion: - valid - loss - min best_model_criterion: - - valid - loss - min keep_nbest_models: 7 nbest_averaging_interval: 0 grad_clip: 5.0 grad_clip_type: 2.0 grad_noise: false accum_grad: 1 no_forward_run: false resume: true train_dtype: float32 use_amp: false log_interval: null use_matplotlib: true use_tensorboard: true use_wandb: false wandb_project: null wandb_id: null wandb_entity: null wandb_name: null wandb_model_log_interval: -1 detect_anomaly: false pretrain_path: null init_param: [] ignore_init_mismatch: false freeze_param: - frontend.upstream num_iters_per_epoch: null batch_size: 20 valid_batch_size: null batch_bins: 4000000 valid_batch_bins: null train_shape_file: - exp/asr_stats_context3_raw_en_word_sp/train/speech_shape - exp/asr_stats_context3_raw_en_word_sp/train/text_shape.word valid_shape_file: - exp/asr_stats_context3_raw_en_word_sp/valid/speech_shape - exp/asr_stats_context3_raw_en_word_sp/valid/text_shape.word batch_type: numel valid_batch_type: null fold_length: - 80000 - 150 sort_in_batch: descending sort_batch: descending multiple_iterator: false chunk_length: 500 chunk_shift_ratio: 0.5 num_cache_chunks: 1024 train_data_path_and_name_and_type: - - dump/raw/train_context3_sp/wav.scp - speech - sound - - dump/raw/train_context3_sp/text - text - text 
valid_data_path_and_name_and_type: - - dump/raw/valid_context3/wav.scp - speech - sound - - dump/raw/valid_context3/text - text - text allow_variable_data_keys: false max_cache_size: 0.0 max_cache_fd: 32 valid_max_cache_size: null optim: adam optim_conf: lr: 0.0001 scheduler: warmuplr scheduler_conf: warmup_steps: 25000 token_list: - <blank> - <unk> - statement - backchannel - opinion - abandon - agree - yn_q - apprec - 'yes' - uninterp - close - wh_q - acknowledge - 'no' - yn_decl_q - hedge - backchannel_q - sum - quote - affirm - other - directive - repeat - open_q - completion - rhet_q - hold - reject - answer - neg - ans_dispref - repeat_q - open - or - commit - maybe - decl_q - third_pty - self_talk - thank - apology - tag_q - downplay - <sos/eos> init: null input_size: null ctc_conf: dropout_rate: 0.0 ctc_type: builtin reduce: true ignore_nan_grad: true joint_net_conf: null model_conf: ctc_weight: 0.0 extract_feats_in_collect_stats: false use_preprocessor: true token_type: word bpemodel: null non_linguistic_symbols: null cleaner: null g2p: null speech_volume_normalize: null rir_scp: null rir_apply_prob: 1.0 noise_scp: null noise_apply_prob: 1.0 noise_db_range: '13_15' frontend: s3prl frontend_conf: frontend_conf: upstream: hubert_large_ll60k download_dir: ./hub multilayer_feature: true fs: 16k specaug: specaug specaug_conf: apply_time_warp: true time_warp_window: 5 time_warp_mode: bicubic apply_freq_mask: true freq_mask_width_range: - 0 - 30 num_freq_mask: 2 apply_time_mask: true time_mask_width_range: - 0 - 40 num_time_mask: 2 normalize: utterance_mvn normalize_conf: {} preencoder: linear preencoder_conf: input_size: 1024 output_size: 80 encoder: conformer encoder_conf: output_size: 512 attention_heads: 8 linear_units: 2048 num_blocks: 12 dropout_rate: 0.1 positional_dropout_rate: 0.1 attention_dropout_rate: 0.1 input_layer: conv2d normalize_before: true macaron_style: true pos_enc_layer_type: rel_pos selfattention_layer_type: rel_selfattn activation_type: swish use_cnn_module: true cnn_module_kernel: 31 postencoder: null postencoder_conf: {} decoder: transformer decoder_conf: attention_heads: 8 linear_units: 2048 num_blocks: 6 dropout_rate: 0.1 positional_dropout_rate: 0.1 self_attention_dropout_rate: 0.1 src_attention_dropout_rate: 0.1 required: - output_dir - token_list version: 0.10.5a1 distributed: false ``` </details> ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
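The demo above drives the full recipe; for quick checks, a minimal Python-level sketch is also possible through the `espnet_model_zoo` integration (this sketch is an addition, assuming `espnet`, `espnet_model_zoo`, and `soundfile` are installed). Because the token list consists of dialogue-act labels, the decoded "text" is a dialogue-act tag rather than a transcript:

```python
# Sketch only: load the checkpoint by its Hub tag and classify one utterance.
import soundfile as sf
from espnet2.bin.asr_inference import Speech2Text

speech2text = Speech2Text.from_pretrained(
    "espnet/akreal_swbd_da_hubert_conformer"
)

speech, rate = sf.read("utterance.wav")  # 16 kHz mono, as in the recipe
text, *_ = speech2text(speech)[0]        # best hypothesis
print(text)  # a dialogue-act label such as "statement" or "backchannel"
```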
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["swbd_da"]}
espnet/akreal_swbd_da_hubert_conformer
null
[ "espnet", "audio", "automatic-speech-recognition", "en", "dataset:swbd_da", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
audio-to-audio
espnet
# ESPnet2 ENH pretrained model

## `anogkongda/librimix_enh_train_raw_valid.si_snr.ave`

♻️ Imported from <https://zenodo.org/record/4480771#.YN70WJozZH4>

This model was trained by anogkongda using the librimix recipe in [espnet](https://github.com/espnet/espnet/).

### Demo: How to use in ESPnet2

```python
# coming soon
```

### Citing ESPnet

```BibTex
@inproceedings{watanabe2018espnet,
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  title={{ESPnet}: End-to-End Speech Processing Toolkit},
  year={2018},
  booktitle={Proceedings of Interspeech},
  pages={2207--2211},
  doi={10.21437/Interspeech.2018-1456},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
  title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
  author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
  booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages={7654--7658},
  year={2020},
  organization={IEEE}
}
```

or arXiv:

```bibtex
@misc{watanabe2018espnet,
  title={ESPnet: End-to-End Speech Processing Toolkit},
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  year={2018},
  eprint={1804.00015},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```

### Training config

See full config in [`config.yaml`](./config.yaml)

```yaml
config: conf/tuning/train_conformer_fastspeech2.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/tts_train_conformer_fastspeech2_raw_phn_jaconv_pyopenjtalk
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
```
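While the official demo is still "coming soon", here is a hedged sketch of separation inference with `espnet2.bin.enh_inference.SeparateSpeech`, assuming a recent ESPnet where `SeparateSpeech.from_pretrained` accepts a Hub model tag:

```python
# Sketch only, not the official demo: separate a two-speaker LibriMix-style
# mixture with this checkpoint.
import soundfile as sf
from espnet2.bin.enh_inference import SeparateSpeech

separate_speech = SeparateSpeech.from_pretrained(
    "espnet/anogkongda-librimix_enh_train_raw_valid.si_snr.ave"
)

mixture, fs = sf.read("mix.wav")                  # single-channel mixture
waves = separate_speech(mixture[None, :], fs=fs)  # one array per speaker
for spk, wav in enumerate(waves, start=1):
    sf.write(f"speaker{spk}.wav", wav[0], fs)
```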
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "audio-source-separation", "audio-to-audio"], "datasets": ["librimix"], "inference": false}
espnet/anogkongda-librimix_enh_train_raw_valid.si_snr.ave
null
[ "espnet", "audio", "audio-source-separation", "audio-to-audio", "en", "dataset:librimix", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
audio-to-audio
espnet
## Example ESPnet2 ENH model

### `anogkongda/librimix_enh_train_raw_valid.si_snr.ave`

♻️ Imported from https://zenodo.org/record/4480771/

This model was trained by anogkongda using the librimix/enh1 recipe in [espnet](https://github.com/espnet/espnet/).

### Demo: How to use in ESPnet2

```python
# coming soon
```

### Citing ESPnet

```BibTex
@inproceedings{watanabe2018espnet,
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  title={{ESPnet}: End-to-End Speech Processing Toolkit},
  year={2018},
  booktitle={Proceedings of Interspeech},
  pages={2207--2211},
  doi={10.21437/Interspeech.2018-1456},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
  title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
  author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
  booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages={7654--7658},
  year={2020},
  organization={IEEE}
}
```

or arXiv:

```bibtex
@misc{watanabe2018espnet,
  title={ESPnet: End-to-End Speech Processing Toolkit},
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  year={2018},
  eprint={1804.00015},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
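Pending the official demo, an alternative hedged sketch that fetches the archive with `espnet_model_zoo`'s `ModelDownloader` and builds the separator from the unpacked paths (the keys returned by `download_and_unpack` are assumed to match `SeparateSpeech`'s constructor; they may differ across package versions):

```python
# Sketch only: download, unpack, and instantiate the separator.
from espnet2.bin.enh_inference import SeparateSpeech
from espnet_model_zoo.downloader import ModelDownloader

d = ModelDownloader()
kwargs = d.download_and_unpack(
    "espnet/anogkongda_librimix_enh_train_raw_valid.si_snr.ave"
)
# kwargs is a dict of config/checkpoint paths unpacked from the archive.
separate_speech = SeparateSpeech(**kwargs)
```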
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "speech-enhancement", "audio-to-audio"], "datasets": ["librimix"]}
espnet/anogkongda_librimix_enh_train_raw_valid.si_snr.ave
null
[ "espnet", "audio", "speech-enhancement", "audio-to-audio", "en", "dataset:librimix", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
espnet
## ESPnet2 ST model ### `espnet/brianyan918_iwslt22_dialect_st_transformer_fisherlike_4gpu_bbins16m_fix` This model was trained by Brian Yan using iwslt22_dialect recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```bash cd espnet git checkout 77fce65312877a132bbae01917ad26b74f6e2e14 pip install -e . cd egs2/iwslt22_dialect/st1 ./run.sh --skip_data_prep false --skip_train true --download_model espnet/brianyan918_iwslt22_dialect_st_transformer_fisherlike_4gpu_bbins16m_fix ``` <!-- Generated by scripts/utils/show_st_results.sh --> # RESULTS ## Environments - date: `Tue Feb 8 13:29:21 EST 2022` - python version: `3.8.12 (default, Oct 12 2021, 13:49:34) [GCC 7.5.0]` - espnet version: `espnet 0.10.7a1` - pytorch version: `pytorch 1.8.1` - Git hash: `77fce65312877a132bbae01917ad26b74f6e2e14` - Commit date: `Tue Feb 8 10:48:10 2022 -0500` ## st_transformer_fisherlike_4gpu_bbins16m_fix_raw_bpe_tc1000_sp ### BLEU |dataset|bleu_score|verbose_score| |---|---|---| p3_st_model_valid.acc.ave|12.0|37.4/17.3/8.6/4.5 (BP = 0.952 ratio = 0.953 hyp_len = 40192 ref_len = 42181) ## ST config <details><summary>expand</summary> ``` config: conf/tuning/transformer_fisherlike_4gpu_bbins16m_fix.yaml print_config: false log_level: INFO dry_run: false iterator_type: sequence output_dir: exp/st_transformer_fisherlike_4gpu_bbins16m_fix_raw_bpe_tc1000_sp ngpu: 1 seed: 0 num_workers: 1 num_att_plot: 3 dist_backend: nccl dist_init_method: env:// dist_world_size: 4 dist_rank: 0 local_rank: 0 dist_master_addr: localhost dist_master_port: 36641 dist_launcher: null multiprocessing_distributed: true unused_parameters: false sharded_ddp: false cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true collect_stats: false write_collected_feats: false max_epoch: 50 patience: null val_scheduler_criterion: - valid - loss early_stopping_criterion: - valid - loss - min best_model_criterion: - - valid - acc - max keep_nbest_models: 10 nbest_averaging_interval: 0 grad_clip: 3 grad_clip_type: 2.0 grad_noise: false accum_grad: 2 no_forward_run: false resume: true train_dtype: float32 use_amp: false log_interval: null use_matplotlib: true use_tensorboard: true use_wandb: false wandb_project: null wandb_id: null wandb_entity: null wandb_name: null wandb_model_log_interval: -1 detect_anomaly: false pretrain_path: null init_param: [] ignore_init_mismatch: false freeze_param: [] num_iters_per_epoch: null batch_size: 20 valid_batch_size: null batch_bins: 16000000 valid_batch_bins: null train_shape_file: - exp/st_stats_raw_bpe1000_sp/train/speech_shape - exp/st_stats_raw_bpe1000_sp/train/text_shape.bpe - exp/st_stats_raw_bpe1000_sp/train/src_text_shape.bpe valid_shape_file: - exp/st_stats_raw_bpe1000_sp/valid/speech_shape - exp/st_stats_raw_bpe1000_sp/valid/text_shape.bpe - exp/st_stats_raw_bpe1000_sp/valid/src_text_shape.bpe batch_type: numel valid_batch_type: null fold_length: - 80000 - 150 - 150 sort_in_batch: descending sort_batch: descending multiple_iterator: false chunk_length: 500 chunk_shift_ratio: 0.5 num_cache_chunks: 1024 train_data_path_and_name_and_type: - - /scratch/iwslt22dump//raw/train_sp/wav.scp - speech - kaldi_ark - - /scratch/iwslt22dump//raw/train_sp/text.tc.en - text - text - - /scratch/iwslt22dump//raw/train_sp/text.tc.rm.ta - src_text - text valid_data_path_and_name_and_type: - - /scratch/iwslt22dump//raw/dev/wav.scp - speech - kaldi_ark - - /scratch/iwslt22dump//raw/dev/text.tc.en - text - text - - /scratch/iwslt22dump//raw/dev/text.tc.rm.ta - src_text - text 
allow_variable_data_keys: false max_cache_size: 0.0 max_cache_fd: 32 valid_max_cache_size: null optim: adam optim_conf: lr: 12.5 scheduler: noamlr scheduler_conf: model_size: 256 warmup_steps: 25000 token_list: - <blank> - <unk> - s - ▁ - apo - '&' - ; - ▁i - ▁you - t - ▁it - ▁the - ▁and - ▁to - ▁that - ▁a - n - a - ▁he - ▁me - m - d - ▁yes - ▁she - ▁no - ▁in - ▁what - ▁for - ▁we - ing - ll - ▁they - re - ▁are - ▁did - ▁god - ▁is - e - ed - ▁so - ▁her - ▁do - ▁have - ▁of - ▁with - ▁go - ▁know - ▁not - ▁was - ▁on - ▁don - y - ▁him - ▁one - ▁like - ▁there - '%' - ▁pw - ▁be - ▁at - ▁told - ▁good - ▁will - ▁my - ▁all - ▁or - c - er - p - ▁how - ▁ah - r - ▁but - ▁them - ▁see - ▁get - ▁can - i - ▁when - ▁going - ▁about - ▁mean - ▁this - k - ▁your - ▁by - ▁if - u - ▁come - ▁up - ▁tell - g - ▁said - ▁then - ▁now - ▁yeah - o - ▁out - al - ra - ▁because - ▁time - ▁well - ▁would - ▁p - ▁from - h - ar - f - ▁swear - ▁went - b - ▁really - or - ▁want - ri - ▁home - ▁work - ve - ▁take - ▁got - ▁just - l - ▁uh - ▁why - en - ▁even - ▁am - ▁who - ▁make - ▁day - '-' - in - ▁something - ▁some - ou - ▁us - ▁okay - ▁where - ▁does - ▁has - ▁thank - ▁c - ▁his - th - ▁back - ▁fine - ▁today - ly - ▁b - ▁oh - ▁doing - ▁everything - ▁here - le - ▁thing - ▁two - ▁anyway - li - ▁had - ▁still - ▁say - ro - ▁after - ce - ▁hello - ▁ma - ▁call - w - ▁listen - il - ▁should - ▁girl - ▁f - z - ▁too - ▁let - ▁understand - ▁may - ▁much - ▁think - ch - ir - ha - ▁other - ▁tomorrow - ▁were - ▁people - es - ▁year - di - ba - ▁right - el - ▁things - ▁house - v - ▁actually - un - ▁an - ▁give - ▁only - ▁better - pe - ▁need - ▁buy - ▁de - ne - ▁ha - ur - ion - ▁made - la - ▁willing - ▁nothing - ▁called - ▁night - ▁yesterday - se - ▁came - ▁lot - ter - ▁g - po - ▁find - ry - ▁car - ▁over - ic - ▁stay - ▁eat - ent - ▁always - ▁very - 'on' - ▁put - ▁ramadan - ▁those - ▁hear - is - ▁talk - ▁three - ▁anything - ▁mo - ▁little - ▁been - ▁already - fi - ation - ke - ▁first - ▁look - it - ▁won - ▁mom - ▁way - ▁before - ▁ok - ▁last - fa - ▁cook - vi - ▁hi - ▁same - ▁thought - ▁also - um - ate - ▁money - ▁start - ▁place - us - ▁morning - ▁could - ▁ask - ▁bring - ▁bit - ▁lo - ▁leave - ▁man - ▁left - ine - ▁days - ge - ▁la - ▁week - ▁friend - ▁problem - ▁sister - ▁allah - ▁feel - ▁every - ▁more - fe - ▁long - ▁hundred - ▁j - ▁eh - ho - ca - em - ▁talking - ▁exam - ▁next - ▁new - ▁fun - ▁took - ▁alright - co - ▁w - ▁um - ▁eid - ▁brother - ▁our - gh - ow - ▁o - ▁four - ni - wa - ▁else - ▁finish - bo - ▁sleep - ▁bless - ▁dear - ▁since - ▁play - ▁name - hi - ▁coming - ▁many - et - ▁usual - ▁con - ▁maybe - ▁off - bi - ▁than - ▁any - ▁mother - ▁son - om - ▁their - ▁keep - ▁dinner - ▁ten - ▁half - ▁help - ▁bad - and - ▁pass - ▁hot - ▁guy - ▁least - ▁down - ▁bought - ▁dinars - ▁working - ▁around - ▁normal - ▁poor - ▁stuff - ▁hope - ▁used - ▁again - ▁bro - ul - ▁phone - ▁ex - ▁done - ▁six - ▁na - ▁month - ▁tired - ▁check - ▁show - ▁together - oo - ▁later - ▁past - ▁five - ▁watch - ya - ▁coffee - ment - ut - ▁plan - ▁great - ▁daughter - j - ▁another - side - ▁change - ▁yet - ting - ▁until - ▁honestly - ▁whole - ol - ▁care - ▁sure - able - id - ▁big - ▁spend - ▁exactly - ▁boy - ▁course - ▁end - ▁please - ▁started - he - up - ▁found - ▁saw - ▁family - ▁asked - ▁enough - ▁during - ▁rest - ▁which - ▁gave - ▁true - ▁while - ▁job - ▁el - ▁each - ▁away - ▁kids - ▁goes - less - ▁twenty - ▁eight - ▁someone - ▁cha - ▁clothes - ah - ▁myself - ▁nice - ▁late - ▁old - ▁real - age - ant - ▁fast - ▁add - ▁hard - ▁these - ful - im - ▁close - ive - ▁dad - ▁pay - ies - ▁dude 
- ▁alone - ▁far - ance - ▁dis - ▁seven - ▁isn - ▁pro - our - ▁thousand - ▁break - ▁hour - ▁wait - ▁brought - ▁open - ▁un - ▁wedding - ▁walk - ▁father - ▁ka - ▁second - x - ▁saturday - ▁salad - ▁win - ▁everyone - ▁water - ▁tunis - ▁remember - ity - ▁wake - ▁minute - ▁school - ▁sunday - ▁own - ▁shop - ▁cold - ▁meet - ▁wear - ever - ▁send - ▁early - ▁gra - tic - ▁short - ▁use - ▁sometimes - hou - ▁love - ▁prepare - ▁sea - ▁study - ure - ▁com - qui - ▁hand - ▁both - ja - ▁summer - ▁wrong - ▁wanted - che - ▁miss - ▁try - ▁iftar - ▁yourself - q - ▁live - war - ▁expensive - ▁getting - ▁waiting - ▁once - ▁kh - ▁forgot - ▁nine - ▁anymore - ▁soup - ▁uncle - ▁beach - ▁saying - ▁into - ▁having - ▁brik - ▁room - ▁food - ▁visit - ▁matter - ▁thirty - ▁taking - ▁rain - ▁aunt - ▁never - ▁pick - ▁tunisia - ▁health - ▁head - ▁cut - ▁fasting - ▁sick - ▁friday - ▁forget - ▁monday - ▁become - ▁dress - ated - ▁most - wi - ▁hang - ▁life - ▁fish - ▁happy - ▁delicious - ▁deal - ▁finished - ble - ▁studying - ▁weather - ▁making - ▁cost - ▁bl - ▁stayed - ▁guess - ▁teach - ▁stop - ▁near - ▁watching - ▁without - ▁imagine - ▁seriously - fl - ▁speak - ▁idea - ▁must - ▁normally - ▁turn - ize - ▁clean - ▁tv - ▁meat - ▁woke - ▁example - ▁easy - ▁sent - ▁sell - over - ▁fifty - ▁amazing - ▁beautiful - ▁whatever - ▁enjoy - ▁talked - ▁believe - ▁thinking - ▁count - ▁almost - ▁longer - ▁afternoon - ▁hair - ▁front - ▁earlier - ▁mind - ▁kind - ▁tea - ▁best - ▁rent - ▁picture - ▁cooked - ▁price - ight - ▁soon - ▁woman - ▁otherwise - ▁happened - ▁story - ▁luck - ▁high - ▁happen - ▁arrive - ▁paper - ga - ▁quickly - ▁looking - ub - ▁number - ▁staying - ▁sit - man - ack - ▁important - ▁either - ▁person - ▁small - ▁free - ▁crazy - ▁playing - ▁kept - ▁part - ▁game - law - ▁till - uck - ▁ready - ▁might - ▁gone - ▁full - ▁fix - ▁subject - ▁laugh - ▁doctor - ▁welcome - ▁eleven - ▁sleeping - ▁heat - ▁probably - ▁such - ▁café - ▁fat - ▁sweet - ▁married - ▁drink - ▁move - ▁outside - ▁especially - ▁group - ji - ▁market - ▁through - ▁train - ▁protect - ▁turned - ▁red - ▁busy - ▁light - ▁noise - ▁street - ▁manage - ▁piece - ▁sitting - gue - ▁sake - ▁party - ish - ▁young - ▁case - ▁cool - huh - ▁marwa - ▁drive - ▁pray - clock - ▁couscous - ▁spent - ▁felt - ▁hopefully - ▁everybody - ▁living - ▁pain - line - ▁between - ▁match - ▁prayer - que - ian - ▁facebook - ▁spi - ▁eye - ▁children - ▁tonight - ▁mohamed - ▁understood - ▁black - ▁husband - ▁rid - ▁kitchen - ▁face - ▁swim - ▁kid - ▁invite - ▁cup - ▁grilled - ▁wife - ▁cousin - ▁drop - ▁wow - ▁table - ▁du - ▁bored - ▁neighborhood - ▁agree - ▁bread - ▁hamma - ▁straight - ▁tuesday - ▁anyone - ▁lunch - ade - ▁himself - ▁gather - ▁wish - ▁fifteen - ▁wednesday - ▁die - ▁thursday - ▁color - ▁asleep - ▁different - ▁whether - ▁ago - ▁middle - ▁class - ▁cake - shirt - ▁fight - ▁clear - ▁test - ▁plus - ▁sousse - ▁beginning - ▁result - ▁learn - ▁crowded - ▁slept - ▁shoes - ▁august - ▁pretty - ▁white - ▁apparently - ▁reach - ▁mariem - ▁return - ▁road - ▁million - ▁stand - ▁paid - ▁word - ious - ▁few - ▁breakfast - ▁post - ▁kilo - ▁chicken - ▁grade - ▁read - ▁accept - ▁birthday - ▁exhaust - ▁point - ▁july - ▁patience - ▁studies - ▁trouble - ▁along - ▁worry - ▁follow - ▁hurt - ▁afraid - ▁trip - ▁ahmed - ▁remain - ▁succeed - ▁mercy - ▁difficult - ▁weekend - ▁answer - ▁cheap - ▁repeat - ▁auntie - ▁sign - ▁hold - ▁under - ▁olive - ▁mahdi - ▁sfax - ▁annoy - ▁dishes - ▁message - ▁business - ▁french - ▁serious - ▁travel - ▁office - ▁wonder - ▁student - ▁internship - ▁pepper - ▁knew - ▁kill - ▁sauce - ▁herself - ▁hammamet 
- ▁damn - ▁mix - ▁suit - ▁medicine - ▁remove - ▁gonna - ▁company - ▁quarter - ▁shopping - ▁correct - ▁throw - ▁grow - ▁voice - ▁series - gotten - ▁taste - ▁driving - ▁hospital - ▁sorry - ▁aziz - ▁milk - ▁green - ▁baccalaureate - ▁running - ▁lord - ▁explain - ▁angry - ▁build - ▁fruit - ▁photo - é - ▁crying - ▁baby - ▁store - ▁project - ▁france - ▁twelve - ▁decide - ▁swimming - ▁world - ▁preparing - ▁special - ▁session - ▁behind - ▁vegetable - ▁strong - ▁fatma - ▁treat - ▁cream - ▁situation - ▁settle - ▁totally - ▁stopped - ▁book - ▁honest - ▁solution - ▁vacation - ▁cheese - ▁ahead - ▁sami - ▁focus - ▁scared - ▁club - ▁consider - ▁final - ▁naturally - ▁barely - ▁issue - ▁floor - ▁birth - ▁almighty - ▁engagement - ▁blue - ▁empty - ▁soccer - ▁prophet - ▁ticket - ▁indeed - ▁write - ▁present - ▁patient - ▁available - ▁holiday - ▁leaving - ▁became - ▁reason - ▁apart - ▁impossible - ▁shame - ▁worried - ▁body - ▁continue - ▁program - ▁stress - ▁arabic - ▁round - ▁taxi - ▁transport - ▁third - ▁certain - ▁downstairs - ▁neighbor - ▁directly - ▁giving - ▁june - ▁mini - ▁upstairs - ▁mistake - ▁period - ▁catch - ▁buddy - ▁success - ▁tajine - ▁excuse - ▁organize - ▁question - ▁suffer - ▁remind - ▁university - ▁downtown - ▁sugar - ▁twice - ▁women - ▁couple - ▁everyday - ▁condition - ▁obvious - ▁nobody - ▁complete - ▁stomach - ▁account - ▁september - ▁choose - ▁bottle - ▁figure - ▁instead - ▁salary - '0' - '1' - '3' - '2' - '5' - '7' - '4' - '9' - '8' - / - ° - '6' - è - $ - ï - <sos/eos> src_token_list: - <blank> - <unk> - ّ - ي - ا - ِ - ل - َ - و - ه - ة - م - ر - ك - ▁ما - ُ - ب - ش - د - ت - ▁في - َّ - ▁ن - ▁ي - ▁ت - ن - ▁لا - ح - ▁ه - س - وا - ▁م - ف - ▁إي - ع - ▁ب - ها - ط - ى - ق - ▁الل - ▁أ - ج - ▁والل - ▁و - ▁إيه - ▁ا - ▁يا - ز - ▁تو - ▁بش - ص - ▁أه - خ - ات - ▁إنت - ▁أنا - نا - ▁شن - ▁ق - ▁ش - ▁ك - يت - ين - ▁ف - ار - ▁قال - ▁باهي - ▁ع - ▁من - ▁ل - ▁مش - ▁كان - ▁حت - ▁ول - هم - ▁ر - ان - ▁س - ض - ني - ▁بال - ▁على - ▁متاع - ▁كي - ▁ال - ▁ح - ▁كل - ▁آنا - ▁الم - ▁خ - ▁الس - ▁وال - ون - ور - ▁أم - ▁هك - ▁آش - ▁الد - ▁عاد - ▁ج - ▁معناها - ▁مع - اش - ▁الص - ▁نهار - ▁لل - لها - ▁تي - ▁رب - ▁خاطر - ▁أكهو - غ - ▁شي - الل - ام - تها - ▁ون - ▁آك - ▁فهمت - وم - ▁موش - مشي - ▁ص - ▁اليوم - ▁مر - ست - ▁الب - ▁لاباس - تلي - ▁الكل - ▁عال - ذ - ▁فم - ▁الك - ▁حاجة - ▁شوي - اكا - ▁ياخي - ▁هاني - ▁صح - اس - ▁آه - ▁برشة - ▁الن - ▁وت - ▁الج - لك - ▁راهو - سم - ▁الح - مت - ▁الت - ▁بعد - اج - عد - ▁انشا - وش - لت - ▁وين - ث - ▁ولا - ▁باش - ▁فيها - نت - ▁إ - ▁الأ - ▁الف - ▁إم - ▁واحد - ▁ألو - ▁عندي - ▁أك - ▁خل - ▁وي - ▁تعمل - أ - ▁ريت - ▁وأ - ▁تعرف - بت - ▁الع - ▁مشيت - ▁وه - ▁حاصيلو - ▁بالل - ▁نعمل - ▁غ - ▁تجي - ▁يجي - ▁كيفاش - ▁عملت - ظ - اك - ▁هاو - ▁اش - ▁قد - ▁نق - ▁د - ▁زادا - ▁فيه - رة - ▁بر - ▁الش - ▁ز - ▁كيما - ▁الا - ند - عم - ▁نح - ▁بنتي - ▁نمشي - ▁عليك - ▁نعرفش - ▁كهو - ▁وم - ▁ط - تي - ▁خير - ▁آ - مش - ▁عليه - له - حت - ▁إيا - ▁أحنا - ▁تع - الا - عب - ▁ديما - ▁تت - ▁جو - ▁مالا - ▁أو - ▁قلتلك - ▁معنتها - لنا - ▁شكون - ▁تحب - بر - ▁الر - ▁وا - ▁الق - اء - ▁عل - ▁البارح - ▁وخ - ▁سافا - ▁هوما - ▁ولدي - ▁ - ▁نعرف - يف - رت - ▁وب - ▁روح - ▁علاش - ▁هاذاك - ▁رو - وس - ▁جا - ▁كيف - طر - ▁غادي - يكا - عمل - ▁نحب - ▁عندك - ▁وما - ▁فر - اني - ▁قلتله - ▁الط - فر - ▁دار - ▁عليها - ▁يعمل - ▁نت - ▁تح - باح - ▁ماهو - ▁وكل - ▁وع - قت - ▁فهمتك - عر - ▁وس - ▁تر - ▁سي - يلة - ▁قلت - ▁رمضان - صل - ▁آما - ▁الواحد - ▁بيه - ▁ثلاثة - ▁فهمتني - ▁ها - بط - ▁مازال - قل - ▁بالك - ▁معناتها - ▁ور - ▁قلتلها - ▁يس - رب - ▁ام - ▁وبعد - ▁الث - ▁وإنت - ▁بحذا - ▁لازم - ْ - ▁بن - قرا - سك - ▁يت - خل - ▁فه - عت - ▁هاك - ▁تق - ▁قبل - ▁وك - ▁نقول - 
▁الز - حم - ▁عادش - حكي - وها - بة - نس - طل - ▁علاه - ذا - ▁سا - ▁طل - الي - ▁يق - ▁دو - حوا - حد - ▁نشوف - نة - ▁لي - ▁تك - ▁نا - ▁هاذ - ▁خويا - ▁المر - ▁وينك - ▁البر - ▁أتو - ينا - ▁حل - ولي - ▁ثم - ▁عم - ▁آي - ▁قر - از - ▁وح - كش - بعة - ▁كيفاه - ▁نع - ▁الحمدلله - ▁ياسر - ▁الخ - ▁معاك - ▁معاه - ▁تقول - دة - ▁حكاية - تش - ▁حس - ▁غدوا - ▁بالحق - روا - وز - ▁تخ - ▁العيد - رجع - ▁بالي - ▁جات - ▁وج - حة - ▁وش - ▁آخر - ▁طا - ▁مت - لقا - تك - ▁مس - ▁راني - كون - ▁صاحب - ▁هاكا - ▁قول - ▁عر - ▁عنده - ▁يلزم - ▁هاذا - ▁يخ - ▁وقتاش - ▁وقت - بع - ▁العش - ▁هاذي - هاش - ينة - ▁هاذاكا - عطي - ▁تنج - ▁باهية - نيا - فت - ▁يحب - ▁تف - ▁أهلا - وف - ▁غدوة - ▁بيك - ▁بد - عن - ▁در - ▁ننج - هار - ▁الحكاية - مون - وق - ▁نورمال - ▁عندها - خر - ▁بو - ▁حب - ▁آكا - ▁وف - ▁هاذيكا - ▁ديجا - ▁وق - ▁طي - لتل - بعث - ▁تص - رك - ▁مانيش - ▁العادة - ▁شوف - ضر - ▁يمشي - ▁نعملوا - ▁عرفت - ▁زال - ▁متع - ▁عمل - ▁بيها - ▁نحكي - اع - ▁نج - معة - ▁والكل - عناها - ▁يعي - ▁نجي - ستن - ▁هاذيك - ▁عام - ▁فلوس - قة - تين - ▁بالقدا - لهم - ▁تخدم - ▁ٱ - ▁شيء - ▁راهي - ▁جاب - ولاد - ابل - ▁ماك - عة - ▁نمشيوا - وني - شري - بار - انس - ▁وقتها - ▁جديد - ▁يز - ▁كر - ▁حاسيلو - ▁شق - ▁اه - ▁سايي - ▁انشالل - رج - مني - ▁بلا - ▁صحيح - ▁غير - ▁يخدم - مان - وكا - ▁عند - ▁قاعدة - ▁تس - ربة - ▁راس - ▁حط - ▁نكل - تني - ▁الو - سيون - ▁عندنا - ▁لو - ▁ست - صف - ▁ض - ▁كامل - ▁نخدم - ▁يبدا - ▁دونك - ▁أمور - رات - ▁تونس - بدا - ▁تحكي - ▁سو - ▁جاي - ▁وحدة - ▁ساعة - حنا - ▁بكري - ▁إل - ▁وبر - ▁كم - ▁تبدا - ارة - ادي - رق - لوا - ▁يمكن - ▁خاط - ▁وص - جين - ▁هاذاي - ▁هز - قد - ▁قل - ▁وكهو - ▁نص - ▁دي - لقى - ▁وأنا - سين - ▁يح - ▁ماشي - ▁شو - ▁خذيت - امات - ▁كنت - خرج - ▁لقيت - رتاح - كس - ▁حاجات - ▁مريق - ▁مل - ليفون - اوا - ▁شفت - ▁عاملة - ▁تن - ▁والا - سأل - ▁حد - ▁قاللك - ▁العباد - ▁عالاخ - ▁وآك - ▁ماني - ▁ناخذ - ▁حم - ▁الإ - ▁ماضي - ▁ث - الة - ▁أخرى - رين - ▁تشوف - ▁نخرج - ▁أربعة - ▁ألف - نيش - ▁هاي - آ - ▁فيك - رشة - ولة - فلة - ▁بابا - ▁أما - ▁روحي - ▁فيهم - ▁رج - ▁ليك - ونس - يرة - ▁وأكهو - ندي - ▁صار - شك - ▁نرو - ▁آكهو - ▁تش - ▁غاديكا - ▁معاها - ▁لب - ▁أذاكا - ▁آني - ▁يوم - عملوا - ▁نقعد - دوا - ▁عد - سمع - متني - ▁الخدمة - ▁مازلت - ▁قعدت - ايا - ▁برك - قعد - ▁خرجت - ضح - ▁قالل - ▁يقول - ▁وفي - ▁حق - ختي - ▁يعني - خدم - ▁جيت - ▁نرمال - طف - ▁عجب - ▁تقعد - ▁مشينا - اية - ▁خدمة - لدي - روف - ▁الفطر - ▁مشكل - ▁سل - ▁وآنا - الط - ▁بالس - ▁هانا - ▁أوه - ▁أذيكا - ▁وإ - ▁عليهم - ▁حالة - جت - قضي - ▁لق - ▁ونصف - سعة - عطيه - عاو - خانة - ▁مخ - ▁شبيك - بيعة - ▁أهوك - يني - ▁تعد - ▁خال - ▁قريب - ▁راك - ▁قالت - ▁لتو - ▁أكثر - اعة - ▁يظهرلي - ▁ماشية - سمعني - ▁نسيت - ▁ينج - ▁الحمدلل - هدي - ▁وشن - ▁تطي - ▁هنا - ▁نسمع - ▁إنتوما - ▁نحكيلك - ▁قاعد - ▁اسمعني - خرين - إ - ماعة - ▁بالر - ▁دا - ▁عمر - ▁نشري - ▁قهوة - ▁تبارك - ▁صب - ▁مشات - غر - ▁شريت - ▁عامل - ▁زوج - ثنين - ▁برب - ريق - ▁نكم - ▁لم - بيب - ▁مياة - ▁مالل - ▁قعد - ▁سخون - قس - ▁وحده - ▁اسمع - ▁خمسة - ▁غالي - ▁الأو - رلي - ▁العظيم - ▁ترو - تهم - كري - ▁نجيب - ▁جملة - قول - ▁قلتلي - ▁إيجا - ▁يقعد - ▁إيام - ▁يعطيك - ▁نخل - ▁دب - يمة - رهبة - ▁نهز - ▁محم - ▁بين - غار - ▁نحنا - ▁بون - ▁الغ - ▁شهر - ▁بار - رقة - ▁نطي - ئ - ترو - ▁ملا - ▁الكرهبة - ▁باه - ▁عالإخ - ▁عباد - ▁بلاصة - ▁مشى - بيع - ▁نفس - ▁عملنا - ▁واح - ▁أحلاه - ▁بحذاك - ▁لأ - ▁دخ - باب - ▁ودر - ▁غالب - ▁ناكل - ▁مثلا - ء - ▁راقد - ▁تفر - ▁الوقت - ▁تاخذ - حذا - نتر - ▁نبدا - ▁حال - ▁مريم - الم - ▁جمعة - رجول - ▁معايا - ▁تخرج - ▁باس - ▁ساعات - ▁عندهم - ▁نتفر - مسة - ▁الجمعة - بعين - ▁أكاهو - ▁ميش - مراة - ▁خذا - ▁ظ - ▁سيدي - ▁معاي - ▁شبيه - ▁حكا - ▁سف - ▁بعضنا - ▁بالض - ▁ليلة - ▁زعما - ▁الحق - مضان - ▁صعيب - ▁قالتلك - ً - ملة - ▁بق - عرف - لاطة - ▁خرج - 
▁أخت - ▁تقوللي - ▁معانا - ▁صغير - ▁إسمه - ▁بعض - ▁العام - ▁علينا - ▁يتع - ▁فاش - ▁شع - ▁معاهم - ▁يسالش - ▁لهنا - ▁سمعت - ▁البار - ▁نتصو - ▁الاخ - ▁وكان - وبة - دمة - ▁كون - ▁مبعد - ▁تسمع - ▁بعيد - ▁تاكل - ▁نلقا - لامة - لاثة - ▁ذ - ▁تحس - ▁الواح - ▁لدار - ▁فاتت - ▁تاو - ▁أحوالك - ▁عاملين - ▁كبيرة - عجب - ▁بنت - ▁بيدي - ▁حكيت - ▁تحط - ▁مسكينة - ▁هاذوكم - ▁نزيد - لاث - ▁عشرة - ▁عيني - ▁تعب - ▁ياكل - ▁وزيد - ▁طول - ▁حمدلله - ▁وقتاه - ▁معناه - ▁وآش - ▁ووه - ▁وواحد - ▁نشوفوا - ▁عيد - ▁بصراحة - ▁بحذانا - ▁قاعدين - ▁راجل - ▁وحدي - ▁وعشرين - ▁لين - ▁خايب - ▁قالتله - ▁تهز - عيد - ▁كبير - ▁يعرف - ▁عارف - ▁الفلوس - ▁زايد - ▁خدمت - ▁هاذوما - ▁سلاطة - ▁فارغة - ▁ساعتين - ▁تبد - ▁راو - ▁مائة - ▁بعضهم - ▁ظاهرلي - ▁الفازة - كتب - ▁القهوة - سبوك - ▁زاد - ▁ضرب - حكيلي - ▁فوق - ▁عاود - ▁راي - ▁ومبعد - ▁حوايج - ▁دخلت - ▁يقوللك - ▁زيد - ▁زلت - لفزة - ▁وقال - ▁يهب - ▁يلزمني - ▁الحمد - ▁أذي - طبيعت - ▁دورة - ▁عالأقل - ▁آذاك - ▁وبال - ▁الجاي - عطيني - ▁ياخذ - ▁احكيلي - ▁نهبط - ▁رقدت - بلاصة - ▁عزيز - ▁صغار - ▁أقسم - ▁جيب - ▁وصلت - ▁أحوال - ▁جيست - ▁جماعة - سئل - ▁خوذ - ▁يهز - ▁الأخرى - ▁آلاف - ▁إسمع - ▁الحقيقة - ▁ناقص - ▁حاط - ▁موجود - عباد - ▁آذيك - ▁خارج - ▁الخير - ▁البنات - بقى - ▁طرف - ▁سينون - ▁ماذاب - ▁البحر - ▁نرقد - مدلله - ▁إيجى - ▁خالتي - ▁فازة - ▁بريك - ▁شريبتك - ▁تطلع - ؤ - ▁المشكلة - ▁طري - ▁مادام - ▁طلبت - ▁يلعب - ▁نعاود - ▁وحدك - ▁ظاهر - ٱ - ژ - ٍ - <sos/eos> init: null input_size: null ctc_conf: dropout_rate: 0.0 ctc_type: builtin reduce: true ignore_nan_grad: true model_conf: asr_weight: 0.3 mt_weight: 0.0 mtlalpha: 1.0 lsm_weight: 0.1 length_normalized_loss: false use_preprocessor: true token_type: bpe src_token_type: bpe bpemodel: data/token_list/tgt_bpe_unigram1000/bpe.model src_bpemodel: data/token_list/src_bpe_unigram1000/bpe.model non_linguistic_symbols: null cleaner: null g2p: null speech_volume_normalize: null rir_scp: null rir_apply_prob: 1.0 noise_scp: null noise_apply_prob: 1.0 noise_db_range: '13_15' frontend: default frontend_conf: n_fft: 512 win_length: 400 hop_length: 160 fs: 16k specaug: specaug specaug_conf: apply_time_warp: true time_warp_window: 5 time_warp_mode: bicubic apply_freq_mask: true freq_mask_width_range: - 0 - 30 num_freq_mask: 2 apply_time_mask: true time_mask_width_range: - 0 - 40 num_time_mask: 2 normalize: global_mvn normalize_conf: stats_file: exp/st_stats_raw_bpe1000_sp/train/feats_stats.npz preencoder: null preencoder_conf: {} encoder: transformer encoder_conf: input_layer: conv2d num_blocks: 12 linear_units: 2048 dropout_rate: 0.1 output_size: 256 attention_heads: 4 postencoder: null postencoder_conf: {} decoder: transformer decoder_conf: input_layer: embed num_blocks: 6 linear_units: 2048 dropout_rate: 0.1 extra_asr_decoder: transformer extra_asr_decoder_conf: input_layer: embed num_blocks: 2 linear_units: 2048 dropout_rate: 0.1 extra_mt_decoder: transformer extra_mt_decoder_conf: input_layer: embed num_blocks: 2 linear_units: 2048 dropout_rate: 0.1 required: - output_dir - src_token_list - token_list version: 0.10.6a1 distributed: true ``` </details> ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } 
``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
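Beyond the recipe-level demo above, a Python-level sketch of translation inference is possible via `espnet2.bin.st_inference` (an addition of this edit, not part of the original card; it assumes `Speech2Text.from_pretrained` resolves the Hub tag):

```python
# Sketch only: translate one Tunisian Arabic utterance into English.
import soundfile as sf
from espnet2.bin.st_inference import Speech2Text

speech2text = Speech2Text.from_pretrained(
    "espnet/brianyan918_iwslt22_dialect_st_transformer_fisherlike_4gpu_bbins16m_fix"
)

speech, rate = sf.read("utterance.wav")   # 16 kHz mono
translation, *_ = speech2text(speech)[0]  # best hypothesis
print(translation)
```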
{"language": "noinfo", "license": "cc-by-4.0", "tags": ["espnet", "audio", "speech-translation"], "datasets": ["iwslt22_dialect"]}
espnet/brianyan918_iwslt22_dialect_st_transformer_fisherlike_4gpu_bbins16m_fix
null
[ "espnet", "audio", "speech-translation", "dataset:iwslt22_dialect", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
automatic-speech-recognition
espnet
## ESPnet2 ASR model ### `espnet/brianyan918_iwslt22_dialect_train_asr_conformer_ctc0.3_lr2e-3_warmup15k_newspecaug` This model was trained by Brian Yan using iwslt22_dialect recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```bash cd espnet git checkout 77fce65312877a132bbae01917ad26b74f6e2e14 pip install -e . cd egs2/iwslt22_dialect/asr1 ./run.sh --skip_data_prep false --skip_train true --download_model espnet/brianyan918_iwslt22_dialect_train_asr_conformer_ctc0.3_lr2e-3_warmup15k_newspecaug ``` <!-- Generated by scripts/utils/show_asr_result.sh --> # RESULTS ## Environments - date: `Wed Feb 2 05:32:30 EST 2022` - python version: `3.8.12 (default, Oct 12 2021, 13:49:34) [GCC 7.5.0]` - espnet version: `espnet 0.10.6a1` - pytorch version: `pytorch 1.8.1` - Git hash: `99581e0f5af3ad68851d556645e7292771436df9` - Commit date: `Sat Jan 29 11:32:38 2022 -0500` ## asr_train_asr_conformer_ctc0.3_lr2e-3_warmup15k_newspecaug_raw_bpe1000_sp ### WER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_asr_model_valid.acc.ave/test1|4204|27370|54.7|39.5|5.8|8.8|54.2|87.9| ### CER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_asr_model_valid.acc.ave/test1|4204|145852|84.1|7.1|8.8|11.5|27.4|87.9| ### TER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_asr_model_valid.acc.ave/test1|4204|64424|63.8|22.8|13.4|12.2|48.3|87.9| ## ASR config <details><summary>expand</summary> ``` config: conf/tuning/train_asr_conformer_ctc0.3_lr2e-3_warmup15k_newspecaug.yaml print_config: false log_level: INFO dry_run: false iterator_type: sequence output_dir: exp/asr_train_asr_conformer_ctc0.3_lr2e-3_warmup15k_newspecaug_raw_bpe1000_sp ngpu: 1 seed: 0 num_workers: 1 num_att_plot: 3 dist_backend: nccl dist_init_method: env:// dist_world_size: 4 dist_rank: 0 local_rank: 0 dist_master_addr: localhost dist_master_port: 55101 dist_launcher: null multiprocessing_distributed: true unused_parameters: false sharded_ddp: false cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true collect_stats: false write_collected_feats: false max_epoch: 80 patience: null val_scheduler_criterion: - valid - loss early_stopping_criterion: - valid - loss - min best_model_criterion: - - valid - acc - max keep_nbest_models: 10 nbest_averaging_interval: 0 grad_clip: 5.0 grad_clip_type: 2.0 grad_noise: false accum_grad: 2 no_forward_run: false resume: true train_dtype: float32 use_amp: false log_interval: null use_matplotlib: true use_tensorboard: true use_wandb: false wandb_project: null wandb_id: null wandb_entity: null wandb_name: null wandb_model_log_interval: -1 detect_anomaly: false pretrain_path: null init_param: [] ignore_init_mismatch: false freeze_param: [] num_iters_per_epoch: null batch_size: 20 valid_batch_size: null batch_bins: 25000000 valid_batch_bins: null train_shape_file: - exp/asr_stats_raw_bpe1000_sp/train/speech_shape - exp/asr_stats_raw_bpe1000_sp/train/text_shape.bpe valid_shape_file: - exp/asr_stats_raw_bpe1000_sp/valid/speech_shape - exp/asr_stats_raw_bpe1000_sp/valid/text_shape.bpe batch_type: numel valid_batch_type: null fold_length: - 80000 - 150 sort_in_batch: descending sort_batch: descending multiple_iterator: false chunk_length: 500 chunk_shift_ratio: 0.5 num_cache_chunks: 1024 train_data_path_and_name_and_type: - - /scratch/iwslt22asrdump/raw/train_sp/wav.scp - speech - kaldi_ark - - /scratch/iwslt22asrdump/raw/train_sp/text - text - 
text valid_data_path_and_name_and_type: - - /scratch/iwslt22asrdump/raw/dev/wav.scp - speech - kaldi_ark - - /scratch/iwslt22asrdump/raw/dev/text - text - text allow_variable_data_keys: false max_cache_size: 0.0 max_cache_fd: 32 valid_max_cache_size: null optim: adam optim_conf: lr: 0.002 weight_decay: 1.0e-06 scheduler: warmuplr scheduler_conf: warmup_steps: 15000 token_list: - <blank> - <unk> - ّ - ي - ا - ِ - ل - َ - و - ه - ة - م - ر - ك - ▁ما - ُ - ب - ش - د - ت - ▁في - َّ - ▁ن - ▁ي - ▁ت - ن - ▁لا - ح - ▁ه - س - وا - ▁م - ف - ▁إي - ع - ▁ب - ها - ط - ى - ق - ▁الل - ▁أ - ج - ▁والل - ▁و - ▁إيه - ▁ا - ▁يا - ز - ▁تو - ▁بش - ص - ▁أه - خ - ات - ▁إنت - ▁أنا - نا - ▁شن - ▁ق - ▁ش - ▁ك - يت - ين - ▁ف - ار - ▁قال - ▁باهي - ▁ع - ▁من - ▁ل - ▁مش - ▁كان - ▁حت - ▁ول - هم - ▁ر - ان - ▁س - ض - ني - ▁بال - ▁على - ▁متاع - ▁كي - ▁ال - ▁ح - ▁كل - ▁آنا - ▁الم - ▁خ - ▁الس - ▁وال - ون - ور - ▁أم - ▁هك - ▁آش - ▁الد - ▁عاد - ▁ج - ▁معناها - ▁مع - اش - ▁الص - ▁نهار - ▁لل - لها - ▁تي - ▁رب - ▁خاطر - ▁أكهو - غ - ▁شي - الل - ام - تها - ▁ون - ▁آك - ▁فهمت - وم - ▁موش - مشي - ▁ص - ▁اليوم - ▁مر - ست - ▁الب - ▁لاباس - تلي - ▁الكل - ▁عال - ذ - ▁فم - ▁الك - ▁حاجة - ▁شوي - اكا - ▁ياخي - ▁هاني - ▁صح - اس - ▁آه - ▁برشة - ▁الن - ▁وت - ▁الج - لك - ▁راهو - سم - ▁الح - مت - ▁الت - ▁بعد - اج - عد - ▁انشا - وش - لت - ▁وين - ث - ▁ولا - ▁باش - ▁فيها - نت - ▁إ - ▁الأ - ▁الف - ▁إم - ▁واحد - ▁ألو - ▁عندي - ▁أك - ▁خل - ▁وي - ▁تعمل - أ - ▁ريت - ▁وأ - ▁تعرف - بت - ▁الع - ▁مشيت - ▁وه - ▁حاصيلو - ▁بالل - ▁نعمل - ▁غ - ▁تجي - ▁يجي - ▁كيفاش - ▁عملت - ظ - اك - ▁هاو - ▁اش - ▁قد - ▁نق - ▁د - ▁زادا - ▁فيه - رة - ▁بر - ▁الش - ▁ز - ▁كيما - ▁الا - ند - عم - ▁نح - ▁بنتي - ▁نمشي - ▁عليك - ▁نعرفش - ▁كهو - ▁وم - ▁ط - تي - ▁خير - ▁آ - مش - ▁عليه - له - حت - ▁إيا - ▁أحنا - ▁تع - الا - عب - ▁ديما - ▁تت - ▁جو - ▁مالا - ▁أو - ▁قلتلك - ▁معنتها - لنا - ▁شكون - ▁تحب - بر - ▁الر - ▁وا - ▁الق - اء - ▁عل - ▁البارح - ▁وخ - ▁سافا - ▁هوما - ▁ولدي - ▁ - ▁نعرف - يف - رت - ▁وب - ▁روح - ▁علاش - ▁هاذاك - ▁رو - وس - ▁جا - ▁كيف - طر - ▁غادي - يكا - عمل - ▁نحب - ▁عندك - ▁وما - ▁فر - اني - ▁قلتله - ▁الط - فر - ▁دار - ▁عليها - ▁يعمل - ▁نت - ▁تح - باح - ▁ماهو - ▁وكل - ▁وع - قت - ▁فهمتك - عر - ▁وس - ▁تر - ▁سي - يلة - ▁قلت - ▁رمضان - صل - ▁آما - ▁الواحد - ▁بيه - ▁ثلاثة - ▁فهمتني - ▁ها - بط - ▁مازال - قل - ▁بالك - ▁معناتها - ▁ور - ▁قلتلها - ▁يس - رب - ▁ام - ▁وبعد - ▁الث - ▁وإنت - ▁بحذا - ▁لازم - ْ - ▁بن - قرا - سك - ▁يت - خل - ▁فه - عت - ▁هاك - ▁تق - ▁قبل - ▁وك - ▁نقول - ▁الز - حم - ▁عادش - حكي - وها - بة - نس - طل - ▁علاه - ذا - ▁سا - ▁طل - الي - ▁يق - ▁دو - حوا - حد - ▁نشوف - نة - ▁لي - ▁تك - ▁نا - ▁هاذ - ▁خويا - ▁المر - ▁وينك - ▁البر - ▁أتو - ينا - ▁حل - ولي - ▁ثم - ▁عم - ▁آي - ▁قر - از - ▁وح - كش - بعة - ▁كيفاه - ▁نع - ▁الحمدلله - ▁ياسر - ▁الخ - ▁معاك - ▁معاه - ▁تقول - دة - ▁حكاية - تش - ▁حس - ▁غدوا - ▁بالحق - روا - وز - ▁تخ - ▁العيد - رجع - ▁بالي - ▁جات - ▁وج - حة - ▁وش - ▁آخر - ▁طا - ▁مت - لقا - تك - ▁مس - ▁راني - كون - ▁صاحب - ▁هاكا - ▁قول - ▁عر - ▁عنده - ▁يلزم - ▁هاذا - ▁يخ - ▁وقتاش - ▁وقت - بع - ▁العش - ▁هاذي - هاش - ينة - ▁هاذاكا - عطي - ▁تنج - ▁باهية - نيا - فت - ▁يحب - ▁تف - ▁أهلا - وف - ▁غدوة - ▁بيك - ▁بد - عن - ▁در - ▁ننج - هار - ▁الحكاية - مون - وق - ▁نورمال - ▁عندها - خر - ▁بو - ▁حب - ▁آكا - ▁وف - ▁هاذيكا - ▁ديجا - ▁وق - ▁طي - لتل - بعث - ▁تص - رك - ▁مانيش - ▁العادة - ▁شوف - ضر - ▁يمشي - ▁نعملوا - ▁عرفت - ▁زال - ▁متع - ▁عمل - ▁بيها - ▁نحكي - اع - ▁نج - معة - ▁والكل - عناها - ▁يعي - ▁نجي - ستن - ▁هاذيك - ▁عام - ▁فلوس - قة - تين - ▁بالقدا - لهم - ▁تخدم - ▁ٱ - ▁شيء - ▁راهي - ▁جاب - ولاد - ابل - ▁ماك - عة - ▁نمشيوا - وني - شري - بار - انس - ▁وقتها - ▁جديد - ▁يز - ▁كر - 
▁حاسيلو - ▁شق - ▁اه - ▁سايي - ▁انشالل - رج - مني - ▁بلا - ▁صحيح - ▁غير - ▁يخدم - مان - وكا - ▁عند - ▁قاعدة - ▁تس - ربة - ▁راس - ▁حط - ▁نكل - تني - ▁الو - سيون - ▁عندنا - ▁لو - ▁ست - صف - ▁ض - ▁كامل - ▁نخدم - ▁يبدا - ▁دونك - ▁أمور - رات - ▁تونس - بدا - ▁تحكي - ▁سو - ▁جاي - ▁وحدة - ▁ساعة - حنا - ▁بكري - ▁إل - ▁وبر - ▁كم - ▁تبدا - ارة - ادي - رق - لوا - ▁يمكن - ▁خاط - ▁وص - جين - ▁هاذاي - ▁هز - قد - ▁قل - ▁وكهو - ▁نص - ▁دي - لقى - ▁وأنا - سين - ▁يح - ▁ماشي - ▁شو - ▁خذيت - امات - ▁كنت - خرج - ▁لقيت - رتاح - كس - ▁حاجات - ▁مريق - ▁مل - ليفون - اوا - ▁شفت - ▁عاملة - ▁تن - ▁والا - سأل - ▁حد - ▁قاللك - ▁العباد - ▁عالاخ - ▁وآك - ▁ماني - ▁ناخذ - ▁حم - ▁الإ - ▁ماضي - ▁ث - الة - ▁أخرى - رين - ▁تشوف - ▁نخرج - ▁أربعة - ▁ألف - نيش - ▁هاي - آ - ▁فيك - رشة - ولة - فلة - ▁بابا - ▁أما - ▁روحي - ▁فيهم - ▁رج - ▁ليك - ونس - يرة - ▁وأكهو - ندي - ▁صار - شك - ▁نرو - ▁آكهو - ▁تش - ▁غاديكا - ▁معاها - ▁لب - ▁أذاكا - ▁آني - ▁يوم - عملوا - ▁نقعد - دوا - ▁عد - سمع - متني - ▁الخدمة - ▁مازلت - ▁قعدت - ايا - ▁برك - قعد - ▁خرجت - ضح - ▁قالل - ▁يقول - ▁وفي - ▁حق - ختي - ▁يعني - خدم - ▁جيت - ▁نرمال - طف - ▁عجب - ▁تقعد - ▁مشينا - اية - ▁خدمة - لدي - روف - ▁الفطر - ▁مشكل - ▁سل - ▁وآنا - الط - ▁بالس - ▁هانا - ▁أوه - ▁أذيكا - ▁وإ - ▁عليهم - ▁حالة - جت - قضي - ▁لق - ▁ونصف - سعة - عطيه - عاو - خانة - ▁مخ - ▁شبيك - بيعة - ▁أهوك - يني - ▁تعد - ▁خال - ▁قريب - ▁راك - ▁قالت - ▁لتو - ▁أكثر - اعة - ▁يظهرلي - ▁ماشية - سمعني - ▁نسيت - ▁ينج - ▁الحمدلل - هدي - ▁وشن - ▁تطي - ▁هنا - ▁نسمع - ▁إنتوما - ▁نحكيلك - ▁قاعد - ▁اسمعني - خرين - إ - ماعة - ▁بالر - ▁دا - ▁عمر - ▁نشري - ▁قهوة - ▁تبارك - ▁صب - ▁مشات - غر - ▁شريت - ▁عامل - ▁زوج - ثنين - ▁برب - ريق - ▁نكم - ▁لم - بيب - ▁مياة - ▁مالل - ▁قعد - ▁سخون - قس - ▁وحده - ▁اسمع - ▁خمسة - ▁غالي - ▁الأو - رلي - ▁العظيم - ▁ترو - تهم - كري - ▁نجيب - ▁جملة - قول - ▁قلتلي - ▁إيجا - ▁يقعد - ▁إيام - ▁يعطيك - ▁نخل - ▁دب - يمة - رهبة - ▁نهز - ▁محم - ▁بين - غار - ▁نحنا - ▁بون - ▁الغ - ▁شهر - ▁بار - رقة - ▁نطي - ئ - ترو - ▁ملا - ▁الكرهبة - ▁باه - ▁عالإخ - ▁عباد - ▁بلاصة - ▁مشى - بيع - ▁نفس - ▁عملنا - ▁واح - ▁أحلاه - ▁بحذاك - ▁لأ - ▁دخ - باب - ▁ودر - ▁غالب - ▁ناكل - ▁مثلا - ء - ▁راقد - ▁تفر - ▁الوقت - ▁تاخذ - حذا - نتر - ▁نبدا - ▁حال - ▁مريم - الم - ▁جمعة - رجول - ▁معايا - ▁تخرج - ▁باس - ▁ساعات - ▁عندهم - ▁نتفر - مسة - ▁الجمعة - بعين - ▁أكاهو - ▁ميش - مراة - ▁خذا - ▁ظ - ▁سيدي - ▁معاي - ▁شبيه - ▁حكا - ▁سف - ▁بعضنا - ▁بالض - ▁ليلة - ▁زعما - ▁الحق - مضان - ▁صعيب - ▁قالتلك - ً - ملة - ▁بق - عرف - لاطة - ▁خرج - ▁أخت - ▁تقوللي - ▁معانا - ▁صغير - ▁إسمه - ▁بعض - ▁العام - ▁علينا - ▁يتع - ▁فاش - ▁شع - ▁معاهم - ▁يسالش - ▁لهنا - ▁سمعت - ▁البار - ▁نتصو - ▁الاخ - ▁وكان - وبة - دمة - ▁كون - ▁مبعد - ▁تسمع - ▁بعيد - ▁تاكل - ▁نلقا - لامة - لاثة - ▁ذ - ▁تحس - ▁الواح - ▁لدار - ▁فاتت - ▁تاو - ▁أحوالك - ▁عاملين - ▁كبيرة - عجب - ▁بنت - ▁بيدي - ▁حكيت - ▁تحط - ▁مسكينة - ▁هاذوكم - ▁نزيد - لاث - ▁عشرة - ▁عيني - ▁تعب - ▁ياكل - ▁وزيد - ▁طول - ▁حمدلله - ▁وقتاه - ▁معناه - ▁وآش - ▁ووه - ▁وواحد - ▁نشوفوا - ▁عيد - ▁بصراحة - ▁بحذانا - ▁قاعدين - ▁راجل - ▁وحدي - ▁وعشرين - ▁لين - ▁خايب - ▁قالتله - ▁تهز - عيد - ▁كبير - ▁يعرف - ▁عارف - ▁الفلوس - ▁زايد - ▁خدمت - ▁هاذوما - ▁سلاطة - ▁فارغة - ▁ساعتين - ▁تبد - ▁راو - ▁مائة - ▁بعضهم - ▁ظاهرلي - ▁الفازة - كتب - ▁القهوة - سبوك - ▁زاد - ▁ضرب - حكيلي - ▁فوق - ▁عاود - ▁راي - ▁ومبعد - ▁حوايج - ▁دخلت - ▁يقوللك - ▁زيد - ▁زلت - لفزة - ▁وقال - ▁يهب - ▁يلزمني - ▁الحمد - ▁أذي - طبيعت - ▁دورة - ▁عالأقل - ▁آذاك - ▁وبال - ▁الجاي - عطيني - ▁ياخذ - ▁احكيلي - ▁نهبط - ▁رقدت - بلاصة - ▁عزيز - ▁صغار - ▁أقسم - ▁جيب - ▁وصلت - ▁أحوال - ▁جيست - ▁جماعة - سئل - ▁خوذ - ▁يهز - ▁الأخرى - ▁آلاف - ▁إسمع - ▁الحقيقة - ▁ناقص - ▁حاط - ▁موجود - 
عباد - ▁آذيك - ▁خارج - ▁الخير - ▁البنات - بقى - ▁طرف - ▁سينون - ▁ماذاب - ▁البحر - ▁نرقد - مدلله - ▁إيجى - ▁خالتي - ▁فازة - ▁بريك - ▁شريبتك - ▁تطلع - ؤ - ▁المشكلة - ▁طري - ▁مادام - ▁طلبت - ▁يلعب - ▁نعاود - ▁وحدك - ▁ظاهر - ٱ - ژ - ٍ - <sos/eos> init: null input_size: null ctc_conf: dropout_rate: 0.0 ctc_type: builtin reduce: true ignore_nan_grad: true joint_net_conf: null model_conf: ctc_weight: 0.3 lsm_weight: 0.1 length_normalized_loss: false use_preprocessor: true token_type: bpe bpemodel: data/token_list/bpe_unigram1000/bpe.model non_linguistic_symbols: null cleaner: null g2p: null speech_volume_normalize: null rir_scp: null rir_apply_prob: 1.0 noise_scp: null noise_apply_prob: 1.0 noise_db_range: '13_15' frontend: default frontend_conf: n_fft: 512 hop_length: 256 fs: 16k specaug: specaug specaug_conf: apply_time_warp: true time_warp_window: 5 time_warp_mode: bicubic apply_freq_mask: true freq_mask_width_range: - 0 - 27 num_freq_mask: 2 apply_time_mask: true time_mask_width_ratio_range: - 0.0 - 0.05 num_time_mask: 5 normalize: global_mvn normalize_conf: stats_file: exp/asr_stats_raw_bpe1000_sp/train/feats_stats.npz preencoder: null preencoder_conf: {} encoder: conformer encoder_conf: output_size: 256 attention_heads: 4 linear_units: 1024 num_blocks: 12 dropout_rate: 0.1 positional_dropout_rate: 0.1 attention_dropout_rate: 0.1 input_layer: conv2d normalize_before: true macaron_style: true rel_pos_type: latest pos_enc_layer_type: rel_pos selfattention_layer_type: rel_selfattn activation_type: swish use_cnn_module: true cnn_module_kernel: 31 postencoder: null postencoder_conf: {} decoder: transformer decoder_conf: attention_heads: 4 linear_units: 2048 num_blocks: 6 dropout_rate: 0.1 positional_dropout_rate: 0.1 self_attention_dropout_rate: 0.1 src_attention_dropout_rate: 0.1 required: - output_dir - token_list version: 0.10.6a1 distributed: true ``` </details> ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
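Since this model was trained with `ctc_weight: 0.3`, decoding can use the same joint CTC/attention weighting; a hedged sketch follows (the `beam_size` value is illustrative, not taken from the recipe):

```python
# Sketch only: joint CTC/attention decoding with this checkpoint.
import soundfile as sf
from espnet2.bin.asr_inference import Speech2Text

speech2text = Speech2Text.from_pretrained(
    "espnet/brianyan918_iwslt22_dialect_train_asr_conformer_ctc0.3_lr2e-3_warmup15k_newspecaug",
    ctc_weight=0.3,  # matches the training objective's CTC weight
    beam_size=10,    # illustrative beam size
)

speech, rate = sf.read("utterance.wav")
text, *_ = speech2text(speech)[0]
print(text)
```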
{"language": "noinfo", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["iwslt22_dialect"]}
espnet/brianyan918_iwslt22_dialect_train_asr_conformer_ctc0.3_lr2e-3_warmup15k_newspecaug
null
[ "espnet", "audio", "automatic-speech-recognition", "dataset:iwslt22_dialect", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
espnet
## ESPnet2 ST model ### `espnet/brianyan918_iwslt22_dialect_train_st_conformer_ctc0.3_lr2e-3_warmup15k_newspecaug` This model was trained by Brian Yan using iwslt22_dialect recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```bash cd espnet git checkout 77fce65312877a132bbae01917ad26b74f6e2e14 pip install -e . cd egs2/iwslt22_dialect/st1 ./run.sh --skip_data_prep false --skip_train true --download_model espnet/brianyan918_iwslt22_dialect_train_st_conformer_ctc0.3_lr2e-3_warmup15k_newspecaug ``` <!-- Generated by scripts/utils/show_st_results.sh --> # RESULTS ## Environments - date: `Tue Feb 8 12:54:12 EST 2022` - python version: `3.8.12 (default, Oct 12 2021, 13:49:34) [GCC 7.5.0]` - espnet version: `espnet 0.10.7a1` - pytorch version: `pytorch 1.8.1` - Git hash: `77fce65312877a132bbae01917ad26b74f6e2e14` - Commit date: `Tue Feb 8 10:48:10 2022 -0500` ## st_train_st_conformer_ctc0.3_lr2e-3_warmup15k_newspecaug_raw_bpe_tc1000_sp ### BLEU |dataset|bleu_score|verbose_score| |---|---|---| pen2_st_model_valid.acc.ave|13.9|44.0/21.8/11.4/6.2 (BP = 0.859 ratio = 0.868 hyp_len = 36614 ref_len = 42181) ## ST config <details><summary>expand</summary> ``` config: conf/tuning/train_st_conformer_ctc0.3_lr2e-3_warmup15k_newspecaug.yaml print_config: false log_level: INFO dry_run: false iterator_type: sequence output_dir: exp/st_train_st_conformer_ctc0.3_lr2e-3_warmup15k_newspecaug_raw_bpe_tc1000_sp ngpu: 1 seed: 0 num_workers: 1 num_att_plot: 3 dist_backend: nccl dist_init_method: env:// dist_world_size: null dist_rank: null local_rank: 0 dist_master_addr: null dist_master_port: null dist_launcher: null multiprocessing_distributed: false unused_parameters: false sharded_ddp: false cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true collect_stats: false write_collected_feats: false max_epoch: 80 patience: null val_scheduler_criterion: - valid - loss early_stopping_criterion: - valid - loss - min best_model_criterion: - - valid - acc - max keep_nbest_models: 10 nbest_averaging_interval: 0 grad_clip: 5.0 grad_clip_type: 2.0 grad_noise: false accum_grad: 2 no_forward_run: false resume: true train_dtype: float32 use_amp: false log_interval: null use_matplotlib: true use_tensorboard: true use_wandb: false wandb_project: null wandb_id: null wandb_entity: null wandb_name: null wandb_model_log_interval: -1 detect_anomaly: false pretrain_path: null init_param: [] ignore_init_mismatch: true freeze_param: [] num_iters_per_epoch: null batch_size: 20 valid_batch_size: null batch_bins: 25000000 valid_batch_bins: null train_shape_file: - exp/st_stats_raw_bpe1000_sp/train/speech_shape - exp/st_stats_raw_bpe1000_sp/train/text_shape.bpe - exp/st_stats_raw_bpe1000_sp/train/src_text_shape.bpe valid_shape_file: - exp/st_stats_raw_bpe1000_sp/valid/speech_shape - exp/st_stats_raw_bpe1000_sp/valid/text_shape.bpe - exp/st_stats_raw_bpe1000_sp/valid/src_text_shape.bpe batch_type: numel valid_batch_type: null fold_length: - 80000 - 150 - 150 sort_in_batch: descending sort_batch: descending multiple_iterator: false chunk_length: 500 chunk_shift_ratio: 0.5 num_cache_chunks: 1024 train_data_path_and_name_and_type: - - dump/raw/train_sp/wav.scp - speech - kaldi_ark - - dump/raw/train_sp/text.tc.en - text - text - - dump/raw/train_sp/text.tc.rm.ta - src_text - text valid_data_path_and_name_and_type: - - dump/raw/dev/wav.scp - speech - kaldi_ark - - dump/raw/dev/text.tc.en - text - text - - dump/raw/dev/text.tc.rm.ta - src_text - text allow_variable_data_keys: false max_cache_size: 
0.0 max_cache_fd: 32 valid_max_cache_size: null optim: adam optim_conf: lr: 0.002 weight_decay: 1.0e-06 scheduler: warmuplr scheduler_conf: warmup_steps: 15000 token_list: - <blank> - <unk> - s - ▁ - apo - '&' - ; - ▁i - ▁you - t - ▁it - ▁the - ▁and - ▁to - ▁that - ▁a - n - a - ▁he - ▁me - m - d - ▁yes - ▁she - ▁no - ▁in - ▁what - ▁for - ▁we - ing - ll - ▁they - re - ▁are - ▁did - ▁god - ▁is - e - ed - ▁so - ▁her - ▁do - ▁have - ▁of - ▁with - ▁go - ▁know - ▁not - ▁was - ▁on - ▁don - y - ▁him - ▁one - ▁like - ▁there - '%' - ▁pw - ▁be - ▁at - ▁told - ▁good - ▁will - ▁my - ▁all - ▁or - c - er - p - ▁how - ▁ah - r - ▁but - ▁them - ▁see - ▁get - ▁can - i - ▁when - ▁going - ▁about - ▁mean - ▁this - k - ▁your - ▁by - ▁if - u - ▁come - ▁up - ▁tell - g - ▁said - ▁then - ▁now - ▁yeah - o - ▁out - al - ra - ▁because - ▁time - ▁well - ▁would - ▁p - ▁from - h - ar - f - ▁swear - ▁went - b - ▁really - or - ▁want - ri - ▁home - ▁work - ve - ▁take - ▁got - ▁just - l - ▁uh - ▁why - en - ▁even - ▁am - ▁who - ▁make - ▁day - '-' - in - ▁something - ▁some - ou - ▁us - ▁okay - ▁where - ▁does - ▁has - ▁thank - ▁c - ▁his - th - ▁back - ▁fine - ▁today - ly - ▁b - ▁oh - ▁doing - ▁everything - ▁here - le - ▁thing - ▁two - ▁anyway - li - ▁had - ▁still - ▁say - ro - ▁after - ce - ▁hello - ▁ma - ▁call - w - ▁listen - il - ▁should - ▁girl - ▁f - z - ▁too - ▁let - ▁understand - ▁may - ▁much - ▁think - ch - ir - ha - ▁other - ▁tomorrow - ▁were - ▁people - es - ▁year - di - ba - ▁right - el - ▁things - ▁house - v - ▁actually - un - ▁an - ▁give - ▁only - ▁better - pe - ▁need - ▁buy - ▁de - ne - ▁ha - ur - ion - ▁made - la - ▁willing - ▁nothing - ▁called - ▁night - ▁yesterday - se - ▁came - ▁lot - ter - ▁g - po - ▁find - ry - ▁car - ▁over - ic - ▁stay - ▁eat - ent - ▁always - ▁very - 'on' - ▁put - ▁ramadan - ▁those - ▁hear - is - ▁talk - ▁three - ▁anything - ▁mo - ▁little - ▁been - ▁already - fi - ation - ke - ▁first - ▁look - it - ▁won - ▁mom - ▁way - ▁before - ▁ok - ▁last - fa - ▁cook - vi - ▁hi - ▁same - ▁thought - ▁also - um - ate - ▁money - ▁start - ▁place - us - ▁morning - ▁could - ▁ask - ▁bring - ▁bit - ▁lo - ▁leave - ▁man - ▁left - ine - ▁days - ge - ▁la - ▁week - ▁friend - ▁problem - ▁sister - ▁allah - ▁feel - ▁every - ▁more - fe - ▁long - ▁hundred - ▁j - ▁eh - ho - ca - em - ▁talking - ▁exam - ▁next - ▁new - ▁fun - ▁took - ▁alright - co - ▁w - ▁um - ▁eid - ▁brother - ▁our - gh - ow - ▁o - ▁four - ni - wa - ▁else - ▁finish - bo - ▁sleep - ▁bless - ▁dear - ▁since - ▁play - ▁name - hi - ▁coming - ▁many - et - ▁usual - ▁con - ▁maybe - ▁off - bi - ▁than - ▁any - ▁mother - ▁son - om - ▁their - ▁keep - ▁dinner - ▁ten - ▁half - ▁help - ▁bad - and - ▁pass - ▁hot - ▁guy - ▁least - ▁down - ▁bought - ▁dinars - ▁working - ▁around - ▁normal - ▁poor - ▁stuff - ▁hope - ▁used - ▁again - ▁bro - ul - ▁phone - ▁ex - ▁done - ▁six - ▁na - ▁month - ▁tired - ▁check - ▁show - ▁together - oo - ▁later - ▁past - ▁five - ▁watch - ya - ▁coffee - ment - ut - ▁plan - ▁great - ▁daughter - j - ▁another - side - ▁change - ▁yet - ting - ▁until - ▁honestly - ▁whole - ol - ▁care - ▁sure - able - id - ▁big - ▁spend - ▁exactly - ▁boy - ▁course - ▁end - ▁please - ▁started - he - up - ▁found - ▁saw - ▁family - ▁asked - ▁enough - ▁during - ▁rest - ▁which - ▁gave - ▁true - ▁while - ▁job - ▁el - ▁each - ▁away - ▁kids - ▁goes - less - ▁twenty - ▁eight - ▁someone - ▁cha - ▁clothes - ah - ▁myself - ▁nice - ▁late - ▁old - ▁real - age - ant - ▁fast - ▁add - ▁hard - ▁these - ful - im - ▁close - ive - ▁dad - ▁pay - ies - ▁dude - ▁alone - ▁far - ance - ▁dis - ▁seven 
- ▁isn - ▁pro - our - ▁thousand - ▁break - ▁hour - ▁wait - ▁brought - ▁open - ▁un - ▁wedding - ▁walk - ▁father - ▁ka - ▁second - x - ▁saturday - ▁salad - ▁win - ▁everyone - ▁water - ▁tunis - ▁remember - ity - ▁wake - ▁minute - ▁school - ▁sunday - ▁own - ▁shop - ▁cold - ▁meet - ▁wear - ever - ▁send - ▁early - ▁gra - tic - ▁short - ▁use - ▁sometimes - hou - ▁love - ▁prepare - ▁sea - ▁study - ure - ▁com - qui - ▁hand - ▁both - ja - ▁summer - ▁wrong - ▁wanted - che - ▁miss - ▁try - ▁iftar - ▁yourself - q - ▁live - war - ▁expensive - ▁getting - ▁waiting - ▁once - ▁kh - ▁forgot - ▁nine - ▁anymore - ▁soup - ▁uncle - ▁beach - ▁saying - ▁into - ▁having - ▁brik - ▁room - ▁food - ▁visit - ▁matter - ▁thirty - ▁taking - ▁rain - ▁aunt - ▁never - ▁pick - ▁tunisia - ▁health - ▁head - ▁cut - ▁fasting - ▁sick - ▁friday - ▁forget - ▁monday - ▁become - ▁dress - ated - ▁most - wi - ▁hang - ▁life - ▁fish - ▁happy - ▁delicious - ▁deal - ▁finished - ble - ▁studying - ▁weather - ▁making - ▁cost - ▁bl - ▁stayed - ▁guess - ▁teach - ▁stop - ▁near - ▁watching - ▁without - ▁imagine - ▁seriously - fl - ▁speak - ▁idea - ▁must - ▁normally - ▁turn - ize - ▁clean - ▁tv - ▁meat - ▁woke - ▁example - ▁easy - ▁sent - ▁sell - over - ▁fifty - ▁amazing - ▁beautiful - ▁whatever - ▁enjoy - ▁talked - ▁believe - ▁thinking - ▁count - ▁almost - ▁longer - ▁afternoon - ▁hair - ▁front - ▁earlier - ▁mind - ▁kind - ▁tea - ▁best - ▁rent - ▁picture - ▁cooked - ▁price - ight - ▁soon - ▁woman - ▁otherwise - ▁happened - ▁story - ▁luck - ▁high - ▁happen - ▁arrive - ▁paper - ga - ▁quickly - ▁looking - ub - ▁number - ▁staying - ▁sit - man - ack - ▁important - ▁either - ▁person - ▁small - ▁free - ▁crazy - ▁playing - ▁kept - ▁part - ▁game - law - ▁till - uck - ▁ready - ▁might - ▁gone - ▁full - ▁fix - ▁subject - ▁laugh - ▁doctor - ▁welcome - ▁eleven - ▁sleeping - ▁heat - ▁probably - ▁such - ▁café - ▁fat - ▁sweet - ▁married - ▁drink - ▁move - ▁outside - ▁especially - ▁group - ji - ▁market - ▁through - ▁train - ▁protect - ▁turned - ▁red - ▁busy - ▁light - ▁noise - ▁street - ▁manage - ▁piece - ▁sitting - gue - ▁sake - ▁party - ish - ▁young - ▁case - ▁cool - huh - ▁marwa - ▁drive - ▁pray - clock - ▁couscous - ▁spent - ▁felt - ▁hopefully - ▁everybody - ▁living - ▁pain - line - ▁between - ▁match - ▁prayer - que - ian - ▁facebook - ▁spi - ▁eye - ▁children - ▁tonight - ▁mohamed - ▁understood - ▁black - ▁husband - ▁rid - ▁kitchen - ▁face - ▁swim - ▁kid - ▁invite - ▁cup - ▁grilled - ▁wife - ▁cousin - ▁drop - ▁wow - ▁table - ▁du - ▁bored - ▁neighborhood - ▁agree - ▁bread - ▁hamma - ▁straight - ▁tuesday - ▁anyone - ▁lunch - ade - ▁himself - ▁gather - ▁wish - ▁fifteen - ▁wednesday - ▁die - ▁thursday - ▁color - ▁asleep - ▁different - ▁whether - ▁ago - ▁middle - ▁class - ▁cake - shirt - ▁fight - ▁clear - ▁test - ▁plus - ▁sousse - ▁beginning - ▁result - ▁learn - ▁crowded - ▁slept - ▁shoes - ▁august - ▁pretty - ▁white - ▁apparently - ▁reach - ▁mariem - ▁return - ▁road - ▁million - ▁stand - ▁paid - ▁word - ious - ▁few - ▁breakfast - ▁post - ▁kilo - ▁chicken - ▁grade - ▁read - ▁accept - ▁birthday - ▁exhaust - ▁point - ▁july - ▁patience - ▁studies - ▁trouble - ▁along - ▁worry - ▁follow - ▁hurt - ▁afraid - ▁trip - ▁ahmed - ▁remain - ▁succeed - ▁mercy - ▁difficult - ▁weekend - ▁answer - ▁cheap - ▁repeat - ▁auntie - ▁sign - ▁hold - ▁under - ▁olive - ▁mahdi - ▁sfax - ▁annoy - ▁dishes - ▁message - ▁business - ▁french - ▁serious - ▁travel - ▁office - ▁wonder - ▁student - ▁internship - ▁pepper - ▁knew - ▁kill - ▁sauce - ▁herself - ▁hammamet - ▁damn - ▁mix - ▁suit - ▁medicine - 
▁remove - ▁gonna - ▁company - ▁quarter - ▁shopping - ▁correct - ▁throw - ▁grow - ▁voice - ▁series - gotten - ▁taste - ▁driving - ▁hospital - ▁sorry - ▁aziz - ▁milk - ▁green - ▁baccalaureate - ▁running - ▁lord - ▁explain - ▁angry - ▁build - ▁fruit - ▁photo - é - ▁crying - ▁baby - ▁store - ▁project - ▁france - ▁twelve - ▁decide - ▁swimming - ▁world - ▁preparing - ▁special - ▁session - ▁behind - ▁vegetable - ▁strong - ▁fatma - ▁treat - ▁cream - ▁situation - ▁settle - ▁totally - ▁stopped - ▁book - ▁honest - ▁solution - ▁vacation - ▁cheese - ▁ahead - ▁sami - ▁focus - ▁scared - ▁club - ▁consider - ▁final - ▁naturally - ▁barely - ▁issue - ▁floor - ▁birth - ▁almighty - ▁engagement - ▁blue - ▁empty - ▁soccer - ▁prophet - ▁ticket - ▁indeed - ▁write - ▁present - ▁patient - ▁available - ▁holiday - ▁leaving - ▁became - ▁reason - ▁apart - ▁impossible - ▁shame - ▁worried - ▁body - ▁continue - ▁program - ▁stress - ▁arabic - ▁round - ▁taxi - ▁transport - ▁third - ▁certain - ▁downstairs - ▁neighbor - ▁directly - ▁giving - ▁june - ▁mini - ▁upstairs - ▁mistake - ▁period - ▁catch - ▁buddy - ▁success - ▁tajine - ▁excuse - ▁organize - ▁question - ▁suffer - ▁remind - ▁university - ▁downtown - ▁sugar - ▁twice - ▁women - ▁couple - ▁everyday - ▁condition - ▁obvious - ▁nobody - ▁complete - ▁stomach - ▁account - ▁september - ▁choose - ▁bottle - ▁figure - ▁instead - ▁salary - '0' - '1' - '3' - '2' - '5' - '7' - '4' - '9' - '8' - / - ° - '6' - è - $ - ï - <sos/eos> src_token_list: - <blank> - <unk> - ّ - ي - ا - ِ - ل - َ - و - ه - ة - م - ر - ك - ▁ما - ُ - ب - ش - د - ت - ▁في - َّ - ▁ن - ▁ي - ▁ت - ن - ▁لا - ح - ▁ه - س - وا - ▁م - ف - ▁إي - ع - ▁ب - ها - ط - ى - ق - ▁الل - ▁أ - ج - ▁والل - ▁و - ▁إيه - ▁ا - ▁يا - ز - ▁تو - ▁بش - ص - ▁أه - خ - ات - ▁إنت - ▁أنا - نا - ▁شن - ▁ق - ▁ش - ▁ك - يت - ين - ▁ف - ار - ▁قال - ▁باهي - ▁ع - ▁من - ▁ل - ▁مش - ▁كان - ▁حت - ▁ول - هم - ▁ر - ان - ▁س - ض - ني - ▁بال - ▁على - ▁متاع - ▁كي - ▁ال - ▁ح - ▁كل - ▁آنا - ▁الم - ▁خ - ▁الس - ▁وال - ون - ور - ▁أم - ▁هك - ▁آش - ▁الد - ▁عاد - ▁ج - ▁معناها - ▁مع - اش - ▁الص - ▁نهار - ▁لل - لها - ▁تي - ▁رب - ▁خاطر - ▁أكهو - غ - ▁شي - الل - ام - تها - ▁ون - ▁آك - ▁فهمت - وم - ▁موش - مشي - ▁ص - ▁اليوم - ▁مر - ست - ▁الب - ▁لاباس - تلي - ▁الكل - ▁عال - ذ - ▁فم - ▁الك - ▁حاجة - ▁شوي - اكا - ▁ياخي - ▁هاني - ▁صح - اس - ▁آه - ▁برشة - ▁الن - ▁وت - ▁الج - لك - ▁راهو - سم - ▁الح - مت - ▁الت - ▁بعد - اج - عد - ▁انشا - وش - لت - ▁وين - ث - ▁ولا - ▁باش - ▁فيها - نت - ▁إ - ▁الأ - ▁الف - ▁إم - ▁واحد - ▁ألو - ▁عندي - ▁أك - ▁خل - ▁وي - ▁تعمل - أ - ▁ريت - ▁وأ - ▁تعرف - بت - ▁الع - ▁مشيت - ▁وه - ▁حاصيلو - ▁بالل - ▁نعمل - ▁غ - ▁تجي - ▁يجي - ▁كيفاش - ▁عملت - ظ - اك - ▁هاو - ▁اش - ▁قد - ▁نق - ▁د - ▁زادا - ▁فيه - رة - ▁بر - ▁الش - ▁ز - ▁كيما - ▁الا - ند - عم - ▁نح - ▁بنتي - ▁نمشي - ▁عليك - ▁نعرفش - ▁كهو - ▁وم - ▁ط - تي - ▁خير - ▁آ - مش - ▁عليه - له - حت - ▁إيا - ▁أحنا - ▁تع - الا - عب - ▁ديما - ▁تت - ▁جو - ▁مالا - ▁أو - ▁قلتلك - ▁معنتها - لنا - ▁شكون - ▁تحب - بر - ▁الر - ▁وا - ▁الق - اء - ▁عل - ▁البارح - ▁وخ - ▁سافا - ▁هوما - ▁ولدي - ▁ - ▁نعرف - يف - رت - ▁وب - ▁روح - ▁علاش - ▁هاذاك - ▁رو - وس - ▁جا - ▁كيف - طر - ▁غادي - يكا - عمل - ▁نحب - ▁عندك - ▁وما - ▁فر - اني - ▁قلتله - ▁الط - فر - ▁دار - ▁عليها - ▁يعمل - ▁نت - ▁تح - باح - ▁ماهو - ▁وكل - ▁وع - قت - ▁فهمتك - عر - ▁وس - ▁تر - ▁سي - يلة - ▁قلت - ▁رمضان - صل - ▁آما - ▁الواحد - ▁بيه - ▁ثلاثة - ▁فهمتني - ▁ها - بط - ▁مازال - قل - ▁بالك - ▁معناتها - ▁ور - ▁قلتلها - ▁يس - رب - ▁ام - ▁وبعد - ▁الث - ▁وإنت - ▁بحذا - ▁لازم - ْ - ▁بن - قرا - سك - ▁يت - خل - ▁فه - عت - ▁هاك - ▁تق - ▁قبل - ▁وك - ▁نقول - ▁الز - حم - ▁عادش - حكي - وها - بة - نس 
- طل - ▁علاه - ذا - ▁سا - ▁طل - الي - ▁يق - ▁دو - حوا - حد - ▁نشوف - نة - ▁لي - ▁تك - ▁نا - ▁هاذ - ▁خويا - ▁المر - ▁وينك - ▁البر - ▁أتو - ينا - ▁حل - ولي - ▁ثم - ▁عم - ▁آي - ▁قر - از - ▁وح - كش - بعة - ▁كيفاه - ▁نع - ▁الحمدلله - ▁ياسر - ▁الخ - ▁معاك - ▁معاه - ▁تقول - دة - ▁حكاية - تش - ▁حس - ▁غدوا - ▁بالحق - روا - وز - ▁تخ - ▁العيد - رجع - ▁بالي - ▁جات - ▁وج - حة - ▁وش - ▁آخر - ▁طا - ▁مت - لقا - تك - ▁مس - ▁راني - كون - ▁صاحب - ▁هاكا - ▁قول - ▁عر - ▁عنده - ▁يلزم - ▁هاذا - ▁يخ - ▁وقتاش - ▁وقت - بع - ▁العش - ▁هاذي - هاش - ينة - ▁هاذاكا - عطي - ▁تنج - ▁باهية - نيا - فت - ▁يحب - ▁تف - ▁أهلا - وف - ▁غدوة - ▁بيك - ▁بد - عن - ▁در - ▁ننج - هار - ▁الحكاية - مون - وق - ▁نورمال - ▁عندها - خر - ▁بو - ▁حب - ▁آكا - ▁وف - ▁هاذيكا - ▁ديجا - ▁وق - ▁طي - لتل - بعث - ▁تص - رك - ▁مانيش - ▁العادة - ▁شوف - ضر - ▁يمشي - ▁نعملوا - ▁عرفت - ▁زال - ▁متع - ▁عمل - ▁بيها - ▁نحكي - اع - ▁نج - معة - ▁والكل - عناها - ▁يعي - ▁نجي - ستن - ▁هاذيك - ▁عام - ▁فلوس - قة - تين - ▁بالقدا - لهم - ▁تخدم - ▁ٱ - ▁شيء - ▁راهي - ▁جاب - ولاد - ابل - ▁ماك - عة - ▁نمشيوا - وني - شري - بار - انس - ▁وقتها - ▁جديد - ▁يز - ▁كر - ▁حاسيلو - ▁شق - ▁اه - ▁سايي - ▁انشالل - رج - مني - ▁بلا - ▁صحيح - ▁غير - ▁يخدم - مان - وكا - ▁عند - ▁قاعدة - ▁تس - ربة - ▁راس - ▁حط - ▁نكل - تني - ▁الو - سيون - ▁عندنا - ▁لو - ▁ست - صف - ▁ض - ▁كامل - ▁نخدم - ▁يبدا - ▁دونك - ▁أمور - رات - ▁تونس - بدا - ▁تحكي - ▁سو - ▁جاي - ▁وحدة - ▁ساعة - حنا - ▁بكري - ▁إل - ▁وبر - ▁كم - ▁تبدا - ارة - ادي - رق - لوا - ▁يمكن - ▁خاط - ▁وص - جين - ▁هاذاي - ▁هز - قد - ▁قل - ▁وكهو - ▁نص - ▁دي - لقى - ▁وأنا - سين - ▁يح - ▁ماشي - ▁شو - ▁خذيت - امات - ▁كنت - خرج - ▁لقيت - رتاح - كس - ▁حاجات - ▁مريق - ▁مل - ليفون - اوا - ▁شفت - ▁عاملة - ▁تن - ▁والا - سأل - ▁حد - ▁قاللك - ▁العباد - ▁عالاخ - ▁وآك - ▁ماني - ▁ناخذ - ▁حم - ▁الإ - ▁ماضي - ▁ث - الة - ▁أخرى - رين - ▁تشوف - ▁نخرج - ▁أربعة - ▁ألف - نيش - ▁هاي - آ - ▁فيك - رشة - ولة - فلة - ▁بابا - ▁أما - ▁روحي - ▁فيهم - ▁رج - ▁ليك - ونس - يرة - ▁وأكهو - ندي - ▁صار - شك - ▁نرو - ▁آكهو - ▁تش - ▁غاديكا - ▁معاها - ▁لب - ▁أذاكا - ▁آني - ▁يوم - عملوا - ▁نقعد - دوا - ▁عد - سمع - متني - ▁الخدمة - ▁مازلت - ▁قعدت - ايا - ▁برك - قعد - ▁خرجت - ضح - ▁قالل - ▁يقول - ▁وفي - ▁حق - ختي - ▁يعني - خدم - ▁جيت - ▁نرمال - طف - ▁عجب - ▁تقعد - ▁مشينا - اية - ▁خدمة - لدي - روف - ▁الفطر - ▁مشكل - ▁سل - ▁وآنا - الط - ▁بالس - ▁هانا - ▁أوه - ▁أذيكا - ▁وإ - ▁عليهم - ▁حالة - جت - قضي - ▁لق - ▁ونصف - سعة - عطيه - عاو - خانة - ▁مخ - ▁شبيك - بيعة - ▁أهوك - يني - ▁تعد - ▁خال - ▁قريب - ▁راك - ▁قالت - ▁لتو - ▁أكثر - اعة - ▁يظهرلي - ▁ماشية - سمعني - ▁نسيت - ▁ينج - ▁الحمدلل - هدي - ▁وشن - ▁تطي - ▁هنا - ▁نسمع - ▁إنتوما - ▁نحكيلك - ▁قاعد - ▁اسمعني - خرين - إ - ماعة - ▁بالر - ▁دا - ▁عمر - ▁نشري - ▁قهوة - ▁تبارك - ▁صب - ▁مشات - غر - ▁شريت - ▁عامل - ▁زوج - ثنين - ▁برب - ريق - ▁نكم - ▁لم - بيب - ▁مياة - ▁مالل - ▁قعد - ▁سخون - قس - ▁وحده - ▁اسمع - ▁خمسة - ▁غالي - ▁الأو - رلي - ▁العظيم - ▁ترو - تهم - كري - ▁نجيب - ▁جملة - قول - ▁قلتلي - ▁إيجا - ▁يقعد - ▁إيام - ▁يعطيك - ▁نخل - ▁دب - يمة - رهبة - ▁نهز - ▁محم - ▁بين - غار - ▁نحنا - ▁بون - ▁الغ - ▁شهر - ▁بار - رقة - ▁نطي - ئ - ترو - ▁ملا - ▁الكرهبة - ▁باه - ▁عالإخ - ▁عباد - ▁بلاصة - ▁مشى - بيع - ▁نفس - ▁عملنا - ▁واح - ▁أحلاه - ▁بحذاك - ▁لأ - ▁دخ - باب - ▁ودر - ▁غالب - ▁ناكل - ▁مثلا - ء - ▁راقد - ▁تفر - ▁الوقت - ▁تاخذ - حذا - نتر - ▁نبدا - ▁حال - ▁مريم - الم - ▁جمعة - رجول - ▁معايا - ▁تخرج - ▁باس - ▁ساعات - ▁عندهم - ▁نتفر - مسة - ▁الجمعة - بعين - ▁أكاهو - ▁ميش - مراة - ▁خذا - ▁ظ - ▁سيدي - ▁معاي - ▁شبيه - ▁حكا - ▁سف - ▁بعضنا - ▁بالض - ▁ليلة - ▁زعما - ▁الحق - مضان - ▁صعيب - ▁قالتلك - ً - ملة - ▁بق - عرف - لاطة - ▁خرج - ▁أخت - ▁تقوللي - ▁معانا - ▁صغير - ▁إسمه 
- ▁بعض - ▁العام - ▁علينا - ▁يتع - ▁فاش - ▁شع - ▁معاهم - ▁يسالش - ▁لهنا - ▁سمعت - ▁البار - ▁نتصو - ▁الاخ - ▁وكان - وبة - دمة - ▁كون - ▁مبعد - ▁تسمع - ▁بعيد - ▁تاكل - ▁نلقا - لامة - لاثة - ▁ذ - ▁تحس - ▁الواح - ▁لدار - ▁فاتت - ▁تاو - ▁أحوالك - ▁عاملين - ▁كبيرة - عجب - ▁بنت - ▁بيدي - ▁حكيت - ▁تحط - ▁مسكينة - ▁هاذوكم - ▁نزيد - لاث - ▁عشرة - ▁عيني - ▁تعب - ▁ياكل - ▁وزيد - ▁طول - ▁حمدلله - ▁وقتاه - ▁معناه - ▁وآش - ▁ووه - ▁وواحد - ▁نشوفوا - ▁عيد - ▁بصراحة - ▁بحذانا - ▁قاعدين - ▁راجل - ▁وحدي - ▁وعشرين - ▁لين - ▁خايب - ▁قالتله - ▁تهز - عيد - ▁كبير - ▁يعرف - ▁عارف - ▁الفلوس - ▁زايد - ▁خدمت - ▁هاذوما - ▁سلاطة - ▁فارغة - ▁ساعتين - ▁تبد - ▁راو - ▁مائة - ▁بعضهم - ▁ظاهرلي - ▁الفازة - كتب - ▁القهوة - سبوك - ▁زاد - ▁ضرب - حكيلي - ▁فوق - ▁عاود - ▁راي - ▁ومبعد - ▁حوايج - ▁دخلت - ▁يقوللك - ▁زيد - ▁زلت - لفزة - ▁وقال - ▁يهب - ▁يلزمني - ▁الحمد - ▁أذي - طبيعت - ▁دورة - ▁عالأقل - ▁آذاك - ▁وبال - ▁الجاي - عطيني - ▁ياخذ - ▁احكيلي - ▁نهبط - ▁رقدت - بلاصة - ▁عزيز - ▁صغار - ▁أقسم - ▁جيب - ▁وصلت - ▁أحوال - ▁جيست - ▁جماعة - سئل - ▁خوذ - ▁يهز - ▁الأخرى - ▁آلاف - ▁إسمع - ▁الحقيقة - ▁ناقص - ▁حاط - ▁موجود - عباد - ▁آذيك - ▁خارج - ▁الخير - ▁البنات - بقى - ▁طرف - ▁سينون - ▁ماذاب - ▁البحر - ▁نرقد - مدلله - ▁إيجى - ▁خالتي - ▁فازة - ▁بريك - ▁شريبتك - ▁تطلع - ؤ - ▁المشكلة - ▁طري - ▁مادام - ▁طلبت - ▁يلعب - ▁نعاود - ▁وحدك - ▁ظاهر - ٱ - ژ - ٍ - <sos/eos> init: null input_size: null ctc_conf: dropout_rate: 0.0 ctc_type: builtin reduce: true ignore_nan_grad: true model_conf: asr_weight: 0.3 mt_weight: 0.0 mtlalpha: 1.0 lsm_weight: 0.1 length_normalized_loss: false use_preprocessor: true token_type: bpe src_token_type: bpe bpemodel: data/token_list/tgt_bpe_unigram1000/bpe.model src_bpemodel: data/token_list/src_bpe_unigram1000/bpe.model non_linguistic_symbols: null cleaner: null g2p: null speech_volume_normalize: null rir_scp: null rir_apply_prob: 1.0 noise_scp: null noise_apply_prob: 1.0 noise_db_range: '13_15' frontend: default frontend_conf: n_fft: 512 hop_length: 256 fs: 16k specaug: specaug specaug_conf: apply_time_warp: true time_warp_window: 5 time_warp_mode: bicubic apply_freq_mask: true freq_mask_width_range: - 0 - 27 num_freq_mask: 2 apply_time_mask: true time_mask_width_ratio_range: - 0.0 - 0.05 num_time_mask: 5 normalize: global_mvn normalize_conf: stats_file: exp/st_stats_raw_bpe1000_sp/train/feats_stats.npz preencoder: null preencoder_conf: {} encoder: conformer encoder_conf: output_size: 256 attention_heads: 4 linear_units: 1024 num_blocks: 12 dropout_rate: 0.1 positional_dropout_rate: 0.1 attention_dropout_rate: 0.1 input_layer: conv2d normalize_before: true macaron_style: true rel_pos_type: latest pos_enc_layer_type: rel_pos selfattention_layer_type: rel_selfattn activation_type: swish use_cnn_module: true cnn_module_kernel: 31 postencoder: null postencoder_conf: {} decoder: transformer decoder_conf: attention_heads: 4 linear_units: 2048 num_blocks: 6 dropout_rate: 0.1 positional_dropout_rate: 0.1 self_attention_dropout_rate: 0.1 src_attention_dropout_rate: 0.1 extra_asr_decoder: transformer extra_asr_decoder_conf: input_layer: embed num_blocks: 2 linear_units: 2048 dropout_rate: 0.1 extra_mt_decoder: transformer extra_mt_decoder_conf: input_layer: embed num_blocks: 2 linear_units: 2048 dropout_rate: 0.1 required: - output_dir - src_token_list - token_list version: 0.10.6a1 distributed: false ``` </details> ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and 
Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
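For quick offline use of this speech-translation model, a minimal sketch along the following lines should work, assuming `espnet_model_zoo` and `soundfile` are installed; the exact `Speech2Text` keyword set in `espnet2.bin.st_inference` can vary across ESPnet versions, so treat this as a template rather than the official demo:

```python
# Hedged sketch: translate one Tunisian Arabic utterance to English with
# this ST model. Exact keyword arguments may differ by ESPnet version.
import soundfile as sf
from espnet_model_zoo.downloader import ModelDownloader
from espnet2.bin.st_inference import Speech2Text

d = ModelDownloader()
speech2text = Speech2Text(
    **d.download_and_unpack(
        "espnet/brianyan918_iwslt22_dialect_train_st_conformer_ctc0.3_lr2e-3_warmup15k_newspecaug"
    )
)

speech, rate = sf.read("utt1.wav")  # 16 kHz mono, matching frontend fs: 16k
text, tokens, token_ints, hyp = speech2text(speech)[0]
print(text)  # English translation hypothesis
```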
{"language": "noinfo", "license": "cc-by-4.0", "tags": ["espnet", "audio", "speech-translation"], "datasets": ["iwslt22_dialect"]}
espnet/brianyan918_iwslt22_dialect_train_st_conformer_ctc0.3_lr2e-3_warmup15k_newspecaug
null
[ "espnet", "audio", "speech-translation", "dataset:iwslt22_dialect", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
automatic-speech-recognition
espnet
## ESPnet2 ASR model ### `espnet/brianyan918_iwslt22_dialect_transformer_fisherlike` This model was trained by Brian Yan using iwslt22_dialect recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```bash cd espnet git checkout 77fce65312877a132bbae01917ad26b74f6e2e14 pip install -e . cd egs2/iwslt22_dialect/asr1 ./run.sh --skip_data_prep false --skip_train true --download_model espnet/brianyan918_iwslt22_dialect_transformer_fisherlike ``` <!-- Generated by scripts/utils/show_asr_result.sh --> # RESULTS ## Environments - date: `Mon Jan 31 10:15:38 EST 2022` - python version: `3.8.12 (default, Oct 12 2021, 13:49:34) [GCC 7.5.0]` - espnet version: `espnet 0.10.6a1` - pytorch version: `pytorch 1.8.1` - Git hash: `99581e0f5af3ad68851d556645e7292771436df9` - Commit date: `Sat Jan 29 11:32:38 2022 -0500` ## asr_transformer_fisherlike_4gpu_bbins16m_fix_raw_bpe1000_sp ### WER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_asr_model_valid.acc.ave/test1|4204|27370|53.4|41.1|5.5|9.5|56.1|88.2| ### CER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_asr_model_valid.acc.ave/test1|4204|145852|83.8|7.5|8.7|12.2|28.4|88.2| ### TER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_asr_model_valid.acc.ave/test1|4204|64424|62.9|23.9|13.3|13.4|50.5|88.2| ## ASR config <details><summary>expand</summary> ``` config: conf/tuning/transformer_fisherlike_4gpu_bbins16m_fix.yaml print_config: false log_level: INFO dry_run: false iterator_type: sequence output_dir: exp/asr_transformer_fisherlike_4gpu_bbins16m_fix_raw_bpe1000_sp ngpu: 1 seed: 0 num_workers: 1 num_att_plot: 3 dist_backend: nccl dist_init_method: env:// dist_world_size: 4 dist_rank: 0 local_rank: 0 dist_master_addr: localhost dist_master_port: 60761 dist_launcher: null multiprocessing_distributed: true unused_parameters: false sharded_ddp: false cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true collect_stats: false write_collected_feats: false max_epoch: 100 patience: null val_scheduler_criterion: - valid - loss early_stopping_criterion: - valid - loss - min best_model_criterion: - - valid - acc - max keep_nbest_models: 10 nbest_averaging_interval: 0 grad_clip: 3 grad_clip_type: 2.0 grad_noise: false accum_grad: 2 no_forward_run: false resume: true train_dtype: float32 use_amp: false log_interval: null use_matplotlib: true use_tensorboard: true use_wandb: false wandb_project: null wandb_id: null wandb_entity: null wandb_name: null wandb_model_log_interval: -1 detect_anomaly: false pretrain_path: null init_param: [] ignore_init_mismatch: false freeze_param: [] num_iters_per_epoch: null batch_size: 20 valid_batch_size: null batch_bins: 16000000 valid_batch_bins: null train_shape_file: - exp/asr_stats_raw_bpe1000_sp/train/speech_shape - exp/asr_stats_raw_bpe1000_sp/train/text_shape.bpe valid_shape_file: - exp/asr_stats_raw_bpe1000_sp/valid/speech_shape - exp/asr_stats_raw_bpe1000_sp/valid/text_shape.bpe batch_type: numel valid_batch_type: null fold_length: - 80000 - 150 sort_in_batch: descending sort_batch: descending multiple_iterator: false chunk_length: 500 chunk_shift_ratio: 0.5 num_cache_chunks: 1024 train_data_path_and_name_and_type: - - /scratch/iwslt22asrdump/raw/train_sp/wav.scp - speech - kaldi_ark - - /scratch/iwslt22asrdump/raw/train_sp/text - text - text valid_data_path_and_name_and_type: - - /scratch/iwslt22asrdump/raw/dev/wav.scp - speech - kaldi_ark - - 
/scratch/iwslt22asrdump/raw/dev/text - text - text allow_variable_data_keys: false max_cache_size: 0.0 max_cache_fd: 32 valid_max_cache_size: null optim: adam optim_conf: lr: 5.0 scheduler: noamlr scheduler_conf: model_size: 256 warmup_steps: 25000 token_list: - <blank> - <unk> - ّ - ي - ا - ِ - ل - َ - و - ه - ة - م - ر - ك - ▁ما - ُ - ب - ش - د - ت - ▁في - َّ - ▁ن - ▁ي - ▁ت - ن - ▁لا - ح - ▁ه - س - وا - ▁م - ف - ▁إي - ع - ▁ب - ها - ط - ى - ق - ▁الل - ▁أ - ج - ▁والل - ▁و - ▁إيه - ▁ا - ▁يا - ز - ▁تو - ▁بش - ص - ▁أه - خ - ات - ▁إنت - ▁أنا - نا - ▁شن - ▁ق - ▁ش - ▁ك - يت - ين - ▁ف - ار - ▁قال - ▁باهي - ▁ع - ▁من - ▁ل - ▁مش - ▁كان - ▁حت - ▁ول - هم - ▁ر - ان - ▁س - ض - ني - ▁بال - ▁على - ▁متاع - ▁كي - ▁ال - ▁ح - ▁كل - ▁آنا - ▁الم - ▁خ - ▁الس - ▁وال - ون - ور - ▁أم - ▁هك - ▁آش - ▁الد - ▁عاد - ▁ج - ▁معناها - ▁مع - اش - ▁الص - ▁نهار - ▁لل - لها - ▁تي - ▁رب - ▁خاطر - ▁أكهو - غ - ▁شي - الل - ام - تها - ▁ون - ▁آك - ▁فهمت - وم - ▁موش - مشي - ▁ص - ▁اليوم - ▁مر - ست - ▁الب - ▁لاباس - تلي - ▁الكل - ▁عال - ذ - ▁فم - ▁الك - ▁حاجة - ▁شوي - اكا - ▁ياخي - ▁هاني - ▁صح - اس - ▁آه - ▁برشة - ▁الن - ▁وت - ▁الج - لك - ▁راهو - سم - ▁الح - مت - ▁الت - ▁بعد - اج - عد - ▁انشا - وش - لت - ▁وين - ث - ▁ولا - ▁باش - ▁فيها - نت - ▁إ - ▁الأ - ▁الف - ▁إم - ▁واحد - ▁ألو - ▁عندي - ▁أك - ▁خل - ▁وي - ▁تعمل - أ - ▁ريت - ▁وأ - ▁تعرف - بت - ▁الع - ▁مشيت - ▁وه - ▁حاصيلو - ▁بالل - ▁نعمل - ▁غ - ▁تجي - ▁يجي - ▁كيفاش - ▁عملت - ظ - اك - ▁هاو - ▁اش - ▁قد - ▁نق - ▁د - ▁زادا - ▁فيه - رة - ▁بر - ▁الش - ▁ز - ▁كيما - ▁الا - ند - عم - ▁نح - ▁بنتي - ▁نمشي - ▁عليك - ▁نعرفش - ▁كهو - ▁وم - ▁ط - تي - ▁خير - ▁آ - مش - ▁عليه - له - حت - ▁إيا - ▁أحنا - ▁تع - الا - عب - ▁ديما - ▁تت - ▁جو - ▁مالا - ▁أو - ▁قلتلك - ▁معنتها - لنا - ▁شكون - ▁تحب - بر - ▁الر - ▁وا - ▁الق - اء - ▁عل - ▁البارح - ▁وخ - ▁سافا - ▁هوما - ▁ولدي - ▁ - ▁نعرف - يف - رت - ▁وب - ▁روح - ▁علاش - ▁هاذاك - ▁رو - وس - ▁جا - ▁كيف - طر - ▁غادي - يكا - عمل - ▁نحب - ▁عندك - ▁وما - ▁فر - اني - ▁قلتله - ▁الط - فر - ▁دار - ▁عليها - ▁يعمل - ▁نت - ▁تح - باح - ▁ماهو - ▁وكل - ▁وع - قت - ▁فهمتك - عر - ▁وس - ▁تر - ▁سي - يلة - ▁قلت - ▁رمضان - صل - ▁آما - ▁الواحد - ▁بيه - ▁ثلاثة - ▁فهمتني - ▁ها - بط - ▁مازال - قل - ▁بالك - ▁معناتها - ▁ور - ▁قلتلها - ▁يس - رب - ▁ام - ▁وبعد - ▁الث - ▁وإنت - ▁بحذا - ▁لازم - ْ - ▁بن - قرا - سك - ▁يت - خل - ▁فه - عت - ▁هاك - ▁تق - ▁قبل - ▁وك - ▁نقول - ▁الز - حم - ▁عادش - حكي - وها - بة - نس - طل - ▁علاه - ذا - ▁سا - ▁طل - الي - ▁يق - ▁دو - حوا - حد - ▁نشوف - نة - ▁لي - ▁تك - ▁نا - ▁هاذ - ▁خويا - ▁المر - ▁وينك - ▁البر - ▁أتو - ينا - ▁حل - ولي - ▁ثم - ▁عم - ▁آي - ▁قر - از - ▁وح - كش - بعة - ▁كيفاه - ▁نع - ▁الحمدلله - ▁ياسر - ▁الخ - ▁معاك - ▁معاه - ▁تقول - دة - ▁حكاية - تش - ▁حس - ▁غدوا - ▁بالحق - روا - وز - ▁تخ - ▁العيد - رجع - ▁بالي - ▁جات - ▁وج - حة - ▁وش - ▁آخر - ▁طا - ▁مت - لقا - تك - ▁مس - ▁راني - كون - ▁صاحب - ▁هاكا - ▁قول - ▁عر - ▁عنده - ▁يلزم - ▁هاذا - ▁يخ - ▁وقتاش - ▁وقت - بع - ▁العش - ▁هاذي - هاش - ينة - ▁هاذاكا - عطي - ▁تنج - ▁باهية - نيا - فت - ▁يحب - ▁تف - ▁أهلا - وف - ▁غدوة - ▁بيك - ▁بد - عن - ▁در - ▁ننج - هار - ▁الحكاية - مون - وق - ▁نورمال - ▁عندها - خر - ▁بو - ▁حب - ▁آكا - ▁وف - ▁هاذيكا - ▁ديجا - ▁وق - ▁طي - لتل - بعث - ▁تص - رك - ▁مانيش - ▁العادة - ▁شوف - ضر - ▁يمشي - ▁نعملوا - ▁عرفت - ▁زال - ▁متع - ▁عمل - ▁بيها - ▁نحكي - اع - ▁نج - معة - ▁والكل - عناها - ▁يعي - ▁نجي - ستن - ▁هاذيك - ▁عام - ▁فلوس - قة - تين - ▁بالقدا - لهم - ▁تخدم - ▁ٱ - ▁شيء - ▁راهي - ▁جاب - ولاد - ابل - ▁ماك - عة - ▁نمشيوا - وني - شري - بار - انس - ▁وقتها - ▁جديد - ▁يز - ▁كر - ▁حاسيلو - ▁شق - ▁اه - ▁سايي - ▁انشالل - رج - مني - ▁بلا - ▁صحيح - ▁غير - ▁يخدم - مان - وكا - ▁عند - ▁قاعدة - ▁تس - ربة - 
▁راس - ▁حط - ▁نكل - تني - ▁الو - سيون - ▁عندنا - ▁لو - ▁ست - صف - ▁ض - ▁كامل - ▁نخدم - ▁يبدا - ▁دونك - ▁أمور - رات - ▁تونس - بدا - ▁تحكي - ▁سو - ▁جاي - ▁وحدة - ▁ساعة - حنا - ▁بكري - ▁إل - ▁وبر - ▁كم - ▁تبدا - ارة - ادي - رق - لوا - ▁يمكن - ▁خاط - ▁وص - جين - ▁هاذاي - ▁هز - قد - ▁قل - ▁وكهو - ▁نص - ▁دي - لقى - ▁وأنا - سين - ▁يح - ▁ماشي - ▁شو - ▁خذيت - امات - ▁كنت - خرج - ▁لقيت - رتاح - كس - ▁حاجات - ▁مريق - ▁مل - ليفون - اوا - ▁شفت - ▁عاملة - ▁تن - ▁والا - سأل - ▁حد - ▁قاللك - ▁العباد - ▁عالاخ - ▁وآك - ▁ماني - ▁ناخذ - ▁حم - ▁الإ - ▁ماضي - ▁ث - الة - ▁أخرى - رين - ▁تشوف - ▁نخرج - ▁أربعة - ▁ألف - نيش - ▁هاي - آ - ▁فيك - رشة - ولة - فلة - ▁بابا - ▁أما - ▁روحي - ▁فيهم - ▁رج - ▁ليك - ونس - يرة - ▁وأكهو - ندي - ▁صار - شك - ▁نرو - ▁آكهو - ▁تش - ▁غاديكا - ▁معاها - ▁لب - ▁أذاكا - ▁آني - ▁يوم - عملوا - ▁نقعد - دوا - ▁عد - سمع - متني - ▁الخدمة - ▁مازلت - ▁قعدت - ايا - ▁برك - قعد - ▁خرجت - ضح - ▁قالل - ▁يقول - ▁وفي - ▁حق - ختي - ▁يعني - خدم - ▁جيت - ▁نرمال - طف - ▁عجب - ▁تقعد - ▁مشينا - اية - ▁خدمة - لدي - روف - ▁الفطر - ▁مشكل - ▁سل - ▁وآنا - الط - ▁بالس - ▁هانا - ▁أوه - ▁أذيكا - ▁وإ - ▁عليهم - ▁حالة - جت - قضي - ▁لق - ▁ونصف - سعة - عطيه - عاو - خانة - ▁مخ - ▁شبيك - بيعة - ▁أهوك - يني - ▁تعد - ▁خال - ▁قريب - ▁راك - ▁قالت - ▁لتو - ▁أكثر - اعة - ▁يظهرلي - ▁ماشية - سمعني - ▁نسيت - ▁ينج - ▁الحمدلل - هدي - ▁وشن - ▁تطي - ▁هنا - ▁نسمع - ▁إنتوما - ▁نحكيلك - ▁قاعد - ▁اسمعني - خرين - إ - ماعة - ▁بالر - ▁دا - ▁عمر - ▁نشري - ▁قهوة - ▁تبارك - ▁صب - ▁مشات - غر - ▁شريت - ▁عامل - ▁زوج - ثنين - ▁برب - ريق - ▁نكم - ▁لم - بيب - ▁مياة - ▁مالل - ▁قعد - ▁سخون - قس - ▁وحده - ▁اسمع - ▁خمسة - ▁غالي - ▁الأو - رلي - ▁العظيم - ▁ترو - تهم - كري - ▁نجيب - ▁جملة - قول - ▁قلتلي - ▁إيجا - ▁يقعد - ▁إيام - ▁يعطيك - ▁نخل - ▁دب - يمة - رهبة - ▁نهز - ▁محم - ▁بين - غار - ▁نحنا - ▁بون - ▁الغ - ▁شهر - ▁بار - رقة - ▁نطي - ئ - ترو - ▁ملا - ▁الكرهبة - ▁باه - ▁عالإخ - ▁عباد - ▁بلاصة - ▁مشى - بيع - ▁نفس - ▁عملنا - ▁واح - ▁أحلاه - ▁بحذاك - ▁لأ - ▁دخ - باب - ▁ودر - ▁غالب - ▁ناكل - ▁مثلا - ء - ▁راقد - ▁تفر - ▁الوقت - ▁تاخذ - حذا - نتر - ▁نبدا - ▁حال - ▁مريم - الم - ▁جمعة - رجول - ▁معايا - ▁تخرج - ▁باس - ▁ساعات - ▁عندهم - ▁نتفر - مسة - ▁الجمعة - بعين - ▁أكاهو - ▁ميش - مراة - ▁خذا - ▁ظ - ▁سيدي - ▁معاي - ▁شبيه - ▁حكا - ▁سف - ▁بعضنا - ▁بالض - ▁ليلة - ▁زعما - ▁الحق - مضان - ▁صعيب - ▁قالتلك - ً - ملة - ▁بق - عرف - لاطة - ▁خرج - ▁أخت - ▁تقوللي - ▁معانا - ▁صغير - ▁إسمه - ▁بعض - ▁العام - ▁علينا - ▁يتع - ▁فاش - ▁شع - ▁معاهم - ▁يسالش - ▁لهنا - ▁سمعت - ▁البار - ▁نتصو - ▁الاخ - ▁وكان - وبة - دمة - ▁كون - ▁مبعد - ▁تسمع - ▁بعيد - ▁تاكل - ▁نلقا - لامة - لاثة - ▁ذ - ▁تحس - ▁الواح - ▁لدار - ▁فاتت - ▁تاو - ▁أحوالك - ▁عاملين - ▁كبيرة - عجب - ▁بنت - ▁بيدي - ▁حكيت - ▁تحط - ▁مسكينة - ▁هاذوكم - ▁نزيد - لاث - ▁عشرة - ▁عيني - ▁تعب - ▁ياكل - ▁وزيد - ▁طول - ▁حمدلله - ▁وقتاه - ▁معناه - ▁وآش - ▁ووه - ▁وواحد - ▁نشوفوا - ▁عيد - ▁بصراحة - ▁بحذانا - ▁قاعدين - ▁راجل - ▁وحدي - ▁وعشرين - ▁لين - ▁خايب - ▁قالتله - ▁تهز - عيد - ▁كبير - ▁يعرف - ▁عارف - ▁الفلوس - ▁زايد - ▁خدمت - ▁هاذوما - ▁سلاطة - ▁فارغة - ▁ساعتين - ▁تبد - ▁راو - ▁مائة - ▁بعضهم - ▁ظاهرلي - ▁الفازة - كتب - ▁القهوة - سبوك - ▁زاد - ▁ضرب - حكيلي - ▁فوق - ▁عاود - ▁راي - ▁ومبعد - ▁حوايج - ▁دخلت - ▁يقوللك - ▁زيد - ▁زلت - لفزة - ▁وقال - ▁يهب - ▁يلزمني - ▁الحمد - ▁أذي - طبيعت - ▁دورة - ▁عالأقل - ▁آذاك - ▁وبال - ▁الجاي - عطيني - ▁ياخذ - ▁احكيلي - ▁نهبط - ▁رقدت - بلاصة - ▁عزيز - ▁صغار - ▁أقسم - ▁جيب - ▁وصلت - ▁أحوال - ▁جيست - ▁جماعة - سئل - ▁خوذ - ▁يهز - ▁الأخرى - ▁آلاف - ▁إسمع - ▁الحقيقة - ▁ناقص - ▁حاط - ▁موجود - عباد - ▁آذيك - ▁خارج - ▁الخير - ▁البنات - بقى - ▁طرف - ▁سينون - ▁ماذاب - ▁البحر - ▁نرقد - مدلله - ▁إيجى - ▁خالتي - ▁فازة - 
▁بريك - ▁شريبتك - ▁تطلع - ؤ - ▁المشكلة - ▁طري - ▁مادام - ▁طلبت - ▁يلعب - ▁نعاود - ▁وحدك - ▁ظاهر - ٱ - ژ - ٍ - <sos/eos> init: null input_size: null ctc_conf: dropout_rate: 0.0 ctc_type: builtin reduce: true ignore_nan_grad: true joint_net_conf: null model_conf: ctc_weight: 0.3 lsm_weight: 0.1 length_normalized_loss: false use_preprocessor: true token_type: bpe bpemodel: data/token_list/bpe_unigram1000/bpe.model non_linguistic_symbols: null cleaner: null g2p: null speech_volume_normalize: null rir_scp: null rir_apply_prob: 1.0 noise_scp: null noise_apply_prob: 1.0 noise_db_range: '13_15' frontend: default frontend_conf: n_fft: 512 win_length: 400 hop_length: 160 fs: 16k specaug: specaug specaug_conf: apply_time_warp: true time_warp_window: 5 time_warp_mode: bicubic apply_freq_mask: true freq_mask_width_range: - 0 - 30 num_freq_mask: 2 apply_time_mask: true time_mask_width_range: - 0 - 40 num_time_mask: 2 normalize: global_mvn normalize_conf: stats_file: exp/asr_stats_raw_bpe1000_sp/train/feats_stats.npz preencoder: null preencoder_conf: {} encoder: transformer encoder_conf: input_layer: conv2d num_blocks: 12 linear_units: 2048 dropout_rate: 0.1 output_size: 256 attention_heads: 4 postencoder: null postencoder_conf: {} decoder: transformer decoder_conf: input_layer: embed num_blocks: 6 linear_units: 2048 dropout_rate: 0.1 required: - output_dir - token_list version: 0.10.6a1 distributed: true ``` </details> ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
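For decoding outside the recipe, a hedged single-utterance sketch using the `espnet_model_zoo` downloader; the argument names follow `espnet2.bin.asr_inference.Speech2Text` as of ESPnet 0.10, so verify against your installed version:

```python
# Hedged sketch: transcribe one 16 kHz waveform with this ASR model.
import soundfile as sf
from espnet_model_zoo.downloader import ModelDownloader
from espnet2.bin.asr_inference import Speech2Text

d = ModelDownloader()
speech2text = Speech2Text(
    **d.download_and_unpack("espnet/brianyan918_iwslt22_dialect_transformer_fisherlike"),
    ctc_weight=0.3,  # mirrors model_conf above; tune for your data
)

speech, rate = sf.read("utt1.wav")
text, tokens, token_ints, hyp = speech2text(speech)[0]
print(text)
```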
{"language": "noinfo", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["iwslt22_dialect"]}
espnet/brianyan918_iwslt22_dialect_transformer_fisherlike
null
[ "espnet", "audio", "automatic-speech-recognition", "dataset:iwslt22_dialect", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
automatic-speech-recognition
espnet
## ESPnet2 ASR pretrained model ### `byan/librispeech_asr_train_asr_conformer_raw_bpe_batch_bins30000000_accum_grad3_optim_conflr0.001_sp` ♻️ Imported from https://huggingface.co/ This model was trained by byan using librispeech/asr1 recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```python # coming soon ``` ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
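Until the promised demo lands, one plausible route is the `from_pretrained` helper available in recent ESPnet releases (older versions need `ModelDownloader` instead); a hedged sketch:

```python
# Hedged sketch: decode a LibriSpeech-style 16 kHz utterance.
# from_pretrained accepts a Hugging Face model tag in recent ESPnet versions.
import soundfile as sf
from espnet2.bin.asr_inference import Speech2Text

speech2text = Speech2Text.from_pretrained(
    "espnet/byan_librispeech_asr_train_asr_conformer_raw_bpe_batch_bins30000000_ac-truncated-68a97b"
)
speech, rate = sf.read("sample_16k.wav")
print(speech2text(speech)[0][0])  # best hypothesis text
```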
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["librispeech"]}
espnet/byan_librispeech_asr_train_asr_conformer_raw_bpe_batch_bins30000000_ac-truncated-68a97b
null
[ "espnet", "audio", "automatic-speech-recognition", "en", "dataset:librispeech", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
audio-to-audio
espnet
# ESPnet2 ENH pretrained model ## `Chenda Li/wsj0_2mix_enh_train_enh_conv_tasnet_raw_valid.si_snr.ave, fs=8k, lang=en` ♻️ Imported from <https://zenodo.org/record/4498562#.YOAOApozZH4>. This model was trained by Chenda Li using wsj0_2mix recipe in [espnet](https://github.com/espnet/espnet/). ### Python API ```text See https://github.com/espnet/espnet_model_zoo ``` ### Evaluate in the recipe ```python # coming soon ``` ### Results ```bash # RESULTS ## Environments - date: `Thu Feb 4 01:16:18 CST 2021` - python version: `3.7.6 (default, Jan 8 2020, 19:59:22) [GCC 7.3.0]` - espnet version: `espnet 0.9.7` - pytorch version: `pytorch 1.5.0` - Git hash: `a3334220b0352931677946d178fade3313cf82bb` - Commit date: `Fri Jan 29 23:35:47 2021 +0800` ## enh_train_enh_conv_tasnet_raw config: ./conf/tuning/train_enh_conv_tasnet.yaml |dataset|STOI|SAR|SDR|SIR| |---|---|---|---|---| |enhanced_cv_min_8k|0.949205|17.3785|16.8028|26.9785| |enhanced_tt_min_8k|0.95349|16.6221|15.9494|25.9032| ``` ### Training config See full config in [`config.yaml`](./exp/enh_train_enh_conv_tasnet_raw/config.yaml) ```yaml config: ./conf/tuning/train_enh_conv_tasnet.yaml print_config: false log_level: INFO dry_run: false iterator_type: chunk output_dir: exp/enh_train_enh_conv_tasnet_raw ngpu: 1 seed: 0 num_workers: 4 num_att_plot: 3 dist_backend: nccl dist_init_method: env:// dist_world_size: null dist_rank: null local_rank: 0 dist_master_addr: null dist_master_port: null dist_launcher: null multiprocessing_distributed: false cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true ```
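While the recipe-evaluation snippet is still pending, separation itself can be scripted via `espnet2.bin.enh_inference.SeparateSpeech`; a hedged sketch, with the call signature as of ESPnet 0.9/0.10 (double-check your version):

```python
# Hedged sketch: separate a two-speaker 8 kHz mixture with this model.
import soundfile as sf
from espnet_model_zoo.downloader import ModelDownloader
from espnet2.bin.enh_inference import SeparateSpeech

d = ModelDownloader()
separate_speech = SeparateSpeech(
    **d.download_and_unpack(
        "Chenda Li/wsj0_2mix_enh_train_enh_conv_tasnet_raw_valid.si_snr.ave"
    )
)

mixture, fs = sf.read("mix.wav")  # 8 kHz mono, matching fs=8k above
waves = separate_speech(mixture[None, :], fs=fs)  # one array per speaker
for i, w in enumerate(waves):
    sf.write(f"spk{i + 1}.wav", w.squeeze(), fs)
```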
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "audio-source-separation", "audio-to-audio"], "datasets": ["wsj0_2mix"], "inference": false}
espnet/chenda-li-wsj0_2mix_enh_train_enh_conv_tasnet_raw_valid.si_snr.ave
null
[ "espnet", "audio", "audio-source-separation", "audio-to-audio", "en", "dataset:wsj0_2mix", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
audio-to-audio
espnet
# ESPnet2 ENH pretrained model ## `Chenda Li/wsj0_2mix_enh_train_enh_rnn_tf_raw_valid.si_snr.ave, fs=8k, lang=en` ♻️ Imported from <https://zenodo.org/record/4498554#.YOAOEpozZH4>. This model was trained by Chenda Li using wsj0_2mix recipe in [espnet](https://github.com/espnet/espnet/). ### Python API ```text See https://github.com/espnet/espnet_model_zoo ``` ### Evaluate in the recipe ```python # coming soon ``` ### Results ```bash # RESULTS ## Environments - date: `Thu Feb 4 01:08:19 CST 2021` - python version: `3.7.6 (default, Jan 8 2020, 19:59:22) [GCC 7.3.0]` - espnet version: `espnet 0.9.7` - pytorch version: `pytorch 1.5.0` - Git hash: `a3334220b0352931677946d178fade3313cf82bb` - Commit date: `Fri Jan 29 23:35:47 2021 +0800` ## enh_train_enh_rnn_tf_raw config: conf/tuning/train_enh_rnn_tf.yaml |dataset|STOI|SAR|SDR|SIR| |---|---|---|---|---| |enhanced_cv_min_8k|0.891065|11.556|10.3982|18.0655| |enhanced_tt_min_8k|0.896373|11.4086|10.2433|18.0496| ``` ### Training config See full config in [`config.yaml`](./exp/enh_train_enh_rnn_tf_raw/config.yaml) ```yaml config: conf/tuning/train_enh_rnn_tf.yaml print_config: false log_level: INFO dry_run: false iterator_type: sequence output_dir: exp/enh_train_enh_rnn_tf_raw ngpu: 1 seed: 0 num_workers: 4 num_att_plot: 3 dist_backend: nccl dist_init_method: env:// dist_world_size: null dist_rank: null local_rank: 0 dist_master_addr: null dist_master_port: null dist_launcher: null multiprocessing_distributed: false cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true ```
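The STOI/SAR/SDR/SIR table above comes from BSS-eval-style scoring; for a rough reproduction on a single mixture, a hedged sketch with `mir_eval` (an assumption — the recipe's own scorer may normalize differently):

```python
# Hedged sketch: BSS-eval metrics for one mixture; mir_eval is assumed
# installed and may not match the recipe's scorer exactly.
import numpy as np
import soundfile as sf
import mir_eval

ref1, fs = sf.read("s1.wav")   # ground-truth sources
ref2, _ = sf.read("s2.wav")
est1, _ = sf.read("spk1.wav")  # model outputs
est2, _ = sf.read("spk2.wav")

reference = np.stack([ref1, ref2])  # shape (n_src, n_samples)
estimate = np.stack([est1, est2])
sdr, sir, sar, perm = mir_eval.separation.bss_eval_sources(reference, estimate)
print({"SDR": sdr.mean(), "SIR": sir.mean(), "SAR": sar.mean()})
```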
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "audio-source-separation", "audio-to-audio"], "datasets": ["wsj0_2mix"], "inference": false}
espnet/chenda-li-wsj0_2mix_enh_train_enh_rnn_tf_raw_valid.si_snr.ave
null
[ "espnet", "audio", "audio-source-separation", "audio-to-audio", "en", "dataset:wsj0_2mix", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
automatic-speech-recognition
espnet
## ESPnet2 ASR model ### `espnet/ftshijt_espnet2_asr_puebla_nahuatl_transfer` This model was trained by ftshijt using puebla_nahuatl recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```bash cd espnet pip install -e . cd egs2/puebla_nahuatl/asr1 ./run.sh --skip_data_prep false --skip_train true --download_model espnet/ftshijt_espnet2_asr_puebla_nahuatl_transfer ``` <!-- Generated by scripts/utils/show_asr_result.sh --> # RESULTS ## Environments - date: `Sun Nov 7 18:16:55 EST 2021` - python version: `3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0]` - espnet version: `espnet 0.10.4a1` - pytorch version: `pytorch 1.9.0` - Git hash: `` - Commit date: `` ## asr_train_asr_transformer_hubert_raw_bpe500_sp ### WER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_lm_lm_train_bpe500_valid.loss.ave_asr_model_valid.acc.best/test|10576|90532|77.0|17.0|6.0|3.6|26.6|74.0| ### CER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_lm_lm_train_bpe500_valid.loss.ave_asr_model_valid.acc.best/test|10576|590273|92.2|2.1|5.7|3.0|10.8|74.0| ### TER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_lm_lm_train_bpe500_valid.loss.ave_asr_model_valid.acc.best/test|10576|242435|86.0|7.3|6.8|3.5|17.5|74.0| ## ASR config <details><summary>expand</summary> ``` config: conf/tuning/train_asr_transformer_hubert.yaml print_config: false log_level: INFO dry_run: false iterator_type: sequence output_dir: exp/asr_train_asr_transformer_hubert_raw_bpe500_sp ngpu: 1 seed: 0 num_workers: 1 num_att_plot: 3 dist_backend: nccl dist_init_method: env:// dist_world_size: null dist_rank: null local_rank: 0 dist_master_addr: null dist_master_port: null dist_launcher: null multiprocessing_distributed: false unused_parameters: false sharded_ddp: false cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true collect_stats: false write_collected_feats: false max_epoch: 100 patience: 15 val_scheduler_criterion: - valid - loss early_stopping_criterion: - valid - loss - min best_model_criterion: - - valid - acc - max keep_nbest_models: 10 grad_clip: 5 grad_clip_type: 2.0 grad_noise: false accum_grad: 2 no_forward_run: false resume: true train_dtype: float32 use_amp: false log_interval: null use_tensorboard: true use_wandb: false wandb_project: null wandb_id: null wandb_entity: null wandb_name: null wandb_model_log_interval: -1 detect_anomaly: false pretrain_path: null init_param: [] ignore_init_mismatch: false freeze_param: [] num_iters_per_epoch: null batch_size: 32 valid_batch_size: null batch_bins: 1000000 valid_batch_bins: null train_shape_file: - exp/asr_stats_raw_bpe500_sp/train/speech_shape - exp/asr_stats_raw_bpe500_sp/train/text_shape.bpe valid_shape_file: - exp/asr_stats_raw_bpe500_sp/valid/speech_shape - exp/asr_stats_raw_bpe500_sp/valid/text_shape.bpe batch_type: folded valid_batch_type: null fold_length: - 80000 - 150 sort_in_batch: descending sort_batch: descending multiple_iterator: false chunk_length: 500 chunk_shift_ratio: 0.5 num_cache_chunks: 1024 train_data_path_and_name_and_type: - - /tmp/jiatong-150390.uytFFbyG/raw/train_sp/wav.scp - speech - kaldi_ark - - /tmp/jiatong-150390.uytFFbyG/raw/train_sp/text - text - text valid_data_path_and_name_and_type: - - /tmp/jiatong-150390.uytFFbyG/raw/dev/wav.scp - speech - kaldi_ark - - /tmp/jiatong-150390.uytFFbyG/raw/dev/text - text - text allow_variable_data_keys: false max_cache_size: 0.0 
max_cache_fd: 32 valid_max_cache_size: null optim: adam optim_conf: lr: 1.0 scheduler: noamlr scheduler_conf: warmup_steps: 25000 token_list: - <blank> - <unk> - ':' - N - ▁A - ▁WA - ▁KE - ▁YO - ▁NE - ▁SE - H - MO - WA - '''' - ▁NO - ▁I - ▁N - S - ▁KI - K - ▁ - MAH - KA - TA - L - ▁POS - PA - ▁KA - ▁TA - ▁MO - T - ▁YEHWA - I - MEH - ▁YA - ▁DE - MA - A - ▁TE - TI - TSI - NI - CHI - ▁PERO - KI - LI - TO - WI - ▁PARA - KO - E - ▁O - ▁IKA - TE - O - W - ▁NEH - ▁NOCHI - CH - ▁TI - ▁TIK - LO - ▁SAH - ▁MAH - NA - LA - ▁OMPA - ▁IHKÓ - YA - ▁NI - ▁PORQUE - ▁MA - YO - ▁TEIN - LIA - ▁E - MPA - ▁NIKA - X - YAH - ▁KWALTSI - SA - TSA - ▁MOCHI - ▁NIK - ▁WE - ▁TO - TSÍ - ▁SEMI - ▁KITA - WAK - KWI - MI - ▁MM - ▁XO - ▁SEKI - JÓ - AH - ▁KOMO - R - NE - ▁OK - ▁KWALI - ▁CHI - ▁YEH - ▁NELI - SE - PO - WAH - PI - ME - KWA - ▁PA - ▁ONKAK - KE - ▁YE - ▁T - LTIK - ▁TEHWA - TAH - ▁TIKI - ▁QUE - ▁NIKI - PE - ▁IWKI - XI - TOK - ▁TAMAN - ▁KO - TSO - LE - RA - SI - WÍ - MAN - ▁TIMO - 'NO' - SO - ▁MIAK - U - ▁TEH - ▁KICHI - ▁XA - WE - ▁KOW - KEH - NÍ - LIK - ▁ITECH - TIH - ▁PE - ▁KIPIA - ▁CUANDO - ▁KWALTIA - ▁HASTA - LOWA - ▁ENTÓ - ▁NA - XO - RO - TIA - ▁NIKITA - CHIHCHI - ▁SEPA - ▁MAHYÁ - ▁PAHTI - ▁K - LIAH - ▁SAYOH - MATI - ▁PI - TS - ▁MÁS - XMATI - KAH - ▁XI - M - ▁ESTE - HKO - KOWIT - MIKI - CHO - ▁TAK - Á - ▁KILIAH - CHIO - ▁KIHTOWA - ▁KITE - NEKI - ▁ME - XA - ▁TEL - B - ▁KOWIT - ▁ATA - TIK - ▁EKINTSI - ▁IMA - ▁KWA - ▁OSO - ▁NEHJÓ - ▁ITEYO - Y - SKEH - ▁ISTA - ▁NIKILIA - LIH - ▁TIKWI - ▁PANÉ - KOWA - ▁OX - TEKI - ▁SA - NTE - ▁KIKWI - TSITSI - NOH - AHSI - ▁IXO - WIA - LTSI - ▁KIMA - C - ▁WEHWEI - ▁TEPITSI - ▁IHK - ▁XIWIT - YI - LIS - ▁CA - XMATTOK - SÁ - ▁MOTA - RE - ▁TIKIHTO - ▁MI - ▁X - D - ▁SAN - WIH - ▁WEHKA - KWE - CHA - ▁SI - KTIK - ▁YETOK - ▁MOKA - NEMI - LILIA - ▁¿ - TIW - ▁KIHTOWAH - LTI - Ó - MASÁ - ▁POR - ▁TIKITA - KETSA - ▁IWA - METS - YOH - ▁TAKWA - HKEH - ▁KIKWIH - ▁KIKWA - NIA - ▁ACHI - ▁KIKWAH - ▁KACHI - ▁PO - ▁IGUAL - NAL - ▁PILI - ▁NIMAN - YE - ▁NIKMATI - WIAH - ▁KIPA - ▁M - J - ▁KWI - ▁WI - WAYA - Z - ▁KITEKI - G - ▁' - ▁IHKO - CE - ▁TONI - ▁TSIKITSI - P - DO - TOKEH - NIK - ▁TIKILIAH - ▁KOWTAH - ▁TAI - ▁TATA - TIAH - CA - PIL - CHOWA - ▁KIMATI - ▁TAMA - XKA - XIWIT - TOS - KILIT - ILWI - SKI - YEH - DA - WAYO - ▁TAPA - ▁NIMO - CHIT - ▁NIMITS - ▁KINA - PAHTI - RI - ▁BUENO - ▁ESKI - WAYAH - PANO - KOW - WEYAK - LPAN - LTIA - ▁KITO - CO - ▁TINE - KIH - JO - ▁KATKA - ▁TIKTA - PAHTIA - ▁XIWTSI - ▁CHIKA - ▁KANAH - ▁KOYO - MPI - ▁IXIWYO - IHTIK - ▁KWE - ▁XIW - WILIA - XTIK - ▁VE - ▁TIKMATI - ▁KOKOLIS - LKWI - ▁AHKO - MEKAT - ▁TIKMA - ▁NIMITSILIA - ▁MITS - XTA - ▁CO - ▁KOMA - ▁KOMOHKÓ - F - ▁OKSEKI - ▁TEISÁ - ▁ESO - ▁IKOWYO - ▁ES - TOHTO - XTI - ▁TSI - ▁TIKO - PIHPI - ▁OKSÉ - ▁WEHKAPAN - KALAKI - ▁WEL - ▁MIGUEL - TEKITI - ▁TOKNI - ROWA - ▁MOSKALTIA - Í - XOKO - ▁TIKCHI - ▁EHE - ▁KWO - LPI - HTOK - TSTI - TÍ - ▁TEIHSÁ - KILO - ▁PUES - SKIA - HTIW - LILIAH - ▁IHWA - ▁KOSTIK - ▁TIKIHTOWAH - ▁CHA - ▁COMO - ▁KIMANA - CU - TAMAN - WITS - ▁KOKO - ILPIA - ▁NIMONO - ▁WELI - ▁NIKWI - WTOK - ▁KINEKI - KOKOH - ▁P - LTIAH - XKO - ▁ONKAYA - TAPOWI - MATTOK - ▁MISMO - ▁NIKIHTO - ▁NIKMATTOK - MESKIA - ▁SOH - KWOWIT - XTIA - WELITA - ▁DESPUÉS - ▁IXWA - ZA - TSAPOT - SKAL - ▁SIEMPRE - TINEMI - Ñ - ▁ESKIA - NELOWA - ▁TZINACAPAN - ▁DI - XIWYO - ▁AHA - ▁AHWIA - É - ▁KIKWIAH - MATTOKEH - ▁ACHTO - XTILIA - TAPAL - ▁KIHTO - TEHTE - ▁PORIN - ▁TSOPE - ▁KAHFE - GU - ▁NIMITSTAHTANI - ▁TAHTA - ▁KOWTATI - ISWAT - ▁TIKPIA - ▁KOMEKAT - TIOWIH - ▁TIMONOHNO - ▁TIEMPO - WEHKA - QUI - ▁TIHTI - ▁XOXOKTIK - ▁TAXKAL - EHE - ▁AJÁ - NANAKAT - 
NIWKI - ▁CI - ▁ITSMOL - ▁NIKPIA - TEKPA - ▁BO - ▁TASOHKA - Ú - ¡ - '8' - '9' - '0' - '1' - '2' - ¿ - Ò - '4' - À - '7' - '5' - '3' - ́ - V - ̈ - Ï - '6' - Q - Ì - <sos/eos> init: xavier_uniform input_size: null ctc_conf: dropout_rate: 0.0 ctc_type: builtin reduce: true ignore_nan_grad: true model_conf: ctc_weight: 0.3 lsm_weight: 0.1 length_normalized_loss: false extract_feats_in_collect_stats: false use_preprocessor: true token_type: bpe bpemodel: data/token_list/bpe_unigram500/bpe.model non_linguistic_symbols: null cleaner: null g2p: null speech_volume_normalize: null rir_scp: null rir_apply_prob: 1.0 noise_scp: null noise_apply_prob: 1.0 noise_db_range: '13_15' frontend: s3prl frontend_conf: frontend_conf: upstream: hubert_large_ll60k download_dir: ./hub multilayer_feature: true fs: 16k specaug: specaug specaug_conf: apply_time_warp: true time_warp_window: 5 time_warp_mode: bicubic apply_freq_mask: true freq_mask_width_range: - 0 - 30 num_freq_mask: 2 apply_time_mask: true time_mask_width_range: - 0 - 40 num_time_mask: 2 normalize: utterance_mvn normalize_conf: {} preencoder: linear preencoder_conf: input_size: 1024 output_size: 80 encoder: transformer encoder_conf: input_layer: conv2d num_blocks: 12 linear_units: 2048 dropout_rate: 0.1 output_size: 256 attention_heads: 4 attention_dropout_rate: 0.0 postencoder: null postencoder_conf: {} decoder: transformer decoder_conf: input_layer: embed num_blocks: 6 linear_units: 2048 dropout_rate: 0.1 required: - output_dir - token_list version: 0.10.4a1 distributed: false ``` </details> ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
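The WER/CER/TER tables above come from the recipe's scoring stage; as a quick sanity check on your own decodes, a hedged sketch with `jiwer` (an assumption — ESPnet's sclite-based scoring applies its own text normalization):

```python
# Hedged sketch: rough WER/CER between a reference and a hypothesis.
# The transcripts below are hypothetical, purely for illustration.
import jiwer

reference = "nochi tein kihtowa"
hypothesis = "nochi tein kihtowah"

print("WER:", jiwer.wer(reference, hypothesis))
print("CER:", jiwer.cer(reference, hypothesis))
```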
{"language": "noinfo", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["puebla_nahuatl"]}
espnet/ftshijt_espnet2_asr_puebla_nahuatl_transfer
null
[ "espnet", "audio", "automatic-speech-recognition", "dataset:puebla_nahuatl", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
automatic-speech-recognition
espnet
## ESPnet2 ASR model ### `espnet/ftshijt_espnet2_asr_totonac_transformer` This model was trained by ftshijt using totonac recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```bash cd espnet pip install -e . cd egs2/totonac/asr1 ./run.sh --skip_data_prep false --skip_train true --download_model espnet/ftshijt_espnet2_asr_totonac_transformer ``` <!-- Generated by scripts/utils/show_asr_result.sh --> # RESULTS ## Environments - date: `Sun Nov 7 09:22:09 EST 2021` - python version: `3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0]` - espnet version: `espnet 0.10.4a1` - pytorch version: `pytorch 1.9.0` - Git hash: `` - Commit date: `` ## asr_train_asr_transformer_specaug_raw_bpe250_sp ### WER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_lm_lm_train_bpe250_valid.loss.ave_asr_model_valid.acc.best/dev|530|3547|59.8|32.9|7.3|6.5|46.7|87.4| |decode_asr_lm_lm_train_bpe250_valid.loss.ave_asr_model_valid.acc.best/test|704|5018|55.5|35.7|8.8|6.1|50.6|92.0| ### CER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_lm_lm_train_bpe250_valid.loss.ave_asr_model_valid.acc.best/dev|530|22510|88.1|4.4|7.4|3.9|15.8|87.4| |decode_asr_lm_lm_train_bpe250_valid.loss.ave_asr_model_valid.acc.best/test|704|32990|86.9|4.3|8.8|4.0|17.1|92.0| ### TER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_lm_lm_train_bpe250_valid.loss.ave_asr_model_valid.acc.best/dev|530|9360|70.3|15.8|13.8|4.3|34.0|87.4| |decode_asr_lm_lm_train_bpe250_valid.loss.ave_asr_model_valid.acc.best/test|704|13835|70.5|16.0|13.6|4.4|33.9|92.0| ## ASR config <details><summary>expand</summary> ``` config: conf/tuning/train_asr_transformer_specaug.yaml print_config: false log_level: INFO dry_run: false iterator_type: sequence output_dir: exp/asr_train_asr_transformer_specaug_raw_bpe250_sp ngpu: 1 seed: 0 num_workers: 1 num_att_plot: 3 dist_backend: nccl dist_init_method: env:// dist_world_size: null dist_rank: null local_rank: 0 dist_master_addr: null dist_master_port: null dist_launcher: null multiprocessing_distributed: false unused_parameters: false sharded_ddp: false cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true collect_stats: false write_collected_feats: false max_epoch: 100 patience: 15 val_scheduler_criterion: - valid - loss early_stopping_criterion: - valid - loss - min best_model_criterion: - - valid - acc - max keep_nbest_models: 10 grad_clip: 5 grad_clip_type: 2.0 grad_noise: false accum_grad: 2 no_forward_run: false resume: true train_dtype: float32 use_amp: false log_interval: null use_tensorboard: true use_wandb: false wandb_project: null wandb_id: null wandb_entity: null wandb_name: null wandb_model_log_interval: -1 detect_anomaly: false pretrain_path: null init_param: [] ignore_init_mismatch: false freeze_param: [] num_iters_per_epoch: null batch_size: 32 valid_batch_size: null batch_bins: 1000000 valid_batch_bins: null train_shape_file: - exp/asr_stats_raw_bpe250_sp/train/speech_shape - exp/asr_stats_raw_bpe250_sp/train/text_shape.bpe valid_shape_file: - exp/asr_stats_raw_bpe250_sp/valid/speech_shape - exp/asr_stats_raw_bpe250_sp/valid/text_shape.bpe batch_type: folded valid_batch_type: null fold_length: - 80000 - 150 sort_in_batch: descending sort_batch: descending multiple_iterator: false chunk_length: 500 chunk_shift_ratio: 0.5 num_cache_chunks: 1024 train_data_path_and_name_and_type: - - /tmp/jiatong-7359.okvPvI3Z/raw/train_sp/wav.scp - 
speech - kaldi_ark - - /tmp/jiatong-7359.okvPvI3Z/raw/train_sp/text - text - text valid_data_path_and_name_and_type: - - /tmp/jiatong-7359.okvPvI3Z/raw/dev/wav.scp - speech - kaldi_ark - - /tmp/jiatong-7359.okvPvI3Z/raw/dev/text - text - text allow_variable_data_keys: false max_cache_size: 0.0 max_cache_fd: 32 valid_max_cache_size: null optim: adam optim_conf: lr: 1.0 scheduler: noamlr scheduler_conf: warmup_steps: 4000 token_list: - <blank> - <unk> - ':' - ▁N - NI - N - ▁IYMA - ▁NA - NA - ▁WA - WA - ▁ - '''' - KA - ▁MA - MA - T - ▁XA - TA - NCHU - WI - ▁LI - ▁NI - PA - YI - ▁PUS - K - ▁PI - ▁X - S - ▁TA - YA - ▁LA - Q - QA - TI - ▁KA - QO - W - ▁KAH - ▁PALA - H - X - XA - ▁KI - A - LH - I - LA - ▁CHA - ▁A - ▁XLI - ▁LHI - U - ▁K - KANI - KU - Y - ▁LU - Á - ▁CHU - O - KI - ▁KIWI - NTLA - ▁TLA - M - ▁TAWA - ▁TI - ▁S - WANI - CHA - LHI - LI - ▁TU - ▁PALHA - Í - ▁CHANÁ - ▁KILHWAMPA - KÁN - ▁WAYMA - E - SA - ▁E - ▁LHU - LHA - PU - ▁LHA - ▁PA - ▁LAK - ▁ANTA - ▁KITI - NCHÚ - SI - TLA - PI - ▁KINI - CHI - ▁PEROH - ▁PU - QÓ - QALHCHIWINA - TU - ▁TLHA - ▁WI - NÁ - ▁KAN - ▁NAYI - CH - 'NO' - ▁U - TSA - MÁ - NQO - ▁ANA - ▁LIKWA - ▁XTA - J - ▁QALH - TO - TÁ - ▁USA - ▁PORQUE - ▁MI - L - ▁TAWÁ - XI - LHAQAPASA - P - CHIWI - WÁ - NTI - ▁JKA - Ú - NTLHA - R - TSI - C - STA - ▁LH - LHU - MPI - ▁I - ▁NILH - ▁KATSI - ▁LHAK - MAKLHAKASKI - ▁WANIKÁN - ▁WIXI - ▁TSI - KÚ - NÍ - ▁PAKS - NU - TLHA - YÁ - KUCHAN - XAQATLI - ▁MAX - ▁LAQAPASA - ▁LAQ - QALH - KATSI - Ó - LAQAPASA - ▁J - ▁QAMA - NTU - MI - KIWI - ▁KIN - ▁XANAT - ▁CHI - JA - ▁IY - ▁TSU - MAKLAKAS - ▁MAQA - LÁ - ▁KATSIYA - ▁TLANKA - ▁STAK - ▁XLA - ▁LHIKWA - ▁SQA - ▁P - TAHNA - ▁TLAQ - ▁JKATSI - MAKLAKASKINKA - YÁW - WATIYA - CHÁ - ▁IPORQUEI - ▁AKXNI - TSU - ▁TSINÓ - ▁STAKA - ▁AKXNÍ - LAKATA - KATSÍ - ▁XALHAK - TLAWAYA - SPUT - ▁XATAWA - QALHCHIWI - PÁ - JU - ▁XAXANAT - ▁PÉREZ - ▁AKTSU - ▁JKI - NTÚ - ▁KATSIYÁ - ▁IESTEI - LAQAPASÁ - ▁MASKI - ▁LAQSQATÁ - ▁TLHANKA - ▁WANIKANI - ▁LÓPEZ - MAKLAKASKINKÁN - ▁ANTÁ - ▁TACHIWÍ - ▁SEBAST - ▁CANO - ▁XKUTNI - ▁UKXILH - TANKAH - LAKASKINQO - LAKAPASTAK - ▁XCHACHAT - TAKAWANÍ - ▁TLÁ - ▁TSINOH - KAXTLAWA - ▁NÚÑEZ - ▁XLAKASKINKA - ▁WÁTIYA - ONCE - Z - É - D - Ñ - V - F - G - '1' - B - <sos/eos> init: xavier_uniform input_size: null ctc_conf: dropout_rate: 0.0 ctc_type: builtin reduce: true ignore_nan_grad: true model_conf: ctc_weight: 0.3 lsm_weight: 0.1 length_normalized_loss: false use_preprocessor: true token_type: bpe bpemodel: data/token_list/bpe_unigram250/bpe.model non_linguistic_symbols: null cleaner: null g2p: null speech_volume_normalize: null rir_scp: null rir_apply_prob: 1.0 noise_scp: null noise_apply_prob: 1.0 noise_db_range: '13_15' frontend: default frontend_conf: fs: 16k specaug: specaug specaug_conf: apply_time_warp: true time_warp_window: 5 time_warp_mode: bicubic apply_freq_mask: true freq_mask_width_range: - 0 - 30 num_freq_mask: 2 apply_time_mask: true time_mask_width_range: - 0 - 40 num_time_mask: 2 normalize: global_mvn normalize_conf: stats_file: exp/asr_stats_raw_bpe250_sp/train/feats_stats.npz preencoder: null preencoder_conf: {} encoder: transformer encoder_conf: input_layer: conv2d num_blocks: 12 linear_units: 2048 dropout_rate: 0.1 output_size: 256 attention_heads: 4 attention_dropout_rate: 0.0 postencoder: null postencoder_conf: {} decoder: transformer decoder_conf: input_layer: embed num_blocks: 6 linear_units: 2048 dropout_rate: 0.1 required: - output_dir - token_list version: 0.10.4a1 distributed: false ``` </details> ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, 
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
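The frontend above expects 16 kHz input (`fs: 16k`); if your field recordings use another sample rate, a hedged resampling sketch with `librosa` (assumed installed):

```python
# Hedged sketch: resample arbitrary-rate audio to the 16 kHz the
# frontend expects before decoding.
import librosa
import soundfile as sf

speech, sr = librosa.load("field_recording.wav", sr=16000)  # resamples on load
sf.write("field_recording_16k.wav", speech, 16000)
```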
{"language": "noinfo", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["totonac"]}
espnet/ftshijt_espnet2_asr_totonac_transformer
null
[ "espnet", "audio", "automatic-speech-recognition", "dataset:totonac", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
automatic-speech-recognition
espnet
## ESPnet2 ASR model ### `espnet/ftshijt_espnet2_asr_yolo_mixtec_transformer` This model was trained by ftshijt using yolo_mixtec recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```bash cd espnet pip install -e . cd egs2/yolo_mixtec/asr1 ./run.sh --skip_data_prep false --skip_train true --download_model espnet/ftshijt_espnet2_asr_yolo_mixtec_transformer ``` <!-- Generated by scripts/utils/show_asr_result.sh --> # RESULTS ## Environments - date: `Wed Nov 10 02:59:39 EST 2021` - python version: `3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0]` - espnet version: `espnet 0.10.4a1` - pytorch version: `pytorch 1.9.0` - Git hash: `` - Commit date: `` ## asr_train_asr_transformer_specaug_raw_bpe500 ### WER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_lm_lm_train_bpe500_valid.loss.ave_asr_model_valid.acc.best/test|4985|81348|84.1|11.8|4.1|2.5|18.3|82.5| ### CER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_lm_lm_train_bpe500_valid.loss.ave_asr_model_valid.acc.best/test|4985|626187|93.4|2.2|4.4|2.4|9.0|82.5| ### TER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_lm_lm_train_bpe500_valid.loss.ave_asr_model_valid.acc.best/test|4985|325684|90.7|5.2|4.1|2.2|11.5|82.5| ## ASR config <details><summary>expand</summary> ``` config: conf/tuning/train_asr_transformer_specaug.yaml print_config: false log_level: INFO dry_run: false iterator_type: sequence output_dir: exp/asr_train_asr_transformer_specaug_raw_bpe500 ngpu: 1 seed: 0 num_workers: 1 num_att_plot: 3 dist_backend: nccl dist_init_method: env:// dist_world_size: null dist_rank: null local_rank: 0 dist_master_addr: null dist_master_port: null dist_launcher: null multiprocessing_distributed: false unused_parameters: false sharded_ddp: false cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true collect_stats: false write_collected_feats: false max_epoch: 100 patience: 15 val_scheduler_criterion: - valid - loss early_stopping_criterion: - valid - loss - min best_model_criterion: - - valid - acc - max keep_nbest_models: 10 grad_clip: 5 grad_clip_type: 2.0 grad_noise: false accum_grad: 2 no_forward_run: false resume: true train_dtype: float32 use_amp: false log_interval: null use_tensorboard: true use_wandb: false wandb_project: null wandb_id: null wandb_entity: null wandb_name: null wandb_model_log_interval: -1 detect_anomaly: false pretrain_path: null init_param: [] ignore_init_mismatch: false freeze_param: [] num_iters_per_epoch: null batch_size: 32 valid_batch_size: null batch_bins: 1000000 valid_batch_bins: null train_shape_file: - exp/asr_stats_raw_bpe500/train/speech_shape - exp/asr_stats_raw_bpe500/train/text_shape.bpe valid_shape_file: - exp/asr_stats_raw_bpe500/valid/speech_shape - exp/asr_stats_raw_bpe500/valid/text_shape.bpe batch_type: folded valid_batch_type: null fold_length: - 80000 - 150 sort_in_batch: descending sort_batch: descending multiple_iterator: false chunk_length: 500 chunk_shift_ratio: 0.5 num_cache_chunks: 1024 train_data_path_and_name_and_type: - - /tmp/st-jiatong-54826.tbQP9L0N/raw/train/wav.scp - speech - kaldi_ark - - /tmp/st-jiatong-54826.tbQP9L0N/raw/train/text - text - text valid_data_path_and_name_and_type: - - /tmp/st-jiatong-54826.tbQP9L0N/raw/dev/wav.scp - speech - kaldi_ark - - /tmp/st-jiatong-54826.tbQP9L0N/raw/dev/text - text - text allow_variable_data_keys: false max_cache_size: 0.0 max_cache_fd: 32 
valid_max_cache_size: null optim: adam optim_conf: lr: 1.0 scheduler: noamlr scheduler_conf: warmup_steps: 25000 token_list: - <blank> - <unk> - '4' - '3' - '1' - '2' - A - ▁NDI - '''4' - '''1' - U - ▁BA - O - ▁I - E - 4= - ▁KU - ▁TAN - ▁KA - '''3' - NI - ▁YA - RA - 3= - 2= - IN - NA - ▁TA - AN - ▁KAN - ▁NI - ▁NDA - ▁NA - ▁JI - KAN - CHI - (3)= - I - UN - 1- - ▁SA - (4)= - ▁JA - XI - ▁KO - ▁TI - TA - KU - BI - ▁YU - ▁KWA - KA - XA - 1= - ▁YO - RI - NDO - ▁XA - TU - ▁TU - ▁ÑA - ▁KI - ▁XI - YO - NDU - NDA - ▁CHI - (2)= - ▁BI - ▁NU - KI - (1)= - YU - 3- - ▁MI - 'ON' - ▁A - BA - 4- - KO - ▁NDU - ▁ÑU - ▁NDO - NU - ÑU - '143' - ▁SI - ▁SO - 13- - NDI - ▁AN - ▁SU - TIN - SA - ▁BE - TO - RUN - KWA - KWI - ▁NDE - ▁KWI - XIN - ▁U - SI - SO - ▁TUN - EN - ▁KWE - YA - (4)=2 - NDE - TI - TUN - ▁TIN - MA - ▁SE - ▁XU - SU - ▁LU - ▁KE - ▁ - MI - ▁RAN - (3)=2 - 14- - ▁MA - KUN - LU - N - ▁O - KE - NGA - ▁IS - ▁JU - '=' - ▁LA - ÑA - JA - CHUN - R - TAN - PU - ▁TIEM - LI - LA - CHIU - ▁PA - M - ▁REY - ▁BAN - JI - L - SUN - ▁SEÑOR - ▁JO - ▁TIO - KWE - CHU - S - ▁YE - KIN - XU - BE - ▁CUENTA - ▁SAN - RRU - ▁¿ - CHA - ▁TO - RRA - LO - TE - ▁AMIGU - PA - XAN - ▁C - C - ▁CHA - ▁TE - ▁HIJO - ▁MB - ▁PI - G - ▁ÁNIMA - ▁CHE - ▁P - B - NDIO - SE - ▁SANTU - MU - ▁PADRE - D - JU - Z - ▁TORO - ▁PO - LE - ▁LI - RO - ▁LO - ▁MESA - CA - ▁CHIU - DO - ▁BU - ▁BUTA - JO - T - TRU - RU - ▁MBO - ▁JUAN - ▁MM - ▁CA - ▁M - ▁MAS - ▁DE - V - ▁MAÑA - ▁UTA - DA - ▁MULA - ▁YOLOXÓCHITL - ▁CONSEJU - ▁Y - ▁LE - ÓN - ▁MISA - TIU - ▁CANDELA - ▁PATRÓN - ▁PADRINU - ▁MARCU - ▁V - ▁G - Í - ▁XE - ▁MU - ▁XO - NGUI - ▁CO - ▁HOMBRE - ▁PESU - ▁PE - ▁D - ▁MACHITI - CO - REN - ▁RANCHU - ▁MIS - ▁MACHU - J - ▁PAN - CHO - H - ▁CHU - Y - ▁TON - GA - X - ▁VI - ▁FE - ▁TARRAYA - ▁SANTÍSIMA - ▁N - ▁MAYÓ - ▁CARRU - ▁F - ▁PAPÁ - ▁PALOMA - ▁MARÍA - ▁PEDRU - ▁CAFÉ - ▁COMISARIO - ▁PANELA - ▁PELÓN - É - ▁POZO - ▁CABRÓN - ▁GUACHU - ▁S - RES - ▁COSTUMBRE - ▁SEÑA - QUI - ▁ORO - CH - ▁MAR - SIN - SAN - ▁COSTA - ▁MAMÁ - ▁CINCUENTA - ▁CHO - ▁PEDR - ▁JUNTA - MÚ - ▁TIENDA - ▁JOSÉ - NC - ▁ES - ▁SUERTE - ▁FAMILIA - ▁ZAPATU - NTE - ▁PASTO - ▁CON - Ñ - ▁BOTE - CIÓN - ▁RE - ▁BOLSA - ▁MANGO - ▁JWE - ▁GASTU - ▁T - ▁B - ▁KW - ÍN - ▁HIJA - ▁CUARENT - ▁VAQUERU - ▁NECHITO - ▁NOVIA - ▁NOVIO - JWE - ▁PUENTE - ▁SANDÍA - ▁MALA - Ó - ▁ABONO - ▁JESÚS - ▁CUARTO - ▁EFE - ▁REINA - ▁COMANDANTE - ▁ESCUELA - ▁MANZANA - ▁MÁQUINA - LLA - ▁COR - ▁JERÓNIMO - ▁PISTOLA - NGI - CIO - ▁FRANCISCU - ▁TEODORO - CER - ▁SALUBI - ▁MEZA - ▁MÚSIC - ▁RU - ▁CONSTANTINO - ▁GARCÍA - ▁FRENU - ▁ROSA - ▁CERVEZA - ▁CIGARRU - ▁COMISIÓN - ▁CUNIJO - ▁FRANCISCO - ▁HÍJOLE - ▁NUEVE - ▁MUL - ▁PANTALÓN - ▁CAMISA - ▁CHINGADA - ▁SEMANA - ▁COM - GAR - ▁MARTÍN - ▁SÁBADO - ▁TRABAJO - ▁CINCO - ▁DIE - ▁EST - NDWA - ▁LECHIN - ▁COCO - ILLU - ▁CORRE - ▁MADR - ▁REC - ▁BAUTISTA - ▁VENTANA - ▁CUÑAD - ▁ANTONIU - ▁COPALA - LÍN - ▁SECUND - ▁COHETE - ▁HISTORIA - ▁POLICÍA - ENCIA - ▁CAD - ▁LUIS - ▁DOCTOR - ▁GONZÁLEZ - ▁JUEVE - ▁LIBRU - ▁QUESU - ▁VIAJE - ▁CART - ▁LOCO - ▁BOL - ▁COMPADRE - ▁JWI - ▁METRU - ▁BUENO - ▁TRE - ▁CASTILLO - ▁COMITÉ - ▁ETERNO - ▁LÍQUIDO - ▁MOLE - ▁CAPULCU - ▁DOMING - ▁ROMA - ▁CARAJU - ▁RIATA - ▁TRATU - ▁SEIS - ▁ADÁN - ▁JUANCITO - ▁HOR - '''' - ▁ARRÓ - ▁COCINA - ▁PALACIO - ▁RÓMULO - K - ▁ALFONSO - ▁BARTOLO - ▁FELIPE - ▁HERRER - ▁PAULINO - ▁YEGUA - ▁LISTA - Ú - ▁ABRIL - ▁CUATRO - ▁DICIEMBRE - ▁MARGARITO - ▁MOJONERA - ▁SOLEDAD - ▁VESTIDO - ▁PELOTA - RRET - ▁CAPITÁN - ▁COMUNIÓN - ▁CUCHARA - ▁FERNANDO - ▁GUADALUPE - ▁MIGUEL - ▁PELÚN - ▁SECRETARIU - ▁LENCHU - ▁EVA - ▁SEGUND - ▁CANTOR - ▁CHILPANCINGO - ▁GABRIEL - ▁QUINIENTO - ▁RAÚL 
- ▁SEVERIAN - ▁TUMBADA - ▁MALINCHI - ▁PRIMU - ▁MORAL - ▁AGOSTO - ▁CENTÍMETRO - ▁FIRMA - ▁HUEHUETÁN - ▁MANGUERA - ▁MEDI - ▁MUERT - ▁SALAZAR - ▁VIERNI - LILL - ▁LL - '-' - ▁CAMPESINO - ▁CIVIL - ▁COMISARIADO - ) - ( - Ã - ‘ - ¿ - Ü - ¡ - Q - F - Á - P - Ÿ - W - Ý - <sos/eos> init: xavier_uniform input_size: null ctc_conf: dropout_rate: 0.0 ctc_type: builtin reduce: true ignore_nan_grad: true model_conf: ctc_weight: 0.3 lsm_weight: 0.1 length_normalized_loss: false use_preprocessor: true token_type: bpe bpemodel: data/token_list/bpe_unigram500/bpe.model non_linguistic_symbols: null cleaner: null g2p: null speech_volume_normalize: null rir_scp: null rir_apply_prob: 1.0 noise_scp: null noise_apply_prob: 1.0 noise_db_range: '13_15' frontend: default frontend_conf: fs: 16k specaug: specaug specaug_conf: apply_time_warp: true time_warp_window: 5 time_warp_mode: bicubic apply_freq_mask: true freq_mask_width_range: - 0 - 30 num_freq_mask: 2 apply_time_mask: true time_mask_width_range: - 0 - 40 num_time_mask: 2 normalize: global_mvn normalize_conf: stats_file: exp/asr_stats_raw_bpe500/train/feats_stats.npz preencoder: null preencoder_conf: {} encoder: transformer encoder_conf: input_layer: conv2d num_blocks: 12 linear_units: 2048 dropout_rate: 0.1 output_size: 512 attention_heads: 4 attention_dropout_rate: 0.0 postencoder: null postencoder_conf: {} decoder: transformer decoder_conf: input_layer: embed num_blocks: 6 linear_units: 2048 dropout_rate: 0.1 required: - output_dir - token_list version: 0.10.4a1 distributed: false ``` </details> ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
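Decoder hypotheses surface as `bpe_unigram500` pieces like those in `token_list` above; a hedged sketch of mapping pieces back to surface text with `sentencepiece` (the `bpe.model` path mirrors the config, and the piece sequence is hypothetical):

```python
# Hedged sketch: detokenize BPE pieces with the model's sentencepiece file.
import sentencepiece as spm

sp = spm.SentencePieceProcessor(model_file="data/token_list/bpe_unigram500/bpe.model")
pieces = ["▁NDI", "4", "▁KA", "N"]  # hypothetical decoder output
print(sp.decode_pieces(pieces))
```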
{"language": "noinfo", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["yolo_mixtec"]}
espnet/ftshijt_espnet2_asr_yolo_mixtec_transformer
null
[ "espnet", "audio", "automatic-speech-recognition", "dataset:yolo_mixtec", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
automatic-speech-recognition
espnet
## Example ESPnet2 ASR model ### `ftshijt/mls_asr_transformer_valid.acc.best` ♻️ Imported from https://zenodo.org/record/4458452/ This model was trained by ftshijt using mls/asr1 recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```python # coming soon ``` ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } @inproceedings{hayashi2020espnet, title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit}, author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu}, booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages={7654--7658}, year={2020}, organization={IEEE} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
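Since the demo above is still a `# coming soon` placeholder, here is a minimal inference sketch, assuming `espnet_model_zoo` and `soundfile` are installed and that this card's repo id resolves as a model tag; `example.wav` is a hypothetical 16 kHz mono recording, not a file shipped with the model.

```python
# Hedged sketch, not the card author's demo: load the model by tag and
# decode one utterance. Speech2Text.from_pretrained fetches and unpacks
# the archive via espnet_model_zoo under the hood.
import soundfile
from espnet2.bin.asr_inference import Speech2Text

speech2text = Speech2Text.from_pretrained(
    "espnet/ftshijt_mls_asr_transformer_valid.acc.best"
)

speech, rate = soundfile.read("example.wav")  # assumed 16 kHz mono
nbests = speech2text(speech)                  # n-best hypotheses
text, tokens, token_ints, hyp = nbests[0]     # best hypothesis first
print(text)
```

The same pattern should work for the other imported cards below by swapping in their repo ids.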
{"language": "es", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["mls"]}
espnet/ftshijt_mls_asr_transformer_valid.acc.best
null
[ "espnet", "audio", "automatic-speech-recognition", "es", "dataset:mls", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
espnet/ftshijt_open_li52_asr_train_asr_raw_bpe7000_valid.acc.ave_10best
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
automatic-speech-recognition
espnet
## ESPnet2 ASR pretrained model ### `jv_openslr35` ♻️ Imported from https://zenodo.org/record/5090139/ This model was trained by jv_openslr35 using jv_openslr35/asr1 recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```python # coming soon ``` ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
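As with the card above, the demo here is a placeholder; below is a download-only sketch using the `espnet_model_zoo` Python API, assuming the repo id `espnet/jv_openslr35` is registered as a downloadable tag.

```python
# Hedged sketch: fetch and unpack the pretrained archive. The returned
# dict maps names such as "asr_train_config" and "asr_model_file" to
# local paths, which can then be passed to espnet2's Speech2Text.
from espnet_model_zoo.downloader import ModelDownloader

d = ModelDownloader()
files = d.download_and_unpack("espnet/jv_openslr35")
print(files)
```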
{"language": "jv", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["jv_openslr35"]}
espnet/jv_openslr35
null
[ "espnet", "audio", "automatic-speech-recognition", "jv", "dataset:jv_openslr35", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
automatic-speech-recognition
espnet
## ESPnet2 ASR pretrained model ### `kamo-naoyuki/mini_an4_asr_train_raw_bpe_valid.acc.best` ♻️ Imported from <https://zenodo.org/record/3957940/> This model was trained by kamo-naoyuki using mini_an4/asr1 recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```python # coming soon ``` ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } @inproceedings{hayashi2020espnet, title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit}, author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu}, booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages={7654--7658}, year={2020}, organization={IEEE} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ### Training config See full config in [`config.yaml`](./config.yaml) ```yaml config: null print_config: false log_level: INFO dry_run: false iterator_type: sequence output_dir: exp/asr_train_raw_bpe ngpu: 1 seed: 0 num_workers: 1 num_att_plot: 3 dist_backend: nccl dist_init_method: env:// dist_world_size: null dist_rank: null local_rank: 0 dist_master_addr: null dist_master_port: null dist_launcher: null multiprocessing_distributed: false cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true ```
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["mini-an4"]}
espnet/kamo-naoyuki-mini_an4_asr_train_raw_bpe_valid.acc.best
null
[ "espnet", "audio", "automatic-speech-recognition", "en", "dataset:mini-an4", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
automatic-speech-recognition
espnet
## Example ESPnet2 ASR model ### `kamo-naoyuki/aishell_conformer` ♻️ Imported from https://zenodo.org/record/4105763/ This model was trained by kamo-naoyuki using aishell/asr1 recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```python # coming soon ``` ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } @inproceedings{hayashi2020espnet, title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit}, author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu}, booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages={7654--7658}, year={2020}, organization={IEEE} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
{"language": "zh", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["aishell"]}
espnet/kamo-naoyuki_aishell_conformer
null
[ "espnet", "audio", "automatic-speech-recognition", "zh", "dataset:aishell", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
automatic-speech-recognition
espnet
## Example ESPnet2 ASR model ### `kamo-naoyuki/chime4_asr_train_asr_transformer3_raw_en_char_sp_valid.acc.ave` ♻️ Imported from https://zenodo.org/record/4414883/ This model was trained by kamo-naoyuki using chime4/asr1 recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```python # coming soon ``` ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } @inproceedings{hayashi2020espnet, title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit}, author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu}, booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages={7654--7658}, year={2020}, organization={IEEE} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["chime4"]}
espnet/kamo-naoyuki_chime4_asr_train_asr_transformer3_raw_en_char_sp_valid.acc.ave
null
[ "espnet", "audio", "automatic-speech-recognition", "en", "dataset:chime4", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
automatic-speech-recognition
espnet
## Example ESPnet2 ASR model ### `kamo-naoyuki/dirha_wsj_asr_train_asr_transformer_cmvn_raw_char_rir_scpdatadirha_irwav.scp_noise_db_range10_17_noise_scpdatadirha_noisewav.scp_speech_volume_normalize1.0_num_workers2_rir_apply_prob1._sp_valid.acc.ave` ♻️ Imported from https://zenodo.org/record/4415021/ This model was trained by kamo-naoyuki using dirha_wsj/asr1 recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```python # coming soon ``` ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } @inproceedings{hayashi2020espnet, title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit}, author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu}, booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages={7654--7658}, year={2020}, organization={IEEE} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["dirha_wsj"]}
espnet/kamo-naoyuki_dirha_wsj_asr_train_asr_transformer_cmvn_raw_char_rir_scp-truncated-2fd1f8
null
[ "espnet", "audio", "automatic-speech-recognition", "en", "dataset:dirha_wsj", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
automatic-speech-recognition
espnet
## Example ESPnet2 ASR model ### `kamo-naoyuki/hkust_asr_train_asr_transformer2_raw_zh_char_batch_bins20000000_ctc_confignore_nan_gradtrue_sp_valid.acc.ave` ♻️ Imported from https://zenodo.org/record/4430974/ This model was trained by kamo-naoyuki using hkust/asr1 recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```python # coming soon ``` ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } @inproceedings{hayashi2020espnet, title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit}, author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu}, booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages={7654--7658}, year={2020}, organization={IEEE} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
{"language": "zh", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["hkust"]}
espnet/kamo-naoyuki_hkust_asr_train_asr_transformer2_raw_zh_char_batch_bins20-truncated-934e17
null
[ "espnet", "audio", "automatic-speech-recognition", "zh", "dataset:hkust", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
automatic-speech-recognition
espnet
## Example ESPnet2 ASR model ### `kamo-naoyuki/librispeech_asr_train_asr_conformer5_raw_bpe5000_frontend_confn_fft400_frontend_confhop_length160_scheduler_confwarmup_steps25000_batch_bins140000000_optim_conflr0.0015_initnone_sp_valid.acc.ave` ♻️ Imported from https://zenodo.org/record/4543003/ This model was trained by kamo-naoyuki using librispeech/asr1 recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```python # coming soon ``` ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } @inproceedings{hayashi2020espnet, title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit}, author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu}, booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages={7654--7658}, year={2020}, organization={IEEE} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["librispeech"]}
espnet/kamo-naoyuki_librispeech_asr_train_asr_conformer5_raw_bpe5000_frontend-truncated-55c091
null
[ "espnet", "audio", "automatic-speech-recognition", "en", "dataset:librispeech", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
automatic-speech-recognition
espnet
## Example ESPnet2 ASR model ### `kamo-naoyuki/librispeech_asr_train_asr_conformer5_raw_bpe5000_frontend_confn_fft512_frontend_confhop_length256_scheduler_confwarmup_steps25000_batch_bins140000000_optim_conflr0.0015_initnone_sp_valid.acc.ave` ♻️ Imported from https://zenodo.org/record/4543018/ This model was trained by kamo-naoyuki using librispeech/asr1 recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```python # coming soon ``` ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } @inproceedings{hayashi2020espnet, title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit}, author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu}, booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages={7654--7658}, year={2020}, organization={IEEE} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["librispeech"]}
espnet/kamo-naoyuki_librispeech_asr_train_asr_conformer5_raw_bpe5000_frontend-truncated-b76af5
null
[ "espnet", "audio", "automatic-speech-recognition", "en", "dataset:librispeech", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
automatic-speech-recognition
espnet
## Example ESPnet2 ASR model ### `kamo-naoyuki/librispeech_asr_train_asr_conformer5_raw_bpe5000_scheduler_confwarmup_steps25000_batch_bins140000000_optim_conflr0.0015_initnone_accum_grad2_sp_valid.acc.ave` ♻️ Imported from https://zenodo.org/record/4541452/ This model was trained by kamo-naoyuki using librispeech/asr1 recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```python # coming soon ``` ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } @inproceedings{hayashi2020espnet, title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit}, author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu}, booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages={7654--7658}, year={2020}, organization={IEEE} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["librispeech"]}
espnet/kamo-naoyuki_librispeech_asr_train_asr_conformer5_raw_bpe5000_schedule-truncated-c8e5f9
null
[ "espnet", "audio", "automatic-speech-recognition", "en", "dataset:librispeech", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
automatic-speech-recognition
espnet
## Example ESPnet2 ASR model ### `kamo-naoyuki/librispeech_asr_train_asr_conformer6_n_fft512_hop_length256_raw_en_bpe5000_scheduler_confwarmup_steps40000_optim_conflr0.0025_sp_valid.acc.ave` ♻️ Imported from https://zenodo.org/record/4604066/ This model was trained by kamo-naoyuki using librispeech/asr1 recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```python # coming soon ``` ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } @inproceedings{hayashi2020espnet, title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit}, author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu}, booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages={7654--7658}, year={2020}, organization={IEEE} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["librispeech"]}
espnet/kamo-naoyuki_librispeech_asr_train_asr_conformer6_n_fft512_hop_length2-truncated-a63357
null
[ "espnet", "audio", "automatic-speech-recognition", "en", "dataset:librispeech", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
automatic-speech-recognition
espnet
## Example ESPnet2 ASR model ### `kamo-naoyuki/mini_an4_asr_train_raw_bpe_valid.acc.best` ♻️ Imported from https://zenodo.org/record/3957940/ This model was trained by kamo-naoyuki using mini_an4/asr1 recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```python # coming soon ``` ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } @inproceedings{hayashi2020espnet, title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit}, author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu}, booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages={7654--7658}, year={2020}, organization={IEEE} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["mini_an4"]}
espnet/kamo-naoyuki_mini_an4_asr_train_raw_bpe_valid.acc.best
null
[ "espnet", "audio", "automatic-speech-recognition", "en", "dataset:mini_an4", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
automatic-speech-recognition
espnet
## Example ESPnet2 ASR model ### `kamo-naoyuki/reverb_asr_train_asr_transformer2_raw_en_char_rir_scpdatareverb_rir_singlewav.scp_noise_db_range12_17_noise_scpdatareverb_noise_singlewav.scp_speech_volume_normalize1.0_num_workers2_rir_apply_prob0.999_noise_apply_prob1._sp_valid.acc.ave` ♻️ Imported from https://zenodo.org/record/4441309/ This model was trained by kamo-naoyuki using reverb/asr1 recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```python # coming soon ``` ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } @inproceedings{hayashi2020espnet, title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit}, author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu}, booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages={7654--7658}, year={2020}, organization={IEEE} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["reverb"]}
espnet/kamo-naoyuki_reverb_asr_train_asr_transformer2_raw_en_char_rir_scpdata-truncated-0e9753
null
[ "espnet", "audio", "automatic-speech-recognition", "en", "dataset:reverb", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
automatic-speech-recognition
espnet
## Example ESPnet2 ASR model ### `kamo-naoyuki/reverb_asr_train_asr_transformer4_raw_char_batch_bins16000000_accum_grad1_sp_valid.acc.ave` ♻️ Imported from https://zenodo.org/record/4278363/ This model was trained by kamo-naoyuki using reverb/asr1 recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```python # coming soon ``` ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } @inproceedings{hayashi2020espnet, title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit}, author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu}, booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages={7654--7658}, year={2020}, organization={IEEE} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["reverb"]}
espnet/kamo-naoyuki_reverb_asr_train_asr_transformer4_raw_char_batch_bins1600-truncated-1b72bb
null
[ "espnet", "audio", "automatic-speech-recognition", "en", "dataset:reverb", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
automatic-speech-recognition
espnet
## Example ESPnet2 ASR model ### `kamo-naoyuki/timit_asr_train_asr_raw_word_valid.acc.ave` ♻️ Imported from https://zenodo.org/record/4284058/ This model was trained by kamo-naoyuki using timit/asr1 recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```python # coming soon ``` ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } @inproceedings{hayashi2020espnet, title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit}, author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu}, booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages={7654--7658}, year={2020}, organization={IEEE} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["timit"]}
espnet/kamo-naoyuki_timit_asr_train_asr_raw_word_valid.acc.ave
null
[ "espnet", "audio", "automatic-speech-recognition", "en", "dataset:timit", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
automatic-speech-recognition
espnet
## Example ESPnet2 ASR model ### `kamo-naoyuki/wsj` ♻️ Imported from https://zenodo.org/record/4003381/ This model was trained by kamo-naoyuki using wsj/asr1 recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```python # coming soon ``` ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } @inproceedings{hayashi2020espnet, title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit}, author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu}, booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages={7654--7658}, year={2020}, organization={IEEE} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["wsj"]}
espnet/kamo-naoyuki_wsj
null
[ "espnet", "audio", "automatic-speech-recognition", "en", "dataset:wsj", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
automatic-speech-recognition
espnet
## Example ESPnet2 ASR model ### `kamo-naoyuki/wsj_transformer2` ♻️ Imported from https://zenodo.org/record/4243201/ This model was trained by kamo-naoyuki using wsj/asr1 recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```python # coming soon ``` ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } @inproceedings{hayashi2020espnet, title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit}, author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu}, booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages={7654--7658}, year={2020}, organization={IEEE} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["wsj"]}
espnet/kamo-naoyuki_wsj_transformer2
null
[ "espnet", "audio", "automatic-speech-recognition", "en", "dataset:wsj", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
automatic-speech-recognition
espnet
## ESPnet2 ASR model ### `espnet/kan-bayashi_csj_asr_train_asr_conformer` This model was trained by Nelson Yalta using csj recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```bash cd espnet git checkout 0d8cd47dd3572248b502bc831cd305e648170233 pip install -e . cd egs2/csj/asr1 ./run.sh --skip_data_prep false --skip_train true --download_model espnet/kan-bayashi_csj_asr_train_asr_conformer ``` ## ASR config <details><summary>expand</summary> ``` config: conf/tuning/train_asr_conformer.yaml print_config: false log_level: INFO dry_run: false iterator_type: sequence output_dir: exp/asr_train_asr_conformer_raw_char_sp ngpu: 1 seed: 0 num_workers: 1 num_att_plot: 3 dist_backend: nccl dist_init_method: env:// dist_world_size: 4 dist_rank: 0 local_rank: 0 dist_master_addr: localhost dist_master_port: 47308 dist_launcher: null multiprocessing_distributed: true cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true collect_stats: false write_collected_feats: false max_epoch: 100 patience: null val_scheduler_criterion: - valid - loss early_stopping_criterion: - valid - loss - min best_model_criterion: - - valid - acc - max keep_nbest_models: 10 grad_clip: 5.0 grad_clip_type: 2.0 grad_noise: false accum_grad: 6 no_forward_run: false resume: true train_dtype: float32 use_amp: false log_interval: null pretrain_path: [] pretrain_key: [] num_iters_per_epoch: null batch_size: 20 valid_batch_size: null batch_bins: 15000000 valid_batch_bins: null train_shape_file: - exp/asr_stats_raw_sp/train/speech_shape - exp/asr_stats_raw_sp/train/text_shape.char valid_shape_file: - exp/asr_stats_raw_sp/valid/speech_shape - exp/asr_stats_raw_sp/valid/text_shape.char batch_type: numel valid_batch_type: null fold_length: - 80000 - 150 sort_in_batch: descending sort_batch: descending multiple_iterator: false chunk_length: 500 chunk_shift_ratio: 0.5 num_cache_chunks: 1024 train_data_path_and_name_and_type: - - dump/raw/train_nodup_sp/wav.scp - speech - sound - - dump/raw/train_nodup_sp/text - text - text valid_data_path_and_name_and_type: - - dump/raw/train_dev/wav.scp - speech - sound - - dump/raw/train_dev/text - text - text allow_variable_data_keys: false max_cache_size: 0.0 valid_max_cache_size: null optim: adam optim_conf: lr: 0.002 scheduler: warmuplr scheduler_conf: warmup_steps: 25000 token_list: - <blank> - <unk> - "\u306E" - "\u3044" - "\u3067" - "\u3068" - "\u30FC" - "\u3066" - "\u3046" - "\u307E" - "\u3059" - "\u3057" - "\u306B" - "\u3063" - "\u306A" - "\u3048" - "\u305F" - "\u3053" - "\u304C" - "\u304B" - "\u306F" - "\u308B" - "\u3042" - "\u3093" - "\u308C" - "\u3082" - "\u3092" - "\u305D" - "\u308A" - "\u3089" - "\u3051" - "\u304F" - "\u3069" - "\u3088" - "\u304D" - "\u3060" - "\u304A" - "\u30F3" - "\u306D" - "\u4E00" - "\u3055" - "\u30B9" - "\u8A00" - "\u3061" - "\u3064" - "\u5206" - "\u30C8" - "\u3084" - "\u4EBA" - "\u30EB" - "\u601D" - "\u308F" - "\u6642" - "\u65B9" - "\u3058" - "\u30A4" - "\u884C" - "\u4F55" - "\u307F" - "\u5341" - "\u30E9" - "\u4E8C" - "\u672C" - "\u8A9E" - "\u5927" - "\u7684" - "\u30AF" - "\u30BF" - "\u308D" - "\u3070" - "\u3087" - "\u3083" - "\u97F3" - "\u51FA" - "\u305B" - "\u30C3" - "\u5408" - "\u65E5" - "\u4E2D" - "\u751F" - "\u4ECA" - "\u898B" - "\u30EA" - "\u9593" - "\u8A71" - "\u3081" - "\u30A2" - "\u5F8C" - "\u81EA" - "\u305A" - "\u79C1" - "\u30C6" - "\u4E0A" - "\u5E74" - "\u5B66" - "\u4E09" - "\u30B7" - "\u5834" - "\u30C7" - "\u5B9F" - "\u5B50" - "\u4F53" - "\u8003" - "\u5BFE" - "\u7528" - "\u6587" - "\u30D1" - "\u5F53" - 
"\u7D50" - "\u5EA6" - "\u5165" - "\u8A33" - "\u30D5" - "\u98A8" - "\u30E0" - "\u30D7" - "\u6700" - "\u30C9" - "\u30EC" - "\u30ED" - "\u4F5C" - "\u6570" - "\u76EE" - "\u30B8" - "\u95A2" - "\u30B0" - "\u767A" - "\u8005" - "\u5B9A" - "\u3005" - "\u3050" - "\u30B3" - "\u4E8B" - "\u624B" - "\u5168" - "\u5909" - "\u30DE" - "\u6027" - "\u8868" - "\u4F8B" - "\u52D5" - "\u8981" - "\u5148" - "\u524D" - "\u610F" - "\u90E8" - "\u4F1A" - "\u6301" - "\u30E1" - "\u5316" - "\u9054" - "\u4ED8" - "\u5F62" - "\u73FE" - "\u4E94" - "\u30AB" - "\u3079" - "\u53D6" - "\u56DE" - "\u5E38" - "\u4F7F" - "\u611F" - "\u66F8" - "\u6C17" - "\u6CD5" - "\u7A0B" - "\u3071" - "\u56DB" - "\u591A" - "\u8272" - "\u30BB" - "\u7406" - "\u975E" - "\u30D0" - "\u58F0" - "\u5358" - "\u756A" - "\uFF21" - "\u6210" - "\u540C" - "\u901A" - "\u30A3" - "\u679C" - "\u30AD" - "\u554F" - "\u984C" - "\u69CB" - "\u56FD" - "\u6765" - "\u9AD8" - "\u6B21" - "\u9A13" - "\u3052" - "\u30C1" - "\u4EE5" - "\u3054" - "\u4EE3" - "\u30E2" - "\u30AA" - "\u51C4" - "\u7279" - "\u77E5" - "\u30E5" - "\u7269" - "\u660E" - "\u70B9" - "\u5473" - "\u767E" - "\u89E3" - "\u8FD1" - "\u8B58" - "\u5730" - "\u540D" - "\u805E" - "\u4E0B" - "\u5C0F" - "\u6559" - "\u30B5" - "\u70BA" - "\u4E5D" - "\u30D6" - "\u5BB6" - "\u30CB" - "\u521D" - "\u30D9" - "\u30E7" - "\u5C11" - "\u8A8D" - "\u8AD6" - "\u529B" - "\u516D" - "\u30D3" - "\u60C5" - "\u7FD2" - "\u30A6" - "\u7ACB" - "\u5FC3" - "\u8ABF" - "\u5831" - "\u30A8" - "\uFF24" - "\uFF2E" - "\u793A" - "\u793E" - "\u9055" - "\u969B" - "\u3056" - "\u8AAC" - "\u5FDC" - "\u98DF" - "\u72B6" - "\u9577" - "\u7814" - "\u6821" - "\u5185" - "\u639B" - "\u30DF" - "\u5916" - "\u5411" - "\u80FD" - "\u516B" - "\u9762" - "\u7A76" - "\u7136" - "\u3073" - "\u30D4" - "\u4E3B" - "\u4FC2" - "\u5024" - "\u91CD" - "\u8A5E" - "\u4F9B" - "\u5F97" - "\u5FC5" - "\u5973" - "\u78BA" - "\u7D42" - "\u30BA" - "\u6BCD" - "\u696D" - "\u7387" - "\u65B0" - "\u6D3B" - "\u697D" - "\u8449" - "\u8A08" - "\u30CA" - "\u3080" - "\u6240" - "\u4E16" - "\u6B63" - "\u30E3" - "\u8A18" - "\u671F" - "\u5207" - "\u3078" - "\u6A5F" - "\u30DA" - "\u5343" - "\u985E" - "\u5143" - "\u614B" - "\u826F" - "\u5728" - "\u6709" - "\u30C0" - "\u4E03" - "\uFF23" - "\u5225" - "\u30EF" - "\u691C" - "\u7D9A" - "\u9078" - "\u57FA" - "\u76F8" - "\u6708" - "\u4FA1" - "\u7D20" - "\u4ED6" - "\u6BD4" - "\u9023" - "\u96C6" - "\u30A7" - "\u307B" - "\u4F4D" - "\u597D" - "\uFF2D" - "\u5F37" - "\u4E0D" - "\u5FA1" - "\u6790" - "\u30DD" - "\u7121" - "\u89AA" - "\u53D7" - "\u3086" - "\u7F6E" - "\u8C61" - "\u4ED5" - "\u5F0F" - "\u30CD" - "\u6307" - "\u8AAD" - "\u6C7A" - "\u8ECA" - "\u96FB" - "\u904E" - "\u30B1" - "\u8A55" - "\u5229" - "\u6B8B" - "\u8D77" - "\u30CE" - "\u7D4C" - "\u56F3" - "\u4F1D" - "\u500B" - "\u30C4" - "\u7BC0" - "\u9053" - "\u5E73" - "\u91D1" - "\u899A" - "\uFF34" - "\u4F4F" - "\u59CB" - "\u63D0" - "\u5B58" - "\u5171" - "\u30DB" - "\u7B2C" - "\u7D44" - "\u89B3" - "\u80B2" - "\u6771" - "\u305E" - "\u958B" - "\u52A0" - "\u5F15" - "\uFF33" - "\u53E3" - "\u6C34" - "\u5BB9" - "\u5468" - "\u5B87" - "\u7D04" - "\u5B57" - "\u3076" - "\u9803" - "\u3072" - "\u5B99" - "\u6BB5" - "\u30BD" - "\u97FF" - "\u30DC" - "\u53CB" - "\u91CF" - "\u6599" - "\u3085" - "\u5CF6" - "\u8EAB" - "\u76F4" - "\u753B" - "\u7DDA" - "\u54C1" - "\u5DEE" - "\u4EF6" - "\u9069" - "\u5F35" - "\u8FBA" - "\u8FBC" - "\u91CE" - "\u69D8" - "\u578B" - "\u4E88" - "\u7A2E" - "\u5074" - "\u8FF0" - "\u5C71" - "\u5C4B" - "\u5E30" - "\u30CF" - "\u4E57" - "\u539F" - "\u683C" - "\u8CEA" - "\u666E" - "\uFF30" - "\u9020" - "\u753A" - "\u30B4" - 
"\u82F1" - "\u63A5" - "\u304E" - "\u6E2C" - "\u3075" - "\u7FA9" - "\u4EAC" - "\u5272" - "\u5236" - "\u7B54" - "\u5404" - "\u4FE1" - "\u754C" - "\u6211" - "\u7A7A" - "\uFF0E" - "\u7740" - "\u53EF" - "\u66F4" - "\u6D77" - "\u4E0E" - "\u9032" - "\u52B9" - "\u5F7C" - "\u771F" - "\u7530" - "\u5FB4" - "\u6D41" - "\u5177" - "\uFF32" - "\u5E02" - "\u67FB" - "\u5B89" - "\uFF22" - "\u5E83" - "\u50D5" - "\u6CE2" - "\u5C40" - "\u8A2D" - "\u7537" - "\u767D" - "\u30B6" - "\u53CD" - "\u6226" - "\u533A" - "\u6C42" - "\u96D1" - "\uFF29" - "\u6B69" - "\u8CB7" - "\u982D" - "\u7B97" - "\u534A" - "\u4FDD" - "\u5E03" - "\u96E3" - "\uFF2C" - "\u5224" - "\u843D" - "\u8DB3" - "\u5E97" - "\u7533" - "\u8FD4" - "\u30AE" - "\u4E07" - "\u6728" - "\u6614" - "\u8F03" - "\u7D22" - "\uFF26" - "\u30B2" - "\u6B86" - "\u60AA" - "\u5883" - "\u548C" - "\u907A" - "\u57DF" - "\u968E" - "\u542B" - "\u305C" - "\u30BC" - "\u65AD" - "\u9650" - "\u63A8" - "\u4F4E" - "\u5F71" - "\u898F" - "\u6319" - "\u90FD" - "\u307C" - "\u6848" - "\u4EEE" - "\u88AB" - "\u547C" - "\u30A1" - "\u96E2" - "\u7CFB" - "\u79FB" - "\u30AC" - "\u5DDD" - "\u6E96" - "\u904B" - "\u6761" - "\u5FF5" - "\u6C11" - "\uFF27" - "\u7236" - "\u75C5" - "\u79D1" - "\u4E21" - "\u7531" - "\u8A66" - "\u56E0" - "\u547D" - "\u795E" - "\uFF28" - "\u7570" - "\u7C21" - "\u53E4" - "\u6F14" - "\u5897" - "\u51E6" - "\u8B70" - "\u7DD2" - "\u7CBE" - "\u6613" - "\u53F7" - "\u65CF" - "\u52FF" - "\u60F3" - "\u5217" - "\u5C0E" - "\u8EE2" - "\u54E1" - "\u30E6" - "\u6BCE" - "\u8996" - "\u4E26" - "\u98DB" - "\u4F3C" - "\u6620" - "\u7D71" - "\u4EA4" - "\u30D2" - "\u6B4C" - "\u5F85" - "\u8CC7" - "\u8907" - "\u8AA4" - "\u63DB" - "\u6A19" - "\u6CC1" - "\u914D" - "\u62BD" - "\u822C" - "\u7403" - "\u9006" - "\u65C5" - "\u6628" - "\u9662" - "\u99C5" - "\u74B0" - "\u5BDF" - "\u516C" - "\u6B73" - "\u5C5E" - "\u8F9E" - "\u5947" - "\u6CBB" - "\u5E7E" - "\u82E5" - "\u58F2" - "\u632F" - "\u7686" - "\u6CE8" - "\u6B74" - "\u9805" - "\u5F93" - "\u5747" - "\u5F79" - "\u9806" - "\u53BB" - "\u56E3" - "\u8853" - "\u7DF4" - "\u6FC0" - "\u6982" - "\u66FF" - "\u7B49" - "\u98F2" - "\u53F2" - "\u88DC" - "\u901F" - "\u53C2" - "\u65E9" - "\u53CE" - "\u9332" - "\u671D" - "\u5186" - "\u5370" - "\u5668" - "\u63A2" - "\u7D00" - "\u9001" - "\u6E1B" - "\u571F" - "\u5929" - "\uFF2F" - "\u50BE" - "\u72AC" - "\u9060" - "\u5E2F" - "\u52A9" - "\u6A2A" - "\u591C" - "\u7523" - "\u8AB2" - "\u5BA2" - "\u629E" - "\u5712" - "\u4E38" - "\u50CF" - "\u50CD" - "\u6750" - "\u5DE5" - "\u904A" - "\u544A" - "\u523A" - "\u6539" - "\u8D64" - "\u8074" - "\u4ECB" - "\u8077" - "\u53F0" - "\u77ED" - "\u8AB0" - "\u7D30" - "\u672A" - "\u770C" - "\u9928" - "\u6B62" - "\u53F3" - "\u306C" - "\u3065" - "\u56F2" - "\u8A0E" - "\u6B7B" - "\u5EFA" - "\u592B" - "\u7AE0" - "\u964D" - "\u666F" - "\u706B" - "\u30A9" - "\u9E97" - "\u8B1B" - "\u72EC" - "\u5DE6" - "\u5C64" - "\uFF25" - "\u5C55" - "\u653F" - "\u5099" - "\u4F59" - "\u7D76" - "\u5065" - "\u518D" - "\u9580" - "\u5546" - "\u52DD" - "\u52C9" - "\u82B1" - "\u30E4" - "\u8EF8" - "\u97FB" - "\u66F2" - "\u6574" - "\u652F" - "\u6271" - "\u53E5" - "\u6280" - "\u5317" - "\u30D8" - "\u897F" - "\u5247" - "\u4FEE" - "\u6388" - "\u9031" - "\u5BA4" - "\u52D9" - "\u9664" - "\u533B" - "\u6563" - "\u56FA" - "\u7AEF" - "\u653E" - "\u99AC" - "\u7A4D" - "\u8208" - "\u592A" - "\u5ACC" - "\u9F62" - "\u672B" - "\u7D05" - "\u6E90" - "\u6E80" - "\u5931" - "\u5BDD" - "\u6D88" - "\u6E08" - "\u4FBF" - "\u983C" - "\u4F01" - "\u5B8C" - "\u4F11" - "\u9752" - "\u7591" - "\u8D70" - "\u6975" - "\u767B" - "\u8AC7" - "\u6839" - "\u6025" - 
"\u512A" - "\u7D75" - "\u623B" - "\u5E2B" - "\u5F59" - "\u6DF7" - "\u8DEF" - "\u7E70" - "\uFF2B" - "\u8A3C" - "\u713C" - "\u6562" - "\u5BB3" - "\u96F6" - "\u6253" - "\u82E6" - "\u7701" - "\u7D19" - "\u5C02" - "\u8DDD" - "\u9854" - "\u8D8A" - "\u4E89" - "\u56F0" - "\u5BC4" - "\u5199" - "\u4E92" - "\u6DF1" - "\u5A5A" - "\u7DCF" - "\u89A7" - "\u80CC" - "\u7BC9" - "\u6E29" - "\u8336" - "\u62EC" - "\u8CA0" - "\u590F" - "\u89E6" - "\u7D14" - "\u9045" - "\u58EB" - "\u96A3" - "\u6050" - "\u91C8" - "\u967A" - "\u5150" - "\u5BBF" - "\u6A21" - "\u77F3" - "\u983B" - "\u5B09" - "\u5EA7" - "\u7642" - "\u7E4B" - "\uFF38" - "\u5C06" - "\u8FFD" - "\u5EAD" - "\u6238" - "\u5371" - "\u5BC6" - "\u5DF1" - "\u9014" - "\u7BC4" - "\u99C4" - "\u7D39" - "\u4EFB" - "\u968F" - "\u5357" - "\uFF11" - "\u5EB7" - "\u9818" - "\u5FD8" - "\u3045" - "\u59FF" - "\u7F8E" - "\u55B6" - "\u6349" - "\u65E2" - "\u7167" - "\uFF2A" - "\u4EF2" - "\u9152" - "\u52E2" - "\u9ED2" - "\u5149" - "\u6E21" - "\u75DB" - "\u62C5" - "\u5F31" - "\u307D" - "\uFF36" - "\u7D0D" - "\u629C" - "\u5E45" - "\u6D17" - "\u7A81" - "\u671B" - "\u5373" - "\u9858" - "\u7565" - "\uFF12" - "\u9811" - "\u5FD7" - "\u5B85" - "\u7247" - "\u656C" - "\u6751" - "\u60B2" - "\u81A8" - "\u89D2" - "\u30E8" - "\u4F9D" - "\u8A73" - "\u5F8B" - "\u9B5A" - "\u52B4" - "\u5A66" - "\u6163" - "\u732B" - "\u5019" - "\u8001" - "\u558B" - "\u79F0" - "\u796D" - "\u7FA4" - "\u7E2E" - "\u6C38" - "\u616E" - "\u5EF6" - "\u7A3F" - "\u611B" - "\u8089" - "\u9589" - "\u8CBB" - "\u6295" - "\u6D3E" - "\u81F4" - "\u7BA1" - "\u7C73" - "\u5E95" - "\u7D99" - "\u6C0F" - "\u690D" - "\u501F" - "\u5727" - "\u52E4" - "\u6F22" - "\u66AE" - "\u5F27" - "\u88C5" - "\u57CE" - "\u5287" - "\u76DB" - "\u63F4" - "\u9244" - "\u8C37" - "\u5E72" - "\u7E26" - "\u8A31" - "\u6016" - "\u9A5A" - "\u8A8C" - "\uFF35" - "\u8B77" - "\u5B88" - "\u8033" - "\u6B32" - "\u8239" - "\uFF10" - "\u5178" - "\u67D3" - "\u7D1A" - "\u98FE" - "\u5144" - "\u71B1" - "\u8F09" - "\u88FD" - "\u5BFA" - "\u662D" - "\u7FFB" - "\u5426" - "\u5584" - "\u62BC" - "\u53CA" - "\u6A29" - "\u559C" - "\u670D" - "\u8CB0" - "\u8EFD" - "\u677F" - "\u61B6" - "\u98FC" - "\u5C3E" - "\u5FA9" - "\u5E78" - "\u7389" - "\u5354" - "\u679A" - "\u90CE" - "\u8840" - "\u524A" - "\u5922" - "\u63A1" - "\u6674" - "\u6B20" - "\u602A" - "\u65BD" - "\u7DE8" - "\u98EF" - "\u7B56" - "\u9000" - "\uFF39" - "\u8349" - "\u61F8" - "\u6458" - "\u58CA" - "\u4F38" - "\u85AC" - "\u9996" - "\u5BFF" - "\u53B3" - "\u606F" - "\u5C45" - "\u643A" - "\u9F3B" - "\u9280" - "\u4EA1" - "\u6CCA" - "\u8857" - "\u9759" - "\u9CE5" - "\u677E" - "\u5F92" - "\u969C" - "\u7B4B" - "\u7559" - "\u51B7" - "\u5C24" - "\u68EE" - "\u5438" - "\u5012" - "\u68B0" - "\u6D0B" - "\u821E" - "\u6A4B" - "\u500D" - "\u6255" - "\u5352" - "\u7E04" - "\u6C5A" - "\u53F8" - "\u6625" - "\u793C" - "\u66DC" - "\u6545" - "\u526F" - "\u5F01" - "\u5439" - "\u85E4" - "\u8DE1" - "\u962A" - "\u4E86" - "\u91E3" - "\u9632" - "\u7834" - "\u6012" - "\u662F" - "\u30A5" - "\u7AF6" - "\u8179" - "\u4E95" - "\u4E08" - "\u64AE" - "\u72ED" - "\u5BD2" - "\u7B46" - "\u5965" - "\u8C4A" - "\u732E" - "\u5C31" - "\u5A18" - "\u79D2" - "\u6C5F" - "\u8E0F" - "\u8A13" - "\u7372" - "\u96E8" - "\u6BBA" - "\u57CB" - "\u64CD" - "\u9AA8" - "\u8D85" - "\u6D5C" - "\u8B66" - "\u7DD1" - "\u7D61" - "\u8133" - "\u7B11" - "\u6D6E" - "\u7D66" - "\u7126" - "\u8A70" - "\u878D" - "\u738B" - "\u5C3A" - "\u5E7C" - "\u820C" - "\u663C" - "\u88CF" - "\u6CE3" - "\u67C4" - "\u9396" - "\u62E1" - "\u8A3A" - "\u7DE0" - "\u5B98" - "\u6697" - "\u820E" - "\u6298" - "\u5264" - "\u4E73" - 
"\u6B6F" - "\u7248" - "\u5C04" - "\u8108" - "\u9707" - "\u7802" - "\u4F34" - "\u72AF" - "\u4F50" - "\u5DDE" - "\u8FB2" - "\u8DA3" - "\u990A" - "\u675F" - "\u6E2F" - "\u8FEB" - "\u5F3E" - "\u798F" - "\u51AC" - "\u541B" - "\u6B66" - "\u77AC" - "\u67A0" - "\u6CA2" - "\u661F" - "\u5BCC" - "\u6557" - "\u5D0E" - "\u6355" - "\u8377" - "\u5F1F" - "\u95BE" - "\u7E54" - "\u7C89" - "\u725B" - "\u8DF5" - "\u9999" - "\u6797" - "\u83DC" - "\u62CD" - "\u63CF" - "\u888B" - "\u6607" - "\u91DD" - "\u8FCE" - "\u585A" - "\u5A46" - "\uFF49" - "\u8ECD" - "\uFF13" - "\uFF37" - "\u5BC2" - "\u8F29" - "\u3074" - "\u5DFB" - "\u4E01" - "\u504F" - "\u79CB" - "\u5E9C" - "\u6CC9" - "\u81F3" - "\u6368" - "\u7956" - "\u8584" - "\u5B97" - "\u5FB9" - "\u93E1" - "\u75C7" - "\u6CB9" - "\u8131" - "\u9CF4" - "\u7AE5" - "\u6BDB" - "\u9077" - "\u84CB" - "\u58C1" - "\u5915" - "\u5589" - "\u907F" - "\u984D" - "\u6EA2" - "\u96F0" - "\u4EE4" - "\u59C9" - "\u63E1" - "\u3077" - "\u523B" - "\u62E0" - "\u8CA1" - "\u8FF7" - "\u9063" - "\u82B8" - "\u5E8F" - "\u76E3" - "\u8457" - "\u5869" - "\u5009" - "\u7F6A" - "\u6F5C" - "\u7D5E" - "\u764C" - "\u5BAE" - "\u5E2D" - "\u8F2A" - "\u594F" - "\u846C" - "\u6C60" - "\u6CBF" - "\u5FAE" - "\u5305" - "\u76CA" - "\u76AE" - "\u4FC3" - "\u6297" - "\u5FEB" - "\u66AB" - "\u52E7" - "\u8CA9" - "\u8C46" - "\u5B63" - "\u529F" - "\u9A12" - "\uFF54" - "\u97D3" - "\u6ED1" - "\u75B2" - "\u9003" - "\u9061" - "\u5E79" - "\u60A9" - "\u83D3" - "\u672D" - "\u6804" - "\u9177" - "\u8B1D" - "\u6C96" - "\u96EA" - "\u5360" - "\u60D1" - "\u63FA" - "\u866B" - "\u62B1" - "\uFF4B" - "\u5CA1" - "\u6E9C" - "\u8535" - "\u7763" - "\u6838" - "\u4E71" - "\u4E45" - "\u9EC4" - "\u9670" - "\u7720" - "\u7B26" - "\u6B8A" - "\u628A" - "\u6291" - "\u5E0C" - "\u63C3" - "\u6483" - "\u5EAB" - "\u5409" - "\u6E6F" - "\u65CB" - "\u640D" - "\u52AA" - "\u64E6" - "\u9769" - "\u6E0B" - "\u773C" - "\u592E" - "\u8CDE" - "\u5374" - "\u5948" - "\u539A" - "\u59D4" - "\u83EF" - "\u96A0" - "\uFF4E" - "\u30CC" - "\u9BAE" - "\u515A" - "\u5C65" - "\u8A98" - "\u6469" - "\u6162" - "\u5442" - "\u7206" - "\u7BB1" - "\u6075" - "\u9678" - "\u7DCA" - "\u7E3E" - "\u5742" - "\u7B52" - "\u7532" - "\u5348" - "\u5230" - "\u8CAC" - "\u5C0A" - "\u6CF3" - "\u6279" - "\u7518" - "\u5B6B" - "\u7159" - "\u8A2A" - "\u50B7" - "\u6E05" - "\u716E" - "\u88C1" - "\u9694" - "\u8ED2" - "\uFF31" - "\u7FBD" - "\u5D29" - "\u7A74" - "\u7CD6" - "\u707D" - "\u5275" - "\u6F70" - "\u6691" - "\u87BA" - "\u653B" - "\u6577" - "\u6575" - "\u76E4" - "\u9732" - "\u7A93" - "\u63B2" - "\u81E8" - "\u53E9" - "\u5145" - "\u4FFA" - "\u8F38" - "\u967D" - "\u6B27" - "\u6687" - "\u6B6A" - "\u6DFB" - "\u60A3" - "\u5FD9" - "\u70AD" - "\u829D" - "\u8EDF" - "\u88D5" - "\u7E01" - "\u6F2B" - "\u7A1A" - "\u7968" - "\u8A69" - "\u5CB8" - "\u7687" - "\uFF4A" - "\u6627" - "\u5100" - "\u5857" - "\u8E0A" - "\u8AF8" - "\u6D74" - "\u904D" - "\u66D6" - "\u5BE7" - "\u99B4" - "\u5339" - "\u03B1" - "\u627F" - "\u30BE" - "\u6383" - "\u5375" - "\u5999" - "\u3043" - "\u66B4" - "\u62B5" - "\u604B" - "\u8863" - "\u6EB6" - "\u7DAD" - "\u514D" - "\u6392" - "\u685C" - "\u7573" - "\u7B87" - "\u6398" - "\u535A" - "\u6FC3" - "\u7FCC" - "\u8056" - "\u7DB2" - "\u885B" - "\u64EC" - "\u5E8A" - "\u9178" - "\u6669" - "\u4E7E" - "\u90AA" - "\u7551" - "\u6EDE" - "\u5802" - "\u7E41" - "\u4ECF" - "\u5FB3" - "\u7DE9" - "\u6A39" - "\u6551" - "\u633F" - "\u68D2" - "\u906D" - "\u676F" - "\u6065" - "\u6E56" - "\u6E09" - "\u81D3" - "\u8CB4" - "\u723A" - "\u7981" - "\u4F75" - "\u5263" - "\u786C" - "\u58C7" - "\u80A9" - "\u6D78" - "\u4F0A" - "\u5B9D" - 
"\u6094" - "\u8E8D" - "\u6DB2" - "\u99C6" - "\u6D25" - "\u307A" - "\u6D45" - "\u8B72" - "\u5CA9" - "\u9B45" - "\u587E" - "\u03B8" - "\u6696" - "\u6CB3" - "\u8A95" - "\u7F36" - "\u5507" - "\u80A2" - "\u6328" - "\u62F6" - "\u7A0E" - "\u50AC" - "\u8A34" - "\uFF58" - "\u968A" - "\u659C" - "\u770B" - "\uFF50" - "\u6D66" - "\u8352" - "\uFF41" - "\u71C3" - "\u52A3" - "\u5BA3" - "\u8FBF" - "\u790E" - "\u62FE" - "\u5C4A" - "\u6905" - "\u5EC3" - "\u6749" - "\u9AEA" - "\u77E2" - "\u67D4" - "\u55AB" - "\u73CD" - "\u57FC" - "\u88C2" - "\u63B4" - "\u59BB" - "\u8CA7" - "\u934B" - "\u59A5" - "\u59B9" - "\u5175" - "\uFF14" - "\u623F" - "\u5951" - "\u65E8" - "\uFF44" - "\u0394" - "\u5DE1" - "\u8A02" - "\u5F90" - "\u8CC0" - "\u7BED" - "\u9810" - "\u84C4" - "\u8846" - "\u5DE8" - "\u5506" - "\u65E6" - "\u5531" - "\u9047" - "\u6E67" - "\u8010" - "\u96C4" - "\u6D99" - "\u8CB8" - "\u822A" - "\u5104" - "\u5618" - "\u6C37" - "\u78C1" - "\u679D" - "\u8CAB" - "\u61D0" - "\u52DF" - "\u8155" - "\u65E7" - "\u7AF9" - "\u99D0" - "\u8A72" - "\uFF52" - "\u5893" - "\u518A" - "\u80F8" - "\u758E" - "\u773A" - "\uFF45" - "\u9855" - "\u631F" - "\u55A7" - "\u520A" - "\u68C4" - "\u990C" - "\u67F1" - "\u5800" - "\u8ACB" - "\u79D8" - "\u6717" - "\u96F2" - "\u8170" - "\u7A32" - "\u828B" - "\u8C9D" - "\u5C48" - "\u91CC" - "\u508D" - "\u8102" - "\u6FC1" - "\u54B2" - "\u6BD2" - "\u6EC5" - "\u5629" - "\u6442" - "\u6E7E" - "\u83CC" - "\u8150" - "\u5211" - "\u5F25" - "\u5AC1" - "\u61A7" - "\u4E18" - "\u5C90" - "\u52B1" - "\u8CA2" - "\u6C41" - "\u96C7" - "\u5076" - "\u9774" - "\u72D9" - "\u719F" - "\u900F" - "\uFF59" - "\u8CFC" - "\u5319" - "\uFF46" - "\uFF15" - "\u92AD" - "\u6D12" - "\u8A17" - "\u809D" - "\u963F" - "\u80C3" - "\uFF53" - "\u885D" - "\u621A" - "\uFF4D" - "\u84B8" - "\u4FF3" - "\u8972" - "\u5265" - "\u5BE9" - "\u6817" - "\u8A87" - "\u5237" - "\u7CF8" - "\u90F7" - "\u5049" - "\u6C57" - "\u53CC" - "\u98FD" - "\u77DB" - "\u984E" - "\u552F" - "\u6590" - "\u7DB4" - "\u5B64" - "\u90F5" - "\u76D7" - "\u9E7F" - "\u8CC3" - "\u76FE" - "\u682A" - "\u9ED9" - "\u7C8B" - "\u63DA" - "\u9808" - "\u7092" - "\u9285" - "\u5E81" - "\u9B54" - "\u75E9" - "\u9802" - "\u76BF" - "\u970A" - "\u5E55" - "\u570F" - "\u574A" - "\u72C2" - "\u8912" - "\u9451" - "\u50B5" - "\u77AD" - "\u565B" - "\u5E33" - "\u5782" - "\u8870" - "\u4ED9" - "\u9EA6" - "\u8CA8" - "\u7AAA" - "\u6F6E" - "\u6FEF" - "\u5238" - "\u7D1B" - "\u7384" - "\u7C4D" - "\uFF43" - "\u74F6" - "\u5DE3" - "\u5192" - "\u6CBC" - "\u99D2" - "\u5C3D" - "\u517C" - "\u7C97" - "\u63BB" - "\u80BA" - "\u9154" - "\uFF4C" - "\u702C" - "\u505C" - "\u6F20" - "\u673A" - "\u916C" - "\u4FD7" - "\u8986" - "\u5C3B" - "\u9375" - "\u5805" - "\u6F2C" - "\u2212" - "\u79C0" - "\u6885" - "\u9042" - "\u57F9" - "\u871C" - "\uFF42" - "\u30FB" - "\u52C7" - "\u8ECC" - "\u7F85" - "\uFF3A" - "\u5BB4" - "\u8C5A" - "\u7A3C" - "\u62AB" - "\u8CAF" - "\u9EBB" - "\u6C4E" - "\u51DD" - "\u5FE0" - "\uFF55" - "\u5F80" - "\u8AE6" - "\u8B19" - "\u6F0F" - "\u5410" - "\u3047" - "\u7652" - "\u9663" - "\u6D6A" - "\u52D8" - "\u53D9" - "\u5200" - "\u67B6" - "\u57F7" - "\u5674" - "\u5197" - "\u4E4F" - "\u837B" - "\u81ED" - "\u708A" - "\u598A" - "\u808C" - "\u8CDB" - "\u5C0B" - "\u9175" - "\u757F" - "\u5270" - "\u706F" - "\u8C6A" - "\u9685" - "\u9905" - "\u7949" - "\u80AF" - "\u62DB" - "\u7A3D" - "\u5F6B" - "\u5F69" - "\u03B2" - "\u6B04" - "\u718A" - "\u68CB" - "\u6CB8" - "\u6C88" - "\u8339" - "\u7ABA" - "\u5B9C" - "\u8217" - "\u7CA7" - "\u683D" - "\u80AA" - "\u9665" - "\u6CE1" - "\u95D8" - "\u8F3F" - "\u5353" - "\u7070" - "\u8F9B" - "\u6F01" - 
"\u9F13" - "\u585E" - "\u8CD1" - "\u76C6" - "\u68FA" - "\u6311" - "\u54F2" - "\u9867" - "\u8B21" - "\u8302" - "\u90A3" - "\u80DE" - "\u4F3A" - "\u5A92" - "\u708E" - "\u67D0" - "\u564C" - "\u5203" - "\u6F5F" - "\u7656" - "\u4E80" - "\u63EE" - "\u511F" - "\u4E39" - "\u7DEF" - "\u9DB4" - "\u4E4B" - "\u6BB4" - "\u4EF0" - "\u5949" - "\u7E2B" - "\u75F4" - "\u8650" - "\u61B2" - "\u71E5" - "\u6DC0" - "\uFF57" - "\u88F8" - "\u82BD" - "\u63A7" - "\u95A3" - "\u7587" - "\u925B" - "\u8178" - "\u5642" - "\u935B" - "\u654F" - "\u9162" - "\u938C" - "\u81E3" - "\u8E74" - "\u5A01" - "\u6D44" - "\u7965" - "\u795D" - "\u86C7" - "\u811A" - "\u4F0F" - "\u6F54" - "\u5510" - "\u6955" - "\u57A3" - "\u932F" - "\u514B" - "\u614C" - "\u6BBF" - "\u819C" - "\u61A9" - "\u9065" - "\u82DB" - "\u9676" - "\u8997" - "\u78E8" - "\u624D" - "\u5E1D" - "\u642C" - "\u722A" - "\u90CA" - "\u80A5" - "\u819D" - "\u62D2" - "\u868A" - "\u5208" - "\u5132" - "\uFF48" - "\u596E" - "\u7761" - "\u5BEE" - "\uFF17" - "\u4FB5" - "\u9B31" - "\u635C" - "\u6DBC" - "\u5A20" - "\u7363" - "\u7C92" - "\u963B" - "\u6CE5" - "\u7ADC" - "\u91A4" - "\u92ED" - "\u6606" - "\u9234" - "\u7DBF" - "\u830E" - "\u8107" - "\u7948" - "\u8A60" - "\u6B53" - "\u7F70" - "\u68DA" - "\u83CA" - "\u6069" - "\u7267" - "\u540A" - "\u8DF3" - "\u6DE1" - "\u7F72" - "\u596A" - "\u9038" - "\u6170" - "\u5EB6" - "\u9262" - "\u8B5C" - "\u5ECA" - "\u5606" - "\u62ED" - "\u8CED" - "\u99C1" - "\u7F8A" - "\u5384" - "\u7D10" - "\u9673" - "\u816B" - "\u6841" - "\u9298" - "\u96CC" - "\u636E" - "\u62DD" - "\u60E8" - "\u96DB" - "\u845B" - "\u7FA8" - "\u609F" - "\u76DF" - "\u7E4A" - "\u9192" - "\u65EC" - "\u6DAF" - "\u8CC4" - "\u6E7F" - "\u6F02" - "\u7D2B" - "\u30F4" - "\u4E9C" - "\u8AA0" - "\u5854" - "\u5E4C" - "\u80C6" - "\u64A5" - "\u865A" - "\u6F64" - "\u9699" - "\u5F84" - "\u6C72" - "\u8CE2" - "\u5BF8" - "\u8888" - "\u88DF" - "\u8266" - "\uFF19" - "\u62D8" - "\uFF47" - "\u5841" - "\u5BDB" - "\u51A0" - "\u614E" - "\u971E" - "\u731B" - "\u67CF" - "\u733F" - "\u9084" - "\u50E7" - "\u53EB" - "\u53F1" - "\u72E9" - "\u63C9" - "\u7D2F" - "\u5982" - "\u7897" - "\u6BBB" - "\u906E" - "\u5FCD" - "\u6EF4" - "\u6B96" - "\u8D08" - "\u74A7" - "\u6F38" - "\u6589" - "\u03BC" - "\u9686" - "\u6176" - "\u72A0" - "\u7272" - "\u5146" - "\u576A" - "\u6284" - "\u65D7" - "\u50DA" - "\u5C3F" - "\u51CD" - "\u902E" - "\u7B39" - "\u8F1D" - "\u5C1A" - "\u8015" - "\u51CC" - "\u632B" - "\u4F10" - "\u7BB8" - "\u4E91" - "\u5968" - "\u819A" - "\u9010" - "\u03B3" - "\u5F26" - "\u9700" - "\u5C01" - "\u5E3D" - "\u6F31" - "\u9283" - "\u507D" - "\u5875" - "\u7E1B" - "\u58A8" - "\u6020" - "\u96F7" - "\u5766" - "\u68A8" - "\u90ED" - "\u7A4F" - "\u67FF" - "\u7AFF" - "\u5E61" - "\u5F81" - "\u99B3" - "\u9EBA" - "\u03C4" - "\u8154" - "\u7C98" - "\u7409" - "\u731F" - "\u4EC1" - "\u8358" - "\u6492" - "\u7C3F" - "\u90E1" - "\u7B4C" - "\u5D8B" - "\u6FE1" - "\u618E" - "\u5446" - "\u6F15" - "\u5A29" - "\u68DF" - "\u6052" - "\uFF18" - "\u5553" - "\u5B5D" - "\u67F3" - "\u64A4" - "\u85CD" - "\u95C7" - "\u5B22" - "\u67F4" - "\u6734" - "\u6D1E" - "\u5CB3" - "\u9B3C" - "\u8DE8" - "\u3049" - "\u70C8" - "\u559A" - "\u6F84" - "\u6FEB" - "\u82A6" - "\u62D3" - "\u51FD" - "\u6843" - "\u76F2" - "\u6CA1" - "\u7A6B" - "\u6212" - "\u99FF" - "\u8D05" - "\u67AF" - "\u6C70" - "\u53F6" - "\u90A6" - "\u66C7" - "\u9A30" - "\u711A" - "\u51F6" - "\u5CF0" - "\u69FD" - "\u67DA" - "\u5320" - "\u9A19" - "\u502B" - "\u84EE" - "\u634C" - "\u61F2" - "\u8B0E" - "\u91B8" - "\u56DA" - "\u7344" - "\u6EDD" - "\u6795" - "\u60DC" - "\u7DB1" - "\u8B33" - "\u7089" - "\u5DFE" - 
"\u91DC" - "\u9BAB" - "\u6E58" - "\u92F3" - "\u5351" - "\uFF51" - "\u7DBB" - "\u5EF7" - "\u85A6" - "\u667A" - "\u6C99" - "\u8CBF" - "\u8098" - "\uFF16" - "\u5F0A" - "\u66F0" - "\u7881" - "\u9DFA" - "\u6676" - "\u8D74" - "\u8513" - "\u75D2" - "\u79E9" - "\u5DE7" - "\u9418" - "\u7B1B" - "\u638C" - "\u53EC" - "\u5347" - "\u6249" - "\u5A2F" - "\u8A1F" - "\u8247" - "\u64B2" - "\uFF56" - "\u6182" - "\u90B8" - "\u5098" - "\u7CDE" - "\u03BB" - "\u5C16" - "\u723D" - "\u7832" - "\u55A9" - "\u80CE" - "\u84B2" - "\u9DF9" - "\u755C" - "\u6897" - "\uFF4F" - "\u5023" - "\u6247" - "\u7DFB" - "\u6756" - "\u622F" - "\u5D50" - "\u6A3D" - "\u6F06" - "\u9CE9" - "\u039B" - "\u5FAA" - "\u8896" - "\u9784" - "\u6851" - "\u5D16" - "\u59A8" - "\u66A6" - "\u59D3" - "\u7A00" - "\u3041" - "\u920D" - "\u9727" - "\u9837" - "\u8105" - "\u7B20" - "\u86CD" - "\u8328" - "\u69CD" - "\u3062" - "\u59EB" - "\u6ABB" - "\u8463" - "\u6C7D" - "\u541F" - "\u807E" - "\u73E0" - "\u62B9" - "\u9D28" - "\u64AB" - "\u8607" - "\u7AC3" - "\u864E" - "\u78EF" - "\u77E9" - "\u7CCA" - "\u55AA" - "\u8A6E" - "\u82D1" - "\u98F4" - "\u6089" - "\u674F" - "\u9B42" - "\u914C" - "\u9BC9" - "\u8A50" - "\u03A3" - "\u7815" - "\u55DC" - "\u7FFC" - "\u4F0E" - "\u751A" - "\u5F66" - "\u961C" - "\u8706" - "\u6109" - "\u80F4" - "\u8776" - "\u8B00" - "\u9271" - "\u75E2" - "\u73ED" - "\u9438" - "\u92F8" - "\u62D9" - "\u6068" - "\u4EAD" - "\u4EAB" - "\u75AB" - "\u5F13" - "\u74E6" - "\u7D46" - "\u814E" - "\u62F3" - "\u9A0E" - "\u58B3" - "\u83F1" - "\u6813" - "\u5256" - "\u6D2A" - "\u5484" - "\u9591" - "\u58EE" - "\u9945" - "\u65ED" - "\u8987" - "\u80A1" - "\u86D9" - "\u724C" - "\u965B" - "\u714E" - "\u63AC" - "\u9AED" - "\u9019" - "\u5E7B" - "\u54B3" - "\u6E26" - "\u55C5" - "\u7A42" - "\u7434" - "\u5FCC" - "\u70CF" - "\u5448" - "\u91D8" - "\u611A" - "\u6C3E" - "\u8AFE" - "\u6E9D" - "\u7336" - "\u7AAF" - "\u8ACF" - "\u8CC2" - "\u57C3" - "\u51F8" - "\u7D0B" - "\u6ADB" - "\u525B" - "\u98E2" - "\u4FCA" - "\u54C0" - "\u5BB0" - "\u93AE" - "\u7435" - "\u7436" - "\u96C5" - "\u8494" - "\u85AA" - "\u8A93" - "\u59EA" - "\u62D7" - "\u8778" - "\u7169" - "\u7B51" - "\u690E" - "\u4FB6" - "\u553E" - "\u7BAA" - "\u5075" - "\u8861" - "\u03C3" - "\u88FE" - "\u95B2" - "\u805A" - "\u4E3C" - "\u633D" - "\u7E4D" - "\u82D7" - "\u9E93" - "\u03C6" - "\u03B4" - "\u4E32" - "\u51E1" - "\u5F18" - "\u85FB" - "\u61C7" - "\u817F" - "\u7A9F" - "\u6803" - "\u6652" - "\u5E84" - "\u7891" - "\u7B4F" - "\u7B25" - "\u5E06" - "\u96B7" - "\u8FB0" - "\u75BE" - "\u8FE6" - "\u8A6B" - "\u5617" - "\u582A" - "\u6842" - "\u5B9B" - "\u58F7" - "\u8AED" - "\u97AD" - "\u9310" - "\u6DF5" - "\u79E4" - "\u7525" - "\u4F8D" - "\u66FD" - "\u6572" - "\u63AA" - "\u6168" - "\u83E9" - "\u5CE0" - "\u901D" - "\u5F70" - "\u67F5" - "\u82AF" - "\u7C50" - "\u57A2" - "\u03BE" - "\u77EF" - "\u8C8C" - "\u8F44" - "\u8A89" - "\u9813" - "\u7D79" - "\u9E78" - "\u5E7D" - "\u6881" - "\u642D" - "\u54BD" - "\u82B3" - "\u7729" - "\u0393" - "\u61A4" - "\u7985" - "\u6063" - "\u5840" - "\u7149" - "\u75FA" - "\uFF06" - "\u7A40" - "\u545F" - "\u918D" - "\u9190" - "\u7901" - "\u51F9" - "\u86EE" - "\u5974" - "\u64AD" - "\u7E79" - "\u8499" - "\u8A63" - "\u4E5F" - "\u5420" - "\u4E59" - "\u8E8A" - "\u8E87" - "\u9D2C" - "\u7A92" - "\u59E5" - "\u9326" - "\u694A" - "\u8017" - "\u6F09" - "\u60E7" - "\u4FE3" - "\u6876" - "\u5CFB" - "\u905C" - "\u65FA" - "\u75D5" - "\u03A6" - "\u6234" - "\u658E" - "\u8CD3" - "\u7BC7" - "\u8429" - "\u85E9" - "\u7950" - "\u8B83" - "\u83AB" - "\u9C39" - "\u85A9" - "\u5378" - "\u4E9B" - "\u75B9" - "\u8E44" - "\u4E56" - "\uFF5A" - 
"\u92FC" - "\u6A3A" - "\u5B8F" - "\u7BE4" - "\u8258" - "\u81B3" - "\u7A83" - "\u7E82" - "\u5598" - "\u786B" - "\u99D5" - "\u7261" - "\u732A" - "\u62D0" - "\u60DA" - "\u60A0" - "\u7CE7" - "\u95A5" - "\u03C0" - "\u853D" - "\u6850" - "\u981A" - "\u9214" - "\u697C" - "\u8C9E" - "\u602F" - "\u817A" - "\u8305" - "\u6CF0" - "\u9913" - "\u5C51" - "\u9BDB" - "\u929B" - "\u9AB8" - "\u9C57" - "\u5824" - "\u9675" - "\u6DD8" - "\u64C1" - "\u81FC" - "\u6D32" - "\u8FBB" - "\u8A23" - "\u5C4F" - "\u9BE8" - "\u895F" - "\u5CE1" - "\u660C" - "\u982C" - "\u5806" - "\u865C" - "\u840E" - "\u9EB9" - "\u7CE0" - "\u68B1" - "\u8AFA" - "\u5403" - "\u66A2" - "\u5B54" - "\u5EB8" - "\u5DF3" - "\u589C" - "\u85AE" - "\u6101" - "\u664B" - "\u8236" - "\u8FC5" - "\u6B3A" - "\u9640" - "\u7709" - "\u6CC4" - "\u59FB" - "\u9688" - "\u58CC" - "\u69D9" - "\u5E87" - "\u52D2" - "\u6E07" - "\u91E7" - "\u4E43" - "\u82D4" - "\u9306" - "\u58D5" - "\u78D0" - "\u6962" - "\u65A7" - "\u5E63" - "\u03B7" - "\u7E55" - "\u83C5" - "\u7109" - "\u5112" - "\u5D07" - "\u8276" - "\u5449" - "\u7984" - "\u54C9" - "\u68AF" - "\u5937" - "\u546A" - "\u56C3" - "\u84BC" - "\u9A28" - "\u9D3B" - "\u862D" - "\u7CA5" - "\u7D3A" - "\u7D17" - "\u7164" - "\u03C9" - "\u52FE" - "\u97A0" - "\u4F3D" - "\u7AAE" - "\u6E15" - "\u0392" - "\u8D66" - "\u6597" - "\u66F9" - "\u8CE0" - "\u5CAC" - "\u847A" - "\u7D33" - "\u5B8D" - "\u6191" - "\u6357" - "\u7C9B" - "\u8CCA" - "\u9F8D" - "\u81C6" - "\u6C8C" - "\u52C5" - "\u8096" - "\u559D" - "\u8CAA" - "\u82AD" - "\u8549" - "\u919C" - "\u64B9" - "\u5740" - "\u7BE0" - "\u7D2C" - "\u75B1" - "\u52F2" - "\u86FE" - "\u88B4" - "\u8749" - "\u685F" - "\u4FF5" - "\u818F" - "\u5DF7" - "\u5072" - "\u6148" - "\u754F" - "\u96BB" - "\u606D" - "\u64B0" - "\u9D0E" - "\u52AB" - "\u63C6" - "\u914E" - "\u8106" - "\u6241" - "\u9761" - "\u8511" - "\u95CA" - "\u96BC" - "\u6CCC" - "\u5996" - "\u65A1" - "\u52C3" - "\u637B" - "\u6E13" - "\u937E" - "\u5954" - "\u6155" - "\u5984" - "\u6A0B" - "\u936C" - "\u502D" - "\u8679" - "\u03BD" - "\u60A6" - "\u8151" - "\u62EE" - "\u51E0" - "\u80E1" - "\u8FC2" - "\u8EAF" - "\u50ED" - "\u6ECB" - "\u7B8B" - "\u75F0" - "\u65AC" - "\u85AB" - "\u673D" - "\u82A5" - "\u9756" - "\u907C" - "\u6591" - "\u7953" - "\u5B95" - "\u976D" - "\u72D7" - "\u81BF" - "\u59AC" - "\u5A7F" - "\u7554" - "\u7AEA" - "\u9D5C" - "\u8CE6" - "\u7E1E" - "\u6731" - "\u7C95" - "\u69FB" - "\u6D69" - "\u511A" - "\u8CDC" - "\u8B39" - "\u68B5" - "\u5A9B" - "\u7947" - "\u5516" - "\u03C8" - "\u03C1" - "\u5A9A" - "\u540E" - "\u6FB1" - "\u7DBE" - "\u6372" - "\u67E9" - "\u6DF3" - "\u74DC" - "\u5631" - "\u51B4" - "\u6115" - "\u9211" - "\u51B6" - "\u67A2" - "\u03A9" - "\u77B0" - "\u6775" - "\u5EB5" - "\u4F2F" - "\u840C" - "\u5609" - "\u4FC4" - "\u7D06" - "\u81A0" - "\u7252" - "\u8EB0" - "\u543E" - "\u50FB" - "\u704C" - "\u646F" - "\u5091" - "\u929A" - "\u8B90" - "\u8910" - "\u8FB1" - "\u7345" - "\u7B94" - "\u73A9" - "\u4F43" - "\u583A" - "\u5504" - "\u515C" - "\u62CC" - "\u5751" - "\u75D8" - "\u69CC" - "\u77B3" - "\u79BF" - "\u66D9" - "\u5DF2" - "\u7FC1" - "\u5C3C" - "\u60BC" - "\u7F77" - "\u699C" - "\u5451" - "\u79E6" - "\u533F" - "\u03BA" - "\u7259" - "\u4F46" - "\u572D" - "\u548E" - "\u745E" - "\u7A1C" - "\u785D" - "\u6BC5" - "\u7015" - "\u8702" - "\u978D" - "\u6A2B" - "\u7566" - "\u660F" - "\u755D" - "\u4FAE" - "\u548B" - "\u6367" - "\u7F9E" - "\u803D" - "\u60B8" - "\u51E7" - "\u4EAE" - "\u9AC4" - "\u54FA" - "\u4FEF" - "\u567A" - "\u8058" - "\u8654" - "\u5B8B" - "\u93A7" - "\u968B" - "\u51B3" - "\u59D1" - "\u7078" - "\u927E" - "\u8F5F" - "\u60F0" - "\u03C7" - 
"\u643E" - "\u6854" - "\u7F6B" - "\u8E4A" - "\u68B6" - "\u6893" - "\u7F75" - "\u65A5" - "\u6276" - "\u6147" - "\u61C3" - "\u9949" - "\u6E25" - "\u6AD3" - "\u80E4" - "\u56A2" - "\u9CF3" - "\u6A84" - "\u8C79" - "\u50B2" - "\u50D1" - "\u7586" - "\u6134" - "\u53A8" - "\u6FB9" - "\u9320" - "\u64E2" - "\u6EBA" - "\u7624" - "\u73CA" - "\u5BC5" - "\u6977" - "\u9583" - "\u9CF6" - "\u7119" - "\u6912" - "\u9B4F" - "\u9798" - "\u68A2" - "\u6900" - "\u8ACC" - "\u696B" - "\u5F14" - "\u65D2" - "\u5957" - "\u9F5F" - "\u9F6C" - "\u7D18" - "\u810A" - "\u536F" - "\u727D" - "\u6BD8" - "\u6714" - "\u514E" - "\u721B" - "\u6D9C" - "\u5851" - "\u5F04" - "\u676D" - "\u63A0" - "\u80B4" - "\u626E" - "\u51F1" - "\u798D" - "\u8036" - "\u808B" - "\u7235" - "\u61AB" - "\u57D3" - "\u5983" - "\u9910" - "\u7C7E" - "\u7262" - "\u6816" - "\u9017" - "\u7058" - "\u5E5F" - "\u68F2" - "\u5687" - "\u7827" - "\u6E1A" - "\u7C9F" - "\u7A7F" - "\u7F60" - "\u68F9" - "\u8594" - "\u8587" - "\u526A" - "\u7B48" - "\u936E" - "\u892A" - "\u7AA9" - "\u58F1" - "\u30F2" - "\u7460" - "\u7483" - "\u61BE" - "\u5E16" - "\u6960" - "\u03B5" - "\u5480" - "\u56BC" - "\u56A5" - "\u6D29" - "\u6A58" - "\u6867" - "\u6A9C" - "\u63F6" - "\u63C4" - "\u88E1" - "\u6A80" - "\u900D" - "\u9081" - "\u6028" - "\u73B2" - "\u90C1" - "\u5815" - "\u8AB9" - "\u8B17" - "\u8956" - "\u51F0" - "\u9B41" - "\u5B75" - "\u7766" - "\u71FB" - "\u5243" - "\u53A9" - "\u71D7" - "\u84D1" - "\u5EFB" - "\u75D4" - "\u837C" - "\u6190" - "\u6070" - "\u8F9F" - "\u5F98" - "\u5F8A" - "\u4FA0" - "\u5830" - "\u971C" - "\u809B" - "\u76E7" - "\u5835" - "\u72DB" - "\u9D8F" - "\u9119" - "\u4F73" - "\u916A" - "\u8AE7" - "\u6973" - "\u7826" - "\u5AC9" - "\u5DEB" - "\u53E1" - "\u9716" - "\u6E23" - "\u5544" - "\u798E" - "\u6CAB" - "\u821F" - "\u6C5D" - "\u5302" - "\u99F1" - "\u6C08" - "\u308E" - "\u714C" - "\u7DAC" - "\u5F1B" - "\u586B" - "\u84C1" - "\u5039" - "\u7CFE" - "\u51A5" - "\u674E" - "\u966A" - "\u8877" - "\u59E6" - "\u5962" - "\u75BC" - "\u8A54" - "\u8599" - "\u8B5A" - "\u5CEF" - "\u684E" - "\u688F" - "\u9B92" - "\u8A1B" - "\u55B0" - "\u7960" - "\u67A1" - "\u6681" - "\u4E5E" - "\u91C7" - "\u9739" - "\u9742" - "\u687F" - "\u929C" - "\u4F51" - "\u79BE" - "\u5944" - "\u6930" - "\u87F9" - "\u8061" - "\u98AF" - "\u30C2" - "\u8E81" - "\u8E42" - "\u8E99" - "\u8695" - "\u693F" - "\u62F7" - "\u9257" - "\u8882" - "\u78CB" - "\u7422" - "\u6B3D" - "\u60B6" - "\u53C9" - "\u7E37" - "\u8A36" - "\u50C5" - "\u5C6F" - "\u5EEC" - "\u5C41" - "\u99A8" - "\u6E20" - "\u8568" - "\u699B" - "\u675C" - "\u7791" - "\u6A8E" - "\u8ECB" - "\u8F62" - "\u8700" - "\u8235" - "\u82B9" - "\u6B3E" - "\u639F" - "\u8E2A" - "\u745A" - "\u71E6" - "\u7D21" - "\u584A" - "\u8171" - "\u6753" - "\u65A4" - "\u786F" - "\u55AC" - "\u8B04" - "\u79DF" - "\u8180" - "\u80F1" - "\u6EC4" - "\u9C10" - "\u8475" - "\u8471" - "\u8461" - "\u5A49" - "\u88D4" - "\u9F0E" - "\u9187" - "\u67EF" - "\u991E" - "\u96C1" - "\u8AA6" - "\u8A62" - "\u633A" - "\u7AFA" - "\u8A82" - "\u5191" - "\u8718" - "\u86DB" - "\u70B8" - "\u932B" - "\u58C5" - "\u8087" - "\u54AC" - "\u9B8E" - "\u67D1" - "\u7D9C" - "\u5BE1" - "\u7977" - "\u522E" - "\u8CCE" - "\u9B18" - "\u884D" - "\u5FD6" - "\u685D" - "\u0398" - "\u039A" - "\u03A8" - "\u53E2" - "\u4FCE" - "\u7396" - "\u78A7" - "\u8766" - "\u8521" - "\u649A" - "\u7A14" - "\u752B" - "\u6D35" - "\u7893" - "\u9ECE" - "\u5AE1" - "\u8755" - "\u725F" - "\u6B89" - "\u6C83" - "\u7B50" - "\u619A" - "\u6E24" - "\u9B4D" - "\u9B4E" - "\u71ED" - "\u7940" - "\u6D1B" - "\u88F3" - "\u4E11" - "\u9846" - "\u9952" - "\u5EC9" - "\u689F" - "\u848B" - 
"\u6DD1" - "\u8737" - "\u9644" - "\u695A" - "\u9F20" - "\u5154" - "\u61AC" - "\u5F57" - "\u66FC" - "\u5D11" - "\u57DC" - "\u5F77" - "\u5F7F" - "\u5DF4" - "\u831C" - "\u6D9B" - "\u57E0" - "\u945A" - "\u92D2" - "\u5C09" - "\u53AD" - "\u7B75" - "\u7AE3" - "\u7E8F" - "\u6194" - "\u60B4" - "\u8E5F" - "\u675E" - "\u7825" - "\u8F14" - "\u9C52" - "\u4FAF" - "\u7D62" - "\u5475" - "\u698E" - "\u53EA" - "\u71D5" - "\u5C60" - "\u5614" - "\u74E2" - "\u9291" - "\u880D" - "\u932C" - "\u608C" - "\u8A1D" - "\u7DB8" - "\u530D" - "\u5310" - "\u637A" - "\u6A59" - "\u5BB5" - "\u9D60" - "\u57F4" - "\u7690" - "\u9021" - "\u4FF8" - "\u7A63" - "\u54A4" - "\u8309" - "\u8389" - "\u6643" - "\u6EF8" - "\u5289" - "\u5026" - "\u8944" - "\u7B4D" - "\u5239" - "\u83BD" - "\u9041" - "\u66F5" - "\u79BD" - "\u7B67" - "\u7E0A" - "\u7FD4" - "\u5BF5" - "\u834F" - "\u758B" - "\u84EC" - "\u83B1" - "\u8EAC" - "\u696E" - "\u76C8" - "\u5C13" - "\u72FC" - "\u85C9" - "\u965F" - "\u620E" - "\u4E8E" - "\u6F58" - "\u8012" - "\u5F82" - "\u5FA0" - "\u99AE" - "\u5F6D" - "\u5E47" - "\u9087" - "\u6CD3" - "\u80B1" - "\u65BC" - "\u6602" - "\u8E64" - "\u7463" - "\u9A65" - "\u4EA8" - "\u8AEE" - "\u77EE" - "\u8569" - "\u6566" - "\u30EE" - "\u6208" - "\u8229" - "\u9B6F" - "\u65E0" - "\u6159" - "\u6127" - "\u8340" - "\u6309" - "\u914B" - "\u59F6" - "\u723E" - "\u8602" - "\u986B" - "\u593E" - "\u59DA" - "\u701D" - "\u6FD8" - "\u964B" - "\u777E" - "\u5B30" - "\u5DBA" - "\u821B" - "\u7B65" - "\u95A4" - "\u68D8" - "\u9812" - "\u59BE" - "\u8B2C" - "\u4F0D" - "\u537F" - "\u8FEA" - "\u5686" - "\u60F9" - "\u80DA" - "\u6C6A" - "\u543B" - "\u9B51" - "\u8F3B" - "\u59C6" - "\u84FC" - "\u6AC2" - "\u5315" - "\u4F70" - "\u7246" - "\u5CD9" - "\u725D" - "\u9DF2" - "\u7DCB" - "\u7BAD" - "\u82EB" - "\u5366" - "\u5B5F" - "\u5323" - "\u4ED4" - "\u5D19" - "\u6787" - "\u6777" - "\u81C0" - "\u681E" - "\u9E1E" - "\u61FA" - "\u55DA" - "\u6DB8" - "\u30C5" - "\u8D16" - "\u5E9A" - "\u93D1" - "\u9149" - "\u670B" - "\u70F9" - "\u53C8" - "\u7337" - "\u7C00" - "\u5B2C" - "\u88B7" - "\u6BB7" - "\u51DB" - "\u4EC0" - "\u71FF" - "\u5556" - "\u7BC6" - "\u7DD8" - "\u5036" - "\u6AC3" - "\u8A03" - "\u540F" - "\u5CB1" - "\u8A25" - "\u958F" - "\u5DBD" - "\u722C" - "\u618A" - "\u7511" - "\u6144" - "\u5E25" - "\u7704" - "\u5A11" - "\u50E5" - "\u5016" - "\u800C" - "\u8F4D" - "\u5583" - "\u81BE" - "\u7099" - "\u85AF" - "\u97EE" - "\u4E99" - "\u8B14" - "\u86CE" - "\u7425" - "\u73C0" - "\u698A" - "\u7C3E" - "\u8D6D" - "\u8823" - "\u8299" - "\u8B01" - "\u9022" - "\u8466" - "\u6670" - "\u5398" - "\u707C" - "\u903C" - "\u9328" - "\u700B" - "\u5FF8" - "\u6029" - "\u7165" - "\u7B0F" - "\u5FFD" - "\u7708" - "\u7DEC" - "\u5C4D" - "\u75BD" - "\u6E5B" - "\u788D" - "\u8AE4" - <sos/eos> init: xavier_uniform input_size: null ctc_conf: dropout_rate: 0.0 ctc_type: builtin reduce: true model_conf: ctc_weight: 0.3 lsm_weight: 0.1 length_normalized_loss: false use_preprocessor: true token_type: char bpemodel: null non_linguistic_symbols: null cleaner: null g2p: null frontend: default frontend_conf: fs: 16k specaug: specaug specaug_conf: apply_time_warp: true time_warp_window: 5 time_warp_mode: bicubic apply_freq_mask: true freq_mask_width_range: - 0 - 30 num_freq_mask: 2 apply_time_mask: true time_mask_width_range: - 0 - 40 num_time_mask: 2 normalize: global_mvn normalize_conf: stats_file: exp/asr_stats_raw_sp/train/feats_stats.npz encoder: conformer encoder_conf: output_size: 512 attention_heads: 8 linear_units: 2048 num_blocks: 12 dropout_rate: 0.1 positional_dropout_rate: 0.1 attention_dropout_rate: 0.1 
input_layer: conv2d6 normalize_before: true macaron_style: false pos_enc_layer_type: rel_pos selfattention_layer_type: rel_selfattn activation_type: swish use_cnn_module: true cnn_module_kernel: 31 decoder: transformer decoder_conf: attention_heads: 8 linear_units: 2048 num_blocks: 6 dropout_rate: 0.1 positional_dropout_rate: 0.1 self_attention_dropout_rate: 0.1 src_attention_dropout_rate: 0.1 required: - output_dir - token_list distributed: true ``` </details> ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
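To complement the full training configuration above, here is a minimal, unofficial inference sketch; it assumes `espnet_model_zoo` is installed and that `speech.wav` is an illustrative 16 kHz mono recording you supply:

```python
# Unofficial sketch: load the model through espnet_model_zoo and decode one file.
import soundfile
from espnet2.bin.asr_inference import Speech2Text

speech2text = Speech2Text.from_pretrained("espnet/kan-bayashi_csj_asr_train_asr_conformer")
speech, rate = soundfile.read("speech.wav")  # illustrative 16 kHz mono input
text, tokens, *_ = speech2text(speech)[0]    # best hypothesis from the n-best list
print(text)
```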
{"language": "jp", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["csj"]}
espnet/kan-bayashi_csj_asr_train_asr_conformer
null
[ "espnet", "audio", "automatic-speech-recognition", "jp", "dataset:csj", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
automatic-speech-recognition
espnet
## Example ESPnet2 ASR model ### `kan-bayashi/csj_asr_train_asr_transformer_raw_char_sp_valid.acc.ave` ♻️ Imported from https://zenodo.org/record/4037458/ This model was trained by kan-bayashi using csj/asr1 recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```python # coming soon ``` ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } @inproceedings{hayashi2020espnet, title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit}, author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu}, booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages={7654--7658}, year={2020}, organization={IEEE} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
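In place of the `# coming soon` demo above, a minimal unofficial sketch, assuming `espnet_model_zoo` is installed and `speech.wav` is an illustrative 16 kHz mono recording:

```python
# Unofficial sketch: decode a single utterance with the pretrained CSJ model.
import soundfile
from espnet2.bin.asr_inference import Speech2Text

speech2text = Speech2Text.from_pretrained(
    "espnet/kan-bayashi_csj_asr_train_asr_transformer_raw_char_sp_valid.acc.ave"
)
speech, rate = soundfile.read("speech.wav")  # illustrative input file
text, tokens, *_ = speech2text(speech)[0]
print(text)
```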
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["csj"]}
espnet/kan-bayashi_csj_asr_train_asr_transformer_raw_char_sp_valid.acc.ave
null
[ "espnet", "audio", "automatic-speech-recognition", "ja", "dataset:csj", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-to-speech
espnet
## ESPnet2 TTS pretrained model ### `kan-bayashi/csmsc_conformer_fastspeech2` ♻️ Imported from https://zenodo.org/record/4031955/ This model was trained by kan-bayashi using csmsc/tts1 recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```python # coming soon ``` ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } @inproceedings{hayashi2020espnet, title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit}, author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu}, booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages={7654--7658}, year={2020}, organization={IEEE} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
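Until the demo above is filled in, a minimal sketch modeled on the working Tacotron2 card in this collection; it assumes `torch`, `espnet_model_zoo`, and `pypinyin` (for the Chinese g2p) are installed, and audio quality is indicative only since spectrogram models typically fall back to Griffin-Lim without a neural vocoder:

```python
# Unofficial sketch: synthesize one sentence with the pretrained CSMSC model.
import soundfile
from espnet2.bin.tts_inference import Text2Speech

text2speech = Text2Speech.from_pretrained("espnet/kan-bayashi_csmsc_conformer_fastspeech2")
speech = text2speech("春江潮水连海平,海上明月共潮生")["wav"]  # sample text reused from a sibling card
soundfile.write("out.wav", speech.numpy(), text2speech.fs, "PCM_16")
```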
{"language": "zh", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["csmsc"]}
espnet/kan-bayashi_csmsc_conformer_fastspeech2
null
[ "espnet", "audio", "text-to-speech", "zh", "dataset:csmsc", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-to-speech
espnet
## Example ESPnet2 TTS model ### `kan-bayashi/csmsc_fastspeech` ♻️ Imported from https://zenodo.org/record/3986227/ This model was trained by kan-bayashi using csmsc/tts1 recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```python # coming soon ``` ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } @inproceedings{hayashi2020espnet, title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit}, author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu}, booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages={7654--7658}, year={2020}, organization={IEEE} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
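Pending the demo above, a minimal unofficial sketch (assumes `torch`, `espnet_model_zoo`, and `pypinyin` for the Chinese g2p; without a neural vocoder the output typically uses Griffin-Lim):

```python
# Unofficial sketch: FastSpeech inference on CSMSC text.
import soundfile
from espnet2.bin.tts_inference import Text2Speech

text2speech = Text2Speech.from_pretrained("espnet/kan-bayashi_csmsc_fastspeech")
speech = text2speech("春江潮水连海平,海上明月共潮生")["wav"]  # sample text from a sibling card
soundfile.write("out.wav", speech.numpy(), text2speech.fs, "PCM_16")
```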
{"language": "zh", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["csmsc"]}
espnet/kan-bayashi_csmsc_fastspeech
null
[ "espnet", "audio", "text-to-speech", "zh", "dataset:csmsc", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-to-speech
espnet
## Example ESPnet2 TTS model ### `kan-bayashi/csmsc_fastspeech2` ♻️ Imported from https://zenodo.org/record/4031953/ This model was trained by kan-bayashi using csmsc/tts1 recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```python # coming soon ``` ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } @inproceedings{hayashi2020espnet, title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit}, author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu}, booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages={7654--7658}, year={2020}, organization={IEEE} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
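Pending the demo above, a minimal unofficial sketch (same assumptions as the other CSMSC cards: `torch`, `espnet_model_zoo`, and `pypinyin` installed, Griffin-Lim fallback for the waveform):

```python
# Unofficial sketch: FastSpeech2 inference on CSMSC text.
import soundfile
from espnet2.bin.tts_inference import Text2Speech

text2speech = Text2Speech.from_pretrained("espnet/kan-bayashi_csmsc_fastspeech2")
speech = text2speech("春江潮水连海平,海上明月共潮生")["wav"]
soundfile.write("out.wav", speech.numpy(), text2speech.fs, "PCM_16")
```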
{"language": "zh", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["csmsc"]}
espnet/kan-bayashi_csmsc_fastspeech2
null
[ "espnet", "audio", "text-to-speech", "zh", "dataset:csmsc", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-to-speech
espnet
## ESPnet2 TTS pretrained model ### `kan-bayashi/csmsc_full_band_vits` ♻️ Imported from https://zenodo.org/record/5443852/ This model was trained by kan-bayashi using csmsc/tts1 recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```python # coming soon ``` ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } @inproceedings{hayashi2020espnet, title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit}, author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu}, booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages={7654--7658}, year={2020}, organization={IEEE} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
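Pending the demo above, a minimal unofficial sketch (assumes `torch`, `espnet_model_zoo`, and `pypinyin` are installed); VITS generates the waveform end to end, so no separate vocoder is needed:

```python
# Unofficial sketch: full-band VITS inference; text2speech.fs reflects the
# model's (full-band) sampling rate, so no rate is hardcoded here.
import soundfile
from espnet2.bin.tts_inference import Text2Speech

text2speech = Text2Speech.from_pretrained("espnet/kan-bayashi_csmsc_full_band_vits")
speech = text2speech("春江潮水连海平,海上明月共潮生")["wav"]
soundfile.write("out.wav", speech.numpy(), text2speech.fs, "PCM_16")
```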
{"language": "zh", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["csmsc"]}
espnet/kan-bayashi_csmsc_full_band_vits
null
[ "espnet", "audio", "text-to-speech", "zh", "dataset:csmsc", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-to-speech
espnet
## Example ESPnet2 TTS model ### `kan-bayashi/csmsc_tacotron2` ♻️ Imported from https://zenodo.org/record/3969118/ This model was trained by kan-bayashi using csmsc/tts1 recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```python # coming soon ``` ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } @inproceedings{hayashi2020espnet, title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit}, author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu}, booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages={7654--7658}, year={2020}, organization={IEEE} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
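Pending the demo above, a minimal unofficial sketch following the same pattern as the fully worked Tacotron2 card later in this collection (`torch`, `espnet_model_zoo`, and `pypinyin` assumed installed):

```python
# Unofficial sketch: Tacotron2 inference on CSMSC text.
import soundfile
from espnet2.bin.tts_inference import Text2Speech

text2speech = Text2Speech.from_pretrained("espnet/kan-bayashi_csmsc_tacotron2")
speech = text2speech("春江潮水连海平,海上明月共潮生")["wav"]
soundfile.write("out.wav", speech.numpy(), text2speech.fs, "PCM_16")
```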
{"language": "zh", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["csmsc"]}
espnet/kan-bayashi_csmsc_tacotron2
null
[ "espnet", "audio", "text-to-speech", "zh", "dataset:csmsc", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-to-speech
espnet
## Example ESPnet2 TTS model ### `kan-bayashi/csmsc_transformer` ♻️ Imported from https://zenodo.org/record/4034125/ This model was trained by kan-bayashi using csmsc/tts1 recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```python # coming soon ``` ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } @inproceedings{hayashi2020espnet, title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit}, author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu}, booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages={7654--7658}, year={2020}, organization={IEEE} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
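Pending the demo above, a minimal unofficial sketch (`torch`, `espnet_model_zoo`, and `pypinyin` assumed installed; Griffin-Lim fallback applies without a neural vocoder):

```python
# Unofficial sketch: Transformer-TTS inference on CSMSC text.
import soundfile
from espnet2.bin.tts_inference import Text2Speech

text2speech = Text2Speech.from_pretrained("espnet/kan-bayashi_csmsc_transformer")
speech = text2speech("春江潮水连海平,海上明月共潮生")["wav"]
soundfile.write("out.wav", speech.numpy(), text2speech.fs, "PCM_16")
```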
{"language": "zh", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["csmsc"]}
espnet/kan-bayashi_csmsc_transformer
null
[ "espnet", "audio", "text-to-speech", "zh", "dataset:csmsc", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-to-speech
espnet
## Example ESPnet2 TTS model ### `kan-bayashi/csmsc_tts_train_conformer_fastspeech2_raw_phn_pypinyin_g2p_phone_train.loss.ave` ♻️ Imported from https://zenodo.org/record/4031955/ This model was trained by kan-bayashi using csmsc/tts1 recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```python # coming soon ``` ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } @inproceedings{hayashi2020espnet, title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit}, author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu}, booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages={7654--7658}, year={2020}, organization={IEEE} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
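Pending the demo above, a minimal unofficial sketch using this card's full model tag (`torch`, `espnet_model_zoo`, and `pypinyin` assumed installed):

```python
# Unofficial sketch: Conformer-FastSpeech2 inference on CSMSC text.
import soundfile
from espnet2.bin.tts_inference import Text2Speech

text2speech = Text2Speech.from_pretrained(
    "espnet/kan-bayashi_csmsc_tts_train_conformer_fastspeech2_raw_phn_pypinyin_g2p_phone_train.loss.ave"
)
speech = text2speech("春江潮水连海平,海上明月共潮生")["wav"]
soundfile.write("out.wav", speech.numpy(), text2speech.fs, "PCM_16")
```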
{"language": "zh", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["csmsc"]}
espnet/kan-bayashi_csmsc_tts_train_conformer_fastspeech2_raw_phn_pypinyin_g2p_phone_train.loss.ave
null
[ "espnet", "audio", "text-to-speech", "zh", "dataset:csmsc", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-to-speech
espnet
## Example ESPnet2 TTS model ### `kan-bayashi/csmsc_tts_train_fastspeech2_raw_phn_pypinyin_g2p_phone_train.loss.ave` ♻️ Imported from https://zenodo.org/record/4031953/ This model was trained by kan-bayashi using csmsc/tts1 recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```python # coming soon ``` ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } @inproceedings{hayashi2020espnet, title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit}, author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu}, booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages={7654--7658}, year={2020}, organization={IEEE} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
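Pending the demo above, the same hedged recipe with this card's model tag (`torch`, `espnet_model_zoo`, and `pypinyin` assumed installed):

```python
# Unofficial sketch: FastSpeech2 inference via the full model tag.
import soundfile
from espnet2.bin.tts_inference import Text2Speech

text2speech = Text2Speech.from_pretrained(
    "espnet/kan-bayashi_csmsc_tts_train_fastspeech2_raw_phn_pypinyin_g2p_phone_train.loss.ave"
)
speech = text2speech("春江潮水连海平,海上明月共潮生")["wav"]
soundfile.write("out.wav", speech.numpy(), text2speech.fs, "PCM_16")
```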
{"language": "zh", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["csmsc"]}
espnet/kan-bayashi_csmsc_tts_train_fastspeech2_raw_phn_pypinyin_g2p_phone_train.loss.ave
null
[ "espnet", "audio", "text-to-speech", "zh", "dataset:csmsc", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-to-speech
espnet
## Example ESPnet2 TTS model ### `kan-bayashi/csmsc_tts_train_fastspeech_raw_phn_pypinyin_g2p_phone_train.loss.best` ♻️ Imported from https://zenodo.org/record/3986227/ This model was trained by kan-bayashi using csmsc/tts1 recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```python # coming soon ``` ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } @inproceedings{hayashi2020espnet, title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit}, author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu}, booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages={7654--7658}, year={2020}, organization={IEEE} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
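Pending the demo above, a minimal unofficial sketch with this card's model tag (`torch`, `espnet_model_zoo`, and `pypinyin` assumed installed):

```python
# Unofficial sketch: FastSpeech inference via the full model tag.
import soundfile
from espnet2.bin.tts_inference import Text2Speech

text2speech = Text2Speech.from_pretrained(
    "espnet/kan-bayashi_csmsc_tts_train_fastspeech_raw_phn_pypinyin_g2p_phone_train.loss.best"
)
speech = text2speech("春江潮水连海平,海上明月共潮生")["wav"]
soundfile.write("out.wav", speech.numpy(), text2speech.fs, "PCM_16")
```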
{"language": "zh", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["csmsc"]}
espnet/kan-bayashi_csmsc_tts_train_fastspeech_raw_phn_pypinyin_g2p_phone_train.loss.best
null
[ "espnet", "audio", "text-to-speech", "zh", "dataset:csmsc", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-to-speech
espnet
## ESPnet2 TTS pretrained model ### `kan-bayashi/csmsc_tts_train_full_band_vits_raw_phn_pypinyin_g2p_phone_train.total_count.ave` ♻️ Imported from https://zenodo.org/record/5443852/ This model was trained by kan-bayashi using csmsc/tts1 recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```python # coming soon ``` ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } @inproceedings{hayashi2020espnet, title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit}, author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu}, booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages={7654--7658}, year={2020}, organization={IEEE} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
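Pending the demo above, a minimal unofficial sketch (`torch`, `espnet_model_zoo`, and `pypinyin` assumed installed); being a VITS model, it produces the waveform directly:

```python
# Unofficial sketch: full-band VITS inference via the full model tag.
import soundfile
from espnet2.bin.tts_inference import Text2Speech

text2speech = Text2Speech.from_pretrained(
    "espnet/kan-bayashi_csmsc_tts_train_full_band_vits_raw_phn_pypinyin_g2p_phone_train.total_count.ave"
)
speech = text2speech("春江潮水连海平,海上明月共潮生")["wav"]
soundfile.write("out.wav", speech.numpy(), text2speech.fs, "PCM_16")
```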
{"language": "zh", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["csmsc"]}
espnet/kan-bayashi_csmsc_tts_train_full_band_vits_raw_phn_pypinyin_g2p_phone_train.total_count.ave
null
[ "espnet", "audio", "text-to-speech", "zh", "dataset:csmsc", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-to-speech
espnet
## Example ESPnet2 TTS model ### `kan-bayashi/csmsc_tts_train_tacotron2_raw_phn_pypinyin_g2p_phone_train.loss.best` ♻️ Imported from https://zenodo.org/record/3969118/ This model was trained by kan-bayashi using csmsc/tts1 recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 You first need to install the following packages ```bash pip install torch pip install espnet_model_zoo ``` Then start using it! ```python import soundfile from espnet2.bin.tts_inference import Text2Speech text2speech = Text2Speech.from_pretrained("espnet/kan-bayashi_csmsc_tts_train_tacotron2_raw_phn_pypinyin_g2p_phone_train.loss.best") text = "春江潮水连海平,海上明月共潮生" speech = text2speech(text)["wav"] soundfile.write("out.wav", speech.numpy(), text2speech.fs, "PCM_16") ``` ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } @inproceedings{hayashi2020espnet, title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit}, author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu}, booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages={7654--7658}, year={2020}, organization={IEEE} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
{"language": "zh", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["csmsc"]}
espnet/kan-bayashi_csmsc_tts_train_tacotron2_raw_phn_pypinyin_g2p_phone_train.loss.best
null
[ "espnet", "audio", "text-to-speech", "zh", "dataset:csmsc", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-to-speech
espnet
## Example ESPnet2 TTS model ### `kan-bayashi/csmsc_tts_train_transformer_raw_phn_pypinyin_g2p_phone_train.loss.ave` ♻️ Imported from https://zenodo.org/record/4034125/ This model was trained by kan-bayashi using csmsc/tts1 recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```python # coming soon ``` ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } @inproceedings{hayashi2020espnet, title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit}, author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu}, booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages={7654--7658}, year={2020}, organization={IEEE} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
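Pending the demo above, a minimal unofficial sketch with this card's model tag (`torch`, `espnet_model_zoo`, and `pypinyin` assumed installed; Griffin-Lim fallback applies without a neural vocoder):

```python
# Unofficial sketch: Transformer-TTS inference via the full model tag.
import soundfile
from espnet2.bin.tts_inference import Text2Speech

text2speech = Text2Speech.from_pretrained(
    "espnet/kan-bayashi_csmsc_tts_train_transformer_raw_phn_pypinyin_g2p_phone_train.loss.ave"
)
speech = text2speech("春江潮水连海平,海上明月共潮生")["wav"]
soundfile.write("out.wav", speech.numpy(), text2speech.fs, "PCM_16")
```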
{"language": "zh", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["csmsc"]}
espnet/kan-bayashi_csmsc_tts_train_transformer_raw_phn_pypinyin_g2p_phone_train.loss.ave
null
[ "espnet", "audio", "text-to-speech", "zh", "dataset:csmsc", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-to-speech
espnet
## ESPnet2 TTS pretrained model ### `kan-bayashi/csmsc_tts_train_vits_raw_phn_pypinyin_g2p_phone_train.total_count.ave` ♻️ Imported from https://zenodo.org/record/5499120/ This model was trained by kan-bayashi using csmsc/tts1 recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```python # coming soon ``` ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } @inproceedings{hayashi2020espnet, title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit}, author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu}, booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages={7654--7658}, year={2020}, organization={IEEE} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
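Pending the demo above, a minimal unofficial sketch (`torch`, `espnet_model_zoo`, and `pypinyin` assumed installed); VITS outputs the waveform directly, so no vocoder step is involved:

```python
# Unofficial sketch: VITS inference via the full model tag.
import soundfile
from espnet2.bin.tts_inference import Text2Speech

text2speech = Text2Speech.from_pretrained(
    "espnet/kan-bayashi_csmsc_tts_train_vits_raw_phn_pypinyin_g2p_phone_train.total_count.ave"
)
speech = text2speech("春江潮水连海平,海上明月共潮生")["wav"]
soundfile.write("out.wav", speech.numpy(), text2speech.fs, "PCM_16")
```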
{"language": "zh", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["csmsc"]}
espnet/kan-bayashi_csmsc_tts_train_vits_raw_phn_pypinyin_g2p_phone_train.total_count.ave
null
[ "espnet", "audio", "text-to-speech", "zh", "dataset:csmsc", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-to-speech
espnet
## ESPnet2 TTS pretrained model ### `kan-bayashi/csmsc_vits` ♻️ Imported from https://zenodo.org/record/5499120/ This model was trained by kan-bayashi using csmsc/tts1 recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```python # coming soon ``` ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } @inproceedings{hayashi2020espnet, title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit}, author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu}, booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages={7654--7658}, year={2020}, organization={IEEE} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
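Pending the demo above, a minimal unofficial sketch using this card's short alias tag (`torch`, `espnet_model_zoo`, and `pypinyin` assumed installed):

```python
# Unofficial sketch: CSMSC VITS inference; waveform is generated end to end.
import soundfile
from espnet2.bin.tts_inference import Text2Speech

text2speech = Text2Speech.from_pretrained("espnet/kan-bayashi_csmsc_vits")
speech = text2speech("春江潮水连海平,海上明月共潮生")["wav"]
soundfile.write("out.wav", speech.numpy(), text2speech.fs, "PCM_16")
```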
{"language": "zh", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["csmsc"]}
espnet/kan-bayashi_csmsc_vits
null
[ "espnet", "audio", "text-to-speech", "zh", "dataset:csmsc", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-to-speech
espnet
## Example ESPnet2 TTS model ### `kan-bayashi/jsut_conformer_fastspeech2` ♻️ Imported from https://zenodo.org/record/4032246/ This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```python # coming soon ``` ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } @inproceedings{hayashi2020espnet, title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit}, author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu}, booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages={7654--7658}, year={2020}, organization={IEEE} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
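Pending the demo above, a minimal unofficial sketch; it assumes `torch`, `espnet_model_zoo`, and `pyopenjtalk` (for the Japanese g2p) are installed, and the input sentence is only illustrative:

```python
# Unofficial sketch: JSUT Conformer-FastSpeech2 inference.
import soundfile
from espnet2.bin.tts_inference import Text2Speech

text2speech = Text2Speech.from_pretrained("espnet/kan-bayashi_jsut_conformer_fastspeech2")
speech = text2speech("こんにちは、音声合成のテストです。")["wav"]  # illustrative input
soundfile.write("out.wav", speech.numpy(), text2speech.fs, "PCM_16")
```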
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jsut"]}
espnet/kan-bayashi_jsut_conformer_fastspeech2
null
[ "espnet", "audio", "text-to-speech", "ja", "dataset:jsut", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-to-speech
espnet
## Example ESPnet2 TTS model ### `kan-bayashi/jsut_conformer_fastspeech2_accent` ♻️ Imported from https://zenodo.org/record/4381102/ This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```python # coming soon ``` ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } @inproceedings{hayashi2020espnet, title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit}, author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu}, booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages={7654--7658}, year={2020}, organization={IEEE} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
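Pending the demo above, the same hedged JSUT recipe with this card's model tag (`torch`, `espnet_model_zoo`, and `pyopenjtalk` assumed installed):

```python
# Unofficial sketch: accent-aware JSUT Conformer-FastSpeech2 inference.
import soundfile
from espnet2.bin.tts_inference import Text2Speech

text2speech = Text2Speech.from_pretrained("espnet/kan-bayashi_jsut_conformer_fastspeech2_accent")
speech = text2speech("こんにちは、音声合成のテストです。")["wav"]  # illustrative input
soundfile.write("out.wav", speech.numpy(), text2speech.fs, "PCM_16")
```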
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jsut"]}
espnet/kan-bayashi_jsut_conformer_fastspeech2_accent
null
[ "espnet", "audio", "text-to-speech", "ja", "dataset:jsut", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-to-speech
espnet
## Example ESPnet2 TTS model ### `kan-bayashi/jsut_conformer_fastspeech2_accent_with_pause` ♻️ Imported from https://zenodo.org/record/4436448/ This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```python # coming soon ``` ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } @inproceedings{hayashi2020espnet, title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit}, author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu}, booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages={7654--7658}, year={2020}, organization={IEEE} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
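Pending the demo above, a minimal unofficial sketch with this card's model tag (`torch`, `espnet_model_zoo`, and `pyopenjtalk` assumed installed):

```python
# Unofficial sketch: accent-with-pause JSUT Conformer-FastSpeech2 inference.
import soundfile
from espnet2.bin.tts_inference import Text2Speech

text2speech = Text2Speech.from_pretrained(
    "espnet/kan-bayashi_jsut_conformer_fastspeech2_accent_with_pause"
)
speech = text2speech("こんにちは、音声合成のテストです。")["wav"]  # illustrative input
soundfile.write("out.wav", speech.numpy(), text2speech.fs, "PCM_16")
```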
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jsut"]}
espnet/kan-bayashi_jsut_conformer_fastspeech2_accent_with_pause
null
[ "espnet", "audio", "text-to-speech", "ja", "dataset:jsut", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-to-speech
espnet
## ESPnet2 TTS pretrained model

### `kan-bayashi/jsut_conformer_fastspeech2_tacotron2_prosody`

♻️ Imported from https://zenodo.org/record/5499050/

This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).

### Demo: How to use in ESPnet2

```python
# coming soon
```

### Citing ESPnet

```BibTex
@inproceedings{watanabe2018espnet,
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  title={{ESPnet}: End-to-End Speech Processing Toolkit},
  year={2018},
  booktitle={Proceedings of Interspeech},
  pages={2207--2211},
  doi={10.21437/Interspeech.2018-1456},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}

@inproceedings{hayashi2020espnet,
  title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
  author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
  booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages={7654--7658},
  year={2020},
  organization={IEEE}
}
```

or arXiv:

```bibtex
@misc{watanabe2018espnet,
  title={ESPnet: End-to-End Speech Processing Toolkit},
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  year={2018},
  eprint={1804.00015},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
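A minimal usage sketch in place of the pending demo, assuming `espnet` and `espnet_model_zoo` are installed and that this repo id resolves through `Text2Speech.from_pretrained`:

```python
import soundfile
from espnet2.bin.tts_inference import Text2Speech

# Assumption: this Hugging Face repo id is downloadable via espnet_model_zoo.
text2speech = Text2Speech.from_pretrained("espnet/kan-bayashi_jsut_conformer_fastspeech2_tacotron2_prosody")
speech = text2speech("こんにちは、世界。")["wav"]  # synthesize Japanese text
soundfile.write("out.wav", speech.numpy(), text2speech.fs, "PCM_16")
```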
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jsut"]}
espnet/kan-bayashi_jsut_conformer_fastspeech2_tacotron2_prosody
null
[ "espnet", "audio", "text-to-speech", "ja", "dataset:jsut", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-to-speech
espnet
## ESPnet2 TTS pretrained model

### `kan-bayashi/jsut_conformer_fastspeech2_transformer_prosody`

♻️ Imported from https://zenodo.org/record/5499066/

This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).

### Demo: How to use in ESPnet2

```python
# coming soon
```

### Citing ESPnet

```BibTex
@inproceedings{watanabe2018espnet,
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  title={{ESPnet}: End-to-End Speech Processing Toolkit},
  year={2018},
  booktitle={Proceedings of Interspeech},
  pages={2207--2211},
  doi={10.21437/Interspeech.2018-1456},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}

@inproceedings{hayashi2020espnet,
  title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
  author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
  booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages={7654--7658},
  year={2020},
  organization={IEEE}
}
```

or arXiv:

```bibtex
@misc{watanabe2018espnet,
  title={ESPnet: End-to-End Speech Processing Toolkit},
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  year={2018},
  eprint={1804.00015},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
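As above, a minimal usage sketch until the demo is published; it assumes `espnet` and `espnet_model_zoo` are installed and the repo id below is resolvable:

```python
import soundfile
from espnet2.bin.tts_inference import Text2Speech

# Assumption: this Hugging Face repo id is downloadable via espnet_model_zoo.
text2speech = Text2Speech.from_pretrained("espnet/kan-bayashi_jsut_conformer_fastspeech2_transformer_prosody")
speech = text2speech("こんにちは、世界。")["wav"]  # synthesize Japanese text
soundfile.write("out.wav", speech.numpy(), text2speech.fs, "PCM_16")
```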
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jsut"]}
espnet/kan-bayashi_jsut_conformer_fastspeech2_transformer_prosody
null
[ "espnet", "audio", "text-to-speech", "ja", "dataset:jsut", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-to-speech
espnet
## Example ESPnet2 TTS model

### `kan-bayashi/jsut_fastspeech`

♻️ Imported from https://zenodo.org/record/3986225/

This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).

### Demo: How to use in ESPnet2

```python
# coming soon
```

### Citing ESPnet

```BibTex
@inproceedings{watanabe2018espnet,
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  title={{ESPnet}: End-to-End Speech Processing Toolkit},
  year={2018},
  booktitle={Proceedings of Interspeech},
  pages={2207--2211},
  doi={10.21437/Interspeech.2018-1456},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}

@inproceedings{hayashi2020espnet,
  title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
  author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
  booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages={7654--7658},
  year={2020},
  organization={IEEE}
}
```

or arXiv:

```bibtex
@misc{watanabe2018espnet,
  title={ESPnet: End-to-End Speech Processing Toolkit},
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  year={2018},
  eprint={1804.00015},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
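Since the demo above is still "coming soon", a minimal usage sketch, assuming `espnet` and `espnet_model_zoo` are installed and that this repo id resolves through `Text2Speech.from_pretrained`:

```python
import soundfile
from espnet2.bin.tts_inference import Text2Speech

# Assumption: this Hugging Face repo id is downloadable via espnet_model_zoo.
# For a mel-spectrogram model like FastSpeech, Text2Speech typically falls
# back to Griffin-Lim unless a neural vocoder is supplied.
text2speech = Text2Speech.from_pretrained("espnet/kan-bayashi_jsut_fastspeech")
speech = text2speech("こんにちは、世界。")["wav"]  # synthesize Japanese text
soundfile.write("out.wav", speech.numpy(), text2speech.fs, "PCM_16")
```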
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jsut"]}
espnet/kan-bayashi_jsut_fastspeech
null
[ "espnet", "audio", "text-to-speech", "ja", "dataset:jsut", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-to-speech
espnet
## Example ESPnet2 TTS model

### `kan-bayashi/jsut_fastspeech2`

♻️ Imported from https://zenodo.org/record/4032224/

This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).

### Demo: How to use in ESPnet2

```python
# coming soon
```

### Citing ESPnet

```BibTex
@inproceedings{watanabe2018espnet,
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  title={{ESPnet}: End-to-End Speech Processing Toolkit},
  year={2018},
  booktitle={Proceedings of Interspeech},
  pages={2207--2211},
  doi={10.21437/Interspeech.2018-1456},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}

@inproceedings{hayashi2020espnet,
  title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
  author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
  booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages={7654--7658},
  year={2020},
  organization={IEEE}
}
```

or arXiv:

```bibtex
@misc{watanabe2018espnet,
  title={ESPnet: End-to-End Speech Processing Toolkit},
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  year={2018},
  eprint={1804.00015},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
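Until the official demo is published, a minimal usage sketch, assuming `espnet` and `espnet_model_zoo` are installed and that this repo id is resolvable:

```python
import soundfile
from espnet2.bin.tts_inference import Text2Speech

# Assumption: this Hugging Face repo id is downloadable via espnet_model_zoo.
text2speech = Text2Speech.from_pretrained("espnet/kan-bayashi_jsut_fastspeech2")
speech = text2speech("こんにちは、世界。")["wav"]  # synthesize Japanese text
soundfile.write("out.wav", speech.numpy(), text2speech.fs, "PCM_16")
```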
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jsut"]}
espnet/kan-bayashi_jsut_fastspeech2
null
[ "espnet", "audio", "text-to-speech", "ja", "dataset:jsut", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-to-speech
espnet
## Example ESPnet2 TTS model

### `kan-bayashi/jsut_fastspeech2_accent`

♻️ Imported from https://zenodo.org/record/4381100/

This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).

### Demo: How to use in ESPnet2

```python
# coming soon
```

### Citing ESPnet

```BibTex
@inproceedings{watanabe2018espnet,
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  title={{ESPnet}: End-to-End Speech Processing Toolkit},
  year={2018},
  booktitle={Proceedings of Interspeech},
  pages={2207--2211},
  doi={10.21437/Interspeech.2018-1456},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}

@inproceedings{hayashi2020espnet,
  title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
  author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
  booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages={7654--7658},
  year={2020},
  organization={IEEE}
}
```

or arXiv:

```bibtex
@misc{watanabe2018espnet,
  title={ESPnet: End-to-End Speech Processing Toolkit},
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  year={2018},
  eprint={1804.00015},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
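Since the demo above is still "coming soon", a minimal usage sketch, assuming `espnet` and `espnet_model_zoo` are installed and that this repo id resolves through `Text2Speech.from_pretrained`:

```python
import soundfile
from espnet2.bin.tts_inference import Text2Speech

# Assumption: this Hugging Face repo id is downloadable via espnet_model_zoo.
text2speech = Text2Speech.from_pretrained("espnet/kan-bayashi_jsut_fastspeech2_accent")
speech = text2speech("こんにちは、世界。")["wav"]  # synthesize Japanese text
soundfile.write("out.wav", speech.numpy(), text2speech.fs, "PCM_16")
```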
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jsut"]}
espnet/kan-bayashi_jsut_fastspeech2_accent
null
[ "espnet", "audio", "text-to-speech", "ja", "dataset:jsut", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-to-speech
espnet
## Example ESPnet2 TTS model

### `kan-bayashi/jsut_fastspeech2_accent_with_pause`

♻️ Imported from https://zenodo.org/record/4436450/

This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).

### Demo: How to use in ESPnet2

```python
# coming soon
```

### Citing ESPnet

```BibTex
@inproceedings{watanabe2018espnet,
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  title={{ESPnet}: End-to-End Speech Processing Toolkit},
  year={2018},
  booktitle={Proceedings of Interspeech},
  pages={2207--2211},
  doi={10.21437/Interspeech.2018-1456},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}

@inproceedings{hayashi2020espnet,
  title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
  author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
  booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages={7654--7658},
  year={2020},
  organization={IEEE}
}
```

or arXiv:

```bibtex
@misc{watanabe2018espnet,
  title={ESPnet: End-to-End Speech Processing Toolkit},
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  year={2018},
  eprint={1804.00015},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
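Until the official demo lands, a minimal usage sketch, assuming `espnet` and `espnet_model_zoo` are installed and that this repo id is resolvable:

```python
import soundfile
from espnet2.bin.tts_inference import Text2Speech

# Assumption: this Hugging Face repo id is downloadable via espnet_model_zoo.
text2speech = Text2Speech.from_pretrained("espnet/kan-bayashi_jsut_fastspeech2_accent_with_pause")
speech = text2speech("こんにちは、世界。")["wav"]  # synthesize Japanese text
soundfile.write("out.wav", speech.numpy(), text2speech.fs, "PCM_16")
```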
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jsut"]}
espnet/kan-bayashi_jsut_fastspeech2_accent_with_pause
null
[ "espnet", "audio", "text-to-speech", "ja", "dataset:jsut", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-to-speech
espnet
## ESPnet2 TTS pretrained model

### `kan-bayashi/jsut_full_band_vits_accent_with_pause`

♻️ Imported from https://zenodo.org/record/5431984/

This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).

### Demo: How to use in ESPnet2

```python
# coming soon
```

### Citing ESPnet

```BibTex
@inproceedings{watanabe2018espnet,
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  title={{ESPnet}: End-to-End Speech Processing Toolkit},
  year={2018},
  booktitle={Proceedings of Interspeech},
  pages={2207--2211},
  doi={10.21437/Interspeech.2018-1456},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}

@inproceedings{hayashi2020espnet,
  title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
  author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
  booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages={7654--7658},
  year={2020},
  organization={IEEE}
}
```

or arXiv:

```bibtex
@misc{watanabe2018espnet,
  title={ESPnet: End-to-End Speech Processing Toolkit},
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  year={2018},
  eprint={1804.00015},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
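Since the demo above is still "coming soon", a minimal usage sketch, assuming `espnet` and `espnet_model_zoo` are installed and that this repo id resolves through `Text2Speech.from_pretrained`:

```python
import soundfile
from espnet2.bin.tts_inference import Text2Speech

# Assumption: this Hugging Face repo id is downloadable via espnet_model_zoo.
# VITS is end-to-end, so the waveform comes straight from the model
# with no separate vocoder.
text2speech = Text2Speech.from_pretrained("espnet/kan-bayashi_jsut_full_band_vits_accent_with_pause")
speech = text2speech("こんにちは、世界。")["wav"]  # synthesize Japanese text
soundfile.write("out.wav", speech.numpy(), text2speech.fs, "PCM_16")
```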
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jsut"]}
espnet/kan-bayashi_jsut_full_band_vits_accent_with_pause
null
[ "espnet", "audio", "text-to-speech", "ja", "dataset:jsut", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-to-speech
espnet
## ESPnet2 TTS pretrained model

### `kan-bayashi/jsut_full_band_vits_prosody`

♻️ Imported from https://zenodo.org/record/5521340/

This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).

### Demo: How to use in ESPnet2

```python
# coming soon
```

### Citing ESPnet

```BibTex
@inproceedings{watanabe2018espnet,
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  title={{ESPnet}: End-to-End Speech Processing Toolkit},
  year={2018},
  booktitle={Proceedings of Interspeech},
  pages={2207--2211},
  doi={10.21437/Interspeech.2018-1456},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}

@inproceedings{hayashi2020espnet,
  title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
  author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
  booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages={7654--7658},
  year={2020},
  organization={IEEE}
}
```

or arXiv:

```bibtex
@misc{watanabe2018espnet,
  title={ESPnet: End-to-End Speech Processing Toolkit},
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  year={2018},
  eprint={1804.00015},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
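As above, a minimal usage sketch until the demo is published, assuming `espnet` and `espnet_model_zoo` are installed and the repo id below is resolvable:

```python
import soundfile
from espnet2.bin.tts_inference import Text2Speech

# Assumption: this Hugging Face repo id is downloadable via espnet_model_zoo.
text2speech = Text2Speech.from_pretrained("espnet/kan-bayashi_jsut_full_band_vits_prosody")
speech = text2speech("こんにちは、世界。")["wav"]  # synthesize Japanese text
soundfile.write("out.wav", speech.numpy(), text2speech.fs, "PCM_16")
```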
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jsut"]}
espnet/kan-bayashi_jsut_full_band_vits_prosody
null
[ "espnet", "audio", "text-to-speech", "ja", "dataset:jsut", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-to-speech
espnet
## Example ESPnet2 TTS model

### `kan-bayashi/jsut_tacotron2`

♻️ Imported from https://zenodo.org/record/3963886/

This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).

### Demo: How to use in ESPnet2

```python
# coming soon
```

### Citing ESPnet

```BibTex
@inproceedings{watanabe2018espnet,
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  title={{ESPnet}: End-to-End Speech Processing Toolkit},
  year={2018},
  booktitle={Proceedings of Interspeech},
  pages={2207--2211},
  doi={10.21437/Interspeech.2018-1456},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}

@inproceedings{hayashi2020espnet,
  title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
  author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
  booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages={7654--7658},
  year={2020},
  organization={IEEE}
}
```

or arXiv:

```bibtex
@misc{watanabe2018espnet,
  title={ESPnet: End-to-End Speech Processing Toolkit},
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  year={2018},
  eprint={1804.00015},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
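Since the demo above is still "coming soon", a minimal usage sketch, assuming `espnet` and `espnet_model_zoo` are installed and that this repo id resolves through `Text2Speech.from_pretrained`:

```python
import soundfile
from espnet2.bin.tts_inference import Text2Speech

# Assumption: this Hugging Face repo id is downloadable via espnet_model_zoo.
text2speech = Text2Speech.from_pretrained("espnet/kan-bayashi_jsut_tacotron2")
speech = text2speech("こんにちは、世界。")["wav"]  # synthesize Japanese text
soundfile.write("out.wav", speech.numpy(), text2speech.fs, "PCM_16")
```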
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jsut"]}
espnet/kan-bayashi_jsut_tacotron2
null
[ "espnet", "audio", "text-to-speech", "ja", "dataset:jsut", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-to-speech
espnet
## Example ESPnet2 TTS model

### `kan-bayashi/jsut_tacotron2_accent`

♻️ Imported from https://zenodo.org/record/4381098/

This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).

### Demo: How to use in ESPnet2

```python
# coming soon
```

### Citing ESPnet

```BibTex
@inproceedings{watanabe2018espnet,
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  title={{ESPnet}: End-to-End Speech Processing Toolkit},
  year={2018},
  booktitle={Proceedings of Interspeech},
  pages={2207--2211},
  doi={10.21437/Interspeech.2018-1456},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}

@inproceedings{hayashi2020espnet,
  title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
  author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
  booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages={7654--7658},
  year={2020},
  organization={IEEE}
}
```

or arXiv:

```bibtex
@misc{watanabe2018espnet,
  title={ESPnet: End-to-End Speech Processing Toolkit},
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  year={2018},
  eprint={1804.00015},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
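Until the official demo lands, a minimal usage sketch, assuming `espnet` and `espnet_model_zoo` are installed and that this repo id is resolvable:

```python
import soundfile
from espnet2.bin.tts_inference import Text2Speech

# Assumption: this Hugging Face repo id is downloadable via espnet_model_zoo.
text2speech = Text2Speech.from_pretrained("espnet/kan-bayashi_jsut_tacotron2_accent")
speech = text2speech("こんにちは、世界。")["wav"]  # synthesize Japanese text
soundfile.write("out.wav", speech.numpy(), text2speech.fs, "PCM_16")
```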
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jsut"]}
espnet/kan-bayashi_jsut_tacotron2_accent
null
[ "espnet", "audio", "text-to-speech", "ja", "dataset:jsut", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-to-speech
espnet
## Example ESPnet2 TTS model

### `kan-bayashi/jsut_tacotron2_accent_with_pause`

♻️ Imported from https://zenodo.org/record/4433194/

This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).

### Demo: How to use in ESPnet2

```python
# coming soon
```

### Citing ESPnet

```BibTex
@inproceedings{watanabe2018espnet,
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  title={{ESPnet}: End-to-End Speech Processing Toolkit},
  year={2018},
  booktitle={Proceedings of Interspeech},
  pages={2207--2211},
  doi={10.21437/Interspeech.2018-1456},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}

@inproceedings{hayashi2020espnet,
  title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
  author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
  booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages={7654--7658},
  year={2020},
  organization={IEEE}
}
```

or arXiv:

```bibtex
@misc{watanabe2018espnet,
  title={ESPnet: End-to-End Speech Processing Toolkit},
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  year={2018},
  eprint={1804.00015},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
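Since the demo above is still "coming soon", a minimal usage sketch, assuming `espnet` and `espnet_model_zoo` are installed and that this repo id resolves through `Text2Speech.from_pretrained`:

```python
import soundfile
from espnet2.bin.tts_inference import Text2Speech

# Assumption: this Hugging Face repo id is downloadable via espnet_model_zoo.
text2speech = Text2Speech.from_pretrained("espnet/kan-bayashi_jsut_tacotron2_accent_with_pause")
speech = text2speech("こんにちは、世界。")["wav"]  # synthesize Japanese text
soundfile.write("out.wav", speech.numpy(), text2speech.fs, "PCM_16")
```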
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jsut"]}
espnet/kan-bayashi_jsut_tacotron2_accent_with_pause
null
[ "espnet", "audio", "text-to-speech", "ja", "dataset:jsut", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-to-speech
espnet
## ESPnet2 TTS pretrained model

### `kan-bayashi/jsut_tacotron2_prosody`

♻️ Imported from https://zenodo.org/record/5499026/

This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).

### Demo: How to use in ESPnet2

```python
# coming soon
```

### Citing ESPnet

```BibTex
@inproceedings{watanabe2018espnet,
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  title={{ESPnet}: End-to-End Speech Processing Toolkit},
  year={2018},
  booktitle={Proceedings of Interspeech},
  pages={2207--2211},
  doi={10.21437/Interspeech.2018-1456},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}

@inproceedings{hayashi2020espnet,
  title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
  author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
  booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages={7654--7658},
  year={2020},
  organization={IEEE}
}
```

or arXiv:

```bibtex
@misc{watanabe2018espnet,
  title={ESPnet: End-to-End Speech Processing Toolkit},
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  year={2018},
  eprint={1804.00015},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
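A minimal usage sketch in place of the pending demo, assuming `espnet` and `espnet_model_zoo` are installed and that this repo id is resolvable:

```python
import soundfile
from espnet2.bin.tts_inference import Text2Speech

# Assumption: this Hugging Face repo id is downloadable via espnet_model_zoo.
text2speech = Text2Speech.from_pretrained("espnet/kan-bayashi_jsut_tacotron2_prosody")
speech = text2speech("こんにちは、世界。")["wav"]  # synthesize Japanese text
soundfile.write("out.wav", speech.numpy(), text2speech.fs, "PCM_16")
```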
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jsut"]}
espnet/kan-bayashi_jsut_tacotron2_prosody
null
[ "espnet", "audio", "text-to-speech", "ja", "dataset:jsut", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-to-speech
espnet
## Example ESPnet2 TTS model

### `kan-bayashi/jsut_transformer`

♻️ Imported from https://zenodo.org/record/4034121/

This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).

### Demo: How to use in ESPnet2

```python
# coming soon
```

### Citing ESPnet

```BibTex
@inproceedings{watanabe2018espnet,
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  title={{ESPnet}: End-to-End Speech Processing Toolkit},
  year={2018},
  booktitle={Proceedings of Interspeech},
  pages={2207--2211},
  doi={10.21437/Interspeech.2018-1456},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}

@inproceedings{hayashi2020espnet,
  title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
  author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
  booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages={7654--7658},
  year={2020},
  organization={IEEE}
}
```

or arXiv:

```bibtex
@misc{watanabe2018espnet,
  title={ESPnet: End-to-End Speech Processing Toolkit},
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  year={2018},
  eprint={1804.00015},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
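Since the demo above is still "coming soon", a minimal usage sketch, assuming `espnet` and `espnet_model_zoo` are installed and that this repo id resolves through `Text2Speech.from_pretrained`:

```python
import soundfile
from espnet2.bin.tts_inference import Text2Speech

# Assumption: this Hugging Face repo id is downloadable via espnet_model_zoo.
text2speech = Text2Speech.from_pretrained("espnet/kan-bayashi_jsut_transformer")
speech = text2speech("こんにちは、世界。")["wav"]  # synthesize Japanese text
soundfile.write("out.wav", speech.numpy(), text2speech.fs, "PCM_16")
```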
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jsut"]}
espnet/kan-bayashi_jsut_transformer
null
[ "espnet", "audio", "text-to-speech", "ja", "dataset:jsut", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-to-speech
espnet
## Example ESPnet2 TTS model

### `kan-bayashi/jsut_transformer_accent`

♻️ Imported from https://zenodo.org/record/4381096/

This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).

### Demo: How to use in ESPnet2

```python
# coming soon
```

### Citing ESPnet

```BibTex
@inproceedings{watanabe2018espnet,
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  title={{ESPnet}: End-to-End Speech Processing Toolkit},
  year={2018},
  booktitle={Proceedings of Interspeech},
  pages={2207--2211},
  doi={10.21437/Interspeech.2018-1456},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}

@inproceedings{hayashi2020espnet,
  title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
  author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
  booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages={7654--7658},
  year={2020},
  organization={IEEE}
}
```

or arXiv:

```bibtex
@misc{watanabe2018espnet,
  title={ESPnet: End-to-End Speech Processing Toolkit},
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  year={2018},
  eprint={1804.00015},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
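Until the official demo lands, a minimal usage sketch, assuming `espnet` and `espnet_model_zoo` are installed and that this repo id is resolvable:

```python
import soundfile
from espnet2.bin.tts_inference import Text2Speech

# Assumption: this Hugging Face repo id is downloadable via espnet_model_zoo.
text2speech = Text2Speech.from_pretrained("espnet/kan-bayashi_jsut_transformer_accent")
speech = text2speech("こんにちは、世界。")["wav"]  # synthesize Japanese text
soundfile.write("out.wav", speech.numpy(), text2speech.fs, "PCM_16")
```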
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jsut"]}
espnet/kan-bayashi_jsut_transformer_accent
null
[ "espnet", "audio", "text-to-speech", "ja", "dataset:jsut", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-to-speech
espnet
## Example ESPnet2 TTS model

### `kan-bayashi/jsut_transformer_accent_with_pause`

♻️ Imported from https://zenodo.org/record/4433196/

This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).

### Demo: How to use in ESPnet2

```python
# coming soon
```

### Citing ESPnet

```BibTex
@inproceedings{watanabe2018espnet,
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  title={{ESPnet}: End-to-End Speech Processing Toolkit},
  year={2018},
  booktitle={Proceedings of Interspeech},
  pages={2207--2211},
  doi={10.21437/Interspeech.2018-1456},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}

@inproceedings{hayashi2020espnet,
  title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
  author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
  booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages={7654--7658},
  year={2020},
  organization={IEEE}
}
```

or arXiv:

```bibtex
@misc{watanabe2018espnet,
  title={ESPnet: End-to-End Speech Processing Toolkit},
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  year={2018},
  eprint={1804.00015},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
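Since the demo above is still "coming soon", a minimal usage sketch, assuming `espnet` and `espnet_model_zoo` are installed and that this repo id resolves through `Text2Speech.from_pretrained`:

```python
import soundfile
from espnet2.bin.tts_inference import Text2Speech

# Assumption: this Hugging Face repo id is downloadable via espnet_model_zoo.
text2speech = Text2Speech.from_pretrained("espnet/kan-bayashi_jsut_transformer_accent_with_pause")
speech = text2speech("こんにちは、世界。")["wav"]  # synthesize Japanese text
soundfile.write("out.wav", speech.numpy(), text2speech.fs, "PCM_16")
```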
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jsut"]}
espnet/kan-bayashi_jsut_transformer_accent_with_pause
null
[ "espnet", "audio", "text-to-speech", "ja", "dataset:jsut", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-to-speech
espnet
## ESPnet2 TTS pretrained model

### `kan-bayashi/jsut_transformer_prosody`

♻️ Imported from https://zenodo.org/record/5499040/

This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).

### Demo: How to use in ESPnet2

```python
# coming soon
```

### Citing ESPnet

```BibTex
@inproceedings{watanabe2018espnet,
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  title={{ESPnet}: End-to-End Speech Processing Toolkit},
  year={2018},
  booktitle={Proceedings of Interspeech},
  pages={2207--2211},
  doi={10.21437/Interspeech.2018-1456},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}

@inproceedings{hayashi2020espnet,
  title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
  author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
  booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages={7654--7658},
  year={2020},
  organization={IEEE}
}
```

or arXiv:

```bibtex
@misc{watanabe2018espnet,
  title={ESPnet: End-to-End Speech Processing Toolkit},
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  year={2018},
  eprint={1804.00015},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
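A minimal usage sketch in place of the pending demo, assuming `espnet` and `espnet_model_zoo` are installed and that this repo id is resolvable:

```python
import soundfile
from espnet2.bin.tts_inference import Text2Speech

# Assumption: this Hugging Face repo id is downloadable via espnet_model_zoo.
text2speech = Text2Speech.from_pretrained("espnet/kan-bayashi_jsut_transformer_prosody")
speech = text2speech("こんにちは、世界。")["wav"]  # synthesize Japanese text
soundfile.write("out.wav", speech.numpy(), text2speech.fs, "PCM_16")
```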
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jsut"]}
espnet/kan-bayashi_jsut_transformer_prosody
null
[ "espnet", "audio", "text-to-speech", "ja", "dataset:jsut", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-to-speech
espnet
## Example ESPnet2 TTS model

### `kan-bayashi/jsut_tts_train_conformer_fastspeech2_raw_phn_jaconv_pyopenjtalk_train.loss.ave`

♻️ Imported from https://zenodo.org/record/4032246/

This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).

### Demo: How to use in ESPnet2

```python
# coming soon
```

### Citing ESPnet

```BibTex
@inproceedings{watanabe2018espnet,
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  title={{ESPnet}: End-to-End Speech Processing Toolkit},
  year={2018},
  booktitle={Proceedings of Interspeech},
  pages={2207--2211},
  doi={10.21437/Interspeech.2018-1456},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}

@inproceedings{hayashi2020espnet,
  title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
  author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
  booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages={7654--7658},
  year={2020},
  organization={IEEE}
}
```

or arXiv:

```bibtex
@misc{watanabe2018espnet,
  title={ESPnet: End-to-End Speech Processing Toolkit},
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  year={2018},
  eprint={1804.00015},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
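Since the demo above is still "coming soon", a minimal usage sketch, assuming `espnet` and `espnet_model_zoo` are installed and that this repo id resolves through `Text2Speech.from_pretrained`:

```python
import soundfile
from espnet2.bin.tts_inference import Text2Speech

# Assumption: this Hugging Face repo id is downloadable via espnet_model_zoo.
text2speech = Text2Speech.from_pretrained(
    "espnet/kan-bayashi_jsut_tts_train_conformer_fastspeech2_raw_phn_jaconv_pyopenjtalk_train.loss.ave"
)
speech = text2speech("こんにちは、世界。")["wav"]  # synthesize Japanese text
soundfile.write("out.wav", speech.numpy(), text2speech.fs, "PCM_16")
```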
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jsut"]}
espnet/kan-bayashi_jsut_tts_train_conformer_fastspeech2_raw_phn_jaconv_pyopenjtalk_train.loss.ave
null
[ "espnet", "audio", "text-to-speech", "ja", "dataset:jsut", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-to-speech
espnet
## Example ESPnet2 TTS model

### `kan-bayashi/jsut_tts_train_conformer_fastspeech2_tacotron2_teacher_raw_phn_jaconv_pyopenjtalk_accent_train.loss.ave`

♻️ Imported from https://zenodo.org/record/4381102/

This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).

### Demo: How to use in ESPnet2

```python
# coming soon
```

### Citing ESPnet

```BibTex
@inproceedings{watanabe2018espnet,
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  title={{ESPnet}: End-to-End Speech Processing Toolkit},
  year={2018},
  booktitle={Proceedings of Interspeech},
  pages={2207--2211},
  doi={10.21437/Interspeech.2018-1456},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}

@inproceedings{hayashi2020espnet,
  title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
  author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
  booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages={7654--7658},
  year={2020},
  organization={IEEE}
}
```

or arXiv:

```bibtex
@misc{watanabe2018espnet,
  title={ESPnet: End-to-End Speech Processing Toolkit},
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  year={2018},
  eprint={1804.00015},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
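Until the official demo lands, a minimal usage sketch; it assumes `espnet` and `espnet_model_zoo` are installed and uses the truncated Hugging Face repo id recorded below for this model:

```python
import soundfile
from espnet2.bin.tts_inference import Text2Speech

# Assumption: the truncated repo id from this row is downloadable via espnet_model_zoo.
text2speech = Text2Speech.from_pretrained(
    "espnet/kan-bayashi_jsut_tts_train_conformer_fastspeech2_tacotron2_teacher_raw-truncated-15ef5f"
)
speech = text2speech("こんにちは、世界。")["wav"]  # synthesize Japanese text
soundfile.write("out.wav", speech.numpy(), text2speech.fs, "PCM_16")
```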
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jsut"]}
espnet/kan-bayashi_jsut_tts_train_conformer_fastspeech2_tacotron2_teacher_raw-truncated-15ef5f
null
[ "espnet", "audio", "text-to-speech", "ja", "dataset:jsut", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-to-speech
espnet
## Example ESPnet2 TTS model

### `kan-bayashi/jsut_tts_train_conformer_fastspeech2_tacotron2_teacher_raw_phn_jaconv_pyopenjtalk_accent_with_pause_train.loss.ave`

♻️ Imported from https://zenodo.org/record/4436448/

This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).

### Demo: How to use in ESPnet2

```python
# coming soon
```

### Citing ESPnet

```BibTex
@inproceedings{watanabe2018espnet,
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  title={{ESPnet}: End-to-End Speech Processing Toolkit},
  year={2018},
  booktitle={Proceedings of Interspeech},
  pages={2207--2211},
  doi={10.21437/Interspeech.2018-1456},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}

@inproceedings{hayashi2020espnet,
  title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
  author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
  booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages={7654--7658},
  year={2020},
  organization={IEEE}
}
```

or arXiv:

```bibtex
@misc{watanabe2018espnet,
  title={ESPnet: End-to-End Speech Processing Toolkit},
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  year={2018},
  eprint={1804.00015},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
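As above, a minimal usage sketch, assuming `espnet` and `espnet_model_zoo` are installed and using the truncated repo id recorded for this row:

```python
import soundfile
from espnet2.bin.tts_inference import Text2Speech

# Assumption: the truncated repo id from this row is downloadable via espnet_model_zoo.
text2speech = Text2Speech.from_pretrained(
    "espnet/kan-bayashi_jsut_tts_train_conformer_fastspeech2_tacotron2_teacher_raw-truncated-a7f080"
)
speech = text2speech("こんにちは、世界。")["wav"]  # synthesize Japanese text
soundfile.write("out.wav", speech.numpy(), text2speech.fs, "PCM_16")
```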
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jsut"]}
espnet/kan-bayashi_jsut_tts_train_conformer_fastspeech2_tacotron2_teacher_raw-truncated-a7f080
null
[ "espnet", "audio", "text-to-speech", "ja", "dataset:jsut", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-to-speech
espnet
## ESPnet2 TTS pretrained model

### `kan-bayashi/jsut_tts_train_conformer_fastspeech2_tacotron2_teacher_raw_phn_jaconv_pyopenjtalk_prosody_train.loss.ave`

♻️ Imported from https://zenodo.org/record/5499050/

This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).

### Demo: How to use in ESPnet2

```python
# coming soon
```

### Citing ESPnet

```BibTex
@inproceedings{watanabe2018espnet,
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  title={{ESPnet}: End-to-End Speech Processing Toolkit},
  year={2018},
  booktitle={Proceedings of Interspeech},
  pages={2207--2211},
  doi={10.21437/Interspeech.2018-1456},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}

@inproceedings{hayashi2020espnet,
  title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
  author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
  booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages={7654--7658},
  year={2020},
  organization={IEEE}
}
```

or arXiv:

```bibtex
@misc{watanabe2018espnet,
  title={ESPnet: End-to-End Speech Processing Toolkit},
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  year={2018},
  eprint={1804.00015},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
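A minimal usage sketch in place of the pending demo, assuming `espnet` and `espnet_model_zoo` are installed and using the truncated repo id recorded for this row:

```python
import soundfile
from espnet2.bin.tts_inference import Text2Speech

# Assumption: the truncated repo id from this row is downloadable via espnet_model_zoo.
text2speech = Text2Speech.from_pretrained(
    "espnet/kan-bayashi_jsut_tts_train_conformer_fastspeech2_tacotron2_teacher_raw-truncated-569e81"
)
speech = text2speech("こんにちは、世界。")["wav"]  # synthesize Japanese text
soundfile.write("out.wav", speech.numpy(), text2speech.fs, "PCM_16")
```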
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jsut"]}
espnet/kan-bayashi_jsut_tts_train_conformer_fastspeech2_tacotron2_teacher_raw-truncated-569e81
null
[ "espnet", "audio", "text-to-speech", "ja", "dataset:jsut", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-to-speech
espnet
## Example ESPnet2 TTS model

### `kan-bayashi/jsut_tts_train_conformer_fastspeech2_transformer_teacher_raw_phn_jaconv_pyopenjtalk_accent_train.loss.ave`

♻️ Imported from https://zenodo.org/record/4391409/

This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).

### Demo: How to use in ESPnet2

```python
# coming soon
```

### Citing ESPnet

```BibTex
@inproceedings{watanabe2018espnet,
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  title={{ESPnet}: End-to-End Speech Processing Toolkit},
  year={2018},
  booktitle={Proceedings of Interspeech},
  pages={2207--2211},
  doi={10.21437/Interspeech.2018-1456},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}

@inproceedings{hayashi2020espnet,
  title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
  author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
  booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages={7654--7658},
  year={2020},
  organization={IEEE}
}
```

or arXiv:

```bibtex
@misc{watanabe2018espnet,
  title={ESPnet: End-to-End Speech Processing Toolkit},
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  year={2018},
  eprint={1804.00015},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
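Until the official demo lands, a minimal usage sketch, assuming `espnet` and `espnet_model_zoo` are installed and using the truncated repo id recorded for this row:

```python
import soundfile
from espnet2.bin.tts_inference import Text2Speech

# Assumption: the truncated repo id from this row is downloadable via espnet_model_zoo.
text2speech = Text2Speech.from_pretrained(
    "espnet/kan-bayashi_jsut_tts_train_conformer_fastspeech2_transformer_teacher_r-truncated-35ef5a"
)
speech = text2speech("こんにちは、世界。")["wav"]  # synthesize Japanese text
soundfile.write("out.wav", speech.numpy(), text2speech.fs, "PCM_16")
```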
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jsut"]}
espnet/kan-bayashi_jsut_tts_train_conformer_fastspeech2_transformer_teacher_r-truncated-35ef5a
null
[ "espnet", "audio", "text-to-speech", "ja", "dataset:jsut", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-to-speech
espnet
## Example ESPnet2 TTS model

### `kan-bayashi/jsut_tts_train_conformer_fastspeech2_transformer_teacher_raw_phn_jaconv_pyopenjtalk_accent_with_pause_train.loss.ave`

♻️ Imported from https://zenodo.org/record/4433198/

This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).

### Demo: How to use in ESPnet2

```python
# coming soon
```

### Citing ESPnet

```BibTex
@inproceedings{watanabe2018espnet,
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  title={{ESPnet}: End-to-End Speech Processing Toolkit},
  year={2018},
  booktitle={Proceedings of Interspeech},
  pages={2207--2211},
  doi={10.21437/Interspeech.2018-1456},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}

@inproceedings{hayashi2020espnet,
  title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
  author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
  booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages={7654--7658},
  year={2020},
  organization={IEEE}
}
```

or arXiv:

```bibtex
@misc{watanabe2018espnet,
  title={ESPnet: End-to-End Speech Processing Toolkit},
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  year={2018},
  eprint={1804.00015},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
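As above, a minimal usage sketch, assuming `espnet` and `espnet_model_zoo` are installed and using the truncated repo id recorded for this row:

```python
import soundfile
from espnet2.bin.tts_inference import Text2Speech

# Assumption: the truncated repo id from this row is downloadable via espnet_model_zoo.
text2speech = Text2Speech.from_pretrained(
    "espnet/kan-bayashi_jsut_tts_train_conformer_fastspeech2_transformer_teacher_r-truncated-74c1b4"
)
speech = text2speech("こんにちは、世界。")["wav"]  # synthesize Japanese text
soundfile.write("out.wav", speech.numpy(), text2speech.fs, "PCM_16")
```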
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jsut"]}
espnet/kan-bayashi_jsut_tts_train_conformer_fastspeech2_transformer_teacher_r-truncated-74c1b4
null
[ "espnet", "audio", "text-to-speech", "ja", "dataset:jsut", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-to-speech
espnet
## ESPnet2 TTS pretrained model

### `kan-bayashi/jsut_tts_train_conformer_fastspeech2_transformer_teacher_raw_phn_jaconv_pyopenjtalk_prosody_train.loss.ave`

♻️ Imported from https://zenodo.org/record/5499066/

This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).

### Demo: How to use in ESPnet2

```python
# coming soon
```

### Citing ESPnet

```BibTex
@inproceedings{watanabe2018espnet,
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  title={{ESPnet}: End-to-End Speech Processing Toolkit},
  year={2018},
  booktitle={Proceedings of Interspeech},
  pages={2207--2211},
  doi={10.21437/Interspeech.2018-1456},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}

@inproceedings{hayashi2020espnet,
  title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
  author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
  booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages={7654--7658},
  year={2020},
  organization={IEEE}
}
```

or arXiv:

```bibtex
@misc{watanabe2018espnet,
  title={ESPnet: End-to-End Speech Processing Toolkit},
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  year={2018},
  eprint={1804.00015},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
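A minimal usage sketch in place of the pending demo, assuming `espnet` and `espnet_model_zoo` are installed and using the truncated repo id recorded for this row:

```python
import soundfile
from espnet2.bin.tts_inference import Text2Speech

# Assumption: the truncated repo id from this row is downloadable via espnet_model_zoo.
text2speech = Text2Speech.from_pretrained(
    "espnet/kan-bayashi_jsut_tts_train_conformer_fastspeech2_transformer_teacher_r-truncated-f43d8f"
)
speech = text2speech("こんにちは、世界。")["wav"]  # synthesize Japanese text
soundfile.write("out.wav", speech.numpy(), text2speech.fs, "PCM_16")
```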
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jsut"]}
espnet/kan-bayashi_jsut_tts_train_conformer_fastspeech2_transformer_teacher_r-truncated-f43d8f
null
[ "espnet", "audio", "text-to-speech", "ja", "dataset:jsut", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-to-speech
espnet
## Example ESPnet2 TTS model

### `kan-bayashi/jsut_tts_train_fastspeech2_raw_phn_jaconv_pyopenjtalk_train.loss.ave`

♻️ Imported from https://zenodo.org/record/4032224/

This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).

### Demo: How to use in ESPnet2

```python
# coming soon
```

### Citing ESPnet

```BibTex
@inproceedings{watanabe2018espnet,
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  title={{ESPnet}: End-to-End Speech Processing Toolkit},
  year={2018},
  booktitle={Proceedings of Interspeech},
  pages={2207--2211},
  doi={10.21437/Interspeech.2018-1456},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}

@inproceedings{hayashi2020espnet,
  title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
  author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
  booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages={7654--7658},
  year={2020},
  organization={IEEE}
}
```

or arXiv:

```bibtex
@misc{watanabe2018espnet,
  title={ESPnet: End-to-End Speech Processing Toolkit},
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  year={2018},
  eprint={1804.00015},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
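Since the demo above is still "coming soon", a minimal usage sketch, assuming `espnet` and `espnet_model_zoo` are installed and that this repo id resolves through `Text2Speech.from_pretrained`:

```python
import soundfile
from espnet2.bin.tts_inference import Text2Speech

# Assumption: this Hugging Face repo id is downloadable via espnet_model_zoo.
text2speech = Text2Speech.from_pretrained(
    "espnet/kan-bayashi_jsut_tts_train_fastspeech2_raw_phn_jaconv_pyopenjtalk_train.loss.ave"
)
speech = text2speech("こんにちは、世界。")["wav"]  # synthesize Japanese text
soundfile.write("out.wav", speech.numpy(), text2speech.fs, "PCM_16")
```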
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jsut"]}
espnet/kan-bayashi_jsut_tts_train_fastspeech2_raw_phn_jaconv_pyopenjtalk_train.loss.ave
null
[ "espnet", "audio", "text-to-speech", "ja", "dataset:jsut", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-to-speech
espnet
## Example ESPnet2 TTS model

### `kan-bayashi/jsut_tts_train_fastspeech2_tacotron2_teacher_raw_phn_jaconv_pyopenjtalk_accent_train.loss.ave`

♻️ Imported from https://zenodo.org/record/4381100/

This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).

### Demo: How to use in ESPnet2

```python
# coming soon
```

### Citing ESPnet

```BibTex
@inproceedings{watanabe2018espnet,
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  title={{ESPnet}: End-to-End Speech Processing Toolkit},
  year={2018},
  booktitle={Proceedings of Interspeech},
  pages={2207--2211},
  doi={10.21437/Interspeech.2018-1456},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}

@inproceedings{hayashi2020espnet,
  title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
  author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
  booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages={7654--7658},
  year={2020},
  organization={IEEE}
}
```

or arXiv:

```bibtex
@misc{watanabe2018espnet,
  title={ESPnet: End-to-End Speech Processing Toolkit},
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  year={2018},
  eprint={1804.00015},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
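Until the official demo lands, a minimal usage sketch, assuming `espnet` and `espnet_model_zoo` are installed and using the truncated repo id recorded for this row:

```python
import soundfile
from espnet2.bin.tts_inference import Text2Speech

# Assumption: the truncated repo id from this row is downloadable via espnet_model_zoo.
text2speech = Text2Speech.from_pretrained(
    "espnet/kan-bayashi_jsut_tts_train_fastspeech2_tacotron2_teacher_raw_phn_jacon-truncated-f45dcb"
)
speech = text2speech("こんにちは、世界。")["wav"]  # synthesize Japanese text
soundfile.write("out.wav", speech.numpy(), text2speech.fs, "PCM_16")
```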
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jsut"]}
espnet/kan-bayashi_jsut_tts_train_fastspeech2_tacotron2_teacher_raw_phn_jacon-truncated-f45dcb
null
[ "espnet", "audio", "text-to-speech", "ja", "dataset:jsut", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-to-speech
espnet
## Example ESPnet2 TTS model

### `kan-bayashi/jsut_tts_train_fastspeech2_tacotron2_teacher_raw_phn_jaconv_pyopenjtalk_accent_with_pause_train.loss.ave`

♻️ Imported from https://zenodo.org/record/4436450/

This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).

### Demo: How to use in ESPnet2

```python
# coming soon
```

### Citing ESPnet

```BibTex
@inproceedings{watanabe2018espnet,
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  title={{ESPnet}: End-to-End Speech Processing Toolkit},
  year={2018},
  booktitle={Proceedings of Interspeech},
  pages={2207--2211},
  doi={10.21437/Interspeech.2018-1456},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}

@inproceedings{hayashi2020espnet,
  title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
  author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
  booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages={7654--7658},
  year={2020},
  organization={IEEE}
}
```

or arXiv:

```bibtex
@misc{watanabe2018espnet,
  title={ESPnet: End-to-End Speech Processing Toolkit},
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  year={2018},
  eprint={1804.00015},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
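As above, a minimal usage sketch, assuming `espnet` and `espnet_model_zoo` are installed and using the truncated repo id recorded for this row:

```python
import soundfile
from espnet2.bin.tts_inference import Text2Speech

# Assumption: the truncated repo id from this row is downloadable via espnet_model_zoo.
text2speech = Text2Speech.from_pretrained(
    "espnet/kan-bayashi_jsut_tts_train_fastspeech2_tacotron2_teacher_raw_phn_jacon-truncated-e5d906"
)
speech = text2speech("こんにちは、世界。")["wav"]  # synthesize Japanese text
soundfile.write("out.wav", speech.numpy(), text2speech.fs, "PCM_16")
```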
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jsut"]}
espnet/kan-bayashi_jsut_tts_train_fastspeech2_tacotron2_teacher_raw_phn_jacon-truncated-e5d906
null
[ "espnet", "audio", "text-to-speech", "ja", "dataset:jsut", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-to-speech
espnet
## Example ESPnet2 TTS model

### `kan-bayashi/jsut_tts_train_fastspeech2_transformer_teacher_raw_phn_jaconv_pyopenjtalk_accent_train.loss.ave`

♻️ Imported from https://zenodo.org/record/4391405/

This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).

### Demo: How to use in ESPnet2

```python
# coming soon
```

### Citing ESPnet

```BibTex
@inproceedings{watanabe2018espnet,
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  title={{ESPnet}: End-to-End Speech Processing Toolkit},
  year={2018},
  booktitle={Proceedings of Interspeech},
  pages={2207--2211},
  doi={10.21437/Interspeech.2018-1456},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}

@inproceedings{hayashi2020espnet,
  title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
  author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
  booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages={7654--7658},
  year={2020},
  organization={IEEE}
}
```

or arXiv:

```bibtex
@misc{watanabe2018espnet,
  title={ESPnet: End-to-End Speech Processing Toolkit},
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  year={2018},
  eprint={1804.00015},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
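Until the official demo lands, a minimal usage sketch, assuming `espnet` and `espnet_model_zoo` are installed and using the truncated repo id recorded for this row:

```python
import soundfile
from espnet2.bin.tts_inference import Text2Speech

# Assumption: the truncated repo id from this row is downloadable via espnet_model_zoo.
text2speech = Text2Speech.from_pretrained(
    "espnet/kan-bayashi_jsut_tts_train_fastspeech2_transformer_teacher_raw_phn_jac-truncated-6f4cf5"
)
speech = text2speech("こんにちは、世界。")["wav"]  # synthesize Japanese text
soundfile.write("out.wav", speech.numpy(), text2speech.fs, "PCM_16")
```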
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jsut"]}
espnet/kan-bayashi_jsut_tts_train_fastspeech2_transformer_teacher_raw_phn_jac-truncated-6f4cf5
null
[ "espnet", "audio", "text-to-speech", "ja", "dataset:jsut", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-to-speech
espnet
## Example ESPnet2 TTS model

### `kan-bayashi/jsut_tts_train_fastspeech2_transformer_teacher_raw_phn_jaconv_pyopenjtalk_accent_with_pause_train.loss.ave`

♻️ Imported from https://zenodo.org/record/4433200/

This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).

### Demo: How to use in ESPnet2

```python
# coming soon
```

### Citing ESPnet

```bibtex
@inproceedings{watanabe2018espnet,
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  title={{ESPnet}: End-to-End Speech Processing Toolkit},
  year={2018},
  booktitle={Proceedings of Interspeech},
  pages={2207--2211},
  doi={10.21437/Interspeech.2018-1456},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}

@inproceedings{hayashi2020espnet,
  title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
  author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
  booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages={7654--7658},
  year={2020},
  organization={IEEE}
}
```

or arXiv:

```bibtex
@misc{watanabe2018espnet,
  title={ESPnet: End-to-End Speech Processing Toolkit},
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  year={2018},
  eprint={1804.00015},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jsut"]}
espnet/kan-bayashi_jsut_tts_train_fastspeech2_transformer_teacher_raw_phn_jac-truncated-60fc24
null
[ "espnet", "audio", "text-to-speech", "ja", "dataset:jsut", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-to-speech
espnet
## Example ESPnet2 TTS model

### `kan-bayashi/jsut_tts_train_fastspeech_raw_phn_jaconv_pyopenjtalk_train.loss.best`

♻️ Imported from https://zenodo.org/record/3986225/

This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).

### Demo: How to use in ESPnet2

```python
# coming soon
```

### Citing ESPnet

```bibtex
@inproceedings{watanabe2018espnet,
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  title={{ESPnet}: End-to-End Speech Processing Toolkit},
  year={2018},
  booktitle={Proceedings of Interspeech},
  pages={2207--2211},
  doi={10.21437/Interspeech.2018-1456},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}

@inproceedings{hayashi2020espnet,
  title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
  author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
  booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages={7654--7658},
  year={2020},
  organization={IEEE}
}
```

or arXiv:

```bibtex
@misc{watanabe2018espnet,
  title={ESPnet: End-to-End Speech Processing Toolkit},
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  year={2018},
  eprint={1804.00015},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
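FastSpeech is a non-autoregressive, duration-based model, so overall speaking rate can be controlled globally. A sketch under the assumption that `Text2Speech` exposes a `speed_control_alpha` option for this architecture:

```python
# Sketch: `speed_control_alpha` (assumed supported for duration-based models)
# scales the predicted phoneme durations; values above 1.0 slow the speech down.
from espnet2.bin.tts_inference import Text2Speech

text2speech = Text2Speech.from_pretrained(
    "espnet/kan-bayashi_jsut_tts_train_fastspeech_raw_phn_jaconv_pyopenjtalk_train.loss.best",
    speed_control_alpha=1.2,
)
wav = text2speech("ゆっくり話します。")["wav"]
```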
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jsut"]}
espnet/kan-bayashi_jsut_tts_train_fastspeech_raw_phn_jaconv_pyopenjtalk_train.loss.best
null
[ "espnet", "audio", "text-to-speech", "ja", "dataset:jsut", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-to-speech
espnet
## ESPnet2 TTS pretrained model

### `kan-bayashi/jsut_tts_train_full_band_vits_raw_phn_jaconv_pyopenjtalk_accent_with_pause_train.total_count.ave`

♻️ Imported from https://zenodo.org/record/5431984/

This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).

### Demo: How to use in ESPnet2

```python
# coming soon
```

### Citing ESPnet

```bibtex
@inproceedings{watanabe2018espnet,
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  title={{ESPnet}: End-to-End Speech Processing Toolkit},
  year={2018},
  booktitle={Proceedings of Interspeech},
  pages={2207--2211},
  doi={10.21437/Interspeech.2018-1456},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}

@inproceedings{hayashi2020espnet,
  title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
  author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
  booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages={7654--7658},
  year={2020},
  organization={IEEE}
}
```

or arXiv:

```bibtex
@misc{watanabe2018espnet,
  title={ESPnet: End-to-End Speech Processing Toolkit},
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  year={2018},
  eprint={1804.00015},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
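VITS is fully end-to-end, so it emits waveforms directly and needs no separate vocoder. A sketch; the noise-scale options are assumptions carried over from the upstream VITS inference interface:

```python
# Sketch: full-band VITS synthesizes waveforms directly (no vocoder stage).
# noise_scale / noise_scale_dur (assumed options) trade sample variability
# against stability; the sampling rate is read from the model config.
import soundfile
from espnet2.bin.tts_inference import Text2Speech

text2speech = Text2Speech.from_pretrained(
    "kan-bayashi/jsut_tts_train_full_band_vits_raw_phn_jaconv_pyopenjtalk_accent_with_pause_train.total_count.ave",
    noise_scale=0.333,
    noise_scale_dur=0.333,
)
out = text2speech("フルバンドの音声を合成します。")
soundfile.write("out.wav", out["wav"].numpy(), text2speech.fs, "PCM_16")
```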
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jsut"]}
espnet/kan-bayashi_jsut_tts_train_full_band_vits_raw_phn_jaconv_pyopenjtalk_a-truncated-d7d5d0
null
[ "espnet", "audio", "text-to-speech", "ja", "dataset:jsut", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-to-speech
espnet
## ESPnet2 TTS pretrained model

### `kan-bayashi/jsut_tts_train_full_band_vits_raw_phn_jaconv_pyopenjtalk_prosody_train.total_count.ave`

♻️ Imported from https://zenodo.org/record/5521340/

This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).

### Demo: How to use in ESPnet2

```python
# coming soon
```

### Citing ESPnet

```bibtex
@inproceedings{watanabe2018espnet,
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  title={{ESPnet}: End-to-End Speech Processing Toolkit},
  year={2018},
  booktitle={Proceedings of Interspeech},
  pages={2207--2211},
  doi={10.21437/Interspeech.2018-1456},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}

@inproceedings{hayashi2020espnet,
  title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
  author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
  booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages={7654--7658},
  year={2020},
  organization={IEEE}
}
```

or arXiv:

```bibtex
@misc{watanabe2018espnet,
  title={ESPnet: End-to-End Speech Processing Toolkit},
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  year={2018},
  eprint={1804.00015},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jsut"]}
espnet/kan-bayashi_jsut_tts_train_full_band_vits_raw_phn_jaconv_pyopenjtalk_p-truncated-66d5fc
null
[ "espnet", "audio", "text-to-speech", "ja", "dataset:jsut", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-to-speech
espnet
## Example ESPnet2 TTS model

### `kan-bayashi/jsut_tts_train_tacotron2_raw_phn_jaconv_pyopenjtalk_accent_train.loss.ave`

♻️ Imported from https://zenodo.org/record/4381098/

This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).

### Demo: How to use in ESPnet2

```python
# coming soon
```

### Citing ESPnet

```bibtex
@inproceedings{watanabe2018espnet,
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  title={{ESPnet}: End-to-End Speech Processing Toolkit},
  year={2018},
  booktitle={Proceedings of Interspeech},
  pages={2207--2211},
  doi={10.21437/Interspeech.2018-1456},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}

@inproceedings{hayashi2020espnet,
  title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
  author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
  booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages={7654--7658},
  year={2020},
  organization={IEEE}
}
```

or arXiv:

```bibtex
@misc{watanabe2018espnet,
  title={ESPnet: End-to-End Speech Processing Toolkit},
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  year={2018},
  eprint={1804.00015},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
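Tacotron2 predicts mel-spectrograms rather than waveforms, so it is usually paired with a neural vocoder at inference time; without one, `Text2Speech` falls back to Griffin-Lim. The vocoder tag below is illustrative, not part of this card:

```python
# Sketch: the vocoder tag is a placeholder — substitute any JSUT-compatible
# neural vocoder; omitting `vocoder_tag` falls back to Griffin-Lim.
import soundfile
from espnet2.bin.tts_inference import Text2Speech

text2speech = Text2Speech.from_pretrained(
    "espnet/kan-bayashi_jsut_tts_train_tacotron2_raw_phn_jaconv_pyopenjtalk_accent_train.loss.ave",
    vocoder_tag="parallel_wavegan/jsut_parallel_wavegan.v1",  # hypothetical tag
)
out = text2speech("アクセント情報を使った合成です。")
soundfile.write("out.wav", out["wav"].numpy(), text2speech.fs, "PCM_16")
```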
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jsut"]}
espnet/kan-bayashi_jsut_tts_train_tacotron2_raw_phn_jaconv_pyopenjtalk_accent_train.loss.ave
null
[ "espnet", "audio", "text-to-speech", "ja", "dataset:jsut", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-to-speech
espnet
## Example ESPnet2 TTS model

### `kan-bayashi/jsut_tts_train_tacotron2_raw_phn_jaconv_pyopenjtalk_accent_with_pause_train.loss.ave`

♻️ Imported from https://zenodo.org/record/4433194/

This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).

### Demo: How to use in ESPnet2

```python
# coming soon
```

### Citing ESPnet

```bibtex
@inproceedings{watanabe2018espnet,
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  title={{ESPnet}: End-to-End Speech Processing Toolkit},
  year={2018},
  booktitle={Proceedings of Interspeech},
  pages={2207--2211},
  doi={10.21437/Interspeech.2018-1456},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}

@inproceedings{hayashi2020espnet,
  title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
  author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
  booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages={7654--7658},
  year={2020},
  organization={IEEE}
}
```

or arXiv:

```bibtex
@misc{watanabe2018espnet,
  title={ESPnet: End-to-End Speech Processing Toolkit},
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  year={2018},
  eprint={1804.00015},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jsut"]}
espnet/kan-bayashi_jsut_tts_train_tacotron2_raw_phn_jaconv_pyopenjtalk_accent_with_pause_train.loss.ave
null
[ "espnet", "audio", "text-to-speech", "ja", "dataset:jsut", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-to-speech
espnet
## ESPnet2 TTS pretrained model

### `kan-bayashi/jsut_tts_train_tacotron2_raw_phn_jaconv_pyopenjtalk_prosody_train.loss.ave`

♻️ Imported from https://zenodo.org/record/5499026/

This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).

### Demo: How to use in ESPnet2

```python
# coming soon
```

### Citing ESPnet

```bibtex
@inproceedings{watanabe2018espnet,
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  title={{ESPnet}: End-to-End Speech Processing Toolkit},
  year={2018},
  booktitle={Proceedings of Interspeech},
  pages={2207--2211},
  doi={10.21437/Interspeech.2018-1456},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}

@inproceedings{hayashi2020espnet,
  title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
  author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
  booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages={7654--7658},
  year={2020},
  organization={IEEE}
}
```

or arXiv:

```bibtex
@misc{watanabe2018espnet,
  title={ESPnet: End-to-End Speech Processing Toolkit},
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  year={2018},
  eprint={1804.00015},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jsut"]}
espnet/kan-bayashi_jsut_tts_train_tacotron2_raw_phn_jaconv_pyopenjtalk_prosody_train.loss.ave
null
[ "espnet", "audio", "text-to-speech", "ja", "dataset:jsut", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-to-speech
espnet
## Example ESPnet2 TTS model

### `kan-bayashi/jsut_tts_train_tacotron2_raw_phn_jaconv_pyopenjtalk_train.loss.best`

♻️ Imported from https://zenodo.org/record/3963886/

This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).

### Demo: How to use in ESPnet2

```python
# coming soon
```

### Citing ESPnet

```bibtex
@inproceedings{watanabe2018espnet,
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  title={{ESPnet}: End-to-End Speech Processing Toolkit},
  year={2018},
  booktitle={Proceedings of Interspeech},
  pages={2207--2211},
  doi={10.21437/Interspeech.2018-1456},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}

@inproceedings{hayashi2020espnet,
  title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
  author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
  booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages={7654--7658},
  year={2020},
  organization={IEEE}
}
```

or arXiv:

```bibtex
@misc{watanabe2018espnet,
  title={ESPnet: End-to-End Speech Processing Toolkit},
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  year={2018},
  eprint={1804.00015},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jsut"]}
espnet/kan-bayashi_jsut_tts_train_tacotron2_raw_phn_jaconv_pyopenjtalk_train.loss.best
null
[ "espnet", "audio", "text-to-speech", "ja", "dataset:jsut", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-to-speech
espnet
## Example ESPnet2 TTS model

### `kan-bayashi/jsut_tts_train_transformer_raw_phn_jaconv_pyopenjtalk_accent_train.loss.ave`

♻️ Imported from https://zenodo.org/record/4381096/

This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).

### Demo: How to use in ESPnet2

```python
# coming soon
```

### Citing ESPnet

```bibtex
@inproceedings{watanabe2018espnet,
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  title={{ESPnet}: End-to-End Speech Processing Toolkit},
  year={2018},
  booktitle={Proceedings of Interspeech},
  pages={2207--2211},
  doi={10.21437/Interspeech.2018-1456},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}

@inproceedings{hayashi2020espnet,
  title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
  author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
  booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages={7654--7658},
  year={2020},
  organization={IEEE}
}
```

or arXiv:

```bibtex
@misc{watanabe2018espnet,
  title={ESPnet: End-to-End Speech Processing Toolkit},
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  year={2018},
  eprint={1804.00015},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
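Transformer-TTS decodes autoregressively, so its stopping criteria matter in practice. A sketch; the three knobs shown are assumed to be exposed by `Text2Speech`:

```python
# Sketch: stopping-criterion knobs for autoregressive decoding (assumed
# to be exposed by Text2Speech; the values shown are common defaults).
from espnet2.bin.tts_inference import Text2Speech

text2speech = Text2Speech.from_pretrained(
    "espnet/kan-bayashi_jsut_tts_train_transformer_raw_phn_jaconv_pyopenjtalk_accent_train.loss.ave",
    threshold=0.5,     # stop-token probability threshold
    minlenratio=0.0,   # minimum output length relative to the input
    maxlenratio=10.0,  # cap to avoid run-on synthesis
)
wav = text2speech("自己回帰型の合成です。")["wav"]
```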
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jsut"]}
espnet/kan-bayashi_jsut_tts_train_transformer_raw_phn_jaconv_pyopenjtalk_accent_train.loss.ave
null
[ "espnet", "audio", "text-to-speech", "ja", "dataset:jsut", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-to-speech
espnet
## Example ESPnet2 TTS model

### `kan-bayashi/jsut_tts_train_transformer_raw_phn_jaconv_pyopenjtalk_accent_with_pause_train.loss.ave`

♻️ Imported from https://zenodo.org/record/4433196/

This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).

### Demo: How to use in ESPnet2

```python
# coming soon
```

### Citing ESPnet

```bibtex
@inproceedings{watanabe2018espnet,
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  title={{ESPnet}: End-to-End Speech Processing Toolkit},
  year={2018},
  booktitle={Proceedings of Interspeech},
  pages={2207--2211},
  doi={10.21437/Interspeech.2018-1456},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}

@inproceedings{hayashi2020espnet,
  title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
  author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
  booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages={7654--7658},
  year={2020},
  organization={IEEE}
}
```

or arXiv:

```bibtex
@misc{watanabe2018espnet,
  title={ESPnet: End-to-End Speech Processing Toolkit},
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  year={2018},
  eprint={1804.00015},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jsut"]}
espnet/kan-bayashi_jsut_tts_train_transformer_raw_phn_jaconv_pyopenjtalk_acce-truncated-be0f66
null
[ "espnet", "audio", "text-to-speech", "ja", "dataset:jsut", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-to-speech
espnet
## ESPnet2 TTS pretrained model

### `kan-bayashi/jsut_tts_train_transformer_raw_phn_jaconv_pyopenjtalk_prosody_train.loss.ave`

♻️ Imported from https://zenodo.org/record/5499040/

This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).

### Demo: How to use in ESPnet2

```python
# coming soon
```

### Citing ESPnet

```bibtex
@inproceedings{watanabe2018espnet,
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  title={{ESPnet}: End-to-End Speech Processing Toolkit},
  year={2018},
  booktitle={Proceedings of Interspeech},
  pages={2207--2211},
  doi={10.21437/Interspeech.2018-1456},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}

@inproceedings{hayashi2020espnet,
  title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
  author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
  booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages={7654--7658},
  year={2020},
  organization={IEEE}
}
```

or arXiv:

```bibtex
@misc{watanabe2018espnet,
  title={ESPnet: End-to-End Speech Processing Toolkit},
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  year={2018},
  eprint={1804.00015},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
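The `prosody` variants encode prosodic labels inside the g2p front-end stored with the model config, so callers still pass plain Japanese text. A minimal sketch, assuming the standard ESPnet2 inference API:

```python
# Sketch: prosody marks are inserted by the model's own pyopenjtalk-based
# g2p front-end, so the input is ordinary Japanese text.
from espnet2.bin.tts_inference import Text2Speech

text2speech = Text2Speech.from_pretrained(
    "espnet/kan-bayashi_jsut_tts_train_transformer_raw_phn_jaconv_pyopenjtalk_prosody_train.loss.ave"
)
wav = text2speech("韻律記号は内部で付与されます。")["wav"]
```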
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jsut"]}
espnet/kan-bayashi_jsut_tts_train_transformer_raw_phn_jaconv_pyopenjtalk_prosody_train.loss.ave
null
[ "espnet", "audio", "text-to-speech", "ja", "dataset:jsut", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-to-speech
espnet
## Example ESPnet2 TTS model

### `kan-bayashi/jsut_tts_train_transformer_raw_phn_jaconv_pyopenjtalk_train.loss.ave`

♻️ Imported from https://zenodo.org/record/4034121/

This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).

### Demo: How to use in ESPnet2

```python
# coming soon
```

### Citing ESPnet

```bibtex
@inproceedings{watanabe2018espnet,
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  title={{ESPnet}: End-to-End Speech Processing Toolkit},
  year={2018},
  booktitle={Proceedings of Interspeech},
  pages={2207--2211},
  doi={10.21437/Interspeech.2018-1456},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}

@inproceedings{hayashi2020espnet,
  title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
  author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
  booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages={7654--7658},
  year={2020},
  organization={IEEE}
}
```

or arXiv:

```bibtex
@misc{watanabe2018espnet,
  title={ESPnet: End-to-End Speech Processing Toolkit},
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  year={2018},
  eprint={1804.00015},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jsut"]}
espnet/kan-bayashi_jsut_tts_train_transformer_raw_phn_jaconv_pyopenjtalk_train.loss.ave
null
[ "espnet", "audio", "text-to-speech", "ja", "dataset:jsut", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-to-speech
espnet
## ESPnet2 TTS pretrained model

### `kan-bayashi/jsut_tts_train_vits_raw_phn_jaconv_pyopenjtalk_accent_with_pause_train.total_count.ave`

♻️ Imported from https://zenodo.org/record/5414980/

This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).

### Demo: How to use in ESPnet2

```python
# coming soon
```

### Citing ESPnet

```bibtex
@inproceedings{watanabe2018espnet,
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  title={{ESPnet}: End-to-End Speech Processing Toolkit},
  year={2018},
  booktitle={Proceedings of Interspeech},
  pages={2207--2211},
  doi={10.21437/Interspeech.2018-1456},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}

@inproceedings{hayashi2020espnet,
  title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
  author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
  booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages={7654--7658},
  year={2020},
  organization={IEEE}
}
```

or arXiv:

```bibtex
@misc{watanabe2018espnet,
  title={ESPnet: End-to-End Speech Processing Toolkit},
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  year={2018},
  eprint={1804.00015},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
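VITS uses a stochastic duration predictor, so two runs on the same text differ slightly; fixing the random seed makes synthesis reproducible. A minimal sketch, assuming the tag resolves via espnet_model_zoo:

```python
# Sketch: fix the seed to make VITS's stochastic duration sampling reproducible.
import torch
from espnet2.bin.tts_inference import Text2Speech

torch.manual_seed(0)
text2speech = Text2Speech.from_pretrained(
    "kan-bayashi/jsut_tts_train_vits_raw_phn_jaconv_pyopenjtalk_accent_with_pause_train.total_count.ave"
)
wav = text2speech("同じ出力を再現できます。")["wav"]
```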
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jsut"]}
espnet/kan-bayashi_jsut_tts_train_vits_raw_phn_jaconv_pyopenjtalk_accent_with-truncated-ba3566
null
[ "espnet", "audio", "text-to-speech", "ja", "dataset:jsut", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-to-speech
espnet
## ESPnet2 TTS pretrained model

### `kan-bayashi/jsut_tts_train_vits_raw_phn_jaconv_pyopenjtalk_prosody_train.total_count.ave`

♻️ Imported from https://zenodo.org/record/5521354/

This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).

### Demo: How to use in ESPnet2

```python
# coming soon
```

### Citing ESPnet

```bibtex
@inproceedings{watanabe2018espnet,
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  title={{ESPnet}: End-to-End Speech Processing Toolkit},
  year={2018},
  booktitle={Proceedings of Interspeech},
  pages={2207--2211},
  doi={10.21437/Interspeech.2018-1456},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}

@inproceedings{hayashi2020espnet,
  title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
  author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
  booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages={7654--7658},
  year={2020},
  organization={IEEE}
}
```

or arXiv:

```bibtex
@misc{watanabe2018espnet,
  title={ESPnet: End-to-End Speech Processing Toolkit},
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  year={2018},
  eprint={1804.00015},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jsut"]}
espnet/kan-bayashi_jsut_tts_train_vits_raw_phn_jaconv_pyopenjtalk_prosody_train.total_count.ave
null
[ "espnet", "audio", "text-to-speech", "ja", "dataset:jsut", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-to-speech
espnet
## ESPnet2 TTS pretrained model

### `kan-bayashi/jsut_vits_accent_with_pause`

♻️ Imported from https://zenodo.org/record/5414980/

This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).

### Demo: How to use in ESPnet2

```python
# coming soon
```

### Citing ESPnet

```bibtex
@inproceedings{watanabe2018espnet,
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  title={{ESPnet}: End-to-End Speech Processing Toolkit},
  year={2018},
  booktitle={Proceedings of Interspeech},
  pages={2207--2211},
  doi={10.21437/Interspeech.2018-1456},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}

@inproceedings{hayashi2020espnet,
  title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
  author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
  booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages={7654--7658},
  year={2020},
  organization={IEEE}
}
```

or arXiv:

```bibtex
@misc{watanabe2018espnet,
  title={ESPnet: End-to-End Speech Processing Toolkit},
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  year={2018},
  eprint={1804.00015},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
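This is the same checkpoint as the long-named JSUT VITS model above (both import zenodo record 5414980) under a shorter repository name. A sketch, assuming `from_pretrained` resolves `espnet/...` Hugging Face ids through espnet_model_zoo:

```python
# Sketch: the short Hub id is assumed to resolve through espnet_model_zoo.
from espnet2.bin.tts_inference import Text2Speech

text2speech = Text2Speech.from_pretrained("espnet/kan-bayashi_jsut_vits_accent_with_pause")
wav = text2speech("こんにちは。")["wav"]
```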
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jsut"]}
espnet/kan-bayashi_jsut_vits_accent_with_pause
null
[ "espnet", "audio", "text-to-speech", "ja", "dataset:jsut", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-to-speech
espnet
## ESPnet2 TTS pretrained model

### `kan-bayashi/jsut_vits_prosody`

♻️ Imported from https://zenodo.org/record/5521354/

This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).

### Demo: How to use in ESPnet2

```python
# coming soon
```

### Citing ESPnet

```bibtex
@inproceedings{watanabe2018espnet,
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  title={{ESPnet}: End-to-End Speech Processing Toolkit},
  year={2018},
  booktitle={Proceedings of Interspeech},
  pages={2207--2211},
  doi={10.21437/Interspeech.2018-1456},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}

@inproceedings{hayashi2020espnet,
  title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
  author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
  booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages={7654--7658},
  year={2020},
  organization={IEEE}
}
```

or arXiv:

```bibtex
@misc{watanabe2018espnet,
  title={ESPnet: End-to-End Speech Processing Toolkit},
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  year={2018},
  eprint={1804.00015},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jsut"]}
espnet/kan-bayashi_jsut_vits_prosody
null
[ "espnet", "audio", "text-to-speech", "ja", "dataset:jsut", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-to-speech
espnet
## ESPnet2 TTS pretrained model

### `kan-bayashi/jvs_jvs001_vits_accent_with_pause`

♻️ Imported from https://zenodo.org/record/5432540/

This model was trained by kan-bayashi using jvs/tts1 recipe in [espnet](https://github.com/espnet/espnet/).

### Demo: How to use in ESPnet2

```python
# coming soon
```

### Citing ESPnet

```bibtex
@inproceedings{watanabe2018espnet,
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  title={{ESPnet}: End-to-End Speech Processing Toolkit},
  year={2018},
  booktitle={Proceedings of Interspeech},
  pages={2207--2211},
  doi={10.21437/Interspeech.2018-1456},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}

@inproceedings{hayashi2020espnet,
  title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
  author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
  booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages={7654--7658},
  year={2020},
  organization={IEEE}
}
```

or arXiv:

```bibtex
@misc{watanabe2018espnet,
  title={ESPnet: End-to-End Speech Processing Toolkit},
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  year={2018},
  eprint={1804.00015},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
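Although it comes from the multi-speaker JVS corpus, this checkpoint was fine-tuned to a single speaker (jvs001), so inference needs no speaker id. A minimal sketch:

```python
# Sketch: a single-speaker fine-tuned checkpoint — no speaker id is required.
import soundfile
from espnet2.bin.tts_inference import Text2Speech

text2speech = Text2Speech.from_pretrained("espnet/kan-bayashi_jvs_jvs001_vits_accent_with_pause")
out = text2speech("話者jvs001の声で合成します。")
soundfile.write("jvs001.wav", out["wav"].numpy(), text2speech.fs, "PCM_16")
```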
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jvs"]}
espnet/kan-bayashi_jvs_jvs001_vits_accent_with_pause
null
[ "espnet", "audio", "text-to-speech", "ja", "dataset:jvs", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-to-speech
espnet
## ESPnet2 TTS pretrained model

### `kan-bayashi/jvs_jvs010_vits_accent_with_pause`

♻️ Imported from https://zenodo.org/record/5432566/

This model was trained by kan-bayashi using jvs/tts1 recipe in [espnet](https://github.com/espnet/espnet/).

### Demo: How to use in ESPnet2

```python
# coming soon
```

### Citing ESPnet

```bibtex
@inproceedings{watanabe2018espnet,
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  title={{ESPnet}: End-to-End Speech Processing Toolkit},
  year={2018},
  booktitle={Proceedings of Interspeech},
  pages={2207--2211},
  doi={10.21437/Interspeech.2018-1456},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}

@inproceedings{hayashi2020espnet,
  title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
  author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
  booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages={7654--7658},
  year={2020},
  organization={IEEE}
}
```

or arXiv:

```bibtex
@misc{watanabe2018espnet,
  title={ESPnet: End-to-End Speech Processing Toolkit},
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  year={2018},
  eprint={1804.00015},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jvs"]}
espnet/kan-bayashi_jvs_jvs010_vits_accent_with_pause
null
[ "espnet", "audio", "text-to-speech", "ja", "dataset:jvs", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-to-speech
espnet
## ESPnet2 TTS pretrained model

### `kan-bayashi/jvs_jvs010_vits_prosody`

♻️ Imported from https://zenodo.org/record/5521494/

This model was trained by kan-bayashi using jvs/tts1 recipe in [espnet](https://github.com/espnet/espnet/).

### Demo: How to use in ESPnet2

```python
# coming soon
```

### Citing ESPnet

```bibtex
@inproceedings{watanabe2018espnet,
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  title={{ESPnet}: End-to-End Speech Processing Toolkit},
  year={2018},
  booktitle={Proceedings of Interspeech},
  pages={2207--2211},
  doi={10.21437/Interspeech.2018-1456},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}

@inproceedings{hayashi2020espnet,
  title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
  author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
  booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages={7654--7658},
  year={2020},
  organization={IEEE}
}
```

or arXiv:

```bibtex
@misc{watanabe2018espnet,
  title={ESPnet: End-to-End Speech Processing Toolkit},
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  year={2018},
  eprint={1804.00015},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jvs"]}
espnet/kan-bayashi_jvs_jvs010_vits_prosody
null
[ "espnet", "audio", "text-to-speech", "ja", "dataset:jvs", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-to-speech
espnet
## ESPnet2 TTS pretrained model

### `kan-bayashi/jvs_tts_finetune_jvs001_jsut_vits_raw_phn_jaconv_pyopenjtalk_accent_with_pause_latest`

♻️ Imported from https://zenodo.org/record/5432540/

This model was trained by kan-bayashi using jvs/tts1 recipe in [espnet](https://github.com/espnet/espnet/).

### Demo: How to use in ESPnet2

```python
# coming soon
```

### Citing ESPnet

```bibtex
@inproceedings{watanabe2018espnet,
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  title={{ESPnet}: End-to-End Speech Processing Toolkit},
  year={2018},
  booktitle={Proceedings of Interspeech},
  pages={2207--2211},
  doi={10.21437/Interspeech.2018-1456},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}

@inproceedings{hayashi2020espnet,
  title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
  author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
  booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages={7654--7658},
  year={2020},
  organization={IEEE}
}
```

or arXiv:

```bibtex
@misc{watanabe2018espnet,
  title={ESPnet: End-to-End Speech Processing Toolkit},
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  year={2018},
  eprint={1804.00015},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jvs"]}
espnet/kan-bayashi_jvs_tts_finetune_jvs001_jsut_vits_raw_phn_jaconv_pyopenjta-truncated-178804
null
[ "espnet", "audio", "text-to-speech", "ja", "dataset:jvs", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-to-speech
espnet
## ESPnet2 TTS pretrained model

### `kan-bayashi/jvs_tts_finetune_jvs010_jsut_vits_raw_phn_jaconv_pyopenjtalk_accent_with_pause_latest`

♻️ Imported from https://zenodo.org/record/5432566/

This model was trained by kan-bayashi using jvs/tts1 recipe in [espnet](https://github.com/espnet/espnet/).

### Demo: How to use in ESPnet2

```python
# coming soon
```

### Citing ESPnet

```bibtex
@inproceedings{watanabe2018espnet,
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  title={{ESPnet}: End-to-End Speech Processing Toolkit},
  year={2018},
  booktitle={Proceedings of Interspeech},
  pages={2207--2211},
  doi={10.21437/Interspeech.2018-1456},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}

@inproceedings{hayashi2020espnet,
  title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
  author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
  booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages={7654--7658},
  year={2020},
  organization={IEEE}
}
```

or arXiv:

```bibtex
@misc{watanabe2018espnet,
  title={ESPnet: End-to-End Speech Processing Toolkit},
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  year={2018},
  eprint={1804.00015},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jvs"]}
espnet/kan-bayashi_jvs_tts_finetune_jvs010_jsut_vits_raw_phn_jaconv_pyopenjta-truncated-d57a28
null
[ "espnet", "audio", "text-to-speech", "ja", "dataset:jvs", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-to-speech
espnet
## ESPnet2 TTS pretrained model

### `kan-bayashi/jvs_tts_finetune_jvs010_jsut_vits_raw_phn_jaconv_pyopenjtalk_prosody_latest`

♻️ Imported from https://zenodo.org/record/5521494/

This model was trained by kan-bayashi using jvs/tts1 recipe in [espnet](https://github.com/espnet/espnet/).

### Demo: How to use in ESPnet2

```python
# coming soon
```

### Citing ESPnet

```bibtex
@inproceedings{watanabe2018espnet,
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  title={{ESPnet}: End-to-End Speech Processing Toolkit},
  year={2018},
  booktitle={Proceedings of Interspeech},
  pages={2207--2211},
  doi={10.21437/Interspeech.2018-1456},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}

@inproceedings{hayashi2020espnet,
  title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
  author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
  booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages={7654--7658},
  year={2020},
  organization={IEEE}
}
```

or arXiv:

```bibtex
@misc{watanabe2018espnet,
  title={ESPnet: End-to-End Speech Processing Toolkit},
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  year={2018},
  eprint={1804.00015},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jvs"]}
espnet/kan-bayashi_jvs_tts_finetune_jvs010_jsut_vits_raw_phn_jaconv_pyopenjtalk_prosody_latest
null
[ "espnet", "audio", "text-to-speech", "ja", "dataset:jvs", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-to-speech
espnet
## Example ESPnet2 TTS model

### `kan-bayashi/libritts_gst+xvector_conformer_fastspeech2`

♻️ Imported from https://zenodo.org/record/4418774/

This model was trained by kan-bayashi using libritts/tts1 recipe in [espnet](https://github.com/espnet/espnet/).

### Demo: How to use in ESPnet2

```python
# coming soon
```

### Citing ESPnet

```bibtex
@inproceedings{watanabe2018espnet,
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  title={{ESPnet}: End-to-End Speech Processing Toolkit},
  year={2018},
  booktitle={Proceedings of Interspeech},
  pages={2207--2211},
  doi={10.21437/Interspeech.2018-1456},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}

@inproceedings{hayashi2020espnet,
  title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
  author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
  booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages={7654--7658},
  year={2020},
  organization={IEEE}
}
```

or arXiv:

```bibtex
@misc{watanabe2018espnet,
  title={ESPnet: End-to-End Speech Processing Toolkit},
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  year={2018},
  eprint={1804.00015},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
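GST+x-vector models condition synthesis on a reference utterance (for the global style tokens) and a speaker embedding. A sketch only: the reference file and the zero x-vector below are placeholders, and in practice the embedding comes from an external extractor such as a Kaldi x-vector model:

```python
# Sketch: `reference.wav` and the zero vector are placeholders — supply a real
# style-reference utterance (at the model's sampling rate) and a real x-vector.
import numpy as np
import soundfile
from espnet2.bin.tts_inference import Text2Speech

text2speech = Text2Speech.from_pretrained(
    "espnet/kan-bayashi_libritts_gst_xvector_conformer_fastspeech2"
)
ref_wav, _ = soundfile.read("reference.wav")      # GST style reference
spembs = np.zeros(512, dtype=np.float32)          # placeholder speaker embedding
out = text2speech("Hello, world.", speech=ref_wav, spembs=spembs)
soundfile.write("out.wav", out["wav"].numpy(), text2speech.fs, "PCM_16")
```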
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["libritts"]}
espnet/kan-bayashi_libritts_gst_xvector_conformer_fastspeech2
null
[ "espnet", "audio", "text-to-speech", "en", "dataset:libritts", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00