2023-03-21 12:42:29,212 INFO [decode.py:690] Decoding started
2023-03-21 12:42:29,212 INFO [decode.py:696] Device: cuda:0
2023-03-21 12:42:29,235 INFO [decode.py:706] {'best_train_loss': inf, 'best_valid_loss': inf, 'best_train_epoch': -1, 'best_valid_epoch': -1, 'batch_idx_train': 0, 'log_interval': 50, 'reset_interval': 200, 'valid_interval': 3000, 'feature_dim': 80, 'subsampling_factor': 4, 'warm_step': 2000, 'env_info': {'k2-version': '1.22', 'k2-build-type': 'Release', 'k2-with-cuda': True, 'k2-git-sha1': '96c9a2aece2a3a7633da07740e24fa3d96f5498c', 'k2-git-date': 'Thu Nov 10 08:14:02 2022', 'lhotse-version': '1.13.0.dev+git.527d964.clean', 'torch-version': '1.12.1', 'torch-cuda-available': True, 'torch-cuda-version': '11.6', 'python-version': '3.8', 'icefall-git-branch': 'zipformer_libri_small_models', 'icefall-git-sha1': 'd3145cd-dirty', 'icefall-git-date': 'Thu Feb 16 15:24:55 2023', 'icefall-path': '/ceph-data4/yangxiaoyu/softwares/icefall_development/icefall_small_models', 'k2-path': '/ceph-data4/yangxiaoyu/softwares/anaconda3/envs/k2_latest/lib/python3.8/site-packages/k2/__init__.py', 'lhotse-path': '/ceph-data4/yangxiaoyu/softwares/lhotse_development/lhotse_random_padding_left/lhotse/__init__.py', 'hostname': 'de-74279-k2-train-10-0221105906-5745685d6b-t8zzx', 'IP address': '10.177.57.19'}, 'epoch': 30, 'iter': 0, 'avg': 10, 'use_averaged_model': True, 'exp_dir': PosixPath('pruned_transducer_stateless7_streaming_multi/exp-small-6M'), 'bpe_model': 'data/lang_bpe_500/bpe.model', 'lang_dir': PosixPath('data/lang_bpe_500'), 'decoding_method': 'greedy_search', 'beam_size': 4, 'beam': 20.0, 'ngram_lm_scale': 0.01, 'max_contexts': 8, 'max_states': 64, 'context_size': 2, 'max_sym_per_frame': 1, 'num_paths': 200, 'nbest_scale': 0.5, 'simulate_streaming': False, 'decode_chunk_size': 16, 'left_context': 64, 'num_encoder_layers': '2,2,2,2,2', 'feedforward_dims': '256,256,512,512,256', 'nhead': '4,4,4,4,4', 'encoder_dims': '128,128,128,128,128', 'attention_dims': '96,96,96,96,96', 'encoder_unmasked_dims': '96,96,96,96,96', 'zipformer_downsampling_factors': '1,2,4,8,2', 'cnn_module_kernels': '31,31,31,31,31', 'decoder_dim': 512, 'joiner_dim': 512, 'short_chunk_size': 50, 'num_left_chunks': 4, 'decode_chunk_len': 32, 'max_duration': 600, 'bucketing_sampler': True, 'num_buckets': 30, 'shuffle': True, 'return_cuts': True, 'num_workers': 2, 'on_the_fly_num_workers': 0, 'enable_spec_aug': True, 'spec_aug_time_warp_factor': 80, 'enable_musan': True, 'manifest_dir': PosixPath('data/fbank'), 'on_the_fly_feats': False, 'res_dir': PosixPath('pruned_transducer_stateless7_streaming_multi/exp-small-6M/greedy_search'), 'suffix': 'epoch-30-avg-10-context-2-max-sym-per-frame-1-use-averaged-model', 'blank_id': 0, 'unk_id': 2, 'vocab_size': 500}
2023-03-21 12:42:29,235 INFO [decode.py:708] About to create model
2023-03-21 12:42:29,390 INFO [zipformer.py:405] At encoder stack 4, which has downsampling_factor=2, we will combine the outputs of layers 1 and 3, with downsampling_factors=2 and 8.
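For orientation, the parameter dump above selects greedy_search with max_sym_per_frame=1, context_size=2 and blank_id=0. Below is a minimal sketch of what that decoding rule does per encoder frame: take the argmax of the joiner output, emit it if it is not blank, and feed the last context_size emitted tokens back into the decoder. The decoder and joiner callables here are hypothetical stand-ins, not the actual icefall module APIs.

    import torch

    def greedy_search(encoder_out, decoder, joiner, blank_id=0, context_size=2):
        # encoder_out: (T, C) tensor of encoder frames.
        hyp = [blank_id] * context_size            # initial decoder context
        for t in range(encoder_out.size(0)):       # max_sym_per_frame=1:
            # the last `context_size` tokens form the decoder input
            context = torch.tensor([hyp[-context_size:]])
            dec_out = decoder(context)
            logits = joiner(encoder_out[t : t + 1], dec_out)
            token = int(logits.argmax(dim=-1))     # at most one symbol per frame
            if token != blank_id:
                hyp.append(token)
        return hyp[context_size:]                  # strip the initial context

    # Toy usage with random stand-ins (vocab_size=500 as in the config above):
    enc = torch.randn(10, 128)
    hyp = greedy_search(enc, lambda c: c, lambda e, d: torch.randn(1, 500))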
2023-03-21 12:42:29,398 INFO [train.py:536] Use giga
2023-03-21 12:42:29,401 INFO [decode.py:779] Calculating the averaged model over epoch range from 20 (excluded) to 30
2023-03-21 12:42:31,968 INFO [decode.py:813] Number of model parameters: 6061029
2023-03-21 12:42:31,968 INFO [librispeech.py:58] About to get test-clean cuts from data/fbank/librispeech_cuts_test-clean.jsonl.gz
2023-03-21 12:42:31,977 INFO [librispeech.py:63] About to get test-other cuts from data/fbank/librispeech_cuts_test-other.jsonl.gz
2023-03-21 12:42:35,929 INFO [decode.py:592] batch 0/?, cuts processed until now is 26
2023-03-21 12:43:11,716 INFO [zipformer.py:2441] attn_weights_entropy = tensor([1.0073, 1.5345, 1.5496, 1.3240], device='cuda:0'), covar=tensor([0.2695, 0.1998, 0.2626, 0.2169], device='cuda:0'), in_proj_covar=tensor([0.0488, 0.0749, 0.0718, 0.0687], device='cuda:0'), out_proj_covar=tensor([0.0007, 0.0010, 0.0010, 0.0009], device='cuda:0')
2023-03-21 12:43:13,944 INFO [decode.py:592] batch 50/?, cuts processed until now is 2526
2023-03-21 12:43:17,637 INFO [decode.py:608] The transcripts are stored in pruned_transducer_stateless7_streaming_multi/exp-small-6M/greedy_search/recogs-test-clean-greedy_search-epoch-30-avg-10-context-2-max-sym-per-frame-1-use-averaged-model.txt
2023-03-21 12:43:17,736 INFO [utils.py:558] [test-clean-greedy_search] %WER 6.11% [3215 / 52576, 343 ins, 302 del, 2570 sub ]
2023-03-21 12:43:17,965 INFO [decode.py:621] Wrote detailed error stats to pruned_transducer_stateless7_streaming_multi/exp-small-6M/greedy_search/errs-test-clean-greedy_search-epoch-30-avg-10-context-2-max-sym-per-frame-1-use-averaged-model.txt
2023-03-21 12:43:17,966 INFO [decode.py:637] For test-clean, WER of different settings are:
greedy_search	6.11	best for test-clean
2023-03-21 12:43:20,725 INFO [decode.py:592] batch 0/?, cuts processed until now is 30
2023-03-21 12:43:58,180 INFO [decode.py:592] batch 50/?, cuts processed until now is 2840
2023-03-21 12:43:59,263 INFO [zipformer.py:2441] attn_weights_entropy = tensor([1.4355, 1.8449, 1.4393, 1.3780], device='cuda:0'), covar=tensor([0.3140, 0.2956, 0.3352, 0.2758], device='cuda:0'), in_proj_covar=tensor([0.1560, 0.1123, 0.1377, 0.1001], device='cuda:0'), out_proj_covar=tensor([0.0014, 0.0012, 0.0013, 0.0009], device='cuda:0')
2023-03-21 12:44:02,632 INFO [decode.py:608] The transcripts are stored in pruned_transducer_stateless7_streaming_multi/exp-small-6M/greedy_search/recogs-test-other-greedy_search-epoch-30-avg-10-context-2-max-sym-per-frame-1-use-averaged-model.txt
2023-03-21 12:44:02,732 INFO [utils.py:558] [test-other-greedy_search] %WER 15.28% [7998 / 52343, 819 ins, 936 del, 6243 sub ]
2023-03-21 12:44:02,954 INFO [decode.py:621] Wrote detailed error stats to pruned_transducer_stateless7_streaming_multi/exp-small-6M/greedy_search/errs-test-other-greedy_search-epoch-30-avg-10-context-2-max-sym-per-frame-1-use-averaged-model.txt
2023-03-21 12:44:02,955 INFO [decode.py:637] For test-other, WER of different settings are:
greedy_search	15.28	best for test-other
2023-03-21 12:44:02,955 INFO [decode.py:845] Done!
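As a sanity check on the two utils.py scoring lines above: WER is (insertions + deletions + substitutions) divided by the number of reference words, and the counts below are copied directly from the log.

    # WER = (ins + del + sub) / reference words, using the logged counts.
    ins, dels, subs, ref = 343, 302, 2570, 52576            # test-clean
    print(f"%WER {100 * (ins + dels + subs) / ref:.2f}%")   # -> %WER 6.11%

    ins, dels, subs, ref = 819, 936, 6243, 52343            # test-other
    print(f"%WER {100 * (ins + dels + subs) / ref:.2f}%")   # -> %WER 15.28%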