2023-04-04 09:21:03,151 INFO [decode.py:649] Decoding started
2023-04-04 09:21:03,151 INFO [decode.py:655] Device: cuda:0
2023-04-04 09:21:03,214 INFO [decode.py:665] {'best_train_loss': inf, 'best_valid_loss': inf, 'best_train_epoch': -1, 'best_valid_epoch': -1, 'batch_idx_train': 0, 'log_interval': 50, 'reset_interval': 200, 'valid_interval': 3000, 'feature_dim': 80, 'subsampling_factor': 4, 'warm_step': 2000, 'env_info': {'k2-version': '1.23.3', 'k2-build-type': 'Debug', 'k2-with-cuda': True, 'k2-git-sha1': '1c9950559223ec24d187f56bc424c3b43904bed3', 'k2-git-date': 'Thu Jan 26 22:00:26 2023', 'lhotse-version': '1.13.0.dev+git.ca98c73.dirty', 'torch-version': '2.0.0+cu117', 'torch-cuda-available': True, 'torch-cuda-version': '11.7', 'python-version': '3.8', 'icefall-git-branch': 'surt', 'icefall-git-sha1': '51e6a8a-dirty', 'icefall-git-date': 'Fri Mar 17 11:23:13 2023', 'icefall-path': '/exp/draj/mini_scale_2022/icefall', 'k2-path': '/exp/draj/mini_scale_2022/k2/k2/python/k2/__init__.py', 'lhotse-path': '/exp/draj/mini_scale_2022/lhotse/lhotse/__init__.py', 'hostname': 'r7n04', 'IP address': '10.1.7.4'}, 'epoch': 30, 'iter': 0, 'avg': 9, 'use_averaged_model': True, 'exp_dir': PosixPath('pruned_transducer_stateless7_streaming/exp/v2'), 'bpe_model': 'data/lang_bpe_500/bpe.model', 'lang_dir': PosixPath('data/lang_bpe_500'), 'decoding_method': 'fast_beam_search', 'beam_size': 4, 'beam': 20.0, 'ngram_lm_scale': 0.01, 'max_contexts': 4, 'max_states': 8, 'context_size': 2, 'max_sym_per_frame': 1, 'num_paths': 200, 'nbest_scale': 0.5, 'num_encoder_layers': '2,2,2,2,2', 'feedforward_dims': '768,768,768,768,768', 'nhead': '8,8,8,8,8', 'encoder_dims': '256,256,256,256,256', 'attention_dims': '192,192,192,192,192', 'encoder_unmasked_dims': '192,192,192,192,192', 'zipformer_downsampling_factors': '1,2,4,8,2', 'cnn_module_kernels': '31,31,31,31,31', 'decoder_dim': 512, 'joiner_dim': 512, 'short_chunk_size': 50, 'num_left_chunks': 4, 'decode_chunk_len': 32, 'full_libri': True, 'manifest_dir': PosixPath('data/manifests'), 'max_duration': 500, 'bucketing_sampler': True, 'num_buckets': 30, 'concatenate_cuts': False, 'duration_factor': 1.0, 'gap': 1.0, 'on_the_fly_feats': False, 'shuffle': True, 'drop_last': True, 'return_cuts': True, 'num_workers': 2, 'enable_spec_aug': True, 'spec_aug_time_warp_factor': 80, 'enable_musan': True, 'input_strategy': 'PrecomputedFeatures', 'res_dir': PosixPath('pruned_transducer_stateless7_streaming/exp/v2/fast_beam_search'), 'suffix': 'epoch-30-avg-9-streaming-chunk-size-32-beam-20.0-max-contexts-4-max-states-8-use-averaged-model', 'blank_id': 0, 'unk_id': 2, 'vocab_size': 500}
2023-04-04 09:21:03,214 INFO [decode.py:667] About to create model
2023-04-04 09:21:03,641 INFO [zipformer.py:405] At encoder stack 4, which has downsampling_factor=2, we will combine the outputs of layers 1 and 3, with downsampling_factors=2 and 8.
2023-04-04 09:21:03,649 INFO [decode.py:738] Calculating the averaged model over epoch range from 21 (excluded) to 30
2023-04-04 09:21:12,177 INFO [decode.py:772] Number of model parameters: 20697573
2023-04-04 09:21:12,178 INFO [asr_datamodule.py:454] About to get test-clean cuts
2023-04-04 09:21:12,204 INFO [asr_datamodule.py:461] About to get test-other cuts
2023-04-04 09:21:21,894 INFO [decode.py:560] batch 0/?, cuts processed until now is 36
2023-04-04 09:22:03,674 INFO [zipformer.py:2441] attn_weights_entropy = tensor([1.3765, 1.4193, 1.7228, 1.7642, 1.3093, 1.6687, 1.6459, 1.5270], device='cuda:0'), covar=tensor([0.3476, 0.3824, 0.1703, 0.2398, 0.3968, 0.2215, 0.4112, 0.3021], device='cuda:0'), in_proj_covar=tensor([0.0922, 0.0996, 0.0730, 0.0941, 0.0899, 0.0836, 0.0850, 0.0796], device='cuda:0'), out_proj_covar=tensor([0.0003, 0.0003, 0.0002, 0.0002, 0.0002, 0.0002, 0.0003, 0.0002], device='cuda:0')
2023-04-04 09:22:18,478 INFO [decode.py:560] batch 20/?, cuts processed until now is 1038
2023-04-04 09:23:05,390 INFO [decode.py:560] batch 40/?, cuts processed until now is 2296
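The log reports "Calculating the averaged model over epoch range from 21 (excluded) to 30", i.e. nine checkpoints (epochs 22-30), matching 'avg': 9 in the params above. As a rough illustration only: plain checkpoint averaging takes the arithmetic mean of each parameter across the saved checkpoints. The sketch below uses flat name-to-float dicts as stand-in "state dicts"; icefall's --use-averaged-model option actually maintains a running averaged model during training rather than averaging files at decode time, so this is a simplified sketch, not the project's implementation.

```python
def average_checkpoints(state_dicts):
    """Arithmetic mean of each parameter across checkpoints.

    Simplified sketch: each 'state dict' here is a flat name -> float
    mapping, standing in for a real tensor state dict.
    """
    n = len(state_dicts)
    return {name: sum(sd[name] for sd in state_dicts) / n
            for name in state_dicts[0]}

# Nine stand-in checkpoints for epochs 22..30 (matching 'avg': 9).
ckpts = [{"w": float(epoch)} for epoch in range(22, 31)]
print(average_checkpoints(ckpts)["w"])  # mean of 22..30 -> 26.0
```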
2023-04-04 09:23:30,148 INFO [decode.py:574] The transcripts are stored in pruned_transducer_stateless7_streaming/exp/v2/fast_beam_search/recogs-test-clean-epoch-30-avg-9-streaming-chunk-size-32-beam-20.0-max-contexts-4-max-states-8-use-averaged-model.txt
2023-04-04 09:23:30,221 INFO [utils.py:560] [test-clean-beam_20.0_max_contexts_4_max_states_8] %WER 3.57% [1879 / 52576, 218 ins, 142 del, 1519 sub ]
2023-04-04 09:23:30,377 INFO [decode.py:585] Wrote detailed error stats to pruned_transducer_stateless7_streaming/exp/v2/fast_beam_search/errs-test-clean-epoch-30-avg-9-streaming-chunk-size-32-beam-20.0-max-contexts-4-max-states-8-use-averaged-model.txt
2023-04-04 09:23:30,378 INFO [decode.py:599] For test-clean, WER of different settings are:
beam_20.0_max_contexts_4_max_states_8	3.57	best for test-clean
2023-04-04 09:23:33,849 INFO [decode.py:560] batch 0/?, cuts processed until now is 43
2023-04-04 09:24:24,002 INFO [zipformer.py:2441] attn_weights_entropy = tensor([1.2127, 1.4455, 1.7780, 1.1393, 2.3796, 2.9278, 2.6542, 2.9877], device='cuda:0'), covar=tensor([0.1591, 0.3740, 0.3306, 0.2715, 0.0603, 0.0200, 0.0263, 0.0375], device='cuda:0'), in_proj_covar=tensor([0.0275, 0.0327, 0.0358, 0.0267, 0.0247, 0.0189, 0.0215, 0.0266], device='cuda:0'), out_proj_covar=tensor([0.0003, 0.0004, 0.0004, 0.0003, 0.0003, 0.0002, 0.0002, 0.0003], device='cuda:0')
2023-04-04 09:24:25,372 INFO [decode.py:560] batch 20/?, cuts processed until now is 1198
2023-04-04 09:24:38,009 INFO [zipformer.py:2441] attn_weights_entropy = tensor([1.5290, 1.4286, 1.4607, 1.8452, 1.4408, 1.7147, 1.6404, 1.6043], device='cuda:0'), covar=tensor([0.0797, 0.0904, 0.0939, 0.0609, 0.0893, 0.0737, 0.0895, 0.0659], device='cuda:0'), in_proj_covar=tensor([0.0209, 0.0220, 0.0224, 0.0236, 0.0223, 0.0210, 0.0185, 0.0202], device='cuda:0'), out_proj_covar=tensor([0.0005, 0.0005, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0004], device='cuda:0')
2023-04-04 09:25:11,694 INFO [decode.py:560] batch 40/?, cuts processed until now is 2642
2023-04-04 09:25:33,579 INFO [decode.py:574] The transcripts are stored in pruned_transducer_stateless7_streaming/exp/v2/fast_beam_search/recogs-test-other-epoch-30-avg-9-streaming-chunk-size-32-beam-20.0-max-contexts-4-max-states-8-use-averaged-model.txt
2023-04-04 09:25:33,661 INFO [utils.py:560] [test-other-beam_20.0_max_contexts_4_max_states_8] %WER 9.05% [4738 / 52343, 515 ins, 457 del, 3766 sub ]
2023-04-04 09:25:33,838 INFO [decode.py:585] Wrote detailed error stats to pruned_transducer_stateless7_streaming/exp/v2/fast_beam_search/errs-test-other-epoch-30-avg-9-streaming-chunk-size-32-beam-20.0-max-contexts-4-max-states-8-use-averaged-model.txt
2023-04-04 09:25:33,839 INFO [decode.py:599] For test-other, WER of different settings are:
beam_20.0_max_contexts_4_max_states_8	9.05	best for test-other
2023-04-04 09:25:33,839 INFO [decode.py:803] Done!
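The %WER figures in the log can be reproduced from the bracketed counts: total errors = insertions + deletions + substitutions, divided by the number of reference words. A quick sanity check (the function name is mine, not icefall's):

```python
def wer_percent(ins, dels, subs, ref_words):
    """Word error rate as a percentage: (ins + del + sub) / ref words * 100."""
    return 100.0 * (ins + dels + subs) / ref_words

# test-clean: %WER 3.57% [1879 / 52576, 218 ins, 142 del, 1519 sub]
print(round(wer_percent(218, 142, 1519, 52576), 2))  # 3.57

# test-other: %WER 9.05% [4738 / 52343, 515 ins, 457 del, 3766 sub]
print(round(wer_percent(515, 457, 3766, 52343), 2))  # 9.05
```

Note that 218 + 142 + 1519 = 1879 and 515 + 457 + 3766 = 4738, matching the error totals in the brackets.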