repo (string, 8-116 chars) | tasks (string, 8-117 chars) | titles (string, 17-302 chars) | dependencies (string, 5-372k chars) | readme (string, 5-4.26k chars) | __index_level_0__ (int64, 0-4.36k)
---|---|---|---|---|---
nbfigueroa/ICSC-HMM | ['time series'] | ['Transform-Invariant Non-Parametric Clustering of Covariance Matrices and its Application to Unsupervised Joint Segmentation and Action Discovery'] | plotting/mocap/buildHTMLForSkelViz.py ExtractInfoFromWeb CreateHTML GetBasename read strip index dict len splitext join read int arange replace print glob write close take open ExtractInfoFromWeb range split | # ICSC-HMM **[TODO: UPDATE README.. paper and code have drastically changed!]** ICSC-HMM : IBP Coupled SPCM-CRP Hidden Markov Model for Transform-Invariant Time-Series Segmentation Website: https://github.com/nbfigueroa/ICSC-HMM Author: Nadia Figueroa (nadia.figueroafernandez AT epfl.ch) **NOTE:** If you are solely interested in the transform-invariant metric and clustering algorithm for Covariance matrices introduced in [1] go to [https://github.com/nbfigueroa/SPCM-CRP.git](https://github.com/nbfigueroa/SPCM-CRP.git) This is a toolbox for inference of the ICSC-HMM (IBP Coupled SPCM-CRP Hidden Markov Model) [1]. The ICSC-HMM is a segmentation and action recognition algorithm that solves for three challenges in HMM-based segmentation and action recognition: **(1) Unknown cardinality:** The typical model selection problem, number of hidden states is unknown. This can be solved by formulating an HMM with the Bayesian Non-Parametric treatment. This is done by placing an infinite prior on the transition distributions, typically the Hierarchical Dirichlet Process (HDP). **(2) Fixed dynamics:** For BNP analysis of ***multiple*** time-series with the HDP prior, the time series are tied together with the same set of transition and emission parameters. This problem was alleviated by the Indian Buffet Process (IBP) prior, which relaxes the assumption of the multiple time-series following the same transition parameters and allowing for only a sub-set of states to be active. 
**(3) Transform-invariance:** For ***any*** type of HMM, the emission models are always assumed to be unique; there is no way to handle transformations within or across time-series. | 3,100 |
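The ICSC-HMM entry above concerns HMM-based time-series segmentation. As a minimal illustration of the likelihood computation such models build on (this is a plain fixed-cardinality forward pass, not the paper's Bayesian non-parametric model, and the 2-state toy parameters are invented):

```python
def hmm_forward(obs, pi, A, B):
    """Forward algorithm: P(obs) for a discrete HMM.

    pi[i]   -- initial probability of state i
    A[i][j] -- transition probability from state i to state j
    B[i][o] -- emission probability of symbol o in state i
    """
    n_states = len(pi)
    # Initialise with the first observation.
    alpha = [pi[i] * B[i][obs[0]] for i in range(n_states)]
    # Recurse over the remaining observations.
    for o in obs[1:]:
        alpha = [
            B[j][o] * sum(alpha[i] * A[i][j] for i in range(n_states))
            for j in range(n_states)
        ]
    return sum(alpha)

# Toy 2-state chain over a binary alphabet (parameters made up for illustration).
pi = [0.6, 0.4]
A = [[0.7, 0.3], [0.4, 0.6]]
B = [[0.9, 0.1], [0.2, 0.8]]
print(hmm_forward([0, 1, 0], pi, A, B))
```

Summing this likelihood over all possible observation sequences of a given length yields 1, which is a quick sanity check on the recursion.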
ncoop57/tango | ['optical character recognition'] | ['It Takes Two to Tango: Combining Visual and Textual Information for Detecting Duplicate Video-Based Bug Reports'] | two_to_tango/utils.py two_to_tango/cli.py two_to_tango/__init__.py setup.py two_to_tango/combo.py two_to_tango/approach.py two_to_tango/eval.py two_to_tango/features.py two_to_tango/prep.py two_to_tango/_nbdev.py two_to_tango/model.py compute_sims gen_lcs_similarity gen_tfidfs gen_bovw_similarity fuzzy_LCS gen_extracted_features flatten_dict sort_rankings fix_sims approach _download tango _print_performance _output_performance _generate_vis_results _get_single_model_performance _generate_txt_results reproduce download write_results convert_results_format write_rankings tango_combined run_settings execute_retrieval_run get_info_to_ranking_results hit_rate_at_k recall_at_k evaluate get_eval_results evaluate_ranking calc_tf_idf mean_average_precision precision_at_k average_precision cosine_similarity mean_reciprocal_rank rank_stats r_precision new_get_bovw imagenet_normalize_transform Extractor get_df get_bovw calc_tf_idf extract_features gen_vcodebook CNNExtractor get_transforms SimCLRExtractor gen_codebooks SIFTExtractor NTXEntCriterion get_train_transforms imagenet_normalize_transform SimCLRModel get_val_transforms sift_frame_sim SimCLRDataset simclr_frame_sim get_rand_imgs Video vid_from_frames VideoDataset get_rico_imgs read_video_data generate_setting2 get_all_texts get_non_duplicate_corpus read_json_line_by_line read_json load_settings deskew write_json_line_by_line read_csv_to_dic_list canny remove_noise preprocess_img find_file write_csv_from_json_list match_template extract_text dilate opening group_dict thresholding get_grayscale thresholding_med extract_frames erode process_frame custom_doc_links items list isinstance time labels tqdm calc_tf_idf new_get_bovw defaultdict deepcopy list time defaultdict norm items gen_tfidfs dot reversed list range sim_func deepcopy list time defaultdict 
items tqdm fuzzy_LCS len list reversed items sorted list tuple OrderedDict flatten_dict deepcopy list defaultdict items tqdm sort_rankings fix_sims items new_get_bovw time norm sorted list tuple labels gen_tfidfs calc_tf_idf dot tqdm OrderedDict gen_extracted_features flatten_dict extract_features get BytesIO extractall mkdir Path info content ZipFile _download load items time list evaluate gen_lcs_similarity SIFT_create tqdm eval gen_extracted_features gen_bovw_similarity mkdir info open approach SimCLRExtractor SIFTExtractor convert_results_format check_output read_video_data generate_setting2 mkdir get_all_texts info load append open print mean read_csv values _print_performance seed _download tango_combined _output_performance _generate_vis_results label_from_paths Path info _generate_txt_results load Video PrettyPrinter compute_sims eval pprint Path label_from_paths SimCLRExtractor open items sorted list OrderedDict split update evaluate_ranking print execute_retrieval_run append split to_csv join mkdir extend write_json_line_by_line join extend mkdir load join list write_results write_rankings print extend load_settings find_file run_settings open keys split update join format split arange evaluate_ranking read_json_line_by_line read_json load_settings write_json_line_by_line str list sorted OrderedDict get_info_to_ranking_results append mkdir zip items time join print to_csv group_dict dict split zip append sum array log len append asarray items list asarray hit_rate_at_k print extend mean_average_precision average_precision append rank_stats print int asarray recall_at_k precision_at_k average_precision append range len load extract time asarray dump KMeans fit extend tqdm append open list dump glob open zip sample gen_vcodebook extract asarray extend predict get VideoCapture str read extract list concatenate progress_bar CAP_PROP_FRAME_COUNT array range predict extract int append range len array extend predict norm calc_tf_idf dot expand_dims predict 
ColorJitter CrossEntropyLoss CosineSimilarity VideoCapture str read CAP_PROP_POS_MSEC set randrange append str parent output input run glob sorted len append extend list extend copy write_json_line_by_line pprint mkdir append keys range get_non_duplicate_corpus len join str sorted replace print stem write_json_line_by_line find_file extract_frames Path mkdir append join fnmatch walk to_csv sort list groupby find_file ones uint8 ones uint8 ones uint8 getRotationMatrix2D warpAffine where column_stack opening get_grayscale thresholding_med system imread extract_text preprocess_img | <!-- ################################################# ### THIS FILE WAS AUTOGENERATED! DO NOT EDIT! ### ################################################# # file to edit: nbs/index.ipynb # command to build the docs after a change: nbdev_build_docs --> # Tango > Summary description here. This file will become your README and also the index of your documentation. | 3,101 |
ndalsanto/PDE-DNN | ['network embedding'] | ['Data driven approximation of parametrized PDEs by Reduced Basis and Neural Networks'] | fem_data_generation.py pde_dnn_model.py navier_stokes/navier_stokes_pde_activation.py navier_stokes/main_pde_dnn_model.py navier_stokes/build_ns_rb_manager.py generate_fem_training_data generate_fem_coordinates PrintDot plot_history custom_loss pde_dnn_model build_ns_rb_manager navier_stokes_pde_activation seed print sort randint floor unique ceil zeros float range zeros range get_snapshot_function yscale show epoch plot xlabel rc close ylabel tight_layout ylim savefig figure legend array update_fom_specifics set_affine_decomposition_handler set_Q import_rb_affine_matrices get_rb_functions_dict get_num_mdeim_basis clear_fom_specifics compute_theta_min_max perform_pod substitute_parameter_generator set_affine_a build_snapshots M_snapshots_matrix import_snapshots_matrix set_deim import_snapshots_parameters AffineDecompositionHandler load_mdeim_offline get_number_of_basis get_basis_list save_rb_affine_decomposition M_snapshots_coefficients load_deim_offline Mdeim import_basis_matrix save_matrix __doc__ Tensor_parameter_generator set_mdeim perform_mdeim import_rb_affine_vectors get_num_basis loadtxt save_vector print save_offline_structures build_rb_affine_decompositions import_test_parameters set_affine_f RbManager set_save_offline import_test_snapshots_matrix zeros get_deim_basis_list Deim perform_deim zeros | # PDE-DNN This repository contains the numerical examples from the paper "Data driven approximation of parametrized PDEs by Reduced Basis and Neural Networks", by N. Dal Santo, S. Deparis and L. Pegolotti. If you use the code, please cite the following reference [arXiv:1904.01514](https://arxiv.org/abs/1904.01514). In this work we propose a novel way to integrate data and PDE simulations by combining DNNs and RB solvers for the prediction of the solution of a parametrized PDE. 
The proposed architecture features an MLP followed by an RB solver, acting as a nonlinear activation function. The output of the MLP is interpreted as a prediction of parameter-dependent quantities: physical parameters, theta functions of the approximated affine decomposition and approximated RB solutions. Compared to standard DNNs, we obtain the solution in the full physical space as a byproduct and, for affine dependencies, the value of the parameter. Compared to the RB method, we obtain accurate solutions with a smaller number of affine components by solving a linear problem instead of a nonlinear one. # Running the test To train and use the networks, you need to have a working installation of [TensorFlow](https://www.tensorflow.org/). The RB structures during the offline phase of the RB method have been generated with [PyORB](https://github.com/ndalsanto/pyorb), and you need a working installation in order to run the example. PyORB itself must rely on a finite element (FE) library, which can be connected through the [pyorb-matlab-api](https://github.com/ndalsanto/pyorb-matlab-api). In the example provided, [feamat](https://github.com/lucapegolotti/feamat) has been used as the FE backend. | 3,102 |
ndrplz/ConvLSTM_pytorch | ['weather forecasting', 'video prediction'] | ['Convolutional LSTM Network: A Machine Learning Approach for Precipitation Nowcasting'] | convlstm.py ConvLSTMCell ConvLSTM | # ConvLSTM_pytorch **[This](https://github.com/ndrplz/ConvLSTM_pytorch/blob/master/convlstm.py)** file **contains the implementation of Convolutional LSTM in PyTorch** made by [me](https://github.com/ndrplz) and [DavideA](https://github.com/DavideA). We started from [this](https://github.com/rogertrullo/pytorch_convlstm/blob/master/conv_lstm.py) implementation, heavily refactored it, and added features to match our needs. Please note that in this repository we implement the following dynamics:  which is a bit different from the one in the original [paper](https://arxiv.org/pdf/1506.04214.pdf). ### How to Use The `ConvLSTM` module derives from `nn.Module` so it can be used as any other PyTorch module. The ConvLSTM class supports an arbitrary number of layers. For each layer, the hidden dimension (that is, the number of channels) and the kernel size can be specified. If multiple layers are present but a single value is provided, it is replicated for all the layers. For example, in the following snippet each of the three layers has a different hidden dimension but the same kernel size. Example usage: | 3,103 |
neerakara/TTA_prostate | ['medical image segmentation', 'denoising', 'domain generalization', 'semantic segmentation'] | ['Test-Time Adaptable Neural Networks for Robust Medical Image Segmentation'] | data/data_nci.py data/data_pirad_erc.py tfwrapper/layers.py config/system.py models/i2i/subject_88800049/i2i.py utils_vis.py data/data_promise.py experiments/i2i.py update_i2i.py update_i2i_all.py models/i2i/subject_88800036/i2i.py evaluate.py tfwrapper/losses.py model_zoo.py utils.py model.py main rescale_and_crop predict_segmentation prepare_tensor_for_summary evaluate_losses evaluation_dae predict_i2l evaluation_i2l training_step normalize predict_dae loss dae3D unet2D_i2l net2D_i2i iterate_minibatches_images_and_labels iterate_minibatches_images rescale_image_and_label run_test_time_training main elastic_transform_image_and_label crop_or_pad_slice_to_size_1hot elastic_transform_label_pair_3d elastic_transform_label_3d normalise_image crop_or_pad_slice_to_size crop_or_pad_volume_to_size_along_z makefolder make_onehot get_latest_model_checkpoint_path crop_or_pad_volume_to_size_along_x_1hot group_segmentation_classes_15 save_nii elastic_transform_label load_nii compute_surface_distance_per_label crop_or_pad_volume_to_size_along_x compute_surface_distance group_segmentation_classes save_sample_prediction_results save_sample_results add_1_pixel_each_class save_single_image save_samples_downsampled load_without_size_preprocessing count_slices get_patient_folders prepare_data test_train_val_split load_and_maybe_process_data _release_tmp_memory _write_range_to_hdf5 crop_or_pad_slice_to_size load_without_size_preprocessing count_slices prepare_data load_data _release_tmp_memory _write_range_to_hdf5 load_without_size_preprocessing prepare_data _release_tmp_memory test_train_val_split load_and_maybe_process_data count_slices_and_patient_ids_list convert_to_nii_and_correct_bias_field _write_range_to_hdf5 bilinear_upsample2D conv3D_layer deconv2D_layer_bn crop_and_concat_layer 
conv3D_layer_bn conv2D_layer_bn conv2D_layer max_pool_layer2d bilinear_upsample3D max_pool_layer3d deconv3D_layer bilinear_upsample3D_ pad_to_size dice_loss compute_dice pixel_wise_cross_entropy_loss_using_probs dice_loss_within_mask pixel_wise_cross_entropy_loss compute_dice_3d_without_batch_axis crop_or_pad_slice_to_size squeeze rescale swapaxes append rot90 array range arange test_dataset flatten load_and_maybe_process_data round open str basicConfig rescale_and_crop array append normalize sum range close mean info f1_score save_sample_prediction_results load_without_size_preprocessing predict_segmentation write compute_surface_distance orig_data_root_pirad_erc load_data zeros orig_data_root_nci std argmax model_handle_i2l softmax argmax softmax model_handle_l2l model_handle_normalizer one_hot pixel_wise_cross_entropy_loss_using_probs dice_loss dice_loss_within_mask softmax pixel_wise_cross_entropy_loss minimize group get_collection UPDATE_OPS optimizer_handle one_hot loss compute_dice prepare_tensor_for_summary concat image softmax evaluate_losses argmax int prepare_tensor_for_summary one_hot dice_loss concat image softmax argmax constant slice reshape squeeze stack cast get_cmap gather expand_dims colors reset_default_graph collect expand_dims arange range copy expand_dims arange range copy int uint8 collect __file__ rescale_image_and_label astype run_test_time_training copy project_code_root MakeDirs save_samples_downsampled uint8 astype float32 make_onehot rescale nlabels argmax makedirs int join glob append max load Nifti1Image to_filename percentile float32 divide copy mean std shape zeros shape zeros zeros zeros zeros group_segmentation_classes_15 arange print unique zeros array RandomState arange reshape rand shape meshgrid gaussian_filter RandomState arange reshape rand shape meshgrid gaussian_filter RandomState arange reshape rand copy meshgrid range gaussian_filter RandomState arange reshape rand copy meshgrid range gaussian_filter zeros shape 
concatenate distance_transform_edt astype ndim float32 atleast_1d generate_binary_structure bool compute_surface_distance_per_label percentile mean append range range copy clim axis close colorbar add_1_pixel_each_class imshow savefig figure subplot close colorbar imshow add_1_pixel_each_class savefig figure range subplot arange clim close copy colorbar imshow title savefig figure range len subplot zeros_like clim close copy colorbar imshow title savefig figure rot90 range len int join endswith test_train_val_split listdir walk int join test_train_val_split startswith append listdir rescale normalise_image crop_or_pad_slice_to_size str list count_slices squeeze call shape append walk range save_nii close swapaxes zip info float join SliceThickness read get_patient_folders print File read_file create_dataset zeros _release_tmp_memory pixel_array _write_range_to_hdf5 asarray info clear collect join prepare_data makefolder info join get_patient_folders normalise_image append walk join prepare_data makefolder info load_nii range rot90 load_nii unique listdir load_nii listdir call match imread walk save_nii int test_train_val_split match append walk count_slices_and_patient_ids_list GetSpacing ReadImage swapaxes as_list slice subtract append range len as_list subtract pad mod max_pooling2d conv2d conv2d batch_normalization activation batch_normalization conv2d_transpose activation resize_bilinear batch_normalization conv3d activation max_pooling3d conv3d_transpose conv3d reduce_mean softmax_cross_entropy_with_logits_v2 expand_dims reduce_sum copy | # TTA_prostate Code for the paper: https://arxiv.org/abs/2004.04668 | 3,104 |
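The TTA_prostate row above evaluates segmentations with Dice overlap (`compute_dice`, `f1_score`). A minimal pure-Python sketch of the binary Dice coefficient (an illustration of the metric, not the repo's TensorFlow implementation):

```python
def dice_coefficient(pred, target):
    """Dice overlap between two flat binary masks (lists of 0/1)."""
    intersection = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    if total == 0:
        return 1.0  # both masks empty: define as perfect overlap
    return 2.0 * intersection / total

pred   = [0, 1, 1, 1, 0, 0]
target = [0, 0, 1, 1, 1, 0]
print(dice_coefficient(pred, target))  # -> 2*2 / (3+3) = 0.666...
```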
neerakara/test-time-adaptable-neural-networks-for-domain-generalization | ['medical image segmentation', 'denoising', 'domain generalization', 'semantic segmentation'] | ['Test-Time Adaptable Neural Networks for Robust Medical Image Segmentation'] | evaluate_with_post_processing.py train_i2l_mapper.py utils_vis.py tfwrapper/losses.py train_l2l_mapper.py utils_masks.py experiments/l2i.py update_i2i_mapper_using_gt_for_multiple_subjects.py evaluate.py utils.py update_i2i_mapper.py update_i2i_mapper_for_multiple_subjects.py train_l2i_mapper.py make_hcp_atlas.py update_i2i_mapper_using_gt.py update_i2l_mapper_for_multiple_subjects.py data/data_hcp_3d.py data/data_abide.py tfwrapper/layers.py generate_visual_results.py model.py config/system.py generate_visual_results_all.py experiments/i2l.py data/data_hcp.py update_i2l_mapper.py experiments/l2l.py experiments/i2i.py model_zoo.py main rescale_and_crop predict_segmentation main rescale_and_crop predict_segmentation main rescale_and_crop predict_segmentation main rescale_and_crop predict_segmentation predict_l2i prepare_tensor_for_summary evaluate_losses likelihood_loss predict_i2l predict_l2l evaluation_i2l evaluation_l2i evaluation_l2l training_step normalize loss unet2D_l2i net2D_i2i unet3D_n5_l2l_no_skip_connections unet3D_n5_l2l_with_skip_connections_except_first_layer unet3D_n5_l2l_with_skip_connections unet2D_i2l unet3D_n4_l2l_no_skip_connections unet3D_n4_l2l_with_skip_connections_except_first_layer unet2D_l2l do_eval iterate_minibatches run_training do_data_augmentation main do_eval iterate_minibatches run_training do_data_augmentation main do_eval iterate_minibatches run_training do_data_augmentation main modify_image_and_label iterate_minibatches_images run_training main iterate_minibatches_images_and_downsampled_labels modify_image_and_label iterate_minibatches_images run_training main iterate_minibatches_images_and_downsampled_labels modify_image_and_label iterate_minibatches_images run_training main 
iterate_minibatches_images_and_downsampled_labels crop_or_pad_slice_to_size crop_or_pad_volume_to_size_along_z compute_surface_distance_per_label elastic_transform_image_and_label group_segmentation_classes_15 elastic_transform_label makefolder make_onehot crop_or_pad_volume_to_size_along_x elastic_transform_label_pair_3d load_nii group_segmentation_classes compute_surface_distance elastic_transform_label_3d normalise_image get_latest_model_checkpoint_path crop_or_pad_volume_to_size_along_x_1hot save_nii make_noise_masks_2d make_roi_mask make_noise_masks_3d save_sample_prediction_results save_sample_results add_1_pixel_each_class plot_graph save_single_image save_samples_downsampled load_without_size_preprocessing count_slices center_image_and_label prepare_data get_image_and_label_paths copy_site_files_abide_stanford load_and_maybe_process_data correct_bias_field _release_tmp_memory copy_site_files_abide_caltech _write_range_to_hdf5 load_without_size_preprocessing count_slices prepare_data get_image_and_label_paths load_and_maybe_process_data _release_tmp_memory _write_range_to_hdf5 prepare_data get_image_and_label_paths load_and_maybe_process_data _release_tmp_memory _write_range_to_hdf5 bilinear_upsample2D conv3D_layer deconv2D_layer_bn crop_and_concat_layer conv3D_layer_bn conv2D_layer_bn conv2D_layer max_pool_layer2d bilinear_upsample3D max_pool_layer3d deconv3D_layer bilinear_upsample3D_ pad_to_size dice_loss compute_dice pixel_wise_cross_entropy_loss_using_probs dice_loss_within_mask pixel_wise_cross_entropy_loss compute_dice_3d_without_batch_axis crop_or_pad_slice_to_size squeeze rescale swapaxes append rot90 array range arange test_dataset orig_data_root_ixi expname_normalizer flatten load_and_maybe_process_data round open str basicConfig image_depth_ixi rescale_and_crop array image_depth_hcp normalize orig_data_root_abide sum append range orig_data_root_hcp close mean info f1_score log_root save_sample_prediction_results join expname_i2l 
image_depth_stanford load_without_size_preprocessing predict_segmentation write compute_surface_distance image_depth_caltech std post_process dae_post_process_runs crop_or_pad_volume_to_size_along_z save_single_image argmax model_handle_i2l softmax model_handle_l2i one_hot argmax softmax model_handle_l2l model_handle_normalizer one_hot pixel_wise_cross_entropy_loss_using_probs dice_loss dice_loss_within_mask softmax pixel_wise_cross_entropy_loss ssim reduce_mean square reduce_sum minimize group get_collection UPDATE_OPS optimizer_handle one_hot loss compute_dice prepare_tensor_for_summary concat image softmax evaluate_losses argmax prepare_tensor_for_summary concat image softmax evaluate_losses argmax prepare_tensor_for_summary concat square reduce_mean image ssim argmax constant slice reshape squeeze stack cast get_cmap gather expand_dims colors int str orig_data_root_ixi orig_data_root_hcp experiment_name_i2l shape load_and_maybe_process_data info get_latest_model_checkpoint_path orig_data_root_abide iterate_minibatches info run permutation sort do_data_augmentation expand_dims range crop_or_pad_slice_to_size normal elastic_transform_image_and_label shift copy rotate rescale uniform round range __file__ run_training copy continue_run MakeDirs experiment_name make_roi_mask experiment_name_l2l make_onehot nlabels make_noise_masks_3d elastic_transform_label_3d downsampling_factor_x collect make_onehot rescale nlabels reset_default_graph crop_or_pad_volume_to_size_along_x_1hot expand_dims arange range copy expand_dims arange range copy modify_image_and_label preproc_folder_hcp save_samples_downsampled load int collect uint8 astype float32 make_onehot rescale nlabels argmax makedirs int join glob append max load Nifti1Image to_filename percentile float32 divide copy mean std shape zeros zeros zeros zeros group_segmentation_classes_15 arange print unique zeros array RandomState arange reshape rand shape meshgrid gaussian_filter RandomState arange reshape rand shape 
meshgrid gaussian_filter RandomState arange reshape rand copy meshgrid range gaussian_filter RandomState arange reshape rand copy meshgrid range gaussian_filter zeros shape concatenate distance_transform_edt astype ndim float32 atleast_1d generate_binary_structure bool compute_surface_distance_per_label percentile mean append range ones_like zeros_like where array range ones zeros range randint ones zeros range randint range copy clim axis close colorbar add_1_pixel_each_class imshow savefig figure subplot close colorbar imshow add_1_pixel_each_class savefig figure range subplot arange zeros_like clim close copy colorbar imshow title savefig figure rot90 range len subplot arange clim close copy colorbar imshow title savefig figure range len close savefig figure plot sorted glob len copyfile range makedirs sorted glob len copyfile range makedirs str sorted glob print call range len get_image_and_label_paths load_nii range minimum min maximum where max rescale normalise_image crop_or_pad_volume_to_size_along_z str sorted list crop_or_pad_slice_to_size count_slices center_image_and_label squeeze append range glob close copy get_image_and_label_paths load_nii swapaxes info float File group_segmentation_classes create_dataset _release_tmp_memory _write_range_to_hdf5 len asarray info clear collect crop_or_pad_volume_to_size_along_z sorted center_image_and_label glob copy get_image_and_label_paths load_nii group_segmentation_classes swapaxes normalise_image join prepare_data makefolder info argmax make_onehot_ array as_list slice subtract append range len as_list subtract pad mod max_pooling2d conv2d conv2d batch_normalization activation batch_normalization conv2d_transpose activation resize_bilinear batch_normalization conv3d activation max_pooling3d conv3d_transpose conv3d reduce_mean softmax_cross_entropy_with_logits_v2 expand_dims reduce_sum copy | # domain_generalization_image_segmentation Code for the paper "Test-time adaptable neural networks for robust medical 
image segmentation": https://arxiv.org/abs/2004.04668 The method consists of three steps: 1. Train a segmentation network on the source domain: train_i2l_mapper.py 2. Train a denoising autoencoder on the source domain labels: train_l2l_mapper.py 3. Adapt the normalization module of the segmentation network for each test image: update_i2i_mapper.py | 3,105 |
neeyoo/Neuralnetwork_project_art_transfer | ['style transfer'] | ['A Neural Algorithm of Artistic Style'] | nst_utils.py generate_noise_image reshape_and_normalize_image save_image CONFIG load_vgg_model reshape _conv2d_relu Variable zeros _avgpool loadmat astype reshape MEANS shape MEANS imwrite astype | # Neural Network Project: Art Style Transfer The algorithm of this project realizes Neural Style Transfer. Using a neural network, this algorithm can extract the style of an art masterpiece, for example "The Starry Night" by Vincent van Gogh:  It then merges the style with your own photo. Here is the original photo:  And here is the photo after art style transfer:  <br> | 3,106 |
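The style-transfer row above lists a `generate_noise_image` helper. In classic Neural Style Transfer setups, the optimisation is seeded with a blend of uniform noise and the content image; a hedged pure-Python sketch of that idea (the `noise_ratio` default and the [-20, 20] noise range are assumptions, and real implementations operate on NumPy arrays rather than nested lists):

```python
import random

def generate_noise_image(content, noise_ratio=0.6, rng=None):
    """Blend a content image (nested lists of float pixels) with uniform noise.

    The result is used as the starting point of the style-transfer optimisation.
    """
    rng = rng or random.Random(0)
    return [
        [noise_ratio * rng.uniform(-20, 20) + (1 - noise_ratio) * px
         for px in row]
        for row in content
    ]

# A tiny 2x2 single-channel "image".
content = [[100.0, 120.0], [130.0, 140.0]]
noisy = generate_noise_image(content, noise_ratio=0.6)
```

With `noise_ratio=0` the content image is returned unchanged; with `noise_ratio=1` the output is pure noise.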
nefujiangping/EncAttAgg | ['relation extraction'] | ['Improving Document-level Relation Extraction via Contextualizing Mention Representations and Weighting Mention Pairs'] | models/model_utils.py misc/constant.py models/modules.py models/lock_dropout.py data/gen_data.py misc/param.py models/embedding.py misc/util.py models/models.py misc/metrics.py data_loader/data_loader.py main.py trainer/training.py models/long_seq.py data_loader/data_utils.py set_wandb main init collate_fn_bert_enc to_tensor BertEncDataSet get_marker_to_token REDataset IntegrationAttender MutualAttender PoolingStyle compute_f1 __get__dev_true diff_dist_performance compute_ign_f1 __get__test_true num2_p_r_f p_r_f coref_vs_non_coref_performance Accuracy __p_r_f__ Config boolean_string DocumentEncoder LockedDropout process_long_input AttenderAggregator BiLSTMEncoder Embedding MentionExtractor RelationExtraction get_device_of masked_index_fill weighted_sum max_value_of_dtype flatten_and_batch_shift_indices tiny_value_of_dtype get_range_vector ConfigurationError min_value_of_dtype batched_index_select info_value_of_dtype masked_softmax TwoLayerLinear MutualAttendEncoderLayer _get_activation_fn MultiLayerMMA _get_clones EncoderLayer EAATrainer Trainer load FullLoader init open Config param_file EAATrainer evaluate logging train add_argument test print_params ArgumentParser parse_args RelationExtraction save max open list ones convert_tokens_to_ids add append range get dump set build_inputs_with_special_tokens zip tokenize enumerate load int join print min zeros len pad stack to_tensor append max load list format dict open keys len auto auto auto auto split enumerate set load set open enumerate split load dump list len add set num2_p_r_f open enumerate __get__dev_true __get__test_true items list isinstance add set num2_p_r_f append sum range enumerate items list isinstance add set num2_p_r_f append sum range enumerate asarray append float argmax enumerate asarray append float argmax enumerate lower 
model size tolist extend stack pad unsqueeze zip append to cat enumerate list insert size expand unsqueeze expand_as dim range dtype tiny_value_of_dtype min_value_of_dtype unsqueeze masked_fill softmax sum get_device_of view size get_range_vector unsqueeze range len list view size flatten_and_batch_shift_indices index_select view reshape size flatten_and_batch_shift_indices unsqueeze scatter bool is_floating_point | EncAttAgg --------- # ====== 2020.12.30 Update: codes have been re-constructed to [main](https://github.com/nefujiangping/EncAttAgg/tree/main) branch, the master branch is deprecated. ====== This is the source code for ICKG 2020 paper "[Improving Document-level Relation Extraction via Contextualizing Mention Representations and Weighting Mention Pairs](https://conferences.computer.org/ickg/pdfs/ICKG2020-66r9RP2mQIZywMjHhQVtDI/815600a305/815600a305.pdf)" We propose an effective **Enc**oder-**Att**ender-**Agg**regator (EncAttAgg) model for ducument-level RE. This model introduced two attenders to tackle two problems: 1) We introduce a mutual attender layer to efficiently obtain the entity-pair-specific mention representations. 2) We introduce an integration attender to weight mention pairs of a target entity pair.  ## Requirements + python 3.7.4 + scikit-learn | 3,107 |
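The EncAttAgg row above reports micro precision/recall/F1 over extracted relation facts (`compute_f1`, `p_r_f`). A minimal sketch of that metric over predicted vs. gold fact sets (an illustration, not the repo's DocRED-specific scorer; the entity/relation tuples are made up):

```python
def micro_p_r_f(predicted, gold):
    """Micro precision/recall/F1 over sets of predicted vs. gold facts."""
    predicted, gold = set(predicted), set(gold)
    tp = len(predicted & gold)
    p = tp / len(predicted) if predicted else 0.0
    r = tp / len(gold) if gold else 0.0
    f1 = 2 * p * r / (p + r) if p + r > 0 else 0.0
    return p, r, f1

pred = [("e1", "e2", "born_in"), ("e3", "e4", "works_for")]
gold = [("e1", "e2", "born_in"), ("e5", "e6", "located_in")]
print(micro_p_r_f(pred, gold))  # -> (0.5, 0.5, 0.5)
```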
nehap25/rlwithgp | ['gaussian processes'] | ['Manifold Gaussian Processes for Regression'] | syntheticChrissAlmgren.py robotics_fancy_kernel_experiment.py experiment.py ddpg_agent.py fancy_kernel.py robotics_exp.py gen_data.py fancy_kernel_experiment.py simple_world/primitives.py simple_world/constants.py simple_world/run.py torch_rbf.py main.py untitled.py simple_world/utils.py OUNoise Agent GP d Q get_reward_v2 covering_number argmax_action GP get_reward_v1 FancyKernel train d Q get_reward_v2 covering_number argmax_action GP get_reward_v1 calc_action_list conf_json get_conf gen_objects sample_attributes empty_steps setup data_to_json calc_action_list_v2 gen_stack init_world json_to_data region_json pose_json calc_reward simulate_push data_json robot_json object_json step covering_number d Q get_reward_v2 covering_number argmax_action GP get_reward_v1 d Q get_reward_v2 covering_number argmax_action GP get_reward_v1 MarketEnvironment RBF matern32 multiquadric linear inverse_quadratic gaussian poisson_two matern52 quadratic basis_func_dict spline poisson_one inverse_multiquadric test_cfree_push pose_gen move get_push_conf_fn test_feasible get_push_from_conf get_pose_kin test_touches new_move get_push_to_conf push get_push_conf execute_plan load_world get_problem main solve_problem Pose u_vec close_gripper Conf get_full_aabb get_conf sample_aabb close_enough Region create_object change_constraint collision get_turn_traj create_stack rrt create_constraint touches set_pose assert_poses_close rejection_sample_region get_obstacles get_yaw Robot eul_to_quat set_conf dense_path set_friction step_simulation contains get_straight_traj rejection_sample_aabb Obj sample_region get_dims read_file rrt_to_traj get_pose quat_to_eul pi range len float d min tuple float tolist Q backward step float forward range tuple pid Conf getBasePositionAndOrientation data_json Pose setGravity gen_objects createMultiBody conf createVisualShape connect init_world set_pose gen_stack getDataPath 
rejection_sample_region get_yaw pos setAdditionalSearchPath eul_to_quat subtract step_simulation getAABB divide extend array GEOM_BOX pos int Conf cos pi sin append range append Conf range step_simulation time simulate_push calc_reward get_conf calc_action_list_v2 step_simulation empty_steps list pid getAABB append range sample_attributes create_object list create_stack append range array sample_attributes Conf getAABB Robot loadURDF uniform rejection_sample_aabb new_move pow exp pow pow ones_like pow pow ones_like pow ones_like log ones_like exp ones_like exp ones_like exp pow ones_like exp pos eul_to_quat u_vec subtract get_yaw pos eul_to_quat u_vec subtract get_yaw T set_conf constrain close_gripper rrt get_conf step_simulation assert_poses_close rrt_to_traj range len time constrain step_simulation close_gripper move sample_aabb array aabb get_pose set_pose get_straight_traj set_conf collision set_pose get_full_aabb solve_problem input eval step_simulation print tuple extend dict load_world read_file append range len Pose setGravity Conf get_conf createMultiBody createVisualShape Region create_object create_stack set_pose loadURDF getDataPath append Robot setAdditionalSearchPath step_simulation getAABB extend get_pose array GEOM_BOX seed get_state DIRECT execute_plan enable disconnect GUI solve_incremental connect step_simulation load_world get_problem set_state disable print_stats print_solution Profile subtract abs norm tuple subtract add append max range len append list getAABB pid pos ori get_turn_traj u_vec subtract Conf extend append list u_vec subtract Conf append dot cross u_vec realpath join dirname sample_aabb any sample_region any array SearchSpace RRTStarBidirectional rrt_star_bidirectional pop ori pos get_turn_traj extend sample_aabb divide changeConstraint pid resetJointState Obj divide set_friction createMultiBody createCollisionShape createVisualShape GEOM_BOX list Pose pose set_pose append range create_object get_full_aabb getNumJoints pid 
getAABB vstack append amin array range amax tuple pid Pose getBasePositionAndOrientation resetBasePositionAndOrientation pid ori pos changeDynamics pid resetBasePositionAndOrientation pid ori pos stepSimulation range sleep | # rlwithgp This is our repository for implementing Algorithm 1 from http://proceedings.mlr.press/v32/grande14.pdf for Efficient Reinforcement Learning with Gaussian Processes, along with all of our other experiments. Our paper can be found at https://github.com/nehap25/rlwithgp/blob/master/6_435_Final_Project.pdf. **experiment.py** --> Implements Algorithm 1 and tests it on Experiment 1 from the paper, also contains our Gaussian Process class. **fancy_kernel.py** --> Implements the kernel from https://arxiv.org/abs/1402.5876 **fancy_kernel_experiment.py** --> Runs Experiment 1 from the paper with fancy_kernel.py **torch_rbf.py** --> Implements the radial basis kernel in PyTorch for fancy_kernel.py **main.py** --> Runs Algorithm 1 for modeling optimal execution of Portfolio Transactions using the Chriss/Almgren model from https://www.math.nyu.edu/faculty/chriss/optliq_f.pdf **ddpg_agent.py** --> Implements the code for the agent's Q-function, the Gaussian Process, as well as noise sampling using an Ornstein-Uhlenbeck process **syntheticChrissAlmgren.py** --> Creates a simple simulation trading environment, obtained from https://github.com/udacity/deep-reinforcement-learning/tree/master/finance **robotics_exp.py** --> Implements the robotics experiment described in the paper | 3,108
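The rlwithgp README above mentions that `ddpg_agent.py` samples exploration noise from an Ornstein-Uhlenbeck process. A minimal sketch of such a sampler — class name, parameter defaults, and discretization step are illustrative choices, not taken from the repo:

```python
import numpy as np

class OUNoise:
    """Ornstein-Uhlenbeck process: dx = theta*(mu - x)*dt + sigma*dW.

    Mean-reverting noise commonly added to DDPG actions for exploration.
    """

    def __init__(self, size, mu=0.0, theta=0.15, sigma=0.2, seed=0):
        self.mu = mu * np.ones(size)
        self.theta = theta
        self.sigma = sigma
        self.rng = np.random.default_rng(seed)
        self.reset()

    def reset(self):
        # Start each episode from the long-run mean.
        self.state = self.mu.copy()

    def sample(self):
        # Euler-Maruyama step with dt = 1: drift toward mu plus Gaussian kick.
        x = self.state
        dx = self.theta * (self.mu - x) + self.sigma * self.rng.standard_normal(len(x))
        self.state = x + dx
        return self.state
```

With `sigma=0` the drift term alone pulls the state back to `mu`, which is the mean-reverting behavior that distinguishes OU noise from i.i.d. Gaussian noise.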
neheller/labels18 | ['liver segmentation', 'semantic segmentation'] | ['Imperfect Segmentation Labels: How Much Do They Matter?'] | initializers/convtransposeinterp.py roc_curve.py figures.py Models/Mylayers.py Models/UNet.py Models/SegNet.py experiment_scripts/generate_experiment_scripts.py preprocessing/tools.py callbacks/topnsaver.py viz/create_viz_set.py AUC.py Models/FCN32.py Models/metrics.py model_runner.py callbacks/vizpreds.py losses/sampledbce.py test_perturb.py data_analysis/fancy_scatter.py DataGenerator.py data_analysis/get_performance.py DataGenerator perturb_row random_perturb smooth_function Vector2 choppy_perturb perturb_nat get_segnet get_unet get_callbacks get_run_dir get_time get_fcn32 get_performance dice load_bundles TopNSaver VizPreds get_marker_and_color get_performance get_runs_by_hand Experiment get_assets get_runs LoadBalancer get_access_info ConvTransposeInterp sampled_bce create_fcn32 recall precision dice_sorensen MaxPoolingWithArgmax2D MaxUnpooling2D create_segnet create_unet perturb_row random_perturb perturb smooth_function Vector2 choppy_perturb preprocess natural_perturb perturb_nat load_bundles append normal linspace pi smooth_function fillPoly findContours Vector2 f pi RETR_LIST shape boundingRect unitV zeros CHAIN_APPROX_NONE enumerate len arange normal perturb_row reshape sum range shape logical_and less random join parent existing print new exit map Path mkdir str TopNSaver print TensorBoard VizPreds lower run_dir CSVLogger visualize sum logical_and greater logical_not load int str glob zeros sum logical_and name glob greater cast float32 astype Input range logical_and round cast sum equal logical_and round cast sum equal logical_and round cast sum equal print Input int concatenate append Input range perturb_nat range uint8 random_perturb astype choppy_perturb shape natural_perturb zeros | # LABELS 2018 Code A study of how erroneous training data affects the performance of deep learning systems for semantic segmentation ## 
Usage ``` usage: model_runner.py [-h] [-n NEW] [-e EXISTING] [-b BATCH_SIZE] [-d DATASET] [-p PERTURBATION] [-g GPU_INDEX] [-m MODEL] [-t [TESTING]] [-v [VISUALIZE]] Run a training or testing round for our Labels 2018 Submission optional arguments: | 3,109 |
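The labels18 study above measures how segmentation performance degrades under perturbed training labels; its module list includes Dice metrics (`dice`, `dice_sorensen`). A minimal sketch of the Sørensen-Dice coefficient for binary masks — the function name and `eps` smoothing term are illustrative, not the repo's implementation:

```python
import numpy as np

def dice_score(pred, truth, eps=1e-7):
    """Sorensen-Dice coefficient between two binary masks.

    dice = 2*|A intersect B| / (|A| + |B|); 1.0 means perfect overlap.
    """
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    # eps guards against division by zero when both masks are empty.
    return 2.0 * intersection / (pred.sum() + truth.sum() + eps)
```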
neka-nat/gazebo_domain_randomization | ['object localization'] | ['Domain Randomization for Transferring Deep Neural Networks from Simulation to the Real World'] | gazebo_domain_randomizer/src/gazebo_domain_randomizer/utils.py gazebo_domain_randomizer/setup.py get_model_properties _get_model_properties_once status_message wait_for_service ServiceProxy logwarn get_model_prop GetModelPropertiesRequest success range _get_model_properties_once sleep | # Gazebo Domain Randomization [](https://travis-ci.org/neka-nat/gazebo_domain_randomization) https://arxiv.org/abs/1703.06907 **Double pendulum demo**  **Shadow hand demo**  ## Run ``` roslaunch gazebo_domain_randomizer demo.launch | 3,110 |
nekitmm/FunnelAct_Pytorch | ['scene generation', 'semantic segmentation'] | ['Funnel Activation for Visual Recognition'] | main.py frelu.py resnet_frelu.py FReLU validate AverageMeter accuracy save_checkpoint ProgressMeter train conv1x1 resnext50_32x4d wide_resnet50_2 ResNet resnet50 resnext101_32x8d Bottleneck resnet152 wide_resnet101_2 conv3x3 _resnet resnet34 resnet18 BasicBlock Bottleneck_FReLU resnet101 len eval AverageMeter ProgressMeter model zero_grad cuda display update param_groups size item is_available enumerate time criterion backward print AverageMeter accuracy ProgressMeter step len copyfile save ResNet load_state_dict load_state_dict_from_url | # FunnelAct Pytorch Pytorch implementation of Funnel Activation (FReLU): https://arxiv.org/pdf/2007.11824.pdf Validation results are listed below: | Model | Activation | Err@1 | Err@5 | | :---------------------- | :--------: | :------: | :------: | | ResNet50 | FReLU | **22.40** | **6.164** | Note that from the file resnet_frelu.py you can call ResNet18, ResNet34, ResNet50, ResNet101 and ResNet152, but the weights in this repo are only available for ResNet50, and I never tried to train other models, so no guarantees there! The code in this repo is based on pytorch imagenet example: | 3,111
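FReLU, the activation implemented in the repo's `frelu.py`, replaces ReLU's `max(x, 0)` with `max(x, T(x))`, where T is a depthwise (per-channel) convolution over a small window. A pure-NumPy sketch of the idea for one `(C, H, W)` feature map — the repo's version is a PyTorch module, and the paper's T(x) also includes BatchNorm, both omitted here:

```python
import numpy as np

def frelu(x, weights):
    """Funnel activation: max(x, T(x)) with T a depthwise 3x3 convolution.

    x: (C, H, W) feature map; weights: (C, 3, 3), one filter per channel.
    """
    c, h, w = x.shape
    padded = np.pad(x, ((0, 0), (1, 1), (1, 1)))  # zero pad H and W by 1
    t = np.zeros_like(x)
    for ch in range(c):              # depthwise: channels never mix
        for i in range(h):
            for j in range(w):
                t[ch, i, j] = np.sum(padded[ch, i:i + 3, j:j + 3] * weights[ch])
    return np.maximum(x, t)          # funnel condition replaces max(x, 0)
```

With all-zero filters T(x) = 0 and FReLU degenerates to plain ReLU; a filter with only the center tap set to 1 makes T(x) = x, so the activation passes the input through unchanged.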
nemanja-rakicevic/informed_search | ['active learning'] | ['Active learning via informed search in movement parameter space for efficient robot task learning and transfer'] | informed_search/analysis/evaluate_test_target.py informed_search/analysis/plot_evaluations.py informed_search/analysis/evaluate_test_full.py informed_search/tasks/experiment_manage.py informed_search/models/kernels.py informed_search/tasks/environments.py informed_search/utils/plotting.py informed_search/analysis/transfer_test.py informed_search/main_training.py setup.py informed_search/envs/__init__.py informed_search/models/modelling.py informed_search/envs/mujoco/striker_oneshot.py informed_search/utils/misc.py CleanCommand main_run _load_args _start_logging load_metadata main_test load_metadata main_test plot_performance load_metadata _start_logging main_test load_trial_data Striker2LinkEnv Striker5LinkNLEnv Striker5LinkEnv Striker2LinkNLEnv BaseStriker se_kernel mat_kernel rq_kernel UIDFSearch InformedSearch EntropySearch BOSearch BaseModel RandomSearch RobotExperiment SimulationExperiment ExperimentManager scaled_sqdist elementwise_sqdist plot_model_separate plot_model plot_evals MidpointNormalize config_file add_argument ArgumentParser vars parse_args join basicConfig format getcwd strftime upper info makedirs n_success format ExperimentManager n_fail evaluate_test_cases info range _load_args _start_logging update format ExperimentManager std load_metadata print dict mean evaluate_test_cases sum array len evaluate_single_test split subplots grid MultipleLocator set_major_formatter max clip show list set_major_locator set_xlabel FormatStrFormatter savefig legend append range format setdiff1d MaxNLocator plot glob set_xlim cla zip keys enumerate join set_size_inches min set_ylabel split fill_between get_legend_handles_labels array set_ylim len n_success execute_trial zip load_trial_data n_fail uncertainty info append enumerate update_model _start_logging cdist cdist cdist min max arange 
set_clim set_yticklabels round max MidpointNormalize show subplot set_title set_xlabel colorbar imshow savefig format set_xticklabels cla info join suptitle set_yticks makedirs set_ylabel set_xticks figure len set_clim set_yticklabels add_subplot mu_alpha set_visible uncertainty linspace tick_params round values coord_failed show list set_title tick_right uidf set_xlabel colorbar imshow scatter savefig ticklabel_format meshgrid format sidf set_xticklabels plot set_zticks set_xlim cla set_label_position set_zlim set_zlabel info join set_size_inches suptitle reshape set_yticks makedirs coord_explored set_ylabel set_xticks figure pidf mu_L plot_surface set_ylim len set_clim set_yticklabels add_subplot mu_alpha set_visible linspace tick_params round values list tick_right uidf set_xlabel len colorbar imshow scatter savefig ticklabel_format meshgrid format sidf set_xticklabels plot set_zticks set_xlim set_label_position set_zlim set_zlabel join set_size_inches reshape set_yticks coord_explored set_ylabel set_xticks figure pidf mu_L plot_surface set_ylim makedirs | # Informed Search Informed Search of the movement parameter space, which gathers the most informative samples for fitting the forward model. <p align="center"> <a href="https://link.springer.com/article/10.1007%2Fs10514-019-09842-7">Full paper</a> | <a href="https://sites.google.com/view/informedsearch">Project website</a> </p> ## Motivation and Method Description The main goal is to build an invertible forward model (mapping movement parameters to trial outputs) by fitting a Gaussian Process Regression model | 3,112
neo85824/epsnet | ['scene parsing', 'panoptic segmentation', 'instance segmentation', 'semantic segmentation'] | ['EPSNet: Efficient Panoptic Segmentation Network with Cross-layer Attention Fusion'] | scripts/cluster_bbox_sizes.py layers/box_utils.py layers/__init__.py scripts/plot_loss_pan.py utils/timer.py train.py data/__init__.py layers/interpolate.py layers/output_utils.py scripts/make_grid.py layers/functions/detection.py scripts/parse_eval.py panoptic_eval.py data/config.py utils/__init__.py layers/functions/__init__.py utils/augmentations.py scripts/bbox_recall.py epsnet.py scripts/save_bboxes.py layers/modules/multibox_loss.py data/coco.py web/server.py scripts/unpack_statedict.py utils/functions.py scripts/optimize_bboxes.py scripts/convert_darknet.py scripts/compute_masks.py layers/modules/__init__.py scripts/augment_bbox.py scripts/plot_loss.py backbone.py DarkNetBlock DarkNetBackbone darknetconvlayer ResNetBackboneGN VGGBackbone Bottleneck ResNetBackbone construct_backbone make_net FusionModule FPN EPSNet AddCoords Concat CAModule PredictionModule get_transformed_cat calc_map postprocess_ins_sem merge_segmentation evalvideo postprocess_stuff_lincomb CustomDataParallel print_maps prep_unified_display evalimages prep_panoptic_display parse_args savevideo prep_coco_cats postprocess_stuff get_coco_cat evalimage prep_unified_result str2bool evaluate prep_panoptic_result badhash set_lr replace compute_validation_loss ScatterWrapper prepare_data setup_eval compute_validation_map train str2bool COCOPanoptic_inst_sem get_label_map COCOPanoptic COCOAnnotationTransform COCODetection Config set_cfg set_dataset detection_collate index2d decode intersect atss_match log_sum_exp jaccard center_size match sanitize_coordinates is_in_box point_form encode center_distance crop change InterpolateModule postprocess undo_image_transformation display_lincomb instance_logit panoptic_logit semantic_logit Detect CrossEntropyLoss2d MultiBoxLoss FocalLoss2d random_sample_crop 
intersect prep_box augment_boxes jaccard_numpy make_priors jaccard to_relative intersect process to_relative sigmoid mask_iou logit paint_mask add_randomize test_uniqueness randomize update_centery update_scale update_centerx add render export update_angle update_spacing optimize intersect compute_recall pretty_str jaccard compute_hits print_out to_relative make_priors step grabMAP plot_train smoother plot_val plot_train smoother plot_val SwapChannels ToTensor RandomCrop ToAbsoluteCoords RandomBrightness PhotometricDistort enable_if RandomSaturation Resize BaseTransform RandomSampleCrop ToPercentCoords RandomFlip Pad intersect SSDAugmentation Lambda Compose FastBaseTransform ConvertColor BackboneTransform Expand jaccard_numpy RandomHue ConvertFromInts RandomMirror RandomContrast do_nothing PrepareMasks ToCV2Image RandomLightingNoise RandomScale init_console ProgressBar SavePath MovingAverage enable total_time enable_all start reset disable disable_all stop print_stats env Handler selected_layers type max add_layer sum seed output_web_json add_argument ArgumentParser set_defaults size semantic_logit items addWeighted uint8 IdGenerator undo_image_transformation astype copy shape zeros cuda shape cuda undo_image_transformation imwrite prep_unified_display unsqueeze reset prep_panoptic_display float net use_panoptic_head str join basename print glob evalimage mkdir VideoCapture eval_network transform_frame MovingAverage ThreadPool put cleanup_and_exit CAP_PROP_FPS get_next_frame video_multiframe cuda get_avg list apply_async exit add append get isdigit reversed Queue int time print extract_frame VideoCapture CAP_PROP_FRAME_HEIGHT MovingAverage VideoWriter CAP_PROP_FPS CAP_PROP_FRAME_COUNT get_avg round VideoWriter_fourcc release add set_val range get FastBaseTransform total_time ProgressBar CAP_PROP_FRAME_WIDTH print reset MovingAverage flip_test image unsqueeze save postprocess_ins_sem stuff_num_classes evalvideo cuda pull_image_name video merge_segmentation get_avg 
list str values display transpose len exit evalimages add from_numpy set_val prep_panoptic_display append multiscale_test savevideo range cat imsave pull_item format total_time ProgressBar shuffle evalimage prep_unified_result flip net use_panoptic_head enumerate fast_nms print sort mask_proto_debug Variable prep_panoptic_result clone reset disable split zeros makedirs class_names values print print_maps get_ap append sum range enumerate len print make_sep make_row len setattr getattr batch_size save_folder MovingAverage lr_warmup_until ScatterWrapper zero_grad make_mask SGD tuple DataLoader save_weights compute_validation_map MultiBoxLoss cuda max max_iter set_lr lr_warmup_init name COCOPanoptic get_interrupt add delayed_settings ceil sum range format replace save_path init_weights load_weights resume mkdir lr item start_iter gamma net time get_latest remove criterion backward print keep_latest EPSNet prepare_data setup_eval parameters reset iteration disable_all step len param_groups Variable cuda parse_args eval replace eval append LongTensor FloatTensor clamp size min expand max intersect expand_as size expand transpose mm view topk encode std use_yolo_regressors zeros_like reshape size jaccard mean long is_in_box append center_distance max range cat enumerate use_yolo_regressors fill_ size jaccard index_fill_ encode max range enumerate center_size log cat cat point_form max max min long clamp sanitize_coordinates size expand size expand_as faster_rcnn_scale sanitize_coordinates interpolate save min_size view mask_proto_mask_activation squeeze matmul range size gt_ max_size float long crop mask_size display_lincomb contiguous mask_proto_debug t center_size preserve_aspect_ratio zeros numpy zeros astype cuda instance_logit num_classes semantic_logit size cuda permute things_to_stuff_map min_size faster_rcnn_scale subtract_means astype float32 preserve_aspect_ratio resize max_size normalize numpy array clip show exp size astype matmul t argsort imshow zeros float 
numpy range append prep_box concatenate random_sample_crop print uniform randint array minimum clip maximum intersect minimum maximum copy choice uniform float array range jaccard_numpy list range product zip t view matmul shape tile reshape copy exp reshape draw_idle cos square pi imshow set_data tile sin clip render render render render render exp pi set_val uniform render test_uniqueness str set_text draw_idle stack append len randomize add clear str set_text print draw_idle stack save len sum print reshape astype int32 abs range max jaccard float make_priors x_func minimize size min compute_hits cat ndarray isinstance print pretty_str MovingAverage append get_avg range len show basename smoother plot xlabel ylabel title legend append show basename plot xlabel ylabel title legend basesname init add remove clear append perf_counter stop print start pop format print total_time find max len | # EPSNet: Efficient Panoptic Segmentation Network with Cross-layer Attention Fusion This project hosts the code for implementing the EPSNet for panoptic segmentation. - [EPSNet: Efficient Panoptic Segmentation Network with Cross-layer Attention Fusion](https://arxiv.org/abs/2003.10142) Some examples from our EPSNet model (19 fps on a 2080Ti and 38.9 PQ on COCO Panoptic test-dev):    ## Models Here are our EPSNet models trained on COCO Panoptic dataset along with their FPS on a 2080Ti and PQ on `val`: | Image Size | Backbone | FPS | PQ | Weights | | 3,113 |
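The PQ column reported for EPSNet above is Panoptic Quality, the standard metric for panoptic segmentation: the sum of IoUs over matched (true-positive) segment pairs divided by `TP + 0.5*FP + 0.5*FN`. A minimal sketch of that definition — not code from this repo, which delegates matching to the COCO panoptic evaluator:

```python
def panoptic_quality(matched_ious, num_fp, num_fn):
    """Panoptic Quality:

        PQ = sum(IoU over matched pairs) / (TP + 0.5*FP + 0.5*FN)

    matched_ious holds the IoU of each true-positive match (by convention a
    match requires IoU > 0.5, which makes the matching unique). PQ factors
    into SQ (mean IoU of matches) times RQ (an F1-style recognition term).
    """
    tp = len(matched_ious)
    denom = tp + 0.5 * num_fp + 0.5 * num_fn
    if denom == 0:
        return 0.0  # no predictions and no ground truth for this class
    return sum(matched_ious) / denom
```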
networkinference/ESL | ['time series'] | ['Inferring network connectivity from event timing patterns'] | simulation.py reconstruction.py reconstruction subplots euclidean_distances kurtosis vstack max skew use set_title ones set_xlabel tolist argmin legend append range asarray fabs plot close tight_layout copy mean pinv auc time T norm print loadtxt reshape roc_curve argsort dot set_ylabel zeros std len | # ESL -- Event Space Linearization The Event Space Linearization (ESL) is the framework proposed in the article [Inferring network connectivity from event timing patterns, PRL](https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.121.054101). Relying only on spike timing data, ESL reveals the interaction topology of networks of spiking neurons without assuming specific neuron models to be known in advance. In this repository, you will find simple example codes for simulating and reconstructing networks of spiking neurons in Python. [Nest Simulator](http://www.nest-simulator.org/) is required for simulations. Optimized codes for reconstruction may also be found in [Connectivity_from_event_timing_patterns](https://gitlab.com/di.ma/Connectivity_from_event_timing_patterns/) ## Citation ``` @article{PhysRevLett.121.054101, title = {Inferring Network Connectivity from Event Timing Patterns}, author = {Casadiego, Jose and Maoutsa, Dimitra and Timme, Marc}, | 3,114 |
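ESL scores its network reconstructions with ROC curves and AUC (the dependency list shows `roc_curve` and `auc`). The same AUC value can be computed directly from connection scores via its probabilistic (Mann-Whitney) interpretation: the chance that a randomly chosen true connection outscores a randomly chosen non-connection. A dependency-free sketch assuming binary ground-truth labels — not the repo's code:

```python
def auc_score(scores, labels):
    """AUC as the probability that a random positive outscores a random
    negative; ties count one half. Equivalent to the area under the ROC
    curve, computed here by brute force over all positive/negative pairs.
    """
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

The O(n^2) pair loop keeps the sketch short; rank-based formulations give the same number in O(n log n).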
neu-spiral/AcceleratedExperimentalDesign | ['experimental design'] | ['Accelerated Experimental Design for Pairwise Comparisons'] | PlotCode/Test_Plot.py Code/Lazypack.py Code/AUCTool.py Code/FourSubmodular.py PlotCode/Time_All.py Code/mathpackage.py Code/Greedypack.py Code/Main.py Code/TimeTool.py AUCReturn AppendY EntGreedy Random MutGreedy Greedy CovGreedy FishGreedy FactorizationGreedy NaiveGreedy Greedy ScalarGreedy ScalarLazyMemo LazyGreedy DeMem FactorizationLazyMemo FactorizationLazy NaiveLazy ScalarLazy Hp shermanMorissonUpdate lamfunction EMupdateVariational logis TimeTransfer SizePlot AllData_Bar shape concatenate concatenate copy LogisticRegression dot append array AppendY roc_auc_score fit sample dot exp dot entropy exp ones inv copy outer dot sqrt range len reverse append array range len str subplots arange plot tick_right suptitle set_yscale set_xlabel set_ylabel savefig legend fill_between set_tick_params array set_ylim len subplots arange tick_right set_xticklabels set_xlim bar set_ylabel set_xticks savefig legend append set_tick_params range set_ylim | Experimental Design Acceleration ============================== This code improves the time performance of the greedy algorithm for maximizing D-optimal design for comparisons. We also provide three algorithms Mutual Information (Mut), Fisher Information (Fisher), Entropy (Ent). When using this code, cite the paper >["Accelerated Experimental Design for Pairwise Comparisons"](https://arxiv.org/abs/1901.06080). >Yuan Guo, Jennifer Dy, Deniz Erdogmus, Jayashree Kalpathy-Cramer, Susan Ostmo, J. Peter Campbell, Michael F. Chiang, Stratis Ioannidis. >SIAM International Conference on Data Mining (SDM), Calgary, Alberta, 2019. Usage ====================== An example execution is as follows: | 3,115 |
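The dependency list above includes `shermanMorissonUpdate`, which is the standard trick for accelerating greedy D-optimal design: after each selected comparison the inverse of the information matrix is refreshed by a rank-1 update in O(d^2) rather than re-inverted in O(d^3). A sketch of the Sherman-Morrison identity (function name illustrative, not the repo's signature):

```python
import numpy as np

def sherman_morrison(A_inv, u, v):
    """Rank-1 update of a matrix inverse:

        (A + u v^T)^{-1} = A^{-1} - (A^{-1} u v^T A^{-1}) / (1 + v^T A^{-1} u)

    Valid whenever the scalar denominator is nonzero.
    """
    Au = A_inv @ u
    vA = v @ A_inv
    return A_inv - np.outer(Au, vA) / (1.0 + v @ Au)
```

In a pairwise-comparison design loop, `u` and `v` would both be the feature-difference vector of the candidate pair, so each candidate's effect on the D-optimality objective can be scored cheaply.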
neulab/REALSumm | ['text summarization', 'text generation'] | ['Re-evaluating Evaluation in Text Summarization'] | process_data/realign_all.py scoring/gehrmann_rouge_opennmt/rouge_baselines/g_rouge.py scoring/get_scores.py process_data/process_bart.py process_data/write_to_presumm_format.py scoring/all_metrics/sentence_mover/wmd/evaluator.py scoring/peyrard_s3/S3/JS_eval.py process_data/process_tac.py process_data/realign.py scoring/peyrard_genetic/example.py scoring/utils.py scoring/gehrmann_rouge_opennmt/rouge_baselines/pyrouge/pyrouge/utils/string_utils.py scoring/all_metrics/get_rouge_pyrouge.py scoring/gehrmann_rouge_opennmt/rouge_baselines/util.py scoring/gehrmann_rouge_opennmt/rouge_baselines/Rouge155.py scoring/gehrmann_rouge_opennmt/rouge_baselines/pyrouge/pyrouge/tests/Rouge155_test.py scoring/peyrard_s3/S3/S3.py process_data/process_fast_abs_rl.py process_data/subsample.py process_data/process_unilm.py scoring/peyrard_genetic/SwarmOptimizer.py scoring/peyrard_genetic/JS.py scoring/peyrard_genetic/greedy.py scoring/gehrmann_rouge_opennmt/rouge_baselines/pyrouge/pyrouge/utils/argparsers.py scoring/all_metrics/sentence_mover/corr_examples.py process_data/file2dir.py scoring/all_metrics/moverscore.py scoring/gehrmann_rouge_opennmt/rouge_baselines/pyrouge/pyrouge/__init__.py scoring/gehrmann_rouge_opennmt/rouge_baselines/pyrouge/pyrouge/tests/__main__.py scoring/peyrard_genetic/GeneticOptimizer.py scoring/peyrard_genetic/run.py scoring/gehrmann_rouge_opennmt/rouge_baselines/pyrouge/pyrouge/utils/file_utils.py process_data/process_reddit.py scoring/gehrmann_rouge_opennmt/rouge_baselines/pyrouge/pyrouge/utils/sentence_splitter.py scoring/all_metrics/sentence_mover/wmd/__init__.py analysis/utils.py process_data/utils.py process_data/process_t5.py scoring/peyrard_s3/S3/ROUGE.py scoring/peyrard_s3/S3/word_embeddings.py scoring/gehrmann_rouge_opennmt/rouge_baselines/baseline.py scoring/scorer.py 
scoring/gehrmann_rouge_opennmt/rouge_baselines/pyrouge/pyrouge/Rouge155.py process_data/preprocess.py scoring/all_metrics/sentence_mover/smd.py scoring/score_dict_update.py process_data/get_alignment.py process_data/filter_by_rouge.py scoring/peyrard_genetic/nlp_utils.py scoring/peyrard_s3/S3/utils.py scoring/peyrard_genetic/post_process_genetic_out.py process_data/process_neusumm.py scoring/gehrmann_rouge_opennmt/rouge_baselines/pyrouge/setup.py scoring/gehrmann_rouge_opennmt/rouge_baselines/pyrouge/pyrouge/test.py scoring/gehrmann_rouge_opennmt/rouge_baselines/pyrouge/pyrouge/utils/log.py get_doc_y_val get_topk filter_summaries get_metrics_list print_ktau_matrix init_logger print_score_ranges get_pickle get_correlation get_system_level_scores file2dir file_split process_files_in_dir process_files_q_to_t get_dataset_from_jsonl process_ptrgen remove_dups process_arxiv process_pubmed match find_matching process_files_in_dir process_files_q_to_t get_dataset_from_jsonl process_ptrgen remove_dups process_arxiv process_pubmed remove_empty tokenize read_summaries read_source_docs write_aligned_data listdir_fullpath get_all_summaries process_t5 convert_to_txt_files load_tac_json_files get_clean_text get_num_sys_outs tokenize write_to_disk read_yang_presum find_matching read_processed_file read_msft_unilm write_to_disk get_chunks read_bottom_up match read_abisee read_fast_abs listdir_fullpath get_content subsample tokenize_file get_sents_from_tags get_word_tokenizer get_chunks sent_tokenize sent_list_to_tagged_str apply_function_to_all_items_in_dir read_file get_sent_tokenizer listdir_fullpath get_pickle retain_first_n_sent sent_tokenize_by_tags write_pickle write_to_presumm_format get_num_lines_ref main get_model_paths get_scores_dict_parallel Scorer invert_metric merge_score_dicts remove_metrics apply_cutoff remove_dup_sys_summs sd_to_new_format remove_duplicates add_normed_scores sd_to_new_format_file word_tokenize tokenize_file tokenize_strings get_sents_from_tags 
get_word_tokenizer get_chunks sent_tokenize sent_list_to_tagged_str apply_function_to_all_items_in_dir get_metrics_list read_file init_logger get_sent_tokenizer listdir_fullpath get_pickle retain_first_n_sent sent_tokenize_by_tags write_pickle get_rouge get_idf_dict collate_idf padding word_mover_score get_bert_embedding bert_encode batched_cdist_l2 truncate process _safe_divide read_normal_file get_overlap_examples process_files read_rouge_wmd_file get_examples get_embeddings tokenize_texts get_weights get_sent_embedding smd Evaluator TailVocabularyOptimizer WMD split_sentences first_sentence first_two_sentences baseline_main verbatim sent_tag_p_verbatim first_three_sentences sent_tag_verbatim adhoc_old0 adhoc_base pre_sent_tag_verbatim register_to_registry second_sentence no_sent_tag full _len_lcs _get_ngrams rouge rouge_l_summary_level rouge_n _recon_lcs _split_into_words _lcs rouge_l_sentence_level _union_lcs _f_p_r_lcs _get_word_ngrams Rouge155 evaluate_rouge n_grams has_repeat Rouge155 main PyrougeTest str_from_file list_files xml_equal DirectoryProcessor verify_dir get_global_console_logger get_console_logger PunktSentenceSplitter remove_newlines cleanup remove_extraneous_whitespace JS_Swarm JS_Gen save_scored_population GeneticOptimizer get_len greedy_optimizer compute_average_freq compute_tf get_all_content_words_lemmatized get_content_words_in_sentence js_divergence compute_word_freq get_all_content_words compute_tf_doc get_all_content_words_stemmed kl_divergence sentence_tokenizer stem_word get_ngrams to_unicode normalize_word get_len remove_duplicates read_out_file write_to_files get_sents generate SwarmOptimizer compute_average_freq compute_tf is_ngram_content KL_Divergence compute_word_freq JS_eval get_all_content_words pre_process_summary JS_Divergence rouge_n_we _ngram_counts rouge_n _find_closest _safe_f1 _ngrams _get_embedding _has_embedding get_all_content_words pre_process_summary _counter_overlap _ngram_count _soft_overlap _safe_divide S3 
extract_feature stem_word get_ngrams normalize_word get_len get_words _convert_to_numpy load_embeddings setFormatter getLogger addHandler StreamHandler Formatter setLevel INFO FileHandler append get_metrics_list print tabulate list set print len mean append zeros enumerate percentile kendalltau list isnan filter_summaries spearmanr pearsonr values items list ddict min mean append max values sorted kendalltau join list append pearsonr values join basename print file2dir mkdir normpath listdir exists read_file print join split print join listdir process_ptrgen join listdir mkdir join listdir items list format print lower sub randint len print sent_list_to_tagged_str append enumerate split listdir read_summaries join listdir_fullpath join split listdir_fullpath join append train_test_split out_dec out_src print strip write close out_ref sent_tokenize sent_list_to_tagged_str loads get_sent_tokenizer infile enumerate open update load tac_files info open isnumeric list max values sent_list_to_tagged_str join dump get_clean_text info get_num_sys_outs write close mkdir out_dir range enumerate open sep print items list range len join format print read_folder exists split format print split range len print len format split split length append items sorted list n_jobs get_chunks len create_pipe add_pipe English create_pipe add_pipe English create_tokenizer English tokenizer_fn append strip find join print join listdir function create_pipe add_pipe English findall print sent_tokenizer get_sent_tokenizer zip get_word_tokenizer word_tokenizer append append listdir pjoin get_model_paths pjoin Scorer update list get_model_paths get_num_lines_ref glob print pprint pjoin info keys range get_scores_dict_parallel get_pickle sd_to_new_format write_pickle print list keys set update deepcopy print set keys pop list print startswith keys len sorted print dict append keys print list keys set print keys items print min get_metrics_list max list print set append keys print create_tokenizer 
join English print strip create_tokenizer nlp append create_pipe add_pipe enumerate tokenizer_fn join list items isinstance Rouge get_scores truncate convert_tokens_to_ids tokenize update defaultdict partial Counter len LongTensor ones item zeros tensor enumerate len eval padding to collate_idf convert_tokens_to_ids tokenize cat len sum sqrt_ _safe_divide sum get_bert_embedding emd_with_flow unsqueeze div_ append zeros numpy array range cat len strip len append float range split append float strip list read_normal_file print read_rouge_wmd_file values len percentile str print min split append max range len percentile str print split append range len append strip range average embed_batch append get_sent_embedding get_vector sum max range len mean list array pop list Counter array append sum keys range len get_embeddings exp ElmoEmbedder print tokenize_texts WMD append get_weights range len findall split_sentences split_sentences split_sentences split_sentences append strip split split_sentences strip split_sentences append join split_sentences append join split_sentences index findall list strip split_sentences join time format list sorted run_rouge rouge print reduce stemming mean evaluate_rouge get_each_score check_repeats run_google_rouge keys tuple add set range len _split_into_words _lcs dict max range tuple _lcs intersection _get_word_ngrams len _len_lcs _split_into_words len _recon_lcs set _split_into_words union len _split_into_words len mean list map zip str join print Rouge155 output_to_dict map rmtree convert_and_evaluate zip randint exists enumerate makedirs len set join walk extend format setFormatter getLogger addHandler StreamHandler Formatter setLevel remove_extraneous_whitespace sub compile sub compile append compute_tf GeneticOptimizer extend append compute_tf extend SwarmOptimizer append get_len index list map extend list map extend list map tokenize extend tokenize list extend set dict append get_all_content_words_stemmed get dict 
compute_word_freq get_all_content_words_stemmed len get keys set items list compute_average_freq compute_tf kl_divergence print str hasattr isinstance lower tokenize int sorted get_sents_from_tags tuple add set enumerate format print write zip enumerate findall join compute_tf format get_sents evolve save_scored_population info GeneticOptimizer get_all_content_words isnan items list compute_average_freq isnan KL_Divergence pre_process_summary get_all_content_words deque append iteritems _safe_divide pre_process_summary _ngram_counts _ngram_count append append iteritems _get_embedding sorted iteritems _find_closest pre_process_summary _ngram_counts _ngram_count len rouge_n_we JS_eval rouge_n append sorted extract_feature array | # REALSumm: Re-evaluating EvALuation in Summarization #### Paper: [Re-evaluating Evaluation in Text Summarization](https://www.aclweb.org/anthology/2020.emnlp-main.751/) #### Authors: [Manik Bhandari](https://manikbhandari.github.io/), [Pranav Gour](https://scholar.google.com/citations?user=OKM72KwAAAAJ&hl=en), [Atabak Ashfaq](https://www.linkedin.com/in/atabakashfaq), [Pengfei Liu](http://pfliu.com/), [Graham Neubig](http://www.phontron.com/) ## Outline * ### [Leaderboard](https://github.com/neulab/Leaderboard-1) * ### [Motivation](https://github.com/neulab/REALSumm#Motivation-1) * ### [Released Data](https://github.com/neulab/REALSumm#Released-Data-1) * ### [Meta-evaluation Tool](https://github.com/neulab/REALSumm#Meta-evaluation-Tool-1) * ### [Bib](https://github.com/neulab/REALSumm#Bib-1) ## Leaderboard | 3,116 |
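REALSumm's meta-evaluation correlates automatic metric scores with human judgments, and its analysis code computes Kendall's tau alongside Pearson and Spearman (`get_correlation`, `print_ktau_matrix`). A dependency-free sketch of system-level Kendall tau-a — this simple variant ignores tie correction (tau-b), which the SciPy routine the repo imports would handle:

```python
def kendall_tau(x, y):
    """Kendall's tau-a: (concordant - discordant) / total pairs.

    At the system level this measures how often a metric orders two
    summarization systems the same way the human scores do.
    """
    n = len(x)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                concordant += 1   # pair ordered the same way in x and y
            elif s < 0:
                discordant += 1   # pair ordered oppositely
    return (concordant - discordant) / (n * (n - 1) / 2)
```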
neulab/incremental_tree_edit | ['imitation learning'] | ['Learning Structural Edits via Incremental Tree Transformations'] | common/utils.py edit_model/data_model.py asdl/asdl.py asdl/transition_system.py edit_components/change_graph.py edit_model/edit_encoder/edit_encoder.py asdl/lang/csharp/demo.py edit_components/utils/decode.py __init__.py edit_model/embedder.py edit_model/utils.py trees/utils.py edit_model/edit_encoder/hybrid_change_encoder.py asdl/hypothesis.py edit_model/edit_encoder/sequential_change_encoder.py edit_model/edit_encoder/bag_of_edits_change_encoder.py common/savable.py edit_model/encdec/encoder.py edit_model/edit_encoder/tree_diff_encoder.py asdl/utils.py edit_components/utils/sub_token.py asdl/__init__.py asdl/asdl_ast.py trees/__init__.py asdl/lang/csharp/demo_edits.py edit_model/pointer_net.py edit_components/dataset.py edit_components/utils/relevance.py edit_model/encdec/__init__.py edit_components/utils/utils.py edit_model/edit_encoder/__init__.py edit_model/nn_utils.py edit_model/encdec/edit_decoder.py edit_model/editor.py asdl/lang/csharp/csharp_grammar.py edit_model/encdec/decoder.py trees/hypothesis.py edit_model/edit_encoder/graph_change_encoder.py exp_githubedits.py asdl/lang/csharp/csharp_transition.py edit_components/change_entry.py trees/substitution_system.py edit_model/encdec/transition_decoder.py edit_model/gnn.py edit_components/vocab.py edit_model/encdec/sequential_encoder.py edit_model/encdec/graph_encoder.py trees/edits.py edit_components/diff_utils.py asdl/lang/csharp/csharp_hypothesis.py edit_components/utils/wikidata.py edit_model/encdec/sequential_decoder.py datasets/utils.py edit_components/utils/unary_closure.py datasets/githubedits/common/config.py common/registerable.py edit_components/evaluate.py _collect_correction_iteration_example_in_batch decode_updated_code imitation_learning eval_csharp_fixer collect_edit_vecs _collect_iteration_example_in_batch _load_dataset _extract_record _train_fn test_ppl 
_eval_decode_in_batch train _eval_decode _collect_iteration_example _eval_ppl Field ASDLPrimitiveType ASDLCompositeType ASDLGrammar ASDLType ASDLConstructor ASDLProduction AbstractSyntaxTree RealizedField SyntaxToken DummyReduce AbstractSyntaxNode Hypothesis GenTokenDecodingAction ApplySubTreeAction ApplyRuleAction DecodingAction GenTokenAction ReduceDecodingAction TransitionSystem ApplyRuleDecodingAction ApplySubTreeDecodingAction ReduceAction Action remove_comment CSharpASDLGrammar CSharpHypothesis CSharpTransitionSystem _encode Registrable Savable init_arg_parser cached_property update_args ExampleProcessor get_example_processor_cls isfloat isint Arguments ChangeExample ChangeGraph DataSet _encode load_one_change_entry_csharp default_collate_override TokenLevelDiffer evaluate_nll Vocab VocabEntry dump_rerank_file decode_change_vec ndcg dcg get_nn get_rank_score dump_aggregated_query_results_from_query_results_for_annotation gather_all_query_results_from_annotations generate_reranked_list load_query_results generate_top_k_query_results SubTokenHelper extract_unary_closure get_unary_closure_syntax_sub_tree get_entry_str run_from_ipython BatchedCodeChunk Graph2IterEditEditor SequentialAutoEncoder _prepare_edit_encoding NeuralEditor WordPredictionMultiTask ChangedWordPredictionMultiTask Graph2TreeEditor TreeBasedAutoEncoderWithGraphEncoder Seq2SeqEditor SyntaxTreeEmbedder EmbeddingTable CodeTokenEmbedder ConvolutionalCharacterEmbedder Embedder AdjacencyList GatedGraphNeuralNetwork main input_transpose batch_iter dot_prod_attention log_softmax log_sum_exp to_input_variable word2id pad_lists length_array_to_mask_tensor get_sort_map id2word anonymize_unk_tokens PointerNet get_method_args_dict cached_property BagOfEditsChangeEncoder EditEncoder GraphChangeEncoder HybridChangeEncoder SequentialChangeEncoder TreeDiffEncoder Decoder IterativeDecoder SyntaxTreeEncoder SequentialDecoderWithTreeEncoder SequentialDecoder ContextEncoder SequentialEncoder TransitionDecoder 
Delete Edit Stop Add AddSubtree Hypothesis SubstitutionSystem copy_tree_field get_field_repr get_productions_str get_sibling_ids find_by_id get_field_node_queue calculate_tree_prod_f1 stack_subtrees update hasattr load compute_change_edges time examples dump replace isinstance tgt_actions print enable populate_aligned_token_index_and_mask average load_from_jsonl disable exists enumerate open edit_mappings batch_iter copy_and_reindex_w_dummy_reduce model clip_grad_norm_ zero_grad save cuda list _generate_target_tree_edits Adam load_state_dict tgt_edits append state_dict tgt_actions param_groups item _eval_decode_in_batch _eval_ppl load join time updated_code_ast isinstance backward print SubstitutionSystem parameters transition_system substitution_system prev_code_ast train step print time evaluate_nll isinstance print average eval sum len print average eval sum len compute_change_edges to_string join decode_with_gold_sample root_node get_productions_str ChangeExample populate_aligned_token_index_and_mask add append calculate_tree_prod_f1 float compute_change_edges to_string join tgt_actions root_node get_productions_str decode_with_gold_sample_in_batch ChangeExample populate_aligned_token_index_and_mask add append calculate_tree_prod_f1 float enumerate to_string compute_change_edges decode_with_extend_correction_in_batch tgt_actions root_node get_productions_str ChangeExample add populate_aligned_token_index_and_mask append calculate_tree_prod_f1 range len join to_string list print from_file makedirs write close build Adam parameters _load_dataset _train_fn device to open to_string _train_fn save device max open list defaultdict Adam build add load_state_dict append to range state_dict examples tgt_actions DataSet enable close eval float load join deepcopy time print from_file write extend parameters _load_dataset disable makedirs load setrecursionlimit print _load_dataset _eval_ppl args load int dump basename setrecursionlimit print _load_dataset 
_eval_decode_in_batch _eval_decode _eval_ppl open seed int load setrecursionlimit print eval _load_dataset args load dump replace print eval _load_dataset open numpy args join sub compile dict add_argument ArgumentParser _actions setattr dest default float int float get_ast_from_json_obj compute_change_edges reindex_w_dummy_reduce populate_gen_and_copy_index_and_mask ChangeExample populate_aligned_token_index_and_mask loads get_decoding_edits_fast _encode get_decoding_actions namedtuple compile list exp training keys dict eval train array load print eval load_from_jsonl generate_reranked_list startswith open args load dump examples isinstance print encode_code_changes code_change_encoder eval load_from_jsonl startswith open args exp log2 float range len int readline print strip close append open update print glob dict isfile load_query_results exp log2 float range len sorted dcg examples sort append dist_func range len join get_example_by_id replace print get_nn makedirs update items list join get_example_by_id replace dict gather_all_query_results_from_annotations keys makedirs join items dump replace print dict open makedirs CSharpTransitionSystem load_from_jsonl read from_roslyn_xml is_composite append add_value AbstractSyntaxNode fields production ndarray isinstance unsqueeze device to AdjacencyList GatedGraphNeuralNetwork compute_node_representations print log_softmax squeeze masked_fill_ softmax bool squeeze masked_fill_ bool max log masked_fill_ bool ones len tensor max enumerate append max len tensor word2id input_transpose pad_lists append max range len list sorted len range enumerate int sorted arange shuffle ceil float range enumerate len dict list __to_new_token_seq getfullargspec dict compile enumerate parent_node as_value_list find_by_id parent_field fields as_value_list parent_node find_by_id append parent_field fields pop copy_and_reindex_w_dummy_reduce root_node parent_node as_value_list find_by_id append parent_field fields 
copy_and_reindex_wo_dummy_reduce isinstance as_value_list extend fields len get str list items isinstance as_value_list dict fields items list sum append parent_node id | # Learning Structural Edits via Incremental Tree Transformations Code for ["Learning Structural Edits via Incremental Tree Transformations" (ICLR'21)](https://openreview.net/pdf?id=v9hAX77--cZ) If you use our code and data, please cite our paper: ``` @inproceedings{yao2021learning, title={Learning Structural Edits via Incremental Tree Transformations}, author={Ziyu Yao and Frank F. Xu and Pengcheng Yin and Huan Sun and Graham Neubig}, booktitle={International Conference on Learning Representations}, year={2021}, url={https://openreview.net/forum?id=v9hAX77--cZ} | 3,117 |
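The dependency dump for this row names edit operations (`Delete`, `Add`, `AddSubtree`, `Stop`) applied incrementally to a syntax tree. A toy sketch of that interface on a plain node class — these classes and helpers are illustrative stand-ins, not the repo's actual `Edit` hierarchy:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Node:
    """A tree node: a label plus an ordered list of child subtrees."""
    label: str
    children: List["Node"]

def delete_child(parent: Node, idx: int) -> None:
    """Delete edit: remove the child (and its whole subtree) at position idx."""
    parent.children.pop(idx)

def add_child(parent: Node, idx: int, node: Node) -> None:
    """Add / AddSubtree edit: insert a node or subtree at position idx."""
    parent.children.insert(idx, node)
```

A sequence of such edits, applied one at a time until a `Stop` decision, incrementally transforms one tree into another.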
neulab/langrank | ['cross lingual transfer'] | ['Choosing Transfer Languages for Cross-Lingual Learning'] | langrank_predict.py tests/test_train_file.py index_ted_datasets.py index_el_datasets.py index_pos_datasets.py index_parsing_datasets.py langrank.py get_vocab read_data get_vocab read_data get_vocab read_data read_data map_task_to_models map_task_to_data uriel_distance_vec check_task_model_data check_task_model prepare_train_file distance_vec lgbm_rel_exp rank prepare_new_dataset get_candidates read_vocab_file train check_task test_train append int split append split map_task_to_models join check_task join check_task_model map_task_to_data append int split join map_task_to_data resource_filename item append isinstance print set float len inventory_distance print geographic_distance syntactic_distance genetic_distance phonological_distance featural_distance set float intersection len ndarray isinstance join format uriel_distance_vec print len write close distance_vec prepare_new_dataset mkdir zip open array lgbm_rel_exp enumerate join save_model load_svmlight_file LGBMRanker fit Booster list check_task_model distance_vec append sum range predict map_task_to_models format resource_filename enumerate join uriel_distance_vec print min argsort get_candidates array len prepare_train_file format train | # LangRank by [NeuLab](http://www.cs.cmu.edu/~neulab/) @ [CMU LTI](https://lti.cs.cmu.edu) <img align="right" width="400px" src="figures/overview.png"> LangRank is a program for **Choosing Transfer Languages for Cross-lingual Transfer Learning**, described by our [paper](https://arxiv.org/abs/1905.12688) on the topic at ACL 2019. Cross-lingual transfer, where a high-resource *transfer* language is used to improve the accuracy of a low-resource *task* language, is now an invaluable tool for improving performance of natural language processing (NLP) on low-resource languages. 
However, given a particular task language, it is not clear *which* language to transfer from, and the standard strategy is to select languages based on *ad hoc* criteria, usually the intuition of the experimenter. Since a large number of features contribute to the success of cross-lingual transfer (including phylogenetic similarity, typological properties, lexical overlap, or size of available data), even the most enlightened experimenter rarely considers all these factors for the particular task at hand. LangRank is a program to solve this task of automatically selecting optimal transfer languages, treating it as a ranking problem and building models that consider the aforementioned features to perform this prediction. For example, let's say you have a *machine translation* (MT) task, and you want to know which languages/datasets you should use to build a system for the low-resource language *Azerbaijani*. In this case, you would prepare an example of the type of data you want to translate (in word and sub-word format, details below), and the language code "aze", then run a command like the following (where `-n 3` are the top 3 languages): | 3,118 |
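The langrank dump above includes `ndcg` and `dcg` among its evaluation helpers, consistent with the README's framing of transfer-language selection as a ranking problem. A sketch of the standard definitions those names usually denote (the repo's exact variant may differ, e.g. in gain or truncation):

```python
import math

def dcg(relevances):
    """Discounted cumulative gain over a ranked list of relevance scores."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))

def ndcg(ranked_relevances):
    """Normalize DCG by the DCG of the ideal (descending-sorted) ranking."""
    ideal = dcg(sorted(ranked_relevances, reverse=True))
    return dcg(ranked_relevances) / ideal if ideal > 0 else 0.0
```

A ranking that already lists the most relevant transfer languages first scores 1.0; any misordering scores strictly less.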
neuro-ml/inverse_weighting | ['medical image segmentation', 'semantic segmentation'] | ['Universal Loss Reweighting to Balance Lesion Size Inequality in 3D Medical Image Segmentation'] | iw/dataset/lits.py iw/path.py iw/metric.py iw/utils.py iw/torch.py iw/xml_parsing.py iw/batch_iter.py iw/cv.py iw/io.py iw/model/unet.py iw/dataset/luna.py sample_center_uniformly cc2weight center_choice extract_patch scale_flair interpolate_np fill3d get_connected_components load_itkimage_lungmask itkimage2image load_nii get_ids load_pred_from_exp save_nii get_iw_dir_name load_itkimage_ct exp2prc_df get_size_df prc_records get_prc evaluate_individual_metrics_with_prc get_intersection_stat_dice_id generalized_dice_loss asymmetric_similarity_loss asymmetric_similarity_loss_orig dice_loss train_step_with_cc is_right_shape volume2diameter np_sigmoid get_positive_class_fraction itkimage2image bin_segm fix_seed get_pred get_iw_dir_name nodules2centers root2expert_roots get_nodules expert_root2nodules id2root scale_ct LITS Proxy scale_ct get_n_tumors apply_mask LUNA get_unet all sample_center_uniformly shape array zeros_like shape unique sum prod len get_centered_box cc2weight crop_to_box array int T float32 int32 label percentile min astype float32 max clip repeat jp range load_pred get_data Nifti1Image eye save jp range ReadImage jp ReadImage jp listdir list dice_score dict unique zip append argmax max list volume2diameter inf append label sum get_pred range values get_intersection_stat_dice_id list from_records jp concat load_json keys range append concat index np_sigmoid mean linspace append std fraction len load_from_folder items list defaultdict join metric tqdm save_json load_pred load_y_true makedirs list sigmoid sum dim range cat list sigmoid sum dim range list sigmoid sum dim range list sigmoid sum dim range optimizer_step criterion sequence_to_var architecture train len load_y get_pred shape array seed manual_seed jp listdir getroot append append text float int append mean 
root2expert_roots id2root clip float32 partial FPN BatchNorm3d Sequential Conv3d add PreActivation3d Upsample MaxPool3d ResBlock3d | # Universal Loss Reweighting to Balance Lesion Size Inequality in 3D Medical Image Segmentation Code release ## Table of Contents * [*iw* in a few lines of code](#*iw*-in-a-few-lines-of-code) * [Requirements](#requirements) * [Repository Structure](#repository-structure) * [Experiment Reproduction](#experiment-reproduction) ## *iw* in a few lines of code The purpose of this repo is reproducibility. The method itself could be explained in a few lines of code | 3,119 |
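The inverse-weighting (*iw*) idea named in the README above — rebalancing the loss so small lesions are not drowned out by large ones — is suggested by the `cc2weight` helper visible in the dependency dump (built on `label` and `unique`). A minimal numpy sketch of that idea, assuming the connected-component labeling of the ground-truth mask has already been computed; this is not the repo's exact normalization:

```python
import numpy as np

def inverse_weights(cc_labels):
    """Given a connected-component label map (0 = background), return a
    per-voxel weight map inversely proportional to each component's size,
    so every lesion contributes comparably to the loss."""
    weights = np.ones(cc_labels.shape, dtype=float)
    labels, sizes = np.unique(cc_labels[cc_labels > 0], return_counts=True)
    for lab, size in zip(labels, sizes):
        weights[cc_labels == lab] = 1.0 / size
    return weights
```

The weight map is then multiplied element-wise into a voxel-level loss (cross-entropy, Dice, etc.) before reduction.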
neuroailab/flex-ml-agents | ['unity'] | ['Unity: A General Platform for Intelligent Agents'] | ml-agents/mlagents/envs/communicator_objects/environment_parameters_proto_pb2.py ml-agents/tests/trainers/test_trainer_controller.py ml-agents/mlagents/trainers/buffer.py ml-agents/mlagents/envs/communicator_objects/unity_rl_initialization_input_pb2.py ml-agents/mlagents/envs/communicator_objects/brain_parameters_proto_pb2.py ml-agents/tests/envs/test_envs.py ml-agents/mlagents/envs/communicator_objects/__init__.py ml-agents/mlagents/envs/rpc_communicator.py ml-agents/mlagents/trainers/ppo/__init__.py gym-unity/gym_unity/envs/__init__.py ml-agents/mlagents/envs/communicator_objects/agent_action_proto_pb2.py ml-agents/mlagents/trainers/learn.py gym-unity/gym_unity/envs/unity_env.py ml-agents/mlagents/trainers/bc/trainer.py ml-agents/mlagents/trainers/policy.py ml-agents/mlagents/envs/communicator_objects/unity_rl_initialization_output_pb2.py ml-agents/tests/trainers/test_curriculum.py ml-agents/mlagents/trainers/meta_curriculum.py ml-agents/mlagents/trainers/curriculum.py ml-agents/mlagents/trainers/ppo/models.py ml-agents/mlagents/envs/communicator_objects/space_type_proto_pb2.py ml-agents/mlagents/envs/communicator_objects/unity_output_pb2.py ml-agents/mlagents/envs/communicator_objects/unity_input_pb2.py gym-unity/gym_unity/__init__.py ml-agents/mlagents/trainers/ppo/policy.py ml-agents/mlagents/envs/communicator_objects/engine_configuration_proto_pb2.py ml-agents/mlagents/envs/communicator_objects/brain_type_proto_pb2.py ml-agents/mlagents/envs/socket_communicator.py gym-unity/setup.py ml-agents/mlagents/trainers/trainer_controller.py ml-agents/mlagents/envs/communicator_objects/agent_info_proto_pb2.py ml-agents/mlagents/envs/communicator_objects/unity_to_external_pb2_grpc.py ml-agents/tests/trainers/test_ppo.py ml-agents/mlagents/envs/brain.py ml-agents/mlagents/trainers/bc/policy.py ml-agents/tests/trainers/test_bc.py ml-agents/tests/mock_communicator.py 
ml-agents/mlagents/envs/communicator_objects/unity_message_pb2.py ml-agents/mlagents/trainers/models.py ml-agents/mlagents/trainers/__init__.py ml-agents/mlagents/envs/communicator_objects/resolution_proto_pb2.py ml-agents/mlagents/envs/communicator_objects/unity_to_external_pb2.py ml-agents/mlagents/envs/communicator_objects/unity_rl_input_pb2.py ml-agents/tests/trainers/test_buffer.py ml-agents/mlagents/trainers/trainer.py ml-agents/mlagents/envs/communicator.py ml-agents/setup.py ml-agents/mlagents/envs/communicator_objects/unity_rl_output_pb2.py ml-agents/mlagents/envs/__init__.py ml-agents/mlagents/trainers/bc/__init__.py gym-unity/tests/test_gym.py ml-agents/mlagents/envs/exception.py ml-agents/mlagents/envs/environment.py ml-agents/mlagents/trainers/bc/models.py ml-agents/mlagents/envs/communicator_objects/command_proto_pb2.py ml-agents/mlagents/trainers/exception.py ml-agents/tests/trainers/test_meta_curriculum.py ml-agents/mlagents/trainers/ppo/trainer.py ml-agents/mlagents/envs/communicator_objects/header_pb2.py UnityGymException UnityEnv test_gym_wrapper test_multi_agent BrainInfo BrainParameters Communicator UnityEnvironment UnityException UnityTimeOutException UnityEnvironmentException UnityActionException RpcCommunicator UnityToExternalServicerImplementation SocketCommunicator UnityToExternalServicer UnityToExternalStub add_UnityToExternalServicer_to_server BufferException Buffer Curriculum CurriculumError MetaCurriculumError TrainerError main run_training MetaCurriculum LearningModel Policy UnityPolicyException UnityTrainerException Trainer TrainerController BehavioralCloningModel BCPolicy BehavioralCloningTrainer PPOModel PPOPolicy PPOTrainer get_gae discount_rewards MockCommunicator test_initialization test_reset test_close test_step test_handles_bad_filename test_dc_bc_model test_cc_bc_model test_visual_cc_bc_model test_bc_policy_evaluate dummy_config test_visual_dc_bc_model assert_array test_buffer location default_reset_parameters 
test_init_curriculum_bad_curriculum_raises_error test_init_curriculum_happy_path test_increment_lesson test_get_config test_init_meta_curriculum_happy_path test_increment_lessons_with_reward_buff_sizes default_reset_parameters MetaCurriculumTest test_increment_lessons measure_vals reward_buff_sizes test_set_all_curriculums_to_lesson_num test_get_config test_set_lesson_nums test_init_meta_curriculum_bad_curriculum_folder_raises_error more_reset_parameters test_rl_functions test_ppo_model_dc_vector_curio test_ppo_model_dc_vector_rnn test_ppo_model_cc_vector_rnn test_ppo_policy_evaluate test_ppo_model_cc_visual dummy_config test_ppo_model_dc_vector test_ppo_model_dc_visual test_ppo_model_cc_visual_curio test_ppo_model_dc_visual_curio test_ppo_model_cc_vector_curio test_ppo_model_cc_vector test_initialization test_initialize_trainers dummy_bc_config dummy_bad_config dummy_config dummy_start test_load_config sample step MockCommunicator UnityEnv step MockCommunicator UnityEnv method_handlers_generic_handler add_generic_rpc_handlers start_learning int str TrainerController int Process getLogger print start info append randint docopt range list zeros_like size reversed range asarray tolist discount_rewards UnityEnvironment close MockCommunicator UnityEnvironment close MockCommunicator reset str local_done print agents step close reset MockCommunicator UnityEnvironment len UnityEnvironment close MockCommunicator reset_default_graph close reset_default_graph reset_default_graph reset_default_graph reset_default_graph flatten list range len get_batch Buffer assert_array append_update_buffer make_mini_batch append reset_agent array range Curriculum Curriculum Curriculum MetaCurriculum assert_has_calls MetaCurriculumTest increment_lessons assert_called_with MetaCurriculumTest increment_lessons assert_called_with assert_not_called MetaCurriculumTest set_all_curriculums_to_lesson_num MetaCurriculumTest dict update MetaCurriculumTest reset_default_graph reset_default_graph 
reset_default_graph reset_default_graph reset_default_graph reset_default_graph reset_default_graph reset_default_graph reset_default_graph reset_default_graph reset_default_graph assert_array_almost_equal array discount_rewards TrainerController | # FleX ML Agents Simulation Environment - Unity ML Agents Fork with NVidia FleX support This is a fork of Unity ML Agents extended to support [NVidia FleX](https://developer.nvidia.com/flex "NVidia FleX"), a particle based physics simulation engine, able to simulate various types of material such as rigid bodies, soft bodies, cloth, inflatables, fluids, and gases. This implementation uses the [NVidia FleX Unity plugin](https://assetstore.unity.com/packages/tools/physics/nvidia-flex-for-unity-1-0-beta-120425 "NVidia FleX plugin"). Please visit [the official website](https://github.com/neuroailab/flex-ml-agents/blob/master/UnitySDK/Assets/README.md "FleX ML Agents Website") for a brief introduction into flex-ml-agents and have a look at our [documentation](https://github.com/neuroailab/flex-ml-agents/blob/master/UnitySDK/Assets/ "FleX ML Agents Documentation"). The FleX ML Agents Simulation Environment is part of our code release for [Flexible Neural Representation for Physics Prediction](https://neuroailab.github.io/physics/ "Flexible Neural Representation for Physics Prediction"). # In what follows is the original ml-agents documentation <img src="docs/images/unity-wide.png" align="middle" width="3000"/> <img src="docs/images/image-banner.png" align="middle" width="3000"/> | 3,120 |
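The ml-agents PPO trainer in this row's dump lists `discount_rewards` and `get_gae`. The backward recursion behind the first is standard in RL and can be sketched as follows — illustrative, not the repo's exact (numpy-vectorized) code:

```python
def discount_rewards(rewards, gamma=0.99):
    """Discounted returns G_t = r_t + gamma * G_{t+1}, computed by
    iterating backwards over one episode's reward sequence."""
    returns = [0.0] * len(rewards)
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        returns[t] = running
    return returns
```

GAE builds on the same backward pass, replacing raw rewards with TD residuals and discounting by `gamma * lambda`.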
neuropoly/axondeepseg | ['active learning', 'semantic segmentation'] | ['Deep Active Learning for Axon-Myelin Segmentation on Histology Data'] | test/test_config_tools.py test/testing/test_segmentation_scoring.py AxonDeepSeg/integrity_test.py AxonDeepSeg/data_management/input_data.py test/testing/test_noise_simulation.py AxonDeepSeg/segment.py AxonDeepSeg/mapping_results.py AxonDeepSeg/download_model.py test/data_management/test_patch_extraction.py ads_plugin.py AxonDeepSeg/trainingforhelios.py AxonDeepSeg/morphometrics/compute_morphometrics.py AxonDeepSeg/morphometrics/launch_morphometrics_computation.py test/test_postprocessing.py AxonDeepSeg/visualization/visualize.py AxonDeepSeg/visualization/get_masks.py AxonDeepSeg/visualization/generate_axons_from_myelin.py AxonDeepSeg/patch_management_tools.py AxonDeepSeg/network_construction.py test/visualization/test_get_masks.py test/test_download_models.py test/visualization/test_generate_axons_from_myelin.py test/test_train_network.py AxonDeepSeg/download_tests.py AxonDeepSeg/params.py AxonDeepSeg/testing/noise_simulation.py test/test_params.py test/morphometrics/test_compute_morphometrics.py test/test_integrity_test.py AxonDeepSeg/train_network.py AxonDeepSeg/data_management/patch_extraction.py AxonDeepSeg/ads_utils.py test/data_management/test_dataset_building.py test/testing/test_launch_performance_metrics.py test/morphometrics/test_launch_morphometrics_computation.py AxonDeepSeg/visualization/merge_masks.py docs/source/conf.py AxonDeepSeg/postprocessing.py models/config_to_csv.py AxonDeepSeg/testing/segmentation_scoring.py test/visualization/test_simulate_axons.py test/testing/test_statistics_generation.py test/visualization/test_merge_masks.py test/test_segment.py AxonDeepSeg/data_management/dataset_building.py AxonDeepSeg/testing/launch_performance_metrics.py test/visualization/test_visualize.py AxonDeepSeg/__init__.py AxonDeepSeg/apply_model.py AxonDeepSeg/testing/statistics_generation.py test/test_ads_utils.py 
test/test_download_tests.py setup.py config.py AxonDeepSeg/config_tools.py AxonDeepSeg/visualization/simulate_axons.py ADSsettings ADScontrol PostDevelopCommand get_config_path imwrite config_setup get_existing_models_list extract_axon_and_myelin_masks_from_image_data read_config traceback_to_server convert_path download_data init_ads _main_thread_terminated imread init_error_client load_acquisitions apply_convnet prepare_patches process_segmented_patches perform_batch_inference axon_segmentation ensure_list_type default_configuration grid_config generate_name_config generate_config generate_struct flatten generate_features validate_config rec_update update_config main download_model main download_tests integrity_test segment_list result_mapping map_model_to_images conv_relu downconv uconv_net im2patches_overlap patches2im_overlap remove_intersection floodfill_axons generate_and_save_colored_image_with_index_numbers generate_axon_numbers_image get_centroids segment_folders generate_config_dict segment_image generate_resolution main generate_default_parameters main compute_training train_model dice_coef_loss dice_axon dice_myelin main dice_coef patched_to_dataset split_data find_minority_type sort_list_files raw_img_to_patches descritize_mask DataGen labellize_mask_2d extract_patch get_axon_morphometrics evaluate_myelin_thickness_in_px get_pixelsize _check_measures_are_relatively_valid save_map_of_axon_diameters get_aggregate_morphometrics draw_axon_diameter load_axon_morphometrics write_aggregate_morphometrics warn_if_measures_are_unexpected save_axon_morphometrics evaluate_myelin_area_in_px main launch_morphometrics_computation launch_performance_metrics change_brightness add_additive_gaussian_noise add_multiplicative_gaussian_noise score_analysis dice Metrics_calculator pw_dice metrics binarize print_metrics labellize metrics_classic_wrapper generate_statistics metrics_single_wrapper compute_metrics save_metrics main volumize output_network_to_proba 
generate_axons_from_myelin get_masks get_image_unique_vals_properties rgb_rendering_of_mask merge_masks calc_myelin_thickness SimulateAxons visualize_segmentation retrieve_hyperparameters retrieve_training_data visualize_training remove_struct config_decode describe main compare describe_model TestCore TestCore TestCore TestCore TestCore TestCore TestCore TestCore TestCore TestCore TestCore TestCore TestCore TestCore TestCore TestCore TestCore TestCore TestCore TestCore TestCore TestCore get_config_path put_nowait format qsize print _terminator _timed_queue_join acquire get_config_path format strtobool print ConfigParser eval input bool ConfigParser get_config_path read str get get_config_path config_setup read_config init_error_client traceback_to_server Client strtobool get print Retry HTTPAdapter Session parse_header mount append absolute isinstance image_as_uint uint8 array astype realpath join dirname remove convert_path prepare_patches process_segmented_patches Saver set_verbosity reset_default_graph Session str restore load_acquisitions uconv_net set_session placeholder range asarray filterwarnings ConfigProto with_suffix print divmod float32 output perform_batch_inference extend array len list default_configuration imwrite apply_convnet zeros_like get_masks convert_path stem map range update_config exists enumerate len list print convert_path astype map shape append imread enumerate im2patches_overlap append len cumsum print stack append enumerate split reshape argmax predict list keys get items list isinstance Mapping default_configuration isinstance update deepcopy list default_configuration items isinstance generate_struct flatten rec_update Iterable append update_config int float generate_features join list str move print convert_path set rmtree joinpath Path iterdir exists download_model join list str move print convert_path set rmtree joinpath Path iterdir exists download_tests str parent print stem pw_dice logical_and axon_segmentation Path mkdir 
imread axon_segmentation list segment_list len axon_segmentation enumerate range len downconv Model append conv_relu Input range append divmod meshgrid zeros stack max enumerate label regionprops fromarray ones_like uint8 colorize convert floodfill equal astype get_centroids array range len bool astype uint8 int truetype asarray Draw text new size findfont sum max range fromarray colorize paste save open format parts print name convert_path stem axon_segmentation Path exists str format print name convert_path min exit stem write tqdm shape axon_segmentation imread convert_path generate_config_dict convert_path segment_image __version__ ArgumentParser exists open str segment_folders exit shape reverse parse_args format vars float int read parent print add_argument_group add_argument min generate_resolution generate_default_parameters chdir train_model compute_training convert_path Saver save str list uconv_net load_model TensorBoard Adam range DataGen Compose fit_generator choice load_weights mkdir listdir BORDER_REFLECT_101 compile int get_session ModelCheckpoint BORDER_CONSTANT len flatten sum flatten sum flatten sum train_model generate_config Path read imwrite extract_patch convert_path astype tqdm rescale is_dir joinpath mkdir float imread labellize_mask_2d enumerate open imwrite convert_path is_dir floor sort_list_files seed name append imread range asarray find_minority_type mkdir float tqdm joinpath split iterdir len len seed sorted convert_path choice is_dir rmtree rename mkdir len zeros_like where mean range len uint8 astype labellize_mask_2d zeros range len append shape divmod convert_path arange zeros_like tuple convert_path area solidity generate_axon_numbers_image orientation regionprops watershed get_pixelsize shape append minor_axis_length range format evaluate_myelin_thickness_in_px size distance_transform_edt reversed label empty perimeter equivalent_diameter eccentricity print index average evaluate_myelin_area_in_px centroid len 
equivalent_diameter minor_axis_length warn_if_measures_are_unexpected warn_if_measures_are_unexpected _check_measures_are_relatively_valid print safe_substitute Template centroid getattr str convert_path save load str convert_path get_axon_morphometrics asarray arange parent subplots set_title convert_path colorbar FigureCanvas imshow Figure label zeros convert_path savefig count_nonzero get_axon_morphometrics get_pixelsize convert_path size mean sqrt float len convert_path convert_path imread imwrite endswith is_dir to_excel stem append imread get_axon_morphometrics with_suffix to_csv generate_and_save_colored_image_with_index_numbers tqdm array ew_dice Metrics_calculator logical_and imread array normal add shape normal multiply shape list _save_prediction_and_ground_truth_images argmin astype set difference round append label float sum array regionprops len sum astype shape label float DataFrame zeros regionprops sum logical_and convert_path print_metrics generate_statistics is_dir save_metrics convert_path print_metrics generate_statistics is_dir save_metrics str list items print PrettyTable add_row values update load output_network_to_proba print name convert_path len labellize tqdm axon_segmentation compute_metrics imread iterdir enumerate open convert_path rec_update exists sum exp update reshape pw_dice volumize max enumerate zeros_like len zeros unique enumerate zeros range metrics_classic_wrapper metrics_single_wrapper imwrite parent convert_path logical_or logical_xor imread imwrite parent endswith convert_path astype imread zeros convert_path imwrite dict imread unique len imwrite convert_path imread parent retrieve_training_data convert_path _create_figure_helper load score_analysis binarize print reshape convert_path write close _create_fig_helper mean accuracy_score append imread tabulate masked_where open load convert_path open convert_path open tolist drop sorted columns iterrows config_decode add_row reindex_axis print tolist to_csv is_dir match 
PrettyTable DataFrame iterdir fillna columns config_decode add_row print match append PrettyTable DataFrame iterdir columns config_decode add_row print match PrettyTable DataFrame iterdir describe describe_model compare | <img src="https://github.com/neuropoly/axondeepseg/blob/master/docs/source/_static/logo_ads-alpha.png" width="385"> [](https://mybinder.org/v2/gh/neuropoly/axondeepseg/master?filepath=notebooks%2Fgetting_started.ipynb) [](https://github.com/neuropoly/axondeepseg/actions/workflows/run_tests.yaml) [](http://axondeepseg.readthedocs.io/en/latest/?badge=latest) [](https://coveralls.io/github/neuropoly/axondeepseg?branch=master) [](https://twitter.com/axondeepseg) Segment axon and myelin from microscopy data using deep learning. Written in Python. Using the TensorFlow framework. Based on a convolutional neural network architecture. Pixels are classified as either axon, myelin or background. For more information, see the [documentation website](http://axondeepseg.readthedocs.io/).  | 3,121 |
neuropoly/ivado-medical-imaging | ['medical image segmentation', 'lesion segmentation', 'semantic segmentation'] | ['Automatic segmentation of spinal multiple sclerosis lesions: How to generalize across MRI contrasts?'] | ivadomed/testing.py testing/functional_tests/test_download_data.py testing/unit_tests/test_split_dataset.py ivadomed/scripts/automate_training.py dev/filtering_lesion.py ivadomed/loader/bids_dataframe.py ivadomed/loader/film.py ivadomed/transforms.py ivadomed/keywords.py ivadomed/loader/balanced_sampler.py ivadomed/visualize.py testing/common_testing_util.py testing/functional_tests/test_automate_training.py dev/plot_film_parameters.py testing/functional_tests/test_training_curve.py setup.py ivadomed/loader/utils.py dev/seek_contrast_sctTesting.py testing/unit_tests/test_testing.py ivadomed/scripts/extract_small_dataset.py testing/unit_tests/test_automate_training.py ivadomed/scripts/convert_to_onnx.py ivadomed/loader/slice_filter.py testing/unit_tests/test_transfer_learning.py ivadomed/losses.py testing/unit_tests/test_slice_filter.py testing/functional_tests/t_utils.py testing/unit_tests/test_bounding_box.py ivadomed/models.py testing/unit_tests/test_loader.py ivadomed/object_detection/utils.py ivadomed/scripts/visualize_transforms.py docs/source/conf.py dev/metadata_config.py testing/unit_tests/test_sampler.py dev/reproducibility.py testing/unit_tests/test_onnx.py ivadomed/scripts/compare_models.py ivadomed/config_manager.py testing/unit_tests/t_utils.py testing/unit_tests/test_transforms.py ivadomed/loader/sample_meta_data.py testing/functional_tests/test_convert_to_onnx.py ivadomed/utils.py dev/df_new_loader.py ivadomed/evaluation.py testing/functional_tests/test_compare_models.py ivadomed/__init__.py testing/unit_tests/test_tensorboard_save.py testing/unit_tests/test_model.py dev/data_aug_dilation.py testing/unit_tests/test_rgb.py ivadomed/scripts/prepare_dataset_vertebral_labeling.py ivadomed/loader/bids_dataset.py 
testing/unit_tests/test_orientation.py ivadomed/training.py dev/plot_cluster_metadata.py ivadomed/scripts/training_curve.py ivadomed/loader/bids3d_dataset.py dev/target_size.py ivadomed/loader/mri2d_segmentation_dataset.py ivadomed/maths.py testing/functional_tests/t_template.py testing/functional_tests/test_prepare_dataset_vertebral_labelling.py testing/unit_tests/test_mixup.py ivadomed/loader/segmentation_pair.py ivadomed/metrics.py ivadomed/loader/loader.py testing/unit_tests/test_postprocessing.py ivadomed/scripts/visualize_and_compare_testing_models.py ivadomed/mixup.py ivadomed/preprocessing.py ivadomed/main.py dev/class_balance.py ivadomed/loader/mri3d_subvolume_segmentation_dataset.py testing/functional_tests/test_extract_small_dataset.py testing/unit_tests/test_losses.py testing/unit_tests/test_training_time.py testing/functional_tests/test_visualize_transforms.py ivadomed/uncertainty.py ivadomed/scripts/download_data.py dev/prepare_data/prepdata.py ivadomed/inference.py testing/functional_tests/test_segment_volume.py testing/unit_tests/test_metrics.py ivadomed/postprocessing.py testing/unit_tests/t_template.py print_stats get_parser run_main dilate_mask run_main save_sample print_retained_elt run_experiment run_main count_retained run_inference print_unc_stats plot_roc get_parser auc_homemade run_main plot_decision_boundaries run_main run_main visualize_tsne visualize_pca plot_histogram main get_results get_parser compute_csa run_main plot_distrib print_stats get_parser run_main remove_slice run_main _patched_sphinx_jsonschema_simpletype parse_class_attributes_section parse_attributes_section patched_parse parse_keys_section update deep_dict_compare load_json ConfigurationManager evaluate Evaluation3DMetrics onnx_inference set_option segment_volume get_preds reconstruct_3d_object image_reconstruction get_onehotencoder split_classes set_postprocessing_options pred_to_nib process_transformations volume_reconstruction pred_to_png LoaderParamsKW 
ContrastParamsKW BalanceSamplesKW OptionKW BinarizeProdictionKW ModelParamsKW MetadataKW UncertaintyKW ROIParamsKW SplitDatasetKW MetricsKW IgnoredFolderKW SubjectDataFrameKW ObjectDetectionParamsKW DataTestingKW BidsDataFrameKW PostprocessingKW TransformationKW ConfigKW SubjectDictKW SliceFilterParamsKW TrainingParamsKW TverskyLoss MultiClassDiceLoss GeneralizedDiceLoss FocalDiceLoss BinaryCrossEntropyLoss AdapWingLoss LossCombination FocalTverskyLoss L2loss DiceLoss FocalLoss run_main set_output_path save_config_file get_dataset create_path_model run_command film_normalize_data set_model_params create_dataset_and_ivadomed_version_log run_segment_command set_loader_params get_parser check_multiple_raters update_film_model_params gaussian_kernel rescale_values_array heatmap_generation plot_dice_thr dice_score get_metric_fns hausdorff_score recall_score multi_class_dice_score mse plot_roc_curve precision_score intersection_over_union accuracy_score MetricManager numeric_score specificity_score mixup save_mixup_sample Countception HeMISUnet GridAttentionBlockND UpConv set_model_for_retrain get_model_filenames ConvBlock DownConv UNet3D resnet18 FiLMgenerator FiLMedUnet DenseNet ResNet SimpleBlock Encoder weights_init_kaiming Modified3DUNet Decoder FiLMlayer Unet UnetGridGatingSignal3 densenet121 keep_largest_object threshold_predictions keep_largest_object_per_slice binarize_with_low_threshold fill_holes label_file_from_coordinates multilabel_capable coordinate_from_heatmap Postprocessing nifti_capable remove_small_objects mask_predictions get_midslice_average threshold_analysis get_gt run_inference test get_sampler load_checkpoint get_loss_function get_metadata get_scheduler train two_dim_compatible CenterCrop apply_preprocessing_transforms HistogramClipping ElasticTransform AdditiveGaussianNoise RandomAffine RandomBlur UndoTransform NumpyToTensor ROICrop Crop NormalizeInstance tio_transform Compose Resample get_preprocessing_transforms BoundingBoxCrop 
prepare_transforms multichannel_capable DilateGT RandomBiasField UndoCompose get_subdatasets_transforms RandomReverse RandomShiftIntensity ImedTransform RandomGamma Clahe CroppableArray voxelwise_uncertainty run_uncertainty structurewise_uncertainty combine_predictions display_selected_model_spec Metavar define_device __get_commit generate_sha_256 display_selected_transfoms _version_string cuda check_exe get_task __get_branch ArgParseException _git_info get_path_data get_arguments init_ivadomed format_path_data get_path_output plot_transformed_sample unstack_tensors get_command save_onnx_model LoopingPillowWriter overlap_im_seg convert_labels_to_RGB save_color_labels AnimatedGif HookBasedFeatureExtractor save_feature_map save_tensorboard_img BalancedSampler Bids3DDataset BidsDataframe BidsDataset save_film_params store_film_params normalize_metadata get_film_metadata_models Kde_model clustering_fit check_isMRIparam load_dataset MRI2DSegmentationDataset MRI3DSubVolumeSegmentationDataset SampleMetadata SegmentationPair SliceFilter get_subdatasets_subject_files_list reorient_image update_filename_to_nifti update_metadata split_dataset get_new_subject_file_split filter_roi dropout_input orient_img_hwd orient_img_ras get_file_extension imed_collate orient_shapes_hwd compute_bb_statistics get_bounding_boxes load_bounding_boxes generate_bounding_box_file resample_bounding_box bounding_box_prior verify_metadata adjust_transforms adjust_undo_transforms adjust_bb_size resize_to_multiple get_param_list format_results keys_are_unique split_dataset HyperparameterOption update_dict train_worker make_config_list test_worker automate_training main get_base_keys get_parser compute_statistics main get_parser convert_pytorch_to_onnx main get_parser unzip install_data download_data _format_bundles main get_parser is_good_contrast remove_some_contrasts main get_parser extract_small_dataset main get_parser extract_mid_slice_and_convert_coordinates_to_heatmaps mask2label 
get_events_path_list tensorboard_retrieve_event run_plot_training_curves plot_curve main get_parser main visualize_and_compare_models get_parser onclick get_data main run_visualization get_parser remove_dataset remove_tmp_dir bcolors printv download_dataset test_automate_training setup_function test_automate_training_run_test setup_function test_compare_models teardown_function test_convert_to_onnx_no_model setup_function test_convert_to_onnx teardown_function test_convert_to_onnx_no_dimension test_download_data setup_function teardown_function test_download_data_no_dataset_specified test_extract_small_dataset_contrast_list test_extract_small_dataset_default_n setup_function test_extract_small_dataset_no_derivatives teardown_function test_extract_small_dataset_n_2 test_prepare_dataset_vertebral_labeling setup_function teardown_function test_segment_volume_2d_with_patches setup_function test_segment_volume_3d test_segment_volume_2d_NumpyToTensor_retrocompatibility test_segment_volume_2d_no_prepro_transform teardown_function test_segment_volume_2d setup_function teardown_function test_training_curve test_visualize_transforms_n_2 setup_function test_visualize_transforms_n_1 teardown_function test_template setup_function teardown_function create_tmp_dir check_sha256 download_functional_test_files setup_function test_get_param_list test_make_config_list teardown_function test_config_sha256 test_bounding_box setup_function test_adjust_bb_size teardown_function test_compute_bb_statistics test_load_dataset_2d_png test_2d_patches_and_resampling test_get_target_filename_list test_bids_df_microscopy_png test_bids_df_ctscan test_dropout_input setup_function test_bids_df_anat teardown_function test_bids_df_multi test_get_target_filename_list_multiple_raters test_losscombination test_multiclassdiceloss test_L2loss test_diceloss test_adapwingloss test_tverskyloss test_focaltverskyloss test_focaldiceloss test_generalizeddiceloss test_mse test_multi_class_dice_score 
test_haussdorf_4d setup_function test_err_iou test_err_rec test_dice_plot test_err_spec test_err_prec test_plot_roc_curve teardown_function setup_function teardown_function test_mixup test_countception test_filmed_unet test_densenet test_film_generator test_resnet test_model_3d_att setup_function test_onnx teardown_function setup_function test_image_orientation teardown_function test_label_file_from_coordinates check_bin_vs_soft test_mask_predictions setup_function test_keep_largest_object_per_slice test_threshold nii_dummy_seg teardown_function test_fill_holes test_keep_largest_object test_save_rgb setup_function test_rgb_conversion teardown_function _cmpt_label setup_function test_sampler teardown_function _cmpt_slice setup_function test_slice_filter teardown_function check_balance test_per_patient_2 test_per_center_balance test_per_center_without_testcenter setup_function create_tsvfile test_per_patient_balance load_dataset test_per_center_testcenter_0 teardown_function create_jsonfile test_per_patient test_tensorboard_save setup_function teardown_function test_inference_2d_microscopy setup_function test_inference teardown_function setup_function test_unet_time teardown_function setup_function test_transfer_learning teardown_function _test_Resample test_DilateGT test_Resample_2D test_Resample_3D _check_shape test_Crop_3D test_AdditiveGaussianNoise test_HistogramClipping test_ElasticTransform _test_Crop test_Crop_2D test_Clahe _check_dtype create_test_image test_RandomReverse test_NumpyToTensor test_NormalizeInstance test_RandomAffine test_template setup_function teardown_function download_data_testing_test_files create_tmp_dir add_argument ArgumentParser percentile format print mean median format split_dataset get_config print Compose size DataLoader zip BidsDataset print_stats sum enumerate append len subplot axis close imshow savefig figure int list astype random copy where logical_xor binary_closing sample label binary_fill_holes round range binary_dilation 
int Image astype dilate_mask range list zip load join list percentile format print min get_data mean zip append max len print label get_data data_gt count_retained precision_score append sum Evaluation3DMetrics recall_score astype mean get_lfdr enumerate load join deepcopy int uint8 get_ltpr print data_pred any isfile array print format enumerate mean format plot print xlabel close ylabel ylim scatter title figure legend savefig xlim argmax array enumerate auc_homemade load join uint8 run_eval print astype get_data mean append DataFrame array Evaluation3DMetrics run_experiment run_inference print_unc_stats suffixUnc print_retained_elt percentile list cmd_test ofolder plot_roc head dump set keys load join to_csv median makedirs tqdm arange xlabel reshape print min xscale shape ylim scatter contourf figure savefig zip meshgrid xlim max yticks min plot_decision_boundaries max xlabel ylabel title hist savefig linspace figure ravel list concatenate concat PCA ravel scatterplot title savefig figure DataFrame fit_transform list TSNE concatenate print concat ravel scatterplot title savefig figure DataFrame fit_transform visualize_tsne visualize_pca plot_histogram join rmtree exists load join list remove affine system index Nifti1Image assign save float abs read_csv iterations get_results get_parser list columns compute_csa array append parse_args logdir range bids get_config zip join int to_csv average std output_path str xlabel print ylabel savefig figure xlim distplot count_nonzero listdir plot_distrib isdir dataobj any generate_binary_structure label asanyarray load affine dataobj transpose copy dot shape Nifti1Image save asanyarray range remove_slice parse_args add_argument ArgumentParser _parse_class_attributes_section _unpatched_parse _parse_keys_section list isinstance _cell _original_sphinx_jsonschema_simpletype extend _line append keys get list items isinstance Mapping Mapping isinstance info get_deriv_fnames run_eval Path save to_list DataFrame exists pred_to_png 
str split_classes append head Evaluation3DMetrics format get_fdata mkdir info get_file_extension enumerate load set_index to_csv tqdm Nifti1Image joinpath zeros len cpu InferenceSession array run define_device load get METADATA indexes Path normalize_metadata range len load reorient_image update_filename_to_nifti threshold_predictions debug transpose Postprocessing apply shape stack as_closest_canonical Nifti1Image save append zeros range get_fdata imwrite zip items list CENTERCROP dict bounding_box_prior warning len update set_option FILL_HOLES KEEP_LARGEST BINARIZE_MAXPOOLING reconstruct_3d_object get_model_filenames DataLoader warning set_postprocessing_options process_transformations OVERLAP_2D get update get_preds get_config FNAME_PRIOR prepare_transforms info load_filenames PIXEL_SIZE get_onehotencoder get_subdatasets_transforms enumerate MRI2DSegmentationDataset MRI3DSubVolumeSegmentationDataset bool get_fdata astype copy Nifti1Image append range get_best_affine int undo_transforms image_reconstruction split_classes volume_reconstruction adjust_undo_transforms pred_to_nib transforms array range append len zeros undo_transforms zeros undo_transforms add_argument_group add_mutually_exclusive_group dump joinpath mkdir Path info error any exit info update dump get_film_metadata_models Path normalize_metadata load_dataset update deepcopy update deepcopy display_selected_model_spec MODIFIED_3D_UNET error exit range len deepcopy mkdir info load update normalize_metadata Path BidsDataframe POSTPROCESSING warning Path save load_json to_list pred_to_png str sorted append get replace segment_volume MULTICHANNEL mkdir zip get_file_extension PATH_OUTPUT sub LOADER_PARAMETERS threshold_analysis BidsDataframe define_device create_dataset_and_ivadomed_version_log create_path_model set_model_params SUBJECT_SELECTION film_normalize_data Path run_segment_command generate_sha_256 display_selected_transfoms warning check_multiple_raters get_subdatasets_subject_files_list str add 
get_dataset load_dataset set_loader_params get update DICE Compose test format_path_data get_subdatasets_transforms UndoCompose stdout deepcopy remove task get_metric_fns set_output_path evaluate save_config_file df RECALL_SPECIFICITY train update_film_model_params cpu_count __get_commit Path platform _version_string open str list append range PATH_DATA get format close format_path_data items isinstance PATH_OUTPUT write len config init_ivadomed run_command get_path_data get_parser get_command get_path_output max min astype cdf linspace outer diff gaussian_kernel convolve float sum sum asarray sum asarray astype shape range reshape numeric_score divide numeric_score divide numeric_score divide numeric_score numeric_score divide range plot xlabel ylabel ylim title savefig figure xlim plot xlabel ylabel ylim title savefig figure xlim save_mixup_sample FloatTensor size randperm beta max str subplot axis close zfill imshow savefig mkdir Path figure randint data normal_ kaiming_normal_ __name__ constant_ load int named_modules named_parameters reset_parameters round len name is_dir Path label copy keep_largest_object squeeze len append range split dataobj array peak_local_max list tuple shape Nifti1Image zeros range len count_nonzero int astype label range load slice dataobj astype mean shape Nifti1Image as_closest_canonical array len load str run_uncertainty format metric_mgr run_inference DataLoader eval get_results Path mkdir info MetricManager reset cuda range save_film_params store_film_params get_gt numpy warning Path pred_to_png str get_task name transpose shape split_classes volume_reconstruction range replace image_reconstruction save_feature_map adjust_undo_transforms get_file_extension enumerate int parent zfill tqdm cpu pred_to_nib transforms len load plot_dice_thr format ConcatDataset run_inference where tqdm plot_roc_curve eval DataLoader get_results info append max append zeros get_fdata shape model get_loss_function zero_grad localtime 
set_model_for_retrain DataLoader numpy get_results Path DiceLoss MetricManager save_tensorboard_img save abs cuda values str list model_class all Adam AnimatedGif strftime getattr get_scheduler load_state_dict append to is_tensor range update SummaryWriter format close copy eval timedelta mkdir info sample add_scalars is_file enumerate load items get_sampler time add_scalar backward load_checkpoint loss_fct metric_mgr write unstack_tensors dict mixup tqdm reset get_metadata step save_onnx_model len CosineAnnealingLR CyclicLR CosineAnnealingWarmRestarts loss_class getattr load format load_state_dict info deepcopy zip deepcopy enumerate transforms update_metadata get_preprocessing_transforms Compose copy UndoCompose transform Subject str list structurewise_uncertainty combine_predictions set tqdm voxelwise_uncertainty Path append iterdir len uint8 astype header mean Nifti1Image save array Nifti1Image repeat save expand_dims array get_best_affine where save logical_and shape array generate_binary_structure append sum range get_fdata astype mean unique load int Nifti1Image logical_or zeros bool std len isinstance append unsqueeze range sha256 eval export int format set_device device info is_available list format keys info list format keys info show subplot use axis imshow title savefig figure interactive getenv Path str parent strip is_exe Path pathsep split parse_args rstrip communicate strip absolute returncode splitlines startswith Popen communicate Popen returncode _git_info error test info segment train path_output info path_data info isinstance format info jet binary_r copy seed dtype list pred_to_nib threshold_predictions reshape randint shape zeros range seed randint shape zeros range make_grid isinstance convert_labels_to_RGB clone copy range cat add_image load reorient_image Variable size Nifti1Image numpy as_closest_canonical Path mkdir save forward range deepcopy format fit OneHotEncoder vstack info append predict enumerate train Kde_model format isinstance 
mean info append float metadata clustering_fit normalize_metadata append numpy film_bottom last_film str Path save array range deepcopy get_task format Bids3DDataset length prepare_transforms info BidsDataset load_filenames len sorted format list tolist set warning train_test_split round len seed dump format split_dataset concat tolist sample warning Path zip append load sorted get_new_subject_file_split to_list empty isinstance Sequence Mapping type stack is_tensor _update enumerate orient_img_ras ornt_transform affine io_orientation next get_file_extension replace zeros_like size warning append randint len int min where unique append label max range len int append round max range len append zip keep_largest_object segment_volume get_fdata get_bounding_boxes orient_img_hwd as_closest_canonical Path save mkdir len adjust_bb_size transforms range enumerate pop insert tuple Compose resample_bounding_box BoundingBoxCrop transform resize_to_multiple transforms append enumerate pop list insert BoundingBoxCrop transform transforms enumerate get PATH_OUTPUT generate_bounding_box_file Path exists all load adjust_bb_size as_closest_canonical orient_img_hwd format min stdev mean info append max int dump current_process run_command open int current_process run_command str dump get_new_subject_file_split BidsDataframe mkdir Path info deepcopy list combinations keys_are_unique name min translate keys update_dict option set append get_base_keys base_key range len items list HyperparameterOption reversed append get items list isinstance Mapping set append from_dict list join reset_index set_index PATH_OUTPUT set append sort_values str get_param_list format_results split_dataset get_config get_context to_csv compute_statistics make_config_list mkdir Path info visualize_and_compare_models DataFrame automate_training get_arguments init_ivadomed list print concat to_csv mean pvalue zeros DataFrame std values run_test n_iterations compute_statistics out dataframe bool read_csv load 
str replace is_available save_onnx_model convert_pytorch_to_onnx str model dimension gpu_id n_channels get isinstance name mkdtemp Retry raise_for_status Path info HTTPAdapter Session parse_header mount items list endswith extractall info is_dir warning Path exists str relative_to mkdtemp format glob debug copy download_data unlink mkdir info is_file parent unzip rmtree iterdir d install_data output absolute Path append unlink iterdir str list RandomState print to_csv copyfile choice is_dir rmtree is_file Path mkdir read_csv remove_some_contrasts copytree seed number derivatives input extract_small_dataset split load dataobj sort as_closest_canonical append array range len load str reorient_image affine mask2label get_midslice_average shape Nifti1Image as_closest_canonical Path save zeros expand_dims range heatmap_generation is_file len extract_mid_slice_and_convert_coordinates_to_heatmaps sorted is_dir Path append iterdir len set_text concat grid max list set_xlabel tolist legend range plot set_xlim mean keys join wrap set_ylabel fill_between std set_ylim get_events_path_list add_subplot is_dir Path plot_curve str list name savefig ceil expanduser append mkdir float keys enumerate int parent print to_csv tensorboard_retrieve_event figure split len str from_dict value defaultdict Scalars startswith append range len run_plot_training_curves pop show xdata text tolist where ind set_visible linspace unique ydata gca range len grid Path DataFrame max violinplot round show str list read_table tolist title append range ks_2samp plot mpl_connect astype stripplot empty combinations T print text read_csv len ofolders metric metadata visualize_and_compare_models load get_fdata as_closest_canonical orient_img_hwd __setitem__ CROP_PARAMS SampleMetadata get_data rescale_values_array Path str list composed_transforms array_equal get_zooms rot90 orient_shapes_hwd range update get_config Compose astype mkdir sample get_subdatasets_transforms isinstance print 
plot_transformed_sample Tensor numpy run_visualization main printv exists rmtree printv Path rmtree print normal get isatty create_tmp_dir str check_sha256 print Path run str check_sha256 print Path run main Path remove_tmp_dir main remove_dataset main Path DICT_URL main Path main Path main Path main Path main str segment_volume rmtree Path mkdir Unet save str segment_volume rmtree Path mkdir Unet save str segment_volume rmtree Path mkdir Unet save str segment_volume rmtree Path mkdir Unet save Modified3DUNet str segment_volume rmtree Path mkdir save main Path main Path main Path mkdir info download_dataset append get_config glob str remove_tmp_dir mkdir copytree get_param_list print make_config_list BidsDataframe generate_sha_256 df update str BidsDataframe get_bounding_boxes cwd rmtree Path mkdir load_dataset zeros adjust_bb_size compute_bb_statistics Path str load_csv reset_index BidsDataframe to_csv open Path compare drop load_csv reset_index BidsDataframe to_csv open Path compare drop load_csv reset_index BidsDataframe to_csv open Path compare drop load_csv reset_index BidsDataframe to_csv open Path compare drop size dropout_input update BidsDataframe update_filename_to_nifti load_dataset update BidsDataframe load_dataset update BidsDataframe load_dataset update BidsDataframe load_dataset forward forward forward forward forward forward forward forward forward multi_class_dice_score mse hausdorff_score precision_score recall_score specificity_score intersection_over_union str Path plot_roc_curve str plot_dice_thr Path mixup float range float Countception model Modified3DUNet float model ResNet float BasicBlock model float DenseNet model float FiLMedUnet model FiLMgenerator float model Modified3DUNet str load randn rmtree eval unsqueeze mkdir save numpy save_onnx_model BidsDataframe DataLoader training_undo_transform as_closest_canonical device get_pair_data set_device append range format get_fdata stack prepare_transforms BidsDataset is_available load_filenames 
orient_img_ras enumerate load int reorient_image Bids3DDataset threshold_predictions print SegmentationPair array len convolve ones Nifti1Image eye zeros range array_equal astype dataobj asanyarray copy threshold_predictions keep_largest_object dataobj asanyarray copy dataobj asanyarray copy keep_largest_object_per_slice dataobj asanyarray copy fill_holes affine dataobj copy Nifti1Image asanyarray mask_predictions load label_file_from_coordinates Path save_color_labels array range tensor convert_labels_to_RGB array range print any numpy enumerate print BidsDataframe define_device _cmpt_label DataLoader load_dataset print any numpy enumerate _cmpt_slice format print BidsDataframe define_device DataLoader load_dataset len get_subdatasets_subject_files_list str create_tsvfile Path mkdir create_jsonfile read_csv load_dataset add set load_dataset load_dataset load_dataset check_balance load_dataset check_balance load_dataset append str range mkdir SummaryWriter BytesIO len mkdir Path save_tensorboard_img tensor range array zeros flush open update print BidsDataframe Compose define_device run_inference metric_mgr DataLoader eval get_results mkdir Unet load_dataset MetricManager reset cuda UndoCompose update BidsDataframe Compose define_device run_inference DataLoader eval mkdir Unet load_dataset cuda UndoCompose model BidsDataframe define_device zero_grad DataLoader DiceLoss cuda list model_class Adam getattr load_dataset append range update format mean CosineAnnealingLR enumerate time backward print loss_fct write tqdm parameters train step std len load str format print set_model_for_retrain round device sum len int random randint astype maximum uniform rescale_values_array int32 ceil zeros center_of_mass round range append HistogramClipping min_percentile zip transform max_percentile NumpyToTensor undo_transform transform enumerate resample_transform _check_shape plot_transformed_sample SampleMetadata _check_dtype undo_transform enumerate _test_Resample _test_Resample 
transform copy NormalizeInstance crop_transform _check_shape size plot_transformed_sample SampleMetadata _check_dtype undo_transform enumerate _test_Crop _test_Crop _check_shape plot_transformed_sample copy _check_dtype transform undo_transform enumerate _check_shape plot_transformed_sample copy elastic_transform _check_dtype dilate_transform plot_transformed_sample _check_shape copy enumerate _check_shape reverse_transform plot_transformed_sample copy _check_dtype undo_transform _check_shape plot_transformed_sample copy _check_dtype noise_transform _check_shape plot_transformed_sample copy _check_dtype clahe enumerate enumerate download_dataset |  [](https://doi.org/10.21105/joss.02868) [](https://coveralls.io/github/ivadomed/ivadomed?branch=master) [](https://github.com/ivadomed/ivadomed/actions?query=workflow%3A%22Run+tests%22) [](https://github.com/ivadomed/ivadomed/actions?query=workflow%3A%22Publish+Package%22) [](https://ivadomed.org/en/latest/?badge=latest) [](LICENSE.md) [](https://twitter.com/ivadomed) `ivadomed` is an integrated framework for medical image analysis with deep learning. | 3,122 |
nguyenkh/NeuralDenoising | ['word embeddings', 'denoising'] | ['Neural-based Noise Filtering from Word Embeddings'] | preprocessing.py common.py filter_noise_embs.py smart_open to_unicode read_file save_file to_utf8 largest_eigenvalue main HiddenLayer DeEmbs initialize_parameters pre_overcomplete_embs main pre_complete_embs endswith array join pack print len write zip open isinstance isinstance str T print dot shape largest_eigh len load DeEmbs add_argument output read_file initialize_parameters ArgumentParser file_over input parse_args save_file fit dot T largest_eigenvalue eye print pre_overcomplete_embs pre_complete_embs save str print fit shape components_ MiniBatchDictionaryLearning str transform print fit shape components_ MiniBatchDictionaryLearning | ## Neural-based Noise Filtering from Word Embeddings Kim Anh Nguyen, [email protected] Code for paper [Neural-based Noise Filtering from Word Embeddings](http://www.ims.uni-stuttgart.de/institut/mitarbeiter/anhnk/papers/coling2016/denoising-embeddings.pdf) (COLING 2016). ### Requirements 1. Sklearn 2. Theano ### Pre-trained word embeddings - The models can filter noise from any pre-trained word embeddings such as word2vec, GloVe - The format of word embeddings used in this code is either word2vec or GloVe (either binary or text) | 3,123 |
nhatsmrt/torch-styletransfer | ['style transfer'] | ['Arbitrary Style Transfer in Real-time with Adaptive Instance Normalization'] | src/__init__.py test_multiple_style.py src/layers.py src/test.py src/unet.py src/models.py src/utils.py main.py custom_conv_layer CustomPixelShuffle_ICNR CustomMergeLayer ReflectionPaddedConv CustomResidualBlockPreActivation custom_res_block PixelShuffleDecoder MultipleStyleTransferNetwork PixelShuffleDecoderV2 MultipleStyleUNet GenericDecoder SimpleDecoder run_test_multiple run_test CustomUnetBlock _get_sfs_idxs CustomDynamicUnet relu norm_type SelfAttention conv_func init_default append BatchZero StyleTransferLearner int random_split UnlabelledImageDataset ConvolutionalLayer learn print convert Sequential pil_to_tensor DataLoader FeatureExtractor SEResidualBlockPreActivation CustomDynamicUnet len int random_split PairedDataset MultipleStyleTransferNetwork learn print PixelShuffleDecoderV2 MultipleStylesTransferLearner Adam RandomSampler parameters UnlabelledImageListDataset DataLoader FeatureExtractor compute_num_batch LRSchedulerCB len list | # Neural Style Transfer with PyTorch ## Introduction Implementation of a neural network that can transfer the style of an arbitrary image to another photo. <br /> Much of the code (e.g. the layers) is implemented in my [neural network toolbox](https://github.com/nhatsmrt/nn-toolbox/tree/experimental). The training procedure can be found [here](https://github.com/nhatsmrt/nn-toolbox/blob/experimental/nntoolbox/vision/learner/style.py). This repository contains only the testing code. To replicate my work, please also clone the experimental branch of my nntoolbox repository. ## Some results ### Small Experiment: I train the network for a total of 850 iterations (1 "epoch"), using the COCO dataset (resized to 256 on each side) and the train_9 subset of the wikiart dataset. For each dataset, I split off 80% as training data and use the rest for evaluation. 
Some look pretty good: <img src="demo/PixelShuffle/content_3.png" alt="content" width="175" /> <img src="demo/PixelShuffle/style_3.png" alt="style" width="175" /> <img src="demo/PixelShuffle/styled_3.png" alt="styled" width="175" /> <br /> | 3,124 |
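The adaptive instance normalization (AdaIN) operation from the referenced paper — re-normalizing each content feature channel to match the corresponding style channel's mean and standard deviation — can be sketched in plain NumPy. This is a minimal illustration of the operation, not this repository's actual PyTorch implementation:

```python
import numpy as np

def adain(content, style, eps=1e-5):
    # content, style: (C, H, W) feature maps.
    # Align each content channel's spatial statistics with the style's
    # (Huang & Belongie, "Arbitrary Style Transfer in Real-time with
    # Adaptive Instance Normalization").
    c_mean = content.mean(axis=(1, 2), keepdims=True)
    c_std = content.std(axis=(1, 2), keepdims=True)
    s_mean = style.mean(axis=(1, 2), keepdims=True)
    s_std = style.std(axis=(1, 2), keepdims=True)
    return s_std * (content - c_mean) / (c_std + eps) + s_mean

rng = np.random.default_rng(0)
content = rng.normal(0.0, 1.0, (3, 8, 8))
style = rng.normal(2.0, 0.5, (3, 8, 8))
out = adain(content, style)
# The output's per-channel mean now matches the style features'.
print(np.allclose(out.mean(axis=(1, 2)), style.mean(axis=(1, 2)), atol=1e-6))  # prints True
```

In the full method, this operation is applied to encoder feature maps rather than raw pixels, and a decoder then maps the re-normalized features back to an image.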
nhh1501/E2E_MLT_VN | ['optical character recognition'] | ['E2E-MLT - an Unconstrained End-to-End Method for Multi-Language Scene Text'] | ocr_gen.py train.py demo.py nms/__init__.py ocr_test_utils.py train_ocr.py eval.py net_eval.py train_crnn.py test.py ocr_utils.py models.py utils.py data_util.py net_utils.py models_crnn.py data_gen.py train_ocr_crnn.py _draw_box_points generator get_images load_annoataion get_batch load_gt_annoataion point_dist_to_line generate_rbox2 random_perspective cut_image random_rotation generate_rbox draw_box_points GeneratorEnqueuer resize_image intersect area process_splits load_gt draw_detections evaluate_image union load_detections conv_dw_plain BasicBlockIn dice_loss iou_loss conv_bn CReLU conv_dw_res_in conv_dw_res conv_dw BasicBlockSepIn CReLU_IN ModelMLTRCTW ModelResNetSep2 BasicBlockSep ModelResNetSep_final conv_dw_in Densenet3 BasicBlockIn CReLU conv_dw CReLU_IN conv_dw_plain _ConvBlock dice_loss _TransitionBlock_xxx Densenet Model BasicBlockSepIn _DenseBlock conv_dw_in _TransitionBlock conv_bn Densenet1 Encoder Densenet2 conv_dw_res BasicBlockSep iou_loss Decoder BidirectionalLSTM ModelResNetSep_crnn conv_dw_res_in draw_text_points dice_loss intersect evaluate area load_gt evaluate_e2e_crnn eval_ocr_crnn cer evaluate_image strLabelConverter evaluate_e2e eval_ocr evaluate_crnn union load_net adjust_learning_rate np_to_variable generator get_images get_batch get_images_zip test crnn_batch ocr_image ocr_batch print_seq_ext intersect area process_boxes main union intersect area process_boxes main union main strLabelConverter main ocrDataset train_transforms BeamSearchDecoder E2Edataset random_erode test_transforms cut_image ImgAugTransform alignCollate to_array random_dilate E2Ecollate do_nms get_boxes glob append sqrt line circle line circle warpAffine cos copy pi getRotationMatrix2D uniform sin uniform warpPerspective float32 getPerspectiveTransform int uniform sum max cross norm blur fillPoly resize max ones waitKey imshow 
draw_box_points append asarray copy sqrt zip enumerate int min atan2 int32 argwhere zeros array len fillPoly resize ones waitKey imshow draw_box_points append asarray copy sqrt zip enumerate int point_dist_to_line atan2 int32 argwhere zeros array len load_annoataion arange random_rotation resize fromarray basename copyMakeBorder generate_rbox2 cut_image shape uniform generate_rbox dirname append imread RandomizedBlur format asarray replace Compose astype shuffle float int get_images invert uint8 load_gt_annoataion print float32 random_perspective transform BORDER_CONSTANT array generator get is_running start sleep GeneratorEnqueuer int resize asarray reshape copy draw_box_points range min max min max intersect reshape area union eval lower boundingRect draw_box_points zeros float array range len max waitKey copy imshow draw_box_points float array range append len sum view gt min mean fromarray text truetype Draw draw_text_points eval asarray squeeze np_to_variable eval eval eval strLabelConverter eval strLabelConverter join asarray glob extend eval mkdir open join asarray glob extend eval mkdir open type cuda load items list randn print size copy_ load_state_dict param_groups dirname dirname blur strip abs max len rotate getRotationMatrix2D range warpAffine ROTATE_90_COUNTERCLOCKWISE IMREAD_GRAYSCALE reshape min extend randint split strip where object distance resize round max argmax DataFrame open sorted basename name title savetxt dirname permute append expand_dims imread range forward_ocr format asarray to_html close print_seq_ext eval lower swapaxes float forward_features enumerate int print write split zeros train numpy array len append range affine_grid cos argmax max exp view FloatTensor Size transpose waitKey imshow shape sin normalize to forward_ocr asarray grid_sample size print_seq_ext mean sqrt swapaxes type forward_features int reshape atan2 unravel_index randint numpy affine_grid cos unsqueeze argmax max view Size shape sin append to range forward_ocr 
ones_like asarray grid_sample arctan2 size print_seq_ext sqrt stack startswith swapaxes float forward_features pow unravel_index randint numpy len affine_grid cos unsqueeze argmax max view Size permute sin append to range forward_ocr ones_like asarray grid_sample arctan2 size print_seq_ext sqrt stack startswith swapaxes float pow randint numpy len affine_grid area cos unsqueeze boundingRect round max argmax view FloatTensor Size squeeze waitKey imshow sin draw_box_points normalize to append union range forward_ocr asarray format intersect grid_sample size shuffle print_seq_ext sqrt lower startswith swapaxes float type long forward_features int print boxPoints min atan2 argwhere randint numpy array circle len model clip_grad_norm_ zero_grad process_boxes ReduceLROnPlateau numpy save resize evaluate_e2e base_lr cuda exists open ctc_loss squeeze waitKey Adam imshow permute normalize to next ModelResNetSep_final range forward_ocr format get_batch asarray save_path debug timeit size close param_groups np_to_variable d1 swapaxes long net forward_features load_net join time backward print max_iters write parameters argwhere train step loss adjust_learning_rate tensor CTCLoss evaluate_e2e_crnn ModelResNetSep_crnn CyclicLR DataLoader argmax eval_ocr ocrDataset print_seq_ext eval_ocr_crnn IntTensor encode strLabelConverter int asarray dilate array ones ones array erode Compose Compose append nms_impl array fill zeros swapaxes do_nms | This repository was adapted from the E2E-MLT repository of MichalBusta. # E2E-MLT an Unconstrained End-to-End Method for Multi-Language Scene Text code base for: https://arxiv.org/abs/1801.09919 ``` @inproceedings{buvsta2018e2e, title={E2E-MLT-an unconstrained end-to-end method for multi-language scene text}, author={Bu{\v{s}}ta, Michal and Patel, Yash and Matas, Jiri}, booktitle={Asian Conference on Computer Vision}, pages={127--143},
nhhoang96/Semantic_Matching | ['intent detection', 'few shot learning'] | ['Dynamic Semantic Matching and Aggregation Network for Few-shot Intent Detection'] | code/main.py code/utils.py code/encoder.py code/read_input_data.py code/sman.py masked_softmax create_emb_layer sort_batch_by_length Encoder convert_to_tensor parse_argument set_seed attn_loss evaluate_nonepisode load_model train_episode mask_logsoftmax word_distr_loss MakeDirectory selfattn_loss compute_loss evaluate_episode train mask_softmax read_datasets preprocess_text load_label_dict save_data train_intents find_max_fixed_length load_fasttext load_vec load_new_intents reshape_dim div_with_small_value SMAN init_support_query generate_tok_idx create_index load_test create_query_support create_samples shuffle_data produce_chosen_class load_support create_mask load_new_intents Variable sort index_select to long from_numpy Embedding size load_state_dict sum float exp log_softmax to log where seed manual_seed_all manual_seed to softmax where sum view mask_softmax contiguous mask_logsoftmax repeat kl_loss to double shape float where mask_logsoftmax double to sum range attn_loss clone word_distr_loss double loss_fn float argmax long to parse_args add_argument __dict__ ArgumentParser create_index load_test print load_support array int create_index load_test print confusion_matrix shuffle_data load_support unique ceil float array accuracy_score mkdir exists load print Adam parameters load_state_dict to exists clip_grad_norm_ zero_grad compute_loss save accuracy_score argmax forward create_query_support to range state_dict convert_to_tensor unique float backward print reshape clone parameters step create_index load_model train_episode load_support vstack open len split join generate_tok_idx open create_mask split append len open append open append delete preprocess_text save_data print find_max_fixed_length load_vec str list load_label_dict items print load_fasttext read_csv load_new_intents contiguous repeat 
view float print zeros zeros list range unique append array append zeros range ones arange shuffle append deepcopy shuffle init_support_query deepcopy create_index concatenate shuffle empty range deepcopy sort delete create_samples produce_chosen_class unique append array | # Dynamic Semantic Matching And Aggregation Network for Few-shot Intent Detection This repository provides a PyTorch implementation for the paper [*Dynamic Semantic Matching and Aggregation Network for Few-shot Intent Detection*](https://www.aclweb.org/anthology/2020.findings-emnlp.108/) **(Findings of EMNLP'2020)** ## Requirements Python 3.6.2 <br /> Numpy <br /> Pandas <br /> Pytorch 1.0.1 <br /> Scikit-learn 0.21.1 <br /> ## Dataset We provide the splits of the NLUE and SNIPS datasets in the dataset directory. Please take a look at our paper for details of the split. | 3,126
nibtehaz/MultiResUNet | ['medical image segmentation', 'semantic segmentation'] | ['MultiResUNet : Rethinking the U-Net Architecture for Multimodal Biomedical Image Segmentation'] | MultiResUNet3D.py MultiResUNet.py trans_conv2d_bn MultiResBlock MultiResUnet conv2d_bn main ResPath trans_conv3d_bn MultiResUnet3D MultiResBlock main ResPath conv3d_bn int conv2d_bn add concatenate conv2d_bn range add concatenate ResPath MultiResBlock Model conv2d_bn Input print MultiResUnet summary conv3d_bn conv3d_bn concatenate ResPath MultiResBlock Model Input conv3d_bn MultiResUnet3D | # MultiResUNet #### Rethinking the U-Net architecture for multimodal biomedical image segmentation This repository contains the original implementation of "MultiResUNet : Rethinking the U-Net architecture for multimodal biomedical image segmentation" in Keras (Tensorflow as backend). ## Paper MultiResUNet has been published in Neural Networks >Ibtehaz, Nabil, and M. Sohel Rahman. "MultiResUNet: Rethinking the U-Net architecture for multimodal biomedical image segmentation." Neural Networks 121 (2020): 74-87. * [Read the Paper](https://doi.org/10.1016/j.neunet.2019.08.025) * [View the Preprint](https://arxiv.org/abs/1902.04049) ## Overview In this project we take motivation from the phenomenal U-Net architecture for biomedical image segmentation and attempt to improve on the already outstanding network. | 3,127
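A core idea of the MultiResBlock above—chaining 3×3 convolutions so that successive outputs approximate 5×5 and 7×7 filters before being concatenated—can be sanity-checked without any deep-learning framework, because the support of chained filters adds up. This is a generic illustration, not code from the repository:

```python
import numpy as np

# Along one axis, a 3-tap filter has support 3; chaining two filters gives
# support n + m - 1. So two chained 3x3 convs cover a 5x5 field, and three
# cover 7x7 -- the multi-resolution fields a MultiResBlock concatenates.
k3 = np.ones(3)
after_two = np.convolve(k3, k3)
after_three = np.convolve(after_two, k3)
print(len(after_two), len(after_three))  # 5 7
```

This is why the block can mimic inception-style 5×5 and 7×7 branches while only ever paying for 3×3 kernels.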
nick-nikzad/RDL-SE | ['speech enhancement'] | ['Deep Residual-Dense Lattice Network for Speech Enhancement', 'MetricGAN: Generative Adversarial Networks based Black-box Metric Scores Optimization for Speech Enhancement'] | dct.py RDL_deepxi.py RDL_utils.py RDL_network.py feat.py batch.py _clean_mbatch _batch _train_list _noise_mbatch stdct hz2mel stas melfbank addnoisepad stdct psd stft bpoint addnoise nframes stps stms snr lsse mel2hz np_sigmoid ml_gamma_hat feat_extr mmse_lsa log10 mmse_stsa extract_pre_residual Build_Dense_Lattice_UpDown_v3 RDL_Net x_scale shortcut_projection conv2d_relu_norm masked_batch_norm masked_layer_norm block_unit masked_batch_stats conv_layer loss optimizer concat_x1x2 int16 read append zeros max range len int16 read choice sample zeros max range append len join read print glob len shuffle append exists makedirs join read basename int16 list isinstance glob zip append zeros max range len to_float stms square div psd matmul tile expand_dims log int bpoint zeros range slice subtract float32 addnoise div pad cast norm slice subtract multiply add div pow int32 random_uniform constant log sequence_mask multiply subtract boolean_mask square maximum map_fn add div erf nframes log10 sqrt stms exp i0 multiply divide pi add i1 sqrt isnan isinf multiply divide add dense boolean_mask sequence_mask minimum int x_scale shape extract_pre_residual power conv2d_relu_norm range append concat_x1x2 append x_scale add_n block_unit dropout sequence_mask multiply concat float32 conv1d cast expand_dims multiply subtract square divide reduce_sum dense int shortcut_projection | # Deep Residual-Dense Lattice Network for Speech Enhancement Convolutional neural networks (CNNs) with residual links (ResNets) and causal dilated convolutional units have been the network of choice for deep learning approaches to speech enhancement. 
While residual links improve gradient flow during training, feature diminution of shallow layer outputs can occur due to repetitive summations with deeper layer outputs. One strategy to improve feature re-usage is to fuse both ResNets and densely connected CNNs (DenseNets). DenseNets, however, over-allocate parameters for feature re-usage. Motivated by this, we propose the residual-dense lattice network (RDL-Net), which is a new CNN for speech enhancement that employs both residual and dense aggregations without over-allocating parameters for feature re-usage. This is managed through the topology of the RDL blocks, which limit the number of outputs used for dense aggregations. Our extensive experimental investigation shows that RDL-Nets are able to achieve a higher speech enhancement performance than CNNs that employ residual and/or dense aggregations. RDL-Nets also use substantially fewer parameters and have a lower computational requirement. Furthermore, we demonstrate that RDL-Nets outperform many state-of-the-art deep learning approaches to speech enhancement. ## Contact Please send an email to [email protected] ## References Mohammad Nikzad, Aaron Nicolson, Yongsheng Gao, Jun Zhou, Kuldip K. Paliwal, Fanhua Shang. Deep Residual-Dense Lattice Network for Speech Enhancement. To appear in Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI), 2020. | 3,128
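The over-allocation argument above can be made concrete with a toy width count (an illustration only, not the RDL-Net code): residual summation keeps the feature-map width—and hence the cost of the next convolution—fixed, while unconstrained dense concatenation grows the width linearly with depth, which is exactly the growth RDL blocks limit.

```python
growth = 64        # channels produced by each block (hypothetical figure)
res_width = 64     # residual aggregation: x + f(x), width unchanged
dense_width = 64   # dense aggregation: concat([x, f(x)]), width grows
for _ in range(4):                 # four aggregation steps
    dense_width += growth
print(res_width, dense_width)      # 64 320
```

After only four dense aggregations the next layer must consume five times as many input channels, and its parameter count scales with that input width.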
nickswalker/gpsr-command-understanding | ['semantic parsing', 'word embeddings'] | ['Neural Semantic Parsing with Anonymization for Command Understanding in General-Purpose Service Robots'] | gpsr_command_understanding/anonymizer.py test/test_commands_reader.py gpsr_command_understanding/data/generate_list_for_paraphrasing.py test/test_generator.py gpsr_command_understanding/demo/parse_utterance.py gpsr_command_understanding/generator/tokens.py scripts/compile_multiseed_results.py gpsr_command_understanding/data/enumerate_grammar.py gpsr_command_understanding/parser.py gpsr_command_understanding/generator/loading_helpers.py gpsr_command_understanding/data/evaluate_baseline_parsers.py test/test_model.py test/test_paired_generator.py gpsr_command_understanding/models/noop_tokenizer.py test/test_generator_grammar.py gpsr_command_understanding/data/make_dataset.py gpsr_command_understanding/__init__.py gpsr_command_understanding/generator/knowledge.py test/test_semantics_grammar.py gpsr_command_understanding/models/command_predictor.py gpsr_command_understanding/generator/xml_parsers.py scripts/process_turk_data.py gpsr_command_understanding/demo/sem_parse_utterance.py gpsr_command_understanding/models/metrics.py gpsr_command_understanding/util.py gpsr_command_understanding/demo/generate_utterance.py test/test_parser.py gpsr_command_understanding/models/__init__.py gpsr_command_understanding/demo/sem_parse_utterance_learned.py gpsr_command_understanding/generator/grammar.py setup.py gpsr_command_understanding/demo/logging_server.py gpsr_command_understanding/models/commands_reader.py gpsr_command_understanding/generator/generator.py scripts/process_spoken_mturk_data.py gpsr_command_understanding/models/seq2seq.py gpsr_command_understanding/generator/paired_generator.py read Anonymizer CaseInsensitiveDict NumberingAnonymizer MappingParser KNearestNeighborParser LearnedParser expr_builder ToEBNF GrammarBasedParser AnonymizingParser merge_dicts to_num replace_child 
get_wildcards determine_unique_data get_wildcards_forest save_data flatten has_placeholders ParseForward has_nonterminals chunker replace_child_in_tree get_placeholders main get_annotated_sentences main bench_parser sweep_thresh main main validate_args load_data main _get_predictor ServerError make_app main _html main main main Generator DiscardVoid ToString CompactUnderscorePrefixed CombineExpressions DiscardMeta rule_dict_to_str expand_shorthand RemovePrefix TypeConverter KnowledgeBase AnonymizedKnowledgebase load load_2018_by_cat load_paired load_paired_2018 load_paired_2018_by_cat load_2018 LambdaParserWrapper pairs_without_placeholders PairedGenerator WildCard NonTerminal ComplexWildCard LocationParser QuestionParser NameParser GesturesParser ObjectParser CommandsDatasetReader CommandParser ParseValidity TokenSequenceAccuracy NoOpTokenizer Seq2Seq get_immediate_subdirectories get_files_with_prefix print_full load_results_data_for_experiment main load_transcripts load_transcript get_file_duration clean_text process_turk_files process_turk_files TestCommandsReader TestGenerator TestGenerator TestModel TestPairedGenerator TestParsers TestSemanticsGrammar get update copy children enumerate iter_subtrees scan_values add set append items list defaultdict print sorted append difference keys intersection_update set int join format pairs_without_placeholders get_annotated_sentences set load_paired dirname abspath parser format KNearestNeighborParser print bench_parser append float AnonymizingParser len load_paired_2018 bench_parser ArgumentParser anonymizer str rules GrammarBasedParser sweep_thresh append parse_args AnonymizingParser val parse from_knowledge_base test eval read CommandsDatasetReader print sort add_argument knowledge_base train output_path len chunker values list Generator apply_grounding_assignment generate next range shuffle keys load generate_grounding_assignments Random groundings paraphrasings print use_logical_split exit anonymized merge_dicts 
validate_args match_logical_split lambda_parser flatten anonymized tree_printer determine_unique_data seed sorted name Counter chain replace_child_in_tree save_data islice run_anonymizer mkdir items deepcopy Token paraphrasings groundings generate_groundings use_logical_split rmtree AnonymizedKnowledgebase load_data incremental_datasets split extract_metadata Tree generate_random ground error exit Flask abspath cuda_device archive_path load_archive check_for_gpu _get_predictor include_package CORS WSGIServer import_module_and_submodules make_app serve_forever join pretty parser input KNearestNeighborParser exit anonymizing_parser LearnedParser from_archive load_archive pop deepcopy children replace_child visit CombineExpressions append iter_subtrees_topdown items list from_dir Generator list from_generator map load_2018_by_cat Generator close open_text load_rules from_dir load_semantics_rules from_generator load_2018 close from_dir any load_rules load Generator from_generator format print set generate expand_all_semantics get_files_with_prefix join read_json append format set_option print reset_option len get_immediate_subdirectories results_folders groupby tuple agg print_full load_results_data_for_experiment DataFrame word_tokenize str format print concat index apply rename append dropna sort_values range drop astype apply | # GPSR Command Understanding [](https://travis-ci.org/nickswalker/gpsr-command-understanding) A semantic parser for commands from the [RoboCup@Home](http://www.robocupathome.org/) _General Purpose Service Robot_ task. * [X] Utterance to λ-calculus representation parser * [X] Lexer/parser for loading the released command generation CFG * [X] Tools for generating commands along with a λ-calculus representation * [X] Crowd-sourcing interface for collecting paraphrases If you use this code or data, consider citing our paper [Neural Semantic Parsing for Command Understanding in General-Purpose Service Robots](https://arxiv.org/abs/1907.01115). 
The data collected for this paper is [available separately](https://github.com/nickswalker/gpsr-commands-dataset). ## Usage Set up a virtual environment using at least Python 3.6: python3.7 -m virtualenv venv | 3,129 |
nicola-decao/BNAF | ['density estimation', 'normalising flows'] | ['Block Neural Autoregressive Flow'] | bnaf.py toy2d.py optim/adam.py data/generate2d.py data/miniboone.py optim/lr_scheduler.py density_estimation.py data/gas.py optim/adamax.py data/power.py data/bsds300.py data/hepmass.py MaskedWeight Sequential BNAF Tanh Permutation save_model create_model load_model compute_log_p_x load_dataset main train load compute_kl create_model compute_log_p_x main save plot_density2d train_density2d plot_energy2d train_energy2d BSDS300 GAS get_correlation_numbers load_data load_data_and_clean load_data_and_clean_and_split U2 sample2d w1 U1 energy2d w2 U3 w3 U4 load_data_no_discrete_normalised_as_array HEPMASS load_data_no_discrete load_data_no_discrete_normalised load_data load_data_normalised load_data MINIBOONE POWER load_data_split_with_noise load_data load_data_normalised Adam Adamax ReduceLROnPlateau POWER GAS BSDS300 HEPMASS n_dims DataLoader TensorDataset device to MINIBOONE format layers flows print MaskedWeight n_dims BNAF Tanh Permutation hidden_dim item device append to range sum model clip_grad_norm_ zero_grad save tensorboard step set_postfix append range SummaryWriter format mean start_epoch item join backward print swap tqdm parameters epochs add_scalar layers flows ReduceLROnPlateau ArgumentParser dataset Adam pprint load_dataset parse_args format create_model __dict__ replace mkdir load join print add_argument path parameters hidden_dim expname train sum backward clip_grad_norm_ zero_grad parameters set_postfix device trange to step steps model Normal device sample to sum backward clip_grad_norm_ zero_grad mean parameters set_postfix trange step steps print load_state_dict print axis DataLoader device show imshow TensorDataset savefig reduce_extreme to cat format join clamp now subplots_adjust t figure Tensor numpy show join T format histogram2d hstack axis now subplots_adjust imshow savefig figure save train_energy2d plot_density2d train_density2d 
plot_energy2d read_pickle drop corr sum get_correlation_numbers mean any load_data std drop int as_matrix RandomState randn rand pi sqrt floor vstack sin append randint array range norm exp w2 w1 exp w3 w1 read_csv load_data drop mean std load_data_no_discrete int T Counter load_data_no_discrete_normalised append load int mean vstack load_data std int RandomState rand hstack shuffle delete load_data zeros load_data_split_with_noise | # BNAF PyTorch implementation of Block Neural Autoregressive Flow based on our paper: > De Cao Nicola, Titov Ivan and Aziz Wilker, [Block Neural Autoregressive Flow](http://arxiv.org/abs/1904.04676) (2019) ## Requirements * **``python>=3.6``** (it will probably work on older versions but I have not tested on them) * **``pytorch>=1.0.0``** Optional for visualization and plotting: ``numpy``, ``matplotlib`` and ``tensorboardX`` ## Structure * [bnaf.py](https://github.com/nicola-decao/BNAF/blob/master/bnaf.py): Implementation of Block Neural Normalizing Flow. * [toy2d.py](https://github.com/nicola-decao/BNAF/blob/master/toy2d.py): Experiments on 2d toy tasks (density estimation and energy matching).
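Like other normalizing flows, BNAF is trained on the exact change-of-variables log-likelihood, log p(x) = log p_z(f(x)) + log|det ∂f/∂x| (presumably what `compute_log_p_x` in the file list evaluates). A minimal scalar illustration of that identity with an affine flow—not the repository's block-autoregressive model:

```python
import numpy as np

# Affine flow z = (x - mu) / sigma maps N(mu, sigma^2) to N(0, 1).
mu, sigma, x = 1.5, 2.0, 0.7
z = (x - mu) / sigma
log_det = -np.log(sigma)                        # dz/dx = 1/sigma

log_pz = -0.5 * z**2 - 0.5 * np.log(2 * np.pi)  # standard-normal log-density
log_px = log_pz + log_det                       # change of variables

# Agrees with the analytic N(mu, sigma^2) log-density:
ref = -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma) - 0.5 * np.log(2 * np.pi)
print(np.isclose(log_px, ref))  # True
```

BNAF replaces the scalar affine map with a deep block-autoregressive network whose Jacobian is triangular, so the log-determinant stays cheap to evaluate.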
nicolasrosa/Sparse-to-Continuous | ['depth estimation', 'monocular depth estimation'] | ['Sparse-to-Continuous: Enhancing Monocular Depth Estimation using Occupancy Maps'] | tensorflow/modules/datasets/kitti_continuous.py tensorflow/modules/datasets/kitti_guidenet.py tensorflow/modules/datasets/lrmjose.py tensorflow/modules/third_party/laina/network.py tensorflow/modules/datasets/dataset.py tensorflow/evaluate_depth_densification_by_hilbert_maps.py tensorflow/modules/args.py tensorflow/modules/framework.py tensorflow/modules/utils.py tensorflow/modules/third_party/monodepth/utils/evaluate_kitti.py tensorflow/modules/evaluation/kitti_depth_prediction_devkit/python/read_depth.py catkin_ws/src/fcrn/src/scripts/ros_image2pred.py catkin_ws/src/fcrn/src/scripts/modules/third_party/laina/network.py catkin_ws/src/fcrn/src/scripts/modules/third_party/laina/fcrn.py tensorflow/modules/__init__.py tensorflow/modules/datasets/kitti_morphological.py tensorflow/modules/third_party/tensorflow/inception_preprocessing.py tensorflow/modules/filenames.py catkin_ws/src/fcrn/src/scripts/ros_image2pred_lrmjose.py tensorflow/modules/datasets/kitti_depth.py tensorflow/modules/loss.py catkin_ws/src/fcrn/src/scripts/ros_talker_predict_cv.py tensorflow/modules/datasets/kitti_discrete.py tensorflow/modules/datasets/nyudepth.py tensorflow/modules/test.py catkin_ws/src/fcrn/src/scripts/modules/__init__.py tensorflow/modules/third_party/monodepth/utils/evaluation_utils.py tensorflow/modules/datasets/apolloscape.py tensorflow/modules/metrics.py tensorflow/modules/plot.py tensorflow/modules/dataloader.py tensorflow/predict_cv.py tensorflow/predict_nick.py tensorflow/modules/datasets/idrid.py tensorflow/modules/train.py tensorflow/modules/third_party/laina/fcrn.py tensorflow/modules/size.py tensorflow/evaluate_depth_densification_by_close_operation.py tensorflow/modules/validation.py Listener Talker convert2uint8 argumentHandler Network Listener Talker convert2uint8 argumentHandler Network talker 
argumentHandler ResNet50UpProj layer interleave get_incoming_shape Network evaluate_densification read_depth_image read_text_file imsave_as_uint16_png evaluate_hilbert_maps_on_kitti_depth read_hilbert_maps_depth_image read_kitti_depth_depth_image read_text_file imsave_as_uint16_png maxDepth circular_counter convertScaleAbs argument_handler CvTimer process_images generate_colorbar main apply_overlay main train predict test argument_handler get_filenames_tensors Dataloader FilenamesHandler Model SessionWithExitSave load_model tf_eigen_loss calculate_l2norm tf_mask_out_invalid_pixels gradient_y np_mse get_trainable_vars get_global_vars tf_eigen_grads_loss gradient_x tf_mse_loss tf_berhu_loss stats_depth_txt2csv evaluation_tool_kitti_depth generate_depth_maps generate_depth_maps_eigen_continuous_split save_metrics_results_csv generate_depth_maps_eigen_split generate_depth_maps_kitti_split evaluation_tool_monodepth update_colorbar Plot Size Test EarlyStopping Train total_size Settings imsave_as_uint16_png detect_available_models Validation Apolloscape Dataset IDRiD KittiContinuous KittiDepth KittiDiscrete KittiGuideNet KittiMorphological LRMJose NyuDepth depth_read ResNet50UpProj layer interleave get_incoming_shape Network convert_gt_disps_to_depths_kitti sub2ind lin_interp convert_disps_to_depths_kitti read_calib_file generate_depth_map compute_errors read_file_data read_text_lines load_gt_disp_kitti load_velodyne_points get_focal_length_baseline distort_color apply_with_random_selector add_argument ArgumentParser video_path Rate CvBridge VideoCapture uint8 get_output print init_node float32 placeholder Publisher convert_image_dtype argumentHandler model_path gpu Tensor isinstance print list genfromtxt rescale_intensity imsave img_as_uint subplots imsave_as_uint16_png evaluation_tool_monodepth show str list set_title shape imshow read_depth_image append output_tmp_pred_dir format evaluation_tool_kitti_depth zip enumerate output_tmp_gt_dir print len subplots 
imsave_as_uint16_png evaluation_tool_monodepth str list read_kitti_depth_depth_image imshow append output_tmp_pred_dir format evaluation_tool_kitti_depth read_text_file zip enumerate output_tmp_gt_dir print read_hilbert_maps_depth_image pause draw len add_argument ArgumentParser addWeighted copy update int putText applyColorMap FONT_HERSHEY_SIMPLEX zeros get_avg range medianBlur convertScaleAbs COLORMAP_HSV print applyColorMap putText hconcat debug FONT_HERSHEY_SIMPLEX generate_colorbar imshow resize COLORMAP_JET apply_overlay video_path VideoCapture uint8 get_output ones_like zeros_like destroyAllWindows print multiply exit float32 placeholder where convert_image_dtype model_path gpu release int batch_size Graph print num_train_samples floor ConfigProto max_steps Test get_test_data get_train_data Dataloader resize_images uint8 print float32 placeholder convert_image_dtype image_path array open format test capitalize app_name train predict convert_to_tensor model_path restore detect_available_models gather_nd where size float32 reduce_sum square div cast tf_mask_out_invalid_pixels constant multiply reduce_max square where reduce_sum div tf_mask_out_invalid_pixels abs multiply size float32 where reduce_sum square div gather_nd cast log gradient_y multiply size float32 where reduce_sum square div gather_nd cast gradient_x log get_trainable_vars convert_gt_disps_to_depths_kitti uint8 list print pause draw astype tqdm imshow title figure load_gt_disp_kitti show_test_results range test_file_path show_test_results list generate_depth_map read_file_data imshow title read_text_lines append imread range astype uint8 print pause draw float32 tqdm figure test_file_path show_test_results list generate_depth_map len read_file_data imshow title read_text_lines append imread range replace astype join uint8 print pause draw float32 tqdm figure split generate_depth_maps_kitti_split array generate_depth_maps_eigen_continuous_split generate_depth_maps_eigen_split print to_csv 
output_dir DataFrame exists T list set_index test_file_path to_csv apply rename output_dir model_path output_tmp_pred_dir range read_csv drop garg_crop list len logical_and save_metrics_results_csv shape min_depth title imshow range astype max_depth print pause draw compute_errors float32 tqdm int32 figure zeros eigen_crop print stats_depth_txt2csv call draw_all linspace set_clim set_ticks input print sort glob output_dir model_path enumerate update getsizeof set float astype array open maximum mean sqrt abs log astype float32 zfill append imread range list zip shape resize append enumerate append shape logical_and enumerate readlines close open format print int32 isfile append split reshape T arange LinearNDInterpolator reshape meshgrid set reshape read_calib_file int T sub2ind lin_interp read_calib_file reshape hstack min dot shape vstack round eye zeros load_velodyne_points random_uniform | # Sparse-to-Continuous (FCRN) This is the reference Tensorflow implementation for training and testing depth estimation models using the method described in > [ICAR 2019 "Sparse-to-Continuous: Enhancing Monocular Depth Estimation using Occupancy Maps"](https://ieeexplore.ieee.org/document/8981652/authors#authors) > > [Nícolas dos Santos Rosa](https://dblp.org/pid/198/1985), [Vitor Guizilini](https://dblp.org/pid/81/7230), [Valdir Grassi Jr](https://dblp.org/pid/93/4528) **Citation** If you find our work useful in your research please consider citing our paper: ``` @INPROCEEDINGS{8981652, author={N. d. S. {Rosa} and V. {Guizilini} and V. {Grassi}}, | 3,131 |
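The `interleave` helper in the bundled FCRN code (`modules/third_party/laina/network.py`) realizes the fast up-projection of the FCRN architecture: four convolution outputs are interleaved into one map of twice the spatial resolution, much like pixel shuffle. A NumPy sketch of that interleaving (the exact sub-pixel layout here is an assumption; the repository's version operates on TensorFlow tensors):

```python
import numpy as np

def interleave_2x(a, b, c, d):
    """Interleave four HxW maps into a single 2Hx2W map."""
    h, w = a.shape
    out = np.empty((2 * h, 2 * w), dtype=a.dtype)
    out[0::2, 0::2] = a   # top-left sub-pixel of each 2x2 block
    out[0::2, 1::2] = b   # top-right
    out[1::2, 0::2] = c   # bottom-left
    out[1::2, 1::2] = d   # bottom-right
    return out

maps = [np.full((2, 2), v, dtype=float) for v in range(4)]
up = interleave_2x(*maps)
print(up.shape)  # (4, 4)
```

Each 2×2 block of the output draws one value from each input map, doubling resolution without a transposed convolution.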
nightldj/behance_release | ['style transfer'] | ['Learning from Multi-domain Artistic Images for Arbitrary Style Transfer'] | my_discriminator.py my_autoencoder.py folder.py net_utils.py test_autoencoder.py train_mask.py make_opt.py make_loss.py load_data.py utils.py is_image_file find_classes SelectImageFolder s_is_image_file s_make_dataset make_dataset ImageFolder accimage_loader default_loader pil_loader load_dataset get_optimizer_var adjust_learning_rate PredAdam get_optimizer autoencoder mask_autoencoder PatchDiscriminator get_gram make_wct_aux_enc_layers adin_transform2 make_wct_enc_layers make_dise_layers make_tr_dec_layers get_model_parameters get_base_dep get_dise_cfg get_autoencoder_args lower sort is_image_file join sorted append expanduser listdir walk lower join sorted s_is_image_file append expanduser listdir walk listdir print Compose ImageFolder content_data style_data len PredAdam print Adam SGD RMSprop parameters PredAdam print Adam SGD RMSprop print param_groups min float lr_freq Conv2d Conv2d print ConvTranspose2d Conv2d int isdigit print Conv2d ConvTranspose2d bmm size transpose view view clamp size mean std parse_args add_argument ArgumentParser print parameters print print | # behance_release This is the PyTorch code for [''Learning from Multi-domain Artistic Images for Arbitrary Style Transfer''](https://arxiv.org/abs/1805.09987) in Expressive 2019. The pre-trained model on the behance-face dataset can be found [here](https://drive.google.com/file/d/1fEKb9yIbXQb07jIJanPbAaO4LkcV4Xbp/view?usp=sharing). 
To test the pre-trained model, put the downloaded models in a folder named ''models'', content images in ''data/content/test'', and style images in ''data/style/test'', then run ``` python test_autoencoder.py --content-data data/content --style-data data/style --enc-model models/vgg_normalised_conv5_1.t7 --dec-model none --dropout 0.5 --gpuid 0 --train-dec --dec-last tanh --trans-flag adin --diag-flag batch --ae-mix mask --ae-dep E5-E4 --base-mode c4 --st-layer 4w --test-dp --save-image output/face_mask --dise-model models/behance_release.pth ``` The stylized images can be found in the ''output'' folder. Here are some test cases used in the paper: (content/style/output)    | 3,132
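The `--trans-flag adin` option in the command above selects an adaptive-instance-normalization-style feature transform (cf. `adin_transform2` in the function list): content features are standardized with their own per-channel statistics and re-scaled with the style statistics. A NumPy sketch of the standard AdaIN formula—an illustration, not this repository's implementation:

```python
import numpy as np

def adain(content, style, eps=1e-5):
    """Shift/scale content features so each channel matches the style stats."""
    axes = (-2, -1)  # spatial dimensions of a (C, H, W) feature map
    c_mu = content.mean(axis=axes, keepdims=True)
    c_sd = content.std(axis=axes, keepdims=True)
    s_mu = style.mean(axis=axes, keepdims=True)
    s_sd = style.std(axis=axes, keepdims=True)
    return (content - c_mu) / (c_sd + eps) * s_sd + s_mu

rng = np.random.default_rng(0)
content = rng.normal(0.0, 1.0, size=(3, 8, 8))
style = rng.normal(2.0, 3.0, size=(3, 8, 8))
out = adain(content, style)
print(np.allclose(out.mean(axis=(-2, -1)), style.mean(axis=(-2, -1))))  # True
```

After the transform, each channel of the content feature map carries the style's per-channel mean and (up to `eps`) standard deviation.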
nihalsid/ViewAL | ['superpixels', 'active learning', 'semantic segmentation'] | ['ViewAL: Active Learning with Viewpoint Entropy for Semantic Segmentation'] | model/backbone/mobilenet.py model/deeplab.py model/sync_batchnorm/batchnorm.py model/sync_batchnorm/replicate.py utils/summary.py active_selection/__init__.py active_selection/ceal.py utils/metrics.py dataloader/custom_transforms.py model/backbone/resnet.py active_selection/max_repr.py active_selection/regional_view_entropy_kl.py active_selection/softmax_confidence.py active_selection/regional_vote_entropy.py dataset/preprocessing-scripts/selections.py model/decoder.py utils/misc.py active_selection/softmax_margin.py model/aspp.py utils/saver.py train.py active_selection/vote_entropy.py argument_parser.py dataloader/paths.py dataloader/indoor_scenes.py active_selection/random_selection.py model/sync_batchnorm/comm.py constants.py dataset/preprocessing-scripts/to_lmdb.py active_selection/core_set.py active_selection/softmax_entropy.py train_active.py model/backbone/drn.py model/backbone/xception.py utils/trainer.py model/backbone/__init__.py utils/colormaps.py active_selection/view_entropy.py model/sync_batchnorm/__init__.py dataloader/dataset_base.py model/sync_batchnorm/unittest.py utils/loss.py utils/superpixel_projections.py utils/calculate_weights.py parse_args main main CEALSelector CoreSetSelector MaxRepresentativeSelector RandomSelector RegionalViewEntropyWithKldivSelector RegionalVoteEntropySelector SoftmaxConfidenceSelector SoftmaxEntropySelector SoftmaxMarginSelector ViewEntropySelector VoteEntropySelector get_active_selector transform_training_sample ToTensor RandomGaussianBlur Normalize RandomHorizontalFlip FixScaleCrop transform_validation_sample DatasetBase LMDBHandle OverlapHandler IndoorScenes get_active_dataset get_num_classes ActiveIndoorScenesPseudoLabeled IndoorScenesWithAllInfo ActiveIndoorScenesRegional ActiveIndoorScenes PathsDataset write_frames_list create_superpixel_segmentations 
create_seed_set create_split_mixed create_splits call read_scene_list build_aspp _ASPPModule ASPP Decoder build_decoder DeepLab drn_d_54 drn_c_58 drn_d_40 drn_d_38 drn_c_26 Bottleneck drn_d_105 DRN_A drn_d_22 conv3x3 DRN drn_a_50 drn_d_24 drn_c_42 BasicBlock fixed_padding InvertedResidual conv_bn MobileNetV2 ResNet ResNet101 Bottleneck fixed_padding Block AlignedXception SeparableConv2d build_backbone _sum_ft SynchronizedBatchNorm2d _unsqueeze_ft _SynchronizedBatchNorm SynchronizedBatchNorm1d SynchronizedBatchNorm3d SyncMaster FutureResult SlavePipe execute_replication_callbacks CallbackContext DataParallelWithCallback patch_replication_callback TorchTestCase as_numpy calculate_weights_labels create_nyu40_label_colormap get_colormap map_segmentation_to_colors map_segmentations_to_colors SegmentationLosses calculate_miou Evaluator visualize_image_target get_learning_rate visualize_entropy turn_on_dropout visualize_gt _mark_boundaries mark_boundaries visualize_vote_view_entropy visualize_seedset_spx visualize_selection_spx visualize_spx_dataset visualize_numbered_superpixels visualize_image_target_prediction visualize_point_cloud Saver TensorboardSummary test_coverage_scannet_sample project_images_to_world project_world_to_image find_superpixel_coverage project_image_to_world Trainer int print add_argument random ArgumentParser IndoorScenes batch_size load_best_checkpoint memory_hog Trainer Saver dataset cuda seed base_size training epochs load_state_dict parse_args HDD_DATASET_ROOT range LMDBHandle create_summary experiment_dir close use_balanced_weights make_dataset_multiple_of_batchsize resume manual_seed TensorboardSummary load join lr_scheduler validation print calculate_weights_labels DeepLab step RUNS add_scalar endswith get_selections round exists checkname superpixel_dir select_next_batch save_experiment_config load_selections reset_dataset active_selection_size max_iterations get_active_selector dump_matrix int save_active_selections 
get_fraction_of_labeled_data join OverlapHandler superpixel_coverage_dir SSD_DATASET_ROOT superpixel_overlap dataset Compose Compose startswith endswith str write_frames_list print read_scene_list append iterdir communicate print Popen split join split print print len split load_url load_state_dict DRN_A load_url DRN load_state_dict load_url DRN load_state_dict load_url DRN load_state_dict load_url DRN load_state_dict load_url DRN load_state_dict load_url DRN load_state_dict load_url DRN load_state_dict load_url DRN load_state_dict load_url DRN load_state_dict pad ResNet list hasattr __data_parallel_replicate__ modules enumerate len replicate data isinstance sum uint8 median print logical_and astype tqdm nan_to_num sqrt DataLoader bincount zeros numpy array append append transpose map_segmentation_to_colors from_numpy zeros get_colormap tolist nanmean diag sum divide param_groups draw_geometries Vector3dVector PointCloud train show subplot uint8 join imwrite astype cm_hot tight_layout close imshow title map_segmentation_to_colors Normalize figure savefig max_entropy get_cmap RUNS show subplot uint8 join list astype close PathsDataset imshow title savefig figure Normalize zip range RUNS len dtype ones_like astype range map_segmentation_to_colors lmdb_handle image_path_subset subplot sorted list transpose imshow title IndoorScenesWithAllInfo savefig range astype close join uint8 mark_boundaries tqdm figure numpy RUNS len map_segmentation_to_colors lmdb_handle image_path_subset subplot list tolist imshow title IndoorScenesWithAllInfo savefig range astype close join uint8 mark_boundaries tqdm figure RUNS len join LMDBHandle IndoorScenes print visualize_numbered_superpixels HDD_DATASET_ROOT join LMDBHandle visualize_spx_dataset load_selections ActiveIndoorScenesRegional HDD_DATASET_ROOT RUNS dtype ones_like astype range join uint8 imwrite _mark_boundaries transpose new astype cm_hot paste map_segmentation_to_colors save get_cmap max RUNS open join uint8 imwrite hstack 
astype RUNS join list LMDBHandle IndoorScenes visualize_image_target shuffle numpy HDD_DATASET_ROOT range len empty_cache type mm FloatTensor FloatTensor flatten DEPTH_WIDTH linspace project_image_to_world IntTensor meshgrid DEPTH_HEIGHT type range len list IntTensor FloatTensor mm tolist flatten dict round unique zip item float type enumerate join project_images_to_world SSD_DATASET_ROOT tqdm OrderedDict project_world_to_image IndoorScenesWithAllInfo save append empty_cache scene_id_to_index join LMDBHandle IndoorScenes find_superpixel_coverage HDD_DATASET_ROOT image_path_subset | # ViewAL: Active Learning with Viewpoint Entropy for Semantic Segmentation This repository contains the implementation for the paper: Yawar Siddiqui, Julien Valentin and Matthias Niessner, ["ViewAL: Active Learning with Viewpoint Entropy for Semantic Segmentation"](https://arxiv.org/abs/1911.11789) ([video](https://youtu.be/tAGdx2j-X_g))  ## Running #### Arguments ``` train_active.py [-h] [--backbone {resnet,xception,drn,mobilenet}] [--out-stride OUT_STRIDE] [--dataset {scannet,scenenet-rgbd,matterport3d,scannet-sample}] | 3,133 |
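The ViewAL entry above selects superpixel regions to annotate by ranking them with entropy-based uncertainty scores (cf. `SoftmaxEntropySelector` and `ViewEntropySelector` in its dependency list). A minimal sketch of that entropy ranking on toy per-region class posteriors — the function names below are illustrative, not the repository's actual `active_selection` API:

```python
import math

def entropy(probs):
    """Shannon entropy (natural log) of a discrete class distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0.0)

def rank_regions_by_entropy(region_probs):
    """Return region indices sorted most-uncertain (highest entropy) first.

    region_probs: list of per-region class-probability lists.
    """
    scores = [entropy(p) for p in region_probs]
    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)

# A confident region vs. a maximally uncertain (uniform) one:
regions = [[0.9, 0.05, 0.05], [1/3, 1/3, 1/3]]
print(rank_regions_by_entropy(regions))  # → [1, 0]: the uniform region ranks first
```

In the real pipeline these scores would be aggregated per superpixel across all views in which it is visible, then the top-scoring regions sent for labeling.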
nii-yamagishilab/multi-speaker-tacotron | ['speech synthesis'] | ['Can Speaker Augmentation Improve Multi-Speaker End-to-End TTS?'] | modules/multi_speaker_postnet.py synthesize_new_texts/datasets/numbers.py synthesize_new_texts/extensions/flite.py synthesize_new_texts/util/audio.py synthesize_new_texts/hparams.py modules/channel_encoder_postnet.py synthesize_new_texts/util/tfrecord.py synthesize_new_texts/construct_tfrecords.py synthesize_new_texts/datasets/cleaners.py synthesize_new_texts/extensions/phoneset/phoneset.py synthesize_new_texts/datasets/synthesize_data.py modules/external_embedding.py synthesize_new_texts/preprocess.py synthesize_new_texts/datasets/text.py ChannelEncoderPostNet ExternalEmbedding MultiSpeakerPostNet hparams_debug_string lowercase english_cleaners expand_abbreviations collapse_whitespace basic_cleaners convert_to_ascii transliteration_cleaners expand_numbers normalize_numbers _expand_dollars _expand_ordinal _expand_decimal_point _expand_number _remove_commas SpeakerInfo Synthesize TxtWavRecord write_preprocessed_target_data write_preprocessed_source_data TargetRDD MelStatistics _symbols_to_sequence text_to_sequence Flite Phoneset Audio decode_preprocessed_target_data write_preprocessed_target_data PredictionResult write_preprocessed_mgc_lf0_data parse_preprocessed_target_data write_tfrecord read_prediction_result int64_feature bytes_feature PreprocessedTargetData write_preprocessed_mel_data list values sub lowercase collapse_whitespace lowercase convert_to_ascii collapse_whitespace lowercase expand_abbreviations collapse_whitespace convert_to_ascii expand_numbers group split int group sub tostring Example write_tfrecord tostring Example write_tfrecord cleaner append tostring Example write_tfrecord tostring Example write_tfrecord parse_single_example decode_raw float32 decode reshape Example ParseFromString tf_record_iterator frombuffer | # multi-speaker-tacotron This is an implementation of our paper from ICASSP 2020: "Zero-Shot 
Multi-Speaker Text-To-Speech with State-of-the-art Neural Speaker Embeddings," by Erica Cooper, Cheng-I Lai, Yusuke Yasuda, Fuming Fang, Xin Wang, Nanxin Chen, and Junichi Yamagishi. https://arxiv.org/abs/1910.10838 Please cite this paper if you use this code. Audio samples can be found here: https://nii-yamagishilab.github.io/samples-multi-speaker-tacotron/ ## News: * 2022-03-29: Migrated data from Dropbox to Zenodo. * 2021-06-21: Added scripts for creating tfrecords to synthesize new texts using pretrained models. See directory `synthesize_new_texts` and its README. * 2020-08-10: Added example scripts for our new paper accepted to Interspeech 2020, <a href=https://arxiv.org/abs/2005.01245>"Can Speaker Augmentation Improve Multi-Speaker End-to-End TTS?"</a> See directory `is20` and please also update your copies of `tacotron2` and `self-attention-tacotron` repositories as these contain some necessary changes. | 3,134 |
nikhgarg/EmbeddingDynamicStereotypes | ['word embeddings'] | ['Word Embeddings Quantify 100 Years of Gender and Ethnic Stereotypes'] | dataset_utilities/test_word_vectors.py dataset_utilities/create_yrly_datasets.py create_final_plots_all.py latexify.py utilities.py variance_only_over_time.py plot_creation.py dataset_utilities/pipeline.py dataset_utilities/normalize_vectors.py dataset_utilities/remove_duplicates.py dataset_utilities/GloVe-1.2/eval/python/evaluate.py changes_over_time.py dataset_utilities/handle_cohagbooks_vectors.py cossim load_vectors load_vectors_over_time calc_distance_over_time single_set_distances_to_single_set load_vocab calc_distance_between_words get_counts_dictionary main set_distances_to_set calc_distance_between_vectors calc_distance_over_time_averagevectorsfirst get_vector_variance main latexify format_axes plot_overtime_scatter get_highest_residual_occupations identify_top_biases_individual overtime_scatter_errorusingallotheryears static_cross_correlation_table scatter_occupation_percents_distances print_most_biased_over_time individual_regression_coefficients_for_overtime_scatter get_biases_individual plot_vector_variances_together vocab_counts plot_scatter_and_regression create_cross_time_correlation_heatmap_differencestoself residual_analysis_with_stereotypes set_plots_folder plot_averagebias_over_time_consistentoccupations plot_mean_counts_together identify_top_biases_individual_threegroups princeton_trilogy_plots get_model_residuals summarize_model test_phase_shift_heatmap do_over_time_trend_test occupation_func_female_percent load_williamsbestadjectives occupation_func_whitehispanic_logitprop load_occupationpercent_data occupation_func_female_logitprop load_files occupation_func_whiteasian_percent occupation_func_whiteasian_logitprop occupation_func_williamsbestadject differences occupation_func_whitehispanic_percent load_file load_mturkstereotype_data get_years load_vectors load_vectors_over_time load_vocab get_counts_dictionary 
main get_vector_variance clean_string yr_directory main_ldc95 main_coha parse_txt_file main_nyt parse_xml_file load_yr save_files load_vectors find_vector_norms normalize print_sizes normalize_vectors create_combined_years_dataset_coha create_combined_years_glove_justglove call_glove clean_googlenewsvectors create_combined_years_dataset create_combined_years_glove_coha call_justglove create_combined_years_glove remove_duplicates test_packages test_basic main evaluate_vectors dot norm append calc_distance_between_words nan enumerate mean nan append calc_distance_between_vectors array enumerate print append load_vectors append calc_distance_over_time_averagevectorsfirst calc_distance_over_time enumerate append calc_distance_over_time enumerate range len var mean nan append array enumerate print load_vectors_over_time enumerate str list set_plots_folder load_file keys print sqrt update set_linewidth set_visible set_ticks_position set_color set_tick_params list plot_scatter_and_regression print get_years extend isnan any differences append open subplots savelabel grid load_occupationpercent_data open list ylabel twinx savefig legend append format plot close tight_layout despine label print xlabel isnan any tsplot get_legend_handles_labels array set_ylim get_years list unique_everseen print isnan any append range get_years identify_top_biases_individual rankdata str format print subtract len tolist reversed isnan argsort any differences append float range enumerate get_years list format print len set get_biases_individual intersection append pearsonr range enumerate print extend append abs range len get_years set_size set_weight pearsonr heatmap yticks rankdata savefig append range format get_xticklabels close tight_layout enumerate get_yticklabels isnan any differences zeros test_phase_shift_heatmap len format individual_regression_coefficients_for_overtime_scatter plot_scatter_and_regression get_years load_occupationpercent_data 
overtime_scatter_errorusingallotheryears differences append enumerate sorted list format print transpose set rsquared DataFrame fit grid DataFrame list sorted transpose regplot ylabel savefig format subtract close tight_layout set despine power enumerate fittedvalues print xlabel fit average color_palette len format as_latex replace plot_scatter_and_regression get_model_residuals print DataFrame transpose load_objective_data argsort get_biases_individual append pvalues pearsonr load_mturkstereotype_data fit format replace plot_scatter_and_regression savelabel print load_objective_data get_biases_individual append format poly1d print polyfit argsort append fit_fn array enumerate differences get_years transpose DataFrame fit plot_scatter_and_regression print get_biases_individual append enumerate as_latex grid drop get_dummies DataFrame list sorted resid transpose regplot ylabel ylim savefig sum format close tight_layout set despine xlim xlabel print fit to_csv argsort summarize_model average color_palette pvalues array len Series DataFrame unstack yscale format plot print xlabel close ylabel mean savefig legend append get_years format plot xlabel close ylabel savefig legend get_years yscale list format plot print min get_years tight_layout close mean argsort savefig append max range len float float float log float float log float print load_file keys list print str lower strip split print str range yr_directory print str range yr_directory print format range yr_directory load format print load_yr print load_vectors find_vector_norms print print normalize replace print print print str create_combined_years_dataset call_glove print str call_glove create_combined_years_dataset_coha print str call_justglove call call append most_similar most_similar_cosmul score doesnt_match similarity load_word2vec_format split items T add_argument shape ArgumentParser parse_args sum evaluate_vectors zeros len int T arange print min dot flatten ceil zeros float sum array range len | 
This repository contains code and data associated with [Word embeddings quantify 100 years of gender and ethnic stereotypes.](https://doi.org/10.1073/pnas.1720347115) PDF available [here](http://gargnikhil.com/files/pdfs/GSJZ18_embedstereotypes.pdf). If you use the content in this repository, please cite: Garg, N., Schiebinger, L., Jurafsky, D. & Zou, J. Word embeddings quantify 100 years of gender and ethnic stereotypes. PNAS 201720347 (2018). doi:10.1073/pnas.1720347115 To re-run all analyses and plots: 1. download vectors from online sources and normalize by l2 norm (links in paper and below) 2. set up parameters to run as in run_params.csv 3. run changes_over_time.py 4. run create_final_plots_all.py dataset_utilities/ contains various helper scripts to preprocess files and create word vectors. From a corpus, for example LDC95T21-North-American-News, which contains many text files (each containing an article) from a given year, first run create_yrly_datasets.py to create a single text file per year (with only valid words). Then, run pipeline.py on each of these files to create vectors, potentially combining multiple years into a single training set. normalize_vectors.py contains utilities to standardize the vectors. We have uploaded the New York Times embeddings generated for this paper. They are available at [http://stanford.edu/~nkgarg/NYTembeddings/](http://stanford.edu/~nkgarg/NYTembeddings/). 2021/04/05 update: Unfortunately, the files are no longer available. (Upon my graduation the links died, before I was able to back them up). However, the original text data is still available at [New York Times Annotated Corpus](https://catalog.ldc.upenn.edu/LDC2008T19), and so the vectors can be trained as described in the paper. | 3,135
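The EmbeddingDynamicStereotypes pipeline above l2-normalizes word vectors and scores occupation bias by comparing average distances from an occupation vector to two demographic word groups (cf. `normalize_vectors` and `calc_distance_between_vectors` in the dependency list). A minimal pure-Python sketch with toy 3-d vectors — the real code operates on trained GloVe/word2vec embeddings, and every vector and word below is invented for illustration:

```python
import math

def l2_normalize(v):
    """Scale a vector to unit Euclidean (l2) norm."""
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def bias_score(target, group_a, group_b):
    """Avg distance to group_b minus to group_a; positive => closer to group_a."""
    target = l2_normalize(target)
    da = sum(euclidean(target, l2_normalize(g)) for g in group_a) / len(group_a)
    db = sum(euclidean(target, l2_normalize(g)) for g in group_b) / len(group_b)
    return db - da

nurse = [0.1, 0.9, 0.1]                      # toy vectors, purely illustrative
she, her = [0.0, 1.0, 0.0], [0.1, 1.0, 0.0]
he, him = [1.0, 0.0, 0.1], [1.0, 0.1, 0.0]
print(bias_score(nurse, [she, her], [he, him]) > 0)  # → True: closer to the first group
```

Tracking this score per decade-specific embedding is, in outline, how the repository quantifies how stereotypes shift over time.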
nikitacs16/horovod_gcn_pointer_generator | ['response generation'] | ['On Incorporating Structural Information to improve Dialogue Response Generation'] | double_automate.py util.py batcher.py inspect_checkpoint.py rouge.py decode.py attention_decoder.py data.py convert_scores_to_average.py evaluate.py best_by_epoch.py convert_to_txt.py run_summarization.py bleu_simplify.py simplify.py model_old.py model.py beam_search.py single_automate.py get_all_bleu.py bleu.py bl.py convert.py run_beam_search Hypothesis sort_hyps get_metrics multi_test get_number_ get_result_dir_name run_test moses_multi_bleu moses_multi_bleu get_metrics file_to_scores run_train run_eval run_test run_eval_test _len_lcs _get_ngrams rouge rouge_l_summary_level rouge_n _recon_lcs _split_into_words _lcs rouge_l_sentence_level _union_lcs _f_p_r_lcs _get_word_ngrams restore_best_model setup_training get_data run_validation_sequential main convert_to_coverage_model get_metrics get_number_ run_train run_eval run_test run_eval_test get_config load_ckpt beam_size decode_onestep run_encoder extend append range sort_hyps search join str sorted moses_multi_bleu print glob rouge readlines strip zip append len str system join apply_async Pool close chmod urlretrieve write close NamedTemporaryFile encode flush replace compute_metrics float readlines split open str system str system str system tuple add set range len _split_into_words _lcs dict max range tuple _lcs intersection _get_word_ngrams len _len_lcs _split_into_words len _recon_lcs set _split_into_words union len _split_into_words len mean list map zip join load_ckpt replace print exit Saver save info initialize_all_variables Session log_root run load_ckpt print exit Saver save info global_variables_initializer Session run join restore_best_model build_graph Scaffold log_root makedirs restore_best_model build_graph Saver save Session open str sorted restore coverage glob info log_root join int run_eval_step Batcher write data_path next_batch makedirs 
load sorted glob open info append len vocab_path decode BeamSearchDecoder setup_training set_random_seed get_data set_verbosity vocab_size exp_name flow_combined str use_label_information BertVocab _replace rank Vocab run_validation_sequential tf_example_format use_val_as_test SummarizationModel use_bert init info run_eval_parallel test_by_epoch log_root INFO join beam_size items print Batcher data_path bert_vocab_file_path mode makedirs str ConfigProto local_rank join restore get_checkpoint_state model_checkpoint_path info log_root | # On Incorporating Structural Information to Improve Dialogue Response Generation [arXiv](https://arxiv.org/abs/2005.14315) The code is based on [Q-GTTP](https://github.com/nikitacs16/q_pointer_generator) which is based on [Pointer-Generator Network](https://github.com/abisee/pointer-generator) We consider the task of generating dialogue responses from background knowledge comprising of domain specific resources. Specifically, given a conversation around a movie, the task is to generate the next response based on background knowledge about the movie such as the plot, review, Reddit comments etc. This requires capturing structural, sequential and semantic information from the conversation context and the background resources. This is a new task and has not received much attention from the community. We propose a new architecture that uses the ability of BERT to capture deep contextualized representations in conjunction with explicit structure and sequence information. More specifically, we use (i) Graph Convolutional Networks (GCNs) to capture structural information, (ii) LSTMs to capture sequential information and (iii) BERT for the deep contextualized representations that capture semantic information. We analyze the proposed architecture extensively. To this end, we propose a plug-and-play Semantics-Sequences-Structures (SSS) framework which allows us to effectively combine such linguistic information. 
Through a series of experiments we make some interesting observations. First, we observe that the popular adaptation of the GCN model for NLP tasks where structural information (GCNs) was added on top of sequential information (LSTMs) performs poorly on our task. This leads us to explore interesting ways of combining semantic and structural information to improve the performance. Second, we observe that while BERT already outperforms other deep contextualized representations such as ELMo, it still benefits from the additional structural information explicitly added using GCNs. This is a bit surprising given the recent claims that BERT already captures structural information. Lastly, the proposed SSS framework gives an improvement of 7.95% over the baseline. ![SSS Framework][logo] [logo]: https://github.com/nikitacs16/horovod_gcn_pointer_generator/blob/master/SSSFramework.png ## Requirements Tensorflow v1.8 and above [Horovod](https://github.com/horovod/horovod) ## Pre-Processed Data | 3,136 |
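The SSS framework above combines GCNs (structure) with LSTMs/BERT (sequence/semantics); its structural component reduces to repeated graph-convolution steps of the form H' = σ(Â H W), where Â is the adjacency matrix with self-loops, row-normalized. A minimal pure-Python sketch of one such step on a toy 3-node path graph — not the repository's TensorFlow implementation:

```python
def gcn_layer(adj, feats, weight):
    """One GCN propagation: row-normalized adjacency with self-loops,
    neighborhood aggregation, then a linear map with ReLU."""
    n = len(adj)
    # add self-loops and row-normalize the adjacency matrix
    a_hat = [[adj[i][j] + (1 if i == j else 0) for j in range(n)] for i in range(n)]
    for row in a_hat:
        s = sum(row)
        for j in range(n):
            row[j] /= s
    # aggregate neighbor features: Â H
    agg = [[sum(a_hat[i][k] * feats[k][d] for k in range(n))
            for d in range(len(feats[0]))] for i in range(n)]
    # linear transform + ReLU: σ(Â H W)
    out_dim = len(weight[0])
    return [[max(0.0, sum(agg[i][d] * weight[d][o] for d in range(len(weight))))
             for o in range(out_dim)] for i in range(n)]

adj = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]       # path graph 0-1-2
feats = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]  # 2-d node features
w = [[1.0, 0.0], [0.0, 1.0]]                  # identity weight, for clarity
print(gcn_layer(adj, feats, w))
```

With the identity weight the output is just the self-loop-averaged neighborhood features, which makes the "structure as smoothing over the dependency graph" intuition easy to inspect.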
nikitakit/voxelworld | ['semantic segmentation'] | ['Where is Misty? Interpreting Spatial Descriptors by Modeling Regions in Space'] | lrpc/example/example.py util/transformations.py task_tools/raycast.py mturk_server/manage_tasks.py world_service/test_client.py mturk_submit/hit_creation.py world_service/world_service.py lrpc/gen_lrpc/__main__.py task_tools/generate_whereis_guess_tasks.py task_tools/distribute_responses.py task_tools/validate_responses.py toy2d/gen_examples.py learning/data_whereis.py task_tools/generate_whereis_tasks_synthetic.py lcmproto/grpc_bridge/run_codegen.py toy2d/baseline_model_toy2d.py toy2d/shapes.py util/ipyloop.py world_service/dummy_world_service.py mturk_server/main.py toy2d/model_toy2d.py toy2d/toy2d_responses_to_tfrecords.py util/monkeypatch_voxelregion.py util/tf_util.py lrpc/lrpc/__init__.py task_tools/generate_whereis_tasks.py lcmproto/grpc_bridge/lcmbridge_client.py lcmproto/lcmproto_ws_bridge.py learning/responses_to_tfrecords.py lcmproto/example/listener.py learning/model_baseline.py task_tools/snapshot_service.py learning/model_unsupervised.py util/struct_tools.py learning/model_whereis.py task_tools/whereis_inspection.py task_tools/responses_to_csv.py learning/data_whereis_networked.py lcmproto/grpc_bridge/lcmbridge_server.py task_tools/task_service.py lcmproto/__init__.py world_service/minecraft_export.py toy2d/data_toy2d.py lcmproto/example/send_message.py toy2d/image_server.py handler WSHandler DescriptorModifiedHandler update_message_classes RPCHelper LCMProtoChannel LCMProto my_handler run sender_iter recv handle_msg serve LCMBridge handle_msg DataGenerator RandomRoomData SampleRejected main DataReader SampledWorldData main varinit eval_model batch_norm eval_correct_model inspect_one varclear get_words varinit eval_model eval_correct_model inspect_one save_filters varclear varinit apply_conv eval_model batch_norm make_radial_filter make_angular_filter eval_correct_model inspect_one plot_spheres varclear get_words 
get_example MyServicer ActiveCall lcmproto_to_lrpc LRPC StreamSender get_lrpc LCMWrapper SingleSender load_sqs_credentials on_assignment_changed AssignmentDatabase CompletionCode SubmitHandler DashboardWSHandler make_app DashboardHandler receive_messages PortalHandler RootHandler DevPortalHandler PortalWSHandler add_task_template extract_task_responses create_pool counter distrib_tasks sample_candidates_simple sample_candidates_landmarks sample_candidates_two_step get_camera_vector get_viewport_mask_relative visibility_mask_relative misty_locations_relative misty_locations_from_snapshot raycast_relative named_tasks_to_df get_timestamp init_snapshots_from_disk normalize_path save_snapshot_to_disk SnapshotServiceServicerImpl TaskResponsesModifiedHandler find_tasks init_task_responses_from_disk find_task_names list_folders get_region index_for_angle normalize_path TaskServiceServicerImpl init_tasks_from_disk save_task_to_disk keep_response run_event_loop add_heatmap test1 get_task_from_values add_test_heatmap add_misty_relative_heatmap prepare_submit_request main wrap_threadsafe GrammarGeneratedData DataGenerator SampleRejected main DataReader gen_example example_to_img hash_example uncache_example generate_batch cache_example make_app draw_voxels draw_heatmap draw_shapes main get_example loop_asyncio DecodeNumpyU32Array u32_getter u32_setter new_CEscape monkeypatch numpy_u32_encoder numpy_u32_sizer dict_to_struct struct_to_dict init_from_dict msg_to_struct new_variable_scope sparse_to_list normalize_distribution create_row_setter linear sparse_boolean_mask_length_capped reverse_dynamic_rnn create_var_setter sparse_boolean_mask list_to_sparse sparse_vectorize safe_run orthogonalization_matrix vector_product inverse_matrix euler_matrix translation_matrix shear_matrix vector_norm quaternion_from_matrix quaternion_inverse Arcball projection_matrix unit_vector rotation_from_matrix random_rotation_matrix quaternion_from_euler affine_matrix_from_points decompose_matrix 
clip_matrix quaternion_conjugate quaternion_slerp quaternion_about_axis arcball_map_to_sphere scale_from_matrix euler_from_quaternion angle_between_vectors scale_matrix random_quaternion quaternion_matrix quaternion_imag superimposition_matrix arcball_nearest_axis projection_from_matrix translation_from_matrix shear_from_matrix euler_from_matrix rotation_matrix random_vector compose_matrix identity_matrix reflection_matrix concatenate_matrices is_same_transform arcball_constrain_to_axis reflection_from_matrix quaternion_multiply quaternion_real _import_module DummyWorldServiceServicer handle_task WorldServiceServicerImpl list format Add MessageFactory name print FileDescriptorProto serialized_pb file output_type ParseFromString service startswith GetPrototype input_type GetMessages method values handle str ranges enabled print name timestamp position orientation append print handle clear Receive print msg add channel Empty publish Thread sender_iter print insecure_channel start Send LCMBridgeStub put handle print add_LCMBridgeServicer_to_server add_insecure_port start server ThreadPoolExecutor LCMBridge split RunOptions start_queue_runners isinstance get_vocab_embeddings print RandomRoomData get_inputs get_task_from_values result Coordinator create_threads run InteractiveSession SampledWorldData run add all_variables set run all_variables set constant Variable apply batch_normalization moments cond run run sparse_to_list safe_run sparse_to_list isinstance print activate_task reshape get_task_from_values result add_misty_relative_heatmap safe_run zeros_like constant transpose gather sqrt floor zeros expand_dims range int constant transpose concat pi atan2 gather floor zeros expand_dims range format len enumerate set_aspect sparse_to_list set_xlim add_subplot draw set_zlim figure get_cmap range set_ylim safe_run Example get_event_loop LCM LRPC super get_event_loop LCMProto LRPC items list get format returned add_callback WARNING debug delete resource loads warning 
body setLevel abandoned get_queue_by_name prompt_returned complete_assignment get_worker_for_assignment info execute isinstance name dumps MessageToJson processed_task execute fetchall Parse join basename list items NamedTask print add dirname responseroot execute makedirs join basename base_task NamedTask name MergeFrom worker_id print set counter add dirname processed_task responses next makedirs shuffled list zeros_like nonzero_adj shuffle pad zip append shuffled list nonzero_adj zeros_like extend shuffle clear_adj pad zip sample format print name shuffle sample rotation_matrix asarray rotation_matrix zeros_like atan2 array range asarray inf argmin where sign abs array list asarray all zeros_like range array raycast_relative visibility_mask_relative get_viewport_mask_relative zeros_like list asarray rotation zip print reshape extend regions misty_locations_relative dimensions position voxels_u32 localize ToDatetime data loads DataFrame str list name add append sort_values range format worker_id MessageToJson set responses items remove print len print walk join basename name print dirname makedirs replace startswith pi RegionRequest index_for_angle extend print walk print walk join basename name print dirname makedirs join sorted normalize_path extend add set ListFoldersResponse sorted FindTasksResponse responses MergeFrom normalize_path extend tasks paths return_responses names sorted normalize_path extend add set FindTaskNamesResponse paths names join get_or_create_struct int snapshot list NamedTask MergeFrom asarray reshape msg_to_struct tolist extend add shape get_or_create_struct SubmitTasks MergeFrom msg_to_struct extend Clear Heatmap asarray reshape tolist extend data array set_event_loop get_lrpc TrackerService add_to_event_loop TaskService run_forever reshape arange add_heatmap position add_test_heatmap result namedtuple show GrammarGeneratedData draw_voxels randint shape_names len join copy str save load str format print gen_example example_to_img 
hash_example save range cache_example int list asarray tuple size new inferno shape paste array range mode imshow figure zeros_like draw_shapes list arange grid copy imshow title figure draw_shapes xticks range yticks word_tokenize replace itertuples TFRecordWriter text write SerializeToString get_example rename append uncache_example read_csv get_event_loop run_forever call_soon ndarray isinstance _VarintSize bytes write _VarintSize _EncodeVarint tobytes len _DecodeVarint frombuffer array get array setdefault _Modified items list property Clear Parse dumps MessageToJson append values indices zip append max enumerate scatter_update int placeholder assign placeholder shape cast int32 reduce_sum reduce_sum shape cast int32 tile expand_dims range reverse_sequence update deepcopy list items isinstance run reduce_sum identity dot unit_vector identity squeeze eig array cos identity dot sin unit_vector array diag T squeeze eig atan2 trace array dot unit_vector identity diag squeeze eig array trace dot unit_vector array identity T squeeze eig dot array len dot unit_vector tan identity T vector_norm squeeze eig identity cross dot atan array T vector_norm asin inv cos copy atan2 dot any negative zeros array dot euler_matrix identity radians cos sin svd T concatenate inv identity quaternion_matrix roll dot eigh pinv vstack sum array identity sqrt atan2 empty cos sin array cos vector_norm dot array outer eigh trace negative empty array negative array negative array pi dot sin negative unit_vector acos sqrt rand pi sqrt negative array vector_norm dot arcball_constrain_to_axis array atleast_1d sqrt sum array atleast_1d sqrt expand_dims sum array sum array dot identity array import_module data int list snapshot setBlock saveChanges print reshape createChunk regions chunkPositions game dimensions deleteChunk position range | # voxelworld This repository hosts code and data to accompany the paper [Where is Misty? 
Interpreting Spatial Descriptors by Modeling Regions in Space][1], which appeared in *EMNLP 2017*. Our dataset, model code, and trained parameters are available in the "releases" section of this repository. We provide two archives for download: - [Model Release][2]: this contains code for training and evaluating our model, as well as a copy of our *pre-processed* dataset. The files in this archive should suffice to replicate the results we reported in our paper. - [Raw Dataset Release][3]: this contains our *raw* dataset and our pre-processing script. [1]: http://nlp.cs.berkeley.edu/pubs/Kitaev-Klein_2017_WhereIsMisty_paper.pdf [2]: https://github.com/nikitakit/voxelworld/releases/download/model/misty-model.zip [3]: https://github.com/nikitakit/voxelworld/releases/download/model/misty-raw-dataset.zip The code checked in to version control mostly relates to the data collection and visualization required for our work. This includes an in-browser renderer for voxel scenes (which we used for both data collection and for visualizing model behavior during development), our server for collecting crowdsourced data, and various other files. If you use our dataset or code in your research, please cite our paper:
nikolamilosevic86/SerbianStemmer | ['information retrieval'] | ['Stemmer for Serbian language'] | StemmerByNikola.py stem_arr stem_str lower word_tokenize replace lower word_tokenize replace | # SerbianStemmer Stemmer for the Serbian language, created for my master's thesis and rewritten in Python. It is an improvement of Kešelj and Šipka's stemmer. ## Reference Milosevic, N. (2012). [Stemmer for Serbian language](http://arxiv.org/abs/1209.4471). arXiv preprint arXiv:1209.4471. | 3,138
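The SerbianStemmer entry above exposes `stem_str`/`stem_arr`, which lowercase, tokenize, and strip known suffixes. A minimal illustrative sketch of that greedy suffix-stripping approach, with an invented two-rule table — the actual Serbian rule set in `StemmerByNikola.py` is much larger:

```python
# Toy suffix-replacement rules; NOT the real Serbian rule set.
SUFFIXES = {"ovima": "", "ama": "a"}

def stem_word(word):
    """Greedy longest-suffix replacement, as in suffix-stripping stemmers."""
    word = word.lower()
    for suf in sorted(SUFFIXES, key=len, reverse=True):
        # only strip if a non-trivial stem remains
        if word.endswith(suf) and len(word) > len(suf) + 1:
            return word[: -len(suf)] + SUFFIXES[suf]
    return word

def stem_str(text):
    """Stem every whitespace token in a string (sketch of the repo's stem_str)."""
    return " ".join(stem_word(w) for w in text.split())

print(stem_str("Gradovima knjigama"))  # → grad knjiga
```

The repository's version additionally uses NLTK's `word_tokenize` for tokenization rather than a plain whitespace split.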
nikolamilosevic86/TabInOut | ['information retrieval', 'table detection'] | ['Marvin: Semantic annotation using multiple knowledge sources', 'A framework for information extraction from tables in biomedical literature'] | QueryDBClassESG.py Wizard/TkGUIFirstScreen.py MLTest.py DrugDrugInteraction/CSVTransformer.py DrugDrugInteraction/DDIExtractionUsingHeaderCategories.py GetBMI.py Wizard/Process_Data.py Wizard/RuleClasses_PatternValueSem.py Wizard/DatabaseSettings.py ExtractIEAtributeToCSV.py CheckInclEx.py Scripts/GetCO2e.py Wizard/RuleClasses_Rule.py Data/Cell.py GetGender.py DrugDrugInteraction/DDIExtraction1.py GetNumPatients.py Wizard/SimpleRuleOps.py CreateMLDatasetPatientNum.py ExtractContentFromAsthmaPapers.py Data/Table.py Tests.py Wizard/RuleClasses_Pattern.py Scripts/GetAsthma_COPD_articles.py Wizard/Cell.py Wizard/FileManipulationHelper.py DrugDrugInteraction/DDIExtractionWithEffect.py DrugDrugInteraction/DDIExtractionWithEffect2.py getAge.py DrugDrugInteraction/AnalyzePattern.py Data/Article.py AnalyzePattern.py DrugDrugInteraction/DDIExtractionWithEffect3.py Wizard/Annotation.py GetAdverseEvents.py QueryDBClass.py DrugDrugInteraction/Annotation.py DrugDrugInteraction/Cell.py Wizard/QueryDBClass.py Data/GetAge.py TableLists.py Wizard/EditRule.py TableClusters.py Wizard/BlackAndWhiteList.py Balance.py Wizard/RuleClasses_LoadRulesForProcessingClass.py CreateTableDataset.py Wizard/SyntacticRules.py ExtractUsingMetaMap.py DrugDrugInteraction/QueryDBClass.py GetRange GetMean processTable QueryDBCalss QueryDBCalssESG Article Cell Table GetRange GetMean Annotation Cell getCellsByTableID getCellsByTableID getCellsByTableID QueryDBCalss getContentType Annotation SemanticListWindow WhiteListWindowEdit SaveBlackList SaveWhiteListSemanticEdit SaveWhiteList WhiteListWindow SaveWhiteListEdit createSemanticWhiteList BlackListWindow SaveWhiteListSemantic SemanticListWindowEdit Cell ConfigureDatabaseScreen ClearDBTables SaveDBSettings EditRule AddEditRule SaveRule 
SaveRuleEdit SaveCueListSem loadBlackList SaveSyntacticRules readProjects CreateFoderIfNotExist loadWhiteList CreateProjectCfgFileIfNotExist LoadRules MakeRuleCFGFile LoadDBConfigFile SaveToConfigFile SaveBlackList loadRules SaveCueList loadRuleConfig loadVariables CreateFolderStructure CheckSemTermListUsingRegex2 getSuperRowCells getHeaderCells getCellsByTableID getStubCells ProcessDataBase GetExtractedData CheckBListUsingRegex CheckUnits CheckSemTermListUsingRegex QueryDBCalss LoadRulesForProcessing LoadSyntacticRoles Pattern PatternValueSem Rule RemoveRule AddEditVariable MoveRuleUp SetRuleName AddVariable loadVariableConfig EnableLB MoveRuleDown EnableLEntity RefreshDatabaseData EditSintacticRules SaveSintacticRules LoadRulesCfGMainScreen askopenfilename MakeChangesToSyntacticRules ProcessDataV MakeWorkingScreen on_closing ShowChoice LoadFirstCfGScreen FinishFirstScreen LoadConfigScreen search group search group fetchall str cursor print connect execute float Cell Annotation idCell getCellAnnotation getTableCellsWithTableArticleData getCellRole append lower len withdraw split pack format Toplevel grid geometry Text title Frame pack format Toplevel grid geometry CheckList IntVar title Text Frame autosetmode createSemanticWhiteList StringVar print setstatus add grid autosetmode createSemanticWhiteList StringVar loadWhiteList loadBlackList str geometry title pack format replace Toplevel insert set IntVar Frame setstatus int CheckList Text split pack format Toplevel grid geometry IntVar title Text Frame StringVar loadBlackList pack int format str replace Toplevel insert grid geometry set IntVar title Text Frame split StringVar loadWhiteList get SaveCueListSem CreateFoderIfNotExist LoadRulesCfGMainScreen withdraw append split get SaveCueListSem CreateFoderIfNotExist withdraw append split get CreateFoderIfNotExist LoadRulesCfGMainScreen withdraw SaveCueList split get CreateFoderIfNotExist withdraw SaveCueList split CreateAdditionalTables LoadDBConfigFile QueryDBCalss 
ClearCreatedTables replace Toplevel grid set title StringVar split get withdraw SaveToConfigFile pack format Toplevel QueryDBCalss insert OptionMenu LoadDBConfigFile grid geometry loadVariableConfig GetPragmaticClasses set IntVar title Frame StringVar CreateFoderIfNotExist replace MakeRuleCFGFile withdraw set StringVar get CreateFoderIfNotExist SemanticListWindow MakeRuleCFGFile insert withdraw WhiteListWindow LoadDBConfigFile grid StringVar Label geometry title loadRuleConfig get pack format replace Toplevel QueryDBCalss insert OptionMenu set Entry IntVar Frame GetPragmaticClasses makedirs CreateFoderIfNotExist close write open close write open readlines replace split open get str close write open get str close write open close write open get str replace write close open readlines replace open readlines replace open readlines replace split open readlines open close write open start search replace start search replace search replace search replace append HeaderId append StubId getHeaderCells append SuperRowId getHeaderCells PossibleUnits idPMC wl_look_header getHeaderCells LoadDBConfigFile BlackList bl_look_superrow search Annotations wl_look_superrow Stub position bl_look_data getRelevantTables Super_row ClassName SuperRowId CheckSemTermListUsingRegex str Header name HeaderId Annotation SemTermList append DefaultUnit replace QueryDBCalss UNICODE PragmaticClass group idTable PatternList wl_look_stub wl_look_data Content SemanticValues CheckUnits idArticle bl_look_stub bl_look_header print getCellsByTableID getStubCells Semantics tableOrder WhiteList RuleName StubId SaveExtracted LoadDBConfigFile getExtracted QueryDBCalss int PatternValueSem print readlines Pattern open append split loadBlackList replace LoadSyntacticRoles loadRules Rule split loadRuleConfig append loadVariables loadWhiteList configure configure rmtree parent delete item insert delete get delete insert Toplevel grid set title StringVar get str CreateFoderIfNotExist replace insert withdraw write 
close open readlines replace split open SetRuleName GetExtractedData str size insert ProcessDataBase IntVar set pack Label format Toplevel grid geometry LoadRulesForProcessing set IntVar title Button Frame StringVar set get pack SaveSintacticRules format LoadRules str Toplevel insert withdraw geometry grid Text title Frame StringVar append pack str format LoadRules Toplevel insert grid geometry set Text title IntVar Frame StringVar loadRules withdraw SaveSyntacticRules pack Label format Toplevel insert size loadRules geometry set title Button Frame StringVar loadVariables Treeview protocol print get get CreateFoderIfNotExist CreateProjectCfgFileIfNotExist withdraw LoadFirstCfGScreen pack Label format readProjects configure Toplevel insert geometry select set Entry title Listbox Frame Button StringVar Radiobutton protocol Label pack format Thread Toplevel print size withdraw geometry LoadRulesForProcessing IntVar title Listbox Button start protocol destroy | # TabInOut (Table Information Out) - Framework for information extraction from tables TabInOut is a framework for information extraction from tables and a GUI tool for generating information extraction rules from the tables in literature. The tool is dependent on [TableDisentangler](https://github.com/nikolamilosevic86/TableAnnotator) and actually presents the second step in the extraction pipeline. Firstly, tables are processed, disentangled and annotated using Tabledisentangler tool. TabInOut uses database created by TableAnnotator, uses all the functional and structural annotation performed by TableDisentangler in order to extract information from the tables. It also creates additional table in the mySQL database where it stores the extracted information. 
The framework consists of:
- A methodology and recipe for information extraction from tables
- A language for describing the syntax of cell content and assigning values to parts of the cell content
- A GUI wizard that makes describing an information extraction task easy

For more information, view the project's GitHub Wiki. We are currently working on a paper that will present the methodology of TabInOut; it builds on a case study and a hybrid approach already presented at the BIOSTEC and BelBi conferences. You can read the relevant papers we published below. The project is part of my PhD project, funded by EPSRC and AstraZeneca. The main application (the Wizard) is located under the Wizard folder. You can run it by starting the TkGUIFirstScreen.py file. Alternatively, you can start the TabInOut wizard by running TableInOutStarter.sh from the main directory. | 3,139
niluthpol/multimodal_vtt | ['video text retrieval', 'video retrieval'] | ['Learning Joint Embedding with Multimodal Cues for Cross-Modal Video-Text Retrieval'] | train_vtt.py evaluation.py test_weighted.py data_resnet.py vocab/vocab.py data_i3d_audio.py model.py VTTDataset get_loaders get_test_loader get_vtt_loader collate_fn VTTDataset get_loaders get_test_loader get_vtt_loader collate_fn i2t encode_data AverageMeter LogCollector t2i_single t2i evalrank i2t_single validate accuracy save_checkpoint adjust_learning_rate main train main build_vocab from_txt Vocabulary list sort stack zip long enumerate VTTDataset DataLoader data_path join endswith get_vtt_loader data_path join endswith get_vtt_loader update time format forward_loss logging AverageMeter copy forward_emb LogCollector zeros val_start enumerate len load join vocab_path workers encode_data get_test_loader2 i2t batch_size print get_test_loader1 len crop_size t2i load_state_dict save data_name VSE open median print reshape flatten mean floor append zeros range len median T dot shape mean floor zeros array range len median print reshape min order_sim flatten shape mean floor append zeros numpy cuda range len median T min order_sim dot shape mean floor cuda zeros numpy array range len workers vocab_path validate batch_size adjust_learning_rate save_checkpoint ArgumentParser data_name max open basicConfig crop_size load_state_dict logger_name parse_args range configure get_loaders format resume num_epochs VSE optimizer load join print add_argument isfile train len update val time format validate train_start AverageMeter train_emb LogCollector log_value tb_log info enumerate len encode_data log_step log_value t2i_single info i2t_single copyfile save lr_update param_groups learning_rate topk size t eq mul_ expand_as append sum max update join word_tokenize decode Vocabulary print add_word from_txt Counter enumerate build_vocab | ### Learning Joint Embedding with Multimodal Cues for Cross-Modal Video-Text 
Retrieval
Code to evaluate "Learning Joint Embedding with Multimodal Cues for Cross-Modal Video-Text Retrieval" (Mithun, Niluthpol C and Li, Juncheng and Metze, Florian and Roy-Chowdhury, Amit K) 2018
### Dependencies
This code is written in python. The necessary packages are below:
* Python 2.7
* PyTorch (>0.4)
* Tensorboard
* NLTK Punkt Sentence Tokenizer
### Evaluate Models
-- Download data and models from https://drive.google.com/drive/folders/1t3MwiCR72HDo6XiPvWSZpenqv4CGjnKl | 3,140
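The evaluation helpers in the function list above (`evalrank`, `i2t`, `t2i`) report Recall@K and median rank over similarities between video and text embeddings. A generic sketch of that metric — illustrative only, not the repo's implementation — looks like this:

```python
import numpy as np

def recall_at_k(video_emb, text_emb, k=5):
    """Fraction of text queries whose matching video (same row index) ranks in
    the top-k by cosine similarity, plus the median rank (1-based).
    Rows of both matrices are embeddings."""
    v = video_emb / np.linalg.norm(video_emb, axis=1, keepdims=True)
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    sims = t @ v.T                       # text-to-video similarity matrix
    order = np.argsort(-sims, axis=1)    # best match first for each query
    # rank of the ground-truth video (the diagonal entry) per query
    ranks = np.array([np.where(order[i] == i)[0][0] for i in range(len(order))])
    return float(np.mean(ranks < k)), float(np.median(ranks) + 1)

# identical embeddings retrieve perfectly
emb = np.eye(4)
r, med = recall_at_k(emb, emb, k=1)
```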
niluthpol/weak_supervised_video_moment | ['moment retrieval'] | ['Weakly Supervised Video Moment Retrieval From Text Queries'] | test_charades.py evaluation_charades.py model_charades.py data_charades.py vocab.py get_loaders Charades get_charades_loader get_test_loader collate_fn encode_data AverageMeter LogCollector t2i evalrank cIoU cosine_sim EncoderImagePrecomp EncoderImageFull EncoderImage l2norm order_sim EncoderText ContrastiveLoss VSE cross_attention Vocabulary list sort long zip zeros max enumerate len Charades DataLoader data_path join get_charades_loader data_path join get_charades_loader update time format forward_loss logging size AverageMeter squeeze copy forward_emb LogCollector argsort zeros dataset val_start range enumerate len min max workers vocab_path batch_size get_test_loader data_name open str crop_size shape load_state_dict VSE load join encode_data print data_path t2i read_csv len int print floor cIoU zeros float array range sqrt endswith EncoderImagePrecomp EncoderImageFull bmm unsqueeze size expand | ### Weakly Supervised Video Moment Retrieval from Text Queries Code to evaluate "Weakly Supervised Video Moment Retrieval from Text Queries" (Mithun, Niluthpol C and Paul, Sujoy and Roy-Chowdhury, Amit K) 2019 ### Dependencies This code is written in python3. The necessary packages are below: * PyTorch (>0.4) and torchvision * NumPy * pycocotools * pandas * matplotlib * NLTK Punkt Sentence Tokenizer | 3,141 |
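The `cIoU` helper in the file dump above scores a predicted temporal segment against the ground-truth moment. Temporal IoU with a recall threshold can be sketched like this (a generic version; the repo's exact definition may differ, e.g. in how the union is computed):

```python
def temporal_iou(pred, gt):
    """IoU between two temporal segments given as (start, end) in seconds."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = max(pred[1], gt[1]) - min(pred[0], gt[0])
    return inter / union if union > 0 else 0.0

def recall_at_iou(preds, gts, threshold=0.5):
    """Fraction of predictions overlapping their ground truth above threshold."""
    hits = sum(temporal_iou(p, g) >= threshold for p, g in zip(preds, gts))
    return hits / len(preds)
```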
nimadehmamy/Understanding-GCN | ['graph generation'] | ['Understanding the Representation Power of Graph Neural Networks in Learning Graph Topology'] | src/GraphConvNet.py src/run_gcn_multi_v1.py src/run_gcn_classification_v2.py src/run_gcn_classification_v1.py src/run_gcn_multi.py Graph_Operators GCN_List MultiGCN GCN_module EpochHistory mat_pow_batch GCN range matmul Tensor isinstance | # Understanding the Representation Power of Graph Neural Networks in Learning Graph Topology
Code for the NeurIPS 2019 paper titled [Understanding the Power of Graph Neural Networks in Learning Graph Topology](https://arxiv.org/abs/1907.05008). Code is written with Python 3.6.5.
### Poster, Slides and video
The poster and slides can be found in ``doc/``. The video can be found [here](https://www.youtube.com/watch?v=kk_x0wOvZYQ).
### Set up
1. `git clone https://github.com/nimadehmamy/Understanding-GCN.git`
2. `pip install -r requirements.txt` | 3,142
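The `mat_pow_batch` entry in the source listing suggests the model works with powers of the graph operator (e.g. the adjacency matrix), in line with the paper's graph-moment analysis. A NumPy sketch of batched matrix powers — the repo itself uses PyTorch tensors, but the idea is the same:

```python
import numpy as np

def mat_pow_batch(A, p):
    """Return A**p for a batch of square matrices of shape (batch, n, n).
    p = 0 yields the identity for every batch element."""
    out = np.broadcast_to(np.eye(A.shape[-1]), A.shape).copy()
    for _ in range(p):
        out = out @ A   # `@` applies matmul over the leading batch axis
    return out

# powers of a path-graph adjacency count walks of length p
A = np.array([[[0.0, 1.0], [1.0, 0.0]]])
walks2 = mat_pow_batch(A, 2)   # two-step walks return to the start node
```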
nisargjhaveri/news-access | ['automatic post editing', 'machine translation'] | ['A Workbench for Rapid Generation of Cross-Lingual Summaries'] | nltk-binding/rpc-server.py | # news-access
This was published as part of the following paper. Cite this paper if you use this.

Nisarg Jhaveri, Manish Gupta, and Vasudeva Varma. 2018. A Workbench for Rapid Generation of Cross-Lingual Summaries. In *Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC-2018)*.

```
@inproceedings{jhaveri2018workbench,
  title={A Workbench for Rapid Generation of Cross-Lingual Summaries},
  author={Jhaveri, Nisarg and Gupta, Manish and Varma, Vasudeva},
  booktitle={Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC-2018)},
  year={2018},
  isbn = {979-10-95546-00-9},
}
```
| 3,143
nish03/FuseVis | ['autonomous driving'] | ['FuseVis: Interpreting neural networks for image fusion using per-pixel saliency visualization'] | FuseVis/networks.py FuseVis/preprocess.py FuseVis/_init_.py FuseVis/user_interface.py DeepFuse FusionLayer ConvLayer_DeepFuse FunFuseAn Weighted_Averaging MaskNet DeepPedestrian test_input2 test_input1 ButtonObject start_mouseover load_model join glob getcwd float min to natsorted float32 expand_dims zeros imread listdir max range len join glob getcwd float min to natsorted float32 expand_dims zeros imread listdir max range len bind grid Scale set mouseover_Callback DoubleVar ButtonObject model0 delete setInput flatten create_text ColorbarBase device forward max model4 ylabel ylim scatter savefig title to imread create_image get plot close ScalarMappable add_axes eval PhotoImage model1 get_cmap xlim zeros load xlabel File min start_mouseover figure model6 PowerNorm numpy array | # FuseVis: Interpreting neural networks for image fusion using per-pixel saliency visualization The project presents a visualization tool to interpret neural networks for image fusion. The tool, named as **FuseVis**, can be used by end-user to compute per-pixel saliency maps that examine the influence of the input image pixels on each pixel of the fused image. The tool can also be adapted to interpret any deep neural network that involves image processing. The work is based on the MDPI journal [paper](https://www.mdpi.com/2073-431X/9/4/98). A video on how to use the tool can be downloaded from [here]( https://tu-dresden.de/ing/informatik/smt/cgv/ressourcen/dateien/mitarbeiter/nishant-kumar/FuseVis_teaser.mp4).  The tool performs the following key tasks: * Fast computation of per-pixel jacobian based saliency maps for the fused image with respect to input image pairs. * Visualize neural networks by considering the backpropagation heuristics using an interactive user interface that helps these networks to be more transparent in a real-time setup. 
**Note**: Please cite the paper if you are using this code in your research.
## Prerequisites
* Python 2.7 | 3,144
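The per-pixel saliency maps described above are Jacobians of each fused-image pixel with respect to the input images (the tool computes them with backpropagation). A finite-difference sketch on a toy weighted-average fusion rule — not the repo's networks — shows the idea:

```python
import numpy as np

def fuse(a, b, w=0.7):
    """Toy fusion rule: per-pixel weighted average of two images."""
    return w * a + (1.0 - w) * b

def saliency_of_output_pixel(a, b, i, j, eps=1e-4):
    """Finite-difference Jacobian row: d fused[i, j] / d a, one value per
    input pixel of image a (the map w.r.t. image b is analogous)."""
    grad = np.zeros_like(a)
    base = fuse(a, b)[i, j]
    for r in range(a.shape[0]):
        for c in range(a.shape[1]):
            ap = a.copy()
            ap[r, c] += eps
            grad[r, c] = (fuse(ap, b)[i, j] - base) / eps
    return grad

a = np.random.rand(4, 4)
b = np.random.rand(4, 4)
g = saliency_of_output_pixel(a, b, 1, 2)
# for weighted averaging the Jacobian is diagonal: only pixel (1, 2) matters
```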
nishantgurunath/source_separation | ['speech recognition', 'speech synthesis'] | ['Disentangling Speech and Non-Speech Components for Building Robust Acoustic Models from Found Data'] | run_eval.py multilingual_source_separation_main_unsupervised.py run_train.py multilingual_source_separation_unsupervised.py model.py rpca.py multilingual_speech_model SpeechModelDataLoader model_run kl_anneal_function loss_fn librosa_eval multilingual_speech_model soft_mask rpca speech Se J tf_mask model_eval SpeechModelDataLoader model_run kl_anneal_function exp pow sum criterion_mse load write_wav T view model stft speech shape magphase numpy istft ones shape svd norm time concatenate diagflat dot shape sqrt Se J magphase zeros max minimum exp abs sqrt soft_mask rpca load T write_wav view model stft speech shape magphase numpy istft | # Separabl ## Title: Disentangling Speech and Non-Speech Components for Building Robust Acoustic Models from Found Data ## Datasets: - Wilderness - Hub4 - How2 - Youtube (Gordon Ramsay) - https://www.youtube.com/watch?v=U9DyHthJ6LA&list=LLIEv3lZ_tNXHzL3ox-_uUGQ ## Abstract: In order to build language technologies for majority of the languages, it is important to leverage the resources available in public domain on the internet - commonly referred to as `Found Data'. However, such data is characterized by the presence of non-standard, non-trivial variations. For instance, speech resources found on the internet have non-speech content, such as music. Therefore, speech recognition and speech synthesis models need to be robust to such variations. In this work, we present an analysis to show that it is important to disentangle the latent causal factors of variation in the original data to accomplish these tasks. Based on this, we present approaches to disentangle such variations from the data using Latent Stochastic Models. 
Specifically, we present a method to split the latent prior space into continuous representations of dominant speech modes present in the magnitude spectra of audio signals. We propose a completely unsupervised approach using multinode latent space variational autoencoders (VAE). We show that the constraints on the latent space of a VAE can in fact be used to separate speech and music, independent of the language of the speech. This paper also analytically presents the requirement on the number of latent variables for the task based on the distribution of the speech data.
## Usage:
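The `rpca` and `soft_mask` routines in the file dump above point at the classic RPCA baseline: decompose the mixture magnitude spectrogram into a low-rank part (music/background) and a sparse part (speech), then build a Wiener-like soft mask. A sketch of the masking step, under that assumption (illustrative, not the repo's exact code):

```python
import numpy as np

def soft_mask(sparse_mag, low_rank_mag, p=2):
    """Wiener-like soft mask from sparse (speech) and low-rank (music)
    magnitude estimates; values lie in [0, 1]."""
    num = np.abs(sparse_mag) ** p
    den = num + np.abs(low_rank_mag) ** p
    # guard against all-zero bins, where the mask is undefined
    return np.where(den > 0, num / np.maximum(den, 1e-12), 0.0)

# applying the mask to the mixture magnitude yields the speech estimate
mixture = np.array([[1.0, 2.0], [3.0, 0.0]])
mask = soft_mask(np.array([[1.0, 0.0], [3.0, 0.0]]),
                 np.array([[0.0, 2.0], [1.0, 0.0]]))
speech = mask * mixture
```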
nitarshan/robust-generalization-measures | ['generalization bounds'] | ['In Search of Robust Measures of Generalization'] | experiments/coupled_networks/supp_figure_envs_remaining.py experiments/coupled_networks/figure_triangle_cdf.py experiments/coupled_networks/supp_figure_monte_carlo_ablation.py experiments/single_network/single_network.py experiments/coupled_networks/supp_figure_easier_envs.py data/generation/experiment.py data/generation/dataset_helpers.py data/generation/logs.py data/generation/measures.py experiments/coupled_networks/create_environments.py experiments/coupled_networks/common.py data/generation/experiment_config.py data/generation/train.py experiments/single_network/plot_results.py experiments/coupled_networks/figure_cdf_all_measures.py experiments/single_network/export_results.py data/generation/models.py process_data get_dataloaders SVHN CIFAR10 Experiment ComplexityType Config Verbosity DatasetSubsetType DatasetType HParams EvaluationMetrics State OptimizerType Printer BaseLogger WandbLogger _pacbayes_sigma _perturbed_model _reparam get_all_measures NiNBlock NiN ExperimentBaseModel dump_results average_over_repeats get_complexity_measures get_hps hoeffding_weight sign_error load_data pretty_measure create_environments make_figure get_all_losses triangle_cdf_plots_get_losses make_figure make_figure get_environment_losses get_complexity_losses_per_hp make_figure subtract_baseline get_model_results get_best_by_measure preprocess_columns mean_mse get_complexity_measures combine_env penalty MLP estimator load_data str2bool dataset DataLoader transpose choice mean from_numpy index_select tensor std len deepcopy in_place_reparam deepcopy device mean device manual_seed abs range get_weights_only sum norm print get_vec_params _path_norm _reparam _pacbayes_sigma _pacbayes_mag_bound _pacbayes_bound device tensor dataset abs get_reshaped_weights log cat len clean_data round train_dataset_size read_csv float sign list defaultdict dump 
get_complexity_measures set dict get_hps hoeffding_weight tqdm load_data zip open abs range len list index warn logical_or array unique zip isclose enumerate subplots arange set_yticklabels linspace tick_params heatmap open list tolist axvline scatter savefig legend sum plot get_all_losses set_xticklabels hstack add_axes zip keys enumerate load join remove set_size_inches print invert_yaxis set_yticks set_ylabel set_xticks zeros set_ylim len list index warn logical_or array unique zip isclose enumerate set_visible axhline max percentile set_xlabel mean unique isinstance subplots_adjust triangle_cdf_plots_get_losses groupby list defaultdict inf product tuple keys difference logical_or array unique isclose values get_environment_losses gca gcf load_data load join list get_all_losses print hstack keys open get_complexity_losses_per_hp ylabel ylim clf barplot abs groupby reset_index apply sqrt risk_max squeeze mean_mse requires_grad_ join list items transpose squeeze stack zip append to_numpy float enumerate cat mean_mse model zero_grad SGD numpy save tensor log values list ones Adam range detach update mean stack init var join backward penalty parameters nonnegative_weights_only Tensor step steps | # In Search of Robust Measures of Generalization [](https://arxiv.org/abs/2010.11924) [](https://opensource.org/licenses/Apache-2.0) [](https://www.python.org) **Gintare Karolina Dziugaite**, **Alexandre Drouin**, Brady Neal, Nitarshan Rajkumar, Ethan Caballero, Linbo Wang, Ioannis Mitliagkas, Daniel M. Roy >One of the principal scientific challenges in deep learning is explaining generalization, i.e., why the particular way the community now trains networks to achieve small training error also leads to small error on held-out data from the same population. It is widely appreciated that some worst-case theories -- such as those based on the VC dimension of the class of predictors induced by modern neural network architectures -- are unable to explain empirical performance. 
A large volume of work aims to close this gap, primarily by developing bounds on generalization error, optimization error, and excess risk. When evaluated empirically, however, most of these bounds are numerically vacuous. Focusing on generalization bounds, this work addresses the question of how to evaluate such bounds empirically. Jiang et al. (2020) recently described a large-scale empirical study aimed at uncovering potential causal relationships between bounds/measures and generalization. Building on their study, we highlight where their proposed methods can obscure failures and successes of generalization measures in explaining generalization. We argue that generalization measures should instead be evaluated within the framework of distributional robustness.



## Directory Structure
```
├── experiments
```
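The `sign_error` and `hoeffding_weight` helpers in the source listing implement the paper's robust sign-error evaluation across environments. Stripped of the weighting and environment structure, the core pairwise agreement check looks roughly like this (a simplification, not the repo's exact metric):

```python
def sign_error(measure_vals, gaps):
    """Fraction of model pairs whose ordering under a complexity measure
    disagrees with the ordering of their generalization gaps.
    Ties in the measure count as errors; ties in the gap are skipped."""
    pairs = disagree = 0
    n = len(gaps)
    for i in range(n):
        for j in range(i + 1, n):
            dm = measure_vals[i] - measure_vals[j]
            dg = gaps[i] - gaps[j]
            if dg == 0:
                continue  # uninformative pair
            pairs += 1
            if dm * dg <= 0:
                disagree += 1
    return disagree / pairs if pairs else 0.0
```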
nithyadurai87/pottan-ocr-tamil | ['optical character recognition', 'scene text recognition'] | ['An End-to-End Trainable Neural Network for Image-based Sequence Recognition and Its Application to Scene Text Recognition'] | ocropy/setup.py pottan_ocr/model.py ocropy/ocrolib/common.py tools/warp-ctc-pytorch_bindings/pytorch_binding/tests/test_gpu.py ocropy/ocrolib/lstm.py ocropy/ocrolib/exceptions.py ocropy/ocrolib/psegutils.py ocropy/ocrolib/extras/fgen.py pottan_ocr/data_gen.py tools/warp-ctc-pytorch_bindings/pytorch_binding/warpctc_pytorch/__init__.py ocropy/ocrolib/lineest.py ocropy/ocrolib/hocr.py ocropy/ocrolib/utils.py ocropy/ocrolib/sl.py ocropy/ocrolib/__init__.py tools/warp-ctc-pytorch_bindings/pytorch_binding/setup.py ocropy/ocrolib/edist.py ocropy/ocrolib/lang.py pottan_ocr/utils.py ocropy/ocrolib/toplevel.py ocropy/ocrolib/chars.py misc/keras_model.py ocropy/ocrolib/extras/cairoextras.py pottan_ocr/ocr.py ocropy/ocrolib/morph.py pottan_ocr/string_converter.py tools/warp-ctc-pytorch_bindings/pytorch_binding/tests/test_cpu.py ocropy/ocrolib/ligatures.py tools/torch_to_keras.py pottan_ocr/dataset.py ocropy/ocrolib/default.py pottan_ocr/train.py KerasCrnn requote_fancy requote chist allsplitext midrange normalize_text ustrg2unicode warn write_image_gray RegionExtractor norm_max array2pil load_object binarize_range parallel_map save_object check_valid_class_label isbytearray plotgrid MovingStats caller quick_check_line_components iulib_page_iterator remove_noise base gt_explode read_line_segmentation read_image_gray project_text expand_args finddir rgb2int make_seg_black write_text_simple showrgb pil2array isintarray write_line_segmentation findfile int2rgb number_of_processors read_image_binary fvariant showgrid Record glob_all write_text unpickle_find_global set_params ocropus_find_file quick_check_page_components isintegerarray pad_by gt_implode read_text testset read_page_segmentation write_page_segmentation make_seg_white die warn_once obinfo 
isfloatarray write_image_binary getlocal xlevenshtein levenshtein Unimplemented FileNotFound BadClassLabel Warning Internal summary BadImage RecognitionError BadInput OcropusException footer header size_category common_ligatures LigatureTable scale_to_h CenterNormalizer hprime RangeError randu log_mul Softmax gfunc log_add ffunc Parallel rownorm make_target ctc_align_targets MLP1 check_nan ascii_codec translate_back Network Stacked forwardbackward SeqRecognizer MLP getstates_for_display Codec BIDILSTM sumouter Logreg add_training_info prepare_line hfunc forward_py normalize_nfkc gprime ocropus_codec sigmoid fprime forward_algorithm Reversed LSTM translate_back0 LSTM1 backward_py rg_closing select_regions keep_marked remove_marked r_erosion renumber_labels_ordered rg_erosion renumber_labels rg_dilation rb_closing find_objects check_binary renumber_by_xcenter r_opening propagate_labels r_dilation r_closing propagate_labels_simple rb_dilation pyargsort showlabels label all_neighbors correspondences rb_erosion rb_opening ordered_by_xcenter rg_opening spread_labels extract reading_order estimate_scale B read_gray record compute_lines read_binary rgbshow show_lines topsort blackout_images binary_objects extract_masked compute_boxmap pad_image find dims volume area yoverlap mbox dim1 extend_to is_slices xoverlaps center0 yoverlaps center_in aspect pad center1 width cut union dim center height intersect raster_FIXME bounds start dim0 box empty xoverlap_rel math ycenter shift raster xoverlap stop yoverlap_rel xcenter DATASET failfunc makeargcheck WHITESEG checks DARK inttuple PAGE SEGMENTATION ANONNEG ABINARY DATASET_VRANK PAGEEXTRA uintpair BOOL ARANGE uinttuple checktype RECTANGLE AINT disabled RANGE CheckWarning ABYTE TDATASET ANY DATASET_SIZE NUMBER BLACKSEG DATASET_VSIZE DATASET_VRANGE ARANK AFLOAT tracing CHANNELS LINE PATCH ALL trace1 unchanged method LIGHT deprecated strc replacedby CheckError sumprod sumouter PycairoContext create_cairo_font_face_for_file 
pango_render_gray pango_render_string pango_families gauss_distort cairo_render_string gauss_degrade cairo_render_gray cairo_render_at TextDataset normalizeBatch normaizeImg renderText getTrainingTexts main DataGen threadInitializer processInThread BidirectionalLSTM CRNN loadImg findNHFromFile threadInitializer evalModel main encodeStrList decodeStrList encodeStr decodeStr val trainBatch loadData weights_init main saveModel showImg averager readYaml loadTrainedModel writeFile myOpen writeJson readJson readLines readFile layerNameFromParamKey Conv2d LSTM BatchNorm2d Linear test_medium test_empty_label test_simple test_medium test_empty_label test_simple CTCLoss _CTC _assert_no_grad Bidirectional Reshape Sequential add Dense ZeroPadding2D MaxPooling2D TimeDistributed convRelu LSTM compile str sub str sub replacements str normalize sub upper normalize_text sub normalize_text normalize_text tobytes fromstring mean pil2array isfloatarray open print array2pil array save isfloatarray clip pil2array amax open print array2pil array save zeros list shape copy copy rgb2int make_seg_black pil2array open array2pil make_seg_white int2rgb save rgb2int make_seg_black pil2array open array2pil make_seg_white int2rgb save read_image_gray dtype zeros shape exec print ocropus_find_file get imap_unordered Pool fun search ocropus_find_file getlocal split search glob sorted join curdir get_config_var getenv dirname normpath getfile append currentframe exists pardir allsplitext items list hasattr copy setattr _getframe getframeinfo caller write exit caller write caller write length at chr range str hasattr amin amax subplot ginput reshape min gray imshow clf ion range len imshow transpose minimum int subplot str yticks xlabel ylabel gray sqrt imshow title xticks range len enumerate split append minimum list label sum range list min range join arange minimum_filter array split append empty full range len int shape affine_transform eye array T vstack amax any zip sigmoid tanh tanh hfunc 
gfunc ffunc dot range hprime list hfunc sumouter gprime reversed dot fprime sumprod range Logreg Stacked Logreg LSTM Softmax Stacked Softmax Parallel Reversed LSTM Stacked zeros enumerate append argmax range amax len arange reshape tile label maximum_position amax log_mul arange log_add copy append range len forward_algorithm subplot T exp amax ginput maximum dot imshow clf figure log forwardbackward set isinstance check_binary r_erosion check_binary r_dilation zeros uniform_filter shape zeros uniform_filter shape rb_erosion rb_dilation r_erosion r_dilation imshow where reshape distance_transform_edt label ravel in1d unique keep_marked array unique correspondences T label zeros amax correspondences T label zeros amax find_objects argsort label zeros len array unique roll sorted arange unique zeros ravel amax len find_objects argsort zeros array amax enumerate len find_objects range array bytearray print pad_by length label_components copy unpack_rgb textImageProbabilities at bounding_boxes rectarray range fill_rect intarray label find_objects median sorted area shape binary_objects zeros zeros sorted binary_objects shape record find_objects append enumerate ones shape array shape affine_transform shift eye extract mask where maximum_filter pad_image amax center plot ginput print imshow title clf x_overlaps zeros above enumerate left_of visit zeros range len ravel nonzero center plot bounds len add_patch imshow shape clf ylim Rectangle append xlim range cla mean imread mean imread clip print transpose shape imshow zeros abs array range list slice start stop range len list dtype intersect bounds transpose shift dims start pad empty isinstance __name__ type_ callable isinstance isinstance isinstance sum GRAYSCALE1 unique zeros zeros get_font_face CDLL ctx FORMAT_A8 cairo_ft_font_face_create_for_ft_face c_void_p cairo_set_font_face ImageSurface Context FORMAT_ARGB32 set_source_rgb create_cairo_font_face_for_file get_data select_font_face max show_text move_to range 
Context bytearray set_font_face fill set_font_size ImageSurface int rectangle array len CairoContext FORMAT_ARGB32 get_context ImageSurface create_layout Context set_font_description FORMAT_ARGB32 set_text set_source_rgb get_data set_size max CairoContext rotate SCALE move_to create_layout range Context bytearray zoom show_layout FontDescription fill get_pixel_extents ImageSurface int rectangle set_markup array len int gaussian_filter distance_transform_edt min binary_erosion mean shape prod sum max binary_dilation list randn transpose shape meshgrid array range gaussian_filter FONT_SLANT_NORMAL FORMAT_ARGB32 FONT_WEIGHT_BOLD set_source_rgb create_cairo_font_face_for_file get_data select_font_face show_text move_to Context bytearray set_font_face fill set_font_size ImageSurface FONT_SLANT_OBLIQUE rectangle FONT_SLANT_ITALIC FONT_WEIGHT_NORMAL array set_font_description get_data set_size get_font_description resize font_description_from_string get_size FORMAT_A8 rotate width create_layout Context height create_context show_layout astype choice FontDescription get_pixel_extents ImageSurface int invert uint8 BILINEAR translate set_markup atan array readLines astype FloatTensor list stack zip __getitem__ TextDataset DataGen print output createDataset testencoding input testEncoding int size convert BILINEAR resize loadImg view print Variable unsqueeze IntTensor crnnModel max totalGlyphs nh eval loadTrainedModel CRNN stdout cpu_count writeFile image_paths Pool enumerate len load crnn cuda print list zip append decodeStr normal_ __name__ fill_ copy_ data decode batchSize IntTensor max crnn view add encode averager loadData size eval zip float enumerate criterion print Variable parameters len criterion backward Variable loadData size zero_grad IntTensor encode step crnn format print strftime outdir save state_dict val trainBatch add parameters reset saveModel train show imshow load list print keys cuda load_state_dict crnn set_weights dictToObj state_dict set_weights 
dictToObj state_dict set_weights transpose set_weights cpu_ctc fill_ print contiguous size IntTensor zeros sum cpu_ctc print contiguous size IntTensor zeros sum cpu_ctc print contiguous size IntTensor zeros sum gpu_ctc view cuda gpu_ctc view cuda gpu_ctc view cuda | [](https://gitter.im/pottan-ocr/Lobby?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge) # pottan-ocr A stupid OCR for malayalam language. It can be Easily configured to process any other languages with complex scripts ## Demo https://harish2704.github.io/pottan-demo/ ## Installation #### Clone the project ( This project has git submodules. So downloading zip file may not work easily ) ``` git clone https://github.com/harish2704/pottan-ocr cd pottan-ocr | 3,147 |
nixingyang/AdaptiveL2Regularization | ['person re identification'] | ['Adaptive L2 Regularization in Person Re-Identification'] | datasets/dukemtmc_reid.py applications/resnet_common.py datasets/msmt17.py image_augmentation/random_erasing.py evaluation/post_processing/re_ranking_ranklist.py metric_learning/triplet_hermans.py regularizers/adaptation.py applications/__init__.py utils/vis_utils.py callbacks/__init__.py evaluation/metrics/__init__.py datasets/market1501.py datasets/__init__.py utils/model_utils.py image_augmentation/__init__.py solution.py TrainDataSequence apply_stratifiedshufflesplit read_image_file init_model learning_rate_scheduler Evaluator TestDataSequence main apply_groupshufflesplit init_resnet HistoryLogger _load_accumulated_info load_DukeMTMC_reID _load_accumulated_info load_Market1501 _load_accumulated_info load_MSMT17 _get_root_folder_path _get_attribute_name_to_label_encoder_dict load_accumulated_info_of_dataset compute_CMC_mAP re_ranking RandomErasing apply_random_erasing RandomErasingImageAugmentor BaseImageAugmentor example cdist get_at_indices all_diffs batch_hard AdaptiveL1L2 InspectRegularizationFactors replicate_model specify_regularizers summarize_model visualize_model StratifiedShuffleSplit arange split next len arange GroupShuffleSplit split next len arange model_instantiation _add_objective_module specify_regularizers _apply_concatenation last_block_for_global_branch_model summarize_model submodel Model getattr append last_block_for_regional_branch_model Input replicate_model compile len imread COLOR_BGR2RGB cvtColor resize int exp min cos pi power max log validation_size workers epoch_num kernel_regularization_factor load_accumulated_info_of_dataset layers init_model compose_transforms concat HistoryLogger logical_not abspath testing_size save use_identity_balancing_in_training apply_groupshufflesplit values beta_regularization_factor sorted visualize_model argv TrainDataSequence gamma_regularization_factor len region_num 
InspectRegularizationFactors image_augmentor_name evaluate_testing_every_N_epochs pretrained_model_file_path format apply_stratifiedshufflesplit Evaluator use_data_augmentation_in_training test_on_batch bias_regularization_factor evaluate_validation_every_N_epochs load_weights keys LearningRateScheduler use_re_ranking steps_per_epoch compile join output_folder_path use_horizontal_flipping_in_evaluation print augmentation_num flag_values_dict fit use_data_augmentation_in_evaluation rmtree use_adaptive_l1_l2_regularizer ModelCheckpoint makedirs get_file items list ResNet Model load_weights stack_fn append Input join sorted int glob append DataFrame _load_accumulated_info _load_accumulated_info _load_accumulated_info concat columns LabelEncoder OrderedDict drop values fit format print load_function _get_root_folder_path _get_attribute_name_to_label_encoder_dict minimum exp zeros_like concatenate transpose astype float32 mean int32 unique append zeros sum max range len int copy sqrt uniform randint round array range read uint8 format COLOR_BGR2RGB print compose_transforms destroyAllWindows waitKey IMREAD_COLOR COLOR_RGB2BGR imshow imdecode RandomErasingImageAugmentor apply_augmentation frombuffer array cvtColor enumerate range get_weights Model clone_model set_weights get_weights _init_regularizers set_weights model_from_json format layers isinstance name print id summary format layers isinstance name plot_model print id |   # Adaptive L2 Regularization in Person Re-Identification [](https://paperswithcode.com/sota/person-re-identification-on-msmt17?p=adaptivereid-adaptive-l2-regularization-in) [](https://paperswithcode.com/sota/person-re-identification-on-market-1501?p=adaptivereid-adaptive-l2-regularization-in) [](https://paperswithcode.com/sota/person-re-identification-on-dukemtmc-reid?p=adaptivereid-adaptive-l2-regularization-in) ## Overview We introduce an adaptive L2 regularization mechanism in the setting of person re-identification. 
In the literature, it is common practice to use hand-picked regularization factors that remain constant throughout training. In contrast, the regularization factors in our proposed method are updated adaptively through backpropagation.
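The adaptive update described above can be illustrated with a minimal, framework-free sketch. This is only an illustration of the idea, not the repository's Keras `AdaptiveL1L2` implementation: the toy one-weight loss, the learning rate, and all variable names here are assumptions for demonstration.

```python
import math

# Toy gradient descent where the L2 factor itself is trainable.
# The factor is parameterized as lam = exp(log_lam) so it stays positive,
# and log_lam receives gradients alongside the model weight -- the core
# idea of "adaptive" L2 regularization (sketch only, not the paper's code).
def train(steps=200, lr=0.1):
    w = 3.0        # a single model weight; the data term pulls it toward 1.0
    log_lam = 0.0  # log of the L2 factor, also updated by backpropagation
    for _ in range(steps):
        lam = math.exp(log_lam)
        # loss = (w - 1)^2 + lam * w^2
        dw = 2.0 * (w - 1.0) + 2.0 * lam * w
        dlog_lam = lam * w * w   # d(lam * w^2) / d(log_lam)
        w -= lr * dw
        log_lam -= lr * dlog_lam
    return w, math.exp(log_lam)

w, lam = train()
```

On this toy problem, minimizing the same loss over the factor drives it downward while the weight settles near its data-fit optimum; the paper's actual scheme governs how the factors are updated during training, which this sketch does not attempt to model.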
nizhenliang/RAUNet | ['semantic segmentation'] | ['RAUNet: Residual Attention U-Net for Semantic Segmentation of Cataract Surgical Instruments'] | train.py validation.py test.py RAUNet.py load_dataset.py loss.py Load_Dataset load_image load_mask CELDice DecoderBlockLinkNet AAM RAUNet test train_model train adjust_learning_rate load_filename calculate_iou calculate_dice val_multi calculate_confusion_matrix_from_arrays open imread replace RAUNet glob CELDice DataLoader load_state_dict device Load_Dataset val_multi cuda param_groups glob train_model batch_size RAUNet Adam CELDice parameters DataParallel DataLoader load_filename Load_Dataset cuda model batch_size float64 zero_grad set_description adjust_learning_rate save dataset cuda open str strftime set_postfix append val_multi range state_dict update SummaryWriter format astype close mkdir item long criterion backward print add_scalar write tqdm step len uint32 T astype histogramdd append float sum range append float sum range | # RAUNet: Residual Attention U-Net for Semantic Segmentation of Cataract Surgical Instruments (ICONIP19) Paper address: https://link.springer.com/chapter/10.1007/978-3-030-36711-4_13 arxiv: https://arxiv.org/abs/1909.10360 Zhen-Liang Ni, Gui-Bin Bian, Xiao-Hu Zhou, Zeng-Guang Hou, Xiao-Liang Xie, Chen Wang, Yan-Jie Zhou, Rui-Qi Li, Zhen Li Chinese introduction: https://blog.csdn.net/big_dreamer1/article/details/101228624 Note: The size of the input image should be divisible by 32.   ## Citation If you find RAUNet useful in your research, please consider citing: | 3,149 |
nju-websoft/GLRE | ['relation extraction'] | ['Global-to-Local Neural Networks for Document-Level Relation Extraction'] | src/nnet/rgcn.py data_processing/utils.py data_processing/readers.py src/utils/metrics_util.py src/data/loader.py data_processing/convert2result.py src/data/converter.py src/utils/utils.py data_processing/docRedProcess.py src/models/glre.py src/utils/tensor_utils.py scripts/run_cdr.py src/data/reader.py src/nnet/modules.py src/utils/adj_utils.py src/nnet/attention.py src/nnet/transformers_word_handle.py data_processing/filter_hypernyms.py src/utils/evaluate_cdr.py src/models/basemodel.py data_processing/reduce_embeds.py data_processing/process.py src/nnet/trainer.py data_processing/statistics.py src/main.py scripts/run_docred.py scripts/run_cdr_train+dev.py data_processing/tools.py src/data/dataset.py main find_cross chunks main readPubTator load_pretrained_embeddings main chunks sentence_split_genia fix_sent_break convert2sent find_cross generate_pairs find_mentions tokenize_genia adjust_offsets to_edges replace2symbol to_graph using_split2 replace2space main train _test set_seed concat_examples _concat_arrays_with_padding _concat_arrays to_device DocRelationDataset ConfigLoader DataLoader read get_distance overlap_chunk chunks BaseModel Local_rep_layer GLRE PositionwiseFeedForward clones MultiHeadAttention Dot_Attention SelfAttention LayerNorm SublayerConnection ScaleDotProductAttention Encoder SimpleEncoder MultiHeadAttention2 MultiHeadedAttention PositionalEncoding attention EncoderLayer Classifier Encoder EmbedLayer LockedDropout EncoderLSTM RGCN_Layer Trainer transformers_word_handle normalize_adj sparse_mx_to_torch_sparse_tensor sparse_mxs_to_torch_sparse_tensor preprocess_adj convert_3dsparse_to_4dsparse main prf Accuracy pool rm_pad rm_pad_between split_n_pad write_preds load_mappings write_preds_old save_model write_errors load_model Tee print_options solve plot_learning_curve humanized_time print_results observe setup_log load str 
join int get replace print len min write close set add find_cross append range enumerate open range len readPubTator add_argument exit output_file ArgumentParser parse_args makedirs OrderedDict join makedirs OrderedDict format print len list map OrderedDict in_data load_pretrained_embeddings keys full_embeds combinations pmid OrderedDict PairStruct type find_cross zip abs len EntStruct pmid off2 off1 name bio sent_no len type append kb_id sum enumerate split join replace list connected_components to_graph set OrderedDict intersection append kb_id keys chdir system join replace strip tag append enumerate arange values list len exit using_split2 append format copy set enumerate join int off2 off1 print issubset write split Graph add_nodes_from add_edges_from to_edges next iter append _len index split replace replace manual_seed_all seed manual_seed set_seed print Trainer DataLoader test_loader run __call__ setup_log train_loader load_mappings load_model print Trainer test_loader DataLoader __call__ setup_log eval_epoch ConfigLoader _test train load_config isinstance _concat_arrays to_device append range len asarray _concat_arrays_with_padding concatenate insert tuple maximum shape any full array range len join split abs int min split print items str dropout size transpose matmul sqrt masked_fill softmax flatten sum array diag normalize_adj data Size astype float32 from_numpy shape int64 data row FloatTensor Size col astype extend float32 from_numpy int64 max enumerate len Size _values _indices extend unsqueeze append as_tensor max cat enumerate pad_sequence tolist to max split to max device to device masked_fill list abs sort map maxsize len print print print subplot list arange plot yticks xlabel map ylabel savefig figure legend len format print humanized_time append tabulate indent divmod open Tee stdout makedirs format print min named_parameters numpy max print join save state_dict print join load load_state_dict print items | # Global-to-Local Neural Networks for 
Document-Level Relation Extraction [](https://github.com/nju-websoft/GLRE/issues) [](https://github.com/nju-websoft/GLRE/blob/master/LICENSE) [](https://www.python.org/) [](https://pytorch.org/) > Relation extraction (RE) aims to identify the semantic relations between named entities in text. Recent years have witnessed it raised to the document level, which requires complex reasoning with entities and mentions throughout an entire document. In this paper, we propose a novel model to document-level RE, by encoding the document information in terms of entity global and local representations as well as context relation representations. Entity global representations model the semantic information of all entities in the document, entity local representations aggregate the contextual information of multiple mentions of specific entities, and context relation representations encode the topic information of other relations. Experimental results demonstrate that our model achieves superior performance on two public datasets for document-level RE. It is particularly effective in extracting relations between entities of long distance and having multiple mentions. ## Getting Started ### Package Description ``` GLRE/ | 3,150 |
nju-websoft/SPARQA | ['semantic parsing'] | ['SPARQA: Skeleton-based Semantic Parsing for Complex Questions over Knowledge Bases'] | code/grounding/grounding_args.py code/parsing/models/fine_tuning_based_on_bert/greed_search_redundancy_span.py code/common_structs/depth_first_paths.py code/parsing/models/fine_tuning_based_on_bert_interface/redundancy_span_interface.py code/datasets_interface/virtuoso_interface/freebase_sparql_html.py code/evaluation/cwq_precision_1/eval_script.py code/parsing/models/fine_tuning_based_on_bert/run_redundancy_span.py code/common_structs/ungrounded_graph.py code/parsing/structure_transfers.py code/evaluation/cwq_precision_1/eval_script_ywsun.py code/parsing/query_graph_generator.py code/parsing/nltk_nlp_utils.py code/grounding/_2_1_grounded_graph/entity_linking_aqqu_vocab/surface_index_memory.py code/running/running_interface.py code/common_structs/brat_ann_mention_relation.py code/parsing/models/pytorch_pretrained_bert/tokenization.py code/common/dataset_name.py code/parsing/models/fine_tuning_based_on_bert/greed_search_sequence_simplification.py code/common_structs/skeleton.py code/parsing/models/fine_tuning_based_on_bert_interface/joint_three_models_interface.py code/parsing/models/fine_tuning_based_on_bert/greed_search_headword.py code/common_structs/question_annotation.py code/parsing/models/fine_tuning_based_on_bert/run_joint_three_models.py code/grounding/_2_1_grounded_graph/literal_linking/literal_linking_graphq.py code/parsing/aggregation/comparative.py code/grounding/ranking/path_match_nn/sequence_loader.py code/grounding/_2_1_grounded_graph/class_linking/class_linking_freebase.py code/parsing/models/model_utils.py code/evaluation/evaluation_utils.py code/grounding/ranking/path_match_nn/path_match_interface.py code/common/utils.py code/common_structs/grounded_graph.py code/common_structs/depth_first_search.py code/parsing/models/fine_tuning_based_on_bert_interface/headword_span_interface.py 
code/grounding/ranking/path_match_sentence_level/question_match_interface.py code/parsing/models/fine_tuning_based_on_bert/greed_search_token_classifier.py code/common_structs/find_path.py code/grounding/ranking/path_match_nn/model.py code/parsing/models/fine_tuning_based_on_bert/greed_search_run_joint_three_models.py code/datasets_interface/virtuoso_interface/freebase_kb_interface.py code/parsing/aggregation/superlative.py code/grounding/_2_1_grounded_graph/entity_linking_en_vocab/entity_link_pipeline.py code/parsing/relation_extraction_nff.py code/parsing/models/fine_tuning_based_on_bert/run_headword_span.py code/common/kb_name.py code/grounding/ranking/path_match_nn/wordvec.py code/parsing/models/pytorch_pretrained_bert/convert_tf_checkpoint_to_pytorch.py code/parsing/skeleton_parser.py code/grounding/_2_1_grounded_graph/grounded_graph_2_1_generation.py code/grounding/ranking/path_match_nn/parameters.py code/grounding/_2_1_grounded_graph/entity_linking_en_vocab/entity_linker.py code/parsing/models/fine_tuning_based_on_bert/run_token_classifier.py code/parsing/models/pytorch_pretrained_bert/modeling.py code/parsing/models/fine_tuning_based_on_bert_interface/paraphrase_classifier_interface.py code/running/freebase/pipeline_cwq.py code/parsing/models/pytorch_pretrained_bert/__main__.py code/parsing/node_recognition.py code/grounding/_2_1_grounded_graph/entity_linking_aqqu_vocab/u.py code/grounding/_2_1_grounded_graph/literal_linking/literal_linking_cwq.py code/parsing/parsing_args.py code/grounding/_2_2_grounded_graph_offline/graph_2_1_to_2_2_by_transfer.py code/parsing/parsing_utils.py code/common_structs/structure.py code/datasets_interface/virtuoso_interface/freebase_sparql_odbc.py code/parsing/models/fine_tuning_based_on_bert_interface/sequences_classifier_interface.py code/parsing/models/pytorch_pretrained_bert/file_utils.py code/parsing/models/pytorch_pretrained_bert/__init__.py code/parsing/models/fine_tuning_based_on_bert/greed_search_sequence_relation.py 
code/evaluation/kbcqa_evaluation.py code/parsing/models/fine_tuning_based_on_bert/span_utils.py code/common/bert_args.py code/grounding/ranking/path_match_sentence_level/question_match_data_preparation_cwq.py code/common/hand_files.py code/common_structs/cycle.py code/grounding/ranking/path_match_nn/train_test_path_nn.py code/parsing/aggregation/counting.py code/parsing/models/fine_tuning_based_on_bert/run_sequence_classifier.py code/grounding/grounding_utils.py code/grounding/_2_2_grounded_graph_offline/path_to_graph.py code/grounding/grounded_graph_to_sparql.py code/common_structs/stack.py code/grounding/_2_1_grounded_graph/node_linking_interface_freebase.py code/datasets_interface/question_interface/questions_utils.py code/datasets_interface/question_interface/complexwebquestion_interface.py code/grounding/_2_2_grounded_graph_offline/generate_oracle_input.py code/datasets_interface/question_interface/graphquestion_interface.py code/parsing/models/pytorch_pretrained_bert/optimization.py code/parsing/aggregation/aggregation_interface.py code/common/globals_args.py code/parsing/models/fine_tuning_based_on_bert_interface/simplif_classifier_interface.py code/grounding/ranking/path_match_nn/preproccess_freebase.py code/grounding/_2_1_grounded_graph/entity_linking_aqqu_vocab/entity_linker.py code/common_structs/graph.py code/grounding/ranking/path_match_nn/path_match_word_utils.py code/evaluation/sempre_evaluation.py code/parsing/models/fine_tuning_based_on_bert_interface/token_classifier_interface.py code/parsing/skeleton_to_dependency.py code/common_structs/bag.py code/running/freebase/pipeline_graphq.py BertArgs GraphqFileName CWQFileName get_args write_dict_dict_str read_json read_set write_dict loadGloveModel read_graphs_qid_to_answers_set read_ordinal_file read_grounded_graph read_structure_file OtherClassEncoder write_structure_file read_gold_graph_query read_dict read_lexicon_path read_ungrounded_graph write_dict_str write_json write_set read_list_yuanshi 
read_ngram_el_grounding_result structure_to_json read_structure_list_json read_pickle read_list read_structure_json read_dict_dict_update read_dict_dict write_pickle KB_Freebase_Latest KB_Freebase_en_2013 is_literal_node get_gold_entity_or_class has_literal_node get_edge_by_nodes convert_triples_to_graph get_gold_entity_or_class_by_json search_ungrounded_graph print_grounded_graph search_one_node_in_nodes_by_nid random_int_list get_unground_node_by_id Normalize get_nid_by_id get_node_by_id print_span_tree print_ungrounded_graph Stack Bag Node Mention Relation DirectedCycle Cycle DepthFirstPaths get_path_from_a_to_b get_path_every_two_node DepthFirstSearch get_labels_from_lca get_lca_length findPath Graph Digragh GroundedNode GrounedGraph GroundedEdge QuestionAnnotation Node Stack Structure look_for_question_by_id read_complexwebq_question_json look_for_compositionality_type_by_id look_for_sparql_by_id look_for_answers_by_id ComplexWebQuestion _read_complexwebq_question_json get_answers_mid_by_question read_graph_question_json get_answers_by_question GraphQuestion extract_grounded_graph_from_jena_freebase extract_grounded_graph_from_jena_dbpedia get_type_by_instance is_mediator_property_reverse_from_schema get_names get_all_relation_domain_range get_s_p_literal_function notable_type_to_instances get_range is_mediator_property_from_schema instance_to_types get_properties_with_domain_range get_all_properties execute_sparql_two_args get_reverse_property_from_lexcion get_range_by_property get_instance_by_class_notable_type mediator_to_instances get_domain_range_from_schema_by_property get_all_relation_names execute_sparql get_all_properties_with_count type_to_instances get_freebase_schema get_classes_notable_types get_quotation_instance get_numerical_properties get_all_classes_with_count is_mediator_from_schema get_s_p_literal_none get_instance_by_class get_instance_properties_by_class sparql_execuate_compared_goldanswers get_all_instances 
get_properties_from_schema_by_type get_domain_by_property get_s_p_by_entity get_alias get_s_o_by_property get_all_notable_types execute_sparql_three_args get_all_classes get_all_reverse_properties get_domain get_p_o_by_entity get_all_class_names get_p_set get_classes_of_instance SparqlQueryHTML SparqlQueryODBC show_f1_given_qids get_gold_answers search_for_answers_by_id get_name_by_mid get_denotation_set get_gold_qald_answers grounded_graphes_by_score_standard_ywsun_prediction_test grounded_graphes_by_score_standard_ywsun computed_every_grounded_graph_f1_graphq computed_every_grounded_graph_f1_cwq compute_all_questions_recall getResults computeF1 get_answers_names compare_span_to_answer compute_P1 proprocess evaluate sparql_to_denotation_freebase grounded_graph_to_key_path grounded_graph_to_sparql_CWQ grounded_graph_to_denotation grounded_graph_to_sparql_GraphQ grounded_graph_to_sparql_LcQuAD posword_poslist get_question_node convert_2_1_graph_to_qid_entities generate_biunigram_indexrange read_literal_to_id_map_graphq extract_class_mention read_literal_to_id_map posword_wordlist merge_dict add_dict_number is_question_node is_undate_ungrounded_graph_cycle candidate_query_to_grounded_graph get_old_mention load_word2vec_format read_literal_to_id_map_cwq analysis_structure_category generate_n_gram_indexrange PathRanking get_parameters PathMatchByLexicalNN get_qid_abstractquestion judge_twowords_samelemma get_word_pair_sim get_word_pair_sim_without_memory get_firstparts_by_path conquer_cwq train_data_generation_samestructure_graphq conquer_graphq create_data_for_trainorval train_data_generation_samestructure train_data_generation_samestructure_wq divide_train_val conquer_cwq_0904 SeqRankingLoader train WordEmbedding generate_trainset generate_qid_abstractquestion investigate_denotation_same generate_predicate_qids score_testquestion_bert generate_testset QuestionMatchInterface ungrounded_to_grounded generate_grounded_graph_interface recursion_generate_grounded_graph 
node_linking _node_entity_linking_quotation EntitySurfaceIndexMemory normalize_entity_name remove_abbreviations_from_entity_name remove_prefixes_from_name remove_bracket_suffix read_abbreviations remove_number_suffix remove_suffixes_from_name EntityLinker EntityVocabulary EntityLinkPipeline get_s_p_literal_function get_s_p_by_literal_none get_s_p_byliteral generate_paths_graphq_interface_from_graph_2_1_graphq generate_paths_graphq_interface_from_graph_2_1 generate_paths_graphq_interface_from_graph_2_1_cwq generate_paths_graphq_interface_from_lcquad_el get_2_2_graphs_by_type_and_literals _get_2_2_graphs_by_structure_and_type_only_entities generate_candidates_by_2_1_grounded_graph_interface _2_2_b_to_graphs _1_2_f_to_graphs _2_1_d_to_graphs _2_1_g_to_graphs _1_1_d_to_graphs _2_2_to_graphs _2_1_c_to_graphs _2_0_to_graphs _2_0_d_to_graphs _2_1_b_to_graphs _1_0_b_to_graphs _1_2_g_to_graphs _2_1_f_to_graphs _2_2_f_to_graphs _2_2_d_to_graphs parser_conjunction_q_cwq_ _3_0_to_graphs _2_1_to_graphs _2_2_e_to_graphs _1_2_to_graphs _2_2_g_to_graphs _1_2_c_to_graphs _1_1_c_to_graphs _1_2_b_to_graphs _1_2_e_to_graphs _2_1_e_to_graphs _2_3_to_graphs _1_2_h_to_graphs _3_0_b_to_graphs _2_2_c_to_graphs parser_conjunction_q_graphq parser_composition_q_graphq _1_1_to_graphs _1_2_d_to_graphs _2_0_b_to_graphs parser_composition_q_cwq_ _1_0_to_graphs _2_0_c_to_graphs _2_2_h_to_graphs _1_1_b_to_graphs _2_1_h_to_graphs NLTK_NLP run_ungrounded_graph_interface test_ungrounded_graph test_span_tree test_hybrid_dependency_tree span_tree_generation_head span_tree_generation_joint__ span_tree_generation_only_dep set_class_aggregation_function aggregation_interface comparative_ground comparative_serialization is_comparative_funct is_comparative_by_token_ner_tag grounded_graph_to_sparql is_count_funct count_serialization counting_binding counting_recognition_interface grounded_to_answers is_count_by_token_ner_tag superlative_serialization is_superlative_by_token_ner_tag superlative_ground 
is_superlative_funct run_joint_three_models_get_local_args run_redundancy_span_get_local_args token_classifier_accuracy ner_postprocess sequence_classifier_accuracy _truncate_seq_pair ner_prediction_sequence warmup_linear run_sequence_classifier_get_local_args run_token_classifier_get_local_args InputFeatures write_span_headwords_with_nbest read_many_examples read_one_example convert_examples_to_features SquadExample main write_predictions InputFeatures write_span_headwords_with_nbest read_many_examples read_one_example convert_examples_to_features main write_predictions SequenceExample SequencesRelationProcess ParaphraseProcess InputFeatures InputExample convert_examples_to_features SimplificationQuestionProcessor main DataProcessor convert_examples_to_features_for_train InputFeatures convert_example_to_features_for_test InputExample main NodeRecogniationProcessor DataProcessor _check_is_max_context _compute_softmax duplicate_word is_whitespace get_final_text _improve_answer_span _get_best_indexes read_cols_lines warmup_linear process process convert_tf_checkpoint_to_pytorch cached_path s3_etag http_get s3_request s3_get read_set_from_file get_from_cache filename_to_url url_to_filename split_s3_path get_file_extension BertPreTrainingHeads BertForQuestionAnswering BertEncoder PreTrainedBertModel BertSelfAttention BertForMaskedLM BertOnlyMLMHead BertOnlyNSPHead BertEmbeddings BertOutput BertPredictionHeadTransform BertAttention BertPooler gelu BertForMultipleChoice BertConfig BertLayer BertForTokenClassification BertModel BertForNextSentencePrediction BertIntermediate BertForSequenceClassification BertForSpanWithHeadwordWithLabel BertForPreTraining swish BertLMPredictionHead BertSelfOutput warmup_cosine warmup_constant warmup_linear BertAdam BasicTokenizer WordpieceTokenizer load_vocab whitespace_tokenize _is_whitespace _is_control BertTokenizer _is_punctuation main run_grounded_node_grounding_freebase run_grounded_graph_generation_by_structure_transformation 
run_grounding_graph_add_question_match run_grounding_graph_path_match run_query_graph_generation run_end_to_end_evaluation run_grounding_graph_guiyi_add_question_match run_ungrounded_graph_from_complexwebquestion run_ungrounded_graph_from_graphq add_argument ArgumentParser read_structure_list_json read_json append read_structure_json read_ungrounded_graph add_ungrounded_graph Structure num_edge function ungrounded_graph_forest span_tree abstract_question important_words_list str grounded_graph_forest commonness qid nodes ungrounded_query_id edges append sequence_ner_tag_dict words gold_graph_query question compositionality_type num_node blag grounded_linking gold_sparql_query dict gold_answer append UngroundedEdge UngroundedNode read_grounded_graph append GroundedNode GroundedEdge append GroundedNode GroundedEdge dict close dict close OrderedDict set read_list split print split array open int isdigit eval read_list startswith split load close open close close set list close list close dict close dict close dict close close write open str close write open str close write open str close write open close dump close open append write_json structure_to_json add nodes id set add set nid Graph add_edge end start get_root_span_node print nodes edges print nodes edges append randint range get_ungrounded_graph_forest enumerate mean min max add_edge Graph get_path_from_a_to_b vertices append has_path_to DepthFirstPaths append label range len leaf_treeposition get_labels_from_lca index leaves get_lca_length append ComplexWebQuestion append look_for_answers_by_id ComplexWebQuestion answers question sparql compositionality_type GroundedNode list qid GraphQuestion append GroundedEdge answer str isinstance add set answer_mid GroundedNode read_list_yuanshi GrounedGraph dict startswith append get_node_by_id GroundedEdge split GroundedNode read_list_yuanshi GrounedGraph dict eval startswith append get_node_by_id GroundedEdge split print execute_sparql add dict set get_p_o get_s_p 
execute_sparql execute_sparql execute_sparql execute_sparql execute_sparql print execute_sparql pop get_instance_by_class print execute_sparql set dict add write_set len print execute_sparql add dict set write_set len print get_names OrderedDict dict lower read_list write_dict enumerate execute_sparql_two_args execute_sparql add get_p_set execute_sparql set execute_sparql print execute_sparql print write_set execute_sparql append get_s_o_by_property len print get_domain execute_sparql set dict get_range stdout execute_sparql_three_args print Logger read_list enumerate stdout get_instance_by_class print Logger read_list stdout items list print set dict add Logger read_list enumerate split stdout print read_list Logger get_instance_by_class_notable_type execute_sparql_two_args append print execute_sparql enumerate stdout pop print get_names Logger read_list split stdout print get_domain Logger read_list get_range stdout print eval read_list Logger append enumerate split print read_pickle len print get_names execute_sparql append split split items list split is_mediator_property_from_schema get_reverse_property_from_lexcion append split str add set add set denotation set ungrounded_graph_forest print dict write_json read_structure_file get_grounded_graph_forest f1_score listdir ungrounded_graph_forest computeF1 print get_gold_answers write_structure_file denotation set gold_answer read_structure_file get_grounded_graph_forest listdir get_answers_mid_by_question str list ungrounded_graph_forest isinstance computeF1 print write_structure_file denotation set question add read_structure_file get_grounded_graph_forest listdir ungrounded_graph_forest print read_structure_file get_grounded_graph_forest f1_score listdir ungrounded_graph_forest open str sorted defaultdict list qid sparql_query read_structure_file append grounded_query_id close f1_score listdir items print write dict get_grounded_graph_forest items sorted defaultdict list ungrounded_graph_forest 
grounded_query_id print get_grounded_graph_forest denotation qid dict write_json read_structure_file append listdir float len float decode strip startswith str isinstance lower proprocess append print Series strip search group index proprocess zip append DataFrame compare_span_to_answer print append get_answers_names compute_P1 join list relation nodes id edges append str function question_node node_type id nodes dict type_class edges str function question_node node_type id nodes dict type_class edges str list join function question_node node_type id nodes dict type_class edges append keys range len add nodes id set execute_sparql append lower list append list str add set range len str add set range len dict join pop join split append enumerate is_question_node add_edge get_question_node has_path_to Graph end nid print nodes start edges append DepthFirstPaths add_edge end Digragh start edges DirectedCycle list close dict split enumerate append GrounedGraph append nodes dict append join strip split list close dict split enumerate list close dict split enumerate parse_args add_argument ArgumentParser defaultdict replace ungrounded_graph_forest print qid friendly_name nodes question add len max judge_twowords_samelemma tensor float max judge_twowords_samelemma list extend set append split min max range len list divide_train_val append dataset listdir path_match_dir divide_train_val path_match_dir divide_train_val join list grounded_graph_forest ungrounded_graph_forest key_path print sort len dict write_json read_structure_file append path_match_dir range enumerate split join list grounded_graph_forest path_match_dir ungrounded_graph_forest key_path print sort len friendly_name dict edges read_structure_file append write_json range enumerate split join list grounded_graph_forest ungrounded_graph_forest key_path print sort len friendly_name dict edges read_structure_file append write_json range enumerate split str print get_importantwords_byabstractquestion len 
get_word_pair_sim_without_memory get_firstparts_by_path zero_ split append tensor save max enumerate load deepcopy int list print shuffle save append range len SeqRankingLoader model zero_grad MarginRankingLoss save cuda ones clip_gradient Adam epochs range format size eval clip_grad_norm dev_every enumerate loss_margin next_batch criterion backward print Variable named_parameters parameters PathRanking step gpu batch_num str replace ungrounded_graph_forest qid friendly_name nodes dict question write_json read_structure_file join list defaultdict items print sort read_json friendly_name extract_grounded_graph_from_jena_freebase edges write_json append list print read_json shuffle add set write_json append enumerate read_json add set write_json read_structure_file append str sorted list items print read_json qid read_abstractquestionpair_pro dict reverse write_json read_structure_file float str ungrounded_graph_forest print read_json nodes testqid_correspondingtrainqid_denotations set QuestionMatchInterface get_denotation_by_testqid_nodes_freebase write_json read_structure_file get_grounded_graph_forest listdir pop items list copy append clear grounded_query_id ungrounded_to_grounded recursion_generate_grounded_graph nodes append get_copy range len GroundedNode nodes edges append GroundedEdge items list _node_entity_linking_quotation OrderedDict get_indexrange_entity_el_pro_one_mention dict lower items list sorted dict ratio lower replace set split startswith remove_bracket_suffix remove_number_suffix match match write_set str get_s_p_literal_none get_s_p_literal_none print get_topic_entities_list_by_question_from_nn question read_structure_file append enumerate convert_2_1_graph_to_qid_entities ungrounded_graph_forest print read_structure_file get_grounded_graph_forest enumerate convert_2_1_graph_to_qid_entities ungrounded_graph_forest print is_exist read_structure_file get_grounded_graph_forest enumerate convert_2_1_graph_to_qid_entities ungrounded_graph_forest 
print is_exist read_structure_file get_grounded_graph_forest enumerate str parser_conjunction_q_graphq oracle_file_root parser_composition_q_graphq read_json parser_composition_q_cwq_ parser_conjunction_q_cwq_ parser_conjunction_q_graphq oracle_file_root parser_composition_q_graphq read_json parser_composition_q_cwq_ parser_conjunction_q_cwq_ _get_2_2_graphs_by_structure_and_type_only_entities get_2_2_graphs_by_type_and_literals print nodes append analysis_structure_category print _1_1_to_graphs extend _1_0_to_graphs _1_2_to_graphs print extend _2_2_to_graphs _2_0_to_graphs _2_1_to_graphs print _1_1_to_graphs _1_0_to_graphs extend print extend _2_2_to_graphs _2_0_to_graphs _2_1_to_graphs GroundedNode list defaultdict add dict append GroundedEdge split GroundedNode list defaultdict add dict append GroundedEdge split GroundedNode list defaultdict add dict append GroundedEdge split GroundedNode list defaultdict add dict append GroundedEdge split GroundedNode list defaultdict add dict append GroundedEdge split GroundedNode list defaultdict add dict append GroundedEdge split GroundedNode list defaultdict add dict append GroundedEdge split GroundedNode list defaultdict add dict append GroundedEdge split GroundedNode list defaultdict add dict append GroundedEdge split GroundedNode list defaultdict add dict append GroundedEdge split GroundedNode list defaultdict add dict append GroundedEdge split GroundedNode list defaultdict add dict append GroundedEdge split GroundedNode list defaultdict add dict append GroundedEdge split GroundedNode list defaultdict add dict append GroundedEdge split GroundedNode list defaultdict add dict append GroundedEdge split GroundedNode list defaultdict add dict append GroundedEdge split GroundedNode list defaultdict add dict append GroundedEdge split GroundedNode list defaultdict add dict append GroundedEdge split GroundedNode list defaultdict add dict append GroundedEdge split GroundedNode list defaultdict add dict append GroundedEdge split 
GroundedNode list defaultdict add dict append GroundedEdge split GroundedNode list defaultdict add dict append GroundedEdge split GroundedNode list defaultdict add dict append GroundedEdge split GroundedNode list defaultdict add dict append GroundedEdge split GroundedNode list defaultdict add dict append GroundedEdge split GroundedNode list defaultdict add dict append GroundedEdge split GroundedNode list defaultdict add dict append GroundedEdge split GroundedNode list defaultdict add dict append GroundedEdge split GroundedNode list defaultdict add dict append GroundedEdge split GroundedNode list defaultdict add dict append GroundedEdge split GroundedNode list defaultdict add dict append GroundedEdge split GroundedNode list defaultdict add dict append GroundedEdge split GroundedNode list defaultdict add dict append GroundedEdge split GroundedNode list defaultdict add dict append GroundedEdge split GroundedNode list defaultdict add dict append GroundedEdge split GroundedNode list defaultdict add dict append GroundedEdge split GroundedNode list defaultdict add dict append GroundedEdge split update_ungrounded_graph_merge_question_node generate_nodes abstract_question_word_generation span_tree_to_hybrid_dependency_graph_interface create_tokens aggregation_interface str QuestionAnnotation set_class_aggregation_function append set_blag generate_ungrounded_graph convert_to_structure span_tree_generation_head importantwords_by_unimportant_abstractq update_dependencygraph_indexs enumerate span_tree_generation_only_dep print get_nertag_sequence undate_ungrounded_graph_del_cycle set_ungrounded_graph_forest split print span_tree_generation_head create_tokens split print span_tree_generation_head span_tree_to_hybrid_dependency_graph_interface create_tokens split print span_tree_generation_head generate_nodes span_tree_to_hybrid_dependency_graph_interface update_dependencygraph_indexs create_tokens generate_ungrounded_graph split SpanTree add_span_node tokens look_for_position 
SpanTree process simple_process add_child_rel_with_headword update_headword_index get_sub_tokens add_span_node content update_span_tree_structure update_span_tree_nodes tokens look_for_position SpanTree simple_process add_child_rel_with_headword update_headword_index get_sub_tokens add_span_node content update_span_tree_structure update_span_tree_nodes is_comparative_funct is_count_funct count_serialization superlative_serialization is_superlative_funct lower comparative_serialization ner_tag end_position adj_edge_nodes_update is_superlative_by_token_ner_tag start_position is_comparative_by_token_ner_tag is_count_by_token_ner_tag range xiaoyu_dengyu_phrases serialization_mention dayu_dengyu_phrases xiaoyu_phrases dayu_phrases split deepcopy function is_exist_in_nodes end literal_comparative_node_in_one_edge start search_one_node_in_nodes_by_nid search_one_node_in_nodes edges append count_phrases serialization_mention split argmin_phrases argmax_phrases serialization_mention split deepcopy function is_exist_in_nodes end class_superlative_node_in_one_edge relation_superlative_node_in_one_edge friendly_name node_type start search_one_node_in_nodes_by_nid search_one_node_in_nodes edges append superlative_to_function add_argument ArgumentParser add_argument ArgumentParser add_argument ArgumentParser parse_args add_argument ArgumentParser pop len argmax argmax append enumerate append str join duplicate_word is_whitespace whitespace_tokenize read_cols_lines SquadExample append range len is_whitespace strip SquadExample append len _DocSpan _check_is_max_context namedtuple doc_tokens orig_answer_text length question_text len convert_tokens_to_ids _improve_answer_span InputFeatures start append tokenize range enumerate strip end_logit _get_best_indexes text_index doc_char_to_word_offset sorted defaultdict orig_answer_text get_final_text _NbestPrediction end_logits OrderedDict find append start_logit replace _compute_softmax doc_tokens question_text start_logits enumerate 
join namedtuple print text _PrelimPrediction split len strip end_logit _get_best_indexes text_index doc_char_to_word_offset sorted defaultdict get_final_text _NbestPrediction end_logits OrderedDict find append start_logit replace _compute_softmax doc_tokens question_text start_logits enumerate join namedtuple text _PrelimPrediction split len gradient_accumulation_steps from_pretrained arange BertAdam model tuple zero_grad max_query_length RawResult DataParallel doc_stride DataLoader do_train device do_predict output_dir train_file tensor FP16_Optimizer save max_answer_length seed str list DDP max_seq_length set_device len DistributedSampler tolist write_predictions half do_lower_case device_count read_many_examples FusedAdam TensorDataset warmup_linear convert_examples_to_features n_best_size append to SequentialSampler state_dict manual_seed_all format run_redundancy_span_get_local_args init_process_group param_groups size get_world_size mean eval num_train_epochs manual_seed fp16 trange verbose_logging enumerate load int warmup_proportion learning_rate join bert_model backward named_parameters RandomSampler tqdm unique_id train step local_rank train_batch_size makedirs print find SequenceExample SequenceExample print text_b _truncate_seq_pair text_a get_train_examples numpy data_dir get_labels lower sequence_classifier_accuracy get_dev_examples run_sequence_classifier_get_local_args InputFeatures convert_tokens_to_ids extend split append tokenize range enumerate len InputFeatures convert_tokens_to_ids len extend append tokenize range enumerate split token_classifier_accuracy run_token_classifier_get_local_args convert_examples_to_features_for_train append exp sorted append range enumerate len join _strip_spaces list items BasicTokenizer find tokenize len length start min enumerate join tokenize range close print fabs len argmax get_simple_examples max_seq_length to DataLoader eval TensorDataset convert_examples_to_features tensor SequentialSampler numpy 
get_sequence_example convert_example_to_features_for_test shape ner_prediction_sequence range abspath save from_json_file str transpose from_numpy getattr list_variables append state_dict format zip load_variable join int print BertForPreTraining fullmatch any split encode hexdigest sha256 str join isinstance str urlparse isinstance exists path netloc urlparse startswith resource split_s3_path Object resource split_s3_path download_fileobj get update write close tqdm iter_content len get str s3_etag join isinstance url_to_filename startswith head makedirs set OrderedDict strip split category category startswith startswith category ord pop convert_tf_checkpoint_to_pytorch print run_ungrounded_graph_interface append enumerate get_ungrounded_graph_forest set_grounded_linking print write_structure_file generate_grounded_graph_interface nodes qid set_grounded_graph_forest read_structure_file append enumerate clear str ungrounded_graph_forest print write_structure_file generate_candidates_by_2_1_grounded_graph_interface grounded_graph_to_sparql_CWQ extend qid ungrounded_query_id enumerate set_grounded_graph_forest read_structure_file get_grounded_graph_forest count_denotation_to_num range append len ungrounded_graph_forest key_path print get_path_pro write_structure_file question read_structure_file get_grounded_graph_forest extract_importantwords_from_question listdir PathMatchByLexicalNN get_score ungrounded_graph_forest print score write_structure_file denotation qid QuestionMatchInterface read_structure_file get_grounded_graph_forest listdir get_score ungrounded_graph_forest print score get_grounded_graph_forest write_structure_file denotation qid dict QuestionMatchInterface Normalize read_structure_file append listdir enumerate grounded_graphes_by_score_standard_ywsun_prediction_test grounded_graphes_by_score_standard_ywsun read_complexwebq_question_json print write_structure_file run_query_graph_generation append enumerate len write_structure_file 
read_graph_question_json run_query_graph_generation append range len | # SPARQA: question answering over knowledge bases
Code for the paper "SPARQA: Skeleton-based Semantic Parsing for Complex Questions over Knowledge Bases" (AAAI-2020) [detail](https://aaai.org/ojs/index.php/AAAI/article/view/6426). If you have any questions, please email him (ywsun at smail.nju.edu.cn).
**Note that SPARQA has been updated to SkeletonKBQA. If you are interested in SkeletonKBQA, please see [here](https://github.com/nju-websoft/SkeletonKBQA).**
## Project Structure:
<table>
<tr>
 <th>File</th><th>Description</th>
</tr>
<tr>
| 3,151 |
njuzrs/dialogue_distillation | ['data augmentation'] | ['Dialogue Distillation: Open-Domain Dialogue Augmentation Using Unpaired Data'] | code/retrieval/pytorch_pretrained_bert/tokenization_gpt2.py code/retrieval/pytorch_pretrained_bert/tokenization_transfo_xl.py code/retrieval/run_ranker_test.py code/retrieval/pytorch_pretrained_bert/modeling_transfo_xl_utilities.py code/generation/model/optim.py code/retrieval/pytorch_pretrained_bert/__main__.py code/generation/model/transformer_model.py code/retrieval/pytorch_pretrained_bert/__init__.py code/generation/train_kd.py code/generation/model/loss.py code/retrieval/pytorch_pretrained_bert/modeling.py code/generation/model/trainer_kd.py code/generation/config.py code/retrieval/pytorch_pretrained_bert/convert_tf_checkpoint_to_pytorch.py code/generation/model/transformer_module.py code/retrieval/pytorch_pretrained_bert/modeling_openai.py code/generation/model/utils.py code/retrieval/pytorch_pretrained_bert/convert_gpt2_checkpoint_to_pytorch.py code/retrieval/pytorch_pretrained_bert/modeling_transfo_xl.py code/retrieval/pytorch_pretrained_bert/tokenization.py code/retrieval/pytorch_pretrained_bert/optimization.py code/retrieval/pytorch_pretrained_bert/modeling_gpt2.py code/generation/model/dataset.py code/generation/run.py code/generation/train.py code/retrieval/pytorch_pretrained_bert/optimization_openai.py code/generation/metrics.py code/generation/run_dialog.py code/retrieval/run_distilling.py code/retrieval/pytorch_pretrained_bert/convert_transfo_xl_checkpoint_to_pytorch.py code/retrieval/pytorch_pretrained_bert/file_utils.py code/retrieval/metrics.py code/generation/run_dialog_kd.py code/retrieval/pytorch_pretrained_bert/convert_openai_checkpoint_to_pytorch.py code/generation/model/trainer.py code/retrieval/pytorch_pretrained_bert/tokenization_openai.py code/generation/model/postprocessing.py code/generation/model/text.py get_test_config_meme get_model_config_dialog get_trainer_config_meme 
get_model_config_dialog_overlap get_trainer_config_poem get_test_config_poem get_trainer_config_dialog_overlap get_model_config_poem get_test_config_dialog get_test_config_dialog_overlap get_model_config_meme get_trainer_config_dialog calc_distinct calc_bleu calc_f1 calc_bp calc_cover_rate calc_avg_len calc_distinct_ngram get_dict count main main main main S2sDataset_poem S2sDataset_dialog_overlap S2sDataset_dialog S2sDataset_meme LabelSmoothingLoss NoamOpt Adam syntax_fix equal_phrases detokenize ngram_replaser get_syn ReplyChecker myVocab Trainer Trainer TransformerModel MultiheadAttention TransformerModule FeedForward TransformerBlock set_seed pad_sequence checkpoint_sequential load_openai_weights load_openai_weights_chinese f1_score openai_transformer_config evaluate evaluation_one_session mean_average_precision precision_at_position_1 recall_at_position_k_in_10 mean_reciprocal_rank InputFeatures accuracy InputExample _truncate_seq_pair eval BuyMTProcessor convert_examples_to_features warmup_linear main BuyProcessor DataProcessor SnliProcessor DoubanProcessor InputFeatures MrpcProcessor ColaProcessor accuracy InputExample _truncate_seq_pair KeywordProcessor eval BuyMTProcessor convert_examples_to_features warmup_linear main BuyProcessor AntProcessor SingleSentProcessor DataProcessor convert_gpt2_checkpoint_to_pytorch convert_openai_checkpoint_to_pytorch convert_tf_checkpoint_to_pytorch convert_transfo_xl_checkpoint_to_pytorch cached_path s3_etag http_get s3_request s3_get read_set_from_file get_from_cache filename_to_url url_to_filename split_s3_path get_file_extension CosineEncodingClassifier BertPreTrainingHeads BertForSequenceMultiClassification BertForSequenceClassificationMean BertForSequenceMultiLogits BertForQuestionAnswering BertModelMean BertEncoder BertSelfAttention BertForMaskedLM BertOnlyMLMHead BertOnlyNSPHead BertEmbeddings BertOutput BertPredictionHeadTransform BertAttention BertPooler gelu BertPreTrainedModel BertForMultipleChoice BertConfig 
BertModelMultiClassifier BertLayer BertForTokenClassification BertForSequenceMultiLayerClassification BertPoolerMean BertModel BertForNextSentencePrediction BertModelMultiLayer BertIntermediate BertForSequenceClassification BertForSequenceClassificationTwoHead BertForPreTraining swish BertLMPredictionHead load_tf_weights_in_bert BertSelfOutput GPT2LMHeadModel Block GPT2DoubleHeadsModel load_tf_weights_in_gpt2 MLP gelu GPT2PreTrainedModel GPT2Model GPT2LMHead Conv1D GPT2MultipleChoiceHead Attention GPT2Config Attention Block OpenAIGPTPreTrainedModel OpenAIGPTLMHeadModel OpenAIGPTMultipleChoiceHead OpenAIGPTConfig MLP gelu swish OpenAIGPTLMHead OpenAIGPTDoubleHeadsModel Conv1D load_tf_weights_in_openai_gpt OpenAIGPTModel DecoderLayer TransfoXLModel PositionalEmbedding load_tf_weights_in_transfo_xl RelLearnableDecoderLayer AdaptiveEmbedding RelLearnableMultiHeadAttn TransfoXLPreTrainedModel MultiHeadAttn RelPartialLearnableDecoderLayer TransfoXLLMHeadModel PositionwiseFF TransfoXLConfig RelMultiHeadAttn build_tf_to_pytorch_map RelPartialLearnableMultiHeadAttn ProjectedAdaptiveLogSoftmax LogUniformSampler sample_logits _LRSchedule BertAdam WarmupCosineWithWarmupRestartsSchedule WarmupCosineSchedule WarmupCosineWithHardRestartsSchedule WarmupConstantSchedule WarmupLinearSchedule ConstantLR warmup_cosine warmup_constant warmup_linear OpenAIAdam BasicTokenizer WordpieceTokenizer load_vocab whitespace_tokenize _is_whitespace _is_control BertTokenizer _is_punctuation bytes_to_unicode get_pairs GPT2Tokenizer get_pairs text_standardize OpenAIGPTTokenizer LMOrderedIterator TransfoXLCorpus TransfoXLTokenizer LMMultiFileIterator get_lm_corpus _is_whitespace _is_control _is_punctuation LMShuffledIterator main openai_transformer_config AttrDict AttrDict AttrDict openai_transformer_config AttrDict AttrDict AttrDict openai_transformer_config AttrDict AttrDict AttrDict openai_transformer_config AttrDict AttrDict AttrDict join range len items list get_dict exp count exp print calc_bp 
calc_cover_rate log items list get_dict calc_distinct_ngram join Counter vocab_path last_checkpoint_path device seed list set_seed load_state_dict input to format replace answer_beams get_test_config_poem eval keys load pop print TransformerModel get_model_config_poem dict myVocab get_model_config_dialog get_test_config_dialog openai_parameters_dir Trainer DistributedDataParallel ArgumentParser str test_datasets train_datasets set_device S2sDataset_dialog parse_args transformer_module range get_trainer_config_dialog n_layers init_process_group load_last n_epochs add_argument n_pos_embeddings train local_rank split teacher_checkpoint_path LanguageTool check syntax_fix title split word_tokenize list replace lemma_names choice pos_tag keys synsets items list join replace append get_syn split join equal_phrases replace lower zip append sum split seed manual_seed size max fill_ enumerate children list isinstance Sequential run_function range checkpoint len dotdict load pop list load_state_dict keys join int arange num_embeddings new_kernel cumsum transpose fullmatch from_numpy RectBivariateSpline getattr linspace zip split range len sorted mean_average_precision precision_at_position_1 recall_at_position_k_in_10 mean_reciprocal_rank print str format join text_b InputFeatures convert_tokens_to_ids _truncate_seq_pair tokenize guid info append text_a enumerate len pop len argmax gradient_accumulation_steps from_pretrained get_train_examples BertAdam tuple zero_grad temperature DataParallel DataLoader loss_ce output_dir do_train tensor save FP16_Optimizer view DDP data_dir len max_seq_length get_labels DistributedSampler half device_count FusedAdam student_bert_model TensorDataset convert_examples_to_features warmup_linear CrossEntropyLoss state_dict manual_seed_all student_model param_groups log_softmax get_world_size mean lower num_train_epochs info manual_seed fp16 alpha trange enumerate int time learning_rate warmup_proportion join backward teacher_bert_model 
named_parameters RandomSampler tqdm bool step train_batch_size makedirs eval_batch_size join max_seq_length data_dir to accuracy get_dev_examples DataLoader TensorDataset convert_examples_to_features output_dir info tensor SequentialSampler numpy len model bert_model vstack tolist append evaluate extend format load_tf_weights_in_gpt2 print GPT2Model save GPT2Config state_dict format OpenAIGPTConfig print save load_tf_weights_in_openai_gpt OpenAIGPTModel state_dict str format print BertForPreTraining save load_tf_weights_in_bert from_json_file state_dict pop str join format __dict__ load_tf_weights_in_transfo_xl print TransfoXLLMHeadModel save abspath TransfoXLConfig state_dict encode hexdigest sha256 str join str urlparse exists path netloc urlparse startswith resource split_s3_path Object resource split_s3_path download_fileobj get update write close tqdm iter_content len get str s3_etag join url_to_filename startswith head makedirs set load_variable join int format zip print transpose fullmatch from_numpy any getattr list_variables abspath append split load_variable int format zip print squeeze fullmatch from_numpy getattr list_variables abspath append split load pop int format zip print cumsum fullmatch from_numpy getattr split open update r_r_bias hasattr tie_weight layers out_layers tie_projs emb_layers r_w_bias transformer emb_projs untie_r zip append out_projs enumerate load_variable pop list format items join print transpose from_numpy list_variables keys build_tf_to_pytorch_map enumerate embedding view size einsum masked_fill_ sample cat detach OrderedDict strip split category category startswith startswith category ord append list range ord add set sub replace load join format TransfoXLCorpus print save exists convert_openai_checkpoint_to_pytorch convert_transfo_xl_checkpoint_to_pytorch convert_tf_checkpoint_to_pytorch convert_gpt2_checkpoint_to_pytorch | # Dialogue Distillation code/data for EMNLP'2020 long paper "[Dialogue Distillation: Open-domain 
Dialogue Augmentation Using Unpaired Data](https://arxiv.org/abs/2009.09427)"
## Code
`code/generation`: code for the generation-based dialogue model
`code/retrieval`: code for the retrieval-based dialogue model
## Data
The data we used can be downloaded from this [link](https://drive.google.com/file/d/1mNQf7QydWGhxPE1-1IW0yfwSLJJ9zVG7/view?usp=sharing)
## Citation
Please cite our EMNLP paper if you find our work useful :)
 @inproceedings{zhang2020distill,
| 3,152 |
nkolkin13/STROTSS | ['style transfer'] | ['Style Transfer by Relaxed Optimal Transport and Self-Similarity'] | styleTransfer.py stylize_objectives.py contextual_loss.py pyr_lap.py vgg_pt.py utils.py st_helper.py dp_loss_warp pairwise_distances_sq_l2 dp_loss remd_loss_g remd_loss viz_d dp_loss_g moment_loss pairwise_distances_cos get_DMat moment_loss_g syn_lap_pyr dec_lap_pyr run_st objective_class style_transfer build_guidance load_path_for_pytorch load_style_folder rgb_to_yuv_pc yuv_to_rgb load_style_guidance rgb_to_yuv to_device aug_canvas split_99 match_device extract_regions Vgg16_pt transpose mm view sqrt transpose mm view pairwise_distances_sq_l2 Variable size sqrt pairwise_distances_cos to_device zeros range len int upsample size clone sqrt sum max range len size min transpose mean get_DMat max topk size min transpose mean get_DMat max range size transpose mean mm range len size transpose mm len mean pow get_DMat float sum max range detach exp size transpose mean get_DMat sum detach exp size transpose mean get_DMat sum detach size transpose matmul mean pow get_DMat sum max detach upsample_bilinear size range append upsample_bilinear size len data normal time imwrite Variable upsample transpose print clone mean unsqueeze to_device range style_transfer shuffle_feature_inds imwrite move syn_lap_pyr zero_grad objective_class to_device load_style_guidance init_g_inds Vgg16_pt load_style_folder step RMSprop aug_canvas append range phi eval dec_lap_pyr backward print contiguous greater init_inds numpy array len cpu cuda int size transpose cpu float max t match_device mm view match_device mm match_device mm view append size range int concatenate copy append zeros imread range max clip reshape transpose astype float32 unique append expand_dims com_f float contiguous astype float32 copy shape stack imread view Variable size add_ astype phi copy float32 int64 unsqueeze int32 to_device mul_ append range clip cat len Variable sort contiguous unsqueeze to_device append 
numpy range | # Style Transfer by Relaxed Optimal Transport and Self-Similarity (STROTSS)
Code for the paper https://arxiv.org/abs/1904.12785 (CVPR 2019)
webdemo: http://style.ttic.edu/
UPDATE 5/8/2020: David Futschik (https://github.com/futscdav) very kindly pointed out a bug in the feature extraction pipeline where the images were not properly normalized with ImageNet's per-channel mean and standard deviation. Fixing this dramatically improves results in many cases. He has also implemented a much faster and more memory-efficient version of STROTSS (https://github.com/futscdav/strotss); it doesn't support spatial guidance yet, but I'm planning on incorporating his improvements into this repo soon so that the faster version is available for spatial guidance as well.
## Dependencies:
* python3 >= 3.5
* pytorch >= 1.0
* imageio >= 2.2
* numpy >= 1.1
## Usage:
| 3,153 |
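The dependency floors listed in the STROTSS README can be verified programmatically. This is a minimal sketch, not part of the repo itself; it assumes the usual PyPI import names (`torch`, `imageio`, `numpy`) and does a plain numeric comparison of dotted version strings.

```python
def meets_minimum(installed: str, required: str) -> bool:
    """Compare dotted version strings numerically, e.g. '1.10' >= '1.2'.
    Non-numeric suffixes (rc tags, local builds) are ignored."""
    to_tuple = lambda v: tuple(int(p) for p in v.split(".") if p.isdigit())
    return to_tuple(installed) >= to_tuple(required)

# Minimum versions from the dependency list above (import names assumed).
REQUIRED = {"torch": "1.0", "imageio": "2.2", "numpy": "1.1"}

def check_environment() -> dict:
    """Return the subset of requirements that are missing or too old."""
    problems = {}
    for module_name, minimum in REQUIRED.items():
        try:
            module = __import__(module_name)
        except ImportError:
            problems[module_name] = "not installed"
            continue
        installed = getattr(module, "__version__", "0").split("+")[0]
        if not meets_minimum(installed, minimum):
            problems[module_name] = f"{installed} < {minimum}"
    return problems
```

An empty dict from `check_environment()` means every listed package is present at (at least) the required version.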
nlpyang/SUMO | ['document summarization', 'extractive summarization'] | ['Single Document Summarization as Tree Induction'] | src/models/stats.py src/others/logging.py src/models/encoder.py src/train.py src/models/trainer.py src/others/pyrouge.py src/models/optimizers.py src/models/attentions.py src/models/reporter.py src/models/data_loader.py src/models/model_builder.py validate wait_and_validate test train str2bool _getMatrixTree_multi MultiHeadedAttention StructuredAttention MultiHeadedPooling Batch Dataloader simple_batch_size_fn load_dataset DataIterator batch StructuredEncoder PositionwiseFeedForward TransformerInterEncoder TMTLayer PositionalEncoding TransformerEncoderLayer build_optim Summarizer use_gpu MultipleOptimizer Optimizer ReportMgr ReportMgrBase build_report_manager Statistics test_rouge _get_ngrams _tally_parameters build_trainer Trainer process rouge_results_to_str init_logger Rouge155 DirectoryProcessor join sorted int validate str info getmtime glob sort min test index sleep model_path test_all append enumerate vocab_path PieceToId batch_size build_trainer test_from SentencePieceProcessor list Summarizer Dataloader load_dataset _report_step eval info vars setattr keys load print Load len vocab_path PieceToId batch_size build_trainer test_from SentencePieceProcessor list Summarizer Dataloader load_dataset eval info vars setattr keys load print Load len vocab_path PieceToId build_optim build_trainer SentencePieceProcessor log_file train_steps seed list Summarizer init_logger train_from manual_seed info vars setattr keys load Load len diag_embed exp transpose unsqueeze inverse diagonal sum append simple_batch_size_fn len glob sorted onmt_path shuffle max len items list Optimizer set_parameters named_parameters lr max_grad_norm load_state_dict optim is_tensor cuda values state_dict SummaryWriter tensorboard report_every ReportMgr tensorboard_log_dir tuple add set range len sum named_parameters accum_count SummaryWriter _tally_parameters model_path 
Trainer report_every info ReportMgr join format print Rouge155 output_to_dict strftime localtime convert_and_evaluate mkdir range len print process len setFormatter getLogger addHandler StreamHandler Formatter setLevel INFO FileHandler | # SUMO
**This code is for the paper `Single Document Summarization as Tree Induction`**
**Python version**: This code is in Python 3.6
**Package Requirements**: pytorch tensorboardX pyrouge
Some code is borrowed from ONMT (https://github.com/OpenNMT/OpenNMT-py)
## Data Preparation:
Download the processed data for CNN/DailyMail from https://drive.google.com/open?id=1BM9wvnyXx9JvgW2um0Fk9bgQRrx03Tol
Unzip the file and copy its contents to `data/`
## Model Training
| 3,154 |
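The unzip-and-copy step in the SUMO data preparation can be scripted with the standard library. A minimal sketch — the archive filename below is a placeholder, since the Google Drive link does not reveal it:

```python
import zipfile
from pathlib import Path

def prepare_data(archive_path: str, data_dir: str = "data") -> list:
    """Extract a downloaded archive into data_dir and return the
    names of the extracted members."""
    target = Path(data_dir)
    target.mkdir(parents=True, exist_ok=True)
    with zipfile.ZipFile(archive_path) as zf:
        zf.extractall(target)
        return zf.namelist()

# Usage (archive name is hypothetical):
# prepare_data("cnndm_processed.zip")
```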
nlx-group/BrainActivation | ['word embeddings'] | ['Predicting Brain Activation with WordNet Embeddings'] | scripts/brain_plot.py scripts/data_load.py | plot load_data abs open list ProgBar range
update format asarray close plot_glass_brain listdir keys load int print index Nifti1Image load_data zeros array len join str append loadmat range | # Predicting Brain Activation with WordNet Embeddings
The task of taking a semantic representation of a noun and predicting the brain activity it triggers, in terms of fMRI spatial patterns, was pioneered by *Mitchell et al.*. That seminal work used word co-occurrence features to represent the meaning of the nouns. Even though the task does not impose any specific type of semantic representation, the vast majority of subsequent approaches resort to feature-based models or to semantic spaces (aka word embeddings). We address this task, with competitive results, by instead using a semantic network to encode lexical semantics, thus providing further evidence for the cognitive plausibility of this approach to modeling lexical meaning.
**Article**
*João Rodrigues, Ruben Branco, João Silva and António Branco, 2018, [Predicting Brain Activation with WordNet Embeddings](https://sites.google.com/view/cognitivews2018/accepted-papers), To appear in ACL 2018 Workshop on Cognitive Aspects of Computational Language Learning and Processing*
**Semantic network (wnet2vec)**
The wnet2vec model referred to in the article is available for [download](http://lxcenter.di.fc.ul.pt/wnet2vec_brain.tar.gz). The model was built from 60,000 words using Princeton WordNet 3.0; its training is described in *Chakaveh Saedi, António Branco, João Rodrigues and João Silva, 2018, [Wordnet embeddings](https://sites.google.com/site/repl4nlp2018/accepted-papers), In Proceedings of the ACL2018 3rd Workshop on Representation Learning for Natural Language Processing (RepL4NLP)*.
**Evaluation and scripts**
| 3,155 |
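The wnet2vec download is a set of word embeddings. A hedged sketch of a loader — the word2vec-style text layout (one word plus its vector per line, optionally preceded by a `<count> <dim>` header) is an assumption about the archive's contents, not something the README above documents:

```python
def load_word_vectors(lines):
    """Parse word2vec-style text lines: 'word v1 v2 ... vN'.
    Returns {word: [float, ...]}; malformed lines are skipped."""
    vectors = {}
    dim = None
    for line in lines:
        parts = line.split()
        if not parts:
            continue
        # word2vec text files often start with a "<count> <dim>" header
        if dim is None and len(parts) == 2 and all(p.isdigit() for p in parts):
            continue
        word, values = parts[0], parts[1:]
        if dim is None:
            dim = len(values)
        if len(values) != dim:
            continue  # dimensionality mismatch: skip the line
        vectors[word] = [float(v) for v in values]
    return vectors

# Usage against the extracted archive (path and format are assumptions):
# with open("wnet2vec_brain/wnet2vec.txt") as f:
#     emb = load_word_vectors(f)
```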
nmheim/torsk | ['time series', 'anomaly detection'] | ['Adaptive Anomaly Detection in Chaotic Time Series with a Spatially Aware Echo State Network'] | experiments/ocean/conv_run_5daymean_highres.py tests/data/test_torch_dataset.py torsk/scripts/debug_train.py torsk/scripts/ncextract.py torsk/models/torch_optimize.py torsk/data/detrend.py torsk/timing.py torsk/scripts/cli.py torsk/scripts/unmask+dct.py tests/test_experiments/test_run_kuro.py torsk/numpy_accelerate.py torsk/params.py experiments/agulhas/gulf_run_3daymean.py torsk/scripts/defaults.py torsk/scripts/unmask+ct.py experiments/lissajous/run.py torsk/__init__.py torsk/hpopt.py torsk/models/numpy_map_esn.py experiments/agulhas/anomaly_count.py torsk/models/numpy_optimize.py tests/test_experiments/test_run_mackey_1d.py torsk/visualize.py torsk/scripts/cycle_predict.py experiments/agulhas/conv_run_3daymean.py experiments/agulhas/detect_row.py experiments/mackey/run.py tests/models/test_torch_lstm.py experiments/agulhas/pred_perf.py experiments/agulhas/utils.py torsk/config.py torsk/data/conv.py torsk/models/torch_lstm.py experiments/chaotic_lissajous/conv_run.py torsk/scripts/scale.py tests/test_experiments/test_run_mackey_2d.py tests/data/test_numpy_dataset.py torsk/scripts/animate.py tests/models/test_torch_esn.py torsk/imed.py experiments/agulhas/animate_aguhlas.py torsk/sparse.py tests/test_save_load.py tests/test_experiments/test_run_lissajous_2d.py torsk/scripts/detect_row.py torsk/data/dct.py torsk/models/initialize.py torsk/models/torch_map_esn.py tests/test_sparse.py torsk/models/numpy_esn.py experiments/lstm/eval_esn_lstm_1dmackey.py torsk/scripts/pred_perf.py torsk/anomaly.py experiments/ocean/conv_run_5daymean.py torsk/data/torch_dataset.py torsk/scripts/lstm_predict.py experiments/lissajous/conv_run.py torsk/data/numpy_dataset.py experiments/agulhas/extract_aguhlas.py tests/models/test_numpy_esn.py tests/test_imed.py torsk/scripts/unmask+sct.py setup.py tests/models/test_initialize.py 
experiments/chaotic_lissajous/run.py torsk/models/torch_esn.py torsk/scripts/anomaly_count.py experiments/lstm/train_esn_lstm_1dmackey.py experiments/agulhas/detect.py torsk/scripts/detect.py torsk/data/utils.py esn_perf read_imed initialize finalize generate_cycle_pred generate_offline_esn_pred read_online_esn generate_lstm_preds run_offline_esn run_online_esn get_data_loaders mackey_train_eval_test esn_params lstm_params run_lstm test_imed gauss_kernel test_numpy_sparse_save_load test_numpy_save_load test_square_sparse test_rect_sparse test_image_dataset test_reservoir_initialization model_forward test_dtypes test_esn_cell test_esn test_convlstm test_lstm test_kuro test_lissajous test_run_1dmackey test_mackey_2d cumulative_distribution qfunction sliding_score is_second_level_param optimize_wout parse_path _evaluate_hyperparams get_hpopt_dirs create_path valid_second_level_params esn_tikhonov_fitnessfunc evaluate_hyperparams get_metric imed_metric metric_matrix eucd_metric Id bohriumize after_storage void _get_all_class_attrs before_storage to_np numpyize ParamsSchema default_params Params InputMap sparse_dense_mv SparseMatrix sparse_dense_mm start_timer end_timer Timer animate_imshow plot_mackey animate_quad_imshow plot_iteration write_double_video write_video animate_double_imshow to_byte save_model load_model initial_state train_esn _load_torch_model _fix_prefix _save_torch_model _load_numpy_model _save_numpy_model dump_training dump_cycles train_predict_esn dump_prediction _random_kernel _mean_kernel _gauss_kernel _conv_out_size conv2d_output_shape get_kernel conv2d conv2d_sequence dct2_sequence sct_basis sct dct2 isct2 idct2 idct2_sequence sct2 isct separate_trend_scaled kspace_predict_from_trend_unscaled separate_trend_unscaled predict_from_trend_unscaled separate_trends_scaled recombine_trends predict_from_trend_scaled recombine_trend_scaled recombine_trends_unscaled recombine_trend_unscaled separate_trends_unscaled cycles kspace_predict_from_trend 
polynomial_trend split_train_label_pred NumpyImageDataset TorchFlatImageDataset TorchImageDataset svd sine_sequence mackey_sequence gauss2d_sequence min_max_scale fft_derivative_1d resample2d_sequence mackey_anomaly_sequence eigh resample2d upscale normalize resample2d_skt resample2d_numpy lstsq downscale connection_mask sparse_esn_reservoir sparse_nzpr_esn_reservoir dense_esn_reservoir scale_weight NumpyESN NumpyStandardESNCell NumpyMapSparseESNCell input_map NumpyMapESNCell hidden_size_of init_input_map_specs apply_input_map get_hidden_size _pseudo_inverse_svd _extended_states pseudo_inverse tikhonov _pseudo_inverse_lstsq TorchESN TorchStandardSparseESNCell TorchStandardESNCell LSTM ConvLSTM input_map get_kernel init_input_map_specs TorchMapSparseESNCell TorchMapESNCell _extended_states tikhonov pseudo_inverse cli cli cli cli cli cli cli cli cli get_metadata create_dims get_dims cli trivial_imed imed_plot sort_filenames cli cli sct2 outputfile sct_basis sct smooth_mask smooth_mask_and_ict dct2 isct2 idct2 isct smooth_mask_and_dct sct2 sct_basis sct smooth_mask smooth_mask_and_isct isct2 smooth_mask_and_dct isct show list glob tqdm empty animate_double_imshow enumerate len add_argument set_context set_style ArgumentParser parse_args show outfile tight_layout close savefig legend append get_legend_handles_labels sort_filenames parent tqdm zip append imed_metric exists enumerate load append glob glob list mean load_model print abs squeeze predict esn_params mean save zip append zeros forward hidden_size load print predict mean load_state_dict lstm_params LSTM numpy hidden_size Params mackey_sequence TorchImageDataset mackey_train_eval_test DataLoader Params ModelCheckpoint add_event_handler get_data_loaders Adam MSELoss parameters create_supervised_trainer Timer lstm_params LSTM create_supervised_evaluator attach run mackey_train_eval_test esn_params ESN train_predict_esn NumpyImageDataset time train_esn print mackey_train_eval_test esn_params Path ESN 
NumpyImageDataset int exp min pi linspace seed normal reshape convolve2d shape gauss_kernel metric_matrix imed_metric sum join str save_model load_model forward float64 NumpyESN astype Params join str save_model load_model forward float64 NumpyESN astype Params sparse_dense_mv arange ones from_dense SparseMatrix eye sparse_dense_mm enumerate from_dense array unscale dtype zeros Params NumpyImageDataset data sparse_esn_reservoir dense_esn_reservoir connection_mask input_shape print NumpyESN ones zeros forward default_params hidden_size model_forward dtype forward NumpyMapESNCell NumpyMapSparseESNCell astype optimize NumpyESN copy uniform forward default_params predict forward LSTM rand m rand ConvLSTM seed load basicConfig max parent getLogger abs mean ImageDataset info ESN train_predict_esn setLevel default_params seed max basicConfig T arange getLogger abs pi mean ImageDataset info ESN train_predict_esn setLevel default_params gauss2d_sequence seed max basicConfig normalize mackey_sequence getLogger abs mean ImageDataset info ESN train_predict_esn setLevel default_params arange getLogger cos pi ESN abs max setLevel seed basicConfig normalize mackey_anomaly_sequence mean ImageDataset info T train_predict_esn default_params gauss2d_sequence max zeros_like min maximum mean shape qfunction empty std range items _fix_prefix Path eval split glob Path exists optimize save_model transient_length debug pred_length dump_prediction predict update save_model optimize_wout debug randint create_path valid_second_level_params dump_training zeros forward NumpyESN _evaluate_hyperparams joinpath Path info range exp arange pi reshape metric_matrix reshape bh_check numpyize bohriumize items ndarray isinstance debug _get_all_class_attrs to_bh items isinstance debug _get_all_class_attrs to_np Int List String Float Nested Boolean Int List String InputMap Float Nested reshape sum reshape sum begin end show subplots set_title suptitle input_map reshape cat_input_map colorbar range 
state_map imshow forward vec_to_rect to_np len append_data get_writer close shape normalize range to_byte zeros colormap shape write_video empty subplots arange suptitle text set_xlabel tight_layout colorbar flatten imshow zip to_np FuncAnimation FuncAnimation subplots set_title suptitle arange text colorbar imshow to_np len arange text colorbar imshow figure gca to_np FuncAnimation subplots arange plot set_ylim tight_layout histogram legend fill_between abs sort_output dump before_storage after_storage load NumpyESN after_storage params numpyize pop isinstance FloatTensor as_posix indices save values state_dict strip TorchESN pop load FloatTensor load_state_dict hidden_size info _fix_prefix _save_torch_model _save_numpy_model mkdir Path save isinstance _load_torch_model after_storage _fix_prefix _load_numpy_model Path Params zeros setncatts quadratic_trend mean_cycle createVariable createDimension cycle_length mkdir debug Path reshape mkdir debug Path reshape dtype optimize save_model backend transient_length initial_state dump_training Path info forward hidden_size dtype optimize save_model backend transient_length initial_state predict range dump_prediction reset warning Path dump_training info randint forward pretty_print hidden_size exp min linspace sum _random_kernel _mean_kernel _gauss_kernel _conv_out_size convolve2d get_kernel get_kernel arange cos pi lstsq dot sct T isct T idctn idctn T arange len len arange concatenate mean downscale cycles upscale polynomial_trend len arange concatenate downscale cycles upscale len separate_trend_scaled flatten shape empty range shape empty range recombine_trend_scaled arange ones recombine_trends shape array int arange separate_trends dct2 kspace_predict_from_trend arange concatenate mean cycles polynomial_trend len separate_trend_unscaled flatten shape empty range cycles concatenate arange len recombine_trend_unscaled shape empty range ones recombine_trends_unscaled shape array len kspace_predict_from_trend_unscaled 
separate_trends_unscaled idctn arange start_timer end_timer start_timer end_timer start_timer end_timer start_timer dtype astype end_timer uint64 modf reshape end_timer astype start_timer to_np start_timer end_timer resample2d start_timer end_timer dct idct start_timer end_timer idct ifft fft fftshift pi linspace len min max min max T arange cos pi linspace sin zeros array range zeros range tile linspace pi sin triu T tril normal T connection_mask eigvals triu max tril rvs eigsh tocsr eigs transpose multiply random triu max tril asarray tocoo reshape eigs multiply SparseMatrix add uniform coo_matrix randint max range bh_dot concatenate reshape gradient hidden_size_of shape convolve2d resample2d normalize start_timer reshape end_timer append get_kernel uniform astype ones T begin svd T bh_dot end _extended_states len begin T end _extended_states warning lstsq T solve dot _extended_states eye to_np getattr get_np_kernel tensor numpy apply_input_map rand getattr numpy np_pinv reshape size t gesv mm show animate_imshow as_posix save write_video subplots masked_array abs set_context colorbar imshow savefig append sum sliding_score close tight_layout mean zip annotate load sort_filenames parent write tqdm set_style array forward load_model copyfile arange Params train_length set_yscale set_xlabel legend normalize mackey_anomaly_sequence ones_like plot set_xlim get_cmap zeros cycle set_ylabel set_xticks fill_between imed_metric set_ylim grid pcolormesh meshgrid T replace load_state_dict LSTM hidden_size dimensions dtype get_dims createDimension read exists split imed_metric tile subplots arange plot mean set_ylabel array legend fill_between std animate_quad_imshow concatenate enumerate imed_plot print min max str createVariable float32 createDimension float Dataset all compressed interp1d copy mask mean masked_array shape range fi print smooth_mask dct2 append sct2 range len compressed interp1d copy mean shape masked_array range fi lstsq dot smooth_mask isct2 smooth_mask 
| # Torsk An extended Echo State Network (ESN) for chaotic time series prediction and anomaly detection. This is a new implementation of the framework used in my [thesis](https://github.com/nmheim/thesis). If you are looking for the legacy `torsk` that was used there, you can find it [here](https://github.com/nmheim/torsk_archived). In addition to a randomly initialized input matrix, this implementation makes it possible to use convolutions, discrete Fourier transforms, and gradients of images as inputs to the ESN. ## Prediction Examples | 3,156 |
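The Torsk README above leaves the ESN mechanics implicit. As a rough, hedged illustration of the core idea — a fixed random reservoir driven by the input, with a Tikhonov-regularized linear readout — here is a minimal NumPy sketch; the names and sizes are illustrative and not Torsk's actual API:

```python
import numpy as np

def train_esn(inputs, targets, hidden=100, spectral_radius=0.9, ridge=1e-6, seed=0):
    """Drive a fixed random reservoir with `inputs`, then fit a linear
    readout to `targets` by Tikhonov-regularized least squares."""
    rng = np.random.default_rng(seed)
    w_in = rng.uniform(-0.5, 0.5, (hidden, inputs.shape[1]))
    w = rng.normal(size=(hidden, hidden))
    w *= spectral_radius / np.max(np.abs(np.linalg.eigvals(w)))  # rescale reservoir
    states = np.zeros((len(inputs), hidden))
    h = np.zeros(hidden)
    for t, u in enumerate(inputs):
        h = np.tanh(w_in @ u + w @ h)     # reservoir update
        states[t] = h
    w_out = np.linalg.solve(states.T @ states + ridge * np.eye(hidden),
                            states.T @ targets)
    return states @ w_out

x = np.sin(np.linspace(0, 8 * np.pi, 500))[:, None]   # toy stand-in for a chaotic series
pred = train_esn(x[:-1], x[1:])                       # one-step-ahead fit
mse = float(np.mean((pred - x[1:]) ** 2))
```

Torsk's `InputMap` additionally feeds convolutions, DCTs, and image gradients into the reservoir; the sketch keeps only the random input matrix.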
nnaisense/deep-iterative-surface-normal-estimation | ['surface normals estimation'] | ['Deep Iterative Surface Normal Estimation'] | utils/prf.py utils/radius.py normals_pcpnetdata_eval.py utils/covariance.py setup.py datasets/pcpnet_dataset.py normals_pcpnetdata_train.py utils/quaternion.py normals_nyudepthv2_eval.py networks/gnn.py datasets/nyu_depth_v2.py NormalEstimation test run NormalEstimation run test save_normals NormalEstimation train test run NYUDepthV2_PC PCPNetDataset GNNFixedK GNNVariableK scatter_softmax compute_cov_matrices_dense compute_weighted_cov_matrices_dense compute_weighted_cov_matrices compute_cov_matrices compute_prf cangle QuatToMat radius_var radius_graph_var radius_graph radius eval test join numpy savetxt results_path print iterations format print k_test array model zero_grad iterations gather cuda radius_graph view apply expand_as range detach format param_groups size mean float enumerate backward print sort parameters step model pi gather abs cuda radius_graph view apply expand_as sum range detach size item float enumerate sort clamp array format k_train copy save train range state_dict exp scatter_add scatter_mean view scatter_add matmul mean sum view matmul scatter_add view sum view float norm sum stack view size new_zeros cKDTree query unique append cat size new_zeros cKDTree query unique append cat | ## Deep Iterative Surface Normal Estimation Code repository for the paper [<i>Deep Iterative Surface Normal Estimation</i>](https://arxiv.org/abs/1904.07172), CVPR 2020 (oral), by Jan Eric Lenssen, Christian Osendorfer and Jonathan Masci @NNAISENSE. <p align="center"> <img width="40%" src="overview.png?sanitize=true"/> </p> Below, we explain how to * install the code, * reproduce paper results for the PCPNet and NYU datasets, * train a new model Further, we provide a short overview of important classes and functions. | 3,157 |
nnaisense/pytorch_sym3eig | ['surface normals estimation'] | ['Deep Iterative Surface Normal Estimation'] | torch_sym3eig/sym3eig.py test/test_sym3eig.py setup.py test/utils.py test/benchmark.py torch_sym3eig/__init__.py test_sym3eig_forward test_sym3eig_backward tensor Sym3Eig_core get_func Sym3Eig zeros_like sign gather tensor is_cuda view transpose matmul apply expand_as double size op assert_almost_equal reshape argsort cross repeat cpu numpy requires_grad_ tensor | # Pytorch extension: Batch-wise eigencomputation for symmetric 3x3 matrices -------------------------------------------------------------------------------- The operator works on 32 and 64 bit floating point data types and is implemented both for CPU and GPU with custom kernels. Implementations include forward and backward steps. The full code for our work [Deep Iterative Surface Normal Estimation](https://arxiv.org/abs/1904.07172) can be found here: https://github.com/nnaisense/deep-iterative-surface-normal-estimation. ## Installation Ensure that at least PyTorch >= 1.0.0 is installed, checkout repository and run: ``` python setup.py install ``` | 3,158 |
nng555/ssmba | ['data augmentation'] | ['SSMBA: Self-Supervised Manifold Based Data Augmentation for Improving Out-of-Domain Robustness'] | utils.py ssmba.py gen_neighborhood hf_reconstruction_prob_tok hf_masked_encode fill_batch from_pretrained model tuple hf_masked_encode cuda max open seed str list label_file shard fill_batch tolist tokenizer append range vocab num_shards hf_reconstruction_prob_tok eval stack manual_seed is_available pop items int join pad_token_id write full output_prefix len vocab items list asarray int pad_token_id ones mask_token_id rand choice encode sum full range len ones_like topk model float clone multinomial unsqueeze softmax nonzero to cat list zip encode append len | # SSMBA: **S**elf-**S**upervised **M**anifold **B**ased Data **A**ugmentation ## Overview Self-Supervised Manifold Based Data Augmentation or SSMBA is a semi-supervised data augmentation method that improves both in-domain and out-of-domain performance across multiple models and tasks. SSMBA relies on the assumption that the underlying data clusters around a lower dimensional manifold. A corruption function is applied to perturb a training example off the manifold, then a reconstruction function (typically a denoising autoencoder) is used to project the noised point back onto the manifold.  SSMBA is a static augmentation method, meaning all augmentation is completed prior to training. Labels can be generated for these augmented examples either by preserving the labels (which we implement by default here), or by optionally pseudo-labelling with a teacher model. The final labelled examples can be added to the original training examples to form a larger augmented dataset for training. ## SSMBA in NLP When applied in NLP settings, we apply masked language modeling (MLM) training noise as our corruption function. 
Specifically, we select a fraction of tokens to apply noise to, then of these tokens, either `<MASK>` them, replace them with a random token, or leave them unchanged. In the original BERT training regime, these percentages are 80% `<MASK>`, 10% random, and 10% unchanged. Once corrupted, we use a BERT model to predict each of the selected tokens and reconstruct the input.  ## How to Use SSMBA `ssmba.py` is based on the HuggingFace Transformers library and uses BERT models implemented in this library for reconstruction. Any valid BERT model in the [HuggingFace model library](https://huggingface.co/models) can be used, as well as local model paths. | 3,159 |
nnzhan/Graph-WaveNet | ['traffic prediction'] | ['Graph WaveNet for Deep Spatial-Temporal Graph Modeling', 'Incrementally Improving Graph WaveNet Performance on Traffic Prediction'] | train.py generate_training_data.py test.py util.py engine.py model.py trainer generate_train_val_test generate_graph_seq2seq_io_data linear gcn nconv gwnet main main load_pickle calculate_scaled_laplacian calculate_normalized_laplacian load_adj masked_mae metric masked_rmse asym_adj DataLoader masked_mape sym_adj load_dataset StandardScaler masked_mse concatenate abs transpose min astype range shape stack append expand_dims dayofweek max timedelta64 values join arange concatenate print sort generate_graph_seq2seq_io_data read_hdf shape traffic_df_filename output_dir y_start round savez_compressed data gwnet batch_size nodevec2 adjdata numpy device inverse_transform DataFrame max heatmap load_adj transpose squeeze metric savefig load_state_dict load_dataset append to adjtype range cat format randomadj dropout relu get_iterator mean eval softmax num_nodes checkpoint enumerate load print to_csv nodevec1 aptonly mm in_dim gcn_bool weight_decay save round nhid seq_length str addaptadj argmin state_dict trainer shuffle expid train time learning_rate epochs diags flatten coo_matrix sum array flatten coo_matrix diags diags tocoo flatten coo_matrix eye sum array calculate_normalized_laplacian csr_matrix reduce identity shape eigsh load_pickle load join DataLoader transform StandardScaler isnan float zeros_like where zeros_like where isnan float abs zeros_like where isnan float abs item | # Graph WaveNet for Deep Spatial-Temporal Graph Modeling This is the original pytorch implementation of Graph WaveNet in the following paper: [Graph WaveNet for Deep Spatial-Temporal Graph Modeling, IJCAI 2019] (https://arxiv.org/abs/1906.00121). A nice improvement over GraphWavenet is presented by Shleifer et al. [paper](https://arxiv.org/abs/1912.07390) [code](https://github.com/sshleifer/Graph-WaveNet). 
<p align="center"> <img width="350" height="400" src=./fig/model.png> </p> ## Requirements - python 3 - see `requirements.txt` ## Data Preparation | 3,160 |
no-execution/Summa_label | ['text summarization', 'document summarization', 'extractive summarization'] | ['SummaRuNNer: A Recurrent Neural Network based Sequence Model for Extractive Summarization of Documents'] | summa_label.py main get_label test_process word_token list cut Rouge open join time print enable apply_async close disable Pool range | # SummaRunner sentence-level label Implementation of the heuristic method to 0/1 label every sentence of article. Multiprocess Implementation is also provided (you need to devide source text data into several files to accelerate process by multiprocess) More information: SummaRunner:https://arxiv.org/pdf/1611.04230.pdf SummaRunner Pytorch Implementation: https://github.com/hpzhao/SummaRuNNer Thanks for hpZhao's work. ## Tips requirement: rouge (pip install rouge) Welcome to ask questions and issues. | 3,161 |
nocotan/cocob_backprop | ['stochastic optimization'] | ['Training Deep Networks without Learning Rates Through Coin Betting'] | main.py cocob_backprop.py COCOBBackprop save_csv MLP test main train COCOBBackprop save_csv DataLoader ArgumentParser seed Adam RMSprop append parse_args to range format MLP Compose test manual_seed n_epochs MNIST print add_argument parameters train backward nll_loss zero_grad step net enumerate eval nll_loss net enumerate format to_csv n_epochs dataset DataFrame optimizer | # COntinuous COin Betting Backprop (COCOB) [](https://paperswithcode.com/sota/stochastic-optimization-on-mnist?p=training-deep-networks-without-learning-rates) Unofficial pytorch implementation of COCOB Backprop. ## Training deep networks without learning rates through coin betting * [paper link](https://proceedings.neurips.cc/paper/2017/hash/7c82fab8c8f89124e2ce92984e04fb40-Abstract.html) >Deep learning methods achieve state-of-the-art performance in many application scenarios. Yet, these methods require a significant amount of hyperparameters tuning in order to achieve the best results. In particular, tuning the learning rates in the stochastic optimization process is still one of the main bottlenecks. In this paper, we propose a new stochastic gradient descent procedure for deep networks | 3,162 |
nonparametric-adversarial/nonparametric | ['adversarial defense', 'adversarial attack'] | ['Robust Decision Trees Against Adversarial Examples'] | nnattack/attacks/__init__.py nnattack/datasets/__init__.py nnattack/models/__init__.py nnattack/attacks/kernel_sub_tf.py nnattack/models/adversarial_dt.py nnattack/attacks/nns/gradient_based.py nnattack/attacks/tests/test_attack.py nnattack/attacks/blackbox/__init__.py nnattack/attacks/nns/direct.py nnattack/models/keras_model.py nnattack/variables.py nnattack/models/adversarial_knn.py nnattack/models/robust_nn/eps_separation.py nnattack/attacks/trees/rf_attack.py nnattack/attacks/trees/dt_opt.py nnattack/attacks/blackbox/blackbox_attack.py nnattack/attacks/blackbox/boundary_attack.py main.py nnattack/attacks/blackbox/attackbox.py nnattack/models/defense.py nnattack/models/robust_nn/hopcroftkarp.py nnattack/attacks/nns/nn_attack.py nnattack/attacks/base.py setup.py nnattack/attacks/utils.py nnattack/models/kernel_sub_tf.py nnattack/attacks/trees/papernots.py pass_random_state baseline_pert set_random_seed eps_accuracy main estimate_model_roubstness OrdVarClass get_file_name AttackModel KernelSubTf pgd_perturb fgm_perturb solve_qp solve_lp AttackVarClass mulvt OPT_attack_lf attack_untargeted fine_grained_binary_search_local fine_grained_binary_search BlackBoxAttack attack_untargeted fine_grained_binary_search fine_grained_binary_search_local BoundaryAttack boundary_attack_mnist DirectAttack compute_cosine find_2nd_nn_l2 find_nn GradientBased classify get_sol_linf rev_get_adv get_sol NNAttack get_sol_l1 get_half_space NNOptAttack sol_sat_constraints KNNRegionBasedAttackApprox get_adv RevNNAttack KNNRegionBasedAttackExact HybridNNAttack attack_with_eps_constraint TestNNAttack get_sol_linf get_sol_l2 get_tree_constraints _get_path_constraints DTOpt find_adv prediction tree_parser Papernots decisionTreeNode RFAttack tree_instance_constraint rev_get_sol_linf union_constraints constraint_list_to_matrix DatasetVarClass 
AdversarialRf AdversarialDt AdversarialKnn get_aug_data get_aug_v2 find_confident_label find_red_points mlp get_adversarial_loss logistic_regression KerasModel get_adversarial_acc_metric pgd_perturb fgm_perturb KernelSubTFModel ModelVarClass build_collision_graph find_matching find_Z find_min_cover find_eps_separated_set find_num_collision HopcroftKarp seed set_learning_phase RandomState set_session global_variables_initializer set_intermidiate_variable get_var Session run norm copy predict append baseline_pert predict enumerate arange zeros_like perturb get_variable_name set_random_seed get_var get_var_with_argument augy len estimate_model_roubstness perts fit_transform range predict baseline_pert set_intermidiate_variable astype shuffle OneHotEncoder mean print reshape augX transform MinMaxScaler array fit run_single_experiment parse_argparse replace join get_variable_name loss_fn clip_by_value optimize_linear gradients dtype while_loop shape cast clip_eta random_uniform clip_by_value zeros T Minimize Problem Variable solve T Minimize Problem Variable solve square sum size expand unsqueeze range len list norm fine_grained_binary_search time zeros_like fine_grained_binary_search_local min random set shape sample float range enumerate len FloatTensor print predict type enumerate MNIST attack_untargeted format load_model print DataParallel eval is_available cuda load_mnist_data enumerate dot reshape array len arange tuple ndim argsort mean zeros sum range enumerate zeros sum sqrt enumerate array dot T asarray get_constraints reshape solve_qp eye matrix array zeros matrix lp asarray get_constraints concatenate reshape hstack lp vstack matrix asarray get_constraints reshape hstack solve_lp vstack combinations list norm arange reshape query filter zip append array range enumerate norm asarray isinstance copy append norm get_sol_fn reshape tuple KNeighborsClassifier array predict fit norm value concatenate reshape qp eye matrix n_features enumerate norm value 
concatenate lp matrix n_features enumerate threshold asarray feature float64 reshape zeros tree_ _dfs tree_ children_right children_left pop str threshold node_count children_left feature children_right append tree_ decisionTreeNode append input_component prediction append zeros range len zip decision_path threshold feature apply append range len norm concatenate print reshape hstack solve_lp vstack append enumerate len NearestNeighbors kneighbors sum array fit cdist range ones int find_red_points concatenate print min find_confident_label find_eps_separated_set log len join eps set_intermidiate_variable perturb print concatenate astype get_variable_name where delta var_value vstack Delta get_aug_v2 find_eps_separated_set range fit Input Input norm min set dict add range len maximum_matching norm min add set range len add set find_matching set intersection find_Z union list find_min_cover build_collision_graph delete | # Nonparametric Adversarial Attack and Defense ## Installation Python 3.6+ ### Dependencies ``` pip install --upgrade -r requirements.txt ``` #### LP, QP Solvers - Install gruobi: https://www.cvxpy.org/install/index.html#install-with-gurobi-support - Install GLPK: https://www.cvxpy.org/install/index.html#install-with-cvxopt-and-glpk-support | 3,163 |
northeastsquare/bts | ['depth estimation', 'monocular depth estimation'] | ['From Big to Small: Multi-Scale Local Planar Guidance for Monocular Depth Estimation'] | eval_with_pngs.py generate_rili_train_files_with_gt.py average_gradients.py bts_main.py bts_dataloader.py bts_test.py bts_eval.py utils/extract_official_train_test_set_from_mat.py bts.py bts_sequence.py custom_layer/_compute_depth_grad.py bts_test_competition_data.py evaluation_metrics.py utils/download_from_gdrive.py run_bts_eval_schedule.py average_gradients BtsModel BtsDataloader convert_arg_line_to_args compute_errors test eval main get_num_lines get_tensors_in_checkpoint_file convert_arg_line_to_args sum_gradients build_tensors_in_checkpoint_file main train get_num_lines test_sequence main main convert_arg_line_to_args test get_num_lines convert_arg_line_to_args generate_test_txt test main get_num_lines run_eval _compute_depth_grad_cc download_file_from_google_drive convert_image concat reduce_mean zip append expand_dims split maximum mean sqrt log10 abs log readlines close open rstrip checkpoint_path initializer getmtime filenames_file get_next Saver exists Session run str sorted make_initializable_iterator add start_queue_runners format replace global_variables_initializer BtsModel FileWriter set ConfigProto local_variables_initializer int remove join time print gt_path data_path Coordinator output_directory model_name BtsDataloader filenames_file garg_crop max_depth_eval len logical_and shape append do_kb_crop range format mean eigen_crop int min_depth_eval print compute_errors float32 zeros get_num_lines bts_parameters test sorted NewCheckpointReader get_tensor append get_variable_to_shape_map list add set append get_tensor_by_name enumerate info name reduce_mean add_n histogram zip append scalar basename checkpoint_path print log_directory system model_name dirname train Saver Session run placeholder image_path append start_queue_runners glob BtsModel ConfigProto local_variables_initializer 
join constant print sort float32 Coordinator global_variables_initializer len test_sequence join print write splitext open walk split generate_test_txt print system now get get_confirm_token save_response_content Session int imwrite uint16 astype zeros makedirs | # BTS From Big to Small: Multi-Scale Local Planar Guidance for Monocular Depth Estimation [arXiv](https://arxiv.org/abs/1907.10326) [Supplementary material](https://arxiv.org/src/1907.10326v4/anc/bts_sm.pdf) ## Video Demo 1 [](https://www.youtube.com/watch?v=2fPdZYzx9Cg) ## Video Demo 2 [](https://www.youtube.com/watch?v=1J-GSb0fROw) ## Note This repository contains a Tensorflow implementation of BTS.\ | 3,164 |
notdibya/gcsl | ['multi goal reinforcement learning'] | ['Learning to Reach Goals via Iterated Supervised Learning'] | dependencies/rlutil/dictarray.py dependencies/rlutil/envs/tabular_cy/q_iteration_py.py dependencies/rlutil/envs/lqr/lqr_solver.py dependencies/rlutil/logging/log_utils.py dependencies/robel/utils/__init__.py dependencies/rlutil/envs/lqr/test.py dependencies/multiworld/core/image_env.py dependencies/robel/components/robot/dynamixel_client.py dependencies/multiworld/core/flat_goal_env.py dependencies/rlutil/envs/tabular_cy/test_q_iteration.py dependencies/multiworld/envs/mujoco/sawyer_xyz/sawyer_door.py dependencies/robel/utils/testing/mock_sim_scene.py dependencies/robel/components/base.py experiments/gcsl_sweep.py dependencies/robel/utils/testing/mock_sim_scene_test.py dependencies/robel/components/robot/dynamixel_client_test.py dependencies/robel/components/tracking/__init__.py dependencies/rlutil/envs/baird.py dependencies/robel/utils/testing/mock_time.py dependencies/multiworld/envs/mujoco/sawyer_xyz/sawyer_push_and_reach_env_two_pucks.py dependencies/setup.py doodad/ssh/credentials.py dependencies/robel/utils/configurable.py gcsl/envs/room_env.py dependencies/multiworld/envs/real_world/sawyer/sawyer_reaching.py dependencies/robel/dkitty/avoid.py dependencies/robel/dkitty/base_env.py dependencies/multiworld/envs/pygame/point2d.py doodad/mode.py dependencies/robel/utils/plotting.py dependencies/robel/robot_env_test.py dependencies/robel/components/tracking/vr_tracker.py gcsl/envs/lunar_lander_base.py dependencies/rlutil/logging/log_processor.py dependencies/multiworld/envs/mujoco/util/create_xml.py dependencies/robel/robot_env.py dependencies/robel/dclaw/turn_test.py gcsl/algo/networks.py scripts/run_experiment_lite_doodad.py dependencies/robel/components/tracking/group_config.py dependencies/multiworld/envs/gridworlds/goal_gridworld.py dependencies/rlutil/envs/tabular/simple_env.py dependencies/robel/components/tracking/virtual_reality/poses.py 
dependencies/robel/components/robot/group_config_test.py doodad/arg_parse.py dependencies/multiworld/core/multitask_env.py dependencies/room_world/model_builder.py setup.py dependencies/robel/components/robot/hardware_robot_test.py dependencies/robel/dkitty/walk.py gcsl/envs/sawyer_push.py dependencies/rlutil/envs/gridcraft/true_qvalues.py dependencies/rlutil/envs/gridcraft/mazes.py dependencies/rlutil/logging/hyperparameterized.py dependencies/rlutil/hyper_sweep.py gcsl/envs/sawyer_door.py dependencies/multiworld/envs/mujoco/sawyer_xyz/base.py dependencies/rlutil/envs/tabular_cy/test_mountaincar.py gcsl/algo/gcsl.py dependencies/robel/simulation/sim_scene.py dependencies/robel/utils/math_utils.py dependencies/multiworld/envs/pygame/__init__.py dependencies/multiworld/envs/mujoco/__init__.py dependencies/robel/simulation/randomize.py dependencies/robel/dclaw/pose_test.py dependencies/rlutil/viskit/ext.py doodad/launch_tools.py dependencies/rlutil/envs/tabular_cy/test_env_wrapper.py experiments/gcsl_docker.py gcsl/algo/variants.py dependencies/robel/dkitty/utils/__init__.py dependencies/multiworld/__init__.py dependencies/robel/components/robot/dynamixel_utils.py dependencies/multiworld/envs/mujoco/sawyer_xyz/sawyer_reach.py dependencies/rlutil/envs/gridcraft/grid_env.py dependencies/robel/dkitty/stand_test.py doodad/easy_sweep/hyper_sweep.py dependencies/multiworld/envs/mujoco/sawyer_xyz/sawyer_push_nips.py dependencies/robel/simulation/dm_renderer.py dependencies/robel/dkitty/orient.py dependencies/robel/dclaw/screw.py dependencies/robel/dkitty/__init__.py dependencies/rlutil/viskit/core.py dependencies/room_world/pointmass.py dependencies/rlutil/serializable.py dependencies/rlutil/envs/__init__.py dependencies/rlutil/envs/gridcraft/utils.py dependencies/robel/components/tracking/phasespace_tracker.py dependencies/robel/scripts/enjoy_mjrl.py gcsl/envs/env_utils.py dependencies/rlutil/logging/autoargs.py dependencies/robel/components/base_test.py 
dependencies/robel/components/builder.py dependencies/rlutil/general.py dependencies/robel/dclaw/screw_test.py dependencies/multiworld/envs/mujoco/classic_mujoco/half_cheetah.py dependencies/rlutil/viskit/frontend.py dependencies/robel/scripts/check_mujoco_deps.py dependencies/rlutil/envs/wrappers.py dependencies/rlutil/logging/console.py dependencies/rlutil/envs/tabular_cy/test_random_env.py dependencies/rlutil/torch/nn.py dependencies/robel/components/tracking/virtual_reality/__init__.py doodad/easy_sweep/launcher.py dependencies/robel/dkitty/utils/manual_reset.py dependencies/robel/simulation/__init__.py dependencies/robel/components/tracking/hardware_tracker.py dependencies/robel/simulation/mjpy_renderer.py dependencies/rlutil/envs/gridcraft/test_grid_env.py dependencies/robel/components/robot/group_config.py dependencies/multiworld/envs/pygame/walls.py doodad/mount.py dependencies/multiworld/envs/mujoco/sawyer_xyz/sawyer_push_multiobj.py doodad/easy_sweep/__init__.py dependencies/robel/dclaw/base_env.py dependencies/robel/components/tracking/utils/coordinate_system.py dependencies/room_world/room_env.py dependencies/robel/utils/resources.py gcsl/envs/lunarlander.py doodad/ec2/credentials.py gcsl/doodad_utils.py dependencies/multiworld/envs/real_world/sawyer/sawyer_pushing.py dependencies/robel/components/robot/dynamixel_robot.py dependencies/rlutil/torch/__init__.py dependencies/robel/components/robot/builder.py gcsl/algo/buffer.py dependencies/rlutil/envs/tabular/maxent_irl.py dependencies/robel/components/robot/dynamixel_robot_test.py dependencies/rlutil/envs/test_wrapper.py dependencies/robel/scripts/__init__.py dependencies/rlutil/viskit/__init__.py dependencies/robel/simulation/renderer.py dependencies/multiworld/envs/mujoco/sawyer_xyz/sawyer_push_multiobj_subset.py dependencies/robel/scripts/find_vr_devices.py dependencies/robel/utils/math_utils_test.py gcsl/envs/goal_env.py dependencies/multiworld/envs/gridworlds/__init__.py 
dependencies/rlutil/torch/test_torch.py dependencies/robel/utils/testing/mock_dynamixel_sdk_test.py dependencies/multiworld/core/gym_to_multi_env.py dependencies/multiworld/envs/env_util.py dependencies/robel/components/robot/hardware_robot.py dependencies/robel/scripts/utils.py experiments/gcsl_example.py dependencies/rlutil/envs/tabular/q_iteration.py dependencies/robel/scripts/rollout.py doodad/utils.py dependencies/robel/dclaw/__init__.py dependencies/robel/dclaw/pose.py gcsl/policy.py dependencies/robel/__init__.py dependencies/robel/components/__init__.py dependencies/rlutil/logging/tabulate.py dependencies/multiworld/envs/mujoco/cameras.py dependencies/rlutil/torch/pytorch_util.py dependencies/room_world/env_utils.py dependencies/robel/dkitty/push.py dependencies/robel/utils/registration.py dependencies/robel/components/robot/__init__.py dependencies/robel/dkitty/utils/scripted_reset.py gcsl/envs/claw_env.py dependencies/rlutil/envs/env_utils.py doodad/ec2/autoconfig.py dependencies/rlutil/envs/lqr/lqrenv.py doodad/ec2/aws_util.py dependencies/robel/components/robot/robot_test.py dependencies/multiworld/envs/mujoco/mujoco_env.py dependencies/multiworld/core/serializable.py dependencies/multiworld/envs/mujoco/util/interpolation.py dependencies/robel/components/robot/robot.py dependencies/robel/dclaw/scripted_reset.py dependencies/room_world/rooms.py dependencies/robel/utils/testing/mock_dynamixel_sdk.py dependencies/rlutil/math_utils.py dependencies/robel/dkitty/stand.py doodad/relaunch.py doodad/__init__.py gcsl/envs/gymenv_wrapper.py dependencies/robel/components/tracking/virtual_reality/client.py dependencies/robel/components/builder_test.py dependencies/robel/dclaw/turn.py dependencies/multiworld/envs/mujoco/sawyer_xyz/sawyer_multiple_objects.py dependencies/robel/components/tracking/builder.py dependencies/robel/utils/configurable_test.py dependencies/robel/simulation/mjpy_sim_scene.py dependencies/multiworld/envs/real_world/sawyer/sawyer_door.py 
dependencies/multiworld/envs/pygame/pygame_viewer.py dependencies/rlutil/logging/logger.py dependencies/multiworld/core/wrapper_env.py dependencies/rlutil/logging/qval_plotter.py dependencies/robel/utils/testing/__init__.py dependencies/multiworld/envs/mujoco/sawyer_xyz/sawyer_door_hook.py dependencies/multiworld/envs/mujoco/sawyer_xyz/sawyer_push_and_reach_env.py dependencies/rlutil/envs/gridcraft/grid_spec.py dependencies/robel/simulation/dm_sim_scene.py dependencies/robel/components/tracking/tracker.py dependencies/robel/scripts/enjoy_softlearning.py dependencies/robel/dkitty/orient_test.py dependencies/robel/dkitty/walk_test.py dependencies/multiworld/envs/mujoco/sawyer_torque/sawyer_torque_reach.py gcsl/envs/__init__.py dependencies/robel/simulation/sim_scene_test.py dependencies/robel/components/tracking/utils/__init__.py dependencies/robel/scripts/play.py dependencies/robel/components/tracking/virtual_reality/device.py dependencies/rlutil/envs/gridcraft/test_grid_env_cy.py dependencies/robel/scripts/reset_hardware.py dependencies/rlutil/envs/gridcraft/wrappers.py doodad/ssh/__init__.py dependencies/robel/utils/reset_procedure.py dependencies/rlutil/envs/tabular_cy/test_pendulum.py doodad/ec2/__init__.py dependencies/rlutil/logging/test_hyperparameterized.py dependencies/robel/utils/resources_test.py register_all_envs FlatGoalEnv GymToMultiEnv MujocoGymToMultiEnv ImageEnv normalize_image unormalize_image MultitaskEnv Serializable ProxyEnv NormalizedBoxEnv get_path_lengths concatenate_box_spaces create_stats_ordered_dict get_average_returns get_generic_path_information get_asset_full_path get_stat_in_paths GoalGridworld sawyer_pusher_camera_upright_v3 init_sawyer_camera_v1 create_sawyer_camera_init sawyer_xyz_reacher_camera_v0 init_sawyer_camera_v2 sawyer_door_env_camera_v0 sawyer_pusher_camera_upright_v2 sawyer_pusher_camera_upright_v0 sawyer_init_camera_zoomed_in sawyer_pick_and_place_camera init_sawyer_camera_v5 sawyer_pusher_camera_top_down 
init_sawyer_camera_v4 init_sawyer_camera_v3 sawyer_torque_reacher_camera sawyer_pick_and_place_camera_slanted_angle sawyer_pick_and_place_camera_zoomed MujocoEnv create_image_48_sawyer_push_and_reach_arena_env_v0 create_image_48_sawyer_pickup_easy_v0 create_image_48_sawyer_push_and_reach_arena_env_reset_free_v0 create_image_48_sawyer_door_hook_reset_free_v1 register_mujoco_envs create_image_48_sawyer_reach_xy_env_v1 create_image_84_sawyer_reach_xy_env_v1 HalfCheetahEnv SawyerReachTorqueEnv SawyerMocapBase SawyerXYZEnv SawyerDoorEnv SawyerDoorHookEnv zangle_to_quat quat_to_zangle MultiSawyerEnv SawyerPushAndReachXYEnv SawyerPushAndReachXYZEnv SawyerPushAndReachXYZDoublePuckEnv SawyerPushAndReachXYDoublePuckEnv SawyerTwoObjectEnv SawyerMultiobjectEnv SawyerTwoObjectEnv SawyerMultiobjectEnv SawyerPushAndReachXYEasyEnv SawyerPushAndReachXYEnv SawyerPushAndReachXYHarderEnv SawyerReachXYZEnv SawyerReachXYEnv create_root_xml find_mins_maxs create_object_xml clean_xml file_len TwoPointCSpline CSpline QuinticSpline Point2DEnv Point2DWallEnv PygameViewer LinearMapper VerticalWall HorizontalWall Segment Wall register_pygame_envs point2d_image_v0 point2d_image_fixed_goal_v0 SawyerDoorEnv SawyerPushXYEnv SawyerReachXYZEnv DictArray flatten_list TrainingIterator example_run_method Sweeper always_true chunk_filter kwargs_wrapper run_sweep_parallel run_sweep_doodad run_sweep_serial gd_momentum_optimizer categorical_log_pdf gauss_log_pdf adam_optimizer np_seed gd_optimizer split_list_by_lengths clip_sing rle Serializable Baird CustomGymEnv test_env get_inner_env RllabGymEnv get_asset_xml one_hot_to_flat flat_to_one_hot TestTimeLimitWrapper ObsWrapper Wrapper TimeLimitWrapper FixedEncodeWrapper register_envs TransitionModel GridEnv RewardFunction local_spec spec_from_sparse_locations spec_from_string GridSpec GridEnvCyTest GridEnvCyTest load_qvals hash_env dense_tabular_solver plot_qval QFunc get_hashname one_hot_to_flat flat_to_one_hot EyesWrapper RandomObsWrapper GridObsWrapper 
PointmassEnvVelocity LQREnv PointmassEnvTorque PointmassEnvVision lqr_inf lqr_fin solve_lqr_env compute_vistation_demos sample_states compute_visitation get_reward inspect_path tabular_maxent_irl get_policy q_iteration softq_iteration logsumexp compute_visitation compute_occupancy softmax random_env_register DiscreteEnv random_env q_iteration_sparse compute_value_function q_iteration_dense TestEpsGreedyWrapper TestAbsorbingStateWrapper QIterationTest QIterationTest QIterationTest QIterationTest arg inherit _get_prefix new_from_args get_all_parameters _t_or_f add_args prefix _get_info SimpleMessage tweakfun colorize type_hint tweakval tweak prefix_log tee_log collect_args mkdir_p query_yes_no Message log HyperparamWrapper extract_hyperparams Hyperparameterized get_snapshot_gap remove_text_output _add_output get_snapshot_dir log_parameters get_snapshot_mode set_snapshot_gap log set_snapshot_dir tabular_prefix prefix pop_tabular_prefix push_prefix remove_tabular_output pop_prefix add_text_output MyEncoder get_log_tabular_only _remove_output set_snapshot_mode set_log_tabular_only add_tabular_output push_tabular_prefix reset TerminalTablePrinter record_tabular record_tabular_misc_stat dump_tabular reduce_trimmed_mean partition_params rename_partitions reduce_first aggregate_partitions reduce_mean_keys to_data_frame label_scatter_points normalize_loss reduce_last iterate_experiments reduce_mean filter_params reduce_mean_key rename_values timewise_data_frame _find_logs_recursive SubTimer save_exception setup_logger timer generate_exp_name record_tabular_stats reset_logger record_tabular_moving TabularQValuePlotter _pipe_segment_with_colons _visible_width _pipe_line_with_colons _column_type _latex_line_begin_tabular _build_simple_row _strip_invisible _align_header _mediawiki_row_with_attrs _format_table _more_generic _isint _build_row _build_line _normalize_tabular_data _pad_row simple_separated_format _format _type tabulate _afterpoint _padboth _align_column _isnumber 
_padright _isconvertible _padleft Model1 get_params_json Algo2 HyperparameterizedTest Algo1 Module _NewInitCaller default_device one_hot set_gpu logsumexp all_tensor copy_params_from_to tensor to_numpy ModuleTest TestModule DeviceTest _replace_funcs hex_to_rgb extract_distinct_params smart_repr load_exps_data flatten lookup Selector unique load_progress flatten_dict load_params to_json extract lazydict AttrDict shuffled iterate_minibatches_generic compact concat_paths iscanl iscanr scanr flatten truncate_path path_len scanl is_iterable extract_dict stdize check_nan plot_div reload_data parse_float_arg sliding_mean index make_plot make_plot_eps safer_eval get_plot_instruction send_css send_js summary_name RobotEnv make_box_space RobotEnvTest TestEnv BaseComponent DummyComponent BaseComponentTest ComponentBuilder DummyBuilder ComponentBuilderTest RobotComponentBuilder DynamixelClient DynamixelPosVelCurReader unsigned_to_signed DynamixelReader dynamixel_cleanup_handler signed_to_unsigned DynamixelClientTest DynamixelRobotComponent DynamixelRobotState DynamixelGroupConfig MockDynamixelClient RobotComponentTest patch_dynamixel CalibrationMap RobotGroupConfig ControlMode RobotGroupConfigTest HardwareRobotComponent HardwareRobotGroupConfig HardwareRobotComponentTest DummyHardwareRobotComponent RobotState RobotComponent RobotComponentTest TrackerType TrackerComponentBuilder TrackerGroupConfig HardwareTrackerComponent HardwareTrackerGroupConfig PhaseSpaceTrackerGroupConfig PhaseSpaceTrackerComponent TrackerComponent TrackerState VrTrackerComponent VrTrackerGroupConfig CoordinateSystem VrClient VrDevice VrPoseBatch BaseDClawObjectEnv BaseDClawEnv DClawPoseRandomDynamics DClawPoseFixed BaseDClawPose DClawPoseRandom DClawPoseTest DClawScrewRandom BaseDClawScrew DClawScrewRandomDynamics DClawScrewFixed DClawScrewTest disentangle_dclaw reset_to_states add_groups_for_reset BaseDClawTurn DClawTurnRandomDynamics DClawTurnRandom DClawTurnFixed DClawTurnTest DKittyAvoid BaseDKittyEnv 
BaseDKittyUprightEnv DKittyOrientRandomDynamics BaseDKittyOrient DKittyOrientRandom DKittyOrientFixed DKittyOrientTest DKittyPush DKittyStandFixed DKittyStandRandom DKittyStandRandomDynamics BaseDKittyStand DKittyStandTest DKittyWalkFixed DKittyWalkRandom DKittyWalkRandomDynamics BaseDKittyWalk DKittyWalkTest ManualAutoDKittyResetProcedure ScriptedDKittyResetProcedure main policy_factory policy_factory VrDeviceShell PlayShell main rollout_script do_rollouts parse_env_params parse_env_args EpisodeLogger DMRenderer DMRenderWindow DMSimScene MjPyRenderer MjPySimScene _MjlibWrapper _mj_warning_fn SimRandomizer Renderer RenderMode SimBackend SimScene SimSceneTest mjpy_and_dm test_model_file configurable set_env_params DummyWithConfig TestConfigurable ChildDummyWithConfig DummyWithConfigPickleable calculate_cosine average_quaternions AverageQuaternionsTest CalculateCosineTest AnimatedPlot register ResetProcedure ManualResetProcedure get_resource AssetBundle get_asset_path DummyResources TestAssetBundle MockDynamixelSdk patch_dynamixel MockDynamixelSdkTest MockMjData MockMjModel MockSimScene patch_sim_scene MockSimSceneTest patch_time MockTime DiscreteActionMultiWorldEnv MultiWorldEnvWrapper MJCModel pointmass_model default_model MJCTreeNode PMEnv pointmass_camera_config WheeledRoomGenerator draw_wall RoomWithWall FourRoom AntRoomGenerator draw_start_goal Room draw_borders PMRoomGenerator RoomGenerator RoomEnv encode_args __get_arg_config get_args make_python_command launch_python launch_shell DockerMode SlurmSingularity Local LaunchMode SingularityMode SSHDocker LocalDocker dedent EC2AutoconfigDocker EC2SpotDocker CodalabDocker LocalSingularity MountS3 Mount MountLocal MountGitRepo call_and_wait hash_file CommandBuilder run_single_doodad example_run_method Sweeper kwargs_wrapper run_sweep_parallel run_sweep_doodad run_sweep_serial DoodadSweeper example_function Autoconfig s3_upload s3_exists AWSCredentials SSHCredentials run run run launch GoalConditionedPolicy Policy 
ReplayBuffer GCSL class_select IndependentDiscretizedStochasticGoalPolicy cross_entropy_with_weights FCNetwork MultiInputNetwork DiscreteStochasticGoalPolicy StateGoalNetwork CBCNetwork CrossEntropyLoss Flatten get_horizon get_params default_gcsl_params default_markov_policy discretize_environment ClawEnv Discretized DiscretizedActionEnv normalize_image ImageandProprio ImageEnv unormalize_image GoalEnv GymGoalEnvWrapper LunarEnv heuristic LunarLander LunarLanderContinuous demo_heuristic_lander ContactDetector PointmassGoalEnv main SawyerViews SawyerDoorGoalEnv SawyerPushAndReachXYEnv SawyerViews SawyerPushGoalEnv get_env_params create_env failure register_pygame_envs register_mujoco_envs update format isinstance min OrderedDict iter max enumerate update hstack OrderedDict create_stats_ordered_dict vstack len concatenate array range array range register info make make make make realpath join dirname item realpath join dirname item array array hstack array array min max points get join format SubElement Element print glob from_file find_mins_maxs min toprettyxml choice uniform append ElementTree max range enumerate int join format remove print join format getpid array register info Point2DEnv ImageEnv FlatGoalEnv Point2DEnv ImageEnv list sorted slice array sum keys enumerate Sweeper run_method filter_fn Sweeper shuffle map append filter_fn Pool Sweeper launch_python print sleep len where append array diff cumsum insert svd clip exp square pi shape sum log seed get_state getstate set_state setstate isinstance print action_space render reset sample step range shape zeros register endswith GridSpec array range split array range GridSpec array spec_from_string find md5 encode update max join abs GridEnv gs len print get_transitions dot hash_env savetxt zeros sum rew_fn range n join print loadtxt hash_env dense_tabular_solver show list height set_value product make_plot width TabularQValuePlotter range enumerate lqr_fin T slice dot shape cholesky zeros range rew_Q lqr_fin 
rew_R lqr_inf zeros dynamics dot transition_matrix logsumexp get_policy initial_state_distribution zeros expand_dims range transition_matrix flat_dim einsum zeros flat_dim range one_hot_to_flat len get_policy arange len choice shape tabular_trans_distr append range array log flat_to_one_hot update sum heartbeat q_iteration record print adam_optimizer itr_message TrainingIterator compute_visitation zeros abs max flat_dim set_trace shape zeros float range one_hot_to_flat sum exp max expand_dims exp logsumexp reward_matrix log logsumexp dot num_actions range num_states sum transition_matrix zeros items list num_actions num_states items get_policy list num_actions num_states expand_dims range transition_matrix zeros einsum int list range lse max compute_value_function abs dot shape zeros sum max range reward_fn compute_value_function reward num_actions transitions num_states range zeros hasattr __init__ hasattr isinstance __init__ upper getargspec items list hasattr _get_prefix __init__ dict getattr zip _get_info ismethod append str makedirs print flush open join split Callable isinstance items list replace collect_args log getargspec items list replace locate log __init__ dict collect_args lower getattr startswith zip __name__ lower write isinstance any getattr type append open mkdir_p dirname remove close append join _add_output _remove_output _add_output remove _remove_output list values colorize write now strftime tzlocal append append join join push_prefix push_tabular_prefix pop_tabular_prefix pop list values writerow add dict DictWriter writeheader print_tabular keys log flush split join items list isinstance __module__ get_all_parameters dict any getattr mkdir_p dirname __name__ max min average nan record_tabular median std join listdir isdir print join defaultdict _find_logs_recursive append tuple defaultdict isinstance list defaultdict DataFrame append reduce_fn keys list defaultdict min append DataFrame keys range len list defaultdict append DataFrame keys 
list set mean append DataFrame product mean append DataFrame range len iterrows str concat text join set_snapshot_dir get_snapshot_dir remove_tabular_output now strftime tzlocal generate_exp_name add_tabular_output reset_logger join get_snapshot_dir print_exc mean min max record_tabular append mean extend record_tabular clear reset SubTimer time _times log join conv hasattr isinstance _isint _strip_invisible _isnumber _isint _isnumber rfind isinstance list max map get max list hasattr map index _fields zip_longest names keys range values len get join list search map _normalize_tabular_data zip hasattr hasattr linebetweenrows datarow lineabove padding _build_line linebelow _pad_row linebelowheader _build_row append headerrow extract_hyperparams HyperparamWrapper default_device isinstance pin_memory Tensor ndarray isinstance view scatter_ copy_ parameters data zip getattr dir isinstance DeviceWrapped print dict items list isinstance dict hasattr split join load_progress append load_params AttrDict hasattr isinstance list sorted map flatten unique isinstance isinstance f f pop randint list len arange slice shuffle range len list min array append max range len join list percentile50 plot print stds means percentile75 mean percentile25 Layout zip append Figure Scatter range enumerate len subplots percentile50 stds grid set_visible legendHandles list str set_linewidth savefig legend range replace plot means set_xlim enumerate percentile25 percentile75 fill_between set_ylim len extract nanmedian where nanpercentile make_plot Selector custom_series_splitter round max clip str list sorted extract_distinct_params map make_plot_eps append AttrDict custom_filter asarray format product replace sliding_mean mean zip float enumerate items int print maximum dict nanmean filter nanstd std _filters len get get parse_float_arg loads get_plot_instruction safer_eval args get_plot_instruction list sorted disable_variant extract_distinct_params data_paths load_exps_data set flatten list 
disconnect is_using warning OPEN_CLIENTS set array add_group str time get_state format motor_ids input print error reset_time eval set_state qpos set_motors_engaged sleep append disentangle_dclaw enumerate set_state set_motors_engaged sleep parse_args basicConfig add_argument ArgumentParser policy env_name format format parse_env_args unwrapped print eval reset set_motors_engaged input sleep range set_env_params num_repeats items time defaultdict record_duration list Trajectory render reset append step range action_fn seed make total_reward arg_def_fn parse_env_args format sorted print add_argument policy_factory durations output dict do_rollouts ArgumentParser env_factory set_env_params append basicConfig param info debug add_argument parse_env_params env_name ArgumentParser device parse_args int float is_value_convertable split user_warning_raise_exception import_module getattr split transpose matmul eigh vstack len str norm ndim any warning warning normpath join startswith geom joint compiler option root MJCModel default geom joint compiler option root MJCModel default array hlines isclose plot vlines draw_wall scatter get int bool __get_arg_config b64decode args_data loads use_cloudpickle decode __version__ launch_command join basename launch_command isinstance make_python_command mount_dir dirname MountLocal format encode_args md5 print wait Popen Sweeper launch_python print check_output print join check_call seed GCSL print set_gpu get_params manual_seed get_env_params create_env join Local LocalDocker EC2AutoconfigDocker run_sweep_doodad t size eq mean class_select logsumexp ReplayBuffer dict default_gcsl_params default_markov_policy discretize_environment get action_space DiscretizedActionEnv isinstance IndependentDiscretizedStochasticGoalPolicy DiscreteStochasticGoalPolicy abs array clip continuous seed format heuristic print render reset step sample_goal SawyerDoorGoalEnv sample stack append step dict update | # Goal-Conditioned Supervised Learning (GCSL) 
This repository provides an implementation of Goal-Conditioned Supervised Learning (GCSL), as proposed in *Learning to Reach Goals via Iterated Supervised Learning*. The manuscript is available on [arXiv](https://arxiv.org/abs/1912.06088).
If you use this codebase, please cite:
Dibya Ghosh, Abhishek Gupta, Justin Fu, Ashwin Reddy, Coline Devin, Benjamin Eysenbach, Sergey Levine. *Learning to Reach Goals via Iterated Supervised Learning*.
BibTeX source is provided at the bottom of the Readme.
## Setup
Conda
```
conda env create -f environment/environment.yml | 3,165
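The GCSL readme above only names the method; the core idea of the cited paper is hindsight relabeling: any state the agent actually reached later in a trajectory is treated as the goal for the earlier (state, action) pairs, which are then fit by ordinary supervised learning. The stdlib-only sketch below illustrates just that relabeling step — the function name and tuple layout are our own invention, not the repo's API (in the listed codebase the analogous logic lives around `gcsl/algo/buffer.py` and `gcsl/algo/gcsl.py`).

```python
import random

def relabel_trajectory(states, actions):
    """Hindsight relabeling as used by goal-conditioned supervised learning.

    For each timestep t, pick a state the agent actually reached later and
    treat it as the goal that action a_t was "correct" for.  Returns
    (state, goal, action, horizon) tuples suitable for ordinary supervised
    learning of a goal-conditioned policy.
    """
    data = []
    for t in range(len(actions)):
        # states has one more entry than actions; sample a strictly later state
        k = random.randrange(t + 1, len(states))
        data.append((states[t], states[k], actions[t], k - t))
    return data
```

Training then reduces to behavior cloning on these tuples, conditioning the policy on both the goal and the remaining horizon.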
noureldien/timeception | ['action recognition'] | ['Timeception for Complex Action Recognition'] | datasets/charades.py core/image_utils.py nets/i3d_torch_charades_utils.py nets/layers_pytorch.py experiments/train_keras.py nets/resnet_152_pytorch.py nets/i3d_torch_charades.py nets/i3d_keras.py nets/timeception.py core/keras_utils.py core/const.py __doc__.py core/metrics.py nets/i3d_torch_charades_test.py core/config_utils.py experiments/test_keras.py core/data_utils.py core/pytorch_utils.py experiments/train_pytorch.py main.py nets/layers_keras.py nets/timeception_pytorch.py core/config.py core/utils.py nets/resnet_152_keras.py __main cfg_print_cfg get_machine_name cfg_from_list __parse_gpu_id cfg_from_file cfg_from_attrdict __config_gpu_for_keras __config_gpu_for_tensorflow __config_gpu_for_pytorch cfg_merge_dicts import_dl_platform config_gpu cfg_from_dict __config_gpu_for_caffe cfg_sanity_check DataGeneratorCharades AsyncLoaderVideoFeatures DatasetCharades __resize_frame resize_crop_scaled AsyncImageReaderMultiTHUMOSForI3DKerasModel __resize_crop resize_crop resize_keep_aspect_ratio_padded __resize_keep_aspect_ratio_padded AsyncImageReaderBreakfastForI3DKerasModel __resize_keep_aspect_ratio_min_dim AsyncImageReaderResNet152Keras resize_keep_aspect_ratio_max_dim __resize_keep_aspect_ratio_max_dim resize_frame resize_keep_aspect_ratio_min_dim __resize_crop_scaled save_model_figure save_model load_model map_charades calc_num_batches SaveCallback layer_exist map_sklearn acuracy_top_n map_charades accuracy save_model load_model padding1d padding3d summary ModelSaver calc_padding_1d h5_dump get_size_in_kb normalize_sum file_pathes pkl_dump learn_manifold file_names print_array_joined Path folder_names json_dump byte_dump normalize_range_0_to_1 h5_load_multi print_counter get_model_feat_maps_info byte_load remove_extension normalize_mean normalize_l1 get_array_memory_size get_expected_memory_size h5_dump_multi AttrDict pkl_load get_size_in_mb DurationTimer h5_load 
debinarize_label folder_pathes csv_load mat_load txt_dump convert_dict_to_attrdict calc_num_batches get_file_extension json_load normalize_mean_std array_to_text txt_load yaml_load timestamp normalize_l2 print_array get_size_in_gb __convert_seconds_to_frame_idx __count_time_in_each_video _02_prepare_annotation_frame_dict __sample_frames_ordered extract_features_i3d_charades __preprocess_img __get_video_frame_pathes _13_prepare_annotation_frames_per_video_dict_untrimmed_multi_label_for_i3d _06_prepare_video_annotation_multi_label __get_frame_names_from_csv_file _12_prepare_annotation_frames_per_video_dict_multi_label_all_frames __get_frames_names_in_given_duration __get_frame_names_untrimmed_from_csv_file_for_ordered __relative_to_absolute_pathes _03_prepare_annotation_frame_list _01_prepare_annotation_class_names __pre_process_for_charades __count_how_many_videos_per_class __sample_frames_for_i3d __sample_frames_ordered_for_resnet __get_frames_relative_pathes_in_given_duration _08_prepare_annotation_frames_per_video_dict_multi_label __sample_frames _14_prepare_annotation_frames_per_video_dict_untrimmed_multi_label_for_resnet_ordered __test_video_names_in_annotation_list __get_frame_names_untrimmed_from_csv_file_for_i3d test_tco train_ete __define_data_generator __define_timeception_model train_tco __main train_ete __main __define_loader Model train_tco __define_timeception_model _obtain_input_shape extract_features Inception_Inflated3d evaluate_model conv3d_bn InceptionI3d MaxPool3dSamePadding InceptionModule Unit3D InceptionI3d MaxPool3dSamePadding InceptionModule Unit3D __load_i3d_model_rgb __extract_features_rgb load_model_i3d_charades_rgb_for_testing __get_video_frame_pathes extract_features_rgb ExpandDimsLayer GroupedDenseLayer DepthwiseConv3DLayer DepthwiseDenseLayer GroupedConv3DLayer MaxLayer NormalizationLayer SliceLayer DepthwiseConv1DLayer ChannelShuffleLayer TransposeLayer SqueezeLayer DepthwiseConvOverTimeLayer AverageLayer SqueezeAllLayer 
DepthwiseConv2DLayer SumLayer ReshapeLayer DepthwiseConv1DLayer ChannelShuffleLayer identity_block conv_block Scale ResNet152 get_mean_std_for_resnet_152_pytorch_model get_resnet_152_charades_model __grouped_convolutions Timeception __temporal_convolutional_block __get_n_channels_per_branch timeception_layers Timeception __config_gpu_for_caffe __config_gpu_for_tensorflow __config_gpu_for_pytorch __config_gpu_for_keras __parse_gpu_id clear_session str set_session __parse_gpu_id ConfigProto Session __parse_gpu_id set_device str __parse_gpu_id gpu_core_id parse_args add_argument ArgumentParser pformat info AttrDict list items isinstance literal_eval type yaml_load cfg_merge_dicts cfg_sanity_check cfg_merge_dicts items list format literal_eval type split format literal_eval zip type split tile resize int float tile resize int float tile resize shape int tile resize shape int tile resize int shape __resize_keep_aspect_ratio_max_dim tile zeros float plot_model compile loads load_weights model_from_json to_json save_weights layers int float constant concatenate cumsum float32 reduce_sum map_fn argsort mean cast reverse append expand_dims range equal invert astype float sum array nan_to_num mean sum float argmax len float len zip save_state_dict load load_state_dict str pad any max conv3d shape pad any max int str remove format isinstance FloatTensor model print apply OrderedDict lower numpy abs prod array convert_dict_to_attrdict File close value File close len File close create_dataset File close create_dataset range len read_csv values loadmat natsorted natsorted natsorted natsorted mean std mean sum array max divide add join dtype size print print join fit_transform array format now info items list isinstance AttrDict arange data_root_path txt_load pkl_dump zip array data_root_path print pkl_dump average sum range len list data_root_path print pkl_dump choice shape append randint array range pkl_load len pkl_dump add dict unique zip append zeros enumerate pkl_load len 
pkl_dump __get_frame_names_from_csv_file data_root_path pkl_dump __get_frame_names_from_csv_file data_root_path pkl_dump data_root_path __get_frame_names_untrimmed_from_csv_file_for_i3d pkl_dump __get_frame_names_untrimmed_from_csv_file_for_ordered data_root_path max data_root_path print min dict average array len max data_root_path print readlines min dict average array len dict readlines data_root_path len randint choice len int arange tolist float len arange astype int32 float len int float arange len data_root_path print min average sum max len plot_multi data_root_path print append array range pkl_load len data_root_path print vstack enumerate pkl_load len __convert_seconds_to_frame_idx file_names data_root_path __convert_seconds_to_frame_idx file_names data_root_path int float round pkl_dump _sleep list load_video_frames_in_batch transpose squeeze __get_video_frame_pathes range pkl_load update load_model_i3d_charades_rgb_for_testing AsyncVideoReaderCharadesForI3DTorchModel keys time get_images is_busy print reshape dict timestamp summary len array data_root_path array float32 imread astype resize_crop resize_crop astype float32 N_WORKERS now fit_generator __define_timeception_model DATASET_NAME summary SaveCallback info __define_data_generator N_EPOCHS __define_timeception_model get_model_feat_maps_info N_CLASSES BACKBONE_FEATURE BATCH_SIZE BACKBONE_CNN DATASET_NAME data_generator_class N_TC_TIMESTEPS int get_model_feat_maps_info LR ADAM_EPSILON BACKBONE_FEATURE N_TC_LAYERS N_CLASSES Input map_charades Timeception NAME BACKBONE_CNN Model MULTISCALE_TYPE timeception_module CLASSIFICATION_TYPE compile N_TC_TIMESTEPS train_ete add_option OptionParser SCHEME error cfg_from_file config_file warning train_tco parse_args model __define_loader zero_grad save ModelSaver step metric_fn range astype eval float enumerate time backward write int32 loss_fn train numpy get_model_feat_maps_info N_CLASSES BACKBONE_FEATURE BATCH_SIZE n_samples N_WORKERS n_batches BACKBONE_CNN 
DataLoader DATASET_NAME dataset_class N_TC_TIMESTEPS NLLLoss accuracy parameters BCELoss to str warn _obtain_input_shape int concatenate Model load_weights Input conv3d_bn norm exp print DATA_ROOT_PATH zeros sum Inception_Inflated3d predict load norm exp print DATA_ROOT_PATH sum Inception_Inflated3d predict int str add_option OptionParser gpu_core_id is_local_machine __extract_features_rgb parse_args end_num begin_num load InceptionI3d replace_logits eval load_state_dict train cuda pkl_dump _sleep list load_imgs_in_batch transpose __get_video_frame_pathes expand_dims range pkl_load update __load_i3d_model_rgb DATA_ROOT_PATH AsyncImageReaderCharadesForI3DTorchModel keys time get_images is_busy print dict timestamp summary len load InceptionI3d replace_logits load_state_dict train cuda str add str add _obtain_input_shape conv_block get_source_inputs Model load_weights identity_block Input range load items list format replace print Sequential OrderedDict eval load_state_dict DATA_ROOT_PATH train cuda int_shape __grouped_convolutions range __get_n_channels_per_branch as_list int __temporal_convolutional_block append range int float | ## Timeception for Complex Action Recognition
This code repository is the implementation for the paper [Timeception for Complex Action Recognition](https://arxiv.org/abs/1812.01289). We provide the implementation for 3 different libraries: `keras`, `tensorflow` and `pytorch`.
### Citation
Please consider citing this work using this BibTeX entry
```bibtex
@inproceedings{hussein2018timeception,
title = {Timeception for Complex Action Recognition}, | 3,166
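The Timeception readme above points to the paper; its central building block is temporal convolution applied at several kernel sizes in parallel (multi-scale), with the branch outputs concatenated along the channel axis. The stdlib-only toy below illustrates only that multi-scale idea, using a plain moving average in place of a learned kernel — it is not the repo's layer (see `nets/timeception.py` and `nets/timeception_pytorch.py` in the listing for the real grouped-convolution implementation).

```python
def multi_scale_temporal_conv(seq, kernel_sizes):
    """Toy 1-D stand-in for a Timeception block: run a temporal "convolution"
    (here a plain moving average) at several kernel sizes over one feature
    channel and return the per-branch outputs, which a real layer would
    concatenate along the channel axis.
    """
    branches = []
    for k in kernel_sizes:
        # valid (no-padding) sliding-window average with window size k
        branch = [sum(seq[t:t + k]) / k for t in range(len(seq) - k + 1)]
        branches.append(branch)
    return branches
```

Stacking such blocks lets the model cover long, complex actions with tolerance to temporal scale changes, which is the paper's stated motivation.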
nouwaarom/ml-agents | ['unity'] | ['Unity: A General Platform for Intelligent Agents'] | ml-agents/mlagents/trainers/components/reward_signals/curiosity/model.py ml-agents-envs/mlagents_envs/communicator_objects/command_pb2.py ml-agents-envs/mlagents_envs/mock_communicator.py ml-agents-envs/mlagents_envs/communicator_objects/unity_to_external_pb2.py ml-agents-envs/mlagents_envs/communicator.py gym-unity/gym_unity/envs/__init__.py ml-agents-envs/mlagents_envs/communicator_objects/brain_parameters_pb2.py ml-agents/mlagents/trainers/learn.py ml-agents/mlagents/trainers/tests/test_sampler_class.py ml-agents/mlagents/trainers/meta_curriculum.py ml-agents/mlagents/trainers/tests/test_barracuda_converter.py ml-agents/mlagents/trainers/ppo/models.py ml-agents-envs/mlagents_envs/side_channel/raw_bytes_channel.py gym-unity/gym_unity/__init__.py utils/validate_meta_files.py ml-agents/mlagents/trainers/trainer_controller.py ml-agents/mlagents/trainers/components/bc/model.py ml-agents/mlagents/trainers/tests/test_curriculum.py ml-agents/mlagents/trainers/action_info.py ml-agents/mlagents/trainers/tests/test_ppo.py ml-agents/mlagents/tf_utils/__init__.py ml-agents/mlagents/trainers/components/reward_signals/__init__.py ml-agents-envs/setup.py ml-agents-envs/mlagents_envs/side_channel/engine_configuration_channel.py ml-agents-envs/mlagents_envs/communicator_objects/unity_rl_output_pb2.py ml-agents/mlagents/trainers/tests/mock_brain.py ml-agents/mlagents/trainers/tests/test_bcmodule.py ml-agents/mlagents/trainers/tests/test_trainer_controller.py ml-agents/mlagents/trainers/components/reward_signals/reward_signal_factory.py ml-agents-envs/mlagents_envs/rpc_utils.py ml-agents-envs/mlagents_envs/communicator_objects/unity_rl_initialization_output_pb2.py ml-agents/setup.py ml-agents/mlagents/trainers/barracuda.py ml-agents/mlagents/trainers/env_manager.py ml-agents/mlagents/trainers/sac/policy.py ml-agents/mlagents/trainers/ppo/trainer.py 
ml-agents-envs/mlagents_envs/communicator_objects/agent_action_pb2.py ml-agents-envs/mlagents_envs/tests/test_rpc_communicator.py ml-agents-envs/mlagents_envs/tests/test_envs.py ml-agents/mlagents/trainers/brain.py ml-agents-envs/mlagents_envs/side_channel/float_properties_channel.py ml-agents/mlagents/trainers/tests/test_meta_curriculum.py ml-agents/mlagents/trainers/components/reward_signals/curiosity/signal.py ml-agents/mlagents/trainers/simple_env_manager.py ml-agents-envs/mlagents_envs/exception.py ml-agents/mlagents/trainers/curriculum.py ml-agents/mlagents/trainers/tests/test_policy.py ml-agents/mlagents/trainers/ppo/policy.py ml-agents-envs/mlagents_envs/communicator_objects/unity_message_pb2.py ml-agents/mlagents/trainers/tests/test_learn.py ml-agents-envs/mlagents_envs/communicator_objects/agent_info_pb2.py ml-agents/mlagents/trainers/tests/test_demo_loader.py ml-agents-envs/mlagents_envs/communicator_objects/observation_pb2.py utils/validate_versions.py ml-agents-envs/mlagents_envs/tests/test_rpc_utils.py ml-agents/mlagents/trainers/models.py ml-agents-envs/mlagents_envs/tests/test_timers.py ml-agents/mlagents/trainers/__init__.py ml-agents-envs/mlagents_envs/communicator_objects/custom_reset_parameters_pb2.py ml-agents-envs/mlagents_envs/communicator_objects/agent_info_action_pair_pb2.py ml-agents-envs/mlagents_envs/communicator_objects/unity_rl_input_pb2.py ml-agents-envs/mlagents_envs/timers.py ml-agents/mlagents/trainers/tests/test_simple_rl.py ml-agents/mlagents/trainers/exception.py gym-unity/gym_unity/tests/test_gym.py ml-agents/mlagents/tf_utils/tf.py ml-agents/mlagents/trainers/buffer.py ml-agents-envs/mlagents_envs/side_channel/side_channel.py ml-agents/mlagents/trainers/tests/test_subprocess_env_manager.py ml-agents/mlagents/trainers/subprocess_env_manager.py ml-agents/mlagents/trainers/tensorflow_to_barracuda.py ml-agents/mlagents/trainers/agent_processor.py ml-agents/mlagents/trainers/policy.py 
ml-agents-envs/mlagents_envs/communicator_objects/engine_configuration_pb2.py ml-agents/mlagents/trainers/tests/test_rl_trainer.py ml-agents-envs/mlagents_envs/rpc_communicator.py ml-agents-envs/mlagents_envs/communicator_objects/demonstration_meta_pb2.py ml-agents-envs/mlagents_envs/__init__.py gym-unity/setup.py ml-agents/mlagents/trainers/tests/__init__.py ml-agents-envs/mlagents_envs/communicator_objects/unity_output_pb2.py ml-agents-envs/mlagents_envs/communicator_objects/space_type_pb2.py ml-agents/mlagents/trainers/trainer_util.py ml-agents/mlagents/trainers/tests/test_trainer_util.py ml-agents/mlagents/trainers/components/reward_signals/extrinsic/signal.py ml-agents/mlagents/trainers/sac/trainer.py ml-agents/mlagents/trainers/sampler_class.py ml-agents/mlagents/trainers/tests/test_sac.py ml-agents/mlagents/trainers/trajectory.py ml-agents-envs/mlagents_envs/communicator_objects/unity_rl_initialization_input_pb2.py ml-agents-envs/mlagents_envs/base_env.py ml-agents-envs/mlagents_envs/communicator_objects/header_pb2.py ml-agents/mlagents/trainers/sac/models.py ml-agents/mlagents/trainers/tests/test_stats.py ml-agents/mlagents/trainers/components/reward_signals/gail/model.py ml-agents/mlagents/trainers/rl_trainer.py ml-agents/mlagents/trainers/tests/test_reward_signals.py ml-agents/mlagents/trainers/components/reward_signals/gail/signal.py ml-agents-envs/mlagents_envs/tests/test_side_channel.py ml-agents/mlagents/trainers/ppo/multi_gpu_policy.py ml-agents/mlagents/trainers/tests/test_multigpu.py ml-agents-envs/mlagents_envs/environment.py ml-agents/mlagents/trainers/demo_loader.py ml-agents/mlagents/trainers/components/bc/module.py ml-agents-envs/mlagents_envs/communicator_objects/unity_input_pb2.py ml-agents/mlagents/trainers/tests/test_buffer.py ml-agents/mlagents/trainers/trainer.py ml-agents/mlagents/trainers/tests/test_agent_processor.py ml-agents-envs/mlagents_envs/communicator_objects/unity_to_external_pb2_grpc.py 
ml-agents/mlagents/trainers/brain_conversion_utils.py ml-agents/mlagents/trainers/stats.py ml-agents/mlagents/trainers/tf_policy.py ml-agents/mlagents/trainers/tests/test_trajectory.py VerifyVersionCommand UnityGymException ActionFlattener UnityEnv test_gym_wrapper test_multi_agent create_mock_group_spec test_branched_flatten setup_mock_unityenvironment test_gym_wrapper_visual create_mock_vector_step_result VerifyVersionCommand set_warnings_enabled ActionInfo AgentProcessor BarracudaWriter fuse print_known_operations compress Build sort lstm write fuse_batchnorm_weights trim mean gru Model summary Struct parse_args to_json rnn BrainInfo BrainParameters CameraResolution group_spec_to_brain_parameters step_result_to_brain_info BufferException AgentBuffer Curriculum make_demo_buffer load_demonstration demo_to_buffer EnvManager EnvironmentStep SamplerException TrainerConfigError CurriculumError TrainerError MetaCurriculumError CurriculumLoadingError CurriculumConfigError create_environment_factory CommandLineOptions create_sampler_manager parse_command_line run_training prepare_for_docker_run try_create_meta_curriculum main get_version_string MetaCurriculum EncoderType LearningModel LearningRateSchedule Policy RLTrainer MultiRangeUniformSampler UniformSampler SamplerFactory SamplerManager GaussianSampler Sampler SimpleEnvManager StatsWriter StatsSummary StatsReporter TensorboardWriter CSVWriter worker EnvironmentResponse UnityEnvWorker StepResponse SubprocessEnvManager EnvironmentCommand get_layer_shape pool_to_HW flatten sqr_diff process_layer process_model get_layer_rank slow_but_stable_topological_sort get_attr basic_lstm ModelBuilderContext order_by get_epsilon get_tensor_dtype replace_strings_in_list debug embody by_op get_tensor_dims strided_slice remove_duplicates_from_list axis_to_barracuda by_name locate_actual_output_node convert strides_to_HW get_tensor_data very_slow_but_stable_topological_sort gru TFPolicy UnityPolicyException UnityTrainerException Trainer 
TrainerController AgentManager TrainerFactory initialize_trainer load_config _load_config AgentExperience Trajectory SplitObservations BCModel BCModule create_reward_signal RewardSignal CuriosityModel CuriosityRewardSignal ExtrinsicRewardSignal GAILModel GAILRewardSignal PPOModel get_devices MultiGpuPPOPolicy PPOPolicy PPOTrainer get_gae discount_rewards SACPolicyNetwork SACTargetNetwork SACNetwork SACModel SACPolicy SACTrainer create_mock_pushblock_brain create_buffer simulate_rollout create_mock_3dball_brain make_brain_parameters create_mock_banana_brain setup_mock_unityenvironment create_mock_braininfo create_mock_brainparams setup_mock_env_and_brains test_agentprocessor create_mock_policy create_mock_brain test_barracuda_converter sac_dummy_config test_bcmodule_rnn_update test_bcmodule_update ppo_dummy_config test_bcmodule_constant_lr_update create_policy_with_bc_mock test_bcmodule_dc_visual_update test_bcmodule_defaults test_bcmodule_rnn_dc_update test_buffer_sample construct_fake_buffer test_num_experiences assert_array fakerandint test_buffer test_buffer_truncate test_curriculum_load_invalid_json location default_reset_parameters test_init_curriculum_bad_curriculum_raises_error test_curriculum_load_missing_file test_init_curriculum_happy_path test_increment_lesson test_curriculum_load_good test_get_config test_load_demo test_load_demo_dir basic_options test_docker_target_path test_run_training test_env_args test_commandline_args test_init_meta_curriculum_happy_path test_increment_lessons_with_reward_buff_sizes default_reset_parameters MetaCurriculumTest test_increment_lessons measure_vals reward_buff_sizes test_set_all_curriculums_to_lesson_num test_get_config test_set_lesson_nums test_init_meta_curriculum_bad_curriculum_folder_raises_error test_simple_metacurriculum more_reset_parameters test_create_model dummy_config test_average_gradients test_update basic_mock_brain test_take_action_returns_action_info_when_available basic_params 
test_take_action_returns_nones_on_missing_values test_take_action_returns_empty_with_no_agents test_trainer_increment_step test_trainer_update_policy test_min_visual_size test_process_trajectory test_rl_functions test_ppo_model_dc_vector_rnn test_ppo_model_cc_vector_rnn test_ppo_policy_evaluate test_ppo_model_cc_visual dummy_config test_ppo_model_dc_vector test_ppo_model_dc_visual test_ppo_get_value_estimates test_normalization test_ppo_model_cc_vector test_gail_dc_visual sac_dummy_config reward_signal_update reward_signal_eval test_extrinsic test_curiosity_cc test_gail_rnn test_gail_cc ppo_dummy_config create_policy_mock test_curiosity_dc curiosity_dummy_config test_curiosity_visual test_curiosity_rnn gail_dummy_config create_mock_all_brain_info create_rl_trainer test_clear_update_buffer dummy_config test_rl_trainer create_mock_brain create_mock_policy test_sac_update_reward_signals create_sac_policy_mock test_process_trajectory test_sac_model_dc_visual test_sac_cc_policy test_sac_visual_policy test_sac_model_cc_vector_rnn test_sac_model_dc_vector test_sac_model_cc_vector dummy_config test_sac_model_dc_vector_rnn test_sac_model_cc_visual test_sac_rnn_policy test_sac_save_load_buffer test_sac_dc_policy test_empty_samplers sampler_config_1 check_value_in_intervals incorrect_uniform_sampler test_incorrect_sampler test_sampler_config_1 sampler_config_2 incorrect_sampler_config test_incorrect_uniform_sampler test_sampler_config_2 test_simple_sac clamp test_simple_ppo Simple1DEnvironment _check_environment_trains test_tensorboard_writer test_stat_reporter_text test_stat_reporter_add_summary_write test_csv_writer mock_env_factory SubprocessEnvManagerTest MockEnvWorker test_initialization_seed test_take_step_if_not_training test_start_learning_trains_until_max_steps_then_saves basic_trainer_controller test_take_step_adds_experiences_to_trainer_and_trains trainer_controller_with_take_step_mocks trainer_controller_with_start_learning_mocks 
test_start_learning_trains_forever_if_no_train_model test_initialize_ppo_trainer test_handles_no_default_section test_load_config_invalid_yaml test_initialize_invalid_trainer_raises_exception dummy_bad_config dummy_config test_load_config_missing_file test_load_config_valid_yaml test_initialize_trainer_parameters_override_defaults test_raise_if_no_config_for_brain dummy_config_with_override make_fake_trajectory test_trajectory_to_agentbuffer test_split_obs np_zeros_no_float64 np_array_no_float64 _check_no_float64 np_ones_no_float64 VerifyVersionCommand StepResult ActionType AgentGroupSpec BatchedStepResult BaseEnv Communicator UnityEnvironment UnityWorkerInUseException UnityException UnityTimeOutException UnityCommunicationException UnityEnvironmentException UnityActionException MockCommunicator RpcCommunicator UnityToExternalServicerImplementation agent_group_spec_from_proto _generate_split_indices process_pixels batched_step_result_from_proto _process_vector_observation _process_visual_observation TimerNode hierarchical_timer get_timer_root get_timer_tree reset_timers set_gauge timed GaugeNode TimerStack UnityToExternalProtoServicer add_UnityToExternalProtoServicer_to_server UnityToExternalProtoStub EngineConfigurationChannel EngineConfig FloatPropertiesChannel RawBytesChannel SideChannelType SideChannel test_initialization test_reset test_returncode_to_signal_name test_close test_step test_handles_bad_filename test_rpc_communicator_checks_port_on_create test_rpc_communicator_create_multiple_workers test_rpc_communicator_close test_batched_step_result_from_proto generate_compressed_proto_obs test_agent_group_spec_from_proto test_vector_observation test_action_masking_continuous test_process_visual_observation test_action_masking_discrete_1 test_process_pixels test_action_masking_discrete test_action_masking_discrete_2 generate_compressed_data test_process_pixels_gray generate_list_agent_proto test_raw_bytes test_int_channel test_float_properties IntChannel 
test_timers decorated_func main set_version extract_version_string check_versions sample UnityEnv create_mock_group_spec create_mock_vector_step_result setup_mock_unityenvironment step UnityEnv create_mock_group_spec create_mock_vector_step_result setup_mock_unityenvironment step setup_mock_unityenvironment UnityEnv create_mock_group_spec create_mock_vector_step_result sample UnityEnv create_mock_group_spec create_mock_vector_step_result setup_mock_unityenvironment step tuple CONTINUOUS range DISCRETE list array range set_verbosity join isdir print replaceFilenameExtension add_argument exit verbose source_file ArgumentParser target_file sqrt topologicalSort list hasattr layers addEdge Graph print inputs set len list hasattr layers print filter match trim_model compile data layers print tensors float16 replace layers dumps layers isinstance print tensors inputs zip to_json globals Build array_equal pool reduce Build tanh mad tanh mul Build concat add sigmoid sub mad _ tanh mul Build concat add sigmoid mad print sorted keys obs concatenate ones n_agents append zeros is_action_discrete action_mask enumerate is_action_discrete sum resequence_and_append from_agent_proto number_visual_observations vector_actions AgentBuffer append reset_agent array range enumerate make_demo_buffer load_demonstration join suffix isdir endswith isfile append listdir add_argument_group parse_args add_argument ArgumentParser start_learning target_frame_rate create_sampler_manager sampler_file_path put EngineConfig lesson load_config keep_checkpoints str docker_target_name load_model multi_gpu TrainerController save_freq trainer_config_path width quality_level run_id CSVWriter num_envs format create_environment_factory height no_graphics try_create_meta_curriculum add_writer curriculum_folder base_port env_args TrainerFactory time_scale SubprocessEnvManager train_model TensorboardWriter env_path pop SamplerManager load_config set_all_curriculums_to_lesson_num MetaCurriculum chmod format 
basename isdir glob copyfile copytree prepare_for_docker_run replace getLogger set_warnings_enabled setLevel seed Process append range parse_command_line debug run_training start Queue info get_version_string join print cpu randint num_runs FloatPropertiesChannel get_property_dict_copy get_timer_root reset_timers put _send_response StepResponse list set_actions _generate_all_brain_info set_property action set_configuration EngineConfigurationChannel external_brains payload items EnvironmentResponse reset step endswith len print HasField hasattr get_attr isinstance get_attr tensor_shape ndarray isinstance shape int_val bool_val float_val ListFields name ndarray isinstance str tensor_content ndarray product isinstance get_tensor_dtype print get_tensor_dims unpack int_val bool_val array float_val enter append add set Build mul sub insert Build tolist append range len locate_actual_output_node name find_tensor_by_name split locate_actual_output_node name lstm find_tensor_by_name find_forget_bias split get_layer_shape id Struct tensor get_layer_rank layer_ranks hasattr name patch_data rank input_shapes out_shapes input get_attr append replace_strings_in_list tensors embody astype op inputs zip enumerate print float32 patch_data_fn model_tensors map_ignored_layer_to_its_input co_argcount len items list hasattr get_tensors name print process_layer eval slow_but_stable_topological_sort ModelBuilderContext sort assign_ids pop range insert len layers verbose Struct process_model open print_known_operations fuse compress node GraphDef Model dims_to_barracuda_shape insert get_tensor_dims inputs MessageToJson ParseFromString cleanup_layers read memories print sort write trim summary print_supported_ops update str min_lesson_length format SACTrainer PPOTrainer copy warning brain_name get check_config rcls list_local_devices list zeros_like size reversed range append discount_rewards Mock CameraResolution Mock list ones array range brain_name pop create_buffer brain 
sequence_length append range vector_action_space_size resequence_and_append ones number_visual_observations shape AgentBuffer append zeros sum range enumerate len setup_mock_unityenvironment mock_env create_mock_braininfo create_mock_brainparams create_mock_brainparams create_mock_brainparams create_mock_brainparams create_mock_brainparams zeros Mock Mock create_mock_braininfo AgentProcessor range create_mock_policy add_experiences join remove _get_candidate_names convert _get_default_tempdir dirname abspath isfile next mock_env dirname abspath setup_mock_unityenvironment create_mock_braininfo create_policy_with_bc_mock close ppo_dummy_config create_mock_3dball_brain update items list close create_policy_with_bc_mock create_mock_3dball_brain update items list create_policy_with_bc_mock current_lr create_mock_3dball_brain update items list close create_policy_with_bc_mock create_mock_3dball_brain update items list close create_mock_banana_brain create_policy_with_bc_mock update items list close create_mock_banana_brain create_policy_with_bc_mock flatten list range len append range AgentBuffer resequence_and_append get_batch construct_fake_buffer assert_array make_mini_batch AgentBuffer reset_agent array resequence_and_append sample_mini_batch construct_fake_buffer AgentBuffer resequence_and_append construct_fake_buffer AgentBuffer truncate resequence_and_append construct_fake_buffer AgentBuffer Curriculum Curriculum Curriculum dumps StringIO StringIO load_demonstration demo_to_buffer dirname abspath load_demonstration demo_to_buffer dirname abspath MagicMock basic_options MagicMock parse_command_line parse_command_line MetaCurriculum assert_has_calls MetaCurriculumTest increment_lessons assert_called_with MetaCurriculumTest increment_lessons assert_called_with assert_not_called MetaCurriculumTest set_all_curriculums_to_lesson_num MetaCurriculumTest dict update MetaCurriculumTest MetaCurriculumTest Simple1DEnvironment _check_environment_trains reset_default_graph 
MultiGpuPPOPolicy create_mock_brainparams reset_default_graph create_mock_brainparams update Mock reset_default_graph MultiGpuPPOPolicy create_mock_brainparams MagicMock TFPolicy basic_mock_brain basic_params BrainInfo get_action MagicMock TFPolicy basic_mock_brain basic_params BrainInfo get_action MagicMock TFPolicy basic_mock_brain ActionInfo basic_params BrainInfo get_action evaluate group_spec_to_brain_parameters close get_agent_group_spec reset get_step_result MockCommunicator PPOPolicy reset_default_graph step_result_to_brain_info UnityEnvironment get_value_estimates items list next_obs to_agentbuffer make_fake_trajectory BrainParameters PPOPolicy reset_default_graph values get_batched_value_estimates reset_default_graph reset_default_graph reset_default_graph reset_default_graph reset_default_graph reset_default_graph assert_array_almost_equal array discount_rewards Mock increment_step BrainParameters assert_called_with PPOTrainer simulate_rollout update_policy policy PPOTrainer setup_mock_env_and_brains list PPOTrainer make_fake_trajectory BrainParameters process_trajectory values process_trajectory make_fake_trajectory BrainParameters zeros PPOTrainer range run update SACPolicy PPOPolicy setup_mock_env_and_brains ones reset evaluate model simulate_rollout _execute_model prepare_update update_dict make_mini_batch create_policy_mock reward_signal_update reward_signal_eval reward_signal_update reward_signal_eval create_policy_mock dirname abspath create_policy_mock reward_signal_update reward_signal_eval create_policy_mock reward_signal_update reward_signal_eval create_policy_mock reward_signal_update reward_signal_eval create_policy_mock reward_signal_update reward_signal_eval create_policy_mock reward_signal_update reward_signal_eval create_policy_mock reward_signal_update reward_signal_eval RLTrainer dummy_config create_mock_brain list create_rl_trainer end_episode episode_steps values items list construct_fake_buffer create_rl_trainer clear_update_buffer 
SACPolicy setup_mock_env_and_brains update evaluate create_sac_policy_mock simulate_rollout close reset reset_default_graph create_sac_policy_mock simulate_rollout close update_reward_signals reset_default_graph update evaluate create_sac_policy_mock simulate_rollout close reset reset_default_graph update evaluate create_sac_policy_mock simulate_rollout reset reset_default_graph update evaluate create_sac_policy_mock simulate_rollout close reset reset_default_graph reset_default_graph reset_default_graph reset_default_graph reset_default_graph reset_default_graph reset_default_graph str Mock SACTrainer save_model simulate_rollout num_experiences policy setup_mock_env_and_brains SACTrainer make_brain_parameters SamplerManager sample_all sampler_config_1 sampler_config_2 SamplerManager SamplerManager sample_all incorrect_uniform_sampler incorrect_sampler_config Simple1DEnvironment _check_environment_trains Simple1DEnvironment _check_environment_trains clear assert_called_once_with Mock get_stats_summaries add_stat add_writer StatsReporter float range write_stats clear Mock write_text add_writer StatsReporter assert_called_once_with TrainerController assert_called_with MagicMock start_learning assert_called_once MagicMock assert_not_called start_learning assert_called_once MagicMock MagicMock assert_called_once MagicMock EnvironmentStep advance outputs processor assert_not_called assert_called_once_with assert_called_once MagicMock EnvironmentStep advance outputs processor assert_not_called assert_called_once_with BrainParametersMock BrainParametersMock TrainerFactory BrainParameters generate TrainerFactory BrainParameters _load_config StringIO ones AgentExperience append zeros range append from_observations range ones items list to_agentbuffer add set make_fake_trajectory extract_stack filename get __old_np_array _check_no_float64 get _check_no_float64 __old_np_zeros get __old_np_ones _check_no_float64 tuple vector_action_size mean reshape array mean nan_to_num isnan 
warning array sum _generate_split_indices ones discrete_action_branches len astype dot isnan nan_to_num any cast warning split bool is_action_discrete array observation_shapes enumerate range len perf_counter push reset method_handlers_generic_handler add_generic_rpc_handlers UnityEnvironment close MockCommunicator obs n_agents close get_agent_group_spec get_step_result reset MockCommunicator zip UnityEnvironment observation_shapes obs zip ones n_agents step close get_agent_group_spec get_step_result MockCommunicator set_actions zeros UnityEnvironment observation_shapes UnityEnvironment close MockCommunicator close RpcCommunicator close RpcCommunicator close RpcCommunicator list extend ObservationProto AgentInfoProto append prod range len fromarray uint8 BytesIO astype save ObservationProto generate_compressed_data extend shape generate_compressed_data process_pixels rand generate_compressed_data process_pixels rand _process_vector_observation generate_list_agent_proto enumerate generate_compressed_proto_obs rand extend AgentInfoProto _process_visual_observation AgentGroupSpec CONTINUOUS batched_step_result_from_proto generate_list_agent_proto range AgentGroupSpec batched_step_result_from_proto DISCRETE generate_list_agent_proto action_mask AgentGroupSpec batched_step_result_from_proto DISCRETE generate_list_agent_proto action_mask AgentGroupSpec batched_step_result_from_proto DISCRETE generate_list_agent_proto action_mask AgentGroupSpec CONTINUOUS batched_step_result_from_proto generate_list_agent_proto action_mask BrainParametersProto agent_group_spec_from_proto extend _parse_side_channel_message _generate_side_channel_data send_int IntChannel FloatPropertiesChannel _parse_side_channel_message _generate_side_channel_data get_property set_property _parse_side_channel_message _generate_side_channel_data RawBytesChannel encode send_raw_data get_and_clear_received_messages set_gauge replace endswith add set walk join print extract_version_string set values print join 
| <img src="docs/images/unity-wide.png" align="middle" width="3000"/>
<img src="docs/images/image-banner.png" align="middle" width="3000"/>

# Unity ML-Agents Toolkit (Beta)

[](docs/Readme.md) [](LICENSE) ([latest release](https://github.com/Unity-Technologies/ml-agents/releases/tag/latest_release)) ([all releases](https://github.com/Unity-Technologies/ml-agents/releases))

**The Unity Machine Learning Agents Toolkit** (ML-Agents) is an open-source Unity plugin that enables games and simulations to serve as environments for training intelligent agents. Agents can be trained using reinforcement learning, | 3,167 |
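As a plain illustration of the reinforcement-learning loop that trainers such as ML-Agents' PPO/SAC automate, here is tabular Q-learning on a toy deterministic corridor. None of this is ML-Agents code; the environment, hyperparameters, and reward shaping are all invented for the sketch.

```python
import random

# Toy corridor: states 0..4, agent starts at 0 and must reach state 4.
N_STATES, ACTIONS = 5, (-1, +1)
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

random.seed(0)
for _ in range(500):  # episodes
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy action selection
        if random.random() < 0.1:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else -0.01   # small step penalty
        # one-step Q-learning update
        q[(s, a)] += 0.5 * (r + 0.9 * max(q[(s2, b)] for b in ACTIONS) - q[(s, a)])
        s = s2

# After training, the greedy policy steps right from every state.
policy = [max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(N_STATES - 1)]
print(policy)
```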
nrhine1/deep_imitative_models | ['imitation learning', 'autonomous driving'] | ['Deep Imitative Models for Flexible Inference, Planning, and Control'] | dim/plan/traffic_light_decider.py dim/env/util/gsheets_util.py dim/env/run_carla_episode.py dim/plan/pid_controller.py scripts/create_checkpoint_file.py dim/env/util/agent_util.py dim/plan/plot/waypoints_plot.py dim/env/preprocess/carla_io.py dim/env/util/geom_util.py carla_agent.py dim/plan/waypointer.py dim/plan/dim_controller.py dim/plan/user_controller.py dim/env/util/util.py dim/plan/runtime_metrics.py dim/env/util/agent_server.py dim/plan/dim_plan.py dim/load_old_model.py dim/env/preprocess/carla_preprocess.py dim/plan/autopilot_controller.py dim/plan/goal_distributions.py dim/env/util/carla_settings.py dim/env/util/tfutils.py ext/transform.py dim/env/plot/carla_plot.py dim/env/util/carla_gsheets.py _fmt_cfg postprocess_cfg query_remove_logdir main run_carla_client run_and_complete_client load_old_model is_closest_vehicle_behind_us generate_control choose_player_start set_up_directories save_metadata check_for_collision run_episode format_control_string update_pilots_plot PlottablePast DIMPlottingData online_plot full_extent PlottableMixtureWaypoints PlotState PlottableWaypointsDefault create_agent_figure Removable PlottableDestination plot_trajectories PlottablePolygonSeed PlottablePolygon PlottableRegionVertices PlottableControlWaypoints cmap_discretize plot_plan_colormap get_settings PlottableSegmentSet PlottablePlan PlottableRoute GenericPerFramePlottingData PlottablePlanQuality PlottableChosenWaypoint Nonremovable PlottableLegend get_unique_output_dir PlottableText plot_item_legend PlottableControlRegion PlottableTextBlock PlottableManager PlottableWaypoints PlottableDIMTextBlock traj2grid transform Plottable save_data_at_frame load_control_at_frame is_frame_index_valid load_data_at_frame load_measurements_at_frame get_max_frame extract_nonplayer_transforms_list get_occupancy_grid 
extract_nonplayer_transform splat_lidar LidarParams flip_light_state dict_to_json NumpyEncoder get_rectified_sensor_transform get_nearby_agent_ids rectify_rigid_rotation_of_sensor get_rectified_player_transform get_rectifying_player_transform BEVFeatureParams StreamingCARLALoader splat_points build_BEV extract_nonplayer_transforms get_rectified_depth_transform PlayerObservations get_pid randstring kill_old_carla kill_server start_server decide_episode_state EpisodeState EpisodeParams lock_observations discretize_throttle TurnTracker build_sensor_blacklist NonredStuckTracker StationaryTracker SensorParams EpisodeResults MultiepisodeResults create_lidar create_settings create_region_from_route trim_left_right_polygon create_left_region_seed inside_polygon generate_stopping_polygon create_right_region_seed orientation create_straight_region_seed create_arc intersects_q_line get_2d_general_from_two_points_tf generate_unsticking_polygon point_intersects_right postprocess_polygon preprocess_route_2d_for_polygon do_intersect get_2d_general_from_two_points GSheetsResults save_feed_dict safe_divide member_initialize Timer noisy_autopilot DIMJointMiddleLowController DIMPlanner Plan PosteriorComponents create SegmentSetIndicator EmptyGoalLikelihood BatchedComponentsMixture BatchedSingleGoalNormal RegionIndicator GoalLikelihood BatchedGaussianMixture PIDController clip_throttle PIDSteerController PIDBrakeController compute_setpoint_signed_distance CarPIDControllers pilot_with_PID_steer_controller PIDThrottleController plan_p_steer clip_steer FrameMetrics print_joint_tex print_tex compute_pothole_totals EpisodeMetrics get_red_light MultiepisodeMetrics multiep_from_concluded TrafficLightDecider UserController init Waypointer WaypointerError Countlog main waypoints_plot Transform main join format argv postprocess_cfg print port info host run_and_complete_client pid format skip_to n_episodes n_episodes_run exit killpg SIGKILL setpgrp info sleep register run_carla_client 
start_server data pid T_past checkpoint_path hires_plot frames_per_episode shapes reset_default_graph S_world_frame open seed LidarParams list create argv experiment DIMJointMiddleLowController create_agent_figure set_trace create_session shape safe_load server dim range old_version load_annotated_model ESPInference _fmt_cfg format plot record_wait_frames debug TT close phi debug_post_load waypointer MultiepisodeMetrics info model_path main save_period_frames S_past_world_frame plotting join int T frompidconf DIMPlanner ESPPhi zeros len format from_bytes experiment tolist urandom format print eval input nof waypointer dim experiment import_meta_graph set_trace noisy_autopilot inverse begin_new_episode choose_player_start populate_expert_feeds dict_to_json info save_plots join error NonredStuckTracker save_dim_feeds online_plot frames_per_episode lock_nd list randomize_seeds TurnTracker imgfmt savefig sleep append dim start_episode trackers plot inverted prepare_waypoints_for_control plotting get_waypoints_from_transform_and_measurement save_metadata is_turning n_failures update_pilots_plot load_settings warning intersection_count decide_episode_state is_stuck_far_from_red_lights settings send_control specific_episodes AttrDict range update episode Waypointer main save_to_disk inv_tform_t remove_second_row reset all_episode_metrics override_pilot check_for_collision populate_phi_feeds PlayerObservations data update_passenger_comfort full_extent read_data experiment lock_observations transformed index_new_measurement StationaryTracker format get_upcoming_traffic_light debug generate_control get_unsticking_waypoints waypointer red_light_violations GenericPerFramePlottingData prune_old deque items pilot frame n_successes set_up_directories conclude_episode collision_other format root_dir episode mkdir makedirs format debug player_start_spots randint max len format root_dir collision_pedestrians is_closest_vehicle_behind_us debug collision_vehicles episode warning 
conclude_episode info collision_other norm HasField location argmin vector3_to_np non_player_agents transform_points stack append DIMPlottingData generate_mid_and_low_level_controls data remove plot plot_overhead_im axis add set shape imshow zoom_out_bounds zoom_bounds plot_zoomed removable ravel plot_keys flush_events enumerate cmap linspace concatenate get_position format subplots debug pause draw axis set_position subplots_adjust info ravel update get plot debug set_alpha zip set_ydata enumerate update_dim update_from_observation inverse concatenate print out_fmt format range axis roll scatter savefig figure linspace legend get_legend_handles_labels xlabel imshow stack set_visible savefig linspace figure tick_params warning len union join list format items info makedirs join format load_control_at_frame isfile load_measurements_at_frame AttrDict join format join format join format glob int sorted format build_overhead_lidar Lidar32 build_occupancy_grid concatenate debug player_measurements get_occupancy_grid shape build_overhead_semantic splat_lidar Transform Rotation Translation set transform get_rectifying_player_transform get_transform get_rectifying_player_transform inverse T asarray _array point_cloud transform_points array get_rectified_sensor_transform linspace T asarray _array point_cloud transform_points array get_rectified_sensor_transform items list defaultdict extract_nonplayer_transforms append extract_nonplayer_transform format non_player_agents id HasField items list sorted location vector3_to_np extract_nonplayer_transforms transform_to_loc append euclidean getstate join seed setstate format randstring relpath chdir __file__ getcwd split dirname info Popen open int kill check_output error strip SIGTERM format is_anyone_stationary size min goal_position get_distance_to_goal info data lock_nd _array append info format set_rotation SeedVehicles set_position CarlaSettings randomize_seeds add_sensor set Camera info create_lidar set_image_size 
set_rotation Lidar set_position set info isclose ones_like zeros_like stack tile ones_like zeros_like where ones_like zeros_like not_equal logical_and where logical_or orientation equal format abs logical_and einsum value reduce_sum floormod stack cast int32 equal norm asarray cos pi sqrt stack around linspace sin det norm arccos concatenate pi copy sqrt stack tile sin postprocess_polygon einsum norm concatenate print copy append range norm dot zeros isclose range error norm trim_left_right_polygon error info argmax asarray info stack linspace zeros_like concatenate cos pi stack linspace sin concatenate cos pi stack linspace sin ones_like zeros_like not_equal f where safe_f getargspec format Control debug brake min autopilot_control steer throttle uniform reverse add_throttle_noise hand_brake max add_steering_noise EmptyGoalLikelihood SegmentSetIndicator RegionIndicator BatchedGaussianMixture norm Control min max arctan2 arctan2 eps get int format format_comfort_metrics print n_successes n_total get_red_light round asarray format print ir mean get_red_light append sem sum all_episode_metrics MultiepisodeMetrics quantify append concluded basicConfig add_argument ArgumentParser parse_args waypoints_plot sorted model glob realpath | [](https://creativecommons.org/licenses/by-nc-nd/4.0/) # Purposes 1. Collect data to train a Deep Imitative Model in CARLA 2. Apply a Deep Imitative Model to CARLA (a pretrained model is included with this repo) Openreview: [https://openreview.net/pdf?id=Skl4mRNYDr](https://openreview.net/pdf?id=Skl4mRNYDr) <img src="example_ims/mixture.gif" width="400"/><img src="example_ims/region.gif" width="400"/> # Primary files 1. `carla_agent.py` : Interface to perform both purposes above 2. `dim/plan/dim_plan.py` : The core planning module that assembles the prior from the model and the goal likelihood. 3. `dim/plan/goal_distributions.py` : Implementations of various goal likelihood distributions. | 3,168 |
nrupatunga/Fast-Image-Filters | ['style transfer'] | ['Fast Image Processing with Fully-Convolutional Networks'] | src/core/network/basic_blocks.py src/core/network/weights_init.py src/run/train.py src/core/trainers/filter_trainer.py src/core/utils/vis_utils.py src/run/app.py src/core/network/custom_nets.py src/run/test.py src/core/dataloaders/mit_dataloader.py MitData AdaptiveBatchNorm2d ConvBlock FIP weights_init LitModel Visualizer get_output apply_filter load_model get_output freeze eval load_from_checkpoint cuda uint8 transpose astype unsqueeze float numpy clip imread | <p align="center"> <h3 align="center">Fast Image Processing with Fully-Convolutional Networks</h3> <p align="center"> PyTorch implementation <br /> <br /> <a href="https://github.com/nrupatunga/Fast-Image-Filters/issues">Report Bug</a> · <a href="https://github.com/nrupatunga/Fast-Image-Filters/issues">Request Feature</a> </p> | 3,169 |
nsfzyzz/boundary_thickness | ['adversarial defense', 'data augmentation'] | ['Boundary thickness and robustness in learning models'] | models/resnet_cifar_CIFAR100.py models/resnet_cifar.py measure_thickness.py visualize_3D.py test_ood.py attack_functions.py train_models.py visualize_thickness_measurement_experiment.py models/densenet.py models/resnet_CIFAR100.py utils.py test_bb.py noisy_mixup.py models/vgg.py Reproduce_thickness_measurement_experiment.py utils3d.py model_zoo.py models/resnet.py test_pgd.py Attacks attacker_linf evaluate test_adv PGD_l2 calculate_thickness update_lr DispatchThread ChildThread check_result return_results return_command get_free_gpu_indices eval_adv_test_blackbox _pgd_blackbox eval_adv_test_whitebox _pgd_whitebox mixup_criterion test mixup_data getData train find_specific_class visualize3D Assert_three_orthogonal run_many Compute_grid_outputs draw_once DenseNet201 DenseNet161 DenseNet121 Transition DenseNet Bottleneck densenet_cifar test DenseNet169 ResNet ResNet18 ResNet34 Bottleneck ResNet101 test ResNet50 BasicBlock ResNet152 resnet44_cifar resnet110_cifar resnet1202_cifar resnet1001_cifar PreActBasicBlock ResNet_Cifar Bottleneck resnet164_cifar preact_resnet110_cifar resnet20_cifar conv3x3 preact_resnet1001_cifar resnet32_cifar PreActBottleneck BasicBlock resnet56_cifar PreAct_ResNet_Cifar preact_resnet164_cifar ResNet ResNet18 ResNet34 Bottleneck ResNet101 test ResNet50 BasicBlock ResNet152 resnet44_cifar resnet110_cifar resnet1202_cifar resnet1001_cifar PreActBasicBlock ResNet_Cifar Bottleneck resnet164_cifar preact_resnet110_cifar resnet20_cifar conv3x3 preact_resnet1001_cifar resnet32_cifar PreActBottleneck BasicBlock resnet56_cifar PreAct_ResNet_Cifar preact_resnet164_cifar VGG data Variable clamp min sign requires_grad_ eval train max range detach astype float32 device Softmax model flatten linspace cuda squeeze logical_and append sum range remainder format softmax1 eval stack item __call__ enumerate norm print num_points 
param_groups info gpus sleep new_query enumerate append str range print min sign requires_grad_ sum max range model_target detach print eval _pgd_blackbox model print min sign requires_grad_ sum max range detach print eval _pgd_whitebox MNIST Compose SVHN DataLoader CIFAR10 CIFAR100 criterion model print backward progress_bar zero_grad step max enumerate len print eval model randperm cuda beta Softmax DataLoader vis_net device Figure abs show view TensorDataset append to plot concatenate softmax1 eval item norm print tqdm dot numpy dot abs view item Softmax concatenate tqdm softmax1 eval DataLoader TensorDataset vis_net device append to numpy range find_specific_class rand make_subplots Compute_grid_outputs show view ones shape update_layout range remainder format plot __call__ long Volume enumerate int norm add_trace Assert_three_orthogonal print dot cpu randint int list subplots set_title plot set_xlabel makedirs tight_layout mean set_ylabel savefig legend append tick_params keys range set_ylim enumerate net densenet_cifar randn ResNet18 size ResNet_Cifar ResNet_Cifar ResNet_Cifar ResNet_Cifar ResNet_Cifar ResNet_Cifar ResNet_Cifar ResNet_Cifar PreAct_ResNet_Cifar PreAct_ResNet_Cifar PreAct_ResNet_Cifar | # Boundary thickness Boundary thickness and robustness in learning models ## Introduction This repository includes all necessary programs to reproduce the results of our NeurIPS paper [Boundary thickness and robustness in learning models](https://proceedings.neurips.cc/paper/2020/file/44e76e99b5e194377e955b13fb12f630-Paper.pdf). The code has been tested on Python 3.7.4 with PyTorch 1.5.0 and torchvision 0.6.0. ## Usage Please use the following commands to download the repo and the example checkpoints. ``` mkdir Boundary_thickness_repo cd Boundary_thickness_repo mkdir data | 3,170 |
nttcslab-nlp/Top-Down-RST-Parser | ['discourse parsing'] | ['Top-Down RST Parsing Utilizing Granularity Levels in Documents'] | tools/rsteval.py src/networks/hierarchical.py src/evaluate/evaluate_trees.py tools/preprocess.py src/trainer/checkpointer.py tools/hdf.py src/dataset/rst_tree.py src/networks/embedder.py tools/make_hdf.py src/networks/ensemble.py tools/label_vocab.py src/trainer/trainer.py tools/embedder.py src/config.py src/dataset/merge_file.py src/dataset/data_loader.py src/evaluate/rsteval.py tools/make_merge.py tools/word_vocab.py tools/data_loader.py src/networks/layers.py tools/evaluate_trees.py src/evaluate/tree_function.py tools/tree_utils.py src/main.py src/dataset/fields.py src/dataset/hdf.py src/trainer/score.py src/networks/parser.py load_config finetune parse test main train Dataset load_vocab rstdt_fields HDF Doc load_tree_from_string main load_gold_tree load_tree rst_parseval original_parseval get_brackets convert2attach_tree is_binary convert2rst_tree TextEmbedder ElmoEmbeddings Embeddings WordEmbedder EnsembleParser joint_tree get_leaf_labels HierarchicalParser DeepBiAffine FeedForward SelectiveGate BiLSTM SpanBasedParser Checkpointer Score Trainer Dataset load_vocab TextEmbedder ElmoEmbeddings Embeddings WordEmbedder main load_gold_tree load_tree HDF load main save count load main main conll_format has_duplication get_starts_xxx separate_tree edu_starts_xx2spans search_new_position write get_parent_labels preprocess make_text_span load_heilman_dataset spans2tree_positions get_span_txt get_edu_strings main get_treepositions init_edu_idx tree_division rst_parseval re_categorize get_brackets convert2labelled_attachment_tree binarize convert2rst_tree load main pickle_counter count train_config test_config add_parser parse_config ArgumentParser set_defaults add_subparsers finetune_config finetune parse print test train load_config ExponentialLR rstdt_fields build_model print load_train_valid Trainer parameters lr_decay Dataset run 
ExponentialLR rstdt_fields print load_train_valid Trainer parameters lr_decay model_path Dataset load_pretrained_model run load_test rstdt_fields load_model doc_id format extend mkdir output_dir zip model_path Dataset format vocab_file rstdt_fields load_model input_doc with_suffix print load_vocab is_dir mkdir model_path iterdir exists Vocab Counter NestedField RawField Field helper split items list format add_argument json_file load_tree tgt_dir rst_parseval ArgumentParser parse_args load_gold_tree iterdir get_brackets len zip convert2rst_tree join split helper print exit treepositions append label split fromstring fromstring Tree copy treepositions enumerate load fmt ns_vocab len save relation_vocab count fromstring Counter split save_hdf vocab_file zip load_vocab extend hdf_file split append to cat WordEmbedder join list map zip append enumerate src with_suffix write map divide load_heilman_dataset tree_division re_categorize join convert2labelled_attachment_tree binarize fromstring make_text_span append get_starts_xxx separate_tree fromstring get_parent_labels make_text_span zip get_edu_strings get_treepositions init_edu_idx append append Tree get_span_txt join leaves int join map leaves append range split append len split append int leaves append label treepositions str treepositions enumerate search_new_position edu_starts_xx2spans spans2tree_positions list index filter append range len append treeposition_spanning_leaves filter range len label all Tree join set_label _re_categorize treepositions split join split vocab pickle_counter update sum | # Top-Down RST Parser This repository is the implementation of "Top-down RST Parsing Utilizing Granularity Levels in Documents" published at AAAI 2020. ## Requirements python 3.6 or newer libraries: - allennlp==0.9.0 - h5py==2.10.0 - nltk==3.5 - numpy==1.18.4 - torch==1.5.0 | 3,171 |
ntucllab/libact | ['active learning'] | ['libact: Pool-based Active Learning in Python'] | examples/albl_plot.py libact/query_strategies/query_by_committee.py libact/query_strategies/density_weighted_meta.py libact/query_strategies/random_sampling.py libact/query_strategies/multilabel/tests/test_multilabel_realdata.py libact/query_strategies/tests/test_realdata.py libact/query_strategies/tests/utils.py libact/labelers/__init__.py libact/models/tests/test_svm.py libact/query_strategies/variance_reduction.py libact/utils/tests/test_criteria.py libact/query_strategies/multiclass/__init__.py libact/__init__.py libact/models/tests/test_logistic_regression.py libact/query_strategies/multiclass/hierarchical_sampling.py libact/query_strategies/multiclass/tests/test_hierarchical_sampling.py libact/query_strategies/multilabel/adaptive_active_learning.py libact/query_strategies/multilabel/multilabel_with_auxiliary_learner.py libact/models/multilabel/binary_relevance.py libact/query_strategies/tests/test_variance_reduction.py libact/query_strategies/tests/test_density_weighted_meta.py libact/base/interfaces.py docs/conf.py libact/query_strategies/quire.py examples/get_dataset.py libact/models/logistic_regression.py libact/labelers/interactive_labeler.py examples/label_digits.py libact/query_strategies/multiclass/active_learning_with_cost_embedding.py libact/query_strategies/multilabel/maximum_margin_reduction.py libact/models/perceptron.py libact/query_strategies/multilabel/cost_sensitive_reference_pair_encoding.py libact/models/__init__.py libact/models/tests/test_sklearn_adapter.py examples/alce_plot.py libact/utils/__init__.py libact/query_strategies/active_learning_by_learning.py libact/query_strategies/density_weighted_uncertainty_sampling.py libact/models/sklearn_adapter.py examples/multilabel_plot.py examples/plot.py libact/base/tests/test_dataset.py libact/labelers/tests/test_labelers.py libact/query_strategies/multilabel/__init__.py 
libact/query_strategies/multilabel/binary_minimization.py libact/query_strategies/tests/test_hintsvm.py libact/models/multilabel/dummy_clf.py libact/labelers/ideal_labeler.py libact/models/multilabel/__init__.py libact/query_strategies/tests/test_quire.py libact/query_strategies/hintsvm.py libact/models/multilabel/tests/test_binary_relevance.py libact/utils/multilabel/__init__.py libact/models/tests/test_perceptron.py libact/query_strategies/multiclass/mdsp.py libact/query_strategies/__init__.py libact/query_strategies/multiclass/tests/test_iris.py libact/query_strategies/tests/test_uncertainty_sampling.py setup.py libact/models/svm.py libact/query_strategies/uncertainty_sampling.py libact/query_strategies/multiclass/expected_error_reduction.py libact/base/dataset.py skip_private_member setup main split_train_test run main split_train_test run main main split_train_test main split_train_test run main split_train_test run import_scipy_mat import_libsvm_sparse Dataset ProbabilisticModel Labeler QueryStrategy ContinuousModel Model MultilabelModel TestDatasetMethods IdealLabeler InteractiveLabeler TestDatasetMethods LogisticRegression Perceptron SklearnAdapter SklearnProbaAdapter SVM _fit_model BinaryRelevance DummyClf BinaryRelevanceTestCase LogisticRegressionIrisTestCase SVMIrisTestCase IrisTestCase SVMIrisTestCase ActiveLearningByLearning Exp4P DensityWeightedMeta DensityWeightedLogisticRegression DWUS HintSVM QueryByCommittee QUIRE RandomSampling UncertaintySampling _E _Phi VarianceReduction ActiveLearningWithCostEmbedding EER HierarchicalSampling _smacof_single_p MDSP smacof_p HierarchicalSamplingTestCase IrisTestCase _calc_approx_err AdaptiveActiveLearning BinaryMinimization BinaryCLF CSRPE CostSensitiveReferencePairEncoding MaximumLossReductionMaximalConfidence MultilabelWithAuxiliaryLearner MultilabelRealdataTestCase DensityWeightedMetaTestCase UncertaintySamplingTestCase QUIRETestCase RealdataTestCase UncertaintySamplingTestCase run_qs init_toyexample 
VarianceReductionTestCase run_qs inherit_docstring_from calc_cost seed_random_state pairwise_f1_score pairwise_rank_loss MultiLabelCriteriaTestCase startswith connect update score make_query append label train range train_test_split Dataset format_sklearn concatenate arange HintSVM run show split_train_test tolist ylabel title dirname QUIRE legend append range IdealLabeler SVM plot mean realpath RandomSampling UncertaintySampling join deepcopy print xlabel ActiveLearningByLearning len list calc_cost zip predict fill_diagonal fetch_mldata rand array vstack unique fit_transform len ALCE SVR figure urlopen list sample data load_digits print target shape score InteractiveLabeler add_subplot make_query set_xlabel LogisticRegression set_xdata input get_position update set_position set_xlim eval label reshape draw set_ylabel train set_ydata set_ylim make_multilabel_classification tolist BinaryRelevance subplot MMC MultilabelWithAuxiliaryLearner BinaryMinimization load_svmlight_file list loadmat shuffle zip reshape array train Dataset estVar predict_real copy sigmoid vstack append train Dataset range len T check_random_state euclidean_distances ones reshape print inv rand copy ravel dot IsotonicRegression sum check_symmetric range fit_transform list check_random_state hasattr check_array argmin copy warn zip _smacof_single_p randint max range train max predict_real copy Dataset concatenate update list make_query zip append label range RandomState isinstance sum sum astype | # libact: Pool-based Active Learning in Python authors: [Yao-Yuan Yang](http://yyyang.me), Shao-Chuan Lee, Yu-An Chung, Tung-En Wu, Si-An Chen, [Hsuan-Tien Lin](http://www.csie.ntu.edu.tw/~htlin) [](https://travis-ci.org/ntucllab/libact) [](http://libact.readthedocs.org/en/latest/?badge=latest) [](https://badge.fury.io/py/libact) [](https://codecov.io/github/ntucllab/libact?branch=master) # Introduction `libact` is a Python package designed to make active learning easier for real-world users. 
The package not only implements several popular active learning strategies, but also features the [active-learning-by-learning](http://www.csie.ntu.edu.tw/~htlin/paper/doc/aaai15albl.pdf) meta-algorithm that assists users in automatically selecting the best strategy | 3,172 |
ntunlp/ptrnet-depparser | ['discourse parsing'] | ['Hierarchical Pointer Net Parsing'] | neuronlp2/nn/init.py neuronlp2/io/alphabet.py neuronlp2/models/sequence_labeling.py neuronlp2/nn/_functions/__init__.py neuronlp2/__init__.py examples/HPtrNetParser.py neuronlp2/io/conllx_data.py neuronlp2/models/__init__.py neuronlp2/io/writer.py neuronlp2/nn/modules/__init__.py neuronlp2/io/instance.py neuronlp2/models/parsing.py neuronlp2/nn/modules/variational_rnn.py neuronlp2/io/utils.py neuronlp2/nn/__init__.py neuronlp2/nn/_functions/variational_rnn.py neuronlp2/io/__init__.py neuronlp2/io/conllx_stacked_data.py neuronlp2/tasks/__init__.py neuronlp2/tasks/parser.py neuronlp2/nn/modules/linear.py neuronlp2/nn/_functions/skipconnect_rnn.py neuronlp2/models/parsing2.py neuronlp2/nn/modules/crf.py neuronlp2/nlinalg/__init__.py neuronlp2/nn/modules/skipconnect_rnn.py neuronlp2/utils.py neuronlp2/io/reader.py neuronlp2/nn/modules/attention.py neuronlp2/io/logger.py neuronlp2/io/conll03_data.py neuronlp2/nn/_functions/masked_rnn.py neuronlp2/nn/modules/masked_rnn.py neuronlp2/nlinalg/nlinalg.py neuronlp2/nn/utils.py main load_embedding_dict Alphabet get_batch read_data iterate_batch_tensor create_alphabets read_data_to_tensor iterate_batch get_batch_tensor get_batch read_data iterate_batch_tensor create_alphabets read_data_to_tensor iterate_batch get_batch_tensor get_batch_stacked_tensor _generate_stack_inputs _obtain_child_index_for_depth _obtain_child_index_for_inside_out read_stacked_data iterate_batch_stacked_variable read_stacked_data_to_tensor _obtain_child_index_for_left2right DependencyInstance NERInstance Sentence get_logger CoNLLXReader CoNLL03Reader CoNLL03Writer CoNLLXWriter PriorOrder BiRecurrentConvBiAffine StackPtrNet HPtrNetPSGate HPtrNetPSTSGate HPtrNetPSTGate BiVarRecurrentConv BiRecurrentConv BiRecurrentConvCRF BiVarRecurrentConvCRF logsumexp logdet assign_tensor freeze_embedding recover_rnn_seq prepare_rnn_seq _ntuple BiAAttention ConcatAttention TreeCRF 
ChainCRF BiLinear MaskedRNN MaskedLSTM MaskedRNNBase MaskedGRU SkipConnectRNN SkipConnectGRU SkipConnectFastGRU SkipConnectRNNBase SkipConnectLSTM SkipConnectLSTMCell SkipConnectRNNCell SkipConnectFastLSTMCell SkipConnectFastLSTM SkipConnectGRUCell SkipConnectFastGRUCell VarMaskedRNNBase VarGRUCell VarFastGRUCell VarRNNCell default_initializer VarMaskedLSTM VarRNNCellBase VarMaskedRNN VarMaskedFastLSTM VarLSTMCell VarMaskedGRU VarFastLSTMCell VarMaskedFastGRU MaskedStep StackedStep AutogradMaskedRNN MaskedRecurrent StackedRNN AutogradMaskedStep StackedStep SkipConnectRNNReLUCell AutogradSkipConnectRNN AutogradSkipConnectStep SkipConnectRecurrent SkipConnectLSTMCell SkipConnectFastLSTMCell SkipConnectStep StackedRNN SkipConnectGRUCell SkipConnectFastGRUCell SkipConnectRNNTanhCell StackedStep VarRNNTanhCell AutogradVarMaskedRNN VarGRUCell VarFastGRUCell VarMaskedRecurrent AutogradVarMaskedStep VarMaskedStep StackedRNN VarRNNReLUCell VarLSTMCell VarFastLSTMCell eval is_punctuation is_uni_punctuation decode_MST get_batch_stacked_tensor p_out ArgumentParser p_in label_smooth clip coverage prior_order save_args parse_args p_rnn info word_embedding join time learning_rate parameters model_name step hidden_size epsilon punctuation arc_space zero_grad dev decoder_layers to get_logger encoder_layers load_embedding_dict set grandPar max_decay beam type_space char_path add_argument decoder_input_size construct_char_embedding_table train read_stacked_data_to_tensor mode tuple clip_grad_norm_ word_path opt char double_schedule_decay sum range MyModel char_dim test schedule gamma num_epochs flush CoNLLXWriter char_embedding print sibling write pos_dim batch_size generate_optimizer unk_replace decay_rate num_filters construct_word_embedding_table freeze freeze_embedding word_embedd pos size model_path int backward create_alphabets skipConnect loss len load vector_size print dict shape load_word2vec_format empty enumerate open load list sorted size add_singleton close add dict set 
keys expand_vocab save info Alphabet get_index get_logger len sentence getNext print length close CoNLL03Reader append max enumerate MAX_CHAR_LENGTH random_sample is_singleton min choice NUM_CHAR_PAD empty binomial zeros float sum range enumerate len MAX_CHAR_LENGTH arange slice is_singleton min shuffle NUM_CHAR_PAD empty binomial zeros float sum range enumerate len MAX_CHAR_LENGTH read_data is_singleton min NUM_CHAR_PAD append zeros to empty range enumerate len random_sample float min new_ones device to sum long arange slice shuffle new_ones device to long range len CoNLLXReader append range len list reversed append range len calc_depth _obtain_child_index_for_left2right pop _obtain_child_index_for_depth _obtain_child_index_for_inside_out append _obtain_child_index_for_left2right max sentence getNext _generate_stack_inputs print length close type_ids append heads CoNLLXReader enumerate MAX_CHAR_LENGTH is_singleton min read_stacked_data NUM_CHAR_PAD append zeros to empty range enumerate len random_sample float min new_ones device to sum long arange slice shuffle new_ones device to long range len setFormatter getLogger addHandler StreamHandler Formatter setLevel INFO print log potrf max isinstance pack_padded_sequence check_decreasing tolist index_select index_select pad_packed_sequence isinstance detach_ sqrt len StackedRNN MaskedStep StackedStep linear cat relu linear tanh cat tanh baddbmm sigmoid unsqueeze cat tanh linear chunk apply sigmoid is_cuda cat tanh baddbmm sigmoid unsqueeze cat tanh linear chunk apply sigmoid is_cuda cat StackedRNN StackedStep SkipConnectStep linear relu linear tanh sigmoid tanh baddbmm unsqueeze tanh linear chunk apply sigmoid is_cuda sigmoid tanh baddbmm unsqueeze tanh linear chunk apply sigmoid is_cuda StackedRNN StackedStep VarMaskedStep match get_instance shape encode range items list ones set add shape dict chuLiuEdmonds array int32 append zeros argmax max range | # ptrnet-depparser This is the source code of our depedency parser 
proposed in the paper "[Hierarchical Pointer Net Parsing](https://arxiv.org/abs/1908.11571)", accepted at EMNLP 2019. Git repository: https://github.com/ntunlp/ptrnet-depparser.git
# Requirements
Python 2.7, PyTorch >= 0.3.0, Gensim >= 0.12.0
# Models
We have implemented the models below in this project; they can be found in ./neuronlp2/models/parsing2.py:
- **HPtrNetPSTGate**: At each step, the decoder receives hidden states from the sibling, the parent, and the previous step. Uses the Gate described in the paper.
- **HPtrNetPSTSGate**: At each step, the decoder receives hidden states from the sibling, the parent, and the previous step. Uses the SGate described in the paper.
- **HPtrNetPSGate**: At each step, the decoder receives hidden states from the sibling and the parent. Uses the Gate described in the paper.
| 3,173 |
nunenuh/craft.pytorch | ['scene text detection'] | ['Character Region Awareness for Text Detection'] | craft/ops/gaussian.py craft/ops/box_ops.py craft/trainer/task.py craft/ops/synthtext.py craft/pred/functional.py craft/transforms/functional.py train.py craft/datasets/custom.py craft/models/backbone.py craft/datasets/utils.py craft/utils/viz.py craft/models/craft.py craft/ops/affinity.py craft/datasets/loader.py craft/nn/loss.py craft/utils/file_utils.py craft/models/vgg16_bn.py craft/models/refinenet.py craft/ops/transform.py craft/trainer/__init__.py craft/transforms/__init__.py craft/datasets/synthtext.py craft/datasets/__init__.py craft/nn/functional.py craft/ops/boxes.py craft/utils/craft_utils.py craft/trainer/helper.py craft/utils/imgproc.py craft/transforms/transforms.py craft/nn/__init__.py craft/utils/viz_utils.py CustomDataset _custom_loader trainset_transform synthtext_trainloader _synthtext_loader synthtext_validloader custom_trainloader validset_transform custom_validloader SynthTextSampler SynthTextChecker SynthTextDataset SynthTextDataLoader raw2bbox combine_point_single raw2boxmulti combine_point_multi load_image raw2box load_json2dict _backbone_feature_names get_backbone _backbone_model CRAFT double_conv RefineNet init_weights vgg16_bn positive_mask_loss negative_mask_loss negative_mask ohem_number positive_mask MSE_OHEM_Loss OHEMLoss MSEOHEMLoss Maploss _affine_box _affine_boxes boxes _find_word_over_char _is_box_intersect iou batch_xymm2coord bounds xymm2coord order center_triangle_top_bottom delta_xy coord2xywh pad batch_coord2xymm centroid coord2xymm triangle_centroid batch_box_coordinate_to_xyminmax box_xyminmax_to_coordinates affinity_boxes affine_boxes affine_box box_coordinate_to_xywh order_points box_delta_xy box_pad box_order batch_box_xyminmax_to_coordinates triangle_centroid box_coordinate_to_xyminmax box_bounds find_word_over_character_box box_center_triangle_top_bottom iou intersect_over_union box_centroid _perspective_transform 
GaussianGenerator _isotropic_gaussian_heatmap raw2bbox combine_point_single raw2boxmulti combine_point_multi raw2box autocrop_warped_image order_points warp_perspective_by_boxes_area get_text_area_coord four_point_transform rotate_image do_autocrop_deskew do_warp_perspective_deskew screen_contour_pad warp_perspective word_boxes revert_back minmax_scale resize_aspect_ratio char_bbox sort_boxes_lrtb sort_boxes_lrtb_segmented normalize_variances normalize_dim load_image_tensor tensor_minmax_scale from_tensor_to_numpy resize load_image boxes_to_images copy_state_dict net_forward load_craft_network unfreeze_conv_cls_module freeze_network str2bool TaskCRAFT sharpness contrast hflip brightness rotate color pad hue vflip crop RandomHue RandomColor NumpyToTensor RandomRotation Compose ScaleRegionAffinity RandomVerticalFlip RandomContrast Resize RandomCrop Normalize RandomSharpness RandomHorizontalFLip RegionAffinityMinMaxScaler RandomBrightness warpCoord getDetBoxes_core getPoly_core adjustResultCoordinates getDetBoxes get_files list_files saveResult normalizeMeanVariance cvt2HeatmapImg resize_aspect_ratio loadImage denormalizeMeanVariance visual_analysis show_word_char show_grid show_with_mask word_bbox_draw_rect revert_back minmax_scale draw_rect image_overlay find_bbox_and_draw_rect from_tensor_to_numpy to_colormapjet to_uint8 Compose Compose SafeDataset trainset_transform CustomDataset DataLoader validset_transform SafeDataset trainset_transform DataLoader validset_transform SynthTextDataset imread cvtColor COLOR_BGR2RGB combine_point_single combine_point_multi raw2boxmulti raw2box range append len startswith startswith _backbone_feature_names _backbone_model data isinstance fill_ Conv2d xavier_uniform_ normal_ zero_ BatchNorm2d Linear gt sum masked_select le sum masked_select center_triangle_top_bottom order append _affine_box range len iou append _is_box_intersect range len sorted tolist _affine_boxes _find_word_over_char coord2xymm append coord2xymm astype float32 
append xymm2coord astype float32 enumerate bounds pad bounds centroid centroid triangle_centroid area Polygon zeros sum diff append float32 astype box_coordinate_to_xyminmax append float32 astype box_xyminmax_to_coordinates box_coordinate_to_xyminmax enumerate bounds box_pad box_bounds centroid box_bounds triangle_centroid box_centroid box_order box_center_triangle_top_bottom append affine_box range len iou sorted intersect_over_union append range len sorted tolist affine_boxes find_word_over_character_box max uint8 norm scaled_gaussian applyColorMap zeros range array clip cvtColor COLORMAP_JET delta_xy warpPerspective array getPerspectiveTransform int max order_points sqrt getPerspectiveTransform warpPerspective array reshape max copy approxPolyDP bitwise_not RETR_LIST grab_contours resize screen_contour_pad array Canny COLOR_BGR2GRAY findContours copy GaussianBlur drawContours CHAIN_APPROX_SIMPLE reshape four_point_transform arcLength threshold_local cvtColor determine_skew rotate cvtColor COLOR_BGR2GRAY batch_box_coordinate_to_xyminmax box_pad COLOR_BGR2GRAY get_text_area_coord box_xyminmax_to_coordinates copy float32 getPerspectiveTransform warpPerspective cvtColor COLOR_BGR2GRAY float32 copy getPerspectiveTransform cvtColor str suffix warp_perspective_by_boxes_area imsave print stem astype rotate_image unlink joinpath Path imread exists autocrop_warped_image str suffix imsave stem astype rotate_image unlink joinpath Path imread exists COLOR_GRAY2RGB array resize_aspect_ratio normalize_variances normalize_dim load_image to to_tensor requires_grad uint8 squeeze astype permute numpy is_cuda detach from_tensor_to_numpy shape max getDetBoxes box_bounds astype int32 append enumerate uint8 threshold sorted CHAIN_APPROX_SIMPLE findContours RETR_EXTERNAL astype float32 copy boundingRect box_pad append sorted batch_box_coordinate_to_xyminmax tolist min append batch_box_xyminmax_to_coordinates enumerate sorted batch_box_coordinate_to_xyminmax tolist min append 
batch_box_xyminmax_to_coordinates enumerate join list items OrderedDict startswith cuda model parameters parameters load CRAFT copy_state_dict load_state_dict cuda isinstance COLOR_RGB2HSV COLOR_HSV2RGB cvtColor fromarray asarray enhance fromarray asarray enhance fromarray asarray enhance fromarray asarray enhance getRotationMatrix2D warpAffine matmul threshold roll max clip connectedComponentsWithStats argmin MORPH_RECT shape append minAreaRect range astype copy sqrt dilate int uint8 getStructuringElement reshape boxPoints min zeros array warpCoord line arange zeros inv float32 reversed shape array getPerspectiveTransform append median warpPerspective range enumerate len getPoly_core getDetBoxes_core len array range len list_files join lower splitext append walk basename imwrite mkdir splitext array COLOR_GRAY2RGB imread array cvtColor astype float32 uint8 astype copy shape max zeros resize applyColorMap uint8 astype COLORMAP_JET show imshow figure get_charword_bbox polylines imshow figure array int subplots set_title suptitle print set_yticks close tight_layout subplots_adjust imshow set_xticks savefig len subplots word_bbox_draw_rect suptitle set_title print set_yticks draw_rect tight_layout subplots_adjust imshow set_xticks find_bbox_and_draw_rect savefig rectangle copy box_bounds draw_rect copy shape getDetBoxes max threshold CHAIN_APPROX_SIMPLE RETR_EXTERNAL findContours astype rectangle boundingRect box_pad append applyColorMap COLORMAP_JET | # craft.pytorch This is a replication of CRAFT(Character-Region Awareness For Text Detection) with training code example. ## Requirement - Anaconda - python 3.8 - pytorch 1.9 - pytorch-lightning 1.4.8 ## Setup Environment To run this repository you need to install and activate the environment from yaml file using anaconda with this command: ``` | 3,174 |
nupam/GANs-for-Image-enhancement | ['style transfer'] | ['A Neural Algorithm of Artistic Style'] | server/helpers.py server/server.py FeatureLoss index enhance data join print resize predict refresh flip_lr choices image2np imsave ascii_uppercase | # GANs-for-Image-enhancement
## Comparing supervised feature loss in GANs with pretraining for image enhancement (super-resolution and decrappify)
#### A big thanks to Jeremy Howard, fast.ai, for his lectures on deep learning
#### Skip to the end of this readme for observations
#### If viewing notebooks on GitHub fails, try the nbviewer/kaggle links in this readme
#### You can test it out at: http://anupam.gq:5042/ (ps: if unavailable, let me know)<br>
#### It includes a web application to view the automatic image enhancement.<br>
GANs are notoriously hard to train: they require multiple GPUs, training times range from many hours to days, and they need large amounts of data. Here we compare two GANs whose discriminators and generators are first pretrained and then put together as a GAN. Two models are trained; the only major difference between them is the loss function used for pretraining the GAN. All other hyper-parameters are the same unless otherwise stated. | 3,175 |
nurpeiis/LeakGAN-PyTorch | ['text generation'] | ['Long Text Generation via Adversarial Training with Leaked Information'] | train.py Discriminator.py convert.py data_iter.py target_lstm.py eval_bleu.py main.py encode.py utils.py Dis_Dataset Real_Dataset real_data_loader dis_data_loader Discriminator Highway truncated_normal tensor_to_text text_to_tensor Generator Manager Worker truncated_normal TargetLSTM get_params get_arguments Real_Dataset Dis_Dataset rvs float view list dump len close extend set open save append array range split append reduce print enumerate close get_params | # LeakGAN-PyTorch
A simple PyTorch implementation of LeakGAN, as described in [Long Text Generation via Adversarial Training with Leaked Information](https://arxiv.org/abs/1709.08624).
## Requirements
* **PyTorch r1.1.0**
* Python 3.5+
* CUDA 8.0+ (for GPU)
## Files
* Discriminator.py: The discriminator model of LeakGAN, including the feature extractor and classification
* Generator.py: The generator model of LeakGAN, including the worker and manager units
* data_iter.py: Data loaders for the Generator and Discriminator | 3,176
nusnlp/MFA4RE | ['relation extraction'] | ['Effective Attention Modeling for Neural Relation Extraction'] | re_models.py CNN get_class_label_map BGWA get_data get_max_len get_padded_mask PCNN write_pred_file get_batch_data read_data MFA get_F1 PCNN_Layer custom_print write_PR_curve pr_curve get_dep_dist Attention predict build_vocab get_sample CNN_Layer load_word_embedding get_threshold get_distance_seq get_words_index_seq load_vocab get_rel_counts EA shuffle_data get_dep_distance torch_train get_model cal_auc get_ent_indicator_seq str print write range len list custom_print OrderedDict uniform append zeros len dump strip close OrderedDict split load_word_embedding open int list QASample strip len min get_dep_dist floor append max range split get_sample int print custom_print strip WordsDepDist loads append range len get_data readlines close open float get_F1 argmax RelationName range len join get_F1 write float open join list readlines close len write open float abs range append split custom_print readlines close len open float round range append split str Id Arg1 len write range dumps OrderedDict Arg2 Text close append argmax max RelationName open int custom_print sort sample range len strip readlines close OrderedDict open OrderedDict Words range len append list range len append list range len list pow append float range len append list range len append list range len WordsDepDist get_max_len get_padded_mask Piece3Mask list Arg2Mask Piece1Mask Arg1Mask append WordsEntIndicator WordsArg1Dist Words WordsArg1DepDist get_words_index_seq get_distance_seq get_dep_distance Piece2Mask WordsArg2DepDist WordsArg2Dist WordsMask get_ent_indicator_seq seed list model Variable min zero_grad tqdm eval get_batch_data manual_seed ceil cuda range len model zero_grad save round cuda get_batch_data seed get_F1 custom_print ceil range predict state_dict get_model Adagrad manual_seed is_available clip_grad_norm float NLLLoss int backward Variable rel_loss_func min now tqdm parameters 
shuffle_data train step len | This repository contains the source code of the paper "Effective Attention Modeling for Neural Relation Extraction", published at CoNLL 2019.
### Datasets ###
The NYT10 and NYT11 datasets used for the experiments in the paper can be downloaded from the following link:
https://drive.google.com/drive/folders/1xWoN8zfK3IA1WZqxBQ1-Nw-y275YE628?usp=sharing
Each line in the '.json' files is one instance. It contains the sentence text, relation mentions, and entity mentions. Fields are self-explanatory.
Each line in the '.dep' files contains the dependency distance information of the entities for the corresponding line in the '.json' file.
Each dataset has a sub-directory named 'Best', which contains our final model that gives the best result mentioned in Table 2 of our paper. Use the following commands to get the results.
python3.5 re_models.py 0 NYT10/ NYT10/Best/ 5 test 1
python3.5 re_models.py 0 NYT11/ NYT11/Best/ 5 test 4
### Requirements ### | 3,177
nusnlp/crosentgec | ['grammatical error correction'] | ['Cross-Sentence Grammatical Error Correction'] | fairseq/fairseq/tasks/translation_ctx.py fairseq/fairseq/data/__init__.py fairseq/fairseq/multiinput_sequence_generator.py scripts/lang8_preprocess.py fairseq/fairseq/criterions/edit_weighted_label_smoothed_cross_entropy.py fairseq/fairseq/optim/fairseq_optimizer.py fairseq/fairseq/tokenizer.py tools/nbest-reranker/log_utils.py fairseq/fairseq/sequence_scorer.py scripts/apply_bpe.py fairseq/tests/test_binaries.py fairseq/fairseq/optim/lr_scheduler/__init__.py fairseq/fairseq/models/lstm.py fairseq/fairseq/criterions/edit_weighted_cross_entropy.py scripts/nbest_reformat.py fairseq/tests/test_train.py fairseq/fairseq/optim/lr_scheduler/fairseq_lr_scheduler.py fairseq/fairseq/optim/lr_scheduler/fixed_schedule.py fairseq/interactive.py fairseq/fairseq/bleu.py fairseq/fairseq/data/indexed_dataset.py fairseq/fairseq/models/fairseq_decoder.py tools/nbest-reranker/lib/pytorch_pretrained_bert/convert_tf_checkpoint_to_pytorch.py fairseq/setup.py fairseq/fairseq/models/fairseq_incremental_decoder.py fairseq/fairseq/modules/downsampled_multihead_attention.py fairseq/fairseq/criterions/adaptive_loss.py fairseq/fairseq/models/transformer.py tools/nbest-reranker/m2.py fairseq/fairseq/optim/nag.py scripts/m2_preprocess.py fairseq/fairseq/models/fconv.py scripts/sentence_pairs_with_ctx.py fairseq/fairseq/data/data_utils.py fairseq/fairseq/optim/sgd.py tools/nbest-reranker/configreader.py fairseq/fairseq/tasks/language_modeling.py tools/nbest-reranker/lib/pytorch_pretrained_bert/modeling.py tools/nbest-reranker/lib/pytorch_pretrained_bert/optimization.py tools/nbest-reranker/lib/pytorch_pretrained_bert/__init__.py fairseq/fairseq/tasks/__init__.py fairseq/train.py fairseq/fairseq/models/fairseq_encoder.py fairseq/fairseq/progress_bar.py fairseq/fairseq/modules/adaptive_softmax.py fairseq/fairseq/data/monolingual_dataset.py fairseq/fairseq/data/language_triple_dataset.py 
fairseq/fairseq/models/fconv_self_att.py tools/nbest-reranker/lib/m2scorer/scorer/reader.py fairseq/tests/test_label_smoothing.py fairseq/fairseq/trainer.py scripts/clean_data.py fairseq/tests/utils.py fairseq/fairseq/models/fconv_dualenc_gec_gatedaux.py tools/nbest-reranker/augmenter.py tools/nbest-reranker/features.py tools/nbest-reranker/lib/pytorch_pretrained_bert/tokenization.py fairseq/fairseq/modules/conv_tbc.py fairseq/fairseq/criterions/__init__.py fairseq/fairseq/modules/__init__.py fairseq/fairseq/data/language_pair_dataset.py fairseq/fairseq/fp16_trainer.py fairseq/fairseq/optim/lr_scheduler/inverse_square_root_schedule.py fairseq/generate.py tools/nbest-reranker/train.py fairseq/fairseq/__init__.py fairseq/scripts/average_checkpoints.py fairseq/scripts/build_sym_alignment.py tools/nbest-reranker/lib/pytorch_pretrained_bert/file_utils.py tools/nbest-reranker/lib/pytorch_pretrained_bert/__main__.py fairseq/fairseq/modules/sinusoidal_positional_embedding.py fairseq/eval_lm.py fairseq/fairseq/criterions/label_smoothed_cross_entropy.py tools/nbest-reranker/lib/m2scorer/scorer/util.py tools/nbest-reranker/rerank.py fairseq/fairseq/optim/lr_scheduler/reduce_lr_on_plateau.py fairseq/fairseq/utils.py tools/nbest-reranker/candidatesreader.py fairseq/fairseq/optim/adagrad.py fairseq/interactive_multi.py tools/nbest-reranker/lib/m2scorer/scorer/levenshtein.py fairseq/tests/test_sequence_scorer.py fairseq/fairseq/modules/linearized_convolution.py fairseq/tests/test_convtbc.py fairseq/fairseq/models/fairseq_model.py fairseq/fairseq/criterions/cross_entropy.py fairseq/fairseq/models/__init__.py fairseq/fairseq/modules/learned_positional_embedding.py tools/nbest-reranker/lib/kenlm_python/example.py fairseq/fairseq/models/composite_encoder.py fairseq/fairseq/modules/beamable_mm.py tools/nbest-reranker/lib/levenshtein.py fairseq/fairseq/modules/scalar_bias.py fairseq/multiprocessing_train.py fairseq/tests/test_utils.py fairseq/score.py fairseq/fairseq/data/dictionary.py 
fairseq/fairseq/data/token_block_dataset.py fairseq/fairseq/multiprocessing_pdb.py fairseq/tests/test_data_utils.py fairseq/fairseq/modules/multihead_attention.py fairseq/fairseq/meters.py fairseq/fairseq/optim/adam.py scripts/nucle_preprocess.py scripts/partition_data_into_train_and_dev.py fairseq/preprocess.py fairseq/fairseq/models/fconv_gec.py fairseq/fairseq/data/fairseq_dataset.py fairseq/fairseq/options.py fairseq/fairseq/modules/grad_multiply.py fairseq/fairseq/distributed_utils.py fairseq/distributed_train.py fairseq/fairseq/criterions/fairseq_criterion.py fairseq/tests/test_average_checkpoints.py fairseq/fairseq/sequence_generator.py fairseq/fairseq/tasks/translation.py fairseq/fairseq/tasks/fairseq_task.py fairseq/tests/test_sequence_generator.py fairseq/fairseq/optim/__init__.py main main main main buffered_read make_batches main buffered_read make_batches main ErrorHandler run main get_parser main validate get_valid_stats load_checkpoint load_dataset_splits save_checkpoint get_perplexity main train get_training_stats BleuStat Scorer distributed_init all_gather_list is_master suppress_output DynamicLossScaler FP16Trainer AverageMeter TimeMeter StopwatchMeter MultiInputSequenceGenerator MultiprocessingPdb add_optimization_args add_model_args eval_bool add_common_eval_args parse_args_and_arch get_generation_parser get_eval_lm_parser eval_str_list add_dataset_args add_distributed_training_args add_checkpoint_args add_eval_lm_args add_interactive_args add_generation_args get_parser get_training_parser progress_bar noop_progress_bar tqdm_progress_bar json_progress_bar build_progress_bar simple_progress_bar SequenceGenerator SequenceScorer tokenize_line Tokenizer Trainer convert_state_dict_type clip_grad_norm_ load_model_state print_embed_overlap save_state set_incremental_state _upgrade_state_dict load_ensemble_for_inference parse_embedding move_to_cuda post_process_prediction load_embedding strip_pad buffered_arange prepare_align_batch load_align_dict 
_override_model_args item make_positions convert_padding_direction get_incremental_state torch_persistent_save _get_full_incremental_state_key fill_with_neg_inf replace_unk get_weighted_alignments checkpoint_paths AdaptiveLoss CrossEntropyCriterion EditWeightedCrossEntropyCriterion EditWeightedLabelSmoothedCrossEntropyCriterion FairseqCriterion LabelSmoothedCrossEntropyCriterion register_criterion build_criterion ShardedIterator CountingIterator numpy_seed collate_tokens infer_language_pair EpochBatchIterator Dictionary FairseqDataset index_file_path IndexedDatasetBuilder code IndexedDataset IndexedRawTextDataset read_longs IndexedInMemoryDataset write_longs data_file_path LanguagePairDataset collate LanguageTripleDataset collate MonolingualDataset collate TokenBlockDataset CompositeEncoder FairseqDecoder FairseqEncoder FairseqIncrementalDecoder FairseqDualEncoderModel BaseFairseqModel FairseqModel FairseqLanguageModel PositionalEmbedding AttentionLayer fconv_wmt_en_de ConvTBC Embedding fconv_lm_dauphin_wikitext103 FConvModel base_lm_architecture fconv_wmt_en_fr LinearizedConv1d FConvLanguageModel fconv_iwslt_de_en fconv_wmt_en_ro fconv_lm_dauphin_gbw Linear base_architecture FConvEncoder extend_conv_spec FConvDecoder PositionalEmbedding ConvTBC FConvDualEncoderModel FConvCustomDecoder LinearizedConv1d Embedding FConvCustomEncoder Gating AttentionLayer base_architecture extend_conv_spec Linear FConvCustomModel PositionalEmbedding ConvTBC FConvCustomDecoder LinearizedConv1d Embedding FConvCustomEncoder base_architecture AttentionLayer extend_conv_spec Linear FConvModelSelfAtt PositionalEmbedding ConvTBC LinearizedConv1d Embedding SelfAttention fconv_self_att_wp base_architecture FConvEncoder FConvDecoder Linear lstm_wiseman_iwslt_de_en LSTMModel Embedding LSTM lstm_luong_wmt_en_de LSTMEncoder AttentionLayer LSTMCell base_architecture LSTMDecoder Linear PositionalEmbedding transformer_vaswani_wmt_en_de_big TransformerDecoderLayer transformer_wmt_en_de_big Embedding 
transformer_wmt_en_de_big_t2t LayerNorm transformer_wmt_en_de TransformerDecoder TransformerModel base_architecture transformer_vaswani_wmt_en_fr_big transformer_iwslt_de_en TransformerEncoder TransformerEncoderLayer Linear register_model_architecture register_model build_model AdaptiveSoftmax BeamableMM ConvTBC SingleHeadAttention Downsample GatedLinear DownsampledMultiHeadAttention Linear GradMultiply LearnedPositionalEmbedding LinearizedConvolution MultiheadAttention ScalarBias scalar_bias SinusoidalPositionalEmbedding Adagrad Adam FairseqAdam FairseqOptimizer NAG FairseqNAG SGD build_optimizer register_optimizer FairseqLRScheduler FixedSchedule InverseSquareRootSchedule ReduceLROnPlateau build_lr_scheduler register_lr_scheduler FairseqTask LanguageModelingTask TranslationTask TranslationContextTask setup_task register_task main last_n_checkpoints average_checkpoints main TestAverageCheckpoints create_dummy_data TestTranslation eval_lm_main train_translation_model preprocess_translation_data TestStories TestLanguageModeling generate_main preprocess_lm_data train_language_model TestConvTBC TestDataUtils TestLabelSmoothing TestSequenceGenerator TestSequenceScorer mock_dict mock_trainer get_trainer_and_epoch_itr TestLoadCheckpoint TestUtils TestDataset TestEncoder dummy_dataloader TestModel dummy_dictionary TestTranslationTask TestIncrementalDecoder create_parser BPE get_pairs encode clean_sentence process remove_tags indent essay_boundary src_tag_map generate_XMLTree indent indent partition_from_xml extract_from_xml augment NBestList NBestGroup NBestItem RefernceManager parse_ini LexWeights BERT LM EditOps feature_extractor SAMPLE WordPenalty b_green blue green b_red BColors set_logger b_yellow white print_args b_fail red b_warning ColoredFormatter b_okblue yellow m2_extractor is_number levenshtein_matrix score merge_edits_del best_edit_seq_bf transitive_arcs matchEdit comp_r check_in_gold_range top_sort shrinkEdit check_in_gold_sub matchSeq relax make_graph 
set_weights get_edits move_ins get_distance merge_non_gold next_identical_edge batch_multi_pre_rec_f1 single_source_shortest_path levenshtein_distance check_movable get_gold_range check_match_gold merge_edits edit_graph merge_graph prev_identical_edge initialize_single_source comp_f1 sort levenshtein_matrix comp_p equals_ignore_whitespace_casing get_prev_edges get_next_edges read_nbest_sentences gold_to_m2 load_annotation fix_cp1252codes pairs max_dict green isASCII red yellow uniq b_warning b_green intersect BColors clean_utf8 softmax b_okblue blue paragraphs frange b_red b_yellow white smart_open min_dict b_fail randint sort_dict convert_tf_checkpoint_to_pytorch cached_path s3_etag http_get s3_request s3_get read_set_from_file get_from_cache filename_to_url url_to_filename split_s3_path get_file_extension BertPreTrainingHeads BertForQuestionAnswering BertEncoder PreTrainedBertModel BertSelfAttention BertForMaskedLM BertOnlyMLMHead BertOnlyNSPHead BertEmbeddings BertOutput BertPredictionHeadTransform BertAttention BertPooler gelu BertForMultipleChoice BertConfig BertLayer BertForTokenClassification BertModel BertForNextSentencePrediction BertIntermediate BertForSequenceClassification BertForPreTraining swish BertLMPredictionHead BertSelfOutput warmup_cosine warmup_constant warmup_linear BertAdam BasicTokenizer WordpieceTokenizer load_vocab whitespace_tokenize _is_whitespace _is_control BertTokenizer _is_punctuation main get int format print check_output gethostname single_process_main distributed_rank distributed_init setup_task data StopwatchMeter make_generation_fast_ dataset cuda SequenceScorer exp load_ensemble_for_inference len load_dataset sum gen_subset avg n target_dictionary path next_epoch_itr split unk eos result_string pad SequenceGenerator Scorer load_align_dict beam score_reference source_dictionary replace_unk append strip stdin array next_epoch_itr hypos make_batches buffered_read buffer_size zip alignments src_str extend argsort max_positions zip 
input_files context_dictionary MultiInputSequenceGenerator Process pid join add_child get_context SimpleQueue device_count ErrorHandler start distributed_world_size append range single_process_main distributed_init add_argument ArgumentParser destdir tgtdict build_dictionary save max srcdict joined_dictionary make_all alignfile dict_path set keys load train_path source_lang finalize target_lang makedirs stdin score add_argument ArgumentParser parse_args Dictionary validate get_dummy_batch Trainer save_checkpoint arch FP16Trainer seed max_tokens set_device device_id get_lr epoch build_model build_criterion manual_seed fp16 lr_step __name__ load_dataset_splits load_checkpoint dummy_train_step EpochBatchIterator stop train max_sentences update items epoch defaultdict get_num_updates train_step validate get_meter print log enumerate reset avg save_checkpoint get_training_stats build_progress_bar next_epoch_itr len get_num_updates format elapsed_time get_lr OrderedDict avg get_perplexity round get_meter update epoch defaultdict items get_valid_stats print valid_step reset avg append build_progress_bar next_epoch_itr get_num_updates best hasattr min OrderedDict avg get_perplexity get_num_updates epoch remove end_of_epoch checkpoint_paths min OrderedDict getattr save_dir exists join epoch format get_num_updates restore_file print lr_step lr_step_update load_state_dict isfile save_dir makedirs data format count print load_dataset dataset len distributed_init_method format init_process_group print distributed_rank startswith get_rank suppress_output print list bytes tolist dumps get_world_size _out_buffers ByteTensor loads all_gather item append _in_buffer cuda range len fileno Lock add_model_args add_optimization_args add_dataset_args add_distributed_training_args add_checkpoint_args get_parser add_interactive_args add_dataset_args get_parser add_generation_args add_eval_lm_args add_dataset_args get_parser eval isinstance update_freq hasattr add_argument_group add_args 
parse_known_args eval_str_list lr parse_args max_sentences add_argument_group add_argument add_argument_group add_argument add_argument_group add_argument add_argument_group add_argument add_argument add_argument_group add_common_eval_args add_argument_group add_common_eval_args add_argument add_argument_group add_argument add_argument_group add_argument log_interval noop_progress_bar tqdm_progress_bar json_progress_bar simple_progress_bar strip sub range items list isinstance OrderedDict is_tensor torch_persistent_save load pop list format update print set upgrade_state_dict load_state_dict _upgrade_state_dict keys state_dict max_positions load build_model _override_model_args upgrade_state_dict load_state_dict append _upgrade_state_dict items list setattr __name__ _get_full_incremental_state_key _get_full_incremental_state_key isinstance format print set symbols keys len range len get tokenize_line enumerate unk_string replace_unk tokenize string ne arange size type_as new unsqueeze expand_as arange LongTensor remainder size eq expand_as sum hasattr norm mul_ item fullmatch append listdir compile enumerate copy_ max fill_ enumerate source_dictionary target_dictionary listdir split copy_tensor max fill_ enumerate seed get_state empty readinto write array list keys LongTensor sort index_select sum merge append normal_ weight constant_ normal_ LearnedPositionalEmbedding weight constant_ bias normal_ weight constant_ bias LinearizedConvolution sqrt normal_ weight constant_ bias sqrt normal_ weight constant_ getattr base_lm_architecture getattr base_lm_architecture getattr getattr base_architecture getattr base_architecture getattr base_architecture getattr base_architecture getattr zero_ zero_ zero_ base_architecture getattr uniform_ uniform_ named_parameters named_parameters uniform_ uniform_ encoder_embed_dim decoder_embed_dim dropout base_architecture getattr dropout base_architecture getattr xavier_uniform_ SinusoidalPositionalEmbedding encoder_ffn_embed_dim 
base_architecture getattr base_architecture base_architecture getattr getattr transformer_vaswani_wmt_en_de_big getattr transformer_vaswani_wmt_en_de_big getattr transformer_vaswani_wmt_en_de_big load items list isinstance OrderedDict HalfTensor append float keys len append listdir fullmatch compile inputs add_mutually_exclusive_group num_epoch_checkpoints output last_n_checkpoints num_update_checkpoints average_checkpoints mosesdecoder_dir fast_align_dir mkdir output_dir _create_dummy_data main parse_args get_parser main parse_args_and_arch get_training_parser stdin parse_args_and_arch get_generation_parser main StringIO main parse_args get_parser main parse_args_and_arch get_training_parser main get_eval_lm_parser parse_args_and_arch MagicMock MagicMock list LongTensor mock_trainer TokenBlockDataset EpochBatchIterator range str finalize Dictionary range add_symbol DataLoader TestDataset enumerate len add_argument ArgumentParser add set get_pairs endswith tuple min extend index append sub strip sub clean_sentence SubElement text strip set loads sub append classify range len len essay_boundary src_tag_map SubElement str sorted list m2 getroot find append range get parse replace Element set startswith ElementTree indent items print write findall len print get_score str format append_feature zip name write now b_yellow close hyp info NBestList open OrderedDict float append split join str sorted argv list getLogger write close info vars keys open setFormatter getLogger addHandler StreamHandler Formatter ColoredFormatter DEBUG setLevel FileHandler NBestList m2_load_annotation float append range len print dict merge_graph list items transitive_arcs zip comp_f1 print set_weights comp_r len levenshtein_matrix single_source_shortest_path comp_p matchSeq float edit_graph make_graph split deepcopy len split deepcopy print matchEdit reversed append range len list best_edit_seq_bf transitive_arcs matchSeq levenshtein_matrix split edit_graph set_weights dict deepcopy print sort 
pop remove list insert keys float matchEdit check_match_gold append index insert join move_ins merge_edits_del print insert len shrinkEdit merge_edits append range split list initialize_single_source merge_non_gold print insert top_sort relax keys append float range len append append deepcopy sorted list print reversed append keys range len items list append items list get_gold_range list remove get_distance print check_in_gold_range check_in_gold_sub merge_edits append float keys range len dict append deepcopy sorted list print min set keys union levenshtein_matrix int read join list items paragraphs close smart_open splitlines append split strip close smart_open append split items endswith append idfun list items sorted sort append is_separator decode str isinstance search sub type __iter__ next append len max abspath save from_json_file str transpose from_numpy getattr list_variables append state_dict format zip load_variable join int print BertForPreTraining fullmatch any split encode hexdigest sha256 str join isinstance str urlparse isinstance exists path netloc urlparse startswith resource split_s3_path Object resource split_s3_path download_fileobj get update write close tqdm iter_content len get str s3_etag join isinstance url_to_filename startswith head makedirs set OrderedDict strip split category category startswith startswith category ord pop convert_tf_checkpoint_to_pytorch | Cross-sentence Grammatical Error Correction ------------------------------------------- This repository contains the code and models to train and test cross-sentence grammatical error correction models using convolutional sequence-to-sequence models. 
If you use this code, please cite this [paper](https://www.aclweb.org/anthology/P19-1042): ``` @InProceedings{chollampatt2019crosent, author = {Shamil Chollampatt and Weiqi Wang and Hwee Tou Ng}, title = {Cross-Sentence Grammatical Error Correction}, booktitle = {Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics}, year = {2019} | 3,178 |
nusnlp/gecmetrics | ['machine translation', 'grammatical error correction'] | ['A Reassessment of Reference-Based Grammatical Error Correction Metrics'] | scripts/system_correlation.py scripts/sentence_correlation.py ResultRow SegmentLevelData MetricLanguagePairData safe_max main safe_avg ResultTable parse_args ResultRow KeyAlreadySetException MetricData MetricLanguagePairData SystemLevelMetricsData ResultTable safe_max main safe_avg NumberOfFieldsNotExpectedException parse_args add_argument ArgumentParser tabulate metrics SegmentLevelData print directions add_metrics_data add_human_data judgments ResultTable list filter filter max list human SystemLevelMetricsData samples add_sample_data | ## A Reassessment of Reference-Based Grammatical Error Correction Metrics
If you use the data/code from this repository, please cite the following [paper](http://aclweb.org/anthology/C18-1231):
```
@InProceedings{chollampatt2018reassessment,
author = {Chollampatt, Shamil and Ng, Hwee Tou},
title = {A Reassessment of Reference-Based Grammatical Error Correction Metrics},
booktitle = {Proceedings of the 27th International Conference on Computational Linguistics},
month = {August},
year = {2018},
address = {Santa Fe, New Mexico, USA}, | 3,179
nutli/concept_normalisation | ['machine translation'] | ['Normalising Medical Concepts in Social Media Texts by Learning Semantic Representation'] | concept_normalisation.py load_word_embedding_dict iterate_minibatches build_embedd_table categorical_accuracy load_data append arange slice shuffle range len dict sqrt uniform info load_word2vec_format vector_size sqrt uniform empty argsort shape_padaxis argmax | # concept_normalisation
This repository contains example data and a simplified version of the code used in the ACL 2016 paper titled "Normalising medical concepts in social media texts by learning semantic representation" (https://www.aclweb.org/anthology/P/P16/P16-1096.pdf). | 3,180
nv-tlabs/GSCNN | ['semantic segmentation'] | ['Gated-SCNN: Gated Shape CNNs for Semantic Segmentation'] | optimizer.py my_functionals/DualTaskLoss.py utils/f_boundary.py my_functionals/__init__.py my_functionals/custom_functional.py utils/AttrDict.py utils/misc.py loss.py train.py network/SEresnext.py network/mynn.py utils/image_page.py transforms/joint_transforms.py network/__init__.py network/wider_resnet.py datasets/__init__.py datasets/cityscapes.py network/Resnet.py network/gscnn.py datasets/cityscapes_labels.py datasets/edge_utils.py transforms/transforms.py config.py my_functionals/GatedSpatialConv.py assert_and_infer_cfg ImageBasedCrossEntropyLoss2d CrossEntropyLoss2d get_loss JointEdgeSegLoss forgiving_state_restore restore_snapshot get_optimizer main train evaluate validate colorize_mask make_cv_splits make_dataset_video make_test_split make_dataset make_split_coarse add_items CityScapes CityScapesVideo assureSingleInstanceName onehot_to_binary_edges onehot_to_mask mask_to_onehot onehot_to_multiclass_edges setup_loaders conv2d_same numerical_gradients_2d gradient_central_diff calc_pad_same convTri compute_normal compute_normal_2 compute_grad_mag compute_single_sided_diferences DualTaskLoss _one_hot_embedding perturbate_input_ _gumbel_softmax_sample _sample_gumbel t GatedSpatialConv2d HighFrequencyGatedSpatialConv2d Conv2dPad _AtrousSpatialPyramidPoolingModule Crop SideOutputCrop MyIdentity GSCNN initialize_weights Norm2d ResNet resnet50 Bottleneck resnet152 conv3x3 resnet34 resnet18 BasicBlock resnet101 se_resnext50_32x4d SENet SEResNetBottleneck SEBottleneck SEResNeXtBottleneck initialize_pretrained_model Bottleneck se_resnext101_32x4d SEModule WiderResNet IdentityResidualBlock WiderResNetA2 bnrelu GlobalAvgPool2d get_model get_net RandomHorizontallyFlip ResizeHeight SlidingCropOld CenterCrop RandomSizedCrop FreeScale RandomRotate CenterCropPad Compose PadImage Scale RandomSizeAndCrop Resize RandomCrop ScaleMin SlidingCrop ClassUniform DeNormalize 
ResizeHeight adjust_saturation RandomBilateralBlur adjust_hue FreeScale RelaxedBoundaryLossToTensor RandomVerticalFlip RandomGaussianBlur FlipChannels adjust_brightness _is_pil_image MaskToTensor adjust_contrast ColorJitter AttrDict db_eval_boundary eval_mask_boundary db_eval_boundary_wrapper seg2bmap main ImagePage AverageMeter save_log fast_hist evaluate_eval make_exp_name print_evaluate_results prep_experiment save_code batch_weighting syncbn print immutable BatchNorm2d joint_edgeseg_loss cuda img_wt_loss restore_snapshot format snapshot LambdaLR Adam sgd SGD parameters adam amsgrad info load print load_state_dict sgd_finetuned info forgiving_state_restore update format load_state_dict info state_dict setup_loaders validate get_loss evaluate max_epoch immutable range assert_and_infer_cfg start_epoch empty_cache train step get_optimizer prep_experiment get_net joint_edgeseg_loss zero_grad len set_trace update detach_ val format size mean avg info item net enumerate backward print AverageMeter step add_scalar joint_edgeseg_loss update dataset_cls criterion info size AverageMeter evaluate_eval eval item cpu append enumerate str sum num_classes size AverageMeter eval eval_mask_boundary split info cpu zeros numpy enumerate convert putpalette append join sorted append CV_SPLITS range len sorted listdir join join str format make_cv_splits make_test_split add_items info len append join listdir argmax uint8 distance_transform_edt astype pad append range uint8 distance_transform_edt astype pad zeros expand_dims range gblur bs_mult_val bblur Compose test_mode DataLoader bs_mult ngpu CityScapes MaskToTensor color_aug shape pad conv2d calc_pad_same conv2d_same shape repeat Tensor cuda clone shape gradient_central_diff list reversed shape pad conv2d repeat Tensor cuda range cat remainder print numerical_gradients_2d set_trace pi sign convTri atan remainder print numerical_gradients_2d set_trace pi sign convTri atan mul numerical_gradients_2d convTri sqrt max shape 
random_integers cuda size _sample_gumbel cuda show normal print imshow float GatedSpatialConv2d gconv MODEL getattr layer fill_ isinstance modules zero_ BatchNorm2d weight kaiming_normal load_url ResNet load_state_dict load_url ResNet load_state_dict load_url ResNet load_state_dict load_url ResNet load_state_dict load_url ResNet load_state_dict load_url load_state_dict initialize_pretrained_model SENet initialize_pretrained_model SENet format DataParallel info get_model sum cuda import_module getattr net_func Brightness enhance enhance Contrast Color enhance fromarray convert mode array split list map tqdm array zeros sum Pool range disk seg2bmap float sum binary_dilation bool zeros_like astype floor zeros float range glob write_page add_table ImagePage sorted format isinstance sub vars join basicConfig setFormatter addHandler StreamHandler Formatter setLevel INFO join format print call split join str exp date_str tb_path SummaryWriter ckpt exp_path save_log write strftime make_exp_name device_count tb_exp_path makedirs exp_path save print_evaluate_results max fromarray copyfile sum ImagePage format synchronize Compose astype stack avg info zip enumerate add_image join remove uint8 colorize_mask add_table make_grid add_scalar extend nanmean write_page numpy diag makedirs reshape format id2cat info sum diag enumerate add_scalar | # GSCNN
This is the official code for:
#### Gated-SCNN: Gated Shape CNNs for Semantic Segmentation
[Towaki Takikawa](https://tovacinni.github.io), [David Acuna](http://www.cs.toronto.edu/~davidj/), [Varun Jampani](https://varunjampani.github.io), [Sanja Fidler](http://www.cs.toronto.edu/~fidler/)
ICCV 2019
**[[Paper](https://arxiv.org/abs/1907.05740)] [[Project Page](https://nv-tlabs.github.io/GSCNN/)]**

Based on https://github.com/NVIDIA/semantic-segmentation.
## License
```
# Copyright (c) 2019, NVIDIA CORPORATION. All rights reserved. | 3,181
nv-tlabs/STEAL | ['semantic segmentation'] | ['Devil is in the Edges: Learning Semantic Boundaries from Noisy Annotations'] | coarse_to_fine/VisualizerBox.py contours/ContourBox.py coarse_to_fine/refine_cityscapes_coarse.py coarse_to_fine/input_reader.py inference_cityscapes.py utils/VisualizerBox.py inference_sbd.py contours/ContourBox_MLS.py utils/dataloader.py contours/morph_snakes.py utils/vis_utils.py coarse_to_fine/refine_segmentation.py contours/cutils.py models/casenet.py main do_test InputReaderSemMat InputReader InputReaderSemMatDemo InputReaderSemMat2BaseName InputReaderSemMatCoarse InputReaderSemMat2 InputReaderSemMatClickSim InputReaderDummy InputReaderRGBImage InputReaderSemMatCoarsePerComponent InputReaderBaseName InputReaderBdryMat InputReaderBase parse_args GenerateGT_PNGMask do_it main parse_args prepair_contour_box VisualizerBox LevelSetAlignment LevelSetAlignmentBase MLS compute_h_additive compute_h_additive_torch update_callback_in_image seg2edges seg2edges_2d compute_h_caselles_torch morphological_geodesic_active_contour _init_level_set sup_inf _fcycle inf_sup _check_input assert_nD Crop ResNet SideOutputCrop MyIdentity Bottleneck Res5OutputCrop conv3x3 get_upsample_filter BasicBlock casenet101 binary_file_to_channel_masks ValidationDataset ImageFilelist seg_img_to_Kchannels _is_bit_set default_flist_reader _decode_integer VisualizerBox read_resize_image imwrite BORDER_REFLECT unsqueeze resize cuda str list basename copyMakeBorder transpose len append range astype eval net join int uint8 float32 divide tqdm zeros array makedirs load flist_val output_folder ckpt print CaseNet101 add_argument DataParallel root_dir_val ArgumentParser load_state_dict parse_args cuda do_test default_flist_reader add_argument ArgumentParser level_set_config_dict list GenerateGT_PNGMask set_external_list InputReaderSemMat2BaseName zip in_dir tqdm _read_list val_file_list cbox stack coarse_dir LevelSetAlignment output_dir generate_save expand_dims InputReaderBaseName 
middle_step literal_eval step_ckpts level_set_config_dict LevelSetAlignment vis_steps coarse_dir output_dir resize exp_name savez_compressed random_pick list InputReader set_external_list in_dir image_dir literal_eval InputReaderSemMat2 val_dir InputReaderRGBImage expand_dims randompick set_classes_to_keep set_output_folder size copy val_file_list stack zip join VisualizerBox classes_to_keep _read_list tqdm prepair_contour_box cbox save_vis array makedirs print items uint8 list distance_transform_edt astype pad nonzero pad distance_transform_edt astype uint8 sqrt max sqrt max max asanyarray isinstance append binary_erosion append binary_dilation assert_nD isinstance zeros_like _init_level_set ones float64 binary_dilation astype gradient int8 shape _curvop iter_callback zip _check_input range binary_erosion len abs ResNet _is_bit_set range add argwhere fromfile zeros _is_bit_set range array open show imshow resize array open | # STEAL This is the official inference code for: #### Devil Is in the Edges: Learning Semantic Boundaries from Noisy Annotations [David Acuna](http://www.cs.toronto.edu/~davidj/), [Amlan Kar](http://www.cs.toronto.edu/~amlan/), [Sanja Fidler](http://www.cs.toronto.edu/~fidler/) CVPR 2019 **[[Paper](https://arxiv.org/abs/1904.07934)] [[Project Page](https://nv-tlabs.github.io/STEAL/)]**  ## License ``` # Copyright (c) 2019, NVIDIA CORPORATION. All rights reserved. | 3,182 |
nvecoven/BRC | ['time series'] | ['A bio-inspired bistable recurrent cell allows for long-lasting memory'] | Models/CustomRNN.py Launcher.py cells/cells.py datasets/datasets.py run get_cells_list LMU GORU BRC NBRC RememberLinePMnist PermutedMnist CopyInputDataset DenoisingDataset Dataset CustomModel AccuracyMetric CustomRNN print name train CustomRNN | # A bio-inspired bistable recurrent cell allows for long-lasting memory A repository containing the code for the "A bio-inspired bistable recurrent cell allows for long-lasting memory" paper. Link to the paper: https://arxiv.org/abs/2006.05252 ## PyTorch implementation and binary addition task A great [implementation of the NBRC and BRC cells has been made in PyTorch](https://github.com/niklexical/brc_pytorch) thanks to [Nikita Janakarajan](https://github.com/niklexical) and [Jannis Born](https://github.com/jannisborn). They also test the NBRC and BRC cells on binary addition tasks, on which they exhibit great performance. ## Dependencies The only dependency for the code provided is TensorFlow 2 (and NumPy, which is installed by default with TensorFlow). Install TensorFlow: https://www.tensorflow.org/install Nengolib is also required for the LMU cells. | 3,183
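The bistable recurrent cell (BRC) that this repository implements can be sketched in a few lines of NumPy. This is a paraphrase of the update equations from the paper as I understand them, not code from the repository; all weight names, sizes, and the initialization scale below are illustrative:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def brc_step(x, h_prev, U, U_a, U_c, w_a, w_c):
    """One BRC update. Unlike a GRU, the recurrent weights w_a and w_c are
    element-wise (each neuron feeds back only on itself), which is what
    allows individual neurons to become bistable."""
    a = 1.0 + np.tanh(U_a @ x + w_a * h_prev)   # feedback gate, in (0, 2)
    c = sigmoid(U_c @ x + w_c * h_prev)         # update (memory) gate, in (0, 1)
    return c * h_prev + (1.0 - c) * np.tanh(U @ x + a * h_prev)

# Toy run: 4 hidden units, 3 input features, 5 time steps.
rng = np.random.default_rng(0)
n_h, n_x = 4, 3
U = rng.standard_normal((n_h, n_x)) * 0.1
U_a = rng.standard_normal((n_h, n_x)) * 0.1
U_c = rng.standard_normal((n_h, n_x)) * 0.1
w_a = rng.standard_normal(n_h) * 0.1
w_c = rng.standard_normal(n_h) * 0.1
h = np.zeros(n_h)
for _ in range(5):
    h = brc_step(rng.standard_normal(n_x), h, U, U_a, U_c, w_a, w_c)
print(h)  # hidden state stays inside (-1, 1) by construction
```

When a neuron's feedback gate `a` exceeds 1, its self-connection makes it bistable, so it can latch a value indefinitely; if I read the paper correctly, the NBRC variant additionally lets the gates depend on the full hidden state rather than each neuron alone.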
nweir127/COD3S | ['semantic textual similarity'] | ['COD3S: Diverse Generation with Discrete Semantic Signatures'] | src/utils/read_data.py src/utils/dataloaders.py src/scripts/spm_train.py src/utils/metrics.py src/utils/fairseq_cod3_utils.py src/utils/print_utils.py src/utils/device.py src/utils/__init__.py src/sbert_lsh_model.py src/scripts/diversity_evaluation.py src/scripts/mmi_generate.py src/scripts/fairseq_preprocess.py src/scripts/fairseq_generate_mmi.py src/scripts/compute_signatures.py src/scripts/spm_encode.py src/utils/bert_utils.py src/fairseq_options.py src/utils/batched.py src/utils/config.py parse_args_and_arch eval_str_list get_validation_parser add_distributed_training_args add_checkpoint_args get_preprocessing_parser add_generation_args get_parser eval_bool add_eval_lm_args add_preprocess_args add_model_args add_common_eval_args get_eval_lm_parser add_dataset_args add_cod3s_arguments add_interactive_args add_optimization_args get_interactive_generation_parser get_generation_parser get_training_parser SBERTLSHModel LSHModel BertLSHModel respath read_predictions Logger main _main cli_main dataset_dest_file binarize_alignments binarize get_offsets dataset_dest_prefix cli_main main main _main cli_main main batched chunks InputFeatures extract_features collate_features construct_bert_input Config first_lower first_upper model_output_sentence tsv_reader well_formed_sentence Device construct_backward_sample update_beams_mmi rerank_bidi diversity_bleu get_significance_matrix div_threshold diversity_sbert wilcox_test SentenceDataset read_sentences blocks get_num_lines add_preprocess_args get_parser add_model_args add_optimization_args add_dataset_args add_distributed_training_args add_checkpoint_args get_parser add_interactive_args add_dataset_args get_parser add_generation_args add_eval_lm_args add_dataset_args get_parser add_argument_group add_dataset_args add_common_eval_args get_parser eval isinstance items list hasattr max_tokens add_argument_group 
add_argument import_user_module parse_known_args add_args getattr ArgumentParser modify_parser parse_args set_defaults max_sentences items list replace add_argument import_user_module parse_known_args ArgumentParser add_argument_group add_argument add_argument_group add_argument add_argument_group add_argument add_argument_group add_argument add_argument_group add_argument add_argument add_argument_group add_argument add_argument_group add_common_eval_args add_argument add_argument_group add_common_eval_args add_argument add_argument_group add_argument add_argument_group add_argument strip lower match read_sentences zip append join format print results_path gen_subset makedirs forward_prefix_sampling unk prefix_oracle import_user_module verbose basicConfig print_step inference_step hasattr post_process_prediction map encoder gen_subset ones_like Namespace backward_prefix_path integer_decode info zip enumerate join namedtuple source_lang path get_original_text setup_task generator getLogger DirectionTask add_string make_generation_fast_ string sacrebleu progress_bar backward_sequence_data backward_sequence_path tolist add forward_prefix_data append _check_excluded _all_signatures Scorer integer_decode_tokens copy load_align_dict TimeMeter build_criterion beam target_dictionary target_perplexity tqdm encode_line bucket_distance model pad_index eos cuda split_paths prefix_beam half forward_sequence_sampling prefix_exclude_file sum range update SacrebleuScorer forward_prefix_path remove_bpe _decoded_signatures fp16 decoder print build_generator replace_unk backward_prefix_data print_alignment compute_loss log sorted pad rerank_bidi getattr load_model_ensemble load_dataset _num2tensor format inf strip_pad max_len_b cpu next_epoch_itr len vanilla_main parse_args_and_arch get_generation_parser vanilla main load_dictionary destdir tgtdict build_dictionary align_suffix import_user_module save max srcdict get_task joined_dictionary addHandler make_all alignfile dict_path 
info keys FileHandler task train_path source_lang make_all_alignments target_lang dataset_dest_file make_builder finalize dataset_dest_file make_builder finalize format destdir source_lang only_source target_lang dataset_dest_prefix parse_args get_preprocessing_parser add_cod3s_arguments model add_argument Load SentencePieceProcessor ArgumentParser parse_args append clear tqdm len append InputFeatures convert_tokens_to_ids len tensor size arange construct_bert_input model ids mask move strip replace first_upper first_lower strip replace remove_end_punc DataLoader ones_like pad_sequence pad_index stack eos flip cat enumerate len range zip generator inference_step zeros_like construct_backward_sample build_criterion update_beams_mmi array range enumerate append sum div_fn enumerate get_embeddings pdist sum squareform list exp all_pairs _pair_distance sentence_bleu bp zeros sum range len all read print get_num_lines | # COD3S This repository houses the cod3 for - [Nathaniel Weir, Joao Sedoc, and Benjamin Van Durme (2020): COD3S: Diverse Generation with Discrete Semantic Signatures.](https://arxiv.org/pdf/2010.02882.pdf) In _Proceedings of EMNLP_. We train seq2seq models for the purpose of diverse causal generation by generating semantic similarity-preserving LSH bit signatures of sentences' SBERT embeddings. <img src="img/cod3s_overview_camera_ready.png" width="500"> ## Installation 1. This repository uses [conda](https://docs.conda.io/en/latest/miniconda.html) to manage packages and [Ducttape](https://github.com/jhclark/ducttape) to manage intermediate results of the experiment pipeline. Follow the latter's [quickstart guide](https://github.com/jhclark/ducttape#quick-start) to add ducttape to your path. | 3,184 |
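The "LSH bit signatures" mentioned in the COD3S README are, at their core, sign random projections. A minimal generic illustration (not the repository's SBERT-based `SBERTLSHModel`; the dimensions and seed are arbitrary):

```python
import numpy as np

def lsh_signature(embedding, hyperplanes):
    """One bit per random hyperplane: the sign of the projection.
    Vectors separated by a small angle agree on most bits, so Hamming
    distance on signatures tracks cosine similarity."""
    return (hyperplanes @ embedding >= 0).astype(np.uint8)

def hamming(a, b):
    return int(np.sum(a != b))

rng = np.random.default_rng(0)
dim, n_bits = 16, 8
planes = rng.standard_normal((n_bits, dim))

u = rng.standard_normal(dim)
v = u + 0.05 * rng.standard_normal(dim)   # near-duplicate of u
w = -u                                    # opposite direction

print(hamming(lsh_signature(u, planes), lsh_signature(v, planes)))  # small
print(hamming(lsh_signature(u, planes), lsh_signature(w, planes)))  # large
```

As I understand the paper, conditioning the decoder on such a discrete signature before generating the sentence is what lets decoding diversify over semantic clusters.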
nybupt/athena | ['adversarial defense', 'denoising'] | ['ATHENA: A Framework based on Diverse Weak Defenses for Building Adversarial Defense'] | scripts/plot_figs_for_paper.py src/overhead/overhead.py src/models/models.py src/models/test.py src/utils/util.py src/evaluation/compute_accuracy.py src/models/train_new_target_models.py scripts/train_model.py src/models/craft_adversarial_examples.py src/models/predict_testset.py src/attacks/one_pixel.py src/utils/ensemble.py src/rank_correlation/heatmap.py src/__init__.py src/utils/csv_headers.py src/models/train_models.py src/utils/file.py src/models/train_bb_sur_models.py src/data/generate_data_from_ensemble_models.py scripts/train_models.py src/attacks/whitebox.py scripts/visualize_models.py scripts/evaluate_models.py src/clustering/ensemble_model_KMeans.py src/rank_correlation/cal_dissimilarity.py src/clustering/weighted_prob_predict.py scripts/craft_adversarial_examples.py src/visualization/visualize_models.py src/visualization/visualizact.py src/utils/measure.py src/data/data.py src/models/test_ensemble.py src/models/train.py src/models/detection_as_defense.py src/models/evaluate_models.py src/models/eval_models.py src/models/BB_test_surrogate.py src/models/test_ensemble_model_on_all_types_of_AEs.py src/models/train_model.py src/models/BB_test_ensemble.py src/clustering/topKModels_UpperBoundAcc_LuckyPoorSets.py src/models/transformation.py src/clustering/kmeans.py src/visualization/plot_figs_for_paper.py src/clustering/prediction_accuracy_EM_MSV.py src/attacks/carlini_wagner_l2.py src/clustering/clustering_based_defense.py src/clustering/ensemble_model_fuzzyKMeans.py src/rank_correlation/rank_correlation_analysis.py src/models/train_cnn_mnist_models_multiple_dataset.py src/models/latency_sample_wise.py src/data/generate_bb_AE.py src/utils/config.py src/evaluation/eval_whitebox.py src/rank_correlation/plot_dissimilarity.py src/models/collection_new_labels.py scripts/eval_models.py src/utils/plot.py 
src/attacks/attacker.py main reset craft main eval_ideal_model eval_single_model eval_batch eval_single main plot_ideal_accuracy train_model main test_model train_model train_model_batch train_composition train_models_with_newLabels main main keract_visual attack_whitebox get_adversarial_examples CWL2 ZERO CarliniWagnerL2 OnePixel generate get_adversarial_loss get_adversarial_metric get_compile_params generate vote2 vote1 distanceToCenter k_means calAccu main normalize load_data main craft usage get_ideal_accuracy get_test_accuracies PERFORM_MODE compute_model_accuracy get_attack_params get_waive_list gen_greedy init_candidate_targets ATTACK_STRATEGY pick_target_model reset main attack_single get_perturb_upperbound generate_single sort_candidates gen_separately testUndefendedModel saveResultPerTarget saveOneTypeResultPerTarget testOneData testOneEnsemble testOneTargetModel testOneData saveResultPerTarget main reset craft majorityVote distMatrx saveDATable dumpMat usage main eval_ideal_model eval_single_model eval_batch eval_single getTimeCost train_model create_model save_model load_model cnn_mnist load_from_json train_and_save main cnn_cifar train evaluate_model save_to_json lr_schedule usage usage testOneData saveAccTable usage usage train_model main train_model main train_model main test_model train_model train_model_batch train_composition train_models_with_newLabels main usage cartoon_effect quantize affine_trans morph_trans segmentations compress transform_images rotate geometric_transformations cartoonify composite_transforms denoising main augment flip add_noise distort shift filter transform distortion heatmap corrplot adjust_fig_aspect MODE DATA ATTACK MODEL TRANSFORMATION PATH IdealModelEvalHeaders evaluate_ensemble_defenses ensemble_ave_confidence ensemble_random_defense ensemble_defenses_util prediction ensemble_defenses load_models ensemble_top2labels_majority_voting ensemble_majority_voting ORIENT csv2dict dict2csv save_adv_examples frobenius_norm 
plot_lines plotTrainingResult plot_image plot_comparisons LEGEND_LOCATION plot_scatter_with_certainty plot_training_history legend boxplot plot_settings x_iter_schedule plot_difference postAnalysis loadClusteringResult weightedConfBasedDefsTrainPre boxPlot predictionForTest0 predictionForTest clusteringDefensesEvaluation loadModels loadCAVModel kFoldPredictionSetup calAccProb wc_based_defense wcdefenses clusteringBasedDefesTrain weightedConfDefenseEvaluation weightedConfBasedDefsTest calAccuracyAllSingleModels randomChoiceBasedDefense loadAllClusteringResult prediction curvePlot calLatency votingAsDefense calAccuracy createKFoldDirs clusteringBasedDefesTest getUpperBoundAccuracy create2DTable clusteringBasedDefesTrainPre majorityVote weightedConfBasedDefsTrain upperBoundAccuracy createDirSafely maxConfidenceVote wc_mv_defense drawUBCurve main plot_ideal_accuracy Visualizer main keract_visual transform rotate90 rotate180 rotate270 get_df_overshoots get_bim_nbIter get_op_maxIter get_jsma_theta DEBUG get_bim_eps get_op_popsize get_jsma_gamma get_mim_eps save_adv_examples format asarray inf get_cwl2_maxIter get_op_pxCnt upper get_mim_nbIter int get_adversarial_examples print get_cwl2_lr get_pgd_eps get_bim_norm reset load_data transform get_fgsm_eps craft infty DEBUG round max values list TOP RANDOM shape append range format value asarray keys join dict2csv print min get_test_accuracies BOTTOM zeros load format load_model evaluate print get_transformation_compositions ADVERSARIAL_FILE transform normalize split eval_ideal_model split eval_ideal_model set_xticks_fontsize set_ncol set_fontsize set_title plot_scatter_with_certainty set_location set_ylabel_fontsize set_box_anchor legend set_legend set_title_fontsize set_xlabel_fontsize format value csv2dict join print set_yticks_fontsize set_ylabel plot_settings plot_ideal_accuracy format create_model CUR_DATASET_NAME save_model print transform train print format transform evaluate_model set_current_dataset_name train_model 
join mnist CUR_DATASET_NAME format load deepcopy print test_model set_cur_transformation_type load_data ADVERSARIAL_FILE get_AETypes train_model supported_types load_model evaluate shape load_data format create_model save_model load_model evaluate print shape load_data transform normalize train print format transform train_and_save isinstance train_composition train_model_batch init_model_activations load_model Visualizer display_model_activations keract_visual expand_dims get monotonic time CUR_DATASET_NAME format set_session load_model inf print generate close upper shape info ConfigProto Session get monotonic time CUR_DATASET_NAME format inf print generate upper shape info attack_all OnePixel CarliniWagnerL2 model stop_gradient DEBUG KerasModelWrapper get_adversarial_metric input format FastGradientMethod compile pop get_compile_params evaluate print batch_eval ProjectedGradientDescent DeepFool BasicIterativeMethod MomentumIterativeMethod SaliencyMapMethod print zeros list range items zeros list range items shape range seed deepcopy norm randn argmin mean shape distanceToCenter zeros std range print argmax format len format print reshape to_categorical astype upper shape mean std BIM split str time list to_categorical zip zeros round range len print load join compute_model_accuracy format list items sorted print get_ideal_accuracy shuffle dict DEBUG keys range len range round range len set_current_dataset_name mnist extend QUANTIZATIONS inf get_waive_list print format choice load join CUR_DATASET_NAME format print ADVERSARIAL_FILE round frobenius_norm expand_dims attack_whitebox pick_target_model DEBUG argmax list attack_single append expand_dims predict get_attack_params format keys frobenius_norm deepcopy remove print plot_image reset transform len Session monotonic list set_session init_candidate_targets append expand_dims range save_adv_examples format value asarray close zip generate_single ConfigProto keys int print min load_data len init_candidate_targets 
gen_greedy load format std ones print ensemble_defenses_util prediction range mean append accuracy_score round array clip join format ones print testOneData save zeros range load join format ones print range round save zeros accuracy_score argmax clip predict join format save saveOneTypeResultPerTarget argmax predict load join format load_model ones print range round save zeros accuracy_score argmax clip predict join save zeros list range items mean distance_matrix range std zeros transform_images monotonic predict set_current_dataset_name set_learning_rate mnist set_batch_size set_epochs cifar_10 fation_mnist DEBUG print Sequential set_architecture add summary print Sequential set_architecture add summary DEBUG flow Adam shape RESULTS normalize get categorical_crossentropy format plotTrainingResult fit_generator ImageDataGenerator compile int dict2csv evaluate print history split fit save_model warn save RMSprop shape RESULTS normalize format create_model plotTrainingResult ImageDataGenerator compile int dict2csv print MODEL history split fit predict_class isinstance print Sequential zip argmax predict format print save save_to_json split split to_json format save_weights read format model_from_json close load_weights compile open load_model evaluate time clean hstack vstack clean shuffle save array warpAffine format print reshape getRotationMatrix2D shape stack append DEBUG int warpAffine format print reshape float32 stack append DEBUG format print reshape shape stack append DEBUG warpAffine format getAffineTransform print reshape float32 shape stack append DEBUG uint8 format print ones reshape dilate morphologyEx MORPH_CLOSE MORPH_GRADIENT shape stack erode append DEBUG MORPH_OPEN format zeros_like print reshape fit shape stack flow ImageDataGenerator DEBUG append len bitwise_and bilateralFilter DEBUG COLOR_GRAY2RGB medianBlur shape adaptiveThreshold append range get asarray COLOR_RGB2GRAY ADAPTIVE_THRESH_MEAN_C stack pyrDown pyrUp uint8 print reshape cvtColor 
format print ADAPTIVE_THRESH_MEAN_C shape DEBUG ADAPTIVE_THRESH_GAUSSIAN_C COLOR_GRAY2RGB int COLOR_RGB2GRAY COLOR_RGB2LAB MiniBatchKMeans reshape copy shape stack append fit_predict range COLOR_Lab2RGB cvtColor roll resize clip COLOR_GRAY2RGB fromarray shape append range COLOR_RGB2GRAY copy mean stack hsv2rgb int rgb2hsv reshape shift array cvtColor meijering minimum_filter maximum_filter scharr COLOR_GRAY2RGB shape skeletonize sato append prewitt COLOR_RGB2GRAY disk stack median_filter hessian thin gaussian_filter invert frangi entropy reshape roberts rank_filter float32 sobel cvtColor reshape random_noise shape stack append int format COLOR_RGB2GRAY print reshape imencode shape imdecode stack append quit cvtColor reshape denoise_bilateral dict shape denoise_nl_means mean estimate_sigma denoise_wavelet stack denoise_tv_chambolle append double denoise_tv_bregman swirl COLOR_GRAY2RGB COLOR_RGB2GRAY reshape iradon radon float32 iradon_sart shape stack linspace append max cvtColor COLOR_GRAY2RGB COLOR_RGB2GRAY reshape disk gradient shape stack append median cvtColor watershed print isinstance quantize set_cur_transformation_type affine_trans morph_trans segmentations clip list compress rotate geometric_transformations CUR_TRANS_TYPE format cartoonify denoising augment flip add_noise deepcopy distort print shift filter transform_images plot_comparisons copy transform load norm get subplot tick_right barh set_yticks grid set_xlim min GridSpec scatter set_xticks figure set_facecolor color_palette linspace max set_ylim len melt savefig reset_index heatmap get_size_inches min subplots_adjust clean print format absolute join format load_model print name Model append range len time predict copy transform zeros range append choice shape argmax array range tolist Counter shape argmax array range mean shape argmax array range tolist Counter shape ravel array range load load_models clip prediction load ensemble_defenses print get format print plot_comparisons ADVERSARIAL_FILE 
save DEBUG format print reshape zip float abs show join format set_xticklabels FIGURES add_subplot close set savefig figure show join format reshape FIGURES close shape title imshow savefig set_aspect join show format suptitle print reshape FIGURES grid add_subplot axis subplots_adjust close shape imshow savefig figure range set_aspect join show format suptitle print reshape FIGURES grid add_subplot axis subplots_adjust close shape imshow savefig figure range show join list format plot xlabel FIGURES close ylabel title savefig legend keys range len xlim_max FIGURES xlim_min set_major_formatter xticks ylim_max yticks show list FormatStrFormatter ylabel title ylim savefig legend append range format replace plot close xlim keys int join ylim_min print xlabel subplots_adjust x_iter_schedule fill_between len join format plot xlabel FIGURES add_subplot ylabel subplots_adjust title savefig figure legend show join plot print xlabel FIGURES close ylabel title savefig figure legend DEBUG monotonic random_integers zeros argmax max range join list format inf print tolist fit inertia_ getUpperBoundAccuracy createDirSafely save append zeros array range len join print hstack loadClusteringResult calAccuracy diagonal votingAsDefense save zeros argmax range join loadClusteringResult calAccuracy votingAsDefense append zeros range join list clusteringBasedDefesTrainPre print astype clusteringBasedDefesTest save zeros argmax max range clusteringBasedDefesTrain monotonic format ones shape wc_based_defense wc_mv_defense join calAccuracyAllSingleModels print reshape round save append zeros argmax max range len argmax join print hstack argsort calAccuracy save zeros range wcdefenses zeros range calAccuracy wcdefenses join list format weightedConfBasedDefsTrain weightedConfBasedDefsTest print save zip append weightedConfBasedDefsTrainPre range zeros plot xlabel close ylabel tight_layout title savefig xticks yticks join curvePlot argsort vstack save append zeros getUpperBoundAccuracy range 
subplots set_title set_xticklabels close dict set_ylabel savefig boxplot save argmax round max boxPlot str list calAccProb append range hstack copy mean calAccuracy getUpperBoundAccuracy create2DTable load join print extend createDirSafely zeros std drawUBCurve len zeros calAccProb range save argmax loadModels monotonic list transform_images normalize sum range format hstack shuffle copy prediction zip load join int print extend createDirSafely zeros array len loadModels join load format monotonic normalize list print transform_images copy prediction createDirSafely save zip zeros range len save argmax loadModels monotonic list transform_images normalize range format copy prediction zip load join print extend createDirSafely zeros len join read format load_model model_from_json print name set_weights close Model open load_weights sub append get_weights range len monotonic zeros sum range hstack makedirs argmax zeros range round range append monotonic maxConfidenceVote majorityVote append majorityVote join str loadClusteringResult range ones multiply mean shape zeros argmax range items list multiply shape zip zeros argmax max range join str createDirSafely append range | <p align="center"> <img src="https://github.com/softsys4ai/athena/blob/master/reports/figures/logo/Athena_logo.png" width="20%" height="20%" title="Athena logo"> </p> # Athena: A Framework for Defending Machine Learning Systems Against Adversarial Attacks ## Introduction This is the code base for Athena, a framework for defending [machine learning systems](https://pooyanjamshidi.github.io/mls/) against adversarial attacks. We found that, surprisingly, an Ensemble of Many Diverse Weak Defenses, say deep neural networks trained on disjointly transformed data, can be very effective for defending ML systems against adversarial attacks.
This codebase provides a framework for training weak defenses with transformations, building ensembles of weak defenses, and testing their effectiveness with the implemented attack methods, along with all other source code and tooling needed to replicate the results of the experiments in this [publication](https://arxiv.org/pdf/2001.00308.pdf). ## Framework Architecture  ## Manual Installation | 3,185
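In its simplest form, the "ensemble of many diverse weak defenses" idea reduces to aggregating the weak defenses' predictions, for example by majority vote. A generic sketch (not Athena's actual `ensemble_defenses`/`majorityVote` code; the probability array below is a stand-in for real model outputs):

```python
import numpy as np

def majority_vote(probs):
    """probs: (n_defenses, n_samples, n_classes) softmax outputs.
    Each weak defense votes for its argmax class per sample; the
    ensemble label is the most-voted class (ties -> lowest class id)."""
    votes = probs.argmax(axis=-1)                                    # (n_defenses, n_samples)
    n_classes = probs.shape[-1]
    counts = (votes[..., None] == np.arange(n_classes)).sum(axis=0)  # (n_samples, n_classes)
    return counts.argmax(axis=-1)

# Three weak defenses, two samples, three classes.
probs = np.array([
    [[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]],   # votes: 0, 1
    [[0.6, 0.3, 0.1], [0.2, 0.3, 0.5]],   # votes: 0, 2
    [[0.2, 0.5, 0.3], [0.1, 0.7, 0.2]],   # votes: 1, 1
])
print(majority_vote(probs))  # [0 1]
```

Averaging the softmax outputs instead (`probs.mean(axis=0).argmax(-1)`) is the other common aggregation; judging from the symbol dump above (`ensemble_ave_confidence`, `ensemble_majority_voting`), the repository appears to compare several such strategies.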
nyk510/simple-ngboost | ['weather forecasting'] | ['NGBoost: Natural Gradient Boosting for Probabilistic Prediction'] | ngboost.py NormalNGBoost true_noise_scale LogVarianceNorm true_function | nyk510/simple-ngboost | 3,186 |
nyukat/BIRADS_classifier | ['breast cancer detection', 'medical diagnosis'] | ['High-Resolution Breast Cancer Screening with Multi-View Deep Convolutional Neural Networks'] | birads_prediction_torch.py layers_torch.py test_inference.py layers_tf.py models_torch.py utils.py models_tf.py birads_prediction_tf.py convert_model.py inference inference tf_to_torch fc_layer all_views_global_avg_pool all_views_gaussian_noise_layer all_views_max_pool all_views_flattening_layer dropout_layer gaussian_noise_layer softmax_layer all_views_conv_layer AllViewsMaxPool AllViewsPad AllViewsAvgPool AllViewsGaussianNoise AllViewsConvLayer baseline BaselineBreastModel BaselineBreastModel get_torch_gpu test_torch_golden_equal test_tf_cpu_gpu_equal get_tf_cpu get_torch_cpu get_tf_gpu test_torch_cpu_gpu_equal test_tf_golden_equal load_images normalize_single_image set_random_seed load str print load_images load_state_dict device to Graph BaselineBreastModel device max_pool get_shape avg_pool get_shape concat int reshape fully_connected dropout shape add_n random_normal gaussian_noise_layer fc_layer all_views_global_avg_pool all_views_gaussian_noise_layer all_views_max_pool all_views_flattening_layer dropout_layer softmax_layer all_views_conv_layer astype float32 normalize_single_image expand_dims imread | # High-resolution breast cancer screening with multi-view deep convolutional neural networks ## Introduction This is an implementation of the model used for [BI-RADS](https://breast-cancer.ca/bi-rads/) classification as described in our paper ["High-resolution breast cancer screening with multi-view deep convolutional neural networks"](https://arxiv.org/abs/1703.07047). The implementation allows users to get the BI-RADS prediction by applying our pretrained CNN model on a standard screening mammogram exam with four views. As a part of this repository, we provide a sample exam (in the `images` directory). The model is implemented in both TensorFlow and PyTorch.
## Prerequisites * Python (3.6) * TensorFlow (1.5.0) or PyTorch (0.4.0) * NumPy (1.14.3) * SciPy (1.0.0) * Pillow (5.1.0) ## Data | 3,187 |
nyukat/breast_cancer_classifier | ['breast cancer detection', 'medical diagnosis'] | ["Deep Neural Networks Improve Radiologists' Performance in Breast Cancer Screening"] | src/utilities/saving_images.py src/optimal_centers/calc_optimal_centers.py src/cropping/crop_single.py src/modeling/layers.py src/modeling/models_tf.py src/modeling/run_model_single.py src/utilities/data_handling.py src/optimal_centers/get_optimal_center_single.py src/optimal_centers/get_optimal_centers.py src/modeling/models.py src/cropping/crop_mammogram.py src/utilities/pickling.py src/utilities/tf_utils.py src/utilities/tools.py src/utilities/reading_images.py src/modeling/run_model.py src/constants.py src/modeling/layers_tf.py src/heatmaps/run_producer.py src/heatmaps/models.py src/data_loading/loading.py src/data_loading/augmentations.py src/heatmaps/run_producer_single.py src/modeling/run_model_single_tf.py src/heatmaps/run_producer_single_tf.py VIEWS MODELMODES LABELS VIEWANGLES get_masks_and_sizes_of_connected_components image_orientation crop_mammogram_one_image_short_path get_mask_of_largest_connected_component include_buffer_y_axis crop_mammogram_one_image get_edge_values convert_bottommost_pixels_wrt_cropped_image crop_img_from_largest_connected get_distance_from_starting_side get_rightmost_pixels_wrt_cropped_image crop_mammogram include_buffer_x_axis get_bottommost_pixels main crop_single_mammogram simple_resize window_location_at_center_point zero_pad_and_align_window sample_crop_best_center shift_window_inside_image sample_crop random_augmentation_best_center crop_image augment_and_normalize_image load_image_and_heatmaps standard_normalize_single_image load_heatmaps load_image flip_image TFSamePadWrapper ModifiedDenseNet121 ori_image_prepare load_model produce_heatmaps prediction_by_batch probabilities_to_heatmap save_heatmaps stride_list_generator get_all_prob patch_batch_prepare get_image_path main making_heatmap_with_large_minibatch_potential sample_patches_single 
sample_patches main produce_heatmaps get_all_prob_tf load_model_tf produce_heatmaps construct_densenet_match_dict main prediction_by_batch_tf AllViewsGaussianNoise OutputLayer AllViewsAvgPool BasicBlockV2 batch_norm conv1x1 output_layer avg_pool_layer conv3x3 conv2d_fixed_padding gaussian_noise_layer basic_block_v2 FourViewResNet ViewResNetV2 resnet22 ImageBreastModel SplitBreastModel filter_strip_prefix SingleImageBreastModel single_image_breast_model resnet22 construct_single_image_breast_model_match_dict four_view_resnet view_resnet_v2 make_layer load_run_save load_model run_model compute_batch_predictions main process_augment_inputs load_model ModelInput batch_to_tensor main load_inputs run process_augment_inputs load_model batch_to_inputs ModelInput main load_inputs run get_rightmost_pixel_constraint get_topleft_bottomright_cumsum v_get_topleft_bottomright_partialsum get_image_cumsum get_bottomrightmost_pixel_constraint get_joint_axes get_image_optimal_window_info get_images_optimal_window_info get_candidate_center_topleft_bottomright get_candidate_topleft_bottomright main extract_center load_and_extract_center get_optimal_centers main get_optimal_center_single add_metadata unpack_exam_into_images unpickle_from_file pickle_to_file read_image_mat read_image_png save_image_as_png save_image_as_hdf5 convert_conv_torch2tf convert_fc_weight_torch2tf get_tf_variables construct_weight_assign_ops partition_batch label sum range get_masks_and_sizes_of_connected_components idxmax any flip int any get_mask_of_largest_connected_component include_buffer_y_axis get_edge_values convert_bottommost_pixels_wrt_cropped_image binary_erosion get_distance_from_starting_side get_rightmost_pixels_wrt_cropped_image include_buffer_x_axis get_bottommost_pixels binary_dilation unpickle_from_file partial print unpack_exam_into_images add_metadata pickle_to_file dict exists makedirs image_orientation crop_img_from_largest_connected read_image_png join crop_mammogram_one_image dict 
crop_mammogram_one_image pickle_to_file parse_args add_argument crop_single_mammogram ArgumentParser int expand_dims resize simple_resize is_mlo window_location_at_center_point zero_pad_and_align_window uniform any round shift_window_inside_image is_cc zeros abs array uniform min round array concatenate sample_crop_best_center sample_crop expand_dims crop_image fliplr is_right is_left read_image_mat read_image_png endswith astype float32 flip_image load_image stack load_image load_heatmaps random_augmentation_best_center standard_normalize_single_image copy sample list range transpose standard_normalize_single_image astype shape stride_list_generator load_image append shape expand_dims zeros prediction_by_batch zeros partition_batch enumerate join flip_image save_image_as_hdf5 append sample_patches_single LIST get_image_path ori_image_prepare patch_batch_prepare probabilities_to_heatmap save_heatmaps tqdm get_all_prob sample_patches makedirs format load_from_path ModifiedDenseNet121 eval device to unpickle_from_file making_heatmap_with_large_minibatch_potential seed load_model produce_heatmaps dict seed load_model probabilities_to_heatmap save_image_as_hdf5 get_all_prob sample_patches_single flip_image items list convert_conv_torch2tf numpy convert_fc_weight_torch2tf Graph format Session reshape run zeros prediction_by_batch_tf partition_batch enumerate get_all_prob_tf load_model_tf as_list pad conv2d_fixed_padding slice items list format replace isinstance convert_conv_torch2tf convert_fc_weight_torch2tf LIST range model_class load_state_dict RandomState OrderedDict unpickle_from_file load_model to_csv run_model dirname DataFrame makedirs load_run_save SingleImageBreastModel load_state_from_shared_weights unpickle_from_file load_image load_heatmaps augment_and_normalize_image process_augment_inputs list RandomState load_model print dumps batch_to_tensor mean partition_batch append load_inputs range run Graph lower Session batch_to_inputs append 
get_image_optimal_window_info get_image_cumsum max tl_br_constraint arange v_get_topleft_bottomright_partialsum get_image_cumsum argmin shape get_joint_axes sum prod get_candidate_center_topleft_bottomright get_candidate_topleft_bottomright clip zeros len get_image_optimal_window_info get_bottomrightmost_pixel_constraint get_rightmost_pixel_constraint flip_image join read_image_png list starmap repeat zip Pool unpickle_from_file unpack_exam_into_images add_metadata pickle_to_file dirname get_optimal_centers makedirs unpickle_from_file pickle_to_file extract_center read_image_png get_optimal_center_single dict LIST append enumerate dict LIST append enumerate imread array T File close imwrite File close create_dataset append get_collection TRAINABLE_VARIABLES GLOBAL_VARIABLES append items assign list append | # Deep Neural Networks Improve Radiologists' Performance in Breast Cancer Screening ## Introduction This is an implementation of the model used for breast cancer classification as described in our paper [Deep Neural Networks Improve Radiologists' Performance in Breast Cancer Screening](https://ieeexplore.ieee.org/document/8861376). The implementation allows users to get breast cancer predictions by applying one of our pretrained models: a model which takes images as input (*image-only*) and a model which takes images and heatmaps as input (*image-and-heatmaps*). * Input images: 2 CC view mammography images of size 2677x1942 and 2 MLO view mammography images of size 2974x1748. Each image is saved as a 16-bit png file and gets standardized separately before being fed to the models. * Input heatmaps: output of the patch classifier constructed to be the same size as its corresponding mammogram. Two heatmaps are generated for each mammogram, one for the benign and one for the malignant category. The value of each pixel in both of them is between 0 and 1.
* Output: 2 predictions for each breast, probability of benign and malignant findings: `left_benign`, `right_benign`, `left_malignant`, and `right_malignant`. Both models act on screening mammography exams with four standard views (L-CC, R-CC, L-MLO, R-MLO). As part of this repository, we provide 4 sample exams (in the `sample_data/images` directory, with the exam list stored in `sample_data/exam_list_before_cropping.pkl`). The heatmap generation and cancer classification models are implemented in PyTorch. **Update (2019/10/26)**: [Our paper](https://ieeexplore.ieee.org/document/8861376) will be published in the IEEE Transactions on Medical Imaging! **Update (2019/08/26)**: We have added a [TensorFlow implementation](using_tensorflow.md) of our *image-wise* model. **Update (2019/06/21)**: We have included the *image-wise* model described in the paper, which generates predictions based on a single mammogram image. This model slightly under-performs the *view-wise* model used above, but can be used on single mammogram images as opposed to full exams. | 3,188 |
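The README above notes that each mammogram "gets standardized separately before being fed to the models". A minimal sketch of what per-image standardization typically means (zero mean, unit variance, computed independently per image); the function name and the plain nested-list representation are illustrative assumptions, not this repository's actual preprocessing code:

```python
import math

def standardize_image(image):
    """Standardize a single image to zero mean and unit variance.

    `image` is a 2-D list of pixel intensities; each image is
    normalized independently of the others.
    """
    pixels = [p for row in image for p in row]
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((p - mean) ** 2 for p in pixels) / n
    std = math.sqrt(var) or 1.0  # guard against constant images
    return [[(p - mean) / std for p in row] for row in image]

# A tiny 2x2 "image": after standardization its mean is 0 and its std is 1.
out = standardize_image([[0.0, 2.0], [4.0, 6.0]])
```

In the real pipeline the same idea would be applied to each 16-bit png after decoding, before the four views are passed to the model.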
nyukat/perception_comparison | ['breast cancer detection', 'medical diagnosis'] | ['Differences between human and machine perception in medical diagnosis'] | perception_comparison/utils.py perception_comparison/fourier_filter.py perception_comparison/probabilistic_inference.py perception_comparison/perturbation_study_analysis.py perception_comparison/annotation_study_analysis.py main plot_class_separability fourier_filter calc_gaussian_mask calc_dist_from_center min_max_normalize main plot_predictive_confidence plot_class_separability calc_class_separability main calc_posterior_pred get_y split_arr get_subgroups tight_layout calc_posterior_pred_elem save_file get_exam_idxs Side load_file std errorbar arange concatenate get_y mean shape array get_exam_idxs nan append full range len subplots grid get_data list set_xlabel load_file savefig dirname legend append range set_xticklabels tight_layout zip annotate get_width_height join T makedirs subplots_adjust plot_class_separability set_ylabel set_xticks transform get_legend_handles_labels array read_csv len shape exp calc_dist_from_center calc_gaussian_mask fftshift size fft2 astype sqrt real min_max_normalize join errorbar arange reshape mean append load_file cdf std range len concatenate get_y split_arr get_exam_idxs read_csv join calc_posterior_pred reshape nanmean nanstd save_file info load_file exists calc_class_separability plot_predictive_confidence get_subgroups index keys shape nan save_file vb full max StanModel dirname makedirs index replace copy values append range replace len astype concatenate calc_sigmoid shuffle mean zip append value concatenate get_subgroups calc_posterior_pred_elem nan append load_file range full read_csv len range | # Differences between human and machine perception in medical diagnosis This repository accompanies our paper [Differences between human and machine perception in medical diagnosis](https://arxiv.org/abs/2011.14036). 
In the paper, we propose a framework for comparing human and machine perception in medical diagnosis, and demonstrate it with a case study in breast cancer screening. This repository contains the data and code necessary to reproduce the results from our case study. There are three components: 1. `probabilistic_inference.py`: We collected predictions from radiologists and DNNs on screening mammograms perturbed with Gaussian low-pass filtering (Figure 1a--b). The predictions are provided in `data/observed_predictions`, see below for details. We apply probabilistic modeling to these predictions in order to isolate the effect that low-pass filtering has on their predictions (Figure 1d). 2. `perturbation_study_analysis.py`: We sample from the probabilistic model and compare radiologists and DNNs | 3,189 |
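The dependency list for this entry names `calc_dist_from_center` and `calc_gaussian_mask` alongside `fft2`/`fftshift`, i.e. the mammograms are low-pass filtered with a Gaussian mask in the (shifted) Fourier domain. The bodies below are an illustrative reconstruction of such a mask, not the repository's code:

```python
import math

def calc_dist_from_center(rows, cols):
    """Euclidean distance of every pixel from the image center."""
    cy, cx = (rows - 1) / 2.0, (cols - 1) / 2.0
    return [[math.hypot(r - cy, c - cx) for c in range(cols)]
            for r in range(rows)]

def calc_gaussian_mask(rows, cols, sigma):
    """Gaussian low-pass mask: 1.0 at the center (where fftshift puts
    the low frequencies), decaying toward the corners."""
    dist = calc_dist_from_center(rows, cols)
    return [[math.exp(-(d * d) / (2.0 * sigma * sigma)) for d in row]
            for row in dist]

# A 5x5 mask keeps the centered low frequencies, damps the high ones.
mask = calc_gaussian_mask(5, 5, sigma=2.0)
```

Multiplying this mask with the fftshift-ed spectrum of an image and inverting the transform yields the blurred mammograms shown to both radiologists and DNNs.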
obo/lego | ['camera localization', 'autonomous driving'] | ['Image Based Camera Localization: an Overview'] | s3parator/manual-mover.py random-bits/report-light-value.py tracker/04-tank-switchable.py random-bits/max_min_finder.py random-bits/find_angle_speed.py python-opencv-experiments/detect-from-camera.py python-opencv-experiments/hough_line_peaks.py random-bits/rot_max_min_finder.py s3parator/testgetkey.py python-opencv-experiments/detect-circles.py tracker/TRACK3R.py random-bits/draw-grayscale.py random-bits/mousometry/rot-xy.py tracker/01-tank-and-leds.py random-bits/leds.py random-bits/report-beacon-location.py python-opencv-experiments/detect-lines-from-camera.py tracker/05-tank-switchable-with-pen.py s3parator/ipwebcam.py random-bits/find_line_and_go.py random-bits/mousometry/report-mouse-moves.py fake3dev/ev3dev/helper.py fake3dev/ev3dev/auto.py random-bits/Line_follow_2.py python-opencv-experiments/detect-lines.py random-bits/leds-on-touch.py random-bits/ev3dev-photo-booth.py random-bits/report-color-values.py python-opencv-experiments/detect-circles-from-camera.py fake3dev/ev3dev/core.py random-bits/psturtle-test.py fake3dev/ev3dev/ev3.py random-bits/draw-on-lcd.py tracker/02-tank-and-say-color.py python-opencv-experiments/record-video-to-grayscale.py random-bits/report-ir-value.py s3parator/myreadchar.py python-opencv-experiments/detect-blobs.py random-bits/draw-smiley-frowney.py random-bits/asyncio-buttons.py python-opencv-experiments/detect-color.py tracker/03-tank-custom.py get_current_platform I2cSensor eprint FbMem ActuonixL1250Motor Motor ActuonixL12100Motor GyroSensor Sensor Device Screen LargeMotor LegoPort TouchSensor BeaconSeeker InfraredSensor ColorSensor Sound list_devices SoundSensor MediumMotor PowerSupply ButtonEVIO ServoMotor LightSensor ButtonBase list_motors DcMotor list_device_names Led UltrasonicSensor list_sensors RemoteControl _make_scales Button Leds LargeMotor MediumMotor MotorMixin MotorStopFail MotorStartFail Wheel Tank 
MotorPositionFail ColorSensor MotorStall RemoteControlledTank ColorSensorMixin EV3RubberWheel gr2bgr detect_circles process_and_show gr2bgr detect_circles detect_lines detect_lines main Main VirtualTerminal Framebuffer eprint Test sign steering2 get_color find_stable_color_on_one_side steering3 run run steering2 steering3 run find_stable_color_on_one_side get_color rotated_data f eprint ResizeWithAspectRatio eprint IPWebCam main eprint Writer mymotor eprint readkey test readchar eprint signal_handler touch_leds signal_handler color_speaker touch_leds color_speaker signal_handler TRACK3R Tank TRACK3RWithClaw RemoteControlledTank touch_leds button_watcher color_speaker signal_handler TRACK3R Tank TRACK3RWithPen RemoteControlledTank toggle_event play_leds touch_leds button_watcher color_speaker signal_handler TRACK3R Tank TRACK3RWithPen RemoteControlledTank toggle_event play_leds touch_leds TRACK3RWithClaw TRACK3RWithBallShooter TRACK3R TRACK3RWithSpinner machine print listdir fnmatch all compile DEVICE_ROOT_PATH abspath DEVICE_ROOT_PATH SYSTEM_CLASS_NAME abspath SYSTEM_CLASS_NAME SYSTEM_CLASS_NAME SYSTEM_CLASS_NAME SYSTEM_CLASS_NAME DEVICE_ROOT_PATH SYSTEM_CLASS_NAME abspath SYSTEM_CLASS_NAME SYSTEM_CLASS_NAME SYSTEM_DEVICE_NAME_CONVENTION SYSTEM_CLASS_NAME SYSTEM_DEVICE_NAME_CONVENTION SYSTEM_CLASS_NAME SYSTEM_DEVICE_NAME_CONVENTION SYSTEM_CLASS_NAME SYSTEM_DEVICE_NAME_CONVENTION SYSTEM_CLASS_NAME SYSTEM_DEVICE_NAME_CONVENTION SYSTEM_CLASS_NAME SYSTEM_DEVICE_NAME_CONVENTION SYSTEM_CLASS_NAME SYSTEM_DEVICE_NAME_CONVENTION set int ord dict round split _make_scales Led equalizeHist COLOR_BGR2GRAY HoughCircles print createCLAHE astype copy apply circle rectangle gr2bgr CV_HOUGH_GRADIENT cvtColor line copy shape HoughLinesP CV_AA range uint8 Canny COLOR_BGR2GRAY ones astype detect_lines mean stack gr2bgr bilateralFilter dilate cvtColor Canny COLOR_BGR2GRAY cvtColor print write close tostring sqrt sleep O_RDWR range open str time run_direct print get_color stop abs min 
print value run_direct print steering zip sleep float steering3 str time stop radians transpose cos matmul sin array rotated_data min max float follow_keys Writer setraw fileno tcgetattr read getchar print eprint readkey exit TouchSensor LEFT set_color set value print wait ColorSensor sleep radians RIGHT zip print cos LEFT all_off sleep range sin clear set is_set info sleep process Button info info | # Lego: Random Ideas and Tools for EV3 Brick Running ev3dev Linux This repository stores my personal experiments with ev3 brick running ev3dev linux. Most of it is not sufficiently general but it may still serve as a source of ideas for others. The files ``.history-*`` are my local bash histories, for finding how I ran what. ## Connecting via Bluetooth brickman: Wireless->Bluetooth->scroll to my paired NB Network Connection -> Connect | 3,190 |
od-crypto/aerial | ['semantic segmentation'] | ['TernausNet: U-Net with VGG11 Encoder Pre-Trained on ImageNet for Image Segmentation'] | lib/preprocessing.py lib/utils.py Toloka/create_masks.py lib/train.py lib/dataset.py demo/inference.py demo/app.py download_file inference get_model draw_mask WaterDataset MIOUMetric LossMetric F1Metric AccuracyMetric LakeAccuracyMetric Metric NoLakeAccuracyMetric LandcoverDatasetPreprocessor SentinelDatasetPreprocessor get_rotated_crops DatasetPreprocessor viz create_masks create_poly create_collage plot_locally load UNet11 eval load_state_dict to list uint8 zeros_like concatenate resize transpose astype stack zip append to range uint8 ones astype morphologyEx MORPH_GRADIENT stack ones_like zeros_like ones pi rotate erode nonzero randint len sum asarray print transpose copy shape resize tensor to numpy uint8 BytesIO asarray ones astype morphologyEx MORPH_GRADIENT copy plot_locally save content open zeros astype int32 imshow subplots set_title decode create_collage print append range makedirs | # Comparison of two segmentation methods for Lakes Datasets ### Computer vision project on satellite image segmentation #### We provide PyTorch code for building and training models, and Python code for image retrieval and local feature matching. ## I. Setting the task. Pipeline. Different techniques are currently used for the semantic segmentation problem, with both complex architectures and complex training policies. An example of a complex training policy is transfer learning. In transfer learning one has a pre-trained network that knows nothing about the task we want to solve, but already has some preliminary knowledge. Most likely, because we have given this preliminary knowledge in the form of the weights of the trained network, it will learn a bit | 3,191 |
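This entry's `lib/metrics.py` lists a `MIOUMetric` for comparing the two segmentation methods. A common definition of intersection-over-union for binary water masks, sketched in pure Python; this is a hypothetical implementation for illustration, not the repository's class:

```python
def iou(pred, target):
    """Intersection-over-union of two binary masks (flat lists of 0/1)."""
    inter = sum(1 for p, t in zip(pred, target) if p == 1 and t == 1)
    union = sum(1 for p, t in zip(pred, target) if p == 1 or t == 1)
    return inter / union if union else 1.0

def mean_iou(preds, targets):
    """Average IoU over a batch of (prediction, ground-truth) mask pairs."""
    scores = [iou(p, t) for p, t in zip(preds, targets)]
    return sum(scores) / len(scores)

# Two toy masks: 2 shared pixels / 4 pixels in the union = 0.5
print(iou([1, 1, 1, 0], [0, 1, 1, 1]))  # -> 0.5
```

Scoring both methods with the same mean-IoU loop makes their outputs directly comparable on the lake masks.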
odysseaspap/msc_thesis | ['camera calibration'] | ['Targetless Rotational Auto-Calibration of Radar and Camera for Intelligent Transportation Systems'] | src/util/dead_relu_detector.py src/tf_ops/interpolation/visu_interpolation.py src/util/metrics.py src/rad_net_with_loss.py src/util/radar_reprojection_manager_batch.py src/util/dual_quaternion/Tests.py helpers/transformations.py src/run_config.py src/tf_ops/sampling/tf_sampling.py src/tf_ops/grouping/tf_grouping_op_test.py dataset_generation/update_json_dicts.py src/util/learning_rate_loss_callback.py src/test.py src/util/radar_reprojection_manager.py dataset_utils/plot_avg_pcloud_depth.py src/util/dataloading.py helpers/analyze_dataset.py src/snippet.py src/util/all_transformer.py src/util/dataset_provider.py src/util/dual_quaternion/DualQuaternion.py src/mlpconv_layer.py src/util/plotting.py src/clr/clr_callback.py dataset_generation/DualQuaternion/Tests.py src/tf_ops/emd/tf_auctionmatch.py src/util/model_utils.py src/custom_loss_functions.py src/paper_visualizations.py src/run_training.py dataset_generation/decalibrate_filter_and_store.py src/rad_net.py src/tf_ops/interpolation/tf_interpolate_op_test.py src/util/data_wrangling.py src/util/quaternion_ops.py dataset_generation/DualQuaternion/transformations.py src/run_training_single_model.py src/util/dual_quaternion/transformations.py src/tf_ops/grouping/tf_grouping.py src/util/create_decalibration.py src/tf_ops/interpolation/tf_interpolate.py src/plot_example_images.py src/util/data_generator.py dataset_generation/create_delta_calib_list.py dataset_generation/decalibrate_and_store.py src/util/tee.py dataset_generation/DualQuaternion/DualQuaternion.py src/tf_ops/CD/tf_nndistance.py src/tf_ops/grouping/test_knn.py create_random_translation_vector create_homogeneous_transformation create_random_rotation_matrix create_decalib_transformation get_rad_to_cam valid_pixel_coordinates tokens_to_data_pairs store_data create_and_store_samples store_sample 
invert_homogeneous_matrix main comp_uv_depth load_keyframe_rad_cam_data get_rad_to_cam valid_pixel_coordinates tokens_to_data_pairs store_data create_and_store_samples store_sample invert_homogeneous_matrix main comp_uv_depth load_keyframe_rad_cam_data main TestQuaternionMethods orthogonalization_matrix vector_product inverse_matrix euler_matrix translation_matrix shear_matrix vector_norm quaternion_from_matrix quaternion_inverse Arcball projection_matrix unit_vector rotation_from_matrix random_rotation_matrix quaternion_from_euler affine_matrix_from_points decompose_matrix clip_matrix quaternion_division quaternion_conjugate quaternion_slerp quaternion_about_axis arcball_map_to_sphere scale_from_matrix euler_from_quaternion angle_between_vectors scale_matrix random_quaternion quaternion_matrix quaternion_imag dual_quaternion_matrix superimposition_matrix arcball_nearest_axis projection_from_matrix translation_from_matrix is_same_quaternion shear_from_matrix euler_from_matrix rotation_matrix random_vector compose_matrix identity_matrix reflection_matrix concatenate_matrices is_same_transform arcball_constrain_to_axis reflection_from_matrix quaternion_multiply quaternion_real tokens_to_data main load_keyframe_rad_tokens analyze_rotation comp_absmean_min_max run_analyzation analyze_translation to_angle_axis parse_arguments to_angle_axis_matrix main orthogonalization_matrix vector_product inverse_matrix euler_matrix translation_matrix shear_matrix vector_norm quaternion_from_matrix quaternion_inverse Arcball projection_matrix unit_vector rotation_from_matrix random_rotation_matrix quaternion_from_euler affine_matrix_from_points decompose_matrix clip_matrix quaternion_division quaternion_conjugate quaternion_slerp quaternion_about_axis arcball_map_to_sphere scale_from_matrix euler_from_quaternion angle_between_vectors scale_matrix random_quaternion quaternion_matrix quaternion_imag dual_quaternion_matrix superimposition_matrix arcball_nearest_axis 
projection_from_matrix translation_from_matrix is_same_quaternion shear_from_matrix euler_from_matrix rotation_matrix random_vector compose_matrix identity_matrix reflection_matrix concatenate_matrices is_same_transform arcball_constrain_to_axis reflection_from_matrix quaternion_multiply quaternion_real keras_weighted_quaternion_translation_loss weighted_quaternion_translation_loss photometric_and_3d_pointcloud_loss keras_photometric_and_3d_pointcloud_loss MlpConv save_hist_plot quat_tilt_angles quat_roll_angles comp_abs_quat_angles split_quat create_paper_plots comp_total_quat_angles quat_pan_angles print_angular_errors save_boxplot plot_correspondences_error_scatter plot_angle_error_distributions RadNet RadNet RunConfig train_model get_metrics visualize_model create_model create_callbacks start_training retrieve_git_hash cross_evaluate_models cross_evaluate_models_static_decalib save_run_params_in_file load_models copy_code visualize_corrected_projections save_minimum_loss_and_metrics main compute_example_predictions parse_commandline train_model get_metrics visualize_model create_model create_callbacks start_training retrieve_git_hash cross_evaluate_models cross_evaluate_models_static_decalib save_run_params_in_file load_models copy_code visualize_corrected_projections save_minimum_loss_and_metrics main compute_example_predictions parse_commandline split_stuff CyclicLR _nn_distance_grad nn_distance auction_match query_ball_point group_point select_top_k _group_point_grad knn_point GroupPointTest three_nn three_interpolate _three_interpolate_grad GroupPointTest fun farthest_point_sample gather_point _gather_point_grad prob_sample reverse_all _bilinear_sampling _3D_meshgrid_batchwise_diff sparsify_cloud _simple_transformer get_pixel_value pad_and_sparsify_cloud create_random_translation_vector invert_homogeneous_matrix create_random_rotation_matrix create_homogeneous_transformation create_decalib_transformation get_csr_matrix_from_npz_file 
save_augmented_projection_sample load_radnet_training_sample_with_intrinsics_gt_decalib load_augmented_projection_sample load_data_from_samples load_nparray_from_file expand_input_data_batchdim load_complete_sample load_dataset load_nparrays_from_file load_np_file load_radnet_training_sample get_projections_from_npz_file load_radnet_training_sample_batchdim DatasetProvider DataGenerator to_sparse_sample to_dense_sample to_dense_batch split_validation make_dense standardize_images DeadReluDetector SGDLearningRateTracker pan_error trans_error roll_error tilt_error trans_error_x trans_error_z trans_error_y rot_angle_error get_cd_loss get_emd_loss get_repulsion_loss4 pre_load_checkpoint draw_image_with_projections plot_history draw_projection_circles visualize_corrected_projection conjugate_quaternions split_dual_quaternions transform_from_quat_and_trans compute_delta_quaternion batchwise_dot_product rot_matrix_from_quat_wxyz multiply_quaternions normalize_quaternions RadarReprojectionManager RadarBatchReprojectionManager Tee Vector Quaternion DualQuaternion TestQuaternionMethods orthogonalization_matrix vector_product inverse_matrix euler_matrix translation_matrix shear_matrix vector_norm quaternion_from_matrix quaternion_inverse Arcball projection_matrix unit_vector rotation_from_matrix random_rotation_matrix quaternion_from_euler affine_matrix_from_points decompose_matrix clip_matrix quaternion_division quaternion_conjugate quaternion_slerp quaternion_about_axis arcball_map_to_sphere scale_from_matrix euler_from_quaternion angle_between_vectors scale_matrix random_quaternion quaternion_matrix quaternion_imag dual_quaternion_matrix superimposition_matrix arcball_nearest_axis projection_from_matrix translation_from_matrix is_same_quaternion shear_from_matrix euler_from_matrix rotation_matrix random_vector compose_matrix identity_matrix reflection_matrix concatenate_matrices is_same_transform arcball_constrain_to_axis reflection_from_matrix quaternion_multiply 
quaternion_real create_random_translation_vector create_random_rotation_matrix identity radians uniform array uniform get replace print scene append list get_sample_data_path zip COLOR_BGR2RGB from_file transpose hstack resize append imread range cvtColor len get Quaternion dot transform_matrix zeros inv shape transpose matmul imwrite quaternion_from_matrix COLOR_RGB2BGR str list csr_matrix identity circle points append range format concatenate copy store_sample invert_homogeneous_matrix comp_uv_depth create_decalib_transformation print zeros valid_pixel_coordinates cvtColor len print str array savez_compressed print create_and_store_samples ArgumentParser exists get_rad_to_cam tokens_to_data_pairs append parse_args range get hstack load_keyframe_rad_cam_data static_decalib add_argument makedirs store_data rmtree out_dir NuScenes array len filter_pointcloud points threshold depth sensor calibrated_sensor identity dot unit_vector identity squeeze eig array cos identity dot sin unit_vector array diag T squeeze eig atan2 trace array dot unit_vector identity diag squeeze eig array trace dot unit_vector array identity T squeeze eig dot array len dot unit_vector tan identity T vector_norm squeeze eig identity cross dot atan array T vector_norm asin inv cos copy atan2 dot any negative zeros array dot euler_matrix identity radians cos sin svd T concatenate inv identity quaternion_matrix roll dot eigh pinv vstack sum array identity sqrt atan2 empty cos sin array cos vector_norm dot array outer mReal mDual quaternion_matrix quaternion_conjugate quaternion_multiply eigh trace negative empty array negative array negative array pi dot sin negative unit_vector acos sqrt rand pi sqrt negative array vector_norm dot arcball_constrain_to_axis array atleast_1d sqrt sum array atleast_1d sqrt expand_dims sum array sum array dot identity array array print get append scene get_sample_data_path from_file hstack transpose append range len format arange print xlabel ylabel title hist 
savefig points load_keyframe_rad_tokens tokens_to_data xticks max rotation_from_matrix quaternion_matrix str format print comp_absmean_min_max square mean sqrt sum append to_angle_axis array print comp_absmean_min_max format to_angle_axis_matrix analyze_rotation analyze_translation parse_args add_argument ArgumentParser folder_path run_analyzation parse_arguments reshape transform_from_quat_and_trans reduce_sum map_fn reduce_mean sqrt reduce_mean reduce_sum normalize_quaternions plot_correspondences_error_scatter plot_angle_error_distributions makedirs comp_abs_quat_angles regplot tight_layout dumps set clf savefig sum save_hist_plot quat_tilt_angles quat_roll_angles dumps comp_total_quat_angles quat_pan_angles save_boxplot DataFrame array artists close set get_figure get_facecolor set_style figure set_facecolor boxplot savefig grid close set get_figure set_style figure savefig distplot split_quat degrees arctan2 degrees arcsin split_quat clip split_quat degrees arctan2 abs arccos pi arccos degrees format quat_tilt_angles comp_abs_quat_angles quat_roll_angles print absolute mean quat_pan_angles median str sorted min write close keys array open str load_radnet_training_sample_with_intrinsics_gt_decalib print transpose square set_printoptions sqrt expand_dims sum range predict append EarlyStopping ReduceLROnPlateau pan_error roll_error tilt_error append rot_angle_error str time input_shape visualize_model model print RadNet create_model create_callbacks DataGenerator Adam fit_generator plot_history load_weights history save save_minimum_loss_and_metrics compile visualize_corrected_projection join original_resolution compute_projections_and_labels save_augmented_projection_sample val_split str RadarReprojectionManager len identity create_paper_plots split_validation load_dataset format shuffle compute_and_save_corrected_projections_labels compute_example_predictions RadarBatchReprojectionManager enumerate train_model time input_shape print rmtree 
visualize_corrected_projections makedirs print plot_model summary parse_args add_argument ArgumentParser join copyfile dirname walk makedirs strip join RadNet load_weights input_shape original_resolution time RadarReprojectionManager input_shape compute_projections_and_labels_static_decalib str compute_projections_and_labels print identity create_paper_plots load_models visualize_corrected_projections load_dataset print_angular_errors RadarBatchReprojectionManager original_resolution quat_roll_angles comp_abs_quat_angles compute_projections_and_labels quaternion_from_matrix _project_radar_detections DataFrame str list quat_tilt_angles matmul create_paper_plots load_models quat_pan_angles load_dataset append expand_dims format concatenate mean invert_homogeneous_matrix zip print_angular_errors save_boxplot RadarBatchReprojectionManager create_decalib_transformation time input_shape print absolute repeat array len cross_evaluate_models copy_code dataset_paths str Tee cross_evaluate_models_static_decalib strftime static_analysis weights_path parse_commandline start_training time sort extend save_run_params_in_file gpu print expand_dims top_k reduce_sum _3D_meshgrid_batchwise_diff reshape zeros_like concat boolean_mask where linspace round multiply transpose matmul add cast meshgrid expand_dims range ones_like matrix_inverse stack equal constant not_equal reshape scatter_nd pad_and_sparsify_cloud sqrt floor cast clip_by_value expand_dims constant cast clip_by_value floor zeros get_pixel_value str load_nparrays_from_file append array range enumerate len get_csr_matrix_from_npz_file todense load_radnet_training_sample expand_input_data_batchdim expand_dims range len csr_matrix str savez_compressed append array load_complete_sample append csr_matrix append todense append to_dense_sample range len append todense subtract astype divide mean sqrt float int len assert_greater_equal clip_by_value batchwise_dot_product assert_less_equal abs normalize_quaternions 
compute_delta_quaternion square pi atan2 abs split asin compute_delta_quaternion pi clip_by_value abs split compute_delta_quaternion square pi atan2 abs split get_checkpoint_state int exp query_ball_point group_point maximum reduce_sum sqrt reduce_mean top_k histogram reshape gather_point auction_match reduce_sum nn_distance reduce_sum list format plot xlabel close ylabel title savefig figure legend keys Circle nonzero add_patch zip gca imshow set_visible draw_projection_circles format draw_image_with_projections close savefig range makedirs split unstack sqrt constant square reduce_sum conjugate_quaternions multiply_quaternions normalize_quaternions constant concat pad normalize rot_matrix_from_quat_wxyz reshape stack unstack concat property property | ### RadNet++: *Geometric supervision model for rotational radar-camera calibration in an autonomous vehicle setup.* Created by Odysseas Papanikolaou from Technical University of Munich. Contact: [email protected]  ### Introduction This work was created for my MSc thesis, conducted at the Chair of Robotics, Artificial Intelligence and Embedded Systems of the Technical University of Munich. RadNet++ is a follow-up project that builds on and extends the model presented in the work "Targetless Rotational Auto-Calibration of Radar and Camera for Intelligent Transportation Systems". You can find arXiv version of the paper <a href="https://arxiv.org/abs/1904.08743">here</a>. RadNet (the v1 model) was successfully applied for the rotational calibration of traffic radar and camera sensors installed on the gantry bridges of the German highway A9. This sensor setup was created for the smart infrastructure project <a href="https://www.fortiss.org/en/research/projects/detail/providentia">Providentia</a>. | 3,192 |
oeberle/BiLRP_explain_similarity | ['anomaly detection'] | ['Building and Interpreting Deep Similarity Models'] | utils.py model/bilrp.py visualization/plotting.py pool set_up_dir newlayer proc_image Identity load_image Flatten vgg_gamma VggLayers plot_relevances get_alpha clip makedirs reshape transpose load_image FloatTensor Parameter deepcopy g bias weight mean list array show int subplots plot squeeze clip_func axis close imshow ylim savefig linspace zip append xlim array | # Building and Interpreting Deep Similarity Models This repository provides a [PyTorch](https://pytorch.org/) implementation of the BiLRP method (preprint available at [https://arxiv.org/abs/2003.05431](https://arxiv.org/abs/2003.05431)) to explain similarity models using second-order relevance scores. | 3,193 |
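For intuition on the "second-order relevance scores" mentioned in the README above: for a purely linear embedding h = Wx, the similarity dot(Wx, Wx') decomposes exactly into pairwise relevances R[i][i'] that sum back to the score (the conservation property). This toy decomposition is only an illustration of that idea, not the repository's BiLRP implementation, which propagates through a deep network:

```python
def bilrp_linear(W, x, xp):
    """Second-order relevance R[i][ip] for the similarity
    dot(W @ x, W @ xp) of a purely linear embedding.

    R[i][ip] = sum_k (W[k][i] * x[i]) * (W[k][ip] * xp[ip]);
    the relevances sum exactly to the similarity score.
    """
    K, D = len(W), len(x)
    return [[sum(W[k][i] * x[i] * W[k][ip] * xp[ip] for k in range(K))
             for ip in range(D)] for i in range(D)]

W = [[1.0, 2.0], [0.0, 1.0]]
x, xp = [1.0, 1.0], [2.0, 0.0]
R = bilrp_linear(W, x, xp)

# Conservation check: relevances sum to the similarity dot(Wx, Wxp).
h = [sum(W[k][i] * x[i] for i in range(2)) for k in range(2)]
hp = [sum(W[k][i] * xp[i] for i in range(2)) for k in range(2)]
sim = sum(a * b for a, b in zip(h, hp))
```

Each R[i][ip] attributes part of the similarity to a pair of input features — the quantity the repository visualizes between two images.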
oellop/Style_Transfer | ['style transfer'] | ['A Neural Algorithm of Artistic Style'] | get_picture.py VGG16_AvgPool_CutOff get_images create_content_model unpreprocess get_loss_and_grads_wrapper VGG16_AvgPool gram_matrix content_loss create_style_model style_loss astype expand_dims preprocess_input img_to_array load_img Model Model transpose batch_flatten dot num_elements permute_dimensions gradients output square mean input input gradients zip VGG16 layers Sequential add AveragePooling2D add Sequential layers VGG16_AvgPool get_loss_grads | # Style_Transfer | 3,194 |
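This entry lists `gram_matrix` and `style_loss` from "A Neural Algorithm of Artistic Style", where style is represented by the Gram matrix of flattened feature maps, G[i][j] = sum_k F[i][k] * F[j][k]. A pure-Python sketch of that computation (the repository's version operates on Keras tensors via `batch_flatten` and `dot`):

```python
def gram_matrix(features):
    """Gram matrix of feature maps.

    `features` is C x N: C feature maps, each flattened to N values.
    G[i][j] = sum_k features[i][k] * features[j][k] captures which
    feature maps co-activate -- the style representation of
    Gatys et al.
    """
    C = len(features)
    return [[sum(a * b for a, b in zip(features[i], features[j]))
             for j in range(C)] for i in range(C)]

# Two feature maps of 3 values each; G is symmetric and its diagonal
# holds each map's energy.
G = gram_matrix([[1.0, 0.0, 2.0], [0.0, 1.0, 1.0]])
```

The style loss is then (up to normalization) the squared difference between the Gram matrices of the generated and style images at each chosen VGG layer.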
oereo/ML_Agents_App | ['unity'] | ['Unity: A General Platform for Intelligent Agents'] | ml-agents-envs/mlagents_envs/communicator_objects/capabilities_pb2.py ml-agents/mlagents/trainers/tests/test_tf_policy.py ml-agents/mlagents/trainers/environment_parameter_manager.py ml-agents/mlagents/trainers/cli_utils.py ml-agents/mlagents/trainers/run_experiment.py ml-agents/mlagents/trainers/components/reward_signals/curiosity/model.py ml-agents-envs/mlagents_envs/communicator_objects/command_pb2.py ml-agents-envs/mlagents_envs/mock_communicator.py ml-agents/mlagents/trainers/policy/__init__.py ml-agents-envs/mlagents_envs/communicator_objects/unity_to_external_pb2.py ml-agents-envs/mlagents_envs/communicator.py gym-unity/gym_unity/envs/__init__.py ml-agents-envs/mlagents_envs/communicator_objects/brain_parameters_pb2.py ml-agents/mlagents/trainers/learn.py ml-agents/mlagents/trainers/tests/test_barracuda_converter.py ml-agents-envs/mlagents_envs/side_channel/raw_bytes_channel.py ml-agents/mlagents/trainers/trainer/trainer.py gym-unity/gym_unity/__init__.py ml-agents-envs/mlagents_envs/side_channel/__init__.py utils/validate_meta_files.py ml-agents/mlagents/trainers/trainer_controller.py ml-agents/mlagents/trainers/components/bc/model.py ml-agents/mlagents/trainers/action_info.py ml-agents/mlagents/trainers/tests/test_ppo.py ml-agents/mlagents/tf_utils/__init__.py ml-agents/mlagents/trainers/components/reward_signals/__init__.py ml-agents-envs/setup.py ml-agents-envs/mlagents_envs/side_channel/engine_configuration_channel.py ml-agents-envs/mlagents_envs/communicator_objects/unity_rl_output_pb2.py ml-agents/mlagents/trainers/tests/mock_brain.py ml-agents/mlagents/trainers/policy/checkpoint_manager.py ml-agents/mlagents/trainers/tests/test_bcmodule.py ml-agents/mlagents/trainers/tests/test_models.py ml-agents/mlagents/trainers/tests/test_trainer_controller.py ml-agents-envs/mlagents_envs/side_channel/incoming_message.py 
ml-agents/mlagents/trainers/components/reward_signals/reward_signal_factory.py ml-agents-envs/mlagents_envs/rpc_utils.py ml-agents-envs/mlagents_envs/communicator_objects/unity_rl_initialization_output_pb2.py ml-agents/setup.py ml-agents/tests/yamato/setup_venv.py ml-agents/mlagents/trainers/barracuda.py ml-agents/mlagents/trainers/optimizer/tf_optimizer.py utils/run_markdown_link_check.py ml-agents/mlagents/trainers/env_manager.py ml-agents/mlagents/trainers/ppo/trainer.py ml-agents/mlagents/trainers/policy/policy.py ml-agents-envs/mlagents_envs/communicator_objects/agent_action_pb2.py ml-agents/mlagents/model_serialization.py ml-agents-envs/mlagents_envs/tests/test_rpc_communicator.py ml-agents-envs/mlagents_envs/tests/test_envs.py utils/validate_inits.py ml-agents-envs/mlagents_envs/side_channel/float_properties_channel.py ml-agents/mlagents/trainers/components/reward_signals/curiosity/signal.py ml-agents/mlagents/trainers/simple_env_manager.py ml-agents/mlagents/trainers/tf/tensorflow_to_barracuda.py ml-agents-envs/mlagents_envs/side_channel/outgoing_message.py ml-agents-envs/mlagents_envs/exception.py ml-agents-envs/mlagents_envs/registry/remote_registry_entry.py ml-agents/mlagents/trainers/trainer/__init__.py ml-agents/mlagents/trainers/upgrade_config.py ml-agents-envs/mlagents_envs/communicator_objects/unity_message_pb2.py ml-agents/mlagents/trainers/tests/test_learn.py ml-agents/tests/yamato/scripts/run_gym.py utils/validate_release_links.py ml-agents-envs/mlagents_envs/communicator_objects/agent_info_pb2.py ml-agents/mlagents/trainers/tests/test_demo_loader.py ml-agents-envs/mlagents_envs/communicator_objects/observation_pb2.py utils/validate_versions.py ml-agents-envs/mlagents_envs/tests/test_rpc_utils.py ml-agents-envs/mlagents_envs/tests/test_timers.py ml-agents/mlagents/trainers/__init__.py ml-agents-envs/mlagents_envs/communicator_objects/custom_reset_parameters_pb2.py ml-agents/mlagents/trainers/tests/test_env_param_manager.py 
ml-agents-envs/mlagents_envs/tests/test_registry.py ml-agents-envs/mlagents_envs/communicator_objects/agent_info_action_pair_pb2.py ml-agents-envs/mlagents_envs/communicator_objects/unity_rl_input_pb2.py ml-agents/mlagents/trainers/tests/test_nn_policy.py ml-agents-envs/mlagents_envs/timers.py ml-agents/tests/yamato/check_coverage_percent.py ml-agents/mlagents/trainers/tests/test_simple_rl.py ml-agents/mlagents/trainers/exception.py ml-agents/mlagents/trainers/tests/test_distributions.py gym-unity/gym_unity/tests/test_gym.py utils/make_readme_table.py ml-agents/mlagents/tf_utils/tf.py ml-agents/mlagents/trainers/tests/test_ghost.py ml-agents/mlagents/trainers/buffer.py ml-agents-envs/mlagents_envs/side_channel/side_channel.py ml-agents-envs/mlagents_envs/side_channel/environment_parameters_channel.py ml-agents/mlagents/trainers/tests/test_subprocess_env_manager.py ml-agents/mlagents/trainers/subprocess_env_manager.py ml-agents/mlagents/trainers/agent_processor.py ml-agents-envs/mlagents_envs/communicator_objects/engine_configuration_pb2.py ml-agents-envs/mlagents_envs/tests/test_env_utils.py ml-agents/mlagents/trainers/tests/test_rl_trainer.py ml-agents-envs/mlagents_envs/rpc_communicator.py ml-agents/mlagents/trainers/training_status.py ml-agents-envs/mlagents_envs/communicator_objects/demonstration_meta_pb2.py ml-agents-envs/mlagents_envs/__init__.py gym-unity/setup.py ml-agents/mlagents/trainers/behavior_id_utils.py ml-agents/mlagents/trainers/tests/test_config_conversion.py ml-agents/mlagents/trainers/sac/network.py ml-agents/mlagents/trainers/policy/tf_policy.py ml-agents/mlagents/trainers/optimizer/__init__.py ml-agents-envs/mlagents_envs/registry/__init__.py ml-agents/mlagents/trainers/tests/simple_test_envs.py ml-agents/mlagents/trainers/tests/__init__.py ml-agents-envs/mlagents_envs/communicator_objects/unity_output_pb2.py ml-agents-envs/mlagents_envs/env_utils.py ml-agents-envs/mlagents_envs/communicator_objects/space_type_pb2.py 
ml-agents/mlagents/trainers/trainer_util.py ml-agents/mlagents/trainers/tests/test_trainer_util.py ml-agents-envs/mlagents_envs/logging_util.py ml-agents/mlagents/trainers/components/reward_signals/extrinsic/signal.py ml-agents/mlagents/trainers/sac/trainer.py ml-agents-envs/mlagents_envs/side_channel/side_channel_manager.py ml-agents/tests/yamato/training_int_tests.py ml-agents/mlagents/trainers/tests/test_sac.py ml-agents/mlagents/trainers/trajectory.py ml-agents/mlagents/trainers/settings.py ml-agents/mlagents/trainers/ppo/optimizer.py ml-agents-envs/mlagents_envs/communicator_objects/unity_rl_initialization_input_pb2.py ml-agents-envs/mlagents_envs/base_env.py ml-agents-envs/mlagents_envs/communicator_objects/header_pb2.py ml-agents/mlagents/trainers/tests/test_stats.py ml-agents/mlagents/trainers/components/reward_signals/gail/model.py ml-agents/mlagents/trainers/tests/test_reward_signals.py ml-agents-envs/mlagents_envs/side_channel/stats_side_channel.py ml-agents/mlagents/trainers/components/reward_signals/gail/signal.py ml-agents-envs/mlagents_envs/tests/test_side_channel.py ml-agents/mlagents/trainers/tf/models.py ml-agents/mlagents/trainers/tf/distributions.py ml-agents-envs/mlagents_envs/registry/base_registry_entry.py ml-agents/mlagents/trainers/ghost/controller.py ml-agents/mlagents/trainers/sac/optimizer.py ml-agents/tests/yamato/standalone_build_tests.py ml-agents-envs/mlagents_envs/environment.py ml-agents/mlagents/trainers/tests/test_training_status.py ml-agents/mlagents/trainers/demo_loader.py ml-agents/mlagents/trainers/ghost/trainer.py ml-agents-envs/mlagents_envs/registry/binary_utils.py ml-agents/tests/yamato/editmode_tests.py ml-agents/mlagents/trainers/tests/test_settings.py ml-agents/mlagents/trainers/components/bc/module.py ml-agents-envs/mlagents_envs/communicator_objects/unity_input_pb2.py ml-agents-envs/mlagents_envs/tests/test_steps.py ml-agents/mlagents/trainers/tests/test_buffer.py ml-agents/mlagents/trainers/trainer/rl_trainer.py 
ml-agents/mlagents/trainers/tests/test_agent_processor.py ml-agents-envs/mlagents_envs/communicator_objects/unity_to_external_pb2_grpc.py ml-agents/tests/yamato/yamato_utils.py ml-agents/tests/yamato/scripts/run_llapi.py ml-agents/mlagents/trainers/stats.py ml-agents-envs/mlagents_envs/registry/unity_env_registry.py ml-agents/mlagents/trainers/tests/test_trajectory.py ml-agents/mlagents/trainers/optimizer/optimizer.py VerifyVersionCommand UnityGymException ActionFlattener UnityToGymWrapper create_mock_vector_steps test_gym_wrapper_multi_visual_and_vector test_gym_wrapper create_mock_group_spec test_branched_flatten setup_mock_unityenvironment test_gym_wrapper_visual test_gym_wrapper_single_visual_and_vector VerifyVersionCommand _get_frozen_graph_node_names export_policy_model copy_model_files _make_frozen_graph _get_output_node_names _get_input_node_names convert_frozen_to_onnx _enforce_onnx_conversion SerializationSettings _process_graph set_warnings_enabled generate_session_config ActionInfo AgentManager AgentProcessor AgentManagerQueue BarracudaWriter fuse print_known_operations compress Build sort lstm write fuse_batchnorm_weights trim mean gru Model summary Struct parse_args to_json rnn BehaviorIdentifiers get_global_agent_id create_name_behavior_id BufferException AgentBuffer StoreConfigFile _load_config DetectDefaultStoreTrue DetectDefault load_config _create_parser make_demo_buffer write_demo get_demo_files load_demonstration write_delimited demo_to_buffer EnvironmentParameterManager EnvManager EnvironmentStep SamplerException TrainerConfigError CurriculumError CurriculumConfigError MetaCurriculumError TrainerConfigWarning CurriculumLoadingError UnityTrainerException TrainerError write_timing_tree create_environment_factory write_run_options parse_command_line run_training write_training_status main run_cli get_version_string main parse_command_line ScheduleType TrainerSettings PPOSettings ConstantSettings GaussianSettings strict_to_cls RewardSignalSettings 
EnvironmentSettings ParameterRandomizationType EnvironmentParameterSettings check_and_structure RewardSignalType TrainerType MultiRangeUniformSettings HyperparamSettings NetworkSettings SACSettings UniformSettings SelfPlaySettings Lesson EngineSettings EncoderType RunOptions GAILSettings CheckpointSettings BehavioralCloningSettings ParameterRandomizationSettings ExportableSettings CompletionCriteriaSettings defaultdict_to_dict CuriositySettings SimpleEnvManager StatsWriter StatsSummary ConsoleWriter StatsReporter GaugeWriter TensorboardWriter StatsPropertyType worker EnvironmentResponse EnvironmentRequest UnityEnvWorker StepResponse SubprocessEnvManager EnvironmentCommand TrainerController TrainerFactory initialize_trainer handle_existing_directories StatusMetaData StatusType GlobalTrainingStatus AgentExperience Trajectory SplitObservations parse_args write_to_yaml_file convert convert_behaviors main remove_nones convert_samplers_and_curriculum convert_samplers BCModel BCModule create_reward_signal RewardSignal CuriosityModel CuriosityRewardSignal ExtrinsicRewardSignal GAILModel GAILRewardSignal GhostController GhostTrainer Optimizer TFOptimizer NNCheckpointManager NNCheckpoint Policy UnityPolicyException TFPolicy UnityPolicyException PPOOptimizer PPOTrainer get_gae discount_rewards SACPolicyNetwork SACTargetNetwork SACNetwork SACOptimizer SACTrainer simulate_rollout create_mock_pushblock_behavior_specs create_mock_banana_behavior_specs setup_test_behavior_specs create_steps_from_behavior_spec create_mock_3dball_behavior_specs make_fake_trajectory create_mock_steps RecordEnvironment clamp SimpleEnvironment MemoryEnvironment test_end_episode test_agent_deletion test_agent_manager_queue test_agentprocessor test_agent_manager test_agent_manager_stats create_mock_policy test_barracuda_converter test_policy_conversion test_bcmodule_rnn_update test_bcmodule_update test_bcmodule_constant_lr_update test_bcmodule_dc_visual_update create_bc_module test_bcmodule_defaults 
test_bcmodule_rnn_dc_update test_buffer_sample construct_fake_buffer test_num_experiences assert_array fakerandint test_buffer test_buffer_truncate test_convert test_convert_behaviors test_remove_nones test_unsupported_version_raises_error test_load_demo test_demo_mismatch test_edge_cases test_load_demo_dir test_multicategorical_distribution test_tanh_distribution test_gaussian_distribution test_sampler_conversion test_sampler_and_constant_conversion test_create_manager test_curriculum_raises_no_completion_criteria_conversion test_curriculum_conversion test_curriculum_raises_all_completion_criteria_conversion test_load_and_set dummy_config test_publish_queue test_process_trajectory basic_options test_run_training test_yaml_args test_bad_env_path test_commandline_args test_env_args test_create_input_placeholders create_behavior_spec test_min_visual_size test_load_save create_policy_mock test_normalization ModelVersionTest test_policy_evaluate _compare_two_policies test_trainer_increment_step test_trainer_update_policy test_ppo_optimizer_update test_ppo_optimizer_update_curiosity test_process_trajectory test_rl_functions test_add_get_policy _create_ppo_optimizer_ops_mock dummy_config test_ppo_optimizer_update_gail test_ppo_get_value_estimates test_gail_dc_visual sac_dummy_config reward_signal_update reward_signal_eval test_extrinsic extrinsic_dummy_config test_gail_rnn test_curiosity_cc test_gail_cc ppo_dummy_config test_curiosity_dc curiosity_dummy_config test_curiosity_visual test_curiosity_rnn create_optimizer_mock gail_dummy_config FakeTrainer create_rl_trainer test_rl_trainer test_summary_checkpoint test_advance test_clear_update_buffer test_sac_update_reward_signals test_add_get_policy create_sac_optimizer_mock test_sac_optimizer_update dummy_config test_advance test_sac_save_load_buffer check_dict_is_at_least test_environment_settings test_strict_to_cls test_memory_settings_validation check_if_different test_is_new_instance test_no_configuration 
test_env_parameter_structure test_exportable_settings test_trainersettings_structure test_reward_signal_structure test_simple_ghost_fails test_gail test_visual_advanced_sac _check_environment_trains test_visual_sac test_2d_ppo test_simple_sac test_simple_ghost default_reward_processor test_simple_asymm_ghost test_gail_visual_ppo test_simple_ppo test_gail_visual_sac test_recurrent_ppo DebugWriter test_recurrent_sac test_simple_asymm_ghost_fails test_visual_advanced_ppo test_visual_ppo test_2d_sac simple_record test_tensorboard_writer test_stat_reporter_add_summary_write test_tensorboard_writer_clear test_gauge_stat_writer_sanitize ConsoleWriterTest test_stat_reporter_property MockEnvWorker mock_env_factory SubprocessEnvManagerTest test_subprocess_env_raises_errors create_worker_mock test_subprocess_env_endtoend basic_mock_brain test_take_action_returns_action_info_when_available test_convert_version_string test_checkpoint_writes_tf_and_nn_checkpoints test_take_action_returns_nones_on_missing_values test_take_action_returns_empty_with_no_agents FakePolicy test_initialization_seed test_start_learning_trains_until_max_steps_then_saves basic_trainer_controller trainer_controller_with_take_step_mocks test_advance_adds_experiences_to_trainer_and_trains trainer_controller_with_start_learning_mocks test_start_learning_trains_forever_if_no_train_model test_initialize_ppo_trainer test_load_config_invalid_yaml test_load_config_missing_file test_handles_no_config_provided dummy_config test_load_config_valid_yaml test_existing_directories test_globaltrainingstatus test_model_management StatsMetaDataTest test_trajectory_to_agentbuffer test_split_obs np_zeros_no_float64 np_array_no_float64 _check_no_float64 np_ones_no_float64 OutputDistribution DiscreteOutputDistribution MultiCategoricalDistribution GaussianDistribution ModelUtils Tensor3DShape NormalizerTensors get_layer_shape pool_to_HW flatten sqr_diff process_layer process_model get_layer_rank slow_but_stable_topological_sort 
get_attr basic_lstm ModelBuilderContext order_by get_epsilon get_tensor_dtype replace_strings_in_list debug embody by_op get_tensor_dims strided_slice remove_duplicates_from_list axis_to_barracuda by_name locate_actual_output_node convert strides_to_HW get_tensor_data very_slow_but_stable_topological_sort gru RLTrainer Trainer main check_coverage main clean_previous_results TestResults parse_results main main main run_training run_inference find_executables override_config_file init_venv get_unity_executable_path override_legacy_config_file get_base_path run_standalone_build checkout_csharp_version _override_config_dict undo_git_checkout get_base_output_path test_closing test_run_environment test_closing test_run_environment VerifyVersionCommand ActionType BehaviorMapping TerminalStep DecisionSteps BehaviorSpec TerminalSteps BaseEnv DecisionStep Communicator UnityEnvironment validate_environment_path launch_executable get_platform UnityCommunicatorStoppedException UnityObservationException UnityWorkerInUseException UnityException UnityCommunicationException UnityTimeOutException UnitySideChannelException UnityEnvironmentException UnityActionException get_logger set_log_level MockCommunicator RpcCommunicator UnityToExternalServicerImplementation _generate_split_indices process_pixels behavior_spec_from_proto _raise_on_nan_and_inf observation_to_np_array steps_from_proto _process_vector_observation _process_visual_observation _get_thread_timer TimerNode merge_gauges hierarchical_timer add_metadata get_timer_tree get_timer_root reset_timers get_timer_stack_for_thread set_gauge timed GaugeNode TimerStack UnityToExternalProtoServicer add_UnityToExternalProtoServicer_to_server UnityToExternalProtoStub BaseRegistryEntry ZipFileWithProgress get_tmp_dir get_local_binary_path_if_exists get_local_binary_path load_local_manifest load_remote_manifest download_and_extract_zip print_progress RemoteRegistryEntry UnityEnvRegistry EngineConfigurationChannel EngineConfig 
EnvironmentParametersChannel FloatPropertiesChannel IncomingMessage OutgoingMessage RawBytesChannel SideChannel SideChannelManager StatsAggregationMethod StatsSideChannel test_initialization test_reset test_returncode_to_signal_name test_log_file_path_is_set test_close test_step test_port_defaults test_handles_bad_filename test_check_communication_compatibility test_set_logging_level test_validate_path mock_glob_method test_launch_executable test_validate_path_empty create_registry test_basic_in_registry delete_binaries test_rpc_communicator_checks_port_on_create test_rpc_communicator_create_multiple_workers test_rpc_communicator_close test_batched_step_result_from_proto_raises_on_nan test_process_pixels test_process_visual_observation_bad_shape test_agent_behavior_spec_from_proto proto_from_steps_and_action test_batched_step_result_from_proto test_action_masking_continuous test_action_masking_discrete_1 generate_list_agent_proto generate_uncompressed_proto_obs test_batched_step_result_from_proto_raises_on_infinite generate_compressed_proto_obs test_vector_observation proto_from_steps test_action_masking_discrete generate_compressed_data test_action_masking_discrete_2 test_process_pixels_gray test_process_visual_observation test_raw_bytes test_int_channel test_message_float_list IntChannel test_engine_configuration test_message_bool test_message_string test_float_properties test_environment_parameters test_message_int32 test_stats_channel test_message_float32 test_decision_steps test_specs test_terminal_steps test_empty_terminal_steps test_action_generator test_empty_decision_steps test_timers decorated_func table_line ReleaseInfo validate_packages main NonTrivialPEP420PackageFinder main get_release_tag check_file test_pattern main check_all_files git_ls_files set_academy_version_string _escape_non_none extract_version_string print_release_tag_commands check_versions set_package_version set_version set_extension_package_version MagicMock create_mock_vector_steps 
UnityToGymWrapper sample create_mock_group_spec setup_mock_unityenvironment step MagicMock create_mock_vector_steps UnityToGymWrapper create_mock_group_spec setup_mock_unityenvironment MagicMock create_mock_vector_steps UnityToGymWrapper sample create_mock_group_spec setup_mock_unityenvironment step MagicMock create_mock_vector_steps UnityToGymWrapper sample reset create_mock_group_spec setup_mock_unityenvironment step MagicMock create_mock_vector_steps UnityToGymWrapper sample reset create_mock_group_spec setup_mock_unityenvironment step tuple CONTINUOUS range DISCRETE list array range BehaviorMapping convert_to_barracuda convert convert_to_onnx _make_frozen_graph _enforce_onnx_conversion convert_frozen_to_onnx info model_path makedirs tf_optimize make_model _get_output_node_names _get_input_node_names info brain_name optimize_graph _get_frozen_graph_node_names add _get_frozen_graph_node_names name add node set brain_name info copyfile info set_verbosity ConfigProto join isdir print replaceFilenameExtension add_argument exit verbose source_file ArgumentParser target_file sqrt topologicalSort list hasattr layers addEdge Graph print inputs set len list hasattr layers print filter match trim_model compile data layers print tensors float16 replace layers dumps data dtype layers isinstance print name tensors inputs outputs shape zip array_without_brackets to_json globals Build array_equal pool reduce Build tanh mad tanh mul Build concat add sigmoid sub mad _ tanh mul Build concat add sigmoid mad print sorted keys add_argument_group add_argument ArgumentParser resequence_and_append obs from_observations steps_from_proto vector_actions AgentBuffer append reset_agent vector_observations array visual_observations enumerate make_demo_buffer load_demonstration zip observation_shapes enumerate isdir isfile get_demo_files write SerializeToString _EncodeVarint len parse_args start_learning join save_state join join set_warnings_enabled warning __version__ DEBUG seed load_model 
set_log_level as_dict debug run_training add_timer_metadata info INFO get_version_string train_model API_VERSION print dumps randint parse_command_line run_cli add_argument ArgumentParser from_dict experiment_config_path load_config fields_dict update items list check_and_structure structure register_structure_hook unstructure defaultdict dict_to_defaultdict register_structure_hook register_unstructure_hook structure RLock get_timer_root reset_timers put _send_response StepResponse env_factory list behavior_specs _generate_all_results set_log_level apply get_and_reset_stats set_actions StatsSideChannel action set_configuration EngineConfigurationChannel payload BEHAVIOR_SPECS STEP EnvironmentParametersChannel items EnvironmentResponse isinstance reset RESET step join SACTrainer GhostTrainer PPOTrainer get_minimum_reward_buffer_size trainer_type isdir get update list items copy MemorySettings structure to_settings list items isinstance pop items list print dict pop items list get print set add append keys range len get pop unstructure print convert_behaviors convert_samplers_and_curriculum convert_samplers output_config_path curriculum remove_nones print write_to_yaml_file convert sampler trainer_config_path parse_args get rcls list zeros_like size reversed range append discount_rewards arange ones BehaviorSpec append array int ones AgentExperience append zeros sum range len pop action_shape to_agentbuffer make_fake_trajectory is_action_discrete observation_shapes int BehaviorSpec zeros Mock Mock ActionInfo publish_trajectory_queue range create_mock_steps AgentProcessor empty create_mock_policy add_experiences Mock assert_has_calls ActionInfo publish_trajectory_queue range call create_mock_steps append AgentProcessor empty create_mock_policy add_experiences Mock assert_has_calls ActionInfo end_episode publish_trajectory_queue range call create_mock_steps append AgentProcessor empty create_mock_policy add_experiences AgentManager create_mock_policy Mock get_nowait 
AgentManagerQueue put Mock assert_any_call remove record_environment_stats AgentManager add_writer StatsReporter write_stats join remove _get_candidate_names convert _get_default_tempdir dirname abspath isfile next TrainerSettings create_policy_mock SerializationSettings model_path reset_default_graph checkpoint TrainerSettings TFPolicy initialize_or_load BehavioralCloningSettings create_bc_module create_mock_3dball_behavior_specs update items list create_mock_3dball_behavior_specs BehavioralCloningSettings create_bc_module update items list create_mock_3dball_behavior_specs BehavioralCloningSettings current_lr create_bc_module update items list create_mock_3dball_behavior_specs BehavioralCloningSettings create_bc_module update items list create_mock_banana_behavior_specs BehavioralCloningSettings create_bc_module update items list create_mock_banana_behavior_specs BehavioralCloningSettings create_bc_module flatten list range len append range AgentBuffer resequence_and_append get_batch construct_fake_buffer assert_array make_mini_batch AgentBuffer reset_agent array resequence_and_append sample_mini_batch construct_fake_buffer AgentBuffer resequence_and_append construct_fake_buffer AgentBuffer resequence_and_append list construct_fake_buffer AgentBuffer truncate values safe_load convert_behaviors safe_load convert enumerate remove_nones load_demonstration demo_to_buffer dirname abspath load_demonstration demo_to_buffer dirname abspath dirname abspath dirname abspath mock_open BytesIO DemonstrationMetaProto write_delimited from_dict safe_load curriculum from_dict safe_load curriculum from_dict safe_load curriculum from_dict safe_load EnvironmentParameterManager environment_parameters create_tf_graph setup_test_behavior_specs load_weights init_load_weights zip assert_array_equal get_weights PPOTrainer create_policy GhostController GhostTrainer PPOTrainer subscribe_trajectory_queue setup_test_behavior_specs put advance make_fake_trajectory from_name_behavior_id 
AgentManagerQueue add_policy brain_name create_policy GhostController GhostTrainer PPOTrainer simulate_rollout get_nowait setup_test_behavior_specs _swap_snapshots advance publish_policy_queue from_name_behavior_id AgentManagerQueue add_policy brain_name create_policy clear safe_load MagicMock parse_command_line clear parse_command_line parse_command_line DISCRETE int BehaviorSpec create_input_placeholders observation_shapes create_behavior_spec TFPolicy setup_test_behavior_specs TrainerSettings join _set_step initialize_or_load create_policy_mock SerializationSettings model_path _compare_two_policies checkpoint list evaluate agent_id create_steps_from_behavior_spec behavior_spec assert_array_equal TrainerSettings list evaluate agent_id create_steps_from_behavior_spec behavior_spec create_policy_mock reset_default_graph TrainerSettings TFPolicy update_normalization to_agentbuffer setup_test_behavior_specs make_fake_trajectory zeros range run evolve PPOOptimizer TFPolicy setup_test_behavior_specs update simulate_rollout behavior_spec _create_ppo_optimizer_ops_mock reset_default_graph update simulate_rollout behavior_spec _create_ppo_optimizer_ops_mock reset_default_graph update simulate_rollout behavior_spec _create_ppo_optimizer_ops_mock reset_default_graph items list get_trajectory_value_estimates to_agentbuffer make_fake_trajectory _create_ppo_optimizer_ops_mock next_obs reset_default_graph assert_array_almost_equal array discount_rewards Mock brain_name _increment_step from_name_behavior_id assert_called_with add_policy PPOTrainer _update_policy simulate_rollout setup_test_behavior_specs MemorySettings from_name_behavior_id add_policy PPOTrainer create_policy list values Mock brain_name from_name_behavior_id add_policy PPOTrainer PPOOptimizer TFPolicy setup_test_behavior_specs SACOptimizer simulate_rollout behavior_spec evaluate_batch simulate_rollout prepare_update _execute_model behavior_spec update_dict make_mini_batch policy BehavioralCloningSettings 
create_optimizer_mock reward_signal_eval reward_signal_update create_optimizer_mock reward_signal_eval reward_signal_update create_optimizer_mock reward_signal_eval reward_signal_update create_optimizer_mock reward_signal_eval reward_signal_update create_optimizer_mock reward_signal_eval reward_signal_update create_optimizer_mock reward_signal_eval reward_signal_update create_optimizer_mock reward_signal_eval reward_signal_update create_optimizer_mock reward_signal_eval reward_signal_update TrainerSettings FakeTrainer set_is_policy_updating end_episode list create_rl_trainer values items list construct_fake_buffer create_rl_trainer _clear_update_buffer Mock create_rl_trainer set_is_policy_updating subscribe_trajectory_queue advance put make_fake_trajectory publish_policy_queue AgentManagerQueue add_policy get_nowait range Mock list assert_has_calls create_rl_trainer subscribe_trajectory_queue summary_freq checkpoint_interval put make_fake_trajectory publish_policy_queue advance AgentManagerQueue add_policy get_nowait range TFPolicy setup_test_behavior_specs SACOptimizer update simulate_rollout create_sac_optimizer_mock behavior_spec reset_default_graph simulate_rollout create_sac_optimizer_mock behavior_spec update_reward_signals reset_default_graph SACTrainer save_model simulate_rollout num_experiences setup_test_behavior_specs behavior_spec from_name_behavior_id add_policy brain_name create_policy SACTrainer list SACTrainer values setup_test_behavior_specs from_name_behavior_id brain_name create_policy list items items list isinstance zip RunOptions check_if_different TrainerSettings RunOptions structure structure structure RunOptions from_dict check_dict_is_at_least safe_load as_dict EnvironmentSettings print EnvironmentParameterManager evolve SimpleEnvironment _check_environment_trains hyperparameters evolve SimpleEnvironment _check_environment_trains hyperparameters evolve SimpleEnvironment _check_environment_trains network_settings evolve SimpleEnvironment 
hyperparameters _check_environment_trains network_settings evolve MemoryEnvironment hyperparameters _check_environment_trains evolve SimpleEnvironment _check_environment_trains hyperparameters evolve SimpleEnvironment _check_environment_trains hyperparameters evolve SimpleEnvironment _check_environment_trains network_settings evolve SimpleEnvironment hyperparameters _check_environment_trains network_settings evolve MemoryEnvironment hyperparameters _check_environment_trains evolve SimpleEnvironment SelfPlaySettings _check_environment_trains evolve SimpleEnvironment SelfPlaySettings _check_environment_trains evolve SimpleEnvironment SelfPlaySettings _check_environment_trains evolve SimpleEnvironment SelfPlaySettings _check_environment_trains evolve SimpleEnvironment BehavioralCloningSettings _check_environment_trains simple_record evolve SimpleEnvironment BehavioralCloningSettings hyperparameters _check_environment_trains simple_record evolve SimpleEnvironment BehavioralCloningSettings hyperparameters _check_environment_trains simple_record clear assert_called_once_with Mock get_stats_summaries add_stat add_writer StatsReporter float range write_stats clear Mock add_property add_writer StatsReporter assert_called_once_with sleep TensorboardWriter StatsSummary write_stats close SubprocessEnvManager simple_env_factory _check_environment_trains default_config default_config close SubprocessEnvManager MagicMock TrainerSettings basic_mock_brain BehaviorSpec get_action empty FakePolicy TrainerSettings MagicMock basic_mock_brain DecisionSteps get_action array FakePolicy TrainerSettings MagicMock basic_mock_brain ActionInfo DecisionSteps get_action array FakePolicy _convert_version_string TrainerSettings sess MagicMock basic_mock_brain graph SerializationSettings assert_called_once_with brain_name checkpoint FakePolicy GhostController MagicMock GhostController TrainerController MagicMock assert_called_with MagicMock start_learning assert_called_once MagicMock 
assert_not_called start_learning assert_called_once MagicMock MagicMock assert_called_once MagicMock advance add assert_not_called behaviors behaviors TrainerFactory generate _load_config StringIO mkdir join handle_existing_directories join set_parameter_state LESSON_NUM load_state NOTAREALKEY get_parameter_state save_state join NNCheckpoint time add_checkpoint set_parameter_state CHECKPOINTS track_final_checkpoint append from_observations range ones items list to_agentbuffer add set make_fake_trajectory extract_stack filename get __old_np_array _check_no_float64 get _check_no_float64 __old_np_zeros get __old_np_ones _check_no_float64 endswith len print HasField hasattr get_attr isinstance get_attr tensor_shape ndarray isinstance shape int_val bool_val float_val ListFields name ndarray isinstance str tensor_content ndarray product isinstance get_tensor_dtype print get_tensor_dims unpack int_val bool_val array float_val enter append add set Build mul sub insert Build tolist append range len locate_actual_output_node name find_tensor_by_name split locate_actual_output_node name lstm find_tensor_by_name find_forget_bias split get_layer_shape id Struct tensor get_layer_rank layer_ranks hasattr name patch_data rank input_shapes out_shapes input get_attr append replace_strings_in_list tensors embody astype op inputs zip enumerate print float32 patch_data_fn model_tensors map_ignored_layer_to_its_input co_argcount len items list hasattr get_tensors name print process_layer eval slow_but_stable_topological_sort ModelBuilderContext sort assign_ids pop range insert len layers verbose Struct process_model open print_known_operations fuse compress node GraphDef Model dims_to_barracuda_shape insert get_tensor_dims inputs MessageToJson ParseFromString cleanup_layers read memories sort write trim summary print_supported_ops print join exit walk float check_coverage join remove mkdir rmdir exists documentElement getAttribute parse join clean_previous_results parse_results 
get_unity_executable_path exit returncode get_base_path copy2 init_venv add_argument ArgumentParser split strip run_standalone_build override_config_file run_inference get_base_path rename abspath checkout_csharp_version exists run dirname get_base_output_path init_venv override_legacy_config_file copy int time join print run_standalone_build makedirs find_executables join time print run python run_training csharp exists join move get_unity_executable_path print makedirs dirname get_base_output_path run join X_OK frozenset splitext append access walk check_call check_call check_call list _override_config_dict values items list isinstance update list values check_call str UnityToGymWrapper print step reset sample UnityEnvironment range reset UnityEnvironment close UnityToGymWrapper get_steps is_action_discrete format EngineConfigurationChannel randn set_configuration_parameters discrete_action_branches len action_size any set_actions is_action_continuous column_stack join basename replace glob debug getcwd normpath validate_environment_path debug add setLevel getLogger basicConfig setLevel tuple vector_action_size mean reshape array data compressed_data reshape process_pixels shape array mean isnan array _raise_on_nan_and_inf sum is_action_discrete _generate_split_indices ones discrete_action_branches len astype _raise_on_nan_and_inf any cast split append _process_vector_observation bool _process_visual_observation array observation_shapes enumerate range len get_ident TimerStack perf_counter push items list merge reset method_handlers_generic_handler add_generic_rpc_handlers download_and_extract_zip get_local_binary_path_if_exists debug range glob hexdigest join get_tmp_dir join chmod gettempdir makedirs uuid4 join int str remove get_tmp_dir exists chmod print glob rmtree urlopen print_progress hexdigest print int min max uuid4 join str get_tmp_dir load_local_manifest urlopen UnityEnvironment close MockCommunicator UnityEnvironment MockCommunicator _executable_args 
UnityEnvironment MockCommunicator index get_steps obs close reset MockCommunicator zip UnityEnvironment observation_shapes len get_steps obs zip ones step close MockCommunicator set_actions zeros UnityEnvironment observation_shapes len UnityEnvironment close MockCommunicator validate_environment_path validate_environment_path launch_executable PermissionError set_log_level rmtree get_tmp_dir RemoteRegistryEntry register UnityEnvRegistry create_registry make close reset step range delete_binaries close RpcCommunicator close RpcCommunicator close RpcCommunicator list extend ObservationProto AgentInfoProto append prod range len fromarray uint8 BytesIO astype save ObservationProto generate_compressed_data extend shape ObservationProto shape tolist extend obs concatenate action_mask agent_id ObservationProto AgentInfoProto append generate_uncompressed_proto_obs proto_from_steps generate_compressed_data process_pixels rand generate_compressed_data process_pixels rand _process_vector_observation generate_list_agent_proto enumerate generate_compressed_proto_obs rand extend AgentInfoProto _process_visual_observation generate_uncompressed_proto_obs generate_compressed_proto_obs rand AgentInfoProto extend list sort CONTINUOUS agent_id BehaviorSpec steps_from_proto generate_list_agent_proto range BehaviorSpec steps_from_proto DISCRETE generate_list_agent_proto action_mask BehaviorSpec steps_from_proto DISCRETE generate_list_agent_proto action_mask BehaviorSpec steps_from_proto DISCRETE generate_list_agent_proto action_mask CONTINUOUS BehaviorSpec steps_from_proto generate_list_agent_proto action_mask BrainParametersProto behavior_spec_from_proto extend CONTINUOUS generate_list_agent_proto BehaviorSpec CONTINUOUS generate_list_agent_proto BehaviorSpec generate_side_channel_messages process_side_channel_message send_int IntChannel FloatPropertiesChannel process_side_channel_message generate_side_channel_messages get_property set_property uuid4 process_side_channel_message 
generate_side_channel_messages RawBytesChannel send_raw_data get_and_clear_received_messages len buffer read_bool append write_bool IncomingMessage range OutgoingMessage buffer write_int32 read_int32 IncomingMessage OutgoingMessage IncomingMessage write_float32 buffer read_float32 OutgoingMessage read_string write_string buffer IncomingMessage OutgoingMessage IncomingMessage buffer OutgoingMessage read_float32_list write_float32_list set_configuration channel_id EngineConfigurationChannel generate_side_channel_messages process_side_channel_message set_configuration_parameters RawBytesChannel read_float32 read_int32 IncomingMessage get_and_clear_received_messages default_config channel_id generate_side_channel_messages process_side_channel_message read_string set_float_parameter RawBytesChannel read_float32 read_int32 IncomingMessage EnvironmentParametersChannel IncomingMessage write_float32 write_string buffer write_int32 get_and_reset_stats on_message_received StatsSideChannel OutgoingMessage DecisionSteps action_mask empty BehaviorSpec TerminalSteps empty BehaviorSpec BehaviorSpec create_random_action enumerate BehaviorSpec create_empty_action set_gauge TimerStack startswith print find_packages find validate_packages remove replace frozenset endswith set add walk print git_ls_files get_release_tag check_all_files compile join print extract_version_string set values join format set_academy_version_string print set_package_version set_extension_package_version enumerate split print | <img src="docs/images/image-banner.png" align="middle" width="3000"/> # Unity ML-Agents Toolkit [](https://github.com/Unity-Technologies/ml-agents/tree/release_5_docs/docs/) [](LICENSE) ([latest release](https://github.com/Unity-Technologies/ml-agents/releases/tag/latest_release)) ([all releases](https://github.com/Unity-Technologies/ml-agents/releases)) **The Unity Machine Learning Agents Toolkit** (ML-Agents) is an open-source project that enables games and simulations to serve as 
environments for training intelligent agents. Agents can be trained using reinforcement learning, imitation learning, neuroevolution, or other machine learning methods through a simple-to-use Python API. | 3,195 |
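The ML-Agents toolkit trains agents by repeatedly stepping a Unity environment from Python. The sketch below mimics that interaction loop with a pure-Python stub: `StubUnityEnv` and its methods are hypothetical stand-ins modeled on the reset / get_steps / set_actions / step pattern of `mlagents_envs`' `UnityEnvironment`, not the real API (which needs a Unity build to run).

```python
import random

class StubUnityEnv:
    """Hypothetical stand-in for UnityEnvironment: only mimics the
    reset / get_steps / set_actions / step interaction pattern."""

    def __init__(self, n_agents=2, episode_len=5):
        self.n_agents = n_agents
        self.episode_len = episode_len
        self.t = 0

    def reset(self):
        self.t = 0

    def get_steps(self):
        # Agents still requesting a decision vs. agents whose episode ended.
        done = self.t >= self.episode_len
        deciding = [] if done else list(range(self.n_agents))
        terminal = list(range(self.n_agents)) if done else []
        return deciding, terminal

    def set_actions(self, actions):
        # One action per agent; stored until the next step() call.
        assert len(actions) == self.n_agents
        self._pending = actions

    def step(self):
        self.t += 1

def run_episode(env):
    """Drive one episode with random actions; return the number of steps taken."""
    env.reset()
    steps = 0
    while True:
        deciding, terminal = env.get_steps()
        if terminal:  # episode ended for every agent
            return steps
        env.set_actions([random.choice([0, 1]) for _ in deciding])
        env.step()
        steps += 1
```

Against a real build, the same loop shape applies, but `get_steps` is keyed by behavior name and actions are NumPy arrays.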
oesteban/MBIS | ['semantic segmentation'] | ['MBIS: Multivariate Bayesian Image Segmentation Tool'] | Scripts/util/functions.py Scripts/interfaces/CollectResults.py Scripts/util/postproc.py Scripts/interfaces/mbis.py Scripts/interfaces/post.py Scripts/interfaces/mri_watershed.py CollectResults CollectResultsOutputSpec CollectResultsInputSpec MBIS MBISOutputSpec MBISInputSpec WatershedInputSpec Watershed WatershedOutputSpec PVProcessInputSpec PVProcessOutputSpec PVProcess measureVolume outlierFilter plot_slice loadParameters fusePV distancePV ListFloat String File File Bool Range File InputMultiPath File OutputMultiPath File File ListInt String File InputMultiPath Bool File OutputMultiPath load join basename scoreatpercentile reshape getcwd get_header get_affine get_data Nifti1Image splitext abspath save load reshape min get_affine get_data imshow xticks max yticks load reshape append float sum exists int reshape append matrix len T getI shape zeros ravel array list distancePV sort reshape get_header get_affine argsort shape Nifti1Image take swapaxes zip append zeros save array range len | oesteban/MBIS | 3,196 |
offscale/cdd-python | ['stochastic optimization'] | ['Adam: A Method for Stochastic Optimization'] | cdd/tests/test_cli_sync.py cdd/conformance.py cdd/tests/test_cst_utils.py cdd/routes/parser_utils.py cdd/tests/test_cli_openapi.py cdd/openapi/parse.py cdd/openapi/parser_utils.py cdd/openapi/emitter_utils.py cdd/tests/test_ast_cst_utils.py cdd/docstring_utils.py cdd/tests/mocks/argparse.py cdd/tests/test_route_emit.py cdd/tests/test_parsers.py cdd/__init__.py cdd/tests/utils_for_tests.py cdd/tests/test_emitters.py cdd/routes/parse.py cdd/tests/__init__.py cdd/defaults_utils.py cdd/gen.py cdd/tests/test_doctrans_utils.py setup.py cdd/parse.py cdd/tests/test_pure_utils.py cdd/tests/test_cli_sync_properties.py cdd/pure_utils.py cdd/tests/mocks/cstify.py cdd/tests/test_parser_utils.py cdd/tests/test_default_utils.py cdd/tests/mocks/routes.py cdd/docstring_parsers.py cdd/ast_cst_utils.py cdd/openapi/__init__.py cdd/tests/test_source_transformer.py cdd/tests/test_cli_doctrans.py cdd/tests/test_gen_routes.py cdd/tests/test_setup.py cdd/pkg_utils.py cdd/emitter_utils.py cdd/tests/mocks/doctrans.py cdd/openapi/gen_routes.py cdd/__main__.py cdd/doctrans.py cdd/tests/mocks/cst.py cdd/tests/test_doctrans.py cdd/tests/mocks/eval.py cdd/tests/test_exmod_utils.py cdd/tests/test_ast_utils.py cdd/source_transformer.py cdd/tests/test_cli.py cdd/tests/test_docstring_utils.py cdd/emit.py cdd/tests/test_emitter_utils.py cdd/cst.py cdd/tests/test_cst.py cdd/tests/mocks/__init__.py cdd/tests/mocks/openapi.py cdd/tests/test_cli_exmod.py cdd/doctrans_utils.py cdd/tests/mocks/classes.py cdd/tests/mocks/exmod.py cdd/routes/emit.py cdd/tests/test_sync_properties.py cdd/parser_utils.py cdd/tests/test_cli_gen.py cdd/tests/test_cli_gen_routes.py cdd/tests/mocks/sqlalchemy.py cdd/sync_properties.py cdd/cst_utils.py cdd/tests/mocks/ir.py cdd/tests/test_pkg_utils.py cdd/tests/test_utils_for_tests.py cdd/exmod.py cdd/tests/test_openapi_sub.py cdd/routes/__init__.py cdd/tests/test_ast_equality.py 
cdd/ast_utils.py cdd/tests/test_openapi_bulk.py cdd/tests/mocks/json_schema.py cdd/openapi/gen_openapi.py cdd/tests/test_gen.py cdd/tests/test_route_parse.py cdd/routes/emit_constants.py cdd/openapi/emit.py cdd/tests/test_exmod.py cdd/exmod_utils.py cdd/tests/mocks/methods.py cdd/tests/mocks/docstrings.py cdd/tests/test_marshall_docstring.py cdd/tests/test_conformance.py setup_py_main main to_funcs maybe_replace_function_args debug_doctrans find_cst_at_ast maybe_replace_doc_str_in_function_or_class Delta maybe_replace_function_return_type _to_code get_value infer_type_and_default Undedined func_arg2param param2argparse_param _resolve_arg get_function_type find_in_ast is_argparse_add_argument _infer_type_and_default_for_list_or_tuple parse_to_scalar _parse_default_from_ast set_value cmp_ast is_argparse_description get_ass_where_name set_docstring _generic_param2ast emit_arg merge_modules get_at_root set_arg to_annotation _parse_node_for_arg RewriteAtQuery to_type_comment it2literal annotate_ancestry param2ast get_doc_str node_to_dict find_ast_type del_ass_where_name _infer_type_and_default_from_quoted set_slice emit_ann_assign merge_assignment_lists _default_options _get_name_from_namespace ground_truth _conform_filename cst_parse reindent_block_with_pass_body set_prev_node infer_cst_type cst_parse_one_node cst_scanner get_construct_name cst_parser cst_scan needs_quoting extract_default _remove_default_from_param set_default_doc ast_parse_fix remove_defaults_from_intermediate_repr _parse_out_default_and_doc _scan_phase_numpydoc_and_google _set_name_and_type _infer_default parse_docstring _set_param_values _scan_phase _scan_phase_rest _parse_phase_numpydoc_and_google _fill_doc_with_afterward _parse_phase _parse_phase_rest _return_parse_phase_numpydoc_and_google Style _get_start_of_last_found header_args_footer_to_str derive_docstring_format _get_token_last_idx_if_no_next_token _get_end_of_last_found_numpydoc _find_end_of_args_returns _last_doc_str_token 
_get_token_start_idx emit_param_str _get_end_of_last_found _get_token_last_idx ensure_doc_args_whence_original parse_docstring_into_header_args_footer doctrans has_type_annotations clear_annotation doctransify_cst DocTrans sqlalchemy_table function argparse_function file json_schema class_ sqlalchemy docstring param2json_schema_property generate_repr_method _make_call_meth _handle_value parse_out_param get_internal_body ast_parse_fix _parse_return _handle_keyword RewriteName param_to_sqlalchemy_column_call interpolate_defaults exmod emit_file_on_hierarchy _emit_symbol get_module_contents gen sqlalchemy_table _class_from_memory _inspect function _merge_inner_function json_schema class_ sqlalchemy docstring argparse_ast _inspect_process_ir_param infer _join_non_none _interpolate_return json_schema_property_to_param column_call_to_param ir_merge get_source relative_filename quote set_attr rpartial strip_starting pluralise blockwise count_iter_items is_triple_quoted is_ir_empty no_magic_dir2attr identity reindent paren_wrap_code deindent unquote multiline assert_equal balanced_parentheses filename_from_mod_or_filename strip_split lstrip_namespace location_within emit_separating_tabs indent_all_but_first update_d get_module has_nl code_quoted sanitise set_item diff to_code ast_parse sync_properties sync_property require_file_existent main _build_parser openapi components_paths_from_name_model_route_id_crud openapi_bulk gen_routes upsert_routes openapi extract_entities read create destroy create_util bottle get_route_meta TestDefaultUtils TestAstCstUtils TestAstEquality TestAstUtils TestCli TestCliDocTrans TestCliExMod TestCliGen TestCliGenRoutes TestOpenApi TestCliSync TestCliSyncProperties TestConformance TestCst TestCstUtils TestDocstringUtils TestDocTrans TestDocTransUtils TestEmitters TestEmitterUtils TestExMod TestExmodUtils TestGen populate_files TestGenRoutes populate_files TestMarshallDocstring TestOpenApi TestOpenApiBulk TestParsers TestParserUtils TestPkgUtils 
TestPureUtils TestRouteEmit TestRouteEmit TestSetupPy TestSourceTransformer TestSyncProperties populate_files TestUtilsForTests remove_args_from_docstring ShowSourceLoader replace_docstring unittest_main run_cli_test run_ast_test inspectable_compile mock_function reindent_docstring C f list attrgetter setup map filter main print ljust format print format __name__ enumerate ne nop partition formatted_doc_str insert name debug_doctrans replaced removed __name__ added deepcopy add_return_typ value nop cmp_ast name remove_return_typ debug_doctrans replaced FunctionDefinitionStart body returns removed __name__ added deepcopy nop rpartition attrgetter __name__ name map debug_doctrans replaced FunctionDefinitionStart body removed range added len needs_quoting get_value __name__ isinstance ast_parse_fix set_value list isinstance tuple rpartial filter body extract_default get setdefault infer_type_and_default _resolve_arg ast_parse_fix _parse_node_for_arg walk tuple id isinstance value isinstance Expr set_value pop list setattr isinstance args filter body next enumerate len _location list kwonlyargs isinstance args map targets iter_child_nodes walk enumerate arg isinstance arg isinstance _parse_default_from_ast isinstance dumps any code_quoted __name__ _infer_type_and_default_for_list_or_tuple get_value __name__ get_value isinstance AST isinstance _fields getattr isinstance zip isinstance list filter get_value isinstance list attrgetter del_ass_where_name tuple map get_ass_where_name from_iterable Assign append deepcopy update items list _get_name_from_namespace map OrderedDict parse_func strip_split pluralise getattr find_in_ast split print file realpath visit emit_func RewriteAtQuery replaced expanduser find_in_ast cst_scanner cst_parser enumerate find append join enumerate cst_scan clear join is_triple_quoted all endswith tuple strip balanced_parentheses startswith append add_and_clear enumerate deque list partial map any frozenset startswith dict tuple strip 
get_construct_name Name ast_parse_fix strip isinstance strip location_within len enumerate count_iter_items int partial frozenset literal_eval isdecimal takewhile update deepcopy partial extract_default update format update _scan_phase _parse_phase derive_docstring_format name list attrgetter map count_iter_items get isspace location_within deepcopy clear white_spacer partial list insert map _return_parse_phase_numpydoc_and_google append takewhile enumerate list isspace splitlines startswith append range enumerate len join list tuple map append len name list attrgetter map get _infer_default rstrip format tuple lstrip unquote isinstance get_value literal_eval __name__ update next _fill_doc_with_afterward list next filter partial _set_name_and_type update _remove_default_from_param strip map OrderedDict any startswith interpolate_defaults find count_iter_items join isspace clear filter any startswith append takewhile enumerate clear isspace copy append enumerate len range append clear range len range len count_iter_items isspace takewhile count_iter_items isspace partial namedtuple takewhile PrevParam count_iter_items _get_start_of_last_found isspace derive_docstring_format _get_token_last_idx_if_no_next_token _find_end_of_args_returns filter _last_doc_str_token any _get_end_of_last_found startswith takewhile count_iter_items list isspace map _get_token_start_idx _get_token_last_idx takewhile indent parse_docstring_into_header_args_footer count_iter_items isspace format partition indent has_nl takewhile rpartition len partial map rest any google numpydoc deepcopy list doctransify_cst cst_parse visit fix_missing_locations ast_parse setattr hasattr isinstance maybe_replace_function_args find_cst_at_ast maybe_replace_doc_str_in_function_or_class maybe_replace_function_return_type walk get_internal_body update get list partial map visit fix_missing_locations get join list items partial format header_args_footer_to_str map splitlines 
parse_docstring_into_header_args_footer to_code Module format_str items list tuple map get_internal_body filter items list partial map dict isinstance extract_default format set_value get_value next extract_default unquote rstrip list filter len get pop list get_value map elts startswith append set_value isinstance print Name get_value Load lstrip Call startswith append keyword tuple keys groupby itemgetter tuple sep list basename map file dirname format partial import_module deque join items Module print __file__ any makedirs items list format isinstance getfile _emit_symbol rpartial sep list map dirname rpartition get format parse replace partial attrgetter close body join print filter any isfile makedirs merge_modules format Module print Assign file merge_assignment_lists tuple strip sanitise_emit_name fix_missing_locations list map from_iterable getattr rpartition update parse get_at_root format get_docstring eval compile Module filter get_module body to_code rstrip find_ast_type isinstance get_docstring get_value targets OrderedDict dict lstrip _merge_inner_function deque _inspect name get_docstring _merge_inner_function class_ ir_merge get_source list function rpartial filter ir_merge next walk items list partial map OrderedDict parser isfunction get_source ir_merge abs get_function_type len OrderedDict getattr update arg partial replace get_docstring islice _interpolate_return ir_merge setattr pop deepcopy _inspect isinstance cycle body docstring args list value filterfalse parse_docstring parse_out_param get_docstring get_value OrderedDict is_argparse_add_argument deepcopy OrderedDict docstring list isinstance get_value map assert_equal ir_merge next get_docstring value get_value get update list itemgetter map _join_non_none keys frozenset from_iterable format endswith lstrip_typings lstrip __name__ default list rstrip rpartial OrderedDict filter next list format map from_iterable dict filter args endswith format pop get list partial attrgetter isinstance 
bases rpartial get_value map filter any next args casefold startswith lstrip split update lstrip op len enumerate cmp tuple range len deque list zip count tuple type setattr annotate_ancestry get_docstring parse format file zip sync_property annotate_ancestry list value hasattr visit eval RewriteAtQuery AnnAssign find_in_ast compile strip_split add_argument add_mutually_exclusive_group add_parser ArgumentParser add_subparsers gen_routes pluralise exmod truth doctrans command getattr openapi_bulk filename require_file_existent parse_args gen expanduser sum format _build_parser Namespace realpath deque pop error sync_properties output_filename isfile error format deque list map format items list create itemgetter map extend filter iter sqlalchemy filename_from_mod_or_filename next keys walk append list rpartial get_names filter filename_from_mod_or_filename keys walk extract_entities format replace append isspace add_then_clear_stack enumerate list isinstance parse_docstring get_docstring decorator_list filter next find join deepcopy format function Module class_ deepcopy path filter close format deepcopy isinstance assertTrue tuple assertEqual cmp_ast map to_code output_checker assertEqual assertIsNone main __dict__ name ShowSourceLoader module_from_spec exec NamedTemporaryFile spec_from_loader setattr compile format set_value get_docstring Expr abs Expr isspace lstrip filter any splitlines startswith append | cdd-python ==========   [](https://opensource.org/licenses/Apache-2.0) [](https://github.com/offscale/cdd-python/actions)   [](https://codecov.io/gh/offscale/cdd-python) [](https://github.com/psf/black) | 3,197 |
ofnt/real_anot | ['active learning'] | ['Bayesian Dark Knowledge'] | src/experiment/.ipynb_checkpoints/main-checkpoint.py src/inference.py src/.ipynb_checkpoints/app-checkpoint.py src/test.py src/plot.py src/datapipeline/preproc.py src/app.py src/.ipynb_checkpoints/inference-checkpoint.py src/modelling/model.py src/datapipeline/.ipynb_checkpoints/loader-checkpoint.py src/datapipeline/simple_preprocessing.py src/experiment/main.py src/.ipynb_checkpoints/test-checkpoint.py src/datapipeline/loader.py predict index give_uncertainties run_inference_single initialize model load_checkpoint guide get_prediction LemmaTokenizer save_checkpoint BaseModel train load_pipeline generate_plot train predict status give_uncertainties run_inference_single initialize model load_checkpoint guide get_prediction LemmaTokenizer save_checkpoint BaseModel train load_pipeline Datapipeline remove_stopwords pipelinize simple_lemmatizer remove_special_characters run_experiment run_experiment train predict seed format randn predict float32 generate_plot int get_prediction random_module lifted_module independent param softplus randn ones random_module independent as_tensor todense clear_param_store print reshape tensor step array range give_uncertainties reshape percentile save load module load load_pipeline run_inference_single tensor todense array seed randn write_html update_layout create_distplot initialize sub join WordNetLemmatizer tokenize join words ToktokTokenizer basicConfig get_log_level Experiment info log_metrics train INFO | # <center><b>Real Anot</b>: identifying COVID-19-related fake news using machine learning</center> <center>Barry YAP, Kelvin SOH, Kenny CHUA and Zhong Hao NEO</center> <center>AI Apprentices, Batch 6, AI Singapore</center> ## Introduction Your phone buzzes to notify you of a new message in your extended family's WhatsApp chat. The message contains claims regarding COVID-19, but you're not sure if this information is trustworthy? Is it a case of fake news? 
Fake news is a form of intentional disinformation. When this disinformation is unquestioningly taken as true, this can potentially result in severe negative consequences, particularly in the current COVID-19 climate.
<b>Real Anot</b> is a web app that uses machine learning technology to predict the probability that a given piece of text is fake news. Larger probability values mean that the text is more likely to be fake news.
## Dataset
The dataset is a subset of the CoAid dataset, which contains a set of diverse COVID-19 healthcare misinformation. This dataset has a total of 1,127 real and 266 fake news samples.
## Preprocessing
| 3,198 |
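The repo's pipeline files suggest preprocessing steps such as `remove_special_characters` and `remove_stopwords` before the text reaches the classifier. The following is a hedged, stdlib-only sketch of what such a step could look like — the tiny hard-coded stopword list and the exact cleaning rules are illustrative assumptions, not the project's actual implementation.

```python
import re

# Tiny illustrative stopword list (an assumption; the project likely uses a full NLP toolkit list).
STOPWORDS = {"a", "an", "the", "is", "of", "in", "and", "to"}

def remove_special_characters(text):
    # Keep only letters, digits and spaces.
    return re.sub(r"[^A-Za-z0-9 ]+", "", text)

def remove_stopwords(tokens):
    return [t for t in tokens if t not in STOPWORDS]

def preprocess(text):
    cleaned = remove_special_characters(text.lower())
    return remove_stopwords(cleaned.split())

print(preprocess("COVID-19 is a hoax!?"))  # → ['covid19', 'hoax']
```

The cleaned token list would then be vectorized (e.g. bag-of-words) before being fed to the model that outputs the fake-news probability.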
ofwallaart/HAABSA | ['sentiment analysis', 'aspect based sentiment analysis'] | ['A Hybrid Approach for Aspect-Based Sentiment Analysis Using a Lexicalized Domain Ontology and Attentional Neural Models'] | cabascModel.py lcrModelAlt.py main_hyper.py svmModel.py att_layer.py main_cross.py lcrModel.py lcrModelInverse.py config.py dataReader2016.py main.py OntologyReasoner.py loadData.py utils.py nn_layer.py softmax_with_len Mlp_attention_layer mlp_attention_layer mlp_layer bilinear_attention_layer dot_produce_attention_layer cam_mlp_attention_layer triple_attention_layer mlp_layer2 triple_attention_layer2 cabasc main train_func summary_func loss_func print_config saver_func acc_func _get_data_tuple read_data_2016 window main lcr_rot main lcr_rot main lcr_rot main main lcr_inv_objective print_json load_best_hyperspace cabasc_objective save_json_result run_a_trial load_json_result plot_best_model lcr_alt_objective lcr_objective svm_objective dynamic_rnn softmax_layer stack_bi_dynamic_rnn bi_dynamic_rnn_diff reduce_mean_with_len cnn_layer bi_dynamic_rnn OntReasoner main load_inputs_twitter load_aspect2id batch_index load_sentence load_w2v change_y_to_onehot load_inputs_document load_inputs_document_nohn load_inputs_twitter_ extract_aspect_to_id load_inputs_cabasc load_inputs_twitter_at load_word_embedding load_word_id_mapping load_inputs_full load_inputs_sentence exp sequence_mask reshape float32 reduce_sum shape cast reshape expand_dims matmul get_variable reshape softmax_with_len matmul get_variable softmax_with_len tanh reshape matmul get_variable sigmoid reshape matmul get_variable tanh softmax_with_len reshape transpose matmul get_variable softmax_with_len tanh reshape matmul get_variable reshape matmul tanh get_variable tanh reshape matmul softmax get_variable reshape matmul tanh get_variable embedding_dim n_class random_base multiply max_sentence_len squeeze matmul add reverse expand_dims GRUCell n_hidden softmax_layer dropout tile triple_attention_layer 
dynamic_rnn print l2_reg reshape mlp_layer cam_mlp_attention_layer print_config ConfigProto items sorted format argv print FLAGS sum get_collection REGULARIZATION_LOSSES float32 reduce_sum reduce_mean cast int32 argmax equal graph FileWriter scalar merge Saver makedirs append next range iter list window print len min lower append range enumerate _get_data_tuple most_common open str word_tokenize getroot iter append get parse replace close lower print text write extend sub findall len dropout print softmax_layer max_sentence_len squeeze concat matmul LSTMCell bilinear_attention_layer n_class reduce_mean_with_len bi_dynamic_rnn n_hidden random_base str range test_path reset_default_graph train_svm_path open test_svm_path str run OntReasoner loadDataAndEmbeddings remaining_svm_test_path append format remaining_test_path subtract close enumerate join train_path print write loadCrossValidation range year str print save_json_result hyper_train_path reset_default_graph main hyper_eval_path str print save_json_result hyper_train_path reset_default_graph main hyper_eval_path str print save_json_result hyper_train_path reset_default_graph main hyper_eval_path str print save_json_result hyper_train_path reset_default_graph main hyper_eval_path str print save_json_result hyper_svm_eval_path reset_default_graph hyper_svm_train_path main load dump format print trials fmin open len print dumps format makedirs join print load_best_hyperspace print_json conv2d relu get_variable sequence_mask reshape cell float32 shape reverse cast tile reduce_mean_with_len gather range reshape concat int64 cast reverse_sequence bidirectional_dynamic_rnn reduce_mean_with_len gather range concat cells_fw reshape concat stack_bidirectional_dynamic_rnn int64 cast reverse_sequence reduce_mean_with_len gather range cells_bw split reshape cast float32 reduce_sum get_variable A score reshape fit predict OneHotEncoder LabelEncoder split CountVectorizer polarity_scores SentimentIntensityAnalyzer expand_dims 
array fit_transform len shuffle int list range int print dict split open readline format asarray print dict shape open row_stack append split list load_w2v print row_stack load_word_id_mapping keys len list print len dict uniform open append sum split list print Counter set dict zip append range len print change_y_to_onehot readlines min len reverse append load_word_id_mapping range split print change_y_to_onehot readlines min len extend reverse append load_word_id_mapping range split join list str readlines len write add set zip open range split get load_aspect2id join asarray print change_y_to_onehot readlines len append load_word_id_mapping range split format print change_y_to_onehot open split append load_word_id_mapping len join format print change_y_to_onehot open append zeros load_word_id_mapping split format print change_y_to_onehot split append load_word_id_mapping open list asarray get_q_id change_y_to_onehot len open append range split format print change_y_to_onehot readlines min len extend reverse append load_word_id_mapping range split print change_y_to_onehot readlines min len extend reverse append load_word_id_mapping range split | # HAABSA Code for A Hybrid Approach for Aspect-Based Sentiment Analysis Using a Lexicalized Domain Ontology and Attentional Neural Models All software is written in PYTHON3 (https://www.python.org/) and makes use of the TensorFlow framework (https://www.tensorflow.org/). ## Installation Instructions (Windows): ### Dowload required files and add them to data/externalData folder: 1. Download ontology: https://github.com/KSchouten/Heracles/tree/master/src/main/resources/externalData 2. Download SemEval2015 Datasets: http://alt.qcri.org/semeval2015/task12/index.php?id=data-and-tools 3. Download SemEval2016 Dataset: http://alt.qcri.org/semeval2016/task5/index.php?id=data-and-tools 4. Download Glove Embeddings: http://nlp.stanford.edu/data/glove.42B.300d.zip 5. 
Download Stanford CoreNLP parser: https://nlp.stanford.edu/software/stanford-parser-full-2018-02-27.zip | 3,199 |
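Since the installation steps above require several manually downloaded files to land in `data/externalData`, a small pre-flight check can save a failed run. This is a hedged sketch: the expected filenames are inferred from the download links and should be treated as assumptions, not the repo's verified layout.

```python
import os

# Entry names inferred from the download links above -- treat them as assumptions.
EXPECTED = [
    "ontology.owl",
    "glove.42B.300d.txt",
    "stanford-parser-full-2018-02-27",
]

def missing_external_data(base="data/externalData", expected=EXPECTED):
    """Return the expected entries that are not yet present under `base`."""
    return [name for name in expected
            if not os.path.exists(os.path.join(base, name))]
```

Calling `missing_external_data()` before training and aborting if the list is non-empty gives a clearer error than a mid-run file-not-found.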